
Researchers at MIT have developed an AI that can link the senses of sight and touch, just as a human can.

By looking at an object, we can predict how it would feel if we were to touch it – rough, smooth, firm, sharp, fluffy. Robots, however, lack this ability to predict the haptic sense from a visual input.

This was the aim of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology: to pair visual and haptic inputs so that one can be predicted from the other.

By using Generative Adversarial Networks (GANs), the AI is able to predict how an object will feel just from how it looks, and how it will look just from how it feels when touched. A GAN trains an AI by pitting two competing networks against each other: one generates realistic synthetic versions of the data, while the other learns to tell which samples are real and which are fake. You can read more about how GANs work in our report about NVIDIA’s GauGAN software.
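
To make the two-network idea concrete, here is a minimal sketch (in PyTorch, purely illustrative rather than CSAIL’s actual code) of a conditional GAN training step: a generator tries to produce a plausible tactile image from a visual frame, while a discriminator learns to tell real visual/tactile pairs from generated ones. The image size, network shapes and optimiser settings are assumptions made for the example.

```python
# Hypothetical sketch of a conditional GAN training step (not the CSAIL implementation).
import torch
import torch.nn as nn

IMG = 64  # assumed size of both the visual and tactile frames

class Generator(nn.Module):
    """Maps a visual frame to a predicted tactile image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, visual):
        return self.net(visual)

class Discriminator(nn.Module):
    """Scores a (visual, tactile) pair as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * (IMG // 4) ** 2, 1),
        )
    def forward(self, visual, tactile):
        return self.net(torch.cat([visual, tactile], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(visual, tactile_real):
    """One adversarial update on a batch of paired frames."""
    batch = visual.size(0)

    # Discriminator: push real pairs towards 1, generated pairs towards 0.
    fake = G(visual).detach()
    d_loss = bce(D(visual, tactile_real), torch.ones(batch, 1)) + \
             bce(D(visual, fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make its synthetic tactile images look real.
    fake = G(visual)
    g_loss = bce(D(visual, fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Random tensors stand in for a batch of webcam and GelSight frames.
train_step(torch.randn(8, 3, IMG, IMG), torch.randn(8, 3, IMG, IMG))
```

As the two networks compete, the generator’s synthetic tactile images become progressively harder for the discriminator to tell apart from real sensor readings – which is exactly what lets the system learn the mapping between the two senses.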

The flexFELLOW robotic arm is similar to the one used in these experiments. Image: KUKA Aktiengesellschaft

To train the program, the researchers used a KUKA robotic arm fitted with GelSight – a special tactile sensor developed by MIT’s Department of Brain and Cognitive Sciences – alongside a standard web camera. The team recorded almost 200 objects being touched more than 12,000 times, and by breaking those clips into static frames they compiled over 3 million visual/tactile images.
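
As a rough illustration of how such paired data could be assembled, the sketch below walks a synchronised webcam clip and GelSight clip frame by frame and saves the aligned frames as visual/tactile pairs. The file names, directory layout and use of OpenCV are assumptions made for the example; the real VisGel pipeline is not described in this post.

```python
# Hypothetical sketch: turning synchronised video clips into visual/tactile frame pairs.
import os
import cv2  # OpenCV, assumed available

def extract_pairs(webcam_path, gelsight_path, out_dir, every_n=1):
    """Walk two synchronised clips and save aligned (visual, tactile) frames."""
    os.makedirs(out_dir, exist_ok=True)
    cam = cv2.VideoCapture(webcam_path)
    gel = cv2.VideoCapture(gelsight_path)
    index, saved = 0, 0
    while True:
        ok_cam, visual = cam.read()
        ok_gel, tactile = gel.read()
        if not (ok_cam and ok_gel):          # stop at the end of the shorter clip
            break
        if index % every_n == 0:             # optionally subsample the frames
            cv2.imwrite(f"{out_dir}/visual_{saved:06d}.png", visual)
            cv2.imwrite(f"{out_dir}/tactile_{saved:06d}.png", tactile)
            saved += 1
        index += 1
    cam.release()
    gel.release()
    return saved

# e.g. extract_pairs("touch_clip_cam.mp4", "touch_clip_gel.mp4", "visgel_pairs")
```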

The objects involved in this research ranged from tools to household products to fabrics and more. This eclectic mix of items gave the AI a broad selection of inputs to learn from when predicting what an unfamiliar object would feel like. The combined dataset is known as ‘VisGel’, as it pairs the visual recordings with the GelSight tactile data.

This is a very challenging problem, since the signals are so different

Not only is this model able to predict how an object will feel based on how it looks, it can also do the reverse. When the robotic arm touches an object without any visual input, the system can anticipate the general features and appearance of the item from the tactile data alone.
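
One way this two-way prediction could be realised is with two separately trained image-to-image generators, one for each direction. The snippet below is a hypothetical inference-time sketch: the tiny stand-in models and random tensors simply mark where trained weights and real camera or GelSight frames would go.

```python
# Hypothetical sketch: running the two prediction directions at inference time.
import torch
import torch.nn as nn

def image_to_image():
    """Tiny stand-in for a trained image-to-image generator."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

vision_to_touch = image_to_image()   # trained weights would be loaded in practice
touch_to_vision = image_to_image()

with torch.no_grad():
    camera_frame = torch.randn(1, 3, 64, 64)         # stand-in webcam image
    predicted_touch = vision_to_touch(camera_frame)    # "how will this feel?"

    gelsight_frame = torch.randn(1, 3, 64, 64)        # stand-in tactile image
    predicted_look = touch_to_vision(gelsight_frame)   # "what does this look like?"
```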

Michael Owens, a postdoctoral researcher at the University of California, Berkeley, had this to say about the success of this project: “Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘is this object hard or soft?’, or ‘if I lift this mug by its handle, how good will my grip be?’ This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”

Is this the rise of the robotic empire? Not yet, but it is certainly a step towards making machines more human-like. The research has a number of incredible potential uses, from making warehouse packing more efficient to helping surgeons during an operation. One thing is for sure: this type of experiment is making more sense by the day.

Although futuristic robotics may not be in your office just yet, dated devices can slow your productivity massively. Get in touch with our experts to find out how we can help to modernise your business.

Written by Fraser S in News, Staff Articles.
