IIT and AIIMS Jodhpur Develop 'Talking Gloves' for the Differently-Abled


Innovators from the Indian Institute of Technology (IIT) Jodhpur and the All India Institute of Medical Sciences (AIIMS) Jodhpur have jointly developed low-cost ‘talking gloves’ for people with speech disabilities.

The gloves use the principles of Artificial Intelligence (AI) and Machine Learning (ML) to automatically generate speech that is language independent, facilitating communication between people with and without speech disabilities. The device costs less than Rs 5,000.

The team behind the ‘talking gloves’

Prof. Sumit Kalra, Assistant Professor, Department of Computer Science and Engineering, IIT Jodhpur, led the development of the gloves along with a team of innovators including Dr. Arpit Khandelwal from IIT Jodhpur, and Dr. Nithin Prakash Nair (SR, ENT), Dr. Amit Goyal (Prof. & Head, ENT), and Dr. Abhinav Dixit (Prof., Dept. of Physiology) from AIIMS Jodhpur. The team has recently acquired a patent for the innovation.

How will the device help people with speech impairment?

Expanding on how the device can help people with speech disabilities, Prof. Kalra said, “The language independent speech generation device will bring people back to the mainstream in today’s global era without any language barrier.”

“Users of the device only need to learn it once, and they will be able to verbally communicate in any language they know,” he said.

He further said, “Additionally, the device can be customised to produce a voice similar to the patient’s original voice, which makes speech sound more natural while using the device.”

How do the ‘talking gloves’ work?

  • The newly developed gloves work by generating electrical signals via a first set of sensors worn on a combination of the thumb, finger(s), and/or wrist of one hand of the user.
  • These electrical signals are produced by combinations of finger, thumb, hand, and wrist movements. Similarly, a second set of sensors on the other hand generates its own electrical signals.
  • These electrical signals are received at a signal processing unit.
  • Using AI and ML algorithms, these combinations of signals are then translated into phonetics corresponding to at least one consonant and one vowel.
  • An audio signal is generated by an audio transmitter corresponding to the assigned phonetic and based on trained data associated with vocal characteristics stored in a machine learning unit.
  • Generating audio signals according to phonetics combining vowels and consonants produces speech, enabling people who cannot speak to audibly communicate with others.
  • Because the speech synthesis technique works at the level of phonetics, the speech generation is independent of any language.
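The pipeline above — sensor signals classified into phonetic units, then concatenated into speech — can be sketched in a few lines of Python. This is only an illustrative sketch: the sensor layout, the gesture-to-phoneme table, and the `classify`/`synthesize` functions are assumptions for demonstration, not the patented design, and the trained ML classifier is stood in for by a simple lookup table.

```python
# Illustrative sketch of a gesture-to-phonetics pipeline.
# The gesture encodings and phoneme table below are invented examples,
# standing in for the device's trained ML classifier.

# Each frame is a tuple of five flex-sensor bits (thumb + four fingers).
GESTURE_TO_PHONEME = {
    (1, 0, 0, 0, 0): "a",  # thumb flexed        -> vowel 'a'
    (0, 1, 0, 0, 0): "k",  # index flexed        -> consonant 'k'
    (0, 1, 1, 0, 0): "m",  # index+middle flexed -> consonant 'm'
    (1, 1, 0, 0, 0): "i",  # thumb+index flexed  -> vowel 'i'
}

def classify(sensor_frame):
    """Map one frame of sensor readings to a phoneme.

    In the real device an ML model performs this step; here a
    lookup table plays that role. Unknown gestures yield nothing.
    """
    return GESTURE_TO_PHONEME.get(tuple(sensor_frame), "")

def synthesize(frames):
    """Concatenate classified phonemes into a phonetic string.

    Because the output is phonetic rather than lexical, it is not
    tied to any particular language; a text-to-speech stage would
    then voice it with the user's vocal characteristics.
    """
    return "".join(classify(f) for f in frames)

phonetic = synthesize([
    (0, 1, 0, 0, 0),  # 'k'
    (1, 0, 0, 0, 0),  # 'a'
    (0, 1, 1, 0, 0),  # 'm'
    (1, 0, 0, 0, 0),  # 'a'
])
print(phonetic)  # -> "kama"
```

The key design point the bullets describe is that classification targets phonemes, not words, which is what makes the generated speech language independent.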

Future plans for the newly developed device

The team is working to further enhance the device’s durability, weight, responsiveness, and ease of use.

The developed product will be commercialised through a startup incubated by IIT Jodhpur.
