Gesture Recognition Gets a Helping Hand



For the deaf and hard of hearing, sign language opens up a world of communication that would otherwise be out of reach. The hand movements, facial expressions, and body language used when signing are highly expressive, enabling people to convey complex ideas with a great deal of nuance. However, relatively few people understand sign language, which creates communication barriers for those who rely on it.

In years past, few options were available to help break down those barriers. Human interpreters can do the job, but having one always at the ready is simply not feasible. A digital translator would go a long way toward solving this problem, yet a fully practical solution has yet to be built. Wearable gloves and other motion-sensing devices have been tried, but these systems tend to be complex and ill-suited to daily use in the real world. Recently, however, a team of engineers at Florida Atlantic University reported on work that could ultimately power a more practical sign language translation device.

The researchers have developed a real-time American Sign Language (ASL) interpretation system that uses artificial intelligence and computer vision to identify hand gestures and translate them into text. By combining two technologies, YOLOv11 for gesture recognition and MediaPipe for hand tracking, the system recognizes ASL alphabet letters with high speed and accuracy.

The process begins with a camera that captures images of the signer's hand. Next, MediaPipe maps 21 key points on each hand, creating a skeletal outline that reveals the position of the wrist and each finger joint. Using this skeletal data, YOLOv11 identifies and classifies the gesture being made. Together, these tools allow the system to operate in real time, even under challenging lighting conditions, using only standard off-the-shelf hardware.
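To make the pipeline concrete, here is a minimal sketch of that capture-landmark-classify loop in Python, using the documented OpenCV, MediaPipe, and Ultralytics APIs. The weights file asl_letters.pt is a hypothetical stand-in, since the FAU team's trained model is not public, and because the article does not detail exactly how the skeletal data is fed into YOLOv11, the sketch simply extracts the landmarks alongside a per-frame classification.

```python
# Minimal sketch of the camera -> MediaPipe landmarks -> YOLO classification loop.
# "asl_letters.pt" is a hypothetical weights file standing in for the team's model.
import cv2
import mediapipe as mp
from ultralytics import YOLO

model = YOLO("asl_letters.pt")       # hypothetical trained gesture model
hands = mp.solutions.hands.Hands(
    max_num_hands=1,                 # one signing hand at a time
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)            # a standard webcam is all the hardware needed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # 21 normalized (x, y) key points: the wrist plus every finger joint and tip.
        landmarks = [(p.x, p.y) for p in result.multi_hand_landmarks[0].landmark]

        # Classify the gesture in the current frame.
        prediction = model(frame, verbose=False)[0]
        print(len(landmarks), "key points detected")

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
hands.close()
cv2.destroyAllWindows()
```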

Testing showed that the system achieved a mean average precision of 98.2%, making it one of the most accurate ASL alphabet recognition systems developed to date. Its high inference speed also means that it could be deployed in live settings, such as classrooms, healthcare facilities, or workplaces, where reliable and immediate interpretation is needed.
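As a point of reference, a mean average precision figure like this is typically produced by a framework's built-in validator. Here is a sketch using Ultralytics, where both the dataset configuration asl_alphabet.yaml and the weights file are placeholder names, not the team's actual artifacts.

```python
# Sketch: validating a trained model to obtain mAP, the metric quoted above.
# "asl_letters.pt" and "asl_alphabet.yaml" are placeholders, not the team's files.
from ultralytics import YOLO

model = YOLO("asl_letters.pt")                 # hypothetical trained weights
metrics = model.val(data="asl_alphabet.yaml")  # evaluate on the validation split

print(f"mAP@0.5:      {metrics.box.map50:.3f}")  # mean AP at IoU threshold 0.5
print(f"mAP@0.5:0.95: {metrics.box.map:.3f}")    # mean AP averaged over IoU thresholds
```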

While building the system, the researchers curated a dataset of 130,000 annotated ASL hand gesture images, each marked with 21 key points to reflect subtle differences in finger positioning. The dataset includes images taken under a variety of lighting conditions and with different backgrounds, enabling the system to generalize well across different users and environments. This dataset was an important factor in teaching the system to accurately classify visually similar signs.
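The article does not specify the annotation format, but one plausible layout for images labeled with a class, a bounding box, and 21 key points is the YOLO pose convention, where each line of a label file holds a class ID, a normalized box, and an (x, y, visibility) triplet per key point. A purely illustrative parser:

```python
# Illustrative parser for a YOLO-pose-style label line with 21 hand key points.
# This format is an assumption; the researchers' actual annotation scheme may differ.
def parse_pose_label(line: str, num_keypoints: int = 21):
    """Return (class_id, bbox, keypoints) from one label line."""
    vals = [float(v) for v in line.split()]
    class_id = int(vals[0])                    # e.g., 0 = 'A', 1 = 'B', ...
    bbox = tuple(vals[1:5])                    # x_center, y_center, width, height (normalized)
    kpts = [tuple(vals[5 + 3 * i: 8 + 3 * i])  # (x, y, visibility) per key point
            for i in range(num_keypoints)]
    return class_id, bbox, kpts

# Example: class 0 ('A'), a centered box, and 21 visible key points at (0.5, 0.5).
sample = "0 0.50 0.50 0.30 0.40 " + " ".join(["0.5 0.5 2"] * 21)
cid, box, kpts = parse_pose_label(sample)
print(cid, box, len(kpts))                     # -> 0 (0.5, 0.5, 0.3, 0.4) 21
```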

Looking ahead, the team plans to expand the system’s capabilities from the recognition of individual alphabet letters to complete words and even full sentences. This would allow users to express more complex ideas in a natural and fluid manner, bringing the technology closer to a true digital interpreter for sign language.
