Researchers at Google have developed a sign-language translation tool that can be integrated into a smartphone. This is not an entirely new technology; Google says such tools already exist, but they have required powerful computers.
“Our method provides real-time performance on a mobile phone,” said Valentin Bazarevsky and Fan Zhang, researchers at Google.
To achieve this, the company trained the AI system on over 30,000 images of hands in different poses. The researchers say it was a difficult task, since hands move so quickly. It was vital that they use machine learning, which removes the need for pre-programmed rules.
“We believe that publishing this technology can give an impulse to new creative ideas and applications by the members of the research and developer community at large. We are excited to see what you can build with it!” says Google.
The company states it reduced the detection process to 21 hand reference points (landmarks), simplifying the task for its AI. This makes it possible to run the process without a powerful computer while still keeping it fast.
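To make the 21-landmark idea concrete, here is a minimal illustrative sketch (not Google's actual code): it represents a detected hand as 21 (x, y, z) points and derives a simple gesture feature from them. The landmark indices, the `is_pinching` helper, and the distance threshold are all assumptions for illustration.

```python
# Illustrative sketch: a hand as 21 landmark points, plus one derived feature.
import math

# Hypothetical landmark indices (wrist first, then fingertip positions).
WRIST, THUMB_TIP, INDEX_TIP = 0, 4, 8

def is_pinching(landmarks, threshold=0.05):
    """Return True if the thumb tip and index tip are close together.

    `landmarks` is a list of 21 (x, y, z) tuples in normalized image
    coordinates; `threshold` is an illustrative value, not a real one.
    """
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    return math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP]) < threshold

# Toy example: a hand whose thumb and index fingertips are touching.
hand = [(0.0, 0.0, 0.0)] * 21
hand[THUMB_TIP] = (0.50, 0.50, 0.0)
hand[INDEX_TIP] = (0.52, 0.51, 0.0)
print(is_pinching(hand))  # True
```

Working with a fixed, small set of points like this, rather than raw pixels, is what lets the comparison logic run cheaply on a phone.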