PORTFOLIO

Dylan Pallickara

Poudre High School, Fort Collins, Colorado


Contents

  • Computer Science Research
  • Creative Writing: Poetry

Computer-Assisted Recognition of American Sign Language with Instructional Assistance

Could a performant model (97% accuracy) serve as the basis for instruction? A system with that precision can give ASL learners immediate, quantifiable feedback on their signing accuracy, helping them refine their skills without a teacher physically present. Currently, the vast majority of AI-based ASL research focuses on translation (Bantupalli and Xie 2018); targeted feedback for learners refining their signing remains largely unaddressed.

The distilled image representations, and the joint angles they encapsulate, can be used as a training tool to support language acquisition. To this end, a standard was established for each ASL hand sign: joint angles were extracted from every wireframe image in the curated training dataset, and an average set of finger angles was computed for each sign.
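The averaging step can be sketched as follows. This is an illustrative example, not the project's actual code; the function name and the representation of training samples as (sign label, joint-angle vector) pairs are assumptions.

```python
# Sketch: building the per-sign reference angles. Assumes each training
# example has already been reduced to a fixed-length vector of joint
# angles (in degrees) extracted from its wireframe.
from collections import defaultdict

def build_reference_angles(samples):
    """samples: iterable of (sign_label, [angle_0, ..., angle_n]) pairs.
    Returns {sign_label: [mean_angle_0, ..., mean_angle_n]}."""
    grouped = defaultdict(list)
    for sign, angles in samples:
        grouped[sign].append(angles)
    # Average each joint position column-wise across all samples of a sign.
    return {
        sign: [sum(col) / len(col) for col in zip(*vectors)]
        for sign, vectors in grouped.items()
    }

# Two hypothetical samples of the sign "A", each with three joint angles:
reference = build_reference_angles([
    ("A", [150.0, 40.0, 30.0]),
    ("A", [160.0, 44.0, 34.0]),
])
# reference["A"] == [155.0, 42.0, 32.0]
```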


This dataset of average joint angles for each ASL sign serves as the reference for identifying deviations. The teaching component involves two phases. In the first phase, wireframes are constructed from raw images of hand gestures representing ASL signs, and all joint angles are computed alongside the wireframe generation. In the second phase, the classifier identifies the sign being attempted by the user, and the predicted sign is cross-referenced with the established finger-angle standard for that sign. Deviations are identified for each individual finger as well as for the hand sign overall: each finger angle in the input image is subtracted from the corresponding average angle, and the difference is divided by that average to express the deviation as a fraction of the reference. The per-joint accuracies and the overall average accuracy of the input sign are then returned and displayed. Overlaying these deviations on the canonical wireframe reference provides timely, targeted feedback that should accelerate language acquisition.
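The deviation check described above can be sketched in a few lines. This is a hypothetical illustration under assumed names; the absolute value on the difference is an assumption to make the deviation symmetric for angles above or below the reference.

```python
# Sketch of the per-finger deviation and overall accuracy computation:
# each input joint angle is compared against the reference average for
# the predicted sign, as a fraction of that average.
def finger_deviations(input_angles, reference_angles):
    """Returns (per-joint accuracy list, overall accuracy), where an
    accuracy of 1.0 means the joint matches the reference exactly."""
    deviations = [
        abs(avg - observed) / avg
        for observed, avg in zip(input_angles, reference_angles)
    ]
    per_joint_accuracy = [1.0 - d for d in deviations]
    overall_accuracy = sum(per_joint_accuracy) / len(per_joint_accuracy)
    return per_joint_accuracy, overall_accuracy

# A hand whose joints are each within a few percent of the reference:
per_joint, overall = finger_deviations([150.0, 40.0], [155.0, 42.0])
```

Returning the per-joint list alongside the overall score is what makes the feedback targeted: a learner can see which specific finger is off, not just that the sign as a whole is imperfect.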


Bibliography

Bantupalli, Kshitij, and Ying Xie. "American Sign Language Recognition Using Deep Learning and Computer Vision." 2018 IEEE International Conference on Big Data (Big Data), IEEE, 2018.
