Description of ASL Tutor Project
By: Seungyon Lee, project investigator

The project is also called CopyCat. It centers on the user-centered
development of a game for tutoring American Sign Language (ASL) to young
deaf children. We use our gesture recognition system as the core technology
and seek a seamless, unobtrusive interaction and interface through a
series of pilot studies with deaf children.

So far, we have performed four trials of the pilot study. Each provided
useful feedback for developing criteria for the next design iteration.

We focused on the fact that 90% of deaf children are born to hearing parents
who may not be fluent in ASL. Those children may have less opportunity to
learn their first language (ASL) at home from birth. Since early childhood is
a critical period for language acquisition, we expect that this game will
help young deaf children learn ASL through repetitive tutoring and real-time
evaluation, as well as build a proper cognitive model through early language
training. The game lets deaf children play in their native language (ASL).

Here is a short video describing the basic idea of the game. The gesture
recognition technology is used to evaluate the correctness and clarity of
the sign. The current interaction flow is slightly different from this,
but the video still shows the basic idea of the game. (In this video, they
use the phrase 'You go catch the butterfly'.)

http://www.cc.gatech.edu/~sylee/CHI/Seungyon_Lee_movie.mov

We use the Wizard of Oz method for the pilot study. This allows us to develop
the game and the gesture recognition technology separately. During the tests,
I, as the wizard, sat behind a wall and controlled the feedback (the cat
animation), pretending to be a computer that understood ASL.

For the first two of the four pilot studies, we evaluated use of the game
qualitatively through observation and interviews, without our gesture
recognition system. For the last two studies, we attached the gesture
recognition system to the game interface and collected vision and location
data from users while they signed. To collect the data, we used colored
gloves and Bluetooth accelerometers mounted on the users' wrists. We then
calculated recognition accuracy rates for individual vocabulary items and
for whole phrases from the captured data.
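As a rough sketch of the kind of accuracy calculation described above (the function names, data layout, and scoring scheme here are illustrative assumptions, not the project's actual code):

```python
# Hypothetical sketch: score recognized signs against the expected signs,
# both per individual vocabulary item and per whole phrase.

def sign_accuracy(expected, recognized):
    """Fraction of individual signs recognized correctly (position-wise)."""
    correct = sum(1 for e, r in zip(expected, recognized) if e == r)
    return correct / len(expected) if expected else 0.0

def phrase_accuracy(expected_phrases, recognized_phrases):
    """Fraction of phrases in which every sign was recognized correctly."""
    correct = sum(1 for e, r in zip(expected_phrases, recognized_phrases)
                  if e == r)
    return correct / len(expected_phrases) if expected_phrases else 0.0

# Example: two attempts at the phrase from the video, one with an error.
expected = [["you", "go", "catch", "butterfly"],
            ["you", "go", "catch", "butterfly"]]
recognized = [["you", "go", "catch", "butterfly"],
              ["you", "go", "chase", "butterfly"]]

all_expected = [s for p in expected for s in p]
all_recognized = [s for p in recognized for s in p]
print(sign_accuracy(all_expected, all_recognized))  # 0.875 (7 of 8 signs)
print(phrase_accuracy(expected, recognized))        # 0.5 (1 of 2 phrases)
```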

All tests were run at Atlanta Area School for the Deaf (AASD) in the room
where children usually play.  

I'm in charge of the game development and the overall pilot tests. This
project was presented at CHI (Computer-Human Interaction) last week.

Here are the URLs of the poster and paper I presented there.

http://www.cc.gatech.edu/~sylee/CHI/CHI_poster_0411_72dpi.jpg
http://www.cc.gatech.edu/~sylee/CHI/chi05short_final.pdf

The paper was written after we finished our second pilot test, whereas the
poster describes the latest results, including our fourth pilot test.

This project will also be presented at two more venues this June: IDC
(Interaction Design and Children) and the symposium on Instructional
Technology and Education of the Deaf.


Seungyon Lee
http://steel.lcc.gatech.edu/~slee

Master's Program in Human-Computer Interaction
GVU Center
Georgia Institute of Technology

