Vlog | Destiny Yarbro | November 12, 2025 | 12 min watch
Have you seen headlines like these before and thought, “Wow, AI must be so close to using sign language”? But how close is AI really to learning sign languages? I don’t know… but I do know it’s a lot further than the latest tech news headlines (or LLMs themselves) think. If large language models are just that, language models, wouldn’t it be easy enough to just add sign languages? Here’s the catch: producing signs and actually understanding sign language are two very different skills, and AI struggles far more with the understanding part.
In this video I'm going to answer four of the questions I get asked most often about AI and sign languages, starting with this one:
Yes, it's true that AI has made truly amazing leaps and bounds, but have you seen these before? They're funny, right? But I think they show how much AI would still struggle with signing. Sign languages are 3D languages. They're not spoken, written, or heard; they're signed and understood through sight. So yes, AI has made significant improvements since then, but I think it's important to understand that producing signs and understanding signs are different skills, right? Which means that AI might be super close to making models (fake people) sign. But understanding sign languages? That's a whole different ballgame. It requires understanding the airspace around someone. Let me explain this with an example. Take this handshape. You might think AI can easily register this handshape, but let me show you what this one shape can do.
[Video of me signing 4 signs using 1 handshape]
1 handshape made 4 signs. But really, 1 handshape can make thousands of signs. It depends largely on these 5 parameters:
Handshape
Orientation (the direction your palm is facing)
Location (where on the body the sign is)
This handshape could be used on the face or on the body, but the meanings are very different: GIRL on the face and STERNUM SURGERY on the body.
Movement (a sign's direction, frequency, and whether it's signed fast or slow)
Non-Manual Markers (meaning facial expressions, body movement, and posture)
All five of these impact what a sign means. The thing is that sign languages don't just have 3D movement; even their grammar is 3D, which is a huge challenge for living, breathing ASL students, let alone AI.
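If you think of it the way a programmer might, a sign is basically a bundle of those five parameters, and changing any one of them can change the meaning entirely. Here's a minimal sketch in Python of that idea; the class, the field names, and the rough labels I use for GIRL and STERNUM SURGERY are my own illustrations, not a real transcription system or library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A sign described by the five parameters above.
    The string values are rough, informal labels -- there is
    no widely accepted written transcription for sign languages."""
    handshape: str            # the shape of the hand itself
    orientation: str          # the direction the palm is facing
    location: str             # where on the body the sign is made
    movement: str             # direction, repetition, speed
    non_manual_markers: str   # facial expression, body movement, posture

# Same handshape, different location and movement = a completely different sign.
girl = Sign("same-handshape", "palm facing in", "cheek",
            "brush down along the jaw", "neutral face")
sternum_surgery = Sign("same-handshape", "palm facing in", "sternum",
                       "trace a line down the chest", "neutral face")

print(girl == sternum_surgery)  # False: changing one parameter changed the meaning
```

The catch, of course, is that real signing doesn't come pre-labeled like this; an AI has to recover all five parameters from raw 3D movement in video, and that's exactly the hard part.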
Second, it's important to recognize that there is no written form of sign languages. A few people have tried building written systems like these, but there are no widely accepted transcription formats. Sign languages are visual and gestural; they use the airspace around the signer, and that airspace is where expression, conversation, and grammar happen. So AI struggles with the concept of a completely visual language with very few limits on how it can be expressed. Spoken languages and signed languages are very different, for MANY reasons, but one difference is that sign languages include a lot of non-signed information. Let me show you an example: Deaf poetry. You can have an entire Deaf poem signed, in a beautifully visual way, and not have one formal sign. Not one. That's impossible in English. English poetry is limited to words. So again, AI struggles to understand that.
Another example: back in college I attended a class called Visual Gestural Communication (VGC). I absolutely loved it. My dad called it my "mime class." Anyway, the point of that class was to communicate without any formal ASL signs or words, using gestural communication instead. It was so cool! I fell in love with it instantly. Now, would that class be possible for English? To disconnect from English words, to disconnect from English letters? No. It would be impossible. But with sign, communication is totally possible without formalized signs. And I've used that skill constantly in the years since that VGC class as I've traveled and met Deaf people around the world. I've been able to communicate without relying on English words or fingerspelling.
And that leads to the third question.
With written languages, AI has gathered tons of information to make a large language model, right? If you google a topic like "finance," you will get a list of results that seems to go on forever. That is what AI uses to answer our questions. AI needs data. And written languages, English for example, have an incredible amount of records spanning hundreds and hundreds of years for AI to pull from.

With sign languages, it's different. The ability to record ASL is such a recent thing, 100 to 120 years at the very most, starting with the invention of cameras and film. Because that is such a recent invention, ASL records are limited. And that's ASL, where the USA had access to this technology first (or almost first): first access to filming and first access to the Internet. So ASL has the most data of any sign language out there; others have far fewer records. AI can't pull from an arsenal of records with sign languages. The pickings are very slim; there is very little of what they call "training data" for LLMs.

Back in the day, when the World Wide Web was first established, AI like this would have been impossible. Yes, because the technology wasn't advanced enough, but also because there was no arsenal of data to pull from. It wasn't until more than 30 years later, once blogs, articles, online magazines, and library archives had been put online (and Reddit, let's be honest), that the creation of large language models finally became possible.

Another problem is that if you search "sign language" online, you'll get tons of results, sure, but most of those results will be written articles, written blogs, written sources, which means they're all in English. And the videos that pop up in the results? They're from the most popular YouTubers, like Baby Sign channels, which are not accurate representations of Deaf signing variation. So there's not a lot of data for AI to compare. If AI only has a few signers online to pull from, it might think it can understand all of a sign language when really it can't. And we have to keep in mind that other sign languages, like Korean Sign Language, Nepali Sign Language, and Kenyan Sign Language, have even less data to pull from.
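To see why the data problem is so much worse for sign languages, compare what one "training example" looks like in each case. This is just a hypothetical sketch in Python (the SignedClip structure, its fields, and the file name are all made up for illustration): a text model trains on plain strings that already exist online, while a sign language model would need annotated video like this, which barely exists at scale.

```python
from dataclasses import dataclass

# A text LLM can train on plain strings scraped from the web:
text_example = "Compound interest is interest calculated on the initial principal..."

# A sign language model would need annotated video instead.
# Everything below is hypothetical -- there is no standard format like this.
@dataclass
class SignedClip:
    video_path: str          # the raw footage itself
    gloss: list[str]         # rough sign-by-sign labels (no standard written form exists)
    signer_background: str   # signing from birth, late learner, etc. -- variation matters
    region_or_school: str    # regional and school variation changes which signs are used
    non_manual_notes: str    # facial grammar and posture, easily lost in text labels

clip = SignedClip(
    video_path="clips/flower_variant_02.mp4",   # made-up file name
    gloss=["FLOWER"],
    signer_background="Deaf, signing from birth",
    region_or_school="hypothetical example",
    non_manual_notes="relaxed, conversational",
)
```

Collecting millions of clips like that, labeled by fluent signers, is a very different project from scraping text that is already sitting online.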
The reason I decided to sit down and make a video about this topic is that someone commented under one of my videos. I had taught how to sign "flower," and she had learned a different variation of the sign: "flower" with closed fingers. She said when she first started learning ASL she had to come face to face with her own rigidity and become more fluid, flexible with language. And that's true. Sign languages in general are really "fluid." Remember the Deaf poem, where the entire piece has not one formal sign? Remember that? It's "fluid." And AI, as we all know, struggles with fluid concepts. Abstraction is not an LLM's strength, so it doesn't understand fluid languages.

So sign languages themselves are fluid, but you have to understand that each user, each signer, is different. Other languages don't have variance to this degree; they just don't. The Deaf community is unique because someone might sign from birth, or might try to speak from birth, or might be born and have no access to language in any way. None. This unfortunately does happen. Which means that each user of American Sign Language, each user of Korean Sign Language, will sign in a way that looks completely different from other users. This creates a variance to a degree that is just not found in hearing languages.

I talk about this a lot on my channel, but say you have three people. The first learned a sign language at age 20; they had language deprivation until the age of 20. The second maybe learned at the age of 12, past the point of natural language acquisition but still relatively young. Maybe the third learned to sign from birth, with parents who signed from day one. Each of their language backgrounds is different, and if you were to watch each person sign, would their signing look the same? Not at all. This sentence might be super English-y for one person, more of a mix of English and ASL for the second, and a slangy ASL for the third. It's the same sign language, true, but their signing is COMPLETELY different. So again, do spoken languages have that variance? No.

Another variance might be "flower" vs. "flower." Another is that some people sign (enunciate) super clearly and some sign blurrily; they mumble in sign. They might sign "mom" and "dad" like this and not with stiff, precise hands: "mom" "dad." And where they grew up, which school they went to, and personal preferences lead to a wide variance as well.
So, how close is AI to actually becoming fluent in sign languages? In my opinion? Not close. I think we'll have to wait until there is enough training data to create an SLLM: a large SIGN language model.
If you enjoyed this video, maybe check out this next one where I answer the most common questions about sign languages. Also, remember to like, comment, and subscribe. Thank you!