A few years ago, a frustrated Vodafone user tweeted to the customer care handle that he wanted to meet and marry the MD's daughter by EOD. To everyone's entertainment, he got an automated response asking him to DM his details for further conversation! That was the outcome of a rule-driven bot programmed to give canned responses to customer queries. Just last week, I asked ChatGPT what the response to such a tweet should be, and it was no better than the bot from the past.
Whether it's a process automation bot or a billion-dollar AI model, both seem challenged when it comes to understanding humour, sarcasm, and our intent to troll.
Before we get into machines' ability to grasp sarcasm and nuance in conversation, here's an example of fellow humans getting lost in cultural differences, even when the language they speak is the same.
An infamous chart went viral a while ago claiming to show what British people say versus what they actually mean. It lists several innocent-sounding phrases that seem optimistic and encouraging when taken at face value, yet mean the exact opposite.
When we ourselves have difficulty understanding nuances in language, how can machines we built on rationality, logic, and structured programming get it? Technically, there are a few reasons why AI doesn't get human humour, including limitations in the data sets it was trained on, bias in that data, and the way it was programmed and trained.
Our languages are dynamic, and they evolve as cultural expressions. Take emojis, for example: you may have used the 'two hands together' 🙏 emoji a hundred times this week as a symbol of gratitude, but many cultures use it as a cheerful 'high five'!
We humans use our emotional intelligence to interpret and create humour and sarcasm, but AI lacks this intrinsic emotional understanding, making it harder to grasp the subtleties involved in these forms of communication.
When a colleague says, 'Oh great, another meeting', you have a huge set of cues, beyond just the words uttered, to decipher the meaning behind them. Are they a sales professional who secured a client meeting on a Monday morning? Or are they a frustrated account executive, exasperated on a Friday evening by the news that they will be held back in the office for another two hours? Are they initiating a celebratory fist pump, or rolling their eyes in dismay as they say those words?
An AI doesn't have the mechanism to read body language, decipher context, or bring any emotional understanding to the words it has been fed. It can only work with the face value of the text and produce an output, so this is another reason, beyond training data, why it can't understand humour and sarcasm.
If you ask an AI chatbot to give you examples of sarcasm, it does produce a few illustrations. So, in theory, it knows what sarcasm means; in practice, though, it tries to identify sarcasm in text by looking for markers such as specific keywords or sentence structures, and often misses the broader context.
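To see why marker-matching falls short, here is a minimal sketch of the naive approach described above. The marker list and the function are illustrative assumptions for this article, not any real system's method:

```python
# Naive marker-based sarcasm detection: flag text that contains a known
# sarcastic phrase. The marker list here is a made-up illustration.
SARCASM_MARKERS = ["oh great", "yeah right", "just what i needed"]

def looks_sarcastic(text: str) -> bool:
    """Return True if the text contains any listed sarcasm marker."""
    lowered = text.lower()
    return any(marker in lowered for marker in SARCASM_MARKERS)

# The marker fires regardless of the speaker's actual intent...
print(looks_sarcastic("Oh great, another meeting"))       # True
# ...and sarcasm phrased without a listed marker slips through entirely.
print(looks_sarcastic("I just love waiting in traffic"))  # False
```

A celebratory 'Oh great, another meeting' from an eager salesperson would be flagged as sarcasm, while genuinely sarcastic phrasing outside the marker list goes undetected — exactly the context problem the text describes.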
Idiomatic Expressions, Cultural Gradations, and Tone:
'Did you see him? What a dog' and 'It's gone to the dogs' are vastly different from 'Hey dawg, wassup?' Dogs may be man's best friend, but the term can be used as an insult, a metaphor, or an expression of camaraderie. We learn this through exposure, upbringing, social interactions, and formal education, whereas machines learn through data inputs, pattern recognition, and ML models. So, when they encounter something new or have limited inputs on the broader context, they fail to see the bigger picture and default to processing the face value of the text.
When it comes to tone, nonverbal cues play a big role in adding emotion to words. Something like 'I'm sorry, did I hurt you again?' can be said in the most empathetic manner, expressing guilt and remorse, and the very same words can be said condescendingly, say by a bully, to insult another person as weak and miserable. While machines are getting good at enhanced sentiment analysis, they still need a lot more cultural training and contextual analysis to get better at understanding human ways of communication.
Until then, enjoy this self-deprecating joke written by ChatGPT.
Why did the AI fail its comedy class?
Because it couldn’t tell if the jokes were funny or just another data point!
This article was originally published on The Economic Times
Reach out to us for a free consultation, or to discuss a POC in AI, Cloud, or any form-factor-agnostic new product development.