In effect, the team taught the model to recognize patterns that often indicate sarcasm, and combined that with teaching it to pick out cue words in sequences that were likely to signal sarcasm. They trained the model on large data sets and then checked its accuracy.
“The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis,” says Assistant Professor of Engineering Ivan Garibay ’00MS ’04PhD. “Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
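The paper's exact architecture isn't reproduced here, but the two components the quote names — multi-head self-attention to surface cue words, and gated recurrent units to track long-range dependencies between them — can be sketched in plain NumPy. All dimensions, the random weights, and the final logistic "sarcasm score" head are illustrative assumptions, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Split the model dimension across heads, attend per head, concatenate.

    The per-head attention weights are what makes the model "interpretable":
    high weights point at the tokens (cue words) each head focused on.
    """
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outputs, weights = [], []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        A = softmax(scores, axis=-1)   # (seq_len, seq_len); each row sums to 1
        outputs.append(A @ V[:, s])
        weights.append(A)
    return np.concatenate(outputs, axis=-1), np.stack(weights)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit update: gates decide what to keep or overwrite."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

# Toy "sentence": 5 token embeddings of width 8 (illustrative sizes).
d_model, n_heads, seq_len = 8, 2, 5
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
attended, attn_weights = multi_head_self_attention(X, Wq, Wk, Wv, n_heads)

# Run the attended tokens through the GRU; the final state summarizes the sequence.
gru_w = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(6)]
h = np.zeros(d_model)
for t in range(seq_len):
    h = gru_step(attended[t], h, *gru_w)

# Hypothetical classifier head: squash the final state to a score in (0, 1).
score = sigmoid(h @ rng.standard_normal(d_model))
```

Inspecting `attn_weights` after classification is the interpretability angle: the rows with the largest weights indicate which input tokens the model treated as sarcastic cues.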
The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization’s Computational Simulation of Online Social Behavior program.
“Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text,” says Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”