We've all been there, right? Somebody misinterpreted the email we sent, the Facebook status we secretly thought was hilarious, or the tweet that was meant to be witty and inoffensive. Recent exploration into sentiment analysis on social media websites has been huge for taking the general population's "temperature" on issues such as presidential elections, product reviews, drug side effects, movie reviews, and stock market behavior. Social media sites provide an ideal central hub of sentiment data waiting to be mined by companies interested in feedback on new products, and many companies have started to explore user-generated content on social media websites using text analytics. Emotion is so easy to gauge online, right?
How are statements categorized as positive or negative?
One approach to categorizing statements is by polarity: the simple, binary, black-and-white, good-or-bad assignment of a statement. A study by Peter Turney identified three steps for an algorithm to classify an online review as recommended or not recommended.
1. Use a part-of-speech tagger to identify phrases in the input text that contain adjectives or adverbs.
2. Estimate the semantic orientation of each extracted phrase by calculating its similarity with a positive reference word ("excellent") or a negative reference word ("poor").
3. Assign the review to a class, recommended or not recommended.
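The three steps above can be sketched in code. This is a toy illustration, not Turney's actual implementation: the tiny tag table stands in for a real part-of-speech tagger, and the co-occurrence counts stand in for the search-engine hit counts Turney used to compute a PMI-style orientation score. All words, phrases, and numbers here are invented for the example.

```python
import math

# Step 1 stand-in: a tiny lookup table marking adjectives (JJ) and
# adverbs (RB), instead of a real part-of-speech tagger.
TOY_TAGS = {"fantastic": "JJ", "dull": "JJ", "poorly": "RB",
            "plot": "NN", "acting": "NN", "paced": "VBN"}

def extract_phrases(words):
    """Step 1: keep two-word phrases whose first word is an
    adjective or adverb (a crude version of Turney's patterns)."""
    return [(a, b) for a, b in zip(words, words[1:])
            if TOY_TAGS.get(a) in ("JJ", "RB")]

# Step 2 stand-in: invented co-occurrence counts with the reference
# words, in place of hit counts from a search engine.
NEAR_EXCELLENT = {("fantastic", "acting"): 40, ("poorly", "paced"): 2}
NEAR_POOR      = {("fantastic", "acting"): 3,  ("poorly", "paced"): 30}

def semantic_orientation(phrase):
    """Step 2: PMI-style score -- the log-ratio of co-occurrence
    with the positive vs. the negative reference word."""
    return math.log2(NEAR_EXCELLENT.get(phrase, 1) /
                     NEAR_POOR.get(phrase, 1))

def classify(review):
    """Step 3: average the orientations of all extracted phrases;
    a positive average means 'recommended'."""
    phrases = extract_phrases(review.lower().split())
    if not phrases:
        return "not recommended"
    avg = sum(semantic_orientation(p) for p in phrases) / len(phrases)
    return "recommended" if avg > 0 else "not recommended"

print(classify("fantastic acting"))   # recommended
print(classify("poorly paced plot"))  # not recommended
```

In the real algorithm, step 2's counts come from querying how often each phrase appears near "excellent" versus near "poor" in a large corpus; the toy dictionaries above just mimic that signal.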
According to Turney, these steps correctly classified 74% of a set of reviews drawn from Epinions. As you have probably anticipated, the process becomes increasingly complicated from there. The process above may work for a black-and-white world, but what about the fifty shades in between? Pang, Lee, and Vaithyanathan used three machine learning algorithms (Naive Bayes, maximum entropy, and support vector machines) to categorize movie reviews as positive or negative, with varying degrees of strength between the two categories. They further identified challenges in sentiment analysis: some writers use words indicative of the opposite sentiment to describe their emotion (e.g., "The plot is such a mess that it's terrible, but I loved it."). Another method of assigning meaning to opinions is natural language processing in which words are assigned varying numeric values to determine the strength of the emotion. In a study by Thelwall et al., a training algorithm optimized sentiment word strengths, assigned greater numeric values to sentiment words "boosted" by preceding intensifiers, factored in punctuation, and discounted negative emotion in questions, all to better calibrate a graded scale of emotional strength. Many texts mix subjective and objective statements, and many algorithms have been developed to filter out the objective statements to gain insight into the context of the subjective ones. In short, many factors contribute to detecting the emotional sentiment of a phrase, and while many algorithms recognize the adjectives and adverbs that set a phrase's emotional temperature, in many cases even a human reader struggles to decipher the sentiment. And I bet you just love a witty, sarcastic joke at your expense. (I know I do. For real.)
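A minimal sketch of the graded, lexicon-based scoring that Thelwall et al. describe: words carry numeric strengths, "booster" words amplify the next sentiment word, and repeated punctuation pushes the score further from zero. The lexicon entries and weights below are invented for illustration and are not the values from the cited study.

```python
# Invented mini-lexicon: positive and negative word strengths.
STRENGTHS = {"love": 3, "good": 2, "hate": -3, "mess": -2, "terrible": -3}
# Invented booster words that amplify the next sentiment word.
BOOSTERS = {"really": 1, "very": 1, "so": 1}

def sentiment_strength(text):
    """Score a short text on a graded scale: sum word strengths,
    apply pending boosts, then amplify for heavy punctuation."""
    words = text.lower().rstrip("!?.").split()
    score, boost = 0, 0
    for w in words:
        if w in BOOSTERS:
            boost += BOOSTERS[w]  # saved up for the next sentiment word
        elif w in STRENGTHS:
            s = STRENGTHS[w]
            # A boost pushes the score away from zero in s's direction.
            score += s + boost if s > 0 else s - boost
            boost = 0
    # Repeated exclamation marks ("great!!!") strengthen the emotion.
    extra = text.count("!") - 1
    if extra > 0 and score != 0:
        score += extra if score > 0 else -extra
    return score

print(sentiment_strength("I really love it"))      # 4
print(sentiment_strength("such a terrible mess"))  # -5
```

In the actual study, the word strengths themselves are tuned by a training algorithm against human-rated texts; here they are fixed by hand to keep the sketch short.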
Are computers humored by irony and sarcasm?
One of the challenges in having a computer identify sarcasm is that people often disagree on the boundaries between sarcasm and irony, as some of the meaning is based on context and situational factors. Filatova attempts to computationally unravel sarcasm and irony by first pinning down the terminology: the literal meaning of an ironic statement "echoes" an expectation which has been violated. Dews and Winner define sarcasm as a special case of irony, "Ironic insults, where the positive literal meaning is subverted by the negative intended meaning, and will be perceived to be more positive than direct insults, where the literal meaning is negative." Automatic irony identification focuses on the following features to signal irony: emoticons and onomatopoeic expressions for laughter, heavy punctuation marks, quotation marks, and positive interjections. Filatova also mentions users applying the hashtags #sarcastic and #sarcasm to identify sarcastic Twitter messages (I personally feel the sarcastic value declines with the addition of a #sarcastic). Because sarcastic phrases depend on the surrounding context, algorithms often evaluate sarcasm at the document level as well as at the level of the individual utterance. A sarcastic statement is often ultimately classified as negative, but it must be analyzed carefully, as it may use positive sentiment to convey its message.
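The surface features listed above can be extracted with simple pattern matching. This is a hedged sketch of a feature extractor for the signals Filatova mentions; the regular expressions and word lists are illustrative choices of mine, not those used in the cited work.

```python
import re

def irony_features(tweet):
    """Extract binary surface features often used as irony signals:
    laughter expressions, heavy punctuation, quotation marks,
    positive interjections, and explicit sarcasm hashtags."""
    t = tweet.lower()
    return {
        # Onomatopoeic laughter ("haha", "lol", "hehe").
        "laughter": bool(re.search(r"\b(haha+|lol|hehe+)\b", t)),
        # Heavy punctuation: two or more !/? in a row.
        "heavy_punct": bool(re.search(r"[!?]{2,}", tweet)),
        # Scare quotes around a word or phrase.
        "quotation_marks": '"' in tweet,
        # A small, invented list of positive interjections.
        "pos_interjection": bool(re.search(r"\b(wow|yay|great)\b", t)),
        # Self-labeled sarcasm hashtags.
        "sarcasm_hashtag": "#sarcasm" in t or "#sarcastic" in t,
    }

print(irony_features('Wow, another Monday!!! #sarcasm'))
```

Such features only flag candidate sarcasm at the surface level; as the text notes, real systems also weigh the surrounding document context before committing to a label.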
Sarcasm will continue to be a challenge for text miners hoping to evaluate population sentiment because "computer programming follows strict rules, while natural language, particularly the inside-joke culture of the Web, doesn't." And who doesn't love a good inside joke?
Turney, Peter (2002). "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews." Proceedings of the Association for Computational Linguistics, pp. 417–424.
Pang, Bo; Lee, Lillian; Vaithyanathan, Shivakumar (2002). "Thumbs up? Sentiment Classification using Machine Learning Techniques." Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 79–86.
Thelwall, Mike; Buckley, Kevan; Paltoglou, Georgios; Cai, Di; Kappas, Arvid (2010). "Sentiment Strength Detection in Short Informal Text." Journal of the American Society for Information Science and Technology 61(12): 2544–2558.
Filatova, Elena. "Irony and Sarcasm: Corpus Generation and Analysis Using Crowdsourcing."
Dews, Shelly; Winner, Ellen (1995). "Muting the Meaning: A Social Function of Irony." Metaphor and Symbolic Activity 10(1): 3–19.