There has been much discussion recently about the merits (or lack thereof) of analyzing social media comments for sentiment. Largely motivated by the announcement that Facebook and POLITICO are partnering to analyze qualified political comments on public and private Facebook profiles, skeptics and proponents of sentiment analysis have entered into a healthy debate about the accuracy and value of analyzing political sentiment in unstructured text.
Since we specialize in applying sentiment analysis in the political and business worlds (see some of our recent work here and here), we’d like to share our view on this topic and specifically respond to a blog post titled, “Politico-Facebook Sentiment Analysis will Generate ‘Bogus’ Results, Expert Says” published on the website TechPresident.
TechPresident’s blog post mentions a few challenges to analyzing textual sentiment. Although we disagree with how TechPresident characterized sentiment analysis, calling the technique “total bunk”, the post rightly highlights some points to consider when assessing the value of the practice.
One issue with sentiment analysis the article references is the trouble with making meaningful judgments about social media users’ intent behind comments posted on Facebook, say for a political candidate or campaign. The author of the article rather bluntly states that “you can’t make any kind of meaningful judgment about what those people intended by that usage without asking them.” The “usage” the author refers to is words commonly associated with candidate mentions in social media.
Naturally, it is quite difficult to simply look at a comment on Facebook or Twitter and determine with complete confidence whether that comment is positive, negative, or neutral in tone toward a given topic. At this point, it is impossible for software alone to automatically decode true textual sentiment without significant human intervention. But the article implies that asking participants to self-report their intentions or sentiment (presumably via a survey) is a more accurate measure. In fact, traditional survey research carries inherent biases of its own that often prevent accurate measures of public opinion.
The article goes on to point out another perceived problem with sentiment analysis; namely, the trouble with detecting sentiment in often short social media posts on Facebook (and especially Twitter) and understanding the context surrounding social media comments. The article quotes Libby Hemphill, an assistant professor of communication and information studies at the Illinois Institute of Technology, as saying “these sentiment reports make the analysis sound so much easier than it actually is, and I worry that they’re giving automated detection and classification a bad rep before we even get off the ground.”
We don’t take issue with Professor Hemphill’s assessment. Building and optimizing software that even comes close to detecting sentiment in text is very difficult, as is defining the process for integrating human judgment into machine-based sentiment analysis. These are challenges we confront on a daily basis.
So why do customers come to us to understand and monitor social media sentiment, and what is it about our technology and method that allow us to promote our expertise in the field?
First, our expertise in understanding social media sentiment stems from many years of meticulous software development and qualitative research practice. More specifically, our method for analyzing sentiment in text is well defined. For every project, our research staff begins by collecting a sample of social media text around the issues or topics that matter to our customer. Once we have a sufficient sample, we code the text for sentiment along with a number of other important variables. This human sampling-and-coding step lets us pick up on language nuances and the context of social media comments and conversations. Our research staff then trains our software to recognize sentiment within those specific conversations and contexts.
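To make that workflow concrete, here is a minimal sketch of the coding-then-training loop. The comments, labels, and the simple Naive Bayes classifier are all our own illustration, not a description of any particular production system; a real coded sample would be far larger and the model far more sophisticated.

```python
from collections import Counter, defaultdict
import math

# Hypothetical human-coded sample: analysts read each comment in context
# and label its sentiment before any software is involved.
coded_sample = [
    ("love the candidate's new jobs plan", "positive"),
    ("great speech tonight, very inspiring", "positive"),
    ("terrible debate performance, so disappointed", "negative"),
    ("another broken promise from this campaign", "negative"),
    ("the rally starts at 7pm downtown", "neutral"),
    ("watching the town hall on tv", "neutral"),
]

def tokenize(text):
    return text.lower().split()

def train(sample):
    """Learn per-label word counts and label frequencies from coded text."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in sample:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            score += math.log(
                (word_counts[label][word] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(coded_sample)
print(classify("so disappointed in tonight's speech", word_counts, label_counts))
```

The key point of the sketch is the division of labor the post describes: humans supply the labeled sample, and the software generalizes from it rather than guessing sentiment from scratch.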
We don’t (and will never) claim to be 100% accurate in decoding sentiment from unstructured text. But however immature the practice of sentiment analysis may be, we have already seen its value in quickly assessing global trends in politics and business when applied in its proper context.
Second, broad-based social media sentiment has many valuable uses for political and private organizations. Here are a few applications we provide to our customers that incorporate some form of sentiment analysis:
- Continuous reputation management that informs political and business leaders of potential threats, problems, or issues with their organizations;
- Social media due diligence in the form of evaluating an organization’s assets and liabilities from a customer’s point of view;
- Informing organizations of issue and message environments online;
- Near-instant insight into the reactions of stakeholders (customers, voters) in response to any number of communications initiatives.
Consider the following very basic example: if our analysis informs a customer that public sentiment online is highly negative toward their brand, that’s a strong signal for the customer to investigate further through other methods (focus groups, interviews, surveys, etc.) and determine a proper course of action.
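That kind of signal can be as simple as a threshold check. The sketch below is purely illustrative; the function name, threshold, and sample data are our own inventions, not part of any real monitoring product.

```python
# Hypothetical alert rule: recommend deeper research (focus groups,
# interviews, surveys) when the share of negatively coded comments
# about a brand crosses a chosen threshold.
def needs_follow_up(labels, threshold=0.5):
    """Return True if the fraction of 'negative' labels exceeds threshold."""
    if not labels:
        return False
    negative_share = labels.count("negative") / len(labels)
    return negative_share > threshold

# A week of coded comments about a hypothetical brand: 3 of 5 negative.
week_of_comments = ["negative", "negative", "neutral", "negative", "positive"]
print(needs_follow_up(week_of_comments))  # 0.6 > 0.5, so True
```

The threshold itself is a judgment call; the point is that automated sentiment tallies trigger human investigation rather than replace it.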
Claiming that analyzing text for sentiment is “bogus” (as one expert claims in the article) is shortsighted and does not give enough credit to serious researchers applying best practices in this space. We applaud the efforts of POLITICO and Facebook in using technology like ours to assess public opinion surrounding the 2012 presidential election. We’ll do the same, and we look forward to continuing to be a part of this debate and advancing the practice of sentiment analysis in politics and business.