In our explorations of social media monitoring firms, one factor that often differentiates services is the ability to parse the sentiment and tone of online conversations about your brand. That task is performed by sophisticated software known as natural language processing (NLP). It’s not the sexiest topic in the world, but it’s an important one for public relations professionals, marketers and brand managers to understand.
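To make the idea concrete, here’s a minimal sketch of lexicon-based sentiment scoring, the simplest flavor of what these tools do. The word list and weights are hypothetical examples of mine, not Lexalytics’ actual method, and real engines are far more sophisticated:

```python
# A minimal, hypothetical sketch of lexicon-based sentiment scoring.
# Real NLP engines are far more sophisticated; this only illustrates
# the basic idea of scoring words against a weighted lexicon.

SENTIMENT_LEXICON = {
    "love": 1.0, "great": 0.8, "good": 0.5,
    "bad": -0.5, "terrible": -0.8, "hate": -1.0,
}

def score_sentiment(text: str) -> str:
    words = text.lower().split()
    total = sum(SENTIMENT_LEXICON.get(w, 0.0) for w in words)
    if total > 0.2:
        return "positive"
    if total < -0.2:
        return "negative"
    return "neutral"

print(score_sentiment("I love this brand, great service"))  # positive
print(score_sentiment("terrible support, I hate waiting"))  # negative
```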
Fortunately, I recently caught up with Jeffrey Catlin, CEO of Lexalytics, an Amherst, Massachusetts-based NLP firm, to dive into the subject a bit. In this episode of SME-TV we learn what drives natural language processing, what accuracy rates you should expect from tools that provide it, and that Lexalytics has some consumer-facing products on the horizon we can be on the lookout for.
Exploring Natural Language Processing With Lexalytics CEO Jeff Catlin from Jason Falls on Vimeo.
As a follow-up to the chat with Catlin, I spoke with a few of my friends in the monitoring space. A couple of them agreed with most of Catlin’s assessment of the accuracy question. One, however, noted that while their service does use a vendor like Lexalytics to provide a foundation of NLP for their tool, they have built an ever-learning algorithm atop that technology. Because they can train the tool to understand the dynamics of conversations around a particular client and build on that learning over time, they can produce accuracies in the low- to mid-80-percent range, and are already around 83 percent for some clients.
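As a rough sketch of that “train on top of a baseline” idea, the snippet below fits a small client-specific classifier with scikit-learn. This is my own illustration, not any vendor’s actual pipeline; the labeled mentions stand in for analyst-corrected conversations about one client:

```python
# A hedged sketch of retraining a client-specific sentiment model,
# using scikit-learn (not any monitoring vendor's real system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical analyst-labeled mentions for a single client.
mentions = [
    "this release is sick, downloading now",      # client slang: positive
    "the update bricked my phone, thanks a lot",  # sarcasm: negative
    "anyone else seeing the new pricing?",        # neutral
    "love the redesign, way faster",
    "support never answered my ticket",
    "just installed it, no opinion yet",
]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

# Refitting as new corrections arrive is what lets accuracy climb
# over time, as the vendor described.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(mentions, labels)

print(model.predict(["the new release is sick"]))  # picks up client-specific slang
```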
Another NLP hang-up is that tools like Lexalytics were trained to interpret press releases and more formal journalistic writing. Social media is chock-full of typos and other truly natural quirks of human language that the machines can learn, but only if they’re told to do so. Plus, there are dozens of sentiments around brands, not just positive, negative and neutral, so NLP has its limitations. Still, as Catlin said, given a large amount of information it can paint a broad brush-stroke picture of what’s out there.
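One common way to bridge that gap between press-release English and social-media English is a normalization pass before the text ever reaches the sentiment engine. Here’s a hypothetical sketch, assuming a small hand-built slang and typo dictionary of my own invention:

```python
# A hypothetical normalization pass for social-media text, assuming
# a small hand-built slang dictionary. Engines trained on formal
# writing need something like this before scoring social posts.
import re

SLANG = {"gr8": "great", "luv": "love", "sux": "sucks", "u": "you"}

def normalize(post: str) -> str:
    post = post.lower()
    post = re.sub(r"(.)\1{2,}", r"\1\1", post)   # "sooooo" -> "soo"
    post = re.sub(r"https?://\S+", "", post)      # strip links
    tokens = []
    for tok in post.split():
        core = tok.strip("!?.,")                  # drop trailing punctuation
        tokens.append(SLANG.get(core, core))
    return " ".join(tokens)

print(normalize("OMG this phone is gr8!!! I luv it sooooo much"))
# -> "omg this phone is great i love it soo much"
```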
Regardless of the tit-for-tat debate over who is more accurate, or whether any of them really are, at least we now understand the process and the environment better. Hopefully this helps you make better decisions about social media monitoring solutions for your company or brand.
Be sure to keep an eye on Lexalytics in the coming weeks, though. Their free tools, which I’ve seen demonstrated, are simple, intuitive and most useful for those of us who have data (perhaps blog content, customer surveys or focus group transcripts) and need a tool to process and analyze it without chewing up a lot of budget. I’m excited to see how you can leverage those tools.
And, if you have more questions about natural language processing and its application in the social media monitoring and measurement space, please throw them out in the comments. If I can’t answer them, I’ll do my very best to find someone who can. (Plus, I’ll bet the Lexalytics and social media monitoring folks will be reading with interest.)