Social Media Monitoring and the “Likely Voter” Model

by Tom Webster

This is not a political post, but as I write this, I am on a plane headed to our election headquarters, as my company prepares to conduct the National Election Exit Polls on behalf of the major news networks. While this is an enormously complex effort, one thing we don’t have to worry about is predicting whether or not you actually voted. After all, if you are interviewed leaving your voting precinct, you just voted.

This is not the case with pre-election polls. Anybody can call a bunch of people, ask them who they are going to vote for, and report the results. Asking 1,000 random people who they would vote for, however, is often at odds with asking 1,000 actual voters what they just did. The key for the diligent pre-election pollster lies in what they call their “likely voter” modeling. This “secret sauce” consists of a handful of questions, cross-referenced with other data, that allow pollsters to predict whether or not a respondent is actually likely to vote – in other words, the best pollsters are trying to predict behavior, not preference.


I promised this wouldn’t be a political post, so here’s the relevance for you. If your brand is engaged in social media monitoring, you’re likely to be presented with negative sentiment from time to time. Obviously, if there is a specific issue, your customer service/rapid response team deals with it. What if, however, you want to use your monitored data for more strategic purposes – to extrapolate from x number of people reporting a problem to the conclusion that your product is actually flawed?

It seems to me that every brand, product or service could have its own “likely voter” model for social media data. After all, if a number of consumers complain about AT&T dropping calls, these complaints are likely from AT&T customers. But were all of the negative comments about the infamous “Motrin Moms” issue Motrin users (or moms)? A study from Lightspeed Research seemed to suggest that this was not the case. Similarly, was there significant backlash in the recent GAP logo kerfuffle from actual GAP customers? Or even, perhaps, amongst their most valuable customers? In other words, was the displeasure expressed online about these issues in concord with offline word of mouth? Or did these sentiments merely bounce off the interior walls of Twitter’s fishbowl?

Note that I am not being dismissive of the latter possibility. Negative sentiment is a problem, but there is a difference between having a communication problem, a perception problem and a product problem. Negative sentiment on social media may be indicative of one, two or all three of these things. The key is to find the other data – the “likely voter” model for your brand or service – that ties your social media data to other strategic metrics or KPIs and, in turn, allows you to reprocess that data in a more strategic light.

Social Media Explorer has been talking a lot lately about merging the offline with the online when it comes to social media initiatives, and monitoring should be no exception. A significant part of making meaning out of social media monitoring is calibrating those metrics to your actual customer base as accurately as possible, through a mix of both online and offline methods. Your brand’s customer base may have a higher (or lower) propensity to be active on a given social network, and this is but one of the base metrics you have to know in order to understand the strategic import of a given report from a monitoring service.

Again, if your customers want to interact with you on various social sites and services, then it’s your business to be where your customers are, communicating with them in the ways and methods of their choosing, not yours. Moving beyond the level of the individual interactions, however, requires a bit more thought.

What percentage of your customers are interacting on Twitter? LinkedIn? Services like Blue Sky Factory’s SocialSync and other social intelligence services can help you get closer to that information, as can simply asking your customers, of course. Knowing where your customers are – and aren’t – and trending this data over time are just two of the keys to making meaning from significant pockets of negative or positive sentiment online relating to your brand.

Going further, a mix of online and offline metrics can help you understand the relationship between a social media perception and your actual brick and mortar sales. It may be that an online reputation crisis has only weak ties to offline perceptions, or it might also be that a problem you had previously blown off as a few cranky Twitter users actually has a strong correlation to offline word of mouth and purchasing behavior.
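One simple way to test for that relationship – sketched here with invented weekly numbers purely for illustration – is to correlate a weekly negative-mention count against same-period offline sales. A real analysis would need lags, seasonality controls, and far more data:

```python
import statistics

# Hypothetical weekly figures, invented for illustration only.
negative_mentions = [12, 30, 45, 22, 60, 15]          # online complaints per week
weekly_sales      = [980, 870, 760, 930, 640, 1010]   # offline units sold

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(negative_mentions, weekly_sales)
print(round(r, 2))  # strongly negative in this made-up data
```

A strong negative correlation in data like this would suggest the complaints are not just bouncing around Twitter’s fishbowl; a near-zero correlation would point to a perception problem rather than a sales problem.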

A “likely customer” model tied uniquely to your brand could be the difference between having a “Twitter problem” and having a strategic issue with your product or service that requires more than a communication strategy. The key here, as in political research, is being able to model how sentiment affects behavior. In other words, being able to discern those who talk about voting from those who actually vote – with their wallets.

Some of you are likely doing these things already, and they are *work*. What have you learned in the process? Share!



About the Author

Tom Webster

Tom Webster is Vice President of Strategy for Edison Research, sole provider of U.S. National Election exit polling data for all major news networks. Webster has 20 years of experience in market and opinion research, with a particular emphasis on consumer behavior and the adoption of new media and technology. He is the principal author of a number of widely-cited research studies, including Twitter Usage In America, The Social Habit, and The Podcast Consumer Revealed, and is co-author of the Edison Research/Arbitron Internet and Multimedia Research Series, now in its 18th iteration. Reach him on Twitter at Webby2001, or on his blog at BrandSavant.