This is not a political post, but as I write this, I am on a plane headed to our election headquarters, as my company prepares to conduct the National Election Exit Polls on behalf of the major news networks. While this is an enormously complex effort, one thing we don’t have to worry about is predicting whether or not you actually voted. After all, if you are interviewed leaving your voting precinct, you just voted.

This is not the case with pre-election polls. Anybody can call a bunch of people, ask them who they are going to vote for, and report the results. Asking 1,000 random people who they would vote for, however, is often at odds with asking 1,000 actual voters what they just did. The key for the diligent pre-election pollster lies in what they call their “likely voter” modeling. This “secret sauce” for the pollster consists of a handful of questions, cross-referenced with other data, that allows pollsters to predict whether or not a respondent is actually likely to vote – in other words, the best pollsters are trying to predict behavior, not preference.


I promised this won’t be a political post, so here’s the relevance for you. If your brand is engaged in social media monitoring, you’re likely to be presented with negative sentiment from time to time. Obviously if there is a specific issue, your customer service/rapid response team deals with it. What if, however, you are trying to use your monitored data for more strategic purposes? To extrapolate from x number of people having a problem, to your product actually being flawed?

It seems to me that every brand, product or service could have its own “likely voter” model for social media data. After all, if a number of consumers complain about AT&T dropping calls, these complaints are likely from AT&T customers. But were all of the negative comments about the infamous “Motrin Moms” issue Motrin users (or moms)? A study from Lightspeed Research seemed to suggest that this was not the case. Similarly, was there significant backlash in the recent Gap logo kerfuffle from actual Gap customers? Or even, perhaps, amongst their most valuable customers? In other words, was the displeasure expressed online about these issues in accord with offline word of mouth? Or did these sentiments merely bounce off the interior walls of Twitter’s fishbowl?

Note that I am not being dismissive of the latter possibility. Negative sentiment is a problem, but there is a difference between having a communication problem, a perception problem and a product problem. Negative sentiment on social media may be indicative of one, two or all three of these things. The key is to find the other data – the “likely voter” model for your brand or service – that ties your social media data to other strategic metrics or KPIs and, in turn, allows you to reprocess that data in a more strategic light.
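To make the analogy concrete, here is a minimal sketch of what such a “likely customer” weighting might look like. Every signal, field name and weight below is a hypothetical assumption for illustration only – a real model would be calibrated against your actual customer data, just as pollsters calibrate likely voter screens against actual turnout.

```python
# Hypothetical "likely customer" scoring for social media mentions.
# All signals and weights below are illustrative assumptions, not a real API.

def likely_customer_score(mention):
    """Combine a few signals into a rough 0-1 score for how likely
    the author of a mention is to be an actual customer."""
    score = 0.0
    if mention.get("mentions_specific_product"):    # names a product or feature
        score += 0.4
    if mention.get("follows_brand_account"):        # pre-existing relationship
        score += 0.2
    if mention.get("prior_brand_mentions", 0) > 2:  # not a one-off pile-on
        score += 0.2
    if mention.get("in_market_geography"):          # brand actually sells there
        score += 0.2
    return min(score, 1.0)

mentions = [
    {"mentions_specific_product": True, "follows_brand_account": True,
     "prior_brand_mentions": 5, "in_market_geography": True},
    {"mentions_specific_product": False, "follows_brand_account": False,
     "prior_brand_mentions": 0, "in_market_geography": False},
]

# Weight negative sentiment by customer likelihood instead of raw counts:
# two raw complaints here amount to only about one "likely customer" complaint.
weighted = sum(likely_customer_score(m) for m in mentions)
print(weighted)
```

The design point is the same one pollsters make: you never discard the raw counts, but the weighted figure is the one you tie to strategic decisions.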

Social Media Explorer has been talking a lot lately about merging the offline with the online when it comes to social media initiatives, and monitoring should be no exception. A significant part of making meaning out of social media monitoring is calibrating those metrics to your actual customer base as accurately as possible, through a mix of both online and offline methods. Your brand’s customer base may have a higher (or lower) propensity to be active on a given social network, and this is but one of the base metrics you have to know in order to understand the strategic import of a given report from a monitoring service.

Again, if your customers want to interact with you on various social sites and services, then it’s your business to be where your customers are, communicating with them in the ways of their choosing, not yours. Moving beyond the level of the individual interaction, however, requires a bit more thought.

What percentage of your customers are interacting on Twitter? LinkedIn? Services like Blue Sky Factory’s SocialSync and other social intelligence services can help you get closer to that information, as can simply asking your customers, of course. Knowing where your customers are – and aren’t – and trending this data over time are just two of the keys to making meaning from significant pockets of negative or positive sentiment online relating to your brand.

Going further, a mix of online and offline metrics can help you understand the relationship between a social media perception and your actual brick-and-mortar sales. It may be that an online reputation crisis has only weak ties to offline perceptions, or that a problem you had previously blown off as a few cranky Twitter users actually has a strong correlation to offline word of mouth and purchasing behavior.
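As a rough sketch of that last step, the arithmetic is simple once you have both series: line up a weekly negative-mention count against same-period sales and compute a correlation. The figures below are invented purely for illustration, and in practice you would want far more weeks of data and controls for seasonality before drawing conclusions.

```python
# Sketch: does weekly negative chatter track offline sales?
# The numbers below are made up for illustration only.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

negative_mentions = [120, 340, 310, 150, 90]     # weekly mention counts
store_sales       = [1000, 940, 950, 990, 1010]  # same weeks, $ thousands

r = pearson(negative_mentions, store_sales)
print(round(r, 2))  # strongly negative for this invented data
```

A strongly negative r would suggest the chatter is tied to real purchasing behavior; an r near zero would suggest a “Twitter problem” living in isolation. Correlation is not causation, of course, but it tells you which problem deserves which team.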

A “likely customer” model tied uniquely to your brand could be the difference between having a “Twitter problem” and having a strategic issue with your product or service that requires more than a communication strategy. The key here, as in political research, is being able to model how sentiment affects behavior. In other words, being able to discern those who talk about voting from those who actually vote – with their wallets.

Some of you are likely doing these things already, and they are *work*. What have you learned in the process? Share!


About Tom Webster


Tom Webster is Vice President of Strategy for Edison Research, sole provider of U.S. National Election exit polling data for all major news networks. Webster has 20 years of experience in market and opinion research, with a particular emphasis on consumer behavior and the adoption of new media and technology. He is the principal author of a number of widely-cited research studies, including Twitter Usage In America, The Social Habit, and The Podcast Consumer Revealed, and is co-author of the Edison Research/Arbitron Internet and Multimedia Research Series, now in its 18th iteration. Reach him on Twitter at Webby2001, or on his blog at BrandSavant.


Comments & Reactions


  • Nick Chinn

Tom, this is so true. As a provider of social media monitoring tools, it is very difficult to explain to those that are “jumping on the social media monitoring wagon” that social media data needs to be relevant. Actionable insight can only be gained if the social media data is combined with other data to back up a theory, and is strong enough to justify crisis management, product redevelopment or a crowdsourcing engine.

Are those talking a lot about your brand your top brand influencers? Not if nobody listens to them!

    Is the iPhone rubbish because your providers 3G signal is overcrowded? No.

Social media data is vibrant and “in the now” and should be used to further brand, product and service developments by complementing all that is already known; it’s new, different and exciting, but data is data, though the true skill is using it well once you have it!

Thank you, Tom, for taking the time to draw and write about the connection between the commercial and political worlds.

  • 40deuce (http://flavors.me/40deuce)

    I like the point that you're making here, Tom. There is a huge difference between people who talk about a brand online and the people who are actually involved with a brand (i.e. existing customers or real potential customers).
    A few months ago on the Sysomos blog we wrote about understanding the difference between actual complaints and problems and those that are just ranting via social media. These are really important differences to understand. Just because someone is ranting about a company on Twitter is no indication that the ranter even has anything to do with the company.
    The Gap logo is a great example. It would be interesting to know how many of the people that complained about the change were actual avid Gap consumers and how many were just design people who didn't like the new look. In that case, a lot of the complaints were happening on their Facebook fan page, so I assume that those people were “fans” of the company, giving some validity to the company's response, but what about the people who were complaining on Twitter? How can you tell how many of those people actually cared about the company and how many were just joining in on the ranting?
    The problem is there isn't a great way to answer that question. Social media monitoring can help to see some of these things happening, but the people looking at this data need to do more than just see it and respond. They need to know more about these people and if they are actually complaining or just ranting.
    I always tell people that the key to social media monitoring and measuring doesn't come from our software, it comes from how the real people analyze the data that comes out of the software.

    Cheers,
    Sheldon, community manager for Sysomos (http://sysomos.com)

