For the uninitiated, social listening refers to the practice of parsing the content being generated on social platforms (Twitter, Facebook and Instagram being the most common) for certain keywords, phrases, or other variable dimensions (e.g. followers of specific accounts, posts in specific geographic areas).
So, for example, if you’re working on the hit Broadway show Hamilton and you’re interested in the social conversation taking place around the show’s opening in Chicago, you might set your social listening tool to look for social posts within the city of Chicago that contain words like “Hamilton” or “Lin-Manuel.”
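At its core, a query like this is just a keyword filter over a stream of posts. A minimal sketch (the posts and keyword list below are invented for illustration; a real tool would pull content from the platforms’ APIs):

```python
# Minimal sketch of a keyword-based social listening filter.
# The posts below are invented examples; a real tool would stream
# them from a platform API (Twitter, Instagram, etc.).

KEYWORDS = {"hamilton", "lin-manuel"}

def matches(post_text):
    """Return True if the post mentions any tracked keyword."""
    words = post_text.lower().split()
    return any(k in words for k in KEYWORDS)

posts = [
    "Just saw Hamilton in Chicago, incredible!",
    "Deep dish pizza tonight",
    "Lin-Manuel was in the lobby after the show!",
]

hits = [p for p in posts if matches(p)]
# hits contains the two Hamilton-related posts
```

In a real product the filter runs continuously over the live firehose rather than a static list, but the matching logic is the same idea.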
The power of this kind of query cannot be overstated. With the best solutions (e.g. Spredfast), one can assemble, in real time, a stream of all social posts matching specific criteria, in a specific area, AND see who is publishing them. Because social media is increasingly integrated into our lives (for younger millennials, there’s almost no daily activity that doesn’t leave a social footprint), social listening can provide a detailed look at consumer behavior and conversation at any particular moment and in any particular place.
The implications for marketers are astounding. In the above example, the show’s producers might compare the level of discussion to that seen in New York in order to predict how ticket sales will fluctuate over time. Or they might observe the social chatter about the show in other cities in order to decide where to take the show next. The possibilities are limitless and can quickly become far more sophisticated. Imagine screening this example social stream for the words “missed” or “sold-out.” Anyone using these phrases would be a prime target for future marketing campaigns.
All that said, I’ve run into three big problems with current social listening solutions. Each will be easier or harder to address over the near term.
1. Lack of Geo-Data. Most social content unfortunately lacks precise geographic data. By most accounts, less than 10% of Tweets are geo-tagged at all, and Instagram no longer provides lat-long info specific to the user – it only offers the coordinates of the specific location being tagged (if tagged at all). In other words, geo-fenced queries are seriously limited, and anyone who tells you they can “listen to all the conversation in a specific area” isn’t telling the whole truth. UNLESS, and this is the biggie, they have access to some other database that can be merged with the social stream. For now, that generally only pertains to law enforcement. However, in the near future, I anticipate the increased fusion of ad-targeting data-sets (e.g. from tracking cookies) with social data, which could provide increasing geographic resolution.
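To see why geo-fenced queries are so lossy, consider a minimal bounding-box filter (the coordinates and sample posts are invented for illustration). Posts without coordinates, which are the overwhelming majority, simply fall through:

```python
# Rough bounding box around Chicago (illustrative lat/lon values).
CHICAGO_BOX = (41.6, 42.1, -87.95, -87.5)  # (lat_min, lat_max, lon_min, lon_max)

def in_chicago(post):
    """Keep only posts carrying coordinates inside the box.
    Most posts have coords=None, so a geo-fence silently drops them."""
    coords = post.get("coords")
    if coords is None:
        return False
    lat, lon = coords
    lat_min, lat_max, lon_min, lon_max = CHICAGO_BOX
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

posts = [
    {"text": "Hamilton was amazing!", "coords": (41.88, -87.63)},  # geo-tagged, inside
    {"text": "At the theater right now", "coords": None},          # no geo data: lost
    {"text": "Hamilton NYC tonight", "coords": (40.76, -73.99)},   # outside the fence
]

geo_hits = [p for p in posts if in_chicago(p)]
# only 1 of 3 posts survives the fence
```

The second post may well have been sent from inside the theater, but with no coordinates attached, the fence has no way to know.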
2. F@!K this track is so FIRE! Natural language is a problem. Even the best enterprise-grade social listening tools support only advanced Boolean queries. They generally include integrated sentiment analysis, but the output can be hard to interpret. If a query returns 60% neutral, 20% positive and 20% negative, is that good? It certainly isn’t very actionable. And no Boolean query would capture the overwhelmingly positive sentiment of the phrase above: the profanity could be mistaken for negativity, and reading “FIRE” as praise requires knowledge of contemporary slang.
The good news here is that unlike the lack of geographic data, natural language processing (NLP) IS happening and both the private sector and academia are pushing it forward every day. There are now decent NLP APIs available for the enterprise, and we are going to see more marketers merging their expansive social management platforms with best-in-class NLP tools.
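To make the gap concrete, here is a naive lexicon-based sentiment scorer (the word lists are invented for illustration, not taken from any real tool). It reads the phrase above as negative, even though any human recognizes it as enthusiastic praise:

```python
# Naive lexicon-based sentiment scoring, to show where Boolean-era
# tools fall down. Word lists are illustrative, not from a real product.
POSITIVE = {"great", "love", "amazing", "best"}
NEGATIVE = {"fire", "hate", "worst", "awful"}  # "fire" often sits in
# danger/disaster lexicons, though in slang it is high praise

def naive_sentiment(text):
    """Score text by counting lexicon hits after stripping punctuation."""
    words = {w.strip("!@#$%.,").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("F@!K this track is so FIRE!"))  # "negative" - wrong!
print(naive_sentiment("Best show I have ever seen"))   # "positive"
```

A modern NLP model trained on real social text gets examples like this right precisely because it learns from context rather than fixed word lists.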
3. Words are SO over. Not everything on social media gets typed out in words. The web is increasingly visual. Some research has even shown that over 80% of images posted to social media that include a brand don’t mention the brand in the post’s text. The rise of visual communication is no secret – Instagram, Snapchat and even private one-to-one messaging apps (e.g. WhatsApp) are all playgrounds for people communicating without words. Images, GIFs and video are all the future. Why type it when you can show it? The elephant in the room of social listening is the lack of image recognition in many of the most successful social management platforms. There are standalone tools (Ditto, Talkwalker, others), but they are limited and don’t yet cover video. Sprinklr includes logo search and facial sentiment analysis, but like everyone else it lacks a custom search tool (e.g. show me photos that contain images of TVs) and is extremely limited in what it can pick up.
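The blind spot is easy to sketch. In the toy example below, the `image_labels` field stands in for what an image-recognition stage would produce; most listening tools have no such stage, so the text-only query is all they can run:

```python
# A text-only listening query misses image posts that never name the brand.
# Posts are invented; "image_labels" is a hypothetical field representing
# what an image-recognition step would emit for each attached photo.

posts = [
    {"text": "Best night ever!", "image_labels": ["hamilton playbill", "theater"]},
    {"text": "Hamilton was unreal", "image_labels": []},
    {"text": "Dinner with friends", "image_labels": ["pizza"]},
]

def text_query(posts, keyword):
    """What today's tools do: match on post text only."""
    return [p for p in posts if keyword in p["text"].lower()]

def visual_query(posts, keyword):
    """What image recognition would add: match on photo content."""
    return [p for p in posts
            if any(keyword in label for label in p["image_labels"])]

text_hits = text_query(posts, "hamilton")      # finds only the second post
visual_hits = visual_query(posts, "hamilton")  # recovers the first post too
```

The first post is exactly the 80% case: the brand is in the photo, not the caption, so the text query never sees it.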
So let’s return to my earlier example. With today’s solutions, the Hamilton producers
- would probably miss a great deal of the social conversation about the show in Chicago (since most posts lack geographic data),
- wouldn’t be able to accurately gauge how people felt (due to the absence of a good NLP solution, though Hamilton is a poor example here, since the sentiment is bound to be 100% positive!), and
- wouldn’t even see posts of images or videos of smiling fans outside the theater if those posts included no text (unless the producers were lucky enough to be using one of the few tools that performs this kind of visual search AND the show’s logo is clearly visible in the photos).
The perfect social listening tool would layer better geographic data from outside sources on top of the social networks’ APIs, integrate top-shelf natural language processing, AND include built-in image recognition. The industry is getting there but still has a long way to go. Luckily, there are a lot of smart people working on all of these problems.
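To sum up in code, such a tool would chain exactly those three stages. This is a hedged outline only: every function here is a placeholder standing in for a real service, not any vendor’s actual API.

```python
# Sketch of an ideal listening pipeline: geo enrichment, NLP sentiment,
# and image recognition chained together. Every stage is a placeholder.

def has_geo(post):
    """Stage 1 (placeholder): in a real tool, posts without coordinates
    would be enriched from outside data sources instead of dropped."""
    return post.get("coords") is not None

def score_sentiment(text):
    """Stage 2 (placeholder): a real pipeline would call an NLP service."""
    return "positive" if "love" in text.lower() else "unknown"

def image_mentions(post, keyword):
    """Stage 3 (placeholder): labels would come from image recognition."""
    return any(keyword in label for label in post.get("image_labels", []))

def listen(posts, keyword):
    """Keep geo-located posts matching the keyword in text OR image."""
    hits = []
    for post in posts:
        if not has_geo(post):
            continue
        if keyword in post["text"].lower() or image_mentions(post, keyword):
            hits.append((post["text"], score_sentiment(post["text"])))
    return hits

sample = [
    {"text": "Love this show!", "coords": (41.88, -87.63),
     "image_labels": ["hamilton playbill"]},
    {"text": "Hamilton tonight", "coords": None, "image_labels": []},
]
results = listen(sample, "hamilton")  # first post matches via its image
```

Note that the first post matches only through its image labels, and the second is lost to the geo filter, which is exactly the two failure modes the ideal tool would fix.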