It’s a Love Story
The history of online reviews is not unlike a love story. In knight’s armor, online reviews enter the scene on a white horse to save consumers from dishonest marketing and advertising by providing authentic reviews from real people. As the online reviews knight and his steed stand there in the majestic light of sunset (A.K.A. Amazon.com circa 1995), consumers stare in awe and fall madly in love at first sight.
When this new information source and apparent transparency arrived, consumers proudly posted their status as “In a Relationship” with the online reviews knight, resulting in a huge wave of dedicated review sites crashing into the market to capitalize on the new craze (e.g., Yelp, TripAdvisor, Angie’s List, and Consumer Reports, just to name a few). Now, hundreds of thousands of individual sites host their own reviews, and on top of that, consumers can read reviews and slam or praise businesses and products on social media sites. In fact, 70% of consumers visit review sites and 57% find recommendations on social media sites before making a purchase.
But more consumer reviews mean more honesty and transparency, which in turn leads to more satisfied consumers, right? Not quite. Like any love story… it’s complicated.
It’s Not You, It’s Me
The age-old problem is that consumers and brands want different things. Consumers want the full picture, while a brand prefers to only show pictures taken on its “good side.” For brands, online reviews are just another medium to convince consumers they should buy their products. So, with Yelp users posting 26,380 reviews per minute and research continuing to show reviews as an essential stage in the shopping journey, it’s not surprising that brands invest in strategies to maintain a positive image across online reviews.
Brands can employ several online reputation management strategies, including hiring a firm to write positive reviews about the brand or negative reviews about a competitor. Ironically (or perhaps not), these types of maneuvers remove the element of online reviews that consumers desire most: honesty. Nowadays, consumers are more aware of these tactics, resulting in growing skepticism of online review content (only 59% trust online reviews).
So, Who Do Consumers Trust?
Reviews are still vital to the purchase journey, but their role has shifted. Rather than using online reviews as the sole basis for a purchase decision, shoppers use them to form a consideration set. Then the question becomes: How do shoppers make the jump from a few brands in the consideration set to one final purchase? In some categories, especially those with high involvement products and/or long distribution channels, buyers and shoppers rely on experts of the trade to make final purchase decisions. Let’s walk through an example of a typical shopping journey.
Joe Schmoe’s Quest for Luxury Toilets
Imagine Joe Schmoe, mid-bathroom-remodel, shopping for a luxury toilet. He pores over online reviews and narrows his consideration set to three highly rated brands. But when his plumber arrives to quote the installation, the plumber waves off all three and recommends a fourth brand he has installed and serviced for years. Joe defers to the expert who has to stand behind the work, and the plumber’s preferred brand wins the sale.
The Bottom Line
The point of the story is that a good online reputation is useless if your brand doesn’t sell. Even though three brands won the online review battle, none of them won the war because they lacked the allegiance of a vital channel member. We see this same story play out all the time across many different backdrops, including the interaction between distributors and store buyers (which can be even more deadly because consumers can’t purchase a brand if it never makes it to the shelf).
How To Grow Your Channel Member Loyalty
We realize that in today’s world it sounds utterly archaic to suggest shifting part of the focus away from the almighty digital landscape to real humans. But as skepticism of online reviews and recommendations grows, enlisting more sales support from channel members is a huge untapped opportunity for brands in certain categories.
Brands can grow channel member loyalty by better understanding which products distributors, contractors, or retailers recommend in a variety of sales situations. For your brand, a good place to start is to quickly diagram the situations in which a channel member recommends products in your category. The next step is to identify which products those channel members recommend in each type of sales situation (we use a survey-based simulation methodology called Channel Lab™ to do this). Next, if applicable, try to answer the questions below:
Which selling situations are most lucrative for your brand?
These steps should give you some momentum to begin the journey to achieving more sales support from your channel members. If you have any questions, feel free to contact us. We’d be happy to discuss your unique situation and point you in the right direction.
Imagine that there are two gold miners: Swifty and Grace. Both are skilled mining managers and have dual degrees in engineering and mineralogy. Both are working mines that have been in operation for years.
Over time, Swifty experiences a decline in his monthly yield of gold ore. Figuring that the actions he has taken for years have always worked, Swifty has his crews blast away ever more aggressively, with more equipment deployed over longer hours.
Grace also experiences a similar decline in yield, but she takes a different tack. Instead of just doing more of the same, she assesses the situation. Grace conducts time studies among her work crews. She compares core samples from the areas being mined currently with samples taken in prior years and from outlying land parcels owned by her company. And she consults geological maps to see where gold ore yields are best.
In short, Swifty does more of the same to respond to the What, expecting a different result. However, Grace tries to understand Why her gold yields are down before deciding on a course of action.
Today’s Communication Professionals Are Increasingly Like Swifty
Unfortunately, with the availability of “big data,” today’s PR and communication professionals act increasingly like Swifty. They focus solely on the What without first making the effort to understand the Why behind it. As a result, they generate a lot of failed programs.
Even when sales are declining, today’s PR and communication professionals beat up their data sources to discern past customer purchase patterns, falsely believing that doing the same things will lead to success. While we concede that this behavior is helpful to a degree, over-reliance on “excavating the past” has an insidiously damaging effect: It prompts organizations to continue to mine “tapped out” markets using the same old ideas and tactics. Accordingly, “excavating the past” discourages marketers from finding new market segments, identifying more successful selling approaches, using more effective information channels, and generating more innovative communication strategies.
So, ironically, an over-reliance on the shiny new thing we call “big data” too often prevents PR and communication pros from exploring or embracing newer, better approaches. It causes an unhealthy dependence on the What without first understanding the Why.
Where’s the Evidence for Big Data?
It’s true that the arguments above do fly in the face of convention. But ask yourself a simple question: Have you ever seen big data work? Has big data alone ever given you an “aha” moment that provided you with insights for selling a bunch more of…anything?
We asked some leading business professors – people who validate selling and communication strategies for a living – whether they’ve ever seen a positive sales result from big data alone. Guess what their answer was? A resounding “no.” Despite the hype, they couldn’t point to a single study or validated case of how big data, by itself, has improved an organization’s sales results.
So we posed the same question to a category marketing director for a respected global packaged-goods giant. He’s not a believer, either, despite working with data miners for years. Sure, he did say that his company’s data gurus once came to him after analyzing troves of coupon redemption and retail shopping basket data, suggesting that his pet product brands were most often cross-sold with a particular category of alcoholic beverage. Unfortunately, neither he nor his brand teams could figure out a way to capitalize on this pearl of wisdom. After all, he pointed out, it’s pretty difficult to cross-promote categories like dog food and beer.
Does this mean that mining big data never works? Not at all. But it does mean that blindly following past data to wherever it takes you is likely to lead to frustration and failure. There has to be a better way.
Back to the Future
For a clue to resolving the big data dilemma, it’s useful to reexamine the past.
There was a time – not very long ago – when PR and communication pros would examine and discuss whatever sales or market data they could readily glean from any product or service situation. This identified the What. They’d then develop a few hypotheses about Why things were happening and brainstorm the creative opportunities each Why implied for the widgets they were selling. Most importantly, they’d commission research to fill in critical knowledge gaps or test potential communication concepts that might make sense to pursue.
This old approach was clean, it was organized, and it made sense. And, importantly, it married information about What was happening with Why it was happening, resulting in a broad range of plausible marketing and communication ideas to consider and validate. In short, the old approach unleashed pragmatic creativity – not blind slavery to a database.
Under the old rules, for example, finding out that dog food and craft beer were heavily cross-sold couldn’t possibly supply enough information by itself to launch a new campaign. So PR and communication pros would first develop alternative hypotheses as to why this was the case (perhaps dog owners skew toward the same demographics as craft-beer drinkers, or perhaps both are “treat yourself” purchases made on the same weekend shopping trip).
They’d then test their “dog/beer hypotheses” by commissioning a research study and develop initiatives or campaign ideas around those validated findings. Additionally, these PR and communication pros would almost certainly test their initiatives and campaign ideas to refine them and make sure they had the power to move the sales needle.
Under the old rules, PR and communication pros wouldn’t put their own creative egos ahead of the needs of the customer, nor would they risk monetary or reputational damage to the client to speed some half-baked campaign to market under the guise of “gotta do it fast.”
Now here’s the maddening thing: This more disciplined “excavate, hypothesize, test, and create” approach would work even better today. That’s because, used correctly, big data has the potential to identify so many more What circumstances than the syndicated market studies and sales data of old. Additionally, research techniques are immeasurably faster these days, so testing the Why hypotheses and validating ensuing creative concepts no longer leads to a long delay before PR pros can hit the start button.
This more disciplined approach is still used by the most sophisticated marketers and their agencies, but not often enough. Additionally, the approach is almost never used by smaller clients and agencies, who would almost certainly receive the greatest marginal benefit from it.
For this unhappy situation to change, PR pros need to be gently reminded that not all What opportunities are fruitful, and not all creative ideas are effective ideas. They also need to become more educated about the many new research methodologies and tools that are available to them.
Tools and Techniques for Better Results
Regrettably, many PR pros (and far too many of their marketing associates) seem to be familiar with only two research techniques: pre/post campaign measures and the gathering of online metrics. The problem is, these techniques are far better at assessing the What than the Why of a marketing situation, and they do almost nothing to prompt or validate effective communication initiatives or campaign concepts.
For this reason, it’s worth knowing that a range of newer research techniques (rapid online concept tests, survey-based simulations, and quick-turn hypothesis studies, among others) can help communications pros pursue a more disciplined “excavate, hypothesize, test, and create” development approach.
Don’t Be Like Swifty
Today’s lesson is easy: Don’t be impulsive like Swifty. Use big data to identify the What, but avoid launching an initiative or campaign until you understand the Why that drives it. Finally, don’t let your creative ego get in the way of success – use one or more of the techniques identified above to validate your assumptions and the likely market acceptance of your proposed solution.
Over the long haul, you’ll save time and money, and earn quite a bit of professional respect.
Have you ever been in a situation where big data led your marketing strategy down the wrong path?
If you’d like to understand the Why to your What, contact us here.
Let’s say that you’re a brand manager presenting research results to a room full of senior executives. Which of the following situations would you rather face?

1. Squirming as the executives poke holes in numbers you pulled together with a free online survey tool, unable to defend the sample, the questions, or the analysis.
2. Confidently walking the room through findings built on sound methodology, knowing every figure can withstand scrutiny.
You may think the answer is obvious – who wouldn’t rather be in the second situation? Well, recently, the research industry has seen some companies attempt to handle their research needs in-house by replacing expert market research teams with cheap solutions such as free online survey platforms and digital analytics tools.
The Harsh Reality
The truth is, very few businesses have the expertise to take on all of their market research needs in-house. Making do-or-die business decisions with insights derived from improper market research techniques is a dangerous practice, and more and more often, companies make poor decisions because they are relying on seriously questionable data produced by lousy marketing research.
That’s not to say that bringing marketing research in-house never makes sense. After all, some companies are actually aware that monadic design isn’t a type of wallpaper. More often than not, though, that expertise is missing, and turning to the appropriate resource is critical to avoiding catastrophic business decisions.
Where Most Teams Fall Short
There are too many to name them all, but the most common shortcomings of amateur researchers include biased or leading question wording, samples that don’t represent the target market, little or no screening for fraudulent responses, and analysis that mistakes noise for insight.
Take a Moment
There are many questions that you should ask before making crucial business decisions, but a good place to start is to ask yourself: Do we truly have the research expertise in-house, and what would a wrong decision built on bad data cost us?
Be Careful
In today’s instantly gratifying digital world, it is easy to get caught up in the sea of available quick, low-cost options, and the appeal is undeniable. Our warning is this: cheap solutions are NOT synonymous with expert marketing research, and cutting corners can be a costly mistake.
You don’t want to end up like the brand manager in the first scenario! Make sure that your company is not sacrificing the quality of your research for cheap and unreliable “quick-fix” solutions.
We’d love to hear your thoughts. What do you think the biggest advantages (or disadvantages) are to using a marketing research firm?
If you’d like to learn more about Brandware Research, contact us here.
Checking the Pulse of Brand Trackers
As marketing researchers, we often ask questions like: “What brands do you think of when you think of tennis shoes?” After all, according to canonical theories of brand marketing, we need to identify which brands come to mind when consumers think about a particular category… don’t we?
Typical brand trackers include measures such as unaided brand recall, brand preference, and brand associations, and those measures certainly serve a purpose. But isn’t it appetizing to think that brand trackers could provide much more? Perhaps even results that brand and marketing managers could actually act on?
We thought so. So naturally, we did some research.
Where Did Those Measures Come From?
Brand health trackers use measures based on a traditional brand marketing concept, the “consideration set”[1]: a subset of brands that consumers seriously consider when making purchase decisions within a category. With the consideration set in mind, the classic “What brand comes to mind…?” question is completely valid.
But, It Depends
Who remembers that one provocative classmate who always answered questions with, “It depends”? Well, the same goes for the brands that come to mind for a consumer. In reality, the consideration set is populated by brands that a consumer recalls for particular situations. For example, when I think of tennis shoes in general, I think of Nike, Asics, New Balance, Mizuno, Reebok, and Adidas. But when thinking of serious running shoes, I think of Mizuno; and when I think of casual tennis shoes, I think of New Balance.
So the better question becomes: Why do so many researchers ask only which brands come to mind for tennis shoes in general? Isn’t it more useful to ask which brands come to mind for the specific reasons consumers enter the tennis shoe category in the first place?
Memories Matter
The answer is YES, and the key to unlocking this priceless information is to consider the way we recall memories[2]. Consumers derive relevant answers from their own experiences, and every category has unique cues that consumers call on when they need to make a purchase decision. Through a disciplined sequence of qualitative and quantitative research, brands can map their performance (i.e., captured mindshare) across the most relevant category buying cues.
What Does It All Mean?
Mixing a bit of category buying behavior into your brand health tracker makes a powerful and delicious cocktail that can give you the information you need to grow your brand. Imagine the gold mine of information that can be uncovered when brands identify and measure the pervasiveness of various category buying cues. Managers who understand which cues offer the most potential and the mindshare captured by their brand for each one will be a step ahead of the game.
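To make that last point concrete, here is a toy Python sketch of one way cue pervasiveness and per-cue mindshare might be combined into a simple opportunity score. The cue names, numbers, and scoring rule are all made up for illustration; they are not a real tracker’s outputs.

```python
# Hypothetical tracker results for the tennis-shoe example.
# "Pervasiveness" = share of category buyers for whom this cue triggers
# a purchase; "mindshare" = share of those buyers who name our brand.
cues = {
    # cue: (pervasiveness, our brand's mindshare for that cue)
    "serious running":      (0.30, 0.10),
    "casual wear":          (0.45, 0.25),
    "gym / cross-training": (0.25, 0.05),
}

# One simple opportunity measure: the share of category buyers in each
# buying situation that we are NOT yet capturing.
for cue, (pervasiveness, mindshare) in cues.items():
    opportunity = pervasiveness * (1 - mindshare)
    print(f"{cue:22s} opportunity = {opportunity:.2f}")
```

In this contrived example, “casual wear” offers the largest opportunity even though the brand’s mindshare is highest there, simply because the cue is so pervasive; that’s exactly the kind of trade-off a cue-based tracker surfaces.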
This post barely scratches the surface of the incredibly complex mind of the consumer and how we can better harness the power of category behavior. If you’d like to learn more, contact Brandware here.
[1] Howard, JA & Sheth, JN. The Theory of Buyer Behavior. John Wiley & Sons. New York. 1969.
[2] Tulving, E & Craik, FIM. The Oxford Handbook of Memory. Oxford University Press. Oxford. 2000.
Here’s a distressing thought: New research might suggest that everything you ever learned about brand marketing was wrong.
Okay, saying everything you learned is invalid might be an exaggeration, but there are several recent studies coming to light that suggest the old brand identity model popularized by David Aaker several years ago just doesn’t work in most product categories. Yes, you read that right – new findings suggest that most of the advice first popularized in Aaker’s blockbuster book, Building Strong Brands, is fundamentally useless to marketing managers. Moreover, these same findings suggest that the wisdom of “The Aaker Model” didn’t simply fade with the rise of digital marketing, but that the model always failed to describe the way shoppers process information and select products or services.
Wow, talk about a generation of marketers getting the rug pulled out from under them!
The Aaker Model
Before we get into why the Aaker model is being challenged, let’s reexamine some of the brand marketing doctrines proposed by Aaker and his contemporaries. These well-meaning professionals declared that solid brand management starts with building a longstanding brand identity – a unique set of functional, emotional, and user associations that signify what the brand stands for and offers to buyers. And they argued that brand identity was instrumental in differentiating and making a brand attractive to a unique set of target customers.
Sounds entirely reasonable so far, right? And if you own an advertising agency, what could be better than to earn a pile of money by creating and promoting a “differentiated and powerful brand identity” that is expressly designed to bring recognition and sales revenue to your client? The client is happy and the agency is happy – everybody wins!
Except that they don’t.
Unfortunately, there are several faulty assumptions implicit in The Aaker Model, and therein lies its weakness. The first of these assumptions is that most consumers buy using the classic “learn, feel, do” behavioral model. That is, they mentally seek out and process information about which brands in a category are best for them, build affinity with particular brands using this newfound knowledge, and then purchase the brand that best fits their needs. The second assumption is that customers seek an ongoing “relationship” with brands – that customers want to be emotionally connected to the brands they buy. And the final assumption is that brands with strong identities ultimately succeed by building loyalty among targeted customers, whose bond with the brand keeps them coming back for more.
Recent Brand Findings
Although the common-sense assumptions that underlie The Aaker Model might initially ring true, peer-reviewed empirical findings from researchers like Andrew Ehrenberg, Gerald Goodhardt, Chris Chatfield, Byron Sharp, and Jenni Romaniuk largely refute them.
In essence, the newer findings indicate that, in most categories, customers buy out of habit and engage in minimal mental processing when deciding which brands to buy: that is, they follow a “do, learn, feel” behavior that is quite the opposite of what Aaker and his contemporaries suggested. The newer findings also show that a brand’s market share is most often driven by market penetration rates, not strong customer loyalty (i.e., attracting more customers is generally more effective than convincing current customers to purchase more often). Finally, the newer findings demonstrate that customers in most categories typically engage in choice-seeking behavior, rendering ineffective the targeting of particular market segments for many products and services.
These newer findings have profound implications for managers who are tasked with growing sales and revenue. For example, the findings strongly suggest that narrow brand positions are frequently too limiting and that the key to successful growth isn’t building customer loyalty, but capturing a greater number of light or occasional category buyers. And that implies the need to develop a more varied product line that is communicated to a broader target audience. It also suggests the necessity of making a brand available across a greater number of physical and online buying locations and making it more recognizable across a variety of category buying situations.
Implications for Research and Measurement
As one can imagine, the empirical findings mentioned above also affect the way in which many brands should be evaluated. In fact, in many product and service categories, these findings make obsolete the traditional “awareness and association” research regimen which, until recently, was considered the gold standard of brand measurement.
The specific measurement implications of these findings are too numerous to review here, but two follow directly from the research: trackers should weight market penetration and the behavior of light or occasional category buyers over loyalty metrics, and they should measure a brand’s recall across a variety of category buying situations rather than relying on a single unaided awareness question.
There’s so much more to report on this important topic, but we’ll save that for another day. Meanwhile, if you’d like to learn more about why The Aaker Model is falling out of favor and what’s replacing it in the brand measurement world, please contact Brandware here.
For the past 20 years, the Internet has provided researchers with tools to quickly and efficiently obtain data for marketers. The monitoring of online data quality has been customarily taken on by professional researchers, who have used a variety of tactics to spot and remove bad data.
Now, with the advent of easy-to-use survey software, some brands are beginning to bring their research in-house. This means that they have to shoulder the responsibility of identifying and removing bad data and dishonest responses—a critical responsibility they too often overlook. And they need to take on this task just as more and more dishonest respondents are becoming experts at cheating and avoiding detection.
To combat these issues, it is vital that anyone with responsibility for conducting online research—whether client or research provider—develop and use an advanced quality check process with dynamic traps, JavaScript, and PHP programming.
Current Tactics
Fraudulent data is a widespread issue, resulting from respondents speeding through a survey, not paying attention to the questions, or becoming fatigued[1]. The problem this presents is obvious: Management cannot make reliable decisions with unreliable data.
Current tactics employed by marketing research firms and in-house researchers alike include:

- Trap (or “red herring”) questions that instruct respondents to choose a specific answer
- Flagging respondents who finish the survey implausibly fast
- Reviewing open-ended responses for junk answers like “none” or gibberish
- Embedding special instructions to confirm that respondents are actually reading the questions
…among others. While these do help to weed out some bad responses, the rate at which fraudulent data is caught by a specific trap question is small, only about 1% to 3%[2]. Research indicates that about 15% of respondents answer carelessly, and that this number increases with survey length[3]. Furthermore, as the study specifications become more stringent, the proportion of bad responses also rises. Shockingly, using inadequate methodologies to catch bad data can result in as much as 20% of responses being completely random[3].
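For concreteness, here is a minimal sketch (in Python with pandas, and with entirely hypothetical column names and thresholds) of how these traditional checks are typically scripted as a batch pass over a completed data file:

```python
import pandas as pd

# Hypothetical completed-survey export: one row per respondent.
df = pd.read_csv("survey_responses.csv")

# 1. Speed check: flag anyone finishing in under a third of the median time.
median_secs = df["duration_seconds"].median()
df["flag_speeder"] = df["duration_seconds"] < median_secs / 3

# 2. Trap question: Q12 instructed respondents to select "Somewhat agree".
df["flag_trap"] = df["q12_trap"] != "Somewhat agree"

# 3. Straight-lining: identical answers across a 10-item rating grid.
grid_cols = [f"q20_{i}" for i in range(1, 11)]
df["flag_straightline"] = df[grid_cols].nunique(axis=1) == 1

# 4. Verbatim check: empty or throwaway open-ended answers.
junk = {"", "none", "n/a", "nothing", "idk"}
df["flag_verbatim"] = (
    df["q30_open_end"].fillna("").str.strip().str.lower().isin(junk)
)

flag_cols = ["flag_speeder", "flag_trap", "flag_straightline", "flag_verbatim"]
df["flag_count"] = df[flag_cols].sum(axis=1)

# A common (if arbitrary) rule: drop respondents tripping two or more checks.
clean = df[df["flag_count"] < 2]
print(f"Removed {len(df) - len(clean)} of {len(df)} respondents.")
```

Note the limitation baked into the very first line: the data file has to exist before any of these checks can run, which is exactly why this kind of screening happens hours or days after the junk responses were collected.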
What’s Wrong With the Current Tactics?
Unfortunately, as online surveys become more ubiquitous, respondents bent on cheating have become more knowledgeable of the tricks of the trade. They might take care not to answer questions too quickly, write “none” for all verbatim responses, or quickly scan answers for special instructions. At any rate, it’s almost guaranteed that any survey will receive a multitude of false responses.
What about sample providers? They advertise their ability to filter out bad respondents and deliver the most trustworthy panel possible. But even with the supposedly advanced methodologies that survey panels employ, researchers still end up with fraudulent responses, suggesting that relying on sample providers alone is just not good enough.
Regrettably, traditional tactics take hours or even days to detect junk responses, so researchers need a solution with multiple layers that can dynamically flag bad data in real time.
So What’s the Solution?
The best way to ensure dishonest responses are captured is to flag data quality issues as they happen. Using client-side checks written in JavaScript, along with server-side languages like PHP and Python, responses can be verified in real time. Additionally, by virtue of their connection to respondents, servers capture useful metadata (IP addresses, user agents, timing) that researchers can use both to enrich the data and to verify respondents’ answers.
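Here is a minimal sketch of the server-side half of that idea, written in Python with Flask. It assumes a survey platform that POSTs each completed response as JSON to a webhook; the endpoint, field names, and thresholds are all hypothetical, and a production version would pair this with client-side JavaScript traps and persistent storage rather than an in-memory set.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

MIN_SECONDS = 120          # illustrative floor for a 15-minute survey
seen_fingerprints = set()  # naive in-memory duplicate-device detector

def quality_flags(resp, meta):
    """Return a list of quality problems found in one survey response."""
    flags = []

    # 1. Speeding, measured by the server rather than self-reported.
    if resp.get("duration_seconds", 0) < MIN_SECONDS:
        flags.append("speeder")

    # 2. Straight-lining across a hypothetical 10-item rating grid.
    grid = [resp.get(f"q20_{i}") for i in range(1, 11)]
    if len(set(grid)) == 1:
        flags.append("straightline")

    # 3. Junk open-ends ("none", "n/a", and friends).
    text = (resp.get("q30_open_end") or "").strip().lower()
    if text in {"", "none", "n/a", "nothing"}:
        flags.append("junk_verbatim")

    # 4. Metadata the server sees for free: the same IP and user agent
    #    showing up twice suggests a repeat taker.
    fingerprint = (meta["ip"], meta["user_agent"])
    if fingerprint in seen_fingerprints:
        flags.append("duplicate_device")
    seen_fingerprints.add(fingerprint)

    return flags

@app.route("/responses", methods=["POST"])
def receive_response():
    resp = request.get_json(force=True)
    meta = {
        "ip": request.remote_addr,
        "user_agent": request.headers.get("User-Agent", ""),
    }
    flags = quality_flags(resp, meta)
    # Because the verdict travels back in the same HTTP exchange, a flagged
    # respondent can be terminated or routed to review immediately.
    return jsonify({"accepted": not flags, "flags": flags})

if __name__ == "__main__":
    app.run(port=8000)
```

The payoff is in the response itself: junk is caught while the respondent is still connected, not days later during a batch cleaning pass.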
With this methodology, surveys can be fielded faster and the resulting data is more accurate. Anything less risks skewed results, especially if one relies on traditional methods alone, or on no method at all.
As online survey platforms proliferate, so will respondents looking for an easy buck. It’s the responsibility of those executing the research to familiarize themselves with the common signs of bad responses and to use the most effective methodologies to combat them in a timely manner. It’s imperative to adapt with the technology and become acquainted with the tools necessary to quickly and dynamically catch data that could adversely impact a brand’s marketing strategy.
If you’d like to learn more about Brandware’s advanced quality checks, contact us here.
[1] Johnson, Jeff. “Improving online panel data usage in sales research.” Journal of Personal Selling & Sales Management 36.1 (2016): 74-85. Online.
[2] Garlick & Knapton. “Catch me if you can.” Quirk’s November 2007, page 58.
[3] Meade & Bartholomew. “Identifying careless responses in survey data.” Psychological Methods 17.3 (2012): 437-455. Online.