More than ever, success in digital marketing, and paid search in particular, requires an honest evaluation of where human operators can add value.
More specifically, when it comes to campaign performance, we need to acknowledge the biases that limit our decision making, and identify where decisions should instead be left to automated systems that can now make them more effectively. Within paid search, this challenge exists mainly in two areas: ad copy and keywords.
Ad copy creation within paid search has generally been approached in a way that’s unique to its environment: making sure headlines and descriptions align with keywords and landing pages, while also applying the traditional craft of ad copywriting (calls to action, and so on). However, the actual content that works from a performance point of view has often been very hard to ascertain, given that the way Google sees an ad from a ranking perspective is sometimes more important to performance than how a user might see it. With the introduction of responsive search ads (RSAs) and the sunsetting of expanded text ads (ETAs), we’re now moving towards a place where mass testing of headline and description combinations is not only possible, but will be carried out as standard.
The main implication of this shift is that testing ad copy variations no longer results in obvious winners or losers. Testing has become a part of the overall ad serving process, rather than a process in itself. Because of this, our attempted interpretation of winners and losers, which is based on aggregated data at a point in time, will be at best misleading, and at worst detrimental to performance. The focus should be on feeding the system with as many variations as reasonably possible.
Selection and deselection of keywords has relied on our perceived relevance to the product or service, along with aggregated point-in-time performance data. With Google encouraging advertisers to move more of their spend to broad match, the connection between keyword and search term now centers on intent, rather than on historical match types such as exact and broad match modifier, where the connection was far better understood and explainable. Advertisers can only guess at intent, whereas Google can calculate it from a vast number of user data points beyond the search term itself. Selecting keywords the old way is therefore likely to mean missed opportunities, because our biases tend to produce a narrower set of keywords.
Part of the process that makes the above shift possible is the advancement of automated bidding, which is more powerful than ever. These systems are now able to accurately estimate conversion propensity (when fed with quality conversion data), meaning each search is given an appropriate bid according to the campaign’s efficiency targets. This allows us to expand further into the search term universe without the risk of large dents in performance. This keyword universe is complex and contains many searches that, on the face of it, we might consider outside of what’s appropriate.
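The bidding logic described above can be sketched in miniature: a bid scaled to each search’s predicted conversion propensity keeps expected cost per conversion at the efficiency target. This is a simplified illustration, not Google’s actual system; the function name, rates, and target figures are all hypothetical.

```python
# Illustrative sketch of value-based bidding: each search receives a bid
# proportional to its predicted conversion propensity, scaled to the
# campaign's efficiency target (here, a target CPA). All names and
# numbers are hypothetical, not a description of any real bidding system.

def bid_for_search(predicted_conversion_rate: float, target_cpa: float) -> float:
    """Max CPC that keeps the expected cost per conversion at target_cpa."""
    return predicted_conversion_rate * target_cpa

# A high-intent search earns a far higher bid than a marginal one,
# even when both match the same broad keyword.
high_intent = bid_for_search(predicted_conversion_rate=0.08, target_cpa=50.0)
low_intent = bid_for_search(predicted_conversion_rate=0.005, target_cpa=50.0)
```

The point of the sketch is that "expanding into the search term universe" is safe precisely because low-propensity searches receive proportionally low bids, rather than being excluded outright.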
However, there are two important factors we need to remember:
- We only see the search term, not the intent. Since bidding is now more intent-based, we’re unable to accurately say whether the system was right or wrong.
- There will always be a part of the system that is dedicated to learning. This means that bids set slightly higher or lower than they should be, or search terms with higher or lower conversion propensity than expected, will always be part of the process, and will potentially strengthen performance over time.
Admittedly, this is based on the overall logic of how machine learning systems operate, rather than actual user-level data, which would be very hard to acquire. As these machine learning systems become more prevalent, we’ll likely have no choice but to rely on general rules for how they are managed and operate, rather than an intricate knowledge of how they actually work.
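The "dedicated to learning" behaviour described above follows the general explore/exploit logic common to these systems. A minimal epsilon-greedy sketch makes the trade-off concrete; the function, the 10% exploration share, and the deviation multipliers are illustrative assumptions, not Google’s internals.

```python
import random

# Minimal epsilon-greedy sketch: most of the time the system exploits its
# current best bid estimate, but a small share of bids deliberately
# deviates above or below it in order to keep gathering fresh auction
# data. Purely illustrative; all figures are assumptions.

def choose_bid(best_estimate: float, epsilon: float = 0.1) -> float:
    if random.random() < epsilon:
        # Exploration: deviate from the estimate to learn how the
        # auction responds at other bid levels.
        return best_estimate * random.choice([0.8, 1.2])
    # Exploitation: use the current best estimate.
    return best_estimate

random.seed(42)
bids = [choose_bid(1.0) for _ in range(1000)]
explored = sum(1 for b in bids if b != 1.0)
# Roughly 10% of bids deviate from the estimate -- the "cost of learning".
```

Those deliberately "wrong" bids are exactly why point-in-time analysis can judge the system unfairly: the short-term cost buys long-term accuracy.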
The cost of learning
As machine learning and artificial intelligence systems become more prevalent in our industry, it’s highly likely that advertisers who properly understand the cost of ‘learning’, and the benefits it buys, will have at the very least a speed advantage over competitors. That understanding will largely be about trust, which comes from honest and accurate analysis of these systems’ benefits over more manual approaches. Within the fields of ad copy and keywords, this learning cost is integral to performance. There are three key things to keep in mind:
- As much as possible, avoid removing headline and description combinations based on assumptions about what will and won’t work.
- Best practice for keyword research with fully broad match keywords is still being developed. However, where keywords fall into a gray area in terms of relevance, add them by default.
- Whilst having a view on search terms is important, adding negatives based on performance data should only be done where the evidence is strong (i.e. statistically significant).
Looking further ahead
Paid search, and digital marketing more broadly, is an ever-changing field. The specific approaches mentioned here around keywords and ad copy are likely to change in the not-so-distant future as Google continues to innovate, and perhaps even removes search query data altogether. The overarching goal of honestly assessing where machines are better placed to make certain decisions, however, is more likely to stand the test of time. If improving campaign performance remains our ultimate goal, then we’re going to need to hand ever more decisions over to machines.