Last week, Google publicly released its Smart Display product for the Google Display Network (GDN). The product is the latest in a rapidly expanding line of automated ad solutions that promise better results with less effort. Do they work as advertised?
Croud has been beta testing Smart Display on behalf of several clients. It is an exciting product and we’ve seen promising results. As with any automated product, there are caveats, but Smart Display is worth a look.
To begin with, the creative setup is remarkably easy – account managers simply need to upload a small selection of images, logos, and copy. From there, Google is able to create a wide array of ads to fit different formats. Clients no longer need to provide tens or hundreds of images in 5, 6, 7, or more sizes to have coverage on different devices and ad units.
Bidding is also simple. There are no targeting levers to pull, so there are no massive lists of permutations to add bids for. A campaign target is set based on the advertiser’s AdWords conversion pixel, along with a daily budget, and the campaign is ready to launch.
Results have generally been strong, with improvements of up to 100% over standard display setups. The algorithms do seem to respond slowly to shifts in seasonality, however, so we have found it helpful to reset the campaigns after key timeframes or tentpole events.
The results and reporting also include feedback on which variants of copy and creative are winning the multivariate testing. Once the campaign has declared winners, the losers can be rotated out and replaced with new variants, for ongoing, tournament-style optimization, as sketched below.
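To make that rotation concrete, here is a minimal Python sketch of the tournament logic. It is purely illustrative: the variant names and scores are hypothetical, and in practice the performance data comes from Smart Display’s own reporting rather than anything automated here.

```python
# Hypothetical illustration of tournament-style creative rotation.
# Scores would come from Smart Display's creative reporting;
# nothing here calls a real Google API.

def rotate_creatives(variants, challengers, keep=1):
    """Keep the top `keep` performers and back-fill with new challengers.

    variants    -- dict mapping variant name -> observed performance score
                   (e.g. conversion rate); all values here are made up
    challengers -- list of fresh copy/creative variants waiting to be tested
    """
    ranked = sorted(variants, key=variants.get, reverse=True)
    winners = ranked[:keep]
    losers = ranked[keep:]
    # Swap each loser for a new challenger, so every round pits the
    # reigning winner(s) against untested variants.
    replacements = challengers[:len(losers)]
    next_round = winners + replacements
    return next_round, losers


# Round 1: three hypothetical headlines and their (made-up) conversion rates
scores = {"headline_a": 0.021, "headline_b": 0.034, "headline_c": 0.018}
pool = ["headline_d", "headline_e"]

next_round, retired = rotate_creatives(scores, pool)
print(next_round)  # ['headline_b', 'headline_d', 'headline_e']
print(retired)     # ['headline_a', 'headline_c']
```

Each round, the reigning winner carries over while the losers are retired, so the bar for “winning” rises over time.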
So, setup is simple, performance is strong, and reporting contains unique insights on creative – what’s not to love? There are two primary caveats to consider when testing Smart Display: control and performance.
The control caveat is the flip side of the format’s simplicity. There are very few levers to pull to hit non-standard targets or ensure brand safety: you cannot set frequency caps or add negative placements, and not all permutations of the ads are attractive or optimally cropped. Audiences cannot be negatively targeted, so unless you are careful with geo or product-line exclusions, you may lose some control over how you talk to different audiences, such as prospecting vs. retargeting.
The performance caveat is a direct result of the limited control. Because you cannot exclude audiences or get audience-level reporting, it is impossible to know whether Smart Display is cherry-picking the best users – e.g. brand searchers, site visitors, or cart abandoners. When Smart Display is compared to retargeting alone, rather than to all other GDN tactics, the performance improvement essentially disappears.
As Smart Display matures, it is likely that Google will address these issues with greater control and transparency. Until then, it may not be right for all advertisers – particularly those that are sensitive about creative appearance and brand safety, or those who rely heavily on GDN remarketing to drive business.
For the rest, Smart Display is an exciting product and worth a test. We recommend a few things to get the most out of it:
- Segment the Test – Run Smart Display in its own geo or for its own product line, so that the test does not cannibalize or cloud existing efforts.
- Prepare for Multiple Rounds – Take advantage of the multivariate copy and creative testing and run several different rounds. If each round finds a winner that outperforms the last, the test will only become more successful over time.
- Leverage the Learnings – Use the unique testing setup to find imagery and copy that consumers will respond well to and apply the learnings to standard GDN and/or other channels.
- Keep Tips for Testing in Mind – Approach the test with techniques that have worked well for other automated/AI products.
Happy Testing!