The pay-per-click advertising industry is constantly changing. New strategies emerge all the time, and a campaign run a year ago may look very different today.
But some long-standing Google Ads tactics are too outdated to remain useful after those changes. Here are four.
Google describes Quality Score as “a diagnostic tool meant to give you a sense of how well your ad quality compares to other advertisers.”
The score rates each keyword on a scale of 1 to 10. A higher score indicates relevance throughout the search experience. For example, if a user searches for “oval coffee table,” the ad and its landing page should use that same keyword. Keywords with higher Quality Scores tend to have higher click-through rates.
One problem with Quality Score, however, is that it emphasizes click-through rate over conversions. A keyword may have a low Quality Score but a high conversion rate. Changing that keyword to raise the score could reduce conversions.
Quality Score matters, but it shouldn’t be the deciding factor. For low-scoring keywords that aren’t converting, consider:
- Adding negative keywords,
- Including the target keyword in the ad copy,
- Updating the landing page to match the search query.
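The click-through-versus-conversion tradeoff is easy to see with numbers. This sketch compares a high-score keyword against a lower-score keyword that converts better; the keywords, counts, and revenue figures are all invented for illustration, not real campaign data.

```python
# Hypothetical figures: a high Quality Score keyword vs. a lower-score
# keyword that converts better. None of these numbers come from a real account.
keywords = [
    # (keyword, quality_score, clicks, conversions, revenue_per_conversion)
    ("oval coffee table", 9, 1000, 10, 150.0),
    ("oval wood coffee table", 4, 400, 20, 150.0),
]

for kw, score, clicks, convs, revenue in keywords:
    conv_rate = convs / clicks
    value_per_click = conv_rate * revenue  # what a click is actually worth
    print(f"{kw}: score {score}, conv. rate {conv_rate:.1%}, "
          f"value per click ${value_per_click:.2f}")
```

Here the “worse” keyword is worth five times more per click, which is exactly why optimizing for the score alone can backfire.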
Advertisers used to test ad creative by pitting variations against each other within a single ad group. To see which call to action, landing page, or display URL performed best, an advertiser would create two ads, which Google would serve evenly over time.
This is no longer the case.
Responsive search ads contain multiple headlines and descriptions, and Google assembles the best-performing combination in search results automatically. Advertisers can’t see conversions per combination, only totals. Even with just two ads, which one receives more impressions depends on the campaign’s conversion goal. Without that insight, and with ads served unevenly, accurate testing is impossible.
The answer is Ad Variations, which tests a base ad against an experimental version in a 50/50 split. To test a landing page, for instance, the advertiser tells Google to substitute the alternate URL half the time. Advertisers still can’t see stats for each combination, but they can see whether the base or the trial ads performed better.
In the age of automation, Ad Variations is the most reliable way to test ad copy.
Ad Variations experiments report the overall performance of each version, showing which achieved the best results.
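One way to judge a 50/50 experiment like this is a two-proportion z-test on conversions. This is standard statistics, not a Google Ads feature; the conversion counts below are invented for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variants A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical 50/50 split: base ad vs. trial ad with a new landing page.
z, p = two_proportion_z(conv_a=120, n_a=5000, conv_b=160, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the usual 0.05 threshold suggests the difference between the two versions is unlikely to be noise.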
Match Type Ad Groups
Creating a separate ad group for each match type was common before match type behavior and broad match changed.
For example, the keyword “oval coffee table” would have required two ad groups containing the same keyword: one with only the exact match, the other with the phrase match. Critically, the exact-match keyword would be added as a negative in the phrase-match ad group, giving the advertiser control over which ad appears. An exact-match search would show one ad; a phrase-match search the other.
Setting the campaign to manual bidding let advertisers control the bid (and ad copy) for each variant, such as $2.00 on the exact-match keyword versus $1.50 on the phrase match.
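The legacy structure can be sketched as data. The ad group names are hypothetical; the keyword, negatives, and bids follow the example above.

```python
# Sketch of the old one-ad-group-per-match-type setup. The exact-match
# keyword is negated in the phrase-match group so each search triggers
# exactly one ad. Group names and bids are hypothetical.
ad_groups = {
    "coffee-table-exact": {
        "keywords": ["[oval coffee table]"],   # exact match
        "negatives": [],
        "max_cpc": 2.00,
    },
    "coffee-table-phrase": {
        "keywords": ['"oval coffee table"'],   # phrase match
        "negatives": ["[oval coffee table]"],  # block exact-match traffic
        "max_cpc": 1.50,
    },
}
```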
Manual bidding allows bid adjustments for things like device and location, but Smart Bidding accounts for these and more automatically. The machine learning behind Smart Bidding outperforms manual bidding; for example, Smart Bidding also factors in the user’s browser and operating system.
Still, manual bidding is sometimes useful. For example, bidding more than a few dollars on a keyword may not be profitable for an advertiser. Manual bidding sets a hard maximum cost per click, trading automation’s advantages for cost control.
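A quick way to find that profitability ceiling is a break-even max CPC: conversion rate times profit per conversion. A minimal sketch with hypothetical figures:

```python
def break_even_cpc(conversion_rate: float, profit_per_conversion: float) -> float:
    """The most a click can cost before the keyword stops being profitable."""
    return conversion_rate * profit_per_conversion

# Hypothetical keyword: 2% of clicks convert, at $100 profit per sale.
max_cpc = break_even_cpc(0.02, 100.0)
print(f"Bid no more than ${max_cpc:.2f} per click")
```

Any manual bid set at or below that ceiling keeps the keyword profitable; Smart Bidding offers no equally blunt guarantee.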