How Reliable Are Search Terms for SEO and SEM Results?

With billions of dollars spent each year on search engine optimization (SEO) and search engine marketing (SEM), the power of search terms holds more value than ever. But more than a few digital marketing professionals have become frustrated over the years by the limits of just how much can be assumed and predicted from the search terms themselves.

The same word or term used in five different searches can represent five different meanings. This forces SEO and SEM professionals to draw speculative conclusions about which search terms may be the most effective for a given marketing campaign or initiative.

This problem is at the center of a recent study that revealed a different approach could provide the context necessary to significantly improve SEO and SEM projects and programs.

The study, to be published in the November edition of the INFORMS journal Marketing Science, is titled “A Semantic Approach for Estimating Consumer Content Preferences from Online Search Queries.” It is authored by Jia Liu of Hong Kong University of Science and Technology and Olivier Toubia of Columbia Business School.

The researchers focused on the challenge digital marketers face in inferring content preferences in a more quantified, nuanced and detailed manner. If they could do so, the researchers suggest, then SEO and SEM efforts could be planned, implemented and evaluated with more precision, predictability and effectiveness.

“Because of the nature of textual data in online search, inferring content preferences from search queries presents several challenges,” said Liu. “A first challenge is that search terms tend to be ambiguous; that is, consumers might use the same term in different ways. A second challenge is that the number of possible keywords or queries that consumers can use is vast; and a third challenge is the sparsity of search queries. Most search queries contain only up to five words.”

Through their research, the study authors have determined that a different approach might better provide context for individual search terms.

The researchers used a “topic model” that combines information from multiple search queries and their associated search results, and then quantifies the mapping between queries and results. The model is powered by a learning algorithm that extracts “topics” from text based on the co-occurrence of terms. By grouping terms that are semantically related, the model supplies the context needed to interpret how a given term is being used.
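The paper's semantic model is considerably more sophisticated than anything shown here, but the general idea of a topic model — learning "topics" from which terms co-occur in the same documents — can be illustrated with a minimal collapsed-Gibbs LDA sketch. This is a generic illustration of topic modeling, not the authors' method; all corpus data and parameter values below are made up.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Toy Latent Dirichlet Allocation via collapsed Gibbs sampling.

    docs: list of token lists (e.g., a query concatenated with its results).
    Returns (doc_topic counts, topic_word counts).
    """
    rng = random.Random(seed)
    vocab = {w for d in docs for w in d}
    V = len(vocab)
    doc_topic = [[0] * n_topics for _ in docs]     # per-doc topic counts
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # per-topic word counts
    topic_total = [0] * n_topics                   # tokens per topic
    assignments = []
    # Random initial topic assignment for every token.
    for di, doc in enumerate(docs):
        z = []
        for w in doc:
            t = rng.randrange(n_topics)
            z.append(t)
            doc_topic[di][t] += 1
            topic_word[t][w] += 1
            topic_total[t] += 1
        assignments.append(z)
    # Resample each token's topic from its conditional distribution.
    for _ in range(n_iter):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = assignments[di][wi]
                doc_topic[di][t] -= 1
                topic_word[t][w] -= 1
                topic_total[t] -= 1
                weights = [
                    (doc_topic[di][k] + alpha)
                    * (topic_word[k][w] + beta) / (topic_total[k] + beta * V)
                    for k in range(n_topics)
                ]
                t = rng.choices(range(n_topics), weights=weights)[0]
                assignments[di][wi] = t
                doc_topic[di][t] += 1
                topic_word[t][w] += 1
                topic_total[t] += 1
    return doc_topic, topic_word

# Hypothetical "query + results" documents: co-occurrence, not the word
# "jaguar" itself, is what separates the two senses into different topics.
docs = [
    ["jaguar", "suv", "price", "dealer"],
    ["jaguar", "habitat", "rainforest", "species"],
]
doc_topic, topic_word = lda_gibbs(docs, n_topics=2, n_iter=100)
```

In a sketch like this, the per-document topic counts act as a rough proxy for the searcher's content preference for each query.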

As part of their research, the study authors tested various content by monitoring study participant behavior on the search engine in a controlled environment. To do so, the study authors built their own search engine called “Hoogle,” which served as a filter between Google and the user. “Hoogle” ran all queries for study participants and revealed how the learning algorithm could work in a real-world environment.
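The study does not describe Hoogle's implementation in detail, but the architecture it describes — a layer that sits between the user and a real search backend, forwarding queries while recording them for analysis — is essentially a logging proxy. A minimal sketch of that pattern, with purely illustrative names, might look like this:

```python
import time

class LoggingSearchProxy:
    """Hypothetical sketch of a Hoogle-style query-logging layer.

    `backend` is any callable mapping a query string to a list of results
    (in the study this role was played by Google; here it is a stub).
    """

    def __init__(self, backend):
        self.backend = backend
        self.log = []  # one record per query issued through the proxy

    def search(self, user_id, query):
        results = self.backend(query)
        self.log.append({
            "user": user_id,
            "query": query,
            "n_results": len(results),
            "ts": time.time(),
        })
        return results

# Usage with a stub backend standing in for a real search engine.
proxy = LoggingSearchProxy(lambda q: [q + " result 1", q + " result 2"])
results = proxy.search("participant-01", "running shoes")
```

Routing every query through one choke point like this gives the researchers a complete record of queries and result sets to feed the topic model.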

“We were able to show that our model may be used to explain and predict consumer click-through rates in online search advertising based on the degree of alignment between the search ad copy shown on the search engine results page, and the content preferences estimated by our model,” said Toubia. “In the end, what this enables digital marketers to do is better match actual search results with what users mean or intend when they key in specific search terms.”
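The study's click-through-rate prediction rests on measuring the degree of alignment between an ad copy's content and the estimated preferences. One common way to score such alignment between two topic-weight vectors (an assumption here, not necessarily the paper's exact measure) is cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical topic-weight vectors (made-up numbers for illustration):
# the searcher's estimated content preferences and two candidate ad copies.
preferences = [0.7, 0.2, 0.1]
ad_copy_a = [0.6, 0.3, 0.1]   # closely aligned with the preferences
ad_copy_b = [0.1, 0.1, 0.8]   # poorly aligned

alignment_a = cosine(preferences, ad_copy_a)
alignment_b = cosine(preferences, ad_copy_b)
```

Under the study's logic, the better-aligned copy (`ad_copy_a` here) would be expected to earn the higher click-through rate.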

The full study is available at https://pubsonline.informs.org/doi/10.1287/mksc.2018.1112.

