Interview with Bill Slawski – Inside Google’s mind
Today our guest is someone who started optimizing websites even before Google existed. It’s Bill Slawski, the head of SEO Research at Go Fish Digital and founder of SEO by the Sea.
The SEO community knows him as the guy who deciphers Google patents, and after almost 15 years of studying search-related patents and research, Bill has busted more SEO myths than anyone else. He has a vast knowledge of the digital marketing industry, a deep understanding of how search engines work, and unique insights into SEO best practices. Now he’s here to share his expertise, but we also urge you to follow his website to stay up to date with what happens inside Google’s mind.
Thank you, Bill!
SEOlium: When and why did you start analyzing Google patents?
Bill Slawski:
I started paying attention to, and looking for, new patents from Google after reading the patent application “Information Retrieval Based on Historical Data,” which was written by some of Google’s top search engineers and focused upon indexing stale content and spam content, and how Google might avoid both. That patent application led to a number of continuation patents being filed by Google, which narrowed the scope of the claims involved and updated them. It was interesting seeing how much effort Google was putting into protecting its intellectual property, and intriguing being able to gain access to the algorithms behind how the search engine worked. I started paying attention to patents that focused upon what I was doing in SEO, such as how much of a role geography played in indexing and rankings, and started looking at all of the newly granted patents and newly published applications. They provided a look at Google and Yahoo and Microsoft Search that I wasn’t really seeing anywhere else.
SEOlium:
How exactly do you find out so quickly that a patent has been granted?
Bill Slawski:
I search through the United States Patent and Trademark Office (USPTO) website a couple of times a week, and the World Intellectual Property Organization (WIPO) website once a week; both publish newly granted patents every week.
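For readers who would like to automate that kind of weekly check, here is a minimal sketch in Python. It queries the community-facing PatentsView API; the legacy unauthenticated endpoint, the field names, and the assignee string below are all assumptions (PatentsView has since moved to a keyed Search API), so treat this as a starting point rather than a recipe.

```python
# A minimal sketch of a weekly "new Google patents" check, using the
# PatentsView API (a USPTO-supported project). The legacy endpoint and
# field names below are assumptions -- PatentsView has since moved to a
# keyed Search API, so adjust the URL, auth, and fields to current docs.
import json
import requests

API = "https://api.patentsview.org/patents/query"  # legacy, unauthenticated

query = {
    "_and": [
        {"assignee_organization": "Google LLC"},   # assumed assignee string
        {"_gte": {"patent_date": "2024-01-01"}},   # granted since this date
    ]
}
fields = ["patent_number", "patent_title", "patent_date"]

resp = requests.get(API, params={"q": json.dumps(query), "f": json.dumps(fields)})
resp.raise_for_status()

for patent in resp.json().get("patents") or []:
    print(patent["patent_date"], patent["patent_number"], patent["patent_title"])
```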
SEOlium:
Is there any way to read a patent immediately after Google applies for it? I’m asking because it usually takes a couple of years before it’s granted, and I strongly suspect the algorithm already uses the invention.
Bill Slawski:
There isn’t a way to read a patent immediately after anyone applies for one, but there is a requirement that most pending patent applications be published after a certain amount of time, and there are possible ways to delay that. It can sometimes be difficult to tell whether or not a process described in a patent has been implemented yet. It is possible to tell with some, and it’s possible to determine things to look for with others.
SEOlium: A patent isn’t granted unless the “invention element” is rather specific, not general. Do you think it’s possible for spammers to analyze patents and game the system?
Bill Slawski:
A patent is filed, and an examiner at the USPTO decides whether it covers something that is new, non-obvious, and useful. Many patents aren’t about how things are ranked, and many SEOs don’t necessarily understand what they are reading when they read a patent. Patents focus upon how a search engine might work, rather than how someone might spam a search engine. For someone to read a patent and gain an advantage from it, they would already have to know a lot about information retrieval and how search engines work, including possibly information that most search engines don’t share publicly.
SEOlium: Having studied all those patents – do you think that, given unlimited programming manpower, you would be able to replicate Google and create a decent search engine? Or, let me rephrase that – could Google be overtaken one day because it published these patents?
Bill Slawski:
The idea behind the patent process is that people are granted patents, which can exclude others from infringing upon them, in exchange for publishing those patents publicly, which is supposed to inspire people to invent and innovate. If I had a chance to build a search engine, I would hope to improve upon what Google or Yahoo or Microsoft has offered. However, Google did succeed as a search engine because of the success and effectiveness of patented algorithms such as PageRank, which Google had an exclusive license to use. If AltaVista, Excite, and Inktomi had been free to use PageRank, Google might not have achieved the success it did; patents protected the intellectual property that Larry Page and Sergey Brin developed. It’s possible that patents from Google may inspire inventors to build something that may someday challenge Google. That is what patents are supposed to do.
SEOlium: [Almost] everyone in the SEO community knows that Google patents aren’t necessarily implemented in the search algorithm. So why, then, do folks believe that they are? (Because we do, don’t we?)
Bill Slawski:
There have been a number of patents granted that likely have been implemented. Enough have been that we can be certain of some of them. It’s sometimes possible to understand the algorithms behind some of them, and see those in action. It’s likely that most SEOs have an idea of how PageRank works, how anchor text relevance works, how PageRank might be transferred through a 301 redirect, how Universal Search added images and news and local results to organic results, and how NAP consistency is important in Google Maps results. As Google spokespeople sometimes say, “Just because we have a patent about something, doesn’t mean we have implemented it yet.” But having a patent on something shows that someone at Google did the research and the work to gain a patent, and had Google’s legal team file it. If it is a continuation patent, updating a previously granted patent, that is proof that they are updating the process behind a patent that they are protecting from others who might run a search engine. There are some patents that are related to each other, such as the phrase-based indexing patents, and when there are more than 20 patents on related processes like those, that is a sign that Google may be using the processes behind them.
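For readers who want the intuition behind PageRank made concrete, here is a minimal power-iteration sketch. It is a textbook simplification of the idea in the original patent, not Google’s production code.

```python
# A textbook power-iteration sketch of PageRank -- a simplification of the
# original patent's idea, not Google's production implementation.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny example: C is linked to by both A and B, so it accumulates the most rank.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```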
SEOlium: What was the most interesting patent to analyze?
Bill Slawski:
Synonym identification based on co-occurring terms, which focuses upon query rewriting and understanding the context of all of the terms in the query. What makes it interesting is that ambiguous terms that are not adjacent to each other in a query may be understood based upon the presence of other terms in the same query. It shows us the importance of context. I spent a few weeks trying to understand it and write about it, and at the end of those weeks, Google announced their Hummingbird update, which seemed to fit that patent really well.
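To make the co-occurrence idea concrete, here is a toy sketch of my own (not the patent’s actual method): a candidate substitute for one query term is scored by how often it co-occurs, in a small corpus, with the *other* terms of the query, so “auto” can look like a good substitute for “car” in a rental query even though it would not in a different context.

```python
# A toy illustration of scoring synonym candidates by their co-occurrence
# with the other terms in the query -- my own sketch of the intuition,
# not the patent's actual algorithm. The corpus is invented.
from collections import Counter
from itertools import combinations

corpus = [
    "new york hotel rates",
    "new york car rental rates",
    "new york auto rental deals",
    "used car prices",
]

# Count how often each pair of terms appears in the same document.
cooc = Counter()
for doc in corpus:
    for a, b in combinations(set(doc.split()), 2):
        cooc[frozenset((a, b))] += 1

def substitute_score(candidate: str, context_terms: list[str]) -> int:
    """Score a candidate rewrite by its co-occurrence with the query context."""
    return sum(cooc[frozenset((candidate, t))] for t in context_terms)

# Query "new york car rental": is "auto" a better substitute for "car"
# than "used" is, given the context terms around it?
context = ["rental", "york"]
for candidate in ("auto", "used"):
    print(candidate, substitute_score(candidate, context))
```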
SEOlium: I’ve seen a lot of chatter about whether or not negative SEO is real. Can competitors really harm you? Did you find anything that would support the case for negative SEO? Perhaps the “Rank-Modifying Spammers” patent?
Bill Slawski:
The Rank-Modifying Spammers patent isn’t about how people might do negative SEO, nor does it provide any actual proof that anyone was doing negative SEO. What it describes is Google watching as someone makes changes to their site and, instead of improving rankings in response to those changes, either making no positive changes to rankings or making negative changes to the rankings of their site. (This is not negative SEO, because it is not about someone doing something malicious to someone else’s website to cause it to rank poorly.) Google may be suspicious that the person making those changes is doing something manipulative, and this is a social engineering approach: because their efforts to improve their rankings didn’t have a positive effect, they might be inspired to take further actions that are spammy, such as keyword stuffing or linking from bad neighborhoods. This is the kind of patent about which Google spokespeople say, “just because we have a patent, doesn’t mean we use it.” I have no proof as to whether they use that one or not, but it does exist.
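As a rough illustration of the patent’s “rank transition” idea, here is a simplified sketch based on my own reading of it; the function shapes and the random choice of response are illustrative assumptions, not anything Google has disclosed.

```python
# A simplified reading of the "rank transition function" idea from the
# rank-modifying-spammers patent: for a while after a change, the displayed
# rank may respond unexpectedly (no change, or even a negative one) before
# settling at the new target rank. Purely illustrative, not Google's code.
import random

def transition_rank(old: float, new: float, t: float) -> float:
    """t in [0, 1]: fraction of the transition period that has elapsed."""
    if t >= 1.0:
        return new  # transition over: settle at the real new rank
    # During the transition, pick an unexpected response at random:
    response = random.choice(["frozen", "inverted"])
    if response == "frozen":
        return old  # the change appears to have no effect at all
    # ...or the change appears to backfire, fading as the transition ends:
    return old - 0.5 * (new - old) * (1 - t)

random.seed(1)
for t in (0.0, 0.5, 1.0):
    print(t, round(transition_rank(50.0, 70.0, t), 1))
```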
SEOlium: After punitive algorithms like Panda and Penguin, many SEOs are turning away from the link-building paradigm towards quality on-page SEO. So imagine an epic battle between an average page with a fantastic backlink profile and an exceptional page with an average backlink profile. Who would win?
Bill Slawski:
The existence of updates such as Panda or Penguin is supposed to encourage people to create “high quality” websites and have them linked to by high-quality sites. The intent wasn’t to be punitive, and I’ve been doing site audits for many years, even before Panda and Penguin, which focused upon improving the quality of sites and visitors’ experiences upon them. There had been warnings about Google not liking manipulative practices in patents and whitepapers (such as the ALIWEB Workshop papers). This question reminds me of many arguments I’ve seen in SEO forums in the past about on-page SEO versus off-page SEO. Having a lot of links can make a site rank well, but providing a really helpful experience can lead to referrals and to people sharing and linking to a page. If I could, I would prefer to build as high-quality a page as I can, and then work upon attaining a fantastic backlink profile (yes, I would want both).
SEOlium: Can you name a few of your favorite SEO myths that keep circulating?
Bill Slawski:
- That the canonical link element for every page in a paginated series should point to the first page of that series (one of the biggest errors people make with canonical link elements; see the sketch after this list.)
- That Latent Semantic Indexing is helpful to SEOs, because using synonyms and semantically related words on a page helps it rank better for specific keywords (this is not how LSI works, and what Google does with semantically related words is much more nuanced than this.)
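On the first myth, the correct pattern is for each page in a paginated series to canonicalize to itself, since page 2 is not a duplicate of page 1. A minimal sketch, with a hypothetical URL scheme:

```python
# A minimal sketch of the correct pattern: each page in a paginated series
# canonicalizes to itself, not to page 1. The URL scheme is hypothetical.
def canonical_tag(base_url: str, page: int) -> str:
    url = base_url if page == 1 else f"{base_url}?page={page}"
    return f'<link rel="canonical" href="{url}">'

for page in (1, 2, 3):
    print(canonical_tag("https://example.com/articles", page))
```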
SEOlium: And can you name a few interesting patents you feel are not part of the search algorithm yet but might end up there soon?
Bill Slawski:
I have a website where I write about patents, including ones that Google may not have implemented yet, and rather than list some of those here, I invite everyone to read through posts on SEO by the Sea, and decide for themselves how they might test processes described in patents to see if they have been implemented.
SEOlium: Why do you think long meta descriptions are now gone?
Bill Slawski:
There is nothing really very scientific about the length of meta descriptions, or about how long snippets might be in search results. I know that one Google designer left the company because of tests on which shade of blue should be used on Google pages; they kept on changing it, and he got fed up with that testing (https://www.cnet.com/news/google-designer-leaves-blaming-data-centrism/). I suspect that Google may have done some testing with the lengths of snippets. If the meta description you create is too long for what they are using as snippets, they will likely truncate the text that they show.
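To illustrate the practical consequence of that truncation, here is a simplified character-budget sketch; in reality Google measures snippet width in pixels and changes the budget over time, so the 160-character figure below is just a placeholder assumption.

```python
# A simplified model of snippet truncation: if the meta description exceeds
# the budget, cut at a word boundary and append an ellipsis. Google actually
# works with pixel widths, not character counts, and changes the budget over
# time -- the 160-character figure here is a placeholder assumption.
def truncate_snippet(description: str, budget: int = 160) -> str:
    if len(description) <= budget:
        return description
    cut = description[:budget].rsplit(" ", 1)[0]  # don't cut mid-word
    return cut + " ..."

print(truncate_snippet("A very long meta description " * 10))
```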
SEOlium: Do you also study patents from Bing or other search engines? What are the conceptual differences between SEO for Google and SEO for Bing?
Bill Slawski:
I used to write about patents from Yahoo and Microsoft and Ask, in addition to Google, and I still do sometimes; it can be interesting to do so. Sometimes the processes behind things a search engine might do can be very similar (think of how steering wheels in all cars are round, and you’ll see that sometimes happens). There are patents from Google and Bing, for instance, about visual segmentation of content on pages and how that affects the relevancy of that content, which are worth knowing but not really that different from each other (round steering wheels). Microsoft came out with its VIPS paper around 2003 and patented that segmentation approach a couple of years later, and Google later patented a local search process for distinguishing between content about different local entities that appeared on the same page. The concepts are often similar, but the names are sometimes different (Bing Deep Links vs. Google Sitelinks). It can be worth learning the differences between Google AMP pages and Apple News pages.
SEOlium: What is your advice for those who are just starting out in SEO?
Bill Slawski:
- Find trusted sources of information about SEO (Google Developer Pages, Google Research Publications, Google Help Central Forums)
- Build some sites of your own to learn from (affiliate sites, e-commerce sites, blogs), and experience what it is like to try to rank pages, analyze traffic to your pages, develop content, attract links, and use social media.
- Apply critical thought to whatever you read – the web is a great source of information and misinformation – learn why some sources are more credible than others.
- Learn about fields that are related to SEO, such as design, usability, information architecture, and marketing.
- Meet others and learn to network, both through social sites, and through sources such as meetups and conferences.