One of the amazing things about Google is that it learns our search habits over time. This means we get to the answers we want without being overly specific about the question. Kind of like the shorthand you develop with your significant other or best friend: you can type in something misspelled or vague and, between your search history and location settings, Google will figure it out. Super fast. So fast that we rarely notice the complexity of what's happening.

There’s a complete list of Google’s advanced search operators in a fantastic post on Beyond, so I’m not going to copy it here. But I am going to share a use case that came up when I got a phone call from a development company that was stumped.
Enter the ‘site:’ search. I searched Google for site:theirdomain.com to get a page count (the number of results is roughly the number of pages in the website, if all is well) and found nothing. That meant there were no pages in Google’s index at all. So what was blocking it? Without access to the website, I was limited to public information, so I checked the site’s robots.txt file next. This is the file that tells search engine crawlers which parts of a website they may and may not crawl; for example, a site might want its pages crawled but not its raw image files. And in that robots.txt file, I discovered that the entire site was being blocked from Google (the example below shows what that looks like).
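For reference, a site-wide block in robots.txt uses the standard directives below. The difference between invisible and indexed can come down to a single character:

```
# This blocks ALL crawlers from the ENTIRE site:
User-agent: *
Disallow: /

# While an empty Disallow rule permits everything:
User-agent: *
Disallow:
```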
It took about 3 minutes to find all this, and we were still on the phone. So I asked my development partner how long the site had been live in its current state. 3 years, I was told. I let her know about the index problem and, after it was fixed, the client decided they didn’t need SEO. Their traffic increased dramatically all on its own.
3 years of people looking for help. 3 years of people seeking an AA meeting calendar. People traveling through, people living in the region, and people who needed help right away. And it could have been discovered with a simple site: search.
Google’s index is a great double-check on the health of your website. I check my own often and encourage you to do the same!
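If you want to make that double-check a habit, it’s easy to script. Here’s a minimal sketch in Python using the standard library’s robotparser; "example.com" is a placeholder for your own domain:

```python
# Minimal sketch: warn if robots.txt blocks crawlers from the homepage.
# "example.com" is a placeholder; substitute your own domain.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# Ask whether a generic crawler ("*") may fetch the homepage.
if rp.can_fetch("*", "https://example.com/"):
    print("OK: robots.txt allows the homepage to be crawled.")
else:
    print("WARNING: robots.txt is blocking the homepage from crawlers!")
```

Keep in mind this only catches robots.txt problems; a page can also be kept out of the index by a noindex meta tag, so the site: search itself is still the quickest overall health check.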