Robots.txt is not an appropriate or effective way to block sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them; it does not prevent your server from delivering those pages to any browser that requests them. For one thing, search engines can still reference the URLs you block (showing just the URL, with no title or snippet) if there happen to be links to those URLs elsewhere on the Internet (for example, in referrer logs). Also, non-compliant or rogue crawlers that don't acknowledge the Robots Exclusion Standard may simply ignore your robots.txt directives. Finally, a curious user can read the directories and subdirectories listed in your robots.txt file and guess the URL of the content you don't want seen.
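To make that last point concrete, consider a hypothetical robots.txt (paths invented for illustration). The file must be publicly readable at `/robots.txt`, so every `Disallow` line doubles as a map of the very paths you wanted hidden:

```
User-agent: *
Disallow: /admin/
Disallow: /internal-reports/
Disallow: /customer-exports/
```

A well-behaved crawler skips those paths; a curious human simply reads the file and requests them directly.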
I did want to ask you about the section on “Don’t Focus on Long Tail Keywords”. This is topical for me, as I actually have a tab open from a recent post on the Moz blog by Rand Fishkin that details reasons why you *should* focus on long tail keywords. I know you have said that “they have their place”, but as a newbie to all of this, ever-so-slightly differing opinions from two authoritative people in the industry (that’s you and Rand, of course 🙂) frazzle my brain somewhat, and I’m not sure whether to turn left or right!

When your business is listed in an online directory, that listing is known as a structured citation. These citations increase exposure and website domain authority while associating your business name with established high-authority sites like Yelp, all of which Google views favorably. To create an effective structured citation, include your full business contact information in each directory listing and keep the formatting consistent across all of them.
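By way of illustration (a made-up business), “consistent formatting” means every directory shows the same name, address, and phone number, character for character:

```
Name:    Joe's Pizza LLC
Address: 123 Main St, Suite 4, New York, NY 10001
Phone:   (212) 555-0123
```

Variants like “Joes Pizza Inc.” or “123 Main Street” across different directories count as inconsistent and can dilute the value of the citations.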
It’s a simple Google Chrome extension. First, you have to install the extension in your Google Chrome browser. Once installed, it will appear as a little checkmark icon beside your address bar. When you click on it, it will immediately start scanning all the links on the current web page. If a link is broken or dead, it is highlighted in red, with the error code shown right beside the link text (e.g., “404”).
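The extension’s internals aren’t public, but the underlying technique is straightforward: collect every `<a href>` on the page, request each URL, and flag any error response. A minimal stdlib-only Python sketch (function and class names are my own):

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all anchor hrefs found in an HTML string, in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def link_status(url, timeout=5):
    """Return the HTTP status code for a URL (e.g. 404 for a dead link),
    or None if the request fails entirely (DNS error, refused connection)."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code
    except URLError:
        return None
```

A real checker would also resolve relative URLs against the page address and fall back to GET for servers that reject HEAD, but the broken/not-broken decision is just this status check.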
You can confer some of your site's reputation to another site when your site links to it. Sometimes users can take advantage of this by adding links to their own site in your comment sections or message boards. Or sometimes you might mention a site in a negative way and don't want to confer any of your reputation upon it. For example, imagine that you're writing a blog post on the topic of comment spamming and you want to call out a site that recently comment spammed your blog. You want to warn others of the site, so you include the link to it in your content; however, you certainly don't want to give the site some of your reputation from your link. This would be a good time to use nofollow.
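In HTML, that means adding `rel="nofollow"` to the anchor tag (the URL below is a placeholder):

```html
<!-- This attribute tells crawlers not to pass your site's reputation to the target -->
<a href="http://www.example.com/" rel="nofollow">the site in question</a>
```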
Google’s aim is to provide the most relevant result for any given query. Their entire business model relies on them being able to do this, consistently, across hundreds of billions of searches. For that reason, they’ve invested heavily into understanding the intent of queries, i.e., the reason a person typed a specific thing into Google in the first place.

A few links down, I noticed that Brian has a link from WordPress.org. Not bad! It turns out that his content has been referenced within one of WordPress's codex posts. If I were to reach out and offer some additional insight, citing one of my articles, there's a chance I could bag a similar link, especially considering they have a 'Useful Resources' section.


Google’s local search results are displayed within Google Maps after a user searches for a keyword that includes a location, such as “pizza New York.” Landing within Google’s local search results requires that you have a Google My Business account. This is key for brick-and-mortar businesses because it puts your business in front of local people who are ready to buy.