Unstructured citations are mentions of your business in places like online newspaper articles, press releases, online job boards, and other sites. Unstructured citations and links matter because they signal to Google that people are talking about your business. To earn more of them, work on getting positive reviews, host events, send press releases to the media, and engage with customers online.
For example, a plumber could look for a service that has search volume on Google but isn't covered in depth on competitors' websites. A service like “leak detection” may appear as a short blurb on your competitors' sites, but if none of them have explored every angle or created FAQs, videos, or images, that gap is an opportunity to dominate the topic.
A few links down, I noticed that Brian has a link from WordPress.org. Not bad! It turns out his content has been referenced in one of WordPress's codex posts. If I were to reach out and offer some additional insight, citing one of my articles, there's a chance I could bag a similar link, especially since the page has a 'Useful Resources' section.
In United States jurisprudence, there is a distinction between the mere act of linking to someone else's website and linking to content that is illegal (e.g., gambling that is illegal in the US) or infringing (e.g., illegal MP3 copies).[16] Several courts have found that merely linking to someone else's website, even if by bypassing commercial advertising, is not copyright or trademark infringement, regardless of how much someone else might object.[17][18][19] Linking to illegal or infringing content, however, can be sufficiently problematic to give rise to legal liability.[20][21][22] (Compare [23].) For a summary of the current status of US copyright law as to hyperlinking, see the discussion regarding the Arriba Soft and Perfect 10 cases.
Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
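To make the point concrete, here is a minimal robots.txt sketch; the /private/ path is a made-up placeholder. Notice that the file itself advertises the very directory it asks crawlers to skip:

```text
# robots.txt: a polite request, not access control.
# Compliant crawlers will skip /private/, but the pages there
# are still served to any browser that asks for them directly,
# and this file reveals that the path exists.
User-agent: *
Disallow: /private/
```

For material that genuinely must stay private, server-side protection such as password authentication is the appropriate tool.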
Having large groups of content that all revolve around the same topic builds relevance around the keywords you're trying to rank for and makes it easier for Google to associate your content with specific topics. It also makes it much easier to interlink between related pieces, pushing more internal links through your website.
The effect of following a hyperlink may vary with the hypertext system and may sometimes depend on the link itself; for instance, on the World Wide Web most hyperlinks cause the target document to replace the document being displayed, but some are marked to cause the target document to open in a new window (or, perhaps, in a new tab[2]). Another possibility is transclusion, in which the link target is a document fragment that replaces the link anchor within the source document. Hyperlinks are not followed only by people browsing documents; they may also be followed automatically by programs. A program that traverses the hypertext, following each hyperlink and gathering all the retrieved documents, is known as a web spider or crawler.
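As a rough illustration of what such a crawler does, here is a short Python sketch (using the third-party requests and beautifulsoup4 packages; the start URL and page limit are arbitrary choices). It follows each hyperlink it finds and gathers the documents it retrieves:

```python
# A minimal web crawler sketch: follow hyperlinks breadth-first,
# collecting each retrieved document. Not production code: no
# politeness delays, robots.txt checks, or retries.
from collections import deque
from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=10):
    seen = {start_url}
    queue = deque([start_url])
    fetched = 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # skip unreachable documents
        fetched += 1
        soup = BeautifulSoup(response.text, "html.parser")
        # Every <a href="..."> is a hyperlink: resolve it against
        # the current document's URL and drop any #fragment.
        for anchor in soup.find_all("a", href=True):
            link, _ = urldefrag(urljoin(url, anchor["href"]))
            if link.startswith("http") and link not in seen:
                seen.add(link)
                queue.append(link)
        yield url

for page in crawl("https://example.com"):
    print(page)
```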
As keywords are essentially the backbone of on-page SEO, you need to pay close attention to them, and there is no reason not to include them in your URLs. The inclusion has its benefits: when you work the targeted keyword into the URL, you give Google one more signal to consider your article relevant for a particular phrase.
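For instance, many sites derive the URL slug directly from the target keyword or page title. A minimal Python sketch (the keyword and domain below are invented for illustration):

```python
import re

def slugify(text):
    """Turn a title or keyword phrase into a URL-friendly slug."""
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9\s-]", "", text)   # drop punctuation
    return re.sub(r"[\s-]+", "-", text)        # collapse spaces/dashes

# e.g. target keyword "leak detection services"
print("https://example.com/" + slugify("Leak Detection Services!"))
# -> https://example.com/leak-detection-services
```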

Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn't feasible. In this case, you could automatically generate description meta tags based on each page's content.
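One rough sketch of that approach in Python, using beautifulsoup4: pull the first paragraph of body text from each page and truncate it near a ~160-character limit. The cap and the sample page are my own assumptions for illustration, not from the source:

```python
from bs4 import BeautifulSoup

def generate_meta_description(html, max_len=160):
    """Derive a per-page description from the page's own content."""
    soup = BeautifulSoup(html, "html.parser")
    paragraph = soup.find("p")
    text = paragraph.get_text(" ", strip=True) if paragraph else ""
    if len(text) > max_len:
        # Truncate on a word boundary so the snippet reads cleanly.
        text = text[:max_len].rsplit(" ", 1)[0] + "..."
    return text

page = ("<html><body><p>We repair burst pipes, detect hidden leaks, "
        "and service water heaters across the city, with 24/7 "
        "emergency callouts.</p></body></html>")
print('<meta name="description" content="%s">'
      % generate_meta_description(page))
```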


An inline link may display a modified version of the content; for instance, instead of an image, a thumbnail, low resolution preview, cropped section, or magnified section may be shown. The full content is then usually available on demand, as is the case with print publishing software – e.g., with an external link. This allows for smaller file sizes and quicker response to changes when the full linked content is not needed, as is the case when rearranging a page layout.
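In HTML, for example, this pattern is just a small inline image wrapped in a link to the full-resolution file (both filenames below are placeholders):

```html
<!-- The thumbnail displays inline; the full image loads only
     when the reader follows the hyperlink. -->
<a href="photo-full.jpg">
  <img src="photo-thumb.jpg" alt="Harbor at sunset (thumbnail)" width="160">
</a>
```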
Google’s local search results are displayed within Google Maps after a user searches for a keyword that includes a location, such as “pizza New York.” Appearing in these results requires a Google My Business account. This is key for brick-and-mortar businesses because it puts your business in front of local people who are ready to buy.