SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But almost every site is going to have pages that you don’t want to include in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your website, and in a worst-case, they might be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include instructions for specific pages.
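As an illustration, a minimal robots.txt might look like the below (the paths and sitemap URL here are placeholders, not recommendations):

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/

Sitemap: https://www.example.com/sitemap.xml
```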
Some meta robots tags you may employ include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
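In a page’s HTML, these directives live in a meta tag in the head. For example, a page that should stay out of the index while still having its links followed might include:

```html
<!-- Keep this page out of the index, but still follow its links -->
<meta name="robots" content="noindex, follow">
```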
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
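For instance, an HTTP response carrying the tag might look like the below (the headers other than X-Robots-Tag are illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```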
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complex.
But this, naturally, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set robots-related directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag – the two most common being when:
- You want to manage how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a particular image or video from being crawled – the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.
Maybe you don’t want a particular page to be cached and want it to be unavailable after a specific date. You can use a combination of “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
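Such a combined header could be sent as follows (the date shown is just a placeholder):

```http
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2025 15:00:00 PST
```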
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet to explain:
| Crawler Directives | Indexer Directives |
|---|---|
| Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on a site search engine bots are allowed to crawl and not allowed to crawl. | Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results. |
| | Nofollow – allows you to specify links that should not pass on authority or PageRank. |
| | X-Robots-Tag – allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via a .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
That sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
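A typical .htaccess snippet for this (it assumes Apache’s mod_headers module is enabled) is:

```apache
# Tell search engines not to index or follow links in any PDF file
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```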
In Nginx, it would look like the below:
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
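In a .htaccess file (again assuming mod_headers is enabled), one way to do this might be:

```apache
# Keep common image formats out of the index
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```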
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from crawling via robots.txt, then any indexing and serving directives cannot be discovered and will not be followed.
So if directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will tell you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
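If you prefer the command line, a quick way to inspect the headers yourself is with curl (the URL here is a placeholder):

```shell
# Fetch only the response headers and filter for the X-Robots-Tag
curl -sI "https://example.com/file.pdf" | grep -i "^x-robots-tag"
```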
Another method that scales well for identifying issues on websites with millions of pages is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner.

So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.