How to Implement Website Non-Indexing in Next.js?

To make a website non-indexed, you typically want to prevent search engines from crawling and indexing its pages. This can be done by using a combination of strategies:

1. Robots Meta Tag:

You can include a meta tag in the HTML <head> section of your web pages to instruct search engines not to index the content. Add the following meta tag to the pages you want to exclude:

<meta name="robots" content="noindex, nofollow">

The "noindex" directive tells search engines not to index the page, and "nofollow" instructs them not to follow any links on the page.
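In Next.js, a common way to add this tag is with the built-in next/head component. Below is a minimal sketch assuming the Pages Router; the file name private-page.js is just an illustration:

// pages/private-page.js
import Head from 'next/head';

export default function PrivatePage() {
  return (
    <>
      <Head>
        {/* Tell crawlers not to index this page or follow its links */}
        <meta name="robots" content="noindex, nofollow" />
      </Head>
      <main>This page should stay out of search results.</main>
    </>
  );
}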

2. Robots.txt File:

Create a robots.txt file in the root directory of your website. This file specifies rules for search engine crawlers. To block crawlers from the entire site, use:

User-agent: *
Disallow: /

This tells all search engine crawlers not to access any part of your site. Note that robots.txt blocks crawling rather than indexing itself; a page that is linked from elsewhere can still appear in search results, so pair it with the noindex meta tag or header.
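In Next.js, the simplest way to serve this file is to place robots.txt in the public/ directory at the project root; files in public/ are served verbatim from the site root, so it will be available at /robots.txt. If you need to generate the rules dynamically instead, an API-route approach is shown later in this post.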

3. Password Protection:

If applicable, you can password-protect your entire website or specific directories. This adds an extra layer of security and prevents search engines from accessing the content without proper authentication.
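In Next.js, one way to password-protect every route is with Middleware. The following is a minimal sketch of HTTP Basic Auth, assuming Next.js 12+ (where a middleware.js file at the project root intercepts all requests); the hard-coded credentials are placeholders and should come from environment variables in a real setup:

// middleware.js
import { NextResponse } from 'next/server';

// Placeholder credentials for illustration only
const USER = 'admin';
const PASS = 'secret';

export function middleware(req) {
  const auth = req.headers.get('authorization');
  if (auth) {
    const [scheme, encoded] = auth.split(' ');
    if (scheme === 'Basic' && encoded) {
      // Decode "user:pass" from the Basic Auth header
      const [user, pass] = atob(encoded).split(':');
      if (user === USER && pass === PASS) {
        return NextResponse.next();
      }
    }
  }
  // No valid credentials: crawlers get a 401 and see no content
  return new NextResponse('Authentication required', {
    status: 401,
    headers: { 'WWW-Authenticate': 'Basic realm="Protected"' },
  });
}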

4. HTTP Header:

Set the "X-Robots-Tag" HTTP header in your server configuration to specify indexing rules. For example, you can send the following header to prevent indexing:

X-Robots-Tag: noindex, nofollow
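In Next.js you can send this header without touching the web server, using the custom headers support in next.config.js. Here is a minimal sketch, assuming a Next.js version with headers() support (9.5 or later); it can coexist with the rewrites() shown further below:

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        // Apply the header to every route on the site
        source: '/:path*',
        headers: [
          { key: 'X-Robots-Tag', value: 'noindex, nofollow' },
        ],
      },
    ];
  },
};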

Serve robots.txt via an API Route:

As an alternative to a static file, you can create an API route at pages/api/robots.js that serves the robots.txt content dynamically:

// pages/api/robots.js
export default function handler(req, res) {
  // Respond with a plain-text robots.txt that blocks all crawlers
  res.setHeader('Content-Type', 'text/plain');
  res.status(200).send('User-agent: *\nDisallow: /');
}

Configure Routing:

Add a rewrite to your next.config.js file so that requests for /robots.txt are handled by the API route. If you don't have a next.config.js file, create one:

// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        // Serve /robots.txt from the API route defined above
        source: '/robots.txt',
        destination: '/api/robots',
      },
    ];
  },
};
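Once this is in place, requesting /robots.txt on the site (for example, http://localhost:3000/robots.txt during development, assuming the default port) should return the plain-text rules from the API route.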
