How to avoid duplicate content issues on your website

Duplicate content on your website can confuse visitors and cause problems with search engines. Google, in particular, treats duplicate content within a site as a technical SEO issue that can affect ranking. There are situations where duplicate content is acceptable or even necessary, and there are methods for handling each of them.

301 redirects

Let’s say you have two pages on the site with the same content. If you simply remove one, you risk losing any link equity from its incoming links. The best practice in this situation is a 301 (permanent) redirect: it takes visitors to the correct page and passes the link equity from the incoming links along with them.
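
As a minimal sketch, on an Apache server a rule like this in the .htaccess file sets up the redirect (the paths and domain are placeholders; substitute your own):

    # Permanently redirect the duplicate page to the original
    Redirect 301 /old-duplicate-page/ https://www.example.com/original-page/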

Robots.txt

Another way to keep search engines away from duplicate content is to block the pages in the robots.txt file. Sites with shopping carts that generate multiple URLs for the same page can use pattern-matching rules (Google supports the * and $ wildcards) to block specific links. Be careful with robots.txt, as changes can have unintended consequences; test the file with a robots.txt checker before deploying it to the live site.
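
For example, a robots.txt like the one below blocks crawlers from a cart path and from any URL carrying a session ID parameter (the path and parameter name are hypothetical examples):

    User-agent: *
    # Block any URL containing a session ID query parameter
    Disallow: /*?sessionid=
    # Block the cart path entirely
    Disallow: /cart/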

Rel=canonical

You can use the rel="canonical" link tag to tell search engine spiders which of several pages with the same content is the original, canonical version. For this to work, you need to place a tag pointing to the canonical URL in the <head> of every duplicate page. You can use a site crawler to scan your website and locate duplicate pages.
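
For example, each duplicate page would carry a tag like this in its <head>, pointing at the canonical version (the address is a placeholder):

    <link rel="canonical" href="https://www.example.com/original-page/" />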
