Most online authors and publishers work hard to avoid duplicate content when producing articles and posts they hope will convert into sales. A smaller group of authors, however, lives off the sweat of other people’s website content, and it is no wonder many of them are referred to as ‘spammers’, ‘scrapers’ and other negative terms. In recent years, the number of such scrapers has risen despite extensive press coverage of the dangers duplicate content poses to search engine optimization. It has therefore become difficult for honest website owners to avoid being penalized for duplicate content they likely did not create.
Duplicate content on popular websites occurs in three ways:
- First, there is duplicate content where everything, from the text to the images, is identical. This is the most blatant and damaging type of content duplication, and it draws the stiffest penalties from the major search engines.
- The second type is partial duplicate content, where only certain elements of the page are an exact match.
- Finally, duplicate content can exist when an exact or partial duplicate phrase appears on a website. To catch this, web publishers can use a duplicate content checker such as PlagSpotter, or run a rough comparison themselves (see the sketch after this list).
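For the curious, here is a minimal sketch of how the difference between exact and partial duplication can be measured. It compares two texts using word shingles and Jaccard similarity, a standard text-overlap measure; this is only an illustration, not how PlagSpotter or any search engine actually works, and the sample sentences are made up.

```python
def shingles(text: str, size: int = 5) -> set:
    """Return the set of `size`-word shingles (overlapping word runs) in `text`."""
    words = text.lower().split()
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}


def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets: 1.0 is an exact
    match, and a high fraction suggests partial duplication."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)


original = "The quick brown fox jumps over the lazy dog near the river bank"
suspect = "The quick brown fox jumps over the lazy dog near the old mill"
print(f"Overlap: {jaccard_similarity(original, suspect):.0%}")  # Overlap: 64%
```

A score of 100% flags the first, blatant kind of duplication; anything well above zero on a long text is worth a closer look as partial duplication.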
Manual Action Against Duplicate Content
Many web marketers continue to post duplicate content on different sites in an attempt to drive traffic to their own site or to generate income from other people’s website content. This is unethical because it takes advantage of someone else’s hard work. Furthermore, it may lead to search engines taking drastic manual action against the site’s ranking in the SERPs.
Search engines have become quite proficient at detecting duplicate content. When they identify it, they attempt to determine which version is the original and then penalize the duplicate. Article writers should always check their work to ensure they are not duplicating existing content without realizing it.
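As a toy illustration of the exact-match case, the sketch below fingerprints pages by hashing their text with case and whitespace normalized, so byte-for-byte and trivially reformatted copies collide. Real search engines use far more sophisticated signals, and the URLs and page texts here are invented.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Hash the text with whitespace and case normalized, so trivially
    reformatted copies still share a fingerprint."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Index pages by fingerprint; a collision means an exact duplicate.
pages = {
    "https://example.com/original": "My unique article text.",
    "https://scraper.example/copy": "My  unique  article TEXT.",
}
seen = {}
for url, text in pages.items():
    fp = fingerprint(text)
    if fp in seen:
        print(f"{url} duplicates {seen[fp]}")
    else:
        seen[fp] = url
```

Note that whichever URL was indexed first is treated as the original here; deciding which copy is the true original is exactly the hard part for search engines.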
Algorithmic Penalty
In other instances, duplicate content is unintentional. In this scenario, search engines such as Google may not punish the web owners and their websites directly but instead continually update their algorithms to give higher rankings to sites with the original content. A recent update, codenamed Panda, is making it difficult for websites lacking unique content to sustain their positions in search engine results.
Avoiding Algorithmic Penalties
Web owners should regularly check that their site is free of duplicate content to avoid penalties from the major search engines. If duplicate content is found, it is important to update it so that the site will no longer be penalized.
Search engines also provide URL removal tools that can help remove references to duplicate content, giving you control over which duplicate pages are dropped from the major search engine indexes. Parameter blocking tools are also available to control which pages should be indexed.
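To see why parameter handling matters, consider that the same page served under different tracking or session parameters looks like several duplicate URLs to a crawler. The sketch below collapses such variants to one canonical address; the parameter names are hypothetical examples, not a definitive list.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Hypothetical tracking/session parameters that do not change the content.
IGNORED_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonical_url(url: str) -> str:
    """Drop ignored query parameters so duplicate URL variants collapse."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

variants = [
    "https://example.com/post?utm_source=newsletter",
    "https://example.com/post?sessionid=abc123",
    "https://example.com/post",
]
print({canonical_url(u) for u in variants})  # a single canonical URL
```

Parameter blocking in the search engines’ webmaster tools achieves the same effect on the indexing side, without touching your site’s code.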
How to Find Duplicate Content – The Hard Way
Below is a video I created in 2012 walking you through how to detect whether your content has been duplicated. It’s the long way, but it works for the occasional “gut check”.
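If you would rather script the occasional check, here is a minimal sketch of the same idea, assuming you already have a suspect URL in hand: take a distinctive sentence from your article and test whether it appears on the other page. The URL and phrase below are placeholders, and the `requests` library must be installed.

```python
import re
import requests

def phrase_on_page(url: str, phrase: str) -> bool:
    """Fetch the page and report whether the phrase appears in its HTML,
    ignoring case and differences in whitespace."""
    html = requests.get(url, timeout=10).text
    pattern = r"\s+".join(re.escape(word) for word in phrase.split())
    return re.search(pattern, html, re.IGNORECASE) is not None

# Placeholders -- substitute a real suspect URL and a distinctive
# sentence from your own article.
if phrase_on_page("https://suspect.example/post",
                  "a distinctive sentence from my article"):
    print("Possible duplicate found.")
```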
Did You Find Duplicate Content on Your Website?
Try the tool or tutorial above and let us know if you discovered duplicate content on your website. Or comment if you have general feedback. Thanks for reading!