We’re seeing signs of thin, repetitive content finding its way back into Google’s index over the last couple of days. Is this a sign of a gentler, more forgiving search quality algorithm?
There are a few search queries I run regularly; one of them scans for recent references to our website online, and it typically returns fresh, unique results. What happened over the weekend was unexpected: many scraped-content websites and search results pages were back in Google’s index, indicating a potential adjustment to their search quality algorithm.
Here is one example: “I would like to build website with the same logic and concept of dejanmarketing.com“
The first query relates to a Freelancer.com job which used to be available from freelancer.com and related TLDs. Now we’re seeing a whole lot more sites (scrapers of freelancer.com) popping up in fresh results. What is incredible is that 80 results are indexed from just one domain, workingbase.com (see query).
The second query is from an article discussing authorship (likely original):
Dan Petrovic, the managing director of DEJAN, is Australia’s best-known name in the field of search engine optimisation. Dan is a web author, innovator and a highly regarded search industry event speaker.
ORCID iD: https://orcid.org/0000-0002-6886-3211
5 thoughts on “Next Panda Update: More Forgiving?”
The website is just one piece of the puzzle. Good luck catching up on nearly 70,000 backlinks….
What is more surprising is that a site like that could survive the recent updates.
Google has ~1.5 million indexed URLs for the domain. Open Site Explorer shows a relatively small number of links, and they aren’t from high-quality sources. The site is full of thin content (each actual question/listing), oodles of duplicate content (related listings) and, as a result, a lot of boilerplate content.
I haven’t bothered to check, but I wonder if they are doing something clever with the related listings to try to avoid getting hit by the Google Panda algorithm. For instance, if they always show the same related listings on each URL, that’d be a problem. They could improve on that by selecting ‘related’ items by category, category+tags, or category+tags+full-text search. I’d then be tempted to sort the ‘related’ stories by a random number generator, to make sure that each page lists a different set of related items.
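The technique described above can be sketched roughly as follows. This is a minimal illustration, not workingbase.com’s actual code; the listing records, field names and the `related_listings` helper are all hypothetical.

```python
import random

# Hypothetical listing records: each has a category and a set of tags.
LISTINGS = [
    {"id": 1, "category": "web", "tags": {"php", "seo"}},
    {"id": 2, "category": "web", "tags": {"seo", "wordpress"}},
    {"id": 3, "category": "web", "tags": {"design"}},
    {"id": 4, "category": "mobile", "tags": {"android"}},
    {"id": 5, "category": "web", "tags": {"seo"}},
]

def related_listings(current, listings, limit=3):
    """Pick related listings by category + tag overlap, then shuffle."""
    # Restrict candidates to the same category, excluding the page itself.
    candidates = [
        item for item in listings
        if item["id"] != current["id"]
        and item["category"] == current["category"]
    ]
    # Rank by shared tags so 'related' varies with each listing's own tags.
    candidates.sort(
        key=lambda item: len(item["tags"] & current["tags"]), reverse=True
    )
    # Keep a pool of the best matches, then randomise the order so
    # repeated renders of each page don't all show identical blocks.
    pool = candidates[: limit * 2]
    random.shuffle(pool)
    return pool[:limit]
```

The key idea is that both steps vary the sidebar content per page: tag overlap ties the ‘related’ block to each listing’s own content, while the shuffle stops every page in a category from carrying an identical boilerplate block.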
I’m sure the algorithms that Google uses are very smart, but maybe some relatively simple techniques like that slip through. After all, they aren’t duplicating an entire URL like you’d typically see with issues such as www/non-www, HTTP/HTTPS, mixed URL casing, trailing slashes and so forth.
Instinctively, I’d suggest that it isn’t a sustainable tactic for them. However, with the rise and rise of StackOverflow, maybe it’ll last longer than we think. Granted, StackOverflow produces a lot of high-quality, well-curated content from experts, but again, maybe Google’s algorithms can’t detect the difference yet.
I say we wait and see. It could be a temporary ‘loosening’ of a certain set of quality criteria just to see what happens; Google has done that in the past. By the way, I should mention that I have nothing against that site in particular and am using it just as an example.
I have a different opinion about the Google algorithm change.
My experience is that Google has increased the weight given to a website’s traffic, which makes it harder for SEOs to improve a site’s ranking through other means.
One test I did on one of our company websites:
1) Stop all backlink building for one week.
2) Attract only real/natural traffic from different social media accounts.
3) After one week, the ranking moved from #14 to #7, landing on the first page of Google.
Freelancer would be a good example of that sort of high-traffic website.
Thank you for making this site very interesting! Keep going! You’re doing very well!