Matt Cutts confirms: we return(ed) random Pagerank data
June 4th, 2008
I’m here at SMX Advanced in Seattle, and there’s definitely coverage of the show on many blogs out there, but I guess this juicy little nugget of information might be missing from most of them.
Last night in the “You &amp; A” session, Matt Cutts responded to Jay Young, who talked about building an “archive of pagerank data”, and giggled when he confirmed that they already take care of people querying the PR data too often or too heavily.
In fact, he confirmed that years ago they already started to return stupid random data based on some signature they detected in the guy’s queries. He added that TODAY they are, of course, a lot smarter about making sure people don’t overdo the pagerank queries.
For me this is a clear confirmation that they have implemented some (obviously needed) measures to:
- detect bot-type patterns in the PR queries – e.g. if more than 50 pages from a domain are requested within 2 minutes, it must be a bot, mustn’t it?
- implement a kind of “quota” for PR queries (I had clients using Aaron’s SEO toolbar who would only see white or gray bars anymore)
- return random or white/grey PR data based on the above findings
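To make the mechanism above concrete, here is a minimal sketch of what such a gate could look like on the server side. This is purely my illustration, not Google’s actual code: the threshold (50 queries), the window (2 minutes), and the function `pagerank_response` are all hypothetical, taken from the numbers in the list above.

```python
import random
import time
from collections import defaultdict, deque

# Hypothetical constants, taken from the bullet list above.
WINDOW_SECONDS = 120   # "within 2 minutes"
MAX_QUERIES = 50       # "more than 50 pages"

# client_id -> timestamps of that client's recent PR queries
_history = defaultdict(deque)

def pagerank_response(client_id, real_pr, now=None):
    """Return the real PR value, or random junk once the client
    exceeds the sliding-window query quota (i.e. looks like a bot)."""
    now = time.time() if now is None else now
    q = _history[client_id]
    q.append(now)
    # drop timestamps that have fallen outside the sliding window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_QUERIES:
        # quota exceeded: hand back random data instead of the real PR
        return random.randint(0, 10)
    return real_pr
```

A casual toolbar user stays under the quota and gets the real value; a script hammering out dozens of queries per minute silently starts receiving random numbers – which is exactly why PR data harvested this way can’t be trusted.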
This again is a clear argument (one Adam Audette also made in his great presentation) that PR cannot and SHALL NOT be used as a basis for decisions like:
- PR sculpting / siloing (how do you know what you’re doing if you’re getting garbage data as input?)
- link building – or removing built or paid links based on PR changes
Interesting sessions here, btw… but even more important and interesting are the private talks.