The Theory Behind PageRank 0


Since the early 2000s, many websites have used manipulative search engine optimization (SEO) tactics to achieve high rankings in Google's organic search engine results pages (SERPs).

 

These tactics degraded the quality of Google's search results over the long run. To combat them, Google built penalties into its ranking algorithms. The most notable penalty? PageRank 0 (PR0).

 

PageRank 0 is often described as a penalty, and one that can hit almost any website at some point. The status doesn't exclude a site from the Google index entirely; instead, the site is pushed to the very end of the search results. And although PR0 is commonly talked about as a penalty, it isn't a penalty in the strictest definition of the word.

 

As an example, a site with no inbound links may carry a PageRank of zero without having been penalized at all. But when such a page continues to place poorly in the SERPs, or keeps harboring 'insufficient inbound links,' it may then receive the actual PR0 penalty. Webmasters can check their PageRank through the Google Toolbar, whose barometer shows a white bar when a page has no PageRank.

 

Due in part to the 'volatile' nature of Google's PR0, no webmaster can pinpoint the exact cause behind receiving the penalty, and Google rarely publishes details about its infamous algorithms. Despite that, webmasters can still construct 'theoretical' approaches to how PageRank 0 may actually work and what its long-lasting effects on search engine optimization might be.

 

The history behind PageRank 0

 

Spam is as intrinsic to search engines as websites themselves. It has affected search engines in some way ever since the Web 1.0 days, when websites were far simpler than they are now.

 

When search engines detect spam, they usually penalize, ban, or outright remove the offending websites, pages, and domains from their index. On occasion, search engines even ban entire IP addresses from being indexed in their databases.

 

Outright removal of websites, however, costs search engines more than most webmasters think. This is the main reason search engines like Google automate their spam filtering, saving the time and expense of manually removing spam from their index.

 

Filtering, however, has consequences for legitimate webmasters. Automatic spam filters can catch websites that actually follow Google's Webmaster Guidelines while leaving suspicious sites unaffected. To reduce these false positives, Google eventually improved its spam filtering by introducing a concept known as link analysis into its search engine algorithms.

 

The prospect of Google using link analysis within its search algorithms was first raised on an Internet forum known as WebmasterWorld. In its Google News sub-forum, a Google employee posting under the moniker 'GoogleGuy' confirmed Google's use of link analysis.

 

That user was later identified as Google's Matt Cutts, who now leads the search engine's WebSpam team. At the time, Cutts advised webmasters to avoid 'linking to bad neighborhoods,' entire networks of suspicious and often malicious spam pages. With link analysis, Google can now detect spam efficiently by examining the link structure of those pages.

 

Webmasters, however, don't know how this process works. Although Google keeps the inner workings of its algorithms secret, webmasters have been constructing theories to find answers themselves.



 

 

A theoretical approach to PageRank 0: comparing PageRank and BadRank

 

Given the abundance of suspicious links on the Internet, it helps to recall what PageRank actually does: it analyzes the link structure of the web and assigns each page a rank based on the pages that link to it and the rank those pages carry themselves.
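To make the idea concrete, here is a minimal sketch of the PageRank recurrence in the form Brin and Page originally published it, run against a tiny invented link graph. The damping factor of 0.85 and the toy page names are conventional, illustrative choices only; nothing here reflects how Google's production system is actually configured.

```python
# Minimal sketch of the PageRank recurrence as originally published:
#   PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over every page T linking to A,
# where C(T) is the number of outbound links on page T.
# The link graph below is invented purely for illustration.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    pr = {page: 1.0 for page in pages}                      # start every page at 1.0
    out_count = {page: len(links[page]) or 1 for page in pages}

    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            # Rank passed on by every page that links to `page`.
            incoming = sum(pr[t] / out_count[t]
                           for t in pages if page in links[t])
            new_pr[page] = (1 - d) + d * incoming
        pr = new_pr
    return pr

toy_graph = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
}
print(pagerank(toy_graph))
```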

 

To build a theoretical picture of how PageRank 0 might work, let's look at a concept known as BadRank. This link analysis concept measures the 'negative characteristics of a web page' rather than its relative importance, which is what PageRank measures.

 

BadRank is a link analysis concept built around a page's connection to 'bad networks of questionable links.' Put simply, when a page links to another page with a high BadRank, the first page picks up a high BadRank through that link. Because the concept flows along outbound links, whereas PageRank flows along inbound links, it's considered a reversal of PageRank's concept.
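A hedged sketch of how that 'reversed' recurrence is usually written in these community discussions follows; it mirrors the PageRank formula with the link direction flipped and a 'spam seed' value in place of the uniform starting rank. It is speculation, not a published Google formula.

```python
# Speculative BadRank recurrence, as the webmaster community has modelled it --
# PageRank with the link direction reversed. None of this is confirmed by Google.
#   BR(A) = E(A) * (1 - d) + d * sum(BR(T) / Cin(T)) over every page T that A links to,
# where E(A) seeds known spam pages with a starting value and Cin(T) is the
# number of inbound links pointing at T.

def badrank(links, spam_seeds, d=0.85, iterations=50):
    """links maps each page to the pages it links to; spam_seeds are known spam pages."""
    pages = list(links)
    br = {page: (1.0 if page in spam_seeds else 0.0) for page in pages}
    in_count = {page: sum(page in links[t] for t in pages) or 1 for page in pages}

    for _ in range(iterations):
        new_br = {}
        for page in pages:
            # A page picks up BadRank from the pages it links OUT to.
            outgoing = sum(br[t] / in_count[t] for t in links[page])
            seed = 1.0 if page in spam_seeds else 0.0
            new_br[page] = seed * (1 - d) + d * outgoing
        br = new_br
    return br

toy_graph = {
    "home":           ["about", "blog"],
    "about":          ["home"],
    "blog":           ["spam-directory"],
    "spam-directory": ["blog"],
}
print(badrank(toy_graph, spam_seeds={"spam-directory"}))
```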

 

Applied alongside PageRank's algorithmic rules, BadRank would assign values to a web page whenever it matches parameters that reveal the page's true nature. Spam filters in search engine algorithms work the same way, attaching special values to the signals they detect (such as a numeric score representing spam). Using these values, the filters can then act on suspiciously high scores that indicate the presence of spam.

 

In a theoretical BadRank system, this filtering would apply most strongly to pages with a higher chance of linking to suspicious websites, allowing the BadRank algorithms to detect the areas of the web where spam is more likely to congregate.

 

The speculative theory: contrasting PageRank and BadRank

 

Despite their similarities, the theoretical BadRank and PageRank differ in significant ways. The differences stem from their reliance on outbound versus inbound links, since each type of link can have a very different structure depending on how it is actually used.

 

Their opposing designs, however, may be the driving force behind how some web pages receive the PageRank 0 penalty. If Google ran a system similar to BadRank immediately before PageRank calculated a site's values, a page with a high BadRank could end up 'passing' through Google's PageRank algorithms with little to no PageRank left, based on the pages it links to. This is what may generate a PageRank of 0.
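One way to picture that hypothetical two-pass system is a combination step in which a page's BadRank scales down whatever PageRank it would otherwise have earned. The function below is purely illustrative; the names, numbers, and the scaling rule itself are assumptions, not anything Google has described.

```python
# Hypothetical combination step (purely illustrative -- Google has never described
# such a pipeline): the higher a page's speculative BadRank, the less of its
# computed PageRank survives into the final score.

def effective_pagerank(pr: float, br: float, max_br: float) -> float:
    """Scale a page's PageRank down by its share of the worst observed BadRank."""
    if max_br <= 0:
        return pr
    penalty = min(br / max_br, 1.0)   # 0.0 = clean page, 1.0 = worst offender
    return pr * (1.0 - penalty)       # the worst offenders end up at or near zero

# A page with a strong BadRank keeps almost none of its PageRank:
print(effective_pagerank(pr=6.2, br=4.8, max_br=5.0))   # ~0.25
```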

 

Some webmasters believe this combination is precisely what makes it hazardous. Many variables affect how PageRank is generated from a website's content and link structure; once BadRank enters the equation, a sufficiently high BadRank can directly depress a website's PageRank and drive its ranking all the way down to PR0.

 

Of course, the simplest way to produce a so-called PR0 value is to have certain BadRank values automatically qualify a website for PR0, regardless of its PageRank value.
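In code, that simplest rule is nothing more than a threshold check. The cutoff below is invented for illustration; no real criteria are publicly known.

```python
# Hypothetical hard-threshold rule: any page whose speculative BadRank crosses a
# cutoff is shown as PR0, no matter how much PageRank it accumulated.

BADRANK_CUTOFF = 3.0   # invented value for illustration only

def toolbar_pagerank(pr: float, br: float) -> float:
    return 0.0 if br >= BADRANK_CUTOFF else pr

print(toolbar_pagerank(pr=6.2, br=4.8))   # -> 0.0 (qualifies for PR0)
print(toolbar_pagerank(pr=6.2, br=0.1))   # -> 6.2 (PageRank untouched)
```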

 

However each ranking barometer is calculated, the combination may not benefit websites in the ways webmasters expect. In theory, the complexity of such a system could produce more PR0 penalties, not fewer, because the algorithms would end up flagging more links as suspicious.

 

How Google actually weighs inbound and outbound links remains pure speculation. There's no real way for webmasters to know how Google's PageRank algorithms truly work.

 

Regularly checking a site with a popular PageRank checker, and studying PageRank 0 alongside other link analysis concepts, merely helps webmasters understand some of the ways Google might handle web spam.