Wikipedia Backlink Finder


What is Wikipedia Backlink Finder?

Wikipedia Backlink Finder is a software tool that helps you find powerful Wikipedia backlinks to boost your website up the Google rankings. Backlinks are still the number one of Google's 200+ ranking factors. Now I know what you're thinking... "but Wikipedia links are nofollow, and only dofollow links boost your rankings!?". While this may be true in general, there appear to be exceptions to the rule. Let's take a look at what Google says themselves.

Google Webmaster Guidelines on nofollow

"In general, we don't follow them. This means that Google does not transfer PageRank or anchor text across these links."

There you have it. In general they don't follow them, which means sometimes they do. That makes sense: if you have one of the world's most authoritative websites, where 99.9% of outbound links are nofollow, some weight should still be given to those links, either as PageRank or as general site authority. Many case studies align with this school of thought. Another theory is that Google, with machine learning, has evolved past the simple dofollow/nofollow rules and knows that aged links on wiki pages point to credible, high-quality websites.

How does this free tool work?

Wikipedia Backlink Finder has two main modes of operation.

Find competitors' wiki backlinks

In this mode you can simply enter your competitors' keywords, and the tool will scrape domains from Google for those keywords, or you can enter your competitors' domains manually. Once it has your competitors' domains, it will search Wikipedia to find all the pages linking to their websites. With that list of wiki pages in hand, you can edit those pages (see below for tips on this) and add your own website's domain to each wiki page. This mode can also bring back the Moz PA for each page, let you preview each link inside the tool, and export the results.
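Under the hood, finding the wiki pages that link out to a competitor's domain doesn't require scraping Wikipedia itself: the public MediaWiki API exposes this directly via `list=exturlusage`. Here's a minimal Python sketch of the idea; it is an illustration, not necessarily how the tool is implemented:

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def build_exturlusage_params(domain, limit=50):
    # list=exturlusage asks MediaWiki for pages whose external
    # links contain the given string (e.g. a competitor's domain)
    return {
        "action": "query",
        "list": "exturlusage",
        "euquery": domain,
        "eulimit": limit,
        "format": "json",
    }

def wiki_backlinks(domain, limit=50):
    """Return titles of Wikipedia pages that link out to `domain`."""
    url = API + "?" + urllib.parse.urlencode(build_exturlusage_params(domain, limit))
    req = urllib.request.Request(url, headers={"User-Agent": "backlink-demo/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [hit["title"] for hit in data["query"]["exturlusage"]]

# e.g. wiki_backlinks("moz.com") returns the titles of wiki pages
# linking out to moz.com
```

Larger sites will have more matches than a single response returns, so a full crawl would also need the API's standard continuation parameters.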

Search for citation requests, known dead links, and unknown dead links (not yet marked as dead)

In this mode you enter your website's keywords, and Wikipedia pages related to those keywords and/or containing dead links (depending on the options you select) are scraped from Google. These pages are then analyzed, and the dead link's target is extracted and displayed in the tool. The tool can also check for unknown dead links: broken links on wiki pages related to your keyword(s) that haven't been marked as dead yet. These could be websites where the domain has expired, or where the article being linked to has simply moved, resulting in a 404 error. This mode can also bring back the Moz PA for each page, let you preview each link inside the tool, and export the results.
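The "unknown dead link" check boils down to requesting each outbound URL and treating connection failures and 4xx/5xx responses as dead. A rough Python sketch of that check (not the tool's actual code):

```python
import urllib.error
import urllib.request

def link_status(url, timeout=8):
    """HEAD-request a URL and return its HTTP status code, or None
    if the request fails entirely (expired domain, DNS failure,
    connection refused, timeout)."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "deadlink-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code          # e.g. 404 when an article has moved
    except (urllib.error.URLError, TimeoutError):
        return None              # domain gone or unreachable

def is_dead(status):
    # treat any connection failure or 4xx/5xx response as a dead link
    return status is None or status >= 400
```

Some servers reject HEAD requests outright, so a production checker would fall back to a GET on unexpected 405 responses; this sketch keeps things minimal.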

Why should I use this free wikipedia backlink finding tool?

Even if you don't believe that wiki links carry any SEO weight, they generate quality referral traffic and help maintain a well-balanced backlink profile, which will help your website regardless of the type of links you've been building. Also... where else on the entire internet would you get a tool like this for free!?

What does this tool look like?

Wikipedia Backlink Finder

Watch Wikipedia Backlink Finder in action / Tutorial



How to get your MOZ API keys (Optional)




Change log

  • 11/11/2017 - Initial release
  • 15/11/2017 - Fixed moz bug causing error and exporting error for citation and dead link results

Video transcript

This is the transcript from the video above, lightly tidied up from the auto-generated YouTube subtitles.

Hi guys, it's Jamie from SuperGrowth.com, and in this video I'm going to show you how to use my Wikipedia backlink finder tool. What this tool does is allow you to get backlinks from Wikipedia and see the backlinks your competition has. There are two modes of operation: you can enter your competitors' keywords or domains, and it will show you what pages on Wikipedia they're getting links from; the other mode lets you find backlink opportunities by searching for pages related to your keywords that have citation requests, known dead links, or unknown dead links.

Let's start off with a quick demo of the first mode. There are two ways you can use it: you can either enter your keywords here, "SEO" for example, or you can enter domains. For this demo we're going to enter some keywords. Now, this option here, "homepage must contain the keywords below": what that does is let you make the search more targeted. When you have a keyword in here, the tool goes off to Google in the background and scrapes the domains ranking for that keyword. So if you want to make sure the domains that come back really are about SEO, you can have it check each homepage to confirm it contains the SEO keyword. We won't do that for now, so we'll leave it as it is.

I've also entered my MOZ API keys in the settings here, so it will bring back the PA (Page Authority) for each result, and we're going to use this mode by clicking "scrape competitors domains" instead of manually entering domains. Okay, let's begin. You can see we're now scraping domains from Google for the keyword "SEO", MOZ is set, and we're getting the Moz metrics for each page. If we give this a second, we should start to see some results. There we go: we can see all the wiki pages that are linking to these domains. We can open one up and see it's linking to that Moz article there; we can sort by PA, and you can hover over a result to preview the link as well.

We should start to see some other domains popping up in a minute... there we go, Hobo Web. This gives you a rough idea of where your competitors are getting their wiki links from. Okay, we're going to stop that, and I'll show you the other mode in action, using the same keyword. What this mode does in the background relies on Google's advanced search operators, like site: and so on. Depending on the mode you have selected, the tool goes to Google with a query built from site:wikipedia.org, your keyword ("SEO" in this instance), and "citation needed"; that brings back wiki pages carrying citation requests, and it lists them all there. It then does the same for known dead links, with site:wikipedia.org, your keyword, and "dead link", and brings all the results back here, showing you the wiki page and the URL of the dead link.
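The queries described here are just templates around Google's site: operator. Here's a sketch of how they might be assembled; the exact phrasing the tool sends to Google isn't shown in the video, so these strings are assumptions based on the description:

```python
def build_wiki_query(keyword, mode):
    """Assemble a Google advanced-search query for one of the three
    opportunity modes. The exact operator strings the tool uses
    aren't published; these follow the pattern in the video."""
    templates = {
        "citations": 'site:wikipedia.org "{kw}" "citation needed"',
        "known_dead": 'site:wikipedia.org "{kw}" "dead link"',
        # unknown dead links: every outbound link on each result gets checked
        "unknown_dead": 'site:wikipedia.org "{kw}"',
    }
    return templates[mode].format(kw=keyword)

print(build_wiki_query("seo", "known_dead"))
# site:wikipedia.org "seo" "dead link"
```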

So what you want to do now is go along, edit that article, add your link in, and submit it to the moderators. If it's a good fit, you get a good, strong link from Wikipedia. The third option will go away and query site:wikipedia.org with your keyword, bring back the results, and then check every outbound link on each page to see if it's dead. That's a really good way of finding links that are dead but that no one has reported as dead yet, so it can really boost the opportunities you have for finding a dead link that you can hopefully replace with your own.

So what we're going to do now is just select them all. Okay, and I'm going to skip the MOZ PA because you've already seen that. There is a bit more sleeping involved between the Google requests in this mode, because with advanced search operators Google can be quite picky about how long you have to pause between scraping requests. You can use proxies to speed that up; just make sure you use high-quality, dedicated proxies, or you'll get poor results. So we'll let that sleep for a minute and then come back to it.
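The sleeping and proxy rotation described here can be sketched like this; the pause values are illustrative guesses, not the tool's actual defaults:

```python
import itertools
import random
import time

def scrape_schedule(queries, proxies=None, min_pause=20.0, max_pause=45.0):
    """Yield (query, proxy) pairs, rotating through proxies round-robin
    and sleeping a random interval between consecutive requests. With N
    dedicated proxies, each individual IP is hit roughly N times less often."""
    pool = itertools.cycle(proxies or [None])   # None = direct connection
    for i, query in enumerate(queries):
        if i:  # no need to pause before the very first request
            time.sleep(random.uniform(min_pause, max_pause))
        yield query, next(pool)
```

A caller would fetch each `(query, proxy)` pair as it is yielded, so the sleep naturally throttles the whole scrape.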

Okay, so it had a little sleep so we didn't get banned, and now you can see the results that have come back from Google. It's working its way through, checking all the links on each page, and you can see we've found some results. I should have put in "search engine optimization" rather than "SEO"; I think that one's something about South Korea, but we'll stay with it for this demonstration. Anyway, you can see links coming back now from SEO pages that have unknown dead links. If we were to open up these links, they'd fail, and that's your opportunity: go to those pages and edit them with working links, links to good, high-quality related articles that page moderators will approve. You can hover over a result to see more about it.

You can see we're getting some more. You'll find you get tons and tons of unknown dead links, because Wikipedia is just absolutely massive, and although they've got a lot of moderators, relative to how many pages they've got they don't really have that many. Every page has a lot of outbound links, so it's almost impossible to make sure every page has working links. Hopefully that's given you a better idea of what the software can do. Just to recap: you've got this first mode, which finds your competitors' current links and the wiki pages they're getting them from, and you've got this second mode for finding dead wiki links and citation-request link opportunities. Hopefully that makes sense.

What I'm going to do now is stop it and go through some of the settings quickly. Okay, I've stopped it. "Number of Google results to scrape when searching for competitors": that's for the first mode, and it's how many domains you want to bring back from Google to check for links on Wikipedia. I set it to ten here to keep the demo quick; pick a hundred and it'll take a little longer, but not too much longer in that mode, so personally I'd have it at 100 most of the time. "Wiki pages to scrape from Google for each keyword" is for the second mode, the citation and dead-link opportunity mode. A hundred will take longer, maybe two or three times longer, but you'll get way, way more results. In the background, for each of these options you have ticked, the tool asks Google for that many results: ten results per ticked option if it's set to ten, a hundred per option if you crank it up to a hundred. Then, for each page that comes back, it checks for citation requests, known dead links, and unknown dead links. So the higher those settings, the slower it is, but the more results you get. If you've got tons and tons of keywords going in, you'll want to crank up the pauses between searches unless you're using lots of good proxies. The program will tell you if your IP has been banned or blocked.

If it has, then either wait 24 hours, reset your router (you'll probably get a new IP), or use some good, high-quality dedicated proxies. Okay, that's it; hopefully that gives you a good indication of what this program will do. It's probably worth having a real quick talk about nofollow links, actually, since recently there's been a lot of discussion about dofollow and nofollow. The conventional school of thought is that nofollow doesn't pass any PageRank, any juice for ranking, and I think the majority of the time that is true. But if you look at Google's support page about nofollow on outbound links, what they actually say is "in general, we don't follow them" and pass PageRank. What that actually means is that sometimes they do, and when you think about it, it makes sense: Wikipedia is an extremely trusted site that is heavily moderated, and 99.9 percent of its outbound links are nofollow. Why would Google want to ignore that? It's such valuable ranking information. Many people also think Google has simply evolved past the black-and-white nofollow/dofollow distinction with machine learning: it can probably work out that aged Wikipedia links nearly always point to high-quality sites, and it will have other metrics about those websites that confirm it.

And if they do have complex machine learning models, that's what the models would learn about Wikipedia: it marks all its links nofollow, but those links are still very valuable for working out how trusted a website is. Anyway, enough of my rambling. Hopefully you've got an idea of what the tool is able to do.