OK, so everyone knows that the more websites linking to yours, the higher you appear in Google's search results. Yes, there are other factors like link quality and content to consider, but generally speaking this holds true. Links can be marked "DoFollow" or "NoFollow" by the website owner when they create them. A DoFollow link will generally help boost you up the Google search results, while a NoFollow link will have little if any effect, so obviously you want DoFollow links. A few years ago, any link you added in a blog comment was DoFollow, so people wrote software to mass-automate commenting. The resulting flood of spammy comments pushed website owners and CMS creators to make most comment links NoFollow by default.
But fear not: DoFollow comment links still exist. They are rare nowadays, though, and most comments are moderated first. So to successfully use DoFollow blog comments to help rank your website on Google, you need software that automatically searches the internet for sites with DoFollow comment sections, after which you manually leave a comment and link of genuine value that the website owner will approve. DoComment listens to every tweet related to your niche, follows any links it finds in those tweets, then checks whether there are any DoFollow blog comment links available on each site.
You should use DoComment because it's a great way of finding niche-related, spam-free, safe backlinks to boost your website up the Google search results.
DoComment uses the Twitter streaming API to listen for the niche-related keywords you set. It then follows the links in those tweets and analyzes the HTML to see whether it has found any DoFollow links in the comments section.
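The core DoFollow check is simple: a link only passes PageRank if its `rel` attribute does not contain the `nofollow` token. DoComment's actual implementation isn't public, but a minimal sketch of that classification using only the Python standard library might look like this:

```python
# Sketch only: classifies <a> tags in an HTML snippet as DoFollow or NoFollow.
# The class and variable names here are illustrative, not DoComment's internals.
from html.parser import HTMLParser

class LinkClassifier(HTMLParser):
    def __init__(self):
        super().__init__()
        self.dofollow, self.nofollow = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        # A link is NoFollow if its rel attribute contains that token.
        rel = (attrs.get("rel") or "").lower().split()
        (self.nofollow if "nofollow" in rel else self.dofollow).append(href)

parser = LinkClassifier()
parser.feed('<a href="https://a.com" rel="nofollow">A</a> <a href="https://b.com">B</a>')
print(parser.dofollow)   # ['https://b.com']  -- the link worth commenting for
print(parser.nofollow)   # ['https://a.com']  -- ignored by search engines
```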
This is the transcript from the video above. It needs tidying up a little, as it's taken from the YouTube subtitles.
Hi guys, it's Jamie from SuperGrowth.com here, and in this video I'm going to show you how to use my DoComment tool. We all know that the more links pointing to your website, the higher up you appear in Google. What DoComment does is search the internet for places where you can manually create these links by commenting on other people's websites and leaving a link to yours in the comments. It finds these niche-related websites by listening to the Twitter API stream for anything related to your niche. The end result is that you should end up with an almost unlimited supply of niche-related, DoFollow, spam-free backlinks.
OK, so let me walk you through how to use it. The first thing you need is a set of Twitter API keys. These are completely free and really simple to get. I show how to get them in another video, and there should be a link below this one to it; I made that a separate video because several of my programs all use Twitter API keys. Once you've got your keys, simply paste them in; when the fields aren't populated, the program shows you where each type of key goes. Then save. Next, configure proxies. This program doesn't actually need proxies to run, and it will actually run better without them. I only left the option in because I reused the code base from another program that did need proxies, and there are some rare circumstances where you may want them here.
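For reference, the Twitter v1.1 API uses four credentials: a consumer key and secret for the app, plus an access token and secret for the account. The field names below are my own illustration, not DoComment's actual settings format, but a quick completeness check before starting the stream might look like this:

```python
# Hypothetical key container -- these names are assumptions for illustration,
# not DoComment's real configuration fields.
twitter_keys = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

def keys_complete(keys):
    """All four v1.1 credentials must be present and not left as placeholders."""
    required = {"consumer_key", "consumer_secret",
                "access_token", "access_token_secret"}
    return required <= keys.keys() and all(
        keys[k] and keys[k] != "..." for k in required
    )

print(keys_complete(twitter_keys))  # False until real keys are pasted in
```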
Next you pick your keywords. If we're in the PPC niche, we just add "PPC" and we're ready to go. Before we start, there are a few options and features I want to show you. The first is "when to flush the queue". As we pick up tweets containing URLs to crawl for DoFollow links, we keep those URLs in a queue. If we're crawling them at a slower rate than the Twitter API is feeding them in, the queue gets bigger and bigger, and if left unchecked it will eventually gobble up all the memory on your machine. We obviously don't want that, so once the queue holds 100 URLs, we flush it.
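The flush behaviour described above amounts to a bounded producer/consumer queue that drops its backlog rather than growing without limit. A minimal sketch, assuming the flush simply discards the queued URLs:

```python
# Sketch of the "when to flush the queue" option: names are illustrative.
from collections import deque

FLUSH_AT = 100  # the threshold described in the video

pending = deque()  # URLs pulled out of niche tweets, waiting to be crawled

def enqueue(url):
    pending.append(url)
    # If tweets arrive faster than we can crawl, the backlog grows without
    # bound; flushing at FLUSH_AT caps memory at the cost of skipping URLs.
    if len(pending) >= FLUSH_AT:
        pending.clear()

for i in range(250):
    enqueue(f"https://example.com/{i}")
print(len(pending))  # 50 -- flushed at 100 and 200, so memory stays bounded
```

The trade-off is deliberate: missed URLs are cheap (the same links keep reappearing in fresh tweets), whereas unbounded memory growth eventually kills the process.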
You can play around with that and test it if you want, but I generally just leave it. Then we've got concurrent connections, and this depends on your internet speed. If you've got a lightning-quick connection, you may want to crawl more than 50 websites at once. If you've got a really slow, village-broadband-style connection, you might want to bump that down to 10 or so. In future versions I'll probably make this setting intelligent enough to manage itself, but for now just leave it as it is. Every time we crawl a URL, we make a hash of it and store it so that we don't keep crawling the same URLs. For example, if someone tweets the same niche-related tweet every 20 minutes, we'll keep picking it up, but we won't crawl it every 20 minutes, because every URL is checked against that cache first.
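The hash-and-store step above is a classic dedup cache: keep a small fixed-size digest of each crawled URL instead of the URL itself. A minimal sketch, assuming MD5 digests (the actual hash DoComment uses isn't stated):

```python
# Sketch of the crawled-URL cache; function and set names are illustrative.
import hashlib

seen = set()  # digests of already-crawled URLs (the "sites in cache" counter)

def should_crawl(url):
    """Return True the first time a URL is seen, False on repeats."""
    # Storing a 16-byte digest instead of the full URL keeps the cache small
    # even after tens of thousands of entries.
    digest = hashlib.md5(url.encode("utf-8")).digest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

print(should_crawl("https://example.com/post"))  # True  (first sighting)
print(should_crawl("https://example.com/post"))  # False (retweeted 20 min later)
```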
So even if you see the sites-in-cache count go up to 50,000 or so, you don't need to flush it. OK, we've been running for a couple of minutes and we've found our first, hopefully DoFollow, comment link. Obviously you then check that site out, inspect the HTML to confirm the link really is DoFollow, and write a comment that's going to stick, and you should hopefully end up with a DoFollow link.
Now it's worth briefly going over the logic of how it finds these things. It parses the HTML on the page, finds where the comment section begins, then looks inside that div or span for a DoFollow link that meets two conditions: it links to a domain that isn't a popular comment system, and it isn't the domain the comment box is on itself. The other check a link has to pass is whether the anchor text on the comment is actually a person's name. The reason is that if a person's name is attached to a comment on that blog, the chances are the blog is moderated, and a link is far more likely to be a genuine comment link if its anchor text is someone's first name. Hopefully that gets you a nice load of DoFollow links. It's also worth going over a couple of issues you may run into.
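Those checks (DoFollow, off-site, not a comment-platform domain, first-name anchor text) can be sketched as a single heuristic function. The domain and name lists below are small stand-ins for whatever data the real tool ships with:

```python
# Illustrative sketch of the comment-backlink heuristic described above.
# The COMMENT_SYSTEMS and FIRST_NAMES sets are placeholder examples.
from urllib.parse import urlparse

COMMENT_SYSTEMS = {"disqus.com", "gravatar.com", "wordpress.com"}
FIRST_NAMES = {"james", "sarah", "david", "emma"}  # stand-in for a real name list

def looks_like_comment_backlink(href, anchor_text, page_domain, rel=""):
    """A DoFollow link inside the comment section, pointing off-site,
    whose anchor text is a commenter's first name."""
    if "nofollow" in rel.lower().split():
        return False
    target = urlparse(href).netloc.lower().removeprefix("www.")
    if not target or target == page_domain or target in COMMENT_SYSTEMS:
        return False
    # A first-name anchor suggests a moderated, human-approved comment.
    return anchor_text.strip().lower() in FIRST_NAMES

print(looks_like_comment_backlink(
    "https://sarahsblog.net", "Sarah", "example.com"))   # True
print(looks_like_comment_backlink(
    "https://spam.biz", "cheap pills", "example.com"))   # False
```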
So the Twitter stream is a constant connection between this piece of software and the Twitter API server, and if you do anything to make that connection unstable, it will drop. For example, if you set the concurrent website crawling option sky-high (see, I've put mine nice and low because I'm on really bad internet at the moment), say twenty, thirty or fifty, it may start dropping the connection to the Twitter API, because there isn't enough bandwidth left to maintain it while crawling fifty other websites at once. That's one reason it can drop out, and you'll get an error message and an exception saying the stream has dropped.

Also, if you keep stopping and starting it really quickly, you may get that error message, because Twitter only allows, I believe, three connections from a single IP address. When you click stop, the program drops the connection straight away, but I don't think it's instant on Twitter's end, so if you stop and start several connections very quickly, Twitter may think you've already got three open and not give you another. If you think that's happening, just close the program, leave it for a couple of minutes, and come back. OK, that's it. Hopefully this will find you some valuable links. If you like the software, please leave a comment, and if there's anything you want created that you think there's good demand for, please leave a comment or fill out my contact form. I hope you enjoyed the video, guys.
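The advice above (don't hammer reconnects, wait a couple of minutes if you've used up your connection slots) is essentially reconnection with backoff. A minimal sketch, assuming the three-connections-per-IP limit from the video and a generic `connect` callable:

```python
# Illustrative reconnect helper; names and the per-IP limit are assumptions
# based on the video, not a documented Twitter guarantee.
import time

MAX_STREAM_CONNECTIONS = 3  # per-IP limit mentioned in the video

def reconnect_with_backoff(connect, attempts=5, base_delay=2.0):
    """If the stream drops (low bandwidth, rapid stop/start), wait before
    reconnecting so stale connection slots have time to free up."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)
            delay *= 2  # exponential backoff: 2s, 4s, 8s, ...
    raise ConnectionError("stream would not reconnect; close and retry later")
```

Doubling the delay on each failure mirrors the manual advice: the longer the stream refuses to come back, the longer you wait before trying again.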