Types of Bad Link Building: Borrowing, Begging, Bartering, Bribing and Buying

Even though this information has been read by thousands of people, it still hasn't been read [or understood] by enough people.

Hardly a week goes by without someone asking me whether they should pay a company to build links the wrong way.

So, what's the wrong way?

Borrowing
I guarantee that someone will leave a comment on this post with a link to their website in the body of their comment. Here's an example.

[Screenshot: an example spam comment with a link embedded in the body]
This is borrowing links.

Many blogs allow commenters to insert links into the body of their comments. Many don't.

So, these link borrowers also resort to inserting their keywords where their name is supposed to go. I wrote a controversial post about this bad practice previously, "Why Leaving Comments is Not a Link Building Strategy."


Our blog software's spam filter catches many of these comments. Yet not every site has spam filters in place, and many people automate or just cut and paste such comments all over the web. This is borrowing, and it is wrong. Doing it might get you a short-term, unsustainable burst in rankings. But these are not high-quality links, and they won't provide a long-term advantage.


Begging
Then there are the beggars. They send emails to people they don't know, asking for links.

I recently received a really creative e-mail. The person introduced himself as a big fan of my writing and suggested that, "based on reading what I write," I should check out this new social networking site. He wrote about how awesome the site was (without explaining why). He suggested that I should write about it because everyone else who had done so received floods of traffic. He got very offended when I responded that I wasn't interested and that he should be more upfront about his link-begging intentions.

Of course, begging comes in many forms. Not all are as creative as this guy. Most people just send emails to webmasters asking them to link to them. While begging is one way to build links, it is ineffective and the fastest way to annoy a webmaster.


Bartering
I'll link to you, if you link to me. I'll buy your services, if you buy mine. I'll show you mine, if you show me yours.

Aah. How adolescent does that sound? And how much easier do you think you can make it for Google to detect that you didn't earn that link? Google's power is that it detects patterns. This is a pretty easy pattern to detect: "Site B links to Site A. Site A links to Site B."
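Here is a minimal sketch of how easily that pattern falls out of a link graph. The set-of-edges representation and the domains are made up for illustration; this is obviously not Google's actual code:

```python
# Toy sketch (not Google's actual method): a pair of sites is
# reciprocal when the reversed edge also appears in the link graph.
links = {
    ("site-a.com", "site-b.com"),
    ("site-b.com", "site-a.com"),
    ("site-c.com", "site-a.com"),
}

# One set lookup per edge is all it takes to flag reciprocal pairs.
reciprocal_pairs = {tuple(sorted(edge)) for edge in links if edge[::-1] in links}
print(reciprocal_pairs)  # {('site-a.com', 'site-b.com')}
```

Anything a few lines of Python can flag, a search engine operating on the entire web graph can flag at scale.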

Of course, bartering or trading links gets more creative by saying, "I'll link to you, if you link to him and I'll link to this other guy if he links to you, etc." There are even networks you can join to facilitate this process.

Participating in these "link building rings" is even riskier than regular bartering. It's believed that Google groups websites into "neighborhoods." There are "bad neighborhoods" that you don't want to live in, and if you interlink with websites in a bad neighborhood, Google will assume you are bad too.


Bribing
Many companies try to bribe webmasters and bloggers for links. We'll send you stuff if you link to us. Although harder for Google to detect, this isn't a great practice. In fact, the FTC frowns upon this. I'm not sure whose wrath I'd fear more--the FTC's or Google's. Either way, it is best to not disobey the government.


Buying
Several sites and services help you buy text links on other sites that pass SEO credit. There is also a black market in link buying that is much less formal, more secretive and not necessarily that organized. I wouldn't recommend either one.

If you're not too averse to risk taking, this is probably the most effective link building strategy of the Bad B's, because you can control what sites you are getting links from, and analyze these sites to ensure that you are not placing links in bad neighborhoods. You can also place links within relevant content and use anchor text for the keywords you are trying to rank for. But, and it's a big but, it's still risky. Google explicitly states:

However, some SEOs and webmasters engage in the practice of buying and selling links that pass PageRank, disregarding the quality of the links, the sources, and the long-term impact it will have on their sites. Buying or selling links that pass PageRank is in violation of Google's webmaster guidelines and can negatively impact a site's ranking in search results.

Therefore, although it might work, I do not advocate buying links.

In fact, as you might have guessed, we do not advocate borrowing, begging, bartering, bribing or buying links.

If I had to pick a winner in the fight between a surreptitious link builder and Google's algorithms, I'd pick the algorithms. In other words, Google will eventually perfect its detection of who is B'ing links and who is earning them.

I personally know entrepreneurs with websites that were generating tens of thousands of dollars one month and zero the next because of practicing the Bad B's of link building.

Don't do it. The risk isn't worth it. Learn to build links the right way.

Firefox will move to Mobile Cell Phones

Firefox on Mobile Cell Phones
Most tech enthusiasts have wondered why web browsers on mobile phones suck so much. Mozilla Foundation CEO Mitchell Baker has been thinking about it too, and looking at how Firefox can be ported to mobile platforms.

Dan Warne (APC): Something I wanted to ask about: one area that Firefox doesn't seem to have delved into much is putting Firefox on mobile devices. Opera obviously has a pretty good spot in that space, and even Microsoft to an extent. Is there any move in that direction?

Mitchell Baker: Yes it is a long‑term move though -- it is not in the next weeks or months. The Mozilla Foundation's mission in life is to improve Internet experience and that is increasingly on devices other than PCs. If we're not there then we won't be able to live the kind of vision that we helped grow.

So that needs to happen. We had a small project looking at it, but we decided that the right thing to do is to look first at our core technology and really tune it so that it's best suited for that. We are at work on that now; however, it will take a while.

We are also looking at how to reflect the richness of the entire web on a small device with the current constraints and that one we don't know. There is no easy answer for that because the web is growing and the functionalities of the web are growing. We're looking closely at that one.

We have an experiment underway which is clearly a PC-centric experiment related to mobiles (so this is not a pure mobile strategy), but we are experimenting with the relationship between Firefox users and their mobile devices. We know people like Firefox because of the add-ons and the customisation and the ability to get particular information that you want through Firefox and extensions.

So the experiment that's underway is called Joey - we are looking at how you can take information that people like to access and deliver it to a mobile device. Clearly you can go to the web and you can SMS yourself various things ... but what else could be done? If you have Firefox, you already have the ability to customise it and gather certain kinds of information, so what could you do with that so that being a Firefox user makes your mobile experience better?

Dan Warne (APC): So are you talking about a kind of a back end service solution that helps pre-format content for mobiles etc?

Mitchell Baker: It's got a couple of pieces - a server piece and a little client piece, that sort of thing - and we'll launch it in Labs pretty soon. I think there's information up there about it now. It will launch as an experiment, not necessarily as a product plan, but as a way to start gathering information, because right now, in many countries, the user of a mobile device interacts with a carrier and not directly with the software vendor.

So even if we had a great product to do what we do best - which is to touch human beings - there's no current way to get it onto most devices. That's another thing we're trying to sort through; if that doesn't change, then maybe this ability to get different kinds of information would make sense. We're not sure.

Opera is probably better suited than we are as a supplier to carriers, because our DNA is really very consumer- and individual-focused. So all of those things have led us to try this experiment whilst we tweak our technology and see what things look like.

Dan Warne (APC): Firefox has a bit of a reputation - I'm not sure whether it's right or wrong - of having a hefty code-base as a renderer, and I think that largely might have come about when Apple chose the KHTML rendering core used in the Linux Konqueror browser. They said at the time that the reason they chose KHTML over Gecko [Firefox's rendering core] was that it was very lightweight. So is it true that Firefox internally is quite hefty and might be a bit difficult to shoehorn onto a mobile device?

Mitchell Baker: Oh, well, all of them are difficult to shoehorn onto a mobile device, so we should be clear about that. Opera has done a pretty good job of getting something useful onto a mobile device, but it's not full-fledged and doesn't have the capabilities of Firefox. That's hard to get on any mobile device, so that's a separate question.

But yes, I think it's fair to say that that particular rendering piece - Apple calls it WebKit - is currently easier to work with than our analogous technology.

Now, some of that is that when you get our analogous technology, you get a whole bunch of other things that allow the creation of the communities that we have. Extensions, the XUL language and a whole range of other things come with our technology.

So some of that is just more capability, but some of it is that WebKit is a smaller piece and probably more approachable. That's part of what we're looking at when we say that, before we really launch into that space, we know we have these advantages - building the kind of community that we talked about - that I think are probably unmatchable.

But even given that, we should really work harder and smarter to make the core as approachable and easy as possible, so that it's easy to develop with and you still get all the other benefits that come with Firefox and Mozilla technology.

How to Check if Your Images Appear on Google Image Search

Bloggers and webmasters know that every single visitor helps to build up traffic, right? If that is the case, you should make sure that Google is correctly indexing your images, and that people searching for related image terms will have a chance to visit your blog.

Here is a quick check that you can perform to find that out. Just head to Google, and click on the “Images” link in the top left corner. That will take you to the Image Search. Now you just need to type into the search bar “site:yourdomain.com”. This query will filter the results down to only those coming from your site.
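If you'd rather bookmark or script the check, the same query can be expressed as a URL. Here's a minimal Python sketch, assuming Google's current tbm=isch parameter for image results (parameter names may change over time):

```python
# Build the Google Image Search URL for a site: query programmatically.
from urllib.parse import urlencode

def image_index_check_url(domain: str) -> str:
    """Return a Google Image Search URL filtered to a single domain."""
    # tbm=isch selects image results; q carries the site: operator.
    return "https://www.google.com/search?" + urlencode(
        {"tbm": "isch", "q": f"site:{domain}"}
    )

print(image_index_check_url("yourdomain.com"))
# https://www.google.com/search?tbm=isch&q=site%3Ayourdomain.com
```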

If your images are getting indexed correctly by Google, you should see a whole bunch of them in the search results.
If, on the other hand, your images are not getting indexed, you will just see a “Your search did not match any documents” message.

The most common cause for this problem is a flawed robots.txt file (read “Create a robots.txt file” for an introduction to it).

For example, I used to have a “Disallow: /wp-content/” line on my robots.txt file, with the purpose of blocking some internal WordPress files from search bots. It worked, but as a result it also blocked all my images that were located in /wp-content/uploads/. The solution was simple: I just added the following line after that one: “Allow: /wp-content/uploads/”.
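Put together, the relevant part of the file looks like this (a minimal sketch assuming the default WordPress directory layout):

```
User-agent: *
Disallow: /wp-content/
Allow: /wp-content/uploads/
```

Googlebot treats the more specific Allow rule as taking precedence, so images under /wp-content/uploads/ stay crawlable while the rest of /wp-content/ remains blocked.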

So if your images are not getting indexed, check your robots.txt file to make sure it is not blocking access to them.

There are other causes that might keep Google from listing your images in its search results, including a low PageRank, irrelevant tags, a poor permalink structure, bad image attributes and so on. If you are sure that your images are accessible to search bots, therefore, it could be a good idea to work on their tags and attributes. Here are two articles that will guide you through the required steps:

19 Ways to Get More Traffic to Your Site Using Google Images
Using Google Image Search to Drive Traffic to Your Site
Even if your images are already indexed, the tips and tricks described in those articles will help you to maximize the incoming traffic from image searches.

How Google Ranks Blogs

Google Blog Search is a new tool that has been gaining popularity lately. Blog Search might also be a good source of visitors if your blog ranks in the first positions for specific keywords, but what factors does Google take into account when compiling the search results?

The “SEO by the Sea” blog has an interesting article analyzing a new Google patent that hints at the positive and negative factors affecting blog ranking. Check them out below (a speculative scoring sketch follows the lists):

Positive Factors:

Popularity of the blog (RSS subscriptions)
Implied popularity (how many clicks search results get)
Inclusion in blogrolls
Inclusion in “high quality” blogrolls
Tagging of posts (also from users)
References to the blog by sources other than blogs
Pagerank

Negative Factors:

Predictable frequency of posts (short bursts of posts might indicate spam)
Content of the blog does not match content of the feed
Content includes spam keywords
Duplicated content
Posts have all the same size
Link distribution of the blog
Posts primarily link to one page or site
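To make the lists concrete, here is a purely speculative sketch of how signals like these might combine into a score. The patent names factors, not a formula, so every signal name and weight below is invented for illustration:

```python
# Speculative illustration only: the patent lists signals, not weights.
POSITIVE_WEIGHTS = {
    "rss_subscribers": 1.0,          # popularity of the blog
    "result_clicks": 0.8,            # implied popularity
    "blogroll_mentions": 0.6,
    "quality_blogroll_mentions": 1.2,
    "user_tags": 0.4,
    "non_blog_references": 1.0,
    "pagerank": 1.5,
}
NEGATIVE_WEIGHTS = {
    "burst_posting": 1.0,            # short bursts of posts
    "feed_mismatch": 1.5,            # blog content differs from feed
    "spam_keywords": 2.0,
    "duplicate_content": 1.5,
    "uniform_post_size": 0.5,
    "skewed_link_targets": 1.0,      # posts mostly link to one page/site
}

def blog_score(signals: dict) -> float:
    """Combine normalized signals (0..1) into a single toy score."""
    positive = sum(w * signals.get(k, 0.0) for k, w in POSITIVE_WEIGHTS.items())
    negative = sum(w * signals.get(k, 0.0) for k, w in NEGATIVE_WEIGHTS.items())
    return positive - negative

print(blog_score({"rss_subscribers": 0.9, "pagerank": 0.6, "spam_keywords": 0.2}))
```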

Twitter Search to Become Real Search like Google?

Tweefind Applies Google Magic to Twitter Search
[Screenshot: Tweefind]
Remember how Google conquered the world of search? They figured out a way to tell which websites are more important than others by judging how many links are pointing to them, and called it Google PageRank (it’s a bit more complex than that, but it was one of the key parts of Google’s search algorithm).
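For readers who haven't seen it, a bare-bones power-iteration sketch captures the core idea; this is only the textbook recursion (a page is important if important pages link to it), not Google's production algorithm:

```python
# Minimal PageRank power iteration. Real PageRank adds dangling-node
# handling, personalization, and web-scale engineering on top of this.
def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each page to the list of pages it links to."""
    n = len(graph)
    rank = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in graph}
        for page, outlinks in graph.items():
            for target in outlinks:
                # Each page shares its rank equally among its outlinks.
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```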

Now, Tweefind is doing something similar for Twitter. It’s a Twitter search engine that returns results based on rank, hopefully surfacing more relevant results and users at the top.

Rank is calculated from several parameters. Tweefind’s creator, Luca Filigheddu, lists them (a toy scoring sketch follows the list):

# of followers
# of users followed
# of tweets
# of RTs he/she receives
# of replies
# of distinct users who reply
# of distinct users who retweet
# of RTs he/she makes
# of links the user shares
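A toy version of such a scoring function might look like the sketch below. The weights are invented, since Tweefind has not published its actual formula:

```python
# Invented weights over the parameters Filigheddu lists (toy example).
WEIGHTS = {
    "followers": 1.0,
    "following": 0.2,
    "tweets": 0.1,
    "retweets_received": 2.0,
    "replies_received": 1.0,
    "distinct_repliers": 1.5,
    "distinct_retweeters": 2.5,
    "retweets_made": 0.3,
    "links_shared": 0.5,
}

def tweefind_rank(user: dict) -> float:
    """Weighted sum of a user's activity metrics (purely illustrative)."""
    return sum(weight * user.get(metric, 0) for metric, weight in WEIGHTS.items())

alice = {
    "followers": 1200, "following": 300, "tweets": 5000,
    "retweets_received": 800, "replies_received": 400,
    "distinct_repliers": 150, "distinct_retweeters": 220,
    "retweets_made": 90, "links_shared": 600,
}
print(tweefind_rank(alice))
```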

This approach raises some interesting questions. Are Twitter users with more followers, tweets, replies or retweets more relevant in the context of real time one-to-many conversations? Is there really a “rank” on Twitter that can be calculated and be useful in real world usage? Could an approach similar to Google’s PR algorithm do for Twitter search what it did for Google?
These questions are definitely worth answering. Yes, if there’s something important happening right now, a quick Twitter search will return a lot of tiny tidbits of info on the subject. But I often feel overwhelmed with the abundance (and similarity) of the results, and I wish there was a way to sift them and find the really relevant tweets.

At the moment, Tweefind does not seem to calculate rank for enough users to answer this question, but in time its results, when compared with standard Twitter search, might prove to be very interesting. Tweefind is also one of those Twitter-related services that are having a go at monetization; in this case it’s a couple of Google ads at the top of the page, which luckily aren’t too big of a distraction from the site’s content. If it goes in the right direction, it’ll definitely be a service to watch in the future.

Why Do Google's Ranking Results Keep Changing?

Why does Google keep changing what websites have to do to be ranked #1?

If you think about it, the answer is obvious. In olden times (a few months ago), what you had to do to rank near the top of Google for a keyword phrase was getting to be pretty well known.

Just follow a few basic rules about keyword density, put keywords in headlines and in bold text, get a few friends to link to you, and your site could show up near the top.
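Keyword density, for instance, was just an easily computed (and therefore easily gamed) ratio. A toy version of the calculation:

```python
# Old-school keyword density: what share of the words in a page
# belong to occurrences of the target phrase.
def keyword_density(text: str, phrase: str) -> float:
    words = text.lower().split()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = sum(1 for i in range(len(words) - n + 1)
               if words[i:i + n] == phrase_words)
    return 100.0 * hits * n / len(words) if words else 0.0

print(keyword_density("seo tips and seo tricks for seo", "seo"))  # ~42.9
```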

As more and more people learned to do this, Google found that the "most optimized" and NOT the "most relevant" websites were showing up at the top.

If Google couldn't deliver "relevant" information, soon no one would be using them for searches. So Google changed many of the rules.

Google will continue to change the rules to keep people from learning how to make their non-relevant websites show up high in the rankings.

The trick is to learn what works before the masses learn it and then be ready to change first when a technique stops working.

You can never master Google's ranking technique. Forget about it. All you can hope for is to be a little ahead of your competition.
Cheers,

SEO Websites That Should Blatantly Have Higher PageRank Scores

1/ In my honest opinion, SEOmoz.org should have a PageRank score of at least 7/10 on its homepage. There are 156,000 organic links pointing to it, and the SEOmoz.org domain has 1.8 million links in total, including links from places like the New York Times and the BBC – the kind of links you drool over. SEOmoz haven't cheated the system; they got those links because they are incredible at what they do, so why are they not getting the PageRank they deserve?
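For context, toolbar PageRank is widely believed, though never confirmed by Google, to sit on a roughly logarithmic scale, which is why arguments like this one lean on orders of magnitude of links rather than raw counts. A toy mapping with a purely invented base:

```python
# Pure speculation: the base and the whole mapping are invented to
# illustrate how slowly a logarithmic scale climbs with link counts.
import math

def toolbar_pr(link_equity: float, base: float = 6.0) -> int:
    """Map raw 'link equity' onto the 0-10 toolbar scale (illustrative)."""
    return min(10, int(math.log(max(link_equity, 1.0), base)))

for equity in (1_000, 156_000, 1_800_000):
    print(f"{equity:>9,} links-worth of equity -> toolbar PR {toolbar_pr(equity)}")
```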


--------------------------------------------------------------------------------

2/ Next up is SEO Book, another website that should have a PageRank of 7/10 on its homepage, but it doesn't, because someone at Google thinks we need to be kept in check. The site has a similar link profile to SEOmoz, with 176,000 links pointing at its homepage and about half a million links pointing to the SEOBook.com domain in total, including dream links from places like The Guardian and the Wall Street Journal.


--------------------------------------------------------------------------------

3/ Jim Boykin’s site We Build Pages has attracted organic links from places like Search Engine Land and Labnol.org thanks to the kick-ass articles published on its blog. Plus, the free SEO tools it offers have earned the site loads of links. The WeBuildPages.com domain has around 32,000 links pointing to its homepage and about 48,000 links in total. Currently the We Build Pages homepage only has a PageRank score of 5/10; the number of quality links it has should easily make it a 6.


--------------------------------------------------------------------------------

4/ The BigMouthMedia website has been around for years, and it has some amazing links from places like Wired and W3 thanks to the white papers published on it and about five years' worth of industry news. According to Site Explorer, there are about 26,000 links pointing to the BigMouthMedia homepage and about 38,000 links pointing to the site as a whole. Yet the BigMouthMedia homepage only has a PR score of 5/10.
