Archive for December, 2013

Title: Google Has A Lot To Say About Duplicate Content These Days

Search Engine News, Search Engine Optimization

Article Source: Link Exchange Service Network

Duplicate content has been an issue in search engine optimization for many years now, yet there is still a lot of confusion around what you can and can’t do with it, in terms of staying on Google’s good side.

In fact, even in 2013, Google’s head of webspam Matt Cutts has had to discuss the issue in several of his regular Webmaster Help videos, because people keep asking questions and looking for clarification.

Do you believe your site has been negatively impacted by duplicate content issues in the past? If so, what were the circumstances? Let us know in the comments.

Back in the summer, Cutts talked about duplicate content with regards to disclaimers and Terms and Conditions pages.

“The answer is, I wouldn’t stress about this unless the content that you have duplicated is spammy or keyword stuffing or something like that – you know, then an algorithm or a person might take action on it – but if it’s legal boilerplate that’s sort of required to be there, we might, at most, not want to count that, but it’s probably not going to cause you a big issue,” Cutts said at the time.

“We do understand that lots of different places across the web do need to have various disclaimers, legal information, terms and conditions, that sort of stuff, and so it’s the sort of thing where if we were to not rank that stuff well, then that would probably hurt our overall search quality, so I wouldn’t stress about it,” he said.

The subject of duplicate content came up again in September, when Cutts took on a question about e-commerce sites that sell products with “ingredients lists” exactly like other sites selling the same product.

Cutts said, “Let’s consider an ingredients list, which is like food, and you’re listing the ingredients in that food – okay, it’s a product that a lot of affiliates have an affiliate feed for, and you’re just going to display that. If you’re listing something that’s vital, so you’ve got ingredients in food or something like that – not specifications that are 18 pages long, but short specifications – that probably wouldn’t get you into too much of an issue. However, if you just have an affiliate feed, and you have the exact same paragraph or two or three of text that everybody else on the web has, that probably would be more problematic.”

“So what’s the difference between them?” he continued. “Well, hopefully an ingredients list, as you’re describing it as far as the number of components or something probably relatively small – hopefully you’ve got a different page from all the other affiliates in the world, and hopefully you have some original content – something that distinguishes you from the fly-by-night sites that just say, ‘Okay, here’s a product. I got the feed and I’m gonna put these two paragraphs of text that everybody else has.’ If that’s the only value add you have then you should ask yourself, ‘Why should my site rank higher than all these hundreds of other sites when they have the exact same content as well?’”

He went on to note that if the majority of your content is the same content that appears everywhere else, and there’s nothing else to say, that’s probably something you should avoid.

It all comes down to whether or not there’s added value, which is something Google has pretty much always stood by, and is reaffirmed in a newer video.

Cutts took on the subject once again this week. This time, it was in response to this question:

How does Google handle duplicate content and what negative effects can it have on rankings from an SEO perspective?

“It’s important to realize that if you look at content on the web, something like 25 or 30 percent of all of the web’s content is duplicate content,” said Cutts. “There are man pages for Linux, you know, all those sorts of things. So duplicate content does happen. People will quote a paragraph of a blog, and then link to the blog. That sort of thing. So it’s not the case that every single time there’s duplicate content, it’s spam. If we made that assumption, the changes that happened as a result would end up, probably, hurting our search quality rather than helping our search quality.”

“So the fact is, Google looks for duplicate content and where we can find it, we often try to group it all together and treat it as if it’s one piece of content,” he continued. “So most of the time, suppose we’re starting to return a set of search results, and we’ve got two pages that are actually kind of identical. Typically we would say, ‘Okay, you know what? Rather than show both of those pages (since they’re duplicates), let’s just show one of those pages, and we’ll crowd the other result out.’ And if you get to the bottom of the search results, and you really want to do an exhaustive search, you can change the filtering so that you can say, okay, I want to see every single page, and then you’d see that other page.”

“But for the most part, duplicate content is not really treated as spam,” he said. “It’s just treated as something that we need to cluster appropriately. We need to make sure that it ranks correctly, but duplicate content does happen. Now, that said, it’s certainly the case that if you do nothing but duplicate content, and you’re doing it in an abusive, deceptive, malicious or manipulative way, we do reserve the right to take action on spam.”

He mentions that someone on Twitter was asking how to do an RSS autoblog to a blog site, and not have that be viewed as spam.

“The problem is that if you are automatically generating stuff that’s coming from nothing but an RSS feed, you’re not adding a lot of value,” said Cutts. “So that duplicate content might be a little more likely to be viewed as spam. But if you’re just making a regular website, and you’re worried about whether you have something duplicated on the .com, or you might have two versions of your Terms and Conditions – an older version and a newer version – or something like that, that sort of duplicate content happens all the time on the web, and I really wouldn’t get stressed out about the notion that you might have a little bit of duplicate content. As long as you’re not trying to massively copy, for every city and every state in the entire United States, the same boilerplate text… for the most part, you should be in very good shape, and not really have to worry about it.”

In case you’re wondering, quoting is not considered duplicate content in Google’s eyes. Cutts spoke on that late last year. As long as you’re just quoting, using an excerpt from something, and linking to the original source in a fair use kind of way, you should be fine. Doing this with entire articles (which happens all the time) is of course a different story.

Google, as you know, designs its algorithms to abide by its quality guidelines, and duplicate content is part of that, so this is something you’re always going to have to consider. It says right in the guidelines, “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.”

They do, however, offer steps you can take to address any duplicate content issues that you do have. These include using 301s, being consistent, using top-level domains, syndicating “carefully,” using Webmaster Tools to tell Google how you prefer your site to be indexed, minimizing boilerplate repetition, avoiding publishing stubs (empty pages, placeholders), understanding your content management system and minimizing similar content.
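Of those steps, the 301 is the most mechanical. A minimal sketch of a permanent redirect in an Apache .htaccess file, consolidating the bare domain onto the www version (example.com is a placeholder, and this assumes mod_rewrite is available), might look like this:

```apache
# Permanently (301) redirect example.com/... to www.example.com/...
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

With something like that in place, links pointing at the bare domain all resolve to a single hostname, so Google only ever sees one copy of each page.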

Google advises against blocking its crawler from indexing duplicate content, though, so think about that too. If the duplicates are blocked, Google won’t be able to detect when URLs point to the same content, and will have to treat them as separate pages. Use the canonical link element instead.
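In practice, the canonical link element is a single line in the head of each duplicate or near-duplicate page, pointing at the version you want indexed (the URL below is a placeholder):

```html
<!-- Placed in the <head> of every duplicate variant of the page -->
<link rel="canonical" href="http://www.example.com/preferred-page" />
```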

Have you been affected by how Google handles duplicate content in any way? Please share.

If you like all this stuff here then you can buy me a pack of cigarettes.

Title: Google: Your Various ccTLDs Will Probably Be Fine From The Same IP Address

Search Engine News, Search Engine Optimization

Ever wondered if Google would mind if you had multiple ccTLD sites hosted from a single IP address? If you’re afraid they might not take kindly to that, you’re in for some good news. It’s not really that big a deal.

Google’s Matt Cutts may have just saved you some time and money with this one. He takes on the following submitted question in the latest Webmaster Help video:

For one customer we have about a dozen individual websites for different countries and languages, with different TLDs under one IP number. Is this okay for Google or do you prefer one IP number per country TLD?

“In an ideal world, it would be wonderful if you could have, for every different .com, .fr, .de, a different, separate IP address for each one of those, and have them each placed in the UK, or France, or Germany, or something like that,” says Cutts. “But in general, the main thing is, as long as you have different country code top-level domains, we are able to distinguish between them. So it’s definitely not the end of the world if you need to put them all on one IP address. We do take the top-level domain as a very strong indicator.”

“So if it’s something where it’s a lot of money or it’s a lot of hassle to set that sort of thing up, I wouldn’t worry about it that much,” he adds. “Instead, I’d just go ahead and say, ‘You know what? I’m gonna go ahead and have all of these domains on one IP address, and just let the top-level domain give the hint about what country it’s in. I think it should work pretty well either way.”

While on the subject, you might want to listen to what Cutts had to say about location and ccTLDs earlier this year in another video.

Title: Google Gives Advice On Speedier Penalty Recovery

Search Engine News, Search Engine Optimization

Google has shared some advice in a new Webmaster Help video about recovering from Google penalties that you have incurred as the result of a time period of spammy links.

Now, as we’ve seen, sometimes this happens to a company unintentionally. A business could have hired the wrong person/people to do their SEO work, and gotten their site banished from Google, without even realizing they were doing anything wrong. Remember when Google had to penalize its own Chrome landing page because a third-party firm bent the rules on its behalf?

Google is cautiously suggesting “radical” actions from webmasters, and sending a bit of a mixed message.

How far would you go to get back in Google’s good graces? How important is Google to your business’ survival? Share your thoughts in the comments.

The company’s head of webspam, Matt Cutts, took on the following question:

How did Interflora [over]turn their ban in 11 days? Can you explain what kind of penalty they had, how did they fix it, as some of us have spent months try[ing] to clean things up after an unclear GWT notification.

As you may recall, Interflora, a major UK flowers site, was hit with a Google penalty early this year. Google didn’t exactly call out the company publicly, but after reports of the penalty came out, the company mysteriously wrote a blog post warning people not to engage in the buying and selling of links.

But you don’t have to buy and sell links to get hit with a Google penalty for webspam, and Cutts’ response goes beyond that. He declines to discuss a specific company, because that’s typically not Google’s style, but proceeds to try and answer the question in more general terms.

“Google tends to look at buying and selling links that pass PageRank as a violation of our guidelines, and if we see that happening multiple times – repeated times – then the actions that we take get more and more severe, so we’re more willing to take stronger action whenever we see repeat violations,” he says.

That’s the first thing to keep in mind, if you’re trying to recover. Don’t try to recover by breaking the rules more, because that will just make Google’s vengeance all the greater when it inevitably catches you.

Google continues to bring the hammer down on any black hat link network it can get its hands on, by the way. Just the other day, Cutts noted that Google has taken out a few of them, following a larger trend that has been going on throughout the year.

The second thing to keep in mind is that Google wants to know you’re taking its guidelines seriously, and that you really do want to get better – you really do want to play by the rules.

“If a company were to be caught buying links, it would be interesting if, for example, you knew that it started in the middle of 2012, and ended in March 2013 or something like that,” Cutts continues in the video. “If a company were to go back and disavow every single link that they had gotten in 2012, that’s a pretty monumentally epic, large action. So that’s the sort of thing where a company is willing to say, ‘You know what? We might have had good links for a number of years, and then we just had really bad advice, and somebody did everything wrong for a few months – maybe up to a year, so just to be safe, let’s just disavow everything in that timeframe.’ That’s a pretty radical action, and that’s the sort of thing where if we heard back in a reconsideration request that someone had taken that kind of a strong action, then we could look, and say, ‘Okay, this is something that people are taking seriously.’”

Now, don’t go getting carried away. Google has been pretty clear since the Disavow Links tool launched that this isn’t something that most people want to do.

Cutts reiterates, “So it’s not something that I would typically recommend for everybody – to disavow every link that you’ve gotten for a period of years – but certainly when people start over with completely new websites they bought – we have seen a few cases where people will disavow every single link because they truly want to get a fresh start. It’s a nice looking domain, but the previous owners had just burned it to a crisp in terms of the amount of webspam that they’ve done. So typically what we see from a reconsideration request is people starting out, and just trying to prune a few links. A good reconsideration request is often using the ‘domain:’ query, and taking out large amounts of domains which have bad links.”

“I wouldn’t necessarily recommend going and removing everything from the last year or everything from the last year and a half,” he adds. “But that sort of large-scale action, if taken, can have an impact whenever we’re assessing a domain within a reconsideration request.”

In other words, if you’re willing to go to such great lengths and eliminate such a large number of links, Google’s going to notice.

I don’t know that it’s going to get you out of the penalty box in eleven days (as the Interflora question mentions), but it will at least show Google that you mean business, and, in theory at least, help you get out of it.

Much of what Cutts has to say this time around echoes things he has mentioned in the past. Earlier this year, he suggested using the Disavow Links tool like a “machete”. He noted that Google sees a lot of people trying to go through their links with a fine-toothed comb, when they should really be taking broader swipes.

“For example, often it would help to use the ‘domain:’ operator to disavow all bad backlinks from an entire domain rather than trying to use a scalpel to pick out the individual bad links,” he said. “That’s one reason why we sometimes see it take a while to clean up those old, not-very-good links.”

On another occasion, he discussed some common mistakes he sees people making with the Disavow Links tool. On a first reconsideration request, people often take the scalpel (or “fine-toothed comb”) approach, rather than the machete approach.

“You need to go a little bit deeper in terms of getting rid of the really bad links,” he said. “So, for example, if you’ve got links from some very spammy forum or something like that, rather than trying to identify the individual pages, that might be the opportunity to do a ‘domain:’. So if you’ve got a lot of links that you think are bad from a particular site, just go ahead and do ‘domain:’ and the name of that domain. Don’t maybe try to pick out the individual links because you might be missing a lot more links.”

And remember, you need to make sure you’re using the right syntax. You need to use the “domain:” query in the following format:
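A line in a disavow file using the “domain:” operator looks like this (example.com stands in for whatever domain’s links you want disavowed):

```
domain:example.com
```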

Don’t add an “http” or a “www” or anything like that. Just the domain.

So, just to recap: Radical, large-scale actions could be just what you need to take to make Google seriously reconsider your site, and could get things moving more quickly than trying to single out individual links from domains. But Google wouldn’t necessarily recommend doing it.

Oh, Google. You and your crystal clear, never-mixed messaging.

As Max Minzer commented on YouTube (or is that Google+?), “everyone is going to do exactly that now…unfortunately.”

Yes, this advice will no doubt lead many to unnecessarily obliterate many of the backlinks they’ve accumulated – including legitimate links – for fear of Google. Fear they won’t be able to make that recovery at all, let alone quickly. Hopefully the potential for overcompensation will be considered if Google decides to use Disavow Links as a ranking signal.

Would you consider having Google disavow all links from a year’s time? Share your thoughts in the comments.
