Archive for September, 2013

Title: Is Google Dumbing Down Search Results?

Search Engine News, Search Engine Optimization

Article Source: Link Exchange Service Network

There has been an interesting discussion about Google and search quality this week thanks to comments made by a Googler who suggested that a site with higher quality, better information is not always more useful.

Wait, what?

Hasn’t Google been pounding the message of “high-quality content is how you rank well in Google” into everybody’s heads for years? Well, sometimes dumbed down is better. Apparently.

Do you believe there are times when Google should not be providing the most high-quality search results at the top of the rankings? Tell us what you think.

Web developer Kris Walker has started a site called The HTML and CSS Tutorial (pictured), which he aims to make a super high quality resource for beginner developers to learn the tricks of the trade. The goal is to get its content to rank well in search engines – specifically to rank better than content from W3Schools, which he finds to be lackluster.

“The search results for anything related to beginner level web development flat out suck,” he writes.

“So the plan is to create a site, which I’m calling The HTML and CSS Tutorial, with the goal of winning the search engine battle for beginner level web development material,” he says. “To do this it needs to have the best learning material on the web (or close to it), along with comprehensive HTML, CSS, and JavaScript reference material. It needs to provide high quality content coupled with an information architecture that will get a beginner up to speed, meeting their immediate needs, while allowing them to go through a comprehensive course of material when they are ready.”

Okay, so it sounds like he’s got the right attitude and strategy in mind for getting good search rankings. You know, creating high quality content. This is what Google wants. It has said so over and over (and over and over) again. The Panda update completely disrupted the search rankings for many websites based on this notion that high quality, informative content is king when it comes to search visibility. It makes sense. Above all else, people searching for content want to land on something informative, authoritative and trustworthy, right?

Well, not always, according to one Googler.

Walker’s post appeared on Hacker News and generated a fair number of comments. One user suggested that higher-quality sites are often further down in the search results because they’re not as popular as the sites ranked above them.

Google’s Ryan Moulton comments, “There’s a balance between popularity and quality that we try to be very careful with. Ranking isn’t entirely one or the other. It doesn’t help to give people a better page if they aren’t going to click on it anyways.”

In a later comment, Moulton elaborates:

Suppose you search for something like [pinched nerve ibuprofen]. The top two results currently are http://www.mayoclinic.com/health/pinched-nerve/DS00879/DSECT… and http://answers.yahoo.com/question/index?qid=20071010035254AA…
Almost anyone would agree that the mayoclinic result is higher quality. It’s written by professional physicians at a world renowned institution. However, getting the answer to your question requires reading a lot of text. You have to be comfortable with words like “Nonsteroidal anti-inflammatory drugs,” which a lot of people aren’t. Half of people aren’t literate enough to read their prescription drug labels: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1831578/

The answer on yahoo answers is provided by “auntcookie84.” I have no idea who she is, whether she’s qualified to provide this information, or whether the information is correct. However, I have no trouble whatsoever reading what she wrote, regardless of how literate I am.
That’s the balance we have to strike. You could imagine that the most accurate and up to date information would be in the midst of a recent academic paper, but ranking that at 1 wouldn’t actually help many people. This is likely what’s going on between w3schools and MDN. MDN might be higher quality, better information, but that doesn’t necessarily mean it’s more useful to everyone.

Wow, so as far as I can tell, he’s pretty much saying that Google should be showing dumber results for some queries based on the notion that people won’t be smart enough to know what the higher quality results are talking about, or even capable enough to research further and learn more about the info they find in the higher quality result. If you’re interpreting this a different way, please feel free to weigh in.
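To make that tradeoff concrete, here is a toy sketch of the balance Moulton describes: weight a page’s quality by the chance that the searcher will actually read and understand it. The scores and the formula below are invented purely for illustration; they are not Google’s ranking function.

pages = [
    {"name": "Mayo Clinic article", "quality": 0.95, "p_read": 0.40},
    {"name": "Yahoo Answers reply", "quality": 0.55, "p_read": 0.90},
]

def expected_usefulness(page):
    # A page only helps if the searcher actually reads and understands it,
    # so weight its quality by the chance of that happening.
    return page["quality"] * page["p_read"]

# Hypothetical numbers: quality alone would rank the Mayo Clinic page first,
# but quality weighted by readability puts the easier answer on top.
for page in sorted(pages, key=expected_usefulness, reverse=True):
    print(page["name"], round(expected_usefulness(page), 2))

Ranked by quality alone, the Mayo Clinic page wins; ranked by expected usefulness, the easier-to-read answer comes out ahead, which is essentially the tradeoff Moulton is defending.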

Note: For me, at least, the Mayo Clinic result is actually ranking higher than the Yahoo Answers result for the “pinched nerve ibuprofen” query example Moulton gave. I guess literacy prevailed after all on that one.

If Google is actively dumbing down search results, that seems somewhat detrimental to society, considering the enormous share of the search market Google holds.

Meanwhile, Google itself is only getting smarter. On Thursday, Google revealed that it has launched its biggest algorithm change in twelve years, dubbed Hummingbird. It’s designed to enable Google to better understand all of the content on the web, as it does the information in its own Knowledge Graph. I hope they’re not dumbing down Knowledge Graph results too, especially considering that it is only growing to cover a wider range of data.

Well, Google’s mission is “to organize the world’s information and make it universally accessible and useful.” There’s nothing about quality, accuracy, or better informing people in there.

Hat tip to Search Engine Roundtable for pointing to Moulton’s comments.

Should Google assume that people won’t understand (or further research) the highest-quality content, and point them towards lesser-quality content that is easier to read? Let us know what you think.

Image: htmlandcsstutorial.com


Title: Google: No Duplicate Content Issues With IPv4, IPv6

Search Engine News, Search Engine Optimization

Article Source: Link Exchange Service Network

Google released a new Webmaster Help video today discussing IPv4 and IPv6 with regards to possible duplicate content issues. To make a long story short, there are none.

Google’s Matt Cutts responded to the following user-submitted question:

As we are now closer than ever to switching to IPv6, could you please share some info on how Google will evaluate websites. One website being in IPv4, exactly the same one in IPv6 – isn’t it considered duplicate content?

“No, it won’t be considered duplicate content, so IPv4 is an IP address that’s specified with four octets,” says Cutts. “IPv6 is specified with six identifiers like that, and you’re basically just serving up the same content on IPv4 and IPv6. Don’t worry about being tagged with duplicate content.”

“It’s the similar sort of question to having content something something dot PL or something something dot com,” he continues. “You know, spammers are very rarely the sorts of people who actually buy multiple domains on different country level domains, and try to have that sort of experience. Normally when you have a site on multiple country domains, we don’t consider that duplicate content. That’s never an issue – very rarely an issue for our rankings, and having the same thing on IPv4 and IPv6 should be totally fine as well.”
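Incidentally, if you want to check whether a site already answers on both protocols, a quick dual-stack lookup is easy to script. This is a minimal sketch in Python, using example.com as a placeholder hostname; it only queries DNS for IPv4 and IPv6 records and says nothing about how Google crawls the site.

import socket

host = "example.com"  # placeholder; substitute the site you want to check

for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        infos = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)
        # The last element of each entry is the socket address; its first
        # field is the IP itself.
        print(label, sorted({info[4][0] for info in infos}))
    except socket.gaierror:
        print(label, "no address records found")

If both lines print addresses, the same content is reachable over IPv4 and IPv6, which is exactly the setup Cutts says you don’t need to worry about.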

More on IPv6 here.

Image: Google


Title: Google Admits Link Mistake, Probably Won’t Help Webmaster Link Hysteria

Search Engine News, Search Engine Optimization

Article Source: Link Exchange Service Network

Google is apparently getting links wrong from time to time. By wrong, we mean giving webmasters example links (in its unnatural link warning messages) that are actually legitimate, natural links.

It’s possible that the instances discussed here are extremely rare cases, but how do we know? It’s concerning that we’re seeing these stories appear so close together. Do you think this is an issue that is happening a lot? Let us know in the comments.

A couple of weeks ago, a forum thread received some attention when a webmaster claimed that this had happened to him. Eventually Google responded, not quite admitting a mistake, but not denying it either. A Googler told him:

Thanks for your feedback on the example links sent to you in your reconsideration request. We’ll use your comments to improve the messaging and example links that we send.

If you believe that your site no longer violates Google Webmaster Guidelines, you can file a new reconsideration request, and we’ll re-evaluate your site for reconsideration.

Like I said, not exactly an admission of guilt, but it pretty much sounds like they’re acknowledging the merit of the guy’s claims, and keeping these findings in mind to avoid making similar mistakes in the future. That’s just one interpretation, so do with that what you will.

Now, however, we see a Googler clearly admitting a mistake Google made when it provided a webmaster with one of those example URLs: a link from DMOZ. Barry Schwartz at Search Engine Roundtable, who pointed out the other thread initially, managed to find this Google+ discussion from even earlier.

Dave Cain shared the message he got from Google, which included the DMOZ link, and tagged Google’s Matt Cutts and John Mueller in the post. Mueller responded, saying, “That particular DMOZ/ODP link-example sounds like a mistake on our side.”

“Keep in mind that these are just examples — fixing (or knowing that you can ignore) one of them, doesn’t mean that there’s nothing else to fix,” he added. “With that in mind, I’d still double-check to see if there are other issues before submitting a reconsideration request, so that you’re a bit more certain that things are really resolved (otherwise it’s just a bit of time wasted with back & forth).”

Cain asked, “Because of the types of links that were flagged in the RR response (which appear to be false negatives, i.e. DMOZ/ODP), would it be safe to assume that the disavow file wasn’t processed with the RR??”

Mueller said that “usually” submitting both at the same time is no problem, adding, “So I imagine it’s more a matter of the webspam team expecting more.”
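For context, the disavow file Cain mentions is just a plain-text list uploaded through Google’s disavow tool: a full URL disavows a single page, a domain: line disavows an entire site, and lines starting with # are comments. The entries below are placeholders, not anything from Cain’s actual file.

# links we asked to have removed but that are still live
http://spammy-directory.example/our-listing.html
domain:paid-links.example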

It’s a good thing Mueller did suggest that Google made a mistake, given the link in question was from DMOZ. There are a lot of links in DMOZ, and that could have created another wave in the ocean of link hysteria. Directories in general have already seen a great deal of requests for link removals.

Here’s a video from a couple summers ago with Cutts giving an update on how Google thinks about DMOZ.

Cutts, of the webspam team, did not weigh in on Cain’s conversation with Mueller (which took place on August 20th).

Mistakes happen, and Google is not above that. However, seeing one case where Google is openly admitting a mistake so close to another case where it looks like they probably also made a mistake is somewhat troubling, considering all the hysteria we’ve seen over linking over the past year and a half.

It does make you wonder how often it’s happening.

Do you think these are most likely rarities, or do you believe Google is getting things wrong often? Share your thoughts.

Image: Google
