In a recent post on Google’s Webmaster Central Blog, Greg Grothaus and Shashi Thakur have shed some light on a few questions that are likely close to many web developers’ hearts.
1. How do the geographic location of a website’s server, its IP address and its domain extension affect which sites top the search results pages served to users in different regions?
Domain extensions can tell you something about a site before the first page even loads. If you’re running a website in Australia and targeting Australian customers, then the .com.au extension will likely help you in Australian search results pages. “Because we attempt to serve geographically relevant content, we factor domains that have a regional significance,” states Shashi Thakur in the post. Additionally, IP addresses can be used to determine geographic location, and Google is not ignoring this information. Be smart about who your target audience is and where they’re located; you can use this knowledge to your advantage.
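To make the IP-to-location idea concrete, here is a toy TypeScript sketch of a range-based lookup. The two hard-coded ranges stand in for the kind of GeoIP database a real system would use; they are invented purely for illustration and say nothing about how Google actually performs the lookup.

```typescript
// Toy illustration: an IPv4 address maps to a geographic region by falling
// inside a known address range. Real lookups use full GeoIP databases;
// the two ranges below use documentation/example addresses, not real data.

interface IpRange {
  start: number;   // inclusive, as a 32-bit integer
  end: number;     // inclusive
  country: string; // ISO country code
}

// Convert a dotted-quad IPv4 address to a single integer for range checks.
function ipToInt(ip: string): number {
  return ip
    .split(".")
    .reduce((acc, octet) => acc * 256 + parseInt(octet, 10), 0);
}

// Hypothetical ranges standing in for a real GeoIP database.
const ranges: IpRange[] = [
  { start: ipToInt("203.0.113.0"), end: ipToInt("203.0.113.255"), country: "AU" },
  { start: ipToInt("198.51.100.0"), end: ipToInt("198.51.100.255"), country: "US" },
];

function countryForIp(ip: string): string | undefined {
  const value = ipToInt(ip);
  return ranges.find(r => value >= r.start && value <= r.end)?.country;
}

console.log(countryForIp("203.0.113.42")); // "AU" in this toy table
```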
2. What effects can be expected from cross-linking a number of websites that you control, and what guidelines exist for linking strategies if you own and maintain multiple sites?
Shashi weighs in on this issue in a manner that is very consistent with any other linking guidelines that we’ve seen from Google – no surprises here. As with any other linking strategy, the general advice is that links between sites that “provide value” to a user are good, and links that are there simply for the sake of linking are bad. The interesting piece here is the explicit mention of links between topically or thematically related sites: “If the sites are related in business …then it could make sense — the links are organic and useful.”
This advice applies to any links, whether between two sites that you own or otherwise. The importance of links to your rankings cannot be overstated, so making sure your limited time and resources go into link-building strategies as efficiently as possible is essential.
3. What suggestions are there for helping search engines “comprehend” websites that use DHTML, AJAX, Flash and other Web 2.0 technologies?
In his post, Greg Grothaus reaffirms our belief that search engines are not human and are not yet capable of human comprehension. As human visitors to a website, we can look at a photo of a car and, in a fraction of a second, conclude that we are looking at a car. We can watch animations and video and make sense of what we’re seeing. Search engine spiders, on the other hand, “see” only a mess of pixels and compiled code that tells them little about what those images or animations actually show. For this reason, when designing websites and pages, you must be conscious of the balance between what a search engine sees and what a human user sees.
Google officially recommends using Flash “only where it is needed,” and warns of the dangerous ground you enter when your website shows a search engine spider different content than it shows a human visitor. Even without malicious intent, it is reasonable to want some way of telling a visitor who does not have Flash installed (or a search engine spider) what they’re missing, in a non-Flash format. Techniques such as cloaking, doorway pages, scripted redirects, and the use of CSS to hide text CAN be used to accomplish this, but because these are the same techniques commonly abused in hopes of manipulating a search engine, using them on your site is begging to be penalized.
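The non-cloaking route is plain progressive enhancement: ship ordinary HTML that describes the content, and only swap in the Flash movie when the browser can actually play it, so spiders and Flash-less visitors read the same text as everyone else. Here is a minimal sketch of that idea in TypeScript; the element ID, movie path and dimensions are invented for the example, and this is one possible approach rather than anything Google prescribes.

```typescript
// Progressive enhancement sketch: the page ships descriptive HTML, and this
// script replaces it with the Flash embed only when a Flash plugin exists.
// Crawlers and visitors without Flash keep the same descriptive markup,
// so no content is hidden or cloaked.

function hasFlashPlugin(): boolean {
  // navigator.plugins lists the browser's installed plugins.
  return Array.from(navigator.plugins).some(plugin =>
    plugin.name.toLowerCase().includes("flash")
  );
}

function upgradeToFlash(containerId: string, movieUrl: string): void {
  const container = document.getElementById(containerId);
  if (!container || !hasFlashPlugin()) {
    return; // leave the plain-HTML description in place
  }
  // Swap the descriptive HTML for the movie only in capable browsers.
  container.innerHTML =
    `<object type="application/x-shockwave-flash" data="${movieUrl}" ` +
    `width="640" height="360"></object>`;
}

// Usage, assuming markup like:
// <div id="product-tour">Text and images describing the product tour.</div>
upgradeToFlash("product-tour", "/media/product-tour.swf");
```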
In a separate post, Mark Berghausen of Google’s Search Quality Team writes “the only hard and fast rule is to show Googlebot the exact same thing as your users.” We’d heed that warning.