Trusted or Authority Sites – What are They?

Every month or so, Google quietly releases a blog post that outlines the updates they made to their various algorithms. This month they rolled out a list of their June & July updates. There was definitely a theme: the buzzwords were “High Quality Content” and “Trusted Sources.”

Now – how does Google find “High Quality Content”? Honestly, there are probably more ways than we’ll ever know, but what we DO know is that shares and links are an indicator. I think if we combine what they say about High Quality Content and Trusted Sources, we can extrapolate that “shares of high quality content from trusted sources” are golden.

Identifying those trusted sources is tricky. Trusted sites are… Continue reading

What’s in Your Toolbox?

Every marketer who handles multiple aspects of online presence will tell you: we change gears so much, and have fingers in so many buckets, that having tools to help us monitor and keep track of campaigns is essential. That being said, there are a ton of tools out there to choose from, and everyone has an opinion on which ones are best. I generally talk from experience when it comes to tool selection. I don’t recommend what I don’t use, and I am pretty picky about usability when it comes to software.

My favorites are the ones I can figure out without reading a manual. I’m terrible with instructions. I want things to be intuitive and easy to use, so multi-step screens and…

Continue reading

12 Steps to Great Press Release SEO & Usability

Are you writing regular press releases? Are you hustling enough to have a reason to write regular press releases? Not only does a press release act as a great brand advocacy tool, but if done correctly it can help you create some great SEO results as well.

Press Release Tips and Tricks

Optimizing your press release as you create it is an important step you cannot leave out. Make sure your PR department is talking with your online marketing department and either implementing your press release SEO rules or letting you touch the press release before it’s sent out.

Here are some rules for creating great SEO within your press releases Continue reading

Defined: A Healthy Link Profile

Since the Penguin update of May 2012, I’ve gotten quite a few questions about links, specifically incoming links, and their quality, quantity, and acquisition.

Here’s the scoop – good links are earned, not bought, traded or given.  We need to change our thinking from a “linkbuilding” mentality to a mentality of “Engagement and Education.”   I think linkbuilding has completely changed, and we’re looking at a new era and set of definitions that surround the link.  Here’s what I think – stop linkbuilding.

Stop looking for link placements and start writing great content and building relationships.  

Be helpful. Guest blog with relevant content. Don’t make every link you get on a guest blog or traffic-driving directory say “spammy keyword phrase.” And above all, if your website visitor would never click on it, need the content, or remotely care, don’t bother linking to it or getting a link from it.

Here is what we look for when evaluating an incoming link profile and a healthy link building strategy. Continue reading

Competitor Comparisons – Metrics to Measure

We’ve all been in that situation; you know the one – the keyword that you just can’t seem to get ahead with. That competitor that should not outrank you… is… and it’s driving you crazy.

It took me quite a long time to realize that chasing competitors can be a waste of time. There’s so much work to do to improve your own website – wasting energy on a competitor that dogs your every step takes time away from actions you can take to increase your conversions and make their presence a non-issue. Constantly obsessing over a position that flip-flops with someone else, or where you can’t… Continue reading

From The Ground Up – SEO-Friendly Site Construction – Part 2

In our first installment of From the Ground Up, we talked about creating a solid website architecture based on keyword research and a logical hierarchy of pages and content.  Today we’re going to get more in depth with creating a content plan and naming URLs for your new website.

We need to talk about URL names from two angles today – a brand new website, and a redesign of an existing website. Let’s start with the redesign. If you are redesigning your website and you don’t have to rename your URLs, don’t. Some platforms will let you keep the current URL for each content page. If that makes sense with your new website structure and page names, don’t change them. New URLs mean creating 301 redirects from old pages to new pages, and that will cost you some of the link benefit flowing to those old pages. If your website directories, folders and page names make sense, are hierarchical, and can be carried over into your new site, by all means do that.

In a perfect world, URLs wouldn’t have to change – but in probably 90% of new website designs we’re talking about a platform change. Likely you’re moving from one programming language to another, one blogging platform to another, or even one philosophy to another. That means new URLs for existing content. We recommend a smart system for naming new URLs and a plan for creating 301 redirects before a page is even created.
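
That 301 plan can be as simple as a spreadsheet of old and new paths that lives alongside the content plan. As a rough illustration only (the “redirect-map.csv” file name and the Apache-style output lines are our assumptions, not a required format), a few lines of Python can turn that spreadsheet into redirect rules:

# A minimal sketch: read old-path,new-path pairs and print 301 redirect rules.
# Adjust the output line for whatever server or platform you actually use.
import csv

redirects = {}
with open("redirect-map.csv", newline="") as f:
    for old_path, new_path in csv.reader(f):
        if old_path == new_path:
            continue  # unchanged URLs need no redirect
        redirects[old_path] = new_path

for old_path, new_path in sorted(redirects.items()):
    print("Redirect 301 %s %s" % (old_path, new_path))

Keeping the map in one file also gives you a page-by-page checklist to verify that nothing on the old site is left without a destination.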

In our last post, we talked about taking every URL on your website and placing it in a closely related category or theme. Some categories or themes may live… Continue reading

From the Ground Up – SEO Friendly Site Construction – Part 1

Building a website for a new venture is an exciting prospect. You start thinking about pages and features, and you come up with paragraphs of content and graphics long before you put pixel to paper. The urge to just “get something up” is overwhelming, and you start with the “look” instead of thinking about the “structure.”

Stop… just stop… you’re not doing yourself any favors. Before you even choose a name for your website, blog or business, think about the words you use. Words first; then we can start building something. Think of this step as the architectural rendering of a home under construction, complete with wall cross-sections and material lists.

Creating a solid list of keyword phrases that describe what you are, what you do, and why you do it will help you Continue reading

Domain Moving Day the Key Relevance Way

So, you’re gonna change hosting providers. In many cases, moving the content of the site is as easy as zipping it up and unzipping it on the new server. But there is another aspect of the move that many people overlook: DNS.

The Domain Name System (DNS) is the translation service that converts your domain name (e.g. keyrelevance.com) to the corresponding IP address. Moving hosting companies is like changing houses: if you don’t set up the change-of-address information correctly, some visitors will keep going to the old address for a while. Proper handling of the changes to your DNS records keeps this transition time as short as possible.
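
To see that translation in action, you can resolve a name to an IP address right from your own machine. Here is a minimal sketch using Python’s standard library (keyrelevance.com is just the example domain from above):

# Resolve a hostname to the IP address your local DNS currently returns.
import socket

print(socket.gethostbyname("keyrelevance.com"))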

Let’s assume you are changing hosting and the new hosting company is going to start handling the authoritative DNS for the domain. The first step is to configure the new hosting company as the authority. This is best done at least a couple of days before the site moves to its new location.

What does “Authoritative DNS” mean?
There are a double-handful of servers (known as the root DNS servers) whose purpose is to keep track of who is keeping track of the IP addresses for a domain. Rather than handling EVERY DNS request themselves, they only keep track of who is the authoritative publisher of the DNS information for each domain. In other words, they don’t know your address, but they can tell you who does.

If we tell the root-level DNS servers that the authority is changing, that information may take up to 48 hours to propagate throughout the internet. By changing the authority without changing the IP addresses, the old authority and the new authority agree on the address for the whole transition, so no traffic gets forwarded before you actually move.
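
One way to confirm that the authority change has propagated is to ask for the domain’s NS records and see which name servers come back. A sketch using the third-party dnspython package (our tool choice here is an assumption; any DNS lookup utility will show the same thing):

# List the name servers currently reported as authoritative for the domain.
import dns.resolver  # third-party package: dnspython

for record in dns.resolver.resolve("keyrelevance.com", "NS"):
    print(record.target)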

Shortening the Transition
The authoritative DNS servers want to minimize their load, so every time they send out an answer to a request for a given domain’s address, they put an expiration date on it. This is called the “Time To Live”, or TTL. By default, most DNS servers set the domain TTL to 86,400 seconds, which equals 1 day. Thus, when a visitor requests the address from the authoritative DNS, it returns the IP address and says “and don’t bother asking again for 24 hours.” This can cause problems during the actual transition, since the old address might continue to be used for a whole day after the address has changed.

The Day Before the Move
Since the new hosting company is now the authority, they can shorten the TTL to a much smaller value. We recommend 15 minutes (900 seconds) as a good compromise TTL during the transition.
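
Before moving day, it is worth confirming that the shorter TTL is actually being served. Again with dnspython (an assumed tool choice), the TTL comes back alongside the answer:

# Check the TTL currently served for the domain's A record.
import dns.resolver  # third-party package: dnspython

answer = dns.resolver.resolve("keyrelevance.com", "A")
print("TTL (seconds):", answer.rrset.ttl)  # expect 900 the day before the move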

Moving Day
When you are ready to make the switch, have the new DNS servers change the IP information to the new address(es). Since the TTL was set to 15 minutes, the other DNS servers on the ‘net will very quickly ask for the domain’s IP address again, get the new info, and the switchover will happen much more quickly than if the authority had not changed. Once the new site is live and you have verified nothing is horribly wrong, you can change the TTL on the new DNS servers back to 1 day. If, on the other hand, something IS wrong with the new site, you can change the DNS back to the old IP address, and within 15 minutes most if not all traffic should be back on the old servers. We also recommend changing the old DNS info to point to the new IP address as a precaution, but if you follow these steps, most of the traffic should have already transitioned to the new DNS servers.
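
On moving day you can also query the old and new DNS servers directly and confirm that both now hand out the new IP address. Another dnspython sketch (the 192.0.2.x name server addresses are placeholders; substitute your old and new DNS servers):

# Ask a specific DNS server for the domain's A record, bypassing local caches.
import dns.resolver  # third-party package: dnspython

def ask(server_ip, domain):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    return [record.address for record in resolver.resolve(domain, "A")]

print("old DNS says:", ask("192.0.2.1", "keyrelevance.com"))
print("new DNS says:", ask("192.0.2.2", "keyrelevance.com"))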

A Bug in BIND
There is a bug in some versions of the BIND program (which performs the DNS translation). The bug causes a DNS server to keep asking the same authoritative DNS server for the info for as long as that server is willing to give it. To complete the transition cleanly, you need to turn off the DNS records for the domain on the old DNS servers. That will generate an error, which in turn causes the requesting DNS server to ask the root-level servers for the new authority. Until you make this change, there is still a chance that some traffic will continue to visit the old server.

Change of Address Forms
The USPS offers a Change of Address kit to help make moving your house easier. Below is the Key Relevance Change of Address Checklist that may make your site’s transition painless.


Key Relevance Domain Change of Address Checklist

2+ Days Pre-Move
Set up new DNS servers to serve up the OLD IP addresses

  • handle old subdomains
  • handle MX records

Once that is complete, Change Authoritative DNS records to point to new DNS servers.

1 Day before move
On new DNS servers, shorten TTL to 15 min (900 sec)

Moving Day
On New DNS Servers

  • Change IP addresses to the new server
  • Change TTL back to 1 day (86,400 sec), or whatever the default TTL is, once you are sure all is OK

On Old DNS Servers

  • Change IP addresses to the new server to catch DNS stragglers

2 Days Post Move (or when convenient)

  • Remove DNS records from the OLD DNS servers (assuming they are still up)

Understanding Robots.txt

Robots.txt Basics

One of the most overlooked items related to your website is a small, unassuming text file called robots.txt. This simple text file has the important job of telling web crawlers (including search engine spiders) which files they may access on your site.

Also known as “A Standard for Robot Exclusion”, the robots.txt file gives the site owner the ability to request that spiders not access certain areas of the site. The problem arises when webmasters accidentally block more than they intend.

At least once a year I get a call from a frantic site owner telling me that their site was penalized and is now out of Google, when, as it often turns out, they blocked the site from Google themselves via their robots.txt.

An advantage of being a long-time search marketer is that experience teaches you where to look when sites go awry. Interestingly, people are always looking for a complex reason for an issue when, more times than not, it is a simpler, more basic problem.

It’s a situation not unlike the printing press company hiring the guy who knew which screw to turn. Eliminate the simple things that could be causing the problem before you jump to the complex. With this in mind, one of the first things I always check when I am told a site is having a penalty or crawling issues is the robots.txt file.

Accidental Blockage by Way of Robots.txt
This is often a self-inflicted wound that causes many webmasters to want to pound their heads into their desks when they discover the error. Sadly, it happens to companies small and big including publicly traded businesses with a dedicated staff of IT experts.

There are numerous ways to accidentally alter your robots.txt file. Most often it occurs after a site update when the IT department, designer, or webmaster rolls up files from a staging server to a live server. In these instances, the robots.txt file from the staging server is accidentally included in the upload. (A staging server is a separate server where new or revised web pages are tested prior to uploading to the live server. This server is generally excluded from search engine indexing on purpose to avoid duplicate content issues.)

If your robots.txt excludes your site from being crawled, this won’t force removal of pages from the index, but it will keep polite spiders from fetching those pages and parsing their content. (Pages that are blocked may still appear in the index if they are linked to from other places.) You may think you did something wrong that got your site penalized or banned, but it’s actually your robots.txt file telling the engines to go away.

How to Check Your Robots.txt
How do you tell what’s in your robots.txt file? The easiest way to view your robots.txt is to go to a browser and type your domain name followed by a slash then “robots.txt.” It will look something like this in the address bar:

http://www.yourdomainname.com/robots.txt

If you get a 404 error page, don’t panic. The robots.txt file is actually optional. It is recommended by most engines but not required.

You can also log into your Google Webmaster Tools account and Google will tell you which URLs are being restricted from indexing.
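
A third option is to check programmatically. Python’s standard library includes a parser for the robots exclusion standard, so a few lines will tell you whether a given URL is blocked (the domain below is the same placeholder as above):

# Fetch a site's robots.txt and test whether a crawler is allowed to fetch a URL.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.yourdomainname.com/robots.txt")
rp.read()  # downloads and parses the file

print(rp.can_fetch("*", "http://www.yourdomainname.com/"))  # False means all robots are blocked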

You have a problem if your robots.txt file says:
User-agent: *
Disallow: /

A robots.txt file that contains the text above is excluding ALL robots – including search engine robots – from indexing the ENTIRE site. Unless you are working on a staging server, you don’t normally want to see this on a site live on the web.

How to Keep Areas of your Site From Being Indexed
There may be certain sections you don’t want indexed by the engines (such as an advertising section or your log files). Fortunately, you can selectively disallow them. A robots.txt that disallows the ads and logs directories would be written like this:
User-agent: *
Disallow: /ads
Disallow: /logs

The disallow statements shown above only keep robots away from the directories listed. Note that the protocol is pretty simplistic: it does a text comparison of the URL’s path against the Disallow: strings, and if the front of the URL matches the text on a Disallow: line (a “head” match), then the URL is not fetched or parsed by the spider.
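
In other words, the matching is nothing more than a prefix comparison against the URL path. A rough sketch of that head-match logic in Python (an illustration of the rule, not the code any particular crawler actually runs):

# Illustrate the "head match": a path is blocked if it starts with a Disallow value.
disallow_rules = ["/ads", "/logs"]

def is_blocked(path):
    return any(path.startswith(rule) for rule in disallow_rules)

print(is_blocked("/ads/banner.gif"))  # True
print(is_blocked("/adserver.html"))   # True - "/ads" head-matches "/adserver.html" as well
print(is_blocked("/blog/ads-post"))   # False - no head match

Note the second example: without a trailing slash, Disallow: /ads also blocks /adserver.html, which is exactly the kind of surprise this simplistic matching can produce.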

Many errors are introduced because webmasters think the robots.txt format is smarter than it really is. For example, the basic version of the Protocol does NOT allow:

  • Wildcards in the Disallow: line
  • “Allow:” lines

Google has expanded on the original format to allow both of these options, but they are not universally accepted, so it is recommended that these expansions ONLY be used in a record for a “User-agent:” run by Google (e.g. Googlebot, Googlebot-Image, Mediapartners-Google, AdsBot-Google).
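
For example, a record aimed only at Googlebot could use both extensions, while the generic rules for everyone else stay wildcard-free:

User-agent: Googlebot
Disallow: /ads/
Allow: /ads/public/
Disallow: /*.pdf$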

Does the robots.txt Restrict People From Your Content?
No, it only requests that spiders not walk through and parse the content for their index. Some webmasters falsely think that disallowing a directory in the robots.txt file protects the area from prying eyes. The robots.txt file only tells robots what to do, not people (and the standard is voluntary, so only “polite” robots follow it). If certain files are confidential and you don’t want them seen by other people or competitors, they should be password protected.

Note that the robots exclusion standard is a “please don’t parse this page’s content” standard. If you want the content removed from the index, you need to include a Robots noindex Meta tag on each page you want removed from the index.
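
The tag itself is a single line in the page’s head section:

<meta name="robots" content="noindex">

Keep in mind the spider has to be able to fetch the page to see that tag, so don’t block the same page in robots.txt at the same time.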

Check robots.txt First
The good news is that if you accidentally blocked your own site, it is easy to fix now that you know to look at your robots.txt file first. Little things matter online. To learn more about the robots.txt file, see http://www.robotstxt.org.