If you took the time to read through the first part of our SEO jargon guide, you're going to find this section's inclusions just as useful. In this second part of the guide, we'll look at some of the slightly lesser-known areas of SEO.

If you've been practising SEO for a few years, you'll have come across these terms already. If not, you're doing something wrong and should reassess how you go about your SEO campaigns.

For those of you that missed our first part, you can read our initial guide to SEO jargon for beginners here.

There is some must-know SEO jargon:

Whether you're an SEO working for an agency or an in-house consultant for a local business, there are some areas of SEO that you simply must know and understand.

Failing to understand the basics of SEO won't stop you from working on websites, but it will make ranking them a lot more difficult. Simply by implementing the basics of online marketing and SEO, you can see a quick increase in traffic and user retention.

We’re going to take you through the following:

  • The important response codes you should know
  • Why you should be using analytics software
  • What duplicate content is and how you can manage it
  • Understanding what a robots.txt file is
  • Becoming familiar with canonical URLs

Understanding the different response codes for SEO:

As an SEO, there are a few server response codes that you should become familiar with, because you're inevitably going to come across them sooner or later.

One of the most common response codes you'll find as an SEO is a 404 error, and you've probably already come across one of these without even realising. Recognise a page that looks something like this?

404 Error

A 404 page appears when the URL you're trying to reach is broken – the server can't find a page at that address. Often it means that the page you navigated from has a broken link somewhere on it.
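If you want to check what response code a particular URL returns, you can do it from the command line with curl – the URL below is just a made-up example:

  curl -I https://www.example.co.uk/missing-page

  # The first line of the response shows the status code, for example:
  # HTTP/1.1 404 Not Found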

The easiest way to identify a broken link is to use Screaming Frog's SEO Spider. It's a free tool that you can download to crawl websites, allowing you to identify broken links, duplicate titles and more.

Try to identify 404 errors as soon as you can and fix them ASAP – a broken link on your website will have a negative effect on your SEO. There are also two additional response codes that you should be aware of: 301 and 302.

These response codes are better known as redirections. A 301 redirect is the permanent redirection of a URL and is the most effective way to redirect a page, because it tells search engines that the move is for good.

A 302 redirect is a temporary relocation for a page or URL.

Redirect Examples
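How you set these up will depend on your server. As a rough sketch, on an Apache server you could add rules like these to your .htaccess file – the paths here are purely illustrative:

  # Permanent (301) redirect – the old page has moved for good
  Redirect 301 /old-services.html /services.html

  # Temporary (302) redirect – the page will return to its original address later
  Redirect 302 /summer-offer.html /coming-soon.html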

Make use of analytics software:

As a webmaster you'll come across terms such as analytics and analytics software, and it's certainly something you should implement if you haven't done so already.

The most common platform is Google Analytics – it's free and is probably the most used software amongst webmasters. You can create custom reports and assess the data collected by your website to help you make more informed decisions about how your site is structured.

It's worth assessing your website's performance with an analytics tool at least once a month – and if you have the time, monitor traffic levels when you post new articles or update content so you can see how it affects the way visitors behave.
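To start collecting that data, Google Analytics asks you to add its tracking snippet to every page of your site. As a rough sketch, the gtag.js snippet Google provides looks something like this – the G-XXXXXXXXXX measurement ID is a placeholder for the one in your own account:

  <!-- Google Analytics (gtag.js) – usually placed just before the closing </head> tag -->
  <script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'G-XXXXXXXXXX');
  </script>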

Manage duplicate content correctly:

Another term you'll come across on a regular basis is “duplicate content”, and it's something you need to take seriously, because how you manage it can have an impact on your site's performance on Google and other search engines.

If you use a CMS such as WordPress, it can be easy to duplicate content, whether that's in the body copy or in HTML elements throughout your code and styling.

Copyscape

Duplicate content is exactly as it sounds – content that appears both on your website and on another site. For example, your website may discuss the benefits of animal hygiene, and a similar article or piece of text saying much the same thing could sit on the RSPCA's site.

You can use free tools such as Copyscape to help you identify duplicate content and make the required changes before it starts to have a negative effect.

Knowing what a robots.txt file is:

In short, the robots.txt file is what tells the different search engine bots what they can and cannot crawl. When you launch a website, you would typically submit your sitemaps to Google Search Console so that your site can be indexed more quickly. However, there may be some pages on your website that you don't necessarily want indexed.

This is when you would optimise and submit your robots.txt file.

The robots file is also a convenient way to help reduce the amount of duplicate content on your site. If you know you have visible pages that carry similar content, you can add them to your robots file and disallow search bots from crawling them.

This is commonly used for pages such as privacy policies and terms and conditions – all valid pages that you want users to see on the site but not necessarily good for Google.
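As a rough sketch, a robots.txt file handling pages like these might look as follows – the paths and sitemap URL are made up for illustration and would need to match your own site:

  # robots.txt – lives at the root of your site, e.g. example.co.uk/robots.txt
  User-agent: *
  Disallow: /privacy-policy/
  Disallow: /terms-and-conditions/

  Sitemap: https://www.example.co.uk/sitemap.xml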

Knowing what a Canonical URL is:

A lot of new SEOs probably won't be familiar with canonical URLs and what they do, so hopefully this section will make life a little easier for you.

I'll use an example to explain things a little more easily. Let's say that you have a website called Daniels Dog Grooming. You might have the following URLs active:

  • danielsdoggrooming.co.uk
  • www.danielsdoggrooming.co.uk
  • danielsdoggrooming.co.uk/index.html

A canonical URL defines the preferred web address for your website and helps to prevent duplicate content from developing across variations like these.

To set a canonical URL correctly, you would place a rel="canonical" link on the other versions of the page so that Google knows which address is the preferred one – in this case, the "www." version.

For example:
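A sketch of the link element you might use (the https protocol here is an assumption – use whichever protocol your site is actually served over):

  <link rel="canonical" href="https://www.danielsdoggrooming.co.uk/" />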

You would place this in the <head> section of each of these pages so that it points to your preferred address – tying all three together.

This link would be added to both danielsdoggrooming.co.uk and danielsdoggrooming.co.uk/index.html, and all three web addresses would then be treated as one, with the “www.” address recognised as the preferred URL.