Advanced SEO

How to Perform a Technical SEO Audit 

Performing a technical SEO audit of a website is one of the core functions of an SEO professional. An audit helps you assess the technical aspects of a website and identify places where you can optimize or make improvements to the site’s performance, crawlability, and search visibility. 

 Depending on the type of website and its size, your approach to a technical audit may differ. However, you can use the following framework as a starting point for any type of technical SEO audit. 

First, make sure you have the necessary tools in place. The first tool at your disposal is free and comes from Google itself: Google Search Console (or GSC for short). In addition to Google Search Console, you may want to use a website crawler, like Screaming Frog; a dedicated SEO tool, like SEMrush, Moz, or Ahrefs; and an analytics tool, like Google Analytics or HubSpot. We won’t review these tools, since the tools you use depend on your individual situation. Instead, we’ll focus on the freely available tools provided by Google. 

To get started, break down your technical SEO audit into the following sections:  

• Crawlability and indexation  

• Duplicate content 

• Site speed  

• Mobile friendliness  

• Structured data  

• JavaScript rendering 

First, identify issues with crawlability and indexation. This step involves making sure that Google (and other search engines) can properly crawl and index your website. In Google Search Console, navigate to Settings and open your Crawl Stats report. Here, you’ll see a crawl request trend report, host status details, and a crawl request breakdown. 

You want to focus on ensuring you receive an “OK (200)” response, which means Google was able to successfully crawl your page. For any pages that didn’t return a 200 response, try to determine why the crawl failed. For example, why did a page return a “Not found (404)” response? Has it been deleted and therefore should be removed from your sitemap? 
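If you want to spot-check responses outside of Google Search Console, a short script can fetch each URL in your sitemap and flag anything that doesn’t return a 200. The sketch below is only a rough illustration: it assumes the requests library is installed, and the sitemap URL is a placeholder for your own.

```python
# Rough sketch: flag sitemap URLs that don't return an "OK (200)" response.
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder sitemap
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

sitemap = requests.get(SITEMAP_URL, timeout=10)
urls = [loc.text for loc in ET.fromstring(sitemap.content).iter(f"{NS}loc")]

for url in urls:
    response = requests.head(url, allow_redirects=False, timeout=10)
    if response.status_code != 200:
        # 404s may need to be removed from the sitemap; 301s may need updating.
        print(f"{response.status_code}  {url}")
```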

Next, check out the Coverage report in GSC. Here, you’ll see the overall status of the indexed pages on your site, along with a helpful list of indexation errors. Since Google wasn’t able to index these pages, they won’t appear in Google search. In some cases, this is intentional. For example, you may want a page, like a private membership page, to stay out of search results, which you can do by tagging it with a “noindex” directive. 
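A quick way to confirm a page is deliberately kept out of the index is to check for a noindex directive in its meta robots tag or in its X-Robots-Tag response header. The sketch below assumes the requests library is installed and uses a hypothetical members-only URL.

```python
# Rough sketch: check whether a page carries a noindex directive.
import re
import requests

url = "https://www.example.com/members-area"  # hypothetical private page
response = requests.get(url, timeout=10)

header_noindex = "noindex" in response.headers.get("X-Robots-Tag", "").lower()
meta_noindex = bool(
    re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', response.text, re.I)
)

if header_noindex or meta_noindex:
    print("Page is intentionally excluded from the index (noindex).")
else:
    print("Page is indexable; if it shows up as a Coverage error, investigate why.")
```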

The final step in identifying issues with crawlability and indexation is to review your robots.txt file. The robots.txt file allows you to exclude parts of your site from being crawled by search engine spiders. You can usually find it by appending /robots.txt to your domain (e.g., yourwebsite.com/robots.txt). You can also use Google’s free robots.txt testing tool, linked in the resources. 
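You can also test your robots.txt rules yourself with Python’s standard library, which ships with a robots.txt parser. The sketch below uses a placeholder domain and a handful of example paths; swap in your own URLs to see what Googlebot is allowed to fetch.

```python
# Rough sketch: test which paths your robots.txt blocks for Googlebot.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # placeholder domain
parser.read()

for path in ["/", "/blog/", "/members-area/", "/search?q=test"]:
    allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
    print(f"{'allowed' if allowed else 'BLOCKED'}: {path}")
```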

Depending on what you find during this stage, you’ll likely need to work with your developer or system administrator to resolve some of the issues as they relate to the sitemap and robots.txt file. 

Now, it’s time to identify any duplicate content issues on your site. Duplicate content means there’s similar or identical information appearing on different pages of your site. This can confuse search engine crawlers, leading to pages performing poorly in search results. 

A site typically suffers from duplicate content when there are several versions of the same URL. For example, your website may have an HTTP version, an HTTPS version, a www version, and a non-www version. To fix this issue, be sure to add canonical tags to duplicate pages and set up 301 redirects from your HTTP pages to your HTTPS pages. 

Canonical tags tell search engines to crawl the page containing the primary version of your content. This is the “master page” — the page you want to point search engines to crawl, index, and rank. You may need to work with a developer to implement canonical tags and redirect your HTTP site to your HTTPS site. 
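To see how your URL variants currently behave, you can fetch each version and compare where it redirects and which canonical it declares. This is a rough sketch assuming the requests library is installed; the domain is a placeholder, and the regular expression is a simplification of real HTML parsing. Ideally, all four variants end up at the same HTTPS URL and point to the same canonical.

```python
# Rough sketch: compare redirects and canonical tags across common URL variants.
import re
import requests

variants = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for url in variants:
    response = requests.get(url, allow_redirects=True, timeout=10)
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)',
        response.text,
        re.I,
    )
    canonical = match.group(1) if match else "none"
    # response.url is where the redirect chain ends, if there is one.
    print(f"{url} -> {response.url}  canonical: {canonical}")
```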

Next, review your site speed. There are several common fixes to optimize page speed, like compressing images, minifying code files, using browser caching, minimizing HTTP requests, and using a content delivery network (CDN). We’ll cover several of these in more depth below. 

Then, check mobile friendliness. Start by running your pages through Google’s Mobile-Friendly Test, which is linked in the resources section. This test will show you whether your page is mobile-friendly. You can also view mobile-friendliness stats for your entire website in Google Search Console’s Mobile Usability report. Below this report, you’ll see specific mobile usability errors that you can resolve to make your website more mobile-friendly. 

Having your mobile site match your desktop site is also important. Why? Google started employing “mobile-first” indexing in 2021, meaning it crawls and indexes the mobile version of your site rather than the desktop version. As a result, if content is missing from the mobile version of your site, it may not get indexed the way it would on desktop. 

Make sure this doesn’t happen by checking that these elements are the same across mobile and desktop:  

• Meta robots tags (e.g., index, nofollow)  

• Title tags  

• Heading tags  

• Page content  

• Images  

• Links 

• Structured Data 

• Robots.txt access 

Both responsive design and consistency across mobile and desktop naturally create a better user experience. But be sure to also check that there’s nothing interfering with people’s ability to navigate your site on mobile devices, like intrusive pop-ups. 
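One way to spot-check parity is to fetch a page with a desktop and a mobile user agent and compare a few of the elements listed above. The sketch below assumes the requests library is installed; the URL and user-agent strings are placeholders, and it won’t catch differences produced by JavaScript, which a full crawler would.

```python
# Rough sketch: compare title, meta robots, and heading count between
# desktop and mobile versions of a page.
import re
import requests

URL = "https://www.example.com/"  # placeholder page
USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "mobile": "Mozilla/5.0 (Linux; Android 13; Pixel 7) Mobile",
}

def summarize(html: str) -> dict:
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    robots = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)', html, re.I
    )
    return {
        "title": title.group(1).strip() if title else None,
        "meta_robots": robots.group(1) if robots else None,
        "headings": len(re.findall(r"<h[1-6][^>]*>", html, re.I)),
    }

results = {
    name: summarize(requests.get(URL, headers={"User-Agent": ua}, timeout=10).text)
    for name, ua in USER_AGENTS.items()
}

for key in results["desktop"]:
    if results["desktop"][key] != results["mobile"][key]:
        print(f"Mismatch on {key}: {results['desktop'][key]!r} vs {results['mobile'][key]!r}")
```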

According to Google, “Structured data is a standardized format for providing information about a page and classifying the page content.” For example, a recipe page may include a structured format for the ingredients, cooking time and temperature, and calories. 

Google uses this structured data to better understand the contents of a page and potentially display it in rich snippets in the SERPs. Simply having structured data markup on your page doesn’t guarantee that Google will display a rich result for your site, but it gives you a chance to get featured. 
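Before running pages through Google’s testing tools, you can do a quick sanity check that your structured data at least parses as valid JSON-LD. The sketch below assumes the requests library is installed and uses a hypothetical recipe URL; it is not a substitute for the Rich Results Test.

```python
# Rough sketch: extract JSON-LD blocks from a page and confirm they parse.
import json
import re
import requests

url = "https://www.example.com/recipes/banana-bread"  # hypothetical recipe page
html = requests.get(url, timeout=10).text

blocks = re.findall(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    html,
    re.I | re.S,
)

for i, block in enumerate(blocks, start=1):
    try:
        data = json.loads(block)
        kind = data.get("@type", "missing") if isinstance(data, dict) else "list of items"
        print(f"Block {i}: valid JSON, @type = {kind}")
    except json.JSONDecodeError as error:
        print(f"Block {i}: malformed JSON ({error})")
```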

If you have existing structured data on your site, make sure it’s displaying properly by running it through Google’s Structured Data Testing Tool and their Rich Results Test. Spotting errors in your website’s code requires practice, especially if you’re not a regular web developer. Whenever you see errors flagged in one of your testing tools, check the code and try to identify what went wrong.  

According to SEMrush, “Diagnosing technical SEO issues goes much faster and is far less frustrating when your code is clean and correct.” This most likely means you’ll have to work with your web developer to fix or implement the structured data on your site. 

Reducing Page Size and Increasing Load Speed 

The first criterion to look at when optimizing your website is the overall page load speed. Load speed is the time it takes to fully display the content on a specific page. How fast should a page load? Best-in-class webpages should become interactive within 5.3 seconds. That’s about how long people are willing to wait before they start clicking the back button and looking for a different site that loads faster.  

When diagnosing your website’s load speed, there are a few other metrics to look at, including First Contentful Paint and Time to Interactive. First Contentful Paint (FCP) is the time, in seconds, it takes for the first text or images to be shown to users. Time to Interactive (TTI) is the time it takes for the page to become reliably responsive to user interactions, such as clicks, within 50 milliseconds. FCP and TTI are growing in popularity because being able to see content on a page and interact with it is closer to how users experience site speed than total page load time. That said, improving load speed can help you improve FCP and TTI as well. 
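You can pull these metrics programmatically from Google’s PageSpeed Insights API rather than running each page by hand. The sketch below assumes the requests library is installed; the URL is a placeholder, and the audit keys reflect current Lighthouse naming, which may change over time.

```python
# Rough sketch: fetch lab metrics for a page from the PageSpeed Insights API (v5).
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://www.example.com/", "strategy": "mobile"}  # placeholder URL

audits = requests.get(API, params=params, timeout=60).json()["lighthouseResult"]["audits"]

print("First Contentful Paint:", audits["first-contentful-paint"]["displayValue"])
print("Time to Interactive:   ", audits["interactive"]["displayValue"])
```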

There are many ways to improve your page load speed, including minification and compression. To understand minification and compression, we’re going to talk about HTML, CSS, and JavaScript. This will not only help you understand how to reduce your page load time and improve user experience on your website, but it’ll also help you sound knowledgeable in front of your web development team. 

A natural place to start when looking to increase your webpage’s load speed is to reduce the size of what’s actually being loaded. This is where minification comes in. Minification is the process of reducing resource size by removing unnecessary characters from the source code. These characters include whitespace, line breaks, and comments, which are useful for humans but unnecessary for machines. 

Here’s what happens when you minify HTML, CSS, and JavaScript: HTML minification removes all unnecessary characters from your markup; CSS minification removes unnecessary characters and comments from your stylesheet, the file that contains font and layout rules; and JavaScript minification does the same for your scripts. 
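To make the idea concrete, here’s a toy illustration of what CSS minification does. In practice you’d let a build tool or a dedicated minifier handle this; the snippet only shows the principle of stripping comments and collapsing whitespace.

```python
# Toy sketch: strip comments and collapse whitespace in a small CSS snippet.
import re

css = """
/* Primary heading styles */
h1 {
    color: #1a1a2e;
    font-size: 2rem;
}
"""

minified = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # drop comments
minified = re.sub(r"\s+", " ", minified)               # collapse whitespace
minified = re.sub(r"\s*([{};:])\s*", r"\1", minified)  # drop space around symbols
print(minified.strip())  # h1{color:#1a1a2e;font-size:2rem;}
```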

Limiting HTTP Requests and Maximizing Page Caching 

Every element that appears on a webpage needs to come from somewhere. An HTTP request is a request for information sent from the browser, like Chrome or Firefox, to the server, the remote computer that fulfills it. The server then delivers everything that needs to be displayed: text, images, styles, scripts, and everything else that makes a web page a web page. Whereas reducing the file sizes of your HTML, CSS, and JavaScript reduces your web page’s overall download size, reducing the number of HTTP requests reduces how often these downloads need to happen. Fewer requests mean your website can be displayed faster. If you want to create a high-performing page, aim for a maximum of around 30 requests. 
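To get a rough sense of how many requests a page triggers, you can count the scripts, stylesheets, and images referenced in its HTML. The sketch below assumes the requests library is installed; the URL is a placeholder, and anything loaded later by JavaScript won’t be counted, so your browser’s developer tools will give a more complete picture.

```python
# Rough sketch: estimate the number of HTTP requests a page triggers.
import re
import requests

url = "https://www.example.com/"  # placeholder page
html = requests.get(url, timeout=10).text

counts = {
    "scripts": len(re.findall(r"<script[^>]+src=", html, re.I)),
    "stylesheets": len(re.findall(r'<link[^>]+rel=["\']stylesheet["\']', html, re.I)),
    "images": len(re.findall(r"<img[^>]+src=", html, re.I)),
}
total = sum(counts.values()) + 1  # +1 for the HTML document itself

print(counts, "estimated requests:", total)
```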

We’ll cover four strategies you can use to reduce the number of HTTP requests (a sketch of the first strategy appears after the list):  

• Combine text resources  

• Combine image resources  

• Move render-blocking JavaScript  

• Reduce redirects 
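As an example of the first strategy, combining text resources can be as simple as concatenating several stylesheets into one file during your build, so the browser makes a single request instead of three. The file names below are hypothetical, and in practice a build tool would handle this (along with the minification discussed earlier) for you.

```python
# Rough sketch: combine several CSS files into one bundle to cut HTTP requests.
from pathlib import Path

stylesheets = ["reset.css", "layout.css", "theme.css"]  # hypothetical files

combined = "\n".join(Path(name).read_text() for name in stylesheets)
Path("site.bundle.css").write_text(combined)

print(f"Wrote site.bundle.css ({len(combined)} bytes) from {len(stylesheets)} files")
```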
