High authority expired domains are still a powerful tool to have in your SEO arsenal and you’re about to learn one way to find them using Scrapebox.
Scrapebox has been around for 6 years now and is the Swiss Army knife of SEOs worldwide. Since its inception it has been constantly updated, and it can be used for a huge array of online marketing tasks. I paid $47 for it back in 2010 and I can honestly say I've never got as much use and value from a single piece of software in my 10 years of marketing online.
In this step-by-step guide I’ll take you through one of the ways to find high authority expired domains using Scrapebox.
Ok, let’s get started….
First we need to create a seed list for Scrapebox.
Enter an industry related seed keyword into your country’s Google.
Now pick a URL from the organic results that we can use for the process.
Note: make sure you pick an organic listing and not an Adwords advertiser
For this example, we’ll be picking creativespark.co.uk
NOTE: if you haven't got a Google Adwords account, you can use any keyword tool for this step. A good free one to try for ideas is Ubersuggest. If you decide to use another keyword tool, jump down to step 7 and continue.
Ok, now log in to Adwords at http://adwords.google.com and open up the Keyword Planner.
So in this section we’re going to grab keyword ideas for the domain you picked earlier.
Click on ‘search for new keyword and ad group ideas’
Paste the domain name that you found in step 2 into the 'Your landing page' text box.
Note: If you searched google.co.uk in step 1, make sure the targeting is set to United Kingdom; if you searched google.com, set the targeting to United States, and so on.
Click Get ideas
Click the Download button and save the CSV file to your local drive. We're going to use this as the seed keywords for the scrape.
Open the csv file and select all the keywords from the Keyword column
Copy the keywords from the keyword column.
TIP: hold down CTRL & SHIFT and press the down arrow key to select all the keywords in the column
Now we're going to add some extra data to each line in your keyword list. We'll be adding quotes, the term "links" and a Julian date range (I'll explain this later).
First we are going to add quotes to the beginning of the line.
To do this first open Notepad ++ and paste the keywords into a new window
Place your cursor in the very top left hand side before the first character and open the find function using CTRL F
Click on the Replace tab
Enter ^ into the Find what box
Enter " (a double quote) into the 'Replace with' box
Make sure Regular expression is selected under search mode
Click Replace All
Now we’re going to add quotes to the end of the keywords so we get an exact match when searching in Google. To do this we’re going to replace the ^ in the ‘Find what’ with $
When you’re ready click Replace all
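If you'd rather script this step than use Notepad++, here's a minimal Python sketch that does the same thing as the two regex replaces (the function name is mine, not part of the article's workflow):

```python
def quote_keywords(keywords):
    # Wrap each keyword in double quotes so Google treats it as an
    # exact-match phrase. Equivalent to the Notepad++ regex replaces
    # ^ -> " and $ -> " in Regular expression mode.
    return ['"%s"' % k.strip() for k in keywords if k.strip()]

keywords = ["web design manchester", " branding agency "]
print(quote_keywords(keywords))
```

Paste the quoted output back into your seed list file as before.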
Save the file as Seed List <domainname>. So in this example the file name would be: ‘Seed List creative spark.txt’
NOTE: this step is only necessary if you want to scrape for older domains. If you decide to leave the date range step out you can still find good, powerful expired domains; they just might not be as old.
TIP: you can use inurl:links here instead of just "links", which will save you work in Step 14. However, you will need to reduce the number of threads in the Harvester to around 1 to 3, or you will get a lot of errors.
Ok, so first open up Scrapebox.
Now in this particular strategy we're going to look for old websites that have links or resource pages. To do this, we're first going to scrape URLs that have the word "links" in them. A lot of sites (especially older ones) used to have these types of pages, and they're very useful for finding aged expired domains.
So to find these aged domains we’re going to add a date range to our search query.
In this example, we are going to use 2000 to 2010.
For the Google search engine, the date range has to be added using Julian Date Format. To help you work out Julian date for your required range, use this Julian date conversion tool.
Once you have your date range sorted, enter the "links" daterange query into the Custom footprint box and make sure the 'Custom Footprint' option is selected. So in this example it would be:
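If you'd rather compute the Julian day numbers yourself instead of using a conversion tool, a short Python sketch (the function name and offset constant are mine) covering the 2000 to 2010 range used in this example:

```python
from datetime import date

# Difference between Python's proleptic Gregorian ordinal and the
# Julian day number (Jan 1, 2000 has JDN 2451545 and ordinal 730120).
JD_OFFSET = 1721425

def julian_day(d):
    """Julian day number for a calendar date."""
    return d.toordinal() + JD_OFFSET

start = julian_day(date(2000, 1, 1))
end = julian_day(date(2010, 12, 31))
# Combine with the quoted term for Google's daterange: operator,
# as described in the article.
print('"links" daterange:%d-%d' % (start, end))
```

This prints the footprint string ready to paste into the Custom footprint box.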
Copy the first 100 lines from the Seed List text file and paste them into the Keywords window.
NOTE: you don't have to do 100 lines; you can add as many as you like. I just find the scraping process starts to slow down a lot past around 50, due to the custom footprint and only using 20 proxies.
Make sure Use Proxies is selected.
Private Proxies are highly recommended to get the best results with less hassle.
TIP: test using IP authentication with your private proxies. I've found some of them don't work as well when user/password authentication is used.
Click Start Harvesting
Make sure just Google is selected in the available search engines
Now sit back and let Scrapebox do its stuff.
Once the harvest is complete, you will see a window like the one above.
Click Exit to Main
We want to remove all the duplicates so click on Remove/Filter and select Remove Duplicate URLs
Right click inside the URL window in Scrapebox and Select Copy All URLs to Clipboard
From this section on I use a spreadsheet to organise and sort through all the URLs which you can download below…
Open up the spreadsheet
Make sure the MASTER TAB is selected and paste the URLs into the URL column
In the next few stages we are going to tidy up this list before we scrape the link or resource pages for outbound links.
First we need to create and apply a filter where the URL contains the word – links
Next, paste the URLs into the contains links TAB URL column
You'll notice in the URL list that there are entries that are WordPress 'tag' pages. We'll need to remove these by creating a filter where the URL contains the word - tag.
Important: Check the results to make sure they are /tag/ URLs and it doesn’t just have the word tag in the domain. Remove these rows by selecting them and hitting the Delete key
Clear the filter and then sort the Column A-Z using the filter button again. This will make the next step quicker.
Remove any URLs that are not link or resource pages.
i.e. link pages will look like /links/, /useful-links/, /resource-links/ etc.
In the example above you can see this page is about adding links to a sidebar and not a link or resources page, so this would be removed.
Delete all these kinds of non-'link' pages by right clicking on the row number and selecting Delete.
Note: this is an optional step; I like to do it so that I'm left with just the link/resource pages.
If you're unsure, click the URL and check out the page in a browser to see if it's a 'links' page or not.
Also remove URLs for pdf’s, facebook.com, linkedin.com, econsultancy.com which can be done easily using a filter.
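The spreadsheet filtering above can also be scripted. A rough Python sketch of the same clean-up (the function name and blocklist are illustrative, taken from the examples in this section):

```python
from urllib.parse import urlparse

# Hosts the article suggests filtering out.
UNWANTED_HOSTS = ("facebook.com", "linkedin.com", "econsultancy.com")

def clean_urls(urls):
    kept = []
    for url in urls:
        parsed = urlparse(url)
        path = parsed.path.lower()
        if "links" not in path:                 # keep only link/resource pages
            continue
        if "/tag/" in path:                     # drop WordPress tag archives
            continue
        if path.endswith(".pdf"):               # drop PDFs
            continue
        if any(h in parsed.netloc for h in UNWANTED_HOSTS):
            continue
        kept.append(url)
    return sorted(set(kept))                    # dedupe and sort A-Z

urls = [
    "http://example.com/useful-links/",
    "http://example.com/tag/links/",
    "http://example.com/links.pdf",
    "http://facebook.com/links/",
]
print(clean_urls(urls))
```

You'd still want to eyeball borderline pages in a browser, as described above.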
Once you have cleaned the URL list, copy the whole URL column by holding down SHIFT & CTRL and pressing the down arrow key. This will select all rows in that column with data in them.
Paste these into a text file, then copy and paste them into Scrapebox. I find this extra step via a text file saves issues with the next step.
Remove duplicate URLs
Open the Link Extractor by going to Addons and selecting Scrapebox Link Extractor 64bit
Click the Load button and select Load URL list from Scrapebox Harvester
Make sure the External option is selected
Once it's finished, click Show save folder
Open up the output text file in a text editor
Copy all the URLs from the link extractor output file and paste them into the SB harvester, replacing what's there
Click Trim and then Trim to Root
Also under the Trim menu click Remove Subdomain from URLs
Remove Duplicate domains
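The Trim to Root, Remove Subdomain and Remove Duplicate Domains steps can be approximated in Python if you ever need to do this outside Scrapebox. A naive sketch (function name is mine; note the caveat about multi-part TLDs in the comments):

```python
from urllib.parse import urlparse

def trim_to_root_domain(urls):
    domains = set()
    for url in urls:
        host = urlparse(url).netloc.lower().split(":")[0]
        if host.startswith("www."):
            host = host[4:]
        # Naive "remove subdomain": keep the last two labels.
        # This is wrong for multi-part TLDs like .co.uk -- Scrapebox
        # handles those; use a public-suffix library for real work.
        parts = host.split(".")
        if len(parts) > 2:
            host = ".".join(parts[-2:])
        domains.add(host)                      # set() removes duplicates
    return sorted(domains)

urls = [
    "http://www.example.com/links/",
    "http://blog.example.com/resources/",
    "http://other.net/",
]
print(trim_to_root_domain(urls))
```

For the .co.uk-style cases, stick with Scrapebox's built-in Trim menu.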
This section will depend on which type of domains you're looking for. For example, if you want web 2.0s then you would leave those in the list; if you want .in domains then you would leave those in, and so on.
I had a piece of software coded that removes all non-relevant domains in a flash. It can be done manually but it just takes a lot more time. If you want a copy of the software hit me up.
Ok, now Right click on the URL Harvester window and Copy All URLs to Clipboard
Open up the spreadsheet and select the ‘Cleaned’ TAB
Paste the URLs into the URL column
It's important to note: when removing multiple lines within a filter, select them and use the Delete key. DO NOT remove them via the method used above in step #17.
TIP: after each removal below, clear filter from URL and sort the column A-Z again
Here are examples of some of the other domains I remove from the list, domains that end in: .weebly.com, .wordpress.com, hop.clickbank.net, .tumblr.com, .webgarden.com, .livejournal.com, .webs.com, .edu, .yolasite.com, .moonfruit.com, .bravesites.com, .webnode.com, .web-gratis.net, .tripod.com, typepad.com, blogs.com, rinoweb.com, jigsy.com, google.com, squarespace.com, hubspot.com, .forrester.com
NOTE: the sub domains above can be stored and checked separately to create web 2.0 lists if you like.
Also check through the list and remove any that are:
– the wrong syntax.
– common domains that you know are not going to be free, i.e. facebook.com, linkedin.com, searchengineland.com etc.
– just IP addresses
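These manual removals can also be scripted. A rough Python filter covering the three checks above plus the web 2.0 endings (the lists and patterns here are illustrative, based on the examples in this section):

```python
import re

# Subdomain endings and blocked domains, taken from the examples above.
WEB20_SUFFIXES = (".weebly.com", ".wordpress.com", ".tumblr.com", ".tripod.com")
BLOCKLIST = {"facebook.com", "linkedin.com", "searchengineland.com"}

IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")          # bare IP addresses
DOMAIN_RE = re.compile(r"^[a-z0-9-]+(\.[a-z0-9-]+)+$")  # basic syntax check

def filter_candidates(domains):
    kept = []
    for d in domains:
        d = d.strip().lower()
        if IP_RE.match(d):                   # just an IP address
            continue
        if d in BLOCKLIST:                   # never going to be free
            continue
        if d.endswith(WEB20_SUFFIXES):       # web 2.0 subdomains
            continue
        if not DOMAIN_RE.match(d):           # wrong syntax
            continue
        kept.append(d)
    return kept
```

Remember the web 2.0 subdomains you strip out here can be saved to a separate list, as noted above.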
Scrapebox has its own domain availability checker, which has come on massively since I first published this post last year. I've been able to check tens of thousands of domains with it in one batch, so this is all I use now.
(If you don't trust the Scrapebox checker you can use other bulk checkers like Dynadot's, which allows you to check 1,000 domains at a time. It will only do about 5 batches before it hits you with an over-usage message and makes you wait about 30 minutes.)
Copy all the URLs from the ‘Cleaned’ TAB into a text file and then copy/paste into Scrapebox
Click Grab / Check and select Check Unregistered Domains
Once you’re happy, click Start
TIP: sometimes SB will return 0/0 for Pass 2 (WHOIS). If you've checked a lot of domains and this happens, close the availability checker, re-open it and try again.
When it's finished, click Export and then select Export Available Domains. I also save the unavailable domains too, so these can be double checked.
Open up the excel Worksheet Template and select the Availability Check TAB
Right click in the top left hand cell and select Paste Special. Then choose the paste as text option.
Tidy the first row up by using a simple cut/paste into the next cell along, then delete the first column.
For this section you will need a paid version of Majestic.
If you don’t have this you can try the free version of SEO Profiler which will at least allow you to check out the quality of the backlinks.
Note: you can copy and paste, or just create a file with all the available domains in and upload it. For this example we will be using the copy/paste 150 method.
Login to Majestic and go to Tools | Link Map Tools | Bulk Backlinks
Paste the first 150 rows into the window
Sort results by: referring domains (you don't have to do this, as the data can be sorted on the next window)
Now everyone has their own thoughts on how to check out the strength of a domain. This could be a whole blog post on its own, so for the time being I’ll just cover the main points.
The boundaries you set for lowest Trust Flow, Citation Flow etc. are a personal preference, I think. If you're after a guideline, I normally look for domains that have >10 referring domains, TF 15+, and a TF/CF ratio >0.75.
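As a quick sanity check when working through a big export, that guideline can be expressed as a tiny Python helper (the function name and thresholds are just my encoding of the rule of thumb above; tune them to taste):

```python
def passes_metrics(ref_domains, trust_flow, citation_flow,
                   min_ref=10, min_tf=15, min_ratio=0.75):
    """Guideline filter: >10 referring domains, TF 15+, TF/CF > 0.75.
    These thresholds are a personal preference, not a hard rule."""
    if citation_flow == 0:
        return False  # avoid division by zero on unranked domains
    return (ref_domains > min_ref
            and trust_flow >= min_tf
            and trust_flow / citation_flow > min_ratio)
```

A domain that passes this still needs its backlinks checked by hand, as covered below.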
So if you look at the image above you’ll see there are a few that would warrant further investigation. I haven’t registered these so check them out and if they’re good and you’re quick you could pick them up 🙂
Make sure you check the TF for the root domain, subdomain and full path on each domain that looks good.
You can do this quickly by hovering your mouse over the little cog icon, right clicking on Site Explorer and opening it in a new browser tab.
Make sure you check out the backlinks. Do they still exist? Are they spammy? Are they contextual?
Quite often low TF domains can still have really good contextual backlinks so Trust Flow should NOT be your ultimate guide. Make sure you ALWAYS check out the quality of the backlinks.
A lot of people won’t do this because of the time it takes, don’t be one of them else you can end up with bad domains.
Dig down a little deeper into the backlink profile for each domain. Scrolling down on the Site Explorer will show you a quick overview of the anchor text, so you can make sure it's not over-optimised.
Also click the Backlinks tab and physically look at some of their backlinks to make sure they’re not pure spam.
Just to be doubly sure, backlinks can also be checked in Ahrefs.
Once you have found a domain that has good metrics and authority head on over to http://web.archive.org/ and check what the site looked like in the past.
You're looking for a site that was a legitimate business. Personally, I stay away from anything that was Chinese or sold dodgy fake gear; these are obviously bad news. Have a good look through the backlinks tab and make sure the links aren't spammy. Also check that the links are still live.
Once you’ve found a domain with solid metrics head on over to your favourite registrar and get it registered.
That’s it for this first method which is just one way that I use Scrapebox to find high authority, expired domains.
Download a pdf version of this step-by-step guide below.
Hope you found this guide useful. Feel free to like and share it or leave a comment below.
Oh and make sure you subscribe to my blog so you don’t miss the next strategy for finding killer expired domains with Scrapebox.
Founder and certified web marketing geek since 2005