How to Find High Authority Expired Domains Using Scrapebox

High authority expired domains are still a powerful tool to have in your SEO arsenal and you’re about to learn one way to find them using Scrapebox.

 


Scrapebox has been around for 6 years now and is the Swiss Army knife of SEOs worldwide. Since its inception it has been constantly updated, and it can be used for a huge array of online marketing related tasks. I paid $47 for it back in 2010 and I can honestly say I've never got as much use and value from a piece of software in my 10 years of marketing online.

In this step-by-step guide I’ll take you through one of the ways to find high authority expired domains using Scrapebox.

Ok, let’s get started….

 

1. Enter primary seed keyword into Google


 

First we need to create a seed list for Scrapebox.

Enter an industry related seed keyword into your country’s Google.

 

2. Pick a website to pull keywords from

 


Now pick a URL from the organic results that we can use for the process.

Note: make sure you pick an organic listing and not an AdWords advertiser.

For this example, we’ll be picking creativespark.co.uk

 

3. Open Adwords Keyword Planner

 


 

NOTE: if you don't have a Google AdWords account, you can use any keyword tool for this step. A good free one to try for ideas is Ubersuggest. If you decide to use another keyword tool, jump down to step 7 and continue.

OK, now log in to AdWords at http://adwords.google.com and open up the Keyword Planner.

 

4. Search for new keyword and ad group ideas

 


So in this section we’re going to grab keyword ideas for the domain you picked earlier.

Click on ‘search for new keyword and ad group ideas’

 

4b. Search for new keyword and ad group ideas


 

Paste the domain name that you found in step 2 into the 'Your landing page' text box.

Note: If you searched google.co.uk in step 1, make sure the targeting is set to United Kingdom. If you searched google.com, set the targeting to United States, and so on.

Click Get ideas.

 

5. Download the list


Click the Download button and save the CSV file to your local drive. We're going to use this as the seed keywords for the scrape.

 

6. Open CSV and copy the keywords


 

Open the CSV file, select all the keywords in the Keyword column, and copy them.

TIP: hold down CTRL & SHIFT and press the down arrow key to select all the keywords in the column

 

7. Prepare the Master Seed list for Scrapebox


 

Now we're going to add some extra data to each line in your keyword list. We'll be adding quotes around each keyword, plus the term "links" and a Julian date range (I'll explain this later).

First we are going to add a quote to the beginning of each line.

To do this, open Notepad++ and paste the keywords into a new window.

Place your cursor at the very top left, before the first character, and open the Find function using CTRL+F.

Click on the Replace tab

Enter ^ into the Find what box

Enter ” into the Replace with box

Make sure Regular expression is selected under search mode

Click Replace All

 

7a. Prepare the Master Seed list for Scrapebox

 

Now we're going to add quotes to the end of the keywords so we get an exact match when searching in Google. To do this, we're going to replace the ^ in the 'Find what' box with $.

When you’re ready click Replace all

Save the file as Seed List <domainname>. So in this example the file name would be: ‘Seed List creative spark.txt’
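
If you'd rather script steps 6 to 7a than do the Notepad++ find/replace, here's a minimal Python sketch that reads the Keyword Planner export and writes each keyword wrapped in quotes. The column header ("Keyword") and both file names are assumptions, so adjust them to match your own export.

```python
import csv

# Read the Keyword Planner export (assumed column header: "Keyword") and
# wrap every keyword in quotes for exact-match searching, one per line.
with open("keyword_planner_export.csv", newline="", encoding="utf-8") as src:
    keywords = [row["Keyword"].strip() for row in csv.DictReader(src) if row.get("Keyword")]

with open("Seed List creative spark.txt", "w", encoding="utf-8") as out:
    for keyword in keywords:
        out.write(f'"{keyword}"\n')
```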

7b. Set the Custom Footprint


 

NOTE: This step is only necessary if you want to scrape for older domains. If you decide to leave the daterange step out you can still find good, powerful expired domains; they just might not be as old.

TIP: you can use inurl:links here instead of just "links", which will save you work in Step 14. However, you will need to reduce the number of threads in the Harvester to around 1 to 3, or else you will get a lot of errors.

Ok, so first open up Scrapebox.

Now in this particular strategy, we're going to look for old websites that have links or resource pages. To do this, first we're going to scrape URLs that have the word "links" in them. A lot of sites (especially older ones) used to have these types of pages and they're very useful for finding aged expired domains.

So to find these aged domains we’re going to add a date range to our search query.

In this example, we are going to use 2000 to 2010.

For the Google search engine, the date range has to be added using Julian Date Format. To help you work out the Julian dates for your required range, use a Julian date conversion tool, or the short script below.

Once you have your date range sorted, enter the "links" daterange into the Custom footprint box and make sure the 'Custom Footprint' option is selected. So in this example it would be:

“links” daterange:2451552-2455198
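
If you'd prefer to calculate the Julian dates yourself rather than use an online converter, a short Python sketch like this produces the day numbers the daterange: operator expects (the fixed offset turns Python's proleptic Gregorian ordinal into a Julian Day Number):

```python
from datetime import date

def julian_day(d: date) -> int:
    # Julian Day Number = proleptic Gregorian ordinal + fixed offset
    return d.toordinal() + 1721425

start = julian_day(date(2000, 1, 1))   # 2451545
end = julian_day(date(2010, 1, 1))     # 2455198
print(f'"links" daterange:{start}-{end}')
```

The end value matches the 2455198 used above; the exact start number simply depends on which day in 2000 you convert.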

 

8. Paste keywords into Scrapebox


 

Copy the first 100 lines from the Seed List text file and paste them into the Keywords window.

NOTE: you don't have to do 100 lines; you could add as many as you like. I just find the scraping process starts to slow down a lot when you get past around 50, due to the custom footprint and only using 20 proxies.

Make sure Use Proxies is selected.

Private Proxies are highly recommended to get the best results with less hassle.

TIP: test using IP authentication with your private proxies. I've found some of them don't work as well when user/password authentication is used.

Click Start Harvesting

 

9. Start Harvesting


 

Make sure just Google is selected in the available search engines

Click Start

Now sit back and let Scrapebox do its stuff.

 

10. Harvest Complete


 

Once the harvest is complete, you will see a window like the one above.

Click Exit to Main

 

11. Remove duplicates


 

We want to remove all the duplicates so click on Remove/Filter and select Remove Duplicate URLs

 

12. Copy URLs to the clipboard


 

Right-click inside the URL window in Scrapebox and select Copy All URLs to Clipboard.

 

13. Paste them into the spreadsheet


 

From this section on I use a spreadsheet to organise and sort through all the URLs which you can download below…

Open up the spreadsheet

Make sure the MASTER TAB is selected and paste the URLs into the URL column

 

14. Filter URLs that contain the word links


 

In the next few stages we are going to tidy up this list before we scrape the link or resource pages for outbound links.

First we need to create and apply a filter where the URL contains the word – links

 

15. Paste results into ‘contains links’ tab


 

Next, paste the URLs into the URL column of the 'contains links' tab.

 

16. Remove ‘tag’ URLs


 

You'll notice in the URL list that there are entries that are WordPress 'tag' pages. We'll need to remove these by creating a filter where the URL contains the word – tag.

Important: Check the results to make sure they really are /tag/ URLs and don't just have the word tag in the domain. Remove these rows by selecting them and hitting the Delete key.

Clear the filter and then sort the column A-Z using the filter button again. This will make the next step quicker.

 

17. Remove any URLs that are not link pages


 

Remove any URLs that are not link or resource pages,
e.g. link pages will look like /links/ OR /useful-links/ OR /resource-links and so on.

In the example above you can see this page is about adding links to a sidebar and not a link or resources page, so this would be removed.

Delete all these kinds of non 'link' pages by right-clicking on the row number and selecting Delete.

Note: This is an optional step; I like to do it so that I just get the link/resource pages.

If you're unsure, click the URL and check out the page in a browser to see if it's a 'links' page or not.

Also remove URLs for PDFs, facebook.com, linkedin.com and econsultancy.com, which can be done easily using a filter. (A rough scripted version of this whole filtering stage is sketched below.)
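
If you'd rather not do all of that spreadsheet filtering by hand, here's a rough Python sketch of steps 14 to 17: it keeps URLs containing "links", drops WordPress /tag/ pages and PDFs, and removes the example domains above. The input and output file names are assumptions, and it won't catch every edge case, so still eyeball the output.

```python
from urllib.parse import urlparse

# Domains from step 17 that we never want in the list.
BLOCKED_HOSTS = ("facebook.com", "linkedin.com", "econsultancy.com")

def looks_like_links_page(url: str) -> bool:
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    path = parsed.path.lower()
    if "links" not in url.lower():      # step 14: URL must contain "links"
        return False
    if "/tag/" in path:                 # step 16: drop WordPress tag URLs
        return False
    if path.endswith(".pdf"):           # step 17: drop PDFs
        return False
    if any(host == h or host.endswith("." + h) for h in BLOCKED_HOSTS):
        return False
    return True

with open("harvested_urls.txt", encoding="utf-8") as src:
    urls = [line.strip() for line in src if line.strip()]

kept = sorted({u for u in urls if looks_like_links_page(u)})
with open("links_pages.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(kept))
```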

 

18. Paste into Scrapebox


 

Once you have cleaned the URL list, copy the whole URL column by holding down SHIFT & CTRL and pressing the down arrow key. This will select all rows in that column that contain data.

Paste these into a text file and then copy and paste them into Scrapebox. I find this extra step via a text file avoids issues with the next step.

Remove duplicate URLs

 

19. Link Extractor


 

Open the Link Extractor by going to Addons and selecting Scrapebox Link Extractor 64bit

Click the Load button and select Load URL list from Scrapebox Harvester

Make sure the External option is selected

Click Start

Once it's finished, click Show save folder.

Open up the output text file in a text editor.

Copy all the URLs from the link extractor file and 'Paste & replace' them into the Scrapebox harvester.
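
For context, this is roughly what the Link Extractor addon does with each links page: fetch the HTML and keep the anchors that point off-site. It's only an illustrative sketch (no proxies, retries or rate limiting), not a replacement for the addon.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class AnchorCollector(HTMLParser):
    """Collects every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def external_links(page_url):
    """Return the set of absolute links pointing away from the page's own host."""
    html = urlopen(page_url, timeout=15).read().decode("utf-8", errors="ignore")
    collector = AnchorCollector()
    collector.feed(html)
    own_host = urlparse(page_url).netloc.lower()
    external = set()
    for href in collector.hrefs:
        absolute = urljoin(page_url, href)
        host = urlparse(absolute).netloc.lower()
        if host and host != own_host and absolute.startswith("http"):
            external.add(absolute)
    return external

if __name__ == "__main__":
    for link in sorted(external_links("https://example.com/")):
        print(link)
```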

 

20. Trim URLs


 

Click Trim and then Trim to Root

Also under the Trim menu click Remove Subdomain from URLs

Remove Duplicate domains
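
If you want the same trim-to-root and remove-subdomain behaviour in code, a naive sketch looks like this. Keeping just the last two labels breaks on multi-part TLDs such as .co.uk; a proper version would use the Public Suffix List (for example via the tldextract package).

```python
from urllib.parse import urlparse

def root_domain(url: str) -> str:
    # Strip scheme, path, port and leading "www.", then keep the last two labels.
    host = urlparse(url).netloc.lower().split(":")[0]
    if host.startswith("www."):
        host = host[4:]
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

urls = ["http://blog.example.com/useful-links/", "https://www.example.com/resources"]
print(sorted({root_domain(u) for u in urls if u.strip()}))   # ['example.com']
```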

 

21. Remove all non relevant domains


 

This section will depend on which type of domains you are looking for. For example, if you want web 2.0s then you would leave those in the list. If you want .in domains then you would leave those in the list, and so on.

I had a piece of software coded that removes all non-relevant domains in a flash. It can be done manually but it just takes a lot more time. If you want a copy of the software hit me up.

OK, now right-click on the URL Harvester window and select Copy All URLs to Clipboard.

Open up the spreadsheet and select the ‘Cleaned’ TAB

Paste the URLs into the URL column

It's important to note that when removing multiple lines within a filter, select them and use the Delete key. Do NOT remove them via the method used in step 17 above.

TIP: after each removal below, clear the filter from the URL column and sort the column A-Z again.

Use a filter to remove any domains that contain: javascript, any pharma-type keywords, gambling, blogspot, http://blog. and directory.

Here are examples of some of the other domains I remove from the list, domains that end in: .weebly.com .wordpress.com hop.clickbank.net .tumblr.com .webgarden.com .livejournal.com .webs.com .edu .yolasite.com .moonfruit.com .bravesites.com .webnode.com .web-gratis.net .tripod.com typepad.com blogs.com rinoweb.com jigsy.com google.com squarespace.com hubspot.com .forrester.com

NOTE: the sub domains above can be stored and checked separately to create web 2.0 lists if you like.

Also check through the list and remove any that are:
– in the wrong syntax
– common domains that you know are not going to be free, e.g. facebook.com, linkedin.com, searchengineland.com and so on
– just IP addresses
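
For anyone who wants to script this clean-up stage instead of filtering by hand, here's a rough sketch. The suffix and domain lists are just the examples from this step, not an exhaustive rule set, and the syntax check is deliberately simple.

```python
import re

# Free-host / web 2.0 suffixes and known-registered domains taken from the
# examples above; extend these lists to suit your own scrapes.
WEB20_SUFFIXES = (".weebly.com", ".wordpress.com", ".tumblr.com", ".blogspot.com",
                  ".livejournal.com", ".webs.com", ".typepad.com", ".tripod.com")
KNOWN_TAKEN = {"facebook.com", "linkedin.com", "searchengineland.com"}
DOMAIN_RE = re.compile(r"^[a-z0-9][a-z0-9.-]*\.[a-z]{2,}$")
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def keep(domain: str) -> bool:
    d = domain.strip().lower()
    if not DOMAIN_RE.match(d) or IP_RE.match(d):
        return False
    if d in KNOWN_TAKEN or any(d.endswith(s) for s in WEB20_SUFFIXES):
        return False
    return True

candidates = ["example-widgets.co.uk", "192.168.0.1", "myblog.wordpress.com", "facebook.com"]
print([d for d in candidates if keep(d)])   # ['example-widgets.co.uk']
```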

 

22a. Check domain availability


 

Scrapebox has its own domain availability checker, which has come on massively since I first published this post last year. I've been able to check tens of thousands of domains with it in one batch, so this is all I use now.

(If you don't trust the Scrapebox checker, you can use other bulk checkers like Dynadot's, which allows you to check 1,000 domains at a time. It will only do about 5 batches, though, before it hits you with an over-usage message and makes you wait about 30 minutes.)

Copy all the URLs from the ‘Cleaned’ TAB into a text file and then copy/paste into Scrapebox

Click Grab / Check and select Check Unregistered Domains

 

22b. Check domain availability


 

Once you’re happy, click Start

 

22c. Check domain availability


 

TIP: Sometimes Scrapebox will return 0/0 for Pass 2 (WHOIS). If you've checked a lot of domains and this happens, close the availability checker, re-open it and try again.

When it's finished, click Export and then select Export Available Domains. I also save the unavailable domains too, so these can be double-checked.

Open up the Excel worksheet template and select the Availability Check TAB.

Right-click in the top left-hand cell and select Paste Special, then choose the paste as text option.

Tidy the first row up by using a simple cut/paste into the next cell along, then delete the first column.
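
If you want to double-check a handful of those available (or unavailable) results outside Scrapebox, a raw WHOIS query works for .com and .net. This is only a sketch: other TLDs use different WHOIS servers and response wording, and bulk queries need to be throttled or the registry will block you.

```python
import socket

def probably_unregistered(domain: str) -> bool:
    # Query Verisign's WHOIS server directly (valid for .com/.net only) and
    # look for the "No match for" reply that indicates an unregistered name.
    with socket.create_connection(("whois.verisign-grs.com", 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode())
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    return b"No match for" in response

print(probably_unregistered("example.com"))   # False - example.com is registered
```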

 

23a. Check Metrics in Majestic

For this section you will need a paid version of Majestic.

If you don’t have this you can try the free version of SEO Profiler which will at least allow you to check out the quality of the backlinks.


 

Note: you can either copy and paste, or create a file containing all the available domains and upload it. For this example, we'll be using the copy/paste method, 150 rows at a time.

Log in to Majestic and go to Tools | Link Map Tools | Bulk Backlinks

Paste the first 150 rows into the window.

Sort results by: referring domains (you don't have to do this, as the data can be sorted in the next window).

23b. Check Metrics in Majestic


 

Now everyone has their own thoughts on how to check out the strength of a domain. This could be a whole blog post on its own, so for the time being I’ll just cover the main points.

The boundaries you set for minimum Trust Flow, Citation Flow etc. are a personal preference, I think. If you're after a guideline, I normally look for domains that have >10 referring domains, TF 15+ and a TF/CF ratio >0.75.
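
Applied to a Majestic bulk-backlinks CSV export, that guideline could be scripted along these lines. The column names used here ("Item", "TrustFlow", "CitationFlow", "RefDomains") are assumptions, so rename them to whatever your export actually contains.

```python
import csv

def worth_a_look(row: dict) -> bool:
    # >10 referring domains, TF 15+, and TF/CF ratio above 0.75.
    tf = float(row["TrustFlow"])
    cf = float(row["CitationFlow"])
    ref_domains = int(row["RefDomains"])
    return ref_domains > 10 and tf >= 15 and cf > 0 and (tf / cf) > 0.75

with open("majestic_bulk_export.csv", newline="", encoding="utf-8") as src:
    shortlist = [row["Item"] for row in csv.DictReader(src) if worth_a_look(row)]

print(shortlist)
```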

So if you look at the image above you’ll see there are a few that would warrant further investigation. I haven’t registered these so check them out and if they’re good and you’re quick you could pick them up 🙂

Make sure you check the TF for the root domain, subdomain and full path on each domain that looks good.

You can do this quickly by hovering your mouse over the little cog icon, right-clicking on Site Explorer and opening it up in a new browser tab.

Make sure you check out the backlinks. Do they still exist? Are they spammy? Are they contextual?

Quite often low TF domains can still have really good contextual backlinks so Trust Flow should NOT be your ultimate guide. Make sure you ALWAYS check out the quality of the backlinks.

A lot of people won't do this because of the time it takes; don't be one of them, or you can end up with bad domains.

 

23c. Check Metrics in Majestic


 

Dig down a little deeper into the backlink profile for each domain. Scrolling down in the Site Explorer will show you a quick overview of the anchor text so you can make sure it's not over-optimised.

Also click the Backlinks tab and physically look at some of their backlinks to make sure they’re not pure spam.

Just to be doubly sure, backlinks can also be checked in Ahrefs.

 

24. Check Web Archive


 

Once you have found a domain that has good metrics and authority head on over to http://web.archive.org/ and check what the site looked like in the past.

You're looking for a site that was a legitimate business. Personally, I stay away from anything that has been Chinese or has sold dodgy fake gear; these are obviously bad news. Have a good look through the backlinks tab and make sure the links are not spammy. Also check to make sure the links are still live.
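
To jump straight to an old snapshot for each shortlisted domain, the Wayback Machine has an availability endpoint that returns the closest capture to a given timestamp. A small sketch (the 2008 default is just an arbitrary starting point, and you still need to eyeball the pages yourself):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(domain, timestamp="2008"):
    # Ask the Wayback Machine for the capture closest to the given timestamp;
    # returns the snapshot URL, or None if the domain was never archived.
    query = urlencode({"url": domain, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}", timeout=15) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(closest_snapshot("example.com"))
```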

Once you’ve found a domain with solid metrics head on over to your favourite registrar and get it registered.

That’s it for this first method which is just one way that I use Scrapebox to find high authority, expired domains.

Download a PDF version of this step-by-step guide


Hope you found this guide useful. Feel free to like and share it or leave a comment below.

Oh and make sure you subscribe to my blog so you don’t miss the next strategy for finding killer expired domains with Scrapebox.


Ian Harmon

Founder, Inbound Digital Marketing Specialist & Certified Web Marketing Geek for more than 9 years.

44 thoughts on “How to Find High Authority Expired Domains Using Scrapebox”

  1. Which TF/CF do you trust more, fresh or historic? What if fresh is good and historic is bad, or vice versa? Do both fresh AND historic have to have a .75 TF/CF ratio?

  2. Wow Ian, this post is pure gold! Thanks for taking the time to create such an awesome post, there’s a few steps involved but it’s definitely worth it.

    BTW if you add your "links" daterange:2451552.500000-2455198.34551 into the footprint box and select Custom Footprint, it will append it to all your keywords in the keywords box when searching. Then every time you load keywords, you can select it from the footprint dropdown, which will save some steps.

    I’m going to give this a try tonight, thanks!

  3. Hi Ian, great post!

    This is just what I was looking for. Just a random question, do you only scrape in a specific niche or do you sometimes scrape multiple niches to find available domains?

    Thanks

    Nate

    1. Thanks Nate, glad you found it useful. I tend to scrape one niche at a time, however you always end up with results from other niches as well which I also log into a spreadsheet in case I need them over the coming weeks.

    1. Hi Sean, I've found that when you include a special operator like inurl, you start getting a lot of errors from the scrape. You can use inurl and you will get better results; however, it will take a lot longer because you have to reduce your threads in Scrapebox to around 1, 2 or 3.

      1. Fantastic article Ian – thanks for taking the time to put it together.

        The reason for errors during the scrape when using search operators like intitle and inurl is that for years SEOers were hammering Google's servers with a certain tool you might be familiar with [ahem], and so now they add captchas to those types of searches, even if you're doing them by hand.

  4. Thanks for this tutorial Ian. I kept thinking all the way through reading this, wouldn’t it be just easier to use an expired domain site and use their filters? I’m still a beginner so excuse my ignorance if that is the case. Keep up the good articles though, I know how much work goes into these.
    regards
    steve

    1. Hi Steve, I've never had much luck with the free expired domain sites. A lot of the domains I've found in the past tend to have spammy backlink profiles. Also, using methods like this you can find some aged little gems, especially in the non-mainstream niches.

      I see you’ve written a domcop guide too which I’ve recently purchased to test out, I’ll have a read of that. Thanks for your input 🙂

    1. Hi Stanley, you would first need to check the backlinks to see which pages are being linked to, and then, if it's not the home page, create redirects to a new page if you can't match the URL. Once you've done this, the PR will flow through.

      Hope that helps. Thanks for stopping by.

  5. This is something I was looking for… correct me if I am wrong, but you need a Majestic subscription to do this? Is there any good free backlink checker that you suggest? The Moz crawler is not that good, I know that, but is there anything except Ahrefs and Majestic?

  6. Thank you, very nice tutorial. I have one question: I want to start building a new Amazon affiliate website, and I want to find an expired domain and register it for the site so I can skip the sandbox period and start building backlinks slowly right away. Do you recommend expired domains for affiliate websites, or is it just a waste of time?

    1. I've only ever used them for PBNs, Akram. I can't see an issue with using them for money sites, however I would do some more research if I were you to make sure.

    1. That's a tough one Mark. I've scraped and checked for a full day and come out with nothing. Then again, I've scraped for a few hours and found 4-5 crackers. It's quite random. One thing is that the good quality ones, especially in the popular niches, are getting a lot harder to find.

  7. Hey Ian,

    There is a pretty neat shortcut for getting your keywords from the Keyword Planner with quotes already around them.

    Check the image here: http://postimg.org/image/y3nsw67w1/

    First hit "Add all", then click the little pencil to change the match type to "Phrase match" (this adds quotes), then click on the clipboard icon to open up a text window where you can copy all of the keywords to your clipboard.

    This would help remove a couple steps from your process 🙂

    1. Thanks Robert.

      This method finds expired domains via link and resource pages and can be used alongside the expired domain finder plugin which scrapes URLs from a seed list you need to compile and import into the plugin.

      I’ve been on the beta program for the new expired domain finder and so far I’m really impressed with its features and performance.

      I'm not sure when its release date is, however when it is released I would highly recommend you purchase a copy; it will be a great addition to your SEO tool set.

  8. Thanks a lot, that was great.

    I have a question:
    how can I download Scrapebox for free, because I'm in a country where I can't buy it???

  9. Hi Ian,

    Thank you very much for writing such a useful tutorial. I have a question, though. Why do you recommend wrapping the keywords in quotes? What happens if you just use the keywords without quotes?

    I know the quotes are used to find the phrase match in Google Search, but isn't searching without quotes supposed to get even more results?

    Thank you very much

    1. If they're wrapped in quotes, Marius, you just get more relevant results. You can search unquoted; it just means you'll need to remove a lot more if you want relevant domains.

  10. Great article! I do something very similar. I use inurl:"links" as the custom footprint then add keywords. Once scraped, I go to "Remove/Filter" -> "Remove URL's Not Containing" then type in the word "links" so I'm only left with pages that have (hopefully) lots of links.

    Then I remove duplicate URLs and use the "Scrapebox Link Extractor" plugin to pull all the external links from all the links pages.

    Next, I "Check Unregistered Domains". Easy and fast without having to leave Scrapebox, and I always find tons of URLs 🙂

  11. Thanks for such an informative tutorial Ian. You have not only saved me valuable bucks but also provided an awesome trick for freelancing with Scrapebox. Now I can not only scrape expired domains for myself but also sell them to others. Hopefully we'll get some more Scrapebox tutorials from you in the future.
