How To Find Powerful Expired Domains or Web 2.0’s
The bedrock of any private blog network is the quality of the properties within it.
Here we will go through the best ways to acquire powerful and relevant expired domains and web 2.0 sites to use for your network.
- 1 How To Find Powerful Expired Domains or Web 2.0’s
- 1.1 Definitions of Key Terms
- 1.2 Scraping domains vs Purchasing Domains
- 1.3 How to scrape expired domains
- 1.4 Expired Domains Crawlers & Software
Definitions of Key Terms
Here are brief definitions of the key concepts around expired domains in relation to private blog networks.
What is an expired domain?
A domain name is the website name that you type into your browser’s address bar, sometimes referred to as the website URL. Domain names are registered through companies called registrars, which manage the registration of those names.
These domains are registered for a period of years at a time, and once this period has ended, the owner can choose to register the domain again, or to let it expire.
If a domain has expired, it can be registered by anyone from their chosen registrar.
What makes an expired domain powerful?
The value of an expired domain is determined by:
- The quantity and quality of the inbound links pointing to the domain.
- The relevancy of those links to the website you eventually want to use the network for.
- The number of social shares pointing to the domain.
- The amount of content there is to restore from the Wayback Machine (archive.org/web).
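As a rough illustration, these factors could be combined into a single score for comparing candidate domains (the function name, inputs, and weights below are assumptions for demonstration, not an established formula):

```python
# A hypothetical composite score for an expired domain; the weights below
# are arbitrary placeholders, not an established formula.
def score_domain(inbound_links, avg_link_quality, relevancy,
                 social_shares, archived_pages):
    """Rough value estimate for a candidate domain; higher is better."""
    return (
        inbound_links * avg_link_quality * 2.0  # quantity x quality of links
        + relevancy * 5.0                       # niche relevance, 0.0-1.0
        + social_shares * 0.1                   # social shares
        + archived_pages * 0.5                  # restorable Wayback pages
    )

score_domain(inbound_links=40, avg_link_quality=0.8,
             relevancy=1.0, social_shares=120, archived_pages=30)
```

In practice you would calibrate weights like these against domains you have already vetted by hand, rather than trusting any fixed formula.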
What does scraping for a domain mean?
Web scraping is the practice of computers crawling websites to retrieve information. In the context of private blog networks, scraping means following the outbound links on powerful websites to find broken links, whose domain names can then be checked for availability to register.
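As a minimal sketch of the first step, the snippet below uses Python’s standard library to collect the external domains linked from a page’s HTML; a real scraper would then fetch each link to detect broken ones and query a registrar or WHOIS service for availability. The class name and sample markup are illustrative:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collects the host of every absolute <a href> found on a page."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                host = urlparse(value).netloc.lower()
                if host.startswith("www."):
                    host = host[4:]
                if host:  # relative links have no host and are skipped
                    self.domains.add(host)

# Illustrative page fragment with one external and one internal link.
parser = LinkExtractor()
parser.feed('<a href="http://www.old-blog.com/post">source</a>'
            '<a href="/about">about</a>')
print(parser.domains)  # {'old-blog.com'}
```
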
Scraping domains vs Purchasing Domains
This is a long-established debate over which approach is more cost-effective in the long run. Simply put, scraping for your own expired domains requires time, a good seed list, and good scraping software. For many people, it will be more effective to purchase their expired domains from a domain broker, or from an expired domain auction.
How to scrape expired domains
As mentioned previously, to scrape expired domains you will need:
- a good seed list
- a good expired domain crawler
If you don’t have these things, or cannot afford them, then using a domain broker or expired domain auctions is probably the best route for you.
Creating a good seed list
This is the core of your expired domain efforts, and is the best way to set yourself apart from your competitors, who will all be scraping the same set of websites.
Simply put, if a url is ranking in Google for a niche related keyword, then Google believes that the page provides value for that particular keyword.
Therefore, we want our expired domains to have links from pages that are actually ranking, because then we know they will have at least some value in the eyes of Google.
Simply relying on 3rd party metrics will not guarantee that your expired domain is powerful.
Tools for creating a seed list
The best tools for creating a seed list based on Google-ranking pages are Scrapebox (paid) and Simple Serp Scraper (free). These will let you scrape the top-ranking web pages for related keywords, giving you a seed list to check for broken links.
How to find keywords for your seed list
You want to scrape pages that are somewhat related to your industry, but the obvious choices will not provide many domains as they are already heavily scraped.
So for less competitive sources of keywords, you can use the following:
- SEMrush ranking keywords – compile a list of the ranking keywords for the authority websites in your industry from SEMrush, and use the keywords that have fewer than 1,000 searches.
- SEObook Keyword Density checker – adding Wikipedia URLs related to your industry into this tool can help you find slightly unconventional keywords to run searches on.
- Answerthepublic.com – entering your main keywords into this tool will give you related questions, and scraping the SERPs of those questions can help you find less commonly scraped, related pages.
How many urls should your seed list contain?
There is no limit to how many URLs you should use; simply make sure there are no duplicates, and include as many as you like.
The number of domains you find can vary greatly depending on many different factors, so practicing this technique will help you learn how many URLs you need to reach the number of domains you require.
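De-duplicating the seed list is simple to script. A minimal sketch in Python, treating URLs that differ only by scheme, a www. prefix, or a trailing slash as duplicates (the helper name and sample URLs are illustrative):

```python
from urllib.parse import urlparse

def dedupe_seed_list(urls):
    """Normalize and de-duplicate a seed list, preserving first-seen order."""
    seen, unique = set(), []
    for url in urls:
        parts = urlparse(url.strip())
        host = parts.netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        # Treat http/https, www., and trailing slashes as the same page.
        key = (host, parts.path.rstrip("/"))
        if key not in seen:
            seen.add(key)
            unique.append(url.strip())
    return unique

dedupe_seed_list([
    "http://www.Example.com/page/",
    "https://example.com/page",   # duplicate of the first URL
    "http://other.com/a",
])
```
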
Expired Domains Crawlers & Software
As mentioned previously, it’s much more effective to use a crawler to find the broken links, which can then be checked to see if the domains are expired or not.
The different types of expired domain crawlers can be categorised into two groups:
- Web based crawlers – a cloud-based system where you log in, submit your seed list, and wait until the 3rd party crawler has finished the scrape.
- Application based crawlers – downloadable software that you run on your personal computer or VPS, where you select your own seed list file and start the crawl yourself.
All of these crawlers will:
- Find the broken links on the web pages.
- Check to see if the domain names are available.
- Provide you with 3rd party metrics for the domain.
Explaining Crawl Depth
For these tools, you will have the option to decide how deep you want your crawler to go. This refers to how many levels of links the crawler will follow to find more pages to check.
For example, a depth of 1 means that each URL on your seed list will be checked for expired domains, and then the crawler will stop.
If you select a depth of 2, the crawler will do the steps above, but it will then also follow every link on the pages in your seed list and check those web pages for expired domains.
As you can see, this process gets exponentially longer the deeper you set your crawler. For our example of scraping ranking web pages, we only want the crawl depth to be 1; but if you’re scraping a large, relevant website, then your depth should be set from 7-10, to find all of the deep pages within the site.
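The depth mechanic can be sketched as a breadth-first crawl over a link graph. The toy graph below stands in for real fetched pages, and all names are illustrative:

```python
from collections import deque

# Toy in-memory link graph standing in for fetched pages; in a real crawler
# the links for a URL would come from downloading and parsing that page.
LINK_GRAPH = {
    "seed.com/a": ["siteb.com/x", "sitec.com/y"],
    "siteb.com/x": ["sited.com/z"],
}

def crawl(seed_urls, max_depth):
    """Return every URL visited, where depth 1 means the seed pages only."""
    visited = set()
    queue = deque((url, 1) for url in seed_urls)
    while queue:
        url, depth = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        if depth < max_depth:  # only go deeper if the depth budget allows
            for link in LINK_GRAPH.get(url, []):
                queue.append((link, depth + 1))
    return visited

print(crawl(["seed.com/a"], 1))  # depth 1: the seed page only
print(crawl(["seed.com/a"], 2))  # depth 2: seed page plus its outbound links
```

Each extra level multiplies the number of pages by roughly the average link count per page, which is why depth quickly dominates crawl time.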
The Best Web Based Expired Domain Crawlers
Web based crawlers are a much more expensive solution than application based crawlers, but the best out there are Domain Re-animator and Bluechip Backlinks.
However, if you are planning on consistently building your PBN over time, I would recommend learning to use the application based solutions, as they are more cost-effective.
The Best Application Based Expired Domain Crawlers
The fastest expired domain crawler available is definitely Project Lazarus. The new version (as of August 2016) has brought a lot of new features, including new TLD extensions to scrape, and many different expired web 2.0 sources that are also checked.
The next best option would be Expired Domain Miner; however, it isn’t as fast as Project Lazarus.
Whatever tool you use, after the scrape has finished you will receive a spreadsheet with a list of domains, along with 3rd party metrics to go with them.
But which domains should you choose?
It’s time to spam check your domains.