Domain Reanimator vs. Domain Miner Review

Over the past few months I have spent over 550 hours scraping domains using a number of different tools. In this post I will be reviewing two scrapers that I’ve been using heavily: Domain Reanimator and Domain Miner. When put to the test, which will come out on top?

Jump Straight to Reviews
Domain Reanimator
Domain Miner

Domain Reanimator Review

Check out Domain Reanimator here! EXCLUSIVE COUPON CODE: SIMPLESITES20OFF

Specs

Type of scraper: Hosted – browser based

Compatible operating systems: Windows, Mac, Linux. Works in any up-to-date major internet browser including Chrome, Firefox, Opera, Safari and Internet Explorer.

Maximum crawl depth: 3 levels

Maximum crawl size: Unsure, but very large! Millions shouldn’t be a problem.

Cost

  • Monthly rental
    • $197 – Starter – Single crawler, full features, 25 reanimations
    • $297 – Professional – Two crawlers, full features, 50 reanimations
    • $497 – Agency – Four crawlers, full features, 100 reanimations
    • Domain Reanimator exclusive discount for SimpleSites readers: SIMPLESITES20OFF
  • Money back guarantee up to your first 500 domains
  • NO Trial offer
  • NO Lifetime license
  • NOT available for purchase

Ways to Scrape

  • Seed list
  • Large seed lists
  • Keyword targeting
  • Location targeting
  • Reverse crawling

Features

  • Threads: Varies by subscription (one crawler = 1 thread)
  • Availability checker
  • Moz checker
  • Majestic checker
    • Historic index used for sorting; Fresh index available in the accordion
  • Export options
    • CSV file only
  • Wayback research
    • First, median and last screenshot
  • Spam filters: But don’t be a fool, check domains yourself.
  • Multiple TLDs supported
    • The developer says every TLD in the world is supported. If you see one that is missing then contact support.
  • Restore website from archive.org
    • Quantity varies by subscription
  • Spam algorithms
  • NO Keyword spam filter

Support

  • NO Written documentation
  • NO Member forum

Additional Hardware / Software Needs

Hardware

No additional hardware is needed to use Domain Reanimator. This is likely one of the largest selling points. There are of course many revolutionary and refreshing features, but being browser based relieves the end user of many technical pitfalls, operational headaches and logistical nightmares. If you can browse the web from wherever you are, you can scrape domains with Domain Reanimator.

Software

The Domain Reanimator team is hard at work listening to feedback from live users, but it’s going to be a while before it’s an all-in-one solution. You’re going to need something to bulk check fresh metrics so you can sort out what the present state of your domains is. Additionally you’ll need a way to filter and clean spam more reliably. The built-in features are cutting edge for a scraper, but they aren’t enough for anyone taking this seriously. Simply put, if you just take Domain Reanimator at face value you’ll spend a lot of time looking at domains that are junk and overlooking domains that are real keepers.

Tools we recommend are:

Ease of Setup

THIS IS AS EASY AS IT GETS! I can’t think of a way to make this easier while also making it better. It’s very simple and straightforward.

After registering your account you’ll be setting up your first crawl in seconds. Simply click on “New Project” and select the type of crawl you’d like to start. Complete the additional details and you’re all set. Sit back and watch Domain Reanimator work! You’ll start seeing domains coming in pretty quickly depending on the types of sites you’re crawling. If you’re crawling the big boys that get crawled often, expect to wait a bit for results.

Steps to First Crawl

  1. Subscribe
  2. Click on “New Project”
  3. Name your project
  4. Select the type of crawl you’d like to use (keyword, geo, seedlist or reverse)
  5. Complete the additional details
  6. Click start

Ease of Use

Speed

Domain Reanimator is definitely quick. It’s on par with the best of the crawlers available.

Speed issues are addressed promptly

Again, like any crawler, this speed will vary based on the speed of the sites you’re crawling; however, there are some bonuses here worth mentioning. Since this crawler doesn’t rely on your connection speed and doesn’t require additional hardware like a VPS, you could in theory use this tool on even the slowest or most unreliable internet connection and still have great success. That’s still the reality for many who travel often or live in remote areas. Laptop lifestyle, anyone? A major benefit of the hosted platform used by Domain Reanimator!

There has been some noise about the system lagging a bit, which has been addressed promptly by the team.

Accuracy

This is definitely a weakness at the moment. Let me explain. Availability accuracy has been reliable so far, but that’s about where the accuracy ends.

Useless Metrics
No one has time to hand-pick through domains and then do a lot of legwork to determine whether they’re good or not. Even with all the data at your fingertips you’ll still be left struggling to get good domains out of Domain Reanimator quickly. This boils down to how the data is presented.

The major opportunity here is to present the Fresh Majestic metrics as the criteria for sorting.

See comment by Dixon Jones from Majestic

Using the Historic metrics is useless, as most websites will have lost most if not all of their links over the past 5 years.

Example: A domain might indicate 3,000 referring domains in the table (presently the Historic index), which you’d sort by using the sorting features, but on further inspection it shows 6 in the Fresh index. That’s time wasted you can’t get back. In the example below regarding spam you’ll see that 43 referring domains are listed in the Historic index (not a bad metric), but on inspecting the Fresh index it’s only 6, not even worth bothering with.

Changing the first data you see, and the data used to sort the domains, to the Fresh index would give you a true picture of each domain as it is today and let you sort faster and make better decisions about what to review and what to disregard.
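Until that changes, the workaround is simple enough to script. Here’s a minimal Python sketch: export your crawl to CSV and re-sort it by Fresh-index referring domains yourself. The column names here are hypothetical; match them to whatever headers your actual export uses.

```python
import csv

# Hypothetical column name; adjust to match the headers in your actual export.
FRESH_RD = "fresh_referring_domains"

def rank_by_fresh(csv_path, min_fresh=10):
    """Load an exported CSV and rank domains by Fresh-index referring
    domains, discarding anything below a minimum threshold."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    keepers = [r for r in rows if int(r.get(FRESH_RD) or 0) >= min_fresh]
    return sorted(keepers, key=lambda r: int(r[FRESH_RD]), reverse=True)
```

Run against your export, the 3,000-historic / 6-fresh domain from the example above drops straight out at any sensible threshold.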

Spam Algorithm
I don’t need to tell you that spam algorithms are pretty much a joke. If you’ve tried to rely on them even a tiny bit you’ve likely been disappointed. Domain Reanimator isn’t going to change that experience. They have integrated some logic that attempts to signal whether a domain is spam or not, but it’s pretty much junk and should be ignored. In fact, I’d encourage them to add a strong warning for novice users not to rely heavily on this feature, as it is misleading. It’s wise to learn to spot spam yourself and not use the “Hide Spam” checkbox on your crawl results. I can promise you’ll miss perfectly fine domains if you do. Here’s just one example:

Domain labeled spam
Majestic is clean
Archive is also clean

At the time of this writing the only reliable way to check a domain for spam is to review it yourself, using best practices and a little common sense to look for the most obvious signs of abuse. Screenshots, anchor text and reviewing backlinks remain the most reliable spam checking process.

The only reliable spam checking process to date: Find Domain > Bulk Archive Check > Anchor Check > Links Check > Final Archive.org check
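The bulk archive check step in that process can be scripted against the Wayback Machine’s public availability API. A rough Python sketch (the helper names are my own, not part of any tool reviewed here):

```python
import json
import urllib.request

# Public Wayback Machine availability endpoint.
WAYBACK_API = "https://archive.org/wayback/available?url={}"

def closest_snapshot(payload):
    """Pull the closest-snapshot URL out of a Wayback availability
    response, or None when nothing was ever archived."""
    snap = payload.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

def bulk_archive_check(domains):
    """Look up the closest archived snapshot for each domain.
    Throttle real runs to stay polite to archive.org."""
    results = {}
    for domain in domains:
        with urllib.request.urlopen(WAYBACK_API.format(domain), timeout=10) as resp:
            results[domain] = closest_snapshot(json.load(resp))
    return results
```

Domains that come back with no snapshot at all can usually be set aside immediately; the rest get eyeballed for spam.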

To be fair, the Domain Reanimator team knows this and isn’t implying you should just take their algorithm as fact. It’s just one more piece of data. They provide many additional tools, such as three screenshots of the domain (the earliest, median and last) as well as an anchor text request. These quick features are definitely going in the right direction; however, they’re slow and cumbersome since you’re cleaning one domain at a time.

As domain scraping continues to increase in popularity, the number of good domains found, especially on popular sites, will become fewer and fewer. Bulk processing is the best way to get clean domains quickly. Otherwise you’ve really gained nothing by spending $200+ plus all the added time to clean and filter, over just buying great domains from a reputable domain broker.

Reliability

So far, as reliable as you’d expect from a hosted solution. There have been a few comments in the Facebook group about slow performance, as mentioned above, but I personally have not experienced any outages, nor was I able to find anyone complaining about outages in the Domain Reanimator Facebook group.

Access to Your Data

At first, any hosted solution makes me skeptical about privacy. We’ve just come to trust the cloud without much thought. Domain Reanimator does a good job of being transparent and makes the data you collect with the system very easy to access via download. This isn’t entirely true, however.

When you use the keyword, geo-target and reverse crawl features there is no way to verify what is being crawled or access that data. The system generates a seed list for you using these features, but that seed list is not made available to you for download or capture. In a way this keeps you reliant on Domain Reanimator’s done-for-you seed list creators. This is both a good and bad thing.

One of the greatest skills any scraper can develop is creating seed lists. If you don’t have a good one, you’ll never find anything worthwhile. Using Domain Reanimator allows a person to partially forgo learning this skill and let the software do the work, though any good domain scraper will tell you no software will ever create a seed list as good as a pro.

If you’re working in a competitive niche like dentists, law or plumbers and you’re relying on the keyword crawler feature, plan on not finding much, as those top Google results get picked clean in a hurry by the hordes of other people doing the same thing.

Conclusion

What I Like

  • Hosted solution removes so many headaches! No added hardware needed. Use any operating system, any browser, anytime, anywhere and from even the slowest of internet connections.
  • Even at the $200+ price range it’s a pretty good bargain considering they bear the burden of hardware expenses and headaches, so by all rights they could justify a higher price point, even though it’s already very competitively priced. It’s a smoking deal compared to this tool, which is nearly 4x the price (based on price per crawl min) with fewer features.
  • To be fair, their increased overhead for hardware, rapid and responsive development, and quick support means this is a bargain at the current price point, as other software with similar pricing, such as Domain Miner, isn’t offering any of this.
  • Clean and intuitive interface. They’ve really made crawling as simple as it should be and even added a few bells and whistles (geo search, reverse crawl) that just increase the value even more.
  • You don’t need basic hacker skills to get good results, but good seed list skills are still critical for best results.
  • Strong sense of engagement from the developers to improve the product based on actual user feedback.
  • Strong training video courses to help people get acclimated to the system quickly. They seem to be doing regular webinars around updates which is just good business.
  • A strong Facebook community that is fast paced and responsive, which will be a great asset to novice scrapers. The willingness of the group to share ideas and of the developers to respond is unheard of for a tool of this type.

What I Don’t Like

  • Probably the most glaring issue to me is the use of Historic metrics as the baseline for sorting domains. This basically renders the sorting feature useless, as the metrics aren’t actionable. I won’t go into this since it’s been discussed in depth already, such as here, here (see comment by Dixon Jones) and here, just to name a few. The Fresh index is a more accurate picture of a domain as it is today and of its live links. If a domain has been spammed recently, the Fresh index will be your go-to dataset. If I’m going to point out that other tools are using Moz metrics, which are unreliable for processing domains, then we need to do the same for other metrics that fall into the same category.
  • Cost again is an issue, since it just doesn’t make sense to pay $200/mo for the software, plus time to dig up seed lists, plus software and time to clean up the outputs, if you only need a few domains each month. (It’s not just the cost of the software you need to weigh against buying domains from a reliable broker.)
  • No access to the seed lists generated when using the keyword, reverse and geo crawl features. I’m not saying you should be able to access them, but it certainly would add value to the tool, though it would also make it much easier to leave the service and use your seed lists elsewhere if you were disgruntled.
  • The spam algorithm is silly and a waste of development resources. It is actually deceptive and should really be removed, as a novice may foolishly rely on it when processing and unwittingly skip great domains.

Tips and Shortcuts

  • Don’t bother using the built-in sorting feature if you want to clean domains fast. Just export your domains and use a bulk metrics checker to get a faster read on their current status.
  • Use a bulk archive checker to quickly remove domains with spammy waybacks (like porn, Chinese text or PBNs) from your list, then do a final anchor check to weed out the last bit of spam and get a super clean, super solid list
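As a rough illustration of that wayback cleanup, here’s a tiny Python sketch that flags obvious spam footprints in archived page titles or anchor text. The marker list is purely illustrative; build your own from what you actually see in your niches.

```python
# Illustrative spam footprints only; extend this with the markers you
# actually encounter (Chinese spam strings, PBN footprints, etc.).
SPAM_MARKERS = ("casino", "viagra", "payday", "replica")

def flag_spam(text):
    """True when a wayback title or anchor-text blob contains an
    obvious spam footprint."""
    lowered = text.lower()
    return any(marker in lowered for marker in SPAM_MARKERS)

def filter_clean(domain_texts):
    """Keep only domains whose archived text shows no obvious spam.
    domain_texts maps domain -> concatenated titles/anchor text."""
    return [d for d, text in domain_texts.items() if not flag_spam(text)]
```

A keyword filter like this only catches the blatant stuff; the final eyeball pass on screenshots and backlinks is still on you.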

Domain Miner Review

 Check out Domain Miner here!

Specs

Type of scraper: Software – Java based

Maximum crawl depth: Unlimited

Maximum crawl size: 4 million pages (30 large seeds crawled to the 10th level, 60 seeds crawled to the 5th level)

Ways to Scrape

  • Seed list
  • Large seed lists
  • NO Keyword targeting
  • NO Location targeting
  • NO Reverse crawling

Costs

  • Trial offer: $19 for 1 day full-feature trial
  • Monthly rental
    • $97 – 30 days, limited threads
    • $197 – 30 days, full functionality
  • NOT Available for purchase

Features

  • Threads: Varies by subscription. One to Unlimited
  • Availability checker
  • Moz checker
  • Export options
    • CSV file only
  • Multiple TLDs supported
    • .com, .net, .org, .info, .biz, .us, .uk, .ca, .ie, .it, .eu, .name, .au, .de, .mobi, .in, .nl, .nz, .mx, .br, .bz, .pr
  • Restore website from archive.org
    • Quantity varies by subscription
  • NO Majestic checker
  • NO Wayback research
  • NO Spam filters
  • NO Keyword spam filter
  • NO Spam algorithms

Support

Additional Hardware / Software Needs

Hardware

No additional hardware is strictly needed, but a VPS is strongly recommended.

  • Dedicated VPS
    A dedicated VPS with a minimum of 2GB RAM (4GB recommended) is the way to go, though the software will operate on a local computer.
    CAUTION: Interruptions in your internet connection (think about running it on a laptop and needing to go anywhere, even for a moment), slow internet speeds and memory load can all have negative or terminal effects on the crawler.
    I’ve been running Domain Miner on my 4GB Offshore SEO VPS 24/7 for 4 straight months with great success. When I put it on my 2GB PowerUpHosting VPS it would have memory issues, causing the program to crash.

Software

Yes, you will need additional software.

  • WhoIS Checker
    The WhoIs checker performs some of the monster tasks such as availability and Moz metrics (the free Moz API works just fine), but you’ll still need a way to sort out which domains are spam free and have decent metrics.
    A note about Moz metrics: Moz is falling behind quickly and their metrics shouldn’t be heavily relied upon, as their crawler just isn’t capturing enough data. Some pretty great domains are out there with poor or non-existent Moz metrics, so don’t be fooled.
  • Spam checking isn’t a feature of Domain Miner, so plan on learning to use other tools to clean out Chinese spam, PBN spam and anchor spam.

Tools we recommend are:

Ease of Setup

NOTE: THIS IS NOT FOR BEGINNERS!

Once you sign up for the service, a download link arrives in your email pretty quickly along with a software key for activation. My key was invalid. I submitted a support ticket on their website right away but didn’t get a response for over a day. I then decided to reply to the email that arrived upon registration. That came back with a support ticket confirmation, and within a few minutes Lynn responded that she would provide a new key promptly, which she did. It worked this time.

Installing the software was pretty straightforward. Simply unzip the file and run the installer. The first time you open the crawler you’ll be prompted to enter your key. No key, no access. This is where the intuitiveness ends.

Once the crawler opens it’s not at all obvious what to do. There is just a start button staring you in the face. So off to the website I went to see if training was available. I found a few YouTube videos explaining where to put seeds and how to change crawl depth, crawling threads, TLDs and proxies. None of this is intuitive, and while it’s not complicated, it’s enough to make someone a little unsure whether they’re adjusting the right thing or about to ruin a critical file. Would a slider or dashboard have been so complicated?

Expect to spend about an hour to get the software up and running.

Steps to First Crawl

  1. Download software
  2. Unzip and install
  3. Enter key
  4. Watch training videos
  5. Edit .txt files to configure depth, TLDs, threads and proxies
  6. Enter seed list in .txt file
  7. Open crawler
  8. Open WhoIs checker
  9. Start Crawler
  10. Start WhoIs checker

Ease of Use

Speed

Domain Miner is definitely quick. There are however many factors which affect the speed of your crawls, not just the software.

Factors that affect speed include the speed of the websites you’re spidering, your internet speed (or your VPS’s, if you use one) and the speed of your computer. Amazon is going to be very fast, but any site on shared hosting is going to slow the crawler down.

Over the course of 300 separate crawls my average speed was about 300k pages per day, or about 12.5k pages per hour, when running 20 threads.

What can be a little confusing here is that the crawler can be very fast and end up overwhelming the WhoIs checker. This isn’t necessarily because the WhoIs checker is slow; the slowness is actually caused by rate limiting on the WhoIs servers used for checking domain availability.

As mentioned earlier, to keep speeds at a maximum remove any unneeded TLDs and, if possible, focus on as few TLDs as you can for the highest possible speeds. One of the worst is .org. Avoid it unless absolutely necessary.
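For context on why that rate limiting bites, here’s a bare-bones Python sketch of what a whois availability check looks like under the hood, with a simple throttle. The server name and the “No match for” convention apply to .com/.net via Verisign; other registries use different servers and different response wording.

```python
import socket
import time

# Verisign operates the public whois server for .com/.net; other TLD
# registries run their own servers with their own rate limits.
WHOIS_SERVER = "whois.verisign-grs.com"

def whois_query(domain, server=WHOIS_SERVER):
    """Send a raw whois query over port 43 and return the text reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def looks_available(reply):
    """Verisign answers 'No match for' when a .com/.net domain is
    unregistered; other registries phrase this differently."""
    return "No match for" in reply

def check_batch(domains, delay=1.0):
    """Sleep between queries so registry rate limiting doesn't start
    dropping or delaying replies, the real bottleneck described above."""
    results = {}
    for domain in domains:
        results[domain] = looks_available(whois_query(domain))
        time.sleep(delay)
    return results
```

Multiply that per-query delay across every TLD you left enabled and it’s easy to see how the WhoIs checker falls behind a fast crawler.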

Accuracy

This is kind of a weird thing to include in a review, but false positives are actually a problem with some crawlers, so to be fair I’ll include it in each review. Domain Miner has been very accurate, with no false positives for us. If it says a domain is available and has given metrics, it’s dead on.

Reliability

Crawling is fraught with problems, and one of them is software reliability: crashing, bugs, failures, lock-ups and more.

Domain Miner is lean and fast software, and for us, after 3 months of 24/7 crawling, it has proven to just work, all the time, every time. However, there is one caveat.

Whether you choose to run it on your personal system or a VPS, any issue with the computer is catastrophic for the crawler. The WhoIs software can crash without issue, as all it’s doing is processing the database, but if the crawler crashes you’ll simply have to start over, because there is no way to resume. This happened to us several times and cost us days and days of lost time. One crawl had been running for 120+ hours when Windows forced an update and rebooted the VPS. I was livid. This wasn’t a software problem, and needless to say I shut off all Windows updates immediately. That’s something you should seriously consider if you’re using your VPS for crawling.

Access to Your Data

Domain Miner’s transparency with your data is very good. At any time while the WhoIs checker is processing you can a) export your current URLs without pausing, b) pause and export your current session, then restart the app to refresh the “current session”, or c) export the entire database. You won’t, however, be able to unzip the database at any time, so you must allow the WhoIs checker to do its job.

Conclusion

What I Like

  • Once Domain Miner is dialed in and you get things sorted, it’s very fast and reliable software.
  • The fast WhoIs tool helps clean up your crawls quickly so you’re left with a sheet of available domains and a decent idea of what’s worth looking into further.
  • It can handle a wad of seeds and crawl for days on end and crawl deep without freezing or crashing
  • Excellent for crawling large sites
  • Good support when you have problems, questions or issues

What I Don’t Like

  • Cost is definitely an issue, since it just doesn’t make sense to pay $200/mo for the software, plus time to dig up seed lists, plus hardware to run it, plus software and time to clean up the outputs, if you only need a few domains each month. (It’s not just the cost of the software you need to weigh against buying domains from a reliable broker.)
  • No new development appears to be happening on the tool. You’re literally paying for a license, not development costs.
  • Java based always means limits. One glaring issue is that you can’t run more than one crawler on a single server without causing database issues. Even if you pay extra for unlimited threads, you’ll still need an entire second setup to crawl two different seed lists at a time (for instance, if you’re crawling in two different niches).
  • A pretty steep learning curve to get optimal performance. Far too many .txt files to edit to really recommend it to anyone without some tech savvy and a willingness to risk screwing things up. You shouldn’t have to edit config files by hand at this price point.
  • Moz metrics are nice, and I get that they are free; I understand why they are preferred over other metrics providers for basic users. My complaint is that a lot of Moz metrics aren’t current and aren’t always a good picture of a domain as it is today, since their crawler is so far behind. This isn’t a complaint about Moz, but about it being the primary metric tool in a premium crawler. A feature to integrate a personal Majestic or Ahrefs API would be nice, letting the crawler do the work instead of forcing you to do it in some other software in post-processing.
  • No way to save crawls. If your VPS has any issue whatsoever, you can’t simply restart your crawl where you left off. This is terrible if you’re 100+ hours into a session and need to reboot. It means losing the entire crawl session, though the WhoIs checker can be restarted and will pick up where it left off.
  • Only works on Windows. This matters to me because my entire office just moved to Mac, but we run Windows VPSes, so it’s not the end of the world. There are other scrapers which are OS independent, which gets a nod for forward thinking.

Tips and Shortcuts

  • Remove all the TLDs you don’t intend to use. The WhoIs servers for many TLDs like .org have severe rate limiting, which will slow down the software immensely. Most TLDs are quite fast, up to 15k/hr in many cases, though it’s not uncommon to see speeds around 4-7k/hr.
  • After the WhoIs checker has finished cleaning up a large crawl, clear the database by going to the config folder > db and deleting each file EXCEPT the one labeled “file” (it’s a 1kb file). This will give your crawler db more room for your next crawl and keep it from slowing down. I do this after each crawl; however, remember that the crawler does a duplicate check against the db as it crawls, so the longer you leave the database intact, the fewer duplicates you’ll get.
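If you do this often, the cleanup is easy to script. A small Python sketch; point it at the config > db folder of your install (the path varies by setup), and note it simply mirrors the manual steps above:

```python
import os

def clean_crawler_db(db_dir):
    """Delete every file in the crawler's db folder except the one
    named 'file', mirroring the manual cleanup described above.
    Returns the names that were removed."""
    removed = []
    for name in os.listdir(db_dir):
        path = os.path.join(db_dir, name)
        if name != "file" and os.path.isfile(path):
            os.remove(path)
            removed.append(name)
    return removed
```

Run it between crawls rather than mid-crawl, since the crawler is actively deduplicating against that database while it runs.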
