If you buy expired domains (for PBNs, 301 redirects, affiliate sites, or side projects), your morning probably looks like this: open GoDaddy Auctions, check DropCatch, scroll Dynadot closeouts, glance at Catched. Compare prices, check Wayback age, look at backlink profiles. Across 8 registrars, that eats up a couple hours before you've done any real work.
An expired domains API fixes that. You write one query, get structured data back from all sources, and feed it into whatever you want: a scoring model, a Slack bot, an AI assistant. This guide walks through how to do that with the CatchDoms API.
What is an expired domains API?
It's a REST endpoint that returns expiring, auctioned, or available-for-registration domains as JSON. Instead of scraping web pages or downloading CSV files, you make an HTTP request and get back structured data you can actually work with in code.
The CatchDoms API pulls from 8 registrars (Dynadot, GoDaddy, DropCatch, Catched, Gname, SnapNames, UK Backorder, Subreg) and returns everything through a single endpoint. Each domain comes with:
- Auction info: current price, max bid, bid count, end date
- SEO metrics: DA, backlinks, referring domains, TF, CF
- Wayback history: first archive date (real age), snapshot count
- Detected language of the original content (EN, FR, DE, etc.)
- A 0-100 quality score based on age, authority, backlinks, name, and TLD
- Topical Trust Flow category (Business, Health, Arts, etc.)
- EDU/GOV referring domain counts
That's data you'd normally pull from 4-5 separate tools. One request gets you all of it.
Getting started
You need a Pro account and an API key. Three steps:
- Create an account on CatchDoms
- Subscribe to the Pro plan
- Generate a Bearer token from your API dashboard
Then make your first request:
curl "https://catchdoms.com/api/domains?score_min=50&has_backlinks=1&per_page=5" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Accept: application/json"
The response looks like this:
{
"data": [
{
"id": 42187,
"name": "example-domain.com",
"tld": ".com",
"source": "godaddy",
"type": "auction",
"price": 12.00,
"max_bid": 85.00,
"bids_count": 5,
"auction_end_date": "2026-03-12T18:00:00Z",
"score": 68,
"age": 12,
"domain_authority": 24,
"backlinks_count": 890,
"referring_domains": 67,
"trust_flow": 15,
"citation_flow": 22,
"wayback_snapshots": 134,
"wayback_first_date": "2014-06-21",
"language": "EN",
"purchase_url": "https://auctions.godaddy.com/..."
}
],
"links": { "next": "...", "prev": null },
"meta": { "current_page": 1, "total": 1247 }
}
The purchase_url links directly to the registrar's auction page so you can bid or buy immediately.
Filtering domains
Filtering is where the API saves you the most time. Instead of scrolling through thousands of domains, you tell it exactly what you want:
| Parameter | Description | Example |
|---|---|---|
| source | Filter by registrar | godaddy, dropcatch |
| tld | Filter by TLD | .com, .fr |
| score_min | Minimum quality score | 50 |
| age_min | Minimum domain age in years | 10 |
| has_backlinks | Only domains with backlinks | 1 |
| has_bids | Only domains with auction bids | 1 |
| da_min | Minimum Domain Authority | 20 |
| language | Filter by content language | EN, FR |
| contains | Domain name contains keyword | shop |
| per_page | Results per page (max 100) | 50 |
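These parameters combine freely in one query string. As a convenience, here's a small Python sketch (using the requests library, mirroring the example later in this guide); the build_filters helper is my own addition, not part of the API, and simply drops unset values before the request:

```python
import requests

BASE_URL = "https://catchdoms.com/api/domains"

def build_filters(**kwargs):
    """Drop unset (None) values so only explicit filters reach the API."""
    return {k: v for k, v in kwargs.items() if v is not None}

def search(api_key, **filters):
    """One GET against the domains endpoint with the given filters."""
    resp = requests.get(
        BASE_URL,
        params=build_filters(**filters),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Calling search(key, tld=".com", score_min=50, has_bids=1) sends exactly those three parameters and nothing else.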
Python: find aged .fr domains with backlinks
This script finds French domains older than 10 years that have referring domains. Good for PBNs or aged domain projects:
import requests
API_KEY = "your_api_key_here"
BASE_URL = "https://catchdoms.com/api/domains"
response = requests.get(BASE_URL, params={
"tld": ".fr",
"age_min": 10,
"has_backlinks": 1,
"score_min": 40,
"language": "FR",
"per_page": 50,
}, headers={
"Authorization": f"Bearer {API_KEY}",
"Accept": "application/json",
})
data = response.json()
for domain in data["data"]:
print(f"{domain['name']:30s} age={domain['age']}y "
f"DA={domain['domain_authority']} "
f"RD={domain['referring_domains']} "
f"score={domain['score']}")
print(f"\nTotal matching: {data['meta']['total']}")
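The links and meta fields in the response make pagination straightforward. Here's a sketch of a generator that follows links.next until the last page; the injectable get parameter is a testing convenience I've added, not part of the API:

```python
import requests

BASE_URL = "https://catchdoms.com/api/domains"

def iter_all_domains(api_key, get=requests.get, **filters):
    """Yield every matching domain across all pages by following links.next."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
    }
    url = BASE_URL
    params = dict(filters, per_page=100)  # max page size = fewer requests
    while url:
        page = get(url, params=params, headers=headers, timeout=30).json()
        yield from page["data"]
        url = page["links"].get("next")  # None on the last page
        params = None  # the next-page URL already carries the query string
```

Because it's a generator, you can stop early (for example after the first 500 results) without fetching the remaining pages.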
JavaScript: daily domain monitor
A simple Node.js script that checks for new domains every day. You could plug this into a Slack webhook or email in about 10 lines:
const API_KEY = "your_api_key_here";
async function checkNewDomains() {
const params = new URLSearchParams({
score_min: "60",
has_backlinks: "1",
age_min: "5",
per_page: "20",
});
const res = await fetch(
`https://catchdoms.com/api/domains?${params}`,
{
headers: {
Authorization: `Bearer ${API_KEY}`,
Accept: "application/json",
},
}
);
const { data, meta } = await res.json();
console.log(`Found ${meta.total} domains matching criteria\n`);
for (const d of data) {
console.log(`${d.name} - score:${d.score} age:${d.age}y ` +
`DA:${d.domain_authority} RD:${d.referring_domains} ` +
`$${d.max_bid || d.price} (${d.source})`);
}
}
checkNewDomains();
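For a daily monitor to alert on new domains only, it needs to remember what it has already seen. A minimal Python sketch of that dedup step; the seen_domains.json state file and the filter_new helper are my own conventions, not part of the API:

```python
import json
from pathlib import Path

SEEN_FILE = Path("seen_domains.json")  # local state file; name is arbitrary

def filter_new(domains, seen_file=SEEN_FILE):
    """Return only domains whose id hasn't been reported before,
    and persist the updated id set for the next run."""
    seen = set(json.loads(seen_file.read_text())) if seen_file.exists() else set()
    fresh = [d for d in domains if d["id"] not in seen]
    seen.update(d["id"] for d in fresh)
    seen_file.write_text(json.dumps(sorted(seen)))
    return fresh
```

Run it on each fetch and only the first appearance of a domain triggers an alert.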
What people build with it
Slack or Discord alerts
A cron job queries the API every hour for .com domains with DA 20+ and 10+ years of age. When new ones show up, they get posted to a Slack channel. Your team catches opportunities in real time instead of the next morning.
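The posting step is a few lines. Here's a minimal sketch using the requests library; the webhook URL comes from your own Slack app, and format_alert is a helper invented for this example:

```python
import requests

def format_alert(domains):
    """One line per domain, ready for a Slack webhook text payload."""
    return "\n".join(
        f"{d['name']}: score {d['score']}, DA {d['domain_authority']}, "
        f"{d['referring_domains']} RD"
        for d in domains
    )

def post_to_slack(webhook_url, domains):
    """POST the summary to a Slack incoming webhook."""
    if domains:  # skip empty runs so the channel stays quiet
        requests.post(webhook_url, json={"text": format_alert(domains)}, timeout=10)
```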
Custom scoring
CatchDoms gives you a quality score out of the box, but you might care about different signals. Maybe you weight Trust Flow heavily. Maybe you only want .de domains with EDU backlinks. Pull from the API, run your own scoring logic, output a shortlist.
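As a sketch of what that might look like, here's a toy weighting that leans on Trust Flow. The weights are illustrative, and the edu_referring_domains field name is an assumption; the API exposes EDU/GOV referring domain counts, but the exact key isn't shown in the sample response above:

```python
def custom_score(d, w_tf=3.0, w_age=2.0, w_rd=1.0):
    """Toy weighted score that emphasizes Trust Flow. Weights are illustrative."""
    score = (
        w_tf * d.get("trust_flow", 0)
        + w_age * d.get("age", 0)
        + w_rd * min(d.get("referring_domains", 0), 100)  # cap RD influence
    )
    # Hypothetical boost: .de domains with EDU backlinks (field name assumed)
    if d.get("tld") == ".de" and d.get("edu_referring_domains", 0) > 0:
        score *= 1.5
    return score
```

Run every API result through it, sort descending, and keep the top of the list as your shortlist.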
AI and MCP
There's also an MCP server that lets Claude Code or Cursor search domains in natural language. You type "find French domains older than 15 years with Trust Flow above 20" and get results right in your IDE. Same data and filters as the REST API, just a different interface.
Portfolio monitoring
If you hold a portfolio, you can track auction activity on similar domains: prices, bid counts, market trends. Useful for pricing your own inventory and spotting acquisition targets.
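A sketch of the summary step, assuming the auction fields shown in the sample response (price, max_bid, bids_count); market_summary is an illustrative helper, not an API endpoint:

```python
from statistics import median

def market_summary(domains):
    """Summarize auction activity for a set of comparable domains."""
    prices = [d.get("max_bid") or d.get("price") or 0 for d in domains]
    with_bids = [d for d in domains if d.get("bids_count", 0) > 0]
    return {
        "count": len(domains),
        "with_bids": len(with_bids),
        "median_price": median(prices) if prices else 0,
        "top_price": max(prices, default=0),
    }
```

Feed it the results of a contains= query for a keyword you hold, and you get a rough read on what comparable names are fetching.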
Why an API beats scraping
You could scrape registrar websites directly. But there are a few reasons not to.
Scrapers break when the source site changes its HTML. That happens often. An API gives you stable JSON, and CatchDoms handles all the scraping and normalization on the backend.
Writing scrapers for 8 platforms, each with its own format, auth, and rate limits, is a lot of maintenance. One API call replaces all of that.
And raw registrar data doesn't include SEO metrics, Wayback history, or language detection. CatchDoms enriches every domain before serving it. You'd need Majestic, Ahrefs, the Wayback CDX API, and a language detector to replicate what one endpoint gives you.
Most tools in this space don't have APIs at all. ExpiredDomains.net is web-only, no API. DomCop has CSV downloads but no programmatic access. If you want to automate anything, you need an API.
Rate limits and tips
The Pro plan gives you 60 requests per minute. Some tips:
- Use filters. Don't fetch everything and filter client-side. Narrower queries mean fewer pages to iterate.
- Cache responses for 15-30 minutes. Domain data updates a few times per day, not every second.
- Set per_page=100 (the max) to reduce the number of requests when paginating.
- If you get a 429, wait 60 seconds and retry. Add exponential backoff to be safe.
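The retry advice can be sketched as a small wrapper around requests; backoff_delay and get_with_backoff are illustrative helpers, not part of any client library:

```python
import time
import requests

def backoff_delay(attempt, base=60):
    """Exponential backoff: 60s, 120s, 240s, ... for attempts 0, 1, 2, ..."""
    return base * 2 ** attempt

def get_with_backoff(url, headers, params=None, max_retries=5):
    """GET that waits out 429 responses instead of failing the whole run."""
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        time.sleep(backoff_delay(attempt))
    raise RuntimeError(f"still rate-limited after {max_retries} retries")
```

At 60 requests per minute you'll rarely hit this in practice, but a long pagination loop without it can die halfway through.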
Need higher limits? Check the Business plan.
Understanding the scores
Every domain has a score from 0 to 100. It factors in age, authority, backlinks, name quality, and TLD. The full breakdown is on the scoring page.
Quick guide:
- 70+ is very good. Usually premium domains with solid backlink profiles.
- 50-70 is worth a closer look.
- 30-50 might have potential but needs manual review.
- Below 30 is mostly thin or low-quality history.
A query like score_min=50&language=FR&age_min=10 is a fast way to find the domains that actually matter to you.
Next steps
If you're tired of checking 8 registrar tabs every morning, an API turns that into a single script that runs while you sleep.
CatchDoms has 370,000+ domains from 8 sources, all enriched with SEO metrics and Wayback data, served through a REST API and an MCP server.
- API overview with code examples
- Full docs for endpoints and parameters
- MCP server for AI assistants
- Pricing to get your API key