Frequently Asked Questions

Everything visible, nothing hidden.

The product

What is Unphurl?

Unphurl is a domain intelligence API. Send a URL, get structured signals back across 7 dimensions: redirect behaviour, brand impersonation, domain intelligence via RDAP, SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Around 30 data points, 23 configurable scoring weights. Three channels: REST API, CLI (npx unphurl), and MCP server for AI tools like Claude (Code, Desktop, or Cowork), ChatGPT desktop, Cursor, and Windsurf.

What are signals and why don't you give verdicts?

Signals are raw data points: "this domain is 3 days old," "this domain impersonates paypal.com," "no MX record." Unphurl does not say "safe" or "unsafe" because what's dangerous depends on context. A parked domain is a non-event for a security bot but a dealbreaker for a cold email campaign. You set your own thresholds using scoring profiles. The score_breakdown in every response shows exactly how the score was calculated.
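
As a rough sketch, the breakdown might look like this (the field name score_breakdown and the signal names come from these docs; the exact response shape around them is illustrative):

```json
{
  "score": 55,
  "score_breakdown": {
    "domain_age_7": 30,
    "no_mx_record": 25
  }
}
```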

What does the pipeline actually analyse?

Seven dimensions, all in parallel: redirect chains (up to 10 hops, shortener detection), brand impersonation (Levenshtein distance + homoglyph analysis against 150+ brands across 17 categories), domain intelligence via RDAP (age, registrar, expiration, status codes, nameservers), SSL/TLS validity (real handshake from Cloudflare's edge), parked domain detection (4-layer: registrar pages, for-sale, empty content, parking services), URL structural analysis (length, path depth, subdomain count, entropy, IP detection, encoded characters, TLD changes), and DNS enrichment (MX record existence).

How accurate is the analysis?

Unphurl uses heuristic analysis, not threat database matching. It catches pattern-based threats (typosquatting, fresh infrastructure, suspicious URL patterns) that blocklist-based tools miss. It also catches data quality issues (dead domains, parked sites, stale infrastructure) that security tools ignore entirely. It won't catch a clean-looking domain with a known bad history (that's what blocklists do). The two approaches are complementary. Unphurl is transparent about what it can and cannot detect.

What can Unphurl NOT detect?

Unphurl analyses domain infrastructure and URL patterns. It does not scan page content for malware, check files for viruses, detect drive-by downloads, match against threat intelligence feeds, or evaluate the quality of the content behind a URL. If a domain has legitimate infrastructure but serves malicious content, Unphurl's signals will look clean. It's domain intelligence, not content scanning.

Pricing and billing

What's free and what costs money?

Known domains (the Tranco Top 100K: Google, Amazon, GitHub, and the rest of the 100,000 most popular domains) return instantly at no cost. Previously analysed domains return cached signals at no cost. Only unknown domains that require a full pipeline analysis cost 1 pipeline check credit. In typical batches, 95-99% of URLs resolve free.

What counts as a pipeline check?

A pipeline check is triggered when Unphurl encounters a domain not in the Tranco Top 100K and not in the cache. It runs the full 7-dimension analysis. The result is cached for all future lookups by any customer, so that domain never triggers a pipeline check again (until the cache expires).

Do credits expire?

No. Pipeline check credits never expire. Buy them when you need them, use them whenever.

Do I get free credits to try it?

Yes. Every new account gets 3 free pipeline check credits. No credit card required. Use them to check real URLs and see the full pipeline response. Known domain lookups and cached lookups are always free and don't consume credits.

What happens when I run out of credits?

Unphurl still runs your URLs through the gate system (Tranco, cache) and returns a smart summary: how many are known, how many are cached, how many are unknown, and exactly how many credits you need. No results are returned for the unknowns, but you know exactly what to buy. Account endpoints, pricing, and known domain checks keep working.
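
The smart summary for an out-of-credits batch might look something like this (the field names in this sketch are illustrative guesses; only the categories themselves, known, cached, unknown, and credits needed, come from the answer above):

```json
{
  "known": 412,
  "cached": 61,
  "unknown": 27,
  "credits_needed": 27
}
```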

How does fair billing work?

If a pipeline check fails completely and returns zero signals, the credit is automatically refunded. For batch requests, credits are deducted upfront and refunded individually for any pipeline checks that fail. Duplicate URLs in a batch are processed once and charged once. You only pay for successful intelligence.

Why do I need credits for free lookups?

All check requests require a positive credit balance, even for lookups that won't deduct credits (known domains and cached domains). This protects the cached intelligence database, which is proprietary data built from every pipeline check across all customers. Every new account gets 3 free credits to start.

What happens if a pipeline check fails?

If the pipeline fails completely and returns zero signals, the credit is automatically refunded to your balance. If the pipeline partially succeeds (some signals returned, some timed out), the credit is charged because you received intelligence. In the response, null values mark the checks that timed out.

Getting started

I'm not technical. Can I still use Unphurl?

Yes. If you use Claude (desktop app, Cowork, or Code), OpenClaw, ChatGPT desktop, Cursor, or Windsurf, you can install Unphurl and check URLs by talking to your AI in plain English. No terminal, no code. Say "check this URL" or "batch check these 500 URLs and give me a clean list." Your AI handles everything.

How do I set up Unphurl in Claude, ChatGPT, or Cursor?

Unphurl works with any AI tool that supports MCP servers: Claude Code, Claude Desktop, Claude Cowork, ChatGPT desktop, Cursor, and Windsurf. Add the snippet below to your MCP configuration file (usually .mcp.json), then restart your AI tool. You get 13 tools: check URLs, batch check, manage profiles, view balance, purchase credits, usage stats, resend verification, and more. OpenClaw users can build an Unphurl skill that calls the API directly. If someone else set up your AI tool for you, send them these instructions; they'll know what to do.

{
  "mcpServers": {
    "unphurl": {
      "command": "npx",
      "args": ["-y", "@unphurl/mcp-server"],
      "env": {
        "UNPHURL_API_KEY": "uph_your_key_here"
      }
    }
  }
}

What's Claude Cowork and how does Unphurl work with it?

Claude Cowork is Anthropic's collaborative AI workspace. It supports MCP servers, which means you can install Unphurl and check URLs from inside Cowork without writing code. You can also set up recurring tasks, like "check the links in my recent emails every 2 hours." The companion skill teaches Claude to proactively check URLs as you work, without being asked.

What's the companion skill and do I need it?

The companion skill is a small file you add to your AI tool's skills directory. It teaches the AI to proactively check unfamiliar URLs before following or recommending them, without you asking. You don't need it, but it turns your AI from "checks URLs when asked" to "checks URLs automatically." Works with Claude (Code, Desktop, or Cowork), ChatGPT desktop, Cursor, and Windsurf. Install it by copying skills/check-url-safety.md from the MCP package to your skills directory.

Does Unphurl work with OpenClaw?

Yes. OpenClaw can call the Unphurl API directly via a skill. Build a skill that checks URLs before your agent follows them, using the same REST API that powers the CLI and MCP server. OpenClaw's broad system access (email, browser, messaging) makes Unphurl especially valuable: every URL your agent encounters is a potential attack surface. OpenClaw's heartbeat system also supports scheduled link checking (e.g., scan emails every 30 minutes). See the OpenClaw agent safety use case for a full walkthrough.

How do I sign up?

From the CLI: unphurl signup --email you@example.com --name "Your Name". Or via the API: POST /v1/signup with your email and name. Or ask your AI tool: "Sign me up for Unphurl." The MCP server has a signup tool that works without an API key. You get your API key immediately (shown once), plus 3 free pipeline check credits. Check your email to verify your account; your API key won't work until you do.
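
For the API route, the request body is just the two fields mentioned above; this sketch assumes a plain JSON body with no other required fields:

```json
{
  "email": "you@example.com",
  "name": "Your Name"
}
```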

What if I didn't receive my verification email?

Run unphurl verify --resend --email you@example.com from the CLI, or ask your AI tool to resend it. You can request up to 3 new links per hour. Check your spam folder too. The verification link expires after 24 hours.

Where's the dashboard?

There isn't one, on purpose. Unphurl is built for AI agents, scripts, and workflows, not for clicking around a web UI. Everything is API-first: check URLs, manage profiles, view history, buy credits. Your CLI, your AI tool, or your code is the interface. That's by design.

How it works

What are scoring profiles and why would I use them?

Scoring profiles are named sets of weight overrides. The same raw signals mean different things depending on your use case. A parked domain is a non-event for a security bot but critical for cold email outreach. Create profiles like "cold-email" (boost parked, no_mx_record) or "security-bot" (boost brand_impersonation, domain_age_7) and reference them per request with ?profile=cold-email. Up to 20 profiles per account. Only specify the weights you want to change; everything else keeps the defaults.
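
As a sketch, a "cold-email" profile's overrides might look like this (the signal names parked and no_mx_record come from the answer above; the exact shape for defining a profile is an assumption):

```json
{
  "name": "cold-email",
  "weights": {
    "parked": 80,
    "no_mx_record": 60
  }
}
```

Once created, reference it per request with ?profile=cold-email; any weight you don't list keeps its default.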

How does batch processing work?

Send up to 500 URLs in one request to POST /v1/check/batch. Known and cached URLs resolve immediately. Unknown URLs are queued for async processing via Cloudflare Queues. You get a job_id to poll for results, or provide a webhook_url and get notified when complete. The CLI auto-chunks larger files into batches of 500. The MCP server handles polling automatically (transparent to the user). Typical processing time: 2-5 minutes for a batch with ~250 unknown URLs. Batch billing is all-or-nothing: if you don't have enough credits for every unknown URL, the batch is rejected with a smart summary showing exactly how many credits you need.
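
A sketch of a batch request body (webhook_url is documented above; the urls field name is an assumption):

```json
{
  "urls": [
    "https://example.com/signup",
    "https://example.org/pricing"
  ],
  "webhook_url": "https://yourapp.example/unphurl-callback"
}
```

Omit webhook_url to poll the returned job_id instead.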

How do I get a clean list from my batch results?

CLI users pipe JSON through jq. Claude Cowork users just ask in natural language: "Check these 500 URLs and give me two lists: clean ones (score under 25) and flagged ones (score 50+). Export as CSV." The AI agent is the filter.

# Get clean URLs (score under 25)
unphurl --batch urls.txt --json | \
  jq -r '.results[] | select(.result.score < 25) | .url' > clean.txt

# Get flagged URLs (score 50+)
unphurl --batch urls.txt --json | \
  jq -r '.results[] | select(.result.score >= 50) | .url' > flagged.txt
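
The same filters work on a saved result file. Here is a self-contained sketch; the sample below mimics the results[].result.score shape used above, and a real response will carry more fields:

```shell
# Create a tiny sample in the shape the filters above expect.
cat > results.json <<'EOF'
{"results":[
  {"url":"https://a.example","result":{"score":12}},
  {"url":"https://b.example","result":{"score":61}}
]}
EOF

# Same clean-list filter as above, run against the saved file.
jq -r '.results[] | select(.result.score < 25) | .url' results.json > clean.txt
cat clean.txt
```

This writes https://a.example (and only that URL) to clean.txt, since the other sample URL scores 61.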

How does the cache work? Will I get stale results?

Every pipeline result is cached at the domain level with smart TTLs: 7 days for high-risk domains (score 80-100), 12-24 hours for moderate and low risk, 48 hours for parked domains. Cache hits return the same raw signals but compute the score fresh using your profile weights. Redirects always run fresh (they're volatile). The meta field in every response shows cached: true/false and cache_age_hours so you always know what you're getting.
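
Concretely, the cache metadata looks roughly like this (cached and cache_age_hours are documented above; the enclosing shape is illustrative):

```json
{
  "meta": {
    "cached": true,
    "cache_age_hours": 6
  }
}
```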

How do I see my usage statistics?

Use unphurl stats from the CLI, or ask your AI tool "show me my Unphurl stats." The stats show total URLs submitted, how many resolved free (Tranco and cache) vs paid pipeline checks, your free rate percentage, how many URLs scored above key thresholds (50 and 75), and your credit balance. Available via the API at GET /v1/account/stats.
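
Those categories map onto a response along these lines (every field name in this sketch is a guess for illustration; only the categories themselves come from the answer above):

```json
{
  "total_urls": 1240,
  "free_lookups": 1192,
  "pipeline_checks": 48,
  "free_rate": 0.96,
  "scored_over_50": 14,
  "scored_over_75": 4,
  "credit_balance": 52
}
```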

Comparisons

How is Unphurl different from Google Safe Browsing?

Google Safe Browsing is a blocklist: it tells you if a URL is on Google's list of known threats. It's binary (listed or not) and misses anything Google hasn't catalogued yet. Unphurl analyses the domain's actual infrastructure: how old it is, whether it impersonates a brand, its SSL status, redirect behaviour, DNS records. Unphurl catches brand-new threats (fresh domains, typosquatting) that aren't on any blocklist yet. They're complementary, not competitive.

How is Unphurl different from VirusTotal?

VirusTotal aggregates results from 70+ antivirus engines and threat feeds. It's comprehensive for known malware but returns a binary "X out of 70 flagged it" result. Unphurl provides structured signals about domain infrastructure, not malware scanning. Unphurl catches data quality issues (parked domains, dead infrastructure, missing MX records) that VirusTotal doesn't address. Unphurl also offers configurable scoring and custom profiles, so different use cases get different scores from the same data.

How is this different from a WHOIS lookup?

WHOIS gives you raw registration data for one domain. Unphurl goes far beyond that: it analyses 7 dimensions simultaneously (WHOIS/RDAP is just one of them), follows redirect chains, checks SSL, detects brand impersonation, identifies parked domains, analyses URL structure, and checks DNS records. It also scores the results using configurable weights, caches everything for performance, and handles batch processing. It's the difference between a single data source and a full intelligence pipeline.

Security and privacy

Is my data shared with other customers?

The URLs you check are not shared. Your check history, profiles, and account data are private to your API key. However, the pipeline results (domain-level signals like age, registrar, SSL status) are cached and used to serve free lookups for all customers. This is how the "previously analysed" gate works. No personally identifiable information is stored in the cache, just domain infrastructure data.

Do you store the URLs I check?

Unphurl logs the domain (not the full URL path) in your check history for 90 days. The full URL is not stored. Domain-level signals are cached separately with TTLs. Account-level stats (balance, totals) never expire. If you delete your account, all your check history is permanently removed. Cached domain data stays because it's infrastructure data, not personal data.

Looking for specific use cases? See all 27 use cases.