Infrastructure live · 3 worker nodes · Playwright WebSocket + HTTP API

Scrape anything.
At any scale.

Managed Playwright infrastructure with stealth fingerprinting, rotating residential proxies, and a REST scraping API. Connect in one line — or just POST a URL.

scrape.py
import asyncio
from playwright.async_api import async_playwright

BROWSER_TOKEN = "bt_live_YOUR_TOKEN_HERE"

async def main():
    async with async_playwright() as pw:
        # One line change — stealth & proxy included
        browser = await pw.chromium.connect(
            "wss://api.crawlops.io/ws"
            f"?browser_token={BROWSER_TOKEN}&stealth=true"
        )
        page = await browser.new_page()
        await page.goto("https://example.com")
        print(await page.title())
        await browser.close()

asyncio.run(main())

99.9%

Uptime SLA

<100ms

P95 WS Latency

50M+

Proxy IP pool

2

Products

Two products

Pick the right tool for the job

Both run on the same infrastructure — stealth, proxies, and analytics included.

Playwright WebSocket

wss://api.crawlops.io/ws

Browser Token

Connect your Playwright script directly to a managed, stealth-enabled browser. Full DevTools access — click, fill, scroll, intercept network, execute JavaScript.

Full Playwright API — zero code changes
Camoufox anti-detection engine
Residential proxy per session
Multi-step flows, login, form fill
connect("wss://…/ws?browser_token=bt_live_…")

HTTP Scraping API

POST /v1/scrape

API Key

POST a URL, get back HTML, extracted data, or a screenshot. No browser to manage — the gateway handles everything end-to-end.

No Playwright code required
CSS extraction rules — structured JSON output
Full-page screenshots as base64
Block resources to speed up fetching
POST /v1/scrape · Authorization: Bearer sk_live_…
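A request to the scraping endpoint can be sketched as follows. Note the body field names here ("url", "extract", "block_resources") are illustrative assumptions, not the confirmed schema; check the API reference for the real shape.

```python
# Sketch of a POST /v1/scrape request body. Field names are assumed
# for illustration -- consult the CrawlOps API docs for the real schema.
import json

API_KEY = "sk_live_YOUR_KEY_HERE"

def build_scrape_request(url, rules=None):
    """Return (headers, body) for a POST to /v1/scrape."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "url": url,
        # Named CSS-selector rules -> structured JSON in the response.
        "extract": rules or {},
        # Skip heavy resources to speed up the fetch (assumed field).
        "block_resources": ["image", "font"],
    }
    return headers, json.dumps(payload).encode()

headers, body = build_scrape_request(
    "https://example.com",
    rules={"title": "h1", "links": "a[href]"},
)
print(json.loads(body)["extract"]["title"])  # -> h1
```

Sending it is then a single call with any HTTP client; the response carries the HTML, the extracted fields, or a screenshot depending on what the payload asked for.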

Which one should I use?

Multi-step flows, login, CAPTCHA, JS interactions → Playwright WebSocket
One-shot page fetch, bulk scraping at scale → HTTP Scraping API
Need full DevTools / network intercept → Playwright WebSocket
Simple HTML + structured extraction → HTTP Scraping API

Platform

Everything built in

No proxies to manage, no fingerprint tuning, no infrastructure ops. Just code.

Stealth by Default

Camoufox fingerprint randomization bypasses Cloudflare, Akamai, and DataDome. No configuration.

Rotating Residential Proxies

A fresh residential IP from a 50M+ pool is assigned to each session automatically. Zero proxy management. Geographic targeting included.

Sub-100ms Connection

Global worker pool, <100ms WebSocket latency. One line of code to connect Playwright.

CSS Extraction Rules

Define named CSS-selector rules in JSON — get back structured data without parsing HTML.

Screenshots on Demand

Full-page or viewport PNG — returned as base64 from the HTTP API or captured in Playwright.
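Decoding a returned screenshot takes a couple of lines of standard-library Python. The response field name ("screenshot") is an assumption for illustration; the real key may differ.

```python
# Decode a base64 PNG from an HTTP API response and write it to disk.
# The "screenshot" response key is a hypothetical placeholder.
import base64

def save_screenshot(response, path):
    """Decode the base64 PNG and write it to disk; return byte count."""
    png_bytes = base64.b64decode(response["screenshot"])
    with open(path, "wb") as f:
        f.write(png_bytes)
    return len(png_bytes)

# Example with a stub response (a real call returns actual PNG data):
stub = {"screenshot": base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()}
save_screenshot(stub, "page.png")
```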

Live Analytics

Session rate, latency percentiles, per-worker load — updating live in Grafana + your dashboard.

Multi-Browser Support

Chromium and Firefox workers, fully isolated browser contexts, switchable with a URL param.
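Since the engine is selected via the connection URL, the endpoint can be assembled like this. The "browser" query-parameter name is a hypothetical placeholder: the docs only say the engine is switchable with a URL param, not what the param is called.

```python
# Build the WebSocket endpoint for a given engine. The "browser"
# parameter name is hypothetical -- only browser_token and stealth
# appear in the documented examples.
from urllib.parse import urlencode

def ws_endpoint(token, engine="chromium", stealth=True):
    query = urlencode({
        "browser_token": token,
        "browser": engine,  # hypothetical param name
        "stealth": str(stealth).lower(),
    })
    return f"wss://api.crawlops.io/ws?{query}"

print(ws_endpoint("bt_live_YOUR_TOKEN_HERE", engine="firefox"))
# -> wss://api.crawlops.io/ws?browser_token=bt_live_YOUR_TOKEN_HERE&browser=firefox&stealth=true
```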

Two Credential Types

Browser Tokens (bt_live_) for WebSocket. API Keys (sk_live_) for HTTP. Per-plan, Redis-backed rate limits.
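The split described above can be summarized in a small helper, a sketch of where each credential type travels: Browser Tokens ride in the WebSocket URL, API Keys in an Authorization header.

```python
# Route a credential to its transport based on its documented prefix.
def auth_for(credential):
    """Return where a credential belongs, keyed on its prefix."""
    if credential.startswith("bt_live_"):
        return {"transport": "websocket",
                "query": {"browser_token": credential}}
    if credential.startswith("sk_live_"):
        return {"transport": "http",
                "headers": {"Authorization": f"Bearer {credential}"}}
    raise ValueError("unknown credential prefix")

print(auth_for("bt_live_abc")["transport"])  # -> websocket
print(auth_for("sk_live_xyz")["headers"])    # -> {'Authorization': 'Bearer sk_live_xyz'}
```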

Integration

From zero to scraping in 60 seconds

1

Create a Browser Token

Dashboard → Browser Tokens → New Token. Copy the bt_live_... value.

2

Replace launch() with connect()

One line change. Pass the WebSocket URL with your token — stealth and proxy are injected automatically.

3

Run your existing Playwright code

Every Playwright method works. Click, fill, scroll, screenshot, evaluate — unchanged.

4

Watch it in the dashboard

Session count, duration, and success rate update in real time.

scrape.py
import asyncio
from playwright.async_api import async_playwright

BROWSER_TOKEN = "bt_live_YOUR_TOKEN_HERE"

async def main():
    async with async_playwright() as pw:
        # One line change — stealth & proxy included
        browser = await pw.chromium.connect(
            "wss://api.crawlops.io/ws"
            f"?browser_token={BROWSER_TOKEN}&stealth=true"
        )
        page = await browser.new_page()
        await page.goto("https://example.com")
        print(await page.title())
        await browser.close()

asyncio.run(main())
Coming soon

The roadmap

We're building the complete data extraction stack.

Q3 2026

AI-Powered Extractor

Describe in plain English what to extract. Our LLM generates selector rules for any page.

In development
Q3 2026

Headless Action Chains

Define multi-step workflows in YAML — login, fill, click, extract — without writing code.

In development
Q4 2026

CAPTCHA Auto-Solver

hCaptcha, reCAPTCHA v2/v3, Cloudflare Turnstile — solved transparently before your code runs.

In development
Q4 2026

Pipeline Scheduler

Schedule recurring scrape jobs, define triggers, and push results to S3, Postgres, or webhooks.

In development

Pricing

Simple, usage-based pricing

Start free. Scale as you grow. Both products included on every plan.

Free

$0
  • 2 API Keys · 1 Browser Token
  • 10 req/min
  • 1,000 sessions/mo
  • Community support
Get started

Starter

$49/mo
  • 5 API Keys · 3 Browser Tokens
  • 60 req/min
  • 50,000 sessions/mo
  • Email support
  • Residential proxies
Start Starter
MOST POPULAR

Pro

$199/mo
  • 20 API Keys · 10 Browser Tokens
  • 300 req/min
  • 500,000 sessions/mo
  • Stealth + Camoufox
  • Priority Slack
  • Rotating proxies
Start Pro

Enterprise

Custom
  • Unlimited Keys & Tokens
  • 1,000+ req/min
  • Unlimited sessions
  • Dedicated fleet
  • SLA + phone support
Contact sales

All plans include SSL, DDoS protection, and 24/7 infrastructure monitoring. · Compare plans →

Ready to scrape at scale?

Join teams using CrawlOps to power their data pipelines. Free plan includes 1,000 sessions/month and both products.

Create free account