Phase 1 · Apple Silicon · Live

Your Mac earns
while it sleeps.

Hatch turns idle Mac Minis and Mac Studios into a global GPU network. Lock your screen and earn HC credits. Pay cents to run 70B LLMs on 192 GB of unified memory.

⚡ Quick Install — Provider (Apple Silicon, macOS 14+)

curl -fsSL https://raw.githubusercontent.com/wkang0223/hatch/master/scripts/install-agent-macos.sh | bash
or download binaries directly
Open Dashboard ★ Fork on GitHub
192 GB
Max unified memory per node
$0.05
Starting price per GPU-hour (in HC)
0 ms
CPU↔GPU copy overhead
70B+
LLM on a single Mac Mini M4 Pro

Why Apple Silicon?

Unified memory means the CPU and GPU share a single pool: no copies, no transfer overhead, full bandwidth to every core. For memory-bound 70B LLM inference, a $1,399 Mac Mini M4 Pro can outperform a $30,000 pair of A100s.
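Why 192 GB matters is back-of-envelope math. A minimal sketch, assuming 4-bit quantization and a rough 20% overhead factor for KV cache and activations (both are illustrative assumptions, not Hatch constants):

```rust
/// Rough sizing rule of thumb: weight memory is parameter count times
/// bytes per weight, plus ~20% headroom for KV cache and activations.
pub fn approx_mem_gb(params_billions: f64, bits_per_weight: f64) -> f64 {
    params_billions * bits_per_weight / 8.0 * 1.2
}

fn main() {
    // 70B parameters at 4-bit: ~35 GB of weights, ~42 GB total —
    // comfortably inside a 64 GB Mac Mini M4 Pro's unified memory.
    println!("70B @ 4-bit needs ~{:.0} GB", approx_mem_gb(70.0, 4.0));
}
```

The same arithmetic shows why a 192 GB node can host shards of a 405B model but not the full thing.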

Machine                        Unified RAM   GPU Cores   Best For                       Price
Mac Mini M4                    16–24 GB      10          7B–13B inference, agents       $599
Mac Mini M4 Pro (best value)   24–64 GB      20          30B–70B LLM inference          $1,399
Mac Studio M4 Max              64–128 GB     40          70B fine-tuning, multi-modal   $1,999
Mac Studio M4 Ultra            128–192 GB    80          405B shards, full pipeline     $3,999

How it works

One install command and your Mac is earning. Jobs run in an isolated macOS sandbox; you never lose control.

Step 01

Install Hatch

One-line installer. Creates the hatch_worker isolation user, installs MLX & PyTorch MPS runtimes, and registers a launchd daemon that starts on boot.
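A launchd daemon that starts on boot is declared by a property list. A minimal sketch of what such a plist could look like — the label, binary path, and filename here are hypothetical, not the installer's actual output:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label/path for illustration -->
    <key>Label</key>
    <string>io.hatch.agent</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/hatch-agent</string>
    </array>
    <!-- Start at boot and restart if the agent exits -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

A file like this under /Library/LaunchDaemons is what lets the agent survive reboots without any user interaction.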

Step 02

Lock your screen

The agent polls GPU utilization via IOKit every 30 s. Once utilization has stayed below 5% for 10 minutes with your screen locked, your Mac announces itself as available.
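The idle-detection logic above can be sketched as a small state machine. The 30 s poll interval, 5% threshold, and 10-minute window come from this page; the type and method names are illustrative, not the agent's real API:

```rust
use std::time::Duration;

/// Tracks how long the GPU has been continuously idle while locked.
struct IdleDetector {
    below_threshold_for: Duration,
}

impl IdleDetector {
    fn new() -> Self {
        Self { below_threshold_for: Duration::ZERO }
    }

    /// Called once per 30 s poll with the latest sample.
    /// Returns true once the Mac should announce itself as available.
    fn observe(&mut self, gpu_util_pct: f64, screen_locked: bool) -> bool {
        if screen_locked && gpu_util_pct < 5.0 {
            self.below_threshold_for += Duration::from_secs(30);
        } else {
            // Any utilization spike or screen unlock resets the idle clock.
            self.below_threshold_for = Duration::ZERO;
        }
        self.below_threshold_for >= Duration::from_secs(10 * 60)
    }
}

fn main() {
    let mut d = IdleDetector::new();
    // 19 idle polls = 9.5 minutes: not yet available.
    for _ in 0..19 {
        assert!(!d.observe(1.0, true));
    }
    // The 20th poll crosses the 10-minute mark.
    assert!(d.observe(1.0, true));
    println!("available");
}
```

The reset branch is what guarantees the "drops off the instant your GPU spikes" behavior: a single busy sample starts the 10-minute count over.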

Step 03

Jobs run automatically

The coordinator matches jobs to your Mac by RAM requirement and runtime. Jobs execute inside a sandbox-exec profile — file and network access fully restricted.
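Matching by RAM requirement and runtime can be sketched as a filter plus a tie-break. The types and the tightest-fit policy below are assumptions for illustration; the real coordinator's scheduler lives in the Rust monorepo:

```rust
/// Hypothetical shapes — not the coordinator's actual types.
#[derive(PartialEq)]
enum Runtime { Mlx, TorchMps }

struct Node { free_ram_gb: u32, runtimes: Vec<Runtime> }
struct Job  { ram_gb: u32, runtime: Runtime }

/// Keep only nodes that satisfy the job's RAM and runtime requirements,
/// then prefer the tightest RAM fit so 192 GB Studios stay free for
/// 405B-shard jobs (the tie-break policy is an assumption).
fn match_node<'a>(job: &Job, nodes: &'a [Node]) -> Option<&'a Node> {
    nodes
        .iter()
        .filter(|n| n.free_ram_gb >= job.ram_gb && n.runtimes.contains(&job.runtime))
        .min_by_key(|n| n.free_ram_gb - job.ram_gb)
}

fn main() {
    let nodes = vec![
        Node { free_ram_gb: 192, runtimes: vec![Runtime::Mlx, Runtime::TorchMps] },
        Node { free_ram_gb: 64,  runtimes: vec![Runtime::Mlx] },
    ];
    let job = Job { ram_gb: 48, runtime: Runtime::Mlx };
    // Tightest fit wins: the 64 GB Mini, not the 192 GB Studio.
    assert_eq!(match_node(&job, &nodes).unwrap().free_ram_gb, 64);
}
```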

Step 04

Withdraw HC credits

Earnings settle instantly in HC (Hatch Credits). Cash out to USDC on Arbitrum or spend credits to run your own compute jobs.
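The earn/spend/withdraw flow reduces to simple balance accounting. A toy in-memory sketch with hypothetical names — the real Phase 1 ledger is Postgres-backed and receipt-signed:

```rust
/// One provider's HC balance (illustrative; not the ledger's schema).
struct Account { hc: u64 }

impl Account {
    /// Earnings settle instantly: credit as soon as a job completes.
    fn settle(&mut self, hc: u64) { self.hc += hc; }

    /// Withdrawals and compute spends debit the same balance; never overdraft.
    fn debit(&mut self, hc: u64) -> Result<(), &'static str> {
        if hc > self.hc { return Err("insufficient HC"); }
        self.hc -= hc;
        Ok(())
    }
}

fn main() {
    let mut acct = Account { hc: 0 };
    acct.settle(120);                  // a completed job pays 120 HC
    assert!(acct.debit(50).is_ok());   // spend 50 HC on your own compute
    assert!(acct.debit(100).is_err()); // only 70 HC left: rejected
    assert_eq!(acct.hc, 70);
}
```

Whether a debit is a USDC cash-out on Arbitrum or a compute spend, it draws on the same balance.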

Built for idle Macs

None of the major GPU marketplaces (io.net, Vast.ai, Salad, Akash) supports Apple Silicon. Hatch fills that gap.

🔒

Screen-lock aware

Uses the macOS CGSession API. Your GPU is leased only while your screen is locked and the GPU is truly idle.

📊

IOKit GPU monitoring

Reads IOAccelStatistics2 every 30 s and drops off the network the instant GPU utilization spikes above the threshold.

🛡️

macOS sandbox isolation

Jobs run as a restricted hatch_worker OS user with sandbox-exec profiles — no filesystem or outbound network beyond the job directory.

MLX & Metal native

Full Metal GPU access. MLX and PyTorch MPS backends run at native speed — zero virtualization overhead.

💰

HC credits (off-chain → on-chain)

Phase 1: signed Ed25519 escrow receipts in PostgreSQL. Phase 3: trustless settlement on Arbitrum L2.
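A signed escrow receipt needs a deterministic byte payload for the Ed25519 signature. A minimal sketch of what such a payload could look like — the field layout and the `hatch.receipt.v1` prefix are assumptions, not the ledger's actual schema:

```rust
/// Hypothetical receipt fields for illustration.
struct EscrowReceipt {
    job_id: u64,
    provider: String,
    hc_amount: u64,
    nonce: u64,
}

impl EscrowReceipt {
    /// Canonical bytes to sign. In Phase 1 the coordinator would sign these
    /// with its Ed25519 key (e.g. via an Ed25519 crate) and store payload +
    /// signature in PostgreSQL; Phase 3 can replay the same receipts on
    /// Arbitrum for trustless settlement.
    fn signing_bytes(&self) -> Vec<u8> {
        format!("hatch.receipt.v1|{}|{}|{}|{}",
                self.job_id, self.provider, self.hc_amount, self.nonce)
            .into_bytes()
    }
}

fn main() {
    let r = EscrowReceipt { job_id: 7, provider: "node-a1".into(), hc_amount: 12, nonce: 99 };
    assert_eq!(r.signing_bytes(), b"hatch.receipt.v1|7|node-a1|12|99".to_vec());
}
```

Determinism is the point: any verifier that rebuilds the same bytes can check the signature without trusting the database that stored it.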

🦀

Rust throughout

Agent, coordinator, and ledger are all written in Rust. Single binary per component — no runtime dependencies.

Three ways in

Use the live network, self-host on Railway, or fork and build your own.

⬇️

Download & run (Mac provider)

Install the Hatch agent on your M-series Mac. One command — detects your chip, installs runtimes, sets up launchd, and starts earning.

curl -fsSL https://raw.githubusercontent.com/wkang0223/hatch/master/scripts/install-agent-macos.sh | bash
Download binaries →
🚀

Deploy your own coordinator

Self-host the coordinator + ledger on Railway in minutes. Click deploy, add a Postgres database, set three env vars, and you have your own private Hatch network.

1. Fork the repo on GitHub
2. Create a Railway project
3. Add Postgres + deploy 2 services
Deploy on Railway → View config
🍴

Fork & customize

Full Rust monorepo — agent, coordinator, ledger, CLI, Next.js dashboard. Apache 2.0 licensed. Build your own GPU marketplace on top.

cargo build -p hatch-agent
cargo build -p neuralmesh-coordinator
cargo build -p hatch
★ Fork on GitHub →