Launch wedge
GPU runners for AI-native CI.
Pearl starts with a concrete operational problem: GPU jobs in GitHub Actions are still slow, fragile, and too DIY.
The first product is a control plane for ephemeral GPU runners and inference-aware workflow steps, so model-backed checks can live inside the build loop instead of in a separate tab.
That launch stays grounded in a broader Pearl thesis: inference-native software delivery needs explicit trust boundaries, reproducible execution, and operator-visible evidence.
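To make "model-backed checks inside the build loop" concrete, here is a minimal sketch of the shape such a check takes: a script whose exit code gates a build step. The `score_diff` function below is a hypothetical stand-in for an inference call, not Pearl's actual API; the point is only that a model-backed verdict becomes an ordinary pass/fail step.

```python
# Hedged sketch: a model-backed CI gate. score_diff() is a hypothetical
# stand-in for an inference call; in a real pipeline it would query an
# inference endpoint provisioned alongside the GPU runner.

def score_diff(diff_text: str) -> float:
    # Illustrative heuristic standing in for model inference: rate a
    # change's risk between 0.0 (safe) and 1.0 (risky).
    risky_markers = ("DROP TABLE", "os.system", "eval(")
    hits = sum(marker in diff_text for marker in risky_markers)
    return min(1.0, hits / len(risky_markers))

def gate(diff_text: str, threshold: float = 0.5) -> int:
    # Return a process exit code: 0 passes the build step, nonzero
    # fails it. This is all it takes for a model-backed check to live
    # in the build loop instead of a separate tab: the CI job runs
    # sys.exit(gate(diff)) and the workflow reacts to the status.
    return 0 if score_diff(diff_text) < threshold else 1
```

In a workflow, the runner would pipe the diff into this script and the step's status would reflect the model's verdict, exactly like any other check.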
Operating model
Make the workflow legible.
The point is not generic AI productivity. The point is software delivery infrastructure that teams can inspect, control, and run again under pressure.
- fresh runner per job
- trust lanes for CI and release paths
- off-box logs, artifacts, and billing evidence
- narrow launch scope with explicit constraints
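The list above can be sketched as a small routing policy. Lane names, fields, and the tag-based routing rule here are illustrative assumptions, not Pearl's actual configuration; the sketch only shows how "trust lanes" and "fresh runner per job" reduce to explicit, inspectable data rather than implicit behavior.

```python
# Hedged sketch of trust lanes as explicit policy data. The "ci" and
# "release" lane names and the refs/tags/ routing rule are assumptions
# for illustration, not Pearl's actual policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lane:
    name: str
    fresh_runner: bool      # new ephemeral runner per job
    offbox_evidence: bool   # logs, artifacts, billing shipped off-box

LANES = {
    "ci": Lane("ci", fresh_runner=True, offbox_evidence=True),
    "release": Lane("release", fresh_runner=True, offbox_evidence=True),
}

def lane_for(ref: str) -> Lane:
    # Route tag builds to the release lane, everything else to ci.
    return LANES["release" if ref.startswith("refs/tags/") else "ci"]
```

Because the policy is plain data, operators can inspect it, diff it, and rerun a job under the same lane: the legibility the operating model asks for.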
Rune preview
Rune previews the runtime layer.
Rune is Pearl's local-first runtime for source-authored agents and services. It shows where the platform goes next: repo-native identity, durable state, and explicit operator control.
Today it ships as a compact technical preview: the layer beneath Pearl's launch wedge.
- repo as control surface
- state that survives the session
- operators remain in the loop
Signal
Contact Pearl to set up a pilot.
Pearl is AWS-first, GitHub Actions-first, and guided-pilot-first. Email ryan@wheretocaptain.com if you want saner GPU-backed CI or inference-aware delivery workflows.