Install
1. Models: Any Model, One API
Stop juggling API keys. Point any OpenAI-compatible client at inference.hud.ai and use Claude, GPT, Gemini, or Grok. Browse all available models at hud.ai/models.
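For example, here is a minimal sketch using the official openai Python client pointed at the HUD gateway; the exact base URL path, the API-key environment variable, and the model string are assumptions to verify against hud.ai/models:

```python
import os
from openai import OpenAI

# Point a standard OpenAI-compatible client at the HUD inference gateway.
# The base URL path and env var name are assumptions; confirm them in the HUD docs.
client = OpenAI(
    base_url="https://inference.hud.ai/v1",
    api_key=os.environ["HUD_API_KEY"],
)

response = client.chat.completions.create(
    model="claude-sonnet-4",  # hypothetical identifier; browse hud.ai/models for real ones
    messages=[{"role": "user", "content": "Summarize what the HUD gateway does."}],
)
print(response.choices[0].message.content)
```

Swapping models means changing only the model string; the client, endpoint, and key stay the same.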
2. Environments: Your Code, Agent-Ready
A production API is one live instance with shared state: you can't run 1,000 parallel tests without them stepping on each other. Environments spin up fresh for every evaluation: isolated, deterministic, reproducible. Each run generates training data. Turn your code into tools agents can call and define scenarios that evaluate what agents do, as sketched below. Iterate locally with hud dev, then deploy to the platform.
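As a sketch of what "your code as tools" can look like, here is a minimal MCP-style tool server using the reference mcp Python SDK (FastMCP); whether the HUD SDK wraps this interface or expects a different entry point is an assumption to confirm in the environment docs:

```python
from mcp.server.fastmcp import FastMCP

# A tiny tool server: each decorated function becomes a tool an agent can call.
mcp = FastMCP("todo-env")  # hypothetical environment name

_todos: list[str] = []

@mcp.tool()
def add_todo(item: str) -> str:
    """Add an item to the todo list and return a confirmation."""
    _todos.append(item)
    return f"Added: {item}"

@mcp.tool()
def list_todos() -> list[str]:
    """Return all todo items, so a scenario can check what the agent actually did."""
    return list(_todos)

if __name__ == "__main__":
    mcp.run()  # serve the tools over stdio so an agent runtime can connect
```

Local iteration with hud dev and deployment are then a matter of packaging a server like this so the platform can spin up a fresh copy for every evaluation.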
3. Tasks & Training: Test and Train
Create tasks from your scenarios on hud.ai. Run evaluations across models. Train on successful completions. The same model string works before and after training, just better at your tasks. → More on Tasks & Training
Next Steps
Models
One endpoint for every model. Native tools.
Environments
Tools, scenarios, and iteration.
Hosted Running
Push to platform. Run at scale.
Tasks & Training
Evaluate and train models.