ART divides the logic for training an agent into two distinct abstractions. The client is responsible for interfacing with the environment in which the agent runs and for sending inference and training requests to the backend. The backend is responsible for generating tokens at inference time, updating the agent's weights based on past performance, and managing GPU memory as it switches between inference and training modes. This separation of concerns simplifies the process of teaching an agent to improve its performance using RL. While the backend's training and inference settings are highly configurable, they also ship with intelligent defaults that save beginners time when getting started. However, there are a few important considerations to keep in mind before running your first training job.
Managed or local training
ART provides two backend classes:

- ServerlessBackend - train remotely on autoscaling GPUs
- LocalBackend - run your agent and training code on the same machine
If your agent already runs on a machine with a capable GPU, use LocalBackend. If your agent is running on a machine without an advanced GPU (this includes most personal computers and production servers), use ServerlessBackend instead. ServerlessBackend optimizes for speed and cost by autoscaling across managed clusters.
ServerlessBackend
Setting up ServerlessBackend requires a W&B API key. Once you have one, you can provide it to ServerlessBackend either as an environment variable or as an initialization argument.
ServerlessBackend automatically saves your LoRA checkpoints as W&B Artifacts and deploys them for production inference on W&B Inference.
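As a minimal setup sketch, the key can be supplied either way. This assumes ServerlessBackend is exported from the top-level art package and accepts an api_key keyword; check the ART API reference for the current import path and signature.

```python
import os

import art

# Option 1: rely on the standard W&B environment variable.
os.environ["WANDB_API_KEY"] = "your-wandb-api-key"  # placeholder value
backend = art.ServerlessBackend()

# Option 2 (assumed keyword name): pass the key explicitly at construction.
backend = art.ServerlessBackend(api_key=os.environ["WANDB_API_KEY"])
```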
LocalBackend
The LocalBackend class runs a vLLM server and either an Unsloth or torchtune training instance on the same machine your agent is executing on. This is a good fit if you're already running your agent on a machine with a GPU.
To declare a LocalBackend instance, follow the code sample below:
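A minimal sketch, assuming the art.local import path and default constructor arguments; the model name, project, and base model below are illustrative:

```python
import art
from art.local import LocalBackend

# Construct the backend; by default it manages vLLM and the trainer
# on this machine's GPU(s).
backend = LocalBackend()

# Register a trainable model with the backend before running rollouts.
model = art.TrainableModel(
    name="my-agent",
    project="getting-started",
    base_model="Qwen/Qwen2.5-7B-Instruct",
)
# await model.register(backend)  # run inside an async context
```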
When used with PipelineTrainer, LocalBackend is currently supported only in dedicated mode, where training and inference run on separate GPUs. When training and inference share a GPU, LocalBackend pauses inference during training, so ART rejects that configuration for PipelineTrainer.
In dedicated mode, a new checkpoint becomes the default inference target only after its LoRA has been reloaded into vLLM. That checkpoint publication flow is backend-specific, so save_checkpoint does not have identical semantics across every ART backend.
Requests that are already in flight keep using the adapter they started with; the reload only affects subsequent routing to the latest served step.
Using a backend
Once initialized, a backend can be used in the same way regardless of whether it runs locally or remotely. To see LocalBackend and ServerlessBackend in action, try the examples below.
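As a sketch of that interchangeability, the training loop below works with either backend once it is constructed. The trajectory-gathering API is assumed from the ART quickstart, and rollout() is a user-defined function; all names and values are illustrative.

```python
import art

async def rollout(model: art.TrainableModel, step: int) -> art.Trajectory:
    """User-defined: run the agent once and score the resulting trajectory."""
    ...

async def train_one_step(backend) -> None:
    # Identical code works with LocalBackend or ServerlessBackend.
    model = art.TrainableModel(
        name="demo-agent",
        project="demo",
        base_model="Qwen/Qwen2.5-7B-Instruct",
    )
    await model.register(backend)

    # Collect groups of rollouts, then update the model's weights on them.
    groups = await art.gather_trajectory_groups(
        art.TrajectoryGroup(rollout(model, step) for _ in range(8))
        for step in range(4)
    )
    await model.train(groups)
```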
2048 Notebook
Use ServerlessBackend to train an agent to play 2048.
Summarizer
Use LocalBackend to train a SOTA summarizing agent.