# Server optimizations
`transformers serve` includes optimizations to improve throughput and reduce memory usage.
## Continuous batching
Continuous batching dynamically groups and interleaves requests to share forward passes on the GPU. New requests join the batch as others progress through prefill. Completed requests drop out after decoding. This increases GPU utilization and throughput without compromising latency.
Add `--continuous_batching` to enable it.
```shell
transformers serve \
    --continuous_batching \
    --attn_implementation "sdpa"
```
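To see the effect, fire several requests at once; with continuous batching enabled they share forward passes instead of queuing one behind another. A minimal sketch, assuming the server is running locally on the default port and that the model named below is one your server can load:

```shell
# Send 4 requests in parallel; continuous batching interleaves them
# on the GPU instead of serving them strictly one after another.
for i in 1 2 3 4; do
  curl -s http://localhost:8000/v1/responses \
    -H "Content-Type: application/json" \
    -d '{
      "model": "Qwen/Qwen3-8B",
      "input": "Write a one-sentence fact about unicorns."
    }' &
done
wait
```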
Monitor continuous batching performance with OpenTelemetry. It collects traces and metrics, but you’ll need a backend to visualize them.
Install the OpenTelemetry dependency.
```shell
pip install transformers[open-telemetry]
```
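Any OTLP-compatible backend works for visualization. As one example (Jaeger is an assumption here, not a requirement), you can run a local Jaeger instance with Docker and browse traces at http://localhost:16686:

```shell
# Local Jaeger instance: accepts OTLP on 4317 (gRPC) and 4318 (HTTP),
# serves the trace-viewing UI on port 16686.
docker run --rm \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one
```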
## Quantization

Quantization reduces memory usage by mapping weights to a lower precision. `transformers serve` is compatible with all quantization methods in Transformers. It supports pre-quantized models and runtime quantization.
Pre-quantized models don’t require any changes, and they offer the best balance between performance and accuracy. Install the appropriate quantization library, then pass the pre-quantized model from the Hub in the `model` argument.
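For example, the GGUF checkpoint used below is loaded through the gguf package (an assumption based on the checkpoint format; other formats need their own backends, such as GPTQ or AWQ libraries):

```shell
# GGUF checkpoints are dequantized via the gguf package
pip install gguf
```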
```shell
curl http://localhost:8000/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-8B-GGUF",
    "stream": true,
    "input": "Tell me a three sentence bedtime story about a unicorn."
  }'
```

Use `--quantization` to quantize a model at runtime. This is useful for new checkpoints or finetunes without pre-quantized weights. Only bitsandbytes 4-bit and 8-bit quantization are supported.
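Runtime quantization relies on the bitsandbytes library, so install it first:

```shell
pip install bitsandbytes
```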
```shell
transformers serve \
    --quantization bnb-4bit
```
## Attention backend
An optimized attention backend improves memory efficiency and speeds up inference. Select one with `--attn_implementation`.
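The `flash_attention_2` backend shown below requires the flash-attn package (assuming a CUDA GPU and a matching PyTorch build):

```shell
# FlashAttention-2 compiles against the local torch install
pip install flash-attn --no-build-isolation
```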
```shell
transformers serve \
    --continuous_batching \
    --attn_implementation "flash_attention_2"
```

## Compile
`torch.compile` traces and compiles the decode loop for faster inference.
Compile is incompatible with continuous batching.
```shell
transformers serve \
    --compile
```
## Data type
The `bfloat16` or `float16` data types save memory and increase throughput. Set one with `--dtype`.
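`bfloat16` requires hardware support (for example, NVIDIA Ampere or newer GPUs). A quick check, assuming PyTorch with CUDA is installed:

```shell
python -c "import torch; print(torch.cuda.is_bf16_supported())"
```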
```shell
transformers serve \
    --continuous_batching \
    --dtype "bfloat16"
```