Rental Type Overview
We currently offer two rental types:
On-Demand (High Priority)
Fixed price set by the host
Runs as long as you want
Cannot be interrupted
More expensive but reliable
Interruptible (Low Priority)
You set a bid price
Can be stopped by higher bids
Typically 50-80% cheaper than on-demand pricing
Good for fault-tolerant workloads
How do interruptible instances compare to AWS Spot?
Similarities:
Both can be interrupted
Both offer significant savings
Differences:
Vast.ai uses direct bidding (you control your bid price)
AWS uses market pricing
No 24-hour runtime cap (unlike GCE preemptible instances)
Vast.ai instances can run indefinitely if not outbid
What happens when my interruptible instance loses the bid?
Your instance is stopped and any running processes are killed. Important considerations:
Save work frequently to disk
Use cloud storage for backups
Your instance may wait a long time before it can resume
Implement checkpointing for long jobs (see the sketch below)
When using interruptible instances, always design your workload to handle interruptions gracefully.
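A minimal checkpointing pattern, assuming a PyTorch training loop: the checkpoint path, model, and optimizer here are placeholders, and in practice you would point the path at persistent or cloud storage rather than ephemeral instance disk.

```python
import os
import torch

CKPT_PATH = "/workspace/checkpoint.pt"  # illustrative path; use persistent or cloud storage

def save_checkpoint(model, optimizer, epoch):
    """Atomically write training state so an interruption can't corrupt the file."""
    tmp_path = CKPT_PATH + ".tmp"
    torch.save(
        {"epoch": epoch,
         "model_state": model.state_dict(),
         "optimizer_state": optimizer.state_dict()},
        tmp_path,
    )
    os.replace(tmp_path, CKPT_PATH)  # atomic rename on POSIX filesystems

def load_checkpoint(model, optimizer):
    """Resume from the last saved epoch, or start fresh if no checkpoint exists."""
    if not os.path.exists(CKPT_PATH):
        return 0
    state = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(state["model_state"])
    optimizer.load_state_dict(state["optimizer_state"])
    return state["epoch"] + 1

def train(model, optimizer, dataloader, num_epochs):
    start_epoch = load_checkpoint(model, optimizer)
    for epoch in range(start_epoch, num_epochs):
        for batch in dataloader:
            ...  # forward pass, backward pass, optimizer.step()
        save_checkpoint(model, optimizer, epoch)  # checkpoint at least once per epoch
```

With this pattern, a stopped instance simply resumes from the last completed epoch once it restarts instead of losing all progress.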
DLPerf Scoring
What is DLPerf?
DLPerf (Deep Learning Performance) is our scoring function that estimates performance for typical deep learning tasks. It predicts iterations/second for common tasks like training ResNet50 CNNs.
Example scores:
V100: ~21 DLPerf
2080 Ti: ~14 DLPerf
1080 Ti: ~10 DLPerf
A V100 (21) is roughly 2x faster than a 1080 Ti (10) for typical deep learning.
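One way to read the scores is as relative throughput: divide one GPU's DLPerf by another's. The sketch below also shows DLPerf per dollar-hour as a rough value comparison; the prices are illustrative placeholders, not current Vast.ai rates.

```python
# Compare GPUs by DLPerf and by DLPerf per dollar-hour.
# Prices are placeholder values, not live Vast.ai rates.
gpus = {
    "V100":    {"dlperf": 21, "price_per_hr": 0.80},
    "2080 Ti": {"dlperf": 14, "price_per_hr": 0.30},
    "1080 Ti": {"dlperf": 10, "price_per_hr": 0.20},
}

baseline = gpus["1080 Ti"]["dlperf"]
for name, g in gpus.items():
    speedup = g["dlperf"] / baseline          # e.g. V100: 21 / 10 = 2.1x
    value = g["dlperf"] / g["price_per_hr"]   # DLPerf per dollar-hour
    print(f"{name}: {speedup:.1f}x vs 1080 Ti, {value:.1f} DLPerf/$-hr")
```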
Is DLPerf accurate for my workload?
DLPerf is optimized for common deep learning tasks:
✅ CNN training (ResNet, VGG, etc.)
✅ Transformer models
✅ Standard computer vision
⚠️ Less accurate for unusual compute patterns
⚠️ Not optimized for non-ML workloads
For specialized workloads, run your own benchmark on the GPUs you are considering (see the sketch below). While not perfect, DLPerf is more useful than raw TFLOPS for most ML tasks.
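A short timing run is usually enough to check a specific workload. The sketch below assumes PyTorch and torchvision are installed and measures training iterations/second for ResNet50 on synthetic data; swap in your own model, batch size, and input pipeline.

```python
import time
import torch
import torchvision

def benchmark_resnet50(batch_size=64, steps=50, warmup=10):
    """Measure training iterations/second for ResNet50 on synthetic data."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet50().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = torch.nn.CrossEntropyLoss()

    # Synthetic batch: a real input pipeline can shift the bottleneck to I/O.
    images = torch.randn(batch_size, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (batch_size,), device=device)

    def step():
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    for _ in range(warmup):           # warm up kernels / cuDNN autotuning
        step()
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.time()
    for _ in range(steps):
        step()
    if device == "cuda":
        torch.cuda.synchronize()

    return steps / (time.time() - start)

if __name__ == "__main__":
    print(f"{benchmark_resnet50():.2f} iterations/sec")
```

Running the same script on two candidate machines gives a direct iterations/second comparison for your actual workload, which you can weigh against each machine's hourly price.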