What is Daft Cloud?
- Serverless platform for experimenting, deploying, and operating AI pipelines with zero infrastructure headaches.
- Managed LLMs/VLMs and compute workers that scale automatically as your data grows.
- Production-grade auth, observability, retries, and versioning with minimal overhead.
Key features
- Serverless Execution - autoscales with your workload, requiring minimal operational effort
- Managed Model Inference - use LLMs/VLMs without wrangling rate limits or GPUs
- Managed Data Connectors - high throughput I/O to read and write from your cloud storage
- Built-in Versioning - runs and pipelines versioned with git
- Graceful Degradation - handles errors gracefully without crashing the entire pipeline
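The graceful-degradation behavior above can be sketched in plain Python: each row is processed independently, and a failure is recorded on that row instead of aborting the whole batch. This is an illustrative sketch only, not Daft Cloud's actual implementation; the `flaky_inference` function and the error-column convention are assumptions made for the example.

```python
def flaky_inference(text: str) -> str:
    # Stand-in for a model call that can fail on some inputs (hypothetical).
    if not text:
        raise ValueError("empty input")
    return text.upper()

def run_pipeline(rows: list[dict]) -> list[dict]:
    # Process every row; on error, keep the row and record the failure
    # instead of crashing the entire pipeline.
    results = []
    for row in rows:
        try:
            results.append({**row, "output": flaky_inference(row["text"]), "error": None})
        except Exception as exc:
            results.append({**row, "output": None, "error": str(exc)})
    return results

rows = [{"text": "hello"}, {"text": ""}, {"text": "world"}]
out = run_pipeline(rows)
# The empty row carries an error field instead of stopping the run.
```

The same idea scales to managed inference: a rate-limit or model error on one row degrades that row's output rather than failing the job.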
Getting started
- Create a Project: you may simply name your project “hello world”
- Create your first Daft Run: enter the “hello world” project and create a new Run!
- Source: use the Public URL to the repo at https://github.com/Eventual-Inc/daft-examples
- Entrypoint: choose Function and point it at hello_world.py for Module and example for Function
- Arguments: leave this blank for now because our hello_world.py:example function doesn’t take any arguments!
- Watch it go zoom: watch as Daft Cloud blazes through your workload, running inference/prompts on every row as configured in your example function. Your end results are now downloadable as a JSON file!
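The entrypoint configured above is just a module plus a function name. The actual hello_world.py in the daft-examples repo isn't reproduced here, but a zero-argument entrypoint of that shape might look like the following sketch; the row data and the greeting logic are made up for illustration.

```python
# Hypothetical sketch of a hello_world.py-style entrypoint: a zero-argument
# function that builds a small dataset, derives a new field per row, and
# returns rows that could be exported as JSON. The real example in the
# daft-examples repo may differ.
import json

def example():
    rows = [{"name": "Ada"}, {"name": "Grace"}]
    # Stand-in for per-row work: a managed LLM/VLM call would go here.
    for row in rows:
        row["greeting"] = f"hello world, {row['name']}!"
    return rows

if __name__ == "__main__":
    # Results serialize cleanly to JSON, matching the downloadable output.
    print(json.dumps(example()))
```

Because the function takes no arguments, the Arguments field in the Run configuration can stay blank, as in the steps above.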
Next Steps
- Runs - Execute and monitor your Daft workflows
- Python SDK - Programmatically create and manage runs
- Secrets - Securely store credentials and sensitive configuration
- Data Sources - Connect to AWS S3, Supabase Storage, and other data systems
- Catalogs - Connect to Unity Catalog, Supabase Database, and other catalogs
- API Keys - Access Daft Cloud models from your local machine
- Daft Documentation - Learn more about the Daft library
- [COMING SOON] Check out our other examples which run on public datasets: image embeddings, document extraction, post summarization/titling and more
- [COMING SOON] Configure your runs to run automatically with triggers
- [COMING SOON] Configure a live HTTP endpoint as the source of data for triggering runs