About
Autoblocks.ai helps teams improve the reliability of LLM-based products at every stage of the development lifecycle. Its prompt and configuration management solution lets teams surface any part of their LLM product pipeline in a collaborative UI.
With Autoblocks testing and evaluation tooling, both developers and non-technical stakeholders can experiment with changes to the pipeline and see how they affect output quality. They can compare different models, prompts, and parameters to pinpoint the combinations that produce the best results.
Autoblocks is designed to support any AI system and doesn't introduce framework or model dependencies. It sits alongside existing tech stacks to accelerate AI product development, allowing for rapid iteration and optimization.
How they use Cloudflare
Autoblocks uses Workers to power their event ingestion API. Workers has enabled them to seamlessly scale to ingesting millions of data points per day from users across the globe for some of their largest customers. Using the Wrangler CLI, the Autoblocks team deploys global serverless functions in seconds without worrying about regions or scalability. The CLI also enables them to create realistic environments on their development machines, ensuring quality and reliability in their application.
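An event ingestion endpoint of this kind can be sketched as a Worker that validates incoming JSON and hands it off for background processing. This is a hypothetical illustration, not Autoblocks' actual code: the event shape, the `isValidEvent` helper, and the `EVENT_QUEUE` binding name are all assumptions.

```typescript
// Illustrative shape of one ingested event (fields are assumptions).
interface IngestEvent {
  traceId: string;
  message: string;
  timestamp: string;
}

// Pure validation helper: checks the minimal fields an event needs.
export function isValidEvent(body: unknown): body is IngestEvent {
  if (typeof body !== "object" || body === null) return false;
  const e = body as Record<string, unknown>;
  return (
    typeof e.traceId === "string" &&
    typeof e.message === "string" &&
    typeof e.timestamp === "string"
  );
}

// Worker entry point. `env.EVENT_QUEUE` is an assumed Queues producer
// binding that would be configured in wrangler.toml.
export default {
  async fetch(
    request: Request,
    env: { EVENT_QUEUE: { send(msg: unknown): Promise<void> } }
  ): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }
    let body: unknown;
    try {
      body = await request.json();
    } catch {
      return new Response("Bad Request", { status: 400 });
    }
    if (!isValidEvent(body)) {
      return new Response("Bad Request", { status: 400 });
    }
    // Enqueue for asynchronous processing; the HTTP response returns
    // immediately, keeping ingestion latency low for callers worldwide.
    await env.EVENT_QUEUE.send(body);
    return new Response("Accepted", { status: 202 });
  },
};
```

A Worker like this is typically exercised locally with `wrangler dev` and shipped with `wrangler deploy`, which pushes it to Cloudflare's network without any region configuration.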
To optimize performance, Autoblocks leverages Cloudflare Queues to run asynchronous tasks in the background, and Hyperdrive for performant global database access. Finally, Autoblocks leverages R2 to store data assets for simplified, cost-effective global storage and retrieval.
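The background half of such a pipeline can be sketched as a Queues consumer that persists each message to R2. Again, this is a hedged sketch rather than Autoblocks' implementation: the message fields, the key layout, and the `EVENT_BUCKET` binding name are assumptions.

```typescript
// Minimal structural types for the pieces of the Queues consumer API
// this sketch uses (the real runtime provides richer types).
interface QueueMessage<T> {
  body: T;
  ack(): void;
}
interface MessageBatch<T> {
  messages: QueueMessage<T>[];
}

// Build an R2 object key for an event: grouped by day, one object per
// trace. The layout is an illustrative choice, not a prescribed one.
export function objectKey(traceId: string, timestamp: string): string {
  const day = timestamp.slice(0, 10); // "YYYY-MM-DD"
  return `events/${day}/${traceId}.json`;
}

// Queues consumer entry point. `env.EVENT_BUCKET` is an assumed R2
// bucket binding configured in wrangler.toml.
export default {
  async queue(
    batch: MessageBatch<{ traceId: string; timestamp: string }>,
    env: { EVENT_BUCKET: { put(key: string, value: string): Promise<unknown> } }
  ): Promise<void> {
    for (const msg of batch.messages) {
      const key = objectKey(msg.body.traceId, msg.body.timestamp);
      // Persist the raw event, then acknowledge so it is not redelivered.
      await env.EVENT_BUCKET.put(key, JSON.stringify(msg.body));
      msg.ack();
    }
  },
};
```

Because the consumer runs off the request path, slow or bursty writes never add latency to the ingestion endpoint itself.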
Why Cloudflare?
“We wanted something that would scale and not keep us up at night. We were able to deploy our API in minutes, providing low latency for all our users across the globe, and Cloudflare has enabled Autoblocks to seamlessly scale as our business continues to grow.”