Quickstart
Quickly deploy a production-ready core infrastructure stack for your LLM applications
This guide will walk you through setting up a production-ready core infrastructure stack for your LLM application with minimal effort. In just a few steps, you’ll be able to set up Universal API, AI routing, AI Gateway, and Observability to track and analyze the performance and usage of your Large Language Model (LLM) applications.
Deploy AI studio
Prepare the Docker environment
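If you haven’t already, verify that Docker and the Compose plugin are installed and that the Docker daemon is running:

```bash
# Confirm Docker and the Compose plugin are installed
docker --version
docker compose version

# Confirm the Docker daemon is running (this fails if it is not)
docker info
```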
Launch AI studio gateway server
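A minimal sketch of launching the gateway as a container follows; the image name, tag, and port mapping are placeholders, since the exact values depend on your AI studio distribution:

```bash
# Pull and start the gateway as a detached container.
# "your-registry/ai-studio-gateway:latest" and port 8080 are
# placeholders; substitute the values from your distribution.
docker run -d \
  --name ai-studio-gateway \
  -p 8080:8080 \
  your-registry/ai-studio-gateway:latest

# Tail the logs to confirm the server started cleanly
docker logs -f ai-studio-gateway
```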
Generate an API Key
With AI studio running, the next step is to generate an API key that grants access to AI studio resources.
To generate your first API key, you can use the following command:
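The sketch below assumes the gateway exposes a key-management endpoint; the host, port, path, and payload are hypothetical placeholders, so adjust them to match your deployment:

```bash
# Hypothetical endpoint and payload; adjust the host, port,
# and path to match your AI studio deployment
curl -X POST http://localhost:8080/api/keys \
  -H "Content-Type: application/json" \
  -d '{"name": "my-first-key"}'
```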
Save the API Key
Remember to include this API key in the X-Ms-Api-Key header for all future API interactions.
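As an illustration, a request through the gateway might look like the sketch below; the URL, path, and request body are placeholders, and only the X-Ms-Api-Key header comes from this guide:

```bash
# Replace YOUR_API_KEY with the key you generated above.
# The URL path, model name, and request body are illustrative placeholders.
curl http://localhost:8080/v1/chat/completions \
  -H "X-Ms-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}'
```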
You’re geared up and ready to go! 🚀
Following these steps should have your AI studio up and running to power LLMOps for your LLM applications. If you have any questions or need support, reach out to our Discord Community.