Get started with Flama documentation

Learn how to build, package, and deploy modern and robust machine learning APIs.

Quickstart

Follow this guide to develop your first API and explore the main functionality Flama offers. You will learn how to create a new project, define your first endpoint, and run your API locally.
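As a taste of what the Quickstart covers, here is a minimal sketch of a Flama application. The module name (main.py), the metadata keyword arguments, and the route shown are illustrative assumptions; the guide walks through the full setup step by step.

```python
from flama import Flama

# Create the application; the metadata feeds the auto-generated API schema and docs.
app = Flama(
    title="Hello Flama",
    version="1.0.0",
    description="My first Flama API",
)


@app.route("/")
def home():
    """Return a simple greeting."""
    return {"message": "Hello 🔥"}
```

Since a Flama application is a standard ASGI app, it can be served locally with an ASGI server such as uvicorn (for example, `uvicorn main:app`) or through the Flama CLI described below.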

Core

Deep dive into the core functionality of Flama. Learn how to package your models as reusable binary files, integrate them into your API, and customise how your API interacts with them through Components and Resources.
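The sketch below illustrates the packaging idea with a toy scikit-learn model. The `flama.dump` / `flama.load` helpers and their exact signatures, as well as the `.flm` file name, are assumptions here; the Core guide documents the precise packaging API and how the resulting artefact is wired into an API via Components and Resources.

```python
import flama
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small scikit-learn model to package.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

# Package the fitted model as a reusable binary artefact.
# NOTE: the dump/load signatures shown are assumptions; see the Core guide.
flama.dump(model, path="iris_model.flm")

# Restore the packaged model later, e.g. when integrating it into an API.
iris_model = flama.load("iris_model.flm")
```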

CLI

Discover the Flama command line interface and uncover the power of codeless ML model serving. This guide shows how to run an API locally, serve ML models, and interact with them, all without writing a single line of code.
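A hedged sketch of the kind of commands the guide covers. The command names, arguments, and the iris_model.flm file are assumptions; consult the CLI guide for the exact syntax and options.

```sh
# Serve a packaged model file directly, no application code required.
flama serve iris_model.flm

# Run an existing Flama application (module:attribute) locally.
flama run main:app
```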