Productionalise your machine learning models seamlessly

Flama is a data-science oriented framework to rapidly build modern and robust machine learning APIs. With a Flama application you will be able to deploy your models in seconds.
Fire up your models with the flame 🔥

Framework
import flama

app = flama.Flama()

# Expose the packaged model under the /puppy/ route
app.models.add_model("/puppy/", "/path/to/puppy_model.flm", name="Puppy")

if __name__ == "__main__":
    flama.run(flama_app=app, server_host="0.0.0.0", server_port=8080)
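
Once the server is up, the packaged model can be queried over HTTP. The request below is a hypothetical sketch: the /puppy/predict/ route and the payload shape are assumptions about how the model resource is exposed, so check the routes generated for your own app.

> curl -X POST "http://0.0.0.0:8080/puppy/predict/" -H "Content-Type: application/json" -d '{"input": [[0, 0]]}'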

Machine Learning Responsive

Let’s face it, there isn’t a single ML framework. Models developed in different frameworks should be easy to integrate into a single API. However, this integration presents a technical challenge that is typically unproductive and tedious for a data scientist.


Flama was designed from the very beginning to be compatible with the mainstream data-science frameworks, making it easy and simple to package ML models for integration.

Scikit Learn
import flama
import numpy as np
from sklearn.neural_network import MLPClassifier

# Train a simple XOR classifier
model = MLPClassifier(activation="tanh", max_iter=2000, hidden_layer_sizes=(10,))
model.fit(
    np.array([[0, 0], [0, 1], [1, 0], [1, 1]]),
    np.array([0, 1, 1, 0]),
)

# Package the trained model into Flama's portable .flm format
flama.dump(model, "sklearn_model.flm")
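
The counterpart of flama.dump is flama.load, which reads a packaged .flm file back into memory. A minimal sketch, assuming flama.load mirrors flama.dump and returns the deserialised model:

import flama
import numpy as np

# Recover the classifier packaged above (assumes flama.load returns the model itself)
model = flama.load("sklearn_model.flm")
model.predict(np.array([[0, 1]]))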

Production-Ready First

Need your models serving ASAP? It does not feel right to have to wait months to see if your models work outside a Jupyter notebook, does it?


Flama makes the deployment of ML models into production as straightforward as possible. With a single command, your packaged models will be ready to serve via HTTP requests in seconds. Flama transforms any model into an ML API ready to serve its purpose.

Command Line
> flama serve path/to/model.flm
INFO: Started server process [78260]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)

Effortless Development

Flama is designed to be quick to learn and use. This goal is accomplished with a simple and clear syntax and a rich spectrum of built-in functionality, reducing boilerplate and development time.

There is a wide spectrum of data validation libraries for Python that combine data types into structures, validate them, and provide tools for serialising app-level objects to primitive Python types.


Flama natively supports both Typesystem and Marshmallow, data validation libraries that make it possible to standardise the API via the generation of OpenAPI schemas, and that allow the user to define API schemas effortlessly.


Flama's schema generator gathers all the API information it needs directly from your code and infers the schema that represents your API based on the OpenAPI standard. The schema is also served at the route /schema/ by default.
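
As an illustrative sketch with Marshmallow (the schema, route, and data below are hypothetical, not part of Flama itself), a schema used as a view annotation is picked up by the schema generator and reflected at /schema/:

import marshmallow
from flama import Flama

# Hypothetical schema describing the API's payloads
class Puppy(marshmallow.Schema):
    id = marshmallow.fields.Integer()
    name = marshmallow.fields.String()

app = Flama(title="Puppy API", version="0.1.0")

@app.route("/puppies/", methods=["GET"])
def list_puppies() -> Puppy(many=True):
    """
    tags:
        - puppy
    summary:
        List puppies.
    responses:
        200:
            description: List of puppies.
    """
    return [{"id": 1, "name": "Toby"}]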

Models Lifecycle

Loading ML models in a production application is a demanding and error-prone task, which also depends on the specific ML framework.


Flama provides a clean solution to the problem via Components, which load models seamlessly.

Models Lifecycle
from flama import Flama, ModelComponentBuilder

with open("/path/to/model.flm", "rb") as f:
    component = ModelComponentBuilder.loads(f.read())
    ModelType = component.get_model_type  # Get the type to allow dependency injection

app = Flama(components=[component])

@app.get("/")
def model_view(model: ModelType, model_input: str):
    """
    tags:
        - model
    summary:
        Model prediction.
    description:
        Interact with the model to generate a prediction based on the given input.
    responses:
        200:
            description: Model prediction.
    """
    model_output = model.predict(model_input)
    return {"model_output": model_output}
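
With the app running, the view above can be exercised over HTTP. A hypothetical call, assuming the default host and port, with model_input passed as a query parameter:

> curl "http://127.0.0.1:8000/?model_input=foo"

The response is the JSON body returned by model_view, containing the model's prediction.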

Extensibility

Flama consists of a core of functionality for creating, maintaining and deploying ML APIs. However, the ML arena is constantly changing, with new tools for managing ML projects appearing all the time. Being able to integrate your API with such third parties is of crucial importance.


Flama is natively an extensible framework. With the Module class you will be able to rapidly develop your own plugins and keep extending Flama's integrations.

Extensibility
import typing

import mlflow
from flama import Module, Flama

class MLFlowModule(Module):
    name = "mlflow"

    def __init__(self, app: Flama, url: str = None, *args, **kwargs):
        super().__init__(app, *args, **kwargs)
        self.url = url

    async def on_startup(self):
        # Point MLflow at the configured tracking server when the app starts
        mlflow.set_tracking_uri(self.url)

    async def on_shutdown(self):
        ...

    def search_runs(self, experiment_ids: typing.List[str], filter_string: str):
        return mlflow.search_runs(experiment_ids, filter_string)

app = Flama(modules=[MLFlowModule])

# Module usage example
model = app.mlflow.search_runs(["foo"], "tags.name = 'bar'")

Development Tools

The process of developing APIs for Machine Learning can be complex and time-consuming, especially when it comes to debugging. Debugging refers to the process of identifying and fixing errors in the code, which can range from simple syntax errors to more complex issues such as incorrect data access or resource management.


Flama provides graphical tools that make debugging simple and direct, allowing you to trace code errors (Internal Server Error) or accesses to non-existent resources (Not Found) with ease.

Internal Server Error
[Screenshot: Flama's Internal Server Error debug page]