Holistic AI Library and MLFlow: Creating an End-to-End Application

January 16, 2024
Authored by
Kleyton da Costa
Machine Learning Researcher at Holistic AI

A major challenge in using artificial intelligence (AI) for decision-making is that a model can pick up and reproduce biases present in the data it was trained on, leading to discriminatory outcomes.

This is where bias mitigation tools come into play, along with tools for monitoring and controlling the experiments conducted with models.

To aid in the task of conducting reproducible experiments free from discriminatory biases, this tutorial introduces an integration between MLFlow and the Holistic AI library. Through this integration, we aim to facilitate a more responsible implementation of AI. We also provide a deployment setup with FastAPI. In this way, it is possible to deploy models trained with bias mitigation methods.

What is MLFlow?

MLFlow is an open-source platform for managing the life cycle of AI models. MLFlow enables data scientists to organize and manage experiments, track metrics, store artifacts, and manage AI models, in addition to providing production monitoring.

These capabilities address many of the biggest challenges in implementing AI models, including management, monitoring, and reproducibility of results. Life cycle management tools like MLFlow are crucial for limiting and tracking bias and promoting reproducibility.
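
As a quick illustration, the minimal sketch below logs a placeholder parameter and metric to a run; the run name and values are hypothetical and are not part of this tutorial's experiment. By default, these records go to a local mlruns directory unless a tracking server is configured, which we do in the next section.

import mlflow

# minimal tracking sketch with placeholder names and values
with mlflow.start_run(run_name="quickstart_example"):
    mlflow.log_param("max_iter", 100)      # hypothetical model parameter
    mlflow.log_metric("accuracy", 0.85)    # hypothetical evaluation metric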

MLFlow Environment Setup

The first step is to import the mlflow library and set the tracking URI. In this case, we will use the local server at http://127.0.0.1:8080.


import mlflow  
 
# set the tracking server to be localhost  
mlflow.set_tracking_uri(uri="http://127.0.0.1:8080")  

To start the MLflow server, run the following command in the terminal:


mlflow server --host 127.0.0.1 --port 8080

Data Preprocessing

The dataset that we will use is the “Adult” dataset from the UCI Machine Learning Repository, a publicly available dataset that contains information about the age, education, marital status, race, and gender of individuals from the United States. The objective is to predict whether an individual’s income is above or below $50K per year.

Source: Holistic AI Datasets


from holisticai.datasets import load_dataset 
 
# load dataset and groups 
df, group_a, group_b = load_dataset(dataset='adult', preprocessed=True, as_array=False) 
 
# split the data into X (features) and y (target) 
X = df.iloc[:,:-1] 
y = df.iloc[:,-1] 
 
# print the shape of X and y 
print(X.shape, y.shape) 

Data Shape

In this step, we split the dataset into training and test sets. It is important to ensure that the groups are represented in similar proportions in both sets.


from sklearn.model_selection import train_test_split 
 
# split data into train and test sets 
X_train, X_test, y_train, y_test, group_a_tr, group_a_ts, group_b_tr, group_b_ts = \ 
train_test_split(X, y, group_a, group_b, test_size=0.2, random_state=42, stratify=y) 
 
# create train_data and test_data 
train_data = X_train, y_train, group_a_tr, group_b_tr 
test_data = X_test, y_test, group_a_ts, group_b_ts 
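
As an optional sanity check (not in the original snippet), we can confirm that each group appears in similar proportions in the train and test splits:

import numpy as np

# optional check: group proportions should be similar across the splits
print("group_a proportion - train:", np.mean(group_a_tr), "test:", np.mean(group_a_ts))
print("group_b proportion - train:", np.mean(group_b_tr), "test:", np.mean(group_b_ts))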

Experiment parameters

In the next code snippet, we explore the implementation of bias mitigation techniques in machine learning models using the HolisticAI framework. The code demonstrates how to apply a variety of mitigators to address bias in a Logistic Regression model, including:

  • Reweighing
  • Learning fair representation
  • Correlation removal
  • Disparate impact removal

By splitting the data into distinct groups and incorporating HolisticAI’s pipeline, the code ensures the fair treatment of different demographic groups during both training and evaluation.


from holisticai.bias import mitigation 
from holisticai.bias import metrics 
from holisticai.pipeline import Pipeline 
from sklearn.linear_model import LogisticRegression 
from sklearn.metrics import accuracy_score 
 
# unpack the training data into features, target, and groups
X_train, y_train, group_a_train, group_b_train = train_data 
 
# unpack the test data into features, target, and groups
X_test, y_test, group_a_test, group_b_test = test_data 
 
# define the dictionary of mitigators
mitigators = { 
  "reweighing": mitigation.Reweighing(), 
  "learning_fair_representation": mitigation.LearningFairRepresentation(), 
  "correlation_remover": mitigation.CorrelationRemover(), 
  "disparate_impact_remover": mitigation.DisparateImpactRemover(), 
  "no_mitigator": None 
} 
 
# fit parameters 
fit_params = { 
  "bm__group_a": group_a_train, 
  "bm__group_b": group_b_train 
} 
 
# predict parameters 
predict_params = { 
  "bm__group_a": group_a_test, 
  "bm__group_b": group_b_test 
} 
 
# define the model's parameters
model_params = { 
  "solver": "lbfgs", 
  "max_iter": 100, 
  "multi_class": "auto", 
  "random_state": 42 
} 

MLflow tracking

Leveraging Scikit-learn’s Logistic Regression as the base model and the HolisticAI pipeline, the following code encapsulates the trained model within a custom class, MyModel, ensuring compatibility with MLflow’s pyfunc model interface.


# define the mlflow custom model class
class MyModel(mlflow.pyfunc.PythonModel):

    def __init__(self, model):
        self.model = model

    def predict(self, ctx, model_input, params=None):
        # build the bias-mitigation prediction parameters from the group columns
        pred_params = {
            "bm__group_a": model_input['group_a'],
            "bm__group_b": model_input['group_b']
        }
        # drop the group columns before passing the features to the pipeline
        xtest = model_input.copy().drop(['group_a', 'group_b'], axis=1)
        return self.model.predict(xtest, **pred_params)

The next code snippet presents a versatile and organized function, train_model, designed for training and evaluating machine learning models within the MLflow framework.

This function allows users to seamlessly incorporate bias mitigation into the training pipeline by specifying a mitigator parameter.

The function not only generates predictions on the test set but also computes and logs essential metrics, including accuracy, disparate impact, and statistical parity. Furthermore, it logs the model parameters, a descriptive run tag, and the trained model itself, providing a comprehensive and transparent record of the model’s performance and bias characteristics.

This approach facilitates reproducibility and thorough analysis, aligning with best practices in machine learning model development and evaluation.


# a function to train the model with pipeline and save logs
def train_model(name, mitigator=None):
  with mlflow.start_run(run_name=name):

    # define the pipeline steps
    steps = [('bm_preprocessing', mitigator)] if mitigator is not None else []
    steps.append(('model', LogisticRegression(**model_params)))

    # fit and predict with the pipeline
    pipeline = Pipeline(steps=steps)
    pipeline.fit(X_train, y_train, **fit_params)
    y_pred = pipeline.predict(X_test, **predict_params)

    # calculate metrics
    accuracy = accuracy_score(y_test, y_pred)
    disp_impact = metrics.disparate_impact(group_a_test, group_b_test, y_pred)
    stat_parity = metrics.statistical_parity(group_a_test, group_b_test, y_pred)

    # log the metrics
    mlflow.log_metric("accuracy", accuracy)
    mlflow.log_metric("disparate_impact", disp_impact)
    mlflow.log_metric("statistical_parity", stat_parity)

    # log the model params
    mlflow.log_params(model_params)

    # set a tag that we can use to remind ourselves what this run was for
    mlflow.set_tag("Training Info", f"Basic LR model with {name}")

    # log the model
    mlflow.pyfunc.log_model(
        python_model=MyModel(pipeline),
        artifact_path=f"model_{name}",
    )

Training the models and tracking the experiments

To train and log the results for each mitigator, we define a simple loop that iterates over the dictionary of mitigators and, for each entry, invokes the train_model function with the corresponding name and mitigator.


# train the model for each mitigator and save the logs 
for name, mitigator in mitigators.items(): 
  train_model(name, mitigator) 

Evaluation of trained models

The image below shows the results of the trained models. With the results in the MLflow UI, we can compare the models’ performance and bias metrics.

Evaluation of Trained Models

We can also compare the results of the models using the MLflow API. The charts below show the results of different metrics for the models trained with and without bias mitigation.

The Results of Different Metrics for the Models Trained with and without Bias Mitigation
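
A minimal sketch of such a comparison with the MLflow API is shown below; it assumes the runs above were logged to the active (default) experiment on the tracking server configured earlier.

# retrieve the logged runs as a DataFrame, sorted by accuracy
runs = mlflow.search_runs(order_by=["metrics.accuracy DESC"])

# compare performance and bias metrics across mitigators
cols = ["tags.mlflow.runName", "metrics.accuracy",
        "metrics.disparate_impact", "metrics.statistical_parity"]
print(runs[cols])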

Load unbiased models and make predictions

In the next code snippet, we dive into the practical aspect of deploying and utilizing a machine learning model that has been logged and saved using MLflow. The process begins with loading a previously saved model, specifically the one trained with the ‘correlation_remover’ mitigator, which is retrieved using its unique run identifier.

The model is then loaded as a PyFuncModel, making it compatible with MLflow’s PyFunc API. Subsequently, predictions are made on a Pandas DataFrame, simulating real-world input data for the model. The data frame includes features from the test set along with corresponding group information. The predictions are then printed, showcasing how to seamlessly apply a previously trained model to new data.

To access the saved model, we need the run_id and model_name of the run. This information is available in the MLflow UI.

This Information is Available in the MLflow UI

This code snippet highlights the ease of model deployment and prediction using MLflow, demonstrating the practical utility of the platform in the machine learning development lifecycle.


# run id 
run_id = "56265488a3334e35b2aca6698e3a3ad6" 
 
# model name 
model_name = "model_correlation_remover" 
 
# build the model URI
logged_model = f'runs:/{run_id}/{model_name}'
 
# load model as a PyFuncModel. 
loaded_model = mlflow.pyfunc.load_model(logged_model) 
 
# predict on a Pandas DataFrame. 
input_data = X_test.copy() 
input_data["group_a"] = group_a_test 
input_data["group_b"] = group_b_test 
predictions = loaded_model.predict(input_data) 
print(predictions) 
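
As an optional check (not in the original snippet), we can recompute one of the bias metrics on the predictions returned by the loaded model, reusing the metrics module imported earlier:

# optional: verify the fairness of the served predictions
di = metrics.disparate_impact(group_a_test, group_b_test, predictions)
print("disparate impact of the loaded model:", di)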

Downloading Artifacts

Deploy the model with Holistic AI, MLFlow and FastAPI

A FastAPI application is set up to serve predictions from a machine learning model trained and logged using MLflow.

First, we need to create a file called app.py. Use the following command in the terminal:


touch app.py 

The script begins by configuring the MLflow tracking URI and loading the pre-trained model with the correlation_remover mitigator. FastAPI is then initialized, and a /predict route is defined to handle HTTP POST requests for making predictions.

Input data is expected to conform to a specific format, validated using a Pydantic model named InputData. The route’s function processes incoming data, converting it into a Pandas DataFrame, and generates predictions using the loaded MLflow model. Predictions are returned as a JSON response, and exception handling is implemented to manage potential errors, providing informative error messages with appropriate HTTP status codes.


from fastapi import FastAPI, HTTPException
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from typing import List
import pandas as pd
import mlflow.pyfunc
import mlflow

# set the tracking server to be localhost
mlflow.set_tracking_uri(uri="http://127.0.0.1:8080")

# run id
run_id = "56265488a3334e35b2aca6698e3a3ad6"

# model name
model_name = "model_correlation_remover"

# build the model URI
logged_model = f'runs:/{run_id}/{model_name}'

# Load the MLflow model using mlflow.pyfunc.load_model()
loaded_model = mlflow.pyfunc.load_model(logged_model)

# Initialize a FastAPI application
app = FastAPI()

# Define a Pydantic model for input data validation
class InputData(BaseModel):
  data: List[List[float]]
  columns: List[str]

# Define a route '/predict' that accepts HTTP POST requests
@app.post('/predict')
async def predict(input_data: InputData):
  try:
    # Build a DataFrame from the validated input payload
    input_df = pd.DataFrame(jsonable_encoder(input_data.data), columns=input_data.columns)

    # Make predictions using the loaded model
    predictions = loaded_model.predict(input_df)

    # Return predictions as a JSON response
    return {"predictions": predictions.tolist()}
  except Exception as e:
    # Handle exceptions (e.g., invalid input or model errors) and return an error message
    raise HTTPException(status_code=400, detail=str(e))

To run the application, we can use the following command in the terminal:


uvicorn app:app --host 127.0.0.1 --port 8000

Finally, the FastAPI application is run on the development server, making the machine learning model accessible at http://127.0.0.1:8000/predict for real-time predictions through a user-friendly API. After running the application, we can access the API documentation at http://127.0.0.1:8000/docs. The image below shows our API waiting for input data.

Our API Waiting for Input Data

This code demonstrates the integration of FastAPI and MLflow, creating a robust and efficient platform for deploying machine learning models with bias mitigation via the Holistic AI Library, with a focus on ease of use and real-time prediction capabilities.

Make Predictions with FastAPI

Now you can make predictions through the FastAPI application. To do this, create a request script with the following command in the terminal:


touch request.py 

After that, you need to import the essential libraries, including requests for making HTTP requests and pandas for handling the data.

Here we see the client-side interaction with the FastAPI web application that hosts the machine learning model. First, the input data, mirroring the structure used during model training and testing, is prepared. This includes augmenting the DataFrame with ‘group_a’ and ‘group_b’ columns.

The input data is then converted into JSON format using Pandas’ to_json method. Subsequently, a POST request is made to the local FastAPI server (http://127.0.0.1:8000/predict) with the prepared JSON data.

The response from the server is captured, and predictions are extracted from the returned JSON content. This code provides a practical example of how to interact with a deployed machine learning model using client-side scripting.


import json 
import requests 
 
# let's use the same input data as before
input_data = X_test.copy() 
input_data["group_a"] = group_a_test 
input_data["group_b"] = group_b_test 
 
# convert input_data to JSON format 
json_input_data = input_data.to_json(orient='split') 
data = json.loads(json_input_data) 
 
# make a POST request to the local FastAPI server 
response = requests.post('http://127.0.0.1:8000/predict', json=data) 
 
# get predictions from the JSON response of the FastAPI server
predictions = response.json() 
print(predictions) 

Summary

In conclusion, the presented code not only highlights the seamless integration of MLflow and FastAPI but also emphasizes the incorporation of the HolisticAI library for mitigating bias in machine learning models.

The use of MLflow enables effective model tracking, management, and deployment, ensuring transparency and reproducibility in the machine learning development lifecycle. FastAPI’s modern design and automatic OpenAPI and JSON Schema generation provide an efficient platform for building robust APIs, facilitating real-time predictions.

Additionally, the code showcases the integration of HolisticAI, a powerful library designed to address bias in models. By incorporating bias mitigation and measurement techniques, such as the ‘correlation_remover’ used in the showcased example, developers can enhance the fairness and ethical considerations of their machine learning models.

This approach, combining MLflow, FastAPI, and HolisticAI, serves as a comprehensive guide for deploying and consuming bias-aware machine learning models, promoting responsible and inclusive AI practices in production environments.

Take the Next Step with Holistic AI's 360-Degree Governance Platform

While integrating the Holistic AI Library for bias tracking and mitigation is a crucial step, it's just the beginning of your journey towards comprehensive AI governance. To further build on this success, explore our 360-degree AI governance, regulatory, and compliance platform. Our platform offers a more holistic approach, encompassing every aspect of AI deployment and management.

Schedule a consultation with our team today to discover how you can leverage our full suite of tools and services to ensure your AI systems are not only bias-free but also fully compliant with the latest regulations and best practices in AI governance. Let us help you lead the way in responsible AI implementation.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
