What is an AI Artifact?

In the context of AI governance, an artifact is a single, identifiable component of an AI system that exists on one of your connected platforms. When the discovery process scans your infrastructure, it does not find complete, neatly packaged AI systems. Instead, it finds individual pieces: a Jupyter notebook in a GitHub repository, a trained model file stored in a cloud bucket, a live prediction endpoint running on AWS SageMaker, an experiment record in MLflow, or a log of API calls to OpenAI.

Each of these individual pieces is an artifact. Artifacts are the raw evidence of AI activity within your organization. They are the building blocks from which governed AI assets are constructed during the reconciliation stage of the discovery process.

Types of AI Artifacts

Artifacts come in many forms depending on where they are found and what stage of the AI lifecycle they represent:

Model Files
Serialized, trained models saved as files (weights, checkpoints, exported formats).
Example platforms: GitHub, GitLab, Bitbucket, S3, Azure Blob

Notebooks
Jupyter or similar notebooks containing code for data analysis, model training, or experimentation.
Example platforms: Databricks, GitHub, GitLab

Endpoints
Live APIs that serve model predictions or AI responses to applications and users.
Example platforms: AWS SageMaker, Azure ML, Google Cloud Vertex AI

Experiments
Logged training runs that record parameters, metrics, and results from model development.
Example platforms: MLflow, Weights and Biases, Databricks

API Usage Records
Logs of calls made to external AI services, including call volume, cost, and which models were used.
Example platforms: OpenAI, Anthropic, Azure OpenAI, Google AI

Agent Traces
Runtime execution logs from AI agent systems showing how agents planned, acted, and communicated.
Example platforms: LangSmith, Langfuse, AgentOps, CrewAI

Documentation
Model cards, data sheets, technical documentation, and governance records related to AI systems.
Example platforms: SharePoint, Confluence, Google Drive
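Whatever their type, artifacts share a common shape: an identifier, a component type, a source platform, and metadata from the system they were found on. The sketch below illustrates one way such a record might look; the class and field names are hypothetical, not the platform's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a discovered-artifact record.
# All names here are illustrative assumptions, not a real API.
@dataclass
class Artifact:
    artifact_id: str
    artifact_type: str    # e.g. "notebook", "model_file", "endpoint"
    source_platform: str  # e.g. "GitHub", "MLflow", "AWS SageMaker"
    metadata: dict = field(default_factory=dict)  # source-system metadata

# A notebook found in a GitHub repository during discovery.
nb = Artifact(
    artifact_id="art-001",
    artifact_type="notebook",
    source_platform="GitHub",
    metadata={"path": "fraud/train.ipynb", "created_by": "data-science"},
)
print(nb.artifact_type, nb.source_platform)
```

Keeping the source-system metadata on each record is what lets later stages classify and group artifacts without rescanning the source platform.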

The Relationship Between Artifacts and Assets

Artifacts and assets are closely related but represent different levels of the governance hierarchy. An artifact is a single component. An asset is a complete, governed AI system made up of one or more related artifacts.

Consider a fraud detection model as an example. During discovery, the platform might find the following artifacts across different platforms: a training notebook in GitHub containing the model development code, a registered model version in MLflow with its performance metrics, a live endpoint on AWS SageMaker serving predictions, and experiment logs in Weights and Biases tracking training history. Each of these is an individual artifact. During reconciliation, the platform recognizes that these artifacts belong to the same AI system and groups them into a single asset.

This distinction matters because governance operates at the asset level. You do not conduct a compliance assessment on a single notebook. You assess the complete AI system that the notebook is part of. Artifacts provide the granular evidence of what exists. Assets provide the governed, structured view that governance teams work with.
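The fraud-detection example above can be sketched as a simple grouping step. This is a minimal illustration, assuming each artifact carries a hypothetical "system" hint that reconciliation can key on; the real reconciliation logic is more sophisticated.

```python
from collections import defaultdict

# Hypothetical discovered artifacts for one AI system, spread across
# four platforms (mirroring the fraud-detection example in the text).
artifacts = [
    {"id": "art-1", "type": "notebook",   "platform": "GitHub",             "system": "fraud-detection"},
    {"id": "art-2", "type": "model",      "platform": "MLflow",             "system": "fraud-detection"},
    {"id": "art-3", "type": "endpoint",   "platform": "AWS SageMaker",      "system": "fraud-detection"},
    {"id": "art-4", "type": "experiment", "platform": "Weights and Biases", "system": "fraud-detection"},
]

def reconcile(artifacts):
    """Group individual artifacts into assets, keyed by the AI system
    they belong to. Each asset lists its component artifact IDs."""
    assets = defaultdict(list)
    for a in artifacts:
        assets[a["system"]].append(a["id"])
    return dict(assets)

print(reconcile(artifacts))
# One asset ("fraud-detection") composed of four artifacts
# discovered on four different platforms.
```

The point of the sketch is the direction of the relationship: many artifacts roll up into one asset, and governance then operates on the asset.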

How the Holistic AI Platform Handles Artifacts

The IDENTIFY module discovers artifacts automatically through read-only integrations with over 30 platforms. Each artifact is tagged with metadata from its source system, including when it was created, who created it, what platform it lives on, and what type of AI component it represents.

After discovery, artifacts go through classification (determining what type of AI component they are) and then reconciliation (grouping related artifacts into assets). The full lineage from artifact to asset is maintained, so governance teams can always trace an asset back to every individual component that makes it up.
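The classify-then-reconcile flow described above can be sketched as a two-stage pipeline. The rules and field names below are illustrative assumptions (e.g. inferring a component type from a file extension), not the platform's actual classification logic.

```python
# Hypothetical sketch of the post-discovery pipeline:
# classify each artifact, then reconcile into assets while
# preserving artifact-level lineage on each asset.

def classify(raw):
    """Assign a component type to a raw artifact.
    Illustrative rule only: infer type from the file extension."""
    ext_map = {".ipynb": "notebook", ".pkl": "model_file", ".onnx": "model_file"}
    for suffix, kind in ext_map.items():
        if raw["name"].endswith(suffix):
            return {**raw, "type": kind}
    return {**raw, "type": "unknown"}

def reconcile(classified):
    """Group classified artifacts into assets by system, keeping the
    full artifact list so the asset can be traced back to every component."""
    assets = {}
    for art in classified:
        asset = assets.setdefault(art["system"], {"artifacts": []})
        asset["artifacts"].append(art)
    return assets

raw = [
    {"name": "train.ipynb", "system": "fraud-detection"},
    {"name": "model.pkl",   "system": "fraud-detection"},
]
assets = reconcile([classify(r) for r in raw])

# Lineage check: the asset still lists every component artifact.
print([a["type"] for a in assets["fraud-detection"]["artifacts"]])
```

Because the asset record keeps the full artifact list rather than discarding it after grouping, the lineage question ("which components make up this asset?") is answered by a lookup, not a rescan.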
