The Holistic AI Brief - July 2025

In This Edition

AI Might Take Your Job. Here Are 22 New Ones It Could Give You

By Robert Capps, The New York Times Magazine – June 17, 2025

What’s New

This feature outlines 22 emerging job types built around the expanding interface between human accountability and AI capability. These include AI auditors, consistency coordinators, escalation officers, and “personality directors” tasked with shaping how AI presents itself to users.

Why It Matters

This article reframes AI not just as a productivity accelerator for employees, but as a catalyst for organizational redesign. As more tasks and decisions are delegated to AI, companies will need new human roles to provide accountability, coherence, and strategic direction. The future of work won’t be task-based; it’ll be responsibility-based.

The Bottom Line

As AI capabilities grow, so does the need for thoughtful governance infrastructure. The jobs of the future will be built around trust, translation, and direction—ensuring AI systems operate in service of human and enterprise priorities.

Most AI Projects Still Don’t Deliver ROI

By Sheryl Estrada, Fortune – July 10, 2025

What’s New

A new global study of 1,000+ C-suite leaders finds that only a fraction of AI investments are delivering clear financial returns. While 94% of executives say they’re increasing AI budgets, just 28% report that their initiatives have met or exceeded ROI expectations. The disconnect appears to stem from poor implementation planning, lack of talent, and overreliance on pilot projects that never scale.

Why It Matters

The findings suggest that governance gaps, misaligned incentives, and unclear accountability are limiting the business impact of AI, even as spending accelerates. Without better oversight, AI risks becoming a cost center instead of a growth engine.

The Bottom Line

As AI investment soars, value realization remains elusive. Leaders can go beyond experimentation by embedding AI into core business strategy, establishing clear ownership, and measuring what matters.

What Financial Institutions Must Know About AI Model Drift

By Emre Kazim, Financial Times – June 11, 2025

What’s New

As market conditions shift rapidly, financial institutions are discovering that AI models deployed under earlier assumptions are now misfiring. Emre Kazim, co-founder of Holistic AI, highlights “model drift”—the gradual degradation of model accuracy as data diverges from training conditions—as a growing risk across credit, fraud, and liquidity systems.

Why It Matters

Model drift is a silent threat. It undermines confidence in AI without obvious failure points, and many institutions lack processes to detect or correct it. With regulators sharpening their focus and financial exposure mounting, organizations must treat ongoing model performance as a core operational responsibility—not a one-time engineering task.

The Bottom Line

Model drift is often invisible until it causes real harm. Financial institutions must treat models as dynamic assets—regularly monitored, stress-tested, and updated to reflect changing realities.
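The article does not prescribe specific tooling, but one common statistical check for the kind of drift it describes is the population stability index (PSI), which compares the distribution of a model's inputs or scores at deployment time against live data. The sketch below is illustrative only — the function name, bin count, and the conventional 0.25 alert threshold are assumptions, not anything specified in the article.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (training-time) sample and a live sample.

    Rule of thumb: PSI near 0 means the distributions match;
    PSI above ~0.25 is often treated as significant drift.
    """
    # Bin edges from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor empty bins to avoid log(0)
    eps = 1e-6
    exp_frac = np.clip(exp_frac, eps, None)
    act_frac = np.clip(act_frac, eps, None)

    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # scores under training conditions
stable_live = rng.normal(0.0, 1.0, 10_000)   # live data, same regime
drifted_live = rng.normal(0.8, 1.3, 10_000)  # live data after a market shift

print(f"stable PSI:  {population_stability_index(train_scores, stable_live):.3f}")
print(f"drifted PSI: {population_stability_index(train_scores, drifted_live):.3f}")
```

Run on a schedule against recent production data, a check like this turns "ongoing model performance" into a concrete, alertable metric rather than a one-time engineering task.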

Artificial Intelligence Insurance? This Startup Will Cover the Costs of AI Mistakes

By Kit Eaton, Inc. – May 13, 2025

What’s New

Lloyd’s of London, in collaboration with Toronto-based startup Armilla, has introduced an insurance product that covers companies for damages caused by AI model failures, including hallucinated chatbot responses, incorrect decisions, and misinformation spread to users. Coverage is available only for models that Armilla has favorably assessed for risk of degradation over time.

Why It Matters

This is one of the first attempts to offer a formal insurance framework for AI systems, treating model risk as something measurable and underwritable. It represents an early step toward building market mechanisms around AI reliability—similar to how cybersecurity became insurable once standards and assessment tools matured.

The Bottom Line

This product marks a practical step toward treating AI risk like any other business risk—evaluated, priced, and managed. As offerings like this grow, they may start to influence how models are built, tested, and trusted across industries.

Boards Must Lead AI Governance—or Risk Enterprise Value

By Solange Charas, Forbes – June 2025

What’s New

Solange Charas, writing in Forbes, argues that boards must take a leadership role in AI governance. The greatest risk from AI isn’t technical error or job displacement, but strategic drift. As AI transforms how value is created, board-level oversight is needed to guide how the workforce evolves alongside the technology.

Why It Matters

AI is now a material financial factor in enterprise valuation. Decisions about workforce automation, augmentation, and upskilling shape innovation capacity, cost structures, and long-term resilience. Disclosure expectations are rising: frameworks such as the SEC’s Regulation S-K Item 101, ISO 30414, and Europe’s CSRD require organizations to account for how human capital is managed amid AI deployment.

Boards that treat AI as a tactical function risk misalignment between business strategy and talent architecture—resulting in reputational, regulatory, and leadership succession vulnerabilities.

The Bottom Line

Many boards still view AI as a technical or operational issue. This article argues that it's increasingly a strategic one—with implications for workforce design, long-term value creation, and emerging disclosure requirements.
