By Robert Capps, The New York Times Magazine – June 17, 2025
This feature explores how AI is creating new jobs, outlining 22 emerging roles built around the expanding interface between human accountability and AI capability. These include AI auditors, consistency coordinators, escalation officers, and “personality directors” tasked with shaping how AI presents itself to users.
This article reframes AI not just as a productivity accelerator for employees, but as a catalyst for organizational redesign. As more tasks and decisions are delegated to AI, companies will need new human roles to provide accountability, coherence, and strategic direction. The future of work won’t be task-based; it’ll be responsibility-based.
As AI capabilities grow, so does the need for thoughtful governance infrastructure. The jobs of the future will be built around trust, translation, and direction—ensuring AI systems operate in service of human and enterprise priorities.
By Sheryl Estrada, Fortune – July 10, 2025
A new global study of 1,000+ C-suite leaders finds that only a fraction of AI investments are delivering clear financial returns. While 94% of executives say they’re increasing AI budgets, just 28% report that their initiatives have met or exceeded ROI expectations. The disconnect appears to stem from poor implementation planning, lack of talent, and overreliance on pilot projects that never scale.
The findings suggest that governance gaps, misaligned incentives, and unclear accountability are limiting the business impact of AI, even as spending accelerates. Without better oversight, AI risks becoming a cost center instead of a growth engine.
As AI investment soars, value realization remains elusive. Leaders can go beyond experimentation by embedding AI into core business strategy, establishing clear ownership, and measuring what matters.
By Emre Kazim, Financial Times – June 11, 2025
As market conditions shift rapidly, financial institutions are discovering that AI models deployed under earlier assumptions are now misfiring. Emre Kazim, co-founder of Holistic AI, highlights “model drift”—the gradual degradation of model accuracy as data diverges from training conditions—as a growing risk across credit, fraud, and liquidity systems.
Model drift is a silent threat. It undermines confidence in AI without obvious failure points, and many institutions lack processes to detect or correct it. With regulators sharpening their focus and financial exposure mounting, organizations must treat ongoing model performance as a core operational responsibility—not a one-time engineering task.
Model drift is often invisible until it causes real harm. Financial institutions must treat models as dynamic assets—regularly monitored, stress-tested, and updated to reflect changing realities.
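The article does not include code, but a minimal sketch can make the monitoring idea concrete. The illustration below, which is not drawn from the article, uses the Population Stability Index, one common drift statistic, to compare a feature’s live distribution against its training-time baseline. All data, names, and thresholds here are hypothetical assumptions for demonstration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature distribution against its training-time
    baseline. PSI near 0 means stable; values above roughly 0.2 are
    commonly read as meaningful drift (a convention, not a rule)."""
    # Bin edges come from the training-time (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live values

    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)

    # Guard against log(0) and division by zero in empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical example: an income feature shifts after market conditions change.
rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, 50_000)  # training-time baseline
live_income = rng.normal(52_000, 18_000, 10_000)      # current production data

psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.3f}")  # well above 0.2 here, flagging drift to investigate
```

In practice, a check like this runs on a schedule across many features and model outputs, and a breach triggers investigation or retraining rather than an automatic fix.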
By Kit Eaton, Inc. – May 13, 2025
Lloyd’s of London, in collaboration with Toronto-based startup Armilla, has introduced an insurance product that covers companies for damages caused by AI model failures, including hallucinated chatbot responses, incorrect decisions, and misinformation delivered to users. The policy is available only for models that Armilla has favorably assessed for risk of degradation over time.
This is one of the first attempts to offer a formal insurance framework for AI systems, treating model risk as something measurable and underwritable. It represents an early step toward building market mechanisms around AI reliability—similar to how cybersecurity became insurable once standards and assessment tools matured.
This product marks a practical step toward treating AI risk like any other business risk—evaluated, priced, and managed. As offerings like this grow, they may start to influence how models are built, tested, and trusted across industries.
By Solange Charas, Forbes – June 2025
Charas argues that boards must take a leadership role in AI governance. The greatest risk from AI isn’t technical error or job displacement but strategic drift: as AI transforms how value is created, board-level oversight is needed to guide how the workforce evolves alongside the technology.
AI is now a material financial factor in enterprise valuation. Decisions about workforce automation, augmentation, and upskilling shape innovation capacity, cost structures, and long-term resilience. Disclosure expectations are rising: frameworks such as the SEC’s Regulation S-K Item 101, ISO 30414, and Europe’s CSRD increasingly press organizations to account for how human capital is managed amid AI deployment.
Boards that treat AI as a tactical function risk misalignment between business strategy and talent architecture—resulting in reputational, regulatory, and leadership succession vulnerabilities.
Many boards still view AI as a technical or operational issue. This article argues that it's increasingly a strategic one—with implications for workforce design, long-term value creation, and emerging disclosure requirements.