By Adriano Koshiyama, Co-Founder & Co-CEO, Holistic AI
In a forward-looking scenario from Holistic AI Co-Founder Adriano Koshiyama, AI, including agentic AI, is fully embedded in daily work and life. Alongside the power, convenience, and automation agentic AI enables, this bright future harbors hidden dangers such as emergent behaviors and cascading risks.
Agentic AI refers to systems that don’t just assist, but act. As agents begin to autonomously manage workflows, make decisions, and execute tasks, today’s governance approaches, built for human-in-the-loop systems, no longer hold. AI governance needs to be rebuilt to address a wide range of new risks and opportunities.
Agentic AI marks a fundamental shift, not just in what AI can do, but in how it must be governed. To scale safely and effectively, organizations need governance systems built for speed, autonomy, and real-time accountability.
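What governance built for speed, autonomy, and real-time accountability might look like in code is still an open question. One illustrative sketch, in Python, puts a risk-tiered gate in front of every agent action so that each request is audit-logged and anything beyond low risk is escalated to a human reviewer; the action names, risk tiers, and approve hook are hypothetical assumptions, not any particular product's design.

```python
# Illustrative sketch only: a risk-tiered gate in front of agent actions.
# Action names, tiers, and the approval hook are hypothetical assumptions.
from datetime import datetime, timezone

RISK_TIERS = {"draft_email": "low", "update_record": "medium", "execute_payment": "high"}
AUDIT_LOG: list[dict] = []

def approve(action: str, payload: dict) -> bool:
    # Stand-in for an out-of-band human review; denies by default in this sketch.
    return False

def gate(action: str, payload: dict) -> bool:
    """Log every request; auto-allow only low-risk actions, escalate the rest."""
    tier = RISK_TIERS.get(action, "high")  # unknown actions default to high risk
    allowed = tier == "low" or approve(action, payload)
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "tier": tier,
        "allowed": allowed,
    })
    return allowed

print(gate("draft_email", {"to": "customer@example.com"}))  # True: low risk, auto-allowed
print(gate("execute_payment", {"amount": 5000}))            # False: escalated, then denied
```

The point of the design is that accountability is recorded at the moment of action rather than reconstructed afterward: the audit trail exists whether or not the action was allowed.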
By Lance Haun, Reworked
Klarna, the fintech giant, went headfirst into an AI customer support makeover, slashing headcount by 24% and claiming its AI agent could manage 2.3 million chats a month. Roughly a year later, the company reversed course and began rehiring for customer service roles. Customers had complained about “robotic responses, inflexible scripts and the Kafkaesque loop of repeating their issue to a human after the bot failed.”
This offers a reality check on the hype surrounding AI-driven job replacement. While AI can bring significant efficiency gains, Klarna's experience shows that overreliance on automation, especially in nuanced, human-facing roles, can backfire and erode customer trust and loyalty. The case underscores the need for transparency, realistic expectations, and clear communication around AI's role in the workforce.
The takeaway: efficiency gains from AI can be real, but so can the risks of overreach. A balanced, transparent approach that treats AI as a complement to human talent is essential for sustaining trust, morale, and long-term value.
By Clare Duffy, CNN
Workday faces a class-action lawsuit alleging its AI-driven hiring tools discriminate against job applicants over 40.
This lawsuit places enterprise-grade AI hiring tools under renewed scrutiny. As more companies adopt algorithmic screening, the legal and reputational stakes grow, especially when models are trained on biased or incomplete datasets.
Proactive AI bias mitigation is no longer just a best practice; it's a critical legal defense against costly litigation.
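What proactive mitigation can look like in practice: one common first-pass check is the EEOC's four-fifths (adverse impact ratio) rule, which compares selection rates across groups. The Python sketch below applies it to screening outcomes by age group; the applicant counts are made up purely for illustration.

```python
# Illustrative sketch of the four-fifths (adverse impact ratio) check.
# All applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the automated screen."""
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the reference group's.
    Values below 0.8 are a conventional red flag under the four-fifths rule."""
    return protected_rate / reference_rate

# Hypothetical outcomes from an automated resume screen.
over_40 = selection_rate(selected=120, applicants=1_000)    # 12%
under_40 = selection_rate(selected=300, applicants=1_500)   # 20%

ratio = adverse_impact_ratio(over_40, under_40)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.60, below the 0.8 threshold
if ratio < 0.8:
    print("Potential adverse impact: review features, thresholds, and training data.")
```

A ratio below 0.8 does not establish discrimination on its own, but it is a conventional trigger for deeper review and documentation, exactly the kind of record that matters if screening decisions are later challenged.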
By Shirin Ghaffary, Bloomberg
Anthropic CEO Dario Amodei delayed the release of Claude 3.7 Sonnet in February after his safety team raised concerns about potential bioweapons risks.
Anthropic represents a critical test case for whether AI companies can maintain safety commitments while competing in a rapidly accelerating market. More than most competitors, the company has formalized its approach in frameworks such as its Responsible Scaling Policy and has repeatedly chosen caution over speed, even under intense commercial pressure. As AI capabilities approach what that policy designates "ASL-3", the level at which models could meaningfully increase the risk of catastrophic misuse, these decisions will become increasingly consequential for the entire industry.
Anthropic's $60 billion valuation and rapid growth suggest that safety-first positioning can be commercially viable, but the company now faces its ultimate test: maintaining ethical commitments as AI capabilities approach genuinely dangerous levels while competing against rivals with fewer constraints. The outcome will likely help determine whether the AI industry can self-regulate or will require external intervention.