By Rebecca Bellan & Dominic-Madori Davis, TechCrunch – August 3, 2025
Vogue’s July print issue ran a Guess advertisement featuring an AI-generated model. The placement triggered days of online backlash and industry debate about creative labor, consent, and “artificial diversity.” TechCrunch interviewed models, creatives, and technologists across the ecosystem for their perspectives.
Synthetic talent is appealing for its lower cost and speed, but it raises questions about authenticity, consent and likeness rights, and representation that can affect brand trust and regulatory risk.
AI is transforming creative production, offering scale and efficiency, but trust, transparency, and governance are non-negotiable. Earning customer trust with AI-generated media depends on clear disclosure, consent, and a thoughtful brand strategy.
By Jeremy Hsu, New Scientist – August 2025
Researchers report a promising new universal detection tool that can spot multiple types of manipulated or AI-generated video, not just the output of a single generator or technique, with strong test results across varied sources. The technology can help flag non-consensual AI-generated pornography, deepfake scams, and election misinformation videos.
Widely applicable AI detection has remained elusive thus far, so this work represents a significant step toward more reliably telling whether content is AI-generated.
While this universal detector is promising progress, it’s not a magic filter. Use it to reduce risk and flag doubtful content faster, while keeping human judgment and provenance checks in the loop.
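For teams putting that advice into practice, the minimal sketch below shows one way to keep humans in the loop. Everything in it is an assumption for illustration: `score_video` is a hypothetical placeholder for whatever detection model a team adopts, and the thresholds are made-up defaults, not values from the research.

```python
# Illustrative triage sketch (not the detector from the article):
# route videos by a detector's confidence score and keep humans in
# the loop for ambiguous cases.

from dataclasses import dataclass

@dataclass
class TriageResult:
    verdict: str   # "likely_ai", "needs_review", or "likely_authentic"
    score: float   # detector's estimated probability the video is AI-generated
    note: str      # recommended next step

def score_video(path: str) -> float:
    """Hypothetical placeholder for a real detector; should return P(AI-generated)."""
    raise NotImplementedError("plug in an actual detection model here")

def triage(path: str, flag_at: float = 0.90, review_at: float = 0.50) -> TriageResult:
    score = score_video(path)
    if score >= flag_at:
        return TriageResult("likely_ai", score,
                            "auto-flag; confirm with provenance checks before acting")
    if score >= review_at:
        return TriageResult("needs_review", score,
                            "route to a human reviewer")
    return TriageResult("likely_authentic", score,
                        "no action; spot-check periodically")
```

The two-threshold design mirrors the advice above: automation narrows the review queue, but borderline scores still land with a person, and even high-confidence flags get a provenance check before anyone acts on them.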
By Daniel Rissing, Stanford HAI – July 31, 2025
In a new paper published in Science, 20 experts – including Stanford Institute for Human-Centered AI scholars Fei-Fei Li, Yejin Choi, Daniel E. Ho, Percy Liang, and Rishi Bommasani – call for policymakers to ground AI rules in measured evidence and to create structures (testing access, transparency, safe harbors) that generate the evidence regulators need.
Rules built on anecdotes tend to misfire. Measured evaluation makes it easier to target real risks and update policy as the tech evolves.
According to these scholars, evidence-based AI policy offers the best path forward to capture the innovation potential of AI, while minimizing the risks and building trust among consumers, policymakers, and other stakeholder groups.
By Lucy Perkins, Carnegie Mellon University – August 4, 2025
Carnegie Mellon University (CMU) launched a three-year pilot institute to pair machine learning with formal, math-based methods that improve AI reasoning and accuracy. Programs include workshops, summer schools, and a conference. Backed by the National Science Foundation (NSF) with additional support from the Simons Foundation, the Institute for Computer-Aided Reasoning in Mathematics (ICARM) will help researchers strengthen real-world problem-solving in domains like cybersecurity, finance, space, and health care.
As AI moves into high-stakes areas (finance, healthcare, safety), it’s not enough for a model to be accurate; we also need ways to verify its reasoning.
This signals a shift toward mathematically grounded guardrails for AI. It represents a meaningful step beyond explainability toward verifiability, calling for AI systems that are testable, not just impressive.
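To make verifiability concrete, here is a minimal sketch using the Z3 SMT solver (pip install z3-solver) to prove a property of a toy scoring rule. This is not ICARM’s methodology; the linear rule and the monotonicity property are invented for illustration.

```python
# A minimal sketch of "verifiable AI" in the spirit of formal methods:
# prove, rather than merely test, that a toy credit-scoring rule is
# monotone in income. The rule and property are assumptions for
# illustration, not anything from the CMU institute.

from z3 import Real, Solver, Implies, Not, unsat

income1, income2, debt = Real("income1"), Real("income2"), Real("debt")

def score(income, debt):
    # Toy model: score rises with income, falls with debt.
    return 0.6 * income - 0.4 * debt

# Property: holding debt fixed, more income never lowers the score.
claim = Implies(income1 <= income2, score(income1, debt) <= score(income2, debt))

s = Solver()
s.add(Not(claim))  # search for a counterexample to the property
if s.check() == unsat:
    print("Property proved: the score is monotone in income.")
else:
    print("Counterexample found:", s.model())
```

Unlike a test suite, the solver’s `unsat` result covers every possible input, which is the kind of guarantee formal methods aim to bring to high-stakes deployments.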
Holistic AI Staff – August 8, 2025
IDC’s ProductScape for Worldwide AI Governance Platforms, 2025 analyzes major AI governance vendors, including Holistic AI, across five essential feature sets. The report specifically highlights Holistic AI's end-to-end data security and enterprise risk management capabilities. This recognition comes as organizations increasingly deploy GenAI across mission-critical applications, requiring powerful yet flexible governance frameworks.
With the global AI market projected to reach $151 billion by 2030, adopting a proven, platform-based AI governance framework for your enterprise is more important than ever. This IDC ProductScape provides critical insights for enterprises and technology decision-makers evaluating AI governance platforms and solutions.
The report notes that Holistic AI “oversees the entire life cycle of generative AI models, addressing governance concerns such as bias, security, and compliance...the platform offers a versatile approach to ensuring transparency, security, and trust, addressing the complexities of responsible AI deployment across industries.”
Get a demo