Policy Hour
panel & networking
Online Webinar

Framework for Building Trustworthy AI

Tuesday, 30 January 2024 – 11 am ET
Register for this event

The exponential growth of AI applications has prompted the public and AI users alike to reconsider the fairness, robustness, and efficacy of these systems. Businesses are juggling the advent of a new industrial revolution while attending to risks such as: can these systems damage our reputation? What are the regulatory requirements? And are there financial risks to using such systems?

In this industry-centered fireside chat, we explore the intersection of two perspectives on how organizations can build frameworks that support trustworthy AI. In particular, we look at best practices for auditing and building trustworthy AI in the finance and accounting sectors. Our speakers:

  • Danielle Supkis Cheek, VP of Strategy & Industry Relations at MindBridge, a global leader in financial risk discovery.
  • Emre Kazim, co-CEO and co-Founder of Holistic AI.

If you couldn’t make it live or want to revisit the topics we covered, you can access the recording on-demand above and work through some of the questions posed during the event below.

Q&A


Finance and accounting, as highly regulated industries, have very thin thresholds for being wrong about the numbers, and severe penalties for mistakes. Put another way, there is a high need for trust in finance. That said, many of the logical steps AI applies within finance are tried and true in non-AI settings. Historically, SOC compliance has played a role in enabling trust and internal controls, but as firms increased their capacity to process large amounts of information with automation, they have not updated the control side to match. Explainability is built when we can explain in plain human language how each of these automated components works and walk through the tried-and-true logical steps being applied by AI. On the flip side, the execution of this automation behind the scenes can be very complex, so it is best to simplify documentation and external communications down to the basic logical steps. This translation from data science terms into logic and industry-specific terms plays a large role in explainability, as does a focus on the use case and business value.
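
As a concrete illustration of that translation step, here is a minimal sketch — not MindBridge's or Holistic AI's actual method — that turns per-feature attribution scores for a flagged transaction into the kind of plain-language note an auditor could read. All feature names, scores, and phrasings are hypothetical:

```python
# Hypothetical attribution scores: how much each signal contributed to a flag.
ATTRIBUTIONS = {
    "amount_vs_historical_mean": 0.62,
    "posted_outside_business_hours": 0.21,
    "rare_account_pairing": 0.12,
    "weekend_entry": 0.05,
}

# Data science term -> industry-facing phrasing.
PLAIN_LANGUAGE = {
    "amount_vs_historical_mean": "the amount is unusually large for this account",
    "posted_outside_business_hours": "it was posted outside business hours",
    "rare_account_pairing": "this debit/credit account pairing is rarely used",
    "weekend_entry": "it was posted on a weekend",
}

def explain(attributions: dict, top_n: int = 2) -> str:
    """Summarize the top drivers of a flag in plain English."""
    top = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    reasons = "; ".join(
        f"{PLAIN_LANGUAGE[name]} ({score:.0%} of the score)" for name, score in top
    )
    return f"This entry was flagged mainly because {reasons}."

print(explain(ATTRIBUTIONS))
# -> This entry was flagged mainly because the amount is unusually large for
#    this account (62% of the score); it was posted outside business hours
#    (21% of the score).
```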

In the world of financial and accounting controls, receiving third-party reports to understand the tests, processes, and results applied to systems is normal. This extends across many finance and accounting use cases where teams are used to receiving reports and then assessing next steps based on them. Providing similar reports on AI systems generally works for communicating with the larger team, particularly when there is an emphasis on translating complex data science terms into more fundamental concepts.
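
A minimal sketch of that idea, loosely assuming a SOC-style control summary as the template: AI test results rendered in the report format finance teams already consume. Field names, systems, and wording are illustrative assumptions:

```python
from datetime import date

def render_report(system: str, period: str, tests: list) -> str:
    """Render AI test results as a short, plain-language assessment report."""
    lines = [
        f"AI System Assessment: {system}",
        f"Period covered: {period}",
        f"Issued: {date.today().isoformat()}",
        "",
        "Control / test summary:",
    ]
    for t in tests:
        lines.append(f"  - {t['control']}: {t['result']} -- {t['note']}")
    return "\n".join(lines)

print(render_report(
    system="journal-entry anomaly screening",
    period="FY2023 Q4",
    tests=[
        {"control": "Bias testing", "result": "Pass",
         "note": "flag rates were consistent across business units"},
        {"control": "Robustness testing", "result": "Pass with note",
         "note": "results stable under small perturbations of input data"},
    ],
))
```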

The latest political agreement on the EU AI Act outlines fines of up to €35M or 7% of global annual turnover for breaches, with the amount depending on the nature of the non-compliance and the size of the respective entity; these figures are not yet finalized.
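
A worked example of that arithmetic, assuming the commonly reported "whichever is higher" reading of the political agreement (the amounts were not yet settled at the time of the event):

```python
def max_potential_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound: the greater of EUR 35M or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

for turnover in (100e6, 500e6, 2e9):
    print(f"Turnover EUR {turnover:>13,.0f} -> "
          f"max fine EUR {max_potential_fine(turnover):,.0f}")
# Turnover EUR   100,000,000 -> max fine EUR 35,000,000   (the EUR 35M floor applies)
# Turnover EUR   500,000,000 -> max fine EUR 35,000,000   (7% exactly equals the floor)
# Turnover EUR 2,000,000,000 -> max fine EUR 140,000,000  (7% exceeds the floor)
```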

Conservative and risk-averse industries are still looking to automate things that are hard to do at scale. This is an opportunity to start small while still delivering efficiency gains that build trust. You can begin by comparing options across the lifecycle of AI applications, from less to more transformative. Starting with a point solution, or with efficiency gains from automating minute details of a process, can build trust before more dramatic AI-driven overhauls are attempted.

Transparency around testing, audits, results, and methodology is a core component here. Often “good enough” surfaces where there is a human in the loop. There are many ways and metrics for testing confidence in AI, so it is important to provide some overlap and differing views when testing the performance and safety of an AI system. Layering many tests over systems removes a single point of failure and allows humans to determine which tests are most predictive of confidence. Human judgment still plays a very large role in parsing testing and metrics to find the best proxies for explainability and trust.
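
A minimal sketch of the layering idea: run several independent checks, show the full panel rather than a single pass/fail, and leave the final judgment to a human reviewer. Metric names, scores, and thresholds are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    run: Callable[[], float]  # returns a score in [0, 1]
    threshold: float          # minimum acceptable score

def review_panel(checks) -> None:
    """Print every result; flag failures without hiding the other views."""
    for c in checks:
        score = c.run()
        status = "OK  " if score >= c.threshold else "FLAG"
        print(f"[{status}] {c.name}: {score:.3f} (threshold {c.threshold})")

# Stand-ins for real evaluations computed elsewhere in a test pipeline.
review_panel([
    Check("holdout accuracy",       lambda: 0.941, 0.90),
    Check("subgroup parity ratio",  lambda: 0.970, 0.95),  # 1.0 = perfect parity
    Check("robustness under noise", lambda: 0.880, 0.90),  # this one gets flagged
])
```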

How we can help

Highly regulated industries like finance and accounting have lower thresholds for error than many other industries. This means efficiency gains from AI and automation need well-thought-through frameworks and processes for building trust. Once you’ve explored the perspective of industry leader MindBridge through our fireside chat guest Danielle Supkis Cheek, reach out to the Holistic AI team to see what the world’s first 360-degree solution for AI trust, risk, security, and compliance can do for your enterprise.

Discover how we can help your company

Schedule a call with one of our experts
