Scott Zoldi: The AI Ethics Buck Stops with the CDAO
FICO CAO Scott Zoldi outlines how he believes enterprises can
ensure they’re using AI ethically and responsibly in episode two of
the Business of Data podcast
AI ethics emerged as a key barrier to enterprise AI adoption when
analytics company FICO commissioned Corinium to survey 100 CDOs,
CAOs and CDAOs about their AI strategies. So for the second
episode of the Business of Data podcast, we invited FICO CAO Scott
Zoldi to join us and share his views on the findings of this
research.
“The hype cycle of AI is over and the hard work has begun,” he
says. “To the extent that the data which is around our society is
biased (which it is), you need models that you can demonstrate do
not necessarily reflect those biases.”
For Zoldi, the buck for AI ethics stops with a company’s CDO or
CAO. It’s up to them to get ethics recognized as a board-level
issue and to put processes in place that ensure ethical AI usage.
“They have to define one standard within their organization,” he
explains. “They need to make sure it aligns from a regulatory
perspective. They need to align all their data scientists around
a centralized management or standardization of how you do that.
And that takes a lot of work.”
Crucially, Zoldi stresses that enterprises must monitor AI
systems on an ongoing basis to be sure they’re using AI
ethically. Our research shows that just 33% of AI-using
enterprises currently do this.
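The episode stays at the level of principles, but the kind of ongoing monitoring Zoldi describes typically means recomputing fairness metrics on every new batch of model decisions. The sketch below is a hypothetical illustration only, not FICO’s method: it measures the gap in approval rates between groups (a simple demographic-parity check) and flags a batch when that gap drifts past a tolerance.

```python
# Hypothetical illustration of ongoing fairness monitoring for a deployed model.
# Column names, metric choice and threshold are assumptions, not from the episode.
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Largest difference in approval rate between any two groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

def check_batch(decisions: pd.DataFrame, tolerance: float = 0.05) -> bool:
    """Flag a batch of decisions whose approval-rate gap exceeds the tolerance."""
    gap = demographic_parity_gap(decisions)
    if gap > tolerance:
        print(f"ALERT: approval-rate gap {gap:.3f} exceeds tolerance {tolerance}")
        return False
    print(f"OK: approval-rate gap {gap:.3f} within tolerance {tolerance}")
    return True

# Example: one week's scored decisions, with the protected attribute recorded
batch = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   0,   1,   0,   0,   1],
})
check_batch(batch)  # prints an alert if the gap between groups is too large
```

In practice the metric, the grouping attribute and the tolerance would come from the organization’s own ethics standard and its regulatory context, which is exactly the centralized standardization Zoldi argues the CDAO must own.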
“Look at the pandemic,” Zoldi argues. “[The pandemic] affects
different protected and ethnic groups differently, based on their
exposure to the virus and the types of work that they’re forced
to do. That means, [certain] models that may have been ethical at
the time they were built are no longer ethical today.”
He concludes: “You’re not done with the model when you’re done
building it. You’re done with the model when it ceases to be
used.”
Key Takeaways
- AI ethics is a board-level issue. It’s up to a company’s data and analytics leadership to ensure executives prioritize ethical considerations around AI usage.
- Ethics policies must be enforced. Strong AI governance policies are needed to enforce AI ethics standards across the organization.
- AI models require continuous monitoring. Data scientists must monitor the performance of AI models to ensure their decisions don’t become unfair.