Jordan Levine: We Need a Smarter Approach to Combatting AI Bias
26 minutes
4 years ago
Jordan Levine, MIT Lecturer and Partner at Dynamic Ideas,
outlines why he believes executives and regulators must do more
to combat AI bias – and what they can do about it.
When the EU announced its proposed new AI legislation in April
2021, the bloc touted the new laws as a necessary step to ensure
Europeans can trust AI technologies. But for Jordan Levine,
Partner at consulting firm Dynamic Ideas, the proposals are
something of a ‘blunt instrument’.
In this week’s Business of Data podcast, Levine argues that this
kind of legislation is, at best, a starting point. It’s up to
AI-focused executives to sit down and implement practical
frameworks for ensuring AI is used responsibly in their
organizations.
“I'm 100% supportive of the government getting involved in
establishing the rules,” he says. “[But] I hope that both
academics and business [and] society-conscious individuals get
excited and say, ‘OK, how do we refine this?’”
In Levine’s experience, there are many things that can cause
ethical issues when enterprises put AI or analytics models into
production. That’s why much of the work he does at Dynamic Ideas
is geared toward educating people about AI bias challenges.
He says it’s important for businesses to have both clear
mitigation strategies to combat ethical issues such as biased
decision-making and the right tools or technologies to
orchestrate those strategies in practice.
“What I try to do is show how to mitigate those issues and then
show actual techniques that exist today, [so] that you can
leverage open-source software to do the processing,” he says.
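Levine doesn't name a specific tool in the episode, but a common first step with open-source software is to compute a fairness metric over a model's decisions, such as the selection rate per group and their disparate impact ratio. A minimal sketch in plain Python (the groups, decisions, and the 0.8 "four-fifths" threshold below are illustrative assumptions, not from the podcast):

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Compute per-group selection rates and the disparate impact ratio.

    decisions: list of (group, approved) pairs, approved being a bool.
    Returns (rates, ratio), where ratio = min rate / max rate.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    rates = {g: approved[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical loan decisions: (applicant group, model approved?)
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

rates, ratio = disparate_impact(decisions)
print(rates)   # {'A': 0.8, 'B': 0.5}
print(ratio)   # 0.625 -- below the common 0.8 "four-fifths" guideline
```

A check like this gives an analytics team a concrete number to bring to the table when, as Levine suggests, executives sit down with them to confirm whether a bias issue exists.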
Levine argues that business leaders must use a framework like the
one he’s developed to make sure they are aware of the ethical
issues that may arise from the ways they’re using AI and
analytics. This will allow them to take steps to make sure these
issues are addressed.
“I hope they can use this framework to actually challenge their
analytics groups,” he says. “To actually sit down with the
individuals writing the algorithms and confirm whether the
issue does or does not exist.”
However, Levine concedes that no framework for combatting AI bias
can ever really be complete. Technology is constantly evolving,
and enterprises are constantly innovating with it. So, AI-focused
executives must be vigilant and reevaluate their AI practices
regularly with an ethics lens.
Levine concludes: “The more precise we can get in terms of bias
and ethics, and the more discrete issues we can identify and then
think through how to mitigate them and show examples of
mitigation, I think, the better we all are.”
Key Takeaways
· Regulatory compliance is not the same as ethical behavior. Enterprises must go beyond what’s required of them by law to ensure their AI practices are ethical
· Executives must be aware of potential ethical issues. If executives don’t know the specific risks that come with adopting AI technologies, they will struggle to ensure the right processes are in place to mitigate them
· AI ethics frameworks must be updated regularly. AI-focused executives must constantly reevaluate their AI ethics strategies to ensure their teams are following current industry best practices