SE Radio 677: Jacob Visovatti and Conner Goodrum on Testing ML Models for Enterprise Products
Information for Software Developers and Architects
Description
5 months ago
Jacob Visovatti and Conner
Goodrum of Deepgram speak with host Kanchan Shringi
about testing ML models for enterprise use and why it's critical
for product reliability and quality. They discuss the challenges
of testing machine learning models in enterprise environments,
especially in foundational AI contexts. The conversation
particularly highlights the differences in testing needs between
companies that build ML models from scratch and those that rely
on existing infrastructure. Jacob and Conner describe how testing
is more complex in ML systems due to unstructured inputs, varied
data distribution, and real-time use cases, in contrast to
traditional software testing approaches such as the testing
pyramid.
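To make that contrast concrete (a sketch for illustration, not code from the episode): a classic unit test asserts one exact output, whereas an ML regression test gates on an aggregate quality metric over a set of unstructured inputs. The sketch below does this in Python for a speech-to-text setting; the evaluation pairs and the 15% word-error-rate threshold are illustrative assumptions, and in practice the hypotheses would come from a live inference call against the model under test.

```python
# Gate on an aggregate quality metric over many inputs, rather than
# asserting one exact output as in a testing-pyramid-style unit test.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] = edit distance between ref[:i] and hyp[:j], one rolling row
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,                               # deletion
                dp[j - 1] + 1,                           # insertion
                prev_diag + (ref[i - 1] != hyp[j - 1]),  # substitution
            )
    return dp[-1] / max(len(ref), 1)

# (reference transcript, model output) pairs -- hypothetical examples;
# in practice the second element comes from the model under test.
EVAL_PAIRS = [
    ("thanks for calling how can i help", "thanks for calling how can help"),
    ("please leave a message after the tone", "please leave message after the tone"),
]

def test_wer_regression_gate():
    # Individual transcripts may vary; average quality must not regress.
    rates = [word_error_rate(ref, hyp) for ref, hyp in EVAL_PAIRS]
    assert sum(rates) / len(rates) < 0.15  # illustrative threshold

if __name__ == "__main__":
    test_wer_regression_gate()
    print("WER gate passed")
```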
To address the difficulty of ensuring LLM quality, they advocate
for iterative feedback loops, robust observability, and
production-like testing environments. Both guests underscore that
testing and quality assurance are interdisciplinary efforts that
involve data scientists, ML engineers, software engineers, and
product managers. Finally, this episode touches on the importance
of synthetic data generation, fuzz testing, automated retraining
pipelines, and responsible model deployment—especially when
handling sensitive or regulated enterprise data.
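As one concrete reading of the fuzz-testing idea (again a sketch, not Deepgram's actual practice): feed large volumes of randomized, synthetic inputs into a component and assert invariants that must hold for any input. For self-containment this example fuzzes the word_error_rate harness from the sketch above; in production the same loop would drive perturbed or synthetic audio through the model service and assert that it never crashes and always returns a well-formed response.

```python
import random
import string

def random_text(rng: random.Random, max_words: int = 20) -> str:
    """Synthetic, unstructured input: random word counts, lengths, characters."""
    alphabet = string.ascii_lowercase + string.digits + " 'éü-"
    words = [
        "".join(rng.choices(alphabet, k=rng.randint(1, 12)))
        for _ in range(rng.randint(0, max_words))
    ]
    return " ".join(words)

def test_fuzz_scoring_invariants(trials: int = 1000, seed: int = 0) -> None:
    rng = random.Random(seed)  # seeded so any failure is reproducible
    for _ in range(trials):
        ref, hyp = random_text(rng), random_text(rng)
        assert word_error_rate(ref, hyp) >= 0.0   # score is never negative
        assert word_error_rate(ref, ref) == 0.0   # identical inputs score perfectly

if __name__ == "__main__":
    test_fuzz_scoring_invariants()
    print("fuzz invariants held")
```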
Brought to you by IEEE Computer Society and IEEE
Software magazine.