SE Radio 677: Jacob Visovatti and Conner Goodrum on Testing ML Models for Enterprise Products

Description

5 months ago

Jacob Visovatti and Conner Goodrum of Deepgram speak with host Kanchan Shringi about testing ML models for enterprise use and why it's critical for product reliability and quality. They discuss the challenges of testing machine learning models in enterprise environments, especially in foundational AI contexts. The conversation particularly highlights the differences in testing needs between companies that build ML models from scratch and those that rely on existing infrastructure. Jacob and Conner describe how testing is more complex in ML systems due to unstructured inputs, varied data distributions, and real-time use cases, in contrast to traditional software testing frameworks such as the testing pyramid.
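
As a rough illustration (our sketch, not code discussed in the episode): where a traditional unit test asserts one exact output, a model-quality test typically asserts a statistical threshold over a varied sample set. In the Python sketch below, transcribe() is a hypothetical stand-in for a real speech-to-text call, and the 15% word-error-rate budget is an arbitrary example value.

def transcribe(audio_path: str) -> str:
    # Hypothetical placeholder; a real implementation would call the model.
    return "hello world"

def word_error_rate(hypothesis: str, reference: str) -> float:
    # Word-level Levenshtein distance, normalized by reference length.
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

def test_model_quality(samples: list[tuple[str, str]], max_wer: float = 0.15) -> None:
    # Assert an aggregate statistic over varied inputs, not a single exact
    # output: individual errors are tolerated, aggregate regressions are not.
    rates = [word_error_rate(transcribe(path), ref) for path, ref in samples]
    avg = sum(rates) / len(rates)
    assert avg <= max_wer, f"mean WER {avg:.3f} exceeds budget {max_wer}"

A test shaped like this stays green across harmless output variations that would break an exact-match assertion, which is one reason the classic testing pyramid maps awkwardly onto ML systems.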


To address the difficulty of ensuring LLM quality, they advocate for iterative feedback loops, robust observability, and production-like testing environments. Both guests underscore that testing and quality assurance are interdisciplinary efforts that involve data scientists, ML engineers, software engineers, and product managers. Finally, this episode touches on the importance of synthetic data generation, fuzz testing, automated retraining pipelines, and responsible model deployment, especially when handling sensitive or regulated enterprise data.
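
Again purely as an illustration (not the guests' code): fuzz testing a model can mean generating randomly perturbed or malformed inputs and checking that the system degrades gracefully instead of crashing. transcribe() is the same hypothetical placeholder as above, and the perturbations are arbitrary examples.

import random

def transcribe(samples: list[float]) -> str:
    # Hypothetical placeholder for a real speech-to-text model call.
    return "hello world"

def perturb(samples: list[float], rng: random.Random) -> list[float]:
    # Simulate the malformed audio a production pipeline actually sees:
    # background noise, hard clipping, and dropped frames.
    noisy = [s + rng.gauss(0, 0.05) for s in samples]
    if rng.random() < 0.3:
        noisy = [max(-1.0, min(1.0, s * 10)) for s in noisy]  # clipping
    if rng.random() < 0.3:
        noisy = noisy[::2]                                    # frame drops
    return noisy

def fuzz_model(n_cases: int = 100, seed: int = 0) -> None:
    rng = random.Random(seed)
    base = [rng.uniform(-0.5, 0.5) for _ in range(16000)]  # ~1s of fake audio
    for _ in range(n_cases):
        out = transcribe(perturb(base, rng))
        # Minimal invariants: never crash, always return a string.
        assert isinstance(out, str)

One natural extension, not detailed here, is feeding the inputs that expose failures back in as candidates for synthetic training data, which is where fuzzing connects to the retraining pipelines mentioned above.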


Brought to you by IEEE Computer Society and IEEE Software magazine.
