Java, LLMs, and Seamless AI Integration with langchain4j, Quarkus and MicroProfile
A conversation with Dmytro Liubarsky about LLMs, langchain4j and Quarkus
60 minutes
Java, Serverless, Clouds, Architecture and Web conversations with Adam Bien
Description
1 year ago
An airhacks.fm conversation with Dmytro Liubarsky (@langchain4j) about: recent developments in Java and LLM integration, new features in langchain4j including Easy RAG for simplified setup, SQL database retrieval with LLM-generated queries, integration with graph databases such as Neo4j and GraphRAG, metadata filtering for improved search capabilities, observability improvements with listeners and potential integration with OpenTelemetry, increased configurability for AI services enabling state-machine-like behavior, the trend towards CPU inference and smaller, more focused models, langchain4j integration with Quarkus and MicroProfile, parallels between AI integration and microservices architecture, the importance of decomposing complex AI tasks into smaller, more manageable pieces, potential cost optimization strategies for AI applications, the excitement around creating smooth APIs that integrate well with the Java ecosystem, the potential future of CPU inference and its parallels with the evolution of server infrastructure, and the upcoming Devoxx conference. Dmytro Liubarsky on Twitter: @langchain4j
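The Easy RAG feature mentioned in the episode aims to reduce retrieval-augmented generation to a few lines of Java. Below is a minimal sketch, assuming the langchain4j-easy-rag and OpenAI modules are on the classpath; the Assistant interface and the /path/to/docs directory are placeholders, and method names such as chatLanguageModel follow the 0.3x-era API and may differ in newer releases:

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.loader.FileSystemDocumentLoader;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

import java.nio.file.Path;
import java.util.List;

public class EasyRagSketch {

    // A plain interface; langchain4j generates the implementation.
    interface Assistant {
        String chat(String question);
    }

    public static void main(String[] args) {
        // Load documents from a directory; with the easy-rag module present,
        // common formats are parsed via Apache Tika without extra configuration.
        List<Document> documents =
                FileSystemDocumentLoader.loadDocuments(Path.of("/path/to/docs"));

        // Split, embed and store the documents with default settings
        // (Easy RAG resolves a bundled embedding model automatically).
        InMemoryEmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();
        EmbeddingStoreIngestor.ingest(documents, embeddingStore);

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(OpenAiChatModel.builder()
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .contentRetriever(EmbeddingStoreContentRetriever.from(embeddingStore))
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(assistant.chat("What do these documents say about Quarkus?"));
    }
}
```

The point of the simplified setup is that chunking, embedding and retrieval all use sensible defaults, which can later be swapped for an explicit embedding model, a persistent store, or metadata filters without changing the Assistant interface.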
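For the Quarkus and MicroProfile side discussed in the episode, the quarkus-langchain4j extension lets an AI service be declared much like a MicroProfile REST client and injected as a CDI bean. Below is a minimal sketch, assuming a quarkus-langchain4j model extension (for example the OpenAI one) is configured via application.properties; SupportAgent and AskResource are illustrative names, not part of the extension:

```java
import dev.langchain4j.service.SystemMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

// Declarative AI service: the extension generates the implementation and
// wires in the chat model configured in application.properties.
@RegisterAiService
interface SupportAgent {

    @SystemMessage("You are a concise assistant for a Java team.")
    String answer(String question); // a single String parameter becomes the user message
}

// Regular JAX-RS resource; the AI service is injected like any other CDI bean.
@Path("/ask")
public class AskResource {

    private final SupportAgent agent;

    public AskResource(SupportAgent agent) {
        this.agent = agent;
    }

    @GET
    public String ask(@QueryParam("q") String question) {
        return agent.answer(question);
    }
}
```

Model selection, API keys and timeouts then live in configuration rather than in code, which is the "smooth API" angle the conversation highlights: the LLM call looks like any other injected service in a Quarkus application.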