Quarkus and LangChain4J - A Match Made in Heaven
A conversation with Georgios Andrianakis about LangChain4j and Quarkus
1 hour 2 minutes
Java, Serverless, Clouds, Architecture and Web conversations with Adam Bien
Description
1 year ago
An airhacks.fm conversation with Georgios Andrianakis (@geoand86)
about: integrating LangChain4j with Quarkus for enterprise AI
applications, similarities between LLM integration and microservice
architecture, benefits of using Java and MicroProfile for AI
development, explanation of AI services, chat memory, and tools in
LangChain4j (a minimal code sketch follows below), importance of
session management and fault tolerance in LLM applications, vector
databases and embeddings for efficient information retrieval, RAG
(Retrieval-Augmented Generation) implementation in enterprise
settings, Quarkus dev mode features for LLM experimentation, native
image support with GraalVM, local inference possibilities with Java
21's Vector API and quantized models, challenges in prompt
engineering and model selection, upcoming features in LangChain4j
including Ollama tool support and improved result streaming, future
developments in Java for AI and GPU support with Project Babylon,
and the importance of enterprise-grade features like CI/CD, testing,
and cloud deployment for LLM applications
Georgios Andrianakis on twitter: @geoand86
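
Since the show notes mention AI services, chat memory, and tools, here is a minimal sketch (not from the episode) of how such a declaration typically looks with the quarkus-langchain4j extension. It assumes a chat model (for example OpenAI or Ollama) and the extension's default chat memory are configured in application.properties; OrderTools, SupportAssistant, orderStatus, and the conversationId parameter are illustrative names only.

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.ApplicationScoped;

// A plain CDI bean whose @Tool methods the LLM may call to fetch live data.
@ApplicationScoped
class OrderTools {

    @Tool("Returns the shipping status for a given order id")
    String orderStatus(String orderId) {
        // In a real application this would query a database or a REST service.
        return "Order " + orderId + " was shipped yesterday.";
    }
}

// The interface becomes a CDI bean backed by the configured chat model.
// The @MemoryId parameter keys the chat memory, so each conversation id keeps
// its own history (the "session management" discussed in the episode).
@RegisterAiService(tools = OrderTools.class)
interface SupportAssistant {

    @SystemMessage("You are a concise assistant for an enterprise Java shop.")
    String chat(@MemoryId String conversationId, @UserMessage String question);
}

Injected like any other CDI bean (for example into a JAX-RS resource), a call to chat(...) sends the system and user messages, the history stored under the given memory id, and the tool metadata to the configured model, which may then invoke orderStatus when it decides the answer needs it.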