In the ever-evolving landscape of artificial intelligence (AI), large language models (LLMs) have emerged as a game-changer, transforming how we interact with and derive insights from textual data. As a Java developer, diving into the world of AI might seem intimidating, but it's not! This tutorial is your gateway to harnessing the power of LLMs through the integration of LangChain4j and Quarkus.
Exploring the capabilities of LangChain4j and Quarkus
LangChain4j, a Java library specializing in natural language processing (NLP), will play a crucial role in this tutorial. Using LangChain4j with Quarkus, a cloud-native, container-first framework, we will develop a tool proficient in analyzing and summarizing the content of blog posts.
The objective here is not just to create a tool, but rather to provide Java developers with the skills to seamlessly integrate LLMs, comprehend the nuances of AI, and refine their skill set through practical application.
Prerequisites
Before diving into the tutorial, make sure you have the following prerequisites in place:
- OpenAI account: Ensure that you have an active OpenAI account.
- API key: Generate an API key from your OpenAI account. This key is essential for accessing OpenAI's services and will be used throughout the tutorial.
- Credits: Confirm that your OpenAI account has sufficient credits to cover the usage of language models.
Create the application
To create a Quarkus application, run the following Maven command:
mvn io.quarkus.platform:quarkus-maven-plugin:3.6.6:create \
-DprojectGroupId=com.hbelmiro.demos \
-DprojectArtifactId=intelligent-java-blog-reader \
-Dextensions='resteasy-reactive'
cd intelligent-java-blog-reader
With your application created, you can start writing code, beginning with the code that reads the blog post.
Parse the HTML
You need to create a class that reads the HTML content from a URL specified by the user. You can do that by using Jsoup, an HTML parser included in Quarkus.
To make the process faster and avoid unnecessary charges, send OpenAI only the HTML element that you know contains the blog content. In this example, prepare your application to read the Red Hat blog. In that blog, the content is inside a div element with the rh-push-content-main class. So, create a class named WebCrawler that gets the first HTML element of class rh-push-content-main and returns its HTML.
package com.hbelmiro.demos.intelligentjavablogreader;

import jakarta.enterprise.context.ApplicationScoped;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.IOException;
import java.io.UncheckedIOException;

@ApplicationScoped
class WebCrawler {

    String crawl(String url) {
        Document doc;
        try {
            // Fetch and parse the page at the given URL
            doc = Jsoup.connect(url).get();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // Return only the element that holds the blog content
        Element content = doc.body().getElementsByClass("rh-push-content-main").first();
        if (content == null) {
            throw new IllegalStateException("No element with class rh-push-content-main found at " + url);
        }
        return content.html();
    }
}
Now that you have the HTML where the blog content is, you can send it to OpenAI.
Send the blog content to OpenAI
LLMs work like a chat. You send a message in natural language, and the model answers in natural language, or in whatever format you ask it to use.
So, you'll need to tell the model what you want it to do with the HTML you'll send. You can send something like:
You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up. Here's the HTML:
{html}
That's it. You can sum up the article and add key takeaways to the end of the sum up.
However, there's a limit to the number of characters you can send in each message to the LLM. The model used in this example handles around 2,000 characters per prompt well, so you won't be able to send everything in a single message. You'll have to break the HTML into pieces of 2,000 characters each, send each piece in its own message, and at the end, ask the model to sum up the article for you.
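The number of messages this takes is just a ceiling division of the article length by the per-prompt budget. Here is a minimal sketch; the 7,500-character article size is a made-up example:

```java
public class ChunkMath {
    public static void main(String[] args) {
        int maxCharacters = 2000; // per-prompt budget used in this tutorial
        int htmlLength = 7500;    // hypothetical article size
        // Ceiling division: how many pieces of at most maxCharacters are needed
        int parts = (htmlLength + maxCharacters - 1) / maxCharacters;
        System.out.println(parts); // prints 4
    }
}
```

So a 7,500-character page would take four "send the next part" messages before the final "sum it up" request.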
To do that, first prepare the LLM:
You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up.
The body will be sent in parts in the next requests. Don't return anything.
Send each part of the HTML:
Here's the next part of the body page:
{html}.
Wait for the next parts. Don't answer anything else.
After sending all the parts, ask the model to sum up the article:
That's it. You can sum up the article and add key takeaways to the end of the sum up.
That's how your application will interact with the LLM. To do that, you'll use a library called LangChain4j. So, add the following dependency to your pom.xml:
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
    <version>0.6.3</version>
</dependency>
Then, create the service to interact with OpenAI. The service will contain three methods: one to prepare the model, one to send the HTML, and one to sum up the article:
package com.hbelmiro.demos.intelligentjavablogreader;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface BlogReaderService {

    @SystemMessage("You are an assistant that receives the body of an HTML page and sums up the article on that page. Add key takeaways to the end of the sum-up.")
    @UserMessage("""
            The body will be sent in parts in the next requests. Don't return anything.
            """)
    String prepare();

    @UserMessage("""
            Here's the next part of the body page:

            ```html
            {html}
            ```

            Wait for the next parts. Don't answer anything else.
            """)
    String sendBody(String html);

    @UserMessage("""
            That's it. You can sum up the article and add key takeaways to the end of the sum-up.
            """)
    String sumUp();
}
With the AI service created, you need to break the HTML into small pieces. Create the following class to split the text and return a List containing all the pieces:
package com.hbelmiro.demos.intelligentjavablogreader;

import jakarta.enterprise.context.ApplicationScoped;

import java.util.ArrayList;
import java.util.List;

@ApplicationScoped
class RequestSplitter {

    private static final int MAX_CHARACTERS = 2000;

    List<String> split(String text) {
        List<String> pieces = new ArrayList<>();
        if (text != null && !text.isEmpty()) {
            int length = text.length();
            if (length <= MAX_CHARACTERS) {
                return List.of(text);
            }
            int startIndex = 0;
            int endIndex = MAX_CHARACTERS;
            while (startIndex < length) {
                String piece = text.substring(startIndex, endIndex);
                pieces.add(piece);
                startIndex = endIndex;
                endIndex = Math.min(startIndex + MAX_CHARACTERS, length);
            }
        }
        return pieces;
    }
}
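To see how the splitter behaves, here is a small standalone sketch with the same splitting logic inlined; the 4,500-character input is a made-up example:

```java
import java.util.ArrayList;
import java.util.List;

public class RequestSplitterDemo {

    static final int MAX_CHARACTERS = 2000;

    // Same chunking logic as RequestSplitter, inlined for a runnable demo
    static List<String> split(String text) {
        List<String> pieces = new ArrayList<>();
        int length = text.length();
        int startIndex = 0;
        while (startIndex < length) {
            int endIndex = Math.min(startIndex + MAX_CHARACTERS, length);
            pieces.add(text.substring(startIndex, endIndex));
            startIndex = endIndex;
        }
        return pieces;
    }

    public static void main(String[] args) {
        String html = "x".repeat(4500); // hypothetical 4,500-character page body
        List<String> pieces = split(html);
        System.out.println(pieces.size());          // 3
        System.out.println(pieces.get(0).length()); // 2000
        System.out.println(pieces.get(2).length()); // 500 (the remainder)
    }
}
```

A 4,500-character body yields two full 2,000-character pieces plus one 500-character remainder, which matches the "part x out of y" log messages you'll see later in the controller.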
Now you have everything to process the user request. So, create the controller:
package com.hbelmiro.demos.intelligentjavablogreader;

import jakarta.inject.Inject;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;

@Path("/")
public class BlogReaderResource {

    private static final Logger LOGGER = LoggerFactory.getLogger(BlogReaderResource.class);

    private final BlogReaderService blogReaderService;
    private final WebCrawler webCrawler;
    private final RequestSplitter requestSplitter;

    @Inject
    public BlogReaderResource(BlogReaderService blogReaderService, WebCrawler webCrawler, RequestSplitter requestSplitter) {
        this.blogReaderService = blogReaderService;
        this.webCrawler = webCrawler;
        this.requestSplitter = requestSplitter;
    }

    @Path("/read")
    @POST
    @Produces(MediaType.TEXT_PLAIN)
    public String read(String url) {
        // Read the HTML from the specified URL
        String content = webCrawler.crawl(url);
        LOGGER.info("\uD83D\uDD1C Preparing analysis of {}", url);

        // Prepare the model
        blogReaderService.prepare();

        // Split the HTML into small pieces
        List<String> split = requestSplitter.split(content);

        // Send each piece of HTML to the LLM
        for (int i = 0; i < split.size(); i++) {
            blogReaderService.sendBody(split.get(i));
            LOGGER.info("\uD83E\uDDD0 Analyzing article... Part {} out of {}.", (i + 1), split.size());
        }

        LOGGER.info("\uD83D\uDCDD Preparing response...");

        // Ask the model to sum up the article
        String sumUp = blogReaderService.sumUp();
        LOGGER.info("✅ Response for {} ready", url);

        // Return the result to the user
        return sumUp;
    }
}
Now you only need to configure your application.
Configure the application
You must set the OpenAI API key and configure timeouts to prevent errors, because each interaction with the model takes a few seconds.
Add the following properties to your application.properties file:
quarkus.http.read-timeout=120s
quarkus.langchain4j.openai.timeout=1m
quarkus.langchain4j.openai.api-key=<YOUR_API_KEY>
Note: Set your API key in the application.properties file only for testing purposes. For production environments, use something more secure, like a Kubernetes Secret or an environment variable.
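As a sketch of the environment-variable approach: Quarkus (via MicroProfile Config) maps configuration properties to environment variables by upper-casing them and replacing dots and dashes with underscores, so the key can be supplied at launch time instead of being committed to the file. The exact variable name below follows that mapping convention:

```shell
# Supply the key at runtime instead of storing it in application.properties
export QUARKUS_LANGCHAIN4J_OPENAI_API_KEY=<YOUR_API_KEY>
mvn quarkus:dev
```

This keeps the secret out of version control while leaving the rest of the configuration in the file.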
Now you're ready to run your application.
Run the application
Start your application:
mvn quarkus:dev
With your application up and running, send the following request to analyze the post at https://www.redhat.com/en/blog/the-power-of-ai-is-open:
curl -X 'POST' \
'http://localhost:8080/read' \
-d 'https://www.redhat.com/en/blog/the-power-of-ai-is-open'
You should see an output similar to:
Summary:
The article emphasizes the significance of artificial intelligence (AI) in today's world and how enterprises can no longer ignore its potential. It discusses the various applications of AI, such as chatbots, financial fraud detection, and patient diagnostics. The article emphasizes the importance of operationalizing AI use cases and leveraging existing tools and processes to drive agility and efficiency. It also highlights the need to adhere to security, regulatory, compliance, and governance standards when implementing AI.
Red Hat is introduced as a company that integrates open-source technologies with AI to help organizations solve problems effectively and quickly. Red Hat offers platforms to develop and deploy AI at scale, increasing efficiency and productivity. The article concludes by stating that Red Hat's enterprise-ready AI solutions make it possible to apply AI to everyday business.
Key Takeaways:
1. AI is a significant technology that enterprises cannot ignore.
2. Operationalizing AI use cases and leveraging existing tools and processes is crucial for success.
3. Adhering to security, regulatory, compliance, and governance standards is essential in AI implementation.
4. Red Hat integrates open-source technologies with AI to solve problems effectively and quickly.
5. Red Hat provides platforms for developing and deploying AI at scale, increasing efficiency and productivity.
6. Red Hat's enterprise-ready AI solutions enable the application of AI to everyday business.
These key takeaways highlight the importance of AI in driving organizational success and the role Red Hat plays in enabling AI implementation.
Conclusion
That's it! You just created a Java application that uses artificial intelligence. To go further with the LangChain4j Quarkus extension, read its documentation. You can use it to create more complex applications and to use model providers other than OpenAI, such as Hugging Face and Ollama.
You can find the source code of the application you created on GitHub.