Incorporating artificial intelligence into Java applications has never been easier, thanks to the Spring AI project. By leveraging the latest open-source Llama 3.1 AI model from Meta, developers can create powerful AI-driven applications with minimal complexity. In this blog post, we’ll walk through how you can set up and use the Llama 3.1 AI model in your Java application using Spring AI and the Groq API.
What is Spring AI?
Spring AI is a project designed to simplify the integration of AI functionalities into applications. It bridges the gap between enterprise data, APIs, and AI models, making it easier for developers to implement sophisticated AI features. Spring AI supports a variety of major model providers, including OpenAI, Microsoft, Amazon, Google, and Hugging Face. It also supports various model types, such as chat, text-to-image, audio transcription, and text-to-speech.
Key features of Spring AI include:
- Support for all major model providers
- Mapping AI model output to POJOs
- Function calling
- Spring Boot auto-configuration and starters for AI models and vector stores
- Support for the Groq AI inference engine by reusing the existing OpenAI client
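The POJO-mapping feature deserves a quick sketch. Spring AI's ChatClient can bind a model's reply directly to a Java record via `.entity(...)`; the `ActorFilms` record and the prompt below are hypothetical examples, not part of this tutorial's application:

```java
import java.util.List;
import org.springframework.ai.chat.client.ChatClient;

// Hypothetical record that the model's structured output is bound to.
record ActorFilms(String actor, List<String> movies) {}

class EntityMappingSketch {

    private final ChatClient chatClient;

    EntityMappingSketch(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    // Spring AI appends a format instruction to the prompt and
    // deserializes the model's JSON reply into the record.
    ActorFilms filmsFor(String actor) {
        return chatClient.prompt()
                .user("List five movies starring " + actor)
                .call()
                .entity(ActorFilms.class);
    }
}
```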
Prerequisites
To get started, you’ll need to do the following:
- Create a new Spring Boot project at start.spring.io, including the Spring Web and OpenAI dependencies.
- Create a Groq account and obtain an API key.
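If you are adding Spring AI to an existing project rather than generating one at start.spring.io, the Maven coordinates for the OpenAI starter look roughly like this (artifact names have shifted between Spring AI milestones, so confirm against the version you use):

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
```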
Application Properties Configuration
Configure your application properties or YAML file with the following settings:
spring:
  ai:
    openai:
      api-key: ${GROQ_API_KEY}
      base-url: https://api.groq.com/openai
      chat:
        options:
          model: llama-3.1-8b-instant
Select one of the Llama 3.1 models offered by Groq; here we use llama-3.1-8b-instant.
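The base-url override works because Groq exposes an OpenAI-compatible endpoint: Spring AI's OpenAI client appends the standard /v1/chat/completions path to whatever base URL you configure. To make that concrete, here is a plain-JDK sketch (no Spring) of the equivalent raw request; the class and method names are illustrative only:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class GroqRequestSketch {

    // Builds the OpenAI-style chat completion request that Spring AI
    // sends under the hood when base-url points at Groq.
    // (Naive JSON escaping of the message is omitted for brevity.)
    public static HttpRequest buildRequest(String apiKey, String message) {
        String body = """
                {"model": "llama-3.1-8b-instant",
                 "messages": [{"role": "user", "content": "%s"}]}""".formatted(message);
        return HttpRequest.newBuilder()
                .uri(URI.create("https://api.groq.com/openai/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("demo-key", "Tell me a dad joke");
        System.out.println(req.uri()); // the Groq chat-completions endpoint
    }
}
```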
Configure ChatClient
The ChatClient provides a fluent API for interacting with AI models. Here’s a simple configuration class to create a ChatClient bean:
@Configuration
class Config {

    @Bean
    ChatClient chatClient(ChatClient.Builder builder) {
        return builder.build();
    }
}
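The builder can also set defaults that apply to every request made through the client. For instance, a default system prompt (a sketch using the builder's `defaultSystem` method; adjust the wording to your use case):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class Config {

    // Every prompt sent through this client is prefixed with
    // the system message below.
    @Bean
    ChatClient chatClient(ChatClient.Builder builder) {
        return builder
                .defaultSystem("You are a concise assistant who answers in plain English.")
                .build();
    }
}
```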
Simple Controller Example
Create a simple @RestController class that uses the chat model for text generation:
@RestController
@Slf4j
@AllArgsConstructor
public class SimpleController {

    private final ChatClient chatClient;

    @GetMapping("/ai")
    Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a dad joke") String message) {
        log.info("message: [{}]", message);
        Map<String, String> completion = Map.of("completion", chatClient.prompt()
                .user(message)
                .call()
                .content());
        log.info("result [{}]", completion);
        return completion;
    }
}

Note the Lombok annotations: @AllArgsConstructor injects the ChatClient bean via constructor injection, and @Slf4j provides the log field used above.
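For longer completions, you may prefer to stream tokens to the client as the model produces them. A hedged variant of the controller above, assuming Spring AI's `stream()` API (which returns a Reactor Flux) and a reactive web stack on the classpath:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class StreamingController {

    private final ChatClient chatClient;

    StreamingController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    // Emits each chunk of the completion as a server-sent event
    // instead of waiting for the full response.
    @GetMapping(value = "/ai/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<String> stream(@RequestParam(defaultValue = "Tell me a dad joke") String message) {
        return chatClient.prompt()
                .user(message)
                .stream()
                .content();
    }
}
```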
Test your setup using curl or Postman:
curl --location 'localhost:8080/ai'
You can also pass your own prompt through the message query parameter, for example localhost:8080/ai?message=Tell me a joke about Java. Enjoy the dad jokes from Llama 3.1!
Conclusion
In just a few steps, you can set up a Spring AI project to use the latest Llama 3.1 AI model via the Groq API. This streamlined process allows you to quickly integrate advanced AI capabilities into your applications. To explore further, you can check out the official documentation of Spring AI. Now, you’re all set to create your own AI-driven apps!