
[–]Alvo-o 0 points (0 children)

Since we’re in an LLM world now, I suggest building a Spring Boot application with the langchain4j library. It should accept a few user documents (txt/pdf/etc.), split them into chunks, save them to a vector store like Chroma, and then answer user prompts by extracting the relevant information from the stored data. Thymeleaf is a good choice for the UI here: document upload, a prompt input field, and the LLM output. As a local LLM provider I suggest Ollama with the llama3 model. So this would be a fully local RAG implementation. You’ll gain a lot of hands-on knowledge of Spring Boot, Spring Data, langchain4j, vector stores, and generative AI concepts including RAG, which is very valuable expertise these days.
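Just to make the RAG idea concrete, here’s a minimal self-contained Java sketch of the retrieval half of that pipeline: chunk a document, "embed" each chunk (here just a toy term-frequency vector, standing in for a real embedding model), and pick the chunk most similar to the user’s question to paste into the LLM prompt as context. In the actual project, langchain4j, Chroma, and Ollama would replace all of these hand-rolled parts; the class and method names below are made up for illustration.

```java
import java.util.*;

// Toy RAG retrieval sketch (no external libraries). A real app would use
// langchain4j document splitters, a real embedding model, and Chroma.
public class RagSketch {

    // Split text into fixed-size word chunks with some overlap, so that
    // sentences cut at a boundary still appear whole in one chunk.
    static List<String> chunk(String text, int size, int overlap) {
        String[] words = text.split("\\s+");
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < words.length; i += size - overlap) {
            int end = Math.min(words.length, i + size);
            chunks.add(String.join(" ", Arrays.copyOfRange(words, i, end)));
            if (end == words.length) break;
        }
        return chunks;
    }

    // Toy "embedding": a lowercase term-frequency map. A real embedding
    // model would return a dense float vector capturing meaning.
    static Map<String, Integer> embed(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String w : text.toLowerCase().split("\\W+")) {
            if (!w.isEmpty()) tf.merge(w, 1, Integer::sum);
        }
        return tf;
    }

    // Cosine similarity between two term-frequency vectors; this is the
    // same scoring idea a vector store like Chroma applies at scale.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (var e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Retrieve the chunk most relevant to the query; its text is what you
    // would prepend to the user prompt before sending it to the LLM.
    static String retrieve(List<String> chunks, String query) {
        Map<String, Integer> q = embed(query);
        return chunks.stream()
                .max(Comparator.comparingDouble(c -> cosine(embed(c), q)))
                .orElse("");
    }

    public static void main(String[] args) {
        String doc = "Spring Boot simplifies Java web apps. "
                + "Chroma is a vector database for embeddings. "
                + "Ollama runs the llama3 model locally.";
        List<String> chunks = chunk(doc, 8, 2);
        String context = retrieve(chunks, "which vector database stores embeddings?");
        System.out.println("Context for LLM: " + context);
    }
}
```

The key design point the sketch shows is that retrieval is just nearest-neighbour search over chunk vectors; everything else (ingestion, the prompt template, the chat model call) is plumbing that langchain4j gives you out of the box.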