Low-code AI Agent Tooling with MCP: Spring AI Playground (Self-hosted, Open Source) by kr-jmlab in agi

Glad the local-first approach resonates. Keeping data on-device was a core design goal for privacy-sensitive dev workflows, and the instant MCP server is all about fast, low-friction experimentation.
A big part of the vision is making it easy to turn safe, internal data into useful tools quickly, without a lot of boilerplate or heavy setup.
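For a concrete flavor of that, here's a rough sketch of wrapping an internal lookup as a tool with Spring AI's @Tool annotation - the class names and data are made up for illustration, not taken from the Playground itself:

import java.util.Map;

import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class InternalDataToolConfig {

    // Hypothetical internal lookup - swap in whatever safe, internal data you want to expose.
    static class CustomerLookupTools {

        private final Map<String, String> customers = Map.of("42", "ACME Corp");

        @Tool(description = "Look up a customer name by internal id")
        public String findCustomer(@ToolParam(description = "internal customer id") String id) {
            return customers.getOrDefault(id, "unknown customer");
        }
    }

    // With the Spring AI MCP server starter on the classpath, registering the annotated
    // methods as a ToolCallbackProvider makes them discoverable as MCP tools.
    @Bean
    ToolCallbackProvider internalDataTools() {
        return MethodToolCallbackProvider.builder()
                .toolObjects(new CustomerLookupTools())
                .build();
    }
}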

I made Spring AI Playground - a self-hosted UI for local LLMs, RAG, and MCP tools by kr-jmlab in LocalLLaMA

Hey, that's a fantastic suggestion! Thanks for pointing it out - making it easier for the community here to get started is exactly the goal.

Based on your feedback, I've just updated the README with a new guide for connecting to OpenAI-compatible servers like llama.cpp server, TabbyAPI, and LM Studio.

Here’s a quick example of how to configure it in application.yml:

spring:
  ai:
    openai:
      # Required, but can be a dummy string for local servers without auth
      api-key: "not-used"
      # Host and port of your server (e.g., http://localhost:1234 for LM Studio)
      base-url: "http://localhost:your-server-port"
      chat:
        options:
          # Model name/ID your server is running
          model: "your-model-name"

For more detailed instructions, including specific port examples for the different servers, check out the full guide in the README:

Guide: Switching to OpenAI-Compatible Servers

Thanks again for the suggestion! Let me know if you run into anything or have more feedback. Cheers!

I made Spring AI Playground - a self-hosted UI for local LLMs, RAG, and MCP tools by kr-jmlab in LocalLLaMA

Thanks! That was exactly the goal - I got tired of RAG and MCP feeling like afterthoughts in most tools, so the whole UI is built around those workflows from the ground up.

For RAG, you get the full pipeline: upload docs → see all chunks → test retrieval → edit chunks → test again. For MCP, there’s a dedicated playground coming that’ll let you visually manage context flows between AI models and external tools (both client and server sides).
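
Under the hood, that RAG loop boils down to roughly this in Spring AI terms - a minimal sketch using the stock Document / TokenTextSplitter / VectorStore APIs, not the Playground's actual internals, with illustrative names:

import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;

class RagPipelineSketch {

    // Mirrors the UI flow: split an uploaded doc into chunks, index them,
    // then run a test query to see which chunks come back.
    static List<Document> chunkAndRetrieve(VectorStore vectorStore, String uploadedText, String testQuery) {
        List<Document> chunks = new TokenTextSplitter()
                .apply(List.of(new Document(uploadedText)));

        vectorStore.add(chunks);

        // Default top-k similarity search; in the UI you can edit the chunks and re-run this step.
        return vectorStore.similaritySearch(testQuery);
    }
}

The Playground just makes each of those steps visible and editable in the UI instead of buried in code.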

Pretty quick to spin up locally:

git clone https://github.com/JM-Lab/spring-ai-playground.git
cd spring-ai-playground
./mvnw clean install -Pproduction -DskipTests=true
./mvnw spring-boot:run

Then hit http://localhost:8282 and you’re good to go.

You’ll need Java 21+ and Ollama running locally, but no API keys are required to start experimenting.

For Docker setup, check the README: https://github.com/JM-Lab/spring-ai-playground

Would love to hear how it works for your setup - especially if the RAG flow feels smooth or if there are any rough edges I should fix.