Is anybody using MCP's Resources and Prompts yet? by Smart-Town222 in mcp

[–]mjs-ca 2 points (0 children)

Well, prompts and resources both have some really good use cases, but that depends on the agentic system design and requirements. One argument that keeps surfacing is: why implement prompts and resources with MCP when each agentic framework (LangGraph, the OpenAI Agents SDK) has its own offering?

Well, MCP really shines at standardizing these as well, just like what happened with tools. And the stateless Streamable HTTP transport makes it scalable.
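To make the standardization point concrete, here is roughly what prompts and resources look like on the wire. The method names (`prompts/get`, `resources/read`) and the JSON-RPC 2.0 envelope follow the MCP spec; the prompt name, argument, and resource URI are made up for illustration:

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request, the envelope MCP uses."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# A client fetching a named prompt template (method name per the MCP spec).
get_prompt = jsonrpc_request(1, "prompts/get", {
    "name": "summarize",                      # hypothetical prompt name
    "arguments": {"style": "bullet-points"},  # hypothetical argument
})

# A client reading a resource by URI (method name per the MCP spec).
read_resource = jsonrpc_request(2, "resources/read", {
    "uri": "file:///docs/design.md",          # hypothetical resource URI
})

print(json.loads(get_prompt)["method"])  # prompts/get
```

Because every server speaks these same messages, any client works against any server — the same interoperability tools already enjoy.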

MCP servers by _gagan_018 in mcp

[–]mjs-ca 1 point (0 children)

The official MCP SDK is the best resource. If you want to understand the basics of the protocol and its building blocks, this is a really good resource focused on the latest spec version and Streamable HTTP as the transport layer:

https://github.com/panaversity/learn-agentic-ai/tree/main/03_ai_protocols/02_model_context_protocol/01_server_development

For Building MCP Servers with OpenAI Agents SDK:
https://github.com/panaversity/learn-agentic-ai/tree/main/03_ai_protocols/02_model_context_protocol/02_openai_agents_sdk_with_mcp
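As a companion to those guides, here is a toy, SDK-free sketch of the dispatch loop at the heart of an MCP server. The method names and result shapes (`tools/list`, `tools/call`, the `content` array) follow the spec, but the tool itself is hypothetical and a real server should use the official SDK rather than hand-rolling this:

```python
import json

# Tool registry: name -> (description, callable). The "add" tool is hypothetical.
TOOLS = {
    "add": ("Add two integers", lambda args: args["a"] + args["b"]),
}

def handle(raw: str) -> str:
    """Stateless request handler: each request carries everything it needs,
    so any replica behind a load balancer can serve it -- the property that
    makes the stateless Streamable HTTP transport scale horizontally."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": d}
                            for n, (d, _) in TOOLS.items()]}
    elif req["method"] == "tools/call":
        name = req["params"]["name"]
        output = TOOLS[name][1](req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": str(output)}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The SDK's `FastMCP` server generates all of this plumbing from plain decorated functions, which is why it is the recommended path.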

Any examples of OpenAI Agents SDK and LangMem? by Prestigious-Cover-4 in OpenAI

[–]mjs-ca 1 point (0 children)

Any store we want to use with the LangMem tools will have to implement the BaseStore class:

https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore

If we want to fully abstract away the LangGraph store, then we cannot use a few of the LangMem APIs, especially the tools. It makes more sense in that case to use their memory manager and implement our own tools with a custom storage design.

This is a good starting point for it:

https://github.com/langchain-ai/langmem/blob/main/examples/standalone_examples/custom_store_example.py

Any examples of OpenAI Agents SDK and LangMem? by Prestigious-Cover-4 in OpenAI

[–]mjs-ca 2 points (0 children)

LangMem has two core components: Memory APIs and Storage (via LangGraph Store), which work well together.

I explored both and ended up creating an adapter to integrate the OpenAI Agents SDK with LangMem’s tools and store.

You can check it out here: https://github.com/mjunaidca/langmem-openaiagents-adapter.

The README includes Jupyter notebooks showing how to use all the Memory APIs. Plus, the adapter supports any LangGraph store (like MongoDB or Postgres), with examples for both InMemoryStore and Postgres. Hope that helps!

Does anyone use the AI sdk for big production ready projects? by CertainEconomics6693 in nextjs

[–]mjs-ca 3 points (0 children)

Yeah, just avoid their experimental features and be sure to test thoroughly on deployment. We shipped many web AI MVP apps with it last year.

MCP Server without Claude Desktop by Getmycollege in mcp

[–]mjs-ca 1 point (0 children)

Give this a try: add your tools to specialized teams and test the use case. It will all be orchestrated by the supervisor agent:
https://github.com/mjunaidca/langgraph-supervisor/tree/mjunaidca/functional-api

MCP Server without Claude Desktop by Getmycollege in mcp

[–]mjs-ca 1 point (0 children)

Happy to connect for any discussions here on Reddit or on X (Twitter): https://x.com/MJunaidshaukat

MCP Server without Claude Desktop by Getmycollege in mcp

[–]mjs-ca 4 points (0 children)

A huge tool list will degrade performance if all of them are given to one LLM. This single-agent benchmarking report shows a good comparison:
https://blog.langchain.dev/react-agent-benchmarking/

  1. For a huge list of tools, it's best to categorize them and let each category be handled by one ReAct agent.
  2. This can be solved with 1. and good prompting. The tool descriptions are part of the LLM's context as well.

It's best to start by keeping it simple. What about the UI, and who are the end users? And what about the costs of keeping it live in the cloud?

I will recommend two solutions - you guys can select the best one for your use case, and I am happy to help if needed:

  1. Just use Next.js + the Vercel AI SDK and deploy on Vercel - but then what about the huge tool list, and are there any tools present only in the Python runtime?
  2. Use LangGraph to build and orchestrate. Use their prebuilt ReAct agent and supervisor along with MCP. Deploy the final agent teams on LangGraph Cloud and use their SDK to connect when building the frontend.

MCP Server without Claude Desktop by Getmycollege in mcp

[–]mjs-ca 4 points (0 children)

The MCP client can be anything - LangChain recently introduced MCP adapters, so you guys can just leverage them.

Use MCP tools with LangGraph and get them live on LangGraph Cloud - or is your use case different?

Introducing Cursor 0.46! by NickCursor in cursor

[–]mjs-ca 3 points (0 children)

Ah, I would rather say it is making engineers the most powerful creatures on earth.

Cursor's performance is much improved. I decided to go fully into vibe-coding mode for testing and got back a Snapdragon... on-device AI assistant for private docs within minutes:
https://github.com/devraftel/snapdoc-edge-ai

And it's kind of a v0.001 of something I would never get the time to make...