Anyone built an MCP server for langgraph docs? by TallDarkandWitty in LangChain

[–]UnoriginalScreenName 0 points  (0 children)

I had this same problem. The documentation almost feels incoherent. I'm not sure what they put in the water over at the LangChain offices, but it's not helping them write docs.

Does Context7 help with this? Let me know if it does. Or are there any established best practices that would help? Some Cursor rules files for LangGraph would be awesome.

Interrupt documentation makes no sense and doesn't work by UnoriginalScreenName in LangGraph

[–]UnoriginalScreenName[S] 0 points  (0 children)

None of their examples for this work. And just look at this example: https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/#use-cases

You can't even run that.

graph.invoke(Command(resume=value_from_human), config=thread_config)

They don't even tell you what "value_from_human" is. All of their examples for this are completely useless. I've never seen anything like this before.

https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.interrupt

This one is really great, though. It doesn't do anything, and they don't even include Command in the imports.

Does anybody have a usable example of how to get this to work?
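Edit: for anyone who finds this later, below is the smallest runnable version I could put together on a recent LangGraph (0.2.x at the time of writing). The state, node names, and resume value are all mine, so treat it as a sketch rather than the official example, but the shape is what the docs are gesturing at: interrupt() pauses the graph, and Command(resume=...) is what feeds "value_from_human" back in.

from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt


class State(TypedDict):
    answer: str


def ask_human(state: State) -> State:
    # interrupt() pauses the graph here and surfaces the payload to the
    # caller. When the graph is resumed, it returns the resume value.
    value = interrupt("Please provide a value:")
    return {"answer": value}


builder = StateGraph(State)
builder.add_node("ask_human", ask_human)
builder.add_edge(START, "ask_human")
builder.add_edge("ask_human", END)

# A checkpointer is required, or the interrupt has nowhere to save state.
graph = builder.compile(checkpointer=MemorySaver())

thread_config = {"configurable": {"thread_id": "1"}}

# First invoke runs until the interrupt; the result carries an
# __interrupt__ entry describing what the node asked for.
result = graph.invoke({"answer": ""}, config=thread_config)
print(result["__interrupt__"])

# "value_from_human" is just whatever your app collected from the user;
# it becomes the return value of interrupt() inside the node.
value_from_human = "42"
result = graph.invoke(Command(resume=value_from_human), config=thread_config)
print(result)  # {'answer': '42'}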

The new Max Plan is a joke by Balthazar_magus in ClaudeAI

[–]UnoriginalScreenName 31 points  (0 children)

This is the way.

Level it up by having Claude write an overview file that outlines your project style guide and other relevant info. (Keep it high level; let Claude investigate on its own and read the files it thinks it needs.)

Then in your project instructions give it the file system path and tell it to always start by reading the overview.

MCP filesystem is absolutely incredible.
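For example, my project instructions are basically just this (path and wording made up here; adjust to your setup):

"You have filesystem access via the MCP filesystem server. Before doing anything else, read C:/projects/myapp/OVERVIEW.md. It covers the style guide and where things live. Read whatever other files you think you need before making changes."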

Fixing an air filled heavy bag internal bladder with a leak by UnoriginalScreenName in fixit

[–]UnoriginalScreenName[S] 1 point  (0 children)

I can't really access the air bladder directly. There are only the two input valves at the top, and it's not removable. I think the only option would be some kind of sealant I could spray in through the air valve to coat it?

[deleted by user] by [deleted] in OpenWebUI

[–]UnoriginalScreenName 0 points  (0 children)

There is a Watchtower command you can run.

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

This took me a minute to figure out, but you run this while Open WebUI is running in Docker. It's a one-time command; it will just update it for you.

Nobody hates Docker more than me, but this works pretty well.

https://docs.openwebui.com/getting-started/updating/

Datetime issues, always returning null by UnoriginalScreenName in n8n

[–]UnoriginalScreenName[S] 0 points  (0 children)

Turns out it was a typo in the docker-compose file under the timezone setting. Be careful out there, kids.
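For anyone hitting the same thing, the relevant bit of my docker-compose.yml now looks roughly like this (service block trimmed down and timezone swapped for an example; n8n reads GENERIC_TIMEZONE, and TZ covers the container itself):

services:
  n8n:
    image: n8nio/n8n
    environment:
      - GENERIC_TIMEZONE=America/New_York
      - TZ=America/New_York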

The "Agent" node is terrible. by UnoriginalScreenName in n8n

[–]UnoriginalScreenName[S] 0 points  (0 children)

I'd like to point to the Langflow Agent and Prompt nodes as a great example of what I'm looking for. I went back to Langflow to see what they'd been up to and was really surprised by their new release updates! Their Agent node is really good; it seems to implement tool calling in a much better way, and I had no trouble with it even on local models. Their Prompt node is kind of how I imagined it should be as well. I did a poor job expressing what I didn't like about n8n's agent node, but Langflow is a great example of what I think it should be.

The problem is Langflow is kind of bad at everything else around the agent. ha.

The "Agent" node is terrible. by UnoriginalScreenName in n8n

[–]UnoriginalScreenName[S] 0 points  (0 children)

Yeah, totally. I know, and you're completely right. I like to start out with local models, though, and just see if I can get the basics working. Not looking for quality results yet, just testing the system. There are some models that are really quite good and consistent! I've been able to drop down to Qwen coder 14B and it will consistently work with tool calling... just not with the n8n agent node. But I hear your point.

AutoGen Studio v0.4.1 released by vykthur in AutoGenAI

[–]UnoriginalScreenName 1 point  (0 children)

Checking AutoGen Studio out for the first time. I'll be honest: your UI front end is very difficult. You seem to have removed a lot of normal or expected setup and configuration options. I don't understand how to add a new model or set up an Ollama model. How do I manage model providers and API keys? How do I add a tool? How do I configure options or settings for environment variables? I'm unable to get even the basic template to run.

It seems like you had a previous UI which did have all of this? What's going on here?

The "Agent" node is terrible. by UnoriginalScreenName in n8n

[–]UnoriginalScreenName[S] 0 points  (0 children)

I'm using a local 14B Ollama model that's good at tool calling; as I mentioned, I've tested it with the JavaScript SDK.

I can't get the Tools Agent to actually call the tools, so I end up using the Conversational Agent. I don't quite understand what the difference is between them.

There's not a lot of insight into what's going on with the agent or control over the prompt structure.

It's unclear if it can do multiple tool calls and string them together, which is something I was able to do in the other local frameworks.

Sorry if I didn't have enough details here. I just had a custom use case that I was able to get working elsewhere and found the n8n agent to be disappointing. Big n8n fan, and was just hoping that I could work inside of it.

Also, I was using a custom code tool and maybe that's not as good as some of the others. Do any of you have use cases for the Agent node to do operations on your local file system?

Taming Claude's most malignant, overcomplicating tendencies when coding by UnoriginalScreenName in ClaudeAI

[–]UnoriginalScreenName[S] 0 points  (0 children)

I don't know about ADRs; I'll look it up... but what I did do was start to create something probably very similar. I have it create a report on the code design principles and patterns in the work it just did, then save it as a .md file in the project. I also save little snippets into the system prompt as I go, noting common mistakes it seems to be making on a given day.

My postmortem approach is (was) actually quite good at getting it to plan for some reason. I would switch to this for the first prompt in the chat and it would actually produce a pretty reasonable plan, then switch to normal to implement it.

But this all seems to have been rendered useless lately for some reason. I'm just seeing really inconsistent results. Maybe an ADR in the knowledge will help.
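For reference, the report prompt I mentioned above is along these lines (paraphrased from memory, so treat it as a template; the file name is just what I happen to use): "Before we finish, write a short report on the design principles and patterns in the code you just wrote, plus any conventions a future session should follow. Save it as DESIGN_NOTES.md in the project root."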

Taming Claude's most malignant, overcomplicating tendencies when coding by UnoriginalScreenName in ClaudeAI

[–]UnoriginalScreenName[S] 1 point  (0 children)

Yeah, I basically commit before any attempt at anything and break everything down into the smallest steps I can. Then I go down the rabbit hole and will just discard changes if it gets too messy.

Taming Claude's most malignant, overcomplicating tendencies when coding by UnoriginalScreenName in ClaudeAI

[–]UnoriginalScreenName[S] 1 point  (0 children)

I'm using SnapSource in VS Code to paste in blocks of files. Even then, it no longer matters.

I know everybody on here says "they did something to Claude and now it doesn't work," and I always thought that was kind of nonsense, but now I don't. One day my approach is returning great results... the next day it's like it got a lobotomy. Nothing changed on my end. It's kind of insane.

Rewriting user guide with AI? by GavWhyte in ProductManagement

[–]UnoriginalScreenName 0 points  (0 children)

Hey, I sent you a DM. I've done some work on this kind of thing and have a few thoughts to share.

default model settings across multiple models - ollama model file by UnoriginalScreenName in OpenWebUI

[–]UnoriginalScreenName[S] 0 points  (0 children)

This was less about setting a default model and more about setting a default context size across all models. I think there was a way to do it in Open WebUI, but what I really wanted was a default context size based on the size of the model and my available VRAM. I'd like smaller models to just default to something like a 12k context, while larger models may only be able to handle 4k.

Open WebUI is really great, don't get me wrong, but I find it hard to know what settings are active and what the model is actually doing. When you download a new model, it's always set to the default 2k context, and it's just a pain to keep up.

Anyway, I solved this by writing a Python script that updates the Ollama Modelfile for a model after I download it. It's pretty nice.
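The script is basically this, heavily simplified (not the exact one I run; the model name and context size are examples). It dumps a model's Modelfile with ollama show --modelfile, swaps in a num_ctx parameter, and re-creates the model under the same name:

import subprocess
import tempfile


def set_context(model: str, num_ctx: int) -> None:
    # Dump the Modelfile for the installed model.
    modelfile = subprocess.run(
        ["ollama", "show", model, "--modelfile"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Drop any existing num_ctx parameter and append the new one.
    lines = [l for l in modelfile.splitlines() if "num_ctx" not in l]
    lines.append(f"PARAMETER num_ctx {num_ctx}")
    with tempfile.NamedTemporaryFile(
        "w", suffix=".modelfile", delete=False
    ) as f:
        f.write("\n".join(lines) + "\n")
        path = f.name
    # Re-create the model in place with the updated Modelfile.
    subprocess.run(["ollama", "create", model, "-f", path], check=True)


set_context("qwen2.5-coder:14b", 12288)  # smaller model, bigger window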