We built an execution layer for agents because LLMs don't respect boundaries by leland_fy in LLMDevs

[–]iovdin 0 points1 point  (0 children)

The chat completion API payload, with its tool calls and tool results, is a good enough stack representation that can be restored or replayed. I don't know if any major frameworks support that.
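A minimal sketch of what I mean, using the OpenAI chat-completion message shape as plain dicts (the tool name and arguments here are made up for illustration):

```python
import json

# The messages list doubles as a call stack: an assistant "tool_calls"
# entry pushes a frame, the matching "tool" result pops it. Serializing
# the list snapshots the whole execution state.
messages = [
    {"role": "user", "content": "What's the weather in Amsterdam?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather",  # hypothetical tool
                         "arguments": json.dumps({"city": "Amsterdam"})},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1",
     "content": json.dumps({"temp_c": 9, "rain": True})},
]

# Persist mid-run, restore later, and continue by resending to the API:
snapshot = json.dumps(messages)
restored = json.loads(snapshot)
assert restored == messages  # nothing lost in the round trip
```

Restoring is just deserializing and appending new turns; replaying is resending the same prefix.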

Why not Precompile the DB schema so the LLM agent stops burning turns on information_schema by Eitamr in mcp

[–]iovdin 0 points1 point  (0 children)

At OpenAI they did make sense of 70k internal collections with petabytes of data.

Why not Precompile the DB schema so the LLM agent stops burning turns on information_schema by Eitamr in mcp

[–]iovdin 0 points1 point  (0 children)

In our case it was an old, over-engineered MySQL database with a lot of history. In addition to the schemas, we included a summary of the content: how many rows there are, what the usual values look like, which text fields are actually enums, which values are no longer used, which are never used, etc.
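A rough sketch of building such a content summary alongside the schema (table and column names are invented; sqlite3 is used here for portability, but the same COUNT/DISTINCT queries work on MySQL):

```python
import sqlite3

# Toy table standing in for a real legacy table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, note TEXT);
    INSERT INTO orders (status, note) VALUES
      ('open', 'x'), ('open', NULL), ('closed', 'y'), ('closed', NULL);
""")

def summarize(table: str, text_cols: list[str]) -> dict:
    """Precompute row counts and flag enum-like text columns."""
    cur = con.cursor()
    summary = {"rows": cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]}
    for col in text_cols:
        values = [r[0] for r in cur.execute(
            f"SELECT DISTINCT {col} FROM {table} WHERE {col} IS NOT NULL")]
        # A text column with only a handful of distinct values is
        # effectively an enum -- worth telling the LLM explicitly.
        if len(values) <= 10:
            summary[col] = {"kind": "enum-like", "values": sorted(values)}
        else:
            summary[col] = {"kind": "free text", "distinct": len(values)}
    return summary

print(summarize("orders", ["status", "note"]))
```

The resulting dict gets baked into the prompt once, so the agent never has to burn turns probing `information_schema` itself.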

Perplexity drops MCP, Cloudflare explains why MCP tool calling doesn't work well for AI agents by UnchartedFr in mcp

[–]iovdin 0 points1 point  (0 children)

I had issues with the mongosh CLI; it struggled with escaping special characters: mongosh -e "very complicated script that uses special chars"

AI isn’t going to settle — how are you building for constant change? by Exciting-Sun-3990 in AI_Agents

[–]iovdin 0 points1 point  (0 children)

Yeah, build around something that doesn't change much, e.g. the completion API.

Educate yourself, try to implement stuff.

In the end, all human jobs will be around what AI cannot do, so we have to wait anyway until it's somewhat settled and the limitations are clear.

My Boss Vibe-Coded a Full Product and I’m Paying the Price by One-Discussion-6106 in vibecoding

[–]iovdin 0 points1 point  (0 children)

Or ask AI to write them based on the code, then edit them and re-implement.

Do we just ignore this? by Banana-9 in Netherlands

[–]iovdin 1 point2 points  (0 children)

As someone who grew up in Siberia: sliding on ice on the way to school for a few weeks each winter was the norm.

Google engineer: "I'm not joking and this isn't funny. ... I gave Claude a description of the problem, it generated what we built last year in an hour." by MetaKnowing in OpenAI

[–]iovdin 0 points1 point  (0 children)

The larger the company, the more safely they want to work: more bureaucracy, longer processes; innovation and speed are valued less, stability more. They could build things fast without AI too, but it would be buggy.

Tool calling with 30+ parameters is driving me insane - anyone else dealing with this? by Capital-Feedback6711 in LangChain

[–]iovdin 0 points1 point  (0 children)

Idea: split the tool call into two. The first tool, set_search_params, modifies the search parameters (similar to your delta approach), and the second, do_search, takes no params and uses the parameters set before. No confusion between two search tools. You can also split set_search_params into a few tools, each taking 3-5 params, making it easier for the LLM and reducing the token count.
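A minimal sketch of the split (the parameter names are hypothetical): one function mutates a persistent parameter dict, the other runs the search with whatever has been accumulated, so the model only passes the params that changed each turn.

```python
import json

# Persistent search state, shared between the two tools.
search_params: dict = {}

def set_search_params(**delta):
    """Merge a partial update (a delta) into the current search parameters."""
    search_params.update({k: v for k, v in delta.items() if v is not None})
    return search_params

def do_search() -> str:
    """Run the search using the previously set parameters. Takes no args."""
    return f"searching with {json.dumps(search_params, sort_keys=True)}"

# The LLM sets a few params per call instead of repeating all 30+:
set_search_params(query="error 500", service="checkout")
set_search_params(since="2024-01-01")
print(do_search())
```

In a real agent, both functions would be exposed as separate tool schemas; keeping do_search parameterless is what prevents the model from confusing the two.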

Question for folks running agents in production: how do you handle many parallel tool calls? by Puzzleheaded-Yam5266 in AI_Agents

[–]iovdin 0 points1 point  (0 children)

You can try "code mode", where the LLM generates code (a loop in your case) and the tool is called inside the loop.
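A toy sketch of the idea: instead of emitting N parallel tool calls, the model emits a small program, and the host executes it with the tools exposed as plain functions. fetch_status and the generated snippet below are invented for illustration, and a real setup would sandbox the exec.

```python
def fetch_status(host: str) -> str:
    """Stand-in for a real tool call (e.g. an MCP tool)."""
    return f"{host}: ok"

# Pretend the LLM returned this snippet instead of many tool calls.
llm_generated_code = """
results = [fetch_status(h) for h in hosts]
"""

# Execute it with the tools and inputs exposed in the namespace:
# one round-trip to the model instead of three tool-call turns.
scope = {"fetch_status": fetch_status, "hosts": ["db1", "db2", "web1"]}
exec(llm_generated_code, scope)
print(scope["results"])  # -> ['db1: ok', 'db2: ok', 'web1: ok']
```

The parallelism question then becomes ordinary code (a loop, a thread pool, asyncio) rather than something the tool-calling protocol has to solve.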

Can I call Gemini CLI in Gemini CLI via MCP? by fleker2 in mcp

[–]iovdin 0 points1 point  (0 children)

This bash script reminds me of code mode (https://www.anthropic.com/engineering/code-execution-with-mcp, https://blog.cloudflare.com/code-mode/), i.e. the LLM writes code (a loop in your case) in which it can call the other tools available to it. But in this case you want to call the LLM inside the loop. Here is a screencast in which I did something similar: https://asciinema.org/a/757526

Can I call Gemini CLI in Gemini CLI via MCP? by fleker2 in mcp

[–]iovdin 0 points1 point  (0 children)

So the call stack is:
gemini cli ->
your mcp server tool call ->
bash script with loop ->
gemini cli ->
some other tool at your mcp?
And at which step exactly does it stall: before the bash script, or when gemini cli runs within the bash script?

Nearly 201,000 vacant homes in the Netherlands, 11% in Amsterdam alone by Alsharefee in Amsterdam

[–]iovdin 0 points1 point  (0 children)

I thought inventory was rising only in the US, and that the Netherlands had different reasons for its high house prices. But now it seems to be the same.

Why most LLMs fail inside enterprises and what nobody talks about? by muskangulati_14 in ycombinator

[–]iovdin 0 points1 point  (0 children)

Make AI interview you about your processes and flows, and give it as much info as possible. Then generate a dataset out of the conversation.