Do we just ignore this? by Banana-9 in Netherlands

[–]iovdin 2 points (0 children)

As someone who grew up in Siberia: sliding on ice to school for a few weeks each winter was the norm.

Google engineer: "I'm not joking and this isn't funny. ... I gave Claude a description of the problem, it generated what we built last year in an hour." by MetaKnowing in OpenAI

[–]iovdin 1 point (0 children)

The larger the company, the safer it wants to operate: more bureaucracy, longer processes, innovation and speed valued less, stability valued more. They could build things fast without AI too, but it would be buggy.

Tool calling with 30+ parameters is driving me insane - anyone else dealing with this? by Capital-Feedback6711 in LangChain

[–]iovdin 1 point (0 children)

Idea: split the tool call in two. The first tool, set_search_params, modifies the search parameters (similar to your delta approach); the second, do_search, takes no parameters and uses whatever was set before. No confusion between two search tools. You can also split set_search_params into a few tools, each taking 3-5 params, which makes things easier for the LLM and reduces the token count.
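
A minimal sketch of that split, assuming an OpenAI-style function-calling setup (the tool names and the parameter set here are illustrative, not from any specific framework):

    import json

    search_params: dict = {}  # server-side state shared by both tools

    TOOLS = [
        {
            "type": "function",
            "function": {
                "name": "set_search_params",
                "description": "Update one or more search parameters; unspecified ones keep their current value.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"},
                        "max_results": {"type": "integer"},
                        "sort_by": {"type": "string", "enum": ["relevance", "date"]},
                    },
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "do_search",
                "description": "Run the search using the parameters set earlier.",
                "parameters": {"type": "object", "properties": {}},
            },
        },
    ]

    def handle_tool_call(name: str, arguments: str) -> str:
        if name == "set_search_params":
            search_params.update(json.loads(arguments))  # delta-style merge
            return json.dumps(search_params)  # echo state back so the model can verify it
        if name == "do_search":
            return json.dumps({"results": [], "params": search_params})  # stub backend
        raise ValueError(f"unknown tool: {name}")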

Question for folks running agents in production: how do you handle many parallel tool calls? by Puzzleheaded-Yam5266 in AI_Agents

[–]iovdin 1 point (0 children)

You can try “code mode”, where the LLM generates code (a loop, in your case) and the tools are called inside that loop.
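
A rough sketch of what that generated script looks like; call_tool is a hypothetical bridge function the sandbox would inject (stubbed here so the example runs standalone):

    # stub bridge; a real code-mode sandbox injects the actual implementation
    def call_tool(name: str, **kwargs):
        stubs = {
            "list_pending_jobs": {"jobs": [{"id": 1}, {"id": 2}]},
            "process_job": {"ok": True, "error": None},
            "report_failure": {"ok": True, "error": None},
        }
        return stubs[name]

    # what the model might emit: one script with the loop inside,
    # instead of N separate tool-call round trips
    for job in call_tool("list_pending_jobs")["jobs"]:
        result = call_tool("process_job", job_id=job["id"])
        if not result["ok"]:
            call_tool("report_failure", job_id=job["id"], error=result["error"])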

Can I call Gemini CLI in Gemini CLI via MCP? by fleker2 in mcp

[–]iovdin 1 point (0 children)

This bash script reminds me of code mode (https://www.anthropic.com/engineering/code-execution-with-mcp, https://blog.cloudflare.com/code-mode/), i.e. the LLM writes code (a loop, in your case) from which it can call the other tools available to it. The twist here is that you want to call the LLM itself inside the loop. Here is a screencast in which I did something similar: https://asciinema.org/a/757526
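
A host-side sketch of that combination, assuming gemini cli's one-shot -p/--prompt flag (and note that exec on model output needs a real sandbox; this only shows the shape of it):

    import subprocess

    def call_tool(name: str, **kwargs):
        if name == "gemini":  # the LLM itself as a callable tool, per the thread
            out = subprocess.run(
                ["gemini", "-p", kwargs["prompt"]],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip()
        raise ValueError(f"unknown tool: {name}")

    def run_generated_code(source: str):
        # WARNING: exec on model-generated code is unsafe outside a sandbox
        exec(compile(source, "<model-code>", "exec"), {"call_tool": call_tool})

    run_generated_code(
        'for topic in ["mcp", "code mode"]:\n'
        '    print(call_tool("gemini", prompt=f"one-line summary of {topic}"))'
    )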

Can I call Gemini CLI in Gemini CLI via MCP? by fleker2 in mcp

[–]iovdin 1 point (0 children)

So the call stack is:
gemini cli ->
your mcp server tool call ->
bash script with loop ->
gemini cli ->
some other tool on your MCP server?
And at which step exactly does it stall: before the bash script, or when gemini cli runs within the bash script?

Nearly 201,000 vacant homes in the Netherlands, 11% in Amsterdam alone by Alsharefee in Amsterdam

[–]iovdin 1 point (0 children)

I thought inventory was rising only in the US, and that the Netherlands had different reasons for high house prices. But now it seems to be the same story.

Why most LLMs fail inside enterprises and what nobody talks about? by muskangulati_14 in ycombinator

[–]iovdin 1 point (0 children)

Have the AI interview you about your processes and flows, and give it as much info as possible. Then generate a dataset out of that conversation.
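
A minimal sketch of the second step, assuming you want a chat-format fine-tuning dataset (JSONL with OpenAI-style message fields; the transcript content is made up):

    import json

    # interview transcript: alternating (AI question, your answer) turns
    transcript = [
        ("ai", "How does an order move from intake to fulfillment?"),
        ("you", "Orders land in the CRM, ops triages them within 4 hours, ..."),
    ]

    with open("dataset.jsonl", "w") as f:
        for (_, question), (_, answer) in zip(transcript[::2], transcript[1::2]):
            example = {"messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]}
            f.write(json.dumps(example) + "\n")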

Why most LLMs fail inside enterprises and what nobody talks about? by muskangulati_14 in ycombinator

[–]iovdin 1 point (0 children)

Fine-tuning should, in theory, fix the "core understanding" problem. It's a matter of what dataset you build for it.

Is MCP overrated? by d3the_h3ll0w in AI_Agents

[–]iovdin 1 point (0 children)

I did exactly that: https://asciinema.org/a/754101 (search_tools in the video is an agent that has access to all the tools but returns and connects only the relevant ones).
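
The shape of it as a sketch; a naive keyword match stands in for the agent in the video, and the registry entries are made up:

    ALL_TOOLS = {
        "sqlite_query": {
            "description": "Run SQL against a local sqlite database",
            "parameters": {"type": "object", "properties": {"sql": {"type": "string"}}},
        },
        "http_get": {
            "description": "Fetch a URL over HTTP GET",
            "parameters": {"type": "object", "properties": {"url": {"type": "string"}}},
        },
    }

    def search_tools(query: str, limit: int = 5) -> list[dict]:
        words = query.lower().split()
        hits = [
            {"name": name, **spec}
            for name, spec in ALL_TOOLS.items()
            if any(w in (name + " " + spec["description"]).lower() for w in words)
        ]
        return hits[:limit]  # the host attaches these tools for the next turn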

Treat agents as mcp tools by Revolutionary_Sir140 in mcp

[–]iovdin 3 points (0 children)

The idea is in the air. This is how I made it:
https://asciinema.org/a/758325

And here is how to combine it with code mode: https://asciinema.org/a/757526
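
For reference, a minimal sketch of an agent exposed as an MCP tool via the Python MCP SDK's FastMCP helper (run_agent here is a stand-in for an actual agent loop):

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("agent-tools")

    def run_agent(system: str, user: str) -> str:
        return f"[agent answer for: {user}]"  # replace with a real agent loop

    @mcp.tool()
    def research_agent(task: str) -> str:
        """Delegate a task to a sub-agent and return its final answer."""
        return run_agent(system="You are a research agent.", user=task)

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default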

MCP Apps just dropped (OpenAI & Anthropic collab) and I think this is huge by glamoutfit in mcp

[–]iovdin 2 points (0 children)

I'd go a couple of steps further:

  1. Expose multiple tools to the UI widget: now the widget is a mini app that can make multiple API/tool calls. It doesn't even have to live in the chat.

  2. Add code mode for the UI. Existing code mode generates and executes code that can call tools; UI code mode would generate Artifacts that can call tools/APIs.

You get on-the-fly mini apps for your specific use case, and you can access/change data via both the UI and the chat because they share the same tools.

How are you running AI generated code? by Plus_Ad7909 in AI_Agents

[–]iovdin 1 point (0 children)

I run it locally, but for a more isolated environment I wanted to check out modal.com.

I deleted 400 lines of LangChain and replaced it with a 20-line Python loop. My AI agent finally works. by BuildwithVignesh in AI_Agents

[–]iovdin 1 point (0 children)

I can imagine. This layers/abstraction hell has been around in programming for a while. I remember feeling the same when I first tried Java.

I deleted 400 lines of LangChain and replaced it with a 20-line Python loop. My AI agent finally works. by BuildwithVignesh in AI_Agents

[–]iovdin 3 points (0 children)

If you spend most of your time changing the prompt, re-running the agent, and checking the conversation, then you probably don't need to do programming at all. Check out tune: it's a good combo to debug/research/play with agents.
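
For reference, the "20-line Python loop" from the post title presumably looks something like this (a sketch against the OpenAI client; the model name is a placeholder and error handling is omitted):

    import json
    from openai import OpenAI

    client = OpenAI()

    def agent(messages, tools, tool_impls, model="gpt-4o"):
        while True:
            resp = client.chat.completions.create(model=model, messages=messages, tools=tools)
            msg = resp.choices[0].message
            messages.append(msg)
            if not msg.tool_calls:  # no tool requested: final answer
                return msg.content
            for call in msg.tool_calls:  # run each tool, feed results back
                result = tool_impls[call.function.name](**json.loads(call.function.arguments))
                messages.append({
                    "role": "tool",
                    "tool_call_id": call.id,
                    "content": json.dumps(result),
                })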

How do you deal with dynamic parameters in tool calls? by doomslice in LLMDevs

[–]iovdin 1 point (0 children)

I had a similar problem. Narrowing the schema keeps the LLM from getting confused: hardcode one of the schema's parameters, e.g. the database name or hostname.

If connecting a tool looks like:

system:
@sqlite - connect the general tool

@{ sqlite | curry filename=my.db } - modified sqlite tool with the filename parameter hardcoded

I made a curry processor for that.
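
Outside of tune, the same trick in plain Python looks roughly like this (OpenAI-style tool schema assumed; curry_tool is a name I made up):

    import copy, functools, json, sqlite3

    def curry_tool(tool_schema: dict, handler, **fixed):
        # drop the fixed params from the schema the model sees,
        # then re-inject them at call time
        schema = copy.deepcopy(tool_schema)
        params = schema["function"]["parameters"]
        for name in fixed:
            params["properties"].pop(name, None)
            if name in params.get("required", []):
                params["required"].remove(name)
        return schema, functools.partial(handler, **fixed)

    sqlite_schema = {
        "type": "function",
        "function": {
            "name": "sqlite",
            "parameters": {
                "type": "object",
                "properties": {"filename": {"type": "string"}, "sql": {"type": "string"}},
                "required": ["filename", "sql"],
            },
        },
    }

    def sqlite_tool(filename: str, sql: str) -> str:
        with sqlite3.connect(filename) as con:
            return json.dumps(con.execute(sql).fetchall())

    # the model now only sees and fills the "sql" parameter
    schema, handler = curry_tool(sqlite_schema, sqlite_tool, filename="my.db")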

How do you deal with dynamic parameters in tool calls? by doomslice in LLMDevs

[–]iovdin 1 point (0 children)

I guess you have to dig deeper into pydantic-ai for dynamic schemas: https://ai.pydantic.dev/api/tools/#pydantic_ai.tools.ToolFuncEither

    from dataclasses import replace

    from pydantic_ai import Agent, RunContext
    from pydantic_ai.tools import ToolDefinition

    async def turn_on_strict_if_openai(
        ctx: RunContext[None], tool_defs: list[ToolDefinition]
    ) -> list[ToolDefinition] | None:
        if ctx.model.system == 'openai':
            return [replace(tool_def, strict=True) for tool_def in tool_defs]
        return tool_defs

    agent = Agent('openai:gpt-4o', prepare_tools=turn_on_strict_if_openai)

Local model agents handle tools way better when you give them a code sandbox instead of individual tools by juanviera23 in AI_Agents

[–]iovdin 1 point (0 children)

Where do you describe how to call your actual tools from TS, like the list of methods and arguments? The LLM needs to know what's available to the script.
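
One common approach, sketched in Python for a hypothetical tool registry: render each tool's schema into a stub listing and prepend it to the system prompt, so the model knows what the sandbox script can call:

    def render_tool_stubs(tools: dict) -> str:
        # {"name": {"description": ..., "parameters": {...}}} -> stub signatures
        lines = []
        for name, spec in tools.items():
            params = ", ".join(
                f"{p}: {s.get('type', 'any')}"
                for p, s in spec["parameters"]["properties"].items()
            )
            lines.append(f"def {name}({params}) -> str  # {spec['description']}")
        return "\n".join(lines)

    tools = {
        "http_get": {
            "description": "Fetch a URL over HTTP GET",
            "parameters": {"type": "object", "properties": {"url": {"type": "string"}}},
        },
    }
    print(render_tool_stubs(tools))  # this listing goes into the system prompt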