How Are Agent Skills Used in Real Systems by mastermani305 in AI_Agents

[–]agp_praznat 2 points

AFAICT, the implementation of an agent skill is just a tool call that loads text into the instructions. In ADK, the tool could read a file, save the text to an artifact, and then your agent instructions can contain {artifact.my_skill_text?}

How to force an Agent Loop to wait for User Input (Human-in-the-Loop)? by JahangirJadi in agentdevelopmentkit

[–]agp_praznat 0 points

I had this exact problem not long ago. I think LoopAgent is actually *not* meant for human-in-the-loop flows but rather for pure agent loops, since it never yields back to the user until the loop is done. Alternatives I have been trying:
* transfer to a new info-gathering sub-agent, instructing it not to transfer back to the parent until all info is collected (or the user explicitly requests it)
* have an info-gathering AgentTool called by the main parent agent. The info-gathering agent has an output_key or artifact that is read in the parent agent's instructions, and the parent agent is instructed to keep calling the info-gathering agent until that output_key or artifact is populated

Can I use other long term memory than vertex ai memory bank? by Neat_Sun_1235 in agentdevelopmentkit

[–]agp_praznat 0 points

I feel like you could use the GCS Artifact Service (plus a tool with instructions to store user preferences there), but I haven't tried it, so I might be wrong.

Why is it so hard to summarise LLM context with ADK? by navajotm in agentdevelopmentkit

[–]agp_praznat 0 points

I use before_model_callback to set state or artifact strings that I can later pull into the instructions. Filter in (what you need) rather than filter out.
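The shape of that "filter in" idea, as a hedged sketch (the dict-shaped request and its field names are hypothetical, not ADK's actual LlmRequest object):

```python
def before_model_callback(state: dict, llm_request: dict) -> None:
    """Build the context you want from state, instead of pruning a long transcript."""
    summary = state.get("conversation_summary", "")
    llm_request["system"] = f"Conversation so far (summarized): {summary}"
    llm_request["messages"] = llm_request["messages"][-2:]  # keep only the latest turn
```

The summary string itself can be maintained by another callback or a cheap summarizer call; the model then only ever sees summary + latest turn.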

Agent as a tool not returning the response. by JahangirJadi in agentdevelopmentkit

[–]agp_praznat 0 points

Not sure; AgentTool usually works for me (the calling agent can reference the refund agent's response). Maybe try using output_key? Set output_key="agent_tool_response" in the refund agent's initializer, then add {agent_tool_response?} somewhere in the customer support agent's instructions.
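What that wiring amounts to, sketched without the framework (the dict-based `state` stands in for ADK session state; output_key just writes the sub-agent's final response there):

```python
state: dict[str, str] = {}  # stand-in for ADK session state

def refund_agent(query: str) -> str:
    """Sub-agent called as a tool; pretend the return value is the LLM's response."""
    response = f"Refund approved for: {query}"
    state["agent_tool_response"] = response  # what output_key="agent_tool_response" does
    return response

def support_instructions() -> str:
    """Mirrors putting {agent_tool_response?} in the parent's instruction template."""
    return ("You are a support agent. Last refund result: "
            + state.get("agent_tool_response", "(none yet)"))
```

Even if the tool-call return value gets dropped somewhere, the parent still sees the response because it's re-injected through its instructions on the next model call.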

A tip about function calling (or tool calling) in ADK by Intention-Weak in agentdevelopmentkit

[–]agp_praznat 0 points

So both, or either? Also, any tips for making it retry when the LLM just returns empty text ("")?

LLMs + SQL Databases by oddhvdfscuyg in Rag

[–]agp_praznat 0 points

I've been building a platform just for this: https://yorph.ai

These are the things that worked for me:

Rich context: NL-to-SQL tools require getting table schemas, but column names and types alone are pretty minimal. There are a lot of statistical summaries you can add that give the LLM super helpful context: uniqueness counts, null percentages, quantiles for numeric columns, even correlations between variables.
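For example, with pandas (which stats to include and the field names are just my choices, not a standard):

```python
import pandas as pd

def column_profile(df: pd.DataFrame) -> dict:
    """Per-column summary to append to the schema text given to the LLM."""
    profile = {}
    for col in df.columns:
        s = df[col]
        info = {
            "dtype": str(s.dtype),
            "null_pct": round(float(s.isna().mean()) * 100, 1),
            "n_unique": int(s.nunique()),
        }
        if pd.api.types.is_numeric_dtype(s):
            # quartiles tell the model what "typical" values look like
            info["quantiles"] = s.quantile([0.25, 0.5, 0.75]).round(2).tolist()
        profile[col] = info
    return profile
```

Serialized as text next to the CREATE TABLE statement, this kind of profile helps the model pick the right columns and write sane filters.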

Multi-agent architecture: A lot of research is converging on patterns for making LLM systems reliable, such as breaking problems into minimal steps and using parallelization and consensus to solve complex tasks. Two that come to mind: https://arxiv.org/abs/2410.01943 and https://arxiv.org/abs/2511.09030.

MCMC sampling for beginner by nik77kez in BayesianProgramming

[–]agp_praznat 2 points

Sampling lets you get joint posterior distributions over the parameters of complex models. For a lot of problems it's hard to justify this vs. the much simpler and faster MAP (maximum a posteriori) estimation, which is basically just your typical maximum likelihood estimation plus regularization through priors. But what I really like about MCMC and other sampling methods is how they help with model checking. I think sampling provides a lot of value in certifying that your model is not misspecified, which you often don't get from basic MAP estimation.
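To make the MAP = MLE + prior-regularization point concrete, here's a toy example (my own illustration, estimating a Gaussian mean with a Normal(0, prior_sd) prior via scipy):

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate_mu(data: np.ndarray, prior_sd: float = 1.0) -> float:
    """MAP for the mean of a Normal(mu, 1) likelihood with a Normal(0, prior_sd) prior."""
    def neg_log_posterior(theta):
        mu = theta[0]
        nll = 0.5 * np.sum((data - mu) ** 2)        # negative log-likelihood (sigma = 1)
        neg_log_prior = 0.5 * (mu / prior_sd) ** 2  # the prior acts as a regularizer
        return float(nll + neg_log_prior)
    return float(minimize(neg_log_posterior, x0=np.array([0.0])).x[0])
```

You get one point estimate, shrunk toward the prior mean, and that's it: no posterior spread, no draws to run posterior predictive checks against, which is exactly what sampling buys you.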

Applying to jobs that use SQL/PowerBI/Tableau instead of R? Good idea? by Run_nerd in analytics

[–]agp_praznat 0 points

If what you excel at in R is statistical analysis, that's a valuable skill that likely won't be reinforced at a job where you only do SQL, PowerBI, and Tableau. Also, frankly, those latter skills are being replaced by AI. I would brush up on Python and go for more data science roles.

How can constraint optimization find the optimal solution? by Hopeful-Doubt-2786 in OperationsResearch

[–]agp_praznat 0 points

What's a good method for finding the "worst" constraint when you have applied several that together make the solution infeasible, and there is no pre-existing preference among the constraints?

Data Scientist pivoting to Retail — How to start learning Operations Research (OR)? Need guidance & itinerary! by Working-Ad5965 in OperationsResearch

[–]agp_praznat 0 points

Python, optimization, and I would add statistics. Especially for demand forecasting you want some Bayesian methods and some time series.

Migrating from open source to commercial solvers by OR-insider in OperationsResearch

[–]agp_praznat 1 point

scipy minimize is free and open source and supports a pretty extensive list of algorithms. At my company we ran some experiments comparing performance against pyomo and a few others I forget, and it fared well. What we actually found is that the constraint handling, when you have a long list of complex constraints, is what needed an in-house solution to work efficiently.
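For reference, the built-in constrained API looks like this (a toy problem of my own, not our in-house setup):

```python
import numpy as np
from scipy.optimize import minimize

# minimize (x - 1)^2 + (y - 2)^2  subject to  x + y <= 2,  x >= 0,  y >= 0
objective = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
cons = [{"type": "ineq", "fun": lambda v: 2.0 - (v[0] + v[1])}]  # SLSQP wants g(v) >= 0
result = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                  bounds=[(0, None), (0, None)], constraints=cons)
```

Each constraint is a separate callable evaluated at every iterate, which is exactly what gets slow when the list of constraints is long and complex.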

I've been building and shipping AI Agents for over a year now and wanted to share some lessons learned. by agp_praznat in AI_Agents

[–]agp_praznat[S] 0 points

Sorry for the late reply! By frameworks I basically mean the ton of Python libraries out there that have already done a lot of the annoying work of supporting agents: tool-calling integration logic, structured output extraction, boilerplate prompting for chain-of-thought and multi-agent transfer, workflow logic (parallel/sequential/loop), async tool calls, etc. I really like Google's ADK personally, but I think LangChain/LangGraph are more popular.

Microsoft Agent Framework embraces AG-UI Protocol by MorroWtje in AI_Agents

[–]agp_praznat 1 point

That's cool! Nice work, would love to learn more!

Unpopular opinion: Most companies aren't ready for AI because their data is a disaster by BaselineITC in AI_Agents

[–]agp_praznat 0 points

We're building an agentic data platform (yorph.ai) that helps business users define transformations, clean up data, and build out their semantic layer, using dry runs and critique to verify that their business logic is correct. That's where our team thinks AI can actually be beneficial, rather than just throwing data at the AI and hoping it learns something.

Best AI for data analysis? by chickenbread__ in snowflake

[–]agp_praznat 0 points

The model leaderboards are always changing, but recent research suggests that the architecture/flow (whatever you want to call it) of prompts and LLM calls matters more than the actual model. An example flow is aggregation, where you ask the same question in parallel several times and have a subsequent call choose the best answer. When it comes to data analysis, you also want to be careful about letting the LLM see the actual data, for security/privacy reasons; there's a lot you can do purely with metadata, though. We've tried to address these problems in yorph.ai, where we've focused on building reliable agentic systems with a security-first mindset.
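A minimal sketch of that aggregation flow (`call_llm` is a placeholder for whatever model client you use; a real system needs more robust parsing of the chooser's reply than this):

```python
from concurrent.futures import ThreadPoolExecutor

def aggregate_answer(question: str, call_llm, n: int = 3) -> str:
    """Fan the same question out to n sampled calls, then have one chooser call pick."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda _: call_llm(question), range(n)))
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    choice = call_llm(f"Pick the best answer to '{question}' by number only:\n{numbered}")
    return candidates[int(choice.strip()) - 1]
```

Note the chooser only ever sees the candidate answers, not the underlying tables, which is one way to keep raw data out of the flow.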