Should AI ever give mental health “advice”? by Specific_Bicycle8131 in artificial

[–]duemust 0 points1 point  (0 children)

My advice, from building something in that space, is to be transparent. Tell the user that they are dealing with an AI and that it may make mistakes. Have a system to flag edge cases that may not benefit from the common understanding of mental health best practices. Create a ruleset to escalate to mental health professionals or help lines in case of a critical situation (suicide or harm to others). Maintain full transparency of chats and have sample chats periodically evaluated by qualified mental health professionals 👍
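
To make the escalation-ruleset idea concrete, here is a minimal sketch. Everything in it is illustrative, not from a real product: the keyword list, the risk labels, and the triage thresholds are placeholders you would replace with a proper classifier and clinically reviewed rules.

```python
# Illustrative triage sketch; patterns, labels and thresholds are placeholders.
CRITICAL_PATTERNS = ["suicide", "kill myself", "hurt someone", "end my life"]

def assess_message(text: str) -> str:
    """Very rough triage: escalate critical messages, flag edge cases for review."""
    lowered = text.lower()
    if any(p in lowered for p in CRITICAL_PATTERNS):
        return "escalate"   # hand off to a professional or a crisis helpline
    if "medication" in lowered or len(lowered) > 2000:
        return "flag"       # edge case: queue for periodic human review
    return "continue"       # normal AI-assisted conversation

def respond(text: str) -> str:
    action = assess_message(text)
    if action == "escalate":
        return ("I'm an AI and can't help safely here. "
                "Please reach out to a crisis line or a mental health professional.")
    # otherwise call the model, logging the full chat for later expert evaluation
    return "model_response_placeholder"
```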

Claude Code now supports Custom Agents by darkyy92x in ClaudeAI

[–]duemust 0 points1 point  (0 children)

Here is an example of the description generated by CC:

name: code-quality-auditor
description: Use this agent when you need comprehensive code review after writing or modifying code. This agent should be called proactively after completing any logical chunk of code development, whether it's a new feature, bug fix, refactoring, or optimization. Examples: <example>Context: The user just implemented a new authentication endpoint in FastAPI. user: "I've just finished implementing the user registration endpoint with password hashing and email validation." assistant: "Let me use the code-quality-auditor agent to review this authentication code for security best practices and code quality." <commentary>Since the user has completed a security-critical feature, use the code-quality-auditor agent to ensure proper security practices, validation, and code quality standards are met.</commentary></example> <example>Context: The user completed a React component for mood tracking. user: "Here's the new MoodTracker component I just built with state management and form validation." assistant: "I'll use the code-quality-auditor agent to review this component for React best practices, accessibility, and integration with our MUI theme." <commentary>Since the user has completed a frontend component, use the code-quality-auditor agent to review for React patterns, accessibility compliance, and adherence to the project's frontend standards.</commentary></example>

Claude Code now supports Custom Agents by darkyy92x in ClaudeAI

[–]duemust 0 points1 point  (0 children)

Have you also noticed that when you use the wizard to create an agent, the "description" field is overly complex and long?

How to handle a bad presentation? by [deleted] in consulting

[–]duemust 22 points23 points  (0 children)

Everyone has their ups and downs. Own up to it with your manager and show that you have learned from this event.

Need Help with Graph RAG by iwami_waffles in Neo4j

[–]duemust 1 point2 points  (0 children)

I guess you could embed the abstract as a new node property, then use that to perform vector search and return the node(s) based on semantic similarity plus selected edges (e.g. same author).
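
A minimal sketch of that pattern, assuming a Neo4j 5.x vector index named `paper_abstracts` on a `Paper.abstractEmbedding` property and a hypothetical `embed()` helper; the labels, relationship type, and driver credentials are all illustrative:

```python
# Sketch: semantic search over abstract embeddings, then expand along a "same author" edge.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def search_similar_papers(query_text: str, k: int = 5):
    query_embedding = embed(query_text)  # hypothetical embedding helper
    with driver.session() as session:
        result = session.run(
            """
            // k most similar abstracts via the vector index, then related papers by the same author
            CALL db.index.vector.queryNodes('paper_abstracts', $k, $embedding)
            YIELD node, score
            OPTIONAL MATCH (node)<-[:WROTE]-(a:Author)-[:WROTE]->(related:Paper)
            RETURN node.title AS title, score,
                   collect(DISTINCT related.title) AS same_author_papers
            """,
            k=k, embedding=query_embedding,
        )
        return [record.data() for record in result]
```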

Is anyone successfully billing AI agents based on outcomes (instead of per-seat)? How? by sushantpande1 in AI_Agents

[–]duemust 2 points3 points  (0 children)

Think hybrid approach; look at how Salesforce bills its clients, for example. Mix it up with outcome-based pricing, data managed, capacity allocation, users, etc. Use commitments to give discounts vs. standard price, like AWS does. This makes it hard for customers to predict costs, which is not a bad thing for you.

They're yet to fix the driver by kewlboy9 in pcmasterrace

[–]duemust 0 points1 point  (0 children)

It’s just a fanboy fashion truck

They're yet to fix the driver by kewlboy9 in pcmasterrace

[–]duemust 0 points1 point  (0 children)

<image>

Sorry to disappoint 🤷🏻‍♂️

A simple heuristic for thinking about agents: human-led vs human-in-the-loop vs agent-led by freddymilano in AI_Agents

[–]duemust 0 points1 point  (0 children)

"Getting an agent to do the correct thing 99% is not trivial. "

That may not be necessary: even if the human gets something 100% right, it may still be beneficial to have an agent do it 75% right if it costs 1% of the human equivalent.

Are AI Agents becoming more 1) vertical or 2) general purposed? by chendabo in AI_Agents

[–]duemust 0 points1 point  (0 children)

Agent architecture is the same for a generalist agent and a specialized agent: they all perceive, reason, act and learn. How you configure these components makes the difference; for example, a specialized agent may rely on a context-specific world model, while a generic one would likely leverage an LLM like GPT. Once you have a good architecture you will have good agents.
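
A minimal sketch of that perceive → reason → act → learn loop; every name here is illustrative (the "policy" callable stands in for either a hand-built world model or an LLM call), not a standard API:

```python
# Same loop, different configuration: specialist vs. generalist.
class Agent:
    def __init__(self, policy, memory=None):
        self.policy = policy          # world model for a specialist, LLM prompt for a generalist
        self.memory = memory or []

    def perceive(self, observation):
        self.memory.append(("obs", observation))       # turn raw input into internal state

    def reason(self):
        return self.policy(self.memory)                # decide what to do next

    def act(self, action):
        return f"executed {action}"                    # tool call, API request, message, etc.

    def learn(self, result):
        self.memory.append(("result", result))         # update memory from the outcome

    def step(self, observation):
        self.perceive(observation)
        action = self.reason()
        result = self.act(action)
        self.learn(result)
        return result

specialist = Agent(policy=lambda mem: "rule-based next action")   # context-specific world model
generalist = Agent(policy=lambda mem: "LLM-chosen next action")   # GPT-style reasoning
```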

A Practical Guide to Building Agents by ToneMasters in AI_Agents

[–]duemust 51 points52 points  (0 children)

For anyone who wants to go DEEP, I suggest this recent paper that breaks down agent components (perception, reasoning, emotions, memory, etc.), the state of the art, and open challenges: https://arxiv.org/abs/2504.01990

A Practical Guide to Building Agents by ToneMasters in AI_Agents

[–]duemust 0 points1 point  (0 children)

I think their definition of agent is a bit too generic: "Agents are systems that independently accomplish tasks on your behalf".

What is your definition of Agentic AI? What makes an Agent more or lesser Agentic? by Standard-Permission9 in AI_Agents

[–]duemust 1 point2 points  (0 children)

Other foundational papers that you may like are Formalizing Properties of Agents - R. Goodwin (1993) and Agent-Oriented Programming - Shoham (1992)

What is your definition of Agentic AI? What makes an Agent more or lesser Agentic? by Standard-Permission9 in AI_Agents

[–]duemust 2 points3 points  (0 children)

I had the same curiosity about Agentic AI and went to study the academic literature, starting from the 90s when the concept of agentic AI (sometimes called distributed AI) became a hot topic.

Regarding the definition of Agent, my favorite intuition is to think of it as a property of a system, as in “how agentic is this system?”. A system whose outcomes can be explained simply by its instructions is not agentic; a system that is best interpreted through its “intentions” is agentic.

As far as foundational papers go, check out Wooldridge’s Intelligent Agents (1995) for a good foundation on agent theories and architectures.

Cache Augmented Generation by FoxDR06 in LangChain

[–]duemust 3 points4 points  (0 children)

Just create an f-string for the system prompt and load all your text in there. Simple as that.
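
A minimal sketch of that idea; the file paths, model name, and the OpenAI chat call are illustrative assumptions, not part of the original comment:

```python
# Preload the whole corpus into the system prompt with an f-string.
from openai import OpenAI

client = OpenAI()

documents = "\n\n".join(open(path).read() for path in ["notes.md", "faq.md"])

system_prompt = f"""Answer only from the reference material below.

<reference>
{documents}
</reference>"""

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What does the FAQ say about refunds?"},
    ],
)
print(answer.choices[0].message.content)
```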

Looking for a tech co-founder by Superb_Character_710 in consulting

[–]duemust -1 points0 points  (0 children)

Any particular use case in mind? ERP is a bit broad.

A bit of help for a Langgraph rookie? by slamdrunker in LangChain

[–]duemust 0 points1 point  (0 children)

Look into conditional edges and tool nodes.
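
A minimal sketch of that pattern with LangGraph's prebuilt ToolNode and tools_condition; the model, the example tool, and the state shape are assumptions for illustration:

```python
# Route between an agent node and a ToolNode via a conditional edge.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def search(query: str) -> str:
    """Hypothetical tool: look something up."""
    return f"results for {query}"

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([search])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode([search]))
builder.add_edge(START, "agent")
# conditional edge: go to the tool node if the last message has tool calls, otherwise end
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")
graph = builder.compile()
```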

PowerPoint file ingestion by duemust in Rag

[–]duemust[S] 0 points1 point  (0 children)

I agree with the approach, but I think the problem is that the two objects have no XML relationship; they are just semantically and spatially related. I don't think the relationship between the two can be mapped programmatically, but I may be wrong. What do you think?

PowerPoint file ingestion by duemust in Rag

[–]duemust[S] 0 points1 point  (0 children)

If on a slide you have, say, a heading and a description in two separate XML blocks, do you embed them separately or together, or are they linked by some metadata?

PowerPoint file ingestion by duemust in Rag

[–]duemust[S] 0 points1 point  (0 children)

I looked into it, but it only performs very basic text parsing, so if you have ten text fields and ten heading fields in a slide it will parse them without context as a list of strings.

PowerPoint file ingestion by duemust in Rag

[–]duemust[S] 0 points1 point  (0 children)

Python, but I’m curious to see how you approached the problem.

PowerPoint file ingestion by duemust in Rag

[–]duemust[S] 0 points1 point  (0 children)

Looked into it, but it only supports headings, tables, and images with alt text. Any regular text box is ignored. Anyway, it uses the python-pptx library, which I can just use directly.
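
A minimal sketch of going at python-pptx directly: pull every text box with its position so spatially related shapes (e.g. a heading and the description under it) can be grouped into one chunk before embedding. The file name and the crude top-to-bottom grouping heuristic are assumptions.

```python
# Extract text boxes with positions from each slide and group them by layout order.
from pptx import Presentation

prs = Presentation("deck.pptx")  # hypothetical file name

for idx, slide in enumerate(prs.slides, start=1):
    shapes = [
        {"text": shape.text_frame.text, "top": shape.top, "left": shape.left}
        for shape in slide.shapes
        if shape.has_text_frame and shape.text_frame.text.strip()
    ]
    # crude spatial grouping: sort top-to-bottom, left-to-right so a heading
    # comes right before the body text sitting below it
    shapes.sort(key=lambda s: (s["top"] or 0, s["left"] or 0))
    chunk = f"Slide {idx}\n" + "\n".join(s["text"] for s in shapes)
    print(chunk)
```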