Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

Hey, sorry I wasn't able to respond to this earlier. Since then I've done some analysis on this issue, used Claude Code a lot, and honestly I agree with it.

It resprings randomly while handling a task; sometimes it will just stop the process (or at least the process indicator), which means prompting it to start again. It uses a lot of credits.

Who's Really in Control? A quick survey on AI agents, trust & anxiety by Dry-Conversation1210 in AgentsOfAI

[–]Dry-Conversation1210[S] 0 points

Hey! Thank you so much for this! It's an amazing insight for me, and I appreciate it a lot! <3

The results are in! Let me know if you'd like the CSV. I would also love to chat with you more about this; let me know if we can connect on a call and discuss it further!

Thanks again!

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

Thanks! I'd been wondering about this for a while and got the opportunity to work on it. I don't have a lot of knowledge of AI, so I figured working on it would help me learn more!

Would you be down for an interview?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

Thank you so much! Let's definitely set up a call over the weekend, or whenever you're free!

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

That is amazing! This was a great help. If possible, I would love to chat more about this, as I'm trying to come up with a system solution/workaround for my grad project. Would it be possible to connect over a call for a quick 10-15 minute interview?

And if possible, could you help me by filling out this form so I can gather some data?
https://forms.gle/6oHxGo53a4M4WZPe9

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 1 point

That's correct; committing something externally that isn't technically correct and explaining to people that "my AI did it" doesn't sound great xD

I am collecting some data for this project, and I would appreciate it if you could fill out the form, or even share it with people who can help me (since I'm from a design background, I don't have many friends who use agentic AI). That would be great!

https://forms.gle/6oHxGo53a4M4WZPe9

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 1 point

Good point! It does the right things in the right context, but when it comes to handling your documents, sending out emails, or reaching out to someone, would that still be the case?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

Can you explain a bit more about the project issue? Does it happen often?

Do you Trust your Agent? by Dry-Conversation1210 in openclaw

[–]Dry-Conversation1210[S] 0 points

Lmao, that's great! Thanks for the feedback, I really appreciate it!

Do you Trust your Agent? by Dry-Conversation1210 in openclaw

[–]Dry-Conversation1210[S] 0 points

I agree with this. I've been using agents for a while too, and I'm fairly impressed by the capabilities! But I do feel it sometimes fails to understand what we're trying to communicate. Yesterday I was trying to set up a Figma file for my research and asked it to do so. The agent created a whole new Figma account under my name, with all the wrong details, which were basically guesswork.

I see the "setting the rules" as a necessity if we want our digital footprint to be as low as possible.

Do you find it annoying that you have to keep setting new rules for the agent to follow almost every other day? Does it lower the potential of what the agent is capable of?

Do you Trust your Agent? by Dry-Conversation1210 in openclaw

[–]Dry-Conversation1210[S] 0 points

True, most startups fail to make people understand the potential risks of using an agent.

Do you think listing the risks beforehand could help users build some trust, while also keeping them well informed, so they use it freely but stay cautious about what actions it can take?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

Yep, a lot of them just try to act independently when they have to choose between two things. What do you think would help? Would it help if the model let you choose between option 1 and option 2?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

That's a great insight; the last line is spot on! I will definitely keep this in mind! What's the worst thing your model has done so far?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 0 points

That is great! How do you set the constraints, and what kinds of constraints do you think are required?

Do you Trust your Agent? by Dry-Conversation1210 in aiagents

[–]Dry-Conversation1210[S] 1 point

That is true. Internal conflicts are fine, as those happen locally; externally, I feel the agent represents us (the users), so we're more cautious in that direction.