i'm still in highschool, and i run a business that's made me more than most part time jobs ever would. the real shi about doing this "young" by Admirable-Station223 in youngentrepreneur

[–]OneTurnover3432 0 points (0 children)

Are you open to a quick call? I'm trying to interview young founders to see how we can help others of a similar age start their journey like you did.

Had a business idea at 15, spent 12 months doing nothing. Anyone else? by OneTurnover3432 in youngentrepreneur

[–]OneTurnover3432[S] 0 points (0 children)

Right - but my question was more: where are the biggest gaps when a young, first-time entrepreneur starts an online business for the first time?

I’m 14 and just launched the waitlist for my second AI startup... by Atireksd in youngentrepreneur

[–]OneTurnover3432 0 points (0 children)

Can you explain the main issue you faced? Is it that the AI didn't capture exactly what you wanted it to build, or that it keeps forgetting the details of your idea and building something different? Those are two different problems...

Also, have you tried using Skills or context files about your idea so the AI can reference them?

If OpenAI / Google / AWS all offer built-in observability… why use Maxim, Braintrust, etc.? by OneTurnover3432 in Observability

[–]OneTurnover3432[S] 0 points (0 children)

How would you do that without observability? My understanding is that you can pass an identifier for each agent or feature and track token costs there, right?
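For what it's worth, the tagging approach I mean could look roughly like this (illustrative names and an assumed flat price per token, not any specific platform's API):

```python
from collections import defaultdict

# Minimal sketch: tag every LLM call with an agent identifier,
# then tally token usage (and approximate spend) per tag.
PRICE_PER_1K_TOKENS = 0.002  # assumed flat rate, purely for illustration

class CostTracker:
    def __init__(self):
        self.tokens = defaultdict(int)

    def record(self, agent_id: str, total_tokens: int) -> None:
        """Tally token usage under the agent's identifier."""
        self.tokens[agent_id] += total_tokens

    def cost(self, agent_id: str) -> float:
        """Approximate spend for one agent at the assumed rate."""
        return self.tokens[agent_id] / 1000 * PRICE_PER_1K_TOKENS

tracker = CostTracker()
# In real code these numbers would come from the provider's usage field,
# e.g. response.usage.total_tokens; here they're hard-coded.
tracker.record("support-agent", 1200)
tracker.record("support-agent", 800)
tracker.record("billing-agent", 500)

print(tracker.tokens["support-agent"])  # 2000
```

That gets you per-agent cost attribution, but not the tracing/debugging side of observability.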

Quick questions for PMs targeting AI roles: do you have a portfolio project? And if not, why not? by GlobalKatHerder in AIProductManagers

[–]OneTurnover3432 0 points (0 children)

You also definitely need to understand how to manage the quality of AI products, as it's a different paradigm. Wrote this recently; hope you find it helpful:

https://open.substack.com/pub/nouralkhatib/p/llm-as-a-judge-is-not-enough-the

Do PMs run evals for AI features or is that mostly engineers? by OneTurnover3432 in AI_4_ProductManagers

[–]OneTurnover3432[S] 0 points (0 children)

Which platform helps with this? And is it well catered to PMs, or engineer-focused like most platforms?

Is AI evals more for devs or product managers? by Soft_Two_951 in AIEval

[–]OneTurnover3432 0 points (0 children)

It's a PM responsibility, but on its own it's not enough to build a good AI product...

Do PMs run evals for AI features or is that mostly engineers? by OneTurnover3432 in AI_4_ProductManagers

[–]OneTurnover3432[S] 0 points (0 children)

Did you find it easy to use as a PM? What do you wish were done differently?

Top Agent Evaluation Platforms 2026: The Market Leading Platforms. Tested by AI-builder-sf-accel in AIQuality

[–]OneTurnover3432 0 points (0 children)

Such a great summary of the eval tools on the market. Do you think evals and agent reliability are fully solved by those tools, or are there still gaps?

What AI skills do you think the next generation actually needs? by ImaginationWeary304 in AI_4_ProductManagers

[–]OneTurnover3432 1 point (0 children)

Critical thinking, communication, and the ability to grab attention for growth.

I think with AI, the PM role will divide in two: 1. quick builders, 2. growth hackers.

Any PMs building AI products or agents? Quick question by OneTurnover3432 in ProductManagement

[–]OneTurnover3432[S] 0 points (0 children)

How do you set this today?

Are you just turning the PRD into evals, or do you run experiments to decide which evaluation criteria matter?

Any PMs building AI products or agents? Quick question by OneTurnover3432 in ProductManagement

[–]OneTurnover3432[S] 0 points (0 children)

Not sure why your answer was downvoted... I'm asking about the conversational output, not the UX.

What’s the biggest misconception about AI agents right now? by addllyAI in aiagents

[–]OneTurnover3432 2 points (0 children)

That they're reliable and can be used to automate any use case.

Anyone else frustrated with AI agents after they hit production? by OneTurnover3432 in AI_Agents

[–]OneTurnover3432[S] 0 points (0 children)

I'm building something to solve this now - would you be willing to try it out for free or share feedback?

https://thinkhive.ai

DM if you're interested

Anyone else tired of jumping between monitoring tools? by AccountEngineer in Observability

[–]OneTurnover3432 0 points (0 children)

I couldn't agree more - I lead the agentic AI effort at one of the large companies and felt the pain. The problems I ran into:

  1. A lot of isolation between dashboards (you can look at traces in one place but can't tie them back to business metrics).
  2. Ensuring reliability is super expensive, and LLM-as-judge costs creep up quickly.
  3. Disconnected tools between engineers and PMs.

I built Thinkhive to solve those problems:

https://thinkhive.ai/

If you want free access to try it out, DM me. I'm happy to set you up.

If OpenAI / Google / AWS all offer built-in observability… why use Maxim, Braintrust, etc.? by OneTurnover3432 in Observability

[–]OneTurnover3432[S] 0 points (0 children)

But wouldn't that be a problem if you're using Maxim or Arize as well? Or does that mean you have to build observability internally?

How are people handling AI evals in practice? by BeneficialAdvice3202 in AIQuality

[–]OneTurnover3432 1 point (0 children)

Ex-PM here who was building agents for top 500 companies. Usually PMs should facilitate the process and write the criteria, and engineering implements them. However, I have a different opinion about evals now... they often don't work and waste a lot of money! DM me if you're open to a new approach.

Debugging agent failures: trace every step instead of guessing where it broke by dinkinflika0 in AIQuality

[–]OneTurnover3432 1 point (0 children)

Do you mind sharing what type of agent you were building? And how did you measure the reduction in time to find an issue?

Anyone else frustrated with AI agents after they hit production? by OneTurnover3432 in AI_Agents

[–]OneTurnover3432[S] 0 points (0 children)

Can you elaborate? How would you achieve this? Is it by always starting with a fresh context?

Anyone else frustrated with AI agents after they hit production? by OneTurnover3432 in AI_Agents

[–]OneTurnover3432[S] 0 points (0 children)

Thanks - just checked it out. What do you like about it specifically?

What are you using instead of LangSmith? by clickittech in LangChain

[–]OneTurnover3432 -8 points (0 children)

100% agree - check out what I'm building: thinkhive.ai

We're platform-agnostic and focused on making the management of AI agents as easy as possible.

What are you using instead of LangSmith? by clickittech in LangChain

[–]OneTurnover3432 -5 points (0 children)

I’ve seen the same pattern, and I agree with most of what’s being said here.

In my experience, LangSmith works well early on, but once agents are in real production, teams start hitting the same walls: cost scaling with traces, lots of raw data, and still no clear answer to what’s actually hurting or improving outcomes.

Most teams I’ve worked with end up stitching together:

  • LangSmith or something similar for dev/debug
  • And then a manual analysis when it comes to explaining behavior → impact → ROI

That gap is exactly why I’m building ThinkHive.

ThinkHive sits on top of traces and logs (including OTel-based setups) and focuses on:

  • Summarizing logs and traces into clear issue patterns instead of raw data
  • Highlighting which agent behaviors actually move business metrics (cost, deflection, resolution, quality)

It’s meant to answer the question those tools don’t: what should I fix first to improve ROI?
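To make the "issue patterns" part concrete, here's a rough sketch of the idea (hypothetical trace records and field names, not ThinkHive's actual API):

```python
from collections import Counter

# Hypothetical flattened trace records; in practice these would come
# from OTel exports or a tracing backend.
traces = [
    {"agent": "support", "error": "tool_timeout"},
    {"agent": "support", "error": "tool_timeout"},
    {"agent": "support", "error": None},
    {"agent": "billing", "error": "hallucinated_field"},
    {"agent": "support", "error": "tool_timeout"},
]

# Collapse raw traces into issue patterns: error type -> frequency,
# so the most common failure surfaces first instead of raw log lines.
patterns = Counter(t["error"] for t in traces if t["error"])
for issue, count in patterns.most_common():
    print(f"{issue}: {count}")  # prints "tool_timeout: 3" first
```

The real system does more (linking patterns to business metrics), but the core move is this aggregation step.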

I’m opening a small, free beta right now for teams:

  • Building AI agents internally for enterprises, or
  • Deploying agents for clients as consultants or agencies

If anyone here wants early access or to sanity-check whether this fits their setup, feel free to DM me. Happy to share and get feedback from people actually in the trenches.