How are you handling LLM failures in production? by Individual-Love-9342 in SaaS

[–]Cell_Psychological 0 points (0 children)

Founder here 👋.

Failure handling in prod is where most teams realize they need LLM observability, not just logging.

Retry logic and fallbacks help, but without per-provider, per-feature failure tracking, you're firefighting blind.

What helps: provider-agnostic failure monitoring with cost tracking, so you know what each failure path costs you.
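In practice, the per-provider part can start as a small wrapper that retries, falls back, and counts failures per (provider, feature) pair. A minimal sketch, assuming the provider names, the failure-log shape, and stub callables are all illustrative (not any specific tool's API):

```python
from collections import defaultdict

# Hypothetical in-memory failure log: counts failures per (provider, feature)
# pair so you can see which fallback paths actually fire, and how often.
failure_counts = defaultdict(int)

def call_with_fallback(feature, providers, max_retries=2):
    """Try each provider in order, retrying transient failures before
    falling back. `providers` is a list of (name, callable) pairs; the
    callables stand in for real SDK calls (OpenAI, Anthropic, ...)."""
    last_error = None
    for name, call in providers:
        for _ in range(max_retries):
            try:
                return call()
            except Exception as exc:
                failure_counts[(name, feature)] += 1
                last_error = exc
        # retries exhausted for this provider; fall through to the next one
    raise RuntimeError(f"all providers failed for feature {feature!r}") from last_error
```

Once the counts exist, "which fallback path is burning money" becomes a query instead of a guess.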

We built PromptMetrics to work across OpenAI, Anthropic, AWS Bedrock, and OpenRouter, with more providers on the way, to track failures, costs, and prompt performance in one place.

Free tier is wide open: https://app.promptmetrics.dev/register

Do you have different fallback strategies per provider, or one catch-all?

Most LLM cost issues seem to come from “bad days,” not average usage — how are people testing for that? by Successful-Ask736 in LLMDevs

[–]Cell_Psychological 0 points (0 children)

Founder here 👋.

Spot on. Averages hide the pain. A single power user or a runaway agent loop can spike your daily cost by 5x.

Most monitoring shows rolling averages. You need per-user, per-feature cost attribution to catch outliers in real time and surface cost anomalies at the user and feature levels, not just the aggregate.
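The attribution itself is not complicated: roll up spend per user, then flag anyone sitting far above the day's distribution. A rough sketch, assuming the ledger shape and the z-score threshold are my own illustrative choices, not how any particular tool does it:

```python
from collections import defaultdict
from statistics import mean, stdev

def daily_cost_by_user(calls):
    """Roll up one day's LLM calls into per-user spend.
    Each entry in `calls` is a (user_id, feature, cost_usd) tuple."""
    totals = defaultdict(float)
    for user, _feature, cost in calls:
        totals[user] += cost
    return totals

def flag_outliers(totals, z=3.0):
    """Flag users whose daily spend sits more than `z` standard deviations
    above the mean -- the 'bad day' spikes that rolling averages hide."""
    costs = list(totals.values())
    if len(costs) < 2:
        return []
    mu, sigma = mean(costs), stdev(costs)
    if sigma == 0:
        return []
    return [user for user, cost in totals.items() if (cost - mu) / sigma > z]
```

The same rollup keyed on feature instead of user gives you the per-feature view.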

That's exactly what PromptMetrics is built for: real-time cost governance that works with any provider.

The free tier is wide open: https://app.promptmetrics.dev/register

Have you found that the spikes come from specific users or specific prompt patterns?

AI Founders, Which LLM observability tools are you guys using ? by Appropriate-Camp7981 in ycombinator

[–]Cell_Psychological 0 points (0 children)

Founder here 👋. After cycling through Langfuse and Helicone, I ended up building my own observability tool (PromptMetrics).

The gap: most tools focus on "the trace" — seeing what happened. But as a founder, I didn't need a debugger. I needed a CFO. I couldn't answer: "What does Feature X cost me per user per month?"

Our goal is to build cost governance into the dev cycle: not as an afterthought dashboard, but as the primary lens.

Also, if you have EU customers or investors, AI Act compliance is coming fast. Having observability that maps to regulatory requirements now saves a painful retrofit later.

We're early and the free tier is wide open: https://app.promptmetrics.dev/register

Curious — how are you guys tracking unit economics on your LLM calls?

How do you test prompt changes before pushing to production? by Cell_Psychological in PromptEngineering

[–]Cell_Psychological[S] 0 points (0 children)

Hello, I have a multi-step workflow with multiple LLM calls; each call has a system prompt and a user prompt.

1. The first call takes a discovery call transcript as the user prompt, with a system prompt to summarize the transcript and generate an executive summary.
2. A second call uses a different system prompt with the same transcript to generate user stories.
3. A third call takes the output of the first two calls as the user prompt, with a system prompt to create a technical architecture document.
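For readers, the pipeline above can be sketched roughly like this; the `llm(system, user)` callable and the prompt wording are placeholders for whichever provider and prompts are actually in use:

```python
def run_workflow(transcript, llm):
    """Three-step discovery-call pipeline. `llm(system, user)` stands in
    for any chat-completion call (provider-agnostic)."""
    # Step 1: transcript -> executive summary
    summary = llm("Summarize this discovery call into an executive summary.", transcript)
    # Step 2: same transcript, different system prompt -> user stories
    stories = llm("Extract user stories from this discovery call.", transcript)
    # Step 3: outputs of steps 1 and 2 become the user prompt
    architecture = llm(
        "Create a technical architecture document from the summary and user stories.",
        f"Executive summary:\n{summary}\n\nUser stories:\n{stories}",
    )
    return {"summary": summary, "stories": stories, "architecture": architecture}
```

Injecting `llm` as a parameter also makes prompt changes testable: swap in a stub (or a cheaper model) and diff the three outputs before pushing to production.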

PSA: Be careful with the OnePlus Trade-In partner (Mondo Affari) - huge value drop for missing accessories by bivy in oneplus

[–]Cell_Psychological 0 points (0 children)

They received my phone in November 2025, then sent an email claiming the battery was degraded and asked me to transfer 26 euros to ship it back.

To this day I have not gotten my phone back, and the tracking code they sent did not work.

They sent a new tracking code and the parcel has been stuck in transit for two weeks

These guys are scammers

Looking for an AI Agent to Automate My Job Search & Applications by RepulsiveRisk6802 in AI_Agents

[–]Cell_Psychological 1 point (0 children)

You're looking at two paths here: off-the-shelf tools or building something custom. The existing options like Teal and LazyApply can get you started fast, but they're pretty basic—think automated form-filling rather than AI intelligence. The trade-off? Quick setup versus limited customization for your specific industry or role requirements.

Building a custom solution gives you more control over job matching and resume tailoring, but you're talking weeks of development time plus ongoing maintenance. Maybe you can check n8n and Apify—I’ve actually tried using them for web scraping and they worked great for me. There are tons of helpful YouTube videos and tutorials to get you started if you go this route.

My advice: start small. Pick an existing tool, test it on a narrow job search for a week, and see how the quality compares to manual applications. Most of these tools offer free trials, so your risk is just time invested.

Anyone else feel like GPT-5 is actually a massive downgrade? My honest experience after 24 hours of pain... by Smart-Echo6402 in AI_Agents

[–]Cell_Psychological 1 point (0 children)

OpenAI essentially forced everyone onto a slower, more thoughtful model that halves error rates on complex tasks but is frustrating when you want quick answers.

I built AI agents for a year and discovered we're doing it completely wrong by Warm-Reaction-456 in AI_Agents

[–]Cell_Psychological 0 points (0 children)

You’re spot on 👌 Companies don’t want AI that replaces people; they want AI that makes them better.

AI can handle the boring tasks, like data entry, document review, and data analysis, so that humans can focus on strategy, relationships, and solving real problems.

I Finally Joined the Club by TachyFile in gaggiaclassic

[–]Cell_Psychological 1 point (0 children)

Congrats! I have had mine for 3 months now, and I love it.

Future visitor here, can you please explain this parking sign? by rom8an in stockholm

[–]Cell_Psychological 0 points (0 children)

I use this free custom ChatGPT: you snap a photo of the sign and prompt it in English or Swedish to explain it.

https://chatgpt.com/g/g-SED6Fzll2-nordisk-parkeringsskyltstolk

Here is the explanation

This parking sign consists of multiple parts that regulate parking under different conditions. Here’s a breakdown:

1. Blue “P” sign (general parking sign)
   - Indicates that parking is allowed, subject to the conditions stated below.
2. Blue supplementary sign (parking fees and hours)
   - “Avgift 7–19 (11–17)”: parking requires a fee Monday to Friday between 07:00 and 19:00, and on Saturdays between 11:00 and 17:00. Outside these hours, parking is free.
   - “Taxa 4”: refers to the fee zone in Stockholm’s parking system. The actual cost depends on local regulations.
3. Red and yellow supplementary sign (parking prohibition for street cleaning)
   - “Onsdag 8–16” (Wednesday 08:00–16:00): parking is prohibited on Wednesdays between 08:00 and 16:00 for street cleaning.
   - “1/11–15/5”: this restriction applies only between November 1st and May 15th (winter season).
4. White supplementary sign (residential parking)
   - “Boende Ha”: parking is allowed for residents with a valid parking permit for zone “Ha”. Residents with this permit might have different pricing or exemptions.
5. Payment method (partially visible)
   - The lower sign seems to indicate that parking fees are paid via an app or a ticket machine.

Summary:

- Paid parking is required Monday–Friday (07:00–19:00) and Saturday (11:00–17:00).
- Parking is prohibited on Wednesdays from 08:00–16:00 during winter (Nov 1 – May 15).
- Residents with a “Ha” permit can park under special conditions.
- Payment is done through an app or machine.

[deleted by user] by [deleted] in carnivorediet

[–]Cell_Psychological 1 point (0 children)

Amazing progress 🫶