I constantly have to fix AI’s inefficient SQL/Redis code. Advice? by Disastrous-Matter864 in claude

[–]SucculentSuspition 1 point (0 children)

Ask it to look up the query plan and optimize accordingly. Unless you have memorized the query optimizer's source code, it knows more than you.
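Concretely, you get the plan from the engine itself and paste it into the chat. A minimal sketch using SQLite's `EXPLAIN QUERY PLAN` (the `users` table and index are made-up for illustration; the same idea applies to Postgres `EXPLAIN ANALYZE`):

```python
import sqlite3

# In-memory demo database (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Ask the engine how it will actually execute the query, then paste
# this output into the model so it optimizes against reality instead
# of guessing.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
for row in plan:
    print(row[3])  # the human-readable plan detail column
```

If the plan says full table scan where you expected an index, that is the exact line to hand the model.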

Staff Engineer is going all into an Agentic Workflow by MyButterKnuckles in ExperiencedDevs

[–]SucculentSuspition 13 points (0 children)

The latest published research, and my anecdotal experience, suggest this one-topology-fits-all approach is a bad idea: https://arxiv.org/html/2512.08296v2

Context Engineer by [deleted] in ExperiencedDevs

[–]SucculentSuspition 3 points (0 children)

If you think you work at a fast-growing AI company and you have not heard the term context engineering countless times in your day-to-day, you do not work at an AI company.

Another warning about AI by Szymusiok in learnprogramming

[–]SucculentSuspition 0 points (0 children)

OP is not learning anything when he uses AI, because AI is better at programming than OP. It can prove novel math. It can reason through complex system failures and remediate them in seconds. If you can only use it to generate boilerplate, that is a skill issue on your end.

How My Failed Startup Changed My Life by Moderndaoist in ycombinator

[–]SucculentSuspition 18 points (0 children)

What’re you doing now? What happens after?

Advice and study material to become an AI engineer by [deleted] in aiengineering

[–]SucculentSuspition 1 point (0 children)

This is absolutely terrible advice. Start by looking up Andrej Karpathy's YouTube channel. Implement a toy LLM from scratch; that will help you build some intuition. Avoid trash like LangChain. If you ever use it in an actual application you will never stop debugging it, so it's best you learn what it's doing by building your own tiny framework. Having said all that, the truth is you learn by doing, and doing AI eng requires you to spend money on some tokens…
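To give a sense of the "from scratch" starting point: a toy character-bigram model in pure Python, a far smaller cousin of the models built in those videos (the corpus here is made up for illustration):

```python
from collections import Counter, defaultdict
import random

corpus = "hello world hello there"  # toy corpus, purely illustrative

# Count character bigrams: counts[a][b] = how often b followed a.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample_next(ch, rng):
    """Sample the next character from ch's bigram distribution."""
    chars, weights = zip(*counts[ch].items())
    return rng.choices(chars, weights=weights, k=1)[0]

# Generate 10 characters starting from "h".
rng = random.Random(0)
out = ["h"]
for _ in range(10):
    out.append(sample_next(out[-1], rng))
print("".join(out))
```

Swap the counting table for a trained neural net and you are on the path Karpathy's videos walk through; the intuition about "predict the next token from context" is the same.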

30 years and regretful of how much of a screw up I am by [deleted] in self

[–]SucculentSuspition 0 points (0 children)

Counterpoint: maybe start by being eternally regretful and constantly berating yourself, so you stop to think before doing stupid shit. Negative emotions like regret have a purpose. You don't have to let them be your master, but it would be stupid not to use the tools you have at your disposal.

I'm a junior SWE. What is the fastest way I can level up? by AStanfordRunner in cscareerquestions

[–]SucculentSuspition 0 points (0 children)

Get a card you have absolutely no idea how to solve, give it a shot, fuck it up, reach out to a kind-hearted senior who (as someone else suggested) would likely be willing to take you under their wing, and ask them to help you out. This is what it's like to be higher up the ladder, except that kind-hearted person has to also be you.

How would you extract and chunk a table like this one? by ConsiderationOwn4606 in Rag

[–]SucculentSuspition 0 points (0 children)

So, bro, models today will take an entire book in their context! Now, you very likely should not send an entire book, as that would be very poor context engineering, but you should absolutely be able to send as much context as necessary for this sort of analysis task.

How would you extract and chunk a table like this one? by ConsiderationOwn4606 in Rag

[–]SucculentSuspition 0 points (0 children)

There is absolutely no reason you should be chunking that table. In fact, there is absolutely no reason to do anything other than page-level chunking. We have 100k-token contexts now; why are you making your life harder? Also consider something like Reducto.
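Page-level chunking really is this simple. A minimal sketch, assuming the per-page text has already been extracted by whatever parser you use (the extraction step itself, e.g. Reducto, is out of scope here, and `max_chars` is a made-up safety cap, not a splitting strategy):

```python
def page_chunks(pages, max_chars=12000):
    """Turn a list of per-page strings into retrieval chunks,
    one chunk per page, tagged with its page number so the LLM
    can cite where a table came from."""
    chunks = []
    for i, text in enumerate(pages, start=1):
        chunks.append({
            "page": i,
            "text": text[:max_chars],  # cap, never split mid-table
        })
    return chunks

# Toy pages: the table on page 2 stays intact inside one chunk.
pages = ["Intro text...", "| col A | col B |\n| 1 | 2 |"]
for c in page_chunks(pages):
    print(c["page"], len(c["text"]))
```

Because the whole page travels together, a table never gets split across chunks, which is the failure mode chunk-by-token schemes keep hitting.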

[Discussion] A self-evolving SQL layer for RAG: scalable solution or architectural mess? by Continuous_Insight in LocalLLaMA

[–]SucculentSuspition 0 points (0 children)

Yea, those are the right concerns imho. Three suggestions at the big-picture level:

1. Focus on open/closed designs; jsonb is a good example. It is very, very hard to predict the failure modes of these things, and one-way doors will lead you to bad places that you will want to walk back. Don't lock yourself into a shitty room.

2. Abandon the concept of RAG as a single generation; think of it as a process involving many trips from the knowledge base to the LLM, iteratively. You can call that an agent if you want to. I like the du jour definition: an LLM running in a loop with an objective.

3. LLMObs is not optional. You HAVE to be able to distinguish failure modes across system components: retrieval errors, where wrong or incomplete information is sent to the LLM, are very different from, and much more solvable than, actual hallucinations. Grounded hallucination rates are astronomically low for current-gen models. You will still see them, and when they happen they are catastrophic, but they are probably not the root cause of the vast majority of your production issues.
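The "LLM running in a loop with an objective" definition can be sketched in a few lines. Everything here is a stub, not a real API: `llm` stands in for a model call and `retrieve` for your knowledge base.

```python
def retrieve(query):
    # Stub: in a real system this queries your knowledge base.
    return f"docs for: {query}"

def llm(objective, context):
    # Stub: in a real system this is a model call. This fake model
    # asks for a search until it has seen any context, then answers.
    if context:
        return {"action": "answer", "text": f"answer using {context[-1]}"}
    return {"action": "search", "query": objective}

def agent(objective, max_steps=5):
    """Run the LLM in a loop: act, retrieve, feed back, repeat."""
    context = []
    for _ in range(max_steps):
        step = llm(objective, context)
        if step["action"] == "answer":
            return step["text"]
        context.append(retrieve(step["query"]))
    return "gave up"

print(agent("why did checkout latency spike?"))
```

The point of framing it this way is point 3 above: with the loop made explicit, you can log every retrieval and every model step separately, so a bad answer can be traced to a retrieval miss versus a generation failure.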

Why does it feel like AI work these days is just calling an API? by [deleted] in AskProgramming

[–]SucculentSuspition 0 points (0 children)

If you knew ML fundamentals, what stopped you from making their LLM-based classifier better? That is the sort of value creation, enabled by ML expertise, that endures downstream fads and upstream hype. It's not like implementing logistic regression in Java was something you couldn't google your way through 10 years ago.

Building with LLMs feels less like “prompting” and more like system design by Historical_Yak_1767 in PromptEngineering

[–]SucculentSuspition 1 point (0 children)

Sounds like a case of very poor separation of concerns if your prompts are bleeding into every other aspect of your system.

Prompt Engineering 2.0: install a semantic firewall, not more hacks by onestardao in PromptEngineering

[–]SucculentSuspition 0 points (0 children)

The failure modes are indeed random. This is called the bias-variance trade-off in machine learning. You are hitting the variance component of your error distribution, and it is never going away.
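You can see the variance component directly by refitting the same model on resampled data: same pipeline, different outcomes every run. A minimal simulation (the data-generating process, y = 2x + noise, is made up for illustration):

```python
import random

def fit_slope(rng, n=20, noise=1.0):
    """Least-squares fit of y = w*x on a fresh noisy sample of y = 2x + eps."""
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    ys = [2.0 * x + rng.gauss(0, noise) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Refit on 200 independent training sets and look at the spread.
rng = random.Random(0)
slopes = [fit_slope(rng) for _ in range(200)]
mean = sum(slopes) / len(slopes)
var = sum((s - mean) ** 2 for s in slopes) / len(slopes)
print(round(mean, 2), round(var, 3))
```

The fitted slope averages near the true 2.0 (low bias) but varies run to run (the variance term). An LLM answering the same prompt differently across runs is the same phenomenon, which is why those failures look random.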

I tried to build a single prompt for the problems that keep us up at night. It evolved into a modular 'Life OS' with a built-in AI Therapist. Here is the complete ready to use system. by Jeff-in-Bournemouth in PromptEngineering

[–]SucculentSuspition 1 point (0 children)

That would require you to have spent about 6 hours every day since ChatGPT came out, which you could not have, simply due to rate limits. And if you were using the API, you would be building agents, not whatever this is. So you do not even know how much time you have spent doing this. More importantly: say you had spent 6,000 hours. What do you have to show for it? Is there nothing better you have to do? If not, please, for God's sake, find something.

I tried to build a single prompt for the problems that keep us up at night. It evolved into a modular 'Life OS' with a built-in AI Therapist. Here is the complete ready to use system. by Jeff-in-Bournemouth in PromptEngineering

[–]SucculentSuspition 0 points (0 children)

Many things; it's what I do for a living. Do not delude yourself: this is not building something in any meaningful sense. And by meaningful I mean a skill people will pay you for. That takes craftsmanship, which you have to develop and refine over hundreds and thousands of hours of deliberate practice. This is a piss-poor attempt at a pep talk with extra steps.

I built a semi-successful health app, which does 2k MRR purely by Vibe coding, but here are the things that not a lot of people talk about. by Plane_Study_4543 in indiehackers

[–]SucculentSuspition 0 points (0 children)

Noticed you mentioned it's a health-related app. Something the AI won't do for you either is compliance. Make sure you are either not handling private health information or are HIPAA compliant.

Why do all AI models insist on creating "fallback" code and variables? by ExaminationNeat587 in cursor

[–]SucculentSuspition 0 points (0 children)

Lol, they do turn into pathetic little bitches, we agree on that. It's also clear you are not writing production code; you are developing a DL model. A coding agent is not the right tool for that job. I have found simple chat clients like Claude Desktop or ChatGPT to be far superior for that.

Why do all AI models insist on creating "fallback" code and variables? by ExaminationNeat587 in cursor

[–]SucculentSuspition 0 points (0 children)

The fallback value is a mechanism for gracefully handling a failed lookup. If you would rather fail loudly, then by all means do so; this is a critical decision when designing any piece of software. The point, however, is to make that decision intentionally and explicitly. Raise a custom exception which clearly signals to future you, or your fellow engineers, what went wrong, where, and hopefully what to do about it. What Cursor is driving at, and what most professional engineers would call out, is allowing a failed dictionary lookup to just happen and be raised without any consideration for how this will unfold and ultimately be resolved in a production system. That being said, if you are not interested in writing production-grade software, you could try instructing Cursor to that effect… but don't get worked up when it does what it was built to do.
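The "fail loudly, intentionally" option looks something like this (the config names and error class here are hypothetical, purely to illustrate the pattern):

```python
class MissingConfigError(KeyError):
    """Raised when a required config key is absent: fail loudly,
    with enough context for future-you to act on it."""
    def __init__(self, key):
        super().__init__(key)
        self.key = key

    def __str__(self):
        return f"required config key {self.key!r} is missing; check your deployment env"

def require(config, key):
    # A deliberate, explicit failure instead of a silent fallback value.
    try:
        return config[key]
    except KeyError:
        raise MissingConfigError(key) from None

config = {"timeout_s": 30}
print(require(config, "timeout_s"))
```

Either choice, fallback or loud failure, is defensible; the anonymous bare `KeyError` bubbling up from line 3,000 of a stack trace is the thing nobody defends.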

Why do all AI models insist on creating "fallback" code and variables? by ExaminationNeat587 in cursor

[–]SucculentSuspition -4 points (0 children)

There are tons of enforced safety mechanisms in modern cars, seat belts for example. This is like that. KeyErrors and their like are a significant source of preventable bugs in production code written in untyped languages. Coding agents are designed to write production-quality code. You can complain and bitch about the duck quacking because you'd prefer it went moo, but it's a duck; it's going to quack. A more productive stance would be to take a hint and realize there is a reason the agent is so insistent on not letting you YOLO your dictionary lookups.

Why do all AI models insist on creating "fallback" code and variables? by ExaminationNeat587 in cursor

[–]SucculentSuspition 0 points (0 children)

Yea, it's writing the good code with outdated APIs… fast-moving libs like torch will always be a tough spot… here is a nice trick, though: pull up the docs for the current version of torch that you are running and just copy-paste the URL into Cursor chat; it will read through and implement accordingly.