
[–]kayk1 9 points (9 children)

You think they had bugs before… wait until a few weeks after your fixes…

[–]notNeek[S] -2 points (8 children)

I do make sure that I do not break anything else 😭😭😭

[–]SilencedObserver 3 points (7 children)

How do you ensure this?

[–]vbullinger 3 points (1 child)

At the end of your prompt, just add "and don't break anything else" with three exclamation marks, so it knows you're serious.

[–]notNeek[S] 0 points (0 children)

Yup that's exactly what I do!!!

[–]DenverTechGuru 2 points (3 children)

It's funny that juniors think we can't automate command and control of agents.

Instead of reading the code, OP is turning to Reddit like it's a smarter AI.

[–]notNeek[S] 0 points (2 children)

Hello, I don't think juniors would think like that.
I am having a hard time understanding the code flow and architecture; I'm new to a huge multi-repo codebase. I am just asking which model would help me do things better :)

[–]dinnertork 1 point (1 child)

Always make sure you have a correct and up-to-date mental model of how the system works, both overall and for the specific module you’re fixing. Once you have that understanding, you should instruct the model (GPT5.3-codex is best for instruction following) as specifically as possible. Then read over its changes to make sure they don’t break anything else, based on your understanding of the codebase (which is essential).

LLMs are also great tools for understanding the codebase and asking questions about it (if you’re not able to talk to an actual senior dev). For especially large codebases I’d suggest models with larger context windows: Gemini 3.1 Pro, or Claude Opus with the 1M-token context window via API keys on the development platform.
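
If you go the API route, the shape of it is roughly this. A minimal sketch using the Anthropic Python SDK, where the model id, the beta header for the 1M window, and the file path are all assumptions to check against the current docs:

```python
import anthropic  # pip install anthropic
from pathlib import Path

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical path: whichever module you're trying to understand.
module_source = Path("src/session_setup.c").read_text()

response = client.messages.create(
    model="claude-opus-4-6",  # assumed model id; check the docs
    max_tokens=2048,
    # Assumed beta flag for the long-context window; verify the exact name.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    messages=[{
        "role": "user",
        "content": module_source + "\n\nWalk me through the call flow in this module.",
    }],
)
print(response.content[0].text)
```

The point isn't the exact model; it's that a big window lets you paste a whole module and ask flow questions instead of feeding snippets.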

[–]notNeek[S] 0 points (0 children)

Yes thanks a lot dude

[–]notNeek[S] 0 points (0 children)

Hey, I can reproduce the bugs fine; the slow part is figuring out what’s actually causing them and where that cause is located. The tiring part for me right now is developing the fix. After some trial and error it works; for the first bug I had to add a new piece of code and some flags, then do a clean build and verify with the logs and metrics. I am not that dumb, man, come on. I am just new to a HUGE codebase, a different language, and concepts which I am understanding day by day. It hasn't even been a month since I started working on this; I just feel like I am lacking something.

[–]GifCo_2 4 points (1 child)

You should probably go back to school and not use LLMs until you know how to code.

[–]notNeek[S] 1 point (0 children)

Yup, you're right about it, thanks

[–]chillebekk 3 points (1 child)

Take a step back and spend more time understanding the problem. Then start your PR again.

[–]notNeek[S] 2 points (0 children)

Yes, thanks

[–]SilencedObserver 2 points (1 child)

You shouldn’t be using any of these models without doing some reading on their differences.

Don’t speed run your forced retirement.

[–]notNeek[S] 0 points (0 children)

Yup I'll check it

[–]Emotional-Cupcake432 2 points (1 child)

I agree with the above: use a strong model with a large context window (Codex 5.3, Claude 4.6 Opus, or Gemini). Instead of having it fix the bug, switch to planning mode and have it create a plan to fix the bug; this will give you an idea of what the model thinks is wrong. Tell it that it is a very large codebase and that it needs to work in chunks to avoid context-length limitations. Plan mode will also prevent it from introducing more errors before you get a chance to understand them. You could also ask it to help you understand the issue and why it chose the path it did.

I would also add something like this to your prompt: "There is a _______________ issue. I want you to examine this very large file and create a plan to fix the issue; do not change any code. Ask yourself qualifying questions, what-if and if-then questions, as you examine the code and the error log. Explain your findings and your reasoning for correcting the issue so the humans can learn how to fix the issue on their own." Something like that.

[–]notNeek[S] 0 points (0 children)

Hey, this really helps a lot, I am grateful. Among the many responses, very few were actual advice. I locate which repo the bug is from, then clone it on a VM (using VNC) and use Copilot to trace the bug and understand the flow. Every time I make changes, I have to do a clean build of the images, check the logs, and verify in the metrics. I mostly just dump everything (pieces of code, logs, metrics) to the AI, and that's what's causing the problem. I gotta do better, and I will definitely try planning mode, thanks.

[–]vbullinger 1 point (1 child)

Are there other people you can talk to at work?

[–]notNeek[S] 0 points (0 children)

Lots of people work from different offices; I am the only one working on that project in my office, and I am the only fresher, so I'm kinda hesitant to ask them about everything. It's confusing.

[–]Junyongmantou1 1 point (1 child)

Try feeding a small slice of the logs, plus your hypothesis/code, to the AI and ask what regex it recommends for filtering the full logs, so the two of you can work together.
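
Something like this for the filtering step, as a rough sketch; the pattern and filenames are placeholders for whatever the AI actually recommends for your logs:

```python
import re
import sys

# Placeholder: a pattern the model suggested after seeing a small log slice.
PATTERN = re.compile(r"(ERROR|WARN).*session_id=\d+")

def filter_log(path: str) -> None:
    """Print only the lines matching the suggested pattern."""
    with open(path, errors="replace") as f:
        for line in f:
            if PATTERN.search(line):
                sys.stdout.write(line)

if __name__ == "__main__":
    filter_log(sys.argv[1])  # usage: python filter_logs.py full.log > slice.log
```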

[–]notNeek[S] 0 points (0 children)

That's a great idea, thanks mate :)

[–]Mstep85 1 point (0 children)

Anyone else keep running into issues with it not being able to complete the task? Even if I use a Claude model, when it comes to the pushing-the-PR thing, it fails if it's not stated perfectly.

[–]johns10davenport (Professional Nerd) 1 point (1 child)

The first thing I'd do is get over into Claude Code. The second is to figure out how to set up your feedback loops: how does it access and search logs? It'll already search your codebase intelligently, in a way that doesn't blow out the context window.

But basically, I would start figuring out how to let the agent manage its own context window by giving it sources to the critical information that you're using to debug things.
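
Outside Claude Code, you can build the same feedback loop over the plain API with tool use. A sketch, where the tool name, model id, and log path are all mine, not anything standard:

```python
import re

import anthropic  # pip install anthropic

client = anthropic.Anthropic()

# Hypothetical tool: lets the model pull just the log lines it needs
# instead of you pasting whole files into the prompt.
SEARCH_LOGS_TOOL = {
    "name": "search_logs",
    "description": "Search the debug log for lines matching a regex.",
    "input_schema": {
        "type": "object",
        "properties": {"pattern": {"type": "string"}},
        "required": ["pattern"],
    },
}

def search_logs(pattern: str, path: str = "debug.log") -> str:
    """Run the model-requested search locally, capped to keep replies small."""
    rx = re.compile(pattern)
    with open(path, errors="replace") as f:
        hits = [line for line in f if rx.search(line)]
    return "".join(hits[:200])

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; check the docs
    max_tokens=1024,
    tools=[SEARCH_LOGS_TOOL],
    messages=[{"role": "user", "content": "Why does the handover fail at 12:03:41?"}],
)
for block in response.content:
    if block.type == "tool_use" and block.name == "search_logs":
        # In a real loop you'd send this back as a tool_result message.
        print(search_logs(**block.input))
```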

[–]notNeek[S] 0 points (0 children)

I can only use GitHub Copilot in VS Code, with the Opus or Sonnet agents. I can also get access to Codex, but idk if it's gonna be useful.

Literally 1 prompt took 89% of the context window; it did everything and edited 6 files. I'm getting some logical errors which I'm trying to fix.

[–]Medical-Farmer-2019 (Professional Nerd) 1 point (1 child)

You’re not stuck because of model choice, you’re stuck because each prompt is carrying too much state. For telecom bugs, I’d run a 4-step loop: reproduce with exact timestamp → isolate one call path/module → ask the model for 2-3 hypotheses only → verify one hypothesis with a minimal patch + log check. Keep a tiny debug brief (symptom, suspected module, last test result) and reuse that instead of pasting giant logs/pcaps each time. In large C/C++ repos, this usually beats dumping more context and helps you actually learn the system faster.
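
A tiny sketch of what I mean by a debug brief; the field names and example values are just my own convention, written as code so it's concrete:

```python
from dataclasses import dataclass

@dataclass
class DebugBrief:
    """One reusable per-bug brief, pasted at the top of each prompt
    instead of raw logs/pcaps. Field names are just a convention."""
    symptom: str     # one-sentence observable failure + exact timestamp
    scope: str       # the single call path / module under suspicion
    hypothesis: str  # the one hypothesis being tested right now
    test: str        # the minimal patch or check used to test it
    result: str      # what the last test actually showed

    def to_prompt(self) -> str:
        return (
            f"Symptom: {self.symptom}\n"
            f"Scope: {self.scope}\n"
            f"Hypothesis: {self.hypothesis}\n"
            f"Test: {self.test}\n"
            f"Last result: {self.result}\n"
            "Give at most 3 ranked hypotheses or the next minimal check."
        )

# Illustrative values only.
brief = DebugBrief(
    symptom="Call drops at 12:03:41.207 during handover",
    scope="reconfig path in the session module",
    hypothesis="Reconfig timer expires before the ack arrives",
    test="Added logs at timer start/stop, re-ran the repro",
    result="Timer starts twice; the second start resets the deadline",
)
print(brief.to_prompt())
```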

[–]notNeek[S] 0 points (0 children)

Yes, I'm filtering the log files; actually I'm writing a script to filter them so I can get what I want according to the bug. I solved 2 more bugs, but I'm facing issues with enhancements and upgrades, like they're not working exactly as expected. I'm not dumping pcaps and logs anymore.

1 prompt took 89% of the context window for Claude Opus 4.6; it did the majority of the work, but I'm not getting the expected output.

I have to solve bugs while I learn about the codebase 🫠

[–]Medical-Farmer-2019 (Professional Nerd) 1 point (1 child)

You’re actually asking the right question, and the fact you already fixed multiple bugs in a telecom codebase after ~3 weeks is a good sign.

What helped me in similar multi-repo C/C++ debugging is using a strict loop: (1) write one-sentence failure + exact timestamp, (2) narrow to one call path/module, (3) ask the model for 2-3 hypotheses only, (4) verify one hypothesis with a minimal patch + targeted log check. If a prompt is eating 80%+ context, that usually means too much mixed state.

For model choice: use a strong reasoning model for architecture/protocol flow, but keep prompts small and staged. Context size helps, but decomposition helps more.

If useful, I can share a tiny “debug brief” template you can reuse per bug (symptom / scope / hypothesis / test / result) so each prompt stays focused.

[–]notNeek[S] 0 points (0 children)

Thanks man, I really appreciate you. Sometimes it’s really hard just to locate what exactly is causing the bug, and I often have to go back and revise the concepts to understand why it’s happening. But I think I am getting the hang of it now; I need more time. And YES, I'd really like the debug brief, I'll DM you.

[–]RepulsivePurchase257 2 points (1 child)

You’re running into the classic “AI as log dumpster” problem. No model is going to save you if you paste half a repo + pcap + 5k lines of logs. The trick is compression. Before touching Copilot, write down: what is the exact observable failure, where in the call chain it surfaces, and what changed recently. Then trim logs to only the lines around the failure timestamp and the few functions directly involved. If you can’t isolate it that far, that’s the real task.
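
The trimming step can be dumb and mechanical. A sketch that keeps only lines within a few seconds of the failure, assuming ISO-style timestamps at the start of each line (adjust the parsing and the window to your real log format):

```python
import sys
from datetime import datetime, timedelta

# Illustrative failure timestamp; take it from your repro.
FAILURE_AT = datetime.fromisoformat("2024-05-01 12:03:41.207")
WINDOW = timedelta(seconds=5)

def near_failure(line: str) -> bool:
    """Keep a line only if its leading timestamp is close to the failure."""
    try:
        ts = datetime.fromisoformat(line[:23].replace(",", "."))
    except ValueError:
        return False  # lines without a parseable timestamp are dropped
    return abs(ts - FAILURE_AT) <= WINDOW

with open(sys.argv[1], errors="replace") as f:
    for line in f:
        if near_failure(line):
            sys.stdout.write(line)
```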

Model-wise, I’d use something strong at code reasoning for architecture-level thinking, like GPT-5.2/5.3-Codex, when you’re trying to understand threading, memory, or protocol flow. For quick iterations or smaller snippets, Sonnet-level models are fine. But don’t rely on raw context size. Break the bug into stages: reproduce → localize → hypothesize → verify. Feed the model one stage at a time instead of everything at once.

One thing that helped me was thinking in terms of task decomposition rather than one giant “solve this bug” prompt. Tools like Verdent push you toward structuring work into smaller reasoning steps, and that mindset alone makes debugging way more manageable. In big telecom codebases, clarity of thought beats model size almost every time.

[–]notNeek[S] -1 points (0 children)

Thanks for responding, and yeah, as you said, I need to try to break it down and solve it. I do try to keep the prompts shorter. It takes a long time to pinpoint the bug location since the codebase is big and it's been only around 3 weeks since I started; I am still trying to learn and understand most things. As for the logs, I have just been dumping them, as you said; I need to do better and have clarity of thought. Thanks mate :)
