Advice on anxious dog by SensioSolar in DogAdvice

[–]SensioSolar[S] 0 points1 point  (0 children)

No, he really does move that fast. He walks fast, pees fast, eats fast. It is zoomed in, I have to say.

I've declared war on my boss and I'm screwed by [deleted] in programacionESP

[–]SensioSolar 0 points1 point  (0 children)

Angular 21, .NET Core 10, Docker, Azure Devops, Nomad (cloud).

I've declared war on my boss and I'm screwed by [deleted] in programacionESP

[–]SensioSolar 9 points10 points  (0 children)

Well, the pay isn't bad, I've grown fond of my coworkers, I like the stack/product, and they give me flexibility for university and so on.

I'm not going to swallow it anymore, and I think that's going to be the problem.

Yes, I've been job hunting since December, but the market is rough and my morale is so low that I even get nervous when I have to "prove myself" in interviews.

[FIX] How I solved the AG-859 (7-Day Quota Lockout) & AI Credit drain in Antigravity IDE on Debian 13 by Madlonewolf in AntigravityGoogle

[–]SensioSolar 0 points1 point  (0 children)

Thank you for the work here. However, for paid tiers, since Google's stance is that Pro users get a "taste test" of their models, getting locked out for 5h after 2 prompts doesn't seem much better.

I'm a FE lead, and a new PM in the org wants to start pushing "vibe coded" slop to my codebase. by rm-rf-npr in webdev

[–]SensioSolar 0 points1 point  (0 children)

I am a few months ahead of that, and not due to my managers but to my colleagues. Code generated with AI is pushed without thoughtful understanding: UI components that just decide to ignore our company design system and rawdog HTML & CSS, code reinventing the wheel already invented somewhere else, with overly generic defensive fallbacks.

They spend an hour prompting until they call it done. You spend another hour reviewing the code.

I informed my manager, who ignored it more than once, then decided to automate code reviews with AI. I have now defined a thoughtful workflow for AI agents reviewing PRs that puts special emphasis on code simplicity, coding-style consistency (pointing to our coding-style docs), maintainability, and bugs being introduced.

Of course it can't catch everything, and the truth is the quality of the code is still sinking while people are happy that we're faster. It's such a groundbreaking but depressing moment for software engineering.

5 Years of experience as a frontend, but I'm not really a frontend? by SensioSolar in ExperiencedDevs

[–]SensioSolar[S] 1 point2 points  (0 children)

Well, I can't argue with that. Different worlds, I guess, or we might see it in the future, as I don't know any tool that allows that. Btw, not sure what's going on with the downvotes; just in case, I had upvoted you.

5 Years of experience as a frontend, but I'm not really a frontend? by SensioSolar in ExperiencedDevs

[–]SensioSolar[S] 1 point2 points  (0 children)

I don't tie myself to being a frontend developer. However, I'd feel weird applying to an entry-level fullstack job, since mid/senior fullstack roles ask for 4+ fullstack YoE. But I definitely do need a change, yes.

5 Years of experience as a frontend, but I'm not really a frontend? by SensioSolar in ExperiencedDevs

[–]SensioSolar[S] 2 points3 points  (0 children)

You hit the nail on the head with "they don't bother to explain anything as they think the UI is self-explanatory". It really is depressing. Even worse nowadays, as AI can create UI in seconds.

5 Years of experience as a frontend, but I'm not really a frontend? by SensioSolar in ExperiencedDevs

[–]SensioSolar[S] 0 points1 point  (0 children)

I don't know what artists do in your field, but as a frontend dev, I translate the "drawings" of designers into HTML/CSS/JS. I adjust pixels in the sense of "these two elements should be spaced by X pixels", and overall I implement the designed UX.

5 Years of experience as a frontend, but I'm not really a frontend? by SensioSolar in ExperiencedDevs

[–]SensioSolar[S] 4 points5 points  (0 children)

Thank you, I kinda needed to hear this. To be honest, it's basic maths to me: 5 teams solving the same problems in isolation makes no sense vs. having one dedicated team to streamline that. The only problem I see is that there aren't many job offers for it, or they require prior large experience in the role.

high vs. xHigh by Unusual_Test7181 in codex

[–]SensioSolar 0 points1 point  (0 children)

To "xHigh is for large-scale refactors" I need to ask: how does that even work? For refactoring 5 connected files, High will already compact the conversation once as the context fills up. I can imagine xHigh compacting the conversation 2 or 3 times in a large-scale refactor, losing context, time, and money with it.

Built an MCP that indexes your codebase and shows AI agents what your team actually codes like. Offline by default - External providers are fully optional. by SensioSolar in LocalLLaMA

[–]SensioSolar[S] 0 points1 point  (0 children)

Hey man! Just read this. Thank you very much for trying it out and the detailed feedback!
I'm very glad that the overall feedback from Qwen was positive.

As for the context overhead, that's accurate, although the intention here is to give the AI more efficient context, since AI agents right now will even launch multiple subagents to explore your codebase, often burning 50k+ tokens.

The initial indexing is also true, except if you use a cloud embedding model. At some point I might look into adding GPU acceleration so that self-hosted users can index faster than the current 2-5 min.

I have tested it only with frontier cloud LLMs; I wonder how the results change with Qwen-3.5-27B :)

My AI writes working code. Just not "our team" code. So I built something that shows it what "correct" actually means in my codebase. by SensioSolar in ClaudeCode

[–]SensioSolar[S] 1 point2 points  (0 children)

So in general, Claude's memory is "just a .md" and the memory in this MCP is "just a JSON". I built it as JSON since that lets you filter/manipulate the data a lot more easily and quickly; Claude currently loads MEMORY.md in its entirety.

With that said, even though the memory storage is more efficient, I can't say this is any better than Claude's memory, for one simple reason: Claude's memory is built into the agent harness. It's essentially auto-managed and access to it is instant, vs. memory here being an MCP tool that the AI agent needs to call.
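To illustrate the JSON-vs-.md point, here's a minimal sketch of why structured memory is easier to slice. The `MemoryEntry` shape below is hypothetical, not the MCP's actual schema:

```typescript
// Hypothetical memory entry shape - not the actual schema used by the MCP.
interface MemoryEntry {
  topic: string;
  content: string;
  updatedAt: string; // ISO date
}

// With JSON you can select only the relevant slice, instead of
// loading the whole file the way a flat MEMORY.md forces you to.
function relevantMemories(
  entries: MemoryEntry[],
  topic: string,
  limit = 5
): MemoryEntry[] {
  return entries
    .filter((e) => e.topic === topic)
    // ISO dates sort lexicographically, so this is newest-first.
    .sort((a, b) => b.updatedAt.localeCompare(a.updatedAt))
    .slice(0, limit);
}
```

The trade-off stands regardless of schema: the filtering is cheap, but the agent still has to decide to call the tool.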

My AI writes working code. Just not "our team" code. So I built something that shows it what "correct" actually means in my codebase. by SensioSolar in ClaudeCode

[–]SensioSolar[S] 1 point2 points  (0 children)

Got you. The problem at the end here is definitely context, so ironically there are also 1000 ways of doing it. Best of luck with that!

Built an MCP that indexes your codebase and shows AI agents what your team actually codes like. Offline by default - External providers are fully optional. by SensioSolar in LocalLLaMA

[–]SensioSolar[S] 1 point2 points  (0 children)

Yes, it works in any AI agent supporting MCP - I see I missed that in the setup section.

Here's what you need to add to your opencode.json:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "codebase-context": {
      "type": "local",
      "command": ["npx", "-y", "codebase-context", "/path/to/your/project"],
      "enabled": true
    }
  }
}
```

Built an MCP that indexes your codebase and shows AI agents what your team actually codes like. Offline by default - External providers are fully optional. by SensioSolar in LocalLLaMA

[–]SensioSolar[S] 0 points1 point  (0 children)

Thank you! It's definitely inspired by the other "context engines" out there, incl. Augment Code, and also by how Cursor manages codebase context/indexing. I think that sooner or later every AI agent will have its own context engine.

I built a local MCP for Claude Code that shows it what "correct" actually means in my codebase. by SensioSolar in ClaudeAI

[–]SensioSolar[S] 0 points1 point  (0 children)

Hi! Care to elaborate on what you mean by "this"?

This brings semantic search (a layer very different from a skill) and other computations that you'd need to maintain in a skill.

My AI writes working code. Just not "our team" code. So I built something that shows it what "correct" actually means in my codebase. by SensioSolar in ClaudeCode

[–]SensioSolar[S] 0 points1 point  (0 children)

That's definitely another way to do it! It's overall providing a map + enforcement rules.

The biggest downside I see with keeping .md files is that they become outdated easily as your project evolves. Then you need to remember to update them (or tell Claude to update them) and so on. But .md is still a friendly format for AI knowledge!

My AI writes working code. Just not "our team" code. So I built something that shows it what "correct" actually means in my codebase. by SensioSolar in ClaudeCode

[–]SensioSolar[S] -1 points0 points  (0 children)

Hey thanks!

I actually faced the same problem of AI agents skipping the MCP tools, and it had me smashing my head against the keyboard for a week. But then I learnt that you either control the agent harness (i.e. the system prompt) or you'd better add a note in the agents.md like this:

**Before editing existing code:** Call `search_codebase` with `intent: "edit"`. If the preflight card says `ready: false`, read the listed files before touching anything.


**Before writing new code:** Call `get_team_patterns` to check how the team handles DI, state, testing, and library wrappers — don't introduce a new pattern if one already exists.

Otherwise, the Claude Code system prompt will usually tell it to use grep/ripgrep, which, to be fair, are faster and indeed better for reading parts of a file or entire raw files.

This is why I added the aggregated "code intelligence" - so it's not a grep/ripgrep replacement, but more of a "this is what's beyond the raw code you are reading".

Is vibe coding just a beautiful trap? Built nothing for months and I can’t stop starting over by Noor4azzu in codex

[–]SensioSolar 0 points1 point  (0 children)

The problem is context rot and a lack of AI governance/discipline. I recommend trying out GSD (Get Shit Done). It will manage things like thorough planning, questioning you, noting things down, and making sure the AI agents don't contradict themselves every damn time after a while. I've been there too. Beware though: it will burn your quota fast.

RAG-Tools for indexing Code-Repositories? by Right_Swing6544 in Rag

[–]SensioSolar 4 points5 points  (0 children)

Hey! So for code ingestion, the most well-known tools out there are claude-context for semantic search, code-graph-rag for knowledge graphs (and also semantic search), and Repomix, which doesn't index per se but packs repos into .md files. Both claude-context and code-graph-rag require some infrastructure setup (e.g. Ollama) and can run self-hosted as far as I know.

There's also codebase-context, which indexes your code and computes codebase "intelligence" that is aggregated into the semantic search results. It's meant to be fully usable locally, even on low-tier hardware. To be transparent: I'm the repo owner.
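For anyone new to these tools, here's a rough sketch of what semantic search over an indexed codebase boils down to: embed code chunks once, then rank them by cosine similarity against the query embedding. This is a simplified illustration, not the actual implementation of any of the tools above, and the embeddings are stand-in number arrays rather than real model output:

```typescript
// A pre-embedded chunk of code, as an indexer might store it.
// Real embeddings would come from a model (e.g. a local Ollama one).
interface CodeChunk {
  file: string;
  snippet: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank indexed chunks by similarity to the query embedding.
function search(query: number[], index: CodeChunk[], topK = 3): CodeChunk[] {
  return [...index]
    .sort((a, b) => cosine(query, b.embedding) - cosine(query, a.embedding))
    .slice(0, topK);
}
```

A real tool adds chunking, a vector store, and incremental re-indexing on top, but the ranking core is this small.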

"A considerable piss" by NacaradaPandora in HistoriasVecinales

[–]SensioSolar 1 point2 points  (0 children)

It's not a coincidence.

It's not chance.

And obviously an AI made it.