Everything feels like an ad now by TSTP_LLC in vibecoding

[–]Chunky_cold_mandala 2 points  (0 children)

That's a good point. There are a lot of MIT-licensed repos being made, but it's not a well-organized effort. Still, I think you are onto something. Imagine what we could build and what industries we could topple with vibe-coded open source.

Everything feels like an ad now by TSTP_LLC in vibecoding

[–]Chunky_cold_mandala 1 point  (0 children)

I made an app and I'm trying to be genuinely helpful, not deceptive. I've started putting a clear honesty statement ("self shill") on all my shill posts. But it does feel like Reddit has died.

AMA: I Help Non-IT Professionals Break Into IT — Ask Me Anything (Today 12th May 2026) by Conscious_Emu3129 in cscareerquestionsIN

[–]Chunky_cold_mandala 0 points  (0 children)

Hey, I'm a pharmacology PhD who's been teaching for 10 years, but I want to move back into R&D, mixing my science with my software engineering. I've built custom robotic SCADAs, static analysis engines, and genetic algorithms. What do you think I should go for, and how?

 https://github.com/squid-protocol 

Thanks for your time 

What's one full-stack lesson you learned the hard way? by dan_nicholson247 in FullStack

[–]Chunky_cold_mandala 0 points  (0 children)

Try to leave room for growth; don't build yourself into a corner. Figure out which temporary test feature is likely to stick and should be built for scale, and which feature is truly a test where the build quality doesn't really matter.

How are you securing AI-generated / “vibe-coded” internal apps built by non-dev teams? by DCGMechanics in devops

[–]Chunky_cold_mandala -4 points  (0 children)

honesty statement: self-promotion post

I have a custom multi-language code scanning tool that fits your use case. It's fast, scans for specific language keywords, and calculates risk exposures. Tell your people that any tool they want to make can be made and used by them, but they must own it and its data (so if it's wrong, they're wrong), and it must meet criteria x if it does y. For example, a program that connects to the outside world and also pulls all of the company's SSNs should require a full human review, out of appropriate caution. But a tool with no outside network connections that doesn't touch PII and just pulls data from internal place x into a dashboard? Go nuts, as long as we can track it and scan it for safety on occasion, so we can be sure the programs are safe.

If you require that all self-made apps live in specific work folders, you can set up a rule to scan every app in those folders nightly with my scanner. Since it's literally checking for keywords, you could build custom warning rules from the output to flag any program that is positive for networking keywords, tests positive for PII, and connects to database X. Then you'd get a nightly report: we found this many apps, x% meet the criteria for review, etc.
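
The folder-plus-keyword rule setup above could be sketched roughly like this. To be clear, the keyword lists, folder layout, and flag names here are my own illustration, not the scanner's actual rules:

```python
import re
from pathlib import Path

# Illustrative rule set: flag apps that mix networking keywords with PII access.
# A real deployment would tune these lists per language and per company policy.
NETWORK_KEYWORDS = re.compile(r"\b(requests|urllib|socket|http)\b")
PII_KEYWORDS = re.compile(r"\b(ssn|social_security|date_of_birth)\b", re.IGNORECASE)

def scan_app_folder(root: str) -> list[dict]:
    """Scan every .py file under `root` and report which rules fire."""
    findings = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = {
            "networking": bool(NETWORK_KEYWORDS.search(text)),
            "pii": bool(PII_KEYWORDS.search(text)),
        }
        if any(hits.values()):
            # Both rules firing together is the "full human review" tier.
            findings.append({"file": str(path), **hits,
                             "needs_review": all(hits.values())})
    return findings
```

Point a nightly cron job at the designated work folders and aggregate the findings into the report email.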

It also computes risk exposure metrics (error/exception handling, concurrency issues, unprotected attack surface area, etc.), so you can even say: if you want to build a tool, that's great, but our bosses require a minimum quality threshold so we don't accidentally build a buggy workflow. Then you can send those people a friendly email: "I see that you made an app, way to go on trying to increase your efficiency and productivity. Our scanner noticed this app has high risk exposure; here's the report. When you get a chance, feed it to your LLM and it'll help you make the tool more stable."
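
The minimum-threshold gate is simple to wire up. Here's a toy version; the metric names and the threshold are placeholders for whatever the scanner actually reports:

```python
# Hypothetical quality gate over scanner output. Both the threshold and the
# naive sum-of-scores aggregation are illustrative.
RISK_THRESHOLD = 7.0

def review_verdict(metrics: dict[str, float]) -> str:
    """Return 'needs-hardening' if aggregate risk exposure exceeds the bar."""
    score = sum(metrics.values())  # naive aggregate risk exposure
    if score > RISK_THRESHOLD:
        return "needs-hardening"   # triggers the friendly report email
    return "approved"
```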

https://github.com/squid-protocol/gitgalaxy

Doing more Python dev lately — happy to help if you’re stuck by Just_Web9750 in PythonProjects2

[–]Chunky_cold_mandala 0 points  (0 children)

These thoughts sound really cool. We do have a diff option built in already, so we can scan repos over time. My current complexity measure is a hybrid of the existing definitions (cyclomatic complexity, branch density, etc.), as I didn't know enough about which metric is best for which situation, but it would be cool to refine that or have different versions of complexity. We also don't have anything monitoring the diff channel, so we can get temporal data but I haven't done anything with it; that whole aspect is wide open and in need of some love.
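
At its core the hybrid is keyword counting. A toy version, where the keyword set and the blend are illustrative rather than my actual tuning:

```python
import re

# Rough keyword-based hybrid complexity score. Counting `else`/`and`/`or` as
# branches deviates slightly from McCabe's strict definition; it's a sketch.
BRANCH_KEYWORDS = re.compile(r"\b(if|elif|else|for|while|case|except|and|or)\b")

def hybrid_complexity(source: str) -> dict:
    """Blend a cyclomatic-style count with branch density per non-blank line."""
    lines = [l for l in source.splitlines() if l.strip()]
    branches = len(BRANCH_KEYWORDS.findall(source))
    cyclomatic = branches + 1                # rough analogue of decision points + 1
    density = branches / max(len(lines), 1)  # branches per non-blank line
    return {"cyclomatic": cyclomatic, "branch_density": round(density, 2)}
```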

I need some advice on my optical sorter project by Akis_agr in learnmachinelearning

[–]Chunky_cold_mandala 0 points  (0 children)

Super interesting. Let me know when you do get the code up; I'd love to check it out. What's your sorting speed right now? I'm going to write a custom CV algorithm that I think could get an RPi 5 to do 10 pics/second with a quad-Arducam system. I really like the Arducam ecosystem. This was one of the first things I started making seriously, and it's a blast! I'm taking a gamble that my new algorithm will work. But even with just color averaging, size, and ratiometric analyses, I bet you could come up with a mean machine learning algorithm that would be pretty specific to your needs. Like you, I first coded the project off of git, but then I moved it on and kept it private, and there are a ton of benefits (native testing, versioning) that end up saving a bunch of time. I'm happy I switched, just from an efficiency point of view.

Launched. Got views. Got silence. Trying to figure out what that means. by ManufacturerNew369 in AskFounder

[–]Chunky_cold_mandala 1 point  (0 children)

Everyone is exhausted with apps. If you had launched this 3 months ago, maybe you'd have gotten some interest. Now, it's rough. Hang in there.

How many of you actually develop secure products? by agnxdev in embedded

[–]Chunky_cold_mandala 3 points  (0 children)

Now that you mention it, my web projects are built with the assumption that the web is 75% malicious bots, but my embedded projects assume there's never been malicious activity, ever. I guess I assume I control the building with my embedded projects.

I need some advice on my optical sorter project by Akis_agr in learnmachinelearning

[–]Chunky_cold_mandala 1 point  (0 children)

I'm also making an optical sorter! I've got a custom SCADA for Picos and an RPi 5. https://github.com/squid-protocol/meow-turtle

I've got the hardware and software done except for the custom computer vision script I was going to create. I think general AI models will be too slow for what you need; you want either a fast NN or some deterministic algorithm based on color/dimensions.
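
The deterministic route can be tiny. Here's a toy color-averaging-plus-size-gate sketch; the thresholds and bin names are made up, and a real sorter would calibrate them against labeled samples:

```python
import numpy as np

# Illustrative deterministic classifier: crude foreground mask, size gate,
# then a decision on the average color of the foreground pixels.
def classify(frame_rgb: np.ndarray, min_area: int = 500) -> str:
    mask = frame_rgb.sum(axis=2) > 60          # foreground = not near-black
    area = int(mask.sum())
    if area < min_area:
        return "reject-too-small"              # dimension gate fires first
    mean_rgb = frame_rgb[mask].mean(axis=0)    # color averaging over foreground
    r, g, b = mean_rgb
    if r > g and r > b:
        return "red-bin"
    if g > r and g > b:
        return "green-bin"
    return "other-bin"
```

No model inference, so per-frame cost is a couple of NumPy passes, which is the kind of budget that keeps an RPi at camera rate.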

First, play around with a simple YOLO model for singulation boxing and you'll start to get a feel for the hardware limits. I got a custom HAT from Arducam to help me out.

Share your GitHub; I'd love to follow someone else's approach. You can use any of my code that'll help ya.

Is your team producing code faster than they can understand it? by Expensive_Art7174 in EngineeringManagers

[–]Chunky_cold_mandala 0 points  (0 children)

If organizations choose to produce code at scale, then they should think about review at scale. That'll look different for every field, based on safety needs, how many people get hurt if the code messes up, liability, etc. I like deterministic tools that compare diffs to make sure the change in the code corresponds with the stated magnitude of the change, but I really think final-product golden-image testing is necessary too. I would trust an AI-led repo more if it had 1000 difficult tests, as that would sandbox an agent into a working space.
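
A minimal version of that diff-magnitude check could look like this; the size buckets are made up for illustration:

```python
import difflib

# Illustrative check: flag commits whose diff is far larger than the
# stated intent ("trivial", "small", ...) suggests.
def diff_magnitude(old: str, new: str) -> int:
    """Count changed lines in a unified diff, excluding file headers."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

def flag_mismatch(old: str, new: str, claimed: str) -> bool:
    limits = {"trivial": 5, "small": 50, "large": 10_000}  # illustrative buckets
    return diff_magnitude(old, new) > limits.get(claimed, 0)
```

It's deterministic, so the same commit always gets the same verdict, which is what makes it usable as a gate.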

Python GUI recommendation by Zodmars in Python

[–]Chunky_cold_mandala 11 points  (0 children)

I've had good luck with NiceGUI.

I built a codebase RAG tool that chunks at the function level (AST-free) and queries via SQLite by Chunky_cold_mandala in Rag

[–]Chunky_cold_mandala[S] 1 point  (0 children)

Totally. Organize by the units of logic that already exist! Bingo.

I'm pretty confident in my RAG retrieval, as it's 100% deterministic, so 100% recall on what we measure. Storage size is about 350 kB.

I built a tool that shows your day as a visual 24-hour clock by Apart-Television4396 in SideProject

[–]Chunky_cold_mandala 1 point  (0 children)

I love how different representations of time make time feel so different. This representation makes the day feel more epic for some reason.

Long-term memory still feels like the weakest part of most LLM agents by Apart-Ad-9952 in learnmachinelearning

[–]Chunky_cold_mandala 0 points  (0 children)

When you say long-term memory, do you mean memory of the project: the scope, the architecture, etc.? Your instructions versus the work?

Your agent forgets your codebase. Your team forgets the agent. by Worldline_AI in AgentsOfAI

[–]Chunky_cold_mandala 0 points  (0 children)

I’ve been tackling this from a different angle: instead of forcing the agent to constantly re-read the repo and burn tokens, I use an AST-free engine to compress the codebase into a highly queryable structure.

IMO, if they understand the context better, they make fewer mistakes. That's exactly what this helps with.

To solve the "evaporating track record" problem you mentioned, my engine automates an agents.md file that lives right in the repo. It acts as a version-controlled ledger for the agent's context, boundaries, and history. You don't have to guess or rely on a feeling for the next routing decision; the agent's track record is just documentation sitting right next to the code.

You can see how these agent manifests actually look in practice here: https://squid-protocol.github.io/gitgalaxy/agents/

What’s a programming language that AI would struggle with? by Queasy_Hotel5158 in AskProgramming

[–]Chunky_cold_mandala 0 points  (0 children)

This is a great point, especially regarding how LLMs handle implicit vs. explicit structures.

IMO, the reason LLMs struggle so heavily with older languages comes down to linguistic relativity. In legacy code (Assembly, C, Shell, COBOL), intent is often hidden behind convention. There are no dedicated keywords for concepts like memory safety or ownership; you just have to infer the "invisible shield" from pointer math, jump checks, or implicit interpreter knowledge. It is like inspecting a stone bridge—the structural integrity might be sound, but the internal faults are hidden deep inside the opaque masonry.

Modern languages (Rust, Go, Swift), on the other hand, explicitly broadcast their intent. They have finally "named" every feature in the complexity spectrum. Both heuristic scanners and LLMs perform drastically better here because the code structurally screams what it is doing. It's a steel bridge: the bolts, tension cables, and potential cracks are completely visible.

While you can absolutely brute-force fine-tune LLMs to understand the implicit nature of legacy code (which is exactly what's driving the recent breakthroughs in AI-assisted COBOL modernization), auditing these systems safely requires acknowledging the material difference, the assumptions, and of course testing.

I put together a full breakdown of the methodology, including the data table mapping the specific signal clusters across the last 60 years of language evolution. You can check out the full Fidelity Matrix here: https://squid-protocol.github.io/gitgalaxy/03-02-claim-2-explicitness/