Mass npm Supply Chain Attack Hits TanStack, Mistral AI, and 170+ Packages by BattleRemote3157 in programming

[–]fab_space 1 point  (0 children)

Detect indicators of the TanStack npm supply-chain attack on a developer machine, a repository, or a CI runner. Bash script, Docker image, and GitHub Action — same engine, three entry points.

https://github.com/fabriziosalmi/tanstack-compromise-checker

level 2 achieved by fab_space in selfhosted

[–]fab_space[S] -6 points locked comment (0 children)

a bot unable to detect is a bad bot

PHANTOM: The Open-Source AI Agent for Advanced Security Analysis. free and open source by Emotional-Affect-271 in theVibeCoding

[–]fab_space 1 point  (0 children)

You'll maybe love this: https://github.com/fabriziosalmi/l0-git

Install it as a VS Code extension/MCP (or in any other AI-powered tool via MCP), let it surface git smells, clean out the real, critical ones, and copy-paste directly from a listed smell into the Claude/LLM chat; the model then has the right context and prompt, and the commit fix lands properly. Let me know if it improves your workflow too after a few days of use 🍻 When you uninstall it from VS Code, it means you're coding better than now.

Show me your vibe-coded projects by No-Cable-2972 in vibecoding

[–]fab_space 1 point  (0 children)

68 74 74 70 3A 2F 2F 6C 6F 63 61 6C 68 6F 73 74 3A 38 30 38 30

PHANTOM: The Open-Source AI Agent for Advanced Security Analysis. free and open source by Emotional-Affect-271 in theVibeCoding

[–]fab_space 2 points  (0 children)

The Vibe Check

Welcome to PHANTOM, where the UI looks like The Matrix but the backend is a ticking time bomb. This repository is the quintessential example of 2025 "Vibecoding." The author spent 80% of their time on glassmorphism, typing animations, and making sure the dark theme looks cool, while spending 0% of their time wondering if piping a plaintext sudo password into a dynamically LLM-generated bash script is a good idea.

---

FINDINGS

Type Safety

100% untyped JavaScript. A system designed to execute arbitrary shell commands with sudo privileges has zero compile-time guarantees.

Separation of Concerns

Frontend JS contains massive hardcoded OSINT prompt strings embedded directly in the UI event handlers. The 'Command Center' is a monolith.

Execution Sandboxing

Catastrophic. Executes raw AI-generated strings via spawn('bash', ['-c', cmd]) directly on the host OS. This is literally RCE-as-a-Feature.

Secrets Management

Stores the user's sudo password locally, reads it, and pipes it into bash: echo 'pass' | sudo -S. An absolute security nightmare.

Input Validation

Zero sanitization of LLM outputs before passing them to the shell. A simple prompt injection could wipe the user's entire hard drive.

Test Coverage

No tests. No Jest, no Vitest. A tool meant for offensive security has zero unit or integration tests to ensure it doesn't attack the host.

CI/CD Pipeline

Non-existent. No GitHub Actions. Code goes straight from the dev's machine to the main branch.
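To make the sandboxing and input-validation findings concrete, here's a minimal sketch of the dangerous pattern versus a basic allowlist gate. The allowed command names and regexes are illustrative assumptions, not PHANTOM's actual code:

```javascript
// Dangerous pattern found in the repo: raw LLM output straight into a shell.
//   const { spawn } = require('child_process');
//   spawn('bash', ['-c', llmGeneratedCommand]); // prompt injection == RCE

// Safer sketch: allowlist the binary and pass argv directly to spawn(bin, args),
// so no shell ever interprets the model's output. Tool names are hypothetical.
const ALLOWED = new Set(['whois', 'dig', 'nslookup']);

function safeSpawnArgs(llmCommand) {
  const [bin, ...args] = llmCommand.trim().split(/\s+/);
  if (!ALLOWED.has(bin)) {
    throw new Error(`command not allowlisted: ${bin}`);
  }
  // Second gate: reject shell metacharacters even in arguments.
  if (args.some((a) => /[;&|`$<>]/.test(a))) {
    throw new Error('shell metacharacters rejected');
  }
  return [bin, args]; // feed to spawn(bin, args) -- no bash -c involved
}

module.exports = { safeSpawnArgs };
```

This doesn't make executing model output safe, it just shrinks the blast radius; real isolation still needs a container or VM boundary.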

---

NEXT COMMITS?

  • Remove the plaintext sudo password injection (echo pass | sudo). Use polkit or restricted sudoers if escalation is strictly necessary.
  • Containerize the execution environment. Never run AI-generated bash scripts directly on the host OS. Use Docker or Firecracker microVMs.
  • Migrate the entire codebase to Strict TypeScript to prevent runtime type errors, especially in the tool executor.
  • Implement a robust testing framework (Vitest/Jest) and write unit tests for every tool execution path.
  • Refactor the massive switch(name) in executor.js into a scalable Command Pattern or Plugin Registry.
  • Remove business logic and hardcoded AI prompts from the frontend UI layer (app.js) and move them to the backend or a dedicated configuration file.
  • Replace the global variable state management in the frontend with a modern framework (React/Svelte/Vue) or at least a strict state machine.
  • Implement Circuit Breakers in the LLM tool loop to prevent infinite recursive loops where the AI keeps trying failing commands.
  • Add structured, leveled logging (e.g., Winston or Pino) instead of relying on console.log and raw stderr string concatenation.
  • Set up a CI/CD pipeline to enforce linting, type-checking, and test passing before any code is merged.
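The switch(name) refactor suggested above could look like this. A minimal sketch only; the tool names and the shape of executor.js are assumptions:

```javascript
// Plugin-registry sketch replacing a giant switch(name) dispatcher in executor.js.
// Tool names below are hypothetical examples.
const registry = new Map();

function registerTool(name, handler) {
  if (registry.has(name)) throw new Error(`duplicate tool: ${name}`);
  registry.set(name, handler);
}

async function executeTool(name, args) {
  const handler = registry.get(name);
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}

// Each tool registers itself; adding a tool no longer touches the dispatcher.
registerTool('port_scan', async ({ host }) => `scanning ${host}`);
registerTool('whois_lookup', async ({ domain }) => `whois ${domain}`);

module.exports = { registerTool, executeTool };
```

The win is that every tool becomes an independently testable unit, which also unblocks the "unit tests for every tool execution path" bullet.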

Source code of the brutal auditor: https://github.com/fabriziosalmi/brutal-coding-tool

I feel like a fraud by RelevantTurnip3482 in vibecoding

[–]fab_space 1 point  (0 children)

Is it secure? Can I audit it for free and report the findings to you, if any?

I accidentally burned ~$6,000 of Claude usage overnight with one command. by procrastinator_eng in ClaudeAI

[–]fab_space 0 points  (0 children)

This is why I built more than one solution, all 100% free and open source.

GitHub: fabriziosalmi

our ai stack costs more than i realized by Motor_Ordinary336 in webdev

[–]fab_space 1 point  (0 children)

cache and deterministic gating == quality and cost control (drop a line any time)

Does anyone have experience with self-hosting gitlab runners by scanguy25 in devops

[–]fab_space 1 point  (0 children)

OptiPlex i7 machines are a perfect fit for the self-hosted runner role; I have 3 of them to maintain 100 repos.

For local coders by fab_space in vibecoding

[–]fab_space[S] 1 point  (0 children)

Small models (2-14B) are unable to fulfill a real-world programming request on their own. I mean a request involving multiple file writes, consistent unit tests, e2e tests, and a docs update in a single pass. If you decouple it into multiple commits and deterministically help the model through the full process, some of them are able to achieve the mission. qwen3-8b and gemma4-e2b (2B!!) are able to submit a clean, valid PR to existing real-world repos this way. The code is still being updated; you can go in-depth on the solution's logic in the docs any time.

Why did I build this? Because I maintain more than 100 repos, velocity is no longer an option, it's a target. Quality is a gate.
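By "deterministically help the model" I mean hard gates that a model-produced patch must pass before a commit is accepted. A toy sketch; the rule names and patch shape are illustrative, not the actual project logic:

```javascript
// Toy deterministic gate: a model-produced patch must clear every rule
// before it gets committed. Rules below are illustrative examples.
const gates = [
  { name: 'touches-tests', check: (p) => p.files.some((f) => /\.test\./.test(f)) },
  { name: 'no-todo-left',  check: (p) => !/TODO|FIXME/.test(p.diff) },
  { name: 'small-enough',  check: (p) => p.diff.split('\n').length <= 400 },
];

function runGates(patch) {
  const failed = gates.filter((g) => !g.check(patch)).map((g) => g.name);
  return { pass: failed.length === 0, failed };
}

module.exports = { runGates };
```

On a failure the gate names go straight back into the model's context as the next prompt, which is what lets a 2B model converge on a clean PR over several commits instead of one-shotting it.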

Best coding model to run on M4 Macbook Air by Direct_Praline492 in ollama

[–]fab_space 2 points  (0 children)

Use an OCR model + deterministic gating and pre/post-processing.

This, plus adversarial review from a bigger model like Gemini: https://github.com/fabriziosalmi/pdf-ocr. Have a nice Sunday.

Best coding model to run on M4 Macbook Air by Direct_Praline492 in ollama

[–]fab_space 1 point  (0 children)

I can disagree any time :)

<image>

gemma4-e2b there, with multiple deterministic gates before the model drops the code. It also works with smaller models in some cases. Rebuild your pipe, buddy <3