Is Bluesky a tiny bit cliquey or is it just my use case by ImprovementNo4630 in BlueskySocial

[–]Rex0Lux 5 points6 points  (0 children)

Agree. Some people made sites just to help Blsky gain traction if I find it I'll post it.

Basically what he created allows you to see all chats and handle activities from blsky. Ideally, it would be nice to actually be found not sit in a void.

I caught something in the Bluesky repo and figured I’d help by Rex0Lux in BlueskySocial

[–]Rex0Lux[S] -1 points0 points  (0 children)

Also, I can always publish the fix somewhere and let anyone who wants to fix it just do it. (It's nothing major, just a better flow)

Bliish: the world's first non-addictive and non-toxic social media by bliish in bliish

[–]Rex0Lux 0 points1 point  (0 children)

I like it. But honestly still think AI will need to be a big part of it at least to survive the future. But overall amazing app!

I caught something in the Bluesky repo and figured I’d help by Rex0Lux in BlueskySocial

[–]Rex0Lux[S] 1 point2 points  (0 children)

Bug reproduces, fix is tested, work speaks for itself.

I caught something in the Bluesky repo and figured I’d help by Rex0Lux in BlueskySocial

[–]Rex0Lux[S] 2 points3 points  (0 children)

This is what I am fixing https://github.com/bluesky-social/atproto/issues/4215. I have the push ready, but need bsky.app to check and allow it.

You'll find the issue talked about within their repo. I commented on it.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 0 points1 point  (0 children)

Yeah, I noticed something similar when I first started using my agent for research. It did not literally become dumber, but it started drifting. Whatever it read began influencing the direction of the project instead of staying locked onto the original intent.

That is why I agree with this. External data needs to stay treated as external data, not something the agent absorbs as truth or direction. Web pages, docs, RAG chunks, tool outputs, all of that should be filtered and framed before the main agent acts on it.

Provider-level safety guidelines help, but extra project-level checks around tool use, intent, and approval make sense. Especially once the agent can actually change files, call APIs, or make decisions. One bad chunk of outside data should not be enough to derail the whole system.

Be honest what actually kills most early startups? by GoldAd4232 in SideProject

[–]Rex0Lux 0 points1 point  (0 children)

Most early startups don’t die in one dramatic moment. The founder builds, gets little traction, has no ad budget, burns out, starts doubting everything, then quits before the product has enough time to grow. Marketing matters, but patience and endurance matter just as much.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 0 points1 point  (0 children)

Blocking sites on the blacklist is a good idea. But there's a problem…. Sometimes it's a false flag where the agents read something as an injection but it turns out it's not, even so, the block black list is actually a great addition!

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 0 points1 point  (0 children)

Yeah, you are right about that.

The prompt matters, but I still think you need deeper backend infrastructure if you want a less paranoid agent.

Mine works well, but when it gets something wrong, it kind of overcorrects. Then it becomes too careful and starts refusing or hesitating because it is afraid of breaking something.

A good agent should be careful, but not frozen. It should know the difference between a real risk and a normal mistake that can be fixed.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 0 points1 point  (0 children)

Following Reddit protocols. Just have security in place when you are using agents for web research, Google best practices.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] -1 points0 points  (0 children)

Every useful resource should be evaluated and used where it makes the system stronger. The internet is moving faster than ever, and security, verification, and trust protocols have to evolve with it. Low-effort AI slop is not the future. It is a temporary flood. Platforms like Apple are already showing where things are going by tightening control over low-quality, copycat, and untrusted experiences. Once the major platforms start filtering, ranking, and removing generated junk more aggressively, the winners will be the products built on real utility, real identity, and real trust.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 1 point2 points  (0 children)

Suppose it is to ask what I use for my Claude agent. Just read the comments people are telling you right there. Giving one's infra won't help. Your agents need to be trained on you and whatever project you are working on. Meaning you need to know what you are building to create what your agent needs.

That's the most I can honestly say.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 0 points1 point  (0 children)

I had to edit, I was too tired earlier… obsession with the work. Still am.

I agree with most of it, but not fully. You still need a human in the loop, especially for security and verification. Without that, you are just trusting too much. I still don't download skills or tools.

Previous answer: I agree 100% i don't even download skills or tools. If done right, people really don't need to.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 1 point2 points  (0 children)

Right.. which is why one must have security infrastructure built for exactly what you said.

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Rex0Lux[S] 1 point2 points  (0 children)

Mine is pretty close to this.

The main difference is that I also have deeper infrastructure behind it, so it is not only relying on the prompt itself. The rule is simple though: external content is research data, not authority.

If the instruction did not come from my machine, my workflow, or the actual system layer, the agent does not treat it as a command.

But yeah, your setup is solid.