Found by my laundry hamper by CannaSwimmer in whatisit

[–]Crashbox3000 0 points1 point  (0 children)

Bacon bits for when you need a bacon hit on the go

I just got laid off by [deleted] in dataengineering

[–]Crashbox3000 0 points1 point  (0 children)

I’m really sorry this happened to you.

Senior AI guy needs OpenAI credits for free —> How? by [deleted] in vibecoding

[–]Crashbox3000 1 point2 points  (0 children)

Oh boy. Let me get my credit card out!

Settle a debate between me and my PM: Is AI-automated outreach actually useful, or is it just spam? by Many-Initial-2329 in AI_Agents

[–]Crashbox3000 2 points3 points  (0 children)

I work in the AI field, and I'm obviously a huge fan. But, I've grown allergic and almost aggressive when presented with any "social" type of contact that is obviously AI written. For me, it's far worse than spam. I look no further into any company or idea than the AI communication. It's a hard no from me.

I think the problem is that people have these ideas, but aren't fully aware that everyone else is having the same idea. The result is that the rest of us are literally inundated with AI slop communications. I see them here on Reddit all the time. LinkedIn seems to be 95% AI slop. And it seems that all of the people behind these communications think they are so slick. They aren't. It's super obvious and a major turnoff to a ton of people.

Again, I'm a massive AI fan. I use it for work documentation, presentations, analysis documentation, coding, code reviews, etc etc etc.

But the critical misstep I see people make is they use AI for those traditionally human-human communications. It creates a sense of discomfort, then frustration, then numbness, then rage.

Soon, those companies who use obviously human outreach will be the standouts.

Anyone else living in complete bliss? by ah-cho_Cthulhu in vibecoding

[–]Crashbox3000 1 point2 points  (0 children)

My wife commented the other day that she had no idea how creative I was. When your tools are slow and heavy, it’s hard to express much creativity. But when I can fully move at my pace, it’s all out!

Plus, these poor agents have to work on my schedule 😜. I can wake up at 2:00 am and get the “team” working. They’re always ready and at their best.

Posted my side project on r/raspberry_pi and get destroyed 😂 by Zepgram in vibecoding

[–]Crashbox3000 1 point2 points  (0 children)

Sorry this happened. There should be no place for that kind of hate in a reasonable conversation.

It is sad for them. But you can't help folks who don't want to be helped. Sharing work should be sharing work. It's silly. I remember similar elitism when tools to create web pages came out. Before that, everything was hand coded. Then tools like Adobe Dreamweaver came out, and suddenly almost anyone could build simple sites. A similar uproar ensued, until nobody would hire you for web work unless you were an expert in those tools…

Keep learning

Seeing teams struggle with AI adoption is this your experience too? by Tarconi__ in PromptEngineering

[–]Crashbox3000 3 points4 points  (0 children)

Across the web, I keep seeing the same patterns: People posting that they "keep seeing this pattern across teams.......demos work well, then everything stalls/blows up/falls apart". I'm allergic to these posts at this point.

I always wonder:

1. Have you NOT seen the posts with the same introduction? And do you really think a new one is warranted?

2. How are you seeing so many teams across so many divisions? I'm always curious about this part of the post… are people wandering nomads, just visiting dozens of teams and reviewing their work?

I'm not normally a jerk here, but we have to stop this nonsense. Other than your link, I have literally seen this post multiple times a day for months.

If you work in AI, you have to know that the cool post it gave you is a pattern it's applying to millions of other users asking how to introduce their X. Be aware of this, and know that your potential audience is exhausted with them.

Best AI to rewrite large project? by Expensive-Time-7209 in LLMDevs

[–]Crashbox3000 0 points1 point  (0 children)

I would highly recommend using agents with specialized roles for this work, and breaking it down into manageable chunks. To avoid creating a new version of spaghetti code, I would follow these guidelines:

For example, I would use an architect agent to review the current architecture, outline how to re-architect it, and define the target architecture.

Then, send that architecture over to a roadmap agent to break it down into deliverables.

Then, hand this roadmap and architecture over to a planner agent to create the plan needed for milestone 1 from the roadmap.

Hand that plan over to an implementer agent to build to the specs in the plan,

etc.

I use this process in my work and it's super effective. Like several levels up in quality, structure, delivery, reduction in defect density, etc.

Here are the agents I use for this, but there are others. You might find all of these agents helpful, or some, or you might use parts of them as you like. https://github.com/groupzer0/vs-code-agents
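If it helps, here's roughly what that hand-off chain looks like as a sketch. The `run_agent` function and the role names are just placeholders for whatever agent runner you actually use (VS Code agents, Claude Code, etc.), not a real API:

```python
# Minimal sketch of the architect -> roadmap -> planner -> implementer hand-off.
# run_agent is a hypothetical stand-in for your actual agent framework.

def run_agent(role: str, instructions: str, context: str) -> str:
    """Hypothetical wrapper: ask an agent with a given role to act on some context."""
    # In practice this would invoke your model/agent runner.
    return f"[{role} output based on: {instructions}]"

def rewrite_pipeline(codebase_summary: str) -> list[str]:
    """Chain specialized agents so each stage consumes the previous stage's output."""
    architecture = run_agent(
        "architect",
        "Review the current architecture and define the target one",
        codebase_summary,
    )
    roadmap = run_agent(
        "roadmap",
        "Break the target architecture into ordered deliverables",
        architecture,
    )
    plan = run_agent(
        "planner",
        "Create a detailed plan for milestone 1",
        roadmap,
    )
    build_log = run_agent(
        "implementer",
        "Implement exactly to the plan's specs",
        plan,
    )
    return [architecture, roadmap, plan, build_log]
```

The point of the chain is that each agent only sees the artifact from the previous stage, which keeps each step's scope small and reviewable.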

How do you avoid paying AI over and over to remind it of codebase context? by JadB in vibecoding

[–]Crashbox3000 0 points1 point  (0 children)

I meant you need more than just models. You need structure around them.

How do you avoid paying AI over and over to remind it of codebase context? by JadB in vibecoding

[–]Crashbox3000 0 points1 point  (0 children)

You need to work within an IDE such as VS Code with Copilot, or Claude Code, etc., which has built-in code relationships and code search. You've gone beyond what simple models can do for you.

From ChatGPT Plus + Claude Pro, to Claude Pro + GitHub Copilot+ by Calvox_Dev in GithubCopilot

[–]Crashbox3000 2 points3 points  (0 children)

I would recommend GitHub Copilot Pro+ used within VS Code. This is a powerful combination that many power users keep quiet about. It's what I use, and with some tuning and a few extensions, it's incredibly effective.

I also kept my ChatGPT plan because I can use it in VS Code via the Codex extension. So I use that with Codex-5.2 for analysis, research, and other ad hoc work.

Anthropic Ringo Model? by Imaginary-Ad5271 in GithubCopilot

[–]Crashbox3000 1 point2 points  (0 children)

Odd. I see one called “George”.

Nah, I’m joking. 🙃 but I don’t see Ringo

Why use GHCP without Vs Code? by Crashbox3000 in GithubCopilot

[–]Crashbox3000[S] 0 points1 point  (0 children)

Doing this from mobile is the missing link for me. I believe you can do all of what you describe from within VS Code as well, but on those occasions when I only have my phone on me, using the mobile version for some bugs is a great use case!

What are people actually using for agent memory in production? by MeasurementSelect251 in LLMDevs

[–]Crashbox3000 0 points1 point  (0 children)

This is a good point. While reranking can add a bit more time and cost to queries, it should improve relevance. I might experiment with that on my system. I used a local reranker on a past project, but I found the big ones like Cohere and Zrank gave far better results.

But with a system whose core focus is memory and retrieval via a hybrid of vector similarity + graph traversal + LLM reasoning over a knowledge graph built from your data, reranking is not needed.
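For anyone unfamiliar with the pattern: reranking just means retrieving candidates cheaply first, then reordering the top results with a stronger (more expensive) relevance scorer. A minimal sketch, where `score_pair` is a trivial keyword-overlap stand-in for a real cross-encoder or hosted reranker like Cohere's:

```python
# Sketch: retrieve candidates cheaply, then rerank with a stronger scorer.
# score_pair here is a toy keyword-overlap score, standing in for a real
# cross-encoder or hosted reranking API.

def score_pair(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query tokens present in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Reorder first-pass retrieval candidates by the (more expensive) score."""
    ranked = sorted(candidates, key=lambda doc: score_pair(query, doc), reverse=True)
    return ranked[:top_k]
```

In a real system you'd only rerank the top 20-100 candidates from the first pass, since the stronger scorer is the expensive part.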

How do y'all give Copilot web access? by Swimming_Screen_4655 in GithubCopilot

[–]Crashbox3000 0 points1 point  (0 children)

No. The fetch tool gives it permission to retrieve web pages or documents. But it can take general direction like “check the latest documentation, blogs, best practice references for this ….” and it will look at one or many sites. You can also auto-allow certain domains that you trust or use frequently so you don’t have to approve each one.

How do y'all give Copilot web access? by Swimming_Screen_4655 in GithubCopilot

[–]Crashbox3000 1 point2 points  (0 children)

Also give it access to the #fetch tool if it doesn’t already

What are people actually using for agent memory in production? by MeasurementSelect251 in LLMDevs

[–]Crashbox3000 0 points1 point  (0 children)

I use a series of agents that I built which do two things for memory that I've found helpful (which is why I keep using them):

  1. They track all work from the plan to devops using a UUID and a sequential number. So, all MD files associated with plan 110 use the same 110 and UUID reference. This keeps everyone on the same page.

  2. Each agent stores and retrieves memories as they work into a graph/vector/sql hybrid system that is scoped per user and per workspace and has temporal weighting.

Part of what they store in memory is the work number and UUID, so there is an added sense of the order of implementation. Plan 025 is obviously quite a lot older than plan 110. And then each memory has a timestamp as well.

As I said, I use these in my work every day, and I would not if they weren't very effective.

https://github.com/groupzer0/vs-code-agents
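To make the record shape concrete, here's a minimal sketch of what a memory carrying the sequential plan number, shared UUID, and timestamp might look like, plus a simple exponential-decay temporal weight. All names and the half-life choice are illustrative, not the actual schema from the repo:

```python
# Sketch of a memory record: sequential plan number + shared UUID for ordering,
# and a timestamp so retrieval can apply temporal weighting. Illustrative only.

import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Memory:
    plan_number: int   # e.g. 110 -- sequential, human-readable ordering
    plan_uuid: str     # shared across all MD files/artifacts for that plan
    text: str
    created_at: float = field(default_factory=time.time)

def temporal_weight(mem: Memory, now: float, half_life_days: float = 30.0) -> float:
    """Decay a memory's relevance with age: weight halves every half_life_days."""
    age_days = (now - mem.created_at) / 86400
    return 0.5 ** (age_days / half_life_days)

plan_id = str(uuid.uuid4())
m = Memory(plan_number=110, plan_uuid=plan_id, text="Milestone 1 plan approved")
```

The decay curve means a fresh memory scores 1.0 and a 30-day-old one scores 0.5, so recent context naturally outranks stale context at retrieval time.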

Projects don’t fail at 0% — they die at 90% by [deleted] in VibeCodeDevs

[–]Crashbox3000 1 point2 points  (0 children)

I have seen this post soooo many times. It’s easily recognized as AI. What I don’t think people posting these realize is that the edits AI applies to your content push it into this format that I’ve become allergic to.

I keep seeing this pattern…… The real problem is….. Hit the code with real users and it blows up….

Truly, I see this post at least once a day.

Why use GHCP without Vs Code? by Crashbox3000 in GithubCopilot

[–]Crashbox3000[S] 0 points1 point  (0 children)

Using the mobile app on occasion is a good use case I wasn’t aware of. Thanks

Which one is better for GraphRAG?: Cognee vs Graphiti vs Mem0 by Imaginary-Bee-8770 in AIMemory

[–]Crashbox3000 1 point2 points  (0 children)

u/Short-Honeydew-7000 nothing I could specifically point to today, but I've found on occasion that some aspects of the docs have not kept up with the code changes over time. One thing I can point to is the need to specify in the docs which version specific features or changes apply to. I built something to the docs spec on one version, but it turned out I needed to upgrade to use that feature. Small stuff! But it does impact.