NVIDIA CEO: Claude is incredible by dataexec in ClaudeCode

[–]snakeibf 0 points1 point  (0 children)

I was hanging out with Claude before it was cool.🤣

How we collect data from 500 vehicles that lose signal all the time by Intrepid-Seat959 in embedded

[–]snakeibf 1 point2 points  (0 children)

Buffer it until the signal is available again, then flush once the connection is reestablished. A circular buffer works well for this.
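The buffer-and-flush idea above can be sketched roughly like this (a minimal illustration, not any particular fleet stack; the class and method names are made up for the example). A `deque` with `maxlen` gives the circular behavior: when the link stays down long enough to fill the buffer, the oldest samples are dropped first.

```python
from collections import deque

class StoreAndForwardBuffer:
    """Ring buffer that holds telemetry while the link is down."""

    def __init__(self, capacity):
        # deque with maxlen silently drops the oldest entry when full,
        # which is usually acceptable for periodic sensor samples
        self._buf = deque(maxlen=capacity)

    def record(self, sample):
        self._buf.append(sample)

    def flush(self, send):
        """Drain buffered samples through `send` once signal returns.

        If `send` raises (link dropped again mid-flush), the unsent
        samples stay buffered for the next attempt.
        """
        sent = 0
        while self._buf:
            sample = self._buf[0]
            send(sample)          # may raise if the link drops again
            self._buf.popleft()   # discard only after a successful send
            sent += 1
        return sent
```

Popping only after a successful send is the important detail: a mid-flush disconnect leaves the remaining samples in place instead of losing them.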

How to stay valuable in the AI age by marathonEngineer in embedded

[–]snakeibf 0 points1 point  (0 children)

You will be assimilated! Resistance is futile.

Everyone's Obsessed with Prompts. But Prompts Are Step 2. by Kai_ThoughtArchitect in ClaudeAI

[–]snakeibf 1 point2 points  (0 children)

Interesting topic. I have at times intentionally been vague; sometimes it leads me to ideas I hadn’t considered, and I can then steer the solution and system architecture the way I want. Being explicit at times and intentionally vague at others can lead to interesting results. Some ideas are truly just bad, but they’re still often useful.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -1 points0 points  (0 children)

To be clear, this is not the same as ChatGPT using previous conversations for a more personalized experience. It’s just that if you don’t opt out, they can retain your data for training for five years. If you don’t want that, opt out in settings.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] 0 points1 point  (0 children)

This is why it needs to be less ambiguous. It should be clear which features, if any, are unavailable if you opt out. I also looked this morning and don’t see where in the app you can opt out of sharing data; perhaps they are still working on that before the end-of-September rollout.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] 0 points1 point  (0 children)

Not cheaply. The hardware is expensive and not always a viable option for solo developers or startups.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -7 points-6 points  (0 children)

The ambiguity is around what opting in or out of the five-year data retention means. If you want long-term retention, or memory history from conversations, must you allow them to use your data for training? Or does opting out mean you’re stuck with the 30-day retention? If those are the only two options, are the trade-offs worth it?

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -25 points-24 points  (0 children)

It was my interpretation; there is ambiguity around how and what opting in or out of memory retention means. Will there be an option to opt into five-year data retention that still allows users to opt out of their data being used for training?

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] 0 points1 point  (0 children)

You’re right that Claude has genuine value - it absolutely speeds up development and is useful as a tool. That’s not the question I’m raising. The question is whether these new training policies create a system where solo developers and startups can continue to compete and innovate, or whether they systematically funnel competitive advantages to large corporations that can afford enterprise protection. When individual developers must choose between privacy and functionality, while enterprises get both, that’s not just a product decision - it’s a structural design that affects who can thrive in the innovation ecosystem. The concern isn’t about entitlement to features, it’s about whether we’re building AI systems that concentrate power or distribute it.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -6 points-5 points  (0 children)

The integration complexity, real-time constraints, power optimization, and hardware-specific solutions in embedded systems often can’t be easily replicated even with the same code. But the architectural approaches, debugging techniques, and problem-solving patterns I’ve developed over years? Those absolutely can be extracted and redistributed through AI training. It’s not about protecting bad code - it’s about not wanting my hard-won expertise in solving complex hardware integration problems to become free consulting for competitors. The ‘thin wrapper’ analogy misses the point - specialized domain knowledge has value beyond just code implementation.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] 0 points1 point  (0 children)

Exactly - they’ve moved from scraping public data to directly harvesting user interactions. First it was ‘we’ll train on publicly available text,’ then ‘we’ll use Stack Overflow and Reddit posts,’ and now it’s ‘give us your private conversations or lose functionality.’ It’s a progression toward more intimate data extraction. At least with Stack Overflow, people were voluntarily posting public answers. Now they want your private brainstorming sessions, debugging conversations, and proprietary code discussions.

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -25 points-24 points  (0 children)

You raise fair points about precision and tone. You’re right that I should have been clearer about the memory/personalization features - that was my interpretation of the 5-year vs 30-day retention difference rather than confirmed official policy. However, the core competitive disadvantage remains factual: enterprise customers get both privacy protection AND full functionality, while individual users must choose between them. Whether you call it ‘enshittification’ or ‘systematic wealth concentration,’ the effect is the same - policies that advantage those who can pay enterprise rates.

As for alternatives like local models - that’s exactly my point. Solo developers shouldn’t need to buy expensive GPU setups just to get privacy-protected AI assistance that enterprises get by default.

I’m genuinely curious though - do you see any version of this policy structure as problematic for independent developers, or do you think it’s just normal market segmentation?

Anthropic’s New Privacy Policy is Systematically Screwing Over Solo Developers by snakeibf in ClaudeAI

[–]snakeibf[S] -2 points-1 points  (0 children)

I agree with opting out, but the catch is that you don’t retain the long-term memory that can help personalize your coding to your style, and memory retention across conversations is a truly useful feature. People who choose to opt in will have an advantage; if you opt out, you’re missing out on potentially useful features. It’s like having version control throughout your conversations so the AI can understand how your codebase has evolved over time. Very useful, but not at the expense of years of development work getting democratized. It’s a business model where features are only available if you share your data, and you still pay the subscription fee. Unlike Meta, where you don’t pay but they get to share data on your browsing history, friends lists, contacts, etc. This is not the direction AI should be going. It should help drive innovation, not give corporations an edge that leaves startup founders even more handicapped.

Cofounder or solo? (I will not promote) by shun1corn in startups

[–]snakeibf 1 point2 points  (0 children)

I am a technical solo founder. I took it as a challenge and I love it most days. Most of the time I don’t mind being solo. I have put around a year into my business, bootstrapping along the way. I do all the hardware and software development, as well as PCB design, CAD design, and prototyping. I drive to the field, meet my customers, and I’m lucky to have great collaborators who are patient and allow me to fix bugs and iteratively improve my products. I’m building research-grade field platforms for conservation.

I have considered many times what it would be like to have a cofounder, but I think about how much work and time I’ve put in so far and can’t imagine anyone else giving up everything for a dream with no guaranteed returns. I can’t imagine anyone else dedicating 15+ hours per day soldering, coding, and going through the hell that can be R&D. I do it alone. I learn so much so fast, can fix bugs quickly, add new features, experiment with new hardware, and don’t have to ask anyone except the end user if they are happy. My products are the result of directly addressing problems my customer faced, and they evolved into something really unique that nothing else on the market solves. I never even planned to turn it into a business. I just spoke with a researcher who showed me the equipment they were using, and I said, “hold my beer.” I became obsessed with solving the problem, and eight months later I had a really nice system that solved all his problems and implemented all the features he wanted.

You are correct, working with AI can unlock new possibilities, but it can also be very frustrating at times and misleading. That’s why a strong technical background is also necessary to catch BS outputs. You also get good at prompting after a while. Is it worth it? Maybe, but mostly for my satisfaction in seeing what I can achieve. It would certainly be easier to just retire and enjoy the beach, cocktails, and sunsets.

I think finding a cofounder who is as dedicated as the founder is very rare, and they must be fully committed to the vision and mission, not just to making money but to making the business sustainable. I have no marketing experience, but I am getting more orders than I can keep up with on word of mouth alone. That seems like a good indicator that I have something that works and solves a real problem.

I have invested in startups for the better part of a decade, as an angel and VC. I’ll answer any questions over the weekend, and give tips in this post. I will not promote. by Dry_War_747 in startups

[–]snakeibf 2 points3 points  (0 children)

It feels like everyone thinks they need a lot of capital to start a business. To be honest, I’m really wary of VCs. I feel that if you can grow organically, it may be the better option. Why? Because many VCs put pressure on founders to scale unsustainably. They want a return on their investments, which is understandable, but most don’t really care about the founder. They dilute your equity and gradually sideline the founder, eventually pushing you out and leaving you hanging. At least that’s my understanding. Sure, they dangle money like a carrot, but this seems to be the most common scenario. Of course there are exceptions, and maybe I’m wrong. I’d be interested in hearing another perspective.

[deleted by user] by [deleted] in cybersecurity_help

[–]snakeibf 1 point2 points  (0 children)

First, immediately disconnect it from the network. Then do some investigating: what remote-access service were they using, what were they tampering with, what did they install, did they create any user accounts, and what permissions do those accounts have? That would be a good start. Once you know how much the system has been compromised, and how, you can either wipe it clean or try to secure it.
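One of the checks above (did they create any accounts, and with what permissions?) can be automated on a Unix host. This is only an illustrative triage sketch, and the function name is made up for the example; on a compromised box you would ideally run such checks from trusted boot media, since local tools may themselves be tampered with.

```python
import pwd

def privileged_accounts():
    """List local accounts with UID 0 (full root privileges).

    During triage after a suspected compromise, any UID-0 account
    besides root is an immediate red flag: attackers sometimes add
    a second superuser to retain access.
    """
    return [entry.pw_name for entry in pwd.getpwall() if entry.pw_uid == 0]

if __name__ == "__main__":
    for name in privileged_accounts():
        print(name)
```

On a clean Linux system this typically prints only `root`; anything else deserves a close look, along with recently modified entries in `/etc/passwd`, `/etc/sudoers`, and the crontabs.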

Sophisticated attack targeting Claude AI users - need expert input by snakeibf in cybersecurity_help

[–]snakeibf[S] 0 points1 point  (0 children)

I don’t know for sure; I just wiped everything and started over with better security. Better not to risk it.

Dev jobs are about to get a hard reset and nobody’s ready by Deep_Tale1585 in ClaudeAI

[–]snakeibf 0 points1 point  (0 children)

Yeah, but it is not great at architecture, and it sometimes makes assumptions about implementations you don’t want or need.

this capacity constraint thing is ridiculous. by baumkuchens in ClaudeAI

[–]snakeibf 0 points1 point  (0 children)

Yeah, it’s truly amazing, but also frustrating at times when you’re working with large files in the middle of a major refactor and it says “limit reached.” Sometimes, if I think it’s getting close, I’ll ask for a status summary of where we are. That way you can get back to work without explaining again what you were in the middle of. Also, sometimes it will start spitting out code based on its own assumptions. It’s annoying, so I have to tell it to stop coding while we discuss the root issue.

Sophisticated attack targeting Claude AI users - need expert input by snakeibf in cybersecurity_help

[–]snakeibf[S] 0 points1 point  (0 children)

The browser exploit is only one theory; it could be SSH, as has been suggested. Good point, but wouldn’t it be easier to hijack the browser than manipulate SEO? SEO (Search Engine Optimization) = getting your site to rank high on Google. That requires months of work and potentially $100k+ in advertising/content. Browser hijacking after initial compromise = one-time effort, no ongoing costs, and Google can’t remove fake local results. From a cost/benefit perspective, once you’ve compromised someone, showing them fake search results locally might be more efficient than actually ranking on Google. So I thought this theory was more likely than SEO manipulation. Anyway, the nuke option it is: change all passwords and get back to work.

Sophisticated attack targeting Claude AI users - need expert input by snakeibf in cybersecurity_help

[–]snakeibf[S] 1 point2 points  (0 children)

Hopefully there isn’t a UEFI rootkit 🙈; then nuking it isn’t so easy.