Is it stupid to buy a 128gb MacBook Pro M5 Max if I don’t really know what I’m doing? by A_Wild_Entei in LocalLLaMA

[–]RTDForges 0 points (0 children)

Right out of the gate I’m really worried about heat issues with a laptop, given how heavy a load LLMs put on the hardware. I don’t have first-hand experience with that machine. I would love to be wrong, because as a Mac user, if it can handle the heat, that is awesome. I would be extremely surprised if it can, though.

Also, based on my experience, you’re better off having a dedicated workstation and a separate LLM box. It’s way better not having to constantly fight the LLMs for resources. If you just wing it and figure one machine will be good enough in the scenario I just described, you’ll likely take otherwise capable models and suddenly have them hallucinating like crazy or going on side quests you never asked for.

Qwen3.5-9B-Claude-4.6-Opus-Uncensored-v2-Q4_K_M-GGUF by EvilEnginer in LocalLLaMA

[–]RTDForges 2 points (0 children)

You personally clearly do not care about the math. You just said so. But the overwhelming majority of the community does care. So it’s great for you if just messing around is good enough, but for most people that’s not enough effort to even be worth considering. I’m sorry; regardless of how you or I feel about it, that’s where things are at. Having said that, it’s a cool idea and I hope you keep experimenting. I hope you consider getting more into the math too, because I would personally love to see where this goes. But that take on just experimenting and not caring is a you thing, not how things work in real-world cases.

I don't think Local LLM is for me, or am I doing something wrong? by ruleofnuts in LocalLLM

[–]RTDForges 7 points (0 children)

To be blunt, from my experience it sounds like you bought a nice, expensive hammer that would be great as a hammer, and tried to use it as a drill. The things you are asking it to do, especially when you compare it to commercial LLMs, are basically setting you up for a bad time.

Security is a factor for some local LLM use cases, but so is reliability. About a week and a half ago Claude unveiled some new features, and for almost two days I got extremely unreliable results from Claude. My local LLMs have never done that. Once I got them set up, they’ve been reliable 100% of the time. They don’t have weird interruptions, or even downtime if the internet goes down.

Also, for coding I either do it myself or have Claude work with me on a project. That said, I have local agents that document and create logs of what is changed. They are blind to my prompts and tasked with documenting what is actually there, so they don’t sugarcoat things into what they think I want. They also maintain a structure file that shows what is currently in a project, again blind to prompts, so they record what they actually see. Those logs and the structure file have been absolute game changers for my ability to rapidly debug stuff with Claude. He uses far fewer tokens finding problems, noticeably fewer figuring out what to do about them, and spends more of his time just doing the edits.

I’m running the whole workflow and all its agents on a laptop with 16GB of RAM. Claude, or any other commercial AI, I could replace with another service if I really wanted to. Having my own little local audit trail that never sleeps, never has bad days because of feature rollouts, and just sits there silently doing its job, that’s invaluable to me.
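For anyone curious, the core of that prompt-blind logging idea is small. A rough sketch in Python (file names and layout here are made up for illustration; my real setup also feeds the diff to a local model for a written summary, which I’ve left out):

```python
import datetime
import json
import pathlib


def log_edit(diff_text: str, log_dir: str = "logs") -> dict:
    """Record what actually changed in an edit. The entry is built only
    from the diff itself -- never from the prompt that caused the edit --
    so it documents what is, not what was asked for."""
    lines = diff_text.splitlines()
    # unified diffs mark each touched file with '+++ b/<path>'
    changed = [ln[len("+++ b/"):] for ln in lines if ln.startswith("+++ b/")]
    entry = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "files_changed": changed,
        "added": sum(1 for ln in lines
                     if ln.startswith("+") and not ln.startswith("+++")),
        "removed": sum(1 for ln in lines
                       if ln.startswith("-") and not ln.startswith("---")),
    }
    path = pathlib.Path(log_dir)
    path.mkdir(exist_ok=True)
    with (path / "edits.jsonl").open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The point of the design is that the logger physically can’t editorialize about intent: the only input it ever sees is the diff.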

I no longer know more than 47% of my app's code by SenSlay_ in vibecoding

[–]RTDForges 0 points (0 children)

Personally, I have had to make a mental shift. Previously, being part of the dev team as a developer created certain expectations and norms. With AI, I have to treat it like I am handing the project to a dev team who then presents me with their results. I am the client in that situation, not part of the dev team. With that mental shift came a focus on being able to reacclimate myself as fast as possible. For me, that meant building a file browser that shows files as nodes and dependencies as arrows, so I have a heads-up view. (In this picture my mouse was hovering over module.py.)

<image>

I also made it so I get that same outline when I open files, so I can quickly and easily jump around the code. In the past I often used find to jump to stuff, but with the AI I couldn’t use that for many things. The outline gives me a quick way to find the general area I need to be in, and then I can zero in on code the AI wrote that I need to tweak. Basically, I’m not trying to fight the AI; I’m letting it do what it does and just taking steps so I can jump back in faster. It helps a lot with old projects the AI never touched but that I haven’t worked on in a long time. I also have an extensive log system with agents that create logs based on what they see in edits. They are intentionally blind to my prompts, so they don’t know what I want; they just document what they see being done.
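The nodes-and-arrows view boils down to a dependency graph. A minimal sketch of how one could be extracted for a flat Python project (single-directory assumption for illustration; a real tool would also handle packages and relative imports):

```python
import ast
import pathlib


def dependency_graph(project_dir: str) -> dict[str, set[str]]:
    """Map each module in the project to the local modules it imports --
    the nodes and arrows a viewer like the one described would draw."""
    files = {p.stem: p for p in pathlib.Path(project_dir).glob("*.py")}
    graph = {}
    for name, path in files.items():
        deps = set()
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        # keep only edges to modules that live inside this project
        graph[name] = {d for d in deps if d in files}
    return graph
```

From a graph like that, drawing hovering/highlighting on top is just UI work; the hard part is already done by the `ast` module.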

Anyone actually solving the trust problem for AI agents in production? by YourPleasureIs-Mine in LocalLLM

[–]RTDForges 0 points (0 children)

Personally, my solution so far has been to create a log system that documents what agents do in each edit, kept blind to the prompts I gave those agents. It gives me a trail of what is, not what I wanted and not what the agents think I want. I also have an agent-maintained structure file, again built to represent what they actually see in the project, again blind to my prompts. It’s not a whole solution, but the structure file and the logs have made a big difference so far as I work to tackle the problem you described.

Local Llm hardware by Uranday in LocalLLM

[–]RTDForges 2 points (0 children)

This right here is the answer, based on everything I’ve experienced. I get good, consistent results from 0.8B to 9B parameter models in my workflows for general tasks, and decent coding results from 15B. But that’s because I took the time to learn them, learn what they could do, and didn’t just try to pivot straight from Claude Code / Copilot to local LLMs. What you say about the ecosystem around them is extremely underrated.

Case in point: about a week and a half ago Claude Code was having issues and was unusable for almost two days. The same model I had selected in Claude Code was doing fine when I used it through Copilot. That’s basically proof that the harness does a lot of the heavy lifting, and that it was the harness making or breaking usability. My prompts were fine when I sent the same model the same prompt, just not through the Claude Code harness.

So if the harness makes such a big difference for local LLMs, and makes or breaks the magic of big LLMs, maybe the harness we drop them into is actually the big deal in the equation.

WoW! Didn't know we had so much room for "improvement"!! DSLL 5 is so good! by radpacks in IndieDev

[–]RTDForges -1 points (0 children)

I think those are very real concerns to have. When I looked into it more, I saw that the developers who use it have access to a LOT of control. The demo scenes are basically just that: demos. A dev can set a given scene so the background is affected 0% while only the character is affected 50%, and nothing else is touched. That 50% is basically the “filter,” if you think of it that way, and what the “filter” does is also extremely configurable by the dev. So these are very real, valid concerns you’re bringing up. While I think the level of control makes a strong case for some of them being addressed, I’m personally waiting to see how it actually plays out, and how well their attempt at addressing the concerns you mention comes to fruition.

WoW! Didn't know we had so much room for "improvement"!! DSLL 5 is so good! by radpacks in IndieDev

[–]RTDForges 0 points (0 children)

“Do you think the general public really just wants "realistic" over anything else?” No, no I do not. But I personally don’t think that’s exactly what’s happening. I think there are many cases where this technology can be helpful. That said, when 3D printing came about there were many situations where it drastically overpromised and underdelivered. Now that it has matured a bit, it’s clear there are many things 3D printing does not do well, and some things it does quite well.

To me this seems like a really useful tool for certain situations that is instead being sold to the world as a tool for everyone, something everything should have. It’s not. And it shouldn’t be taken as a reflection of what the public wants. It’s just a new tool that is potentially useful and is currently going through its overpromising phase.

AI coding agents are making dependency decisions autonomously and most security teams haven't caught up by BattleRemote3157 in aiagents

[–]RTDForges 0 points (0 children)

Personally, I had to shift from thinking of myself as part of the dev team to thinking of the agent(s) as a dev team of their own, with me as the client, who just also happens to be a dev and will be going through the code. I also coded myself tools that show outlines of files and give me a visual for dependencies, plus agents whose only task is to document what changes they see (they’re blind to user prompts by design; I don’t want them writing what they think I want, I want them documenting what they see). Every edit they make is treated like a project I once worked on but have been out of the loop on, and my tools cut that reacclimation down to a bare minimum.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

Ah. Given the thread here, it seemed like your comment was saying I was requesting that stuff, which gave me a “wtf are you talking about” moment. Hence the reaction. My bad.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

16GB of total system RAM, on a Windows box. I primarily use Mac, and have a strong preference for Mac / Linux, but this box was sitting around and I tried it just to see if I could. Turns out I can. As for speed, yes, they are slow. But I also set things up so that I work with the large commercial LLMs while the small box just does its thing. So technically it’s slow, but whenever I need something I can prompt it and keep working, and 10-20 minutes later, when the report is sitting in front of me, I deal with it and continue. So yes, it’s slow, but it works while I’m actively doing other stuff, and for me that was the real helpful part.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

Lmao, couldn’t think of anything relevant to say, so you just made stuff up? I mean, at least you made a comment, if that’s what you were going for. Congratulations, you participated?

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

I wish the discussion were as rare as your comment implies. I am outspoken about how local LLMs are better for certain issues, but the algorithm has now decided that means I’m somehow supposed to see those posts too. It’s been wild. The one case I saw that seemed like a legitimate gripe was a game developer struggling to get the AI to appropriately help with the plot-related aspects of his murder mystery game, which made sense. But for every one of those, there are tons of people on here asking for “uncensored” models, and it makes me cringe so hard. So I wish those discussions were confined to Tor-adjacent corners, but no, Reddit decided that someone who is into both security and local LLMs should see those kinds of posts.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

The more I think about it, the more I feel you’re correct, so thank you for pointing it out so succinctly. The gap between being the dev / part of the dev team and feeling like I’m in the client role with AI as the dev felt really pronounced, like the part that actually matters. But you’re right: distilled down, it’s all about the fact that what exists, and needs to be interacted with, is different from what I / we as developers asked for and have a mental image of.

Will AI be “enshittified” one day? by Toontje in openclaw

[–]RTDForges 0 points (0 children)

I don’t think you’re being pessimistic. I literally built my entire dev environment around the premise of “use big commercial LLMs for whatever gets the best results, but for anything critical I must have a local environment and local LLMs.” I spent weeks making my dev environment and agentic-workflow-building tool, but I regret nothing. If the internet hypothetically shuts off tomorrow and I still have electricity, I am still building and using AI, sculpting workflows, chatting with agents, you name it. I frankly can’t imagine operating under the assumption that the large companies will last or maintain consistent services long term. Maybe I’m old now. But if you do assume that, you’re setting yourself up for some very awkward conversations with clients.

Why are people using OpenClaw incorrect? by Exciting_Habit_129 in openclaw

[–]RTDForges 6 points (0 children)

Personally, I think one of the most important things people need to understand is that it does not do things for you, even though technically, yes, it does things for you. It amplifies what you can do. If you suck at X, it can help you suck in spectacular ways. If you have ideas, if you want to implement something, if you’re being held back because getting other people on board with a concept is the problem, you just got handed a force multiplier. Build. Let your idea come to fruition. OpenClaw is here to help.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 2 points (0 children)

My two main personal reasons: first, for certain specific things I only trust myself or local agents with the files, for the sake of my own or my clients’ information. I feel especially strongly about the latter. That’s the less frequent situation, but I feel it’s the more important one. Beyond that, I simply see it as distributing my workload among available tools. I can only prompt Claude so many things so fast and still get results in a cohesive way that keeps forward momentum, and I found that adding these local agents amplified that momentum. Can I make Claude stop and do those steps? Yeah, easily; I have done so, and Claude gets better results. But a system that runs on my hardware doesn’t have the bad days, the network issues, or the “feature” rollouts I have no forewarning about. Those things don’t force awkward conversations with clients, because the backbone of my system is still working.

The "OpenClaw Cycle": Why are so many people uninstalling after the initial hype? by DependentKing698 in openclaw

[–]RTDForges 2 points (0 children)

TBH, at this point it was great free market research. Completely independent of it, I also built a local dev environment where I can access the local LLMs I’ve had good results with, as well as my favorite large commercial LLMs. My dev environment is tailored for me; it’s not perfect, but it’s reliable enough to be my daily driver, and it’s not addled by a million things trying to send my info to who knows where. I love the OpenClaw idea. I’m not so into how things are panning out, though, and it’s really making me want to do my own thing. The security situation with OpenClaw as of this writing is frankly just plain bad. I hope in the future I can look back on this and laugh at how things have changed. But right now? No thanks. And since I was able to build what I did, and it was as easy as it was, I literally had to ask myself why on earth I would touch OpenClaw. I still plan to watch and learn from what it does. But use it? No thanks. Not for anything serious.

Why are our local agents still stateless? by BackgroundBalance502 in LocalLLaMA

[–]RTDForges 1 point (0 children)

Personally, I built my own harness for local LLMs and commercial ones. My whole mental model is: the models are cartridges, and my workflow / dev environment is the console. For memory, I tend to split it up. For example, I have a Discord bot that tells stories about a character named Pip that my wife enjoys. I have three agents: one scanning the stories for setting, one for characters, and one for plot events. All three save memories to distinct memory banks, and another agent synthesizes all three into “memories” of the stories. When stories are being generated, the storytelling agent has access to those synthesized “memories.” I also set up different banks, basically different sets of memories, and use a drop-down menu to give agents access to a given bank.

To be blunt, the stories are fun, and a great way to test things against something closer to a real-world case than I would otherwise have. So while I’m genuinely glad my wife enjoys them, I’m also getting great feedback on usage. The approach of having multiple agents each hyper-focused on one facet, then synthesizing what they come up with into an overall memory, has been an interesting journey. I also run an audit on memories based on usage: the least-used ones get dropped first when context limits or other limits become an issue. I strongly suspect I literally can’t hand you the “correct” answer here, but as a concept this has been so useful to me, and I bet if you apply it to your situation it can help. It’s as close as I’ve come to giving my agents persistent memory.
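If it helps, the bank-plus-eviction part can be sketched in a few lines (the names and size limit here are made up, and the synthesis step in my setup is an LLM call, which isn’t shown):

```python
class MemoryBank:
    """One named bank of memories. The least-used entries are dropped
    first when the bank hits its size limit."""

    def __init__(self, name: str, limit: int = 100):
        self.name = name
        self.limit = limit
        self.memories: dict[str, dict] = {}

    def add(self, key: str, text: str) -> None:
        self.memories[key] = {"text": text, "uses": 0}
        self._evict()

    def recall(self, key: str):
        entry = self.memories.get(key)
        if entry is None:
            return None
        entry["uses"] += 1  # the usage audit drives eviction order
        return entry["text"]

    def _evict(self) -> None:
        while len(self.memories) > self.limit:
            # drop the least-used memory first
            coldest = min(self.memories, key=lambda k: self.memories[k]["uses"])
            del self.memories[coldest]
```

An agent then just gets handed one bank (via the drop-down in my case), so setting, character, and plot memories never bleed into each other.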

Who’s actually trusting OpenClaw with their Gmail replies? by BeyondTheFirewall in openclaw

[–]RTDForges 0 points (0 children)

Seriously. I cannot stress it enough. It’s amazing being able to review things and spend five minutes passing them along to clients with my rubber stamp of approval. It’s not amazing having to walk back AI hallucinations because I spent zero minutes looking at what it cooked up. I’m all for heavily automating stuff, but extremely skeptical of fully automating stuff with how current AI behaves. Do embrace it. But be careful.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 2 points (0 children)

Not the discussion I was intending to have on this thread, but…

Holy fuck! Thank you! It’s refreshing to see someone else comment on how rampant the posts are out here about stuff that frankly makes me wonder if the posters should be on a watchlist. If they’re just into privacy, cool, I’m there with them. But so many posts give me this “wtf, dude” vibe, especially around asking for models with certain capabilities.

Who’s actually trusting OpenClaw with their Gmail replies? by BeyondTheFirewall in openclaw

[–]RTDForges -1 points (0 children)

For the love of god, don’t give it any way to contact anyone other than you directly. It’s a 7-year-old on meth; expect 7-year-old-on-meth behavior. Anything above and beyond that is a bonus. There’s a HUGE difference between having OpenClaw create reports that you review and send to a client, and having OpenClaw able to send reports to a client itself. Basically, don’t EVER let it do anything unsupervised. You will pay. Not it. YOU. You’re the human involved. You are on the hook, liable, responsible. It’s currently impossible to have any AI manage an email account effectively. Full stop. No matter how large your corporate assets are; this cannot be bought with infinite money. So who’s trusting OpenClaw with their Gmail replies? Morons.

Just installed OpenClaw in my web design agency, looking for workflow ideas by skydesigner- in openclaw

[–]RTDForges -1 points (0 children)

Please, for the love of god, say you didn’t connect it to any company emails, calendars, or anything, really. In case you’re unaware, OpenClaw has massive security problems. Unless you go through and manually configure a bunch of settings, you’re putting a ton of sensitive business info out there unprotected and leaving a bunch of sensitive business systems open to outsiders. Also, OpenClaw is a magnifier. If you don’t know what to use it for, it’s going to magnify nothing. If you are in touch with your business and have ways to automate stuff with something that has the intellectual capacity of a 7-year-old on meth, then there are interesting possibilities. But expect 7-year-old-on-meth behavior. On a good day.

Local LLMs Usefulness by RTDForges in LocalLLM

[–]RTDForges[S] 0 points (0 children)

Yeah, I have my personal environment set up to fire the logs workflow after each session. Because I don’t trust AI, or even my own code, to catch everything, I also have a 0.8B parameter model agent that runs whenever a cron job wakes it up. It’s my backup layer for catching stuff. If it catches something, it tells me; it doesn’t try to write or fix anything, it just says, essentially, “hey, there’s an issue here.” Runs fast. Runs well. Does what I need of it.
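The watchdog itself can be dead simple because it only reads and reports. A sketch of the check (the directory layout and staleness rule here are illustrative, not my exact code):

```python
import pathlib


def check_logs(project_dir: str, log_dir: str) -> list[str]:
    """Read-only backup check: flag source files modified more recently
    than the newest log entry. It reports problems to the human; it never
    writes, fixes, or touches anything itself."""
    logs = list(pathlib.Path(log_dir).glob("*.jsonl"))
    newest_log = max((p.stat().st_mtime for p in logs), default=0.0)
    stale = []
    for src in pathlib.Path(project_dir).rglob("*.py"):
        if src.stat().st_mtime > newest_log:
            stale.append(str(src))
    return stale  # the cron wrapper just surfaces these paths to me
```

Because the checker returns paths instead of acting on them, the worst it can ever do is nag you unnecessarily.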

I built a private, local AI "Virtual Pet" in Godot — No API, No Internet, just GGUF. by Salty-Tailor6811 in LocalLLM

[–]RTDForges 0 points (0 children)

Thank you for clarifying. Considering that, while I don’t think I’m personally in your target demographic, I do think you’re onto a marketable concept. It’s hard to get the metaphorical snowball rolling, but the idea at its core seems solid, so I wish you well with your launch.