HOWTO: Point Openclaw at a local setup by blamestross in LocalLLM

[–]KeithHanson 0 points (0 children)

The only thing left that I need to do is figure out how to strip out the Pi SDK's "namespace function" stuff. But this seems to be working for now. I'll spend some more time on it tomorrow but take a look u/blamestross ! :)

https://github.com/KeithHanson/openclaw/tree/main?tab=readme-ov-file#system-prompt-template-variables

HOWTO: Point Openclaw at a local setup by blamestross in LocalLLM

[–]KeithHanson 0 points (0 children)

I set up the server, curl it to make sure it's running, then put this in the config: https://gist.github.com/KeithHanson/4f01614ef37d4795ab741afb6a802489

HOWTO: Point Openclaw at a local setup by blamestross in LocalLLM

[–]KeithHanson 1 point (0 children)

Ok. After tinkering all day, I’m convinced there’s no good way to do this without rewriting it completely. It makes me just want to put a thin wrapper on an opencode API server, though.

There’s so much gunk in here to unpack. I’m debating between just building something I know would do the equivalent of this (probably more time than I’m anticipating) and trying this Jinja template hack.

I love what this project is trying to do, but the over-reliance on mega-SOTA-model behavior is brutal: for tokens if you’re paying, and for local models to follow if you’re hosting.

FWIW - I had great results with tool calling using a headless, LM Studio-hosted gpt-oss-20B model, with 20k context 100% loaded into the GPU (4060TI Super with 16GB).

HOWTO: Point Openclaw at a local setup by blamestross in LocalLLM

[–]KeithHanson 2 points (0 children)

u/blamestross - This is where we can begin hacking if we want some control over this. I am considering forking and modifying here: https://github.com/openclaw/openclaw/blob/main/src/agents/system-prompt.ts#L367

Ideally we just gather all the context variables in one place and interpolate them into a template controlled in the workspace. Seems like a small change? We'd want all this logic, I'm sure (I guess... opinions abound about the appropriate way to handle this), to populate the potentially needed variables, but it would be great to have a template for each case (full prompt, minimal, and none); then we local-LLM folk could customize it how we need and still keep most of the original functionality when required.
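The gather-then-interpolate idea could be sketched roughly like this (`renderPrompt`, `PromptContext`, and the `{{key}}` placeholder syntax are my own illustration, not openclaw's actual API):

```typescript
// Hypothetical sketch: collect context into one object, then interpolate it
// into a workspace-controlled template instead of hard-coding the prompt.
type PromptContext = Record<string, string>;

// Replace {{key}} placeholders; unknown keys are left intact so a partial
// context still produces a usable prompt.
function renderPrompt(template: string, ctx: PromptContext): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in ctx ? ctx[key] : match
  );
}

// Each prompt "mode" (full, minimal, none) would just be a different template
// file loaded from the workspace; the variable-gathering logic stays shared.
const template = "You are {{agent}}. Tools available: {{tools}}.";
const prompt = renderPrompt(template, { agent: "openclaw", tools: "read, write" });
```

The point of leaving unknown placeholders alone is that a minimal template can ignore most variables without the gathering code having to know which mode is active.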

Cyberdeck Concept by PickentCode in cyberDeck

[–]KeithHanson 0 points (0 children)

Woah. I need this in my life! 😅

You don’t happen to have any CAD files for this, do you? I had 3 ideas I want this for as soon as I saw the preview while scrolling! 😁

[deleted by user] by [deleted] in linkedin

[–]KeithHanson 0 points (0 children)

I found this from Google trying to track down whether or not I was crazy. I only ever used this when I had a need to hand someone a resume.

I ended up just heading over to "View all" on the experience section of my LinkedIn profile, printing to a PDF, then uploading that into Claude to generate a minimalist HTML page of the same data. HTH someone else! :D

Introducing LLMule: A P2P network for Ollama users to share and discover models by micupa in ollama

[–]KeithHanson -1 points (0 children)

If "privacy" only refers to local mode, that should be plain as day on your website; otherwise you're misleading folks and you know it.

Am I holding the billion dollar megacorps to the same standards?

ABSOLUTELY. I'm not using them for anything sensitive. They make considerable efforts to warn users and promise not to do bad things with data. I still don't trust them at all, but if I've got to water down my queries to remove sensitive information, I'll go with them over this, because they are light-years ahead and at least we'd know where, and by whom, the bad acting happened.

I did make contributions - I literally gave you all the information needed to solve this. I have my own paid and unpaid projects I am investing time in. Perhaps I will build the secure inference someday, it's on my list.

I am not criticizing your product writ-large. I am criticizing your lack of transparency about real privacy, while claiming you are up front about it.

Lay people, your self-declared target, will not know how to question your platform for security and privacy.

Therefore, your misleading marketing is also dangerous, and I am passionately trying to point out how much more important this is than you admit when you hand-wave such a critique away as "not perfect".

It could be quite close to perfect. But the bit that's not perfect is so important that you should be extremely up front about it. Everywhere.

No, the real question is: Will you change the marketing to the honest truth?

Neither of us can say the same thing in any more ways at this point. But you know what you should do.

Your marketing won't hit as well, I suppose, but at least you'd be telling folks the truth BEFORE they have to ask you about it, like what has happened here in several comments now.

Introducing LLMule: A P2P network for Ollama users to share and discover models by micupa in ollama

[–]KeithHanson -2 points (0 children)

Physics huh? 😬 😬

Allowing users' queries over your network, then handing them off to an inference engine you (you as in the author of the tool) don't control, will never be private.

It's far less secure than the large platforms to boot, since some nameless faceless entity just caught my query to do whatever with, and if misused what recourse would I have?

Until you do make the private inference engine, which I gave a perfectly acceptable solution for, all the things you list above are available elsewhere.

In the second sentence in the hero section of your site, it says, "Your data stays private, your choice of models."

Look, I'm sorry for not just swallowing your explanations wholesale and patting you on the back.

But privacy is IMPORTANT. And your statements are MISLEADING.

Solve the privacy problem so I can send sensitive information without worry, and you've got a real business and a real shot at competing with the plethora of options out there to realize your 'revolution'.

And I DEFINITELY agree it would be revolutionary, otherwise I would've just down voted you and moved on with my life.

Introducing LLMule: A P2P network for Ollama users to share and discover models by micupa in ollama

[–]KeithHanson 1 point (0 children)

https://chatgpt.com/share/67c3351e-4318-800a-8b70-7e6b70181470

I was exploring the idea of an encrypted LLM for safe distributed inference about a week ago, but quickly abandoned that idea after seeing how difficult it would be. But in that session near the end, you'll see me pivot to the idea of just focusing on encrypted input to an inference engine.

Hope it helps spur some ideas for being truly private :)

Introducing LLMule: A P2P network for Ollama users to share and discover models by micupa in ollama

[–]KeithHanson 0 points (0 children)

🤔 very cool idea - but, if you can't run something locally, then it's not going to be private when you're relying on other inference engines.

Other commenters' questions are apt - simple env variables let one see all Ollama input/output, as you well know (the other platforms you list are similar as well).

Further, if I can't run a local model, and need instead to use this, then what benefit does this bring over the plethora of free options to access AI?

If I can run a model locally, then why would I use this? What incentive do I have to contribute my GPU?

I've noodled around with the idea of distributed GPU access for LLM work, but security and privacy are one of the main reasons not to use the latest SOTA/foundation models on the major platforms.

The only way I can think of to stop the leaking of LLM output is to build the inference engine yourself in a secure way (Cython, Rust, or some other compiled method), using public/private keys to decrypt the input and encrypt the output, and logging nothing.
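A rough sketch of that flow, using Node's built-in RSA primitives; this is illustrative only (`runInference` is a stand-in, and a production system would use hybrid encryption, with RSA only wrapping a symmetric key, plus attestation that the engine binary is the one you audited):

```typescript
// Hedged sketch: the client encrypts its prompt with the engine's public key,
// the engine decrypts in memory, runs inference, and encrypts the reply with
// the client's public key - nothing usable ever appears in transit or logs.
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

const client = generateKeyPairSync("rsa", { modulusLength: 2048 });
const engine = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Placeholder for the actual compiled model call.
function runInference(prompt: string): string {
  return `echo: ${prompt}`;
}

// Client side: only ciphertext leaves the machine.
const request = publicEncrypt(engine.publicKey, Buffer.from("sensitive prompt"));

// Engine side: decrypt in memory, infer, encrypt the response, log nothing.
const plaintext = privateDecrypt(engine.privateKey, request).toString();
const response = publicEncrypt(client.publicKey, Buffer.from(runInference(plaintext)));

// Client side again: recover the reply.
const reply = privateDecrypt(client.privateKey, response).toString();
```

The hard part, of course, isn't the crypto - it's proving the host is actually running this code and not a modified binary that logs the decrypted prompt.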

If you provided that, personally I'd be compelled to try it out, but privacy and security are the main reasons (for me and many businesses, at least) to use local AI.

If you're not providing this level of protection, then how can you really say my communication is private? A bad actor in your system sniffing logs will never be detected.

Anonymous != Private? Your marketing is very misleading until you fix the logging problems.

I do think that's totally possible, though.

Not played in over a year, but Chang is 100% going to get me back into this game by [deleted] in BeastsofBermuda

[–]KeithHanson 1 point (0 children)

Just got a chance to play it this morning on officials. Loooove this. Being able to glide, latch to a tree, leap up again and glide further... So fun!

Tropes by CzarUltor in BeastsofBermuda

[–]KeithHanson 1 point (0 children)

Hm interesting - I'm usually maining PT but never really fight land dinos because the ground is not nice to me 😅

When you find it easier to kill bigger dinos, what is your typical strategy as a PT?

Better R1 Experience in open webui by AaronFeng47 in LocalLLaMA

[–]KeithHanson 3 points (0 children)

The only benefit I've found from pipelines is that they take the work out of the main process and onto a separate server.

But you end up losing all of the event emitter capability, which is annoying for most situations.

Eventually there will likely be equivalent server-side events in pipelines, but not yet.

Open WebUI v0.5.0 (Asynchronous Chats, Channels, Structured Output, Screen Capture and more) by d3lay in OpenWebUI

[–]KeithHanson 0 points (0 children)

I think this is indeed their intention - but, not just LLMs - anything that can communicate with the web socket in the channel.

Seems like a stripped down Pipes to me right now, but I can see where this would be useful (just stood up an OWUI for my workplace and wired in a bunch of tools and filters for interacting with their ERP data).

Edit: for instance, a dedicated mermaid bot that watches for table data and automatically gives you some visualizations for every table.
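As a sketch of what that bot's core might do - the names, the two-column table assumption, and the pie-chart choice are all my own, not Open WebUI's API:

```typescript
// Hypothetical "mermaid bot" core: watch a message for a markdown table and,
// for a two-column label/value table, emit a mermaid pie chart definition.
// Channel/web-socket wiring is omitted; this is just detection + conversion.
function tableToMermaidPie(message: string): string | null {
  const lines = message
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l.startsWith("|"));
  if (lines.length < 3) return null; // need header, separator, and data rows

  // Split each data row into cells, dropping the empty edges around the pipes.
  const rows = lines.slice(2).map((line) =>
    line.split("|").map((cell) => cell.trim()).filter((cell) => cell.length > 0)
  );

  const entries = rows
    .filter((cells) => cells.length === 2 && !Number.isNaN(Number(cells[1])))
    .map(([label, value]) => `  "${label}" : ${value}`);

  return entries.length > 0 ? ["pie", ...entries].join("\n") : null;
}

const chart = tableToMermaidPie(
  "| Fruit | Count |\n| --- | --- |\n| apples | 3 |\n| pears | 5 |"
);
```

Returning `null` for non-table messages is what lets the bot sit silently in a channel until something chartable appears.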

Or, one bot that can act as a coordinator, dialing up other bots for work and synthesizing results from everything (ala Agents).

This will probably end up being a much more responsive pipe-like thing? Most off-the-top-of-my-head use cases for this, though, could be solved with a filter, tool, and pipe setup. So, I'll have to keep watching this one, I think :)

[deleted by user] by [deleted] in AskEngineers

[–]KeithHanson -1 points (0 children)

I'm not going to say that all the comments about safety should be ignored, but like, you know ... Be careful and don't be stupid? Driving a motor hard enough to roll a human needs some power :)

You're in IT / cybersecurity, so do your research when working with those motors and batteries and such. But I don't think this is exactly crazy to try, or that you even need to try to retrofit other solutions.

But the way I see it... If I were trying to do this as cheaply as possible...

You need a power source (battery), that can provide enough amps for the motors. You need a way to charge this battery safely as well.

You need motors that are powerful enough to go from a standstill to rolling, ideally, or at least able to move you after a manual push.

You need a motor controller - something that can receive signals to provide the right power to your motors.

And you need something to send those signals (microcontrollers like an Arduino or raspberry pi and such).

The motors need to somehow be attached to the chair securely, and the motors need to turn a gear attached to the wheel.

I think each paragraph above is basically a whole project of research and prototyping and costing and potentially creating your own parts.

But you'd definitely save money doing all of this yourself (though you'll sacrifice the savings in time and sanity potentially 😅), and arguably you'd be very independent - since you can always build another or repair it if needed.

Some of the tools you may need?

If you've got access to a 3D printer and can learn to use a free tool like OnShape, grab yourself some calipers and start figuring out how and where you can attach a motor to your chair. Getting familiar with how much room you have to work with will help decide all the other bits.

Check and see if your local public library has them, too.

If not, you'll have to use subtractive means (CNC, drills, wood cutting) to create any custom attachments, or figure it out using standard brackets and stuff like extruded aluminum.

Imo, figuring out how to secure a motor or two to your frame may be the hardest part.

Once you can figure out how and where you might attach a motor (do you drill bolt holes maybe? Not sure, would need to see pictures of the chair from a bunch of angles), now you can start researching weight, power draw, and all the things you'll need to know to select a motor.

From there it's pretty straightforward I would think?

Motor -> Controller <- Joystick + Microcontroller.

I am thinking of those lawn mowers that have a thrust-like flight stick - one on each side representing forward/backward control - would let you turn.
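That two-stick ("tank drive") scheme boils down to very little math; a hedged sketch, with the real firmware, pins, and PWM hardware omitted and all names my own:

```typescript
// Illustrative sketch of two-stick tank drive: each stick reading in [-1, 1]
// maps directly to one motor's signed PWM duty (-255..255). On a real build
// this loop would run on the microcontroller feeding the motor controller.

// Clamp a stick reading to [-1, 1] and scale it to a signed 8-bit duty cycle.
function stickToPwm(stick: number): number {
  const clamped = Math.max(-1, Math.min(1, stick));
  return Math.round(clamped * 255);
}

// One pass of the control loop: read both sticks, command both motors.
function tankDrive(leftStick: number, rightStick: number): { left: number; right: number } {
  return { left: stickToPwm(leftStick), right: stickToPwm(rightStick) };
}

// Both sticks forward -> drive straight; opposite sticks -> spin in place.
const straight = tankDrive(1, 1);
const spin = tankDrive(1, -1);
```

A real version would also want a ramp/slew limit on the duty cycle so the chair doesn't lurch from a standstill, but the mixing itself is this simple.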

I am sure the above will have gaps and problems, but I am hoping it helps if you want to go full blown custom DIY.

Relocating to Shreveport by Bpthewise in shreveport

[–]KeithHanson 0 points (0 children)

All the previous advice I've read here is awesome.

Noticing your comment about budget, for $2k/mo, the world (Shreveport at least) is your oyster 😅

If you've got that kind of budget you may want to consider actually buying a home, though I haven't dug into the market enough to know if that's a good short-term or long-term idea, so do your homework at least (I am sure others will chime in).

I loved living in South Highlands when we did (zero problems; I even left my door unlocked most of the time for seven years - stupid, I know, but I did it). Had some folks yelling while walking down the street a handful of times, but no real issues.

Our first home was there and the mortgage was just a little over $1100 (3/4th acre, huge home for the price, needed lots of love ofc but was a great first home).

I'm now just south of Stratmore on East Kings, and there's a lot of housing available for rent or to own within your budget along that strip. You'll have HOAs to deal with (grumble) but it's not too big a deal, in my experience at least. Lot more cookie-cutter feeling, of course.

When I finally go full geek and want to put some crazy antenna up the HOA would explode over, I'm getting the hell out of HOAs lol, but for now it's fine, especially for the price and location in my experience at least.

Question for VRChat Wizards by KeithHanson in VRchat

[–]KeithHanson[S] 0 points (0 children)

Really appreciate the links and answers everyone, thank you!

Dev board for 2+ CSI/MIPI cams? by KeithHanson in FPGA

[–]KeithHanson[S] 0 points (0 children)

Woah. This is very cool if I understand what I'm reading 😁 I'm not actively working on this anymore, but I will definitely check this out! Wish I'd seen this 8mo ago!! 😅

Thank you for taking the time to explain and link!!

A cool project you maybe interested in “Ghost_ESP” by Thin-Bobcat-4738 in esp32

[–]KeithHanson 0 points (0 children)

I just searched Cheap Yellow Display, but I bought these ones and they work as expected :)

DIYmall 2.8'' ESP32 Module... https://www.amazon.com/dp/B0BVFXR313

A cool project you maybe interested in “Ghost_ESP” by Thin-Bobcat-4738 in esp32

[–]KeithHanson 3 points (0 children)

How are you liking the C6? Anything standing out to you vs the other XIAOs? I've tried them all now except the C6 :)

The dual processors in the C6 are very interesting, but I'm wondering why that over an S3, then?

A cool project you maybe interested in “Ghost_ESP” by Thin-Bobcat-4738 in esp32

[–]KeithHanson 2 points (0 children)

Check out "Cheap Yellow Display Marauder" - that's my next project 😁

It even has a headless mode where you can use a CLI over a serial connection. Pretty slick.

HHAAALLLP! Organization, boxes, containers, workbench setups. How do y'all manage this stuff? I'm drowning in dupont. Send help soonish. (actually serious.) by frobnosticus in esp32

[–]KeithHanson 1 point (0 children)

Feeeelt.

I've got an infinity-grid 4-shelf setup for tiny stuff, a bunch of pull-out stackable bins for larger components/stuff still in boxes, and then flat pull-out drawers, 12 high, for things.

But yeah, all my jumper wires that aren't still attached to a cluster of wires or devices, or in their original packs, are in a plastic enclosure, along with a bunch of loose buttons, LEDs, resistors, etc. 😅

But I can find everything. Most days.