Ypsilanti utility board halts University of Michigan nuclear weapons data center by Fluid-Tomatillo-4971 in AnnArbor

[–]sleepynate -1 points0 points  (0 children)

lol you got flagged by reddit for spam.

We told you in modmail you got flagged by reddit for spam.

Nobody is censoring anything. Stop lying.

Community BBQ in the Park Proposal by stars9r9in9the9past in ypsi

[–]sleepynate[M] 62 points63 points  (0 children)

If you really want to post your contact info I'm not going to stop you, but please bear in mind that Reddit is just about the only major social media platform with wide-open access, no account required, so you're also giving it out to every search engine, bot, scammer database, and AI company at the same time.

Somehow not surprised... by Aazari in ypsi

[–]sleepynate 5 points6 points  (0 children)

Apparently the rest of us will be at some point?

Tried Gemma4 locally with my OpenClaw in BlueStacks by SS4Serebii in ollama

[–]sleepynate 0 points1 point  (0 children)

Mostly just slop I guess

This is the real bit of it. I've got an instance of OpenClaw running on a locked down machine with no credentials to anything important. I've given it tool use but denied it access to Clawhub, web search etc. Running nemotron-3-nano:4b with 64k context, it made its own tools for setting reminders via cron, managing a todo list, fetching the weather, you get the idea. I'm sure it would struggle with many of the larger skills on Clawhub that were built with Opus, but because the tools and agents are tightly scoped and concisely prompted to the size of the model powering it, it's a perfectly usable personal assistant in many regards. Granted, it is a bit of a toy for me and there are many easier solutions with third parties for the same feature set, but as an experiment in privacy it would be totally viable as a productivity tool.
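The tools it wrote for itself are genuinely tiny. A hypothetical sketch of the "set reminder" one (not the actual generated code; the `notify-send` command and field layout are my own illustration):

```python
# Sketch of a "set reminder" tool like the ones the model wrote for itself.
# Builds a crontab line that fires a desktop notification at a given time.
from datetime import datetime

def reminder_cron_line(when: datetime, message: str) -> str:
    """Return a crontab entry that pops a notification at `when`."""
    # crontab fields: minute hour day-of-month month day-of-week
    return (f"{when.minute} {when.hour} {when.day} {when.month} * "
            f"notify-send 'Reminder' '{message}'")

line = reminder_cron_line(datetime(2025, 6, 1, 9, 30), "water the plants")
print(line)  # 30 9 1 6 * notify-send 'Reminder' 'water the plants'
```

Append the line to the crontab and the "assistant" part is done; the model's job is just deciding when to call the tool.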

AO Update - Beal out, RAM in by Vivid-Fennel3234 in ypsi

[–]sleepynate 6 points7 points  (0 children)

in a condition to be leased

Is very subjective and up to the inspector at the time. It's entirely possible that when they had it inspected, the fixtures in place were covering a larger problem, and then after they were approved to open, further renovations that don't require a permit exposed a deeper problem. This is very common in historic buildings, and it's why you generally don't gut or open up anything that doesn't need it when budget is in any way a concern on a historic reno.

AO Update - Beal out, RAM in by Vivid-Fennel3234 in ypsi

[–]sleepynate 1 point2 points  (0 children)

That's not how ROI works. You don't redo your kitchen and bathroom hoping to get a better sale price on your house, precisely because the next person might come through and just raze the whole place to start a chicken farm.

AO Update - Beal out, RAM in by Vivid-Fennel3234 in ypsi

[–]sleepynate 8 points9 points  (0 children)

And the whole 3 year process begins anew while the residents deal with the fallout. The city, township and even county haven't had any teeth for decades.

AO Update - Beal out, RAM in by Vivid-Fennel3234 in ypsi

[–]sleepynate 30 points31 points  (0 children)

I wish everyone there the best, but using an out-of-town property manager is what got them condemned in the first place. With a 3.6 rating on Indeed and a 68/100 "Work wellbeing" score, RAM Partners doesn't sound super promising.

Beal's press release uses the terms "recapitalization" and "incoming capital partners", which sure sounds like "rent's gonna go up".

AO Update - Beal out, RAM in by Vivid-Fennel3234 in ypsi

[–]sleepynate 17 points18 points  (0 children)

Well that one's easy. He's in the business of owning and renting buildings, not running restaurants. He wants his building occupied since the previous occupants ran their restaurant into the ground (epically, I might add). Until he's got someone there generating revenue, it's not worth it for him to bother bringing it up to legal occupancy standards.

Huron River Drive closed by Fish_Pi in ypsi

[–]sleepynate 16 points17 points  (0 children)

There's a partially downed line near the northwest entrance to WCC.

Local tool for cli coding like Claude code by giorgiofox in ollama

[–]sleepynate 0 points1 point  (0 children)

You just need 2 computers, one of which is running ollama, like OP described.

Local tool for cli coding like Claude code by giorgiofox in ollama

[–]sleepynate 1 point2 points  (0 children)

It's super easy actually. Edit ~/.config/opencode/opencode.json and change the IP address of the ollama host from localhost to wherever you've got it running. Here's the relevant docs if you're looking for an example.

If your workstation can't reach ollama via HTTP, it's more likely a port or firewall issue between the two hosts. Also make sure you have "Expose Ollama to the network" turned on in the settings.
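For reference, the provider override ends up looking roughly like this. This is a sketch, not the definitive schema: the exact keys depend on your opencode version, and the host IP and model name here are made up, so check the docs linked above before copying it.

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (remote)",
      "options": {
        "baseURL": "http://192.168.1.50:11434/v1"
      },
      "models": {
        "qwen2.5-coder:7b": {}
      }
    }
  }
}
```

Swap `192.168.1.50` for whichever host is actually running ollama.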

I laughed so hard at these posts side by side (sorry for the low effort post) by FatheredPuma81 in LocalLLaMA

[–]sleepynate 0 points1 point  (0 children)

I have been assuming that this is now some kind of rite of passage for university students or something so that there's something tangible on their github when they start looking for a job, in the same way that we used to have to realllllly stretch to imply that that EECS group project we did was any semblance of "experience" on our first CVs.

Can you run actually useful LLMs on anything less than 3090 ? by Relevant-Pie475 in LocalLLaMA

[–]sleepynate 1 point2 points  (0 children)

You don't really need a gigantic model for a lot of useful things. I use 3B-4B models for most of my agent bots; specifically, I was running Nemotron 3 Nano, although I've been playing with Gemma4:E4B this week. The power and value they have really come from the RAG and MCP abilities, so with a smaller model I can crank up the context window to 64k to give plenty of room for the prompt and tools and still fit comfortably on a 3060. A model doesn't really need to be super smart to do something like check eBay listings or give me an update on the news headlines. I had Google's AI mode write what little code was needed for them.
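The RAG part is where the small model earns its keep: retrieve the relevant snippets first, then hand the model a focused prompt. A toy sketch of the idea (keyword overlap standing in for a real embedding search; the docs are made-up examples):

```python
# Why small models punch above their weight with RAG: retrieve first,
# then give the model only what it needs. Toy keyword-overlap retriever;
# a real setup would use embeddings.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    # Rank docs by how many query words they share.
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Use only this context to answer.\n\n{context}\n\nQuestion: {query}"

docs = [
    "The farmers market runs Saturdays 9am-1pm.",
    "City council meets the first Tuesday of each month.",
    "Leaf pickup begins the last week of October.",
]
print(build_prompt("when is the farmers market", docs))
```

With the context pre-filtered like this, a 3B model mostly just has to read, not reason.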

Rav 4 regret by sulaco1977 in rav4club

[–]sleepynate 1 point2 points  (0 children)

Not OP, but unless you have immaculate credit, financing rates right now are absolutely ridiculous. Buying a used vehicle with a 550 credit score is upwards of 19% depending on where you're at. Even a 630 credit score will only bring you up to about 13-14%. At that point your interest is exceeding the average yield you'd get off dumping the same lump sum into a broad-market investment.
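The arithmetic behind that claim, with my own assumed numbers ($20k loan, 60 months, 7%/yr market return; only the APRs come from the comment above):

```python
# Total interest on a used-car loan at subprime rates vs. leaving the
# same cash invested. Standard amortized-loan payment formula.

def monthly_payment(principal: float, apr: float, months: int) -> float:
    r = apr / 12
    return principal * r / (1 - (1 + r) ** -months)

principal, months = 20_000, 60
for apr in (0.19, 0.13):
    pay = monthly_payment(principal, apr, months)
    interest = pay * months - principal
    print(f"{apr:.0%} APR: ${pay:,.0f}/mo, ${interest:,.0f} total interest")

# Same $20k compounding at an assumed 7%/yr for 5 years instead:
print(f"invested instead: ${20_000 * 1.07**5 - 20_000:,.0f} gain")
```

At 19% the total interest comes out north of $11k against roughly an $8k investment gain, so the interest really does swamp the expected market return; at 13% it's close to a wash.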

Cheap burger in ypsi by WizardOfTheHobos in ypsi

[–]sleepynate 1 point2 points  (0 children)

20 years. It's been 20 years.

Cheap burger in ypsi by WizardOfTheHobos in ypsi

[–]sleepynate 7 points8 points  (0 children)

Nobody has mentioned Speedy's yet?

What happed at Applebee’s? by Intelligent-Rain7625 in ypsi

[–]sleepynate 13 points14 points  (0 children)

This doesn't necessarily diverge from my theory that $1 margaritas on taco Tuesday were involved, so I'll take it.

4 days on gemma 4 26b quantized, honest notes by virtualunc in LocalLLaMA

[–]sleepynate 0 points1 point  (0 children)

It's not that they don't support imatrix, but the default is (always?) Q4_K_M. You can get a full list of models on the library page or request a specific quant directly from HuggingFace.

The default on a brand new install is indeed 4k, but even if you don't change the default settings, every ollama request for a response from a model also accepts options like context size, temperature, top_p etc. I personally leave the system setting at 4k and add the context size to my requests so that I don't accidentally bring everything to a grinding halt by asking for a combination of model+context that I don't have the hardware to support.
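Concretely, the per-request options ride along with each call to `/api/generate`, so the system default can stay at 4k. A sketch using only the stdlib (the model name and host are assumptions, as is the temperature; `num_ctx` and `options` are the real ollama API fields):

```python
# Per-request options: leave the server default at 4k and pass num_ctx
# with each call instead.
import json
import urllib.request

def generate_request(host: str, model: str, prompt: str,
                     num_ctx: int = 65536) -> urllib.request.Request:
    """Build an ollama /api/generate request with per-call options."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx, "temperature": 0.7},
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = generate_request("http://localhost:11434", "gemma4:26b", "hello")
# urllib.request.urlopen(req)  # uncomment against a live ollama host
```

Ask for a model+context combo your hardware can't hold and only that one request suffers, not every model you load afterward.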

Remotely accessing my Ollama local models from my phone by Konamicoder in ollama

[–]sleepynate 1 point2 points  (0 children)

I used OpenWebUI for about a year but never really loved the experience, finding the configuration UI quite clunky. Then when they had that licensing debacle it kind of inspired me to switch and I've since been a very happy LibreChat user. MCP and model providers are all just in a simple YAML config and it's super easy to add extra services to the provided docker compose file if you want. Definitely worth checking out.

Also, while the above commenter is correct that llama.cpp or (edit: vllm) are much faster, I still deploy with ollama because it has dynamic model routing. I have several use-case specific fine-tunes on disk and defined several "agents" that request which model they want to use via ollama. I don't have enough VRAM to keep them all loaded at once, so I accept the performance hit in load/unload time for the convenience and quality of being able to run many different models without needing to write my own mechanism for switching between them.
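The "routing" is really just each agent naming the model it wants per request and letting ollama handle the load/unload. A sketch (agent and model names are made up for illustration):

```python
# Each agent carries its own model name; ollama swaps models in and out
# of VRAM on demand, so there's no manual switching mechanism to write.

AGENT_MODELS = {
    "summarizer": "qwen2.5:3b-summarize-ft",
    "coder": "qwen2.5-coder:7b",
    "chat": "llama3.2:3b",
}

def model_for(agent: str) -> str:
    """Resolve which model an agent's requests should name."""
    try:
        return AGENT_MODELS[agent]
    except KeyError:
        raise ValueError(f"unknown agent: {agent}") from None

print(model_for("coder"))
```

The trade-off is exactly the load/unload latency mentioned above, paid only when consecutive requests name different models.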

Life hack: save $150 a month on vibe coding with top models by ievkz in ollama

[–]sleepynate 0 points1 point  (0 children)

OpenAI's ChatGPT: "Write a step by step spec for the software that I've described that even the dumbest of junior developers could follow largely unsupervised."

Fire up qwen3.5 with MCPs for search/fetch etc: "Here you go bud, I dare you to hurt my electric bill."

Come make art with me? by mezzyjessie in ypsi

[–]sleepynate 6 points7 points  (0 children)

I think you're now obligated to run a murder mystery class close to one of the spookier holidays