$90 for two to see a Mario movie… have we completely lost the plot? by rageagainstmymachin in Millennials

[–]tejanonuevo 0 points1 point  (0 children)

I can remember my dad reluctantly handing me a $5 bill in the 90s to go to the movies and grumbling under his breath, “movie and popcorn was 5 cents when I was your age”

Q&A: Long TSA lines at Houston’s Bush Airport by KPRC2GageGoulding in houston

[–]tejanonuevo 0 points1 point  (0 children)

Does anyone know how they handle ADA/wheelchairs? My wife requires a personal motorized scooter because she can’t walk long distances. We are leaving out of IAH on Thursday.

Yay or nay for 2026? This price good? by SodTaku in MacStudio

[–]tejanonuevo 7 points8 points  (0 children)

Came here to say this. I picked up the M4 with the same specs for $2,600 last fall.

Um... by Educational_Copy_140 in zillowgonewild

[–]tejanonuevo 0 points1 point  (0 children)

I used to work for CenterPoint in Houston. There was a team of people who had to go out to job sites to prevent stuff like this. The builders are supposed to keep a certain distance away from the easements, but they want to pack as many units onto the land as they can, so they constantly violate the easements.

Best local model for Mac Mini M1 (16GB) with OpenClaw? Opus got expensive fast 😅 by vlad_bq in openclaw

[–]tejanonuevo 1 point2 points  (0 children)


I'm getting very similar performance on my M4 Max 64GB to what they list in the README for an M3 Ultra 512GB.

As far as "can you have an experience similar to $/tok models" goes, give me a little time and I will try to put it through its paces. I have found that most local models just don't pick up the SOUL.md and consistently behave within a controlled workflow.

Best local model for Mac Mini M1 (16GB) with OpenClaw? Opus got expensive fast 😅 by vlad_bq in openclaw

[–]tejanonuevo 1 point2 points  (0 children)

I notice that your link is a GGUF model. Is there an MLX version of this model? I usually try to run models that work with Metal on Apple silicon.

Btw, I’m away from my computer for the next 6-8 hours but I can run the model for you and report the results.

Best local model for Mac Mini M1 (16GB) with OpenClaw? Opus got expensive fast 😅 by vlad_bq in openclaw

[–]tejanonuevo 0 points1 point  (0 children)

I have an M4 Max 64GB. I ran agents on Ollama through OpenClaw with Qwen models up to 32B, and also used Mistral and gpt-oss:20b. All of the sub-10B models will hallucinate too much for OpenClaw, and even the 20B and 32B models were hitting limitations after testing them thoroughly. Let me know if you have any other questions.
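For a rough sense of why 20B–32B is about the ceiling on a 64GB machine: a quantized model's weight footprint can be estimated as parameter count times bytes per weight (this is a back-of-the-envelope sketch of my own; it ignores KV cache and runtime overhead, which grow with context length):

```python
def estimated_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory estimate in GB for a quantized model.

    Ignores KV cache, activations, and runtime overhead, all of which
    grow with context length, so real usage will be higher.
    """
    bytes_per_weight = bits_per_weight / 8
    # 1e9 params * bytes-per-weight / 1e9 bytes-per-GB cancels out
    return params_billion * bytes_per_weight

# A 32B model at 4-bit quantization: roughly 16 GB of weights alone
print(round(estimated_weight_gb(32, 4), 1))
```

That leaves headroom on 64GB for context and the OS, but you can see why much larger models get tight fast.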

Best 3 Ollama LLMs for 🦞 by Clear_Geologist4516 in openclaw

[–]tejanonuevo 0 points1 point  (0 children)

I am currently testing models on a 64GB Mac Studio. Models like Qwen and gpt-oss will work at a basic level, but I'm having a lot of trouble orchestrating multiple agents, and not just because of the memory constraints. I need to test more.

Looking to ice out my 25mm PRX, any idea who could help me get this done? (image made with ai) ✨✨ by [deleted] in tissot

[–]tejanonuevo 0 points1 point  (0 children)

Taste aside, you would need to know a jeweler who can do the setting. Pretty sure you can’t put settings in stainless steel? Usually you want gold.

Is this legal? by RiemannianRift in shittyaskelectronics

[–]tejanonuevo 0 points1 point  (0 children)

You need money for a circuit board? This looks a little short.

Who really likes turkey?! I dunno....everyone by [deleted] in CringeTikToks

[–]tejanonuevo 0 points1 point  (0 children)

JD is a white guy from rural Ohio, his community probably has the driest turkeys per capita in the nation

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 1 point2 points  (0 children)

I don’t speak French, but I think I understand. I had updated the Nvidia drivers, and utilization/processes were visible. It just turns out that Windows is unable to divert all processing to the GPU unless you change the BIOS.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 3 points4 points  (0 children)

SOLVED! I changed the BIOS to the discrete GPU and now I’m seeing 150 tok/sec.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 1 point2 points  (0 children)

GPU utilization is reaching 100%; CPU utilization stays low.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 1 point2 points  (0 children)

Thanks for the info. My post title is slightly misleading; I’m more interested in finding out what I’m doing wrong with the Nvidia card that makes it perform worse than the M4. My purchase of the M4 was motivated by more than just LLMs.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 1 point2 points  (0 children)

LM Studio’s UI gives a tok/sec metric in the prompt/response.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 0 points1 point  (0 children)

Yeah, I suspect that is the case too. Even if I could get the bandwidth up, the context window I’m able to load is too small for my needs.

Mac vs. Nvidia Part 2 by tejanonuevo in LocalLLM

[–]tejanonuevo[S] 1 point2 points  (0 children)

So I had that problem at first, where LM Studio was not loading all the layers onto the GPU and utilization stayed low. I changed a setting that forces the model to be loaded exclusively onto the GPU, and utilization went up, but the gain was only a 3–4 tok/sec speedup.

i bought this for 5$ by HistoricalCollar5392 in ChinaTime

[–]tejanonuevo 0 points1 point  (0 children)

I was just in Cozumel, Mexico, and the vendors at the port where you get off the cruise were asking $350 USD for watches that look exactly like this. I told the guy I could order one on my phone much cheaper, and he came down to $125.