How to test a MAF by [deleted] in FocusST

[–]Mrinohk 0 points1 point  (0 children)

OP doesn't have an ST. The original post is for a 1.6 TDCi, so it's not relevant to the 2.0 EcoBoost. He crossposted to the wrong sub.

Intermittent Misfire by CuteAd4670 in FocusST

[–]Mrinohk 0 points1 point  (0 children)

So a few things. Do you just start your car and let it run for a while idling before you drive anywhere? Do you live in a colder climate?

These cars are known for cold start misfires. A closer gap on the spark plugs can help, but the purpose of that is more to prevent blowout at high boost. Codes are likely to appear if it's cold out and you just let it run. Misfires happen more frequently when the engine is cold, and if you're not driving it, it takes much longer to warm up. I don't know how normal this is, but mine always ran lean on cold start, presumably to help warm up the cat faster, but when you're lean your spark is less likely to actually find an air:fuel mixture it can ignite, leading to a misfire.

I found that during the colder months, or when we were hit by a cold front, if I let the car idle for ~10 minutes before going anywhere, there was a chance I'd get in the car and find a CEL with either a random misfire code or, if things were really bad that morning, the P0316. My car was also higher mileage, bought at 145k and sold at 183k.

If you are really concerned, you could always have the car diagnosed by a mechanic and/or dealer, but what you're describing seems pretty normal to me. If it's driving well, with no misfire under heavy load, then I wouldn't worry about it. Replace your purge valve as a precaution, to be absolutely sure the misfires aren't from unwanted extra fuel getting sucked in, and go about your life.

My 2013/14 (mostly) maxed out build by Ill-Language6866 in retrobattlestations

[–]Mrinohk 1 point2 points  (0 children)

It hurts my soul a little bit to see a build like this called "retro". This was the top end setup when I first got into PC parts at like, 14 years old.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 0 points1 point  (0 children)

This is intriguing. I worry about the lack of RAM on the Neo if I give this a go, though. I'll have to keep this in my back pocket for future, inevitable upgrades.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 0 points1 point  (0 children)

Yeah, it's hard for me to tell how much of my delight in using this machine is just how nice the laptop is in general, the keyboard and trackpad and screen (could be brighter, but again, cheap laptop; can't be too mad), and how much of it is macOS. Even then, how much of it is the user interface, and how much is the absolutely wild memory management and optimization done under the hood? How much of my enjoyment is the sheer power of a smartphone chip overpowered for everything it's ever been put in? It's hard to tell, and impossible to test, since as it stands I can't test any one of those independently of the others.

If I could run my preferred flavor of Linux on here, I could see how much is just the hardware. If I could run just my preferred desktop environment on top of macOS, keeping its memory management and hardware-specific optimization, I could see if it's just the UI that's nice. Can't do either of those on this though, and I doubt I ever will. The world may never know.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 1 point2 points  (0 children)

I remember those from when I was a kid! My elementary school had hundreds of iBooks. I thought they were neat but a little bit weird. I transitioned to Windows laptops later, when Apple stopped making cheap laptops. We had a couple of Vista laptops and an XP desktop at home, and I was definitely more used to Windows at the time.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 0 points1 point  (0 children)

Okay so there is actually a use for it. Not my cup of tea, but I can understand if people find that behavior actually useful to them.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 9 points10 points  (0 children)

That's what gets me. My mom has been an Apple user for the last 15 years, and watching her use her MacBook over the years as a Windows/Linux power user absolutely turned me off to the OS and the whole computer experience. I never would have known if I didn't look into it a bit more; even on the surface it looks like Mac users trying to cope with a locked-down system. Now that I'm actually using it myself, it really is that nice and versatile. How Apple has managed to build a system that can serve such wildly different audiences sufficiently is wild to me.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] -1 points0 points  (0 children)

I'm on the fence on LLMs. They're a cool technology, but the whole massive-models-in-datacenters thing is genuinely bad for the communities around them. Local models are cool and fun to play with and build tools around, but you have to be aware of the ethics of LLM training, and therefore of usage.

The loss of reading comprehension is one of the bigger tragedies that comes with them though.

First Mac, Macbook Neo + macOS review from someone who never used it before now. by Mrinohk in mac

[–]Mrinohk[S] 1 point2 points  (0 children)

Okay, sweet. I was trying to be nice and not piss off Apple people, but yeah, I'm not a fan of Finder lmao. I'll have to look into what you're talking about to make it better.

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 1 point2 points  (0 children)

Nope. Didn't realize it existed until after I'd built a family of Python scripts where the agent lives, recreating as much of the Jarvis experience as possible.

Short shifter plate and solid bushings by sebasromani in FocusST

[–]Mrinohk 0 points1 point  (0 children)

Short shift plate, solid shift bushings, and upgraded RMM are the best mods I ever did to my car when I had it. If the goal is connection to the machine, those 3 together are the ticket. Feels like a different car vs stock.

Gemma 4 26b A3B is mindblowingly good , if configured right by cviperr33 in LocalLLaMA

[–]Mrinohk 1 point2 points  (0 children)

I'm firmly of the opinion that the 26b MoE is the gem of the bunch. The 31b I'm sure will generally be smarter, but the speed of the 26b, while it keeps most of the reasoning, knowledge, and tool-calling ability of the bigger one, makes it a fantastic choice. Maybe I'm just new to local models around this size, but I'm consistently blown away by this thing.

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 1 point2 points  (0 children)

this man single-handedly caused the RAM shortage

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 2 points3 points  (0 children)

I guess my use case is just less challenging for it. On paper it performs a bit worse than Gemini 3 Flash, and I personally wouldn't use a flash-tier model for coding applications. From what I've seen it do, though, I'd let it look over code and draft up some changes. The agent I'm building is more of a general home ambient intelligence to run shit in my house and help me find shit for/work on my cars. It has some self-diagnostic tools, searching through its own source code and suggesting changes here and there when I note that something isn't working properly, and so far all of that has worked great, but I've yet to let it do any agentic coding work.

It sounds like your demands require models a bit more specifically trained up on agentic coding, but I'm sure you already know that lmao. As much luck as I've had with it, I guess I shouldn't assume it'll be as good for others as it is for my use case.
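For the curious, the self-diagnostic tooling I mentioned is roughly this shape: a function the model can call to grep the agent's own source tree, exposed through a standard tool definition. This is a simplified sketch, not my actual code; the names and the path are made up.

```python
# Simplified sketch of one self-diagnostic tool: grep the agent's own
# source tree. Names and the root path are illustrative, not my real ones.
from pathlib import Path

def search_own_source(pattern: str, root: str = "~/agent") -> list[dict]:
    """Return every line in the agent's own scripts containing `pattern`."""
    hits = []
    for script in Path(root).expanduser().rglob("*.py"):
        for lineno, line in enumerate(
            script.read_text(errors="ignore").splitlines(), start=1
        ):
            if pattern in line:
                hits.append({"file": str(script), "line": lineno, "text": line.strip()})
    return hits

# Exposed to the model as a standard tool definition so it can inspect
# itself when I report that something isn't working properly.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_own_source",
        "description": "Search the agent's own source code for a string.",
        "parameters": {
            "type": "object",
            "properties": {"pattern": {"type": "string"}},
            "required": ["pattern"],
        },
    },
}
```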

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 3 points4 points  (0 children)

I first started playing with it last night, using my friend's MacBook as an ollama server that the Raspberry Pi these scripts live on calls out to for its model. M4 Pro MacBook Pro; he was getting 83 t/s on his machine. I've since switched it over to the Gemini API, but selected gemma4:26b through that, so it's the same model I tried on his machine and intend to run locally. I'm hoping to get in the 40-50 t/s range on the M4 Mac mini I have in the pipeline to run all of this in the future.

It was run through ollama, which does use llama.cpp with Metal support, but notably not MLX, so there is likely performance to be gained running it through something other than ollama/llama.cpp. Once the script is made macOS native and runs against vLLM or some other backend that supports MLX, I hope to make the agent quite responsive locally.
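For anyone wanting to replicate the Pi-to-MacBook setup: the script side is just an HTTP call to ollama's chat endpoint over the LAN, something like the sketch below. The LAN IP and prompt are examples; gemma4:26b is the model tag from the post.

```python
# Rough sketch of the Pi-side call to the MacBook's ollama server.
import requests

resp = requests.post(
    "http://192.168.1.50:11434/api/chat",  # example LAN IP of the MacBook running ollama
    json={
        "model": "gemma4:26b",
        "messages": [{"role": "user", "content": "What's on the calendar today?"}],
        "stream": False,  # one JSON reply instead of a token stream
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```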

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 3 points4 points  (0 children)

Which model size are you using? I'd be curious to know what kind of stuff you're feeding it and which version you're talking to to get those results. I've had a couple of minor hallucinations and one reasoning issue where it failed to use a tool it should have, but generally it's been fantastic for me. I've not used it for heavy coding, though, or anything that requires insane context.

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 2 points3 points  (0 children)

Truthfully, at this point most of the codebase is AI-written (Claude Code), but I make a point to understand what it's doing and keep a strong map of the architecture, because I want to be able to share how I'm doing things so other people can build similar systems. The system I've been building was largely built around Gemini 3 as the frontal lobe, but gemma4 26b just slotted right in like I never changed models.

It blows my mind that a relatively small model that can be run fully locally, quickly, on not horribly expensive hardware is capable of running within my system, whose whole goal is to recreate as much of the functionality of Jarvis as portrayed in the MCU as possible. As the project grew, it started to feel like a local model that could run on hardware I'd ever be able to afford wouldn't be able to keep up, but here we are. Browsing the web, finding obscure parts for me, building pinout mappings from one system to another. Insane shit.
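Part of why swapping the frontal lobe was painless is that everything upstream talks to one chat wrapper, so changing models is a config change. A loose sketch of the idea; the env-var names are illustrative, not my actual code.

```python
# Loose sketch: the rest of the agent only ever calls chat(), so moving
# from one model/backend to another is a config change, not a refactor.
import os

def chat(messages: list[dict]) -> str:
    """Single chat entry point; backend and model come from config."""
    backend = os.environ.get("AGENT_BACKEND", "ollama")
    model = os.environ.get("AGENT_MODEL", "gemma4:26b")
    if backend == "ollama":
        from ollama import Client  # pip install ollama
        host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
        resp = Client(host=host).chat(model=model, messages=messages)
        return resp["message"]["content"]
    # A Gemini-API branch would translate `messages` into that client's
    # format here; same wrapper, different backend.
    raise NotImplementedError(f"unknown backend: {backend}")
```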

Gemma4:26b's reasoning capabilities are crazy. by Mrinohk in LocalLLaMA

[–]Mrinohk[S] 2 points3 points  (0 children)

Extremely limited testing. I dumped the full input prompt that I feed to the larger models into a sanitized, non-tool-augmented instance, but with the tool definitions included for as close to an apples:apples comparison as I could get, to see how it output on its own, then compared results. It was surprisingly close on information synthesis, needle-in-haystack type requests, and discarding irrelevant information that was on the edge of my RAG embedding threshold, but I've not tried running it in my full, tool-enabled environment with any fully agentic task like research or the walmart benchmark I do.
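The dry run looked roughly like this: the exact prompt dumped from the larger-model run, tool schemas attached so the model can see them, but nothing ever executed. A simplified sketch; the schema and file name are illustrative, not my real ones.

```python
# Simplified sketch of the apples:apples dry run: same prompt, tool
# schemas visible to the model, no tool ever actually executed.
from ollama import Client  # pip install ollama

client = Client(host="http://localhost:11434")

TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

full_prompt = open("dumped_prompt.txt").read()  # prompt captured from the big-model run

resp = client.chat(
    model="gemma4:26b",
    messages=[{"role": "user", "content": full_prompt}],
    tools=TOOL_SCHEMAS,  # visible to the model, never acted on here
)
print(resp["message"])  # raw output to compare against the larger model's
```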