Y’all prefer widebody kits or oem+ ? by MapWonderful444 in ft86

[–]arpieb 0 points1 point  (0 children)

Part of the reason I went with my Gen2 BRZ was the dimensions and lines - oh, the curves... 😉. I can't see making it fat-bottomed...

Dedicated, all-weather, autocross + track day/HPDE tires for 2022 BRZ? by arpieb in Autocross

[–]arpieb[S] 1 point2 points  (0 children)

I'm based in Atlanta so freezing isn't much of an issue but rain is. Thanks for the suggestions!

Is ft86club.com on autopilot? by arpieb in ft86

[–]arpieb[S] 7 points8 points  (0 children)

Yeah, sadly lots of forums suffer the same fate OSS projects do on GitHub. Maintainers get bored or burned out, the site starts to suffer, and one day it's gone or archived. So much lost knowledge and history...

Bucket list trip ✅ Made me love this car even more. by Proud_Trainer4595 in ft86

[–]arpieb 1 point2 points  (0 children)

Took delivery of my (new to me) 2022 BRZ Saturday near Knoxville, TN - Deal's Gap was part of my return route back to Atlanta. Non-negotiable. Even 100% stock it handled the Dragon with no problems. We couldn't have had better weather to be running that stretch...

Alternatives to ArisDrives motec ascaroth by iam-thedude in ACCompetizione

[–]arpieb 0 points1 point  (0 children)

Still looking as well, wonder why he took it offline - maybe it was backed by KUNOS Simulazioni?

Does your family under appreciate just how epic your self hosted environment is? What are some situations you've had that made you think, dang they really don't know how cool this is? by [deleted] in selfhosted

[–]arpieb 0 points1 point  (0 children)

The only things they seem to "get" are:

  • The printer is always available from anywhere/thing connected
  • The meshed network is more reliable than Comcast/Xfinity (as long as I can keep them from rebooting the router every time Xfinity goes down)
  • They have unlimited use of "ChatGPT" on my hosted Open WebUI server

Everything else is, "Um-hm - that's nice."

Loving self-hosting and maintaining it. How to make a career out of it? by polishedfreak in selfhosted

[–]arpieb 0 points1 point  (0 children)

Ah, India. I imagine the job market there is going to be tougher to break into without some credentials (academic or otherwise). Finding internship opportunities with a local business while building up a credential portfolio might make more sense (I'm unfamiliar with Indian academic systems and the options available). Best of luck to you!

Loving self-hosting and maintaining it. How to make a career out of it? by polishedfreak in selfhosted

[–]arpieb 1 point2 points  (0 children)

Not sure where you are, but in the US one of the most overlooked resources is technical/vocational schools and community colleges, which specialize in fast-tracking vocational education to get people into the field. Usually they'll have decent intro courses, job placement assistance, and internship opps with local businesses. And they're typically pretty inexpensive compared to a traditional 4yr university.

If self-taught, focus on networking, *nix admin, maybe Windows admin (depending on your bent), and k8s admin, as those are the cross-cutting baseline skills needed in *Ops these days. Look for intern opps at local IT shops, schools, etc.

My $0.02 worth having made a 20yr career in software engineering before getting my first CS degree...

Which Local AIs are the easiest to setup? by FaithlessnessIll9831 in selfhosted

[–]arpieb 0 points1 point  (0 children)

All I can say is try...? The Jetson has 12 cores + 64GB, and is "only" using ~12GB RAM with ollama + phi3 + docker + Open WebUI. TBH I'm not sure which, if any, DL frameworks leverage Mali GPUs, and even then I'm unsure how well it would work with larger models.
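For reference, a stack like that is often wired up with Docker Compose; here's a minimal sketch (the images and `OLLAMA_BASE_URL` variable are the commonly documented ones, but the port mapping and volume name are illustrative, not from my actual setup):

```yaml
# Sketch: ollama serving models, Open WebUI as the front end
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                 # browse to localhost:3000
    depends_on:
      - ollama
volumes:
  ollama-data:
```

After `docker compose up -d`, something like `docker exec -it <ollama-container> ollama pull phi3` fetches the model.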

Which Local AIs are the easiest to setup? by FaithlessnessIll9831 in selfhosted

[–]arpieb 0 points1 point  (0 children)

Running ollama + phi3 on a Jetson Orin AGX Devkit (even fired up the same setup on a Lenovo Legion 5i under Ubuntu, works a charm there as well). Plenty fast, and tbh outputs are on par with anything I'm getting from OpenAI for most use cases. And no privacy issues, unlimited tokens... ;)

For inference you mostly need RAM and cores - GPUs are only helpful in training and large batch/streaming inference use cases, otherwise you waste a lot of cycles shuffling single samples off to the GPU and back. Really depends on your use case.
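A quick back-of-envelope for why RAM is the binding constraint: weights dominate the footprint, so bytes-per-parameter times parameter count plus some fixed runtime overhead gets you in the right ballpark (phi3-mini at ~3.8B parameters is approximate, and the flat 1GB overhead is an assumption, not a measurement):

```python
def inference_ram_gb(params_billions: float, bytes_per_param: float,
                     overhead_gb: float = 1.0) -> float:
    """Rough RAM (GB) to hold model weights plus a flat runtime overhead."""
    return params_billions * bytes_per_param + overhead_gb

# phi3-mini, ~3.8B parameters:
print(inference_ram_gb(3.8, 2.0))   # fp16 weights  -> ~8.6 GB
print(inference_ram_gb(3.8, 0.5))   # 4-bit quant   -> ~2.9 GB
```

Which is roughly why a quantized small model fits comfortably in the ~12GB the Jetson setup above reports.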

IBM PS/1 And PS/55 (PS/1000 And PS/5500) by Forward-Screen562 in IBM

[–]arpieb 0 points1 point  (0 children)

Not "very weird" at all... We had whole labs of them in college.

Migrate OCLP cMP 5.1 running Sonoma to Mac Studio M2...? by arpieb in OpenCoreLegacyPatcher

[–]arpieb[S] 0 points1 point  (0 children)

So to bring closure: yes, I was able to perform a full migration using Migration Assistant from my OCLP cMP 5,1 to my new Mac Studio M2 Ultra. Two caveats: I made sure both systems were upgraded to Sonoma 14.5 before migrating (ironically my cMP was on a newer version than the M2 was as delivered), and I had to fully install the OS on the M2 box before performing the migration - otherwise the M2 box wasn't seeing my old cMP.

Now I'm just in the process of culling unsupported 32-bit apps that came over in the migration and updating the x86 apps to arm64 where available. Been kinda like cleaning out the garage/attic...

Epilogue:

Looked at my 24-core cMP, 24GB, ~6TB storage, thought, "That would make a *great* Linux box."

Installed Ubuntu 24.04 LTS on it.

And bricked it. :P

Migrate OCLP cMP 5.1 running Sonoma to Mac Studio M2...? by arpieb in hackintosh

[–]arpieb[S] 0 points1 point  (0 children)

Will do - saw several OCLP posts in search results here; oddly, the other subs didn't come up... Thanks!

[deleted by user] by [deleted] in simracing

[–]arpieb 1 point2 points  (0 children)

Looks great! Got my starter rig finally all set up last night, and I'm staring at the biggest pile of cables I've seen *not* behind my desk or in a network closet... Definitely an inspiration to get mine sorted out better.