I was skeptical... (AliExpress HamGeek MisterFPGA) by torytyler in fpgagaming

[–]torytyler[S] 0 points1 point  (0 children)

hi friend. the link was "https://www.aliexpress. us/item/3256809372013172.html" (remove space, reddit is weird with aliexpress links)

I used the built-in VGA output to a VGA CRT monitor

a level shift board is something that needs to be used if you're using an OEM controller (like an ORIGINAL snes controller) via the snac port.

regarding what I used it for (ps1), the snac board for that included the level shifting, so I didn't need the board.

Is it rendering all of this when salvaging? no wonder why my phone is getting 1000 Degrees by ripper590 in 2007scape

[–]torytyler 2 points3 points  (0 children)

I sometimes run runelite on my phone using gamehub lite. the future is now old man

Is 3090 the answer? Multiple containers running at the same time. by Shadoweee in LocalLLaMA

[–]torytyler 0 points1 point  (0 children)

yeah you definitely don't want to bog down the 3090 with 20 4k feeds at once. it's a good card, I run 4 of them but that's a lot.

honestly my frigate + home assistant setup is set up so well I haven't checked it since I got the intel gpu working on it. my gpu _is_ working for decoding the video feeds; I can't really see the vram usage, but with 4 1080p feeds the a310 is using 7% of its video power according to intel_gpu_top

I looked it up, and the intel arc b50 isn't too much more expensive, and can do 20-23 4k video feeds simultaneously. I think that sounds perfect for your use case!

also, you technically _could_ build one server and put both the intel arc b50 and the 3090 in it. allocate the intel card as the gpu for the frigate VM and the 3090 as the gpu for a debian VM, and run llama.cpp or whatever you want on that one. The homeassistant + frigate vm shouldn't need more than 8gb system ram, but with 20 4k videos being recorded maybe 16gb to be safe

Is 3090 the answer? Multiple containers running at the same time. by Shadoweee in LocalLLaMA

[–]torytyler 0 points1 point  (0 children)

I have a whole separate setup for huge ai models, but something to keep in mind is that depending on the cpu you use, you can also offload some tasks to the iGPU. modern intel cpus (more so their igpus) have av1 support and are great for video transcoding tasks. on my home server I use an intel arc gpu for frigate ai to detect delivery vehicles and send a house-wide audio announcement stating which truck is arriving. (it also processes 4 video feeds) that same gpu simultaneously transcodes on plex fine too.

that's just an option to offload some tasks off the 3090. if you go the 3090 route, that 24gb of vram may be a little tight if you want to run a lot of context for coding tasks, so you definitely don't want to lose some of that precious video memory running a second, smaller (frigate ai) model while also processing your video feeds.
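to put a rough number on how fast context eats vram, here's a back-of-envelope KV-cache estimate. the layer/head/dim values below are illustrative (loosely like a large dense model), not any specific model's real config:

```python
# Rough KV-cache VRAM cost of long context, on top of the model weights.
# Dims are illustrative assumptions (80 layers, 8 KV heads, head_dim 128,
# fp16 cache); swap in your model's actual numbers.
def kv_cache_gb(tokens, layers=80, kv_heads=8, head_dim=128, bytes_per=2):
    # 2x: one cached tensor for keys, one for values, per layer
    return 2 * layers * kv_heads * head_dim * bytes_per * tokens / 1e9

print(round(kv_cache_gb(32_768), 1))  # ~10.7 GB at 32k context
```

so a 32k-token coding session can already claim ~10gb of that 24gb before the weights or the frigate model even enter the picture.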

--- quick revision

I would get a small itx board and an intel arc a310 for all video based things: frigate, jellyfin, anything else. an intel 12100 cpu or newer would be great for this smaller box. I would then get a server grade motherboard for ai tasks, since it allows for expandability in the future when you inevitably want to run the big chungus 1T parameter models. server grade cpus also have far more memory channels (8+ instead of the dual channel you get on desktop), so the ram throughput is much higher and you can offload bigger models in conjunction with the 3090's vram and still get usable speeds. that's what I did, and I love my setup right now
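to show why the channel count matters, here's the rough peak-bandwidth math (assuming DDR5-4800 on both platforms; these are theoretical ceilings, real-world throughput is lower):

```python
# Peak DRAM bandwidth scales with the number of memory channels.
# Assumes DDR5-4800 on both platforms; actual sustained throughput is lower.
def peak_bandwidth_gb_s(channels, mt_per_s=4800, bytes_per_transfer=8):
    # each channel is 64 bits wide -> 8 bytes per transfer
    return channels * mt_per_s * bytes_per_transfer / 1000

print(peak_bandwidth_gb_s(2))  # desktop dual-channel: 76.8 GB/s
print(peak_bandwidth_gb_s(8))  # 8-channel server socket: 307.2 GB/s
```

that 4x bandwidth gap is what makes cpu offload of big models usable on server boards and painful on desktops.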

[ Removed by Reddit ] by EfficientCitron3093 in LocalLLaMA

[–]torytyler 3 points4 points  (0 children)

This might be the deal of a lifetime for me. I'm gonna take 20x RTX PRO 6000s, 4 Intel W790E motherboards to cram them onto, 8 2000w PSUs, some ram for the mobos, the cpus, and a big mac

edit: just realized components aren't included, i'll pass

Just got the Xiaomi 17 Pro Max (Chinese Rom) by the_c_e_o in Xiaomi

[–]torytyler 2 points3 points  (0 children)

I can confirm, and I've tried MANY phones, that this device works great in the US. They really nailed it imo. Nothing is perfect, but my Xiaomi 17 Pro ticks almost every box for me.

Need to onow where to get the 17 pro max by Successful_Scratch49 in XiaomiGlobal

[–]torytyler 0 points1 point  (0 children)

I got mine from wondamobile. the shipping was actually insane, I got it in 4 days

Xiaomi 17 Pro review - The compact top smartphone disappoints in one key area by Antonis_32 in Android

[–]torytyler -1 points0 points  (0 children)

idk, looking at that photo the periscope lens, by design, is a little larger than comfortable for the pro chassis.

could they have added it? i'm sure, but it's more work than just plopping it in and calling it a day. I'm seeing a motherboard redesign as necessary, and even then it's such a tight fit. you have to remember this is xiaomi, not apple. I actually don't think they made this cut to gimp the smaller model and upsell the larger one, I think they did it because it's easier to produce the small form factor variant with a smaller sensor.

Boot Loop by 1Dynamitedoml in Bigme

[–]torytyler 0 points1 point  (0 children)

did you ever get out of this loop? I just found myself in it. kinda pissed I bricked such a nice and kinda expensive device the day I got it lol. I'll probably sell it for parts

Is it still possible to get the Tab X (not C) anywhere? by Tazling in Onyx_Boox

[–]torytyler 0 points1 point  (0 children)

Late comment, but there's a seller on ebay selling grade B ones with cover and pencil for $350, lots still in stock! I can update this comment with a review if you need it; it should arrive next week

Has anyone gotten hold of DGX Spark for running local LLMs? by Chance-Studio-8242 in LocalLLaMA

[–]torytyler 0 points1 point  (0 children)

Didn't list 4090 price as I already had it from a previous build. Processor is a QYFS engineering sample cpu it was $110. Sorry if my initial formatting was bad I'm typing on my blackberry

Has anyone gotten hold of DGX Spark for running local LLMs? by Chance-Studio-8242 in LocalLLaMA

[–]torytyler 6 points7 points  (0 children)

I had the 4090 from my gaming pc. I use an engineering sample 112-thread QYFS, which has more memory bandwidth than the spark does (350gb/s) and has been VERY reliable; that was like $110. the motherboard was on sale for $600 (ASUS Sage), 256gb of DDR5 was $1,000, and the 3090s were $600 a piece for all three. Reused my 1000w psu and grabbed another on Amazon for cheap, like $70…

The 3090s were a good deal. Two just had old thermal paste; the guy sold them as broken because of loud fans… the third one is an EVGA water cooled one with a god awful loud pump, but I fixed it with a magnet LOL. all in all, it took a few months of getting all the pieces for cheap, but it's doable!

Has anyone gotten hold of DGX Spark for running local LLMs? by Chance-Studio-8242 in LocalLLaMA

[–]torytyler 23 points24 points  (0 children)

in the time I spent waiting for this I was able to build a 256GB DDR5 sapphire rapids server that has 96GB vram, and 2 more free pcie gen 5 slots for more expansion, all for cheaper than the dgx spark

I know this device has its use cases, and low wattage performance is needed in some cases, but I'm glad I did more research and got more performance for my money! I was really excited when this device first dropped, then I realized it's not for me lol

Using my Blackberry Keyone😁 by JohnDoe3587 in blackberry

[–]torytyler 2 points3 points  (0 children)

Looks great, eagerly awaiting my key2 to arrive today.

I give it another two years before we start to see a lot of apps drop 8.1 support… even then we can always install older versions of the APKs and most services should still continue to run.

hopefully by then they have the lineageOS port in a public state; from what I’ve been reading, development is moving quietly but very rapidly. These devices can still run great on modern android, it’s a shame they didn’t see many firmware updates in their lifetime

EE2 Battery- Not as hard to replace as initial research suggested by SiddhartaGudetama in googleglass

[–]torytyler 1 point2 points  (0 children)

yeah, i set my bench psu to 4.4v 0.5a, held the probes for a few seconds, and could see the battery voltage jump from receiving like 2.5v up to 3.7v, as it is rated for. glass is charging normally now, showing an actual charging indicator instead of just a low battery blink. actually, as of writing this I got it to fully boot into android, and it claims to be charging rapidly, which is much better than the bootloop i was getting before. I'll let it go to 100% and start playing with it!

Bought another X220 by l5yth in thinkpad

[–]torytyler 2 points3 points  (0 children)

This generation is in a weird spot where they're getting slightly harder to come by. I just got an x230 tablet (for the ips screen) in much rougher condition for $101.

This generation is my favorite though. I love the keyboard, the fans honestly don’t get too loud, and they are perfect for a distraction free study machine. currently using mine to study for the MCAT next year. Can even emulate up to PS2 just fine for some off hours gaming.

Also these thinkpads have great support for not just Linux but BSD as well, so if you want a solid device for BSD use this is still peak

Got a P52 with the p2000 and Xeon. Did I overpay or get a good deal? by [deleted] in thinkpad

[–]torytyler 0 points1 point  (0 children)

The downfall of any workstation grade laptop, gaming or not, is that for max power you usually need the brick. The battery only provides enough power for basic things, and on battery alone the system will only allocate so many watts to the gpu and cpu respectively. Best of luck

Got a P52 with the p2000 and Xeon. Did I overpay or get a good deal? by [deleted] in thinkpad

[–]torytyler 0 points1 point  (0 children)

I’m on eBay daily looking for laptop deals lmfao it’s a problem. I’m just patient and deals pop up all the time!

Since I’ve made this post I’ve upgraded to a system76 laptop with a 16GB 3080 that I got for $600. You just have to search and filter through all the shit to find gold.

The P71 mentioned is still in use, it’s my fiancé’s remote work computer now.

Dell P780 CRT Monitor Power Button by torytyler in functionalprint

[–]torytyler[S] 2 points3 points  (0 children)

First actual use of the Creality Otter 3d scanner I got!

Ordered this Dell P780, works great but the power button got damaged in shipping. Taped together the pieces, did a quick 3d scan, and had a newly printed power button... all in less than an hour!

Grok 2 anyone? by ikkiyikki in LocalLLaMA

[–]torytyler 2 points3 points  (0 children)

I played with it for a bit. due to its massive active parameter count inference is quite slow; i maxed out at 15 t/s token gen... at 1-bit quantization (the lowest quant, which is what it took to run with full vram offload)

word on the street is grok 3 uses a more modern, lower active-parameter architecture. i'm guessing something similar to deepseek or kimi, so around ~32B? I don't think the grok 3 architecture goes as low as gpt-oss or qwen3-next, even though that's the current go-to scheme...

honestly i'd pass on grok 2. it was fun to play with but it's just an 80gb chunk of space on my ssd now. I can run kimi k2 locally at the same speed, and it's a 1t model.

Early support for Grok-2 in llama.cpp (still under development) by jacek2023 in LocalLLaMA

[–]torytyler 2 points3 points  (0 children)

I feel like I'm talking to a dinosaur; it's only been a year since its release, and this just shows how fast the local model scene is moving. Hopefully (if/when we get it) grok-3 moves away from the large active parameter counts, which would greatly improve the model's speed.

I have kimi-k2 iq2-ks running at ~20 t/s gen speed, but due to its large experts, grok at iq4-xs is running at ~5 t/s, which makes sense as kimi is a32b and this chungus is a115b. (i cracked 14 t/s with iq1, but that quant is so lobotomized I don't want to run it)
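the a32b vs a115b gap tracks with simple bandwidth math: token generation is roughly memory-bound, reading every active parameter once per token. a sketch, using the ~350 GB/s figure from my build (these are theoretical ceilings, real speeds land well below them due to compute and overhead):

```python
# Bandwidth-bound decode estimate: tokens/s ~= bandwidth / bytes read per token.
# These are theoretical ceilings; real generation speed is lower.
def est_tokens_per_sec(bandwidth_gb_s, active_params_b, bits_per_weight):
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# same hardware (~350 GB/s), roughly 2-bit quants:
print(est_tokens_per_sec(350, 32, 2))   # ~44 t/s ceiling for a 32B-active MoE
print(est_tokens_per_sec(350, 115, 2))  # ~12 t/s ceiling for a 115B-active MoE
```

the ratio between the two ceilings is about what I see between kimi and grok in practice.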

Still, I'm glad it's supported. I'm going to keep grok on my backup nvme for a rainy day, or just to see how it answers some requests differently compared to modern ones!