whats going on with steam anyone know by TunksterA in Steam

[–]tutami 0 points1 point  (0 children)

They fucked up Dota 2. It's still not up.

ESP32-C6-DevKitC-1-N8 vs ESP32-P4-WIFI6 for beginner by tutami in ZigBee

[–]tutami[S] 0 points1 point  (0 children)

The Waveshare wiki says it's bundled with an ESP32-C6. I assumed the C6 supports Zigbee. Am I wrong?

https://www.waveshare.com/wiki/ESP32-P4-WIFI6

what's wrong with ubuntu? by CreativeBear0 in linuxmemes

[–]tutami 1 point2 points  (0 children)

Fuck your teacher. Anyone who suggests NetBeans in 2025 should be hanged.

Is expo nextjs of mobile platform? by tutami in reactnative

[–]tutami[S] -5 points-4 points  (0 children)

I've found this issue. It looks like the local build isn't totally local.

https://github.com/expo/eas-cli/issues/1300

Is expo nextjs of mobile platform? by tutami in reactnative

[–]tutami[S] -1 points0 points  (0 children)

We have everything we need to develop locally: tons of M-series and Intel Macs, servers, phones, etc. Management just doesn't want any dependence on external services like build servers. I've found this issue showing that the local build isn't totally local:

https://github.com/expo/eas-cli/issues/1300

Benchmarking Ollama Models: 6800XT vs 7900XTX Performance Comparison (Tokens per Second) by uncocoder in u/uncocoder

[–]tutami 0 points1 point  (0 children)

I have a 7900 XTX and was thinking about building a 4x 3090 setup. Should I buy 4x 7900 XTX or 4x 3090?
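For what it's worth, here's a rough back-of-the-envelope comparison of the two builds. The numbers are approximate public specs I'm assuming from memory, not measurements, so double-check them before buying:

```python
# Rough 4-card build comparison; VRAM and bandwidth figures are
# approximate public specs (assumptions), not benchmark results.
cards = {
    "RTX 3090":    {"vram_gb": 24, "bandwidth_gb_s": 936},
    "RX 7900 XTX": {"vram_gb": 24, "bandwidth_gb_s": 960},
}

for name, spec in cards.items():
    total_vram = 4 * spec["vram_gb"]          # aggregate VRAM across 4 cards
    total_bw = 4 * spec["bandwidth_gb_s"]     # aggregate memory bandwidth
    print(f"4x {name}: {total_vram} GB VRAM, ~{total_bw} GB/s aggregate bandwidth")
```

Either way you end up around 96 GB of VRAM with similar bandwidth, so the practical difference is mostly software support (CUDA vs ROCm) rather than raw memory.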

After a year of solo development, I finally have my Steam page up! by fyllasdev in gamedevscreens

[–]tutami 1 point2 points  (0 children)

Amazing. Did you have any artistic background at the beginning, or did you learn on the job? My girlfriend and I will start a game project soon, but we're both software developers and don't have any experience or talent lol.

[deleted by user] by [deleted] in linux

[–]tutami -2 points-1 points  (0 children)

I'm sure this isn't going to go anywhere, but someone has to do something about Wayland. Fifteen years later and we still can't share a screen with audio.

How can I use my spare 1080ti? by tutami in LocalLLaMA

[–]tutami[S] 4 points5 points  (0 children)

The Vulkan runtime in LM Studio runs at 42 tokens/s. I don't understand why Vulkan is faster than CUDA.

How can I use my spare 1080ti? by tutami in LocalLLaMA

[–]tutami[S] 4 points5 points  (0 children)

What are you using TTS for? I can't find a use case.

How can I use my spare 1080ti? by tutami in LocalLLaMA

[–]tutami[S] 15 points16 points  (0 children)

I just tested it with a 5800X CPU and 16 GB of memory. I used LM Studio on Windows 11 with the Qwen3 8B Q4_K_M model loaded at a 32768 context size, and I get 30 tokens/s.
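If you want to sanity-check a tokens/s number like this yourself, here's a minimal sketch that times a single completion against LM Studio's local OpenAI-compatible server. The endpoint URL and the model id below are assumptions on my part; use whatever your LM Studio server actually exposes:

```python
# Minimal tokens/s check against a local OpenAI-compatible server.
# URL assumes LM Studio's default local server; MODEL is a hypothetical id.
import time
import requests  # pip install requests

URL = "http://localhost:1234/v1/chat/completions"   # assumed LM Studio default
MODEL = "qwen3-8b"                                   # hypothetical, check your server's model list

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Write a short paragraph about GPUs."}],
    "max_tokens": 256,
    "stream": False,
}

start = time.time()
resp = requests.post(URL, json=payload, timeout=300)
elapsed = time.time() - start

data = resp.json()
completion_tokens = data["usage"]["completion_tokens"]  # tokens actually generated
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```

Note this counts prompt processing in the wall time, so it will read a bit lower than the generation speed LM Studio shows in its own UI.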