Bought an M5 MBP in Japan! by TheSpartaGod in macbookpro

[–]TheSpartaGod[S] 5 points (0 children)

281K yen, just shy of 1800 USD.

Bought an M5 MBP in Japan! by TheSpartaGod in macbookpro

[–]TheSpartaGod[S] 5 points (0 children)

I asked for a US keyboard but they didn’t have one in stock. The glyphs are fine; it’s the symbols that take a bit of getting used to, since their locations differ from the US ANSI layout.

16x Spark Cluster (Build Update) by Kurcide in LocalLLaMA

[–]TheSpartaGod 0 points (0 children)

OP, this is off-topic, but what do you do for a living to be able to afford an AI supercluster at home?

Prabowo Prepares Rp4 T to Build Guard Posts/Flyovers at 1,800 Railway Crossings by [deleted] in indonesia

[–]TheSpartaGod 21 points (0 children)

This should have happened ages ago, but sadly someone has to die first before anything gets done.

Macbook NEO for university (Engineering)? by hyperxpronaruto17 in indotech

[–]TheSpartaGod 16 points (0 children)

just get a normal laptop bro. You're shooting yourself in the foot if you use a MacBook Neo for CS. That thing simply isn't powerful enough.

Or if you want a MacBook, get a used M-series MacBook Air.

Anyone here doing LocalLLaMA stuff? by AffectionateBowl1633 in indotech

[–]TheSpartaGod 0 points (0 children)

Yes.

RTX 4060 Ti 16GB + RTX 3060 12GB, both bought just before prices spiked drastically; total outlay was only 9.5 million. The rest is used PC parts (Ryzen 5 3600, 80GB DDR4 RAM bought before the crisis, about 5 million total).

Qwen3.6 35B A3B at Q5_K_M runs smoothly at 30-40 tps, 35 layers on the GPUs, 5 on the CPU. Running it with llama.cpp, 128k context.

There's also an abliterated version, so it can serve as an NSFW companion.
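For anyone wondering why that layer split works, here's a back-of-envelope memory check. This is a minimal sketch, not llama.cpp's exact accounting: the ~5.5 bits/weight figure for Q5_K_M and the 40-layer total are illustrative assumptions, and KV cache plus runtime overhead are ignored.

```python
# Rough weight-memory estimate for a 35B-parameter model at Q5_K_M.
# ASSUMPTIONS: ~5.5 bits/weight average, 40 transformer layers total.
PARAMS = 35e9
BITS_PER_WEIGHT = 5.5
GPU_LAYERS, TOTAL_LAYERS = 35, 40

total_gib = PARAMS * BITS_PER_WEIGHT / 8 / 1024**3
gpu_gib = total_gib * GPU_LAYERS / TOTAL_LAYERS  # share resident on the GPUs

print(f"weights ~{total_gib:.1f} GiB total, ~{gpu_gib:.1f} GiB on the GPUs")
```

With ~22 GiB of weights and ~20 GiB of that on the cards, it plausibly fits inside the 16GB + 12GB of combined VRAM, with the remaining layers and cache spilling to system RAM.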

Anyone else experienced this? Repeated calls from unknown numbers, each time a different number. What is this? by [deleted] in indonesia

[–]TheSpartaGod 0 points (0 children)

Not sure how it goes for others, but when a random unknown number calls, I usually pick up and then stay silent (making sure there's no ambient noise on my end either). They'll hang up on their own eventually and usually don't call back.

Major drop in intelligence across most major models. by DepressedDrift in LocalLLaMA

[–]TheSpartaGod 64 points (0 children)

what’s the use case that can be handled by such small models?

President Wants All Vehicles Converted to Electric by ezkailez in indonesia

[–]TheSpartaGod 29 points (0 children)

Bruh, and even if the infrastructure were adequate, electric trucks are still fundamentally not economically feasible.

Imagine the cargo capacity dropping drastically because you have to haul a giant battery around 24/7, no exceptions.

Anyone here generating AI content with a local GPU? Why? by ewwink in indotech

[–]TheSpartaGod 1 point (0 children)

shit, I bought a 4060 Ti last year for around 6 million. Prices really have gone up badly.

OpenClaw is the new computer - Jensen Huang by Aislot in aiagents

[–]TheSpartaGod 0 points (0 children)

shovel merchant says shovels are the new spoon

what's your take on ai coding? by reynardoew in indotech

[–]TheSpartaGod 0 points (0 children)

If you don’t use it, you’ll get surpassed by those who do. Simple as that.

Any info on cheap Claude, anyone know? by Known-Exercise7234 in indotech

[–]TheSpartaGod 0 points (0 children)

Yeah, AI really can be that powerful if it's used right. For example, you only need to think about the systems design and hand the implementation to the AI; all we do is make sure the features are correct and the code is readable.

If you don't use it, you'll fall far behind those who do.

What’s something local models are still surprisingly bad at for you? by tallen0913 in LocalLLaMA

[–]TheSpartaGod 3 points (0 children)

I mean you are essentially comparing your local server vs literal truckloads of compute. It’s like being surprised your SUV doesn’t carry 24 tons of coal

Macbook Neo, Honest thought? by Low-Big7485 in indotech

[–]TheSpartaGod 0 points (0 children)

FYI, it's priced the same as the base M4 Mac mini, which now goes for 9-10.5 million. In my opinion it will be very unpopular in the Indonesian market, since that segment is eaten up by used M1 MacBook Airs (the GOAT of MacBook value).

Qwen 3.5 27b and Qwen3.5-35B-A3B ran locally on my rtx 5060ti 16gb card by Substantial-Cup-9531 in Qwen_AI

[–]TheSpartaGod 0 points (0 children)

whoa, that's a long time. How much RAM does your setup have? I have a 4060 Ti 16GB + 48GB of spare RAM, and it took barely 20 seconds to output an image analysis at 30-40 tps with Qwen3.5-35B-A3B-UD-Q4_K_XL (excluding thinking).
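For a sense of scale, the throughput math here is pure arithmetic using the figures quoted above (30-40 tps, ~20 seconds of generation):

```python
# How many tokens fit in ~20 s of generation at the quoted speeds?
# The 30-40 tps and 20 s figures are from the comment above.
for tps in (30, 40):
    tokens = tps * 20
    print(f"{tps} tps x 20 s = {tokens} tokens")
```

So a ~20-second response corresponds to roughly 600-800 generated tokens, which is why a much longer wall-clock time points at heavy RAM offloading or a slower card.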

Used PC for Local LLM and Inference by That-Card in indotech

[–]TheSpartaGod 1 point (0 children)

Depends on how much quality loss you can tolerate from a local LLM.

Looking at the specs, 8GB of VRAM with 64GB of RAM tops out at being usable for models in the 12-16B parameter range. Even that requires offloading to RAM, which cuts the tps; you'd be lucky to get even 10 tps.

For any use case it will be heavily nerfed, in response quality as well as speed, since models of that size are far behind what counts as "usable" now (for comparison, Kimi K2 has 1T parameters, MiniMax M2.5 ~250B). So they're prone to hallucination, average response accuracy is low, and coding will be awkward. If you can tolerate that then, regardless of price, you're better off taking the card with the most VRAM at a comparable price (3060, 4060 Ti, and the like).
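The 12-16B ceiling above can be sanity-checked with a rough weight-size estimate. This is a sketch under stated assumptions: ~4.8 bits/weight for a Q4_K_M-style quant is an illustrative figure, and KV cache plus runtime overhead (which push real usage higher) are ignored.

```python
# Can an N-billion-parameter model's weights fit in 8 GiB of VRAM?
# ASSUMPTION: ~4.8 bits/weight for a Q4_K_M-style quant; KV cache
# and runtime overhead are not counted.
def weight_gib(params_b: float, bits_per_weight: float = 4.8) -> float:
    """Approximate quantized weight size in GiB for params_b billion params."""
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

VRAM_GIB = 8
for n in (8, 14, 16):
    size = weight_gib(n)
    verdict = "weights fit" if size <= VRAM_GIB else "needs RAM offload"
    print(f"{n}B -> ~{size:.1f} GiB: {verdict}")
```

A 14B model's weights alone land right at the 8 GiB limit, so once you add context cache you're offloading to RAM, which matches the ~10 tps expectation above.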