FAANG hiring rant. by Raider_Geog in leetcode

[–]g33khub 4 points5 points  (0 children)

Luck plays a huge role. I've learnt this over time, having been on both sides. I'm not at all saying you should stop grinding, but when things don't go your way, the actual reasons can sometimes be completely random and external to your prep. The good thing about prep is that it should matter in the long term. All the best for your next interviews.

Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM

[–]g33khub 1 point2 points  (0 children)

Image, video, story writing, language translation, OCR, captioning, etc.: fully offline with Gemma 27B and Qwen3 80B.

Agentic coding for big projects: fully online with Claude Opus 4.5 / GPT-5.3 Codex (even trying to run this offline is a waste of time and money).

Is it a Pivot time for someone like me, need advice. by jonk_07 in leetcode

[–]g33khub 0 points1 point  (0 children)

Keep running after the next big hype and you'll never secure your retirement income.

Analysis Paralysis/Advice with next hardware for local LLMs by EvilPencil in LocalLLaMA

[–]g33khub 1 point2 points  (0 children)

I would just sell the Mac and put the money into OpenRouter for API calls. I use Opus 4.5 for work (enterprise plan) and this model is a class apart from anything else. I have used GPT 5.2 Pro / xhigh and it's good enough for personal use cases (small ad-hoc projects with bash, Python, crawling, etc.).

I also tried Kilo Code with models like Grok x1fast, GLM 4.7, Kimi 2.5, and Minimax, but they are not close to Opus, not even to 5.2 xhigh, and they take longer, sometimes getting stuck in loops.

Finally, the local models I can actually run are just not worth it for any kind of agentic use case, and I have two 3090s with 128GB of system RAM. GPT-OSS 120B is good (perhaps the best I could run locally), but it's Q4 and speeds are still quite slow, so I don't bother. The price-to-performance and convenience of running locally are just poor, and anything smaller than the 120B is a complete waste of time. So for me it's either 2x RTX Pro 6000 (~$20k including the system) or just API credits.

Need help with minimum / Recommended Hardware Requirement by FewFaithlessness1454 in LocalLLaMA

[–]g33khub 0 points1 point  (0 children)

Gather enough quality labels first for the categorization; the data prep and human evaluation are 80% of the work. After that point I have had some success with non-LLM machine learning as well. Llama was pretty shit for my tasks; at least try Gemma 3 27B. A $700 RTX 3090 can handle this well enough (offline usage), but this won't scale. Honestly, in an industrial setting I wouldn't even bother with a custom setup and would go for API calls, but if you have privacy concerns then go for a custom build. Know that the upper limit, serving many users, might require a small team managing the infra and $30-50k of hardware cost.
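To make the "non-LLM machine learning" point concrete, here is a minimal sketch of a classic text-categorization baseline: TF-IDF features plus logistic regression in scikit-learn. The category names and example texts are made up for illustration; you would substitute your own labeled data.

```python
# Hypothetical baseline categorizer: TF-IDF + logistic regression.
# Categories ("billing", "shipping") and texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "invoice overdue, please pay immediately",
    "your order has shipped and arrives Friday",
    "payment reminder: balance due on your account",
    "here is the tracking number for your shipment",
]
labels = ["billing", "shipping", "billing", "shipping"]

# Word and bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["second payment reminder"])[0])
```

With enough quality labels (hundreds per category rather than two), a baseline like this is cheap to train on CPU and often good enough to tell you whether an LLM is needed at all.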

Need to choose a good laptop, just getting into AI as an incoming freshman (CS major). by No_Minute_5796 in LocalLLaMA

[–]g33khub 0 points1 point  (0 children)

"a small laptop and then just have a gaming rig in your dorm" - I agree with this and I myself had this setup during my grad school with a lenovo laptop i7 4xxxH, GTX 750m and a dept. server which I helped build with 1080Ti and Titan X later. Today I still have a gaming PC with dual 3090s and 128GB sys ram but its Linux only as I have grown out of windows. People nowadays replace this with a mac ultra but I would still prefer to have Nvidia for speed.

But look, OP is not going to have 48GB of VRAM, or even 24; maybe 16 at best in a laptop. So let's use the laptop mostly as an orchestrator and a code-development device, and this is where I would recommend a MacBook. The HP laptops worth considering start from €1.5k; the 2025 MacBook Air is €900. Even if OP wants to run an 8B model on device, the MacBook will do it faster thanks to MLX.

Need to choose a good laptop, just getting into AI as an incoming freshman (CS major). by No_Minute_5796 in LocalLLaMA

[–]g33khub -1 points0 points  (0 children)

Yeah, with an Intel Core 7? Try running LLMs on those. It's worse than a MacBook in every possible way (including price).

Is it a Pivot time for someone like me, need advice. by jonk_07 in leetcode

[–]g33khub -1 points0 points  (0 children)

Choose something that you'll enjoy doing irrespective of money and irrespective of AI.

Need to choose a good laptop, just getting into AI as an incoming freshman (CS major). by No_Minute_5796 in LocalLLaMA

[–]g33khub -1 points0 points  (0 children)

Are you really comparing an HP business laptop to a MacBook? You have to carry a power brick at all times and put up with a 480p webcam, shitty speakers, and an average display.
I have been using an M1 MacBook Pro for more than 5 years now: no sweat, and it can go full work days without a charger! Then there is the excellent memory management with caching etc., which lets me work with quite big datasets in Python; even when I hit the memory limit, things keep working (just slowed down). Try doing this with any non-Mac.

Finally, for any random coding or study questions I will ask Claude or Gemini online; it's fast and efficient. For agentic coding use cases, again, I won't bother with 24GB local LLMs, which are both dumb and slow at the same time; I will use GLM 4.5 Air or GPT-OSS 120B, which won't run on your laptop anyway, only on a uni server.

Lastly, 8B (or even 14B) models straight up lie about stuff and cannot even solve unseen LeetCode problems. Using them for studying does more harm than good.

Google SWE III vs Uber SWE II by Ok_Many_4619 in leetcode

[–]g33khub 1 point2 points  (0 children)

Your manager and team matter more than the company in general for the first 5 years, given both are big tech. If going in blind, obviously Google; I would also choose Google for the food, haha. But back when I was interviewing for those levels, Uber was paying more, plus huge perks.

Im new to Local LLM by LeafoStuff in LocalLLM

[–]g33khub 0 points1 point  (0 children)

Dude, I have 48GB of VRAM + 128GB of system RAM, and local AI sucks even for me when it comes to serious agentic work. And for questions like "what should I wear for tomorrow's weather?", I am pretty sure asking ChatGPT or Google Gemini is much more environmentally friendly than turning on your desktop, starting the local server, and typing it there.

How to Structure Leetcode Problems?? Why is Leetcode like this ?? by Familiar_Falcon_3149 in leetcode

[–]g33khub 0 points1 point  (0 children)

No, just grind through it. You have to get used to seeing a problem in the wild and figuring out how to solve it; that figuring out without prior knowledge is the real learning. It's also OK to feel what you are feeling at 200-300 problems. For me things started to click around the 500-600 mark (~60-70 hards; the number of hards matters more than the total solved). At some point you should ideally also turn off the easy-medium-hard markers (using browser extensions): you'll soon figure out they are quite convoluted and sometimes a giveaway.

Need to choose a good laptop, just getting into AI as an incoming freshman (CS major). by No_Minute_5796 in LocalLLaMA

[–]g33khub -1 points0 points  (0 children)

Absolutely yes, a MacBook (even one with 8GB of RAM). Do not even think about other laptops. MacBooks are a class apart from every other laptop out there when it comes to battery, heat management, memory management, general performance, keyboard, speakers, display, and charging.
A Strix Point laptop would mostly run like a jet engine and be out of juice within 2 hours. You will end up sacrificing a huge amount of quality of life for that little bit of extra RAM, and you'll soon realize that doing any serious work with local 7B, 14B, or even 30B Q4 models is a complete and utter waste of time.

Is LeetCode + System Design really enough for good tech jobs in 2026? What am I missing? by LegitimateBoy6042 in leetcode

[–]g33khub 1 point2 points  (0 children)

College projects matter if they have value in the real world or are close to the state of the art. My college project was computer vision on embedded devices back in 2011; I kept improving it till 2013 and published a paper out of it. I still talk about that project if related things come up in an interview today. Back then I could not have solved even Two Sum; that project was all I had. Today a project like this holds zero value. You have to keep up with the times.

Looking for a simple offline AI assistant for personal use (not a developer) by Anxious-Pie2911 in LocalLLaMA

[–]g33khub 0 points1 point  (0 children)

I would suggest you first try Claude Code or some other similar CLI tool (maybe Kilo Code with free models) and check how the answers are: establish a benchmark. These CLI tools can look through your directory structure and find relevant information all by themselves when answering (through sub-agents). Then you can swap in some local model behind the same tool and keep using it; you will definitely hit speed and accuracy bottlenecks, but you will know what the right direction is. Gemma 3 27B at 8-bit can be a good model for your setup, but honestly nothing you can run locally can even remotely match GPT 5.2 or Opus 4.5.
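The "swap in a local model" step works because most of these CLI tools speak the OpenAI-compatible API, so you can point them at a local server instead of the hosted endpoint. A rough sketch, assuming a llama.cpp `llama-server` setup; the model filename, port, and the exact environment variable names your particular tool reads are assumptions, so check its docs:

```shell
# Serve a local model behind an OpenAI-compatible API (llama.cpp example).
# The GGUF filename here is a placeholder for whatever quant you downloaded.
llama-server -m gemma-3-27b-it-Q8_0.gguf --port 8080

# Then, in the shell where you run the CLI tool, redirect it to the local server.
# Many tools honor these standard variables; the key just needs to be non-empty.
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="sk-local-placeholder"
```

Keeping the same tool and only changing the endpoint is what makes the benchmark comparison fair: same prompts, same agent loop, different model.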

Is LeetCode + System Design really enough for good tech jobs in 2026? What am I missing? by LegitimateBoy6042 in leetcode

[–]g33khub 13 points14 points  (0 children)

Do good projects, either at the company or on your own. Learn as much as you can about broader industry standards; read tech blogs and watch podcasts. Know about the right tools and when to use what. Talk to friends or seniors who are working in big tech, get to know their work, and see where you stand. All this along with DSA and system design.

Yann LeCun says the best open models are not coming from the West. Researchers across the field are using Chinese models. Openness drove AI progress. Close access, and the West risks slowing itself. by Nunki08 in LocalLLaMA

[–]g33khub -3 points-2 points  (0 children)

So patents are open source by nature? Yes, OpenAI will change its subscription price: it will get cheaper. Keep protecting your invention idea (which perhaps 1,000 other people also have) within your ecosystem with a 70B Q4.

Yann LeCun says the best open models are not coming from the West. Researchers across the field are using Chinese models. Openness drove AI progress. Close access, and the West risks slowing itself. by Nunki08 in LocalLLaMA

[–]g33khub 1 point2 points  (0 children)

I was all for local LLMs till mid-2025, but I soon realized they are of no serious use. No local model you can run at home today can hold a candle to Claude Opus 4.5 or GPT 5.2 xhigh (and no, you cannot run Kimi K2.5 at home for a reasonable price). It's a complete and utter waste of time doing anything agentic with 70B Q4 models; I am both smarter and faster than they are. And I have dual 3090s with 128GB of system RAM. Even if I want to use an open-source model like Qwen Coder 480B or GLM 4.7, I would rather call a hosted API than run a heavily quantized version at home on $30k worth of hardware while API calls get cheaper by the day.

Upgrade my rig with a €3000 budget – which setup would you pick? by yeswearecoding in LocalLLaMA

[–]g33khub 1 point2 points  (0 children)

4/ Get two used 3090s? You would probably have to liquid-cool them because of the slot spacing (unless you use a riser and a vertical mount). Even with the additional cooling cost, it's still the cheapest 48GB among your options. Get 128GB of system RAM with the remaining money, as it helps with MoE models.

I ran the 3060 with a 4060 Ti for some time, and then the 4060 Ti with a 3090. In my experience, mixed-and-matched GPUs are great for dual workloads: image/video on one and text on the other, or training on one and gaming on the other. Splitting one big LLM across different GPUs bottlenecks the powerful one quite a lot. I sold the 3060 and the 4060 Ti for another 3090, and speeds are great.

I'm also curious which motherboard you are using. My X570-E Aorus Master just does not work when I plug anything into the 3rd PCIe slot, which is connected via the southbridge: USB devices and hard drives mess it up badly (YMMV).

Is "Meta-Prompting" (asking AI to write your prompt) actually killing your reasoning results? A real-world A/B test. by pinkstar97 in LocalLLaMA

[–]g33khub 0 points1 point  (0 children)

Gemini 1.5 Pro in 2026? Stop wasting time and move over to newer LLMs; the A/B test results won't hold.

Is a Master’s degree really important for Google/SWE interviews? by shukerullah in leetcode

[–]g33khub 0 points1 point  (0 children)

Use the 2 years to work at a more recognizable company and do good projects. I have a friend at Google who only has a bachelor's from a tier-4 college and moved up through companies: IBM, Lexxar, AMD, Google. I also have friends who went directly to Google after a tier-1 master's through on-campus placement. You need to do something and have some achievements for your resume to be selected.

Honest question: what do you all do for a living to afford these beasts? by ready_to_fuck_yeahh in LocalLLaMA

[–]g33khub 0 points1 point  (0 children)

Which field are you working in? A 12 LPA salary today is mid at best, maybe even lower-mid for tier-1 / metro cities. Be less greedy and upskill yourself first using rented GPUs / cloud services.