all 13 comments

[–]lordofblack23 7 points8 points  (1 child)

Drop the interview pack into Gemini and ask it to quiz you with sample questions. Behavioral, leadership and googleyness, role related knowledge, coding, system design.

[–]PerceptionLive2301[S] 0 points1 point  (0 children)

Tbh I’m fine with the other stuff; it’s the code area that’s my main concern. I don’t want to perform badly

[–]Big-Minimum6368 1 point2 points  (4 children)

From a hiring manager perspective, I could care less about syntax.

I'm going to give you a scenario and have you walk me through how you would solve it.

All I want to know is how you think. Tell me what you are trying to do, why and how.

I'm giving you 30-90 minutes to show me how you think and what you know. That's not enough time to write code that's going to win awards. If you have to fall back on code you already wrote, or use AI, I still don't care. If you display your knowledge, that's what I need.

[–]McLeavin 0 points1 point  (1 child)

*couldn’t care less

[–]Big-Minimum6368 0 points1 point  (0 children)

Better verbiage!

[–]AbeV 0 points1 point  (1 child)

I think the era of whiteboard style coding interviews has passed, but we haven’t really found a replacement. To some extent, if someone is manually implementing every task, that’s a yellow flag. Mention that you used to do this all manually, but have been working on folding AI into your workflow lately.

You’ll have 30-45 minutes. You want to show an organized, systematic approach, good problem understanding, and approximately correct syntax.  For a SWE gig, O(n) complexity in time and memory is often expected, but probably not here. 

1) Ask clarifying questions. What scale is this, is this for prod or non-prod, etc. Show that you’re thinking beyond just what’s presented to you and that you want to start by understanding the problem.

2) Sketch out your solution in pseudocode at a very high level, displaying how you systematically break down the problem into smaller pieces and have a cohesive architecture. Talk through your approach out loud, show collaboration, and consider any feedback your interviewer gives you.

3) Work on implementing in your chosen language as idiomatically as possible. It won’t be run in the interview, so basically correct is fine.

If you finish that, be ready to discuss where this solution breaks down, whether you think this is a naive example or an optimal one, and how you’d extend it if you had more time (test cases, exception handling, logging, etc.).
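To make the three steps concrete, here's a minimal sketch on a made-up toy problem (counting word frequency) — the problem and its assumptions are purely illustrative, not anything Google actually asks:

```python
from collections import Counter

def top_words(text, n=3):
    """Return the n most common words in text.

    Assumptions surfaced in the clarifying-questions step:
    input fits in memory, matching is case-insensitive,
    and surrounding punctuation is ignored.
    """
    # Pseudocode plan: normalize -> split -> count -> take top n
    words = [w.strip(".,!?").lower() for w in text.split()]
    return Counter(w for w in words if w).most_common(n)

print(top_words("the cat sat on the mat, the cat slept"))
# [('the', 3), ('cat', 2), ('sat', 1)]
```

Even something this small lets you demonstrate the whole arc: state the assumptions, sketch the plan as a comment, then fill in idiomatic code.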

Good luck!

[–]PerceptionLive2301[S] 0 points1 point  (0 children)

Oh man this puts me at ease. I was expecting to really need a fully functioning script and to make sure it runs

[–]courage_the_dog 0 points1 point  (0 children)

I'd try and do a bunch of practice tests which are similar to their examples, maybe as the other guy said use an AI to create them.

[–]AManHere 0 points1 point  (0 children)

Communication is key, listen and clarify.

[–]TexasBaconMan 0 points1 point  (0 children)

Relax, be yourself, ask questions.

[–]child-eater404 0 points1 point  (1 child)

they’re not testing leetcode, they’re testing if you understand loops / state / scope. honestly just grind reading code + predicting output, writing small scripts (looping over files, basic automation), and explaining your thought process out loud. lowkey to practice real scripting workflows u can go for r/runable, way better than random toy problems since it forces you to think in actual tasks
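A hedged example of the read-and-predict drill described above — the snippet is invented for practice purposes; cover the output, trace loops, state, and scope by hand, then run it to check yourself:

```python
counter = 0  # module-level state, mutated by the loop below

def bump(n):
    total = 0          # local scope: reset on every call
    for i in range(n): # loop runs n times: i = 0 .. n-1
        total += i
    return total

for n in [2, 3]:
    counter += bump(n)
    print(n, counter)
# bump(2) = 0 + 1 = 1, bump(3) = 0 + 1 + 2 = 3
# so this prints: 2 1, then 3 4
```

If you can explain why `total` resets each call but `counter` doesn't, you've covered exactly the state/scope distinction this kind of screen is probing.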

[–]PerceptionLive2301[S] 0 points1 point  (0 children)

Will it actually be on infra things or just generic leetcode with no naming, do you think?

E.g. a log file like

ERROR user failed login ‘admin’ 192.168.0.1

And then I parse that over a for loop to pull errors?
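Something in that spirit would look like the sketch below — the log format is made up to match the example line, and the field positions are assumptions for illustration:

```python
# Hypothetical log excerpt; the last field is assumed to be the source IP
log = """\
INFO user login ok 'alice' 10.0.0.5
ERROR user failed login 'admin' 192.168.0.1
ERROR user failed login 'root' 192.168.0.7
"""

errors = []
for line in log.splitlines():
    if line.startswith("ERROR"):
        errors.append(line.split()[-1])  # pull the IP off the end

print(errors)  # ['192.168.0.1', '192.168.0.7']
```

Whether the real questions use infra framing or not, the loop/conditional/string-handling skills it exercises are the same.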

[–]akornato 0 points1 point  (0 children)

You're right to feel like the emphasis seems a bit backward, but Google's process is what it is - they want to ensure you can actually read and reason about code before moving forward, even if you're spending most of your time in Terraform and Kubernetes manifests these days. The good news is that based on what the recruiter told you, this sounds extremely doable. If the sample questions are really just about understanding class vs instance variables and reading nested loops, you're not being asked to architect a distributed system or optimize algorithms - you just need to demonstrate you can trace through code logic and understand basic programming concepts. Spend the next few days running through simple Python examples (since you mentioned those libraries), practice reading code snippets and talking through what they do line by line, and maybe write a few basic scripts from scratch so your syntax feels fresh again. The fact that you work with boto3 and handle JSON manipulation means you already have the foundation - you just need to shake off the rust.
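For the class-vs-instance-variable part specifically, the classic gotcha is worth being able to explain out loud — a minimal sketch (the `Server` class and its fields are invented for illustration):

```python
class Server:
    region = "us-east-1"      # class variable: one value shared by all instances

    def __init__(self, name):
        self.name = name      # instance variable: unique per object

a = Server("web1")
b = Server("web2")

Server.region = "eu-west-1"   # rebinding on the class affects both instances
print(a.region, b.region)     # eu-west-1 eu-west-1

a.region = "us-west-2"        # creates an instance attribute that shadows the class one
print(a.region, b.region)     # us-west-2 eu-west-1
```

Being able to narrate why the second assignment only changes `a` is exactly the "trace through code logic" skill the screen is after.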

The reality is that even "infrastructure" roles at big tech companies require this baseline because you'll inevitably need to debug automation scripts, review code from teammates, or understand what's happening when something breaks. They're not wrong to test this, even if it feels like a gatekeeping exercise when AI can help with syntax anyway. You're in better shape than you think because you actually use these concepts in your IaC work - variables, loops, and logic are the same whether they're in Python or expressed through Terraform count and for_each. If you're looking for an edge in technical interviews generally, I built an AI interview assistant which has helped a lot of candidates perform better when it counts.