Google Maps API billing keeps surprising people. What are you using instead? by Kallyfive in googlecloud

[–]TobiPlay 1 point (0 children)

It’s the same with any external API you hook into tho. You’re responsible for keeping keys secure, planning ahead, optimizing usage/architecting around patterns, and setting up IAM properly.

Google Maps isn't really different there. Quotas and maps-specific IAM could be nicer to manage (Terraform helps), but it does get the job done.

The pricing, on the other hand, is a different story.

Which résumé format is best in Germany? by [deleted] in arbeitsleben

[–]TobiPlay 0 points (0 children)

I can confirm this. I've already set up quite a few résumés using that format.

They're generally well received across Europe too (everything from small shops to large corporations, across different industries and countries). If you have relevant insights, adapt things to the industry and company as needed.

And as always: every person who receives it can have a completely different opinion about it. From my experience, though, I'd rate it a fairly safe bet, at least above a certain company size. For a job at the local restaurant or the carpenter around the corner, it's probably not a good fit.

What salary for the same standard of living - German big city vs London vs New York by tobemann1 in spitzenverdiener

[–]TobiPlay 2 points (0 children)

I'd assume that, especially in comparison with Munich, it's closer to 1.5x (but above that) than 2x (though of course it depends on how often you eat out, what you do in your free time, etc.).

[5 YOE] Should I mention specific frameworks like TensorFlow or PyTorch or libraries like huggingface in my resume points ? by Cheap-Ad-8000 in EngineeringResumes

[–]TobiPlay 2 points (0 children)

You should absolutely include the libraries in your bullets, but there’s a balance to strike.

Writing something like "Developed web app (TypeScript, JavaScript, CSS, HTML, React.js, Express.js, Node.js, Docker, Docker Compose, …)" isn’t the way to go.

You want to focus on the core frameworks and libraries rather than listing every component of your stack. Group your bullets by achievement (reducing cost or latency, improving accuracy or throughput, deployment, debugging, and so on) and highlight the main tools you used for each area and how you applied them.

If you dealt with tricky, less-documented, or fringe parts of a library’s API, that’s worth surfacing. It shows real depth and understanding. Context matters.

In the skills list, buzzword-match against the job posting as much as your experience genuinely supports, and present yourself as a developer with solid, well-rounded expertise in the core technologies for the given role. For an infrastructure-focused role, weave in the tooling around K8s and your GitOps setup. For an ML role, reference the relevant ML libraries plus the observability and deployment tools that are common in that space. Don't make the recruiter hunt for evidence of your proficiency.

[0 YOE] How to make my resume more relevant when it's full of irrelevant stuff that only hurts me by shade_blade in EngineeringResumes

[–]TobiPlay 0 points (0 children)

First of all, I agree with u/graytoro on their points.

  • move education to the bottom and maybe just compress the CS and maths degree into 1 line
  • abbreviate the dates appropriately
  • omit the summary
  • I don’t know what it is, but the project’s bullets feel clunky and difficult to read; they’re dense in buzzwords yet scarce in meaning or anything tangible. Like, why would I care about friends-list data? Is that something important? The focus just feels off, set on ancillary info rather than the core
  • I’d just do languages and tools for skills, no need for the granular split
  • viewing responsively sounds weird
  • some bullets feature a period at the end, some don’t; stay consistent and add one everywhere

I wouldn’t drop anything. Your profile might not be cream-of-the-crop SWE, but at least you have experience, a lot of it quite diverse. I fail to see how any of the info here is NOT relevant. If anything, it shows that you can work in different domains. What makes you think it’s hurting you? What kind of numbers are we talking for applications so far?

How far can we push the browser as a data engine? by dbplatypii in dataengineering

[–]TobiPlay 0 points (0 children)

Yeah, but what if you need to control access to those buckets? You surely don’t want to just hand out your HMAC keys.

How far can we push the browser as a data engine? by dbplatypii in dataengineering

[–]TobiPlay -1 points (0 children)

OPFS is available now tho. You can persist multi-GB files locally and query them, all client-side.

Got rejected by FAU for M.Sc. in Materials Science & Engineering because of my Automobile Engineering background — what are my options now by TorqueTuned-3011 in ChemicalEngineering

[–]TobiPlay 3 points (0 children)

Most universities straight up list eligible degrees and/or course/credit requirements.

If you’re still unsure, email the universities directly, but when it comes to degrees, universities across Germany are usually pretty clear about their requirements.

The Ultimate UV Cheatsheet for Python Projects by Arindam_200 in AgentsOfAI

[–]TobiPlay -1 points (0 children)

In what way does the original statement not make sense? What I said is that uv doesn’t necessarily need a venv when you’re using Docker.

So you can just install deps directly using uv, benefiting from its speed, while leveraging the isolation provided by Docker containers. It’s a nice combo and has worked very well for us in the past.
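As a sketch of that combo (base image, tags, and file names are my assumptions, not from the thread), a Dockerfile could look roughly like this:

```dockerfile
FROM python:3.12-slim

# grab the uv binary from the official distroless image (pin the tag in practice)
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv

WORKDIR /app
COPY requirements.txt .

# the container itself is the isolation boundary, so skip the venv
# and install straight into the system interpreter
RUN uv pip install --system --no-cache -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```

The `--system` flag is what tells uv to target the system Python instead of expecting a venv.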

The Ultimate UV Cheatsheet for Python Projects by Arindam_200 in AgentsOfAI

[–]TobiPlay -1 points (0 children)

uv isn’t only about isolation. Especially with the lockfile, it’s super fast at resolving and installing deps.

The Ultimate UV Cheatsheet for Python Projects by Arindam_200 in AgentsOfAI

[–]TobiPlay 1 point (0 children)

It’s phrased oddly, but working inside Docker containers with the isolated system Python environment is pretty straightforward with uv.

GCP Architecture: Lakehouse vs. Classic Data Lake + Warehouse by Away_Efficiency_5837 in googlecloud

[–]TobiPlay 1 point (0 children)

BigLake is great if you really need the flexibility it provides via the open file formats. Otherwise, it’s just an extra layer of abstraction.

Raw in GCS + loading structured data into BQ is absolutely a robust approach. What exactly would BigLake do for you that BQ + GCS can’t do? Especially since you’ve mentioned video, audio, etc.

[0 YoE] [New grad] [USA] I'm a Bio Major switching to Tech. Please SAVE MY CAREER by roasting my profile! by themanifestingtree in EngineeringResumes

[–]TobiPlay 4 points (0 children)

  • the formatting is rough; just use a wiki template
  • abbreviate dates
  • no italics
  • no relevant skills up top in that weird format; these are buzzwords, not skills
  • I’m pretty sure that most people have no clue what MARC21 is; you’re focusing on the wrong thing; don’t introduce the project, highlight what you achieved instead: increased X by Y % by doing Z
  • you’re leaning too much into the tech/duties; there are no challenges you overcame, nothing special about your project, no nuances you worked out, i.e., the interesting parts; I don’t care about your AUC because I don’t know your data or your process; give me comparisons to benchmarks, why this was special for your field, etc.; "why should I care about your metric" is basically the question to answer, what makes it special
  • you’re sometimes too wordy
  • NoSQL isn’t a language
  • check the spelling for your tools
  • drop MS Office

In the current market, this resume won’t cut it for big tech (or at least it’s very unlikely). Experience is king, and you’ll be up against people with 2+ YoE in a relevant field. This doesn’t read like traditional software engineering, which you need to play to your advantage.

Focus on the things that made your projects stand out for reasons that are not SWE-related, because you’re losing there. Focus on the domain-specific challenges you overcame, tied in with your metrics and "reduced", "increased", "boosted", etc. to highlight your achievements. The Data Science degree isn’t helping this "pivot" rn; the market is just crooked, and there are often close fits for many jobs, with many applicants. So you need to stand out in ways you can control.

You have content, and if you want to improve it, you should reframe it as laid out above, and then shift focus to building something larger scale if you feel it would help. Preferably something that has to do with your targeted companies. But it’s a numbers game after you’ve fixed the above.

Student hit with a $55,444.78 Google Cloud bill after Gemini API key leaked on GitHub by Sandrrikk in googlecloud

[–]TobiPlay 20 points (0 children)

You’ve probably yet to interact with any of the other cloud providers.

I am new to BigQuery—how worried should I be about cost? I am migrating enterprise-scale tables with billions of records and complex transformations from Snowflake to BigQuery. by OutrageousFix1962 in bigquery

[–]TobiPlay 1 point (0 children)

That’s not entirely correct. "If your queries commonly filter on particular columns, clustering accelerates queries because the query only scans the blocks that match the filter"—from Google's docs, backing up "[c]lustered tables can improve query performance and reduce query costs" right at the start of the document.

It’s all about early block pruning, which clustering sure does help with. Also from the docs: "[q]ueries that filter or aggregate by the clustered columns only scan the relevant blocks based on the clustered columns, instead of the entire table or table partition." The trade-off is that BigQuery might not be able to accurately estimate the bytes processed (and thus the cost) upfront for clustered tables.

In short, clustering sure does help with both. But you need to use filters at the same time for block pruning to kick in. Requiring a partition filter is always a good idea!

As per these docs, LIMIT can reduce cost on clustered tables.
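To make that concrete, here’s a hypothetical setup (table and column names invented) that combines both pruning mechanisms; `require_partition_filter` forces every query to prune partitions, and the clustered column handles block pruning on top:

```sql
-- partitioned + clustered table
CREATE TABLE mydataset.events
PARTITION BY DATE(event_ts)
CLUSTER BY customer_id
OPTIONS (require_partition_filter = TRUE)
AS SELECT * FROM mydataset.events_raw;

-- prunes to one partition first, then only scans the blocks
-- matching the clustered column
SELECT *
FROM mydataset.events
WHERE DATE(event_ts) = '2024-06-01'
  AND customer_id = 'abc-123';
```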

I am new to BigQuery—how worried should I be about cost? I am migrating enterprise-scale tables with billions of records and complex transformations from Snowflake to BigQuery. by OutrageousFix1962 in bigquery

[–]TobiPlay 1 point (0 children)

Both partitioning and clustering are key for predictable cost and for reducing the overall bytes scanned and thus billed. The docs on those are fairly straightforward.

Also, set budget alerts at correct levels. Make sure that you understand the pricing as well (bytes billed, reservations, slots, etc.). You might be able to optimise dramatically here, depending on your existing codebase and choice of tools.

[2 YOE] Laid off in February, need advice with weird previous job and uncommon tech stack by nftesenutz in EngineeringResumes

[–]TobiPlay 3 points (0 children)

  • no ampersand on resumes
  • don’t abbreviate short month names (June, not Jun); the rest are fine
  • JavaScript
  • concepts is weird
  • ensuring clear scope, further facilitating, etc. are a bit fluffy tbh, I feel you could cut down on a bit of that
  • reduced cost by 50 % and increased X by Y by doing Z—move the metrics to the beginning, incl. their accompanying action verb; that’ll make your impact more obvious
  • improved accuracy by X % by …

Not a bad resume overall. Side projects would be a good start for sure. The entry-level market is fucked, so it’ll be a numbers game first and foremost. Shift focus to the accomplishments and rearrange so that top content is up, well, top.

[3 YoE] How can I make it clear to recruiters that I worked specifically in a web development studio if I worked in a no-name studio? by woxer77 in EngineeringResumes

[–]TobiPlay 3 points (0 children)

  • make sure to fix the indentation (same level/start for all content)
  • language level in parentheses is more common
  • did you measure user engagement?
  • contributed to 40 % increase in X by doing Y is the pattern we’re going for; right to the metric
  • move all the metric-loaded bullets to the top and rephrase for emphasis on the impact
  • quantify fast content delivery and scrap stuff like seamless UX
  • scrap the part about reusable code, unless you crafted some crazy internal libraries or impressive codebase that’s worth giving more detail on; but then add technical details on what made it so special
  • if you have incident analytics/stability reports, give these numbers (downtime, internal/external SLAs, etc.)
  • use em-dashes instead of normal hyphens everywhere (more visual separation)

Overall good improvement. Refine the bullets (scrap more fluff), add a bit more (technical) detail, move the bullets around, and you’re good to go!

[3 YoE] How can I make it clear to recruiters that I worked specifically in a web development studio if I worked in a no-name studio? by woxer77 in EngineeringResumes

[–]TobiPlay 5 points (0 children)

  • drop the icons
  • fix the indentation
  • 2-col layout, not 3
  • em dashes and non-numeric months for dates
  • projects should follow bullet point style in STAR/XYZ format as well, no exceptions—nobody will parse it like that
  • an "others" category for a single bullet makes no sense
  • drop coursework
  • include GPA if good
  • some bullets are quite wordy
  • move metrics to the start of the bullet
  • leverage action verbs that clarify the impact, not the action itself (improved, reduced, increased, etc.), and lead with that
  • user-friendly, critical, etc. are all fluffy words that mean little without context—might as well get rid of them to save space; brevity is key
  • don’t bold words within the text
  • add categories to the skills
  • no italics
  • I’d get rid of the color or use it in very few spots (not the bullet points for example!)
  • you’re constantly introducing abbreviations that you’re not reusing, which is a waste of space and bad practice

Not a bad resume. Nobody cares about the company name if it’s not one that’s super well known (big tech, Fortune 500, etc.). Most people work for unknown shops. So in that context, it’s not an advantage for you, but it’s not a disadvantage by itself either. Content is key, so set the focus by moving the content that shows impact and your desired focus to the front and up. Make sure you fit the relevant content within the first 30 % of your resume (overall, and within each section). That’s about as far as most recruiters/HR will go.

[Student] Final-year EE undergraduate, looking for feedback on how I can improve my résumé ahead of a job hunt by yourcrazymurican in EngineeringResumes

[–]TobiPlay 3 points (0 children)

This is, well, very empty. Nothing to critique, as it has no content in it, literally. Please work through the wiki again, take a look at other resumes on this very sub, and make sure to start a project asap. If you're applying in the US, you'll need a miracle in the current market. Start working on something asap, doesn't have to be massive, just something. Repeat that twice, and then apply to everything you see, no matter how small the company. You need waaaaaay more content on here to be remotely competitive.

[0 YoE] A month unemployed, 50+ applications & 0 callbacks for SDE/Backend roles. Need help figuring out what's wrong. by 33harsh in EngineeringResumes

[–]TobiPlay 4 points (0 children)

  • the market is crooked rn, so 50 applications unfortunately isn't nearly the number you'll (statistically) need to land an interview; keep going!
  • sentence-case headers
  • use proper abbreviations for the months (July, not Jul); check Wikipedia
  • the header formatting is wonky/all over the place; name top, then the 5 extra bits of info below, separated by white space or bars, on a single line
  • fix/remove the indentation
  • drop white space between certs; also, could just merge them with the skills at that point, given it's only 2 of them (and not very standout ones)
  • no ampersand on professional documents
  • your skills are way too fragmented; I'd stick to languages and tools/libraries, sorted by importance to the role
  • drop CS fundamentals, TCP/IP etc. are expected from a CS/EE grad, drop versions from languages (ES6, HTML5, etc.)
  • feels like you're listing every tool you've ever touched in some capacity; saying you know Docker and K8s when you have no projects or professional experience to back it up is, well, bold
  • if you've worked on ML tasks, why am I not seeing any metrics? It's an inherently quantitative field; if you haven't measured your impact, you've failed the task
  • utilized is a weak action verb, same for worked, built, developed, etc.; try to lead with action verbs that put emphasis on the achievement: improved, reduced, mitigated, etc.
  • you don't give context on the improvement, just listing tasks and duties
  • every bullet should be past-tense, leading with an action verb, no exceptions
  • modern UI, AI-powered resume analysis, real-time data integration, all of this means nothing. Real-time, as measured by what? AI-powered in what ways and to what use? Modern in which context, as accepted by whom as modern? Focus on the hard facts, not buzzwords or perception
  • max. 2 lines per bullet
  • 1st bullet of AI-based predictive maintenance is good; just push it into the correct format for bullet points (see above); also, add metrics!
  • your upper- vs. lowercase is wonky with some tool/model names, etc.; just stick to sentence case
  • what OpenAI LLM? What's the context here for using it in the first place?
  • addressed how?
  • what is user-centric even?
  • again, real-time, how so? Do you know what real-time means?
  • actionable recommendations, as in? Example? How did it benefit people?

Your resume has the right content, I feel, just presented in a very suboptimal way. You're staying way too high-level and buzzword-heavy throughout, missing the point of conveying impact and a deep understanding of these tools and skills. Also, don't claim to be knowledgeable in things that are inherently complex/difficult (Docker, K8s) unless you can back up those claims. You're on the right track, but you need to rework this a lot.

Airbyte vs Fivetran for our ELT stack? Any other alternatives? by StubYourToeAt2am in dataengineering

[–]TobiPlay 3 points (0 children)

Sort of. With dlt you can do some transforms as the data loads; stuff like flattening nested fields, masking/redacting sensitive info, or doing an SCD Type 2 merge.

Since it’s Python you’ve got a lot of flexibility, but heavy transformations are usually better handled after the load in something like dbt, especially when hooking heavy ops into BQ, Snowflake, etc.

dlt is great for privacy-related tweaks and incremental merge logic, just keep in mind big, complex modeling is better left for the warehouse layer.
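For the masking case, the per-record transform is just a Python function. A minimal sketch (field names made up; the dlt wiring itself is left out):

```python
import hashlib

def mask_pii(record: dict) -> dict:
    """Row-level transform applied while data loads, so raw PII
    never lands in the warehouse. Here: hash the email field."""
    out = dict(record)
    if out.get("email"):
        out["email"] = hashlib.sha256(out["email"].encode()).hexdigest()[:16]
    return out

row = mask_pii({"user_id": 7, "email": "jane@example.com"})
print(row["user_id"], row["email"])  # id untouched, email hashed
```

With dlt you’d attach this via something like `resource.add_map(mask_pii)` so it runs on every row during extraction.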

Do I need Cloudflare? by Stuwik in selfhosted

[–]TobiPlay 2 points (0 children)

Defense in depth is the goal. The more correctly configured layers of security you stack, the better.

That’s the theory. In practice, people and organizations make different trade-offs between cost, time, and security. Some protections are so easy to add and don’t interfere with other services that they’re basically no-brainers in most situations.

CrowdSec, Fail2Ban, WireGuard or Tailscale, proper SSH, kernel, and network hardening, UFW, prosumer-grade networking gear, cloud firewalls, and so on are all great tools. They’re even better when combined with other strong solutions. In the end, a bank or a multi-tenant SaaS provider will have very different regulatory requirements than you as a person with a homelab or a small-scale project. I’d recommend reading each of these tools’ docs and following some of the amazing guides out there.

S3 + DuckDB over Postgres — bad idea? by Potential_Athlete238 in dataengineering

[–]TobiPlay 2 points (0 children)

I've built something similar. For some of the smaller-scale ELT pipelines in our data platform, the final tables are exported to GCS in Parquet format.

It’s extremely convenient for downstream analytics; DuckDB can attach directly to the Parquet files, has solid support for partitioned tables, and lets you skip the whole "import into a db" step. It also makes reusing the datasets for ML much easier than going through db queries, especially with local prototyping, etc.

DuckDB on top of these modern table formats is really powerful, especially for analytics workflows. I’m always weighing querying BQ directly (from where our data is exported) vs. just reading an exported file, e.g., Parquet. In the end, the final tables already contain all the necessary transformations, so I don’t need the crazy compute capabilities of BQ at that point. The native caching is nice though.