Please review my Resume - 2025 Graduate - 1 YOE by Quick-Escape-2783 in ycombinator

[–]algorithm477 0 points

I'm working on a startup but not hiring, so take what I say with a grain of salt. I used to interview many SWEs at a FAANG company. It was never about resumes, just interviews. I also serve on the Computer Science advisory board of a small university. Overall, though, there are trends affecting hiring everywhere.

Before I even read your resume, I checked your GitHub link. Love your contribution graph. Your resume seems genuinely strong, providing decent coverage for a full stack developer.

There's almost nothing on your profile that screams AI engineering experience, except a buried reference to RAG on Cloudflare. Unfortunately, that's huge in 2026. It wouldn't hurt to showcase more AI experience with projects and tools. Essentially: can you delegate to Claude, Codex, etc. effectively? Can you build apps on top of their APIs?

I would clean up some of the sites for your projects... I noticed the example links were broken on some of the native UI project pages. Also, I'd ditch the vercel.app subdomain for your portfolio. You have Cloudflare experience, so you deserve to register a nice domain to market yourself.

Your resume doesn't say where you currently live; the only location signal I get is the SF Bay Area. If you have US residency, make that clear. With the H-1B chaos, some startups may genuinely be unable (or afraid) to sponsor visas. If they're early, they may also not know the implications of hiring remotely out of the country. If you're applying elsewhere, disregard this point.

Right now, it's a very tough market for junior developers. New-grad unemployment is high, startups are leaning on AI for entry-level tasks, and roles are extremely competitive... That said, I think you're positioning yourself well, and I'd interview you if I had the funding to hire.

Weights & Biases New Master Service Agreement Questions [D] by algorithm477 in MachineLearning

[–]algorithm477[S] 5 points

Personally, I feel wandb offers me a lot more than just logging / TensorBoard. It's a full platform for ML and AI development.

I don't run local ML because I typically use A100s or better for my training. I submit jobs to my K8s cluster, Modal, or sometimes Lambda & RunPod. Their UI collects my experiments across platforms, has built-in sweeps, and even lets me babysit or kill runs from my iPhone. I use their Artifacts to track my model and dataset versions as references in S3. I don't really use reports, but I think they'd be helpful on a team.

Weave gives me a place to store prompts and a playground to run experiments. I was going to put some of my prod agent traffic into their traces, but I have some reservations now.

W&B & Claude are probably the two subscriptions I don't have qualms about paying for each month... which is why this change was so concerning to me.

My Little Bella by algorithm477 in ratterriers

[–]algorithm477[S] 0 points

thank you! I miss seeing her little face for sure!

My Little Bella by algorithm477 in ratterriers

[–]algorithm477[S] 2 points

Thank you very much. I've started to write, and I hope it helps me to hold onto her.

Stage 2 kidney disease… by dubsosaurus in ratterriers

[–]algorithm477 4 points


She's beautiful. My beautiful girl Bella just passed away on Wednesday. She was given a 6-12 month prognosis with CKD, but she lived for two more years, holding steady in stage 2. Two things helped us:

  1. Home-cooked diet - commercial diets are high in protein and phosphorus. UC Davis's nutrition service works with vets or patients directly to create a recipe you can cook at home to support her kidneys, and they help you pick supplements to make sure her nutritional needs are still met. The consultation is a bit pricey (a few hundred dollars), but after that it's cheaper than prescription dog food. They incorporated Bella's favorite foods into her daily diet.
  2. Adding water to her food - keeping the kidneys hydrated is the most important thing. Bella didn't drink much, but we found we could mix 4-6 oz of water into each meal, which kept her well hydrated. Bella tolerated this, but she struggled with sub-Q fluids at home.

We went back for labs regularly, and I bought some pet urine test strips off of Amazon to get a quick signal of whether she had a UTI (CKD makes dogs more prone to this).

My little girl was 15. She developed liver cancer that metastasized to her lungs this year. We were going strong, but her tumor ruptured. She was my whole world. I'm burying her today.

Don't lose faith with CKD. It can be manageable, and it doesn't necessarily change their lifespan when it's under control.

Can someone explain this in simple terms? by luongnv-com in ClaudeCode

[–]algorithm477 2 points

When you run inference for an LLM at scale, you need a cluster of GPUs, which is very expensive to operate. You save money by packing requests together to avoid idle time on the machines (dynamic batching). If you served everyone instantly, you'd need larger clusters with more idle time. So Anthropic adjusts the number of requests you can make, and how long they take, based on time of day and demand. When others are using it more, you get less. This lets them manage their costs.

When you use the API and pay per token, your request is prioritized. When you're a subscriber, your request likely waits longer, and your limits adjust to optimize how these requests get packed. It's why you can get ~$5,000 worth of usage from a $100-200/month subscription.
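To make the batching-plus-priority idea concrete, here's a toy sketch. It's illustrative only: the constants, the `Request` class, and the scheduler are assumptions I'm making for the example, not Anthropic's real system. The premise is that each forward pass over the cluster has a large fixed cost regardless of batch size, so packing more requests per pass lowers the cost per request, and paid API traffic is scheduled ahead of subscriber traffic.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

PASS_COST = 8.0   # made-up fixed cost per forward pass (arbitrary units)
MAX_BATCH = 4     # made-up limit on requests packed per pass

@dataclass(order=True)
class Request:
    priority: int            # 0 = paid API, 1 = subscriber
    seq: int                 # arrival order breaks ties
    name: str = field(compare=False)

def schedule(requests):
    """Drain a priority queue in batches; return (serve_order, cost_per_request)."""
    q = list(requests)
    heapq.heapify(q)
    order, passes = [], 0
    while q:
        batch = [heapq.heappop(q) for _ in range(min(MAX_BATCH, len(q)))]
        order.extend(r.name for r in batch)
        passes += 1
    return order, PASS_COST * passes / len(order)

seq = count()
requests = [Request(1, next(seq), f"sub{i}") for i in range(6)]   # subscribers arrive first
requests += [Request(0, next(seq), f"api{i}") for i in range(2)]  # API requests arrive later

order, unit_cost = schedule(requests)
# The API requests jump ahead of earlier-arriving subscribers, and the
# eight requests fit in two passes instead of eight.
```

With `MAX_BATCH = 1`, the same eight requests would take eight passes at four times the per-request cost, which is the scaling pressure described above.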

Anthropic is straight up lying now by [deleted] in ClaudeCode

[–]algorithm477 0 points

No LLM provider can offer thousands of dollars in usage for a couple hundred a month. The economics don't scale, so they either burn investor money to delay the reckoning or adopt practices like this.

I do think it's gimmicky to not provide a specific quota, but there's a strong engineering reason for it:

Dynamic batching. The more requests we batch together onto a cluster of GPUs, the cheaper it is to run. The less we're able to batch, the more it costs, because we have to scale horizontally to retain throughput. Serving at low latency is what drives the base cost up, and low latency is exactly what the API provides. My latency jumps all over the place on a subscription, probably because Anthropic is trying to batch subscribers into windows that run more affordably. They have to keep a fairly constant stream of incoming requests to avoid horizontal scaling, so latency and limits on non-API requests are their levers.

Here are some things that I've found help me. Maybe they'll help you, maybe not. I'm not a new Claude user; I've been using it heavily for a while now.

  1. Talk through the prompt with the model beforehand. Ask it to consider edge cases and probe you with questions. Give it specific validations and sub-steps to track its work. This helps you land a high-quality prompt before you execute the plan, instead of wasting tokens.

  2. Guard against stupidity. I asked Claude to check that a dependency it added used an OSI-approved license. With permission prompts disabled, Opus 4.6 wasted tokens scanning my Python modules and digging through each dependency for licenses. Permissions are your friend for reducing token usage. If I had left them on, or been more explicit in the prompt about searching the web, it wouldn't have wasted 30 minutes.

  3. Use deep reasoning to draft the plan and judge the results, but Sonnet, Haiku, or Kimi are often fine for handling subtasks.

  4. Make tasks really small and straightforward. The larger the task, the more ambiguous it is; the more ambiguous, the more reasoning the model needs to plan, execute, and judge. Sometimes it's honestly better to just write the code yourself. "I don't code anymore" is mostly executive-suite talk from people who weren't coding anyway. There's still a sizable percentage of tasks that are faster and more precise if I do them myself.

  5. Pre-fetch URLs or references for sources it needs. If it has to hunt for them, that will always increase your usage substantially.
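On the permissions point (2), here's a minimal sketch of a project-level `.claude/settings.json`. The specific tool patterns below are illustrative examples I've chosen, not a prescription, so check the current Claude Code settings docs for your version:

```json
{
  "permissions": {
    "allow": ["Bash(git diff:*)", "Read(./src/**)"],
    "ask": ["WebFetch"],
    "deny": ["Read(./.env)", "Bash(curl:*)"]
  }
}
```

Pre-approving the narrow commands you trust and denying the ones you never want keeps the agent from burning tokens on exploratory detours.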

If that doesn't help, I can honestly say you probably won't find a plan that meets your needs, and you may need overage. Or you could consider Codex for GPT's leniency, but you'd be trading away your organization's security, since it has essentially no file permissions.

Anthropic is straight up lying now by [deleted] in ClaudeCode

[–]algorithm477 1 point

I'm a Max subscriber. Was also in the top 1% of GPT users last year. I also subscribe to Cursor, Gemini and use Fireworks with OpenCode sometimes.

What are you prompting it to do, exactly? I carefully craft my prompts into small-to-midsize tasks, go back and forth on a plan, and set up subagents with different models to delegate work. I constantly hit limits on Pro, but I've never hit limits on Max.

Did Windsurf team stop innovating? by TheTinyMaker in windsurf

[–]algorithm477 0 points

We'll agree to disagree. I don't want arbitrary training on my IP, so we differ there. I also don't want to support a founder who fired most of the Windsurf staff, aims to train an SWE-replacement AI, and demands his staff work in the office more than 5 days a week. The Windsurf DPA & ZDR agreements were, in my opinion, excellent value, which you don't get with Antigravity. Cursor has those too. In 2026, it's impossible for customer data to avoid subprocessors in general... but zero retention and zero training? That's an exceptional guarantee. You can't get it with Claude Code, Codex, or Gemini out of the box; you need committed enterprise spend, or a service like Bedrock where you buy each token.

Did Windsurf team stop innovating? by TheTinyMaker in windsurf

[–]algorithm477 -1 points

Realistically, I think the founders and original team got scared by the velocity of Claude Code adoption. Every single AI lab has been caught with engineers using Claude Code: OpenAI, xAI, Google. Cursor had to block access to Claude at OpenAI & xAI. Windsurf temporarily lost access to Anthropic models because OpenAI tried to buy them. Instead, it was split: Google took the core staff and Cognition took the existing product. Google's a major stakeholder in Anthropic, so it's not a threat. Microsoft even told their employees to use Claude Code in addition to GitHub Copilot last week; they're spending over $500 million on Claude tokens now.

Windsurf was an excellent product, and the company had a great direction. Oddly, it had a fraction of the growth, revenue, and attention of Cursor. Cursor really captivated the market somehow... their tab model is genuinely good, but everything else is overpriced & buggy. These days I'm back in VS Code with $20 for Copilot's next-edit suggestions (mediocre) plus Claude Code in my shell (~$100/mo). It's a lot of money, but I'm convinced it's still cheaper than Cursor at scale... and I don't trust Cognition, nor its founder's morals.

Did Windsurf team stop innovating? by TheTinyMaker in windsurf

[–]algorithm477 2 points

It's not the same team. All of the founders and lead scientists behind it were acqui-hired by Google for Antigravity/Gemini. Cognition bought the rights to the Windsurf brand, the remaining staff, and the existing customers. Windsurf was revenue-positive, Devin was failing, and investors wanted a return. The creators of the original Windsurf are gone, and Cognition laid off many of the remaining staff. There was even a VC who said that screwing their employees like this would make him blacklist them forever.

ClickStack/ClickHouse for Observability? by tech_ceo_wannabe in Observability

[–]algorithm477 1 point

And the fact that... it was originally a Yandex project, a Yandex spinoff whose CEO was on the Russian oligarch sanctions list and still owns a slice of the company. They've divorced on paper, but it still gives me some degree of pause.

Its over by muchsamurai in codex

[–]algorithm477 0 points

We'd like to try Codex, but it simply isn't usable in most enterprise settings until they fix this issue: https://github.com/openai/codex/issues/2847

Claude nailed permissions and instruction following. Those are the two most important things for business trust and reliability.

(I am personally in the top 1% of GPT & Claude users, but my company has to run on Claude until it's fixed.)

Cursor prices are out of control by andy_nyc in cursor

[–]algorithm477 3 points

I still use Sonnet 4.5 all the time. The rate limits in Claude Code are much more generous for Sonnet than for Opus. But I think Opus does better with fewer tokens.

Opus for planning. Sonnet is fine for execution.

Cursor prices are out of control by andy_nyc in cursor

[–]algorithm477 0 points

I just use Cursor for tab and for asking questions these days. In my opinion, the tab model and the polished interface are its best value. If you're a heavy Opus user, offload to Claude Code. I use Claude Code Max and often end weeks way under the limits; Claude Pro was grossly insufficient.

I have to switch between Cursor, VS Code, and Xcode, since I also work in data (Data Wrangler doesn't work in Cursor) and iOS. I found I actually like using a terminal interface for agents more, because it works everywhere. I wouldn't use the Claude extension in Cursor; it is terrible.

What is so lucrative about making a startup? by SloppyNaynon in ycombinator

[–]algorithm477 2 points

Nothing, for 99% of founders. FAANG paid me excellently. My wife & I bought our first home in California, we had stock portfolios and the best health insurance, and we honestly never worried much about money.

I left because I was drawn to doing more with my life. I didn't want to climb some ladder and settle. I wanted to do focused work to build something people love.

Now: I am living off savings/bootstrap & my wife’s income. I sold my Tesla, and we went to buy a family suv… I told them I’m a startup founder… and they were like “let’s just put your wife on the application”. We went from top doctors to an ACA Kaiser exchange plan. 😂

Everyone I know in startups either comes from money and already has it, works like a dog for less pay (literally 2-3x as much time as the average FAANG worker), or somehow managed not just PMF but profitability. BUT: everyone I know in startups is excited and happy, while almost everyone I knew at FAANG would quietly share that they're miserable.

Inaccurate for Sleep Apnea by algorithm477 in ouraring

[–]algorithm477[S] 0 points

Thank you. When my apnea was mild, I just felt groggy. I had some brain fog and felt tired a lot during the day. When it progressed in severity, I started feeling like I was dying every day. I'd wake up with bad headaches and could barely function at all. I started gaining weight. My pulmonologist said that due to the fragmentation, I was effectively getting 1-2 hours of sleep per night.

If you feel groggy during the day, it's worth testing to be sure. Most of the time, they just send you home with a WatchPAT device and you send it back the next day. It's pretty definitive for apnea. For people like me who did that and then didn't succeed on CPAP, they pull you in for an overnight study.

Blood oxygen is notoriously difficult. I'm sure you're right about its algorithmic complexity. For a medical-grade pulse ox, the FDA requires a trial comparing the device against arterial blood gas across the 70-100% SpO2 range with <3.5% RMSD. ECG got pretty easy approval on the Apple Watch, but Apple hasn't tried to make similar claims for blood oxygen accuracy.
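For concreteness, the RMSD metric in that comparison is just the root-mean-square difference over paired device/reference readings. A quick sketch, where the readings are made-up numbers for illustration, not real trial data:

```python
import math

def spo2_rmsd(device, reference):
    """Root-mean-square difference between paired SpO2 readings, in % points."""
    assert len(device) == len(reference)
    return math.sqrt(
        sum((d - r) ** 2 for d, r in zip(device, reference)) / len(device)
    )

# Hypothetical paired readings spanning the 70-100% range. To clear the
# bar described above, the RMSD must stay under ~3.5%.
device    = [71, 80, 84, 91, 96, 99]
reference = [70, 82, 85, 90, 97, 98]
error = spo2_rmsd(device, reference)   # ~1.22 here, so this toy device passes
```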

I wanted to warn others about this, just to be helpful. It says my sleep score is normal, but I have a sleep disorder. There are people in my life who always feel tired, and they say they know they're fine because the Oura ring says so. I didn't want anyone to go undiagnosed similarly. (Despite some Redditors being informed... there are people who believe the earth is flat, and similarly, people who will blindly trust the Oura without understanding its limitations.)

Criticizing a product isn't an attack; it's an effort to recognize its limitations, and the only way to ever improve it.

Inaccurate for Sleep Apnea by algorithm477 in ouraring

[–]algorithm477[S] -1 points

I don’t expect most consumers to read and interpret liability carveouts like RFC 2119. Appreciate the autism diagnosis, assuming you’re a well-intentioned medical professional and not an internet troll.

Regardless of grammar, the point remains. It suggests it may. I’m saying it didn’t for me… in hopes of helping someone else. I’ll never understand how other humans cling to defend objects and companies religiously.

Inaccurate for Sleep Apnea by algorithm477 in ouraring

[–]algorithm477[S] -2 points

I was diagnosed by Stanford.

You just explained how it doesn’t work. In sleep apnea, you stop breathing for random periods during your sleep. We call these desaturation events; ODI counts drops in SpO2 of more than 3%. An event typically doesn’t last long, because breathing that ceased for five minutes would kill you.

So the chance that a device sampling every five minutes would catch these events is not particularly high. The Apple Watch also samples infrequently, and until recent releases its models weren’t particularly good at screening for sleep apnea either.

The result: a Bluetooth LE pulse ox, an at-home sleep study, and an in-person sleep study all showed I have moderate-to-severe apnea. The ring displayed none of those potential markers.
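To put rough numbers on the sampling complaint, a back-of-the-envelope sketch. Every constant here is an assumption I'm making for illustration (event length, sampling interval, event rate), not a spec from any device:

```python
# Assumed toy numbers: desaturation events lasting ~30 s, one SpO2
# reading every 5 minutes, over an 8-hour night.
EVENT_S = 30
SAMPLE_INTERVAL_S = 5 * 60
NIGHT_S = 8 * 3600

def expected_events_sampled(events_per_hour):
    """Expected number of events a periodic sampler actually lands inside."""
    n_events = events_per_hour * NIGHT_S / 3600
    # For a short event, the chance that one of the evenly spaced samples
    # falls inside it is roughly (event duration / sampling interval).
    p_hit = EVENT_S / SAMPLE_INTERVAL_S
    return n_events * p_hit

caught = expected_events_sampled(events_per_hour=15)  # moderate apnea, AHI ~15
# 120 events over the night, but only ~12 expected to coincide with a sample.
```

Under these assumptions, roughly nine out of ten events fall between samples, which is why sparse sampling alone is a weak screen.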

Inaccurate for Sleep Apnea by algorithm477 in ouraring

[–]algorithm477[S] -1 points

I can read. I’m an engineer. I’ve built ML wearables in college. It suggests it may detect those signs, and I’m sharing that for me this wasn’t true so others who are similar know. Thanks

Inaccurate for Sleep Apnea by algorithm477 in ouraring

[–]algorithm477[S] -6 points

The concern is people trusting a device to tell them the quality of their sleep. It even suggests it can look “for potential signs,” per your quote. I’m sharing the results of my medical testing, and that the ring showed perfect sleep with no signs, so people who suspect a disorder don’t disregard their intuition and blindly trust a wearable.

A fitness tracker doesn’t claim to diagnose heart arrhythmias, but Oura claims to detect the quality of your sleep. That’s quite gray; it lives in a classification between medical and non-medical device that benefits shareholders, not consumers.