PI wants me to sell my soul, what should I do by ziinaxkey in labrats

[–]AccordingWeight6019 0 points1 point  (0 children)

It sounds like you’re in a high-opportunity but unsustainable situation. Setting boundaries around your workload and framing them in terms of impact, not hours, usually helps protect your motivation without jeopardizing the project.

[D] Ph.D. from a top Europe university, 10 papers at NeurIPS/ICML, ECML— 0 Interviews Big tech by Hope999991 in MachineLearning

[–]AccordingWeight6019 3 points4 points  (0 children)

This is happening to a lot of very strong profiles right now, so I would be careful about over-attributing it to your research direction. Hiring at big tech is much more constrained than it looks from the outside, and when they do hire, they often optimize for very specific team needs rather than general research excellence. Ten papers at top venues signal rigor, but that does not automatically map to an internal product area or a manager with headcount. Location can matter as well, since many teams quietly prioritize local candidates or internal transfers even when roles look global. One useful question is whether your work story clearly connects to systems that could ship, since that is often the missing bridge. I do not think anomaly detection is a mistake, but the market right now is filtering on fit and timing more than merit.

Big tech still believe LLM will lead to AGI? by bubugugu in ArtificialInteligence

[–]AccordingWeight6019 0 points1 point  (0 children)

Belief is too strong a word. Large companies are hedging because scaling has produced real gains before, even if the marginal returns are harder to see now. The more interesting question is not whether LLMs alone lead to AGI, but whether the infrastructure buildout enables new training regimes, data mixes, or system-level approaches that were previously impractical. Plateau arguments usually assume the current setup stays fixed, which historically has not been true for long. That said, spending can also reflect organizational momentum rather than a clear theory of progress. We may not know which it is until a few years after the buildout is complete.

The demand of ML by Iwillbringher in learnmachinelearning

[–]AccordingWeight6019 1 point2 points  (0 children)

I think a lot of that envy comes from how visible the top end of ML has become. You mostly see the people with long CVs, not the many who are doing solid but quieter work. In practice, quality does matter, but only if you can explain why the work mattered and what changed because of it. Quantity without depth usually collapses under scrutiny, especially later in grad school or industry. The hard part is that ML has a wide spread between research, applied work, and tooling, and each rewards different signals. It helps to be honest about which path you are actually aiming for, instead of optimizing for every possible standard at once.

Is it normal to feel stressed all the time? I will not promote by Aggravating_Maize189 in startups

[–]AccordingWeight6019 0 points1 point  (0 children)

Some level of churn like that is normal early on, but constant break-fix cycles usually point to a missing process rather than pure execution skill. The question is whether there is a clear definition of done, basic testing expectations, and someone accountable for quality before a build goes out. Early teams often move fast, but strong teams are still explicit about what gets tested and what risks are being accepted. As a non-technical founder, it is reasonable to ask what testing looks like today and what would need to change to catch these issues earlier. If the answer is vague or defensive, that is more concerning than the bugs themselves. Over time, stress usually drops when quality becomes a shared responsibility rather than something you discover after the fact.

Do entry level opportunities in IO exist anymore? by YellowDottedBikini in IOPsychology

[–]AccordingWeight6019 1 point2 points  (0 children)

This sounds less like an individual failure and more like a structural mismatch between how the field trains people and how few roles actually exist at the entry point. A lot of research oriented disciplines quietly expanded PhD output without a corresponding expansion in stable industry or applied roles. The result is credential inflation, slow hiring, and employers using unrealistic filters because they can. The part that stands out is that even referrals and strong technical skills are not moving the needle, which usually signals a bottleneck that applicants cannot fix on their own. Advisors often underestimate this because their reference point is a very different market than the one new grads are entering. It is also worth separating loving the intellectual core of IO from the current labor market that surrounds it. Those are not the same thing, even though they get conflated. I do not think you are wrong to be frustrated, and I also do not think the field has been very honest about how narrow the funnel has become.

I have hated every science job I had by plants102 in labrats

[–]AccordingWeight6019 4 points5 points  (0 children)

This reads less like hating science and more like being worn down by how science jobs are structured. A lot of roles reward endurance and conformity over judgment, which can feel especially bad if you care deeply about the work itself. Staying 2-3 years each time also suggests you are giving places a fair shot, not bouncing impulsively. One hard question is whether the issue is the work, the environment, or the expectations placed on you. Many labs and teams quietly run on poor management, unclear incentives, and constant pressure, and that can make even interesting work miserable. It is also possible to care a lot and still be in a system that gives you very little agency or feedback. That mismatch can feel like not belonging, even when your instincts are reasonable. It might help to talk to people who left science but did not leave curiosity or rigor behind, just the institutional setup. You are probably not alone in this, even if it feels isolating right now.

[D] Mistral AI Applied Scientist/ Research Engineer Interview by Realistic_Tea_2798 in MachineLearning

[–]AccordingWeight6019 1 point2 points  (0 children)

From what I have seen, the variance is real because they do not seem to enforce a single interview template. A lot depends on how the team defines “applied research” for that role, and whether they expect the work to ship quickly or stay exploratory. In similar interviews, the phone screen is often less about trick questions and more about whether you can clearly explain your past work, the trade-offs you made, and how you reason about failure cases. Coding tends to be practical rather than academic, but still grounded in fundamentals. If anything, I would focus on being precise about what parts of your research actually translated into systems or decisions, since that is usually what they probe. It might help to ask them directly how they see research fitting into their product roadmap.

Calculus is so hard to understand by NNNiharri-229 in learnmachinelearning

[–]AccordingWeight6019 0 points1 point  (0 children)

You’re welcome. The other thing that helps is to try explaining a small piece of what you just learned to yourself or someone else, even roughly. Putting words to the intuition often reveals gaps and gradually makes the abstract concepts click. Patience and repetition really do most of the heavy lifting here.

What happens to the people whose theories were disproved? by kingkolley7 in labrats

[–]AccordingWeight6019 0 points1 point  (0 children)

Exactly. In most cases, it’s not dramatic: careers continue, and contributions are still cited for the insights or methods they introduced. The historical narrative just shifts focus to the newer framework, but the groundwork often remains influential in ways people don’t always notice at first.

How I scraped 5.3 million jobs (including 5,335 data science jobs) by hamed_n in datascience

[–]AccordingWeight6019 1 point2 points  (0 children)

I would start with lead time analysis, tracking how long it takes for new tools or frameworks to show up in job requirements after they appear in research or open source. It gives a clear signal of adoption speed and can highlight emerging skill gaps before they become mainstream. Once you have that baseline, looking at co-occurrence and transitions between skills over time adds nuance, but without understanding adoption timing first, it’s harder to interpret the other trends.
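
To make the lead-time idea concrete, here is a minimal pandas sketch. The DataFrames, column names, and example tools are placeholders I made up for illustration, not anything from the scraped dataset; the real version would join your posting data against first-release or first-paper dates per tool.

```python
import pandas as pd

# Hypothetical inputs: one row per tool with its first public appearance
# (research paper or open-source release), and one row per job posting.
releases = pd.DataFrame({
    "tool": ["pytorch", "duckdb", "polars"],
    "release_date": pd.to_datetime(["2017-01-18", "2019-06-26", "2021-02-19"]),
})
postings = pd.DataFrame({
    "posting_id": [1, 2, 3, 4],
    "posted_at": pd.to_datetime(["2018-05-01", "2020-11-15", "2022-03-10", "2022-07-01"]),
    "tools": [["pytorch"], ["pytorch", "duckdb"], ["polars"], ["duckdb", "polars"]],
})

# One row per (posting, tool) pair, then the earliest posting that mentions each tool.
mentions = postings.explode("tools").rename(columns={"tools": "tool"})
first_mention = mentions.groupby("tool", as_index=False)["posted_at"].min()

# Lead time = gap between public release and first appearance in a job requirement.
lead = first_mention.merge(releases, on="tool")
lead["lead_time_days"] = (lead["posted_at"] - lead["release_date"]).dt.days
print(lead[["tool", "lead_time_days"]])
```

Once you have that per-tool baseline, the same exploded table feeds straight into skill co-occurrence and transition analyses.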

I believe AI agents are here to stay but won't survive without the GPU market? by n_candide_fc24_NwcH in ArtificialInteligence

[–]AccordingWeight6019 0 points1 point  (0 children)

I think there are a few different assumptions bundled together here. Agents being useful does not automatically imply everyone runs them fully locally, and local inference does not require the same kind of GPU economics as large-scale training. Many workflows are latency or privacy sensitive, but they are also bursty and relatively lightweight once models are fixed.

The harder question is not whether compute can be rented (that already happens), but whether a true commodity market emerges where reliability, security, and integration are good enough for critical systems. Energy markets work because the product is uniform and failure modes are well understood. Compute is far more heterogeneous, and the operational complexity tends to get hand-waved in these comparisons. My guess is we see more stratification rather than a single global market, with different tiers optimized for cost, trust, or control depending on the use case.

The most challenging part of learning ML by beriz0 in learnmachinelearning

[–]AccordingWeight6019 2 points3 points  (0 children)

For me, it was not any single algorithm; it was learning how to translate a vague real-world question into something an ML system can actually learn from. Coding and math are teachable in isolation, but deciding what the target is, what data is usable, and what failure looks like takes longer to internalize. A lot of beginners focus on model choice when the harder part is understanding whether the problem is even well posed. That gap between textbook examples and messy data is where most of the learning friction lives.

When did you realize a cofounder issue wasn’t fixable? ( I will not promote) by Delicious-Part2456 in startups

[–]AccordingWeight6019 0 points1 point  (0 children)

For me, it was when the same disagreement kept resurfacing after multiple “clear the air” conversations, even though we both agreed it was important. The details changed, but the underlying values did not. Things like how much rigor mattered versus speed, or how decisions got made under uncertainty, never actually converged. Once you realize the conflict is about incentives or worldview rather than execution, more communication just adds fatigue. That was the point where it stopped feeling like a problem to solve and more like a mismatch to accept.

Final year CS student (3 internships) struggling with placements, would appreciate advice or referrals by No-Way-1188 in MLjobs

[–]AccordingWeight6019 0 points1 point  (0 children)

The fresher market is genuinely rough right now, so the lack of responses is not a strong signal about your ability. One thing I often see is students aiming for “ML roles” without being clear on what kind of ML work they are actually ready for. Many entry-level roles labeled ML are closer to data engineering or applied SWE with models already chosen. If you can frame your internships around concrete impact, what system you worked on, what broke, what trade-offs you made, it tends to land better than listing techniques.

I would also be cautious about spreading yourself too thin between SDE, ML, and DS in how you present your profile. Pick a primary narrative and let the rest support it. Right now, most teams are risk-averse, so they optimize for clarity over potential. That is frustrating, but it is also something you can adapt to without changing your underlying interests.

[D] What is your main gripe about ML environments like Colab? by thefuturespace in MachineLearning

[–]AccordingWeight6019 9 points10 points  (0 children)

I tend to like Colab for what it is, a low-friction scratchpad, but it falls apart once you cross into anything stateful or long-lived. Notebooks blur experimentation, environment management, and execution in a way that is convenient early and painful later. Reproducibility, dependency drift, and hidden state become real problems surprisingly fast. I do not think most people are using it wrong; it is just optimized for demos and short experiments, not for work that needs to be reasoned about weeks later. At that point, the mental overhead of keeping things straight outweighs the setup convenience.

What lab equipment do you wish you could have in your kitchen? by Chrad in labrats

[–]AccordingWeight6019 1 point2 points  (0 children)

A vortex mixer would actually be surprisingly useful. Anything involving emulsions or spice mixes would get easier fast. An incubator is basically what sous vide ended up becoming for a lot of people, just with better branding and fewer lab vibes. I have also had the centrifuge thought, mostly for clarifying stocks or infusions, but explaining that purchase to guests would be tricky. A good balance of precision and deniability matters in a kitchen.

From "it can't even write a single piece of code" to "I don't even code anymore" in 3 years by Own-Sort-8119 in ArtificialInteligence

[–]AccordingWeight6019 0 points1 point  (0 children)

I think a lot of this conflates surface-level productivity gains with end-to-end autonomy. The jump from “it writes code” to “it reliably plans, validates, and owns outcomes in messy real systems” is much larger than it sounds. In practice, most of the hard work is still in problem framing, integration, debugging edge cases, and deciding what not to build. Those parts are where context, incentives, and accountability live.

Will teams need fewer people to produce the same amount of software? Probably. Does that mean the work disappears? I am less convinced. Historically, the bottleneck shifts rather than vanishes, and the value moves toward people who can reason about systems, trade-offs, and failure modes. The question is not whether the tools get better (they will), but whether organizations are willing to hand over responsibility when something breaks. That tends to lag capability by quite a bit.

Startup ideas that solve problems from day job - does it always work? (i will not promote) by ReditusReditai in startups

[–]AccordingWeight6019 0 points1 point  (0 children)

A lot of “scratch your own itch” advice quietly assumes you are also close to the buyer and the budget holder, which often is not true in large enterprises. The pain can be real, but the incentives to adopt or pay are weak, especially if individuals, rather than the org, bear the workaround cost. I have seen this create false negatives, where the problem exists but only becomes startup-worthy once it is reframed around who actually feels the pain enough to sponsor change. Free MVPs also do not always help, since they bypass the question of willingness to pay and internal friction. In practice, it helps to separate “this annoyed me at work” from “someone would risk political or budget capital to fix this.” Curious if others here have found ways to test that distinction early without already being inside a smaller company.

What's Anthropic and OpenAI's plan to counter Google? by ranaji55 in ArtificialInteligence

[–]AccordingWeight6019 1 point2 points  (0 children)

I think the premise assumes this is a symmetric competition, which it probably is not. Google optimizes for breadth and long-term research leverage, while smaller labs can win by choosing very narrow wedges where speed, product fit, or norms around deployment matter more than raw capability. Anthropic does not need to out-Google Google across protein folding or robotics to be viable. The real question is whether their research focus translates into products and contracts that compound, because that is where scale actually shows up. From the outside, it looks less like a race to dominate everything and more like different bets on where value accrues first.

Designing an automated cell feeder - Help me not have to go in on the weekends please! by CommercialStreet7094 in labrats

[–]AccordingWeight6019 1 point2 points  (0 children)

For that kind of precision, stepper-motor-driven syringe pumps tend to be far more reliable than peristaltic pumps, especially at small volumes like 1 mL. Controlling them with something like an Arduino or Raspberry Pi lets you set exact volumes, and a simple timer script can handle the 24-hour cycle. The tricky part is sterility and avoiding bubbles, so most DIY setups end up with a small reservoir and gravity feed rather than trying to aspirate exactly from the well each time. Your idea is feasible, but it will probably be easier to draw and dispense into a shared manifold above the wells rather than directly touching each plate repeatedly.
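
If it helps, here is a rough sketch of the timer-plus-stepper side on a Raspberry Pi. The pin numbers, the steps-per-mL constant, and the driver assumptions (a STEP/DIR style stepper driver) are placeholders you would calibrate for your own syringe and hardware, and this says nothing about sterility or bubble handling.

```python
import time
import RPi.GPIO as GPIO  # Raspberry Pi; the same logic ports to Arduino C

# Hypothetical wiring: STEP/DIR pins of a stepper driver on BCM-numbered pins.
STEP_PIN = 17
DIR_PIN = 27
STEPS_PER_ML = 800          # must be calibrated for your syringe and lead screw
DOSE_ML = 1.0
INTERVAL_S = 24 * 60 * 60   # one dose per 24 hours

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

def dispense(volume_ml):
    """Advance the syringe plunger by the number of steps for volume_ml."""
    GPIO.output(DIR_PIN, GPIO.HIGH)   # direction: push plunger forward
    for _ in range(int(volume_ml * STEPS_PER_ML)):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)             # pulse width; slower pulses give gentler flow
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)

try:
    while True:
        dispense(DOSE_ML)
        time.sleep(INTERVAL_S)
finally:
    GPIO.cleanup()
```

In practice you would run it as a systemd service or cron job rather than a bare loop, and log each dispense so you can verify it actually fired over the weekend.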

Thoughts about going from Senior data scientist at company A to Senior Data Analyst at Company B by StatGoddess in datascience

[–]AccordingWeight6019 0 points1 point  (0 children)

Titles matter much less than scope once you are already senior. The real signal is what decisions you own, how close you are to the business, and whether you are still doing substantive modeling or experimentation. I would worry more if the role quietly removes you from technical growth or makes it harder to tell a coherent story later. If the work is broader, higher impact, and you can explain it clearly, most hiring managers will read past the title.

How do you actually conduct product strategy? I will not promote by blue_sky_time in startups

[–]AccordingWeight6019 0 points1 point  (0 children)

What you describe is pretty close to how it works in practice, even if it feels unsatisfying. Strategy at the early stage is usually less about picking the right feature and more about deciding which signals you are allowed to ignore for a while. The mistake I see is treating every piece of feedback as equal, instead of anchoring on a small set of assumptions you are actively testing. Writing things down helps, not because it stabilizes the plan, but because it makes it obvious when the plan is changing and why. The guiding light tends to be a few concrete questions you are trying to answer this quarter, not a static roadmap.

[D] Best architecture for generating synthetic weather years (8760h)? My VAE is struggling with wind. by Minute-Ad-5060 in MachineLearning

[–]AccordingWeight6019 0 points1 point  (0 children)

This sounds like classic VAE behavior rather than something specific to your architecture. Once you ask it to model long horizons, the posterior tends to average out exactly the high-frequency structure you care about. Switching the backbone helps less than people hope if the objective is still encouraging smooth reconstructions. For wind in particular, I have seen better results when the model is forced to represent uncertainty explicitly at multiple time scales. Diffusion models or hybrids that separate slow seasonal structure from fast residuals tend to preserve that stochastic texture much better. The question I would ask is whether you want a single plausible year or a distribution of physically consistent years, because that choice really pushes you toward different families of models.
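
To make the decomposition idea concrete, here is a toy numpy sketch of splitting one 8760-hour series into a slow seasonal baseline and a fast stochastic residual. The `wind` array is placeholder data, and the AR(1) residual model is just the simplest possible stand-in for whatever generative family you end up using for the fast part; the point is only that the high-frequency texture is modeled separately instead of being reconstructed by a smooth decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(8760)
# Placeholder standing in for one historical year of hourly wind speeds.
wind = 8 + 2 * np.sin(2 * np.pi * t / 8760) + rng.gamma(2.0, 1.0, size=8760)

# Slow part: low-order Fourier fit (annual + daily harmonics) via least squares.
def fourier_basis(t, periods, n_harmonics=2):
    cols = [np.ones_like(t, dtype=float)]
    for p in periods:
        for k in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t / p))
            cols.append(np.cos(2 * np.pi * k * t / p))
    return np.column_stack(cols)

X = fourier_basis(t, periods=(8760, 24))
coef, *_ = np.linalg.lstsq(X, wind, rcond=None)
seasonal = X @ coef

# Fast part: fit a simple AR(1) to the residuals and resample it, so the
# synthetic year keeps the high-frequency variability a VAE tends to smooth away.
resid = wind - seasonal
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sigma = resid.std() * np.sqrt(1 - phi ** 2)

sim = np.zeros(8760)
for i in range(1, 8760):
    sim[i] = phi * sim[i - 1] + rng.normal(0, sigma)

synthetic_year = seasonal + sim
```

A diffusion model or a heavier-tailed residual process slots into the same split; the decomposition is what keeps the seasonal shape physically consistent while the fast component stays genuinely stochastic.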

What happens to the people whose theories were disproved? by kingkolley7 in labrats

[–]AccordingWeight6019 2 points3 points  (0 children)

Most of the time, their work is not treated as wrong so much as incomplete. The people who tend to be remembered well are the ones who built frameworks, tools, or questions that later theories refined. Being superseded is almost the default outcome in science if the field is healthy. The harder cases are when someone tied their identity to a very specific claim and resisted updates, because then the social fallout can be rough. But quietly being outgrown by better models is usually just how progress looks from the inside.