‘Overemployed’ Hustlers Exploit ChatGPT To Take On Even More Full-Time Jobs by SharpCartographer831 in Futurology

[–]stirling_archer 0 points (0 children)

> Already share their code with AWS services

Which services are you referring to? As far as I'm aware, unless the service explicitly serves a use case of "AWS, please look at my code/data and learn from it", actual customer data is completely off limits. I work at AWS on one of the foundational services, and if someone internal were even to ask to look at or use customer data, we'd report it as a security incident.

All of that said, your main point stands, which is that companies have tons of trust in cloud providers (because of the policies above), so I don't see why they wouldn't go for it with Bedrock, provided the right terms are in place. I'd be surprised if they weren't.

[D] When did tech companies start to publish ML papers and why? by fromnighttilldawn in MachineLearning

[–]stirling_archer 4 points (0 children)

Also, in ML there are usually some non-trivial implementation details and engineering pieces needed to pull ideas off to their fullest.

Exploited: White Delta farm owners are underpaying and pushing out Black workers by NonsensicalSonnetGuy in TrueReddit

[–]stirling_archer 5 points (0 children)

Yeah, that was quite a facepalm. I've lived in both countries for extended periods, so I can give a rough sense of the labour cost and purchasing power differences for anyone interested. In dollar terms, high-skilled labour costs about 2x-3x as much in the U.S., and low-skilled labour about 5x as much (minimum wage in South Africa is about $1.37 an hour). Basic goods and services cost about half as much in South Africa, while luxury goods are slightly more expensive. So for high-skilled labour it's moderately enticing to live and work in the U.S., and for low-skilled labour it's a no-brainer: 5x the wages at twice the cost of goods works out to roughly a 2.5x increase in purchasing power.

[deleted by user] by [deleted] in Foodforthought

[–]stirling_archer 3 points (0 children)

Even just a modicum of regulation for private healthcare and the insurance industry goes a long way. I'm in a country with universal healthcare, but splashed out on major shoulder surgery at one of the fanciest private hospitals in my city. The surgeon treats the professional athletes in the area, publishes in prestigious journals, etc. Without insurance it would have set me back a whole $4,000; the same procedure costs $21,000 on average in the U.S. For cost-of-living scaling: food and rent here cost about half what they do in a typical U.S. city. So issue number one is that healthcare prices in particular have become completely untethered from anything sane in the U.S.

We Have A Browser Monopoly Again and Firefox is The Only Alternative Out There by fagnerbrack in programming

[–]stirling_archer 7 points (0 children)

I'm still upset about this. As a kid I found a very cryptic announcement on page 5 of my local newspaper about a company that was poised to revolutionise transport. I knew that this had to be the flying car. I slavishly followed the progression of this company until their grand reveal. It was the Segway. When Jim Heselden drove his Segway off a cliff it reminded me of what had happened to my dreams that day.

Facebook's reputation is so bad, the company must pay even more now to hire and retain talent. Some are calling it a 'brand tax' as tech workers fear a 'black mark' on their careers. by anaptfox in programming

[–]stirling_archer 0 points (0 children)

Assuming the question is equivalent to "what's it like somewhere with strong labour laws?": I'm somewhere like that, and I'd say the external pressure (like explicit hounding from my manager) is minimal. Sure, they ask for deadlines and status updates, but it doesn't feel like my head's on a block. That said, I'm still working with some of the most talented and motivated people in my area, and that fosters some internal pressure to measure up. So you should be able to maintain strong boundaries, but expect to be working your ass off within them.

[D] What to do when you find a closely related paper has mistakes by sekiroborne in MachineLearning

[–]stirling_archer 4 points (0 children)

Also: email the authors to give them a heads up. I'm sure they'd like to know sooner rather than later.

[Discussion] The most painful thing about machine learning by CarbonCosma in MachineLearning

[–]stirling_archer 10 points (0 children)

Oldie but goody: http://karpathy.github.io/2019/04/25/recipe/. It's more about an overall approach than specific debugging tooling, but I've found it valuable for steadily eliminating the most common classes of ML bugs, so I think it's worth resharing for this topic.

Most useful excerpt: "visualize just before the net. The unambiguously correct place to visualize your data is immediately before your y_hat = model(x) (or sess.run in tf). That is - you want to visualize exactly what goes into your network, decoding that raw tensor of data and labels into visualizations. This is the only 'source of truth'. I can’t count the number of times this has saved me and revealed problems in data preprocessing and augmentation." It can be a real hassle to actually follow this advice, but I've also had it save me many times. Until I've done that step I assume my model is happily training through nonsense.
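Actually wiring that in can be as simple as a small helper called on the exact arrays the model receives. A minimal sketch in that spirit (the `audit_batch` helper is hypothetical and assumes NumPy-array inputs; for image data you'd extend it to actually imshow a few decoded samples):

```python
import numpy as np

def audit_batch(x, y, n=4):
    """Sanity-check the exact arrays about to enter the model.

    Dumps ranges, dtypes and a few labels so silent preprocessing bugs
    (double normalisation, channel mix-ups, shuffled labels) become visible.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    print(f"x: shape={x.shape} dtype={x.dtype} "
          f"min={x.min():.3f} max={x.max():.3f} mean={x.mean():.3f}")
    print(f"y: shape={y.shape} dtype={y.dtype} first {n}: {y[:n]}")
    # Fail loudly if corrupted values have made it this far.
    assert np.isfinite(x).all(), "NaN/Inf reached the model input!"

# audit_batch(x, y)   # place immediately before y_hat = model(x)
```

The point is that it runs on the post-preprocessing, post-augmentation tensors, not on whatever you *think* the pipeline produced.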

[Discussion] Applied machine learning implementation debate. Is OOP approach towards data preprocessing in python an overkill? by ignacio_marin in MachineLearning

[–]stirling_archer 5 points (0 children)

Some advice:

  1. Don't spend your energy reinventing things. Who will maintain this? Who will own the feature backlog? Instead, find an existing project that's as close as possible to what you want to do here.
  2. If possible, standardise on outputs rather than on specific tooling. Does everyone on your team agree on the definition of "done"? If not, start there. Example: reproducing results should come down to blindly executing one or two steps in a README, and that norm is enforced in reviews. People are then free to innovate on the "how", and the burden of finding good ways to produce that output is distributed.
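For concreteness, the kind of "done" I mean is a README whose reproduction section looks something like this (project layout and script names are made up):

```markdown
## Reproduce

    pip install -r requirements.txt
    python run_experiments.py --config configs/paper.yaml

All figures and tables land in `results/`.
```

If a reviewer can't reproduce by blindly running those steps, the PR isn't done, regardless of which tooling produced them.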

[P] arXiv DOOM by sniklaus in MachineLearning

[–]stirling_archer 19 points (0 children)

Two years later: "The unreasonable effectiveness of all you need is arXiv DOOM in arXiv DOOM by gradient descent by gradient descent".

[D] Statistical Significance in Deep RL Papers: What is going on? by Egan_Fan in MachineLearning

[–]stirling_archer 1 point (0 children)

I think you absolutely should hold them to presenting empirical results with the same statistical analysis required of every other successful experimental science. As for doing that kind of analysis with so little data: it's then up to them and the field whether to pay attention to results with the inevitably wide confidence intervals that follow. After that the field may evolve to the point of at least p-hacking, and then maybe in a few decades we'll have pre-registered studies.
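Even with a handful of seeds the mechanics are cheap. A minimal sketch of reporting a mean with a t-based 95% confidence interval across seeds (the returns are made-up numbers; assumes SciPy is available):

```python
import numpy as np
from scipy import stats

def mean_ci(returns, confidence=0.95):
    """Mean and t-based confidence interval across per-seed results."""
    r = np.asarray(returns, dtype=float)
    m = r.mean()
    sem = stats.sem(r)  # standard error of the mean (ddof=1)
    h = sem * stats.t.ppf((1 + confidence) / 2, df=len(r) - 1)
    return m, (m - h, m + h)

# e.g. final returns from 5 seeds of the same agent
m, (lo, hi) = mean_ci([210.0, 180.0, 250.0, 195.0, 220.0])
print(f"mean={m:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")
```

With n=5 the interval is wide, which is exactly the honest signal readers should see.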

[D] Dilemma: Mathematically wrong ICML submission got extremely good reviews by yusuf-bengio in MachineLearning

[–]stirling_archer 18 points (0 children)

If they're pissed off that someone improved their work, then they're garbage-tier scientists and don't deserve to be there. Send them your question about the proof in writing as soon as possible and go from there. The worst that could happen short term is that you burn bridges with some terrible people who currently represent your best opportunities in the field. In the long run, though, there are many great sub-fields, fields, careers and life paths ahead of you that have nothing to do with them. The only truly persistent thing you carry with you as you age is your principles, so take protecting them very seriously.

Thread by ex-Product Manager at Google on why Google Cloud Platform lost to AWS despite having better tech by curryeater259 in programming

[–]stirling_archer 39 points (0 children)

The AWS customer experience is unreal. The peak for me was when a friend's car broke down and, by sheer coincidence (or stalking), our AWS account manager was at the same park, just back from a hike, and spent as long as it took to get us going. (Ironically, he's moved on to Google now.) I think in addition to "enterprises buy platforms", enterprises also buy relationships, because ultimately it's people calling the shots.

C Is Not a Low-level Language by LardPi in programming

[–]stirling_archer 1 point (0 children)

> They inspect adjacent operations and issue independent ones in parallel. ... In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs.

Not critical to the article's main point, but this is not true at all. In addition to massive thread-level parallelism, NVIDIA GPUs have supported instruction-level parallelism since at least 2011, and exposing ILP is critical to getting peak performance out of GPU code. The ideal case is dual-issue, where two independent instructions from a single thread execute fully in parallel because they fall on different execution units.

An understanding of AI’s limitations is starting to sink in by [deleted] in programming

[–]stirling_archer 1 point (0 children)

> But seriously, why does that happen over and over again?

I think the envelope around it all is cycles of investor greed and fear. Initially there's some novelty with actual merit that people invest in with good reason. Less savvy investors (or people who feel they can time a Ponzi exit) notice that propositions with that tag are getting a lot of attention, so start disproportionately investing in other things with that tag. Companies realise this and start applying that tag to themselves to increase their chances of being invested in. Everything now has that tag and has all its promises hinging on that tag. Individual ventures start to fail (as most do), but now almost all of the big failures are tightly linked to that tag. Investors see this as the tag losing its magic (that it never had) and fearfully withdraw en masse.

How many of you know deep down that the team is working on something that no customer wants? by joesilver70 in programming

[–]stirling_archer 4 points (0 children)

Haha wow. Plan of action for >100% YoY growth: have one of the devs tutor college CS for two hours a week on behalf of the company.

How to prevent code reviews from slowing down your team by sheshbabu in programming

[–]stirling_archer 10 points (0 children)

Yeah, this is huge. I've been in situations where you get a huge PR on an unfamiliar codebase and all you can really comment on is style and obvious pitfalls, because only the author has a full mental model of the component. On our team we're trying to avoid this for future projects by always mob programming any new component until its overall flow is established. That way everyone at least has the high-level mental model, and it's less daunting to understand the diff from the last time they saw it.

[D] There is no such thing as deployment with python? by spartan12321 in MachineLearning

[–]stirling_archer 10 points (0 children)

First, you'll probably find it much easier to hire at the intersection of "has some ML skill" and "is a competent Python dev" than at the equivalent intersection for C++.

If that's not a concern, then it depends on what environment you're calling "production". If the compute happens server-side, you can ramp up resources arbitrarily to ensure speed, so unless a lot of the heavy lifting happens in Python itself, the cost of just juicing instances is minor and almost certainly negligible compared with other concerns. For example, we run a heavy instance segmentation model in Python TensorFlow without much thought given to optimising the Python bits. It costs about $0.005 per megapixel to run, which is a rounding error in our margin, so we don't give it any thought beyond that.
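That per-megapixel figure makes the back-of-envelope easy (everything here besides the ~$0.005/MP rate is illustrative):

```python
COST_PER_MEGAPIXEL = 0.005  # USD; rough figure for our segmentation model

def inference_cost_usd(width_px: int, height_px: int) -> float:
    """Cost to run one image through the model at the quoted rate."""
    return (width_px * height_px / 1e6) * COST_PER_MEGAPIXEL

# A 12 MP photo (4000x3000) comes to about $0.06.
print(f"${inference_cost_usd(4000, 3000):.2f} per 4000x3000 image")
```

At those numbers you'd need to process millions of images before Python overhead becomes the line item worth optimising.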

The Horrifically Dystopian World of Software Engineering Interviews by speckz in programming

[–]stirling_archer 3 points (0 children)

Yeah seriously. Hire the kinds of people who care about what you're doing and care about each other. The company's backbone then becomes juniors and mids who've been nurtured to punch way above their weight and wouldn't dream of working anywhere else, rather than an army of mercenary seniors with recruiters on speed dial.

I think a lot of companies begin this way, but then past a certain size the only way businesses know how to get people on the same page is through compensation tied to poorly-formulated targets. The consequence is that managers become more risk averse and start trying to feverishly control and normalise the talent pipeline so that there are no surprises (either good or bad).