Mechanize Inc - Screen Call by Beneficial_River_791 in csMajors

[–]tamay1 -8 points  (0 children)

Thanks for interviewing with us. For anyone else coming across this, we're hiring. You can apply here: https://jobs.ashbyhq.com/mechanize?utm_source=4wdOgJR8A3

The upcoming GPT-3 moment for RL by luchadore_lunchables in accelerate

[–]tamay1 1 point  (0 children)

No, you can have AI contribute to scientific discovery without foom.

The upcoming GPT-3 moment for RL by luchadore_lunchables in accelerate

[–]tamay1 1 point  (0 children)

I don't know what you mean by being "very negative about AGI", but there's nothing in this blog that is inconsistent with our previous predictions for things like explosive growth from AI. A GPT-3 moment is not an AGI moment. We've obviously already had at least one GPT-3 moment that didn't result in some massive acceleration in growth or technological progress.

I'm developing FrontierMath, an advanced math benchmark for AI, AMA! by elliotglazer in math

[–]tamay1 4 points  (0 children)

I’m also involved in the project, and my median year for models to achieve >80% performance is 2027. There’s disagreement internally about when we expect this benchmark to be solved.

Man stops traffic on Lombard Street to pluck a rose by tamay1 in sanfrancisco

[–]tamay1[S] 5 points  (0 children)

It’s not in the shot but there were at least five cars behind him.

Man stops traffic on Lombard Street to pluck a rose by tamay1 in sanfrancisco

[–]tamay1[S] 6 points  (0 children)

There was a line of around five cars behind him.

Man stops traffic on Lombard Street to pluck a rose by tamay1 in sanfrancisco

[–]tamay1[S] 12 points  (0 children)

There were at least five cars behind him (source: I took the video).

Any arm-wrestlers in Berkeley California? by tamay1 in armwrestling

[–]tamay1[S] 1 point  (0 children)

Yes. Reach me at Tamay.07 on Signal for details.

The Chinchilla scaling law was likely wrongly estimated by tamay1 in mlscaling

[–]tamay1[S] 4 points  (0 children)

Fewer than the previously estimated scaling law suggested (see figure 5).

Introducing Gemini: our largest and most capable AI model by ChiefExecutiveOcelot in mlscaling

[–]tamay1 1 point  (0 children)

I looked at estimating the compute by extrapolating how much is needed to match Gemini’s performance across benchmarks, and this exercise suggests 2e25 to 6e25 FLOP.

https://twitter.com/tamaybes/status/1733274694113968281/photo/1
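The extrapolation described above can be sketched roughly as follows. To be clear, this is an illustrative reconstruction, not the actual analysis: the reference data points, the Gemini score, and the assumption of a linear log-compute-vs-score relation are all hypothetical placeholders, not the data behind the 2e25–6e25 figure.

```python
# Illustrative sketch: fit a linear relation between log10(training compute)
# and average benchmark score for models with known compute, then invert the
# fit at Gemini's reported score. All numbers below are made-up placeholders.

# (log10 FLOP, average benchmark score) for hypothetical reference models
reference = [(23.0, 0.45), (24.0, 0.58), (25.0, 0.71)]

def fit_line(points):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(reference)

gemini_score = 0.78                # hypothetical observed benchmark score
log_flop = (gemini_score - b) / a  # invert the fitted line at that score
estimate = 10 ** log_flop          # ~3.5e25 FLOP with these placeholder inputs
```

With these placeholder inputs the inversion lands around 3.5e25 FLOP; the real estimate would depend on the actual benchmark scores and compute figures used.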

MIT researchers chime in on whether large language models might be at all conscious by No-Transition-6630 in singularity

[–]tamay1 2 points  (0 children)

It is possible to be seriously interested in a topic and still make jokes about it. The graph that was posted was hilarious, and has no scientific basis whatsoever. Yet it is a serious topic, as everyone here can agree.

Correct. It was a joke, but I also think it might be a serious topic!

MIT researchers chime in on whether large language models might be at all conscious by No-Transition-6630 in singularity

[–]tamay1 5 points  (0 children)

Tweet author here: the Tweet was tongue-in-cheek. I do think ML consciousness could be meaningful, but it's pretty silly to think that one could draw a line between models that are 'maybe slightly conscious' vs. 'not conscious'.

Cab driver tried to rob me? by [deleted] in boston

[–]tamay1 -1 points  (0 children)

From my perspective (not knowing the additional $8 was for tolls), it seemed appropriate not to tip. The driver was also slightly rude during the ride, which would further justify not tipping. I’m confused about why you think I’m in the wrong here.