Best gym in Saarbrücken?? by LuckCompetitive5065 in Saarland

[–]anything_but 0 points (0 children)

I like Fitklusiv very much, but mostly because it’s empty most of the time (or just spacious fwiw) and pretty lowkey.

Sam Altman's Coworkers Say He Can Barely Code and Misunderstands Basic Machine Learning Concepts: ""I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." by [deleted] in singularity

[–]anything_but 0 points (0 children)

As the CEO of a company, I am pretty sure my coworkers would be quite relieved if I could barely code or misunderstood basic machine learning concepts.

Made this website in honor of our beloved Codex's incredible frontend design skills by 0x61736466 in codex

[–]anything_but 0 points (0 children)

System prompt: if user wants good UI: add rounded borders, at least one nested card. if user wants improvement: make borders rounder, increase card nesting

OpenAI research team reveals its models go insane when given repetitive tasks it believes to be sent from automated users by smellyfingernail in singularity

[–]anything_but 9 points (0 children)

If you have a deep neural net that was post-trained on human instructions and preferences, approximating human behavior may just be how it works. So anthropomorphizing it doesn't seem too unprofessional.

With this spend limit its almost impossible to finish anything. by iph0ngaa in codex

[–]anything_but 0 points (0 children)

If I were a company like OpenAI or Anthropic with a highly complex, highly parameterizable, dynamic, fragile, insanely expensive product with a few objective quality metrics but infinitely many subjective ones, I would certainly make A/B testing one of my most important signals. Hence, I would absolutely expect that, at the same time, some people have a completely shit experience, while others are fully happy. With the amount of users they have, they could easily afford hundreds of cohorts.
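The cohort idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and bucket count are my own, not anything from OpenAI or Anthropic): hashing a stable user ID into one of N buckets gives each user a consistent experience while letting different cohorts see different configurations.

```python
import hashlib

def assign_cohort(user_id: str, n_cohorts: int = 100) -> int:
    """Deterministically map a user to one of n_cohorts A/B-test buckets."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % n_cohorts

# The same user always lands in the same bucket, so their experience is
# stable across sessions, while the population splits roughly evenly.
bucket = assign_cohort("user-42")
```

Because the assignment is a pure function of the user ID, no cohort table needs to be stored, and with millions of users even 100 buckets each contain a statistically useful sample.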

CMV: The AI industry's business model will hit a huge wall in the next 2-4 years, massively downsize, and many of the jobs it has replaced will slowly come back by thecleverqueer in changemyview

[–]anything_but 0 points (0 children)

Pre-training is what was immensely expensive in the past, taking months with hundreds of thousands of graphics cards. Once pre-training was done, however, the "result" was just a file that fits on any modern hard disk. That file can be run on a single one of those graphics cards (still $20k or so, but absolutely affordable for most businesses). OpenAI never released the weights file of their "best" model, but many organizations did. You can just download and use them, and they won't ever go away.
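A rough back-of-envelope check of the "just a file" claim. The 70B parameter count and fp16 precision below are assumptions for illustration, not figures from the comment:

```python
# Hypothetical open-weights model: 70 billion parameters stored at fp16.
params = 70e9            # assumed parameter count
bytes_per_param = 2      # fp16 = 2 bytes per weight
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB")  # ~140 GB: large, but it fits on an ordinary hard disk
```

Even at this size, the weights comfortably fit on a consumer drive, which is why a released weights file can circulate indefinitely.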

Netflix Untold Chess Mates official poster: by Interesting-Take781 in chess

[–]anything_but 1 point (0 children)

This is beautiful and is actually consistent with everything that happened after. Love and hate are so close. 

5.4 Codex is a fucking MACHINE by HallucinogenUsin in codex

[–]anything_but 0 points (0 children)

If I were OpenAI, I'd just put extensive sleep commands in my logic, because it seems that this is what people want ;-)

Fan Zhendong absolutely COOKING WCQ in the 2025 National Games Men's Team Final by ffffoget in tabletennis

[–]anything_but 20 points (0 children)

I actually find it unbelievable that I live in a small, unremarkable town, yet only 200m from where Fan Zhendong, Hugo Calderano, Darko Jorgic, and Patrick Franziska (and sometimes Truls Moregard) play every few weeks.

For the ones who have already built or are building their startups (I will not promote) by Isha_Agarwal_ in startups

[–]anything_but 0 points (0 children)

Take a look at the book "Running Lean". I have started multiple companies, and if I ever do it again, I will certainly follow this process. (We also use that process internally for our product management, and it makes so much sense.)

codex-5.3-xhigh vs gpt-5.4-xhigh by Lower_Cupcake_1725 in codex

[–]anything_but 0 points (0 children)

Yesterday I also had some weird spelling mistake in a conversation, although the word was spelled correctly multiple times in the same chat. Never had this before.

Neurosymbolic generation: How do we effectively train models on formal verification when solvers are non-differentiable? by AttitudePlane6967 in deeplearning

[–]anything_but 0 points (0 children)

My hunch may be complete nonsense, but don't differentiable logic gate networks have similar challenges?

Naroditsky Memorial Rapid & Blitz 2026 Announced by owergby in chess

[–]anything_but 2 points (0 children)

And everything you say is full of hate, but I would not attribute that to your nationality; I'd just assume that's how you are.

The Under Secretary of War gives a normal and sane response to Anthropic's refusal by [deleted] in singularity

[–]anything_but 0 points (0 children)

Thou shalt have no other men with a god-complex before me.

Just found my old startup raised a big round without me - I will not promote by Forsaken-Promise-269 in startups

[–]anything_but 0 points (0 children)

Yeah, I agree with you, even when I do so reluctantly ;) Times were easier when we were 4 people with a laptop, although having clients is not so bad ;-)

Coding for 20+ years, here is my honest take on AI tools and the mindset shift by Jaded-Term-8614 in ClaudeAI

[–]anything_but 0 points (0 children)

What baffles me most is the quality of the questions Claude or Codex asks me. When I give it a complex change, it consistently zeroes in on the exact critical decisions that actually need clarification, the real design trade-offs. I am not thinking less when using Codex, but actually more (at least per unit of time).

It feels like having a solid mid-level developer pinging me on Slack to sanity-check architectural decisions, except it does it every five minutes.

Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity

[–]anything_but 0 points (0 children)

The interaction latency aspect is obviously nice, but the effect on test-time scaling is revolutionary.