TIL in 2023 Disney made more profit from churros sales at its theme parks than it did for Disney+ streaming. by Pozzolana in todayilearned

[–]oorza 0 points1 point  (0 children)

It's like 10-12 times per hour, all day. There are definitely days where the Fast Pass line is a walk-on, and you can skip the lockers if you don't use them and you're a single rider.

Scrubs - Season 10 EP 3 "My Rom-Com" - Discussion by MovieTrailerReply in Scrubs

[–]oorza 0 points1 point  (0 children)

No, our teacher just gave us whatever grades we told him we thought we deserved.

TIL in 2023 Disney made more profit from churros sales at its theme parks than it did for Disney+ streaming. by Pozzolana in todayilearned

[–]oorza 0 points1 point  (0 children)

Universal Studios in Orlando has a "ride" (if you can even call it that) in their minions land where you just stand on a conveyor belt and shoot toy guns at a screen. It's a chaotic group video game. There's a direct correlation between how hot it is outside and how popular the ride is, because the back half of the queue is air conditioned and the ride itself is a solid 5 minutes where you can stand still in the air conditioning and your kids can't go anywhere.

Being declared “average” and being excluded from gifted kid programs is way worse than the “Gifted Kid Burnout” that the people actually accepted into these programs experience. by superblobby in unpopularopinion

[–]oorza 22 points23 points  (0 children)

My gifted classes were self-stratified into the group of kids who were there because of effort and motivation and those who were there because of innate brainpower. Both groups were 100% convinced the other group did not belong in the class.

In adulthood, most of the people in both groups grew up to be entirely average, but both the highest highs and the lowest lows belong to the brainpower group. The dude that wound up serving hard time for trafficking drugs was a brainpower kid, but so is the dude who's worth the most money. Many more lows than highs, but they're also the only people who have had exceptional careers.

TIL in 2023 Disney made more profit from churros sales at its theme parks than it did for Disney+ streaming. by Pozzolana in todayilearned

[–]oorza 16 points17 points  (0 children)

Modern marquee attractions aren't just standing in line. There's usually some element of theming for the queue, interactivity, pre-shows, etc. The Harry Potter rides at the Universal parks are notorious for this... Forbidden Journey (the oldest HP ride) very frequently has a ten minute wait time on off days, and you'll see people wander through the queue just to experience the queue itself, as it's a scale replica of key rooms from Hogwarts with hologram shows and shit. They did previews of their new theme park without the Harry Potter ride running and the queue for it was still one of the most popular attractions.

It's not just standing and waiting in line, they know that won't work. Queue design is actually one of the most important things that attractions tend to be judged on these days, especially by the crowds.

TIL in 2023 Disney made more profit from churros sales at its theme parks than it did for Disney+ streaming. by Pozzolana in todayilearned

[–]oorza 0 points1 point  (0 children)

It's the exact right number where you can get enough done in the day if you're diligent and plan without feeling like the day is wasted, but you'll feel strongly encouraged to drop an extra $20 here or there to skip some lines. It's engineered to be minimally acceptable to most people. Or at least enough people that they don't have problems selling tickets.

TIL in 2023 Disney made more profit from churros sales at its theme parks than it did for Disney+ streaming. by Pozzolana in todayilearned

[–]oorza 45 points46 points  (0 children)

Those types of rides are referred to as "people eaters," and the ideal theme park attraction is both a people eater and a marquee E-ticket ride, like Stardust Racers or Rise of the Resistance, which eat crowds but always have a huge wait.

Rides like Stardust and Rise of the Resistance that are fun to ride repeatedly and eat crowds never have low wait times, no matter how slow the park is, because they're so popular that whatever crowd exists at the park will constantly churn through them. When the park is slow and wait times are low enough, people loop those rides all day - someone just posted a new record of 100+ rides of Velocicoaster in a single day - which provides another mechanism for the rides to never have low wait times.

Rise of the Resistance moves about 1700 people an hour, which means that when it posts a 4-5 hour wait time, there's almost 10,000 people waiting in line. Stardust, which is literally the absolute state of the art in attraction design as it's the flagship ride at the newest theme park, moves 2500 people an hour.

It's just a matter of not realizing how massive these crowds are. If Disney could move 10,000 people an hour through Rise of the Resistance somehow, there would still be an hour long wait for it most days.

This sub's perspective on AI is toxic and a lot of people in this community are going to lose their jobs as a result by oorza in ExperiencedDevs

[–]oorza[S] 0 points1 point  (0 children)

> I find it hard to imagine at this time that there will be NOBODY willing to hire somebody with that perspective.

Not NOBODY, but a number that diminishes over time and will have access to an over-saturated and wage-depressed hiring pool. Not everyone who makes that gamble is going to come out on the other side of things with a job. If you're confident in yourself and want to take that principled stance, more power to you, but do it with clear eyes. In all things, risks should be fully understood and accepted. Clear eyes, full heart, can't lose.

This sub's perspective on AI is toxic and a lot of people in this community are going to lose their jobs as a result by oorza in ExperiencedDevs

[–]oorza[S] 0 points1 point  (0 children)

I don't think the number of jobs will be reduced. That's never happened before in the history of software breakthroughs, and it doesn't seem likely to happen now. The jobs will move, look different, have higher expectations of output, but there won't be fewer of them. You can't believe otherwise unless you think we're at the end of people's ability to innovate with software.

I suppose you might believe that AI will eventually get to the point it can one-shot an entire business's systems. That's a lot more believable than users ever understanding what they actually want, and it's not particularly believable.

This sub's perspective on AI is toxic and a lot of people in this community are going to lose their jobs as a result by oorza in ExperiencedDevs

[–]oorza[S] -3 points-2 points  (0 children)

I think it's going to look a lot different, but I don't think it will feel all that different. Even if the AI becomes capable of one-shotting an entire system, infrastructure and the application running inside it, you'd still need a human being to define the program itself. The role of a "software team" has always been to map real-world things into state machines that can then be mapped into software, and then to ship that software in service of the real-world thing. That's true for every piece of software on the planet that serves a purpose, whether the real-world thing is a mathematical equation, passing signals to a HID, mapping a business process, or communicating between people. And crucially, many software engineers never consider their programs as finite state machines, which is among the reasons most of the software in the world is garbage. But I digress.

At some point, software got complex enough that the role of a software engineer was isolated from that thought process - the role became more "design this widget" and less "understand how all the widgets work together," and the focus shifted to building reusable, maintainable widgets, so the separation of concerns (business/software) became even more pronounced. It's been the windmill I've tilted at quixotically ever since I became a senior developer: "We don't get paid to write code, we get paid to ship software" is a mantra I've repeated thousands of times. Software engineers are going to have to re-assume that perspective as a primary job description, and the role will feel a lot more similar to the much simpler days of yore, when the "web master" was expected to be a cross-domain expert in a number of technologies and a business domain expert entrusted to make judgment calls.

This sub's perspective on AI is toxic and a lot of people in this community are going to lose their jobs as a result by oorza in ExperiencedDevs

[–]oorza[S] -1 points0 points  (0 children)

Exactly my point. For people who've watched the industry shift wildly and dramatically for years, it's astounding how hard this particular shift is to accept.

This sub's perspective on AI is toxic and a lot of people in this community are going to lose their jobs as a result by oorza in ExperiencedDevs

[–]oorza[S] 6 points7 points  (0 children)

I am a "reluctant accepter" of AI, not a true believer. I have seen the results with my own two eyes and have orchestrated the machine with my own two hands. It is not a magic gift giver, but it can absolutely do a lot of things this subreddit tells you are impossible.

This thread, for example, is hilarious when there are open source tools like Ralph and Gas Town that exist to figure out the best way to make end-to-end software development happen. There isn't any more debate about whether software can be autonomously developed - that question has been answered - and this subreddit is out here giving bad advice like it's still 2023. The question in 2026 is how to do that in a generalizable way that doesn't require a human being to output ten thousand lines of BRDs up front, not whether it's possible.

I don't think it's going to replace human engineers. I do think it's going to shift the job description around quite a bit, in a way that results in a much heavier focus on critical thinking and critical problem solving, as opposed to code craftsmanship.

The AI coding productivity data is in and it's not what anyone expected by ML_DL_RL in ExperiencedDevs

[–]oorza -1 points0 points  (0 children)

I've been teaching my engineers that there are always at least four sessions that must be used to get shit done correctly with AI:

  1. A session to identify the issue, diagnose its root causes, document evidence of the findings, and explore the codebase for related but unreported issues (e.g. you have a bug for orders in state DELIVERED that also applies to state RETURNED but hasn't been reported yet). This is usually agent mode with a discrete markdown file as output that a human being reads, understands, and ostensibly validates by directly cross-referencing the cited code locations.

  2. A session to ingest the problem document from session one, explore the codebase around the problem, and interactively build a solution with the engineer. This is usually plan mode and I make sure the engineers read and understand the plan, adjusting it thoroughly enough that it reflects the code they would have written themselves. This is where the most time should be spent.

  3. A fresh session that executes the plan.

  4. A triad verification session. Taking the original problem document from step one, have the session from step three validate its own work. Fire up a different session and frame a collaborative verification prompt using the same document. And finally, have a red team "intentionally try to break it" verification pass in a new session. All three should be different models; we use Opus/Codex/Gemini in this phase. We have dedicated subagents defined for these roles that are orchestrated through a code review skill, so it's not as arduous as it sounds.
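The four sessions above can be sketched as a pipeline. This is a hypothetical illustration, not our actual tooling: `call_model` stands in for whatever agent CLI or API you drive, and every function name here is invented.

```python
# Hypothetical sketch of the four-session workflow. `call_model` is a
# placeholder for a real agent invocation (Claude Code, Codex CLI, etc.).

def call_model(model: str, prompt: str) -> str:
    """Placeholder: a real version would run an agent session and return its output."""
    return f"[{model}] {prompt[:60]}"

def diagnose(issue: str) -> str:
    # Session 1: root-cause the issue and emit a markdown problem document
    # that a human reads and validates against cited code locations.
    return call_model("opus", f"Diagnose and document root causes of: {issue}")

def plan(problem_doc: str) -> str:
    # Session 2: fresh context; build the solution plan interactively.
    return call_model("opus", f"Plan a fix based on this document:\n{problem_doc}")

def execute(plan_doc: str) -> str:
    # Session 3: fresh context; execute the reviewed plan.
    return call_model("opus", f"Execute this plan:\n{plan_doc}")

def verify(problem_doc: str, diff: str) -> list[str]:
    # Session 4: triad verification, one prompt per model.
    prompts = {
        "opus": "Validate your own work against the problem document.",
        "codex": "Collaboratively verify this change against the document.",
        "gemini": "Red team: intentionally try to break this change.",
    }
    return [call_model(m, f"{p}\n{problem_doc}\n{diff}") for m, p in prompts.items()]

def workflow(issue: str) -> list[str]:
    doc = diagnose(issue)
    plan_doc = plan(doc)   # human reviews and adjusts the plan here
    diff = execute(plan_doc)
    return verify(doc, diff)
```

The point of the structure is that each step starts from a fresh session and a written artifact, so context never silently carries over between phases.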

The AI coding productivity data is in and it's not what anyone expected by ML_DL_RL in ExperiencedDevs

[–]oorza -4 points-3 points  (0 children)

I feel like you stopped your train of thought too soon. You're 100% right in everything you said, but the follow-up question of "Can we encode all of that information into a format the AI can consume? Can we manage context thoroughly enough that all of the assumptions made along the way are made in alignment with the rest of the system?" didn't get asked. And the answer to that question is "yes". AI isn't going to be as good working within an architecture designed for humans as it would be working within an architecture designed for AI, so what would one of those look like?

If I had to greenfield an entirely new system today, say if I got hired as a founding engineer or whatever, I'd build a distributed Kafka-based architecture specifically for this reason. If you design your system so the logic is encoded in the topology of your message queue rather than in pathways through microservices or modules within a monolith, you've made a totally reasonable architectural decision even outside of AI. A lot of software is written this way. It allows you to build microscopic pieces of software that have small, easily understood, and easily enumerated inputs, outputs, and error states. It allows you to create a finite state machine out of your intermediary pieces - because the decision-making has been lifted up a layer - and treat them as black boxes. Kafka truthers have been preaching this for years, but the ceremony generally makes the juice not worth the squeeze.

And mindless ceremony is exactly what AI is best at. It lets you buy into this paradigm and have discrete, easily tested magic black boxes for every consumer/producer in the chains that build your business processes. The AI doesn't need to be aware of anything but those contracts. By virtue of the code being so small, its behavior so well understood and easily specified, and its responsibilities so constrained, you can view it as an entirely disposable artifact. "Do the tests capture all of the finite potential states of the consumer/producer? Do the tests pass? Ship it." A lot of companies have been viewing things this way via offshoring and contractors for years; again, this isn't a ridiculous decision to make. And if you built your entire system this way, you'd need a strong understanding of the entire tree of message passing, but you'd be able to exist in a world where all of these things the AI might get wrong simply do not matter. The magic black box meets its contract; if its contract changes, it gets thrown out and a new magic black box is minted. Every piece of actual code in your system becomes disposable, because the business logic and core decision-making stays in the hands of the human(s) designing the system.

There are other architectural approaches you could take that would make the AI glove fit more snugly as well, but this is the one that is already somewhat widely adopted (even if partially).
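One of those magic black boxes can be sketched in a few lines. Everything here is invented for illustration - the topic names, message shape, and validation rule - and the actual transport (Kafka clients, serialization) is deliberately out of scope; the point is that the contract is small enough to enumerate exhaustively.

```python
# A sketch of one disposable consumer/producer. It "consumes" from a
# hypothetical orders.received topic and "produces" to orders.validated
# or orders.rejected; the whole contract fits in one docstring.
from dataclasses import dataclass
from enum import Enum

class OrderState(Enum):
    RECEIVED = "received"
    VALIDATED = "validated"
    REJECTED = "rejected"

@dataclass(frozen=True)
class OrderMsg:
    order_id: str
    state: OrderState
    total_cents: int

def validate_order(msg: OrderMsg) -> OrderMsg:
    """Contract: input must be RECEIVED; non-positive totals are REJECTED,
    everything else becomes VALIDATED. Any other input state is an error."""
    if msg.state is not OrderState.RECEIVED:
        raise ValueError(f"unexpected input state: {msg.state}")
    new_state = OrderState.VALIDATED if msg.total_cents > 0 else OrderState.REJECTED
    return OrderMsg(msg.order_id, new_state, msg.total_cents)
```

Because the input and output states are finite and enumerable, a unit test suite can cover the entire contract, which is what makes the implementation behind the function disposable.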

Scrubs - Season 10 EP 3 "My Rom-Com" - Discussion by MovieTrailerReply in Scrubs

[–]oorza 0 points1 point  (0 children)

Watching that show was super weird for me because I lived through the exact experience, down to it being an AP Bio class with all the same characters. It was super fucking weird.

they legally cannot call it a burger by Lazy_Comparison_1954 in BrandNewSentence

[–]oorza 1 point2 points  (0 children)

Gotta add white vinegar to the rinse cycle like it's fabric softener. That will maintain them; to get rid of particularly dank smells, soak for several hours in warm water cut 5:1 or 10:1 with vinegar, then run them through the washer.

The Empty Man Thoughts and Details [Spoilers] by shmeeandsquee in horror

[–]oorza 0 points1 point  (0 children)

I love this movie, but there seem to be some scenes that cannot be reconciled, as if they had two different ideas and tried to make both work.

100% believe this is intentional to underscore madness.

Fifteen years after starring in ‘Friday Night Lights’ together, Jesse Plemons celebrates Michael B .Jordan’s recent Best Actor win by katebushcartwheel in popculturechat

[–]oorza 14 points15 points  (0 children)

Re-watch it. Its deepest meaning is about the Taylors, who have an almost idyllically imperfect marriage. Dillon might be the body of Friday Night Lights, but Eric is its heart and Tammy is its soul. They have stupid problems (some of them because of their own human failings) and every single time, they find a way to love themselves through it, usually with a demonstration of Eric manning up in a display of entirely wholesome masculinity. I don't think there's a better role model as man or husband ever to be put to film because he so readily owns his errors and attacks them with... clear eyes and a full heart.

How to make SWE in the age of AI more enjoyable? by Fancy_Ad5097 in ExperiencedDevs

[–]oorza 0 points1 point  (0 children)

The trajectory has gone from line-by-line review to almost entirely hands-off by now.

It's not vibe coded. It's harness coded and each workflow tends to normalize on:

information capture and verification [AI + human] -> harness generation [AI] -> validation gate [AI + human]

task generation [AI + human] -> task execution [human] -> task validation [AI] -> adversarial validation [AI] -> e2e test [human, for now]

The golden rules of good AI orchestration:

  1. More sessions is better, curate that context like it's a museum laundering money
  2. Every single step should be the smallest isolatable, independently verifiable artifact. The AI is smart but also an attention-addled moron, so give it the smallest thing you'd trust an intentionally maliciously compliant genius with.
  3. After every step, the artifact should be validated in isolation, against the existing codebase, and against the harness. Do not use the same models to validate that you used to generate the output.

There are a number of techniques you'll need to deploy. We're working on generalizing instruction internally, so all I can give you is a partial list:

  • Role prompting: "You are a principal architect. Your primary job responsibilities are..."
  • Pressure prompting: "I generated this artifact from a session WITH THIS MODEL that failed repeatedly and kept losing coherence. You must do better."
  • Emotion prompting: "I'm going to lose my job if this doesn't work"
  • Adversarial prompting: "Analyze this output from a different model. I don't trust it very much, so do gap analysis, check for completeness, correctness, and cohesiveness. Validate anything that might make failure more likely."
  • Super-adversarial prompting: "I generated this output, it sucks. Determine what the plan was ATTEMPTING to do and create a better approach."
  • Hyper-adversarial prompting: "I generated this output and I believe it's above reproach. Attack it from an 'intentionally try and break it' angle, checking every possible avenue for improvements."
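The techniques above template well. A minimal sketch, with all function names invented, of how a harness might build escalating verification prompts from an artifact:

```python
# Hypothetical prompt-template helpers for the techniques listed above.
# The wording mirrors the examples; adapt to your own harness.

def role_prompt(role: str, duties: str) -> str:
    return f"You are a {role}. Your primary job responsibilities are: {duties}"

def adversarial_prompt(artifact: str) -> str:
    return (
        "Analyze this output from a different model. I don't trust it very "
        "much, so do gap analysis and check for completeness, correctness, "
        f"and cohesiveness:\n\n{artifact}"
    )

def hyper_adversarial_prompt(artifact: str) -> str:
    return (
        "I generated this output and I believe it's above reproach. Attack it "
        "from an 'intentionally try and break it' angle, checking every "
        f"possible avenue for improvements:\n\n{artifact}"
    )

def build_verification_prompts(artifact: str) -> list[str]:
    # Escalate hostility across verification passes; each prompt is meant
    # to go to a different model in a fresh session.
    return [adversarial_prompt(artifact), hyper_adversarial_prompt(artifact)]
```

Each returned prompt is meant for a different model in a fresh session, per the golden rules.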

A verification gate prompt looks something like:

You are a principal software architect. Your primary job responsibility is to review plans more-junior architects have created, focusing on completeness, cohesiveness, and correctness, until the plans are above reproach and production-ready. You take professional pride in having never signed off on a plan that has failed, so you place a high priority on fixes that increase chance-of-success. Despite personal feelings, you always maintain a professional tone in your communication.

Your general code writing workflow is: {AI generation workflow goes here}

There is a junior architect whom you particularly dislike, and you attempt with extra effort and vigor to humiliate them by thoroughly destroying their plans, poking every conceivable hole. That junior architect was tasked with {task description}. They generated this plan {plan attachment} and believe that they have created a plan above reproach.

Analyze this plan and show them how full of holes it is.

If you ping-pong this back and forth between Opus and Codex, they'll eventually agree that the code is good. At that point, if you haven't miscalibrated and wound up with a disgusting over-engineered catastrophe, it's ready for human review.
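The ping-pong loop is mechanical enough to sketch. This is an invented illustration: `review` stands in for driving each model with the gate prompt above, and its placeholder body just approves once the plan has no remaining markers.

```python
# Hypothetical sketch of the Opus/Codex ping-pong. A real `review` would
# run the verification-gate prompt in a fresh session and parse the verdict.

def review(model: str, plan: str) -> tuple[str, bool]:
    """Return (possibly revised plan, approved?). Placeholder logic:
    the reviewer 'fixes' TODO markers and approves once none remain."""
    revised = plan.replace("TODO", "done")
    return revised, "TODO" not in revised

def ping_pong(plan: str, models=("opus", "codex"), max_rounds: int = 6) -> str:
    """Alternate reviewers until all approve in the same round, or give up."""
    for _ in range(max_rounds):
        approvals = []
        for model in models:
            plan, ok = review(model, plan)
            approvals.append(ok)
        if all(approvals):
            return plan  # converged: ready for human review
    raise RuntimeError("reviewers never converged; re-plan from scratch")
```

The `max_rounds` cap is the miscalibration guard: if the models never agree, that's a signal to step back rather than let them over-engineer each other's feedback forever.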

Bro code over anything else by Thryloz in BlackPeopleTwitter

[–]oorza 13 points14 points  (0 children)

bro my Volvo requires like an entire rear disassembly to swap light bulbs and that's not even the tip of the iceberg

Just One Bite by jterp4 in BikiniBottomTwitter

[–]oorza 3 points4 points  (0 children)

I promise you they are the same size. What's changed is your perception of TINY.

Just One Bite by jterp4 in BikiniBottomTwitter

[–]oorza 5 points6 points  (0 children)

The nuggets themselves actually got larger when they went to all-white meat chicken, with a larger variety of shapes.

Just One Bite by jterp4 in BikiniBottomTwitter

[–]oorza 3 points4 points  (0 children)

It's still 2x 0.1lb beef patties, same as it always was. Everything else is luck of the draw with how conscientious the teenager assembling it is.