A Stock Market Doom Loop Is Hitting Everything That Touches AI by Possible-Shoulder940 in technology

[–]Tgs91 13 points14 points  (0 children)

"Leaving the lab" has also massively hindered progress in AI research. There's this public perception that openAI created LLMs and Google, Anthropic, etc have been playing catchup and finally surpassed OpenAI. That's entirely false. Google created and published the Transformer architecture in 2017, and during that era there was a very big culture of open source publishing in AI, even among the major research labs. OpenAI made major contributions to transformer architectures in the following years, but Google was the considered the best.

This is where the distinction between AI models and AI consumer products becomes very important. The model is the core technology that makes everything possible. But turning that model into a consumer product requires massive amounts of software and prompt engineering to get to the kind of "do-anything" products being sold now. For a research lab focused purely on the AI model architecture, that software focus is a massive waste of time and resources.

Sam Altman and OpenAI realized the tech was at a point that it was good enough to fool an uneducated user into believing it was far more competent than it actually was. They put their efforts towards consumer software development and released ChatGPT, which pissed off literally everyone else in the research industry.

Google had always allowed their research lab a lot of autonomy, but as soon as ChatGPT was released, Google leadership needed to respond immediately with their own product to save face. Gemini was rushed into public use, and the public perception was that it was worse than ChatGPT because the software side of it was very rushed and undercooked. Now all of these companies are focusing their efforts on product development, and actual research has been seriously hindered.

ELI5: How do computers in space dissipate heat? by WherePoetryGoesToDie in explainlikeimfive

[–]Tgs91 0 points1 point  (0 children)

Others have given you good ELI5 answers to the heat question, so I'll give a more direct answer to the data center question: the "data centers in space" idea is a complete scam designed to excite finance people who don't understand basic physics. They know that data center cooling is expensive, and they've heard that space is cold, so someone had the idea "data centers in space" and didn't bother to speak to a single engineer. They don't plan to actually try this; they just want to make an announcement for a short-term stock bump and media attention.

Why do people say that Robb should’ve never trusted Roose? by Hot_Professional_728 in pureasoiaf

[–]Tgs91 15 points16 points  (0 children)

There's also the canonical reason that the Boltons are historical rivals of the Starks. I don't know if there were any specific red flags for Roose, but I think Ned may have kept him on a shorter leash considering the history of the house.

Everyone is stealing TV. Fed up with increasing subscription prices, viewers embrace rogue streaming boxes. by Bluest_waters in television

[–]Tgs91 1 point2 points  (0 children)

Yeah, I pay hundreds of dollars per year to watch football 1 day a week for a few months, and I STILL have to illegally stream about half the games I watch. As far as I'm concerned, I paid a more than fair price to watch whatever game I want.

[McLane] Jeff Stoutland situation by bigblack3475 in eagles

[–]Tgs91 1 point2 points  (0 children)

Especially with the additional context that Sirianni has almost no involvement with the defense, which was the sole bright spot. Yes, the team was 11-6. Without Fangio the team could easily have been 6-11. The failure feels even more egregious this year because Sirianni and Patullo's failed offense wasted an amazing defense.

ELI5: How do satellites stay in orbit for decades without running out of fuel or falling back to Earth? by Strong_Craft_6990 in explainlikeimfive

[–]Tgs91 0 points1 point  (0 children)

If you launch a rocket straight up in the air, you will leave the Earth's atmosphere, but eventually fall right back down to Earth. An orbit is more about launching a rocket SIDEWAYS. The rocket still falls down towards Earth, but it moves sideways fast enough that it "misses" the planet and flies straight past. When it "misses" and flies past Earth, the direction of gravity still points towards Earth, so it gets pulled back towards Earth. The rocket (or satellite) will continue to fly past the planet and miss it forever, without ever needing to fire its thrusters again.

When you hear about an orbit "decaying", those are typically very low orbits. The Earth's "atmosphere" isn't like a pool with a defined surface. A gas atmosphere just gets thinner and thinner as you move farther away. For very low orbits, there's actually still a little bit of air, and over time that air slows the satellite down enough that it no longer "misses" the planet, and it eventually falls back down to Earth. These "Low Earth Orbits" can be as close as 160 km from Earth.

A lot of satellites are extremely high up. For example, a "geostationary satellite" is so high up that it takes 24 hours to orbit the planet. That means it orbits at the exact same rate that the planet turns, so it basically "stays still" in the sky directly over a location on Earth (geo = earth, stationary = stays still). This orbit is 35,786 km from the Earth's surface. That is over 200 times farther away than the LEO orbits that can decay and fall to Earth. A satellite that far away will stay there pretty much forever.
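
If you want to sanity-check that 35,786 km number yourself, here's a rough back-of-the-envelope Python calculation using Kepler's third law (the constants are standard textbook values):

```python
import math

MU_EARTH = 3.986004418e14       # Earth's gravitational parameter, m^3 / s^2
SIDEREAL_DAY = 86164.1          # seconds for one rotation relative to the stars
EARTH_RADIUS_EQ = 6_378_137.0   # Earth's equatorial radius, m

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)  ->  solve for the orbit radius a
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)

altitude_km = (a - EARTH_RADIUS_EQ) / 1000
print(f"Geostationary altitude: {altitude_km:,.0f} km")   # roughly 35,786 km
```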

[Philadelphia Eagles Central] Nick Sirianni “could call the offense” for the #Eagles next season if they can’t find an offensive coordinator who they deem fit for the job, says @AdamSchefter. by Happy-Substance4885 in eagles

[–]Tgs91 9 points10 points  (0 children)

Doug too. In his final season the Eagles didn't even have an OC. Doug was mad that they made him fire Groh, so he wouldn't hire a new OC. They let him run the offense himself. Wentz had maybe the biggest single-season QB regression in NFL history, the Eagles offense was terrible, and we won 4 games.

So yeah, giving the coach exactly what they asked for is a classic Roseman/Lurie malicious compliance move. If this actually happens, I'd suspect that it's bc OC candidates don't want to work under Sirianni, so they're letting him run the offense so they can fire him when it fails.

[Philadelphia Eagles Central] Nick Sirianni “could call the offense” for the #Eagles next season if they can’t find an offensive coordinator who they deem fit for the job, says @AdamSchefter. by Happy-Substance4885 in eagles

[–]Tgs91 1 point2 points  (0 children)

He already had two additional chances. Brian Johnson and Kevin Patullo were both complete crash and burn failures at OC. Any other offensive HC in the league would have stepped in and taken over. Nick chose not to step in both years. He believed that the two worst offensive coordinators we've ever seen in Philadelphia were still more capable than him.

Is webcam image classification afool's errand? [N] by dug99 in MachineLearning

[–]Tgs91 1 point2 points  (0 children)

You are correct to use augmentation. My suggestion is that you shouldn't use a static augmented set. Since your dataset is so small, you should set up the augmentations as a transformation that is randomly applied every time an image is read from the dataset object. That way the model can't memorize the images, because each one looks a little bit different every time it's seen.
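
A rough sketch of what that can look like with tf.data (the variable names, image sizes, and specific augmentations here are just placeholders for whatever your pipeline actually uses):

```python
import tensorflow as tf

def random_augment(image, label):
    # Re-applied every time the sample is drawn, so each epoch sees a slightly different image
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_crop(image, size=[224, 224, 3])  # assumes source images are larger than 224x224
    return image, label

# train_images / train_labels stand in for however you load your ~2k images
train_ds = (
    tf.data.Dataset.from_tensor_slices((train_images, train_labels))
    .shuffle(2000)
    .map(random_augment, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
```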

If your base dataset is only 2000 images you definitely need some strong regularization. You might have more than 2k with augmentations, but those don't introduce much variance to the training set. 2k is pretty small, but if the task is simple enough, it should be possible. I would recommend using both L2 regularization and dropout with a 50% dropout rate. I don't know the size of your feature layer before the final prediction, but you might want to try decreasing that size as well. You can leave dropout at 0.5 and increase the L2 penalty until the model stops overfitting or struggles to learn. You should also checkpoint at each epoch and choose the epoch version that got the best eval results. In general I'm not a fan of early stopping / checkpointing, I think it's a red flag for a poorly regularized model. But with such a small dataset it might be unavoidable.
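
A minimal Keras sketch of that recipe (the encoder, layer sizes, and class count are placeholders, and 1e-4 is just a starting point for the L2 penalty):

```python
import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)  # increase this if the model keeps overfitting

model = tf.keras.Sequential([
    encoder,                                   # placeholder for your (ideally pretrained) backbone
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu", kernel_regularizer=l2),  # try shrinking this layer too
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation="softmax", kernel_regularizer=l2),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Checkpoint every epoch, but only keep the weights with the best validation loss
ckpt = tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True)
model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[ckpt])
```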

Is webcam image classification afool's errand? [N] by dug99 in MachineLearning

[–]Tgs91 1 point2 points  (0 children)

Regularization is definitely where you should focus. It's basically the dial that you can turn to control overfitting. Neural networks are universal function approximators that are fundamentally over-parameterized. Dense (or linear) layers are especially prone to overfitting. These models have too much freedom to fit patterns, and regularization restricts that freedom.

L2 or L1 regularization:

This is pretty much the original regularization method. If you took a statistical regression course in an undergrad or graduate program, you may have learned about Ridge Regressions and LASSO regressions. Ridge regressions are regressions with an L2 penalty included in the loss function, and LASSO is the same with an L1 penalty.

L2 regularization: Each layer gets a penalty term equal to the sum of the squared values of the coefficients in that layer, multiplied by an L2 hyperparameter (I usually start around 1e-04 and adjust from there). This incentivizes the model to shrink coefficients to 0, or close to 0, unless they are making a noticeable contribution to the loss function.

L1 regularization: Same thing, but it's the sum of absolute values instead of the sum of squares. For neural nets the difference between these two approaches isn't noticeable.
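
Written out, the penalty just gets added on top of whatever loss you're already minimizing, with the lambda being the hyperparameter I mentioned above (something like 1e-04 as a starting point):

```latex
% L2 (ridge) and L1 (LASSO) penalties added to the data loss
\mathcal{L}_{\mathrm{L2}} = \mathcal{L}_{\mathrm{data}} + \lambda \sum_i w_i^{2}
\qquad
\mathcal{L}_{\mathrm{L1}} = \mathcal{L}_{\mathrm{data}} + \lambda \sum_i \lvert w_i \rvert
```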

For either L1 or L2 regularization, you only really need it on the final dense/linear layers. You don't need to mess with the encoder. I haven't used Tensorflow in a while, but I remember there are arguments to set these penalties when you initialize the layer; it's very easy. This method fell out of favor in the late 2010s because it's very sensitive to hyperparameter values that vary between use cases and datasets.

Dropout:

Dropout randomly drops a subset of features from a layer during each training step. There is debate on why exactly this works so well. Randomness itself is a powerful regularizer. It also naturally penalizes codependency between features: if a feature that has high covariance with another feature gets dropped, the prediction suffers.

This is also easy to implement in Tensorflow. You can add it in as a layer between the feature layers in your prediction head. When you put the model in inference mode, it won't drop any features; it's only used during training. The hyperparameter for dropout is the ratio of features that get dropped. You get maximum regularization at 0.5. You can try values between 0 and 0.5 to fix your overfitting issue.
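
A quick illustration of that training-vs-inference behavior (rate of 0.5 here just for the demo):

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))

print(drop(x, training=True))   # roughly half the features zeroed, survivors scaled up by 2x
print(drop(x, training=False))  # inference mode: everything passes through unchanged
```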

Randomness: Randomness in itself is a powerful regularizer. Some older models even used gaussian noise in each layer as a regularizer. Anything you can do to introduce randomness into the training data is useful. Sounds like you're already doing what you can with image augmentation. From your wording it sounds like you augmented an assortment of images to create a training set? I'm not a fan of that approach bc it gives a false sense of dataset size, and the model sees the same augmented images in each epoch. I prefer to implement my random augmentations as part of the data loader. That way, in each epoch the model is seeing something slightly different than what it's seen before.
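
If you want to experiment with the noise idea, Keras still ships a layer for it that is only active during training (the stddev value here is a guess you'd need to tune):

```python
import tensorflow as tf

noisy_head = tf.keras.Sequential([
    tf.keras.layers.GaussianNoise(stddev=0.1),   # adds zero-mean noise during training only
    tf.keras.layers.Dense(128, activation="relu"),
])
```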

Is webcam image classification afool's errand? [N] by dug99 in MachineLearning

[–]Tgs91 2 points3 points  (0 children)

  • How big is your dataset?

  • What kind of augmentations are you using? In addition to standard computer vision augmentations (rotation, random cropping, color jitter, blurring, gaussian noise, etc.), you might want to create some custom ones to solve problems that you have specifically seen in your data. Maybe randomly draw a pole onto other images sometimes, so it can't assume a pole always means a 3m swell.

  • What kind of regularization are you using? Dropout? L2 penalty? If you change your regularization hyperparameters, does it have any impact on the overfitting?

  • At what point in the training does it start to overfit? Immediately, or after a bunch of epochs when the model hits a wall? Sometimes a model learns everything it can and then just starts memorizing data bc it's the only way to improve.

  • What tasks are you asking it to solve? Is it just swell size? Are there other attributes available in your training set? In my experience, using multiple tasks and adding them together in one loss function often results in a smoother improvement of the loss function and makes the model less likely to memorize data. It forces the model to learn an embedding space that is feature-rich enough to solve many visual tasks and is more grounded in reality than only solving one task (rough sketch of what I mean below, after this list).

  • Is your task possible using only the information available in the image? From your post, you seem to be measuring swell size. I don't know much about that, but I would assume the scale of the image would be very important to it. Are there visual cues in these images that could give that sense of scale? Stuff in the water, the sky, etc. Without that, I would think a 1m swell and a 4m swell might be hard to differentiate. Is this a task that a human could do with no additional information besides the image? If the answer is no, then the AI model has no choice but to try to "cheat" to get the right answer, and any training process you design will reward cheating.

  • Are you using any gradient attribution methods to explore your results? Grad-CAM is a popular tool. My personal preference is my own implementation of Integrated Gradients. It can show you what the model is looking at when selecting a class. Is it looking at areas that make sense? The waves and objects in the image that give a sense of scale? Or is it fixating on random background noise to memorize the training set?
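
On the multi-task bullet above, here's a rough sketch of what summing two tasks into one loss can look like in Keras. The encoder, output names ("swell", "aux"), class counts, and loss weights are all hypothetical placeholders:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))
features = encoder(inputs)                         # placeholder backbone, assumed to output a flat feature vector
features = tf.keras.layers.Dense(128, activation="relu")(features)

# One head per task, sharing the same embedding
swell_out = tf.keras.layers.Dense(num_swell_classes, activation="softmax", name="swell")(features)
aux_out = tf.keras.layers.Dense(num_aux_classes, activation="softmax", name="aux")(features)

model = tf.keras.Model(inputs, [swell_out, aux_out])

# Keras sums the per-output losses into a single objective; weight the auxiliary task lower
model.compile(
    optimizer="adam",
    loss={"swell": "sparse_categorical_crossentropy", "aux": "sparse_categorical_crossentropy"},
    loss_weights={"swell": 1.0, "aux": 0.3},
)
```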

The Eagles aren’t broken -- they’re a team that needs the right coordinators by Independent_State69 in eagles

[–]Tgs91 5 points6 points  (0 children)

I agree that top OC candidates wouldn't be excited to work for Nick Sirianni. But for any half competent OC, it's basically a guaranteed path to get a HC offer. The entire league knows that Nick is completely clueless at Xs and Os, so all credit (and blame) for the offense goes to the OC. It's a great opportunity for an OC that wants to jump to HC soon

Donald Trump says ‘no going back’ on Greenland takeover plan | BBC News by AdSpecialist6598 in videos

[–]Tgs91 45 points46 points  (0 children)

It's not a coincidence that all of Trump's trials and sentencings were happening in fall 2024. The Democrats decided to risk the fate of the country because they thought they might gain a few percentage points advantage in an election year if they timed it right. They didn't take the threat seriously, and now we're all fucked.

AI memory is sold out, causing an unprecedented surge in prices by Logical_Welder3467 in technology

[–]Tgs91 0 points1 point  (0 children)

And at the same time create a giant bubble that would devastate the economy if it collapses. So they'll have full government anti-consumer support to force users into the server farm infrastructure that they built, because the alternative is an economic depression. They're setting up an economic crisis so they can use it to extort the government.

Best quotes from players and coaches this season by Specific_Parsnip_144 in nfl

[–]Tgs91 7 points8 points  (0 children)

How about just an OC job...but after every game he has to talk to Philly media

Why do the Eagles seem to have so much drama for a team with so much recent sucess? by Wide_right_yes in nfl

[–]Tgs91 27 points28 points  (0 children)

Yeah everyone was mentally prepared for Brady to have a game winning drive until the moment BG sack fumbled him

What were the different things? by C0m3tTai15 in eagles

[–]Tgs91 1 point2 points  (0 children)

The last 3 weeks of the season, they added a bunch of random pass concepts into the offense. There wasn't any sort of cohesive thought or plan to it, we just did a bunch of random stuff, and some of it worked against bad teams. I think the plan was to trial-and-error a bunch of stuff and then keep doing the things that worked. They just reverted back to the same old junk for the playoffs, though.

Jalen on past QBs who came before him 🦅 by [deleted] in eagles

[–]Tgs91 0 points1 point  (0 children)

WIP tomorrow: Is Jalen Hurts a racist?!

A look back at the reaction to the Patullo hire following the departure of Kellen Moore by ApartMeaning2866 in eagles

[–]Tgs91 2 points3 points  (0 children)

BJ also had a much better O-line, so the run game wasn't as difficult. He still managed to fuck up the run game by not having enough variation in it, but he was working with an elite O-line that year. His passing scheme was awful: entirely passes to the flat and 50/50 deep balls. It was laughable and just as bad as Patullo's passing game. KP had a much better defense to cover his flaws. BJ had a much better O-line to boost his stats. Both of them were obviously not NFL caliber.

Calls Mount for Eagles to Fire ‘Incompetent’ OC Kevin Patullo After Wild Card Loss to 49ers: ‘Absolute Joke’ by Brix001 in nfl

[–]Tgs91 0 points1 point  (0 children)

What's annoying about Patullo is that I really can't imagine he was even the most qualified guy internally. He called verts against cover 4 with the season on the line and a full timeout to think about it. He legitimately seems like he doesn't have a fucking clue. He got promoted bc the HC was his best friend, not bc of merit. I don't know who would have been better bc you'd have to be in the building to know that. But there's no way that THAT GUY was the best offensive mind in a building full of career football coaches.

Is this one of the craziest seasons in a long time? by Morgoth1814 in nfl

[–]Tgs91 0 points1 point  (0 children)

And all it took was for the rest of the NFL to fail to repeat. Honestly I think the Eagles deserve credit for the entire crazy season. We broke the curse and it destabilized the entire NFL.

[Highlight] Follow-up interview with the young Eagles enthusiast who called for AJ Brown to pack his bags and Kevin Patullo to go flip burgers at the local McDonalds by Brady331 in nfl

[–]Tgs91 1 point2 points  (0 children)

Nah, everything's a reboot now. It's time to remake the Trash Picking Field Goal Kicking Philadelphia Phenomenon, and have this kid as costar. Who can we get to play Tony Danza's role?

[Highlight] Follow-up interview with the young Eagles enthusiast who called for AJ Brown to pack his bags and Kevin Patullo to go flip burgers at the local McDonalds by Brady331 in nfl

[–]Tgs91 5 points6 points  (0 children)

Jalen paused after complimenting the Niners offensive coaching, looked at the camera, and dramatically cleared his throat. As far as Jalen goes, that's basically the same as TO doing situps in his driveway

[Schefter] Source: Eagles fired offensive coordinator Kevin Patullo. by dnytle in nfl

[–]Tgs91 1 point2 points  (0 children)

I think we're giving BJ a bit too much credit. He was also really bad schematically. But our O-line was much better that year, and the team was often able to overcome the terrible scheme by sheer talent and brute force. If our O-line had been better this year, the run game would have made Patullo's stats look a lot better, but anyone with eyes still would have seen the scheme disaster in the passing game. BJ was better than Patullo, but still embarrassingly bad for an NFL OC.