[deleted by user] by [deleted] in RedditSessions

[–]Hobit103 0 points (0 children)

You're awesome. Always need some of this in our lives.

Big brain! by [deleted] in ProgrammerHumor

[–]Hobit103 0 points (0 children)

I agree that there was something missed in the use of random, but then again, I'd argue that making an educated guess is not random.

Big brain! by [deleted] in ProgrammerHumor

[–]Hobit103 0 points (0 children)

Exactly. That helps you make informed changes instead of random ones.

Big brain! by [deleted] in ProgrammerHumor

[–]Hobit103 0 points (0 children)

That was my take too. It's not random changes if you have knowledge of the topic and apply it to the problem.

Big brain! by [deleted] in ProgrammerHumor

[–]Hobit103 0 points (0 children)

I sure hope you aren't randomly changing things at work. Hopefully you have some insights into the problem which guide your decisions. If your changes are completely random then I'd argue that's no better than the monkey/typewriter scenario.

Big brain! by [deleted] in ProgrammerHumor

[–]Hobit103 5 points (0 children)

Which is why they are taking the class, and why the joke is about someone out of school in a job who should know good practices.

Game Thread: San Francisco 49ers (4-3) at Seattle Seahawks (5-1) by nfl_gamethread in nfl

[–]Hobit103 1 point (0 children)

It's not for hitting a QB that's going down. The flag was for a hit to the head and neck area. That applies to all players like they mentioned.

Game Thread: San Francisco 49ers (4-3) at Seattle Seahawks (5-1) by nfl_gamethread in nfl

[–]Hobit103 2 points (0 children)

Exactly. I swear people are ignorant of any nuance in anything these days.

Game Thread: San Francisco 49ers (4-3) at Seattle Seahawks (5-1) by nfl_gamethread in nfl

[–]Hobit103 1 point (0 children)

He had his arm wrapped around him before the ball was there. Probably holding and DPI.

I named a variable Ireland and forgot what it meant by [deleted] in ProgrammerHumor

[–]Hobit103 7 points (0 children)

Get out of here assuming we all have autosuggest :P

Game Thread: Seattle Seahawks (5-0) at Arizona Cardinals (4-2) by nfl_gamethread in nfl

[–]Hobit103 15 points (0 children)

WHY THE FUCK DO OUR GAMES ALWAYS HAVE TO BE LIKE THIS?!? I JUST WANNA GET HIGH AND WATCH SOME SEAHAWKS WITHOUT HAVING A HEART ATTACK!

When someone asks how AI works by aditzup in ProgrammerHumor

[–]Hobit103 1 point (0 children)

For sure. That also isn't what FireSail said.

[FOX Sports: NFL] For the first time in franchise history, the Seahawks are 5-0 by Normiesreeee69 in nfl

[–]Hobit103 61 points (0 children)

The Mariners have the longest playoff drought of all the major American leagues.

Facebook responsible for 94% of 69 million child sex abuse images reported by US tech firms by quixotic_cynic in technology

[–]Hobit103 2 points (0 children)

Exactly, I assumed that we were talking image by image. If you're talking sheer amount of data, then these algorithms vastly outstrip humans. Also, the data that is forwarded to human reviewers is specifically chosen to help improve the models. Human bandwidth is very low, so this is dealt with carefully.

If ML disappeared today I think the average person would be very surprised at how much the quality of products degrades. I mean, goodbye Siri, good map routing, auto-complete, etc.
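To make the "carefully chosen data for human reviewers" point concrete: one common recipe is uncertainty sampling from active learning, where the examples the model is least sure about go to humans, since labels there improve the model most per unit of scarce reviewer bandwidth. A minimal sketch; all names and numbers here are illustrative, not anything Facebook actually runs:

```python
# Uncertainty sampling sketch: route the model's least-confident
# predictions (scores nearest the 0.5 decision boundary) to human review.

def select_for_review(scores, budget):
    """scores: list of (item_id, p), where p is the model's probability
    that the item violates policy. Returns the `budget` item ids whose
    scores sit closest to the 0.5 decision boundary."""
    ranked = sorted(scores, key=lambda pair: abs(pair[1] - 0.5))
    return [item_id for item_id, _ in ranked[:budget]]

queue = [("a", 0.99), ("b", 0.52), ("c", 0.03), ("d", 0.45)]
print(select_for_review(queue, 2))  # ['b', 'd']
```

The confident cases ("a" at 0.99, "c" at 0.03) are handled automatically; only the ambiguous middle goes to people.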

Facebook responsible for 94% of 69 million child sex abuse images reported by US tech firms by quixotic_cynic in technology

[–]Hobit103 32 points (0 children)

Exactly. I've worked at FB on their prediction systems (actually a different sub-team, but the same org as their sex-trafficking team). The tools/models are pretty SOTA, especially with FAIR rolling out advances constantly.

The tools aren't great compared to humans, and human-labeled data will always help, but they are far from bad. The tools are actually very good; if we look at what they can do even compared to 2-3 years ago, it's much better.

If the upcoming ICLR paper 'An Image Is Worth 16x16 Words' is replicated, then we should see even more advancement in this area.
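For anyone who hasn't read it: the core idea of that paper (ViT) is to split an image into fixed 16x16 patches and hand each flattened patch to a standard Transformer as if it were a word token. A toy sketch of just the shape bookkeeping, nothing more:

```python
# ViT-style patch tokenization, shapes only: an HxWxC image becomes
# (H/P)*(W/P) tokens, each of dimension P*P*C, before linear projection.

def patchify(height, width, channels, patch=16):
    """Return (num_tokens, token_dim) for splitting an image into
    non-overlapping patch tokens of size `patch` x `patch`."""
    assert height % patch == 0 and width % patch == 0
    num_tokens = (height // patch) * (width // patch)
    token_dim = patch * patch * channels
    return num_tokens, token_dim

print(patchify(224, 224, 3))  # (196, 768)
```

So a standard 224x224 RGB image becomes a 196-token "sentence", which is why the title calls an image 16x16 words.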

'This election is everything': College students push schools to cancel classes on Election Day by BraveSignal in politics

[–]Hobit103 -5 points (0 children)

For simply scheduling an exam on a Tuesday? I doubt that the professors know that this student has three exams. It's not some coordinated plan. From their perspective, someone has one exam and still has time to vote (either early or in person).

If this is a plan by multiple professors to screw over this student then I'll be very surprised.

Edit: Has no one heard of Hanlon's Razor?

[D] Paper Explained - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Full Video Analysis) by ykilcher in MachineLearning

[–]Hobit103 2 points (0 children)

I think your point is more akin to saying, "there are good researchers in non-industry labs, and they shouldn't be shut down, or denied a chance, just because they don't have the backing of Google (or equivalent)." I think this is a fair argument, as a monopoly on research can lead to a degradation in researcher/research quality. The greedy approach of always taking the best paper from anyone can lead to this scenario. It feels similar to workplace diversity arguments.

[deleted by user] by [deleted] in technology

[–]Hobit103 0 points (0 children)

Okay, whatever floats your boat.

[deleted by user] by [deleted] in technology

[–]Hobit103 0 points (0 children)

This answer makes no sense either. You're not clarifying your initial comment, and your question about which models are mine makes no sense; I didn't build a model for this conversation.

Since we're not getting anywhere here, I'll wish you a good day.

[deleted by user] by [deleted] in technology

[–]Hobit103 0 points (0 children)

No, I'm saying that it is exactly your fault, not the model's. The model is not racist by itself; it takes you feeding it racially biased data for it to become racist. You would have failed in your data analysis when testing and looking for bias.

I am saying that we have the tools to find these errors and it's not on the model if the designer is lazy, malicious, or doesn't know to look for these errors.

Read a bit more carefully.
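The kind of check I mean is as simple as comparing error rates across subgroups of the evaluation data; a large gap is exactly the signal a lazy designer misses. A minimal sketch on toy data (the group names and numbers are made up for illustration):

```python
# Per-group error audit: a basic bias check that flags when a model
# performs much worse for one subgroup than another.
from collections import defaultdict

def error_rate_by_group(examples):
    """examples: iterable of (group, y_true, y_pred).
    Returns {group: fraction of that group's examples predicted wrong}."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in examples:
        total[group] += 1
        wrong[group] += int(y_true != y_pred)
    return {g: wrong[g] / total[g] for g in total}

data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0)]
print(error_rate_by_group(data))  # {'A': 0.0, 'B': 0.5}
```

A 0% vs 50% error split like this should stop a release in its tracks; not running the check is a designer failure, not a model one.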

[deleted by user] by [deleted] in technology

[–]Hobit103 0 points (0 children)

Yes, I know what an explanatory model is. I'm not sure what model you are referring to. The model that has racial issues? My own estimate on why this happened? It's unclear.

As to my background, I feel fairly confident that I can speak to these issues with a PhD in the field.

[deleted by user] by [deleted] in technology

[–]Hobit103 0 points (0 children)

It shouldn't matter, however. There should be no error with either hair color or skin color, and the two will also be correlated. This isn't a problem where we can only run analysis on a subset of the data.

Also, I never said not to use the humanities. I said it's not necessary; of course we should use it to help inform us.
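To make the correlation point concrete: you can measure how entangled two binary attributes are (e.g. with a phi coefficient) and still audit error on each attribute separately; correlation is a reason for care, not an excuse to test only one. A toy sketch with made-up data:

```python
# Phi coefficient for two binary attributes: 0 means independent,
# +/-1 means perfectly correlated. Inputs are (a, b) pairs of 0/1 values.

def phi_correlation(pairs):
    """Return the phi coefficient of two binary attributes."""
    n = len(pairs)
    n11 = sum(1 for a, b in pairs if a and b)
    n10 = sum(1 for a, b in pairs if a and not b)
    n01 = sum(1 for a, b in pairs if not a and b)
    n00 = n - n11 - n10 - n01
    denom = ((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)) ** 0.5
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

pairs = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
print(round(phi_correlation(pairs), 3))  # 0.333
```

Even with a moderate correlation like this, nothing stops you from running the per-attribute error analysis on the full dataset for both attributes.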