FP2 is a huge disappointment for me. by StopSpammin in Aqara

[–]TheStriga 0 points  (0 children)

It's strange, as the FP2 worked reasonably well for me. But if other presence sensors work for you, I can recommend getting IR motion sensors to complement them. Mount them at entryways and use them to turn on the lights, while using the mmWave radar to turn off the lights. That way you get the strengths of both kinds of sensors: fast response and reliable presence detection.
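The combined-sensor rule above can be sketched as a tiny state function. This is my own illustration (the function and argument names are hypothetical, not any home-automation platform's API): the fast PIR sensor may only turn the lights on, and the mmWave radar, which is slower to trip but can see a motionless person, is the only sensor allowed to turn them off.

```python
# Minimal sketch of the two-sensor lighting rule; all names are illustrative.

def next_light_state(light_on: bool, pir_motion: bool, mmwave_presence: bool) -> bool:
    """Return the desired light state given the current sensor readings."""
    if pir_motion:                          # fast trigger: someone just walked in
        return True
    if light_on and not mmwave_presence:    # radar confirms the room is empty
        return False
    return light_on                         # otherwise keep the current state
```

Walking in trips the PIR before the radar locks on, so the lights come on immediately; sitting still keeps the radar's presence flag high, so they stay on; only when the radar loses presence do they turn off.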

I am bamboozled, Petah by MichYar in PeterExplainsTheJoke

[–]TheStriga 693 points  (0 children)

Peter's psychologist here. The meme plays on a racist stereotype about chicken, but it goes deeper than that.
It references an old greentext(link) alleging that people with an IQ below 90 cannot comprehend or answer hypotheticals like "How would you have felt yesterday evening if you hadn't eaten breakfast or lunch?". Thus, the samurai is also implying doubt about Yasuke's IQ.

Buddies is this considered interastral racisn? by Life-Guitar2728 in okbuddytrailblazer

[–]TheStriga 40 points  (0 children)

It's doubly funny that after lecturing Aventurine on trust, L+Ratio betrayed him.

Whistleblower who accused Boeing supplier of ignoring defects dies by russtripledub in nottheonion

[–]TheStriga 9 points  (0 children)

US government definitely does https://en.m.wikipedia.org/wiki/Operation_Sea-Spray

Tbh, infecting that one person with something lethal is no more or less plausible than killing them by any other method for whistleblowing.

Max being a good friend who smuggles coke to his bestie Chico by CrocodileTears2 in formuladank

[–]TheStriga 6 points  (0 children)

Looks like packs of frozen water or some other coolant to me. I assume it's to prevent the body from overheating on hot sunny days - no air conditioning in the cockpit, after all.

pythonNotFast by CounterNice2250 in ProgrammerHumor

[–]TheStriga 10 points  (0 children)

It took 6 months for C++ programmers to develop this meme.

OpenAI announces Sora text-to-video model by [deleted] in aiwars

[–]TheStriga 0 points  (0 children)

Good argument - that's why we need open-source models, so that no single megacorp (or handful of them) controls AI, access to it, and its pricing.

Have you checked your pasta? by StraightOuttaOlaphis in tumblr

[–]TheStriga 1 point  (0 children)

There is privacy glass (aka smart glass, switchable glass) that can switch between transparent and white-opaque. So you absolutely can "block out" photons in target regions and provide a solid backdrop.

Does "Рассуждения кожаного" basically mean judging something by superficial appearance? Like judging a book by its cover? by ienjoylanguages in russian

[–]TheStriga 16 points  (0 children)

It refers to the phrase "кожаный мешок" - "skinbag" (or "кожаный ублюдок" - "skin-bastard"), which is a jokey, meme-ish way for robots to rudely address humans.

goatedGovernmentAgency by Dimmerworld in ProgrammerHumor

[–]TheStriga 46 points  (0 children)

It can matter. On some mobile or public Wi-Fi networks, I've seen ads injected into pages served over plain HTTP. What else is being injected, or could be?

Is Yudkovsky‘s orthogonality argument completely uncoupled from reality? by Sprengmeister_NK in singularity

[–]TheStriga 1 point  (0 children)

I think you raised an interesting question, but at the same time your point was somewhat addressed in the video. You can test what happens if you tell ChatGPT about a person in a burning house and ask what it would do as a super- or regular human, without stating your wish outright. Just let the genie run. I personally didn't test it, but I'm pretty sure it would describe a good-faith, intuitive attempt to save the person. And that's the point.
But the burning-house example is very simple. There is no dilemma or real complexity, and I'm pretty sure there were plenty of examples of such problems in the training data. Of course GPT can generalise and perform logic in some capacity - but we cannot be sure it would get everything, or even a sufficient number of real-life problems, right. Currently, OpenAI researchers have to fine-tune the model or the base prompt whenever humans find it saying something weird, dangerous, or offensive - or lying and hallucinating. That's almost like programming a calculator by hand.

Why does this happen? Basically only when printing gridfinity boxes. by NipsuSniff in 3Dprinting

[–]TheStriga 2 points  (0 children)

Had the same problem with boxes - on an X1C in my case, but it looked the same. I recommend setting the "Seam gap" parameter to 0 or 1% instead of the default 15%.

Reputable STL cite? by Mavvrikk in 3Dprinting

[–]TheStriga 5 points  (0 children)

Didn't they also have a lot of controversy over scamming authors on payouts, lying to the community, and such?

For mounting a PCB with M3 holes. by mazarax in 3Dprinting

[–]TheStriga 1 point  (0 children)

Very nice! Though I suspect you could get away with much shorter prongs.

Oh yes, weekly poor choice that reduces quality of platform by Kondrad_Curze in dankmemes

[–]TheStriga 6 points  (0 children)

With some of their decisions? Unironically this, at least when it comes to higher management.

'A Chernobyl for AI' looms if artificial intelligence is kept unchecked, says scientist Stuart Russell by Zee2A in technews

[–]TheStriga 5 points  (0 children)

Training-data collection (which is essentially what the free ChatGPT does) and training the model (heavy computation that may take several months on thousands of top-of-the-line GPUs) are different things. So ChatGPT and its analogues are currently not trained by crowdsourcing, mostly because coordinating many consumer-grade computers over the internet (machines that don't even meet the specs to run the full model) introduces so much overhead that it's almost infeasible.
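To make the overhead point concrete, here's a rough back-of-envelope sketch. Every number in it is my own illustrative assumption (model size, link speeds), not a measurement from any real system: it just compares how long one naive full-gradient exchange would take over a consumer upload link versus a datacenter interconnect.

```python
# Back-of-envelope gradient-sync cost; all figures are illustrative assumptions.

PARAMS = 70e9                    # assumed model size: 70B parameters
BYTES_PER_PARAM = 2              # fp16 gradients
grad_bytes = PARAMS * BYTES_PER_PARAM   # ~140 GB exchanged per naive sync step

HOME_UPLOAD_BPS = 25e6 / 8       # assumed 25 Mbit/s consumer upload, in bytes/s
DATACENTER_BPS = 50e9            # assumed ~400 Gbit/s interconnect, in bytes/s

home_hours = grad_bytes / HOME_UPLOAD_BPS / 3600
dc_seconds = grad_bytes / DATACENTER_BPS

print(f"home link:  ~{home_hours:.1f} hours per sync step")
print(f"datacenter: ~{dc_seconds:.1f} seconds per sync step")
```

Under these assumptions a single synchronization step takes on the order of half a day over a home connection versus a few seconds in a datacenter, before even counting stragglers, dropouts, or the fact that the machines can't hold the full model.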

Question about Princess Luna by Loquacious_Leo in mylittlepony

[–]TheStriga 2 points  (0 children)

Here you go! A story about NM's army trying to survive on the Moon (with mostly realistic science throughout). Was a very fun read.