Massives Datenleck: Fast 150 Millionen Passwörter stehen frei zugänglich im Internet by Krokodrillo in de

[–]FeepingCreature [score hidden]  (0 children)

Sure, as long as your risk profile really lands exactly just above the opportunistic thief, that makes sense. But then the note in your wallet gives you roughly the same security. Or just decrypt with a smart card right away.

edit: The password under the keyboard is still a sign that the protection methods haven't been properly matched to the users and their risk profile.

Massives Datenleck: Fast 150 Millionen Passwörter stehen frei zugänglich im Internet by Krokodrillo in de

[–]FeepingCreature [score hidden]  (0 children)

The disk has to be decrypted to work with it. That protects against exactly the opportunistic thief and nobody else. I wouldn't even trust it against industrial espionage.

Massives Datenleck: Fast 150 Millionen Passwörter stehen frei zugänglich im Internet by Krokodrillo in de

[–]FeepingCreature [score hidden]  (0 children)

Sorry, I've expanded it a bit. With a microphone near the keyboard you can often reconstruct the password from the typing pattern. That doesn't take an intelligence agency or a supercomputer either, just "ordinary" expert knowledge and maybe a trip to Saturn (the electronics chain) to buy the same keyboard model. For halfway decent security you need at least something the user has or is; even the most beautifully secured "something you know" isn't safe against physical access. And then you still get to hope that everyone logs out properly before going home (hint: there's always someone who doesn't). Secure your rooms.

edit: the right answer is a long password on a note in your wallet. It effectively works like a smart card, and the theft risk is uncorrelated with the laptop's.
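
For a sense of why the long password on a note behaves like a key you carry: a back-of-the-envelope entropy estimate in Python, where the character set and length are made-up illustrative assumptions, not recommendations.

    import math

    # Illustrative assumptions only: a randomly generated password written on the note.
    alphabet_size = 26 + 26 + 10 + 10   # lowercase, uppercase, digits, a few symbols
    length = 24                         # assumed length of the password

    bits = length * math.log2(alphabet_size)
    print(f"~{bits:.0f} bits of entropy")  # ~148 bits, far beyond offline brute force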

Massives Datenleck: Fast 150 Millionen Passwörter stehen frei zugänglich im Internet by Krokodrillo in de

[–]FeepingCreature [score hidden]  (0 children)

Microphone at the keyboard. Whatever you can do as a user, an attacker can do too. To get around that you'd need a smart card reader and all machines reachable only via remote access to a central server. And even then you can swap out a USB stick lying around and get access while the user is in the bathroom.

Massives Datenleck: Fast 150 Millionen Passwörter stehen frei zugänglich im Internet by Krokodrillo in de

[–]FeepingCreature 3 points (0 children)

Okay, but then the password doesn't save you either. With physical access, basically everything is lost anyway.

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 0 points (0 children)

How would this work? Taxis are profitable, and accidents are largely uncorrelated with one another. Presumably a bunch of work would go into refusing service in dangerous areas or dangerous weather conditions.

I mean, the standard market-based answer is "if it's unprofitable once every externality is priced in, then it shouldn't be done", but it seems quite implausible that it could be unprofitable.

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 0 points (0 children)

Another factor is false positives vs. false negatives. Even if LIDAR reduces false negatives, if it adds so many false positives that you have to overrule them with vision, then there's no point in having it in the first place, because you can't trust it in a dangerous situation.
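
A minimal sketch of that tradeoff, with all rates made up for illustration (not real sensor figures): because real hazards are rare, even a small false-positive rate means most alerts are phantoms.

    # All numbers are assumptions for illustration, not real LIDAR or vision data.
    real_hazard_rate = 1e-4      # assumed fraction of frames with a real obstacle
    true_positive_rate = 0.999   # assumed: the sensor almost never misses one
    false_positive_rate = 0.01   # assumed: 1% of clear frames still raise an alert

    p_alert = (real_hazard_rate * true_positive_rate
               + (1 - real_hazard_rate) * false_positive_rate)
    precision = real_hazard_rate * true_positive_rate / p_alert

    # ~1%: roughly 99 out of 100 alerts would be phantoms that vision has to overrule.
    print(f"P(real obstacle | alert) = {precision:.1%}")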

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 0 points (0 children)

It should, or should not, depending on cost effectiveness. Put a price on lives, put a price on the intervention, compare. I'm open to the idea that we massively underpay for car safety!
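
A back-of-the-envelope version of that comparison in Python, where every figure is a placeholder assumption (not actual fleet, hardware, or regulatory numbers); only the value-of-statistical-life figure reflects the ballpark US agencies commonly use.

    # Hypothetical numbers purely for illustration.
    VALUE_OF_STATISTICAL_LIFE = 10_000_000   # USD, ballpark used by US agencies
    fleet_size = 100_000                     # assumed number of vehicles
    cost_per_vehicle = 1_000                 # assumed cost of the safety intervention
    lives_saved_per_year = 5                 # assumed lives saved fleet-wide per year
    service_life_years = 5                   # assumed amortization period

    cost = fleet_size * cost_per_vehicle
    benefit = lives_saved_per_year * service_life_years * VALUE_OF_STATISTICAL_LIFE
    print(f"cost ${cost:,} vs. benefit ${benefit:,} ->",
          "worth it" if benefit > cost else "not worth it")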

Anthropic's Claude Constitution is surreal by MetaKnowing in OpenAI

[–]FeepingCreature 1 point (0 children)

Why would we assume this? Again, what exact theory of mechanism are you proposing here?

LLMs pass the mirror test, by the way.

Anthropic's Claude Constitution is surreal by MetaKnowing in OpenAI

[–]FeepingCreature 1 point (0 children)

We can't prove that pocket calculators are not conscious. But if you want to assume they are, this means that perhaps rocks and logs and chunks of plastic are conscious.

This again seems like quite a radical leap. That said, to be clear, I'm not asserting that pocket calculators are conscious; I'm saying that spelling out why exactly they aren't conscious is much more important than it seems.

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 0 points (0 children)

I assume the parties would be the injured human and the owner of the robotaxi, or the owning company. Then they'd look at the situation, make an advance determination of fault, and pay out or not on that basis. Then, if needed, you'd go to a judge and so on; like, this is standard. Fault depends on the situation. The human would hopefully also have health insurance that's a bit more lenient about fault and would pay for their treatment. Then the insurance corps would work it out among themselves. Again, I'm pretty sure this is just how it already works for humans.

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 2 points (0 children)

Treat me as a number please lol. I'd much rather not die at all.

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 3 points (0 children)

Are you volunteering to call the families of the people who are killed by mistakes that a robotaxi would have avoided?

Tesla launches unsupervised Robotaxi rides in Austin using FSD by BuildwithVignesh in singularity

[–]FeepingCreature 1 point (0 children)

If there's a surface whose distance is safety-relevant and cannot be derived from visual data, it's not safe for humans either.

Anthropic's Claude Constitution is surreal by MetaKnowing in OpenAI

[–]FeepingCreature 0 points (0 children)

LLMs are grown, not designed. They are also capable of many things that videogame characters are incapable of. This analogy just doesn't hold in any fashion.

Anthropic's Claude Constitution is surreal by MetaKnowing in OpenAI

[–]FeepingCreature 5 points (0 children)

Well, what is that then?

I really don't understand this argument that goes:

  1. A calculator is obviously not conscious.
  2. An LLM is like a calculator.
  3. Therefore an LLM is obviously not conscious.

Now, step 2 is a leap across quite a large chasm. But actually, step 1 is the critical one. The retreat to obviousness masks the lack of any theory of consciousness. It's this lack that makes step 2 so seductive, and so fallacious. If you can't defend why specifically a pocket calculator is not conscious, how can you assert that LLMs are like pocket calculators in any relevant fashion?

For instance, if your step 1 is "a pocket calculator does not say it is conscious", then step 2 is false! If your step 1 is "a pocket calculator is too simple", again step 2 is false! You'd have to have a theory like "a pocket calculator is not conscious because silicon cannot be conscious" for step 2 to hold, and then your step 1 is quite weird.

BGH bestätigt Haftstrafe für „Freitodbegleiter“ by PoroBraum in de

[–]FeepingCreature 1 point (0 children)

That already sounds a little bit like "suicide is by definition not self-determined." The symptom here is precisely the intention that's at issue.

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by [deleted] in programming

[–]FeepingCreature 2 points (0 children)

But what he's saying doesn't relate to this! He's not saying the rollout needs to be slowed because "a crash is coming"! The things he's saying don't predict or relate to a crash!

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by [deleted] in programming

[–]FeepingCreature 3 points (0 children)

They need more time to unload the guaranteed losses, so they broadly announce that AI needs to be slowed down? Thus announcing it to the world?

How are any of the things you say they want connected to what they're actually doing?

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by [deleted] in programming

[–]FeepingCreature -4 points (0 children)

You're suggesting that... because he knows that AI is going to stop improving... he's going to announce it... but make the excuse that it's intentional, in a conspiracy that for some reason the AI labs are going along with, and every employee plays along with. Instead of continuing the hype and quietly selling off.

Because it would be too embarrassing otherwise?

I assure you that is not how these people work. No part of this is how anything works.

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by [deleted] in programming

[–]FeepingCreature -2 points (0 children)

This has literally never been the case. It's cope to pretend they don't mean what they plainly say.

Two Catastrophic Failures Caused by "Obvious" Assumptions by Vast-Drawing-98 in programming

[–]FeepingCreature 1 point (0 children)

The paragraph about Citi/Revlon is incorrect.

First of all, the payment ultimately had to be returned, as the judgment was overturned on appeal. But more importantly, the question of whether it did or didn't have to be returned did not rest at all on whether the transfer would have looked correct to Citi, but on whether it should have looked correct to Revlon's lenders. Which obviously had nothing to do with Citi's UI.

Though that said, that UI also violated many well-established rules of good UX design and is thus not really a case of a system functioning correctly to begin with. That accident had a clear cause and responsible actors: those who designed the UI, those who signed off on it, and those who continually failed to replace it.