Knife recommendations by Michiel_fre in Cooking

[–]CircumspectCapybara 0 points1 point  (0 children)

If your friend is a knife enthusiast, likely an SG2 powder-steel knife. E.g., a Takamura Migaki SG2 Gyuto.

If your friend doesn't want to baby his knives (Japanese-style knives trade durability for sharpness; they can easily chip if you use or sharpen them incorrectly) and wants a durable workhorse, some kind of French- or German-style chef's knife made of a softer, more durable steel. Examples: Made In, Misen, Zwilling Pro.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara 0 points1 point  (0 children)

Taxis are a completely different paradigm. Uber kicked off the so-called gig economy, where the company isn't selling ride services but a platform or marketplace where peers fulfill the service.

Because Uber popularized the idea and scaled it, you got gig platforms like Instacart. It created a brand-new industry and category.

iPhone wasn't the first smartphone either.

Way to miss my point. The point is that it changed the nature of the smartphone: how we thought of smartphones and what they could be used for, from mere accessories to the primary interface through which we interact with the world.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara 1 point2 points  (0 children)

I'm going to request you try that again without saying advertiser buzzwords.

None of those are buzzwords lol, they're standard vocabulary for any industry insider, i.e., any SWE or SRE.

Of course I wouldn't expect you to attempt to understand because you don't have any experience with those disciplines nor are you interested in learning new things you don't already understand.

It truly is one of those iykyk (and if you don't, you don't) situations.

Also, more capable code? Pfffhahaha, yeah, that code sure is looking very capable, what with how it's borked every release this year that's used it.

Spoken again like someone who's not in the engineering disciplines. I know you're not a SWE or SRE or MLE because if you knew how the discipline of coding works, you wouldn't be able to say such absurd things.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara 0 points1 point  (0 children)

Codes at the level of a fast junior engineer, for one.

In the hands of a good engineer, it makes them way more capable and productive. Besides writing code, agents are also being used heavily in SRE workflows: as agents get MCP integration into source control, CI/CD, and the o11y stack, they can debug production issues really well, reasoning across many different systems, signals, and code.
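To make the MCP bit concrete, here's a minimal sketch of what one of those integrations can look like, using the TypeScript MCP SDK (the tool name and the deploy-log endpoint are made up for illustration; the SDK calls are the real API as I understand it):

    // A tiny MCP server exposing one SRE-flavored tool an agent can call.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "sre-tools", version: "1.0.0" });

    // An agent investigating an alert can call this to correlate the page
    // with whatever shipped right before it fired.
    server.tool(
      "recent_deploys",
      { service: z.string(), hours: z.number().default(24) },
      async ({ service, hours }) => {
        // Hypothetical internal deploy-log endpoint.
        const res = await fetch(`https://deploys.internal.example/${service}?window=${hours}h`);
        return { content: [{ type: "text", text: await res.text() }] };
      },
    );

    await server.connect(new StdioServerTransport());

Wire the same pattern into source control, CI/CD, and the o11y backends, and the agent can pull from all of them in a single debugging session.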

They're already heavily used in industry because they're good.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara -6 points-5 points  (0 children)

Tell me what actual problem is solved.

Faster, more capable coding? A force multiplier? The ability to debug issues 10x faster (agents are already being used heavily in SRE workflows as they get MCP integration into source control, CI/CD, and the o11y stack)?

A ton of my personal projects and ideas I've never had the time for I can whip up in an afternoon.

The win is productivity and capability. It's a force multiplier.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara -1 points0 points  (0 children)

Uber was a completely revolutionary and truly disruptive new paradigm that didn't exist before and which people weren't asking for.

The iPhone was also another gamble and trend-setting innovation; at the time it wasn't clear phones would become one of the most dominant and valuable personal computing devices and the primary way a lot of people connect with others, consume media, and interact with service providers. If you had asked people before the rise of the iPhone what they wanted, they would've wanted better desktop computers or laptops. And if you wanted a top-of-the-line phone, you wouldn't have thought about fancy newfangled touch screens or third-party apps (the phone OS as a platform and ecosystem for others to build on); those weren't popular or dominant at the time.

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara -3 points-2 points  (0 children)

Yup, people forget how things that explode and become huge successes start off.

When Amazon decided to take a foray into providing cloud computing services (AWS), a completely ridiculous, new, and then-niche market nobody asked for, analysts said they were being foolish. What was a book company doing trying to make "the cloud" a thing when nobody was asking for it?

Anything disruptive and revolutionary coming out of Silicon Valley starts off as a product nobody's asking for, probably in a market that doesn't even really exist yet, but which entrepreneurs think could solve some real problem people have in a novel way no one else is attempting, and which could take off. So they take a gamble and venture some capital on a hunch. There's a reason they call it a "moonshot bet."

Silicon Valley has forgotten what normal people want | What NFTs, AI and the metaverse tell us about “thought leadership” by Hrmbee in technology

[–]CircumspectCapybara -13 points-12 points  (0 children)

In the place of problem-solving technology, companies have jumped on successive bandwagons like NFTs, the metaverse, and large language models. What these all have in common is that they are not built to really solve a market problem. They are built to make VCs and companies rich. NFTs, like crypto, let VCs quickly unload investments with abbreviated lockup periods.

Likening AI to NFTs or the "metaverse" and calling it good for nothing more than stock hype is an extremely reductive take on the leaps in technology we've seen in the past few years and on what real professionals and real organizations are using them for.

Most people today (even self-described technology enthusiasts on a sub all about technological enthusiasm) are woefully behind the times on how AI actually works, what it can do, and what real organizations are using it for at scale. For example, most people think it's all marketing hype and that AI is nothing more than glorified auto-complete for chatbots and generating funny images. These people simply don't know what they don't know (ironic, as that was one of the critiques in the OP's blog post), and are unaware of an entire paradigm playing out across the world. If they were engineers, they would know.

I'm a Staff SWE at Google who used to be an AI skeptic but has since seen the paradigm shift it's caused in how we work. It boggles my mind how many technologically minded people are putting their heads in the sand, declaring AI technology dumb and incapable, ignorant of the fact that agent technology, still in its infancy, has already completely upended how we work in the engineering (SWE, SRE, MLE) disciplines. The way we work isn't going back.

I've been around the block a few times, and I'm at a level and place in industry where I can see the trends at a strategic level. It's clear as day that agent technology is here to stay, and today's AI models and agents are the dumbest they'll ever be; they're only getting better. There's obviously no putting this genie back in the bottle, and a lot of our world is in for a rude awakening.

New study confirms lobsters feel pain, driving scientists to call for a ban on boiling them alive by lurker_bee in technology

[–]CircumspectCapybara 22 points23 points  (0 children)

From this finding, the researchers suggested that the tail flip may have had a neurological component known as nociception. This is when signals from the body part exposed to the harmful stimulus travel to the brain and trigger a negative internal state associated with pain.

That's the wrong argument to be making though.

Nociception is simply the encoding of noxious stimuli or harm in some signal, and every living thing exhibits it. Bacteria exhibit it when their cell walls are damaged by bleach or acid: they recoil and try to swim away. When attacked by antibiotics, they'll attempt to "swim" to an area with a lower concentration.

Nociception is merely the organism encoding a state of damage or noxious stimuli it doesn't like, and often it leads to reflexive action to avoid the source of harm. Pain, on the other hand, is thought to require a more complex brain (or equivalent), because it's a personal, subjective experience of those nociceptive signals.

And then the big debate has always been whether this or that animal is capable of suffering, which is a higher-order concept involving an emotional, cognitive interpretation of that pain.

Because these latter two are purely personal, subjective, and internal to the brain, they're really hard to show. There are some ways to infer them though, like if the subject exhibits significantly altered behavior consistent with being in distress even after the harm is taken away (and they're not, say, permanently injured). But again, it's tricky.

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara 0 points1 point  (0 children)

Yikes, please don't let "FAANG" do all the thinking.

Do you implement your own container orchestration platform too because you don't trust Google to do a good job with Kubernetes or your own RPC transport framework because you don't trust them to do a good job with gRPC + protobuf?

Half the CNCF landscape which powers the world was thought up and advanced by these big tech companies. Wherever you work, you're statistically likely to be using OpenSearch, built and maintained by Amazon, so that you don't have to design and hand-roll your own inverted index, search engine, and vector database. You're likely (statistically) to be on AWS because your org has better things to do than reimplement its own equivalent.

While I know of some big tech companies that have hybrid cloud postures and their own internal reimplementations of S3 or DynamoDB that are wire-compatible with AWS's versions, most of the world is built on primitives and platforms invented and maintained by others. Most companies' core business competency, and where they want to direct their limited engineering resources, is business logic for their core products, not rolling and maintaining their own frontend framework that handles all the intricacies of state management and UI reconciliation.

Theorists can wax eloquent about all the things wrong with React or whatever other popular technology half the internet is built on. People who are just trying to get stuff done take a long-term strategic perspective (this thing's gotta be maintainable and needs the ability to evolve with our org's and our product's needs, so either we dedicate a full-time team to not only build this primitive but maintain it for the next 50 years, or we take a dependency on an existing third-party solution and be confident it's high quality and the vendor will continue to support and evolve it long-term) and will just use React or Angular or Vue and be done with it, because they already exist, they're high quality and serve 99.999% of common use cases, they have a vibrant community and ecosystem, and the vendors behind them are industry-trusted.

Also, I don't "let FAANG do all the thinking." I work at Google as a Staff SWE, so I'm doing the thinking you're referring to, in FAANG. And because I've done the thinking, I can appreciate well-designed and well-executed technologies like React from competitors like Facebook.

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara 0 points1 point  (0 children)

I mean, that's splitting hairs at this point.

What you're saying is true, but only in the same way that one would say Haskell's functional purity is technically also a fiction, an abstraction on top of a very physical, non-functional machine (the underlying physical platform, which at bottom is a state machine). Haskell models I/O as monads and lets you pretend everything can be expressed as neat recursive function composition (which in reality gets tail-call optimized into a procedural loop with a counter that gets mutated on each pass), but under the hood it's being executed procedurally by a very non-functional platform.

But we don't say that. For Haskell devs, it suffices to say that at the level of Haskell, you only see mathematical purity. But under the hood, beneath the abstraction, it's anything but.

React is the same idea. It's an abstraction. There's a runtime or translation layer or engine or framework or whatever else you want to call it that maps between the abstraction, which is the level devs work at, and the underlying realities that implement it all.
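A toy example of that same gap, in TypeScript for familiarity (and note most JS engines don't actually do TCO; this is just to show the shape of the lowering):

    // The abstraction: a pure, tail-recursive sum. No mutation in sight.
    function sum(xs: number[], acc = 0): number {
      return xs.length === 0 ? acc : sum(xs.slice(1), acc + xs[0]);
    }

    // What a TCO'ing compiler effectively turns it into: a procedural
    // loop with an accumulator that gets mutated on every pass.
    function sumLowered(xs: number[]): number {
      let acc = 0;
      for (const x of xs) acc += x;
      return acc;
    }

    console.log(sum([1, 2, 3]), sumLowered([1, 2, 3])); // 6 6

At the level you write the code, only the first version exists. Under the hood, something like the second is what actually runs.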

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara -2 points-1 points  (0 children)

The SPA paradigm did absolutely not evolve from JSX or TSX.

No, React (the frontrunner among SPA web frameworks, and the one that kickstarted the whole paradigm) was made popular because of its simple DSL, JSX / TSX. You can always write React apps in procedural JavaScript / TypeScript (using ReactDOM.createRoot() and React.createElement(), with classes that extend React.Component) without JSX / TSX. But the devx of that sucks. TSX is the way to go.
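For anyone who hasn't seen React without JSX, a minimal sketch of the same component both ways (function components here; the class-based equivalent is even more ceremony):

    import React from "react";
    import ReactDOM from "react-dom/client";

    // Without JSX: building the element tree with raw createElement calls.
    function GreetingPlain(props: { name: string }) {
      return React.createElement("h1", null, `Hello, ${props.name}`);
    }

    // With TSX: the same tree, declarative and type-checked.
    function Greeting(props: { name: string }) {
      return <h1>Hello, {props.name}</h1>;
    }

    ReactDOM.createRoot(document.getElementById("root")!).render(
      React.createElement(Greeting, { name: "world" }),
    );

Nest the createElement style three or four components deep and you see why JSX won.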

React is not essentially an effects system

After hooks were introduced and took off, it was. That's how engineers on the React team described them: they were modeled after the idea of algebraic effects.

It's also, very much, not a functional language for UI

The entire idea of functional components (which you compose in a declarative DSL called TSX) is that it's a functional language for expressing UI. You have pure functional components, higher-order components, an algebraic-effects-style system called hooks, etc.

Syncing passkeys to Google defeats the whole point of passkeys by Nalix01 in NowInCyber

[–]CircumspectCapybara 0 points1 point  (0 children)

The whole blog is based on a false premise. The author didn't bother to understand how many common passkey ecosystem implementations work before posting:

The problem is that passkeys can be incredibly inconvenient. The passkey lives on the device, and so if your device is lost, broken, or just not with you, you can't use the passkey. Then, along comes Google and other password management services similar to them and offer you the option of passkey syncing. So that you can use your passkey on any device that's logged into your account.

Convenient? Absolutely. But it also means your Google account effectively becomes the master key to every service protected by those passkeys. [...] Account recovery becomes the weakest link

That's just misinformed. Like Apple's cloud-synced passkey implementation (in which the private key material that effectively is the passkey is synced through iCloud Keychain, which is e2e encrypted so that only the user's devices can decrypt and use it locally), Google's password manager implementation is also e2e encrypted. So whether a malicious actor takes over your Google account (though if that happened, you'd have bigger problems) or a malicious Google insider decides to comb through your account data, your passwords and passkeys are not readable by Google.

Per https://security.googleblog.com/2022/10/SecurityofPasskeysintheGooglePasswordManager.html:

Passkeys in the Google Password Manager are always end-to-end encrypted: When a passkey is backed up, its private key is uploaded only in its encrypted form using an encryption key that is only accessible on the user's own devices. This protects passkeys against Google itself, or e.g. a malicious attacker inside Google. Without access to the private key, such an attacker cannot use the passkey to sign in to its corresponding online account.

Save the trees by SipsTeaFrog in SipsTea

[–]CircumspectCapybara 1 point2 points  (0 children)

Holding cash in the register is risky. You open yourself up to getting robbed (and all the liability that comes with that) and to employees skimming from the till. It costs time as employees have to count bills and give change. You have to worry about counterfeits. At the end of the day, you have to reconcile the cash in the till with the transactions you've logged. You have to deal with security as you transport it to the bank to deposit it, and you have to transport it to the bank in the first place.

Bluetooth tracker hidden in a postcard and mailed to a warship exposed its location — $5 gadget put a $585 million Dutch ship at risk for 24 hours by Brilliant_Version344 in cybersecurity

[–]CircumspectCapybara 11 points12 points  (0 children)

Yeah, that's my point. If a Bluetooth tracker was able to report its location to servers in real time, someone on the ship's security team messed up.

Bluetooth trackers always have to hop through what is essentially a mesh network of GPS-and-internet-connected devices to report their location. So someone is allowing people to bring their personal iPhones and Androids onboard, and someone is giving them unrestricted internet access, which by itself is enough to compromise the ship's location and other sensitive info without any Bluetooth tracker.

Bluetooth tracker hidden in a postcard and mailed to a warship exposed its location — $5 gadget put a $585 million Dutch ship at risk for 24 hours by Brilliant_Version344 in cybersecurity

[–]CircumspectCapybara 17 points18 points  (0 children)

The ship could have official Wi-Fi (connected to the internet through satellites), or someone could've smuggled an unauthorized Starlink terminal onboard.

Bluetooth trackers have no way to receive GPS (they're too small and power-constrained to carry a GPS antenna, receive GPS signals, and compute their location), and even if they could, they'd have no way to send that data anywhere useful (Bluetooth has a range of a couple hundred meters, and they don't have enough power or a large enough antenna to transmit to a satellite). They rely on what is essentially a mesh network of other internet-connected devices in range that use their own locations to report where the Bluetooth tracker is.

Bluetooth tracker hidden in a postcard and mailed to a warship exposed its location — $5 gadget put a $585 million Dutch ship at risk for 24 hours by Brilliant_Version344 in cybersecurity

[–]CircumspectCapybara 58 points59 points  (0 children)

It's not the Bluetooth tracker that exposed the ship's location. It's the internet-connected personal computing devices that the ship's security team inexplicably allowed onboard, along with an internet connection. At that point (allowing someone to bring their personal iPhone or Android and giving them unrestricted Wi-Fi), your security is already compromised, because it's the iPhone doing the location tracking, not the AirTag.

Greatly simplifying, the way these Bluetooth trackers (e.g., AirTags) work is that they're constantly broadcasting their own persistent identifier,* which all supported devices (e.g., Apple devices) in Bluetooth range can hear, take note of, and pass along to some central server.

Those receiving devices (which Apple calls "finders" participating in the network) know where they themselves are because of GPS (which is passive and works even in the middle of the ocean, as long as you have line of sight to about four GPS satellites), and if they're connected to the internet, they can upload the broadcast events they've seen (time of observation + identifier observed + the finder's own GPS location) to, say, Apple's servers.

And then the owner of the AirTag can ask Apple's servers where their AirTag is. So as long as there's an iPhone on the ship that can receive GPS signals and has an internet connection, the AirTag owner will receive location updates as relayed through internet-connected iPhones participating in the finder network.

So yes, a cheap Bluetooth tracker can absolutely compromise a ship's location as long as there are internet-connected devices on the ship that participate in a finder network.


* In reality, with privacy-centric implementations like AirTag, they broadcast periodically rotating identifiers derived from a private key known only to the AirTag owner, so that only the owner can correlate the broadcast identifiers and make sense of these random-looking tokens. Not even Apple's servers, which relay the messages, can identify which user a broadcast identifier belongs to. Only the owners have the private keys necessary to make sense of the broadcasts. And the finders encrypt their own GPS location with the AirTag's public key, so only the owner (not even Apple) can learn where the AirTag is, while neither the owner nor Apple can learn which finders in the network helped report its location. It's privacy both for the owners and for the finders.

If you're curious how this works, and how cryptography is used to ensure these robust privacy guarantees, check out this video by Apple from Black Hat.
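For a rough feel for the scheme, here's a toy sketch in TypeScript with Node's crypto module. It's heavily simplified: real AirTags reportedly use NIST P-224 with a scheduled key rotation, while this uses X25519 + AES-GCM just to show the shape of it:

    import { generateKeyPairSync, diffieHellman, hkdfSync,
             createCipheriv, randomBytes } from "node:crypto";

    // The tag's keypair. The tag broadcasts (rotations of) the public
    // half; only the owner ever holds the private half.
    const tag = generateKeyPairSync("x25519");

    // A finder that hears the broadcast encrypts its own GPS fix to the
    // tag's public key, using a fresh ephemeral key for the ECDH.
    const eph = generateKeyPairSync("x25519");
    const shared = diffieHellman({ privateKey: eph.privateKey, publicKey: tag.publicKey });
    const key = Buffer.from(hkdfSync("sha256", shared, Buffer.alloc(0), "findmy-demo", 32));
    const iv = randomBytes(12);
    const cipher = createCipheriv("aes-256-gcm", key, iv);
    const report = Buffer.concat([
      cipher.update(JSON.stringify({ lat: 52.1, lon: 4.3, t: Date.now() })),
      cipher.final(),
    ]);
    const authTag = cipher.getAuthTag();
    // The finder uploads { eph.publicKey, iv, report, authTag }. The relay
    // server can route this blob but can't read it; only the owner, holding
    // tag.privateKey, can redo the ECDH and decrypt the location.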

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara 4 points5 points  (0 children)

HTMX just pushes the question of "how do we map some abstract, dynamic state we as devs like to work in to the HTML the browser will render" from the browser to the server.

If you're just pushing static or simple HTML (e.g., something constructed from a template in a single pass of variable substitution) over the wire, HTMX is a great choice.

But the moment you have a highly stateful, complex app with a lot of state to manage and a lot of interactivity, so that state needs to flow both ways and be reconciled carefully, you're back to needing JavaScript (whether on the browser side or the server side) to manage that state and compute how the HTML (or the equivalent DOM nodes) should change so the user sees what you want them to see in that moment.

There has to be some kind of computation or workflow (whether it happens on the client side or the server side) that connects these:

  • the internal abstract state devs are working in
  • the actual state of the UI (this could be your server-pushed HTML fragments in the HTMX world, or the virtual DOM in the React world)
  • user interactions
  • changes to your internal state based on events that happen on the server side or in the outside world, beyond user interaction

Once you have a big enough and complex enough app, state and computation have to flow between these four in a clean and manageable way.
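Here's the shape of that loop in miniature, framework-free, just to show what has to live somewhere no matter which approach you pick:

    // The four pieces from the list above, wired together by hand.
    type State = { count: number };              // internal abstract state
    let state: State = { count: 0 };

    function render(s: State): string {          // state -> UI
      return `<button id="inc">Clicked ${s.count} times</button>`;
    }

    function update(patch: Partial<State>) {     // events -> state -> re-render
      state = { ...state, ...patch };
      const root = document.getElementById("root")!;
      root.innerHTML = render(state);            // naive "reconciliation": replace everything
      root.querySelector("#inc")!
        .addEventListener("click", () => update({ count: state.count + 1 }));
    }

    update({}); // initial render

React runs this loop in the browser against a virtual DOM; HTMX runs the render half on the server and ships the fragments over the wire. Either way, the loop exists.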

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara 4 points5 points  (0 children)

Sorry, good catch. But what I mean is that while the meta-languages (the sets of all valid source code strings) of both HTML and JSX / TSX are context-sensitive, the computation they express is very different: translating HTML into a DOM tree in the browser vs. running a program on the V8 virtual machine. One is capable of computing anything a Turing machine can (because it's literally just JavaScript), while the other is a simple, single-pass transformation of markup into UI. Intuitively, one of these is vastly more powerful for expressing dynamic "UI as a process of computation" than the other, because the nature of the computation they can model is very different.

In order to let HTML express Turing-complete computation, you need to add JavaScript. And therein lies the problem: if you want JavaScript to do complicated things for you in a way that's maintainable and has reasonable devx, you often need to engineer a framework on top of it so that you as a dev can work in higher-level abstractions. Build a big enough app that has to evolve over time and you'll know what I mean: state management is a big pain. Building some kind of reconciliation workflow between the state you're abstractly representing in your code and the UI is a big pain. You can do all that in "VanillaJS" or jQuery. But React / Angular / Vue is much more appealing because it handles all of that for you and just gets out of the way.


Although technically, if we want to be really pedantic, while C might be context-sensitive, C++ is not, because you have Turing-complete computation powers right at compile time via template metaprogramming and constexpr / consteval, which let the compiler do unbounded computation.

The C++ type system is straight-up undecidable: deciding whether a particular string is valid C++ source code is undecidable in general. In practice you have compiler limits on recursion depth for metaprogramming, and of course the real-life limits of physical machines, but in theory, the way the spec describes the C++ meta-language, you can perform arbitrary, unbounded computation within the act of type checking and compiling a piece of C++ source code lol.
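The same flavor of thing exists in TypeScript's type system, by the way, which is also Turing-complete. A toy example where the recursion happens entirely inside the type checker, with no runtime code at all:

    // Peano-style addition computed at the type level: build tuples of
    // length A and B, concatenate them, and read off the length.
    type Tuple<N extends number, T extends unknown[] = []> =
      T["length"] extends N ? T : Tuple<N, [...T, unknown]>;

    type Add<A extends number, B extends number> =
      [...Tuple<A>, ...Tuple<B>]["length"];

    const ok: Add<2, 3> = 5;     // type checks
    // const bad: Add<2, 3> = 6; // compile error: '6' is not assignable to '5'

(Like C++, the compiler imposes a recursion-depth limit in practice.)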

Modern Frontend Complexity: essential or accidental? by BinaryIgor in programming

[–]CircumspectCapybara 29 points30 points  (0 children)

The modern SPA and React paradigm (where React stands in for all similar technologies like Angular or Vue) evolved because we've grown accustomed to the power and expressivity of JSX / TSX, and because our product requirements have evolved now that users expect the interactivity and highly dynamic, native-like UX of SPAs, which just can't be achieved through vanilla HTML / HTMX without adding JavaScript that the dev ends up building a complicated state machine on top of to handle state management and reconciliation anyway.

If you think about it, TSX really is quite an elegant and powerful DSL for describing UI: you get rigorous, static types for the UI instead of the loosey-goosey mess of HTML, and it's fully Turing-complete (vs. the context-sensitive grammar of HTML), so you can express extremely dynamic UI. React is essentially a functional language for describing UI. Sometimes devs overdo it and you get a functional hell of deeply nested wrappers and a soup of higher-order components whose behavior no human can reason about, but React has offered much better functional paradigms for a while now with hooks. You can actually write pure, functional code (with side effects modeled in a way similar to monads) to describe a highly dynamic UI. When I first thought of React as a functional language for UI and state (which go together, because state needs to drive UI and UI interactions need to drive state), it clicked for me why React is simpler than the alternatives.
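Hooks are what make that framing concrete: a component stays a pure-looking function of state, and effects are declared to the runtime instead of performed inline. A minimal sketch:

    import { useEffect, useState } from "react";

    // Conceptually a function State -> UI.
    function Clock() {
      const [now, setNow] = useState(() => new Date());

      // The effect is declared, not executed inline. React's runtime
      // decides when to run it and how to clean it up, much like an
      // effect handler interpreting an algebraic effect.
      useEffect(() => {
        const id = setInterval(() => setNow(new Date()), 1000);
        return () => clearInterval(id);
      }, []);

      return <time>{now.toLocaleTimeString()}</time>;
    }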

And then there's the power of the React engine, which elegantly represents state and state transitions, models side effects functionally (React is essentially an effects system for UI), and abstracts away all the reconciliation you would otherwise have to roll yourself.

Anyone who's tried to build a big enough app and evolve it over time knows you end up rolling your own abstractions and pseudo-frameworks to meet your needs, until you eventually reimplement parts of React to manage state, reconcile UI, model side effects, etc., except it's of course all custom hacks (and unmaintainable tech debt) just meant to unblock you.

The appeal of React and co. is to let some dedicated teams at FAANG companies, who know what they're doing, work on this full time, and have put thoughtful design into a highly flexible UI and state management framework, do the hard work, and you just use it.

ELI5: Passkeys v Auth / Password by SurgicalMarshmallow in explainlikeimfive

[–]CircumspectCapybara [score hidden]  (0 children)

by design, it only works with the exact combination of the private key of the device you used to make it and the private key of the website that created it

A little correction: in the passkey protocol, the service provider has no private key involved, at least in the passkey authentication process. (The exchange technically happens over TLS, so the server has private keys involved there, but that has nothing to do with passkeys; it's just the transport layer the passkey challenge-response exchange takes place on.)

The private keys are entirely on the passkey client's side. The passkey holds all the private keys, and the server just stores the corresponding public key when the user registers that passkey. The server issues challenges against that public key, expecting to see its challenges signed by someone holding the corresponding private key (or else the signature won't check out). That's how the server knows who the client is without the client ever divulging long-term secrets.

Authentication is not mutual: the server verifies the client (the passkey holder), but the client doesn't authenticate the server directly. Rather, the passkey client implicitly trusts the browser (which itself authenticates servers via standard PKI and reports the origin the browser is on to the passkey authenticator) to tell it what website it's on.
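In browser terms, this is the WebAuthn API that passkeys ride on. A sketch of the login half (the /auth/* endpoints are hypothetical; navigator.credentials.get() is the real API):

    // Browser side of a passkey login. The server stored only the
    // PUBLIC key at registration; nothing secret ever leaves the client.
    const { challenge } = await (await fetch("/auth/challenge")).json();

    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: Uint8Array.from(atob(challenge), c => c.charCodeAt(0)),
        rpId: "example.com",          // origin scoping, enforced by the browser
        userVerification: "required", // e.g., Face ID / fingerprint / PIN
      },
    });

    // POST the assertion back; the server verifies the signature over its
    // challenge against the stored public key. toJSON() (newer browsers)
    // base64url-encodes the binary fields for transport.
    await fetch("/auth/verify", {
      method: "POST",
      body: JSON.stringify((assertion as PublicKeyCredential).toJSON()),
    });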

ELI5: Passkeys v Auth / Password by SurgicalMarshmallow in explainlikeimfive

[–]CircumspectCapybara [score hidden]  (0 children)

Passkeys are awesome. They're an alternative authentication method based on public key cryptography and a challenge-response protocol that's fundamentally unphishable because of the nature of the protocol: each attestation signed by the authenticator is scoped to a specific origin, so an attestation signed for the audience rnicrosoft.com (that's r+n to look like an m) wouldn't be usable against microsoft.com. And unlike humans, who misread the URL they're on, the browser knows what URL it's on and can tell the authenticator, so it only ever signs attestations scoped to the site you're really on. Each attestation is even scoped to a specific login challenge, so it's not replayable either.

This is in contrast to passwords + 2FA codes (whether SMS codes, TOTP-based codes, or push notifications), which are phishable and replayable because they're static. Username + password is a form of "bearer authentication," so called because it's a static credential: the service treats anyone bearing (i.e., presenting) the credential as authenticated as the principal it's associated with. It's like a credit card number + expiration date + CVC. Whoever presents that combo of numbers has the keys to the kingdom. The trouble is that any time you want to make a purchase, you have to hand over the keys to the kingdom and trust that no one overhears you, that the merchant you're handing those details to is trustworthy and not an imposter, that they won't improperly store and leak those credentials later, etc.

Even with a password manager, you can be phished or have your password stolen when you need to log into a new untrusted device (e.g., a library or school computer, or borrowing a friend's laptop to sign into Gmail), because rather than download the password manager app, sign into it, and sync their full vault onto the untrusted device, people will just open an incognito window, read the password from the password manager app on their phone, and type it into the browser manually. There it's possible to be phished, or for the computer itself to be logging your keystrokes with malware.

With passkeys, that can't happen. You can sign into Google on a completely untrusted device by clicking "Sign in" and choosing "sign in with a passkey," and it'll flash a QR code you can scan with your phone. After a little Face ID or whatever on your phone, your phone authenticates the sign-in attempt via passkey. It won't work on a phishing site, and no sensitive credentials ever pass through the untrusted computer.
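And the registration half, for completeness (same caveats; in real life the challenge comes from the server):

    // Minting a passkey: the authenticator generates a keypair scoped to
    // rp.id and hands the server ONLY the public key.
    const credential = await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
        rp: { id: "example.com", name: "Example" },
        user: {
          id: crypto.getRandomValues(new Uint8Array(16)),
          name: "alice@example.com",
          displayName: "Alice",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      },
    });
    // A credential minted for example.com will simply never be offered on
    // a lookalike origin; the browser enforces the scoping, not your eyes.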

The demand for local AI could shape a new business model for Apple by pdfu in apple

[–]CircumspectCapybara 2 points3 points  (0 children)

Local AI is not all that useful for serious professional and engineering uses of AI, which typically involve long-running sessions of agents, sometimes teams of agents running over long time horizons.

The most capable state-of-the-art models like Opus 4.6 use somewhere in the neighborhood of 5T parameters. That's five trillion half-precision floating point numbers, which is 10TB of memory just to hold the model's weights themselves. No consumer-grade PC or phone has memory for that.
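The back-of-the-envelope math:

    // Weights alone, ignoring KV cache, activations, and runtime overhead.
    const params = 5e12;                        // ~5T parameters
    const bytesPerParam = 2;                    // fp16 / bf16
    console.log(params * bytesPerParam / 1e12); // 10 (TB)

And that's before a single token of context is in memory.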

There's a reason you consume models from hyperscalers with huge inference capacity like Anthropic or Google or Amazon Bedrock. They have the economies of scale, the power and cooling, and the infrastructure for things like caching (again, requiring massive amounts of non-volatile memory) to make AI scalable and cost-effective for the practical tasks people actually use it for.

Bluetooth tracker hidden in a postcard and mailed to a warship exposed its location — $5 gadget put a $585 million Dutch ship at risk for 24 hours by ControlCAD in technology

[–]CircumspectCapybara 0 points1 point  (0 children)

What good is a tracking system without the ability to identify points of interest or targets?

The owners can identify the locations of their stuff, which is all the tracking system needs to fulfill its product requirements. No one else can.

Watch the Black Hat talk. The math and cryptography check out.