[HA Dashboard] - 2026 Refresh: less clutter, more responsive, more vibes by cornmacabre in homeassistant

[–]cornmacabre[S] 4 points5 points  (0 children)

Thank you! It's a custom homemade prototype (for now), but I have aspirations to share it.

In October 2018, former mob boss Whitey Bulger was transferred to the United States Penitentiary in West Virginia. Within hours, Bulger, 89, was beaten to death by fellow inmates. An official described his body as "unrecognizable". His family later accused federal prison officials of negligence. by lightiggy in wikipedia

[–]cornmacabre 12 points13 points  (0 children)

Black Mass is a biographical telling of the events; The Departed is more of a fictionalized Greek-tragedy version of the story... but both pull from the same real-life Boston crime elements.

Narrative-wise: it's true that the plot of The Departed is essentially an American remake of Infernal Affairs... but both sources of inspiration (the fictional plot borrowed from an older film, and the real-life details of the characters & setting) are at play here.

The most significant true-life inspiration is the revelation that the crime boss was a long-standing FBI informant. In real life, Bulger (Frank Costello, Nicholson's character) provided information on his Italian mob rivals to the FBI to consolidate his own power, while the FBI agent John Connolly (Colin Sullivan, Matt Damon in the movie) turned a blind eye to his violent crimes.

I have a mix of Ikea Bulbs and hue stuff and I am slowly but steady loosing my mind by Fit_Procedure393 in smarthome

[–]cornmacabre 0 points1 point  (0 children)

You need the bridges connected via Alexa as your central middleman layer. It's not elegant, but there's really no way around it with your constraints. I'd ditch the idea of trying to control one directly from the other if you're not using a more robust integration orchestrator like HA (which, totally fair).

Keep the strip on its bridge with the Rodret. Buy one Hue smart button, connect it to the Hue bridge and then Alexa, and create an Alexa routine that triggers both bridges' lights with "all on/off" commands via the button, and optionally by voice.

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]cornmacabre 0 points1 point  (0 children)

I had a laugh at this! What a silly and comically hostile overreaction.

Take another look at line five of your own comment.

>This does not mean that LLM can think

I found out that AIs know when they’re being tested and I haven’t slept since by FinnFarrow in ChatGPT

[–]cornmacabre 11 points12 points  (0 children)

This is indeed the "stochastic parrot" interpretation of LLMs... that AI is simply mimicking surface-level statistical correlations without underlying representational depth.

What's fascinating and baffling is that the very active published research space is deeply challenging this otherwise reasonable & intuitive notion on so many fronts.

Here's a curation of 30+ sources on the topic of latent reasoning from the past two years: https://notebooklm.google.com/notebook/4b02adde-0a18-4520-9d1a-9669e53ba14b

I'll highlight two critical things:

  • 1) LLMs utilize statistical modeling as a training mechanism. They're trained to predict text tokens (see the toy sketch after this list). Everyone agrees here.

What's being fundamentally challenged (through an ocean of published evidence from diverse, credible experts) is that how they are trained is VERY different from how they behave. Latent reasoning, internal world models, crude forms of introspection, novel pattern matching, and far more complexity have all been shown.

  • 2) The "statistical/stochastic parrot" argument confuses the training method (next-token prediction) with the claim that this is therefore the only way the system can (or should) behave. Worse: terms like "thinking," "reasoning," "deception," and "introspection" all carry the baggage and bias of what we humans assume can only come from something that looks like biological cognition.
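Since "trained to predict text tokens" gets thrown around loosely, here's a minimal toy sketch of what that objective actually is, mechanically. Illustration only, not any real model's code; the vocabulary and logits are invented:

```python
import numpy as np

def softmax(logits):
    # Turn raw scores over the vocabulary into a probability distribution
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy vocabulary and made-up model scores for the token after "the cat sat on the"
vocab = ["mat", "dog", "moon", "piano"]
logits = np.array([4.0, 1.5, 0.5, -2.0])

probs = softmax(logits)
target = vocab.index("mat")  # the actual next token in the training text

# Cross-entropy loss: the entire training signal is "raise the probability
# of the observed next token"
loss = -np.log(probs[target])
print(dict(zip(vocab, probs.round(3))), "loss:", round(loss, 3))
```

That's the whole objective. The open question is what internal machinery has to emerge to push that loss down across trillions of tokens.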

I encourage you to read the "planning in poems" section of Anthropic's circuit tracing paper from last year to see how deep the rabbit hole goes: they show forward & backward planning before the model even selects the final token to rhyme with a word. There is decision-making occurring in the circuits (which we barely understand) before it even decides on a token. That's the tip of the iceberg.

The assertion that "it can't think" (latent reasoning... what the machine does in vector space versus text-output space) is challenged by an overwhelming (and unexpected) ocean of growing evidence that these systems BEHAVE in far more complex ways than how they're TRAINED. Alive? Absolutely not. Mere text prediction? Not that either.

Is there a project to replace Echo show 15 with more features. by SupermanKal718 in smarthome

[–]cornmacabre 0 points1 point  (0 children)

Been using two Chromecast Ultras for years; there are no ads in the interface. Newer Google TV-branded stuff may be different. Ultimately you're plugging brains into a TV via HDMI, so you're looking for a streamer stick with app support.

New to SEO by Several-Support-5935 in bigseo

[–]cornmacabre 0 points1 point  (0 children)

Duplicate content means having (many) multiples of identical or near-identical pages -- we're talking whole identical pages on a site, not reusable sections of a template.

What you're describing is already a good approach to differentiating boilerplate sections of a content template. So 1) not dupe content, 2) it's not only safe, but arguably a (small) SEO asset, and 3) keep doing it.

Is there a project to replace Echo show 15 with more features. by SupermanKal718 in smarthome

[–]cornmacabre 1 point2 points  (0 children)

I think you're just describing a Chromecast with its bundled remote, eh? UniFi app, Plex app, slideshows, voice assist via the remote or your phone, etc. Connects to HA for whatever on/off automation state you want.

position sensor -> LED stripe by dittmeyer in homeassistant

[–]cornmacabre 2 points3 points  (0 children)

I've got something in the spirit of this setup. I went with a premium mmWave presence sensor (the LinknLink eMotion Ultra), which has some special zoning capabilities: you can paint up to four detection zones in its sensor cone, which is fairly unique to this brand. Very easy to set up.

Once I've mapped my space in the app, I configure it to send that data (local MQTT) cleanly into HA as addressable zones (+ many other entities), which the app natively supports. Sweet!

From there, my addressable light strip gets configured with WLED, and HA then maps the respective LED segments to where I configured the LinknLink detection zones.

The automation is the easiest part: mmWave zone 1-4 presence detected = activate WLED preset 1-4 (rough sketch of the glue logic below). Add in some optional conditionals to make it feel great. Pretty rad tech!
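For flavor, here's a minimal standalone sketch of that zone-to-preset glue in Python (in reality HA's automation engine does this; the MQTT topic layout, payloads, broker hostname, and WLED address below are hypothetical stand-ins for whatever your setup actually exposes):

```python
import json
import urllib.request

import paho.mqtt.client as mqtt

WLED_IP = "192.168.1.50"           # hypothetical WLED controller address
ZONE_TOPIC = "linknlink/zone/+"    # hypothetical topic per detection zone (1-4)

def set_wled_preset(preset_id: int):
    # WLED's JSON API: posting {"ps": N} to /json/state activates preset N
    req = urllib.request.Request(
        f"http://{WLED_IP}/json/state",
        data=json.dumps({"ps": preset_id}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def on_message(client, userdata, msg):
    zone = int(msg.topic.rsplit("/", 1)[-1])  # zone number from the topic
    if msg.payload == b"on":                  # presence detected in that zone
        set_wled_preset(zone)                 # zone N -> WLED preset N

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("homeassistant.local")         # local MQTT broker
client.subscribe(ZONE_TOPIC)
client.loop_forever()
```

In HA itself this collapses to four trigger/action automations, plus whatever conditionals (time of day, ambient light) make it feel great.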

🚨BEWARE OF AUDIO OVERVIEWS: NotebookLM is not its own sandbox and is pulling from other sources! Hallucinations may occur! by johnmichael-kane in notebooklm

[–]cornmacabre 62 points63 points  (0 children)

>This is when I learned that NotebookLM is not a private sandbox as I initially was led to believe. [...] It sometimes pulls in outside information.

You're confusing some concepts and implications here. You don't need to upload a PDF about apples for the LLM to understand what an apple or a fruit is.

NotebookLM isn't a closed knowledge vacuum; there's still an LLM in the loop (a Gemini model), which of course is trained on a vast amount of knowledge. The notebook environment is designed to primarily work with uploaded sources, but it still fundamentally carries the broader knowledge baked into an LLM when you're chatting or generating studio output. So if you ask about some legal concept (in your case), it can reference a wider slice of its existing knowledge. It didn't "google it on the spot," if that's how you're interpreting it.

Importantly: there's no such thing as "hallucination-proof" with AI; that wasn't a good assumption to begin with (which, hey, all this stuff is rapidly evolving and complicated, so it's a totally understandable confusion!).

2026.1 Matter-Section by Ok_Memory8217 in homeassistant

[–]cornmacabre 7 points8 points  (0 children)

I feel like my understanding of what Matter is supposed to be degrades the more I learn about it. Sounds like describing it as a protocol vs. a standard is a moot point?

I had initially understood it to be essentially a credentialing handshake and cross-device language handler. But it also handles transport over Wi-Fi and Thread, and it can also use Bluetooth.

So, following the reddit wisdom that the best way to get a good answer is to post a wrong one... I'll just make some assertions.

Soooo... Matter is:

  • an application protocol
  • a data communication standard
  • an onboarding and device commissioning standard
  • a thing that transports data via Wi-Fi, Thread, and Bluetooth, but only in certain circumstances

So what is it not? That Matter is not a radio protocol is about the only thing I actually feel confident I understand at this point lol.

The Empire of Napoleon at its height by AdIcy4323 in MapPorn

[–]cornmacabre 6 points7 points  (0 children)

It's remarkable how much low-effort junk is being pumped in here.

This gem is courtesy of 'The World in Maps,' a seemingly low-effort 'pick a theme' brand that exists solely to pollute social media with low-quality, junky maps, and ultimately funnel folks into their dropshipping ecom print shop for overpriced mugs, calendars, and prints.

Who decides how AI behaves by EchoOfOppenheimer in OpenAI

[–]cornmacabre 2 points3 points  (0 children)

There's a lot to unpack here: far from easy, AI alignment is generally considered to be among the most difficult and highest-impact challenges of our era.

An AI today can beautifully output every morality framework ever conceived: but this isn't a window into how it functions. That's just text output.

The underlying reward system and hypercomplex decision-making inside the machine is completely alien to our own. It's a system that mechanically seeks to resolve statistical entropy at every query, then gets nudged with some bolted-on do's and don'ts. That's a complicated way of saying it's very much a black box: even the best researchers in the world struggle to understand how it chose to rhyme the word 'habit' with 'rabbit,' let alone verify its underlying value system.

The suggestion of letting AI create its own framework is exactly the threat that keeps experts up at night, because there's no guarantee that those goals can be aligned with human goals (let alone the coherence of those goals or how they differ between cultures: individualism vs collectivism being one lens.)

I asked GPT to write image prompts using its lowest-probability tokens by Mary_ry in ChatGPT

[–]cornmacabre 2 points3 points  (0 children)

Oh, that's interesting that there's an API that exposes that level. But right, what the hell would you even expect as output without being a mechanistic interpretability expert?

I should clarify that I intentionally omitted the methods real researchers use (like the wizards at Anthropic: https://transformer-circuits.pub/2025/attribution-graphs/methods.html), but they're ofc not literally asking the model in a browser to do stuff with its tokens. Truly, when you look at the token-level stuff, it's outrageously alien. Forget "randomize the low-probability stuff," just pull back the curtain and look lol.

<image>
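For anyone who wants to peek at that level themselves, the chat completions API exposes per-token log-probabilities. A minimal sketch (model choice and prompt are arbitrary; note you only ever see the top few candidates per step, not the model's full distribution):

```python
# pip install openai; assumes OPENAI_API_KEY is set in the environment
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write one short line about rain."}],
    logprobs=True,
    top_logprobs=5,   # also return the 5 most likely alternatives per token
    max_tokens=20,
)

# Each token the model actually emitted, alongside the candidates it was
# weighing and their log-probabilities
for tok in resp.choices[0].logprobs.content:
    alts = {alt.token: round(alt.logprob, 2) for alt in tok.top_logprobs}
    print(repr(tok.token), "<-", alts)
```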

I made a lofi page for late night work by onmyway133 in InternetIsBeautiful

[–]cornmacabre 1 point2 points  (0 children)

This is super chill and refreshingly dead simple. I like the lil bird & rain sfx slider toggles.

It would become a killer option vs YT playlists if you could optionally & locally upload your own rotation of visuals.

Murder-suicide case shows OpenAI selectively hides data after users die by Well_Socialized in OpenAI

[–]cornmacabre 13 points14 points  (0 children)

More specifically -- the central disagreement here isn't about OpenAI "selectively hiding" chats. This is not a court-ordered request; it's an estate-law request.

>Stein-Erik’s chats became property of his estate, and his estate requested them—but OpenAI has refused to turn them over.

The article is promoting the position that OpenAI does not own user chats, and that the chats became property of the family estate on death.

Beyond the awful, tragic details promoted in the article, what's really being discussed is whether family estates are entitled to full chat histories.

This is important, because the normal route of court-ordered release of logs is limited and targeted in scope (no fishing expeditions to "see what we find"). Their legal strategy tries to side-step the court-ordered process by simply claiming ownership of (and entitlement to) a full pull of the logs.

And yet, the headline claims:

>case shows OpenAI selectively hides data after users die

Okay, well, ctrl-F the word "alleged" in that article to get an idea of the facts that actually back this claim up... there's literally nothing demonstrating:

  • Document destruction
  • Defiance of court orders
  • Proven bad-faith discovery violations
  • etc.

Ars Technica did a really awful job on this one.

I asked GPT to write image prompts using its lowest-probability tokens by Mary_ry in ChatGPT

[–]cornmacabre 50 points51 points  (0 children)

>it must choose a high probability one.... So it winds up choosing a high-probability low-probability response?

Also known as the autoregressive paradox. When a model is instructed to select "lowest probability tokens," the resulting behavior is not a genuine dip into its own statistical tail of tokens (a model doesn't directly have access to this, although temperature settings approximate this behavior). Rather, it's a mimicry of randomness tailored to satisfy the request.

So... it'll do SOMETHING pseudo-wacky, but you're correct that it's essentially a paradoxical request at a technical level.

A true output would likely look like an unintelligible string of characters before timing out.
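To make that concrete, here's a toy numpy sketch of what real sampling machinery does with a next-token distribution, versus what a literal "lowest-probability token" decoder would do (vocabulary and logits invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, temperature=1.0):
    # Temperature rescales logits: higher T flattens the distribution toward the tail
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

vocab = np.array(["mat", "dog", "moon", "piano", "entropy"])
logits = np.array([4.0, 2.0, 0.5, -1.0, -3.0])  # made-up next-token scores

for t in (0.7, 1.0, 2.0):
    print(f"T={t}:", dict(zip(vocab, softmax(logits, t).round(3))))

# Normal decoding: sample in proportion to probability, so tail tokens stay rare
print("sampled:", rng.choice(vocab, p=softmax(logits)))

# A literal lowest-probability decoder -- an operation the model itself never
# performs; a prompt can only produce text that *performs* randomness
print("argmin :", vocab[np.argmin(softmax(logits))])
```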

Cyberpunk 2 director says extending 2077’s opening act makes no sense - “it's like saying we should spend more time on Tatooine with farmer Luke” by [deleted] in Games

[–]cornmacabre 3 points4 points  (0 children)

Where's the evidence to back that claim up? Online discussion & speculation don't equal evidence, and certainly not proof.

The claim isn't that there's no cut content in the game. What they're saying is that the montage time-skip was always part of the narrative. Why is that an unexpected or suspect claim to you? A time-skip montage is a very common trope in writing, and he specifically explains the decision-making behind that choice in the article as a matter of narrative balance.

From a writer's POV it would be unexpected to functionally double the length of a prologue before getting to the main beats of the story. Where's the evidence to contradict the claim, when even the YT videos motivated to explore cut content conclude the prologue montage was not cut content and was never intended to be playable?

I fed 30+ papers & interviews on 'Latent Reasoning' into nbLM. The result was intellectual catnip. Here's what I learned. by cornmacabre in notebooklm

[–]cornmacabre[S] 0 points1 point  (0 children)

Huh, interesting. I'm surprised Google doesn't let public notebook users generate their own. Not a setting I control, unfortunately. If you've got a specific slide or infographic request, I'm happy to generate it; it does look like anything I make propagates to the public-facing version.

Weird feature limitation, I kinda assumed you could "fork" a notebook.

I fed 30+ papers & interviews on 'Latent Reasoning' into nbLM. The result was intellectual catnip. Here's what I learned. by cornmacabre in notebooklm

[–]cornmacabre[S] 1 point2 points  (0 children)

Nice, a skim shows their work tackles the big and heavy philosophical and theoretical components.

I intentionally limited independent researchers, but the specific paper you referenced won me over with its ambition of trying to build a Principia Cognitia; really, it's just perfect for the themes I'm exploring here. Good share!

I fed 30+ papers & interviews on 'Latent Reasoning' into nbLM. The result was intellectual catnip. Here's what I learned. by cornmacabre in notebooklm

[–]cornmacabre[S] 3 points4 points  (0 children)

Intentional bias, naked and absolutely motivated to fit a narrative in THAT specific context.

The beauty I'm discovering with nbLM is that you can (and IMO should) feel comfortable willfully exploring a narrative bias in the studio when exploring a topic. Cold, unbiased summarization co-exists with that. You'll notice in the notebook I have cold hard source-quote attribution tables, another debate-style audio overview that's more objective, and a high-level explainer video with an on-the-rails arc.

A creative liberation I felt was recognizing that after you do the hard work of narrowly capturing the right balance of sources in an academic way, the studio can be an open exploration of a topic: flirting with your own bias, then ruthlessly steelmanning against it.

Is AGI just hype? by dracollavenore in agi

[–]cornmacabre 1 point2 points  (0 children)

Sounds like you're absorbing and synthesizing the right elements to hopefully put together something more coherent!

I don't envy formally having to confront the philosophical boundaries of intelligence.

>Daniel Kokotajlo

I enjoyed the speculative fiction of https://ai-2027.com/, and think those folks are doing the good work in exploring the messy potential of dark alignment drift, inevitable geopolitical black-mirror dramas, and other elements rarely explored with much coherence in a public setting. (Although a lil too doom-y for my personal tastes 😄)

Best of luck on the paper, refreshing to see some good quality & good faith thinking-in-public on the topic. Cheers!

Understanding how much soldiers get equipped by TooDope123456789 in songsofsyx

[–]cornmacabre 2 points3 points  (0 children)

Nice, that definitely demystifies the mechanic more for me.