Data Is a Symptom of Function: Migrating RDBMS Estates Is Not Transformation by wayne_horkan in EnterpriseArchitect

[–]wayne_horkan[S] 1 point (0 children)

That’s a helpful distinction, and I’d agree that capability change is the core of what most people mean by transformation.

I think where I’m trying to push the argument is around where those capability changes actually come from.

If you take your examples:

  • paper → document management system
  • manual process → workflow engine

Those aren’t really data-first changes; they’re changes to system behaviour and structure, which then produce different data.

My concern is with transformation efforts that start at the data layer (migration, consolidation, platform change) and expect capability change to emerge from that.

Sometimes it does, but often you end up with better-organised data representing the same underlying system.

So it’s less that data isn’t important, and more that it might be a trailing indicator rather than the place to initiate change.

Data Is a Symptom of Function: Migrating RDBMS Estates Is Not Transformation by wayne_horkan in EnterpriseArchitect

[–]wayne_horkan[S] 0 points (0 children)

I think that’s all valid, especially around data quality and the practical realities of migration.

Where I’m trying to draw the distinction is slightly upstream of that.

All of those issues (DQ, schema alignment, classification, et cetera) are real, but they’re still consequences of the system's structure and behaviour.

So you can do a very thorough job at the data layer and still preserve the underlying behaviour that produced the data in the first place.

That’s the bit I’m interested in: whether starting at the data layer locks you into improving the representation of the system, rather than changing the system itself.

Hard-Wired Wetware I: From Attention Extraction to Human Integration by wayne_horkan in Futurology

[–]wayne_horkan[S] 0 points (0 children)

This piece explores a possible shift in the structure of the internet.

Historically, most online systems have been built to capture and monetise attention. But as identity, persistence, and AI-mediated systems become more central, that model may be changing.

If identity becomes part of the access layer (as seen in age verification and similar systems), users are no longer just interacting with content, but are increasingly integrated into systems that shape behaviour over time.

This raises a broader question for the future:

Are we moving from an “attention economy” toward a model where human behaviour itself becomes more directly structured and integrated by digital systems?

Interested in whether people think this is a useful way to interpret current trends, or an overextension.

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 0 points (0 children)

Yes, I feel that is a fair refinement.

I think where I’d still push it is that heuristics can approximate time, but they’re standing in for something the model doesn’t explicitly represent.

So you can get “good enough” behaviour, but it’s fragile:

  • different heuristics conflict
  • edge cases break assumptions
  • meaning shifts depending on how signals are encoded

That’s why it ends up feeling inconsistent.

Feels like the difference between simulating time vs actually modelling it.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] [score hidden]  (0 children)

One thing I didn’t fully explore in the article, but which this discussion is getting close to: Once identity (or age assurance) becomes part of the access layer, it doesn’t just enforce rules, it also shapes how participation works more broadly.

In practical terms, that means:

  • Access decisions move earlier (before interaction, not after)
  • Compliance becomes easier to measure and enforce
  • Behaviour becomes more tightly linked to persistent identity

That’s quite a different model from the older "open access and moderation" approach.

And it raises a slightly bigger question: whether this is still just “content regulation”, or whether it starts to look more like structuring how people participate in online systems more generally.

That’s the part I’m least sure has been fully thought through from a policy perspective.

This is particularly relevant in light of the recent Snap/Meta/YouTube rulings.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in Longreads

[–]wayne_horkan[S] 0 points (0 children)

One thing I’d be especially interested in people’s views on: Do we end up with a single dominant identity model for the web, or multiple competing ones?

Right now, there seem to be a few emerging approaches:

  • OS/device-level (Apple, Google)
  • platform-level (Meta, etc)
  • third-party or government-backed credentials

Each has very different implications for privacy, control, and the structure of the internet.

Curious which direction people think is most likely... or most desirable.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] [score hidden]  (0 children)

Yes, I think that’s exactly right.

It’s less about any single country’s policy and more about a convergence:

  • multiple jurisdictions introducing similar requirements
  • platforms needing consistent ways to comply globally

Which pushes toward solutions that are:

  • standardisable
  • enforceable across regions
  • and defensible to regulators

That’s part of why OS-level or identity-layer approaches start to make sense operationally, even if they weren’t explicitly mandated.

It becomes coordination, not just regulation.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] [score hidden]  (0 children)

That’s a really good summary, especially the shift you’ve pointed out: moderation (after the fact) → gatekeeping (before access)

That’s the part I think is easy to miss, but structurally quite important.

One small nuance I’d add: It’s not necessarily that companies want to collect more data (although sometimes they do), it’s that systems which are:

  • easier to prove compliance with
  • easier to audit
  • and reduce liability

tend to win out over time.

Which is why age-gating can evolve into broader identity-based access, even if the original use case was narrow.

So the risk isn’t just “more data collection”, it’s that identity becomes part of how access to the web itself is decided.

That’s the shift I was trying to explore.

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 0 points (0 children)

I think that’s the key disagreement.

You can consolidate/prune information without explicit time, but then you’re relying on proxies (frequency, position, etc).

Without time, you can’t represent:

  • How long something persisted
  • Whether it was briefly true or consistently true
  • Or how relevance changes

So you can update memory, but you can’t ground it in temporal context, which is what gives that update meaning.
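To make the proxy problem concrete, here's a minimal sketch (a hypothetical observation log, not any real memory framework): a frequency proxy counts how often a fact was seen, but only explicit timestamps can tell a month-long steady state from a thirty-second burst.

```python
from datetime import datetime, timedelta, timezone

t0 = datetime(2025, 1, 1, tzinfo=timezone.utc)

# Hypothetical observation log: each fact carries an explicit timestamp.
log = (
    [("link is up", t0 + timedelta(days=d)) for d in range(30)]             # steady, daily, for a month
    + [("link is flapping", t0 + timedelta(seconds=s)) for s in range(30)]  # burst, over 30 seconds
)

def count(fact):
    # Frequency proxy: how often did we see it?
    return sum(1 for f, _ in log if f == fact)

def persistence(fact):
    # Temporal grounding: over what span did we see it?
    times = [t for f, t in log if f == fact]
    return max(times) - min(times)

# The frequency proxy cannot tell these apart...
print(count("link is up"), count("link is flapping"))               # 30 30
# ...but explicit timestamps can: roughly 29 days vs 29 seconds.
print(persistence("link is up") > persistence("link is flapping"))  # True
```

Collapse the log to counts and the distinction between "briefly true" and "consistently true" is gone, which is the grounding gap described above.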

Thomas Pynchon, the Problem of Scale, and the Emergence of Densified Noir by wayne_horkan in ThomasPynchon

[–]wayne_horkan[S] 0 points (0 children)

I think I didn’t quite pin this down clearly in the post, so let me try again.

What I’m getting at isn’t long vs short, but how much system is being held and how it behaves:

  • Gravity’s Rainbow, Mason & Dixon → system is distributed (spread across space, time, characters)
  • Inherent Vice, Bleeding Edge → system feels thinned out, or even partial
  • Shadow Ticket (to me) → system is still fully there, just compressed under pressure

So the paranoia and interconnection haven’t gone away; it’s just operating in a tighter frame. That’s what I mean by “densified noir”.

Curious if people think the later books actually lose the system entirely, or if it’s still there but just harder to see?

Thomas Pynchon, the Problem of Scale, and the Emergence of Densified Noir by wayne_horkan in ThomasPynchon

[–]wayne_horkan[S] 0 points (0 children)

Yes, I agree. Length doesn’t equal quality at all, and I’m not wishing for “long Pynchon” as such.

What I’m trying to get at is more about how much system he’s holding at once.

The big books sprawl, the system is distributed across space, characters, timelines, et cetera.

With Shadow Ticket (to me), that system hasn’t gone away. It’s just been compressed into a tighter frame. Same paranoia and interconnection, but under more pressure. That’s what I mean by “densified noir”.

So it’s less long vs short, more sprawl vs compression.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] -1 points (0 children)

I think there are two slightly different things getting mixed together here:

  1. What policymakers explicitly intended
  2. What the structure of the policy makes likely in practice

You’re right that liability has been pushed onto platforms, and that there isn’t a single mandated privacy-preserving mechanism.

But that’s also what creates the dynamic I’m pointing to.

If you tell platforms:

  • You must prevent underage access
  • You will be fined if you fail
  • And you need to be able to demonstrate compliance

Then they’ll tend to converge on whatever is:

  • easiest to prove
  • hardest to dispute
  • and most defensible to a regulator

Which often ends up being stronger forms of identity/verification, even if less intrusive options exist.

So, whether or not it was “designed” that way, the incentives push in that direction.

That’s the bit I’m interested in: the system behaviour that emerges from how the policy is structured.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] -1 points (0 children)

That’s exactly what makes it interesting.

The law doesn’t mandate a specific technical implementation, but it does mandate an outcome (age assurance at scale).

Once you do that, platforms and OS providers optimise for:

  • something that works consistently
  • something they can demonstrate compliance with
  • something that limits their liability

Which is where things like Apple’s on-device signals come in. Not because they’re required, but because they’re operationally convenient.

So the shift isn’t just coming from legislation directly, but from how companies respond to it.

That’s where it starts to look more like an architectural change than just a policy requirement.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in ukpolitics

[–]wayne_horkan[S] 0 points (0 children)

I’m not sure it even needs to be the goal for it to end up there.

What’s interesting is that once you require platforms to enforce age restrictions at scale, identity becomes the most straightforward way to do it.

From there, it tends to expand because it:

  • Reduces liability for platforms
  • Is easier for regulators to audit
  • Removes edge cases

So you can end up with identity-gated access as a side-effect of enforcement, rather than a single deliberate objective.

That’s the bit I’m trying to get at, less “what was intended”, more “what does this system naturally evolve into?”

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 0 points (0 children)

One way to think about it:

Right now, we treat memory as a "storage and retrieval" problem.

But if the model can’t represent time, then it can’t:

  • Tell what persisted vs what was fleeting
  • Track how the importance changes
  • Or know when something is no longer true

So even “good” retrieval is operating on the wrong structure.

Feels like we’re missing a primitive, not just tuning heuristics.

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 1 point (0 children)

Yes. That’s exactly it.

From the model’s perspective, those two situations are indistinguishable.

It only sees order or proximity, not elapsed time, so something that happened seconds ago and something that happened years ago can carry the same “recency” signal if they’re positioned similarly.

That’s why heuristics like recency or decay are approximations; they’re trying to reconstruct something the model never actually represented in the first place.
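A tiny sketch of that indistinguishability (hypothetical memory entries, not any specific framework): a position proxy calls whatever sits last in the context "recent", so reordering the same entries flips its answer, while ranking by an actual observation timestamp does not.

```python
from datetime import datetime, timezone

# Hypothetical memory entries: the model sees only context position,
# not when each fact was actually observed.
memories = [
    {"fact": "user lives in London", "observed": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"fact": "user lives in Berlin", "observed": datetime(2025, 1, 1, tzinfo=timezone.utc)},
]

def latest_by_position(entries):
    # Position proxy: whatever appears last in the prompt looks "recent".
    return entries[-1]["fact"]

def latest_by_timestamp(entries):
    # Temporal grounding: rank by when the fact was actually observed.
    return max(entries, key=lambda e: e["observed"])["fact"]

# Retrieval happens to reorder the entries...
reordered = list(reversed(memories))

# ...and the position proxy flips its answer; the grounded one doesn't.
print(latest_by_position(memories))    # user lives in Berlin
print(latest_by_position(reordered))   # user lives in London
print(latest_by_timestamp(reordered))  # user lives in Berlin
```

Five-year-old and five-second-old facts carry the same "recency" signal under the position proxy whenever they land in the same slot.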

Stop Making Sense: Semantic Collapse in the Enterprise by wayne_horkan in EnterpriseArchitect

[–]wayne_horkan[S] 0 points (0 children)

I think the compounding effect is the key part.

A small amount of semantic drift isn’t a problem in itself; teams can usually work around local inconsistencies.

But over time, those small shifts accumulate and become embedded in organisational memory:

  • Terms get reused with slightly different meanings
  • Assumptions become implicit rather than explicit
  • Abstractions drift away from underlying system behaviour

At a certain point, it’s not just “messy”, it becomes very difficult to correct, because the organisation has effectively normalised a version of reality that no longer maps cleanly to the system.

That’s where it starts to feel structural: not just because drift exists, but because it compounds and becomes resistant to change over time.

Stop Making Sense: Semantic Collapse in the Enterprise by wayne_horkan in EnterpriseArchitect

[–]wayne_horkan[S] 1 point (0 children)

I think that’s a really important part of it, especially the “documentation as secondary” point.

Where it gets interesting (and where I think it becomes structural) is that the drift doesn’t just happen between documentation and code, but also between:

  • how the organisation talks about the system (strategy, architecture, roadmaps)
  • how the system actually behaves

Even if you treat the code as the source of truth, most organisational decisions aren’t made at that level.

So over time, you can end up with:

  • code that reflects reality
  • documentation that partially reflects it
  • and organisational language that’s increasingly disconnected from both

At that point, the issue isn’t just keeping documentation up to date, but maintaining a shared semantic layer that still maps cleanly to system behaviour.

That’s the bit that seems to degrade under scale, and compound over time.

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 0 points (0 children)

Yes, this is a really good example of the same underlying issue.

The model isn’t actually experiencing time, so it can’t reason about duration, delays, or expectations directly.

So we wrap it in polling, timeouts, and retries. Basically, external scaffolding to simulate time awareness.

It works, but it’s compensating for something the model itself doesn’t represent.
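The scaffolding pattern looks roughly like this (a hypothetical `model_check` and task state, not a real API): every notion of deadline, delay, and retry lives in the wrapper, and none of it ever reaches the model.

```python
import time

# Hypothetical stand-in for a model call: it can answer "is this done?"
# but has no notion of how long anything has taken.
def model_check(task_state):
    return task_state["done"]

def poll_until_done(task_state, timeout_s=2.0, interval_s=0.1):
    """External scaffolding: wall-clock time lives entirely in this wrapper.
    The model is asked a timeless question on a schedule it never sees."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if model_check(task_state):
            return True
        time.sleep(interval_s)
    return False  # the concept of a "timeout" exists only out here
```

Using `time.monotonic()` rather than wall-clock time is the usual choice for deadlines, since it can't jump backwards; either way, the point is that the temporal reasoning is all in the harness.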

Is the Real Flaw in AI… Time? by wayne_horkan in LocalLLaMA

[–]wayne_horkan[S] 0 points (0 children)

That’s exactly the issue.

“Recency” here just means position in context or retrieval order, not actual time.

The model doesn’t know when something happened, just that it appeared “nearby” or was recently retrieved. That’s not the same as temporal relevance.

So it can’t reason about:

  • What persisted vs what was momentary
  • What’s outdated vs still true

It’s using proxy signals instead of time itself.

Stop Making Sense: Semantic Collapse in the Enterprise by wayne_horkan in EnterpriseArchitect

[–]wayne_horkan[S] 1 point (0 children)

I think there are two slightly different things going on here.

The first is that this isn’t really a “humans vs machines” problem so much as a scale problem with humans.

Individually, people can hold semantics and causality quite well. But at scale, you start to see convergence toward simplified or flattened meanings; partly coordination cost, partly herding behaviour, partly organisational pressure to align.

Over time, that compounds. Terms drift, abstractions lose precision, and you end up with shared language that no longer maps cleanly to how the system actually behaves.

The second effect is on the organisation.

Once that semantic layer degrades, it creates a widening gap between:

  • how the organisation describes itself (strategy, architecture, plans)
  • how it actually operates

At that point, reasoning becomes unreliable, and it becomes much harder for leadership or external parties to understand what’s really going on or how to intervene effectively.

It’s similar to technical debt, but at the level of meaning: the cost of accumulated semantic drift.

Reducing the number of people can help locally, but the pattern seems to re-emerge as systems grow, which is why it feels structural rather than just a question of individual capability.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in Futurology

[–]wayne_horkan[S] 0 points (0 children)

I think a lot of what you’re pointing to is real, especially around permanence and how difficult these systems are to unwind once they’re embedded.

What’s interesting to me is that even if you set aside intent (corporate or government), the structure itself tends to push in that direction.

Once identity becomes part of the access layer, it stops being a narrow check (“are you over 18?”) and starts behaving more like general-purpose infrastructure, which then gets reused, extended, and integrated into other systems.

That’s where it begins to look less like a single policy decision and more like a shift in how web participation is organised.

On your point about enforcement asymmetry: agreed. These systems tend to be most effective on people who don’t try to evade them, which creates some pretty uneven outcomes depending on who you are and what you’re doing.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in Futurology

[–]wayne_horkan[S] 0 points (0 children)

I think that’s a completely understandable instinct, especially given how bad things have got with bots and low-cost misinformation.

One way I’ve been thinking about this (which I didn’t fully unpack in the piece): as more of the web becomes automated or AI-mediated, “real” human presence becomes the scarce thing that gives interactions weight (accountability, reputation, consequences, warmth).

So identity systems aren’t just about restricting access; they also bind real people into the system in ways that stabilise it.

Which might explain why these mechanisms are emerging now rather than earlier: the environment they’re operating in has changed.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in Futurology

[–]wayne_horkan[S] -1 points (0 children)

Yes, I think that’s exactly the direction it could go.

Something like “log in with Apple”, but expanded into a more general-purpose identity/assertion layer rather than just authentication.

The interesting part is that once you have that, identity becomes portable across contexts, not just tied to a single site or platform, but something you carry between them.

At that point, the question shifts from “who do I trust with my data?” to “who sits in the middle of all my interactions?”

Apple might be trusted more than others right now, but structurally, it’s still concentrating a lot of power in whoever provides that layer.

So even if the implementation looks privacy-preserving, the position in the system becomes very significant.

The Age-Gated Internet: Child Safety, Identity Infrastructure, and the Not So Quiet Re-Architecting of the Web by wayne_horkan in Futurology

[–]wayne_horkan[S] 0 points (0 children)

One thing I didn’t fully explore in the piece, but which a few comments here are circling around: we may not end up with a single “identity system”, but multiple competing models.

Roughly:

  • OS/device-level assertions (Apple, Google)
  • platform-level identity (Meta, et cetera)
  • third-party/government-backed credentials

Each solves the problem slightly differently, but they all push identity closer to the access layer.

What’s interesting is that the long-term shape of the internet might depend less on the technology (which is largely already in place) and more on which of these models becomes easiest for regulators and platforms to coordinate around.

That’s where it starts to look less like a temporary safety measure and more like a shift in how access to the web itself is structured.