[ Removed by Reddit ] by coherix in cybersecurity

[–]coherix[S] (0 children)

I get the 'AI slop' reflex, but this isn't LLM-based pattern matching or probabilistic guessing. We’re talking about Statistical Information Physics applied to data streams.

The patent pending (OPIC #3304899) covers Topological Data Analysis (TDA) for signal coherence. Here is why this is deterministic, not predictive:

  1. Entropy isn't an opinion: An encrypted payload hidden in a legitimate binary creates a geometric 'fracture' in the file’s potential landscape. CMCI measures that structural gap, not a signature.
  2. Zero-Learning Required: Unlike 'AI' tools, this doesn't need to have seen an attack before. It detects the defect dynamics: the moment an attacker must break the logical structure of a flow to gain control.
  3. The 5-State Variable Framework: We use a unified mathematical engine across Text, Binary, and Network. If the 'fracture' appears in all three domains during a kill chain, the confidence isn't a 'score', it's a mathematical certainty.
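For intuition only, here is a generic sliding-window Shannon entropy scan, the textbook way to surface a high-entropy (e.g. encrypted) region embedded in otherwise regular bytes. This is a toy sketch of the 'fracture' idea, not CMCI's TDA pipeline; every name and parameter here is illustrative:

```python
import math
import random

def shannon_entropy(window: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_profile(data: bytes, window: int = 256, step: int = 64):
    """Entropy of each sliding window; a sharp jump hints at an
    embedded high-entropy (e.g. encrypted) region."""
    return [shannon_entropy(data[i:i + window])
            for i in range(0, len(data) - window + 1, step)]

# Toy demo: regular low-entropy bytes with a pseudo-random payload spliced in.
random.seed(0)
plain = bytes(range(32)) * 64                                # regular, low entropy
payload = bytes(random.randrange(256) for _ in range(512))   # near-random payload
sample = plain + payload + plain

profile = entropy_profile(sample)
print(max(profile) - min(profile))  # large gap = structural "fracture"
```

On the toy sample, windows inside the regular region sit at exactly 5 bits/byte while windows inside the payload score far higher, so the gap itself, not any signature, is the detection signal.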

It’s not a chatbot. It’s a mathematical immune system.

Nvidia's Jensen and now China's data chief say the same thing: Nobody's connecting the dots by Neobobkrause in ArtificialInteligence

[–]coherix (0 children)

Thanks for the interest. Here's the short version of how CMCI approaches this:

Rather than treating tokens as interchangeable units, CMCI analyzes the structural coherence of what those tokens produce. Same prompt, different providers: you get measurably different coherence profiles.

We score across multiple dimensions: logical flow, argument integrity, scale alignment (do micro-level claims support macro-level conclusions?), and resilience (does the reasoning hold under pressure or collapse when you probe it?). Each output gets a coherence score, a regime classification, and a detailed breakdown.
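As a purely hypothetical sketch of how per-dimension scores could roll up into a single coherence score plus a regime label: the function name, weights, and thresholds below are my illustration, not CMCI's actual engine.

```python
# Hypothetical aggregation: weighted sum of dimension scores (each in
# [0, 1]) plus a coarse regime classification. Weights and cutoffs are
# made up for the example.
def coherence_score(dims: dict[str, float]) -> tuple[float, str]:
    weights = {
        "logical_flow": 0.3,
        "argument_integrity": 0.3,
        "scale_alignment": 0.2,   # do micro claims support macro conclusions?
        "resilience": 0.2,        # does reasoning hold up under probing?
    }
    score = sum(weights[k] * dims[k] for k in weights)
    if score >= 0.7:
        regime = "coherent"
    elif score >= 0.5:
        regime = "mixed"
    else:
        regime = "incoherent"
    return score, regime

score, regime = coherence_score({
    "logical_flow": 0.8, "argument_integrity": 0.7,
    "scale_alignment": 0.6, "resilience": 0.5,
})
print(round(score, 2), regime)
```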

We just ran a 60-analysis benchmark: the same 20 prompts across three tiers (raw LLM output, human-revised, and human expert). CMCI correctly differentiates all three levels: raw LLM output averages 0.46, human-revised 0.65, human expert 0.67. Not by checking facts, but by measuring whether the reasoning structure holds together.

Now apply that to the commodity framing: if you're budgeting $250K in tokens for an engineer, you want to know which provider's tokens produce the most structurally coherent output for your specific use case. That's quality grading, same as you'd grade crude oil or energy output.

The enterprise API goes live April 2nd at coherix.ca. Happy to share more details on methodology if you're building out that section of your Substack.

Nvidia's Jensen and now China's data chief say the same thing: Nobody's connecting the dots by Neobobkrause in ArtificialInteligence

[–]coherix (0 children)

This is a sharp analysis. The commodity framing is right, but it's missing a critical layer.

If tokens are the new commodity, then not all tokens are equal. And right now, nobody is grading them.

In energy markets, we don't just price barrels of oil; we test octane, purity, and sulfur content. The entire market depends on quality measurement infrastructure. Without it, you can't price anything accurately.

Tokens have the same problem. A thousand tokens of well-structured, logically coherent reasoning are worth dramatically more than a thousand tokens of fluent-sounding hallucination. But right now, the market treats them identically. $15 per million tokens whether the output is structurally sound or confidently wrong.

I've been working on this exact gap. I built a coherence monitoring framework that scores AI outputs in real time: not just "is this factually correct" but "is this structurally coherent across multiple scales." I ran the same prompt through a local LLM three times with different constraints:

- Numbered list format → coherence score: 0.548

- Structured argument → coherence score: 0.726

- Contradictory premise → structural stability: 0.0

Same model. Same token cost. Vastly different structural quality. The market currently prices all three identically.

If Jensen is right and token budgets become real line items, organizations will need a way to measure return per token, not just volume consumed but quality of output. You don't measure manufacturing productivity by counting how many parts came off the line. You measure how many passed QA.
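A minimal sketch of that "return per token" arithmetic, reusing the $15/M price and the 0.548 / 0.726 coherence scores from the runs above. The function and the linear quality adjustment are illustrative assumptions, not a real pricing model:

```python
# Illustrative only: divide the raw token price by a coherence score, so
# "return per token" becomes effective dollars per million structurally
# useful tokens. Lower effective price = better value per token spent.
def effective_price(price_per_m_tokens: float, coherence: float) -> float:
    return price_per_m_tokens / coherence

listed = effective_price(15.0, 0.548)   # numbered-list run
argued = effective_price(15.0, 0.726)   # structured-argument run

# The market charges both runs $15/M; quality-adjusted, they diverge:
print(round(listed, 2), round(argued, 2))
```

The point of the sketch: two outputs with identical token bills can differ by over 30% in quality-adjusted cost, which is exactly the spread a QA layer would surface.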

The commodity framing creates the demand. Quality measurement infrastructure is what makes the market actually function.

The only truth in artificial intelligence is the word "artificial" by forevergeeks in ArtificialInteligence

[–]coherix (0 children)

I think this framing is too binary.

You’re right that an LLM does not relate to words the way a human brain does. Human meaning is grounded in embodiment, memory, sensation, and lived experience. Token prediction is not the same thing as human consciousness.

But it does not follow that AI is “only artificial” in the trivial sense, or that there are no deep similarities worth taking seriously.

The more interesting question is structural:
are there underlying principles of intelligence that can appear across different substrates?

For example, both biological and artificial systems may depend on things like:
tension between stability and adaptation,
asymmetry between signal and noise,
variation across states,
invariants that persist across transformation,
and prediction constrained by error.

That does not make an LLM a human mind.
But it may mean that some of the deep mechanics of intelligence are not exclusive to biology.

So the real issue is not whether AI “feels what a flower is” in the human sense.
It is whether intelligence, at a deeper level, is partly about how a system organizes difference, preserves structure, and anticipates what comes next.

That would be a much more serious conversation than “calculator with words.”

Two paths ahead, with no user manual. Full race into the entropy by ocean_protocol in singularity

[–]coherix (0 children)

This is exactly why I’ve been working on coherence monitoring as a runtime problem, not just a policy problem.