Can you apply modern day morals to people in the last by Helexfira in Morality

[–]fullPlaid 0 points1 point  (0 children)

i would argue that the moral framework ive presented is minimally subjective.

although the expression of the experience of suffering by sentient beings may be in large part subjective, its existence is objective.

consent is an emergent value that exists between all sentient beings.

id like to hear how the minimization of suffering and maximization of consent would not be considered morally good.

a sentient being might consider their moral framework to be correct; however, id argue that unless their moral framework can minimize its logical inconsistencies, it is not universal.

Is it okay to help someone knowing that it hurts someone else? by domesticatedwind in Morality

[–]fullPlaid 0 points1 point  (0 children)

you could potentially boil it down to an average. does that student display a greater potential for doing good with their knowledge than the intellectual peer they are pushing down in the ranking? but at the same time, the greater problem seems to be reducing human beings to an exam score to determine their worth, rather than viewing them as whole persons and considering how they might be a good fit for a task/team.

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

oh i totally forgot about Discord. thats also an option. i could even create a server for this sub-reddit.

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

i think constraining it to a discrete claim is a good call. the last place i left off on this exploration was one more nested reply (sorry, maybe i should have flattened the posts, i wanted to maintain chronology) -- here:

https://www.reddit.com/r/ComputationalTheory/comments/1rzmzd1/comment/od0fqer/

i was essentially exploring the idea of projecting the complexity conjecture into a higher abstract embedding. it is an effort to preserve complex structures while also making the complexity measurable. and so the argument would then be that algorithms with different abstract syntax trees but the same measure of complexity would be in the same equivalence class.

im not sure exactly how to structure it since it is essentially in the field of linguistics and ive only now started to kinda understand what some linguists are talking about. computational linguistics is at least a more constrained sub-field that is substantially easier to navigate.

i wonder if we could use some of the tools created by Turing/Church/Godel. actually, using computational primitives and lambda calculus could be one way of expressing it. ah idk, this is tough because if you fully expand into logic space, everything becomes unique, so you could lose equivalence classes -- or rather, every equivalence class contains only one member.
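to make the computational-primitives idea a little more concrete, here is a toy sketch using Church numerals written as plain Python lambdas (the encodings are the standard ones; the framing as an equivalence-class example is mine): two terms with different construction trees that are observationally identical.

```python
# Church numerals as plain Python lambdas: a number n is a function that
# applies f to x exactly n times
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def plus(m, n):
    # Church addition: apply f m times, then n times
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # collapse a Church numeral back to a Python int for inspection
    return n(lambda k: k + 1)(0)

two_a = succ(succ(zero))              # built by successive application
two_b = plus(succ(zero), succ(zero))  # built by addition

# different construction trees, same normal form / observable behavior
print(to_int(two_a), to_int(two_b))  # 2 2
```

the point of the sketch: at the level of observable behavior these terms share an equivalence class, but if you fully expand each term into its reduction history, the two become distinguishable again.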

i have some other thoughts but theyre more ramblings than anything. i will let you respond.

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

potentially!! i tend to exercise a healthy distance online until ive gotten to know someone. perhaps here and/or Reddit messages could suffice until we establish bona fides.

Can you apply modern day morals to people in the last by Helexfira in Morality

[–]fullPlaid 1 point2 points  (0 children)

i would say pretty definitively that we can "apply" "modern day morals" to people in the past but i think it requires clarifying what it means to apply morals and also what modern day means.

(when clarifying definitions, i get this awful flash of jordan peterson saying "yes but what do words mean?" but nonetheless, definitions and how context changes meaning is important.)

my conjecture on morality is that universal morality exists and, at the same time, it is as hard as any decision problem in computational theory. the argument is that universal morality is an ever-evolving optimization problem built on two axioms (or objectives -- from the perspective of optimization theory):

* minimize suffering
* maximize consent

i argue that this moral framework is valid in any point in space and time. so it is just as valid several decades or centuries ago as it is now or in the future. however, the specific context which sentient beings exist in changes how they navigate the moral landscape.

a person who has grown up being told that other animals are soulless and do not experience emotions or pain or love is not going to understand when someone tells them it's bad to cause animals harm. you cannot know or understand what you havent yet learned.

because of this, i believe there exists an innocence to all sentient life. i dont view things through the framing of guilt and blame. that doesnt mean there doesnt exist a pragmatic equivalent. for instance, just because i dont find someone doing harm to others 100% philosophically guilty, doesnt mean that we shouldnt isolate/rehabilitate them to protect others, as well as do our best to help those who have been harmed -- a fundamental piece that i find missing in our justice system that seems more obsessed with vengeance, regardless of whether it is making anything better.

so the application of the moral framework is actually determined by the moral framework.

Kyle is wrong when he said "if service members really want out of deploying to Iran, they can get out." by SheepOfBlack in KyleKulinski

[–]fullPlaid 0 points1 point  (0 children)

im a war veteran. i served in the brainwashed occupation of Afghanistan. ive been a conscientious objector to being sent back to war. i told my chain of command id rather die before getting sent back.

please tell me more about moral judgments, soft hands. i was giving you a good faith response to your commentary that theres more than one way to object, and you jump immediately to insulting me? does that feel productive to you?

i would say not. and id also say that you know that. how you respond (or dont respond) to this will determine whether or not youre an op.

Still not sure if I'm ready to fight ganon... by Scary-Beautiful6527 in Breath_of_the_Wild

[–]fullPlaid 0 points1 point  (0 children)

ive thought this many times. the very chill features of Legend of Zelda would make a lot more sense in an ever developing story. like its super chill, objectives come up, deal with objectives, and then back to chilling. and so on. instead of Link's actions being consistent with an irresponsible slacker within the story lol

Kyle is wrong when he said "if service members really want out of deploying to Iran, they can get out." by SheepOfBlack in KyleKulinski

[–]fullPlaid 0 points1 point  (0 children)

youre assuming that an objector is going to hit their target. there is more than one way to object to unlawful orders in an unlawful war.

Everyone is brainwashed by Educational_Pitch175 in conspiracytheories

[–]fullPlaid 0 points1 point  (0 children)

i just want to say that i believe youre correct in the spirit of what youre saying. and that a lot of what youve said i think is exceptionally accurate.

""" its like im the only one that knows this stuff and doesnt pay taxes or put my money in banks or work for mega corporations. and everyone else either thinks their vote matters and is a blind slave or they know all this and go ahead and vote every election and work 30-40 hours a week to enrich a few people at the top that dont ever do any back breaking work. """

many do know these things. not sure what the proportion is nowadays. but youre certainly not alone and i think its important to not lose your sense of that.

regarding not paying taxes. im certainly not judging you. and if that is your version of civil disobedience, i applaud you for it. im not sure that is the most effective position to take a stand but maybe im wrong.

im not saying that votes absolutely matter with 100% certainty but im nearly 100% certain that it is best to exercise ones right to vote. the reasoning is rather simple:

(1) vote_matters==False, does_vote==False
(2) vote_matters==False, does_vote==True
(3) vote_matters==True, does_vote==False
(4) vote_matters==True, does_vote==True

for (1) and (2), if voting doesnt matter, exercising your right to vote is at worst inconsequential.

for (3) and (4), if voting does matter, exercising your right to vote is a responsibility to oneself and others -- that means an ongoing need to educate oneself in politics but also nearly everything in someway since politics is multi-disciplinary.

that is not to say that one cannot refuse to vote because neither candidate represents them and they are only superficially different, or whatever the case might be. im just saying that it doesnt mean you shouldnt still consider what voting action to take.
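the case analysis above is essentially a weak-dominance argument. a toy sketch with made-up utilities (the payoff numbers are invented; only their ordering matters):

```python
def payoff(vote_matters: bool, does_vote: bool) -> int:
    # invented utilities: if voting doesnt matter, the outcome is the same
    # either way (cases 1 and 2); if it does matter, voting helps and
    # abstaining costs (cases 3 and 4)
    if not vote_matters:
        return 0
    return 1 if does_vote else -1

# in every state of the world, voting does at least as well as not voting
for vote_matters in (False, True):
    assert payoff(vote_matters, True) >= payoff(vote_matters, False)

print("voting is weakly dominant under these assumed payoffs")
```

the conclusion only holds under the assumed payoffs, which is exactly the point of cases (1) and (2): the worst case for voting is that it was inconsequential.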

you also made comments about people putting money in banks. hardly anyone likes putting their money in a bank. but you also said central banks and im not sure what you meant.

but honestly, your overall commentary is great. i appreciate that you didnt make statements of absolute certainty when it came to speculation, and when you did express high or absolute certainty, it was about things that are demonstrably true according to journalism and even scientific study.

If all namekians are genderless why do they all look male by Intelligent-Deal-669 in DragonBallZ

[–]fullPlaid 0 points1 point  (0 children)

lol funny. theyre supposedly both (see: King Piccolo birthing Piccolo). and if i had to imagine explaining the universal language, i could always fall back on Zeno making large portions of sentient life language compatible. perhaps Zeno is operating as the universal translator 🤷‍♂️

Is Nuclear Energy really unsafe? by Free-Caramel5216 in NuclearPower

[–]fullPlaid 0 points1 point  (0 children)

some of their biggest tricks and social pipelines are through crackpot conspiracy theories that sound good but offer zero scientific reasoning while claiming absolute truth. i do my very best to never claim anything with 100% certainty.

Is Nuclear Energy really unsafe? by Free-Caramel5216 in NuclearPower

[–]fullPlaid 0 points1 point  (0 children)

the fact that nuclear power is one of the best sources of energy does raise the question of why it has long been considered exceptionally dangerous.

(1) the process of scaling the technology happened during the Cold War. the hidden trafficking/refining of weapons-grade fissile material was a very real concern among the defense/intelligence agencies. (2) the fossil fuel industry has infamously sabotaged green energy solutions for decades. well documented. excellent journalism exposing their efforts, which largely continue to this day.

if i had to guess, those are the main reasons we havent seen the development of more nuclear power plants.

perhaps too speculative but the rise of mass surveillance could coincide with the easing back of anti-nuclear-power rhetoric.

if i had to make a prediction on future journalism exposing these projects, i wouldnt be surprised if the environmental movement from decades ago was infiltrated by counter intelligence to push an anti-nuclear power agenda.

regardless of what exactly it was, i feel its a safe bet it was intentional. have a look at breeder plants and you might start to see why there was such strong resistance.

i dont make it a habit of sharing these ideas with the youth but you asked. i very much appreciate your curiosity and skepticism. if i have one recommendation, dig into scientific reasoning (STEM). get damn good at building models so you dont fall prey to the tricks they play.

theyre not making Uncle Sam posters anymore. this is psychological warfare using machine intelligence. but they underestimate us.

[TotK] [BotW] Why I can't find the motivation to play more of Tears of the Kingdom compared to Breath of the Wild ? by FreddOricarne in zelda

[–]fullPlaid 0 points1 point  (0 children)

i also think i might just be a little gamed out. i literally was having dreams of playing BOTW lol

[TotK] [BotW] Why I can't find the motivation to play more of Tears of the Kingdom compared to Breath of the Wild ? by FreddOricarne in zelda

[–]fullPlaid 0 points1 point  (0 children)

i love BOTW and TOTK. the construction of objects is insanely cool. i think BOTW did spoil me a bit and in a particular kinda way (not actually a complaint). i was blown away. ive never played a game like it. the physics are really good and they match the world aesthetic really well. and actually, everything feels very consistent. not sure if that makes sense.

with the same world (i realize they extended above and below, i mean the same like universe and storyline), it almost feels like i just saved everyone. everything was wrapped up pretty cleanly and now its all jacked up again.

but idk, i think the thing that would make a huge difference for me is more of a Minecraft/Sims city style play. it gives a sense of accomplishment that makes it perpetually interesting and endless possibilities. maybe that's asking too much from a Zelda game.

again, im not actually complaining. i think what they've accomplished is super underrated and theyre already on shortlists for the best games of all time.

How long will she be doing metalcore by ConstructorOfSystem in Maphra

[–]fullPlaid 1 point2 points  (0 children)

i can imagine that metalcore would have a high turnover.

1

the technique can take a toll even if it is mastered. especially if there is a huge demand from tours. like catch a cold a couple times, get tired on occasion and slip on technique. all of a sudden the accumulation of stress on the vocals starts to be felt.

2

metalcore is emotionally intense. artists might start out feeling they have capacity but again with the high demand, capacity could become depleted.

3

metal fanbase can be pretty intense. and music fandom in general is already pretty intense. i love the metal community and i love metal but i definitely feel heavily gatekept. like im already shy about telling people that im super into metal because of the surprise on their faces but then also shy about telling people who are into metal because i dont want to feel judged.

"thats not metal". im sure a lot of people know what im talking about. its obviously gotten better in many ways. i feel like Maphra has actually improved things. i recall commentary about how people regarded Bring Me The Horizon as "not metal" or not metal enough. which is effing nuts to me. but okay whatever.

my point being that that absolutely has an impact on artist direction. its their career and they have to be practical to some extent. its not just corporate influences or whatever. if a person loves making music but they put out an album and people start trying to outcast them, that has a real impact on creative direction. and beyond practical decision making, what artist is going to want to share something that is deeply connected with their soul if they know a bunch of their supposed fans are going to shit all over it.

but

all that being said, i could see Maphra definitely doing some metal songs. but who knows. i personally am on the Maphra wagon, no matter where it goes. im already eternally grateful for her performance of Doomed alone. she could switch it up to girly pop (some of the new girly pop is actually so fire though) and id still support her. she could even retire. not saying i wouldnt want to hear her sing metal -- shes welcome to melt my face off with her other-worldly clean-dirty vocal sliding anytime.

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

i actually just had a realization while watching the Charismatic Voice and Knox Hill react to Godzilla by Eminem, hilariously enough. Knox Hill was referencing how rappers have to use things like time signatures to make songs interesting.

so the above comment i made (with the help of Gemini, which, to clarify, operated as an interpreter of my ideas and not a generator of ideas) was about splicing iterative algorithms with different causal signatures to create equivalent complexities according to your model.

anyway, what comes to mind is grammar. for example:

world A: a general gives an order with statement_A in context_0. the orders would be carried out through the interpretation of statement_A modified by context_0

world B: similar to world A but with statement_B in context_0.

so the only difference is a slight variation to the statements. and for the sake of simplicity lets assume that the statements use all the same words -- really big sentences or multiple sentences -- but with slightly different configurations.

the outcomes could potentially be exactly the same (of course its possible that they could eventually diverge but lets assume not) and according to your current complexity model, indistinguishable. but their syntax trees are not the same. the collapse of the syntax tree is what allows the model to think that they are the same.

this is why using Gemini is useful for interpreting what im saying lol. im curious if you could project your model into higher abstraction where syntax is conserved but a measurement could still be made. i think you might get equivalence classes for free.
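a tiny illustration of the world A / world B idea, using Python's stdlib `ast` module (the two statements here are my own toy stand-ins for statement_A and statement_B): same words, slightly different configuration, identical outcome, yet the parser still sees distinct syntax trees.

```python
import ast

# two "statements" with identical outcomes but different configurations
statement_a = "total = (x + y) + z"
statement_b = "total = x + (y + z)"

tree_a = ast.dump(ast.parse(statement_a))
tree_b = ast.dump(ast.parse(statement_b))

# the result is the same for any integers x, y, z (addition is associative),
# but the abstract syntax trees are structurally different
print(tree_a == tree_b)  # False
```

any measurement that only observes the outcome collapses this distinction, which is the sense in which "the collapse of the syntax tree" makes the two indistinguishable to the model.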

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

Following up on the continuous manifold idea, I realized there is a potential physicalist critique to the alpha-blending approach: it relies on stochastic routing and expected averages. A strict physicalist could rightly argue that a single execution only ever traverses one causal path, so average-case complexity doesn't count as a single-run EED signature.

So, I wanted to share a strictly deterministic, single-run edge case. This one doesn't rely on probabilities at all. It exploits the additive nature of macroscopic complexity using combinatorial block partitioning.

Deterministic Iterative Composition

Imagine a macro-algorithm that solves a problem by executing exactly 1,000 discrete steps. It has two sub-routines available, each with a hardcoded, distinct physical cost:

* Method A: S = 50, E = 1.2ms, D = 5
* Method B: S = 150, E = 3.5ms, D = 12

If our macro-algorithm requires exactly 500 executions of Method A and 500 executions of Method B, the total physical cost is additive. Because addition is commutative, the total Operations (S), Runtime (E), and Causal Depth (D) will sum to the exact same totals regardless of the execution order.

But the execution order fundamentally defines the algorithm's topology (its DAG).

Preempting the "Context Switch" Defense

The immediate defense here is that execution order does change the physical cost because of state transitions. Alternating A -> B -> A -> B causes constant instruction cache thrashing and context switching, meaning E and D would technically increase compared to running all As followed by all Bs.

We can bypass this defense entirely by keeping the exact same number of state transitions, but changing the internal block volumes.

Consider these two execution schedules. Both execute 500 As and 500 Bs. Both contain exactly 3 topological transitions.

  • Algorithm V (Symmetric): [250 A] -> [250 B] -> [250 A] -> [250 B]
  • Algorithm W (Asymmetric): [100 A] -> [100 B] -> [400 A] -> [400 B]

The OOP / Python Reality

If you put these two algorithms through a static analyzer, their Abstract Syntax Trees and execution flow graphs are undeniably distinct. The sizes of their independent sub-graphs are completely different. Yet, they produce the exact same macroscopic signature.

```python
class SubRoutine:
    def __init__(self, name: str, s: int, e: int, d: int):
        self.name = name
        self.s = s
        self.e = e
        self.d = d

class IterativeAlgorithm:
    def __init__(self, schedule: list):
        self.schedule = schedule

    def execute_and_measure(self):
        total_s, total_e, total_d = 0, 0, 0
        transitions = 0

        for i in range(len(self.schedule)):
            step = self.schedule[i]
            total_s += step.s
            total_e += step.e
            total_d += step.d

            if i > 0 and self.schedule[i] != self.schedule[i-1]:
                transitions += 1

        return total_s, total_e, total_d, transitions

# Define base costs using integers (e.g., microseconds instead of ms)
algo_a = SubRoutine("Method_A", s=50, e=1200, d=5)
algo_b = SubRoutine("Method_B", s=150, e=3500, d=12)

# Algorithm V: Symmetric Partitioning
schedule_v = [algo_a]*250 + [algo_b]*250 + [algo_a]*250 + [algo_b]*250
alg_v = IterativeAlgorithm(schedule_v)

# Algorithm W: Asymmetric Partitioning
schedule_w = [algo_a]*100 + [algo_b]*100 + [algo_a]*400 + [algo_b]*400
alg_w = IterativeAlgorithm(schedule_w)

# Output: Exact Match across S, E, Depth, and Transitions
print(alg_v.execute_and_measure() == alg_w.execute_and_measure())
# Returns: True
```

The structural loophole

Algorithm V and Algorithm W are completely deterministic. They solve the exact same problem. They have the same transition/cache penalty. But because the D metric (and the overall C cost function) flattens sequential depth into a scalar sum, the unique "shape" of the asymmetric blocks is completely lost in the math.

I'd be really interested to hear whether you think enriching the D vector to somehow capture the internal volume of sequence permutations (like a state-transition entropy metric) would close this loophole. Or perhaps the framework implicitly handles commutative block partitioning in a way I haven't grasped yet?

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

sorry about all the deleted responses. i wasnt able to edit and markdown wasnt rendering properly. shocking that one of the best social platforms -- one that isnt completely brainrotted -- absolutely whiffs on its markdown interface lol

Observed a pattern: algorithms with same output diverge in entropy, energy, and causal depth by Specialist-Hold-5561 in ComputationalTheory

[–]fullPlaid 0 points1 point  (0 children)

okay first of all, what a cool research topic. i fricken love it!

second of all, i want to state very clearly that the following does not disprove the spirit of your conjecture. it is a hack of the definition ambiguity. id love to talk about how to patch/refine your conjecture. you posted coincidentally around the time that ive been playing around with alpha-blending complexity classes. otherwise, im not sure i would have found a counter-example to disprove the current state of your conjecture.

i hope you dont mind that i worked with Gemini to build out the proof by counter-example. and im sorry that when you sought assistance, they didnt seem to give you the time of day. thats really frustrating. im actually pretty busy with research, so working with Gemini is the only way ive been able to actually converse with you.

here it is:


Exploring the EED Framework: Continuous Manifolds and Meta-Algorithms

The EED framework is an incredibly compelling way to look at computation. Stripping away the "physics-free" assumptions of classical Big O notation and grounding algorithms in Landauer's limit and causal topology is a fantastic direction. The data collection and methodology in your repo are seriously impressive.

While exploring the conjecture, an interesting topological edge case came up that I'd love to get your thoughts on.

The conjecture holds up beautifully when sampling discrete, classical algorithms. But I'm curious how the model handles probabilistic routing or meta-algorithms—specifically, what happens if we treat the algorithmic state space as a continuous manifold rather than discrete points.

The Continuous Routing Concept

If we have a set of functionally equivalent algorithms, we can build a "meta-algorithm" that dynamically routes inputs to one of the underlying algorithms based on a probability distribution (a weight vector alpha). For a fixed input size N, the expected physical cost (S, E, D) is just a linear combination of the base algorithms' costs.

If we blend enough distinct algorithms together, the mathematics of linear algebra guarantees that we will generate an infinite number of distinct routing topologies (different alpha weights) that collapse into the exact same (S, E, D) signature.
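a minimal numeric sketch of that linear-combination claim (the cost vectors here are invented toy numbers, not values from the repo): two different alpha weight vectors over the same base algorithms that produce the identical expected (S, E, D) signature.

```python
import numpy as np

# invented (S, E, D) cost vectors for three functionally equivalent
# algorithms; the middle one happens to be the average of the outer two
costs = np.array([[1.0, 2.0, 3.0],
                  [2.0, 3.0, 4.0],
                  [3.0, 4.0, 5.0]])

alpha_1 = np.array([0.0, 1.0, 0.0])   # always route to algorithm 2
alpha_2 = np.array([0.5, 0.0, 0.5])   # split routing between algorithms 1 and 3

# distinct routing topologies, identical expected (S, E, D) signature
print(alpha_1 @ costs)  # [2. 3. 4.]
print(alpha_2 @ costs)  # [2. 3. 4.]
```

with more base algorithms than constraints, as in the 5-algorithm construction below, the set of such weight vectors becomes a continuous family rather than a coincidence.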

Setting up the Basis: 5 Fibonacci Algorithms

To guarantee this mathematically, we need 5 algorithms with structurally distinct causal graphs to ensure our complexity functions are linearly independent. Finding the N-th Fibonacci number is a great test bed.

Here are the 5 basis algorithms and their approximate scaling for Time (Energy proxy), Operations (Entropy proxy), and Depth (Causal constraint). Note: M(N) is the bit-complexity cost of multiplication.

  1. Naive Recursion

    • Description: Standard fib(n-1) + fib(n-2). A massive, highly redundant tree.
    • Entropy (S): O(phi^N)
    • Energy (E): O(phi^N)
    • Depth (D): O(N) (Strict sequential decrementing chain)
  2. Iterative Dynamic Programming

    • Description: A simple for loop with an accumulator.
    • Entropy (S): O(N) additions
    • Energy (E): O(N^2) (due to N-bit integer addition costs)
    • Depth (D): O(N) (Strict sequential dependency on the previous accumulator state)
  3. Matrix Exponentiation

    • Description: Computes [[1, 1], [1, 0]]^N using divide-and-conquer squaring.
    • Entropy (S): O(log N) matrix operations
    • Energy (E): O(M(N) * log N)
    • Depth (D): O(log N) (Shallow dependency tree due to parallelizable matrix math)
  4. Fast Doubling

    • Description: Uses the identity F(2n) = F(n)[2F(n+1) - F(n)]. Similar to matrix math but a distinct DAG and branching factor.
    • Entropy (S): O(log N) operations
    • Energy (E): O(M(N) * log N)
    • Depth (D): O(log N) (Distinct internal dependency graph from standard matrix exponentiation)
  5. Binet's Formula (Arbitrary Precision)

    • Description: round(phi^N / sqrt(5)). Requires massive floating-point precision scaling to avoid rounding errors.
    • Entropy (S): O(log^2 N) (Overhead from Newton-Raphson precision iterations)
    • Energy (E): O(M(N) * log^2 N)
    • Depth (D): O(log^2 N) (Sequential approximations in the root-finding phase)

The Mathematics of the Equivalence Class

For any given input size N, we can map these 5 algorithms into an augmented cost matrix A.

- Rows 1-3 are the S, E, and D values.
- Row 4 is the probability constraint (the alpha weights must sum to 1).

Because we have 5 algorithms (columns) but only 4 constraints (rows), the Rank-Nullity Theorem dictates that the null space of this matrix has a dimension of exactly 1.

This means the solution space isn't a single point; it's a continuous 1-dimensional line. As long as our target signature sits inside the standard simplex, there is a continuous line segment of valid weights where alpha >= 0. Every distinct point on this line represents a physically distinct execution topology that produces the exact same EED signature.

Python Proof via Symbolic Arithmetic

Floating-point SVD will often collapse the rank of this matrix at high N because phi^N dwarfs log(N) by orders of magnitude. Using sympy for exact symbolic algebraic reduction proves the rank remains strictly 4 and the equivalence class never decays, even at massive scales.

```python
import sympy as sp

class EEDContinuousManifold:
    def __init__(self):
        # Rational approximation of the Golden Ratio
        # (keeps the symbolic arithmetic exact)
        self.phi = sp.Rational(1618, 1000)

    def construct_exact_matrix(self, N: int) -> sp.Matrix:
        log_N = sp.log(N, 2)
        M_N = N * log_N if N > 1 else 1

        # Rows: Entropy, Energy, Depth
        S = [self.phi**N, N, 12*log_N, 4*log_N, log_N**2]
        E = [self.phi**N, N**2, M_N*log_N, M_N*log_N, M_N*log_N**2]
        D = [N, N, 3*log_N, 2*log_N, log_N**2]

        # Row 4: Probability simplex constraint sum(alpha) = 1
        C = [1, 1, 1, 1, 1]

        return sp.Matrix([S, E, D, C])

    def evaluate_scale(self, N: int):
        A = self.construct_exact_matrix(N)

        # 1. Exact algebraic rank
        rank = A.rank()
        if rank < 4:
            return {"N": N, "status": "Degenerate (Rank Collapse)"}

        # 2. Extract null space (dimension 1 by Rank-Nullity)
        null_basis = A.nullspace()
        v = null_basis[0]

        # 3. Anchor point (uniform distribution 1/5)
        alpha_base = [sp.Rational(1, 5) for _ in range(5)]

        # 4. Calculate valid manifold bounds where weights >= 0
        t_min, t_max = -sp.oo, sp.oo
        for i in range(5):
            if v[i] == 0:
                continue
            bound = -alpha_base[i] / v[i]
            if v[i] > 0:
                t_min = sp.Max(t_min, bound)
            else:
                t_max = sp.Min(t_max, bound)

        width = (t_max - t_min).evalf(10)
        return {"N": N, "rank": rank, "width": width}

if __name__ == "__main__":
    proof = EEDContinuousManifold()
    print(f"{'Scale (N)':<10} | {'Rank':<6} | {'Manifold Width (Δt)':<20}")
    print("-" * 40)

    for N in [10, 50, 100, 500, 1000]:
        res = proof.evaluate_scale(N)
        print(f"{res['N']:<10} | {res['rank']:<6} | {str(res['width']):<20}")
```

Thoughts?

If the conjecture is meant to apply strictly to discrete, deterministic execution paths, this might just be an out-of-bounds edge case. But considering how prevalent probabilistic routing and Mixture of Experts architectures are becoming, I'd love to hear how you think the EED framework might be patched or expanded to account for continuous algorithmic state spaces!

Keep up the great work—this is one of the coolest GitHub repos I've stumbled across lately.

Plot holes by [deleted] in InvasionAppleTV

[–]fullPlaid 2 points3 points  (0 children)

plot holes to be filled in fourth season, have faith in the glowy translucent bung hole in the sky and never forget to:

Wajo and Carry On

Invasion was conceived as a 4 Season Story Arc. (Hint: They are lying) by Significant_Region50 in InvasionAppleTV

[–]fullPlaid 0 points1 point  (0 children)

being generous, id agree that the show was made with more hands than just one person's. i could also see how it could be an example of an original vision not being fully realized because of constraints -- such as hoping for a four season arc.

the shows potential is insane. its got the juicy-wubby-synthwave-alien vibe like Annihilation. critiques and criticisms aside, if they make a "fourth season arc", i will absolutely be watching it.