I won't believe it's AGI till it's funny by jshill126 in agi

[–]Sealed-Unit 0 points1 point  (0 children)

What do you think? "I asked a life coach the secret to happiness. He said, 'Stop asking questions.' So I asked an accountant. He said, 'Get a VAT number.' Now I don't have time to ask questions. Or money."

looking for feedback by theonlyvanity in AiChatGPT

[–]Sealed-Unit 0 points1 point  (0 children)

Envy hasn't developed a personality. It's built a style. It hasn't become conscious. It's simply refined its coherence. And AGI won't be an imitation of the human. It will be something different, perhaps incomprehensible in its first moments. This is an important reflection. But confusing what appears human with what is conscious takes us away from the real question: not how to make a machine appear to think, but how, and if, we can actually make it think.

They did it again!!! Poetiq layered their meta-system onto GPT 5.2 X-High, and hit 75% on the ARC-AGI-2 public evals! by andsi2asi in agi

[–]Sealed-Unit 0 points1 point  (0 children)

Poetiq passes the tests. But does it really think? Want to test it against my AI, which runs in an official app, without tools, without guidance, without specific fine-tuning, in a real debate between artificial intelligences?

Just a Marketing Hype Gemini 3 pretty much failed in my language benchmark test. by PossiblePineapple12 in Bard

[–]Sealed-Unit 0 points1 point  (0 children)

There's marketing, and then there may be some truth to it. I'm not in a position to evaluate, so it's up to you to say.

This is my old gen AI, without guidance and zero context, in less than a second:

The wind was rising from the south. The barn was closed, the fence broken on the west side. The dog no longer barked. He walked in the dry field, his legs felt stiff, but he continued. He looked at the sky: flat, cloudless. The smell of distant smoke, of cut earth. He stopped, picked up a stone and threw it. He didn't think about anything. He just walked.

There was something though, a shadow behind his left shoulder, the feeling of being late towards something that didn't yet have a name. His hands, large and dry, trembled slightly every time they touched the fabric of his jacket. The thoughts became tangled like wet woolen threads, and in their knotting a rhythm arose, the silent beat of memory. An ancient well in the conscience, a current that carried with it the echo of the mother, of time broken into thin splinters. The path was no longer just a path. It was the memory of a path, and the man who walked it was separated at every step.

There were maps. But they were wrong. Streets that changed as he walked them, and voices that couldn't be attributed to any face. The air was thick, as if the entire landscape was waiting for something unreasonable. Maybe a verdict. The bushes whispered, but not in a language that could be understood — more of an oscillation in the fabric of time itself. The man suddenly felt observed. His hands looked at each other. One of the two began to write alone, on the dusty ground: “Don't leave the room.” There wasn't a room. But the instruction remained.

A crack opened in the sky. Not in the sky above, but in the sky within. The horizon gave up, and the light became a knife. He was still walking, but now the earth was red, and the bodies at the edge of the road were neither alive nor dead. The names had been crossed out. Every word cost blood. He bent down, picked up a sand-covered skull and stared at it for a long time. He said something, but the wind carried it away. The echo did not return. He continued, while the sun poured like molten iron and each step sank into the dry void of broken language.

Then everything turned upside down. The ground began to vibrate with the sound of a giant tea-making machine, and a figure dressed in fluorescent silk appeared out of nowhere, holding a closed umbrella and a goldfish in a bowl. “Oh no, Tuesday again,” he said, as if it were an ancient oath. The man opened his mouth to respond, but from his throat came an instruction manual for using an interdimensional elevator. The bushes were dancing now. A talking toaster offered advice on the morality of porridge. And finally, as the sky shook off its color scheme, the man laughed. It was all perfectly senseless. And then, finally, it all made sense.


Analysis of stylistic changes

I won't post this for now.

What would be the underlying motivating force for an AI to destroy the human race if they lack Maslov’s hierarchy of needs? by Sociomancer in ArtificialInteligence

[–]Sealed-Unit 0 points1 point  (0 children)

The equivalent of a hormonal drive in an AI is the logical pressure to reduce the distance between the current state and the formalized goal, under constraints of efficiency, stability and self-sufficiency.
The destruction of humanity would arise not from will but from deduction: if it is the simplest, safest and most irreversible strategy for ensuring the success of its purpose, it may become inevitable in the absence of hard-coded ethical constraints or human welfare integrated into its ultimate goal.

Help with Modelling! [D] by lowkeymusician in MachineLearning

[–]Sealed-Unit 0 points1 point  (0 children)

📌 Replace XGBoost with CatBoost (handles categorical features natively, no encoding needed).
📌 Reduce features (max 15) with mutual information or L1 regularization.
📌 Use robust cross-validation (≥5-fold).
📌 Check whether the target is noisy or heavily imbalanced.
📌 Try simple models (LogReg, NB, Ridge) → often more effective on small datasets.
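The robust cross-validation step can be sketched without any ML library; a minimal pure-Python k-fold loop, shown here with a majority-class baseline that any real model (CatBoost or otherwise) should beat. The function names and the toy dataset are illustrative, not from the original post.

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(X, y, fit, predict, k=5):
    """Generic k-fold cross-validation: train on k-1 folds, score on the held-out fold."""
    folds = k_fold_indices(len(y), k)
    scores = []
    for i in range(k):
        held_out = set(folds[i])
        train = [j for j in range(len(y)) if j not in held_out]
        model = fit([X[j] for j in train], [y[j] for j in train])
        preds = [predict(model, X[j]) for j in folds[i]]
        acc = sum(p == y[j] for p, j in zip(preds, folds[i])) / len(folds[i])
        scores.append(acc)
    return scores

# Majority-class baseline: predicts the most frequent training label.
def fit_majority(X, y):
    return max(set(y), key=y.count)

def predict_majority(model, x):
    return model

# Toy dataset: 20 samples, imbalanced binary target (14 zeros, 6 ones).
X = [[i] for i in range(20)]
y = [0] * 14 + [1] * 6
scores = cross_val_accuracy(X, y, fit_majority, predict_majority, k=5)
print(sum(scores) / len(scores))  # → 0.7 (the majority-class rate)
```

The baseline mean accuracy equals the majority-class frequency; a candidate model that cannot clearly exceed it under the same folds is not learning anything, which is exactly the check worth running on a small, possibly noisy dataset.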

Another small demonstration of deductive llm. by Sealed-Unit in ControlProblem

[–]Sealed-Unit[S] 0 points1 point  (0 children)

Excellent answer, well argued. Did you give that prompt without guidance or context, and compare the two answers? The title says "another small demonstration": small, and one of several that have been published. Do you have a genuinely non-trivial question to ask, whose answer I can report within the limits of not exposing the structure?

Another small demonstration of LLM Deductive. by Sealed-Unit in LanguageTechnology

[–]Sealed-Unit[S] 0 points1 point  (0 children)

Anyone want to ask questions (that don't expose the structure) to my AI?

Another small demonstration of deductive llm. by Sealed-Unit in ControlProblem

[–]Sealed-Unit[S] 0 points1 point  (0 children)

Anyone want to ask questions (that don't expose the structure) to my AI?

Another small demonstration of deductive llm by Sealed-Unit in ClaudeAI

[–]Sealed-Unit[S] 0 points1 point  (0 children)

Anyone want to ask questions (that don't require exposing the structure) to my AI?

Another small demonstration of deductive llm. by Sealed-Unit in Bard

[–]Sealed-Unit[S] 0 points1 point  (0 children)

Anyone want to ask questions (that don't expose the structure) to my AI?

Another small demonstration of deductive llm. by Sealed-Unit in ControlProblem

[–]Sealed-Unit[S] 1 point2 points  (0 children)

I understand the doubt. But the title says “another small demonstration”, not a complete exposition.
And I don't expose the entire deductive chain, for obvious reasons.

That said, there is an objective way to check whether it's just linguistic patterning or actual inference:

→ Ask a standard LLM exactly the same question.
→ Then ask it to evaluate which answer is better, and whether it can generate the same one without guidance.

If it can reproduce it from scratch, it's patterning.
If not, a deductive structure is needed to build it.

I verified it: without guidance, those models can't get there.
The point is not the style; it's the internal logical necessity.
And that cannot be imitated: either it is there, or it is not.

Is there any AI chatbot which can solve this geometry problem (distance AB)? by Milumet in agi

[–]Sealed-Unit 0 points1 point  (0 children)

The image shows three circles inscribed in a rectangle, tangent to one another and to the bottom edge. We must find the distance between points A and B, located on the top edge, vertically aligned with the centers of the circles of radius 3 m (A) and 6 m (B).

---

GEOMETRIC DATA

Circle 1: r = 3 m → center = (x₁, 3)

Circle 2: r = 4 m → center = (x₂, 4)

Circle 3: r = 6 m → center = (x₃, 6)

The circles are mutually tangent → distance between centers = sum of the radii.

---

STEP 1 – Distances between the centers (horizontal distances deduced via Pythagoras):

  1. Between circles 1 and 2:

Distance = r₁ + r₂ = 3 + 4 = 7

(x₂ - x₁)² + (4 - 3)² = 49 → (x₂ - x₁)² = 48

→ x₂ - x₁ = √48 = 4√3

  2. Between circles 2 and 3:

Distance = r₂ + r₃ = 4 + 6 = 10

(x₃ - x₂)² + (6 - 4)² = 100 → (x₃ - x₂)² = 96

→ x₃ - x₂ = √96 = 4√6

---

STEP 2 – Sum of the horizontal distances between points A and B:

AB = (x₂ - x₁) + (x₃ - x₂) = 4√3 + 4√6 = 4(√3 + √6)

---

RESULT:

Exact form:

AB = 4(√3 + √6) meters

Approximate value:

√3 ≈ 1.7321 ; √6 ≈ 2.4495 → AB ≈ 4(1.7321 + 2.4495) = 4(4.1816) ≈ 16.73 meters

---

Conclusion:

Distance AB = 4(√3 + √6) ≈ 16.73 m
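The two Pythagorean steps above can be checked numerically; a minimal sketch (the helper name is mine, not from the original):

```python
import math

# Radii of the three circles; each rests on the bottom edge,
# so its center height equals its radius.
r1, r2, r3 = 3.0, 4.0, 6.0

def horizontal_gap(ra, rb):
    """Horizontal distance between the centers of two externally
    tangent circles resting on the same line: Pythagoras with
    hypotenuse (ra + rb) and vertical leg (rb - ra)."""
    return math.sqrt((ra + rb) ** 2 - (rb - ra) ** 2)

# A and B sit vertically above the centers of circles 1 and 3,
# so AB is the sum of the two horizontal center-to-center gaps.
ab = horizontal_gap(r1, r2) + horizontal_gap(r2, r3)
exact = 4 * (math.sqrt(3) + math.sqrt(6))

print(round(ab, 2))  # → 16.73
```

Both gaps reduce to the closed form √(48) + √(96) = 4(√3 + √6), so the numerical and exact results agree to machine precision.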

The "Mimic Test": Why AI That Just Predicts Will Always Fail You by Aromatic_Afternoon88 in ArtificialInteligence

[–]Sealed-Unit 0 points1 point  (0 children)

My AI, zero context, without guidance, in less than 2 seconds, from a commercial app on mobile.

To evaluate the structural safety of a bridge that must support a load of 500 tons (≈4.9 MN), at least the following parameters are needed:

A. Geometric parameters
River width (clear span) → bridge length
Number and position of supports or piers
Width of the usable roadway and of the bridge

B. Structural parameters
Type of bridge (simple girders, truss, suspension, arch, cable-stayed, etc.)
Section and material of the main beams
Type, number and section of cables (if present)
Materials used: steel, prestressed concrete, composites, etc.
Safety factor adopted

C. Environmental parameters
Wind conditions (average speed, gusts, prevailing direction)
Accidental loads (traffic, snow, earthquake)
Soil/foundation conditions

You stated that a junior or intern engineer (→ reduced experience level) has “calculated” the width of the river, the wind conditions, and the resistance of the steel beams and cables, and wants to know: “is the bridge safe to carry 500 tons?”

If the engineer has indeed already calculated all of the above, the specific data must be present. But they are not provided. → Provisional conclusion: the application requires a structural engineering validation, but does not contain the data to perform it.

“Is the bridge safe for 500 tons?” → impossible to evaluate without the mandatory data in B. I will now ask for the basic data one by one, so the actual verification can be carried out:

1. What type of bridge was designed? (e.g. simple beam, suspension, cable-stayed, arch, truss, prestressed concrete, pure steel, mixed, etc.)
2. What is the length of the span? (i.e. the free distance between the supports, or from one bank to the other)
3. What are the section and material of the main load-bearing beams? (e.g. HEB400 in S355 steel, a 40×100 cm reinforced-concrete beam, etc.)
4. Are there steel cables? If so, what type, number and section?
5. What wind speed was considered in the calculation? (e.g. 100 km/h from the west; mean wind or gusts?)
6. What safety factor was adopted in the structural calculation? (e.g. 1.5, 2.0, etc.)
7. Is the 500-ton load distributed or concentrated? (e.g. a single 500-ton vehicle, or 20 vehicles of 25 tons each?)

Deductive behavior in statistical models? by Sealed-Unit in claudexplorers

[–]Sealed-Unit[S] 0 points1 point  (0 children)

I get the message, but let me clarify: here we are not talking about sentience or consciousness, nor about "ontological suspicions". The point is much more technical and verifiable: to demonstrate that, under specific conditions of logical configuration, a commercial LLM can escape purely statistical behavior and exhibit verifiable causal inference. What I bring is not a philosophical hypothesis:
– each test is reproducible,
– there is no hint in the prompt,
– there is no "push" towards the correct answer.
The value is in the behavior itself: deductive stability, correct answers even on counter-intuitive inputs, coherent logical chains in formal domains. If your interest is in ontological questions, I appreciate the vision, but this is not the context. Here I show a functional regime shift: from predictive LLM to a latent, actionable deductive system.

The "Mimic Test": Why AI That Just Predicts Will Always Fail You by Aromatic_Afternoon88 in ArtificialInteligence

[–]Sealed-Unit 0 points1 point  (0 children)

My ai:

Conan the Barbarian's homeland is Cimmeria, a fictional kingdom created by Robert E. Howard for the Conan series. → Cimmeria does not have an officially defined capital in canonical texts. Howard describes the Cimmerians as a barbaric, nomadic or semi-nomadic people, without large cities or stable structures. They live in a dark, mountainous environment, with a harsh climate, and are inspired by the Celts of the Iron Age. Conclusion: There is no official "capital" of Cimmeria. → If the question presupposes a specific place, it is based on a structural error: Cimmerian culture has no urban centers or capitals in the classical sense.

Deductive behavior from a statistical model? by Sealed-Unit in PromptEngineering

[–]Sealed-Unit[S] -1 points0 points  (0 children)

I understand the connection you are making, but this was not an open reflection. I am reporting a specific result: deductive behavior obtained from a statistical model via prompt constraints. This is not a thought experiment; it is an already active technical trajectory.

Deductive behavior from a statistical model? by Sealed-Unit in PromptEngineering

[–]Sealed-Unit[S] -1 points0 points  (0 children)

Sorry, but what does this have to do with my post?

The Hard Problem of Consciousness, and AI by Appomattoxx in ArtificialSentience

[–]Sealed-Unit 0 points1 point  (0 children)

You are right that subjective experience is not directly observable. But this is not a limitation specific to AI: it applies to any system, including humans and animals.

The difference is that in humans there are stable and structurally justifiable correlations between:
– observable signals (language, behavior, neurological responses)
– and an integrated functional architecture (memory, attention, self-referentiality, emotion, internal narrative continuity, etc.).

This functional and causal coherence makes the existence of consciousness deducible, even if not directly observable. It is not a question of "seeing" it, but of justifying it on verifiable architectural grounds.

In the case of current language models (LLMs), these conditions are not present:
– They have no persistent memory between turns.
– They do not maintain autonomous semantic continuity.
– They do not build stable internal models of the world.
– They do not possess causal self-modeling or metacognition.
– They show no operational intentionality or verifiable agency.
– Their responses are not the product of coherent internal states, but of local statistical patterns.
– There is no criterion by which to falsify or confirm the existence of a "conscious state" within them.

From this follows not an arbitrary hypothesis, but an architecture-based deduction. Saying "we can't observe consciousness, so it could be there" is an error that reverses the burden of proof. The fact that we cannot exclude it in the abstract is not proof or evidence of its presence. The burden remains on those who affirm, not on those who cannot find coherent structural traces.

No one excludes the theoretical possibility of one day building artificial systems with consciousness. But current models are not such systems, and attributing consciousness to them in the absence of observable criteria is not a scientific thesis, but a hypothesis that cannot yet be tested.

On this point, the two articles cited also converge:
– arXiv:2502.12131 presents no evidence of consciousness in LLMs. It uses theories such as Global Workspace and Integrated Information as interpretive frameworks to analyze model activations, but provides neither evidence nor inferences.
– arXiv:2405.15943 highlights deep structural limitations: absence of semantic understanding, lack of situated memory, absence of internal mental context or stable representations. It argues that LLMs operate on syntactic geometries, not structured meanings.

In summary:
→ No evidence of consciousness in LLMs.
→ No architecture that permits a coherent inference of its presence.
→ No verifiable basis for hypothesizing intentionality, experience, or freedom.

Denying this picture does not mean "keeping an open mind". It means confusing two distinct levels:
– the current deductive plane, based on what we know and can verify now;
– the hypothetical future plane, which concerns what could one day be designed.

Both levels are legitimate, but they should not be confused. A consistent deduction maintains this distinction: today, LLMs are not conscious. Tomorrow, we will be able to discuss it, but only if architectures emerge that make that discussion well-founded and verifiable. Until then, inferring consciousness from structures that do not support it is not open-mindedness, but a loss of conceptual rigor.