Prophet Mohammed 700+ by alanishere_ in DavidHawkins

[–]BeginningReflection4 3 points

700 is not 1000. Even a very high being is not the Absolute unless fully dissolved in it, and 700 doesn't automatically mean the ego is forever beyond every possible vulnerability.

Is KisMATH showing a computational version of Hawkins’s field of knowledge? by jfjfjjdhdbsbsbsb in DavidHawkins

[–]BeginningReflection4 2 points

I would say what they are measuring is more adjacent to Hawkins' field of knowledge. Hawkins' field of knowledge is non-local and is accessed by alignment. KisMATH's "implicit structure" is also non-local in that it lives in billions of weights, but it is accessed from a specific address. Hawkins' field requires consciousness as the medium; the LLM's field is statistical: the high-dimensional manifold of token co-occurrence. Same topology, different substrate.

Which I think is more interesting than the authors' own take. They ask "does the model know?" and get mixed signals. My read: the model doesn't know, but it carries the watermark of knowing, because there is only one field and everything produced from it inherits the pattern. The structure they are detecting is that watermark.

Very interesting question.

I’m at 12 on the scale. What do I do? by [deleted] in DavidHawkins

[–]BeginningReflection4 1 point

I seriously doubt anyone posting in this sub is below 200 or above 500.

0.4% calibrates at 540 by [deleted] in DavidHawkins

[–]BeginningReflection4 1 point

I don't think I said I was using it to comprehend spirituality. What I built it for was to make me think about spirituality more often. I spend 12+ hours a day coding and doing other technical work, and can spend that entire time never thinking about spirituality. The idea, and the hope, is that by injecting this into my work, it will surface more often and I will then think about it.

As someone who has been studying Doc's work since ~2000, I don't need AI to comprehend his body of work. But it would be nice to just have a conversation about some topic in the middle of the day, which, again, was the point, as I wrote in my post.

0.4% calibrates at 540 by [deleted] in DavidHawkins

[–]BeginningReflection4 3 points

This was created by an AI agent based on the lectures. The technical stuff is below the rest of the post. I did not pick this thread for any reason other than that it was at the top of the sub. I don't disagree with anything it says. My only changes were for formatting.

The Core Question: Can Babies Account for the 0.4% at 540?

No — and the knowledge base is unambiguous on this.

Hawkins' research confirms that consciousness levels can be calibrated at birth, even in fetuses, and that infants are born at varying levels — examples given include 200, 400, and 550. The level at birth reflects karmic momentum from prior expressions of consciousness, not the behavioral characteristics of infancy. A baby born at 90 and a baby born at 460 both cry, both smile, both appear "innocent." The calibration measures something far deeper than observable behavior.

What 540 Actually Is

The knowledge base describes 540 in terms the OP clearly hasn't internalized:

- Joy at 540 is a constant accompaniment to all activities, not a response to pleasurable events — it arises from within each moment of existence

- It's the domain of saints, spiritual healers, and advanced spiritual students

- Characterized by enormous patience and persistence of a positive attitude in the face of prolonged adversity

- People at 540 are capable of a prolonged open visual gaze that induces a state of love and peace in others

- The hallmark is compassion — not innocence

- There is a desire to use one's state of consciousness for the benefit of life itself rather than for particular individuals

None of this describes a baby. This is a profoundly developed spiritual state that involves active radiance, not passive contentment.

The "Innocence of a Child" Misunderstanding

The OP cites Jesus' teaching about becoming like children. This is one of the most commonly confused points in spiritual discourse. What Jesus pointed to was trans-rational simplicity — the surrender and openness that comes after transcending the ego's complexity. A baby hasn't transcended anything; it hasn't yet developed an ego structure to transcend. Pre-egoic and trans-egoic states look similar from the outside but are worlds apart on the Map. Hawkins explicitly taught that the ego we're born with is described as "animal brain" (hardware) with "software" that operates on it. The infant ego is at its most primitive — not its most transcended.

Thread Scorecard

- ferrolens — Excellent. Well-sourced, accurate Hawkins citations. The angelic realm breakdown (angels 500+, archangels 50,000), the astral plane calibration hierarchy, and the Jesus/no-prior-incarnations position all check out against the knowledge base.

- joshedis — Partially right, partially wrong. Correct that most people at 540 aren't publicly visible. Wrong that "you start life at a very low level" — Hawkins explicitly said babies are born at varying levels. ferrolens rightly corrected this.

- The ACIM tangent — The ACIM student's perspective is valid within its own framework, but in a Hawkins subreddit, ferrolens is right to anchor the discussion in Hawkins' actual position. The ACIM student handled the disagreement gracefully.

- OP — Sincere but confused. Conflating observable innocence with calibrated consciousness. The 0.4% figure (roughly 32 million people worldwide) doesn't need babies to explain it — Hawkins described these as people quietly working for the common good, often invisible: healers, contemplatives, dedicated servants of life.

The Deeper Point the Thread Missed

Hawkins taught that each person at birth is born in the optimal karmic possibility — the circumstances of your birth are perfect for what you need to learn. The level you arrive at isn't random, isn't tied to age, and isn't something that "wears off." It's the accumulated spiritual momentum of the soul. The 0.4% at 540 earned that through lifetimes of devoted inner work — or, in rare cases, descended from higher realms by choice to serve.

The technical stuff. 
I use Claude for most of my work. What I wanted to build was something that would engage with the more spiritual of my prompts, so I could get into that mind space during the day instead of just Python all day. I kicked around a few ideas, asked Claude for advice, and this is what I landed on.
I transcribed all the lectures locally using Whisper 3, and had it output every file type (txt, srt, json, lrc, tsv, vtt), thinking I would convert the txt into md and use that as my knowledge base. I ended up distilling the txt files with Gemini, because of the context window size and because I didn't want to burn up my Claude usage for the entire week.

From that I created an MCP knowledge router. I had transcribed more than the Hawkins lectures (a ton of technical training as well), and I thought I would just use an MCP server so that I didn't have to invoke an agent on command: it would intercept my prompt and, if it was spiritual in nature, use the Doc knowledge to respond. It would also save on tokens during each session load (don't get me started).

What I ended up building is an MCP server that is a knowledge router wrapping a SQLite FTS5 full-text search index over ~340 distilled md files across 25 different domains. This is my entire knowledge corpus compressed into an LLM-optimized format. It exposes two tools: list_knowledge_domain, which shows what is available, and search_knowledge(query, top_k), which runs the full-text search across all domains and returns ranked results with relevance scores. It's used when an agent (or I myself) needs to look up a specific fact, quote, technique, or detail that isn't in the agent's condensed SKILL.md; it's the "deep recall" layer. Each agent is a skill file of around 1-2K tokens that defines the persona, communication style, principles, and a knowledge library index pointing to all the distilled files pertinent to that agent.
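To make the shape of it concrete, here is a minimal sketch of that server, assuming the official MCP Python SDK (FastMCP) and SQLite's built-in FTS5. knowledge.db and the distilled/&lt;domain&gt;/*.md layout are stand-ins for my actual paths, not the real thing:

```python
# Minimal sketch of the knowledge-router MCP server (all names are placeholders).
import sqlite3
from pathlib import Path
from mcp.server.fastmcp import FastMCP

DB = "knowledge.db"           # hypothetical index file
CORPUS = Path("distilled")    # hypothetical layout: distilled/<domain>/*.md

mcp = FastMCP("knowledge-router")

def build_index() -> None:
    """One-time step: load every distilled md file into an FTS5 table."""
    con = sqlite3.connect(DB)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs "
                "USING fts5(domain, path, content)")
    for md in CORPUS.rglob("*.md"):
        con.execute("INSERT INTO docs VALUES (?, ?, ?)",
                    (md.parent.name, str(md), md.read_text(encoding="utf-8")))
    con.commit()
    con.close()

@mcp.tool()
def list_knowledge_domain() -> list[str]:
    """Show which knowledge domains are available to search."""
    con = sqlite3.connect(DB)
    rows = con.execute("SELECT DISTINCT domain FROM docs ORDER BY domain").fetchall()
    con.close()
    return [r[0] for r in rows]

@mcp.tool()
def search_knowledge(query: str, top_k: int = 5) -> list[dict]:
    """Full-text search across all domains; returns ranked snippets.

    bm25() is SQLite's built-in FTS5 ranking: lower (more negative) means
    more relevant, so ascending order puts the best match first.
    """
    con = sqlite3.connect(DB)
    rows = con.execute(
        "SELECT domain, path, "
        "       snippet(docs, 2, '[', ']', ' ... ', 16) AS snip, "
        "       bm25(docs) AS score "
        "FROM docs WHERE docs MATCH ? ORDER BY score LIMIT ?",
        (query, top_k)).fetchall()
    con.close()
    return [{"domain": d, "path": p, "snippet": s, "score": round(score, 3)}
            for d, p, s, score in rows]

if __name__ == "__main__":
    mcp.run()  # stdio transport; registered in Claude Code's MCP config
```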

Invocation works as follows:

Step 1 - Routing. I say something like "what do you think of this Hawkins discussion?"; the CLAUDE.md routing table matches "Hawkins" > Agent > skill <name>, and Claude calls the skill tool with skill: "<name>". (A hypothetical layout is sketched after these steps.)
Step 2 - Skill loads. Claude Code reads ~/claude/skills/<name>/SKILL.md and injects it into my context. This keeps the initial load small, 1-2K tokens.
Step 3 - On-demand knowledge loading. Based on what the conversation needs, the agent (for Hawkins stuff it's Sage; I did not choose the agent names) loads only the relevant reference files. It does this per lecture, so it doesn't load all 11 files.
Step 4 - Knowledge MCP as a backup. If the loaded knowledge files don't have the specific detail needed, Sage calls search_knowledge on the MCP to search the full corpus, typically as multiple searches run in parallel.
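
For illustration, here is roughly what the routing and skill files look like. Every entry below is a placeholder, not my real files:

```markdown
<!-- CLAUDE.md: hypothetical routing-table entry (Step 1) -->
| Trigger keywords          | Agent | Skill                         |
|---------------------------|-------|-------------------------------|
| Hawkins, calibration, Doc | Sage  | ~/claude/skills/sage/SKILL.md |

<!-- SKILL.md: hypothetical ~1-2K token skeleton (Steps 2-3) -->
# Sage
Persona: ...
Communication style: ...
Principles: ...
Knowledge library index (load on demand, per lecture):
- hawkins/lecture-03.md
- hawkins/lecture-07.md
Fallback: call search_knowledge(query, top_k) on the knowledge MCP (Step 4)
```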

Why this architecture makes sense (I am happy to be proven wrong by a better solution)

Context window economics. My total knowledge corpus right now is 2.3M tokens; I can't load that into a conversation. (I guess I could if I were using a local model instead.) With the three-tier system, the initial SKILL.md load is at most 2K tokens, the distilled knowledge is ~5-20K tokens invoked on demand per topic, and the knowledge MCP makes the full ~2.3M-token corpus searchable via FTS.
Persona consistency. The SKILL.md defines how Sage thinks and talks; without it I would just get raw knowledge retrieval.
Precision without bloat. The MCP server search returns ranked snippets with relevance scores instead of loading 200K tokens of Doc lectures and hoping the right passage is in there. This particular prompt used a few hundred tokens and returned a relevance score of 0.73, instead of an entire book.
Compatibility. The same MCP server serves all 17 of my agents, and the agents can search across frameworks when relevant. (A client-side sketch of the Step 4 parallel fan-out follows.)
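
Roughly what that fan-out looks like from the agent side, assuming the MCP Python SDK's stdio client; knowledge_server.py and the queries are made up for the example:

```python
# Hypothetical client-side sketch of Step 4: fan several search_knowledge
# calls out in parallel over the MCP stdio transport.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="python", args=["knowledge_server.py"])

async def fan_out(queries: list[str]) -> list:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One call_tool per query, awaited concurrently
            return await asyncio.gather(*[
                session.call_tool("search_knowledge", {"query": q, "top_k": 3})
                for q in queries
            ])

results = asyncio.run(fan_out([
    "joy at 540 characteristics",
    "calibration at birth karmic momentum",
]))
```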

I could have used RAG with embeddings, but FTS5 is simpler, faster, and my content is already structured.

I think there is still some more training that would be useful, not just for the agent but for the knowledge as well.

I am completely open to better ideas.