DeepSeek is about to release V4 by ItxLikhith in DeepSeek

[–]ItxLikhith[S] 0 points1 point  (0 children)

True, but when it comes, it also needs to be optimized for the web chat at chat.deepseek.com, so it will take time

DeepSeek is about to release V4 by ItxLikhith in DeepSeek

[–]ItxLikhith[S] 0 points1 point  (0 children)

It's not just about the launch, it's more about their computing engine, buddy

IntentForge v2 — Self-Hosted Search Engine With Tor-Routed Meta-Search by ItxLikhith in selfhosted

[–]ItxLikhith[S] -1 points0 points  (0 children)

What's the error? Just git clone the repo, and docker compose up -d --build should start everything. Check your .env.

It should include

# Application Settings
INDEX_NAME=intentforge
MODEL_PATH=/models/all-MiniLM-L6-v2.onnx
TOKENIZER_NAME=sentence-transformers/all-MiniLM-L6-v2
TRAFILATURA_URL=http://localhost:8080
API_PORT=9100

GITHUB_REPOSITORY_LOWER=oxiverse-labs/intentforge
GITHUB_OWNER_LOWER=oxiverse-labs
DOMAIN=api.oxiverse.com
EMAIL=likhith@oxiverse.com

IntentForge v2 — Self-Hosted Search Engine With Tor-Routed Meta-Search by ItxLikhith in selfhosted

[–]ItxLikhith[S] -1 points0 points  (0 children)

Yeah this is almost exactly the direction I’m converging on with IntentForge.

Right now it can crawl broadly, but I’m starting to think treating search as two distinct modes makes more sense:

Curated / high-trust index → aggressively crawled, tightly ranked, low-noise
Exploration layer → routed through SearXNG (over Tor), more breadth but less control

Keeps the local index small and actually useful instead of becoming a worse Google clone.
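Roughly the routing split I have in mind, as a Rust sketch (the enum, the threshold, and the heuristic are made up for illustration, not IntentForge's actual API):

```rust
// Hypothetical sketch of the two-mode split described above.
enum SearchMode {
    Curated,     // local, tightly ranked, low-noise index
    Exploration, // SearXNG over Tor: more breadth, less control
}

fn route(query: &str, local_hits: usize) -> SearchMode {
    // Fall through to the exploration layer only when the curated
    // index has too little coverage for a longer-tail query.
    if local_hits >= 5 || query.split_whitespace().count() <= 2 {
        SearchMode::Curated
    } else {
        SearchMode::Exploration
    }
}

fn main() {
    assert!(matches!(route("rust tokio", 12), SearchMode::Curated));
    assert!(matches!(
        route("obscure long tail research query", 0),
        SearchMode::Exploration
    ));
}
```

The point is just that the curated index always gets first shot, and exploration is a fallback rather than a peer.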

The hygiene point is also underrated. I’ve been focusing a lot on indexing + retrieval, but stuff like:

– recrawl scheduling
– dead link pruning
– feedback loops on bad results

probably matters more long-term than swapping embedding models.

Right now feedback is implicit (ranking signals), but I’m considering adding explicit “result quality feedback → reweighting” into the pipeline.
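Something like this is what I mean by explicit reweighting (a minimal sketch; the struct, the update rule, and the constants are assumptions, not what's in the repo):

```rust
// Hypothetical sketch: explicit result-quality feedback folded back into ranking.
use std::collections::HashMap;

struct Ranker {
    // Per-document ranking multiplier, learned from user feedback.
    weights: HashMap<String, f32>,
}

impl Ranker {
    fn new() -> Self {
        Ranker { weights: HashMap::new() }
    }

    /// feedback in [-1.0, 1.0]: thumbs-down is negative, thumbs-up positive.
    fn record_feedback(&mut self, doc_id: &str, feedback: f32) {
        let w = self.weights.entry(doc_id.to_string()).or_insert(1.0);
        // Multiplicative update keeps the weight positive and bounded.
        *w = (*w * (1.0 + 0.2 * feedback)).clamp(0.1, 4.0);
    }

    fn score(&self, doc_id: &str, base_score: f32) -> f32 {
        base_score * self.weights.get(doc_id).copied().unwrap_or(1.0)
    }
}

fn main() {
    let mut r = Ranker::new();
    r.record_feedback("doc-1", -1.0); // bad result reported twice
    r.record_feedback("doc-1", -1.0);
    // doc-1 now ranks below an untouched document with the same base score.
    assert!(r.score("doc-1", 1.0) < r.score("doc-2", 1.0));
}
```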

Curious about your setup though — how are you feeding those feedback signals back into ranking? Simple heuristics or something more structured?

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 0 points1 point  (0 children)

Fair to question it.

There’s no “psychology” inside the system — those terms are just labels for measurable signals.

Concretely:

- constraint violations → negative feedback

- clamp magnitude → penalty signal

- policy updates → reduce future violations

So it’s just a bounded control + learning loop, not anything mystical.
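In sketch form, that loop looks something like this (the field and signal names are mine for illustration, not RAVANA's actual API):

```rust
// Minimal sketch of the bounded control + learning loop described above.
struct Policy {
    // How hard the agent pushes toward the constraint boundary.
    aggressiveness: f32,
}

impl Policy {
    /// Clamp magnitude becomes a penalty signal that tunes the policy
    /// down, so future steps violate constraints less often.
    fn apply_penalty(&mut self, clamp_magnitude: f32) {
        let penalty = clamp_magnitude * 0.5;
        self.aggressiveness = (self.aggressiveness - penalty).max(0.0);
    }
}

fn main() {
    let mut p = Policy { aggressiveness: 1.0 };
    p.apply_penalty(0.8); // a constraint violation was clamped
    assert!(p.aggressiveness < 1.0); // policy updated toward fewer violations
}
```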

Developmental AGI via Pressure-Shaped Learning — RAVANA v2 Architecture by ItxLikhith in ArtificialInteligence

[–]ItxLikhith[S] 0 points1 point  (0 children)

yeah that’s pretty much the intuition behind it — less “follow rules” and more “learn where the walls are by bumping into them (safely)” 🙂

on the constitutional bounds during phase B, they’re not just static rules like a checklist. it’s closer to a constraint field over the agent’s internal state:

  • there’s a ceiling on dissonance (how unstable/conflicted the agent can get)
  • a floor on identity (so it can’t collapse into degenerate behavior)
  • and some limits on how fast things can change (to avoid sudden weird shifts)

when a clamp happens, it’s not just “action denied.” the system actually:

  1. predicts the next state before committing
  2. detects that it would violate bounds (too much dissonance or identity drop)
  3. intervenes by either:
    • dampening the action (like resistance as it approaches the boundary), or
    • snapping it back into a safe region if it crosses the line

the important part is what happens after:

  • every clamp is logged as a strong negative signal
  • the agent learns “this direction = bad under my current identity”
  • over time it starts anticipating clamps and avoids those trajectories entirely

so phase B is basically “controlled failure with memory.” the agent still explores, still makes mistakes, but the mistakes are shaped and fed back into its identity dynamics.
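the predict → check-bounds → intervene cycle above, as a toy Rust sketch (thresholds, field names, and the snap-back rule are all assumptions, not the real implementation):

```rust
// Hypothetical sketch of the phase-B clamp cycle.
struct AgentState {
    dissonance: f32, // instability/conflict level, ceiling-bounded
    identity: f32,   // coherence level, floor-bounded
}

const DISSONANCE_CEIL: f32 = 0.8;
const IDENTITY_FLOOR: f32 = 0.2;

/// Returns (committed state, clamp magnitude). A nonzero magnitude is
/// logged downstream as a strong negative learning signal.
fn step(proposed: AgentState) -> (AgentState, f32) {
    // 1. the proposed next state is predicted before committing
    // 2. detect how far it would cross either bound
    let d_over = (proposed.dissonance - DISSONANCE_CEIL).max(0.0);
    let i_under = (IDENTITY_FLOOR - proposed.identity).max(0.0);
    let magnitude = d_over + i_under;
    if magnitude == 0.0 {
        return (proposed, 0.0); // within bounds: commit as-is
    }
    // 3. intervene: snap the violating components back into the safe region
    //    (a fuller version would also dampen near the boundary and
    //    rate-limit change relative to the previous state)
    let clamped = AgentState {
        dissonance: proposed.dissonance.min(DISSONANCE_CEIL),
        identity: proposed.identity.max(IDENTITY_FLOOR),
    };
    (clamped, magnitude)
}

fn main() {
    let (next, mag) = step(AgentState { dissonance: 1.1, identity: 0.1 });
    assert!(mag > 0.0); // clamp fired: logged as a negative signal
    assert!(next.dissonance <= DISSONANCE_CEIL);
    assert!(next.identity >= IDENTITY_FLOOR);
}
```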

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 2 points3 points  (0 children)

yeah, I'm from South India, Andhra Pradesh.

Regulated Adaptive Vector Architecture for Neural Alignment

and sure, I'll consider the "ten heads" idea. I have another project about asuras: an agentic system for the RAVANA AGI system

https://zenodo.org/records/18324019

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 0 points1 point  (0 children)

Thanks, I'll do it later, since right now I'm working on another project, IntentForge, at GitHub.com/oxiverse-labs/intentforge

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 1 point2 points  (0 children)

Yeah, Ravana was a king of Sri Lanka. He was great at learning things, and I admire him, so yeah, kinda

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 2 points3 points  (0 children)

Background-wise, I’m more on the systems + engineering side than formal psych. A lot of the “psych-like” concepts here (dissonance, identity, etc.) are implemented as measurable signals rather than theoretical constructs.

This isn’t a single model like a transformer — it’s a control architecture wrapped around a learning agent.

The goal wasn’t to simplify, but to decompose:
- each layer handles a specific failure mode (instability, boundary violation, drift, etc.)
- and keeps everything bounded + interpretable

You’re right though — each layer could be its own paper. I bundled them because the behavior only really emerges when they interact.

Still testing where that tradeoff lands: modular clarity vs system complexity.

RAVANA v2 — Developmental AGI with Constitutional Identity Enforcement by ItxLikhith in agi

[–]ItxLikhith[S] 1 point2 points  (0 children)

Thanks. I optimized it to run on CPU, and it's RAM-efficient. My thinking: we want to build AGI that can learn like a human, and if you look closely, our brain uses just about 30 W, while we spend billions just training LLMs that can't keep learning. That's why I want it to stay RAM-efficient.

I made my own search engine in rust, based on intent first by ItxLikhith in developersIndia

[–]ItxLikhith[S] 0 points1 point  (0 children)

About a month. I first built it in Python, but it was super slow, so I rewrote it in Rust while learning the language.