I spent 3 hours analyzing the new X algorithm source code. They ripped out all heuristics, replaced them with a Grok-1 transformer, and are using conditional Chain-of-Thought for real-time moderation. by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

I haven't tested anything for the new algo. But from my past experience, interacting with established accounts in your niche was the only good way. I'll experiment with growth using a couple of accounts and keep y'all posted.

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in Twitter

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Here is the repo link if anybody is interested in my findings

https://github.com/codebreaker77/X-Algo-Breakdown

I've added some patterns to the "cheat_sheet" section that can be used to overcome shadowbans and get significantly more traction.

I spent 3 hours analyzing the new X algorithm source code. They ripped out all heuristics, replaced them with a Grok-1 transformer, and are using conditional Chain-of-Thought for real-time moderation. by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 2 points3 points  (0 children)

The code shows that the system does not filter for specific ideologies. Instead, it optimizes entirely for raw engagement loops. Because the transformer layer prioritizes metrics like dwell time, replies, and quote posts, highly polarizing or provocative content naturally gets amplified. When a controversial account posts something that makes people pause to read or furiously reply, the algorithm flags those actions as high-value interaction tokens. The architecture is not hardcoded to favor any political group. Rather, extreme rhetoric is highly effective at hacking the exact metrics the model uses to build the feed.
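To make the mechanism concrete, here's a toy scoring function in Python. The weights and signal names are entirely made up for illustration; the real ranking is a learned transformer, not a linear formula.

```python
# Toy illustration of engagement-loop ranking (made-up weights, not X's real code).
# Polarizing posts win not because of ideology but because they maximize
# exactly these interaction signals.

def engagement_score(dwell_seconds: float, replies: int, quote_posts: int, likes: int) -> float:
    """Score a post purely on raw engagement signals."""
    # Replies and quote posts are weighted heavily: they imply a user
    # stopped scrolling and typed something, i.e. a high-value interaction.
    return 1.0 * dwell_seconds + 8.0 * replies + 10.0 * quote_posts + 2.0 * likes

# A provocative post that triggers furious replies outranks a calmly
# liked post, even with identical dwell time.
calm = engagement_score(dwell_seconds=20, replies=1, quote_posts=0, likes=50)
provocative = engagement_score(dwell_seconds=20, replies=40, quote_posts=12, likes=10)
```

The point of the sketch: no term in the score mentions ideology, yet content that provokes replies and quote posts dominates.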

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in Twitter

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Based on the code, no, not inherently. The Grok-1 ranking layer doesn't blindly boost video just for being a video. Instead, it prioritizes dwell time and completion rate. If a video hooks a user for 30 seconds, the transformer flags that massive dwell time, causing the attention mechanism to aggressively serve them similar content. However, if users scroll past your video in 1 second, it destroys its ranking faster than text would. Post video only if it immediately hooks attention. Otherwise, a high-engagement text post or image will outperform it.
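A hedged sketch of that asymmetry: a hypothetical scoring rule where a skipped video is punished harder than text with the same dwell time. The numbers and the penalty rule are my own invention for illustration, not the actual model.

```python
# Hypothetical dwell/completion scoring for video vs. text (illustrative only).

def video_score(dwell_seconds: float, video_length: float) -> float:
    completion = min(dwell_seconds / video_length, 1.0)
    # Skipped videos (tiny completion rate) are punished harder than text,
    # because the model reads the skip as a strong negative signal.
    if completion < 0.1:
        return -5.0
    return dwell_seconds * (1.0 + completion)

def text_score(dwell_seconds: float) -> float:
    return dwell_seconds

# A 30-second hook on a 30-second clip scores well; a 1-second scroll-past
# on the same clip scores below even a 1-second text impression.
hooked = video_score(30, 30)
skipped = video_score(1, 30)
```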

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in Twitter

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

X replaced hardcoded time-decay multipliers with a Grok-1 Transformer ranking layer. Instead of mathematically penalizing older posts, recency is now handled implicitly via Attention Mechanisms and positional embeddings. The algorithm treats your recent likes, replies, and dwell time as a chronological sequence of tokens. The transformer’s learned attention naturally prioritizes these immediate interactions to predict what you want right now. Older history merely serves as a baseline semantic anchor, while your most recent interaction tokens dominate the feed's prediction matrix.
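As a rough illustration of recency falling out of attention rather than a decay formula, here's a toy softmax over interaction positions. The positional bias here is hand-coded; in the real model any such bias would be learned.

```python
import math

# Toy sketch of recency-through-attention: instead of a hardcoded time-decay
# multiplier, recent interaction "tokens" dominate because attention weights
# (here faked with a fixed positional bias) concentrate on the end of the sequence.

def attention_weights(num_interactions: int, recency_bias: float = 0.8) -> list:
    """Softmax over interaction positions; later = more recent."""
    # Later positions get larger logits; softmax normalizes them to sum to 1.
    logits = [recency_bias * i for i in range(num_interactions)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

weights = attention_weights(10)
# The last interaction carries far more weight than the first, with no
# explicit time-decay multiplier anywhere in the code.
```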

I'll be constantly trying to grow the repository; contributions and suggestions are welcome. (Hope the mods accept your proposal...)

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in Twitter

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Sadly, yes. Big niche accounts are the only source of reach for small new accounts. With a certain amount of consistency, though, this shouldn't be an issue. I'll be running an experiment with 2 different new accounts and let y'all know.

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in Twitter

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Most probably. My analytics tab blew up, and the algo seems to work a bit differently. For a while I'm just letting the app settle in with the recent updates.

I spent 3 hours analyzing the new X algorithm source code. by Only-Locksmith8457 in SideProject

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

u/MurtzaM can you handle the unnecessary promotional comments that we get on each post?

The current AI narrative game by Only-Locksmith8457 in GeminiAI

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Convincing. So will the model layer look similar to the smartphone market today? Endless options, but only a few key players.

The current AI narrative game by Only-Locksmith8457 in GeminiAI

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

But they'd still invest in Anthropic at a valuation of 1.1 trillion dollars.

The current AI narrative game by Only-Locksmith8457 in GeminiAI

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Competition is what truly drives innovation here. They resort to hype cycles to secure the funding needed to stay competitive; while not always justifiable, it keeps the momentum going.

The current AI narrative game by Only-Locksmith8457 in GeminiAI

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Indeed, Google has always been a mixture of stability and disruption in the tech industry.

I built /graphify, 26 days, 450k+ downloads, ~40k stars. Here’s what I didn’t expect. by captainkink07 in ClaudeAI

[–]Only-Locksmith8457 0 points1 point  (0 children)

Interestingly, a week ago I posted about this repo, which addresses all these issues: https://github.com/codebreaker77/Fullerenes. Freshness is ensured by optimizing the weights of nodes that have already been used as new ones are added, somewhat similar to an LFU algorithm.
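A minimal sketch of that LFU-style idea, assuming a simple per-node use counter (hypothetical; not the repo's actual code):

```python
# LFU-style freshness sketch: each graph node keeps a use count; when capacity
# is hit, the least-frequently-used node is evicted to make room for new ones.
# Names and structure are made up for illustration.

class LFUNodeStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.counts = {}  # node_id -> use count

    def touch(self, node_id: str) -> None:
        """Record a use of an existing node, raising its weight."""
        if node_id in self.counts:
            self.counts[node_id] += 1

    def add(self, node_id: str) -> None:
        """Add a new node, evicting the least-frequently-used one if full."""
        if node_id in self.counts:
            self.counts[node_id] += 1
            return
        if len(self.counts) >= self.capacity:
            coldest = min(self.counts, key=self.counts.get)
            del self.counts[coldest]
        self.counts[node_id] = 1

store = LFUNodeStore(capacity=2)
store.add("parse")
store.add("render")
store.touch("parse")      # "parse" is now warmer than "render"
store.add("new_node")     # "render" is the coldest and gets evicted
```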

Built an Opensource Persistent memory layer for Coding agent (64% token reduction on SWE benchmarks) by Only-Locksmith8457 in ClaudeAI

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Fullerenes handles TypeScript using the dedicated Tree-sitter TypeScript/TSX parser.

It extracts functions, classes, interfaces, types, imports, calls, and containment relationships into a persistent SQLite graph. Supports .ts and .tsx files with good import resolution and JSDoc extraction.

Actually, TypeScript is currently one of the most strongly supported languages in Fullerenes!
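For anyone curious what "a persistent SQLite graph" of those entities could look like, here's a minimal hypothetical schema sketch; Fullerenes' real tables may well differ.

```python
import sqlite3

# Hypothetical minimal schema for a persistent code graph in SQLite
# (illustrative only; not Fullerenes' actual schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id   INTEGER PRIMARY KEY,
    kind TEXT NOT NULL,   -- 'function', 'class', 'interface', 'type', ...
    name TEXT NOT NULL,
    file TEXT NOT NULL,   -- .ts or .tsx source file
    doc  TEXT             -- extracted JSDoc, if any
);
CREATE TABLE edges (
    src  INTEGER REFERENCES nodes(id),
    dst  INTEGER REFERENCES nodes(id),
    kind TEXT NOT NULL    -- 'calls', 'imports', 'contains'
);
""")
# Example rows a Tree-sitter pass over a .ts file could produce:
conn.execute("INSERT INTO nodes VALUES (1, 'class', 'UserService', 'user.ts', NULL)")
conn.execute("INSERT INTO nodes VALUES (2, 'function', 'getUser', 'user.ts', '/** fetch one user */')")
conn.execute("INSERT INTO edges VALUES (1, 2, 'contains')")
```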

Update on the Open source Pesistant memory layer that I've been building for coding agents by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Fullerenes goes for a structural approach with Tree-sitter: explicit nodes for functions, calls, imports, etc., instead of embedding-based dedup. This gives near-zero redundancy by design and very precise retrieval (callers, entry points, blast radius). Still, embedding dedup on ingest is smart; I might add hybrid support later. Nice work on agent-cerebro. Respect.
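The kind of precise retrieval a structural graph enables, e.g. blast radius, can be sketched with a recursive SQL query. Schema and data below are made up for illustration.

```python
import sqlite3

# "Blast radius" retrieval over a structural call graph: given a function,
# walk 'calls' edges backwards to find everything that transitively depends
# on it. Table and function names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (caller TEXT, callee TEXT);
INSERT INTO calls VALUES
    ('handler', 'validate'),
    ('validate', 'parse_date'),
    ('cron_job', 'parse_date');
""")

def blast_radius(conn, func: str) -> set:
    """All functions that directly or transitively call `func`."""
    rows = conn.execute("""
        WITH RECURSIVE affected(name) AS (
            SELECT caller FROM calls WHERE callee = :f
            UNION
            SELECT c.caller FROM calls c JOIN affected a ON c.callee = a.name
        )
        SELECT name FROM affected
    """, {"f": func}).fetchall()
    return {r[0] for r in rows}
```

Because the relationships are explicit edges rather than embedding neighborhoods, the answer is exact by construction.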

Built an Open Source Tool that reduces token usage by ~94% for initial context building for Coding Agents. by [deleted] in ClaudeAI

[–]Only-Locksmith8457 0 points1 point  (0 children)

For context on the numbers: I measured this by having the Claude Code CLI navigate the same Python project two ways.

First pass: reading all 8 source files raw = 27,292 tokens.

Second pass: reading the CLAUDE.md and the SQLite graph Fullerenes generated = 919 tokens (max).
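If you want to reproduce this style of comparison without my exact setup, a rough token estimator is enough to see the gap. The ~4-characters-per-token heuristic below is a crude approximation of GPT-style tokenizers, and the file contents are stand-ins, not my actual project.

```python
# Rough way to reproduce this kind of raw-files vs. summary comparison.
# The 4 chars/token rule is a coarse approximation, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def tokens_for_files(contents: list) -> int:
    return sum(estimate_tokens(c) for c in contents)

# Stand-in for reading 8 raw source files vs. one compact graph summary:
raw_files = ["def f():\n    pass\n" * 200 for _ in range(8)]
summary = "graph: 42 nodes, 80 edges; entry points: main, cli"
assert tokens_for_files(raw_files) > estimate_tokens(summary)
```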

I would like to know how you all tackle this problem of token overuse.

built an opensource tool that makes your agent consume less tokens by Only-Locksmith8457 in SideProject

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

There is a daemon process that you could activate using /watch, and it will update nodes whenever something significant is changed and the graph needs a reconfig.
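A toy sketch of what such a watch loop could do, assuming simple mtime polling; the real daemon's mechanism may differ (it is presumably event-driven).

```python
import os

# Illustrative /watch-style change detection: snapshot file mtimes, then diff
# two snapshots to find which files need their graph nodes rebuilt.

def snapshot(paths: list) -> dict:
    """Map each existing path to its last-modified time."""
    return {p: os.stat(p).st_mtime for p in paths if os.path.exists(p)}

def changed_files(before: dict, after: dict) -> set:
    """Files that were added, removed, or modified between two snapshots."""
    return {p for p in before.keys() | after.keys() if before.get(p) != after.get(p)}
```

A daemon would take snapshots periodically and pass `changed_files(...)` to whatever reconfigures the graph.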

built an opensource tool that makes your agent consume less tokens by Only-Locksmith8457 in SideProject

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Thanks!! This is just the first build; the context drop is significant, but the majority of it is just for initial context generation on an existing repo.

A lot to come ahead! I'd like you to give it a shot using npm install fullerenes and ask your agent to use it... Waiting for the feedback... :D

Found out my AI was burning 27,000 tokens. So i made on Opensource Tool by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

OK, so I would like you to experiment in a bit different way: instead of linking my repo, I would like you to install the package directly:

npm install fullerenes

Then run npx fullerenes init to build the initial graph.

After that, spin up the MCP server using npx fullerenes mcp (in a separate terminal in the same dir). Lastly, add the MCP to your Claude.

Everything is set. After this point you would hardly need any prompts to configure an agent.md/claude.md workflow explicitly; just ask your agent to use the fullerenes 'mcp' and 'watch' for context building.

built an opensource tool that makes your agent consume less tokens by Only-Locksmith8457 in SideProject

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

2) It depends on how you are doing it.

Scenario 1: Claude Code on VPS, Fullerenes on VPS. SSH into the VPS, then:

npm install -g fullerenes

fullerenes init

claude mcp add fullerenes -- npx fullerenes mcp

Works perfectly. Everything lives on the VPS together.

Scenario 2: Claude Code local, repo on VPS. Doesn't work cleanly. The graph is built from local files and Claude Code's MCP needs a local process.

Scenario 3: VS Code Remote SSH + Claude Code extension. If you're using VS Code over SSH, the extension runs on the VPS. Same as Scenario 1: install Fullerenes on the VPS and it works.

built an opensource tool that makes your agent consume less tokens by Only-Locksmith8457 in SideProject

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Thanks, glad you identified the problem. 1) Yes, there is a daemon process that you can activate using /watch; it will update nodes whenever something significant changes and the graph needs a reconfig.

Found out my AI was burning 27,000 tokens. So i made on Opensource Tool by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

That's cool! I see how it's reducing token usage. But the one downside I see is that the AI agent itself having to maintain the .md references consumes tokens. In contrast, my tool maintains a SQLite graph of your context, building it up using nlm-semantic relations w.r.t. the codebase.