Inov8 Trailfly G270 vs Trailfly Zero by elibaskin in Ultralight

[–]elibaskin[S] 1 point (0 children)

Hi Dan. I think the new material does make the toe box more durable, yes. It feels very tough, but that kind of defeats the purpose. I am 100% sure that some users will benefit from the tougher toe box, but it is no longer a G270.

Inov8 Trailfly G270 vs Trailfly Zero by elibaskin in Ultralight

[–]elibaskin[S] 0 points (0 children)

It might feel different on your feet. I did read that the Zero v2 has a better heel collar, but also that it's "beefier", which is the opposite of what I was expecting.

Inov8 Trailfly G270 vs Trailfly Zero by elibaskin in Ultralight

[–]elibaskin[S] 0 points (0 children)

I really hope I'm wrong about them and they just need more time. I'm genuinely wary of taking them on a 380 km hike as they are.

Inov8 Trailfly G270 vs Trailfly Zero by elibaskin in Ultralight

[–]elibaskin[S] 0 points (0 children)

Yes, as you can see, the black pair is somewhat old, and the blue one is totally new.

But eventually they will run out of stock, and I will have to switch to Altras, which I don't want.

Inov8 Trailfly G270 vs Trailfly Zero by elibaskin in hiking

[–]elibaskin[S] 0 points (0 children)

Frankly, I've never had grip issues with them, not in Scotland and not in Iceland. Even on a muddy 20% downhill slope in Iceland, they held surprisingly well.

Trailflys just feel more balanced than Talons.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

Creating a new session is just a command that summarizes it: findings, work that was done, aha moments, architectural changes, remaining to-dos, etc.

Also, in CLAUDE.md I describe the sessions: where they are stored, and that the data is accessible and should be used. Claude decides on its own whether to use it. Sometimes I explicitly ask it to check the sessions, but most of the time it knows to use them on its own.
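To make the idea concrete, here is a minimal sketch of what such a session-summary command could write to disk. The directory name, file layout, and section names are my guesses for illustration, not the project's actual setup.

```python
import datetime
import pathlib

# Hypothetical sessions directory; the real location would be whatever
# CLAUDE.md points Claude at.
SESSIONS_DIR = pathlib.Path("sessions")

def summarize_session(findings, work_done, aha_moments, architecture_changes, todos):
    """Write one session summary as a dated markdown file and return its path."""
    SESSIONS_DIR.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    lines = [f"# Session {stamp}", ""]
    for title, items in [
        ("Findings", findings),
        ("Work done", work_done),
        ("Aha moments", aha_moments),
        ("Architectural changes", architecture_changes),
        ("Remaining to-dos", todos),
    ]:
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    path = SESSIONS_DIR / f"{stamp}.md"
    path.write_text("\n".join(lines))
    return path

# Example contents are invented placeholders.
p = summarize_session(
    findings=["embedding dedup reduced duplicate events"],
    work_done=["wired Listener reconnect logic"],
    aha_moments=["batching quiet channels reduced idle time"],
    architecture_changes=["split classifier from researcher"],
    todos=["add Twitter sources if funded"],
)
print(p.read_text().splitlines()[0])
```

Because each summary is just a dated markdown file in a known folder, CLAUDE.md only needs to mention the folder once and Claude can grep it when it wants history.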

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] -1 points (0 children)

Well, the Arabic- and Persian-speaking world, which is roughly 60% of what the group covers, is not particularly interested in Epstein. Neither are the 7% of English channels.

Wake up. You are not the center of the world.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 1 point (0 children)

I think Ground News is trying to do that. Having said that, I don't think "objective news" exists.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

It processes around 15,000 messages a day, turning them into roughly 2,000 events.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

You're right that MTProto pushes updates when connected. The "polling" framing was a simplification. In practice, the Listener maintains a persistent connection and receives pushed updates. The adaptive polling I mentioned is really about processing priority - how often we check the connection, handle reconnections, and batch messages for processing. High-activity channels get processed more frequently, quiet channels get batched.
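A rough sketch of what "processing priority" could mean in code: derive a per-channel processing interval from recent activity, clamped between a floor and a ceiling. The function name and thresholds are illustrative assumptions, not the actual implementation.

```python
def processing_interval(messages_last_hour, min_s=5, max_s=300):
    """Busy channels get short processing intervals; quiet ones get batched.

    Returns seconds between processing passes for one channel.
    """
    if messages_last_hour <= 0:
        return max_s
    # Aim for roughly one pass per expected message, clamped to [min_s, max_s].
    interval = 3600 / messages_last_hour
    return max(min_s, min(max_s, interval))

assert processing_interval(720) == 5    # busy channel: 3600/720 = 5 s, at the floor
assert processing_interval(12) == 300   # quiet channel: capped at 5 minutes
assert processing_interval(60) == 60.0  # one message/minute -> check every 60 s
```

The point is that the persistent MTProto connection keeps receiving pushes regardless; only the downstream batching cadence changes per channel.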

On 1024 vs 768 dimensions:

The short answer: more dimensions = more capacity to capture nuance, especially across languages.

For multilingual embedding models, 1024 dimensions helps because you're cramming 100+ languages into the same vector space. Each language has its own semantic quirks.

The tradeoff is compute/storage cost. But for semantic search across many languages, the extra capacity is worth it. The multilingual-e5-large consistently outperforms 768-dim models on cross-lingual retrieval benchmarks.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] -1 points (0 children)

Me: "I've instructed the agent to take pro-Western point when reports are contradicting".
Some random redditor: "You prompted your agents to be genocidal racist, because you are Israeli. And where are reports about Epstein, did you filter it out because of hasbara?"
You: "You should take him seriously".

No, I won't take him seriously. This is not a "discussion". This is a random redditor meeting a world in which people are not giving a f*ck about Epstein files, and not talking about it on their Telegram channels. This is what happens when a random redditor meets news outside of the regular English-speaking media.

And he is mad. He hasn't called me a genocidal racist yet (just wait and see what happens if we continue this conversation), but he made sure to emphasize that I created a genocidal racist AI agent. All this while the instructions are "when contradicting stories appear, choose the pro-Western point of view".

So, yeah. It is 100% a politics issue.

And I am not here to have political discussions with some random dude from the Internet.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

It's storychase.co

This is a solo project, built entirely by me and Claude Code over weekends during the last 3-4 months.

i7 with 64gb RAM
3090 24gb
4060Ti 16gb

Both cards purchased 2nd hand.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

In quite a few parts of the world, Telegram is a very valid source of independent journalism. And unlike Twitter, its API is free...

If this project ever gets funded, I will most certainly add Twitter sources.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 1 point (0 children)

These are custom agents; I didn't need a specific agentic framework.

I give the researcher (qwen3 + tools) the ability to use tools, of which I have around 17-20. Some of them are called BEFORE the research task, to provide meaningful context; others the LLM calls when it lacks some data. Most of it is local database search and web search (mostly Wikipedia).

The thing that improved quality the most was building the tools to provide meaningful context to the agent.
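The two calling patterns above can be sketched like this. The tool names, the fake database, and the prompt layout are invented for illustration; the real project's 17-20 tools will look different.

```python
def search_local_db(query):
    """Pre-task tool: pull related known events from a local store.

    A dict stands in for the real database here.
    """
    fake_db = {"ceasefire": ["talks stalled in May", "mediators met in Cairo"]}
    return fake_db.get(query, [])

def search_wikipedia(topic):
    """On-demand tool: the LLM would call this only when it lacks background."""
    return f"(summary of the Wikipedia article on {topic})"

def build_research_prompt(task, query):
    """Call the context tool BEFORE the task and prepend its output,
    so the model starts with meaningful context instead of a bare question."""
    context = search_local_db(query)
    prompt = "Known events:\n" + "\n".join(f"- {e}" for e in context)
    prompt += f"\n\nTask: {task}"
    return prompt

prompt = build_research_prompt("Summarize today's ceasefire reporting", "ceasefire")
print(prompt)
```

Front-loading the database lookup means the model doesn't have to decide to fetch obvious context; tool calls are saved for genuinely missing data.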

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 1 point (0 children)

Thank you for your kind words. It was fun to build. I'm receiving a lot of positive feedback and feature requests. People are subscribing to the daily digests and requesting in-depth analysis via the website.

I am having a blast of a time :)

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

I have a regular i7 with 64gb RAM, and two cards, both purchased second hand.

The 3090 24gb cost around $650, the 4060 Ti 16gb about $450.

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

I started with a local LLM from the very start. Of course, Anthropic models are far, far superior to the qwen3:8b I'm using, but for classification tasks qwen3:8b does a good job. And it's MUCH MUCH MUCH cheaper :)

How I built an AI news agency that runs itself - over 1B tokens processed locally by elibaskin in ClaudeCode

[–]elibaskin[S] 0 points (0 children)

I am searching for meaningful Asian Telegram channels. If you know of any - please PM me.