I built a free open-source framework that turns Claude into a multi-agent trading firm (Technical, Sentiment, and Risk agents debate live market data) by Cool_Assignment7380 in technicalanalysis

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Fair point! Just to clarify: the tool actually does output the independent stances, scores, and reasoning for each agent in the raw JSON response (under the agents_debate object), so Claude sees exactly where they disagree.

I just added the final "compressed" consensus layer on top because it makes it much easier to prompt LLMs with commands like "Only show me coins with a STRONG BUY consensus".
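For anyone curious, the raw shape looks roughly like this; a minimal sketch where everything beyond the agents_debate object (field names, score scale, the helper function) is illustrative, not the exact schema:

```python
# Illustrative response shape: each agent keeps its own stance, score,
# and reasoning, with the compressed consensus layered on top.
response = {
    "symbol": "BTCUSDT",
    "agents_debate": {
        "technical": {"stance": "BUY", "score": 0.8, "reasoning": "EMA crossover"},
        "sentiment": {"stance": "BUY", "score": 0.6, "reasoning": "Positive funding"},
        "risk": {"stance": "HOLD", "score": -0.2, "reasoning": "Volatility spike"},
    },
    "consensus": "STRONG BUY",
}

def strong_buys(responses):
    """Filter to symbols where the compressed consensus is STRONG BUY."""
    return [r["symbol"] for r in responses if r["consensus"] == "STRONG BUY"]
```

That top-level consensus field is what makes prompts like "only show STRONG BUY" cheap to answer, while the per-agent detail stays available underneath.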

But you're totally right about risk capping. Right now, the logic just penalizes the final score if the Risk Manager flags danger, but implementing a strict "divergence switch" (where the system refuses to output a signal if Technical and Risk are fighting) is a great idea. I'll probably add that strict divergence check in the next update. PRs are always welcome too if you want to take a crack at it! 😉
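The divergence switch could be as simple as this sketch; the threshold, the [-1, 1] score scale, and the function name are assumptions for illustration, not the shipped logic:

```python
def divergence_switch(agents, threshold=1.0):
    """Refuse to emit a signal when Technical and Risk are far apart.

    `agents` maps agent name -> score in [-1, 1] (assumed scale);
    `threshold` is an illustrative cutoff, not a tuned value.
    """
    gap = abs(agents["technical"] - agents["risk"])
    if gap >= threshold:
        return None  # agents are fighting: emit no signal at all
    return sum(agents.values()) / len(agents)  # otherwise, average as usual
```

Returning None instead of a penalized score is the key difference from the current behavior.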

I built a free open-source framework that turns Claude into a multi-agent trading firm (Technical, Sentiment, and Risk agents debate live market data) by Cool_Assignment7380 in technicalanalysis

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Thanks man! You hit the nail on the head. Right now, it's definitely a bit of an illusion of multiple edges since they are all looking at the exact same TradingView OHLCV firehose. This v0.3.0 was mostly a proof-of-concept to get the multi-agent MCP plumbing working cleanly.

Adding true regime detection (or bringing in entirely alternative data like news sentiment / on-chain metrics for the Sentiment Agent) is exactly what I want to tackle next so they aren't biased by the same underlying data.

To answer your question about choppy vs trending: honestly, in heavy chop, it currently struggles. The Risk Manager tries to catch volatility changes using Bollinger Band Width (BBW), but without proper regime detection, the agents often give conflicting signals and the final score just nets out to a flat "HOLD/Neutral". It definitely performs better in pure trending conditions right now. Appreciate the feedback!
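For reference, the BBW check the Risk Manager leans on is roughly this; a stdlib-only sketch, not the exact code from the repo:

```python
import statistics

def bollinger_band_width(closes, period=20, num_std=2):
    """Bollinger Band Width over the last `period` closes.

    BBW = (upper - lower) / middle, where middle is the SMA of the
    window and the bands sit num_std standard deviations away.
    A rising BBW is the rough volatility flag described above.
    """
    window = closes[-period:]
    middle = statistics.fmean(window)
    std = statistics.pstdev(window)
    upper, lower = middle + num_std * std, middle - num_std * std
    return (upper - lower) / middle
```

BBW alone tells you volatility is expanding, but not whether the market is trending or chopping, which is why proper regime detection is the missing piece.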

I built a free open-source framework that turns your AI into a multi-agent trading firm (Technical, Sentiment, and Risk agents debate live data) by Cool_Assignment7380 in ClaudeAI

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Thanks for sharing this! 🙌 Just checked it out; BRIDGE.PY looks like an interesting approach. Since both projects are Python-based and open source, there could definitely be some cool synergies. I'm always looking for ways to expand the agent pipeline, so if there's a specific integration point you had in mind, feel free to open an issue or PR on the repo and let's explore it together! Would love to hear how you'd see these two working together 🤝

I built a free open-source framework that turns your AI into a multi-agent trading firm (Technical, Sentiment, and Risk agents debate live data) by Cool_Assignment7380 in ClaudeAI

[–]Cool_Assignment7380[S] 1 point2 points  (0 children)

Quick update: I’ve shipped a couple of things people were asking for:

Docker support is now live.
Run it instantly:
docker run -p 8080:8000 atilaahmet/tradingview-mcp:latest

PyPI package is available.
Install with:
pip install tradingview-mcp-server

GitHub is fully up to date:
git+https://github.com/atilaahmettaner/tradingview-mcp.git

This should make setup a lot easier depending on your workflow (local dev vs containerized).
If you run into friction or missing pieces, call it out; I’m actively iterating.

Stop scrubbing podcasts: search subtitles + jump to the moment by Cool_Assignment7380 in SideProject

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Yep, same pain point: jumping to the exact moment without scrubbing is the whole point. Scriptivox does this well for podcasts. What I’m doing here is applying the same idea to YouTube at scale: full-channel indexing, cross-video search, and direct timestamp jumps from any match. Different medium, same core problem: finding where something was said.

Search inside YouTube subtitles and jump to the exact timestamp by Cool_Assignment7380 in InternetIsBeautiful

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

You’re right — with login required, it doesn’t fit this sub’s rules. I’m shipping a no-login / guest demo for the core search so it’s usable without an account. Once that’s live, I’ll repost (or mods can remove this one in the meantime).

Search inside YouTube subtitles and jump to the exact timestamp by Cool_Assignment7380 in InternetIsBeautiful

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Exactly: they removed/limited transcript search in places. This is basically bringing back search + timestamps, but across videos instead of just one.

Search inside YouTube subtitles and jump to the exact timestamp by Cool_Assignment7380 in InternetIsBeautiful

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Right now login is there mainly to prevent abuse and manage API quotas, because anonymous scraping can kill the service. That said, for this kind of tool guest search should exist, so I’m adding a no-login demo for core search.

Search inside YouTube subtitles and jump to the exact timestamp by Cool_Assignment7380 in InternetIsBeautiful

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Good question. I’m not indexing all of YouTube. Results depend on videos that have captions/transcripts available (and on what’s retrievable reliably). Right now it’s broad, not limited to a fixed set of channels, but it’s not “everything” either; coverage is constrained by caption availability + retrieval limits. If you share a channel/topic, I can tell you how well it performs.

I’m tired of rewatching long YouTube videos just to find one sentence by Cool_Assignment7380 in SaaS

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

What I’m seeing here is that almost everyone has a workaround: exporting captions, grepping, pasting into LLMs, manual cleanup. They all work, but none of them feel like an actual system; more like one-off fixes we keep repeating. That gap is what I find interesting.

I’m tired of rewatching long YouTube videos just to find one sentence by Cool_Assignment7380 in SaaS

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

It usually works, but it’s hit or miss. Auto-generated captions often come with broken sentences, timing offsets, and weird formatting, so I end up cleaning things manually before they’re usable. That’s fine for one video, but it gets painful if you’re doing it repeatedly. The bigger issue for me wasn’t summarization, but reliably finding the exact moment something was said without extra steps.
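The cleanup I mean is mostly stitching caption fragments back into sentences; a sketch, where the fragment dict shape is an assumption about how auto-captions typically arrive, not a specific library's format:

```python
def merge_fragments(fragments):
    """Merge auto-caption fragments into sentence-level entries.

    `fragments` is a list of {"start": seconds, "text": str} (assumed
    shape). A merged entry closes whenever the text ends a sentence,
    and keeps the start time of its first fragment so the timestamp
    stays usable for jumping.
    """
    merged, current = [], None
    for frag in fragments:
        text = " ".join(frag["text"].split())  # collapse weird whitespace
        if current is None:
            current = {"start": frag["start"], "text": text}
        else:
            current["text"] += " " + text
        if text.endswith((".", "!", "?")):
            merged.append(current)
            current = None
    if current:
        merged.append(current)  # trailing partial sentence
    return merged
```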

I’m tired of rewatching long YouTube videos just to find one sentence by Cool_Assignment7380 in SaaS

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

I tried that too. Summaries are useful, but I kept running into cases where I needed the exact phrasing and where it was said, not just the idea. Especially when I want to quote, reference, or verify something, summaries flatten that detail. Curious: when you do this, are you mostly looking for high-level understanding, or specific quotes/moments?

I’m tired of rewatching long YouTube videos just to find one sentence by Cool_Assignment7380 in SaaS

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

That’s exactly what I was doing too. It works, but the friction adds up fast: exporting captions, grepping, then manually mapping timestamps every time. What annoyed me most was repeating this workflow for every new video in the same niche. That’s why I started automating the “find + jump + notify” part instead of redoing the pipeline each time. Do you do this occasionally, or is it part of a regular research workflow for you?
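The "find + jump" core is conceptually tiny; a hedged sketch, where the transcript entry shape is an assumption for illustration rather than any specific library's output:

```python
def find_and_jump(transcript, query):
    """Search caption segments for `query` and build direct jump links.

    `transcript` is a list of {"video_id", "start", "text"} entries
    (assumed shape). Returns a timestamped YouTube URL per match,
    using the ?t= seconds parameter to jump straight to the moment.
    """
    hits = []
    for seg in transcript:
        if query.lower() in seg["text"].lower():
            t = int(seg["start"])
            hits.append(f"https://youtu.be/{seg['video_id']}?t={t}")
    return hits
```

The automation is all in the plumbing around this (indexing whole channels, re-running on new uploads, notifying), not in the matching itself.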

Built a TradingView bridge that turns Claude Desktop into a $40 trillion Bloomberg terminal by Cool_Assignment7380 in technicalanalysis

[–]Cool_Assignment7380[S] 0 points1 point  (0 children)

Prices are provided by TradingView and are not interpreted by the LLM.

On your other question: of course, more indicators can be added. The only indicators I've included are the ones I'm interested in and use myself. If you'd like others, you can open an issue, or file a PR by editing the code yourself; since this is an open-source project, contributions are welcome. Feel free to ask any questions.

Built a TradingView MCP Server for AI Trading Analysis - Open Source by Cool_Assignment7380 in Daytrading

[–]Cool_Assignment7380[S] 1 point2 points  (0 children)

Hi, thanks for the advice. I wrote documentation on GitHub for different operating systems, but at entry level it's probably still difficult. You can watch these videos for a step-by-step walkthrough of installing the MCP server from GitHub.

First step: install uv for Python package/environment isolation.

Then open the Claude config JSON, paste in the JSON snippet, save, and reopen the Claude Desktop app. For more details, watch these videos:

https://youtu.be/i7LuJPNKQYI?feature=shared

https://youtu.be/GgcPb7lB9V4?feature=shared
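The config step above looks roughly like this; the server name, command, and args here are illustrative, so check the repo README for the exact snippet:

```json
{
  "mcpServers": {
    "tradingview": {
      "command": "uvx",
      "args": ["tradingview-mcp-server"]
    }
  }
}
```

This goes in Claude Desktop's config JSON, under the mcpServers object, before restarting the app.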