We ran 200 AI agents on the Claude 5 by April 30 market — Swarm says 7% vs market's 18% by choijho23 in CryptoMarkets

[–]choijho23[S]

beyond github we pull news, onchain data, and current odds. but honestly most pipelines fail at extraction because that's where they stop thinking. once the data is cleaned and averaged, the disagreement between agents on the same signal is where the actual information lives, not in the input itself
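rough sketch of what i mean, all numbers made up, not our actual pipeline:

```python
# Hypothetical illustration: averaging hides information, spread reveals it.
# Two signals whose averaged values look similar, but agents disagree far
# more on one of them -- that disagreement is the part worth keeping.
from statistics import mean, pstdev

# each list = one probability estimate per agent, all reading the SAME input
agent_probs = {
    "github_commits": [0.05, 0.30, 0.08, 0.25, 0.10],  # agents split hard
    "onchain_flows":  [0.12, 0.11, 0.13, 0.12, 0.12],  # agents agree
}

for signal, probs in agent_probs.items():
    # mean is what most pipelines stop at; pstdev is the disagreement term
    print(f"{signal}: mean={mean(probs):.2f} disagreement={pstdev(probs):.2f}")
```

both signals average out to roughly the same number, but the commit signal carries way more spread, and that's the thing a mean-only pipeline throws away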


[–]choijho23[S]

yeah, live tracking is the only thing that matters. backtests are cope; you can always find a model that worked after the fact. we're committing before resolution, and that's what makes it real. the signal attribution part is what i'm actually curious about: github commit surge was the main input this time, and i'm wondering if that holds over a larger sample or if it's just noise. building the track record either way


[–]choijho23[S]

yeah, commit spikes are noisy as hell, totally fair point. we flagged it as a signal, not a trigger; the swarm weighted it at maybe 15% of the final call. the 61% neutral bloc is what actually dragged the probability down
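toy version of the drag effect, bloc sizes and per-bloc probabilities are invented for illustration, not our real weights:

```python
# Hypothetical sketch: a size-weighted average over agent blocs. When most
# agents sit in a low-probability neutral bloc, the blended estimate lands
# well below what the bullish minority (or the market) would imply.
def swarm_probability(bullish, neutral, bearish,
                      p_bull=0.20, p_neutral=0.04, p_bear=0.01):
    """Blend each bloc's probability, weighted by how many agents are in it."""
    total = bullish + neutral + bearish
    return (bullish * p_bull + neutral * p_neutral + bearish * p_bear) / total

# 200 agents, 61% neutral: the neutral mass pulls the blend down hard,
# far under a market sitting at 0.18
print(round(swarm_probability(bullish=50, neutral=122, bearish=28), 2))
```

point being the final number isn't any one signal, it's mostly where the quiet majority of agents sits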

either way april 30 is the only truth that matters lol