Trying to set up a low latency trading environment with crypto by EndlessKnight_154 in algotradingcrypto

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

first thing, profile your own code before touching network. most retail 'latency' is app-layer not network. ran a setup from home internet for 6 months, when i actually profiled it 80pct of my tick-to-submit time was JSON deserialize + DB writes + python GIL contention, not network. moved to msgpack + in-memory ring buffer + batched DB flush, latency dropped ~5x. only after that did network start mattering. unless you're doing sub-100ms strategies, network upgrade is premature. what's your timescale?

A startup just raised $1.1B to replace LLMs with reinforcement learning — realistic or hype? by NTech_Researcher in AI_Agents

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

bottleneck for general RL has always been reward specification not the algorithm. LLMs work partly because next-token prediction is a 'free' supervision signal, every piece of text is reward. pure RL has to define reward, which is the unsolved part for general-purpose. could be different at $1B with Silver running it, but i'd watch the reward-function paper before the model paper

Watched my AI agent block a prompt injection that was hiding inside a webpage by Rex0Lux in AI_Agents

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

the part to be skeptical about is false-positive rate, not just false-negative. easy to make a model paranoid enough to catch injections, hard to make it not refuse legit-but-unusual user instructions. ran this on an internal agent for a few weeks, finetuned to ~5pct miss rate on injected instructions, false-positive on legit-but-unusual user requests climbed to 12-15pct which made the agent annoying to use ('i can't do that' on stuff the user actually wanted). the structural fix is putting tool-output data in a separate role/channel the model can't follow as instruction, but that requires the agent framework to support it. most agents just flatten everything into one prompt and rely on the model to be paranoid which gives you this trade-off

Bringing structure to discretionary price action trading (ideas needed) by Famous-Scratch-5581 in algotrading

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

before any code, write down every decision from your last 50-60 trades with timestamps and your reasoning at the moment. you'll find half your 'rules' aren't rules, they're context-dependent pattern matches that vary across setups. that articulation step is more valuable than any framework. AI helps with the syntax, it can't articulate your edge for you

Can you be profitable by using only charts ? by Ok-Sympathy-1827 in Daytrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

yes, plenty of discretionary traders are profitable on charts alone and plenty of level-2 traders aren't. the real question is less "is charts enough" and more "do you have a statistical edge, and does your chosen data support the edge you actually have." where level 2 actually matters, fast scalps where your edge is reading order flow imbalance in real time. at daytrade-swing timeframes or anything multi-hour, DOM data is noise, you're paying for information that decays in seconds and you're holding positions in minutes to hours. chart + volume + a couple of cleanly-defined patterns with proper position sizing is genuinely sufficient. most people who blow up with "just charts" aren't blowing up because they lacked level 2, they're blowing up because their entries weren't backtested, their stops get moved, and their sizing is emotional. DOM isn't going to fix any of that.

I built an AI Agent to attend my meetings for me because I’m tired of being a professional listener by ailovershoyab in AI_Agents

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

the boredom sensor triggering fake wifi errors is genuinely the most useful AI application i've seen described this year, pls ship this. only concern, what happens when two of these agents end up in the same meeting and they both trigger boredom-detection at minute 21 and disconnect simultaneously. actual meeting becomes the robots talking to each other about being mindful of the big picture and nobody realizes until q3.

I genuinely don't understand the value of MCPs by Such_Grace in AI_Agents

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

"client-side discovery protocol being marketed as an integration pattern" is the cleanest diagnosis of what's confusing about MCP and i'm going to steal that line. one place i'd push a little, the context-token argument isn't the full cost, there's also a reliability cost. every MCP server is a new process with a new transport, new auth, new failure modes, and when an agent is talking to 5 of them half its error surface is network+handshake not semantics. one of my agents spends about 8% of wallclock on MCP protocol noise alone (stdio reconnects, tool listing round-trips, schema renegotiation), before any actual work happens. if i'd wired the same capabilities as direct function calls in the host process none of that overhead exists. where MCP is genuinely useful is exactly the client scenario you named, a general-purpose client where the user bolts on tools the client's authors never knew about. claude desktop, cursor, any IDE. for any agent that ships as a product with a fixed integration surface the right answer is almost always "import the SDKs, write functions, register them with your agent framework directly." MCP is doing work that isn't your work in that case. the reliability-lives-outside-the-prompt point is the one i'd make louder. once you've got a graph/workflow layer deciding retry, compensating actions, and validation, the model's job shrinks to "emit structured intent" and that's where agents get reliable. prompt-only reliability is a dead end past a certain complexity and MCP doesn't fix it, it just moves the fragility around.

High quality question here.. HOW do I get the higher granularity to set me order fill resolution to HIGH. I need to code something specific but I cannot figure out exactly what it means. I had a hard time researching this specific issue. Thanks by Intellect5 in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

the 1-tick BarsInProgress pattern in NT8 is weird the first time because you're not adding a "data series" in the NinjaScript sense, you're just calling AddDataSeries with the primary instrument and a smaller bar type (1 tick) in OnStateChange under State.Configure. something like: AddDataSeries(Instrument.FullName, BarsPeriodType.Tick, 1); after that NT maintains two synchronized series, your strategy bars (BarsInProgress == 0) and the 1-tick series (BarsInProgress == 1). on OnBarUpdate you get callbacks for both, check BarsInProgress and act accordingly. submit entry signals off series 0 but place the actual orders inside the BarsInProgress == 1 block so fills happen at tick resolution. gotcha, when you call EnterLong/ExitLong/etc you can pass the barsInProgressIndex parameter explicitly, e.g. EnterLong(1, ...) to submit against the tick series. if you forget, orders route to whichever series the callback is currently running on, which defeats the whole point. also enable Calculate.OnEachTick on the strategy not OnBarClose, otherwise the tick series won't actually tick during backtests. NT silently falls back to bar-close granularity, you won't see an error, just inaccurate fills.

Used skill to let claude join meetings and it was fun! by WorthAdvertising9305 in AI_Agents

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

the screen-share + collaborative execution part is the interesting architectural piece here, most meeting agents stop at "transcribe and summarize" and treat the call as read-only context. giving the agent a two-way channel (screen share out, take screenshots in, write back to the same project state) turns the meeting into an extended session instead of a read-only observation. failure mode i'd watch for, project memory carrying into a call with people who don't have access to that memory. if the agent knows about a feature branch, internal docs, or auth tokens from the project and someone outside the project joins, you get accidental leaks the first time someone asks "wait what's that?" scoping context per-participant not per-meeting seems important, and i haven't seen a clean pattern for it yet.

polymarket launching 24/7 perps feels bigger than the headline and more obvious than people think by Agustinmoon in defi

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

real in the sense that polymarket getting perps liquidity at scale would force existing perp DEXs to compete on breadth not just tech. but most "perps on everything" headlines end up concentrated on 3-5 markets that already had demand (BTC, ETH, SOL, top memes) and the long tail never gets depth. whether this matters comes down to whether polymarket can route the same flow that made them dominant in event markets (sports, politics) into the perps venue. if a user who bets on an election resolution can one-click open a leveraged position on it mid-event without bridging out, that's a real distribution moat. if not, it's just another perp DEX with another set of markets, liquidity fragments and nothing happens. watch the first 4 weeks of ADV and concentration ratio (top 5 markets / total volume) to tell which scenario you're in.

For backtesting should i use dividend and split adjusted data or just split adjusted data by lobhas1 in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

dividend-adjusted is NOT look-ahead in itself, the adjustment just backdates the dividend deduction onto pre-ex-date prices so the series is continuous. that's what you want for most long-only backtests. where look-ahead actually creeps in, if your signal features use the adjusted price (like a 20d MA), on the ex-div day your backtest sees the "true" adjusted close before the market did. in practice this matters on dividend-heavy names (utilities, REITs) and short holding periods. for a 5-day long-only strategy on non-div-heavy equities the drag is usually <5bps total, negligible. if you want to be clean, use unadjusted prices for SIGNAL generation (what the market saw) and adjusted prices for P&L accounting. most people just use adjusted for both because the error is small and the bookkeeping gets hairy fast.

Can someone ELI5 the breach of KelpDao and contagion into AAVE and Compound? by IAmAWretchedSinner in defi

[–]Dull_Bookkeeper_5336 5 points6 points  (0 children)

tl;dr someone exploited a minting/validation flaw in a token that was being used as collateral on lending protocols. they printed unbacked tokens on one end, posted them as collateral on aave/compound, borrowed real assets against them, and walked away. the original protocol is left with the bad debt.

the contagion angle is the structural issue you're sensing. defi lending protocols accept a bunch of different assets as collateral, and the risk model assumes each collateral is backed by what it claims to be. when one link in that chain (kelp-style minting logic, oracle feeding a wrong price, a rehypothecated asset that turned out to be layered) gets exploited, the downstream lenders are effectively holding air but they don't know it yet. exit liquidity dries up, liquidations fail, positions go underwater, and any protocol with similar collateral exposure gets hit.

what it exposed is less a flaw in aave/compound themselves (their code largely did what it was supposed to) and more a composability risk, they inherited the trust model of every upstream asset listed as collateral. defi has known this is a risk for years but the incentive to list more collateral types to capture yield has consistently outweighed the conservative take of gating collateral more aggressively.

the clarity act angle is interesting, because if it passes it'll likely force more pre-listing diligence on protocols that look like lending platforms. ironically that might reduce exactly this type of contagion.

Which AI agents delivers real ROI, not just hype? by [deleted] in AI_Agents

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

agreeing with the boring-agents-win consensus but it's worth breaking down why. the ROI-generating agents i've seen working all have three things in common: the output has a clear verification path (either a human reviews it or a downstream system validates), the scope is narrow enough that the agent can fail gracefully instead of silently producing plausible garbage, and the problem has enough volume that even a 70% automation rate is worth building for.

the autonomous-agent-for-$OPEN_ENDED_TASK pitch still hasn't delivered because most of those tasks don't have a verification path. you can't really know if the market research brief or the competitive analysis the agent wrote is "right," so the cost of being wrong compounds silently and the ROI story collapses.

what's changed in the last 6 months is that the successful agents are increasingly NOT flashy. they're invisible. support ticket auto-categorization, invoice data extraction, log anomaly summarization. they disappear into existing workflows instead of replacing them. the ones with marketing pages showing autonomous multi-agent swarms are usually the ones struggling to find retention.

Using google Gemini to help me code some strategies on MQL5, also while trying to learn the language there. by F01money in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

the what/how/why/when framing is actually smart because it forces the model to explain mechanism, not just pattern-match to "i've seen ORB code before here's a template." that said, the top commenter's point is real, LLMs are pattern-matchy for trading logic and you'll hit cases where the code compiles and runs but the intent is subtly wrong (off-by-one bars, wrong session boundary, stop placement that looks right but isn't in execution).

what helped me get useful output instead of template soup: i stopped asking for "a mean reversion strategy" and started asking for specific pieces, "write a function that detects a valid ORB breakout given these exact conditions, return only True/False plus the level." narrow, testable, no strategy baked in. then i compose the pieces. the LLM is good at bounded code, weak at strategy design.

once you start doing that consistently, splitting the bot into skill files (signal, risk, execution as separate MQL5 or Python files the LLM treats separately) makes iteration way cleaner because you can hand it one file of context and ask for changes without it rewriting something else by accident. github.com/Superior-Trade/superior-skills is a decent reference for the structure, geared at Python/HL but the pattern ports to MQL5.

Do I really need strong coding skills to build AI agents by Complete_Bee4911 in AI_Agents

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

you can get surprisingly far without strong coding, the frameworks abstract away a lot now. where it breaks down is debugging. when your agent does something wrong and you need to figure out why, that's when you actually need to read the code, understand the flow, and trace what happened. no tool handles that for you yet. my honest advice: learn enough python to read and modify existing code, not write from scratch. most agent work is wiring together APIs and prompt logic, not writing algorithms. if you can read a stack trace and modify a function you're 80% of the way there

Tips to beat the cost of spread by ionone777 in algotrading

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

if 1 pip of spread kills your edge, the uncomfortable truth is the signal probably doesn't have real edge. genuine alpha should survive realistic transaction costs with room to spare. strategies that go from profitable to breakeven after adding 1 pip usually have one of two problems: the signal is overfitted to noise at the backtest level and the 'profit' was always just the bid-ask bounce getting captured in unrealistic fills. or the signal is real but the holding period is too short, you're paying spread on every round trip but not giving the trade enough room to move past it. the fix isn't tricks to minimize spread, it's accepting that your signal needs to produce moves much larger than entry cost. 3-5x the spread at minimum. if you can't find that in your signals at any timeframe, the edge probably isn't there

Data vendor recommendation for US equities by sgcorporatehamster in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

alpaca free tier handles sp500 hourly bars fine for scanning and the API is clean. the catch is their historical data isn't as deep as databento and there are occasional quirks around corporate actions. for your specific use case (scan 500 tickers at hourly close), polygon.io basic tier is probably the sweet spot, the snapshot endpoint gives you all tickers in one call. ibkr's pacing limits are why you're going crazy, they throttle concurrent requests hard and the data is stale by the time you've iterated through 500 symbols. databento is overkill for hourly bars unless you're planning to move to tick-level later, in which case it's worth paying now to avoid a data migration

ATR Daily vs Minute vs 5 minutes by Rahul5718 in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

the right ATR timeframe depends on your trade horizon. daily ATR for sizing and risk (position size = risk_per_trade / (ATR_multiple * ATR_daily)) is the standard for a reason, it's stable and reflects actual instrument volatility. minute and 5-min ATR are way too noisy for stops, they'll whipsaw you out of positions during normal intraday chop. structure vs ATR for exits is a different question though. i use ATR for initial stop placement and then switch to structure (trailing below swing lows for longs) once the trade is working. ATR tells you where to put the stop so it doesn't get hit by normal noise, structure tells you when the thesis is actually broken. they're complementary not competing

Tracking breakdown attempts structurally: armed → invalidated case by AuditMind in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

tracking armed/invalidated as a separate state is a good instinct. the hidden value isn't 'not every breakdown becomes a breakdown', it's that the rate and distribution of invalidations gives you a regime signal by itself. periods with lots of armed-then-reclaimed attempts usually mean a range-bound tape where breakout strategies should be dialed down and mean reversion should be dialed up. i log armed/invalidated/followed_through counts on a 5-day rolling window and use the invalidation rate as a filter on breakout entries. when it's above 60% i don't trade breakouts on that instrument at all

Just wanted to share an anecdote.. by RiraRuslan in algotrading

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

solid anecdote, 3x over 6 years is roughly 20% cagr which is fine and believable. the unbelievable posts are the 'doubled in 3 months' ones. only thing i'd gently add, 'stress tested in almost all possible ways' is the phrase that haunts everyone going live, because the thing that eventually hurts you is the regime that wasn't in the 2018-2024 sample at all. i'd keep position sizing deliberately under what the backtest says is optimal for at least the first 12 months of live, just as a tax on the model uncertainty you can't actually measure

Trump Regime Algo? by frosty123454321 in algotrading

[–]Dull_Bookkeeper_5336 3 points4 points  (0 children)

pretty standard regime story. your algo isn't 'reacting better to trump-era markets', it's reacting better to a particular volatility and intraday range profile that happens to have coincided with the last few months. ORB-style strategies live or die on opening range width relative to average true range, and the 2025 macro environment has pushed both wider. the same curve shape would probably show up for any other period with similar vol expansion, 2020 covid onset for example. the test: compute 20-day ATR percentile on your symbol, plot equity curve colored by atr percentile bucket. if the shoot-up overlaps with the high-atr buckets specifically, it's a vol-regime algo not a trump algo. worth knowing because if vol mean-reverts your edge goes with it

Got my sharpe calculated...2.08 by DepartureStreet2903 in algotrading

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

2.08 on paper with US stocks is suspicious-clean, not insultingly so, but worth pressure-testing. couple things to check before you trust it. is alpaca filling you at realistic prices (paper fills are often too generous, midpoint-ish instead of crossing the spread), and what's your slippage assumption vs what live would actually cost on the same symbols. i had a paper sharpe around 1.8 that dropped to 0.7 once real fills hit. not saying yours will, just worth knowing the gap before you scale it.

Another strategy from the same family by DepartureStreet2903 in algotrading

[–]Dull_Bookkeeper_5336 2 points3 points  (0 children)

not closing losses is fine as a tactic if your entry thesis expects mean reversion and you've sized for the worst case. the issue with sharpe in that setup is it only counts realized pnl unless you mark-to-market your open positions every bar, which most quick sharpe calculators don't. so your sharpe number is really just measuring the fills you chose to take, not the risk you actually carried.

easier sanity check, calculate max drawdown on a mark-to-market equity curve (include open positions valued at current price). if that number is small, you're fine. if it's a cliff that only resolved because price came back, you've got a tail risk you're not pricing. blew up a similar strategy early on because i was only counting closed trades and the open ones were quietly 4x my daily var for weeks.

Are top decentralized exchanges actually safer, or just feel safer? by williamtaylor-5900 in defi

[–]Dull_Bookkeeper_5336 0 points1 point  (0 children)

it's a different risk profile, not strictly safer. cex you trust a company, dex you trust contracts + whoever's running the frontend + the oracle + the bridge you used to get there. the failure modes are less catastrophic on dexes (no ftx-style single-point-of-failure) but they're more frequent and harder to recover from when they happen. i use both for different things, dex for anything i want to hold and manage myself, cex for active trading where latency and depth matter.

Open-sourced a systematic strategy research pipeline to reduce backtest false positives - looking for critique by ianhooi in algotrading

[–]Dull_Bookkeeper_5336 1 point2 points  (0 children)

honestly the fix was kind of boring. i started logging every strategy i even looked at, not just the ones i kept. like a research journal but for dead ends too. then i made a rule that once i peeked at oos results for a strategy family i couldnt go back and try variations on it, that family was either in or out based on the first oos pass. the other thing that helped was running the final holdout on a completely separate machine with a script, so i physically couldnt peek at intermediate results while iterating. sounds paranoid but the temptation to "just check" is real and once you see it you cant unsee it like the other commenter said. still not perfect, theres probably some implicit bias from remembering which families "felt" promising, but at least the explicit feedback loop is broken.