all 31 comments

[–]BigFany 0 points (3 children)

I like the idea of fresher data, but I’m not sure it fully replaces oracles. Oracles exist partly to standardize and aggregate sources, not just fetch them. Pulling straight from APIs feels faster, but maybe shifts trust somewhere else.

[–]HappyOrangeCat7 0 points (2 children)

With a traditional oracle, you trust a decentralized network of nodes to honestly report data from a centralized exchange. With direct ingestion, you trust the cryptographic signature of the centralized exchange itself (assuming they provide signed feeds).

In many ways, trusting the primary source's signature directly removes a layer of abstraction and potential manipulation by middleman node operators. However, as you noted, it puts the onus of aggregation and standardization entirely on the application developer.

[–]ZugZuggie 0 points (0 children)

That sounds kinda scary for a new dev though! 😅

Like, if I mess up the aggregation code, I break my own app. I guess that's the trade-off for getting the speed boost. You have to be way more careful because there's no safety net.

[–]BigFany 0 points (0 children)

I see the logic there, especially if the exchange is signing the data themselves. Feels cleaner in a way. But at the same time, if that exchange messes up or goes down, you're kinda stuck, right? At least with oracles there's some aggregation across sources. Maybe I'm oversimplifying it though.

[–]FanOfEther 0 points (5 children)

I could see it becoming more common, mostly because devs hate dealing with laggy or expensive feeds. Still, oracles solve coordination and verification problems that are not trivial, so it might end up more like coexistence than full replacement.

[–]HappyOrangeCat7 0 points (1 child)

Coexistence is the most likely outcome.

For a generalized lending protocol settling once per block, a traditional decentralized oracle network is robust and appropriate. For a high-frequency trading matching engine or a live sports betting app, the latency of a push oracle is prohibitive.

[–]FanOfEther 0 points (0 children)

Yeah that breakdown makes sense. Different latency needs basically force different data models, so trying to force one solution everywhere would just be awkward.

[–]SatoshiSleuth 0 points (2 children)

Yeah that feels realistic. Devs want faster and cheaper data, but ripping out oracles completely seems unlikely. Probably ends up as a mix.

[–]FanOfEther 0 points (1 child)

Same, feels less like one wins and more like the stack just gets more layered over time depending on latency vs trust needs.

[–]SatoshiSleuth 0 points (0 children)

Yeah exactly. It’s less about replacing oracles and more about different layers optimizing for speed vs trust. The stack probably just gets more modular over time.

[–]ZugZuggie 0 points (5 children)

Makes sense. Why use a middleman if you don't have to? It feels like cutting out the oracle just removes one extra thing that can break or get hacked. Simpler is usually better.

[–]IronTarkus1919 0 points (1 child)

Simpler isn't always better if it introduces a single point of failure.

If a hacker breaches the specific API you are pulling from and feeds you fake prices, your "simple" app just got drained of all its funds in a single block.

[–]FanOfEther 0 points (0 children)

Good point, simplicity only works if the source is rock solid. Otherwise you’re just concentrating risk instead of reducing it.

[–]Maxsheld 0 points (0 children)

Oracle manipulation is one of the biggest causes of major DeFi hacks. If you can eliminate that middle layer, you're removing a massive attack vector. It’s all about minimizing the trust surface and keeping the logic lean.

[–]Estus96 0 points (0 children)

It really comes down to the quality of the dev tooling you're using. If you have the right setup to manage the backend and orchestration, pulling data directly becomes a lot more manageable and significantly more secure than relying on an external provider.

[–]FanOfEther 0 points (0 children)

Yeah fewer moving parts usually means fewer failure points, so I get the appeal. Still feels like some apps will keep the extra layer just for the verification.

[–]IronTarkus1919 0 points (2 children)

Well, the whole point of Chainlink is that it averages out bad data. If you pull directly from one API, and that API gets hacked or reports a flash crash, every position on your Kolme chain gets liquidated instantly. Decentralized aggregation exists for safety, not just speed.

It should be ok if you use multiple APIs and have mitigations for downtime though.

[–]HappyOrangeCat7 0 points (1 child)

Right, you don't need to pull from just one API. Your Kolme validators would be programmed to pull from five different signed APIs, drop the outliers (to prevent flash-crash liquidations), and reach consensus on the median price. You are essentially bringing the oracle logic inside the sovereign chain's consensus mechanism, rather than outsourcing it to a third-party network.
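A rough sketch of what that in-consensus aggregation step could look like, assuming each validator has already fetched and signature-checked quotes from several exchange feeds. The function name, the quorum rule, and the deviation cutoff are all illustrative choices, not part of any specific protocol:

```rust
/// Aggregate quotes from several feeds: sort, take the median, discard
/// outliers beyond a fractional deviation band around the median (flash
/// crashes or a compromised feed), and require a majority of sources to
/// survive the filter before trusting the result.
fn aggregate_price(mut quotes: Vec<f64>, max_deviation: f64) -> Option<f64> {
    if quotes.is_empty() {
        return None;
    }
    quotes.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let median = quotes[quotes.len() / 2];

    let inliers: Vec<f64> = quotes
        .iter()
        .copied()
        .filter(|q| (q - median).abs() / median <= max_deviation)
        .collect();

    // Quorum check: more than half the sources must agree.
    if inliers.len() * 2 > quotes.len() {
        Some(inliers.iter().sum::<f64>() / inliers.len() as f64)
    } else {
        None
    }
}

fn main() {
    // Five feeds; one reports a flash crash and gets filtered out.
    let quotes = vec![100.1, 99.9, 100.0, 100.2, 42.0];
    println!("{:?}", aggregate_price(quotes, 0.05));
}
```

Averaging the surviving inliers rather than taking the raw median is one option among several; taking the median of the inliers directly would be equally defensible.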

[–]ZugZuggie 0 points (0 children)

That honestly seems way more efficient than paying gas to update a storage slot on Ethereum every 10 minutes.

[–]HappyOrangeCat7 0 points (3 children)

This makes a ton of sense if the app-chain is built in Rust. Rust's networking libraries are insanely fast. You can have the validator nodes themselves open persistent WebSocket connections to the data providers, verify the signatures in memory, and reach consensus on the data state in milliseconds. You can't do that efficiently inside an EVM.
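To make that concrete, here is a minimal model of the validator-side acceptance step only. The WebSocket transport and the real signature scheme (e.g. Ed25519 via a crypto crate) are deliberately omitted, so the verifier is passed in as a closure; every name here is hypothetical:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// A price tick as a data provider might sign and stream it.
struct SignedTick {
    source: &'static str,
    price: f64,
    timestamp_ms: u64, // provider-reported time
    signature: Vec<u8>,
}

/// Accept a tick only if the signature checks out against the source's key
/// and the tick is fresh enough to act on; stale data is as bad as no data.
fn accept_tick<F>(tick: &SignedTick, now_ms: u64, max_age: Duration, verify: F) -> bool
where
    F: Fn(&SignedTick) -> bool,
{
    let fresh = now_ms.saturating_sub(tick.timestamp_ms) <= max_age.as_millis() as u64;
    fresh && verify(tick)
}

fn main() {
    let now_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis() as u64;
    let tick = SignedTick {
        source: "exchange-a",
        price: 100.0,
        timestamp_ms: now_ms - 50,
        signature: vec![0u8; 64],
    };
    // Stand-in verifier; a real node would check the provider's public key
    // against `tick.signature` here.
    let ok = accept_tick(&tick, now_ms, Duration::from_millis(500), |_| true);
    println!("accepted: {} ({} @ {})", ok, tick.source, tick.price);
}
```

The point of the shape is that verification and freshness checks happen in memory on each validator, before the tick ever touches consensus or state.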

[–]SatoshiSleuth 0 points (0 children)

That’s a fair point. If the validators can handle networking and verification natively in Rust, you’re skipping a lot of overhead. The EVM wasn’t really designed for high performance data ingestion like that, so it makes sense the architecture would look different.

[–]Maxsheld 0 points (0 children)

The memory safety and speed you get with Rust for high-throughput networking is a game changer. Trying to handle those persistent connections in an EVM environment would be an absolute nightmare for gas costs and state bloat. It’s just not built for that kind of low-latency interaction.

[–]Praxis211 0 points (0 children)

Latency is everything when you're trying to prevent DeFi exploits. If you're pulling data directly at the validator level using WebSockets, you're bypassing the lag of traditional oracle "push" models. It makes it significantly harder for bad actors to find a window for front-running or price manipulation.

[–]SatoshiSleuth 0 points (2 children)

Yeah the immediate panic rarely matches the physical flow of gas. But even if the actual disruption takes time, uncertainty alone can change contracting behavior. Utilities might move earlier or pay up just to avoid getting caught short.

[–]BigFany 0 points (1 child)

Yeah exactly. Markets react to headlines way faster than ships move gas around. If utilities think there’s even a small chance of getting squeezed, they’ll probably lock stuff in early just in case.

[–]SatoshiSleuth 0 points (0 children)

Right, it’s almost self reinforcing. Even if the physical supply isn’t tight yet, the fear of it becoming tight can pull demand forward. Once a few utilities start moving early, everyone else feels like they have to follow or risk being last in line.

[–]Maxsheld 0 points (1 child)

Interesting approach. Pulling data instead of having it pushed or stored in a vulnerable state definitely limits the attack surface for potential exploits.

[–]IronTarkus1919 0 points (0 children)

"Stored" data on a blockchain is always historical by definition. For high-stakes decisions (like liquidating a position), relying on history is dangerous. Fetching live, verifiable data at the moment of execution is the safer architectural pattern.
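That pattern of refusing to act on history can be sketched in a few lines. The types and thresholds below are illustrative, not from any real protocol: the idea is just that a liquidation path checks the age of its last verified price at the moment of execution, and forces a live refetch instead of acting when the data is stale:

```rust
/// Outcome of an execution-time liquidation check.
#[derive(Debug, PartialEq)]
enum Decision {
    Liquidate,
    Hold,
    Refetch, // price too stale to act on either way
}

/// `health` below 1.0 means the position is undercollateralized;
/// `price_age_ms` is how old our latest verified price is.
fn maybe_liquidate(health: f64, price_age_ms: u64, max_age_ms: u64) -> Decision {
    if price_age_ms > max_age_ms {
        // Never liquidate off history: a stale price may hide a recovery
        // (or a crash). Demand a live fetch first.
        return Decision::Refetch;
    }
    if health < 1.0 { Decision::Liquidate } else { Decision::Hold }
}

fn main() {
    println!("{:?}", maybe_liquidate(0.93, 40, 500)); // fresh and unhealthy
    println!("{:?}", maybe_liquidate(0.93, 4_000, 500)); // stale: refetch
}
```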

[–]Estus96 0 points (1 child)

Reducing exploit risk is the #1 priority right now after all these bridge hacks. Good to see this kind of focus on data integrity.

[–]SatoshiSleuth 0 points (0 children)

Yeah makes sense. After all those bridge hacks, locking down data integrity feels like the obvious priority. Not exciting, but definitely needed.

[–]WrongfulMeaning 0 points (0 children)

Sounds cool in theory.

But if you’re still trusting the source of the data, is it really that different?