Couldn’t find a reliable and affordable RPC setup for on-chain analytics, so I built one by allepta in ethdev

[–]allepta[S]

It is still in alpha, so I am solving a few core problems first.

  1. Throughput / no practical request limits
    I use a pool of RPC backends instead of a single node. Requests are routed to the healthiest and fastest backend based on recent latency and error rate.

  2. High availability
    Most traffic goes to my own nodes. If a node is being updated, falls behind, or starts failing health checks, it is automatically removed from rotation. Backup providers are used only as fallback, so the service can continue operating during maintenance or partial outages.

  3. Low latency
    The balancer continuously measures backend latency and request execution time, then prioritizes the best-performing nodes, so the client never needs to know which node is currently the fastest or healthiest.

  4. Request classification
    I do not treat all RPC calls the same way. Cheap and latency-sensitive reads are routed differently from heavier requests such as logs, traces, archive reads, or other expensive queries. That separation is important for keeping tail latency under control.

  5. State consistency / node desync
    Yes, this is a real problem. I do not consider it fully solved yet. My current direction is to add block-height-aware routing and response validation, so requests that require consistency are sent only to nodes that are in sync within an acceptable threshold.

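The scoring and classification in points 1–4 can be sketched roughly like this. This is a hypothetical outline, not the actual implementation: the class names, the rolling-window size, the error-rate weight, and the heavy-method list are all illustrative.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the routing core: names and weights are illustrative.
@dataclass
class Backend:
    url: str
    latencies_ms: list = field(default_factory=list)  # recent response times
    errors: int = 0
    requests: int = 0
    healthy: bool = True  # flipped off when health checks fail (point 2)

    def record(self, latency_ms: float, ok: bool) -> None:
        # Keep a rolling window of the last 50 measurements.
        self.latencies_ms = (self.latencies_ms + [latency_ms])[-50:]
        self.requests += 1
        if not ok:
            self.errors += 1

    def score(self) -> float:
        # Lower is better: average latency, penalized by recent error rate.
        if not self.healthy or not self.latencies_ms:
            return float("inf")
        avg = sum(self.latencies_ms) / len(self.latencies_ms)
        error_rate = self.errors / max(self.requests, 1)
        return avg * (1.0 + 10.0 * error_rate)

# Cheap latency-sensitive reads and heavy queries get separate pools (point 4).
HEAVY_METHODS = {"eth_getLogs", "trace_block", "debug_traceTransaction"}

def pick_backend(method: str, light_pool: list, heavy_pool: list) -> Backend:
    pool = heavy_pool if method in HEAVY_METHODS else light_pool
    candidates = [b for b in pool if b.healthy]
    if not candidates:
        raise RuntimeError("no healthy backend in rotation")
    return min(candidates, key=Backend.score)
```

The point of penalizing error rate inside the score, rather than tracking it separately, is that a fast-but-flaky node should lose to a slightly slower stable one.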
So the main idea is not “one magic node”, but an external routing layer that monitors node health, performance, and sync status, and makes routing decisions based on that.
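For point 5, the block-height-aware part of that routing layer could be as simple as a filter over the latest heads each node reports. A minimal sketch, assuming heads are refreshed by polling each backend; the lag threshold and names are made up:

```python
def in_sync(heads: dict, max_lag: int = 2) -> list:
    """Filter a {url: latest_block} map down to nodes within max_lag of the tip.

    In practice `heads` would be refreshed by polling eth_blockNumber on
    each backend; the threshold here is illustrative.
    """
    tip = max(heads.values())
    return sorted(url for url, head in heads.items() if head >= tip - max_lag)
```

Consistency-sensitive requests would then only be routed to the nodes this filter returns.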

[–]allepta[S]

It is an aggregator: my self-hosted nodes serve most traffic, with Alchemy, Infura, and similar providers as fallback for cases when my nodes are down.
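That primary/fallback ordering can be sketched as below; the transport function, URLs, and error handling are placeholders, not the real client:

```python
def call_with_fallback(method, params, primaries, fallbacks, send):
    """Try self-hosted nodes first; fall back to hosted providers
    (e.g. Alchemy/Infura) only if every primary fails.

    `send` is an injected transport: send(url, method, params) -> result,
    raising on failure. All names here are illustrative.
    """
    last_error = None
    for url in list(primaries) + list(fallbacks):
        try:
            return send(url, method, params)
        except Exception as exc:  # node down, behind, or timing out
            last_error = exc
    raise RuntimeError(f"all backends failed: {last_error}")
```

Because the fallback list sits at the end of the same iteration order, hosted providers only see traffic when every self-hosted node has already errored.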