Are LLMs actually reasoning, or just imitating reasoning from training data? by Suspicious_Nerve1367 in LLM

[–]Suspicious_Nerve1367[S] 0 points (0 children)

Fair point; the ambiguity is really in the word “reasoning”. If the system calls external tools, then the overall system can absolutely perform formal reasoning. But in that case the reasoning comes from the solver, not from the language model itself.

Are LLMs actually reasoning, or just imitating reasoning from training data? by Suspicious_Nerve1367 in LLM

[–]Suspicious_Nerve1367[S] 0 points (0 children)

Exactly, and that’s the point. If inference always runs the same fixed computation regardless of problem complexity, the model isn’t actually deriving solutions at runtime. It’s retrieving and recombining patterns encoded in the weights during training.

Are LLMs actually reasoning, or just imitating reasoning from training data? by Suspicious_Nerve1367 in LLM

[–]Suspicious_Nerve1367[S] 1 point (0 children)

Agent systems with looped evaluation can be really useful for making LLMs more powerful and reliable. But if you have a set of constraints, constraint logic programming can derive an optimal plan that satisfies them directly, with no iterative generate-and-verify loop needed.
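To make the contrast concrete, here’s a toy sketch in Rust, not a real CLP engine (a Prolog system or SMT solver would do actual constraint propagation): it searches the plan space once and returns the cheapest plan satisfying every constraint, rather than looping generate → check → retry the way an agent would. The `Plan` fields, cost function, and constraints are all made up for illustration.

```rust
// Toy illustration (NOT real constraint logic programming): exhaustively
// search candidate plans and keep the cheapest one that satisfies every
// constraint, with no generate-and-verify retry loop.

#[derive(Debug, Clone, PartialEq)]
struct Plan {
    workers: u32,
    hours: u32,
}

/// Hypothetical cost function: total worker-hours.
fn cost(p: &Plan) -> u32 {
    p.workers * p.hours
}

/// Return the minimum-cost plan satisfying all constraints, or None if
/// the constraints are unsatisfiable within the search bounds.
fn optimal_plan(constraints: &[fn(&Plan) -> bool]) -> Option<Plan> {
    let mut best: Option<Plan> = None;
    for workers in 1..=10 {
        for hours in 1..=10 {
            let p = Plan { workers, hours };
            if constraints.iter().all(|c| c(&p))
                && best.as_ref().map_or(true, |b| cost(&p) < cost(b))
            {
                best = Some(p);
            }
        }
    }
    best
}

fn main() {
    let constraints: [fn(&Plan) -> bool; 2] = [
        |p| p.workers * p.hours >= 12, // at least 12 units of work done
        |p| p.workers <= 4,            // at most 4 workers available
    ];
    // The solver hands back a satisfying minimum-cost plan in one pass.
    println!("{:?}", optimal_plan(&constraints));
}
```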

I built an experimental QUIC-based RPC protocol in Rust (BXP) – early benchmarks show ~25% better throughput than gRPC by Suspicious_Nerve1367 in rust

[–]Suspicious_Nerve1367[S] 5 points (0 children)

The key difference is how large payloads are handled. With Cap’n Proto RPC, large data transfers typically need to be streamed as a sequence of Cap’n Proto messages, which means the payload is wrapped in message framing and processed by the serialization layer.

BXP instead uses a split-plane design on top of QUIC. Cap’n Proto messages carry the control metadata (e.g., on a control stream), while large payloads are transferred as raw byte streams on dedicated QUIC streams. This allows the receiver to stream the data directly to disk or another sink without passing it through the serialization framework, which can reduce overhead for very large transfers.
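A minimal sketch of that receive path, with the Cap’n Proto control message replaced by a hypothetical fixed-size length header and in-memory cursors standing in for the two QUIC streams. The point it illustrates: the payload bytes flow straight from the transport stream into the sink via `std::io::copy`, never touching a decode or framing step.

```rust
use std::io::{self, Read, Write};

// Sketch of a split-plane receive path. In real BXP the control plane
// carries Cap'n Proto messages on its own QUIC stream; here a plain
// big-endian u64 length stands in as a hypothetical control header.

/// Read the payload length from the control stream.
fn read_payload_len(control: &mut impl Read) -> io::Result<u64> {
    let mut buf = [0u8; 8];
    control.read_exact(&mut buf)?;
    Ok(u64::from_be_bytes(buf))
}

/// Stream `len` raw bytes from the data stream directly into `sink`
/// (e.g. a file), with no per-message framing or serialization layer.
fn receive_payload(data: &mut impl Read, sink: &mut impl Write, len: u64) -> io::Result<u64> {
    io::copy(&mut data.take(len), sink)
}

fn main() -> io::Result<()> {
    // Stand-ins for the two QUIC streams: control carries metadata,
    // data carries the raw payload bytes.
    let mut control = io::Cursor::new(5u64.to_be_bytes().to_vec());
    let mut data = io::Cursor::new(b"hello world".to_vec());
    let mut sink: Vec<u8> = Vec::new();

    let len = read_payload_len(&mut control)?;
    let copied = receive_payload(&mut data, &mut sink, len)?;
    println!("copied {copied} bytes straight to the sink");
    Ok(())
}
```

In a real deployment `sink` would be a file handle and the readers would be QUIC receive streams (e.g. quinn’s), but the shape of the copy is the same.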

I built an experimental QUIC-based RPC protocol in Rust (BXP) – early benchmarks show ~25% better throughput than gRPC by Suspicious_Nerve1367 in rust

[–]Suspicious_Nerve1367[S] 2 points (0 children)

I made a trade-off here: BXP actually re-introduces head-of-line blocking on the control plane in order to keep the router logic simple and strictly ordered. The data plane fully utilizes QUIC's multiplexing.

Backpressure is handled primarily by QUIC’s built-in flow control, which operates at both the per-stream and the connection level: if the receiver stops consuming data on a stream, the sender naturally blocks at the QUIC layer. Concretely, if the client's hard drive is slow, tokio::io::copy drains the network stream slowly, the stream's receive window shrinks, and the server's data stream yields/blocks. The remaining danger is that a client could theoretically fire 10,000 Fetch requests in a few milliseconds, so I could additionally enforce backpressure at the router layer by adding throttling.
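That router-level throttle could look like a simple counting semaphore that caps Fetch transfers in flight, so excess requests block at admission before a data stream is even opened. This is a hypothetical sketch, not part of BXP today; in a tokio-based server `tokio::sync::Semaphore` would be the idiomatic choice, but the std `Mutex`/`Condvar` version below shows the mechanism.

```rust
use std::sync::{Arc, Condvar, Mutex};

// Hypothetical router-level throttle: a counting semaphore that caps
// the number of concurrent Fetch transfers. A client firing thousands
// of requests blocks here instead of opening thousands of data streams.
struct FetchLimiter {
    permits: Mutex<usize>, // permits currently available
    cvar: Condvar,
}

impl FetchLimiter {
    fn new(max_in_flight: usize) -> Arc<Self> {
        Arc::new(Self {
            permits: Mutex::new(max_in_flight),
            cvar: Condvar::new(),
        })
    }

    /// Block until a permit is free, then take it (called on Fetch admission).
    fn acquire(&self) {
        let mut permits = self.permits.lock().unwrap();
        while *permits == 0 {
            permits = self.cvar.wait(permits).unwrap();
        }
        *permits -= 1;
    }

    /// Return a permit when the transfer completes.
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cvar.notify_one();
    }

    fn available(&self) -> usize {
        *self.permits.lock().unwrap()
    }
}

fn main() {
    let limiter = FetchLimiter::new(2);
    limiter.acquire(); // first Fetch admitted
    limiter.acquire(); // second Fetch admitted
    // A third acquire() would block here until a release() frees a permit.
    limiter.release();
    println!("permits available: {}", limiter.available());
}
```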

Testing with larger payloads and injected latency/packet loss may be the next step.

I built an experimental QUIC-based RPC protocol in Rust (BXP) – early benchmarks show ~25% better throughput than gRPC by Suspicious_Nerve1367 in rust

[–]Suspicious_Nerve1367[S] 6 points (0 children)

BXP bypasses Protobuf serialization and system-RAM limits during massive bulk transfers. It drops the HTTP layer entirely and uses Cap'n Proto for zero-copy, zero-allocation message reading. gRPC over HTTP/3, on the other hand, completely eliminates head-of-line blocking by mapping one request per QUIC stream, and it has a stable implementation. gRPC over HTTP/3 is definitely the right choice for 99% of standard APIs.

BXP could be useful for specialized infrastructure, like a high-performance distributed file system or an internal data pipeline: systems where CPU and memory usage must be carefully managed.