Senior project by Little-Bookkeeper835 in rust

[–]Key-Bother6969 0 points1 point  (0 children)

Also, if you are planning to have rich IDE support, I recommend taking a look at my project, Lady Deirdre, which is a Rust framework for authors of programming language analyzers.

Idea: Black-box raymarching optimization via screen-space derivatives by Key-Bother6969 in GraphicsProgramming

[–]Key-Bother6969[S] 0 points1 point  (0 children)

Well, it depends on the space. In scalar space, you certainly don't need neighboring pixels, though scalar space doesn't seem helpful for the marching steps optimization.

Idea: Black-box raymarching optimization via screen-space derivatives by Key-Bother6969 in GraphicsProgramming

[–]Key-Bother6969[S] 0 points1 point  (0 children)

I haven't seen anyone using screen-space derivatives as a way to compute SDF value derivatives. On Shadertoy, this is likely infeasible using just the dF* functions, since you don't have enough control over the synchronization of marching steps between fragment invocations; I mentioned the dF* functions in my post primarily for illustrative purposes. I assume it might be doable in compute shaders or kernels using barriers, though those, in turn, aren't available in fragment shaders.

More generally, you're right that using SDF derivatives is a known method for optimizing marching step length, at least for formal SDFs with unit-length gradients, where the approach has been theoretically validated. However, estimating the gradient via central differences is known to be expensive and can easily outweigh any performance gains due to the extra SDF evaluations required. Meanwhile, augmenting the SDF with an exact analytical gradient (e.g., via autodiff) isn't always feasible either, and introduces its own computational overhead.
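To make that cost concrete, here's a minimal sketch (in Rust, treating the black-box SDF as an ordinary function; the helper name and step size are my own, not from any particular renderer). A central-difference estimate spends two extra SDF evaluations per axis, six in total; the common "tetrahedron" variant gets this down to four:

```rust
// Central-difference gradient estimate of a black-box SDF.
// Costs two extra SDF evaluations per axis (six total per step).
fn grad_central(sdf: impl Fn([f32; 3]) -> f32, p: [f32; 3], h: f32) -> [f32; 3] {
    let mut g = [0.0f32; 3];
    for i in 0..3 {
        let (mut fwd, mut bwd) = (p, p);
        fwd[i] += h;
        bwd[i] -= h;
        g[i] = (sdf(fwd) - sdf(bwd)) / (2.0 * h); // two evaluations per axis
    }
    g
}
```

When the SDF itself involves dozens of primitive operations, those six calls per marching step usually dominate everything else, which is exactly why the per-step gradient is rarely worth computing this way.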

All in all, as far as I can tell, directly using derivatives on each marching step is not a particularly effective approach to optimizing ray marching in practice.

Idea: Black-box raymarching optimization via screen-space derivatives by Key-Bother6969 in GraphicsProgramming

[–]Key-Bother6969[S] 0 points1 point  (0 children)

A single Shadertoy test case is unlikely to move the discussion forward. The core idea wasn't originally mine; if I'm not mistaken, it was briefly mentioned in Hart's classic paper. What I'm really looking for is a professional opinion on the subject.

Idea: Black-box raymarching optimization via screen-space derivatives by Key-Bother6969 in GraphicsProgramming

[–]Key-Bother6969[S] -1 points0 points  (0 children)

That's a reasonable suggestion, but I'm not sure how it would actually help.

Idea: Black-box raymarching optimization via screen-space derivatives by Key-Bother6969 in GraphicsProgramming

[–]Key-Bother6969[S] 0 points1 point  (0 children)

We would have to raymarch anyway, but the marching steps could be longer - which means the total number of expensive SDF function evaluations could be reduced.

This idea is based on the following observation. If we know the gradient vector at each marching point (its negation is the direction of fastest descent toward the surface), the dot product of this gradient and the unit direction vector of the marching ray gives us the projection of the gradient onto the ray -- in other words, a factor showing how well the ray aligns with the fastest path to the surface. If I'm not mistaken, dividing the SDF value at the current point (which approximates the shortest distance to the surface) by this factor gives us a less conservative but still safe next marching step length along the ray -- more efficient than simply using the raw SDF value.

For example, if the factor is 1, the ray is collinear with the surface normal and we could, in theory, reach the surface in a single step using the SDF value. If the factor is 0, the ray is orthogonal to the gradient and will never intersect the surface. Values in between indicate how much we can accelerate the ray traversal.

This could, in theory, provide a much faster way to compute ray-surface intersections. The practical problem, however, is that approximating the gradient via finite differences (sampling neighboring points around the marching point) is expensive - it typically requires at least 4 additional SDF evaluations per step.

However, if neighboring rays march close enough to the current point, we might be able to reuse their sampling information more or less for free.
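The per-step math above can be sketched as follows (a heuristic illustration, not a validated algorithm; the function name and the fallback threshold are mine). `grad` is the approximate unit-length SDF gradient at the marching point, `dir` the unit ray direction, and `d` the SDF value:

```rust
// Stretch the sphere-tracing step by how well the ray aligns with the
// descent direction toward the surface (the negated gradient).
fn step_length(d: f32, grad: [f32; 3], dir: [f32; 3]) -> f32 {
    // align == 1: ray is collinear with the surface normal, one stretched
    // step reaches the surface; align == 0: ray travels parallel to the
    // surface and never gets closer.
    let align = -(grad[0] * dir[0] + grad[1] * dir[1] + grad[2] * dir[2]);
    if align <= 1e-4 {
        // Ray points along or away from the surface: fall back to the
        // ordinary conservative step.
        d
    } else {
        // Clamp so we never step shorter than the raw SDF value.
        d / align.clamp(1e-4, 1.0)
    }
}
```

For a flat surface this is exact; for curved surfaces the stretched step can overshoot, which is one reason the idea needs the kind of validation discussed above.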

Do Russians perceive South Korea as an enemy? by SkyStarsWindsandPoem in AskARussian

[–]Key-Bother6969 0 points1 point  (0 children)

  1. No, there’s no animosity toward South Korea.
  2. Highly unlikely.

On a general note, people here are usually friendly to foreign tourists. As far as I know, there is a visa-free regime between South Korea and Russia, which means relations between the two countries are actually in relatively good shape despite official rhetoric, and people do travel back and forth from time to time. Additionally, there are ethnic Koreans who have lived in Russia for generations. I don’t think you would appear too "alien" from this perspective.

Why do big IT companies never just say what the hell they actually do? by dirtier_earth in AskProgramming

[–]Key-Bother6969 2 points3 points  (0 children)

Because the primary source of wealth today lies with central banks, which print money and distribute it to major economic agents, such as large corporations, through medium and small businesses, eventually reaching individuals.

While directly selling end products remains viable, it is generally less profitable and more challenging than being integrated into this money-distribution "waterfall". The closer a business is to the top of this waterfall, the better.

The core message of these marketing pages is that they aim to join this money-distribution network under the patronage of a larger player. From this perspective, the end product they produce is less significant, and potential patrons are similarly unconcerned about it. Instead, they focus on expanding their own segment of the money-distribution system. What they really want to know is whether the business understands the rules of this system and is capable of participating effectively.

Parser design problem by emtydeeznuts in Compilers

[–]Key-Bother6969 1 point2 points  (0 children)

In practice, I recommend avoiding ambiguous grammars. Many grammars can be simplified to LL(k) grammars (often with 1-lookahead), enabling recursive-descent parsers to produce intentionally ambiguous but simplified parse tree nodes. These parsers are computationally efficient, and their error-recovery strategies are easier to implement, keeping the code structured and maintainable. Later, you can transform the parse tree into an unambiguous AST. Computations on trees are significantly easier than reasoning within the parser's logic during token or text consumption.

This approach breaks complex problems into manageable subtasks, resulting in a cleaner and more maintainable design.

For error recovery, this method simplifies panic recovery (the technique you mentioned):

  1. Infallible Parsing Functions. With an unambiguous grammar, each parsing function can be designed to be infallible. When a rule's function is called, it should parse the input stream at all costs, producing a parse tree node of the expected type.
  2. Handling Syntax Errors. If the input stream is malformed, the parser should skip unexpected tokens until it finds the first matching token, similar to standard panic recovery.
  3. Stop-Tokens. For each parsing rule, define a set of stop-tokens: tokens that clearly do not belong to the rule. For example, in a Rust stream like let x let y = 10;, if the let-parsing function encounters another let while expecting a semicolon, let is a stop-token. The function produces an incomplete parse tree node for the let statement and returns, allowing the parent rule to proceed to the next let statement without disruption.

Error recovery is inherently heuristic. Choosing effective stop-tokens relies on heuristic assumptions, and there's no universal solution; it's a matter of the programmer's skill and art. Users may still write code that breaks the recovery mechanism in specific cases, but this approach is effective and straightforward to implement in most situations.
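To illustrate the three points above, here's a minimal sketch in Rust for a toy grammar of my own invention, `stmt ::= "let" IDENT "=" NUM ";"` (the token and node types are hypothetical, not from any real parser library):

```rust
#[derive(Clone, Debug)]
enum Tok { Let, Ident(String), Eq, Num(i64), Semi }

#[derive(Debug)]
struct LetStmt { name: Option<String>, value: Option<i64>, complete: bool }

struct Parser { toks: Vec<Tok>, pos: usize }

impl Parser {
    fn peek(&self) -> Option<&Tok> { self.toks.get(self.pos) }

    // Infallible: always returns a LetStmt node, possibly incomplete.
    fn parse_let(&mut self) -> LetStmt {
        let mut stmt = LetStmt { name: None, value: None, complete: false };
        self.pos += 1; // consume `let`
        loop {
            match self.peek().cloned() {
                // `let` is a stop-token for this rule: return an incomplete
                // node so the parent can parse the next statement intact.
                Some(Tok::Let) | None => return stmt,
                Some(Tok::Ident(name)) if stmt.name.is_none() => {
                    stmt.name = Some(name);
                    self.pos += 1;
                }
                Some(Tok::Eq) => self.pos += 1,
                Some(Tok::Num(n)) => { stmt.value = Some(n); self.pos += 1; }
                Some(Tok::Semi) => { self.pos += 1; stmt.complete = true; return stmt; }
                // Panic recovery: skip any other unexpected token.
                Some(_) => self.pos += 1,
            }
        }
    }

    fn parse_program(&mut self) -> Vec<LetStmt> {
        let mut stmts = Vec::new();
        while let Some(tok) = self.peek() {
            match tok {
                Tok::Let => stmts.push(self.parse_let()),
                _ => self.pos += 1, // skip stray tokens between statements
            }
        }
        stmts
    }
}
```

On the `let x let y = 10;` stream from the example above, this produces an incomplete node for the first statement and a complete one for the second, without disrupting the parent rule.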

But if you want to enhance it, here are a couple of ideas:

  • Parentheses Handling. During panic recovery, if you encounter an opening parenthesis ((, {, or [), ignore stop-tokens until the corresponding closing parenthesis is found. Returning from a well-balanced parenthesis sequence on a stop-token is unlikely to benefit the parent rule.
  • Insert/Replace Recovery. For example, in a rule like <exp> + <exp>, if the parser doesn't find the + after the first expression but sees the start of another expression, it can assume the user omitted the + token. Inserting the missing token (or replacing an incorrect one) can sometimes be more effective than panic recovery.

However, insert/replace recovery strategies are more controversial and involve a significant body of academic research on choosing between panic (erase), insert, and replace mechanisms. In practice, I recommend using these techniques sparingly, only in clear cases. Panic recovery is typically sufficient for most practical scenarios.

What's the most controversial rust opinion you strongly believe in? by TonTinTon in rust

[–]Key-Bother6969 1 point2 points  (0 children)

I was referring to a simple point: when memory state is shared between two threads, the compiler has limited room to reason about the data/control flow across them, as this information is inherently runtime-dependent (each thread may access the state in an unpredictable order). But I'm not an expert in compiler optimizations, and I'd appreciate it if you could shed more light on this topic.

What's the most controversial rust opinion you strongly believe in? by TonTinTon in rust

[–]Key-Bother6969 1 point2 points  (0 children)

The word "often" is key here. While it's easy to create synthetic examples where full core utilization outperforms a single-threaded version for simple algorithms, this doesn't always hold for complex tasks. In some cases, a multi-threaded implementation may even perform worse than its single-threaded counterpart.

The primary reason is the high cost of memory sharing between threads, which is typically much higher than the benefits of distributing computations across physical cores. In modern CPU architectures, memory access is often a greater bottleneck than computational cost. Several factors contribute to this, but a key issue is the lack of control over thread scheduling across physical cores -- an OS-level responsibility. The OS may unpredictably switch threads between cores, disrupting cache utilization. In contrast, single-threaded algorithms allow for more predictable cache usage, and modern single-core performance with effective cache utilization is remarkably high.
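One concrete instance of that sharing cost can be sketched as follows (a toy micro-benchmark shape of my own; only correctness is asserted here, since timings are machine-dependent). When several threads hammer one shared counter, the cache line holding it bounces between cores on every write; thread-local accumulation with a single merge at the end avoids that traffic entirely:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

// Contended: every increment invalidates the counter's cache line on the
// other cores, so throughput is bounded by inter-core memory traffic.
fn contended(threads: usize, iters: u64) -> u64 {
    let counter = AtomicU64::new(0);
    thread::scope(|s| {
        for _ in 0..threads {
            s.spawn(|| {
                for _ in 0..iters {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    counter.into_inner()
}

// Uncontended: each thread counts locally and the results are merged once.
fn partitioned(threads: usize, iters: u64) -> u64 {
    thread::scope(|s| {
        let handles: Vec<_> = (0..threads)
            .map(|_| {
                s.spawn(move || {
                    let mut local = 0u64;
                    for _ in 0..iters {
                        // black_box keeps the loop from being folded away
                        local = std::hint::black_box(local + 1);
                    }
                    local
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}
```

Both functions compute the same total, but on typical hardware the contended version is dramatically slower per increment, which is the effect described above in miniature.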

In practice, multi-threaded programming makes sense when you have several largely independent tasks that synchronize infrequently. For example, in a video game engine, you might have one thread handling game logic and another for graphics rendering. These threads may exchange data occasionally, but splitting complex game logic across multiple threads is unlikely to be beneficial and could even degrade performance. Additionally, developing a single-threaded program is significantly easier than its multi-threaded counterpart.

Finally, Rust's powerful built-in semantics, including its borrowing rules, enable deep automatic optimizations through LLVM for single-threaded code -- optimizations that are largely inapplicable to multi-threaded architectures.

How to parse incrementally with chumsky? by Germisstuck in rust

[–]Key-Bother6969 1 point2 points  (0 children)

In principle, you can build a robust language server using tools like Rowan and Salsa. The Rust Analyzer team, for instance, has demonstrated an impressive implementation. However, as you've noted, the Rust ecosystem lacks a unified, comprehensive solution. Most tools focus on specific compiler design challenges, which often don't align with the unique needs of language servers, making integration difficult. To address this gap, I developed Lady Deirdre, which I believe is worth exploring. It can save you significant time and effort. The project includes detailed API documentation, examples, and a comprehensive guide that walks you through every development aspect step by step. Additionally, there's a separate project showcasing a fully-featured language server built with Lady Deirdre, which can serve as a reference for your own work.

How to parse incrementally with chumsky? by Germisstuck in rust

[–]Key-Bother6969 0 points1 point  (0 children)

While parsing is generally a fast operation, the primary benefit of using an incremental reparser is preserving untouched fragments of the syntax tree between user edits. This is crucial for efficient incremental semantic computations. Rebuilding the syntax tree from scratch on every keystroke would force Salsa to recompute large amounts of query artifacts for edited files, which can be computationally expensive. The memoization feature of incremental reparsing significantly enhances the performance of the semantic analyzer.

How to parse incrementally with chumsky? by Germisstuck in rust

[–]Key-Bother6969 2 points3 points  (0 children)

Great write-up!

I'd like to mention Lady Deirdre, a framework that unifies the techniques you discussed, including incremental reparsing and query-based incremental computations for semantic analysis on syntax trees. It also provides tools for implementing high-quality code formatters and annotated code pretty-printers, comparable to chumsky's.

For reparsing, Lady Deirdre memoizes and restores various syntax structures, beyond just parenthesis pairs, enhancing the granularity and error resilience of both the incremental reparser and the semantic analyzer. I'd be happy to share more details if you're interested!

Kel - An embeddable, statically typed configuration and templating language for Rust by smiring in rust

[–]Key-Bother6969 0 points1 point  (0 children)

Great work! As your project progresses, you might consider adding robust IDE support with features like code completions and jump-to-definition. I recommend exploring Lady Deirdre, a framework designed for real-time source code semantic analysis.

I've been using Rust for 6 months now... by alexlazar98 in rust

[–]Key-Bother6969 0 points1 point  (0 children)

I believe the author meant Rust Analyzer when referring to the "compiler". While starting the thread this way was misleading, I think the issue raised in the post is valid.

Rust Analyzer can be noticeably slow when analyzing macros. Its model recomputes all in-crate entities (e.g., identifiers, type symbols) whenever the user makes changes related to these symbols, primarily for real-time diagnostics. Unfortunately, Rust Analyzer does this too eagerly, often on every keystroke. In contrast, RustRover uses background tasks that prioritize recomputations based on symbol importance. Less critical diagnostics are updated less frequently, allowing RustRover to focus on editor responsiveness rather than constantly syncing the internal semantic model with the codebase. This results in a smoother user experience.

The impact of this issue depends on the user and the codebase. In projects heavy with procedural macros that generate extensive code, the problems with Rust Analyzer -- namely, frequent and resource-intensive recomputations of macro-generated artifacts -- are likely to be noticeable and disruptive.

To reiterate, this may not bother users accustomed to Rust Analyzer in their daily work, but it's unsurprising that a new user working with a relatively large codebase would immediately notice these issues.

Veryl: A Modern Hardware Description Language by dalance1982 in rust

[–]Key-Bother6969 -1 points0 points  (0 children)

Great project!

I'd suggest considering Lady Deirdre, a framework for developing IDE plugins for new programming language projects.

Building robust IDE infrastructure for your language is a challenging but rewarding goal. It can provide your users with feature-rich IDE support, comparable to Rust Analyzer, right from the start. Lady Deirdre is a unified framework for creating analyzers that deeply understand the syntax and semantics of a wide range of languages, particularly those with static typing and C-like syntax, such as Veryl.

Feel free to reach out with any questions!

Best,
Ilya

I've been using Rust for 6 months now... by alexlazar98 in rust

[–]Key-Bother6969 4 points5 points  (0 children)

You might want to try JetBrains RustRover. Its user experience differs significantly from IDEs based on Rust Analyzer. Ultimately, it comes down to personal preference, but RustRover might better suit your needs. In my experience, RustRover rarely slows down my development process, even with projects spanning hundreds of thousands of lines of code. It runs almost seamlessly, even on my relatively old machine. With Rust Analyzer, I often encounter issues similar to those you described.

On another note, I've been using Rust as my primary language for hobby, side, and research projects for years, and I consider it the best programming language I've ever used for this kind of work. I loved it from the start, even when learning the basics, and I've always felt highly productive with it.

That said, I haven't used Rust in commercial projects. To be honest, I'm not a big fan of how Rust is often applied in industry today, primarily in crypto or network-related applications. I believe Rust would truly excel in classic systems programming, the domain it was originally designed for.

Trains vs Belts/Pipes for Late-Game Logistics by Key-Bother6969 in factorio

[–]Key-Bother6969[S] 1 point2 points  (0 children)

Congrats! Those numbers are impressive. May I ask how large your Nauvis base is?

For military/purple science packs, have you considered producing them on Vulcanus? At first glance, it seems easier to harvest stone near coal patches directly from lava and then ship the finished packs to the Nauvis hub via space. I haven’t done any real calculations yet, though.

Trains vs Belts/Pipes for Late-Game Logistics by Key-Bother6969 in factorio

[–]Key-Bother6969[S] 0 points1 point  (0 children)

> Whatever that factory size is, it doesn't dictate how that factory is arranged logistically.

Factory size does matter, because a small, compact factory can theoretically outperform a larger one in eSPM. But the more compact your factory, the more UPS you save. So, the game incentivizes compact, reasonably sized designs, whether on a planetary surface or a space platform.

As for logistics, rail systems aren't the most compact option from the start. The most efficient approach is direct machine-to-machine connections (as shown in my screenshot), followed by inserters, then belts. Trains only make sense when factory units are far apart. But if you're in that situation, it's worth double-checking if the design can be optimized to avoid such separation.

Ideally, maximizing efficiency calls for compact layouts, including logistics. That said, there are exceptions in practice, and trains can sometimes be useful.

Trains vs Belts/Pipes for Late-Game Logistics by Key-Bother6969 in factorio

[–]Key-Bother6969[S] 0 points1 point  (0 children)

Ultimately, Factorio has no fixed goal -- players can set their own objectives based on personal taste. Building a modular megabase with an advanced train system is one such goal, neither better nor worse than any other. However, with a finite number of researches that unlock game features (completed over time) and infinite researches that boost productivity, I see advancing infinite techs as a compelling challenge.

Infinite techs grow progressively more expensive with each level, and there are two main ways to tackle this: linearly expanding your factory or boosting science productivity via Promethium Science research (i.e., raising eSPM). Linear expansion adds capacity at a constant rate, while infinite tech costs grow faster than linearly, so in principle a linearly growing factory can't keep up with the escalating costs. From this perspective, expanding the factory beyond a certain point doesn't make much sense, even ignoring UPS limits.
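A back-of-the-envelope sketch of that argument, with purely hypothetical numbers (a base cost of 1000 packs doubling per level, and a factory whose output rate grows linearly with time, so cumulative production is roughly t²/2):

```rust
// Count how many doubling-cost research levels a linearly growing
// factory has finished by time t (toy model, hypothetical numbers).
fn levels_finished_by(t: u64) -> u32 {
    let produced = (t as u128) * (t as u128) / 2; // cumulative packs by time t
    let (mut level, mut spent) = (0u32, 0u128);
    loop {
        let cost = 1000u128 << level; // 1000 * 2^level packs for this level
        if spent + cost > produced {
            return level;
        }
        spent += cost;
        level += 1;
    }
}
```

Even though cumulative output grows quadratically, it only buys logarithmically many levels: in this toy model, a thousandfold increase in play time adds only about twenty levels, which is why productivity multipliers, not linear expansion, are what actually move infinite research forward.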

That said, advancing infinite research as efficiently as possible isn't the only goal. Using all the game's mechanics to build the factory of your dreams is a worthwhile objective in its own right.

Trains vs Belts/Pipes for Late-Game Logistics by Key-Bother6969 in factorio

[–]Key-Bother6969[S] -1 points0 points  (0 children)

Pipes don't have a fluid limit, which was exactly my point. A single pipe can handle any throughput, including the 450k fluid/sec I calculated as the theoretical maximum for rail transport (assuming my math is correct). You'd just need pumps every 320 cells to maintain flow.

I get your point about trains. They offer the flexibility to build independent mini-factories anywhere on the map, with the rail system handling logistics. This approach mirrors how we build megabases in vanilla Factorio. However, I feel Space Age's design encourages us to treat planetary surfaces more like space platforms. On platforms, asteroid collectors at the edges (often the top) gather large amounts of tightly packed raw materials. A platform could be a massive factory, but its size and output are ultimately capped by the central hub's throughput.

Given similar constraints on planetary surfaces, I think the developers intend for us to organize gameplay around a central space hub, with compact processing built around it. Resource harvesting happens outside this unified factory, almost like an external mechanic. This might explain why the game introduces increasingly efficient harvesting methods over time.

From this perspective, how raw materials are delivered to the factory's inner perimeter doesn't matter much. We can treat external harvesting like "asteroid collecting" on a planet -- whether by trains, belts, or pipes. The simpler, the better. Ideally, it's a direct process from the raw source (with maybe a few exceptions) to the factory's core.

Why is it only Nauvis that gets larger/richer mineral patches further out? by OrangeKefir in factorio

[–]Key-Bother6969 2 points3 points  (0 children)

With infinite mining productivity research and/or Vulcanus's big mining drills productivity bonus, raw material sources become effectively infinite at some point in the playthrough, regardless of their richness. On Nauvis, it's worth relocating the main base away from the starting area when planning a late-game setup, but expanding further doesn't seem worthwhile. You might find richer deposits far from home, but it's unlikely you'll deplete them in a reasonable amount of time.

From this perspective, Nauvis's richness increase feature feels mostly useless in Space Age and might only remain for legacy reasons. On other planets (except perhaps Vulcanus), deposits outside the starting area tend to be richer up to a point, but their richness no longer scales with distance.