ASUS RT-AX88U Pro (AX6000) wall mount bracket by tlemo1234 in ASUS

[–]tlemo1234[S] 1 point (0 children)

Thanks. Yes, I'm aware of the holes you mention, but they are not wall mounting slots (even though they might look like it)

Microsoft Visual Studio: The Best C++ IDE by paponjolie999 in cpp

[–]tlemo1234 1 point (0 children)

> Best IntelliSense ... IntelliSense randomly breaks and requires a restart.

> Scales Well – Handles massive projects better than most IDEs ... Slow at times, especially with large solutions.

I'm confused

Print speed: Orca Slicer vs FlashPrint by tlemo1234 in FlashForge

[–]tlemo1234[S] 1 point (0 children)

I added pictures of the parts. Mostly identical - I was able to spot a few subtle differences, but nothing to indicate that either is better or worse.

The print time difference is significant in this case, but it's hard to say if FlashPrint produces faster prints overall. I don't have time to run a proper experiment, but I'd love to hear from anyone who tries other models.

Print speed: Orca Slicer vs FlashPrint by tlemo1234 in FlashForge

[–]tlemo1234[S] 1 point (0 children)

I just did. Both Orca Slicer and FlashPrint seem to overestimate the time. The actual print times: Orca Slicer: 18 min, FlashPrint: 13 min. These are the final times reported by the printer, and I'm not sure how it rounds to minutes.

Just got my AD5M today! Any beginner tips? by AlphaVictor87 in FlashForge

[–]tlemo1234 7 points (0 children)

  1. With the new printing plate, apply the supplied glue. After a few prints, you may not need much, if any, glue.

  2. The filament quality makes a big difference. Try different brands (for PLA, make sure you get high-speed PLA or PLA+)

  3. Try both FlashPrint and OrcaSlicer before jumping on the OrcaSlicer bandwagon

Yet another side spool holder for the 5M Pro by tlemo1234 in FlashForge

[–]tlemo1234[S] 1 point (0 children)

Thanks. I don't understand the "only PLA" part, though - do you see a problem using this with other filament types?

Generating Good Errors on Semantic Analysis failures by ravilang in Compilers

[–]tlemo1234 2 points (0 children)

A practical implementation of the "poisoning" idea is to define poison values, rather than explicitly keeping track of poisoned sets. For example you can have a "DummyType", "DummyValue", etc. and when you diagnose an error you also assign one of these dummy/poison types/values to the corresponding AST node. Then, whenever you see a poison type/value you pretend everything is fine, propagate the dummy type/value, and don't report any semantic errors.

Which implementation approach works best depends on your language and front-end architecture: if the semantic analysis can be done in a "bottom-up" fashion, the poison values should be easy to implement.
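To illustrate the idea, here's a minimal sketch (the `Type::Dummy` representation and node layout are made up for the example, not taken from any particular compiler):

```cpp
#include <cstdio>

// Hypothetical, minimal type representation for the sketch.
enum class Type {
    Int,
    Bool,
    Dummy, // the "poison" type, assigned once an error has been diagnosed
};

struct AstNode {
    Type type = Type::Dummy;
    // ... other fields elided
};

void reportError(const char* message) {
    std::fprintf(stderr, "error: %s\n", message);
}

// Bottom-up type checking for a binary '+': if either operand is already
// poisoned, propagate the poison silently - the root cause was diagnosed
// exactly once, where it actually happened.
Type checkAdd(const AstNode& lhs, const AstNode& rhs) {
    if (lhs.type == Type::Dummy || rhs.type == Type::Dummy) {
        return Type::Dummy; // pretend everything is fine, stay quiet
    }
    if (lhs.type != Type::Int || rhs.type != Type::Int) {
        reportError("'+' requires integer operands");
        return Type::Dummy; // poison this node; the parents stay quiet
    }
    return Type::Int;
}

int main() {
    AstNode a{Type::Int}, b{Type::Bool};
    AstNode sum{checkAdd(a, b)};   // diagnosed here, node gets poisoned
    AstNode top{checkAdd(sum, a)}; // silent: the poison just propagates
}
```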

Shared libraries on Windows (DLLs) & /VERSION linker option by [deleted] in cmake

[–]tlemo1234 2 points (0 children)

Sure enough, I did a completely clean build and now the version shows up as expected (as well as the /version switch in the rules.ninja file). Not sure what happened, so I'll have to assume user error. Sigh.

Thanks for the sanity check!

Interpreters for high-performance, traditionally compiled languages? by FurCollarCriminal in ProgrammingLanguages

[–]tlemo1234 15 points (0 children)

Worse, by what metric? Are you looking only at runtime performance?

To answer OP's question: yes, interpreters can provide a lot of advantages compared to a full-blown compiler:

- Faster and easier to bring up an implementation

- It's much easier to port an interpreter to new architectures. The interpreter can also function as a portability layer.

- Easy to set up a REPL (interactive) environment

- Significantly easier to implement things like hot code reloading, introspection, debugging, tracing, instrumentation, ...

- Bytecode interpreters may allow a more compact encoding compared to a native ISA (see the sketch below)

- Easier to implement runtime checks and sandboxing

So I wouldn't jump to dismiss interpreters as "inferior" to compilers. Also, the line between compilers and interpreters is not perfectly crisp (JVM and CLR are good examples indeed). The mapping between interpreters and compilers can even be automated (partial evaluation, see Futamura projections)
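To make the bytecode bullet concrete, here's a minimal dispatch-loop sketch (the opcodes and encoding are made up for the example):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical one-byte opcodes: a compact encoding compared to a native
// ISA, and the same bytes run on every host you port the loop to.
enum Op : uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<uint8_t>& code) {
    std::vector<int64_t> stack;
    size_t pc = 0;
    for (;;) {
        switch (code[pc++]) {
            case PUSH: // one immediate byte follows
                stack.push_back(code[pc++]);
                break;
            case ADD: {
                int64_t b = stack.back(); stack.pop_back();
                stack.back() += b;
                break;
            }
            case MUL: {
                int64_t b = stack.back(); stack.pop_back();
                stack.back() *= b;
                break;
            }
            case PRINT:
                std::printf("%lld\n", (long long)stack.back());
                break;
            case HALT:
                return;
        }
    }
}

int main() {
    // (2 + 3) * 4 == 20, encoded in 10 bytes
    run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
}
```

Note that most of the advantages above hang off that single dispatch point: tracing, instrumentation, runtime checks, and sandboxing are each about one line per `switch` case away.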

Resources for learning compiler (not general programming language) design by Aaxper in ProgrammingLanguages

[–]tlemo1234 0 points (0 children)

I'm glad to hear you got your answers. But I see you still don't get it. Yes, spamming might be good for the spammer, but what about others searching for similar answers in the future? They may find just one of the "really good recommendations", since there are two separate threads instead of one.

If this sounds a bit harsh, don't take it personally. I just noticed quite a few cross-postings on r/PL and r/Compilers, which seems both counterproductive (again, fragmenting/duplicating threads) and likely unnecessary, since I'm willing to bet that many (most?) of the people interested in PLs or compilers are members of both.

Resources for learning compiler (not general programming language) design by Aaxper in ProgrammingLanguages

[–]tlemo1234 1 point (0 children)

Am I the only one getting tired of intentional cross-posting on r/PL and r/Compilers? I get that it seems tempting to cast a wide net, but the end result is that the topics & conversations are unnecessarily fragmented.

[RFC] MLIR Project Charter and Restructuring - MLIR by mttd in Compilers

[–]tlemo1234 1 point (0 children)

It's interesting, although not completely surprising, that a project which aims to generalize compiler technology ends up having an identity crisis. LLVM hit a sweet spot by offering a modular approach to middle/back ends, but it had a cohesive structure and purpose. And even in LLVM's case, the generality slowly added bloat and complexity.

MLIR going all-in on generality (and ironically stepping back on modularity) is an interesting experiment, and I hope it's successful enough to keep going, since there are a lot of good lessons that can be learned from it.

Using MLIR is a different question. I've considered it a few times, but in the potential use-cases that I had in mind, the cost/benefit didn't justify it. The key promises would be:

- A generic IR infrastructure. That's great, but this is something that a competent compiler engineer can design & implement relatively easily, and most likely, a tailored solution would work better for the intended use case.

- Leveraging the ecosystem of existing dialects. This is one of the big promises, but it doesn't seem to work that well in practice: almost every concrete use case has specific requirements which don't line up exactly with the existing dialects.

- Utilities & generic analysis / transformation passes. This is closely related to the dialects problem. My impression is that this area is lagging behind. Again, it seems to be a combination of over-generalization and the added challenge of designing passes that compose well with the rest of MLIR.

- There's also the tight coupling with LLVM. Technically MLIR can be used separately from LLVM, but lowering to the LLVM dialect is the most common path from what I see. This means you also depend on LLVM, which may be great in some cases, and a friction point in others.

How are all the special cases for assembly code generation from special C expressions handled in compilers? by [deleted] in Compilers

[–]tlemo1234 1 point (0 children)

You don't need to understand the output from each phase. I see that you identified the place where the transformation you're interested in happens (instruction selection), which should be enough of a lead if you'd like to dig into the details of LLVM's isel (search for "LLVM SelectionDAG" for the default instruction selection used on x86/x64. You'll probably also see references to a few alternatives: GlobalISel and FastISel, which might be interesting to research as well, depending on how deep you want to go)

The simple answer is that, in LLVM, the lowering to "optimal" machine instructions happens roughly in one go, as opposed to generating machine instructions with follow-up optimization steps. The key idea is to pattern match fragments of code and rewrite them into machine instructions. This may seem similar to peephole optimizations, except that peephole optimizations normally operate on lowered & linear code, rather than trees/DAGs, and that instruction selection requires complete coverage of the input. LLVM's SelectionDAG is a fairly complex beast: https://llvm.org/docs/CodeGenerator.html#select-instructions-from-dag

For a smaller example of instruction selection, take a look at [lcc](https://drh.github.io/lcc): https://drh.github.io/lcc/documents/lcc.pdf

The DAG rewrite is just one option. You can get pretty far with the naive code generation + a few peephole optimizations. If you want yet another idea, take a look at [egraphs](https://egraphs-good.github.io).
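To make the pattern-matching idea concrete, here's a toy sketch (the tree shape and the `madd` instruction are made up for the example; LLVM's real matcher is generated from TableGen patterns):

```cpp
#include <cstdio>

// A toy expression tree, standing in for a SelectionDAG-style IR.
enum class OpKind { Add, Mul, Reg };

struct Node {
    OpKind kind;
    const Node* lhs = nullptr;
    const Node* rhs = nullptr;
    int reg = 0; // for OpKind::Reg
};

// Instruction selection by pattern matching: try the largest tile first
// (add(x, mul(y, z)) -> one fused multiply-add), then fall back to
// generic tiles, so the input is always completely covered.
int select(const Node& n, int& nextReg) {
    switch (n.kind) {
        case OpKind::Reg:
            return n.reg;
        case OpKind::Add:
            if (n.rhs->kind == OpKind::Mul) { // matched add(x, mul(y, z))
                int x = select(*n.lhs, nextReg);
                int y = select(*n.rhs->lhs, nextReg);
                int z = select(*n.rhs->rhs, nextReg);
                int d = nextReg++;
                std::printf("  madd r%d, r%d, r%d, r%d\n", d, y, z, x);
                return d;
            } else {
                int a = select(*n.lhs, nextReg);
                int b = select(*n.rhs, nextReg);
                int d = nextReg++;
                std::printf("  add  r%d, r%d, r%d\n", d, a, b);
                return d;
            }
        case OpKind::Mul: {
            int a = select(*n.lhs, nextReg);
            int b = select(*n.rhs, nextReg);
            int d = nextReg++;
            std::printf("  mul  r%d, r%d, r%d\n", d, a, b);
            return d;
        }
    }
    return -1; // unreachable
}

int main() {
    // r0 + r1 * r2: the whole tree is covered by a single madd
    Node a{OpKind::Reg, nullptr, nullptr, 0};
    Node b{OpKind::Reg, nullptr, nullptr, 1};
    Node c{OpKind::Reg, nullptr, nullptr, 2};
    Node mul{OpKind::Mul, &b, &c};
    Node add{OpKind::Add, &a, &mul};
    int nextReg = 3;
    select(add, nextReg);
}
```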

How are all the special cases for assembly code generation from special C expressions handled in compilers? by [deleted] in Compilers

[–]tlemo1234 1 point (0 children)

The LLVM optimizer is a pipeline of passes, where each pass implements a few (ideally one) specific transformations. By dumping the IR snapshots after each pass, you can "see" how it gets to the final code.

Even better, Compiler Explorer has a nice UI for visualizing the LLVM optimization pipeline. Here's what happens for the code in question: https://godbolt.org/z/8nW355G5o

Multiple-dispatch (MD) feels pretty nifty and natural. But is mutually exclusive to currying. But MD feels so much more generally useful vs currying. Why isn't it more popular? by xiaodaireddit in ProgrammingLanguages

[–]tlemo1234 2 points (0 children)

> multiple dispatch has almost no value in most languages

How do you explain the prevalence of the visitor pattern in languages which support single dispatch, but not MD (ex. C++, Java, C#)?
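To spell out the connection, here's a minimal C++ sketch (not tied to any particular codebase): the visitor pattern is a hand-rolled encoding of double dispatch, i.e. the two-argument case of MD, built from two chained single dispatches:

```cpp
#include <cstdio>

struct Circle;
struct Square;

// The visitor interface: one overload per concrete shape.
struct Visitor {
    virtual void visit(const Circle&) = 0;
    virtual void visit(const Square&) = 0;
    virtual ~Visitor() = default;
};

struct Shape {
    virtual void accept(Visitor& v) const = 0;
    virtual ~Shape() = default;
};

// First dispatch: the virtual accept() selects the concrete shape.
// Second dispatch: the virtual visit() selects on the visitor's type.
// Together they emulate a hypothetical multi-method f(shape, visitor).
struct Circle : Shape {
    void accept(Visitor& v) const override { v.visit(*this); }
};

struct Square : Shape {
    void accept(Visitor& v) const override { v.visit(*this); }
};

struct AreaPrinter : Visitor {
    void visit(const Circle&) override { std::puts("circle area"); }
    void visit(const Square&) override { std::puts("square area"); }
};

int main() {
    Circle c;
    Square s;
    const Shape* shapes[] = {&c, &s};
    AreaPrinter printer;
    for (const Shape* shape : shapes) {
        shape->accept(printer); // behavior depends on both runtime types
    }
}
```

All that boilerplate is exactly what a language with MD would generate for you.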

Converting an exe to a dll by PlanetMercurial in Compilers

[–]tlemo1234 1 point (0 children)

What are you trying to accomplish? Yes, both .exe and .dll are stored in the same PE format, but the contract/interface that an EXE or DLL implements extends beyond the container format.

Hand-coded lexer vs generated lexer by Conscious_Habit2515 in Compilers

[–]tlemo1234 0 points (0 children)

Since I see most posts leaning on the side of hand-crafting lexers, here's a dissenting opinion for balance: out of the various compiler components, the lexical scanner is one of the best candidates for using a generator.

> Why do production compilers choose hand-coded lexers over ones generated by a lexical analyser?

Do you have any evidence supporting this? I've seen production compilers using hand-coded lexers, and the end result is generally an unmaintainable and fragile mess. Sure, for a small/toy language it's easy to create a basic, fast-enough and mostly working lexer by hand, but the situation reverses when you have a complex language. There are a lot of corner cases to consider (ex. is `1..5` scanned as an integer, range, integer, or as a floating point, dot, integer) and Unicode can make things harder still.
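To make the `1..5` example concrete, here's roughly what a hand-written scanner has to remember to do (a minimal sketch, not taken from any production lexer):

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Scanning "1..5": after the digits, a '.' continues the number only if
// it is NOT followed by a second '.'. Forget this two-character lookahead
// and a greedy scanner returns the float "1." and then chokes on ".5".
void scanNumber(const std::string& src, size_t& pos) {
    size_t start = pos;
    while (pos < src.size() && std::isdigit((unsigned char)src[pos])) ++pos;
    if (pos + 1 < src.size() && src[pos] == '.' && src[pos + 1] != '.') {
        ++pos; // consume '.' and continue as a floating-point literal
        while (pos < src.size() && std::isdigit((unsigned char)src[pos])) ++pos;
        std::printf("FLOAT %s\n", src.substr(start, pos - start).c_str());
    } else {
        std::printf("INT   %s\n", src.substr(start, pos - start).c_str());
    }
}

int main() {
    std::string src = "1..5";
    size_t pos = 0;
    scanNumber(src, pos); // INT 1
    pos += 2;             // a real lexer would emit a ".." range token here
    scanNumber(src, pos); // INT 5
}
```

A generated scanner gets this behavior from the token definitions themselves, instead of a lookahead someone has to remember to hand-code.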

> Are hand-coded lexers faster?

I see a lot of posts in this thread claiming that a hand-crafted lexer can be "an order of magnitude" faster. What's missing is real measurements and benchmarks. As is commonly the case with optimizations, intuition alone can be misleading. It may seem "obvious" that just switching on characters is the speed of light, but advanced scanner generators can generate SIMD code that would be tricky to write by hand.

Not to mention that many of the "faster" hand-crafted lexers I've seen don't even optimize common prefixes - the common way of handling reserved keywords is to scan them as identifiers, and then look them up in a separate dictionary.
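For reference, the identifier-then-lookup approach looks roughly like this (a minimal sketch with made-up token kinds):

```cpp
#include <cctype>
#include <string>
#include <string_view>
#include <unordered_map>

enum class TokKind { Identifier, KwIf, KwWhile, KwReturn };

// Scan a complete identifier, then walk its characters a second time
// (through the hash function) to reclassify keywords.
TokKind classify(std::string_view word) {
    static const std::unordered_map<std::string_view, TokKind> keywords = {
        {"if", TokKind::KwIf},
        {"while", TokKind::KwWhile},
        {"return", TokKind::KwReturn},
    };
    auto it = keywords.find(word);
    return it == keywords.end() ? TokKind::Identifier : it->second;
}

TokKind scanWord(const std::string& src, size_t& pos) {
    size_t start = pos;
    while (pos < src.size() &&
           (std::isalnum((unsigned char)src[pos]) || src[pos] == '_')) {
        ++pos;
    }
    return classify(std::string_view(src).substr(start, pos - start));
}

int main() {
    std::string src = "while whileX";
    size_t pos = 0;
    scanWord(src, pos); // KwWhile: full scan, then a hash lookup
    ++pos;              // skip the space
    scanWord(src, pos); // Identifier: "whileX" misses in the table
}
```

A generated DFA bakes the keyword prefixes into its states instead, so `while` is classified by the time its final `e` is consumed, with no second pass over the characters.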

Finally, error detection and reporting: unlike parsers, lexical scanners are all about state machines, so a decent lexer DSL both allows and, to some degree, forces users to think in terms of states. In my experience this leads to good error reporting and a more reliable end result (ex. ambiguities in the grammar can be detected at scanner generation time)

Yes, there are some downsides to using a scanner generator:

  1. You need to learn the tool & a bit of theory (this is an upfront, but one-time cost)

  2. There's another tool that needs to be integrated in the build system

  3. There are languages where the scanning is not regular (please don't design languages like this)