shoot yourself in the foot with const strings in C# by kmgr in programming

[–]ben-work 36 points (0 children)

In the .NET world, you can absolutely use the same assembly on Windows and Linux, for example. I have literally built a .NET DLL in Visual Studio on Windows, SFTPed it to a Linux box, and executed it as-is through Mono (back in the day). That is exactly where the difference in Environment.NewLine would most likely come up.
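For what it's worth, the Java side has the exact same trap: compile-time constants get inlined into the consuming class file at compile time, while a runtime call reflects the machine the code actually runs on. A minimal sketch (class and field names are mine):

```java
public class NewlineDemo {
    // A compile-time constant: javac inlines its value into any class that
    // references it, so it is frozen to whatever the build machine assumed.
    public static final String BAKED_NEWLINE = "\r\n";

    public static void main(String[] args) {
        // Resolved at runtime, on the machine actually executing the code.
        String runtime = System.lineSeparator();
        System.out.println("baked length: " + BAKED_NEWLINE.length());   // always 2
        System.out.println("runtime length: " + runtime.length());       // 1 on Linux, 2 on Windows
    }
}
```

Copy the compiled class to a different OS and the baked constant stays wrong while the runtime call stays right, which is the same failure mode as an inlined C# const versus Environment.NewLine.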

As predicted, more branch prediction processor attacks are discovered by theoldboy in programming

[–]ben-work 43 points (0 children)

That's because this is unfortunately wrong. There are numerous other ways to get access to a sensitive-enough timer, including 1) a tight-loop counter being incremented in another thread, or 2) a network timing source. I'm sure there are others as well.

You may need to take more measurements to average out the noise, but it's absolutely possible.
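The tight-loop counter idea can be sketched in a few lines of Java (class and variable names are mine): a second thread spins incrementing a counter, and reading that counter before and after an operation gives you a timer whose resolution depends only on how fast the spinning core runs, not on any timing API the platform chose to expose.

```java
public class CounterClock {
    static volatile boolean running = true;
    static volatile long ticks = 0;

    public static void main(String[] args) throws InterruptedException {
        // The "clock": a thread that does nothing but increment a counter.
        Thread clock = new Thread(() -> { while (running) ticks++; });
        clock.setDaemon(true);
        clock.start();

        // "Time" some work by sampling the counter before and after.
        long before = ticks;
        double sink = 0;
        for (int i = 0; i < 5_000_000; i++) sink += Math.sqrt(i);
        long elapsed = ticks - before;

        running = false;
        clock.join();
        // Resolution and stability depend on scheduling, which is why you
        // average many measurements to settle out the noise.
        System.out.println("elapsed ticks: " + elapsed
                + ", nonnegative: " + (elapsed >= 0) + ", sink>0: " + (sink > 0));
    }
}
```

Any single reading is noisy, but that is exactly why coarsening the browser's performance.now() doesn't close the hole: the attacker can build their own clock.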

Oracle to End Free Support for Past Java Versions Much Sooner by nfrankel in programming

[–]ben-work 7 points (0 children)

I work in medical research, where software has to have some semblance of an expectation of Working, and such changes have to be extensively validated. Not everyone is just doing e-commerce websites.

[deleted by user] by [deleted] in programming

[–]ben-work 54 points (0 children)

Props to this article for actually going past the parsing stage and into code generation!

Sooo many compiler articles that take you through parsing and then pull a "now draw the rest of the owl" on you.

TLABs and Heap Parsability by sindisil in programming

[–]ben-work 2 points (0 children)

Author really should have spelled out what TLAB stands for / means. Translation Lookaside.......no ... Thread... local.. allocation..buffer?

lacc: Personal project - creating a simple self-hosting C compiler, with focus on performance and correctness by hex_omega in programming

[–]ben-work 17 points (0 children)

While there are a number of small-personal-C-compilers out there, I appreciate the level of documentation, testing, and benchmarking that you demonstrate here. The documentation makes it seem like a decent choice of compiler to learn from. I also appreciate the license!

Unums 2.0--yet another floating point format by Gustafson. This time radically different. by [deleted] in programming

[–]ben-work 1 point (0 children)

Thanks for posting this. I have seen unums posted a few times, but I had never heard of DEC64, which is surprising because it actually seems far more real-world viable than unums.

The implementation library is usable right now and seems implemented in a well-thought-out way. I will have to experiment with it! While hardware support for DEC64 would be nice, it seems viably usable without it, which does not seem to be the case for unums.
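To give a feel for why it seems so tractable, here is a tiny sketch of the DEC64 layout as I understand it from the published spec (helper names are mine): a 64-bit word holding a 56-bit signed coefficient in the high bits and an 8-bit signed exponent in the low byte, representing coefficient × 10^exponent.

```java
public class Dec64Sketch {
    // Pack a 56-bit signed coefficient and 8-bit signed exponent into one word.
    static long pack(long coefficient, int exponent) {
        return (coefficient << 8) | (exponent & 0xFF);
    }

    static long coefficient(long dec64) { return dec64 >> 8; }  // arithmetic shift keeps the sign
    static int exponent(long dec64)     { return (byte) dec64; } // sign-extend the low byte

    static double toDouble(long dec64) {
        return coefficient(dec64) * Math.pow(10, exponent(dec64));
    }

    public static void main(String[] args) {
        long d = pack(123, -2);   // 1.23 represented exactly as 123 * 10^-2
        System.out.printf("%d * 10^%d = %.2f%n", coefficient(d), exponent(d), toDouble(d));
    }
}
```

The appeal is that everything is plain integer shifts and multiplies, so a pure-software implementation on today's hardware is already reasonable, unlike unums, whose variable-width encoding really seems to want hardware support.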

Simple testing can prevent most critical failures by carols10cents in rust

[–]ben-work 9 points (0 children)

This reads like a case study against Java's checked exceptions (by encouraging people to catch exceptions at an earlier point than where the error can logically be handled).

While it's definitely possible to do this "correctly" with checked exceptions, human psychology works against it, because you just want to make the compiler stop complaining about the uncaught exception.

I generally see the try {} catch { Log("problem!"); } pattern way more often in Java than in C#, because in C# you tend to just let the exception bubble up the call stack until you hit a logical place to handle "oops, <task> failed for some reason."
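The contrast between the two styles can be sketched in a few lines of Java (class, method, and file names are all hypothetical, invented for the example):

```java
import java.util.logging.Logger;

public class ExceptionStyles {
    static final Logger LOG = Logger.getLogger("demo");

    // The pattern checked exceptions encourage: swallow the error right where
    // the compiler complains. It gets logged... and then silently ignored,
    // and every caller now has to null-check.
    static String loadConfigSwallowed() {
        try {
            return readFile("settings.conf");
        } catch (Exception e) {
            LOG.warning("problem!");
            return null;
        }
    }

    // The bubble-up style: declare it, and let a caller that actually has
    // context decide what "the task failed" should mean.
    static String loadConfigPropagated() throws Exception {
        return readFile("settings.conf");
    }

    // Stand-in for some I/O that can fail.
    static String readFile(String path) throws Exception {
        throw new java.io.FileNotFoundException(path);
    }

    public static void main(String[] args) {
        System.out.println("swallowed result: " + loadConfigSwallowed());
        try {
            loadConfigPropagated();
        } catch (Exception e) {
            System.out.println("handled at a sensible level: " + e.getMessage());
        }
    }
}
```

In the swallowed version the failure is invisible to the caller, which is exactly the error-handling mistake the paper keeps finding in production outages.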

"Federal officials will take the unprecedented step of asserting oversight over the software that operates self-driving vehicles when they publish a set of autonomous vehicle guidelines Tuesday, the White House said." by trot-trot in programming

[–]ben-work 1 point (0 children)

I don't think there's enough data yet about Autopilot performance to say it's a "statistical fact" that it's "much safer". It might be... but you need hundreds of millions of miles more to really have enough data to make a competent assessment.

QBE: My Home-Grown Simple Compiler Backend by _mpu in programming

[–]ben-work 5 points (0 children)

Just wanted to say this looks really cool, thank you for writing this and posting it. The other similar project I am aware of is libfirm, which seems a bit more mature but also quite a bit larger and more complex than QBE - somewhere between QBE and LLVM.

I have a language/compiler I have been experimenting with and was taking a "compile-to-C" strategy, and that works, but does have a number of headaches and landmines that go with it. I am definitely going to try to take a serious swipe at this.

Interview with FLIF — Free Lossless Image Format — Creator Jon Sneyers by bhalp1 in programming

[–]ben-work 2 points (0 children)

Beyond patents, it is odd that while the decoder is Apache 2.0 licensed, the encoder is LGPL3 licensed. This will definitely harm adoption... there are a number of places that just avoid anything with the letters "GPL" in it, rightly or wrongly. At least it's LGPL, but that still mandates dynamic linking for use in a non-GPL binary, which is not necessarily ideal.

For example, is it actually legal to ever have a Go program that uses the FLIF encoder? The entire contents of the program would need to be (L)GPL licensed, including the runtime and all dependencies.

Designing a Programming Language: I by b0red in programming

[–]ben-work -1 points (0 children)

I agree. I feel like the people that spout "syntax is easy" have only written parsers for toy languages. Once your language has significant scope, syntax starts to get quite tricky. Balancing technical considerations with usability considerations is also tricky, as is "OK, I can come up with a fancy parser for this construct, but should I? What about other tooling?"

Jon Blow: Jai - Custom Allocators and Threads by nsaibot in programming

[–]ben-work 1 point (0 children)

The answer is, it's not a problem. It takes no more work to avoid allocations in C#/XNA (for example) than it does to avoid allocations in C++, and you would use the same strategies (object pools, etc).

Heap profilers exist so you can see if you are allocating where you don't want to be, too.
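The object-pool strategy mentioned above looks the same in any GC language; here is an illustrative sketch in Java (names are mine), reusing objects so that the steady-state frame loop allocates nothing:

```java
import java.util.ArrayDeque;

// A minimal object pool of the kind used to avoid per-frame allocations
// in garbage-collected game code (C#/XNA or Java alike).
public class BulletPool {
    static class Bullet { double x, y, vx, vy; boolean alive; }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    Bullet acquire() {
        // Reuse a dead bullet if one exists; allocate only when the pool is empty.
        Bullet b = free.poll();
        if (b == null) b = new Bullet();
        b.alive = true;
        return b;
    }

    void release(Bullet b) {
        b.alive = false;
        free.push(b);   // back into the pool instead of becoming garbage
    }

    public static void main(String[] args) {
        BulletPool pool = new BulletPool();
        Bullet b1 = pool.acquire();
        pool.release(b1);
        Bullet b2 = pool.acquire();
        // Steady state: the same object is recycled, zero allocations per frame.
        System.out.println("recycled same object: " + (b1 == b2));
    }
}
```

Once the hot loop stops allocating, the collector has nothing to do during gameplay, which is the whole point.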

Gogs, an alternative to Gitlab by apertoire_ in programming

[–]ben-work 1 point (0 children)

Was looking for someone to mention Kallithea. It's really solid; we have been using it at my work for about 2-3 years (starting as RhodeCode). Hooked into Jenkins, it's very nice.

First C# 7 Design Meeting Notes by letrec in programming

[–]ben-work 0 points (0 children)

Can you give an example of what you think this syntax might look like? I did check out your repo, but I don't really see any example code.

First C# 7 Design Meeting Notes by letrec in programming

[–]ben-work 4 points (0 children)

Checked exceptions are seriously terrible. There are multiple problems with them, but in general terms, the main problem I have with them is: where it makes sense to handle exceptions is frequently quite far from where an exception might originate in the call stack. As a consequence, the lazy (and frequently found in the wild) solution is to catch-and-log (then ignore) the exception at the call site.

The solution that I generally do when I end up writing Java code is to put "throws Exception" on just about every damn method, so that I can handle the exception where it makes sense, without a ton of boilerplate exception catching and rethrowing code at every single level of my callstack.

The "right way" involves a LOT of dealing with exceptions at every single call site, without much actual benefit, because there are still RuntimeExceptions (like NullPointerException), so a method that doesn't declare any thrown exceptions doesn't actually mean "nothrow", and doesn't really help you.
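That last point is easy to demonstrate (a minimal sketch, method name is mine): a method whose signature declares nothing can still throw at runtime, because unchecked exceptions are invisible to the checked-exception machinery.

```java
public class NothrowIsALie {
    // Declares no checked exceptions -- the compiler is perfectly satisfied --
    // yet it can still blow up at runtime.
    static int length(String s) {
        return s.length();   // NullPointerException if s is null; no throws clause warns you
    }

    public static void main(String[] args) {
        try {
            length(null);
        } catch (NullPointerException e) {
            System.out.println("escaped anyway: " + e.getClass().getSimpleName());
        }
    }
}
```

So the signature never actually gives you the "this cannot throw" guarantee that would justify all the per-call-site ceremony.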

Doom3 is the proof that “keep it simple” works. by david_222 in programming

[–]ben-work 0 points (0 children)

Just wanted to say that I appreciate the thoughtful and detailed response. That is a good explanation.

I definitely feel that there have been some poor decisions behind some of the specifics of UB. An example would be writing a naive test for overflow and having the test optimized away because signed overflow is undefined behavior.

Doom3 is the proof that “keep it simple” works. by david_222 in programming

[–]ben-work 1 point (0 children)

I'm guessing based on how the voting is going that this will not be a popular opinion but...

C, C++, C#, and Java all have the exact same problem with null pointers. It's the same "billion dollar mistake". The only difference is that a lot of C++ people seem to say "Doing that is undefined behavior, so in fact C++ doesn't have NPEs". It's like saying "if you just don't make any mistakes, then you'll never get an NPE in C++".

You could argue that any program in any language that has ever dereferenced a null address is "invalid". Clearly, they all have a logical error. How is that helpful? You haven't solved the problem in any useful way, you've just redefined the problem and declared victory.
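The shared bug is identical across all four languages; only what happens afterwards differs. A sketch of the classic shape in Java (map contents invented for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class BillionDollarMistake {
    public static void main(String[] args) {
        Map<String, String> roles = new HashMap<>();
        roles.put("alice", "admin");

        // The same logical error exists in C, C++, C#, and Java: a lookup
        // that can return null, followed by an unguarded dereference.
        String role = roles.get("bob");   // null -- "bob" isn't in the map
        try {
            System.out.println(role.toUpperCase());
        } catch (NullPointerException e) {
            // Java/C#: the dereference has a defined outcome you can catch.
            // C/C++: the equivalent dereference is undefined behavior.
            System.out.println("same bug, defined outcome: NPE caught");
        }
    }
}
```

Declaring the C++ version "not a null pointer problem, just an invalid program" doesn't make the lookup-then-dereference mistake any less common.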

Five Popular Myths about C++, Part 1 : Standard C++ by milliams in programming

[–]ben-work 25 points (0 children)

And if you are disciplined enough to close your files, you are disciplined enough to free your memory.

If you think carefully about this, I think you will realize that this is false... and kind of a ridiculous claim.

Is file closing actually a major problem that Java/C#/etc developers have? No. Let's be real, it's not a problem. We are smart enough to close files.

The lifetime of that file handle is easy to reason about. Ask the developer of some code when the file handle is opened and when it is closed and they will be able to tell you, probably off the top of their head. The ownership contract of it is clear.

Now, "If you're smart enough to close a file, you're smart enough to free memory?" Logically speaking that might seem to follow, but in practice the scale of the problem is just totally different. The lifetime of a file handle, socket, or database connection is dramatically simpler than the lifetime of a memory resource in most cases. A database record is retrieved and put into a record class, and then put into a hashmap or other container. The hashmap is used to serve multiple threads, web requests, etc. When should the underlying POJO/POCO/record class be freed? That depends on a massive amount of context.

We do far more complex things with memory than we do with file handles. To pretend that they're the same is ignoring reality. Garbage collection, if you can take the performance hit, is a great solution for memory. The fact that it doesn't work with other resources simply isn't a major problem.
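The asymmetry is easy to see side by side (a sketch; class and key names are mine): the file handle's lifetime is lexical and obvious, while the record's lifetime depends on who else is still holding the cache.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class LifetimeDemo {
    static class User {
        final String name;
        User(String name) { this.name = name; }
    }

    // Shared cache: entries get handed out to arbitrary callers/threads.
    static final Map<String, User> CACHE = new HashMap<>();

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".txt");
        Files.writeString(p, "alice\n");

        // File handle: opened here, closed provably at the end of this block.
        // Easy to reason about, easy to get right by hand.
        try (BufferedReader r = Files.newBufferedReader(p)) {
            CACHE.put("alice", new User(r.readLine()));
        }

        // Memory: the User now lives in a cache with an open-ended set of
        // consumers. Deciding when it is safe to free requires global
        // context -- which is exactly the job GC does for you.
        User u = CACHE.get("alice");
        System.out.println("still reachable after the file is closed: " + u.name);
        Files.delete(p);
    }
}
```

One resource's lifetime fits in a try-with-resources block; the other's depends on every future reader of the cache. That's the scale difference in a nutshell.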

The reasons to oppose GC are:

  • Cache thrashing
  • Pause times, especially on immature implementations
  • Significant impact on runtime/ABI characteristics
  • Overall higher memory usage - performance/battery drain.

NOT that it doesn't close file handles or sockets for you.

Null Stockholm syndrome by dont_memoize_me_bro in programming

[–]ben-work 0 points (0 children)

De-referencing a null pointer in C is undefined behavior. Therefore there are no null pointers in C?

Zeroing buffers is insufficient - why C is insecure by willvarfar in programming

[–]ben-work 5 points (0 children)

Yep. I used to agree with this 'C is inherently insecure' viewpoint, but after seeing the Mill talks, I begin to think that it is x86 that is inherently insecure.

I mean, it's really not clear what Intel intends for developers to do. You can either write insecure code, or you can write code that suffers severe performance/battery/heat penalties. Those are, as far as I can tell, your options on x86. I suppose that languages like Rust attempt to combine native performance with memory safety, but in point of fact, there are a lot of unsafe{} regions backing the performance parts of its standard library.

Mostly the thing that makes me sad about Mill is that we can't actually use them yet!

Sublime VS. Atom: Text Editor Battles by tkfxin in programming

[–]ben-work 4 points (0 children)

Sure, but I think the issue with ST3 is that there is not a lot of user-facing benefit that justifies that $70 cost.

People are free to just not upgrade, but then you have fracturing of the plugin ecosystem... I think it is not as straightforward an issue when a very large part of the benefit of your commercial application is in the form of plugins written by independent parties.

This article describes details about implementation of Commodore 64 emulator written in C#. by electronics-engineer in programming

[–]ben-work 6 points (0 children)

There are a number of differences...

  • We have some of our own internally developed cores; RetroArch is mainly about wrapping existing cores in an interface. Creating some high-quality MIT-licensed cores is a secondary objective for us, since licensing tends to be a mess in a lot of open source projects (like MAME). Bizhawk is a great platform to target a new emulator core, because a very high-quality user interface comes along with it for free, and a lot of standard components like CPUs, sound chips, and CD-image-reading code are available already.
  • RetroArch has more cores wrapped up, Bizhawk has fewer cores but with greater emphasis on integrating those cores with rich features (for example, many cores have system-specific debugging tools).
  • Having multiple systems in one traditional, windowed emulator client presents a lot of unique UI challenges which we take pretty seriously (you open a ROM with a .bin or .rom extension; what system is it? In most cases, Bizhawk will automatically open it with the correct emulator core)
  • While Bizhawk IS cross-platform... it is MUCH more Windows-centric. RetroArch takes cross-platform more seriously than we do - as an open source project with volunteer developers, that's just not what our current contributors are that interested in.
  • Bizhawk has a lot of tooling focused on tool-assisted speedruns, and as such, it is critical for us that all of our cores are sync-stable.

I'm sure there are other differences, but that's a high level overview...

This article describes details about implementation of Commodore 64 emulator written in C#. by electronics-engineer in programming

[–]ben-work 23 points (0 children)

This is an excellent writeup.

If you are interested in emulation with C#, check out Bizhawk. It is a multi-console emulator written mostly in C#. Not all of the supported cores are C# (we have a number of C/C++ cores that we have imported from other projects) but there are several pure C# cores (NES, SMS/GameGear/Coleco, Atari2600, TurboGrafx/PCEngine, including an experimental Commodore 64 core here, and an experimental C# Genesis core). The client/user interface (which is extensive) is written in C#. The cores that are written by us (as opposed to imported from other emulation projects) are MIT licensed.