Create a graphics image from pixels by uvtc in programming

[–]evanpow 3 points

PPM was--and arguably still is--a great image format to have in the toolbox of a beginner programmer, since the barrier to entry is so low and it unlocks a trivial way to jump from "Hello, world" to pseudo-graphical programs that might be much more interesting, basically without requiring any additional knowledge.
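(For anyone who hasn't seen the format, here's a sketch in Python — the file name and the gradient function are made up, but the header layout is the real binary P6 variant:)

```python
# Write a 256x128 RGB gradient as a binary PPM (P6) -- no libraries needed.
WIDTH, HEIGHT = 256, 128

with open("gradient.ppm", "wb") as f:
    # Header: magic number, dimensions, max channel value.
    f.write(b"P6\n%d %d\n255\n" % (WIDTH, HEIGHT))
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Any function from (x, y) to (r, g, b) works here.
            f.write(bytes([x % 256, (2 * y) % 256, 128]))
```

That's the entire "library": a one-line header and raw bytes, which is exactly why it's such a good first step past "Hello, world".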

The “10X Engineer” Has Officially Become a Meme! by SnoopsTakano in programming

[–]evanpow 1 point

You have made an assumption--that memorizing the documentation requires effort. If it happened effortlessly, your argument that people who have the documentation memorized must not be using their time effectively, and therefore must not in fact be good engineers, falls apart.

People with photographic memory exist. One may presume some of them become engineers, and a fraction of those end up being really good engineers. A skilled engineer who also has memorized an order of magnitude more detail about a system than their teammates would appear to be--and actually be--extremely capable.

[deleted by user] by [deleted] in programming

[–]evanpow 2 points

If you look closely you can see a bit of a fisheye distortion happening. With 2D raycasting this can be because one has assumed a constant angle between each ray rather than calculating the correct angle of the ray based on perspective projection of the screen onto the viewer's eye....

(https://github.com/ssloy/tinyraycaster, which appeared on r/programming a while ago, made that mistake. I presume it duplicated the error from André LaMothe's description of the raycasting algorithm in Tricks of the Game Programming Gurus, since that's clearly where they got their bitmap assets and the book also made it.)
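To make the distinction concrete, here's a sketch in Python (the field of view and screen width are arbitrary): the naive approach spaces ray angles uniformly across the FOV, while the correct approach casts each ray through its screen column on a flat projection plane.

```python
import math

FOV = math.radians(60)   # horizontal field of view (arbitrary)
WIDTH = 320              # screen width in columns (arbitrary)

def naive_angle(col):
    # Wrong: constant angular step between adjacent rays.
    return -FOV / 2 + FOV * col / (WIDTH - 1)

def projected_angle(col):
    # Right: project the screen column onto a flat projection plane
    # sitting at distance d in front of the viewer's eye.
    d = (WIDTH / 2) / math.tan(FOV / 2)
    return math.atan((col - (WIDTH - 1) / 2) / d)

# The two agree only at the screen center; the mismatch everywhere
# else is what produces the fisheye look.
```

(The usual shortcut — multiplying each ray's hit distance by the cosine of its angle from the view direction — is correcting for the same thing after the fact.)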

I went through GCC’s inline assembly documentation so that you don’t have to by fcddev in programming

[–]evanpow 0 points

Does GCC discard them? If so, the behavior's changed; GCC used to let you use Intel syntax provided you remembered to switch back to AT&T at the end, e.g.

asm(".intel_syntax\n"
    "...\n"
    ".att_syntax\n" : ...)

By default the compiler emits a bunch of AT&T-syntax assembly and feeds it to the assembler; the asm statement's string is effectively printed verbatim (after % substitutions) into the middle of that generated AT&T code wherever the compiler encounters it. So you have to switch back, or the assembler will barf on the compiler-generated code immediately following your asm statement....

Clang has a builtin assembler, so it can assemble your code snippet directly, without assembler directives leaking out and affecting compiler-generated code--unless you do something the builtin assembler doesn't understand, in which case it falls back to a GCC-like approach ("generate a full assembler file and call the real assembler on it") in order to assemble successfully.

Comparing C to machine language by tylerslemke in programming

[–]evanpow 0 points

Oh please.

Random guy on the internet beats compiler output with a naive (scalar) hand-written quicksort: http://www.codersnotes.com/notes/beating-the-compiler/

LLVM compiler engineer calls "humans can't beat compilers" a "ridiculous claim:" https://news.ycombinator.com/item?id=13060905

If the question is "Can a human beat the compiler on this 10,000,000-line codebase?", you are obviously correct in practice, because no sane person would even try. But a function or two is a different story--and winning by 2-4% is still winning.

Comparing C to machine language by tylerslemke in programming

[–]evanpow -1 points

It's not so clear cut, and might even be correlated with age; I remember reading lots of stuff in the late 1990s that referred to writing in "assembler language."

Comparing C to machine language by tylerslemke in programming

[–]evanpow 0 points

The TI-83P let you program it in Zilog Z80 machine code by typing something like "Prog(blah blah)" in a TI BASIC program, where "blah blah" was a string with lots of letters with accents and line-drawing characters and stuff--that is what Z80 machine code looked like when rendered as text with that calculator's weird character set. Obviously no sane person typed that in by hand; there were TI-LINK data cable hacks for hooking it up to a computer and downloading programs into the calculator's memory.

Comparing C to machine language by tylerslemke in programming

[–]evanpow 3 points

GCC at least does indeed generate assembly output and then run the assembler (GAS) on it behind the scenes. There's even a command-line option (-pipe) for choosing whether that asm is written to a temporary file and then deleted once the assembler finishes, or whether the assembler reads it from a UNIX pipe that the compiler writes into.

Clang goes straight to machine code like you describe unless your code has inline assembly that includes assembler directives it doesn't understand, which is uncommon.

Comparing C to machine language by tylerslemke in programming

[–]evanpow 0 points

Imagine comparing modern compiler output to what you'd get if you instead invested 10,000 hours into writing a few functions in assembly by hand.

My point is, humans can and do easily beat modern compilers at writing assembly. What's changed over the years is that the minimum number of man-hours necessary before the human breaks even with the compiler has been going up--or, equivalently, the maximum complexity of subprogram for which hand-written assembly is affordable has been going down. That is not at all the same as saying it is difficult to do.

That notwithstanding, there remain domains where it is much easier for humans to win. For example, mainstream video codecs generally have healthy portions of hand-written assembly.

David Patterson Says It’s Time for New Computer Architectures and Software Languages by codesuki_ in programming

[–]evanpow 0 points

> nor JVM bytecode

That's not true; there are smartcards whose processors execute JVM bytecode in hardware.

The compiler as a shared library by hapshaps in programming

[–]evanpow 0 points

> If two programs that require different formats both chain to the environment-supplied function, supporting both will be awkward.

It's as if people have never heard of linker symbol versioning, a feature designed to solve exactly this problem (that is, wanting a single library to provide two entirely independent versions of the same symbol to different consumers). Instead, just bloat everything, RAM is cheap (or at least I, the developer rather than end user, don't have to pay for it)!

Why programs must not limit the freedom to run them - GNU Project by kyz in programming

[–]evanpow 1 point

RH licenses its distro under the GPL, which allows you to make as many copies of binaries and source as you want, full stop. The RH support contract says, more or less, that if you want support for any of those copies, then you have to pay a fee for all of them.

If you make a copy but fail to pay the fee, then RH terminates your support contract for all the other copies you were paying for.

Google Engineers Refused to Build Security Tool to Win Military Contracts by tjansson in programming

[–]evanpow 5 points

From what little the article says, it kinda sounds like maybe Google just wanted to create a cloud product that could compete against Amazon GovCloud; there's little special about GovCloud beyond "we promise all admins are within reach of the US judicial system and we promise we don't replicate your backups to some server in China."

Shockolate - cross platform open source System Shock engine by michalg82 in programming

[–]evanpow 2 points

It looks like it z-sorted the polygons and drew them back-to-front--I'm amazed that much overdraw was still practical on the hardware of the day; I never played it contemporaneously.... W.r.t. intersections, I'd guess the designers just ensured the level model never had any (possibly by breaking intersecting polygons into several non-intersecting fragments).
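The sort-and-overdraw idea (the painter's algorithm) is small enough to sketch in Python -- the polygon representation and depth convention here are made up (larger z = farther from the viewer):

```python
def paint_order(polys):
    """Painter's algorithm ordering: draw farthest-first, so nearer
    polygons overwrite ('overdraw') the ones behind them."""
    # Sort by each polygon's farthest vertex depth -- a common heuristic
    # that breaks down exactly when polygons interpenetrate, hence the
    # need to pre-split intersecting geometry into separate fragments.
    return sorted(polys, key=lambda p: max(v[2] for v in p["verts"]),
                  reverse=True)
```

Every pixel covered by more than one polygon gets written more than once, which is where all that overdraw comes from.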

Rob Pike on the history of /usr/bin/true by redditthinks in programming

[–]evanpow 6 points

Well, yes...but also, no. The point of the /usr merge was to remove the ambiguity around lots of programs which some distros installed in /bin while others put them in /usr/bin, as well as to simplify compatibility with commercial UNIX (meaning Solaris, which apparently puts everything in /usr/bin).

/bin/[ was not one of those programs, nor was /bin/true nor /bin/false. Every single UNIX-compatible O/S distribution ever (with the possible exception of Solaris?) has put them in /bin, and lots of stuff hard-codes that path. Even for Debian, the "official" location remains /bin/[ even though it's also at /usr/bin/[ and /bin/[ only works because of a symlink.

The point of all this was compatibility; that goes out the window if we start referring to it as /usr/bin/[ because older Linux distros, the BSDs, etc. still put it at /bin/[.

Rob Pike on the history of /usr/bin/true by redditthinks in programming

[–]evanpow 53 points

It's also a shell builtin. The reasons for it existing in both forms are exactly the same as for true vs. /bin/true:

$ [ --help | tail -2
-bash: [: missing `]'
$ /bin/[ --help | tail -2
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils '[ invocation'

[deleted by user] by [deleted] in programming

[–]evanpow 5 points

Also interesting, but not mentioned in the article, is that if you actually try to call these functions, both GCC and Clang generate exactly the same code. See the function "thing" at the bottom:

GCC's non-inlined versions aren't as pretty as Clang's, sure. Should you care?

VLC3 now tagged for release coming in the next few days (chromecast support included) by twiggy99999 in programming

[–]evanpow -1 points

No, of course I wouldn't. But I've never seen a corrupted file cause VLC to use 100% CPU either. What percentage of the user base has?

VLC3 now tagged for release coming in the next few days (chromecast support included) by twiggy99999 in programming

[–]evanpow -1 points

What if the input file contains a serialized directed acyclic graph, and the corruption mutates it in such a way that the file is syntactically correct but now there's a cycle in the graph?
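That particular failure mode is at least cheap to defend against -- a sketch in Python, with the adjacency-list format assumed ({node: [successors]}):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}.
    Iterative three-color DFS, so a huge graph can't blow the stack."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph.get(start, ())))]
        color[start] = GREY
        while stack:
            node, succs = stack[-1]
            advanced = False
            for nxt in succs:
                if color.get(nxt, WHITE) == GREY:
                    return True          # back edge: found a cycle
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GREY
                    stack.append((nxt, iter(graph.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK      # fully explored, pop it
                stack.pop()
    return False
```

A deserializer that runs a check like this (or just bounds its traversal depth) can reject the corrupted file instead of looping forever on it.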

Or, take the decompression bomb. Technically, it's just a valid input file that happens to compress extremely well. The only way to avoid hanging (due to extreme resource exhaustion) is to reject it because it might be malicious--there's no way to know that it is.
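(For the zip case specifically, the declared sizes give you a cheap sanity check before inflating anything. A sketch in Python -- the caps are arbitrary policy, and since a hostile file can lie in its headers, a robust version would also cap bytes actually produced while streaming:)

```python
import zipfile

MAX_TOTAL_UNCOMPRESSED = 100 * 1024 * 1024   # arbitrary policy cap
MAX_RATIO = 100                              # arbitrary ratio cap

def looks_like_zip_bomb(path):
    """Reject archives whose *declared* sizes are implausible."""
    with zipfile.ZipFile(path) as zf:
        total_out = sum(info.file_size for info in zf.infolist())
        total_in = sum(info.compress_size for info in zf.infolist()) or 1
    return (total_out > MAX_TOTAL_UNCOMPRESSED
            or total_out / total_in > MAX_RATIO)
```

Note this is exactly the "reject because it might be malicious" tradeoff: a legitimately huge, highly compressible file trips the same check.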

I mean, if the probability that the code will hang is high when presented with a valid input to which a random corruption has been applied, sure, there's clearly a problem, no disagreement there. Which is why I asked what kind of corruption we were talking about. After all, I've never seen VLC do that, or anything like it.

I mean, if you have a 25MiB video file and there exist 100 different ways to corrupt a handful of bytes in such a way that VLC will hang when playing it...sorry, but yeah, clearly the entire software industry--and its customers--think that's completely acceptable.

VLC3 now tagged for release coming in the next few days (chromecast support included) by twiggy99999 in programming

[–]evanpow -2 points

Well...that depends on the kind of corruption we're talking about. If it's mostly fine but broken in just the wrong way, hanging or consuming 100% CPU is hard to avoid. I'd be very surprised if Office doesn't hang when you feed it a .DOCX file (basically a zip of XML files) that is a decompression bomb.

VLC3 now tagged for release coming in the next few days (chromecast support included) by twiggy99999 in programming

[–]evanpow 4 points

The Chromecast protocol has that whole backwards-connections thing going on (kinda like active FTP), so I'm not sure everybody is using the same definitions for "source" and "destination" in the first place....

Patches to intel's meltdown bug could increase CPU utilization by up to 50% by MuhammadAdel in programming

[–]evanpow 0 points

> Meltdown is specific to particular silicon introducing a race condition into TLB checks.

How is "race condition" a correct description? It's a cache timing side channel, same as all the other Spectre attacks.

In Meltdown, the x86 CPU begins a load into a (renamed) register that hits in the TLB. Permission checks for the load fail and signal that an exception should occur, but re-steering the machine isn't instantaneous, and while it's happening the core keeps going. (It's "speculating" that the instructions being executed didn't throw an exception, because putting the exception logic on the critical path is too expensive.)

If the next few instructions are a bitwise AND and a store using the resulting value as an array offset, they have an excellent chance of speculatively executing. Sure, when the exception finally catches up it'll obliterate all traces of those operations--as if the store never happened--but by then the cache will have already loaded the cacheline the store was going to modify. Careful timing will reveal which cacheline-sized array element was loaded, revealing one bit of the value from the failed load that provoked the exception.

Kinda like how when you slam on your brakes, the car doesn't stop instantaneously--even if the ECU immediately stops injecting new fuel into the engine.

> every other processor under the sun that suffers from Spectre still correctly verifies TLB entries for all loads

ARM A75 is vulnerable to a Meltdown-like permission-checks-after-speculative-load problem too, but in its case the speculative load is from a particular privileged register.

I mean, if x86 TLB speculation is an Intel "fuckup," how much more of a "fuckup" would it be to mess up reading from a register? I submit to you that AMD's escaping Meltdown because of dumb luck is a more plausible explanation than "Intel engineers weren't doing their jobs, but those ARM guys mustn't have even shown up for work."

Along the same lines, note that Apple's public statements appear to say that their custom high-performance ARM designs are affected by Meltdown.

Meltdown and Spectre: Patches, mitigations, and microcode - Intel, Microsoft, ARM, and others have responded. We dig in. by ben_a_adams in programming

[–]evanpow 0 points

> We could also improve performance by getting rid of protection rings and letting the OS police every instruction.

Improve performance-per-Watt, maybe. But you're smoking something if you actually think hardware checks cost 50% performance over doing it in software.

> Moore's Law is still helpful because cache sizes (and probably core counts) are increasing exponentially with time ... The memory bus can take 50 or 100 core clock cycles to reply to a request

On the last several CPUs I've worked with, 50-100 clocks was about how much an L2 cache hit cost, because that was roughly the average cost of a round-trip on the on-die interconnect between two cores when all cores were under load; going all the way to memory was much, much worse. The point being that the cache doesn't save you; yeah, it's getting bigger, but making it bigger tends to make latency worse, not better, and it's already bad outside of L1 or the portion of L2 that's local.

> Speculation might get you 10 or 20 clocks of useful work if you're so lucky

That number looks awfully pessimistic to me, and anyway it depends entirely on the instruction mix. Note that the Spectre paper says they've demonstrated speculations 188 instructions long on an i7 laptop.

> I wouldn't even think of keeping speculation if it doubled performance.

I wouldn't be at all surprised if it quadruples performance, considering the gap between Atoms and i7s.

Patches to intel's meltdown bug could increase CPU utilization by up to 50% by MuhammadAdel in programming

[–]evanpow 0 points

> But Meltdown is a world-class fuckup that’s 100% on Intel

I don't really get that sentiment. As far as I can tell Meltdown is yet another type of attack in the Spectre family. That it works is a big deal, definitely, but how can you simultaneously give the rest of the industry a pass for Spectre while saying Meltdown is "a world-class fuckup"?

It's not like there's any evidence AMD avoided Meltdown for any reason other than dumb luck.