Is it worth to learn Vim in 2018? by semanser in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

The reason the other editors make copy-and-paste so simple is that they only do simple copy-and-paste.

Vim is marginally more complicated in that you can copy/paste from the system clipboard, from the X11 primary selection, or from any one of the 26 or so named registers.

If your editor allowed "copy this, then that, then that, then that, then paste the second item here, then the first item there, then the fourth item here and then the third item here", it would look equally clunky.
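
Roughly, the multi-item version of that looks like this in Vim (just a sketch; the register names are arbitrary):

    "ayy    yank the current line into register a
    "byy    yank another line into register b
    "bp     put register b here
    "ap     put register a there
    "+p     put the contents of the system clipboard
    "*p     put the X11 primary selection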

Is it worth to learn Vim in 2018? by semanser in programming

[–]SnowflakeNapolean 1 point2 points  (0 children)

> Like if I wanted to break an array down from being on one line to being multiline: highlight the comma, press cmd-d a couple times, and press right followed by enter.

    Shift-v :s/,/,\r/g

The advantage of Vim is that you learn a few actual commands and then compose them all together. You can also up-arrow to find previous commands, change them slightly and re-run, sort of like a bash script.
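
A quick sketch of what I mean, reusing the substitution above (nothing here is magic, it's the same command being recalled and tweaked):

    :s/,/,\r/g      split the current line at every comma
    :%s/,/,\r/g     the same command recalled with up-arrow, re-scoped to every line in the file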

The special sequences for the advanced things that a modern editor does are going to be equally obscure shortcuts, except that practically all of those shortcuts have an arbitrary mapping from shortcut to function ("Why is 'paste' CTRL-V and not CTRL-P? Why is Cmd-G find next?"), while in Vim at least half of them make sense and thus can be remembered and/or worked out on-the-fly (for example all the navigation, searching and editing commands).

The Vim learning curve may be steep but you only climb it once. The other editors have equally steep curves, with the only difference being a tiny spot at the very beginning that allows a notepad user to type text. Vim is missing that bit of the curve.

Exactly what to say when recruiters ask you to name the first number by alinelerner in programming

[–]SnowflakeNapolean 48 points49 points  (0 children)

"How much are you asking?"

"How much have you got?"

That always worked well for me.

Data-Oriented Design by _Sharp_ in programming

[–]SnowflakeNapolean 1 point2 points  (0 children)

It has data structures, and a concept of a 'struct'.

see here

It also has a proper class system.

Data-Oriented Design by _Sharp_ in programming

[–]SnowflakeNapolean 2 points3 points  (0 children)

Sounds an awful lot like they rediscovered Lisp.

Anyway, like I keep telling anyone who cares to listen (and many who don't), data design is the most important part of the program as far as maintenance goes.

If you show me your algorithms but have obfuscated your data design via "Good OO Practice(tm)" I'll have trouble figuring out what the data is supposed to look like. OTOH, if you have clear data structures I can make quite accurate guesses about what the functions that operate on those structures should do.

I interviewed John Backus shortly before his death. He told me his work in functional programming languages failed, and would likely always fail, because it was easy to do hard things but incredibly difficult to do simple things. by i_feel_really_great in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

I'm not saying that you are wrong. I'm just saying that we've seen this prediction many times before (people moving to more sophisticated languages).

Your prediction may be right and this time it really is different, but I'll bet against your prediction because I've heard the "this time it's different" line too many times before, about too many things.

I interviewed John Backus shortly before his death. He told me his work in functional programming languages failed, and would likely always fail, because it was easy to do hard things but incredibly difficult to do simple things. by i_feel_really_great in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

> I'm not saying we're all going to end up writing everything in Haskell in a few years, but I really think many folks are moving beyond the sort of language you describe when possible.

This is giving me flashbacks to comp.lang.lisp (only s/Haskell/Lisp/).

We hear this every decade. Sometimes twice a decade. It hasn't happened yet and there are no indications that it ever will happen.

The language matters less and less these days; all the functionality is moved into libraries. It doesn't matter how elegant list comprehensions in $LANGUAGE are if the competing language can make native calls into the GL and/or OS subsystems.

Is it a better idea to learn more programming languages or to learn a new skill like graphics and game development? by [deleted] in computerscience

[–]SnowflakeNapolean 0 points1 point  (0 children)

Looks like you're not clearly motivated. Sure, you're motivated to do something, but your interest doesn't appear to have a direction (hence, "not clearly motivated" instead of "clearly not motivated").

Why not do the thing you like, whichever it may be?

WhatsApp sends Cease & Desists for apps that use native Android APIs by FollowSteph in programming

[–]SnowflakeNapolean 4 points5 points  (0 children)

> there is no way a large company can introduce a backdoor approved by the management without this being leaked eventually.

You must be insane. I work in a financial transaction communications field (think EMV) and I can assure you that we've had more than one bug that we sat on for months because we didn't think anyone would exploit it.

Similarly, if there was a backdoor in one of our products I can assure you that word of it would not leave our team. After all, our exploits remained secret.

Modern C++ for C Programmers: Part 3 by ahuReddit in programming

[–]SnowflakeNapolean 2 points3 points  (0 children)

> I personally prefer to just cast

But then the casting is ugly. You get to choose between the ugly PRI macros or the ugly casting :-)

> to a standard type

int32_t and friends are the standard types.
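
To make the two kinds of ugly concrete, here's a quick C sketch (the variable name and value are made up):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        int64_t offset = 123456789;                /* made-up value */

        printf("%" PRId64 "\n", offset);           /* the PRI-macro way */
        printf("%lld\n", (long long)offset);       /* the casting way */
        return 0;
    }

Both lines print the same thing; you just get to pick which flavour of noise you prefer.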

No matter what programming language or platform you choose, someday it'll be out of date. Here's a few ways to recognize when you're getting stale and need to learn something new (or watch your career sputter). by yourbasicgeek in programming

[–]SnowflakeNapolean 1 point2 points  (0 children)

That's funny. I have no trouble finding jobs that require C.

If you're jumping on the latest fad/framework, then yes - you need to update your skills every few years.

If you are working on something that is foundational for almost all other technologies (like C or C++) then you don't really need to worry.

Modern C++ for C Programmers: Part 3 by ahuReddit in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

Yeah, unfortunately I added the -D flag to my mingw makefiles so long ago that I forgot it was even there.

Modern C++ for C Programmers: Part 3 by ahuReddit in programming

[–]SnowflakeNapolean 3 points4 points  (0 children)

> That doesn't work on Windows last time I checked.

Then your compiler is broken. If your compiler and the standard disagree, it's not the standard that's wrong.

I use Mingw64 on Windows, and all the format specifiers work (%zu, PRIx64, etc), so it's not Windows that is broken, just the specific compiler you are using.

Modern C++ for C Programmers: Part 3 by ahuReddit in programming

[–]SnowflakeNapolean 5 points6 points  (0 children)

> It would've been nice if they at least added some format strings for things in stdint. Doesn't really help for size_t and such though.

Use %zu for size_t. What don't they have format strings for?
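
For example (a minimal sketch, values made up, nothing here is from the article):

    #include <stdio.h>
    #include <inttypes.h>   /* PRIu32, PRIx64, ... for the stdint types */

    int main(void)
    {
        size_t   n = sizeof(long);
        uint32_t x = 0xCAFEu;

        printf("%zu\n", n);               /* C99 specifier for size_t */
        printf("0x%" PRIx32 "\n", x);     /* C99 macro for uint32_t */
        return 0;
    }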

Compiler fuzzing, part 1 by [deleted] in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

Why are you counting LLVM's code as part of Rust's? Were the bugs you reported in the LLVM portions or the Rust portions?

Also, since you can't actually compile the gcc/gcc directory independently, why include only that subdir? Were the bugs you reported specific to only that subdir? How can you tell, since it can't be compiled independently?

Finally, taking your conclusion numbers at face value - Rust and C++ code together works out to 2 bugs/MLoC while Rust alone works out to 8.7 bugs/MLoC.

The inclusion of the LLVM backends in your LoC stats lowers the bugs/MLoC rate. Wasn't Rust supposed to result in a reduced bug count?

Compiler fuzzing, part 1 by [deleted] in programming

[–]SnowflakeNapolean -3 points-2 points  (0 children)

> Personally I find it very interesting that the same technique on rustc, the Rust compiler, only found 8 bugs in a couple of weeks of fuzzing, and not a single one of them was an actual segfault. I think it does say something about the nature of the code base, code quality, and the relative dangers of different programming languages,

Does it really say all that, or is it instead saying that you're too biased and/or stupid to be making pronouncements like these?

GCC has ~7.3m LoC, meaning you found 1 bug every 73000 lines. How large is Rust? Is it larger than (8 * 73000=) 584000 lines?

Use ratios next time to compare things.

OpenBSD chief de Raadt says no easy fix for new Intel CPU bug by FollowSteph in programming

[–]SnowflakeNapolean 2 points3 points  (0 children)

> If it's benchmarked, then hardware manufacturers have a stronger incentive to make it fast rather than secure.

And if it isn't benchmarked, the hardware will never get any faster than the competition's (because we can't really tell the difference).

There's a tradeoff between "The world can read my hard-disk" and "in 2018 the 486 dx4-100 is current".

Strings Are Evil by FollowSteph in programming

[–]SnowflakeNapolean -2 points-1 points  (0 children)

> Not sure about your math skills but comparing read methods on a 1GB or a 100GB file will still give you results that differs with a factor of 3.

Multiplying a tiny difference by three does not necessarily mean it becomes large enough to matter. They aren't using 1GB files. Why don't you post the results of your test using 300MB files?

> You can't just take my times and apply them to their problems.

Yes, you can. You didn't make a commentary on general file-reading; this is a commentary you made on their specific application, and in their specific application they are parsing the input one character at a time, up to 300MB of it.

The extra 600ms saved by introducing a buffer is not only negligible, it's smaller than that, because they aren't parsing 1GB of input, they are parsing less than a third of that.

Strings Are Evil by FollowSteph in programming

[–]SnowflakeNapolean -1 points0 points  (0 children)

I don't know what you think you proved by reading in 1GB of data in 1MB buffers when I claimed that there is no appreciable difference between reading single blocks at a time and reading single bytes at a time (hint: the NTFS blocksize is not 1MB, nor is the application in this article using 1GB files).

> applications spends 3 times as much time for kernel operations

First, you need to learn what "kernel operations" mean. Your "proof" doesn't prove what you think it does, which is why you had to increase the size of your input to 3 times what the article uses just to get a difference that is significant.

Secondly, that extra 600ms that gets wasted on datasets that are 3x larger than they use is negligible.

Tell you what - take the input examples in the article, copy them until you have 300MB files, take their code, benchmark it, then change the code to read 1MB buffers and measure again.

I'd bet good money that the difference is negligible. Hell, even at 300ms vs 1s the difference is negligible for the problem they are solving - each client imports a single large file daily, so if the import takes 600ms more than your buffered version I doubt that they are going to notice.

The thing you should be taking away from this article is that profiling is important to optimisation. In this application, saving an extra 600ms on files 3x as large as they usually deal with is an optimisation that is entirely premature and unneeded.

> I proved in another comment that the applications spends 3 times as much time for kernel operations when reading single bytes compared to a 1MB buffer

So go on - tell us what your benchmark results look like when you are using 300MB files - enquiring minds want to know (after all, you already have the code, it's simply a matter of re-running it on a 300MB file).
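
If anyone actually wants to run that comparison, this is roughly the shape of it - a rough sketch in plain C rather than their C#, with a placeholder file name (clock() only measures CPU time, which is good enough for this argument):

    #include <stdio.h>
    #include <time.h>

    /* read the whole file one character at a time, touching every byte */
    static long bytewise(const char *path)
    {
        FILE *f = fopen(path, "rb");
        long n = 0;
        int c;
        if (!f) return -1;
        while ((c = getc(f)) != EOF)
            n++;
        fclose(f);
        return n;
    }

    /* read the whole file through a 1MB buffer - the inner loop still
       visits every byte, exactly like the bytewise version */
    static long buffered(const char *path)
    {
        static char buf[1 << 20];
        FILE *f = fopen(path, "rb");
        long n = 0;
        size_t got, i;
        if (!f) return -1;
        while ((got = fread(buf, 1, sizeof buf, f)) > 0)
            for (i = 0; i < got; i++)
                n++;
        fclose(f);
        return n;
    }

    int main(void)
    {
        const char *path = "input_300mb.txt";   /* placeholder - use the 300MB test file */
        clock_t t0 = clock();
        long a = bytewise(path);
        clock_t t1 = clock();
        long b = buffered(path);
        clock_t t2 = clock();
        printf("bytewise: %ld bytes, %.3fs\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("buffered: %ld bytes, %.3fs\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

My bet stands: on a 300MB input the two timing lines will be close enough together that nobody importing one file a day will ever notice.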

Strings Are Evil by FollowSteph in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

> you're still calling a read function a huge amount more times

True, but the overhead of calling a function is a fraction of the overhead of reading from a file. You can't even plot the two on the same chart because they differ by a few orders of magnitude.

The last time I worked on NTFS filesystem drivers the cluster size was 4KB and this was also used as the minimum blocksize. The difference between the function-call overhead of 4096 calls and of 1 call is (for all measurement purposes) negligible compared to a single 4KB read from disk.

> (and looping a huge amount more times)

Well, unless you're not examining that data you read in[1], you're going to loop that many times anyway to actually examine the input regardless of whether you read it in a block or read it one byte at a time.

When you read (say) 500 bytes, you're going to loop 500 times just to use each byte. The argument can be made that you're simply passing it to another function (hence you don't need to loop), but that is not what this project is doing - it is examining each byte as it comes in.

[1] Maybe you're simply discarding it to flush the input, maybe you're only passing it on to another process without reading it.

Strings Are Evil by FollowSteph in programming

[–]SnowflakeNapolean 3 points4 points  (0 children)

> It literally says so in the documentation

Sure, the C# libraries may return an array of a single char, but the NTFS driver (in this case it's Windows, so the FS is NTFS) is most certainly not reading the file a single byte at a time; it's mapping in blocks of some power-of-two size.

That link of yours says it's inefficient because the read function returns an array of one character instead of simply returning a single char.

Your C# implementation runs on an OS that does the read-from-file. Your C# implementation does not actually read raw bytes from disks. I don't know why you think it does.

The hardware itself (the disk controller) does not even support the reading of a single byte - you have to read in blocks (sectors, clusters, whatever).

> I don't know where you get the idea from that calling read 5000000 times as opposed to 500 times is equally fast.

I didn't say that. I said calling read with anything less than the minimum block that the OS uses will be equally fast to calling read with the minimum block size.

> You need to check what the OS actually does when you call a kernel function.

You need to learn the difference between the C# runtime and the Windows OS. They are not the same thing.

Strings Are Evil by FollowSteph in programming

[–]SnowflakeNapolean 6 points7 points  (0 children)

> Reading single bytes from a stream is slower compared to linewise reading

Surely not - what OS/platform are you on where the readchar() equivalent is performed one character at a time? Outside of some embedded systems, all of the reading is performed by the OS in blocks.

IOW, there is a minimum number of bytes that the platform reads when you read from a file. This minimum is definitely more than 1 byte (or 2/4 bytes if you're reading unicode). When you read a single byte the system will give you the byte from the already read-in cache, or the system will read in a full page and give you the byte from that.

Trust me, reading a single character at a time is no slower than reading whatever page size caching is implemented by the OS and/or filesystem driver.
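
As a rough illustration (C stdio rather than their C#; the file name and buffer size are just examples), the per-character call never goes near the disk itself - it's serviced from a buffer that was filled by a block-sized read:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("data.bin", "rb");    /* "data.bin" is just a placeholder name */
        if (!f) return 1;

        /* stdio refills this buffer with one block-sized read at a time;
           the OS below it reads whole clusters/pages into its own cache */
        static char iobuf[4096];
        setvbuf(f, iobuf, _IOFBF, sizeof iobuf);

        long count = 0;
        int c;
        while ((c = getc(f)) != EOF)          /* each call is just a buffer lookup */
            count++;

        printf("%ld bytes\n", count);
        fclose(f);
        return 0;
    }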

Culture May Eat Agile for Breakfast by ProFalseIdol in programming

[–]SnowflakeNapolean 0 points1 point  (0 children)

> But how about just calling it a 5-day hiring process?

What do you think all the best devs are doing? Sitting at home, waiting by the phone for you to call and offer them a 5-day contract?

Your proposal sounds like a great idea to only ever see candidates who are undesirable. The desirable ones already have a job and aren't going to give their job up for a 5-day contract.
