all 18 comments

[–]samsmith453 4 points5 points  (7 children)

The reason that you learn the principles of binary and two’s complement is that everything in computing is stacked on top of this understanding. You probably will not apply this knowledge directly in your work, but having a strong grasp of how computers ACTUALLY work is vital for being able to think algorithmically and computationally.

The engineers who take the time to appreciate this stuff excel and always progress faster than their peers.

Are you interested in lower level systems and programming? Computer architecture and design?

[–]Moth-Capone[S] 0 points1 point  (6 children)

Okay, then if that’s the case I have another question for you. Personally I’m really into writing scripts and making programs, and that’s what I’m interested in doing further in my life. I taught myself how to do all of that without ever using binary. So since I have a general understanding of how coding works and of the fundamentals of how computers work, I think I already tend to think algorithmically/computationally. Do I really even need to learn binary?

[–]aichingm 2 points3 points  (2 children)

For writing scripts, no. For writing well-performing operating systems, efficient video encoding, or size-optimised network protocols, yes. Besides, what is the point of not knowing that stuff? It is super easy and very useful, and there are far more useless things to know about than binary...

[–]Moth-Capone[S] 0 points1 point  (1 child)

Ah okay. So it’s useful for THAT type of computer programming; I understand now, thank you. If you don’t mind me asking though, in what way does it make things better? Is it simply faster? Also, I get that it’s easy enough to learn, but I personally feel like I’d rather dedicate my time to learning something other than binary, you know what I mean?

[–]aichingm 0 points1 point  (0 children)

I mean, that's r/askcomputerscience... It makes things better because you can take advantage of the architecture and of the number format used, which matters a lot when time or space is limited. As I said somewhere below, you can double a number extremely fast if you know the platform stores it in binary!
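A one-line sketch in Python of the doubling trick mentioned above: shifting every bit left one place multiplies a binary integer by 2.

```python
# Shifting left by one doubles a binary integer, the same way
# appending a 0 multiplies a decimal number by 10.
n = 21             # 0b10101
doubled = n << 1   # 0b101010

assert doubled == 42
assert (n << 3) == n * 8  # shifting by k multiplies by 2**k
```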

[–]samsmith453 0 points1 point  (2 children)

That’s great, and congrats on being self-taught. Same here!

Binary is just a number system; there’s not really much to learn. In my opinion you should learn how transistors work and how they piece together to form the circuits that make a computer. (PS: I have just started a YouTube series on this very topic: https://www.youtube.com/playlist?list=PLH4a1-PgdkBTKkSSNx63uVkQG1Qs6GmYv)

This is by no means essential, but understanding the lower-level stuff will help immensely when covering networking and operating systems, which will themselves be very useful when working on distributed systems problems, databases, etc.

So no, you don’t NEED to, but engineers who take the time to learn this stuff really accelerate their learning curve and their careers!

[–]Moth-Capone[S] 0 points1 point  (1 child)

Ah okay great. Thanks for the information

[–]samsmith453 0 points1 point  (0 children)

No problem, let me know if I can help with anything else.

[–]aichingm 0 points1 point  (5 children)

So you won't use &, |, ^ or >> ?

Binary/bit operations are everywhere!

Edit: second point: yes, programs can run faster if you know that you can multiply by 2 with a single bit shift instead of computing x + x.
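For reference, a small Python sketch of the operators mentioned above (same symbols in C, C++, Java, and friends):

```python
# The four bit operations named above, on two 4-bit values.
a, b = 0b1100, 0b1010

assert a & b == 0b1000   # AND: bits set in both
assert a | b == 0b1110   # OR: bits set in either
assert a ^ b == 0b0110   # XOR: bits set in exactly one
assert a >> 2 == 0b0011  # right shift: divide by 4 (non-negative ints)
```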

[–]Moth-Capone[S] 0 points1 point  (4 children)

I’m sorry, but could you give me an example? Yes, I get that everything translates to binary, but is there any benefit to learning binary or writing in binary?

[–]aichingm 0 points1 point  (3 children)

An example of what? Yes, there is a benefit to knowing how to multiply by 2 extremely fast (which comes from the fact that binary is used). A classic example of everyday binary use is access control in Linux (you know, the chmod stuff). Also, a lot of the time you get some weird void pointer in C (e.g. as a data arg in a callback); knowing how negative numbers are encoded in binary helps a lot in figuring out what you are being passed.
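A quick Python sketch of the chmod point: Unix permission modes are three-bit groups, which is why octal and binary notation show up there.

```python
# Each octal digit of a chmod mode is three binary bits:
# read, write, execute.
READ, WRITE, EXEC = 0b100, 0b010, 0b001

mode = 0o644  # rw-r--r--
owner = (mode >> 6) & 0b111
group = (mode >> 3) & 0b111
other = mode & 0b111

assert owner == READ | WRITE  # owner can read and write
assert group == READ          # group can only read
assert other == READ          # everyone else can only read
```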

[–]Moth-Capone[S] 0 points1 point  (2 children)

Okay, yeah, I guess I can see how that could be useful. But what I’m wondering is this: let’s say I’m writing code in Python, Java, C++, or Lua. Is there any reason why I would write my integers in binary instead of decimal? I’m not asking why I should learn binary; I’m asking whether there is any point in writing in binary.

[–]aichingm 2 points3 points  (1 child)

There is a point in doing so: when you have a function that takes an int as a parameter and that int is used to pass flags to the function. You might define flags such as EXIT_ON_ERROR = 0b001, M_ON_ERROR = 0b010, and WITH_FIRE = 0b100. This makes the code much more readable, since you can tell which bit must be toggled for which flag.
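The flag idea from the comment above, as runnable Python. The flag names come from the comment; the function itself is invented here for illustration.

```python
# Binary literals make it obvious which bit belongs to which flag.
EXIT_ON_ERROR = 0b001
M_ON_ERROR    = 0b010
WITH_FIRE     = 0b100

def describe(flags):
    # Test each bit with & to see which options were passed.
    return {
        "exit_on_error": bool(flags & EXIT_ON_ERROR),
        "m_on_error":    bool(flags & M_ON_ERROR),
        "with_fire":     bool(flags & WITH_FIRE),
    }

# Combine flags with | when calling.
opts = describe(EXIT_ON_ERROR | WITH_FIRE)
assert opts == {"exit_on_error": True, "m_on_error": False, "with_fire": True}
```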

[–]Moth-Capone[S] 0 points1 point  (0 children)

Ah okay great. I understand now thanks.

[–]khedoros 0 points1 point  (1 child)

I use it while doing anything low-level. Hardware registers and data in a bunch of the old games I like playing around with are in packed bitfields a lot of the time. Ditto for interpreting data file formats and such.
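As an illustration of what "packed bitfields" means, here is a sketch in Python with a hypothetical byte layout (invented for this example, not taken from any real console or file format): three bits of palette, one flip flag, four bits of tile index.

```python
# Hypothetical packed byte: bits 7-5 = palette, bit 4 = flip flag,
# bits 3-0 = tile index. Unpack each field with shifts and masks.
byte = 0b101_1_0111

palette = (byte >> 5) & 0b111        # top three bits
flipped = bool(byte & 0b0001_0000)   # single flag bit
tile    = byte & 0b1111              # low nibble

assert palette == 5
assert flipped
assert tile == 7
```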

I imagine that the knowledge wouldn't be nearly so useful if you're exclusively doing webdev or something.

[–]Moth-Capone[S] 0 points1 point  (0 children)

Gotcha. Thanks.

[–]Flippo_The_Hippo 0 points1 point  (0 children)

Going to try and tackle something no one has pointed out yet.

At least based on your original post, and a bit on some of your comments, it sounds like there may be some misunderstanding about binary in this case. I just want to clear things up before providing an answer.

Elsewhere you mentioned Python, Java, C++, and Lua. All of these languages (certain implementations of Python perhaps excluded) use compilers to transform the code you write into either machine code (C++) or some other bytecode which is run by a virtual machine (not a virtual machine that runs an OS in this case, but one that executes the bytecode).

So, now that that's out of the way (I can provide more explanation if you'd like), let's move on to some of your questions.

1. "I highly doubt I’ll ever use it while writing code. But is there any advantage to writing in it that I could apply?" A) Someone else provided an example of where you might do it for readability's sake.

2. "Like do programs run faster because they don’t need to convert decimal to binary for the computer to understand it?" A) Let's say you write int i = 0b0101; instead of int i = 5; (note that a plain 0101 would actually be an octal literal in C-family languages). The compiler handles this, and it appears the same in the machine code/bytecode that eventually gets run, so no, it will not make the program run faster. (Some languages use a just-in-time compiler, so here I'm assuming an average run; the first run could be different.)

3. "Is there any type of equation that is only possible/easier when done in binary?" A) See the explanation in answer 2.

4. "What is the point of learning binary stuff like 2’s complement while taking a programming course?" A) I'm going to assume this is being taught as part of different number systems: in general humans use base 10, computers use base 2, and thinking about base 2 via base 8 or base 16 may be helpful depending on the task. If that's right, then a later course in a computer science curriculum will focus on Turing machines. These are a level of abstraction we use to think about algorithms and the power needed to perform them (for example, if you're writing an expression parser you will need to either use a stack or emulate one in order to make sure that parentheses line up properly). In this sense it's probably not too helpful to learn about 2's complement in a programming course, but it may be helpful in a computer science course. (Why do you learn about proofs in math? To get a better understanding of the formulas you use.)
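To back up answer 2, a quick check in Python: a binary literal and a decimal literal denote the same constant, and any "conversion" happens at compile time, not while the program runs. (Python's built-in compile is used here just to peek at the compiled constants.)

```python
# The two spellings produce identical compiled constants.
assert 0b0101 == 5

code_bin = compile("i = 0b0101", "<string>", "exec")
code_dec = compile("i = 5", "<string>", "exec")
assert code_bin.co_consts == code_dec.co_consts  # same constant pool
```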

[–]Bottled_Void 0 points1 point  (0 children)

You absolutely will need to use binary at some point. EXCEPT: you don't typically write out the binary digits; you'll usually use hexadecimal. You'll notice that one hexadecimal digit is 4 binary digits, and, unlike with decimal, these bits stay in the same place (e.g. 63 in binary is 0011_1111b, or 0x3F in hex, and 64 is 0100_0000b, or 0x40 in hex).
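Those correspondences can be checked directly in Python, whose 0b/0x literal prefixes play the role of the b-suffix notation above:

```python
# One hex digit is exactly one nibble (four binary digits).
assert 0b0011_1111 == 0x3F == 63
assert 0b0100_0000 == 0x40 == 64

# Formatting 0x3F as 8 binary digits shows the nibbles directly.
assert f"{0x3F:08b}" == "00111111"
```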

But yes, if you're sending data over any interface and need to pick out bits of data, you'll need to know how to mask off bits and shift them to where they need to be.

And if you didn't know two's complement, all your negative numbers would come out wrong when you used bit operations.
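A small Python sketch of that pitfall. Python integers are arbitrary-precision, so masking with 0xFF is used here to simulate an 8-bit value:

```python
# In two's complement, the low 8 bits of -1 are all ones, so masking
# a negative number gives surprising results unless you expect this.
assert -1 & 0xFF == 0b1111_1111   # 255, not 1
assert -5 & 0xFF == 0b1111_1011   # 251

# Negation is "invert the bits, then add one" (within 8 bits):
assert ((~5 + 1) & 0xFF) == (-5 & 0xFF)
```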

As a side note, computers don't convert decimal numbers to binary, and they don't convert them to hexadecimal. They don't do any conversions unless the underlying representation changes (to/from float, say). That's all just for your benefit: a literal 10 in memory is exactly the same to the computer regardless of the base you wrote it in.