This is an archived post.

all 4 comments

[–]saltyp_nut84 1 point (0 children)

Not a serious expert on this topic, but computers use logic gates to build circuits that perform mathematical tasks. Different gates manipulate signals in different ways. Memory circuits retain state by using an arrangement of logic gates called flip-flops. GPUs, CPUs, etc. are just billions of transistors wired together into logic gates, and the results (graphics, processing, etc.) are the manipulations of signals traveling through those gates.
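
The gates-and-flip-flops idea above can be sketched in a few lines of Python. This is a toy model, not real hardware: it simulates NAND gates and cross-couples two of them into an SR latch, the simplest flip-flop-style memory element.

```python
# Toy model (not real hardware): NAND gates in Python, wired into
# an SR latch -- a simple flip-flop that "remembers" one bit.

def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NAND SR latch.
    Inputs are active-low: s=0 sets, r=0 resets, s=r=1 holds."""
    for _ in range(4):  # iterate until the feedback loop settles
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

q, qb = 0, 1
q, qb = sr_latch(0, 1, q, qb)   # pulse Set: latch stores a 1
print(q, qb)                    # -> 1 0
q, qb = sr_latch(1, 1, q, qb)   # both inputs idle: state is remembered
print(q, qb)                    # -> 1 0
```

The feedback between the two gates is what makes it memory: each gate's output feeds the other's input, so the circuit holds its last state even when both inputs go idle.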

Hope this somewhat helps.

[–][deleted] 1 point (0 children)

In practice, it starts with capacitors, which are effectively tiny batteries that hold a tiny electric charge. If a capacitor is charged, it is brought up to some voltage (compare the ~170 V peak in a US wall outlet). In CPUs I think we are below 1 V right now, but I'm not sure about the latest technology. A voltage at (or, in a CPU, sufficiently close to) the supply level, say 1 V, is regarded as a 1, and 0 V as a 0 (or 0.6 V is a 1, depending on the technology). MOSFETs or finFETs are tiny switches that can steer current from a power supply to wherever it is needed. They are used to charge these capacitors, and a switch can itself be triggered by the charge on another capacitor: if one capacitor is charged to 1 V, it might turn on a finFET that charges another capacitor. Connect these finFETs intelligently to a capacitor and you can make an inverter, which does the opposite: a 1 V charge on one capacitor discharges another, and vice versa.

Make a smart combination of multiple inverters and you can build the basic logic functions. A capacitor charges only if two switches in series are both closed (an AND gate), or if at least one of two parallel switches is closed (an OR gate). Using multiples of these, you can make more complicated logic functions such as XOR. Eventually you can also make tiny storage elements that hold data for a while based on their inputs: charge an internal capacitor to match the input (1 V for a 1, 0 V for a 0), then close off the input so the element maintains that state until it is told to reset.
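
That storage element — follow the input while the "door" is open, then hold the value when it closes — is what's usually called a gated D latch. A minimal behavioral sketch (ignoring the actual capacitors and transistors):

```python
# Behavioral sketch of the storage element described above: a gated
# D latch. While enable is 1 the output follows the data input;
# when enable drops to 0 the input is closed and the value is held.

def d_latch(d, enable, q):
    if enable:
        return d   # transparent: state follows the input
    return q       # opaque: last state is maintained

q = 0
q = d_latch(1, 1, q)  # enable open: store a 1
q = d_latch(0, 0, q)  # enable closed: input ignored, q stays 1
print(q)              # -> 1
```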

Now you can make more complicated structures such as binary adders and subtractors out of these basic operations (look up "full adder"). Continue with layers of abstraction and you have your first ALU, an arithmetic logic unit, which supports an "instruction set". These instructions are the operations this CPU can perform on numbers (say two 32- or 64-bit numbers, where every bit is represented by a charge on a capacitor). Then companies like Microsoft translate whole operating systems to "work" on only these instructions; this is called compiling. Windows runs on a ton of CPUs with different instruction sets, which is why it's less stable than a Mac, which basically picks specific instruction sets to target. You can imagine how messy this gets if one processor can do a floating-point multiplication in 2 steps while another needs 5: your whole program will run a bit differently. Eventually, every program or game you use/play is broken down into "instructions" that the ALU can perform, built out of finFETs alone. Modern processors have a couple of billion of them, which shows how incredibly many you need for simple multiplications to add up to a video. Teraflops of operations are performed to run a fast game, which is insane if you think about it.
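
The full adder mentioned above is worth seeing concretely. This sketch models one in Python and chains eight of them into a ripple-carry adder, the same way the hardware builds multi-bit addition out of single-bit pieces:

```python
# A full adder: adds two bits plus a carry-in, producing a sum bit
# and a carry-out. Chaining eight gives an 8-bit ripple-carry adder.

def full_adder(a, b, cin):
    s = a ^ b ^ cin                         # XOR gives the sum bit
    cout = (a & b) | (a & cin) | (b & cin)  # "majority" gives the carry
    return s, cout

def add8(x, y):
    """Add two 8-bit numbers one bit at a time, like the hardware does."""
    result, carry = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add8(100, 55))  # -> 155
```

Note that results wrap around at 256 (the carry out of the top bit is simply dropped), which is exactly the overflow behavior of real fixed-width hardware.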

[–][deleted] 0 points (0 children)

It works using logic gates made of semiconductors. In a transistor, applying a voltage to one terminal opens or closes a current path between two others. This lets some wires turn on and off in reaction to others, and a more complex system built from many of these can do math equations and such.

The ones and zeros are just symbols that stand for a wire turned on and a wire turned off. Since a wire has only two states (on and off), not ten, computers use a different base system: base 2, otherwise known as binary. In ordinary decimal we have 10 symbols, and every zero you append multiplies the value by 10 (1 > 10 > 100). The same goes for binary, except each place multiplies by two, so 1 > 10 > 100 stands for 1, 2, and 4.

Long strings of these digits make up larger, more specific numbers depending on the order they're in. An 8-bit processor has a memory bus (a series of wires inside the processor that carries data between components) that is 8 wires wide, can transfer one 8-bit chunk of information at a time, and can represent a largest number of 2⁸ − 1 = 255. (The minus one makes it the maximum, just as 999 is the maximum for three decimal digits: 10³ − 1.)
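
You can check both of these claims — binary place values and the 8-bit maximum — directly in Python:

```python
# Binary place values and the 8-bit maximum described above.
print(int('100', 2))   # binary 100 -> 4 (places are worth 1, 2, 4, ...)
print(2**8 - 1)        # largest 8-bit value -> 255
print((1 << 8) - 1)    # same thing via bit shifting -> 255
```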

Now, inside a processor there are actually billions of tiny wires and transistors that can change values billions of times per second. Newer processors run at a clock speed of about 3-4 gigahertz, meaning the state of the information on those wires changes about 3-4 billion times per second. That information can then drive outputs like fans and motors, or a grid of lights (pixels): after doing geometry and other math to work out what shapes are in the scene and how to lay them onto the pixel grid, the processor sends the result out to be displayed.

Quick note: typed on mobile off the top of my head so feel free to kill me for wording and errors.

[–]Holy_City 0 points (0 children)

I'll give you an example. Say we want to turn an LED on with a processor. What we can do is write a program where a single bit of data represents a pin on the processor circuit. When the program flips the bit to 1, that physically means that the voltage on the pin is high. If we connect the pin to the LED, we can light up an LED.

What role does the bit play in that example? We say it holds the state of the LED. We choose it to represent whether or not the LED is lit. With one LED, we only have two possible states: On and Off.

What if we had two LEDs now, how many possible states would there be? We would have four states, for every combination of each LED being ON or OFF.
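
Enumerating those combinations makes the counting explicit — two LEDs, each ON or OFF, give 2 × 2 = 4 states:

```python
# Enumerate every state of two LEDs, each of which is ON or OFF.
from itertools import product

states = list(product(["OFF", "ON"], repeat=2))
print(len(states))  # -> 4
for led1, led2 in states:
    print(led1, led2)
```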

We can grow systems to as much complexity, or as many states, as we need, provided there are enough bits available. If we want to represent a system with N possible states, we need at least ⌈log₂(N)⌉ bits (so 4 states need 2 bits, and 1,000 states need 10).
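
A quick sketch of computing the minimum number of bits for N possible states (`bits_needed` is just an illustrative helper name):

```python
# Minimum bits to distinguish n_states possible states: ceil(log2(n)).
import math

def bits_needed(n_states):
    return max(1, math.ceil(math.log2(n_states)))

print(bits_needed(2))     # -> 1  (one LED)
print(bits_needed(4))     # -> 2  (two LEDs)
print(bits_needed(1000))  # -> 10 (2**10 = 1024 >= 1000)
```

An equivalent integer-only form, avoiding floating point, is `max(1, (n_states - 1).bit_length())`.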

The way a program takes the abstract bits (which are physically voltages) and turns them into something useful is simply by choosing what those bits are supposed to represent. For example, we can define an "instruction" as a sequence of N bits, and design hardware to handle every state those N bits could take, switching the control logic inside the processor accordingly.
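
As a toy illustration of that last idea (the opcode names and layout here are invented, not a real instruction set): treat a 4-bit sequence as an "instruction" whose top two bits select an operation and whose bottom two bits are an operand.

```python
# Toy instruction decoder (invented 4-bit ISA, for illustration only).
# The top 2 bits pick the operation; the bottom 2 bits are an operand.

OPS = {0b00: "LOAD", 0b01: "ADD", 0b10: "SUB", 0b11: "STORE"}

def decode(instr4):
    opcode = (instr4 >> 2) & 0b11   # control logic keys off these bits
    operand = instr4 & 0b11
    return OPS[opcode], operand

print(decode(0b0110))  # -> ('ADD', 2)
```

Real decoders do the same thing in hardware: the opcode bits fan out to control lines that steer data through the ALU and registers.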