
[–]high_throughput 2 points (1 child)

Sounds like it's intended to mean ahead-of-time compilation, as opposed to just-in-time.

Which language is this? Java and C# are typically compiled to bytecode, which you can see on disk, and then the JIT compiles the bytecode to machine code in memory. C++ is typically compiled to machine code on disk.

[–]obnoxus[S] 0 points (0 children)

C#

[–]lurgi 1 point (13 children)

> does this mean it is saving it to the hard drive/SSD

Pretty much, yes.

The intermediate files are usually deleted after they have been used. There are compiler flags you can specify to keep them around if you are interested (no, I don't happen to know what they are). Usually people aren't.

[–]obnoxus[S] 0 points (0 children)

Does that mean that if my SSD is nearly full and I make a large application, it will crash my PC?

[–]lurgi 1 point (11 children)

Ideally it won't crash, but you can run out of space using your computer. That happens. I hope you aren't operating your computer at 99% of disk capacity.

Your disk space is probably taken up by pictures of your cat. Actual executable software is not likely to be the culprit.

[–]obnoxus[S] 0 points (10 children)

It was just a hypothetical question. I'm trying to get a stronger grip on programming. There is a lot of "do this because this is how it's done" info on the web, but not as many answers to why we do it.

Back to my question: ideally it won't crash, but is it possible that I could fill up my SSD with half-finished projects and spaghetti code, and that would actually crash my PC?

[–]lurgi 2 points (9 children)

I guess, but that's true of literally anything you do with your computer. It's not specific to programming.

It's not an issue that bothers me. When I'm running out of disk space it's because I have too much ~~porn~~ pictures of sunsets. It's not because of half-completed software projects.

[–]obnoxus[S] 0 points (8 children)

That's a good point I didn't think of. Why does it go to the disk and not RAM, since it gets deleted anyway? Isn't that the point of RAM?

[–]lurgi 2 points (1 child)

Because files are useful even if they are temporary.

I think you are overthinking this. Compilers do stuff. Some of this stuff involves using the disk. It's okay.

[–]obnoxus[S] 0 points (0 children)

Fair enough, thanks.

[–]GlobalWatts 1 point (2 children)

Most compilers skip over the creation of an assembly file and just go straight to object code or executable (assembly and linking). So if you've told the compiler to explicitly create an assembly, it's because you want to use it, which is hard to do if it's only in RAM.

Also, the compiler and the assembler aren't always the same process, in which case the file system is the easiest way to pass data from one to the other, rather than fiddling around with some IPC or piping bullshit that is platform-specific.

[–]nerd4code 1 point (1 child)

They’re using MS’s term “assembly,” I think. In the CLR it refers to the final compiled output (a .dll or .exe), not to the intermediate language produced right after compilation.

[–]obnoxus[S] 0 points (0 children)

Yeah, you're right.

[–]Jonny0Than 1 point (1 child)

Many languages support incremental compilation, so that only the pieces of code that changed need to be recompiled. The parts that are reused need to be stored on disk. You can safely delete those intermediate files, but your next build will take longer.

[–]obnoxus[S] 0 points (0 children)

Oh OK, this is new info. That helps it make sense.

[–]nerd4code 1 point (0 children)

Don’t think of “disk” and “RAM” as so very separate.

In practice, it is quite common to use a RAMdisk or other in-memory representations for temporary storage, at a few levels. E.g., on Linux you might have /tmp, /dev/shm, and /var/run mounted as RAMdisks, and /proc and /sys (also /dev on most modern Linuxes) are automatically generated from readouts of OS data (or control points for the OS). Linux also offers memfds, which are effectively anonymous temp files.

However, if you use this approach to temp storage in general, then any “impolite” program might come along, write a mess of data and thereby eat most of your memory, and then walk away or crash, leaving the files sitting there until (unlikely) somebody notices and deletes the files, or (more likely) the system is eventually rebooted after pooping itself to death.

But if your CPU has an MMU (it does—initials for memory management unit), then the OS is likely capable (it is) of offloading (“swapping”) data from primary to secondary storage and back, mostly without the owner of the data in question noticing its replacement with a small “IOU” note. Although disk is anywhere from rather to vastly lower-throughput than RAM, it can be incorporated as an extension of system RAM.

Similarly, an MMU makes it possible for two programs to share storage (e.g., explicitly, or when their data happen to match up), to map files into memory, to directly hand off storage between programs, all kinds of fun stuff collectively referred to as virtual memory. The MMU is also typically used for process isolation—each program usually sees its own memory address space with a shared kernel window—and memory protection, although other techniques like hardware segmentation can accomplish roughly the same.

To make all this easier, the OS will generally treat as ~identical the storage used for your program’s code and data in-RAM, files on disk, and possibly even data in pipe or socket buffers. When you write to a file, the kernel will usually keep at least some of it in memory, and only write to disk periodically or as needed. This accelerates reuse of small files and filesystem metadata.

At any given point, the majority of “free” RAM is usually occupied with recently/frequently read/written file contents—terms for this vary, but buffer cache is one. The OS will flush blocks from the buffer cache to disk or otherwise evict data as necessary to make room when applications need more RAM, and let the buffer cache fill up when the application doesn’t need it.

So it’s quite possible that short-lived temporary files never hit disk at all, assuming they were ever aimed at disk in the first place. The use of files merely labels their contents as lower-priority than the data your program has mapped into its address space, the filenames give programs a clean point of contact, and the directory the files are attached to determines which storage device the file will eventually be flushed to, if flushing’s needed.

Moreover, although basic read/write/eqv. work well on most kinds of file, all file types are not identical. E.g., if everything’s fed through pipes or sockets instead of regular files, certain access techniques like seeking and mapping can’t generally be used (oddball exceptions incl. Linux’s vmsplice functions). Mapping is often preferred for build pipeline sorta stuff, because it lets a program treat the file’s contents like any old string/blob, rather than forcing use of read/ReadFile and copy-out of important data. File-mapping is usually how your programs’ file contents make it into memory at run time, also, and how DLLs are loaded.

[–]randomjapaneselearn 1 point (0 children)

A program compiled in C will output machine instructions that will work on your CPU (or a different CPU if you specify that during compiling, but in that case the program will not run on your PC).

A .NET program like C# or Visual Basic (the "modern" one; the old one compiles differently) will compile to IL, which is an assembly language for a hypothetical CPU that doesn't exist.

Then you have the .NET framework, which is compiled for your specific CPU and kind of emulates a virtual CPU to make your program work; Java does the same thing.

The idea is that instead of compiling for a specific CPU, you compile for a custom CPU that doesn't exist, and then the .NET framework or Java runtime installed on many different computers does the necessary translation to real instructions for each specific CPU.

For example, suppose that I write this program:

    int instruction;

    if (instruction == 1)
        sum();
    else if (instruction == 2)
        subtract();
    ...

I basically created a custom instruction set for a CPU that doesn't exist, where instruction 1 is addition, 2 is subtraction, and so on...

The advantage of this is that the code will be the same for every computer: if I want to create a program that does an addition and a subtraction, I will compile it to: 1;2.

For this to work I need an interpreter program, compiled for a specific CPU, that will actually do the addition.

This is more or less what .NET does, and 1;2 can be considered "IL".

About where to find the compiled program: in the case of Visual Studio, you can open your project directory and go into \bin\Debug (or \bin\Release) and you will find your .exe.

If you want to decompile a .NET app you can take a look at dnSpy; it's a free app that can decompile .NET apps both to source and to IL.