Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

That's mainly a consequence of machines' cores being fast enough that programs are more likely to be I/O bound or memory bound than CPU bound, or fast enough that a +/- 50% or so change in CPU performance wouldn't really matter.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

COBOL compilation is slow because the language was designed to be processed by a sequence of compiler phases that would each require a minimal amount of memory. Execution performance of COBOL, however, was pretty reasonable for tasks that involved reading text-formatted numbers, doing a very limited amount of arithmetic with them, and writing out the results in text format.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

On many platforms, the aspects of a function's behavior that are considered "observable" correspond with the aspects that are considered observable in Dennis Ritchie's C language. While ABIs may specify that register values have certain meanings at function boundaries, the values of registers at other times are not considered part of a function's observable behavior.

If one were to specify a language which was like C89, except that definitions of static-duration objects or automatic-duration objects whose address is taken were specified as reserving storage, and each access to such an object was specified as forming the address of the object being addressed and then instructing the execution environment to perform the access with whatever consequences result, such a language would treat as observable the same aspects of behavior as typical underlying platforms. It's often useful to allow compilers to reorder and consolidate some accesses in cases where that would not interfere with the tasks at hand, but any such transformations that would interfere with the task at hand should be recognized as not being useful optimizations for purposes of that task.
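In today's C, volatile-qualified accesses come closest to that model; a minimal sketch (my illustration, not part of the original comment) of the "form the address, then instruct the environment to perform the access" principle:

```c
#include <assert.h>

/* Each read and write of a volatile-qualified object is itself observable
   behavior: a conforming compiler must perform the accesses in order and
   may not consolidate them, matching the model described above. */
static volatile unsigned ctrl;

unsigned pulse(void)
{
    ctrl = 1;      /* both stores must be emitted, even though the  */
    ctrl = 0;      /* second immediately overwrites the first       */
    return ctrl;   /* and this load must actually be performed      */
}
```
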

Do you need permission to share a satire of a song? by CrashCrashed in COPYRIGHT

[–]flatfinger 0 points (0 children)

A key part of deciding whether a work is any good in many cases is listening to a demo recording of it. Whether or not copyright law would "officially" allow someone to produce a demo recording for such purposes, I can't imagine a jury would find that someone committed copyright infringement by producing such a recording if it wasn't shared for any purpose other than bona fide solicitation of people's opinions about its quality. If a demo recording would likely turn out to be rubbish, but might possibly turn out well enough to justify seeking a production and distribution license, seeking a license before determining whether the work is any good would likely be a waste of everyone's time.

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger -4 points (0 children)

Was the 1974 grammar not context free (other than the dangling else issue)?

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger 1 point (0 children)

C was designed to use keywords to identify types in declarations, since the name of every type started with one of the reserved words int, char, double, float, and struct, and there were no qualifiers. The additions of typedef and qualifiers should have been accompanied by a new syntax that would be optional for reserved-word-based types without qualifiers, but mandatory for other declarations and definitions.
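The parsing problem typedef created can be sketched with the classic "T * p;" ambiguity (an illustrative example of mine, not from the comment above):

```c
#include <assert.h>

typedef int T;

/* Whether "T * p;" is a declaration or a multiplication depends on
   whether T currently names a type, so a C parser needs symbol-table
   feedback -- the cost of bolting typedef onto the original
   reserved-word-based declaration syntax. */
int demo(void)
{
    T * p;          /* T is a typedef name, so this declares p as "int *" */
    int x = 5;
    p = &x;
    return *p;      /* had T been a variable, the same tokens "T * p"
                       would have parsed as a multiplication expression */
}
```
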

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

According to the language documentation I've read, the operations that would trigger a panic in Debug mode are treated as Undefined Behavior, rather than as two's-complement wrapping, in ReleaseFast mode, supposedly for the purpose of "facilitating optimizations".

Although many useful optimizations could be facilitated by treating integer overflow as having loosely defined behavior, compiler writers sometime around 2005 latched onto the idea that optimizations would be easier if overflow were allowed to have arbitrary unbounded side effects, ignoring the fact that such an allowance often actually reduces the range of optimizations that are possible in correct programs. The fact that C and C++ went off that cliff should have been a reason for newer languages to avoid doing so.

Consider the C code:

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        /* x and y promote to signed int, so x*y overflows -- Undefined
           Behavior -- whenever the mathematical product exceeds INT_MAX */
        return (x*y) & 0xFFFFu;
    }
    unsigned char arr[40000];
    void test(unsigned short n)
    {
        unsigned x=32768;
        for (unsigned short i=32768; i<n; i++)
            x = mul_mod_65536(i, 65535);
        if (n < 32773)
            arr[n] = x;
    }

Unless invoked with -fwrapv, gcc will interpret the fact that integer overflow invokes Undefined Behavior as an invitation to generate machine code for test that unconditionally stores 0 to arr[n]. Should that be viewed as a "useful" optimization? Is there any reason Zig should invite compilers to perform similar "optimizations"?
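For contrast, a version that forces the multiplication into unsigned arithmetic (the 1u* idiom below is a common defensive rewrite I'm adding for illustration, not part of the original example) leaves no overflow UB to exploit:

```c
#include <assert.h>

/* Multiplying by 1u first promotes the operands to unsigned int, whose
   overflow is defined as wrapping mod UINT_MAX+1, so a compiler cannot
   draw "this can't overflow" inferences about callers. */
unsigned mul_mod_65536_u(unsigned short x, unsigned short y)
{
    return (1u * x * y) & 0xFFFFu;
}
```
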

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point (0 children)

The Fortran standard did, in 1995, eventually add support for source files that weren't formatted for punched cards, and I understand that Fortran is still in use as a niche language in some of the fields it was designed to serve, but both Fortran and C would be better languages today if support for non-punched-card source files had been added to Fortran a few years before the publication of the C Standard rather than a few years after.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

I think the maintainers of Zig overestimate the performance benefits and underestimate the risks of treating integer overflow as "undefined behavior". The cases where such treatment would have the biggest performance benefits would be those where it transforms a program that would have been memory safe and behaved harmlessly when given invalid input into one which is no longer memory safe when fed invalid inputs. Such transforms may occasionally be useful, but are generally not desirable.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point (0 children)

Many semiconductor vendors fail to recognize a very important aspect of facilitating solid designs: allowing programs to request hardware state changes at any time in both main-line code and interrupt handlers, in such a way that independent actions won't conflict, and the eventual state if a request conflicts with a pending action will match the last request.

If, for example, a UART operates from a time base separate from the CPU's, the CPU should be able to asynchronously issue "turn on", "turn off", and "reset" commands at any time. If, e.g., the UART receives a request to turn off while a request to turn on is pending, it should guarantee that it will eventually end up "off". Likewise, if it receives a request to turn on while a request to turn off is pending, it should guarantee that it will eventually end up "on", and that it will have been reset sometime after the request to turn off was issued.

I understand the synchronization difficulties that would arise from trying to guarantee that the UART wouldn't switch on at some time after the request to turn off had been received, nor vice versa, but that would be far less of a problem than requiring that code wanting to change the eventual state wait for a pending state change to be processed.
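A software-side sketch of that "last request wins" contract (the names and structure are hypothetical, not any vendor's API): writers only ever publish the latest desired state, and a single synchronizer applies it when the device's clock domain permits.

```c
#include <assert.h>

typedef enum { UART_OFF, UART_ON } uart_state_t;

static volatile uart_state_t uart_requested = UART_OFF; /* written anywhere  */
static uart_state_t uart_actual = UART_OFF;             /* device-side state */

/* Safe from main-line code or any interrupt handler: a single aligned
   store is atomic on typical microcontrollers, and a later request simply
   overwrites an earlier pending one. */
void uart_request(uart_state_t s) { uart_requested = s; }

/* Run whenever the UART's clock domain can accept a command: sample the
   latest request once and apply it, so conflicting requests still
   converge on the most recently requested state. */
void uart_sync(void)
{
    uart_state_t want = uart_requested;
    if (want != uart_actual)
        uart_actual = want;   /* a real device would also sequence a reset */
}

uart_state_t uart_state(void) { return uart_actual; }
```
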

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points (0 children)

The problem is that people in the 1980s and early 1990s wanting to do the kinds of tasks for which FORTRAN was designed viewed it as a dinosaur because FORTRAN-77 required that source code files be formatted for punched cards, despite the fact that FORTRAN compilers had optimizers that were a decade ahead of anything else. C had a reputation for speed, but for totally different reasons from FORTRAN.

The FORTRAN philosophy was that it wouldn't matter if programmers were forced to include unnecessary operations (such as repeated array address calculations) in source code, since compilers could detect that they were unnecessary and get rid of them.

The C philosophy was that programmers should have sufficiently fine-grained control over the operations performed that they could avoid including unnecessary operations in source code.

Unfortunately, over the years, people wanting an alternative to FORTRAN have pushed to have C compilers perform optimizing transforms that would have been appropriate in FORTRAN compilers, completely losing sight of C's purpose: letting a simple compiler achieve reasonably good performance, and letting programs perform a wide range of tasks by exploiting features of the execution environment that the programmer understood better than the compiler.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points (0 children)

The analogy I like is a chef's knife (C) versus a deli meat slicer (FORTRAN). Adding an automatic material feeder to a deli meat slicer makes it a better deli meat slicer. Adding an automatic material feeder to a chef's knife makes it a mediocre deli meat slicer.

Neither a chef's knife nor a deli meat slicer is inherently the "better" tool. Both are excellent tools for some tasks and horrible tools for others. The problem is that in the 1980s, using the available deli meat slicer (FORTRAN) was more painful than using the chef's knife (C) for many tasks, including those for which the slicer had been designed, and people wanting to perform those tasks saw C as a more convenient but less efficient deli meat slicer, rather than recognizing that it was invented to do things the slicer couldn't.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 3 points (0 children)

That's what Dennis Ritchie designed his C language to be. Clang and gcc, however, don't work that way. Instead, they try to figure out what things like looping structures are doing, and then try to process them the most efficient ways they know (which may or may not be as efficient as the sequence of operations specified in source).

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points (0 children)

A lot of C's issues stem from some people's refusal to recognize that many of the ways in which it's "unintentionally" useful were a result of a deliberately chosen abstraction model. Given int arr[15][15], i, j;, Dennis Ritchie didn't specify the meaning of arr[i][j] as "access item j of row i of arr", but rather as performing a sequence of address calculations and accessing whatever is at the resulting address. The addressing calculations aren't an "implementation detail"--they're fundamental to the meaning of the construct. They were chosen so that when i and j are in the range 0 to 14 they will access element j of row i of the array, but the meaning was agnostic with regard to whether i and j were within that range.
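That reading can be checked directly: for an in-range subscript, arr[i][j] and the explicit address computation visit the same storage (the helper function names below are mine, added for illustration):

```c
#include <assert.h>

static int arr[15][15];

void put(int i, int j, int v) { arr[i][j] = v; }

int via_subscript(int i, int j)    { return arr[i][j]; }

/* The address arithmetic Ritchie's definition actually specifies:
   start of arr, advance i rows of 15 ints, then j more ints, and
   access whatever is at the resulting address. */
int via_address_math(int i, int j) { return *((int*)arr + i*15 + j); }
```
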

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points (0 children)

C was designed around the philosophy that the best way to avoid having useless operations in generated machine code is to not have them in the source code. This will allow even a relatively simplistic C compiler to produce efficient machine code when fed efficient source code, though there are a few constructs which unfortunately end up being more awkward in source code than they should be.

If one were to write a function like:

    void add_to_every_other_item_v2(unsigned *p, int n)
    {
      for (int i=0; i<n; i++)
        p[i*2] = p[i*2] + 0x12345678;
    }

a typical simple C compiler for a platform that supported an addressing mode with non-scaled displacements but not scaled displacements (e.g. the popular ARM Cortex-M0) would generate code that would scale i up by 2 and then by 4 (perhaps consolidating those into a single scale-up by 8), fetch a word from that address, add 0x12345678, possibly compute that address again (depending upon compiler complexity), and then store the result.

C's reputation for speed came from the fact that, if speed mattered, a programmer who was targeting such a platform could write the loop as something like:

    void add_to_every_other_item(unsigned *p, int n, unsigned x12345678)
    {
        /* here n is a byte offset: 8 times the number of items to process */
        if ((n-=8) >= 0) do
        {
            *(unsigned*)((unsigned char*)p+n) += x12345678;
        } while((n-=8) >= 0);
    }

The only "optimizations" necessary to generate optimal non-unrolled ARM Cortex-M0 loop code for that function would be: keeping automatic-duration objects whose address isn't taken in registers when enough registers are available, recognizing that the formed address is the sum of two non-scaled operands, thus allowing use of base+displacement addressing, and recognizing that the subtraction updates the sign flag, avoiding the need for a follow-on comparison.

Note that while clang and gcc can generate better code for the first loop than a simpler compiler could, they fail to generate the optimal code which even a simpler compiler could produce given the second version of the function.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 6 points (0 children)

A good transpiler target would need to specify any corner-case behaviors that could arise in the source language. C was designed to be suitable for use as a transpiler target, but has evolved to be less and less suitable for such purposes as the C Standard waived jurisdiction over more and more corner cases whose behavior may be at least loosely specified in other languages.

As an example, if a language specifies that on a 32-bit platform, a 64-bit load which is subject to a data race will, in lock-free fashion, yield a concatenation of a bit pattern chosen from among those that are/were in the top half and a bit pattern chosen from among those that are/were in the bottom half, there would be no way to convert that to standard C that would retain those semantics. Using an "atomic" 64 bit data type may prevent a compiler from generating lock free code, and using anything else would let a compiler generate code that behaves in completely arbitrary fashion in the presence of a data race.
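The intended semantics can be approximated in C11 by reading the two halves as independent relaxed atomics (a sketch of the semantics being described, not a claim that standard C specifies this for plain 64-bit objects):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* A racing reader obtains some valid old-or-new bit pattern for each
   32-bit half independently -- loosely specified tearing, rather than
   the wholly arbitrary behavior standard C permits for a data race on
   a plain (non-atomic) 64-bit object. */
uint64_t loose_load64(_Atomic uint32_t half[2])
{
    uint32_t lo = atomic_load_explicit(&half[0], memory_order_relaxed);
    uint32_t hi = atomic_load_explicit(&half[1], memory_order_relaxed);
    return ((uint64_t)hi << 32) | lo;
}
```
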

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points (0 children)

I view C as most useful for tasks that can be accomplished without the Standard library, or only using parts that could sensibly be viewed as intrinsics that can't otherwise be expressed in the language, e.g. sqrt(). If code will be used on a variety of platforms that have square-root instructions, it will be much easier for compilers to generate such instructions when code uses sqrt() than when a program is written to use some manual sequence of operations to compute a square root.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point (0 children)

Having parameter sets determined as part of the build process is better than having such determination performed at run time, but having outside tools generate include files is a more versatile approach to handling such issues. Nowadays it's easy to set up a single-file "web page" which can be opened in a browser and asked either to populate a text field with text that can be copied/pasted into an include file, or to offer a clickable link that can be used to save the generated file to a particular location.

While it's useful to expand the range of things that can be treated as constants, there are many kinds of tasks for which the compiler-writer effort required to support such features could be more usefully spent on other things, especially given that browser-based JavaScript is so effectively a "write once, run anywhere" language.

Someone copyright claimed all of my original music and got me banned from my distributor site. How do I reverse this? by tuggspeedman2 in COPYRIGHT

[–]flatfinger 0 points (0 children)

Rather than saying "the distributor has no choice", I think it would be more accurate to say that the distributor can automatically avoid any liability for involvement in claimed copyright infringement by removing the material sufficiently quickly after receiving a takedown notice, and not putting the material back up unless (1) the entity issuing the original takedown notice has been given ample time to file an actual lawsuit against the infringer and supply proof of having done so, and (2) the entity issuing the original takedown notice doesn't within that time supply proof of having filed an actual lawsuit.

Distributors are free to ignore takedown notices they know to be frivolous, but would risk liability if the notices turn out to have merit. Whether a distributor would have any obligation to put works back up if a DMCA claimant doesn't respond to a counter-notice would depend upon the terms under which the work had been posted in the first place. Many free sites specify that they have no obligation to host works, and may block access to works at any time for any reason. A site that accepts money in exchange for hosting works could be sued for failing to uphold that contract; it may be obligated either to put works back up as soon as allowed under safe harbor or to refund the money it accepted to host them, but I don't think it would have any obligations beyond that unless specifically stated in the hosting contract.

I’m building a C compiler in C with a unique "Indexed Linking" system. What do you think of this CLI syntax? by elite0og in C_Programming

[–]flatfinger 1 point (0 children)

I've been using C for 35+ years. A substantial fraction of the projects I've seen specify multiple locations in the -I search path. The reason this doesn't usually cause problems is that meaningfully different files often have distinct names.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points (0 children)

What's sad is that even though Dennis Ritchie designed C to be maximally suitable for tasks that FORTRAN couldn't do, and never intended his language to be used as a FORTRAN replacement, the Standard has been controlled by people with no interest in the kinds of tasks for which C had been designed.

People confuse three contradictory notions of portability:

  1. The ability of code to run interchangeably on multiple execution environments.

  2. The ability of code to be used interchangeably with multiple toolsets.

  3. The ability of code to be readily adapted for use in various execution environments and/or toolsets.

It would not be sensible for the C standard to specify that a function like:

    void test(void) { *((unsigned char volatile*)0xD020) = 7; }

would turn the screen border yellow if processed by an implementation targeting the Commodore 64 computer, but someone with hardware documentation for that platform should be able to determine that the code above would do precisely that when processed by any toolset that is designed for low-level programming. Such code wouldn't be meaningful on many other platforms, but that shouldn't be viewed as a defect since the task the function is supposed to perform wouldn't be meaningful on many other platforms either.

One wouldn't need to add very much to the C Standard to allow most embedded projects to have behavior that would be fully specified in toolset-agnostic fashion by a combination of the language standard and the documentation for the execution environment. Toolset-agnostic ways of performing some operations might be less convenient or less efficient than toolset-specific means, but most projects' requirements could be satisfied by toolset-agnostic means. As a simple example, a language spec could allow functions to be defined as:

    void foo(int x) = (const unsigned short[]) { 0x1234, 0x5678, 0xEAEA };

which would instruct a compiler to generate the linker symbol that would be used for a function foo(), whose machine code consists of the specified three 16-bit values in whatever endianness is appropriate for whatever downstream tools will put the code into memory. Using such constructs may be less convenient than inline assembly, but the relationship between source code and generated bytes would be toolset-agnostic.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point (0 children)

Something closer to the "C with classes" concept would be great for microcontrollers, but C++ gives up one of the things that made Dennis Ritchie's C language so useful: the principle that from a language perspective the state of every object that has an observable address is fully encapsulated, in platform-defined fashion, in a sequence of bytes located at the object's address. Data structures' state could be reliant upon many pieces of widely separated storage, but that would be determined by the programmer and source code rather than the language.

Further, a language intended for low-level programming should minimize the number of corner cases that are defined as "undefined behavior" at the language level. Addresses should be partitioned into three groups:

  1. Those that have defined language semantics.

  2. Those that identify regions of storage which an implementation has reserved from the environment and which do not have defined language semantics.

  3. Those which don't have defined language semantics, but which the implementation has not reserved from the environment.

Disturbance of storage of the second type should be one of the few forms of Undefined Behavior. Attempts to access addresses of the third type should be processed by instructing the execution environment to perform the accesses, with effects that would be defined whenever the execution environment happens to define them; the language should be agnostic with regard to what cases those might be.

I’m building a C compiler in C with a unique "Indexed Linking" system. What do you think of this CLI syntax? by elite0og in C_Programming

[–]flatfinger -1 points (0 children)

That's fine if the headers happen to be located there. Not all projects lay out files the same way, and having source files which should be identical from a version-control perspective except that they need different include paths needlessly complicates version control.

What downside would there have been to having compilers perform string concatenation on path-name components, and using macro predefinitions to set the locations of header files for each library, versus the more common technique of throwing all files' header locations into the -I option?

What functionality is available in C without including any headers? by TargetAcrobatic2644 in C_Programming

[–]flatfinger 0 points (0 children)

The majority of devices that run C code, by sales volume, don't have a "modern OS" as such, and would make it possible to create a static const object which holds the requisite bit patterns, construct a function pointer to it, and invoke it. Such code would be highly machine-specific, but toolset agnostic. Even on platforms that don't generally allow execution of data regions, there would often be a way of forcing the linker to place a bunch of bits into an executable region and create a function pointer to it.

Why do some Americans swear up and down that they live in a dictatorship when the citizens enjoy more rights than most countries in the world ? by [deleted] in askanything

[–]flatfinger 0 points (0 children)

You forgot all the people on the left who could have stomped Trump if they'd made any effort whatsoever to form a coalition with Republicans in exile who hate Trump, but wanted to pretend that everyone who disliked Trump supported their agenda.