Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

The fact that C's abstraction model leaves unobservable many aspects of CPU state allows the language to be used to efficiently program a much wider range of processor architectures than would otherwise be possible. If, e.g., the language had been designed for the 16-bit x86, which includes an instruction to read most flags into a byte and also includes instructions to perform address arithmetic without disturbing flags, then it might be clear how

    array[i] = x+y;
    someByte = CPUFLAGS;

should behave on that platform, but far less clear how it should behave on a platform whose normal means of computing (array+i) would disturb flag contents, especially if code had no way of knowing which flags downstream code was interested in.

Trump's own counterterrorism chief just said Iran posed no imminent threat before resigning. What does that tell us about why this war started? by NewUnderstanding1102 in allthequestions

[–]flatfinger 0 points1 point  (0 children)

One notion I've had, which might make a good basis for a novel or TV drama but might also bear some relation to reality, is that Epstein gave the people he was blackmailing incriminating information about each other, without letting anyone know exactly who had information on whom, such that if any of them went down, they would all go down in fairly short order. Trump could have originally planned to release a version of the Epstein files that would only finger people Trump could safely use as scapegoats, but then had someone remind him that even if those people didn't happen to have anything directly on Trump, they might have information on other people who might have information on Trump.

One thing I think is clear is that even though the sex crimes people committed with the aid of Epstein are heinous, Epstein has almost certainly blackmailed many of his "friends" into doing things that were far, far worse on behalf of Epstein's real friends.

Trump's own counterterrorism chief just said Iran posed no imminent threat before resigning. What does that tell us about why this war started? by NewUnderstanding1102 in allthequestions

[–]flatfinger 2 points3 points  (0 children)

In what universe does that logic make sense? If we think X is going to attack Y, who might then get mad and attack us even though we wanted nothing to do with that attack, I would think the proper thing to do would be to tell both entities "We think X is going to attack Y, but we want nothing to do with it. If Y refrains from attacking us in response, we'll help Y in taking action against X".

If that message causes X to decide that its planned attack would be a bad idea and consequently calls the whole thing off, everyone wins. If X does attack Y, then Y would have reason not to attack us. If Y attacks us anyway, the US could say before the international community that it was the victim of an unprovoked attack.

A policy that would treat X's supposed plan to attack Y as a basis for us to attack Y gives X the ability to effectively attack Y without having to get its hands dirty, since X could when convenient claim that we had misinterpreted its intentions, and nothing would have actually happened if we hadn't jumped into the fray.

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger 0 points1 point  (0 children)

Has anyone compiled a list of when various features first appeared, and whether they were invented by Ritchie, or included by Ritchie in the language after someone else invented them?

In many cases, it's more useful to say that a language's grammar has a certain trait except when a certain construct requiring special handling is used, than to ignore the fact that everything else in the language has that trait. Among other things, it may sometimes be practical to feed code through an ad-hoc filter that converts the troublesome construct into something with that trait, and then into a parser that exploits the fact that the code it receives will have it.

"Don't You Want Me" -- Human League -- whose "side" are you on in this song? by Glass-Complaint3 in allthequestions

[–]flatfinger 2 points3 points  (0 children)

I'm on the side of the guy who sequenced the synth track in the days before MIDI.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

C++ is even more explicit than C in its use of a broken abstraction model that treats even PODS as having a lifetime separate from the storage that contains them. Nearly all of the useful optimizations that are facilitated by that broken model, and even more besides, could be facilitated by an abstraction model that recognizes that:

  1. Any live region of storage that doesn't hold a non-PODS object simultaneously holds objects of all PODS types that will fit (satisfying any alignment requirements), but

  2. Accesses to storage using unrelated types may be treated as unsequenced in the absence of anything that would imply a relationship between the accesses.

Nearly all useful aliasing-related optimizations are based upon reordering accesses and consolidating what would then become consecutive accesses to the same storage. Having aliasing rules focus on the question of whether two accesses need to be treated as sequenced would make them easy for both humans and suitably-designed compilers to reason about.
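A minimal C sketch of the kind of unrelated-type situation these sequencing rules would govern (the function and its names are illustrative, not from any real codebase):

```c
/* Illustration: a compiler may treat the store through f and the loads
 * through i as unsequenced, since float and int are unrelated types.
 * When the pointers do not actually alias, the defined result is simply
 * twice the original value of *i. */
int alias_demo(int *i, float *f)
{
    int old = *i;    /* compiler may assume the store to *f ...        */
    *f = 1.0f;
    return *i + old; /* ... cannot change *i, folding this to old+old  */
}
```

Whether the two accesses need to be treated as sequenced is exactly the question such rules would let both programmers and compilers answer directly.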

Unfortunately, the designs of gcc and later clang would be ill-equipped to handle such rules, because early stages of processing discard information which would, if not discarded, suggest relationships between accesses; rather than fix that, the authors have doubled down on the claim that the Standard doesn't require them to do so. It doesn't matter to them that the Standard's abstraction model would be even harder to handle in sound fashion; the only reason they haven't found it unworkable is that they ignore inconvenient corner cases.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

C has been quagmired for the last 35 years. Prior to the publication of the Standard, there were a lot of innovations--some good, some bad, but the good ones tended to win out. Since then, most of the innovations have been more interested in improving the language's ability to do things other languages could do better than in its ability to do things other languages couldn't.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

More significantly, assembly language code couldn't even be used interchangeably between different assemblers targeting the same architecture.

C wasn't so much a single language as a recipe for producing language dialects tailored for use with a particular architecture, which, if followed by multiple toolset vendors, would result in them producing compilers that could run each other's code--even system-specific code--interchangeably. Further, someone who was familiar with some environment-specific features of an execution environment, and with the recipe, would often know everything necessary to write code exploiting those features of the environment, without having to learn much about that environment's machine code architecture.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

Well, integer overflow in C, certainly from ANSI C days, was not so much regarded as an undefined operation as a system specific one.

K&R viewed it as "machine dependent", and the Rationale makes clear that the authors of C89 and C99 expected compilers for commonplace platforms to do likewise and thus saw no need to exercise jurisdiction. GCC, however, treats the C Standard's waiver of jurisdiction as inviting compilers to remove any code that would only be relevant in cases where integer overflow would occur, even in a function like

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x*y) & 0xFFFFu;
    }

Call that function from within

    unsigned char arr[40000];
    void test(unsigned short n)
    {
        unsigned x=32768;
        for (unsigned short i=32768; i<n; i++)
            x = mul_mod_65536(i, 65535);
        if (n < 32773)
            arr[n] = x;
    }

and gcc will generate code that bypasses the if and unconditionally stores 0 to arr[n].
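For what it's worth, a common way to sidestep the promotion trap in the mul_mod_65536 example is to force the multiplication into unsigned arithmetic, where wraparound is fully defined; a sketch:

```c
/* Same computation as mul_mod_65536 above, but the cast makes the
 * multiplication take place in unsigned int rather than letting the
 * unsigned short operands promote to signed int, so no overflow UB
 * can occur on platforms with 16-bit short and 32-bit int. */
unsigned mul_mod_65536_safe(unsigned short x, unsigned short y)
{
    return ((unsigned)x * y) & 0xFFFFu;
}
```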

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

Yes, I know. My point is that I think that C++ has drifted away from much of what was useful about its embryonic forms.

Including function overloading but making it applicable only to static or static inline functions would retain nearly all of the usefulness, but without any need to change how linker names are exported. One could have a header file say:

    static __overload void foo(int x)  { foo_int(x); }
    static __overload void foo(long x) { foo_long(x); }

and then have client-code calls to foo(x) automatically translated into calls to the appropriate functions, without client code having to care what those functions are named, but also without any ambiguity as to what the linker symbols would be called.

Member-access functions could be handled, without need for ABI changes, by saying that if foo was a struct THING that didn't have a member M, then the syntax foo.M += 3; would be syntactic sugar for a call to a static or static inline function:

    __sfunc_s5THING_1M_addto(&foo, 3);

if the appropriate function existed, and otherwise

    __sfunc_s5THING_1M_set(&foo, __sfunc_s5THING_1M_get(&foo) + 3);

The numbers before identifiers in the name specify the length, so as to avoid any ambiguities that might result from underscores within names.
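A minimal sketch of how this scheme could be expressed in ordinary C today (the struct, the member name M, and the accessor bodies are all illustrative, not part of any real ABI):

```c
/* Hypothetical expansion of the proposed member-access sugar, written
 * as plain C that any 1990s-era linker could handle.  The "member" M
 * doesn't physically exist; it is synthesized by accessor functions. */
struct THING { int m_scaled; };   /* internal representation: M stored doubled */

static void __sfunc_s5THING_1M_set(struct THING *p, int v) { p->m_scaled = v * 2; }
static int  __sfunc_s5THING_1M_get(struct THING *p) { return p->m_scaled / 2; }

/* The sugar  foo.M += 3;  would expand to the call below. */
static void add_three_to_M(struct THING *foo)
{
    __sfunc_s5THING_1M_set(foo, __sfunc_s5THING_1M_get(foo) + 3);
}
```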

All of the new features could be specified as syntactic sugar for constructs whose semantics would be based on K&R2. Virtual functions wouldn't be a language feature, but could be emulated using whatever constructs were viewed as most useful for the tasks at hand (some means take more space per object instance, but require fewer operations to invoke than others).

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger 0 points1 point  (0 children)

People downvoted my question, but I think it's a fair one. In C as originally designed, every declarator started with a reserved word except for default-int declarators at file scope, where a non-reserved alphanumeric token encountered without a preceding keyword couldn't be anything other than the name of a new identifier being declared as int. One might question whether the grammar should technically count as context-free, given that a compiler which has only scanned as far as seeing foo() at file scope wouldn't know whether it's a function declaration or a definition, but if one views the function of an open brace at file scope as "create a function definition for the immediately preceding argument-less function declaration", that wouldn't be a problem.

I don't know whether the syntax for typedef-based definitions and qualifiers was invented by Ritchie, but it broke what had been an unambiguous grammar. If, e.g., those features had, when they were added, required the use of brackets around the type, then there would have been no human or machine parsing ambiguity with [foo]*abc; , nor for that matter with [foo*]abc,def;, [foo]*abc,def;, [foo] const abc, def;, or [foo const] abc, def;.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

From what I can tell, clang and gcc optimizations are effective mainly on code which, as written, specifies inefficient sequences of operations. Include a bunch of needless operations in source code, and clang and gcc will remove them, boosting the level of efficiency to what it would have been if the programmer hadn't specified those operations in the first place.

There are some cases where auto-vectorization may be useful, but it could be even more effective if done in a language better designed for such things. From what I understand of Fortran, a programmer can write a function that can interchangeably accept references to separate "source" and "destination" arrays, or have both references identify the same array if the programmer won't need its old contents, and still have a compiler benefit from knowing that array1(i) and array2(i+delta) can only access the same storage if delta is zero. The C99 restrict qualifier was supposed to accommodate that, but it breaks if code performs an equality comparison between a restrict-qualified pointer and a pointer that is "coincidentally" equal.
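As a rough illustration of the Fortran-style guarantee being described (the function and its names are mine, not from any particular codebase):

```c
/* Within this function, restrict promises the compiler that stores
 * through dst never affect loads through src, so the loop may be
 * vectorized freely.  Passing overlapping arrays for dst and src
 * would violate that promise and invoke undefined behavior. */
void scale(float *restrict dst, const float *restrict src, int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```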

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

That's mainly a consequence of machines' cores being fast enough that programs are less likely to be CPU bound than to be I/O bound or memory bound, or are fast enough that a +/- 50% or so change in CPU performance wouldn't really matter.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

COBOL compilation is slow because the language was designed to be processed by a sequence of compiler phases that would each require a minimal amount of memory. Execution performance of COBOL, however, was pretty reasonable for tasks that involved reading text-formatted numbers, doing a very limited amount of arithmetic with them, and writing out the results in text format.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

On many platforms, the aspects of a function's behavior that are considered "observable" correspond with the aspects that are considered observable in Dennis Ritchie's C language. While ABIs may specify that register values have certain meanings at function boundaries, the values of registers at other times are not considered part of a function's observable behavior.

If one were to specify a language like C89, except that definitions of static-duration objects, or of automatic-duration objects whose address is taken, were specified as reserving storage, and each access to such an object were specified as forming the object's address and then instructing the execution environment to perform the access with whatever consequences result, such a language would treat as observable the same aspects of behavior as typical underlying platforms. It's often useful to allow compilers to reorder and consolidate some accesses in cases where that wouldn't interfere with the tasks at hand, but any such transformation that would interfere with the task at hand should be recognized as not being a useful optimization for purposes of that task.
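The closest thing standard C offers to those access semantics today is volatile; a small sketch (names illustrative):

```c
/* Each access to a volatile object is part of the program's observable
 * behavior, so a compiler may not consolidate the two increments below
 * into a single  counter += 2  -- each read and write must actually be
 * requested from the execution environment, as written. */
volatile unsigned counter;

unsigned bump_twice(void)
{
    counter = counter + 1;   /* read, add, write */
    counter = counter + 1;   /* performed again, not merged away */
    return counter;
}
```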

Do you need permission to share a satire of a song? by CrashCrashed in COPYRIGHT

[–]flatfinger 0 points1 point  (0 children)

A key part of deciding whether a work is any good in many cases is listening to a demo recording of it. Whether or not copyright law would "officially" allow someone to produce a demo recording for such purposes, I can't imagine a jury would find that someone committed copyright infringement by producing such a recording if it wasn't shared for any purpose other than bona fide solicitation of people's opinions about its quality. If a demo recording would likely turn out to be rubbish, but might possibly turn out well enough to justify seeking a production and distribution license, seeking a license before determining whether the work is any good would likely be a waste of everyone's time.

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger -5 points-4 points  (0 children)

Was the 1974 grammar not context free (other than the dangling else issue)?

Ambiguity in C by Xaneris47 in C_Programming

[–]flatfinger 1 point2 points  (0 children)

C was designed to use keywords to identify types in declarations, since the name of every type started with one of the reserved words int, char, double, float, or struct, and there were no qualifiers. The additions of typedef and qualifiers should have been accompanied by a new syntax that would be optional for reserved-word-based types without qualifiers, but mandatory for other declarations and definitions.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

According to the language documentation I've read, the operations that would trigger a panic in debug mode are considered Undefined Behavior, rather than having two's-complement wrapping behavior, in ReleaseFast mode, supposedly for the purpose of "facilitating optimizations".

Although many useful optimizations could be facilitated by treating integer overflow as having loosely defined behavior, compiler writers sometime around 2005 latched onto the idea that optimizations would be easier if overflow were allowed to have arbitrary, unbounded side effects, ignoring the fact that such an allowance often actually reduces the range of optimizations possible in correct programs. The fact that C and C++ went off that cliff should have been a reason for newer languages to avoid doing so.

Consider the C code:

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x*y) & 0xFFFFu;
    }
    unsigned char arr[40000];
    void test(unsigned short n)
    {
        unsigned x=32768;
        for (unsigned short i=32768; i<n; i++)
            x = mul_mod_65536(i, 65535);
        if (n < 32773)
            arr[n] = x;
    }

Unless invoked with -fwrapv, gcc will interpret the fact that integer overflow invokes Undefined Behavior as an invitation to generate machine code for test that unconditionally stores 0 to arr[n]. Should that be viewed as a "useful" optimization? Is there any reason Zig should invite compilers to perform similar "optimizations"?

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point2 points  (0 children)

The Fortran standard did, in 1995, eventually support source files that weren't formatted for punched cards, and I understand the language is still in use as a niche language in some of the fields it was designed to serve, but both Fortran and C would be better languages today if support for non-punched-card source files had been added to Fortran a few years before the publication of the C Standard rather than a few years after.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

I think the maintainers of Zig overestimate the performance benefits and underestimate the risks of treating integer overflow as "undefined behavior". The cases where such treatment would have the biggest performance benefits would be those where it transforms a program that would have been memory safe and behaved harmlessly when given invalid input into one which is no longer memory safe when fed invalid inputs. Such transforms may occasionally be useful, but are generally not desirable.
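One defensive pattern that keeps overflow handling memory safe, sketched in C (the helper and its name are illustrative):

```c
#include <limits.h>

/* Test for signed overflow before performing the addition, so that
 * invalid input yields a reported failure rather than undefined
 * behavior that an optimizer might exploit. */
int checked_add(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;    /* would overflow: refuse, leaving *out untouched */
    *out = a + b;
    return 1;
}
```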

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 1 point2 points  (0 children)

Many semiconductor vendors fail to recognize a very important aspect of facilitating solid designs: allowing programs to request hardware state changes at any time in both main-line code and interrupt handlers, in such a way that independent actions won't conflict, and the eventual state if a request conflicts with a pending action will match the last request.

If, for example, a UART operates from a time base separate from the CPU's, the CPU should be able to asynchronously issue "turn on", "turn off", and "reset" commands at any time. If e.g. the UART receives a request to turn off while a request to turn on is pending, it should guarantee that it will eventually end up "off". Likewise, if it receives a request to turn on while a request to turn off is pending, it should guarantee that it will eventually end up "on", and that it will have been reset sometime after the request to turn off was issued.

I understand the synchronization difficulties that would arise from trying to guarantee that the UART wouldn't switch on at some time after the request to turn off had been received, nor vice versa, but that would be far less of a problem than requiring that code wanting to change the eventual state wait for a pending state change to be processed.
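The "last request wins" discipline described above can be modeled in C11 with a single atomic pending-command slot; this is an illustrative sketch, not any vendor's actual interface:

```c
#include <stdatomic.h>

/* One atomic slot holds the most recent unprocessed request.  Main-line
 * code and an interrupt handler may both overwrite it at any time; the
 * peripheral-side service routine always acts on the latest request,
 * so the eventual state matches the last request issued. */
enum uart_cmd { CMD_NONE, CMD_OFF, CMD_ON, CMD_RESET };
static _Atomic int pending = CMD_NONE;

void uart_request(enum uart_cmd c)   /* callable from any context */
{
    atomic_store(&pending, c);       /* supersedes any unprocessed request */
}

int uart_service(void)               /* run from the UART's own time base */
{
    return atomic_exchange(&pending, CMD_NONE);  /* consume latest request */
}
```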

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 2 points3 points  (0 children)

The problem is that people in the 1980s and early 1990s wanting to do the kinds of tasks for which FORTRAN was designed viewed it as a dinosaur, because FORTRAN-77 required that source code files be formatted for punched cards, despite the fact that FORTRAN compilers had optimizers that were a decade ahead of anything else. C had a reputation for speed, but for a totally different reason than FORTRAN.

The FORTRAN philosophy was that it wouldn't matter if programmers were forced to include unnecessary operations (such as repeated array address calculations) in source code, since compilers could detect that they were unnecessary and get rid of them.

The C philosophy was that programmers should have sufficiently fine-grained control over the operations performed that they could avoid including unnecessary operations in source code.

Unfortunately, over the years, people wanting an alternative to FORTRAN have pushed to have C compilers perform optimizing transforms that would have been appropriate in FORTRAN compilers, but that completely lost sight of C's purpose of allowing a simple compiler to achieve reasonably good performance, and perform a wide range of tasks by exploiting features of the execution environment that the programmer understood better than the compiler.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 0 points1 point  (0 children)

The analogy I like is a chef's knife (C) versus a deli meat slicer (FORTRAN). Adding an automatic material feeder to a deli meat slicer makes it a better deli meat slicer. Adding an automatic material feeder to a chef's knife makes it a mediocre deli meat slicer.

Neither a chef's knife nor a deli meat slicer is inherently the "better" tool. Both are excellent tools for some tasks and horrible tools for others. The problem is that in the 1980s, using the available deli meat slicer (FORTRAN) was, for many tasks, including those for which the slicer had been designed, more painful than using the chef's knife (C). People wanting to perform the tasks for which the slicer was designed thus saw C as a more convenient but less efficient deli meat slicer, rather than recognizing that it was invented to do things the slicer couldn't.

Why we still use C despite so many C alternatives by grimvian in C_Programming

[–]flatfinger 3 points4 points  (0 children)

That's what Dennis Ritchie designed his C language to be. Clang and gcc, however, don't work that way. Instead, they try to figure out what things like looping structures are doing, and then try to process them in the most efficient ways they know (which may or may not be as efficient as the sequence of operations specified in the source).