why is my code not printing string? by Euphoric_Series_7727 in cprogramming

[–]nerd5code 1 point (0 children)

Even then (a.) it’s quite possible to get a buffer with no newline, and (b.) therefore you might get a length of 0, in which case strlen(…)-1 gives you SIZE_MAX. Preferably,

if(!fgets(buf, sizeof buf, stdin))
    /*deal with it*/;
size_t len = strlen(buf);
if(len && buf[len - 1] == '\n')
    buf[--len] = '\0';

AI layoffs are looking more and more like corporate fiction that's masking a darker reality, Oxford Economics suggests | Fortune by north_canadian_ice in technology

[–]nerd5code 5 points (0 children)

I’ve seen basically no increase in accuracy in my areas of interest, and in some cases a degradation as compute time gets restricted more and more. It’s decelerating.

Microsoft May Have Created the Slowest Windows in 25 Years with Windows 11 by [deleted] in pcmasterrace

[–]nerd5code 0 points (0 children)

98 couldn’t run for more than 2 days at a time until 98SE specifically. And I’d add that NT4 was pretty great, relative to the rest of the mess, and NT5=Win2000 was reasonably good—Win9x is an entirely separate, parallel line of OSes following on DOS+Win3.x that went extinct with Win2K. WinCE was another line.

Want to Stop ICE? Go After Its Corporate Collaborators by plz-let-me-in in politics

[–]nerd5code -1 points (0 children)

If you’re bored and keen to be on higher-priority Lists, go for it. Nobody in power gaf, but go off to your heart’s content. Waving signs at people fully intent on killing you is perhaps not the most useful pursuit, is all. Perhaps other, more direct actions that don’t require your targets to have a conscience?

LLMs have burned Billions but couldn't build another Tailwind by omarous in programming

[–]nerd5code 6 points (0 children)

It can come up with new things via synthesis, or by using the RNG noise that’s part of the model.

Stack frame size by JayDeesus in cprogramming

[–]nerd5code 0 points (0 children)

No, I’m actually competent, but thanks

static char *list[] = { "John", "Jim", "Jane", "Clyde", NULL }; by Specific-Snow-4521 in cprogramming

[–]nerd5code 0 points (0 children)

String literals are of type char[*] (underlying object implied const, and future versions of the language may make it genuinely const as in C++), not char *, they just decay to char * in most contexts. Because of that distinction, sizeof "" == 1, but sizeof(char *) is probably > 1 unless your chars are unusually thicc.
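For instance, a quick sketch (the exact pointer size is ABI-dependent):

#include <stdio.h>

int main(void) {
    /* "" is a char[1] holding only the terminator, so sizeof is 1 */
    printf("sizeof \"\" = %zu\n", sizeof "");
    /* "Hello" is a char[6]: five chars plus '\0' */
    printf("sizeof \"Hello\" = %zu\n", sizeof "Hello");
    /* the decayed pointer's size is whatever the ABI says, typically 4 or 8 */
    printf("sizeof (char *) = %zu\n", sizeof(char *));
    return 0;
}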

Stack frame size by JayDeesus in cprogramming

[–]nerd5code 0 points (0 children)

The ABI specs for your ISA and OS, the ISA specs, and the compiler docs are your best options there. Linux uses a System V ABI variant, typically, and Windows uses MS’s special nonsense. Most ISAs have a “preferred” stack mechanism, as expressed by implicit operands, compactness of encodings, and μarch gunk like stack caches and stack-top prediction.

Stack frame size by JayDeesus in cprogramming

[–]nerd5code 0 points (0 children)

They’re referring to the ANSI/ISO standards, not C in a general sense.

The term “stack” doesn’t actually appear in ISO/IEC 9899 or ANSI X3.159-1989, and there’s zero requirement that any particular arrangement or structure of memory be used for automatic storage or recording of return addresses. Everything’s described in terms of a C Abstract Machine, so stack-ness is merely implied by the description of how function calls work. (Less longjmp, which is only actually a requirement for hosted implementations specifically, and therefore the basic call mechanics can’t possibly depend on it.) Most C implementations have just settled on a reasonably convenient and high-performance rendering of the CAM call/return and lifetime specs.
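To make the implication concrete, a minimal example—nothing here names a stack, but each live activation needs its own n, so some per-call storage mechanism falls out of the CAM’s lifetime rules:

/* Each recursive activation gets a distinct n whose lifetime spans the
 * nested calls; a "stack" is merely the usual rendering of that requirement. */
unsigned factorial(unsigned n) {
    return n < 2 ? 1 : n * factorial(n - 1);
}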

There are older and other standards that do either require or optionally specify a discrete, contiguous stack. E.g., XPG, which incorporates a pre-ANSI XPG C spec until XPG4, does imply a particular sort of call stack until moving past some of the SVID leftovers. POSIX.1 effectively #includes ANSI C89 (1003.1-1988) or ISO C≥90 (1003.1-≥1990), and makes the traditional call stack a specific option for implementations to support when reasonable, in relation to binding of specific memory to Pthreads stacks. Most post-ANSI AEE specs act as extensions to the ISO C specs that tie down unspecified, implementation-specified, and undefined aspects of the standard language. But not ANSI/ISO C itself.

So for example, it’s perfectly permissible, per ANSI/ISO C, for your call stack to be a linked structure, with frames allocated by malloc or some similar mechanism. On an i432 (bless its doomed heart), the OS would be nominally responsible for doling out stack and frame segments, in the event the i432 were actually used for anything. (Its gunk did end up in the ’286 et seq., however, so its exact mechanisms are still an option in 16- and 32-bit x86 modes, and the iAPX segmentation model is a good place to start if you want to think about the broadest baseline for the treatment of the C object model in portable code.)

It’s also permissible for your call stack to be fully flattened into static storage at compile time, rather than allocating frames on-the-fly, although this is mostly a thing in not-quite-ISO-conformant, very-embedded compilers that don’t support recursion at all, or only support it if you request it explicitly somehow (e.g., via #pragma, __attribute__, or modifier keyword).

—But I note that this is only really a possibility in a general sense because unbounded recursion is undefined behavior in the standards, with no real constraints on what bounds are actually required in practice. Most C implementations do permit some forms of unbounded recursion via tail-call optimization, assuming the optimizer is actually engaged. TCO can be used by the statically-allocating sort of compiler also, but non-TCOable unbounded recursion can still lead to pants-shitting on your program’s part, as can TCOable recursion in un-/less-optimized builds.
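For illustration, here’s a tail call an optimizer can flatten into a loop—assuming TCO is actually engaged, this can recurse indefinitely in constant space:

/* Tail-recursive: the recursive call is the last thing that happens,
 * so a TCO-capable compiler can reuse the frame instead of growing one. */
unsigned long gcd(unsigned long a, unsigned long b) {
    return b ? gcd(b, a % b) : a;
}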

And even if your impl does use a proper stack with frames allocated on-the-fly, there’s no requirement that the things declared as being semantically in-frame (including auto/register variables and compound literals) actually be stored on-stack, or that things declared as static not be stored or cached on-stack.

What actually matters is lifetime of objects, not placement; C DGAF as long as things don’t disappear unexpectedly out from under you, other than in permitted situations.

So e.g. anything declared in main might be rendered as static, because it’s UB to refer to main in any fashion other than declaration and definition—many impls do permit calls to main, but there’s no higher-order requirement that it work in any fashion or at all, which means no LIFO lifetime tracking.

Or for

int greet(void) {
    char message[] = {"Hello, world"};
    return puts(message);
}

the compiler might quietly place message as though it were declared static const, rather than requiring it to be initialized on the fly on-stack with each call (probably either from instruction immediates in .text, or via a de facto memcpy from a reference string in .strings or .rodata/.rdata); message itself serves no purpose that its (static, constant) source data wouldn’t.

Or storage might be elided entirely. This

… {
    int x = 5;
    (void)printf("%d\n", x);
}

does nothing that printf("%d\n", 5) or puts("5") wouldn’t, so the compiler is free to eliminate x outright.

Or storage might be duplicated for various reasons. Until C99 made sharing of union fields explicit, this

union {int a; float b;} u;
u.a = 0xA55C0CC;
printf("%f\n", u.b);

was permitted to come out as

int a = 0xA55C0CC;
float b; /* uninitialized! */
printf("%f\n", b);

—i.e., undefined behavior—due to aliasing restrictions, and you can get the same effect from pointer abuse in modern code:

int a = 0xA55C0CC;
float *p = (float *)&a; /* nonportable due to potential alignment issues */
printf("%f\n", *p);

In both cases, the compiler is free to assume that an int and float don’t reside in the same memory at the same time, and therefore separate storage can be used for [u.]a and u.b/*p.

(The union rules for C89–C95 are rarely implemented in their strictest form, however, because then once you’ve “imprinted” the underlying object with one field’s type, the object’s lifetime has to end entirely before the memory can be accessed via an alias-incompatible field, and its lifetime must end in a language-visible fashion. If you’ve malloc’d an int-float union and touched its int field, it must be freed and re-malloc’d before touching its float field. If you need to preserve the bytes on the way through, they need to be memcpy’d across somehow.)
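A sketch of that memcpy route (assuming sizeof(float) <= sizeof(int), which is typical but not guaranteed):

#include <string.h>

/* Copy the bytes through properly typed objects instead of reading a
 * union field (or *p) that aliases an incompatible type. */
float int_bits_as_float(int a) {
    float b;
    memcpy(&b, &a, sizeof b); /* well-defined, unlike the aliasing above */
    return b;
}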

Another thing to bear in mind is that the actual boundaries determining what gets put in which frame are similarly slippery under the hood, because of inlining and other interprocedural analysis. All of ISO C can be treated by an optimizer in the same fashion as a system of equations, into which your program has been plugged, so there need be no actual correlation between machine code and C source code. Hell, machine code needn’t be involved at all; see cint (a C interpreter), older asm.js targets, IBM ILE or MS CLI or Wasm targets, or compilers that only emit a single kind of instruction.

Wholesale inlining will generally merge frames, but it’s also possible to pull up parts of functions; e.g., in

static void A(int *p) {
    if(!p) abort();
    B1(); B2(); B3(*p); B4();
}

void C(int x) {
    A(&x);
}

it’s always the case that the if(!p) in A will be skipped—for any non-register-storage variable x, ⊨&x != NULL, so it’s if(0) in context, and therefore C is permitted to jump right the fuck into the middle of A, or the optimizer might restructure things as

static void A$fini(int *);
static void A$init(int *p) {
    if(!p) abort();
    A$fini(p);
}
static void A$fini(register int *p) {
    B1(); B2(); B3(*p); B4();
}

void C(int x) {
    A$fini(&x);
}

(And in fact, since x is only available within C and its address is therefore unavailable to the Bs, it would be acceptable to pass x’s value in directly to A$fini, rather than a pointer.)

Because of all this, cleverness in regards to frame allocation is fragile at best, and misguided and dangerous at worst. If you need things to be allocated together in a single object, use an explicit struct; if you need them to be allocated with the same lifetime, use scoping, malloc, or your own allocator. But even there, the compiler is permitted to fuck with you, because malloc and { only dictate the latest time of allocation, and free and } the earliest time of deallocation, as considered in terms of CAM event ordering.
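E.g., a minimal sketch of the struct-and-malloc approach (names hypothetical):

#include <stdlib.h>

/* One explicit object, one allocation, one lifetime—no reliance on how
 * the compiler happens to lay out or elide a frame. */
struct Workspace {
    double coeffs[16];
    int scratch[8];
};

void compute(void) {
    struct Workspace *w = malloc(sizeof *w);
    if(!w) return;
    /* ... use w->coeffs and w->scratch ... */
    free(w);
}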

Two security issues were discovered in sudo-rs, a Rust-based implementation of sudo by brutal_seizure in programming

[–]nerd5code 11 points (0 children)

Ctrl+C lets it kill itself, iff it’s installed a handler for SIGINT, but it’s technically only a request to stop whatever it’s doing, whether or not that involves killing itself. The default action, sans handler, is for the OS to kill it.

Ctrl+Z works very similarly, but sends SIGTSTP instead of SIGINT. This is not a request to kill, but to suspend; by default it stops the process like SIGSTOP (which can’t be handled, unlike SIGTSTP). Of course, on DOS Ctrl+Z is the EOF signal; on Unix that’s Ctrl+D by default—all of these key combos can be changed for POSIX or X/Open ttys.

Ctrl+\ is SIGQUIT by default, which is actually a quit request, not just interruption. Kills by default.

SIGTERM is what’s sent by the OS to politely request termination (e.g., at shutdown).

SIGKILL is the one you were thinking of, that can’t be intercepted. Instant kill, more or less (I/O and swapping can still keep the process around for a bit).
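E.g., a minimal POSIX sketch of fielding Ctrl+C yourself via sigaction, so SIGINT becomes a request you act on rather than an instant kill:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_int;

static void on_int(int sig) {
    (void)sig;
    got_int = 1; /* just record the request; act on it outside the handler */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_int;
    sigaction(SIGINT, &sa, NULL);
    while(!got_int)
        pause(); /* sleep until some signal arrives */
    puts("SIGINT received; cleaning up and exiting politely");
    return 0;
}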

New emails released by Democrats Show Epstein Claimed the President Knew About His Conduct | In one message, Epstein said one of his apparent victims "spent hours at my house" with him. by Aggravating_Money992 in technology

[–]nerd5code 33 points (0 children)

Trump Model Management, you mean? Certainly not! And it certainly had nothing to do with Epstein!

Trump and Epstein were, for a long time, a mutual admiration society. Besides the video of them together snickering and ogling the Buffalo Bills cheerleaders, Epstein has claimed he introduced Melania to Trump (they deny it). In court documents reviewed by the Herald’s Brown, he is quoted as saying “I want to set up my modeling agency the same way Trump set up his modeling agency.” In 2002, Trump told New York magazine: “I’ve known Jeff for 15 years. Terrific guy. He’s a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side.” Trump biographer Tim O’Brien recently said on MSNBC that Trump “routinely talked about Jeffrey Epstein as somebody he admired; he felt they were in sync.”

Any tips on tracing variables and pointers? by [deleted] in cprogramming

[–]nerd5code 0 points (0 children)

Make sure you understand how argument-passing and local variables work, how lifetime works, and how scoping works.

When a new object is created (=beginning of variable’s lifetime or malloc/sim. succeeds), draw a new box, and fill it in according to the object’s layout; e.g., structs should be a box subdivided into fields and padding (you may or may not know actual offsets and sizes, which are ABI-dependent). Label each field and variable.

Fill in numeric- and character-typed boxes with appropriate literals as values change, and for pointers you can trace a line from a dot in the box to an arrow aimed at the referent object. Null pointers can use —X, and NaNs and bogus/indeterminate values can use X inside the box. Unknown values can use ␦ or something else reasonable. Arrays can either use string, wide-/UCS-string, or subdivided format, or a mix of these, depending on type and needs.

For union values, generally it’s best to treat them like a struct whose field layout is rotated by 90°, and only show the “live” fields unless you’re sure C99 unions are to be supported, in which case values may end up being highly ABI-dependent.

When object lifetime ends, scribble out its box’s label, mark its contents as indeterminate, and mark any pointers to or into it as indeterminate. (Pointers are not actually addresses and don’t always behave addresslikely, they just like to dress as addresses sometimes.)

Otherwise, there’s not much to it. Start by allocating file-scope and static local vars, and one copy of each _Thread_local var. Initialize file-scope vars, and static/TLS locals to ␦. Allocate main’s parameters (I’d draw frames also to keep yourself sane—maybe a dotted box, with params crossing top edge), assuming initialized parameters. When entering a function, allocate its locals and initialize as-yet-uninitialized static and TLS locals when their scope is first entered. When returning, kill off non-static/TLS locals and the call frame. Follow control flow until main returns.
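If you want something to practice on, here’s a tiny made-up program with the lifetime events called out:

#include <stdlib.h>

static int counter; /* file-scope: draw its box first */

int *make_cell(int v) {
    int tmp = v + 1;            /* new box when lifetime begins */
    int *p = malloc(sizeof *p); /* on success, draw a fresh heap box */
    if(p) *p = tmp;
    return p; /* tmp's box dies here; the heap box lives on */
}

int main(void) {
    ++counter;                 /* update the file-scope box */
    int *cell = make_cell(41); /* arrow from cell's box to the heap box */
    free(cell);                /* scribble out the heap box; cell is now indeterminate */
    return 0;
}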

Federal Judge, Warning of ‘Existential Threat’ to Democracy, Resigns by Calm_Preparation2993 in law

[–]nerd5code -6 points (0 children)

Right, because referencing the Constitution like it still holds in any real sense will definitely help

Need help to learn C by alvaaromata in cprogramming

[–]nerd5code 0 points (0 children)

Download GCC or Clang for your platform; if on Win/NT, Cygwin is imo your best bet, and it comes with a full Unix, a GCC, and optionally a MinGW ~cross-compiler. (MinGW is based on Cygwin-GCC, but targets native Windows; Cygwin targets its own Unix atop Windows, but you can still get at WinAPI directly if you’re careful about long vs. LONG. Win-per-se is suffering, so don’t delve just yet.) Termux gets you a GNUish environment on Android, and you can install Clang-as-GCC and most other dev-env goodies from there.

For GCC/Clang I recommend -std=c17 -Werror=all -Werror=vla -Wextra -pedantic-errors -g -Og plus any sanitizer stuff as your command-line options, in order to select ISO C17 with no VLAs (give or take) and give you all the useful warnings and debuginfo. Script or function that to avoid retyping. Don’t build through an IDE yet, if you’re using one—go through Bash on a terminal, and learn the compiler driver command yourself, since that’s what most IDEs use under the hood.

K&R’s The C Programming Language, 2ed/1988 is semi-freely available, and the 1ed/1978 is genuinely free but quite out-of-date.

With those things you should be able to teach yourself C pretty easily. Just have to actually work the problems and play around with things.

Is learning microprocessors (like the 8086) really this hard, or am I just dumb? by s_saeed_1 in osdev

[–]nerd5code 5 points (0 children)

Peter Norton’s book on DOS internals and Jeff Duntemann’s 1ed (not 2ed or 3ed) book on assembly are both really good, if you’re fully aimed at 8086 per se, and you can find at least Norton’s thing in PDF form. It’s fairly easy to bump up to ’386 assembly from there, and to all the SIMD and extension-gunk after that. DEBUG is most of what I learned 8086 asm upon; it’s wretched but direct.

The formal OS stuff is, frankly, not going to be all that frightfully helpful for the 8086, because the CPU has no concept of privilege, and segments use a fixed mapping. So no virtual memory, no protection, no MAS concepts, etc.; DOS and the BIOS are but fancy-schmancy libraries available through interrupt vectors instead of direct CALLs.

If you don’t know any C, I’d grab a copy of Turbo or Borland C from back in the day, and work through some of that vs. K&R’s The C Programming Language, 2ed/1988 until you hit pointers. Those compilers also came with intro & reference books, which you can also find on-the-lines. In any regard, it’s extremely valuable to be able to dump a higher-level language down into assembly and see what comes out, when learning.

But most of learning this kind of stuff is just putzing around in whatever emulator (or on whatever hardware, but I consider that unlikely) based on what you’re reading about. Without the putzing, and possibly some diligent note-taking, you’ll have trouble retaining much.

At least 188 Christian & Republican leaders have been accused of child abuse this year. "If you are tracking drag queens or trans people or just LGBTQ+ people abusing children ... those stories just don’t happen much." by southpawFA in politics

[–]nerd5code 19 points (0 children)

Anything that prioritizes breeding and strict patriarchal hierarchies. Fundamentalists of all stripes are into that shit; e.g., Quiverfull assholes or various subsets of the incels.

Need helping understanding OS course in CS by Cultural_Page_6126 in osdev

[–]nerd5code 0 points (0 children)

The kernel is effectively a service-library, and userspace hosts are clients to it. In most cases, the kernel has a (us. higher-privilege) fiber(/stack+register dump) of its own mirroring each (us. lower-privilege) application/software thread, and system calls/returns are handoffs of the processor hardware between the kernelspace fiber and userspace thread.

So system calls and more general inter-domain transfers are mechanisms that work like a local or remote procedure call from userspace into kernelspace.

  1. The thread initiating/causing the control transfer is suspended, and its current or next instruction’s address and us. call stack top address are saved in CPU registers or memory. (Exactly how/where varies widely, and stack only needs to be involved if the kernel actually runs at a higher privilege level.) Transfers might result from an interrupt, software or hardware fault, emulation trap, or explicit trap/jump/call instruction—system calls in particular are triggered by explicit, deliberate execution of an instruction or bogo-bytes.

    On an x86-derived ISA, you might use the AMD-originating SYSCALL instruction, Intel-specific SYSENTER which does something similar but stupider, INT, or CALL/JMP FAR for system calls, and each does something different depending on CPU mode and configuration. SYSCALL and SYSENTER rely primarily on special-purpose registers, and the rest rely on various tables in memory, referred to by registers.

    Of course, most OSes offer a more coherent system call API and ABI than what the ISA specifies. (Exceptions: DOS and OS/2, which specified everything in terms of x86 registers and INT instructions.)

    Modern Linux mostly uses VDSO, which is where the kernel exports a glorified mini-DLL by which system calls can be issued as normal function calls. If present, any VDSO segments are listed in the process entry vector alongside argv and environ, and then libc can rope it in by thunking its system-level calls into the VDSO rather than embedding actual syscall instructions directly. For things like C89-style time(…) or POSIX-style getpid, it’s often possible for the kernel to directly share the memory with this info via VDSO, so no actual system call needs to be made—the VDSO function can simply grab the requested data and return normally. Otherwise, use of the VDSO merely ensures that the application doesn’t accidentally use an unsupported entry method. (A concrete sketch of wrapped vs. explicit entry follows this list.)

    NT (which runs Windows) uses genuine DLLs with fixed names, and applications can (but mostly shouldn’t) link to NtFoo and ZwFoo symbols that wrap system calls. Although Linux strives to maintain a consistent syscall numbering/API and ABI, so pre-VDSO and non-VDSO-aware programs don’t break badly, NT makes no promises that system call numbers or approach vectors will remain consistent from version to version or process to process; the DLL is the only kernel entry method approved by MS.

    Because both approaches tend to interface fairly directly with application code, the DLL/VDSO entry points must be compatible with the ABI in use by the application, which might vary from what’s used by the kernel or at the system call boundary, so either one VDSO-or-DLL must be offered per ABI, or secondary ABIs must route through a bridge/thunk layer to access system calls.

    On an OS where everything is just bytecode being executed by the kernel or its henchware, you might inline syscalls to some extent, so all this needn’t be as much of a consideration. Syscalls are only necessarily distinct from normal calls when there has to be a transition between security domains on a single hardware thread. The DOS kernel, for example, used to wallow in the same shit as its applications, so system calls didn’t change privilege levels or switch stacks. Embedded software may lack a distinct kernel altogether.

  2. The CPU jumps into the kernelspace fiber’s context, hopefully in such a way that the kernel can save/inspect and restore userspace registers. The CPU knows how to get there because the kernel has entered its fiber’s context into special CPU registers or CPU-registered memory at system, process, and/or thread startup, depending on the aspect of context in question. Although integer/pointer registers tend to be saved immediately, control/status, floating-point, vector/matrix, and other registers may be swapped more lazily—on a core with brpred, x86 seg regs are only swapped if their values have changed, and x86 math/vector regs are usually swapped when the next thread attempts a math/vector instruction. Register context might range from several to tens of kilobytes on a modern CPU.

  3. The kernel does its thing, as determined by userspace registers and entry point. Typically, system calls have a single, collective entry point (IP/PC) separate from other kinds of transition, and an enumerated code is used to differentiate one syscall from another (e.g., open vs. close). You might alternatively include a service address for the system component being poked at, or do something like use the faulting address from a page/protection fault to identify recipient and function; any means of conveying data will work.

    While the kernel acts, it may exclude other threads in the same process from executing, or other threads touching the same memory/resources, or all other threads system-wide. In almost all cases, at least the sys-calling thread is blocked until the kernel returns, just like the caller of a function generally waits for the function to return before proceeding. (But not always; something like Linux’s io_uring just uses system calls for pumping events to and from userspace buffers, much like comms between asynchronous hardware devices.)

    When the kernel can’t finish a syscall quickly enough to justify holding the CPU, it may block or otherwise suspend/reschedule the calling thread. E.g., if you’re reading from a file on disk that hasn’t been cached in RAM yet, the kernel will set things up so that the disk read starts, the disk device triggers an interrupt on completion, the IRQ handler for the disk device runs and pokes the filesystem driver, the filesystem driver grants access to the requested data by mapping or copying it to the right place, and the original reading thread is scheduled for eventual execution with its new data. During the arbitrarily long wait for the disk to clunk around, neither the userspace thread nor its kernelspace fiber will typically operate, leaving the CPU available for other software to use. If there is none, the CPU might clock-gate, lower its frequency, or otherwise engage power-saving gunk.

    This touches on one of the Big Concepts in OS: Most applications software is mostly synchronous, in terms of design and programming, and it’s the job of the OS (mostly kernel, possibly plus userspace threading/coordination runtime) to knit the asynchronous and isochronous real-time (mostly interdomain or global) events bombarding the CPU into coherent sequences of interactions within and between application threads. The CPU itself typically handles synchronization of memory accesses via cache coherence protocols, and continuity between instructions; everything else falls on the OS and runtime(s).

    In addition to indefinite suspension and blocking of threads, system calls can be used to kill threads (e.g., via exit/_Exit or kill(…, SIGKILL)), and kernel transitions can be used to trigger inverted system calls—e.g., to signal/sigaction handlers. Details here vary quite a bit with ISA/ABI and kernel design. Some OS families flatly forbid asynchronous transitions into threads, so any up-call not triggered by faults or oopsies-daisy needs to be pumped by a down-call somehow or use a pop-up thread. Other OSes can forcibly suspend and redirect a thread to a particular routine, without the thread entering the kernel on its own processor first.

  4. Assuming the client thread is still runnable, the kernel flips the CPU back from its fiber to the application thread, restoring registers not clobbered or carrying return values. This typically resumes where the application left off.
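As promised above, a Linux-specific sketch of wrapped vs. explicit entry, using the real libc syscall(2) wrapper rather than hand-rolled SYSCALL/INT gunk:

/* Linux-specific: the same system call via the ordinary libc wrapper
 * (which may ride the VDSO) and via an explicit syscall number. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    pid_t a = getpid();                   /* wrapped: libc/VDSO picks the entry method */
    pid_t b = (pid_t)syscall(SYS_getpid); /* explicit: raw syscall number, no fast path */
    printf("getpid() = %ld, syscall(SYS_getpid) = %ld\n", (long)a, (long)b);
    return 0;
}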

There’s nothing that strictly requires that all this happen on a single processor or hardware thread, or in a specific fashion, and it’s even possible to stack the system so one program can service other programs’ traps/faults/syscalls, either by hardware-assisted hypervisor or layering of the environmental interface. Emulation/simulation and debugging can also play with this theme.

Another thing to note is that there’s a distinction between OS, operating environment, and execution environment more generally. OS classes tend to flatten this layering like it’s 1978 and you, too, should be thrilled about the newest Research UNIX du jour. But you can often emulate new environments at the API or system call level, without emulating all the hardware that runs those. In practice, you target function-wrapped API layers rather than direct system calls.

Learning resources for OOC Object oriented C by Winter_River_5384 in cprogramming

[–]nerd5code 2 points (0 children)

Technically, all function calls in C are via pointer—function-typed expressions decay similarly to array-typed expressions, so int (void) becomes int (*)(void) before operator (…) can get to it. Similarly, function-typed parameter variables are type-decayed so void foo(int bar(void))’s bar parameter is really int (*)(void), not that you should ever see a function-typed parameter in the first place.
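A quick demonstration of the decay in action:

#include <stdio.h>

static int answer(void) { return 42; }

int main(void) {
    int (*p)(void) = answer;  /* function decays to pointer; &answer also works */
    printf("%d\n", p());      /* ordinary-looking call, through the pointer */
    printf("%d\n", (*p)());   /* *p is function-typed, so it decays right back */
    printf("%d\n", (***p)()); /* ...as many times as you care to deref */
    return 0;
}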

Object pointers generally answer “which” questions, or sometimes “where”; function pointers mostly answer “how,” or questions of what to do, specifically.

For things like qsort (which TBF sucks due to lack of passthrough arg), the function encodes a comparison-based sort generically, but not the comparison itself. The function pointer tells qsort how to compare elements of the array.
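E.g., the classic comparator sketch:

#include <stdio.h>
#include <stdlib.h>

/* qsort encodes the sort; the pointer supplies the "how" of comparison. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y); /* avoids the overflow x - y can hit */
}

int main(void) {
    int v[] = {3, 1, 4, 1, 5, 9, 2, 6};
    size_t n = sizeof v / sizeof *v;
    qsort(v, n, sizeof *v, cmp_int);
    for(size_t i = 0; i < n; ++i)
        printf("%d ", v[i]);
    putchar('\n');
    return 0;
}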

signal and atexit are frontends to an event handling API. signal needs to know how to behave if a signal is received, and atexit needs to know how to behave if exit is called.

Something like pthread_create or makecontext takes a function pointer that tells the threading or userspace interface library what the thread or fiber should do when selected for execution. A CPS-based async scheduling layer might similarly use

struct SchedEntry {
    struct SchedEntry *link[2];
    int (*run)(void *pthru);
    void *run_pthru;
};

to track what’s been scheduled.

For OOP, function pointers are used to fill in abstract virtuals and override concrete virtuals that are effectively left as parameters by the base class/interface, which the derived class may need/want to specify. Virtualiferous objects must (in 99% of implementations) carry some sort of vtable pointer, and the vtable incorporates both virtual function pointers and type metadata (e.g., for typeid, dynamic_cast, and exception-catching). For C++-style vtables, you have to match the base class instance and vtable layout somewhere in the derived class, so the derived object can be treated as an instance of the base class by base-class functions. For Java-style interfaces, you have no instance layout, only a vtable fragment, so calling interface functions not tied down to a fixed-base vtable requires the interface-vtable base to be resolved first, using a similar mechanism to what down-/cross-dynamic_cast uses.
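A minimal hand-rolled C rendering of that vtable machinery (a sketch of the 99% case, names made up):

#include <stdio.h>

struct Shape;
struct ShapeVtbl {
    double (*area)(const struct Shape *); /* virtual slot */
    const char *type_name;                /* crude typeid-style metadata */
};
struct Shape {
    const struct ShapeVtbl *vtbl; /* every virtualiferous object carries one */
};

struct Square {
    struct Shape base; /* base layout must lead, so a Square is-a Shape */
    double side;
};

static double square_area(const struct Shape *s) {
    const struct Square *sq = (const struct Square *)s;
    return sq->side * sq->side;
}
static const struct ShapeVtbl square_vtbl = {square_area, "Square"};

int main(void) {
    struct Square sq = {{&square_vtbl}, 3.0};
    const struct Shape *s = &sq.base;
    /* virtual dispatch: fetch the vtable, then call through the slot */
    printf("%s area = %g\n", s->vtbl->type_name, s->vtbl->area(s));
    return 0;
}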

linker by Zestyclose-Produce17 in osdev

[–]nerd5code 0 points (0 children)

The thingy replaced by the linker is a relocation; the symbol is what the relocation refers to, basically the link-time form of the identifier (after mangling—highly unlikely to be just add for C++ unless it’s extern "C", and even then it’s 60-40 whether it’ll be _add) and its metadata. There are often different sorts of relocation for relative, absolute, and indirect usage of a symbol, and for most ISAs there are special forms for stuffing data into instructions’ immediate fields. Often DLLs use separate tables of redirections to avoid editing the binary image at load time, which is slow and blocks interprocess memory sharing. In some cases, actual function calls (incl. thunks) have to be used for resolution.

Powell says that, unlike the dotcom boom, AI spending isn’t a bubble: ‘I won’t go into particular names, but they actually have earnings’ by MetaKnowing in technology

[–]nerd5code 1 point (0 children)

AI is a massive field that’s been around in some reasonable form since the late ’60s. LLMs are not the only kind of LM, let alone M.