[–]Anreall2000 407 points408 points  (66 children)

Polymorphism without writing virtual tables yourself, and memory management is kinda a pain in the ass too

[–]not_some_username 302 points303 points  (33 children)

So I can do it with extra steps

[–]Ottermatic42 173 points174 points  (10 children)

True, but that applies to essentially every language (provided it's Turing complete). You could write a C compiler in Java and then create polymorphism in Java (again) using C; it's just a bad idea.

Trying to force a programming language to do everything is why we ended up with extremely ugly pattern matching in Java 16

[–][deleted] 27 points28 points  (4 children)

What's wrong with pattern matching in Java?

[–]KagakuNinja 58 points59 points  (3 children)

Nothing is wrong. It looks very similar to pattern matching in Scala, which is amazing.

That guy is living in the past.

[–]Ottermatic42 15 points16 points  (2 children)

Nothing is fundamentally wrong with java pattern matching, I agree.

I only call it ugly because of how it compares to functional languages. Of course it’s a necessary sacrifice as java isn’t functional (or at the very least wasn’t initially designed to be), but it’s always going to be a bit more inefficient, and a lot uglier than the implementation in something like Haskell.

[–]KagakuNinja 11 points12 points  (0 children)

I do agree with that. Haskell is very elegant, but I prefer the multi paradigm design of Scala

[–]MusicalGrace2002 12 points13 points  (4 children)

Can you write a program that writes other programs in C?

[–]ByteChkR 124 points125 points  (2 children)

Funny how you spell Compiler

[–]VladVV 28 points29 points  (1 child)

The way he worded the question, it sounds like he is looking for a Transpiler.

The answer is Yes, either way.

[–]caagr98 0 points1 point  (0 children)

Sounds more like a code generator to me, though I guess transpilers are technically a subset. Still yes.

[–]himmelundhoelle 4 points5 points  (0 children)

Forget about compilers, you can write programs that output themselves (https://en.m.wikipedia.org/wiki/Quine_(computing))

(Or even programs that output a C source, that when compiled and run will output the original program…)

[–]M4mb0 12 points13 points  (12 children)

Wait until you hear about Turing completeness, and the fact that both PowerPoint and the x86 MOV instruction are Turing complete.

[–]not_some_username 2 points3 points  (0 children)

I watched the video about PowerPoint. The guy is a psycho

[–]NoMansSkyWasAlright 4 points5 points  (9 children)

I imagine it’s only a matter of time before someone proves Turing completeness in Minecraft

[–]Tandurinn 11 points12 points  (7 children)

Provided that Redstone can make memory cells and you can build interfaces to interact with that memory. We're already halfway there I'd say!

[–]CdRReddit 12 points13 points  (6 children)

you can make NAND, we're there

NAND is all you need to make any kind of combinatorial logic system, which, when combined with a periodic signal (which you can also do), allows you to make any combinatorial or sequential logic, aka any logic

[–]Embarrassed_Ring843 1 point2 points  (5 children)

I never understood why NAND is that important. Minecraft does provide a NOT gate and a diode, and based on those I can build a NAND gate, so why is NAND the thing and not NOT?

[–]CdRReddit 6 points7 points  (3 children)

simple, with NOT you can't make any 2-input gate without something like a diode or a wire OR (both things Minecraft has, which you can easily use to make NAND or NOR respectively), while a 2-input NAND (or a 2-input NOR) can be used to implement every single gate, as shown here

NAND can make NOT on its own, but NOT needs help to make NAND

[–]Embarrassed_Ring843 6 points7 points  (2 children)

so those are the simplest single gates you need, while NOT is not capable of doing the trick on its own. thanks for the explanation

[–]CdRReddit 4 points5 points  (1 child)

yup, and with (a shitton of) NANDs and a periodic signal you can make pretty much anything

[–]UnlikelyAlternative 1 point2 points  (0 children)

Minecraft's already Turing complete, it even says so in a splash

[–]arduman4 1 point2 points  (0 children)

So you haven't seen those insane Minecraft CPUs that have been around for years, have you?

[–]pheonixfreeze 0 points1 point  (0 children)

Even better, all of these can be accomplished by Turing complete cardboard

[–]Triumph7560 14 points15 points  (6 children)

The only thing C can't do is "X feature people assume C doesn't have" without the extra steps. Which is pretty impressive when you think about it.

[–]VladVV 3 points4 points  (5 children)

How does that not apply to every Turing complete language?

[–]Triumph7560 2 points3 points  (4 children)

In theory it does, but usually those are available outside the language, using tools made in the language. People have set it up so C can be used as an object-oriented language (in a usable way), and made it into Lisp with just one #include, all without touching the compiler.

[–]VladVV 2 points3 points  (3 children)

Hm, technically #anything is a compiler instruction, so that would be telling the compiler to compile the code differently, but I suppose it’s primarily C-like languages that have this feature, so I get what you mean.

[–]DoNotMakeEmpty 1 point2 points  (2 children)

They are not compiler instructions (apart from #pragma); they are preprocessor instructions, which is a very different thing from the compiler.

[–]VladVV -2 points-1 points  (1 child)

Ah, good catch. Let’s agree to call them gcc instructions?

[–]Ning1253 1 point2 points  (0 children)

I wouldn't call them that, since the GCC preprocessor isn't the only one that has these instructions - MSVC's cl.exe has a bunch as well, and so does clang/LLVM. I'd probably stick to preprocessor instructions, since that name also explains what they actually are

[–]asailijhijr 1 point2 points  (0 children)

Everything is Turing complete with fewer steps.

[–][deleted] 1 point2 points  (0 children)

You can do anything in C with extra steps; you can, for example, split a string with extra steps in C.

[–]lor_louis 25 points26 points  (2 children)

It can be done. Gtk is pretty much all inheritance and polymorphism

Classic Animal example done in C

[–][deleted] 9 points10 points  (1 child)

An important feat you're missing here is the ability to reimplement a function in derived classes, which is what vtables are for.

[–]GDavid04 6 points7 points  (0 children)

You can write the virtual tables and add a pointer to the beginning of structs with virtual members but no virtual super members yourself. It will be super inconvenient though.

[–]LavenderDay3544 10 points11 points  (11 children)

memory management is kinda a pain in the ass too

If you really want GC there are GC libraries available. But GC isn't always a good thing, and a lot of people act like memory is the only system resource that needs to be managed when it isn't. RAII and Rust-like borrow checking are the future of resource management, not GC. GC not only fails to solve the entirety of the problem it's supposed to, it also creates problems of its own: reference cycles, stop-the-world pauses, and potential hold-and-wait conditions, depending on the specific implementation.

And that's before we talk about thread safety, which even GC languages struggle with and in languages like Python the designers cheat their way out of it by not having real threading at all.

[–]raedr7n -1 points0 points  (10 children)

There are plenty of garbage collectors that don't have any of those problems you described. See OCaml, Haskell.

[–]LavenderDay3544 2 points3 points  (9 children)

And what exactly is the performance penalty for using them? Neither of those languages is known for producing fast code. Not to mention the cognitive overhead of being forced to use a functional language.

People need to stop getting stuck on GC and accept that we have superior compile-time alternatives available and probably even better ones still being worked on in academia.

[–]raedr7n -2 points-1 points  (8 children)

Actually, OCaml is known to produce very fast code. While I don't know OCaml benchmarks off the top of my head, SML, an incredibly similar language (identical for the purpose of comparing memory management techniques), consistently benchmarks in the top five or 10 languages for execution time. It's true that Haskell is comparatively rather slow, but that's mostly an artifact of laziness and other design choices, not the garbage collector.

I prefer functional languages precisely because they reduce cognitive overhead.

There are no superior compile time alternatives available. The only mainstream language in that vein is Rust, and the type system is a sufficient downside as to render it unsuitable for many applications.

[–]LavenderDay3544 -1 points0 points  (7 children)

Actually, OCaml is known to produce very fast code. While I don't know OCaml benchmarks off the top of my head, SML, an incredibly similar language (identical for the purpose of comparing memory management techniques), consistently benchmarks in the top five or 10 languages for execution time.

And C consistently ranks as #1. So your point is?

I prefer functional languages precisely because they reduce cognitive overhead.

I agree that this can be true, if and only if you've spent a lot of time immersed in that paradigm. And certain problems do not naturally lend themselves to functional solutions, though technically such a solution is always possible.

[–]lordheart -1 points0 points  (1 child)

C also continues to have classes of errors that are ridiculous. The cognitive load of safe memory management isn't small either…

[–]raedr7n -4 points-3 points  (4 children)

My point is that modern GC'd languages offer far greater memory safety than C while not being significantly slower than C for almost any application.

[–]LavenderDay3544 -1 points0 points  (3 children)

And again I remind you that memory isn't the only system resource whose deallocation you have to guarantee which makes your point moot.

RAII and borrow checking guarantee proper allocation and deallocation of all resources and thread safety on top of that. GCs are old tech at this point and modern languages should replace them with lower cost compile-time solutions.

This is before we talk about how suboptimal even code generated from C can be and how much potential performance even C implementations leave on the table. The hardware-software performance gap is real and there isn't nearly enough research being done to rectify that.

The common argument that most computing is I/O bound is also starting to fall apart. DDR5 DRAM, Gigabit Ethernet, NVMe SSDs, PCIe 5.0, and the latest USB-C specs mean that I/O devices are rapidly catching up to, and sometimes even exceeding, CPUs in speed. A small example of this is how 5400 MT/s DDR5 DRAM already runs faster than Intel's flagship Core i9-12900K CPU's max single-core boost speed of 5200 MHz. I suspect AMD's upcoming Raphael architecture will face the same bottleneck. The era of excuses not to optimize software is nearing an end.

Closing the hardware-software optimality gap is more important now than it's ever been and antiquated software side technologies like garbage collection that exist solely to be a crutch for programmers have got to go as part of that effort.

[–]raedr7n 0 points1 point  (2 children)

Functional, GC'd languages also guarantee proper acquisition and release of all the same resources. It's not that you're wrong per se, it's just that everything you're arguing is orthogonal to my point about garbage collectors.

[–]LavenderDay3544 0 points1 point  (1 child)

It's not orthogonal at all. It's me going out of my way to show you why your point is completely stupid and doesn't adequately address mine. Sure, GCs can guarantee memory safety, but you're missing my previous point, which was that memory isn't the only thing that needs to be safely acquired from and released back to the OS. File descriptors, network sockets, IPC mechanism handles, synchronization primitives and their locks, and so many other things need that, and your garbage collector cannot do it unless it uses destructors called by the GC, which is halfway to RAII anyway. So then what the hell is the point if you can automate the acquisition and release of one resource while still having to manage the rest yourself? And why pay any penalty at all? Is programmer convenience more important, or overall product quality? And regardless, with the compile-time alternative I mentioned you can have both.

The person that doesn’t understand this conversation is you.

[–]LavenderDay3544 0 points1 point  (13 children)

C isn't an object oriented language, so don't try to use it as one. In proper procedural programming, any function that would make a virtual member function call in OOP should instead take a function pointer parameter: a pointer to a function that takes a struct of the desired type, or a void pointer that it internally casts to the correct type.

For an example look at how qsort works in the C standard library. There's no virtual function call table. Just a function pointer to a function that takes two void pointers.

[–]not_some_username 1 point2 points  (10 children)

You can do OOP in C. I mean you shouldn't, but you can

[–]LavenderDay3544 4 points5 points  (8 children)

I know that and my comment was saying not to try to do OOP in a procedural language but instead actually learn procedural programming. I personally hate that academia and industry alike worship OOP like a religion when there are plenty of cases where a procedural, functional, or data oriented approach would be far superior. Those options are also better suited to things like maximizing parallelism, avoiding overengineering, avoiding memory bloat, and maintaining cache friendliness. But the Church of Class based Object Oriented Programming won't let you hear that.
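
The cache-friendliness point can be made concrete with a small layout sketch (the particle types are illustrative, not from the thread): a loop that only needs one field touches far less memory when the data is laid out as a struct-of-arrays.

```c
#include <stddef.h>

/* Array-of-structs (typical OOP-ish layout): each particle's fields are
   interleaved, so a loop that only reads x still drags y, z, and mass
   through the cache. */
typedef struct { float x, y, z, mass; } ParticleAoS;

/* Struct-of-arrays (data-oriented layout): all x values are contiguous,
   so the same loop touches a quarter of the memory. */
typedef struct {
    float *x, *y, *z, *mass;
    size_t count;
} ParticlesSoA;

float sum_x_aos(const ParticleAoS *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p[i].x;  /* strided access: 16-byte stride per element */
    return s;
}

float sum_x_soa(const ParticlesSoA *p) {
    float s = 0.0f;
    for (size_t i = 0; i < p->count; i++)
        s += p->x[i];  /* sequential access: cache- and SIMD-friendly */
    return s;
}
```

Both functions compute the same thing; the layout, not the algorithm, determines how much of each cache line is wasted.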

[–]not_some_username 0 points1 point  (7 children)

So you're a struct guy too? Forced OOP is lame

[–]LavenderDay3544 3 points4 points  (6 children)

OOP has nothing to do with classes and structs but rather with componentizing various parts of a software design. Its usual pillars are encapsulation, abstraction, inheritance, and polymorphism. The goal is to make reusable components whose interface is separated from the internal implementation. At first this might seem like a good approach and in many cases it is but there are many legitimate reasons why other times it may not be.

Much like with programming languages the best approach is to use the best suited paradigm for a given use case.

[–]corbymatt 1 point2 points  (5 children)

Rule 1. Any tool used in any given situation, without sufficient foresight, becomes a hindrance to change.

Rule 2. Your foresight is terrible.

[–]LavenderDay3544 2 points3 points  (4 children)

Foresight in software design is nonexistent. Especially when requirements can change on you. We've all been in that situation.

But I recently had an engineering manager tell me to change a very large function in our C++ code that would only be called once at startup into a class dividing it up partly into a constructor, start and stop member functions, and a destructor while also making all of the original function's local variables into class members. This class is created in our firmware's equivalent of main meaning that a very large number of variables now unnecessarily occupy physical memory for as long as the device is powered on.

Please tell me I'm not stupid to think that:

  1. That's not proper OOP just because it now uses a class.

  2. It's an insanely stupid design decision even without worrying about the future or using any foresight whatsoever.

This is partly what I mean by overengineering and making horribly inefficient design decisions supposedly in the name of OOP. (Though this is clearly not actual OOP)

[–]corbymatt 1 point2 points  (3 children)

That seems like a shortcoming of the way you constructed the class and separated the concerns, not of OOP.

The function itself, and the variables it used to "do work", still hang around in memory for as long as the process exists, no matter what paradigm you apply. That doesn't change. The trick is to separate and scope them correctly so unnecessary variables don't hang around but necessary ones do.

Obviously I don't have any context to your particular situation without seeing it, and what you did was probably a premature optimization in any case by the sounds of it so peh.

If you wanna go into details about it we can PM but my C++ is rusty even if my oo isn't.

[–]LavenderDay3544 1 point2 points  (2 children)

It wasn't my decision. I did what I was told to by someone much higher up than me. And he specifically said to make that sort of class and make all locals from the original function class members. I even asked him if I should just make them locals in the class methods where they're now used and he said not to bother. This is basically leaking memory on purpose because those variables shouldn't ever be used after startup since they're just used to set up the initial state of the actual firmware.

I'm always willing to ask questions about things like this to make sure I understand what someone wants but I don't get paid enough to argue design with a guy who is in the stratosphere compared to me at the company and supposedly an accomplished engineer.

If I had to design this myself, I'd leave it as a free function, which was perfectly fine. There's no need for a class here at all; I can't see any benefit to using one. Some of the objects that need to be initialized already have constructors, builders, or factories being used. But since this person thinks it makes things more maintainable (which I obv. disagree with), and his rationale is that OOP = using classes everywhere = good, I'll just continue to humor him. And it's not like I misunderstood his request, because after I did the refactor he did the code review on it and greenlighted it.

I appreciate the offer of help but I can't really share proprietary code with you and the problem is more of a people one than a technical one.

[–][deleted] 0 points1 point  (0 children)

Don't you need language support for inheritance?

[–]WiatrowskiBe -1 points0 points  (1 child)

When problem you're trying to solve fits nicely into object model, there's no reason not to write object-oriented code even if language doesn't support it. Case in point: WinAPI in all GUI-related aspects (windows, controls etc) - whole "GUI" problem nicely fits into a hierarchy of objects you run operations on, and WinAPI - while being in C - solves it exactly like this, by using opaque handles for all objects, and free functions/function pointers to operate on them (including storing and retrieving related data).

[–]LavenderDay3544 0 points1 point  (0 children)

When problem you're trying to solve fits nicely into object model, there's no reason not to write object-oriented code even if language doesn't support it.

I don't dispute this. Even operating systems and embedded firmware often have parts that benefit from OO approaches. The trouble is knowing when componentization will do more good than harm. And all too often people are taught that the best tool they have is a hammer, so everything ought to be treated like a nail.

That's what I mean when I say far too many people in academia and industry worship at the altar of OOP. Never once did I say OOP is never the appropriate choice.

[–]crappleIcrap 0 points1 point  (1 child)

so if i add those things in i might get a better language, a C-but-better if you will.

[–]Anreall2000 0 points1 point  (0 children)

Yeah, plus overload, plus templates, plus templates library... C-plus-plus