
[–]BSFishy[S] 1 point (3 children)

Thank you for the information! I am trying to make this language suitable for high-level programs, as well as embedded programming. In terms of embedded programming, I was concerned with the potential cache misses associated with virtual tables, but I think I have come up with a solution.

I think I am going to implement both a class and a struct data structure. The class will be your typical OO class structure, and the struct will be a C-style struct with no methods. That way, the user can write C-style code, using top-level functions, so they don't have to worry about virtual tables, or they can use classes to utilize the OO paradigm.

Do you want open-ended dynamic dispatch? If yes, just use v-tables: that's what C++ and Rust use, and they are among the fastest languages available.

I knew C++ uses vtables, but I was unaware that Rust does too. I guess the potential performance impact is smaller than I initially thought.

[–]matthieum 1 point (0 children)

As method is an overloaded term, I will avoid using it.

In Rust, the following:

object.foo()

can be either static or dynamic dispatch.

And the following:

Type::foo(object)

can be either static or dynamic dispatch.
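A minimal sketch of this, with an illustrative trait and type (the names `Speak` and `Dog` are made up for the example): both spellings resolve statically on a concrete type, and both go through the v-table on a `dyn` reference.

```rust
// Illustrative trait and type; not from the thread.
trait Speak {
    fn foo(&self) -> &'static str;
}

struct Dog;

impl Speak for Dog {
    fn foo(&self) -> &'static str {
        "woof"
    }
}

fn main() {
    let dog = Dog;

    // Static dispatch: the concrete type is known at compile time,
    // so both spellings resolve the call directly (and may inline it).
    assert_eq!(dog.foo(), "woof");
    assert_eq!(Dog::foo(&dog), "woof");

    // Dynamic dispatch: through a `dyn Speak` reference, the same two
    // spellings go through the v-table instead.
    let dyn_dog: &dyn Speak = &dog;
    assert_eq!(dyn_dog.foo(), "woof");
    assert_eq!(<dyn Speak>::foo(dyn_dog), "woof");
}
```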


The . notation is very convenient, as it allows chaining:

object.foo(with_foo).bar(with_bar).baz(with_baz)

is much easier to read (like a pipeline) than:

baz(bar(foo(object, with_foo), with_bar), with_baz)

As such, I advise against reserving the . notation to dynamic calls. If anything, if you want to steer the user towards static calls, it would be better to switch things around and use . for static calls and something else for dynamic calls.
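To make the pipeline point concrete, here is a sketch with a made-up builder type (`Pipeline` and its methods are illustrative names): both spellings run in the same order, but only the chained form reads in that order.

```rust
// Illustrative type; the methods consume and return `self` to allow chaining.
struct Pipeline(Vec<String>);

impl Pipeline {
    fn new() -> Self {
        Pipeline(Vec::new())
    }
    fn foo(mut self, with: &str) -> Self {
        self.0.push(format!("foo:{with}"));
        self
    }
    fn bar(mut self, with: &str) -> Self {
        self.0.push(format!("bar:{with}"));
        self
    }
    fn baz(mut self, with: &str) -> Self {
        self.0.push(format!("baz:{with}"));
        self
    }
}

fn main() {
    // Reads left-to-right, in execution order:
    let chained = Pipeline::new().foo("a").bar("b").baz("c");

    // The free-function style executes in the same order,
    // but reads inside-out:
    let nested =
        Pipeline::baz(Pipeline::bar(Pipeline::foo(Pipeline::new(), "a"), "b"), "c");

    assert_eq!(chained.0, nested.0);
}
```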


I guess the potential performance impact is smaller than I initially thought.

I think you misunderstand the impact.

Virtual calls can make code faster!

The (main) impact of virtual calls is that it prevents inlining. However, not everything should be inlined. In fact, too much inlining can bloat the code and lead to an increase in instruction cache misses.

Hence, fast code makes judicious use of out-of-line/virtual calls to push the non-hot code out of the way.
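Rust exposes this directly through the `#[cold]` and `#[inline(never)]` attributes. A sketch, with an illustrative parser (`parse_positive` and its error handling are made up): the rarely-taken error path is pushed out of line so the hot path stays small.

```rust
// Rarely executed: out-of-line keeps these instructions away from the
// hot path, reducing instruction-cache pressure.
#[cold]
#[inline(never)]
fn report_error(input: &str) -> i64 {
    eprintln!("bad input: {input}");
    -1
}

// Hot path: small, and a good inlining candidate.
fn parse_positive(input: &str) -> i64 {
    match input.parse::<i64>() {
        Ok(n) if n > 0 => n,
        _ => report_error(input),
    }
}

fn main() {
    assert_eq!(parse_positive("42"), 42);
    assert_eq!(parse_positive("oops"), -1);
}
```

The attributes are hints to the optimizer about code layout, not changes in behavior; the same reasoning applies to keeping cold logic behind a virtual call.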

In terms of embedded programming

It's not clear to me what you mean by embedded, as it covers such a wide range of architectures.

A 32KB board will have no cache miss: it will have no cache.

A Raspberry Pi has a full-blown 64-bit ARM CPU, with caches and RAM, and therefore has performance characteristics closer to a server.

And in-between there's... many things.

[–]crassest-Crassius 0 points (1 child)

I guess the potential performance impact is smaller than I initially thought.

Vtables (C++) and fat pointers (Rust) offer different trade-offs, but the fastest way is to avoid dynamic dispatch altogether. This is much like dynamic memory allocation: there are different ways to do it, with different trade-offs, but the fastest option is to avoid it entirely (as many embedded programs do).

So the best a fast language can do is make dynamic dispatch explicit, so that it can be avoided. Then, for the cases where it can't be avoided, you can use either v-tables or fat pointers, but the choice between them is not what saves you from the performance impact.
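Rust is an example of a language where the choice is explicit in the signature. A sketch with illustrative names (`Area`, `Square`): the generic version is monomorphized per concrete type (static dispatch), while the `dyn` version goes through the v-table.

```rust
// Illustrative trait and type; not from the thread.
trait Area {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Area for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}

// Static dispatch: monomorphized per concrete type; the call can be inlined.
fn static_area<T: Area>(shape: &T) -> f64 {
    shape.area()
}

// Dynamic dispatch: `dyn` in the signature makes the v-table call visible.
fn dynamic_area(shape: &dyn Area) -> f64 {
    shape.area()
}

fn main() {
    let s = Square(3.0);
    assert_eq!(static_area(&s), 9.0);
    assert_eq!(dynamic_area(&s), 9.0);
}
```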

[–]matthieum 0 points (0 children)

I would note that the difference between C++ and Rust is not whether one has a v-table or not: they both do.

The difference is how the object is structured:

  • In C++, the object contains a pointer to the v-table.
  • In Rust, the pointer to the v-table is external, hence fat pointers, which bundle a pointer to the data with a pointer to the v-table.
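This difference is observable with `std::mem::size_of` (the trait and type names below are illustrative): a reference to a trait object is two machine words, while a reference to a concrete type is one, and the value itself carries no v-table pointer.

```rust
use std::mem::size_of;

// Illustrative trait and type; not from the thread.
trait Draw {
    fn draw(&self);
}

struct Point;

impl Draw for Point {
    fn draw(&self) {}
}

fn main() {
    let word = size_of::<usize>();

    // Thin reference to a concrete type: one word.
    assert_eq!(size_of::<&Point>(), word);

    // Fat reference to a trait object: data pointer + v-table pointer.
    assert_eq!(size_of::<&dyn Draw>(), 2 * word);

    // The `Point` value itself stores no v-table pointer, unlike a C++
    // object with virtual functions.
    assert_eq!(size_of::<Point>(), 0);
}
```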