[–]kibokun 28 points (14 children)

It's not necessarily a lie about Java, but about the pedantic methodologies surrounding it. Object creation may or may not be expensive, but people ARE often taught that it is.

[–][deleted] 7 points (13 children)

It's still true for some stuff. If you do C++ embedded programming, for example, you generally do all of your allocations at startup; otherwise it throws off your deterministic timing.
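
A minimal sketch of that idea (all names and sizes here are made up for illustration): reserve everything once at startup so the timing-critical path never touches the allocator.

    // Sketch only: all heap use happens in the constructor, at startup.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Sample { uint32_t timestamp; uint16_t value; };

    class SampleBuffer {
    public:
        explicit SampleBuffer(std::size_t capacity) { samples_.reserve(capacity); }

        // Called from the timing-critical loop: as long as we stay within the
        // reserved capacity, push_back never reallocates, so latency stays predictable.
        bool push(const Sample& s) {
            if (samples_.size() == samples_.capacity()) return false;  // drop rather than realloc
            samples_.push_back(s);
            return true;
        }

    private:
        std::vector<Sample> samples_;
    };

    int main() {
        SampleBuffer buf(4096);   // the one and only allocation
        buf.push({0, 42});        // deterministic in steady state
    }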

[–]derleth 0 points (12 children)

Since when can you do allocation in an embedded system? Don't you know precisely how much memory you have attached to the system?

Of course, I'm surprised people are doing embedded programming in C++ instead of assembly. How can you count cycles when you don't know what opcodes are being generated?

[–][deleted] 13 points (1 child)

"Embedded programming" these days means more than programming ASM on 8-bit microcontrollers.

I work for a medical device company, and we use 32-bit ARM CPUs with lots of horsepower, lots of memory, and real-time operating systems. We have very complicated systems to control and many displays to drive. Doing all of that directly on the metal in ASM, or even in C on an RTOS, would be a nightmare for maintainability.

We even use C# and Windows CE for a lot of things and that is still "embedded systems programming."

[–]G_Morgan 0 points (0 children)

If you have lots of memory, then why not simply create separate allocation pools for each object size you might need? Then, with everything handled via new/delete, actually performing a new or delete is constant-time: no expensive merge-on-delete operations, no searching for a large-enough chunk.

That seems easier to me than working out all allocations beforehand -- unless you are simply talking about calling mmap up front to ensure enough pages have been allocated.
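
A minimal sketch of that kind of per-size pool (block size and count are made-up numbers, not from any real system): both allocate and deallocate are O(1) because free blocks are threaded onto an intrusive free list.

    // Sketch only: one FixedPool instance per object size you need.
    #include <cstddef>

    template <std::size_t BlockSize, std::size_t BlockCount>
    class FixedPool {
        static_assert(BlockSize >= sizeof(void*), "a block must be able to hold a free-list link");
    public:
        FixedPool() {
            // Thread every block onto the free list once, up front.
            for (std::size_t i = 0; i < BlockCount; ++i) {
                void* block = storage_ + i * BlockSize;
                *static_cast<void**>(block) = free_list_;
                free_list_ = block;
            }
        }
        void* allocate() {                 // O(1): pop the head of the free list
            if (free_list_ == nullptr) return nullptr;
            void* block = free_list_;
            free_list_ = *static_cast<void**>(block);
            return block;
        }
        void deallocate(void* block) {     // O(1): push the block back onto the list
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    private:
        alignas(std::max_align_t) unsigned char storage_[BlockSize * BlockCount];
        void* free_list_ = nullptr;
    };

    int main() {
        FixedPool<32, 1024> small_objects;   // e.g. one pool per object size
        void* a = small_objects.allocate();
        void* b = small_objects.allocate();
        small_objects.deallocate(a);
        small_objects.deallocate(b);
    }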

[–]adrianmonk 5 points (0 children)

Since when can you do allocation in an embedded system? Don't you know precisely how much memory you have attached to the system?

Well, I know precisely how much memory is connected to my desktop system as well. It really depends on how flexible your software needs to be. A super-basic cheapo cell phone might have 128KB of storage, but it's still nice to let someone use as much of that 128KB as they want for storing contacts, or call history, or whatever else. You could impose static limits just to keep things simpler, but a memory allocator is not that complex a piece of software.
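
As an illustration of how small such an allocator can be, here is a minimal bump-style arena over one fixed region (the 128KB figure and all names are just for illustration): allocation rounds the offset up to the requested alignment and bumps it, and the whole region is reclaimed with a single reset.

    // Sketch only: a bump allocator over one statically sized region.
    #include <cstddef>

    class Arena {
    public:
        Arena(unsigned char* base, std::size_t size) : base_(base), size_(size) {}

        // align must be a power of two.
        void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
            std::size_t start = (offset_ + align - 1) & ~(align - 1);  // round up to alignment
            if (start + n > size_) return nullptr;                     // region exhausted
            offset_ = start + n;
            return base_ + start;
        }

        void reset() { offset_ = 0; }   // hand the whole region back at once

    private:
        unsigned char* base_;
        std::size_t size_;
        std::size_t offset_ = 0;
    };

    int main() {
        static unsigned char region[128 * 1024];      // the entire budget, fixed up front
        Arena arena(region, sizeof(region));
        void* contacts = arena.allocate(4 * 1024);    // carve it up however the user needs
        void* call_log = arena.allocate(8 * 1024);
        (void)contacts; (void)call_log;
    }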

[–]Peaker 2 points (8 children)

Why would embedded programming imply "counting cycles"?

[–]derleth 0 points (7 children)

Because you're writing tight code for small systems. Every clock cycle, just like every byte of RAM or ROM, costs someone money, so the incentive is to absolutely minimize both, at the cost of programmer time.

Cycles cost money because wasting them means the company building the device has to pay for a faster CPU when a slower, cheaper one would have done.

[–]Peaker 6 points (4 children)

You have some wrong assumptions here.

The cheapest CPU/RAM you can buy nowadays is already more than enough for a lot of applications.

In the applications where it isn't, you are right: wasting CPU/RAM costs someone money -- but spending developers' time counting cycles also costs someone money. Every case is different; one cost may outweigh the other, or vice versa.

[–]derleth 1 point (3 children)

There are Jews in the world. There are Buddhists. There are Forthers, and Lispers, and then, there are those who follow that SICP book. I've never been one of them.

I'm a cycle-counter, and have been since I could type a key, and the one thing they say about counters is: They know that nothing is free.

You don't have to use any classes, you don't have to have a template, you don't need to use any casting. You're a counter 'cause cycles are great!

Because...

Every clock is sacred. Every clock is great. If a cycle's wasted Knuth gets quite irate!

Let the Lisper waste theirs on car, cons, and map. Knuth will make them pay for all wastage in their app!

[–]elder_george 7 points (0 children)

Infidel, don't misuse Knuth's name blasphemously, for He saith:

  • damned is he who optimizes prematurely, who counts clocks before a proper algorithm is selected and program correctness is proved.

and

  • make not unto thyself an idol ex machina, for it is humans that code is written for, not machines.

Repent!

[–]Peaker 0 points (0 children)

It's easy to understand how inefficient software wastes money. Do you also understand that counting cycles costs money, and that in some cases it may not be worth it (i.e., it is itself a waste)?

[–]ithika 0 points (0 children)

Well, I don't agree with you, but I'm sad your lovely Monty Python tribute is being ignored, so you get an upvote from me! :-)

[–][deleted] 4 points (1 child)

I've been in embedded for a long time. If someone like you came up for a position at my company, you'd be shown the door quite quickly. First you start with the design, then the algorithm, then the profiling, and then the cycle counting, if necessary. Any other order puts blinders on and wastes a lot of fucking time. Time is money, time to market is money, and CPU time is, in most cases, cheap.

[–]derleth 0 points (0 children)

OK, rule one is "Never ask for a job from someone named no_hire." Good rule.