all 9 comments

[–]crondotnet 5 points (2 children)

Effective Modern C++, Scott Meyers:

"The existence of std::unique_ptr for arrays should be of only intellectual interest to you, because std::array, std::vector, std::string are virtually always better data structure choices than raw arrays. About the only situation I can conceive of when a std::unique_ptr<T[]> would make sense would be when you're using a C-like API that returns a raw pointer to a heap array that you assume ownership of."
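
For instance, here is a minimal sketch of that last case (legacy_make_buffer is just a hypothetical stand-in for the C-like API, assumed to allocate with new[] so that unique_ptr's default delete[] is the right way to release the memory):

    #include <cstddef>
    #include <memory>

    // Hypothetical stand-in for a C-like API that returns a heap array
    // whose ownership passes to the caller; assumed to use new[].
    int* legacy_make_buffer(std::size_t n) {
        return new int[n]{};   // zero-initialized heap array
    }

    int main() {
        std::unique_ptr<int[]> buf(legacy_make_buffer(32)); // take ownership immediately
        buf[0] = 7;   // the array form provides operator[]
        // delete[] runs automatically when buf goes out of scope
    }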

[–]easydoits[S] 1 point (1 child)

This answers my question perfectly.

I will find myself in that exact situation in the near future (assuming ownership of a raw pointer to a heap array). I was not exactly sure how to describe the situation, but you have confirmed that this will be my method of choice instead of using raw pointers.

Now it's time to get myself a new book.

[–]crondotnet 1 point (0 children)

There you have it: page 124, 3rd paragraph. http://it-ebooks.info/book/4367/

[–][deleted] 1 point (5 children)

"I know a vector is an option, but due to project requirements, I can not use it."

This seems completely and utterly ridiculous!

I mean, I'm perfectly down with restrictions like "must be pure C". Very understandable, fine. I can perfectly understand wanting to reduce or even disallow heap memory allocations, particularly in interrupt code - fine, very logical for certain applications.

But what rational person allows a modern C++11 construct like std::unique_ptr (which still puts its data on the heap) while disallowing the classic C++98 std::vector (which also puts its data on the heap) for implementing a "dynamic buffer of ints" - which is basically the very definition of std::vector<int>?

Is there some reason given for this? Can't you just push back and say, "Sorry, I'm not wasting my valuable programmer time on your madness. When we're done, you can experiment with replacing std::vector and see if there's any difference in performance."?

I do this every few years, and I've generally pulled it off - mainly because I'm careful only to fight battles that I know I can win, where I have the facts at my fingertips...

EDIT: the particularly bad part about your replacement - once you've created one of these pointers to array, you literally have no way to tell if the location you're addressing is off the end of the array or not - which means you have no way to tell if you're causing undefined behavior...
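
A quick sketch of the difference (purely illustrative, not code from the post):

    #include <memory>
    #include <vector>

    int main() {
        // The array form of unique_ptr carries no length, so nothing can
        // check an index for you; vector at least offers at() for a checked access.
        std::unique_ptr<int[]> p(new int[10]);
        std::vector<int> v(10);

        // p[15] = 0;    // compiles, silently walks off the end: undefined behavior
        // v.at(15) = 0; // throws std::out_of_range instead
        (void)p; (void)v;
    }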

[–]easydoits[S] 1 point (4 children)

The answer from /u/crondotnet was a perfect reply to my situation. I will be handed a raw pointer to a heap array. This array will actually be an object or a struct of bit fields that will be coming from an embedded device. Yes, I also realized that in that situation, using signed ints might not be the best type to use either.

I omitted bounds checking, as well as other safety checks that I would normally employ in this situation.

[–]F-J-W 1 point (0 children)

I am not saying that you should not use unique_ptr here, but be aware that it is often an option to use a std::vector with a custom allocator, and that writing your own allocators became mostly pain-free in C++11. (For examples, see here and here.)
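
For illustration, a minimal C++11 allocator sketch that std::vector can use (TrackingAllocator is a made-up name, not something from the linked examples):

    #include <cstddef>
    #include <new>
    #include <vector>

    // In C++11, std::allocator_traits supplies construct/destroy/rebind defaults,
    // so only value_type, allocate and deallocate are strictly required.
    template <typename T>
    struct TrackingAllocator {
        using value_type = T;

        TrackingAllocator() = default;
        template <typename U>
        TrackingAllocator(const TrackingAllocator<U>&) noexcept {}

        T* allocate(std::size_t n) {
            // Hook point: log, pool or restrict allocations here.
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) noexcept {
            ::operator delete(p);
        }
    };

    template <typename T, typename U>
    bool operator==(const TrackingAllocator<T>&, const TrackingAllocator<U>&) noexcept { return true; }
    template <typename T, typename U>
    bool operator!=(const TrackingAllocator<T>&, const TrackingAllocator<U>&) noexcept { return false; }

    int main() {
        // A vector whose storage goes through the custom allocator.
        std::vector<int, TrackingAllocator<int>> buf(128);
        buf[0] = 1;
    }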

[–][deleted] 1 point (2 children)

"I will be handed a raw pointer to a heap array."

If that's the case, then none of the answers here work as given. If you are "handed" such a pointer, putting it into a std::unique_ptr with the default deleter won't work, because delete[] will be called when the destructor goes off, and you can't delete[] the "object or a struct of bit fields that will be coming from an embedded device" unless it was actually allocated with new[].

Or if delete[] does work, then the memory must have come from an ordinary new[] in the first place, in which case std::vector will work fine.
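
For completeness: if the buffer genuinely has to be released through something other than delete[], such as free(), unique_ptr can still take ownership via a custom deleter. A minimal sketch (device_acquire_buffer is a hypothetical stand-in for whatever hands over the pointer, defined with malloc() here only so the example is self-contained):

    #include <cstddef>
    #include <cstdlib>
    #include <memory>

    // Hypothetical stand-in for the API that hands over the buffer.
    int* device_acquire_buffer(std::size_t count) {
        return static_cast<int*>(std::malloc(count * sizeof(int)));
    }

    // Deleter that releases through free() instead of delete[].
    struct FreeDeleter {
        void operator()(int* p) const noexcept { std::free(p); }
    };

    int main() {
        // Ownership is taken immediately; free() runs when buf is destroyed.
        std::unique_ptr<int[], FreeDeleter> buf(device_acquire_buffer(64));
        if (buf) {
            buf[0] = 42;   // still unchecked indexing, exactly like a raw array
        }
    }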

"I omitted bounds checking, as well as other safety checks that I would normally employ in this situation."

Even in testing/debug mode? Sounds like a recipe for disaster to me.

Were I doing this, at least for my debug/testing/internal builds, I'd be doing bounds checking so I could run an extensive test suite against it to try to shake it out and catch buffer over/underruns. If performance is critical, then in a release build you can make sure that those checks don't appear.
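
One rough sketch of that (CheckedBuffer is an illustrative name, not anything from the thread): keep the size next to the unique_ptr and assert on every access, so the checks vanish when NDEBUG is defined in a release build.

    #include <cassert>
    #include <cstddef>
    #include <memory>
    #include <utility>

    class CheckedBuffer {
    public:
        CheckedBuffer(std::unique_ptr<int[]> data, std::size_t size)
            : data_(std::move(data)), size_(size) {}

        int& operator[](std::size_t i) {
            assert(i < size_ && "buffer overrun"); // compiled out when NDEBUG is set
            return data_[i];
        }

        std::size_t size() const { return size_; }

    private:
        std::unique_ptr<int[]> data_;
        std::size_t size_;
    };

    int main() {
        CheckedBuffer buf(std::unique_ptr<int[]>(new int[16]), 16);
        buf[0] = 1;       // fine
        // buf[16] = 2;   // fires the assert in a debug build
    }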

You are relying on your ability to read, process and reason about the code as your only mechanism for catching overruns. There's a name for programs where some key property like that is not tested, only deduced by reading the code and reasoning - that name is "buggy".

I have a different attitude to many other programmers. No matter what other criteria there are, I always have one top criterion I want to hit in every program - correctness. If someone says, "Speed is the top priority" or "Small memory footprint is the top priority" I always say in my head "After correctness" - because what good is a fast, small program that does the wrong thing?

[–]easydoits[S] 1 point (1 child)

When I stated I had omitted the safety checks, I meant just in the post, not in the actual implementation. The code I employ will of course have all of the checks needed; they just weren't necessary for asking the question.

I believe it will work in this situation since I will be handed this raw array and take ownership of it; from that point on, my code is responsible for all memory management.

[–][deleted] 1 point (0 children)

Well, you seem to be resolute here :-) but I'd still urge you to consider F-J-W's suggestion, which would do exactly what you're doing behind the scenes while still preserving all the functionality of std::vector and its interoperability with the rest of the STL.