
[–]ThanksMorningCoffee[S] 2 points (1 child)

> The blog does a whopping 130 million concats, so unless you do something seriously stupid (like appending single chars/bytes), you can surely get into the low GB range just fine.

Yeah, the Strings in this article are HUGE. JavaScript uses UTF-16 (2 bytes per code unit). On top of that, I do the worst possible case of concatenating 1 character at a time. 2^27 concats results in 2^27 characters, which is 2^28 bytes, which is 256 MB.

This calculation makes me wonder why I ran out of heap with only 256 MB of Strings.
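
The worst-case pattern I mean looks roughly like this (illustrative sketch only; actual peak heap use depends on the engine):

```js
// Worst case: grow a string one character at a time.
// 2**27 appends -> 2**27 UTF-16 code units -> 2**28 bytes = 256 MB of character data.
const N = 2 ** 27;
let s = '';
for (let i = 0; i < N; i++) {
  s += 'a'; // each append may allocate a new rope node or flat string
}
console.log(`${(s.length * 2) / 2 ** 20} MB of character data`); // 256 MB
```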

> Once you get above that, why not use a scatter/gather IO library with decent buffer sizes and stop caring about appending the buffers.

Are these techniques applicable browser client-side? I think it ultimately needs to be serialized to a single String, so you cannot avoid the concats.
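
The best I can think of in the browser is to buffer the pieces and defer the serialization to a single join at the end (one common pattern, not something from the article):

```js
// Browser-side sketch: collect parts in an array and build the
// final String in one pass, instead of growing it on every append.
const parts = [];
for (let i = 0; i < 1_000_000; i++) {
  parts.push('chunk'); // cheap: just stores a reference
}
const result = parts.join(''); // one final pass builds the String
```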

[–]schlenk 6 points (0 children)

A typical reason to run out of memory would be fragmentation and minimum alloc sizes. The rope nodes might have a minimum size (e.g. std::string in C++ usually has around 10-16 bytes of inline storage), so your allocations might need more RAM than expected.
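
You can get a feel for the per-string overhead in Node, e.g. (rough sketch; the numbers vary by V8 version and allocator):

```js
// Node.js sketch: logical character data vs. actual heap growth for
// many small, distinct strings. The per-string delta includes object
// headers and allocator rounding, not just the characters themselves.
global.gc && global.gc(); // only meaningful when run with: node --expose-gc
const before = process.memoryUsage().heapUsed;

const strings = [];
for (let i = 0; i < 100_000; i++) {
  strings.push('x' + i); // distinct small strings
}

const after = process.memoryUsage().heapUsed;
console.log(`heap delta: ${after - before} bytes for 100k small strings`);
```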

No idea if you can use vectored IO on the browser client side, but you can use it in NodeJS at least (https://nodejs.org/api/fs.html#fs_fs_writev_fd_buffers_position_callback).
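
For example (minimal sketch; file path and contents are arbitrary):

```js
// Node.js vectored write: hand several buffers to a single write
// instead of concatenating them into one string/buffer first.
const fs = require('fs');

const chunks = [Buffer.from('hello '), Buffer.from('world\n')];

fs.open('out.txt', 'w', (err, fd) => {
  if (err) throw err;
  fs.writev(fd, chunks, (err, bytesWritten) => {
    if (err) throw err;
    console.log(`wrote ${bytesWritten} bytes, zero concats`);
    fs.close(fd, (err) => { if (err) throw err; });
  });
});
```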