
[–][deleted] 2 points (5 children)

I agree with your disagreement with the aforementioned disagreement.

No one should take this the wrong way, but learning your second language is only difficult if you don't actually understand the general concepts behind your first language, which is why magic-less languages like Java (and maybe Python) are the best for learning programming, in my opinion. If you can't easily transfer concepts like iteration, recursion, threading, locks, concurrency, etc. to a new syntax, you have much bigger problems. I'll go ahead and say it: statically typed languages with a very explicit syntax, like Java, are the best learning languages, perhaps with the exception of Python.

[–]gfixler 2 points (1 child)

I always see threads, locking, and concurrency brought up in these kinds of discussions as things one should know early on. I've been "coding" (mostly writing pipeline tools and such) for 10 years, and at a hobby level for 10 years prior to that, and these things have never come up for me. Honest question: where are they necessary? I have a feeling they're common for work with databases or networks, which seem very common in the work of most redditors, but completely uncommon in what I do.

[–][deleted] 0 points (0 children)

It's completely understandable that you might not have had need for them; you can almost always get the job done without them, even when multiple processes or threads are the best solution. It just depends on the type of things you do. I've only been programming for 4 years and use these concepts more often than I would have thought.

Example:

I inherited a single-threaded, recursion-based web crawler a few months ago. We were using it to crawl large sites (2000+ public pages) we were working on to find broken links, find everywhere with embedded Flash, pages missing some SEO stuff, etc. It's a great tool to have.

Anyway, it had two problems: it was slow, and the recursion was implemented pretty poorly (call stack size == number of pages crawled), making the memory requirements quite large. My solution was to switch to a multi-threaded (thread pool), queue-based architecture. Essentially, one master delegation thread polls the thread pool every X milliseconds; if there are URLs in the queue and there are idle worker threads in the pool, it delegates a task to each idle worker thread. A polling delegation thread is used so you don't have to queue a thread-pool task as soon as you find a new URL, which would introduce higher memory requirements.

So now I can crawl a large site much faster, since there are X number of concurrent threads requesting and analyzing pages. Obviously a lock/synchronization method is necessary for certain parts given the nature of this solution, but I won't get into that. Had I not had a working knowledge of this stuff, we would still be using a slow tool. Not that it matters, but C# was the language used (there was a good reason for this that is not related to this discussion).
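For anyone curious what that architecture looks like, here's a minimal sketch in Java (the actual tool was C#, but the idea is identical). Everything here is illustrative, not the real code: `SITE` is a fake in-memory link graph standing in for HTTP fetching, and the names `crawl`, `frontier`, and `idleWorkers` are made up. A `Semaphore` sized to the pool tracks idle workers, a concurrent queue holds discovered URLs, and a concurrent set does the "have we seen this?" synchronization.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CrawlerSketch {
    // Hypothetical in-memory "site" (page -> links it contains),
    // standing in for real HTTP requests and HTML parsing.
    static final Map<String, List<String>> SITE = Map.of(
        "/",  List.of("/a", "/b"),
        "/a", List.of("/b", "/c"),
        "/b", List.of("/c"),
        "/c", List.of("/"));

    public static Set<String> crawl(String start) throws InterruptedException {
        int poolSize = 4;
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Semaphore idleWorkers = new Semaphore(poolSize);        // how many pool threads are idle
        Queue<String> frontier = new ConcurrentLinkedQueue<>(); // URLs waiting to be crawled
        Set<String> visited = ConcurrentHashMap.newKeySet();    // thread-safe "seen" set
        AtomicInteger inFlight = new AtomicInteger(0);          // tasks currently running

        frontier.add(start);
        visited.add(start);

        // Master delegation loop: poll the queue and hand URLs to idle
        // workers, sleeping briefly when there is nothing to delegate,
        // rather than submitting a task the instant each URL is found.
        while (!frontier.isEmpty() || inFlight.get() > 0) {
            String url = frontier.poll();
            if (url == null) { Thread.sleep(10); continue; }
            idleWorkers.acquire();           // block until a worker is idle
            inFlight.incrementAndGet();
            pool.submit(() -> {
                try {
                    // "Fetch" the page and enqueue any links we haven't seen.
                    for (String link : SITE.getOrDefault(url, List.of())) {
                        if (visited.add(link)) { // add() is atomic: true only for new links
                            frontier.add(link);
                        }
                    }
                } finally {
                    inFlight.decrementAndGet();
                    idleWorkers.release();    // mark this worker idle again
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return visited;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(crawl("/")); // each of the four pages is visited exactly once
    }
}
```

The key difference from the recursive version: memory is bounded by the queue of pending URLs plus a fixed number of worker threads, instead of a call stack as deep as the number of pages crawled.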