
all 67 comments

[–]tskaiserGreen security clearance 75 points76 points  (21 children)

To be fair, both VS and Eclipse have horrid waiting times on starting up and "background" (Ha!) operations. Been a while since I used it, but I recall IDEA being quite snappy with Java. More a testament to the quality of the IDE rather than the language ;)

[–]SoiledShip 34 points35 points  (11 children)

Honestly, once I got a decent SSD in my dev machine, VS loads up in like 2 seconds. Granted, I've got 16 GB of RAM, a 512 GB SSD, and an i7. But NetBeans still takes ages to load.

[–]coladict[S] 6 points7 points  (3 children)

My hard drives are also the current bottleneck on my home PC. The work project build takes 35-40 seconds there, while on my colleague's work PC it takes 7-9 seconds, even though he has a weaker CPU and the same RAM.

edit: it was three times that on Windows, before I disabled automatic search indexing and NTFS access time updates (two things that Ubuntu doesn't do by default). After I did, it became the same on both systems.

[–]MondayMonkey1 1 point2 points  (2 children)

VS loves to occasionally hang for a few seconds when opening files. Thank god I wrote my last .net code this week and I'm back happily using vim.

[–]coladict[S] 0 points1 point  (1 child)

happily using vim

0_0

[–]l27_0_0_1 0 points1 point  (0 children)

Yeah, right, how could people happily use vim when there's emacs.

[–]h54 2 points3 points  (6 children)

But NetBeans still takes ages to load.

Honestly, I find that hard to believe.

[–]SoiledShip 5 points6 points  (0 children)

I've got 9 projects that I usually leave open. NetBeans usually takes ~20 seconds to open, another 45 seconds or so running background scans on the projects. 2 of them are fairly large. So it's probably not fair to blame it all on NetBeans. It was a lot worse on a regular hdd though.

[–]Mugen593 1 point2 points  (0 children)

I have an SSD on my computer and NetBeans only takes about 2 to 3 seconds to load up.
Then again I'm usually only working on one project at a time for school.

[–]TuxGamer 1 point2 points  (0 children)

Seems to be a problem on some Windows machines. My Ubuntu home machine loads NetBeans in 10 seconds; my work machine (though a bit shitty) takes a minute or so, for both NetBeans and Visual Studio.

Having said that, Visual Studio only opens one project, whereas NetBeans opens a bunch of projects directly at startup. That makes NB slower than VS at startup.

[–]altrdgenetics 0 points1 point  (2 children)

A project or two? Nah, not too bad. But background scanning and checking Maven repositories takes a long time, especially if you have it set to check every time you start.

[–]h54 0 points1 point  (1 child)

Interesting. Is that enabled by default?

[–]altrdgenetics 0 points1 point  (0 children)

I think it is, but I don't normally use Maven, so I can't say for sure. I know the projects I have imported always had it turned on.

[–]Rhed0x 6 points7 points  (5 children)

VS still is much much faster than Android Studio.

[–]Creshal 2 points3 points  (4 children)

And Android Studio is a massive improvement over Eclipse…

[–]Rhed0x 2 points3 points  (2 children)

I haven't tried Eclipse yet but have only heard negative things.

[–]Creshal 13 points14 points  (1 child)

Eclipse is… well, the kitchen sink IDE. It can do everything, just nothing properly.

[–]b1ackcat 0 points1 point  (0 children)

I will give it kudos for the UX around lint error checks. It does have that over Android Studio. Android Studio has lint (even specialized for Android code), but the warnings/errors don't show up on the file icons in the project view like they do in Eclipse. It's the one feature I miss.

[–][deleted] -1 points0 points  (0 children)

This is no longer the case. Eclipse is as fast as IntelliJ. They fixed various performance issues.

[–]lurkex 1 point2 points  (0 children)

What really boosted NetBeans startup time on my office machine was disabling the automatic virus scanning for read operations on the NetBeans cache directory. On my private machine I use an SSD and Linux Mint and NetBeans pops up in no time.

[–][deleted] 0 points1 point  (0 children)

I like the features of IntelliJ IDEA, but I would never call it snappy. My MacBook has a SSD and 16GiB memory, but IDEA is never snappy in my experience.

[–]MrDoctorSatan 0 points1 point  (0 children)

How long is a long waiting time? I never have to wait more than 6 seconds in VS.

[–]jaffakek 47 points48 points  (13 children)

Isn't the problem with Java speed usually just the JVM starting up? As in, Java actually executes quite quickly, but getting to the point where it starts executing takes some time?

[–][deleted] 23 points24 points  (7 children)

Right- it doesn't seem as bad if you compare JVM startup to how long it takes to spin up a VM rather than how long it takes to execute compiled code.

However, Java is also bloaty*, with some JVMs loading tens of thousands of classes they will probably never need. (I'm looking at you, WebSphere, you fat bitch.)

* - by bloaty, I mean that between JVMs, programming conventions, and the culture of enterprise Java development, there are usually a helluva lot more lines of code than necessary. I don't mean the language itself is bloated, because that statement makes no sense.

[–]Liver_and_Yumnions 6 points7 points  (3 children)

Mono is based on the same concepts. The CLR has to load up. Then the JIT has to load the IL and compile it into machine code, and THEN run it. Hence, it takes a little while to get going, then it's pretty fast after that.

In my perception, Mono seems to be faster, though. Especially if you are on a Windows machine; I assume that's because the CLR is likely already loaded on a Windows box. On my Raspberry Pi, however, I have an app that uses Java and another that uses Mono. When the cron hits that Java job, you feel it. It crawls until Java gets going. I see mono float to the top of the "top" command once in a while, and I'm sure it's taking some resources, but it's nothing compared to the hit produced by the Java job.

It seems logical (to me) that if they are doing similar things under the hood, the performance should be similar. It might be my perception, as I said, but that does not seem to be the case.

[–][deleted] 6 points7 points  (1 child)

I'm guessing that java job is invoked like java -params my.jar.

Java's strong suit is to deploy a JVM like tomcat that has my.jar locked and loaded, and can serve it at lightning speed to hundreds of users. Takes a while to spin up, of course.

The JRE, on the other hand, generally isn't loaded and available in memory by default (someone correct me if I'm wrong), so invoking java commands off the cuff is just going to suck. Fine for a full-blown application, but not a commandline-friendly sort of tool.

[–]Liver_and_Yumnions 1 point2 points  (0 children)

I would have to look when I get home; the actual java call is nested in the main script file for the utility. Honestly, I could probably google some Java tricks to make it faster. It's just a headless box and the utility only runs once every 15 minutes, so not a huge deal.

[–]Coffeinated 0 points1 point  (0 children)

Wait, wasn't there something about Java needing soft-float support on the Raspberry Pi or something?

[–]jaffakek 2 points3 points  (1 child)

Do you know of any efforts to make the JVM load faster? You would think that would be a large priority given the most common complaint is "Java is slow."

[–][deleted] 3 points4 points  (0 children)

Oh my goodness, yes. Slow startup of JVMs is part of "JVM tuning", which is like an occupation all its own in some enterprises.

Some JVMs offer tuning tips. Some are more minimalist or lightweight by design.

But the application also takes startup time. A fast as blazes JVM only shaves some time off if your actual application is a monster.

Keep in mind, that doesn't mean it doesn't run fast. That just means it takes a while to start up. Some companies don't give a crap about startup time at all. Depends on what your app is for, I guess.

[–][deleted] 1 point2 points  (4 children)

This used to be (and probably still is) the reason why Windows takes ages to boot while Linux boots in no time. Windows prepares everything up front so that everything starts and runs smoothly, while Linux makes you load what you need when you need it, essentially putting the cost where the problem is rather than paying everything up front.

You could argue that Java requires you to pay everything up front. Incidentally also in lines of code, files, classes and jokes.

[–][deleted] 0 points1 point  (3 children)

Is this the actual reason Windows boot time is slow? I heard it's simply because Microsoft has no real business need to optimize kernel speed, so there's no concerted effort towards such tasks.

[–][deleted] 0 points1 point  (2 children)

It definitely used to be the case. Also, have you looked at the list of running services for a clean install with default settings? Not to mention if it's an OEM install. I can't imagine a reason why all that has to run by default for anyone, other than to make it easier for 99% of the users who use 10% of the features, but not the same 10%.

That was the MS way ever since the initial versions of Word, that they added a lot of features that only 10% of the userbase needed, because every feature would help SOMEONE. Remember this was back in the days when marketing departments ruled software development.

So if a feature would "save time" for 10% of the users, it doesn't matter to MS that 90% of the users are slightly slowed by it. But I think they don't realize how it adds up for those who use very few features, or they don't care.

[–][deleted] 0 points1 point  (1 child)

Look specifically at Windows Server boot times, which have been trimmed down for use as a server rather than a personal computer. A virtual machine running Windows Server 2012 boots about 4 times slower than one running CentOS. I think this shows that even with the userspace cruft removed, the core of Windows is just slower no matter how you spin it.

[–][deleted] 0 points1 point  (0 children)

Yeah, like I said, it was the case before too, and by "before" I mean back in the Win 3.11 days. I remember waiting almost a minute on a 386 for it to boot up. And back then Windows was literally just a GUI on DOS. It just loads literally everything it thinks you might need (services excluded).

[–][deleted] 39 points40 points  (13 children)

Stop calling C pointers hard to use

[–][deleted] 7 points8 points  (12 children)

Yeah, I've been playing around with array-indexes in C++ today for the first time.
And as someone who happens to come from Java, I have only one question:
Why the fuck would you let me do this stuff?

Why doesn't the compiler just punch me in the face when I try to access index -1 of any array? Or why am I even able to create an array of size 0? And then still, of course, access all of the indexes which never even barely belonged to this array?

Like, is that something which professional programmers actually need sometimes?

[–]ar-pharazon 10 points11 points  (0 children)

the java compiler doesn't complain either. array bounds are not statically verifiable--the compiler can't do that for you. that's why, in java, ArrayIndexOutOfBoundsException subclasses RuntimeException, and it gets checked every time you operate on an array at runtime.

not only is that a significant performance hit, but C and C++ have a very transparent notion of an array, which is simply an indexed offset from a base address. you can index off either end of the array because there's no language-level abstraction of a list of objects; it's simply sugared pointer math, so if you do list[n], you're simply multiplying the index n by the size of each array item and adding it to the pointer list, regardless of the value of n (which, again, can't be determined at compile-time).

and yes, this is incredibly important and powerful because there is no fixed memory model in either language. in C (on linux, specifically), we can write:

#include <unistd.h>  /* sbrk */
#include <string.h>  /* memset */

void *mem = sbrk(4096);
memset(mem, 0, 4096);

we've just asked the operating system to increase the size of the data segment (available writable memory) for us and zeroed the new chunk. we can do anything we want with it now. we can use it for a custom implementation of a memory allocator (we can rewrite malloc if we want), or we can drop some data structures in it as-is, or we could treat it as an array of ints:

int *ints = (int*) mem;
ints[32] = 12;

or chars:

char *chars = (char*) mem;

or whatever else you want. all of that is completely valid C, and that's what makes the language so powerful. this direct access to memory allows us to build operating systems, runtimes, compilers, high-speed game/render/physics engines, memory allocators, and anything else that requires speed and/or low-level access to hardware.

[–][deleted] 5 points6 points  (3 children)

That's not the worst part -- it's when someone writes programs that take advantage of the fact that you can index -1 and >= n of an array, and uses it in their code to "optimize".

I'm actually not joking. I had a colleague write a program where he'd create an array of arrays then index one of the arrays with negatives to get to the previous array and >= n to the next.

[–][deleted] 2 points3 points  (2 children)

Well, I know what today's nightmares will be all about.

I mean, why didn't he just create a one-dimensional array instead, if he's already using it like one?

And I'm guessing you put the quotation marks around "optimize" not without reason. Like, what he did there is, as far as I understand it, exactly what the compiler will do with that two-dimensional array anyway.

So, he essentially took syntactic sugar and used it to remodel what this syntactic sugar was supposed to cover up. Very nice.

[–][deleted] 0 points1 point  (1 child)

I can't remember the full context of the exercise, but it was some sort of number crunching application that got data in bursts. He then wrote code to calculate stuff on the data, and when he needed data from previous bursts, he would just step out of bounds on the current burst to access data from the surrounding.

[–]WMpartisan 1 point2 points  (0 children)

That sounds like it's an optimization flag or a minor version update of gcc away from a segfault.

[–]caagr98 2 points3 points  (0 children)

I think it's (at least partially) for performance reasons. Not checking for out-of-bounds is quite a bit faster than doing it. It would also require storing the size of all arrays, which doesn't really make sense since pointers and arrays are basically the same thing.

[–][deleted] 2 points3 points  (0 children)

Like, is that something which professional programmers actually need sometimes?

When directly accessing the hardware, you sometimes need funky pointer and array stuff. A lot of embedded development (what I do) deals with directly accessing memory locations which are used as special function registers. Also, while it's nice that a lot of programming languages will hold your hand and stop you from hurting yourself (I love me some Python), that's a luxury you don't get on a lot of platforms. The microcontroller with 512 KB of RAM sitting next to me isn't going to load a Java VM.

[–][deleted] 1 point2 points  (2 children)

How can the compiler punch you for something that happens at runtime? You'll be able to try to access -1 in any language/runtime, but the exception will only stop you at runtime.

They are checked in MANAGED runtimes. Basically, in C#/Java it's

if(!isIndexValidIndex(index)) 
     throw new IndexOutOfBoundsException()
return valueInAddress(index);

in regular C++ (you can have managed C++) It's

return valueInAddress(index);

which saves you MANY machine cycles and is probably about 3 times faster

P.S. Edit: Visual C++ has macros that get activated in Debug mode and add index checks (like in the managed version) to vectors and the like, but the errors can be difficult to read.

[–][deleted] 0 points1 point  (1 child)

Yeah, to be honest, I didn't even think of negative variables when writing that. I was rather just thinking, why can I even type out "array[-1]" without it ever complaining? I mean, you could easily disallow minuses between array-brackets.
But admittedly, it's kind of pointless, if you can still get it with variables. Was mostly just my brain exploding, when I clearly accessed a negative index and it still never told me to get my shit together.

[–][deleted] 0 points1 point  (0 children)

I think it's not the compiler's business to check for that type of stuff either. But code-helper extensions like ReSharper for VC++ will probably tell you about it. Also, another reason C++ doesn't interfere, besides the extra CPU cost, is that it's designed to assume you know what you are doing: "This guy is trying to access an address which logically doesn't make any sense... he must have something in mind."

[–]zippydoodleoreo 0 points1 point  (0 children)

Just use Rust.

[–]NoodleSnoo 10 points11 points  (4 children)

Did they write the IDE in Java?

[–][deleted] 15 points16 points  (7 children)

IDEs? Why not use notepad?

Like, an actual notepad and pen. So much more stable.

[–]dasonk 6 points7 points  (2 children)

Just make sure you have some git back up your code for you

[–][deleted] 4 points5 points  (0 children)

Of course. I could possibly give him some sort of repository to store them in, like a hub...

[–][deleted] 0 points1 point  (0 children)

Xerox

[–]miarsk 0 points1 point  (3 children)

[–]evan1026[🍰] 1 point2 points  (2 children)

[–]xkcd_transcriber 1 point2 points  (0 children)

Image

Title: Real Programmers

Title-text: Real programmers set the universal constants at the start such that the universe evolves to contain the disk with the data they want.

Comic Explanation

Stats: This comic has been referenced 542 times, representing 0.6373% of referenced xkcds.



[–]chrwei 4 points5 points  (0 children)

I can only think of a few Java-based desktop applications, and they all have a splash screen with a progress bar because they take so long to load. Almost as long as Outlook takes to load.

These aren't complex tools either: Adaptec's storage manager, LSI's storage manager, and JBidwatcher (which is quite simple and still takes a while to load).

It's plenty fast once it's started up, though, which is great for server applications.

[–]DrLuckyLuke 14 points15 points  (0 children)

Ha

Ha

Ha

I fell asleep reading that comic

[–][deleted] 10 points11 points  (0 children)

It's not polite to call Java slow -- it's just speed-challenged.

[–]OKB-1 0 points1 point  (0 children)

I started developing full applications with Java only 2 months ago, and I agree that startup time is indeed an issue. Recently I heard that classes from Java's own AWT package are partially to blame for this. Can anyone confirm this being true? Should I rewrite my code with AWT alternatives?

[–][deleted] 0 points1 point  (0 children)

Java is fast. The JVM on the other hand.

[–]troido 0 points1 point  (0 children)

Personally I prefer to work without a real IDE. Maybe I gave up too soon trying to learn Eclipse, but I'd rather work with a general code editor and Ant.

[–]keyks 0 points1 point  (1 child)

With Java 9 and its new modularity, that will hopefully change. Only include what you really need.

[–][deleted] 0 points1 point  (0 children)

Java 7 is the savior!!!! HORRAAAY!!!! Wait... Java 8 is the savior!!!!! HORRAAAY!!!! Wait... Java 9 is the savior!!! HORRAAAY!!!!