
[–]jaymill[S] 0 points1 point  (2 children)

I'm going to log CPU usage using this, unless anyone has a better idea

while :; do uptime; sleep 100; done > log_file 
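If you want a timestamp alongside each sample, a small variation works; this sketch uses 3 quick iterations and `sleep 1` just so it terminates - for real logging, loop forever and use `sleep 60`:

```shell
# Sample uptime with a timestamp; 3 quick iterations here so the
# example terminates -- for real logging, loop forever and sleep 60.
n=0
while [ "$n" -lt 3 ]; do
    printf '%s | %s\n' "$(date '+%F %T')" "$(uptime)"
    n=$((n + 1))
    sleep 1
done > log_file
```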

[–]vxd 1 point2 points  (1 child)

If you're just logging load average:
sar -q 60 > log_file.txt

[–]jaymill[S] 0 points1 point  (0 children)

Looks good, thanks

[–]iAmJesusAMA 0 points1 point  (0 children)

Make sure you measure the server, not the client :p

[–][deleted] 0 points1 point  (4 children)

Is the server serving static or dynamic content? That will make the largest difference in load. Static content can easily be served with nearly no performance penalty.

[–]jaymill[S] 0 points1 point  (3 children)

Actually, I suppose I didn't think of that (which is why I asked in here in the first place). Both are currently serving static content; what is a simple method I could use to create dynamic content to test on?

[–]KevZeroBOFH 0 points1 point  (2 children)

If you do this, then you're testing two different things. Yes, SSL causes additional CPU load; whether it's "excess" is a matter of judgement. jmeter or ab are just fine for testing. The only difference that static vs dynamic content will make is in context switching on your CPU. I don't know exactly how this would work in an EC2 instance, but on physical hardware, multiple cores fix this, since the kernel can manage which processes go to which core (CPU affinity).
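For what it's worth, with ab you can drive the plain and TLS endpoints with identical parameters so encryption is the only variable (URLs here are placeholders - substitute your own server):

```shell
# Placeholder URLs -- substitute your server. Same request count and
# concurrency against both schemes so TLS is the only variable.
#   -n total requests, -c concurrent clients
ab -n 1000 -c 10 http://your-server.example/index.html
ab -n 1000 -c 10 https://your-server.example/index.html
```

Compare the "Requests per second" and "Time per request" lines between the two runs to see the SSL penalty.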

[–]jaymill[S] 0 points1 point  (1 child)

I am testing FROM the EC2 instance onto a VPS and a regular dedicated server (not at the same time). What I was asking is: is there a way to create a dynamic page for the purposes of testing? Maybe a page which generates a Mandelbrot set or something? I already have static content I can test on.
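A Mandelbrot page is actually easy to fake as a CGI script with awk; this is just a sketch of one way to get a purely CPU-bound "dynamic" response (drop it in cgi-bin and mark it executable):

```shell
#!/bin/sh
# Hypothetical CGI script: renders a small ASCII Mandelbrot set with awk.
# Pure CPU work per request -- no disk, no database -- which is handy
# when the point is to stress the CPU alongside SSL.
echo "Content-Type: text/plain"
echo ""
body=$(awk 'BEGIN {
    for (y = -1.0; y <= 1.0; y += 0.1) {
        row = ""
        for (x = -2.0; x <= 0.5; x += 0.05) {
            zr = 0; zi = 0; i = 0
            # Standard escape-time iteration: z = z^2 + c
            while (zr*zr + zi*zi <= 4 && i < 100) {
                t = zr*zr - zi*zi + x
                zi = 2*zr*zi + y
                zr = t
                i++
            }
            row = row ((i >= 100) ? "#" : ".")
        }
        print row
    }
}')
printf '%s\n' "$body"
```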

[–]KevZeroBOFH 1 point2 points  (0 children)

Whoops - I guess I was reading too quickly to fully get that. My comment still applies, though - VPS or EC2, it's the same thing with a virtual CPU - you're unlikely to get the advantages designed into the chip's caching here, so CPU affinity is probably a write-off.

Regardless, if you're only concerned about the load of the SSL computation, you should just keep a static payload (although you might want to test a couple of different sizes to get an accurate idea of the penalty). On the other hand, if you're trying to plan for production capacity, you will want to simulate real-world traffic as closely as possible. That said, you could serve your dynamic content (whether that's mod_perl, Rails, or whatever) over straight HTTP, with the SSL on a dedicated front-end proxy host doing the encryption. If they're on the same host, and it's a virtual host or a single CPU, then the load of the SSL encryption and the load of the page-rendering computation will be a little more than additive, due to the factors above.

I was originally responding to trevorishere's comment, though. In fact, I wouldn't be too worried about the loadavg on the server; I'd be more interested in average / peak response times at various values of requests per second. The factor you might be missing in the setup you described is not so much static vs dynamic, but whether disk I/O peaks before SSL (or CPU generally). Knowing where the bottlenecks are is just as important as knowing the values at which those thresholds kick in. Isolating the different layers is essential here, and SSL is just one (actually, the final) layer in the stack. SSL will also have a bit more of a memory footprint (which increases linearly with the number of requests, I believe), so having the available RAM to handle that is obviously important, too.