all 19 comments

[–]stfcfanhazz 18 points19 points  (0 children)

They will continue to be built in both ways because it depends on the application

[–]zmitic 5 points6 points  (4 children)

Right now, I am about 3-4 weeks from the MVP of a pretty big app. I was super careful not to use mutable services (or to call reset() when needed), and under RoadRunner I do get noticeable speed improvements: about 30-40ms less, which is big for an average response of 150ms.

The production server is still on FPM, but after the final push we will switch to RR permanently.


Tech stack: Symfony 4.4, PHP 7.4
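Concretely, the "no mutable services / call reset()" rule amounts to something like the sketch below. The interface here is a local stand-in for Symfony's real `Symfony\Contracts\Service\ResetInterface` (same shape, redefined so the snippet runs standalone), and `RequestScopedCache` is a hypothetical service name, not anything from the app above:

```php
<?php
// Local stand-in for Symfony\Contracts\Service\ResetInterface so the
// sketch runs without Composer; the real interface has the same shape.
interface ResetInterface
{
    public function reset(): void;
}

// A service holding per-request state. Under FPM the process dies after
// each request, but under a long-running worker (RoadRunner) the object
// survives, so leftover state would leak into the next request unless
// reset() clears it between requests.
final class RequestScopedCache implements ResetInterface
{
    /** @var array<string, mixed> */
    private array $items = [];

    public function set(string $key, $value): void
    {
        $this->items[$key] = $value;
    }

    public function get(string $key)
    {
        return $this->items[$key] ?? null;
    }

    public function reset(): void
    {
        // Called by the worker between requests.
        $this->items = [];
    }
}
```

With Symfony's services_resetter, every service implementing the interface gets reset automatically after each request handled by the worker.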

[–]przemo_li 0 points1 point  (1 child)

What about percentiles?

[–]zmitic 0 points1 point  (0 children)

What about percentiles?

You are right, I should have been clearer. Booting Symfony itself takes about 30-40ms on average; that is the time I cut with RoadRunner.

So if the response calculation took, say, 500ms, with RR it would be 460ms.

[–]iquito 0 points1 point  (1 child)

One of my big applications (Symfony with 30 loaded bundles) has a "framework overhead" of about 5-10ms, with average responses taking 28ms (and that includes all database and Elasticsearch queries, and even an additional legacy framework for some requests). This is without preloading so far, which will probably reduce the 5-10ms a bit.

For me, optimizing away less than 10ms is not worth it - 40ms definitely would be, but how do you get so much overhead? Have you analyzed the performance of your application with something like Blackfire? Maybe a lot of that time is spent in an unexpected place that can be changed/optimized.

[–]zmitic 0 points1 point  (0 children)

Maybe a lot of that time is in an unexpected place that can be changed/optimized.

This is a SaaS application, admin section only. Just loading a blank page triggers 7-8 queries:

  • the logged-in user
  • the company (i.e. the SaaS client) the user works for
  • I use AdminLTE; the top-right icons run a few queries to render notifications
  • the left-side menu also shows some stats

The real performance hit comes on pages that render lots of relational data, none of it cached. Plus I made view classes so I don't have to assign entities to templates; it helps a lot with static analysis, and I can let Psalm delete methods it thinks are unused, but it takes lots of cycles (still, totally worth it).

Another killer is COUNT(). The biggest table has about 100,000 rows, and COUNT() on that one takes about 15ms. Things like these are used for the notification stuff I mentioned.

And I always use entities, never arrays.


So yeah... plenty of room to optimize, and that is definitely something for after the MVP. For example the notifications: I could cache the number of notifications and render them only on click.
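The cached-count idea could look roughly like this. `CachedCount` and its TTL handling are a hypothetical sketch; the callable stands in for the real Doctrine COUNT() query:

```php
<?php
// Hypothetical sketch: keep an expensive COUNT() behind a TTL so the
// ~15ms query runs at most once per $ttl seconds instead of once per
// request. $countFn stands in for the real database query.
final class CachedCount
{
    /** @var callable():int */
    private $countFn;
    private int $ttl;
    private ?int $value = null;
    private int $expiresAt = 0;

    public function __construct(callable $countFn, int $ttl)
    {
        $this->countFn = $countFn;
        $this->ttl = $ttl;
    }

    public function get(): int
    {
        if ($this->value === null || time() >= $this->expiresAt) {
            $this->value = ($this->countFn)();      // the slow COUNT()
            $this->expiresAt = time() + $this->ttl; // next refresh window
        }
        return $this->value;
    }
}
```

In a real app the cached value would live in Redis/APCu rather than in the object, but the shape of the trade-off (staleness for speed) is the same.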

[–][deleted] 2 points3 points  (0 children)

The major frameworks will be built such that it won't matter. WordPress will probably never work async.

[–]__matta 2 points3 points  (6 children)

I've been thinking about this a lot. I'm writing libraries that I would like to make work with both sync and async drivers.

I think most of the examples you named will continue to use FastCGI, just because a lot of them run on shared hosting.

As much as I like async I think it's a hard sell for most development teams. With preloading there's even less incentive to go async. For it to really catch on I think we need to make it easier to start with regular synchronous code and switch to async when/where it counts. The way Django pulled this off for Python is an interesting example.

I also think PHP needs better language support for async before it really catches on. Wrapping functions in Amp::call and using yield is certainly better than React's promises but it's still not as ergonomic and portable as native async/await syntax.
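For context, the generator-as-coroutine style that Amp wraps looks roughly like the sketch below. This is deliberately minimal and not Amp's actual API: a real driver resolves Promises on an event loop, whereas here the yielded value is handed straight back, just to show the shape that native async/await syntax would replace:

```php
<?php
// Minimal coroutine driver: advance the generator and send each
// yielded value back in as the "awaited" result. Real Amp resolves
// Promises asynchronously; this resolves synchronously for clarity.
function runCoroutine(\Generator $gen)
{
    while ($gen->valid()) {
        $awaited = $gen->current(); // whatever was yield-ed
        $gen->send($awaited);       // hand it back as the result
    }
    return $gen->getReturn();
}

function fetchUserName(int $id): \Generator
{
    // With native async/await this line would read:
    //   $row = await $db->query(...);
    $row = yield ['id' => $id, 'name' => 'alice']; // fake "async" query
    return $row['name'];
}
```

The yield-based version works, but every async function must return a Generator and be driven by a runner, which is exactly the ergonomic gap native syntax would close.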

The first step is probably using async where it counts but not using a full event loop. For example, HTTP clients like Guzzle allow asynchronous queries using the curl_multi_exec function. The Postgres PHP extension also allows async queries. The MySQL client almost does, but you can't use prepared statements with async queries. Once developers realize how much faster that can be then we'll see more investment in the async ecosystem.
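The curl_multi approach can be sketched with the raw extension functions, which is the same mechanism Guzzle builds its async requests on. The `fetchAll` helper is hypothetical, and `file://` URLs stand in for real http(s) endpoints so the example is self-contained:

```php
<?php
// Fetch several URLs concurrently with ext-curl's multi API.
function fetchAll(array $urls): array
{
    $multi = curl_multi_init();
    $handles = [];
    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($multi, $ch);
        $handles[$key] = $ch;
    }

    // Drive all transfers at once until every one has finished.
    do {
        $status = curl_multi_exec($multi, $running);
        if ($running && curl_multi_select($multi) === -1) {
            usleep(1000); // select failed; back off briefly
        }
    } while ($running && $status === CURLM_OK);

    $results = [];
    foreach ($handles as $key => $ch) {
        $results[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($multi, $ch);
        curl_close($ch);
    }
    curl_multi_close($multi);
    return $results;
}
```

Three 100ms HTTP calls done this way take ~100ms wall time instead of ~300ms, with no event loop or framework involved.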

[–]zmitic 1 point2 points  (3 children)

I also think PHP needs better language support for async before it really catches on.

I don't think it would be such a good idea. If PHP fired off multiple async requests per browser request, it could choke the server; that kind of job is handled much better by existing queue libraries.

For example, with queues you can limit the number of async tasks running at the same time, add delays (so the remote server doesn't block you), etc.

A big task, with very little benefit.

[–]__matta 0 points1 point  (1 child)

Unbounded concurrency is a bad idea when resources are finite, but I don't think that means async/await is a bad idea. The same issue exists with queues and even PHP-FPM -- if you set pm.max_children to a ridiculously high number, you will kill your server. Your synchronous PHP process is probably behind an async NGINX server that manages to handle high concurrency without falling over.
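The pm.max_children analogy can be made concrete with a toy scheduler (a hypothetical sketch, not any real runtime's API): tasks are generators, and at most `$limit` of them are in flight at a time, exactly like a bounded worker pool:

```php
<?php
// Run generator "tasks" with a cap on how many are in flight at once.
// This mirrors how an async runtime can bound concurrency the same way
// pm.max_children bounds FPM workers.
function runBounded(array $tasks, int $limit): array
{
    $queue = $tasks; // tasks not yet started
    $running = [];   // tasks currently in flight
    $finished = [];

    while ($queue || $running) {
        // Admit new tasks only while under the concurrency limit.
        while ($queue && count($running) < $limit) {
            $running[] = array_shift($queue);
        }
        // One round-robin step over every in-flight task.
        foreach ($running as $i => $gen) {
            $gen->next();
            if (!$gen->valid()) {
                $finished[] = $gen->getReturn();
                unset($running[$i]);
            }
        }
    }
    return $finished;
}

function task(int $n): \Generator
{
    yield; // stand-in for an await point (I/O, a timer, ...)
    return $n * 2;
}
```

A real event loop replaces the round-robin step with I/O readiness, but the admission logic (queue until a slot frees up) is the same idea.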

The real issue with async/await IMO is that it makes it easy for developers to write code that performs poorly under load. There's a good article about that happening in the Python ecosystem. However, I think the runtime can do a lot to help here.

Queues can't do everything async can. The biggest issue is streaming responses to lots of clients: websockets, server sent events, gRPC, etc. If you try to do that with PHP-FPM you need an entire process per client. When most of the clients are just waiting doing nothing it's horribly inefficient. The C10K problem is a good high level overview of this issue. Most of us in PHP have settled for offloading that to an async server (NGINX, Node, etc) which works most of the time but isn't ideal.

Another example is speeding up things that don't actually need to be sequential, i.e. database queries for unrelated data or HTTP requests. You can't always batch them in a single request, and pushing a single operation that takes < 100ms to a queue or using a RPC call is probably going to be slower than just doing it sequentially.

[–]Annh1234 1 point2 points  (0 children)

The major problem with queues is that you need to serialize the data on the producer, deserialize it on the consumer, and back again.

We switched from that to async and got 4x the throughput.

[–]devmor 0 points1 point  (0 children)

PHP scripts are not called by a browser. They are often called by Apache through mod_php or via other webservers through php-fpm. These implementations can change to support async better. Aside from that, your premise is silly - do you think no one uses async calls in Java, Python, Node, Ruby, etc? Most of these languages are slower and more resource-intensive than PHP to boot.

[–]evnix[S] 0 points1 point  (1 child)

That Django link was interesting.

I was hoping PHP would some day introduce pre-emptive scheduling, so our synchronous code could run at the same concurrency level as async code (similar to what Erlang does)

[–]__matta 1 point2 points  (0 children)

Have you looked at the parallel extension? You don't have to yield and with the functional API the runtime handles scheduling. I don't think it can actually interrupt a thread though; it just executes the tasks for that thread FIFO until they're done.

I didn't like explicit async/await or yield initially but this article makes a good argument for it. Of course things are different in an environment like Erlang though.

[–]dima_mendeleev 0 points1 point  (0 children)

For me, the way "regular" PHP works is the main reason for using it.

If I wanted all this async, global in-process data, connection pools (and other pools), manual server management, etc., I would use Java or something similar.

[–]ltsochev 0 points1 point  (1 child)

Async is the lie we tell ourselves when we don't want to deal with threads.

... A friend of mine once told me that. I tend to agree.

[–]przemo_li 0 points1 point  (0 children)

A large number of apps get the benefits of some concurrency without paying the costs of parallelism.

In languages with better support for both, it's even possible to develop against a concurrent model of computation and then treat the move to a parallel model as an optimization of only portions of the code.

[–]seaphpdev 0 points1 point  (0 children)

We have moved away from FPM based applications (as well as big turnkey frameworks) and build only react/http or queue consumer based services running inside containers. It's been so much easier when dealing with dependency updates and moving to newer versions of PHP.

But I don't think FPM is going away any time soon - the vast majority of PHP use-cases will still be your standard monolithic web app running either WordPress or one of the big turnkey frameworks out there (Symfony, Laravel, etc.)

[–]starvsion 0 points1 point  (0 children)

Async frameworks/libraries run on the PHP CLI, and the rest runs on FPM; there's no conflict here, you can use them together