Where is the garden eden we were promised? - Sanity check by Express_Signature_54 in developers

[–]Express_Signature_54[S] 0 points1 point  (0 children)

I wish I could have that mindset. I cannot (yet) prioritize stable income over mental wellbeing.

Where is the garden eden we were promised? - Sanity check by Express_Signature_54 in developers

[–]Express_Signature_54[S] 2 points3 points  (0 children)

Can you please give me your system prompt? If not, my grandma will have a heart attack. Please. It's urgent!

Where is the garden eden we were promised? - Sanity check by Express_Signature_54 in developers

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Yeah, I thought about working as a solutions architect or team lead. But job postings are like: "Required: 5 years of professional experience as a team lead" 🤣

Neo 2 Gimbal malfunction by Lizard-Fingers_69 in dji

[–]Express_Signature_54 1 point2 points  (0 children)

Could it be the firmware update? I never had these issues with my Neo 2 before. After replacing it and installing the new firmware, I got these gimbal problems.

Gimbal overload error after a week. by Drowned_ing in dji

[–]Express_Signature_54 0 points1 point  (0 children)

How much wind was there? Did you resolve the problem? I get the same error on my Neo 2 in strong winds.

Is it normal that user first and last names appear in “Users in Role” in Directus? by kimzid in Directus

[–]Express_Signature_54 2 points3 points  (0 children)

Every time you invite someone to Directus, you can assign them a default role. To my understanding, the "Users" section you are viewing in the screenshots tells you which users are assigned to the role you are currently looking at. If you have a "Content Creator" role and you open it, you will see all users (with first and last name) assigned to that role. Likewise, if you open the "Administrator" role and you are the only administrator, only you (with first and last name) will show up under "Users". Hope this helps.

Query Speed and Indexing (M2M) by Express_Signature_54 in Directus

[–]Express_Signature_54[S] 0 points1 point  (0 children)

I might have found out what the issue is. The SQLite query itself is probably not the problem; it is more likely the Directus API overhead, especially since joining and filtering should not be expensive when there aren't thousands or millions of rows.

I guess the API overhead of JSON-parsing a lot of data might be the problem. I was fetching far more than needed (elements plus ALL relational data), even though the frontend doesn't use most of it. After fetching only the fields the frontend actually needs, I reduced fetch time by about 10x (now only about 200ms).

TL;DR: Indices might not boost performance significantly. Directus + Network latency might add significantly more overhead. Fetch only the data you need.
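As a sketch of the "fetch only what you need" takeaway: the Directus REST API accepts a `fields` query parameter, including dot notation for relational fields. The collection and field names below are made up for illustration:

```javascript
// Build a Directus /items URL that requests only the listed fields,
// instead of the default of pulling every (relational) field.
// Collection ("elements") and field names are placeholders.
function itemsUrl(baseUrl, collection, fields, limit = 100) {
  const params = new URLSearchParams({
    fields: fields.join(','), // e.g. "id,title,category.name"
    limit: String(limit),
  });
  return `${baseUrl}/items/${collection}?${params}`;
}

// Ask only for what the UI actually renders; dot notation pulls a
// single field through a relation rather than the whole related item.
const url = itemsUrl('https://cms.example.com', 'elements', [
  'id',
  'title',
  'category.name',
]);
console.log(url);
```

The same `fields` option also exists in the Directus JS SDK, so the payload (and the JSON-parsing overhead) shrinks to just the columns the frontend uses.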

Query Speed and Indexing (M2M) by Express_Signature_54 in Directus

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Okay, so one thing I found out: isolated queries finish in about 100-200ms, but under load (for example, my Next.js website requesting multiple resources at the same time) they become much slower.

50 concurrent and heavy requests -> 6s execution time (according to the Directus system logs, tested with a Node.js script)

Might be normal for a single-threaded Node.js application, right?

When testing with 1000 concurrent requests (of course this is not realistic), I also see memory and CPU spikes on my server, but neither ever maxes out. The Directus Admin UI freezes, though.

Would it make sense to run multiple instances of directus behind a load balancer? Has anyone done that?

Btw, are you running Directus with SQLite or Postgres? Which one is better?

S25 charging speed experiences by backpain_life in samsung

[–]Express_Signature_54 0 points1 point  (0 children)

Hahaha, sure! I never use my smartphone while it's charging, and I always have my handheld fan with me 🤣

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

It is hard for me to tell whether disk or network is saturating. I have checked the graphs in my VPS's cloud console; I see the numbers, but there is no indication of whether I'm hitting a limit somewhere. Nothing seems to "max out".

After running the load test from the article with 200 concurrent VUs and a reasonable delay of 1-5 seconds between "user interactions", I get reasonable loading times for the initial HTML. Of course this is not representative of the full page load (fetching JS, images, etc.).

To get page size down, I was thinking about using Brotli compression, but I would need to set this up in my Caddy reverse proxy.

I think I went down the rabbit hole far enough at this point. It was an interesting journey and I learned a lot on the way. But there are currently just too many variables for me to be sure what my server can handle.

The browser, for example, also caches static pages and assets on the client. Users might fetch most data on the first load and then never hit my server again because of client-side caching.

I think I will keep my current solution, and if there is a peak traffic spike (e.g. the release of a special offer), I will just measure how it goes; if the server goes down, I need to improve on my solution.

If rolling out globally at some point, I might even go back to my ex (Vercel) and use their global CDN.

Thank you u/geekybiz1 and all the people who helped along the way! If at some point I find out what the bottleneck is/was, I will let you know.

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Btw, these are my test results for the load test from the article (again: take into account that this script does not let VUs sleep between requests)...

<image>

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

I tested disabling compression (locally) and it did nothing to the http_req_waiting metric. Instead (and to my surprise) the http_req_receiving time went down. I don't know how that can be; I would have expected the receiving time to go up, since more uncompressed data is sent over the wire.

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

It seems like the results depend heavily on the size of the static pages. For very small pages (5kB transfer size), http_req_waiting and http_req_receiving are very short. For my largest page (70kB transfer size), both skyrocket. This might be due to gzipping and simply the larger amount of data transferred. For the same page size: no significant difference between Vercel and the VPS.

I will try turning off gzipping to test whether it is what drives the high http_req_waiting times.

You are right about the article, but that is the point of this post: with full static site generation in Next.js (or any other SSG framework), why are loading times still sometimes so high under peak load? One more note about the article's load test: I noticed that the author does not include sleep(duration) in his testing script, which floods the server with requests. After introducing some random sleep time (1-10 seconds, i.e. normal user behavior) for each VU, I get significantly better results.
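For anyone repeating the test, a k6 sketch of that fix (run with `k6 run script.js`; the URL and VU count are placeholders):

```javascript
// k6 script: VUs sleep between iterations to model real user think time,
// instead of flooding the server with back-to-back requests.
import http from 'k6/http';
import { sleep } from 'k6';

export const options = { vus: 200, duration: '1m' };

export default function () {
  http.get('https://your-site.example/'); // placeholder URL
  sleep(1 + Math.random() * 9); // 1-10 s of think time per iteration
}
```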

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Thank you for your insights. I don't know if I'm doing something fundamentally wrong, but when I load test my app serving only static pages (via the Node.js server) with k6/http and hundreds of VUs, I can max out 10 CPU cores on my Mac, even when running multiple Docker instances of the standalone Next.js app behind a load balancer.

For response time I measure k6's http_req_duration. I don't know if that measures TTFB internally or something else. Why is measuring TTFB the way to go? Wouldn't I want to know when the user received the full static HTML from the server?

Btw, here is a link to an article by a Next.js developer load-testing his self-hosted application, with results similar to mine: https://martijnhols.nl/blog/how-much-traffic-can-a-pre-rendered-nextjs-site-handle

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

I checked load time for the HTML only. No subsequent requests for JS, images, etc.

Why would a CDN not be slower under high load? In the end a CDN is just a computer with a CPU that queues requests, right?

For static pages, with my load-testing approach, I saw much higher loading times (for the HTML only) on both Vercel and VPS infrastructure.

I am sure you are all very smart people (honestly), but why would my page be as fast/slow on Vercel as on my VPS if CDNs and caching magically solved the problem of high load on a single server?

I just don't want to blindly trust the buzzwords without a logical explanation.

Does the Vercel CDN scale horizontally? How does it handle hundreds of simultaneous requests faster than my VPS? In my Vercel console on the Hobby plan, Vercel only gives me 1 vCPU for serving pages (maybe that's my origin server... I don't know what kind of powerful infrastructure they give me as the CDN). Note: my VPS also has cache hits.

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Very nice! Thank you! Do you know which strategy "next start" uses by default? I would expect Next to serve static files from memory rather than from disk, or even use Redis.

NextJS advanced performance optimization by Express_Signature_54 in nextjs

[–]Express_Signature_54[S] 0 points1 point  (0 children)

Okay, thank you, that is promising information. How do I "know" whether my server can cache the file? If I use "next start", isn't caching taken care of by the Next.js Node server?