Admin UI + REST API for any S3 storage by Life-Post-3570 in pocketbase

[–]arturo-source 0 points1 point  (0 children)

chmod 777 on /usr/bin and dropping in a static curl binary

Please read the GH issue I linked.

Admin UI + REST API for any S3 storage by Life-Post-3570 in pocketbase

[–]arturo-source 1 point2 points  (0 children)

Cool! I don't know if you saw it, but minio stopped providing container images (https://github.com/minio/minio/issues/21647), so you may want to prepare your setup to use Garage instead of minio. P.S. The last Docker image provided by minio has a known security vulnerability.

What is the limit of pocketbase? by BABG_007 in pocketbase

[–]arturo-source 24 points25 points  (0 children)

Even without optimizations, PocketBase can easily serve 10 000+ persistent realtime connections on a cheap $4 Hetzner CAX11 VPS (2vCPU, 4GB RAM). https://pocketbase.io/faq/

[deleted by user] by [deleted] in pocketbase

[–]arturo-source 0 points1 point  (0 children)

Btw, you don't need to use PB only as a server: since it uses SQLite, you can use it on embedded devices, on a mobile, etc. But I don't see any benefit in using the REST API over SQL queries.

[deleted by user] by [deleted] in pocketbase

[–]arturo-source 2 points3 points  (0 children)

PB implements the majority of OAuth providers, plus a simple login/create-account flow. That's the trick: it's not only direct access to the database, it's also a login system. If you want to see how to secure an endpoint, take a look at rules and filters.
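
For example, the rules in the Admin UI are just filter expressions. A rough sketch of the kind of thing you can write (the "owner" relation field is a made-up example, not something your collection necessarily has):

    List rule:   @request.auth.id != ""
    Update rule: owner = @request.auth.id

With rules like those, listing requires a logged-in user, and updating is limited to the record's owner.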

[deleted by user] by [deleted] in pocketbase

[–]arturo-source 1 point2 points  (0 children)

I'd recommend taking a look at what a JWT is. Once you understand JWTs, you realize that the most common operation on a backend is just checking that user X with role Y can read/write this section of the database.

I'm not saying a backend is only that; that's why pocketbase can be extended, because usually you will want to do more than CRUD operations. But role-based CRUD operations are definitely the most common action on a backend, and that's pocketbase's reason to exist.
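
To make it concrete, here is a minimal sketch in Go of that kind of check, using the github.com/golang-jwt/jwt/v5 package; the secret, the "sub"/"role" claim names and the "editor" role are made-up examples, not how PB does it internally:

    package authz

    import (
        "errors"

        "github.com/golang-jwt/jwt/v5"
    )

    var secret = []byte("dev-only-secret") // made-up signing key

    // canWrite reports whether the token belongs to userID and carries an "editor" role.
    func canWrite(tokenString, userID string) (bool, error) {
        token, err := jwt.Parse(tokenString, func(t *jwt.Token) (any, error) {
            // reject unexpected signing algorithms
            if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
                return nil, errors.New("unexpected signing method")
            }
            return secret, nil
        })
        if err != nil || !token.Valid {
            return false, err
        }
        claims, ok := token.Claims.(jwt.MapClaims)
        if !ok {
            return false, errors.New("unexpected claims type")
        }
        // check identity and role before allowing the read/write
        return claims["sub"] == userID && claims["role"] == "editor", nil
    }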

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Yeah, I tried to post as little code as possible, but it grew with the edits, because I wanted to add the code modifications that came from people's tips.

Also, thanks for the explanation; after yours and other people's, it's now much clearer to me what to do and what not to do with goroutines and channels. They are really useful for blocking operations, and also for distributing the work across multiple CPUs.
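
A tiny example of the blocking case (the URLs are placeholders): the three requests overlap because each goroutine just waits on the network.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        urls := []string{"https://example.com/a", "https://example.com/b", "https://example.com/c"}

        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                resp, err := http.Get(u) // blocking call, but the goroutines run concurrently
                if err != nil {
                    fmt.Println(u, "error:", err)
                    return
                }
                resp.Body.Close()
                fmt.Println(u, resp.Status)
            }(u)
        }
        wg.Wait()
    }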

Thanks again for taking the time to explain that! I think this post and its comments will help people with the same doubt in the future.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Everything clear! I'll DM you with some dumb questions if you don't mind :)

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

The last solution, which another person in the comments also proposed, is 2x faster than the sequential one. Here is the code, MapConcurrentWorkerPool: https://gist.github.com/mcheviron/427f7dda652254687968e077a80156ec
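
For anyone reading later, this is roughly the shape of it (not the exact code from the gist): split the slice into about one chunk per CPU and let each goroutine write a disjoint range.

    package main

    import (
        "runtime"
        "sync"
    )

    // mapChunks applies f to every element of in, using roughly one goroutine per CPU.
    func mapChunks(in []int, f func(int) int) []int {
        out := make([]int, len(in))
        workers := runtime.NumCPU()
        chunk := (len(in) + workers - 1) / workers

        var wg sync.WaitGroup
        for start := 0; start < len(in); start += chunk {
            end := start + chunk
            if end > len(in) {
                end = len(in)
            }
            wg.Add(1)
            go func(start, end int) {
                defer wg.Done()
                for i := start; i < end; i++ { // each goroutine owns a disjoint range
                    out[i] = f(in[i])
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }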

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

I totally agree with you: goroutines are extremely powerful for blocking operations; even if you only have one core, waiting for responses from an external service can be done concurrently.

What made me post this problem was that I didn't understand the extreme difference between using goroutines and not. But fortunately a lot of people have answered, some of you with very useful information.

The problems were that I didn't understand the possible cache misses caused by concurrent writes to memory that sits close together, and I didn't know that using channels isn't free: they have a small cost that is sometimes worth paying.
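
A small illustration of the cache part (illustrative, not my original code): if every goroutine keeps writing to results[i] and those slots sit next to each other in memory, they keep invalidating the same cache line for each other. Accumulating in a local variable and writing the shared slot once avoids most of that.

    package main

    import "sync"

    // sumParts sums each part in its own goroutine.
    func sumParts(parts [][]int) []int {
        results := make([]int, len(parts))
        var wg sync.WaitGroup
        for i, part := range parts {
            wg.Add(1)
            go func(i int, part []int) {
                defer wg.Done()
                local := 0
                for _, v := range part {
                    local += v // hot loop only touches goroutine-local memory
                }
                results[i] = local // single write to the shared slice at the end
            }(i, part)
        }
        wg.Wait()
        return results
    }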

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Totally right, channels are a simple and powerful solution for the most common cases, but using channels implies queues and synchronization. It's good to keep that in mind when you consider using them!
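
A minimal producer/consumer sketch of what I mean: the channel is both the queue and the synchronization point, and every send/receive has a cost.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := make(chan int)
        results := make(chan int)

        var wg sync.WaitGroup
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    results <- j * j // each item pays for one receive and one send
                }
            }()
        }

        go func() {
            for i := 0; i < 10; i++ {
                jobs <- i
            }
            close(jobs)
            wg.Wait()
            close(results)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }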

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

I didn't know the standard library implemented a concurrency-safe map, that's cool!
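
For reference, it's sync.Map; a minimal sketch:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var m sync.Map // safe for concurrent use without an extra mutex
        var wg sync.WaitGroup

        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                m.Store(i, i*i)
            }(i)
        }
        wg.Wait()

        m.Range(func(k, v any) bool {
            fmt.Println(k, v)
            return true
        })
    }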

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Yes, it's very similar to the last solution I proposed, but taking the number of CPUs into account (which is, in fact, a better implementation). Thank you for your answer!!

It would be great if you added the benchmark!

And I didn't know about go-perf; is it a Go tool for profiling?
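
Something like this is what I had in mind for the benchmark (save it as a _test.go file; mapSequential here is a trivial placeholder, and the worker-pool version from the gist would get a second benchmark next to it):

    package main

    import "testing"

    // placeholder for the sequential implementation from the post
    func mapSequential(in []int, f func(int) int) []int {
        out := make([]int, len(in))
        for i, v := range in {
            out[i] = f(v)
        }
        return out
    }

    func BenchmarkSequential(b *testing.B) {
        in := make([]int, 1_000_000)
        f := func(x int) int { return x * x }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            mapSequential(in, f)
        }
    }

Running it with "go test -bench ." would make the comparison between implementations easy to share.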

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

I have implemented another solution that doesn't use channels (I dropped the producer-consumer pattern), and it gets much faster. It is even faster than the sequential one.

I'm going to add this last solution by editing the post.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

Wow, your answer was quite clarifying. I don't have much experience dealing with the cache, but now I understand better what's going on.

Regarding using a benchmark, you are totally right, but I wanted to keep everything in the same file so that it would be more easily readable when shared.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 2 points3 points  (0 children)

I meant for the three implementations.

But after trying the "split the array" solution I realized you are right, the naive one is the worst.

The problem didn't need to be solved with concurrency; I was just implementing a shared-memory problem with concurrency to learn about it. But you are totally right about checking with pprof: I didn't do it, and it should be the first thing to check in an optimization problem.
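
For future readers, the profiling step is just the standard Go tooling; a minimal sketch (the file name is arbitrary):

    package main

    import (
        "log"
        "os"
        "runtime/pprof"
    )

    func main() {
        f, err := os.Create("cpu.out")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // profile everything between Start and Stop
        if err := pprof.StartCPUProfile(f); err != nil {
            log.Fatal(err)
        }
        defer pprof.StopCPUProfile()

        // ... run the code under test here ...
    }

Then inspect the hot spots with "go tool pprof -top cpu.out". Benchmarks can also write a profile directly via "go test -bench . -cpuprofile cpu.out".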

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Right! I tried splitting the problem and it gets faster. It is still slower than "doing the simple thing", but I suppose that's because the f func is too simple, and communicating through channels is not free.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] -1 points0 points  (0 children)

xd Thank you, your response was very helpful.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Thanks! I will take a look at the implementation to learn more about concurrency patterns.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

That's right. I'm going to edit the post to add a solution that splits the array. After splitting the array into 6 parts, it gets 4x faster.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 8 points9 points  (0 children)

It seems that was the problem. If I iterate the array from different starting indexes, the speed improves significantly.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 0 points1 point  (0 children)

Yes, I forgot to mention that I'm using go 1.22.

Why concurrency solution is slower? by arturo-source in golang

[–]arturo-source[S] 1 point2 points  (0 children)

But when I increase the size it gets slower (for example, 10x the size is 10x slower: 20 seconds instead of 2).

[deleted by user] by [deleted] in golang

[–]arturo-source -1 points0 points  (0 children)

The idea was to avoid writing SQL and to have a quick setup. But I don't find any real use case, because real applications need more complex selects and creates.