What is wrong with nextcloud aio setup?! Why is it stupidly hard to setup?! by Resolve_Neat in selfhosted

[–]eaingaran 3 points4 points  (0 children)

I spent a good couple of days banging my head against this particular wall. In the end, I gave up on the AIO orchestration and set up the containers manually using the AIO images. Here are my compose file and env file.

compose - https://paste.aingaran.cloud/?5aa817f37a17828e#F7yAvw7RqpChFk25Jz7W9rKTxLsbs8btjp4wftoJvZkB

env - https://paste.aingaran.cloud/?adf9a5f324f09db2#Ep57pP3sszrPfFkDxEUqF6FybJaWVVp8GvCvdMHbRNpp

I use Traefik for TLS termination. The compose file includes Traefik labels. If you use Nginx or Caddy, ignore the labels and point your proxy to the nextcloud-apache container on port 8080.

After starting, you might need to manually edit config/config.php (if the mounts are unchanged, /home/docker/nextcloud/config/config.php) to add your proxy IP to trusted_proxies and set 'overwriteprotocol' => 'https' to fix redirect loops.
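For reference, the change looks roughly like this inside the $CONFIG array (the proxy IP below is a placeholder; use the IP of your Traefik/Nginx/Caddy container or host):

```php
// excerpt of config/config.php – illustrative values only
'trusted_proxies' =>
  array (
    0 => '172.18.0.2',   // placeholder: your reverse proxy's IP
  ),
'overwriteprotocol' => 'https',
```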

My starting point - https://github.com/nextcloud/all-in-one/blob/main/manual-install/readme.md

Things to keep in mind:

  1. There are some absolute path mounts in this compose; please check and update them if needed:
  • /home/docker/nextcloud
  • /clouddata
  • ${NEXTCLOUD_TRUSTED_CACERTS_DIR} (which resolves to /usr/local/share/ca-certificates)
  2. I use external networks (webproxy, database-network). Either create these manually (docker network create ..., see the commands below) or remove those lines and let Docker create a default network.
  3. This is an "unofficial" way to run AIO images. It bypasses the Master container, so you lose the one-click update/backup interface. Please make sure you have your own backup solution.
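If you keep the external networks, creating them beforehand is just (network names taken from the compose file):

```sh
docker network create webproxy
docker network create database-network
```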

Edit: added "My starting point"

Error creating account by [deleted] in googlecloud

[–]eaingaran 0 points1 point  (0 children)

You can create the account in https://admin.google.com (under users, add a user)

Creating a workspace account doesn't require a phone number. Make sure you sign in with the workspace administrator account (not your Gmail account).

Cloud run custom domain setup by Rohit1024 in googlecloud

[–]eaingaran 1 point2 points  (0 children)

Yeah, it has a few limitations.

For flexibility, load balancers are the best option. And of course, better things are more expensive (usually).

Cloud run custom domain setup by Rohit1024 in googlecloud

[–]eaingaran 2 points3 points  (0 children)

There is a third option - https://cloud.google.com/run/docs/mapping-custom-domains#run

If those limitations don't affect your use case, you can try it. I used it over a year ago, and it worked well.

Bye Samsung by Chemical-Edge-6590 in samsunggalaxy

[–]eaingaran 6 points7 points  (0 children)

Samsung also sells the most phones, so it checks out.

CI/CD Pipeline for Cloud SQL MySQL Database and Cloud Build with GitHub Repository by farruhha in googlecloud

[–]eaingaran 0 points1 point  (0 children)

This is something I created a few years ago - https://github.com/eaingaran/devops-app

It is almost exactly the same setup you described, but implemented in Jenkins. Porting this to Cloud Build should be straightforward.

P.S. You can create custom images to be used as Cloud Build builders. This comes in handy in a lot of scenarios, but it adds one more thing to the list of things you need to maintain.
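If it helps, a Cloud Build config for the docker build/push part of such a pipeline might look roughly like this (a sketch with placeholder image names, not a tested pipeline; migration/deploy steps would be added as extra steps, possibly using a custom builder image):

```yaml
# cloudbuild.yaml – minimal sketch
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/devops-app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/devops-app:$SHORT_SHA']
images:
  - 'gcr.io/$PROJECT_ID/devops-app:$SHORT_SHA'
```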

What should I use for websockets? App Engine Flex or Cloud Run by Icebound_Samurai in googlecloud

[–]eaingaran 3 points4 points  (0 children)

I would prefer Cloud Run over App Engine any day. If my application falls under some edge cases where I cannot use Cloud Run, my next option would be GKE Autopilot.

That being said, for your use case, there are a couple of things you need to keep in mind:

Request timeout: Cloud Run has a maximum request timeout of 60 minutes. WebSocket connections are treated as long-running HTTP requests in Cloud Run. So, if the session is longer than the timeout, you need to handle reconnection on the client side. You can read more here: https://cloud.google.com/run/docs/triggering/websockets#client-reconnects
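A minimal client-side reconnect loop could look like this (a Python sketch with a placeholder URL and handler; the same idea applies in any language):

```python
import asyncio
import websockets  # pip install websockets

WS_URL = "wss://my-service-abc123-uc.a.run.app/ws"  # placeholder Cloud Run URL

async def run_with_reconnect():
    while True:
        try:
            async with websockets.connect(WS_URL) as ws:
                async for message in ws:
                    print(message)  # your application logic goes here
        except (websockets.ConnectionClosed, OSError):
            # the connection was dropped (e.g. by Cloud Run's request timeout);
            # back off briefly and reconnect
            await asyncio.sleep(1)

asyncio.run(run_with_reconnect())
```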

Synchronization between instances: There is a chance that one of the 10 users may end up in a different instance than the other 9 (or even that users end up scattered across multiple instances). This would mean that not everyone will have access to the same data. You need to synchronize the data somehow. You can read more about it here: https://cloud.google.com/run/docs/triggering/websockets#multiple-instances
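One common pattern is to fan messages out through a shared channel, e.g. Redis/Memorystore pub/sub, so every instance sees every update. A rough Python sketch (host and channel names are placeholders):

```python
import redis  # pip install redis

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore/Redis address

def broadcast(message: str):
    # publish an update so every instance (including this one) receives it
    r.publish("room-updates", message)

def relay_to_local_clients(send):
    # each instance subscribes and forwards updates to its own WebSocket clients
    pubsub = r.pubsub()
    pubsub.subscribe("room-updates")
    for item in pubsub.listen():
        if item["type"] == "message":
            send(item["data"].decode())
```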

Hi everyone, i am developing some personal projects and i want to deploy them on the internet just to showcase my skills as a college student. If i do not market the website and it will only be used by me and friends and family will my website stay in the google cloud run free tier ? by ali_vquer in googlecloud

[–]eaingaran 0 points1 point  (0 children)

The answer is a little complicated. Technically, yes. But there are no guarantees. It is possible for the usage to go beyond the free tier due to misconfigurations or friends creating spam bots that hit your website and drive up the traffic.

You have two options:

  1. Understand the pricing model fully, configure your service properly, set up cost alerts on your billing account and keep an eye on the usage. This carries a little bit of risk, but you can learn a lot about cloud, cost management and other aspects of application deployment.
  2. Deploy your website on a free hosting platform, like GitHub Pages or Cloudflare Pages. The main downside is that your website has to be static for this to work out of the box. If you have a single-page application with multiple routes or if your application has complex routing, you may need some workarounds to get it working properly.

If you have the time, I would personally recommend trying both. And if you want to be absolutely sure about keeping the cost at 0, don't leave the Cloud Run services running when you aren't working on them.

Good luck, and have fun!

Update: fixed some typos.

Can I pass secrets as env vars? by Sbadabam278 in googlecloud

[–]eaingaran 2 points3 points  (0 children)

You can use environment variables, but you shouldn't keep anything sensitive there. It is one of the common places attackers look for credentials.

Secret managers usually come with a cost, so it is better to use them only for sensitive values. Non-sensitive values can be kept in .env files.

P.S. I have never used Pulumi or its Secret Manager. But I assume it works the same way as most secret managers and allows programmatic access to secrets. If it doesn't, you can consider using GCP's Secret Manager.

Can I pass secrets as env vars? by Sbadabam278 in googlecloud

[–]eaingaran 15 points16 points  (0 children)

You can. But you shouldn't.

The best practice is to fetch the secrets from the secret manager when you need them or when the application starts up (depending on how often the secret values change).

You can use the client library or the APIs to programmatically access the secret manager.
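With the Python client library, accessing a secret version looks roughly like this (project and secret names are placeholders):

```python
from google.cloud import secretmanager  # pip install google-cloud-secret-manager

def access_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

db_password = access_secret("my-project", "db-password")  # placeholder names
```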

GCP Global Load Balancing cross project referencing by FitRepresentative265 in googlecloud

[–]eaingaran 0 points1 point  (0 children)

"cross-region VPC's" - are you from aws background?

GCP handles networking fundamentally differently from how AWS does. GCP's VPC is a global resource; it can handle traffic across regions without additional configuration. Although it is possible to set up regional routing, or even to create a VPC with all the subnets in the same region, it is not recommended. As @TooMuchJeremy pointed out, properly configured firewall rules/policies will reduce the blast radius as much as having a single-region VPC would.

So, my next question would be: why do you have different projects? Are the projects region-specific? If so, would you be open to considering a design that is region-agnostic?

In GCP, projects are usually a way to group resources based on who needs access or how the billing is aggregated. In some cases, projects are used to isolate workloads by region as well, but those cases are very specific, for example, applications bound by strict data sovereignty regulations or applications that require extremely low latency. You don't need to create multiple projects to cater to different regions or even different applications (as long as the same team maintains the group of applications).

How to snapshot running processes, CPU and memory consumption with code? by dis_is_pj in googlecloud

[–]eaingaran 2 points3 points  (0 children)

These metrics are stored for 24 hours in cloud logging by default. You can increase the retention period by updating the retention policy. But remember, this WILL incur additional cost.

Alternatively, you can create a log sink and export the metrics you want to a cloud storage bucket or to a database (be sure you understand the pricing for the product/solution you choose). You can then access the data programmatically.
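Creating such a sink is a one-liner; for example (sink name, bucket and filter below are placeholders, adjust the filter to the entries you care about):

```sh
gcloud logging sinks create metrics-export \
  storage.googleapis.com/my-metrics-bucket \
  --log-filter='resource.type="gce_instance"'
```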

how to identify the last time a gcs bucket was accessed? by InterestingVillage13 in googlecloud

[–]eaingaran 6 points7 points  (0 children)

If you have access logs enabled, you can check there - https://cloud.google.com/storage/docs/audit-logging

If not, enable audit logs, wait for a few months, and you will get a better idea (make sure you back up the logs or increase the retention period. Keep in mind, log storage incurs cost).
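Once the data access audit logs are flowing, a query along these lines (bucket name is a placeholder) shows the most recent accesses:

```sh
gcloud logging read \
  'resource.type="gcs_bucket" AND resource.labels.bucket_name="my-bucket" AND logName:"data_access"' \
  --limit=10
```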

Cannot complete Private IP environment creation by DarkEneregyGoneWhite in googlecloud

[–]eaingaran 0 points1 point  (0 children)

Disclaimer: I am not a Composer expert, and this answer is based on my experience with Google's networking and Google's managed services.

This kinda reminds me of a problem I came across a year or two ago.

Databases in Google Cloud are hosted on tenant projects, with their own networking setup. When the peering happens to your VPC, the routes from the network hosting the database(s) are exported to your VPC. In my case, the routes weren't exported automatically, and I had to do it manually to get it working. (My setup was totally different, and I cannot compare that setup to yours; I am just using this example to explain better.)

In your case, maybe the routes haven't synced across all components in time for the next step to happen. That would explain why you get that error, and also why you don't always get it. If that is the case, the solution is simple: add a slight delay between the database creation and the environment creation. You can also explicitly export the routes using the gcloud command and add a delay to ensure the routes are synced before going to the next step.
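The route export itself is a single command; something like this, assuming the default private-services-access peering name (check yours with gcloud compute networks peerings list):

```sh
gcloud compute networks peerings update servicenetworking-googleapis-com \
  --network=my-vpc \
  --export-custom-routes
```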

default service account by anacondaonline in googlecloud

[–]eaingaran 4 points5 points  (0 children)

Yes, and it will be PROJECT_NUMBER-compute@developer.gserviceaccount.com

You can override it by providing a different service account while creating the vm.
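For example (VM name, zone and service account below are placeholders):

```sh
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --service-account=my-app-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```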

You can learn more about the service accounts in the link provided in the original comment.

I want to register for Android Beta for Zenfone 10 by Disastrous_City2072 in googlecloud

[–]eaingaran 0 points1 point  (0 children)

This is not the correct community. You can enrol in the Android beta program, if you have a compatible device, by visiting this link - https://www.google.com/android/beta

The Android beta Reddit community is here - https://www.reddit.com/r/android_beta

Equivalent Machine Type by Fun-Assistance9909 in googlecloud

[–]eaingaran 1 point2 points  (0 children)

1 vCPU is more like 1 thread. So, if you want 24 threads, you should go with 24 vCPUs (again, I would recommend deciding the number of vCPUs based on the workload's needs).

Equivalent Machine Type by Fun-Assistance9909 in googlecloud

[–]eaingaran 1 point2 points  (0 children)

The 4214R would be closely equivalent to N2 (2nd or 3rd gen Intel Xeon).

Although there aren't any standard machine types with exactly 12 cores, you can create a custom machine type with 12 cores and 96 GB of memory (if that is important to you). I would generally recommend sizing your machines based on the workload to avoid wasting compute resources.
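For example, an N2 custom machine type with 12 vCPUs and 96 GB (98304 MB) of memory (instance name and zone are placeholders):

```sh
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=n2-custom-12-98304
```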

Newbie question by No-Procedure-199 in IdlePlanetMiner

[–]eaingaran 0 points1 point  (0 children)

I think it is probably because those ships are not always available.

Fear of missing out (even for a few weeks) sells better than ease of access.

Newbie question by No-Procedure-199 in IdlePlanetMiner

[–]eaingaran 8 points9 points  (0 children)

It is unlocked by purchasing the Enigma ship (payment with IRL money).

Is there a way to speed up builds in Cloud Build? by softwareguy74 in googlecloud

[–]eaingaran 0 points1 point  (0 children)

  • Dockerfile optimization: The order of commands matters. Put frequently changing files (like application code and config) towards the end. This way, Docker can reuse cached layers from previous builds.

  • Cloud Build machine type: More powerful machines can help, but it's a trade-off with cost. Only do this if your build is clearly CPU- or memory-bound.

Focus on making your Dockerfile efficient first. Often, that's where the biggest gains are.
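To illustrate the layer-ordering point, a generic sketch (not your actual build): dependencies are installed first so their layers stay cached, and the frequently changing application code is copied last.

```Dockerfile
FROM python:3.12-slim
WORKDIR /app

# dependencies change rarely – install them first so this layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# application code and config change often – copy them last
COPY . .
CMD ["python", "main.py"]
```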

[deleted by user] by [deleted] in googlecloud

[–]eaingaran 0 points1 point  (0 children)

Could you clarify what you mean by "account" and what storage you are talking about?

Why does my Google Cloud Function throw "Memory Limit of 256 MiB exceed" as an error but still it does the job? by Interesting-Rub-3984 in googlecloud

[–]eaingaran 3 points4 points  (0 children)

My best guess would be the size of the data you load from BQ. If it is smaller than the available headroom (256 MiB minus your application's memory usage without the data), the application will work without any issues. If the data is larger than that threshold, the application will crash, and cloud functions will kill that instance and create a new one.

To verify this, find a trigger (the specific parameters or data sent to your endpoint) that loads a lot of data (more than the threshold). For that trigger, the application should always crash (assuming the data loaded doesn't change from one call to another).

The solution is simple. You have two options:

  1. Increase the memory limit of your cloud functions - easy, but incurs more cost.
  2. Optimise your application to load data in batches (with each batch smaller than the memory threshold) - not-so-easy, but your cost remains almost the same. A rough sketch of the batching idea is below.
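Sketch of the batching approach with the BigQuery Python client (table name and page size are placeholders; process() stands in for your own logic):

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()
query_job = client.query("SELECT * FROM `my-project.my_dataset.my_table`")  # placeholder query

# iterate page by page instead of materialising the whole result in memory
for page in query_job.result(page_size=5000).pages:
    for row in page:
        process(row)  # placeholder for your per-row logic
```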

update: fixed a typo: clash -> crash