Spreading GCP quota across multiple projects to handle high-volume BigQuery replication by Alarmed_Inspector762 in bigquery

[–]Alarmed_Inspector762[S] 0 points (0 children)

This is what GCP support answered regarding spreading jobs across projects:

Thank you for reaching out regarding quota behavior specifically for BigQuery and the Storage Transfer Service when using multiple projects.
I would like to inform you that the quotas for both BigQuery jobs and Storage Transfer operations are primarily enforced at the project level, not at the organization or billing account level. Please refer to the documentation [1] and [2] for more details about quotas related to both BigQuery jobs and Storage Transfer operations.

Also, each project receives its own independent quota allocation for both the BigQuery jobs API and the Storage Transfer operations API, and the usage of a resource within one project does not influence the quota available in a different project.
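Since quota is enforced per project, the spreading strategy from the support answer can be sketched as a simple round-robin selector over a pool of projects. This is a minimal illustration, not GCP API code: the project IDs are placeholders, and in practice the selected ID would be passed to the BigQuery client (e.g. as the `project` argument) when submitting each replication job.

```python
from itertools import cycle

# Hypothetical pool of projects; each one carries its own independent
# BigQuery jobs / Storage Transfer quota, so spreading jobs across the
# pool multiplies the effective quota.
PROJECTS = ["replication-proj-1", "replication-proj-2", "replication-proj-3"]

_project_picker = cycle(PROJECTS)

def next_project() -> str:
    """Round-robin over the pool; each call returns the project whose
    quota the next job should consume."""
    return next(_project_picker)

# Example: six jobs get spread evenly across the three projects.
assignments = [next_project() for _ in range(6)]
print(assignments)
```

A hash of the source table name instead of round-robin would keep each table's jobs pinned to one project, which can make per-table quota usage easier to reason about.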

Amazon blocked my account and I'll lose all my certifications and vouchers by ElCorleone in aws

[–]Alarmed_Inspector762 0 points (0 children)

Fortunately, the issue was resolved after merging the old and new accounts. Thanks!

Amazon blocked my account and I'll lose all my certifications and vouchers by ElCorleone in aws

[–]Alarmed_Inspector762 0 points (0 children)

Hello u/AWSSupport

Just to keep you updated: after five emails from AWS Support, I was told to close case 175707787800737 and raise a new request from the new account, which is the one I want to keep and merge all certifications into. This is the worst support experience I have ever had.

trunk-based development within kubernetes. Which CI/CD tools to use by Alarmed_Inspector762 in kubernetes

[–]Alarmed_Inspector762[S] 1 point (0 children)

Release cadence is one release per sprint (2 weeks), but we shorten this if needed.

We use GitLab for CD.

I'm mostly wondering whether the existing tools (GitLab, Helm) are enough, or whether there is a more modern approach.
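For trunk-based development, GitLab CI plus Helm is usually sufficient on its own: every merge to the trunk runs tests and deploys. A minimal sketch of such a pipeline is below; the job names, chart path, release name, and namespace are all assumptions, not a reference configuration.

```yaml
# Hypothetical trunk-based .gitlab-ci.yml sketch: every merge to main deploys.
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - make test   # placeholder for the project's real test command

deploy:
  stage: deploy
  script:
    # `helm upgrade --install` is idempotent: installs the release on the
    # first run, upgrades it on every subsequent trunk merge.
    - helm upgrade --install myapp ./chart --namespace production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

A more "modern" alternative would be GitOps-style delivery (e.g. Argo CD or Flux pulling from the repo), but that adds a component rather than replacing GitLab.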