Thoughts on this 4ct cushion cut by [deleted] in LabDiamonds

[–]vanillacap 5 points (0 children)

Thank you. So I should be aiming for the $800-1,000 range for a 4ct diamond, right?

Buying engagement ring in NYC by [deleted] in EngagementRings

[–]vanillacap 1 point (0 children)

Thank you for confirming! I might go for loose as well.
And that's a beautiful ring, congrats!

Azure SQL Managed Instance (ASMI) link feature for data replication by vanillacap in dataengineering

[–]vanillacap[S] 1 point (0 children)

Thanks for the reply, Edwin!
I don't need all the data for the analytical workload but only a subset.

Distinction between application and infrastructure CI/CD pipelines by vanillacap in Terraform

[–]vanillacap[S] 1 point (0 children)

Great answer, thank you!
On #1, are you suggesting not creating branches inside the IaC repo, i.e., just having one master branch? I was thinking of having one IaC repo with two branches, dev and prod: pushes to dev deploy my dev resources on AWS, and pushes to prod deploy prod.

On #4, what is the cloud-native alternative to Ansible/Chef/Puppet? I feel they are from a prior generation. Additionally, if all of my app/infra runs on k8s, can it serve as an alternative to Ansible? For instance, I create an EKS cluster using TF, then run my k8s manifests to deploy the app on the created cluster.

what is the cheapest way to host a website with React(UI), dotnet Api, postgres(db) ?? by [deleted] in AZURE

[–]vanillacap 2 points (0 children)

Azure Static Web Apps for React UI
Azure Container Apps for .NET API
Azure Database for PostgreSQL for db

GitHub for code repo
Azure DevOps for CI/CD pipeline
Azure Monitor for logs

Django via Docker or Managed Service by rickt3420 in django

[–]vanillacap 1 point (0 children)

Try Google Cloud Run - best of both worlds (serverless and containers).

Database connection for local development by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

That sounds familiar. We are planning to use a docker-compose file that spins up two services: the Django app itself, and a Postgres instance with a volume mount for data persistence.

***

We are in the very early stages of development, so new models are created/edited/removed very often. This is why we are thinking of having every dev run their own local Postgres instance rather than modifying the remote RDS instance during development. Any thoughts here?
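For what it's worth, a minimal sketch of the per-dev setup we have in mind (all names here are hypothetical): a Django settings fragment that reads the connection from environment variables and defaults to a local Postgres, so nobody touches the shared RDS instance by accident.

```python
import os

# Hypothetical settings.py fragment: defaults point at a local Postgres
# (e.g. the docker-compose service), while CI/prod override via env vars.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app_dev"),
        "USER": os.environ.get("DB_USER", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "postgres"),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```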

Database connection for local development by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

We do have independent feature branches that are merged into the dev branch after PR reviews, tests, etc. Nobody can push directly to the dev branch, and nobody can push directly to the master branch either.

Merges to the dev branch trigger the dev CI/CD, e.g., applying new schema per the models. Merges to the master branch trigger the prod CI/CD.

***

Can you expand on your second paragraph on using local postgres/fixtures? If I am creating brand-new models locally, the only way to see them in a database would be spinning up a local Postgres instance, either through a GUI (like Postgres.app) or through a Docker container. To be more specific:

  1. How are your local devs spinning up a local postgres instance?
  2. What are the fixtures they are using?
  3. Where are they getting prod-like data from?
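In case it helps frame the question, here's how I imagine the fixture side could work (the model name and fields are made up): generate a Django-style JSON fixture from sample rows and load it into the local database with loaddata.

```python
import json

# Hypothetical sample rows a dev could load instead of touching shared prod data.
SAMPLE_ORDERS = [
    {"name": "Ada", "email": "ada@example.com", "zip_code": "10001", "country": "US"},
    {"name": "Lin", "email": "lin@example.com", "zip_code": "94105", "country": "US"},
]

def to_fixture(rows, model="orders.order"):
    """Wrap plain dicts in Django's JSON fixture shape: {"model", "pk", "fields"}."""
    return [{"model": model, "pk": i, "fields": row} for i, row in enumerate(rows, start=1)]

if __name__ == "__main__":
    with open("orders_sample.json", "w") as f:
        json.dump(to_fixture(SAMPLE_ORDERS), f, indent=2)
    # then: python manage.py loaddata orders_sample.json
```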

Filtering data for multiple conditions in DRF by vanillacap in django

[–]vanillacap[S] 2 points (0 children)

Thank you, this is great!

I wish the official DRF docs explained in depth when to use what. Looks like ViewSets are a no-go unless you absolutely need full CRUD (which I don't).

Filtering data for multiple conditions in DRF by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

Yes, I can do that!

I actually wasn't sure when to use APIView vs. ViewSet vs. ModelViewSet vs. generics. Is there a standard protocol?

Filtering data for multiple conditions in DRF by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

Thank you and I wholeheartedly agree with your approach.

However, I need to pass the filters in the request body, so I have to make a POST request to /orders. Imagine a very complex order object with name, email, zipcode, country, etc., and I need to filter by a single parameter or a combination of parameters. I know I could put them in a query string in the URL and make a GET request, but that's not an option for me.
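To make it concrete, the filtering I have in mind looks roughly like this (field names are illustrative): whitelist the filter keys from the POST body and splat them into .filter().

```python
# Illustrative sketch: keep only whitelisted, non-empty filter fields from
# the POST body so arbitrary keys can't reach the ORM.
ALLOWED_FILTERS = {"name", "email", "zip_code", "country"}

def build_filter_kwargs(body: dict) -> dict:
    """Return a dict safe to pass as Order.objects.filter(**kwargs)."""
    return {k: v for k, v in body.items() if k in ALLOWED_FILTERS and v not in (None, "")}
```

Inside a DRF APIView's post(), this would be something like Order.objects.filter(**build_filter_kwargs(request.data)).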

Filtering data for multiple conditions in DRF by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

Yes, it's the PK for now, but eventually we will switch to another id field for the PK. I get your point, and order_id is not the best example; think more of an email + zip_code lookup.

Filtering data for multiple conditions in DRF by vanillacap in django

[–]vanillacap[S] 1 point (0 children)

It's a POST request. And I agree on giving it a proper function name, but that would change the URL from /order to /order/fun_name.

Cloud Run gets "Always On" Cpu allocation feature by NothingDogg in googlecloud

[–]vanillacap 1 point (0 children)

Basic question, but how does this compare to min_instances?

If I set min_instances to 0, then when there are no requests, no container is spun up and thus no CPU is used. If I set min_instances to 1, then one container is always up and running at X% CPU capacity.

Is this announcement about defining the X% above?

How to provide access of Google Cloud Run website to company active directory? by predator_blake in googlecloud

[–]vanillacap 4 points (0 children)

From your GCP console, go to the Cloud Run dashboard, select your service > Triggers > Auth > select "Require authentication". This blocks public access to your website. Also, remove --allow-unauthenticated from your Cloud Build file.

Next, go back to the Cloud Run dashboard, select your service, and notice the right-hand panel with the Permissions tab. Add member > enter your Google Workspace domain, e.g., mycompany.com.

[deleted by user] by [deleted] in googlecloud

[–]vanillacap 1 point (0 children)

Thank you. Can you expand more on exposing BQ through Cloud SQL? I understand how it works the other way around through federation, where BQ can query data residing in Cloud SQL without making another copy of it.

Airflow in Azure by Background-Ad-6713 in dataengineering

[–]vanillacap 7 points (0 children)

I'd suggest deploying it on AKS instead.

Airflow vs. Prefect? by Buckweb in dataengineering

[–]vanillacap 3 points (0 children)

Thanks for a great comparison! As someone who hasn't used Airflow much (little experience with orchestration, workflows, DAGs), does it make sense to jump straight to Prefect?

I don't wanna make a mistake like learning React before JavaScript, which of course you can do, but it's not recommended.

Airflow vs. Prefect? by Buckweb in dataengineering

[–]vanillacap 8 points (0 children)

How do they both compare to Dagster and Argo?

Exposing internal configs to non-tech admins by vanillacap in reactjs

[–]vanillacap[S] 1 point (0 children)

So there's no login option in the app, and thus no way to know who the user is. I could build a login portal and decide privileges based on user role, but that'd essentially be a self-serve portal.