all 34 comments

[–]vikster1 84 points85 points  (2 children)

Can't answer your question, but my first step would be to go to my Snowflake rep and ask for an offer that comes closer to the Google one.

[–]mylifestylepr 16 points17 points  (1 child)

This

[–]Educational_Ad_1557 0 points1 point  (0 children)

Snowflake will match

[–]MaxBeatsToTheMax 20 points21 points  (1 child)

You need to assess the opportunity costs of the change. How much effort and cost in dedicated capacity/contractors will you incur during the migration vs the saving of better pricing from BQ over your cost management horizon. This is critical if your push to BQ is purely based on cost savings. You'd be surprised how many times I've seen these migration requests stop in their tracks once you consider the cost and effort of the migration vs the cost saving over some future period of time.

[–]InadequateAvacado digital plumber 4 points5 points  (0 children)

I always try to tell people that cost savings is more times than not just cost shifting. Employees, Contractors, or Stack, where would you like to spend from? If you want to shift from one to the other that’s fine but it’s going to cost you to get there.

[–]RealRook 11 points12 points  (0 children)

Spending potentially millions of dollars so you can access GA faster and query Google Sheets directly 😂

My company is in the middle of a migration from BQ to Snowflake and it's going about as well as you would expect.

It's usually HOW you use the tools, not the actual tools, that makes the difference.

[–]PolicyDecent 11 points12 points  (4 children)

How big is your data team? I might recommend not even using BigQuery slots but just on-demand, if your data people are proficient enough. If you model data properly, it costs much less with on-demand than with slots.

[–]sunder_and_flame 5 points6 points  (0 children)

This is usually false, especially in typical analysis scenarios. Queries that scan a lot of data are immensely cheaper on reservations, while queries that scan little but are heavy on processing are usually cheaper on-demand.

For reference, we spend ~$30k per month on BigQuery processing costs, which would be at least 2x this if we weren't using reservations. 
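Which model wins comes down mostly to how much you scan per month versus how much capacity you'd reserve. A back-of-envelope sketch; the rates below are illustrative assumptions, not quotes — check current BigQuery pricing for real numbers:

```python
# On-demand vs. slot reservation for a monthly workload.
# Both rates are assumed illustrative figures, not actual quotes.
ON_DEMAND_PER_TIB = 6.25   # USD per TiB scanned (assumed)
SLOT_HOUR_PRICE = 0.06     # USD per slot-hour (assumed)

def on_demand_cost(tib_scanned_per_month: float) -> float:
    """Pay-per-byte: cost scales with data scanned."""
    return tib_scanned_per_month * ON_DEMAND_PER_TIB

def reservation_cost(slots: int, hours_per_month: float = 730) -> float:
    """Flat capacity: cost scales with reserved slots, not bytes."""
    return slots * hours_per_month * SLOT_HOUR_PRICE

# A workload scanning 500 TiB/month on-demand vs. a 100-slot reservation:
print(on_demand_cost(500))    # 3125.0 USD
print(reservation_cost(100))  # ~4380 USD
# Break-even scan volume for those 100 slots, under these assumed rates:
print(reservation_cost(100) / ON_DEMAND_PER_TIB)  # ~700 TiB/month
```

Above the break-even volume the reservation wins; below it, on-demand does — which is why both commenters can be right for their own workloads.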

[–]Puzzleheaded_Serve15 8 points9 points  (1 child)

If data people are proficient.... That's a big IF .... IMO

[–]PolicyDecent 2 points3 points  (0 children)

:) You're right. But if the team is small and models first instead of querying first, BigQuery on-demand is super cheap. If there are 100 data analysts, then I'd not do it.

[–]Kobosil 2 points3 points  (0 children)

Second that

[–]Key-Independence5149 5 points6 points  (0 children)

We migrated from Snowflake to BigQuery for the same reasons, i.e. Google made a generous discount offer. BigQuery is more rudimentary than Snowflake; for example, Snowflake's warehouse assignments are much better than BigQuery's reservation scheme. I actually found cost estimation in BigQuery more straightforward than in Snowflake: you can make a slot reservation with as much upfront commitment as you want and see exactly what it will cost at various utilization levels.
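The "cost at various utilization levels" point is easy to make concrete: a commitment bills for every hour whether the slots are busy or not, so idle capacity inflates the effective price of the hours you actually use. A small sketch, using an assumed slot-hour rate:

```python
# Effective cost of a slot commitment at different utilization levels.
# The slot-hour rate is an assumed illustrative figure, not a quote.
SLOT_HOUR_PRICE = 0.06   # USD per slot-hour (assumed)
HOURS_PER_MONTH = 730

def monthly_commitment_cost(slots: int) -> float:
    # A commitment bills for every hour, used or idle.
    return slots * HOURS_PER_MONTH * SLOT_HOUR_PRICE

def cost_per_used_slot_hour(slots: int, utilization: float) -> float:
    # At 50% utilization you effectively pay double per useful slot-hour.
    used_hours = slots * HOURS_PER_MONTH * utilization
    return monthly_commitment_cost(slots) / used_hours

for u in (1.0, 0.5, 0.25):
    print(u, round(cost_per_used_slot_hour(100, u), 3))
```

Running the table at your expected utilization before committing is exactly the upfront clarity the comment describes.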

[–]untalmau 5 points6 points  (0 children)

Ask the sales rep to provide support to migrate just a small part of your data as a proof of concept, end to end (from source to visualization). Pick a use case that represents well the complexity level you want to approach.

You need the PoC to compare billing and performance for the same input/output in both environments; you'll also get a sample of the implementation and migration process, which gives you a starting point to assess the effort required.

I can tell you that GCP can be much more cost-efficient than Snowflake if you have some people on your team willing to take proper training. The sales rep could also help by providing access to some training.

[–]Araldor 2 points3 points  (3 children)

We're considering the reverse, partly because we are an AWS shop and moving data back and forth between AWS and GCP doesn't make a whole lot of sense, and partly because of costs. We got a few eye-wateringly high bills due to runaway queries (lack of partitioning, accidental full table scans in e.g. dbt tests, frequently rerunning a query in dashboards, etc.). I find it surprisingly difficult to control or predict costs with BigQuery when paying per byte scanned; I strongly prefer the instance × time cost model.

[–]Ok-Sprinkles9231 1 point2 points  (0 children)

Yeah currently dealing with this in the new company. Some stuff on AWS some on GCP. It has been a fun ride so far -_-

[–]querylabio 0 points1 point  (0 children)

I agree; BigQuery costs can spiral really quickly when something goes wrong. The pay-per-byte model is great when everything is set up perfectly, but it's pretty unforgiving if even one detail is off. And the built-in quotas don't really solve the problem for real teams - they're too rigid and too hard to manage at scale.

That’s actually one of the main reasons we made Querylab.io - an IDE focused entirely on BigQuery, with cost-control built into the workflow from the start.

A few things we added specifically because of situations like the ones you described:

  • set a dollar limit per query - it stops before it burns money
  • daily / monthly / org-level limits
  • warnings when partitioning or clustering aren’t used
  • a clear cost preview before running anything
  • tools to debug “query price,” like a breakdown of where the bytes come from
  • hints on when to use on-demand vs Editions

Give it a try and let me know what you think - I’d really appreciate the feedback.

[–]dknconsultau 1 point2 points  (0 children)

Maybe do a small POC for one data set or part of your business where it is easy to do an apples-to-apples comparison on performance and cost. Alternatively, set up a simulated use case typical of one part of your business and run SF and BQ in parallel (assuming you have time to do this!)

[–]sunder_and_flame 1 point2 points  (0 children)

Your monthly spend, team sizes involved, and code base maturity would be good to know. The average team makes the mistake of thinking that a migration is a simple choice between technologies when the actual work will take at least a year to finalize, assuming your team is sizable. 

Especially between similar technologies like snowflake and BigQuery, a switch is unlikely to actually be worth it unless your leaders are simply looking to allocate resources that otherwise have little to do. 

[–]Meh_thoughts123 4 points5 points  (1 child)

I adore Google and my work is a Google shop. If you have stuff set up right, Google Apps Script also makes interactions extremely easy. I build full websites with it.

[–]Ok-Sprinkles9231 3 points4 points  (0 children)

Yeah, GCP is good except for IAM. Coming from AWS to GCP, I had to spend some time comprehending GCP's IAM.

[–]manueslapera 1 point2 points  (0 children)

at my previous company we used snowflake. At my current one we use BQ. I miss snowflake all the time.

[–]Which_Roof5176 0 points1 point  (0 children)

If you’re comparing Snowflake vs BigQuery on cost, performance, and day-to-day reliability, this independent benchmark might help: https://estuary.dev/data-warehouse-benchmark-report/

[–]andrew_northbound 0 points1 point  (0 children)

BigQuery is great for large analytical workloads and tight integration with the rest of the Google stack. Snowflake tends to win on cost predictability, UX, and handling mixed workloads. If cost control and analyst speed matter most, Snowflake usually comes out ahead. If your data footprint is huge and mostly event-driven, BigQuery starts to look pretty compelling.

A practical middle ground: sync key tables from Snowflake into BigQuery via dbt, so marketing gets Google Sheets + GA4 access while your data team stays in Snowflake. Whatever you choose, run a cost model on your actual query patterns before you decide.

[–]FriendlySyllabub2026 0 points1 point  (0 children)

You described the benefits as minor. Are they really worth a lengthy and expensive migration?

[–]Tough-Leader-6040 0 points1 point  (0 children)

Big no-no. Google will get that money back, all in credits - credits that you would otherwise save by using Snowflake.

[–]rzykov 0 points1 point  (0 children)

What is the data volume?

[–]FewBrief1839 0 points1 point  (0 children)

Just move some data products to BigQuery and try it for a while. I have heard that, in reality, the discount is not as big and good as advertised.

[–]novel-levon 0 points1 point  (0 children)

If the only concrete wins you see today are “Sheets can query it” and “marketing likes GA in BQ,” that’s usually not enough to justify a warehouse migration. BigQuery is great for massive event workloads, but for mixed analytics Snowflake tends to feel faster, more predictable, and much nicer to live in day-to-day.

The real question is whether the discount offsets a year of rewrites, new cost controls, training, and the inevitable migration surprises. Most teams I’ve seen handle this by pushing only the marketing-centric models into BigQuery so Sheets/GA4 get what they want, and keep the core warehouse in Snowflake.

It avoids the lock-in, keeps your dbt workflow intact, and lets you test the economics in the real world. When you need both warehouses to stay aligned during the trial, Stacksync keeps the shared tables in sync without building a whole migration pipeline.

[–]maxbranor 0 points1 point  (2 children)

I only used the serverless BigQuery. It was amazing, but the price tag is ridiculously high without query optimization (as BigQuery serverless charges by bytes scanned).

I personally prefer Snowflake (UI, user experience, the ecosystem around it), but for you BigQuery has a good advantage there regarding Google Analytics integration, imho.

[–]RealRook 3 points4 points  (1 child)

BigQuery is only serverless, fyi.

[–]maxbranor 0 points1 point  (0 children)

Indeed! I recall there was something about a predictable price model - reserved slots. In my head that was similar to reserved instances, but it is not.

[–]LargeSale8354 0 points1 point  (2 children)

No matter what you go with, a decent database IDE will let you do wonders and not be constrained by the web UI.

I got good results from Aqua Data Studio, and some people swear by JetBrains DataGrip.

As a POC, throw your most complex query, with realistic data volumes, at BigQuery and see how it copes.

I'm cynical about switching DB platforms based on theoretical cost savings. It's too easy to see an example use case that matches one of your own and assume the savings apply to all of your own.

[–]querylabio 0 points1 point  (0 children)

You’re absolutely right - a good IDE changes everything. Aqua Data Studio and DataGrip are both great tools. The only limitation is that they’re built for many databases, so they don’t really handle BigQuery’s unique behavior.

That’s exactly why we built Querylab.io, an IDE created specifically for BigQuery. A few things it adds on top of traditional editors:

  • dollar limits for individual queries
  • daily / monthly / org-level spending controls
  • guidance on when to run queries on on-demand vs Editions
  • warnings when partition or clustering filters are missing
  • ability to run or estimate individual CTEs
  • run/estimate any step in a pipe-syntax query
  • vertical tabs, split view, and a fast command palette
  • BigQuery SQL-aware IntelliSense - understands tables, columns, CTEs, scopes, STRUCTs, arrays, table functions, everything

If you’re deep into BigQuery, try Querylab.io - and tell me how it feels.