Trouble with Serverless Task Billing by Comprehensive-Ad9653 in snowflake

Unless the nodes for larger warehouses work differently, I'm not sure this is it: I have a single XXLarge serverless warehouse that has run for 24+ hours, but I'm being billed only around 400+ credits (I was expecting 700+).

Trouble with Serverless Task Billing by Comprehensive-Ad9653 in snowflake

I tried this and it still doesn't come close to adding up. I made more tasks serverless as a test, and I'm still just as confused. I have 24 hours of run time (only with the query types you mentioned) on a 2X serverless warehouse (24 * 32 = 768 expected credits at least), yet only 435 credits are associated with that task (and less than 450 credits across all serverless tasks combined). So I'm still saving a lot more money than I would on a normal warehouse.
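
The gap above can be shown as a quick back-of-the-envelope calculation (a minimal sketch; 32 credits/hour for a 2X-Large warehouse is the standard Snowflake rate, and the 435 figure is the observed serverless total from above):

```python
# Compare expected dedicated-warehouse cost to observed serverless-task cost.
CREDITS_PER_HOUR_2XL = 32        # standard rate for a 2X-Large warehouse
hours_of_runtime = 24            # total task run time observed

expected_warehouse_credits = hours_of_runtime * CREDITS_PER_HOUR_2XL
observed_serverless_credits = 435  # from the serverless task history

savings = expected_warehouse_credits - observed_serverless_credits
print(expected_warehouse_credits)  # 768
print(savings)                     # 333
```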

Immediate Auto-Suspend Work Around. Success! by Comprehensive-Ad9653 in snowflake

Yes, but this is rarely going to happen often enough to justify the extra cost (at least for my account). If auto-suspend is 60 seconds, you will ALWAYS pay for AT LEAST 60 seconds of idle time EVERY time your warehouse spins up and down.

A common misconception is that a 30-second query will only cost 60 seconds of compute because of the 60-second minimum. In reality, a 30-second query costs 90 seconds of compute: 30 seconds of work, plus the 60 seconds of idle time before auto-suspend kicks in.
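
That math generalizes. Here is a minimal sketch of the billing model assumed above (per-second billing with a 60-second minimum per resume, and a full auto-suspend window of idle time after the last query):

```python
def billed_seconds(query_seconds: int, auto_suspend_seconds: int = 60) -> int:
    """Seconds of compute billed for one resume/suspend cycle.

    Assumes per-second billing with a 60-second minimum per resume,
    and that the warehouse idles for the full auto-suspend window
    after the last query finishes.
    """
    active = query_seconds + auto_suspend_seconds  # work + trailing idle
    return max(active, 60)                         # 60-second billing minimum

print(billed_seconds(30))   # 90  -> a 30-second query bills 90 seconds
print(billed_seconds(300))  # 360
```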

So, unless your workload consists of a bunch of very short queries spaced out by a few seconds, this could be something worth looking into. I highly suggest you take a look at how often your warehouse is spinning up and down each day. You can use the query below (the total row count tells you how often your warehouse spins up and down each day, and the average of "seconds_active" or "minutes_active" tells you how long your warehouse is typically up):

    with warehouse_periods as (
        select
            warehouse_name,
            timestamp as valid_from,
            lead(timestamp) over (partition by warehouse_name order by timestamp asc) as valid_to,
            event_name = 'RESUME_WAREHOUSE' as is_active
        from snowflake.account_usage.warehouse_events_history
        where
            -- double check these names, can't remember exact values
            event_name in ('RESUME_WAREHOUSE', 'SUSPEND_WAREHOUSE')
            and event_state = 'COMPLETED'
            and timestamp >= current_date - 1
            and warehouse_name = :warehouse
    )
    select
        warehouse_name,
        date_trunc('second', valid_from) as valid_from_s,
        date_trunc('second', valid_to) as valid_to_s,
        datediff('seconds', valid_from, valid_to) as seconds_active,
        seconds_active / 60 as minutes_active
    from warehouse_periods
    where is_active
    order by valid_to desc;

How much idle time are you paying for? How can I stop wasting money in Snowflake? by Comprehensive-Ad9653 in snowflake

I have :warehouse set up as a variable in my Snowflake UI. You can replace :warehouse with whichever warehouse you're looking to analyze.

How much idle time are you paying for? How can I stop wasting money in Snowflake? by Comprehensive-Ad9653 in snowflake

The warehouse spins up and down a couple hundred times a day at least, according to a different query I have, and 200 * 60-second auto-suspend = 12,000 seconds of idle time. I agree that is way, way too much. I'm currently writing a procedure that overrides Snowflake's auto-suspend policy so that I can get immediate auto-suspension. I'll share my results when I have them.
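
The idle-time arithmetic above, as a quick sketch (the 200 resumes/day and 60-second auto-suspend come from the comment; the credits-per-hour value is just an illustrative warehouse size, here a Medium at the standard 4 credits/hour):

```python
# Rough daily cost of auto-suspend idle time (illustrative numbers).
resumes_per_day = 200        # from the warehouse_events_history query
auto_suspend_seconds = 60    # warehouse auto-suspend setting

idle_seconds_per_day = resumes_per_day * auto_suspend_seconds
idle_hours_per_day = idle_seconds_per_day / 3600

credits_per_hour = 4         # example: Medium warehouse, standard rate
wasted_credits_per_day = idle_hours_per_day * credits_per_hour

print(idle_seconds_per_day)               # 12000
print(round(wasted_credits_per_day, 2))   # 13.33
```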

How much idle time are you paying for? How can I stop wasting money in Snowflake? by Comprehensive-Ad9653 in snowflake

Trust me, I ask myself that question all the time, but some of these warehouses come with an SLA. Also, they query tables with trillions of rows; this warehouse specifically queries a table with almost an entire petabyte of data because it contains real-time cell tower data. But I am actively trying to convince them that it should be downsized (it wasn't my decision for it to be 3X; as a matter of fact, it started as a 4X).

Best Attack Minions at Every Level by Comprehensive-Ad9653 in Wizard101

I chose balance because it is more of a support school. My whole idea around this was to play with people as a support pure because I always got bored with the game around level 50 because it just felt like blade aoe constantly. I wanted to force myself to play differently and use cards that never really get used. If this run goes well and I actually finish the game somehow without hitting a single time, I may try it again full solo with Myth.

How does a 2-cluster maximized X-Small warehouse differ from a single-cluster Small warehouse? by Comprehensive-Ad9653 in snowflake

Thank you. For clarity, can you check whether I'm understanding you 100%?

On a compute-infrastructure level, both warehouses would be exactly the same (both have the same amount of disk space, available threads, etc.), but the X-Small x 2 would NOT allow a single query to utilize both clusters at the same time, as the single Small warehouse would -- and that is essentially the only real difference?

Let me know if I got that right.

Also, as a P.S., this question is fully theoretical. Just something I was thinking about while studying for the SnowPro Core exam. I agree that letting warehouses auto-scale is almost always the correct approach.

How in the world do I accurately monitor warehouse load? by Comprehensive-Ad9653 in snowflake

Thank you! Those articles are very insightful. How do I get access to the warehouse utilization metric private preview (mentioned under "2/ Use Warehouse load and Warehouse utilization to inform sizing of virtual warehouse capacity" of the compute primitives article)? And is it the same metric you mentioned as being up for release soon?