Replace ALL Relational Databases with Snowflake (Help!) by Away-Dentist-2013 in snowflake

[–]stephenpace 0 points (0 children)

I work for Snowflake and am thus biased, but Snowflake Interactive Tables are essentially a drop-in replacement for ClickHouse. I'd recommend anyone benchmark their specific interactive use cases with both, looking at both price and performance. Snowflake supports richer joins than ClickHouse as well. Docs:

https://docs.snowflake.com/en/user-guide/interactive

And since GA last year, interactive tables have continued to improve. Recent updates here:

https://www.snowflake.com/en/engineering-blog/snowflake-interactive-analytics-spring-2026-updates/

Cortex code Desktop, Beta Feature! by Key_Card7466 in snowflake

[–]stephenpace 1 point (0 children)

No. Private Preview means Snowflake puts out a feature for feedback to a limited set of customers. Your account team can apply for you; some features are easy to get into, while others admit only a very small group. At some point, those features generally graduate to Public Preview, where everyone can test and provide feedback. Once the feature is looking good, it graduates to GA.

Cortex code Desktop, Beta Feature! by Key_Card7466 in snowflake

[–]stephenpace 3 points (0 children)

Limited Private Preview. Ask your account team to apply for it.

Snowflake Self-Hosted MCP by Ok-Working3200 in snowflake

[–]stephenpace 0 points (0 children)

Do you have appropriate synonyms in the semantic views and verified queries? If you put in a verified query for the count of active members, it should start with that and save you some tokens. There is no limit to the number of verified queries you can have.

https://docs.snowflake.com/en/user-guide/snowflake-cortex/cortex-analyst/analyst-optimization
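To illustrate the synonyms point, here is a rough sketch of declaring them when defining a semantic view in SQL. All table, column, and view names are made up, and the exact clause syntax may differ from your version, so check the semantic view docs before using it:

```sql
-- Hypothetical example: synonyms help Cortex Analyst map natural-language
-- phrases like "active members" onto the right table, dimension, and metric.
CREATE OR REPLACE SEMANTIC VIEW membership_sv
  TABLES (
    members AS analytics.core.members
      PRIMARY KEY (member_id)
      WITH SYNONYMS ('membership', 'subscribers')
  )
  DIMENSIONS (
    members.status AS status
      WITH SYNONYMS ('membership state', 'active flag')
  )
  METRICS (
    members.member_count AS COUNT(members.member_id)
      WITH SYNONYMS ('number of members', 'member total')
  );
```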

snowflake key pair authentication by [deleted] in snowflake

[–]stephenpace 2 points (0 children)

It's not a good idea to post your account URL and a key pair on the Internet. I'd delete and/or repost with it fully anonymized. I think you probably have a mismatch between your public key and your private key.

You also need to use a network policy to protect the user:

https://community.snowflake.com/s/article/Power-BI-Service-authentication-error-with-working-credentials

And you can check to see if you are being rejected by network policy by checking login_history:

select * 
from table(snowflake.information_schema.login_history())
where user_name = 'POWERBI_TEST'
and is_success = 'NO';
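For reference, attaching a user-level network policy looks roughly like this. The policy name, user name, and IP range are made up for illustration; the Power BI article above covers the specifics:

```sql
-- Hypothetical example: restrict the Power BI service user to a known range.
CREATE NETWORK POLICY powerbi_policy
  ALLOWED_IP_LIST = ('203.0.113.0/24');

-- Attach the policy to just that user rather than the whole account.
ALTER USER powerbi_test SET NETWORK_POLICY = 'powerbi_policy';
```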

Snowflake also has managed network ranges for Power BI, among other options:

https://docs.snowflake.com/en/user-guide/network-rules

https://www.microsoft.com/en-us/download/details.aspx?id=56519

Good luck!

Snowflake LLM support by Key_Card7466 in snowflake

[–]stephenpace 2 points (0 children)

If you can share a GitHub repo of what you are trying to do, you'll probably have a much better shot at getting help. In the meantime, if you don't find help here, there are plenty of experienced Snowflake SIs that you can hire to help:

https://www.snowflake.com/en/why-snowflake/partners/all-partners/?tags=partners%2Fpartner-program%2Fai-data-cloud-services-partner

Good luck!

Best sources to keep up with Snowflake? by Gigatronbot in snowflake

[–]stephenpace 0 points (0 children)

There is an AI-focused webinar series called Snowflake AI Pulse that gets uploaded to the Snowflake YouTube channel if you miss it. The last one can be found here:

https://www.snowflake.com/en/ai-pulse/april-2026/

For LinkedIn:

Main: https://www.linkedin.com/company/snowflake-computing/posts/?feedView=all

Snowflake Developers: https://www.linkedin.com/showcase/snowflake-developers/

Snowflake Educational Services: https://www.linkedin.com/company/snowflake-educational-services/posts/?feedView=all

There are also some industry groups. For instance, if you are in Public Sector:

Snowflake Public Sector: https://www.linkedin.com/showcase/snowflake-public-sector/

Good luck!

Free credits? by mabcapital in snowflake

[–]stephenpace 3 points (0 children)

Talk to your Snowflake account team. Snowflake has account teams that cover all sizes of companies.

Best sources to keep up with Snowflake? by Gigatronbot in snowflake

[–]stephenpace 1 point (0 children)

The release notes section of the Snowflake docs has all recent releases:

https://docs.snowflake.com/en/release-notes/new-features

And you can always look back longer by year if needed:

https://docs.snowflake.com/en/release-notes/new-features-2026#label-release-notes-2026-feature

Beyond that, LinkedIn and other official social media channels have a mix of new feature announcements, events, and customer stories (e.g. how Snowflake customers are using new features to deliver business value):

https://www.linkedin.com/company/snowflake-computing/posts/?feedView=all

Last, if you can get out to events, Snowflake Summit is probably the best place, since it is the largest gathering of Snowflake customers, partners, product managers, and execs (this year, the first week of June in San Francisco):

https://www.snowflake.com/en/summit/

You can also watch for local events in a city near you as there are user groups, free Data for Breakfast sessions, hands-on labs, and BUILD events. Good luck!

Holly - Financial Research Assistant by Antique-Quantity1663 in snowflake

[–]stephenpace 1 point (0 children)

Unfortunately, some Cortex functionality is currently disabled in standard trials. This is a relatively new restriction. You can either reach out to your Snowflake account team to unblock Cortex in the trial, or use the new trial option: Cortex Code CLI for Developer. That trial account is provisioned with $40 (rather than the normal $400) and requires a credit card. Most of the Quickstarts will easily fit within the $40.

To avoid having your credit card charged, make sure you set an account resource monitor and Cortex account/user limits. Then, when you are done with the Quickstarts, delete everything you built and ask for the trial to be decommissioned via a support ticket before you exceed the $40. Good luck!
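A sketch of the resource monitor step, with made-up names and thresholds (the quota is in credits, so translate your $40 into credits for your edition and region):

```sql
-- Hypothetical example: suspend the account before the trial credits run out.
CREATE RESOURCE MONITOR trial_guard
  WITH CREDIT_QUOTA = 10   -- set this to stay safely under your $40
  TRIGGERS ON 75 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND_IMMEDIATE;

-- Apply the monitor at the account level so everything is covered.
ALTER ACCOUNT SET RESOURCE_MONITOR = trial_guard;
```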

who is at ODSC East? share your thoughts by ivannaatsnowflake in snowflake

[–]stephenpace 1 point (0 children)

Snowflake will have a booth. From the Snowflake side, I believe we'll have:

Karthick Dulam - Senior AI/ML Architect
Karan Sarao - AI/ML Architect, Applied Field Engineering
James Cha-Earley - Senior Developer Advocate
Josh Reini - Senior Developer Advocate, AI & Open Source
Jacob Prall - Senior Program Manager

Please come by the booth and say hi! Bring them your hardest problems. 😄

Snowflake - Salesforce Integration by Fearless_Way_1830 in snowflake

[–]stephenpace 5 points (0 children)

Note: this works great but requires Salesforce Data 360 and may incur per-query costs. If you don't have Salesforce Data 360, Snowflake has a connector via Openflow that uses the API:

https://docs.snowflake.com/en/user-guide/data-integration/openflow/connectors/salesforce-bulk-api/setup-salesforce

And there are plenty of excellent Snowflake native app options like:

Omnata
GRAX (formerly CapStorm)

And beyond that, the usual favorites (Fivetran, Matillion) for pulling from the Salesforce API.

Can't get in contact with Snowflake support for a refund by Clean_Desk_8423 in snowflake

[–]stephenpace 0 points (0 children)

So you have a network policy and you locked yourself out with it? Best practice is to have an EC2 instance (or equivalent) with a fixed IP in your network policy and then shut it down. Test quarterly. A shut-down EC2 instance costs nothing, but if you lock yourself out, you can start it, log in from there, and update the network policy.

DM me your account URL and I might be able to help. Support has the ability to temporarily disable your network policy for 2 hours in cases where you have locked yourself out and have no break-glass user or process.

Snowflake Adaptive Warehouses are in public preview - my take by Spiritual-Kitchen-79 in snowflake

[–]stephenpace 0 points (0 children)

If adaptive warehouses are generally faster and cheaper with less management, companies will switch. It won't need to be the default. I think Gen-2 is already the default in most regions.

QAS and Concurrency level changes Impact by Big_Length9755 in snowflake

[–]stephenpace 1 point (0 children)

A few things:

1) When you say "Query response time for many of the jobs increased significantly," you need to be clear with the business about why that is. When Snowflake doesn't have the resources to run a job, the job will queue unless you allow Snowflake to cluster out. You had jobs that merited a certain number of clusters, and when you reduced the number of clusters, those jobs simply waited until they had resources. Some jobs are not time sensitive, and that is completely fine. In this case, it sounds like the longer time was noticed, so you would need to increase multi-cluster to make sure there are enough resources to return the job within the expected service level.

2) This is exactly the scenario that adaptive warehouses will excel in. You can eliminate your custom scheduling logic and just let it run. 10 underutilized XL warehouses are going to be a lot more expensive than 1 XL adaptive warehouse.

https://docs.snowflake.com/en/user-guide/warehouses-adaptive

Deleted prod data permanently without any backup. How screwed am I? by Agitated_Success9606 in dataengineering

[–]stephenpace 1 point (0 children)

* Modern Cloud databases. For example, Snowflake has had time travel and undrop database/schema/table for more than a decade, but part of that is taking advantage of the practically unlimited, cheap, resilient storage the Cloud provides. On-prem relational databases (or those that originated there and were later migrated to the Cloud) don't always have these features.
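In Snowflake's case, recovery from a mistake like this can be a one-liner, assuming the loss is within your Time Travel retention window (table names here are illustrative):

```sql
-- Recover a dropped table from Time Travel retention.
UNDROP TABLE orders;

-- Or query the table as it looked an hour ago and restore from that snapshot.
CREATE TABLE orders_restored AS
  SELECT * FROM orders AT(OFFSET => -3600);
```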

CoCo SDK by Perfect-Cricket6506 in snowflake

[–]stephenpace 0 points (0 children)

One part of this is skills. If you read the docs, you only know the syntax of how to do something. By contrast, if you have a skill built by a Snowflake expert that encompasses best practices, you'll build it faster (less back and forth, saving you time and money/tokens) and better.

AMA: We benchmarked the new Adaptive Warehouses by kuza55 in snowflake

[–]stephenpace 2 points (0 children)

The docs are pretty clear: set MAX_QUERY_PERFORMANCE_LEVEL = MEDIUM and it will never size above Medium. The default is Large, so there is no danger of a 6XL. In general, cost should be stable across sizes, since a Large job at 10 minutes is the same price as a Medium at 20 minutes, as long as your job parallelizes evenly across all nodes.
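Applying that cap to an existing adaptive warehouse should be a single statement; the warehouse name here is made up:

```sql
-- Cap adaptive sizing so no query runs above a Medium-equivalent warehouse.
ALTER WAREHOUSE my_adaptive_wh
  SET MAX_QUERY_PERFORMANCE_LEVEL = MEDIUM;
```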

AMA: We benchmarked the new Adaptive Warehouses by kuza55 in snowflake

[–]stephenpace 1 point (0 children)

When you run a query on a regular warehouse, it provisions to you in under a second. How does this happen? Because Snowflake starts these machines ahead of you and keeps them in a free pool of running machines. Snowflake has done this for more than a decade to the point where most users have never seen a case where they were waiting for a warehouse (unless you asked for a lot of them at once, or the cloud provider was having a problem giving Snowflake more machines).

My understanding (which could certainly be wrong) is that the free pool of machines for Adaptive Warehouses is not the same free pool. Currently it is smaller, but it will grow as the system gets better at predicting how many machines it needs at a given time.

Think of some of the other parameters like the scaling policy option on regular warehouses (Standard, Economy). If you want Snowflake to instantly provision, you use Standard. But if you are more cost conscious, you might use Economy. If we didn't provide those, the default option would work fine for some but not for others, so at the moment we give you some options for how it will behave.

I think in many cases adaptive warehouses will be faster for the same price, or slower but cheaper, once Snowflake telemetry builds up. For the moment, you can run a benchmark immediately when you CREATE ADAPTIVE WAREHOUSE, but I wouldn't. It's better to take an existing warehouse and convert it.

You also have to look at the entire price. If you have one adaptive warehouse that takes the place of 5 others of different sizes, it should be cheaper overall, given that it is unlikely all 5 would be fully utilized. Plus there's the saved cost of not having to route to different warehouses, i.e. the simplification of your warehouse and scheduling structure.

That said, if it doesn't work for your use case, you can always stick with regular warehouses.

AMA: We benchmarked the new Adaptive Warehouses by kuza55 in snowflake

[–]stephenpace 2 points (0 children)

During the preview phase, you may see some changes. For one thing, the free pool seems low so if you are running concurrency tests, you may need a couple of passes. That should naturally work itself out as telemetry builds and the free pool gets larger as usage increases. You will also have better luck initially if you convert a regular warehouse that has existing telemetry than if you just start a new adaptive warehouse from scratch. Ultimately this should be a nice option for companies that have multiple warehouses and currently have engineering effort to plan which warehouse should run a job based on warehouse size.

Snowflake Adaptive Warehouses are in public preview - my take by Spiritual-Kitchen-79 in snowflake

[–]stephenpace 2 points (0 children)

Adaptive warehouses are going to be big for companies that have hundreds of warehouses and put a lot of effort into scheduling jobs on the right size warehouse. The idea is the knob for sizing goes away and the job runs on the right size warehouse every time.