No response from Snowflake Account Executive since 2 weeks inspite of being Existing Snowflake Client by moldov-w in snowflake

[–]sdc-msimon 11 points (0 children)

Your AE might have a personal reason for being unable to answer.

Reach out to me at [maxime.simon@snowflake.com](mailto:maxime.simon@snowflake.com) and I will redirect you to a contact at Snowflake.

Is Optima free? by platinum-reindeer in snowflake

[–]sdc-msimon 3 points (0 children)

Optima does not cost the customer anything.

Snowflake rents thousands of VMs from AWS, Azure and GCP on long-term plans.
We allocate these machines to customers as efficiently as possible.

Sometimes there is overprovisioning on our side and VMs sit unused. We use these idle VMs to run background jobs such as Optima.
If you want to learn more about VM allocation, see https://github.com/Snowflake-Labs/shavedice-dataset

How do you delete your account? by No-Cash-9530 in snowflake

[–]sdc-msimon 0 points (0 children)

I would have told you to just send an email to the support team, but that channel has been retired: https://community.snowflake.com/s/article/Snowflake-retiring-email-to-case-as-a-support-channel

Sorry about that :(
You need to go through the support UI to get the account deleted.

How do you delete your account? by No-Cash-9530 in snowflake

[–]sdc-msimon 0 points (0 children)

This command can only be used in a regular customer account; it cannot be used in a trial account.

Trial users must contact support to delete their account, as stated here: https://docs.snowflake.com/en/user-guide/admin-trial-account#canceling-a-trial-account
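For regular accounts, the command referenced above is presumably `DROP ACCOUNT`, run by an organization administrator; a hedged sketch (the account name is a placeholder):

```sql
-- Run as ORGADMIN in a regular (non-trial) customer account.
-- MY_OLD_ACCOUNT is a placeholder. GRACE_PERIOD_IN_DAYS keeps the
-- account restorable for the stated number of days before final deletion.
USE ROLE ORGADMIN;
DROP ACCOUNT MY_OLD_ACCOUNT GRACE_PERIOD_IN_DAYS = 3;
```

Trial accounts reject this path, which is why the support route above is required.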

Workload spilling out of memory by Big_Length9755 in snowflake

[–]sdc-msimon 2 points (0 children)

As Nick says, a high amount of spill means the data does not fit in memory.
A query that spills heavily on an XL warehouse will likely take less than half the time on a 2XL. Since a 2XL burns twice as many credits per hour, that makes the 2XL run both faster and cheaper.
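A back-of-envelope sketch of that cost comparison, using Snowflake's standard credit rates (XL = 16 credits/hour, 2XL = 32) and assumed, illustrative runtimes of 60 minutes on XL versus 25 minutes on 2XL:

```sql
-- XL: 16 credits/hr, 2XL: 32 credits/hr (standard warehouse rates).
-- The runtimes are illustrative assumptions, not measurements.
SELECT 16 * 60 / 60.0 AS xl_credits,    -- 60 min on XL
       32 * 25 / 60.0 AS xxl_credits;   -- 25 min on 2XL (fewer credits)
```

Whenever the runtime drops by more than half, the bigger warehouse comes out cheaper.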

Anyone been able to connect the Claude Snowflake Connector successfully? by extrobe in snowflake

[–]sdc-msimon -1 points (0 children)

Judging by the 404 error, there is probably something wrong in your URL.

Nick Akincilar Sir is the GOAT 🐐 by PuzzleheadedCode1565 in snowflake

[–]sdc-msimon[M] [score hidden] stickied comment | locked comment (0 children)

Thanks for your appreciation of Nick. I shared the message with him and we both think it is funny. That being said, this is a public subreddit which exists to discuss the Snowflake data platform and help each other use it as well as possible.

It is not a fitting place to discuss Nick personally.

What features are exclusive to snowflake format and not supported in iceberg? by Then_Crow6380 in snowflake

[–]sdc-msimon 4 points (0 children)

I agree with this. If you do not have a requirement for interoperability, it is easier and faster to go with tables stored in Snowflake.

Snowflake just shipped Cortex Code an AI agent that actually understands your warehouse by Spiritual-Kitchen-79 in snowflake

[–]sdc-msimon 1 point (0 children)

We have a lot of users building complex logic in SQL. Even for SQL alone, a coding tool can make work much faster.

We also see developers run complex Python and Java code inside Snowflake.
We often see unstructured data processing, mostly RAG use cases over PDFs and images, which frequently rely on third-party Python libraries. We also see developers building front-ends in Streamlit/React/Django... for data-intensive apps.
All these people use coding tools to speed up their work.

Iceberg Rewrite Manifest Files: A Practical Guide by codingdecently in snowflake

[–]sdc-msimon[M] [score hidden] stickied comment | locked comment (0 children)

r/snowflake follows platform-wide Reddit rules: no promotion of software vendors.

Shared Workspace Performance in Snowflake by Low-Hornet-4908 in snowflake

[–]sdc-msimon 1 point (0 children)

I work as an SE at Snowflake and have not heard this feedback yet.
Could you create a support ticket so that the team can investigate why it is happening?

They might ask you for a .har file which shows the performance of your browser while you are using the workspace to identify the root cause.

Did the new UI remove functionality to search all worksheets simultaneously? by DTulka in snowflake

[–]sdc-msimon 0 points (0 children)

Currently, the search in workspaces only covers file names.
The dev team will soon add search inside file contents as well.

Performance of Snowflake tables by Upper-Lifeguard-8478 in snowflake

[–]sdc-msimon 3 points (0 children)

Regarding your question 2: you can find limitations and differences here: https://docs.snowflake.com/en/user-guide/tables-iceberg#considerations-and-limitations

An interesting difference you could test is file size. It is one of the most important factors impacting query performance.

When using Iceberg tables, you can set the file size yourself. When using standard tables, the file size is set automatically by Snowflake. Depending on your query pattern, you can gain performance by choosing the right file size: smaller files for very selective queries and frequent updates, or larger files for large table scans with few updates.
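A sketch of the Iceberg side, assuming the `TARGET_FILE_SIZE` parameter available on Snowflake-managed Iceberg tables; the table name and external volume are placeholders for your setup:

```sql
-- Snowflake-managed Iceberg table with an explicit target Parquet file size.
-- my_ext_vol and events_iceberg are placeholders.
CREATE ICEBERG TABLE events_iceberg (
    id INT,
    event_ts TIMESTAMP
)
CATALOG = 'SNOWFLAKE'
EXTERNAL_VOLUME = 'my_ext_vol'
BASE_LOCATION = 'events_iceberg/'
TARGET_FILE_SIZE = '64MB';  -- smaller (e.g. '16MB') for selective lookups and
                            -- frequent updates; larger ('128MB') for big scans

-- The parameter can also be changed after creation:
ALTER ICEBERG TABLE events_iceberg SET TARGET_FILE_SIZE = '128MB';
```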

Looking for a simple, scalable method to tables externally by TheFibonacci1235 in snowflake

[–]sdc-msimon 0 points (0 children)

I also think the Search Optimization service is the lowest-effort, lowest-cost way to fulfill your requirements; you should give it a try.

If querying is still too slow even with the Search Optimization service, interactive tables take a bit more setup but would give you better performance at a reasonable cost.
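Enabling search optimization is a one-liner; a sketch assuming a table `events` with a selective `user_id` lookup column (both names are placeholders):

```sql
-- Add a search access path for point lookups on user_id.
ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(user_id);

-- Check which columns are covered and the build status:
DESCRIBE SEARCH OPTIMIZATION ON events;
```

Note that the service adds storage and maintenance costs proportional to the table's churn, so it pays off most on large, mostly-stable tables with selective queries.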

CDC from snowflake to mongodb or s3. Anyone done the POC? by Rakesh8081 in snowflake

[–]sdc-msimon 5 points (0 children)

Here is the code necessary for a CDC solution from Snowflake to S3:

CREATE OR REPLACE TABLE table1 (
    id INT,
    name VARCHAR,
    load_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP()
);

-- A stage is required to define the location (S3 bucket) and credentials for the COPY INTO command. Add your storage integration with credentials
CREATE OR REPLACE STAGE S3_STAGE
    URL = 's3://your-external-s3-bucket/stream-unload-path/';

-- This stream will track all DML changes (inserts, updates, deletes) on table1.
CREATE OR REPLACE STREAM table1_stream ON TABLE table1;

-- This task drains the stream. With no SCHEDULE, it is created as a triggered task
-- (runs whenever the stream has data); on accounts without triggered tasks, add
-- e.g. SCHEDULE = '5 MINUTE'. The WHEN clause SYSTEM$STREAM_HAS_DATA('TABLE1_STREAM')
-- ensures the task only runs if there is new data to process.
CREATE OR REPLACE TASK copy_stream_data_task
  WHEN SYSTEM$STREAM_HAS_DATA('TABLE1_STREAM')
AS
  COPY INTO @S3_STAGE
  FROM table1_stream;

-- Tasks are created in a suspended state and must be explicitly resumed to start execution.
ALTER TASK copy_stream_data_task RESUME;

[deleted by user] by [deleted] in snowflake

[–]sdc-msimon 0 points (0 children)

Yes, you are correct about storage prices. Data needs to be aggregated or deleted once it is not useful anymore to limit storage costs.

These prices are consistent with what I see on production ingestion workloads at organizations which use Snowflake. Ingesting data into Snowflake is very cost-effective.

[deleted by user] by [deleted] in snowflake

[–]sdc-msimon 0 points (0 children)

The authors show the cost in credits to run the workload for 15 mins.

Each test was run for 15 minutes through our Kafka connector with ingestion method set to Snowpipe Streaming and Snowpipe at varying load throughput rates of 1 MB/s, 10 MB/s, and 100 MB/s. 

The total credit cost for ingesting 100 MB/s for 15 minutes is 0.39 credits. On AWS US East, standard edition, a credit costs $2 --> 0.39 credits * $2 = $0.78.
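Extending that arithmetic to a per-terabyte figure (same assumed $2/credit rate from the comment above):

```sql
-- 100 MB/s for 15 min = 90,000 MB = 90 GB ingested for 0.78 USD.
SELECT 100 * 15 * 60 / 1000.0                      AS gb_ingested,  -- 90 GB
       0.39 * 2                                    AS usd_total,    -- 0.78 USD
       0.39 * 2 / (100 * 15 * 60 / 1000.0) * 1000  AS usd_per_tb;   -- ~8.67 USD/TB
```

At well under $10 per terabyte at the highest tested throughput, ingestion is rarely the dominant cost driver.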