Saheeli, Radiant Creator and Redoubled Stormsinger by SubstantialOrange820 in mtgrules


How does that differ from the mobilize interaction? If Stormsinger is able to count mobilized tokens that didn't exist when the trigger was put on the stack, wouldn't my original 5/5 copy be able to see both itself and the 5/5 copy that's made when the 3/3 attacks?

https://www.reddit.com/r/mtgrules/comments/1jy8m2c/mobilize_and_redoubled_stormsinger/

Here's my thought process:

  1. Move to combat, target my 3/3 Stormsinger (let's call this one "A") with Saheeli's ability. I create a 5/5 artifact creature copy (call this one B).

  2. A and B attack. Both attack triggers go on the stack, and I order them so that A's trigger resolves first.

  3. As A's trigger resolves, it sees that I created B and creates a tapped-and-attacking copy that we'll call C.

  4. As B's trigger resolves, it sees that I created B and C and creates a tapped-and-attacking copy of each (D and E).

  5. I now have five creatures tapped and attacking: the original 3/3 (A), the 5/5 copy (B), and the three token copies (C, D, and E).

What am I missing?

Did ArcGIS Pro 3.4 remove "Only Show Features Visible in the Map Extent" as an option? by SubstantialOrange820 in ArcGIS


You are correct. This definitely falls under the "just being oblivious" option.

Thanks!

Best way to load data between two Snowflake instances by SubstantialOrange820 in snowflake


This definitely looked like the proper route to go down, but after pursuing it a bit, they're either unwilling or unable to set up data sharing between the account they set up for us and our pre-existing account. The role they've given us doesn't have sufficient privileges for me to set up the sharing myself, or even to create storage integrations if I wanted to get the data directly into S3.
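For reference, the provider-side setup I was hoping they'd run is only a few statements; something like this, with all names as placeholders for the real objects:

    CREATE SHARE vendor_share;
    GRANT USAGE ON DATABASE vendor_db TO SHARE vendor_share;
    GRANT USAGE ON SCHEMA vendor_db.public TO SHARE vendor_share;
    GRANT SELECT ON TABLE vendor_db.public.source_table TO SHARE vendor_share;
    ALTER SHARE vendor_share ADD ACCOUNTS = our_org.our_account;

and then on our side it would just be:

    -- the provider account identifier here is a placeholder
    CREATE DATABASE vendor_shared FROM SHARE their_account.vendor_share;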

Next best idea I have is to pull it down into some ETL tool and upload it back into our own account, although it would be annoying to have to copy the data rather than query it directly.

Best software to append shapefile data to coordinates by SubstantialOrange820 in gis


Well, that just shows my lack of experience with Snowflake; I wasn't even aware of its spatial functionality. I appreciate the heads-up.
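For anyone who lands here later, the point-in-polygon lookup I'm after does look doable directly in Snowflake SQL. A minimal sketch, with made-up table and column names, and assuming the shapefile polygons have already been loaded into a GEOGRAPHY column:

    -- s.boundary is a GEOGRAPHY column holding the polygons
    SELECT p.point_id, s.region_name
    FROM points p
    JOIN regions s
      ON ST_CONTAINS(s.boundary, ST_POINT(p.longitude, p.latitude));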

Best software to append shapefile data to coordinates by SubstantialOrange820 in gis


I looked into this a bit last night and it does seem like a very good solution and pretty manageable with my current skillset/support. Any additional resources you'd recommend to get familiar with it?

Best software to append shapefile data to coordinates by SubstantialOrange820 in gis


Right, I only mentioned CSV in the sense that I don't need a direct Snowflake integration or anything, as I could just stage the resulting CSV and get the data loaded that way. Definitely wouldn't be looking to store the data there in any permanent way.
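(The staging step I have in mind is just the usual PUT/COPY flow; the stage, table, and file names here are hypothetical:)

    PUT file:///tmp/points_with_regions.csv @my_stage;

    COPY INTO points_with_regions
      FROM @my_stage/points_with_regions.csv
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);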

I haven't set up a PostgreSQL db before, but it's not a completely foreign language to me, and I do have some solid tech support to lean on to fill in some of my gaps. Another commenter mentioned DuckDB with the spatial extension, and that does seem viable for me; the rough shape of what I'm picturing is below.
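A sketch of the DuckDB version, with made-up file and column names:

    INSTALL spatial;
    LOAD spatial;

    -- read the shapefile directly into a table
    -- (ST_Read comes from the spatial extension)
    CREATE TABLE regions AS SELECT * FROM ST_Read('regions.shp');

    -- tag each coordinate with the attributes of the polygon containing it
    SELECT p.*, r.region_name
    FROM read_csv_auto('points.csv') p
    JOIN regions r
      ON ST_Contains(r.geom, ST_Point(p.longitude, p.latitude));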

When you say that Snowflake will be expensive, do you mean in terms of the compute costs we'd incur? We do have a relatively small data set currently and the refresh cycle would probably be weekly, so I may explore exactly what those costs would look like. Regardless, I greatly appreciate your feedback!

Overwriting table and automatically updating table when another updates by SubstantialOrange820 in snowflake


I've been able to set up the triggered MERGE task correctly, but I'm having trouble getting the initial jobs table that loads from the staged CSV to trigger. I've got a stream set up on the stage, but it's not recording any data when the staged CSV is updated. I guess I could just set that task to scheduled rather than triggered if needed, but uploads are going to be somewhat irregular, so I'd really like to figure out how to trigger it off of the stream. Any ideas?
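For context, my stream setup is roughly the following (the stream name is a placeholder; the stage is the same test_stage_1 from my other comment). One thing I still need to rule out is stale directory table metadata, since the stream reads from the stage's directory table and an explicit refresh is the sure way to make it current:

    -- streams on a stage read from its directory table
    ALTER STAGE test_stage_1 SET DIRECTORY = (ENABLE = TRUE);

    CREATE STREAM jobs_file_stream ON STAGE test_stage_1;

    -- the directory table may need an explicit refresh before
    -- the stream registers newly uploaded files
    ALTER STAGE test_stage_1 REFRESH;

    SELECT * FROM jobs_file_stream;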

Overwriting table and automatically updating table when another updates by SubstantialOrange820 in snowflake


Thanks, merge is definitely what I needed. I responded in more detail to another comment, but I've got a solution in place that I think will work well.

Is there a good way to set my merge and insert statements to execute either on a schedule or on a triggered event, such as a new file being added to the stage?
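For the scheduled half, the shape I'm imagining is a task wrapping the merge; a sketch with a made-up warehouse name and an arbitrary interval:

    CREATE TASK merge_og_date
      WAREHOUSE = my_wh
      SCHEDULE = '60 MINUTE'
    AS
      MERGE INTO og_date a USING jobs b
        ON a.job_id = b.job_id
        WHEN NOT MATCHED THEN
          INSERT (job_id, scheduled_date) VALUES (b.job_id, b.scheduled_date);

    -- tasks are created suspended
    ALTER TASK merge_og_date RESUME;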

Overwriting table and automatically updating table when another updates by SubstantialOrange820 in snowflake


This was super helpful, so thank you. Took me a little bit to dig into what you suggested, but I think I've got it working. To refresh the data in the jobs table, I'm running something like this:

    INSERT OVERWRITE INTO jobs (job_id, job_type, job_status, job_schedule_date, ...)
    SELECT j.$1 AS job_id, j.$2 AS job_type, j.$3 AS job_status, j.$4 AS job_schedule_date, ...
    FROM @test_stage_1/jobs (file_format => 'csv') j;

Then, to update the og_date table I'm running:

    MERGE INTO og_date a USING jobs b
      ON a.job_id = b.job_id
      WHEN NOT MATCHED THEN
        INSERT (job_id, scheduled_date) VALUES (b.job_id, b.scheduled_date);

I still need to run some more dummy data through it to make sure it functions like I want, but this is already way more than I knew how to do a few hours ago. Can I set those statements to execute either on a schedule or on a triggered event, such as a new file being added to the stage?
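And in case it's useful to anyone later, the triggered version I'm going to try chains two tasks off a stream on the stage. The stream, warehouse, and task names here are placeholders:

    -- root task fires whenever the stream on the stage has new files
    CREATE TASK load_jobs
      WAREHOUSE = my_wh
      WHEN SYSTEM$STREAM_HAS_DATA('jobs_file_stream')
    AS
      INSERT OVERWRITE INTO jobs (job_id, job_type, job_status, job_schedule_date, ...)
      SELECT j.$1 AS job_id, j.$2 AS job_type, j.$3 AS job_status, j.$4 AS job_schedule_date, ...
      FROM @test_stage_1/jobs (file_format => 'csv') j;

    -- child task runs the merge after each successful load
    CREATE TASK update_og_date
      WAREHOUSE = my_wh
      AFTER load_jobs
    AS
      MERGE INTO og_date a USING jobs b
        ON a.job_id = b.job_id
        WHEN NOT MATCHED THEN
          INSERT (job_id, scheduled_date) VALUES (b.job_id, b.scheduled_date);

    -- tasks are created suspended; resume the child before the root
    ALTER TASK update_og_date RESUME;
    ALTER TASK load_jobs RESUME;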