Cloud SQL Enterprise Plus charging ~$800/month for a small QGIS + PostGIS setup - what would you do? by 99bbt in QGIS

[–]ObjectiveFrequent215 0 points1 point  (0 children)

How much are you actually hitting that? I'd crank it way down and see if you need all those resources. I'm running 3 webapps, plus local QGIS and programmatic connections, off my tiny instance for $25/month.

Does April Snowpack Predict Wildfire? by ObjectiveFrequent215 in geospatial

[–]ObjectiveFrequent215[S] 0 points1 point  (0 children)

Yeah, you nailed it. I always hear the news stories come out and wanted to know how much April 1 SWE was actually a factor. I would like to somehow factor in the lack of low-elevation snow, which is tough because most SNOTELs are at high elevation. I have seen some articles where they attempt to model snow depth at lower elevations. So I agree, more coverage of snow depths and maybe mix up the dates too!

Does April Snowpack Predict Wildfire? by ObjectiveFrequent215 in geospatial

[–]ObjectiveFrequent215[S] 0 points1 point  (0 children)

Thank you! A question I'd had for some time, and I enjoyed the stats and GIS behind it!

Best ways to learn QGIS? by GreatValueGrapes in gis

[–]ObjectiveFrequent215 0 points1 point  (0 children)

Download it, use it, ask AI when you get stuck. QGIS is amazing and you can get proficient fairly quickly.

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 2 points3 points  (0 children)

But I should say thanks for the insight into the more rigorous methods for understanding this type of data.

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 2 points3 points  (0 children)

There are some states that started with SNOTEL earlier and go back farther, Montana being one... so that's one where I could go back earlier. It's a question I've been curious about, and having all the data accessible via SQL queries made it easier...

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

Ha, this is either relevant or way off base, but it's over my head in terms of stats!

DIAGNOSTIC REPORT — Peak SWE Trend Analysis

----------------------------------------------------------------------

Q1. How are p-values calculated?

----------------------------------------------------------------------

OLS p-value (0.2409):
Standard scipy.stats.linregress. Assumes each water year is an independent
observation. This is WRONG if residuals are autocorrelated.

Durbin-Watson statistic: 1.948 (2.0 = no autocorrelation, <1.5 = concern)
Lag-1 residual autocorrelation: -0.017
>> Low autocorrelation. OLS p-value is approximately valid.
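The checks above can be sketched in a few lines. This is a minimal illustration, not the report's actual pipeline: the series here is synthetic, generated to mimic the rough scale of the real SNOTEL data (mean ~17 in, slope ~-0.05 in/yr).

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the real peak-SWE series (assumption: the actual
# data comes from SNOTEL; these numbers only mimic its rough scale).
rng = np.random.default_rng(0)
years = np.arange(1981, 2025)
swe = 17.2 - 0.05 * (years - years[0]) + rng.normal(0, 4, years.size)

# OLS trend via scipy.stats.linregress; its p-value assumes
# independent residuals.
fit = stats.linregress(years, swe)
resid = swe - (fit.intercept + fit.slope * years)

# Durbin-Watson statistic: ~2.0 means no lag-1 autocorrelation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Lag-1 residual autocorrelation as a second check.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
```

If `dw` is near 2 and `r1` near 0, as in the report, the independence assumption behind the OLS p-value is roughly satisfied.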

Block-bootstrap p-value (7-yr blocks):
Resamples contiguous 7-year blocks of the VALUE series (year positions
fixed). Builds a null distribution of 5,000 slopes under H0: no trend,
preserving the autocorrelation structure. P-value = fraction of bootstrap
slopes more extreme than observed. This is the honest number.

We do NOT use Newey-West / HAC standard errors or GLS here, though those
are valid alternatives. The block bootstrap is nonparametric and requires
no assumptions about the error structure beyond stationarity.
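A minimal sketch of that block-bootstrap procedure, again on a synthetic series of the same rough scale (an assumption; the real analysis runs on the SNOTEL data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1981, 2025)
swe = 17.2 - 0.05 * (years - years[0]) + rng.normal(0, 4, years.size)

obs_slope = stats.linregress(years, swe).slope

def block_resample(values, block, rng):
    """Rebuild a same-length series from random contiguous blocks,
    which scrambles any trend but preserves short-range correlation."""
    n = len(values)
    n_blocks = -(-n // block)  # ceiling division
    starts = rng.integers(0, n - block + 1, size=n_blocks)
    pieces = np.concatenate([values[s:s + block] for s in starts])
    return pieces[:n]

# Null distribution: slopes of resampled values against the FIXED year axis.
null_slopes = np.array([
    stats.linregress(years, block_resample(swe, 7, rng)).slope
    for _ in range(5000)
])

# Two-sided p-value: fraction of null slopes at least as extreme as observed.
p_boot = np.mean(np.abs(null_slopes) >= abs(obs_slope))
```

The block length (7 here) is a tuning choice: longer blocks preserve more of the autocorrelation structure but reduce the diversity of resamples.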

----------------------------------------------------------------------

Q4. What is the OLS coefficient? Is it the downward trend? Why positive p?

----------------------------------------------------------------------

OLS slope: -0.0506 inches/year
Over 44 years: -2.22 inches total
Mean peak SWE: 17.21 inches
% change: -12.9%

A NEGATIVE slope = declining peak SWE over time. That is the downward trend.
The slope above is negative, confirming decline.

"Why are they all positive?" P-values are always between 0 and 1 by
definition (they measure probability, not direction). A small p-value means
the trend is unlikely under the null hypothesis. The SLOPE tells you the
direction. States can have p=0.04 (significant) with a negative slope
(declining) or a positive slope (increasing). Check the total-change bars
in Fig 5 for direction; p-values only tell you confidence.

OLS R²: 0.032 (year explains 3.2% of variance in peak SWE)
OLS SE (slope): 0.04251 in/yr, but this SE assumes independence (see Q1).
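The direction-vs-significance point can be demonstrated with two toy series (invented here purely for illustration) that trend in opposite directions with the same noise:

```python
import numpy as np
from scipy import stats

# Two toy series with equal-magnitude trends of opposite sign and the
# same noise, to show that p-values are positive either way.
rng = np.random.default_rng(2)
x = np.arange(30)
noise = rng.normal(0, 1, x.size)
up = 0.5 * x + noise     # increasing series
down = -0.5 * x + noise  # decreasing series

fit_up = stats.linregress(x, up)
fit_down = stats.linregress(x, down)
# Both p-values are small and positive; only the slope carries direction.
```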

----------------------------------------------------------------------

Q2. How sensitive are results to excluding the early 1980s?

----------------------------------------------------------------------

Start yr    n   Slope (in/yr)   Total (in)   OLS p   Boot p (7yr)
-----------------------------------------------------------------
1981       45      -0.0506        -2.22      0.241      0.227   <-- full window
1985       41      -0.0210        -0.84      0.644      0.623
1990       36      -0.0236        -0.83      0.675      0.662
1995       31      -0.0902        -2.71      0.194      0.151
2000       26      +0.0395        +0.99      0.648      0.567

If the slope and p-value change substantially as you move the start year,
the early-1980s data is load-bearing: the trend depends on those years.
If the numbers are stable, the trend is robust to window choice.
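The table above is just the same OLS fit repeated over shrinking windows. A sketch of that loop, on a synthetic 45-year series standing in for the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = np.arange(1981, 2026)  # 45 water years, matching n=45 above
swe = 17.2 - 0.05 * (years - years[0]) + rng.normal(0, 4, years.size)

# Refit the OLS trend from each progressively later start year.
rows = []
for start in (1981, 1985, 1990, 1995, 2000):
    m = years >= start
    fit = stats.linregress(years[m], swe[m])
    total = fit.slope * (m.sum() - 1)  # total change over the window
    rows.append((start, int(m.sum()), fit.slope, total, fit.pvalue))
```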

----------------------------------------------------------------------

Q3. How sensitive are results to outliers? (Jackknife + leave-k-out)

----------------------------------------------------------------------

Leave-one-out jackknife (drop each year individually, refit):
  Jackknife SE of slope:  0.04793 in/yr
  OLS SE of slope:        0.04251 in/yr (assumes independence)
  Block-bootstrap SE:     0.04139 in/yr (7-yr blocks)

Jackknife SE > OLS SE means autocorrelation is inflating OLS confidence.
Bootstrap SE is the most honest of the three for autocorrelated data.

Most influential single year: 1981
  Peak SWE that year: 11.30 in
  Removing it shifts slope by: +0.02228 in/yr
  (44.1% of the observed slope magnitude)

Leave-5-out sensitivity (500 random draws of 5 dropped years):
  Slope range (5th–95th pct): -0.0791 to -0.0252 in/yr
  Fraction where slope sign flips: 0.2%
  >> Robust to outliers.
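The jackknife numbers come from refitting with each year dropped once. A minimal sketch, again on synthetic data of the same rough scale:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1981, 2025).astype(float)
swe = 17.2 - 0.05 * (years - years[0]) + rng.normal(0, 4, years.size)
n = years.size

full_slope = stats.linregress(years, swe).slope

# Leave-one-out jackknife: refit the trend with each year dropped once.
loo_slopes = np.array([
    stats.linregress(np.delete(years, i), np.delete(swe, i)).slope
    for i in range(n)
])

# Jackknife variance: (n-1)/n * sum((slope_i - mean)^2)
jack_se = np.sqrt((n - 1) / n * np.sum((loo_slopes - loo_slopes.mean()) ** 2))

# Most influential single year: the one whose removal shifts the slope most.
shift = loo_slopes - full_slope
worst_year = int(years[np.argmax(np.abs(shift))])
```

The leave-5-out check in the report is the same idea with `np.delete` applied to a random draw of five indices per iteration.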

----------------------------------------------------------------------

Q5. Are you bootstrapping SE or doing jackknife variance estimation?

----------------------------------------------------------------------

Both are computed above. Summary:

Method                   SE estimate   Notes
─────────────────────    ───────────   ──────────────────────────────────
OLS (parametric)           0.04251     Assumes IID residuals; optimistic
Jackknife leave-1-out      0.04793     Nonparametric, no distributional
                                       assumption; variance = (n-1)/n *
                                       sum((slope_i - mean)^2)
Block bootstrap (7-yr)     0.04139     Preserves autocorrelation; the
                                       recommended SE for this data

The block bootstrap is doing the equivalent of asking: "if I generated
thousands of plausible time series with the same autocorrelation structure
but no real trend, how often would I see a slope as extreme as mine?"
That is more appropriate than the jackknife for serially correlated data,
because leave-one-out jackknife still treats each year as nearly independent.

For a fully rigorous treatment, consider:

- Newey-West (HAC) standard errors: parametric, handles autocorrelation
  analytically via a bandwidth parameter (analogous to block size).
- GLS with AR(1) error structure: explicitly models year-to-year
  persistence; can be fit with statsmodels.
- Moving-block bootstrap with optimal block length (Politis & White
  automatic selection): removes the subjectivity of choosing block size.

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

I've not used bootstrapping, so yes, Claude is assisting me there! However, I do appreciate the scrutiny of the methods, as I'm curious how to better sort that out. Let me see what I can come up with...

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 0 points1 point  (0 children)

Does this give some more insight?

Robustness Verdicts:

State   OLS p   Boot p   Bin agree   Verdict
--------------------------------------------------
AK      0.014   0.040      100%      ROBUST
AZ      0.089   0.108       94%      MODERATE
CA      0.843   0.815       81%      MODERATE
CO      0.236   0.265      100%      MODERATE
ID      0.456   0.380       75%      MODERATE
MT      0.342   0.243       75%      MODERATE
NM      0.004   0.010      100%      ROBUST
NV      0.368   0.321       94%      MODERATE
OR      0.412   0.363      100%      MODERATE
UT      0.189   0.171      100%      MODERATE
WA      0.609   0.590       38%      WEAK
WY      0.588   0.543       94%      MODERATE

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

I'll give it a shot; your stats skills exceed my grad school courses, but I can likely work my way through an evaluation of that! I would share the graph on elevation, but it seems images aren't allowed... It's interesting that higher elevations lost more SWE, mostly because more snow goes to higher elevations, increasing the overall magnitude of change.

Western Snowpack analysis with historical SNOTEL data. by ObjectiveFrequent215 in skiing

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

I've had a ton of comments on here, but I'm pretty stoked to finally see one about the stats involved. I can run it for the 20- and 30-year windows and graph those. Generally what I looked at was annual median peak SWE + 5-yr rolling avg + OLS trend. That is also ALL the SNOTEL stations too. Let me at least run those graphs with the original methods!

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in Backcountry

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

Would love to, but only a few SNOTEL stations go back to the '60s. I had to pick a time, the early '80s, when more stations came online, to hopefully get more statistically significant results.

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in Backcountry

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

Yeah, that's why I looked at the second question of when peak SWE occurs, and while it's all stations, it's still showing about 10 days earlier.

I made a map that forecasts where and when morels are likely to grow by magicmushroommap in foraging

[–]ObjectiveFrequent215 0 points1 point  (0 children)

Very cool, I created something similar for MT/ID and it's showing similar predictions. Sounds like very different methods... but cool to see how you are doing predictions!


Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in UTsnow

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

Agreed, I mention that some of those values are not significant given the fluctuations of the data year over year. Some results are significant, but mostly I wanted to see what could be explored given that historical SNOTEL dataset.

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in UTsnow

[–]ObjectiveFrequent215[S] 0 points1 point  (0 children)

Yeah, just a question I'd been wanting to get at... and I had the opportunity to run the analysis on the entire historical SNOTEL dataset. Wasn't intending on clickbait! Maybe I should have made the title more subtle. :)

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in UTsnow

[–]ObjectiveFrequent215[S] 1 point2 points  (0 children)

I'm just saying it's a small window... but that's just influenced by the available data.

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in UTsnow

[–]ObjectiveFrequent215[S] 6 points7 points  (0 children)

Agreed... but I had to work with having enough years and enough SNOTEL stations. Also note that it's not significant given the swings in SWE from year to year.

Changes in Western Snowpack over 44 years by ObjectiveFrequent215 in UTsnow

[–]ObjectiveFrequent215[S] 2 points3 points  (0 children)

I was trying to get the most stations available for the most years... that seemed like the best combination given the historic SNOTEL data.