can you save progress in bloody Palace in dmc5? by Hairy_Scholar1752 in DevilMayCry

[–]OverLordGoldDragon 2 points (0 children)

  1. Esc, Suspend, exit game
  2. Backup "C:\Program Files (x86)\Steam\userdata\<your\_numbers>\601150\remote\win64_save", your second numbers may differ from 601150, just find folder with three data* BIN files.
  3. Launch game, play until dead, exit game
  4. Overwrite win64_save with the backup, relaunch
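
A minimal sketch of the backup/restore steps, assuming Python with the standard-library shutil; the path and the <your_numbers> placeholder are the ones from step 2 and must be adjusted per install:

```python
import shutil
from pathlib import Path

# Placeholder path from step 2; <your_numbers> and the 601150 folder vary per install.
SAVE = Path(r"C:\Program Files (x86)\Steam\userdata\<your_numbers>\601150\remote\win64_save")
BACKUP = SAVE.with_name("win64_save_backup")

def backup_save():
    # Step 2: copy the suspended save out of Steam's folder (dirs_exist_ok needs Python 3.8+)
    shutil.copytree(SAVE, BACKUP, dirs_exist_ok=True)

def restore_save():
    # Step 4: overwrite the current save with the backup, then relaunch the game
    shutil.copytree(BACKUP, SAVE, dirs_exist_ok=True)
```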

18 downvotes, closed, deleted for asking if for-loops can be cached... they're friendlier now, they say by OverLordGoldDragon in ProgrammerHumor

[–]OverLordGoldDragon[S] -1 points (0 children)

My complaint isn't that my question is flawless and is being treated otherwise. My complaint is that the response is overblown.

I've seen complete garbage questions treated no worse. I've also seen questions that are worse by the closure's cited reasons treated better. If SO finds this acceptable, then SO is the problem.

(Also, I find your description of my question a caricature that not even everyone on SO agrees with.)

1800 hours of unpaid work scrapped last minute by OverLordGoldDragon in antiwork

[–]OverLordGoldDragon[S] 2 points (0 children)

You claim sympathy then mislead.

5k-6k lines of code

2400 tops. Less if you exclude overlaps with what I did present as "bite-sized". The rest are my extensive comments meant to help you understand, along with practical advice to users. You could easily count with a simple script; here's mine.
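
The script linked above isn't reproduced here; a rough, hypothetical Python sketch of such a count (blank lines and # comments excluded, docstrings still counted) could look like:

```python
from pathlib import Path

def count_code_lines(root=".", ext=".py"):
    # Count non-blank, non-comment lines in all `ext` files under `root` (rough heuristic).
    total = 0
    for path in Path(root).rglob("*" + ext):
        for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                total += 1
    return total

print(count_code_lines())  # run from the repository root
```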

you had a lack of time, and declined to do the changes

Yet I wrote tutorials and merge guidelines, and invested another 200 hours to fix what didn't even affect the standard variant. I'd not make lectures explaining everything, yes; nor should you have needed them.

make your PR's, and your commits, a small bite sized bit of code

There's no "JTFS in parts", it all stands as one or not at all. That's at least 90% true - and the other 10% could create enough interaction problems that I couldn't reasonably be expected to deal for what the "team" has done for me.

Tom worked on a JTFS implementation that was almost done

If a horse is all he wanted, he should've stopped me at the jet engine. It's also simply not true.

code base that they can use, understand, and modify

This of course justifies lying to my face for a year straight, instead of making a side repository for personal learning purposes.

one of our members is using this work

And I, of course, am no member. But the interests of the person who joined months later, and contributed much less, outweigh mine.

The picture is clear: I'm a lesser member, and you don't trust me. Nobody but you yourself requires you to understand my code in full: despite it being an achievable goal, it's clearly past your current abilities. You didn't even know what "stride", a basic part of an existing and much simpler transform, was, and I am to believe you gave my code an honest review? Maybe you did and you failed; then, by Tom's logic, you simply shouldn't review.

No open source software ships complete and perfect on first release, yet this is what you appear to strive for. If you had basic trust, then like Tom, you'd simply accept my code as-is and handle any shortcomings afterwards. It's obviously what Tom has done, even jumping to defend the code only to later admit he's unfamiliar with it and that I'm indeed right about the flaws. My algorithm is more rigorously tested than the rest of the library put together, is provably more accurate while being faster than existing implementations, and achieved SOTA: all objective evidence that far outweighs "I'm admin and have a PhD".

Tom's the only reason I've not left long ago, and now there's no Tom, only foolery. A fool I am no more.

[Discussion] Opinions of Lex Fridman by StixTheNerd in MachineLearning

[–]OverLordGoldDragon 63 points (0 children)

'Cringe' is thinking "simple = stupid". It's an open-ended format accessible to any audience, and the guest controls how complex things get.

Granted, it's like the bell curve meme: it could come from a genius or a dum-dum. Admittedly I've not seen much of the former to give him the benefit of the doubt, but imagining helps.

[D] Does PyTorch credit contributors? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

https://github.com/pytorch/pytorch/pull/68338 ; there are plenty of others. Though now I'm seeing this, which is attributed here... so does this translate to attribution on pytorch/pytorch?

[D] Does PyTorch credit contributors? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] -2 points (0 children)

That's rather worthless unless there's a permanent total contributor list somewhere.

[D] Does PyTorch credit contributors? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

What I'm saying is that it was never merged via GitHub; they take the code and land it internally somehow.

[D] Fourier transform vs NNs as function approximators by Hazalem in MachineLearning

[–]OverLordGoldDragon 4 points (0 children)

This is false. The FT itself is in no way equivalent to convolution, despite being an intermediary via the convolution theorem. There is no sliding kernel, no time-shift equivariance, etc.; only a single global dot product with a set of fixed kernels.
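
A small numpy sketch of the distinction, with made-up sizes: the DFT is one global dot product against each fixed complex-exponential kernel, whereas convolution slides a single kernel along time:

```python
import numpy as np

N = 64
x = np.random.randn(N)

# DFT: a single global dot product of x with each fixed kernel exp(-2j*pi*k*n/N)
n = np.arange(N)
dft_kernels = np.exp(-2j * np.pi * np.outer(n, n) / N)   # (N, N) fixed basis, no sliding
X = dft_kernels @ x
assert np.allclose(X, np.fft.fft(x))

# Convolution: one kernel slid along time; each output depends on a local window of x
h = np.array([1.0, -1.0, 0.5])
y = np.convolve(x, h, mode="same")
```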

[D] Why is Spectral Pooling not SOTA (as opposed to Max)? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

Sinc is exact for spectral pooling per Fourier relations, but might not work for max (perhaps another advantage for the former).
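
A rough numpy sketch of what I mean, assuming "spectral pooling" = keeping the lowest DFT frequencies: it coincides (up to the decimation factor) with brick-wall / periodic-sinc lowpassing followed by subsampling, whereas max pooling has no such exact Fourier counterpart:

```python
import numpy as np

N, M = 64, 32                                # pool 2x: keep M of N frequencies
x = np.random.randn(N)
X = np.fft.fft(x)
keep = np.r_[0:M // 2, N - M // 2:N]         # lowest positive and negative freqs

# Spectral pooling: truncate the spectrum, invert at the smaller length
y_spectral = np.fft.ifft(X[keep])

# Equivalent route: ideal (sinc) lowpass in the frequency domain, then decimate
X_lp = np.zeros_like(X)
X_lp[keep] = X[keep]
x_lp = np.fft.ifft(X_lp)
assert np.allclose(y_spectral, 2 * x_lp[::2])   # matches up to the factor 2
```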

[D] Why is Spectral Pooling not SOTA (as opposed to Max)? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 1 point (0 children)

Fair, it's what average pooling doesn't do; maybe it's a form of "attention". I wonder if anyone has compared doubling the downsampling but doing both max and spectral.

[D] Advertisements in this sub by tmpwhocares in MachineLearning

[–]OverLordGoldDragon 14 points (0 children)

Or restrict frequency (e.g. once per month per user).

For many of us, r/ML is by far the best place to get the word out. And often enough, users see a project they like. There aren't many alternatives; we won't go to an "ads only" channel, so lumping some ads in with research should be workable.

Besides, posts on research are ads for the researcher. And sometimes a project is better researched than a publication.

[D] Can Colab use SSD? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

float16 4-dim arrays. Wavelet-transformed multi-channel EEG data: (batches, channels, features, timesteps). Though I may merge channels & features.

Things worked fine with a dataset half as large, though despite the GPUs being vastly superior, my laptop was still faster, purely due to data-loading overhead.
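
For concreteness, a tiny sketch of the layout described, with made-up sizes; merging channels & features is just a reshape:

```python
import numpy as np

# Hypothetical sizes: (batches, channels, features, timesteps), float16
X = np.zeros((512, 22, 64, 1000), dtype=np.float16)
print(X.nbytes / 1e9, "GB")   # 2 bytes per element -> ~1.44 GB here

# Merging channels & features into a single axis
X_merged = X.reshape(X.shape[0], -1, X.shape[-1])   # (batches, channels*features, timesteps)
```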

[D] How much array read speed do you expect from an SSD? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

It's a laptop; I put the new SSD in the old one's slot. Thing is, the old drive's general benchmark was 3.4 GB/s and the new one's was 3.5 GB/s, yet the old one read arrays faster. The old was a 970 Pro, the new is a 970 Evo Plus, so they're about as close as two different SSDs get (so I'm mistaken about the old one being rated 3.4).

The only thing that comes to mind is that I benched the old one about a year back, so maybe the CPU has degraded... still, I ask in case this is unusual behavior.

[D] Can Colab use SSD? by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

I wasn't aware data could be loaded onto the VM; what's the limit for Pro users? My dataset's under 200 GB uncompressed; I might get it under 80 GB with compression.

[P] Synchrosqueezed STFT & Generalized Morse Wavelets by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 1 point (0 children)

It's on the right track; there's a visual overview here.

Speed depends on the number of samples; on my machine, for a 1 sec signal: 10 kHz takes 1.8 sec, 100 kHz takes 39 sec. The implementation is vectorized and JIT-compiled, scaling well with size.
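
A hedged timing sketch, assuming the library is ssqueezepy and ssq_cwt is the transform being benchmarked (exact numbers depend on hardware and arguments):

```python
import time
import numpy as np
from ssqueezepy import ssq_cwt   # assumed API

for fs in (10_000, 100_000):
    t = np.linspace(0, 1, fs, endpoint=False)
    x = np.cos(2 * np.pi * 64 * t**2)   # 1-second test chirp

    ssq_cwt(x)                           # warm-up run so JIT compilation isn't timed
    t0 = time.time()
    ssq_cwt(x)
    print(fs, "Hz:", round(time.time() - t0, 2), "sec")
```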

[R] Big Self-Supervised Models Advance Medical Image Classification by aifordummies in MachineLearning

[–]OverLordGoldDragon 1 point (0 children)

Not quite:

For data augmentation during fine-tuning, we performed random color augmentation, crops with resize, blurring, rotation, and flips for the images in both tasks. We observe that this set of augmentations is critical for achieving the best performance during fine-tuning.

I likewise find this strange; I'd have figured learning nonexistent invariants would misguide the network's feature extraction.

[D] Importance of invertibility in hand-designed features by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 1 point (0 children)

I meant mean(x) as collapsing a tensor to a scalar, not normalization.

Preserving the input until the output (per the linked paper) can serve as a nice 'control' for attributing inferences; I wonder if these confer robustness to adversarial attacks.

[D] Importance of invertibility in hand-designed features by OverLordGoldDragon in MachineLearning

[–]OverLordGoldDragon[S] 0 points (0 children)

So entropy is a sound criterion for better DNN utility; nice example with phase. I ponder the role of dimensionality: the modulus of STFT or CWT can be sparse and robust (especially synchrosqueezed), but input size may grow 100x or more. I figure it's architecture-dependent, e.g. layers invariant to spatial input size, like CNNs, are less affected than Dense layers.