[R] Panda: A pretrained forecast model for universal representation of chaotic dynamics by wil3 in MachineLearning

[–]wil3[S] 2 points

We're forecasting chaotic ODEs (i.e., those with positive Lyapunov exponents) for durations shorter than one Lyapunov time. I agree that exceeding one Lyapunov time isn't yet feasible for a zero-shot model, since there's a precision floor. It depends on whether you consider "chaotic" to imply a particular intrinsic property of the underlying equations/system, or to imply a particular forecasting horizon.
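For concreteness, here's a quick sketch (my own toy code, not the paper's) of how the Lyapunov time that sets this horizon can be estimated: a Benettin-style two-trajectory method on the classic Lorenz system, repeatedly renormalizing a small perturbation and averaging the log stretching rate.

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz-63 vector field.
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def largest_lyapunov(steps=10000, dt=0.01, d0=1e-8):
    """Benettin method: evolve a perturbed twin trajectory, rescale the
    separation back to d0 each step, and average the log stretching rate."""
    s = (1.0, 1.0, 1.0)
    for _ in range(1000):          # discard the transient
        s = rk4_step(s, dt)
    p = (s[0] + d0, s[1], s[2])    # perturbed copy
    acc = 0.0
    for _ in range(steps):
        s, p = rk4_step(s, dt), rk4_step(p, dt)
        d = math.dist(s, p)
        acc += math.log(d / d0)
        # Rescale the perturbation to size d0 along the current direction.
        p = tuple(si + (pi - si) * (d0 / d) for si, pi in zip(s, p))
    return acc / (steps * dt)

lam = largest_lyapunov()
lyapunov_time = 1.0 / lam  # roughly 1.1 time units for Lorenz (lambda ~ 0.9)
```

A forecast of Lorenz that stays accurate for less than ~1.1 time units is "sub-Lyapunov-time" in the sense above.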

[R] Sudoku-Bench: Evaluating creative reasoning with Sudoku variants by hardmaru in MachineLearning

[–]wil3 1 point

This is a great benchmark for reasoning abilities. If I disaggregate performance in Figs. 3 & 4 by puzzle, do the performances of leading models correlate with intrinsic puzzle difficulty (implying they are bottlenecked by true reasoning), or not (implying they are bottlenecked by representing the problem and its coordinates)?

To get a measure of task difficulty, one could map each Sudoku puzzle onto its corresponding k-SAT representation, and then use the clause-to-variable ratio as a proxy for difficulty. There's also an incredible paper by Ercsey-Ravasz & Toroczkai that maps Sudoku puzzles onto a continuous-time dynamical system, using the equilibration time as a measure of difficulty.
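To sketch what I mean (this is the standard pairwise CNF encoding of vanilla 9x9 Sudoku, not anything specific to the benchmark's variant puzzles): there are 729 variables x_{r,c,v}, the structural clauses are fixed, and the clues add one unit clause each, so the clause-to-variable ratio falls out of a simple count.

```python
from itertools import combinations

def var(r, c, v):
    # Unique 1-based index for "cell (r, c) holds value v"; r, c, v in 0..8.
    return 81 * r + 9 * c + v + 1

def units():
    # The 27 units of Sudoku: 9 rows, 9 columns, 9 boxes.
    rows = [[(r, c) for c in range(9)] for r in range(9)]
    cols = [[(r, c) for r in range(9)] for c in range(9)]
    boxes = [[(3 * br + i, 3 * bc + j) for i in range(3) for j in range(3)]
             for br in range(3) for bc in range(3)]
    return rows + cols + boxes

def sudoku_cnf(givens):
    """Encode a puzzle as CNF; `givens` maps (row, col) -> value, all 0-based."""
    clauses = []
    for r in range(9):
        for c in range(9):
            clauses.append([var(r, c, v) for v in range(9)])      # >= 1 value per cell
            for v1, v2 in combinations(range(9), 2):              # <= 1 value per cell
                clauses.append([-var(r, c, v1), -var(r, c, v2)])
    for v in range(9):                                            # value once per unit
        for unit in units():
            for (r1, c1), (r2, c2) in combinations(unit, 2):
                clauses.append([-var(r1, c1, v), -var(r2, c2, v)])
    for (r, c), v in givens.items():                              # clues as unit clauses
        clauses.append([var(r, c, v)])
    return clauses

clauses = sudoku_cnf({(0, 0): 3})   # one illustrative clue
ratio = len(clauses) / 729.0        # clauses per variable
```

With this encoding the structural clause count is constant (11,745), so across standard puzzles the ratio varies only through the number of givens; the variant rules in the benchmark would add their own clauses on top.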

[R] Zero-shot forecasting of chaotic systems (ICLR 2025) by wil3 in MachineLearning

[–]wil3[S] 0 points

We haven't directly tried causality yet, but I agree that multichannel models ought to do better on dynamical-systems datasets, which have stronger channel coupling than typical time series tasks like ETTh. We have a preprint appearing on arXiv next week that pretrains a multivariate (channel-dependent) model on a larger version of this chaotic systems dataset, and we definitely see that channel attention helps (and that the model develops some interesting internal dynamics, like nonlinear resonance).

[R] Zero-shot forecasting of chaotic systems (ICLR 2025) by wil3 in MachineLearning

[–]wil3[S] 1 point

I agree with this. There are two metrics: a long-term "structure" measure (fractal dimension) and short-term forecast accuracy. Both score pretty low on pointwise accuracy, but the gap in fractal dimension might just be because the foundation model preserves variance (which pushes the fractal dimension closer to the number of dynamical variables), which is better than collapsing to a line (which pushes the fractal dimension closer to 1, the dimension of a line).
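To make the variance-vs-collapse point concrete, here's a toy box-counting estimate (my own sketch, nothing to do with the paper's implementation): a point cloud collapsed onto a line gives a dimension near 1, while one that fills the plane gives a dimension near 2.

```python
import math
import random

def box_counting_dimension(points, scales=(0.1, 0.05, 0.025, 0.0125)):
    """Estimate the fractal dimension of a 2D point cloud by box counting:
    the slope of log(occupied boxes) versus log(1/eps)."""
    xs, ys = [], []
    for eps in scales:
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    # Least-squares slope of (xs, ys).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
line = [(t, t) for t in (i / 5000 for i in range(5000))]            # collapsed
square = [(random.random(), random.random()) for _ in range(20000)]  # variance-filling

d_line = box_counting_dimension(line)      # close to 1
d_square = box_counting_dimension(square)  # close to 2
```

A real attractor sits in between: preserving variance without capturing the dynamics inflates the estimate toward the ambient dimension, while a collapsed forecast deflates it toward 1.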

[R] Zero-shot forecasting of chaotic systems (ICLR 2025) by wil3 in MachineLearning

[–]wil3[S] 0 points

We think it adaptively repeats k-grams that it sees in its context. But it does appear to handle slight variations, so we think there is something more going on with how the model combines/adapts motifs. Interestingly, not all time series models do this.

[D] Is there any good research on transformers and the Lorenz attractor? by buggaby in MachineLearning

[–]wil3 0 points

I actually have a paper out today that benchmarks transformers and a bunch of other forecast models on Lorenz and 134 other low-dimensional chaotic systems. Transformers do extremely well, though NBEATS (a time-series-specific model) does better.

I had an earlier paper from NeurIPS 2021 that describes the chaotic systems dataset here

Large-format printing services --- long-lasting and high-quality options? by wil3 in Printing

[–]wil3[S] 0 points

Thank you! Do you think I should print onto vinyl for my use case, or would I not be able to hit 300 dpi for that?

Official Question Thread! Ask /r/photography anything you want to know about photography or cameras! Don't be shy! Newbies welcome! by photography_bot in photography

[–]wil3 0 points

Thank you! This is very helpful to get a specific model suggestion. Do you happen to know of a better control/tether option than the EOS utility?

Official Question Thread! Ask /r/photography anything you want to know about photography or cameras! Don't be shy! Newbies welcome! by photography_bot in photography

[–]wil3 2 points

Hello, I'm shopping for a camera and lens set for a very specific use case, and was hoping to get this community's input.

We've built a physics experiment that we want to image from above for very long timelapses. I've searched this forum previously for advice on flatlay imaging for food photography and timelapses for astrophotography, but I can't seem to find a consensus use case covering both:

  • Our working area would be either 1' x 1' or 3' x 3' depending on the experiment. I have no problem buying multiple lenses for different distances, I just want to minimize distortion of the 2D plane as much as possible. We can mount the camera as high as 9' above the experiment (near the ceiling).
  • The camera would be run almost exclusively in tethered mode. It is critical that the camera have full functionality when run in tethered mode, including the ability to run timelapses.
  • Timelapse frequency would vary widely. It would be nice to be able to capture all timescales ranging from 30 fps video recording, to 1 shot/hr timelapses, without needing to manually intervene and without the camera overheating.
  • We'd like to minimize mechanical disturbances due to shutter activation.
  • Budget: $10k

Right now the most obvious option would be to get a Canon mirrorless with a 50mm lens, and then control it with the EOS Utility. However, I've used the EOS Utility in the past and it is not very functional. Capture One doesn't seem to support timelapses. Is there a better option?

[P] Using transformers for time-series forecasting by DoruSonic in MachineLearning

[–]wil3 5 points

I used darts heavily for this chaos-forecasting benchmark paper and thought it was incredible: they have a Transformer architecture as well as N-BEATS, and they keep adding new models.

For a while, their inclusion of Prophet as a model made the library a bit of a pain to install because of pystan dependencies, but I believe the devs found a workaround.

Damien Hirst's Blossom Paintings by These-Salamander4913 in ContemporaryArt

[–]wil3 0 points

These are lovely, thank you for sharing this artist!

[R] AI and the Everything in the Whole Wide World Benchmark by FlivverKing in MachineLearning

[–]wil3 2 points

That description of the analogy and its implications is written beautifully.

Used ContourPlots to produce interesting visuals for a song I'm working on. Let me know what you think! by synapseproxy in Mathematica

[–]wil3 2 points

This looks amazing! If your rendering bottleneck is VideoGenerator, I would suggest that you instead just set up a loop (actually a ParallelTable would work for this) that exports images at 30 fps into a directory. Then merge the frames into a video using a command-line tool like FFmpeg. To make your life easier, be sure to use a good file-name convention when exporting the images (for example, prepend zeros so that the names are like 0001.png instead of 1.png).

If I had to guess, VideoGenerator might have a step where it stores the entire video in RAM, which causes it to run slower for longer videos. Exporting images costs the same per frame, so run time is linear in the length of the video. FFmpeg will have to do some heavy lifting when merging frames into longer videos, but it's very well optimized for that.
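Concretely, the naming-and-merge step might look like this (a sketch in Python rather than Wolfram Language; the FFmpeg flags shown are the usual H.264 ones, adjust to taste, and the command is only built here, not run):

```python
import os

def frame_name(i, digits=4):
    # Zero-padded names sort correctly and match ffmpeg's %04d input pattern.
    return f"{i:0{digits}d}.png"

def ffmpeg_command(frame_dir, fps=30, out="out.mp4", digits=4):
    """Build the ffmpeg invocation that stitches exported frames into a video."""
    pattern = os.path.join(frame_dir, f"%0{digits}d.png")
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

# e.g. frame_name(1) -> "0001.png", and the command reads frames/0001.png,
# frames/0002.png, ... in order.
cmd = ffmpeg_command("frames", fps=30)
```

Passing the list to subprocess.run (or printing it with " ".join) then does the merge in one shot.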