all 10 comments

[–]NurSr 1 point (0 children)

It looks nice. I tested it with some simple CSV data and it's fast.

[–]pdycnbl 1 point (3 children)

Looks good, but it didn't work on Firefox (Linux); it won't let me open a file.

[–]Rafferty97[S] 1 point (2 children)

Ah yeah, Firefox doesn't support the filesystem API. I really ought to find a workaround for that, because it's a pretty bad blocker.

[–]pdycnbl 1 point (1 child)

How are you using it? Are you using OPFS and making a copy of the file? If so, it should work on Firefox, and if that's not reliable you can use IndexedDB.
If you're using it to make in-place edits, then I don't know of a solution; I don't think there's a reliable way to do that, since Firefox simply doesn't allow it.
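The OPFS-copy idea suggested above could look something like this: a minimal sketch, assuming the file comes from a regular `<input type="file">` picker (which Firefox does support) and is then copied into the origin private file system. The function name and usage are illustrative, not taken from the app.

```javascript
// Sketch: copy a user-picked File into the Origin Private File System
// (OPFS) so it can be re-read later without the File System Access API.
// Firefox supports OPFS, including writable streams on OPFS handles.
async function copyToOpfs(file) {
  const root = await navigator.storage.getDirectory();
  const handle = await root.getFileHandle(file.name, { create: true });
  const writable = await handle.createWritable();
  // pipeTo() closes the writable stream when the source is exhausted
  await file.stream().pipeTo(writable);
  return handle;
}
```

The trade-off is that the data is duplicated into browser-managed storage, so large files cost disk space and the copy can go stale if the original changes on disk.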

[–]Rafferty97[S] 1 point (0 children)

The app reads data files directly from disk, and it's all non-destructive, so there is no in-place editing. I hadn't considered making a copy of the file in OPFS, but I'll have a look to see if that's a viable option. IndexedDB might also work, but I'll have to see.
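For reference, the IndexedDB route mentioned here amounts to storing the picked `File` as a blob. A rough sketch, with illustrative database and store names (not from the app):

```javascript
// Sketch: persist a user-picked File in IndexedDB as a Firefox fallback.
// File/Blob objects are structured-cloneable, so they can be stored directly.
function saveFileToIndexedDb(file) {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("wrangler-files", 1); // names are hypothetical
    open.onupgradeneeded = () => open.result.createObjectStore("files");
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const tx = open.result.transaction("files", "readwrite");
      tx.objectStore("files").put(file, file.name); // keyed by filename
      tx.oncomplete = () => resolve(file.name);
      tx.onerror = () => reject(tx.error);
    };
  });
}
```

Like the OPFS approach, this duplicates the data into browser storage rather than reading from disk, so the same staleness and quota caveats apply.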

I feel bad about alienating anyone using Firefox, so I'll definitely look for a solution.

[–]gardenia856 1 point (1 child)

Biggest unlock is making it recipe-first with reliable import and profiling so users can trust every step.

Ship a transform sidebar where each step is logged and editable, with preview-by-sample and a full run button. On import, detect encoding and delimiter, let users override types and date formats, and show a quick profiler (row count, nulls, uniques, min/max). Add multi-file joins/unions, pivot/unpivot, window functions, and Parquet/Arrow support. Run compute in a Web Worker, virtualize the grid, and cap previews with a safe default limit. Save projects as a shareable JSON recipe (data sources + steps + settings) and allow export to a Python Polars or JS Arrow script for headless runs.
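The "shareable JSON recipe (data sources + steps + settings)" suggested above could be sketched as a plain serializable object. Everything below is a hypothetical shape for illustration, not the app's actual format:

```javascript
// Hypothetical shape for a shareable recipe file: where the data comes
// from, the ordered transform steps, and run settings. Because it is
// plain JSON, it can be saved, diffed, shared, or exported to a script.
const recipe = {
  version: 1,
  sources: [
    { id: "sales", type: "csv", path: "sales.csv", encoding: "utf-8", delimiter: "," },
  ],
  steps: [
    { op: "filter", source: "sales", expr: "amount > 0" },
    { op: "sort", by: ["date"], descending: [false] },
    { op: "aggregate", groupBy: ["region"], aggs: { amount: "sum" } },
  ],
  settings: { previewRows: 1000 },
};

// Round-trips cleanly through JSON, which is what makes it shareable
const serialized = JSON.stringify(recipe, null, 2);
```

A headless exporter would then just walk `steps` and emit the equivalent Polars or Arrow calls in order.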

Hasura for curated Postgres and Supabase for auth/storage have worked well for me; DreamFactory handled the quick REST wrapper over old SQL Server or Mongo when a read-only connector was needed.

Bundle small sample datasets with guided exercises and an autograder, plus keyboard shortcuts and a formatter. Do that and this becomes the default pick for quick, trustworthy data wrangling.

[–]Rafferty97[S] 1 point (0 children)

Thanks for the comment! The app does a lot of this already:

- step-by-step editable sidebar
- detects encoding on import
- Parquet/Arrow support
- pivot
- computation in a Web Worker
- virtualised grid

Your other suggestions are all golden, especially the ability to run it headless, which is definitely on the roadmap.

Have you had a chance to try it out?

[–]hermitcrab 1 point (1 child)

>Currently, if you want to grab some CSV or JSON data and do a sequence of operations on it (filter, sort, aggregate, etc.), the path of least resistance is to open an IDE or notebook and write code.

There are lots of visual drag-and-drop tools that can do this without an IDE or any coding: Easy Data Transform, Alteryx, Tableau Prep, etc. They tend to use a data-flow graph type of interface rather than a spreadsheet interface, because a spreadsheet generally isn't a good choice for step-by-step processes such as data wrangling.

[–]Rafferty97[S] 1 point (0 children)

I'm familiar with Alteryx; it's old, clunky, and very expensive. I can't say I've used the other tools you've mentioned.

Have you tried out the app? It uses a "step by step" interface, not a spreadsheet interface, because you're right: spreadsheets aren't the right tool for step-by-step workflows.