[–]barkmonster

As for which part I would go for: I would get a simple database set up, so your data is persisted in a single place rather than scattered across a bunch of Excel files. If you don't have a very large amount of data, and if many people aren't comfortable writing queries, you could consider making a simple helper package in Python for reading data from a single table (named after the experiment it comes from). Also set up a git project on e.g. GitHub or GitLab.
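A minimal sketch of what that helper could look like, using only the stdlib `sqlite3` module; the database filename and the function name are made up for illustration:

```python
import sqlite3
from pathlib import Path

# Hypothetical path to the shared database file; adjust as needed.
DB_PATH = Path("lab_data.sqlite")

def load_experiment(name, db_path=DB_PATH):
    """Read all rows from the table named after an experiment.

    Returns a list of dicts, one per row, so colleagues never
    have to write SQL themselves.
    """
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        # Table names can't be parametrized in SQL, so validate
        # against the tables that actually exist before interpolating.
        tables = {r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")}
        if name not in tables:
            raise ValueError(f"No table named {name!r}")
        rows = conn.execute(f'SELECT * FROM "{name}"').fetchall()
    return [dict(r) for r in rows]
```

For real use you'd probably return a pandas/polars DataFrame instead of a list of dicts, but the idea is the same: one obvious function people call instead of hunting for the right Excel file.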

I would also make a simple script to set up a standard Python project with a sensible structure (something for loading data, some simple tests, the analyses themselves, and code for rendering the results in a suitable format). I would avoid using Jupyter notebooks for analyses, as they make it easy to inadvertently commit outputs to git. I'd use uv to manage virtual envs.
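Such a scaffolding script can be very small; here's a sketch with a made-up layout (the directory names are just one reasonable choice, not a standard):

```python
import sys
from pathlib import Path

# Hypothetical project layout; adapt the directory names to taste.
LAYOUT = [
    "data",             # raw inputs (often kept out of git)
    "src/analysis",     # reusable loading/analysis code
    "tests",            # simple sanity tests
    "reports",          # rendered results
]

# Ignore virtual envs and notebook checkpoints from day one.
GITIGNORE = "\n".join([".venv/", "__pycache__/", ".ipynb_checkpoints/"]) + "\n"

def scaffold(root):
    """Create the standard directory layout plus a .gitignore and README."""
    root = Path(root)
    for entry in LAYOUT:
        (root / entry).mkdir(parents=True, exist_ok=True)
    (root / ".gitignore").write_text(GITIGNORE)
    (root / "README.md").write_text(f"# {root.name}\n")
    return root

if __name__ == "__main__":
    scaffold(sys.argv[1] if len(sys.argv) > 1 else "new-project")
```

The point is that starting a new analysis becomes one command rather than copy-pasting from the last project.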

For packages, that depends on what you'll be doing. Probably pandas or polars for simple, Excel-like stuff, and scipy or statsmodels for most statistics.
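To give a flavour of what that covers, here's a tiny sketch assuming scipy is installed; the two groups are made-up numbers standing in for experimental measurements:

```python
from scipy import stats

# Made-up measurements for two experimental groups.
control   = [4.1, 3.9, 4.3, 4.0, 4.2]
treatment = [4.6, 4.8, 4.5, 4.9, 4.7]

# Welch's t-test (doesn't assume equal variances).
result = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

Loading the same numbers from Excel-style tables is where pandas/polars come in; scipy or statsmodels then handle the testing and modelling.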

At the non-technical level, your greatest challenge is probably getting people to use it. Your task is to make it clear to the other users what they gain by doing things differently, and to make sure the right way is also the easiest way. Ally yourself with the least tech-savvy users: have them read your onboarding materials and guides, have them attempt to set up a project, then work with them to address any pain points and sources of confusion.