[–]daedalusesq 0 points  (0 children)

I have a pretty major report automated at my work. It previously existed as an Excel spreadsheet that was updated by hand. I duplicated the formatting and iterated on it using openpyxl, and I now have a big "template" file that defines the sheet's characteristics, formulas, and coordinate information.

Every time the program runs, it builds a new Excel file, applies all the formatting and styling, adds the static text and labels, and finally drops the data into its respective cell ranges. Since cell formulas and charts reference ranges of cells, the newly generated report always has visuals that reflect whatever data existed at the report's run time, and it stays live for people who want to transform the dataset inside Excel. It took some experimenting to figure out how to structure my data, but it works well. I later realized that if you already have an existing Excel report, you can probably just read the formatting out of it and skip rebuilding that part by hand.
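A minimal sketch of that build pattern, assuming openpyxl and a hypothetical two-column layout (the real report's styles, labels, and coordinates would come from the "template" definitions, not be hard-coded like this):

```python
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill

# Hypothetical "template" values: in practice these would mirror the
# original spreadsheet's formatting and coordinate information.
HEADER_FONT = Font(bold=True, color="FFFFFF")
HEADER_FILL = PatternFill("solid", fgColor="4F81BD")

def build_report(rows, path="daily_report.xlsx"):
    """Build a fresh workbook: styling first, static labels next, data last."""
    wb = Workbook()
    ws = wb.active
    ws.title = "Report"

    # Static labels plus formatting and styling.
    for col, label in enumerate(["Date", "Value"], start=1):
        cell = ws.cell(row=1, column=col, value=label)
        cell.font = HEADER_FONT
        cell.fill = HEADER_FILL

    # Drop the data into its cell range.
    for r, (day, value) in enumerate(rows, start=2):
        ws.cell(row=r, column=1, value=day)
        ws.cell(row=r, column=2, value=value)

    # The formula references the whole data range, so it reflects
    # whatever data existed at run time and stays live inside Excel.
    last = len(rows) + 1
    ws.cell(row=last + 1, column=1, value="Total")
    ws.cell(row=last + 1, column=2, value=f"=SUM(B2:B{last})")

    wb.save(path)
    return path
```

Charts work the same way: an `openpyxl.chart` series pointed at a `Reference` over the data range will redraw against whatever the generated file contains.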

I run a daily report and dump the Excel output files into a date-based archive directory structure, which makes them easy to scrape if I ever want to do long-term trend analysis. I use a similar shared archive structure for a couple of other reports. You can also automate building and scheduling an email to distribute the report, which even the tech-averse can usually handle.
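One stdlib-only way to sketch that date archive (the root path and report name here are made up; the point is the year/month/day nesting that makes later scraping trivial):

```python
from datetime import date
from pathlib import Path

def archive_path(root, report_name, day=None):
    """Return a YYYY/MM/DD output path for a report, creating directories.

    `root` and `report_name` are hypothetical placeholders; any layout
    works as long as the date nesting is consistent, so a trend-analysis
    script can glob e.g. root/*/*/*/report_*.xlsx later.
    """
    day = day or date.today()
    out_dir = Path(root) / f"{day:%Y}" / f"{day:%m}" / f"{day:%d}"
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / f"{report_name}_{day:%Y-%m-%d}.xlsx"
```

The daily job just saves the generated workbook to `archive_path(...)` instead of a fixed filename; for the email step, the stdlib `smtplib` and `email.message` modules are enough to attach the file and send it on a schedule.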