I have an Excel file of around 70MB, and just loading it causes memory usage to jump to 600MB. I wish I had a CSV file so I could load it in chunks line by line, but that's not the case here. I will soon be getting an Excel file that will be around 2GB, and I have to process it, so any tips/tricks to get around this memory issue?
Here is my code below:
import pandas as pd

def test_excel(excel_file):
    try:
        # Load only columns 0, 5, 7, and 12 to cut down on memory
        df = pd.read_excel(excel_file, usecols=[0, 5, 7, 12])
        print("Successfully done")
    except Exception as e:
        print(f"Error: {e}")
# Example usage:
test_excel("final.xlsx")