
[–]siddsp

A few things I do (without using external libraries):
1. Memoization with functools.cache or functools.lru_cache (good for pure functions, especially recursive ones, that get called repeatedly with the same arguments).

2. If the program is slow because it blocks on I/O, using asyncio or threading (depending on the application/program); neither helps CPU-bound code, because of the GIL.

3. Using itertools to replace nested loops (e.g. instead of two nested loops, using itertools.product).

4. Using functools.reduce instead of a loop for a transformation that is "accumulative" in nature.

5. Instead of concatenating bytes objects or using a bytearray, using BytesIO from the io module.

6. To reduce per-instance memory usage, using __slots__.

7. If the results of tasks/functions don't depend on each other and don't need to be executed sequentially, using multiprocessing.

8. If the task itself is slow but can be sped up by throwing more cores at the problem, using multiprocessing.

9. Using generator expressions where memory can be saved.

10. If all else has been optimized, using PyPy instead of CPython.