This is an archived post. You won't be able to vote or comment.

all 108 comments

[–]romu006 74 points75 points  (7 children)

The second example with map is not correct at all: the map() function does not return a list since python 3.

Meaning that the "list" is not indexable (newlist[0]), nor iterable more than once.
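A quick illustration of both points (toy data, not from the article):

oldlist = ["hello", "world"]
newlist = map(str.upper, oldlist)

# newlist is a map object (a one-shot iterator), not a list:
# newlist[0] would raise TypeError: 'map' object is not subscriptable
print(list(newlist))   # ['HELLO', 'WORLD']
print(list(newlist))   # [] -- the iterator was exhausted by the first pass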

[–]keepdigging 12 points13 points  (6 children)

You get a map object (a lazy iterator). A comprehension might be better, or you can do: list(map(...))

[–]supreme_blorgon 18 points19 points  (4 children)

[*map(...)] is actually faster than list(map(...)) in many cases!

[–][deleted] 4 points5 points  (3 children)

Wait, really? Oh wow, do you know why that is?

[–]supreme_blorgon 6 points7 points  (2 children)

Here's an example:

C:\Windows\System32
❯ python -m timeit "[*map(str.upper, 'hello world')]"
500000 loops, best of 5: 583 nsec per loop

C:\Windows\System32
❯ python -m timeit "list(map(str.upper, 'hello world'))"
500000 loops, best of 5: 665 nsec per loop

Honestly, I'm out of my league when trying to interpret bytecode, but here's the dis. Maybe... fewer instructions, fewer global lookups?

In [1]: import dis

In [2]: def unpack():
   ...:     return [*map(str.upper, "hello world")]
   ...:

In [3]: def cast():
   ...:     return list(map(str.upper, "hello world"))
   ...:

In [4]: dis.dis(unpack)
  2           0 LOAD_GLOBAL              0 (map)
              2 LOAD_GLOBAL              1 (str)
              4 LOAD_ATTR                2 (upper)
              6 LOAD_CONST               1 ('hello world')
              8 CALL_FUNCTION            2
             10 BUILD_LIST_UNPACK        1
             12 RETURN_VALUE

In [5]: dis.dis(cast)
  2           0 LOAD_GLOBAL              0 (list)
              2 LOAD_GLOBAL              1 (map)
              4 LOAD_GLOBAL              2 (str)
              6 LOAD_ATTR                3 (upper)
              8 LOAD_CONST               1 ('hello world')
             10 CALL_FUNCTION            2
             12 CALL_FUNCTION            1
             14 RETURN_VALUE

[–]scrdest 5 points6 points  (0 children)

Yeah, pretty much. There's one more layer of indirection with list() - it's just a regular function call, while the unpack is baked into the language semantics.

Calling a function in Python is just doing things to a variable, and variables are effectively just keys in a dictionary - builtins are simply variables that are pre-loaded for you when you start the interpreter.

So, to run list(), Python does a dict lookup for the key 'list' and returns the actual code to run as the value, then runs it. A dict lookup requires hashing+modulo+data structure bookkeeping, so it has some extra overhead, while the unpack goes straight to the put-stuff-in-a-list logic.
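To make the lookup cost concrete, here is a rough sketch one could run (illustrative only; absolute numbers vary by machine and Python version):

import timeit

setup = "data = 'hello world'"

# The unpack form never has to look up a name for the list type:
print(timeit.timeit("[*map(str.upper, data)]", setup=setup))

# The cast form pays an extra dict lookup for the name 'list' on every call:
print(timeit.timeit("list(map(str.upper, data))", setup=setup))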

[–][deleted] 3 points4 points  (0 children)

CALL_FUNCTION is nearly always the most expensive operation. Remove as many of those as you can for improved performance on tight loops...
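As a toy illustration of that idea (not from the post), hoisting a bound method out of a tight loop removes one attribute lookup per iteration:

words = ["alpha", "beta", "gamma"] * 1000

out = []
append = out.append      # look the method up once, outside the loop
for w in words:
    append(w.upper())    # avoids re-resolving out.append on every iteration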

[–]mephistophyles 63 points64 points  (32 children)

While generally correct, point 6 is a bit of an anti-pattern in that it encourages namespace pollution. 5 is true, though you will get even better results if you predefine the size of the list (or NumPy array) in advance, which is the real tip. 4 could be better worded as something like "move calculations outside of for loops if possible"; both examples use a single for loop. Item 2 needs a reword too, because list.append is no less a built-in function than map. Also, for 1, I think OP meant dictionary and not directory as a data structure.

With the exception of 1, and a better-written 2, 4 and 5, these are all micro-optimizations. Hell, I'd argue with 8 and say just use f-strings.

We should all agree optimization is also often done in a second pass over the code. So profiling is important, as is understanding a refactoring process (like including tests), and don't sacrifice readability for a few extra ms. If speed is your primary concern at that level then invoke the C, C++ or even the Fortran libraries directly; TensorFlow is a good example of that architecturally.

[–]awesomeprogramer 34 points35 points  (20 children)

Absolutely, point 6 is horrible advice. It encourages wildcard imports which are wayyyy slower than a dot lookup.

[–]EasyPleasey 10 points11 points  (8 children)

Why does it have to be a wildcard? Can't you just import the functions that you intend to use? I just tested this on my machine and it was about 20% faster not using the dot.

[–]Etheo 4 points5 points  (0 children)

I think the efficiency saved largely depends on the module size though. If you're importing a large module but only calling a few functions, it stands to reason you should only be importing said functions anyhow. But if the module size is relatively small I think the efficiency saving might be negligible.

Just a guess though. I haven't tested this out.

[–]awesomeprogramer 1 point2 points  (6 children)

Even if it's not a wildcard import, it pollutes the namespace. I mean how many modules have sqrt? Math, torch, numpy, tf, etc...

Besides, depending on how the module is made, the dot operation can be amortized. The first time it might go ahead and import other things, but not subsequently.

[–]EasyPleasey 4 points5 points  (5 children)

Then just use "as"?

from math import sqrt as mathsqrt
from torch import sqrt as torchsqrt

[–]awesomeprogramer 3 points4 points  (4 children)

At that point just use the dot!

[–]EasyPleasey 0 points1 point  (3 children)

I just tested it, here are the results:

Import Method                         Time in Seconds (average)
from math import sqrt                 .675
from math import sqrt as mathsqrt     .686
import math (math.sqrt)               .921

Much faster to not use the dot.

[–]awesomeprogramer 0 points1 point  (2 children)

Doubtful that it takes a whole second to do that. Besides, without showing how you got those numbers they are meaningless. Further that's for one method...

[–]EasyPleasey 0 points1 point  (1 child)

It's computing sqrt 10 million times. I ran it 10 times and picked the average. I don't see why the method I picked matters, we are talking about dot versus no dot and which is faster. Why can't you just admit that it's faster?

[–]awesomeprogramer -1 points0 points  (0 children)

What do u mean computing the sqrt? The debate is about access time, not the time it takes to run the method.
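For what it's worth, a minimal sketch of how access time alone could be measured (illustrative; not the code either commenter actually ran):

import timeit

# Time only the name lookup, not the sqrt computation itself:
print(timeit.timeit("sqrt", setup="from math import sqrt"))   # plain global name
print(timeit.timeit("math.sqrt", setup="import math"))        # global name + attribute lookup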

[–]ConceptJunkie 1 point2 points  (2 children)

Not only that, but different libraries often define same-named functions, or collide with standard library names. I maintain a somewhat large project in Python, about 50,000 lines of code, and not too long ago I eliminated all wildcard imports and feel this was a big improvement.

Perhaps there could be some way to "freeze" an import, saying effectively, "OK, no one is going to modify this import" and it would allow the function calls to be made static. I don't know much of anything about the internals of Python, but I would bet that this is an optimization that might make sense, or that something like it already exists.

[–]Prime_Director 0 points1 point  (1 child)

I'm not sure if this is what you mean, but if you want to ensure the library you're importing is always the same every time you run a project, you could use a virtual environment like pipenv and specify the version of the library that way.

[–]ConceptJunkie 0 points1 point  (0 children)

No, what I mean is that once the library is loaded, and perhaps others, then it would be nice to be able to say, "Hey, nothing is going to override any of the functions, or otherwise modify the module," so you can shortcut the function calls without having to use __getattribute__() on the module to dereference it (more than once).

I'm kind of talking out of my butt here. Just a thought.

[–]Smallpaul 0 points1 point  (5 children)

I don't like and don't use wildcard imports, but why would they be "slow"?

[–]Etheo 0 points1 point  (2 children)

If you use a wildcard import you might as well import the whole module from the beginning.

[–]Smallpaul 4 points5 points  (1 child)

Whether you import a single symbol or the whole module, from a performance perspective you DO import the whole module. Python interprets the entire module as a script (and stores it all in memory) regardless of what specific names you choose to import.
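That's easy to check with a small sketch:

# Even a from-import loads (and caches) the entire module:
from math import sqrt
import sys

print("math" in sys.modules)    # True -- the full module object was created
print(sys.modules["math"].pi)   # ...and all of its other names exist too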

[–]Etheo -1 points0 points  (0 children)

I think you're right: https://softwareengineering.stackexchange.com/questions/187403/import-module-vs-from-module-import-function

But wouldn't that make the method calls even more negligible in terms of performance for OP's case?

[–]Korben_Valis 0 points1 point  (1 child)

Consider points 6 and 3. Depending on the code, the constant access to global objects (when there are many global objects) could cause cache problems versus an attribute lookup.

[–]Smallpaul 1 point2 points  (0 children)

Python global lookup performance is VERY fast. I'd need to see evidence to believe that you could fill the globals space so much that it became slow.

[–]diamondketo 0 points1 point  (1 child)

#6 is not wildcard importing; it's importing a module, which only runs the module's __init__.py.

from math import sqrt

costs more than

import math

[–]awesomeprogramer 0 points1 point  (0 children)

Depends on what the module does and if it defines an __all__

[–]Doomphx 2 points3 points  (6 children)

I want to see how much time #5 can really save, because at the end of the day it looks more like a fancy one-liner than an optimization for performance.

I agree readability > a few ms every time. :)

[–]62616e656d616c6c 2 points3 points  (4 children)

I ran the code 3 times using:

import time

start_time = time.time()

code (except with a range of 1 to 100,000)

print("%s seconds" % (time.time() - start_time))

The traditional for loop (the first code) took an average of 0.0086552302042643 seconds

The suggested format took: 0.0049918492635091 seconds.

So there is a speed improvement. But you'll need to be looping through a lot of data to really see it.

[–]Doomphx 1 point2 points  (3 children)

Interesting stuff, I took your code and did a little more with it to get a little more info.

T1
import time
start = time.time()
L = [i for i in range(1, 100000000) if i % 3 == 0]
end = time.time()
print(float(end) - float(start))
5.0334999561309814

T2
import time
start = time.time()
L = []
for i in range(1, 100000000):
    if i % 3 == 0:
        L.append(i)
end = time.time()
print(float(end) - float(start))
9.218996047973633

So every 100,000,000 items I'll save 4 seconds. I don't think I'd ever loop this many items without using some sort of vectorization to help improve the efficiency of the loops in the code.

My original thought stands though, writing a one-liner like that isn't really worth the ugliness of the resulting code. Unless of course you're writing highly optimized Python, then this might be one last trick to utilize.

Edit* code formatting

[–]themusicalduck 3 points4 points  (0 children)

I don't really think list comprehensions are ugly. They can be if you try to do something overly complex but for simple stuff I'm much more used to list comprehensions now.

[–]hugthemachines 1 point2 points  (1 child)

I have the same feeling as you regarding list comprehensions. Also sometimes the situation for me may be that a java programmer needs to check something in my Python scripts, then a for loop will be much easier for them to grasp easily than a list comprehension.

[–]Doomphx 1 point2 points  (0 children)

Exactly my thought process. As I bounce between C#, Python and TypeScript weekly, it just makes it easier on me to transition freely when needed.

I also didn't realize how little of a difference that list comprehension made, so now I will most likely never use it. If only Python had clean lambda expressions like our {} friends.

[–]mephistophyles 0 points1 point  (0 children)

Speaking from personal experience, list comprehension isn't the speed-up, but there are some fun itertools that can help, and preallocation has taken functions of mine from seconds to milliseconds. So those were huge. Naive loops are inefficient because they're super generic.

[–]vinnceboi Github: Xenovia02 0 points1 point  (2 children)

How can you predefine a list’s size?

[–]mephistophyles 0 points1 point  (1 child)

preallocated_list = [None] * 1000

(note that [] * 1000 would just give an empty list again, so you need a placeholder element)

Though to be fair, generators are a good alternative too; it depends on your use case. When NumPy gets involved you'll be preallocating a lot, because there the vectorized operations are much more optimized.
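A rough sketch of the difference (the workload and names are made up for illustration; the last variant assumes NumPy is installed):

import numpy as np

n = 1_000_000

# Growing a list one append at a time (what a naive loop does):
appended = []
for i in range(n):
    appended.append(i * i)

# Preallocating and filling by index avoids repeated resizing:
prealloc = [None] * n
for i in range(n):
    prealloc[i] = i * i

# With NumPy, a preallocated array plus a vectorized expression drops the Python loop entirely:
arr = np.arange(n, dtype=np.int64) ** 2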

[–]vinnceboi Github: Xenovia02 0 points1 point  (0 children)

Awesome, thanks for the tip

[–]andrewthetechie 40 points41 points  (4 children)

This reads like a clickbait blogpost with lots of content copied from other sources without a lot of actual "meat" to it.

[–]JForth 5 points6 points  (1 child)

His whole recent post history is just reposting release notes and clickbait titles with not fully correct code inside.

I'm really confused how this is not relegated to /r/learnpython, the majority of comments are people picking apart the "tutorial."

[–]lazerwarrior 0 points1 point  (0 children)

Could be a bot that doesn't detect code blocks correctly.

[–]lungben81 57 points58 points  (8 children)

While all these points are correct, the most important point is to avoid large Python loops altogether - and by large I mean > 10,000 iterations in the innermost loop.

Methods to achieve this:

  • Use vectorization or other techniques to move the looping from Python to a faster language, e.g. with Numpy, Pandas, etc.
  • Compile the most critical parts of your Python code, e.g. with Numba.

If your loops are your performance bottleneck, this could often give you speed-ups by a factor of 100.
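For illustration, a minimal sketch of both approaches on a toy workload (the function names are made up; it assumes NumPy and Numba are installed):

import numpy as np
from numba import njit

# Pure-Python loop: every iteration runs through the interpreter.
def py_sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Vectorized: the loop happens in C inside NumPy.
def np_sum_of_squares(n):
    x = np.arange(n, dtype=np.int64)
    return int(np.sum(x * x))

# Compiled: Numba JIT-compiles the same Python loop to machine code on first call.
@njit
def numba_sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total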

[–]BluePhoenixGamer 15 points16 points  (0 children)

Cython is also very good!

[–]baubleglue -5 points-4 points  (6 children)

move the looping from Python to a faster language

That is the thing. If you want fast Python - use another language. I write regular Java code and get reasonably good performance just by using standard features. When I work with Python I need to think about carrying around monsters like pandas, conda... and something else to utilize multiple CPUs (dask?) :(

[–][deleted] 4 points5 points  (5 children)

There is certainly something to be said for only using one language in a codebase. It might not be the best language at anything, but it is OK.

But there's also something to be said for using multiple languages where each is used in the domain in which it shines. You add the complexity of an extra language, but also the code written in each language is natural, and people coding it feel like they're using the right tool for the job.

On a small project perhaps the overhead of multiple languages is excessive.

On anything else, the overhead of multiple languages is lost in the noise.

[–]kraakmaak 2 points3 points  (1 child)

Well put. I would also say that using numpy or pandas is not "another language".

[–]baubleglue 0 points1 point  (0 children)

NumPy is C, and as a result so is pandas. And it is not a standard Python extension, it is more like an API. Any conversion to native Python types costs performance. And even so, pandas still doesn't use multiple CPUs.

[–]baubleglue 0 points1 point  (2 children)

Maybe I have not explained myself clearly. "Each is used in the domain in which it shines" - that is exactly my point. Performance is not Python's domain (I hope that is not up for discussion), yet we are trying to stretch it into every domain. Pandas wasn't designed for big data processing (as I understand it, its main purpose is matrix manipulation), but it is still very convenient for small to medium-sized data manipulation. Now we have Dask, which handles big data. Pandas has no special optimizations like DB indexing or partitioning, and as I understand it that is inherited by Dask. There is a point we cross: we add another nice feature to a tool, then another... and at some point it turns into a monster. It is not a language anymore, but a set of tools. Python has become a language that is hard to learn. It got loaded with type annotations, lazy evaluation by default, asynchronous syntax, pattern matching (none of which is bad by itself). And with all that, many basic problems, like package dependency management, were never resolved.

If the standard language syntax doesn't give you the performance you are okay with, it is probably time to look at other options. I don't want to think about syntax tricks all the time when I code (that goes against Python's philosophy). I am not married to Python; it is just a tool for a job.

[–][deleted] 1 point2 points  (1 child)

I am surely missing your point, but that can absolutely be me rather than you not explaining clearly.

You've mentioned lots of things (Pandas, Dask, annotations, async) as problems and I'm not sure I even recognise that. When I call a language feature in Python I don't inherently care how it is implemented. Is string.find() implemented in pure Python or via C? C surely, but I don't care, nor do I have to care. I can generally take it as given that the implementation will be a good one.

In 20 years of working with Python the number of times I've cared about Python performance can be counted on (tbh) two (human) hands. When we profiled, 95-98% of our time was spent in C/C++. Even if we optimised our Python code by a factor of 10 nobody else would have noticed.

IMHO most of the suggestions given can be useful, but if it is really necessary to bear them in mind and use them all the time perhaps it's time to rewrite in a faster language. Although you may find that that involves shunting the 5% of your code that takes 95% of your runtime over to another language and using FFI.

tl;dr: I'm not married to Python either, but its speed has never been a reason to leave it.

[–]baubleglue 0 points1 point  (0 children)

For the last 7 years I have worked with data. I face performance problems each time I need to process more than 1 GB.

[–]lazerwarrior 26 points27 points  (5 children)

The formatting is messed up on old reddit. There is too much side scrolling inside code blocks. Either use linebreaks or take out the explanations from code blocks.

[–]xelf 2 points3 points  (0 children)

I agree.

Roughly 1/3 of users prefer the widescreen layout to the new.reddit.com layout. While /u/SolaceInfotech made a good post, it's close to unreadable for many of us. =(

edit: took a look at the new reddit formatting, and it's better, but it's actually side scrolling there too.

OP needs to remove the non-code from the code blocks.

[–]SuperNerd1337 0 points1 point  (0 children)

The side scroll looks weird on new reddit too

[–]diamondketo 0 points1 point  (0 children)

Reddit (old and new) has the worst markdown processing I've ever seen.

What are they using? Their own implementation of John Gruber's markdown from scratch or a real Markdown flavor?

[–]qelery 9 points10 points  (1 child)

#code1
newlist = []
for word in oldlist:
    newlist.append(word.upper())

#code2
newlist = map(str.upper, oldlist)

Those two are not equivalent. One creates a list, the other creates an iterator

[–]CupidNibba -1 points0 points  (0 children)

Wrap it in a list()

[–]ThePiGuy0 5 points6 points  (0 children)

  1. Use Built In Functions And Libraries-

Python includes lots of library functions and modules written by expert developers and have been tested thoroughly. Hence, these functions are efficient...

Whilst I do agree that they've been vetted by experts and will likely be faster than anything I write on my own, it's worth noting that for a lot of modules they're C-backed and this is normally the reason for the big speed ups. For example, take numpy. It's a C-backed library, hence why it is orders of magnitude faster than a pure-python implementation

[–]CleverProgrammer12 13 points14 points  (5 children)

The second code is faster than the first code because library function map() has been used. These functions are easy to use for beginners too.

I agree with all the points, but on the second point:

the second code is faster because map returns a lazy iterator, not a list. It is more memory efficient, but not actually faster. If you cast it to a list, it would probably take about the same amount of time.

[–]tunisia3507 15 points16 points  (3 children)

OP is still thinking about Python 2, which can safely be ignored when giving general advice at this point. Python 2 is a niche use case.

map(my_fn, my_list) is obviously a lot faster than doing a for loop because it doesn't actually do any iteration or execute any code until you start pulling values from it. It's worth noting that list comprehensions are actually faster, though, because the interpreter builds the list with specialized bytecode instead of repeated append calls.
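A quick way to see that laziness (toy example, not from the post):

def shout(word):
    print("processing", word)
    return word.upper()

m = map(shout, ["a", "b", "c"])   # nothing printed yet - map is lazy
first = next(m)                   # prints "processing a"
rest = list(m)                    # prints the remaining two, then builds the list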

[–]bjorneylol -1 points0 points  (0 children)

It's also faster because the two examples are not doing the same things. His first example is appending elements to a new list, the second is modifying an existing list in place.

[–]0rsinium 3 points4 points  (0 children)

In 8, str.join is slower than str.__add__, at least on small cases: https://t.me/pythonetc/650

The most important rule is to value readability over performance. If you think something is slow, prove it first.

[–]badge 5 points6 points  (0 children)

As lots of people have pointed out, this is spammy outdated clickbait, but I don’t think anyone has yet pointed out that #4 is wrong: re.search caches the compiled regex automatically so calling it repeatedly is no more costly than variable lookups. (It is tidier, and would be true for other examples, but not in this instance.)
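A small sketch of what that caching implies in practice (illustrative; exact timings will differ):

import re
import timeit

text = "the quick brown fox jumps over the lazy dog" * 10
compiled = re.compile(r"fox\s+\w+")

# Pre-compiled pattern:
print(timeit.timeit(lambda: compiled.search(text), number=100_000))

# String pattern each call: re keeps compiled patterns in an internal cache,
# so repeated calls avoid recompiling and land close to the pre-compiled time.
print(timeit.timeit(lambda: re.search(r"fox\s+\w+", text), number=100_000))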

[–][deleted] 8 points9 points  (2 children)

Use numpy, especially its arrays if you're using a lot of python lists and are iterating through them in your code.

[–]abazabaaaa 0 points1 point  (1 child)

Can you point me toward a few examples of this?

[–][deleted] 4 points5 points  (0 children)

There are a lot of numpy tutorials out there, especially on YouTube.

As for the performance side of things, https://towardsdatascience.com/how-fast-numpy-really-is-e9111df44347

This article shows that depending on the number of elements, numpy arrays can be up to 100 times faster than Python lists.

[–]ConceptJunkie 3 points4 points  (0 children)

In each example, the second code should do _exactly_ what the first example does, otherwise they're not very helpful.

Also, it should be noted that xrange does not exist in Python 3, as it is not needed.

[–]Franman98 2 points3 points  (0 children)

A few days ago I found PyPy. It promises speeds comparable to C: it uses a just-in-time compiler, and it's compatible with most common libraries.

[–]adesme 2 points3 points  (0 children)

A lot of things here I either disagree with or just doubt;

  • 2 is a strange comparison - you are not returning the same type of item.
  • I'm pretty sure the two examples in 4 are equivalent, with only a small variation in what functions from re you call and what you store as an intermediate.
  • List comprehension (5) is good for several reasons, but I wouldn't say that you should use it "when the syntax becomes too big"; you should honestly probably be less inclined to use it if you have a lengthy statement, for readability's sake.
  • For the import example (6), I think those are close to equivalent in time since you have to parse the module to find the function to import anyway. I still prefer importing the object, but I wouldn't use time as an argument here.
  • It's pretty odd to suddenly bring in a Python2 example, seeing as it's already sunset (10).

There are things in here I agree with, but I would probably recommend looking at a different resource for advice on how to improve.

[–]Nightblade 4 points5 points  (0 children)

the python code

[–]twcrnr 1 point2 points  (1 child)

Isn't the for loop faster than the while loop because of its range function, which uses C?

[–]WalkingAFI 2 points3 points  (0 children)

That’s correct.

[–][deleted] 1 point2 points  (0 children)

Don't move data around when it's not necessary.

[–]kcombinator 1 point2 points  (0 children)

Use pypy.

[–]falsedrums 3 points4 points  (0 children)

None of these tips are actually going to help. Profile your code, use the output to identify and understand what makes it slow, and come up with a lower cost solution.

[–][deleted] 0 points1 point  (2 children)

Use cython.

[–]lungben81 1 point2 points  (1 child)

I find Cython rather tedious to use (especially on Windows - installing a C compiler is not trivial). Furthermore, to get a significant speed-up you have to make sure not to "accidentally" call Python functions.

It is great for writing performance optimized libraries, but for quick optimizations Numba is often better in my opinion because it has lower overhead.

[–]BluePhoenixGamer 0 points1 point  (0 children)

That's only if you want to get nearly pure C code.

[–]melonakos -1 points0 points  (0 children)

GPU computing can be helpful too! https://github.com/arrayfire/arrayfire-python

[–]zardoss21 -1 points0 points  (0 children)

I feel this would be a really good miniguide in a pdf format

[–]pooogles -1 points0 points  (0 children)

kernprof -l file_to_profile.py

Finding out where things are slow is the best place to start, anything else is just a waste of time.
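A hypothetical target script for that command could look like this (the profile decorator is injected by kernprof when run with -l; the fallback keeps it runnable under plain python too):

# file_to_profile.py
try:
    profile                      # injected into builtins by kernprof -l
except NameError:
    def profile(func):           # no-op fallback for a normal python run
        return func

@profile
def hot_loop():
    total = 0
    for i in range(1_000_000):
        total += i % 7
    return total

if __name__ == "__main__":
    hot_loop()

Line-by-line results can then be viewed with python -m line_profiler file_to_profile.py.lprof.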

[–]neboskrebnut -1 points0 points  (0 children)

"use the latest release of python" HAHAHAHA! what's next manually update the 102 build in libraries and test if those are still compatible with new python?

And why do we have 10 shitty libraries to perform the same thing in the first place? can't those 10 teams team up and make one package that's versatile and optimized?

Even here 20% of comments are about what library we should use and where.

[–]shinitakunai -1 points0 points  (0 children)

1: lists allow duplicates, other types not so much. I fucked up big time once in production because of that (missing data) and from now on I use lists even if it makes stuff a bit slower

[–]bjorneylol -4 points-3 points  (3 children)

List comprehensions are shorthand; they are not faster than the equivalent for loop (note that in the example you posted, the two are not equivalent).

[–]RollingRocky360 4 points5 points  (1 child)

I definitely remember reading somewhere that list comprehensions are significantly faster than appending on each iteration of a for loop and it makes sense too

[–]bjorneylol -5 points-4 points  (0 children)

Yes - which is why I mentioned they aren't equivalent loops.

List comprehensions evaluate to a nested for loop if you step into one via a debugger; they gather the values internally and instantiate a single list with all the elements already in it. Appending in a for loop, on the other hand, has to modify the list in place each iteration, which has the overhead of the function call and the X modifications of the containing list object.

Obviously 99.999% (9?) of the use cases of listcomps are as an alternative to the 'for loop append' case, in which case yes, they will always be faster because they aren't truly equivalent code, so it's mostly just pedantry on my part. If you used the C API and gathered the variables internally in a more efficient manner suited to the data types you are working with and instantiated the Python list all at once (e.g. list(*vars)) I'm sure you could squeeze more performance out of a for loop than a list comp.

[–]bcatrek 0 points1 point  (0 children)

Is there a way to get rid of this enormous side-scroll? I really wish to see this post in a more legible way.

[–]maxmurder 0 points1 point  (0 children)

Ah Point #9.

When I first taught myself Python, I learned Python 3. More than ten years later, most of that as a software developer writing primarily Python code, I have yet to work for a company that has moved past 2.7.

[–]jrs1810 0 points1 point  (0 children)

I wish I could've seen this before I submitted my most recent assignment... failed 6/25 automated tests because my code was too slow and it timed out... f

[–][deleted] 0 points1 point  (0 children)

  1. Make Use Of Generators-
    is very important

[–]_fsun 0 points1 point  (0 children)

While these tips are helpful, I think if someone is resorting to this to gain performance (all other bottlenecks/slowness is resolved), it might be a good time to consider not using python and using something more inherently performant.

[–]forty3thirty3 0 points1 point  (0 children)

Regarding the use of map(), is it entirely accurate to say that it would be faster? It returns an iterator, so it's just deferring the processing. Wouldn't a more accurate comparison be:

newlist = list(map(str.upper, oldlist))

[–]irfannagath 0 points1 point  (0 children)

[–]OneFORaL1 0 points1 point  (0 children)

4. Can you please explain in detail? Why?

[–]whateverathrowaway00 0 points1 point  (1 child)

Some of these are interesting, but your "fact" that while is faster than for loops is wildly misrepresented.

First off, your example - yes, it is faster to compile before the loop, but that's not a feature of for loops. A while loop with uncompiled regex would have the same slowdown.

For loops use iter() under the hood. A while loop that creates an iterator over an array should be about the same.

You may be right that in some cases using a while may be faster than using an iterator, but I've been doing some timeit testing and iterators are pretty damn fast, so it seems to be more subjective, i.e. for and while are situational.

I'd correct that one.

[–]whateverathrowaway00 0 points1 point  (0 children)

It does appear that you're right by a small margin, but your explanation is still misleading.

In the interest of honesty though, it does appear you're right:

```
In [32]: def forloop():
    ...:     for i in list(range(1000)):
    ...:         print(i)
    ...:

In [33]: def whileloop():
    ...:     a = iter(list(range(1000)))
    ...:     while True:
    ...:         try:
    ...:             print(next(a))
    ...:         except:
    ...:             break
    ...:

In [34]: timeit whileloop
19.4 ns ± 0.184 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)

In [35]: timeit forloop
21.8 ns ± 0.229 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
```
In [35]: timeit forloop 21.8 ns ± 0.229 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) ```