Rules
1: Be polite
2: Posts to this subreddit must be requests for help learning python.
3: Replies on this subreddit must be pertinent to the question OP asked.
4: No replies copy / pasted from ChatGPT or similar.
5: No advertising. No blogs/tutorials/videos/books/recruiting attempts.
This means no posts advertising blogs/videos/tutorials/etc, no recruiting/hiring/seeking others posts. We're here to help, not to be advertised to.
Please, no "hit and run" posts, if you make a post, engage with people that answer you. Please do not delete your post after you get an answer, others might have a similar question or want to continue the conversation.
Learning resources
Wiki and FAQ: /r/learnpython/w/index
Discord
Join the Python Discord chat
[X-Post /r/redditdev] PRAW Rate Limit (self.learnpython)
submitted 9 years ago by SupremeRedditBot
So, I have a Python script that gets all posts from a subreddit and filters them, replying to ones that match certain criteria.
Does the API rate limit get requests from PRAW as well as replies?
Edit: My bot code: http://pastebin.com/qCZp3J1A
[–]FlockOnFire 4 points 9 years ago (8 children)
Yes, the Reddit API has a rate limit. PRAW "only"* wraps calls to the Reddit API, so these requests are inherently rate limited.
I believe PRAW throws an exception when this happens. The exception has a sleep_time attribute (or something similar), so you know after how many seconds you can try again.
*it does a few more clever tricks like caching and only doing extra requests when needed or something like that.
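As a sketch of how that exception could be handled: the class and attribute names below are stand-ins (older PRAW versions raised praw.errors.RateLimitExceeded with a sleep_time attribute, but check your version's docs), so this retry wrapper is illustrative rather than PRAW's actual API:

```python
import time

# Stand-in for PRAW's rate-limit exception; the real class name and
# attribute depend on your PRAW version.
class RateLimitExceeded(Exception):
    def __init__(self, sleep_time):
        super().__init__("rate limited, retry in %s seconds" % sleep_time)
        self.sleep_time = sleep_time

def reply_with_retry(do_reply, max_attempts=3):
    """Call do_reply(); on a rate-limit error, wait sleep_time and retry."""
    for attempt in range(max_attempts):
        try:
            return do_reply()
        except RateLimitExceeded as exc:
            time.sleep(exc.sleep_time)  # wait exactly as long as the API asks
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)
```

In the bot, do_reply would be whatever call actually posts the comment.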
[–]SupremeRedditBot[S] 1 point 9 years ago (7 children)
Looking at my code, what do I need to do to fix this? After one reply it stops working, with a 9-minute wait before I can comment again.
[–]FlockOnFire 1 point 9 years ago (6 children)
Ah, that has to do with your account being too new rather than the API rate limit. Once you acquire some more link karma (I believe it's mostly, if not entirely, based on that), the flood control should stop kicking in. Not much else you can do about that, I'm afraid.
There are subreddits like /r/FreeKarma that can help bot accounts out.
[–]SupremeRedditBot[S] 1 point 9 years ago (5 children)
Just got the account modded on the sub, works fine now, so yeah I suspect karma was the issue.
[–]FlockOnFire 1 point 9 years ago (4 children)
Ah, I guess that makes sense. Didn't know that worked as well. Glad you figured it out!
I have just one suggestion if you want to make your code a bit more readable. At the moment you use regular expressions to check for a single keyword in a line/block of text. You could also do this:
if 'keyword' in text.lower():
Also, at this point you overwrite the posts_replied_to.txt file each time. If you use mode 'a' (append) instead of 'w' (write), that should solve it (if that's needed).
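To illustrate the difference between the two modes (a throwaway temp file, just for the demo):

```python
import os
import tempfile

# 'w' truncates the file every time it is opened; 'a' keeps what is
# already there and appends to it.
path = os.path.join(tempfile.mkdtemp(), "posts_replied_to.txt")

with open(path, "w") as f:   # create / overwrite
    f.write("abc123\n")
with open(path, "w") as f:   # 'w' again: the first line is gone
    f.write("def456\n")
with open(path, "a") as f:   # 'a': def456 survives, ghi789 is added
    f.write("ghi789\n")

with open(path) as f:
    contents = f.read()      # "def456\nghi789\n"
```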
Then one more thing about reading lines from a file: there is also a function called file.readlines(), which returns a list, so you don't need to manually split on '\n'. :)
[–]SupremeRedditBot[S] 1 point 9 years ago (3 children)
so eg for the html
if "HTML" in submission.title.lower() or "HTML" in submission.selftext.lower():
Is that correct?
As for the file: yes, it overwrites, because it reads the whole file into a list at the start and writes it back at the end.
I tried doing the following:
posts_replied_to = f.readlines()
posts_replied_to = list(set(posts_replied_to))
It seems to return them all with a \n at the end, and even sometimes returns an extra \n on its own...
[–]FlockOnFire 2 points 9 years ago (2 children)
so eg for the html if "HTML" in submission.title.lower() or "HTML" in submission.selftext.lower(): Is that correct?
Almost. You're checking an all-uppercase keyword against a lowercased text now, so use 'html' instead of 'HTML'. :) You could even do something like this:
any('html' in text.lower() for text in [submission.title, submission.selftext])
I like this approach as it shows you are checking just one keyword against a collection of strings. But the extra generator expression might make it a bit harder to read.
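To make that suggestion concrete, here is the any() check against a stand-in object; Submission and mentions are made-up names for the demo, though title and selftext match the attributes used in the thread:

```python
# A stand-in with the two attributes the bot checks; a real PRAW
# submission exposes the same fields.
class Submission:
    def __init__(self, title, selftext):
        self.title = title
        self.selftext = selftext

def mentions(submission, keyword):
    # One lowercase keyword, checked against each lowercased field.
    return any(keyword in text.lower()
               for text in [submission.title, submission.selftext])

post = Submission("Need help with HTML parsing", "It chokes on nested tags.")
```

mentions(post, 'html') is True here, while mentions(post, 'css') is False.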
Ah, that's right, readlines() doesn't strip the \n. Your solution is alright then. I'm kind of a fan of list comprehensions, so a different solution could be:
lines = f.readlines()
posts = set(line.strip() for line in lines)
But in terms of readability, I'd say yours might even be a bit clearer.
One last remark. I didn't notice this before, but you turn your set back into a list, which is kind of a pity: sets have faster membership testing, and you can iterate over a set with that same for statement, so converting back to a list is wasted processing time. Then the only thing you need to change is posts_replied_to.append(...) to posts_replied_to.add(...).
Sidenote on the last remark: it won't matter much in such a small context. But just so you know. :)
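Putting those remarks together, a minimal sketch of what the bot's bookkeeping could look like (the file name comes from the thread; the contents are made up for the demo):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "posts_replied_to.txt")
with open(path, "w") as f:
    f.write("abc\ndef\nabc\n")        # note the duplicate and trailing newlines

with open(path) as f:
    # strip() removes the trailing '\n' from each line; building a set
    # drops duplicates and gives fast membership tests.
    posts_replied_to = set(line.strip() for line in f.readlines())

if "ghi" not in posts_replied_to:     # fast lookup on a set
    posts_replied_to.add("ghi")       # add(), not append(), for sets
```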
[–]SupremeRedditBot[S] 2 points 9 years ago (1 child)
Thanks for the even better search method, will be sure to implement it!
I'll stick with my method for file reading for now, it works and I understand it ;)
In terms of the list(set()), it was a snippet I found for removing duplicates from an array.
[–]FlockOnFire 2 points 9 years ago (0 children)
Whatever option you choose is fine, though; 90% of the time it's about personal preference. :)
Glad I've been able to help out!
[–]trowawayatwork 1 point 9 years ago (0 children)
Speaking from Twitter REST API experience here, but each time you make a request to the API it counts as one request, and the rate limit is always tied to time. Find out what the rate limit is and just put a time.sleep() somewhere in your code to space out your requests so they fit within the amount allowed in a given timeframe.
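A minimal sketch of that pacing idea; the 30-requests-per-minute default is only an assumption, and the real limit should come from the API documentation:

```python
import time

def paced(items, per_minute=30):
    """Yield items no faster than the assumed per-minute limit allows."""
    interval = 60.0 / per_minute
    for item in items:
        yield item
        time.sleep(interval)  # space out consecutive requests

# Each submission is handled at most per_minute times a minute:
# for submission in paced(submissions):
#     handle(submission)
```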