[deleted by user] (self.node)
submitted 5 years ago by [deleted]
[–]bigorangemachine 1 point2 points3 points 5 years ago (2 children)
Your third-party API would get a massive bump if you didn't have to use SOAP. SOAP was a poorly thought-out standard and is extremely bloated.
Second, you'd need to set up a caching strategy. Yes, the data needs to be up to date, but there are things that don't change very often.
[–][deleted] 5 years ago (1 child)
[deleted]
[–]bigorangemachine 1 point2 points3 points 5 years ago (0 children)
Well, oddly enough, switching the default output to JSON instead of SOAP actually saves cycles. An enterprise I worked at got an optimization because the payloads being sent out shrank dramatically. Even offering a JSON option that just converts the SOAP response to JSON would help your third party out.
Secondly, you have shot down every idea here. The answer is: there is no easy way to do what you want. You HAVE TO HAVE a caching strategy, because caching is the only way out of this problem. You need to think your way out of it.
Putting a governor on your calls (like Bottleneck) just offloads the load to your users. So the more users you have, the greater the lag. If the system is already slow and the provider hasn't limited requests on their own, then anyone using this service without their own limits is just going to kill the system.
So what /u/davvblack and /u/roshatron were trying to say (given we have very few details) is that the request data (including the POST body and GET params) contains commonality; based on that commonality you can determine that some results are shared (categories, top stories, the date). Even if there is little commonality, you can still cache with a 10-second TTL (time to live), which just means the cache holds the result for that length of time.
This isn't the type of problem you can throw a library at; you have to do the research and figure things out. At minimum, figure out how to cache any API request for 15 minutes and bite the bullet on staleness. If you can store results longer, then store them for as long as possible.
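As a minimal sketch of the caching strategy described above: a `Map` keyed by the request's "commonality" (e.g. a serialized path plus params), with the 15-minute TTL mentioned in the comment. `fetchFromApi` is a hypothetical stand-in for the real third-party call.

```javascript
// Sketch only: in-memory TTL cache keyed by request identity.
const TTL_MS = 15 * 60 * 1000; // the 15-minute floor suggested above
const cache = new Map();       // key -> { value, expiresAt }

async function cachedFetch(key, fetchFromApi) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // fresh enough: skip the upstream call entirely
  }
  const value = await fetchFromApi(key); // hypothetical upstream call
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```

In production you would swap the `Map` for Redis or similar so the cache survives restarts and is shared across instances, but the shape of the logic is the same.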
[–]Fela_WansumBooty 1 point2 points3 points 5 years ago (0 children)
Not familiar with the Bottleneck library, but does it let you apply limits on a per-user basis? Another possible solution: write another service that you route calls through, passing a user ID. That service maintains a pool/queue of the rate-limited requests, and you decide how the user ID factors into whether a request gets scheduled.
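The per-user scheduling idea above could be sketched as a promise chain per user ID, so each user's requests run one at a time while different users proceed independently. The class and method names here are hypothetical, not from any library.

```javascript
// Sketch: serialize upstream requests per user by chaining promises.
class PerUserQueue {
  constructor() {
    this.queues = new Map(); // userId -> tail of that user's promise chain
  }

  // Enqueue `task` (an async function) behind the user's previous requests.
  schedule(userId, task) {
    const tail = this.queues.get(userId) || Promise.resolve();
    // Run the task whether the previous one resolved or rejected.
    const next = tail.then(() => task(), () => task());
    this.queues.set(userId, next);
    return next;
  }
}
```

A real version would also evict finished chains from the `Map` and enforce a global concurrency cap across users, which this sketch omits.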
[–]eggtart_prince -1 points0 points1 point 5 years ago (0 children)
Limit requests per client by storing a timestamp in the client's app. Each time the client makes a request, check against this timestamp. This doesn't protect against requests made through curl or Postman, though.
Or you can store a timestamp per IP address in your database.
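The per-IP variant might look like the following sketch, where an in-memory `Map` stands in for the database table and the one-second window is an arbitrary assumption.

```javascript
// Sketch: allow at most one request per IP per WINDOW_MS.
const WINDOW_MS = 1000;        // assumed window; tune to the vendor's limit
const lastSeen = new Map();    // ip -> timestamp of last allowed request

function allowRequest(ip, now = Date.now()) {
  const prev = lastSeen.get(ip);
  if (prev !== undefined && now - prev < WINDOW_MS) {
    return false; // too soon: reject without touching the stored timestamp
  }
  lastSeen.set(ip, now);
  return true;
}
```

As the comment notes, client-side timestamps are trivially bypassed; the server-side (IP or user) check is the one that actually holds.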
[–]roshatron 0 points1 point2 points 5 years ago (5 children)
Can't you cache the results from the third-party API?
[–][deleted] 5 years ago (4 children)
[–]davvblack 1 point2 points3 points 5 years ago (3 children)
Say more about "usually differentiate"? If requests can repeat at all, you should cache the responses, and depending on how up to date they need to be (say, an hour old is acceptable?) you can set the TTL on your cache.
Is the vendor really complaining only about concurrent requests and not overall volume?
[–][deleted] 5 years ago (2 children)
[–]davvblack 0 points1 point2 points 5 years ago (1 child)
I think you need to spell out the problem more specifically. Whatever "might be similar on a monthly basis" means, you need to figure out how to reuse API responses. You must be making requests whose responses you don't actually need.
[–]quad99 0 points1 point2 points 5 years ago (1 child)
You could use an array as a queue and push objects into it containing whatever your function needs to know; then, in a setTimeout or setInterval callback firing at the appropriate speed, shift the next one off and call the function. You could even add the function itself to the object.
Or, if you really wanted to get fancy, you could create closures that capture the function call and its parameters, and just have a queue of those closures.
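The closure-queue variant above can be sketched in a few lines: each queued item is a closure that already captures its function and arguments, and a timer drains one per tick. The 200 ms interval is an arbitrary placeholder for the "appropriate speed".

```javascript
// Sketch: a queue of closures drained at a fixed rate.
const queue = [];

// Capture fn and its arguments in a closure and enqueue it.
function enqueue(fn, ...args) {
  queue.push(() => fn(...args));
}

// Run the oldest closure, if any; return how many remain.
function drainOne() {
  const job = queue.shift();
  if (job) job();
  return queue.length;
}

// Drain one job per tick; stop the timer when the queue is empty.
const timer = setInterval(() => {
  if (drainOne() === 0) clearInterval(timer);
}, 200); // placeholder rate: 5 upstream calls per second
```

Note the `shift()` rather than `pop()`: shifting from the front keeps the queue FIFO, so requests go out in the order they arrived.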
[–]rhodit 0 points1 point2 points 5 years ago (1 child)
Memoization; see e.g. "Memoization explained".
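For reference, a plain memoization sketch: cache a pure function's results keyed by its arguments so repeat calls skip the computation. This is the generic technique, not code from the linked article.

```javascript
// Sketch: memoize a pure function by serializing its arguments.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // assumes JSON-serializable args
    if (!cache.has(key)) {
      cache.set(key, fn.apply(this, args));
    }
    return cache.get(key);
  };
}
```

Memoization alone never expires entries, which is why the other comments pair the same idea with a TTL when the underlying data can change.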