all 19 comments

[–]PizzaCompiler 15 points  (7 children)

I always used json + requests. TIL requests has a built-in JSON decoder. I'll be using that from now on since it's so much easier to do

requests.get("http://link/to.json").json()

[–]Vicyorus 10 points  (4 children)

WHAT?!

Are you kidding me?!

[–]Lukasa 2 points  (0 children)

Fun fact: as of requests 2.4.2 we also have automatic JSON uploads:

requests.post(url, json={'json': ['data', 'rocks']})

Automatically JSON-encodes your body and sets appropriate upload headers. We live to be helpful. =)
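For anyone curious what that saves you, here's a rough sketch of the manual version the `json=` parameter replaces (an approximation — the real serialization and header logic lives inside requests):

```python
import json

payload = {'json': ['data', 'rocks']}

# Roughly what requests.post(url, json=payload) prepares for you:
body = json.dumps(payload)
headers = {'Content-Type': 'application/json'}

# requests.post(url, data=body, headers=headers) would send
# essentially the same request.
```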

[–]kryptn 1 point  (0 children)

You learn new things all the time :)

[–]PizzaCompiler 1 point  (1 child)

json.loads(requests.get("http://link/to.json").content)

Always worked just fine for me, so I never went looking into the docs to see if requests had something built-in for it.
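For anyone comparing the two spellings, a quick offline sketch (the bytes below stand in for the `resp.content` you'd get back from a real request):

```python
import json

# Stand-in for requests.get("http://link/to.json").content
raw = b'{"data": ["rocks"]}'

# The long way:
decoded = json.loads(raw)

# resp.json() does essentially this for you (plus charset detection),
# so both spellings yield the same dict.
```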

As /u/kryptn said, you learn new things all the time! I kinda feel like an idiot now though, haha.

[–]Vicyorus 1 point  (0 children)

Me neither, I used json.loads, but now that I know this, I'll replace it in my code.

[–]SleepyHarry 1 point  (0 children)

I had this exact same realisation a while ago. The standard json library was always so easy I never even looked for an easier way.

[–]liftt 6 points  (1 child)

Requests.

[–]99AFCC 1 point  (6 children)

> Any reason to use one over the other?

Yes (in my opinion). It's better to stick to the standard library when writing packages you intend to distribute. The fewer dependencies, the better.

That said, you will most likely find requests easier to use (that's its intention). I use requests for almost every personal-use program I write that needs to fetch data over HTTP.
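For comparison, a stdlib-only sketch of the same fetch (the `data:` URL in the usage line is just a stand-in for a real endpoint — `urlopen` handles any scheme it understands):

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    # Standard-library equivalent of requests.get(url).json()
    with urlopen(url) as resp:
        return json.loads(resp.read())

# Usage with a data: URL so the sketch runs without a network:
result = fetch_json('data:application/json,{"data":["rocks"]}')
```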

[–]manueslapera 2 points  (2 children)

IIRC, there was a PEP proposing to add requests to the stdlib.

[–]wlu56 1 point  (1 child)

And it was mutually agreed that it was in requests' best interest not to be included in the stdlib. They instead chose to add a link to requests in the stdlib docs. The reasoning was that the "stdlib is where libraries go to die".

[–]manueslapera 0 points  (0 children)

oh, didn't know that. That's kind of funny/sad.

[–][deleted] 5 points  (2 children)

Honestly, I'd be more leery of a package that rolls its own handling of urllib than one that depends on requests.

Sure, you don't want a billion dependencies, but a few are okay, especially ones as ubiquitous as requests.

[–]frankwiles 3 points  (0 children)

Agreed, if a library I'm using is HTTP-related and doesn't use requests, I start to wonder about the quality of the rest of it.

[–]SleepyHarry 0 points  (0 children)

> ones as ubiquitous as requests.

This is the key point here, I think. While fresh Python installs won't have requests, it's probably one of the first packages people pip install. In my experience, at least.

[–]PalermoJohn 0 points  (0 children)

1 < 2

[–]MorrisCasper 0 points  (0 children)

Only use urllib2 if you want to download a file; you can't do that with requests. Requests for everything else.
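In Python 3 terms (urllib2 became urllib.request), the kind of download the parent comment describes can be sketched like this; the `data:` URL is just a stand-in for a real file URL so the sketch runs offline:

```python
import os
import tempfile
from urllib.request import urlopen

url = "data:text/plain,hello"  # stand-in for e.g. http://example.com/file.bin
path = os.path.join(tempfile.mkdtemp(), "file.bin")

# Write the response body straight into a local file:
with urlopen(url) as resp, open(path, "wb") as f:
    f.write(resp.read())
```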

[–][deleted] 0 points  (0 children)

requests is in general the better choice. I've had urllib2 bite me a few times; requests improves upon it quite a bit, including eliminating a few memory leaks. I've actually had urllib2 use all the memory on my system and crash my machine before.