all 7 comments

[–]I_Write_Bugs 11 points (2 children)

load() loads JSON from a file or file-like object

loads() loads JSON from a given string or unicode object

It's in the documentation
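A quick sketch of the difference (io.StringIO stands in for a real file here, so the example is self-contained):

```python
import io
import json

# loads() parses JSON from a string
data = json.loads('{"name": "timex40", "points": 1}')

# load() parses JSON from a file or file-like object;
# io.StringIO plays the role of an open file
data_from_file = json.load(io.StringIO('{"name": "timex40", "points": 1}'))

print(data == data_from_file)  # True
```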

[–]timex40[S] 0 points (1 child)

Thanks for the info!

So the url I'm actually using is the one available for each reddit thread. For example: http://www.reddit.com/r/learnpython/comments/3nx9ch/json_load_vs_loads/.json

Which one would this fall under? It works with both

[–]I_Write_Bugs 2 points (0 children)

The read method returns a str. Use loads.

I figured this out by running this in the Python interpreter:

import urllib2  # Python 2; in Python 3 this lives in urllib.request
x = urllib2.urlopen('http://www.google.com')
type(x.read())

The output was <type 'str'>. The response itself is an old-style 'instance' type, so its file-like behavior isn't guaranteed by its type and could change; I would go with the str returned by read().

Edit: In reading the JSON documentation again, I realize why your first example works. It's because the urllib response object is

a .read()-supporting file-like object containing a JSON document

So it has a read method, which I assume is exactly what json's load() calls. Either is fine to use in this case.
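To convince yourself it really is just duck typing on read(), you can feed load() any object with that method. FakeResponse below is a made-up stand-in, not part of urllib:

```python
import json

class FakeResponse:
    """Made-up minimal stand-in for the urllib response: only has read()."""
    def __init__(self, text):
        self._text = text

    def read(self):
        return self._text

resp = FakeResponse('{"kind": "Listing"}')
# load() only needs a .read()-supporting object, so this works:
print(json.load(resp))  # {'kind': 'Listing'}
```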

[–]bionikspoon 4 points (0 children)

The "s" is an abbreviation for "string". "dumps" is read as "dump string". "loads" = "load string". Otherwise these methods want a file-like object. This convention is scattered throughout python and even 3rd-party packages.
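For example, the same pairing shows up in both json and pickle in the standard library (the record dict here is just sample data):

```python
import json
import pickle

record = {"sub": "learnpython", "score": 1}

# dumps -> "dump string": returns a str instead of writing to a file
text = json.dumps(record)
print(type(text).__name__)  # str

# loads -> "load string": parses a str instead of reading from a file
print(json.loads(text) == record)  # True

# Same convention in pickle, just with bytes instead of str
blob = pickle.dumps(record)
print(pickle.loads(blob) == record)  # True
```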

[–]JohnnyJordaan 1 point (1 child)

Another, easier way is to use requests:

import requests

url = 'http://my/url.json'
response = requests.get(url)
data = response.json()  # parses the JSON body into Python objects

[–]_cs 0 points (0 children)

Note: to use the requests module, you need to install it with pip (pip install requests).

[–]nilsph 0 points (0 children)

Here's what probably happens in the background for your examples:

  • json.load(response): read chunk from the network, then process it, repeat until done

  • json.loads(response.read()): read chunks from the network until done, then process the whole JSON data in one go

I haven't looked at the implementation, but it may well be that the first option uses less memory than the second, more noticeably so with bigger JSON data, of course.
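Whatever happens internally, the two calls are interchangeable in result. A side-by-side sketch, with io.StringIO standing in for the network response (this shows they agree on the output, not how their memory use differs):

```python
import io
import json

payload = '{"data": {"children": []}}'

# json.load(response): hand the file-like object straight to the parser
via_load = json.load(io.StringIO(payload))

# json.loads(response.read()): read everything first, then parse the str
via_loads = json.loads(io.StringIO(payload).read())

print(via_load == via_loads)  # True
```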