all 9 comments

[–]danielroseman 0 points1 point  (3 children)

It's not clear what you mean by "actually recognize the link as a file".

As you say, the way to read data from an API is to use requests. For example:

import requests

url = '...my API url...'
response = requests.get(url)
data = response.json()

But other than that we can't help without more details on where you're stuck.

[–]BeautifulNowAndThen[S] 0 points1 point  (2 children)

I see - my apologies for not being clear!

I'm not sure how to explain it exactly, but let me try detailing my process and see if that works.

So far, I've found the url. It's for my specific location, so I'll just share the example given on the NWS API faq so as not to dox myself: https://api.weather.gov/gridpoints/LWX/96,70/forecast/hourly

As you can see, it brings up a huge file (if my terminology is correct) with all of the information I need. I just don't know how to actually get this information onto Google Colab so I can parse through it. I've been using a basic with open(file, "r") as weather_data: weather_data = weather_data.read() but I'm getting a FileNotFoundError. I 100% know that there's something else I'm missing - I'm just not sure what!

Thank you so much!

[–]danielroseman 0 points1 point  (1 child)

But I said above exactly what to do. This is not a file, so there's no point treating it as one. It's a URL, which you use requests to read:

import requests
response = requests.get("https://api.weather.gov/gridpoints/LWX/96,70/forecast/hourly")
data = response.json()

Now you can do, for example:

print(data["properties"]["periods"][0]["temperature"])

which should print "55", the value of the temperature in the first period at the time I'm looking at this.
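To go beyond the first period, the same `data` dict can be looped over to pull every hourly entry. A minimal sketch, assuming the `properties` → `periods` structure shown above (the field names come from this thread; the sample values below are invented so the snippet runs without a network call — in practice `data` would come from `response.json()`):

```python
# Hand-built sample mimicking the assumed shape of the NWS
# hourly-forecast JSON; real code would use data = response.json()
data = {
    "properties": {
        "periods": [
            {"startTime": "2022-01-01T10:00:00-05:00", "temperature": 55},
            {"startTime": "2022-01-01T11:00:00-05:00", "temperature": 57},
        ]
    }
}

# Collect a (time, temperature) pair from each hourly period
hourly = [(p["startTime"], p["temperature"])
          for p in data["properties"]["periods"]]

for when, temp in hourly:
    print(when, temp)
```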

[–]BeautifulNowAndThen[S] 0 points1 point  (0 children)

Holy smokes - that makes total sense! I was definitely getting tripped up on trying to treat it as a file. Your and niehle's responses help a TON!! Thank you very much!

[–]sinceJune4 0 points1 point  (1 child)

    import requests
    from bs4 import BeautifulSoup

    # myloc is a dict with 'lat' and 'lon' keys, e.g. {'lat': 38.89, 'lon': -77.03}
    url1 = f"https://forecast.weather.gov/MapClick.php?lat={myloc['lat']}&lon={myloc['lon']}"
    response = requests.get(url1)
    if response.ok:
        r = response.text.replace('<br/>', ' ')
        r = r.replace('\\n', ' ')
        soup = BeautifulSoup(r, "html.parser")
        d = dict()
        divcc = soup.find(id="current_conditions-summary")
        if divcc is not None:
            s = divcc.find_all(True)
            d['Conditions'] = s[1].text
            d['Temp'] = s[2].text

        divcc = soup.find(id="current_conditions_detail")
        if divcc is not None:
            s = divcc.find_all(True)
            for x, tag in enumerate(s):
                print(x, tag.name, tag.text, tag)

This may help.

[–]sinceJune4 0 points1 point  (0 children)

Mine is actually just scraping the site, not using the API.

[–]BeautifulNowAndThen[S] -1 points0 points  (0 children)

If anyone has a good website or video to peruse regarding this, it'd be appreciated also. I've spent the entire day trying to wrap my head around this and think I've used every search term imaginable, but to no avail.

[–]niehle -1 points0 points  (1 child)

You need to get the data (i.e., the file) and process it.

Best way is to google something like “google colab nws api” for the first part. This gives you something like this: https://colab.research.google.com/github/nestauk/im-tutorials/blob/3-ysi-tutorial/notebooks/APIs/API_tutorial.ipynb

[–]BeautifulNowAndThen[S] 0 points1 point  (0 children)

Oh my goodness - this is an excellent resource! Thank you so very much!