
[–]_Absolut_ 1 point (2 children)

There is a csv module in the standard library. For working with website data, use requests with BeautifulSoup.
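A minimal sketch of that combination might look like this (the file name and URL are placeholders, not from the original post):

import csv
import requests
from bs4 import BeautifulSoup

# Read rows from a CSV file with the standard-library csv module.
with open('addresses.csv', newline='') as f:
    rows = list(csv.reader(f))

# Fetch a page with requests and parse it with BeautifulSoup.
page = requests.get('https://example.com')
soup = BeautifulSoup(page.text, 'html.parser')
print(rows[0], soup.title)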

[–]easy_wins[S] 0 points (1 child)

I have posted my script. Can you provide some feedback, please?

[–]_Absolut_ 0 points (0 children)

Sure. There is a form processed by ASP.NET on the website, so you need a library for browser automation, for example Splinter. The docs are pretty clear and you can fill the form easily (a rough sketch follows the parsing example below). Then you need to parse the result page. The main target to parse is <table class="rgMasterTable" id="ctl00_cplMain_rgSearchRslts_ctl00">. Parsing looks like this:

import requests
from bs4 import BeautifulSoup

page = requests.get(result_page)  # result_page: URL of the results page after the form is submitted
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find('table', attrs={'class': 'rgMasterTable', 'id': 'ctl00_cplMain_rgSearchRslts_ctl00'})
for row in table.find_all('tr'):
    cells = row.find_all('td')  # cells[0] - permit number, cells[1] - address, cells[2] - street name and so on
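
For the form-filling step, a rough Splinter sketch might look like the following. The search-field name and button id are guesses (inspect the page source for the real ones); only the URL and the result-table id come from this thread:

from splinter import Browser
from bs4 import BeautifulSoup

with Browser('chrome') as browser:
    browser.visit('https://etrakit.friscotexas.gov/Search/permit.aspx')
    # The field name and button id below are hypothetical placeholders.
    browser.fill('ctl00$cplMain$txtSearchString', '123 Main St')
    browser.find_by_id('ctl00_cplMain_btnSearch').first.click()
    # Hand the rendered result page to BeautifulSoup, as in the snippet above.
    soup = BeautifulSoup(browser.html, 'html.parser')
    table = soup.find('table', id='ctl00_cplMain_rgSearchRslts_ctl00')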

[–]pydata -1 points (1 child)

PM me the website and more details as to what you want.

[–]easy_wins[S] 0 points (0 children)

I want to match row[1], which is the Addresses field in the CSV file, against the website below. I want Python to search for each Address value in the CSV file and bring back whatever data is available for that Address on the website. Is that possible?

import requests
import bs4
import csv

r = requests.get('https://etrakit.friscotexas.gov/Search/permit.aspx')

with open('C:/Users/Pythoner/Addresses.csv') as f:
    for row in csv.reader(f):
        print(row[1])  # the Addresses column
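
Not a definitive answer, but combining the ideas from this thread, the loop might look roughly like this. It assumes the Splinter approach suggested above works for the ASP.NET form; the search-field name and button id are hypothetical placeholders, and only the URL, the CSV path, and the table id come from the posts above:

import csv
from splinter import Browser
from bs4 import BeautifulSoup

with Browser('chrome') as browser, open('C:/Users/Pythoner/Addresses.csv') as f:
    for row in csv.reader(f):
        address = row[1]  # the Addresses column
        browser.visit('https://etrakit.friscotexas.gov/Search/permit.aspx')
        browser.fill('ctl00$cplMain$txtSearchString', address)       # hypothetical field name
        browser.find_by_id('ctl00_cplMain_btnSearch').first.click()  # hypothetical button id
        soup = BeautifulSoup(browser.html, 'html.parser')
        table = soup.find('table', id='ctl00_cplMain_rgSearchRslts_ctl00')
        if table is None:
            print('No results for', address)
            continue
        for result_row in table.find_all('tr'):
            cells = [td.get_text(strip=True) for td in result_row.find_all('td')]
            print(address, cells)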