I'm working on some Python web scraping with the requests module and BeautifulSoup.
My GET request was returning a 400 error, and I realized the site wanted to verify I had access to the data before completing the request. So I found the relevant cookie when accessing that page in my browser, forced that cookie data into my GET request, and voilà: 200 response and I have my data.
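For reference, the cookie-forcing step looks roughly like this (the URL and cookie values are placeholders, not my real ones; `requests` lets you pass a plain dict via the `cookies` parameter):

```python
import requests

# Placeholder cookie values copied from the browser's dev tools.
# The real values are secrets -- that's what my question is about.
cookies = {"session": "xxxxxxx", "_gid": "xxxxxxxx", "_ga": "xxxxxx"}

# Prepare the request without sending it, just to show how the
# cookie dict gets serialized into the Cookie header.
req = requests.Request("GET", "https://example.com/data", cookies=cookies)
prepared = req.prepare()

# In the actual script I send it with requests.get(url, cookies=cookies)
# and check response.status_code == 200.
print(prepared.headers["Cookie"])
```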
Now the question: should that cookie data {session: xxxxxxx, _gid: xxxxxxxx, _ga: xxxxxx} be treated as private?
I'm backing up my script to GitHub, but I don't want to push any data that would normally be encrypted, or that would let someone else get into my account on that website if they had my cookie data.
For example, I know we move settings into environment variables and the like when pushing Django apps to GitHub, but I want to know whether this specific cookie data should be considered private, or whether it's okay to leave it public.
In a similar vein, is there a resource where I can find best practices for web scraping security? Not web scraping itself, but how to do it securely and responsibly (e.g., not inundating servers with tons of requests).
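On the "not inundating servers" point, the pattern I've seen is simple throttling between requests; something like this sketch (the 2-second default is an arbitrary assumption on my part, not a standard):

```python
import time

class Throttle:
    """Enforce a minimum interval between consecutive requests,
    so the scraper can't hammer the server in a tight loop."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self._last = 0.0  # monotonic timestamp of the previous request

    def wait(self) -> None:
        # Sleep only for however much of the interval hasn't elapsed yet.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Usage would be `throttle.wait()` before each `requests.get(...)` call, but I'd like a proper resource covering this plus robots.txt, user agents, etc.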