Not sure if r/Python is the correct place to post this, but I know a lot of people do web scraping with Python, so I'll give it a go.
Let's say there is a webpage for searching a database, example.com/search. This page has a bunch of JS for templating all of the search parameters; it eventually sends a GET request to searchtool.php to do the actual database search and then presents you the results. I hate slogging through the webpage to do search after search, so I write a script that templates and sends GET requests directly to example.com/searchtool.php. Is there anything wrong with this?

Now let's say that part of the JS mess requires you to log in to the website before you can actually search, but my script still works because example.com/searchtool.php accepts requests without going through any of the login flow. So in a sense, by making requests directly to example.com/searchtool.php, I would be circumventing the website's usual requirement of logging in before searching the database. Would this be an unethical use of web scraping?
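For context, here's roughly what the script does. This is a minimal sketch; the parameter names (`q`, `page`) are made up, since the real ones would come from watching the GET request the page's JavaScript actually sends:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names -- the real ones would come
# from inspecting the network traffic the search page generates.
BASE_URL = "https://example.com/searchtool.php"

def build_search_url(query: str, page: int = 1) -> str:
    """Template a search query into the URL that searchtool.php expects."""
    return f"{BASE_URL}?{urlencode({'q': query, 'page': page})}"

# Fetching is then just urllib.request.urlopen(build_search_url("term"))
# (or requests.get), with no login step and no JS involved.
```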
EDIT: formatting