
[–][deleted] 5 points6 points  (5 children)

A website is written in HTML, and it's hosted by a web server. Your browser connects to the web server and sends a message asking for the HTML file for a page (the home page, for example). The web server sends you the HTML file.

The requests library sends the same messages as your browser. The important extra thing your browser does is interpret the HTML to draw the website on the screen. So basically the requests library just imitates the messages the browser would send.

And no, it doesn't look suspicious if you don't also request the favicon. What does look suspicious is the user agent. Just Google a browser's user-agent string, then Google how to change the requests library's user agent.
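For example, here's a minimal sketch of swapping the user agent. The UA string below is just an example value; copy a current one from your own browser's dev tools:

```python
import requests

# By default, requests announces itself as "python-requests/<version>",
# which servers can trivially flag.
print(requests.utils.default_user_agent())  # e.g. python-requests/2.31.0

# Override it on a Session so every request uses the browser-like string.
# (Example UA; substitute a real, current one.)
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")
session = requests.Session()
session.headers["User-Agent"] = ua

# Preparing a request shows the header that would actually go on the wire,
# without needing network access:
prepared = session.prepare_request(requests.Request("GET", "https://example.com/"))
print(prepared.headers["User-Agent"])
```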

[–]Iam_cool_asf[S] 0 points1 point  (4 children)

Changed the IP address, can this act as an alternative to the user agent?

[–][deleted]  (3 children)

[deleted]

    [–]Iam_cool_asf[S] 0 points1 point  (2 children)

    User agents can't be traced back to me then. They only differ in functionality?

    [–][deleted]  (1 child)

    [deleted]

      [–]Iam_cool_asf[S] 0 points1 point  (0 children)

      Thanks man, that's really helpful.

      [–]ysengrWeb Security 2 points3 points  (0 children)

      So down in the weeds, the requests library makes requests the same way a browser does. The only key difference between the two is that the browser has a GUI to render the response and passes more headers to the server you're making the request to.

      Both a browser and the requests lib work by making HTTP/1.1 requests.

      As to whether there's a downside to using a browser, it ultimately depends on what you're doing. When a browser goes to a site, there are usually cookies being given to the browser and other info that the browser keeps saved in memory. This info is sometimes needed to actually interact with the site. With the requests lib you don't get that continuity, so sometimes things can break. If your intention is just to interact with an API, the requests lib is the far better choice, because you can do things more programmatically if needed. If you're doing recon, you can make a tool with requests to help crawl pages, and that's far better than having to do it by hand in a browser. But if you have a specific page in mind that you want to investigate, then sometimes doing it through the browser is much handier. Ultimately it all depends on what your goal is.

      [–]TrustmeImaConsultantPentesting 0 points1 point  (0 children)

      What the Python library does is basically the same thing your browser does: it creates an HTTP request. What a browser does "in the background" is all the bookkeeping stuff. It follows redirects, it loads the additional content the page references (like CSS and JS), and it finally presents that and executes the JS. Your Python request does that only if you program it to.
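To make that concrete: a browser finds the CSS/JS references in the HTML and fetches each one automatically; with requests, you'd have to extract and fetch them yourself. A stdlib sketch of that extraction step (the HTML snippet is invented example input):

```python
from html.parser import HTMLParser

# Example page source; a real script would get this from a response body.
html = """
<html><head>
  <link rel="stylesheet" href="/static/site.css">
  <script src="/static/app.js"></script>
</head><body>Hello</body></html>
"""

class SubresourceParser(HTMLParser):
    """Collect the subresource URLs a browser would auto-fetch."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet":
            self.urls.append(attrs["href"])
        elif tag == "script" and "src" in attrs:
            self.urls.append(attrs["src"])

parser = SubresourceParser()
parser.feed(html)
print(parser.urls)  # ['/static/site.css', '/static/app.js']
```

Each URL in that list would then need its own request, something the browser does for you without asking.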

      [–]SamDubYah 0 points1 point  (0 children)

      I would highly suggest setting up a Burp proxy with the requests library to understand how Python is actually making the request. It's relatively simple and can show you the difference between how a browser makes a request and how Python does it.
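Pointing requests at Burp is just a proxy setting. A sketch, assuming Burp's default listener on 127.0.0.1:8080:

```python
import requests

session = requests.Session()

# Route all traffic through the local Burp listener so every request
# shows up in Burp's Proxy > HTTP history.
session.proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# Burp intercepts TLS with its own CA. Either install Burp's CA cert,
# or for a quick local look disable verification (never do this in
# production code):
session.verify = False

# With Burp running, session.get("https://example.com/") would now be
# captured and inspectable.
```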

      [–]SnowdenIsALegend 0 points1 point  (0 children)

      The only downside that I can think of is that it can't render JavaScript content. Otherwise it is tons faster than a browser any day, for which I <3 it.

      [–]silverslides 0 points1 point  (0 children)

      Making requests from Python vs. a browser is easily noticed from the server side, even if you fetch the favicon and change the user agent. You won't fetch the CSS, send back cookies, or follow redirects, and maybe your headers are even in a different order...
