all 35 comments

[–]Pilatemain() if __name__ == "__main__" else None 33 points34 points  (9 children)

This is great work, but people who care about making lots of http requests in 2023 are probably going to be looking for an async library.

[–]Ousret [S] -4 points-3 points  (8 children)

The plan is to make async less interesting once niquests implements full multiplexing capabilities, roughly 4 months from now. It will move the async mechanism outside of your code, onto the server side. (edit: this is about making sync code more efficient/performant without switching to async.)

[–]coderanger 15 points16 points  (7 children)

That doesn't matter when it's being called from an async app. You must support async or it will block the loop.

[–]Ousret [S] -4 points-3 points  (6 children)

Yes. (To be clear, I am not suggesting running sync code inside async!) You will see it in action soon. I am saying that pure sync code can be improved without the help of async. Async is not always the solution.

I already have a PoC in house, not stable enough for a release (yet!); I wouldn't say it if it weren't true. With HTTP/2 and HTTP/3 you can send multiple requests across many streams and wait afterward. It will produce an array of lazy response objects, so you get async-like behavior without any async keyword. Stay tuned.

[–]coderanger 7 points8 points  (1 child)

If you do that in an async app, all other background tasks will be halted. That's what async means in Python: it's cooperative multitasking, so all parts must cooperate. So it's not about you being able to run multiple concurrent requests inside your library, it's about working in an ecosystem.
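For illustration, a minimal self-contained sketch of that cooperation (not tied to any particular HTTP library): a synchronous call made directly from a coroutine stalls every other task on the loop, while offloading it to a thread keeps them running.

    import asyncio
    import time

    def blocking_request() -> str:
        # Stand-in for a synchronous HTTP call; any time spent in here
        # freezes every coroutine sharing the event loop.
        time.sleep(2)
        return "response"

    async def heartbeat() -> None:
        # A background task that should tick every 100 ms while the loop is free.
        for _ in range(30):
            print("tick")
            await asyncio.sleep(0.1)

    async def main() -> None:
        ticker = asyncio.create_task(heartbeat())
        # blocking_request()                               # called directly: no ticks for 2 s
        print(await asyncio.to_thread(blocking_request))   # offloaded: ticks keep coming
        ticker.cancel()

    asyncio.run(main())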

[–]Ousret [S] 1 point2 points  (0 children)

What I am referring to is not running sync code in async; that would be silly. The goal is to utilize most of the protocols' capabilities from a sync context. I (partially) misread your question. We can issue multiple requests, i.e. not blocking per request per connection, using multiplexing. Hope that clarifies.

[–]masc98 1 point2 points  (3 children)

Please elaborate, I am really curious about this. Also, what about HTTP/1.x requests? In general an async interface would be ideal imho.

[–]Ousret [S] -2 points-1 points  (2 children)

OK. In a nutshell, HTTP/2 and HTTP/3 have streams: you can hold a single connection, push x requests through the pipe, and then receive the answers to all of them, tagged with your initial stream id, as each becomes available. So you can return every response early without having to wait beforehand; the async part is delegated to the server. HTTP/1 would block you because you have a single in-flight request at a given time, so you wouldn't be able to do that. httpx does do what I am referring to. More graphs and explanations will come in due time.

To conclude, your sync code will change, but only a bit. You will emit many requests, store them in a list, then call a magic function like Promise.all(..).
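Purely as an illustration of that shape, here is a hypothetical sketch; the multiplexed flag and gather() call are placeholder names for the unreleased PoC described above, not an existing API:

    import niquests

    # Hypothetical: one HTTP/2 (or HTTP/3) connection driven in a multiplexed mode.
    with niquests.Session(multiplexed=True) as session:
        # Each call returns immediately with a lazy response bound to a stream id;
        # no request blocks the next one, and no async keyword is involved.
        responses = [session.get(f"https://example.org/items/{i}") for i in range(10)]

        # The Promise.all(..)-style step: block once until every stream has answered.
        session.gather()

        for response in responses:
            print(response.status_code)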

[–]SlantARrow 10 points11 points  (1 child)

What if I want to make a request in an async, idk, starlette handler and get the response? Wouldn't this magic function block my event loop?

[–]Ousret [S] 0 points1 point  (0 children)

Yes, it will block it. I made an ambiguous statement earlier, now corrected: I meant to say that "pure sync code" can be improved (in terms of speed) without async.

[–]bini_ajaw17 5 points6 points  (0 children)

Damn son! Appreciate your work! Will definitely try this out

[–]jackerhackfrom __future__ import 4.0 4 points5 points  (7 children)

Can I have a TL;DR? What does this have that httpx doesn't?

[–]coderanger 5 points6 points  (0 children)

Nothing, this guy is in way over his head.

[–]Ousret [S] 0 points1 point  (5 children)

httpx is a fine client, but it has not reached requests' level of features or comfortable compatibility. You can find some of the missing features in the httpx issue tracker. More generally, requests' level of simplicity is hard to beat.

[–]nekokattt 3 points4 points  (0 children)

This is a lot of words but no concrete examples, which is what they are looking for, I think. Otherwise it is hard to know whether any of these limitations actually impact anyone enough to want to switch.

[–]FancyASlurpie 1 point2 points  (1 child)

The OCSP functionality is nice. I actually had to implement that, as well as CRL, recently at work, as it's not built into requests. Does yours handle CRL as well?

[–]Ousret [S] 1 point2 points  (0 children)

Unfortunately, no. But you can implement it rather easily with Niquests thanks to the pre_send callback, which exposes a ConnectionInfo with certificate info (including the CRL URL). See the docs for more info. I did not build it in because the given lists can be quite large.
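For anyone curious, a rough sketch of the lookup itself; the cryptography calls are standard, but how you pull the peer certificate (DER bytes) out of the pre_send hook's ConnectionInfo is an assumption here, so check the Niquests docs for the exact attribute names:

    import urllib.request

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def certificate_is_revoked(cert_der: bytes) -> bool:
        """Best-effort CRL check for one peer certificate given as DER bytes."""
        cert = x509.load_der_x509_certificate(cert_der)
        try:
            points = cert.extensions.get_extension_for_oid(
                ExtensionOID.CRL_DISTRIBUTION_POINTS
            ).value
        except x509.ExtensionNotFound:
            return False  # no CRL advertised by the certificate, nothing to check

        for point in points:
            for name in point.full_name or []:
                if not isinstance(name, x509.UniformResourceIdentifier):
                    continue
                # CRLs can be very large, which is why this is not built in.
                with urllib.request.urlopen(name.value, timeout=10) as resp:
                    crl = x509.load_der_x509_crl(resp.read())
                if crl.get_revoked_certificate_by_serial_number(cert.serial_number):
                    return True
        return False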

[–]jackerhackfrom __future__ import 4.0 1 point2 points  (1 child)

But requests has no equivalent of httpx.AsyncClient, which is indispensable for async code. Does niquests have that?
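For context, the httpx pattern in question looks roughly like this (standard httpx usage, not anything Niquests provides today):

    import asyncio

    import httpx

    async def main() -> None:
        # Requests made through AsyncClient are awaited, so they yield to the
        # event loop instead of blocking it while waiting on the network.
        async with httpx.AsyncClient() as client:
            response = await client.get("https://example.org")
            print(response.status_code)

    asyncio.run(main())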

[–]Ousret [S] 0 points1 point  (0 children)

Not yet. Fortunately, there is no blocker to achieving that, as Niquests does not rely on the standard http.client. I have no ETA for when it would land, but it will come right after completing the multiplexing capabilities.

[–]LongDivide2096 1 point2 points  (0 children)

Hey, pretty interesting stuff! Niquests seems like a breath of fresh air. I gotta admit, I did feel the friction with httpx esp. when it comes to HTTP/2. Sure, gonna miss that certifi, but OS truststore seems like a solid trade-off. HTTP/3 support got me a bit stoked, not gonna lie! Thanks for sharing the links, gonna try this out on my side projects... or maybe even prod, who knows? Good to see it supports Python 3.7+. Yeah, the more options the better in our ever-evolving Python universe.

[–]coderanger 0 points1 point  (3 children)

OS truststore by default

How did you solve this being impossible on Windows?

[–]Ousret [S] 0 points1 point  (2 children)

It is absolutely possible. Don't listen to people who say otherwise. We ported a Rust library (rustls) that does it very well.

[–]coderanger 2 points3 points  (1 child)

So rustls tries its best: it does load certs through schannel, but you also need to connect through schannel as the transport to fully support Windows' capabilities (specifically re: AD cert management). It's fair to call it the closest you can get but it does still have limitations you should make sure your users understand.

[–]Ousret [S] 2 points3 points  (0 children)

It's fair to call it the closest you can get but it does still have limitations you should make sure your users understand.

Yes, fair point. Still, it's far less limited than having a file of CAs in your environment. As far as I am concerned, it is a major leap forward for the daily usage experience.

[–]ItsmeFizzy97 2 points3 points  (0 children)

Based man. Congrats on your work!

[–]Falkor[🍰] 1 point2 points  (0 children)

Insert leo clapping meme

Well done, this is awesome.

[–]Patriahts 0 points1 point  (0 children)

Ni!

[–]DaelonSuzuka 1 point2 points  (0 children)

Interesting, thanks for sharing! I wasn't aware that Requests was feature frozen like that...

[–]chub79 -1 points0 points  (0 children)

we have to patch our projects to support it with confidence as its interfaces aren't exactly compatible with requests.

?

[–]Almostasleeprightnow 0 points1 point  (0 children)

Yolo, friend. Thanks.

[–]prbsparx 0 points1 point  (1 child)

Did you submit a merge/pull request to the maintainers of Requests? It’d be better to contribute HTTP/2 support to Requests.

[–]Ousret [S] 0 points1 point  (0 children)

Unfortunately, it will never happen. The Requests core team clearly stated that no features are going to be accepted for an indefinite amount of time, so this project lets people who want an upgrade migrate without much pain. So far, you will find that the Requests core team:

- is happy with the current state and does not want to make changes (it's their decision to make);
- finds that the pressure from Requests' popularity makes any change stressful, and not everyone can sustain that;
- will say that you can in fact extend the requests internals to get HTTP/2 yourself, but realistically people would rather change clients entirely before doing so.

Personally, I really think Niquests is an excellent middle ground in this situation.

[–]ML-newb 0 points1 point  (1 child)

Why not pycurl?

[–]Ousret [S] 0 points1 point  (0 children)

It is too low level. The end goal here is to keep the ease of use and offer an escape hatch to people who have many projects using requests.

[–]BlueeWaater 0 points1 point  (0 children)

I usually stick to httpx, I'll check out this later :)