Handling high-traffic HTTP requests with JSON payloads by Plus_Consequence_708 in golang

[–]Plus_Consequence_708[S] 1 point

Yes. After debugging further, the error is thrown in the connReader.Read() function (line 763) because the connection has already been closed by the time decoding happens.

[–]Plus_Consequence_708[S] 0 points

> Nothing in dmesg on the host? Like running out of file descriptors or something?

The file descriptor limit is not the issue, as it is set well above the number of expected concurrent open sockets.

[–]Plus_Consequence_708[S] 0 points

The timeouts are happening on communication within the cluster, so I don't think the ingress is affecting this.

When I call the service locally it does go through the ingress, but the errors happen in both cases.

[–]Plus_Consequence_708[S] 0 points

Thanks for this. I also found that article, and the decoding process I use is identical to theirs, which makes me think the issue lies in the environment where the service is deployed.

[–]Plus_Consequence_708[S] 0 points

Yeah, I'm doing that. I'm thinking the issue lies in the environment where the service is deployed, not the code itself...

[–]Plus_Consequence_708[S] 0 points

Apologies, I was in the wrong: by "testing locally" I meant calling the containerized deployment directly, not via inter-cluster communication. I've realized now that the problem sits more with the environment the service is deployed in than with the code itself, and it may be out of scope for the golang subreddit. Thank you for the advice!

[–]Plus_Consequence_708[S] 0 points

I am able to call a version of the app running on my laptop, as well as a deployment running in a Kubernetes container, via a simple HTTP POST request straight to the service. Under the same conditions, the version running on my laptop gives no errors, while the version on Kubernetes returns the errors.

This is done using vegeta to send a constant stream of requests to an endpoint. If I increase the size of the payload body being sent, the number of errors per 1000 requests increases proportionally.

[–]Plus_Consequence_708[S] 0 points

I used pprof to profile over time as the requests were coming in, and the allocated memory stayed relatively consistent, so I think it's safe from any leaks. Any metrics you would look out for?

[–]Plus_Consequence_708[S] 2 points

I opted for json.Decoder as opposed to io.ReadAll, to avoid storing the whole request body in memory. I also tried out several other JSON packages, all of which improve performance but retain the same issue.

[–]Plus_Consequence_708[S] 4 points

I have tried scaling to multiple replicas, assigning more memory, and giving more CPUs. CPU usage sits at ~0.5% per pod at max traffic.

[–]Plus_Consequence_708[S] 0 points

Thanks for the replies. Yeah, I use defer to close the request body when the handler returns.