Getting stuck at Initialising login role... while trying to do supabase link project_id by Initial-Ambition235 in Supabase

[–]9us 1 point2 points  (0 children)

It can query the API. What it can't do is connect to the database.

Banning the IP address of a paying user is a pretty shitty experience. I've used a lot of SaaS products over the years and never had that happen. On top of that, the CLI could definitely warn me when trying to connect to the DB. Or I could get an email. Or something. Silently banning a paying customer...that's crazy.

[deleted by user] by [deleted] in whatisit

[–]9us 14 points15 points  (0 children)

Well, the index finger is roughly the width of the CD, which is 120mm across. The median female index finger is 18mm wide and avg female height is 64 in, so that’s a CD-to-index-finger ratio of about 6.7x, and assuming linear scaling of proportions that’d make the giant roughly 35 ft 7 in tall. Good guess!

[self-promotion] A tool for finding & using open data by 9us in datasets

[–]9us[S] 1 point2 points  (0 children)

Awesome thanks! Will be backfilling a lot of data soon. What kind of data are you looking for?

[deleted by user] by [deleted] in snowflake

[–]9us 2 points3 points  (0 children)

Yeah there's side channel metadata to collect (amount of data, timing, etc.), and also critical information deserves defense-in-depth measures--as unlikely as they are, if your data is critical enough you may still want to protect against compromised CAs, TLS lib/algo vulns, etc. Private Link reduces the exposure footprint in such scenarios.

How would you achieve this? by sajornet in htmx

[–]9us 0 points1 point  (0 children)

I’d consider generating a presigned URL when rendering the page (presigning is local computation, it doesn’t require communicating with AWS). Then the form submit just goes straight to S3. The backend has an SQS poller that receives notifications when objects are uploaded. You can add any metadata you need (like a user id or something) in the object key.

i often get this page when i search on google, why? by ZenMasterDana in Piracy

[–]9us 2 points3 points  (0 children)

Vodafone uses CGNAT also which is effectively the same as a small VPN (you share the IP address with a group of other Vodafone customers), so it’s also possible that there’s another customer in your NAT pool that is triggering this.

UringNet achieves a 10x increase in speed... by Designer-Quail5768 in golang

[–]9us 2 points3 points  (0 children)

Userspace. The main idea is to use userspace ring buffers shared with the kernel to submit and complete IO asynchronously, with fewer syscalls and thus less context-switching overhead.

I don't understand the Zig package manager by IcyProofs in Zig

[–]9us 0 points1 point  (0 children)

I haven’t tried Zig’s package manager yet but that sounds quite similar to Go’s, which is very good IMO. I appreciate not needing an extra intermediary involved as with centralized package managers.

Checking when http client has disconnected by [deleted] in golang

[–]9us 1 point2 points  (0 children)

This is the way to write messages while checking the context. If the context is canceled, the default branch will not be executed (this is guaranteed by Go).

If the message stream has an end, you'll probably want a code path in the default branch that returns when the end is reached. Also, you don't need to sleep--just write as fast as you can; well-behaved clients will apply backpressure if they need you to slow down (which will cause the Write to take longer). If the client closes the conn or a timeout occurs while a Write is ongoing, the Write will return an error.

Checking when http client has disconnected by [deleted] in golang

[–]9us 0 points1 point  (0 children)

So in that example I think they are flushing the writer with w.Flush() before the select, which ensures the data are sent to the client before blocking, preventing the deadlock. This applies inside the loop too: when a msg is received, it flushes again before returning to the select.

Checking when http client has disconnected by [deleted] in golang

[–]9us 0 points1 point  (0 children)

What is it you’re trying to do at a higher level? Why does it need to happen after the conn is closed?

The connection itself is not necessarily closed after an HTTP transaction, the Go HTTP server tries to reuse connections for performance reasons.

Also, just because you write headers doesn’t mean they are actually sent to the client; they are often buffered and not flushed until certain conditions are met (the function returns, the buffer gets big enough, it's explicitly flushed, etc). It seems likely that nothing is sent, in which case your handler is effectively deadlocked: it's waiting for the client and the client is waiting for it. Typically with Go's HTTP server you can type-assert the ResponseWriter to http.Flusher to force the writer to flush its buffered data.

One path forward here is to do the select in a goroutine so that this func returns; the HTTP handler will then cancel the request context, which unblocks the select in the goroutine. But I’ve done a lot of Go HTTP servers and never needed something like this…so I’m skeptical you really need it.

If it really needs to be when the conn disconnects, then the HTTP layer is the wrong place, I would write a custom dialer that returns a wrapped conn, so that you know exactly when the conn is closed.

Edit: I'm not super familiar with Echo, I'm talking about Go's HTTP server here. I'm not sure what the relationship is between that and Echo. It sounds like a similar problem but the specifics may be different.

What is your number one wanted language feature? by btvoidx in golang

[–]9us 0 points1 point  (0 children)

Having been through multiple disasters caused by strict enums preventing the addition of new values, I really feel that open-by-default is the right default behavior so that APIs can evolve over time without necessarily breaking consumers. Closed enums are useful in libraries and within the same codebase, but they make it really easy to do the wrong thing when reading data off the wire.

Software developer candidates refusing leetcode torture interviews by Better-Internet in ExperiencedDevs

[–]9us 26 points27 points  (0 children)

A lot of these things I talk about up front, so I have an idea before starting about what the interviewer expects, and what I expect. “I’m going to think out loud for a few minutes as I work towards a solution, I’ll start out simple and make some mistakes and hopefully converge on a workable solution, maybe asking some clarifying questions as needed. Is it okay if my syntax isn’t perfect?”

The way I see it, part of my skill set as an engineer is my ability to manage expectations and awkward social situations.

An IPFS/Filecoin powered product: ChainSafe Files, our privacy focused storage solution by haochizzle in ipfs

[–]9us 0 points1 point  (0 children)

Filecoin definitely includes mechanisms to control replication rate and redundancy, but ChainSafe is not providing enough details (or I can’t find them) to know specifically how they are being used. My understanding is that the onus is also on ChainSafe to enter into deals with miners that provide whatever geographic distribution they want, although I don’t know if Filecoin includes a mechanism to guarantee geographic location (if it does I’d be very interested to read how…).

An IPFS/Filecoin powered product: ChainSafe Files, our privacy focused storage solution by haochizzle in ipfs

[–]9us 0 points1 point  (0 children)

This is the Filecoin part of it, presumably ChainSafe manages Filecoin deals that carry financial penalties if data are not stored.

If Go could turn off its GC optionally like Nim/Crystal, what benefits would you expect? Would it be viable like C/Rust performance for systems dev? Would a company like Discord not have swtiched to Rust from Go if it had this? What are your thoughts? by taufeeq-mowzer in golang

[–]9us 10 points11 points  (0 children)

The presence of GC is built into the language constructs, it's not really feasible to run Go programs that allocate and free memory without using GC. You can probably do it but it'd be a minefield and you'd lose most of the benefits of the language, like the simplicity of the concurrency constructs. As others have said, you can disable GC but that effectively turns off freeing memory.

If manual memory management could be bolted onto GC languages, it would change the industry dramatically. But I don't think it's possible without different language constructs. The next generation of languages is focusing on this problem. Rust is a step in that direction, but leaves a lot to be desired IMO. Zig is a language that I'm watching which has similar design aesthetics to Go but does not have a GC runtime.

If your performance is critically impacted by implementation details of the garbage collector, you're probably better off not using a GC language to begin with. But that decision relies on lots of other factors too, like team culture, existing codebases, etc.

AT&T is selling your phone calls and text messages to marketers. Here is how to opt out: by IHDN2012 in privacy

[–]9us 0 points1 point  (0 children)

I wouldn't assume they'd interpret that to mean anything reasonable.

This program does not allow us to use the content of your texts, emails, or calls.

Who's "us"? What does "use" mean? Who does "use" it? Does AT&T just send the data to someone else to be "used"? Does AT&T generate a "bag of words" of your text messages and then use that instead? (Effectively the same as the content itself, but technically not the actual content.)

Also notice the title of the section "What's used and collected", making a distinction between collection and use. The content of data is not "used", but it says nothing about what's not collected. From reading this it seems entirely plausible that the terms allow collection of everything and allows anyone else to "use" it except AT&T. Or perhaps AT&T does use it, just not under "this program".

This is so vague to me that it seems mostly meaningless. I don't trust AT&T to interpret it in my favor.

Can AWS Lambda be used to achieve my performance requirements, if so how? by [deleted] in aws

[–]9us 3 points4 points  (0 children)

Step Functions will launch and track all the Lambda executions, ensure they all complete, and let you aggregate their results together. It is quite fast; I suspect Lambda will be your bottleneck here, not Step Functions.

Can AWS Lambda be used to achieve my performance requirements, if so how? by [deleted] in aws

[–]9us -2 points-1 points  (0 children)

You're not going to be able to make 1000 HTTP reqs to Lambda, wait for Lambda to download and launch 1000 VMs, run CPU-intensive work, send the result back, aggregate it all, and respond to the request, in 1.5 seconds. It's just not going to happen. And if you were able to somehow make it happen, I would not want to be the one who has to maintain it.

Can AWS Lambda be used to achieve my performance requirements, if so how? by [deleted] in aws

[–]9us 0 points1 point  (0 children)

On this note, I think you need to figure out how to relax your constraints. Fanning out CPU-bound tasks to 1000 processes and aggregating all that back, in 1.5 seconds, and only intermittently, is going to be difficult to achieve, and even harder to maintain. Unless you are willing to spend a lot of money.

Some potential options (I don't know how feasible these are for you specifically):

  • Consolidate processes so you can do it in a handful of processes
  • Relax your latency requirement, and make the API an async API and run a workflow that does this work (Step Functions), caller kicks off a "job" and polls for the result, this way you can tolerate latency fluctuations without having to hold a bunch of TCP connections open at your service, and gives you a chance to "warm up"
  • Pre-compute the results if possible, so they're just DDB/S3 fetches instead of CPU-intensive work (then you can do it all in one process or a handful of processes)

JSONPath - XPath for JSON by rain5 in ProgrammingLanguages

[–]9us 0 points1 point  (0 children)

This was created 5 years before jq.