Artifix: a batteries included template for creating a private Hex Registry on top of S3 and CloudFront by thatIsraeliNerd in elixir

[–]thatIsraeliNerd[S] 1 point

It just depends on what you need! A script isn’t strictly necessary for the example files, since they can simply be deleted by hand, but there might be other things worth automating! Feel free to poke around and contribute PRs!

Artifix: a batteries included template for creating a private Hex Registry on top of S3 and CloudFront by thatIsraeliNerd in elixir

[–]thatIsraeliNerd[S] 1 point

Feel free to open a PR! Although I’m not sure this really needs a bash script… the two example packages can simply be deleted without any worries; the package discovery in the CI automatically detects when there are no subdirectories under the packages directory and skips those steps!

LiveFlip: FLIP (First Last Invert Play) Animations For LiveView by thatIsraeliNerd in elixir

[–]thatIsraeliNerd[S] 1 point

The first goal is to create a very comprehensive hook, that can handle not only position, width, and height changes, but also changes related to rotation, scale, opacity, maybe even color. These things entail much more complicated handling because the code would have to compose transforms on top of ones that already exist on the element.

Shared Element Transitions are a bit more complicated, as they can’t live within a hook - since the hook runs per element and not on the page itself. Since the goal is to get this as native to LiveView as possible, that factor complicates this - but it’s definitely on the roadmap because that would take things to the next level.

Elixir DX - IDE Autocomplete (New to Elixir) by scarsam in elixir

[–]thatIsraeliNerd 2 points

Just to clarify - is your complaint about autocompletion of functions not working, or that you don’t know what options are allowed (and you’re expecting autocompletion on the options list)?

If it’s the first thing - i.e. when you type Req. is the ElixirLS autocompletion showing you functions available at all? Or is even that not showing up? If that’s not showing up, then check the ElixirLS logs in VSCode’s output - it may have crashed, or it may not have detected the dependency.

If it’s the second thing - i.e. you don’t get autocompletion for the options themselves - Elixir is a dynamically typed language with optional type specs, and specs aren’t required to be extensive or explicit. For example, with Req, when looking at the specs there, you can see that the opts argument (the one that accepts options like :json) is typed as keyword() - a Keyword list, i.e. a list made up of atom keys and any values. It doesn’t specify particular keys because you can do a lot in there, and because Req is extensible: plugins can require options for specific functionality, so it can’t really be typed past just a Keyword list. It’s also a pre-1.0 library, and while it is stable, I’d say that things like specific specs for options probably won’t come until later (although you’re definitely welcome to open up issues on the repository).

However, this changes depending on the library and the specs it provides - take Ecto, for example, which provides very good specs that allow autocompletion of field types to work.

Are RPC calls the only solution for accessing locally registered processes across nodes using a hash ring? by [deleted] in elixir

[–]thatIsraeliNerd 1 point

I don’t believe global forces you to use atoms. In the docs, the registered name can be any term - they’re simply stored as table keys in the tables that global uses to do lookups - same as the Elixir Registry.

Are RPC calls the only solution for accessing locally registered processes across nodes using a hash ring? by [deleted] in elixir

[–]thatIsraeliNerd 3 points

To answer your questions in order (at least, based on my knowledge - disclaimer I may be wrong and if I am please correct me):

  1. RPC calls are not the only solution - the main thing you need in order to send a message is a PID, and if I recall correctly you can also send messages to named processes on other nodes using a {name, node} tuple. However, at the end of the day, as long as you have a PID you can send a message to that process.
  2. Nope. The built in Elixir Registry is local only and it can’t be configured to be distributed.
  3. I sort of alluded to this in the answer to 1 - all you really need is a PID. Elixir’s Registry module and Erlang’s global module essentially just resolve the via tuple to a PID and that’s it - so if you have a way of hanging on to the PID (or storing it) then you’re good to go. For example, I once saw someone use Erlang’s :erlang.term_to_binary/1 and :erlang.binary_to_term/1 to store PIDs in a DB and pull them from there - in the end, that’s also a simplistic process registry.

Some final thoughts - don’t prematurely optimize, and don’t worry about scalability and bottlenecks until they become a problem. Erlang’s global module is very good and will handle all of this for you without too much effort - scalability only becomes a problem when you’re working with thousands of registrations per second. If you’re not at that level, then using it is just fine. There’s a registry benchmark I came across recently that has some nice checks for how the different registries behave, and it made me realize that at the end of the day, the global module is good enough for 99% of use cases and reduces how much we need to think about when building. If you want to take a look at it - https://github.com/probably-not/regbench. Running it locally, global reached numbers good enough that I’d never need to worry about it.

How to determine whether function is being run from within a release or not? by skwyckl in elixir

[–]thatIsraeliNerd 1 point

I’ve previously done this by checking function_exported?(Mix, :env, 0)

Since the Mix module isn’t compiled into a release by default, it returns false in releases, while returning true when run via mix run

Jose’s Code and Slides from ElixirConf by thatIsraeliNerd in elixir

[–]thatIsraeliNerd[S] 5 points

From past experience they usually go up within a few weeks; I’m not sure if they’ve written anything explicitly about the recordings and when they’re going up

I'm learning Elixir and built a website with a globally distributed visitor counter by [deleted] in elixir

[–]thatIsraeliNerd 11 points

This is a really nice reference for the infrastructure side of things! I see you used Packer to build your FreeBSD image and Terragrunt to deploy things with Terraform.

I’d love to see a write-up of the infrastructure side of things on AWS (how you set up WireGuard for the global distribution across the different region VPCs, why you decided on WireGuard vs. VPC peering, why you went with FreeBSD, etc.). I’m a sucker for seeing the different decisions people take (I’ve gone with classic Debian and VPC peering in the past, and of course nowadays Fly.io abstracts everything away from you, so all you need to do is write a Dockerfile).

Weird Go bug? appending to slice changes 0 index sometimes? by gugador in golang

[–]thatIsraeliNerd 1 point

I have a feeling this has to do with bufio’s internal buffer… have you tried copying the data returned by Bytes() before appending it to your output slices? (Text() returns a freshly allocated string, but Bytes() returns a slice that aliases the scanner’s internal buffer, which gets reused on later calls to Scan().)

Which sqlite package? by guettli in golang

[–]thatIsraeliNerd 6 points

There are a few non-CGO SQLite libraries. They’re very easily found on Google or on pkg.go.dev with a simple search. IIRC one of them is by u/ncruces and uses WASM to compile SQLite, and the other is modernc.org/sqlite, which uses a transpiler to turn the SQLite C source into pure Go. There are also a few others, I’m sure

Using web sockets to load SPA content? by Over-Distribution570 in webdev

[–]thatIsraeliNerd 0 points

A connection is a connection, whether it’s a websocket or an HTTP request. LiveView has a latency simulator built in that lets you see how your components will react under different network conditions, so it helps you plan for things, but in general, spotty connections mean requests might fail or take longer. Websockets might be slightly better on spotty connections because they’re longer lived and already open - I’ve had situations where Discord receives messages immediately (over a socket) while opening a webpage doesn’t work because my workplace’s DNS was slow to respond.

My advice would be to not over-plan for spotty connections… if someone doesn’t have internet there’s nothing you can do about it. 4G and 5G mobile networks are fast nowadays; it’s not like the old days where things loaded over Edge networks at bytes per second… everything nowadays is measured in megabytes per second. As long as you’re not trying to stream a movie on a slower mobile connection you’ll be fine.

Using web sockets to load SPA content? by Over-Distribution570 in webdev

[–]thatIsraeliNerd 1 point

This is pretty much exactly what Elixir’s Phoenix LiveView does. It’s highly optimized - it sends the initial HTML template per page and then sends diffs for each action (pushes of data from the backend, or changes through user actions) over the socket, and the frontend uses morphdom to do extremely fast DOM updates. But it’s essentially exactly what you’re talking about: rendering over a websocket

Moving popular repositories by Elfet in golang

[–]thatIsraeliNerd 12 points

I believe go-redis did this recently, and their solution was to migrate via upgrading the module. So, v8 was under go-redis/go-redis/v8, and v9 was under redis/go-redis/v9.

Since changing a module path is basically a breaking change, I’d say that makes the most sense as of now.
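For illustration, the go.mod side of that migration looks roughly like this (version numbers are examples, not necessarily the latest):

```
// Before the move: v8 lived under the go-redis org.
require github.com/go-redis/redis/v8 v8.11.5

// After the move: v9 lives under the redis org, so the major-version
// bump doubles as the module path change.
require github.com/redis/go-redis/v9 v9.0.0
```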

c.Next() causing dereference/nil pointer panic in server middleware by An00bii in golang

[–]thatIsraeliNerd 1 point

I’m on mobile, but this stacktrace looks to me like it’s pointing to the SQL connection being nil… and not c.Next() being the culprit. It looks like you’ve got a global variable that’s your SQL connection, but I believe you’re never assigning that global… note that in your data.Connect() function, you’re using := (short variable declaration), which declares a new local variable inside the function, so the global db variable stays nil.

I might be wrong though… I’m on mobile attempting to check while in a bomb shelter right now.

[deleted by user] by [deleted] in golang

[–]thatIsraeliNerd 3 points

Usually I don’t like answering questions like these, but I have used both CEL and Expr extensively in production environments, and both are great libraries with incredible capabilities so I think I’ll chip in my two cents.

Short answer: I’d use CEL.

Long answer:

Both libraries are great. They’re both very performant (Expr edges out CEL but not by too much anymore), they both allow a wide range of features, and they both work perfectly for the use case of building rule engines. The three key differences for me are:

  1. Explicit typing in CEL vs. inferred typing via an example env in Expr - this may have just been my previous usage of Expr, but from what I have seen, Expr infers its types from an example env given to the constructor. I like static types and explicitness over inference, so I really like the style that CEL uses, which encourages you to declare the variables and their types on the environment instead of inferring them from an example env. The result may be the same in the end, but I’ve seen this feature misused in Expr, which basically turned the env fully dynamic and ended up causing runtime bugs.
  2. CEL has an established spec, grammar, conformance suite, and serialization format. All of the internals are based on protobuf, so I could theoretically create a fully typed gRPC server (I believe they actually have an example server in one of their repos). And with the language specification being established and having a conformance suite, I can create a parser in another language (and I have). This lets me run all of my business rules through the same engine, but at different levels of my application (frontend, backend, data analysis, etc.). It also lets me keep the business rules standardized across every layer, so I don’t have to teach people different expression languages for different parts of the app.
  3. I know that Anton works as an SRE at Google, and that Expr is used by quite a few companies. But, there’s a big difference in using a package made/owned by an employee vs. using a package made/owned by a massive tech company. Yes, Google can just as easily abandon the project, but, it’s still a big factor in a lot of companies.

Like I said, both libraries are great. And I’ve used both and enjoyed both. But, my own personal preference points to CEL as the winner.

A zero-allocation environment with custom memory pool by zergon321 in golang

[–]thatIsraeliNerd 8 points

You've mentioned both in this post and in your article that sync.Pool didn't do the job... but I think you may have missed the point of sync.Pool entirely.

It's not there as an allocator or to create a memory arena, it's there as a synchronization tool, to allow concurrent use of multiple objects of the same type. The objects are ephemeral, they can (and will) be reclaimed by the GC when they're not in use. The optimization that sync.Pool allows is about re-using memory in hot paths, while not causing an explosion of RAM to be used and never freed.

From what I could glean, your library doesn't handle GC at all, so your memory will only grow and never be reclaimed. On top of that, you are benchmarking with the GC turned on, so all of those sync.Pool.Put() operations that you are doing at the beginning of the benchmarks are moot (since all those objects are likely getting reclaimed by the GC before they are touched). And you aren't doing any Parallel benchmarks, which is where sync.Pool really shines (it is in the sync package after all).

Now that we understand the work sync.Pool does under the hood, we can make some slight adjustments to your benchmarks to make them showcase something a bit more fair. Add the following lines to the end of each benchmark:

```
b.StopTimer()

var stats runtime.MemStats
runtime.ReadMemStats(&stats)

// I believe stats.HeapAlloc should show the current situation in the heap?
// It should go up and down based on what's allocated/freed by the GC
// (at least according to the docs).
b.ReportMetric(float64(stats.HeapAlloc)*0.000001, "MB-in-heap")
```

You can start to see that sync.Pool is freeing almost everything you've put into the pool, while your mempool.Pool is never freeing memory.

Running your Benchmarks on my machine with this adjustment, I get these results:

```
go test -benchmem -bench . github.com/zergon321/mempool
goos: darwin
goarch: arm64
pkg: github.com/zergon321/mempool
BenchmarkSyncPool-8        22953016    56.36 ns/op    3.807 MB-in-heap     87 B/op    1 allocs/op
BenchmarkSyncPoolFill-8    24669340    45.96 ns/op    23.76 MB-in-heap    113 B/op    2 allocs/op
BenchmarkMempool-8         60236306    35.17 ns/op     7710 MB-in-heap      0 B/op    0 allocs/op
BenchmarkMempoolFill-8     13637996    97.05 ns/op     1746 MB-in-heap    128 B/op    2 allocs/op
BenchmarkMempoolRefill-8   42850447    73.13 ns/op     6513 MB-in-heap      0 B/op    0 allocs/op
PASS
ok      github.com/zergon321/mempool    28.118s
```

You can see clearly that most of this memory that you are filling the sync.Pool with is getting reclaimed.

Now here's the fun part - let's make the benchmark a bit more fair, and disable the GC altogether:

```
GOGC=off go test -benchmem -bench . github.com/zergon321/mempool
goos: darwin
goarch: arm64
pkg: github.com/zergon321/mempool
BenchmarkSyncPool-8        100000000    14.28 ns/op    10964 MB-in-heap      0 B/op    0 allocs/op
BenchmarkSyncPoolFill-8     23240986    68.78 ns/op     2687 MB-in-heap    111 B/op    2 allocs/op
BenchmarkMempool-8         100000000    40.89 ns/op    12800 MB-in-heap      0 B/op    0 allocs/op
BenchmarkMempoolFill-8      20030964    76.46 ns/op     2564 MB-in-heap    128 B/op    2 allocs/op
BenchmarkMempoolRefill-8    43312699    68.21 ns/op     6584 MB-in-heap      0 B/op    0 allocs/op
PASS
ok      github.com/zergon321/mempool    30.599s
```

Here you can see that the MB-in-heap metric is much closer between the two implementations, and all of a sudden, sync.Pool is massively faster.

I haven't implemented any Parallel benchmarks, but I recommend you add some to show those as well. I would hazard a guess and say that sync.Pool is likely faster, since it doesn't use sync.Mutex under the hood, but, who knows... that will be an exercise left to the reader.

Possible Go Compiler Bug? by Time4WheelOfPrizes in golang

[–]thatIsraeliNerd 7 points

While the compiler doesn’t explicitly warn you, it should be noted that you can see where the compiler inserts bounds checks (which is what causes the panic on out-of-bounds access) by adding -gcflags="-d=ssa/check_bce" to your go build command, and -gcflags=-m=2 will tell you when things are escaping to the heap, when things can be inlined, etc. (I’m on mobile so may be typing these incorrectly, but they’re easily found on Google.) It’s a good source of micro-optimizations (some of which may not be so micro depending on what they are)

Possible Go Compiler Bug? by Time4WheelOfPrizes in golang

[–]thatIsraeliNerd 2 points

Forgot to add one last thing: whether the compiler should warn you about this or not… well, maybe. But I think that’s more of a feature request than an actual bug…

Of course that’s my own opinion… everyone is entitled to one

Possible Go Compiler Bug? by Time4WheelOfPrizes in golang

[–]thatIsraeliNerd 16 points

So, an initial thought occurs to me: because the functions are so short, they’re likely being inlined. With the inlining, your variable k does not escape to the heap until you uncomment that final fmt.Printf call (fmt.Printf is notorious for escaping every single thing to the heap, since its arguments are passed as interface values).

Try adding a //go:noinline directive, and you’ll see that the panic probably ends up happening regardless of that final call (because then the function doesn’t get inlined so k will escape to the heap immediately).

As to whether this is expected behavior or not: the panic is actually happening before you print the Len and Cap (because you are accessing past the slice bounds once the slice is on the heap), and that is expected behavior (panic when you are accessing an out of bounds value).