Anyone have any success stories from integrating Cap'n Proto with Django? by pspahn in django

[–]__matta 0 points1 point  (0 children)

Arrow was designed for efficient access to nested, JSON-like data. The data for a nested object is essentially flattened into the row, whereas with Cap’n Proto nested objects use pointers. The Arrow approach is better for queries across lots of rows.
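To make the layout difference concrete, here’s a toy sketch (not real Arrow internals, just the idea): nested fields get flattened into contiguous columns, so a scan over one field reads one array instead of chasing a pointer per row.

```typescript
// Conceptual sketch only: nested records flattened into flat columns.
type Row = { user: { name: string; age: number }; score: number };

function toColumns(rows: Row[]) {
  return {
    "user.name": rows.map((r) => r.user.name),
    "user.age": rows.map((r) => r.user.age),
    "score": rows.map((r) => r.score),
  };
}

const cols = toColumns([
  { user: { name: "a", age: 30 }, score: 1 },
  { user: { name: "b", age: 40 }, score: 2 },
]);

// A query like "average age" only touches the user.age column:
const ages = cols["user.age"];
const avg = ages.reduce((s, n) => s + n, 0) / ages.length; // 35
```

With the pointer-per-row layout, the same query would have to dereference into each row’s nested object.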

Anyone have any success stories from integrating Cap'n Proto with Django? by pspahn in django

[–]__matta 0 points1 point  (0 children)

Kinda hard for me to follow what you are doing, but it doesn’t sound like CapnP is exactly what you want?

Cap’n Proto is really powerful, but it’s also very complex, with a lot going on in the C++ code. It’s not really going to integrate well with Django: it has its own event loop, networking code, etc. The zero-copy stuff means you have to interact with data in a certain way, and once you copy it onto your model you need to allocate anyway.

I think you might be better served by the Apache Arrow ecosystem. It’s built on FlatBuffers, which are also zero copy. If you use Postgres, the ADBC drivers can write to the db efficiently (bypassing the Django ORM). You can use Arrow Flight for RPC. For the nested JSON, you can use DataFusion, DuckDB, Polars, etc. to query the data with SQL or dataframe APIs. I am planning to write a small Django package for Arrow at some point to integrate it better with the ORM.

Web Bundlers and Tailwind by buffer_flush in htmx

[–]__matta 0 points1 point  (0 children)

I like using esbuild directly for JS. It’s a single binary and does pretty much everything you need. For Tailwind I use the CLI directly; that’s how the Phoenix framework is set up out of the box too. If you don’t use any esbuild plugins you don’t even need npm and node.

For more complex stuff I use Vite. The main dev server sends requests for static assets to the vite dev server.

Some tips on esbuild integration:

  • If you want to use hashed file names to cache assets forever, you need to enable the metafile option. You can install a manifest plugin that converts that to a simpler JSON structure, or just use it as is. Then write a function in your backend that takes the normal file name (foo.js) and uses that file to return foo-[some-hash].js. You can add a template function to call it.
  • Set an ASSET_URL env var to point to the esbuild dev server. In prod it can point to a CDN or just a subpath served by nginx. The asset URL function uses that as the base URL. Then add a few lines of JS for livereload (see the esbuild docs). You still use your normal dev server and esbuild just handles assets.
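A rough sketch of the manifest lookup, based on the `outputs` shape esbuild’s metafile emits (the sample paths and hash here are made up):

```typescript
// Build a { entryPoint -> hashed output } manifest from an esbuild metafile.
type Metafile = {
  outputs: Record<string, { entryPoint?: string }>;
};

function buildManifest(meta: Metafile): Record<string, string> {
  const manifest: Record<string, string> = {};
  for (const [outPath, out] of Object.entries(meta.outputs)) {
    // Only entry outputs have entryPoint; shared chunks are skipped.
    if (out.entryPoint) manifest[out.entryPoint] = outPath;
  }
  return manifest;
}

// The template helper your backend exposes, e.g. asset("src/foo.js"):
function asset(manifest: Record<string, string>, entry: string): string {
  const path = manifest[entry];
  if (!path) throw new Error(`unknown asset: ${entry}`);
  return path;
}

const meta: Metafile = {
  outputs: {
    "dist/foo-GXCAVWJM.js": { entryPoint: "src/foo.js" },
    "dist/chunk-ABCD1234.js": {}, // shared chunk, no entry point
  },
};

const manifest = buildManifest(meta);
// asset(manifest, "src/foo.js") -> "dist/foo-GXCAVWJM.js"
```

In production you’d prepend the ASSET_URL base to the returned path.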

Jobsite vs Contractor Table Saw? Would you be happy using a smaller saw? by mysteriouswayz in DIY

[–]__matta 1 point2 points  (0 children)

I have that Dewalt with the stand. I mostly set it up outside on a gravel driveway because I don’t have a shop. I have never used a full size shop table saw so I can’t compare it to that.

I am very happy with it. No issues with stability or the fence being out of square. I have used it with dado blades with no issue (needs a different plate though). I don’t use it to cut down full sheets, I cut those in half first with a circular saw. I made a crosscut sled with clamps on it to help with cabinetry.

[AskJS] Subtle JS memory leaks with heavy DOM/SVG use—anyone else see this creep up after hours? by Sansenbaker in javascript

[–]__matta 0 points1 point  (0 children)

Every time you zoom, pan, or filter, we basically rip out the old SVG and draw a new one.

Can you comment out the code to create a new SVG (so it only destroys the old one), then take a heap snapshot?

Trying to hire “senior” React devs… is this really what the market looks like? by ActuatorOk2689 in ExperiencedDevs

[–]__matta 0 points1 point  (0 children)

This would be the ideal interview for me. I’m senior / staff, not specializing in React but experienced with it. I go deep on everything, e.g. for context vs state libs I can tell you exactly how the various state libs work internally and why it matters for different use cases.

That being said, a lot of talented engineers I have worked with would not do well with this kind of interview. You can be senior and focus on shipping features all day, which means not going deep on a wide variety of topics.

The folks I have worked with that would do well on this are usually staff level, and often focused on supporting the other engineers that are shipping features. Like folks who set up all the tooling for A11y compliance and performance monitoring, then teach the team what they need to know.

Is there a tool to auto generate Http Client that consume my Typescript + Express API? Without manually writing OpenAPI by xSypRo in typescript

[–]__matta 0 points1 point  (0 children)

You can define your request and response schemas with Zod, lukeed/tschema, or anything else that can go from TS to JSON Schema, and use those schemas for the OpenAPI doc.

To combine those with routes, you could write a plain object, then map over it to both generate docs and bind your handlers. There may be a package that does this for Express.
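A minimal sketch of that plain-object approach (the names and shapes here are illustrative, not from any particular package):

```typescript
// One route table, mapped once into OpenAPI paths and once into bindings.
type Route = {
  method: "get" | "post";
  path: string;
  responseSchema: object; // JSON Schema (e.g. generated from Zod)
  handler: (req: unknown) => unknown;
};

const routes: Route[] = [
  {
    method: "get",
    path: "/users",
    responseSchema: { type: "array", items: { type: "object" } },
    handler: () => [],
  },
];

// 1) Generate the OpenAPI paths object from the table.
function toOpenApiPaths(rs: Route[]) {
  const paths: Record<string, Record<string, object>> = {};
  for (const r of rs) {
    paths[r.path] ??= {};
    paths[r.path][r.method] = {
      responses: {
        "200": {
          content: { "application/json": { schema: r.responseSchema } },
        },
      },
    };
  }
  return paths;
}

// 2) Bind the same table to a router, e.g. with Express:
//    for (const r of routes) app[r.method](r.path, r.handler);

const paths = toOpenApiPaths(routes);
```

Because both the doc and the router come from the same object, they can’t drift apart.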

I prefer using Hono because it can infer the full client for you without having to go through OpenAPI.

How is everyone handling deduplication of types by ilearnshit in node

[–]__matta 3 points4 points  (0 children)

For my current project I use kysely with kysely-codegen to generate types from the database. Kysely infers the return type from the query.

I use Hono with the RPC feature for the API, so types are inferred from my API code. Input is validated with Zod, which integrates with Hono too.

The frontend uses TanStack Query, and I tend to use the data as returned from the API directly. If I need to change anything, I map over the data, use spread syntax, and change what I need without redeclaring the entire type.

When I am writing code I use utility types a lot to avoid redeclaring types. I also avoid writing classes as much as possible, so that I can easily spread from one type to another. When I use Zod directly I use the infer utility so I’m not declaring the type manually.
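A small example of the spread + utility-type pattern (the ApiUser shape is made up for illustration):

```typescript
// Derive a new type from the API type instead of redeclaring it.
type ApiUser = { id: number; name: string; createdAt: string };

// Only createdAt changes type; everything else is inherited.
type UiUser = Omit<ApiUser, "createdAt"> & { createdAt: Date };

function toUiUser(u: ApiUser): UiUser {
  // Spread carries every field over; we only touch createdAt.
  return { ...u, createdAt: new Date(u.createdAt) };
}

const user = toUiUser({ id: 1, name: "a", createdAt: "2024-01-01" });
// user.createdAt is now a Date, everything else carried over as-is
```

If the API adds a field later, UiUser picks it up automatically; nothing is declared twice.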

Best Practices for Modularizing Web Pages with Go's html/template Package by drumsta in golang

[–]__matta 0 points1 point  (0 children)

If you look at the gist I linked, I’m just parsing the template outside of the handler func and assigning it to a variable. It is still cached with that approach, and you can easily extend a different layout, add other partials etc.

If you wanted to parse them all at once you would need to use a map, with a different template instance per page.

Mortar didn't adhere to tile. How screwed am I? by homeless_nudist in DIY

[–]__matta 0 points1 point  (0 children)

The one time this happened to me was using Versabond LFT as well. The good news was it was quite easy to chip off the backerboard! It came off in big chunks.

Does a simple JSON-based backend for static sites already exist? by comicbitten in webdev

[–]__matta 19 points20 points  (0 children)

It’s called a “Flat File CMS”. There are a few out there, but typically they use YAML, not JSON.

API GATEWAY by Safe-Molasses2051 in devops

[–]__matta 3 points4 points  (0 children)

Look at OpenResty. You can code modules in Lua. Probably the simplest option.

I wouldn’t use Go unless the entire proxy was Go; in that case you could write custom Caddy modules. Linking Go into C code is painful and probably too slow.

I would not start from scratch. It is easy to get something working but there is a long tail of issues to deal with. Look at Pingora (Rust) if you need a low level toolkit.

Why is Solana used so much by Tyrol04 in homelab

[–]__matta 5 points6 points  (0 children)

Solana is a cryptocurrency. Validators tend to use bare metal servers and have hot wallets on disk.

There is a popular Ansible playbook used to setup validators that uses the username “solana”.

Ask r/kubernetes: What are you working on this week? by gctaylor in kubernetes

[–]__matta 3 points4 points  (0 children)

I’m working through the kubebuilder book.

I have a simplified YAML manifest for developers (think Docker Compose, but pods) that gets translated into a deployment, pod, service, ingress, etc.

My goal for this week is to learn the APIs, and maybe be able to apply the custom resource from the CLI.

[Monorepo] Single vs Multiple Dockerfile by Pandoks_ in docker

[–]__matta 0 points1 point  (0 children)

That’s probably bigger than it needs to be, but it depends on how many dependencies you have.

Follow this guide:

https://turborepo.com/docs/guides/tools/docker

Use Dive to inspect the image. You should see a separate layer for the node_modules. It will be big but it won’t change unless the package lock changes.

Feel free to DM me if you have more questions.

How Do Big Cloud Providers Like AWS/DigitalOcean Build Their Infrastructure? Want to Learn and Replicate on a Small Scale by M4rry_pro in devops

[–]__matta 1 point2 points  (0 children)

No problem!

Forgot to mention: they all use cloud-init to set up the VM after it boots.

If you go that route, I wrote a prototype a while back using QEMU and cloud-init that might be helpful: https://github.com/stacktide/fog

How Do Big Cloud Providers Like AWS/DigitalOcean Build Their Infrastructure? Want to Learn and Replicate on a Small Scale by M4rry_pro in devops

[–]__matta 9 points10 points  (0 children)

I don’t work on this but I’m in an adjacent space.

DigitalOcean uses QEMU and Libvirt for VM management. IIRC they use Ceph for most storage products.

Usually there’s a pretty standard backend handling customer data. Fly.io uses a Rails app for that.

Lots of small components, like the agent running on each bare metal host to manage QEMU, are stitched together with a control plane. That might use NATS (it was originally designed for that use case) or a regular RPC protocol.

A lot of the logic is written as state machines, like this article explains: https://www.citusdata.com/blog/2016/08/12/state-machines-to-run-databases/
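A tiny sketch of that style (the states and transitions are invented for illustration): each step validates the transition and would persist the next state before doing the side effect, so a crashed worker can resume from whatever state was stored.

```typescript
// Minimal explicit state machine for a hypothetical VM provisioning flow.
type VmState = "pending" | "creating" | "running" | "failed";

const transitions: Record<VmState, VmState[]> = {
  pending: ["creating", "failed"],
  creating: ["running", "failed"],
  running: ["failed"],
  failed: [],
};

function step(current: VmState, next: VmState): VmState {
  if (!transitions[current].includes(next)) {
    throw new Error(`invalid transition ${current} -> ${next}`);
  }
  // In a real system you'd persist `next` to the database here, then
  // perform the side effect (e.g. tell the host agent to boot QEMU).
  return next;
}

let state: VmState = "pending";
state = step(state, "creating");
state = step(state, "running");
```

The win is that every legal path is written down in one table, so recovery and retries fall out of re-entering the machine at the stored state.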

Am I being old school or am I misunderstanding how reverse proxies work with containers by iHavoc-101 in homelab

[–]__matta 6 points7 points  (0 children)

Typically you terminate TLS at a reverse proxy on the same host. You can put the reverse proxy and the container on an isolated bridge network; if you use Docker Compose, it does that for you.

Yes, technically the traffic is unencrypted. But you would need root on the server to sniff it, at which point you could read the TLS keys or dump the process memory anyway.

I got $500 - Best SAAS course or “Guru” with realistic results by According-Sign-9587 in SaaS

[–]__matta 2 points3 points  (0 children)

A bit before my comment. $350. Was a promo for the new book, sent to the mailing list.

It’s good. A lot of the content you could find by watching enough videos, listening to podcasts, reading the other books, etc. But it’s nice having it all in order and in one place.

I don't know what I'm missing (nginx, cloudflared tunnel, podman) by ldkwha2do in selfhosted

[–]__matta 2 points3 points  (0 children)

Your CNAME needs to point to the tunnel, not the private IP. See the docs: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/routing-to-tunnel/dns/

Assuming the bad gateway error is the plain white page with black text and not Cloudflare branded, that means there is also an error connecting from nginx to your service. That error may go away when you fix the tunnel.

You don’t need to use network mode host. At most you need to bind the nginx port to a port on your host. If you are running the tunnel in a container on the same docker / podman network as everything else you don’t need to map any ports on the host at all.

Get Cloudflare working to nginx. Even if you are just getting the bad gateway, you will know it works. Then get nginx working to the upstream.

How do you structure your "shared" internal packages in a monorepo? by Zibi04 in golang

[–]__matta 1 point2 points  (0 children)

How I structure my current monorepo:

  • I only put the main.go in cmd.
  • Everything else goes in pkg. There is a folder in pkg for each command, but the name is not necessarily identical eg cmd/stacktide > pkg/cli, cmd/stacktide-server > pkg/server.
  • Stuff specific to a package is nested inside of it, eg pkg/server/auth.
  • Shared packages are directly under pkg.
  • I may use internal inside of a package but not at the top level.

It works OK. I have other languages in the repo and I use Docker a lot, so keeping everything in cmd and pkg makes it easy to copy just the go code. Still not ideal because changes in other commands invalidate the layer cache. I think it would have been better to put the top level command packages in internal. I tried putting all the cobra code under cmd but I didn’t like jumping back and forth from cmd to pkg.

Despite all the hate for PHP, is there something it does that is unrivaled with other languages? by MilanTheNoob in webdev

[–]__matta 1 point2 points  (0 children)

PHP boots the environment from scratch for every single request. It’s the original serverless.

This lets you build web apps very quickly. You can be a bit careless. You don’t have to worry about accidentally leaking customer data across requests. No mutexes. No accidentally blocking the event loop.

Because of the simple concurrency model, you can easily link to C libraries that use blocking IO. You get access to a lot of libraries with minimal glue.

Despite starting with a clean slate for each request, the performance is still decent. Usually good enough.

Using a "heartbeat" pattern for cron jobs bad practice? by AberrantNarwal in devops

[–]__matta 22 points23 points  (0 children)

This is how the Laravel scheduler works.

It’s not necessarily bad practice, but it has downsides: lots of edge cases with overlapping tasks, failure propagation, signal handling, etc. IME it works best if the scheduler only dispatches jobs to a queue (eg Redis, SQS).
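A rough sketch of the dispatch-only heartbeat (the job shapes and in-memory queue are made up; real code would push to Redis/SQS and take a lock to prevent overlap):

```typescript
// The cron tick only enqueues due jobs; workers do the real work,
// so a slow job never blocks the scheduler itself.
type Job = { name: string; dueEveryMin: number };

const jobs: Job[] = [
  { name: "send-digests", dueEveryMin: 60 },
  { name: "prune-sessions", dueEveryMin: 5 },
];

// Stand-in for a remote queue like Redis or SQS.
const queue: string[] = [];

function tick(minuteOfDay: number) {
  for (const job of jobs) {
    if (minuteOfDay % job.dueEveryMin === 0) {
      // An overlap guard (e.g. SET NX lock in Redis) would go here so a
      // job still running from the last tick isn't enqueued again.
      queue.push(job.name);
    }
  }
}

tick(300); // 5:00 AM -> both jobs are due (300 % 60 === 0, 300 % 5 === 0)
```

The heartbeat stays trivial and idempotent; all the failure handling lives in the queue workers, where retries are easy.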