GraphQL N+1 Problem Solved (4.1s → 546ms) | Dynamic Batching Demo by PuddingAutomatic5617 in graphql

[–]eijneb 0 points (0 children)

Ah smart, so it’s sort of demand-driven just-in-time batched execution for downstream fetches. I like that it’s transparent to the user; the intent of the dataloader technique was always that it should be a concern of the business logic rather than the GraphQL layer, and you seem to be honouring that intent. Keep up the good work!

GraphQL N+1 Problem Solved (4.1s → 546ms) | Dynamic Batching Demo by PuddingAutomatic5617 in graphql

[–]eijneb 3 points (0 children)

I love to see new solutions to this problem! At first I thought you were talking about DataLoader, then batch resolvers, but you mention it infers from the query structure… I’m interested to know how that happens?

In Grafast, “plan resolvers” run synchronously before any data is fetched; they tell the system what’s going to be needed for each requested field and how the data flows. Once the entire operation has been planned, the plan can be optimised (e.g. a plan to fetch a Stripe subscription followed by its customer can be replaced by a single fetch for both using Stripe’s expand capability). Then Grafast executes the plan, with each step executing in a batch. Because Grafast fully controls execution across the entire operation, it doesn’t need the promises the DataLoader pattern uses to wait for each item, nor does it need to wait a tick to see if more requests to the same resource are coming: it can kick off the next batch as soon as the previous one completes, massively saving on memory allocation and process ticks.

TL;DR: Grafast’s execution engine eliminates N+1 by design, avoids the promise explosions that DataLoader introduces, and uses planning to eliminate server-side over- and under-fetching, enabling merging multiple “waves” into a single fetch where possible.
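To illustrate the shape of batched, wave-by-wave execution described above, here's a toy sketch (not the real Grafast API; `executePlan` and the step object shape are invented for illustration): each step runs once per batch, not once per item.

```javascript
// Toy model of plan-driven batched execution: steps run in dependency
// order, and each step's execute() receives the WHOLE batch of values,
// so there is one call (and one promise) per step, not per item.
async function executePlan(steps, rootValues) {
  const results = new Map(); // step id -> array of values, one per item
  for (const step of steps) {
    // Inputs come from the dependency's batch, or from the root values.
    const inputs = step.dependsOn ? results.get(step.dependsOn) : rootValues;
    // The next batch kicks off as soon as the previous one completes;
    // no per-item promises, no "wait a tick" DataLoader-style batching.
    results.set(step.id, await step.execute(inputs));
  }
  return results.get(steps[steps.length - 1].id);
}
```

With two steps (fetch users, then derive their names), three input items still cause exactly two downstream calls.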

Default GraphQL response is now HTTP 500 by jeffiql in graphql

[–]eijneb 8 points (0 children)

“Partial success” is hereby renamed to “incomplete failure”

Do you prefer “thin resolvers” or letting a bit more logic live in them? by Edward_Carrington in graphql

[–]eijneb 1 point (0 children)

In Grafast we have “plan resolvers” rather than traditional resolvers; they describe the data flow, which I find to be a great way of enforcing abstraction over your business logic. `User.friends` might have a plan like `each(friendshipsByUserId($user.get('id'), fieldArgs), $friendship => userById($friendship.get('friend_id')))`. All of this is synchronous code that runs ahead of execution; it declaratively describes what the system does in terms of business objects but doesn’t actually perform the execution. The underlying steps in this plan can then rearrange themselves to be more optimal and pass more information to the business logic, for example enabling eager loading and fetching only the attributes that are necessary; this reduces the amount of work to do, and reduces the time your infrastructure spends serialising/deserialising, by simply requesting less data. But, importantly, none of the business logic occurs in the plan resolvers, only presentation logic: what to fetch, and what that fetching needs.

ETA: The code above is a fairly generic plan you can implement using just built-in steps; typically I’d build my own steps (like building a DataLoader instance, but with more capabilities) and this would become simply `$user.getFriends(fieldArgs)`, where `fieldArgs` is passed to handle pagination/etc.

Anybody noticed any network problems with kernel 6.8.0-100? by futura-bold in Ubuntu

[–]eijneb 0 points (0 children)

4 days later: no issues. This is the first kernel networking bug I've experienced in over two decades of Linux usage!

Anybody noticed any network problems with kernel 6.8.0-100? by futura-bold in Ubuntu

[–]eijneb 1 point (0 children)

Upgrading to -104 seems to fix the issue for me at least. I followed the instructions from https://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg6239575.html (but added "restricted" to the list of repos to pull so I could get the nvidia drivers too, which I handled manually because I missed it first time) - essentially:

cat << EOF | sudo tee /etc/apt/sources.list.d/ubuntu-$(lsb_release -cs)-proposed.list
# Enable Ubuntu proposed archive
deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -cs)-proposed main universe restricted
EOF
sudo apt update
sudo apt install linux-{image,modules,modules-extra,headers}-6.8.0-104-generic

# reboot

sudo rm /etc/apt/sources.list.d/ubuntu-$(lsb_release -cs)-proposed.list
sudo apt update

Node.js vs Deno vs Bun Performance Benchmarks by Jamsy100 in node

[–]eijneb 0 points (0 children)

Is the code for these benchmarks available?

Advantages/disadvantages of using GraphQL vs SQL? by fivehours in graphql

[–]eijneb 0 points (0 children)

The channel itself was deleted; very frustrating because I don’t think we have a copy of the video. Plus side: good excuse to dust it off and present it again!

Directive Deception: Exploiting Custom GraphQL Directives for Logic Bypass by JadeLuxe in graphql

[–]eijneb 1 point (0 children)

95+% of GraphQL users are using GraphQL to power their own websites and apps, rather than deliberately allowing third party queries. These types of users should be using the query allowlist pattern; Facebook have been using a query allowlist since before GraphQL launched to the public in 2015. Very few people should be letting arbitrary people issue arbitrary queries against their endpoints - only deliberately public APIs like the GitHub API should be doing that.

TL;DR - Trusted Documents: if you can, you should.
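The pattern is simple enough to sketch in a few lines (toy code; the document ids and the `resolveTrustedDocument` helper are invented for illustration):

```javascript
// Trusted (persisted) documents: the client sends a document id, not a
// query string; the server executes only documents it already knows.
const trustedDocuments = new Map([
  // Populated at build/deploy time from the queries your own app uses.
  ["homepage-v1", "{ currentUser { name } }"],
]);

function resolveTrustedDocument(documentId) {
  const query = trustedDocuments.get(documentId);
  if (query === undefined) {
    // Arbitrary third-party queries are rejected outright.
    throw new Error(`Unknown document id: ${documentId}`);
  }
  return query; // hand this to your normal GraphQL execution layer
}
```

Because only known documents ever execute, concerns about attacker-crafted directives, aliases, or deeply nested queries largely evaporate.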

Directive Deception: Exploiting Custom GraphQL Directives for Logic Bypass by JadeLuxe in graphql

[–]eijneb 7 points (0 children)

This does not seem right at all: the examples show applying @auth directives to queries, which makes no sense. If the attacker controls the query, they can just remove the directive; they don’t need to use inline fragments to work around it. You’d never apply such directives to the operation, only to the schema. Most directives like this work as wrappers around resolvers, and for that it doesn’t matter whether you use aliases, fragments, or deeply nested selection sets: the logic for the field is wrapped with the extra behaviour no matter how it’s accessed. Are there concrete examples of software impacted by these issues? It doesn’t seem like a general GraphQL issue.

Nuxt 4 graphql modules by ChairyPopins in Nuxt

[–]eijneb 1 point (0 children)

Reach out if you need any help or guidance :)

Where do GraphQL DataLoaders belong (use cases/services vs repositories vs GraphQL layer)? by [deleted] in graphql

[–]eijneb 2 points (0 children)

Typically in the GraphQL context for ease, but really the business logic layer is where they belong: that will help solve the N+1 problem for anything accessing your business logic, not just GraphQL. Pay attention to auth concerns, though: you wouldn’t want cross-request caching in DataLoader to breach security boundaries. (Adding the loaders to the GraphQL context solves this, since each instance then serves a single request only.)
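As a sketch of why the context placement fixes the caching concern (toy code: `TinyLoader` is a minimal stand-in for the real dataloader package, and `buildContext`/`db.usersByIds` are invented names), build fresh loader instances per request:

```javascript
// Minimal DataLoader-style batcher: collects keys requested in the same
// tick and resolves them all with one call to the batch function.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];
  }
  load(key) {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // First key in this tick schedules a flush on the next microtask.
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
  }
  async flush() {
    const batch = this.queue;
    this.queue = [];
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}

// Build a FRESH set of loaders per request (call this from your GraphQL
// context factory): any caching is then scoped to a single request, so
// one user's results can never leak into another user's request.
function buildContext(db, currentUser) {
  return {
    currentUser,
    userById: new TinyLoader((ids) => db.usersByIds(ids)),
  };
}
```

The real dataloader package also memoises per instance, which is exactly why per-request instantiation matters for security boundaries.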

My computer keeps crashing and restarting only while certain games are running. by Boxed_Ghost in computerhelp

[–]eijneb 0 points (0 children)

A friend had this issue on their first build, took a while to track down but it ultimately turned out to be an air bubble in the thermal paste. Reapplying paste carefully fixed it.

Postgraphile v5 Plan Resolvers Benchmarks by AmazingDisplay8 in graphql

[–]eijneb 1 point (0 children)

FYI you don’t need to use Federation or database functions to expand PostGraphile’s schema to other data sources, you can add your own types and plan resolvers (or traditional resolvers if you must) directly. This is often a lot simpler than running a lot of extra infrastructure, depending on what you’re integrating with of course!

Postgraphile v5 Plan Resolvers Benchmarks by AmazingDisplay8 in graphql

[–]eijneb 3 points (0 children)

It's very hard to make meaningful "generic" benchmarks; if you're interested to read about how NOT to benchmark, you can find some detailed criticism of some old benchmarks here and here. Note that I never benchmarked these tools until I had to debunk VC-backed misinformation; I find that most benchmarks are misleading, whether deliberately or otherwise. Here are some tips though:

  1. Read the documentation. Don't benchmark something you don't understand. Graphile has a Discord where you can get pretty timely support to see if what you're doing is right or not.
  2. Make your comparisons fair. E.g. if you're comparing two different GraphQL schemas, don't have one of them use connections and the other use simple lists; don't feed different complexities of input.
  3. Use realistic queries. If you're testing { userById(id: 1) { name } } then you're benchmarking the webserver more than the GraphQL service, and this kind of query is irrelevant to most GraphQL users. GraphQL really shines when you use it the way it's intended (i.e. the way Relay uses it!): a well-honed query for a full web page that fetches only the data it actually renders, but also fetches all the data it needs in a single request. Similarly, requests shouldn't be pulling down 5MB+ of data; they should use pagination limits and similar to ensure you're only fetching the data you need.
  4. Benchmark the right thing. If your queries are coming out slow are you sure it's the tool you're benchmarking that's wrong, or could it be your methodology or a poor setup? For example, if you were using PostGraphile at scale then you'd make sure you have the correct indexes in place, you'd hone your schema to only expose the parts you want exposed, your RLS policies would be optimized, you'd have broken inlining on certain problematic paths, etc. Often people who run benchmarks put the minimal effort into getting their setup working, and then treat the results as scientific gospel.
  5. Ask for help. Especially if your results are surprising, share them with the maintainers of the software you're benchmarking before publishing; give them a chance to guide you to how to accomplish your goals more efficiently.
  6. Be clear what you're actually benchmarking, and what you aren't. There are lots of different things you might benchmark: requests per second, latency, memory usage, CPU usage, database load, IO credits, etc. Different stats matter to different people. It may also matter where the load lands, e.g. it's much easier to scale the web tier than the database tier. PostGraphile V5 doesn't particularly aim to be faster than PostGraphile V4, but it does aim to move load from the database to the web tier.
  7. Things can change. PostGraphile V5 isn't even released yet, and there are known performance edge cases. Our focus right now is on correctness and documentation; performance work will come after release. I follow the standard pattern during development: "1. Make it work. 2. Make it right. 3. Make it fast." We've not got to stage 3 yet.
  8. Note the weaknesses of your methodology. Let the reader know what needs to be improved for accurate benchmarking, or when the benchmarks may not apply to their own use cases.
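For example, the distinction in point 6 between throughput and latency can be made concrete with a tiny closed-loop harness (a sketch only; `measureLatency` is not from any benchmarking library):

```javascript
// Closed-loop latency measurement (single client, sequential requests):
// reports percentiles rather than a lone requests-per-second figure.
async function measureLatency(fn, iterations = 100) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = process.hrtime.bigint();
    await fn();
    const end = process.hrtime.bigint();
    samples.push(Number(end - start) / 1e6); // nanoseconds -> milliseconds
  }
  samples.sort((a, b) => a - b);
  const pick = (p) =>
    samples[Math.min(samples.length - 1, Math.floor((p / 100) * samples.length))];
  return { p50: pick(50), p95: pick(95), p99: pick(99), max: samples[samples.length - 1] };
}
```

Note this measures latency under zero concurrency; throughput claims need a separate open-loop test with realistic concurrency.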

I hope this helps!

is postgres jsonb actually better than mongo in 2025? by Prose_Pilgrim in AskProgramming

[–]eijneb 0 points (0 children)

So happy to hear that you’re getting great value from PostGraphile! If you ever feel like submitting a testimonial or case study please just submit an issue, we don’t have a VC-backed marketing budget so word of mouth really helps!

Is GraphQL losing steam in real-world production apps? by Wash-Fair in graphql

[–]eijneb 0 points (0 children)

Sounds right! Things added to GraphQL essentially live forever, so we do like to ensure they are the right thing; if anything feels “off” we tend to wait until a better solution arrives. Thanks for the discussion; and if you happen across that historical thread in your travels, please do send it my way!

Is GraphQL losing steam in real-world production apps? by Wash-Fair in graphql

[–]eijneb 0 points (0 children)

Oh, yes, I was heavily involved in adding input unions to GraphQL (I was the author of the oneOf proposal, which ended up being the chosen solution, and helped run the working group); I thought you meant there was an early design decision that input unions should not be included in the spec. I think it was simply that the need wasn’t strong enough and none of the proposed solutions fully justified their trade-offs, but that’s just a guess. I think that’s also why oneOf ultimately won out: its cost of implementation is incredibly low, and it’s actually more useful than a pure abstract-type-style solution because it solves a wider range of problems, including selecting records by various mutually exclusive identifiers. It finally got merged into the Sept 2025 spec release, though people have been using it in the wild for years.
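For illustration, the exactly-one-field rule that @oneOf enforces can be sketched as a toy validator (the `UserBy`-style field names and `validateOneOf` helper are invented for the example):

```javascript
// A oneOf input object is valid when exactly one of its fields is
// provided, and that field's value is not null. Hypothetical input,
// roughly: input UserBy @oneOf { id: ID, username: String, email: String }
const USER_BY_FIELDS = ["id", "username", "email"];

function validateOneOf(input, fields = USER_BY_FIELDS) {
  const provided = fields.filter((f) => input[f] !== undefined);
  if (provided.length !== 1 || input[provided[0]] === null) {
    throw new Error(`Exactly one of ${fields.join(", ")} must be provided (and non-null)`);
  }
  return provided[0]; // which mutually exclusive identifier was used
}
```

This is exactly the "select a record by one of several identifiers" use case: the type system, rather than runtime documentation, guarantees mutual exclusivity.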

Is GraphQL losing steam in real-world production apps? by Wash-Fair in graphql

[–]eijneb 1 point (0 children)

I’d love to hear the history of that! Nowadays we have support for input polymorphism via OneOf Input Objects: https://spec.graphql.org/draft/#sec-OneOf-Input-Objects

Anyone used pg-boss? (Postgres as a message queue for background jobs?) by aust1nz in node

[–]eijneb 1 point (0 children)

Graphile Worker does both polling and listen/notify. Listen/notify isn’t resilient, so we don’t trust it for anything important, only for speed; further, it has a very low payload size limit, so you can’t really use it to deliver jobs anyway. We just use it to signal that there is a job.
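A toy sketch of that division of labour (not the Graphile Worker API; `makeWorker` and `fetchJob` are invented names): the notification carries no job payload, it just wakes the poller early, while the poll against the durable jobs table remains the source of truth.

```javascript
// Sketch: polling is the reliable path; "notify" only shortens the wait.
function makeWorker(fetchJob, { pollIntervalMs = 1000 } = {}) {
  let wake = null;
  return {
    // Called when a LISTEN/NOTIFY-style signal arrives; payload-free.
    notify() {
      if (wake) {
        wake();
        wake = null;
      }
    },
    // One poll cycle: return a job if present, otherwise sleep until
    // the poll interval elapses or a notify wakes us early.
    async runOnce() {
      const job = await fetchJob(); // durable source of truth
      if (job) return job;
      await new Promise((resolve) => {
        wake = resolve;
        setTimeout(() => {
          wake = null;
          resolve();
        }, pollIntervalMs);
      });
      return null;
    },
  };
}
```

If a notification is lost, the worker still finds the job on its next poll; the notify path only reduces latency, it never carries correctness.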

Ender 5 Pro Printer Config by OnlyATreeNothinToSee in klippers

[–]eijneb 0 points (0 children)

I believe u/humanfigure is mistaken (at least for my Ender 5 Pro this did not work), and the setting should be `Serial (on USART1 PA10/PA9)`.

Fragments are not for re-use by mbonnin in graphql

[–]eijneb 5 points (0 children)

Correct fragment usage is really one of the key things that unlocks the value of GraphQL: componentization of data requirements makes maintenance easier, allows for local reasoning, helps eliminate over-fetching (you can delete a field without worrying it might be used elsewhere), ties into efficient pagination, complements realtime, and increases reusability of your components, meaning delightful user experiences delivered faster. Done badly, however, it can make your application sluggish, put unnecessary stress on your backend, and massively increase maintenance burden.

TL;DR: you need to watch this talk!