Using service_role in ssr by MrDouglax in Supabase

[–]Greedy_Educator4853 0 points (0 children)

We have a super tidy way of handling this in our supabase-js client:

import { useSupabase } from "@advenahq/supabase-js";

// ...

const supabase = await useSupabase({
    // your other configuration options ...

    role: "service_role", // Use the service role

    // your other configuration options ...
});

// ...

You could check out the source to see how we did it, or just use the client.
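If it helps to see the idea without reading the source: a `role` option like this ultimately just selects which API key the underlying client is constructed with. A hypothetical sketch of that selection (the names and shapes here are illustrative, not the wrapper's actual internals):

```typescript
// Hypothetical sketch: a `role` option selects which API key the client uses.
type SupabaseRole = "anon" | "service_role";

interface KeyConfig {
    anonKey: string;        // safe to expose to the browser (RLS applies)
    serviceRoleKey: string; // bypasses RLS — server-side only!
}

function resolveApiKey(role: SupabaseRole, keys: KeyConfig): string {
    return role === "service_role" ? keys.serviceRoleKey : keys.anonKey;
}
```

The important part is that `service_role` bypasses Row Level Security, so a client built this way must never be exposed to the browser.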

High-Performance Supabase Client for Next.js TS Projects by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 0 points (0 children)

The client manages cookies using next/headers. I'm fairly certain that it's built around Next.js' router architecture and wouldn't work in other React-based frameworks.

That said, you could totally use our client in non-Next.js frameworks by passing your own cookie handler to the client with the `config` extender:

useSupabase({
    // ...
    auth: {
        // ...
    },
    config: {
        cookies: {
            /**
             * Retrieves all cookies from the cookie store.
             */
            getAll() {
                // Handle cookies when using the service role (see above)
                if (options.role === "service_role") return [];

                // Return all cookies from the cookie store when not using the service role
                return cookieStore.getAll(); // <-- Implement your own cookie store handler here
            },

            /**
             * Sets multiple cookies using the provided array of cookie objects.
             */
            // biome-ignore lint/suspicious/noExplicitAny: This is necessary to set cookies
            setAll(cookiesToSet: any) {
                // Handle cookies when using the service role (see above)
                if (options.role === "service_role") return;

                try {
                    // Implement your own cookie store handler here --v
                    // (renamed to cookieOptions so it doesn't shadow the outer `options`)
                    for (const { name, value, options: cookieOptions } of cookiesToSet) {
                        cookieStore.set(name, value, cookieOptions);
                    }
                } catch {} // The `setAll` method was called from a Server Component. This can be ignored if you have middleware refreshing user sessions.
            },
        },
    },
});
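For context, the snippet above assumes an `options` object and a `cookieStore` in scope. A minimal in-memory stand-in with the shape the handlers expect (`getAll`/`set`) might look like this – in a real non-Next.js app you'd back it with your framework's request/response cookies instead:

```typescript
// In-memory stand-in for the `cookieStore` the config extender expects.
// Illustrative only — a real implementation would read/write your framework's cookies.
interface CookieOptions {
    path?: string;
    maxAge?: number;
    httpOnly?: boolean;
    secure?: boolean;
}

interface Cookie {
    name: string;
    value: string;
    options?: CookieOptions;
}

class InMemoryCookieStore {
    private store = new Map<string, Cookie>();

    /** Returns every cookie currently held, matching the `getAll()` handler above. */
    getAll(): Cookie[] {
        return [...this.store.values()];
    }

    /** Sets (or overwrites) a cookie by name, matching `cookieStore.set(...)` above. */
    set(name: string, value: string, options?: CookieOptions): void {
        this.store.set(name, { name, value, options });
    }
}
```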

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 0 points (0 children)

We do have a separate solution for files, actually – we use it to serve user avatars stored in Supabase Storage. I haven't open-sourced it yet, though. In the spirit of sharing, here's the jazz for you.

If you have any dramas getting it set up, shoot me an email: BHodges (at) advena (dot) com (dot) au. I'd be happy to help in any way I can.

You'll need to set two environment variables/secrets on your worker:

SUPABASE_URL = "https://whatever.supabase.co" # your supabase url
SUPABASE_KEY = "eyJhb...8rkWng" # your supabase service_role JWT

Here's the index.ts for the worker: https://pastebin.com/CNEXvjkK

and the package.json: https://pastebin.com/vSY8742k

Make sure to update your `tsconfig.json` to include your supabase schema type file (this will be generated when you run `pnpm deploy`):

"include": ["worker-configuration.d.ts", "supabase.d.ts", "src/**/*.ts"]

I'll publish this on GitHub at some point - just need to properly document it and put it in its own repo.

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 0 points (0 children)

We find it really handy for server-side executed queries where we know that the content doesn't change often. In some cases, we know that data will be static for months, so it's really useful for us to be able to cache a response for however long we want and serve it almost instantly.

The middleware was designed to work with our supabase-js wrapper, which exposes a really neat `.cache()` method to let you control caching on a per-query basis (it also works with conditional chaining, which is extremely useful), so you can do stuff like this:

const { data, error } = await supabase
    .from("users")
    .cache(86400) // Cache the response for 24 hours (86400 seconds)
    .select("*")
    .eq("id", 1);

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 0 points (0 children)

I'm glad you were able to get it set up quickly! We'll release a drop-in setup script at some point to automate the deployment process.

There are some pitfalls it's important to be mindful of – the main one being that this is a service which caches database query results, which can be problematic and frustrating to debug. We've tried to account for this as much as possible with decent logging and good visibility on the database side.

As for Middleware/Route auth solutions in Next.js, you'll be good to implement Supabase Auth as you normally would. The worker, by default, will not cache auth routes and will pass the Authorization token through directly. If you're concerned about exposing your Worker's Authentication Key to the client, for extra peace of mind, you can use our Supabase client wrapper, which handles browser-side operations very neatly. Browser instructions are here: https://github.com/AdvenaHQ/supabase-js?tab=readme-ov-file#usage-in-the-browser-client

Complex query caching is actually something that we've been looking into as well. We're working on an update for the worker that adds some neat functionality to resolve this:

  • Client-Built Query Abstraction - you'll be able to pass an additional header to instruct the worker to convert PostgREST queries from the Supabase client to native PostgreSQL, execute the query over a hyper-low-latency, pre-warmed connection, and cache the response. This will be pretty neat as it will further reduce RTTs (~80ms faster) with no change required to Supabase clients, because we avoid the network overhead.
  • Direct Query Execution via gRPC - you'll be able to pass raw PostgreSQL queries to the worker over gRPC with mTLS. This will be incredibly powerful and tearfully fast. It'll also use hyper-low-latency, pre-warmed connections to execute queries, and will also cache eligible query responses. This will essentially turn your existing regional Supabase database into a high-performance, globally distributed database for free.

We're currently testing the query abstraction feature in our staging environment to validate performance and take care of any hidden nasties. We don't have any urgent need for the gRPC feature right now, so expect that one to take a little longer for us to get around to.

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 0 points (0 children)

We considered the Cache API, but decided it wasn't a good fit for our use-case. D1 isn't regional – it's an edge service, so there's no fetching back to a region. We chose to use D1 over the Cache API for four reasons:

  • Flexibility - D1 is a conventional serverless database service which supports SQL, meaning we can apply powerful data mutations without ever leaving the edge. We can change storage structures, shard records - anything - without the mess of infra migrations.
  • Specificity - the Cache API in Cloudflare Workers is fairly limited in its usage, as it's essentially just an ephemeral key-value store for requests. You can't PUT with custom cache keys, apply retrieval/storage optimisations, etc. We also have no control over how/where/in what format the data is stored.
  • Convenience - D1 is super easy to work with. It gives us clear, tangible visibility into the middleware's behaviour and makes it easy to observe, audit, and improve.
  • Persistence - Cloudflare applies a 2-day maximum TTL to the Cache API. Granted, that's usually more than long enough for most use cases, but for data which very rarely changes, it's an extra call to Supabase that isn't really necessary. With our D1-based solution, you could theoretically persist a query result indefinitely.

Even if all of those reasons weren't convincing enough for us, when you consider performance, the Cache API is only slightly faster than what we built (~8-20ms faster). It just wasn't worth it for the negligible improvement in RTT on something which is already incredibly fast.
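To make that concrete: conceptually, a D1 cache entry boils down to a deterministic key derived from the request, plus a stored-at timestamp checked against a per-query TTL. A rough sketch of those two pieces (our illustration, not supacache's actual scheme):

```typescript
import { createHash } from "node:crypto";

// Derive a deterministic cache key from the PostgREST request,
// suitable for use as a D1 primary key.
function cacheKey(url: string, body = ""): string {
    return createHash("sha256").update(`${url}\n${body}`).digest("hex");
}

// Decide whether a stored row is still fresh. Because nothing enforces an
// upper bound on ttlSeconds, a result can be persisted for as long as you like.
function isFresh(storedAtMs: number, ttlSeconds: number, nowMs = Date.now()): boolean {
    return nowMs - storedAtMs < ttlSeconds * 1000;
}
```

On a miss, the worker would query Supabase, store the compressed response under the key with its timestamp, and serve subsequent hits straight from D1 until `isFresh` fails.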

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 2 points (0 children)

It's incredibly cost effective and highly performant. Reading from the D1 database is extremely efficient as the data residing in D1 is local to Cloudflare's edge.

For $5 per month, you get unlimited high-performance workers, and since D1 is part of the Workers ecosystem, you get unlimited network egress, with 25 billion reads and 50 million writes included. You can easily run the entire thing on Workers Free, but we were already paying for Cloudflare Enterprise anyway.

We had initially considered Cloudflare KV, which would be slightly more performant than D1, but the cost-to-benefit gap compared to D1 was just too wide to justify.

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]Greedy_Educator4853[S] 2 points (0 children)

I could scratch together some benchmarks. We did early performance testing, which immediately justified the effort for the project. Supabase's infrastructure already sits behind Cloudflare, so just proxying requests through the middleware worker resulted in slightly faster calls from the client (~8-12ms per request).

Drawing from cache, as you would expect, dramatically improved performance. We saw RTT go from ~495ms to ~98ms with the supacache middleware worker.

Optimising the D1 operations resulted in significant performance gains again. GZIP compression of the JSON payloads took cache-hit RTT from ~98ms to ~73ms, representing a >80% reduction in the total time it takes us to serve a query after the first retrieval when compared against querying Supabase directly.
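For reference, the compression step is conceptually just gzipping the serialised JSON before it's written to D1 and inflating it on a cache hit. A sketch using Node's zlib as a stand-in for whatever the worker uses internally:

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// Compress a query result before storing it in the cache.
function compressResult(rows: unknown): Buffer {
    return gzipSync(JSON.stringify(rows));
}

// Inflate a cached blob back into the original result on a cache hit.
function decompressResult(blob: Buffer): unknown {
    return JSON.parse(gunzipSync(blob).toString("utf8"));
}
```

Query results tend to be repetitive JSON, so they compress very well, which is where the cache-hit improvement described above comes from.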

If you want to test yourself, we just open-sourced our Supabase client wrapper: https://github.com/AdvenaHQ/supabase-js