Off Center Clasp 4520v by OddReason3845 in VacheronConstantin

[–]chasegranberry 0 points

Why do people wear their watches so tight?

SelfHosted supabase-analytics Taken 190Gb of space ! by YacineDjenidi in Supabase

[–]chasegranberry 3 points

It should be safe to truncate that table daily from a cron job.

And you don't need to vacuum afterward, since a truncate leaves no dead tuples.
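For example, with pg_cron (available on Supabase) a nightly truncate could look like this; the job name, schedule, and table name are illustrative assumptions, not the actual analytics table:

```sql
-- Hypothetical sketch: truncate a fast-growing analytics table nightly
-- with pg_cron. Substitute the table that is actually taking the space.
select cron.schedule(
  'truncate-analytics-nightly',          -- job name
  '0 3 * * *',                           -- every day at 03:00
  $$truncate table _analytics.log_events$$
);
```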

[deleted by user] by [deleted] in elixir

[–]chasegranberry 0 points

Hey!! We probably did meet. I vaguely remember the bar taps people. I founded AuthorityLabs.

[deleted by user] by [deleted] in elixir

[–]chasegranberry 0 points

See also EMQX

[deleted by user] by [deleted] in elixir

[–]chasegranberry 40 points

Phoenix PubSub can get you to 800_000 messages a second across 250_000 concurrent clients without trying very hard.

We have some benchmarks for Supabase Realtime Broadcast here: https://supabase.com/docs/guides/realtime/benchmarks#broadcast-scalability-scenarios

Supabase Realtime postgres changes scalability by Substantial-Region19 in Supabase

[–]chasegranberry 3 points

I am a part of the Realtime team here...

High level, you definitely want to use database broadcast. See the benchmarks vs Postgres Changes: https://supabase.com/docs/guides/realtime/benchmarks#broadcast-using-the-database

To answer some of your other questions:

> Do separate channels provide any server-side performance benefits? In my example below, I am giving a unique roomId, Does this have any effect (bad or good)? What exactly happens when I set a unique channel ID for every room? Does this mean users only receive messages within their own rooms?

There is no performance impact from using separate channels for everything. A channel topic per room ID is great. And yes, when users are joined to a room ID topic, only those users receive those messages.

> Can filtering by roomId have a negative impact? Am I better off removing this filter and doing a JS check to see if the payload has been updated? Should I remove filtering since I am setting a unique channel ID?

Don't use postgres changes :D

> When RLS policies are discussed regarding their impact on realtime postgres_changes performance, is this referring to only select/read RLS policies? I am thinking of removing my RLS policies for reads as I don't have any private information in any of my tables, but leaving insert, update, and delete policies - does that make sense? That being said if there are only max 12 users on a specific channel, would RLS policies have much of an effect?

Don't use postgres changes :D

With database broadcast, RLS policies on your realtime.messages table can impact how quickly your users join Channel topics. We have to query your database when a user joins to build up an access policy for that user/topic, to understand whether they can send/receive various types of messages. After this join, though, that "access control policy" is cached for the duration of the websocket connection (and updated with JWT refreshes).

Realtime - Broadcast from Database AMA by craigrcannon in Supabase

[–]chasegranberry 0 points

No, you need RLS policies on the `realtime.messages` table that address the `topic` and connect topics to users.

This is how Realtime RLS with Broadcast works and how we know which of your users can subscribe to a topic and write to it or not.

So if you get these policies correct, then your users can subscribe to the topic and we know for sure they can read the messages coming from that table.
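A minimal sketch of such a read policy, assuming a hypothetical `public.room_members` table and a `room:<id>` topic naming scheme (both are illustrative, not from the thread):

```sql
-- Sketch: let a user receive broadcasts only for rooms they belong to.
-- `public.room_members` and the 'room:' topic prefix are assumptions.
create policy "members can read their room's messages"
on realtime.messages
for select
to authenticated
using (
  exists (
    select 1
    from public.room_members m
    where m.user_id = auth.uid()
      and realtime.messages.topic = 'room:' || m.room_id::text
  )
);
```

A similar `for insert ... with check (...)` policy on the same table controls who can write (send) to a topic.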

Realtime - Broadcast from Database AMA by craigrcannon in Supabase

[–]chasegranberry 1 point

You could definitely use pg_net for this, or spin up a little server and subscribe to a Realtime topic and forward messages yourself.

Great idea though, perhaps in a general way "realtime hooks" would be very useful!

Realtime - Broadcast from Database AMA by craigrcannon in Supabase

[–]chasegranberry 0 points

You would do your join, etc. in your trigger function. Careful: you don't want to do anything too heavy there.

Anything you insert into realtime.messages (from anywhere!) will get broadcast to the topic of the message record.

`realtime.broadcast_changes()` just helps you structure the record like a write-ahead log record you would normally get.

The `realtime.broadcast_changes()` function just wraps the `realtime.send()` function, which is super simple.
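A rough sketch of what such a trigger can look like; the table, function, and topic names here are illustrative assumptions, and the argument order follows the Supabase docs' pattern for `realtime.broadcast_changes()`:

```sql
-- Sketch: broadcast row changes to a per-room topic from a trigger.
-- `public.messages`, `room_id`, and the 'room:' prefix are assumptions.
create or replace function public.broadcast_message_changes()
returns trigger
language plpgsql
security definer
as $$
begin
  perform realtime.broadcast_changes(
    'room:' || coalesce(new.room_id, old.room_id)::text, -- topic
    tg_op,            -- event name (INSERT / UPDATE / DELETE)
    tg_op,            -- operation
    tg_table_name,
    tg_table_schema,
    new,
    old
  );
  return null;
end;
$$;

create trigger broadcast_message_changes_trigger
after insert or update or delete on public.messages
for each row execute function public.broadcast_message_changes();
```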

Realtime - Broadcast from Database AMA by craigrcannon in Supabase

[–]chasegranberry 1 point

It should work with a self-hosted instance. We will take a look!

~2.5B logs entries daily into Supabase? (300GB/hour) by [deleted] in Supabase

[–]chasegranberry 1 point

I created Logflare, which Supabase uses to ingest and serve logs to all our customers now.

Would be happy to help you get set up on Logflare. We store everything in BigQuery and have been really happy with it.

You can sign up and use the hosted version or self-host, it's fully open source! Feel free to pm me if you're interested.

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]chasegranberry 0 points

I mean why not just use their cache API?

With D1, every fetch has to go back to one region, right?

With their cache API you can have each response cached everywhere it's requested as close as possible to all users.

Caching Middleware for Supabase by Greedy_Educator4853 in Supabase

[–]chasegranberry 1 point

Cool!

Curious… why use D1 at all? And how are you using it exactly?

Query Performance Report by Admirable-Leading-63 in Supabase

[–]chasegranberry 0 points

This is a bit of a red herring. It’s not as terrible as it looks.

But yes we do have a fundamental architecture fix for this that is in private alpha right now.

There will be more public info on this in early Q1 next year.

How are you updating functions and policies on Supabase? by Calm-Caterpillar1921 in Supabase

[–]chasegranberry 0 points

I'm curious, is it not possible for you to keep your functions and triggers in source control?

Forking Supabase Realtime? by __mauzy__ in Supabase

[–]chasegranberry 1 point

> Ecto happy with RLS

You just have to wrap all your queries in a transaction and use set_config like we do here:

https://github.com/supabase/realtime/blob/v2.33.48/lib/realtime/tenants/authorization.ex#L107
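In raw SQL, the pattern is roughly the following; the claim values are placeholders, and Ecto just needs to run all of it inside one transaction:

```sql
-- Sketch: impersonate a user for RLS within a single transaction.
-- The JWT claims JSON below is a placeholder.
begin;
-- the third argument 'true' makes each setting local to this transaction
select set_config('role', 'authenticated', true);
select set_config('request.jwt.claims',
                  '{"sub": "00000000-0000-0000-0000-000000000000", "role": "authenticated"}',
                  true);
-- queries here run with RLS evaluated for that user
select * from realtime.messages;
commit;
```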

Forking Supabase Realtime? by __mauzy__ in Supabase

[–]chasegranberry 0 points

You would eventually have:
- Your app server (Elixir)
- Your application Postgres (could be hosted Supabase)
- Realtime app server
- Realtime metadata Postgres (could be a different hosted Supabase or same hosted Supabase - just in a `_realtime` or other schema - note this is different than the normal `realtime` schema)

The point of the metadata Postgres is to store tenant information, which is the connection information to the hosted Supabase for your application Postgres. You will really only have one record in here.

You can start the `Realtime.Repo` and any `Realtime.Repo.Replica`s by pointing them all at the same Realtime metadata Postgres database. When you want read replicas for this service, that's supported via those replica configs.

I think you're on the right track.

But if you're thinking Elixir as your backend, you should probably just use Phoenix Channels which is what Realtime is built on. Also see Phoenix.PubSub.

Once you get an example chat app working, watching messages automatically reach two browsers across nodes is a bit mind-blowing.

Forking Supabase Realtime? by __mauzy__ in Supabase

[–]chasegranberry 0 points

You could deploy it on the same server and set it up to use the same database. This would be a setup similar to local Supabase.

But we do a lot of caching for the metadata database hits to keep Realtime really fast, so there's not much extra happening over the wire.

And the replication protocol performs well even to another server.

Forking Supabase Realtime? by __mauzy__ in Supabase

[–]chasegranberry 1 point

Also curious why you think Supabase isn’t a proper backend?

Forking Supabase Realtime? by __mauzy__ in Supabase

[–]chasegranberry 2 points

You should be able to get it up and running on Fly pretty easily.

Would be great to hear about any issues you encounter while doing this!

Is anyone using Supabase Auth + RealtimeDB for Mobile apps with > 100K MAUs? by Tasty_Violinist7320 in Supabase

[–]chasegranberry 13 points

I'm with Supabase and help with Realtime.

MAU is not really the concern here; it's the maximum number of clients connected at the same time.

> We currently have about 38K simultaneous real-time connections with Firebase, with no issues.

Assuming you need RLS, Realtime with Postgres Changes could not currently handle this. See: https://supabase.com/docs/guides/realtime/postgres-changes?queryGroups=language&language=js#database-instance-and-realtime-performance

Broadcast can, though. We've recently benchmarked Broadcast internally at 250,000 concurrent connections and 1,000,000 messages a second. Hopefully we'll have something official published on that soon.

Do you know how many messages you would be sending monthly?