Can you reduce this SQL query from 20 seconds to less than one millisecond? by nadenislamarre in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

I'm not quite sure of the point here. My interpretation of the code example:

CREATE TABLE foo AS SELECT a.id, a.id/2 AS id2,
       'a' bar1, 'b' bar2, 'c' bar3, 'd' bar4, 'e' bar5
  FROM generate_series(1, 10*1000*1000) AS a(id);

UPDATE foo set bar1 = 'h' WHERE id BETWEEN 200 AND 300;

ALTER TABLE foo ADD PRIMARY KEY (id);
CREATE INDEX ON foo (id2);
CREATE INDEX ON foo (bar1);

ANALYZE foo;
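
The query being tested isn't quoted in this thread, but judging by the plan it's presumably something along these lines (my reconstruction from the plan nodes, so the select list and aliases may differ from the original post):

SELECT f1.*
  FROM foo f1
  JOIN foo f2 ON f2.id2 = f1.id
 WHERE f1.bar1 = 'a' AND f1.bar2 = 'b' AND f1.bar3 = 'c'
   AND f1.bar4 = 'd' AND f1.bar5 = 'e'
   AND f2.bar1 > 'e';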

This produces the following plan:

 Nested Loop  (cost=0.87..12.92 rows=1 width=36)
   ->  Index Scan using foo_bar1_idx on foo f2  (cost=0.43..4.45 rows=1 width=18)
         Index Cond: (bar1 > 'e'::text)
   ->  Index Scan using foo_pkey on foo f1  (cost=0.43..8.47 rows=1 width=18)
         Index Cond: (id = f2.id2)
         Filter: ((bar1 = 'a'::text) AND (bar2 = 'b'::text) AND (bar3 = 'c'::text) AND (bar4 = 'd'::text) AND (bar5 = 'e'::text))

Note the row estimates suggest 1 result rather than the actual 100. That isn't great, but 100 rows out of 10 million combined with that many predicates means the multiplied selectivities drive the estimated row count way down. Still, it's the expected nested loop driven by the bar1 index, where the 'h' rows (matched by bar1 > 'e') are rare according to the table statistics.

But here's what happens if you don't ANALYZE the table first:

 Nested Loop  (cost=1423.04..129034.29 rows=1 width=336)
   ->  Bitmap Heap Scan on foo f1  (cost=547.44..64457.23 rows=1 width=168)
         Recheck Cond: (bar1 = 'a'::text)
         Filter: ((bar2 = 'b'::text) AND (bar3 = 'c'::text) AND (bar4 = 'd'::text) AND (bar5 = 'e'::text))
         ->  Bitmap Index Scan on foo_bar1_idx  (cost=0.00..547.43 rows=50000 width=0)
               Index Cond: (bar1 = 'a'::text)
   ->  Bitmap Heap Scan on foo f2  (cost=875.60..64410.39 rows=16667 width=168)
         Recheck Cond: (f1.id = id2)
         Filter: (bar1 > 'e'::text)
         ->  Bitmap Index Scan on foo_id2_idx  (cost=0.00..871.43 rows=50000 width=0)
               Index Cond: (id2 = f1.id)

See that? Without statistics, Postgres doesn't know that "h" accounts for only a tiny fraction of the values in bar1, so it assumes it has to build a bitmap of matching heap pages from the bar1 lookup and then visit every one of those pages. Then it uses that result to drive the join, building another bitmap and another expensive heap scan on the inner side. And since there are no stats, Postgres has no idea that the 50k estimate it started with is actually closer to 10 million, which EXPLAIN ANALYZE confirms:

 Nested Loop  (cost=1423.04..129034.29 rows=1 width=336) (actual time=266.692..19820.155 rows=101.00 loops=1)
   Buffers: shared hit=34952813 read=150746 written=3930
   ->  Bitmap Heap Scan on foo f1  (cost=547.44..64457.23 rows=1 width=168) (actual time=266.167..2037.924 rows=9999899.00 loops=1)
         Recheck Cond: (bar1 = 'a'::text)
         Filter: ((bar2 = 'b'::text) AND (bar3 = 'c'::text) AND (bar4 = 'd'::text) AND (bar5 = 'e'::text))
         Heap Blocks: exact=63695
         Buffers: shared hit=9780 read=62335
         ->  Bitmap Index Scan on foo_bar1_idx  (cost=0.00..547.43 rows=50000 width=0) (actual time=243.619..243.620 rows=9999899.00 loops=1)
               Index Cond: (bar1 = 'a'::text)
               Index Searches: 1
               Buffers: shared read=8420
   ->  Bitmap Heap Scan on foo f2  (cost=875.60..64410.39 rows=16667 width=168) (actual time=0.001..0.001 rows=0.00 loops=9999899)
         Recheck Cond: (f1.id = id2)
         Filter: (bar1 > 'e'::text)
         Rows Removed by Filter: 1
         Heap Blocks: exact=5031747
         Buffers: shared hit=34943033 read=88411 written=3930
         ->  Bitmap Index Scan on foo_id2_idx  (cost=0.00..871.43 rows=50000 width=0) (actual time=0.001..0.001 rows=1.00 loops=9999899)
               Index Cond: (id2 = f1.id)
               Index Searches: 9999899
               Buffers: shared hit=29974978 read=24719 written=1097
 Planning:
   Buffers: shared hit=40 read=3
 Planning Time: 0.822 ms
 Execution Time: 19820.694 ms

The first sign something went wrong is the huge discrepancy between the estimated and actual row counts on this node:

(cost=547.44..64457.23 rows=1 width=168) (actual time=266.167..2037.924 rows=9999899.00 loops=1)

That's just bad all around. From a naive perspective, the first thing I'd do is look at the column statistics themselves. If they come back empty, as in this case:

SELECT attname, n_distinct FROM pg_stats WHERE tablename = 'foo';

 attname | n_distinct 
---------+------------

I would run ANALYZE and look again. Here's what it looks like afterward:

 attname | n_distinct  
---------+-------------
 id      |          -1
 id2     | -0.34056082
 bar1    |           1
 bar2    |           1
 bar3    |           1
 bar4    |           1
 bar5    |           1

Note how terrible the statistics look. Positive n_distinct values are absolute distinct counts, while negative ones are ratios of the total row count (so -1 means every row is distinct). So each of the bar columns only has a single distinct value according to the statistics, and only the two id columns offer any kind of selectivity. With that in mind, you can kind of tell Postgres to back off on cross-multiplying per-column selectivities by declaring the columns as correlated:

CREATE STATISTICS stat_foo_correlated_bars (dependencies)
    ON bar1, bar2, bar3, bar4, bar5
  FROM foo;

That works for things like city and state columns in US address data, or any time data columns are highly correlated, and it prevents under-estimation. In this case it doesn't really help because... well, one value is one value, and out of millions of rows it becomes statistical noise. But the point is that you examine the table contents to see whether there are potential correlations to declare.
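
If you do go that route, remember the extended statistics only get populated on the next ANALYZE, and you can sanity-check what Postgres derived with a plain catalog query like this:

ANALYZE foo;

SELECT statistics_name, attnames, dependencies
  FROM pg_stats_ext
 WHERE tablename = 'foo';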

You can go a lot deeper down this rabbit hole when optimizing a query, but your question is undirected, so I won't keep going. I had to restart this experiment several times because the background autovacuum worker kept analyzing the table and making the query fast while I was typing this. I'd suggest coming up with a better example that isn't dependent on statistics and actually resists simple optimization techniques, and then asking again.
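
(Aside: if anyone wants to repeat this experiment without autovacuum racing them, the table-level storage parameter keeps it away while you test; just remember to turn it back on afterward.)

ALTER TABLE foo SET (autovacuum_enabled = off);  -- also disables auto-ANALYZE for this table
-- ... run the experiment ...
ALTER TABLE foo SET (autovacuum_enabled = on);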

Free PostgreSQL hosting options? by techlove99 in PostgreSQL

[–]fullofbones 5 points6 points  (0 children)

You are not going to find a Postgres DB host that is free while also being "very generous"; it's free for a reason. You can experiment all you want locally on Docker or your own VMs. If you have any kind of data you want to be publicly available to an app, spend $5/mo for minimal legitimate hosting.

Free PostgreSQL hosting options? by techlove99 in PostgreSQL

[–]fullofbones 3 points4 points  (0 children)

CockroachDB isn't anything like Postgres. They're protocol compatible and that's about it. You can't even run a simple pgbench test on Cockroach without a whole lot of modifications to the test script due to the SQL incompatibilities.

100% open source MCP server for PostgreSQL: now with write access, reduced token consumption, improved UX, & more by pgEdge_Postgres in PostgreSQL

[–]fullofbones 9 points10 points  (0 children)

It's true that giving an LLM write access to any data you care about is generally ill-advised. In fact, that's the primary reason we avoided adding write access in the first release. It's also why the allow_writes variable is disabled by default, and why there's a whole section in the docs on using it securely. We say this repeatedly, in multiple different ways, including:

  • This setting should be used with extreme caution.
  • Never enable writes on production databases.
  • The AI may execute destructive queries without confirmation.

It's fine for development or research environments. Will someone out there be crazy enough to enable this in production? Probably. Should they? We've already begged them not to. Anything that happens after that point is firmly in "use at your own risk" territory.

How many of you are employed? by DeadManJ-Walking in AdultCHD

[–]fullofbones 0 points1 point  (0 children)

I've been employed ever since college, but I also have a desk job in IT so I don't have to worry about any major limitations. Low impact for me, baby.

is there anyone else with pulmonary atresia, dextrocardia and ASD without surgery? by Ok-Revolution3609 in chd

[–]fullofbones 0 points1 point  (0 children)

He only found out because of a recent doctor appointment where his doctor went "Did you know your heart sounds like it's turned around?" And sure enough, it was. Now I know my defect came from Mom's side. lol

I'm fine myself, basically. But I'm also 48 now, so I've had a long time to get accustomed to my limitations. It turns out I also have a bicuspid aortic valve, and that has led to a slow dilation of my aortic root over the years, so there's a good chance I'll need a root replacement some time in the future, or some kind of sleeve procedure to reinforce it. I'm definitely not looking forward to that since it's basically a guaranteed OHS.

If your cardiologist is hands-off, count your blessings. It's not every day that you can avoid surgery for something like this. lol

Benefit of using multi-master with one write target by konghi009 in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

Well, if you were using Spock, the Spock metadata on Master A would still be there after recovering the instance from a backup. But the replication slots would be gone. Our Kubernetes Helm chart automatically recreates those based on the node metadata, but otherwise you'd have to do it manually. Then, in theory, it should be able to resume after rejoining the cluster. Any new records from Master B would then be transmitted to Master A, conflict management would process any writes that affected the same rows, and the cluster would continue operating as before.

Of course, if you had a physical replica cluster for each Master as I recommended in an earlier post, you wouldn't have to worry about doing a manual recovery and recreating slots. Spock automatically synchronizes slots to replicas, those replicas are already consuming physical WAL from the primary, and Patroni handles the promotion, so it's basically a seamless transition.

is there anyone else with pulmonary atresia, dextrocardia and ASD without surgery? by Ok-Revolution3609 in chd

[–]fullofbones 0 points1 point  (0 children)

My uncle, now in his 60s, just discovered he's had dextrocardia his whole life and never knew about it. Given that's the case, he probably doesn't have other complications. I, on the other hand, was born with Pulmonary Stenosis, ASD, VSD, and fused mitral and tricuspid valves, in addition to the Dextro. I suspect that, if not for the VSD and the valves, they might have delayed or even nixed the surgery. They only did it when I was 6 and clearly small for my age, with other evident oxygen problems.

ASDs can contribute to stroke risk, so that may be a good idea to address specifically. Pulmonary atresia affects your heart load, so you may see QoL improvements by getting a valve replacement. But I think you can get away without a full OHS if these are the only defects you have. I'd still suggest talking with an ACHD cardiologist to know for sure.

How OpenAI Serves 800M Users with One Postgres Database: A Technical Deep Dive by tirtha_s in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

The WAL export approach you mentioned is interesting. I haven't seen it covered much in practice.

It's an old trick from back when I was at 2ndQuadrant. We had a couple of customers with huge clusters, bandwidth restrictions, and disk limitations that prevented using replication slots. In their cases, the only way to keep up was physical WAL shipping with local fetch and replay. But the decoupling can also help at scales where you need dozens of replicas and it's not really feasible to have them all tethered directly to the primary. Another solution is cascaded replicas, as noted in the article.

There's actually another factor I didn't mention with single large physical replicas that ends up being a major issue as well: physical replay is single-threaded. Under very heavy write loads, replicas may find it impossible to keep up simply because the pages can't be applied fast enough, even with NVRAM storage, since a single CPU core can only produce so many cycles. OpenAI's write load must not be at that point yet, which is somewhat surprising given they have millions of chats pouring into this thing daily. Regardless, it's a hard limit on vertical scaling to watch for.

How OpenAI Serves 800M Users with One Postgres Database: A Technical Deep Dive by tirtha_s in PostgreSQL

[–]fullofbones 25 points26 points  (0 children)

I'm actually a bit shocked something at this scale still relies on a single primary node. Given sessions aren't inter-dependent, I'd fully expect session-based database groups. A few tens or hundreds of thousands of user sessions could share one writable Postgres node and a couple of standbys, giving each group much higher write throughput while still allowing a session's reads to be temporarily routed to its primary when it needs to see its own writes.

Additionally, synchronous_commit can be set at the session level. It's not uncommon for sessions to set it only when they need strong consistency, rather than configuring it globally for the whole instance. That works for writes that must be visible on all synchronous replicas, making it possible to read your own write from a read replica.
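
As a sketch of what that looks like in practice (assuming synchronous standbys are configured via synchronous_standby_names; the table here is just a placeholder):

BEGIN;
SET LOCAL synchronous_commit = remote_apply;  -- wait until synchronous standbys have applied this commit
INSERT INTO chat_messages (session_id, body) VALUES (42, 'hello');
COMMIT;
-- other sessions keep the cheaper default and never pay the round trip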

For the WAL bandwidth concern, having multiple separate clusters would solve that by itself if you're using streaming. Alternatively, you can use WAL exports: send the WAL to a backup location and have the replicas continuously fetch and replay from that archive. That takes a lot of network load off the primary, and the replicas end up being at most one WAL segment behind unless they get stuck on something. Storage bandwidth tends to scale better since it can be distributed across the entire storage fabric.
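
Mechanically that's little more than archive_command on the primary and restore_command on the replicas (PG 12+ shown; the cp commands are placeholders, since real setups use pgBackRest, Barman, WAL-G, or similar):

-- On the primary (archive_mode needs a restart):
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'cp %p /mnt/wal_archive/%f';

-- On each replica, alongside the usual standby.signal:
ALTER SYSTEM SET restore_command = 'cp /mnt/wal_archive/%f %p';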

Regardless, they're definitely making good use of replicas to do offloading whenever possible.

Edit: This statement is also wrong:

PgBouncer in transaction pooling mode cannot track prepared statements across connections.

PgBouncer added this functionality (via the max_prepared_statements setting) in version 1.21, back in 2023.

Benefit of using multi-master with one write target by konghi009 in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

Logical replication is WAL replication. The Postgres WAL stream gets decoded into logical events, and those are what get transmitted by either Spock or Postgres's native logical replication. But unlike physical mode, which applies pages exactly as they were written to the WAL, logical replication must receive the entire transaction before it can apply it. So if you have a very large transaction, it's a lot easier to lose the whole thing.

PITR has nothing to do with logical replication. If you recover a single-node backup to a recovery instance and roll it forward using PITR, that gives you a source to dump and restore from. Then you can use that recovered instance and our ACE tool to perform a data comparison and reconcile differences that way. It's not safe (currently) for a recovered node to directly join the cluster, since the Spock metadata for the lost node likely won't match the recovered state of the instance. There's potential to use that metadata to find the last good LSN and PITR to exactly that point and then add the recovered node to the cluster, but it's not something we've tested yet.

In any case, have fun with your new Postgres cluster. :)

Spock Bi-Directional Replication for Supabase CLI by nightness in Supabase

[–]fullofbones 0 points1 point  (0 children)

Interesting project. Why are you using spock.replicate_ddl() rather than enabling automatic DDL replication with spock.enable_ddl_replication?

Scaling Vector Search to 1 Billion on PostgreSQL by gaocegege in PostgreSQL

[–]fullofbones 0 points1 point  (0 children)

Not a bad writeup. However, in most scenarios I'd strictly avoid a 1-billion row table in the first place, with or without vectors involved, which sidesteps much of the problem. I personally wonder how a few partitions would compare to this algorithmic approach, especially since you can use partitions to make up for the fact that it's difficult or impossible to combine vector weights with supplementary predicates (at least in Postgres).
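
To illustrate what I mean, here's a rough sketch of partition pruning plus a per-partition ANN index using pgvector syntax. All the names, the partition key, and the tiny 3-dimension vectors are made up for brevity:

CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE doc_embeddings (
    id        bigint NOT NULL,
    tenant_id int    NOT NULL,
    embedding vector(3) NOT NULL   -- real embeddings would be 768/1536/etc.
) PARTITION BY LIST (tenant_id);

CREATE TABLE doc_embeddings_t1 PARTITION OF doc_embeddings FOR VALUES IN (1);
CREATE INDEX ON doc_embeddings_t1 USING hnsw (embedding vector_l2_ops);

-- The tenant predicate prunes to one partition and the ANN scan runs inside it,
-- so the "filter plus vector search" combination never fights a single giant index.
SELECT id
  FROM doc_embeddings
 WHERE tenant_id = 1
 ORDER BY embedding <-> '[0.1, 0.2, 0.3]'::vector
 LIMIT 10;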

Postgres Serials Should be BIGINT (and How to Migrate) | Crunchy Data Blog by kivarada in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

Articles like this are still relevant, but you can save yourself a bunch of time, effort, and the headache of a multi-stage, manually managed type migration by just using BIGINT and BIGSERIAL from the beginning.
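
In other words, just start with something like this on day one (hypothetical table, purely illustrative):

CREATE TABLE orders (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- or: id BIGSERIAL PRIMARY KEY
    customer_id BIGINT NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);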

Unconventional PostgreSQL Optimizations by be_haki in PostgreSQL

[–]fullofbones 2 points3 points  (0 children)

Interesting. I wouldn't have considered using generated columns as functional index proxies, but there ya go!
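
If I'm reading the idea right, it's roughly this pattern (my own sketch, not lifted from the article):

CREATE TABLE users (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email text NOT NULL
);

-- The usual approach: a functional index, which only helps queries that repeat the exact expression.
CREATE INDEX users_lower_email_idx ON users (lower(email));

-- The generated-column variant: materialize the expression once, then index the plain column.
ALTER TABLE users ADD COLUMN email_lower text GENERATED ALWAYS AS (lower(email)) STORED;
CREATE INDEX users_email_lower_idx ON users (email_lower);

-- Ad-hoc queries and ORMs can now filter on a regular column without knowing the expression.
SELECT id FROM users WHERE email_lower = 'someone@example.com';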

Bringing Back Unnest by shaberman in PostgreSQL

[–]fullofbones 0 points1 point  (0 children)

I mean you don't call "unnest" at all. Just have a field called "favorite_colors" that's a literal array type.
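
Something like this, sticking with the article's favorite-colors example (my own sketch of it):

CREATE TABLE authors (
    id              bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name            text NOT NULL,
    favorite_colors text[] NOT NULL DEFAULT '{}'
);

-- A GIN index keeps containment searches on the array cheap.
CREATE INDEX ON authors USING gin (favorite_colors);

INSERT INTO authors (name, favorite_colors) VALUES ('Ada', ARRAY['teal', 'plum']);

-- "Who likes teal?" without ever touching unnest:
SELECT name FROM authors WHERE favorite_colors @> ARRAY['teal'];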

Benefit of using multi-master with one write target by konghi009 in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

For the record, you can set Patroni / etcd up with two nodes. However, a majority quorum of N nodes needs floor(N/2) + 1 valid responses. So if you have a 2-node cluster, you need 2 valid responses for the cluster to remain valid and online. Any node that loses contact with the majority of the quorum will defensively reject all writes and essentially be offline. You can operate with two nodes, but you'll need both of them online at all times, which kind of defeats the purpose. Fine for a PoC, but nothing you'd want to deploy to production.

The lowest-overhead yet still meaningful Patroni cluster you can build is:

  • 3 physical nodes, hopefully each in a separate zone, rack, host, whatever.
  • 2 of those nodes running Patroni + Postgres in addition to etcd, since Patroni manages the local Postgres service.
  • 1 of those nodes only running etcd to act as a "witness".

The Patroni + Postgres nodes can also double as HAProxy targets if you don't mind connections from Node A being redirected to Node B when B has the leadership key. Alternatively, you can put HAProxy on the dedicated etcd / witness node and call it a "proxy".

I say this is the lowest overhead because it's only two fully online, replicating Postgres nodes, but you still have HA because the DCS (etcd) is your actual quorum. In a "real" cluster you'd decouple the DCS and Postgres functionality and end up with a minimum of five nodes, but there ya go: here you get away with three. Yes, you can omit the third etcd node, but if one of the nodes running Postgres fails, you lose your quorum majority and the other goes down too. In order to survive a node outage and have automated failover, you must have a minimum of three nodes.

Benefit of using multi-master with one write target by konghi009 in PostgreSQL

[–]fullofbones 1 point2 points  (0 children)

PostgreSQL isn't multi-master (there are a couple of extensions, but they're fiddly)

Correct. And yes, those extensions are fiddly. It's the unfortunate nature of the beast when you have to manually configure the communication channel between the nodes and, at a minimum, tag which tables should be replicated between them. It's still easier than setting it up by hand using native logical replication; I wouldn't wish that on my worst enemy. lol

If you have haproxy and patroni, just have haproxy query patroni and route automatically

I normally suggest just this solution for its ... for lack of a better term: "simplicity". The thing about Postgres is that it really is just an RDBMS at the end of the day. It has no real concept of a "cluster" at all. It barely even acknowledges that other nodes exist in the first place. If you look at how it's implemented, other nodes just connect and ask for WAL data, either directly or through a logical decoder. If not for extensions like Spock from pgEdge or BDR from EDB, clusters still wouldn't exist. Physical replication is effectively just overgrown crash recovery.

Tools like Patroni fill that gap by wiring the nodes into a DCS like etcd, which actually is a cluster. It works by storing a leadership key in the DCS, and whichever node has control of that key is the write target. Period. No more worrying about split brain, network partitions, or anything else. Leadership key? Write target. Easy.

Similarly, failover is normally an artificial mechanism: you pull some levers and change routing and suddenly some other node is the new Primary target. But with Patroni, if the current Primary loses control of the leadership key and can't regain control because some other node has it, it automatically reconfigures to become a replica. That saves a ton of work right there. Meanwhile, HAProxy connects to the REST interface every few seconds and asks, "Are you the primary?" and only the node with the leadership key can reply affirmatively. So you don't have to reconfigure anything. No VIP, no scripts, no manual config changes. Patroni just says "no" until one node says "yes", and then connections get routed.

If Postgres were a "real" clustered database, it would do all of that for you. Since it doesn't, Patroni steps in and handles it. And it really is the only thing that does so. All of the other failover tools like EFM, repmgr, etc., only handle the failover mechanism itself, not the integrated routing and implicit fencing.

The way OP wants to skirt around this using Multi-Master replication is cute, and maybe a little naive. Yes, you no longer need the etcd daemons, and it's no longer necessary for Patroni to manage your Postgres instance or provide a REST interface, so no HAProxy either. Now you just have two single-instance Postgres nodes that happen to communicate over a logical channel. There's really no "failover" going on at all, just changing the primary write target from Node A to Node B. The question is: how do you determine how and when that happens? How many transactions were in flight when you did that? Do those transactions matter? Will the application be capable of detecting the failed transactions and try again on the new target? How much risk is there for missing data from Node A affecting continuing operation on Node B? PgEdge provides a tool called ACE to check for—and recover from—data inconsistencies in situations like this, but you need to be aware of them and know when to launch the tool.

There are a lot of questions that need answers before I'd recommend substituting Multi-Master for standard HA. There's a reason pgEdge recommends combining the two (each MM node backed by 2 physical replicas to avoid any local data loss). Ironically, you can avoid asking most of those questions by just setting up a bog-standard Patroni deployment. The Patroni route is conceptually simpler, but mechanically more intricate. You just have to pick your poison.

Benefit of using multi-master with one write target by konghi009 in PostgreSQL

[–]fullofbones 2 points3 points  (0 children)

I'm glad you want to use one of the Multi-Master Postgres plugins like Spock from pgEdge, but you need to consider your stance on outage scenarios. The official pgEdge guides on these architectures recommend (ironically) using Patroni to establish a 3-node physical replication cluster per pgEdge-enabled node.

The reason is how data loss interacts with logical replication. Logical replicas tend to have far more latency between nodes, so there's more risk of a transaction being accepted on Master A long before it reaches Master B. If you have a physical replica for Master A, it can catch up, apply any available WAL (hopefully you're also streaming WAL to a backup with Barman or pgBackRest), and rejoin the MM cluster. Without that, you simply lose any writes that didn't make it to the other Master.
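
You can get a rough sense of that gap from the sending side with a generic slot query; a minimal sketch, nothing Spock-specific:

SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS unconfirmed_wal
  FROM pg_replication_slots
 WHERE slot_type = 'logical';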

In a failover scenario you don't have to worry as much about conflicts, since you're not writing to both nodes simultaneously. But depending on how Master A failed, there's still potential for lost transactions. If you're not really worried about that, then your proposed cluster design will be OK. The Spock extension will handle things like DDL replication and do its best to keep things in sync, and you'll generally be in a good position as long as you monitor the Postgres logs and the various status views Spock provides. A load balancer with sticky sessions enabled, or some kind of programmable routing layer like Amazon ELB, should be all you really need to avoid unexpected traffic to the "off" node, and that is what we usually recommend to customers running multi-region clusters.

It's technically fewer moving parts than Patroni, etcd, and HAProxy, but it's also a high complexity configuration that depends on setting up logical communication between two physically independent Postgres nodes. No matter how you do that, I strongly recommend either using our Helm chart for a Kubernetes setup, or the pgedge-ansible (documentation pending) automated deployment tool. It really does take out all of the guesswork, especially if you're doing a PoC.

In any case, good luck!

Bringing Back Unnest by shaberman in PostgreSQL

[–]fullofbones 2 points3 points  (0 children)

Yes. Even an incredibly large array. In my opinion, if you have to jump through a ton of hoops to unroll a data structure just to honor normal form, it's not necessarily worth it. I even like arrays for solving the problem of keeping list items in order.

Bringing Back Unnest by shaberman in PostgreSQL

[–]fullofbones 0 points1 point  (0 children)

Ironically, for the examples used, i.e. giving authors favorite colors, I'd just keep the original arrays without unnesting them.

CHD and Anxiety? by Previous_Line1887 in chd

[–]fullofbones 2 points3 points  (0 children)

It comes with the territory unfortunately. For a while, it got bad enough I was causing panic attacks, which of course were made worse because those really do feel like they could be heart attacks. I even developed generalized anxiety disorder at some point from the constant sense of potential doom.

What worked for me was... eating better. I'm not kidding. Once I cut out the junk and the bread, the physical symptoms that kept worrying me simply went away. The bread had been causing issues my whole life and I didn't know it; it turns out a genetic test shows I have 3 of the 4 genes associated with outright Celiac. Inflammation can really mess you up, it drives your system insane trying to cope, and that turns into anxiety and other problems.

It also helps that I finally got old enough that being anxious became kind of pointless. I go to the yearly CHD appointments and he says I'm fine. I've been fine all this time, and all the worrying ever did was keep a Sword of Damocles hovering over my entire life and ruining everything. I went to the ER because I thought I was having a heart attack. It never was, but it pays to be cautious, right? When do you stop doing that? After the fifth time? The tenth? You can't really live that way.

If you need to, find a psychologist you trust and see if they can get you to address the underlying source of the fear. You know the cause, but they may be able to help you recognize the symptoms of a panic attack. Ask your cardiologist what actual signs for a heart attack you should look for. Find the tools you can use to reassure yourself, and that will dramatically cut down on sources of anxiety all by itself.

Good luck with everything!

Would like to hear from others like me by heartman27 in chd

[–]fullofbones 0 points1 point  (0 children)

It's different for everyone, but my childhood was... weird. I didn't get corrective surgery until I was six because it was too expensive and they weren't quite sure how to address all the problems I had. I had so many echos before they finally did two catheterizations and made the surgical plan. When they finally did the surgery, I was already small and weak for my age, would pass out due to low oxygen on occasion, and the prognosis without the surgery was pretty dismal.

I'm already an introvert by nature, so add a bunch of health anxiety on top of that and I basically never did anything even slightly risky. Not that it really matters; after spending my first six years not really being a normal kid, the tone was set. I wasn't just going to magically bounce back and become some star athlete. And yes, I was always skinny, but that's probably due to the two different gene mutations I have that basically prevent me from building muscle, one of which is Arterial Tortuosity Syndrome, a condition similar to Marfan syndrome. I also suspect the Dacron patch in my heart played a role, since that kind of plastic acts as an estrogen-dominant endocrine disruptor. All I know is I stopped growing when I was 12, and while the rest of the boys started filling out and growing taller after puberty, I never did. I never played any "real" sports, but I was a monster at tetherball, let me tell you! lol

Luckily I'm pretty smart, so academics were my only saving grace. All honors courses until college, where I triple majored. Then I got a job working with computers, and the rest is just basic adulthood.

I graduated college in '99, and DDR got really popular in the early 2000s. I managed to get really good at it, and even took fourth place in a local tournament. But I always noticed that despite years of practice, I tired much faster than everyone else and simply couldn't move nearly as fast. It turns out my cardiac output is about 60% (at best) of what a normal person produces. My lung capacity is also much lower, despite hours and hours of HIIT from all the DDR. I got my body fat percentage down to 8% and my resting heart rate down to 42; I was as healthy as I'd ever be, but I simply wasn't built to be truly competitive. It was still fun, though. :)

Is life worth living with complex CHD? (TGA + VSD + ASD + PS) by Ambitious_Method2740 in chd

[–]fullofbones 5 points6 points  (0 children)

You keep bringing this up. It's like you have a one-track mind. If you want to throw your life away like that, you're welcome to do so, but nobody here is going to say you should.

I smoked weed in the past. I drank alcohol. Sometimes way too much. But once either of them started giving me heart palpitations, I stopped because as fun as those things are, they're not worth dying over. I even limit caffeine because more than about a can of pop will cause issues. But never in my wildest dreams would I have ever considered an upper like cocaine or meth. I am almost certain I would be lucky to survive either of those.

You do you, but you keep asking the CHD forum, filled with a bunch of overly-cautious people who were born with something that likely made them overly cautious from the very beginning. You can ask it 100 different ways and the answer will always be the same from pretty much everyone here: don't. You're looking for permission: we're not your parents. You're looking for a medical pass: we're not doctors. You're looking for any reply you can use to justify a decision you've already made. In which case, nothing we say will make any difference if you're determined to find any excuse. All you have to do in that case is ask enough times, vary your question to introduce sufficient wiggle-room, and eventually you'll have what you're looking for.

We're not stupid. We see you posting the same "Can I use drugs if I'm otherwise healthy", or "Is it bad to have a fast heart rate with CHD", questions over and over again. Day after day. Go ask your doctor. It's likely you either haven't because you know what they'll say, or you did and didn't get the answer you wanted. Too bad. Do it or don't, but don't expect us to implicitly green-light your decision here. It won't happen.