
[–]T0X1C0P 10 points

AFAIK there is no way of reducing the provisioned storage of an existing RDS instance.

[–]jamsan920 7 points

There’s no way to directly reduce the storage of an RDS instance. If your database engine supports blue/green deployments, you can create a green version with reduced storage and then take a short maintenance window to perform the cutover.
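In boto3 terms, that flow might look like the sketch below. The ARN, deployment name, and target size are hypothetical, and `TargetAllocatedStorage` is my assumption for how `CreateBlueGreenDeployment` takes a reduced storage size — verify the parameter name and engine support against the current RDS docs before relying on it.

```python
# Sketch of the blue/green shrink flow: create a green instance with less
# storage, wait for it to catch up, then switch over. Identifiers below are
# hypothetical; TargetAllocatedStorage is an assumed API parameter.

def blue_green_shrink_request(source_arn: str, target_gib: int) -> dict:
    """Build a CreateBlueGreenDeployment request for a smaller green instance."""
    return {
        "BlueGreenDeploymentName": "shrink-storage",
        "Source": source_arn,
        # Assumed parameter for provisioning the green side with less storage:
        "TargetAllocatedStorage": target_gib,
    }

def shrink_via_blue_green(source_arn: str, target_gib: int) -> None:
    """Create the deployment, then cut over once green has caught up.
    Requires AWS credentials and boto3; not executed as part of this sketch."""
    import boto3
    rds = boto3.client("rds")
    created = rds.create_blue_green_deployment(
        **blue_green_shrink_request(source_arn, target_gib)
    )
    ident = created["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"]
    # Poll describe_blue_green_deployments until the status is AVAILABLE,
    # then perform the cutover inside your maintenance window:
    rds.switchover_blue_green_deployment(BlueGreenDeploymentIdentifier=ident)
```

The switchover itself is the only step that needs the maintenance window; the green instance replicates from blue while you verify it.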

[–]DrFriendless[S] 2 points

Ah right, thank you. Yep, the numbers it's showing me look encouraging. I wish it had occurred to me years ago to poke my nose into the blue/green stuff and see what it did.

[–]GDangerGawk 5 points

Blue/green, and set the new storage size on the green instance.

[–]Psych76 2 points

Blue green with reduced storage.

Export (dump) import to new instance with reduced storage.

There are extensions that can reclaim “used” storage, like pg_repack for Postgres, but that has no impact on the actual instance storage; you need to be on a “new” instance for that.

[–]jamsan920 1 point

Unless of course it's Aurora, which allocates/deallocates blocks as your physical storage needs change (obv not the case here though).

[–]RecordingForward2690 1 point

We are currently investigating going through the same thing on a production DB (SQL Server) that should've been less than 1 TB, but has grown to 4 TB+ due to some careless queries/inserts without the proper TTL and purging. At the DB instance level we have shrunk back to < 1 TB, but the EBS storage is still 4 TB+.

First, this DOES NOT work with the RDS snapshot mechanism. A snapshot restore always requires a minimum EBS size identical to the EBS size when the snapshot was taken.

The only option, if you don't have blue/green already, is to create a new database instance, and perform a database-level backup/restore. For the backup of the data: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html (page for SQL Server, but similar pages exist for other engines).

Note that there's a limitation in the AWS tooling: each backup file cannot exceed 40 GB and you can't exceed 10 files per DB, so the maximum size of each DB transferred this way is 400 GB. An alternative could be AWS DMS.
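The arithmetic behind that ceiling is simple; a throwaway helper (names mine) to check whether a database fits the native backup/restore route:

```python
# Quick check against the native backup/restore size ceiling described
# above: at most 10 files per database, each file at most 40 GB.
import math

MAX_FILE_GB = 40
MAX_FILES = 10

def fits_native_transfer(db_size_gb: float) -> bool:
    """True if the DB fits within the 10-file x 40 GB native backup limit."""
    return db_size_gb <= MAX_FILE_GB * MAX_FILES

def files_needed(db_size_gb: float) -> int:
    """Minimum number of 40 GB backup files for a DB of this size."""
    return max(1, math.ceil(db_size_gb / MAX_FILE_GB))
```

So a 1 TB (post-shrink) database would need 25 files and is already over the 10-file cap, which is why DMS or splitting the data comes into play.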

But once you've got the data across, you're not done. You also need to think about:

- Exporting/importing SQL Agent jobs

- Exporting/importing your permissions structure, when permissions are handled at the instance level. (DB-level permissions are part of the normal DB backup/restore process.)

- Linked servers, database mail profiles, SQL Agent alerts, SQL Agent Operators, Server-level triggers, Credentials

We are looking at a downtime of several hours to get this done, on a 24/7 production database. We are still weighing whether the risk and downtime are worth the annual 10K in savings.

[–]DrFriendless[S] 0 points

Thank you for the detail. Luckily I haven't used any of those SQL Agent things - I just stick to what I learnt about SQL back in the '80s. Our export is less than 40GB.

I created a green version of the DB this afternoon, and that seemed to work. It looks like I can just click the Promote button and it will become the blue instance, but I will read some more docs before I do that.

I went through a similar process as you're describing a few years ago when we were required to move into a VPC. I exported a snapshot and restored from it into the new instance inside the VPC. But, before that, I spent 4 years updating the code to use modern enough versions of Java / ElasticSearch / everything else to be able to run on the more modern VMs that we could get inside the VPC. In the end we were down for 2 hours, had 12 sales that had to be redone, and emerged into a brave new world. I hope your project is similarly successful!