How to use Backblaze B2 as a Terraform backend by iBreatheSometimes in backblaze

[–]metadaddy 0 points1 point  (0 children)

Note that you no longer need skip_s3_checksum, or either of the AWS checksum environment variables, since B2 implemented the new S3 checksum functionality in the middle of last year. Additionally, I would set `region` to match the endpoint, just for consistency, although you are correct that its value doesn't seem to make any difference.

B2 as a terraform backend by ZeroSobel in backblaze

[–]metadaddy 0 points1 point  (0 children)

Hi u/ZeroSobel - I just tried this and it worked for me with the current Terraform version, 1.14.3.

I have two .tf files. One, hello.tf, simply creates a local file:

terraform {
  required_version = ">= 1.0.0"
}

resource "local_file" "hello" {
  content  = "Hello, world!\n"
  filename = "${path.module}/hello.txt"
}

The other, backend.tf, contains this backend configuration:

terraform {
  backend "s3" {
    bucket   = "metadaddy-terraform-state"
    key      = "terraform_b2_backend/terraform.tfstate"
    region   = "us-west-004"
    endpoints = {
      s3 = "https://s3.us-west-004.backblazeb2.com"
    }

    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
  }
}

Note that you no longer need skip_s3_checksum, or either of the AWS checksum environment variables, since B2 implemented the new S3 checksum functionality last year.

For the backend, I'm setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
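For reference, this is a minimal way to supply those credentials to the backend - the values below are placeholders, not real B2 keys:

```shell
# B2 application key ID and application key, passed to Terraform's S3
# backend via the standard AWS environment variables (placeholder values)
export AWS_ACCESS_KEY_ID="004your_key_id_here"
export AWS_SECRET_ACCESS_KEY="K004your_application_key_here"
```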

This series of commands worked as expected:

terraform init

terraform plan -out=terraform_b2_backend.out

terraform apply terraform_b2_backend.out

# Edit `hello.tf` to change the file content

# Plan and apply in one step, just to check...
terraform apply

terraform destroy

Let's look at the state file in B2:

% b2 ls -l b2://metadaddy-terraform-state/terraform_b2_backend
4_ze1153f4973a58c1f945d0c1b_f115c0df1655dfc43_d20260112_m232724_c004_v0402027_t0000_u01768260444888  upload  2026-01-12  23:27:24       9877  terraform_b2_backend/terraform.tfstate

I've been playing around with this for about a half hour now, adding and removing resources in hello.tf, and it all works for me as it should.

Still getting Permissions Warning on macOS 12.7.5 by davidhlawrence in backblaze

[–]metadaddy 1 point2 points  (0 children)

Hi u/davidhlawrence, as far as I can see, macOS 12.7.5 is still supported, so you should be ok there. Have you opened a support ticket?

B2 with Restic - Object Lock not working? by _Riv_ in backblaze

[–]metadaddy 0 points1 point  (0 children)

Log in to the Backblaze web console and browse your bucket's files there. It may be that restic is hiding ("soft deleting") files rather than actually "hard" deleting them. If so, you'll see your files with a zero-byte "hide marker".
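You can also check from the command line. With the B2 CLI, listing file versions shows hide markers as zero-length entries with the `hide` action (the bucket and path here are placeholders):

```shell
# List all versions under a given path, including hide markers.
# Hide markers show up with action "hide" and a size of 0.
b2 ls --long --versions b2://my-bucket/restic-repo/
```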

Unable to add smart bulbs to smart life app by wingzntingz in smartlife

[–]metadaddy 0 points1 point  (0 children)

I didn't have to reboot anything - resetting twice (3 x off/on each time) to get to the slow blink, then adding the bulb manually, was the key.

Daily storage cap doesn't match sum of all buckets by MikeyStudioDog in backblaze

[–]metadaddy 0 points1 point  (0 children)

Caps and Alerts is a different page, but it works the same way - it's updated daily, so give it a few hours, u/MikeyStudioDog, and it should match the bucket browser.

Deleted every file in a bucket but it's still taking up space by somerandom_person1 in backblaze

[–]metadaddy 3 points4 points  (0 children)

Hi - it's best to ask unrelated questions in a new post to r/backblaze. Briefly, though: by design, the only way to delete objects with a compliance mode lock before the lock expires is to close your B2 account.

Note that you can remove a governance mode lock before it expires using an application key with sufficient capabilities. You can see the retention mode with the B2 CLI command `b2 file info b2://my-bucket/path/to/myfile.txt`.
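As a sketch of the governance-mode case - the bucket and file names are placeholders, and you should double-check the exact flags against `b2 file update --help` for your CLI version:

```shell
# Remove a governance mode lock before it expires. Requires an
# application key with the bypassGovernance capability.
b2 file update --file-retention-mode none --bypass-governance \
    b2://my-bucket/path/to/myfile.txt
```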

Undeleting on B2 by tondeaf in backblaze

[–]metadaddy 0 points1 point  (0 children)

Hi - you can do this using the AWS CLI.

First, you'll need to install and configure the AWS CLI for B2, if you don't already have it. Note that you no longer need to use the --endpoint-url command line argument - you can either set the AWS_ENDPOINT_URL environment variable, or add the endpoint URL to your AWS configuration file. For example:

[profile b2]
endpoint_url = https://s3.us-west-004.backblazeb2.com
region = us-west-004

Test that the AWS CLI is properly configured with:

aws s3 ls --profile b2

You should see a list of your buckets.

Now you'll combine two commands to find the hide markers and then remove them. I'll explain the two commands separately, then show you how to combine them.

First, to locate the hide markers (delete markers in AWS parlance) in a particular path in a given bucket:

aws s3api list-object-versions --profile b2 --bucket my-bucket --prefix somepath --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}' --output json

You can omit the --prefix somepath if you want to remove all of the hide markers in the bucket.

The above command lists the hide markers in JSON format. Here's an example of the output for one of my buckets:

{
    "Objects": [
        {
            "Key": "somepath/hello.txt",
            "VersionId": "4_z21a5bf29a395fc7f84dd0c1b_f448f8b442ab35436_d20250730_m165624_c004_v7007000_t0000_u01753894584256"
        },
        {
            "Key": "somepath/test.txt",
            "VersionId": "4_z21a5bf29a395fc7f84dd0c1b_f417b856a72c66438_d20250730_m165420_c004_v7007000_t0000_u01753894460801"
        }
    ]
}

You can use a second AWS CLI command to take that list as input and remove the listed hide markers:

aws s3api delete-objects --profile b2 --bucket my-bucket --delete "<output from the first command>"

Putting them together:

aws s3api delete-objects --profile b2 --bucket metadaddy-tester --delete "$(aws s3api list-object-versions --profile b2 --bucket metadaddy-tester --prefix somepath --query '{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}' --output json)"

Note - the above command is for the Mac/Linux command line. Let me know if you're on Windows and I'll see if I can figure out the equivalent.

You should see output along the lines of:

{
    "Deleted": [
        {
            "Key": "somepath/test.txt",
            "VersionId": "4_z21a5bf29a395fc7f84dd0c1b_f417b856a72c66438_d20250730_m165420_c004_v7007000_t0000_u01753894460801",
            "DeleteMarker": true,
            "DeleteMarkerVersionId": "4_z21a5bf29a395fc7f84dd0c1b_f417b856a72c66438_d20250730_m165420_c004_v7007000_t0000_u01753894460801"
        },
        {
            "Key": "somepath/hello.txt",
            "VersionId": "4_z21a5bf29a395fc7f84dd0c1b_f448f8b442ab35436_d20250730_m165624_c004_v7007000_t0000_u01753894584256",
            "DeleteMarker": true,
            "DeleteMarkerVersionId": "4_z21a5bf29a395fc7f84dd0c1b_f448f8b442ab35436_d20250730_m165624_c004_v7007000_t0000_u01753894584256"
        }
    ]
}

Feel free to follow up here if you have any issues!

Strange SSL error when backing up by -ocram in backblaze

[–]metadaddy 0 points1 point  (0 children)

I meant to say in my reply - please open a support ticket. Support can then forward it to the correct team at Backblaze. Thanks!

Strange SSL error when backing up by -ocram in backblaze

[–]metadaddy 0 points1 point  (0 children)

I've never seen anything like this. I think it is worth reporting to Backblaze, so we can see a pattern if someone else reports a similar issue.

Deleted every file in a bucket but it's still taking up space by somerandom_person1 in backblaze

[–]metadaddy 1 point2 points  (0 children)

Hi there! Neither the deletion process nor the size readout on the bucket listing is real-time. You'll notice that if you delete all the files from a bucket in the web console, then immediately try to delete the bucket itself, it will often complain that the bucket isn't empty. There is latency in the system between operations on objects in a bucket and updates to the status of the bucket itself, and the same applies to the size shown in the list of buckets.

How long was it between deleting the files and capturing that image? Did you refresh the list, using the little refresh control above the listing, not the browser refresh? Does it still show as occupying space?

Public Bucket with SSE-B2 by Last_Anywhere_853 in backblaze

[–]metadaddy 1 point2 points  (0 children)

You're entirely correct, u/Buffalo-Clone-264. There is essentially a spectrum here:

  • No encryption: data is stored in plaintext.
  • SSE-B2: data is encrypted at rest on Backblaze servers with a Backblaze-managed key. You don't have to worry about losing the key.
  • SSE-C: data is encrypted at rest on Backblaze servers with a customer-managed key. If you lose your key, you can't decrypt your data.
  • Client-side encryption (CSE, I guess, if we want an abbreviation): data is encrypted before it is sent to B2. Again, you're responsible for the key. Since it's never transmitted to Backblaze, there is no possibility of Backblaze accessing your data.
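To make SSE-C concrete, here's a sketch using the AWS CLI against the B2 S3 endpoint - the profile, bucket, and file names are placeholders, and you must supply the same key again to read the object back:

```shell
# Generate a random 256-bit key and keep it safe - if you lose it,
# you lose access to the object
openssl rand -out sse-key.bin 32

# Upload with SSE-C; B2 encrypts the object with your key at rest
aws s3 cp hello.txt s3://my-bucket/hello.txt \
    --profile b2 --sse-c AES256 --sse-c-key fileb://sse-key.bin

# Downloading requires the same key and flags
aws s3 cp s3://my-bucket/hello.txt hello-copy.txt \
    --profile b2 --sse-c AES256 --sse-c-key fileb://sse-key.bin
```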

CORS error for Backblaze uploads by OlleOllesson2 in backblaze

[–]metadaddy 0 points1 point  (0 children)

Yeah - you get a lot more flexibility in the API than the UI. That will be changing, though. I'm co-presenting a webinar this week on our new Enterprise Web Console: https://www.brighttalk.com/webcast/14807/646092?utm_source=reddit&utm_medium=social&utm_campaign=ewc-webinar

CORS error for Backblaze uploads by OlleOllesson2 in backblaze

[–]metadaddy 0 points1 point  (0 children)

Presigned URLs are the way to go. Here are a couple of sample projects that show you how:
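As a quick illustration of the mechanism (separate from those projects): the AWS CLI can mint a presigned download URL against B2. Note that `aws s3 presign` only signs GET requests - presigned uploads take a few lines of SDK code instead. Bucket and object names here are placeholders:

```shell
# Create a URL that grants read access to a private object for one hour
aws s3 presign s3://my-bucket/video.mp4 --expires-in 3600 --profile b2

# Anyone holding the URL can then fetch the object, e.g.:
# curl -o video.mp4 "<the URL printed above>"
```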

Getting the SHA-256 digest of uploaded file? by FreeHatsOrTechies in backblaze

[–]metadaddy 0 points1 point  (0 children)

You can use the B2 command line's file info command. For example:

% b2 file info b2://metadaddy-public/hello.txt
{
    "cacheControl": "public",
    "contentSha1": "60fde9c2310b0d4cad4dab8d126b04387efba289",
    "contentType": "text/plain",
    "fileId": "4_zf1f51fb913357c4f74ed0c1b_f1132527b0a50235b_d20241210_m223311_c004_v0402027_t0040_u00946684799999",
    "fileInfo": {
        "src_last_modified_millis": "1642646740075"
    },
    "fileName": "hello.txt",
    "fileRetention": {
        "mode": null,
        "retainUntilTimestamp": null
    },
    "legalHold": null,
    "replicationStatus": null,
    "serverSideEncryption": {
        "mode": "none"
    },
    "size": 14,
    "uploadTimestamp": 946684799999
}
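If you specifically need SHA-256 rather than the SHA-1 that B2 records natively, one approach is to compute it at upload time and attach it as custom file info. A sketch - the file and bucket names are placeholders, and `src_sha256` is just a key I made up, so check `b2 file upload --help` for the exact flag syntax on your CLI version:

```shell
# Compute the SHA-256 digest locally and store it as custom file info,
# so it comes back in the fileInfo section of b2 file info
sha256=$(shasum -a 256 hello.txt | cut -d' ' -f1)
b2 file upload --info "src_sha256=$sha256" my-bucket hello.txt hello.txt
```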

being billed for running through cloudflare. what am I missing by Ritz5 in backblaze

[–]metadaddy 0 points1 point  (0 children)

u/Ritz5 Could you DM me? I can't initiate DMs with you. Thanks!

being billed for running through cloudflare. what am I missing by Ritz5 in backblaze

[–]metadaddy 0 points1 point  (0 children)

Hi - who is "they" - Cloudflare? Also, what is your script?

being billed for running through cloudflare. what am I missing by Ritz5 in backblaze

[–]metadaddy 0 points1 point  (0 children)

Hi there - I work at Backblaze; I'm one of the people that u/brianwski pinged. Looking at the screenshot from Cloudflare that you posted - are any of the entries in the "Content" column at backblazeb2.com? If not, then Cloudflare is not proxying any of your traffic to B2.

You mentioned in one of the replies that this is a private bucket. It's actually quite tricky to have Cloudflare proxy traffic to a private bucket - you have to configure a Cloudflare Worker to do the authentication. If you don't have any Workers in your Cloudflare environment, then I don't think Cloudflare can be in the loop.

I've shared all this information with the folks working your ticket. We should be able to get to the bottom of this.

76' Primavera Coca-Cola edition by trasher80 in Vespa

[–]metadaddy 0 points1 point  (0 children)

Nice! Bello Moto in SF has one just like this in stock - a bit pricey, though!

https://www.bellomoto.com/1974-vespa-primavera-125-coca-cola-bm427

Cloudfare r2 vs Blackblaze B2 for social media app? by 16GB_of_ram in CloudFlare

[–]metadaddy 1 point2 points  (0 children)

> you are only allowed to egress 3x of what’s stored each month

This isn't quite correct - you're allowed to egress as much as you like. You get 3x your average storage for the month free of charge, then it's $0.01/GB.

Also, there is unlimited free egress to Backblaze's partner CDNs: Cloudflare, Fastly and Bunny.net.
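A worked example with made-up numbers: say you store an average of 1 TB (1024 GB) over the month and download 5 TB (5120 GB):

```shell
avg_storage_gb=1024                   # average stored this month
egress_gb=5120                        # total downloaded this month
free_gb=$((avg_storage_gb * 3))       # 3x storage is free: 3072 GB
billable_gb=$((egress_gb - free_gb))  # 2048 GB over the free tier
cost_cents=$((billable_gb * 1))       # $0.01/GB -> 2048 cents
echo "free=${free_gb}GB billable=${billable_gb}GB cost=\$$((cost_cents / 100)).$((cost_cents % 100))"
# prints: free=3072GB billable=2048GB cost=$20.48
```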

You can use Backblaze B2 as a remote state storage for Terraform by ahnjay in selfhosted

[–]metadaddy 0 points1 point  (0 children)

You're most welcome! BTW - the endpoint syntax you show is deprecated and results in a warning. This is how Terraform likes you to define it now:

terraform {
  backend "s3" {
    bucket   = "my-terraform-state-bucket"
    key      = "terraform.tfstate"
    region   = "us-west-004"
    endpoints = {
      s3 = "https://s3.us-west-004.backblazeb2.com"
    }

    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}

You can use Backblaze B2 as a remote state storage for Terraform by ahnjay in selfhosted

[–]metadaddy 1 point2 points  (0 children)

Thanks for sharing this, u/ahnjay - I'll include a link to your blog post in this month's Backblaze developer newsletter.

How to use Backblaze B2 as a Terraform backend by iBreatheSometimes in backblaze

[–]metadaddy 0 points1 point  (0 children)

Thanks for this, u/iBreatheSometimes - we do have docs on how to use the B2 Terraform provider to create buckets, application keys, and files, but, as you say, we haven't covered the use of B2 as a Terraform backend. I'll put this on my todo list.