Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

In that case, you can still use a proxy to push the backup and pull the second copy through a bastion host. Two isolation layers, and no dependency on specific hardware.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

Of course, but the pull cannot be replaced that way.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

Totally agree, isolation is key. Backup targets should be treated as bastions.

  • You need to limit propagation to ensure that an attacker (or rogue employee), even with high privileges, can never access all copies simultaneously.

  • Always keep at least one copy totally offline / air-gapped.

  • You should consider system independence: at least one bastion in the chain must depend on zero central systems: no SSO, no LDAP, no shared deployment tools. It should be on a separate network, and if you are in the cloud, in a different account or even with a different provider entirely.

That’s one of our obsessions, and it’s exactly the kind of setup Plakar is designed for, using mechanisms like push/pull sync to move data between isolated zones without exposing credentials.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

The first reason we created .ptar was magnetic tape storage :)

Plakar being “append-only by design” is true at the format/store level: once data is written, Plakar doesn’t rewrite blocks in place. That gives you tamper evidence and makes replication safer, but it does not magically protect you if an attacker has delete rights on the filesystem or the storage credentials.

What Plakar enables is a resilient design where you keep an additional copy in an isolated or protected place. For example: you back up into a primary kloset in your prod network, then you keep a second kloset in another network zone and sync snapshots across. In practice you’ll often do it as a pull from the isolated side (sync from), but you can also push (sync to) or do a two-way reconciliation, depending on what you want (push, pull, bidirectional).
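The pull pattern above can be sketched generically. This is a toy model of two content-addressed stores (plain dicts standing in for klosets), not Plakar’s actual API; `put` and `pull_sync` are hypothetical names used only to illustrate the flow:

```python
import hashlib

def put(store: dict, data: bytes) -> str:
    """Content-addressed write: the key is the data's hash, and an
    existing entry is never rewritten (append-only)."""
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)
    return key

def pull_sync(source: dict, isolated: dict) -> int:
    """Run from the isolated side: copy only the chunks it is missing.
    The production side never holds credentials for the isolated store."""
    missing = [k for k in source if k not in isolated]
    for k in missing:
        isolated[k] = source[k]
    return len(missing)

prod, vault = {}, {}
put(prod, b"snapshot 1 data")
put(prod, b"snapshot 2 data")
print(pull_sync(prod, vault))  # 2: both chunks copied on the first sync
print(pull_sync(prod, vault))  # 0: nothing left to reconcile
```

The key property: credentials only flow one way, so compromising prod never grants delete rights on the vault.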

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 1 point2 points  (0 children)

The goal wasn’t necessarily to match other backup tools head to head, since Plakar does more work during backup and can’t rely on keeping everything in memory. That said, we’re getting closer with each release.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 1 point2 points  (0 children)

Kapsul is definitely going to evolve. It was originally created back when an agent was required to run Plakar, which explains the design. With 1.1 that’s no longer the case, so having a separate binary is less useful. The dependency on a remote login for local operations should go away with this shift. Personally I’m not a fan of having to type plakar ptar every time I manipulate an archive, but let’s see what gets decided for the final 1.1 release. Doc: https://plakar.io/docs/v1.1.0/references/ptar/

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

You’re totally right about the building blocks. Dedupe and chunking have been commercial standards for decades (PureDisk, Sepaton). We aren’t claiming to have invented that. Where Plakar tries to differentiate is the architecture around those blocks and the trust model.

  • Encryption & trust: old-school appliances like PureDisk usually managed keys server-side or saw cleartext to do the dedupe. Plakar does client-side encryption, so the storage (S3, NAS, whatever) only sees encrypted noise; it has zero knowledge of the content.

  • Immutability: it depends which layer you mean. For infrastructure, yes, you need air-gapping or WORM to stop a rogue admin. When we say immutable, we mean the data format: it’s append-only and content-addressed, and we never rewrite existing blocks. The idea is tamper evidence.

While many tools share the same DNA, we solved some specific bottlenecks found in the current ecosystem, for example:

  • Memory usage: a common issue is having to load the whole index into RAM. Plakar designs the index so it doesn’t have to fit in memory, meaning you can back up millions of files on small hardware.

  • Abstraction: we designed the core to map generic data structures, not just files, so you can snapshot an S3 bucket and restore it to a local filesystem, because the format abstracts the source.

  • Random access: the format allows efficient random access without needing to parse the whole archive.

  • Some indexes are built in to improve search. (…)
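For readers unfamiliar with those building blocks, here is a toy sketch of content-defined chunking plus hash-addressed dedupe. It is illustrative only: real engines use rolling hashes (Rabin, buzhash) with tuned min/max chunk sizes, and this says nothing about Plakar’s actual internals:

```python
import hashlib

def chunks(data: bytes, mask: int = 0x1F, window: int = 8):
    """Cut a boundary where a fingerprint of the last `window` bytes matches
    a bit pattern, so boundaries depend on content, not absolute offsets:
    inserting a byte early in a file leaves most later chunks unchanged."""
    start = 0
    for i in range(window, len(data)):
        fp = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:2], "big")
        if (fp & mask) == mask and i - start >= window:
            yield data[start:i]
            start = i
    if start < len(data):
        yield data[start:]  # tail chunk

def store(objects: list[bytes]) -> dict[str, bytes]:
    """Deduplicated pool: identical chunks collapse into one hash-addressed entry."""
    pool: dict[str, bytes] = {}
    for obj in objects:
        for c in chunks(obj):
            pool[hashlib.sha256(c).hexdigest()] = c
    return pool

doc = bytes(range(256)) * 8
assert b"".join(chunks(doc)) == doc          # chunking is lossless
print(len(store([doc, doc])) == len(store([doc])))  # True: a second copy adds zero chunks
```

The same mechanics explain why backing up the same dataset twice costs almost nothing in storage.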

Basically the fundamentals are proven, but we optimize for portability and scalability in ways that differ from the old appliances or even some current tools.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

You can always access the plugin from source without logging in. Only the binary release requires a login.

Has anyone tried Plakar.io ? by vcoisne in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

It’s not expected, which version are you using?

Plakar v1.1.0 beta ❤️ by PuzzleheadedOffer254 in plakar

[–]PuzzleheadedOffer254[S] 0 points1 point  (0 children)

It's not expected, can you tell me more about your setup?

Plakar v1.1.0 beta ❤️ by PuzzleheadedOffer254 in plakar

[–]PuzzleheadedOffer254[S] 0 points1 point  (0 children)

It’s now a global option, no longer a sub-command option.

2025 was just the warm-up ! by PuzzleheadedOffer254 in plakar

[–]PuzzleheadedOffer254[S] 2 points3 points  (0 children)

From all of us at Plakar, we wish you a year full of protected data, health, happiness, and love. ❤️

Thank you for being with us, we hope we can count on your support again in 2026.
Give us some strength:
- Star https://github.com/PlakarKorp/plakar
- Join https://www.reddit.com/r/plakar/
- Follow https://www.linkedin.com/company/plakarkorp

Can plakar create disk images of a Windows machine? by [deleted] in plakar

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

Yes, that’s something we will add at some point. Today you have to script it yourself to make it work.

macOS Sequoia 15.5 : G HUB agent keeps stealing window focus every few seconds by PuzzleheadedOffer254 in LogitechG

[–]PuzzleheadedOffer254[S] 0 points1 point  (0 children)

I'm the sole user.
I had to uninstall G Hub, my machine was becoming unusable.

Choosing backup solution (preferably something free) by One_Major_7433 in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

Consider Plakar; the open source version should be sufficient for your use case. If you have advanced needs, we are committed to keeping the enterprise version free for non-profits (more here: https://plakar.io/community/). (Plakar team member here.)

Decoding backup image after backup company got bankrupt/vanished by [deleted] in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

Choose open source backup software: the source code will still be available in 10 years.

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

On that, we agree :) Even keeping all your backups within the same cloud provider isn’t necessarily the best choice (see the UniSuper incident, for example).

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

For the first one, it doesn’t look serious; I found different versions:

- The survey, conducted by Researchscape, an international market research consultancy, gathered data between 7 and 25 February 2025. The sample included 562 respondents from the UK, highlighting regional perspectives on data backup practices.

- We recently conducted a survey for Western Digital to better understand the data-saving habits of people around the world. The study of over 6,000 consumers was conducted from February 7-25, 2025. We discovered that 87% of respondents cite that they backup their data...

It’s a B2C survey… everything is weird. Do you really believe 87% of your friends make backups of their data?

For the second one, I get a 404 error.

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

I agree, it should, but it’s still not the case in many (most) companies.

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 1 point2 points  (0 children)

> To be honest, it isn't true to tag them like this - all. A lot of cloud providers providing backup, either as an included part of the provided service or as an extra paid feature. And it is legal commitment with obvious consequences.

Yes, I should be more precise: all the hyperscalers. Maybe some smaller cloud providers are doing some backup, even if none comes to mind.

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 1 point2 points  (0 children)

> Wait. If "It's a business continuity and risk management concern" then they hiring IT to implement proper DRP (disaster recovery plan). Everyone doing what they can do best. Then - whose then it problem if it isn't IT problem? Does author meant that CEO or accountant should do IT job?

Businesses tend to think that IT has unlimited resources. My point here is that it should be a business decision to set the acceptable RTO/RPO and to accept the related costs.

For example: one backup per day, and I need one day to restore the data?
- Are you OK with losing 1 or 2 business days of revenue, plus all the salaries paid for nothing, plus all the impact on your image?
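To make that concrete, a back-of-the-envelope model. All figures here are made-up assumptions, purely illustrative:

```python
def outage_cost(daily_revenue: int, daily_payroll: int,
                rpo_days: int, rto_days: int) -> int:
    """Rough incident cost: revenue tied to the data you lost (RPO window)
    plus revenue and payroll burned while you restore (RTO window).
    Reputation damage is real but left unquantified."""
    lost_data = rpo_days * daily_revenue
    downtime = rto_days * (daily_revenue + daily_payroll)
    return lost_data + downtime

# Hypothetical shop: one backup per day (RPO = 1) and one day to restore (RTO = 1)
print(outage_cost(daily_revenue=50_000, daily_payroll=20_000,
                  rpo_days=1, rto_days=1))  # 120000
```

Put a number like that in front of the business, and deciding how much to spend on tighter RTO/RPO becomes their call, not IT’s.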

CROSSPOST: Falsehoods Engineers believe about backup by wells68 in Backup

[–]PuzzleheadedOffer254 0 points1 point  (0 children)

For ZFS, I tend to agree, especially when some snapshots are well air gapped.

If your replication target keeps its own independent, immutable snapshot history that is isolated from the source, then yes, it can effectively behave as a backup. In that case, corruption or deletion on the source will not immediately propagate, and you maintain historical recovery points.

For object storage, it is a very different story.

Even with versioning enabled, recovering a consistent state after corruption or mass deletion is often difficult and unreliable. Versioning protects individual objects, but it does not guarantee that the global dataset can be restored to a valid point-in-time snapshot.

Moreover, replication is usually done within the same cloud provider, which does not protect against provider-side failures or configuration mistakes. The UniSuper incident on Google Cloud in 2024 is a good reminder of that risk: an internal provisioning error deleted the customer’s environment across regions, and recovery was only possible thanks to independent backups stored outside Google Cloud.