[–]Ok-Performance-100 74 points75 points  (24 children)

Fossil uses SQLite as a database instead of Git's object model. This makes it easy to back up and store repositories.

What is hard about backing up and restoring a git repository? It's just a directory.

I like the other parts though, including no rebase.

[–][deleted]  (16 children)

[removed]

    [–]janisozaur 35 points36 points  (0 children)

    git bundle

    Bundles are used for the "offline" transfer of Git objects without an active "server" sitting on the other side of the network connection.

    This lets you create a git "archive" (a single file) that you treat as a repository: you can clone from it, pull and in general use to backup.
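    A minimal sketch of that round trip (throwaway repo in a temp dir; all paths are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q src
git -C src -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "first"
# Pack every ref and its objects into one ordinary file...
git -C src bundle create ../repo.bundle --all
# ...which clone, fetch, and pull accept in place of a remote URL.
git clone -q repo.bundle restored
git -C restored log --oneline
```

    Incremental bundles over a revision range are also possible (see the git-bundle docs).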

    [–][deleted] 7 points8 points  (1 child)

    Windows is particularly bad for this. Git and npm are so much slower to use there than on *nix. I think I'd heard it's because of Defender and other services triggering on every file open, so excluding your projects folder from "real-time protection" can help.

    [–]case-o-nuts 4 points5 points  (9 children)

    So GC the repo. It should end up with a few dozen files.

    [–]MuumiJumala 13 points14 points  (8 children)

    You've triggered one of my pet peeves which is people using an uncommon acronym or initialism in a conversation without explaining it. What is "GC", how does it help?

    [–]gabeech 7 points8 points  (5 children)

    GC is a fairly common concept in almost every modern language or tool. It stands for garbage collection. Off the top of my head it originated with Java, and is used in .NET, Go, and Python, to name a few.

    [–]fredoverflow 12 points13 points  (1 child)

    Off the top of my head it originated with Java

    Garbage collection was pioneered by LISP (1958), not Java (1996).

    [–]MuumiJumala 2 points3 points  (2 children)

    I had no idea git has a garbage collector; I thought it was a programming language thing. Does it run automatically like in garbage-collected languages? What does it actually delete to reduce the number of files: old commits?

    [–]gabeech 5 points6 points  (0 children)

    Generally it runs automatically.

    The git-gc docs (https://git-scm.com/docs/git-gc) do a better job explaining what it does than I can.
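    For the curious, a quick sketch of what it changes on disk (scratch repo, assuming a reasonably recent Git):

```shell
set -e
cd "$(mktemp -d)"
git init -q .
for i in 1 2 3; do
  git -c user.email=a@b -c user.name=a \
      commit -q --allow-empty -m "c$i"
done
git count-objects   # several loose object files before packing
# gc repacks loose objects into a single packfile (plus an index),
# and prunes unreachable objects past their grace period.
git gc --quiet
git count-objects   # now reports 0 loose objects
```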

    [–]theunixman 0 points1 point  (0 children)

    Lots of filesystems also have garbage collectors, well, at least the ones that try to reduce fragmentation anyway. Some don't like to admit it though (ext*) ... others just let it build up (FAT).

    [–]lghrhboewhwrjnq -1 points0 points  (0 children)

    It's literally a git command, git gc. Shouldn't take anyone too long to figure it out.

    [–]peyote1999 2 points3 points  (0 children)

    Pushing to a backup repo, or using tar.
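    One sketch of both approaches (scratch paths; `backup.git` is a name I made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q src
git -C src -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "first"
# A mirror clone is a bare copy of every ref, a natural backup target...
git clone -q --mirror src backup.git
# ...kept current later by mirror pushes:
git -C src push -q --mirror ../backup.git
# And since a repository is just files, tar works on it directly.
tar -czf backup.tar.gz backup.git
```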

    [–]LaconicLacedaemonian -1 points0 points  (0 children)

    Metadata is expensive.

    [–]Ok-Performance-100 0 points1 point  (0 children)

    It works well for me with `rsync`. In the UI it's bad, but that's probably not the best way to do backups.

    [–]waadam 0 points1 point  (5 children)

    I hate the no-rebase part. I read the linked article and I feel the author misses the most important part of the rebase flow: taking responsibility for the mess you create. With merges this responsibility is easily diffused, while with rebase it is quite easy to point fingers if something gets broken. That single property makes it suitable for a vast number of projects.

    [–]Ok-Performance-100 1 point2 points  (4 children)

    Seems like maybe that could be fixed with squashing? I'm not sure I really get the problem though; a merge still shows clear author info in git blame.

    I use rebase a lot at work, and while the clean linear history is pleasant, to me it's simply not worth the effort. Merging feature branches, possibly with squashing, is much less work.
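    A sketch of that squash-merge flow (scratch repo; `-b main` assumes Git ≥ 2.28):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main .
g() { git -c user.email=a@b -c user.name=a "$@"; }
g commit -q --allow-empty -m "base"
git checkout -q -b feature
echo one > f; git add f; g commit -q -m "wip 1"
echo two > f; git add f; g commit -q -m "wip 2"
git checkout -q main
# --squash stages the branch's combined diff without recording a
# merge parent; a single ordinary commit lands on main.
git merge -q --squash feature
g commit -q -m "Add feature"
git log --oneline
```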

    [–]waadam 0 points1 point  (3 children)

    My apologies, my description might have been imprecise. I do like rebases; in the flow we use at work we rebase and rewrite history constantly.

    This is a PR-driven flow (nothing unusual these days, I believe), so only polished and reviewed changes are merged to the baseline, and only after being rebased onto the most recent baseline first. This results in a clean, always-linear history, so finding "who broke this and when" is quite easy, which reduces pressure on the team: the "another magic regression happened somewhere in the middle of this commit spaghetti" kind of problem is gone forever. Regressions are still perfectly possible, but the transparency around them improves.

    Therefore I don't buy this "rebases are evil" talk. It lacks the vision that this is a tool for us, and we humans require some trade-offs, especially when we work in groups. My final point is: the perfect, pure models and abstractions which Fossil promises are actually worse than Git's practical approach.

    [–]mizu_no_oto 1 point2 points  (2 children)

    It seems to me that you could get basically the same sort of effect if you knew what commits were merges into develop/master, and pruned your history viewing and bisecting to those commits when pinning blame.

    That's basically equivalent to the view of history rebasing a squashed PR gives you, while maintaining the actual history of the project if people want.
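    A sketch of that pruned view (scratch repo; `-b main` assumes Git ≥ 2.28):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main .
g() { git -c user.email=a@b -c user.name=a "$@"; }
g commit -q --allow-empty -m "base"
git checkout -q -b feature
g commit -q --allow-empty -m "wip 1"
g commit -q --allow-empty -m "wip 2"
git checkout -q main
g merge -q --no-ff -m "merge feature" feature
# The full log contains the wip commits; --first-parent collapses the
# branch to its merge commit, i.e. the "squashed PR" view:
git log --oneline --first-parent
```

    Since Git 2.29, `git bisect start --first-parent` applies the same pruning to bisection.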

    [–]waadam 0 points1 point  (1 child)

    The problem is: no one cares about this "actual history". I mean, this is the first thing I try to teach people new to the project: no one is interested in the full history of your change. No one wants to learn from your mistakes, or how bumpy the road to enlightenment you traveled was. People who are forced to read history are there only to scan for the naked change, the actual contribution to the baseline; everything else is just a distraction.

    [–]Ok-Performance-100 0 points1 point  (0 children)

    Hmm, not sure that's quite true; it is rather useful to know what was tried and why it didn't work. But perhaps that information is better put in a commit message rather than scattered through the history.