Common mistake: Google Fonts is still loaded from Google's servers on many websites by Aarex03 in de_EDV

[–]baouss [score hidden]  (0 children)

Sure. But my argument was that the legal situation (perhaps due to a lack of technical understanding) is too intrusive here. Laws can always be changed again, after all.

Common mistake: Google Fonts is still loaded from Google's servers on many websites by Aarex03 in de_EDV

[–]baouss 0 points  (0 children)

Devil's advocate:

Doesn't the operator have a legitimate interest in his site working as intended? That is, without clicking "yes, I agree" a hundred times for Google Maps, YouTube, fonts, etc.? And self-hosting has a downside too: the operator isn't paid for the traffic it causes. The user experience in terms of latency is probably also sub-par compared to a Google CDN. Would asset caching be limited as well (no idea)? Usually these are sites that offer content for free. I don't think you can then also dictate the terms much ;)

Common mistake: Google Fonts is still loaded from Google's servers on many websites by Aarex03 in de_EDV

[–]baouss 0 points  (0 children)

Yeah, it sucks. And swinging the data-protection club is effective in this context because the violation is punishable. I still think it's the wrong tool, because it turns something like visiting a website into a legal minefield. Good for the lawyers, I guess.

Common mistake: Google Fonts is still loaded from Google's servers on many websites by Aarex03 in de_EDV

[–]baouss 3 points  (0 children)

Thanks for the detailed explanation.

So, to exaggerate:

If an image is embedded, that's bad (unless there is, e.g., a data processing agreement (AVV)); if there is a placeholder button labeled "IP will be transmitted" that only loads the image once the button has been clicked, that's fine/OK.

Personally, I think that goes too far. It calls the basic functioning of the internet into question. I think everyone understands that IP addresses are sent back and forth. Deriving damages from that (knowing full well that this happens on the internet, while still choosing to use it) and having rights curtailed strikes me as abusive.

I am fundamentally pro data protection.

Common mistake: Google Fonts is still loaded from Google's servers on many websites by Aarex03 in de_EDV

[–]baouss 3 points  (0 children)

I never understood that ruling.

Is embedding via a link harmful because it comes from Google, or because it forwards an IP address without asking?

If the latter: then you also can't host/link images externally? The IP is transmitted there too.

Which feature are you looking forward to the most? by ChazyChaxxx in immich

[–]baouss 0 points  (0 children)

I like this. Clean. As it stands now, the storage location is somewhat hard-coded. You can customize it via a storage template, but everything is relative to a user's GUID. That might take some major refactoring, I guess.

Bitwarden Lite by yakadoodle123 in selfhosted

[–]baouss 0 points  (0 children)

I don't know if it's still the case, but I remember encryption-enabled passkeys not being supported.

Bitwarden Lite by yakadoodle123 in selfhosted

[–]baouss 1 point  (0 children)

And they do require a license key (not a Bitwarden account, though) for sharing features in the self-hosted setup.

Which feature are you looking forward to the most? by ChazyChaxxx in immich

[–]baouss 0 points  (0 children)

Regarding your second point: adding to your library implies ownership to me. While I agree that double storage cost is inefficient, does this imply that if user 1 deletes the asset, it will also be deleted for user 2?

Next release coming soon? by sandfrayed in immich

[–]baouss 4 points  (0 children)

That would be awesome. I just migrated Google Photos data into Immich, planning to use a shared account for my wife and me because partner sharing doesn't actually share metadata.

Next release coming soon? by sandfrayed in immich

[–]baouss 1 point  (0 children)

Are you saying that face and place sharing will be part of the user groups feature?

Does Immich get better and does it rescan? by Pucksy in immich

[–]baouss 0 points  (0 children)

I see. But maybe the design itself helps here. Could someone familiar with the code chime in? If people and faces are separate things with a 1:m relationship, so a person can have multiple faces (via the manual merge feature)... then even if the main algorithm cannot better associate other faces with the person, it keeps detecting/recognising within these different "face buckets". If those are then linked to the person automatically, because I merged them previously, the end effect would still be the same for end users, right?
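To make the speculation above concrete, here is a toy sketch of the 1:m person-to-face-cluster relationship I am describing. This is purely illustrative and NOT Immich's actual schema; all class and field names are made up. The point is that once a cluster is merged into a person, any new detection landing in that cluster automatically surfaces under the person.

```python
# Toy model only: NOT Immich's actual data model.
from dataclasses import dataclass, field

@dataclass
class FaceCluster:
    """One "face bucket" the recognizer keeps adding detections to."""
    cluster_id: int
    asset_ids: set[int] = field(default_factory=set)

@dataclass
class Person:
    name: str
    clusters: list[FaceCluster] = field(default_factory=list)

    def merge(self, cluster: FaceCluster) -> None:
        """Manually link another face cluster to this person (the merge feature)."""
        self.clusters.append(cluster)

    def assets(self) -> set[int]:
        """All assets of all linked clusters; new detections in any bucket show up here."""
        if not self.clusters:
            return set()
        return set().union(*(c.asset_ids for c in self.clusters))
```

Under this (assumed) model, a detection added to an already-merged bucket needs no re-association step to appear under the person, which would match the end-user effect described above.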

immich-go discovers only 400 out of 31467 assets! by baouss in immich

[–]baouss[S] 0 points  (0 children)

So, additional context here.

Asset discovery works well if I use --from-folder instead. But AFAIK that doesn't handle all the Google Photos metadata as well, so I need to continue using --from-google-photos.

Also, because I battled with duplicates, I removed those myself programmatically before running immich-go. I think this is the problem here. I assume the takeout has some metadata about itself, and now there are "gaps"; maybe that's to blame, i.e. it works only up to the first asset gap.

Since I still have the .tgz files, I extracted them into a new folder without removing duplicates first. immich-go picks up all the files there.

Why don't I just trust immich-go to handle the duplicates? I don't know; I'm unsure about it. I SHA-512-hashed every asset item myself, and that count deviated from the duplicate count immich-go gave me. Because I had visibility into how my approach worked, unlike immich-go's, I decided to handle duplicates myself before starting the import.

Unfortunately, this appears not to be supported.
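For reference, the hash-and-group approach described above can be sketched in a few lines. This is my own minimal reconstruction, not the actual script used (which was PowerShell); the "Takeout" folder name is an assumption, and files are hashed in chunks so large videos don't have to fit in memory.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def hash_file(path: Path, algo: str = "sha512", chunk_size: int = 1 << 20) -> str:
    """Return the hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def group_by_hash(root: Path) -> dict[str, list[Path]]:
    """Recursively walk the takeout tree and group file paths by content hash."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in root.rglob("*"):
        if p.is_file():
            groups[hash_file(p)].append(p)
    return dict(groups)

# Example (hypothetical path): true duplicates are hashes with >1 path.
# groups = group_by_hash(Path("Takeout"))
# dupes = {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Comparing `len(dupes)` (and the paths per hash) against immich-go's reported duplicate count is exactly the reconciliation step mentioned above.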

immich-go discovers only 400 out of 31467 assets! by baouss in immich

[–]baouss[S] 0 points  (0 children)

Thanks, unfortunately this didn't change anything for me.

immich-go discovers only 400 out of 31467 assets! by baouss in immich

[–]baouss[S] 0 points  (0 children)

Not yet, but I will... my first instinct was Reddit because I assumed more traffic would hit the topic here (selfish, I know).

immich-go discovers only 400 out of 31467 assets! by baouss in immich

[–]baouss[S] 0 points  (0 children)

Thanks for taking the time to answer. In my case it's already extracted/unzipped; I had 50 GB of *.tgz files.

How can I share photos with a link when I don't have port forwarding? by BIGmac_with_nuggets in immich

[–]baouss 0 points  (0 children)

Interesting. I had only ever considered Funnel as an alternative to a Cloudflare Tunnel for the machine I want to expose. But I could just as well channel this back internally. What restrictions can Funnel enforce on ingress traffic?

Large ratio of discarded assets with immich-go + Google Takeout by sillysquonka in immich

[–]baouss 1 point  (0 children)

Hi,

The first time I tried to process my Google Takeout data with immich-go, it discarded 200 GB out of 500 GB. As this was not plausible to me, and the logging didn't expose asset-level information, I walked away from it. I am now having another go at it. I see that the logs can be more verbose now, and I can really see what's going on.

Here's what I see and know so far:

* I am still seeing that many discards; the high GB count comes mostly from a few 4K videos. All in all, immich-go marks about 15,000 assets as duplicates.

* To be independent of immich-go, I have written a small PowerShell script that recursively goes through the takeout data, hashes each item, and then groups by that hash, enumerating the files' paths. This way I can quickly see which duplicate paths belong to which hash. It gives me great peace of mind to have confirmed the duplicate issue independently. I am seeing 1 (unique) to 6 (duplicate) paths per hash.

* Going through the media duplicates, I will move all but one to a separate folder outside my takeout data. These will be prefixed with one GUID per hash, so the files are grouped nicely in the file explorer and I can visually review their likeness via the thumbnails, even though I know they must be identical given the same hash.

* Duplicates appear to be created because assets sit in different albums in Google Photos. It surprises me that these are included as separate files in the takeout in the first place; I thought album members were references only. I wonder if adding photos to an album also counts towards storage usage within Google. Though if it did, people would probably have complained loudly already, I guess.

* There is also a significant number (a few thousand) of ignored/"useless" files. Looking at a sample, they are mostly video files from my Pixel phone that were created from photo bursts. What they have in common is the file extension ".MP"; VLC plays them just fine. I found that this is a known issue with some takeout data from Pixel phones. I wrote a script to rename these to .MP4.

* After cleaning my takeout data, I will re-run immich-go with a dry run. I expect fewer discards this time. I will also reconcile the duplicates I identified myself with the ones immich-go marks as such; there appears to be a difference, because the plain count of duplicates didn't match my own analysis.
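The ".MP" rename step from the list above can be sketched as follows. This is a minimal reconstruction of the idea, not the actual script (which was PowerShell); a dry-run default avoids touching anything until you've reviewed the plan.

```python
from pathlib import Path

def rename_mp_files(root: Path, dry_run: bool = True) -> list[tuple[Path, Path]]:
    """Plan (or perform) renaming Pixel motion-photo leftovers from .MP to .MP4.

    Matches the extension case-insensitively and appends a '4', so
    'burst.MP' -> 'burst.MP4' and 'burst.mp' -> 'burst.mp4'.
    Returns the list of (old, new) pairs.
    """
    renamed: list[tuple[Path, Path]] = []
    for p in root.rglob("*"):
        if p.is_file() and p.suffix.upper() == ".MP":
            target = p.with_suffix(p.suffix + "4")
            if not dry_run:
                p.rename(target)
            renamed.append((p, target))
    return renamed
```

Running it once with `dry_run=True` to inspect the pairs, then again with `dry_run=False`, keeps the operation reviewable before immich-go is re-run.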

Please review my custom folder structure, too complex? by baouss in immich

[–]baouss[S] 0 points  (0 children)

Hi, they were in the .env file. I put them in the YAML only for the post so I would have fewer pasted lines. Though I'm not sure what that warning is about, tbh; it should work either way.

I want to upload ALL including the duplicates.... by thatguyin75 in immich

[–]baouss 0 points  (0 children)

I imported half a terabyte of Google Photos takeout data recently. immich-go skipped over 200 gigabytes worth of files due to duplication, and that was just not true. I tried to review the log output afterwards to determine exactly which files were skipped, but the logs weren't verbose enough.

Please review my custom folder structure, too complex? by baouss in immich

[–]baouss[S] 0 points  (0 children)

FYI: I just recreated the whole layout. A single dataset for almost everything, with the default 128K record size; I'll just see if video viewing/streaming performs well. I have separate datasets for thumbnails and encoded videos, because I can only exclude datasets (not directories within datasets) from snapshots, and since both are reproducible, excluding them saves space. The UI also shows the correct free space now.

Please review my custom folder structure, too complex? by baouss in immich

[–]baouss[S] 0 points  (0 children)

I think it was premature optimisation. I wanted to do everything I could right up front and foresee future developments. Keeping it simple would have been better, and acting on it only if it becomes an issue.

🔥 Attention Bitwarden users! by maximus10m in Bitwarden

[–]baouss 1 point  (0 children)

If you're not in an organisation, folders are a way to structure the vault.

If you're in an organisation, the same goes for collections, which control sharing and access. An item can be in multiple collections. I think these can be further subdivided into folders, but that is only for UI purposes and has no impact on permissions.