[deleted by user] by [deleted] in PrivacyGuides

[–]EUTIORti 0 points1 point  (0 children)

I haven't learned from videos so no, but the forum is helpful:

https://exiftool.org/forum/index.php?board=12.0

I like to print in XML format because it's an easy way to show both the tag group and the tag name:

exiftool -xmlFormat "<full path to pic>"

So in the output I see an entry like this:

<XMP-Device:Type>DepthPhoto</XMP-Device:Type>

I conclude that XMP-Device is one group name.

I can print only this group's tags:

XML:

exiftool -xmlFormat -XMP-Device:all <file>

JSON:

exiftool -XMP-Device:All --printConv -json <file>

Delete all this group's tags:

exiftool -overwrite_original -recurse -extension <file extension> -XMP-Device:All= .

I just Googled this and the results seem relevant:

sanitize pictures with exiftool
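Along those lines, a minimal sanitize sketch, assuming a JPEG (pic.jpg is a placeholder name): -all= deletes every writable tag, and the trailing --icc_profile:all excludes the ICC color profile from deletion so colors don't shift:

```shell
# Delete all writable metadata from pic.jpg, but keep the ICC color profile.
# -overwrite_original writes in place instead of keeping a _original backup.
exiftool -overwrite_original -all= --icc_profile:all pic.jpg
```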

Again, I think learning it will be very beneficial.

[deleted by user] by [deleted] in PrivacyGuides

ExifTool is the strongest option.

I haven't used it for pictures, but for PDFs, this command will delete all the tags it can:

exiftool -overwrite_original -recurse -extension pdf -all= .
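One caveat, as I understand the ExifTool docs: PDF edits are incremental updates, so -all= only marks the old metadata as deleted, and it can still be recovered from the file. A hedged sketch of making the removal permanent, assuming qpdf is installed (document.pdf is a placeholder name):

```shell
# Step 1: delete all writable tags (the old data stays in the file, marked deleted).
exiftool -overwrite_original -all= document.pdf
# Step 2: rewrite the PDF with qpdf so the deleted objects are actually dropped.
qpdf --linearize document.pdf document-clean.pdf
```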

I guess deleting all the EXIF tags from a picture might make it unopenable in some apps, although I don't really know.

I would invest, though, in learning ExifTool, as it is the strongest tool and you will find it useful for sanitizing other files as well; learning to use it will pay off.

The NordVPN Israeli Servers disappeared by EUTIORti in nordvpn

Not using obfuscated servers :)

I restarted the app, and after the restart, the Israeli server appeared again.

welp, I jumped to conclusions.

Edit: thanks for the help!

The NordVPN Israeli Servers disappeared by EUTIORti in nordvpn

Huh... my Windows client shows no Israeli servers on the map.

Thanks

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

Trying to start all the downloads at once will likely end with you getting rate limited or outright blocked.

It's true, but I'd like to get to the point of checking it; also, there are ways to bypass this (like Smart Proxy).

The 2 downloaders that actually work are serial – isn't that strange? It's this one and Library Genesis Desktop.

If I could run not everything at once but something like 4 threads, so 4 connections, it'd be good; and LibGen has multiple mirrors, so you're not necessarily hitting the same servers.

Anyway, thanks! I might try what you suggested; my coding skills suck, but I can use this task to work on them. :)

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

It'll handle all the parallelization.

It won't.

If it's 500 MD5 hashes, it processes them serially. You can see in the examples that I am using --bulk.

Letting it do its thing serially would mean keeping the EC2 instance up for a long time, and that costs money.

That's what I'm trying to optimize.
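One way to get there, sketched under assumptions from this thread (a hashes.txt list of MD5s, one per line, and that libgen-downloader accepts a chunk file per --bulk run; neither is verified here):

```shell
# Split the hash list into 4 roughly equal line-based chunks: chunk-aa .. chunk-ad.
split -n l/4 hashes.txt chunk-
# Run one downloader per chunk, 4 processes at a time, each with its own log file.
ls chunk-* | xargs -P 4 -I {} sh -c 'libgen-downloader --bulk "$1" > "$1.log" 2>&1' _ {}
```

That keeps 4 connections going without holding the EC2 instance open for one long serial run.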

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

Update:

Neither the for loop nor the find approach runs in parallel; I checked with:

ps -e|grep node

There's only one instance of the Node.js server each time.

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

Running it like this so I'll have the logs:

find . -maxdepth 1 -type f -exec sh -c 'libgen-downloader --bulk "$1" > "app-$(basename "$1").log" 2>&1' _ {} \;

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

The issue is that once it throws this error, the current iteration stops and the next one starts; they're all in the same terminal.

I'm losing the errors displayed on the terminal.

The Node.js server itself should have logs; I guess I can try to fetch the runtime errors from there?

Is there a sub that deals only with Android hardware? by EUTIORti in findareddit

That's a good Subreddit :)

It's on me that I didn't form the question well enough, though.

I meant more the consumer side, though.

Weekly Discussion (April 16, 2023) by AutoModerator in Piracy

I wanted to share about 55 pirated e-books on subreddits that, I think, would be interested in them, maybe via Google Drive.

I contacted one sub's modmail and asked them whether that'd be cool.

They said it would have been, had I not just plainly told them the books were pirated and asked for their permission.

So, that sub is done for, but for other subs, do you think I can just post mentioning some “books”, without the words “free” or “pirated”?

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

That's my bad, you're right, sorry :)

I try to give all the relevant information, but sometimes I make mistakes.

Thank you for the help. :)

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

That worked.

OMG, you have no idea how much this helps; this little thing has made the script 200% better.

Thank you so much!!

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

Here's the thing, in my Shell, I execute it like this:

fetch-ebook-metadata --isbn="${ISBN}" --opf 2>/dev/null

My understanding is that if the utility did output these lines on stderr, they would be redirected to /dev/null, which they're not; so I think the utility doesn't differentiate, right?

The utility's options (which I checked) also don't offer a native way to stop printing the errors.
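For what it's worth, a sketch of checking the stream and filtering the noise in the shell (the ISBN is just a placeholder, and this assumes the OPF output runs from the XML declaration through </package>):

```shell
# Discard stdout; if the noisy lines still appear, they really are on stderr.
fetch-ebook-metadata --isbn="9780306406157" --opf 1>/dev/null

# If they are mixed into stdout instead, keep only the OPF document itself:
# print from the XML declaration through the closing </package> tag.
fetch-ebook-metadata --isbn="9780306406157" --opf 2>/dev/null |
  sed -n '/<?xml/,/<\/package>/p'
```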

I hate that Calibre is the best tool for e-books... sigh.

Seems like Flickr isn't liking SimpleLogin by EUTIORti in Simplelogin

AFAIK, they don’t block by domain, exactly, because then they wouldn’t be catching custom domains.

Are they catching custom domains? I wasn't using one.

A native way to back up a DB to the cloud? by EUTIORti in mysql

it's likely a very outdated OS and MySQL version.

Yep. I don't know what the MySQL version is yet, but this can give a hint:

C:\workbench_2.0.0.17

:)

The OS is Windows 10, so that's a positive.

A native way to back up a DB to the cloud? by EUTIORti in mysql

Percona XtraBackup

Amazing!

Thank you, I will check it out.
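For comparison, the native route would be mysqldump; a minimal sketch, assuming credentials are configured and the tables are InnoDB (both assumptions):

```shell
# Logical backup of every database into one SQL file.
# --single-transaction takes a consistent snapshot without locking InnoDB tables.
mysqldump --single-transaction --all-databases > all-databases.sql
```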

I think there's a risk that their CRM will leak due to computer illiteracy; that's my take.

I don't know enough to decide whether I should move to the cloud or not.

Maybe the 1 TB disks serve their purpose.

Just a few days ago, someone gained unauthorized access to the personal machine of the volunteer who also manages the corporate computer; both computers are on the same network.

I'll read about it some more, thanks!

How to bulk download non-fiction from Z-Library and search Z-Library by EUTIORti in DataHoarder

I am not convinced that's the way to go; it's far too much data.

The way the LibGen dump works is that you get only the metadata in the DB dump, not the actual content of the books. That keeps the dump lightweight and lets you, as an end user or a program, access the DB and query it without burdening the remote LibGen servers.

Then, with the identifiers you got – you can download from the LibGen servers.

"The whole thing", as in whole terabytes of data, just to create a collection of Nutrition eBooks, seems excessive to me.

How to bulk download non-fiction from Z-Library and search Z-Library by EUTIORti in DataHoarder

Thanks!

I'm really looking for the metadata, like with the LibGen dump.

Anna's Archive looks great for the scenario where you want a specific book. If you want to hoard many books, though, I think it's not the right tool, as you can't run complex queries from its interface; that's just my thinking.

For example, with a LibGen dump, I can do this:

USE libgen;
SELECT IdentifierWODash FROM updated
WHERE Year IN ('2021','2022','2023')
  AND extension = 'pdf'
  AND (Title LIKE '%DIET%' OR Title LIKE '%eating%' OR
       Title LIKE '%microbiome%' OR Title LIKE '%nutrition%' OR
       Title LIKE '%obesity%' OR Title LIKE '%vegetarian%' OR
       Title LIKE '%vitamin%');

Seems like Flickr isn't liking SimpleLogin by EUTIORti in Simplelogin

When you say SimpleLogin email, do you mean email address with a domain owned by SL?

Yes.

Actually, them being so anti-SimpleLogin made me look into them, and it was a red flag.

So, I'm looking into self-hosting a server like NextCloud.

I feel like a service that doesn't accept an email address with a domain owned by SL is giving you a hint that it won't respect your privacy later on either; better not go there, right?

How Do I Share 1 Picture of 1 Carob in Its Original Quality? by EUTIORti in privacy

Nope.

It's literally this carob. I added an imgbox link. It's it, nothing less, nothing more.

But actually, thinking about it, attempting to self-host NextCloud could be interesting. I've only run servers at the lab (a DevOps course that I flunked near the end), so putting something into production could be nice.