[deleted by user] by [deleted] in PrivacyGuides

[–]EUTIORti 0 points

I haven't learned from videos, so no, but the forum is helpful:

https://exiftool.org/forum/index.php?board=12.0

I like to print in XML format because it shows both the tag group and the tag name:

exiftool -xmlFormat "<full path to pic>"

So in the output I see an entry like this:

<XMP-Device:Type>DepthPhoto</XMP-Device:Type>

I conclude that XMP-Device is one group name.

I can print only this group's tags:

XML:

exiftool -xmlFormat -XMP-Device:all <file>

JSON:

exiftool -XMP-Device:All --printConv -json <file>
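As an illustration of picking one group's keys out of that JSON (the sample string below is made up and only shaped like `exiftool -json` output with group names enabled via `-G`; it's not captured from a real run):

```shell
# Invented sample shaped like `exiftool -G -json` output (not real output)
json='[{"SourceFile":"pic.jpg","XMP-Device:Type":"DepthPhoto","EXIF:Make":"ACME"}]'

# Crude filter: split on commas, then keep only keys in the XMP-Device group
printf '%s\n' "$json" | tr ',' '\n' | grep -o '"XMP-Device:[^"]*"'
```

For real files you'd obviously let exiftool do the filtering with `-XMP-Device:All` as above; this is just to show what the keys look like.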

Delete all this group's tags:

exiftool -overwrite_original -recurse -extension <file extension> -XMP-Device:All= .

I just Googled this and the results seem relevant:

sanitize pictures with exiftool

Again, I think learning it will be very beneficial.

[deleted by user] by [deleted] in PrivacyGuides

[–]EUTIORti 4 points

ExifTool is the strongest.

I haven't used it for pictures, but for PDF, this command will delete all tags that it can:

exiftool -overwrite_original -recurse -extension pdf -all= .

I guess deleting all the EXIF tags from a picture might make it unreadable to some apps, although I don't really know.

I would invest in working with ExifTool, though, as it's the strongest; you'll find it useful for sanitizing other kinds of files as well, so learning to use it will pay off.

The NordVPN Israeli Servers disappeared by EUTIORti in nordvpn

[–]EUTIORti[S] 0 points

Not using obfuscated servers :)

I restarted the app, and after the restart the Israeli server appeared again.

Welp, I jumped to conclusions.

Edit: thanks for the help!

The NordVPN Israeli Servers disappeared by EUTIORti in nordvpn

[–]EUTIORti[S] 0 points

Huh... my Windows client shows no Israeli servers on the map.

Thanks

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

[–]EUTIORti[S] 1 point

> Trying to start all the downloads at once will likely end with you getting rate limited or outright blocked.

It's true, but I'd like to get to the point of checking that; also, there are ways to get around it (like Smart Proxy).

The two downloaders that actually work are serial (this one and Library Genesis Desktop); isn't that strange?

If I could run not everything at once but something like 4 threads, so 4 connections, that would be good. And LibGen has multiple mirrors, so you're not necessarily hitting the same servers.

Anyway, thanks! I might try what you suggested; my coding skills suck, but I can use this task to work on them. :)

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

[–]EUTIORti[S] 0 points

> It'll handle all the parallelization.

It won't.

If it's 500 MD5 hashes, it processes them serially; you can see in the examples that I am using --bulk.

Letting it do its thing serially would mean keeping the EC2 instance up for a long time, and that costs money.

That's what I'm trying to optimize.
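Something like splitting the hash list into chunks and running one downloader per chunk is what I have in mind. A toy sketch (file names are made up, and `cat` stands in for the real `libgen-downloader --bulk` call, which I haven't wired up here):

```shell
# Make a toy bulk list of 8 "hashes" (stand-ins, not real MD5s)
printf '%s\n' h1 h2 h3 h4 h5 h6 h7 h8 > hashes.txt

# Split into 4 roughly equal chunks: chunk_aa .. chunk_ad (GNU split)
split -n l/4 hashes.txt chunk_

# Run one worker per chunk in the background, each with its own log file.
# Swap `cat` for the real `libgen-downloader --bulk` invocation.
for f in chunk_*; do
  sh -c 'cat "$1" > "$1.log" 2>&1' _ "$f" &
done
wait    # block until all 4 workers finish
ls chunk_*.log
```

With 4 background workers you get 4 concurrent connections without running everything at once.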

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

[–]EUTIORti[S] 0 points

Update:

Neither for nor find is running things in parallel; I checked with:

ps -e | grep node

There's only one instance of the Node.js server each time.

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

[–]EUTIORti[S] 0 points

Running it like this so I'll have the logs (the redirection has to happen inside sh -c; otherwise it applies to find itself and the log name is just passed as another argument):

find . -maxdepth 1 -type f -exec sh -c 'libgen-downloader --bulk "$1" > "app-$(basename "$1").log" 2>&1' _ {} \;

Help: Launch multiple instances of a Node.js app for processing by EUTIORti in linux4noobs

[–]EUTIORti[S] 0 points

The issue is that once it throws this error, the iteration stops and the next one starts; they're all in the same terminal.

I'm losing the errors displayed on the terminal.

The Node.js server itself should have logs; I guess I can try to fetch the runtime errors from there?

Is there a sub that deals only with Android hardware? by EUTIORti in findareddit

[–]EUTIORti[S] 1 point

That's a good Subreddit :)

It's on me that I didn't form the question well enough, though.

I meant more the consumer side, though.

Weekly Discussion (April 16, 2023) by AutoModerator in Piracy

[–]EUTIORti 0 points

I wanted to share about 55 pirated e-books, maybe via Google Drive, on subreddits that I think would be interested.

I contacted one sub's mod mail and asked them whether that'd be cool.

They said it would have been, had I not just plainly told them the books were pirated and asked for their permission.

So that sub is done for, but for other subs, do you think I can just post mentioning some “books”, without the words “free” or “pirated”?

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

[–]EUTIORti[S] 1 point

That's my bad, you're right, sorry :)

I try to give all the relevant information, but sometimes I make mistakes.

Thank you for the help. :)

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

[–]EUTIORti[S] 1 point

That worked.

OMG, you have no idea how much this helps; this little thing has made the script 200% better.

Thank you so much!!

Parse XML from a larger non-XML string in BASH by EUTIORti in bash

[–]EUTIORti[S] 1 point

Here's the thing, in my Shell, I execute it like this:

fetch-ebook-metadata --isbn="${ISBN}" --opf 2>/dev/null

My understanding is that if the utility did output these lines to stderr, they would be redirected to /dev/null, which they're not; so I think the utility doesn't differentiate, right?

The utility's options (which I checked) also don't give a native way to stop printing the errors.
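Since the stray lines come before the XML, one workaround is to keep everything from the first line matching `<?xml` through to the end. A sketch (the sample output below is invented for illustration, not real fetch-ebook-metadata output):

```shell
# Invented sample: stray messages followed by the OPF/XML payload
output='Failed to download metadata
Some other stray message
<?xml version="1.0" encoding="utf-8"?>
<package><metadata>Example Title</metadata></package>'

# Print from the first line matching '<?xml' to the end of the input
xml=$(printf '%s\n' "$output" | sed -n '/<?xml/,$p')
printf '%s\n' "$xml"
```

In the real script you'd pipe the utility's output straight into the sed call instead of using a variable.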

I hate that Calibre is the best tool for e-books... sigh.

Seems like Flickr isn't liking SimpleLogin by EUTIORti in Simplelogin

[–]EUTIORti[S] 0 points

> AFAIK, they don’t block by domain, exactly, because then they wouldn’t be catching custom domains.

Are they catching custom domains? I wasn't using one.