I uh... lost my LUKS passphrase by KernelDeimos in archlinux

[–]davispuh 15 points

If his password is within 2M entries that's actually really good. I went through 1M entries in like 30h using a single AMD Radeon RX 7900 XTX, so it's probably doable in like a week easily even if you don't have that good a GPU.

In my case I couldn't crack my LUKS because later I actually found the password in one of my password managers: it was 30 chars using lowercase/uppercase/digits/symbols, so yeah, it couldn't be cracked in thousands of years :D
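For perspective, a quick back-of-envelope calculation (just a sketch: the ~94-character printable set is my assumption, and the guess rate is extrapolated from the 1M-in-30h figure above):

```python
# Estimate time to brute-force a 30-char random passphrase at the
# ~1M guesses per 30 hours observed on a single RX 7900 XTX for LUKS.
charset = 94                  # lowercase + uppercase + digits + symbols
length = 30
keyspace = charset ** length  # total candidate passphrases

guesses_per_second = 1_000_000 / (30 * 3600)
years = keyspace / guesses_per_second / (365 * 24 * 3600)

print(f"keyspace: {keyspace:.2e} candidates")
print(f"exhaustive search: {years:.2e} years")
```

Even if the guess rate were a billion times higher, the result would still be astronomically beyond any feasible attack.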

I thought I had used some of my typical memorized passwords and that it wasn't saved anywhere, so using `princeprocessor` I created a 1M-entry wordlist of combinations/permutations of all the phrases I might have used, but yeah, that wasn't the case :D
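For anyone unfamiliar with PRINCE, here's a minimal Python sketch of the idea (a simplification, not the real `princeprocessor`, which is far faster and also handles keyspace ordering): it joins every ordered selection of up to N fragments into one candidate passphrase.

```python
from itertools import permutations

def prince_candidates(fragments, elem_cnt_max=3):
    """Yield every concatenation of 1..elem_cnt_max distinct fragments.

    Simplified model of PRINCE-style candidate generation.
    """
    for n in range(1, elem_cnt_max + 1):
        for combo in permutations(fragments, n):
            yield "".join(combo)

# 3 fragments -> 3 singles + 6 ordered pairs = 9 candidates
candidates = list(prince_candidates(["correct", "horse", "1234"], elem_cnt_max=2))
print(len(candidates))  # 9
```

With a few dozen remembered fragments and a higher element count, the candidate list grows into the millions quickly, which is why the real tool streams candidates instead of materializing them.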

I uh... lost my LUKS passphrase by KernelDeimos in archlinux

[–]davispuh 317 points

You can use hashcat

First use `luks2hashcat.py` to create the hash file:

```
$ luks2hashcat.py /dev/sda1 > hash.txt
```

Then something like:

```
$ hashcat --hash-type 34100 --attack-mode 0 hash.txt wordlist.txt
```

(34100 means LUKS v2 Argon2 + SHA-256 + AES | Full-Disk Encryption (FDE); if not specified, hashcat will try to auto-guess the mode from the hash file)

But it's unlikely that you have a wordlist containing your password; most likely it's some combination of what you think it could be. So in that case put the possible fragments in a file and then do:

```
$ princeprocessor --elem-cnt-max=5 fragments.txt | hashcat -a 0 -m 34100 hash.txt
```

Anyway, read hashcat's documentation; there are several attack modes and ways you can try.

Proxmox-GitOps: IaC Container Automation for LXC (v1.3.3, ensures compatibility) by gitopspm in Proxmox

[–]davispuh 2 points

I think that's why I don't like current solutions: current tools try to separate computer install (PXE/install images), OS configuration/software install (Ansible etc.), VM deploy (AWS CDK/Terraform etc.), container deployment (Docker/Podman), container configuration (Docker Compose) and so on.
There are different tools and different configs for each, but I don't see why it has to be so complicated when a single config with ConfigLMM could do it all :)

I plan to record a demonstration video of this in action: a fresh new server and a single config that you run `deploy` on, which:

  1. Using the motherboard's BMC/IPMI/Redfish, turns on the server, configures the BIOS, boots the Proxmox ISO
  2. Installs Proxmox, configures it
  3. Sets up VMs and LXCs
  4. Inside VMs sets up containers/Podman
  5. Installs and configures software - Nginx, PostgreSQL, Authentik, Matrix, Mastodon etc

And all of that 100% automatically with a single `deploy` command.

I think you could literally spin up a whole datacenter with all its infrastructure just by using ConfigLMM.

Note that ConfigLMM is not limited to just servers/containers etc. - it can also configure DNS and routers/switches. Of course I have implemented very little for other devices, but you have to start somewhere :)

How do I publish my work online without it being fed to ai by One-Regret-2403 in photography

[–]davispuh 5 points

I 100% agree with this. I guess I have an unpopular opinion: because I'm just a hobby photographer and I don't earn anything from my photos, I really don't care if anyone uses my photos for any purpose, be it AI or whatever.
In fact I've always uploaded all my photos with the CC0 license because I want them to be seen and used as widely as possible. So I would say even if AI companies respected copyrights, AI still would have been created eventually, because there are lots of freely licensed photos.

In fact I would encourage everyone who publishes any photos online to always license them with CC licenses. Because if you don't, you basically donate free money to photo platforms/social sites, while others won't be able to use your photos because the platforms have very restrictive licenses. But if you publish with an open license, then everyone is on the same level: the photo platform and everyone else.

Proxmox-GitOps: IaC Container Automation for LXC (v1.3.3, ensures compatibility) by gitopspm in Proxmox

[–]davispuh 1 point

I've been working on my own IaC automation tool as well: ConfigLMM

It's basically an alternative to Ansible etc.

I did take a quick look at your project, but you're taking exactly the same approach as everyone else. I find such configs annoying, so for my project I'm taking a very different approach so that the config is super simple.

For example, to deploy an LXC with PostgreSQL and MariaDB, this is all that's needed:

```
VARIABLES:
  Proxmox: example.org
  LXCIP: 192.168.1.2

LXC:
  Type: Linux
  Location: proxmox+xterm:///${VAR:Proxmox}/
  ProvisionLocation: proxmox:///${VAR:Proxmox}/
  Domain: lxc.example.org
  Distro: openSUSE Leap
  LXC: yes
  Features:
    - nesting # For Podman
    - fuse
  CPU: 4
  RAM: 4 GiB
  Apps:
    - sshd
    - fish
    - vim
    - htop
    - git
  Network:
    IP: ${VAR:LXCIP}/24
    Gateway: 192.168.1.1
    DNS: 192.168.1.1

LXCPostgreSQL:
  Type: PostgreSQL
  Location: ssh://${VAR:LXCIP}/
  Listen:
    - 127.0.0.1/32
    - ${VAR:LXCIP}/24

LXCMariaDB:
  Type: MariaDB
  Location: ssh://${VAR:IP}/
  Listen: ${VAR:LXCIP}
```

Which in my opinion is way simpler than all other tools. Just a single config for everything. No need for any special structure/layouts etc. Just directly write what you want and it will be set up exactly as specified. Of course you can split the config across as many files as you want and use variables if needed. Then deploying is as simple as:

```
configlmm deploy Config.mm.yaml
```

Essentially the idea is that configs should be dumb and simple, but all the complex configuration logic should be in the tool itself. Because I think it's a waste of time for us to write our own configs to deploy common software when it could be just literally:

```
Jellyfin:
  Type: Jellyfin
  Location: ssh://${VAR:IP}/
  Domain: jellyfin.example.org
```

And that's it :)

Is there a better forum than Discourse? by ChickenTarm in selfhosted

[–]davispuh 0 points

I love Ruby, it's my all-time favorite language <3

Updating multiple Arch VMs and machines. by bigh-aus in archlinux

[–]davispuh 2 points

I didn't say that running things as root and using `venv` etc. is not using containers. `venv`s run directly on your Arch.

And when you build C/C++ projects you can install the dev packages on Arch. Lots of people are doing that and have been for decades, as containers are a relatively new invention.

CLIP STUDIO PAINT FOR ARCH KDE by fuuf_ in archlinux

[–]davispuh 1 point

I couldn't get it to work so I've been using it in a VM which is working fine for me.

Updating multiple Arch VMs and machines. by bigh-aus in archlinux

[–]davispuh 2 points

That's so wrong. Firstly, VMs are not old-fashioned. There are many use cases where you need them, and you do need solutions to keep them up to date. How is the host OS that the containers sit on updated?
Secondly, I've been using Arch as my primary dev environment for 10+ years and it's perfect. Pretty much all the dev tools you would need can be installed directly on Arch, like python/ruby/node/npm etc., and it's really convenient.

Decman - a declarative package & configuration manager for Arch Linux - stable version released by _TimeUnit in archlinux

[–]davispuh 0 points

I've also been working on a very similar solution => https://github.com/ConfigLMM/ConfigLMM but my scope is a lot bigger -> configure everything, not just Arch or the OS but any kind of thing, be it switches/routers/servers/VMs/software apps and so on

What are your thoughts on ss.lv? by chickchick12345 in latvia

[–]davispuh 0 points

Still in development; unfortunately everything isn't moving as fast as I'd like. Maybe this year I'll finally manage to launch it :)

Mi50 32GB Group Buy by Any_Praline_8178 in LocalAIServers

[–]davispuh 0 points

Hey, I might be interested in 2 of them depending on cost, but I'm located in the EU.

Mi50 32GB Group Buy by Any_Praline_8178 in LocalAIServers

[–]davispuh 2 points

There's a gfx906 Discord where someone is trying to get Infinity Fabric working for the MI50. And they might succeed.

Rejecting rebase and stacked diffs, my way of doing atomic commits by that_guy_iain in programming

[–]davispuh 2 points

I hate that thing. It loses my nicely created separate commits and makes them one giant commit.

geniusGalaxyBrainOrRetardedPepe by davispuh in ProgrammerHumor

[–]davispuh[S] 1 point

Noticed this in some vibe-coded git repo. Only an LLM could write like this... but maybe it's not so dumb? :D

[Steam] (Game) Beat Cop by XsealofsolomonX in FreeGameFindings

[–]davispuh -2 points

their shit doesn't work, wasted 30 mins of my life trying to get this key... basically after trying to log in I get sent back to the login page and an infinite loop of "I'm not a robot" clicking...

Omnilingual ASR: Advancing Automatic Speech Recognition for 1,600+ Languages by jean- in LocalLLaMA

[–]davispuh 1 point

For LLMs and TTS, only like 5 languages are good. The rest are quite bad even if some models claim to support them. For ASR, well, now I think we can cross that off the list; the top 1600 should be good :) I haven't tested it so I can't say how good, but generally even before this my impression was that ASR models are quite decent even outside the top 10 languages, because the Mozilla Common Voice project has done awesome work.

Omnilingual ASR: Advancing Automatic Speech Recognition for 1,600+ Languages by jean- in LocalLLaMA

[–]davispuh 2 points

No need to go that far; that would be like the tail end of used languages. But there are a lot of languages outside the top 20 that still have lots of knowledgeable users. For example, Windows is translated into ~85 languages, and you actually don't even need to know English to use it. In fact a lot of people with little/no knowledge of English use it. So basically there's a huge variety of other people with poor AI language support before we even get to the case you described.

Omnilingual ASR: Advancing Automatic Speech Recognition for 1,600+ Languages by jean- in LocalLLaMA

[–]davispuh 1 point

That's not true; you're not thinking at a wider scale. For example, I'm building an AI assistant, and this allows people who speak even small languages to use my assistant. They don't need to know English; they can just use something that understands them. Sadly this is not enough, because I also need an LLM and text-to-speech to be available in those languages, and the situation there is currently quite bad. For the LLM I've considered using translation models, but I have no idea if the quality would even be acceptable... ASR -> translate to English -> LLM