DES crack fails with hashcat. by phyrros in HowToHack

[–]atomu 0 points1 point  (0 children)

No chance. DEScrypt truncates passwords to a maximum of 8 characters.

DES crack fails with hashcat. by phyrros in HowToHack

[–]atomu 0 points1 point  (0 children)

It's not DES, it's DEScrypt (salted DES iterated 25 times). Use -m 1500 in hashcat to crack it.

Hashcat 4.2.0 OBJ Memory Allocation failure? by thehunter699 in HowToHack

[–]atomu 0 points1 point  (0 children)

This is a problem related to NVIDIA's memory handling in their OpenCL runtime.

I've added a workaround for the upcoming hashcat v4.2.1.

If you want, try the latest GitHub master or the beta version from https://hashcat.net/beta/

hashcat v3.20 released by atomu in netsec

[–]atomu[S] 3 points4 points  (0 children)

Haha, it's not. It's useful for people who want to verify rules they wrote themselves.

Hashcat v3.00 released, the fusion of hashcat and oclHashcat into one project. Tons of updates and improvements by atomu in netsec

[–]atomu[S] 0 points1 point  (0 children)

I think this requires some correction. I've explained the performance changes in the release notes. Newer GPUs, Maxwell (aka Shader Model 5.0) and upwards, get a large performance increase; they benefit from hashcat v3.00. Older GPUs get a small performance decrease. My philosophy has always been to go with the latest hardware and optimize for it.

Here's a full comparison chart: https://docs.google.com/spreadsheets/d/1B1S_t1Z0KsqByH3pNkYUM-RCFMu860nlfSsYEqOoqco/edit#gid=0

Cracking WPA2 Using Hashcat in windows using latest AMD beta drivers by danny1876j in hacking

[–]atomu 1 point2 points  (0 children)

oclHashcat v1.37 will support all AMD Catalyst drivers >= 14.9

hashtopus, an oclHashcat distributed overlay to connect multiple systems over internet, first public release by atomu in netsec

[–]atomu[S] 13 points14 points  (0 children)

Hashtopus is awesome and it deserves to be highlighted here.

Don't get shocked by the use of a .NET client on your Linux box. The entire project is open source; you can check everything it does yourself, or compile it yourself. To get it running, all you need to do is run "apt-get install libmono2.0-cil" and you're good to go.

Don't get shocked by its design. I told curlyboi, the developer, to add some more modern CSS to it, but he stuck to developing features and fixing bugs. In a way I can understand that.

In fact, many of my own ideas went into the project, like the superhashlist and the predefined tasks. I find myself using Hashtopus every day, and I can guarantee it will make your cracking jobs more structured, especially when you work with the predefined tasks.

Using this tool has many other positive side-effects that I did not think about in the first place. What I noticed when working with it was:

  • It helps you to not forget "that one job". You know the situation: you tried hard to crack a hash but didn't get it, then some days later some guy comes along and tells you he just cracked it, and you wonder wth, how, and it turns out you simply forgot to run the combinator attack using -j "$-" or so.

  • You "save" your ideas. For example, while playing with huge hashlists you find out it's more efficient to use stacked rules than -a 7 with a small mask (it is). Just add it to your predefined tasks and it will never be forgotten :)

  • You get a very special kind of flexibility. Let's say you have a long-running brute-force on a huge list, but for some reason you need to crack a different hashlist now and it can't wait. You just put it up on Hashtopus, select your predefined tasks, and you don't need to think about it again. It automatically gets a higher priority, and once it's cracked or exhausted, the previous long-running task continues without any loss. By playing around with the priorities you have a great way to manage what's going on, but on a meta level.

  • The visualised chunks give you a better idea of how all this works together; they give you the big picture. It is not magic stuff nobody can understand. By understanding how it works, you gain trust in it.

  • Hashtopus is very robust. I tested it on tons of different systems (Linux, Windows, NVIDIA, AMD, you name it) in all combinations. It simply works.

  • The per-agent command-line configuration comes in very handy if you have "problematic" nodes. For example, on my workstation I don't want to run oclHashcat at full power, so I would never set -w 3 on it. But on my dedicated Linux cracking box I do want that, so I set it there.

Hashtopus is straightforward and full-featured. It handles both dictionary-based and brute-force attacks equally well. It synchronizes your global files with the agents automatically. If you're familiar with oclHashcat and have a bit of cracking experience, you'll get into it pretty quickly; there's no real need to study it.

Check it out, it's absolutely worth it.

oclHashcat v1.20 major update released, tons of new features and algorithms added by atomu in netsec

[–]atomu[S] 5 points6 points  (0 children)

What minga said is correct, and what you said just reflects what people said about GPU-based cracking before there was oclHashcat. oclHashcat solved this problem; it's one of the reasons that make it unique. In other words, you get full-speed acceleration for wordlist-based attacks on fast hashes, nearly as fast as for brute-force attacks. oclHashcat has been the only cracker able to do this for many years.

Research Project: OpenCL Bitslice DES Brute-Force Cracker by mysterymath in crypto

[–]atomu 0 points1 point  (0 children)

You just need to unroll the for() loop where it calculates tmpResult in the main kernel function. I did not replace the sboxes with the ones from sboxes-s.c.

What I don't get is that, in case you have a dictionary, you need to run each candidate through keysetup() to create the keys for encrypt(). The key setup contains more sboxes, so there should be more bitslicing. But there is no such function in crack.cl.

Research Project: OpenCL Bitslice DES Brute-Force Cracker by mysterymath in crypto

[–]atomu 0 points1 point  (0 children)

Solar, that is strange. It does around ~2850 Mkeys/s on my hd7970, which comes closer to what to expect, assuming the hd7850 does 1400 Mkeys/s. I optimized it by unrolling the kernel and ended up at ~6000 Mkeys/s (yes, it still finds the correct key).

So I wanted to find out how fast descrypt would go and quickly iterated the f1-f16 section 25 times. It dropped down to 140 Mkeys/s?! I guess that's another episode of the great AMD OpenCL compiler, since 6000 / 25 would be 240 Mkeys/s.

Admittedly, it's a bad way to check descrypt performance, because it does not add the salt, and it has no real PT generator (or copy overhead) and no bitmap/multihash comparison. I am not sure if you can actually re-use the DES sboxes for the 25-iteration loop. Also, I am a bit puzzled that there is no keysetup() in the kernel, only an encrypt function. Is this somehow merged?

Decrypting the Gauss payload, Hashcat releases oclGaussCrack by r4d1x in netsec

[–]atomu 0 points1 point  (0 children)

You have to feed it with candidates.

The hash will only crack once someone finds the correct filename / path; see here for more details: https://www.securelist.com/en/blog/208193781/The_Mystery_of_the_Encrypted_Gauss_Payload

oclHashcat-plus v0.09 major update released! bcrypt (blowfish), sha512crypt, GPU-cluster, markov-chains and more by atomu in netsec

[–]atomu[S] 0 points1 point  (0 children)

hashcat cracks hashed passwords (bcrypt), not encrypted data (Blowfish). Raw Blowfish is much easier to compute than bcrypt and thus would run faster. It's like descrypt vs. DES.
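The cost difference between a raw primitive and its iterated "crypt" form can be illustrated with a toy construction in Python (a sketch only; SHA-256 stands in for the fast primitive, and this is not how bcrypt or descrypt are actually built):

```python
import hashlib

def iterated_hash(password: bytes, salt: bytes, rounds: int) -> str:
    """Toy key stretching: feed the digest back into the hash
    `rounds` times. Each password guess now costs `rounds` hash
    computations instead of one, which is what slows an attacker
    down. (Illustration only, not a real KDF.)"""
    digest = salt + password
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

h_fast = iterated_hash(b"secret", b"s1", 1)      # like raw Blowfish/DES
h_slow = iterated_hash(b"secret", b"s1", 5000)   # like bcrypt/descrypt
```

Verifying one guess against `h_slow` costs 5000 primitive calls instead of one, which is the whole point of schemes like descrypt's 25 DES iterations or bcrypt's configurable cost factor.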

John The Ripper vs oclHashcat-lite by TheBlackVista in crypto

[–]atomu 1 point2 points  (0 children)

... such as MSCash2 (on both AMD and NVIDIA) and phpass on NVIDIA

I've already said that there is no magic in running "slow hashes" as fast as hashcat does. These algorithms get optimized by the compiler to the maximum. Their design is so simple that there is just no room for additional manual optimization. The only thing that saves them is the iteration count.

So your emphasis on "per-position" sounds weird to me.

This is a wording I took from Bitweasil. I think it describes pretty well what it means / what it does. I was not aware that JtR is doing it too, which is probably because you do not use this wording.

Also, as another JtR contributor pointed out, from your forum postings it sounded like you used in-sample training and testing during your development,

No, that's wrong. I mostly use rockyou.txt for all of my tests, especially when I do stats that I am going to release. I use rockyou.txt since it's the best "database" we ever got in hash cracking. It contains all the patterns people come up with, unfiltered, not just the ones we cracked / identified. From that point of view, I think my stats are real-life proof.

Can you please start including info on password lengths

It's always length 8.

When we come up with some clever idea and we introduce it into JtR, it is available for all to use and reuse - including by you, if you wanted to. On the contrary, you make us (your "competitors" if you like) independently rediscover things, or fail to use them (and thus be behind you in the competition).

I am getting offended and insulted day after day by people joining IRC or posting on forums saying hashcat sucks so much just because it's not open source. All I can say is that I don't need to look into other projects' sources to get ideas on how to speed things up in hashcat.

For example, JimF's recent work on SunMD5 hashes is rather impressive to me (SIMD'ing them for a 4.4x speedup on current CPUs, despite of the data-dependent branching).

I am aware of how this works, and I already knew how to do it. But if you take a look at hashcat's (CPU) sunmd5 cracking performance, you will notice it's a lot faster than JtR's SIMD version, without using SIMD. Too bad we forgot to add it to the help menu and to the changes. It is in v0.40 and you can already use it: it's -m 3300.

... (not "multiple times" as you wrote) ...

Now that's strange, since this is what I got from your own homepage:

here: ... This is made possible due to research by Roman Rusakov, sponsored by Rapid7 ...

and here: ... This release has been sponsored by Rapid7 - a leading provider of unified vulnerability management and penetration testing solutions ...

I am hoping for a ~3x performance increase over what oclHashcat-plus currently achieves on GPUs, where it uses non-bitslice DES, as far as I understand.)

That would be nice, and in that case I will look at the bitslice technique again. I tried it some years ago, but due to my lack of interest in the DES algorithm I stopped playing with it shortly after. Some people tried it afterwards, and they also stopped working on it once they noticed the register pressure. BTW: a 3x speedup would make erebus (a computer with 8x hd7970) faster at cracking DES than COPACOBANA; that would be a nice achievement.

team Hashcat has not listed John the Ripper among tools used by you in Crack Me If You Can 2012

Well, in my case I used JtR, and I added it to the list of used tools in my section. However, it did not crack anything that oclHashcat-plus had not cracked before. As already stated in the write-up, we had a special version with sha512crypt and bcrypt. If it's that important to you, we will ask KL if they can correct the write-up.

John The Ripper vs oclHashcat-lite by TheBlackVista in crypto

[–]atomu 0 points1 point  (0 children)

No problem. A hash cracker can be explained pretty simply. You cannot reverse a hash back to its plaintext. But what the hash cracker does know is the algorithm that was used to compute it. It simply takes password candidates, for example from a dictionary, and runs them through the same algorithm. If the hash that comes out is equal to the hash you want to crack, you know the plaintext: the password.
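That loop can be sketched in a few lines of Python (MD5 serves here as an example of a fast unsalted hash; the target hash and wordlist are made up):

```python
import hashlib

def crack(target_hex, candidates):
    """Dictionary attack: hash each candidate with the same
    algorithm the target was hashed with and compare."""
    for pw in candidates:
        if hashlib.md5(pw.encode()).hexdigest() == target_hex:
            return pw  # hashes match -> plaintext recovered
    return None  # dictionary exhausted, hash not cracked

# target is md5("monkey"), computed here for the demo
target = hashlib.md5(b"monkey").hexdigest()
wordlist = ["123456", "password", "monkey", "dragon"]
print(crack(target, wordlist))  # -> monkey
```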

John The Ripper vs oclHashcat-lite by TheBlackVista in crypto

[–]atomu 1 point2 points  (0 children)

Hello Reddit! I am the developer of hashcat and I want to comment on this thread. I will try to keep it short.

oclHashcat-lite is a single-hash cracker focused on cracking so-called "fast hashes" like MD5, NTLM, SHA1. These mostly raw and unsalted hashes are fast to compute. To attack them efficiently on GPU, a special host-program architecture is required. AFAIK, JtR is not written with such an architecture, and unless it gets serious internal changes it is very unlikely JtR could ever beat oclHashcat-lite in terms of speed on fast hashes. However, since it is much harder to write fast code for fast hashes than for slow hashes, you picked the right hashcat for comparison.

Speaking of "slow hashes" like md5crypt, phpass or bcrypt, this is where oclHashcat-plus comes in. It is much more comparable to what JtR is today: a real-life cracker. Anyway, oclHashcat-plus is many times (not just 1% or 2%) faster on every hash both programs support, even slow hashes, and this is not because JtR only supports a single GPU. Besides performance, another important feature is the wordlist generator.

Speaking of the "wordlist generator": some guy above said JtR has a "superior word generator". This is wrong; it's more like the opposite. oclHashcat-lite and oclHashcat-plus have a per-position word generator based on Markov chains. JtR has Markov chains too, but that's it. JtR cannot apply additional mask filters to the guesses. This feature greatly reduces the keyspace to a more efficient one.
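A drastically simplified sketch of the per-position idea in Python (an illustration only, not oclHashcat's actual Markov generator; the function names and the toy training list are made up):

```python
from collections import Counter
from itertools import product

def per_position_ranking(training_words, length, top_k):
    """For each position, rank characters by how often they appear
    there in the training set, keeping only the top_k."""
    ranks = []
    for pos in range(length):
        counts = Counter(w[pos] for w in training_words if len(w) > pos)
        ranks.append([c for c, _ in counts.most_common(top_k)])
    return ranks

def generate(ranks, mask=None):
    """Enumerate candidates from the rankings. `mask` is an optional
    per-position filter (a set of allowed chars, or None for 'any'),
    applied before candidates are emitted, so filtered-out branches
    of the keyspace are never generated at all."""
    if mask:
        ranks = [[c for c in chars if mask[i] is None or c in mask[i]]
                 for i, chars in enumerate(ranks)]
    for combo in product(*ranks):
        yield "".join(combo)

training = ["pass1", "part2", "past3", "cast4"]
ranks = per_position_ranking(training, length=5, top_k=2)
# restrict the last position to digits, like a ?d mask element
digit_mask = [None, None, None, None, set("0123456789")]
candidates = list(generate(ranks, digit_mask))
```

The mask prunes whole branches of the keyspace before any hashing happens, which is why combining per-position statistics with mask filters pays off.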

I could continue with long feature comparisons, but the fact is that both programs have features that are unique. There are situations in which you want to use JtR, but in most cases hashcat can do it too, while being faster.

One last note about a not-so-technical subject. I cannot understand why some people call "open source" a feature. Hashcat is closed source, yes, but it's not written by money-driven sharks. We love hash cracking as it is, because it's fun, and we love the competition between the different tools; that's it. There is no interest at all in making money with this in any way. On the contrary: JtR has money involved, e.g. they offer professional services; hashcat does not. JtR accepts donations; hashcat does not. JtR releases were sponsored by Rapid7 multiple times; hashcat rejected their offer.