Message-ID: <20190121122802.GA26680@openwall.com>
Date: Mon, 21 Jan 2019 13:28:02 +0100
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: Re: Benchmarking

On Fri, Jan 18, 2019 at 01:21:28PM +0100, Jeroen wrote:
> I'd like to calculate the feasibility of some cracking sessions. To do
> that, I'm using the single core performance of a reference CPU:
> 
> bofh@dev:/opt/JohnTheRipper-bleeding-jumbo/run$ export OMP_NUM_THREADS=1;
> ./john --test --format=raw-md5
> Benchmarking: Raw-MD5 [MD5 256/256 AVX2 8x3]... DONE
> Raw:    26776K c/s real, 26776K c/s virtual

This is good enough to decide on "feasibility of some cracking
sessions", but not precise enough to calculate their duration.
For fast hashes like this, there are significant performance differences
between cracking modes, etc.

> If I do this for all algorithms, something weird happens. E.g. some salted
> algorithms show higher numbers than raw formats.

It is expected that salted hashes may show better speeds at "Many salts"
than their counterpart raw hashes do.  This is because some work that is
performed per candidate password independently of the salt can be moved
out of the inner loop over salts when there are many salts.
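
As a rough illustration (a toy sketch with made-up function names and an
invented cost split, not JtR's actual code), the loop structure for a
salted format with many salts looks something like this:

/* Toy model of why "Many salts" can beat a raw format: the
 * salt-independent part of the work is done once per candidate and
 * reused for every salt, so the cost per {candidate, salt} pair drops. */
#include <stdint.h>
#include <stdio.h>

#define NCANDS 1000        /* candidate passwords in this batch */
#define NSALTS  100        /* salts loaded for cracking */

/* Hypothetical expensive stage depending only on the candidate. */
static uint32_t candidate_setup(uint32_t candidate)
{
    uint32_t x = candidate;
    int i;
    for (i = 0; i < 1000; i++)
        x = x * 2654435761u + 0x9e3779b9u;
    return x;
}

/* Hypothetical cheaper finishing stage that also uses the salt. */
static uint32_t finish_with_salt(uint32_t setup, uint32_t salt)
{
    return setup ^ (salt * 40503u + 1);
}

int main(void)
{
    uint64_t combinations = 0;
    uint32_t pw, s;

    for (pw = 0; pw < NCANDS; pw++) {
        /* Salt-independent work: done once per candidate ... */
        uint32_t setup = candidate_setup(pw);

        /* ... and reused in the inner loop over all loaded salts. */
        for (s = 0; s < NSALTS; s++) {
            (void)finish_with_salt(setup, s);
            combinations++;
        }
    }

    printf("%llu {candidate, salt} combinations tested\n",
        (unsigned long long)combinations);
    return 0;
}

With many salts the expensive salt-independent stage gets amortized over
all of them, which is how the "Many salts" figure can end up above the raw
format's c/s; with only one salt there is nothing to amortize and the
advantage disappears.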

> So I double-checked using cracking speeds of actual hash files.

Sure.  Always do that.

> I see completely different figures:
> 
> bofh@dev:/opt/JohnTheRipper-bleeding-jumbo/run$ ./john --format=raw-md5
> /tmp/raw-md5.benchmark
> Using default input encoding: UTF-8
> Loaded 10000 password hashes with no different salts (Raw-MD5 [MD5 256/256
> AVX2 8x3])
> Proceeding with single, rules:Single
> Press 'q' or Ctrl-C to abort, almost any other key for status
> Almost done: Processing the remaining buffered candidate passwords, if any
> Proceeding with wordlist:./password.lst, rules:Wordlist
> Proceeding with incremental:ASCII
> 0g 0:00:00:26  3/3 0g/s 6783Kp/s 6783Kc/s 67860MC/s serxci..seaeak

This is batch mode, meaning JtR quickly went through single crack mode
and wordlist mode, and is now in incremental mode, spending much of its
time switching between candidate password lengths in an attempt to find
a guess sooner.  When doing so, it tries to maximize the successful
guess rate, not the hash rate.

To see what your long-term speeds would be, you need to use a different
cracking mode like mask, or explicitly choose incremental mode (not via
batch mode as above) and set it to a fixed --min-length and --max-length,
or wait _much_ longer for incremental mode's length switching overhead
to become relatively small.
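
For example (option syntax as in recent jumbo builds; adjust the format,
lengths, and mask to your actual target):

./john --format=raw-md5 --incremental=ASCII \
    --min-length=7 --max-length=7 /tmp/raw-md5.benchmark

./john --format=raw-md5 --mask='?a?a?a?a?a?a?a' /tmp/raw-md5.benchmark

Let such a session run for at least a few minutes and look at the p/s
figure on the status line; that is much closer to the sustained speed of
a real attack than the batch mode snapshot above.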

> So --test = 26776K c/s, cracking = 6783Kc/s, and there's a C/s (capital C)
> that's orders of magnitude higher.
> 
> Questions:
> 
> - Where are the differences coming from?

There are many different things causing them.  I've explained some
above.  As to the C/s (capital C) figure, it's higher due to your many
password hashes loaded against this unsalted hash type.  C/s is the
number of combinations of {candidate password, hash} tested per second.
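
With your numbers that checks out: 6783Kc/s times the 10000 loaded hashes
is about 67830MC/s, in line with the 67860MC/s shown on the status line.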

While having many hashes loaded at once increases C/s and that's great,
you also need to be aware that it will slightly reduce c/s as some time
is spent on comparisons of each computed hash against the many loaded
hashes (even though we do this in smart ways that avoid performing this
many individual comparisons).

> - What's the best number to use in calculations for time predictions?

Actually run the attack that you intend to run.  Let it run for a longer
while.  Use the p/s rate along with your knowledge of how many candidate
passwords the session is going to test to calculate the likely duration.
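For example (made-up numbers, just to show the arithmetic): a mask of
seven ?a characters covers 95^7, or about 7*10^13, candidates; at a
sustained 27000Kp/s that is roughly 2.6 million seconds, or about 30
days.  Plug in the p/s your real session reports and the keyspace of the
attack you actually plan to run.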

Alexander
