Message-ID: <20111109151729.GA18427@openwall.com>
Date: Wed, 9 Nov 2011 19:17:29 +0400
From: Solar Designer <solar@...nwall.com>
To: announce@...ts.openwall.com, john-users@...ts.openwall.com
Subject: John the Ripper 1.7.8-jumbo-8

Hi,

[ Offtopic: anyone with a "permanently failed" MacBook battery (doesn't charge, shows as "Poor" or "Check battery" in the GUI)? E-mail me off-list if you don't mind trying a software repair on it (so far tested on my battery only, with success). No promises, not a free support offer, ABSOLUTELY NO WARRANTY. And yes, this builds upon Charlie Miller's research and code. ]

I've just released John the Ripper 1.7.8-jumbo-8:

http://www.openwall.com/john/

Most of the changes since -jumbo-7 are by magnum and JimF - thanks! My role was mostly limited to preparing the release, although I've also added a couple of Perl scripts. Also, there's a new Perl script by Jean-Michel Picod. Here's more detail on the changes:

* Optional OpenMP parallelization has been added for MD5-based crypt(3) and Apache $apr1$ hashes when building with SSE2 intrinsics, as well as for SAP CODVN B (BCODE) and SAP CODVN G (PASSCODE). (magnum)

* Raw MD4 has been enhanced with optional SSE2 intrinsics. (magnum)

* The SSE2 intrinsics code for MD4, MD5, and SHA-1 has been pre-built with Intel's compiler into an assembly file, used with the new linux-x86-64i, linux-x86-sse2i, and win32-mingw-x86-sse2i make targets (these i-suffixed targets use gcc and do not require icc to be installed, yet benefit from the pre-compiled code). (magnum)

* A CRC-32 "format" has been added (see comments and samples in crc32_fmt_plug.c for proper usage). (JimF)

* Support for occasional false positives or multiple correct guesses has been added and made use of for WinZip/AES and CRC-32. (JimF)

* "md5_gen", which was no longer limited to MD5 (it also supports SHA-1 based hashes now), has been renamed to "dynamic". (JimF)

* Two Perl scripts have been added to dump password hashes from Mac OS X 10.7 (Lion) binary plist files - lion2john.pl, lion2john-alt.pl (normally only one of these is required). (Solar, JimF, Jean-Michel Picod)

* A Perl script has been added to compare two sets of benchmarks (saved "john --test" output) - relbench.pl. (Solar)

* Numerous other fixes and enhancements have been made, including to character encodings support and to status reporting (such as emitting a status line whenever a password is cracked and showing the number of candidates tried in the status line). (magnum, JimF, Solar)

Speaking of the OpenMP parallelization and intrinsics, here are some benchmarks on 2xE5420 2.5 GHz, gcc 4.5.0, "make linux-x86-64i":

Benchmarking: FreeBSD MD5 [12x]... (8xOMP) DONE
Raw: 215040 c/s real, 26812 c/s virtual

Benchmarking: SAP BCODE [sapb]... (8xOMP) DONE
Many salts: 10166K c/s real, 1267K c/s virtual
Only one salt: 7737K c/s real, 964788 c/s virtual

Benchmarking: SAP CODVN G (PASSCODE) [sapg]... (8xOMP) DONE
Many salts: 6948K c/s real, 867475 c/s virtual
Only one salt: 5061K c/s real, 631062 c/s virtual

Also curious (at least to me, although I am biased) is the benchmark comparison script, relbench.pl. With 1.7.8-jumbo-8, "john --test" reports as many as 160 individual benchmark results. Often it is not clear whether a certain change - different JtR version, compiler, compiler options, computer - makes things faster or slower overall, by how much, and how much this varies by hash/cipher type.

Although for your own immediate use you probably care about just one or a handful of benchmarks, when some of us build John the Ripper packages for others to use, we care about overall performance. Also, being able to compare different JtR builds like this helps a certain secondary use of JtR: as a general-purpose benchmark (for C compilers, their optimization vs. security hardening options, OpenMP implementations, CPUs, computers).

For example, here's how a Pentium 3 at 1.0 GHz (gcc 3.4.5, linux-x86-mmx) compares to one core of an E5420 at 2.5 GHz (gcc 4.5.0, linux-x86-64i, no OpenMP), both running 1.7.8-jumbo-8:

$ ./relbench.pl 1.7.8-j8-mmx 1.7.8-j8-64i
Number of benchmarks: 160
Minimum: 1.90200 real, 1.90200 virtual
Maximum: 19.68295 real, 19.68295 virtual
Median: 4.08437 real, 4.08439 virtual
Median absolute deviation: 1.25943 real, 1.25940 virtual
Geometric mean: 4.85750 real, 4.85302 virtual
Geometric standard deviation: 1.57265 real, 1.57344 virtual

This shows that the speedup varies from 1.9x to 19.7x, with the median at about 4.1x and the geometric mean at about 4.9x.

Another set of tests shows that gcc 4.6.2's -fstack-protector does not result in a slowdown, whereas -fstack-protector-all has a 1.2% slowdown:

$ ./relbench.pl asis ssp
Number of benchmarks: 158
Minimum: 0.97152 real, 0.97152 virtual
Maximum: 1.05857 real, 1.06020 virtual
Median: 1.00181 real, 1.00171 virtual
Median absolute deviation: 0.00516 real, 0.00593 virtual
Geometric mean: 1.00311 real, 1.00367 virtual
Geometric standard deviation: 1.01337 real, 1.01360 virtual

$ ./relbench.pl asis sspa
Number of benchmarks: 158
Minimum: 0.91182 real, 0.92094 virtual
Maximum: 1.03674 real, 1.04156 virtual
Median: 0.98883 real, 0.98883 virtual
Median absolute deviation: 0.01139 real, 0.01172 virtual
Geometric mean: 0.98710 real, 0.98765 virtual
Geometric standard deviation: 1.01801 real, 1.01799 virtual

(These are for -jumbo-7 on a Core 2 Duo, no OpenMP.) Of course, with performance ratios this close to 1.0, you need to run the benchmarks more than once to make sure the results are reliable.

The relevant concepts are explained here:

http://en.wikipedia.org/wiki/Median
http://en.wikipedia.org/wiki/Median_absolute_deviation
http://en.wikipedia.org/wiki/Geometric_mean
http://en.wikipedia.org/wiki/Geometric_standard_deviation

When interpreting the results, I recommend that we primarily use the geometric mean, but also look at the median as a sanity check. The median and the median absolute deviation are more robust when there are outliers (which the median ignores):

http://en.wikipedia.org/wiki/Robust_statistics#Examples_of_robust_and_non-robust_statistics
http://en.wikipedia.org/wiki/Outlier
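To make these summary statistics concrete, here is a rough Python sketch of the kind of summary relbench.pl prints. This is an illustration only, not the actual Perl script, and the ratios fed to it below are made up; relbench.pl itself derives such per-benchmark ratios from two saved "john --test" outputs.

# Illustration only - not relbench.pl.  Given per-benchmark speed ratios
# (new c/s divided by old c/s), print the same kind of summary statistics.
import math
import statistics

def summarize(ratios):
    med = statistics.median(ratios)
    mad = statistics.median(abs(r - med) for r in ratios)  # median absolute deviation
    logs = [math.log(r) for r in ratios]
    gmean = math.exp(statistics.mean(logs))   # geometric mean
    gsd = math.exp(statistics.stdev(logs))    # geometric standard deviation
                                              # (sample form; the script's exact
                                              # formula may differ slightly)
    print("Number of benchmarks:", len(ratios))
    print("Minimum: %.5f" % min(ratios))
    print("Maximum: %.5f" % max(ratios))
    print("Median: %.5f" % med)
    print("Median absolute deviation: %.5f" % mad)
    print("Geometric mean: %.5f" % gmean)
    print("Geometric standard deviation: %.5f" % gsd)

# Made-up ratios standing in for 160 real per-format speedups:
summarize([1.9, 3.8, 4.1, 4.4, 5.2, 19.7])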
On a related note, prompted by a question on john-users, I did some research on the effect of password policies on keyspace reduction:

http://openwall.info/wiki/john/policy

Once in a while, someone posts to john-users asking how to restrict John the Ripper to match a policy, hoping that this would substantially reduce the number of candidate passwords to try. Most of the time, such assumptions are incorrect, although in some extreme cases they are valid. For example, for printable US-ASCII (95 different characters) and length 8, requiring at least 3 character classes (out of four: digits, lowercase letters, uppercase letters, and other characters) reduces the keyspace by only 5.5% (so it is a reasonable thing to do). However, requiring at least 2 characters of each class (which for length 8 implies exactly 2 characters of each class) reduces the keyspace for length 8 by a factor of 52.9 (which in terms of keyspace reduction is almost as bad as making passwords one character shorter) and for length 9 by a factor of 17.6.
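For those who want to check such figures, here is a small Python sketch (not part of JtR, just an illustration) that counts the passwords over the 95 printable US-ASCII characters by summing over all ways to split the length among the four character classes; its results should match, or come very close to, the numbers quoted above:

# Illustration only.  Count length-N strings over printable US-ASCII that
# satisfy the two policies discussed above, by summing multinomial terms
# over all splits of the length among the four character classes.
from itertools import product
from math import factorial

CLASS_SIZES = (10, 26, 26, 33)  # digits, lowercase, uppercase, other printable

def count(length, accept):
    total = 0
    for split in product(range(length + 1), repeat=4):
        if sum(split) != length or not accept(split):
            continue
        ways = factorial(length)
        for k in split:
            ways //= factorial(k)              # multinomial coefficient
        for size, k in zip(CLASS_SIZES, split):
            ways *= size ** k                  # character choices within each class
        total += ways
    return total

def at_least_3_classes(split):
    return sum(1 for k in split if k) >= 3

def two_of_each_class(split):
    return all(k >= 2 for k in split)

for length in (8, 9):
    full = 95 ** length
    kept = count(length, at_least_3_classes) / full
    factor = full / count(length, two_of_each_class)
    print("length %d: >=3 classes keeps %.1f%% of the keyspace, "
          ">=2 of each class divides it by %.1f" % (length, 100 * kept, factor))

For comparison, dropping one character divides the keyspace by 95, which is why the factor of 52.9 is nearly as bad as shortening the passwords from 8 to 7 characters.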
Discussion on reddit /r/netsec/:

http://www.reddit.com/r/netsec/comments/lfgbz/effect_of_password_policies_on_keyspace_reduction/

Finally, John the Ripper is still the 10th most popular security tool, out of the 125 top tools in SecTools' latest survey, just like it was in the 2006 survey:

http://sectools.org
http://sectools.org/tool/john/

Along with the update based on the latest survey's results, the SecTools website has been redesigned to make it more dynamic, so you may comment on and rate the tools now. (Openwall is not affiliated with SecTools, but I felt that this SecTools update was worth mentioning.)

Enjoy, and please be sure to provide your feedback on john-users.

Alexander