Message-ID: <20100302220419.GA2570@openwall.com>
Date: Wed, 3 Mar 2010 01:04:19 +0300
From: Solar Designer <solar@...nwall.com>
To: john-users@...ts.openwall.com
Subject: gcc version, Core i7

Hi,

I thought that some of you might want to build JtR with a newer
version of gcc (the GNU C compiler) than whatever version your system
has installed globally.

The gcc version makes almost no difference for 32-bit x86 builds of the
official JtR, since almost all performance-critical code is written in
assembly anyway, but on other architectures (especially x86-64) and for
other hash types (those added with the jumbo patch) it can make a
difference: speedups of 10% to 20% are sometimes seen when going from
gcc 3.x to 4.4.x.

I created a wiki page with instructions on building and using gcc 4.4.3
(the latest stable release as of this writing) under a non-root account
on a Unix-like system (tested on a 64-bit Linux install on a Core i7
machine, which had gcc 3.4.5 installed globally):

http://openwall.info/wiki/internal/gcc-local-build
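The essence of using such a locally-built compiler is just a PATH
change; a minimal sketch, assuming gcc 4.4.3 was installed under
$HOME/gcc-4.4.3 (that directory name is my example here, not
necessarily what the wiki page uses):

```shell
# Put the locally-built gcc ahead of the system one for this shell.
# The install prefix $HOME/gcc-4.4.3 is an example assumption.
export PATH="$HOME/gcc-4.4.3/bin:$PATH"

# Confirm which gcc will be picked up by subsequent builds.
gcc --version 2>/dev/null | head -n 1
```

Since JtR's Makefile invokes gcc by name, a subsequent "make clean"
followed by your usual build target in JtR's src/ directory should then
pick up the newer compiler with no Makefile edits.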

Others and I have also run some JtR benchmarks that show the
differences between gcc 3.4.x and 4.x on x86-64:

http://openwall.info/wiki/john/benchmarks

Of the hashes supported by the official JtR, these differences on x86-64
are mostly limited to MD5-based and Blowfish-based crypt(3) hashes,
because DES is mostly implemented in assembly anyway.  Similar
differences are also seen for some of the hash types added with the
jumbo patch (but these are not shown on the benchmarks page).

Speaking of Core i7, its "Hyperthreading" works surprisingly well, so it
does make sense to run more instances of JtR than the CPU has physical
cores (up to the number of logical CPUs, such as 8).  The speedup this
achieves diminishes as you make the code more optimal (such as by going
from a 32-bit to a 64-bit build, or by upgrading gcc).  This is
understandable because "Hyperthreading" specifically takes advantage of
otherwise-idle execution units, and the average number of those
decreases with more optimal code.  So optimizing the code and running
more instances of it in parallel are two ways to achieve the same goal:
making use of the otherwise-idle execution units.
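On Linux, the logical vs. physical CPU counts can be read from
/proc/cpuinfo to decide how many instances to start; a quick sketch
using only standard interfaces (nothing JtR-specific, and the 8-vs-4
figure is just what a typical HT-enabled Core i7 would show):

```shell
# Count logical CPUs (one "processor" line each) and cores per
# physical package, as reported by the kernel.
logical=$(grep -c '^processor' /proc/cpuinfo)
cores_per_pkg=$(awk -F': ' '/^cpu cores/ {print $2; exit}' /proc/cpuinfo)
echo "logical CPUs: $logical (cores per package: ${cores_per_pkg:-unknown})"
```

With Hyperthreading enabled, the logical count is twice the core count,
and that logical count is the sensible upper bound on the number of JtR
instances per the reasoning above.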

Alexander
