Message-ID: <CAKvk7_6edWDZyALvw_6Cd2mp9+bvLuhknU475K8akJKi+0ohiA@mail.gmail.com>
Date: Fri, 29 Jan 2016 23:18:14 -0500
From: japhar81 <japhar81@...il.com>
To: john-users@...ts.openwall.com
Subject: Re: MPI with Spot Instances?

Makes sense -- thank you!

On Fri, Jan 29, 2016 at 11:00 PM, magnum <john.magnum@...hmail.com> wrote:

> On 2016-01-30 04:47, japhar81 wrote:
>
>> I'm still a bit fuzzy on the modes.. I'm going after a RAR file's hash
>> (via rar2john) in incremental mode.. which of those cases does that fall
>> into?
>>
>
> I'm not sure exactly how much redundant work may be issued for incremental
> when resuming after a crash, but I only recall seeing notable issues with
> single (many salts) and PRINCE.
>
>
> magnum
>
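For context, an attack like the one discussed here is typically set up along
these lines; the archive, hash and session names below are only placeholders,
not taken from the thread:

  rar2john archive.rar > rar.hash
  ./john --session=rarjob --incremental=ASCII rar.hash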
> On Fri, Jan 29, 2016 at 10:44 PM, magnum <john.magnum@...hmail.com> wrote:
>>
>> On 2016-01-30 04:25, japhar81 wrote:
>>>
>>>> They basically just disappear, whatever they were in the middle of -- I
>>>> guess my question is, will the resume re-run whatever jobs those nodes
>>>> were in the middle of and didn't report back? And if one of them happens
>>>> to have hit a match, will that get saved somehow too?
>>>>
>>>>
>>> A cracked password is very unlikely to not end up in the pot file. The
>>> beauty of Solar's design is that if a session dies before it wrote a
>>> recently cracked password to the pot file, it will also not have written
>>> the corresponding unit of work to the session file. So a resume will
>>> almost certainly re-crack the password.
>>>
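To see that behaviour with the placeholder names from the sketch above: after
a crash, resume the session and then list what is already in the pot file:

  ./john --restore=rarjob
  ./john --show rar.hash

Whatever the dead session had not yet recorded is simply re-tried by the
resumed run.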
>>> Jumbo's MPI functionality is very KISS-minded and doesn't rely on
>>> "reporting back" anything to anyone at all. Each node runs in its own
>>> daft world not giving a dang about the others. Each node writes its own
>>> session file just as any non-MPI session would. In fact, the code paths
>>> are *very* near 100% the same as when running --node=x/y in a single
>>> process, except the "x" and "y" are filled in automagically.
>>>
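To make the equivalence concrete (a rough sketch only; in the real MPI build
each rank derives its own session file name automatically rather than being
handed one, and the session names below are placeholders):

  mpiexec -np 4 ./john --session=spot --incremental=ASCII rar.hash

splits the keyspace in much the same way as four independent processes:

  ./john --node=1/4 --session=spot-1 --incremental=ASCII rar.hash
  ./john --node=2/4 --session=spot-2 --incremental=ASCII rar.hash
  ./john --node=3/4 --session=spot-3 --incremental=ASCII rar.hash
  ./john --node=4/4 --session=spot-4 --incremental=ASCII rar.hash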
>>> Worst-case scenario is supposedly that a resume will do a bit of
>>> redundant work. This is obviously by design - better safe than sorry.
>>> The default "Save" timer in john.conf is 60 seconds for Jumbo, so you
>>> will hopefully not lose more than that. Some modes (e.g. single w/ many
>>> salts, and PRINCE regardless of salts) may be much worse than that
>>> though, to the point that a stop/resume once an hour may end up never
>>> proceeding past that hour at all.
>>>
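The timer in question is the "Save" setting in john.conf; abridged, and with
comment wording that may differ between versions, it looks like:

  [Options]
  # Crash recovery file saving delay in seconds
  Save = 60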
>>> magnum
>>>
>>>
>>> On Fri, Jan 29, 2016 at 10:20 PM, magnum <john.magnum@...hmail.com>
>>> wrote:
>>>
>>>>
>>>> On 2016-01-29 18:12, japhar81 wrote:
>>>>
>>>>>
>>>>>> Ok, so corollary question -- does the session stuff work with MPI? I.e.
>>>>>> let's say I start the spot instances externally, and mpiexec JtR with
>>>>>> some flavor of --session (on a box that won't die). If those nodes die
>>>>>> mid-process, will that be recorded in the session to enable a resume
>>>>>> later when I spin new nodes and start mpiexec again?
>>>>>
>>>>> Sure (as far as I can imagine how spot instances work). Session file
>>>>> integrity is very well tested.
>>>>>
>>>>> magnum
>>>>>
>>>>>
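In practice such a resume would look something like the following, assuming
Open MPI-style options, a placeholder hostfile, and a replacement fleet
exposing the same number of ranks as the original run:

  mpiexec -hostfile new_nodes.txt -np 8 ./john --restore=spot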
>>>>> On Wed, Jan 27, 2016 at 4:03 PM, magnum <john.magnum@...hmail.com>
>>>>> wrote:
>>>>>
>>>>>
>>>>>> On 2016-01-27 17:25, japhar81 wrote:
>>>>>>
>>>>>>
>>>>>>> I've been playing around with MPI clustering JtR for a while, and I've
>>>>>>> managed to get it running smoothly on static nodes. What I'd like to do
>>>>>>> next is create an auto-scaling group in AWS, using spot instances. What
>>>>>>> this basically means is nodes will come and go, with their hostnames/IPs
>>>>>>> changing at random.. I cannot figure out how I would run JtR in that
>>>>>>> scenario -- since it requires a node list in a file on startup to mpirun.
>>>>>>>
>>>>>>> If it matters, I'm looking to do a brute-force using the ASCII mode. Has
>>>>>>> anyone found a way to do a dynamic cluster that adds/removes nodes at
>>>>>>> random? Is this even possible?
>>>>>>>>
>>>>>>> I'm not aware of any existing work that would do this. A solution using
>>>>>>> JtR as-is, but with some yet-to-be-implemented master issuing jobs, could
>>>>>>> involve looking at the existing "-node=x/y" as describing "pieces"
>>>>>>> instead of "nodes". So instead of saying -node=2/8 as in "you are node 2
>>>>>>> of 8", you'd say -node=4321/100000 as in "do piece 4321 of 100000". Then
>>>>>>> you'd submit pieces to active nodes. Obviously you'd have to handle dying
>>>>>>> nodes that never reported back their given piece, and re-issue those
>>>>>>> pieces.
>>>>>>>
>>>>>>> magnum
>>>>>>>
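As a rough illustration of the master described above -- purely a hypothetical
sketch, since nothing like it ships with JtR, and every name in it (the host
list, hashes.txt, the piece count, the per-piece session names) is made up --
a minimal dispatcher in Python could look like this:

import queue
import subprocess
import threading

TOTAL_PIECES = 100000                     # the "y" in -node=x/y
HOSTS = ["10.0.0.11", "10.0.0.12"]        # would be refreshed from the spot fleet
MAX_FAILURES = 3                          # retire a worker after this many errors

work = queue.Queue()
for piece in range(1, TOTAL_PIECES + 1):  # pieces are 1-based, like -node
    work.put(piece)

def worker(host):
    failures = 0
    while failures < MAX_FAILURES:
        try:
            piece = work.get_nowait()
        except queue.Empty:
            return                        # nothing left to hand out
        cmd = ["ssh", host,
               "./john --node=%d/%d --session=piece%d hashes.txt"
               % (piece, TOTAL_PIECES, piece)]
        if subprocess.call(cmd) == 0:
            failures = 0                  # piece ran to completion on this node
        else:
            failures += 1
            work.put(piece)               # node died mid-piece: re-issue it later

threads = [threading.Thread(target=worker, args=(h,)) for h in HOSTS]
for t in threads:
    t.start()
for t in threads:
    t.join()

A real version would also refresh the host list as spot instances come and go,
and gather the per-node pot files somewhere central; the sketch only shows the
dispatch and re-issue loop.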
