# Machinations

Puzzle
November 28, 2011, 4:45 pm
Filed under: Uncategorized

One of my graduating PhD students has been interviewing with several big Internet companies (Google, Amazon, Microsoft, etc.).  One of the benefits of this is that my research group gets to hear many of the new job interview puzzles for this year.  Here’s one of the more interesting ones:

You are given a large array in the form:

$a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n$

You want to output an array in the form:

$a_1, b_1, a_2, b_2, a_3, b_3, \ldots, a_n, b_n$

The catch is that you have very little external memory (only O(log n)), so you need to change the array in place.

It’s not too hard to get a solution to this problem that runs in O(n log n) time (think recursion).  But surprisingly, you can do better.  Can you get a linear time solution? Beware: this is harder than it seems!
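For concreteness, here is an out-of-place version (function name mine) that pins down the required output. It uses O(n) extra space, so it is not a solution to the puzzle, only a specification of the target permutation:

```python
def interleave_copy(arr):
    # Out-of-place reference: uses O(n) extra space, so it does NOT
    # satisfy the puzzle's constraint, but it makes the target
    # ordering a_1, b_1, a_2, b_2, ... concrete.
    n = len(arr) // 2
    out = []
    for i in range(n):
        out.append(arr[i])      # a_{i+1}
        out.append(arr[n + i])  # b_{i+1}
    return out
```

The challenge is to produce exactly this ordering while modifying the array in place with only O(log n) working memory.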

You can do this for any fixed permutation, though the algorithms are easier for easy-to-describe permutations like this one.

Comment by Paul Beame

Paul,

I’m aware of the work to do it in O(n log n) time for any arbitrary permutation – there’s a paper by Faith Ellen on this, I think. However, I’m not aware of an O(n) solution for an arbitrary fixed permutation – do you have a reference? The O(n) solution I know for this problem makes use of the easy-to-describe structure.

Jared

Comment by Jared

That paper (by Faith Ellen, Ian Munro and Patricio Poblete) assumes that the permutation itself is part of the input. I mean the situation where the permutation is fixed and not part of the input, and the algorithm is allowed to have hard-coded instructions that depend on the permutation.

Comment by Paul Beame

Paul,

Do you have a reference for how to do it in linear time in the case where the algorithm is allowed to have hard-coded instructions that depend on the permutation? Maybe I’m being slow, but it doesn’t seem easy. In particular, even if the permutation is easy to describe, the cyclic decomposition can have many cycles and the structure of these cycles can depend critically on the size of the array, right?

Jared

Comment by Jared

There is nothing but a definitional difference between us: If the permutation is ugly, the program may be of linear length, but the space to implement it is only logarithmic, since it only requires a program counter. It is easy to see that such long programs are necessary, since it takes roughly n log n bits simply to describe the permutation, and the program must define the permutation it implements.

This is what I meant in my first message about the difference between the general and “easy-to-describe” cases.

Comment by Paul Beame

Paul: If n is fixed, I completely agree with you that you can encode the cyclic decomposition into the program itself. However, for general n, and for an arbitrary class of permutations, it seems hard. Even for the class of permutations in this interview question, it’s tricky to figure out how to enumerate the set of cycles using logarithmic space, and I only know how to do it for some values of n (although enough values of n for a recursive approach).

Comment by Jared

Nice question, but to have any intuition why the most obvious algorithm works, you need to know some abstract algebra….

Comment by Sariel Har-Peled

Sariel: You can get the O(n log n) solution without any knowledge of group theory. In fact, here is the solution that my student gave in the interview. First, assume that n is even for simplicity. Note that it’s not hard to get the array to be of the form a_1, b_1, a_3, b_3, …, a_(n-1), b_(n-1), a_2, b_2, a_4, b_4, …, a_n, b_n. In other words, the a’s and b’s are paired up but the odd indices are on the left and the even ones on the right. Now think about how you can recursively approach this new array – Hint: Move pairs of elements at a time; think 1, 3, 5, …, n-1, 2, 4, 6, …, n.

Comment by Jared

Isn’t it simpler to exchange the second half of the a’s with the first half of the b’s [linear time], and then just call recursively on both halves? This gives O(n log n), no?

With my previous comment, I just meant that proving that the linear time algorithm works (and thus coming up with it) requires some intuition about cyclic groups. At least my solution does… 😉

Comment by Sariel Har-Peled
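This recursion can be sketched as follows — a minimal sketch (function name mine) that assumes the number of pairs is a power of two, so the exchanged blocks always have equal length; for other sizes the exchange becomes a rotation of unequal-length blocks:

```python
def interleave(a, lo=0, hi=None):
    # a[lo:hi] holds a_1..a_m, b_1..b_m; interleave them in place.
    # Assumes m is a power of two, so the block swap exchanges
    # equal-length halves.  O(m log m) time, O(log m) stack space.
    if hi is None:
        hi = len(a)
    m = (hi - lo) // 2
    if m <= 1:
        return
    h = m // 2
    # exchange the second half of the a's with the first half of the b's
    for i in range(h):
        a[lo + h + i], a[lo + m + i] = a[lo + m + i], a[lo + h + i]
    interleave(a, lo, lo + m)    # first h pairs
    interleave(a, lo + m, hi)    # remaining h pairs
```

The recursion depth is log m, so even though the array is mutated in place, the stack fits in the O(log n) space budget.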

yes good point – this is an easier recursive solution.

Comment by Jared

I just realized that to implement my algorithm described in the other comment, you need to be able to do things like copying blocks of the array from being in order XYZW to being in order XZYW in linear time. This can easily be done by first reversing the block YZ, then reversing the reversed Y, and then reversing the reversed Z…

Comment by Sariel Har-Peled
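That exchange-by-reversals trick can be sketched like this (function names mine); it exchanges two adjacent blocks of possibly different lengths in linear time and O(1) extra space:

```python
def reverse(a, i, j):
    # reverse a[i:j] in place
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i, j = i + 1, j - 1

def exchange_blocks(a, lo, mid, hi):
    # Turn ...YZ... into ...ZY..., where Y = a[lo:mid] and Z = a[mid:hi].
    # As described: reverse the whole YZ stretch, then un-reverse each
    # of the two (now swapped) pieces.
    reverse(a, lo, hi)               # YZ -> rev(Z) rev(Y)
    reverse(a, lo, lo + (hi - mid))  # fix the Z piece
    reverse(a, lo + (hi - mid), hi)  # fix the Y piece
```

Each element is touched a constant number of times, so the rotation costs O(hi − lo) time total.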

Hi Sariel,

I’m curious as to what your “the most obvious algorithm” is. Personally I got stuck, so I started Googling. The best I’ve been able to find is http://arxiv.org/abs/0805.1598v1, which is simple but far from obvious.

Comment by Dirk Gerrits
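As I read the linked paper, its linear-time algorithm works on blocks of size 3^k − 1, whose cycle leaders under the mapping i → 2i mod 3^k are exactly the powers of 3, and uses reversals to rotate the next block into position. A sketch, assuming I have the paper’s details right (note this produces the in-shuffle b_1, a_1, b_2, a_2, …; the out-shuffle asked in the post follows by applying it to the subarray that excludes the first and last elements):

```python
def _reverse(a, i, j):
    # reverse a[i:j] in place
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i, j = i + 1, j - 1

def in_shuffle(a):
    # In-place, linear-time, O(log n)-space in-shuffle:
    # a_1..a_n b_1..b_n  ->  b_1 a_1 b_2 a_2 ... b_n a_n
    # (sketch of the cycle-leader algorithm from arXiv:0805.1598)
    start, rest = 0, len(a)          # rest = 2 * (pairs remaining)
    while rest > 1:
        # largest k with 3^k - 1 <= rest
        k = 1
        while 3 ** (k + 1) - 1 <= rest:
            k += 1
        m2 = 3 ** k - 1              # shuffle m2 elements this round
        m, n = m2 // 2, rest // 2
        # rotate so the current block is a_1..a_m b_1..b_m
        # (exchange a_{m+1}..a_n with b_1..b_m via three reversals)
        _reverse(a, start + m, start + n)
        _reverse(a, start + n, start + n + m)
        _reverse(a, start + m, start + n + m)
        # cycle-leader pass: follow i -> 2i mod (m2 + 1); the cycle
        # leaders within a block of size 3^k - 1 are 1, 3, ..., 3^(k-1)
        for j in range(k):
            leader = i = 3 ** j
            val = a[start + i - 1]
            while True:
                i = (2 * i) % (m2 + 1)
                a[start + i - 1], val = val, a[start + i - 1]
                if i == leader:
                    break
        start, rest = start + m2, rest - m2
    return a
```

Each element is moved O(1) times per round and every round permanently finishes its block, giving linear time overall with only a few counters of working memory.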

If 2n+1 is a prime (a tiny assumption 😉), then the mapping i -> 2i mod (2n+1) defines a permutation which, if you implement it, gives you (more or less) the right ordering.
If 2n+1 is not a prime, then you have to work harder. For example, there is always a prime number between n and 2n (I assume here that I have access to the list of prime numbers). Let this prime number be p = 2k+1. Use the above block-copying trick so that you have the first k a’s followed by the first k b’s. Apply the simple algorithm to this block, and then recurse on the remaining a’s and b’s.

Comment by Sariel Har-Peled

…and now that I think about it, I think my algorithm has a bug: while it is true that the multiplicative group modulo a prime is cyclic, it is not necessarily true that 2 is a generator. Oh well.

Comment by Sariel Har-Peled

The mapping i -> 2i mod (2n+1), regardless of whether 2n+1 is prime or not, defines an in-shuffle. The question posed here is to do an out-shuffle, but that is a simple modification. To out-shuffle the array A[0…2n-1], simply in-shuffle the subarray A[1…2n-2]. Alternatively, use the mapping i -> 2i mod (2n-1).

Anyway, thanks for the explanation. Although I still maintain that this is not exactly an obvious algorithm. 😉

Comment by Dirk Gerrits
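The mapping and the in-/out-shuffle reduction can be checked with a small out-of-place sketch (function names mine; this is only a correctness check, not the in-place algorithm):

```python
def in_shuffle_copy(arr):
    # Apply i -> 2i mod (2n+1) to 1-based positions of a copy of arr;
    # this realizes the in-shuffle b_1 a_1 b_2 a_2 ...
    m = len(arr)                     # m = 2n
    out = [None] * m
    for i in range(1, m + 1):
        out[(2 * i) % (m + 1) - 1] = arr[i - 1]
    return out

def out_shuffle_copy(arr):
    # Out-shuffle A[0..2n-1] by in-shuffling the subarray A[1..2n-2],
    # leaving the first and last elements fixed.
    return [arr[0]] + in_shuffle_copy(arr[1:-1]) + [arr[-1]]
```

Note that the mapping is a valid permutation for any n, prime modulus or not; primality only mattered for the cyclic-group argument in the earlier comments.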

Meh, I mixed 1-based and 0-based indexing. Sloppy.

Comment by Dirk Gerrits

Could someone please explain the puzzle? I’m reading it again and again, but I do not understand why a simple for-loop running up from 0 to n and exchanging the elements k and k+n is not a solution to the problem. But this is obviously not the desired solution, or is it?

Comment by jab

Never mind, now I noticed the difficulty.

Comment by jab

I am not sure I understand the question.

Why can’t you swap a2 with b1, and then a2 with a3, and so on? In each step swap one new element into its correct place and you will do at most 2*n swaps.

What am I missing?

Comment by Juan

[…] I’m underthinking the puzzle here, but it seems like heap sort solves the problem. It certainly meets the time requirements, […]

Pingback by A Sorting Puzzle : depth first search