### Sorting in Python

List sorting in Python (the `list.sort` method) is fast: sorting has been heavily optimised in successive versions of Python.

However, there's always been a penalty to pay if you don't want the sort's default ordering. Maybe you want to sort on one field, for example sorting a list of log items by timestamp. Or maybe you want to sort by some function of each item, for example sorting a list of vectors by magnitude.

Historically there have been two methods of doing this. The simplest is to pass a custom comparison function to list.sort:

```python
def cmp_magnitude(a, b):
    return cmp(a[0]*a[0] + a[1]*a[1],
               b[0]*b[0] + b[1]*b[1])

vectors.sort(cmp=cmp_magnitude)
```

Here we're sorting 2D vectors by magnitude, using the standard trick to save an expensive square-root operation: sqrt(x) is monotonic in x, so sorting by the square of the magnitude is equivalent to sorting by magnitude.

This code is easy to understand, but slow. It makes a Python function call for each comparison made in the sort; and a sort will typically make many more comparisons than there are elements to sort.
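The comparison count is easy to observe empirically. In modern Python 3 the `cmp` argument is gone, but `functools.cmp_to_key` lets us run the same pattern and count the calls (a sketch for illustration, not from the original article; the vector data here is made up):

```python
import functools

calls = 0

def cmp_magnitude(a, b):
    # Count every comparison the sort makes.
    global calls
    calls += 1
    ka = a[0]*a[0] + a[1]*a[1]
    kb = b[0]*b[0] + b[1]*b[1]
    return (ka > kb) - (ka < kb)  # Python 3 spelling of cmp()

# 100 thoroughly scrambled 2D vectors (arbitrary example data).
vectors = [((i * 37) % 100, (i * 61) % 100) for i in range(100)]
vectors.sort(key=functools.cmp_to_key(cmp_magnitude))

print(calls)  # several hundred comparisons for only 100 elements
```

The sort makes far more than 100 comparison calls for 100 elements, which is exactly the overhead described above.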

The second, trickier method is known in the Python world as decorate, sort, undecorate (DSU) and in the Perl world as the Schwartzian Transform. The trick here is to decorate each element in the list by prepending a key to each element; to sort the decorated list with the default sort; and then to undecorate the list by stripping off the keys, leaving the sorted list of elements.

```python
decorated = [(val[0]*val[0] + val[1]*val[1], val)
             for val in vectors]
decorated.sort()
vectors = [val for (key, val) in decorated]
```

This avoids multiple calculation of the key by precomputing each key once; and avoids the multiple expensive calls to a custom comparison function. However, the auxiliary list,

`decorated`, takes extra memory and extra time to construct and deconstruct. And the code is not straightforward to the untrained eye: once you're familiar with DSU, you'll easily identify it as such, but if you haven't seen DSU before it takes a little puzzling through.

And there's another potential pitfall to DSU: it may not have the same behaviour in the face of equal keys as sort-with-cmp. In sort-with-cmp, elements with equal keys will be subject to whatever behaviour sort implements for equal elements. In Python 2.4, sort is guaranteed stable, so the existing order will be preserved. In previous Python versions, sort was not guaranteed stable, although in practice it usually was.
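To see the pitfall concretely: when two keys compare equal, tuple comparison falls through to the second element, so the decorated sort compares the values themselves. In Python 3 this fails outright if the values don't define an ordering (in Python 2 it silently compared them anyway). A small sketch, not from the original article:

```python
# Two log items with the same timestamp key; dicts are not orderable.
items = [{"msg": "b"}, {"msg": "a"}]
decorated = [(100, item) for item in items]  # equal keys for both

try:
    decorated.sort()
except TypeError as e:
    # Tuple comparison hit the dict values after the keys tied.
    print("tie broken by value comparison:", e)
```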

In DSU, elements with equal keys will be sorted by value. This may or may not be a problem, depending on the application. To avoid this and retain stability in elements with equal keys, decorate with both key and index:

```python
decorated = [(val[0]*val[0] + val[1]*val[1], i, val)
             for (i, val) in enumerate(vectors)]
decorated.sort()
vectors = [val for (key, i, val) in decorated]
```

Or you could borrow from a typical C programming pattern: don't sort elements, sort pointers or indexes to elements:

```python
indexed = [(val[0]*val[0] + val[1]*val[1], i)
           for (i, val) in enumerate(vectors)]
indexed.sort()
vectors = [vectors[i] for (key, i) in indexed]
```
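As a sanity check, on distinct keys all three variants produce the same ordering as each other (and as sorting by key). A quick Python 3 self-test with made-up vectors, not from the original article:

```python
# Example vectors with distinct magnitudes: 25, 2, 36, 8.
vectors = [(3, 4), (1, 1), (0, 6), (2, 2)]

def magnitude2(v):
    return v[0]*v[0] + v[1]*v[1]

# Plain DSU.
plain = [v for (k, v) in sorted((magnitude2(v), v) for v in vectors)]

# Stable DSU: break key ties by original index.
stable = [v for (k, i, v) in
          sorted((magnitude2(v), i, v) for (i, v) in enumerate(vectors))]

# Index DSU: sort (key, index) pairs, then gather the elements.
indexed = sorted((magnitude2(v), i) for (i, v) in enumerate(vectors))
gathered = [vectors[i] for (k, i) in indexed]

assert plain == stable == gathered == sorted(vectors, key=magnitude2)
print(plain)  # [(1, 1), (2, 2), (3, 4), (0, 6)]
```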

Python 2.4 adds an extra optional argument to `list.sort`, `key`, which can be passed a custom key-computation function. One call to `key` is made for each element in the list. The list is then sorted by the key of each element, rather than the value of each element. In effect, this internalises DSU within `list.sort`'s implementation. The code is straightforward:

```python
def key_magnitude(a):
    return a[0]*a[0] + a[1]*a[1]

vectors.sort(key=key_magnitude)
```

This is as clear and concise, and has the same stability in the face of equal keys, as sort-with-cmp. But it also avoids most of the overhead of sort-with-cmp, calling a key function once per element rather than a comparison function multiple times per element.
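The stability claim is easy to verify: with `key=`, elements whose keys compare equal keep their original relative order. A Python 3 example (equivalent in behaviour to the 2.4 feature described here), with made-up log items:

```python
# Two log items share a timestamp; "b" precedes "a" in the input.
log = [(100, "b"), (100, "a"), (50, "c")]
log.sort(key=lambda item: item[0])  # sort by timestamp only
print(log)  # [(50, 'c'), (100, 'b'), (100, 'a')] -- the tie keeps input order
```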

Some timings, sorting float vectors of various lengths N (code); all times in seconds:

| sort | N=100 | N=1,000 | N=10,000 | N=100,000 | N=1,000,000 |
|---|---|---|---|---|---|
| sort(cmp) | 0.00117 | 0.0191 | 0.272 | 3.47 | 43.6 |
| DSU | 0.000212 | 0.00274 | 0.265 | 3.57 | 41.9 |
| stable DSU | 0.000225 | 0.00289 | 0.273 | 3.69 | 45.4 |
| index DSU | 0.000228 | 0.00295 | 0.275 | 3.73 | 44.6 |
| sort(key) | 0.000168 | 0.00202 | 0.0278 | 0.448 | 6.28 |
| normal sort | 0.000105 | 0.00165 | 0.0239 | 0.378 | 5.38 |

And the same data in a log-log chart:

What this shows is that, for small N, the DSU pattern is significantly more efficient than sort-with-cmp. However, at N=10,000 and above, DSU's overhead in list allocation, decoration, and undecoration starts to bite, leading to similar performance to sort-with-cmp. The two stable DSU patterns have slightly higher cost than plain DSU, with index DSU in general performing slightly worse: possibly the indexing overhead in reassembling the undecorated list is to blame.

But in all cases, sort-with-key is an order of magnitude faster than any other sorting pattern, with performance close to that of a regular sort. If you're targeting Python 2.4, it's time to let go of DSU: sort-with-key is the new state of the art.