MergeSort Example using Python Multiprocessing

Sorting algorithms are everywhere: in filesystems, in databases, in the sort methods of the JavaScript and Ruby Array classes and of the Python list type.
There are many sorting algorithms, but merge sort is among the best known.

Merge sort divides the list (array) of elements into sub-lists and then merges the sorted sub-lists to produce the complete sorted sequence. Wikipedia offers great pseudo-code, on which I based my example. The algorithm uses recursion for the division into sub-lists, and because of its divide-and-conquer logic it can also be parallelized.

Implementation

We start by writing the main function, which divides our list into two sub-lists and then sorts each sub-list individually:

def merge_sort(a_list):
    length = len(a_list)
    if length <= 1:
        return a_list
    middle_index = length // 2
    left = a_list[0:middle_index]
    right = a_list[middle_index:]
    left = merge_sort(left)
    right = merge_sort(right)
    return merge(left, right)

Let us take it step-by-step:

if length <= 1:
    return a_list

Return immediately if the list has one element or none, since such a list is trivially sorted. Note that this is also the termination condition of the recursion!

middle_index = length // 2
left = a_list[0:middle_index]
right = a_list[middle_index:]

Find the middle of the list (middle_index) and divide the input list into two distinct sub-lists, left & right.


left = merge_sort(left)
right = merge_sort(right)

The recursive part: call merge_sort for each of the new sub-lists, left & right. Let's say that we have an initial list of 10 elements that needs to be sorted; in the first call of merge_sort the list will be divided into two sub-lists of 5 elements each, and merge_sort will then be called for each of them. In the serial implementation the left sub-list is completely sorted before we proceed to the right one. Yet the two sorting operations are independent of each other, which makes them good candidates for parallelization.
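
To make the division concrete, here is what the first split looks like for a hypothetical list of 10 elements (the values are arbitrary):

a_list = [9, 3, 7, 1, 6, 2, 8, 5, 4, 0]   # 10 elements, arbitrary values
middle_index = len(a_list) // 2           # 5
left = a_list[0:middle_index]             # [9, 3, 7, 1, 6]
right = a_list[middle_index:]             # [2, 8, 5, 4, 0]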

Based on this simple observation, we will later see how Python multiprocessing can be used to take advantage of this algorithmic property. As a final step in our algorithm, the two sorted sub-lists must be merged into a single list in which all elements appear in sorted order.

Merging the distinct sub-lists

And now the merge function, which merges two sorted sub-lists:

def merge(left, right):
    sorted_list = []
    # We create shallow copies so that we do not mutate
    # the original objects.
    left = left[:]
    right = right[:]
    # We do not have to actually inspect the lengths,
    # as an empty list's truth value evaluates to False.
    # This is for algorithm demonstration purposes.
    while len(left) > 0 or len(right) > 0:
        if len(left) > 0 and len(right) > 0:
            if left[0] <= right[0]:
                sorted_list.append(left.pop(0))
            else:
                sorted_list.append(right.pop(0))
        elif len(left) > 0:
            sorted_list.append(left.pop(0))
        elif len(right) > 0:
            sorted_list.append(right.pop(0))
    return sorted_list

Taking it step-by-step again:

while len(left) > 0 or len(right) > 0:

We need to exhaust both lists. Remember that the two sub-lists may differ in length by one element if the initial list has an odd number of elements.

    if len(left) > 0 and len(right) > 0:

If both lists still have elements, compare their first elements and append the smaller one to the result:

            if left[0] <= right[0]:
                sorted_list.append(left.pop(0))
            else:
                sorted_list.append(right.pop(0))
        elif len(left) > 0:
            sorted_list.append(left.pop(0))
        elif len(right) > 0:
            sorted_list.append(right.pop(0))

If only one of the two lists still has elements, we keep extracting elements from the non-empty one.

On second thought, the handling of the exhausted case could be done in a single step, which saves some computational time:

        elif len(left) > 0:
            sorted_list.extend(left)
            break
        elif len(right) > 0:
            sorted_list.extend(right)
            break

Finally, the function returns the merged list which now contains all initial elements in ascending order.
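
As a quick sanity check, here is how merge and merge_sort behave on a couple of small, purely illustrative inputs:

>>> merge([1, 4, 7], [2, 3, 9])
[1, 2, 3, 4, 7, 9]
>>> merge_sort([5, 2, 9, 1, 7])
[1, 2, 5, 7, 9]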

Observations

For small lists this procedure is rather trivial for a modern CPU core. But what happens when a list has a significant size? We talked about parallelization. Without having to modify the code of the functions described previously, we can simply divide our list into a number of sub-lists equal to the number of cores at our disposal and assign each one to a different process.

Caveat: the final merging is performed by a single core, and it accounts for the largest fraction of the algorithm's running time. Nevertheless, the process speeds up significantly for large lists. An additional note: spawning the new processes (fork system calls) noticeably delays the overall procedure for small lists. The overhead of creating the objects and handling the multiple processes becomes a less significant factor for large lists, since its percentage of the total sorting time diminishes as the total time grows.

Putting it all together

I have written an annotated version of the parallel merge sort using Python multiprocessing.
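
The snippet below is only a minimal sketch of that idea rather than the full annotated version: it reuses the merge_sort and merge functions defined above, splits the input into one chunk per worker process, sorts the chunks in a multiprocessing.Pool and performs the final merge in the parent process. The chunking strategy, the pool size and the small self-test are illustrative assumptions.

import multiprocessing

def parallel_merge_sort(a_list, process_count=None):
    # One chunk per available core (or per requested process count).
    process_count = process_count or multiprocessing.cpu_count()
    chunk_size = max(1, len(a_list) // process_count)
    chunks = [a_list[i:i + chunk_size]
              for i in range(0, len(a_list), chunk_size)]
    # Each worker process sorts its own chunk independently,
    # using the merge_sort function defined earlier.
    with multiprocessing.Pool(process_count) as pool:
        sorted_chunks = pool.map(merge_sort, chunks)
    # The final merging happens in a single process (see the caveat above).
    while len(sorted_chunks) > 1:
        sorted_chunks.append(merge(sorted_chunks.pop(0), sorted_chunks.pop(0)))
    return sorted_chunks[0] if sorted_chunks else []

if __name__ == '__main__':
    import random
    data = [random.randint(0, 10**6) for _ in range(10000)]
    assert parallel_merge_sort(data) == sorted(data)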

Benchmarks

I have executed some test runs on a VPS, which uses a 4-core CPU, Intel Xeon L5520 @ 2.27GHz. However, available CPU time fluctuates due to other VMs consuming resources from time to time. All time measurements are in seconds. The two charts display the change in processing time (y-axis) as a function of the number of list elements (x-axis).

Abbreviations:

SP: Single Process Time
MP: Multiple Process Time
MPMT: Multi-process Final Merge Time

Using 2 cores

Elements SP MP MP/SP (%) MPMT MPMT/MP (%)
464,365 132.278 104.410 78.93 66.288 63.49
479,585 141.993 107.780 75.91 64.916 60.23
628,561 244.518 183.056 74.86 114.838 62.73
680,722 285.694 229.938 80.48 142.639 62.03
703,865 305.931 234.565 76.67 148.318 63.23
729,973 330.974 254.887 77.01 162.184 63.63
762,260 372.593 277.687 74.53 173.243 62.39
787,132 401.388 296.826 73.95 189.222 63.75
827,259 432.098 335.185 77.57 212.990 63.54
831,855 445.834 338.542 75.93 217.752 64.32

Benchmarks using 2 CPU cores

Using 4 cores

Elements SP MP MP/SP (%) MPMT MPMT/MP (%)
321,897 68.038 45.711 67.18 38.777 84.83
347,426 82.131 54.614 66.50 46.272 84.73
479,979 153.305 101.320 66.09 87.410 86.27
574,747 215.026 147.384 68.54 127.574 86.56
657,324 272.733 196.504 72.05 173.142 88.11
693,975 311.907 219.233 70.29 191.414 87.31
703,379 332.943 232.885 69.95 196.705 84.46
814,617 440.827 312.758 70.95 273.334 87.39
837,964 446.744 323.348 72.38 283.348 87.63
858,084 476.886 363.580 76.24 320.736 88.22

Benchmarks using 4 CPU cores

I would be more than happy to read any comments or answer any questions!


Using GROUP_CONCAT Lists Directly In SQL

What is the GROUP_CONCAT function?

Ever wondered how to retrieve the values within a group when you perform a GROUP BY operation on your data? GROUP_CONCAT returns a delimited list of a field's or expression's values for each group. It supports ordering of the values, the delimiter is a parameter (SEPARATOR), and you can apply a DISTINCT filter to retrieve only unique values. I am giving an example which is highly unlikely to be used in real life, but it serves the demonstration well, and it also shows the power of prepared statements, which make dynamic SQL possible.

GROUP_CONCAT([DISTINCT] expr [,expr ...]
             [ORDER BY {unsigned_integer | col_name | expr}
                 [ASC | DESC] [,col_name ...]]
             [SEPARATOR str_val])
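
As a quick illustration of the syntax, assume a hypothetical orders table with customer_id and product columns (the table and column names are made up for this example). The following returns, for each customer, an alphabetically ordered, semicolon-separated list of the distinct products they bought:

SELECT customer_id,
       GROUP_CONCAT(DISTINCT product
                    ORDER BY product ASC
                    SEPARATOR '; ') AS product_list
FROM orders
GROUP BY customer_id;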

Caveat: the length of the generated list is controlled by the group_concat_max_len variable.
The good news is that it can also be set dynamically as a SESSION variable:

SET SESSION group_concat_max_len = 100000;

The default length is 1024 bytes.
If your GROUP_CONCAT result exceeds the permitted length, a warning is generated and you can view it as usual with:

SHOW WARNINGS;

How can I use a list of IDs?

Let's assume the common scenario where you have tables with a structure like this:

CREATE TABLE test(
  id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  fk_id_1 INT UNSIGNED,
  fk_id_2 INT UNSIGNED
);

CREATE TABLE fk_table_1(
  id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(50)
);

Executing this:

SET @LIST:='';
SELECT @LIST:=GROUP_CONCAT(DISTINCT fk_id_1 SEPARATOR ',') AS fk_id_1_list
FROM test 
WHERE fk_id_2 = 7293
GROUP BY fk_id_2;

Comma is the default separator, so specifying it is redundant. We stored the list in the session variable @LIST.
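
For illustration, the variable now simply holds a comma-separated string of IDs (the values below are made up and depend on your data):

SELECT @LIST;
-- e.g. '12,57,103,245'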

We can retrieve data from fk_table_1 using the list of IDs:

SET @FK_TABLE_STMT = CONCAT('SELECT id, name FROM 
                             fk_table_1 WHERE id IN(', 
                             @LIST, 
                             ')'
                           );
PREPARE fk_table_data FROM @FK_TABLE_STMT;
EXECUTE fk_table_data;

…and the data from fk_table_1 is returned.
Once we are done, we remove the prepared statement by executing:

DEALLOCATE PREPARE fk_table_data;