Flattening a shallow list in Python

Is there a simple way to flatten a list of iterables with a list comprehension, or failing that, what would you all consider to be the best way to flatten a shallow list like this, balancing performance and readability?

I tried to flatten such a list with a nested list comprehension, like this:

[image for image in menuitem for menuitem in list_of_menuitems]

But I get in trouble of the NameError variety there, because the name 'menuitem' is not defined. After googling and looking around on Stack Overflow, I got the desired results with a reduce statement:

reduce(list.__add__, map(lambda x: list(x), list_of_menuitems))

But this method is fairly unreadable, and I need that list(x) call there because x is a Django QuerySet object.

Conclusion:

Thanks to everyone who contributed to this question. Here is a summary of what I learned. I'm also making this a community wiki in case others want to add to or correct these observations.

My original reduce statement is redundant and is better written this way:

>>> reduce(list.__add__, (list(mi) for mi in list_of_menuitems))

This is the correct syntax for a nested list comprehension (Brilliant summary dF!):

>>> [image for mi in list_of_menuitems for image in mi]

But neither of these methods is as efficient as using itertools.chain:

>>> from itertools import chain
>>> list(chain(*list_of_menuitems))

And as @cdleary notes, it's probably better style to avoid * operator magic by using chain.from_iterable like so:

>>> flat = chain.from_iterable([[1,2],[3],[5,89],[],[6]])
>>> print(list(flat))
[1, 2, 3, 5, 89, 6]

Tags: python, list-comprehension

Answers:


You almost have it! The way to do nested list comprehensions is to put the for statements in the same order as they would go in regular nested for statements.

Thus, this

for inner_list in outer_list:
    for item in inner_list:
        ...

corresponds to

[... for inner_list in outer_list for item in inner_list]

So you want

[image for menuitem in list_of_menuitems for image in menuitem]
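For a concrete sanity check, here is a minimal sketch with made-up data standing in for the Django querysets from the question:

list_of_menuitems = [['image00', 'image01'], ['image10'], []]  # hypothetical stand-in data

flat = [image for menuitem in list_of_menuitems for image in menuitem]
print(flat)  # ['image00', 'image01', 'image10']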

Another option is a recursive generator that handles nested lists and tuples:

def is_iterable(item):
    return isinstance(item, (list, tuple))


def flatten(items):
    for i in items:
        if is_iterable(i):
            # recurse into nested lists/tuples
            for m in flatten(i):
                yield m
        else:
            yield i

Test:

print(list(flatten([1.0, 2, 'a', (4,), ((6,), (8,)), (((8,),(9,)), ((12,),(10)))])))

The easiest way to achieve this in either Python 2 or 3 is to use the morph library (pip install morph).

The code is:

import morph

nested = [[1,2],[3],[5,89],[],[6]]
flattened_list = morph.flatten(nested)  # returns [1, 2, 3, 5, 89, 6]

Here is a version that works for multiple levels of nesting, using collections.Iterable (collections.abc.Iterable on newer Python versions):

import collections

def flatten(o, flatten_condition=lambda i: isinstance(i,
               collections.Iterable) and not isinstance(i, str)):
    result = []
    for i in o:
        if flatten_condition(i):
            result.extend(flatten(i, flatten_condition))
        else:
            result.append(i)
    return result
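A quick usage sketch, assuming the flatten defined above (note that on Python 3.10+ the plain collections.Iterable alias is gone; use collections.abc.Iterable instead):

nested = [1, [2, 3, [4, 5]], 'abc', [[6], 7]]
print(flatten(nested))  # [1, 2, 3, 4, 5, 'abc', 6, 7]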

If each item in the list is a string (and any strings inside those strings use " " rather than ' '), you can use regular expressions (the re module):

>>> import re
>>> flattener = re.compile("\'.*?\'")
>>> flattener
<_sre.SRE_Pattern object at 0x10d439ca8>
>>> stred = str(in_list)
>>> outed = flattener.findall(stred)

The above code converts in_list into a string, uses the regex to find all the substrings within quotes (i.e. each item of the list) and spits them out as a list.
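A minimal sketch of that idea with a hypothetical in_list; note that the quotes stay attached to each match, and the trick breaks down if an item contains an embedded single quote:

import re

in_list = [['foo', 'bar'], ['baz'], []]  # hypothetical input
flattener = re.compile("\'.*?\'")
outed = flattener.findall(str(in_list))
print(outed)  # ["'foo'", "'bar'", "'baz'"]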


Off the top of my head, you can eliminate the lambda:

reduce(list.__add__, map(list, [mi.image_set.all() for mi in list_of_menuitems]))

Or even eliminate the map, since you've already got a list-comp:

reduce(list.__add__, [list(mi.image_set.all()) for mi in list_of_menuitems])

You can also just express this as a sum of lists:

sum([list(mi.image_set.all()) for mi in list_of_menuitems], [])
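The same sum() trick works on plain lists of lists; the second argument is the empty list the concatenation starts from:

list_of_lists = [[1, 2], [3], [5, 89], [], [6]]
print(sum(list_of_lists, []))  # [1, 2, 3, 5, 89, 6]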

@S.Lott: You inspired me to write a timeit app.

I figured it would also vary based on the number of partitions (number of iterators within the container list) -- your comment didn't mention how many partitions there were of the thirty items. This plot is flattening a thousand items in every run, with varying number of partitions. The items are evenly distributed among the partitions.

Flattening Comparison

Code (Python 2.6):

#!/usr/bin/env python2.6

"""Usage: %prog item_count"""

from __future__ import print_function

import collections
import itertools
import operator
from timeit import Timer
import sys

import matplotlib.pyplot as pyplot

def itertools_flatten(iter_lst):
    return list(itertools.chain(*iter_lst))

def itertools_iterable_flatten(iter_iter):
    return list(itertools.chain.from_iterable(iter_iter))

def reduce_flatten(iter_lst):
    return reduce(operator.add, map(list, iter_lst))

def reduce_lambda_flatten(iter_lst):
    return reduce(operator.add, map(lambda x: list(x), [i for i in iter_lst]))

def comprehension_flatten(iter_lst):
    return list(item for iter_ in iter_lst for item in iter_)

METHODS = ['itertools', 'itertools_iterable', 'reduce', 'reduce_lambda',
           'comprehension']

def _time_test_assert(iter_lst):
    """Make sure all methods produce an equivalent value.
    :raise AssertionError: On any non-equivalent value."""
    callables = (globals()[method + '_flatten'] for method in METHODS)
    results = [callable(iter_lst) for callable in callables]
    if not all(result == results[0] for result in results[1:]):
        raise AssertionError

def time_test(partition_count, item_count_per_partition, test_count=10000):
    """Run flatten methods on a list of :param:`partition_count` iterables.
    Normalize results over :param:`test_count` runs.
    :return: Mapping from method to (normalized) microseconds per pass.
    """
    iter_lst = [[dict()] * item_count_per_partition] * partition_count
    print('Partition count:    ', partition_count)
    print('Items per partition:', item_count_per_partition)
    _time_test_assert(iter_lst)
    test_str = 'flatten(%r)' % iter_lst
    result_by_method = {}
    for method in METHODS:
        setup_str = 'from test import %s_flatten as flatten' % method
        t = Timer(test_str, setup_str)
        per_pass = 1e6 * t.timeit(number=test_count) / test_count  # convert seconds to usec/pass
        print('%20s: %.2f usec/pass' % (method, per_pass))
        result_by_method[method] = per_pass
    return result_by_method

if __name__ == '__main__':
    if len(sys.argv) != 2:
        raise ValueError('Need a number of items to flatten')
    item_count = int(sys.argv[1])
    partition_counts = []
    pass_times_by_method = collections.defaultdict(list)
    for partition_count in xrange(1, item_count):
        if item_count % partition_count != 0:
            continue
        items_per_partition = item_count / partition_count
        result_by_method = time_test(partition_count, items_per_partition)
        partition_counts.append(partition_count)
        for method, result in result_by_method.iteritems():
            pass_times_by_method[method].append(result)
    for method, pass_times in pass_times_by_method.iteritems():
        pyplot.plot(partition_counts, pass_times, label=method)
    pyplot.legend()
    pyplot.title('Flattening Comparison for %d Items' % item_count)
    pyplot.xlabel('Number of Partitions')
    pyplot.ylabel('Microseconds')
    pyplot.show()

Edit: Decided to make it community wiki.

Note: METHODS should probably be accumulated with a decorator, but I figure it'd be easier for people to read this way.


This solution works for arbitrary nesting depths - not just the "list of lists" depth that some (all?) of the other solutions are limited to:

def flatten(x):
    result = []
    for el in x:
        if hasattr(el, "__iter__") and not isinstance(el, basestring):
            result.extend(flatten(el))
        else:
            result.append(el)
    return result

It's the recursion which allows for arbitrary depth nesting - until you hit the maximum recursion depth, of course...
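A usage sketch, assuming the flatten above; basestring exists only in Python 2, so on Python 3 you would test against str (and bytes) instead:

nested = [1, [2, (3, 4)], 'five', [[6, [7]]]]
print(flatten(nested))  # [1, 2, 3, 4, 'five', 6, 7]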


What about:

from operator import add
reduce(add, map(lambda x: list(x.image_set.all()), [mi for mi in list_of_menuitems]))

But Guido recommends against doing too much in a single line of code, since it hurts readability. There is minimal, if any, performance gain from doing this in a single line vs. multiple lines.
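For comparison, the multi-line spelling that the readability argument points to would look roughly like this (a sketch; image_set.all() is the Django queryset access from the question):

def collect_images(list_of_menuitems):
    # Multi-line equivalent of the one-liner above (sketch).
    images = []
    for menuitem in list_of_menuitems:
        images.extend(menuitem.image_set.all())  # image_set comes from the question's models
    return images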


pylab provides a flatten: link to numpy flatten


A simple alternative is to use numpy's concatenate, but it converts the contents to float:

import numpy as np
print np.concatenate([[1,2],[3],[5,89],[],[6]])
# array([  1.,   2.,   3.,   5.,  89.,   6.])
print list(np.concatenate([[1,2],[3],[5,89],[],[6]]))
# [  1.,   2.,   3.,   5.,  89.,   6.]
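If the float conversion is unwanted, one workaround (a sketch) is to cast the result back to int before converting to a list:

import numpy as np

flat = np.concatenate([[1, 2], [3], [5, 89], [], [6]])
print(flat.astype(int).tolist())  # [1, 2, 3, 5, 89, 6]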


If you're looking for a simple, built-in one-liner, you can use:

a = [[1, 2, 3], [4, 5, 6]]
b = [i[x] for i in a for x in range(len(i))]
print b

returns

[1, 2, 3, 4, 5, 6]
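The same result without the index bookkeeping, iterating over the items directly:

a = [[1, 2, 3], [4, 5, 6]]
b = [x for i in a for x in i]
print(b)  # [1, 2, 3, 4, 5, 6]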

Have you tried flatten, as in matplotlib.cbook.flatten(seq, scalarp=...)?

l=[[1,2,3],[4,5,6], [7], [8,9]]*33

run("list(flatten(l))")
         3732 function calls (3303 primitive calls) in 0.007 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.007    0.007 <string>:1(<module>)
      429    0.001    0.000    0.001    0.000 cbook.py:475(iterable)
      429    0.002    0.000    0.003    0.000 cbook.py:484(is_string_like)
      429    0.002    0.000    0.006    0.000 cbook.py:565(is_scalar_or_string)
  727/298    0.001    0.000    0.007    0.000 cbook.py:605(flatten)
      429    0.000    0.000    0.001    0.000 core.py:5641(isMaskedArray)
      858    0.001    0.000    0.001    0.000 {isinstance}
      429    0.000    0.000    0.000    0.000 {iter}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*66

run("list(flatten(l))")
         7461 function calls (6603 primitive calls) in 0.007 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.007    0.007 <string>:1(<module>)
      858    0.001    0.000    0.001    0.000 cbook.py:475(iterable)
      858    0.002    0.000    0.003    0.000 cbook.py:484(is_string_like)
      858    0.002    0.000    0.006    0.000 cbook.py:565(is_scalar_or_string)
 1453/595    0.001    0.000    0.007    0.000 cbook.py:605(flatten)
      858    0.000    0.000    0.001    0.000 core.py:5641(isMaskedArray)
     1716    0.001    0.000    0.001    0.000 {isinstance}
      858    0.000    0.000    0.000    0.000 {iter}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*99

run("list(flatten(l))")
         11190 function calls (9903 primitive calls) in 0.010 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.010    0.010 <string>:1(<module>)
     1287    0.002    0.000    0.002    0.000 cbook.py:475(iterable)
     1287    0.003    0.000    0.004    0.000 cbook.py:484(is_string_like)
     1287    0.002    0.000    0.009    0.000 cbook.py:565(is_scalar_or_string)
 2179/892    0.001    0.000    0.010    0.000 cbook.py:605(flatten)
     1287    0.001    0.000    0.001    0.000 core.py:5641(isMaskedArray)
     2574    0.001    0.000    0.001    0.000 {isinstance}
     1287    0.000    0.000    0.000    0.000 {iter}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*132

run("list(flatten(l))")
         14919 function calls (13203 primitive calls) in 0.013 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.013    0.013 <string>:1(<module>)
     1716    0.002    0.000    0.002    0.000 cbook.py:475(iterable)
     1716    0.004    0.000    0.006    0.000 cbook.py:484(is_string_like)
     1716    0.003    0.000    0.011    0.000 cbook.py:565(is_scalar_or_string)
2905/1189    0.002    0.000    0.013    0.000 cbook.py:605(flatten)
     1716    0.001    0.000    0.001    0.000 core.py:5641(isMaskedArray)
     3432    0.001    0.000    0.001    0.000 {isinstance}
     1716    0.001    0.000    0.001    0.000 {iter}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler'

UPDATE Which gave me another idea:

l=[[1,2,3],[4,5,6], [7], [8,9]]*33

run("flattenlist(l)")
         564 function calls (432 primitive calls) in 0.000 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    133/1    0.000    0.000    0.000    0.000 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)
      429    0.000    0.000    0.000    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*66

run("flattenlist(l)")
         1125 function calls (861 primitive calls) in 0.001 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    265/1    0.001    0.000    0.001    0.001 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.000    0.000    0.001    0.001 <string>:1(<module>)
      858    0.000    0.000    0.000    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*99

run("flattenlist(l)")
         1686 function calls (1290 primitive calls) in 0.001 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    397/1    0.001    0.000    0.001    0.001 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.000    0.000    0.001    0.001 <string>:1(<module>)
     1287    0.000    0.000    0.000    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*132

run("flattenlist(l)")
         2247 function calls (1719 primitive calls) in 0.002 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    529/1    0.001    0.000    0.002    0.002 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.000    0.000    0.002    0.002 <string>:1(<module>)
     1716    0.001    0.000    0.001    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



l=[[1,2,3],[4,5,6], [7], [8,9]]*1320

run("flattenlist(l)")
         22443 function calls (17163 primitive calls) in 0.016 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   5281/1    0.011    0.000    0.016    0.016 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.000    0.000    0.016    0.016 <string>:1(<module>)
    17160    0.005    0.000    0.005    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

So to test how effective it is when recursive gets deeper: How much deeper?

l=[[1,2,3],[4,5,6], [7], [8,9]]*1320

new=[l]*33

run("flattenlist(new)")
         740589 function calls (566316 primitive calls) in 0.418 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 174274/1    0.281    0.000    0.417    0.417 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.001    0.001    0.418    0.418 <string>:1(<module>)
   566313    0.136    0.000    0.136    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



new=[l]*66

run("flattenlist(new)")
         1481175 function calls (1132629 primitive calls) in 0.809 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 348547/1    0.542    0.000    0.807    0.807 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.002    0.002    0.809    0.809 <string>:1(<module>)
  1132626    0.266    0.000    0.266    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



new=[l]*99

run("flattenlist(new)")
         2221761 function calls (1698942 primitive calls) in 1.211 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 522820/1    0.815    0.000    1.208    1.208 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.002    0.002    1.211    1.211 <string>:1(<module>)
  1698939    0.393    0.000    0.393    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



new=[l]*132

run("flattenlist(new)")
         2962347 function calls (2265255 primitive calls) in 1.630 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
 697093/1    1.091    0.000    1.627    1.627 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.003    0.003    1.630    1.630 <string>:1(<module>)
  2265252    0.536    0.000    0.536    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}



new=[l]*1320

run("flattenlist(new)")
         29623443 function calls (22652523 primitive calls) in 16.103 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
6970921/1   10.842    0.000   16.069   16.069 <ipython-input-55-39b139bad497>:4(flattenlist)
        1    0.034    0.034   16.103   16.103 <string>:1(<module>)
 22652520    5.227    0.000    5.227    0.000 {isinstance}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

My bet is on "flattenlist": I am going to use it rather than matplotlib's version for a long, long time, unless I want a lazily yielding generator, the way "flatten" in matplotlib.cbook works.

This is fast.

And here is the code:

typ=(list,tuple)


def flattenlist(d):
    thelist = []
    for x in d:
        if not isinstance(x,typ):
            thelist += [x]
        else:
            thelist += flattenlist(x)
    return thelist
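A quick usage sketch of flattenlist outside the profiler:

print(flattenlist([[1, [2, 3]], (4, 5), [6]]))  # [1, 2, 3, 4, 5, 6]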


Performance Results. Revised.

import itertools
def itertools_flatten( aList ):
    return list( itertools.chain(*aList) )

from operator import add
def reduce_flatten1( aList ):
    return reduce(add, map(lambda x: list(x), [mi for mi in aList]))

def reduce_flatten2( aList ):
    return reduce(list.__add__, map(list, aList))

def comprehension_flatten( aList ):
    return list(y for x in aList for y in x)

I flattened a 2-level list of 30 items 1000 times

itertools_flatten     0.00554
comprehension_flatten 0.00815
reduce_flatten2       0.01103
reduce_flatten1       0.01404

Reduce is always a poor choice.
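A rough timeit harness of the kind that would produce numbers like these (a sketch, not the original script; it builds a 2-level list of 30 items and runs each flattener 1000 times):

import itertools
import timeit
from functools import reduce  # a builtin on Python 2, imported from functools on Python 3

a_list = [[0, 1, 2] for _ in range(10)]  # 2-level list, 30 items total

def itertools_flatten(aList):
    return list(itertools.chain(*aList))

def reduce_flatten2(aList):
    return reduce(list.__add__, map(list, aList))

def comprehension_flatten(aList):
    return list(y for x in aList for y in x)

for fn in (itertools_flatten, reduce_flatten2, comprehension_flatten):
    print(fn.__name__, timeit.timeit(lambda: fn(a_list), number=1000))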


From my experience, the most efficient way to flatten a list of lists is:

flat_list = []
map(flat_list.extend, list_of_list)

Some timeit comparisons with the other proposed methods:

list_of_list = [range(10)]*1000
%timeit flat_list=[]; map(flat_list.extend, list_of_list)
#10000 loops, best of 3: 119 µs per loop
%timeit flat_list=list(itertools.chain.from_iterable(list_of_list))
#1000 loops, best of 3: 210 µs per loop
%timeit flat_list=[i for sublist in list_of_list for i in sublist]
#1000 loops, best of 3: 525 µs per loop
%timeit flat_list=reduce(list.__add__,list_of_list)
#100 loops, best of 3: 18.1 ms per loop

Now, the efficiency gain appears better when processing longer sublists:

list_of_list = [range(1000)]*10
%timeit flat_list=[]; map(flat_list.extend, list_of_list)
#10000 loops, best of 3: 60.7 µs per loop
%timeit flat_list=list(itertools.chain.from_iterable(list_of_list))
#10000 loops, best of 3: 176 µs per loop

And this method also works with any iterable object:

class SquaredRange(object):
    def __init__(self, n): 
        self.range = range(n)
    def __iter__(self):
        for i in self.range: 
            yield i**2

list_of_list = [SquaredRange(5)]*3
flat_list = []
map(flat_list.extend, list_of_list)
print flat_list
#[0, 1, 4, 9, 16, 0, 1, 4, 9, 16, 0, 1, 4, 9, 16]
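One caveat: these timings are from Python 2, where map() is eager. On Python 3 map() is lazy, so the extend calls never run unless you drain the iterator; an explicit loop is the safer spelling there:

list_of_list = [range(10)] * 1000

flat_list = []
for sublist in list_of_list:   # Python 3: loop instead of map() to force the side effects
    flat_list.extend(sublist)

print(len(flat_list))  # 10000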


In Python 2.6, using chain.from_iterable():

>>> from itertools import chain
>>> list(chain.from_iterable(mi.image_set.all() for mi in h.get_image_menu()))

It avoids creating intermediate lists.
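Because chain.from_iterable is lazy, you only pay for the items you actually consume. A small generic sketch:

from itertools import chain, islice

blocks = ([n, n + 1] for n in range(10**6))  # lazily produced sublists
first_five = list(islice(chain.from_iterable(blocks), 5))
print(first_five)  # [0, 1, 1, 2, 2]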


Here is the correct solution using list comprehensions (they're backward in the question):

>>> join = lambda it: (y for x in it for y in x)
>>> list(join([[1,2],[3,4,5],[]]))
[1, 2, 3, 4, 5]

In your case it would be

[image for menuitem in list_of_menuitems for image in menuitem.image_set.all()]

or you could use join and say

join(menuitem.image_set.all() for menuitem in list_of_menuitems)

In either case, the gotcha was the nesting of the for loops.


In Python 3.4 it was expected that you would be able to do:

[*innerlist for innerlist in outer_list]

(Note: this unpacking-inside-a-comprehension syntax was ultimately not accepted; it still raises a SyntaxError.)
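What PEP 448 (Python 3.5+) did add is unpacking inside list displays and function calls, so these do work:

inner = [[1, 2], [3, 4]]
print([*inner[0], *inner[1]])                         # [1, 2, 3, 4]
print([x for innerlist in inner for x in innerlist])  # [1, 2, 3, 4]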

sum(list_of_lists, []) would flatten it.

l = [['image00', 'image01'], ['image10'], []]
print sum(l,[]) # prints ['image00', 'image01', 'image10']


This version is a generator. Tweak it if you want a list.

def list_or_tuple(l):
    return isinstance(l, (list, tuple))
## the predicate selects the containers to be flattened
## write your own as required
## this one flattens every list/tuple


def flatten(seq, predicate=list_or_tuple):
    ## recursive generator
    for i in seq:
        if predicate(i):
            for j in flatten(i, predicate):
                yield j
        else:
            yield i

You can pass a different predicate if you only want to flatten elements that satisfy a condition.

Taken from the Python Cookbook.
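A usage sketch, assuming the flatten generator above, including a custom predicate that leaves tuples alone:

data = [1, (2, 3), [4, [5, (6,)]]]

print(list(flatten(data)))                                            # [1, 2, 3, 4, 5, 6]
print(list(flatten(data, predicate=lambda x: isinstance(x, list))))   # [1, (2, 3), 4, 5, (6,)]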


If you have to flatten a more complicated list, with non-iterable elements or with depth greater than 2, you can use the following function:

def flat_list(list_to_flat):
    if not isinstance(list_to_flat, list):
        yield list_to_flat
    else:
        for item in list_to_flat:
            yield from flat_list(item)

It returns a generator object, which you can convert to a list with the list() function. Note that the yield from syntax is available starting with Python 3.3; on older versions you can use explicit iteration instead.
Example:

>>> a = [1, [2, 3], [1, [2, 3, [1, [2, 3]]]]]
>>> print(list(flat_list(a)))
[1, 2, 3, 1, 2, 3, 1, 2, 3]


There seems to be a confusion with operator.add! When you add two lists together, the correct term for that is concat, not add. operator.concat is what you need to use.

If you're thinking functional, it is as easy as this:

>>> list2d = ((1,2,3),(4,5,6), (7,), (8,9))
>>> reduce(operator.concat, list2d)
(1, 2, 3, 4, 5, 6, 7, 8, 9)

You see, reduce respects the sequence type, so when you supply a tuple, you get back a tuple. Let's try with a list:

>>> list2d = [[1,2,3],[4,5,6], [7], [8,9]]
>>> reduce(operator.concat, list2d)
[1, 2, 3, 4, 5, 6, 7, 8, 9]

Aha, you get back a list.

How about performance:

>>> list2d = [[1,2,3],[4,5,6], [7], [8,9]]
>>> %timeit list(itertools.chain.from_iterable(list2d))
1000000 loops, best of 3: 1.36 µs per loop

from_iterable is pretty fast! But it's no comparison to reduce with concat.

>>> list2d = ((1,2,3),(4,5,6), (7,), (8,9))
>>> %timeit reduce(operator.concat, list2d)
1000000 loops, best of 3: 492 ns per loop
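For an apples-to-apples check, here is a sketch that times both approaches on the same list input with the timeit module (results will vary by machine and input size):

import itertools
import operator
import timeit
from functools import reduce  # needed on Python 3

list2d = [[1, 2, 3], [4, 5, 6], [7], [8, 9]]

print(timeit.timeit(lambda: list(itertools.chain.from_iterable(list2d)), number=100000))
print(timeit.timeit(lambda: reduce(operator.concat, list2d), number=100000))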
