[python] Multiprocessing a for loop?

I have an array (called data_inputs) containing the names of hundreds of astronomy image files. These images are then manipulated. My code works and takes a few seconds to process each image, but it can only do one image at a time because I'm running the array through a for loop:

from astropy.io import fits

for name in data_inputs:
    sci = fits.open(name + '.fits')
    # image is manipulated here

There is no reason why I have to modify an image before any other, so is it possible to utilise all 4 cores on my machine with each core running through the for loop on a different image?

I've read about the multiprocessing module but I'm unsure how to implement it in my case. I'm keen to get multiprocessing to work because eventually I'll have to run this on 10,000+ images.

Tags: python, multiprocessing

The answer:

You can use multiprocessing.Pool:

from multiprocessing import Pool

from astropy.io import fits

class Engine(object):
    """Callable that carries the shared parameters into each worker process."""
    def __init__(self, parameters):
        self.parameters = parameters
    def __call__(self, filename):
        sci = fits.open(filename + '.fits')
        manipulated = manipulate_image(sci, self.parameters)  # your processing step
        return manipulated

pool = Pool(8)  # 8 worker processes; set this to the number of cores you have
engine = Engine(my_parameters)
try:
    data_outputs = pool.map(engine, data_inputs)
finally:
    # make sure the worker processes are shut down, even if errors happen
    pool.close()
    pool.join()
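
Two details are worth noting: Engine must be defined at module level so that its instances can be pickled and sent to the worker processes (a lambda or nested function would not be picklable), which is why a small callable class is used to carry my_parameters; and pool.map returns its results in the same order as data_inputs, so data_outputs lines up with the input filenames.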

Alternatively, on Python 3 the pool can be used as a context manager, which closes it for you:

with Pool() as pool:
    pool.map(fits.open, [name + '.fits' for name in data_inputs])
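
If you would rather not write a class, the same pattern works with a plain module-level function and functools.partial. Here is a minimal sketch, assuming manipulate_image, my_parameters and data_inputs are defined as above; note that whatever manipulate_image returns must be picklable so it can be sent back to the parent process:

from functools import partial
from multiprocessing import Pool

from astropy.io import fits

def process_one(filename, parameters):
    """Open one FITS file and run the manipulation on it in a worker process."""
    sci = fits.open(filename + '.fits')
    try:
        # manipulate_image is the same placeholder used in the answer above
        return manipulate_image(sci, parameters)
    finally:
        sci.close()  # release the file handle held by this worker

if __name__ == '__main__':
    # partial bakes in the shared parameters, so pool.map only has to pass a filename
    worker = partial(process_one, parameters=my_parameters)
    with Pool(processes=4) as pool:  # one worker per core on a 4-core machine
        data_outputs = pool.map(worker, data_inputs)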