A speed comparison of all the methods mentioned:
UPDATED on 2020.07.13 (thanks to @user3780389): timings use ONLY keys that actually exist in bigdict.
IPython 5.5.0 -- An enhanced Interactive Python.
Python 2.7.18 (default, Aug 8 2019, 00:00:00)
[GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on linux2
import numpy.random as nprnd
...: keys = nprnd.randint(100000, size=10000)
...: bigdict = dict([(_, nprnd.rand()) for _ in range(100000)])
...:
...: %timeit {key:bigdict[key] for key in keys}
...: %timeit dict((key, bigdict[key]) for key in keys)
...: %timeit dict(map(lambda k: (k, bigdict[k]), keys))
...: %timeit {key:bigdict[key] for key in set(keys) & set(bigdict.keys())}
...: %timeit dict(filter(lambda i:i[0] in keys, bigdict.items()))
...: %timeit {key:value for key, value in bigdict.items() if key in keys}
100 loops, best of 3: 2.36 ms per loop  # dict comprehension
100 loops, best of 3: 2.87 ms per loop  # dict() over a generator
100 loops, best of 3: 3.65 ms per loop  # dict(map(...))
100 loops, best of 3: 7.14 ms per loop  # set intersection of the keys
1 loop, best of 3: 577 ms per loop      # dict(filter(...)) over items
1 loop, best of 3: 563 ms per loop      # comprehension over items
As expected, the dictionary comprehension with direct indexing is the fastest option. The last two methods are roughly 200x slower because keys is a numpy array, so key in keys is a linear scan over all 10000 entries, repeated for every one of the 100000 items in bigdict; converting keys to a set first fixes that, as the sketch below shows.
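
For reference, here is a minimal Python 3 sketch of the same comparison (the run above used the now-EOL Python 2.7). The keys_set conversion is my addition, not part of the original benchmark; it shows how to make the items-filtering variant competitive:

import timeit

import numpy.random as nprnd

keys = nprnd.randint(100000, size=10000)
bigdict = {k: nprnd.rand() for k in range(100000)}

# Hash the lookup keys once: each `in` test becomes an O(1) set lookup
# instead of an O(n) scan over the numpy array.
keys_set = set(keys)


def by_comprehension():
    # The winner above: index bigdict directly for each wanted key.
    return {key: bigdict[key] for key in keys}


def by_items_filter():
    # The slow variant from above, repaired by testing against the set.
    return {key: value for key, value in bigdict.items() if key in keys_set}


for fn in (by_comprehension, by_items_filter):
    print(fn.__name__, timeit.timeit(fn, number=100))

With the set in place, the filtering variant should drop from hundreds of milliseconds to the same order of magnitude as the comprehension, since the linear scan was the dominant cost; it still iterates all of bigdict.items(), though, so direct indexing remains the better choice when you already have the list of wanted keys.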