I am not 100% sure what is meant by "running times of my algorithms written in python", so I thought I might try to offer a broader look at some of the potential answers.
Algorithms don't have running times; implementations can be timed, but an algorithm is an abstract approach to doing something. The most common and often the most valuable part of optimizing a program is analyzing the algorithm, usually using asymptotic analysis and computing the big O complexity in time, space, disk use and so forth.
A computer cannot really do this step for you; it requires sitting down and working out the math of how your approach behaves as the input grows. Getting this side of things right is the main route to scalable performance.
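To make that concrete, here is a minimal sketch (the function names are just illustrative): membership tests against a list scan the whole list, so the first version is O(q * n), while building a set first makes each lookup O(1) on average. No amount of timing the first version will close that gap as the inputs grow.

    def count_hits_list(data, queries):
        # O(q * n): each `in` test scans the whole list
        return sum(1 for q in queries if q in data)

    def count_hits_set(data, queries):
        # O(n + q): build the set once, then each lookup is O(1) on average
        lookup = set(data)
        return sum(1 for q in queries if q in lookup)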
You can time your specific implementation. The nicest way to do this in Python is timeit from the standard library. The intended workflow is to put the code you want to time in a function in a module, then invoke it from the command line with python -m timeit ...
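For instance, here is a minimal invocation timing the built-in sorted (the -s flag runs setup code once, outside the timed loop; for your own code you would import your module in the setup string instead):

    python -m timeit -s "data = list(range(1000))" "sorted(data)"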
timeit is good for comparing multiple snippets when you're micro-optimizing, but it often isn't the right tool for comparing two different algorithms. For that, what you usually want is asymptotic analysis, though more sophisticated kinds of analysis are sometimes called for.
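As a sketch of that kind of micro-comparison, here is timeit.timeit from the stdlib timing two equivalent snippets (the snippets themselves are just examples):

    import timeit

    # Two equivalent ways to build a list of squares, timed over the
    # same number of iterations so the results are directly comparable.
    loop_time = timeit.timeit(
        "result = []\nfor i in range(1000):\n    result.append(i * i)",
        number=10_000,
    )
    comp_time = timeit.timeit(
        "result = [i * i for i in range(1000)]",
        number=10_000,
    )
    print(f"for loop:      {loop_time:.3f} s")
    print(f"comprehension: {comp_time:.3f} s")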
You have to know what to time. Most snippets aren't worth improving; you need to make changes where they actually count, especially when you're micro-optimizing rather than improving the asymptotic complexity of your algorithm.
If you quadruple the speed of a function in which your program spends 1% of its time, that's not a real speedup: the overall gain is under 1%. If you get a 20% speed increase on a function in which your program spends 50% of its time, you have a real gain of about 11%.
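Those numbers fall out of Amdahl's law: if a fraction p of the runtime is made s times faster, the overall speedup is 1 / ((1 - p) + p / s). A quick sketch to check the arithmetic:

    def overall_speedup(p, s):
        # Amdahl's law: overall speedup when a fraction p of the
        # runtime is made s times faster.
        return 1 / ((1 - p) + p / s)

    print(overall_speedup(0.01, 4.0))   # ~1.008: 4x on a 1% hotspot gains <1%
    print(overall_speedup(0.50, 1.25))  # ~1.111: 20% faster on half gains ~11%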
To find out where a real Python program spends its time, use the stdlib profiling utilities (cProfile and pstats). They will tell you which functions your program actually spends its time in.
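For example, cProfile can be run straight from the command line; your_script.py here just stands in for your actual program, and -s cumulative sorts the report by cumulative time per function:

    python -m cProfile -s cumulative your_script.py

From there you can drill into the hot functions with the pstats module, or simply read the printed table top-down.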