Parallel computing is the execution of a computer program using multiple processors (CPUs) concurrently instead of a single processor exclusively. Let $T^*(n)$ be the run-time of the fastest known sequential algorithm and let $T_p(n)$ be the run-time of the parallel algorithm executed on $p$ processors, where $n$ is the size of the input.
The speedup is then defined as
$$S_p(n) = \frac{T^*(n)}{T_p(n)},$$
i.e., the ratio of the sequential execution time to the parallel execution time. Ideally, one would like $S_p(n) = p$, which is called perfect speedup, although in practice this is rarely achieved. (In some situations superlinear speedup, $S_p(n) > p$, is achieved due to memory hierarchy effects: once the input is divided among processors, each processor's working set may fit in its cache.)
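As a concrete illustration, the following sketch measures speedup empirically on a CPU-bound task. The prime-counting workload, chunk sizes, and processor count are illustrative choices, not part of the definitions above; measured speedup will vary with hardware and input size.

```python
import time
from multiprocessing import Pool

def count_primes(rng):
    """Count primes in [lo, hi) by trial division (CPU-bound work)."""
    lo, hi = rng
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def speedup(n=100_000, p=4):
    # Sequential run: T*(n) approximated by the single-process time.
    t0 = time.perf_counter()
    seq = count_primes((0, n))
    t_star = time.perf_counter() - t0

    # Parallel run on p worker processes: T_p(n).
    chunks = [(i * n // p, (i + 1) * n // p) for i in range(p)]
    t0 = time.perf_counter()
    with Pool(p) as pool:
        par = sum(pool.map(count_primes, chunks))
    t_p = time.perf_counter() - t0

    assert seq == par  # both runs must compute the same answer
    return t_star / t_p  # S_p(n) = T*(n) / T_p(n)

if __name__ == "__main__":
    print(f"speedup on 4 processes: {speedup():.2f}")
```

Note that using the single-process time of the *same* program only approximates $T^*(n)$; the definition calls for the fastest known sequential algorithm, which may differ from the parallel algorithm run on one processor.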
Another metric to measure the performance of a parallel algorithm is efficiency, $E_p(n)$, defined as
$$E_p(n) = \frac{S_p(n)}{p} = \frac{T^*(n)}{p\,T_p(n)}.$$
One can use speedup and efficiency to analyze algorithms either theoretically, using asymptotic run-time complexity, or in practice, by measuring the time of program execution. When $p$ is fixed, speedup and efficiency are equivalent measures, differing only by the constant factor $1/p$.
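The two metrics and the constant factor relating them can be made concrete with a small worked example; the timings below are hypothetical.

```python
def speedup_and_efficiency(t_star, t_p, p):
    """Compute S_p(n) and E_p(n) from measured run-times.

    t_star -- sequential run-time T*(n)
    t_p    -- parallel run-time T_p(n) on p processors
    """
    s = t_star / t_p  # S_p(n) = T*(n) / T_p(n)
    e = s / p         # E_p(n) = S_p(n) / p
    return s, e

# Hypothetical timings: 8 s sequentially, 2.5 s on 4 processors.
s, e = speedup_and_efficiency(8.0, 2.5, 4)
print(s, e)  # 3.2 0.8: sublinear speedup, 80% efficiency
```

For fixed $p = 4$, efficiency is always the speedup scaled by $1/4$, so ranking algorithms by one metric ranks them identically by the other.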