Abstract
This paper addresses energy conservation in data centers. We consider the problem of provisioning physical servers to a sequence of jobs so as to reduce the total energy consumption. The performance metric is the wasted energy, i.e., the computing power provided by the physical servers in excess of what the jobs require. We propose three new strategies for allocating servers to a sequence of jobs: a largest-machine-first heuristic, a best-fit method, and a mixed method. We prove that both the largest-machine-first heuristic and the mixed method waste only a small fraction of the provisioned energy: the ratio between the over-provisioned energy and the total provisioned energy is bounded by (2/n)(1 + δ), where n is the number of jobs and 1 + δ is the ratio between the maximum and minimum execution times of the jobs. We also derive a tight bound on the ratio of wasted energy when δ can be arbitrarily large. Finally, we conduct experiments to compare the three algorithms in practice. The experimental results indicate that all three algorithms waste very little energy through over-provisioning; the mixed method outperforms the best-fit method, which in turn outperforms the largest-machine-first method.
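The abstract does not spell out the allocation rules, and the mixed method is not described here, but the two named greedy strategies admit a rough sketch. In the Python sketch below, the Job type, the server-capacity pool, the per-job waste formula (capacity − requirement) × time, and the selection rules are all illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch only: a job needs a given computing power for a
# given execution time, each server has a fixed capacity, and wasted
# energy is (capacity - requirement) * time. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Job:
    requirement: float  # computing power the job needs
    time: float         # execution time

def largest_machine_first(jobs: list[Job], capacities: list[float]) -> float:
    """Serve jobs in decreasing order of requirement, always on the
    largest server still available; return total wasted energy.
    Assumes len(capacities) >= len(jobs) and every server is big enough."""
    pool = sorted(capacities, reverse=True)
    waste = 0.0
    for job in sorted(jobs, key=lambda j: j.requirement, reverse=True):
        cap = pool.pop(0)  # largest remaining server
        waste += (cap - job.requirement) * job.time
    return waste

def best_fit(jobs: list[Job], capacities: list[float]) -> float:
    """Serve jobs in arrival order, each on the smallest server whose
    capacity still covers its requirement; return total wasted energy."""
    pool = sorted(capacities)
    waste = 0.0
    for job in jobs:
        # smallest sufficient server, or the largest one as a fallback
        cap = next((c for c in pool if c >= job.requirement), pool[-1])
        pool.remove(cap)
        waste += max(cap - job.requirement, 0.0) * job.time
    return waste

jobs = [Job(2.0, 3.0), Job(1.5, 1.0), Job(3.0, 2.0)]
servers = [4.0, 3.0, 2.0, 2.0]
print(largest_machine_first(jobs, servers))  # 5.5
print(best_fit(jobs, servers))               # 0.5
```

On this toy input, matching each job to a closely fitting server (best fit) wastes far less energy than always grabbing the largest machine, which is consistent with the ranking reported in the experiments.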