International Conference on Computer Supported Cooperative Work in Design

Abstract

Researchers have proposed a number of automated techniques for testing refactoring engines. However, these techniques may have limitations related to the program generator, time consumption, the kinds of bugs they detect, and debugging. We propose a technique to scale testing of refactoring engines. We improve the expressiveness of a program generator, skip some test inputs to improve performance, and propose new oracles to detect behavioral changes using change impact analysis, overly strong conditions using mutation testing, and transformation issues related to the refactoring definitions. We evaluate our technique on 24 refactoring implementations for Java (Eclipse and JRRT) and C (Eclipse) and find 119 bugs. The technique reduces testing time by 96% using skips while missing only 7% of the bugs. Compared with a previous technique, the new oracle for identifying overly strong conditions detects 37% new bugs while missing 16% of the bugs. Furthermore, the proposed oracle facilitates debugging by indicating the overly strong conditions.
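To make the behavioral-change oracle concrete, the following minimal Java sketch (illustrative only, not taken from the paper; the class names Before, After, and OverloadBindingChange are hypothetical) shows the kind of bug such oracles target: renaming an overloaded method can silently change which overload a call binds to, so the refactored program still compiles but computes a different result.

    // Hypothetical illustration of a behavioral change introduced by a
    // Rename Method refactoring (n renamed to m).

    class Before {
        long m(long l) { return 1; }
        long n(int i)  { return 2; }
        long test()    { return m(2); }   // binds to m(long) -> returns 1
    }

    class After {
        long m(long l) { return 1; }
        long m(int i)  { return 2; }      // n(int) renamed to m
        long test()    { return m(2); }   // now binds to m(int) -> returns 2
    }

    public class OverloadBindingChange {
        public static void main(String[] args) {
            // An oracle that exercises methods impacted by the transformation
            // (e.g., via tests generated for test()) can expose the change.
            System.out.println(new Before().test());  // prints 1
            System.out.println(new After().test());   // prints 2
        }
    }

In this sketch, a change-impact-based oracle would single out test() as impacted by the rename and compare its observable behavior before and after the transformation.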