Graphics Processing Units (GPUs) have recently gained widespread usage as an advanced parallel platform for accelerating compute-intensive applications. The maturity of programming interfaces and the improved programmability of GPUs have enabled the development of parallel algorithms that leverage the wealth of compute power they provide. In this paper, we present μ-GSIM, a GPU-based simulation tool that leverages the inherent bit parallelism of GPUs to accelerate the simulation of mutated digital circuits. We propose an efficient mapping of multiple mutated circuits onto the GPU's device memory that exploits as much data parallelism as possible, so that our GPU simulation kernel achieves maximal performance by operating on independent data. Results show that for the largest ITC'99 benchmark circuits used, we were able to achieve a 60% decrease in memory usage while gaining a 5.4× increase in simulation performance. Additionally, we demonstrated a speed-up of at least 95× over a commercial event-driven simulation tool running on a conventional processor. This is beneficial in the quest for assessing the quality of test sets.