The inevitable accumulation of errors in near-future quantum devices is a key obstacle to delivering practical quantum advantages, motivating the development of various quantum error-mitigation methods. Although numerous error-mitigation protocols have been proposed, their general potential and limitations remain elusive. In particular, to understand the ultimate feasibility of quantum error mitigation, it is crucial to characterize the fundamental sampling cost: how many times an arbitrary mitigation protocol must run a noisy quantum device. Here, we derive universal bounds on this sampling cost that apply to general error-mitigation protocols, and we discuss several of their consequences. We show that a prominent mitigation strategy, the probabilistic error cancellation method, is optimal with respect to a certain figure of merit among a wide class of strategies for mitigating local dephasing noise. We then show that the number of samples required by general mitigation protocols for layered circuits must grow exponentially with the circuit depth for various noise models, revealing a fundamental obstacle to demonstrating useful applications of noisy near-term quantum devices.
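To give a concrete feel for the sampling cost discussed above, here is a minimal sketch of probabilistic error cancellation for a single-qubit dephasing channel. This is the standard textbook quasi-probability construction, not the bounds derived in the talk: the inverse of the dephasing channel D_p(ρ) = (1−p)ρ + p ZρZ is the (non-physical) map αρ + β ZρZ with α = (1−p)/(1−2p) and β = −p/(1−2p), whose one-norm γ = |α| + |β| = 1/(1−2p) sets the per-channel sampling-cost factor; over L noisy layers the estimator variance grows as γ^{2L}, i.e., exponentially in depth.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def dephase(rho, p):
    """Local dephasing channel D_p(rho) = (1-p) rho + p Z rho Z."""
    return (1 - p) * rho + p * Z @ rho @ Z

def inverse_dephase(rho, p):
    """Quasi-probability (non-physical) inverse of D_p:
    alpha * rho + beta * Z rho Z, alpha = (1-p)/(1-2p), beta = -p/(1-2p)."""
    a = (1 - p) / (1 - 2 * p)
    b = -p / (1 - 2 * p)
    return a * rho + b * Z @ rho @ Z

p = 0.05
gamma = 1 / (1 - 2 * p)  # one-norm of the quasi-probabilities: cost factor per channel

plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, maximally sensitive to dephasing
noisy = dephase(plus, p)
recovered = inverse_dephase(noisy, p)

print(np.trace(X @ plus).real)       # ideal <X> = 1.0
print(np.trace(X @ noisy).real)      # damped to 1 - 2p = 0.9
print(np.trace(X @ recovered).real)  # exactly restored to 1.0
print(gamma ** (2 * 10))             # variance blow-up factor for a depth-10 circuit
```

In an actual PEC run the inverse map is realized by sampling the operations {id, Z} with probabilities |α|/γ and |β|/γ, attaching the sign of the sampled coefficient and an overall factor γ to each shot; the γ^{2L} factor is exactly the shot-count overhead whose unavoidability for general protocols the talk's bounds address.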
B.Sc.: University of Tokyo (Department of Physics)
PhD (Physics): Massachusetts Institute of Technology (Advisors: Isaac Chuang and Seth Lloyd)
Postdoc: Nanyang Technological University, Singapore (Mile Gu's group)
Current (from this April): Associate Professor, University of Tokyo (Department of Basic Science)