Tags: python, quantum-computing, qiskit

Qiskit - Run a COBYLA optimizer on a noisy circuit


I'm working on a quantum computing project in Qiskit where I'm trying to train an autoencoder on a noisy circuit. My approach so far is below, but I'm struggling to integrate the noise model into my circuit and then optimize with a COBYLA optimizer via a SamplerQNN.

1. Define the noise model:

from qiskit.circuit.library import XGate, CXGate, CCXGate
from qiskit_aer.noise import NoiseModel

x_noise = XGate(label="x1")        # labeled "x1" so plain "x" gates don't get noised
cx_noise = CXGate(label="cx1")     # labeled "cx1" so plain "cx" gates don't get noised
ccx_noise = CCXGate(label="ccx1")  # labeled "ccx1" so plain "ccx" gates don't get noised
noise_bit_flip = NoiseModel()

p_gate1 = 0.05
... # Add the noise into the noise model
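The elided step would attach bit-flip errors to the labeled gates. A minimal sketch, assuming independent Pauli-X (bit-flip) errors per qubit in the style of Qiskit Aer's noise-model tutorial:

from qiskit_aer.noise import pauli_error

# bit flip with probability p_gate1 on a single qubit
error_gate1 = pauli_error([("X", p_gate1), ("I", 1 - p_gate1)])
# independent bit flips on every qubit of the 2- and 3-qubit gates
error_gate2 = error_gate1.tensor(error_gate1)
error_gate3 = error_gate2.tensor(error_gate1)

# the errors are matched by gate label, so only "x1"/"cx1"/"ccx1" are noised
noise_bit_flip.add_all_qubit_quantum_error(error_gate1, ["x1"])
noise_bit_flip.add_all_qubit_quantum_error(error_gate2, ["cx1"])
noise_bit_flip.add_all_qubit_quantum_error(error_gate3, ["ccx1"])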

2. Define the circuit:

from qiskit import QuantumCircuit, QuantumRegister

n_qubits = 5
a = QuantumRegister(n_qubits, 'a')
b = QuantumRegister(n_qubits, 'b')
out = QuantumRegister(n_qubits, "out")
c = QuantumRegister(1, "carry")

circ = QuantumCircuit(a, b, out, c)  # named circ so the references below resolve
... # create the 5-qubit full adder
... # this circuit uses the cx_noise, ccx_noise and x_noise gates created above
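The adder construction itself is elided above; purely to illustrate how the labeled noisy gates are appended (so the noise model can match them by label), a hypothetical first bit-slice could look like:

circ.append(ccx_noise, [a[0], b[0], c[0]])  # carry = a[0] AND b[0]
circ.append(cx_noise, [a[0], out[0]])       # out[0] ^= a[0]
circ.append(cx_noise, [b[0], out[0]])       # out[0] = a[0] XOR b[0]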

3. Now, create the autoencoder circuit:

num_latent = 3
num_trash = 3
num_add_qubits = circ.num_qubits - num_trash - num_latent

ae = auto_encoder_circuit(num_latent, num_trash)
qc = QuantumCircuit(num_latent + 2 * num_trash + 1 + num_add_qubits, 1)
qc.compose(circ, qubits=range(circ.num_qubits), inplace=True)
qc = qc.compose(ae, range(num_add_qubits, num_latent + 2 * num_trash + 1 + num_add_qubits))
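auto_encoder_circuit is not defined in the question. Assuming it follows the quantum autoencoder tutorial from Qiskit Machine Learning, it would be a RealAmplitudes ansatz on the latent and trash qubits followed by a swap test against fresh reference qubits:

from qiskit import ClassicalRegister
from qiskit.circuit.library import RealAmplitudes

def auto_encoder_circuit(num_latent, num_trash):
    qr = QuantumRegister(num_latent + 2 * num_trash + 1, "q")
    cr = ClassicalRegister(1, "c")
    circuit = QuantumCircuit(qr, cr)
    # trainable ansatz over the latent + trash qubits
    circuit.compose(RealAmplitudes(num_latent + num_trash, reps=5),
                    range(num_latent + num_trash), inplace=True)
    circuit.barrier()
    aux = num_latent + 2 * num_trash
    # swap test: measures the overlap between trash and reference qubits
    circuit.h(aux)
    for i in range(num_trash):
        circuit.cswap(aux, num_latent + i, num_latent + num_trash + i)
    circuit.h(aux)
    circuit.measure(aux, cr[0])
    return circuit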

4. Create the SamplerQNN to train the encoder:

from qiskit_machine_learning.neural_networks import SamplerQNN

qnn = SamplerQNN(
    circuit=qc,
    input_params=[],
    weight_params=ae.parameters,
    interpret=identity_interpret,
    output_shape=2,
)
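identity_interpret is likewise undefined in the question; in the tutorial it is simply the identity map on the measured outcome:

def identity_interpret(x):
    return x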

5. Finally, create the optimizer and cost function:

import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output

def cost_func_digits(params_values):
    # cost = probability of measuring |1> on the swap-test auxiliary qubit
    probabilities = qnn.forward([], params_values)
    cost = np.sum(probabilities[:, 1])

    # plotting part
    clear_output(wait=True)
    objective_func_vals.append(cost)
    plt.title("Objective function value against iteration")
    plt.xlabel("Iteration")
    plt.ylabel("Objective function value")
    plt.plot(range(len(objective_func_vals)), objective_func_vals)
    plt.show()

    return cost

import time

from qiskit_algorithms.optimizers import COBYLA  # qiskit.algorithms.optimizers on older Qiskit

opt = COBYLA(maxiter=150)

objective_func_vals = []
# make the plot nicer
plt.rcParams["figure.figsize"] = (12, 6)

start = time.time()
opt_result = opt.minimize(fun=cost_func_digits, x0=initial_point)
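(Note that initial_point is never defined in the snippets. A plausible choice, following the tutorial's pattern, is a random vector with one entry per ansatz parameter:)

from qiskit_algorithms.utils import algorithm_globals  # qiskit.utils on older Qiskit

initial_point = algorithm_globals.random.random(ae.num_parameters)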

Question

How can I effectively apply the defined noise model to circ and use the COBYLA optimizer to train my encoder? I haven't found any tutorials on how to integrate an optimizer with a noisy backend in Qiskit.


Solution

  • @SteveWood gave the right answer: I needed to use a Sampler from qiskit_aer.primitives. The backend_options argument of Sampler let me set "noise_model" to the noise model I created. Here is the code:

    1. Create the noisy Sampler:

    from qiskit_aer import AerSimulator
    from qiskit_aer.primitives import Sampler
    from qiskit_algorithms.utils import algorithm_globals  # qiskit.utils on older Qiskit

    seed = 170
    algorithm_globals.random_seed = seed

    devices = AerSimulator(device="GPU").available_devices()
    print(devices)
    methods = AerSimulator(device="GPU").available_methods()
    print(methods)

    # use the GPU only if Aer was built with GPU support
    method = "automatic"
    device = "CPU"
    if "GPU" in devices and "tensor_network" in methods:
        device = "GPU"

    noisy_sampler = Sampler(
        backend_options={
            "method": method,
            "device": device,
            "noise_model": noise_bit_flip,
        },
        run_options={"seed": seed, "shots": 1024},
        transpile_options={"seed_transpiler": seed},
    )
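
    The crucial wiring step is to rebuild the SamplerQNN with this noisy sampler; SamplerQNN accepts a sampler argument, so every forward pass during optimization then runs through the noisy Aer primitive:

    qnn = SamplerQNN(
        circuit=qc,
        sampler=noisy_sampler,  # all evaluations now include the bit-flip noise
        input_params=[],
        weight_params=ae.parameters,
        interpret=identity_interpret,
        output_shape=2,
    )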
    

    2. Running it:

    opt = COBYLA(maxiter=500)
    
    objective_func_vals = []
    # make the plot nicer
    plt.rcParams["figure.figsize"] = (12, 6)
    
    start = time.time()
    opt_result = opt.minimize(fun=cost_func_digits, x0=initial_point)
    elapsed = time.time() - start
    print(f"Fit in {elapsed:0.2f} seconds")