Quantum Computing · March 25, 2026 · 5 min read

Quantum Computing on AWS Braket: A Builder's First Look

I ran quantum optimization circuits on real hardware through AWS Braket. Here's what actually works in 2026, what doesn't, and why builders should care before physicists figure out the software side.

AWS Braket · Quantum Computing · PennyLane · Python · QAOA

Why a Software Engineer Should Touch Quantum Now

Most quantum content targets physicists. Fair enough — the math demands it. But the hardware is reaching a point where the bottleneck is shifting from physics to engineering. Someone needs to build the pipelines, integrations, and production patterns around these machines. That gap between "working quantum circuit" and "useful quantum service" looks a lot like the gap between "trained ML model" and "production ML system" circa 2017.

I build production AI systems — agent runtimes, RAG pipelines, orchestration layers. Quantum computing caught my attention because the programming model maps to patterns I already use: parameterized compute modules, hybrid classical-quantum loops, and cloud-managed hardware abstraction. AWS Braket makes the on-ramp accessible enough to actually test these ideas.

What AWS Braket Gives You

Braket is AWS's managed quantum computing service. No quantum hardware in your closet — you submit circuits through an SDK and get results back from real quantum processors:

  • IonQ Forte — trapped-ion, 36 qubits, high gate fidelity
  • Rigetti Ankaa — superconducting, faster operation times
  • IQM Garnet — superconducting, European hardware
  • QuEra Aquila — neutral-atom, specializes in analog Hamiltonian simulation
  • AQT IBEX-Q1 — trapped-ion

Plus three managed simulators (SV1, DM1, TN1) and a free local simulator in the SDK.

The pricing model is simple: $0.30 per task (a batch of circuit runs) plus $0.00090–$0.08 per shot (single circuit execution) depending on hardware. Or reserve a full QPU by the hour ($2,500–$7,000/hr).

For learning, the free local simulator and 1 hour/month of managed simulator time cost nothing.
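To make that simulator-first advice concrete, here's a rough back-of-envelope estimator. The per-shot rate below is an assumption picked from inside the quoted range (trapped-ion hardware sits at the expensive end), and it models the simplest case of one task per optimizer iteration:

```python
# rough cost model for a hybrid run on managed QPU hardware
# rates are assumptions drawn from the ranges quoted above
PER_TASK = 0.30   # dollars per task
PER_SHOT = 0.03   # hypothetical per-shot rate within the $0.00090-$0.08 range

def estimate_cost(iterations, shots_per_task, per_task=PER_TASK, per_shot=PER_SHOT):
    # assumes each optimizer iteration submits exactly one task
    return iterations * (per_task + shots_per_task * per_shot)

print(f"${estimate_cost(80, 1000):,.2f}")
```

An 80-iteration QAOA loop at 1,000 shots per task lands around $2,424 under these assumptions — which is exactly why you prototype on the free simulator and touch real hardware only once the circuit is settled.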

Quantum Optimization Circuits, Explained Without the Physics

Here's the concept mapped to things software engineers already understand.

A regular optimization problem: you have a function, you want to find the input that minimizes (or maximizes) it. Gradient descent does this — adjust parameters, evaluate, repeat.

A quantum optimization circuit does the same thing, but the evaluation step runs on quantum hardware. The advantage: qubits in superposition represent many possible solutions simultaneously. Twenty qubits hold roughly one million states at once. The circuit steers probability toward good solutions.
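The scaling is easy to see in code (`state_space` is just a helper name, not a library call) — it's also the size of the state vector a classical simulator has to track, which is why simulators top out in the tens of qubits:

```python
def state_space(n_qubits):
    # number of basis states n qubits can superpose over --
    # also the state-vector size a classical simulator must store
    return 2 ** n_qubits

for n in (4, 10, 20):
    print(f"{n} qubits -> {state_space(n):,} basis states")
```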

The dominant algorithm is QAOA (Quantum Approximate Optimization Algorithm):

  1. Encode your problem as a "cost Hamiltonian" — a mathematical object that assigns energy levels to each possible solution (lower energy = better answer)
  2. Build a parameterized quantum circuit with alternating "cost layers" and "mixer layers"
  3. Run it, measure the output, get a candidate solution
  4. A classical optimizer (running on your CPU) tweaks the circuit parameters
  5. Repeat until convergence

The quantum part explores the solution space. The classical part steers the search. This hybrid loop is why the pattern is called "variational" — it's conceptually identical to training a neural network.
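You don't need a quantum framework to see that loop. Here's a minimal, framework-free sketch where the quantum evaluation is stubbed out with a toy classical cost (the function names and the cost itself are mine, purely illustrative) — the control flow is the part that carries over:

```python
# framework-free sketch of the variational loop; the quantum step is
# faked with a classical cost so the hybrid control flow is visible
def evaluate_circuit(params):
    # stand-in for a quantum expectation value; a real run would
    # submit the parameterized circuit and measure it
    gamma, alpha = params
    return (gamma - 1.0) ** 2 + (alpha + 0.5) ** 2

def numeric_gradient(f, params, eps=1e-5):
    # forward-difference gradient, one evaluation per parameter
    grads = []
    for i in range(len(params)):
        shifted = list(params)
        shifted[i] += eps
        grads.append((f(shifted) - f(params)) / eps)
    return grads

params = [0.0, 0.0]
for step in range(200):
    grads = numeric_gradient(evaluate_circuit, params)
    params = [p - 0.1 * g for p, g in zip(params, grads)]

print([round(p, 3) for p in params])  # converges near gamma=1.0, alpha=-0.5
```

Swap `evaluate_circuit` for a real circuit submission and you have the QAOA loop — which is what PennyLane does for you below.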

Running QAOA with PennyLane

PennyLane is the framework that makes this practical. It's a Python library that treats quantum circuits like differentiable compute graphs — same concept as PyTorch or TensorFlow, different substrate.

Here's a complete working example solving MaxCut, a graph partitioning problem:

import pennylane as qml
from pennylane import qaoa
from networkx import Graph

# use PennyLane's NumPy wrapper, not plain numpy --
# it's what makes requires_grad work on the parameter array
from pennylane import numpy as np

# Define the problem: split nodes into two groups,
# maximizing edges between groups
graph = Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 2)])
wires = range(4)

# PennyLane generates the quantum operators from your graph
cost_h, mixer_h = qaoa.maxcut(graph)

def qaoa_layer(gamma, alpha):
    qaoa.cost_layer(gamma, cost_h)
    qaoa.mixer_layer(alpha, mixer_h)

# Local simulator — free, runs on your machine
dev = qml.device('default.qubit', wires=4)

@qml.qnode(dev)
def cost_function(params):
    for w in wires:
        qml.Hadamard(wires=w)
    qml.layer(qaoa_layer, 2, params[0], params[1])
    return qml.expval(cost_h)

# Classical optimizer drives the quantum circuit
optimizer = qml.GradientDescentOptimizer(stepsize=0.4)
params = np.array([[0.5, 0.5], [0.5, 0.5]], requires_grad=True)

for step in range(80):
    params = optimizer.step(cost_function, params)
    if step % 20 == 0:
        print(f"Step {step}: cost = {cost_function(params):.4f}")

To switch from the simulator to real IonQ hardware, you swap only the device definition:

dev = qml.device(
    'braket.aws.qubit',
    device_arn='arn:aws:braket:us-east-1::device/qpu/ionq/Forte',
    wires=4,
    shots=1000
)

Same circuit. Same optimizer. Different physics underneath.

What This Can and Cannot Do Today

Honest accounting:

Works now

  • Learning and prototyping on simulators — fast, free, full-fidelity
  • Small combinatorial problems (< 20 nodes) — QAOA finds good solutions
  • Algorithm benchmarking — compare quantum vs. classical on the same problem
  • Hybrid pipeline architecture — the classical-quantum loop pattern is stable and well-tooled
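On the benchmarking point: the MaxCut instance from the QAOA example earlier is small enough to brute-force classically in microseconds, which hands you the exact optimum to score the quantum result against. A sketch (helper names are mine):

```python
from itertools import product

# brute-force MaxCut on the same 5-edge graph as the QAOA example;
# at 4 nodes there are only 2**4 = 16 assignments to check
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 2)]

def cut_size(assignment, edges):
    # count edges whose endpoints land in different groups
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

best = max(product([0, 1], repeat=4), key=lambda a: cut_size(a, edges))
print(best, cut_size(best, edges))
```

The optimum cut here is 4 of the 5 edges (the 0–1–2 triangle guarantees one edge always stays uncut) — a converged QAOA run should concentrate probability on exactly these assignments.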

Doesn't work yet

  • Production advantage — classical algorithms still win on every practical business problem in 2026
  • Scale — current hardware maxes out around 36 usable qubits; real-world optimization needs hundreds or thousands
  • Noise — qubits decohere before complex circuits finish; error rates limit circuit depth
  • Error correction — roughly 1,000 physical qubits per logical qubit needed; we're nowhere close

The timeline for genuine quantum advantage on optimization problems is roughly 2028–2031, depending on who you ask and how optimistic they are.

Why I'm Building on It Anyway

Three reasons, none of which require believing quantum will save the world tomorrow:

The hybrid pattern is transferable. Orchestrating a quantum circuit as a compute module inside a classical pipeline is the same pattern as orchestrating an LLM call, a GPU inference job, or a Rust sidecar. The architecture skills compound.

The talent gap is real. Most quantum people are physicists who struggle with production engineering. Most engineers don't touch quantum. Operating in both spaces — even at a learning level — is a genuine differentiator.

Simulators make the learning free. PennyLane's local simulator runs 20+ qubit circuits in seconds on a laptop. The Braket free tier gives you managed simulator time. The only cost is attention.

Practical Next Steps If You're Interested

  1. pip install pennylane — run the QAOA example above locally
  2. Try the Amazon Braket Digital Badge — free self-paced courses
  3. Map a problem from your domain to a graph optimization — scheduling, routing, resource allocation all fit naturally
  4. Run it on the Braket simulator with the free tier
  5. Write about what you find — the field needs more builder perspectives
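For step 3, the mapping is usually just "things that conflict get an edge." A toy sketch with made-up task names — tasks sharing a resource can't run in the same slot, so they become edges in a graph you can then partition:

```python
# hypothetical scheduling example: build a conflict graph
# tasks that share a resource get an edge between them
tasks = {
    "ingest": {"gpu"},
    "train": {"gpu"},
    "report": {"db"},
    "backup": {"db"},
}

items = list(tasks.items())
edges = [
    (a, b)
    for i, (a, res_a) in enumerate(items)
    for b, res_b in items[i + 1:]
    if res_a & res_b  # nonempty intersection = shared resource
]
print(edges)
```

Feed those edges into the same `qaoa.maxcut` pipeline from the example above and the two partitions become your two scheduling slots.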

What I'm Exploring Next

I'm working on a proof-of-concept for quantum-enhanced similarity search — using parameterized quantum circuits to compute vector distances differently than classical cosine similarity. The theory suggests advantages for high-dimensional sparse vectors, which show up in RAG pipelines and engineering parts matching.
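For reference, the classical baseline any quantum kernel has to beat, written out in stdlib Python (the function name is mine, not from a particular library):

```python
import math

def cosine_similarity(a, b):
    # the classical workhorse of vector search: dot product
    # normalized by the magnitudes of both vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0, 2.0], [0.5, 0.0, 1.0]))  # parallel -> ~1.0
```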

Whether it beats classical? Probably not yet. Whether I'll learn something useful about the architecture regardless? Definitely.


Building production AI systems, exploring quantum computing, and writing about it honestly. More at mohsenjahanshahi.com.