
Monte Carlo Option Pricing

quanta.layer3.monte_carlo

quanta.layer3.monte_carlo — Quantum Monte Carlo via Amplitude Estimation.

Uses quantum amplitude estimation to compute expectations faster than classical Monte Carlo. Key applications: option pricing, risk analysis, and probabilistic inference.

Classical Monte Carlo error converges as O(1/√N) in the number of samples N; quantum amplitude estimation converges as O(1/N) in the number of oracle queries — a quadratic speedup.
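Equivalently, reaching a target error ε costs on the order of 1/ε² classical samples but only 1/ε quantum oracle queries (constants omitted). A minimal back-of-the-envelope sketch, working in units of 1/ε so the arithmetic stays exact:

```python
# Work in units of inv_eps = 1/epsilon to keep the arithmetic exact
def classical_samples(inv_eps: int) -> int:
    """Classical MC: N ~ (1/eps)^2 samples to reach error eps."""
    return inv_eps ** 2

def quantum_queries(inv_eps: int) -> int:
    """Amplitude estimation: M ~ 1/eps oracle queries to reach error eps."""
    return inv_eps

# Targeting 1% error (inv_eps = 100): 10,000 samples vs 100 queries
n_classical = classical_samples(100)
n_quantum = quantum_queries(100)
```

The gap widens as the accuracy requirement tightens: at 0.1% error it is 1,000,000 samples against 1,000 queries.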

Pipeline
  1. Encode probability distribution into quantum state amplitudes
  2. Apply payoff function as phase rotations
  3. Use amplitude estimation (Grover-style iterations) to extract expectation
  4. Classical post-processing for final estimate
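The four steps above can be sketched classically with a small statevector-style mock-up. This is an illustrative toy, not the library's implementation — the 4-qubit grid, its bounds, and the lognormal parameters are all assumptions chosen for the example:

```python
import numpy as np

n_qubits = 4
dim = 2 ** n_qubits

# Step 1: discretize a lognormal density onto a 2^n-point grid (illustrative bounds)
x = np.linspace(50.0, 150.0, dim)
mu, sigma = np.log(100.0), 0.2
pdf = np.exp(-((np.log(x) - mu) ** 2) / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
probs = pdf / pdf.sum()
amplitudes = np.sqrt(probs)  # |psi> = sum_x sqrt(p(x)) |x>

# Step 2: payoff of a European call with strike K, normalized to [0, 1]
K = 105.0
payoffs = np.maximum(x - K, 0.0)
f_norm = payoffs / payoffs.max()

# Step 3: the ancilla's |1> probability is exactly the normalized expectation
a = float(np.sum(probs * f_norm))  # P(ancilla = |1>) = sum_x p(x) * f~(x)

# Step 4: undo the normalization to recover E[f(X)] in price units
expectation = a * float(payoffs.max())
```

In the real pipeline, step 3 is not read off directly: amplitude estimation extracts `a` through Grover iterations, which is where the quadratic speedup comes from.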
Example

```python
from quanta.layer3.monte_carlo import quantum_monte_carlo

# Price a European call option
result = quantum_monte_carlo(
    distribution="lognormal",
    payoff="european_call",
    params={"S0": 100, "K": 105, "sigma": 0.2, "T": 1.0, "r": 0.05},
)
print(result.estimated_value)
```

GreeksResult dataclass

Option Greeks (sensitivities).

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `delta` | `float` | Price sensitivity to spot (∂V/∂S). |
| `gamma` | `float` | Delta sensitivity to spot (∂²V/∂S²). |
| `vega` | `float` | Price sensitivity to volatility (∂V/∂σ). |
| `theta` | `float` | Price sensitivity to time (∂V/∂T). |
| `rho` | `float` | Price sensitivity to interest rate (∂V/∂r). |

Source code in quanta/layer3/monte_carlo.py
@dataclass
class GreeksResult:
    """Option Greeks (sensitivities).

    Attributes:
        delta: Price sensitivity to spot (∂V/∂S).
        gamma: Delta sensitivity to spot (∂²V/∂S²).
        vega: Price sensitivity to volatility (∂V/∂σ).
        theta: Price sensitivity to time (∂V/∂T).
        rho: Price sensitivity to interest rate (∂V/∂r).
    """

    delta: float
    gamma: float
    vega: float
    theta: float
    rho: float

    def summary(self) -> str:
        return (
            f"Greeks: Δ={self.delta:+.4f}  Γ={self.gamma:+.4f}  "
            f"ν={self.vega:+.4f}  Θ={self.theta:+.4f}  ρ={self.rho:+.4f}"
        )

    def __repr__(self) -> str:
        return (
            f"GreeksResult(delta={self.delta:.4f}, gamma={self.gamma:.4f}, "
            f"vega={self.vega:.4f}, theta={self.theta:.4f}, rho={self.rho:.4f})"
        )
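To show what `summary()` renders, here is the dataclass exercised standalone with made-up sensitivities — the numbers are purely illustrative, not computed from any model:

```python
from dataclasses import dataclass

@dataclass
class GreeksResult:
    delta: float
    gamma: float
    vega: float
    theta: float
    rho: float

    def summary(self) -> str:
        # +.4f forces an explicit sign, so gains and losses line up visually
        return (
            f"Greeks: Δ={self.delta:+.4f}  Γ={self.gamma:+.4f}  "
            f"ν={self.vega:+.4f}  Θ={self.theta:+.4f}  ρ={self.rho:+.4f}"
        )

# Illustrative values only
g = GreeksResult(delta=0.5422, gamma=0.0188, vega=37.52, theta=-6.41, rho=46.20)
print(g.summary())
```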

MonteCarloResult dataclass

Result of Quantum Monte Carlo estimation.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `estimated_value` | `float` | Estimated expectation value. |
| `classical_value` | `float` | Classical Monte Carlo estimate for comparison. |
| `confidence_interval` | `tuple[float, float]` | (lower, upper) bounds. |
| `num_qubits` | `int` | Qubits used for encoding. |
| `grover_iterations` | `int` | Amplitude estimation iterations. |
| `speedup_factor` | `float` | Theoretical quantum speedup achieved. |

Source code in quanta/layer3/monte_carlo.py
@dataclass
class MonteCarloResult:
    """Result of Quantum Monte Carlo estimation.

    Attributes:
        estimated_value: Estimated expectation value.
        classical_value: Classical Monte Carlo estimate for comparison.
        confidence_interval: (lower, upper) bounds.
        num_qubits: Qubits used for encoding.
        grover_iterations: Amplitude estimation iterations.
        speedup_factor: Theoretical quantum speedup achieved.
    """
    estimated_value: float
    classical_value: float
    confidence_interval: tuple[float, float]
    num_qubits: int
    grover_iterations: int
    speedup_factor: float
    distribution_params: dict = field(default_factory=dict)

    def summary(self) -> str:
        lines = [
            "╔══════════════════════════════════════╗",
            "║  Quantum Monte Carlo Estimation      ║",
            "╠══════════════════════════════════════╣",
            f"║  Quantum estimate: {self.estimated_value:<18.6f}║",
            f"║  Classical value:  {self.classical_value:<18.6f}║",
            f"║  CI: [{self.confidence_interval[0]:.4f}, "
            f"{self.confidence_interval[1]:.4f}]"
            + " " * max(0, 10 - len(f"{self.confidence_interval[1]:.4f}"))
            + "║",
            f"║  Qubits:           {self.num_qubits:<18}║",
            f"║  Grover iters:     {self.grover_iterations:<18}║",
            f"║  Speedup:          {self.speedup_factor:<18.1f}║",
            "╚══════════════════════════════════════╝",
        ]
        return "\n".join(lines)

    def __repr__(self) -> str:
        return (
            f"MonteCarloResult(value={self.estimated_value:.6f}, "
            f"classical={self.classical_value:.6f}, "
            f"speedup={self.speedup_factor:.1f}x)"
        )

amplitude_estimate

amplitude_estimate(
    probs: ndarray,
    payoffs: ndarray,
    n_qubits: int,
    n_estimation: int = 4,
    seed: int | None = None,
) -> tuple[float, int]

Quantum amplitude estimation (Brassard-Hoyer-Mosca-Tapp).

Estimates a = E[f(X)] = Σ p(x)·f̃(x) using quantum circuits.

Circuit structure
  1. Prepare |ψ⟩ = Σ √p(x)|x⟩ on data register
  2. Controlled-R_y on ancilla: encodes f̃(x) as P(ancilla=|1⟩|x⟩)
     Result: Σ √p(x)[√f̃(x)|1⟩ + √(1-f̃(x))|0⟩]|x⟩
  3. P(ancilla=|1⟩) = Σ p(x)·f̃(x) = a (the target expectation)
  4. Iterative Grover amplification refines the estimate
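The inversion behind step 4 can be seen with a noiseless toy: pick a target amplitude `a`, synthesize the probabilities P = sin²((2m+1)θ) that ideal Grover powers would produce, then recover θ by minimizing the squared error on a grid. This is a sketch of the idea only, not the library's refined MLE code:

```python
import math

a_true = 0.3
theta_true = math.asin(math.sqrt(a_true))

# Noiseless "measurements": P = sin^2((2m+1)*theta) for a few Grover powers m
measurements = [(m, math.sin((2 * m + 1) * theta_true) ** 2) for m in (0, 1, 2, 4)]

def score(theta: float) -> float:
    """Sum of squared errors between predicted and observed probabilities."""
    return sum((math.sin((2 * m + 1) * theta) ** 2 - p) ** 2 for m, p in measurements)

# Grid search over theta in [0, pi/2]; the m=0 measurement pins the branch uniquely
grid = [i * (math.pi / 2) / 2000 for i in range(2001)]
a_est = math.sin(min(grid, key=score)) ** 2
```

Higher Grover powers make sin²((2m+1)θ) oscillate faster in θ, which is precisely what sharpens the estimate — and why the production code scales its grid resolution with the largest power before refining.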

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `probs` | `ndarray` | Probability distribution (2^n_qubits,). | required |
| `payoffs` | `ndarray` | Payoff values per state (2^n_qubits,). | required |
| `n_qubits` | `int` | Number of qubits encoding the distribution. | required |
| `n_estimation` | `int` | Number of Grover power rounds (precision). | `4` |
| `seed` | `int \| None` | Random seed. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `tuple[float, int]` | (estimated_expectation, total_grover_iterations). |

Source code in quanta/layer3/monte_carlo.py
def amplitude_estimate(
    probs: np.ndarray,
    payoffs: np.ndarray,
    n_qubits: int,
    n_estimation: int = 4,
    seed: int | None = None,
) -> tuple[float, int]:
    """Quantum amplitude estimation (Brassard-Hoyer-Mosca-Tapp).

    Estimates a = E[f(X)] = Σ p(x)·f̃(x) using quantum circuits.

    Circuit structure:
      1. Prepare |ψ⟩ = Σ √p(x)|x⟩ on data register
      2. Controlled-R_y on ancilla: encodes f̃(x) as P(ancilla=|1⟩|x⟩)
         Result: Σ √p(x)[√f̃(x)|1⟩ + √(1-f̃(x))|0⟩]|x⟩
      3. P(ancilla=|1⟩) = Σ p(x)·f̃(x) = a (the target expectation)
      4. Iterative Grover amplification refines the estimate

    Args:
        probs: Probability distribution (2^n_qubits,).
        payoffs: Payoff values per state (2^n_qubits,).
        n_qubits: Number of qubits encoding the distribution.
        n_estimation: Number of Grover power rounds (precision).
        seed: Random seed.

    Returns:
        (estimated_expectation, total_grover_iterations).
    """
    dim = 2 ** n_qubits

    # Prepare distribution amplitudes: |ψ⟩ = Σ √p(x)|x⟩
    amplitudes = np.sqrt(np.maximum(probs, 0))
    norm = np.linalg.norm(amplitudes)
    if norm > 0:
        amplitudes /= norm

    # Normalize payoffs to [0, 1] for amplitude encoding
    max_payoff = float(np.max(np.abs(payoffs))) if np.any(payoffs != 0) else 1.0
    f_norm = np.clip(payoffs / max_payoff, 0, 1)

    # Build initial state: ancilla + data register
    # |Ψ_init⟩ = Σ √p(x) [cos(θ_x)|0⟩ + sin(θ_x)|1⟩] ⊗ |x⟩
    # where sin²(θ_x) = f̃(x), so P(ancilla=1) = Σ p(x)·f̃(x) = a
    n_total = 1 + n_qubits
    total_dim = 2 ** n_total

    init_state = np.zeros(total_dim, dtype=complex)
    for x in range(dim):
        theta_x = math.asin(math.sqrt(max(0, min(1, f_norm[x]))))
        # ancilla=0 basis: indices [0, dim)
        init_state[x] = amplitudes[x] * math.cos(theta_x)
        # ancilla=1 basis: indices [dim, 2*dim)
        init_state[x + dim] = amplitudes[x] * math.sin(theta_x)

    # Direct measurement → a = P(ancilla=|1⟩)
    a_direct = float(np.sum(np.abs(init_state[dim:]) ** 2))

    # Iterative amplitude estimation with phase-wrapping correction
    # After m Grover iterations: P(good) = sin²((2m+1)θ)
    # where sin²(θ) = a. We need to invert this correctly.
    n_grover_total = 0
    theta_direct = math.asin(math.sqrt(max(0, min(1, a_direct))))

    # Collect (m_k, p_measured) pairs for maximum likelihood
    measurements: list[tuple[int, float]] = [(0, a_direct)]

    for k in range(n_estimation):
        m_k = 2 ** k
        n_grover_total += m_k

        # Apply G^m_k to initial state
        current = init_state.copy()
        for _ in range(m_k):
            # Oracle: flip sign of "good" states (ancilla=|1⟩)
            current[dim:] *= -1
            # Diffusion: 2|Ψ_init⟩⟨Ψ_init| - I
            overlap = np.vdot(init_state, current)
            current = 2 * overlap * init_state - current

        # Measure P(ancilla=|1⟩) = sin²((2m_k+1)θ)
        p_one = float(np.sum(np.abs(current[dim:]) ** 2))
        measurements.append((m_k, p_one))

    # Maximum likelihood estimation of θ across all measurements
    # For each measurement (m, p), P = sin²((2m+1)θ)
    # Find θ that best explains all observations
    best_theta = theta_direct
    best_score = float("inf")

    # Grid resolution scales with Grover power for quantum-correct precision
    max_m = max(m for m, _ in measurements)
    n_candidates = max(200, 50 * (2 * max_m + 1))

    def _mle_score(theta: float) -> float:
        """Sum of squared errors across all measurements."""
        return sum(
            (math.sin((2 * mk + 1) * theta) ** 2 - pm) ** 2
            for mk, pm in measurements
        )

    # Phase 1: Coarse grid search
    for i in range(n_candidates + 1):
        theta_cand = (i / n_candidates) * (math.pi / 2)
        score = _mle_score(theta_cand)
        if score < best_score:
            best_score = score
            best_theta = theta_cand

    # Phase 2: Golden-section refinement around best grid point
    # This gives continuous precision — no grid snapping artifacts
    grid_step = (math.pi / 2) / n_candidates
    lo = max(0.0, best_theta - grid_step)
    hi = min(math.pi / 2, best_theta + grid_step)
    gr = (math.sqrt(5) + 1) / 2  # golden ratio

    for _ in range(60):  # 60 iterations → ~1e-13 precision
        if hi - lo < 1e-14:
            break
        c = hi - (hi - lo) / gr
        d = lo + (hi - lo) / gr
        if _mle_score(c) < _mle_score(d):
            hi = d
        else:
            lo = c

    best_theta = (lo + hi) / 2
    best_a = math.sin(best_theta) ** 2

    # Scale back
    estimated = best_a * max_payoff
    return estimated, n_grover_total

compute_greeks

compute_greeks(
    spot: float = 100.0,
    strike: float = 105.0,
    volatility: float = 0.2,
    rate: float = 0.05,
    time_to_expiry: float = 1.0,
    payoff: str = "european_call",
    n_samples: int = 100000,
    seed: int = 42,
) -> GreeksResult

Compute option Greeks using finite-difference Monte Carlo.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `spot` | `float` | Current asset price (S0). | `100.0` |
| `strike` | `float` | Strike price (K). | `105.0` |
| `volatility` | `float` | Annualized volatility (σ). | `0.2` |
| `rate` | `float` | Risk-free interest rate (r). | `0.05` |
| `time_to_expiry` | `float` | Time to expiry in years (T). | `1.0` |
| `payoff` | `str` | "european_call" or "european_put". | `'european_call'` |
| `n_samples` | `int` | Number of classical MC samples. | `100000` |
| `seed` | `int` | Random seed. | `42` |

Returns:

| Type | Description |
|------|-------------|
| `GreeksResult` | GreeksResult with delta, gamma, vega, theta, rho. |

Example

```python
from quanta.layer3.monte_carlo import compute_greeks

g = compute_greeks(spot=100, strike=105, volatility=0.2)
print(g.summary())
```

Source code in quanta/layer3/monte_carlo.py
def compute_greeks(
    spot: float = 100.0,
    strike: float = 105.0,
    volatility: float = 0.2,
    rate: float = 0.05,
    time_to_expiry: float = 1.0,
    payoff: str = "european_call",
    n_samples: int = 100_000,
    seed: int = 42,
) -> GreeksResult:
    """Compute option Greeks using finite-difference Monte Carlo.

    Args:
        spot: Current asset price (S0).
        strike: Strike price (K).
        volatility: Annualized volatility (σ).
        rate: Risk-free interest rate (r).
        time_to_expiry: Time to expiry in years (T).
        payoff: "european_call" or "european_put".
        n_samples: Number of classical MC samples.
        seed: Random seed.

    Returns:
        GreeksResult with delta, gamma, vega, theta, rho.

    Example:
        >>> from quanta.layer3.monte_carlo import compute_greeks
        >>> g = compute_greeks(spot=100, strike=105, volatility=0.2)
        >>> print(g.summary())
    """
    rng = np.random.default_rng(seed)
    h_S = spot * 0.01       # 1% bump for spot
    h_sigma = 0.001         # 0.1% bump for vol
    h_T = 1 / 365           # 1 day for theta
    h_r = 0.0001            # 1 bps for rho

    def _price(S: float, sigma: float, T: float, r: float) -> float:
        params = {"S0": S, "K": strike, "sigma": sigma, "T": T, "r": r}
        return _classical_monte_carlo(
            "lognormal", payoff, params, n_samples, rng,
        )

    V0 = _price(spot, volatility, time_to_expiry, rate)

    # Delta: ∂V/∂S (central difference)
    V_up = _price(spot + h_S, volatility, time_to_expiry, rate)
    V_dn = _price(spot - h_S, volatility, time_to_expiry, rate)
    delta = (V_up - V_dn) / (2 * h_S)

    # Gamma: ∂²V/∂S² (central second difference)
    gamma = (V_up - 2 * V0 + V_dn) / (h_S ** 2)

    # Vega: ∂V/∂σ
    V_vol_up = _price(spot, volatility + h_sigma, time_to_expiry, rate)
    V_vol_dn = _price(spot, volatility - h_sigma, time_to_expiry, rate)
    vega = (V_vol_up - V_vol_dn) / (2 * h_sigma)

    # Theta: -∂V/∂T (time decay, negative by convention)
    if time_to_expiry > h_T:
        V_T_dn = _price(spot, volatility, time_to_expiry - h_T, rate)
        theta = -(V0 - V_T_dn) / h_T
    else:
        theta = 0.0

    # Rho: ∂V/∂r
    V_r_up = _price(spot, volatility, time_to_expiry, rate + h_r)
    V_r_dn = _price(spot, volatility, time_to_expiry, rate - h_r)
    rho = (V_r_up - V_r_dn) / (2 * h_r)

    return GreeksResult(
        delta=delta, gamma=gamma, vega=vega,
        theta=theta, rho=rho,
    )
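Since the sampler here is lognormal, the finite-difference delta can be cross-checked against the Black-Scholes closed form N(d1), which is exact for a European call under that model. A standalone sketch, independent of the library (for the defaults S0=100, K=105, σ=0.2, r=0.05, T=1 it should land close to the Monte Carlo delta):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_delta(S: float, K: float, sigma: float, r: float, T: float) -> float:
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

delta_bs = bs_call_delta(100.0, 105.0, 0.2, 0.05, 1.0)
```

Agreement within the Monte Carlo noise band (which shrinks as 1/√n_samples) is a useful sanity check on the bump sizes chosen for the finite differences.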

quantum_monte_carlo

quantum_monte_carlo(
    distribution: str = "lognormal",
    payoff: str = "european_call",
    params: dict | None = None,
    n_qubits: int = 6,
    n_estimation: int = 4,
    n_classical: int = 100000,
    seed: int | None = None,
) -> MonteCarloResult

Quantum Monte Carlo estimation using amplitude estimation.

Estimates E[f(X)] where X follows a probability distribution and f is a payoff function. Uses quantum amplitude estimation for quadratic speedup over classical Monte Carlo.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `distribution` | `str` | Probability distribution type. Options: "lognormal", "normal", "uniform". | `'lognormal'` |
| `payoff` | `str` | Payoff function type. Options: "european_call", "european_put", "expectation", "var". | `'european_call'` |
| `params` | `dict \| None` | Distribution and payoff parameters. For lognormal/options: S0, K, sigma, T, r. For normal: mean, std. | `None` |
| `n_qubits` | `int` | Qubits for distribution encoding (precision). | `6` |
| `n_estimation` | `int` | Estimation register qubits (accuracy). | `4` |
| `n_classical` | `int` | Classical Monte Carlo samples for comparison. | `100000` |
| `seed` | `int \| None` | Random seed. | `None` |

Returns:

| Type | Description |
|------|-------------|
| `MonteCarloResult` | MonteCarloResult with quantum and classical estimates. |

Example

```python
result = quantum_monte_carlo(
    distribution="lognormal",
    payoff="european_call",
    params={"S0": 100, "K": 105, "sigma": 0.2, "T": 1.0, "r": 0.05},
)
print(result.estimated_value)
```

Source code in quanta/layer3/monte_carlo.py
def quantum_monte_carlo(
    distribution: str = "lognormal",
    payoff: str = "european_call",
    params: dict | None = None,
    n_qubits: int = 6,
    n_estimation: int = 4,
    n_classical: int = 100_000,
    seed: int | None = None,
) -> MonteCarloResult:
    """Quantum Monte Carlo estimation using amplitude estimation.

    Estimates E[f(X)] where X follows a probability distribution
    and f is a payoff function. Uses quantum amplitude estimation
    for quadratic speedup over classical Monte Carlo.

    Args:
        distribution: Probability distribution type.
            Options: "lognormal", "normal", "uniform".
        payoff: Payoff function type.
            Options: "european_call", "european_put", "expectation", "var".
        params: Distribution and payoff parameters.
            For lognormal/options: S0, K, sigma, T, r.
            For normal: mean, std.
        n_qubits: Qubits for distribution encoding (precision).
        n_estimation: Estimation register qubits (accuracy).
        n_classical: Classical Monte Carlo samples for comparison.
        seed: Random seed.

    Returns:
        MonteCarloResult with quantum and classical estimates.

    Example:
        >>> result = quantum_monte_carlo(
        ...     distribution="lognormal",
        ...     payoff="european_call",
        ...     params={"S0": 100, "K": 105, "sigma": 0.2, "T": 1.0, "r": 0.05},
        ... )
        >>> print(result.estimated_value)
    """
    if params is None:
        params = {"S0": 100, "K": 105, "sigma": 0.2, "T": 1.0, "r": 0.05}

    if n_qubits < 2 or n_qubits > 12:
        raise ValueError(f"n_qubits must be in [2,12], given: {n_qubits}")

    rng = np.random.default_rng(seed)

    # Encode distribution
    probs = _encode_distribution(n_qubits, distribution, params, rng)

    # Compute payoffs
    payoffs = _compute_payoff(n_qubits, payoff, params, probs)

    # Quantum amplitude estimation
    quantum_est, grover_iters = amplitude_estimate(
        probs, payoffs, n_qubits, n_estimation, seed
    )

    # Classical Monte Carlo for comparison
    classical_est = _classical_monte_carlo(
        distribution, payoff, params, n_classical, rng
    )

    # Confidence interval (quantum)
    # Precision scales as O(1/M) where M = total Grover iterations
    precision = 1.0 / max(grover_iters, 1)
    max_payoff = np.max(np.abs(payoffs)) if np.any(payoffs != 0) else 1.0
    ci_half = precision * max_payoff
    ci = (quantum_est - ci_half, quantum_est + ci_half)

    # Speedup: quantum uses O(M) queries vs classical O(1/ε²)
    classical_queries = n_classical
    quantum_queries = grover_iters * (2 ** n_qubits)
    speedup = classical_queries / max(quantum_queries, 1)

    return MonteCarloResult(
        estimated_value=quantum_est,
        classical_value=classical_est,
        confidence_interval=ci,
        num_qubits=n_qubits + n_estimation,
        grover_iterations=grover_iters,
        speedup_factor=speedup,
        distribution_params=params,
    )