Metrics Module

The progridpy.metrics package implements a five-stage pipeline for computing financial and risk metrics from ISO trade data.

Pipeline Overview

ISOTradeSchema --> CanonicalSeries --> MetricsCalculator --> MetricsEngine --> MetricsPlots
| Stage | Module | Responsibility |
| --- | --- | --- |
| 1. Schema | schema | Adapts ISO-specific DataFrames to canonical columns (interval_start_local, gain_normalized, date, etc.) |
| 2. Series | series | Constructs the CanonicalSeries -- daily gains, cumulative gains, and drawdowns -- from the standardized DataFrame |
| 3. Calculator | calculator | Computes 50+ metrics organized into slots=True dataclasses: RiskMetrics, ReturnMetrics, RatioMetrics, DrawdownMetrics, StreakMetrics |
| 4. Engine | engine | Public facade exposing .compute(), .metrics_df, .plots, and convenience properties |
| 5. Visualization | visualization | Plotly-based interactive charts (performance, drawdown, histogram, rolling returns) |

ISO Registry

The iso_registry sub-module maintains a mapping of ISO names to their ISOTradeSchema instances. Register a new ISO with register_iso() and retrieve its schema with get_iso_schema().
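The registry pattern can be sketched with a plain mapping. This is an illustration of the pattern only, not the package's actual implementation; DummySchema is a hypothetical stand-in for ISOTradeSchema:

```python
class DummySchema:
    """Hypothetical stand-in for ISOTradeSchema (which carries column
    maps, a gain normalizer, and validations)."""

    def __init__(self, iso_name: str) -> None:
        self.iso_name = iso_name


_REGISTRY: dict[str, DummySchema] = {}


def register_iso(name: str, schema: DummySchema) -> None:
    """Map an ISO name to its schema instance."""
    _REGISTRY[name.upper()] = schema


def get_iso_schema(name: str) -> DummySchema:
    """Look up a registered schema; raises KeyError if unknown."""
    return _REGISTRY[name.upper()]


register_iso("MISO", DummySchema("MISO"))
schema = get_iso_schema("miso")  # lookup is case-insensitive in this sketch
```

Whether the real registry normalizes case is an assumption here; check iso_registry for its actual lookup semantics.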

Quick Start

from progridpy.metrics import MetricsEngine

engine = MetricsEngine(iso_name="MISO", df_iso=trade_df)
engine.compute()

print(engine.estimated_risk)
print(engine.sharpe_overall)
fig = engine.plots.performance_chart()

ProgridPy Metrics Package - Financial and risk metrics for ISO trade data.

Classes

DrawdownMetrics dataclass

DrawdownMetrics(worst_daily_loss: float, worst_daily_loss_date: Timestamp, worst_7d_loss: float, worst_7d_loss_start_date: Timestamp, worst_7d_loss_end_date: Timestamp, worst_1m_30d_loss: float, worst_1m_30d_loss_start_date: Timestamp, worst_1m_30d_loss_end_date: Timestamp, worst_3m_90d_loss: float, worst_3m_90d_loss_start_date: Timestamp, worst_3m_90d_loss_end_date: Timestamp, worst_6m_180d_loss: float, worst_6m_180d_loss_start_date: Timestamp, worst_6m_180d_loss_end_date: Timestamp, worst_12m_365d_loss: float, worst_12m_365d_loss_start_date: Timestamp, worst_12m_365d_loss_end_date: Timestamp, worst_daily_loss_over_estimated_risk: float, worst_7d_loss_over_estimated_risk: float, worst_1m_30d_loss_over_estimated_risk: float, worst_3m_90d_loss_over_estimated_risk: float, worst_6m_180d_loss_over_estimated_risk: float, worst_12m_365d_loss_over_estimated_risk: float, max_drawdown: float, max_drawdown_over_estimated_risk: float)

Drawdown and worst loss metrics.

MetricsCalculator

MetricsCalculator(df_std: DataFrame, schema: ISOTradeSchema)

Calculator for computing all metrics from trade data.

This class orchestrates the computation of all metric groups using a standardized DataFrame and ISO schema.

Initialize the calculator.

Args:
    df_std: Standardized DataFrame (output of schema.adapt_frame)
    schema: ISO schema used for adaptation

Source code in src/progridpy/metrics/calculator.py
def __init__(self, df_std: pd.DataFrame, schema: ISOTradeSchema) -> None:
    """
    Initialize the calculator.

    Args:
        df_std: Standardized DataFrame (output of schema.adapt_frame)
        schema: ISO schema used for adaptation
    """
    self._df_std = df_std
    self._schema = schema
    self._canonical: CanonicalSeries | None = None

Functions

from_iso_frame classmethod
from_iso_frame(df_iso: DataFrame, iso_name: str) -> MetricsCalculator

Create a MetricsCalculator from an ISO-specific DataFrame.

Args:
    df_iso: ISO-specific trade DataFrame
    iso_name: Name of the ISO (e.g., "MISO")

Returns: MetricsCalculator instance

Source code in src/progridpy/metrics/calculator.py
@classmethod
def from_iso_frame(cls, df_iso: pd.DataFrame, iso_name: str) -> MetricsCalculator:
    """
    Create a MetricsCalculator from an ISO-specific DataFrame.

    Args:
        df_iso: ISO-specific trade DataFrame
        iso_name: Name of the ISO (e.g., "MISO")

    Returns:
        MetricsCalculator instance
    """
    schema = get_iso_schema(iso_name)
    df_std = schema.adapt_frame(df_iso)
    return cls(df_std, schema)
filtered_frame
filtered_frame() -> DataFrame

Return the DataFrame filtered to cleared trades only.

Filter: cleared == True

Returns: Filtered DataFrame

Source code in src/progridpy/metrics/calculator.py
def filtered_frame(self) -> pd.DataFrame:
    """
    Return the DataFrame filtered to cleared trades only.

    Filter: cleared == True

    Returns:
        Filtered DataFrame
    """
    return self._df_std[self._df_std["cleared"]]
canonical_series
canonical_series() -> CanonicalSeries

Build and cache the canonical time series.

Returns: CanonicalSeries with daily_gains, cumulative_gains, and drawdowns

Source code in src/progridpy/metrics/calculator.py
def canonical_series(self) -> CanonicalSeries:
    """
    Build and cache the canonical time series.

    Returns:
        CanonicalSeries with daily_gains, cumulative_gains, and drawdowns
    """
    if self._canonical is None:
        self._canonical = build_canonical_series(self.filtered_frame())
    return self._canonical
calculate_risk_metrics
calculate_risk_metrics() -> RiskMetrics

Calculate risk metrics.

Includes:

- Estimated Risk
- CVaR 1%, CVaR 5%
- VaR 99%, VaR 95%, VaR 90%
- Capital Required (ISO-specific)

Returns: RiskMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_risk_metrics(self) -> RiskMetrics:
    """
    Calculate risk metrics.

    Includes:
    - Estimated Risk
    - CVaR 1%, CVaR 5%
    - VaR 99%, VaR 95%, VaR 90%
    - Capital Required (ISO-specific)

    Returns:
        RiskMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Estimated Risk: 2 × P₀.₀₅(daily) + P₀.₀₅(90d) + P₀.₀₅(180d)
    # Using 0.05th percentile (0.0005 quantile)
    quantile = 0.0005
    q05_daily = float(np.quantile(daily, quantile))
    rolling_90 = rolling_period_sums(daily, 90).dropna()
    rolling_180 = rolling_period_sums(daily, 180).dropna()
    # Handle insufficient data for rolling windows
    q05_90d = float(np.quantile(rolling_90, quantile)) if len(rolling_90) > 0 else 0.0
    q05_180d = float(np.quantile(rolling_180, quantile)) if len(rolling_180) > 0 else 0.0
    # Estimated Risk: Sum of quantiles (no clamping - can be positive or negative)
    estimated_risk = 2 * q05_daily + q05_90d + q05_180d

    # CVaR (Conditional Value at Risk) - average of worst X% daily gains
    sorted_gains = np.sort(daily.to_numpy())
    n = len(sorted_gains)
    cutoff_1 = max(1, int(np.ceil(0.01 * n)))
    cutoff_5 = max(1, int(np.ceil(0.05 * n)))
    cvar_1 = min(0.0, float(sorted_gains[:cutoff_1].mean()))
    cvar_5 = min(0.0, float(sorted_gains[:cutoff_5].mean()))

    # VaR (Value at Risk) - quantiles of daily gains
    var_99 = min(0.0, float(np.quantile(daily, 0.01)))  # 1% left-tail
    var_95 = min(0.0, float(np.quantile(daily, 0.05)))  # 5% left-tail
    var_90 = min(0.0, float(np.quantile(daily, 0.10)))  # 10% left-tail

    # Capital Required (ISO-specific)
    capital_required = self._schema.compute_capital_requirement(self.filtered_frame())

    return RiskMetrics(
        estimated_risk=estimated_risk,
        cvar_1=cvar_1,
        cvar_5=cvar_5,
        var_99=var_99,
        var_95=var_95,
        var_90=var_90,
        capital_required=capital_required,
    )
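The Estimated Risk formula in the source above can be reproduced standalone. rolling_period_sums is assumed here to be a trailing fixed-window sum over the daily gains series (a plausible reading of how it is called, not the package's actual helper):

```python
import numpy as np
import pandas as pd


def rolling_period_sums(daily: pd.Series, window: int) -> pd.Series:
    """Assumed behavior: sum of gains over each trailing window of days."""
    return daily.rolling(window).sum()


# Synthetic daily gains on a daily DatetimeIndex
rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-01", periods=400, freq="D")
daily = pd.Series(rng.normal(100.0, 500.0, size=len(idx)), index=idx)

q = 0.0005  # 0.05th percentile, as in the source
q_daily = float(np.quantile(daily, q))
q_90 = float(np.quantile(rolling_period_sums(daily, 90).dropna(), q))
q_180 = float(np.quantile(rolling_period_sums(daily, 180).dropna(), q))

# Estimated Risk: 2 x P0.05(daily) + P0.05(90d) + P0.05(180d)
estimated_risk = 2 * q_daily + q_90 + q_180
```

Note that `.dropna()` discards the first `window - 1` positions, so short histories shrink (or empty) the rolling samples; the source guards that case by falling back to 0.0.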
calculate_return_metrics
calculate_return_metrics() -> ReturnMetrics

Calculate return metrics.

Includes:

- Mean/Median Daily Return
- Mean/Median Monthly Return (30d)
- Mean/Median Annual Return (365d)
- Calendar Year Returns

Returns: ReturnMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_return_metrics(self) -> ReturnMetrics:
    """
    Calculate return metrics.

    Includes:
    - Mean/Median Daily Return
    - Mean/Median Monthly Return (30d)
    - Mean/Median Annual Return (365d)
    - Calendar Year Returns

    Returns:
        ReturnMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Daily returns
    mean_daily = float(daily.mean())
    median_daily = float(daily.median())

    # Monthly (30d rolling)
    rolling_30 = rolling_period_sums(daily, 30).dropna()
    mean_monthly = float(rolling_30.mean())
    median_monthly = float(rolling_30.median())

    # Annual (365d rolling)
    rolling_365 = rolling_period_sums(daily, 365).dropna()
    mean_annual = float(rolling_365.mean())
    median_annual = float(rolling_365.median())

    # Calendar returns - group by year and sum
    calendar_returns: dict[int, float] = daily.groupby(daily.index.year).sum().to_dict()

    return ReturnMetrics(
        mean_daily=mean_daily,
        median_daily=median_daily,
        mean_monthly_30d=mean_monthly,
        median_monthly_30d=median_monthly,
        mean_annual_365d=mean_annual,
        median_annual_365d=median_annual,
        calendar_returns=calendar_returns,
    )
calculate_ratio_metrics
calculate_ratio_metrics(risk: RiskMetrics, returns: ReturnMetrics) -> RatioMetrics

Calculate ratio metrics (returns divided by estimated risk).

Args:
    risk: Previously computed RiskMetrics
    returns: Previously computed ReturnMetrics

Returns: RatioMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_ratio_metrics(self, risk: RiskMetrics, returns: ReturnMetrics) -> RatioMetrics:
    """
    Calculate ratio metrics (returns divided by estimated risk).

    Args:
        risk: Previously computed RiskMetrics
        returns: Previously computed ReturnMetrics

    Returns:
        RatioMetrics dataclass
    """
    er = risk.estimated_risk
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Sharpe Ratio: mean / std (no risk-free rate, annualized by sqrt(365))
    # Use ddof=1 (sample std) for consistency across all calculations
    annualization_factor = np.sqrt(365)
    mu = float(daily.mean())
    sigma = float(daily.std(ddof=1))
    sharpe_overall = (mu / sigma * annualization_factor) if sigma != 0 else np.nan

    # Sortino Ratio: mean / downside_std (annualized by sqrt(365))
    downside = daily[daily < 0]
    sigma_down = float(downside.std(ddof=1)) if len(downside) > 0 else np.nan
    sortino_overall = (mu / sigma_down * annualization_factor) if sigma_down and sigma_down != 0 else np.nan

    # Sharpe/Sortino by year (annualized by sqrt(365))
    # Note: Use ddof=1 (sample std) for all calculations
    sharpe_by_year: dict[int, float] = {}
    sortino_by_year: dict[int, float] = {}
    for yr, gains in daily.groupby(daily.index.year):
        yr_mu = float(gains.mean())
        yr_sigma = float(gains.std(ddof=1))
        sharpe_by_year[yr] = (yr_mu / yr_sigma * annualization_factor) if yr_sigma != 0 else np.nan

        yr_downside = gains[gains < 0]
        yr_sigma_down = float(yr_downside.std(ddof=1)) if len(yr_downside) > 0 else np.nan
        sortino_by_year[yr] = (
            (yr_mu / yr_sigma_down * annualization_factor) if yr_sigma_down and yr_sigma_down != 0 else np.nan
        )

    # Calendar annual ratios - total of all calendar year gains / estimated risk
    total_calendar_gain = sum(returns.calendar_returns.values())
    # Denominator must be positive risk scale
    risk_scale = abs(er) if er != 0 else np.nan

    mean_annual_calendar_over_er = total_calendar_gain / risk_scale if risk_scale else np.nan

    median_annual_calendar_over_er = (
        total_calendar_gain / risk_scale if risk_scale else np.nan
    )  # Same as mean per spec

    return RatioMetrics(
        mean_daily_over_estimated_risk=returns.mean_daily / risk_scale if risk_scale else np.nan,
        median_daily_over_estimated_risk=returns.median_daily / risk_scale if risk_scale else np.nan,
        mean_monthly_over_estimated_risk=returns.mean_monthly_30d / risk_scale if risk_scale else np.nan,
        median_monthly_over_estimated_risk=returns.median_monthly_30d / risk_scale if risk_scale else np.nan,
        mean_annual_over_estimated_risk=returns.mean_annual_365d / risk_scale if risk_scale else np.nan,
        median_annual_over_estimated_risk=returns.median_annual_365d / risk_scale if risk_scale else np.nan,
        sharpe_overall=sharpe_overall,
        sortino_overall=sortino_overall,
        sharpe_by_year=sharpe_by_year,
        sortino_by_year=sortino_by_year,
        mean_annual_calendar_over_estimated_risk=mean_annual_calendar_over_er,
        median_annual_calendar_over_estimated_risk=median_annual_calendar_over_er,
    )
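As a standalone check of the annualization convention used above (sqrt(365), sample standard deviation with ddof=1, no risk-free rate), on synthetic gains:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
daily = pd.Series(rng.normal(50.0, 200.0, size=365))

ann = np.sqrt(365)
mu = float(daily.mean())
sigma = float(daily.std(ddof=1))  # sample std, matching the source
sharpe = mu / sigma * ann if sigma != 0 else np.nan

downside = daily[daily < 0]  # only losing days enter the Sortino denominator
sigma_down = float(downside.std(ddof=1)) if len(downside) > 1 else np.nan
sortino = mu / sigma_down * ann if sigma_down and sigma_down != 0 else np.nan
```

Because gains here are dollar amounts rather than percentage returns, these ratios are scale-free only in the sense that numerator and denominator share units; they are comparable across strategies but are not conventional return-based Sharpe figures.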
calculate_drawdown_metrics
calculate_drawdown_metrics(risk: RiskMetrics) -> DrawdownMetrics

Calculate drawdown metrics.

Args:
    risk: Previously computed RiskMetrics (for risk-adjusted ratios)

Returns: DrawdownMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_drawdown_metrics(self, risk: RiskMetrics) -> DrawdownMetrics:
    """
    Calculate drawdown metrics.

    Args:
        risk: Previously computed RiskMetrics (for risk-adjusted ratios)

    Returns:
        DrawdownMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains
    er = risk.estimated_risk
    risk_scale = abs(er) if er != 0 else np.nan

    def get_worst_window_dates(rolling_series: pd.Series, window_days: int) -> tuple[pd.Timestamp, pd.Timestamp]:
        """Get start and end dates for the worst loss window."""
        if len(rolling_series) == 0:
            return pd.NaT, pd.NaT
        end_date = pd.Timestamp(rolling_series.idxmin())
        start_date = end_date - pd.Timedelta(days=window_days - 1)
        return start_date, end_date

    # Worst losses for different rolling periods
    worst_daily_loss = float(daily.min())
    worst_daily_loss_date = pd.Timestamp(daily.idxmin())

    rolling_7 = rolling_period_sums(daily, 7).dropna()
    worst_7d_loss = float(rolling_7.min()) if len(rolling_7) > 0 else 0.0
    worst_7d_start, worst_7d_end = get_worst_window_dates(rolling_7, 7) if len(rolling_7) > 0 else (pd.NaT, pd.NaT)

    rolling_30 = rolling_period_sums(daily, 30).dropna()
    worst_1m_30d_loss = float(rolling_30.min()) if len(rolling_30) > 0 else 0.0
    worst_1m_30d_start, worst_1m_30d_end = (
        get_worst_window_dates(rolling_30, 30) if len(rolling_30) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_90 = rolling_period_sums(daily, 90).dropna()
    worst_3m_90d_loss = float(rolling_90.min()) if len(rolling_90) > 0 else 0.0
    worst_3m_90d_start, worst_3m_90d_end = (
        get_worst_window_dates(rolling_90, 90) if len(rolling_90) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_180 = rolling_period_sums(daily, 180).dropna()
    worst_6m_180d_loss = float(rolling_180.min()) if len(rolling_180) > 0 else 0.0
    worst_6m_180d_start, worst_6m_180d_end = (
        get_worst_window_dates(rolling_180, 180) if len(rolling_180) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_365 = rolling_period_sums(daily, 365).dropna()
    worst_12m_365d_loss = float(rolling_365.min()) if len(rolling_365) > 0 else 0.0
    worst_12m_365d_start, worst_12m_365d_end = (
        get_worst_window_dates(rolling_365, 365) if len(rolling_365) > 0 else (pd.NaT, pd.NaT)
    )

    # Max drawdown from cumulative gains series
    max_drawdown = float(cs.drawdowns.min())

    return DrawdownMetrics(
        worst_daily_loss=worst_daily_loss,
        worst_daily_loss_date=worst_daily_loss_date,
        worst_7d_loss=worst_7d_loss,
        worst_7d_loss_start_date=worst_7d_start,
        worst_7d_loss_end_date=worst_7d_end,
        worst_1m_30d_loss=worst_1m_30d_loss,
        worst_1m_30d_loss_start_date=worst_1m_30d_start,
        worst_1m_30d_loss_end_date=worst_1m_30d_end,
        worst_3m_90d_loss=worst_3m_90d_loss,
        worst_3m_90d_loss_start_date=worst_3m_90d_start,
        worst_3m_90d_loss_end_date=worst_3m_90d_end,
        worst_6m_180d_loss=worst_6m_180d_loss,
        worst_6m_180d_loss_start_date=worst_6m_180d_start,
        worst_6m_180d_loss_end_date=worst_6m_180d_end,
        worst_12m_365d_loss=worst_12m_365d_loss,
        worst_12m_365d_loss_start_date=worst_12m_365d_start,
        worst_12m_365d_loss_end_date=worst_12m_365d_end,
        worst_daily_loss_over_estimated_risk=worst_daily_loss / risk_scale if risk_scale else np.nan,
        worst_7d_loss_over_estimated_risk=worst_7d_loss / risk_scale if risk_scale else np.nan,
        worst_1m_30d_loss_over_estimated_risk=worst_1m_30d_loss / risk_scale if risk_scale else np.nan,
        worst_3m_90d_loss_over_estimated_risk=worst_3m_90d_loss / risk_scale if risk_scale else np.nan,
        worst_6m_180d_loss_over_estimated_risk=worst_6m_180d_loss / risk_scale if risk_scale else np.nan,
        worst_12m_365d_loss_over_estimated_risk=worst_12m_365d_loss / risk_scale if risk_scale else np.nan,
        max_drawdown=max_drawdown,
        max_drawdown_over_estimated_risk=max_drawdown / risk_scale if risk_scale else np.nan,
    )
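The max drawdown here is read off the CanonicalSeries drawdowns field. A plausible construction of that field (assumed, since build_canonical_series is defined elsewhere) is cumulative gains minus their running peak:

```python
import pandas as pd

daily = pd.Series(
    [100.0, -40.0, -80.0, 30.0, 60.0],
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)
cumulative = daily.cumsum()            # 100, 60, -20, 10, 70
peak = cumulative.cummax()             # running high-water mark: 100 throughout
drawdowns = cumulative - peak          # 0, -40, -120, -90, -30
max_drawdown = float(drawdowns.min())  # -120.0
```

Under this construction drawdowns are always <= 0, so `.min()` picks the deepest trough below the prior peak.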
calculate_streak_metrics
calculate_streak_metrics(risk: RiskMetrics) -> StreakMetrics

Calculate streak and probability metrics.

Args:
    risk: Previously computed RiskMetrics

Returns: StreakMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_streak_metrics(self, risk: RiskMetrics) -> StreakMetrics:
    """
    Calculate streak and probability metrics.

    Args:
        risk: Previously computed RiskMetrics

    Returns:
        StreakMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains
    cumulative = cs.cumulative_gains
    drawdowns = cs.drawdowns

    # Win rate: percentage of positive days
    win_rate_pct = 100.0 * float((daily > 0).mean())

    # Percent of periods with loss
    rolling_30 = rolling_period_sums(daily, 30).dropna()
    pct_1m_loss = 100.0 * float((rolling_30 < 0).mean())

    rolling_90 = rolling_period_sums(daily, 90).dropna()
    pct_3m_loss = 100.0 * float((rolling_90 < 0).mean())

    rolling_180 = rolling_period_sums(daily, 180).dropna()
    pct_6m_loss = 100.0 * float((rolling_180 < 0).mean())

    rolling_365 = rolling_period_sums(daily, 365).dropna()
    pct_12m_loss = 100.0 * float((rolling_365 < 0).mean())

    # Worst any period (max drawdown period) - find start and end dates
    # End date: where drawdown hits minimum
    worst_dd_idx = drawdowns.idxmin()
    worst_any_period_end_date = pd.Timestamp(worst_dd_idx)

    # Start date: find the peak before the trough
    # The peak is where cumulative_gains equals its cummax just before the trough
    peak = cumulative.cummax()
    # Find the last date where cumulative == peak before worst_dd_idx
    pre_trough = cumulative.loc[:worst_dd_idx]
    peak_before_trough = peak.loc[:worst_dd_idx]
    # Find where they were equal (at a peak)
    at_peak = pre_trough == peak_before_trough
    if at_peak.any():
        peak_dates = pre_trough.index[at_peak]
        worst_any_period_start_date = pd.Timestamp(peak_dates[-1])
    else:
        worst_any_period_start_date = pd.Timestamp(cumulative.index[0])

    worst_any_period_length_days = (worst_any_period_end_date - worst_any_period_start_date).days

    # Longest period in loss: longest consecutive run where cumulative < peak
    in_loss = cumulative < peak
    # Find runs using diff and cumsum (vectorized)
    # Create groups where each new "not in loss" starts a new group
    not_in_loss = ~in_loss
    group_ids = not_in_loss.cumsum()
    # For periods in loss, find the longest run
    loss_groups = in_loss.groupby(group_ids)

    longest_start = pd.Timestamp(cumulative.index[0])
    longest_end = pd.Timestamp(cumulative.index[0])
    longest_length = 0

    for _group_id, group in loss_groups:
        if group.any():  # This group has loss periods
            # Get only the True values (in loss)
            loss_dates = group[group].index
            if len(loss_dates) > 0:
                start = loss_dates[0]
                end = loss_dates[-1]
                length = (end - start).days
                if length > longest_length:
                    longest_length = length
                    longest_start = pd.Timestamp(start)
                    longest_end = pd.Timestamp(end)

    # Calculate total loss during the longest loss period
    longest_period_loss = 0.0
    if longest_length > 0:
        longest_period_loss = float(daily.loc[longest_start:longest_end].sum())

    return StreakMetrics(
        worst_any_period_start_date=worst_any_period_start_date,
        worst_any_period_end_date=worst_any_period_end_date,
        worst_any_period_length_days=worst_any_period_length_days,
        longest_period_loss=longest_period_loss,
        longest_loss_period_start_date=longest_start,
        longest_loss_period_end_date=longest_end,
        longest_loss_period_length_days=longest_length,
        win_rate_pct=win_rate_pct,
        pct_1m_periods_with_loss=pct_1m_loss,
        pct_3m_periods_with_loss=pct_3m_loss,
        pct_6m_periods_with_loss=pct_6m_loss,
        pct_12m_periods_with_loss=pct_12m_loss,
    )
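The run-detection trick in the source above (a cumsum over the complement mask assigns one group id per loss run) can be seen on a tiny series:

```python
import pandas as pd

cumulative = pd.Series(
    [1.0, 3.0, 2.0, 1.0, 4.0, 3.0],
    index=pd.date_range("2024-01-01", periods=6, freq="D"),
)
peak = cumulative.cummax()        # 1, 3, 3, 3, 4, 4
in_loss = cumulative < peak       # F, F, T, T, F, T

# Each "not in loss" day bumps the counter, so every consecutive
# run of loss days shares a single group id.
group_ids = (~in_loss).cumsum()   # 1, 2, 2, 2, 3, 3

# (group id, number of loss days) for each group containing losses
runs = [(int(gid), int(g.sum())) for gid, g in in_loss.groupby(group_ids) if g.any()]
```

Here the two loss runs (days 3-4 and day 6) land in groups 2 and 3 with lengths 2 and 1, which is exactly what the source's loop scans for when it picks the longest run.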
calculate_all
calculate_all() -> MetricsResult

Calculate all metrics in the correct dependency order.

Returns: MetricsResult containing all metric groups

Source code in src/progridpy/metrics/calculator.py
def calculate_all(self) -> MetricsResult:
    """
    Calculate all metrics in the correct dependency order.

    Returns:
        MetricsResult containing all metric groups
    """
    risk = self.calculate_risk_metrics()
    returns = self.calculate_return_metrics()

    # Populate calendar_returns_risk_adjusted after we have estimated_risk
    er = risk.estimated_risk
    risk_scale = abs(er) if er != 0 else np.nan
    returns.calendar_returns_risk_adjusted = {
        yr: gain / risk_scale if risk_scale else np.nan for yr, gain in returns.calendar_returns.items()
    }

    ratios = self.calculate_ratio_metrics(risk, returns)
    drawdowns = self.calculate_drawdown_metrics(risk)
    streaks = self.calculate_streak_metrics(risk)

    return MetricsResult(
        risk=risk,
        returns=returns,
        ratios=ratios,
        drawdowns=drawdowns,
        streaks=streaks,
    )

MetricsResult dataclass

MetricsResult(risk: RiskMetrics, returns: ReturnMetrics, ratios: RatioMetrics, drawdowns: DrawdownMetrics, streaks: StreakMetrics)

Container for all computed metrics.

Functions

to_frame
to_frame() -> DataFrame

Flatten all metrics into a DataFrame with 'metric' and 'value' columns.

Returns: DataFrame with columns ['metric', 'value']

Source code in src/progridpy/metrics/calculator.py
def to_frame(self) -> pd.DataFrame:
    """
    Flatten all metrics into a DataFrame with 'metric' and 'value' columns.

    Returns:
        DataFrame with columns ['metric', 'value']
    """
    rows: list[tuple[str, float | str | int | None]] = []

    # Risk Metrics
    rows.append(("Estimated Risk", self.risk.estimated_risk))
    rows.append(("CVaR 1%", self.risk.cvar_1))
    rows.append(("CVaR 5%", self.risk.cvar_5))
    rows.append(("VaR 99%", self.risk.var_99))
    rows.append(("VaR 95%", self.risk.var_95))
    rows.append(("VaR 90%", self.risk.var_90))
    if self.risk.capital_required is not None:
        rows.append(("Capital Required (MISO, $)", self.risk.capital_required))

    # Return Metrics
    rows.append(("Mean Daily Return", self.returns.mean_daily))
    rows.append(("Median Daily Return", self.returns.median_daily))
    rows.append(("Mean Monthly Return (30d)", self.returns.mean_monthly_30d))
    rows.append(("Median Monthly Return (30d)", self.returns.median_monthly_30d))
    rows.append(("Mean Annual Return (365d)", self.returns.mean_annual_365d))
    rows.append(("Median Annual Return (365d)", self.returns.median_annual_365d))

    # Ratio Metrics
    rows.append(("Mean Daily / Estimated Risk", self.ratios.mean_daily_over_estimated_risk))
    rows.append(("Median Daily / Estimated Risk", self.ratios.median_daily_over_estimated_risk))
    rows.append(("Mean Monthly / Estimated Risk", self.ratios.mean_monthly_over_estimated_risk))
    rows.append(("Median Monthly / Estimated Risk", self.ratios.median_monthly_over_estimated_risk))
    rows.append(("Mean Annual / Estimated Risk", self.ratios.mean_annual_over_estimated_risk))
    rows.append(("Median Annual / Estimated Risk", self.ratios.median_annual_over_estimated_risk))
    rows.append(("Sharpe Overall", self.ratios.sharpe_overall))
    rows.append(("Sortino Overall", self.ratios.sortino_overall))

    # Sharpe/Sortino by year
    for year in sorted(self.ratios.sharpe_by_year.keys()):
        rows.append((f"Sharpe ({year})", self.ratios.sharpe_by_year[year]))
    for year in sorted(self.ratios.sortino_by_year.keys()):
        rows.append((f"Sortino ({year})", self.ratios.sortino_by_year[year]))

    # Calendar annual ratios
    rows.append(
        ("Mean Annual Return (calendar) / Estimated Risk", self.ratios.mean_annual_calendar_over_estimated_risk)
    )
    rows.append(
        ("Median Annual Return (calendar) / Estimated Risk", self.ratios.median_annual_calendar_over_estimated_risk)
    )

    # Drawdown Metrics
    rows.append(("Worst Daily Loss", self.drawdowns.worst_daily_loss))
    rows.append(("Worst Daily Loss Date", str(self.drawdowns.worst_daily_loss_date)))
    rows.append(("Worst 7d Loss", self.drawdowns.worst_7d_loss))
    rows.append(("Worst 7d Loss Start Date", str(self.drawdowns.worst_7d_loss_start_date)))
    rows.append(("Worst 7d Loss End Date", str(self.drawdowns.worst_7d_loss_end_date)))
    rows.append(("Worst 1m Loss (30d)", self.drawdowns.worst_1m_30d_loss))
    rows.append(("Worst 1m Loss Start Date", str(self.drawdowns.worst_1m_30d_loss_start_date)))
    rows.append(("Worst 1m Loss End Date", str(self.drawdowns.worst_1m_30d_loss_end_date)))
    rows.append(("Worst 3m Loss (90d)", self.drawdowns.worst_3m_90d_loss))
    rows.append(("Worst 3m Loss Start Date", str(self.drawdowns.worst_3m_90d_loss_start_date)))
    rows.append(("Worst 3m Loss End Date", str(self.drawdowns.worst_3m_90d_loss_end_date)))
    rows.append(("Worst 6m Loss (180d)", self.drawdowns.worst_6m_180d_loss))
    rows.append(("Worst 6m Loss Start Date", str(self.drawdowns.worst_6m_180d_loss_start_date)))
    rows.append(("Worst 6m Loss End Date", str(self.drawdowns.worst_6m_180d_loss_end_date)))
    rows.append(("Worst 12m Loss (365d)", self.drawdowns.worst_12m_365d_loss))
    rows.append(("Worst 12m Loss Start Date", str(self.drawdowns.worst_12m_365d_loss_start_date)))
    rows.append(("Worst 12m Loss End Date", str(self.drawdowns.worst_12m_365d_loss_end_date)))
    rows.append(("Worst Daily Loss / Estimated Risk", self.drawdowns.worst_daily_loss_over_estimated_risk))
    rows.append(("Worst 7d Loss / Estimated Risk", self.drawdowns.worst_7d_loss_over_estimated_risk))
    rows.append(("Worst 1m Loss / Estimated Risk", self.drawdowns.worst_1m_30d_loss_over_estimated_risk))
    rows.append(("Worst 3m Loss / Estimated Risk", self.drawdowns.worst_3m_90d_loss_over_estimated_risk))
    rows.append(("Worst 6m Loss / Estimated Risk", self.drawdowns.worst_6m_180d_loss_over_estimated_risk))
    rows.append(("Worst 12m Loss / Estimated Risk", self.drawdowns.worst_12m_365d_loss_over_estimated_risk))
    rows.append(("Worst Any Period Loss (Max Drawdown)", self.drawdowns.max_drawdown))
    rows.append(("Worst Any Period Loss / Estimated Risk", self.drawdowns.max_drawdown_over_estimated_risk))

    # Streak Metrics
    rows.append(("Worst Any Period Start Date", str(self.streaks.worst_any_period_start_date)))
    rows.append(("Worst Any Period End Date", str(self.streaks.worst_any_period_end_date)))
    rows.append(("Worst Any Period Length (days)", self.streaks.worst_any_period_length_days))
    rows.append(("Longest Period Loss", self.streaks.longest_period_loss))
    rows.append(("Longest Period in Loss Start Date", str(self.streaks.longest_loss_period_start_date)))
    rows.append(("Longest Period in Loss End Date", str(self.streaks.longest_loss_period_end_date)))
    rows.append(("Longest Period in Loss Length (days)", self.streaks.longest_loss_period_length_days))
    rows.append(("Win Rate (%)", self.streaks.win_rate_pct))
    rows.append(("Percent of 1m periods with loss (%)", self.streaks.pct_1m_periods_with_loss))
    rows.append(("Percent of 3m periods with loss (%)", self.streaks.pct_3m_periods_with_loss))
    rows.append(("Percent of 6m periods with loss (%)", self.streaks.pct_6m_periods_with_loss))
    rows.append(("Percent of 12m periods with loss (%)", self.streaks.pct_12m_periods_with_loss))

    # Annual Returns (calendar) by year
    for year in sorted(self.returns.calendar_returns.keys()):
        rows.append((f"Annual Returns ({year})", self.returns.calendar_returns[year]))

    # Annual Returns Risk-Adjusted by year
    for year in sorted(self.returns.calendar_returns_risk_adjusted.keys()):
        rows.append((f"Annual Returns Risk-Adjusted ({year})", self.returns.calendar_returns_risk_adjusted[year]))

    # Gains by Calendar Year (dynamic - only years with data)
    for year in sorted(self.returns.calendar_returns.keys()):
        rows.append((f"Gains ({year})", self.returns.calendar_returns[year]))

    df = pd.DataFrame(rows, columns=["metric", "value"])
    df["value"] = df["value"].astype(str)  # Ensure consistent type for Arrow serialization
    return df
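Because to_frame emits a long-format, string-typed frame, consumers typically filter by metric name and cast back. A minimal illustration of that round trip (hypothetical values):

```python
import pandas as pd

rows = [("Estimated Risk", -1234.5), ("Win Rate (%)", 57.8)]
df = pd.DataFrame(rows, columns=["metric", "value"])
df["value"] = df["value"].astype(str)  # matches to_frame's Arrow-friendly cast

# Select one metric and recover its numeric value
win_rate = float(df.loc[df["metric"] == "Win Rate (%)", "value"].iloc[0])
```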

RatioMetrics dataclass

RatioMetrics(mean_daily_over_estimated_risk: float, median_daily_over_estimated_risk: float, mean_monthly_over_estimated_risk: float, median_monthly_over_estimated_risk: float, mean_annual_over_estimated_risk: float, median_annual_over_estimated_risk: float, sharpe_overall: float, sortino_overall: float, sharpe_by_year: dict[int, float], sortino_by_year: dict[int, float], mean_annual_calendar_over_estimated_risk: float, median_annual_calendar_over_estimated_risk: float)

Ratio metrics (returns divided by estimated risk).

ReturnMetrics dataclass

ReturnMetrics(mean_daily: float, median_daily: float, mean_monthly_30d: float, median_monthly_30d: float, mean_annual_365d: float, median_annual_365d: float, calendar_returns: dict[int, float], calendar_returns_risk_adjusted: dict[int, float] = dict())

Return-related metrics.

RiskMetrics dataclass

RiskMetrics(estimated_risk: float, cvar_1: float, cvar_5: float, var_99: float, var_95: float, var_90: float, capital_required: float | None)

Risk-related metrics.

StreakMetrics dataclass

StreakMetrics(worst_any_period_start_date: Timestamp, worst_any_period_end_date: Timestamp, worst_any_period_length_days: int, longest_loss_period_start_date: Timestamp, longest_loss_period_end_date: Timestamp, longest_loss_period_length_days: int, longest_period_loss: float, win_rate_pct: float, pct_1m_periods_with_loss: float, pct_3m_periods_with_loss: float, pct_6m_periods_with_loss: float, pct_12m_periods_with_loss: float)

Streak and probability metrics.

MetricsEngine dataclass

MetricsEngine(iso_name: str, df_iso: DataFrame, _calculator: MetricsCalculator | None = None, result: MetricsResult | None = None)

High-level façade for computing metrics from ISO trade data.

This class provides a simple interface for users and tests to compute and access metrics without dealing with the underlying calculator details.

Usage:

    engine = MetricsEngine(iso_name="MISO", df_iso=trade_df)
    engine.compute()

    # Access results
    metrics_df = engine.metrics_df
    risk = engine.estimated_risk

Attributes

metrics_df property
metrics_df: DataFrame

Get metrics as a DataFrame.

Returns: DataFrame with 'metric' and 'value' columns

plots property
plots: MetricsPlots

Get the MetricsPlots instance for visualization.

Returns: MetricsPlots configured with the canonical series

estimated_risk property
estimated_risk: float

Get the estimated risk value.

sharpe_overall property
sharpe_overall: float

Get the overall Sharpe ratio.

sortino_overall property
sortino_overall: float

Get the overall Sortino ratio.

win_rate_pct property
win_rate_pct: float

Get the win rate percentage.

max_drawdown property
max_drawdown: float

Get the maximum drawdown.

calendar_returns property
calendar_returns: dict[int, float]

Get the calendar returns by year.

Functions

compute
compute() -> MetricsResult

Compute all metrics.

Returns: MetricsResult containing all computed metrics

Source code in src/progridpy/metrics/engine.py
def compute(self) -> MetricsResult:
    """
    Compute all metrics.

    Returns:
        MetricsResult containing all computed metrics
    """
    if self.result is not None:
        return self.result
    self._calculator = MetricsCalculator.from_iso_frame(self.df_iso, self.iso_name)
    self.result = self._calculator.calculate_all()
    return self.result

ISOTradeSchema dataclass

ISOTradeSchema(iso_name: str, column_map: Mapping[str, str], gain_normalizer: GainNormalizer, capital_requirement_fn: CapitalRequirementFn | None = None, extra_validations: list[Callable[[DataFrame], None]] = list())

Schema definition for adapting ISO-specific trade DataFrames to canonical format.

Attributes:

- iso_name: Name of the ISO (e.g., "MISO", "ERCOT", "SPP")
- column_map: Mapping from canonical column names to ISO-specific column names
- gain_normalizer: Callable that computes gain_normalized from the DataFrame
- capital_requirement_fn: Optional callable to compute capital requirement
- extra_validations: List of validation functions to run on the adapted DataFrame
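A schema boils down to a column_map plus a gain normalizer. A minimal sketch of the mapping mechanics, assuming invented ISO column names ("HE_START", "PNL_USD", "MW") and invented canonical names ("gain", "volume_mw") besides the documented interval_start_local and gain_normalized:

```python
import pandas as pd

# Hypothetical raw ISO frame; real ISO column names will differ.
df_iso = pd.DataFrame(
    {
        "HE_START": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 01:00"]),
        "PNL_USD": [120.0, -45.0],
        "MW": [10.0, 10.0],
    }
)

# canonical name -> ISO-specific name, the direction column_map uses
column_map = {"interval_start_local": "HE_START", "gain": "PNL_USD", "volume_mw": "MW"}

# adapt_frame renames with the inverse mapping (ISO name -> canonical name)
inverse_map = {v: k for k, v in column_map.items()}
df = df_iso.rename(columns=inverse_map).copy()

# A gain_normalizer computes gain_normalized from the renamed frame.
df["gain_normalized"] = df["gain"] / df["volume_mw"]
```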

Functions

adapt_frame
adapt_frame(df_iso: DataFrame) -> DataFrame

Adapt an ISO-specific DataFrame to the canonical format.

Steps:

1. Validate required columns exist (using self.column_map.values())
2. Rename ISO columns to canonical names (inverse mapping)
3. Enforce dtypes (datetime64[ns], booleans, floats)
4. Create 'gain_normalized' via self.gain_normalizer(df_renamed)
5. Add 'date' = df_renamed["interval_start_local"].dt.date
6. Add constant column 'iso' = self.iso_name
7. Run any extra_validations
8. Return a new DataFrame (do not mutate input)

Args: df_iso: ISO-specific DataFrame

Returns: Standardized DataFrame with canonical columns

Source code in src/progridpy/metrics/schema.py
def adapt_frame(self, df_iso: pd.DataFrame) -> pd.DataFrame:
    """
    Adapt an ISO-specific DataFrame to the canonical format.

    Steps:
    1. Validate required columns exist (using self.column_map.values())
    2. Rename ISO columns to canonical names (inverse mapping)
    3. Enforce dtypes (datetime64[ns], booleans, floats)
    4. Create 'gain_normalized' via self.gain_normalizer(df_renamed)
    5. Add 'date' = df_renamed["interval_start_local"].dt.date
    6. Add constant column 'iso' = self.iso_name
    7. Run any extra_validations
    8. Return a new DataFrame (do not mutate input)

    Args:
        df_iso: ISO-specific DataFrame

    Returns:
        Standardized DataFrame with canonical columns
    """
    # 1. Validate required columns exist
    required_cols = set(self.column_map.values())
    missing = required_cols - set(df_iso.columns)
    if missing:
        raise ValueError(f"Missing columns: {missing}")

    # 2. Create copy and rename columns (inverse mapping)
    inverse_map = {v: k for k, v in self.column_map.items()}
    df = df_iso.rename(columns=inverse_map).copy()

    # 3. Ensure datetime type
    df["interval_start_local"] = pd.to_datetime(df["interval_start_local"])

    # 4. Create gain_normalized
    df["gain_normalized"] = self.gain_normalizer(df)

    # 5. Add date column
    df["date"] = df["interval_start_local"].dt.date

    # 6. Add iso column
    df["iso"] = self.iso_name

    # 7. Run validations
    for validation in self.extra_validations:
        validation(df)

    return df
compute_capital_requirement
compute_capital_requirement(df_std: DataFrame) -> float | None

Compute the capital requirement for the given standardized DataFrame.

Args: df_std: Standardized DataFrame (should be filtered to cleared trades)

Returns: Capital requirement value, or None if no capital_requirement_fn is defined

Source code in src/progridpy/metrics/schema.py
def compute_capital_requirement(self, df_std: pd.DataFrame) -> float | None:
    """
    Compute the capital requirement for the given standardized DataFrame.

    Args:
        df_std: Standardized DataFrame (should be filtered to cleared trades)

    Returns:
        Capital requirement value, or None if no capital_requirement_fn is defined
    """
    if self.capital_requirement_fn is None:
        return None
    return self.capital_requirement_fn(df_std)

CanonicalSeries dataclass

CanonicalSeries(daily_gains: Series, cumulative_gains: Series, drawdowns: Series, daily_gains_supply: Series | None = None, daily_gains_demand: Series | None = None, cumulative_gains_supply: Series | None = None, cumulative_gains_demand: Series | None = None, drawdowns_supply: Series | None = None, drawdowns_demand: Series | None = None)

Container for canonical daily time series used in metrics computation.

Attributes:

- daily_gains: Series indexed by date with sum of gain_normalized per day
- cumulative_gains: Cumulative sum of daily_gains
- drawdowns: cumulative_gains - cumulative_gains.cummax() (always <= 0)
- daily_gains_supply: Optional series for supply (is_supply=True) gains
- daily_gains_demand: Optional series for demand (is_supply=False) gains
- cumulative_gains_supply: Optional cumulative sum of supply gains
- cumulative_gains_demand: Optional cumulative sum of demand gains
- drawdowns_supply: Optional drawdowns for supply
- drawdowns_demand: Optional drawdowns for demand
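The three core series follow directly from the daily gains; a minimal pandas sketch of the construction described above, on toy data:

```python
import pandas as pd

# Toy daily gains indexed by date (sum of gain_normalized per day).
daily_gains = pd.Series(
    [100.0, -40.0, 25.0, -90.0, 60.0],
    index=pd.date_range("2024-01-01", periods=5, freq="D"),
)

cumulative_gains = daily_gains.cumsum()
# Drawdown: distance below the running peak, so it is always <= 0.
drawdowns = cumulative_gains - cumulative_gains.cummax()
```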

MetricsPlots dataclass

MetricsPlots(canonical: CanonicalSeries)

Visualization utilities for metrics.

Uses Plotly to create interactive performance charts and histograms.

Functions

performance_chart
performance_chart(show_total: bool = True, show_supply: bool = False, show_demand: bool = False) -> Figure

Create a 3-pane performance chart.

Panes (share x-axis by date):

1. Top: Cumulative gains (line) - can show total, supply, demand
2. Middle: Drawdowns (filled area below 0) - updates based on selection
3. Bottom: Daily gains (bars) - updates based on selection

When a single series is selected (supply or demand only), the drawdown and daily gains panes show that specific series. Otherwise, they show the total/combined data.

Args:

- show_total: Show total cumulative gains (default True)
- show_supply: Show supply-only cumulative gains
- show_demand: Show demand-only cumulative gains

Returns: Plotly Figure

Source code in src/progridpy/metrics/visualization.py
def performance_chart(
    self,
    show_total: bool = True,
    show_supply: bool = False,
    show_demand: bool = False,
) -> go.Figure:
    """
    Create a 3-pane performance chart.

    Panes (share x-axis by date):
    1. Top: Cumulative gains (line) - can show total, supply, demand
    2. Middle: Drawdowns (filled area below 0) - updates based on selection
    3. Bottom: Daily gains (bars) - updates based on selection

    When a single series is selected (supply or demand only), the drawdown
    and daily gains panes show that specific series. Otherwise, they show
    the total/combined data.

    Args:
        show_total: Show total cumulative gains (default True)
        show_supply: Show supply-only cumulative gains
        show_demand: Show demand-only cumulative gains

    Returns:
        Plotly Figure
    """
    # Determine which series are active
    active_series: list[str] = []
    if show_total:
        active_series.append("total")
    if show_supply and self.canonical.cumulative_gains_supply is not None:
        active_series.append("supply")
    if show_demand and self.canonical.cumulative_gains_demand is not None:
        active_series.append("demand")

    # Determine which data to use for drawdowns and daily gains panes
    # Mode 1: Total (all three flags True) - Show all curves in panel 1, total in panels 2 & 3
    # Mode 2: Supply only (show_supply True, others False) - Show supply in all panels
    # Mode 3: Demand only (show_demand True, others False) - Show demand in all panels
    # Mode 4: None (all False) - Show nothing
    if show_total and show_supply and show_demand:
        # Total mode: panels 2 & 3 use total data only
        daily_for_bars = self.canonical.daily_gains
        drawdown_for_area = self.canonical.drawdowns
    elif len(active_series) == 1 and active_series[0] == "supply":
        # Supply-only mode
        daily_for_bars = self.canonical.daily_gains_supply
        drawdown_for_area = self.canonical.drawdowns_supply
    elif len(active_series) == 1 and active_series[0] == "demand":
        # Demand-only mode
        daily_for_bars = self.canonical.daily_gains_demand
        drawdown_for_area = self.canonical.drawdowns_demand
    elif not active_series:
        # None mode: don't set data (will handle below by skipping trace addition)
        daily_for_bars = None
        drawdown_for_area = None
    else:
        # Fallback to total (shouldn't happen with new logic)
        daily_for_bars = self.canonical.daily_gains
        drawdown_for_area = self.canonical.drawdowns

    # Create subplots with shared x-axis
    fig = make_subplots(
        rows=3,
        cols=1,
        shared_xaxes=True,
        vertical_spacing=0.08,
        row_heights=[0.5, 0.25, 0.25],
        subplot_titles=("Cumulative Gains", "Drawdowns", "Daily Gains"),
    )

    # Top pane: Cumulative gains line chart(s)
    if show_total:
        fig.add_trace(
            go.Scatter(
                x=self.canonical.cumulative_gains.index,
                y=self.canonical.cumulative_gains.values,
                mode="lines",
                name="Total",
                line={"color": COLORS["primary"], "width": 2},
                fill="tozeroy",
                fillcolor="rgba(59, 130, 246, 0.1)",
                hovertemplate="Date: %{x}<br>Total: $%{y:,.2f}<extra></extra>",
            ),
            row=1,
            col=1,
        )

    # Supply cumulative gains (if available and selected)
    if show_supply and self.canonical.cumulative_gains_supply is not None:
        fig.add_trace(
            go.Scatter(
                x=self.canonical.cumulative_gains_supply.index,
                y=self.canonical.cumulative_gains_supply.values,
                mode="lines",
                name="Supply",
                line={"color": "#10B981", "width": 2},  # Green
                hovertemplate="Date: %{x}<br>Supply: $%{y:,.2f}<extra></extra>",
            ),
            row=1,
            col=1,
        )

    # Demand cumulative gains (if available and selected)
    if show_demand and self.canonical.cumulative_gains_demand is not None:
        fig.add_trace(
            go.Scatter(
                x=self.canonical.cumulative_gains_demand.index,
                y=self.canonical.cumulative_gains_demand.values,
                mode="lines",
                name="Demand",
                line={"color": "#F59E0B", "width": 2},  # Orange/Amber
                hovertemplate="Date: %{x}<br>Demand: $%{y:,.2f}<extra></extra>",
            ),
            row=1,
            col=1,
        )

    # Middle pane: Drawdowns filled area (responds to selection)
    if drawdown_for_area is not None:
        fig.add_trace(
            go.Scatter(
                x=drawdown_for_area.index,
                y=drawdown_for_area.values,
                mode="lines",
                name="Drawdowns",
                line={"color": COLORS["loss"], "width": 1.5},
                fill="tozeroy",
                fillcolor="rgba(239, 68, 68, 0.3)",
                hovertemplate="Date: %{x}<br>Drawdown: $%{y:,.2f}<extra></extra>",
                showlegend=False,
            ),
            row=2,
            col=1,
        )

    # Bottom pane: Daily gains bar chart (responds to selection)
    if daily_for_bars is not None:
        colors = [COLORS["gain"] if v >= 0 else COLORS["loss"] for v in daily_for_bars.values]
        fig.add_trace(
            go.Bar(
                x=daily_for_bars.index,
                y=daily_for_bars.values,
                name="Daily Gains",
                marker_color=colors,
                hovertemplate="Date: %{x}<br>Daily: $%{y:,.2f}<extra></extra>",
                showlegend=False,
            ),
            row=3,
            col=1,
        )

    # Show legend if multiple traces are shown in cumulative pane
    show_legend = sum([show_total, show_supply, show_demand]) > 1

    # Update layout for clean, modern look
    # Legend positioned below chart to avoid overlap with plotly modebar
    fig.update_layout(
        template="plotly_white",
        showlegend=show_legend,
        legend={
            "orientation": "h",
            "yanchor": "top",
            "y": -0.08,
            "xanchor": "center",
            "x": 0.5,
        },
        height=750,
        margin={"l": 60, "r": 30, "t": 80, "b": 80},  # Increased bottom margin for legend
        font={"family": "Inter, system-ui, sans-serif", "size": 14},
        hovermode="x unified",
    )

    # Update axes for each subplot
    fig.update_xaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        showline=True,
        linewidth=1,
        linecolor=COLORS["grid"],
    )

    fig.update_yaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        showline=True,
        linewidth=1,
        linecolor=COLORS["grid"],
        tickformat="$,.0f",
    )

    # Update subplot titles styling
    for annotation in fig.layout.annotations:
        annotation.font = {"size": 17, "color": COLORS["secondary"]}

    return fig
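
The flag-to-mode mapping used for panes 2 and 3 can be isolated as a small pure function. A simplified sketch of that selection logic, ignoring the availability checks on the supply/demand series:

```python
def panel_mode(show_total: bool, show_supply: bool, show_demand: bool) -> str:
    """Decide which series panes 2 & 3 (drawdowns, daily gains) display."""
    active = [
        name
        for name, flag in [
            ("total", show_total),
            ("supply", show_supply),
            ("demand", show_demand),
        ]
        if flag
    ]
    if not active:
        return "none"        # nothing selected: panes stay empty
    if active in (["supply"], ["demand"]):
        return active[0]     # a single non-total series follows the selection
    return "total"           # any mix involving total falls back to total data
```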
gains_histogram
gains_histogram(frequency: str, actual_series: Series | None = None) -> Figure

Create a histogram of gains for a given frequency.

Args:

- frequency: One of "daily", "7d", "30d", "90d", "365d"
- actual_series: Optional daily gains series to overlay for comparison (e.g., actual trading data vs backtest). The same frequency transformation will be applied automatically.

Returns: Plotly Figure with histogram

Raises: ValueError: If frequency is not one of the valid options

Source code in src/progridpy/metrics/visualization.py
def gains_histogram(self, frequency: str, actual_series: pd.Series | None = None) -> go.Figure:
    """
    Create a histogram of gains for a given frequency.

    Args:
        frequency: One of "daily", "7d", "30d", "90d", "365d"
        actual_series: Optional daily gains series to overlay for comparison
            (e.g., actual trading data vs backtest). The same frequency
            transformation will be applied automatically.

    Returns:
        Plotly Figure with histogram

    Raises:
        ValueError: If frequency is not one of the valid options
    """
    daily = self.canonical.daily_gains

    if frequency == "daily":
        data = daily
        title = "Daily Gains Distribution"
    elif frequency == "7d":
        data = rolling_period_sums(daily, 7).dropna()
        title = "7-Day Rolling Gains Distribution"
    elif frequency == "30d":
        data = rolling_period_sums(daily, 30).dropna()
        title = "30-Day Rolling Gains Distribution"
    elif frequency == "90d":
        data = rolling_period_sums(daily, 90).dropna()
        title = "90-Day Rolling Gains Distribution"
    elif frequency == "365d":
        data = rolling_period_sums(daily, 365).dropna()
        title = "365-Day Rolling Gains Distribution"
    else:
        raise ValueError(f"Invalid frequency: {frequency}. Must be one of: daily, 7d, 30d, 90d, 365d")

    # Process actual_series with the same frequency transformation
    actual_data: pd.Series | None = None
    if actual_series is not None:
        if frequency == "daily":
            actual_data = actual_series
        elif frequency == "7d":
            actual_data = rolling_period_sums(actual_series, 7).dropna()
        elif frequency == "30d":
            actual_data = rolling_period_sums(actual_series, 30).dropna()
        elif frequency == "90d":
            actual_data = rolling_period_sums(actual_series, 90).dropna()
        elif frequency == "365d":
            actual_data = rolling_period_sums(actual_series, 365).dropna()

    fig = go.Figure()

    # Calculate bin width for KDE scaling (use ~50 bins)
    data_range = data.max() - data.min()
    n_bins = 50
    bin_width = data_range / n_bins
    n_samples = len(data)

    # Modern color palette
    overall_color = "#6366F1"  # Indigo
    year_colors = [
        "#10B981",  # Emerald
        "#F43F5E",  # Rose
        "#F59E0B",  # Amber
        "#0EA5E9",  # Sky
        "#8B5CF6",  # Violet
        "#EC4899",  # Pink
        "#14B8A6",  # Teal
    ]

    # Single overall histogram (all data)
    fig.add_trace(
        go.Histogram(
            x=data.values,
            name="Overall",
            marker_color=overall_color,
            opacity=0.85,
            xbins={"size": bin_width},
            hovertemplate="Range: %{x:$,.0f}<br>Count: %{y}<extra></extra>",
        )
    )

    # Add actual series histogram overlay (if provided)
    if actual_data is not None and len(actual_data) > 0:
        fig.add_trace(
            go.Histogram(
                x=actual_data.values,
                name="Actual",
                marker_color="#F97316",  # Orange
                opacity=0.7,
                xbins={"size": bin_width},
                hovertemplate="Actual<br>Range: %{x:$,.0f}<br>Count: %{y}<extra></extra>",
            )
        )

    # Add per-year histograms (hidden by default, toggleable via legend)
    years = sorted(data.index.year.unique())
    for i, year in enumerate(years):
        year_data = data[data.index.year == year]
        color = year_colors[i % len(year_colors)]
        fig.add_trace(
            go.Histogram(
                x=year_data.values,
                name=str(year),
                marker_color=color,
                visible="legendonly",
                opacity=0.8,
                xbins={"size": bin_width},
                hovertemplate=f"{year}<br>Range: %{{x:$,.0f}}<br>Count: %{{y}}<extra></extra>",
            )
        )

    # Add KDE curve (no legend entry, always visible)
    if len(data) >= 2:
        kde = gaussian_kde(data.values)
        x_kde = np.linspace(data.min(), data.max(), 200)
        y_kde = kde(x_kde) * n_samples * bin_width

        fig.add_trace(
            go.Scatter(
                x=x_kde,
                y=y_kde,
                mode="lines",
                name="KDE",
                line={"color": "#F87171", "width": 2.5},
                showlegend=False,
                hoverinfo="skip",
            )
        )

    # Add KDE curve for actual series (if provided)
    if actual_data is not None and len(actual_data) >= 2:
        actual_kde = gaussian_kde(actual_data.values)
        x_kde_actual = np.linspace(actual_data.min(), actual_data.max(), 200)
        actual_n_samples = len(actual_data)
        y_kde_actual = actual_kde(x_kde_actual) * actual_n_samples * bin_width

        fig.add_trace(
            go.Scatter(
                x=x_kde_actual,
                y=y_kde_actual,
                mode="lines",
                name="Actual KDE",
                line={"color": "#FB923C", "width": 2.5},  # Lighter orange
                showlegend=False,
                hoverinfo="skip",
            )
        )

    # Calculate max y for vertical lines (estimate from histogram)
    hist_counts, _ = np.histogram(data.values, bins=n_bins)
    max_y = hist_counts.max() * 1.1

    # Add vertical line at zero (non-toggleable)
    fig.add_vline(x=0, line_width=2, line_dash="dash", line_color=COLORS["secondary"])

    # Mean line (toggleable via legend, label toggles with line)
    mean_val = float(data.mean())
    fig.add_trace(
        go.Scatter(
            x=[mean_val, mean_val, mean_val],
            y=[0, max_y, max_y * 1.08],
            mode="lines+text",
            text=["", "", "Mean"],
            textposition="top center",
            textfont={"size": 10, "color": COLORS["primary"]},
            name=f"Mean: ${mean_val:,.0f}",
            line={"color": COLORS["primary"], "width": 2},
            hoverinfo="skip",
        )
    )

    # Median line (toggleable via legend, label toggles with line)
    median_val = float(data.median())
    # Offset median label to avoid overlap with mean if they're close
    median_textpos = "top right" if abs(mean_val - median_val) < data_range * 0.05 else "top center"
    fig.add_trace(
        go.Scatter(
            x=[median_val, median_val, median_val],
            y=[0, max_y, max_y * 1.08],
            mode="lines+text",
            text=["", "", "Median"],
            textposition=median_textpos,
            textfont={"size": 10, "color": "#8B5CF6"},
            name=f"Median: ${median_val:,.0f}",
            line={"color": "#8B5CF6", "width": 2},
            hoverinfo="skip",
        )
    )

    # Percentile configuration
    percentile_config = [
        ("P1", 1, "#B91C1C"),
        ("P5", 5, "#DC2626"),
        ("P10", 10, "#EF4444"),
        ("P25", 25, "#F59E0B"),
        ("P75", 75, "#22C55E"),
        ("P90", 90, "#16A34A"),
        ("P95", 95, "#15803D"),
        ("P99", 99, "#166534"),
    ]

    # Add percentile lines (toggleable via legend, labels toggle with lines)
    for label, pct, color in percentile_config:
        pct_val = float(data.quantile(pct / 100))
        fig.add_trace(
            go.Scatter(
                x=[pct_val, pct_val, pct_val],
                y=[0, max_y, max_y * 1.08],
                mode="lines+text",
                text=["", "", label],
                textposition="top center",
                textfont={"size": 9, "color": color},
                name=f"{label}: ${pct_val:,.0f}",
                line={"color": color, "width": 1.5, "dash": "dot"},
                hoverinfo="skip",
            )
        )

    # Update layout
    fig.update_layout(
        title={
            "text": title,
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        showlegend=True,
        legend={
            "orientation": "v",
            "yanchor": "top",
            "y": 0.99,
            "xanchor": "right",
            "x": 0.99,
            "bgcolor": "rgba(255,255,255,0.9)",
            "bordercolor": "#E5E7EB",
            "borderwidth": 1,
        },
        height=600,
        margin={"l": 60, "r": 30, "t": 80, "b": 50},
        font={"family": "Inter, system-ui, sans-serif", "size": 14},
        barmode="overlay",
        bargap=0,
        xaxis_title="Gain ($)",
        yaxis_title="Frequency",
    )

    fig.update_xaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        tickformat="$,.0f",
    )

    fig.update_yaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
    )

    return fig
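
rolling_period_sums itself is not listed here; assuming it behaves like a trailing pandas rolling-window sum, the frequency transforms above amount to:

```python
import pandas as pd

daily = pd.Series(
    [10.0, -5.0, 20.0, 15.0, -10.0, 5.0, 25.0, 30.0],
    index=pd.date_range("2024-01-01", periods=8, freq="D"),
)

# Assumed equivalent of rolling_period_sums(daily, 7): sum over each trailing
# 7-day window; the result is NaN until a full window exists, then dropped.
weekly = daily.rolling(window=7).sum().dropna()
```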
cumulative_gains_by_year
cumulative_gains_by_year() -> Figure

Create a line chart showing cumulative gains by year.

Each year is a separate colored line, normalized to start at 0 on day 1.

- X-axis: Day of year (1-365/366)
- Y-axis: Cumulative gain ($)

Returns: Plotly Figure with cumulative gains by year

Source code in src/progridpy/metrics/visualization.py
def cumulative_gains_by_year(self) -> go.Figure:
    """
    Create a line chart showing cumulative gains by year.

    Each year is a separate colored line, normalized to start at 0 on day 1.
    X-axis: Day of year (1-365/366)
    Y-axis: Cumulative gain ($)

    Returns:
        Plotly Figure with cumulative gains by year
    """
    daily = self.canonical.daily_gains

    # Create DataFrame with year and day of year
    df = pd.DataFrame({"gain": daily})
    df["year"] = df.index.year
    df["day_of_year"] = df.index.dayofyear

    fig = go.Figure()

    # Define a color palette for years
    year_colors = [
        "#3B82F6",  # Blue
        "#10B981",  # Emerald
        "#F59E0B",  # Amber
        "#EF4444",  # Red
        "#8B5CF6",  # Purple
        "#EC4899",  # Pink
        "#14B8A6",  # Teal
    ]

    years_sorted = sorted(df["year"].unique())
    for i, year in enumerate(years_sorted):
        year_data = df[df["year"] == year].copy()
        year_data = year_data.sort_values("day_of_year")
        year_data["cumulative"] = year_data["gain"].cumsum()

        color = year_colors[i % len(year_colors)]

        fig.add_trace(
            go.Scatter(
                x=year_data["day_of_year"],
                y=year_data["cumulative"],
                mode="lines",
                name=str(year),
                line={"color": color, "width": 2},
                hovertemplate=f"Year: {year}<br>Day: %{{x}}<br>Cumulative: $%{{y:,.2f}}<extra></extra>",
            )
        )

    fig.update_layout(
        title={
            "text": "Cumulative Gains by Year",
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        height=400,
        margin={"l": 60, "r": 160, "t": 80, "b": 50},
        font={"family": "Inter, system-ui, sans-serif", "size": 14},
        xaxis_title="Day of Year",
        yaxis_title="Cumulative Gain ($)",
        legend={
            "orientation": "v",
            "yanchor": "top",
            "y": 1.0,
            "xanchor": "left",
            "x": 1.02,
            "bgcolor": "rgba(255,255,255,0.9)",
            "bordercolor": "#E5E7EB",
            "borderwidth": 1,
        },
        hovermode="x unified",
    )

    fig.update_xaxes(
        range=[1, 366],
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
    )

    fig.update_yaxes(
        tickformat="$,.0f",
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
    )

    return fig
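
The per-year loop above can also be expressed with a groupby. A compact pandas sketch of the same normalization, where each year restarts its cumulative sum at day 1:

```python
import pandas as pd

daily = pd.Series(
    [50.0, -20.0, 30.0, 10.0],
    index=pd.to_datetime(["2023-12-30", "2023-12-31", "2024-01-01", "2024-01-02"]),
)

df = pd.DataFrame({"gain": daily})
df["year"] = df.index.year
df["day_of_year"] = df.index.dayofyear
# Cumulative sum restarts within each year, so every line starts near 0.
df["cumulative"] = df.groupby("year")["gain"].cumsum()
```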
daily_gains_bar_chart
daily_gains_bar_chart(days: int | str = 30) -> Figure

Create a bar chart showing daily gains for recent days.

Args: days: Number of recent days to show (7, 30, 90) or "latest_year" for all days in the current/latest year

Returns: Plotly Figure with daily gain bars (green=positive, red=negative)

Source code in src/progridpy/metrics/visualization.py
def daily_gains_bar_chart(self, days: int | str = 30) -> go.Figure:
    """
    Create a bar chart showing daily gains for recent days.

    Args:
        days: Number of recent days to show (7, 30, 90) or "latest_year"
              for all days in the current/latest year

    Returns:
        Plotly Figure with daily gain bars (green=positive, red=negative)
    """
    daily = self.canonical.daily_gains

    if days == "latest_year":
        # Get latest year in the data
        latest_year = daily.index.year.max()
        data = daily[daily.index.year == latest_year]
        title = f"Daily Gains - {latest_year}"
    else:
        # Get last N days
        data = daily.tail(int(days))
        title = f"Daily Gains - Last {days} Days"

    # Create colors based on positive/negative values
    colors = [COLORS["gain"] if v >= 0 else COLORS["loss"] for v in data.values]

    fig = go.Figure()

    fig.add_trace(
        go.Bar(
            x=data.index,
            y=data.values,
            marker_color=colors,
            hovertemplate="Date: %{x}<br>Gain: $%{y:,.2f}<extra></extra>",
            name="Daily Gain",
        )
    )

    # Add zero line
    fig.add_hline(
        y=0,
        line_width=1,
        line_dash="solid",
        line_color=COLORS["secondary"],
    )

    # Update layout
    fig.update_layout(
        title={
            "text": title,
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        height=350,
        margin={"l": 60, "r": 30, "t": 80, "b": 50},
        font={"family": "Inter, system-ui, sans-serif", "size": 14},
        xaxis_title="Date",
        yaxis_title="Gain ($)",
        showlegend=False,
    )

    fig.update_xaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        tickformat="%Y-%m-%d",
        tickangle=-45,
    )

    fig.update_yaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        tickformat="$,.0f",
    )

    return fig
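
The two selection branches reduce to an index-year filter and a tail; sketched with pandas on a window spanning a year boundary:

```python
import pandas as pd

daily = pd.Series(
    range(10),
    index=pd.date_range("2023-12-27", periods=10, freq="D"),
    dtype=float,
)

# "latest_year" branch: keep only the most recent calendar year.
latest_year = daily.index.year.max()
latest = daily[daily.index.year == latest_year]

# days=7 branch: last N rows of the daily series.
last_week = daily.tail(7)
```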
daily_gains_heatmap
daily_gains_heatmap(year: int) -> Figure

Create a GitHub-style heatmap showing daily gains for a specific year.

Args: year: The year to display

Returns: Plotly Figure with heatmap (weeks x days of week)

Source code in src/progridpy/metrics/visualization.py
def daily_gains_heatmap(self, year: int) -> go.Figure:
    """
    Create a GitHub-style heatmap showing daily gains for a specific year.

    Args:
        year: The year to display

    Returns:
        Plotly Figure with heatmap (weeks x days of week)
    """
    daily = self.canonical.daily_gains

    # Filter to the specified year
    year_data = daily[daily.index.year == year]

    if len(year_data) == 0:
        # Return empty figure if no data for this year
        fig = go.Figure()
        fig.add_annotation(
            text=f"No data for {year}",
            xref="paper",
            yref="paper",
            x=0.5,
            y=0.5,
            showarrow=False,
        )
        return fig

    # Create a matrix: 7 rows (days of week) x 53 columns (weeks)
    # Initialize with NaN
    heatmap_data = np.full((7, 53), np.nan)

    for date, value in year_data.items():
        week_num = date.isocalendar().week - 1  # 0-indexed
        day_of_week = date.weekday()  # 0=Mon, 6=Sun
        if week_num < 53:  # Handle edge cases
            heatmap_data[day_of_week, week_num] = value

    # Day labels (Mon-Sun)
    day_labels = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

    # Calculate color scale bounds (symmetric around 0)
    max_abs = max(abs(np.nanmin(heatmap_data)), abs(np.nanmax(heatmap_data)))
    if max_abs == 0:
        max_abs = 1  # Avoid division by zero

    fig = go.Figure(
        data=go.Heatmap(
            z=heatmap_data,
            x=list(range(1, 54)),  # Week numbers
            y=day_labels,
            colorscale=[
                [0, "#EF4444"],  # Red for losses
                [0.5, "#F8FAFC"],  # Light gray for neutral
                [1, "#10B981"],  # Green for gains
            ],
            zmid=0,
            zmin=-max_abs,
            zmax=max_abs,
            hovertemplate="Week %{x}<br>%{y}<br>Gain: $%{z:,.0f}<extra></extra>",
            showscale=True,
            colorbar={
                "title": "Gain ($)",
                "tickformat": "$,.0f",
                "len": 0.8,
            },
        )
    )

    fig.update_layout(
        title={
            "text": f"Daily Gains Heatmap - {year}",
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        height=200,
        margin={"l": 60, "r": 80, "t": 50, "b": 30},
        font={"family": "Inter, system-ui, sans-serif", "size": 12},
        xaxis_title="Week",
        yaxis_title="",
    )

    fig.update_xaxes(
        showgrid=False,
        dtick=4,  # Show every 4th week
    )

    fig.update_yaxes(
        showgrid=False,
        autorange="reversed",  # Mon at top
    )

    return fig
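
Each date lands in a (day-of-week, week) cell via the ISO calendar; a small sketch of the mapping the loop above performs (note that ISO week numbering can assign late-December dates to week 1 of the following year, which is why the source guards with `week_num < 53`):

```python
from datetime import date


def heatmap_cell(d: date) -> tuple[int, int]:
    """Return (row, col): weekday row (0=Mon) and 0-indexed ISO week column."""
    week_num = d.isocalendar()[1] - 1  # ISO week number, shifted to 0-indexed
    day_of_week = d.weekday()          # 0=Mon ... 6=Sun
    return day_of_week, week_num
```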
daily_gains_heatmap_combined
daily_gains_heatmap_combined() -> Figure

Create a combined GitHub-style heatmap showing daily gains for all years.

Returns: Plotly Figure with stacked heatmaps (one per year, shared x-axis)

Source code in src/progridpy/metrics/visualization.py
def daily_gains_heatmap_combined(self) -> go.Figure:
    """
    Create a combined GitHub-style heatmap showing daily gains for all years.

    Returns:
        Plotly Figure with stacked heatmaps (one per year, shared x-axis)
    """
    daily = self.canonical.daily_gains
    # Sort years ascending - oldest first in array, then reverse y-axis
    years = sorted(daily.index.year.unique())

    if len(years) == 0:
        fig = go.Figure()
        fig.add_annotation(
            text="No data available",
            xref="paper",
            yref="paper",
            x=0.5,
            y=0.5,
            showarrow=False,
        )
        return fig

    # Use percentiles for color bounds to handle outliers
    # This prevents large outliers from washing out the color scale
    all_values = daily.dropna().values
    if len(all_values) > 0:
        p5 = np.percentile(all_values, 5)
        p95 = np.percentile(all_values, 95)
        color_bound = max(abs(p5), abs(p95))
        if color_bound == 0:
            color_bound = 1
    else:
        color_bound = 1

    # Build combined matrix: stack all years vertically (7 rows per year)
    # Years in ascending order, days Mon(0) to Sun(6) within each year
    total_rows = 7 * len(years)
    combined_data = np.full((total_rows, 53), np.nan)
    # Store dates for hover display
    date_labels = [[None for _ in range(53)] for _ in range(total_rows)]

    for year_idx, year in enumerate(years):
        year_data = daily[daily.index.year == year]
        row_offset = year_idx * 7

        for date, value in year_data.items():
            week_num = date.isocalendar().week - 1  # 0-indexed
            day_of_week = date.weekday()  # 0=Mon, 6=Sun
            if week_num < 53:
                combined_data[row_offset + day_of_week, week_num] = value
                date_labels[row_offset + day_of_week][week_num] = date.strftime("%Y-%m-%d")

    # GitHub-style discrete color scale with fewer, darker colors
    colorscale = [
        [0, "#991B1B"],  # Dark red - large losses
        [0.3, "#DC2626"],  # Red - medium losses
        [0.45, "#FCA5A5"],  # Light red - small losses
        [0.5, "#E5E7EB"],  # Gray - neutral
        [0.55, "#86EFAC"],  # Light green - small gains
        [0.7, "#22C55E"],  # Green - medium gains
        [1, "#166534"],  # Dark green - large gains
    ]

    # Y-axis: use numeric values, then set custom tick labels for years
    y_values = list(range(total_rows))

    fig = go.Figure(
        data=go.Heatmap(
            z=combined_data,
            x=list(range(1, 54)),
            y=y_values,
            colorscale=colorscale,
            zmid=0,
            zmin=-color_bound,
            zmax=color_bound,
            xgap=2,  # Gap between cells for GitHub style
            ygap=2,
            customdata=date_labels,
            hovertemplate="%{customdata}<br>Gain: $%{z:,.0f}<extra></extra>",
            hoverongaps=False,  # Don't show hover for empty cells
            showscale=True,
            colorbar={
                "title": "Gain ($)",
                "tickformat": "$,.0f",
                "len": 0.7,
                "y": 0.5,
                "thickness": 15,
            },
        )
    )

    # Set year labels at center of each 7-row section
    tick_positions = [year_idx * 7 + 3 for year_idx in range(len(years))]
    tick_labels = [str(year) for year in years]

    fig.update_layout(
        title={
            "text": "Daily Gains Heatmap",
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        height=18 * total_rows + 100,
        margin={"l": 60, "r": 100, "t": 80, "b": 40},
        font={"family": "Inter, system-ui, sans-serif", "size": 12},
        xaxis_title="Week",
        yaxis_title="",
    )

    fig.update_xaxes(
        showgrid=False,
        dtick=4,
        constrain="domain",
        zeroline=False,
    )
    fig.update_yaxes(
        showgrid=False,
        tickmode="array",
        tickvals=tick_positions,
        ticktext=tick_labels,
        autorange="reversed",
        zeroline=False,
        scaleanchor="x",
        scaleratio=1,
    )

    return fig
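The matrix-building loop above can be sketched in isolation: each date maps to a cell whose row is `year_offset + weekday` and whose column is `ISO week - 1`. A minimal, self-contained version (the helper name `build_heatmap_matrix` is illustrative, not part of the package):

```python
import numpy as np
import pandas as pd

def build_heatmap_matrix(daily: pd.Series) -> np.ndarray:
    """Stack one 7x53 block per year: rows are Mon..Sun, columns are ISO weeks."""
    years = sorted(daily.index.year.unique())
    data = np.full((7 * len(years), 53), np.nan)
    for year_idx, year in enumerate(years):
        for date, value in daily[daily.index.year == year].items():
            week = date.isocalendar().week - 1  # 0-indexed ISO week
            if week < 53:
                data[year_idx * 7 + date.weekday(), week] = value
    return data

daily = pd.Series(
    [100.0, -50.0],
    index=pd.to_datetime(["2024-01-08", "2024-01-09"]),  # Mon/Tue of ISO week 2
)
m = build_heatmap_matrix(daily)
# m[0, 1] holds Monday of week 2; untouched cells stay NaN and render as gaps
```

Cells left as NaN are exactly what `hoverongaps=False` suppresses in the trace above.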
quantile_box_plots
quantile_box_plots() -> Figure

Create box plots showing distribution of gains across different time periods.

Returns: Plotly Figure with box plots for daily, weekly, monthly, quarterly, yearly gains

Source code in src/progridpy/metrics/visualization.py
def quantile_box_plots(self) -> go.Figure:
    """
    Create box plots showing distribution of gains across different time periods.

    Returns:
        Plotly Figure with box plots for daily, weekly, monthly, quarterly, yearly gains
    """
    daily = self.canonical.daily_gains

    # Calculate gains for different periods
    periods = {
        "Daily": daily,
        "Weekly": rolling_period_sums(daily, 7).dropna(),
        "Monthly": rolling_period_sums(daily, 30).dropna(),
        "Quarterly": rolling_period_sums(daily, 90).dropna(),
        "Yearly": rolling_period_sums(daily, 365).dropna(),
    }

    # Modern colors for each period
    period_colors = {
        "Daily": "#6366F1",  # Indigo
        "Weekly": "#0EA5E9",  # Sky
        "Monthly": "#10B981",  # Emerald
        "Quarterly": "#F59E0B",  # Amber
        "Yearly": "#EC4899",  # Pink
    }

    fig = go.Figure()

    for period_name, period_data in periods.items():
        if len(period_data) > 0:
            box_color = period_colors[period_name]

            fig.add_trace(
                go.Box(
                    y=period_data.values,
                    name=period_name,
                    marker_color=box_color,
                    boxmean=True,  # Show mean as dashed line
                    hovertemplate=(f"{period_name}<br>Value: $%{{y:,.0f}}<extra></extra>"),
                )
            )

    fig.update_layout(
        title={
            "text": "Gains Distribution by Period",
            "x": 0.5,
            "xanchor": "center",
            "font": {"size": PLOT_TITLE_FONT_SIZE_PX, "color": COLORS["secondary"]},
        },
        template="plotly_white",
        height=400,
        margin={"l": 60, "r": 30, "t": 80, "b": 50},
        font={"family": "Inter, system-ui, sans-serif", "size": 14},
        yaxis_title="Gain ($)",
        showlegend=False,
    )

    fig.update_yaxes(
        showgrid=True,
        gridwidth=1,
        gridcolor=COLORS["grid"],
        tickformat="$,.0f",
        zeroline=True,
        zerolinewidth=2,
        zerolinecolor=COLORS["secondary"],
    )

    return fig
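Each box is built from full-window rolling sums, so the "Weekly" distribution is every trailing 7-day total in the series. A sketch on toy data, assuming a gapless daily index:

```python
import pandas as pd

daily = pd.Series(
    range(10),
    index=pd.date_range("2024-01-01", periods=10),
    dtype=float,
)
# dropna() discards the partial windows at the start, leaving only
# complete 7-day sums -- the values each box plot is drawn from.
trailing7 = daily.rolling(7, min_periods=7).sum().dropna()
```

Here four full windows survive (sums 21, 28, 35, 42); the first six positions are dropped because their windows are incomplete.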

Functions

get_iso_schema

get_iso_schema(iso_name: str) -> ISOTradeSchema

Retrieve an ISO schema from the registry.

Args: iso_name: Name of the ISO (case-insensitive)

Returns: The registered ISOTradeSchema for the given ISO

Raises: KeyError: If no schema is registered for the given ISO name

Source code in src/progridpy/metrics/iso_registry.py
def get_iso_schema(iso_name: str) -> ISOTradeSchema:
    """
    Retrieve an ISO schema from the registry.

    Args:
        iso_name: Name of the ISO (case-insensitive)

    Returns:
        The registered ISOTradeSchema for the given ISO

    Raises:
        KeyError: If no schema is registered for the given ISO name
    """
    key = iso_name.upper()
    if key not in ISO_REGISTRY:
        available = ", ".join(ISO_REGISTRY.keys()) or "none"
        raise KeyError(f"No schema registered for ISO '{iso_name}'. Available: {available}")
    return ISO_REGISTRY[key]

register_iso

register_iso(schema: ISOTradeSchema) -> None

Register an ISO schema in the global registry.

Args: schema: ISOTradeSchema instance to register

Note: Schemas are stored under schema.iso_name.upper(). If a schema for that name already exists, it will be overwritten.

Source code in src/progridpy/metrics/iso_registry.py
def register_iso(schema: ISOTradeSchema) -> None:
    """
    Register an ISO schema in the global registry.

    Args:
        schema: ISOTradeSchema instance to register

    Note:
        Schemas are stored under schema.iso_name.upper().
        If a schema for that name already exists, it will be overwritten.
    """
    ISO_REGISTRY[schema.iso_name.upper()] = schema
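The registry is just a module-level dict keyed by the uppercased ISO name, which is what makes lookups case-insensitive. A self-contained sketch of the same pattern (the `Schema` dataclass and `REGISTRY` dict here are stand-ins, not the real `ISOTradeSchema` or `ISO_REGISTRY`):

```python
from dataclasses import dataclass

@dataclass
class Schema:  # stand-in for ISOTradeSchema
    iso_name: str

REGISTRY: dict[str, Schema] = {}

def register(schema: Schema) -> None:
    REGISTRY[schema.iso_name.upper()] = schema  # last registration wins

def get(iso_name: str) -> Schema:
    key = iso_name.upper()
    if key not in REGISTRY:
        available = ", ".join(REGISTRY) or "none"
        raise KeyError(f"No schema registered for ISO '{iso_name}'. Available: {available}")
    return REGISTRY[key]

register(Schema("miso"))
schema = get("MiSo")  # any casing resolves to the same entry
```

Because registration silently overwrites, registering two schemas whose names differ only in case keeps only the last one.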

build_canonical_series

build_canonical_series(df_std_filtered: DataFrame) -> CanonicalSeries

Build canonical time series from a filtered standardized DataFrame.

Args: df_std_filtered: Standardized DataFrame filtered to cleared trades only

Returns: CanonicalSeries with daily_gains, cumulative_gains, drawdowns, and optional supply/demand breakdown if 'is_supply' column exists

Steps:
1. Group by 'date', sum 'gain_normalized' -> daily_gains
2. Sort by date
3. cumulative_gains = daily_gains.cumsum()
4. drawdowns = cumulative_gains - cumulative_gains.cummax()
5. If 'is_supply' exists, compute supply/demand breakdown

Source code in src/progridpy/metrics/series.py
def build_canonical_series(df_std_filtered: pd.DataFrame) -> CanonicalSeries:
    """
    Build canonical time series from a filtered standardized DataFrame.

    Args:
        df_std_filtered: Standardized DataFrame filtered to cleared trades only

    Returns:
        CanonicalSeries with daily_gains, cumulative_gains, drawdowns,
        and optional supply/demand breakdown if 'is_supply' column exists

    Steps:
        1. Group by 'date', sum 'gain_normalized' -> daily_gains
        2. Sort by date
        3. cumulative_gains = daily_gains.cumsum()
        4. drawdowns = cumulative_gains - cumulative_gains.cummax()
        5. If 'is_supply' exists, compute supply/demand breakdown
    """
    # Group by date, sum gain_normalized
    daily_gains = df_std_filtered.groupby("date")["gain_normalized"].sum()
    daily_gains = daily_gains.sort_index()

    # Convert index to DatetimeIndex for rolling operations
    daily_gains.index = pd.to_datetime(daily_gains.index)

    # Cumulative and drawdowns
    cumulative_gains = daily_gains.cumsum()
    drawdowns = cumulative_gains - cumulative_gains.cummax()

    # Supply/demand breakdown
    daily_gains_supply = None
    daily_gains_demand = None
    cumulative_gains_supply = None
    cumulative_gains_demand = None
    drawdowns_supply = None
    drawdowns_demand = None

    if "is_supply" in df_std_filtered.columns:
        # Supply trades (is_supply=True)
        supply_df = df_std_filtered[df_std_filtered["is_supply"]]
        if len(supply_df) > 0:
            daily_gains_supply = supply_df.groupby("date")["gain_normalized"].sum()
            daily_gains_supply = daily_gains_supply.sort_index()
            daily_gains_supply.index = pd.to_datetime(daily_gains_supply.index)
            # Reindex to match the full date range, filling missing dates with 0
            daily_gains_supply = daily_gains_supply.reindex(daily_gains.index, fill_value=0.0)
            cumulative_gains_supply = daily_gains_supply.cumsum()
            drawdowns_supply = cumulative_gains_supply - cumulative_gains_supply.cummax()

        # Demand trades (is_supply=False)
        demand_df = df_std_filtered[~df_std_filtered["is_supply"]]
        if len(demand_df) > 0:
            daily_gains_demand = demand_df.groupby("date")["gain_normalized"].sum()
            daily_gains_demand = daily_gains_demand.sort_index()
            daily_gains_demand.index = pd.to_datetime(daily_gains_demand.index)
            # Reindex to match the full date range, filling missing dates with 0
            daily_gains_demand = daily_gains_demand.reindex(daily_gains.index, fill_value=0.0)
            cumulative_gains_demand = daily_gains_demand.cumsum()
            drawdowns_demand = cumulative_gains_demand - cumulative_gains_demand.cummax()

    return CanonicalSeries(
        daily_gains=daily_gains,
        cumulative_gains=cumulative_gains,
        drawdowns=drawdowns,
        daily_gains_supply=daily_gains_supply,
        daily_gains_demand=daily_gains_demand,
        cumulative_gains_supply=cumulative_gains_supply,
        cumulative_gains_demand=cumulative_gains_demand,
        drawdowns_supply=drawdowns_supply,
        drawdowns_demand=drawdowns_demand,
    )
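The five steps reduce to a few pandas operations. A minimal sketch on toy data, omitting the supply/demand branch:

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2024-01-02", "2024-01-01", "2024-01-02"],
    "gain_normalized": [30.0, 100.0, -150.0],
})

daily = df.groupby("date")["gain_normalized"].sum().sort_index()
daily.index = pd.to_datetime(daily.index)

cumulative = daily.cumsum()                   # running total
drawdowns = cumulative - cumulative.cummax()  # <= 0 by construction
```

Note that `drawdowns` measures distance below the running peak, so it is zero at every new high and negative elsewhere.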

rolling_period_sums

rolling_period_sums(daily_gains: Series, window_days: int) -> Series

Compute rolling sums over a specified window.

Args:
daily_gains: Series indexed by date
window_days: Number of days in the rolling window

Returns: Series with rolling sums, index aligned to window end date. NaN values are present for the first (window_days - 1) entries.

Note: Uses min_periods=window_days to ensure full windows only.

Source code in src/progridpy/metrics/series.py
def rolling_period_sums(daily_gains: pd.Series, window_days: int) -> pd.Series:
    """
    Compute rolling sums over a specified window.

    Args:
        daily_gains: Series indexed by date
        window_days: Number of days in the rolling window

    Returns:
        Series with rolling sums, index aligned to window end date.
        NaN values are present for the first (window_days - 1) entries.

    Note:
        Uses min_periods=window_days to ensure full windows only.
    """
    return daily_gains.rolling(window_days, min_periods=window_days).sum()
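Because `min_periods` equals the window length, partial windows at the start yield NaN rather than a misleading short sum. Worth noting: an integer window counts rows, not calendar days, so if the daily index has gaps a 7-row window spans more than 7 calendar days. A quick illustration:

```python
import pandas as pd

daily = pd.Series(
    [10.0, 20.0, 30.0, 40.0],
    index=pd.date_range("2024-01-01", periods=4),
)
weekly = daily.rolling(3, min_periods=3).sum()
# first two entries are NaN; full windows start at the third entry
```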

__getattr__

__getattr__(name: str)

Lazy import for dashboard types to avoid importing streamlit unless needed.

Source code in src/progridpy/metrics/__init__.py
def __getattr__(name: str):
    """Lazy import for dashboard types to avoid importing streamlit unless needed."""
    if name == "StreamlitDashboard":
        from progridpy.metrics.dashboard import StreamlitDashboard

        return StreamlitDashboard
    if name == "TimeFrameRange":
        from progridpy.metrics.dashboard import TimeFrameRange

        return TimeFrameRange
    raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
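Module-level `__getattr__` (PEP 562, Python 3.7+) is only invoked when normal attribute lookup fails, so `streamlit` is imported the first time `StreamlitDashboard` or `TimeFrameRange` is accessed and never otherwise. The mechanism can be seen on a synthetic module (the module name and attribute here are illustrative):

```python
import types

mod = types.ModuleType("lazy_demo")

def _module_getattr(name: str):
    # Real code would perform the heavy import here, on first access only.
    if name == "EXPENSIVE":
        return "loaded on first access"
    raise AttributeError(f"module 'lazy_demo' has no attribute '{name}'")

mod.__getattr__ = _module_getattr  # PEP 562 fallback hook

value = mod.EXPENSIVE  # triggers _module_getattr; ordinary attrs bypass it
```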