MetricsCalculator

The computation core of the metrics pipeline. MetricsCalculator consumes a standardized trade DataFrame (typically produced via the from_iso_frame classmethod), builds the canonical daily series, and produces a MetricsResult containing all metric groups.

Metric Groups

| Dataclass | Key Fields |
| --- | --- |
| RiskMetrics | estimated_risk, cvar_1, cvar_5, var_99, var_95, var_90, capital_required |
| ReturnMetrics | mean_daily, median_daily, mean_monthly_30d, median_monthly_30d, mean_annual_365d, median_annual_365d, calendar_returns |
| RatioMetrics | sharpe_overall, sortino_overall, risk-adjusted ratios at daily/monthly/annual horizons |
| DrawdownMetrics | max_drawdown, worst-loss values and start/end dates at 1d/7d/30d/90d/180d/365d horizons, risk-adjusted loss ratios |
| StreakMetrics | win_rate_pct, worst/longest loss-period dates and lengths, pct_1m/3m/6m/12m_periods_with_loss |

All metric dataclasses use slots=True for memory efficiency.

Metrics Calculator - Compute financial and risk metrics from trade data.

Classes

RiskMetrics dataclass

RiskMetrics(estimated_risk: float, cvar_1: float, cvar_5: float, var_99: float, var_95: float, var_90: float, capital_required: float | None)

Risk-related metrics.

ReturnMetrics dataclass

ReturnMetrics(mean_daily: float, median_daily: float, mean_monthly_30d: float, median_monthly_30d: float, mean_annual_365d: float, median_annual_365d: float, calendar_returns: dict[int, float], calendar_returns_risk_adjusted: dict[int, float] = dict())

Return-related metrics.

RatioMetrics dataclass

RatioMetrics(mean_daily_over_estimated_risk: float, median_daily_over_estimated_risk: float, mean_monthly_over_estimated_risk: float, median_monthly_over_estimated_risk: float, mean_annual_over_estimated_risk: float, median_annual_over_estimated_risk: float, sharpe_overall: float, sortino_overall: float, sharpe_by_year: dict[int, float], sortino_by_year: dict[int, float], mean_annual_calendar_over_estimated_risk: float, median_annual_calendar_over_estimated_risk: float)

Ratio metrics (returns divided by estimated risk).

DrawdownMetrics dataclass

DrawdownMetrics(worst_daily_loss: float, worst_daily_loss_date: Timestamp, worst_7d_loss: float, worst_7d_loss_start_date: Timestamp, worst_7d_loss_end_date: Timestamp, worst_1m_30d_loss: float, worst_1m_30d_loss_start_date: Timestamp, worst_1m_30d_loss_end_date: Timestamp, worst_3m_90d_loss: float, worst_3m_90d_loss_start_date: Timestamp, worst_3m_90d_loss_end_date: Timestamp, worst_6m_180d_loss: float, worst_6m_180d_loss_start_date: Timestamp, worst_6m_180d_loss_end_date: Timestamp, worst_12m_365d_loss: float, worst_12m_365d_loss_start_date: Timestamp, worst_12m_365d_loss_end_date: Timestamp, worst_daily_loss_over_estimated_risk: float, worst_7d_loss_over_estimated_risk: float, worst_1m_30d_loss_over_estimated_risk: float, worst_3m_90d_loss_over_estimated_risk: float, worst_6m_180d_loss_over_estimated_risk: float, worst_12m_365d_loss_over_estimated_risk: float, max_drawdown: float, max_drawdown_over_estimated_risk: float)

Drawdown and worst loss metrics.

StreakMetrics dataclass

StreakMetrics(worst_any_period_start_date: Timestamp, worst_any_period_end_date: Timestamp, worst_any_period_length_days: int, longest_loss_period_start_date: Timestamp, longest_loss_period_end_date: Timestamp, longest_loss_period_length_days: int, longest_period_loss: float, win_rate_pct: float, pct_1m_periods_with_loss: float, pct_3m_periods_with_loss: float, pct_6m_periods_with_loss: float, pct_12m_periods_with_loss: float)

Streak and probability metrics.

MetricsResult dataclass

MetricsResult(risk: RiskMetrics, returns: ReturnMetrics, ratios: RatioMetrics, drawdowns: DrawdownMetrics, streaks: StreakMetrics)

Container for all computed metrics.

Functions

to_frame
to_frame() -> DataFrame

Flatten all metrics into a DataFrame with 'metric' and 'value' columns.

Returns: DataFrame with columns ['metric', 'value']

Source code in src/progridpy/metrics/calculator.py
def to_frame(self) -> pd.DataFrame:
    """
    Flatten all metrics into a DataFrame with 'metric' and 'value' columns.

    Returns:
        DataFrame with columns ['metric', 'value']
    """
    rows: list[tuple[str, float | str | int | None]] = []

    # Risk Metrics
    rows.append(("Estimated Risk", self.risk.estimated_risk))
    rows.append(("CVaR 1%", self.risk.cvar_1))
    rows.append(("CVaR 5%", self.risk.cvar_5))
    rows.append(("VaR 99%", self.risk.var_99))
    rows.append(("VaR 95%", self.risk.var_95))
    rows.append(("VaR 90%", self.risk.var_90))
    if self.risk.capital_required is not None:
        rows.append(("Capital Required (MISO, $)", self.risk.capital_required))

    # Return Metrics
    rows.append(("Mean Daily Return", self.returns.mean_daily))
    rows.append(("Median Daily Return", self.returns.median_daily))
    rows.append(("Mean Monthly Return (30d)", self.returns.mean_monthly_30d))
    rows.append(("Median Monthly Return (30d)", self.returns.median_monthly_30d))
    rows.append(("Mean Annual Return (365d)", self.returns.mean_annual_365d))
    rows.append(("Median Annual Return (365d)", self.returns.median_annual_365d))

    # Ratio Metrics
    rows.append(("Mean Daily / Estimated Risk", self.ratios.mean_daily_over_estimated_risk))
    rows.append(("Median Daily / Estimated Risk", self.ratios.median_daily_over_estimated_risk))
    rows.append(("Mean Monthly / Estimated Risk", self.ratios.mean_monthly_over_estimated_risk))
    rows.append(("Median Monthly / Estimated Risk", self.ratios.median_monthly_over_estimated_risk))
    rows.append(("Mean Annual / Estimated Risk", self.ratios.mean_annual_over_estimated_risk))
    rows.append(("Median Annual / Estimated Risk", self.ratios.median_annual_over_estimated_risk))
    rows.append(("Sharpe Overall", self.ratios.sharpe_overall))
    rows.append(("Sortino Overall", self.ratios.sortino_overall))

    # Sharpe/Sortino by year
    for year in sorted(self.ratios.sharpe_by_year.keys()):
        rows.append((f"Sharpe ({year})", self.ratios.sharpe_by_year[year]))
    for year in sorted(self.ratios.sortino_by_year.keys()):
        rows.append((f"Sortino ({year})", self.ratios.sortino_by_year[year]))

    # Calendar annual ratios
    rows.append(
        ("Mean Annual Return (calendar) / Estimated Risk", self.ratios.mean_annual_calendar_over_estimated_risk)
    )
    rows.append(
        ("Median Annual Return (calendar) / Estimated Risk", self.ratios.median_annual_calendar_over_estimated_risk)
    )

    # Drawdown Metrics
    rows.append(("Worst Daily Loss", self.drawdowns.worst_daily_loss))
    rows.append(("Worst Daily Loss Date", str(self.drawdowns.worst_daily_loss_date)))
    rows.append(("Worst 7d Loss", self.drawdowns.worst_7d_loss))
    rows.append(("Worst 7d Loss Start Date", str(self.drawdowns.worst_7d_loss_start_date)))
    rows.append(("Worst 7d Loss End Date", str(self.drawdowns.worst_7d_loss_end_date)))
    rows.append(("Worst 1m Loss (30d)", self.drawdowns.worst_1m_30d_loss))
    rows.append(("Worst 1m Loss Start Date", str(self.drawdowns.worst_1m_30d_loss_start_date)))
    rows.append(("Worst 1m Loss End Date", str(self.drawdowns.worst_1m_30d_loss_end_date)))
    rows.append(("Worst 3m Loss (90d)", self.drawdowns.worst_3m_90d_loss))
    rows.append(("Worst 3m Loss Start Date", str(self.drawdowns.worst_3m_90d_loss_start_date)))
    rows.append(("Worst 3m Loss End Date", str(self.drawdowns.worst_3m_90d_loss_end_date)))
    rows.append(("Worst 6m Loss (180d)", self.drawdowns.worst_6m_180d_loss))
    rows.append(("Worst 6m Loss Start Date", str(self.drawdowns.worst_6m_180d_loss_start_date)))
    rows.append(("Worst 6m Loss End Date", str(self.drawdowns.worst_6m_180d_loss_end_date)))
    rows.append(("Worst 12m Loss (365d)", self.drawdowns.worst_12m_365d_loss))
    rows.append(("Worst 12m Loss Start Date", str(self.drawdowns.worst_12m_365d_loss_start_date)))
    rows.append(("Worst 12m Loss End Date", str(self.drawdowns.worst_12m_365d_loss_end_date)))
    rows.append(("Worst Daily Loss / Estimated Risk", self.drawdowns.worst_daily_loss_over_estimated_risk))
    rows.append(("Worst 7d Loss / Estimated Risk", self.drawdowns.worst_7d_loss_over_estimated_risk))
    rows.append(("Worst 1m Loss / Estimated Risk", self.drawdowns.worst_1m_30d_loss_over_estimated_risk))
    rows.append(("Worst 3m Loss / Estimated Risk", self.drawdowns.worst_3m_90d_loss_over_estimated_risk))
    rows.append(("Worst 6m Loss / Estimated Risk", self.drawdowns.worst_6m_180d_loss_over_estimated_risk))
    rows.append(("Worst 12m Loss / Estimated Risk", self.drawdowns.worst_12m_365d_loss_over_estimated_risk))
    rows.append(("Worst Any Period Loss (Max Drawdown)", self.drawdowns.max_drawdown))
    rows.append(("Worst Any Period Loss / Estimated Risk", self.drawdowns.max_drawdown_over_estimated_risk))

    # Streak Metrics
    rows.append(("Worst Any Period Start Date", str(self.streaks.worst_any_period_start_date)))
    rows.append(("Worst Any Period End Date", str(self.streaks.worst_any_period_end_date)))
    rows.append(("Worst Any Period Length (days)", self.streaks.worst_any_period_length_days))
    rows.append(("Longest Period Loss", self.streaks.longest_period_loss))
    rows.append(("Longest Period in Loss Start Date", str(self.streaks.longest_loss_period_start_date)))
    rows.append(("Longest Period in Loss End Date", str(self.streaks.longest_loss_period_end_date)))
    rows.append(("Longest Period in Loss Length (days)", self.streaks.longest_loss_period_length_days))
    rows.append(("Win Rate (%)", self.streaks.win_rate_pct))
    rows.append(("Percent of 1m periods with loss (%)", self.streaks.pct_1m_periods_with_loss))
    rows.append(("Percent of 3m periods with loss (%)", self.streaks.pct_3m_periods_with_loss))
    rows.append(("Percent of 6m periods with loss (%)", self.streaks.pct_6m_periods_with_loss))
    rows.append(("Percent of 12m periods with loss (%)", self.streaks.pct_12m_periods_with_loss))

    # Annual Returns (calendar) by year
    for year in sorted(self.returns.calendar_returns.keys()):
        rows.append((f"Annual Returns ({year})", self.returns.calendar_returns[year]))

    # Annual Returns Risk-Adjusted by year
    for year in sorted(self.returns.calendar_returns_risk_adjusted.keys()):
        rows.append((f"Annual Returns Risk-Adjusted ({year})", self.returns.calendar_returns_risk_adjusted[year]))

    # Gains by Calendar Year (dynamic - only years with data)
    for year in sorted(self.returns.calendar_returns.keys()):
        rows.append((f"Gains ({year})", self.returns.calendar_returns[year]))

    df = pd.DataFrame(rows, columns=["metric", "value"])
    df["value"] = df["value"].astype(str)  # Ensure consistent type for Arrow serialization
    return df
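The flattening step can be sketched in isolation: (label, value) tuples become a two-column frame, and values are stringified so the column carries a single dtype. The rows and labels below are illustrative, not real output.

```python
import pandas as pd

# Illustrative (label, value) rows; real labels come from the metric groups.
rows = [("Estimated Risk", -12.5), ("Win Rate (%)", 57.1)]

df = pd.DataFrame(rows, columns=["metric", "value"])
# Mixed floats/strings/ints collapse to str so Arrow serialization
# sees one consistent column type.
df["value"] = df["value"].astype(str)
```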

MetricsCalculator

MetricsCalculator(df_std: DataFrame, schema: ISOTradeSchema)

Calculator for computing all metrics from trade data.

This class orchestrates the computation of all metric groups using a standardized DataFrame and ISO schema.

Initialize the calculator.

Args: df_std: Standardized DataFrame (output of schema.adapt_frame) schema: ISO schema used for adaptation

Source code in src/progridpy/metrics/calculator.py
def __init__(self, df_std: pd.DataFrame, schema: ISOTradeSchema) -> None:
    """
    Initialize the calculator.

    Args:
        df_std: Standardized DataFrame (output of schema.adapt_frame)
        schema: ISO schema used for adaptation
    """
    self._df_std = df_std
    self._schema = schema
    self._canonical: CanonicalSeries | None = None

Functions

from_iso_frame classmethod
from_iso_frame(df_iso: DataFrame, iso_name: str) -> MetricsCalculator

Create a MetricsCalculator from an ISO-specific DataFrame.

Args: df_iso: ISO-specific trade DataFrame iso_name: Name of the ISO (e.g., "MISO")

Returns: MetricsCalculator instance

Source code in src/progridpy/metrics/calculator.py
@classmethod
def from_iso_frame(cls, df_iso: pd.DataFrame, iso_name: str) -> MetricsCalculator:
    """
    Create a MetricsCalculator from an ISO-specific DataFrame.

    Args:
        df_iso: ISO-specific trade DataFrame
        iso_name: Name of the ISO (e.g., "MISO")

    Returns:
        MetricsCalculator instance
    """
    schema = get_iso_schema(iso_name)
    df_std = schema.adapt_frame(df_iso)
    return cls(df_std, schema)

filtered_frame
filtered_frame() -> DataFrame

Return the DataFrame filtered to cleared trades only.

Filter: cleared == True

Returns: Filtered DataFrame

Source code in src/progridpy/metrics/calculator.py
def filtered_frame(self) -> pd.DataFrame:
    """
    Return the DataFrame filtered to cleared trades only.

    Filter: cleared == True

    Returns:
        Filtered DataFrame
    """
    return self._df_std[self._df_std["cleared"]]
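The filter is a plain boolean mask on the cleared column; a small sketch with made-up columns:

```python
import pandas as pd

# Hypothetical standardized frame; only `cleared` matters for the filter.
df_std = pd.DataFrame({
    "cleared": [True, False, True],
    "gain": [10.0, -5.0, 3.0],
})

# df_std[df_std["cleared"]] keeps only rows where cleared == True.
filtered = df_std[df_std["cleared"]]
```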

canonical_series
canonical_series() -> CanonicalSeries

Build and cache the canonical time series.

Returns: CanonicalSeries with daily_gains, cumulative_gains, and drawdowns

Source code in src/progridpy/metrics/calculator.py
def canonical_series(self) -> CanonicalSeries:
    """
    Build and cache the canonical time series.

    Returns:
        CanonicalSeries with daily_gains, cumulative_gains, and drawdowns
    """
    if self._canonical is None:
        self._canonical = build_canonical_series(self.filtered_frame())
    return self._canonical

calculate_risk_metrics
calculate_risk_metrics() -> RiskMetrics

Calculate risk metrics.

Includes: - Estimated Risk - CVaR 1%, CVaR 5% - VaR 99%, VaR 95%, VaR 90% - Capital Required (ISO-specific)

Returns: RiskMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_risk_metrics(self) -> RiskMetrics:
    """
    Calculate risk metrics.

    Includes:
    - Estimated Risk
    - CVaR 1%, CVaR 5%
    - VaR 99%, VaR 95%, VaR 90%
    - Capital Required (ISO-specific)

    Returns:
        RiskMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Estimated Risk: 2 × P₀.₀₅(daily) + P₀.₀₅(90d) + P₀.₀₅(180d)
    # Using 0.05th percentile (0.0005 quantile)
    quantile = 0.0005
    q05_daily = float(np.quantile(daily, quantile))
    rolling_90 = rolling_period_sums(daily, 90).dropna()
    rolling_180 = rolling_period_sums(daily, 180).dropna()
    # Handle insufficient data for rolling windows
    q05_90d = float(np.quantile(rolling_90, quantile)) if len(rolling_90) > 0 else 0.0
    q05_180d = float(np.quantile(rolling_180, quantile)) if len(rolling_180) > 0 else 0.0
    # Estimated Risk: Sum of quantiles (no clamping - can be positive or negative)
    estimated_risk = 2 * q05_daily + q05_90d + q05_180d

    # CVaR (Conditional Value at Risk) - average of worst X% daily gains
    sorted_gains = np.sort(daily.to_numpy())
    n = len(sorted_gains)
    cutoff_1 = max(1, int(np.ceil(0.01 * n)))
    cutoff_5 = max(1, int(np.ceil(0.05 * n)))
    cvar_1 = min(0.0, float(sorted_gains[:cutoff_1].mean()))
    cvar_5 = min(0.0, float(sorted_gains[:cutoff_5].mean()))

    # VaR (Value at Risk) - quantiles of daily gains
    var_99 = min(0.0, float(np.quantile(daily, 0.01)))  # 1% left-tail
    var_95 = min(0.0, float(np.quantile(daily, 0.05)))  # 5% left-tail
    var_90 = min(0.0, float(np.quantile(daily, 0.10)))  # 10% left-tail

    # Capital Required (ISO-specific)
    capital_required = self._schema.compute_capital_requirement(self.filtered_frame())

    return RiskMetrics(
        estimated_risk=estimated_risk,
        cvar_1=cvar_1,
        cvar_5=cvar_5,
        var_99=var_99,
        var_95=var_95,
        var_90=var_90,
        capital_required=capital_required,
    )
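On synthetic data, the tail logic reduces to a few NumPy calls. The gains below are made up, and `rolling_period_sums` is omitted here, so this sketch covers only the CVaR/VaR part:

```python
import numpy as np

# Hypothetical daily gains; negative values are losing days.
daily = np.array([-5.0, -2.0, -1.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])

# VaR 95%: the 5% left-tail quantile, clamped to be non-positive.
var_95 = min(0.0, float(np.quantile(daily, 0.05)))

# CVaR 5%: mean of the worst 5% of days, using at least one observation.
sorted_gains = np.sort(daily)
cutoff = max(1, int(np.ceil(0.05 * len(sorted_gains))))
cvar_5 = min(0.0, float(sorted_gains[:cutoff].mean()))

# CVaR averages the tail beyond VaR, so it is always <= the matching VaR.
assert cvar_5 <= var_95
```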

calculate_return_metrics
calculate_return_metrics() -> ReturnMetrics

Calculate return metrics.

Includes: - Mean/Median Daily Return - Mean/Median Monthly Return (30d) - Mean/Median Annual Return (365d) - Calendar Year Returns

Returns: ReturnMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_return_metrics(self) -> ReturnMetrics:
    """
    Calculate return metrics.

    Includes:
    - Mean/Median Daily Return
    - Mean/Median Monthly Return (30d)
    - Mean/Median Annual Return (365d)
    - Calendar Year Returns

    Returns:
        ReturnMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Daily returns
    mean_daily = float(daily.mean())
    median_daily = float(daily.median())

    # Monthly (30d rolling)
    rolling_30 = rolling_period_sums(daily, 30).dropna()
    mean_monthly = float(rolling_30.mean())
    median_monthly = float(rolling_30.median())

    # Annual (365d rolling)
    rolling_365 = rolling_period_sums(daily, 365).dropna()
    mean_annual = float(rolling_365.mean())
    median_annual = float(rolling_365.median())

    # Calendar returns - group by year and sum
    calendar_returns: dict[int, float] = daily.groupby(daily.index.year).sum().to_dict()

    return ReturnMetrics(
        mean_daily=mean_daily,
        median_daily=median_daily,
        mean_monthly_30d=mean_monthly,
        median_monthly_30d=median_monthly,
        mean_annual_365d=mean_annual,
        median_annual_365d=median_annual,
        calendar_returns=calendar_returns,
    )
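Assuming rolling_period_sums is a fixed-length rolling sum that yields only full windows (which matches the .dropna() calls above), its behavior can be sketched as:

```python
import pandas as pd

# Assumed behavior of rolling_period_sums: fixed-window rolling sum,
# full windows only (partial windows come out as NaN and get dropped).
def rolling_period_sums(daily: pd.Series, window: int) -> pd.Series:
    return daily.rolling(window=window, min_periods=window).sum()

idx = pd.date_range("2024-01-01", periods=5, freq="D")
daily = pd.Series([1.0, 2.0, -1.0, 3.0, 0.5], index=idx)

rolling_3 = rolling_period_sums(daily, 3).dropna()
# Windows: [1,2,-1] -> 2.0, [2,-1,3] -> 4.0, [-1,3,0.5] -> 2.5
```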

calculate_ratio_metrics
calculate_ratio_metrics(risk: RiskMetrics, returns: ReturnMetrics) -> RatioMetrics

Calculate ratio metrics (returns divided by estimated risk).

Args: risk: Previously computed RiskMetrics returns: Previously computed ReturnMetrics

Returns: RatioMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_ratio_metrics(self, risk: RiskMetrics, returns: ReturnMetrics) -> RatioMetrics:
    """
    Calculate ratio metrics (returns divided by estimated risk).

    Args:
        risk: Previously computed RiskMetrics
        returns: Previously computed ReturnMetrics

    Returns:
        RatioMetrics dataclass
    """
    er = risk.estimated_risk
    cs = self.canonical_series()
    daily = cs.daily_gains

    # Sharpe Ratio: mean / std (no risk-free rate, annualized by sqrt(365))
    # Use ddof=1 (sample std) for consistency across all calculations
    annualization_factor = np.sqrt(365)
    mu = float(daily.mean())
    sigma = float(daily.std(ddof=1))
    sharpe_overall = (mu / sigma * annualization_factor) if sigma != 0 else np.nan

    # Sortino Ratio: mean / downside_std (annualized by sqrt(365))
    downside = daily[daily < 0]
    sigma_down = float(downside.std(ddof=1)) if len(downside) > 0 else np.nan
    sortino_overall = (mu / sigma_down * annualization_factor) if sigma_down and sigma_down != 0 else np.nan

    # Sharpe/Sortino by year (annualized by sqrt(365))
    # Note: Use ddof=1 (sample std) for all calculations
    sharpe_by_year: dict[int, float] = {}
    sortino_by_year: dict[int, float] = {}
    for yr, gains in daily.groupby(daily.index.year):
        yr_mu = float(gains.mean())
        yr_sigma = float(gains.std(ddof=1))
        sharpe_by_year[yr] = (yr_mu / yr_sigma * annualization_factor) if yr_sigma != 0 else np.nan

        yr_downside = gains[gains < 0]
        yr_sigma_down = float(yr_downside.std(ddof=1)) if len(yr_downside) > 0 else np.nan
        sortino_by_year[yr] = (
            (yr_mu / yr_sigma_down * annualization_factor) if yr_sigma_down and yr_sigma_down != 0 else np.nan
        )

    # Calendar annual ratios - total of all calendar year gains / estimated risk
    total_calendar_gain = sum(returns.calendar_returns.values())
    # Denominator must be positive risk scale
    risk_scale = abs(er) if er != 0 else np.nan

    mean_annual_calendar_over_er = total_calendar_gain / risk_scale if risk_scale else np.nan

    median_annual_calendar_over_er = (
        total_calendar_gain / risk_scale if risk_scale else np.nan
    )  # Same as mean per spec

    return RatioMetrics(
        mean_daily_over_estimated_risk=returns.mean_daily / risk_scale if risk_scale else np.nan,
        median_daily_over_estimated_risk=returns.median_daily / risk_scale if risk_scale else np.nan,
        mean_monthly_over_estimated_risk=returns.mean_monthly_30d / risk_scale if risk_scale else np.nan,
        median_monthly_over_estimated_risk=returns.median_monthly_30d / risk_scale if risk_scale else np.nan,
        mean_annual_over_estimated_risk=returns.mean_annual_365d / risk_scale if risk_scale else np.nan,
        median_annual_over_estimated_risk=returns.median_annual_365d / risk_scale if risk_scale else np.nan,
        sharpe_overall=sharpe_overall,
        sortino_overall=sortino_overall,
        sharpe_by_year=sharpe_by_year,
        sortino_by_year=sortino_by_year,
        mean_annual_calendar_over_estimated_risk=mean_annual_calendar_over_er,
        median_annual_calendar_over_estimated_risk=median_annual_calendar_over_er,
    )
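On a toy series, the overall Sharpe/Sortino computation looks like this (made-up gains; the ddof=1 sample std and sqrt(365) annualization mirror the code above):

```python
import numpy as np
import pandas as pd

# Hypothetical daily gains with a couple of mild losing days.
daily = pd.Series([1.0, -0.5, 2.0, -1.0, 0.5, 1.5])
ann = np.sqrt(365)  # annualize a daily ratio by sqrt(365)

mu = float(daily.mean())
sigma = float(daily.std(ddof=1))          # sample std over all days
sharpe = mu / sigma * ann

downside = daily[daily < 0]               # losing days only
sigma_down = float(downside.std(ddof=1))  # downside deviation
sortino = mu / sigma_down * ann

# Sortino penalizes only downside volatility, so with small losses
# it comes out above the Sharpe ratio here.
```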

calculate_drawdown_metrics
calculate_drawdown_metrics(risk: RiskMetrics) -> DrawdownMetrics

Calculate drawdown metrics.

Args: risk: Previously computed RiskMetrics (for risk-adjusted ratios)

Returns: DrawdownMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_drawdown_metrics(self, risk: RiskMetrics) -> DrawdownMetrics:
    """
    Calculate drawdown metrics.

    Args:
        risk: Previously computed RiskMetrics (for risk-adjusted ratios)

    Returns:
        DrawdownMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains
    er = risk.estimated_risk
    risk_scale = abs(er) if er != 0 else np.nan

    def get_worst_window_dates(rolling_series: pd.Series, window_days: int) -> tuple[pd.Timestamp, pd.Timestamp]:
        """Get start and end dates for the worst loss window."""
        if len(rolling_series) == 0:
            return pd.NaT, pd.NaT
        end_date = pd.Timestamp(rolling_series.idxmin())
        start_date = end_date - pd.Timedelta(days=window_days - 1)
        return start_date, end_date

    # Worst losses for different rolling periods
    worst_daily_loss = float(daily.min())
    worst_daily_loss_date = pd.Timestamp(daily.idxmin())

    rolling_7 = rolling_period_sums(daily, 7).dropna()
    worst_7d_loss = float(rolling_7.min()) if len(rolling_7) > 0 else 0.0
    worst_7d_start, worst_7d_end = get_worst_window_dates(rolling_7, 7) if len(rolling_7) > 0 else (pd.NaT, pd.NaT)

    rolling_30 = rolling_period_sums(daily, 30).dropna()
    worst_1m_30d_loss = float(rolling_30.min()) if len(rolling_30) > 0 else 0.0
    worst_1m_30d_start, worst_1m_30d_end = (
        get_worst_window_dates(rolling_30, 30) if len(rolling_30) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_90 = rolling_period_sums(daily, 90).dropna()
    worst_3m_90d_loss = float(rolling_90.min()) if len(rolling_90) > 0 else 0.0
    worst_3m_90d_start, worst_3m_90d_end = (
        get_worst_window_dates(rolling_90, 90) if len(rolling_90) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_180 = rolling_period_sums(daily, 180).dropna()
    worst_6m_180d_loss = float(rolling_180.min()) if len(rolling_180) > 0 else 0.0
    worst_6m_180d_start, worst_6m_180d_end = (
        get_worst_window_dates(rolling_180, 180) if len(rolling_180) > 0 else (pd.NaT, pd.NaT)
    )

    rolling_365 = rolling_period_sums(daily, 365).dropna()
    worst_12m_365d_loss = float(rolling_365.min()) if len(rolling_365) > 0 else 0.0
    worst_12m_365d_start, worst_12m_365d_end = (
        get_worst_window_dates(rolling_365, 365) if len(rolling_365) > 0 else (pd.NaT, pd.NaT)
    )

    # Max drawdown from cumulative gains series
    max_drawdown = float(cs.drawdowns.min())

    return DrawdownMetrics(
        worst_daily_loss=worst_daily_loss,
        worst_daily_loss_date=worst_daily_loss_date,
        worst_7d_loss=worst_7d_loss,
        worst_7d_loss_start_date=worst_7d_start,
        worst_7d_loss_end_date=worst_7d_end,
        worst_1m_30d_loss=worst_1m_30d_loss,
        worst_1m_30d_loss_start_date=worst_1m_30d_start,
        worst_1m_30d_loss_end_date=worst_1m_30d_end,
        worst_3m_90d_loss=worst_3m_90d_loss,
        worst_3m_90d_loss_start_date=worst_3m_90d_start,
        worst_3m_90d_loss_end_date=worst_3m_90d_end,
        worst_6m_180d_loss=worst_6m_180d_loss,
        worst_6m_180d_loss_start_date=worst_6m_180d_start,
        worst_6m_180d_loss_end_date=worst_6m_180d_end,
        worst_12m_365d_loss=worst_12m_365d_loss,
        worst_12m_365d_loss_start_date=worst_12m_365d_start,
        worst_12m_365d_loss_end_date=worst_12m_365d_end,
        worst_daily_loss_over_estimated_risk=worst_daily_loss / risk_scale if risk_scale else np.nan,
        worst_7d_loss_over_estimated_risk=worst_7d_loss / risk_scale if risk_scale else np.nan,
        worst_1m_30d_loss_over_estimated_risk=worst_1m_30d_loss / risk_scale if risk_scale else np.nan,
        worst_3m_90d_loss_over_estimated_risk=worst_3m_90d_loss / risk_scale if risk_scale else np.nan,
        worst_6m_180d_loss_over_estimated_risk=worst_6m_180d_loss / risk_scale if risk_scale else np.nan,
        worst_12m_365d_loss_over_estimated_risk=worst_12m_365d_loss / risk_scale if risk_scale else np.nan,
        max_drawdown=max_drawdown,
        max_drawdown_over_estimated_risk=max_drawdown / risk_scale if risk_scale else np.nan,
    )
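The drawdown series itself comes from CanonicalSeries; assuming it follows the usual cumulative-gains-minus-running-peak definition (consistent with the cumulative < peak test used in the streak metrics), max drawdown reduces to:

```python
import pandas as pd

# Hypothetical cumulative gains path.
cumulative = pd.Series([0.0, 2.0, 1.0, 3.0, 0.5, 4.0])

peak = cumulative.cummax()          # running high-water mark
drawdowns = cumulative - peak       # <= 0 everywhere by construction
max_drawdown = float(drawdowns.min())
# Deepest point: 0.5 against a peak of 3.0 -> drawdown of -2.5
```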

calculate_streak_metrics
calculate_streak_metrics(risk: RiskMetrics) -> StreakMetrics

Calculate streak and probability metrics.

Args: risk: Previously computed RiskMetrics

Returns: StreakMetrics dataclass

Source code in src/progridpy/metrics/calculator.py
def calculate_streak_metrics(self, risk: RiskMetrics) -> StreakMetrics:
    """
    Calculate streak and probability metrics.

    Args:
        risk: Previously computed RiskMetrics

    Returns:
        StreakMetrics dataclass
    """
    cs = self.canonical_series()
    daily = cs.daily_gains
    cumulative = cs.cumulative_gains
    drawdowns = cs.drawdowns

    # Win rate: percentage of positive days
    win_rate_pct = 100.0 * float((daily > 0).mean())

    # Percent of periods with loss
    rolling_30 = rolling_period_sums(daily, 30).dropna()
    pct_1m_loss = 100.0 * float((rolling_30 < 0).mean())

    rolling_90 = rolling_period_sums(daily, 90).dropna()
    pct_3m_loss = 100.0 * float((rolling_90 < 0).mean())

    rolling_180 = rolling_period_sums(daily, 180).dropna()
    pct_6m_loss = 100.0 * float((rolling_180 < 0).mean())

    rolling_365 = rolling_period_sums(daily, 365).dropna()
    pct_12m_loss = 100.0 * float((rolling_365 < 0).mean())

    # Worst any period (max drawdown period) - find start and end dates
    # End date: where drawdown hits minimum
    worst_dd_idx = drawdowns.idxmin()
    worst_any_period_end_date = pd.Timestamp(worst_dd_idx)

    # Start date: find the peak before the trough
    # The peak is where cumulative_gains equals its cummax just before the trough
    peak = cumulative.cummax()
    # Find the last date where cumulative == peak before worst_dd_idx
    pre_trough = cumulative.loc[:worst_dd_idx]
    peak_before_trough = peak.loc[:worst_dd_idx]
    # Find where they were equal (at a peak)
    at_peak = pre_trough == peak_before_trough
    if at_peak.any():
        peak_dates = pre_trough.index[at_peak]
        worst_any_period_start_date = pd.Timestamp(peak_dates[-1])
    else:
        worst_any_period_start_date = pd.Timestamp(cumulative.index[0])

    worst_any_period_length_days = (worst_any_period_end_date - worst_any_period_start_date).days

    # Longest period in loss: longest consecutive run where cumulative < peak
    in_loss = cumulative < peak
    # Find runs using diff and cumsum (vectorized)
    # Create groups where each new "not in loss" starts a new group
    not_in_loss = ~in_loss
    group_ids = not_in_loss.cumsum()
    # For periods in loss, find the longest run
    loss_groups = in_loss.groupby(group_ids)

    longest_start = pd.Timestamp(cumulative.index[0])
    longest_end = pd.Timestamp(cumulative.index[0])
    longest_length = 0

    for _group_id, group in loss_groups:
        if group.any():  # This group has loss periods
            # Get only the True values (in loss)
            loss_dates = group[group].index
            if len(loss_dates) > 0:
                start = loss_dates[0]
                end = loss_dates[-1]
                length = (end - start).days
                if length > longest_length:
                    longest_length = length
                    longest_start = pd.Timestamp(start)
                    longest_end = pd.Timestamp(end)

    # Calculate total loss during the longest loss period
    longest_period_loss = 0.0
    if longest_length > 0:
        longest_period_loss = float(daily.loc[longest_start:longest_end].sum())

    return StreakMetrics(
        worst_any_period_start_date=worst_any_period_start_date,
        worst_any_period_end_date=worst_any_period_end_date,
        worst_any_period_length_days=worst_any_period_length_days,
        longest_period_loss=longest_period_loss,
        longest_loss_period_start_date=longest_start,
        longest_loss_period_end_date=longest_end,
        longest_loss_period_length_days=longest_length,
        win_rate_pct=win_rate_pct,
        pct_1m_periods_with_loss=pct_1m_loss,
        pct_3m_periods_with_loss=pct_3m_loss,
        pct_6m_periods_with_loss=pct_6m_loss,
        pct_12m_periods_with_loss=pct_12m_loss,
    )
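The run-detection trick (using not_in_loss.cumsum() as group ids) is worth seeing on a small series: each at-peak day starts a fresh group, so consecutive below-peak days share one id. The dates below are illustrative:

```python
import pandas as pd

idx = pd.date_range("2024-01-01", periods=7, freq="D")
cumulative = pd.Series([0.0, 1.0, 0.5, 0.2, 0.8, 1.5, 1.2], index=idx)

peak = cumulative.cummax()
in_loss = cumulative < peak          # True while below the running peak
group_ids = (~in_loss).cumsum()      # every at-peak day opens a new group

# Longest consecutive in-loss run, measured in calendar days.
longest = 0
for _, group in in_loss.groupby(group_ids):
    loss_dates = group[group].index  # dates in this group that are in loss
    if len(loss_dates) > 0:
        longest = max(longest, (loss_dates[-1] - loss_dates[0]).days)
# Jan 3-5 is the longest below-peak stretch -> 2 days
```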

calculate_all
calculate_all() -> MetricsResult

Calculate all metrics in the correct dependency order.

Returns: MetricsResult containing all metric groups

Source code in src/progridpy/metrics/calculator.py
def calculate_all(self) -> MetricsResult:
    """
    Calculate all metrics in the correct dependency order.

    Returns:
        MetricsResult containing all metric groups
    """
    risk = self.calculate_risk_metrics()
    returns = self.calculate_return_metrics()

    # Populate calendar_returns_risk_adjusted after we have estimated_risk
    er = risk.estimated_risk
    risk_scale = abs(er) if er != 0 else np.nan
    returns.calendar_returns_risk_adjusted = {
        yr: gain / risk_scale if risk_scale else np.nan for yr, gain in returns.calendar_returns.items()
    }

    ratios = self.calculate_ratio_metrics(risk, returns)
    drawdowns = self.calculate_drawdown_metrics(risk)
    streaks = self.calculate_streak_metrics(risk)

    return MetricsResult(
        risk=risk,
        returns=returns,
        ratios=ratios,
        drawdowns=drawdowns,
        streaks=streaks,
    )
