Building a risk engine that runs on-chain sounds straightforward. You write the margin calculations in a smart contract, define the liquidation thresholds, and let the protocol handle the rest. In practice, it's one of the hardest engineering problems in DeFi.
The core challenge
Risk management in traditional finance is computationally intensive. Margin calls depend on real-time mark-to-market valuations, portfolio-wide risk calculations, and rapid execution when thresholds are breached. None of that is trivial on a public blockchain where every computation costs gas and block times are measured in seconds.
The naive implementation is to run the risk calculation on-chain and trigger liquidations automatically when a position crosses the threshold. This works — but it's expensive, and more importantly, it's slow. If a market moves 5% in ten seconds, a contract that checks margins every block may not keep up.
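The margin check underlying this approach is simple to state. A minimal sketch, with an assumed 5% maintenance margin ratio and hypothetical field names (the actual on-chain logic would live in a contract, not Python):

```python
# Sketch of the per-position margin check a naive on-chain design would run
# every block. The 5% maintenance ratio is an illustrative assumption.
from dataclasses import dataclass

MAINTENANCE_MARGIN_RATIO = 0.05  # assumed maintenance margin

@dataclass
class Position:
    size: float         # signed position size in units of the asset
    entry_price: float  # price at which the position was opened
    collateral: float   # margin posted, in quote currency

def is_liquidatable(pos: Position, mark_price: float) -> bool:
    """True if equity has fallen below the maintenance requirement."""
    unrealized_pnl = pos.size * (mark_price - pos.entry_price)
    equity = pos.collateral + unrealized_pnl
    notional = abs(pos.size) * mark_price
    return equity < notional * MAINTENANCE_MARGIN_RATIO

# A 10x long: $1,000 collateral backing $10,000 notional at entry.
pos = Position(size=1.0, entry_price=10_000, collateral=1_000)
print(is_liquidatable(pos, 10_000))  # healthy at entry
print(is_liquidatable(pos, 9_400))   # a ~6% drop breaches maintenance
```

The check itself is cheap; the cost comes from running it across every open position, every block, on-chain.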
The keeper model
Most production-grade perpetual protocols use a keeper model. The on-chain contract defines the margin rules and the liquidation mechanism. An off-chain monitoring system watches positions and submits liquidation transactions when positions become undercollateralized. The protocol validates the liquidation on-chain and executes it.
This is fast and gas-efficient. The risk is that keepers are centralized infrastructure. If the keeper network goes down, positions that should be liquidated aren't. This is why decentralizing the keeper network matters — any party should be able to submit a valid liquidation transaction and receive a liquidation reward for doing so.
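The keeper's job reduces to a scan loop. A minimal sketch, with hypothetical interfaces (real keepers read positions from chain state and submit signed transactions; the contract re-validates each liquidation on-chain, so a stale or duplicate submission simply reverts):

```python
# One scan pass of an off-chain keeper: read the mark price, flag
# undercollateralized positions, submit them for liquidation.
# All names and the dict-based position book are illustrative.

def scan_and_liquidate(positions, mark_price, maintenance_ratio, submit):
    """positions: {id: (size, entry_price, collateral)}; submit: tx callback."""
    liquidated = []
    for pos_id, (size, entry, collateral) in positions.items():
        equity = collateral + size * (mark_price - entry)
        if equity < abs(size) * mark_price * maintenance_ratio:
            submit(pos_id)  # permissionless: any party may submit
            liquidated.append(pos_id)
    return liquidated

# Two 1-unit longs opened at $10,000; one is thinly collateralized.
book = {"A": (1.0, 10_000, 1_000), "B": (1.0, 10_000, 200)}
hits = scan_and_liquidate(book, 9_900, 0.05, submit=lambda pid: None)
print(hits)  # only the thin position breaches maintenance
```

Because the on-chain contract is the final arbiter, decentralizing this loop is safe: competing keepers racing to submit the same liquidation can only help liveness.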
Oracle quality as a risk variable
The liquidation price is only as good as the oracle that feeds it. A price feed that's lagging, manipulable, or temporarily wrong creates two failure modes: legitimate liquidations that don't happen (because the oracle doesn't reflect the true price) and illegitimate liquidations that do (because the oracle was temporarily pushed to an extreme value).
Aark uses a Chainlink + TWAP hybrid specifically to handle this. Chainlink provides external price data aggregated from multiple sources. The TWAP (time-weighted average price) smooths out short-term volatility that might otherwise trigger liquidations during brief price spikes. The combination makes the risk engine robust without making it sluggish.
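One way such a hybrid can work is to take the external feed as the mark price but cap its deviation from a rolling TWAP. This is a sketch under that assumption; the exact combination rule is protocol-specific and the clamp band here is illustrative:

```python
# Hybrid mark price sketch: clamp the spot oracle reading to a band around
# a rolling TWAP so a momentary spike can't trigger liquidations on its own.
from collections import deque

class TwapWindow:
    """Rolling time-weighted average over (price, duration) samples."""
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.samples = deque()  # (price, duration) pairs, newest last

    def add(self, price: float, duration: float):
        self.samples.append((price, duration))
        total = sum(d for _, d in self.samples)
        # Evict the oldest samples once they fall outside the window.
        while total - self.samples[0][1] >= self.window:
            total -= self.samples.popleft()[1]

    def value(self) -> float:
        total = sum(d for _, d in self.samples)
        return sum(p * d for p, d in self.samples) / total

def hybrid_mark(oracle_price: float, twap: float, max_dev: float = 0.02) -> float:
    """Use the external feed, capped at +/- max_dev from the TWAP."""
    lo, hi = twap * (1 - max_dev), twap * (1 + max_dev)
    return min(max(oracle_price, lo), hi)
```

A brief wick to $10,500 against a $10,000 TWAP marks positions at only $10,200 here, while a sustained move pulls the TWAP along and the clamp relaxes, so the engine stays responsive without being twitchy.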
Socialized losses
Even a well-designed risk engine occasionally has bad debt: positions that go negative before they can be liquidated. The protocol needs a mechanism to handle this without collapsing. Most use an insurance fund — a reserve built from liquidation fees — that absorbs losses that exceed what the liquidated position can cover. If the insurance fund is depleted, the protocol falls back to socializing the remaining loss across profitable positions.
This is the part nobody likes to talk about, but it's how every perpetual protocol handles extreme events. Understanding it before you trade is part of knowing what you're actually participating in.
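The loss waterfall described above can be sketched directly. The figures and function shape are illustrative, not any protocol's actual parameters:

```python
# Bad-debt waterfall sketch: the insurance fund absorbs the shortfall first;
# anything beyond it is socialized as a pro-rata haircut on open profits.

def settle_bad_debt(shortfall, insurance_fund, profitable_pnls):
    """Returns (remaining fund, haircut per unit of profit, socialized total)."""
    from_fund = min(shortfall, insurance_fund)
    remaining = shortfall - from_fund
    total_profit = sum(profitable_pnls)
    haircut = remaining / total_profit if total_profit > 0 else 0.0
    return insurance_fund - from_fund, haircut, remaining

# A $150k shortfall against a $100k fund; $500k of open profit absorbs the rest.
fund_left, haircut, socialized = settle_bad_debt(150_000, 100_000, [300_000, 200_000])
print(fund_left, haircut, socialized)  # 0 0.1 50000 -- a 10% haircut on profits
```

The key property is ordering: profitable traders are only touched after the fund is fully exhausted, which is why fund sizing and the liquidation-fee contribution rate matter so much.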
Position limits as a risk tool
Open interest limits are one of the cleaner risk management mechanisms available to a perpetual protocol. By capping the total notional exposure the protocol will take on a given asset, you limit the maximum insurance fund drain in a worst-case scenario. The tradeoff is that you're also limiting the protocol's utility — a trader who wants to establish a position above the limit simply can't.
Getting the limits right requires real data. In our experience, the right OI limits are a function of the asset's market cap, its liquidity depth on external markets, and the size of the insurance fund. Too tight and you're turning away legitimate volume. Too loose and a single large correlated move can expose the insurance fund to losses it can't absorb.
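One way to express that dependence is to take the most conservative of several independent bounds. The formula and every coefficient below are assumptions for illustration, not Aark's actual sizing rule:

```python
# Illustrative OI cap: the binding limit is the tightest of three bounds
# derived from market cap, external liquidity depth, and the insurance fund.

def max_open_interest(market_cap, ext_liquidity_depth, insurance_fund,
                      cap_frac=0.001, depth_mult=5.0, fund_mult=20.0):
    """All three coefficients are hypothetical tuning parameters."""
    return min(market_cap * cap_frac,             # small slice of market cap
               ext_liquidity_depth * depth_mult,  # exit-able on external venues
               insurance_fund * fund_mult)        # bounded worst-case fund drain

def can_open(current_oi, order_notional, oi_limit):
    return current_oi + order_notional <= oi_limit

limit = max_open_interest(5e10, 5e6, 1e6)
print(limit)                        # here the insurance fund is the binding bound
print(can_open(15e6, 6e6, limit))   # this order would breach the cap
```

Framing the limit as a minimum over bounds makes the tradeoff in the text explicit: loosening any one input (say, growing the insurance fund) only raises the cap if that input was the binding constraint.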
Dynamic margin requirements
Static margin requirements — fixed maintenance margin ratios regardless of position size — are a simplification that works well under normal conditions and fails under stress. A trader with a $100 position and a trader with a $10 million position present very different risk profiles to the protocol, even at identical leverage.
Dynamic (tiered) margin requirements adjust the maintenance margin as position size increases. Larger positions require proportionally more margin. This limits the amount any single position can draw on the insurance fund in a liquidation event, and it creates a more appropriate risk-adjusted cost structure for large traders who genuinely have market impact on the way out.
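A tiered schedule can be as simple as a lookup table keyed on notional size. The breakpoints and ratios here are hypothetical:

```python
# Tiered maintenance margin sketch: larger notional -> higher ratio.
# Tier boundaries and ratios are illustrative, not a real protocol's schedule.
TIERS = [  # (notional ceiling in quote currency, maintenance margin ratio)
    (1_000_000, 0.005),
    (5_000_000, 0.010),
    (25_000_000, 0.025),
    (float("inf"), 0.050),
]

def maintenance_ratio(notional: float) -> float:
    """Flat per-tier ratio: the whole position pays its tier's rate."""
    for ceiling, ratio in TIERS:
        if notional <= ceiling:
            return ratio
    raise AssertionError("unreachable")

print(maintenance_ratio(100))         # small position: 0.5%
print(maintenance_ratio(10_000_000))  # large position: 2.5%
```

Some venues instead apply the tiers marginally, like income tax brackets, so the ratio rises smoothly rather than jumping at each boundary; either way, the effect is that the positions most capable of draining the insurance fund post the most margin.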
The practical note here: if you're running a large position on a protocol with flat margin requirements, you're getting subsidized by the traders with smaller positions. The protocol is taking more tail risk from you than it's charging you for. That's fine until it isn't — when the insurance fund depletes and the socialized loss mechanism activates.
Stress testing the protocol
Before going live, we ran simulations of historical market events through the Aark risk engine: the 2022 LUNA collapse, the November 2022 FTX-driven sell-off, and the March 2020 COVID crash. Each of these involved rapid price moves combined with liquidity withdrawal that stressed liquidation mechanisms across the industry.
The results informed the tuning of our insurance fund contribution rates, OI limits, and keeper incentive structure. No simulation is perfect — real events always have features that weren't modeled. But the stress test process identifies the scenarios where the system is most vulnerable and forces explicit decisions about how to handle them rather than discovering the failure mode in production.
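The core of such a replay can be sketched in a few lines. This is a deliberately simplified single-position version (real runs replay full order books, funding, and keeper latency, all omitted here; the penalty parameter is an assumption):

```python
# Historical-replay stress test sketch: walk a recorded price path and
# report whether a position survives, and how much bad debt it leaves if not.

def replay(prices, size, entry, collateral, maint_ratio, liq_penalty=0.01):
    """Returns ('liquidated', bad_debt) or ('survived', 0.0)."""
    for p in prices:
        equity = collateral + size * (p - entry)
        if equity < abs(size) * p * maint_ratio:
            # Bad debt arises when equity minus the liquidation penalty
            # is already negative at the moment the liquidation fires.
            proceeds = equity - abs(size) * p * liq_penalty
            return "liquidated", max(0.0, -proceeds)
    return "survived", 0.0

# A leveraged long walked through a fast 30% drawdown.
path = [100, 95, 80, 70]
print(replay(path, size=1_000, entry=100, collateral=5_500, maint_ratio=0.05))
```

Sweeping this over historical paths, position sizes, and candidate margin schedules is what turns the tuning of insurance fund rates, OI limits, and keeper incentives from guesswork into explicit, testable decisions.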