Since the rapid growth of decentralized finance applications hit the mainstream about two years ago, some of the largest protocols in the space have relied on Gauntlet to provide quantitative insights into how best to manage protocol parameters. In this article, we use the example of lending protocol risk management to detail the methodology behind our analysis and recommendations. To start, let’s briefly recap how a lending protocol works and how the parameters in question affect market functioning.
Without loss of generality, consider the example of a lending pool where users can borrow a generic USD token (USDx) using ETH as collateral, as shown in the diagram above. The lenders at the top right supply their assets to the protocol as available liquidity, which the borrowers on the left can access by taking a loan against their collateral. Part of the proceeds from repaid loans goes toward paying a yield to lenders, providing an incentive to continue supplying assets to the pool. If a loan is not repaid, or the value of the collateral drops to an unacceptable level, the protocol allows liquidators to take over the distressed loan and its underlying collateral. To incentivize liquidators to unwind defaulted loans, the protocol offers slightly more collateral than is needed to cover the obligation, leaving a profit margin, or bonus, for the liquidator.
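The mechanics above can be sketched in a few lines of code. The collateral threshold and liquidation bonus below are hypothetical illustrative values, not any specific protocol’s parameters:

```python
LIQUIDATION_THRESHOLD = 0.80  # hypothetical: liquidatable above 80% loan-to-value
LIQUIDATION_BONUS = 0.05      # hypothetical: liquidator receives 5% extra collateral

def loan_to_value(debt_usdx: float, collateral_eth: float, eth_price: float) -> float:
    """Ratio of outstanding debt to the current value of the collateral."""
    return debt_usdx / (collateral_eth * eth_price)

def is_liquidatable(debt_usdx: float, collateral_eth: float, eth_price: float) -> bool:
    return loan_to_value(debt_usdx, collateral_eth, eth_price) > LIQUIDATION_THRESHOLD

def collateral_seized(repay_usdx: float, eth_price: float) -> float:
    """ETH a liquidator receives for repaying part of a distressed loan,
    including the bonus that makes liquidation profitable."""
    return repay_usdx * (1 + LIQUIDATION_BONUS) / eth_price

# A 1,500 USDx loan against 1 ETH is safe at $2,000 per ETH (LTV = 75%)...
assert not is_liquidatable(1500, 1.0, 2000)
# ...but becomes liquidatable if ETH drops to $1,800 (LTV ≈ 83%).
assert is_liquidatable(1500, 1.0, 1800)
```

Here the 5% bonus means a liquidator who repays 1,000 USDx of a distressed loan receives 1,050 USDx worth of ETH in return.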
The primary risk to lending protocols arises from this process of liquidating offside positions: what if the collateral cannot cover the full amount of a problem loan? Such an event is called an insolvency, and at large scale it can trigger losses to lenders. To manage the risk of insolvencies, lending protocols set aside reserves to cover small losses and tune risk parameters to help ensure liquidations can occur without major issues.
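To make the insolvency scenario concrete, here is a minimal sketch of how bad debt arises and how reserves absorb it before lenders take losses (all figures are hypothetical):

```python
def insolvency_shortfall(debt_usdx: float, collateral_value_usdx: float) -> float:
    """Bad debt left over when the seized collateral cannot cover the loan."""
    return max(0.0, debt_usdx - collateral_value_usdx)

def lender_loss(shortfall: float, reserves: float) -> float:
    """Reserves absorb losses first; only the excess hits lenders."""
    return max(0.0, shortfall - reserves)

# Collateral crashed below the debt: 200 USDx of bad debt on a 1,000 USDx loan.
shortfall = insolvency_shortfall(1000, 800)
assert shortfall == 200
assert lender_loss(shortfall, reserves=500) == 0.0   # reserves cover it fully
assert lender_loss(shortfall, reserves=50) == 150.0  # lenders lose the rest
```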
Tuning these parameters involves a trade-off between the protocol’s risk level and its capital efficiency. To minimize the risk of insolvency, a lending protocol would ideally liquidate problem loans quickly, long before collateral value becomes a pressing issue, but setting collateral thresholds more conservatively reduces the amount of funding that borrowers can access. Similarly, paying liquidators a higher bonus may incentivize them to unwind positions more promptly in times of market stress, but it comes at a direct cost to users, who may seek out more competitive rates elsewhere. Balancing protocol risk against user needs is thus a complex optimization problem, one that requires a robust framework to quantify the protocol’s objectives and set parameters to achieve them. In the next two sections, we will explain how Gauntlet approaches lending protocol management and, in the process, introduce some of the quantitative tools we use throughout our product suite.
Gauntlet’s Approach to Optimization
At the core of our methodology is an agent-based simulation of the lending protocol across a range of risk parameters, in which we replicate observed market conditions and user interactions with a test protocol as faithfully as possible under realistic scenarios. To quantify risk and efficiency, for each set of trial parameters we estimate the percentage of available lending capacity utilized (or Borrow Usage) along with the expected liquidations and insolvencies under a severe market shock. These outputs allow us to search the parameter space for values that best meet the protocol’s desired balance, as shown in the diagram below:
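Borrow Usage can be formulated in more than one way; the sketch below uses one simple definition (borrows as a fraction of the maximum borrowing power implied by a collateral factor), with illustrative numbers:

```python
def borrow_usage(total_borrows_usd: float,
                 total_collateral_usd: float,
                 collateral_factor: float) -> float:
    """Fraction of the pool's maximum borrowing power actually drawn down."""
    capacity = total_collateral_usd * collateral_factor
    return total_borrows_usd / capacity if capacity > 0 else 0.0

# e.g. $60M borrowed against $100M of collateral at a 0.75 collateral factor
usage = borrow_usage(60e6, 100e6, 0.75)  # 80% of lending capacity in use
assert abs(usage - 0.8) < 1e-12
```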
Since lending protocol management involves the trade-offs discussed earlier, we score these trial runs with an objective function that weighs the risks and benefits. At a high level, this can be seen as setting a reward for capital efficiency and a penalty for risk in order to compare simulation outcomes on a quantifiable ranking scale. Using community feedback to adjust the reward and penalty calculations (shown in the formula as R and P), we can select a measure that balances the trade-off to the desired level of risk tolerance.
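The exact forms of R and P are protocol-specific and not spelled out here, so the following is only an illustrative stand-in for how a simulation run might be collapsed into a single comparable score. The weights are hypothetical knobs standing in for community-tuned risk tolerance, and losses are expressed as fractions of pool size so the terms are comparable:

```python
def objective(borrow_usage: float,
              insolvent_frac: float,
              liquidated_frac: float,
              reward_weight: float = 1.0,
              penalty_weight: float = 5.0) -> float:
    """Score one simulation run: reward capital efficiency (R),
    penalize simulated losses (P). Higher scores rank higher."""
    R = reward_weight * borrow_usage
    P = penalty_weight * insolvent_frac + 0.1 * penalty_weight * liquidated_frac
    return R - P

# A run with high utilization and no simulated losses scores well...
good = objective(borrow_usage=0.85, insolvent_frac=0.0, liquidated_frac=0.0)
# ...while even modest insolvencies drag the score down sharply.
bad = objective(borrow_usage=0.95, insolvent_frac=0.5, liquidated_frac=1.0)
assert good > bad
```

Adjusting `reward_weight` and `penalty_weight` is the code-level analogue of using community feedback to dial the trade-off toward the desired risk tolerance.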
A major challenge in running this type of optimization for a complex protocol is that, while we can rank simulation results on an objective scale, it is difficult to know in which direction to adjust parameters to improve the outcome. For a simple optimization with only two or three parameters, it would be practical to sample enough settings to create a full map of performance, but for most real applications this would be very inefficient. If we also knew how the objective function changes with an adjustment in parameters (in other words, its derivative), we could converge to the optimal solution more quickly using gradient descent. However, this is impossible for risk parameter tuning without making extensive assumptions about market behavior. To provide parameter recommendations in a reasonable number of simulation runs, we use the covariance matrix adaptation evolution strategy (CMA-ES), which adapts some elements of gradient descent methods to problems where the derivative is not directly observable.
Risk Parameter Tuning
The key insight of CMA-ES is that if we sample a cloud of parameter values, we can infer something about the derivative by looking at where the top-performing points land. By sampling a Gaussian distribution of points and ranking them, we can update the shape and location of the next sample set to reflect our best guess of where the optimal value lies. A stylized depiction of this method of parameter tuning is shown in the diagram below, where the strategy converges to the maximum in about four iterations:
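The loop in the diagram can be sketched as a stripped-down evolution strategy. This is not Gauntlet’s implementation, and it omits the evolution paths and step-size control that full CMA-ES adds; it only illustrates the core sample–rank–refit cycle on a toy two-dimensional objective:

```python
import numpy as np

def simple_es(score, mean, cov, iterations=30, popsize=50, elite=10, seed=0):
    """Stripped-down evolution strategy in the spirit of CMA-ES:
    sample a Gaussian cloud, rank by score, re-fit the mean and
    covariance to the top performers, and repeat."""
    rng = np.random.default_rng(seed)
    for _ in range(iterations):
        samples = rng.multivariate_normal(mean, cov, size=popsize)
        scores = np.array([score(x) for x in samples])
        top = samples[np.argsort(-scores)[:elite]]  # the "red" points
        diffs = top - mean                          # measured from the OLD mean
        cov = diffs.T @ diffs / elite + 1e-8 * np.eye(len(mean))
        mean = top.mean(axis=0)                     # shift toward the best region
    return mean

# Maximize a toy objective whose peak is at (3, -2), starting far away.
peak = np.array([3.0, -2.0])
best = simple_es(lambda x: -np.sum((x - peak) ** 2),
                 mean=np.zeros(2), cov=np.eye(2))
```

Computing the elites’ covariance around the previous mean (a rank-μ-style update) stretches the sampling cloud in the direction of improvement, which is what lets the distribution walk toward the optimum rather than collapsing in place.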
In this example, the top performing points in each sample set are marked in red, the lowest performing points in blue and the points in between in gray. With each step, we adjust the location or shape of the sample distribution to move it in the apparent optimal direction, indirectly applying the concept of gradient descent to this more abstracted problem. Since the strategy does not make assumptions about the nature of the problem besides the ability to rank outcomes, it is a robust and powerful approach for many optimization problems where other methods are not practical. When Gauntlet makes parameter recommendations, we rely on CMA-ES methods to tune our recommendations to protocol objectives. In future articles, we look forward to further elaborating on how we use these tools, both in the area of lending risk management and the variety of other problems we seek to help solve.