A DEMONSTRATION FROM MANAGERIAL FINANCE
Example Co. is privately held and has financing from a bank loan. As a condition of the loan, the lender requires that Example Co. maintain an interest coverage ratio, defined as earnings before interest and taxes (EBIT) divided by interest expense, of at least 1.5. While Example Co. is financially sound and expects to be profitable, its profits face two major sources of uncertainty. First, revenues fluctuate with variations in product mix and sales volume. Second, the firm's costs are uncertain due to factors such as variations in sales mix, operating efficiencies, and changes in input prices.
To estimate EBIT and compute the interest coverage ratio, the CFO analyzes the company's cost structure. Historically, variable costs average 35% of revenues, though this percentage fluctuates randomly from one period to the next. The firm also has fixed operating costs of $5 million. Interest charged by the lender is likewise fixed at $500,000 per year, as net borrowings on the loan are not expected to change.
Based on prior years (e.g., a couple of years of sales data) and current market information, revenues for the current period are projected to be $10 million. The expected EBIT is $1.5 million, computed as ($10,000,000 × (1 − 0.35) − $5,000,000), and the expected interest coverage ratio is 3.0 ($1,500,000 ÷ $500,000). Based only on these estimates, the firm appears likely to meet the lender's covenant. Rather than relying on the relatively large margin of safety reflected in this single calculation, the CFO uses Excel to further quantify the likelihood of a covenant violation by performing a Monte Carlo simulation of EBIT and the resulting interest coverage ratio.
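The base-case arithmetic can be verified with a short script (Python here purely for illustration; the article's model itself lives in Excel):

```python
# Base-case (expected value) figures from the example
revenue = 10_000_000     # projected revenues
vc_pct = 0.35            # variable costs as a share of revenue
fixed_costs = 5_000_000  # fixed operating costs
interest = 500_000       # annual interest expense

# EBIT = revenues less variable and fixed costs
ebit = revenue * (1 - vc_pct) - fixed_costs
coverage = ebit / interest

print(ebit)      # 1500000.0
print(coverage)  # 3.0
```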
First, the CFO must quantify the variation around the ratio's two primary sources of uncertainty: revenues and variable costs. For a stable business, the standard deviation of revenues over a historical period may be appropriate. Alternatively, the CFO could request multiple revenue projections from the sales department, each based on different assumptions about factors such as customer growth, sales mix, and economic conditions, and calculate the standard deviation of those projections. Other statistical techniques, such as regression analysis of prior-period results, could also be applied to historical data to identify trends in variation. In this scenario, the CFO reviews projections prepared by the sales department and determines that an appropriate standard deviation for revenue is $500,000. The standard deviation of variable costs can be measured similarly. Here, the CFO reviews the past 24 months of cost data and finds an appropriate standard deviation of 2% for use in the analysis.
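The projection-based approach amounts to taking the sample standard deviation of the sales department's scenarios. A minimal sketch (the five projection figures below are hypothetical, invented only to illustrate the mechanics; the article's $500,000 is the CFO's own determination):

```python
import statistics

# Hypothetical revenue projections, each built on different assumptions
# about customer growth, sales mix, and economic conditions
projections = [9_400_000, 9_800_000, 10_000_000, 10_300_000, 10_600_000]

# Sample standard deviation across the scenarios
sigma = statistics.stdev(projections)
print(round(sigma))
```

In Excel, the same calculation is =STDEV.S(range).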
BRINGING EXCEL INTO THE EQUATION
The CFO proceeds to build the spreadsheet by labeling each fixed and random variable with its expected value and, if applicable, its expected standard deviation. These values are referenced throughout the spreadsheet to model thousands of replications of the EBIT and coverage ratio calculations based on "random draws" of revenues and variable costs. The resulting output is a range of possible EBITs and coverage ratios.
To perform a "random draw" for a variable, the CFO uses the Excel formula =NORMINV(RAND(), mean, standard deviation); in current versions of Excel, NORM.INV is the equivalent function. The CFO then references the "drawn" variables in the formulas for EBIT and the interest coverage ratio. Finally, the CFO copies these formulas down any number of rows, with each row representing a separate observation. In this example, the CFO repeats the formulas across 100,000 rows to compute an EBIT and interest coverage ratio for 100,000 potential outcomes. A smaller number of rows, perhaps only 10,000, would likely be sufficient, but the results will be more stable from one run to the next with the larger sample size. Excel limits a worksheet to just over 1 million rows; by also using its more than 16,000 columns, however, the number of available cells rises into the billions. In practice, time and the computer's memory and processing power are usually the only real limitations.
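The same simulation can be sketched outside Excel; the loop below mirrors the spreadsheet's 100,000 rows, with each random.gauss call playing the role of =NORMINV(RAND(), mean, standard deviation). The covenant threshold and distribution parameters come from the example; the fixed seed is added only for reproducibility.

```python
import random

random.seed(42)  # fixed seed so repeated runs match

N = 100_000           # one iteration per spreadsheet row
FIXED_COSTS = 5_000_000
INTEREST = 500_000
COVENANT = 1.5

violations = 0
for _ in range(N):
    # Random draws, analogous to Excel's NORMINV(RAND(), mean, sd)
    revenue = random.gauss(10_000_000, 500_000)   # mean $10M, sd $500K
    vc_pct = random.gauss(0.35, 0.02)             # mean 35%, sd 2%

    ebit = revenue * (1 - vc_pct) - FIXED_COSTS
    if ebit / INTEREST < COVENANT:
        violations += 1

print(f"Estimated probability of covenant violation: {violations / N:.2%}")
```

Under these assumptions the simulated violation probability comes out in the low single digits, despite the comfortable-looking 3.0 expected coverage ratio.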
See the table "Spreadsheet Template" for a snapshot of the template and the table "Excel Formulas" for details on the necessary formulas. Only the first three rows are illustrated, but the formulas remain the same for each of the 100,000 observations. One expects that a greater number of observations would increase the predictive power of the simulation, but there are diminishing returns as observations are added. For example, the model will increase in utility far more when going from 10,000 to 20,000 observations than when going from 100,000 to 200,000 observations.
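The diminishing returns described above follow from the standard error of a simulated proportion, which shrinks with the square root of the number of observations, so doubling a small sample tightens the estimate far more than doubling a large one. A brief sketch (the 2.5% violation probability is an illustrative assumption):

```python
import math

p = 0.025  # assumed violation probability, for illustration only
errors = {}
for n in (10_000, 20_000, 100_000, 200_000):
    # Standard error of an estimated proportion: sqrt(p(1-p)/n)
    errors[n] = math.sqrt(p * (1 - p) / n)
    print(f"{n:>7} observations: standard error ~ {errors[n]:.4%}")
```

Going from 10,000 to 20,000 observations removes about 29% of the standard error; going from 100,000 to 200,000 removes the same 29% of an error that is already roughly a third as large.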