Interesting KPIs (Key Performance Indicators) for a Subscription Company


In working with early stage businesses, I often get asked which metrics management and the board should use to understand a company's progress. It is important for every company to establish a set of consistent KPIs that are used to objectively track progress. While these need to be a part of each board package, it is even more important for the executive team to use them in managing the company. While this post focuses on SaaS/Subscription companies, the majority of it applies to most other types of businesses.

Areas KPIs Should Cover

  1. P&L Trends
  2. MRR (Monthly Recurring Revenue) and LTR (Lifetime Revenue)
  3. CAC (Cost of Customer Acquisition)
    1. Marketing to create leads
    2. Customers acquired electronically
    3. Customers acquired using sales professionals
  4. Gross Margin and LTV (Life Time Value of a customer)
  5. Marketing Efficiency

Many companies will also need KPIs regarding inventory in addition to the ones above.

While there may be very complex analysis behind some of these numbers, it’s important to try to keep KPIs to 2-5 pages of a board package. Use of the right KPIs will give a solid, objective, consistent top-down view of the company’s progress. The P&L portion of the package is obviously critical, but I have a possibly unique view on how this should be included in the body of a board package.

P&L Trends: Less is More

One mistake many companies make is confusing detail with better analysis. I often see models that have 50-100 line items for expenses and show this by month for 3 or more years out… but show one or no years of history. What this does is waste a great deal of time on predicting things that are inconsequential and controllable (by month), while eliminating all perspective. Things like seasonality are lost if one is unable to view 3 years of revenue at a time without scrolling from page to page.

Of course, for the current year's budget it is appropriate for management to establish monthly expectations in detail, but for any long-term planning, success revolves around revenue, gross margins, marketing/sales spend and the number of employees. For some companies that are deep technology players there may be significant costs in R&D other than payroll, but this is the exception. By using a simple formula for G&A based on the number of employees, the board can apply a sanity check on whether cost estimates in the long-term model will be on target assuming revenue is on target. So why spend excessive time on nits? Aggregating cost frees up time for better understanding how and why revenue will ramp, the relationship between revenue types and gross margin, the cost of acquiring a customer, the lifetime value of a customer and the average spend per employee.
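To illustrate the kind of headcount-based sanity check described above, here is a minimal sketch; the per-employee G&A figure and the model numbers are hypothetical, not drawn from any actual board package:

```python
# Hypothetical sanity check: does the long-term model's G&A track headcount?
ASSUMED_GA_PER_EMPLOYEE = 10_000  # annual G&A dollars per employee (illustrative)

def expected_ga(headcount: int) -> float:
    """Rough G&A expectation driven purely by headcount."""
    return headcount * ASSUMED_GA_PER_EMPLOYEE

modeled_ga = 2_000_000  # G&A line from the model's out-year (hypothetical)
headcount = 180         # planned employees in the same year (hypothetical)

gap = modeled_ga / expected_ga(headcount) - 1
print(f"Model G&A is {gap:+.0%} versus the headcount-based estimate")
```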

In a similar way, the board is well served by viewing a simple P&L by quarter for 2 prior years plus the current one (with a forecast of remaining quarters). The lines could be:

Table 1: P&L by Quarter

A second version of the P&L should be produced for budget comparison purposes. It should have the same rows but have the columns be current period actual, current period budget, year to date (YTD) actual, year to date budget, current full year forecast, budget for the full year.

Table 2: P&L Actual / Budget Comparison

Tracking MRR and LTR

For any SaaS/Subscription company (I’ll simply refer to this as SaaS going forward) MRR growth is the lifeblood of the company with two caveats: excessive churn makes MRR less valuable and excessive cost in growing MRR also leads to deceptive prosperity. More about that further on. MRR should be viewed on a rolling basis. It can be done by quarter for the board but by month for the management team. Doing it by quarter for the board enables seeing a 3-year trend on one page and gives the board sufficient perspective for oversight. Management needs to track this monthly to better manage the business. A relatively simple set of KPIs for each of 12 quarterly periods would be:

Table 3: MRR and Retention
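As a minimal sketch of how the rows of such a table might be computed, assume a simple ledger of new, expansion and churned MRR per period (all names and figures below are hypothetical):

```python
# Minimal MRR roll-forward sketch (hypothetical figures).
# Ending MRR = starting MRR + new + expansion - churned.
periods = [
    (40_000, 10_000, 15_000),  # (new MRR, expansion MRR, churned MRR)
    (55_000, 12_000, 18_000),
]

mrr = 500_000  # starting MRR (hypothetical)
for new, expansion, churned in periods:
    start = mrr
    mrr = start + new + expansion - churned
    gross_churn = churned / start                          # % of starting MRR lost
    net_retention = (start + expansion - churned) / start  # excludes new business
    print(f"MRR {start:,} -> {mrr:,}; gross churn {gross_churn:.1%}; "
          f"net retention {net_retention:.1%}")
```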

Calculating Life Time Revenue through Cohort Analysis

The detailed method of calculating LTR does not need to be shown in every board package but should be included at least once per year; management should calculate it monthly.

The LTR calculation uses a grid where the columns are the various quarterly cohorts, that is, all customers who first purchased in that quarter (management might also do this monthly instead of quarterly). This analysis can be applied to non-SaaS companies as well as SaaS entities. The first row would be the number of customers in the cohort. The next row would be the first month's revenue for the cohort, the next the second month's revenue, and so on until reaching 36 months (or whatever number the board prefers for B2B… I prefer 60 months). The next row would be the total for the full period and the final row would be the average Lifetime Revenue (LTR) per member of the cohort.

Table 4: Customer Lifetime Revenue

A second table would replicate the grid but show average per member of the cohort for each month (row). That table allows comparisons of cohorts to see if the average revenue of a newer cohort is getting better or worse than older ones for month 2 or month 6 or month 36, etc.

Table 5: Average Revenue per Cohort

Only cohorts that are at least 36 months old will have a full 36 months of data. More recent cohorts will not have a full set of information but can still be used to see what trends have occurred. For example, is the second month's average revenue for a current cohort much less than it was for a cohort one year ago? While newer cohorts do not have full sets of monthly revenue data, they are still very relevant in calculating more recent LTR. This can be done by using the average month-over-month declines observed in sequential months and applying them to cohorts with fewer months of data, as in the sketch below.
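A minimal sketch of the cohort grid and the extrapolation follows; the cohort data is invented, and using each cohort's own observed declines is just one way to implement the averaging described above:

```python
# Cohort LTR sketch with invented data. Each cohort lists monthly revenue
# for the months observed so far; older cohorts have more months.
cohorts = {
    "2015-Q1": {"customers": 100, "revenue": [10_000, 9_000, 8_200, 7_800]},
    "2016-Q1": {"customers": 140, "revenue": [15_000, 13_200, 12_100]},
    "2017-Q1": {"customers": 200, "revenue": [24_000, 20_500]},
}

def avg_decline(revenue):
    """Average month-over-month revenue ratio for one cohort."""
    ratios = [b / a for a, b in zip(revenue, revenue[1:])]
    return sum(ratios) / len(ratios)

HORIZON = 36  # months; a board might prefer 60 for B2B

for name, c in cohorts.items():
    rev = list(c["revenue"])
    ratio = avg_decline(rev)
    while len(rev) < HORIZON:  # extrapolate the missing months
        rev.append(rev[-1] * ratio)
    per_member = [m / c["customers"] for m in c["revenue"]]  # Table 5 rows
    print(f"{name}: est. LTR per customer = {sum(rev) / c['customers']:,.0f}; "
          f"month-2 revenue per member = {per_member[1]:,.2f}")
```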

Customer Acquisition Cost (CAC)

Calculating CAC is done in a variety of ways and is quite different for customers acquired electronically versus those obtained by a sales force.  Many companies I’ve seen have a combination of the two.

Marketing used to generate leads should always be considered part of CAC. The marketing cost in a month is first divided by the number of leads to generate a cost per lead (CPL). The next step is to estimate the conversion rate of leads to customers. A simple table would be as follows:

Table 6: Customer Acquisition Costs


For an eCommerce company, the additional cost to convert might be one free month of product or a heavily subsidized price for the first month. If the customer is getting the item before becoming a regular paying customer, then the CAC would be:

CAC = MCTC / the percent that converts from the promotional trial to a paying customer.

CAC when a Sales Force is Involved

For many eCommerce companies and B2B companies that sell electronically, marketing is the primary cost involved in acquiring a paying customer. For those utilizing a sales force, the marketing expense plus the sales expense must be accumulated to determine CAC.

Typically, what this means is steps 1 through 3 above would still be used to determine CPL, but step 1 above might include marketing personnel used to generate leads plus external marketing spend:

  1. CPL (cost per lead) as above
  2. Sales Cost = current month's cost of the sales force, including T&E
  3. New Customers in the month = NC
  4. Conversion Rate to Customer = NC / number of leads = Y%
  5. CAC = CPL/Y% + Sales Cost/NC

There are many nuances ignored in the simple method shown. For example, some leads may take many months to close. Some may go through a pilot before closing. Therefore, there are more sophisticated methods of calculating CAC but using this method would begin the process of understanding an important indicator of efficiency of customer acquisition.
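To make the arithmetic concrete, here is a minimal sketch of steps 1 through 5, plus the promotional-trial variant from the prior section; every figure is hypothetical:

```python
# CAC sketch following steps 1-5 above (hypothetical numbers).
marketing_cost = 120_000  # month's marketing spend, incl. lead-gen personnel
leads = 2_400
sales_cost = 90_000       # month's sales force cost including T&E
new_customers = 60

cpl = (marketing_cost / leads)                       # step 1: cost per lead
conversion = new_customers / leads                   # step 4: Y%
cac = cpl / conversion + sales_cost / new_customers  # step 5
print(f"CPL ${cpl:.0f}; conversion {conversion:.1%}; CAC ${cac:,.0f}")

# Promotional-trial variant (electronic acquisition): divide the marketing
# cost per trial customer by the trial-to-paying conversion rate.
cost_per_trial = 45.0  # marketing + subsidy per trial customer (hypothetical)
trial_to_paying = 0.40
print(f"Trial-based CAC ${cost_per_trial / trial_to_paying:,.0f}")
```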

Gross Margin (GM) is a Critical Part of the Equation

While revenue is obviously an important measure of success, not all revenue is the same. Revenue that generates 90% gross margin is a lot more valuable per dollar than revenue that generates 15% gross margin. When measuring a company’s potential for future success it’s important to understand what level of revenue is required to reach profitability. A first step is understanding how gross margin may evolve. When a business scales there are many opportunities to improve margins:

  • Larger volumes may lead to larger discounts from suppliers
  • Larger volumes for products that are software/content may lower the hosting cost as a percent of revenue
  • Shipping to a larger number of customers may allow opening additional distribution centers (DCs) to facilitate serving customers from a DC closer to their location lowering shipping cost
  • Larger volumes may mean improved efficiency in the warehouse. For example, it may make more automation cost effective

When forecasting gross margin, it is important to be cautious in predicting some of these savings. The board should question radical changes in GM in the forecast. Certain efficiencies should be seen in a quarterly trend, and a marked improvement from the trend needs to be justified. The more significant jump in GM from a second DC can be calculated by looking at the change in shipping rates for customers that will be serviced from the new DC vs what rates are for these customers from the existing one.

Calculating LTV (Lifetime Value)

Gross Margin by itself may be an incomplete measure of the variable profit from a customer. If payment is by credit card, then the credit card cost per customer is part of variable costs. Some companies do not include shipping charges as part of cost of goods, but they should always be part of variable cost. Customer service cost is typically another cost that rises in proportion to the number of customers. So:

Variable cost = Cost of Goods sold plus any cost that varies directly with sales

Variable Profit = Revenue – Variable Cost

Variable Profit% (VP%) = (Variable Profit)/Revenue

LTV = LTR x VP%
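A minimal sketch of these definitions, with hypothetical numbers:

```python
# LTV sketch per the definitions above (hypothetical numbers).
revenue = 1_000_000      # period revenue
cogs = 550_000           # cost of goods sold
other_variable = 80_000  # credit card fees, shipping, customer service, etc.

variable_profit = revenue - (cogs + other_variable)
vp_pct = variable_profit / revenue  # Variable Profit %
ltr = 2_400                         # lifetime revenue per customer, from the cohort analysis
ltv = ltr * vp_pct                  # LTV = LTR x VP%
print(f"VP% {vp_pct:.1%}; LTV per customer ${ltv:,.0f}")
```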

The calculation of VP% should be based on current numbers as they will apply going forward. Determining a company’s marketing efficiency requires comparing LTV to the cost of customer acquisition. As mentioned earlier in the post, if the CAC is too large a proportion of LTV, a company may be showing deceptive (profitless) growth. So, the next set of KPIs address marketing efficiency.

Marketing Efficiency

It does not make sense to invest in an inefficient company, as it will burn through capital at a rapid rate and will find it difficult to become profitable. A key measure of efficiency is the relationship between LTV and CAC, or LTV/CAC. Essentially this is how many dollars of variable profit the company will make for every dollar it spends on marketing and sales. A ratio of 5 or more usually means the company is efficient. The period used for calculating LTR will influence this number. Since churn tends to be much lower for B2B companies, 5 years is often used to calculate LTR and LTV. But using 5 years means waiting longer to receive the resulting profits and can obscure the cash flow implications of slower recovery of CAC. So, a second metric important to understanding burn is how long it takes to recover CAC:

CAC Recovery Time = number of months until variable profit equals the CAC

The longer the CAC recovery time, the more capital required to finance growth. Of course, existing customers are also contributing to the month’s revenue alongside new customers. So, another interesting KPI is contribution margin which measures the current state of balance between marketing/sales and Variable Profits:

Contribution Margin = Variable Profits – Sales and Marketing Cost

Early on this number will be negative as there aren’t enough older customers to cover the investment in new ones. But eventually the contribution margin in a month needs to turn positive. To reach profitability it needs to exceed all other costs of the business (G&A, R&D, etc.). By reducing a month’s marketing cost, a company can improve contribution margin that month at the expense of sequential growth… which is why this is a balancing act.
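Pulling the three efficiency measures together in one minimal sketch (all figures are hypothetical, and the recovery calculation assumes a flat monthly variable profit per customer):

```python
# Marketing-efficiency sketch (hypothetical figures).
ltv = 888.0  # lifetime value per customer
cac = 175.0  # blended customer acquisition cost
monthly_vp_per_customer = 25.0  # assumed flat monthly variable profit

ltv_to_cac = ltv / cac  # a ratio of ~5+ usually signals efficiency
cac_recovery_months = cac / monthly_vp_per_customer

# Contribution margin for one month, across the whole customer base:
variable_profits = 370_000
sales_and_marketing = 420_000
contribution_margin = variable_profits - sales_and_marketing  # negative early on

print(f"LTV/CAC {ltv_to_cac:.1f}; CAC recovery {cac_recovery_months:.0f} months; "
      f"contribution margin ${contribution_margin:,.0f}")
```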

I realize this post is long but wanted to include a substantial portion of KPIs in one post. However, I’ll leave more detailed measurement of sales force productivity and deeper analysis of several of the KPIs discussed here for one or more future posts.

Soundbytes

I'll begin by apologizing for a midyear brag, but I always tell others to enjoy success and therefore am about to do that myself. In my top ten predictions for 2018 I included a market prediction and 4 stock predictions. I was feeling pretty good that they were all working well when I started to create this post. However, the prices of high growth stocks can experience serious shifts in very short periods. Facebook and Tesla both had (what I consider) minor shortfalls against expectations in the 10 days since then, and both stocks have declined quite a bit in that period. But given the strength of my other two recommendations, Amazon and Stitchfix, the four still have an average gain of 15% as of July 27. Since I've only felt comfortable predicting the market when it was easy (after 9/11 and after the 2008 mortgage blowup), I was nervous about predicting the S&P would be up this year, as it was a closer call and somewhat controversial given the length of the bull market prior to this year. But it seemed obvious that the new tax law would be very positive for corporate earnings, so I thought the S&P would be up despite the likelihood of rising interest rates. So far, it is ahead 4.4% year to date, driven by stronger earnings. Since I always fear that my record of annual wins can't continue, I wanted to take a midyear victory lap just in case everything collapses in the second half of the year (which I don't expect but always fear). So I continue to hold all 4 stocks and in fact bought a bit more Facebook today.

Company Valuations Implied by my Valuations Bible: Are Snap, Netflix, Square and Twitter Grossly Overvalued?

Applying the Gross Margin Multiple Method to Public Company Valuation

In my last two posts I’ve laid out a method to value companies not yet at their mature business models. The method provides a way to value unprofitable growth companies and those that are profitable but not yet at what could be their mature business model. This often occurs when a company is heavily investing in growth at the expense of near-term profits. In the last post, I showed how I would estimate what I believed the long-term model would be for Tesla, calling the result “Potential Earnings” or “PE”. Since this method requires multiple assumptions, some of which might not find agreement among investors, I provided a second, simplified method that only involved gross margin and revenue growth.

The first step was taking about 20 public companies and calculating how they were valued as a multiple of gross margin (GM) dollars. The second step was to determine a “least square line” and formula based on revenue growth and the gross margin multiple for these companies. The coefficient of 0.62 shows that there is a good correlation between Gross Margin and Revenue Growth, and one significantly better than the one between Revenue Growth and a company’s Revenue Multiple (that had a coefficient of 0.36 which is considered very modest).
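For readers who want to reproduce the mechanics of that step, here is a minimal sketch; the (growth, GM multiple) pairs below are invented stand-ins, not the actual 20-company dataset:

```python
import numpy as np

# Invented (revenue growth, GM multiple) pairs standing in for the ~20 companies.
growth = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.80])
gm_multiple = np.array([5.0, 8.5, 11.0, 15.5, 18.0, 25.0])

slope, intercept = np.polyfit(growth, gm_multiple, 1)  # least-squares line
r_sq = np.corrcoef(growth, gm_multiple)[0, 1] ** 2     # goodness of fit
print(f"GM multiple ~= {slope:.2f} x growth + {intercept:.2f} (R^2 = {r_sq:.2f})")
```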

Where’s the Beef?

The least square formula derived in my post for relating revenue growth to an implied multiple of Gross Margin dollars is:

GM Multiple = (24.773 x Revenue Growth) + 4.1083, where revenue growth is expressed as a decimal (e.g., 0.40 for 40% growth)

Implied Company Market Value = GM Multiple x GM Dollars
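Putting the two formulas together, a minimal sketch of the screen, using a wholly hypothetical company as input:

```python
# Apply the published least-squares formula to screen a valuation.
def implied_market_value(gm_dollars: float, revenue_growth: float) -> float:
    gm_multiple = 24.773 * revenue_growth + 4.1083  # growth as a decimal, e.g. 0.40
    return gm_multiple * gm_dollars

# Hypothetical company: $800M of gross margin dollars, 40% growth, $15B market cap.
implied = implied_market_value(800e6, 0.40)
actual_mkt_cap = 15e9
print(f"Implied ${implied / 1e9:.1f}B; "
      f"over/(under) valuation {actual_mkt_cap / implied - 1:+.0%}")
```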

Now comes the controversial part. I am going to apply this formula to 10 companies using their data (with small adjustments) and compare the Implied Market Value (Implied MKT Cap) to their existing market cap as of several days ago. I'll then calculate the Implied Over (Under) Valuation based on the comparison. If the two values are within 20%, I view it as normal statistical variation.

Table 1: Valuation Analysis of 10 Tech Companies

  • * Includes net cash included in expected market cap
  • ** Uses adjusted GM%
  • *** Uses 1/31/18 year end
  • **** Growth rate used in the model is q4 2017 vs q4 2016.  See text

This method suggests that 5 companies are over-valued by 100% or more and a sixth, Workday, by 25%. Since Workday is close to a normal variation, I won't discuss it further. I have added net cash for Facebook, Snap, Workday and Twitter to the implied market cap as it was material in each case but did not do so for the six others as the impact was not as material.

I decided to include in the analysis the four companies I recommended in this year's Top Ten list: Amazon, Facebook, Tesla and Stitchfix. To my relief, they all show as under-valued, with Stitchfix (the only one below its Jan 2 price) having an implied valuation more than 100% above where it currently trades. The other three are up year to date and, while trading below what is suggested by this method, are within a normal range. For additional discussion of these four, see our 2018 Top Ten list.

 

Digging into the “Overvalued” Five

Why is there such a large discrepancy between actual market cap and that implied by this method for 5 companies? There are three possibilities:

  1. The method is inaccurate
  2. The method is a valid screen but I’m missing some adjustment for these companies
  3. The companies are over-valued and at some point will adjust, making them risky investments

While the method is a good screen on valuation, it can be off for any given company for three reasons: the revenue growth rate I'm using may radically change; a particular company may have the ability to dramatically increase gross margins; and/or a particular company may be able to generate much higher profit margins than its gross margin suggests. Each of these may be reflected in the company's actual valuation but isn't captured by this method.

To help understand what might make the stock attractive to an advocate, I’ll go into a lot of detail in analyzing Snap. Since similar arguments apply to the other 4, I’ll go into less detail for each but still point out what is implicit in their valuations.

Snap

Snap's gross margin (GM) is well below its peers and hurts its potential profitability and implied valuation. Last year, GM was about 15%, excluding depreciation and amortization, but it was much higher in the seasonally strong Q4. Its most direct competitor, Facebook, has a gross margin of 87%. The difference is that Facebook monetizes its users at a much higher level and has invested billions of dollars and executed quite well in creating its own low-cost infrastructure, while Snap has outsourced its backend to cloud providers Google and Amazon. Snap has recently signed 5-year contracts with each of them to extend the relationships. Committing to lengthy contracts will likely lower the cost of goods sold, and increasing revenue per user should also improve GM. But continuing to outsource puts a cap on how high margins can reach.

Using our model, Snap would need 79% gross margin to justify its current valuation. If I assume that scale and the longer-term contracts will enable Snap to double its gross margins to 30%, the model still shows it as being over-valued by 128% (as opposed to the 276% shown in our table). The other reason bulls on Snap may justify its high valuation is that they expect it to continue to grow revenue at 100% or more in 2018 and beyond. What is built into most forecasts is an assumed decline in revenue growth rates over time… as that is what typically occurs. The model shows that growing revenue 100% a year for two more years without burning cash would leave it only 32% over-valued in 2 years. But as a company scales, keeping revenue growth at that high a level is a daunting task. In fact, Snap already saw revenue growth decline to 75% in Q4 of 2017.
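One way to see where a figure like the 79% comes from is to invert the model: hold market cap and growth fixed and solve for the gross margin percentage that makes the implied value match. The inputs below are placeholders, not Snap's actual figures:

```python
# Solve for the GM% needed to justify a given market cap under the GM-multiple model.
def required_gm_pct(market_cap: float, revenue: float, growth: float) -> float:
    gm_multiple = 24.773 * growth + 4.1083
    gm_dollars_needed = market_cap / gm_multiple
    return gm_dollars_needed / revenue

# Placeholder inputs (not Snap's actual numbers):
print(f"Required GM: {required_gm_pct(market_cap=20e9, revenue=1.2e9, growth=0.75):.0%}")
```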

Twitter

Twitter is not profitable. Revenue declined in 2017 after growing a modest 15% in 2016, and yet it trades at a valuation that implies it is a growth company expanding at about 50% per year. While it has achieved such levels in the past, it may be difficult to even get back to 15% growth in the future given increased competition for advertising.

Netflix

I recommended Netflix in January 2015 as one of my stock picks for the year, and it proved a strong recommendation as the stock went up about 140% that year. However, between January 2015 and January 2018, the stock was up over 550% while trailing revenue only increased 112%.  I continue to like the fundamentals of Netflix, but my GM model indicates that the stock may have gotten ahead of itself by a fair amount, and it is unlikely to dramatically increase revenue growth rates from last year’s 32%.

Square

Square has followed what I believe to be the average pattern of revenue growth rate decline as it went from 49% growth in 2015, down to 35% growth in 2016, to under 30% growth in 2017. There is no reason to think this will radically change, but the stock is trading as if its revenue is expected to grow at a nearly 90% rate. On the GM side, Square has been improving GM each year and advocates will point out that it could go higher than the 38% it was in 2017. But, even if I use 45% for GM, assuming it can reach that, the model still implies it is 90% over-valued.

Blue Apron

I don't want to beat up on a struggling Blue Apron and thought it might have reached its nadir, but the model still implies it is considerably over-valued. One problem the company faces is that investors turn negative when a company has slow growth and keeps losing money, and such companies find it difficult to raise additional capital. So, before running out of cash, Blue Apron began cutting expenses to try to reach profitability. Unfortunately, given their customer churn, cutting marketing spend resulted in shrinking revenue in each sequential quarter of 2017. In Q4 the burn was down to $30 million, but the company was now at a 13% decline in revenue versus Q4 of 2016 (which is what we used in our model). I assume the solution probably needs to be a sale of the company. There could be buyers who would like to acquire the customer base, supplier relationships and Blue Apron's understanding of the process. But given that it has very thin technology, considerable churn and strong competition, I'm not sure a buyer would be willing to pay a substantial premium to its market cap.

 

An Alternative Theory on the Over Valued Five

I have to emphasize that I am no longer a Wall Street analyst and don’t have detailed knowledge of the companies discussed in this post, so I easily could be missing some important factors that drive their valuation.  However, if the GM multiple model is an accurate way of determining valuation, then why are they trading at such lofty premiums to implied value? One very noticeable common characteristic of all 5 companies in question is that they are well known brands used by millions (or even tens of millions) of people. Years ago, one of the most successful fund managers ever wrote a book where he told readers to rely on their judgement of what products they thought were great in deciding what stocks to own. I believe there is some large subset of personal and professional investors who do exactly that. So, the stories go:

  • “The younger generation is using Snap instead of Facebook and my son or daughter loves it”
  • “I use Twitter every day and really depend on it”
  • “Netflix is my go-to provider for video content and I’m even thinking of getting rid of my cable subscription”

Once investors substitute such inclinations for hard analysis, valuations can vary widely from those suggested by analytics. I’m not saying that such thoughts can’t prove correct, but I believe that investors need to be very wary of relying on such intuition in the face of evidence that contradicts it.

The Valuation Bible


After many years of successfully picking public and private companies to invest in, I thought I’d share some of the core fundamentals I use to think about how a company should be valued. Let me start by saying numerous companies defy the logic that I will lay out in this post, often for good reasons, sometimes for poor ones. However, eventually most companies will likely approach this method, so it should at least be used as a sanity check against valuations.

When a company is young, it may not have any earnings at all, or it may be at an earnings level (relative to revenue) that is expected to rise. In this post, I’ll start by considering more mature companies that are approaching their long-term model for earnings to establish a framework, before addressing how this framework applies to less mature companies. The post will be followed by another one where I apply the rules to Tesla and discuss how it carries over into private companies.

Growth and Earnings are the Starting Points for Valuing Mature Companies

When a company is public, the most frequently cited metric for valuation is its price to earnings ratio (PE). This may be done based on either a trailing 12 months or a forward 12 months. In classic finance theory a company should be valued based on the present value of future cash flows. What this leads to is our first rule:

Rule 1: Higher Growth Rates should result in a higher PE ratio.

When I was on Wall Street, I studied hundreds of growth companies (this analysis does not apply to cyclical companies) over the prior 10-year period and found that there was a very strong correlation between a given year’s revenue growth rate and the next year’s revenue growth rate. While the growth rate usually declined year over year if it was over 10%, on average this decline was less than 20% of the prior year’s growth rate. What this means is that if we took a group of companies with a revenue growth rate of 40% this year, the average organic growth for the group would likely be about 33%-38% the next year. Of course, things like recessions, major new product releases, tax changes, and more could impact this, but over a lengthy period of time this tended to be a good sanity test. As of January 2, 2018, the average S&P company had a PE ratio of 25 on trailing earnings and was growing revenue at 5% per year. Rule 1 implies that companies growing faster should have higher PEs and those growing slower, lower PEs than the average.

Graph 1: Growth Rates vs. Price Earnings Ratios


The graph shows the correlation between growth and PE based on the valuations of 21 public companies. Based on Rule 1, those above the line may be relatively under-priced and those below relatively over-priced. I say 'may be' as there are many other factors to consider, and the above is only one of several ways to value companies. Notice that most of the theoretically over-priced companies with growth rates of under 5% are traditional companies that have long histories of success and pay a dividend. What may be the case is that it takes several years for the market to adjust to their changed circumstances, or they may be valued based on the return from the dividend. For example, is Coca Cola trading on past glory, on its 3.5% dividend, or is there something deceptive about its current earnings (revenue growth has been a problem for several years as people switch from soda to healthier drinks)? I am not up to speed enough to know the answer. Those above the line may be buys despite appearing to be highly valued by other measures.

Relatively early in my career (in 1993-1995) I applied this theory to make one of my best calls on Wall Street: "Buy Dell, sell Kellogg". At the time Dell was growing revenue over 50% per year and Kellogg was struggling to grow it over 4% annually (its compounded growth from 1992 to 1995, partly driven by price increases). Yet Dell's PE was about half that of Kellogg and well below the S&P average. So, the call, while radical at the time, was an obvious consequence of Rule 1. Fortunately for me, Dell's stock appreciated over 65X from January 1993 to January 2000 (and well over 100X while I had it as a top pick) while Kellogg, despite large appreciation in the overall stock market, saw its stock decline slightly over the same 7-year period (but holders did receive annual dividends).

Rule 2: Predictability of Revenue and Earnings Growth should drive a higher trailing PE

Investors place a great deal of value on predictability of growth and earnings, which is why companies with subscription/SaaS models tend to get higher multiples than those with regular sales models. It is also why companies with large sales backlogs usually get additional value. In both cases, investors can more readily value the companies on forward earnings since they are more predictable.

Rule 3: Market Opportunity should impact the Valuation of Emerging Leaders

When one considers why high growth rates might persist, the size of the market opportunity should be viewed as a major factor. The trick here is to make sure the market being considered is really the appropriate one for that company. In the early 1990s, Dell had a relatively small share of a rapidly growing PC market. Given its competitive advantages, I expected Dell to gain share in this mushrooming market. At the same time, Kellogg had a stable share of a relatively flat cereal market, hardly a formula for growth. In recent times, I have consistently recommended Facebook in this blog for the very same reasons I had recommended Dell: in 2013, Facebook had a modest share of the online advertising market, which was expected to grow rapidly. Given the advantages Facebook had (and they were apparent as I saw every Azure ecommerce portfolio company moving a large portion of marketing spend to Facebook), it was relatively easy for me to realize that Facebook would rapidly gain share. During the time I've owned it and recommended it, this has worked out well as the share price is up over 8X.

How the rules can be applied to companies that are pre-profit

As a VC, it is important to evaluate what companies should be valued at well before they are profitable. While this is nearly impossible to do when we first invest (and won’t be covered in this post), it is feasible to get a realistic range when an offer comes in to acquire a portfolio company that has started to mature. Since they are not profitable, how can I apply a PE ratio?

What needs to be done is to try to forecast eventual profitability when the company matures. A first step is to see where current gross margins are and to understand whether they can realistically increase. The word realistic is the key one here. For example, if a young ecommerce company currently has one distribution center on the west coast, like our portfolio company Le Tote, the impact on shipping costs of adding a second eastern distribution center can be modeled based on current customer locations and known shipping rates from each distribution center. Such modeling, in the case of Le Tote, shows that gross margins will increase 5%-7% once the second distribution center is fully functional. On the other hand, a company that builds revenue city by city, like food service providers, may have little opportunity to save on shipping.

  • Calculating Variable Profit Margin

Once the forecast range for "mature" gross margin is estimated, the next step is to identify other costs that will increase in some proportion to revenue. For example, if a company is an ecommerce company that acquires most of its new customers through Facebook, Google and other advertising and has high churn, the spend on customer acquisition may continue to increase in direct proportion to revenue. Similarly, if customer service needs to be labor intensive, this can also be a variable cost. So, the next step in the process is to assess where one expects the "variable profit margin" to wind up. While I don't know the company well, this appears to be a significant issue for Blue Apron: marketing and cost of goods add up to about 90% of revenue. I suspect that customer support probably eats up (no pun intended) 5-10% of what is left, putting variable margins very close to zero. If I assume that the company can eventually generate 10% variable profit margin (which is giving it credit for strong execution), it would need to reach about $4 billion in annual revenue to reach break-even if other costs (product, technology and G&A) do not increase. That means increasing revenue nearly 5-fold. At their current YTD growth rate this would take 9 years and explains why the stock has a low valuation.

  • Estimating Long Term Net Margin

Once the variable profit margin is determined, the next step would be to estimate what the long-term ratio of all other operating costs might be as a percent of revenue. Using this estimate, I can determine a Theoretic Net Earnings Percent. Applying this percent to current (or next year's) revenue yields Theoretic Earnings and a Theoretic PE (TPE):

TPE = Market Cap / Theoretic Earnings
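A minimal sketch of this chain, from assumed mature margins to a Theoretic PE, with a break-even check in the spirit of the Blue Apron arithmetic above (all percentages and dollar figures are hypothetical):

```python
# Theoretic PE sketch (hypothetical long-term model percentages).
revenue = 600e6
variable_profit_pct = 0.10  # estimated mature variable profit margin
other_costs_pct = 0.06      # mature product + technology + G&A as % of revenue

theoretic_net_pct = variable_profit_pct - other_costs_pct
theoretic_earnings = revenue * theoretic_net_pct
market_cap = 4.8e9
tpe = market_cap / theoretic_earnings  # TPE = Market Cap / Theoretic Earnings
print(f"Theoretic earnings ${theoretic_earnings / 1e6:.0f}M; TPE {tpe:.0f}")

# Break-even check: revenue needed to cover fixed costs at the variable margin.
fixed_costs = 400e6
print(f"Break-even revenue ${fixed_costs / variable_profit_pct / 1e9:.1f}B")
```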

To give you a sense of how I successfully use this, review my recap of the Top Ten Predictions from 2017 where I correctly predicted that Spotify would not go public last year despite strong top line growth as it was hard to see how its business model could support more than 2% or so positive operating margin, and that required renegotiating royalty deals with record labels.  Now that Spotify has successfully negotiated a 3% lower royalty rate from several of the labels, it appears that the 16% gross margins in 2016 could rise to 19% or more by the end of 2018. This means that variable margins (after marketing cost) might be 6%. This would narrow its losses, but still means it might be several years before the company achieves the 2% operating margins discussed in that post. As a result, Spotify appears headed for a non-traditional IPO, clearly fearing that portfolio managers would not be likely to value it at its private valuation price since that would lead to a TPE of over 200. Since Spotify is loved by many consumers, individuals might be willing to overpay relative to my valuation analysis.

Our next post will pick up this theme by walking through why this leads me to believe Tesla continues to have upside, and then discussing how entrepreneurs should view exit opportunities.

 

SoundBytes

I've often written about effective shooting percentage relative to Stephen Curry, and once again he leads the league among players who average 15 points or more per game. What also accounts for the Warriors' success is the effective shooting of Klay Thompson, who is 3rd in the league, and Kevin Durant, who is 6th. Not surprisingly, LeBron is also in the top 10 (4th). The table below shows the top ten among players averaging 15 points or more per game. Of the top ten scorers in the league, 6 are among the top 10 effective shooters, with James Harden only slightly behind at 54.8%. The remaining 3 are Cousins (53.0%), Lillard (52.2%), and Westbrook, the only one below the league average of 52.1%, at 47.4%.

Table: Top Ten Effective Shooters in the League


*Note: Bolded players denote those in the top 10 in Points per Game

Using Technology to Revolutionize Urban Transit


Worsening traffic requires new solutions

As our population increases, the traffic congestion in cities continues to worsen. In the Bay Area my commute into the city now takes about 20% longer than it did 10 years ago, and driving outside of typical rush hours is now often a major problem. In New York, the subway system helps quite a bit, but most of Manhattan is gridlocked for much of the day.

The two key ways of relieving cities from traffic snarl are:

  1. Reduce the number of vehicles on city streets
  2. Increase the speed at which vehicles move through city streets

Metro areas have been experimenting with different measures to improve car speed, such as:

  1. Encouraging carpooling and implementing high occupancy vehicle lanes on arteries that lead to urban centers
  2. Converting more streets to one-way with longer periods of green lights
  3. Prohibiting turns onto many streets as turning cars often cause congestion

No matter what a city does, traffic will continue to get worse unless compelling and effective urban transportation systems are created and/or enhanced. With that in mind, this post will review current alternatives and discuss various ways of attacking this problem.

Ride sharing services have increased congestion

Uber and Lyft have not helped relieve congestion; they have probably even increased it, as so many rideshare vehicles cruise the streets while awaiting their next ride. While the escalation of ridesharing services like Uber and Lyft may have reduced the number of people who commute to work in their own cars, these services merely substitute an Uber driver's car for a personal one. Commuters parked their cars when arriving at work, while ridesharing drivers continue to cruise after dropping off a passenger, so the real benefit here has been in reducing demand for parking, not improving traffic congestion.

A simple way to think about this is that the total cars on the street at any point in time consists of those with someone going to a destination plus those cruising awaiting picking up a passenger. Uber does not reduce the number of people going to a destination by car (and probably increases it as some Uber riders would have taken public transportation if not for Uber).

The use of optimal traffic-aware routing GPS apps like Waze doesn’t reduce traffic but spreads it more evenly among alternate routes, therefore providing a modest increase in the speed that vehicles move through city streets. The thought that automating these vehicles will relieve pressure is unrealistic, as automated vehicles will still be subject to the same movement as those with drivers (who use Waze). Automating ridesharing cars can modestly reduce the number of cruising vehicles, as Uber and Lyft can optimize the number that remain in cruise mode. However, this will not reduce the number of cars transporting someone to a destination. So, it is clear to me that ridesharing services increase rather than reduce the number of vehicles on city streets and will continue to do so even when they are driverless.

Metro rail systems effectively reduce traffic but are expensive and can take decades to implement

Realistically, improving traffic flow requires cities to enhance their urban transport systems, thereby reducing the number of vehicles on their streets. There are several historic alternatives, but the only one that can move significant numbers of passengers from point A to point B without impacting other traffic is a rail system. However, construction of a rail system is costly, highly disruptive, and can take decades to go from concept to completion. For example, the New York City Second Avenue Line was tentatively approved in 1919. It is educational to read the history of the reasons for delays, but the actual project didn't begin until 2005, despite many millions of dollars being spent on planning well before that date. The first construction commenced in April 2007. The first phase of the construction cost $4.5 billion and included 3 stations and 2 miles of tunnels. This phase was completed, and the line opened in January 2017. By May, daily ridership was approximately 176,000 passengers. A second phase is projected to cost an additional $6 billion, add 1.5 more miles to the line and be completed 10-12 years from now (assuming no delays). Phase 1 and 2 together, from actual start to hopeful finish, will take over two decades from the 2005 start date… and about a century from when the line was first considered!

Dedicated bus rapid transit, less costly and less effective

Most urban transportation networks include bus lines through city streets. While buses do reduce the number of vehicles on the roads, they have several challenges that keep them from being the most efficient method of urban transport:

  1. They need to stop at traffic lights, slowing down passenger movement
  2. When they stop to let one passenger on or off, all other passengers are delayed
  3. They are very large and often cause other street traffic to be forced to slow down

One way of improving bus efficiency is a Dedicated Bus Rapid Transit System (BRT). Such a system creates a dedicated corridor for buses to use. A key to increasing the number of passengers such a system can transport is to remove them from normal traffic (thus the dedicated lanes) and to reduce or eliminate the need to stop for traffic lights by either altering the timing to automatically accommodate minimal stoppage of the buses or by creating overpasses and/or underpasses. If traffic lights are altered, the bus doesn’t stop for a traffic light but that can mean cross traffic stops longer, thus increasing cross traffic congestion. Elimination of interference using underpasses and/or overpasses at each intersection can be quite costly given the substantial size of buses. San Francisco has adopted the first, less optimal, less costly, approach along a two-mile corridor of Van Ness Avenue. The cost will still be over $200 million (excluding new buses) and it is expected to increase ridership from about 16,000 passengers per day to as much as 22,000 (which I’m estimating translates to 2,000-3,000 passengers per hour in each direction during peak hours). Given the increased time cross traffic will need to wait, it isn’t clear how much actual benefit will occur.

Will Automated Car Rapid Transit (ACRT) be the most cost effective solution?

I recently met with a company that expects to create a new alternative using very small cars for automated car rapid transit (ACRT), at a fraction of the cost and with more than double the capacity of a BRT. The basic concept is to create a corridor similar to that of a BRT, utilizing underpasses under some streets and bridges over other streets, so cross traffic would not be affected by longer traffic light stoppages. Since the size of an underpass (tunnel) to accommodate a very small car is a fraction of that of a very large bus, so is the cost. The cars would be specially designed driverless automated cars that have no trunk, no back seats and hold one or two passengers. The same 3.5 to 4.0-meter-wide lane needed for a BRT would be sufficient for more than two lanes of such cars. Since the cars would be autonomous, speed and distance between cars could be controlled so that all cars in the corridor move at 30 miles per hour until they exit. Since there would be overpasses and underpasses across each cross street, the cars would not stop for lights. Each vehicle would hold one or two passengers going to the same stop, so the car would not slow until it reached that destination. When it did, it would pull off the road without reducing speed until it was on the exit ramp.

The company claims that it will have the capacity to transport 10,000 passengers per hour per lane with the same setup as the Van Ness corridor if underpasses and overpasses were added. Since a capacity of 10,000 passengers per hour in each direction would provide significant excess capacity compared to likely usage, 2 lanes (3 meters in total width instead of 7-8 meters) is all that such a system would require. The reduced width would reduce construction cost while still providing excess capacity. Passengers would arrive at destinations much sooner than by bus as the autos would get there at 30 miles per hour without stopping even once. This translates to a 2-mile trip taking 4 minutes! Compare that to any experience you have had taking a bus.  The speed of movement also helps make each vehicle available to many more passengers during a day. While it is still unproven, this technology appears to offer significant cost/benefit vs other alternatives.
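As a back-of-envelope check on that claimed throughput, here is a minimal sketch; the headway and occupancy figures are my own assumptions, not the company's:

```python
# Back-of-envelope ACRT throughput check (assumptions are illustrative).
speed_mph = 30
avg_passengers_per_car = 1.25  # assumed mix of 1- and 2-passenger trips
headway_feet = 24              # assumed nose-to-nose spacing between cars

cars_per_hour = speed_mph * 5280 / headway_feet  # one lane, steady flow
passengers_per_hour = cars_per_hour * avg_passengers_per_car
trip_minutes = 2 / speed_mph * 60                # the 2-mile corridor
print(f"{passengers_per_hour:,.0f} pax/hour/lane; "
      f"2-mile trip in {trip_minutes:.0f} minutes")
```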

Conclusion

The population expansion within urban areas will continue to drive increased traffic unless additional solutions are implemented. If it works as well in practice as it does in theory, an ACRT like the one described above offers one potential way of improving transport efficiency. However, this is only one of many potential approaches to solving the problem of increased congestion. Regardless of the technology used, this is a space where innovation must happen if cities are to remain livable. While investment in underground rail is also a potential way of mitigating the problem, it will remain an extremely costly alternative unless innovation occurs in that domain.

The Business of Theater


I have become quite interested in analyzing theater, in particular, Broadway and Off-Broadway shows for two reasons:

  1. I’m struck by the fact that revenue for the show Hamilton is shaping up like a Unicorn tech company
  2. My son Matthew is producing a show that is now launching at a NYC theater, and as I have been able to closely observe the 10-year process of it getting to New York, I see many attributes that are consistent with a startup in tech.

Incubation

It is fitting that Matthew’s show, Ernest Shackleton Loves Me, was first incubated at Theatreworks, San Francisco, as it is the primary theater of Silicon Valley. Each year the company hosts a “writer’s retreat” to help incubate new shows. Teams go there for a week to work on the shows, all expenses paid. Theatreworks supplies actors, musicians, and support so the creators can see how songs and scenes seem to work (or not) when performed. Show creators exchange ideas much like what happens at a tech incubator. At the culmination of the week, a part of each show is performed before a live audience to get feedback.

Creation of the Beta Version

After attending the writer's retreat, the creators of Shackleton needed to do two things: find a producer (like a VC, a producer is a backer of the show who recruits others to help finance the project) and add other key players to the team – a book writer, director, actors, etc. Recruiting strong players for each of these positions doesn't guarantee success but certainly increases the probability. In the case of Shackleton, Matthew came on as lead producer, and he and the team did quite well in getting a Tony-winning book writer, an Obie-winning director and very successful actors on board. Once this team was together, an early (beta) version of the show was created and performed for an audience of potential investors (the pitch). Early investors in the show are like angel investors, as risk is higher at this point.

Beta Testing

The next step was to run a beta test of the product, called the "out of town tryout". In general, out of town is anyplace other than New York City. It is used for continuous improvement of the show, much like beta testing is used to iterate a technology product based on user feedback. Theater critics also review shows in each city where they are performed. Ernest Shackleton Loves Me (Shackleton) had three runs outside of NYC: Seattle, New Jersey and Boston. During each, the show was improved based on audience and critic reaction. While it received rave reviews in each location, critics and live audiences can usually still suggest ways a show can be improved, and responding to that feedback helps prepare a show for a New York run.

Completing the Funding

Like a tech startup, a theater production finds it easier to raise money once the product is complete. In theater, a great deal of funding is required for the steps mentioned above, but for most shows it is difficult to obtain the bulk of the funding needed to reach New York without actual performances. An average musical that goes Off-Broadway will require $1.0-$2.0 million in capitalization, and an average one that goes to Broadway tends to capitalize at between $8 and $17 million. Hamilton cost roughly $12.5 million to produce, while Shackleton will capitalize at the lower end of the Off-Broadway range due to having a small cast and relatively efficient management. For many shows, the completion of funding extends through the early days of the NYC run. It is not unusual for a show to announce it will open at a certain theater on a certain date and then be unable to raise the incremental money needed to do so. Like some tech startups, shows such as Shackleton may run a crowdfunding campaign to help top off their funding.

You can see what a campaign for a theater production looks like by clicking on this link and perhaps support the arts, or by buying tickets on the website (since the producer is my son, I had to include that small ask)!

The Product Launch

Assuming funding is sufficient and a theater has been secured (there currently is a shortage of Broadway theaters), the New York run then begins. This is the true "product launch". Part of a show's capitalization may be needed to fund a shortfall in revenue versus weekly cost during the first few weeks, as reviews plus word of mouth are often needed to help drive revenue above weekly break-even. Part of the reason so many Broadway shows employ famous Hollywood stars, or are revivals of shows that had prior success and/or are based on a movie, TV show, or other well-known property, is to ensure substantial initial audiences. Some examples of this currently on Broadway are Hamilton (bestselling book), Aladdin (movie), Beautiful (Carole King story), Chicago (revival of a successful show), Groundhog Day (movie), Hello Dolly (revival plus Bette Midler as star) and Sunset Boulevard (revival plus Glenn Close as star).

Crossing Weekly Break Even

Gross weekly burn for shows has a wide range (just like startups), with Broadway musicals having weekly costs from $500,000 to about $800,000 and Off-Broadway musicals in the $50,000 to $200,000 range. In addition, there are royalties of roughly 10% of revenue that go to a variety of players like the composer, book writer, etc. Hamilton has about $650,000 in weekly cost and roughly a $740,000 breakeven level when royalties are factored in. Shackleton's weekly costs are about $53,000, at the low end of the range for an Off-Broadway musical and under 10% of Hamilton's weekly cost.
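The break-even arithmetic is simple enough to sketch; the royalty rates below are illustrative, since the actual percentage varies by show:

```python
# Weekly break-even sketch: find revenue R such that R - royalties - fixed cost = 0.
def weekly_breakeven(fixed_weekly_cost: float, royalty_rate: float) -> float:
    return fixed_weekly_cost / (1 - royalty_rate)

print(f"${weekly_breakeven(650_000, 0.12):,.0f}")  # ~$740K, Hamilton-scale
print(f"${weekly_breakeven(53_000, 0.10):,.0f}")   # ~$59K, a small Off-Broadway musical
```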

Is Hamilton the Facebook of Broadway?

Successful Broadway shows have multiple sources of revenue and can return significant multiples to investors.

Chart 1: A ‘Hits’ Business Example Capital Account

Since Shackleton just had its first performance on April 14, it's too early to predict what the profit (or loss) picture will be for investors. On the other hand, Hamilton already has a track record that can be analyzed. In its first months on Broadway the show was grossing about $2 million per week, which I estimate drove about $1 million per week in profits. Financial investors, like preferred shareholders of a startup, are entitled to the equivalent of "liquidation preferences". This meant that investors recouped their money in a very short period, perhaps as little as 13 weeks. Once they recouped 110%, the producer began splitting profits with financial investors, which reduced the financial investors' share to roughly 42% of profits. In the early days of the Hamilton run, scalpers were reselling tickets at enormous profits. When my wife and I went to see the show in New York (March 2016), we paid $165 per ticket for great orchestra seats which we could have resold for $2,500 per seat! Instead, we went and enjoyed the show. But if a scalper owned those tickets they could have made 15 times their money. Subsequently, the company decided to capture a portion of this revenue by adjusting prices for the better seats, and as a result the show now grosses nearly $3 million per week. Since fixed weekly costs probably did not change, I estimate weekly profits are now about $1.8 million. At 42% of this, investors would be accruing roughly $750,000 per week. At this run rate, investors would receive over 3X their investment dollars annually from this revenue source alone if prices held up.
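Here is a sketch of that waterfall; the figures are the rough estimates from the paragraph above and the mechanics are deliberately simplified:

```python
# Simplified recoupment waterfall for a hit show (rough estimates from above).
capitalization = 12.5e6      # reported capitalization
weekly_profit_early = 1.0e6  # estimated early weekly profit
weeks_to_recoup = capitalization * 1.10 / weekly_profit_early  # 110% priority return

weekly_profit_now = 1.8e6  # after premium-seat repricing
investor_share = 0.42      # investors' share of profits post-recoupment
annual_to_investors = weekly_profit_now * investor_share * 52
print(f"Recoup in ~{weeks_to_recoup:.0f} weeks; then ~"
      f"{annual_to_investors / capitalization:.1f}x capitalization per year")
```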

Multiple Companies Amplify Revenue and Profits

Currently Hamilton has a second permanent show in Chicago, a national touring company in San Francisco (until August, when it's supposed to move to LA) and has announced a second touring company that will begin its tour in Seattle in early 2018 before moving to Las Vegas, Cleveland and other stops. I believe it will also have a fifth company in London and a sixth in Asia by late 2018 or early 2019. Surprisingly, the touring companies can, in some cities, generate more weekly revenue than the Broadway company due to larger venues. Table 1 shows an estimate of the revenue per performance in the sold-out San Francisco venue, the Orpheum Theater, which has a capacity of 2,203 versus the Broadway capacity (Richard Rodgers Theatre) of 1,319.

Table 1: Hamilton San Francisco Revenue Estimates

While one would expect Broadway prices to be higher, this has not been the case. I estimate the average ticket price in San Francisco to be $339, whereas the average on Broadway is now $282. The combination of 67% higher seating capacity and 20% higher average ticket prices means the revenue per week in San Francisco is now close to $6 million. Since it was lower in the first 4 weeks of the 21-plus-week run, I estimate the total revenue for the run to be about $120 million. Given the explosive revenue, I wouldn't be surprised if the run in San Francisco was extended again.

While it has not been disclosed what share of this revenue goes to the production company, normally the production company is compensated at a base guarantee level plus a share of the profits (overage) after the venue covers its labor and marketing costs. Given these high weekly grosses, I assume the production company's share is close to 50% of the gross, given the enormous profits versus an average show at the San Francisco venue (this would include both guarantee and overage). At 50% of revenue, there would still be almost $3 million per week to go towards paying the production company's expenses (guarantee) and the local theater's labor and marketing costs. If I use a lower $2 million of company share per week as profits to the production company, that annualizes at over $100 million in additional profits, or $42 million more per year for financial investors. The Chicago company is generating lower revenue than San Francisco, as the theater is smaller (1,800 seats) and average ticket prices appear to be closer to $200, making revenue roughly $2.8 million per week. When the show ramps to 6 companies (I think by early 2019), it could be generating aggregate revenue of $18-20 million per week or more should demand hold up. So, it would not be surprising if annual ticket revenue exceeded $1 billion per year at that time.
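The venue arithmetic behind these weekly estimates can be reproduced directly; the eight-performances-per-week figure is a standard Broadway schedule I am assuming, not a disclosed number:

```python
# Estimated weekly gross from capacity and average ticket price.
def weekly_gross(capacity: int, avg_price: float, shows_per_week: int = 8) -> float:
    return capacity * avg_price * shows_per_week  # assumes sold-out houses

sf = weekly_gross(2203, 339)  # Orpheum Theater, San Francisco
ny = weekly_gross(1319, 282)  # Richard Rodgers Theatre, Broadway
print(f"SF ~${sf / 1e6:.1f}M/week vs. Broadway ~${ny / 1e6:.1f}M/week")
```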

Merchandise adds to the mix

I’m not sure what amount of income each item of merchandise generates to the production company. Items like the cast album and music downloads could generate over $25 million in revenue, but in general only 40% of the net income from this comes to the company. On the other hand, T-shirts ($50 each) and the high-end program ($20 each) have extremely large margin which I think would accrue to the production company. If an average attendee of the show across the 6 (future) or more production companies spent $15 this could mean $1.2 million in merchandise sales per week across the 6 companies or another $60 million per year in revenue. At 60% gross margin this would add another $36 million in profits.

I expect Total Revenue for Hamilton to exceed $10 billion

In addition to the sources of revenue outlined above, Hamilton will also have the opportunity for licensing to schools and others to perform the show, a movie, additional touring companies and more. It seems likely to easily surpass the $6 billion that Lion King and Phantom are reported to have grossed to date, or the $4 billion so far for Wicked. In fact, I believe it eventually will gross over $10 billion in total. How this gets divided between the various players is more difficult to fully assess, but investors appear likely to receive over 100x their investment, Lin-Manuel Miranda could net as much as $1 billion (before taxes) and many other participants should become millionaires.

Surprisingly Hamilton may not generate the Highest Multiple for Theater Investors!

Believe it or not, a very modest musical with 2 actors appears to be the winner as far as return on investment goes. It is The Fantasticks, which, because of its low budget and excellent financial performance sustained over decades, has now returned over 250X its invested capital. Obviously, my son, an optimistic entrepreneur, hopes his 2-actor musical, Ernest Shackleton Loves Me, will match this record.