Pretend you’re the pricing product manager for Lyft’s ride-scheduling feature, and you’re launching in a new city like Toledo, Ohio. The prevailing rate people are used to paying for a ride between the airport and downtown (either direction, one way) is $25, and the prevailing wage drivers are used to earning for that trip is $19. You launch at exactly these numbers: $25 per ride charged to the rider, $19 per ride paid to the driver. It turns out only about 60 of every 100 ride requests find a driver at this price. (There is more than one route to think about in Toledo, but for the sake of this exercise you can focus on this one.)

Here are the current unit economics for each side.

Drivers:
- Customer acquisition cost (CAC) of a new driver is roughly $500. CAC is sensitive to the rate of acquisition, since channels are only so deep.
- At the prevailing wage, drivers have a 5% monthly churn rate and complete 100 rides per month.

Riders:
- CAC of a new rider is $15 (like driver CAC, it’s sensitive to the rate of acquisition, since existing marketing channels are only so deep).
- Each rider requests 1 ride per month on average.
- Churn is interesting: riders who don’t experience a “failed to find driver” event churn at 10% monthly, but riders who experience one or more “failed to find driver” events churn at 33% monthly.

You’ve run one pricing experiment so far: when you reduced Lyft’s take from $6/ride to $3/ride across the board for a few weeks, the match rate rose almost instantly from 60% to roughly 93%.
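To keep the numbers straight while I try to model this, here's how I've encoded the givens in Python (the variable names are mine, not from the prompt):

```python
# Scenario constants as stated in the prompt (names are my own).
RIDER_PRICE = 25.0           # fixed fare charged to the rider ($); driver pay = RIDER_PRICE - take
BASE_DRIVER_PAY = 19.0       # prevailing driver wage per trip ($), i.e. a $6 take at launch
# Observed experiment: $6 take -> 60% match rate, $3 take -> 93% match rate.

DRIVER_CAC = 500.0           # cost to acquire a new driver ($), given as roughly $500
DRIVER_CHURN = 0.05          # monthly driver churn at the prevailing wage
RIDES_PER_DRIVER = 100       # completed rides per driver per month

RIDER_CAC = 15.0             # cost to acquire a new rider ($)
RIDES_PER_RIDER = 1          # ride requests per rider per month
RIDER_CHURN_OK = 0.10        # monthly churn for riders with no failed-match experience
RIDER_CHURN_FAILED = 0.33    # monthly churn for riders with >= 1 failed match
```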
Your task is to maximize the company’s net revenue (the difference between what riders pay and what Lyft pays out to drivers) on this route in Toledo over the next 12 months. Assume you cannot charge riders more than the prevailing rate of $25. How much more or less do you pay drivers per trip (i.e., how much do you change Lyft’s take)?
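Here's the rough 12-month model I've sketched so far, reusing the constants above. The big assumptions are mine, not from the prompt: I interpolate the match rate linearly between the two observed points, start from an arbitrary base of 1,000 riders, and replace churned riders and drivers each month at their CAC so the marketplace stays flat:

```python
def match_rate(take):
    """Assumption (mine): match rate moves linearly with Lyft's take between the
    two observed points ($6 -> 60%, $3 -> 93%), clamped to [0, 1]."""
    slope = (0.93 - 0.60) / (3.0 - 6.0)   # about -0.11 match rate per extra $1 of take
    return max(0.0, min(1.0, 0.60 + slope * (take - 6.0)))


def net_revenue_12mo(take, starting_riders=1000.0):
    """Simulate 12 months on this route for a candidate take per ride.

    Assumptions (mine): churned riders are replaced each month at RIDER_CAC so the
    base stays flat, and driver supply is whatever hits match_rate(take), with the
    5% of drivers who churn each month replaced at DRIVER_CAC.
    """
    m = match_rate(take)
    riders = starting_riders
    net = 0.0
    for _ in range(12):
        requests = riders * RIDES_PER_RIDER
        matched = requests * m
        net += matched * take                                  # Lyft's margin on completed rides

        # With 1 request per rider per month, the share of riders who hit a
        # failed match this month is simply (1 - m).
        churned_riders = riders * (m * RIDER_CHURN_OK + (1 - m) * RIDER_CHURN_FAILED)
        net -= churned_riders * RIDER_CAC                      # re-acquire churned riders

        drivers_needed = matched / RIDES_PER_DRIVER
        net -= drivers_needed * DRIVER_CHURN * DRIVER_CAC      # replace churned drivers
    return net


# Brute-force sweep of candidate takes from $0 to $6 in $0.50 steps
# (negative takes, i.e. subsidies, or takes above $6 could be added to the range).
best_net, best_take = max((net_revenue_12mo(t * 0.5), t * 0.5) for t in range(13))
print(f"best take ≈ ${best_take:.2f}/ride, 12-month net ≈ ${best_net:,.0f}")
```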
My question is: how do I take the churn rates and CACs for riders and drivers into account in this calculation?
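The framing I've been toying with is that churn sets an expected customer lifetime of roughly 1/(monthly churn) months, and CAC is a one-time cost amortized over that lifetime. A tiny per-user sketch, again reusing the constants and match_rate above:

```python
def rider_lifetime_value(take):
    """Per-rider view: expected lifetime ~ 1/(monthly churn) months, so lifetime
    margin is (monthly margin) / churn, minus the one-time CAC. Churn is blended
    by the chance the rider's single monthly request goes unmatched."""
    m = match_rate(take)
    blended_churn = m * RIDER_CHURN_OK + (1 - m) * RIDER_CHURN_FAILED
    monthly_margin = RIDES_PER_RIDER * m * take
    return monthly_margin / blended_churn - RIDER_CAC


def driver_lifetime_value(take):
    """Per-driver view: 100 rides/month over an expected 1/0.05 = 20-month
    lifetime, times Lyft's take per ride, minus the $500 driver CAC."""
    return RIDES_PER_DRIVER * take / DRIVER_CHURN - DRIVER_CAC
```

(I realize the two views count the margin on the same rides, so they can't just be summed; I'm only using them to sanity-check each side's payback against its CAC.) Is a lifetime-value framing like this the right way to fold churn and CAC in, or does the fixed 12-month horizon mean I should stick with the month-by-month simulation above?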