Monday, May 30, 2016

About Phones and Mobility

We celebrate Memorial Day in the States every year at the end of May.  One of the things I like to do on the holiday is catch a baseball game with the local minor league team.  And I tend to watch people in addition to watching the game.

This year I made it to a game.  And while I was there enjoying the game with a hot dog and a soda, I noticed something very interesting.  I saw more people carrying mobile flip phones and other "unsmart" phones than smart phones (Note:  I do not live anywhere near Silicon Valley).  Smart phones seemed to be predominant in the over-40 crowd, while "unsmart" phones appeared to be the choice among the under-40 crowd.  I was a bit stunned.

Now, keep in mind some qualifiers.  First, there were probably about 6,000 people at the game...I probably laid eyeballs on a 10th of those people at the most (I spent more time checking out people as the home team lost badly).  Second, my ability to judge age is not the best.

Still, my observations got me thinking.  Thinking enough that I actually talked to a few people about it.  All of the people carrying the "unsmart" phones made their choice because the cost of the monthly service was so much less.  And so long as their phone can text, check email, and function as a phone (in that order of priority), they were willing to live with the limitations and save the money.  Those people carrying smart phones?  In most cases, their employer subsidized part or all of the hardware and service costs.

Now here is the interesting thing to me.  We stress mobility in the enterprise for two reasons:  to increase the productivity of our workforce and to connect with our customers.  But if the smart phone tide for consumers is beginning to ebb due to increasing costs, doesn't that eventually disrupt the idea of using mobility to connect with our customers?  Is cost beginning to drive a new trend?

So that's how I spent my holiday...at the ballpark observing and pestering people about their mobile phone choices.  I'd be interested in hearing your thoughts and opinions.  Comments welcome.

Wednesday, May 18, 2016

Let Mikey Try It

Life Cereal came up with the perfect early adopter commercial back in the '70s:  Let Mikey try it.

It seems as though the providers that have mastered SaaS have a few things in common; one is leveraging an early adopter program.

The idea is that the provider shares new applications or new major releases with a small set of specific customers before making the new software generally available.  They do that because it allows them to learn from those customer experiences: catching software bugs and...just as important...perfecting the service component in the offering.  It's the latter component where an effective center of excellence comes into play.

By addressing service and service process issues based on learning acquired in the early adopter period, an effective center of excellence will have standardized processes and procedures in place for onboarding customers and getting them started with the new software by the time the product is generally available.  Which makes getting from onboarding start to valuable first use happen better, faster and cheaper.  And, in a SaaS world, that's the key to customer success.

It's a good model:  let Mikey try it.




Wednesday, May 11, 2016

A CoE for Customer Success?

So if the model we've put together in the past few posts is the key to the SaaS lifecycle, how does a Center of Excellence tie into all this?  Good question.  Let's take a look at it.

The CoE I work in, which seems typical for the industry, essentially divides the work into three categories:  Programs, Customers and Solutions.  The detail plays out in the following table:

[Table:  CoE work breakdown across the Programs, Customers, and Solutions categories]

BTW, don't thank me for this work breakdown.  It's the brainchild of Oracle's John Cafolla.  My particular role, which falls mostly into the "Solutions" category, is currently focused on building tools and technologies that improve the transition of our SaaS customers from "Start" to "First Valuable Use" - yup, another application of the "better, faster, cheaper" mantra.

Keep in mind that it's an evolving approach...note that "Escalation Support" (which we've previously defined as a negative-value activity in SaaS) is still such a substantial part of our workload as to hold a spot in the table.

It's also important to keep in mind that our particular CoE also deals with a huge base of customers coming to SaaS from legacy applications.  Due to the sheer volume, moving those customers forward is just as significant as helping customers who are new to us.

Finally, it's also worth noticing that the Solutions work focuses on the "Onboarding" stage of the SaaS Lifecycle...for the moment.  As our own journey as a SaaS provider moves forward, we'll eventually shift into an emphasis on tools and technologies to improve customer experiences in the "Nurturing" stage.  But that's a topic for another day.

So you asked how a Center of Excellence fits into the customer success-oriented SaaS lifecycle model?  Well, here ya go.

Comments encouraged.

Monday, May 2, 2016

Drivers

About the driving wheel
Want to know how it feels
To be taking time out, turn it all about
We're taking hold the drivin' wheel

            - From Poco's "Drivin' Wheel"

If you can't measure it, you can't manage it
             - Peter Drucker

In my last post, I promised we would talk about influence drivers here.  So let's do that.

In any endeavor we undertake, we naturally want to achieve success...one or more positive outcomes.  But measuring the positive outcomes themselves gives us after-the-fact information; it puts us into reactive mode.  And, in customer success, we want to be proactive rather than reactive.  With that in mind, the idea is to measure drivers that influence positive outcomes...drivers that indicate how we're doing during the journey to our outcomes.

A really convenient point here:  most of our resulting outcomes are really determined during SaaS onboarding and nurturing...and most of the influence drivers we find relate to onboarding and nurturing.  Cool how that logic ties together, isn't it?

In a nutshell here, the idea is to measure influence drivers in a timely fashion (note that I'm avoiding the discussion of real-time, near real-time, and the related technical dogma - I'll only say that sooner is better).

So if it were me measuring customer success, I'd focus on something akin to the following:

Onboarding Drivers

  • Time to Provision:  Measure in days the time from when the customer subscribes to when they have access to all the SaaS environments promised.  The smaller the number, the more positive the influence.
  • Time to Value:  I'd measure this in days from the completion of the provisioning to the time of first value use or "go live" date.  The quicker the better.
  • Initial Adoption:  What is the rate of use by the initial individual users at the time of first value?  I'd measure this by transaction numbers or usage (daily, monthly, and/or frequency).  We're simply setting a baseline here to measure growth or contraction during nurturing.
  • Customer Satisfaction:  This comes down to a basic yes or no question:  would the initial set of users recommend your service at the time of go live?
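
For the concretely minded, here's a minimal sketch of how these onboarding drivers might be tallied.  It assumes you can pull the subscription, provisioning, and go-live dates plus a go-live usage count from your own records; the function and field names are mine, purely for illustration.

    from datetime import date

    def onboarding_drivers(subscribed_on, provisioned_on, go_live_on,
                           go_live_transactions, would_recommend_votes):
        """Illustrative onboarding metrics; all names here are made up."""
        return {
            # Time to Provision: days from subscribing to full environment access
            "time_to_provision_days": (provisioned_on - subscribed_on).days,
            # Time to Value: days from provisioning complete to first valuable use
            "time_to_value_days": (go_live_on - provisioned_on).days,
            # Initial Adoption: go-live transaction count, kept as a baseline
            "initial_adoption_baseline": go_live_transactions,
            # Customer Satisfaction: share of initial users answering "yes"
            "would_recommend_rate": sum(would_recommend_votes) / len(would_recommend_votes),
        }

    print(onboarding_drivers(date(2016, 1, 4), date(2016, 1, 15),
                             date(2016, 3, 1), 1250, [True, True, False, True]))
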
Nurturing Drivers

  • Adoption
    • User Adoption: growth in usage - more transactions, more users
    • Feature Adoption:  do we see new types of transactions?  We're looking for growth in use case solutions or in the use of additional features included in the applications.
  • Capacity Utilization:  How many seats is a customer using relative to those they are paying for in the subscription?  The higher the ratio or percentage here, the better.
  • Business Results:  measurable gains that relate to the outcomes desired by the SaaS customer.
  • Escalations:  the number of open inbound requests, the number of closed requests, and the time to resolution for closed requests.  The smaller the numbers here, the more positive the influence.
  • Customer Feedback:  The same deal here as with Customer Satisfaction, just a different point in the lifecycle.  It comes down to the same basic yes or no question, asked of the current user base:  would they recommend your service today?
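
And a similarly rough sketch for the nurturing side, comparing the current period against the go-live baseline.  Again, every name and number below is illustrative; the real inputs would come from your own usage, subscription, and support systems.

    def nurturing_drivers(baseline_txns, current_txns, baseline_users, current_users,
                          seats_used, seats_paid, open_requests, closed_requests,
                          avg_days_to_resolve, would_recommend_votes):
        """Illustrative nurturing metrics measured against the onboarding baseline."""
        return {
            # Adoption: growth in transactions and users versus the go-live baseline
            "transaction_growth": (current_txns - baseline_txns) / baseline_txns,
            "user_growth": (current_users - baseline_users) / baseline_users,
            # Capacity Utilization: seats in use versus seats paid for (higher is better)
            "capacity_utilization": seats_used / seats_paid,
            # Escalations: the smaller these numbers, the more positive the influence
            "open_requests": open_requests,
            "closed_requests": closed_requests,
            "avg_days_to_resolve": avg_days_to_resolve,
            # Customer Feedback: same yes/no question, asked of today's users
            "would_recommend_rate": sum(would_recommend_votes) / len(would_recommend_votes),
        }

    print(nurturing_drivers(1250, 1800, 40, 55, 55, 80, 3, 12, 2.5,
                            [True, True, True, False]))

Feature Adoption and Business Results are deliberately left out of the sketch; they need customer-specific definitions before they can be reduced to a single number.
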
Not only would I collect these onboarding and nurturing metrics, but I would also share the metrics for each customer with that customer.  Because influence drivers matter just as much for customers as they do for providers.

In addition to the metrics above, I would also measure these outcomes:

  • Renewals:  both in terms of dollars and number of customers who extend their subscriptions
  • Growth: both in terms of dollars and number of customers who subscribe to additional SaaS products
  • Churn: both in terms of dollars and number of customers who cancel their subscriptions
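
If it helps to see them as numbers, here's one simple, hypothetical way to express those outcomes as rates for a given period, in both dollars and customer counts (the figures below are invented):

    def outcome_rates(renewed_dollars, expansion_dollars, churned_dollars, starting_dollars,
                      renewed_customers, expanded_customers, churned_customers, starting_customers):
        """Illustrative renewal, growth, and churn rates for one period."""
        return {
            # Renewals: subscription dollars and customers that extended
            "renewal_rate_dollars": renewed_dollars / starting_dollars,
            "renewal_rate_customers": renewed_customers / starting_customers,
            # Growth: additional dollars and customers from added SaaS products
            "growth_rate_dollars": expansion_dollars / starting_dollars,
            "growth_rate_customers": expanded_customers / starting_customers,
            # Churn: dollars and customers that cancelled
            "churn_rate_dollars": churned_dollars / starting_dollars,
            "churn_rate_customers": churned_customers / starting_customers,
        }

    print(outcome_rates(900000, 150000, 100000, 1000000, 45, 12, 5, 50))
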
Let me sum it up.  The mission of customer success is to add value for both customers and providers.  To add that value in the world of SaaS, you must be proactive rather than reactive.  And the best way to start being proactive is to measure influence drivers.

Last point for the day:  I'm stealing a huge portion of these ideas from Guy Nirpaz's book "Farm Don't Hunt: The Definitive Guide to Customer Success".  A very worthy read if you're into SaaS and customer success.

Comments welcome...