Edition 61
What Gets Measured in Service Supply Chain Optimization: Customer Behavior
by Dan Gettens, Chief Research Officer, OnProcess Technology, Inc.

In optimizing their Service Supply Chain, many companies start by trying to better understand customer behavior. What is the pattern of customer response? What are the unique needs of customers in each segment? Which customer segments are under-served?

Getting this data is worthwhile. In key Service Supply Chain (SSC) processes, customer behavior may not be at all what we expect. Not only that, customer behavior can vary widely across key segments.

Let’s start with customer behavior in a representative SSC process: Reverse Logistics (RL), that is, customer returns behavior. We picked a simple metric to model this behavior, one that could be applied to a wide range of Service Supply Chain processes, including Reverse Logistics, Product Activations, Service Order Management, and Installations.




Just for background, these are two common Reverse Logistics scenarios:

• A customer’s Broadband Service (internet, voice, cable) has been disconnected, either voluntarily or for non-payment, requiring the customer to return a product.

• A customer in the Computer or Technology Markets returns a product under an advance exchange. Typically, the customer under warranty first receives a new replacement product and then returns the defective product.

We applied our metric (called Remaining Average-Wait-to-Return) as follows: we pulled data from a stand-still randomized control group (no proactive intervention) and compared the Remaining Average-Wait-to-Return for products not yet returned at key markers: Days 15, 30, 45 and 60.
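The computation behind the metric can be sketched in a few lines. This is a minimal illustration, not the program’s actual code: the sample return times and the function name are ours, and products never returned are simply excluded from the average here.

```python
# Sketch: Remaining Average-Wait-to-Return at key day markers.
# Hypothetical return times (days until return) for a stand-still
# control group; None marks a product that was never returned.
return_days = [5, 12, 22, 35, 48, 70, 95, 140, None, None]

MARKERS = [15, 30, 45, 60]

def remaining_avg_wait(return_days, marker):
    """Average additional days of waiting, among products that were
    eventually returned but had not yet been returned at the marker."""
    still_open = [d for d in return_days if d is not None and d > marker]
    if not still_open:
        return None
    return sum(d - marker for d in still_open) / len(still_open)

for m in MARKERS:
    print(f"Day {m}: {remaining_avg_wait(return_days, m):.1f} more days")
```

With real program data, a flat or declining series across the markers would match the common expectation; a rising series, as in the case example below, signals that waiting gets more expensive, not less.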

A common expectation is that the Remaining Average-Wait-to-Return may be steady or may decline.

The actual results in our case example were nothing of the sort. Here’s how to read the case results in the table below: for customers who have not returned as of Day 15, you wait an average of 90.8 more days for returns. For customers who have not returned as of Day 30, you wait an average of 119.8 more days, an increase.



The B2C RL program above had a total N of 7,725.

In other words, in some key SSC processes, customer behavior may not be what we expect. In the RL case example above, the longer you wait for successful returns, the longer you will have to keep waiting. The potential benefit of waiting to resolve open customer issues diminishes.

Also, the longer we wait, the longer the “black-out” period with our end customer — where we lack important communication with the customer and visibility into the customer’s issues and experiences. And we are left with unanswered questions about the customer’s experience: Why did the customer not return the product? Did the customer not return the product because the replacement part still has not worked? Were billing or other issues delaying the return? Does the customer still have service issues? Did the customer already return the product – but we have not recognized the return for some reason?

To complete the picture, let’s look at the same table, but now with Eventual Remaining Recovery Rates included – for products not yet returned at our Day markers.
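The recovery-rate side of the picture can be sketched the same way. Again this is an illustrative computation on hypothetical data, not the program’s code: of the products still outstanding at each day marker, what fraction is ever returned at all?

```python
# Sketch: Eventual Remaining Recovery Rate at a day marker, i.e. of
# the products still outstanding at the marker, the fraction that is
# eventually returned. None marks a product that was never returned.
return_days = [5, 12, 22, 35, 48, 70, 95, 140, None, None]

def eventual_remaining_recovery_rate(return_days, marker):
    # Outstanding at the marker: returned after the marker, or never.
    outstanding = [d for d in return_days if d is None or d > marker]
    if not outstanding:
        return None
    recovered = [d for d in outstanding if d is not None]
    return len(recovered) / len(outstanding)

for m in (15, 30, 45, 60):
    rate = eventual_remaining_recovery_rate(return_days, m)
    print(f"Day {m}: {rate:.0%} of outstanding units eventually recovered")
```

Because the never-returned units stay in the denominator while returned units drain out of it, this rate can only fall as the markers advance, which is exactly the “fewer back the longer we wait” effect in the table.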



Not only do we have to wait longer, we get fewer back when we do wait. The outcome of the case example is somewhat like the one-liner about a restaurant: “The food is really terrible.” The response: “Yeah, and the portions are way too small.”

However, unlike the restaurant example, SSC results can be improved – and in a predictable way. Also, we selected an RL Program, but could have picked among a wide range of programs to start to understand and respond to customer behavior and requirements: Product Activations, Service Order Management, or Installations. While simple, the metric selected can be a useful diagnostic tool for planning for Predictive Analytics.

Both simple metrics of customer behavior and more sophisticated predictive models can work in three steps: first, identifying customer segments that may be underserved; second, tailoring proactive outreach; and third, optimizing Service Supply Chain results.
RLM
Chief Research Officer
Dan Gettens, LSSMBB, is Chief Research Officer for OnProcess Technology. Dan is responsible for the definition, development and delivery of OnProcess’ product offerings, including Service Supply Chain Optimization (SSCO), CE360™ and RL360™ solutions. He is leading our initiatives to apply Lean Six Sigma methodologies in developing the next generation of OnProcess products. Dan drives the delivery of Market Research to OnProcess’ clients to provide visibility and actionable insights into the customer experience. Prior to joining OnProcess in 2005, he was Vice President of Global Business for Corporate Software, including Global Accounts, Latin America and Asia Pacific. Dan served as Director of Global Accounts for Digital and Compaq. At Digital, Dan’s responsibilities included management of the Digital Customer Advisory Board, decision support for U.S. Sales, pricing and contracts for global accounts, support for manufacturing and logistics, and development of the Company’s worldwide channels strategy. Dan received a Bachelor’s degree from Middlebury, and Master’s degrees from Yale University and Rensselaer. He completed the Executive Education Program at Babson. Dan has served as a member of the Advisory Board of the Pricing Institute.

Pedro Cueva
Associate Director, Analytics
Pedro has experience in Analytics-Based Solutions, Market Research, and Process Improvement. In his current role as Associate Director, Analytics, Pedro’s focus is the optimization of the Service Supply Chain, delivery of Market Research & Analytics, Innovation Initiatives and the application of Lean Six Sigma for process improvement.
