5/15/2007

Collaborative outsourcing may be the way to go

The key theme of industry researcher Gartner's conference in London earlier this year was “Strategic Multisourcing”: the ability to work with multiple vendors on all the contextual work that is better left to third-party providers, while a company focuses on what is truly core to its own operations.

For many of the Fortune 1000 and FTSE 100 companies, this is not a new theme, for some of them have developed and sustained outsourcing relationships with half-a-dozen or more partners, most of them from India, for over a decade now.

One question that has dominated these discussions, and that becomes even more relevant for smaller firms and for the many firms in India seeking to outsource for the first time, is the choice between working with multiple partners and building a wide and deep relationship with a single trusted source.

The logic for multisourcing is compelling: the ability to work with best-of-breed providers in each specific segment, the commercial advantage of “keeping the vendor honest” by always having multiple mature options, and the flexibility to move work from internal departments to vendors, and even across providers, whenever the need arises.

One of our Fortune 100 clients is proud of its relationships with four of the top vendors in India, because they give it access to leading-edge solutions and to best practices that may have been implemented first for another client in an entirely different domain.

The flip side of this advantage is, of course, the time and effort it takes to develop mature relationships with multiple sources. For some of our retail clients in the US, UK, Dubai and India, the feeling has been that effective development and deployment of merchandise management or point-of-sale systems is so key to their competitive advantage that it would be counter-productive to build trusted relationships with more than one partner!

Recent trends in technology, too, have made it beneficial to invest heavily in the initial selection process, draw up mutually agreed service-level agreements and productivity improvement plans, and then focus on building a truly collaborative single-source outsourcing relationship.

The new centres that we are setting up in Hyderabad, Gdansk (Poland) and São Paulo (Brazil) will deploy a new framework that enables the work done for clients to be distributed: analysis of requirements at the client’s premises, architecture and design of solutions at one of the three new centres, development of the applications at full-service facilities in Pune or Shenzhen using generative techniques rather than “brute force programming,” and testing of the applications at one or more locations, as the client sees most appropriate.

This approach leverages the power of the Internet to perform work where it makes most sense, keeping in mind the proximity desired by any client at the analysis and testing stages and the cost and robust quality processes available at large offshore centres in actual system development.

The debate will continue, but the message is clear: outsourcing is becoming more and more key to the business fortunes of any firm, and the more strategic and collaborative it can be, the better!

Ganesh Natarajan is Deputy Chairman & MD of Zensar Technologies and Vice Chairman of NASSCOM, the software industry association.

Time for some BPO magic: 99.9% accuracy = 1.8% accuracy!

Fifteen Stanford students are working with me this quarter as part of a Stanford Marketing class. One of them recently asked me: “How can you say there is a quality problem with Business Process Outsourcing (BPO) if most vendors advertise 99.9% accuracy?” Before I could answer him, I had to explain what 99.9% accuracy really meant:

  • Is it document-level, field-level or even character-level accuracy? Or how 99.9% accuracy can equal 2% accuracy! If 99.9% of documents have no errors, that is indeed an impressive level of quality. Let’s assume instead that the 99.9% accuracy is actually a field-level number. Now, an insurance claim may easily have 200 fields. In that case, the document-level accuracy would be just 82% (if you like math, this is 0.999 multiplied by itself 200 times). Usually, however, once you ask the vendor, you will find that such a high accuracy level is actually a character-level accuracy. If 99.9% of characters have no errors, and most fields have 20 characters, then an average field would have only 98% accuracy. Assuming 200 fields on average per document, if 98% of fields have no errors, then on average the document-level accuracy would be just 1.8%. In other words, if the character-level accuracy is 99.9%, only about 2% of documents would be error free. Make absolutely sure you know the context for the reported 99.9% accuracy, because it may translate to only 82% or even just 2% document-level accuracy (the first sketch after this list works through these numbers).
  • Was the base of the field-level accuracy the hypothetical number of fields that might have been filled out or the actual number of fields that were filled out? Or how 99% accuracy can equal 90% accuracy! I recently analyzed an insurance claim that had almost a thousand fields. For example, it had space to specify about 30 separate diagnoses along with associated details. However, most of the time the claims contained just one to three diagnoses, and an average claim had only about 100 of the 1,000 fields filled out. If there was an error in ten fields of such a document, is that 90% accuracy or 99% accuracy? Remember that in most cases, only a small subset of fields is filled out on each document. Thus, whether we divide the errors by the total number of fields that could have been filled out or by the number of fields that were actually filled out can significantly affect the reported error rate (this case is also covered in the first sketch below).
  • How was the accuracy evaluated? Or how 99.9% accuracy can equal 89.9% accuracy! Several vendors use double-entry for quality control and assume that any field that was typed consistently by both operators was not an error. Sounds reasonable, doesn’t it? After all, there is only one way in which a field can be processed correctly and multiple ways in which it can be processed incorrectly. Thus, if two operators typed the field consistently, they must have gotten it right! I recently evaluated a BPO vendor that leverages double-entry and discovered an interesting fact that completely blew this assumption out of the water. Over 50% of the errors in the insurance claims processed by the vendor came from fields being left blank. I was really intrigued by this pattern and researched it further. It turns out that in double-entry, if an operator has any difficulty reading a field, they may leave the field blank, assuming the second operator will get it right. Leaving a field blank also reduces the probability that a discrepancy will be reported [a field may be entered incorrectly in several different ways, but left blank in only one way]. Even without collusion, this can lead to systematic under-reporting of operator errors in a double-entry system. Essentially, the operators quickly figure out that if they try to interpret bad handwriting and put in their best guess, the supervisor usually reports an error, but if they are ‘lazy’ and simply leave the field blank, the supervisor rarely complains. Humans are really smart and quickly learn from such feedback, even when the lesson they learn is exactly contrary to what the BPO vendor would want them to do. In this specific case, if 50% of the errors come from fields left blank, then on average 25%, and up to 50%, of the errors would not have been caught by the double-entry. Let us assume that the first-time error rate of the operators, as reported by the double-entry system, is 10%. The real first-time error rate would thus be 13% to 20%. Now, almost all of the errors caught by the double-entry system would have been corrected, and so the vendor might reasonably report 99.9% accuracy. In reality, the 3% to 10% of fields where both operators incorrectly left the field blank would never be caught, and the actual accuracy would be between 89.9% and 96.9% (the second sketch after this list reproduces this arithmetic).
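
For readers who want to check the arithmetic in the first two bullets, here is a minimal Python sketch. The inputs (20 characters per field, 200 fields per document, 1,000 possible versus 100 filled fields, 10 erroneous fields) are the illustrative numbers used above, not measurements from any particular vendor.

    # How the same "99.9% accuracy" plays out at different levels, and against
    # different bases. All numbers are the illustrative ones from the bullets above.

    char_accuracy = 0.999      # advertised accuracy, interpreted per character
    chars_per_field = 20       # assumed average characters per field
    fields_per_doc = 200       # assumed fields per insurance claim

    # Interpretation 1: 99.9% is a field-level number.
    doc_accuracy_if_field_level = 0.999 ** fields_per_doc            # ~0.82

    # Interpretation 2: 99.9% is a character-level number.
    field_accuracy = char_accuracy ** chars_per_field                # ~0.98
    doc_accuracy_if_char_level = field_accuracy ** fields_per_doc    # ~0.018

    print(f"field-level 99.9%     -> {doc_accuracy_if_field_level:.1%} of documents error-free")
    print(f"character-level 99.9% -> {doc_accuracy_if_char_level:.1%} of documents error-free")

    # The base question from the second bullet: 10 bad fields on a claim with
    # 1,000 possible fields, of which only about 100 were actually filled out.
    possible_fields, filled_fields, error_fields = 1000, 100, 10
    print(f"errors / possible fields: {1 - error_fields / possible_fields:.0%} accuracy")
    print(f"errors / filled fields:   {1 - error_fields / filled_fields:.0%} accuracy")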

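The double-entry arithmetic in the third bullet can be reproduced the same way. The 10% reported first-time error rate, the 50% share of blank-field errors and the 25% to 50% escape range are the assumptions stated above; note that the 96.9% upper bound quoted in the text comes from rounding the 3.3% uncaught error rate down to 3% (unrounded, it is closer to 96.6%).

    # How blank fields that slip past double-entry erode the delivered accuracy.
    # Assumed numbers follow the third bullet above.

    reported_error_rate = 0.10    # first-time errors caught by double-entry
    reported_accuracy = 0.999     # accuracy the vendor reports after corrections

    for escaped_share in (0.25, 0.50):   # share of all errors double-entry never sees
        true_error_rate = reported_error_rate / (1 - escaped_share)   # 13.3% or 20%
        uncaught_errors = true_error_rate - reported_error_rate       # 3.3% or 10%
        delivered_accuracy = reported_accuracy - uncaught_errors      # ~96.6% or ~89.9%
        print(f"{escaped_share:.0%} of errors escape -> "
              f"true first-time error rate {true_error_rate:.1%}, "
              f"delivered accuracy about {delivered_accuracy:.1%}")
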
So what was my answer to the Stanford student? Be very careful about what 99.9% accuracy really means. If a vendor has a real document-level accuracy of 99.9%, they truly have exceptionally high quality. By the way, this vendor may still have a quality problem: they may simply be spending too much to achieve that level of quality!