Friday, April 22, 2016

Measuring process, practices, behaviour and culture

Some aspects of supplier performance cannot be ascertained simply by asking the supplier, as they may not be aware of them. Gordon (2008) states that insight is needed not only into supplier performance, using quantifiable performance metrics, but also into the means by which that performance is achieved. This includes the supplier's processes, practices, behaviour and culture. If we accept that by forging the right relationship with certain important suppliers we can add value, then it follows that we must also measure processes, practices, behaviour and culture: theirs and those that are joint between us.
There are measures we can develop for aspects of process efficiency and effectiveness, but practices, behaviour and culture are much more subjective. In a close personal relationship, norms around practices, behaviour and even culture develop. There may be no specific discussion about this, but in a healthy relationship practices get agreed based on what works for both parties, behaviour gets shaped as the parties establish what is mutually acceptable and where boundaries lie, and if all these things happen the relationship takes on its own positive culture over time. Split up and meet someone new and the process begins again. The same happens with a supplier relationship, except that companies rarely seek to agree what practices and behaviours are appropriate or expected; yet if we do this we create the basis to measure them, or at least to check things feel right. In Chapter 11 we will explore the Relationship Charter, a means to define expected practices and behaviours and thus a basis from which to then measure the degree to which the parties live up to them.

Collecting and analysing data

KPIs typically demand different data types, from different sources. Collecting data to create KPIs could be an automatic process running in real time, perhaps using a corporate system or well-designed app, or it may require a regular, or even ad hoc, activity to produce the latest set of KPIs. We could do this ourselves, have the supplier do it, or use a combination of both.

Securing data to create KPIs is not easy. In fact, the Aberdeen Group (Limberakis, 2012) suggests that, even among the companies it identifies as 'best in class', only 40 per cent have the ability to access a 'single source of truth' for all supplier information.
There are two forms of data within any measurement system, quantitative and qualitative:
• Quantitative - data, research or measures based upon the collection of numbers or hard data, eg the man's height is 1.88 m, I travelled 12.5 miles.
• Qualitative - based upon opinion, perception, observation or how something feels, eg the man is tall, his eyes are a bluey-green colour, the road feels wet and slippery. For qualitative measures to be useful, credible or of any scientific value, they need to be quantified and converted into quantitative measures to remove subjectivity.
Furthermore, there are two types of data source available:
• Primary data - data or measurements that are directly collected by the individual or company and might include actual measurements of an aspect of a product, service or delivery compliance (quantitative) or asking a stakeholder for feedback about an engagement with a supplier (qualitative).
• Secondary data - data or measurements that are collected or made available by a third party. Again these could be quantitative, eg published raw material price indices, exchange rates or supplier-provided test results; or qualitative, eg 'I heard that...'.
It is easier to find quantitative things to measure for suppliers who provide goods; for service suppliers measurement can become more subjective, because more relies upon what individuals do and what is experienced rather than what can be seen, held, touched or counted. Yet it is through this type of approach that we can measure soft factors such as relationships with suppliers, levels of service, end-customer enjoyment of an experience and so on. The means by which subjective opinion is quantified and converted to give us hard data is therefore crucial. For example, if you complete a customer satisfaction survey, whilst you might be invited to describe how you feel about something, you will probably also be asked to rate your satisfaction, which is of course subjective and personal to you, using a scale with precise definitions that convert your opinion into a comparable rating.

If an airline employed contract cleaners to prepare aircraft cabins before a flight, it would be possible to measure turnaround time or check that all the safety cards are present and in the right place. However, it is the subsequent customer experience of 'cleanliness' that is the true test, and customers are likely to complain only if something is really bad, like finding someone's used ear plug or chewing gum in the seat pocket. Merely finding an old napkin or noting how dirty the footwell is would most likely just cause the passenger to think poorly of the airline, and perhaps even influence their choice of whom to fly with next time. Qualitative data, and how we convert it, is therefore an important consideration in designing supplier measurement systems.
An example stakeholder survey tool
A survey of this kind asks stakeholders to rate specific aspects of supplier performance as well as how well the relationship is working overall. Indeed, the design of data collection instruments such as questionnaires or online surveys is a subject all of its own, and so further reading is recommended here.
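To make this concrete, here is a minimal sketch, in Python, of how qualitative stakeholder ratings might be converted into a comparable quantitative score. The questions, the five-point scale definitions and the weights are illustrative assumptions only, not taken from any particular survey tool.

```python
# Minimal sketch: converting qualitative stakeholder ratings into a
# quantitative supplier KPI. The questions, five-point scale and weights
# are illustrative assumptions only.

# Precise definitions for each point on the scale remove some subjectivity:
SCALE = {
    "very poor": 1,      # consistently fails to meet expectations
    "poor": 2,           # frequently falls short of expectations
    "acceptable": 3,     # generally meets expectations
    "good": 4,           # regularly exceeds expectations
    "excellent": 5,      # consistently exceeds expectations
}

# Hypothetical survey questions, weighted by importance.
QUESTIONS = {
    "responsiveness": 0.4,
    "quality_of_service": 0.4,
    "ease_of_doing_business": 0.2,
}


def stakeholder_score(responses):
    """Convert one stakeholder's qualitative ratings into a 0-100 score."""
    weighted = sum(
        SCALE[responses[question]] * weight
        for question, weight in QUESTIONS.items()
    )
    return round(weighted / 5 * 100, 1)  # normalise to a percentage


# Example: three stakeholders rate the same supplier.
surveys = [
    {"responsiveness": "good", "quality_of_service": "acceptable",
     "ease_of_doing_business": "excellent"},
    {"responsiveness": "excellent", "quality_of_service": "good",
     "ease_of_doing_business": "good"},
    {"responsiveness": "acceptable", "quality_of_service": "acceptable",
     "ease_of_doing_business": "poor"},
]

scores = [stakeholder_score(s) for s in surveys]
kpi = sum(scores) / len(scores)          # the aggregated, comparable KPI
print(f"Individual scores: {scores}")
print(f"Stakeholder satisfaction KPI: {kpi:.1f}/100")
```

The particular scale and weights matter less than the fact that the conversion rules are defined precisely and applied consistently, so that opinions from different stakeholders, and from different periods, become comparable.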
The SPM system for a specific supplier or group of suppliers needs to be as efficient as possible, capable of producing the ideal suite of KPIs when needed with minimal resources and effort. Ways to achieve this include the following:
• utilize corporate systems - ERP, purchasing or eSourcing systems may include performance measurement capability;
• maximize available secondary data (assuming confidence in its validity);
• have the supplier measure and supply KPI data - whilst this has a cost to them, it may be more efficient for them to do this;
• use specialist providers - companies who can analyse available corporate information to extract spend or performance data; and
• find an app! There are smartphone or tablet apps for most things today. Someone, somewhere, may well have created a simple program that can help you.
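As a simple illustration of how such sources might be pulled together, the sketch below compiles a small KPI suite from a hypothetical corporate-system extract (primary data), supplier-reported figures and a published price index (secondary data). All field names, figures and calculations are assumptions for illustration, not a definitive design.

```python
# Minimal sketch: compiling a small KPI suite from mixed data sources.
# All field names, figures and calculations are illustrative assumptions.

# Primary data, eg extracted from an ERP or purchasing system.
erp_extract = {
    "orders_placed": 120,
    "orders_delivered_on_time": 111,
    "invoices_received": 118,
    "invoices_correct": 112,
}

# Supplier-provided KPI data (they measure, we verify periodically).
supplier_reported = {
    "defects_per_thousand": 2.4,
}

# Secondary data, eg a published raw material price index (base = 100).
price_index = {"previous_quarter": 104.2, "current_quarter": 107.8}


def build_kpis():
    """Compile the KPI suite for the latest period."""
    return {
        "on_time_delivery_%": round(
            erp_extract["orders_delivered_on_time"]
            / erp_extract["orders_placed"] * 100, 1),
        "invoice_accuracy_%": round(
            erp_extract["invoices_correct"]
            / erp_extract["invoices_received"] * 100, 1),
        "defects_per_thousand": supplier_reported["defects_per_thousand"],
        "raw_material_inflation_%": round(
            (price_index["current_quarter"] / price_index["previous_quarter"]
             - 1) * 100, 1),
    }


for name, value in build_kpis().items():
    print(f"{name:28} {value}")
```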

Measurement system responsiveness

Measurement systems have different degrees of responsiveness, both in terms of how quickly feedback about the measure can be obtained and the time taken to be able to effect improvements. In order to prevent, or achieve timely correction of, a supplier problem, we must understand our system's responsiveness: how quickly we can effect a correction. Imagine driving a car where the speed measurement system took 10 seconds to display the current speed, or worse, you had to push the brake pedal five seconds before it would operate. Whilst not ideal, if you were aware of this, adapting your driving style and planning braking very early would be possible.
In the UK radio industry, advertisers make decisions on whether or not to buy commercial airtime based upon listener ratings and the demographic profiles of the listeners. Until very recently, the only way to measure this was for a small army of researchers to recruit a representative sample of listeners from across the nation, who would then be paid to maintain a small log book of what stations they listened to and when, over a set period of time. The results (known as RAJAR - Radio Joint Audience Research) would be compiled and published quarterly, and radio station executives and presenters would pore over the results to see how they had performed three months earlier. Historically, the only way to measure what happened in individual households was to go and ask people, which meant it took time to obtain meaningful feedback about the individual measures.
Even where we can obtain a measure instantly, it can take time to respond to it - like steering a large ship: by the time it is apparent that the vessel is moving off course, corrections can be made, but it is some time before the ship responds. This demands skilful anticipation, and is why most large ships are driven by computer control today. If customer satisfaction results suggest that service from a supplier has dropped, this will need to be investigated and understood, a programme of retraining instigated, and perhaps action taken to tackle morale, behaviour and organizational culture. This all takes time.
Furthermore, if it takes time to access and compile measures into a form that enables something to be done (just like the radio audience ratings), then it can take even more time to do something in response to a KPI.
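To make responsiveness concrete, the earliest point at which a correction can take effect is roughly the sum of the lags at each stage of the measurement and improvement cycle. The stages and the figures in the sketch below are invented purely for illustration.

```python
# Minimal sketch: estimating end-to-end responsiveness of an SPM system.
# The stages and the number of days for each are invented for illustration.

lags_in_days = {
    "collect and compile the measure": 30,   # eg survey processed monthly
    "review and investigate the result": 10,
    "agree corrective action with supplier": 14,
    "implement and see the effect": 45,      # eg retraining, process change
}

total_lag = sum(lags_in_days.values())

for stage, days in lags_in_days.items():
    print(f"{stage:40} {days:3} days")
print(f"{'Earliest effective correction':40} {total_lag:3} days")
```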
The responsiveness of a measurement system impacts organizations and our relationships with suppliers. Despite the benefit of new IT solutions, measurement and improvement systems can still be complex and slow, yet they may also be the only viable way to guide and improve business outcomes. However, if we understand responsiveness we can at least manage supplier improvements and interventions accordingly.

SPM system maturity

So there are many different considerations in designing an SPM system for a supplier and developing KPIs that will be useful to us. This of course depends upon our SPM aims for the relationship. For some relationships a set of lagging measures may suffice; for others we may need outcome-driven performance management, using hard and soft measures to support a collaborative relationship. There are therefore different degrees of SPM, and we can select different levels of SPM maturity according to the outcomes needed.
