In order to agree on the most important thing
to work on first, we must have a company-wide method of
evaluating the impact of a problem. The Cost of Un-Quality
prioritizes improvement opportunities from segments of our company
that are not traditionally measured in the same units.
Many improvement activities will require
resources from many functional areas. The
participants need confidence that their own problems, which are
important to them, will be addressed in a fair way. It is
important to spend time building consensus on the methodology
used to choose priorities.
Two kinds of problems will emerge:
• An Event occurs occasionally and requires
only containment actions. That is, we must immediately minimize
its effect on operations and customers. Events are usually
resolved before the measurements can point them out, but if we
don't measure them, we might miss a pattern.
• A Pattern is evidence of problems that
recur and require Problem-Solving. The object of the Cost of
Un-Quality is to rank these recurring problems in order of their
business impact.
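The ranking the Cost of Un-Quality produces is, at bottom, a sort of recurring problems by estimated business impact. A minimal sketch follows; every problem name and dollar figure in it is a hypothetical illustration, not taken from the text:

```python
# Minimal sketch of a Cost of Un-Quality ranking: recurring problems
# (patterns) sorted by estimated annual business impact.
# All names and dollar figures are hypothetical.
patterns = [
    {"problem": "late supplier deliveries",  "annual_cost": 120_000},
    {"problem": "rework on assembly line 2", "annual_cost": 450_000},
    {"problem": "warranty returns, model X", "annual_cost": 80_000},
]

# Rank by business impact, highest cost first; resources go to the top.
ranked = sorted(patterns, key=lambda p: p["annual_cost"], reverse=True)

for rank, p in enumerate(ranked, start=1):
    print(f"{rank}. {p['problem']}: ${p['annual_cost']:,}/yr")
```

Expressing every problem in the same unit (cost) is what lets problems from different functional areas compete fairly for the same resources.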
To investigate a pattern, we assign resources (allocate
Manpower, Money and Machines) based on the rankings, and look for
the root-cause of the problem. Once the root-cause is identified,
we move to problem-solving.
Monitor Progress
As we initiate solutions, we need to identify
what changes we should expect in existing measurements. At the
business level, some will clearly not change at all, but any
improvement activity can cause some local measurements to improve,
while others we expect to see decline.
As an example, running smaller batches and
increasing the number of set-ups is a typical strategy for
inventory reduction. This clearly violates EOQ logic and quickly
produces a negative effect on traditional efficiency measures.
However, almost everyone who has done this has more slowly
achieved predictable and massive reductions in inventory and lead
times, and gained improvements in quality and flexibility, as
well.
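The EOQ tension can be made concrete with the classic formula EOQ = sqrt(2DS/H), where D is annual demand, S is the cost per set-up, and H is annual holding cost per unit. Cutting set-up time (and therefore S) shrinks the economic batch size, which is why smaller batches stop "violating" the logic once set-ups are cheap. The figures below are hypothetical illustrations:

```python
import math

def eoq(annual_demand: float, setup_cost: float, holding_cost: float) -> float:
    """Classic Economic Order Quantity: sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost)

D = 12_000   # units per year (hypothetical)
H = 4.00     # holding cost per unit per year (hypothetical)

before = eoq(D, setup_cost=100.00, holding_cost=H)  # long, expensive set-ups
after = eoq(D, setup_cost=4.00, holding_cost=H)     # after SMED-style reduction

print(f"economic batch before: {before:.0f} units")  # ~775
print(f"economic batch after:  {after:.0f} units")   # ~155
```

Smaller economic batches mean lower average inventory and shorter queues, which is the slower-arriving inventory and lead-time payoff described above.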
In order to validate changes to our processes
and make them permanent, we must understand the relationships
between the activities and their results. We need to:
• Develop breakthrough measurements—What does success
look like?
• Set milestones and timelines—What does progress look
like?
• Match frequency and level of detail with measuring
capability—Who can and should report? How often? To whom?
• Publish or post progress and trends—How are we
doing?
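The four steps above amount to keeping a small record per measurement: a goal, expected milestones, and posted readings. A minimal sketch, in which every name, date, and value is a hypothetical illustration:

```python
from dataclasses import dataclass, field

@dataclass
class BreakthroughMeasure:
    """One breakthrough measurement with milestones and posted progress."""
    name: str                       # what success looks like
    target: float                   # the breakthrough goal
    milestones: dict[str, float]    # period -> expected value (what progress looks like)
    readings: dict[str, float] = field(default_factory=dict)  # published results

    def post(self, period: str, value: float) -> None:
        """Publish a reading so everyone can see how we are doing."""
        self.readings[period] = value

    def on_track(self, period: str) -> bool:
        """Is the posted reading at or below the milestone for that period?"""
        return self.readings.get(period, float("inf")) <= self.milestones[period]

# Hypothetical example: driving average set-up time (minutes) down.
setup_time = BreakthroughMeasure(
    name="average set-up time (min)",
    target=9.0,
    milestones={"2024-Q1": 60.0, "2024-Q2": 30.0},
)
setup_time.post("2024-Q1", 48.0)
print(setup_time.on_track("2024-Q1"))  # True: 48 is inside the 60-minute milestone
```

Who posts, how often, and to whom remain organizational choices; the record only makes them checkable.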
Single-Minute Exchange of Dies, or SMED, was
both a local set-up goal and a local measure of success. Implied
were progress measures of time reductions, the quantity of
reductions achieved, and how much was left to do. The global
measures affected were lead times, inventory levels, delivery
performance, total cost, and sales.
Some of these measures did not go in the
intended direction at the beginning. Though most SMED activities
are relatively low cost (and should be done immediately), some are
quite expensive and require more analysis of costs and benefits,
both for the short term and into the future. Patience,
demonstrated by milestones and timelines, allows a fair chance for
changes to achieve positive results.
Measuring set-up time seems simple (start to finish),
but to focus improvements, we first separate internal activities
(those requiring the machine to be stopped) from external
activities (preparation that can be done in parallel while the
machine runs). Any
set-up reduction team will break these down further and find ways
to measure the details. Warning: People not on the team will
resist outside measurement, so bring in everyone who affects a
set-up (tool room, blueprints, material movement, planning, etc.).
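The internal/external split can be sketched as a tally over one set-up's recorded steps; only the internal total stops production, so it is the first reduction target. Step names and times below are hypothetical illustrations:

```python
# Tally one set-up's steps by SMED category. "internal" steps need the
# machine stopped; "external" steps can be done while it still runs.
# All step names and minute values are hypothetical.
steps = [
    ("fetch next die from tool room", "external", 12.0),
    ("review blueprint",              "external", 5.0),
    ("unbolt and remove old die",     "internal", 9.0),
    ("mount and align new die",       "internal", 14.0),
    ("first-piece inspection",        "internal", 6.0),
]

totals = {"internal": 0.0, "external": 0.0}
for _name, category, minutes in steps:
    totals[category] += minutes

print(f"machine down (internal):     {totals['internal']:.0f} min")  # 29 min
print(f"done in parallel (external): {totals['external']:.0f} min")  # 17 min
```

Teams refine this by moving steps from the internal column to the external one, then shortening what remains internal.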
Seeing progress toward a goal improves morale,
garners support, and increases total participation in all
improvement activities. Widespread quality assurance attitudes
instill internal and external
confidence that quality is constantly
improving. Any quality breakdown is examined more closely because
it is no longer acceptable.
Caution: Some measurements may cause
embarrassment. If area results, not names, are posted, area
participants will generate significant peer pressure on poor
performers. They already knew who wasn't doing the job, and
by-area posting gives them incentives for group correction. Using
specific names causes a defensive posture in most people, even
if they themselves are good performers.