Data Acuity President Jim Desrosiers explains manufacturing efficiency in detail, covering numerous topics such as Overall Equipment Effectiveness, the Loss Deployment approach, and the “Data to Information to Knowledge to Wisdom” model, and uses the company’s Catania Oils project as an example.

Video Transcript

[0:00] Mark Hepburn, ICONICS Vice President of Global Sales

Hey, thanks a lot Jim. Welcome.

[0:08] Jim Desrosiers President and Founder of Data Acuity

Thank you. I appreciate being here. I'm grateful to be out in the real world. I'm actually very grateful to see a lot of my friends that I haven't seen in a long, long time. So, it's really nice to catch up with a lot of you. And I've seen some new friends that I actually haven't seen in person over the last year and a half as well. So, I'm very grateful to be here in person. Ted and Mark had asked me to come up and talk just a little bit about the flow of a manufacturing efficiency project: how things get started, where they typically start, what discipline of automation we typically start with, and then where they migrate to over time. When that conversation starts, typically, when we sit down with a client to talk about a manufacturing efficiency project, it almost always starts with this elusive, magical gain. We're looking to gain some efficiency, more product. We're looking to decrease our scrap. We're looking to gain efficiency out of our staff and our maintenance staff. But it's quite often very elusive. We're not quite sure where these magical gains are going to come from. We know that we want to do some kind of a data correlation that's going to lead us to some answers for what's happening in our plant today. We also know that we want to work towards some of the buzzwords that we've heard. We want to work towards predictive awareness: not just reacting to what's going on today, but in fact having some predictive analysis of what may happen tomorrow, and acting on it before it happens. Most importantly, that conversation always leads to agile flexibility. We need the ability to start small and grow, and not lose the work that we did as we're growing. Here are just a couple of examples of what that leads to. And we'll talk about these examples in a little bit more detail in a minute. But there's a whole bunch of automation disciplines we'll touch. We'll touch on OEE, Overall Equipment Effectiveness. 
We're going to touch on alarm management, product genealogy, product traceability, and product quality traceability. We're going to hit energy analysis, labor analysis. We're going to hit all of this stuff over quite a period of time, one step at a time. Today, we're really going to focus on the very beginning, which is a metric called OEE, Overall Equipment Effectiveness. That's where we almost always start; it gives us a scorecard, a window into where we are today. We're also going to look at where we almost always end, and that is the predictive awareness stuff. So, I'll give a handful of small examples at the end of how to achieve predictive performance, predictive quality, predictive material usage, and predictive availability as well. So, we'll give some examples of that towards the end. 


Again, almost every manufacturing efficiency project starts with Overall Equipment Effectiveness; it's not a new concept, so a lot of you are probably already familiar with it. But OEE is the measurement of: How much operating time did we actually get out of a machine or an asset compared to how long it was actually scheduled to be operating? It does not take into consideration when it was scheduled to be down. Performance is the measurement of how fast, or how much product, I am making compared to how fast it should be. Quality is the measurement of good parts versus bad parts for the product that I actually made. And then quite often, we'll see a fourth pillar in there, more often coming up in the future. That's a measurement of our material usage, sometimes referred to as mass balance. So: the availability of the machine, the performance of the machine, the quality of the machine, and also the efficient use of material through that machine. If we dig deeper, and we won't spend a lot of time on that today, we can compare OEE as a performance measurement to the true cost as well. So, we can weigh OEE versus the cost of capital equipment, the cost of material, the cost of labor, and the cost of the consumables, like electricity, that were used to make the product. So, OEE almost always starts off with a scorecard that's going to give us the front view of: How are we doing today? And that's going to lead us to where we want to go, where we want to assign resources. So, what we're going to do now, if we can, is show a quick video of a client of ours called Catania Oils.
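The OEE calculation described above can be sketched in a few lines. This is a minimal illustration with invented numbers and variable names, not code from the ICONICS or Data Acuity toolsets:

```python
# Minimal sketch of the OEE calculation: availability x performance x quality.
# All numbers below are invented for illustration.

def oee(run_time, scheduled_time, actual_count, ideal_count, good_count):
    """Return (availability, performance, quality, overall OEE)."""
    availability = run_time / scheduled_time   # operating time vs scheduled time
    performance = actual_count / ideal_count   # actual output vs ideal output
    quality = good_count / actual_count        # good parts vs parts actually made
    return availability, performance, quality, availability * performance * quality

a, p, q, score = oee(run_time=420, scheduled_time=480,
                     actual_count=9000, ideal_count=10500,
                     good_count=8820)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {score:.1%}")
```

Note that scheduled downtime (a Sunday, a planned break) is excluded from `scheduled_time`, which is exactly the gap Loss Deployment later fills in.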

[4:17] Video presenter

In this module of manufacturing data analytics, we will focus on Overall Equipment Effectiveness, or OEE, and introduce a metric called Loss Deployment. Data Acuity President Jim Desrosiers explains.

[4:28] Jim Desrosiers, President and Founder of Data Acuity

OEE as a metric provides us with a clearer understanding of the difference between the quantity of sellable product an asset could make versus the actual product that asset made. The key insight that we're looking to gain from this metric is a full understanding of which resources we should assign to which priority problems. OEE is a top level measurement of our efficiency, and it breaks down into three separate buckets.


In this example, Catania Oils is looking to identify potential efficiency gains, leveraging the tool sets from ICONICS and Data Acuity. Catania Oils' Dan Brackett explains.

[5:05] Dan Brackett, Catania Oils Vice President of Operations

The first thing we were looking to do was to determine how many cases per minute we were producing so that we could determine how fast or the efficiency of our lines and that led us to ICONICS and to Data Acuity to better zero in on that information.

[5:22] Video presenter

OEE looks at the potential of a machine versus what it is really producing. The key is to focus the right resources on the right problems. To do that, OEE breaks your performance into three buckets. These three buckets are availability, performance, and quality.

[5:39] Jim Desrosiers

Availability gives us a measurement of the amount of time an asset was operating compared to the amount of time that asset was scheduled to be operating. This does not account for the time that the asset was scheduled to not be operating. Performance gives us a measurement of the amount of product that the asset actually produced during the operating time, compared to the ideal amount of product that asset could have produced. Quality gives us a measurement of the good product versus the bad product, but only for the product actually produced. Drilling into these three buckets allows us to quickly analyze the true nature of the loss of efficiency. We're going to start by taking a look at a Pareto chart, which will list our loss of efficiency as either a quantity of time or a quantity of events.
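The Pareto chart mentioned here ranks efficiency losses so the largest contributors surface first. A minimal sketch of that ranking follows; the downtime reason codes and minutes are invented for illustration:

```python
# Sketch of a Pareto-style ranking of efficiency losses: total the downtime
# per reason code, then sort descending so the "vital few" surface first.
# The reason codes and minutes are invented for illustration.

from collections import Counter

downtime_events = [  # (reason, minutes lost) per stoppage
    ("jam", 12), ("changeover", 30), ("jam", 8),
    ("no material", 22), ("sensor fault", 5), ("jam", 15),
]

loss_minutes = Counter()
for reason, minutes in downtime_events:
    loss_minutes[reason] += minutes

# most_common() yields reasons ordered from largest total loss to smallest.
for reason, minutes in loss_minutes.most_common():
    print(f"{reason:<14} {minutes:>4} min")
```

The same tally could just as easily count events instead of minutes, matching the "quantity of time or quantity of events" choice described above.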

[6:25] Video presenter

The next level is to be able to correlate the scores to another set of data.

[6:29] Dan Brackett

We collect data regularly through ICONICS. And so we get data from every line from every product and from every shift and that data is accumulated in the database, and a report comes out every day for us to be able to determine how each one of those lines is running.

[6:46] Video presenter

Another metric, often overlooked but critical to making gains in efficiency, is Loss Deployment, which unlike OEE considers factors beyond simply the machine's operation. It simply asks: if we calculated that we can produce 500 bottles of oil per day, but we only produced 300, where did those 200 bottles go?

[7:04] Jim Desrosiers

Loss Deployment as a metric includes OEE, but it actually gives us much greater insight into the true loss of efficiency. For Loss Deployment, we're going to start by figuring out the total amount of product we could make in an ideal situation: 24/7, 365, no loss of efficiency. From there, we're going to break down into five buckets. The first bucket is going to be how much product we actually made in reality, sellable product. The second bucket is going to be the loss of efficiency that can be attributed as the fault of the machine itself, or the asset itself. The third bucket is going to be the loss of efficiency that can be attributed to the process around the asset: for example, waiting for material, waiting for an operator, and waiting for instructions. The next bucket is going to be losses of efficiency that are attributed to required actions. For example, preventive maintenance is required, but it does cost us efficiency. A clean-out or a changeover is required, but it does cost us efficiency. And the final bucket is going to be those losses of efficiency that were actually intentional. For example, we've scheduled not to run the line on a Sunday, or we've scheduled not to run the line during a break time. As we indicated earlier, the metric Loss Deployment fully includes OEE, but the additional buckets give us even greater insight into the potential loss of efficiency. With 30 years of experience deploying automation systems, we want to focus not just on a single machine fault or quality defect, but on the faults of the entire process. This insight allows us to capitalize on even greater efficiency gains.
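The five buckets described here can be modeled as a simple classification step. This is a hedged sketch: the bucket names and event codes are assumptions for illustration, and a real deployment would map plant events to buckets from its own code tables.

```python
# Sketch of the five Loss Deployment buckets. Bucket names and event codes
# are illustrative assumptions, not from an actual plant system.

BUCKETS = {
    "sellable": {"good_production"},
    "machine": {"breakdown", "jam", "reject"},
    "process": {"waiting_material", "waiting_operator", "waiting_instructions"},
    "required": {"preventive_maintenance", "cleanout", "changeover"},
    "intentional": {"no_shift_scheduled", "planned_break"},
}

def classify(event_code: str) -> str:
    """Return the Loss Deployment bucket an event code falls into."""
    for bucket, codes in BUCKETS.items():
        if event_code in codes:
            return bucket
    return "unclassified"

print(classify("waiting_material"))  # a "waiting-for" is a process loss
```

Because OEE covers only the machine bucket, summing all five buckets recovers the full ideal 24/7/365 capacity that Loss Deployment starts from.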


So, as we talked about, OEE is something that is fairly common today in most plants. The challenge with OEE that we see specifically on the plant floor is, one: Who decided whether the machine should be operating or not? In other words: Was it scheduled running, or not scheduled running, and who is the person that made that decision? That leads to a lot of finger pointing within a lot of plants. Second, on performance: Who decided what the actual goal, the ideal speed of an asset, is? There are typically three separate ideal speeds: the design speed, the spec speed, and the goal speed. The design speed comes from the OEM that built the equipment: it was designed to run this fast. The spec speed comes from the internal engineering department, which says, actually, when we're running the blue product, we run it this fast. And then there's the goal speed, which comes from the production people, who say, "Well, in reality, our goal is to run this fast." So, the challenge with OEE is who made the decision about what the actual speed was. Same thing for quality; quite often you have destructive samples. We're going to take an amount of product off the line and use it for quality testing. Who decided whether that was good product or bad product? That affects the score that the machine gets for OEE. The same thing happens if we look at mass balance, or material usage. If you walk through, for example, a food processing plant, you'll see all kinds of raw material on the floor. The question is, how did that get accounted for? So, what we want to do is figure out some way to take the finger pointing, the questions, out of the process by which OEE happens. 


And we introduced a concept called Loss Deployment. So, we're going to start Loss Deployment with an understanding of the actual amount of product we really could make in an ideal world. From there, we're going to move into trying to identify: if we didn't make that, where did that loss go? Where was it deployed? The losses are going to fall into typically four buckets, but you can define any bucket you want. Typically, they're going to be because of the machine; because of the process around the machine, which are quite often referred to as "waiting-fors"; because of required tasks, things like preventative maintenance and clean-outs; or because of planned losses: we don't run the machine on a Sunday, we had a team meeting, we don't have any customer orders, things like that. So, we're looking to understand the impact of production cost versus production value. If we dig deep enough, Loss Deployment is going to let us understand not only our efficiency loss, but also whether we had an opportunity to generate even more revenue by running a more valuable product that had a lower production cost to it. The goal here, ultimately, is to get the right resources focused on the right problems. So, let's not spend time on the issues that are less important, and let's not give a list of tasks to a person that can't control them. So, here's the view of Loss Deployment we had from the Catania Oils example. Within that, the bottom two buckets, the green and the reddish, represent OEE; that's what we all look at today. OEE: that's our availability, performance, and quality. Within that red bucket, we're going to break out how much opportunity to make a sellable product was lost because the machine was down, how much was lost because the machine was running slow, and how much was lost because the product was bad, all because of the machine itself. 
And then we're going to do a Pareto analysis within each of those to get a greater understanding of the machine's faults. The important thing here is that this is information for the engineering staff and the maintenance staff. This is very good information, very valuable information. And every plant has that team of people already in place to take care of those machines. If we back off of traditional OEE and think about the maximum amount of revenue we could have generated from an asset in total, in a perfect world: we ran 24/7, we never ran out of materials, we always had staff on site, we never made a bad product. In utopia, what could we have generated? It turns out that there's a yellow bucket: an amount of efficiency lost because of the process. This is sometimes referred to as blocked or starved conditions. It wasn't this machine's fault. It was the machine feeding into it or the machine that it's feeding into. So, I'm waiting for the machine feeding me; I'm waiting for the machine I'm feeding to; blocked or starved, upstream or downstream. I could be waiting for materials. I could be waiting for instructions. I could be waiting for the maintenance department. I could be waiting for the quality department to sign off on something. This is important information to the production supervisor. So, it's a different set of information going to a different person to address a different kind of problem. 


The next bucket up, the blue one, is required losses. So, these are things like: I have to do a changeover. It costs me efficiency, but I have no choice; we have to change product. I have to do a clean-out or a line clearance. I do preventative maintenance so that the machine will not break down unexpectedly. I might do destructive testing, quality testing, that costs me product, costs me efficiency. This is information that's very important to production managers and maintenance managers. So, it's a different set of people that want to look at: Is this the appropriate amount of loss that's required? And then finally, there's the planned loss. And that is when we've made a decision to not run production. So maybe we don't run on Sundays; maybe we run two shifts instead of three shifts. Maybe there's a team meeting. But part of this also is a decision to run a less valuable product, or a more expensive product to manufacture. So, if we switch to a Loss Deployment model on top of OEE, we can actually identify the lost revenue because we made a decision to run a product that is more expensive to manufacture. This is information that's fed to the scheduling department and to plant management. And ultimately, we can turn it from quantity of products into dollars. What was the actual impact because of the machine, because of the process, because of required actions, and because of intentional decisions? So, I'm going to switch a little bit here to follow up in just a second. We're going to queue up the second Catania video. We started with OEE at Catania Oils, but we ended up with really unified visualization through their entire plant after we started with OEE. So, we're going to run the second video, and you'll see in there that there are terminals where they're pulling manufacturing efficiency information: at the rail yard where raw material comes in, at the tank farm, at the quality lab, in the blending area where they mix oils, on the bottling lines, the fork trucks that move things around. 
They even go all the way up to the purchasing department that makes decisions on purchasing raw materials based on production levels and storage capacity. So, we can go ahead and run that second video.
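Turning Loss Deployment quantities into dollars, as described above, is a straightforward multiplication by unit margin. This sketch uses an invented margin and invented lost quantities purely for illustration:

```python
# Sketch of converting Loss Deployment quantities into dollars.
# The unit margin and lost quantities are invented for illustration only.

unit_margin = 0.42  # assumed margin per bottle, in dollars

lost_units = {  # bottles of opportunity lost, per bucket
    "machine": 1200,
    "process": 800,
    "required": 500,
    "intentional": 2400,
}

lost_dollars = {bucket: units * unit_margin for bucket, units in lost_units.items()}

# Report the buckets from largest dollar impact to smallest.
for bucket, dollars in sorted(lost_dollars.items(), key=lambda kv: -kv[1]):
    print(f"{bucket:<12} ${dollars:,.2f}")
```

Ranking the buckets in dollars, rather than bottles, is what routes each loss to the people who can act on it: machine losses to maintenance, process losses to supervisors, planned losses to scheduling and plant management.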

[15:54] Video presenter

In this module of manufacturing data analytics, we will focus on unified visualization. Catania Oils' Dan Brackett explains.

[16:01] Dan Brackett

Once we started using ICONICS and collecting data and maximizing OEE, this really opened up the opportunities for us to use the system for other things. We're able to collect information out at our rail yard, from our trucks in the yard, from our fork trucks, and our production lines. Not only are we looking at efficiency, but we're also looking at data transfer between departments concerning incoming items and output.

[16:32] Jim Desrosiers 

Like most projects, our initial discussions with Catania Oils centered around a single specific challenge. In this case, more accurate and real time knowledge of production counts. We typically recommend putting the initial challenge aside to sketch out a long term, three to five year vision of data consumption, ultimately leading to the knowledge and wisdom we will gain from information coming out of the automation system. After considering that longer term visualization of data and information, it is important to then turn back to the single well defined and manageable project that we started with and then get started on that one initial piece. If we take this approach, we can build a modular, flexible, adaptable data architecture, which allows us to create one step at a time, a unified visualization. This is what Catania Oils now has throughout their entire organization. It has proven to be a manageable, maintainable, and cost effective approach to automation data.


So, we'll wrap this up by moving to the end, which is trying to gain predictive awareness out of this data: not just reacting to the current situation, but in fact looking for what may happen next for us. I was going to say I'm going to introduce something, but that's not a true statement; I'd encourage you to look up something called the DIKW data model, which is data, information, knowledge, and wisdom. You'll find a lot of information on the web. We try to follow this through a project because it allows us to manage and organize the technical efforts of working with data. So first, we want to figure out a toolset that's going to give us a cost-effective way to collect and organize the data coming in: Hyper Historian, Alarm Server. These are tools that can be used for this raw data collection. Second, we want to correlate that data to turn it into information: assign it to an asset, assign it to a batch, to an operator, assign it to a shift. And then, where most companies are today: we're going to take that information now, and we're going to plot it and put it up on a graphical report that's going to turn it into knowledge for us. So, we're going to look at a chart and say it's not just information anymore; we can actually act upon the knowledge that's now in front of us. And finally, we want to reach towards wisdom, which is the predictive ability to have the system guide us and give us recommendations. Knowledge is where most people are today: we have graphical ways of looking at information and acting upon it, and we're reaching for how it tells us what to do proactively. In order to get from knowledge to wisdom, there has to be some kind of insight. And we heard earlier some really fascinating things about AI as one tool to insert some insight here to get us from knowledge to wisdom. I try to remind people, before we jump into really exciting new technologies like AI, let's take a step back and ask the experienced employees within the plant.
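The Data-to-Information step described here, correlating a raw sample to an asset and a shift, can be sketched as follows. The field names and the shift rule are illustrative assumptions, not ICONICS APIs:

```python
# Sketch of the Data -> Information step of the DIKW model: a raw sample
# becomes information once it is correlated to an asset and a shift.
# Field names and the day/night shift rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Sample:
    """Raw data: a tag, a value, and the hour it was collected."""
    tag: str
    value: float
    hour: int

@dataclass
class InfoRecord:
    """Information: the same value with plant context attached."""
    tag: str
    value: float
    asset: str
    shift: str

def to_information(sample: Sample, asset_map: dict) -> InfoRecord:
    # Assumed shift rule: 06:00-18:00 is the day shift, the rest is night.
    shift = "day" if 6 <= sample.hour < 18 else "night"
    return InfoRecord(sample.tag, sample.value, asset_map[sample.tag], shift)

rec = to_information(Sample("line1.speed", 118.0, 14),
                     {"line1.speed": "Bottling Line 1"})
print(rec)
```

Charting records like these is the information-to-knowledge step; the wisdom step, having the system recommend actions from those charts, is what the predictive examples that follow reach towards.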


If you were to look at the knowledge we have in front of us today, what are the kinds of things you'd recommend that would help us predict the next step? So, let's start with learning from experienced people in the plant, and let's reach towards advanced tools like AI to get us from knowledge to wisdom. Finally, it's really important to make sure that we're monitoring the correlation. In other words, let's look at the return on investment, the ROI. And the best measurement of ROI is, in fact, OEE and Loss Deployment. That's going to give us an immediate snapshot of the return on the efforts we made when we put in wisdom systems. So, I'm going to show you, just running through this very quickly, an example of reaching towards predictive maintenance. This is a system in the automotive industry in which we're taking maintenance information out of their CMMS, computerized maintenance management system. Today, most CMMS systems are purely calendar based; they schedule maintenance events by the calendar. If you look at the Gantt chart in the middle, this allows us to push into the CMMS; we want to schedule maintenance based on runtime as well. So that'd be like looking at the odometer on your car, instead of the calendar, for when to get your oil changed. So, we started with the calendar and added into it the ability to schedule maintenance based on runtime. Thirdly, the top section of this shows you events, discrete events that may happen in a plant: alarms, quality events, setting changes. These might lead us to pushing maintenance actions as well. So, if I have x amount of quality events, I might proactively push a maintenance event in; so, this is predictive. So far, this is pretty easy stuff. On the bottom, you see process information that's lined up with the events and the runtime information. This allows us to look at real-time process information and push scheduled tasks into the maintenance system. 
So, we may look at the torque on a motor; we may look at vibration on a motor that may lead to a shaft alignment requirement. So, this is reaching towards predictive maintenance. Predictive performance is kind of interesting. If you look at the top right, you can see that this particular cycle should have taken six minutes. And this was broken down into approximately 40 phases and 47 sub-steps to do this one manufacturing process. If we compare the data set for one manufacturing process to another, maybe one machine to another, one part to another, one phase to another, we can actually figure out if we are getting slower or faster. So, we can predictably figure out the correlation between actions and maintenance and how fast the machine is going, by analyzing the phase differences in a manufacturing cycle. Predictive quality: what you're looking at here is an inspection of welds on the metal frame of a seat. If we take a look at which weld numbers come up as good or bad the most often, we can predict what the quality of that product will be by looking at trends coming out of a weld. So, it's going to predict the quality before the product actually goes bad, and lead us to take action on that. In that case, we were predicting based on quantity, but we also might predict quality based on process information. So, in maintenance, I said we may look at the torque of a motor. Well, in the case of the quality of something, a process like metal casting, for example, we might correlate good parts versus bad parts to things like metal temperature or cycle time, and predict quality based on trends in that information. And then finally, we're going to look at predicting material usage. This is a high-altitude view of that casting plant. If we want to schedule fork trucks to deliver raw materials and remove scrap materials, without somebody driving around the plant in a fork truck looking for opportunities, we're going to proactively point out to them that within the next seven minutes, you need to be over here to pick up a scrap bin, and you need to be over there to deliver raw materials. So, we're going to do predictive material handling, again, based on the correlation of machine process information, machine performance, and our production schedule.
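The calendar-plus-runtime maintenance trigger described above, the odometer-instead-of-calendar idea, can be sketched like this; the interval values and function names are illustrative assumptions, not part of any CMMS API:

```python
# Sketch of a calendar-plus-runtime maintenance trigger: service is due
# when either limit is reached first, like changing oil by date or by
# odometer. Interval values are illustrative assumptions.

from datetime import date

def maintenance_due(last_service: date, today: date, runtime_hours: float,
                    interval_days: int = 90, interval_hours: float = 500) -> bool:
    """Due when the calendar interval or the runtime interval has elapsed."""
    by_calendar = (today - last_service).days >= interval_days
    by_runtime = runtime_hours >= interval_hours
    return by_calendar or by_runtime

# A machine serviced 40 days ago, but run hard for 520 hours since,
# is due by runtime even though the calendar interval has not elapsed.
print(maintenance_due(date(2021, 9, 1), date(2021, 10, 11), runtime_hours=520))
```

Discrete events (alarms, quality events, setting changes) and process signals (torque, vibration) would simply add further conditions to the same trigger, which is how the system moves from calendar-based to predictive scheduling.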


I'm going to wrap up real quick here with just highlights on how we typically handle projects. We want to quite often start from the top, not the bottom. By that I mean, start from a discussion around the screens, the reports, the consumption of data. Don't start from a discussion around the PLC and the tags. It's important to start all conversations at the very top and work down towards the data, so you don't design yourself into a corner. You want to organize your efforts around subject areas and job roles, not around how things were done in the past or how things are set up today. You want to make sure that you have an understanding of the long-term possibilities; think about three to five years: what could we do with the system? But as Mark Hepburn pointed out earlier, the really important thing about that is to then very quickly back down to the very specific project we could get started with. So, we talk about the long term, but then we move to the very specific. Consider how to build data structures that are going to let us do correlation of data in the future; make sure we're flexible enough in our data structures to think in different directions. Agile project management is very important, because these things don't stop; we keep going and going to gain more efficiency. It's good to do a formalized assessment; that's great. Quite often, though, it's impossible to have the time and the money to do that. And then finally, we want to keep an eye on the return on investment; again, that's where OEE and Loss Deployment come in. It reminds us to constantly go back to: Are the efforts we are making correlated, or in line, with the return we're getting out of them? And that's all I have. Thank you.