Mr. Mark Hepburn, ICONICS Vice President of Global Sales, hosts this session on the Innovative Technologies of the ICONICS platform along with the session presenters. He explains the capabilities of the ICONICS platform and provides an overview of the new 10.97.1 release. Chris Elsbree, ICONICS Chief Software Architect, explains the new 3D HTML5 support and dynamic cloning; Luke Gonyea, ICONICS Solution Sales Engineer, explains the new Hyper Alarm Server and Hyper Alarm Logger; Alex Binder, ICONICS Senior Applications Consultant, provides a demonstration of AnalytiX-BI on a marine port; and Jim Desrosiers, President and Founder of Data Acuity, talks about Overall Equipment Effectiveness and loss deployment as metrics for assessing operational productivity and efficiency and demonstrates these concepts through the Catania Oils use case.

Video Transcript

[0:00] Mark Hepburn, ICONICS Vice President of Global Sales

Hello and welcome back to the mainstage here at the ICONICS customer event, Connect 2021. It is really exciting to have you guys here. If somebody out there could pass me the clicker and, when you have a chance, advance me to the next screen, that would be great. It's really fun being live again, let me tell you, and having an audience. Thank you, Ted. You can see a lot of these sessions are recorded, which is nice, to see the demos all pulled together in a very concise way. I think that it really adds a lot to the experience. Thanks a lot to our newsroom, to Paul Carter and Ryan Legg. And so welcome to our second keynote on Innovative Technologies to help you stay agile and resilient. Alright, I'm Mark Hepburn; I lead the global ICONICS sales team based here in Foxborough, Massachusetts, near Boston. It's really a pleasure to join you here in person, and everybody virtually, wherever you are; I understand there are thousands of people online right now. Thank you for joining. It's a great experience for us. So, I've been with ICONICS for 18 years, with broad experience leading sales and marketing. Before that I worked for GE for 15 years and had hands-on experience with industrial automation and programmable controllers years and years ago. For the next 90 minutes, I invite you to see the Innovative Software Technologies from ICONICS in action. We've got a lot of live demos and customer stories and a dynamic team to help share them with us. So, Chris Elsbree will illustrate key graphics technologies in action. Luke Gonyea will join us to highlight the new Hyper Alarm technologies with the ICONICS digital twin, AssetWorX. Alex Binder is going to join remotely from Australia. Thank you, Alex. He'll share how all this can come together with AnalytiX-BI, the ICONICS tool that brings complex real time operational elements together in a very dynamic way. And Jim Desrosiers will join us to review best practices for manufacturing and highlight a case study with Catania Oils, located right here in Massachusetts.

[2:33]

Okay, so why, why ICONICS? I think Ted answered that very well this morning, but there are a couple of things I'd like to highlight. First, because it's the most scalable, reliable, and customizable platform in the market for real time operations. Our software is open, yet secure, and it provides extensible connectivity to virtually any kind of system: OT systems, the operational technology coming from all of the control systems, the mechatronics, the things that make the things that we operate work; and IT systems, the information technologies in the Management Information Systems area, whether those be in your corporate data center, in the cloud, or coming from a web service. ICONICS is a platform that brings these together into context, so that we can connect to and virtually control any system, whether it be through IoT, on the edge, or across the enterprise.

[3:38]

So, what is the ICONICS suite? We've heard a lot about this; we've heard the SUSTIE group thank us for GENESIS64. The ICONICS suite is a combination of what we think of as four elements: GENESIS64, Hyper Historian, AnalytiX, and IoTWorX, with the commonality of one unique and singular platform service. As Ted Hill pointed out, coming from within the industry, he understands how valuable that is, not just for you today in your current applications, but as we look forward to the future proof elements of your systems. The single modular system allows us scalability and gives us flexibility. So, just drilling down a little, GENESIS64 is a sophisticated and complete system for supervisory control, data acquisition, visualization, alarm management, events management, and notifications.

[4:47]

It's designed to scale from the smallest single machine applications, whether on the edge, as we think about in this new environment of cloud to edge and edge to cloud, or as a standalone application, scaling down to something as small as a few tags of data, a few data elements, up to massive applications that are distributed across an enterprise, or even across multiple enterprises, with connections from edge to cloud. Hyper Historian: you heard a little bit about how it's used at Continental Tire, and Paul Carter was talking during the break about how it's used to collect massive amounts of data, to preprocess that data using statistical calculations and performance aggregation, to provide advanced characteristics for further analytics, and to deliver great data archival and open connectivity. AnalytiX: a set of cross industry tools for data analysis. And then IoTWorX, which is the core ICONICS platform for gateway devices. It takes those platform services and puts them right down at the edge: easy to use, set it and forget it, and you can manage and provision from the cloud.

[6:06]

Okay, so as an ICONICS user, whether an end user or a configuring systems integrator, it's important to know: Is it flexible? Is it competitive? Does it fit my needs? Can it be a fit where I am today? ICONICS does scale very nicely. It's modular and flexible, and it works very well in the sandbox with other automation systems. In fact, we've always said that ICONICS is like the Switzerland of automation: we connect everything. So, if you have multiple distributed control systems and you want a unified visual interface or central alarm management, this is something you can do with ICONICS, because it does talk to everything, but you can start very small. And this is really important. When Jim Desrosiers comes up later, he'll talk about the importance of being able to start with one specific challenge and scale up from there. Ted mentioned that earlier as well. It's one of the key aspects of ICONICS: start economically, then run on that base and solve whatever next problem you have. There's always a next problem.

[7:21]

Okay, so connectivity to everything is really key to the ICONICS system. And as Ted mentioned, the ICONICS platform is designed to connect to all control systems, IT and OT, so open protocol connectivity is really central. As Ted mentioned, we were among the first to embrace OPC, open standards communications; OLE for Process Control, it was originally called, and later OPC Unified Architecture. And we were among the first on its board. Later this afternoon, Mr. Tom Burke joins the Connectivity and Security session. He was a founding member of the OPC Foundation and its former president; now he is with ICONICS and Mitsubishi Electric as our Global Alliance Manager. Please join to learn more about open connectivity and secure connectivity.

[8:16]

So, connectivity, of course, must be in our system, and it's native to ICONICS licensing, built right in: OPC UA to the core, Modbus, BACnet, and BACnet Secure Connect; we'll learn more about those in the session that Tom and Oliver Gruner are hosting this afternoon. MQTT, Message Queuing Telemetry Transport, has been adopted by IT and is a way of connecting systems together. It was originally developed back in the oil patch for connecting SCADA systems for telemetry, and now, with Sparkplug B, as Paul Carter mentioned during the break, it's a great way to integrate data sources through a message broker, which makes it very scalable and resilient; really, IT technology applied to operational techniques in OT. Then SNMP, web services, data publishing, IoT publishing, the ability to connect to any database natively, and, of course, factory automation connectors. With ICONICS, we've integrated the Takebishi DeviceXPlorer OPC Server into the ICONICS suite. This allows us to connect to over 200 different types of controllers from more than 70 vendors. That's a lot of controllers. All from within the ICONICS installation: click and install, browse natively; OPC is built right into the ICONICS suite. Takebishi is a very prominent automation software provider and a close partner of our parent company, Mitsubishi Electric, in Japan. They've added some advanced servers like DNP3, IEC 61850, and MTConnect, as Paul Carter mentioned at the break. Thank you, Paul. These are important for specialized applications like utility, energy, and machine tool. And, of course, there's standard connectivity to the most popular manufacturers, Siemens and Rockwell, with the Modbus standard built in natively anyway; all this through Takebishi, plus built-in connectivity to our parent company's devices with the Mitsubishi Electric FA Connector, to all the popular PLCs, graphical operator terminals, etc. I mentioned IoTWorX. Please join us this afternoon for the session IoT: Riding the Wave to learn more about how IoTWorX helps you manage a fleet of devices and, importantly, connect existing control systems, where we have billions and trillions of data points already connected, securely and contextually to the cloud.
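To make the broker pattern concrete, here is a minimal sketch of MQTT publishing in Python using the open source paho-mqtt package (1.x API). The broker host, topic, and JSON payload are illustrative assumptions only; a real Sparkplug B implementation publishes protobuf encoded payloads on the spBv1.0 topic namespace, and an ICONICS deployment would configure this through the product rather than hand written code.

```python
# Minimal MQTT publish sketch (paho-mqtt 1.x API). Broker host, topic,
# and payload are hypothetical; Sparkplug B actually uses protobuf
# payloads on spBv1.0/... topics, not plain JSON.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="edge-node-01")
client.connect("broker.example.com", 1883)  # the broker decouples publishers from subscribers
client.loop_start()

# Publish a telemetry sample; any number of subscribers can consume it
# without the publisher knowing about them, which is what makes the
# pattern scalable and resilient.
payload = json.dumps({"temperature": 72.4, "pressure": 101.3})
client.publish("plant1/line100/pressA/telemetry", payload, qos=1)

client.loop_stop()
client.disconnect()
```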

[11:18]

ICONICS is a partner led company; we sell through partners. And it's important for our partners and prospective partners to know why ICONICS is the best choice for operational intelligence. I mentioned flexibility, scalable licensing, a unified platform, extensibility, and resiliency; you can count on ICONICS. It runs some of the most critical infrastructure in the world. And we're SI friendly; systems integrators build the systems. Almost all of the systems deployed with ICONICS are done by third party systems integrators or specialty systems integrators within the businesses. Of course, we have our own professional services group that helps the SIs and the end users come up with their own architecture, and Mitsubishi Electric globally has several service organizations that work together with our third party integrators.

[12:14]

We've made things easier and better for our systems integrators; there are just a couple of things I want to share. First of all, sign-up is free, no charge. Second, self-training: we've added 338 short instructional videos so far, and counting, on the ICONICS Institute. Please check out the ICONICS Institute. It's open to everybody, and it's organized very logically; you can flow through these two-to-five-minute videos in a series, or you can go back and refer to them one at a time. Several of the people here in the audience have created them. Thank you, Cynthia. Thank you, Jotham. Thanks, Luke and John. We've got a whole community of people who put these together for you to make it easy and fast to learn. Open documentation: on docs.ICONICS.com, it's all there, everything you need to learn and run the ICONICS system. Ted mentioned the Azure Marketplace. We've got the installation for the ICONICS suite there; you press a button on the Azure Marketplace, it spins up an Azure VM on your tenant, and you're up and running in just minutes. We have a great certification program for our partners, so those that have demonstrated competence and experience are recognized as Certified and Gold Certified. And as Ted mentioned, we're extensible through the Microsoft applications, Windows, the edge, SQL Server, all the things we're familiar with, plus Azure services and Azure Data Lake, and importantly through ToolWorX. ToolWorX is the extensibility power tool for the advanced systems integrator and for original equipment manufacturers that want to extend their systems with our software development toolkit.

[13:33]

Okay, so I'd like to pivot now and talk a little bit about client technologies, and soon I'll invite our Chief Software Architect, Mr. Chris Elsbree, to highlight some of these key features. I want to pick out three of the hundreds of amazing visual features that are in the ICONICS platform. Of course, we've been long recognized for advanced visualization tools: first with an OPC UA client, first with 3D in our industry, first with many, many firsts in visualization. First with mobile and web: when we think about accessibility, and everybody talks about visualization anywhere on any device, we introduced this idea of web HMI back in 2000, then MobileHMI, and then in 2013 any-glass technology built on HTML5. So HTML5 visualization, command and control, securely on any device, is a hallmark native to the ICONICS system: design in one place in the GraphWorX editor and deploy, whether it be thick client or thin client. And now 3D HTML5. GEO SCADA: this is one of the ways to put massive amounts of information into context and bring out exceptions, and ICONICS introduced GEO SCADA with the ICONICS suite when the 64 bit platform was launched back in 2008. GEO SCADA continues to be one of the most important elements for bringing context, location based context, into an application. Another key feature that I'd like to point out, something we added just a couple of years ago, is dynamic cloning. This advanced visualization technique helps speed deployment and improve the quality of projects. It's amazing. It's almost like wizardry.

[15:52]

It's not just fast, it's quality, so I want to stress that. Now, if I could invite Chris Elsbree to come up and join me here. Chris has been with ICONICS since 1994. Yes. Camera on Chris, please. Working at ICONICS headquarters in Foxborough, he's the lead developer of GraphWorX64, and he worked on GraphWorX32 and its DOS-era predecessor. He's responsible for researching new technologies and implementing core infrastructure and components, and this is important for unified platform consistency, the overall design and implementation of ICONICS software. Chris has a Bachelor of Science in Computer Science from WPI, Worcester Polytechnic Institute, right out here off the Pike. So, Chris, please show us a little bit about some of these great tools.

[16:50] Chris Elsbree, ICONICS Chief Software Architect

Thanks, Mark. Let's start with a demonstration of 3D HTML5. The main purpose of this example is to provide maintenance training for repairing a fault in the robot that you see on screen. If you attended our customer summit a few years ago, you might actually recognize some elements of this screen from the HoloLens demo that I did at that show. Here we're seeing that those same 3D assets can now be used over the web. As you can see from the 3D camera movement around the robot, this is not just a static 2D render of a 3D model; this is real time rendered 3D. The 3D rendering in the web browser is accomplished using a technology called WebGL. WebGL is supported in most modern web browsers, such as Chrome, Edge, Firefox, and Safari, on both the desktop and mobile versions of those browsers. And this WebGL technology is built into the web browser; there are no special plugins that you need for the browsers. Currently, I'm showing this in the Chrome browser, but we can show the same HMI in other browsers, like here in Microsoft Edge, and I could also switch to running in Firefox. And so here we see the same HMI running in Firefox; it's really your choice of browser. Notice that the display combines both 3D and 2D elements together into a single HMI, with the robot on the left and the control panel on the right. The 3D can better mimic a real world representation of equipment like this robot, but including 2D elements is beneficial for the accessibility of interactive controls or text and gauges. Here we have the 2D gauges showing the current angles for the joints of the robot. And of course, this isn't just a static 3D scene; we can animate and interact with equipment based on runtime process values. What we're demonstrating at the moment is the rotation animation within a 3D space, and this is actually six hierarchical joint rotations, where each rotation affects the pivot point of the next joint's rotation. That allows us to accurately represent a real robot arm's positions. What's just been activated now is a visual representation of the areas that the robot arm can reach with its programmed movements, and that demonstrates size and visibility animations.
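To make the hierarchical rotation idea concrete, here is a minimal sketch of chained joint rotations, reduced to a planar arm for brevity; the demo robot does the same thing with six rotations in 3D. The link lengths and angles are invented for illustration, and this is a conceptual sketch, not ICONICS code.

```python
# A minimal sketch of hierarchical joint rotation: each joint's rotation
# moves the pivot point of every joint after it. For brevity this is a
# planar (2D) arm; the demo robot chains six rotations in 3D.
import math

def forward_kinematics(link_lengths, joint_angles_deg):
    """Return the (x, y) position of each joint, chaining rotations."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles_deg):
        heading += math.radians(angle)   # child rotations accumulate
        x += length * math.cos(heading)  # the pivot for the next link is
        y += length * math.sin(heading)  # wherever this link ends
        points.append((x, y))
    return points

# Rotating only the first joint moves every downstream link, just as
# rotating the robot's base joint swings the whole arm in the 3D scene.
print(forward_kinematics([1.0, 0.8, 0.5], [30, -15, 45]))
```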

[19:21]

As I mentioned at the beginning, this example is to provide maintenance training for repairing a fault in the robot. This is done by providing a step by step guide with text descriptions and by animating the steps themselves on the 3D robot. Those text descriptions could be aliased to support multiple languages, or different sets of instructions could be presented depending on the specific fault. Here we're seeing the use of color animations to highlight the parts that are relevant to the current step and 3D location animations to indicate removal and replacement of the parts. Having the sequence represented in 3D provides a better spatial understanding of the locations and relationships of the parts in the robot. A technician could follow along with the steps on a tablet while working on an actual robot, adjusting the 3D view to match what they're seeing in real life. So, let's take a look at how the 3D is actually configured to run in the web browser. As those of you out there who already know ICONICS may have guessed, GraphWorX64 is the application that we use to configure the 3D HMIs for the web. If you've ever used GraphWorX64 to configure 3D for the desktop, then you'll be familiar with this already, because the configuration experience is mostly the same. The main difference here is that GraphWorX64 is running in web browser Edit Mode; you can see that indicated in the status bar at the lower right corner of the application. Web browser Edit Mode ensures that the configurator enables only the features that are supported on the web. The 3D HTML5 support that you're seeing now, in version 10.97.1, is currently in technical preview, so there are some features of the desktop 3D that aren't quite supported yet for HTML5, but we're working to continue to improve that feature parity in subsequent releases.

[21:23]

Here we can see the rotation animation configuration and the data source that drives the 3D rotation for this joint. It's a local alias that resolves to a property in AssetWorX that supplies the current joint angle. We can also see the angle limits for the rotation, and we can actually view them on screen. So here, we can visually adjust the dynamic limits of the angles of the rotation and the pivot point of the rotation. Once we're satisfied with the configuration of our 3D scene, we can use the convenient open in web browser button that you see at the top left of the application to run the GraphWorX64 display in the web browser of our choice. In this case, I'll just use the current Windows default web browser, which is currently set to Microsoft Edge. And here I'm going to skip saving, because I don't want the changes that I just made. With a simple click, we're back in the web browser running the 3D HMI that was just being edited. So that's a brief overview of the new ICONICS 3D support for HTML5. This feature opens up the reach of 3D HMI to be deployable beyond just the Windows desktop. Now you can run GraphWorX64 3D visualizations over the web in your choice of web browser, regardless of the operating system or the device form factor: Chrome, Edge, Firefox, or Safari; Windows, iOS, or Android; desktop, tablet, or phone.

[23:00] 

Now, the next demonstration that I'd like to show is for the clone dynamic. Using the clone dynamic is a great way to simplify the configuration of displays when you need to visualize multiple instances of a graphical symbol where each symbol instance is tied to unique data. The data for such instances could be specific to a particular type of equipment coming from AssetWorX, or it could be database table rows coming from GridWorX, just as a couple of examples. So let me show what I'm talking about to make it easier to understand. Here we're visualizing oil fields on a map, where we're overlaying symbols representing pump stations and wells. The pump stations are the green icons, and the wells are the blue icons. This dashboard is primarily meant to showcase CFSWorX, and there will be a session later that focuses on the Connected Field Service features. But right now, I want to focus on how the symbols that you see on screen are generated using dynamic cloning. The data sources for all these pump stations and wells are defined in AssetWorX, and here we're seeing the instances of the well assets that we're going to visualize in GraphWorX. We're going to use the latitude and longitude properties to position the symbols on the map, and the alarm state property will be used to make the symbol blink when there's an alarm. As you can see, there are quite a few well instances here; there are 100 for Andrews County, and each of these needs to be visualized on the map in our dashboard. We've used our bulk editing and import features to bring these into AssetWorX quickly, but let's take a look at how we can visualize this information on an HMI screen without having to configure 100 unique graphical symbols. So, this is the GraphWorX display for that map that we were looking at, and you'll notice you don't see blue and green icons all over the map right now. There's only this one symbol instance for Andrews County, not the 100 instances we saw in the AssetWorX configuration. Without the clone dynamic, we would need to manually make a copy of the symbol 100 times and then manually hook up each of those symbols to its corresponding unique data sources in AssetWorX. This would be both tedious and error prone. Furthermore, it would assume that the number of well assets in AssetWorX is not going to change, because otherwise we would need to manually edit the graphics and add more symbol instances, which would make maintaining these HMI screens significantly more cumbersome. But with the clone dynamic, there's a much better way to do this. This one symbol is the primary well symbol that the clone dynamic is going to replicate at runtime to represent the 100 wells defined for Andrews County.

[26:00]

In the number of instances property, you can see we specified that the number of instances should be equal to the number of assets defined in the wells branch for Andrews County in AssetWorX. This tells the clone dynamic how many copies of the symbol to generate at runtime. Next, we specify where the symbols are going to be placed, based on the GPS coordinates. And take note of this instance number alias name property and its value, Instance; this alias is how we make each symbol's data sources uniquely bound to a specific asset. The instance alias resolves to the instance number of the generated clone: the primary symbol is instance zero, the first generated clone is instance one, the second generated clone is instance two, etc. We can then use that number as an index into the list of assets. So, in the expression that you see on screen, you can see that we're getting the asset point name from the list of wells in AssetWorX for the cloned instance's index and then getting the latitude property. We can do the same thing for the longitude property to get the complete GPS position, and we can also use the alarm state property for each symbol instance to make the symbol blink based on alarm. And that's all there is to it. In runtime, this one symbol instance becomes 100 symbol instances positioned at the correct GPS coordinates on the map. As you can see, there are a lot of blue dots on the map, and without the clone dynamic, you would have had to copy those manually and configure each of them individually. But with the clone dynamic, we only need to configure that one symbol instance. Furthermore, if we add or remove assets in AssetWorX, we don't need to modify the configured graphics at all. GraphWorX will just detect that the number of assets changed, and it will add or remove symbols dynamically to match the modified asset list. So, I hope this demonstration conveys some of the power and the time saving benefits of using the clone dynamic. It can greatly simplify the initial configuration required to create HMIs that need to visualize many instances of replicated symbols, and it can also reduce, if not eliminate, the ongoing maintenance of such HMI displays, since the cloned instance generation can automatically adjust to changes made to the back end data sources. So, with that, I'm going to turn it back to Mark.
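Conceptually, the clone dynamic replicates one configured symbol per asset, binding each copy's data sources by instance index. Here is a minimal sketch of that runtime behavior, with invented asset data; it illustrates the idea and is not ICONICS internals.

```python
# A conceptual sketch (not ICONICS internals) of the clone dynamic: one
# configured symbol plus an instance-count binding becomes N symbol
# instances, each resolving its data sources through an "Instance"
# alias that indexes into the asset list. All names are made up.
wells = [  # imagine these rows coming from the AssetWorX wells branch
    {"name": "Well-001", "latitude": 32.31, "longitude": -102.54, "alarm": False},
    {"name": "Well-002", "latitude": 32.35, "longitude": -102.60, "alarm": True},
    # ... in the demo there are 100 of these for Andrews County
]

def clone_symbols(primary_symbol, assets):
    """Replicate the primary symbol once per asset, binding by index."""
    clones = []
    for instance, asset in enumerate(assets):  # instance 0 is the primary
        symbol = dict(primary_symbol)
        # Each data source is resolved per clone via the instance index,
        # like the expression indexing the asset list by the alias value.
        symbol["position"] = (asset["latitude"], asset["longitude"])
        symbol["blink"] = asset["alarm"]
        clones.append(symbol)
    return clones

# Adding or removing wells changes len(wells), so the symbols track the
# asset list with no change to the display configuration.
print(len(clone_symbols({"shape": "well-icon"}, wells)))
```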

[28:39] Mark Hepburn

Thank you very much, Chris.

[28:43]

So that's amazing stuff. You can see the flexibility and power, and everything in a web browser: 3D HTML5, dynamic cloning, and there are some bulk import capabilities. Next, I'd like to pivot and talk about alarm innovation. Hyper Alarming is our 10.97.1 product. ICONICS has long been a leader in distributed advanced alarm management, and now with version 10.97.1, we are delivering what we aim to be, at minimum, 5X performance improvements, really ready for the future. So first, the Hyper Alarm Server, introduced in March with our 10.97 release: a much more powerful native alarm server, with better performance and control, that integrates natively with AssetWorX. So, everything comes together with the Hyper Alarm Server and the Hyper Alarm Logger. The Hyper Alarm Logger is introduced in 10.97.1, our November release. It will have redundancy, again native integration with platform services, and full scalability, with the asynchronous, multi-threaded processing that you get with the managed services at the core of the ICONICS platform. So, this is not just for now but also for the future: ready for cloud based, scalable applications, building on the underpinning of the roadmap for more analytics and more cloud based applications that Ted Hill reviewed in the opening.

[30:19]

Okay, so I'd like to pivot a little bit around this concept of digital twins. We hear a lot about digital twins these days. The concept combines elements from our physical world and digital world and pulls them together with telemetry, with context from not just the things and their sensor data but also where they are, how they're used, the people using them, historical data, and alarm data. That whole idea, generally, is digital twins. But I think if you ask a half dozen people what exactly a digital twin is, you're going to get different answers. From our standpoint, for somebody who's trying to improve the productivity and resilience of an operating plant, I like to think of a digital twin for operations as the fusion of geometry, telemetry (that's the sensor data), analytics, and metadata. This metadata could come from the people, places, and things involved, or from information systems: What's the product? What's the batch? What's this, what's that? It's all part of the digital twin. So, multiple data sources for operational control and insight: that's digital twins for operations. In this case, we see an object or set of objects. This is a crane; you'll hear more about cranes later in this presentation, marine port cranes, big stuff. So, I have a geometric twin coming from the computer aided design, from the Product Lifecycle Management application. I add telemetry to that; I get key performance indicators; I can put them in context on the device. With ICONICS, we connect that data through our digital twin, which we call AssetWorX, so I can take elements like speed, pressure, temperature, and humidity, live and historical data, combine them with analytics, etc., and I end up with an object related to all this metadata. Then I can organize that object into a logical structure, often geographical or financial, and of course, with AssetWorX, I have the flexibility of creating virtual views of that. It's a really flexible tool. So, we then have this digital twin in context, in AssetWorX. I see the object and who's working it, what they're lifting, what's in the container; there's metadata that I can bring in from web services, from OPC, or from any of the data communications methods and combine into this asset framework. So, AssetWorX, the ICONICS digital twin: it is a multi-level distributed architecture. It brings together live alarms from AlarmWorX and the Hyper Alarm Server and historical data from the Hyper Historian, but also AnalytiX, all in one framework. We think over the years of the ISA-95 kind of architecture; this helps us support and define the layers of the architecture. Often, as you saw on the map in Chris's example, we'll look at things geographically or financially, or it could be logical, around production. So, AssetWorX is at the center of all this, and it's not just the back end ability to connect into the data; there's also a very nice front end. There's a tree like structure, with some really nice features that help us manage the project and run it as an operator, including the idea of reusable templates.

[34:13]

There are back end data templates. There are front end visual templates, and the ability to add in some very nice visual effects, totalization, iconography, etc. AssetWorX becomes an actionable operational template, so operators can interact. This concept of global commanding is very important for AssetWorX. As we look at web based, scalable, cloud based applications, it's really important for us to avoid scripting that requires client side activity, or complex scripting that's really hard to diagnose. So, AssetWorX, and in fact the entire ICONICS front end, supports global commanding: server side actions that substitute for scripting. Of course, ICONICS is open, so you can do scripting as well. But we think this is one of the major differentiators; when we look at projects from other applications, it's often just a tangled web of scripting. Get rid of the scripting when you can, and you can; I think virtually all of our projects use zero scripting.

[35:26] Mark Hepburn, ICONICS Vice President of Global Sales

I would now like to illustrate these together and invite up my colleague, Mr. Luke Gonyea. Luke, come on up, please. Luke is one of our Solution Sales Engineers here in Massachusetts; he has been with us for three and a half years. Luke will demonstrate Hyper Alarming and AssetWorX together in some applications, so you can see how these things work and bring value.

[35:53] Luke Gonyea, ICONICS Solution Sales Engineer

Excellent, thank you, Mark. Appreciate it. I prepared a demo today for the Hyper Alarm Server.

[36:05]

And thank you again for joining our virtual event today. I want to spend time in this demonstration highlighting our Hyper Alarm Server, which was released in our 10.97 release, and then I will also touch on our new Hyper Alarm Logger, which will be released with the 10.97.1 version of GENESIS64. My demonstration today is going to expand on the same asset based modeling that Chris mentioned in his clone dynamic example. For the presentation, I have a roller chain manufacturing facility. On the right half of my screen, you will see the layout of the different pieces of equipment in our facility. We have a high level facility that has several areas; each area has different product lines; and each product line has several pieces of equipment manufacturing these different parts. The asset model we have in GENESIS64 models that physical layout of our equipment, so on the left hand side, you can see my Asset Navigator and how it mirrors that same configuration. With the Hyper Alarm Server, we are now able to have further integration between our alarms and the asset structure that we define. In my example, I have an additional column in my Asset Navigator titled “Active Alarms”. This is looking at the Hyper Alarm Server alarm data and associating it with individual pieces of my asset tree. In our case, press A has five current alarms; the 100 line, which is the parent line of press A, has between six and seven alarms; and so forth, all the way up to auto assembly, which would be the department, and then the chain manufacturing plant, which would be site wide. In addition to visualizing our alarms through the Asset Navigator, we have the traditional tabular view that our existing users would be familiar with from our AlarmWorX64 product. In the tabular view, we can filter by the different metadata fields that we have associated with our Hyper Alarm Server alarms. In this case, I have this current operator field, and I'm looking at alarms just for John Smith. We can navigate using those alarms; as you see, I load the alarm details pane on the right hand side, and I'll use that to load our floor plan. The floor plan, in addition to all the different widgets we have, can easily integrate with the Hyper Alarm Server. You can see that my asset is color coded based on alarm status. I also have historical information in a trend viewer right at the bottom of the screen, and the real time asset information on the right hand side, all in a single interface, so there aren't multiple displays for these different functions. In addition to that asset information, we can have our Hyper Alarm Logger log these events for long term storage, and as I expand my alarm log, you can see the same fields that were available from the Hyper Alarm Server on the previous display available here on the logged alarms. I want to spend a little more time talking about the configuration in the Workbench on the back end.
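The "Active Alarms" roll-up Luke describes amounts to summing alarm counts up an asset tree: each node shows its own active alarms plus those of its descendants. Here is a minimal conceptual sketch of that idea, with a hierarchy and counts invented to mirror the demo; it is not the ICONICS implementation.

```python
# A minimal sketch of an "Active Alarms" roll-up: each node's count is
# its own active alarms plus those of its children. The tree mirrors the
# plant > department > line > equipment hierarchy in the demo.
from dataclasses import dataclass, field

@dataclass
class AssetNode:
    name: str
    own_alarms: int = 0
    children: list["AssetNode"] = field(default_factory=list)

    def active_alarms(self) -> int:
        return self.own_alarms + sum(c.active_alarms() for c in self.children)

press_a = AssetNode("Press A", own_alarms=5)
line_100 = AssetNode("100 Line", own_alarms=1, children=[press_a])
auto_assembly = AssetNode("Auto Assembly", children=[line_100])
plant = AssetNode("Chain Manufacturing Plant", children=[auto_assembly])

print(plant.active_alarms())  # 6: Press A's alarms roll up to the site
```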

[39:57]

So, I'm going to jump over to the Workbench and talk more about our asset structure. In auto assembly, we have several product lines, and each line has its own associated pieces of equipment. In the graphical interface, we are emphasizing the configuration of press A, so I'll just quickly touch on the advanced analytics and their configuration on this asset in the Workbench. On each individual property, we can configure our historical, our real time, and now our alarm information. So, if I go to this temperature property, I have a Modbus tag reading in the real time information, I have the configuration for my Hyper Historian tag, and I now have the alarm conditions that I can define directly from this asset property. The Hyper Alarm Server also allows for more control over the configuration of these alarms and events.

[41:07]

Part of that is having the ability to write our own expressions. In our example, I'm going to look at this high limits alarm in the actual Hyper Alarm Server configuration. In the expression, I'm able to use the full ICONICS expression library to generate a statement that defines the different alarm conditions. In this case, I have three conditions: a high high condition, a high condition, and a normal condition, and each of those conditions is associated with a numeric trigger. In this case, we have three: normal being zero, high being one, and high high being two. In addition to this further control and better modeling of our facility, we can define what inputs are available to our operators or to the engineers who are actually configuring these projects. In the case of my high limits alarm, I have the normal alarm and event fields that we expect: the different values, the message text. In addition, I also have the related value that I was talking about when we were looking at that GraphWorX display: my current operator related value. Two of the biggest differences: the first is that there is no longer a limit to how many related values we have on these different alarms and events. The second, from an engineering perspective, is that in AlarmWorX64, related values were just numbered: 01, 02, 03, and so forth. With the new Hyper Alarm Server, we have the control to name a related value whatever we want; in our case, we're calling it current operator. The benefit of this is that you and your engineers no longer have to memorize which related value maps to which field. You have the ability to customize, which helps with the normalization and standardization of nomenclature across the project, so as the project grows, it's much easier to keep track of which field means what. In addition to the configuration of our Hyper Alarm Server, we also have a new area to configure the Hyper Alarm Logger. There's not too much to show in terms of configuration, but the Hyper Alarm Logger can automatically download its subscription without our having to do any additional engineering behind the scenes. This all works out of the box, reducing your time to value with these alarm and event tools. In addition to having our Hyper Alarm Server configured directly from the provider in the Workbench, we can integrate it with our equipment classes.
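The three-condition limit alarm Luke configures can be pictured as an expression that maps a value to numeric triggers, with named related values carrying metadata alongside the event. Here is a minimal sketch of that logic; the limits, field names, and message format are invented for illustration, and this is not the ICONICS expression syntax itself.

```python
# A sketch of a three-condition limit alarm: an expression maps a value
# to numeric triggers (normal=0, high=1, high high=2), and related
# values are named fields ("current operator") rather than numbered
# slots (01, 02, ...). Limits and names are made up.
HIGH, HIGH_HIGH = 80.0, 95.0

def evaluate_limit_alarm(temperature: float, current_operator: str) -> dict:
    if temperature >= HIGH_HIGH:
        trigger, condition = 2, "HighHigh"
    elif temperature >= HIGH:
        trigger, condition = 1, "High"
    else:
        trigger, condition = 0, "Normal"
    return {
        "condition": condition,
        "trigger": trigger,
        "message": f"Press temperature {condition}: {temperature:.1f}",
        # A named related value carries metadata with the event, so an
        # engineer never has to remember what slot 01 meant.
        "related_values": {"current operator": current_operator},
    }

print(evaluate_limit_alarm(97.2, "John Smith"))
```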

[44:22]

So, in our case, I have the equipment class for my 600 ton press, and what we'll see in the actual configuration is that I have all those fields that I mentioned in the Hyper Alarm Server configuration available to me as an engineer in the interface. Any extra fields I'm not using for this high limit alarm are not visible, so it declutters the interface and makes things more intuitive and straightforward. In addition, you can also see this current operator related value, how I'm able to configure it, and how it works seamlessly with, in this case, the self-referential properties that we have in AssetWorX: it points at whatever instance of this press I create and looks at the operator property. This works seamlessly with our Bulk Asset Configurator, so I can create as many presses as I need in my application, and this current operator related value field will update automatically.

[45:35]

That concludes my demonstration of the Hyper Alarm Server. If you take one thing away from this presentation, I want to emphasize that the configuration of our real time connections, our historical data logging, our advanced analytics, and now, with the Hyper Alarm Server and Logger, our alarms and events are all configured through this centralized asset tool, AssetWorX, in the GENESIS64 platform.

[46:14] Mark Hepburn

Thank you, Luke. I very much appreciate it. A couple of key points I want to bring out there. There's this concept of related values; this is metadata, right? This has been really important for overall equipment effectiveness, manufacturing productivity, and really any kind of operational excellence where we need to see what happened when something else happened. Another point Luke made is that he took the data coming from the Hyper Alarm Server into the Hyper Alarm Logger, but with the new Hyper Alarm Logger we could just as well take from BACnet, from a third party alarm server, or from OPC UA. So, thanks. Now I want to talk a little bit about analytics and analytics innovation. Analytics tools extend the GENESIS64 suite across the ICONICS platform, and once we've established our real time and historical data collection, it is interesting, as you're growing your project, to add some other tools. The basis of much of our analytics is Hyper Historian, which provides a historical framework for the analytics products. The Express version, Hyper Historian Express, is included with GENESIS64, and then add on modules provide advanced filtering and aggregation and performance calculation inputs, which we use for productivity and advanced analysis. It's super high speed, it's distributable, it's redundant, and it integrates with AssetWorX. Hyper Historian is a fundamental part of the ICONICS platform, and the basis for the analytics is a library of built-in functions for time series calculations, totalizations, and averages, so we can do comparative elements. We can do totalizations for energy; we can compare period to period for fault rules, etc. There's a lot of arithmetic built inside, and it's got a very nice data compression algorithm. And of course, it spans edge to cloud and talks to data lakes, with a native Azure Data Lake connector; it's easy to get data in and out, yet secure. The pre-built solutions on top of Hyper Historian include, as I mentioned, energy analytics, facility analytics, statistical process control, and overall equipment effectiveness. Later, Jim Desrosiers is going to come up and show us an application with OEE that leverages a lot of these concepts. So, facility analytics: I mentioned fault detection and diagnostics. This is our FDD, fault detection and diagnostics, product, and it helps reduce the time to solve issues. You heard earlier Shawn Hartnett from Microsoft explain how Microsoft achieved tremendous savings in operational performance, particularly around energy and labor usage; it was largely through the use of the fault detection and diagnostics methodology in facility analytics: reduce downtime, reduce energy consumption; it tells you what to do and when, prioritized. Energy analytics helps us collect any kind of meter data. We think about energy usually as electricity, but 40% of worldwide energy is steam. We collect steam data, we collect water data; we connect anything you can measure with a meter, aggregate it, and then provide out of the box visualization, drill down, roll up, integrated dashboards, etc. ReportWorX, an Excel based design environment: ReportWorX is fundamental for creating a report out of ICONICS datasets. Of course, the data is available to other reporting tools, but ReportWorX is fully integrated with the ICONICS platform for triggering and for data storage, and with our AnalytiX-BI. You get PDF, you get Excel, you get CSV, and it connects to any data that we can connect to. Really important product.
Now, ReportWorX Express is included for free; the ReportWorX automation modules, if you want to automate those reports, are an add-on. BridgeWorX64 is a highly scalable ETL (extract, transform, load) product that moves information from one place to another based on events. Like ReportWorX, it connects to everything, and it is central to moving data from OT to IT; it's core to our information broker solutions: files, databases, external data sources. Quality analytics: this is SPC, statistical process control, with built in rule alarming and all those statistical elements you probably remember from your college classes if you've actually looked at quality on manufactured items. SPC right out of the box; it's an extension of Hyper Historian that integrates alarms, etc. CFSWorX, connected field service: we saw this in Chris's demo, and Ted talked about it in the keynote. CFSWorX is, in this new normal, really a key element for alerting and dispatch, for getting the people who are available and skilled to fix a problem at the right time, guaranteed. It gives you confirmation that the remote worker has received a notification, that they've accepted it, and that they're working on it. It integrates with MobileHMI for remote expert visualization; we saw a little of that with Zhi's demo in Ted's keynote earlier. And the dispatching capabilities are significant: setting up schedules and looking to see where people are. Of course, CFSWorX has to integrate with other systems. Where do we keep the data about who needs to fix what and what the assets are? In the enterprise asset management system. So, CFSWorX integrates natively with Microsoft Dynamics 365 and with others like ServiceNow, Maximo, and Salesforce, and it's a great notification system as well. I want to mention a tool called the Bulk Asset Configurator that allows us to very rapidly build systems; it's a setup for the next topics. You saw something like this in Chris's demo, the ability to very rapidly build systems, and it helps a great deal with quality. And tying all this together is our AnalytiX-BI. AnalytiX-BI connects to any ICONICS data source and to external data sources and creates a relational framework to tie together all data sources with a real time, interactive experience. It loads these things in memory, so it's highly performant, and it gives us insight into the data relationships and history. It's like a Power BI inside of our product. And of course, our product and all the data are extensible out through other visualization platforms like Power BI, but within our project, there's a huge advantage to having this in memory.

[51:38]

I would like to invite our colleague from Melbourne, Australia, Mr. Alex Binder, to share with us a project that he's built to demonstrate how all these things come together and bring value in a real application. This is a marine port example. Alex has 14 years of industry experience; he's been with ICONICS for a number of years. He is an expert in HTML5, and he's built his own products. Great stuff. Alex, if you can join us, please. We have Alex on the connection.

[54:11] Alex Binder, ICONICS Senior Applications Consultant

Thank you for the introduction, Mark. I'd like to share with you how ICONICS AnalytiX-BI and AssetWorX come together in a real application. ICONICS software is used in a number of marine port applications worldwide. In this case, I was asked to put together a conceptual framework for a port in Southeast Asia. Here we have a homepage display showing data on the top 50 busiest container ports and the container traffic over the last decade. I'm grabbing this data from a web service, and it's imported into the AnalytiX-BI module using the data flow tool. I can click through the dataset and see the growth of container traffic over the last 10 years. I can also filter by regions and by ports in a specific region. And just by clicking on these buttons here, you'll notice that the displays are very responsive. This is because, with AnalytiX-BI, the data is cached within an in memory database. Another thing that's useful to note is that the display uses only a single GridWorX chart, which is linked to a single AnalytiX-BI data model. The buttons on this display simply set global aliases in the AnalytiX-BI query used in the chart, so this makes the configuration of the display in GraphWorX very flexible while at the same time remaining very easy to implement and maintain. Now, if we drill down into one of the ports, I have an aerial view showing the layout of some different terminals. In this demo, we have container terminals, we have a storage tank farm terminal, and we have a refrigerated container, or cold storage, terminal. If you click on the transparent overlay, you can go into a particular terminal, and we see a map view of the equipment laid out over a map showing real time data. We can also view some of the KPIs for that equipment; in this case, crane KPIs overlaid at each crane's position. You can also access the different terminals through the tab view at the top, and if you operated multiple ports, you could add them as additional tabs here. Now let's go into the cold containers example. One of the great features in AnalytiX-BI, which I find myself using again and again, is the ability to do ad hoc SQL queries in the GraphWorX display. AnalytiX-BI lets you query the data set: for the time that a container has been in storage, I can search for the minimum, its arrival time, its departure time; I can compute metrics like the consumption per hour in storage; I can query the temperature data set and ask for the maximum. Each tile here is linked to a single query. It really simplifies my job of creating analytical displays: I can point everything on a display to the same data source, the BI table, and then just write different queries against it. And because of the in memory caching, AnalytiX-BI's responsiveness is great. Let's go through the cold containers display. Here we have the cold containers overview. It's got a list of the refrigerated containers that have arrived at the terminal, and you can select one from the selection here, and it will show the summary for that container, along with its real time data, like the temperature, humidity, and energy consumption that refrigerated container is using. We pull in data such as when it arrived at the terminal, when it departed, the contents, the customer it belongs to, and the container ID; all of that would come from an ERP system such as SAP, and we match this up against the PLC level data, which would have been logged historically in the ICONICS Hyper Historian.
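As a rough analogue of those ad hoc queries against an in memory cache, here is a sketch using Python's built in sqlite3 module as a stand-in for the AnalytiX-BI in memory database. The table, columns, and readings are invented for illustration; the point is that many display tiles can share one cached table while each tile runs its own query.

```python
# Many 'tiles' pointing at one in-memory table, each with its own ad hoc
# query. sqlite3 stands in for the AnalytiX-BI in-memory cache; the
# schema and data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE container_log (container_id TEXT, ts TEXT, temp REAL, kwh REAL)")
db.executemany(
    "INSERT INTO container_log VALUES (?, ?, ?, ?)",
    [("RC-101", "2021-11-09T10:00", -18.2, 3.1),
     ("RC-101", "2021-11-09T11:00", -17.9, 3.0),
     ("RC-101", "2021-11-09T12:00", -3.5, 0.0)],  # power-failure sample
)

# Each display tile is just a different query against the same table.
max_temp = db.execute(
    "SELECT MAX(temp) FROM container_log WHERE container_id = ?", ("RC-101",)
).fetchone()[0]
avg_kwh = db.execute(
    "SELECT AVG(kwh) FROM container_log WHERE container_id = ?", ("RC-101",)
).fetchone()[0]
print(max_temp, round(avg_kwh, 2))
```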
And it's all pulled together into an AnalytiX-BI data model using the BI data flow tool. This tool is also used to clean up the datasets: you might need to change data types to match things up, to transpose data, or to tidy up some of the string formatting. And it allows us to create displays that give insight into all of this. So, for example, here I have a refrigerated container where the power may have failed.

[59:13]

We can see in the trends that the power went out for a little while, and the temperature may have spiked while the power was off. And in these summary tiles, we can see in this temperature tile that we've got a maximum value that has gone out of range. Let's take a look at another display I've created which heavily uses AnalytiX-BI: the cranes overview display. It gives you an overview of all the RTG cranes in a port and lets you gain some insight into which cranes may be performing or underperforming. In the top half of this display, we see summary charts on the various crane metrics, or KPIs, and we can filter to the entire port or to one terminal. So, let's say we take a look at active time across all the different cranes in the port, and we can see that this crane here has a low active time. So, we go to the bottom half of the display, which lets you drill down into an individual crane. Let's take a look at the active time, and here we can see that this crane had some downtime; that's why the monthly aggregate is showing low. A display like this is very useful to help you find correlations in your data. For example, let's look into some of the other crane KPIs, like stack occupancy, and we can see there is a crane that has a much higher, fuller stack occupancy. With that one identified, we can then compare it to some of the other cranes, and maybe we look at the active time as well, the percentage of time the crane is busy. If we compare against some of the other cranes, we can see that, as the stack gets fuller, it also affects the active time. Another application of AnalytiX-BI in this demo is the analysis of the various crane alarms. In this display, you can see all the different alarms from all the different cranes in the port. These are set up in the new Hyper Alarm Server, and we've stamped each alarm with metadata, like which part of the crane it relates to and which category the alarm belongs to. We can then pull that in, using the analytics data flow, into a model, sort by terminal and by category, and display charts like this. You can see the severity of alarms; this is the total count by severity. We can also break down the alarms by shift. When an alarm comes in, it just has a timestamp, but in BI, I read the timestamp and compare it against another table which holds the schedule for shifts A, B, and C; that's how we enrich some of the original data. This alarm data set can also be viewed down at the individual crane level. If I go into an individual crane and into the alarm analysis for that particular crane, we use the same BI data model, just adding a filter for this particular crane number, and now this chart has been updated to show the specific count of alarms for the T1001 crane.
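The alarms-by-shift breakdown Alex describes is a timestamp enrichment: each raw alarm time is compared against a shift schedule table to get a shift label. Here is a minimal sketch of that idea; the shift boundaries and alarm times are assumptions for illustration only.

```python
# Enriching raw alarm timestamps with a shift label by comparing them
# to a shift schedule, as in the alarms-by-shift chart. The shift
# boundaries here are invented.
from datetime import datetime

SHIFTS = [("A", 6, 14), ("B", 14, 22), ("C", 22, 30)]  # C wraps past midnight

def shift_for(ts: datetime) -> str:
    hour = ts.hour if ts.hour >= 6 else ts.hour + 24  # fold 0-6am into shift C
    for name, start, end in SHIFTS:
        if start <= hour < end:
            return name
    raise ValueError("no shift covers this hour")

alarms = [datetime(2021, 11, 9, 7, 15), datetime(2021, 11, 9, 23, 40),
          datetime(2021, 11, 10, 2, 5)]
counts: dict[str, int] = {}
for ts in alarms:
    label = shift_for(ts)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'A': 1, 'C': 2}
```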

[1:03:50] Mark Hepburn

Super. Hey, Alex, could you show us a little bit of the configuration of that? Would that be alright?

[1:03:58] Alex Binder 

Thanks, Mark. That's a great question. Let me go through some of the setup of the analytics, how I did it for the tank models. You see I'm in Workbench here, which is our main configuration environment. If I go into my analytics node, we've got the BI server, and the way BI works, it's split into two parts: data flows and data models. Data flows are what we use to pull in the data and tidy it up. So, let me go into my tank farm example. For the Customers table in this demo, I just used the demo Northwind database and took the customers from there. A data flow lets you set up steps: I've got a step to import the raw table; I remove some of the columns that I don't actually need; and here I've added an email address column, where I take the contact name and make it @company.com. You get a preview of the output data set, so here I've now created an email column. I do some other tidying up: I rename columns and change the primary key from one data type to another. You can do all of this within the data flows. Once you've got a data flow set up, you create a data model, so for my port, tanks. A data model is a collection of data tables and some views. The Customers table I just import from the data flow I created, and here it shows you the columns we created, including the email one. I import the tank data and the tank log data. And a data model lets you see, graphically, a diagram of all the different tables, and you add the links between the primary and foreign keys, the relationship links between the tables. So here, I'm linking the tank ID to the tank ID in the data log, and the customer ID to the customer ID in the data log. This then lets you create a data view. This is the data view that I'm actually using in the displays, querying the various tables in the same model and getting this data set out. And you'll notice that this lets you simplify things: I'm just using a SELECT query here; I don't need to do any of the joins, because AnalytiX-BI is clever enough to know the relationships, since we've defined them in this diagram. That allows you to simplify your query. This data view is stored in memory, so it's fast and it's cached, and that's what I'm pointing to in my GraphWorX displays. So that's a little bit of the background of how AnalytiX-BI has been set up for the tank farm example. Back to you.
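As a rough analogue of that data flow plus data model setup, here is a sketch in plain Python: import a raw customers table, drop unused columns, derive an email column, then use a declared key relationship so the final "view" needs no explicit join logic in each query. The rows are invented in the spirit of the Northwind customers table, and this only illustrates the idea, not the AnalytiX-BI engine.

```python
# Data-flow steps followed by a relationship-aware 'view', mirroring the
# Workbench walkthrough. All rows here are made up.
raw_customers = [
    {"CustomerID": "ALFKI", "ContactName": "Maria Anders", "Fax": "030-0076545"},
    {"CustomerID": "ANATR", "ContactName": "Ana Trujillo", "Fax": None},
]

# Data-flow steps: remove unused columns, add a derived email column.
customers = [
    {"CustomerID": c["CustomerID"],
     "ContactName": c["ContactName"],
     "Email": c["ContactName"].replace(" ", ".").lower() + "@company.com"}
    for c in raw_customers
]

tank_log = [{"TankID": "T-01", "CustomerID": "ALFKI", "Level": 63.0}]

# Data-model relationship: CustomerID links the tables, so a 'view' can
# be produced without spelling out the join in every display query.
by_id = {c["CustomerID"]: c for c in customers}
view = [{**row, "ContactName": by_id[row["CustomerID"]]["ContactName"]}
        for row in tank_log]
print(view[0]["ContactName"], view[0]["Level"])
```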

[1:07:27] Mark Hepburn

Super, thank you very much, Alex, our Senior Applications Consultant down in Melbourne, Australia. So, I'd like to move on and talk about manufacturing productivity. Manufacturing represents about a third of the applications that people use ICONICS for, so productivity is really important to us, and with the supply chain shortages in manufacturing today, it's all the more important to drive productivity.

[1:07:58] 

I'd like to introduce one of our Gold Certified systems integrators, Data Acuity, Inc. Mr. Jim Desrosiers is the president and founder of Data Acuity. He's known ICONICS for quite a while; in fact, he worked for ICONICS back in 1998/1999 and has since spent the balance of his 30 year career determining how to help customers get the acuity, that pointed information, from their data. Would you mind rolling the video so Jim can introduce himself, and then Jim can come up on stage.

[1:08:39] Jim Desrosiers, President and Founder of Data Acuity, Inc.

Data Acuity is an automation software company that focuses on manufacturing efficiency, energy efficiency, and the efficient delivery of support to automation systems. We are recognized as an expert in the integration of complex automation systems with data structures. For us, the key word is focus. Our technologies and services are designed to focus our customers' resources on the areas in which they can gain the greatest amount of efficiency.

[1:09:14] Mark Hepburn

Hey, thanks a lot, Jim. Welcome.

[1:09:22] Jim Desrosiers

Thank you. I appreciate being here. I'm grateful to be out in the real world. I'm actually very grateful to see a lot of my friends that I haven't seen in a long, long time. So, it's really nice to catch up with a lot of you. And I've seen some new friends that I actually hadn't seen in person over the last year and a half as well. So, I'm very grateful to be here in person. Ted and Mark asked me to come up and talk just a little bit about the flow of a manufacturing efficiency project: how things get started, where they typically start, what discipline of automation we typically start with, and then where they migrate to over time. Typically, when we sit down with a client to talk about a manufacturing efficiency project, that conversation almost always starts with this elusive, magical gain. We're looking to gain some efficiency, more product. We're looking to decrease our scrap. We're looking to gain efficiency out of our staff and our maintenance staff. But it's quite often very elusive. We're not quite sure where these magical gains are going to come from. We know that we want to do some kind of data correlation that's going to lead us to some answers about what's happening in our plant today. We also know that we want to work towards some of the buzzwords that we've heard. We want to work towards predictive awareness: not just reacting to what's going on today, but in fact having some predictive analysis of what may happen tomorrow, and acting on it before it happens. Most important, that conversation always leads to agile flexibility. We need the ability to start small and grow, and not lose the work that we did as we're growing. Here are just a couple of examples of what that leads to, and we'll talk about these examples in a little more detail in a minute. There's a whole bunch of automation disciplines we'll touch. We'll touch on OEE, Overall Equipment Effectiveness. We're going to touch on alarm management, product genealogy, product traceability, product quality traceability. We're going to hit energy analysis, labor analysis. We're going to hit all of this stuff over quite a period of time, one step at a time. Today, we're really going to focus on the very beginning, which is a metric called OEE, Overall Equipment Effectiveness. That's where we almost always start; it gives us a window, a scorecard, into where we are today. We're also going to lead to where we almost always end, and that is the predictive awareness stuff. So, I'll give a handful of small examples at the end of how to achieve predictive performance, predictive quality, predictive material usage, and predictive availability as well. 

[1:11:55]

Again, almost every manufacturing efficiency project starts with Overall Equipment Effectiveness; it's not a new concept, so a lot of you are probably already familiar with it. OEE starts with availability: the measurement of how much operating time we actually got out of a machine or an asset compared to when it was actually scheduled to be operating. It does not take into consideration when it was scheduled to be down. Performance is the measurement of how fast, or how much product, I am making compared to how fast it should be. Quality is the measurement of good parts versus bad parts among the products that I actually made. And then quite often we'll see a fourth pillar in there, more often coming up in the future: a measurement of our material usage, sometimes referred to as mass balance. So: the availability of the machine, the performance of the machine, the quality of the machine, and also the efficient use of material through that machine. If we dig deeper, and we won't spend a lot of time on that today, we can compare OEE as a performance measurement to the true cost as well. So, we can weigh OEE versus the cost of capital equipment, the cost of material, the cost of labor, and the cost of the consumables, like electricity, that were used to make the product. So, OEE almost always starts off with a scorecard that's going to give us the front view of: How are we doing today? And that's going to lead us to where we want to go, where we want to assign resources. So, what we're going to do now, if we can, is run a quick video of a client of ours called Catania Oils.
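
For readers following along, the standard arithmetic behind the three pillars Jim names can be sketched as below. This is the generic OEE calculation with invented numbers, not the Catania Oils figures or Data Acuity's exact method.

```python
# Minimal sketch of the standard OEE arithmetic: OEE = A x P x Q.
def oee(scheduled_time, run_time, ideal_rate, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    availability = run_time / scheduled_time             # operating vs. scheduled time
    performance = total_count / (ideal_rate * run_time)  # actual vs. ideal output
    quality = good_count / total_count                   # good parts vs. all parts made
    return availability, performance, quality, availability * performance * quality

# Illustrative example: scheduled 480 min, ran 400 min, ideal 10 cases/min,
# made 3,600 cases, of which 3,420 were good.
a, p, q, score = oee(480, 400, 10, 3600, 3420)
print(f"A={a:.0%} P={p:.0%} Q={q:.0%} OEE={score:.0%}")  # A=83% P=90% Q=95% OEE=71%
```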

[1:13:26] Video Commentator

In this module of manufacturing data analytics, we will focus on Overall Equipment Effectiveness, or OEE, and introduce a metric called Loss Deployment. Data Acuity President Jim Desrosiers explains.

[1:13:42] Jim Desrosiers 

OEE as a metric provides us with a clearer understanding of the difference between the quantity of sellable product an asset could make versus the actual product that asset made. The key insight that we're looking to gain from this metric is a full understanding of which resources we should assign to which priority problems. OEE is a top level measurement of our efficiency, and it breaks down into three separate buckets.

[1:14:08] Video Commentator

In this example, Catania Oils is looking to identify potential efficiency gains, leveraging the tool sets from ICONICS and Data Acuity. Catania Oils' Dan Brackett explains.

[1:14:19] Dan Brackett, Catania Oils Vice President of Operations

The first thing we were looking to do was to determine how many cases per minute we were producing, so that we could determine how fast, or how efficiently, our lines were running, and that led us to ICONICS and to Data Acuity to better zero in on that information.

[1:14:35] Video Commentator

OEE is when we look at the potential of a machine versus what it's really producing. The key is to focus the right resources on the right problems. To do that, OEE breaks your performance into three buckets. These three buckets are availability, performance, and quality.

[1:14:51] Jim Desrosiers

Availability gives us a measurement of the amount of time an asset was operating compared to the amount of time that asset was scheduled to be operating. This does not account for the time that the asset was scheduled to not be operating. Performance gives us a measurement of the amount of product that the asset actually produced during the operating time, compared to the ideal amount of product that asset could have produced. Quality gives us a measurement of the good product versus the bad product, but only for the product actually produced. Drilling into these three buckets allows us to quickly analyze the true nature of the loss of efficiency. We're going to start by taking a look at a Pareto chart, which will list our losses of efficiency as either a quantity of time or a quantity of events.

[1:15:37] Video Commentator

The next level is to be able to correlate the scores to another set of data.

[1:15:42] Dan Brackett

We collect data regularly through ICONICS. And so, we get data from every line, from every product, and from every shift, and that data is accumulated in the database, and a report comes out every day for us to be able to determine how each one of those lines is running.

[1:16:00] Video Commentator

Another metric, often overlooked, and which is critical to making gains in efficiency, is Loss Deployment, which, unlike OEE, considers factors beyond simply the machine's operation. It simply asks: if we calculated that we can produce 500 bottles of oil per day, but we only produced 300, where did those 200 bottles go?

[1:16:18] Jim Desrosiers

Loss Deployment as a metric includes OEE, but it actually gives us much greater insight into the true loss of efficiency. For Loss Deployment, we're going to start by figuring out the total amount of product we could make in an ideal situation: 24/7, 365, no loss of efficiency. From there, we're going to break that down into five buckets. The first bucket is going to be how much product we actually made in reality, sellable product. The second bucket is going to be the loss of efficiency that can be attributed as the fault of the machine or the asset itself. The third bucket is going to be the loss of efficiency that can be attributed to the process around the asset, the "waiting fors": waiting for material, waiting for an operation, and waiting for instructions. The next bucket is going to be losses of efficiency that are attributed to required actions. For example, preventive maintenance is required, but it does cost us efficiency. A clean out or a changeover is required, but it does cost us efficiency. And the final bucket is going to be those losses in efficiency that were actually intentional. For example, we've scheduled not to run the line on a Sunday, or we've scheduled not to run the line during a break time. As we indicated earlier, the metric Loss Deployment fully includes OEE. But the additional buckets give us even greater insight into the potential loss of efficiency. With 30 years of experience deploying automation systems, we want to focus not just on a single machine fault or quality defect, but on the faults of the entire process. This insight allows us to capitalize on even greater efficiency gains.

[1:18:00]

So, as we talked about, OEE is something that is fairly common today in most plants. The challenge with OEE that we see specifically on the plant floor is, one: Who decided whether the machine should be operating or not? In other words: Was it scheduled to run or not scheduled to run, and who is the person that made that decision? That leads to a lot of finger pointing within a lot of plants. Second, on performance: Who decided what the actual goal, the ideal speed of an asset, is? There are typically three separate ideal speeds: the design speed, the spec speed, and the goal speed. The design speed is from the OEM that built the equipment: it was designed to run this fast. The spec speed is from the internal engineering department that says, actually, when we're running the blue product, we run it this fast. And then there's the goal speed, which is from the production people that say, "Well, in reality, our goal is to run this fast." So, the challenge with OEE is who made the decision about what the actual ideal was. Same thing for quality; quite often you have destructible samples. We're going to take an amount of product off the line and use it for quality testing. Who decided whether that was good product or bad product? That affects the score that the machine gets for OEE. The same thing happens if we look at mass balance, or material usage. If you walk through, for example, a food processing plant, you'll see all kinds of raw material on the floor. The question is, how did that get accounted for? So, what we want to do is figure out some way to take the finger pointing, the questions, out of the process by which OEE happens. 

[1:19:34]

And we introduced a concept called Loss Deployment. So, we're going to start Loss Deployment with an understanding of the actual amount of product we really could make in an ideal world. From there, we're going to move into trying to identify, if we didn't make that, where did that loss go? Where was it deployed? The losses are going to fall into typically four buckets, but you can define any buckets you want. Typically, they're going to be because of the machine; because of the process around the machine, which are quite often referred to as the "waiting fors"; because of required tasks, things like preventative maintenance and clean outs; or because of planned losses: we don't run the machine on a Sunday, we had a team meeting, we don't have any customer orders, things like that. So, we're looking to understand the impact of production cost, also versus production value. If we dig deep enough, Loss Deployment is going to let us understand not only our efficiency loss, but also whether we had an opportunity to generate even more revenue by running a more valuable product that had a lower production cost. The goal here, ultimately, is to get the right resources focused on the right problems. So, let's not spend time on the issues that are less important, and let's not give a list of tasks to a person that can't control them. So, here's the view of Loss Deployment we had from the Catania Oils example. Within that, the bottom two buckets, the green and the reddish, represent OEE; that's what we all look at today: OEE, our availability, performance, and quality. Within that red bucket, we're going to break out how much opportunity to make a sellable product was lost because the machine was down, how much was lost because the machine was running slow, and how much was lost because the product was bad, all because of the machine itself. And then we're going to do a Pareto analysis within each of those to get a greater understanding of the machine's faults. The important thing here is that this is information for the engineering staff and the maintenance staff. This is very good information, very valuable information. And every plant has that team of people already in place to take care of those machines. If we back off of that, back off of traditional OEE, and think about the maximum amount of revenue we could have generated from an asset in total, in a perfect world: we ran 24/7, we never ran out of materials, we always had staff on site, we never made a bad product. In utopia, what could we have generated? It turns out that there's a yellow bucket: an amount of efficiency lost because of the process. This is sometimes referred to as blocked or starved conditions. It wasn't this machine's fault; it was the machine feeding into it or the machine that it's feeding into. So, I'm waiting for the machine feeding me, or I'm waiting for the machine I'm feeding to: blocked or starved, upstream or downstream. I could be waiting for materials. I could be waiting for instructions. I could be waiting for the maintenance department. I could be waiting for the Quality Department to sign off on something. This is important information to the production supervisor. So, it's a different set of information going to a different person to address a different kind of problem. 
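
As a back-of-the-envelope illustration of the accounting Jim walks through, a minimal sketch follows. The bucket names come from the talk; every number is invented purely for illustration.

```python
# Hedged sketch of Loss Deployment accounting. Bucket names follow the talk;
# the quantities are invented, not from the Catania Oils deployment.
ideal_capacity = 1_000_000  # cases possible running 24/7, 365 at the ideal rate

buckets = {
    "sellable product": 520_000,  # what we actually made
    "machine losses":   140_000,  # downtime, slow running, bad product
    "process losses":    90_000,  # blocked/starved, the "waiting fors"
    "required losses":   70_000,  # changeovers, clean outs, PM, destructive tests
    "planned losses":   180_000,  # no Sunday shift, meetings, no customer orders
}

assert sum(buckets.values()) == ideal_capacity  # every case is accounted for

for name, qty in buckets.items():
    print(f"{name:>18}: {qty:>8,} cases ({qty / ideal_capacity:.0%})")

# OEE only covers the bottom two buckets (sellable product + machine losses);
# Loss Deployment extends the denominator to the ideal 24/7 capacity.
```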

[1:22:44]

The next bucket up, the blue one, is required losses. These are things like: I have to do a changeover. It costs me efficiency, but I have no choice; we have to change product. I have to do a clean out or a line clearance. I do preventative maintenance so that the machine will not break down unexpectedly. I might do destructive testing, quality testing, that costs me product, that costs me efficiency. This is information that's very important to production managers and maintenance managers. So, it's a different set of people that want to look at: Is this the appropriate amount of loss that's required? And then finally, there's the planned loss. And that is when we've made a decision to not run production. So maybe we don't run on Sundays; maybe we run two shifts instead of three shifts. Maybe there's a team meeting. But part of this also is a decision to run a less valuable product or a more expensive product to manufacture. So, if we put a Loss Deployment model on top of OEE, we can actually identify the lost revenue from a decision to run a product that is more expensive to manufacture. This is information that's fed to the scheduling department and plant management. And ultimately, we can turn it from quantity of products into dollars: What was the actual impact because of the machine, because of the process, because of required actions, and because of intentional decisions? So, I'm going to switch a little bit here and follow up in just a second. We're going to queue up the second Catania video. We started with OEE at Catania Oils, but we ended up with really unified visualization through their entire plant after we started with OEE. So, we're going to run the second video, and you'll see in there that there are terminals where they're pulling manufacturing efficiency information: at the rail yard where raw material comes in, at the tank farm, at the quality lab, in the blending area where they mix oils, on the bottling lines, on the fork trucks that move things around. It even goes all the way up to the purchasing department, which makes decisions on purchasing raw materials based on production levels and storage capacity. So, we can go ahead and run that second video.

[1:24:54] Video Commentator

In this module of manufacturing data analytics, we will focus on unified visualization. Catania Oils' Dan Brackett explains.

[1:25:15] Dan Brackett 

Once we started using ICONICS, collecting data, and maximizing OEE, this really opened up the opportunities for us to use the system for other things. We're able to collect information out at our rail yard, from our trucks in the yard, from our fork trucks, and from our production lines. Not only are we looking at efficiency, but we're also looking at data transfer between departments, concerning items in and concerning output.

[1:25:46] Jim Desrosiers 

Like most projects, our initial discussions with Catania Oils centered around a single specific challenge, in this case, more accurate and real time knowledge of production counts. We typically recommend putting the initial challenge aside to sketch out a long term, three to five year vision of data consumption, ultimately leading to the knowledge and wisdom we will gain from information coming out of the automation system. After considering that longer term vision of data and information, it is important to then turn back to the single, well defined, and manageable project that we started with, and then get started on that one initial piece. If we take this approach, we can build a modular, flexible, adaptable data architecture, which allows us to create, one step at a time, a unified visualization. This is what Catania Oils now has throughout their entire organization. It has proven to be a manageable, maintainable, and cost effective approach to automation data.

[1:26:56]

So, we'll wrap this up by moving to the end, which is trying to gain predictive awareness out of this data: not just reacting to the current situation, but in fact looking for what may happen next. I'd encourage you to look up something called the DIKW data model, which is data, information, knowledge, and wisdom. You'll find a lot of information on the web. We try to follow this through a project because it allows us to manage and organize the technical efforts of working with data. First, we want to figure out a toolset that's going to give us a cost effective way to collect and organize the data coming in: Hyper Historian, the Alarm Server. These are tools that can be used for this raw data collection. Second, we want to correlate that data to turn it into information: assign it to an asset, assign it to a batch, to an operator, assign it to a shift. And then, where most companies are today, we're going to take that information, plot it, and put it up on a graphical report that's going to turn it into knowledge for us. So, we're going to look at a chart and say it's not just information anymore; we can actually act upon the knowledge that's now in front of us. And finally, we want to reach towards wisdom, which is the predictive ability to have the system guide us and give us recommendations. Now, knowledge is where most people are today: we have graphical ways of looking at information and acting upon it, and we're reaching for wisdom: how does the system tell us what to do proactively? In order to get from knowledge to wisdom, there has to be some kind of insight. And we heard earlier some really fascinating things about AI as one tool to insert some insight here, to get us from knowledge to wisdom. I try to remind people: before we jump into really exciting new technologies like AI, let's take a step back and ask the experienced employees within the plant.

[1:29:04]

If you were to look at the knowledge we have in front of us today, what are the kinds of things you'd recommend that would help us predict the next step? So, let's start with learning from experienced people in the plant, and let's reach towards advanced tools like AI to get us from knowledge to wisdom. Finally, it's really important to make sure that we're correlating what we're monitoring. In other words, let's look at the Return On Investment, the ROI. And the best measurement of ROI is, in fact, OEE and Loss Deployment. That's going to give us an immediate snapshot of the return on the efforts we made when we put in wisdom systems. So, I'm going to run through, very quickly, an example of reaching towards predictive maintenance. This is a system in the automotive industry in which we're taking maintenance information out of their CMMS, the computerized maintenance management system. Today, most CMMS systems are purely calendar based; they schedule maintenance events by the calendar. If you look at the Gantt chart in the middle, this allows us to push into the CMMS: we want to schedule maintenance based on runtime as well. That would be like looking at the odometer on your car instead of the calendar for when to get your oil changed. So, we started with the calendar, and we added the ability to schedule maintenance based on runtime. In addition to that, thirdly, the top section of this shows you discrete events that may happen in a plant: alarms, quality events, setting changes. These might lead us to pushing maintenance actions as well. So, if I have x amount of quality events, I might proactively push a maintenance event in; this is predictive. So far, this is pretty easy stuff. On the bottom, you see process information that's lined up with the events and the runtime information. This allows us to look at real time process information and push scheduled tasks into the maintenance system. So, we may look at the torque on a motor; we may look at vibration on a motor that may lead to a shaft alignment requirement. So, this is reaching towards predictive maintenance. Predictive performance is kind of interesting. If you look at the top right, you can see that this particular cycle should have taken six minutes, and it was broken down into approximately 40 phases and 47 sub steps to do this one manufacturing process. If we compare the data set for one manufacturing process to another, maybe one machine to another, one part to another, one phase to another, we can actually figure out whether we are getting slower or faster. So, we can predictively figure out the correlation between actions and maintenance and how fast the machine is going, by analyzing the phase differences in a manufacturing cycle. Predictive quality: what you're looking at here is an inspection of welds on the metal frame of a seat. If we take a look at which weld numbers come up as good or bad the most often, we can predict what the quality of that product will be by looking at trends coming out of a weld. So, it's going to predict the quality before it actually goes bad, and lead us to take action on that. In that case, we were predicting based on quantity, but we also might predict quality based on process information. So, in maintenance, I said we may look at the torque of a motor. 
Well, in the case of the quality of something, a process like metal casting, for example, we might correlate good parts versus bad parts to things like metal temperature or cycle time, and predict quality based on trends in that information. And then finally, we're going to look at predicting material usage. This is a high altitude view of that casting plant. We want to schedule fork trucks to deliver raw materials and remove scrap materials without somebody driving around the plant in a fork truck looking for opportunities. So, we're going to proactively point out to them that within the next seven minutes, you need to be over here to pick up a scrap bin, and you need to be over there to deliver raw materials. So, we're going to do predictive material handling, again based on the correlation of machine process information, machine performance, and our production schedule.
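
To make the runtime-versus-calendar idea concrete, here is a minimal sketch of the odometer-style trigger Jim describes. The thresholds, asset names, and CMMS hand-off are hypothetical placeholders, not an actual ICONICS or CMMS API.

```python
# Hedged sketch of a runtime-or-calendar maintenance trigger, per the
# "odometer instead of the calendar" analogy above. All values are invented.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Asset:
    name: str
    runtime_hours: float          # accumulated run-hours since last service
    last_service: date
    runtime_limit: float = 500.0  # service every 500 run-hours...
    calendar_limit: int = 180     # ...or every 180 days, whichever comes first

def needs_service(asset: Asset, today: date) -> bool:
    over_runtime = asset.runtime_hours >= asset.runtime_limit
    over_calendar = today - asset.last_service >= timedelta(days=asset.calendar_limit)
    return over_runtime or over_calendar

# A discrete-event trigger could layer on top: e.g. push a work order after
# N quality events, regardless of runtime or calendar.
mixer = Asset("Blend Mixer 3", runtime_hours=512.5, last_service=date(2021, 6, 1))
if needs_service(mixer, date(2021, 10, 15)):
    print(f"Schedule maintenance work order for {mixer.name}")  # push to CMMS here
```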

[1:33:21]

I'm going to wrap up real quick here with just some highlights on how we typically handle projects. We quite often want to start from the top, not the bottom. By that I mean, start from a discussion around the screens, the reports, the consumption of data. Don't start from a discussion around the PLC and the tags. It's important to start all conversations at the very top and work down towards the data, so you don't design yourself into a corner. You want to organize your efforts around subject areas and job roles, not around how things were done in the past or how things are set up today. You want to make sure that you have an understanding of the long term possibilities; think about three to five years, what could we do with the system. But as Mark pointed out earlier, the really important thing is to then very quickly come back down to the very specific project we could get started with. So, we talk about the long term, but then we move to the very specific. Consider how to build data structures that are going to let us do correlation of data in the future; make sure we're flexible enough in our data structures to think in different directions. Agile project management is very important, because these things don't stop; we keep going and going to gain more efficiency. It's good to do a formalized assessment; that's great. Quite often it's impossible to have the time and the money to do that. And then finally, we want to keep an eye on the Return On Investment; again, that's where OEE and Loss Deployment come in. It reminds us to constantly go back to: Are the efforts we are making in line with the return we're getting out of them? And that's all I have. Thank you.

[1:34:55] Mark Hepburn 

Jim, thank you very much. Much appreciated. Data Acuity, Inc., a Gold Certified ICONICS system integrator. He's worked very closely with our parent company Mitsubishi Electric's customers deploying these applications. Great stuff. I want to start to close this segment, I know we're a little bit over time, with some highlights on the release we have coming up in November: 10.97.1. It sounds like a really minor release, but we keep packing more and more features in to make the product more scalable, better, and a really better value for everybody. So, I want to call out eight new features. We talked about Hyper Alarming: the Hyper Alarm Server and, new in 10.97.1, the Hyper Alarm Logger. Energy AnalytiX, which has major enhancements to the user interface and to the data model, with drill down in Energy AnalytiX out of the box. BACnet SC, secure connectivity: this is a huge step function in the building automation community; you're going to be hearing a lot about BACnet SC if you're in the smart buildings or building space; it's fully integrated and fully supports the standard. HTML5 3D, 3D graphics: you saw Chris present that; it's a technical preview, so it's not the entire full function; you don't have lighting and shading and some of these other capabilities yet. MQTT Sparkplug B: this is a message broker architecture, and we're supporting it in version 10.97.1. This message broker architecture, and MQTT generally, are becoming very important as we look at more efficient ways of managing distributed systems. The Melco Connector: we've added a number of new connectivity methods; we've added connectivity to the GOT and to the soft GOT. With OPC UA, of course, we were one of the first on OPC UA. Now in 10.97.1, we're providing OPC UA on Docker. This is designed for the Linux platform, so it goes directly into IoTWorX and scales for the future as we look at more Docker based technologies and Kubernetes. So, it's really great to see all of those 200 or so different control device connections right there on the Linux platform with IoTWorX. CFSWorX: we've added a number of features there; I talked about some of them earlier. Remote notification with SendinBlue, and the Azure SMS service, which is now big in a number of industries, like data centers, for instance; and Maximo, which is huge for CMMS. We can now connect to FDDWorX, not just alarms. CFSWorX continues to develop. Of course, all of these things are going to be illustrated this afternoon in our sessions. So, I want to start to wrap by re-answering this question: Why? And in one word: flexibility. And I'm going to ask our panel to double click into these over lunch, in the interest of time. ICONICS has built this platform to make it easier, faster, and better for you to deliver operational performance and productivity; whether it's in manufacturing, process control, or building automation, these same elements tend to apply. I want to thank our sponsors: Mitsubishi Electric, our parent company, with over 200 group companies, several of those here, including Mitsubishi Electric Automation in the United States, and we've got a number of people joining from MERLE and MEPPI; our valued system integrators, we are very happy to have Data Acuity and the hundreds of system integrators that we have worldwide; and all of our other partners in technology and original equipment manufacturers. Thank you very much. Please complete the survey at the event platform, and we appreciate you joining. 
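
For readers curious about the Sparkplug B mention, here is a minimal, hedged sketch of listening to Sparkplug B traffic over MQTT with the paho-mqtt Python client (1.x style API assumed). The broker address and group name are placeholders; real payloads are protobuf-encoded per the Sparkplug B specification, so decoding them requires its generated bindings.

```python
# Minimal sketch: subscribe to Sparkplug B messages via an MQTT broker.
# Broker host and group id are hypothetical; payload decoding is omitted.
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API

def on_message(client, userdata, msg):
    # Sparkplug B topic layout: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
    print(msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.local", 1883)  # hypothetical broker
client.subscribe("spBv1.0/PlantA/#")          # all Sparkplug messages for group PlantA
client.loop_forever()
```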

[1:39:42] Ryan Legg ICONICS Mid-Central Business Development Manager

Thank you very much, Mark. I appreciate the transition there. And, boy, a lot of content there. That was a heck of a session; there's more stuff in there than we'll ever be able to get covered. We're going on lunch break now, but Paul and I just wanted to wrap up a few things here before that. Mark answered the very simple question, "Why ICONICS?" And we had a slide up, and there are various different reasons. We can talk about a few of those here, Paul. I'll jump in and start. We talked about the scriptless HTML5 Thin Client technology.

[1:40:21] Paul Carter

And also, the new capability added to HTML5 in the upcoming release: the 3D visualization. And a lot of people may question: Why 3D? Those types of things. And there are some really good applications for 3D. We saw a great demo; one of the things that I've had some customers do is piping diagrams. So, you take a chiller plant, and that's a very complicated piping diagram. If you've ever looked at a two dimensional process diagram, there's no correlation between what the process diagram looks like and what the physical equipment looks like. With 3D, we can bring in that digital twin; the real world is the same as the software world, the digital world. So, a really powerful addition now with HTML5. That's going to be really great.

[1:41:09] Ryan Legg

Yeah, absolutely. The scalability, we've heard that all the way from the edge to the cloud. Another key factor. The augmented reality.

[1:41:19] Paul Carter 

That was kind of an interesting little demo.

[1:41:22] Ryan Legg

Yeah, just scanning the PLC and being able to see it right there. 

[1:41:27] Paul Carter

I think when the use of hands free tablets becomes more popular, whether it's something like HoloLens or the RealWear device, those types of augmented reality functions will make maintenance people that much more capable. And I think those will be wonderful.

[1:41:42] Ryan Legg

Some cool things coming for sure. How about building your own self-service dashboards? Very user friendly there; this is stuff that the user can do themselves, not something that has to be built by the original system integrator or the original equipment manufacturer. 

[1:42:00] Paul Carter 

And the fast deployment of dashboards, as Chris Elsbree showed us, using the cloning function, which I'm sure we'll talk more about, but that's a fabulous feature.

[1:42:07] Ryan Legg

Yeah, we talked about the GeoSCADA a little bit. Yes, very cool. The Hyper Historian; this was a topic that came up: a very high speed historian, with speeds of over 100,000 tags per second.

[1:42:22] Paul Carter 

It's incredible. In the end, we've redefined big data to massive data. 

[1:42:27] Ryan Legg

Fault detection and diagnostics.

[1:42:33] Paul Carter

Yes. And fault detection has historically been used in buildings, but I think we're going to see that there are more and more opportunities in the industrial environment. Absolutely. And couple that together with the Data Acuity presentation and talk about exposing what's going on in manufacturing plants.

[1:42:50] Ryan Legg

OEE is a very key factor for manufacturers. We heard about the new Hyper Alarming from Luke Gonyea.

[1:42:56] Paul Carter

We introduced the Hyper Alarm Logger, which will be out next month with the version 10.97.1 functionality, so that's going to be fantastic. Another thing that Mark had on his list was the ability to do redundancy. The Hyper Alarming function improves our redundancy capability on the alarming side, so we can support fully redundant applications for mission critical systems. Mark mentioned mission critical. Obviously, everything that ICONICS is doing is built on 64 bit technology, and that does a lot of things. It opens up the capability of the servers and the hardware that it's running on. But it also brings us, which we'll hear more about this afternoon, a much better ability to offer secure applications. Nowadays you almost can't secure a 32 bit application, but you can have better security with 64 bit technology. It's all in the name: GENESIS64.

[1:43:47] Ryan Legg

Universal connectivity. IoT ready. He talked about some new things coming, like Sparkplug B; this is something some customers I'm working with are starting to ask about.

[1:44:07] Paul Carter

And one of the things that ran through the entire morning presentation is how central this whole concept of developing by assets becomes, and all the ways that things plug into assets. If we think back 10 years ago, we never developed applications by assets. This has been a real sea change in building these advanced applications; the use of assets is really incredible.

[1:44:32] Ryan Legg 

Integrated alerting. We're talking about the alarms; we want to know when these alarms happen. We have integrated SMS messaging and email alerts. And the cool thing about this: it's built into our platform, but we can also interface with other people's products.

[1:44:53] Paul Carter

And then we have the added function of CFSWorX, which makes it that much more powerful. The integration of Building Information Modeling was also added; they call it BIM. That was talked about a little bit today. So that's a nice capability. And being a Microsoft partner, we are always maintaining compatibility with Microsoft operating systems, including things like Windows 10. And I think Microsoft has introduced Windows 11, so that'll be the next thing for our engineers to work on; whatever we're doing there, we will be on top of it.

[1:45:49] Ryan Legg

Absolutely. A devoted Microsoft partner.

[1:45:53] Paul Carter 

So really, a long list of answers to "Why ICONICS?": we've got a great set of capabilities and features. And again, going back to a little thing we talked about in the previous newsroom: Where does your imagination go? What can you do with all these tools? And how can they be leveraged to help you obtain the things that are important to your operation? 

[1:45:57] Ryan Legg

Well said, Paul. And with that said, I think we're going to take a lunch break now. So, those of you who are online, grab some lunch and we will see you in a little bit. 

[1:46:08] Paul Carter

During the next 30 minutes or so, there'll be a video playing, so if you want to sit in front of your computer, then have at it and watch the video loop through and then we'll be back in about 30 minutes and we'll get the afternoon sessions kicked off. Thank you all. We really appreciate your participation in Connect 2021. Thanks!