As the host of this session, Spyros Sakellariadis, ICONICS Global Director of IoT Business Development, explains what ICONICS does as a company, what changes have been made to its software, and how those changes leverage IoT, specifically to make ICONICS work better for the customer. He explains that ICONICS writes software that helps companies reduce their operating costs, become more efficient, reduce response times to incidents, and so on. ICONICS is writing software for a very specific, measurable target.

Video Transcript

[0:00] Spyros Sakellariadis, ICONICS Global Director of IoT Business Development

Well, welcome to the session this afternoon on IT and OT, "Riding the Wave," as Mark called it, on IT, OT, IoT, and the rest of them. My name is Spyros Sakellariadis. I work on IoT Business Development at ICONICS. I'll go through some very high-level slides around what ICONICS does, and then what changes we're making, how we leverage IoT, and specifically how we make ICONICS work better for the customer. A lot of the other sessions, which you have either seen this morning or will see through the live broadcasts, have a lot of detail on what we do. But just to make sure we're very clear at a top level: we develop software. We're not an SI. We're not an OEM. We develop software which we license to people, and our software is used to automate the monitoring and management of systems.

The important thing is why we do this. We're writing software that helps companies reduce their operating costs, be more efficient, reduce response times to incidents, and so on. We're doing this software for a very specific, measurable target. This is the energy consumption of a building on the Microsoft campus from January of 2012, if I remember correctly, through the end of last December. The only thing that was done in December of 2012 was add ICONICS. Nothing else: no additional sensors, no changes of procedures. The only thing that was done was add the software. Before then, energy costs were going up and up and up; after that, down, down, down. And that's true for the entire Microsoft campus. I can say that with some level of certainty because I created this graph while I was working for Microsoft.

What do we do? We monitor devices and equipment. We monitor the environment. It doesn't really matter whether it's a device in a building, like on the Microsoft campus, a piece of manufacturing equipment, or a piece of environmental equipment.
We monitor equipment to get the telemetry. And what do we do with that? The first thing we do is display it. We've got to get the data, then we display it. Here's a case: if you happened to be in the room exactly opposite, you'd be hearing a talk from the International Union of Operating Engineers, and they'd be showing you this display. They're monitoring the real-time data in their training facility. Or, in this case, we're monitoring the air quality in our offices in Foxborough. Again, one case was manufacturing or building equipment; this case is air quality, or operational efficiency. You've seen all these slides in other sessions. The point is quite simple: we get data, and we display it in a way that's meaningful to the audience. It doesn't matter what industry it's in, it doesn't matter what type of equipment; we have the tools to display the real-time information.

Now, why do you want to see that? Because you want to diagnose what's wrong, what's happened, what's inefficient. This is a case, again from the IUOE, where they're showing faults in that same system that was just displaying the real-time data. A fault is just a set of data showing you that things are not the way they're supposed to be. It could be a very simple thing, like the temperature is too high and you're about to blow a gasket. It could be a very complicated thing, where there are 15 different live data streams that are all in the wrong proportion, and it tells you that the bulldozer is about to break down and you need to get it in for maintenance long before your scheduled maintenance. It doesn't matter what rules you're defining, whether they're very simple "if/then" statements or complex machine learning models: detecting and diagnosing a fault is simply applying logic to the incoming data stream you saw.
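To make the "applying logic to the incoming data stream" idea concrete, here is a minimal sketch of an "if/then" fault-rule engine. All names, fields, and thresholds are invented for illustration; they are not part of any ICONICS API.

```python
# Illustrative fault-rule engine: each rule is a predicate over one telemetry
# sample; a fault is detected when the predicate returns True.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FaultRule:
    name: str
    predicate: Callable[[dict], bool]  # True -> fault detected

def evaluate(rules: list[FaultRule], sample: dict) -> list[str]:
    """Apply every rule to one incoming telemetry sample."""
    return [r.name for r in rules if r.predicate(sample)]

rules = [
    # A very simple rule: temperature too high, about to blow a gasket.
    FaultRule("OverTemperature", lambda s: s["temp_c"] > 95.0),
    # A richer rule combining two live streams that are in the wrong proportion.
    FaultRule("PressureFlowMismatch",
              lambda s: s["pressure_kpa"] / max(s["flow_lpm"], 1e-9) > 50.0),
]

sample = {"temp_c": 98.2, "pressure_kpa": 400.0, "flow_lpm": 2.0}
print(evaluate(rules, sample))  # ['OverTemperature', 'PressureFlowMismatch']
```

A machine-learning model would simply replace the hand-written predicates; the surrounding loop of "data in, logic applied, faults out" stays the same.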


Here's a case from Mitsubishi: we're displaying an elevator. Unfortunately, someone happens to be stuck in the fourth elevator shaft. It's not necessarily that you're about to blow a gasket over here; it's that someone with claustrophobia is screaming, stuck in the elevator. Again, it's just a fault that we're detecting with our software. Now, you've displayed the collected real-time data, and you've detected problems. We also historize it, where what we're showing is trends. Trends are useful because they can help you diagnose why you have a problem, but they can also show you that things are getting better or deteriorating, or that in the winter you need to do something slightly different than in the summer. So we've got displaying the real-time data, diagnosing problems, and displaying the historical information. That gets you to what in various other sessions was called, with the elevated word, wisdom; we usually call it insights. The point is you've got information about what is wrong. The next thing you've got to be able to do is fix it. There are two ways of fixing it. One, you get onto a computer and change what's called a setpoint, or whatever you're going to do. The other way is you dispatch someone to go in a truck and get the dead rat out of the damper. Part of what's happening when you're looking at faults is that, hopefully, they're coded in such a way that they tell you which are mechanical and which you can fix remotely. This is a case in the Microsoft Technology Centers where you can change the speed of an exhaust fan in a garage, for example, through the ICONICS software. So we've gone up; we've gone to command and control. And at the end, you're also empowering the frontline workers. This particular device is from a company called RealWear, and you can see all the ICONICS screens and so on in the visor there.
We've got a flashlight; we can talk back and forth with the people in the remote office or back in the head office. This picture is from a great video of a guy wearing the hat in the pouring rain, fixing something on an oil rig. So, we've empowered him. One of the last couple of things we do is raise alarms. A fault is a fault is a fault, but we also sometimes want to do something relatively drastic, like send someone a message that says, "Guy, get out there and fix this." So there's the alarming feature as well. And finally, we create reports. That's a very, very high-level view of the sequence of things that we do.

Before getting on to the IoT portion, I just wanted to close this section with one slide that says why it is that you save money with remote detection, for example. Basically, the reason is this: without remote detection, if a problem starts here, it goes undetected for a whole bunch of time. Someone understands there's a problem because a user sees water coming through the ceiling; someone runs over there, checks it out, starts working on it, and fixes it. So you've wasted water and time for this whole period. Or here, it's electricity. With remote detection, you detect the problem early. You can work out what to do through your system. The labor is much shorter, because maybe 80% of these can be fixed remotely. And you're done. You've saved time, you've saved energy, you've saved electricity, you've saved water, and so on. Alright, moving on.
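The savings argument above can be put into back-of-envelope arithmetic: waste accumulates the whole time a problem goes unnoticed, and a large share of faults can be fixed remotely. All numbers below are invented for illustration; only the "80% fixed remotely" figure comes from the talk.

```python
# Expected hours a fault keeps wasting resources, per incident, under a
# given detection delay and remote-fix rate (all inputs hypothetical).
def hours_of_waste(detect_h, travel_h, repair_h, remote_fix_rate=0.8):
    remote = detect_h + repair_h             # fixed from a desk, no truck roll
    onsite = detect_h + travel_h + repair_h  # truck roll required
    return remote_fix_rate * remote + (1 - remote_fix_rate) * onsite

# Manual: the fault sits undetected for ~3 days, and every fix is on-site.
manual = hours_of_waste(detect_h=72, travel_h=2, repair_h=1, remote_fix_rate=0.0)
# Remote detection: found in minutes, 80% fixed without a site visit.
remote = hours_of_waste(detect_h=0.1, travel_h=2, repair_h=1, remote_fix_rate=0.8)
print(f"manual: {manual:.1f} h, remote: {remote:.1f} h per incident")
```

The exact figures don't matter; the point is that detection delay dominates the waste, so even a modest remote-detection setup collapses it.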


I'm going to steal a couple of slides here from Microsoft. Firstly, what is IoT? How are we doing it in a way that is completely agnostic to what industry you're in, what part of the world you're in, and so on? This is just one of the standard slides from the Microsoft Azure team. How is it that you can do it without having just a manufacturing application or just a building application? How do you do it agnostic across industries and globally? How do you do it leveraging all the latest technologies, when new ones come out every week; whatever it is, 250 different discrete services in Azure alone that you could use?

So let's take a little look at the history. This is the traditional situation where, to use a technical term, you have things, and you've got something like a PLC that's getting data from the things using some sort of private network, getting it into your data center, where hopefully you're using ICONICS, and stuff is happening. That's the part I would call the old world. The next stage is the IoT world. The basic IoT world is that you insert a gateway in the middle. All this is happening on the private network; you then go through a public network to your application in the cloud. The difference between the pre-IoT and the current IoT world is simply that you're now bringing in the ability to go over the public network by adding a gateway in between. Now, the more exciting and latest development is that you're adding edge computing. In this case, here, we have this product called IoTWorX, which adds edge computing to the environment. You're still going through the public network. You're still going to the application in the cloud, but you've also enabled edge computing. And in the case of IoTWorX, not only are you doing edge computing, but you can deploy it from the cloud and so on, which makes a lot of sense. So, moving on, what is edge computing for Microsoft? This is where we get into the pure IoT stuff.
This is what Microsoft regards as IoT Edge: it all runs in IoT Edge, which runs on top of, say, Linux. It has a bunch of different containers, and ISVs can put software into these containers, which then allows you to do all sorts of stuff. With respect to ICONICS, this is what we've done: we've created IoTWorX, which runs in a container on IoT Edge. And we've got, for example, the Takebishi DeviceXPlorer you heard about, which gives us additional protocols. So these modules all get data from the devices. They talk through the rest of IoT Edge up to IoT Hub. It's beyond the scope of today's talk to go into a lot of detail on this, but this is where we do the edge computing. We do a lot of edge computing here in ICONICS, and we can do edge computing by talking to other modules here. For example, we could be doing edge computing here with ICONICS, getting data from devices and running some types of analytics, then passing it to Azure Stream Analytics to do additional types of analytics before it heads up to Azure IoT Hub.

Now, once you've done that, you also want to see what's happening. We have this product called KPIWorX, which can look at the data that's coming off the devices. This is KPIWorX looking at the data on the edge, from the edge, so you're still, as it were, on-premises. Now, moving it up, we want to get this data to something called IoT Hub. This is what you saw before: IoTWorX, which gets it up to IoT Hub. There are other types of devices coming out at high speed these days where the device itself can talk through a protocol like MQTT directly to IoT Hub. And then there are third parties, like, you know, Mitsubishi elevators, or Lutron power switches, and so on, that all have their own web service, and we can then get the data from that web service into IoT Hub.
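For the "device talks MQTT directly to IoT Hub" path, here is a sketch of the connection parameters such a device would use. The hub and device names are made up; the username and topic shapes follow Azure IoT Hub's documented MQTT conventions, but check the current documentation for the exact `api-version` string before relying on it.

```python
# Build the MQTT connection parameters for a device talking directly to
# Azure IoT Hub (names hypothetical; formats per Azure IoT Hub MQTT docs).
def iot_hub_mqtt_params(hub_name: str, device_id: str,
                        api_version: str = "2021-04-12") -> dict:
    host = f"{hub_name}.azure-devices.net"
    return {
        "host": host,
        "port": 8883,  # MQTT over TLS
        "client_id": device_id,
        "username": f"{host}/{device_id}/?api-version={api_version}",
        # Device-to-cloud telemetry is published to this topic:
        "telemetry_topic": f"devices/{device_id}/messages/events/",
    }

params = iot_hub_mqtt_params("contoso-hub", "occupancy-sensor-01")
print(params["telemetry_topic"])  # devices/occupancy-sensor-01/messages/events/
```

Any standard MQTT client library can then connect with these parameters (plus a SAS token or certificate for authentication) and publish telemetry that GENESIS64, subscribed to the hub, picks up downstream.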


The key is this thing here. All the data is aggregated here, and it's aggregated in such a way that our GENESIS64 subscribes to Azure IoT Hub. If you happen to be a serious Azure geek, you'll do all sorts of things with Stream Analytics and SQL Server, or get it here to do some of the stuff with Azure Functions. You'll talk to Azure Digital Twins. In fact, we've got a great session by Zhi Wei Li about our new next-gen product, which works with Azure Digital Twins, that you can look at later. But again, the basic thing is that any data coming from the edge can go through several different paths, this one, this one, or this one, to get the data up into Azure IoT Hub, where GENESIS64 gets the data from IoT Hub and does all the magic you heard about earlier.

Now, that's a quick slide. If you're a pure Azure IT engineer, you can use a tool and look at the data coming into IoT Hub. If you're more of an ICONICS person, you can look at the exact same data in the ICONICS Data Explorer. As you see, it is the exact same thing. I don't know if you can read it, but there you see the data coming from this occupancy sensor, and there you see the exact same data. So, the data then comes in, and you could see it in KPIWorX down there. But once you are up in Azure and using GENESIS64, we have our graphics tool that allows you to create very sophisticated screens. Here's a screen monitoring a power meter. It's coming up through the same channel, through IoTWorX or GENESIS64 acting as a gateway into Azure IoT Hub, and being displayed on a GraphWorX screen. So again, now we're just back to visualizing in the cloud, tying back to some of the stuff I showed you earlier. I talked to you earlier about showing faults; here's a case of that same data now being analyzed by a bunch of fault rules, with ICONICS giving you the alerts. Moving on: say you've done all that fantastic stuff, all the secret sauce, the magic in GENESIS64.
What enterprises want to do is connect that to the other things they have, whether it's Dynamics, or Maximo, or ServiceNow, or Salesforce, their CMMSes, or other tools like Power BI and Teams, and so on. We have products that allow connectivity from GENESIS to other enterprise systems. One of the big complaints we heard from a lot of companies was that they didn't want to deploy a purely siloed application that didn't talk to anything else. So this interoperability quotient of ICONICS is very important to us. And here's a case, so you know: those are ICONICS faults now being displayed in Dynamics 365 Field Service. At which point, you can do your schedule optimization. You can decide who's going to go fix it, who's available. Maybe the fault is here, and you've got six people around the world that can fix it, and the only one available happens to be in Paris, which isn't going to help you today here. So how are you going to deal with that?

Just to wrap this part up, there are essentially three different ways you could architect the solution. One is only on the edge, the legacy way, as it were. The second is basic IoT, with a gateway that sends everything to the cloud, where all the work is done. And the third way uses edge compute plus the cloud. The advantages of moving from left to right are way more than I could fit onto this small slide, but that gives you the basic start.


I want to play a video of the implementation that's used for building management at Microsoft. Microsoft has 770 buildings worldwide, give or take, depending on the day, and about 125 in Redmond. There's actually a debate on whether it's 125 or 160. The difference is that what you call a building depends on whether people are in it. So, if a garage doesn't have people in it, it gets called a structure as opposed to a building. If you see different numbers, the reason is that different people count them in different ways. But what I'm going to show you is a video of Tearle Whitson, who is the guy that manages all those buildings using ICONICS.

[19:15] Tearle Whitson, CBRE Senior Building Engineer

Good afternoon, and welcome to the ROC. This is the Redmond Operations Center. This space is primarily used for command and control, and it's where we host a lot of the building automation systems. We have also used and leveraged this space as the deployment and operational side of the energy smart building suite. Our energy smart building suite, which is the back end of the ICONICS platform, polls and aggregates all of the building data from the building automation systems into an Azure-hosted SQL database. That SQL database is then synced up to a data lake. On the front end, it allows a software suite to run set algorithms to analyze the existing fault detection process within the buildings and to determine where there are energy- and money-saving opportunities. A lot of the information you get was previously present in the building automation systems, but it is now aggregated into one system, which allows you to look at a lot of that information, across different areas and different places, all together at once.

One of the things we can see in this area is the baseload and the live demand for the campuses at any given time. This is where you get base information on total power consumption. At this point, this is segregated to Puget Sound, but we have portions of it for our Asia campuses, and we're currently working on deploying the European campuses right now. What you'll see in the live data is a really nice snapshot of information that you would normally have to get by pulling utility data from a utility company, pulling bills; you get it 30, 60, 90 days in arrears, and you never get real data. This is real-time consumption for the campus. And I can actually historize that information and look over time to see how the campuses come on and off. Has it shut down actively over nighttime periods? What did it do over the course of a weekend?
Do I see a trough where a weekend should be? Or did I have a large amount of assets coming on at some point over that weekend? Even some of these smaller anomalies I might want to explore. It looks like I had a small startup in the latter half of Saturday, and I may want to evaluate that. I can double-tap, pause, and zoom into that area to start to evaluate how much it changed. Was it, you know, up to a megawatt in that small area that I might want to evaluate in even more detail? I can take this information and go out to a trend analysis to look at it in more detail, but this gives me a really quick, high-level piece to analyze the campus at a glance. And again, there are some nice built-in metrics just to be able to look back. What did we do over the course of the month? Is it significantly varied week over week, month over month? It's nice to see, especially when we take seasonal shifts, coming from colder to warmer weather or warmer to colder: how did the campus handle that?

Same thing with our fault data: what's been the fault count over the campus over that time? That's one of the things I had noticed; I'll push it out to a year, and we'll see how well it responds. I think we've actually only had the aggregated fault numbers in this dataset since August. But looking at the data, what we've been finding is that over the cooler months we're finding a lot of extra faults. We're also finding a lot of change in the comfort index over that same period; average comfort has been dropping over the colder months as well. It's a nice piece where all of these sets of data are things that technicians and engineers can evaluate. But this dashboard allows not just my trained technicians, but facility managers, business executives, and data scientists, anybody coming through, to look at this at a glance and go, "Oh, I can do this with that dataset."
And it really allows a lot of different eyes and minds to say, "What can I do with it?" This becomes kind of the cornerstone from which the smart building and smart city process progresses: "What can I do with these pieces of information as I drill in, as I navigate the system back down into Puget Sound?" There are a lot of different methods of aligning this data and viewing it. The touchscreen allows for a lot of cool interactions, where you can just zoom in and zoom out. Some people like to operate out of a nav tree and hierarchy and see how things are connected, but a lot of times we've found that different people prefer just being able to zoom in and look at the campus at a glance.


This view allows me to see the heart of Puget Sound and the main cluster of buildings, and to see some high-level information: do we have utility power for the campus? If we don't have utility power, because occasionally we take power bumps and power transients, especially during inclement weather, do the generators start up? Do the life-safety generators come on? That way, when I'm dispatching technicians and teams, I send them to the right place instead of sending them to drive around campus and listen for generators. So you find that you're stepping 10 or 20 years into the future; only a few years ago, we were running the same process that might have been going on in the 1970s to find out whether, or how, the buildings were operating. Now we can actually look at live, aggregated data and at the systems to know how they're performing, what's going on with them, and whether there are events happening within the building envelope.

Within the building envelopes, we can really move around and drill into more detailed pieces of information. I can continue to look in my Puget Sound architecture to get my list of buildings. I can use this as well to navigate, letting it move me around over City Center. I can select the building, and it'll pop open a building navigation window that lets me see high-level information for that building and, at a glance, pieces of data that can really help you get a good snapshot of what's going on in that building. City Center happens to be our biggest building in the portfolio. It's about half a million square feet, and a little over 2,000 people are assigned to that building. That's only assigned headcount, not live headcount; live headcount is one of those data points we're looking at and testing, one that we'd like to track and be able to populate in real time. Then data quality: how connected is the system? There's always a question of data connectivity in our buildings.
Probably the single biggest piece of important information is how connected the buildings are, because equipment is often coming offline and project work is going on in the buildings, so there are a lot of pieces of data that we miss. Again, this information allows me to look at the energy for that building specifically, and I can expand that view into a larger window to get more detail. I can pause it to look back and do the same navigation, or I can drill in and look at specific details. This case is actually a bad point at which to evaluate the building, because the building is starting up strongly in energy. A lot of times that's because it's getting too cold outside, and first thing in the morning it doesn't soft-start the building over time; it starts all at once, bringing a lot of units on simultaneously, which causes a really large demand spike in energy. And that winds up being a bit of a higher charge from the utility.


Other pieces of information I want to see here could be comfort per floor, and I'll show you a little more detail on that in a second. We also see fault rules and the active faults for the building all up: how many pieces of equipment might have an active fault detection item currently present on the system. Typical fault rules are nothing crazy; a fault rule is an "if/then" statement, a comparison. It's just trying to compare data sets, one to another. It might be saying, "If this room is supposed to be at a certain temperature, and it's not achieving that temperature over a specific period of time, I might have a fault," whether I'm overheating or undercooling. In many cases, it might detect that I have a damper open, like we saw in the test setup, with that damper open and pushing cold air in, while at the same time I'm heating as well. That's simultaneous heating and cooling; even though I might have an occupant who is satisfied and happy, at the same time it's costing a lot of energy. It's this type of information that is the backbone of what we do with fault detection and diagnostics. This has been one of the core pieces we have focused on over the last three and a half to four years that this system has been live in production, and it has led to about six and a half million dollars of straight reduction in utility spend year over year, which is about 18%, considering the persistent savings from year to year.
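The two fault rules just described can be sketched in a few lines. This is a hedged illustration only: the field names, window, and thresholds are invented, not taken from the production system.

```python
# Rule 1: zone not achieving its setpoint over a period of time.
def setpoint_fault(samples, setpoint_f, tolerance_f=2.0):
    """Fault if every sample in the window misses the setpoint band."""
    return all(abs(s - setpoint_f) > tolerance_f for s in samples)

# Rule 2: simultaneous heating and cooling (damper pushing cold air in
# while the heating valve is also open).
def simultaneous_heat_cool(damper_open_pct, heating_valve_pct,
                           threshold_pct=10.0):
    return damper_open_pct > threshold_pct and heating_valve_pct > threshold_pct

# One hour of zone temperatures at 5-minute intervals, against a 72 F setpoint:
window = [75.1, 75.3, 75.0, 74.9, 75.2, 75.4, 75.1, 75.0, 74.8, 75.2, 75.3, 75.1]
print(setpoint_fault(window, setpoint_f=72.0))  # True: overheating fault
print(simultaneous_heat_cool(80.0, 45.0))       # True: wasting energy
```

Note that the second rule fires even when the occupant is comfortable, which is exactly the point made above: comfort and efficiency faults are detected independently.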


So it's actually been a major achievement for getting at Microsoft's energy consumption and meeting Microsoft's sustainability goals going forward. But a lot of our information and interest now is starting to look at user comfort and how to use this comfort information. One of the first things we did was to look at that at a floor level and be able to see those temperatures and that comfort level in a floor format. This allows us to see the current trends on a given floor. Typically, what we're looking for is our normal design range, the green zone here in the middle, 70 to 74 degrees Fahrenheit. Anything outside of that we'll highlight in another color, heat-map it, shade it, and look at that shading. This lets us see that it may or may not be that hot in the zone, but I might have some small area in there that I want to look at. I'll bring it down, and I can see that we actually have a setpoint of 72 degrees in that space, and we are running at 74 degrees, so two degrees above setpoint. In this case, the zone is going to continue to try to cool, but it lets me know that that zone is running just slightly above our range there.

This information is a nice piece at a glance, but it's floor over floor, and this lets me see one floor at a time in the 27-story office tower. That's only one building out of 125 in Puget Sound, and the 160-plus that we're now connected to across the globe. If I turn on the fault indicator, I can highlight the zones where I have active faults. In that zone, I can select a fault, and I can see that I've just got a setpoint fault. It may just be that the setpoints for that zone are configured outside of our allowable range, but it allows me to take action on it; meaning, as soon as I see it, I can create a work order.
And this information provides the baseline data fields that we would submit to the CMMS and push out to drive a work order. Then we would have a technician respond to that work order and make a correction.
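The floor heat-map coloring described above (a green design band of 70 to 74 degrees Fahrenheit, everything outside highlighted) reduces to a small classification function. The band edges come from the talk; the function and the color names for out-of-band zones are hypothetical.

```python
# Classify a zone temperature into a heat-map color: green inside the
# 70-74 F design range, other colors (assumed here) outside it.
def comfort_color(temp_f: float, low: float = 70.0, high: float = 74.0) -> str:
    if low <= temp_f <= high:
        return "green"
    return "blue" if temp_f < low else "red"

floor_temps = {"zone_101": 72.0, "zone_102": 74.5, "zone_103": 68.9}
print({zone: comfort_color(t) for zone, t in floor_temps.items()})
# {'zone_101': 'green', 'zone_102': 'red', 'zone_103': 'blue'}
```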


And it's really about where we want to go with the data: as much as possible, getting to the point where a technician has fewer steps in between, transferring data and looking at one interface and then another. If you can come right from this live system and push that fault out there, then the technician can drive a lot faster directly to a fault resolution than if he had to open this system, then open another system, come over here, punch in a ticket, and move back out without that direct data connection. It carries in the equipment type: what's the asset? What's the problem type? What are the categories? What are the savings numbers and data fields on the other side? It allows us to see a lot of information at a glance and start to get a good feel for where we need to get people moving on problems, and how we promptly analyze bigger problems that might be on campus at any given moment.

[32:51] Spyros Sakellariadis

All right, so that was just a live demo that I recorded on my mobile phone, probably about five years ago. He mentioned the savings; as of right now, they're saving about $9 million a year on their electric bill with this system. They're monitoring 456,000 data points at a five-minute interval, applying a host of fault rules, and basically fixing the faults and getting all those benefits.
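The monitoring scale quoted here is worth a quick back-of-envelope calculation: 456,000 data points sampled every five minutes is a substantial ingestion rate.

```python
# Arithmetic on the quoted monitoring scale: 456,000 points every 5 minutes.
points = 456_000
interval_s = 5 * 60  # five-minute sampling interval, in seconds

samples_per_second = points / interval_s
samples_per_day = points * (24 * 60 * 60 // interval_s)  # 288 polls per day

print(f"{samples_per_second:.0f} samples/second")  # 1520 samples/second
print(f"{samples_per_day:,} samples/day")          # 131,328,000 samples/day
```

So the dashboards shown in the demo sit on top of roughly 1,500 incoming samples per second, or over 130 million per day, which is why the aggregation-into-one-system step matters.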


The next thing I want to show you is an interview with a construction company in France called Bouygues that is deploying our latest software, which is based on Microsoft Azure Digital Twins. This is a pilot that's just starting up, and I want to play an interview with Bouygues as they talk about what they want to do with the software and what sort of benefits they expect to get.

[34:14] Thomas Stadler, Bouygues Energies & Services Switzerland Chief Digital & Innovation Officer

Could you please introduce yourself and explain what your role is at Bouygues Energies & Services?

My pleasure. Bouygues Energies & Services Switzerland is a subsidiary of Bouygues Energies & Services. We are part of the Bouygues group, a leading construction company headquartered in Paris. As "Energies & Services" already states in our name, we are the part of Bouygues concentrating on energy generation and transportation, as well as all services around the building, from facility management to HVAC and, of course, building automation. In Switzerland, we have about 5,000 employees and a revenue of about a billion. We are the leading independent provider of building automation, and we also provide industrial automation to our customers.

[35:14] Could you please describe some of the challenges you are trying to address? 

Our challenge is that we see more and more customers asking for integrated solutions. Many customers have multiple building automation systems in their buildings and need to manage multiple buildings. Often, within a single building, they have a variety of building management systems, sensors, and the like. So getting an understanding of what is actually happening in the building, and how to manage and automate the building, is a challenge for our customers. Even more so if you consider carbon dioxide reduction and energy management, where you want to execute your strategies across a portfolio of systems or buildings. What we are looking for is a solution we can offer to our customers that ensures they can run their buildings seamlessly, which means we need to be able to orchestrate building automation systems and other systems in the building seamlessly, in order to apply our strategies and fulfill customer requests for centralized, automated building management.

[36:35] Where is the project located and what are the ultimate goals of the project? 

We have multiple systems in one building, and we also have another site in France that provides information into the same proof-of-concept platform. The goal is to prove that the concept is working: that we can have multiple buildings connected via the ICONICS digital twin into that platform, to do all the wonderful things of orchestration, visualization, and analytics in one place.

[37:16] What are the next steps of the project? 

So, we are already planning the next phase. The next phase of our proof of concept will be to add more systems, which is an easy one, and to get a pilot customer and roll the system out to them, across multiple buildings, not just one, in order to test the application of an overarching strategy. So, the building orchestration system: orchestration is the key word here. We want to be sure that if a customer decides, for whatever reason, energy consumption, CO2, that he wants to apply one rule to all buildings, we can implement this rule; that we have the sensors and the actuators available to really make this happen, from a strategic directive down to the individual building management system, coherently, in one go, without going to multiple buildings, hundreds of buildings in some cases, and ensure that the strategy is implemented. To take a very simple example: if you decide that Friday is the home-office day, you could say, "Okay, starting Thursday evening, we reduce the ventilation, the heating, and the cooling for all our buildings, and we ramp up again on Sunday." In our vision, with a building orchestration system, you do this once in our Azure ICONICS platform, you apply that rule, and automatically all those building management systems and the other sensors work to ensure that the strategy is implemented across the portfolio of all the buildings. So, to measure where we are, and then to actually regulate, to act upon the strategy.
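The "one rule, all buildings" home-office example above can be sketched as a single central decision pushed down to every building. Everything here, the building names, setback window, and function names, is invented for illustration; it is not part of the ICONICS or Azure platform.

```python
# Portfolio-wide HVAC setback rule: reduce from Thursday evening, ramp up Sunday.
SETBACK_START = ("Thu", 18)  # Thursday 18:00
SETBACK_END = ("Sun", 18)    # Sunday 18:00
DAY_INDEX = {d: i for i, d in enumerate(
    ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])}

def hvac_mode(day: str, hour: int) -> str:
    """Return the mode the portfolio-wide rule dictates at this time."""
    t = DAY_INDEX[day] * 24 + hour
    start = DAY_INDEX[SETBACK_START[0]] * 24 + SETBACK_START[1]
    end = DAY_INDEX[SETBACK_END[0]] * 24 + SETBACK_END[1]
    return "setback" if start <= t < end else "normal"

def apply_rule(buildings: list[str], day: str, hour: int) -> dict:
    # One central decision, pushed down to every building management system.
    mode = hvac_mode(day, hour)
    return {b: mode for b in buildings}

print(apply_rule(["HQ-Paris", "Plant-Lyon", "Lab-Geneva"], "Fri", 9))
```

The orchestration layer's job is exactly this fan-out: the rule is stated once, and each building management system receives its mode without anyone visiting hundreds of buildings individually.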

[39:26] How would you describe what digital twins is doing for you in your project and how are you applying it? 

Well, we need a real representation of the buildings with their complexity. Today, at least in a building management system, the building only exists through the available actuators and sensors of the building management system. Any building has far more capabilities and sensors available. So, we really want to represent the building as fully as we can for an orchestration strategy. If we have sensors measuring, for example, the air quality, these should be included in a strategy for the ventilation. Maybe the building management system is not equipped to do that; additional sensors will provide it, and we need to understand where those sensors are, where the sensors of the building management system are, and how all of that is represented in real life, to understand relationships and interactions. And eventually, if you take this further, one day we would like to simulate our strategies before we apply them, so we would be able to really test some of our strategies in this digital twin and see how they would play out: let's say we change the ventilation, and we would like to know what the air quality does. Hopefully, one day, we will be able to do just that and actually advise our customers: if you do this, our digital twin, our model, predicts that this will happen. That would be an additional value we are looking for in the longer term.
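To make this concrete, a sensor that sits outside the building management system can still be represented in Azure Digital Twins via a DTDL model. The sketch below is a minimal, hypothetical example; the model IDs, property names, and relationship are invented for illustration and are not Bouygues' or ICONICS' actual models.

```python
import json

# Hypothetical DTDL v2 interface for an air-quality sensor that is not
# part of the BMS, plus a relationship locating it in a room, so the
# twin captures more of the building than the BMS alone exposes.
air_quality_sensor = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:example:AirQualitySensor;1",
    "@type": "Interface",
    "displayName": "Air Quality Sensor",
    "contents": [
        {"@type": "Telemetry", "name": "co2ppm", "schema": "double"},
        {"@type": "Property", "name": "installedBy", "schema": "string"},
        {
            "@type": "Relationship",
            "name": "locatedIn",
            "target": "dtmi:example:Room;1",
        },
    ],
}

# This JSON document is what would be uploaded to the Azure Digital
# Twins model store.
model_json = json.dumps(air_quality_sensor, indent=2)
```

The relationship is what lets a strategy reason about interactions, e.g. "ventilation in the room this sensor is located in."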

[41:27] What kind of goals are you trying to achieve? 

We need to be able to manage the building with all its different systems in a better way in order to achieve a cleaner building, which is only viable if you consider all the input factors: the people, the building itself, the weather outside. All of these need to work together to give a clear path to a more energy-efficient building. Simply reducing the ventilation or the temperature will not help if, on that given day, you have a huge assembly of workers in the building and you're not creating the right conditions. People will not be happy if we are energy efficient but the comfort is not there. So, again, the orchestration system will give us the capability to understand all the different factors: the weather, occupancy, maybe even where which tasks are done. Do we have a gathering in a cafeteria, or are the people working in their cubicles? All this information needs to feed into the building management system, and then, in terms of orchestration, with all the sensors, we will be able to execute better strategies for that given setting. So that is a long answer to the simple question: are you expecting efficiency gains, and how much? Yes, we are expecting efficiency gains, and yes, we would hope they are double digit. Obviously, it will depend on what is available in the system in terms of equipment and other sensors, and also on the strategy and the efficiency already achieved. So yes, gains are expected, and I would say we're looking for something in the double digits. Our expectation is that for the majority of our customers that use standard systems and standard protocols, implementation should be fairly quick and easy. Obviously, we will always come across customers that have special or outdated legacy systems, and there we really trust that the depth and breadth of the technical solutions available will ensure that we can still get these customers onboarded in a reasonable amount of time.
So, in a nutshell, we're looking for reuse. We're looking for low-code, no-code deployments. And obviously, we want a partner who can help us develop the standard adapters, so that hopefully we will have something which becomes plug and play for our customers.

[44:34] How do you strike a balance between comfort and sustainability? 

We are looking for a balance, let's say, between comfort, sustainability, and energy efficiency. Each customer will have a different balance, and each customer may emphasize some aspects more than others. If you have a photovoltaic roof, maybe energy consumption is not your primary concern, because you're already producing your own energy and have a surplus, so ventilation, which costs energy, is not your issue; maybe your issue is water. We really need to understand, for each individual customer, how they want to strike this balance. And I think the beauty of the Azure and ICONICS system is that not only will we have this building orchestration, but we will have all the powerful tools to do visualization and analytics and, as I said earlier, simulation. So, within one platform, we can not only tell our customer, "Look, here's what your building looks like today," but also, "Here's what your building, or your buildings, would look like when you apply this strategy." And the customer will get visual aids to really understand the sometimes rather complex dependencies and linkages between some of the choices he needs to make.

[46:20] Does the system meet your security requirements?

Yes, our CSO's department has reviewed the overall solution. As we have already been a customer of Microsoft and Azure for a long time, we're very confident that the underlying platform is secure. We had a conversation with our CSO about the solution we are building here. It was reviewed, and we got the stamp of approval. We are good to go.

[47:00] Can you tell us the reason(s) you chose the ICONICS/Microsoft solution?

Microsoft and ICONICS won a tender; we drew up a rather long, elaborate list of requirements. We wanted to make sure that we have a partner who not only fulfills the technical requirements, but also shares our view in terms of cloud and sustainability, and companies with whom we know we can have a long-term partnership, preferably somebody we already know. It is no surprise that, given the requirements, the joint vision of Microsoft and ICONICS, and the long-term relationship we already have, ICONICS came out on top in our selection process.


Okay. That project is just starting up, compared to the one from Tearle with Microsoft, which has been going since 2012. They're both building projects. You've seen a lot of examples from manufacturing, oil and gas, utilities, and so on in other sessions. But what I wanted to highlight between the two is that ICONICS has been around a long time, and it's driving actual value today for customers in the realm of cost reduction, efficiencies, and so on. Bouygues also has the profile of a construction company, whereas at the moment Microsoft is just operating; so Bouygues is building with what they want to achieve in mind. And the exciting thing for us at ICONICS and for Microsoft is that this is the latest generation of OT on top of IT, using PaaS and SaaS services from Microsoft and Azure Digital Twins. So, with that, I just wanted to thank our sponsors. We have a lot of great sponsors for this whole conference, and you will have seen many of them in the sessions here. A call out, of course, to Mitsubishi, of which we are a part, and to Microsoft. And I wanted to also play a video from Advantech, one of the major sponsors of this conference.

[49:45] Nathan Smith, Advantech Channel Sales Director

Hi, this is Nathan Smith with the Advantech IoT group. As an ICONICS alliance partner, I'm here to talk about how Advantech industrial computer hardware and I/O products support the ICONICS edge-to-cloud software solution. I will start at the edge with Advantech wireless I/O devices. Advantech offers a wide variety of wireless I/O technologies. These devices support several communication protocols, including MQTT, OPC UA, and Modbus, for connectivity. For edge computing, Advantech provides several UNO industrial fanless PCs that have the ideal form factor for an IoTWorX-enabled intelligent edge solution. These gateways can connect to the ICONICS cloud solution or local GENESIS64 software via LAN port, Wi-Fi, or cellular connections, depending on the application. Next, Advantech offers a wide range of touchscreen panel PCs, from 6.5 inch up to 24 inch, that support ICONICS GENESIS64 software for smaller single-server installations, perfect for replacing or upgrading existing HMIs on a plant floor, for instance. Advantech also offers industrial-class servers to support a full-fledged SCADA system for on-premise GENESIS64 deployment, as well as data historian support. Thanks for listening.

[51:08] Spyros Sakellariadis, ICONICS Global Director of IoT Business Development

All right. So, with that, we've got a few minutes left for a Q&A. And let's see if we have any questions from the audience here.

[52:22] Megan Courtney, ICONICS Marketing Coordinator & Social Media Specialist

You did receive a couple questions. The first question is, "Can you talk a bit more about the difference between using IoTWorX and GENESIS64 as an on-premise gateway?"

[52:34] Spyros Sakellariadis 

Sure. If you remember, in one of my earlier slides, I said there are multiple ways of getting data out of on-premise devices into the cloud. Up until about a year ago, the way we did it was to deploy some of the services of GENESIS64 onto an on-premise computer, configure the data connectivity to the devices, pull the points in, decide which points we're going to publish, and point it at IoT Hub. That worked flawlessly. I had one small Intel NUC pulling 15,000 points from a couple of buildings and pushing them up to IoT Hub. Now, the advantage of that is very simple: you can deploy it on anything; if you have local access to it, great; if you can RDP into it, great. The disadvantage is that you have to deploy it locally; you need someone who's going to go and install all the software, and to maintain it you have to either have remote access into the box or be able to get onto the box locally. In some cases that's a problem; in other cases it's not. IoTWorX is our more modern version, which you can deploy and manage from the cloud, and you don't have to have any local access to that box. The functionality of the two is very similar. You're again discovering the devices that you are going to publish data from, deciding which of those you are going to send up to the cloud in what we call a published list, and off you go. So, the functionality is very similar. The difference is: do you have an environment where you have local access to the devices, and it's not prohibitively expensive for you to access them locally, or your security processes allow you to RDP into them? The advantage of IoTWorX is that you don't have to have local access. You can deploy thousands of them using templates from IoT Hub and GENESIS64 in the cloud, so it's much more efficient with respect to management and deployment. But it all depends on which way you want to go.
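The "published list" idea, deciding which discovered points get sent to the cloud, can be sketched as a simple filter over the gateway's point set. This is a schematic illustration in Python, not the IoTWorX or GENESIS64 API; the tag names and values are invented.

```python
# Hypothetical points a gateway has discovered on local devices,
# keyed by tag name with their latest sampled values.
discovered = {
    "Building1/AHU1/SupplyTemp": 18.4,
    "Building1/AHU1/FanSpeed": 72.0,
    "Building1/Meter1/kWh": 10542.0,
    "Building1/AHU1/Diag/RawRegister": 31,
}

# The published list: only these tags get forwarded to IoT Hub;
# everything else stays local to the gateway.
published_list = {"Building1/AHU1/SupplyTemp", "Building1/Meter1/kWh"}

def to_publish(points: dict, allow: set) -> dict:
    """Keep only the points that appear on the published list."""
    return {tag: value for tag, value in points.items() if tag in allow}

payload = to_publish(discovered, published_list)
# `payload` is what a gateway would serialize and send up to IoT Hub.
```

The design point is the same in both products: discovery happens at the edge, and the published list is the explicit contract for what leaves the premises.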

[54:03] Megan Courtney

Alright, our second question is, “How does GENESIS64 scale as you add more devices?”

[54:10] Spyros Sakellariadis 

Well, the answer is, "It does." You can have a very small GENESIS64 sitting in the cloud managing one very small building. In my case, I have GENESIS64 in the cloud managing about 10 devices in my home. As you scale up, take the example that Tearle was showing you: 456,000 objects across, in that case, 125 buildings in the Microsoft Puget Sound campus. If you think of the scale elements: on-premise, you scale the gateways. You can add 1, 2, 3, 4, 5 gateways, and on any one gateway, depending on how often you're getting data and so on, you might put 20 objects, or you might put 20,000. You'd have to actually test it to see how many you could run, and you've got to work out the licensing for how many devices you're going to license on it. But on-premise, you scale by adding gateways as you need them. Pushing it up to IoT Hub, IoT Hub scales just by increasing the number of IoT Hub units you use. That one is, for our purposes, essentially infinite; there is an upper limit, but it's got too many zeros for me to know what to call the number. Once we push it into GENESIS64, you come up with architectures: in a small one, for example, you might just have one box that does all the functions. As you grow larger, you might start to differentiate the functions: a front-end server running some of the services and a back-end box running others. You might decide to have four front-end servers, two historians, and some other boxes. So, you can deploy those boxes in a number of different ways and optimize depending on what you want to do, but it scales, basically, at the moment, by adding VMs. And our future product, the one that Bouygues is running, scales by increasing the number of units of the various services.
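The on-premise scaling arithmetic, adding gateways as the object count grows, can be sketched in a few lines. The per-gateway capacity here is a made-up planning number; as noted above, in practice you would test and license the real figure.

```python
import math

def gateways_needed(object_count: int, capacity_per_gateway: int) -> int:
    """How many gateways a site needs at a given per-gateway capacity."""
    return math.ceil(object_count / capacity_per_gateway)

def partition(objects: list, capacity_per_gateway: int) -> list:
    """Split an object list into per-gateway batches."""
    return [
        objects[i:i + capacity_per_gateway]
        for i in range(0, len(objects), capacity_per_gateway)
    ]

# At a hypothetical 20,000 objects per gateway, the 456,000-object
# campus example would need 23 gateways.
campus_gateways = gateways_needed(456_000, 20_000)  # → 23
```

The same ceiling-division shape applies on the cloud side: IoT Hub scales by adding units, and the service tier's per-unit message quota plays the role of the capacity figure.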

[56:20] Megan Courtney

Thank you. It looks like we've received one more question. They asked, "Why would you use ICONICS rather than writing your own fault rules in Azure Stream Analytics and using applications such as Power BI?"

[56:39] Spyros Sakellariadis 

Well, I tried it, and you really don't want to write your own. Writing your own means you're still deploying some sort of gateway and Azure IoT Hub and so on, but then you are subscribing to the IoT Hub with either an Azure Function or Azure Stream Analytics, and you've got to start writing code. That can be quite simple, or it can be hideously complex; your Stream Analytics T-SQL code might be three pages of SQL. Now, that's one thing. But imagine you have 3,000 fault rules. You don't want 3,000 separate jobs, each with pages of SQL code. And then, of course, you have your operator engineer, who happens to be on an oil rig wearing a hard hat, who wants to get in and change a fault rule. You really don't want him to have to get root access to your SQL Server, know T-SQL, and work out that it's line 306 of your procedure that needs changing. So, you can write your own, but it's a horrendously complex thing to manage afterwards. It's a standard build-versus-buy comparison. What ICONICS does is make it simple. It takes care of the security, and it takes care of allowing people with the right level of skills and the right security level to manage and use the system, as opposed to writing your own, which, for a bunch of IT coders, will hopefully work.
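The contrast being drawn, one hand-written SQL job per rule versus a managed engine, comes down to making rules data rather than code: the engineer edits a threshold, not a stored procedure. The sketch below is a hypothetical illustration of that idea, not how ICONICS' fault-rule engine is actually implemented; the tags and thresholds are invented.

```python
# Hypothetical rule table: each rule is a row of data, so changing a
# threshold is an edit to this table, not to SQL code.
RULES = [
    {"tag": "AHU1/SupplyTemp", "op": "gt", "limit": 30.0,
     "fault": "Supply air too hot"},
    {"tag": "AHU1/FanSpeed", "op": "lt", "limit": 5.0,
     "fault": "Fan stopped"},
]

# The tiny set of comparison operators the rules may reference.
OPS = {
    "gt": lambda value, limit: value > limit,
    "lt": lambda value, limit: value < limit,
}

def evaluate(rules: list, sample: dict) -> list:
    """Run every rule against one telemetry sample; return fired faults."""
    faults = []
    for rule in rules:
        value = sample.get(rule["tag"])
        if value is not None and OPS[rule["op"]](value, rule["limit"]):
            faults.append(rule["fault"])
    return faults
```

One generic evaluator then scales to 3,000 rules as 3,000 rows, which is the manageability gap between hand-rolled Stream Analytics jobs and a purpose-built engine.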