Pushing the Micro-Services Principles to the Edge

August 29, 2018

Tags: Azure  IoT  IoTEdge  Architecture  Microservices 

Micro-Services on Azure IoT Edge

Azure IoT Edge

Azure IoT Edge is an Azure service, built on top of Azure IoT Hub, that shifts the heavy analytics load to the Edge. There is great potential in Azure IoT Edge, as it promises to reduce latency, improve security, and reduce cost. I presented on Azure IoT Edge not long ago and you can see the slides here. However, many people wonder, and I have been asked a few times, how it all works: how do you shift the workload from the cloud to the Edge, and how do you operationalise such modules?

I have been lucky enough to be working in this field for the last few months, and we have faced some interesting challenges. In this blog post, I will share the architecture we are using to push the machine learning load to the Edge. The scenario I share here comes from the mining sector, but it can easily be adapted to others; I have used a similar architecture in manufacturing.

The Scenario

Generally speaking, in the mining sector there are normally lots of sensors in the field, all of them generating data. Typically, this data is stored on site in a system like an OSIsoft PI Server. The client wants to be able to experiment with and productionise as many machine learning models as needed, as quickly as possible. In this scenario, the data is gathered and stored in an OSIsoft PI Server at each site and then pushed to an Azure Data Lake.
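As a rough illustration of that ingestion step, here is a minimal sketch of pushing one exported sensor batch into the Data Lake. It assumes ADLS Gen2 and the azure-storage-file-datalake package; the account, filesystem, folder, and file names are hypothetical placeholders, and in practice this would run as a scheduled job on site.

```python
# Minimal sketch: upload a batch of exported PI Server data to Azure Data Lake
# Storage Gen2. The storage account, "sensor-data" filesystem, site folder,
# and file name below are hypothetical placeholders.
import os
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=os.environ["ADLS_ACCOUNT_KEY"],  # or an Azure AD credential
)
filesystem = service.get_file_system_client("sensor-data")

def upload_batch(local_path: str, site: str) -> None:
    """Upload one exported sensor batch under a per-site folder."""
    file_client = filesystem.get_file_client(f"{site}/{os.path.basename(local_path)}")
    with open(local_path, "rb") as data:
        file_client.upload_data(data, overwrite=True)

upload_batch("pi_export_2018-08-29.csv", site="mine-site-01")
```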

The Machine Learning Models

The Data Science team works with the data in the Azure Data Lake to perform feature engineering and train the machine learning models. Once a model is built, we use Azure Machine Learning Services to manage the models and their versioning. For more info on this, see the Channel 9 talk on Azure Machine Learning Services:

Azure Machine Learning Services
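As a hedged illustration of the model-management step, here is a minimal sketch of registering a trained model with Azure Machine Learning Services using the azureml-core SDK. The model path, name, and tags are hypothetical placeholders; the key point is that re-registering under the same name automatically creates a new version.

```python
# Minimal sketch: register a trained model with Azure Machine Learning
# Services so it gets a name and an auto-incremented version.
# Assumes a config.json with the workspace details is present.
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()  # reads subscription/workspace details from config.json

model = Model.register(
    workspace=ws,
    model_path="outputs/grind_circuit_model.pkl",  # local path to the trained model (hypothetical)
    model_name="grind-circuit-throughput",         # hypothetical model name
    tags={"site": "mine-site-01", "framework": "sklearn"},
)
print(model.name, model.version)  # each re-registration bumps the version
```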

Once we have the machine learning models, we need to operationalise them.

Operationalisation of the ML Models

To operationalise a machine learning model, we build an automated data pipeline that feeds data to the model in near real-time and then actions the model's output. Generally speaking, the output from the machine learning models is used in reporting and decision-support dashboards. In our case, we aim for a fully autonomous system, which is why we integrate directly with the DCS (Distributed Control System). This means the results of the machine learning predictions are actioned and displayed in the Operational Technology (OT) systems. This integration happens over the OPC UA protocol.
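To make this concrete, below is a minimal sketch of what the scoring piece of such a pipeline can look like as an Azure IoT Edge module, using the azure-iot-device SDK. The input/output names, model file, and payload shape are all hypothetical; the idea is simply that telemetry arrives on an input, the model scores it, and the prediction is routed onwards, for example to a module that writes back to the DCS over OPC UA.

```python
# Minimal sketch of a scoring module for Azure IoT Edge: telemetry arrives on
# the "telemetry" input, the model scores it, and the prediction is sent to
# the "scored" output for downstream modules. Names and payload shape are
# hypothetical.
import json
import pickle

from azure.iot.device import IoTHubModuleClient, Message

# Load the model file packaged into the module image (hypothetical name).
with open("grind_circuit_model.pkl", "rb") as f:
    model = pickle.load(f)

client = IoTHubModuleClient.create_from_edge_environment()

def handle_message(message: Message) -> None:
    # Only score messages routed to this module's "telemetry" input.
    if message.input_name != "telemetry":
        return
    payload = json.loads(message.data)
    prediction = model.predict([payload["features"]])[0]
    result = Message(json.dumps({"tag": payload["tag"], "prediction": float(prediction)}))
    # $edgeHub routes this onwards, e.g. to a module writing to the DCS over OPC UA.
    client.send_message_to_output(result, "scored")

client.on_message_received = handle_message
client.connect()
input("Scoring module running; press Enter to stop.\n")
client.shutdown()
```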

Micro Services

A common problem I see with many machine learning models is that they are built as one massive monolithic application. This is a terrible idea: it meshes the data pipeline, SQL integration, the machine learning model, monitoring, and everything else into a single app. There is a lot of risk in deploying such an app to production, as it becomes almost impossible to tell what is working and what is not. Thus, we have evolved our architecture over time to apply the principles of a micro-services architecture on the Edge. The diagram above shows the micro services used in this scenario, and the deployment manifest sketch below shows how they are wired together.
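To give a feel for how this decomposition looks in practice, here is a heavily trimmed sketch of the $edgeHub routes section of an IoT Edge deployment manifest. The module names (opcua-ingest, ml-scoring, opcua-writer) are hypothetical; the point is that each micro service is a separate module and $edgeHub wires them together with declarative routes.

```json
{
  "modulesContent": {
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.0",
        "routes": {
          "telemetryToScoring": "FROM /messages/modules/opcua-ingest/outputs/telemetry INTO BrokeredEndpoint(\"/modules/ml-scoring/inputs/telemetry\")",
          "scoredToOpcUaWriter": "FROM /messages/modules/ml-scoring/outputs/scored INTO BrokeredEndpoint(\"/modules/opcua-writer/inputs/scored\")",
          "scoredToCloud": "FROM /messages/modules/ml-scoring/outputs/scored INTO $upstream"
        },
        "storeAndForwardConfiguration": { "timeToLiveSecs": 7200 }
      }
    }
  }
}
```

Because the wiring lives in the manifest rather than in code, a module can be swapped out or versioned independently without touching its neighbours.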

Why Micro Services

In using the micro-services approach on the Edge, we have gained a lot of flexibility and robustness. The main benefits can be summarised as: each module can be developed, deployed, and versioned independently; a failure in one module is isolated rather than bringing down the whole pipeline; and each module can be monitored and scaled on its own.

Conclusion

We have covered how workloads can be designed and deployed to Azure IoT Edge. I hope you can now see why it is important to have a good design and architecture when deploying workloads to the Edge: if you bundle all your modules into one big module, it will be a nightmare to manage in production. I hope you find this useful, and I would love to hear any feedback :)
