The focus of every industrial revolution has been increasing the productivity of production systems. The fourth industrial revolution is here, and it’s seeking to improve both production and management systems. Digital transformation driven by smart manufacturing (also known as Industry 4.0) is the basis of this latest one – creating opportunities to achieve levels of productivity and specialization not previously possible.
Combining data generated through the Industrial Internet of Things (IIoT) with analytics creates a new set of capabilities known as predictive maintenance and quality. Fueled by smart manufacturing, these new capabilities are changing the way we do and see business, helping to recognize patterns and predict failures or product quality issues before they happen.
Introducing the new industrial IoT platform
Most factories are composed of operation technology (OT) assets such as machines, equipment lines and robotic devices that aren’t always connected. The current trend is leaning toward smart manufacturing with a more IT-based factory floor to help save time, labor, cost and maintenance and upkeep. With OT and IT converging, the IIoT platform is emerging as a new, innovative concept for smart manufacturing with artificial intelligence (AI)-based technologies, including analytics, big data and cognitive manufacturing.
Smart manufacturing can spur a new surge of manufacturing productivity.
Targeting the pain points for key manufacturing personnel
In order to understand the impact of Industry 4.0 solutions, we must examine the key people involved in all aspects of a factory. True transformation happens when each role's unique challenges and pain points are targeted.
Transforming your factory with a three-tiered architecture solution from IBM
Keeping the needs of different types of workers in mind and using our extensive manufacturing experience, IBM developed a three-tiered distributed architecture to implement smart manufacturing more efficiently. The model addresses the autonomy and self-sufficiency requirements of each production site and balances the workload between the three tiers.
Mapping IBM’s three-tiered architecture.
Edge level. The most physical part of the factory where product-related activities are performed.
Plant or factory level. Where plant and local activities are orchestrated and connected.
Enterprise level. Where analysis of all levels of information happens, and information storage for visualization and analytics is provided.
Leveraging the three architecture tiers to drive performance
IBM offers a suite of enterprise asset management (EAM) solutions to help drive cost savings and operational efficiency across the factory value chain. The portfolio analyzes a variety of information from workflows, context and the environment to drive quality, enhance operations and decision making, and help deliver a smart manufacturing transformation.
Production quality insights use IoT and cognitive capabilities to sense, communicate and self-diagnose issues, helping to optimize each factory’s performance and reduce unplanned downtime.
We are going to share our vision on the importance of infusing AI into our cloud platform and DevOps process. Gartner referred to something similar as AIOps (pronounced “AI Ops”) and this has become the common term that we use internally, albeit with a larger scope. Today’s post is just the start, as we intend to provide regular updates to share our adoption stories of using AI technologies to support how we build and operate Azure at scale.
There are two unique characteristics of cloud services:
The ever-increasing scale and complexity of the cloud platform and systems
The ever-changing needs of customers, partners, and their workloads
To build and operate reliable cloud services during this constant state of flux, and to do so as efficiently and effectively as possible, our cloud engineers (including thousands of Azure developers, operations engineers, customer support engineers, and program managers) heavily rely on data to make decisions and take actions. Furthermore, many of these decisions and actions need to be executed automatically as an integral part of our cloud services or our DevOps processes. Streamlining the path from data to decisions to actions involves identifying patterns in the data, reasoning, and making predictions based on historical data, then recommending or even taking actions based on the insights derived from all that underlying data.
Figure 1. Infusing AI into cloud platform and DevOps.
The AIOps vision
AIOps has started to transform the cloud business by improving service quality and customer experience at scale while boosting engineers’ productivity with intelligent tools, driving continuous cost optimization, and ultimately improving the reliability, performance, and efficiency of the platform itself. When we invest in advancing AIOps and related technologies, we see this ultimately provides value in several ways:
Higher service quality and efficiency: Cloud services will have built-in capabilities of self-monitoring, self-adapting, and self-healing, all with minimal human intervention. Platform-level automation powered by such intelligence will improve service quality (including reliability, availability, and performance) and service efficiency to deliver the best possible customer experience.
Higher DevOps productivity: With the automation power of AI and ML, engineers are released from the toil of investigating repeated issues and manually operating and supporting their services, and can instead focus on solving new problems, building new functionality, and doing work that more directly impacts the customer and partner experience. In practice, AIOps empowers developers and engineers with insights rather than raw data, thereby improving engineering productivity.
Higher customer satisfaction: AIOps solutions play a critical role in enabling customers to use, maintain, and troubleshoot their workloads on top of our cloud services as easily as possible. We endeavor to use AIOps to understand customer needs better, in some cases to identify potential pain points and proactively reach out as needed. Data-driven insights into customer workload behavior could flag when Microsoft or the customer needs to take action to prevent issues or apply workarounds. Ultimately, the goal is to improve satisfaction by quickly identifying, mitigating, and fixing issues.
Figure 2. AI for Cloud: AIOps and AI-Serving Platform.
Moving beyond our vision, we wanted to start by briefly summarizing our general methodology for building AIOps solutions. A solution in this space always starts with data—measurements of systems, customers, and processes—as the key of any AIOps solution is distilling insights about system behavior, customer behaviors, and DevOps artifacts and processes. The insights could include identifying a problem that is happening now (detect), why it’s happening (diagnose), what will happen in the future (predict), and how to improve (optimize, adjust, and mitigate). Such insights should always be associated with business metrics—customer satisfaction, system quality, and DevOps productivity—and drive actions in line with prioritization determined by the business impact. The actions will also be fed back into the system and process. This feedback could be fully automated (infused into the system) or with humans in the loop (infused into the DevOps process). This overall methodology guided us to build AIOps solutions in three pillars.
Figure 3. AIOps methodologies: Data, insights, and actions.
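The data-to-insights-to-actions loop above can be sketched in code. This is a minimal illustration of the methodology, not an Azure system; the `Insight` fields and function names are hypothetical.

```python
# Hypothetical sketch of the data -> insights -> actions loop described above.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Insight:
    kind: str      # "detect", "diagnose", "predict", or "optimize"
    detail: str
    impact: float  # business-impact score used for prioritization

def prioritize(insights: List[Insight]) -> List[Insight]:
    # Actions are driven in order of business impact, as described above.
    return sorted(insights, key=lambda i: i.impact, reverse=True)

def run_pipeline(measurements: List[float],
                 analyzers: List[Callable[[List[float]], List[Insight]]],
                 act: Callable[[Insight], None]) -> None:
    # Distill insights from raw measurements, then drive actions; `act` can be
    # fully automated (infused into the system) or route to a human in the loop.
    insights: List[Insight] = []
    for analyze in analyzers:
        insights.extend(analyze(measurements))
    for insight in prioritize(insights):
        act(insight)
```

The key design point mirrors the text: every insight carries a business-impact score, and that score, not arrival order, decides what gets acted on first.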
AI for systems
Today, we’re introducing several AIOps solutions that are already in use and supporting Azure behind the scenes. The goal is to automate system management to reduce human intervention, which in turn helps to reduce operational costs, improve system efficiency, and increase customer satisfaction. These solutions have already contributed significantly to Azure platform availability improvements, especially for Azure IaaS virtual machines (VMs). AIOps solutions contributed in several ways, including protecting customers’ workloads from host failures through hardware failure prediction and proactive actions such as live migration and Project Tardigrade, and pre-provisioning VMs to shorten VM creation time.
Of course, engineering improvements and ongoing system innovation also play important roles in the continuous improvement of platform reliability.
Hardware Failure Prediction protects cloud customers from interruptions caused by hardware failures. Microsoft Research and Azure have built a disk failure prediction solution for Azure Compute, triggering the live migration of customer VMs from predicted-to-fail nodes to healthy nodes. We have also expanded the prediction to other types of hardware issues, including memory and networking router failures. This enables us to perform predictive maintenance for better availability.
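As a rough illustration of the idea, a risk score over disk health counters can drive the decision to live-migrate VMs off a node. The SMART attribute names, weights, and threshold below are hypothetical stand-ins, not the actual Azure model.

```python
# Illustrative sketch: disk-failure risk scoring driving live migration.
# Attribute names and weights are hypothetical, not the production model.
def failure_risk(smart: dict) -> float:
    """Score a node's disk-failure risk from (hypothetical) SMART counters."""
    weights = {"reallocated_sectors": 0.5,
               "pending_sectors": 0.3,
               "uncorrectable_errors": 0.2}
    # Normalize each counter against a nominal "bad" level of 100 events.
    return sum(w * min(smart.get(k, 0) / 100.0, 1.0) for k, w in weights.items())

def nodes_to_migrate(fleet: dict, threshold: float = 0.6) -> list:
    """Return nodes predicted to fail, i.e. candidates for VM live migration."""
    return [node for node, smart in fleet.items()
            if failure_risk(smart) >= threshold]
```

In practice the scoring function would be a trained classifier, but the action side is the same: predicted-to-fail nodes are drained proactively rather than after a crash.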
Pre-Provisioning Service in Azure brings VM deployment reliability and latency benefits by creating pre-provisioned VMs. Pre-provisioned VMs are pre-created and partially configured VMs ahead of customer requests for VMs. As we described in the IJCAI 2020 publication, the Pre-Provisioning Service leverages a prediction engine to predict VM configurations and the number of VMs per configuration to pre-create. This prediction engine applies dynamic models that are trained on historical and current deployment behaviors to predict future deployments. The Pre-Provisioning Service uses this prediction to create and manage VM pools per VM configuration, resizing each pool by destroying or adding VMs as prescribed by the latest predictions. Once a VM matching the customer’s request is identified, the VM is assigned from the pre-created pool to the customer’s subscription.
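The shape of that predict-then-resize loop can be sketched as follows. The naive "tomorrow looks like today, plus headroom" forecast and the parameter names are illustrative assumptions, not the dynamic models the service actually uses.

```python
# Sketch of a pre-provisioning loop: forecast per-configuration demand,
# then compute pool resize actions. The forecast here is deliberately naive.
from collections import Counter

def predict_pool_sizes(recent_requests: list, buffer: float = 1.2) -> dict:
    """Naive forecast: next interval resembles the last one, plus headroom."""
    demand = Counter(recent_requests)  # VM config -> count of recent deployments
    return {cfg: int(n * buffer) + 1 for cfg, n in demand.items()}

def resize_actions(current_pools: dict, target_pools: dict) -> dict:
    """Per config, how many pre-provisioned VMs to create (+) or destroy (-)."""
    configs = set(current_pools) | set(target_pools)
    return {cfg: target_pools.get(cfg, 0) - current_pools.get(cfg, 0)
            for cfg in configs}
```

A real prediction engine replaces `predict_pool_sizes` with trained models, but the output contract is the same: a target pool size per VM configuration that the pool manager converges toward.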
AI for DevOps
AI can boost engineering productivity and help in shipping high-quality services with speed. Below are a few examples of AI for DevOps solutions.
Incident management is an important aspect of cloud service management—identifying and mitigating rare but inevitable platform outages. A typical incident management procedure consists of multiple stages including detection, engagement, and mitigation stages. Time spent in each stage is used as a Key Performance Indicator (KPI) to measure and drive rapid issue resolution. KPIs include time to detect (TTD), time to engage (TTE), and time to mitigate (TTM).
Figure 4. Incident management procedures.
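The stage KPIs above are simple deltas between incident timestamps. The sketch below uses one common convention (TTD and TTM measured from impact start, TTE from detection); the exact reference points are an assumption, not a stated Azure definition.

```python
# Compute per-stage incident KPIs (in minutes) from incident timestamps.
# Convention assumed here: TTD/TTM from impact start, TTE from detection.
from datetime import datetime

def incident_kpis(impact_start: datetime, detected: datetime,
                  engaged: datetime, mitigated: datetime) -> dict:
    minutes = lambda a, b: (b - a).total_seconds() / 60.0
    return {"TTD": minutes(impact_start, detected),   # time to detect
            "TTE": minutes(detected, engaged),        # time to engage
            "TTM": minutes(impact_start, mitigated)}  # time to mitigate
```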
As shared in AIOps Innovations in Incident Management for Cloud Services at the AAAI-20 conference, we have developed AI-based solutions that enable engineers not only to detect issues early but also to identify the right team(s) to engage and therefore mitigate as quickly as possible. Tight integration into the platform enables end-to-end touchless mitigation for some scenarios, which considerably reduces customer impact and therefore improves the overall customer experience.
Anomaly Detection provides an end-to-end monitoring and anomaly detection solution for Azure IaaS. The detection solution targets a broad spectrum of anomaly patterns that includes not only generic patterns defined by thresholds, but also patterns which are typically more difficult to detect such as leaking patterns (for example, memory leaks) and emerging patterns (not a spike, but increasing with fluctuations over a longer term). Insights generated by the anomaly detection solutions are injected into the existing Azure DevOps platform and processes, for example, alerting through the telemetry platform, incident management platform, and, in some cases, triggering automated communications to impacted customers. This helps us detect issues as early as possible.
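A leaking pattern, as opposed to a spike, shows a steady upward trend despite fluctuations. One simple way to capture that, shown below as an illustration only (the actual detection solution is far more sophisticated), is a least-squares slope over a metric window.

```python
# Toy leak detector: a sustained positive least-squares slope suggests a
# leaking pattern (e.g. a memory leak) rather than a transient spike.
def leak_slope(series: list) -> float:
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def looks_like_leak(series: list, min_slope: float = 0.5) -> bool:
    """Flag series that keep increasing, with fluctuations, over the window."""
    return leak_slope(series) >= min_slope
```

A spike fails this test because a brief excursion barely moves the fitted slope, while a genuine leak accumulates slope over the whole window.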
For an example that has already made its way into a customer-facing feature, Dynamic Threshold is an ML-based anomaly detection model. It is a feature of Azure Monitor used through the Azure portal or through the ARM API. Dynamic Threshold allows users to tune their detection sensitivity, including specifying how many violation points will trigger a monitoring alert.
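Conceptually, a dynamic-threshold alert of this kind can be sketched as follows. The band computation and parameter names are simplified illustrations, not Azure Monitor's actual model; the user-tunable pieces are the sensitivity and the number of violation points required to fire.

```python
# Simplified dynamic-threshold alerting: a band around the learned baseline,
# firing only after enough violation points. Not Azure Monitor's real model.
def violations(series: list, mean: float, std: float,
               sensitivity: float = 2.0) -> list:
    """Indices of points outside mean +/- sensitivity * std."""
    lo, hi = mean - sensitivity * std, mean + sensitivity * std
    return [i for i, v in enumerate(series) if v < lo or v > hi]

def should_alert(series: list, mean: float, std: float,
                 sensitivity: float = 2.0, min_violations: int = 3) -> bool:
    """Fire only after `min_violations` points, mirroring the tunable
    'number of violations' setting described above."""
    return len(violations(series, mean, std, sensitivity)) >= min_violations
```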
Safe Deployment serves as an intelligent global “watchdog” for the safe rollout of Azure infrastructure components. We built a system, code name Gandalf, that analyzes temporal and spatial correlation to capture latent issues that happened hours or even days after the rollout. This helps to identify suspicious rollouts (during a sea of ongoing rollouts), which is common for Azure scenarios, and helps prevent the issue propagating and therefore prevents impact to additional customers.
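The core of the temporal/spatial correlation idea can be illustrated with a toy: compare post-rollout failure counts on a build's target nodes against their pre-rollout baseline. The data shapes, window, and "jump" criterion below are hypothetical simplifications of what Gandalf actually does.

```python
# Toy Gandalf-style check: flag rollouts whose target nodes show a jump in
# failures within a window after deployment. Criteria here are illustrative.
def suspicious_rollouts(rollouts: dict, failures: list,
                        window_hours: float = 48) -> list:
    """rollouts: name -> (deploy_time, set_of_nodes); failures: (node, time)."""
    flagged = []
    for name, (deploy_time, nodes) in rollouts.items():
        after = sum(1 for node, t in failures
                    if node in nodes and 0 <= t - deploy_time <= window_hours)
        before = sum(1 for node, t in failures
                     if node in nodes and t < deploy_time)
        if after > max(2 * before, 1):  # crude "failure-rate jump" criterion
            flagged.append(name)
    return flagged
```

The latency aspect the text highlights is captured by `window_hours`: correlating over hours or days catches latent issues that immediate post-deploy health checks would miss.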
AI for customers
To improve the Azure customer experience, we have been developing AI solutions to power the full lifecycle of customer management. For example, a decision support system has been developed to guide customers towards the best selection of support resources by leveraging the customer’s service selection and verbatim summary of the problem experienced. This helps shorten the time it takes to get customers and partners the right guidance and support that they need.
To achieve greater efficiencies in managing a global-scale cloud, we have been investing in building systems that support using AI to optimize cloud resource usage and therefore the customer experience. One example is Resource Central (RC), an AI-serving platform for Azure that we described in Communications of the ACM. It collects telemetry from Azure containers and servers, learns from their prior behaviors, and, when requested, produces predictions of their future behaviors. We are already using RC to predict many characteristics of Azure Compute workloads accurately, including resource procurement and allocation, all of which helps to improve system performance and efficiency.
Looking towards the future
We have shared our vision of AI infusion into the Azure platform and our DevOps processes and highlighted several solutions that are already in use to improve service quality across a range of areas. Look to us to share more details of our internal AI and ML solutions for even more intelligent cloud management in the future. We’re confident that these are the right investment solutions to improve our effectiveness and efficiency as a cloud provider, including improving the reliability and performance of the Azure platform itself.
Blog reference: https://azure.microsoft.com/en-in/blog/advancing-azure-service-quality-with-artificial-intelligence-aiops/
Quality assurance is a paramount part of the manufacturing process. It comes at the end of production, where the final product is inspected for defects and errors before being sold to customers. This is important because it not only minimizes errors in production but also protects customers from products that could be hazardous.
Industries that produce end products such as auto parts, mirrors, sanitary ware, laminates, pharmaceuticals, consumer goods, beverages and foods can benefit immensely from visual quality inspection systems.
How Do Visual Inspection Systems Work?
Visual quality inspection systems make use of digital sensors, RFID tags and QR codes together with cameras. The cameras use optical sensors to capture images, which are then processed by computers to measure specific characteristics and parameters for decision making.
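The measure-then-decide step can be illustrated with a toy check on a grayscale image. Real systems use trained vision models and calibrated optics; the dark-pixel heuristic and tolerance below are purely illustrative assumptions.

```python
# Toy visual inspection: dark blobs on a bright part stand in for surface
# defects. Real systems use trained vision models, not a pixel threshold.
def defect_fraction(image: list, dark_threshold: int = 60) -> float:
    """Fraction of pixels darker than `dark_threshold` (0-255 grayscale)."""
    total = sum(len(row) for row in image)
    dark = sum(1 for row in image for px in row if px < dark_threshold)
    return dark / total

def passes_inspection(image: list, max_defect_fraction: float = 0.02) -> bool:
    """Accept the part only if the defective area is below the tolerance."""
    return defect_fraction(image) <= max_defect_fraction
```

The decision-making step the text describes is the second function: a measured characteristic (defective area) compared against a product-specific tolerance.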
Benefits of Visual Inspection Systems
Save time on inspection, spend more on production: Once visual quality inspection systems have been planned and tuned into the manufacturing process, they can perform an immense amount of production checking in a very short time compared to human inspection.
Accurate inspection without human interference: In human inspection, there is always a significant chance of error, and no matter how experienced and focused the employees are, these errors can never be brought down to a negligible level. Human capabilities have limitations that visual quality inspection systems do not, which is how they eliminate inspection errors to a great extent and deliver higher-quality products.
Compatible with every use-case: Another benefit of visual quality inspection is that if the production method changes, these systems can be adapted accordingly with great ease.
Increase in accuracy: These systems also improve production efficiency. They can identify errors at a faster rate, so the detected defects can be analyzed quickly and necessary corrections made immediately.
Remote access: Unlike humans, these systems can work nonstop around the clock. They can also be operated and programmed from a remote location.
By adopting a visual inspection system in the manufacturing process, companies can boost production and prevent the waste generated by defective and faulty products. This not only saves revenue but also helps ensure customer satisfaction.
If your unit makes products at a mass level and you are looking for a modern and reliable inspection system, consider Trident, a visual quality inspection platform trusted by numerous industrial units for its accuracy and reliability.
Trident Information Systems P. Ltd is one of the leading global providers of Information Technology services and business solutions, with a proven track record of over 15 years. We have been consistently adding value to the business bottom line of our global clientele.