
How to digitize and automate vehicle assembly inspection process with voice-enabled AWS services


Introduction

Today, most automotive manufacturers depend on workers to manually inspect defects during their vehicle assembly process. Quality inspectors record the defects and corrective actions through a paper checklist, which moves with the vehicle. This checklist is digitized only at the end of the day through a bulk scanning and upload process. The current inspection and recording systems hinder the Original Equipment Manufacturer’s (OEM) ability to correlate field defects with production issues. This can lead to increased warranty costs and quality risks. By implementing an artificial intelligence (AI) powered digital solution deployed at an edge gateway, the OEM can automate the inspection workflow, improve quality control, and proactively address quality concerns in their manufacturing processes.

In this blog, we present an Internet of Things (IoT) solution that you can use to automate and digitize the quality inspection process for an assembly line. With this guidance, you can deploy a Machine Learning (ML) model on a gateway device running AWS IoT Greengrass that is trained on voice samples. We will also discuss how to deploy an AWS Lambda function for inference “at the edge,” enrich the model output with data from on-premise servers, and transmit the defects and corrective data recorded at assembly line to the cloud.

AWS IoT Greengrass is an open-source edge runtime and cloud service that helps you build, deploy, and manage software on edge gateway devices. AWS IoT Greengrass provides pre-built software modules, called components, that help you run ML inferences on your local edge devices, execute Lambda functions, read data from on-premise servers hosting REST APIs, and connect and publish payloads to AWS IoT Core. To effectively train your ML models in the cloud, you can use Amazon SageMaker, a fully managed service that offers a broad set of tools to enable high-performance, low-cost ML to help you build and train high-quality ML models. Amazon SageMaker Ground Truth helps you build high-quality training datasets by labeling raw data, such as audio files, and by generating labeled synthetic data.

Solution Overview

The following diagram illustrates the proposed architecture to automate the quality inspection process. It includes: machine learning model training and deployment, defect data capture, data enrichment, data transmission, processing, and data visualization.

Figure 1. Automated quality inspection architecture diagram

  1. Machine Learning (ML) model training

In this solution, we use whisper-tiny, which is an open-source pre-trained model. Whisper-tiny can convert audio into text, but only supports the English language. For improved accuracy, you can further fine-tune the model using your own audio input files. Use any of the prebuilt or custom tools to assign the labeling tasks for your audio samples on SageMaker Ground Truth.

  2. ML model edge deployment

We use SageMaker to create an IoT edge-compatible inference model out of the whisper model. The model is stored in an Amazon Simple Storage Service (Amazon S3) bucket. We then create an AWS IoT Greengrass ML component using this model as an artifact and deploy the component to the IoT edge device.

  3. Voice-based defect capture

The AWS IoT Greengrass gateway captures the voice input either through a wired or wireless audio input device. The quality inspection personnel record their verbal defect observations using headphones connected to the AWS IoT Greengrass device (in this blog, we use pre-recorded samples). A Lambda function, deployed on the edge gateway, uses the ML model inference to convert the audio input into relevant textual data and maps it to an OEM-specified defect type.
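The blog does not show how the transcribed text is mapped to an OEM-specified defect type, so the following is a minimal sketch of one possible approach: keyword matching against an illustrative defect taxonomy. The defect types and keywords below are assumptions, not part of the solution's actual codebase.

```python
# Hypothetical defect taxonomy; an OEM would supply its own types and keywords.
DEFECT_TYPES = {
    "PAINT_SCRATCH": ["scratch", "scratched", "scuff"],
    "PANEL_GAP": ["misaligned", "panel gap", "not flush"],
    "MISSING_FASTENER": ["missing bolt", "missing screw", "loose fastener"],
}

def map_defect_type(transcript: str) -> str:
    """Return the first defect type whose keywords appear in the transcript."""
    text = transcript.lower()
    for defect_type, keywords in DEFECT_TYPES.items():
        if any(kw in text for kw in keywords):
            return defect_type
    return "UNCLASSIFIED"
```

In practice, this mapping could also be done with a small text-classification model, but a keyword table is easy for quality engineers to audit and extend.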

  4. Add defect context

Defect and correction data captured at the inspection stations need contextual information, such as the vehicle VIN and the process ID, before the data is transmitted to the cloud. (Typically, an on-premise server provides vehicle metadata as a REST API.) The Lambda function invokes the on-premise REST API to access the metadata for the vehicle currently being inspected, and enriches the defect and correction data with that metadata before transmitting it to the cloud.
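The enrichment step above can be sketched as a small merge of the defect record with the metadata returned by the on-premise API. This is illustrative only: the field names (`vin`, `process_id`) and the sample VIN are assumptions, and the REST call is injected as a callable so the logic is testable without a live server.

```python
def enrich_defect(defect: dict, fetch_metadata) -> dict:
    """Merge a defect record with vehicle metadata from the on-premise API.

    `fetch_metadata` stands in for the REST call (e.g. requests.get(url).json());
    it is passed in so this sketch runs without network access.
    """
    metadata = fetch_metadata()
    enriched = dict(defect)  # copy so the caller's record is not mutated
    enriched.update({
        "vin": metadata.get("vin"),
        "process_id": metadata.get("process_id"),
    })
    return enriched

# Example with a stubbed API response (hypothetical VIN and process ID):
stub = lambda: {"vin": "1HGCM82633A004352", "process_id": "ST-27"}
record = enrich_defect({"defect": "PAINT_SCRATCH", "action": "repolish"}, stub)
```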

  5. Defect data transmission

AWS IoT Core is a managed cloud service that lets devices use Message Queuing Telemetry Transport (MQTT) to securely connect, manage, and interact with AWS IoT Greengrass-powered devices. The Lambda function publishes the defect data to specific topics, such as a "Quality Data" topic, on AWS IoT Core. Because we configured the Lambda function to subscribe to messages from different event sources, the Lambda component can act on either local publish/subscribe messages or AWS IoT Core MQTT messages. In this solution, we publish a payload to an AWS IoT Core topic as a trigger to invoke the Lambda function.
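Before publishing, the enriched record has to be serialized into an MQTT payload. A minimal sketch of that step follows; the field names (`station_id`, `recorded_at`) are illustrative assumptions and should be aligned with your IoT rule and downstream schema. The actual publish would then be `client.publish(topic=..., qos=0, payload=...)` using the boto3 `iot-data` client, as in the Lambda code later in this blog.

```python
import json
import time

def build_quality_payload(enriched: dict, station_id: str) -> str:
    """Serialize an enriched defect record for a quality-data MQTT topic."""
    message = dict(enriched)
    message["station_id"] = station_id          # inspection station (assumed field)
    message["recorded_at"] = int(time.time())   # epoch seconds (assumed field)
    return json.dumps(message)
```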

  6. Defect data processing

The AWS IoT Rules Engine processes incoming messages and enables connected devices to seamlessly interact with other AWS services. To persist the payload onto a datastore, we configure AWS IoT rules to route the payloads to an Amazon DynamoDB table. DynamoDB then stores the key-value user and device data.
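The routing described above can be expressed as an AWS IoT rule SQL statement. The following is a minimal sketch, assuming the rule listens on the audioDevice/data topic used later in this blog and forwards matching payloads to a DynamoDB action configured on the rule:

```sql
SELECT *, topic() AS source_topic, timestamp() AS received_at
FROM 'audioDevice/data'
```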

  7. Visualize vehicle defects

Data can be exposed as REST APIs for end clients that want to search and visualize defects or build defect reports using a web portal or a mobile app.

You can use Amazon API Gateway to publish the REST APIs, which lets client devices consume the defect and correction data through an API. You can control access to the APIs by using an Amazon Cognito user pool as an authorizer, defining the user and application identities in the user pool.

The backend services that power the visualization REST APIs use Lambda. You can use a Lambda function to search for relevant data for the vehicle, across a group of vehicles, or for a particular vehicle batch. The functions can also help identify field issues related to the defects recorded during the assembly line vehicle inspection.
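A search Lambda like the one described above can be sketched as a simple filter over defect records. The record schema (`vin`, `batch_id`, `defect`) below is an illustrative assumption; in practice the function would query DynamoDB rather than an in-memory list.

```python
def search_defects(records, vin=None, batch_id=None):
    """Filter defect records by VIN and/or production batch (illustrative schema)."""
    results = records
    if vin is not None:
        results = [r for r in results if r.get("vin") == vin]
    if batch_id is not None:
        results = [r for r in results if r.get("batch_id") == batch_id]
    return results

# Hypothetical sample data:
records = [
    {"vin": "VIN001", "batch_id": "B1", "defect": "PAINT_SCRATCH"},
    {"vin": "VIN002", "batch_id": "B1", "defect": "PANEL_GAP"},
    {"vin": "VIN001", "batch_id": "B2", "defect": "MISSING_FASTENER"},
]
```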

Prerequisites

  1. An AWS account.
  2. Basic Python knowledge.

Steps to set up the inspection process automation

Now that we have talked about the solution and its components, let's go through the steps to set up and test the solution.

Step 1: Set up the AWS IoT Greengrass device

This blog uses an Amazon Elastic Compute Cloud (Amazon EC2) instance running Ubuntu as the AWS IoT Greengrass device. Complete the following steps to set up this instance.

Create an Ubuntu instance

  1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. Select a Region that supports AWS IoT Greengrass.
  3. Choose Launch Instance.
  4. Complete the following fields on the page:
    • Name: Enter a name for the instance.
    • Application and OS Images (Amazon Machine Image): Ubuntu, Ubuntu Server 20.04 LTS (HVM)
    • Instance type: t2.large
    • Key pair login: Create a new key pair.
    • Configure storage: 256 GiB.
  5. Launch the instance and SSH into it. For more information, see Connect to Linux Instance.

Install AWS SDK for Python (Boto3) in the instance

Complete the steps in How to Install AWS Python SDK in Ubuntu to set up the AWS SDK for Python on the Amazon EC2 instance.

Set up the AWS IoT Greengrass V2 core device

Sign in to the AWS Management Console and verify that you're using the same Region that you chose earlier.

Complete the following steps to create the AWS IoT Greengrass core device.

  1. In the navigation bar, select Greengrass devices and then Core devices.
  2. Choose Set up one core device.
  3. In the Step 1 section, specify a suitable name, such as GreengrassQuickStartCore-audiototext, for the Core device name, or retain the default name provided on the console.
  4. In the Step 2 section, select Enter a new group name for the Thing group field.
  5. Specify a suitable name, such as GreengrassQuickStartGrp, for the Thing group name field, or retain the default name provided on the console.
  6. On the Step 3 page, select Linux as the Operating System.
  7. Complete steps 3.1 through 3.3 (farther down the page) to install the AWS IoT Greengrass Core software on the core device.

Step 2: Deploy ML Model to AWS IoT Greengrass device

The codebase can either be cloned to a local system or set up on Amazon SageMaker.

Set up Amazon SageMaker Studio

  1. Navigate to the SageMaker console.
  2. Choose Admin configuration, then Domains, and choose Create domain.
  3. Select Set up for a single user to create a domain for your user.

Detailed overview of deployment steps

  1. Navigate to SageMaker Studio and open a new terminal.
  2. Clone the GitHub repo to the SageMaker terminal, or to your local computer, using the GitHub link: AutoInspect-AI-Powered-vehicle-quality-inspection.
    The repository contains the following folders:
    • Artifacts – Contains all model-related files that will be executed.
      • Audio – Contains a sample audio file that is used for testing.
      • Model – Contains the whisper models converted to ONNX format. Whisper is an open-source pre-trained model for speech-to-text conversion.
      • Tokens – Contains tokens used by the models.
      • Results – The folder for storing results.
    • Recipes – Contains code to create the recipes for the model artifacts.
  3. Compress the folder to create greengrass-onnx.zip and upload it to an Amazon S3 bucket with the following command:
    • aws s3 cp greengrass-onnx.zip s3://your-bucket-name/greengrass-onnx-asr.zip
  4. Go to the recipe folder. Run the following commands to create a deployment recipe for the ONNX model and the ONNX runtime:
    • aws greengrassv2 create-component-version --inline-recipe fileb://onnx-asr.json
    • aws greengrassv2 create-component-version --inline-recipe fileb://onnxruntime.json
  5. Navigate to the AWS IoT Greengrass console to review the recipe. You can review it under Greengrass devices and then Components.
  6. Create a new deployment, select the target device and recipe, and start the deployment.
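For reference, a Greengrass component recipe for the ONNX model might look like the following sketch. The actual onnx-asr.json ships in the repository's Recipes folder; the component name, the lifecycle script (inference.py), and the bucket path below are illustrative assumptions, not the repository's real contents:

```json
{
	"RecipeFormatVersion": "2020-01-25",
	"ComponentName": "onnx-asr",
	"ComponentVersion": "1.0.0",
	"ComponentDescription": "Runs the whisper-tiny ONNX speech-to-text model",
	"ComponentPublisher": "Example",
	"Manifests": [
		{
			"Platform": { "os": "linux" },
			"Lifecycle": {
				"run": "python3 -u {artifacts:decompressedPath}/greengrass-onnx-asr/inference.py"
			},
			"Artifacts": [
				{
					"URI": "s3://your-bucket-name/greengrass-onnx-asr.zip",
					"Unarchive": "ZIP"
				}
			]
		}
	]
}
```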

Step 3: Set up AWS Lambda to transmit validation data to the AWS Cloud

Define the Lambda function

  1. In the Lambda navigation menu, choose Functions.
  2. Choose Create function.
  3. Choose Author from scratch.
  4. Enter a suitable function name, such as GreengrassLambda.
  5. Select Python 3.11 as the Runtime.
  6. Choose Create function, keeping all other values as default.
  7. Open the Lambda function you just created.
  8. In the Code tab, copy the following script into the console and save the changes.
    import json
    import boto3
    
    # Specify the Region you chose when launching the Amazon EC2 instance
    # set up as the Greengrass device in Step 1.
    client = boto3.client('iot-data', region_name="eu-west-1")
    
    def lambda_handler(event, context):
        print(event)
    
        payload = {"key": "sample_1.wav"}
    
        ##------------------------------------------------------##
        # Code to read the speech-to-text data generated by the edge
        # ML model as JSON. Replace the paths and filenames.
        #
        # with open('Results/filename.txt', 'r') as file:
        #     file_contents = file.read()
        # data = json.loads(file_contents)
        ##------------------------------------------------------##
    
        # Sample code to add context to the defect data from the
        # local OT system REST API (requires 'import requests').
        #
        # url = "https://api.example.com/data"
        # response = requests.get(url)  # send a GET request to the API
        # if response.status_code == 200:
        #     apidata = response.json()
        #     payload = data.copy()
        #     payload.update(apidata)
        ##------------------------------------------------------##
    
        response = client.publish(
            topic="audioDevice/data",
            qos=0,
            payload=json.dumps(payload)
        )
        print(response)
        return {
            'statusCode': 200,
            'body': json.dumps('Published to topic')
        }

  7. From the Actions menu at the top, choose Publish new version.

Import Lambda function as Component

Prerequisite: Verify that the Amazon EC2 instance set up as the Greengrass device in Step 1 meets the Lambda function requirements.

  1. In the AWS IoT Greengrass console, choose Components.
  2. On the Components page, choose Create component.
  3. On the Create component page, under Component information, choose Enter recipe as JSON.
  4. Replace the content in the Recipe section with the following, and then choose Create component.
    {
    	"RecipeFormatVersion": "2020-01-25",
    	"ComponentName": "lambda_function_depedencies",
    	"ComponentVersion": "1.0.0",
    	"ComponentType": "aws.greengrass.generic",
    	"ComponentDescription": "Install Dependencies for Lambda Function",
    	"ComponentPublisher": "Ed",
    	"Manifests": [
    		{
    			"Lifecycle": {
    				"install": "python3 -m pip install --user boto3"
    			},
    			"Artifacts": []
    		}
    	],
    	"Lifecycle": {}
    }
    

  5. On the Components page, choose Create component.
  6. Under Component information, choose Import Lambda function.
  7. In the Lambda function field, search for and choose the Lambda function that you defined earlier in Step 3.
  8. In the Lambda function version, select the version to import.
  9. Under Lambda function configuration:
    • Choose Add event source.
    • Specify the Topic as defectlogger/trigger and choose the Type AWS IoT Core MQTT.
    • Under Component dependencies, choose Additional parameters, then Add dependency, and specify the component details as:
      • Component name: lambda_function_depedencies
      • Version Requirement: 1.0.0
      • Type: SOFT
  10. Keep all other options as default and choose Create Component.

Deploy Lambda component to AWS IoT Greengrass device

  1. In the AWS IoT Greengrass console navigation menu, choose Deployments.
  2. On the Deployments page, choose Create deployment.
  3. Provide a suitable name, such as GreengrassLambda, select the Thing group defined earlier, and choose Next.
  4. In My Components, select the Lambda component you created.
  5. Keep all other options as default.
  6. In the last step, choose Deploy.

The following is an example of a successful Lambda function deployment on the Greengrass device:

Step 4: Validate with a sample audio

  1. Navigate to the AWS IoT Core home page.
  2. Select MQTT test client.
  3. In the Subscribe to a Topic tab, specify audioDevice/data in the Topic Filter.
  4. In the Publish to a topic tab, specify defectlogger/trigger under the topic name.
  5. Press the Publish button a couple of times.
  6. Messages published to defectlogger/trigger invoke the Edge Lambda component.
  7. You should see the messages published by the Lambda component deployed on the AWS IoT Greengrass device in the Subscribe to a topic section.
  8. If you would like to store the published data in a data store like DynamoDB, complete the steps outlined in Tutorial: Storing device data in a DynamoDB table.

Conclusion

In this blog, we demonstrated a solution where you can deploy an ML model, developed using SageMaker, on factory-floor devices that run AWS IoT Greengrass software. We used the open-source whisper-tiny model (which provides speech-to-text capability), made it compatible with IoT edge devices, and deployed it on a gateway device running AWS IoT Greengrass. This solution helps your assembly line users record vehicle defects and corrections using voice input. The ML model running on the AWS IoT Greengrass edge device translates the audio input into textual data and adds context to the captured data. Data captured on the AWS IoT Greengrass edge device is transmitted to AWS IoT Core, where it is persisted in DynamoDB. Data persisted in the database can then be visualized using a web portal or a mobile application.

The architecture outlined in this blog demonstrates how you can reduce the time assembly line users spend manually recording defects and corrections. Using a voice-enabled solution enhances the system's capabilities, can help you reduce manual errors and prevent data loss, and can increase the overall quality of your factory's output. The same architecture can be used in other industries that need to digitize their quality data and automate quality processes.

———————————————————————————————————————————————

About the Authors

Pramod Kumar P is a Solutions Architect at Amazon Web Services. He has over 20 years of technology experience, including close to a decade designing and architecting IoT connectivity solutions on AWS. Pramod guides customers to build solutions with the right architectural practices to meet their business outcomes.

Raju Joshi is a Data Scientist at Amazon Web Services with more than six years of experience with distributed systems. He has expertise in implementing and delivering successful IT transformation projects by leveraging AWS big data, machine learning, and artificial intelligence solutions.
