While AI is certainly proving helpful with quality control in electric vehicle (EV) battery production and vehicle assembly, it isn’t the only thing improving inspection results.
Maybe you’ve noticed that the machine vision systems being demonstrated at industry events like The EV Battery Show or Automate are noticeably smarter, faster and more accurate than the systems you installed in your production environments just a few years ago. And maybe you, or someone you speak with, credits that rapid performance gain to the infusion of artificial intelligence (AI) in those vision systems. But if you were to ask me what’s making machine vision systems and quality inspection outcomes ‘so much better’ in the automotive industry than they were five years ago, I would tell you AI is only part of it.
What’s really making a difference is…
- the simplicity of the software that’s used to control machine vision systems (or the robots they’re guiding). When we talk about the impact that deep learning software is having on machine vision, it’s often in the context of ‘intelligence.’ But it’s also making it easier for people to adjust and refine inspection or production processes for better outcomes, and to quickly investigate an identified issue and intervene to correct it.
For example, it’s not deep learning that’s enabling robotics, automation, and machine vision engineers to easily design and set up the system, select or swap out the appropriate hardware, design the new inspection process, or program the robot's path. That comes from an automotive industry leader’s decision to use a single software suite from a single vendor end-to-end across the production and inspection processes.
Of course, deep learning (AI) does make it easier for process engineers and data analysts to analyse the inspection data to identify trends, improve the inspection process, and reduce defects. The quality of the data and the automatic analysis that happens with AI assistance help people see where and how they need to intervene. Plus, once an engineer sets up the machine learning/deep learning algorithms and training is complete, the system can keep improving with each inspection, reducing the need for human inspector intervention. (Deep learning greatly expands what machine vision is capable of.)
However, for someone who needs help knowing when and how to calibrate cameras, who must perform routine maintenance on the robot and other connected technologies, or who is tasked with troubleshooting any issues that arise, a single software platform threading your entire machine vision system and production environment together makes it easier to keep things working well.
If you’re trying to scale an EV production line to increase production volume, retrofit a combustion engine vehicle line to support EV production, or outfit an entirely new EV production facility, having a single hardware-agnostic software platform that new employees can learn how to configure is arguably more valuable than having ‘AI inside’ that software.
- the interoperability of the (AI-powered) software with different hardware components (whether sourced from the same vendor or different vendors) or different types of systems (such as robots). Many of the ‘bad experiences’ or frustrations with machine vision I hear about stem from someone’s decision to piece together hardware and software from multiple vendors. Most come down to compatibility issues that ultimately require manufacturers to buy and manage multiple systems and then figure out how to make them all work together. I hear all the time how complex machine vision has become. When I ask more questions, I realise there’s a lot of proprietary or ‘best in class’ technology pieced together, with disconnects or workarounds hindering process efficiency, results accuracy, and more.
- the people designing, building, integrating, and helping you scale the machine vision system. Even if the technology components from different machine vision and robotics companies can technically work together, can the people responsible for helping you implement and maintain that technology work well enough together? I hear horror stories of auto manufacturers getting bounced around from one company to the next every time something needs fixing or adjusting to support a new application.
It gives weight to the old saying ‘one call, that’s all’. Having one vendor, or one person, you can call to help with anything and everything can go a long way to making machine vision systems work better. They can bring the right people to the table to work through any problems or map out the best system as you need to scale. They can also look at your system holistically as you have questions or ambitions and better advise on what you should or should not do.
- the flexibility and scalability of the machine vision ‘system’. This is where it bears repeating that it’s the software – not necessarily the hardware – that really defines how much you can do with your machine vision system and how easily you can do it. Swapping out a camera or a 3D profiler takes a few minutes, and when the software platform is hardware agnostic, there’s no problem when it’s time to add more, change applications, or upgrade to newer technology. So, when people say, ‘Machine vision is getting better,’ many times it’s because they are choosing a single software platform as the foundation of their system and then sourcing the various 2D cameras, 3D profilers/sensors and robotics components needed to complete the inspection.
They aren’t pigeonholed into what I call a container system – a set of components that a machine vision technology provider decided to package up and sell as an out-of-the-box ‘solution.’ They simply start with a software platform that is flexible enough to do anything and everything they want as it pertains to machine vision quality inspections or machine vision-guided robotics processes. Then they tack on the hardware.
In fact, the automotive manufacturers currently seeing the most improvement in their machine vision system performance and supported inspection processes are the ones who have opted to source the right components (and different components) for each new system ‘build’. They use 2D cameras for some inspections, 3D profilers for others, and then use the same software to process and analyse the data from all the hardware. If they ever need to swap in 3D sensors for the 2D cameras or add a dozen more inspection points as they scale their infrastructure or move from combustion to hybrid to fully electric vehicle production, they don’t have to rip and replace the entire system. They can make incremental changes to the hardware without having to roll out and retrain their team on a completely new software suite. Their team only has to learn and use one design tool one time – and develop each application from one software platform – no matter how many different machine vision or robotics hardware components are plugged in over time. (There’s a simple code sketch of this ‘one interface, many devices’ idea just after this list.)
- the increased use of 3D vision technologies – and the right 3D technologies. It’s hard to put the onus on AI to efficiently, accurately, and consistently inspect the quality of electric batteries and assembled EVs if it doesn’t have a clear picture of what the battery or assembled parts look like. A 2D image isn’t going to help you (or an ‘AI’) see if the placement of an EV battery in the vehicle or the subsequent wiring connections are correct. Nor will you be able to see if the welding points in an EV are good or if enough glue was applied during battery module assembly. 2D cameras simply can’t capture the information you need to confirm the thickness or dimensions of the adhesive beads (a simple bead-height check on 3D profile data is sketched after this list). In these cases, and many others, you’ll need 3D technologies that use laser triangulation to analyse depth and understand what things look like all around the assembly.
Moreover, the ‘AI inside’ your machine vision system won’t be able to confirm if the battery cells are properly stacked together within the battery module once the glue is applied if you’re only giving it a 2D picture to look at. Those who can say with certainty the battery stack is well aligned are the ones using 3D profiling sensors to inform the AI’s assessment.
Another way 3D vision technology is improving the inspection of EV battery cells and other metallic EV components is by addressing the trouble 2D technology has with highly reflective and monochromatic surfaces. Some inspections simply can’t be done accurately with 2D cameras – AI or not.
One word of caution: not all 3D sensors work the same way or deliver the same results, and there are some special considerations when it comes to software-hardware pairings for 3D. For EV battery or EV assembly inspections, for example, 3D technology that uses laser triangulation is going to be recommended over other types of 3D profilers. Some 3D technology also works better when AI is used. So, again, don’t assume that using 3D technology will automatically get you the best result. There’s a great primer on 3D vision technology you can check out so you can ask the right questions when meeting with technology providers and system integrators.
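To make that ‘one software platform, many hardware components’ idea a little more concrete, here’s a minimal sketch in Python. The class and method names are purely illustrative – they don’t come from any vendor’s actual SDK – but they show the design pattern: inspection logic is written once against a common device interface, so swapping a 2D camera for a 3D profiler doesn’t touch the rest of the code.

```python
# Hypothetical sketch of a hardware-agnostic acquisition layer. None of these
# names come from a real vendor SDK; they only illustrate the design pattern.
from abc import ABC, abstractmethod

class ImagingDevice(ABC):
    """Common interface the inspection software programs against."""

    @abstractmethod
    def acquire(self) -> list[list[float]]:
        """Return image data: intensity values for 2D, a height map for 3D."""

class AreaScanCamera2D(ImagingDevice):
    def acquire(self) -> list[list[float]]:
        # A real implementation would grab a frame from the camera driver.
        return [[0.0] * 640 for _ in range(480)]

class LaserProfiler3D(ImagingDevice):
    def acquire(self) -> list[list[float]]:
        # A real implementation would assemble a height map from laser-line profiles.
        return [[0.0] * 640 for _ in range(480)]

def analyse(data: list[list[float]]) -> bool:
    """Placeholder for the real pass/fail analysis pipeline."""
    return True

def run_inspection(device: ImagingDevice) -> bool:
    # Written once; the device object is the only thing that changes with hardware.
    return analyse(device.acquire())

print(run_inspection(AreaScanCamera2D()))  # swap in LaserProfiler3D() without edits
```

Swap `AreaScanCamera2D()` for `LaserProfiler3D()` on the last line and nothing downstream changes – which is exactly why teams only have to learn one design tool one time.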
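And here’s an equally simplified sketch of why 3D data matters for something like adhesive bead inspection. A laser-triangulation profiler converts the displacement of a laser line on its sensor into a height profile across the bead, and once you have heights you can check thickness directly – something a 2D image can’t give you. The tolerance values and profile numbers below are invented for illustration.

```python
# Hedged sketch: checking adhesive bead height from one 3D profile scan line.
# All numbers are illustrative, not real process tolerances.

def bead_height_ok(profile_mm: list[float], surface_mm: float,
                   min_height_mm: float = 1.5, max_height_mm: float = 2.5) -> bool:
    """Check that the bead's peak sits within tolerance above the part surface."""
    height = max(profile_mm) - surface_mm
    return min_height_mm <= height <= max_height_mm

# One scan line across a glue bead (heights in mm, relative to the part surface).
profile = [0.02, 0.05, 0.4, 1.1, 1.9, 2.1, 1.8, 0.9, 0.3, 0.04]
print(bead_height_ok(profile, surface_mm=0.0))  # True: peak height is 2.1 mm
```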
AI is part of the reason why today’s machine vision systems are getting so much better so quickly. Their performance gains aren’t incremental, either. I’m not talking about standard generational improvements in technology. I’m talking about major improvements with each inspection – differences in performance realised over hours and days, not years and decades.
In fact, the machine vision solutions my Zebra colleagues and I build for automotive manufacturers and suppliers rely heavily on machine learning – and specifically deep learning – tools to make automated inspections possible and pass/fail decisions trustworthy. It’s the deep learning mechanism within the machine vision software that enables the ‘system’ to adapt to new types of defects, get better at its job, and quickly learn and support new applications. It’s what helps you (as a human) continuously improve the quality of inspections.
I mean, the deep learning-based machine vision systems that my team and I are now designing, training, implementing and overseeing in automotive factories are returning exceptional results when asked to ‘find what’s wrong with this picture.’ They’re instantly and consistently spotting dozens of different defects, verifying the alignment of parts, detecting the presence or absence of components, identifying surface damage, and checking the correctness of part numbers in milliseconds. They’re analysing tiny details, making precise measurements, and flagging dents, scratches, dark spots, gaps in coating and more. They’re spotting specific patterns or calibration markers that the human eye isn’t fine-tuned – and just can’t be trained – to recognise. It’s quite phenomenal.
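For readers who like to see the shape of this in code, below is a deliberately tiny sketch of how a deep learning model turns an inspection image into a pass/fail decision. I’m assuming a PyTorch-style workflow purely for illustration – it isn’t any particular vendor’s implementation, and the network here is an untrained stand-in for a model that would normally be trained on thousands of labelled defect images.

```python
# Minimal, hypothetical pass/fail inspection sketch (PyTorch used for
# illustration only). The model below is an untrained stand-in; a production
# system would load weights trained on labelled defect images.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Tiny multi-label classifier: one score per defect type."""

    def __init__(self, num_defect_types: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_defect_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = DefectClassifier().eval()

@torch.no_grad()
def inspect(image: torch.Tensor, threshold: float = 0.5) -> tuple[bool, list[int]]:
    """Return (passed, indices of suspected defect types) for one grayscale image."""
    scores = torch.sigmoid(model(image.unsqueeze(0)))[0]  # per-defect probabilities
    flagged = (scores > threshold).nonzero().flatten().tolist()
    return len(flagged) == 0, flagged

passed, defects = inspect(torch.rand(1, 480, 640))  # stand-in for a camera frame
print("PASS" if passed else f"FAIL, suspected defect types: {defects}")
```

Real systems obviously run far more capable networks, but the flow is the same: score the image for each defect type, flag anything over threshold, and pass the part only if nothing is flagged.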
However, if you’re an EV supplier or manufacturer who is going to have to scale significantly and quickly – and with quality and safety at the forefront – you can’t just lean on AI to improve your inspection process speed and accuracy.
You must strongly consider…
whether you need to be using more 3D vision and what type of 3D technology is best.
how simple the software is overall to learn, use, and sync with your hardware and other systems.
whether you’re really getting a benefit from bringing in so many different software and hardware components (and vendors) or whether you’d be better off streamlining.
who you’ll be able to lean on to help you scale or adjust your system as needed (because that may be a recurring and frequent need).
whether you’ll be able to make incremental changes with the same machine vision system or if a total rip and replace – or complicated ‘bolt on’ effort – will be required every time you need machine vision for a new application or on a new line.
It’s these things that are really making machine vision systems ‘so much better’ – and automotive industry quality inspection results along with them. And it’s making the right decisions around these considerations – not just ‘the addition of AI’ to machine vision systems – that will make it easier for you to scale EV battery production or EV assembly without compromising quality or safety.
I’m going to be at The Battery Show Europe in Stuttgart from 18-20 June if you’d like to chat more about anything machine vision related. I’m happy to share how my colleagues and I, along with Zebra partners and auto industry system integrators, have been working with automotive suppliers and manufacturers to ensure they have the right hardware, software, and people in place to support their high-tempo, high-stakes production operations and quality inspections, especially on EV lines. I can also show you how machine vision is being used to guide a robot and how a 3D line scanner is used for EV battery and assembly quality inspections. (We’ll have live demos in Zebra booth 4-A19.)
If you won’t be at the show, you can also email me here and we can connect after the event.
I’m keen to hear how you and your team are using machine vision today, what your experience has been so far, and what you wish was working better on your production lines. If you’re looking for ways to improve your quality inspection processes, or perhaps set up your new EV lines to ensure these ultra-high-speed, detail-oriented inspections are executed well, let’s talk.
Curious what other automotive industry leaders feel will improve quality inspections, or how they feel about AI’s impact on machine vision system performance? Check out this AI Machine Vision in the Automotive Industry Benchmark Report.
You can also learn more about deep learning and 3D vision in these fantastic posts from machine vision engineers: