Security-minded solution architects from Google Cloud and Qualcomm explain how far you’re going to have to go – and the lengths to which they’re going – to help protect your organization’s data.
There’s this notion that a secure network of devices is not good enough…that what you need is a network of secure devices. However, at Zebra, we believe the only thing that’s acceptable these days is a secure network of secure devices.
That’s why we’re working with Google Cloud and Qualcomm Technologies, Inc. to look deep into on-prem and cloud architectures to implement the best security features at every potential access point. Checking and updating settings at the network and edge device level is no longer sufficient. The only way to protect your intellectual property (IP) and reputation is to identify potential vulnerabilities and secure access to the network, device, silicon, software and architecture layers.
If it seems like overkill, it is. Because it has to be.
Geopolitical trends will only continue to heighten the importance of ever-higher levels of security. We’ve said it many times here at Zebra: any device that’s connected to a network is a potential point of vulnerability if not properly secured. So is any software component…any part of the software supply chain. And there are more people than ever targeting your organization, whether you realize it or not.
This takes me to the point I want to make today.
As you start bringing AI models into your environment, even if it’s just a single AI model that’s running at the edge, you must recognize the potential vulnerabilities it introduces into your organization and diligently assess, monitor, and manage its provenance and behavior.
Don’t wait for a regulation to be put into place or for a security incident to occur to adopt this practice. And don’t do it just for the sake of practicing responsible AI. Do it because there are risks you must mitigate so you aren’t burdening your organization with liabilities or potentially putting your employees, customers, or constituents in harm’s way. It’s not just tech companies that must worry about these things. It’s anyone and everyone who integrates AI into on-prem, cloud-based, or on-device workflows.
Fortunately, there are developers, engineers, solution architects, and business leaders at tech companies who are thinking about how best to support you from an AI security perspective.
On July 18, at the Aspen Security Forum, a group of leading technology companies including Google, Amazon, Intel, IBM, Microsoft and NVIDIA launched the Coalition for Secure AI (CoSAI). This is just another signal that AI security is more important than ever, especially given the rise in hackers using AI to make their phishing emails, text messages, deepfake videos, and audio attacks more sophisticated.
I asked Srikrishna (Sri) Shankavaram, the Principal Cybersecurity Architect on our AI & Advanced Development team here at Zebra, what his thoughts were about this coalition. He told me:
“The launch of CoSAI is important and timely. One of the key workstreams it will focus on is the software supply chain security for AI systems, which spans the entire lifecycle of AI systems from data collection and model training to deployment and maintenance. Due to the complexity and interconnectedness of this ecosystem, vulnerabilities at any stage can affect the entire system.”
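To make Sri’s supply-chain point a bit more concrete, here is a minimal sketch of one stage of that lifecycle: verifying a model artifact’s integrity against a pinned digest before it’s ever loaded at the edge. The file name and digest here are hypothetical placeholders, not anything Zebra, CoSAI, or our partners have published.

```python
import hashlib
from pathlib import Path

# Hypothetical example: digests you would pin when a model is first vetted,
# then check on every device before the model is loaded.
TRUSTED_DIGESTS = {
    "edge-model-v1.onnx": "0" * 64,  # placeholder digest for illustration
}

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path) -> bool:
    """Return True only if the artifact matches its pinned digest."""
    expected = TRUSTED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A real deployment would go further (signed manifests, attestation of the runtime itself), but even this simple check means a tampered or substituted model never makes it into the workflow.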
We talked about this a bit more, and he noted a few things I want to echo for you:
Edge AI is often touted as a more secure approach than cloud-based AI models because you only have to worry about securing the device running the LLM. There’s very little, if any, data flowing between that device and the cloud to run the application. That’s one of the reasons we’re seeing edge AI garner so much interest from retailers, manufacturers, healthcare providers, government leaders, and others who want to use generative AI (GenAI) to support frontline workers.
However, it’s critical to understand what’s required to ensure data security and privacy when using on-device AI; it still demands diligent effort.
Some key points were made about this in a recent conversation I had with Art Miller, VP of Business Development at Qualcomm Technologies, Inc., and Rouzbeh Aminpour, Global Technical Solution and Engineering Manager at Google Cloud.
Fortunately, AI security governance is going to be a key focus area for CoSAI, as governing AI security requires specialized resources to address AI’s unique challenges and risks. Developing a standard library for risk-and-control mapping would help the industry achieve consistent AI security practices. So, Sri told me he feels the CoSAI guidance could serve as a good template for you.
He also feels that an AI security maturity assessment checklist and a standardized scoring mechanism would enable organizations like yours to conduct self-assessments of your AI security measures, giving you and your customers assurance about the security of AI products. This approach parallels the secure software development lifecycle (SDLC) practices already employed by organizations like Zebra through Software Assurance Maturity Model (SAMM) assessments, so it could help you extend our standard practices into your environment if you’re using Zebra on-device AI tools. Keep an eye on the work CoSAI is doing with regard to governance.
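CoSAI hasn’t published such a checklist or scoring mechanism yet, but here is a hypothetical sketch of how a weighted self-assessment score might work: each control gets a relative weight and a self-assessed maturity level, and the result is normalized to a 0–100 scale. The control names are illustrative examples only.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """One item in a hypothetical AI security self-assessment checklist."""
    name: str
    weight: int    # relative importance, e.g. 1 (low) to 5 (critical)
    maturity: int  # self-assessed maturity, 0 (absent) to 3 (optimized)

def maturity_score(controls: list[Control]) -> float:
    """Weighted average of maturity levels, scaled to 0-100."""
    total_weight = sum(c.weight for c in controls)
    earned = sum(c.weight * c.maturity for c in controls)
    return 100.0 * earned / (3 * total_weight)

# Illustrative checklist items (not from any published standard):
checklist = [
    Control("Model provenance is verified before deployment", 5, 3),
    Control("Training data lineage is documented", 4, 2),
    Control("Deployed models are monitored for drift and abuse", 4, 1),
    Control("Incident response covers AI-specific failures", 3, 0),
]
```

Running `maturity_score(checklist)` on the sample items yields 56.25, the kind of single comparable number a standardized scoring mechanism could give you and your customers.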
In the meantime, reach out if you have questions or need more clarity on anything I just mentioned.
You may want to listen to my conversation with Art and Rouzbeh if you haven’t already:
I’d be happy to put you in touch with Sri, Art, or Rouzbeh as well to talk more about the current security climate and the mechanisms you’ll need throughout your entire tech stack to defend against threats and reduce vulnerabilities.
They can also explain what Zebra, Google Cloud, and Qualcomm Technologies are doing to make it as easy as possible for you to keep your defenses strong at the edge and core of your business. We know security isn’t a set-and-forget configuration.
###
Ask the Experts: Is On-Device AI Going to Prove to Be Hype or Helpful?
What do you think about on-device AI? Is it all hype or will it prove helpful to frontline workers? Zebra CTO Tom Bianculli sat down with Art Miller from Qualcomm and Rouzbeh Aminpour from Google Cloud in this new podcast episode to break down the benefits of this new approach to AI so you can decide for yourself.
Ask the Expert: “How Can I Put Responsible AI Into Practice?”
Responsible AI development, training, testing, and use require ongoing engagement and foundational practices to ensure long-term success and ethical implementation. Here is what you need to do to get started, whether you're an AI model developer, tester, or user.
Zebra's Senior Director of AI and Advanced Technologies, Stuart Hubbard, sat down with inspired software engineer Saliha Demir to find out what she has learned – and once misunderstood – about AI. Her early AI experiences are certainly a wake-up call to us all. Listen to this…
Tom Bianculli serves as the Chief Technology Officer of Zebra Technologies. In this role, he is responsible for the exploration of emerging opportunities, coordinating with product teams on advanced product development and Internet of Things (IoT) initiatives. The Chief Technology Office comprises engineering, business, customer research and design functions.
Tom began his career in the tech industry at Symbol Technologies, Inc. (later acquired by Motorola) in 1994 as part of the data capture solutions business. In the following years, he held positions of increasing responsibility, including architectural and director of engineering roles.
Tom has been granted over 20 U.S. patents and is a Zebra Distinguished Innovator and Science Advisory Board associate. He was recently named one of the Top 100 Leaders in Technology 2021 by Technology Magazine.
Tom holds bachelor of science and master of science degrees in electrical engineering from Polytechnic University, NYU and serves on the board of directors for the School of Engineering at the New York Institute of Technology.