As artificial intelligence (AI) gets more popular; security risks start forming. AI is a tool to help your employees become more productivity. Are companies able to keep up with this new technology tool? Below you will learn how AI is generating security risks faster than companies can keep up.
As reported by Wall Street Journal on August 10th, 2023, by Belle Lin.
AI Is Generating Security Risks Faster Than Companies Can Keep Up
Generative artificial intelligence-based tools are set to offer workers an enormous productivity boost, but business technology leaders charged with implementing them are scrambling to understand their potential cybersecurity risks.
As services like Microsoft’s Copilot, a generative AI tool embedded in the company’s workplace software, become more commonplace, business leaders are now responsible for understanding what is inside the new capabilities they are getting—and whether they pass security muster.
To aid in supply-chain management, companies have long relied on inventories of goods they receive from manufacturers, which detail where each component comes from. More recently, software vendors have faced a push to devise a similar “software bill of materials,” which lists what is inside the software’s code, including its open-source and proprietary components.
The idea behind such a thorough inventory is that companies can better track the nuts and bolts of their software—including whether it houses security vulnerabilities like the Log4j software flaw—and more quickly respond to them.
Tainted software from network monitoring firm SolarWinds, which led to a massive hack of businesses and the U.S. government in 2020, forced many companies to re-evaluate their connections with third-party software vendors.
Now, as AI models are trained on company data, businesses need to be even more aware of where that data might be exposed in a supply chain, said Robert Boyce, managing director of cyber resilience services at consulting firm Accenture.
The challenge with generative AI is that the technology is developing so quickly that companies are rushing to figure out if it introduces new cybersecurity challenges or magnifies existing security weaknesses. Meanwhile, technology vendors have inundated businesses with new generative AI-based features and offerings—not all of which they need or have even paid for.
That makes an AI bill of materials that much harder to manage, analysts say. Large language models are so complex that it is nearly impossible to audit them in-depth, they add.
“The concern that most security leaders have is that there’s no visibility, monitoring, or explainability for some of those features,” said Jeff Pollard, a cybersecurity analyst at Forrester Research.
In some cases, generative AI can introduce security risks because models are trained with pre-existing code, David Johnson, a data scientist at the European Union’s law-enforcement agency Europol, said at a recent conference in Brussels. “That code can contain a vulnerability, so if the model subsequently generates new code, it can inherit that same vulnerability,” he said.
Interest in generative AI has created an opening for startups like Protect AI, which aims to help businesses track the components of their homegrown AI systems through a platform it calls a “machine-learning bill of materials.” The company said its platform can also identify security policy violations and malicious code injection attacks.
The Seattle-based startup recently found a vulnerability in the popular open-source machine-learning tool MLflow, which could have allowed an attacker to take over a company’s cloud account credentials and access training data, said Protect AI co-founder and President Daryan “D” Dehghanpisheh.
Those types of attacks—plus new forms like “prompt injections,” where hackers use “prompts” or text-based instructions to manipulate generative AI systems into sharing sensitive information—all represent AI-based supply chain risks that CIOs need to account for, Dehghanpisheh said.
Though rapid growth in generative AI has given rise to a plethora of new tools and AI models, most companies are still trying to get a full view into their data, code and AI operations, said Protect AI co-founder and CEO Ian Swanson.
Meanwhile, some technology leaders like Bryan Wise, CIO of San Francisco-based sales and marketing software firm 6sense, have focused on asking vendors tougher questions before signing up for new generative AI features.
“If somebody is requesting a generative AI product,” Wise said, “we have to lean in a little bit harder and say, ‘Tell me how that data is being used? How can you ensure that our data is not being used to improve your model?”
Most CIOs have prioritized preventing company data from leaking outside of their domain or becoming training data for third-party AI models, according to analysts. For some like Rob Franch, CIO of Houston-based funeral and cemetery services provider Carriage Services, assurances and guardrails from established vendors like Microsoft help alleviate those concerns, he said.
Still, a separate but related cybersecurity issue exists in the code that generative AI assistants help programmers write. Tools like Amazon’s CodeWhisperer and Microsoft-owned GitHub Copilot suggest new code snippets and provide technical recommendations to developers.
By using such tools, it is possible that engineers could produce inaccurate code documentation, code that doesn’t follow secure development practices, or reveal system information beyond what companies would typically share, according to Forrester’s Pollard.
Many vendors are reluctant to say that code in their products was written by AI when they explain how they work, said Diego Souza, chief information security officer at engine and generator maker Cummins. Souza said it is crucial that his security team tests products, makes sure vendors supply reports about potential security flaws and provides life-cycle updates before the company introduces new software into its systems. “The biggest challenge is to understand [whether] the code was written by AI or not,” he said.
Industry executives like Mårten Mickos, CEO of bug-bounty platform HackerOne, are concerned about “AI code sprawl”—or what Mickos calls a proliferation of “noisy, mediocre, weird, difficult to trace” code—which becomes a cybersecurity issue when it is turned into software. As developers use AI tools to build software more quickly, they are also writing vulnerable code at a faster rate, said Peter McKay, CEO of cybersecurity firm Snyk.
While cybersecurity practices around software development and supply chains aren’t new, using AI to help write code exacerbates the challenges, Pollard said. It is now harder for vendors and companies to determine what is in their code, and keep their software bill of materials up-to-date.
Shamim Mohammad, chief information and technology officer of CarMax, is using AI to create efficiencies and improve the used-car retailer’s customer and associate experiences. But he said that using generative AI also “increases risks around malicious codes, copyright, intellectual property and privacy.”
CarMax is developing a “governance model” to guide its use of AI, he said, which can mitigate those risks through education and setting guidelines and boundaries.
6sense’s Wise, who is making a series of “small bets” on generative AI rather than jumping in headfirst, said his evaluation process involves examining the technology’s architecture and data practices, rather than just its promised business outcome.
“All CIOs need to dig in a little bit deeper,” Wise said. “You need to ask the right questions to feel comfortable about the decisions you’re making.”
Additional Technology Resources
Who Gets the Job of Figuring Out AI?
Leveraging Artificial Intelligence and Machine Learning in Operations Management
0 Comments