DeepSeek fails multiple security tests, experts warn businesses
WASHINGTON (TNND) — Researchers say the emerging Chinese generative AI model DeepSeek failed multiple security tests, potentially posing serious risks for users.
David Reid, a cybersecurity expert at Cedarville University, tells us it's alarming to see the results from this latest test on DeepSeek.
"It failed a bunch of benchmarks where you could jailbreak it. You could, in some cases, generate actual malware, which is a big red flag," Reid said.
AppSOC, a Silicon Valley security provider, ran the tests on DeepSeek. What it found was failure rates in several areas, including jailbreaking, injection attacks, and malware generation.
“It’s one thing to say something is sort of bad or damage the reputation of the company, but now you actually have an AI program that is producing code that is harmful,” said Reid.
Reid says these tests are common for large language models, and DeepSeek's failure to pass them is something consumers should take into account.
"It may be cheaper, but I'm paying for what I get, and the reason why it's cheaper is because of how they obtained it, how they're making it," said Reid.
Anjana Susarla, who specializes in responsible AI at Michigan State University, says organizations thinking of using DeepSeek in a corporate setting need to look at these results.
“Will they be able to manipulate these generative AI tools to gain access to sensitive information about the company and the people who work in the company?” Susarla said.
Susarla also believes that while it may be exciting that DeepSeek can do a lot of the same things as ChatGPT, these results show it's not at the same level.
"Can we use DeepSeek in our chatbots or any kind of customer-facing application? The answer is no," said Susarla.
AppSOC ultimately gave DeepSeek a risk score of 8.3 out of 10 and recommended that it not be used in any enterprise cases, especially those involving sensitive data or intellectual property.