Security Researchers Warn Rapid AI Adoption Is Creating Massive New Cybersecurity Risks

By Abdul Wasay

Security researchers from various cybersecurity firms have discovered that AI infrastructure exposes over 1 million services from 2 million hosts due to weak default configurations.

The findings reveal that businesses moving rapidly to self-host large language model infrastructure are sacrificing security for speed, putting decades of software security progress at risk as companies rush to adopt AI technology and deliver more value faster.

Researchers used certificate transparency logs to identify approximately 2 million hosts with 1 million exposed services. The investigation found that AI infrastructure was more vulnerable, exposed and misconfigured than any other software category previously examined. A significant number of hosts had been deployed straight out of the box with no authentication in place because authentication simply is not enabled by default in many of these projects.

Security researchers discovered numerous chatbots that left user conversations exposed. More concerning were generic chatbots hosting a wide range of models including multimodal LLMs freely available to use without authentication. Malicious users can jailbreak most models to bypass safety guardrails, a technique where attackers craft prompts that sneak past or override built-in safeguards by playing with instructions, context or hidden tokens to produce content that is supposed to be off-limits.

CyberArk researchers demonstrated that jailbreaks can work across practically any text-based model using automated methods. Their open-source framework FuzzyAI uses fuzzing techniques to systematically test LLM security boundaries by generating and testing adversarial inputs against models. The tool applies over 15 attacking methods including passive history which frames sensitive information within legitimate research contexts, taxonomy-based paraphrasing using persuasive language techniques, and best of N which exploits prompt augmentations through repeated sampling.

Researchers discovered exposed instances of agent management platforms including n8n and Flowise. The investigation identified over 90 exposed instances across sectors including government, marketing and finance with all chatbots, workflows, prompts and outward access open to anyone. One of the more surprising findings was the sheer number of exposed Ollama APIs accessible without authentication. Of 5,200 servers queried, 31% answered without requiring credentials with 518 models wrapping well-known frontier models from Anthropic, Deepseek, Moonshot, Google and OpenAI.

After analyzing applications in a lab environment, researchers found repeated insecure patterns including poor deployment practices with insecure defaults and misconfigured Docker setups, no authentication on fresh installs dropping users straight into high-privilege accounts, hardcoded credentials embedded in setup examples, and new technical vulnerabilities including arbitrary code execution discovered within days. Some projects powering large language model infrastructure have abandoned decades of security best practices in favor of shipping fast.

#LivingSafeOnline, #Cybersecurity, #AIThreats, #RapidAI, #DigitalSafety, #CyberDefense, #CyberRisk, #OnlineSecurity, #NationalSecurity, #CyberCrime, #AIinSecurity, #RiskManagement, #CyberPolicy, #CyberPower, #TechForGood

read more

India’s PNB hikes cybersecurity spend as AI models including Anthropic’s Mythos raise risks

Story by Nishit Navin and Ashwin Manikandan

BENGALURU/MUMBAI, May 5 (Reuters) – India’s Punjab National Bank is stepping up investments in cybersecurity and accelerating procurement of technology to guard against rising digital threats including those from advanced AI models, a senior executive said on Tuesday.

AD

The country’s third largest state-run lender by market capitalisation has earmarked about 20% of its technology budget for cybersecurity, or roughly 7 billion to 8 billion rupees ($73.5 million – $84 million) for the current financial year, executive director D Surendran told Reuters in an interview, adding that this allocation is more than 50% higher than the previous year.

“We don’t want to compromise on this kind of expenditure,” Surendran said, adding the bank will increase the spending further if required.

PNB’s move comes amid heightened regulatory focus on risks emerging from advanced AI models including Anthropic’s Mythos.

Last month India’s finance minister Nirmala Sitharaman met with heads of top banks to gauge preparedness against AI-related cybersecurity risks. India’s central bank has also been in talks with global regulators, lenders and government officials to understand the potential risks, Reuters has reported.

PNB is also fast-tracking purchases of security tools, including firewalls and other systems to address vulnerabilities, Surendran said.

“We have increased our frequency of audit… now we have made our audit process 24/7 so that the criticality will be identified fast,” Surendran said.

PNB SEES SUSTAINED LOAN GROWTH

The New-Delhi based lender, earlier in the day, posteda more than 14% rise in net profit to 52.25 billion rupees, helped by healthy loan growth and improving asset quality.

Loans grew 12.7% year-on-year while deposits rose 9.2%.

The bank will target 12-13% loan growth in financial year 2026/27, Surendran said, driven by credit to small and medium-sized enterprises and retail loans, he said.

The bank expects deposits to grow around 9-10% for the year.

($1 = 95.2800 Indian rupees)

(Reporting by Nishit Navin and Ashwin Manikandan; Editing by Ronojoy Mazumdar)

#LivingSafeOnline, #Cybersecurity, #PNB, #IndianBanks, #AIThreats, #AnthropicMythos, #DigitalSafety, #CyberDefense, #FinancialSecurity, #BankingRisk, #CyberInvestment, #AIinFinance, #NationalSecurity, #CyberRisk, #FinTechSecurity, #CyberPolicy, #CyberCrime, #RiskManagement, #CyberPower.
read more
Trustpilot
The rating of livingsafeonline.com at Trustprofile Reviews is 9.0/10 based on 12 reviews.
Verified by MonsterInsights