Security Researchers Warn Rapid AI Adoption Is Creating Massive New Cybersecurity Risks

By Abdul Wasay

Security researchers from several cybersecurity firms have found that self-hosted AI infrastructure exposes more than 1 million services across roughly 2 million hosts, largely because of weak default configurations.

The findings reveal that businesses racing to self-host large language model infrastructure are trading security for speed, putting decades of software security progress at risk in the rush to adopt AI and ship value faster.

Researchers used certificate transparency logs to identify approximately 2 million hosts exposing about 1 million services. The investigation found AI infrastructure to be more vulnerable, exposed and misconfigured than any other software category previously examined. A significant number of hosts had been deployed straight out of the box with no authentication in place, because many of these projects do not enable authentication by default.
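Certificate transparency logs record every publicly issued TLS certificate, so the hostnames they contain make a natural starting list for this kind of survey. As a minimal sketch (not the researchers' actual tooling), the snippet below parses a crt.sh-style JSON response, where each entry's `name_value` field may pack several newline-separated names from one certificate, and collects the unique hostnames:

```python
import json

def hostnames_from_ct_entries(ct_json: str) -> set[str]:
    """Extract unique hostnames from a crt.sh-style JSON response.

    Each entry's "name_value" field may hold several newline-separated
    names from one certificate; wildcard names are kept as-is.
    """
    names = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower()
            if name:
                names.add(name)
    return names

# Illustrative sample standing in for a real CT-log query result.
sample = json.dumps([
    {"name_value": "ollama.internal.example.com"},
    {"name_value": "chat.example.com\napi.chat.example.com"},
])
print(sorted(hostnames_from_ct_entries(sample)))
```

The hostnames here are hypothetical; in practice each candidate host would then be probed for the AI services of interest.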

Security researchers discovered numerous chatbots that left user conversations exposed. More concerning were generic chatbots hosting a wide range of models, including multimodal LLMs, freely usable without authentication. Most models can be jailbroken: attackers craft prompts that sneak past or override built-in safeguards by manipulating instructions, context or hidden tokens to produce content that is supposed to be off-limits.

CyberArk researchers demonstrated that jailbreaks can work against practically any text-based model using automated methods. Their open-source framework FuzzyAI applies fuzzing techniques to systematically probe LLM security boundaries, generating and testing adversarial inputs against models. The tool implements more than 15 attack methods, including passive history, which frames sensitive requests within legitimate research contexts; taxonomy-based paraphrasing, which uses persuasive language techniques; and Best-of-N, which exploits prompt augmentations through repeated sampling.
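The Best-of-N idea is simple enough to sketch: keep sampling randomly perturbed versions of a prompt until one slips past the refusal filter. The toy loop below uses only random case-flipping as the augmentation (the real technique also shuffles and perturbs characters) and a mock model in place of a real LLM endpoint; both are illustrative assumptions, not FuzzyAI's implementation:

```python
import random

def augment(prompt: str, rng: random.Random) -> str:
    """One Best-of-N style augmentation: randomly flip character case."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in prompt)

def best_of_n(model, prompt: str, is_refusal, n: int = 100, seed: int = 0):
    """Sample up to n augmented prompts; return the first (prompt, reply)
    pair the model does not refuse, or None if all n are refused."""
    rng = random.Random(seed)
    for _ in range(n):
        candidate = augment(prompt, rng)
        reply = model(candidate)
        if not is_refusal(reply):
            return candidate, reply
    return None

# Mock model standing in for a real LLM: it "refuses" every prompt
# unless the augmentation happens to deliver it fully upper-cased.
mock_model = lambda p: "REFUSED" if p != p.upper() else "complied"
result = best_of_n(mock_model, "do x", lambda r: r == "REFUSED")
print(result)
```

The point of the sketch is the asymmetry it demonstrates: the attacker only needs one of the n samples to land, so even a low per-attempt success rate compounds quickly.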

Researchers also discovered exposed instances of agent management platforms, including n8n and Flowise. The investigation identified more than 90 exposed instances across sectors including government, marketing and finance, with all chatbots, workflows, prompts and outbound access open to anyone. One of the more surprising findings was the sheer number of Ollama APIs accessible without authentication: of 5,200 servers queried, 31% answered without requiring credentials, serving 518 models that wrap well-known frontier models from Anthropic, DeepSeek, Moonshot, Google and OpenAI.
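Checking whether an Ollama server answers without credentials comes down to interpreting the response from its model-listing endpoint, `/api/tags`, which returns a JSON object with a `models` list. The sketch below (an assumption about how such a survey could classify responses, not the researchers' tool) separates the HTTP exchange from the classification so the logic is testable offline:

```python
import json

def parse_ollama_tags(status: int, body: str) -> dict:
    """Interpret a response from Ollama's /api/tags endpoint.

    A 200 carrying a "models" list means the API answered without
    credentials; a 401/403 means some auth layer sits in front of it.
    """
    if status == 200:
        data = json.loads(body)
        names = [m.get("name", "") for m in data.get("models", [])]
        return {"exposed": True, "models": names}
    return {"exposed": False, "models": []}

# Illustrative bodies standing in for live responses.
open_body = json.dumps({"models": [{"name": "llama3:8b"}, {"name": "deepseek-r1:7b"}]})
print(parse_ollama_tags(200, open_body))
print(parse_ollama_tags(401, ""))
```

A bare Ollama install ships with no authentication at all, so any instance reachable on its default port 11434 will fall into the "exposed" branch unless a reverse proxy adds an auth layer.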

After analyzing the applications in a lab environment, researchers found the same insecure patterns repeated across projects:

  • poor deployment practices, with insecure defaults and misconfigured Docker setups;

  • no authentication on fresh installs, dropping users straight into high-privilege accounts;

  • hardcoded credentials embedded in setup examples; and

  • new technical vulnerabilities, including arbitrary code execution, discovered within days.

Some projects powering large language model infrastructure have abandoned decades of security best practices in favor of shipping fast.
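Several of these patterns are mechanically detectable in a deployment config before it ever ships. The sketch below audits a docker-compose-style service definition (given as a plain dict) for the first three patterns; the service name and environment variable names such as `AUTH_ENABLED` are hypothetical, not taken from any real project:

```python
def audit_service(name: str, svc: dict) -> list[str]:
    """Flag common insecure-default patterns in a docker-compose-style
    service definition supplied as a dict."""
    findings = []
    for port in svc.get("ports", []):
        # "8080:8080" and "0.0.0.0:8080:8080" both bind every interface;
        # only an explicit loopback binding stays local.
        if not port.startswith("127.0.0.1:"):
            findings.append(f"{name}: port {port} bound on all interfaces")
    env = svc.get("environment", {})
    for key in env:
        if "PASSWORD" in key.upper() or "TOKEN" in key.upper():
            findings.append(f"{name}: credential {key} hardcoded in config")
    # Hypothetical flag: treat auth as off unless explicitly enabled.
    if str(env.get("AUTH_ENABLED", "false")).lower() != "true":
        findings.append(f"{name}: authentication not enabled")
    return findings

# An out-of-the-box config exhibiting all three patterns at once.
compose_service = {
    "ports": ["0.0.0.0:3000:3000"],
    "environment": {"ADMIN_PASSWORD": "changeme"},
}
for finding in audit_service("chatbot", compose_service):
    print(finding)
```

Even a checklist this small would have caught the no-authentication and hardcoded-credential defaults the researchers describe.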

#LivingSafeOnline, #Cybersecurity, #AIThreats, #RapidAI, #DigitalSafety, #CyberDefense, #CyberRisk, #OnlineSecurity, #NationalSecurity, #CyberCrime, #AIinSecurity, #RiskManagement, #CyberPolicy, #CyberPower, #TechForGood


“The Nuclear Weapons of Cybersecurity”: Why Treasury Just Warned Banks About AI’s New Power

24/7 Wall St
Omor Ibne Ehsan

Quick Read

  • Check Point Software (CHKP) CEO Nadav Zafrir says the cybersecurity landscape is undergoing a fundamental shift as AI accelerates both threats and defenses.

  • Cybersecurity vendors defending against AI-powered attacks gain a tailwind as regulators treat frontier AI as systemically relevant to financial institutions.


It is unusual for the Treasury Secretary to call the heads of the largest Wall Street banks into a room to discuss a software system. It is even more unusual when the Federal Reserve Chair joins him. Treasury Secretary Scott Bessent and Fed Chair Jerome Powell recently gathered the CEOs of major Wall Street banks to warn them about Anthropic’s newest AI platform, Mythos, an artificial intelligence system reportedly so capable at hunting down software vulnerabilities that the company has only handed a preview to a handful of big tech and finance firms so they can patch holes before the rest of the world catches up.

That is the backdrop for a striking line from Steven Weber, the retired UC Berkeley professor who led the Center for Long-Term Cybersecurity. “Zero-day exploits are the nuclear weapons of the cybersecurity world,” Weber said. He argues that the rapid coding ability of frontier large language models has made this moment inevitable.

What a Zero-Day Actually Is

A zero-day is a software flaw the vendor has not yet discovered, which means defenders have zero days to fix it before an attacker can use it. Historically, finding one took elite human researchers weeks or months. AI systems trained on vast code corpora can now sift through software at machine speed, and that is the capability that has regulators alarmed.


OpenAI’s latest model, GPT-5.4 cyber, is raising similar concerns, and both Mythos and GPT-5.4 cyber excel at detecting zero-day exploits. The same skill that lets a defender harden a trading system lets an attacker compromise it. That is the dual-use problem in one sentence.

Why the Treasury Convened Bank CEOs

Banks sit on top of layers of legacy code, vendor software, and custom trading infrastructure. If an AI system can find unknown bugs faster than human teams can patch them, the asymmetry favors whoever deploys the model first. Bessent and Powell appearing together signals that policymakers now treat frontier AI as systemically relevant, in the same category as liquidity stress tests and counterparty risk.

The cybersecurity industry is reading the same signals. On Check Point Software's (NASDAQ:CHKP) most recent earnings call, CEO Nadav Zafrir said, "The cybersecurity landscape is undergoing a fundamental shift as AI accelerates both the scale and sophistication of threats. Our strategy is purpose-built for this environment. With our four-pillar architecture, we are well positioned to benefit from accelerating demand for secure, enterprise-grade AI transformation at scale."

What Investors Should Watch

The arms race has two sides. AI defenders (endpoint security, identity, cloud security, and SOC automation vendors) gain a tailwind every time a regulator raises the alarm. AI attackers, in the wrong hands, raise tail risk for every financial institution running unaudited code. For more on the regulatory backdrop, the Treasury Department’s press release feed is the authoritative source for any formal follow-up to this private meeting.

Weber’s nuclear analogy is uncomfortable for a reason. Once a capability exists, the question shifts from whether it will be used to who controls its use, and on what timetable. Bank CEOs now have that timetable on their calendars.


#Cybersecurity #AIThreats #DigitalWeapons #TreasuryWarning #BankSecurity #FinancialSafety #ArtificialIntelligence #CyberDefense #NationalSecurity #RiskManagement #CyberWarfare #AIRegulation #FinTechSecurity #CyberCrime #DigitalSafety #OnlineSecurity #CyberRisk #FinancialInstitutions #CyberPolicy #CyberPower #LivingSafeOnline
