I’m a backup and recovery provider, but here’s why you shouldn’t just trust me

Story by Dave Joyce

How much do you trust your backups? It’s an important question, and one that few businesses think to ask themselves until it’s too late. There’s a persistent belief in operational technology (OT) environments that a completed backup equates to a recoverable system.

A green flag on a dashboard may indicate a successful backup, but unless that backup is continuously tested and validated against current OT conditions, the “recovery” element – the most critical part of a backup and recovery strategy – is left to chance. And the more complex the environment, the more those chances dwindle.

That’s especially true in critical infrastructure such as factories, hospitals, labs, and transport networks, where the underlying architecture is usually far more fragile and diverse than mainstream enterprise IT. Many of the systems that underpin production or safety run on legacy platforms that can’t be easily virtualized or replaced.

A backup taken from these environments may appear intact, but without validation there’s no way of knowing if the data is corrupted, if drivers are missing, or if images are incomplete.

Those issues rarely reveal themselves until an incident occurs and what should have been a “backup and recovery” process turns into a “disaster recovery” process.

A lot of organizations treat a completed backup as the final word on resilience. They see the green light, assume the process has worked, and trust that if anything goes wrong everything will behave as expected.

That’s a lot of trust to place in a basic backup process at a time when the threat surface is expanding faster than legacy-heavy OT environments can keep up. Last year, almost one-third of global ransomware attacks exploited unpatched vulnerabilities.

Cybercriminals are also four times more likely to target end-of-life systems – a list which, as of October 2025, now includes Windows 10. For organizations without a continuously validated backup and recovery process in place, the risks are mounting.

OT environments face pressures that traditional IT rarely encounters. Any interruption has immediate financial or safety consequences, which makes them prime targets for ransomware groups who know manufacturers, hospitals, and logistics providers can’t afford extended downtime.

The convergence of OT and IT only widens this attack surface, creating a landscape where even minor configuration drift or unspotted corruption can carry outsized consequences. In this context, treating a green tick as proof of resilience simply doesn’t hold up.

Why OT recovery is never as simple as it seems

The reality is that a company’s technology stack is rarely as modern as it might outwardly seem. Critical processes still rely on unsupported operating systems like Windows XP or Windows 7, bespoke embedded editions, or equipment controlled by aging Programmable Logic Controllers (PLCs).

Windows XP support ended in 2014, yet many organizations continue to operate XP-dependent devices. These systems often sit behind brittle chains of custom drivers and proprietary interfaces tied to hardware that may not have been manufactured in years.

Documentation is often missing, and the engineers who originally configured them have long since moved on. What’s left are inconsistent system states that can’t easily be lifted onto new or even slightly different hardware during a crisis.

Some OT environments limit change by necessity. Hospitals must avoid patching certain devices to maintain certification; manufacturing lines depend on chipsets that can’t be virtualized; air-gapped or remote sites rely on images that may not reflect current conditions.

In these cases, a backup that “succeeds” is often just one that didn’t encounter an obvious error – not one that can actually be restored.

Production lines, clinical systems, logistics hubs, and industrial control networks aren’t built with pause buttons. Even brief outages ripple outward into missed quotas, stalled deliveries, spoiled batches, safety risks, or overtime recovery costs.

It’s why ransomware campaigns increasingly target OT systems: they know the business impact is so severe that many organizations will pay simply to resume operations.

The Jaguar Land Rover incident, dubbed by some as “the most costly cyberattack in UK history”, is a case in point. When production was disrupted by issues linked to unprepared OT processes, delays cascaded across supply chains and dealer networks for weeks.

It demonstrated a truth the OT sector knows all too well – once operations stop, the financial and operational damage continues long after systems come back online.

Without proof that systems can be restored reliably, organizations are effectively gambling their production schedules, reputation, and revenue on the hope that the restore will work when they need it most.

How to validate your backups

So how do you actually validate? It’s not a single test – it’s a systematic process that moves from quick checks to full-scale recovery drills. Here’s how:

Start with integrity checks: Run hash verification or checksum comparisons to confirm that backup data matches the source and hasn’t been corrupted. This catches silent data degradation – file corruption, partial overwrites, and unexpected changes that sit undetected for months.
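
As a rough illustration, here is a minimal sketch of such an integrity check in Python, assuming the backup is a plain file copy on a mounted path; the paths are placeholders for your own source and backup locations.

```python
# Minimal integrity-check sketch: compare SHA-256 digests of source files
# against their backup copies. Paths are illustrative placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_root: Path, backup_root: Path) -> list[str]:
    """Return the files whose backup copy is missing or differs from the source."""
    mismatches = []
    for src in source_root.rglob("*"):
        if not src.is_file():
            continue
        dst = backup_root / src.relative_to(source_root)
        if not dst.exists() or sha256_of(src) != sha256_of(dst):
            mismatches.append(str(src))
    return mismatches

if __name__ == "__main__":
    bad = verify_backup(Path("/data/plant-historian"), Path("/mnt/backup/plant-historian"))
    print(f"{len(bad)} file(s) failed verification")
```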

Move to virtual test restores: Boot a backup in an isolated virtual environment to confirm that operating systems, drivers, and applications load as expected. This reveals missing dependencies, configuration issues, and service initialization failures that integrity checks can’t detect.
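
A simple way to rehearse this is to boot the restored image headless with no network attached. The sketch below assumes the restore produced a qcow2 disk image and that QEMU is available; the image path, memory size, and timeout are illustrative.

```python
# Sketch of an isolated boot test: start a restored disk image in QEMU with
# networking disabled and a snapshot overlay so the backup is never modified.
import subprocess

BACKUP_IMAGE = "/mnt/restore-test/scada-server.qcow2"  # hypothetical restored image

cmd = [
    "qemu-system-x86_64",
    "-m", "4096",        # 4 GB of RAM for the test VM
    "-snapshot",         # write changes to a temporary overlay, not the backup
    "-nic", "none",      # no network interface: keep the test fully isolated
    "-vnc", ":0",        # expose the console over local VNC instead of a GUI window
    "-drive", f"file={BACKUP_IMAGE},format=qcow2",
]

try:
    # If QEMU exits quickly with an error, the image is unlikely to restore
    # cleanly; if the timeout fires, the boot at least ran without crashing.
    subprocess.run(cmd, timeout=300, check=True)
except subprocess.TimeoutExpired:
    print("Boot ran for 5 minutes without crashing - rerun interactively and verify services over VNC")
except subprocess.CalledProcessError as exc:
    print(f"QEMU exited with an error (code {exc.returncode}) - image likely unbootable")
```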

Test on actual hardware: Restore to the same type of production hardware you’d use in a real recovery. This exposes physical dependencies that virtualization masks: driver compatibility issues, firmware mismatches, and hardware-specific configurations. A backup that boots in a VM might fail entirely on real hardware.

Run full recovery drills: Restoring one system is different from restoring 20 or 200. Test scenario-based drills that mirror real incidents – ransomware, site failures, supply chain disruptions – and document how long recovery actually takes versus your RTO targets.
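
One way to capture the timing data is to wrap each restore job and compare the measured time against the target, as in this illustrative sketch; the system names, RTO values, and the restore_system() placeholder stand in for whatever your real tooling exposes.

```python
# Sketch for recording drill results against RTO targets.
import time

RTO_TARGETS_MIN = {"mes-server": 60, "historian": 120, "hmi-workstation": 30}  # illustrative

def restore_system(name: str) -> None:
    """Placeholder: invoke your real restore job (CLI, API, or runbook step)."""
    ...

def run_drill() -> None:
    for system, target in RTO_TARGETS_MIN.items():
        start = time.monotonic()
        restore_system(system)
        elapsed_min = (time.monotonic() - start) / 60
        status = "OK" if elapsed_min <= target else "MISSED RTO"
        print(f"{system}: {elapsed_min:.1f} min (target {target} min) -> {status}")

if __name__ == "__main__":
    run_drill()
```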

Build it into incident response: Train teams on which backups to use in different scenarios, how to isolate compromised systems, and how to restore in the right order. Make recovery muscle memory, not something you frantically figure out during a crisis.

Document and refine: After every test, record what worked and what didn’t. Update your runbooks, feed lessons back into your backup schedule and storage choices, and create a cycle of continuous improvement. The 3-2-1-1-0 model captures this in its final digit: zero errors.
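
For reference, the rule itself is easy to state as an explicit check – three copies, on two media types, one offsite, one offline or immutable, zero verification errors – as in this small illustrative sketch.

```python
# Tiny sketch of the 3-2-1-1-0 rule as an explicit check. The sample values are illustrative.
from dataclasses import dataclass

@dataclass
class BackupPosture:
    copies: int
    media_types: int
    offsite_copies: int
    offline_or_immutable_copies: int
    verification_errors: int

def meets_3_2_1_1_0(p: BackupPosture) -> bool:
    return (p.copies >= 3 and p.media_types >= 2 and p.offsite_copies >= 1
            and p.offline_or_immutable_copies >= 1 and p.verification_errors == 0)

print(meets_3_2_1_1_0(BackupPosture(3, 2, 1, 1, 0)))  # True only when all five conditions hold
```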

When organizations rehearse these restores systematically and refine their processes based on results, they turn backup and recovery from a box-ticking exercise into a resilient operational function. Validation gives you certainty, not hope, that recovery will work when it really counts.

The green light means nothing

I’m a backup and recovery expert, and this is why you shouldn’t just trust me—or anyone who says your backups will simply work when you need them.

When it comes to operational resilience, organizations should operate with zero trust until they can prove to themselves, and demonstrate to others, that they can recover exactly as needed. Trust is what you place in a green light on a dashboard. Proof is what you earn through testing and validation.

In OT environments where downtime is detrimental, where legacy systems can’t be easily rebuilt, and where attackers target the most vulnerable points – proof isn’t optional. A completed backup offers reassurance. A validated backup offers certainty. And in critical infrastructure, only certainty keeps operations running.

read more

Top 10 Best Practices for Effective Data Protection

Data is the lifeblood of productivity, and protecting sensitive data is more critical than ever. With cyber threats evolving rapidly and data privacy regulations tightening, organizations must stay vigilant and proactive to safeguard their most valuable assets. But how do you build an effective data protection framework?

In this article, we’ll explore data protection best practices from meeting compliance requirements to streamlining day-to-day operations. Whether you’re securing a small business or a large enterprise, these top strategies will help you build a strong defense against breaches and keep your sensitive data safe.

1. Define your data goals

When tackling any data protection project, the first step is always to understand the outcome you want.

First, understand what data you need to protect. Identify your crown jewel data, and where you THINK it lives. (It’s probably more distributed than you expect, but this is a key step to help you define your protection focus.) Work with business owners to find any data outside the typical scope that you need to secure.

This is all to answer the question: “What data would hurt the company if it were breached?”

Second, work with the C-suite and board of directors to define what your data protection program will look like. Understand your budget, your risk tolerance for data loss, and what resources you have (or may need). Define how aggressive your protection program will be so you can balance risk and productivity. All organizations need to strike a balance between the two.

2. Automate data classification

Next, begin your data classification journey—that is, find your data and catalog it. This is often the most difficult step in the journey, as organizations create new data all the time.

Your first instinct may be to try to keep up with all your data, but this may be a fool’s errand. The key to success is to have classification capabilities everywhere data moves (endpoint, inline, cloud), and rely on your DLP policy to jump in when risk arises. (More on this later.)

Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly discover and classify data without human input.
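
For contrast, here is a minimal sketch of the traditional pattern-based approach the article compares AI against; the regular expressions are illustrative only, and production DLP engines add validation such as Luhn checks to reduce false positives.

```python
# Minimal sketch of traditional pattern-based classification. Patterns are illustrative.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data labels found in a blob of text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

print(classify("Contact jane@example.com, card 4111 1111 1111 1111"))
# prints the matched labels, e.g. {'email', 'credit_card'} (set order may vary)
```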

3. Focus on zero trust security for access control

Adopting a zero trust architecture is crucial for modern data protection strategies to be effective. Based on the maxim “never trust, always verify,” zero trust assumes security threats can come from inside or outside your network. Every access request is authenticated and authorized, greatly reducing the risk of unauthorized access and data breaches.

Look for a zero trust solution that emphasizes the importance of least-privileged access control between users and apps. With this approach, users never access the network, reducing the ability for threats to move laterally and propagate to other entities and data on the network. The principle of least privilege ensures that users have only the access they need for their roles, reducing the attack surface.

4. Centralize DLP for consistent alerting

Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds.

The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine (endpoint, network, CASB), as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response.
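
One way a centralized engine achieves this is by fingerprinting the matched content together with the policy, so the same file seen on the endpoint, inline, and in a SaaS app rolls up to a single incident. The sketch below is an illustration of the idea, not any vendor’s actual mechanism.

```python
# Illustrative sketch of alert de-duplication in a centralized DLP engine:
# the same content + policy match reported from three channels becomes one incident.
import hashlib

open_incidents: dict[str, list[str]] = {}  # fingerprint -> channels that reported it

def fingerprint(content: bytes, policy_id: str) -> str:
    return hashlib.sha256(policy_id.encode() + content).hexdigest()

def report_match(content: bytes, policy_id: str, channel: str) -> str:
    fp = fingerprint(content, policy_id)
    open_incidents.setdefault(fp, []).append(channel)
    return fp

doc = b"internal pricing sheet ..."  # hypothetical sensitive file contents
for channel in ("endpoint", "web-inline", "casb"):
    report_match(doc, "PCI-policy-07", channel)

print(len(open_incidents), "incident(s) opened for 3 detections")  # 1 incident
```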

Look to embrace Gartner’s security service edge approach, which delivers DLP from a centralized cloud service. Focus on vendors that support the most channels so that, as your program grows, you can easily add protection across devices, inline, and cloud.

5. Ensure blocking across key loss channels

Once you have a centralized DLP, focus on the most important data loss channels to your organization. (You’ll need to add more channels as you grow, so ensure your platform can accommodate all of them and grow with you.) The most important channels can vary, but every organization focuses on certain common ones:

  • Web/Email: The most common ways users accidentally send sensitive data outside the organization.
  • SaaS data (CASB): Another common loss vector, as users can easily share data externally.
  • Endpoint: A key focus for many organizations looking to lock down USB, printing, and network shares.
  • Unmanaged devices/BYOD: If you have a large BYOD footprint, browser isolation is an innovative way to secure data headed to these devices without an agent or VDI. Devices are placed in an isolated browser, which enforces DLP inspection and prevents cut, paste, download, or print. (More on this later.)
  • SaaS posture control (SSPM/supply chain): SaaS platforms like Microsoft 365 can often be misconfigured. Continuously scanning for gaps and risky third-party integrations is key to minimizing data breaches.
  • IaaS posture control (DSPM): Most companies have a lot of sensitive data across AWS, Azure, or Google Cloud. Finding it all, and closing risky misconfigurations that expose it, is the driver behind data security posture management (DSPM).

6. Understand and maintain compliance

Getting a handle on compliance is a key step for great data protection. You may need to keep up with many different regulations, depending on your industry (GDPR, PCI DSS, HIPAA, etc.). These rules are there to make sure personal data is safe and organizations are handling it the right way. Stay informed on the latest mandates to avoid fines and protect your brand, all while building trust with your customers and partners.

To keep on top of compliance, strong data governance practices are a must. This means regular security audits, keeping good records, and making sure your team is well-trained. Embrace technological approaches that help drive better compliance, such as data encryption and monitoring tools. By making compliance part of your routine, you can stay ahead of risks and ensure your data protection is both effective and in line with requirements.

7. Strategize for BYOD

Although not a concern for every organization, unmanaged devices present a unique challenge for data protection. Your organization doesn’t own or have agents on these devices, so you can’t ensure their security posture or patch level, wipe them remotely, and so on. Yet their users (like partners or contractors) often have legitimate reasons to access your critical data.

You don’t want sensitive data to land on a BYOD endpoint and vanish from your sight. Until now, solutions to secure BYOD have revolved around CASB reverse proxies (problematic) and VDI approaches (expensive).

Browser isolation provides an effective and elegant way to secure data without the cost and complexity of those approaches. By placing BYOD endpoints in an isolated browser (part of the security service edge), you can enforce strong data protection without an endpoint agent. Data is streamed to the device as pixels, allowing interaction with the data but preventing download and cut/paste. You can also apply DLP inspection to the session and data based on your policy.

8. Control your cloud posture with SSPM and DSPM

Cloud posture is one of the most commonly overlooked aspects of data hygiene. SaaS platforms and public clouds have many settings that DevOps teams without security expertise can easily overlook. The resulting misconfigurations can lead to dangerous gaps that expose sensitive data. Many of the largest data breaches in history have happened because such gaps let adversaries walk right in.

SaaS security posture management (SSPM) and data security posture management (DSPM for IaaS) are designed to uncover and help remediate these risks. By leveraging API access, SSPM and DSPM can continuously scan your cloud deployment, locate sensitive data, identify misconfigurations, and remediate exposures. Some SSPM approaches also feature integrated compliance with frameworks like NIST, ISO, and SOC 2.
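
As a small taste of what a DSPM-style check looks like, the sketch below uses boto3 to flag S3 buckets without a public access block; it assumes AWS credentials are already configured, and a real SSPM/DSPM product runs hundreds of such checks continuously across services.

```python
# Minimal DSPM-style check with boto3: flag S3 buckets whose public access
# is not fully blocked. Assumes AWS credentials are available in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public access block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"[WARN] {name}: public access is not fully blocked")
```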

9. Don’t forget about data security training

Data security training is often where data protection programs fall apart. If users don’t understand or support your data protection goals, dissent can build across your teams and derail your program. Spend time building a training program that highlights your objectives and the value data protection will bring the organization. Ensure upper management supports and sponsors your data security training initiatives.

Some solutions offer built-in user coaching with incident management workflows. This valuable feature allows you to notify users about incidents via Slack or email for justification, education, and policy adjustment if needed. Involving users in their incidents helps promote awareness of data protection practices as well as how to identify and safely handle sensitive content.
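
A coaching notification can be as simple as a webhook call. The sketch below posts a message to a Slack incoming webhook; the webhook URL, user, policy, and file names are placeholders, and commercial solutions wire this into their incident workflow for you.

```python
# Sketch of a user-coaching notification via a Slack incoming webhook.
# All identifiers below are illustrative placeholders.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical webhook

def coach_user(user: str, policy: str, file_name: str) -> None:
    message = (
        f"Hi {user}, your upload of `{file_name}` matched the *{policy}* policy. "
        "If this was work-related, please reply with a brief justification; "
        "otherwise, see the data handling guide."
    )
    resp = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    resp.raise_for_status()

coach_user("j.smith", "Customer PII", "customer_export.csv")
```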

10. Automate incident management and workflows

Lastly, no data protection program would be complete without day-to-day operations. Ensuring your team can efficiently manage and quickly respond to incidents is critical. One way to ensure streamlined processes is to embrace a solution that enables workflow automation.

Designed to automate common incident management and response tasks, this feature can be a lifesaver for IT teams. By saving time and money while improving response times, it lets IT teams do more with less. Look for solutions that have a strong workflow automation offering integrated into the SSE to make incident management efficient and centralized.

Bringing it all together

Data protection is not a one-time project; it’s an ongoing commitment. Staying informed of data protection best practices will help you build a resilient defense against evolving threats and ensure your organization’s long-term success.

Remember: investing in data protection is not just about mitigating risks and preventing data breaches. It’s also about building trust, maintaining your reputation, and unlocking new opportunities for growth.

read more

16 Chrome Extensions Hacked, Exposing Over 600,000 Users to Data Theft

By Ravie Lakshmanan

A new attack campaign has targeted known Chrome browser extensions, leading to at least 16 extensions being compromised and putting over 600,000 users at risk of data exposure and credential theft.

The attack targeted publishers of browser extensions on the Chrome Web Store via a phishing campaign and used their access permissions to insert malicious code into legitimate extensions in order to steal cookies and user access tokens.

The first company known to have been compromised was cybersecurity firm Cyberhaven.

On December 27, Cyberhaven disclosed that a threat actor compromised its browser extension and injected malicious code to communicate with an external Command and Control (C&C) server located on the domain cyberhavenext[.]pro, download additional configuration files, and exfiltrate user data.

“Browser extensions are the soft underbelly of web security,” says Or Eshed, CEO of LayerX Security, which specializes in browser extension security. “Although we tend to think of browser extensions as harmless, in practice, they are frequently granted extensive permissions to sensitive user information such as cookies, access tokens, identity information, and more.

“Many organizations don’t even know what extensions they have installed on their endpoints, and aren’t aware of the extent of their exposure,” says Eshed.

Once news of the Cyberhaven breach broke, additional extensions that were also compromised and communicating with the same C&C server were quickly identified.

Jamie Blasco, CTO of SaaS security company Nudge Security, identified additional domains resolving to the same IP address as the C&C server used in the Cyberhaven breach.
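
That kind of infrastructure pivot can be approximated with nothing more than DNS lookups, as in the illustrative sketch below; the candidate domain list is hypothetical, and the C&C domain (defanged with [.] above) is re-armed here purely for resolution.

```python
# Sketch of the infrastructure pivot described above: resolve candidate domains
# and flag any that share an IP address with the known C&C domain.
import socket

KNOWN_C2 = "cyberhavenext.pro"
CANDIDATES = ["example-ext-update.pro", "another-suspect.info"]  # hypothetical list

def resolve(domain: str) -> str | None:
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None  # domain no longer resolves

c2_ip = resolve(KNOWN_C2)
for domain in CANDIDATES:
    ip = resolve(domain)
    if ip and ip == c2_ip:
        print(f"{domain} resolves to the same IP as the C&C server ({ip})")
```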

Additional browser extensions currently suspected of having been compromised include:

  • AI Assistant – ChatGPT and Gemini for Chrome
  • Bard AI Chat Extension
  • GPT 4 Summary with OpenAI
  • Search Copilot AI Assistant for Chrome
  • TinaMInd AI Assistant
  • Wayin AI
  • VPNCity
  • Internxt VPN
  • Vindoz Flex Video Recorder
  • VidHelper Video Downloader
  • Bookmark Favicon Changer
  • Castorus
  • Uvoice
  • Reader Mode
  • Parrot Talks
  • Primus

These additional compromised extensions indicate that Cyberhaven was not a one-off target but part of a wide-scale attack campaign targeting legitimate browser extensions.

Analysis of the compromised Cyberhaven extension indicates that the malicious code targeted identity data and access tokens of Facebook accounts, specifically Facebook business accounts:

User data collected by the compromised Cyberhaven browser extension (source: Cyberhaven)

Cyberhaven says that the malicious version of the browser extension was removed about 24 hours after it went live. Some of the other exposed extensions have also already been updated or removed from the Chrome Web Store.

However, the fact the extension was removed from the Chrome store doesn’t mean that the exposure is over, says Or Eshed. “As long as the compromised version of the extension is still live on the endpoint, hackers can still access it and exfiltrate data,” he says.
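
In the meantime, organizations can at least check endpoints for the suspected extensions listed above. The sketch below inspects locally installed Chrome extension manifests on a Linux default profile; paths differ per OS, localized extensions may report a `__MSG_*__` placeholder name, and matching on published extension IDs is more reliable where those are available.

```python
# Sketch of an endpoint check: list locally installed Chrome extensions and
# flag names that appear on the suspected-compromised list. Path shown is the
# Linux default profile; the name subset below is illustrative.
import json
from pathlib import Path

SUSPECT_NAMES = {"Reader Mode", "VPNCity", "Internxt VPN", "Castorus"}  # subset, illustrative

ext_root = Path.home() / ".config/google-chrome/Default/Extensions"
for manifest in ext_root.glob("*/*/manifest.json"):
    name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "")
    ext_id = manifest.parent.parent.name  # the extension ID is the top-level directory name
    if name in SUSPECT_NAMES:
        print(f"[WARN] possibly compromised extension installed: {name} ({ext_id})")
```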

Security researchers are continuing to look for additional exposed extensions, but the sophistication and scope of this attack campaign have raised the stakes for many organizations when it comes to securing their browser extensions.

read more