---
title: "WD My Cloud Data Recovery — NAS Network Storage - EXALAB"
description: "Data recovery from WD My Cloud, My Cloud Home and multi-bay EX/PR/DL network storage. OS 5 migration recovery, RAID rebuild. Free diagnostics and pickup, from CZK 1,500."
url: "https://www.exalab.cz/en/data-media/wd-my-cloud-data-recovery"
date: "2026-05-17T17:24:45+00:00"
language: "en-GB"
---

# Data Recovery from WD My Cloud Network Storage

Drive degradation and failure within an array, an unsuccessful RAID rebuild, data loss from multi-bay devices, inaccessible data on My Cloud Home, failure after an OS migration: we have many years of experience with data recovery from all types of storage media, including every generation of WD My Cloud network storage.

 [ Consultation with a technician ](https://www.exalab.cz/index.php?Itemid=200#contactnumbers)  [ Consultation with a technician ](tel:+420608177773)  [ Free diagnostic evaluation ](https://www.exalab.cz/index.php?Itemid=200#contactnumbers) [ Data recovery price list ](https://www.exalab.cz/index.php?Itemid=198)

![](https://www.exalab.cz/images/svg/diag-cta.svg)**Free diagnostics**: free consultation, diagnostics, pick-up

![](https://www.exalab.cz/images/svg/success-dollar-cta.svg)**You pay only for success**: no data – no fee

![](https://www.exalab.cz/images/svg/express-cta.svg)**Express recovery 24/7**: priority service available

![](https://www.exalab.cz/images/svg/success-rate-cta.svg)**Success rate > 95%**: own EXALAB laboratory

**WD My Cloud** is a line of network-attached storage (NAS) devices that Western Digital has been selling since 2013. The “My Cloud” label, however, covers several types of devices with different internal designs—from single-drive consumer models, through the My Cloud Home line built around a mobile app, to the two- and four-bay units for more demanding use (EX, PR, DL series). Data recovery procedures differ between these types; what they have in common is that we encounter all of them regularly in our lab. We work with all generations, including units damaged by the OS 3 to OS 5 migration. Diagnostics is free, and data recovery starts at CZK 1,500.

 ## <a id="MyCloudGuide"></a>Article guide

- [What to do when your My Cloud has failed](#MyCloudWhatToDo)
- [Identify your model](#MyCloudIdentify)

- [WD My Cloud in our lab](#MyCloudInLab)
- [Failure of one or more drives in the device](#MyCloudDiskFailure)

- [Failure after the OS 3 to OS 5 migration](#MyCloudOSMigration)
- [Classic My Cloud (single-drive and Mirror Gen 2)](#MyCloudSingleBay)

- [My Cloud Home—why you only see anonymous files after removing the drive](#MyCloudHome)
- [Multi-bay My Cloud—EX, PR and DL series](#MyCloudMultiBay)

- [Most common failures we see in the lab](#MyCloudFailures)
- [Frequently asked questions](#MyCloudFAQ)

 ## <a id="MyCloudWhatToDo"></a>What to do when your My Cloud has failed

If your WD My Cloud is unresponsive, shows a red LED, reports “Drive not found,” failed to start after a firmware update, or your data has disappeared—follow these principles. They determine whether successful data recovery is possible:

1. **Power the device off and disconnect it from the network.** This is especially important for multi-bay units with a degraded array, where every additional spin-up adds load to already strained drives and may push another drive in the array into failure—turning a recoverable situation into an unrecoverable one quickly.
2. **Do not run a factory reset from the admin interface.** Options like “Restore Factory Defaults,” “Quick Factory Restore” or “Full Factory Restore” sit in the menu next to safer choices, and a panicked user can easily pick the wrong one. In some variants, these operations overwrite the partition table, RAID metadata, or the data volume itself.
3. **Do not attempt to reinstall the firmware or “fix” the boot.** If the device stopped working after a failed OS 3 to OS 5 migration, the data on the internal drives typically remains intact—but further attempts to flash the firmware can damage the partition table.
4. **For multi-bay models, do not swap drives between bays.** If you remove the drives from their bays for inspection, always note the order (slot 1, 2, 3, 4). mdadm RAID metadata is order-sensitive, and returning drives to different positions risks an unsuccessful rebuild.
5. **Do not start a “blind” rebuild.** If the array reports degradation and you replace a drive with a new one, the device automatically starts a rebuild. But if a second or third drive in the array is also in poor condition (a typical scenario for 5–8-year-old units), the rebuild stresses the remaining drives and can finish them off. **For a RAID 5 array, the rebuild after a single drive failure is the most dangerous of all operations.**
6. **Do not run any “repair” tools on the drives unless you are absolutely sure of what you are doing** (and if you were, you probably wouldn’t be reading this guide). Windows chkdsk or Linux fsck run on a raw drive from a RAID array typically damages the array metadata.
7. **Get in touch.** Fill out a short form, call us, or write—we’ll suggest the best path based on the generation and the failure description. **Diagnostics and pickup are free** within the Czech Republic; pickup from other EU countries is available by arrangement.

**Warning, especially for My Cloud Home and Home Duo:** recovering data from this line requires a special procedure. Removing the drive and trying to read it on a regular computer will return thousands of files with anonymous hexadecimal names and no structure—the original file names and folders are stored in a separate database that needs to be processed with a professional tool.

 [Free consultation, diagnostics, pickup](https://www.exalab.cz/index.php?Itemid=200#contactnumbers)

 ## <a id="MyCloudIdentify"></a>Identify your model

Data recovery procedures differ significantly between My Cloud series—different operating systems, different file systems, different ways of storing data. Click on your model to jump to the section with technical details and the recovery approach.

- **WD My Cloud** (single-drive, classic)—[go to the classic models section](#MyCloudSingleBay)
- **My Cloud Mirror Gen 2** (two-drive, RAID 1)—[go to the classic models section](#MyCloudSingleBay)

- **My Cloud Home, My Cloud Home Duo** (proprietary REST SDK)—[go to the Home section](#MyCloudHome)
- **My Cloud EX2 Ultra, EX2100, EX4100** (multi-bay)—[go to the multi-bay section](#MyCloudMultiBay)

- **My Cloud PR2100, PR4100** (multi-bay Pro)—[go to the multi-bay section](#MyCloudMultiBay)
- **My Cloud DL2100, DL4100** (multi-bay business)—[go to the multi-bay section](#MyCloudMultiBay)

- **Not sure which model you have?**—[see failure descriptions by symptom](#MyCloudFailures) or [contact us](https://www.exalab.cz/index.php?Itemid=200#contactnumbers) and we’ll help identify it

If you’re not sure which category your unit belongs to, the type of interface and the way it’s used are the deciding factors: My Cloud Home is set up via a mobile app and requires a WD account; classic My Cloud is set up via the device’s web admin interface (dashboard) on the local network; multi-bay models are recognizable at a glance—they have two or four drive bays.

 ## <a id="MyCloudInLab"></a>WD My Cloud in our lab

The WD My Cloud line covers personal and small-business network storage that Western Digital has been selling since 2013. From a data recovery perspective, however, it cannot be treated as a homogeneous family—inside the plastic enclosures sit three architecturally distinct categories:

- **Classic models** (My Cloud, My Cloud Mirror Gen 2)—a standard 3.5" WD Red or Blue drive in a plastic enclosure with a network interface. Linux operating system, EXT4 file system. Single-drive My Cloud has no RAID; the two-drive Mirror Gen 2 contains two drives in RAID 1 (mirroring) via Linux mdadm.
- **My Cloud Home and Home Duo**—like the classic models, they contain 3.5" WD Red or Blue drives with an EXT4 file system, but on top of EXT4 sits a proprietary REST SDK layer that stores user files as anonymous hexadecimal “content IDs” and keeps the mapping to original names and folders in a separate SQLite database. After removing the drive, you can mount it under Linux, but the data only appears as thousands of unnamed files.
- **Multi-bay models** (EX2 Ultra, EX2100, EX4100, PR2100, PR4100, DL2100, DL4100)—Linux mdadm software RAID (RAID 0, 1, 5, 10, JBOD depending on the bay count) with EXT4 over the array. Architecturally close to Synology or QNAP, just with WD’s own firmware.

Internal drives are typically WD Red, Red Plus, or WD Blue in the oldest models; some newer models ship with SMR (Shingled Magnetic Recording) drives that complicate RAID rebuilds. Multi-bay models use various ARM SoCs (Marvell Armada 370, 385 and 388 in the EX series, Intel Pentium in the PR series).

Typical situations we see with My Cloud devices in the lab:

- red LED and “Drive not found” after the forced OS 3 to OS 5 migration (years 2021–2022, thousands of affected users),
- drive degradation in 5+-year-old units—SMART errors, knocking sounds, spontaneous disconnections,
- failed RAID 5 rebuild in multi-bay models after replacing a single defective drive,
- inaccessible data after an attempted factory reset from the admin interface,
- My Cloud Home—the user removed the drive after a unit failure, mounted it under Linux, and discovered only anonymous hex files,
- logical EXT4 file system damage after improper shutdown or power outage,
- drive damage from a power surge during a storm—typically all drives in the array at once.

If you’re dealing with a **WD My Book Live** or **My Book Live Duo**—this is an older line of network storage, no longer supported (and affected by the mass remote wipe in 2021). Architecturally they are closer to the classic single-drive My Cloud than to the external USB My Book; this page covers them, and the recovery approach is analogous to the classic single-drive My Cloud section below.

→ **Main WD pillar:** [Western Digital (WD) data recovery](https://www.exalab.cz/index.php?option=com_content&view=article&id=160&Catid=17)—an overview of all series, internal HDDs, external My Book and My Passport, networked My Cloud, WD SSDs.

 3 photos: WD My Cloud types (single-bay classic + My Cloud Home + multi-bay PR/EX)

 ## <a id="MyCloudDiskFailure"></a>Failure of one or more drives in the device

The most common situation in which a My Cloud arrives at the lab isn’t some exotic scenario—it’s **gradual failure of the drives themselves**. Most units in service today are 5–10 years old, often running 24/7, and mechanical wear at this stage of their lifespan is to be expected.

Symptoms vary slightly between models and device types, but the principles are shared:

- SMART errors in the dashboard or in the mobile app reports,
- knocking or clicking sounds audible from the plastic enclosure on power-up,
- gradually slowing data access; large file copies stall or are interrupted,
- spontaneous unmounting of volumes; the dashboard reports “Volume not mounted,”
- red LED on the front panel (which can also indicate other problems—see [OS migration](#MyCloudOSMigration)),
- for multi-bay models, reports of a degraded array and warnings about one or more failed drives.

The recovery procedure depends on the device type and the specific failure:

#### Single-drive models (My Cloud, My Cloud Home)

We remove the drive from the unit and continue all further work outside the original device. If the drive itself is physically damaged (knocking, heads not reading, platters with defects), we proceed as we would with any 3.5" hard drive—work with read heads in a laminar flow box, modifications to the PCB and service area data. For drives with degraded surfaces or unstable reads, we use the ACELab PC-3000 platform, which provides drive-level adjustments well beyond what any software offers, optionally combined with our in-house software solution developed in the lab for specific scenarios. For My Cloud Home, an additional step follows: reconstructing the original tree structure from the index.db database—see the [dedicated My Cloud Home section](#MyCloudHome).

#### Mirror Gen 2 (RAID 1) and multi-bay models

For RAID arrays the situation is more complicated—the drives are often all at the same stage of wear (same age, same load, same environment), and after one drive fails the risk of another failing during the rebuild is real. The key question we ask when accepting the job: **what is the actual condition of all the drives in the array?** Imaging each drive individually, assessing its condition, and reconstructing the array virtually outside the original device—that’s the standard procedure that minimizes the risk of losing a second drive during recovery.

**Warning—do not try to replace a failed drive yourself and start a rebuild if the data in the array matters to you.** If several other drives in the array are in similar condition, the rebuild will finish them off. For RAID 5, a rebuild after a single drive replacement is statistically the most dangerous operation—it runs all remaining drives at full load for tens of hours.

 [Free consultation, diagnostics, pickup](https://www.exalab.cz/index.php?Itemid=200#contactnumbers)

 ## <a id="MyCloudOSMigration"></a>Failure after the OS 3 to OS 5 migration

In 2021, Western Digital ended support for the older My Cloud OS 2 and OS 3 versions due to critical security vulnerabilities (among them those that, in June 2021, manifested as the mass wipe of My Book Live units). Users of OS 5-compatible devices were prompted to migrate to OS 5 by April 15, 2022; for users of older models that don’t support OS 5, remote access ended definitively on January 15, 2022. The migration to OS 5 is one-way—reverting to OS 3 is not possible.

The migration itself is an operation that **does not require erasing data on the user volume**. OS 5 runs on a separate system partition; user data remains on a separate data partition (mounted as Volume\_1 on single-drive models, /dev/md0 on multi-bay). From a technical standpoint, only the system partition root file system is rewritten.

In practice, however, a non-trivial number of units experienced failures during the migration or shortly after. Affected users report consistent symptoms:

#### Symptoms after a failed migration

- solid (or blinking) **red LED** on the front panel,
- dashboard reports “**Drive not found**” or “Volume not mounted,”
- the unit enters a **boot loop**—it cycles between power-on and restart,
- error message “**Error Code 1121: Unsupported File System**,”
- the unit doesn’t appear on the local network and isn’t visible via Network Discovery or the WD Discovery utility.

#### What happened technically

On affected units, damage occurred during the system area flash—an incomplete bootloader write, a damaged system partition table, or an inconsistent root filesystem. The key point: **these failures generally affected only the system partition. The data partition with user files typically remains intact**, and the EXT4 file system itself is still fully mountable.

#### Recovery procedure

In our lab, the procedure usually involves removing the drives and reading them outside the device:

- for single-drive models—remove the drive, create a binary disk image via the SATA interface, mount the EXT4 partition corresponding to the data volume, copy the data;
- for multi-bay models—remove all drives, image each one individually, virtually reconstruct the mdadm RAID array, mount EXT4 over the reconstructed array, copy the data. In specific cases we assemble the array in a native Linux environment.
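The "mount the EXT4 partition corresponding to the data volume" step can be illustrated in code. The sketch below (a simplified illustration, not our production tooling) scans a raw disk image for candidate EXT4 partitions by looking for the superblock magic number `0xEF53`, which sits at byte 56 of the superblock, which in turn sits 1 KiB past the start of every EXT4 partition. Real work starts from the GPT/MBR partition table and uses magic-byte hits only as corroboration:

```python
# Sketch: locate candidate EXT4 partition starts in a raw disk image by
# scanning for the superblock magic number 0xEF53. Illustrative only --
# real recovery parses the partition table first.
import struct

EXT4_MAGIC = 0xEF53
SUPERBLOCK_OFFSET = 1024      # the superblock sits 1 KiB into the partition
MAGIC_OFFSET_IN_SB = 56       # offset of the s_magic field inside the superblock

def find_ext4_candidates(image_path, step=512):
    """Yield byte offsets at which an EXT4 partition may begin."""
    hits = []
    with open(image_path, "rb") as img:
        offset = 0
        while True:
            img.seek(offset + SUPERBLOCK_OFFSET + MAGIC_OFFSET_IN_SB)
            chunk = img.read(2)
            if len(chunk) < 2:        # ran past the end of the image
                break
            (magic,) = struct.unpack("<H", chunk)
            if magic == EXT4_MAGIC:
                hits.append(offset)
            offset += step
    return hits
```

On a single-drive My Cloud image, the largest hit is usually the data volume; smaller ones correspond to the system partitions.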

The condition of the drives themselves naturally plays a role: in units where the drives have 5–8+ years of operation, we often run into additional problems during imaging—degraded sectors, weakening read heads, SMART errors. This extends the process, but recovery is usually achievable through standard lab procedures.

**Warning:** If your unit is in a “migration failed / red LED” state, **do not repeatedly attempt the migration through the web interface or manually via a firmware file**. Each additional attempt can complicate an originally recoverable situation—especially if the firmware update reaches a stage where it begins to overwrite partitions outside the system area.

→ **General information about NAS storage and recovery procedures:** [NAS data recovery](https://www.exalab.cz/index.php?Itemid=826).

 ## <a id="MyCloudSingleBay"></a>Classic My Cloud (single-drive and Mirror Gen 2)—Linux EXT4

The first generation of My Cloud (released in 2013) and its successor My Cloud Mirror Gen 2 are among the architecturally simplest members of the line. Inside the plastic enclosure sits a standard 3.5" SATA drive (typically WD Red, or WD Blue in older models); the external interface is gigabit Ethernet, and the USB 3.0 port is used to connect an external drive to the unit, not for data connection to a computer.

The operating system is a customized Linux (Debian-based in older generations, a custom distribution in OS 5), and the user-volume file system is **EXT4**. The drive holds several partitions—a small system partition (with firmware and OS), a swap partition, and the main data partition (taking up the rest of the drive’s capacity, mounted as Volume\_1).

There is one key difference between the single-drive My Cloud and Mirror Gen 2: Mirror Gen 2 contains **two drives in RAID 1 (mirroring)** via Linux mdadm. During normal operation it’s almost invisible, but it changes the recovery procedure—if the two drives have become out of sync (typically after an unexpected power outage), we have to determine which one holds the more recent data based on the event counters in the mdadm metadata.
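The event-counter comparison just described can be sketched simply. `mdadm --examine` prints an "Events" line for each array member; the member with the higher counter saw the most recent writes. A minimal illustration (parsing the text output, with hypothetical device names):

```python
# Sketch: decide which half of a RAID 1 mirror is fresher by comparing
# the "Events" counters that `mdadm --examine` reports for each member.
import re

def parse_events(examine_output):
    """Extract the Events counter from `mdadm --examine` text output."""
    match = re.search(r"^\s*Events\s*:\s*(\d+)", examine_output, re.MULTILINE)
    if match is None:
        raise ValueError("no Events line found; superblock missing or damaged?")
    return int(match.group(1))

def fresher_member(outputs):
    """Given {device: examine_output}, return the most recently written device."""
    return max(outputs, key=lambda dev: parse_events(outputs[dev]))
```

In practice the comparison is done on images of the drives, never on the originals, and a large counter gap is itself diagnostic: it tells us roughly how long one half of the mirror had already been out of the array.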

From a recovery perspective, this category is one of the more straightforward:

- remove the drive from the plastic enclosure (on most models, four screws and the front panel),
- create a binary disk image via the SATA interface,
- identify the data partition—usually the largest EXT4 partition, sometimes labeled “cloud,”
- mount EXT4 under Linux, copy the data,
- for Mirror Gen 2, the same procedure applied to both drives.

Complications arise when the drive itself is physically damaged—we then proceed as we would with any 3.5" hard drive: work with read heads in a laminar flow box, modifications to the PCB and service area data. This is common for 5+-year-old units where the WD Red drives are at the end of their lifespan, with progressive disconnects during imaging caused by a growing number of bad sectors. The ACELab PC-3000 platform helps here by enabling work with the drive well beyond the limits of conventional software, optionally combined with our in-house software solution developed in the lab for situations where even standard equipment isn’t enough.

→ **General information about mechanical failures of 3.5" hard drives:** [HDD data recovery](https://www.exalab.cz/index.php?Itemid=711).

 1 photo: inside of single-bay My Cloud (3.5" WD Red + controller PCB)

 ## <a id="MyCloudHome"></a>My Cloud Home—why you only see anonymous files after removing the drive

My Cloud Home and My Cloud Home Duo (released in December 2017) are architecturally distinct from the rest of the My Cloud family. Western Digital introduced them as a “personal cloud for non-technical users”—setup and operation only via a mobile app and a WD user account, no traditional admin dashboard, no web interface for local management in the original sense.

Inside is a standard 3.5" SATA drive (often WD Red Plus or Blue) in capacities of 2, 3, 4, 6 and 8 TB for the single-drive version, and 4, 8, 12, 16 and 20 TB for the Home Duo. The drive’s file system is **EXT4**—the same as for classic My Cloud. The application platform was originally built on Android Runtime over a Linux kernel (the WD developer SDK targets Android API level 23, Marshmallow); in 2022, with firmware update 8.7.0, Western Digital fully migrated it to Debian Linux. The key difference from classic My Cloud, however, lies in the **layer above EXT4**.

#### The REST SDK storage layer

For My Cloud Home, WD used its own file storage implementation called **REST SDK**. The architecture was designed around the mobile app and cloud synchronization, not around classic network sharing. After a device failure this has one major consequence for the user: **user files are not stored on the EXT4 partition under their original names or in their original folders**.

Instead, the layout is as follows:

- `/restsdk/data/files/`—contains all user files named as hexadecimal content IDs without extensions (e.g., `0a3f9b2e1c8d4567`);
- `/restsdk/data/db/index.db`—a SQLite database with a `files` table containing columns `id, name, parentID, mimeType, contentID` and other metadata. This table maps each content ID back to the original file name, mime type, parent folder, and timestamp;
- `/restsdk/data/thumbnails/`—a cache of thumbnails generated for the mobile app.

The consequence is straightforward: if you remove a drive from a My Cloud Home and mount the EXT4 partition under Linux, you’ll see thousands of unnamed hexadecimal files with no folder structure. Without the SQLite index.db database and a tool that can process it correctly, it’s not possible to reconstruct the original file names and tree structure.
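To make the mapping concrete, here is a minimal sketch of the reconstruction, assuming only the `files` table layout described above (`id, name, parentID, mimeType, contentID`); real index.db databases carry more columns and the schema varies between firmware versions:

```python
# Sketch: rebuild the original folder tree from a My Cloud Home index.db.
# Assumes the simplified `files` schema described in the text.
import sqlite3

def build_path_map(db_path):
    """Return {contentID: original relative path} from an index.db copy."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, name, parentID, contentID FROM files").fetchall()
    conn.close()
    by_id = {r[0]: r for r in rows}

    def full_path(row):
        parts = []
        while row is not None:
            parts.append(row[1])              # the entry's name
            row = by_id.get(row[2])           # follow parentID toward the root
        return "/".join(reversed(parts))

    # Only rows with a contentID correspond to actual files on disk.
    return {r[3]: full_path(r) for r in rows if r[3]}
```

With such a map in hand, each anonymous hex file in `/restsdk/data/files/` can be copied out under the name and folder the user knows from the mobile app.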

#### Recovery procedure in our lab

For My Cloud Home and Home Duo we proceed as follows:

- remove the drive and create a binary image using ACELab PC-3000 tools or our in-house software solution developed in the lab (for Home Duo, both drives separately—Duo models use SPAN, RAID 0 stripe, or RAID 1 mirror depending on configuration, so a complex volume reconstruction is needed first);
- identify the EXT4 data partition (usually labeled “cloud”) and mount it;
- extract the index.db file from `/restsdk/data/db/`;
- process it usually with PC-3000 Data Extractor or UFS Explorer Professional. In specific cases we assemble the array in a native Linux environment;
- reconstruct the original tree structure of the data from index.db, then copy the data in the form the user knows from the mobile app.

Complications arise if the index.db database itself is damaged—for example, after an improper shutdown during a write, a failed firmware update, or on a drive with a degraded surface in the area where the database lies. In such cases we attempt to recover the database from the SQLite journal or from previous versions. In rare cases where the database cannot be reconstructed, what remains are files without their original names—we can at least sort the data by content (mime type signatures, EXIF metadata for photos), but the result lacks the structure the user is accustomed to.
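Sorting by content signature, as mentioned above, can be sketched in a few lines. The illustration below carries only a handful of well-known magic-byte signatures; real carving tools recognize hundreds:

```python
# Sketch: when index.db is unrecoverable, anonymous files can at least be
# grouped by content type using magic-byte signatures at the start of each file.
SIGNATURES = [
    (b"\xff\xd8\xff", "jpeg"),
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"%PDF-", "pdf"),
    (b"PK\x03\x04", "zip-based (docx/xlsx/...)"),
]

def classify(first_bytes):
    """Guess a file type from its leading bytes; 'unknown' if nothing matches."""
    for magic, label in SIGNATURES:
        if first_bytes.startswith(magic):
            return label
    return "unknown"
```

For photos, EXIF metadata inside files classified as `jpeg` additionally yields capture dates, which lets us sort the recovered images chronologically even without their names.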

#### Encryption

Some My Cloud Home units are encrypted via Linux LUKS—the user typically set a password during setup, but it isn’t stored anywhere visible. Without knowledge of the password, decrypting the LUKS partition is practically infeasible (the algorithms are designed to resist brute-force attacks). If you know the password and the device requires recovery, have it available when you submit the job.

**Warning—do not do the following with My Cloud Home:** Do not run a factory reset from the mobile app if you have unbacked-up data on the device. The reset overwrites the REST SDK structure and index.db—the data physically remains on the drive for some time as unmarked free space, but its reconstruction becomes substantially harder. Likewise, do not insert a replacement drive from another My Cloud Home into the unit—the units use proprietary keys tied to the specific hardware unit, and the operation will end by overwriting key structures.

 [Free consultation, diagnostics, pickup](https://www.exalab.cz/index.php?Itemid=200#contactnumbers)

→ **General information about NAS and network storage:** [NAS data recovery](https://www.exalab.cz/index.php?Itemid=826).

 1 photo: disassembled My Cloud Home (3.5" drive + controller PCB)

 ## <a id="MyCloudMultiBay"></a>Multi-bay My Cloud—EX, PR and DL series

The multi-bay My Cloud series (EX, PR and DL) target more demanding home users and small businesses. Architecturally, these units are closer to Synology and QNAP competitors—Linux operating system, mdadm software RAID, EXT4 file system over the array.

Categories by bay count and class:

- **2-bay**—EX2 Ultra, EX2100, PR2100 (the PR models are the “Pro” variants), DL2100 (the DL models are the “business” variants)—supporting RAID 0, 1, JBOD and spanning,
- **4-bay**—EX4100, PR4100, DL4100—additionally supporting RAID 5 and 10; in practice we most often see RAID 5 (four drives, parity redundancy) on 4-bay units and RAID 1 (two drives, mirror) on 2-bay units.

SoCs differ between series: the EX line uses ARM Marvell Armada (370, 385 and 388), the PR line Intel Pentium. This affects performance but not the recovery procedure—mdadm metadata is platform-agnostic.

#### Typical multi-bay recovery scenarios

- **Single drive failure in RAID 5 and a failed rebuild.** A classic scenario for 5+-year-old units. The user replaces a failed drive with a new one, the unit starts a rebuild. But the array is degraded and the remaining 3 drives are under full load. If one of them is also close to failing, the rebuild finishes it off and the array ends up doubly degraded—beyond RAID 5’s ability to reconstruct the data.
- **Simultaneous failure of multiple drives (power surge, water, mechanical impact).** Storm, flooding, falling cabinet. The drives are largely in the same situation—same age, same time in service, same wear. If one fails, the others are in similar condition.
- **Corrupted mdadm metadata.** After an incorrect operation in the dashboard (RAID level change, factory reset, array expansion). The mdadm superblock is overwritten and the array “disappears.” But the data typically remains on the drives—reconstruction is a matter of knowing the original configuration.
- **Failed rebuild after a power outage.** If power fails during a rebuild, the mdadm event counters between drives diverge. A blind rebuild attempt then risks overwriting healthy data with the stale version from the “old” drive.

#### Lab recovery procedure

Multi-bay recovery is a sequence of steps that broadly resemble any Linux mdadm RAID work—the WD specifics are in the details:

- remove all drives (with the slot order noted),
- create a binary image of each drive individually, using ACELab PC-3000 or our in-house software solution developed in the lab if a drive shows physical problems,
- analyze the mdadm superblocks on the binary images, identify event counters and array state,
- reconstruct the array virtually in lab software. For more complex configurations, or when virtual reconstruction fails, we assemble the array directly on a Linux server;
- mount EXT4 (or Btrfs in some models with alternative firmware) over the reconstructed array,
- copy the data to target storage.
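The heart of the virtual-reconstruction step, for a RAID 5 array missing one member, is XOR parity: any single lost member can be recomputed byte-by-byte from all the others. A deliberately minimal sketch (real work must also detect stripe size, parity rotation and data offset from the mdadm superblocks, all omitted here):

```python
# Sketch: recover one missing RAID 5 member from the surviving members' images
# by XOR, byte by byte. Stripe geometry handling is deliberately omitted.
def rebuild_missing_member(members):
    """XOR equal-sized surviving member images together to recreate the lost one."""
    length = len(members[0])
    assert all(len(m) == length for m in members), "images must be equal size"
    missing = bytearray(length)
    for image in members:
        for i, byte in enumerate(image):
            missing[i] ^= byte
    return bytes(missing)
```

This is also why RAID 5 tolerates exactly one lost drive: with two members gone, the XOR equation has two unknowns and the data is no longer recoverable from parity alone.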

Complications arise when the drives in the array were in a state that the WD firmware should never have allowed into a RAID—typically **WD Red drives with SMR in a 4-bay configuration**. The basic WD Red line uses SMR (Shingled Magnetic Recording) in some models manufactured in recent years, and in multi-bay arrays SMR behaves problematically: during a rebuild the drive cannot sustain consistent response times, trips the controller’s timeouts, drops out of the array, and is generally unsuitable for RAID 5/6 use. If the array contains a mix of SMR and CMR drives, the situation tends to be even more complicated. Recovery is achievable, but requires a more cautious approach and more time.

→ **Detailed techniques and approaches to RAID arrays:** [RAID data recovery](https://www.exalab.cz/index.php?Itemid=825) and [NAS storage in general](https://www.exalab.cz/index.php?Itemid=826).

 1 photo: 4-bay PR/EX with 4 removed drives alongside

 ## <a id="MyCloudFailures"></a>Most common failures we see in the lab

WD My Cloud failures, ordered by how often we see them in our lab:

#### Red LED and “Drive not found” after the OS 3 to OS 5 migration

The dominant failure mode of the years 2021–2024. Symptoms: solid or blinking red LED on the front panel, dashboard unavailable or reporting a missing volume, the unit in a boot loop. Data on the internal drives is typically intact—the problem is in the damaged system partition after a failed flash operation. The recovery procedure is essentially standard (remove drives, image, mount the EXT4 data partition outside the device), but it requires deciding whether to risk another firmware repair attempt or to stop and hand the data back to the client.

#### Knocking or progressive drive degradation in 5+-year-old units

The WD Red drives originally fitted in older My Cloud units now have 5–8 years of 24/7 operation behind them. Symptoms: SMART errors in the dashboard, spontaneous unmounting of volumes, knocking sounds, repeated spin-ups. For single-drive models, recovery is handled as for any 3.5" hard drive (potentially including head transplantation). For multi-bay models the situation is more complex—the drives are often all at the same stage of wear, and a rebuild after replacing one of them risks finishing off another.

#### Failed RAID 5 rebuild after drive replacement

The classic scenario for multi-bay models. The user replaces the failed drive, the dashboard starts a rebuild, and during the rebuild (often after tens of hours of operation) another drive fails and the array enters a double-degraded state beyond RAID 5’s ability to reconstruct the data. Recovery procedure: image all drives (including the original “failed” one, which is often in better condition than the post-rebuild state suggests), virtually reconstruct the array, recover the data. Failed rebuilds are more common in arrays with WD Red SMR drives.

#### Anonymous hexadecimal file names after removing a drive from My Cloud Home

A scenario specific to My Cloud Home and Home Duo. The user opened the device after a failure, removed the drive, mounted it in Linux or via a USB-SATA adapter on Windows, and discovered only thousands of files with 16-character hex names, no extensions and no folders. This is the expected state—the user-facing structure is in a SQLite database that needs to be processed with a professional lab tool. See the [My Cloud Home section](#MyCloudHome).

#### Inaccessible data after a factory reset

Synology offers an “Erase All Data” option in its menu, QNAP “Restore Factory Defaults & Format All Volumes”—WD My Cloud has analogous options. A panicked user, after a unit failure, often reaches for a reset hoping it will “fix something.” Depending on the type of reset, the impact ranges from minimal (password and network settings reset—data is safe) to destructive (data volume formatting). In RAID 5 arrays this additionally overwrites the mdadm superblocks; the data is still physically on the drives, but its reconstruction requires a deeper forensic procedure.

#### Drive damage from a power surge during a storm

Multi-bay units sharing a single power supply are vulnerable to a power surge as a whole—if a lightning strike or surge pulse reaches the unit, it typically affects all drives equally. The difference is in the severity. Sometimes the bridge and SATA controllers on the unit’s PCB survive but the drive electronics is destroyed; sometimes the other way around. The recovery procedure depends on the extent of the damage—it often combines drive PCB swaps (ROM transplantation) and array reconstruction.

#### Water and fire

Less common, but always individual. Brief contact with water typically affects the unit’s PCB first—oxidation of contacts, corrosion, short circuits; the drives themselves usually remain unharmed. With longer exposure, immersion, or flooding, water reaches even the sealed drive bodies and can damage platters and heads. Helium-filled drives (typically 12 TB and above) are more resistant in this respect thanks to their hermetically sealed bodies. In any case: do not power on the device after contact with water, do not dry it with any “home” methods, and bring it to the lab as soon as possible.

#### Mechanical damage (drop, impact)

For single-drive models, drops are less common than with portable external drives (the unit sits in one place and isn’t handled daily), but they happen: typically during moves, cleaning, cable handling, or when something falls onto the device. For multi-bay models the risk is higher, especially during moves: a 4-bay PR4100 or EX4100 with full drives weighs over 5 kg, and a fall from desk height usually damages the mechanical components of multiple drives at once. Symptoms: the unit doesn’t power up after the incident, audible clicking or scraping sounds, dashboard reports of multiple drive failures, or complete array unavailability. The recovery procedure is standard lab work: remove the drives, identify the extent of mechanical damage, work in the laminar flow box, and if necessary transplant the read heads. The key is not to power the device back on after a fall; repeated spin-ups of damaged drives only make things worse.

→ **Main WD pillar with overview of all series:** [Western Digital (WD) data recovery](https://www.exalab.cz/index.php?option=com_content&view=article&id=160&Catid=17).

*Photo: head-stack damage on a 3.5" drive from a My Cloud unit*

 ## <a id="MyCloudFAQ"></a>Frequently asked questions

#### <a id="FAQmcRedLED"></a>My WD My Cloud is blinking red and unresponsive—is the data lost?

Not necessarily. A red LED on My Cloud signals a system-level error—most often a damaged system area on the drive after a failed OS 3 to OS 5 migration, or a failing internal drive. The data on the user volume usually remains intact, because it sits on a separate data partition.

The recovery procedure usually involves removing the drive, imaging it, and mounting the EXT4 data partition outside the device. We always verify the specific situation through free diagnostics—we’ll tell you what happened, what the recovery path would look like, and what it will cost.
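As an illustration of the “mount outside the device” step, an image of the removed drive can be probed for the ext4 data partition by its superblock magic. A minimal sketch with an illustrative helper name; in practice the partition offset comes from the imaged drive’s partition table:

```python
import struct

EXT_MAGIC = 0xEF53                 # ext2/3/4 superblock magic (s_magic)
SB_OFFSET = 1024                   # superblock starts 1 KiB into the partition
MAGIC_OFFSET = SB_OFFSET + 0x38    # s_magic field, little-endian 16-bit

def is_ext_filesystem(image_path: str, partition_offset: int = 0) -> bool:
    """Check for an ext2/3/4 superblock at a byte offset in a drive image."""
    with open(image_path, "rb") as f:
        f.seek(partition_offset + MAGIC_OFFSET)
        raw = f.read(2)
    return len(raw) == 2 and struct.unpack("<H", raw)[0] == EXT_MAGIC
```

A partition that passes this check can then be mounted read-only from the image, leaving the original drive untouched.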

#### <a id="FAQmcOSMigration"></a>Can I retry the OS 5 migration myself, or will it make things worse?

If your unit is in a “migration failed / red LED” state, we don’t recommend repeated migration attempts or manual firmware flashing. While in some cases a new migration attempt does succeed, if the original cause of the failure was the condition of the drive itself (degraded sectors, weakening heads), each additional write to the system area worsens this condition. In the worst case, the process can overwrite the partition table in such a way that finding the boundaries of the data volume requires a forensic procedure.

If you have data in the unit that matters to you, we recommend leaving the device powered off and contacting us. Diagnostics is free, and the decision whether to retry the migration or recover data outside the device is based on the specific drive condition.

#### <a id="FAQmcRAIDFail"></a>One drive in my PR4100 failed and a rebuild started—should I power the unit off?

If the rebuild is currently running and proceeding without errors (no new red LEDs, dashboard reports progress), let it finish—interrupting a rebuild risks corrupting the array. But if you see another drive reporting a problem during the rebuild (red LED on another bay, warning in the dashboard), **power the unit off and disconnect it immediately**. Continuing the rebuild in this situation typically leads to a double-degraded state from which RAID 5 cannot recover the data on its own.

After powering off, contact us. The standard procedure is to remove all drives (noting the slot order), image them, virtually reconstruct the array in lab software, and recover the data outside the original array. **Important:** please send the original “failed” drive too—it’s often in better condition than the post-rebuild state suggests, and contains data critical for the virtual reconstruction.
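The virtual reconstruction relies on the fact that RAID 5 parity is a plain XOR across each stripe, so any single missing member can be recomputed from the survivors. A simplified sketch that ignores chunk size and parity-rotation layout (details the lab tools handle):

```python
from functools import reduce

def reconstruct_missing(members):
    """Rebuild the one missing chunk of a RAID 5 stripe by XOR-ing the rest.

    `members` holds the same-size chunks read from each drive in the stripe,
    with None in place of the failed drive. Because parity is XOR, the
    missing chunk equals the XOR of all surviving chunks (data + parity).
    """
    missing = [i for i, m in enumerate(members) if m is None]
    if len(missing) != 1:
        raise ValueError("RAID 5 tolerates exactly one missing member")
    survivors = [m for m in members if m is not None]
    # XOR the surviving chunks byte by byte
    return bytes(reduce(lambda a, b: a ^ b, column)
                 for column in zip(*survivors))
```

The same arithmetic shows why a double-degraded array cannot rebuild itself: with two members missing, the single XOR equation has two unknowns.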

#### <a id="FAQmcHomeHex"></a>I removed the drive from my My Cloud Home, mounted it under Linux and only see anonymous files—what now?

That’s the expected state. My Cloud Home stores user files on the EXT4 partition under hexadecimal content IDs without their original names or folders. The mapping back to original file names is in a separate SQLite database (index.db in `/restsdk/data/db/`). Without proper processing of this database in the lab, you can’t access the data in its original form.
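Conceptually, the lab step is a join between the content IDs seen on disk and the name records in index.db. A heavily simplified sketch, assuming a hypothetical table `files(contentID, name)`; the real restsdk schema is considerably more complex:

```python
import sqlite3

def map_content_ids(db_path: str) -> dict[str, str]:
    """Rebuild a 'content ID -> original file name' mapping from index.db.

    Assumes a simplified, hypothetical table files(contentID, name) for
    illustration; the production restsdk database has more tables and a
    parent/child tree that also restores the folder structure.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT contentID, name FROM files WHERE contentID IS NOT NULL"
        )
        return {content_id: name for content_id, name in rows}
    finally:
        con.close()
```

With such a mapping, each anonymous hex-named file on the EXT4 partition can be copied out under its original name.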

Standard recovery software (Disk Drill, Recuva, EaseUS) can’t process this layer—it scans EXT4 and returns the same anonymous result. If you have a drive from a My Cloud Home, we recommend not performing any further operations on it and contacting us. **Diagnostics is free**, and in our lab we have the experience and equipment for My Cloud Home that standard recovery software doesn’t offer.

#### <a id="FAQmcEncrypted"></a>My My Cloud Home was encrypted—do I have a chance to recover the data without the password?

My Cloud Home uses encryption via Linux LUKS. If you know the password, decrypting the partition after removing it from the original unit is a standard procedure, and recovery proceeds the usual way (reconstruction of the REST SDK tree from the index.db database, copying the data). If you don’t know the password, the situation is substantially more complicated—LUKS uses AES-256, and the algorithm is designed to resist brute-force attacks. Without the key, decryption is not feasible in any reasonable time.
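A back-of-envelope calculation shows why brute force is hopeless. Assuming an illustrative rate of a million passphrase guesses per second (real LUKS key derivation is deliberately far slower per guess), exhausting even 10-character alphanumeric passphrases takes tens of thousands of years:

```python
def brute_force_years(charset_size: int, length: int,
                      guesses_per_second: float) -> float:
    """Worst-case years to exhaust a full keyspace at a given guess rate."""
    keyspace = charset_size ** length          # e.g. 62**10 for [a-zA-Z0-9]
    seconds = keyspace / guesses_per_second
    return seconds / (3600 * 24 * 365)

# 10-character upper/lower/digit passphrase at an assumed 1e6 guesses/s
years = brute_force_years(62, 10, 1_000_000)
```

This is why a remembered or recorded password is the realistic path, not cryptanalysis.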

If you have the password stored somewhere (password manager, paper note from setup, screenshot on your phone), try to find it before contacting us. If the password truly doesn’t exist, we can discuss the situation individually, but the outcome can’t be guaranteed in advance in such a case.

[Contact us](https://www.exalab.cz/index.php?option=com_content&view=article&id=11:kontaktujte-nas-zajistime-bezplatny-svoz-a-diagnostiku&catid=2:zachrana-dat#contactnumbers) [Pricing](https://www.exalab.cz/index.php?Itemid=198)

## Schema

```json
{ "@context": "https://schema.org", "@type": "BreadcrumbList", "itemListElement": [ { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.exalab.cz/en" }, { "@type": "ListItem", "position": 2, "name": "Media", "item": "https://www.exalab.cz/en/data-media" }, { "@type": "ListItem", "position": 3, "name": "WD My Cloud Network Storage Data Recovery", "item": "https://www.exalab.cz/en/data-media/wd-my-cloud-data-recovery" } ] }
```

```json
{ "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [ { "@type": "Question", "name": "My WD My Cloud is blinking red and unresponsive—is the data lost?", "acceptedAnswer": { "@type": "Answer", "text": "Not necessarily. A red LED on My Cloud signals a system-level error—most often a damaged system area on the drive after a failed OS 3 to OS 5 migration, or a failing internal drive. The data on the user volume usually remains intact, because it sits on a separate data partition." } }, { "@type": "Question", "name": "Can I retry the OS 5 migration myself, or will it make things worse?", "acceptedAnswer": { "@type": "Answer", "text": "If your unit is in a “migration failed / red LED” state, we don’t recommend repeated migration attempts or manual firmware flashing. While in some cases a new migration attempt does succeed, if the original cause of the failure was the condition of the drive itself (degraded sectors, weakening heads), each additional write to the system area worsens this condition. In a worse case, the process can overwrite the partition table in such a way that finding the boundaries of the data volume requires a forensic procedure." } }, { "@type": "Question", "name": "One drive in my PR4100 failed and a rebuild started—should I power the unit off?", "acceptedAnswer": { "@type": "Answer", "text": "If the rebuild is currently running and proceeding without errors (no new red LEDs, dashboard reports progress), let it finish—interrupting a rebuild risks corrupting the array. But if you see another drive reporting a problem during the rebuild (red LED on another bay, warning in the dashboard), power the unit off and disconnect it immediately. Continuing the rebuild in this situation typically leads to a double-degraded state from which RAID 5 cannot recover the data on its own." 
} }, { "@type": "Question", "name": "I removed the drive from my My Cloud Home, mounted it under Linux and only see anonymous files—what now?", "acceptedAnswer": { "@type": "Answer", "text": "That’s the expected state. My Cloud Home stores user files on the EXT4 partition under hexadecimal content IDs without their original names or folders. The mapping back to original file names is in a separate SQLite database (index.db in /restsdk/data/db/). Without proper processing of this database in the lab, you can’t access the data in its original form." } }, { "@type": "Question", "name": "My My Cloud Home was encrypted—do I have a chance to recover the data without the password?", "acceptedAnswer": { "@type": "Answer", "text": "My Cloud Home uses encryption via Linux LUKS. If you know the password, decrypting the partition after removing it from the original unit is a standard procedure, and recovery proceeds the usual way (reconstruction of the REST SDK tree from the index.db database, copying the data). If you don’t know the password, the situation is substantially more complicated—LUKS uses AES-256, and the algorithm is designed to resist brute-force attacks. Without the key, decryption is not feasible in any reasonable time." } } ] }
```

```json
{ "@context": "https://schema.org", "@type": "Article", "mainEntityOfPage": { "@type": "WebPage", "@id": "https://www.exalab.cz/en/data-media/wd-my-cloud-data-recovery" }, "headline": "WD My Cloud Network Storage Data Recovery", "description": "Data recovery from WD My Cloud, My Cloud Home and multi-bay EX/PR/DL network storage. OS 5 migration recovery, RAID rebuild. Free diagnostics and pickup, from CZK 1,500.", "image": { "@type": "ImageObject", "url": "https://www.exalab.cz/images/products_services/hdd-ssd-flash-sd-white-bakc-w1200-fw.jpg" }, "publisher": { "@type": "Organization", "name": "EXALAB Data Recovery", "logo": { "@type": "ImageObject", "url": "https://www.exalab.cz/images/logo/logo_600x600.png" } }, "author": { "@type": "Person", "name": "Frantisek Fridrich", "url": "https://www.exalab.cz/en/data-media/wd-my-cloud-data-recovery" }, "datePublished": "2023-02-28T12:30:07+00:00", "dateCreated": "2023-02-28T12:30:34+00:00", "dateModified": "2026-05-09T20:01:54+00:00" }
```
