Data recovery from WD My Cloud network storage devices. Drive degradation and failure within an array, an unsuccessful RAID rebuild, data loss from multi-bay devices, inaccessible data on My Cloud Home, failure after an OS migration... We have many years of experience with data recovery from all types of storage media.
WD My Cloud is a line of network-attached storage (NAS) devices that Western Digital has been selling since 2013. The “My Cloud” label, however, covers several types of devices with different internal designs—from single-drive consumer models, through the My Cloud Home line built around a mobile app, to the two- and four-bay units for more demanding use (EX, PR, DL series). Data recovery procedures differ between these types; what they have in common is that we encounter all of them regularly in our lab. We work with all generations, including units damaged by the OS 3 to OS 5 migration. Diagnostics is free, and data recovery starts at CZK 1,500.
If your WD My Cloud is unresponsive, shows a red LED, reports “Drive not found,” failed to start after a firmware update, or your data has disappeared—follow these principles. They determine whether successful data recovery is possible:
Warning, especially for My Cloud Home and Home Duo: recovering data from this line requires a special procedure. Removing the drive and trying to read it on a regular computer will return thousands of files with anonymous hexadecimal names and no structure—the original file names and folders are stored in a separate database that needs to be processed with a professional tool.
Free consultation, diagnostics, pickup
Data recovery procedures differ significantly between My Cloud series—different operating systems, different file systems, different ways of storing data. Click on your model to jump to the section with technical details and the recovery approach.
If you’re not sure which category your unit belongs to, the type of interface and the way it’s used are the deciding factors: My Cloud Home is set up via a mobile app and requires a WD account; classic My Cloud is set up via the device’s web admin interface (dashboard) on the local network; multi-bay models are recognizable at a glance—they have two or four drive bays.
The WD My Cloud line covers personal and small-business network storage that Western Digital has been selling since 2013. From a data recovery perspective, however, it cannot be treated as a homogeneous family—inside the plastic enclosures sit three architecturally distinct categories:
Internal drives are typically WD Red, Red Plus, or WD Blue in the oldest models; some newer models ship with SMR (Shingled Magnetic Recording) drives that complicate RAID rebuilds. Multi-bay models use various ARM SoCs (Marvell Armada 370, 385 and 388 in the EX series, Intel Pentium in the PR series).
Typical situations we see with My Cloud devices in the lab:
If you’re dealing with a WD My Book Live or My Book Live Duo—this is an older line of network storage, no longer supported (and affected by the mass remote wipe in 2021). Architecturally they are closer to the classic single-drive My Cloud than to the external USB My Book; this page covers them, and the recovery approach is analogous to the classic single-drive My Cloud section below.
→ Main WD pillar: Western Digital (WD) data recovery—an overview of all series, internal HDDs, external My Book and My Passport, networked My Cloud, WD SSDs.
3 photos: WD My Cloud types (single-bay classic + My Cloud Home + multi-bay PR/EX)
The most common situation in which a My Cloud arrives at the lab isn’t some exotic scenario—it’s gradual failure of the drives themselves. Most units in service today are 5–10 years old, often running 24/7, and mechanical wear at this stage of their lifespan is to be expected.
Symptoms vary slightly between models and device types, but the principles are shared:
The recovery procedure depends on the device type and the specific failure:
We remove the drive from the unit and continue all further work outside the original device. If the drive itself is physically damaged (knocking, heads not reading, platters with defects), we proceed as we would with any 3.5" hard drive—work with read heads in a laminar flow box, modifications to the PCB and service area data. For drives with degraded surfaces or unstable reads, we use the ACELab PC-3000 platform, which provides drive-level adjustments well beyond what any software offers, optionally combined with our in-house software solution developed in the lab for specific scenarios. For My Cloud Home, an additional step follows: reconstructing the original tree structure from the index.db database—see the dedicated My Cloud Home section.
For RAID arrays the situation is more complicated—the drives are often all at the same stage of wear (same age, same load, same environment), and after one drive fails the risk of another failing during the rebuild is real. The key question we ask when accepting the job: what is the actual condition of all the drives in the array? Imaging each drive individually, assessing its condition, and reconstructing the array virtually outside the original device—that’s the standard procedure that minimizes the risk of losing a second drive during recovery.
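The principle behind the virtual reconstruction can be illustrated on a RAID 5 stripe: the parity block is the bitwise XOR of the data blocks, so any single missing member can be recomputed from the survivors. The sketch below is a simplified illustration of that arithmetic, not our lab tooling; block contents are invented.

```python
# Illustrative sketch: recovering the missing member of a RAID 5 stripe.
# In RAID 5 the parity block is the XOR of all data blocks in the stripe,
# so any one missing block can be recomputed from the remaining ones.

def xor_blocks(blocks):
    """XOR a list of equal-sized byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

def recover_missing(surviving_blocks):
    """Given all surviving blocks of a stripe (data + parity),
    the missing block is simply their XOR."""
    return xor_blocks(surviving_blocks)

# Example: three data blocks and their parity (toy values)
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# If d1 is lost, it can be rebuilt from the rest:
assert recover_missing([d0, d2, parity]) == d1
```

This is also why the condition of every member matters: a single unreadable sector on two different drives in the same stripe leaves that stripe unrecoverable, which is exactly what imaging each drive first is meant to detect.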
Warning—do not try to replace a failed drive yourself and start a rebuild if the data in the array matters to you. If several other drives in the array are in similar condition, the rebuild will finish them off. For RAID 5, a rebuild after a single drive replacement is statistically the most dangerous operation—it runs all remaining drives at full load for tens of hours.
Free consultation, diagnostics, pickup
In 2021, Western Digital ended support for the older My Cloud OS 2 and OS 3 versions due to critical security vulnerabilities (among them those that, in June 2021, manifested as the mass wipe of My Book Live units). Users of OS 5-compatible devices were prompted to migrate to OS 5 by April 15, 2022; for users of older models that don’t support OS 5, remote access ended definitively on January 15, 2022. The migration to OS 5 is one-way—reverting to OS 3 is not possible.
The migration itself is an operation that does not require erasing data on the user volume. OS 5 runs on a separate system partition; user data remains on a separate data partition (mounted as Volume_1 on single-drive models, /dev/md0 on multi-bay). From a technical standpoint, only the system partition root file system is rewritten.
In practice, however, a non-trivial number of units experienced failures during the migration or shortly after. Affected users report consistent symptoms:
On affected units, damage occurred during the system area flash—an incomplete bootloader write, a damaged system partition table, or an inconsistent root filesystem. The key point: these failures generally affected only the system partition. The data partition with user files typically remains intact, and the EXT4 file system itself is still fully mountable.
In our lab, the procedure usually involves removing the drives and reading them outside the device:
The condition of the drives themselves naturally plays a role: in units where the drives have 5–8+ years of operation, we often run into additional problems during imaging—degraded sectors, weakening read heads, SMART errors. This extends the process, but recovery is usually achievable through standard lab procedures.
Warning: If your unit is in a “migration failed / red LED” state, do not repeatedly attempt the migration through the web interface or manually via a firmware file. Each additional attempt can complicate an originally recoverable situation—especially if the firmware update reaches a stage where it begins to overwrite partitions outside the system area.
→ General information about NAS storage and recovery procedures: NAS data recovery.
The first generation of My Cloud (released in 2013) and the closely related My Cloud Mirror Gen 2 are among the architecturally simplest members of the line. Inside the plastic enclosure sits a standard 3.5" SATA drive (typically WD Red, or WD Blue in older models); the external interface is gigabit Ethernet, and the USB 3.0 port is used to connect an external drive to the unit, not for data connection to a computer.
The operating system is a customized Linux (Debian-based in older generations, a custom distribution in OS 5), and the user-volume file system is EXT4. The drive holds several partitions—a small system partition (with firmware and OS), a swap partition, and the main data partition (taking up the rest of the drive’s capacity, mounted as Volume_1).
There is one key difference between the single-drive My Cloud and Mirror Gen 2: Mirror Gen 2 contains two drives in RAID 1 (mirroring) via Linux mdadm. During normal operation it’s almost invisible, but it changes the recovery procedure—if the two drives have become out of sync (typically after an unexpected power outage), we have to determine which one holds the more recent data based on the event counters in the mdadm metadata.
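The comparison itself is simple once the superblocks are read. A minimal sketch, assuming we already have the text output of `mdadm --examine` for each mirror half (the excerpts below are invented), is just extracting the `Events` counter and taking the maximum:

```python
# Illustrative sketch: picking the fresher RAID 1 member by comparing the
# mdadm superblock event counters. The "Events" field in `mdadm --examine`
# output increments with array activity; after the halves of a mirror
# diverge, the member with the higher counter holds the more recent state.

import re

def events_from_examine(examine_output: str) -> int:
    """Extract the Events counter from `mdadm --examine` text output."""
    match = re.search(r"Events\s*:\s*(\d+)", examine_output)
    if match is None:
        raise ValueError("no Events field found")
    return int(match.group(1))

def fresher_member(examines: dict) -> str:
    """Return the device whose superblock has the highest event count."""
    return max(examines, key=lambda dev: events_from_examine(examines[dev]))

# Hypothetical excerpts for the two mirror halves:
sda2 = "Array UUID : ...\n    Events : 10421\n"
sdb2 = "Array UUID : ...\n    Events : 10398\n"

print(fresher_member({"/dev/sda2": sda2, "/dev/sdb2": sdb2}))  # /dev/sda2
```

In practice we run this comparison on images of the drives, never on the originals, and a large gap between the counters is itself diagnostic: it tells us roughly how long one half of the mirror had been stale.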
From a recovery perspective, this category is one of the more straightforward:
Complications arise when the drive itself is physically damaged—we then proceed as we would with any 3.5" hard drive: work with read heads in a laminar flow box, modifications to the PCB and service area data. This is common for 5+-year-old units where the WD Red drives are at the end of their lifespan, with progressive disconnects during imaging caused by a growing number of bad sectors. The ACELab PC-3000 platform helps here by enabling work with the drive well beyond the limits of conventional software, optionally combined with our in-house software solution developed in the lab for situations where even standard equipment isn’t enough.
→ General information about mechanical failures of 3.5" hard drives: HDD data recovery.
1 photo: inside of single-bay My Cloud (3.5" WD Red + controller PCB)
My Cloud Home and My Cloud Home Duo (released in December 2017) are architecturally distinct from the rest of the My Cloud family. Western Digital introduced them as a “personal cloud for non-technical users”—setup and operation only via a mobile app and a WD user account, no traditional admin dashboard, no web interface for local management in the original sense.
Inside is a standard 3.5" SATA drive (often WD Red Plus or Blue) in capacities of 2, 3, 4, 6 and 8 TB for the single-drive version, and 4, 8, 12, 16 and 20 TB for the Home Duo. The drive’s file system is EXT4—the same as for classic My Cloud. The application platform was originally built on Android Runtime over a Linux kernel (the WD developer SDK targets Android API level 23, Marshmallow); in 2022, with firmware update 8.7.0, Western Digital fully migrated it to Debian Linux. The key difference from classic My Cloud, however, lies in the layer above EXT4.
For My Cloud Home, WD used its own file storage implementation called REST SDK. The architecture was designed around the mobile app and cloud synchronization, not around classic network sharing. After a device failure this has one major consequence for the user: user files are not stored on the EXT4 partition under their original names or in their original folders.
Instead, the layout is as follows:
/restsdk/data/files/ contains all user files named as hexadecimal content IDs without extensions (e.g., 0a3f9b2e1c8d4567).
/restsdk/data/db/index.db is a SQLite database with a files table containing columns id, name, parentID, mimeType, contentID and other metadata. This table maps each content ID back to the original file name, MIME type, parent folder, and timestamp.
/restsdk/data/thumbnails/ is a cache of thumbnails generated for the mobile app.
The consequence is straightforward: if you remove a drive from a My Cloud Home and mount the EXT4 partition under Linux, you'll see thousands of unnamed hexadecimal files with no folder structure. Without the SQLite index.db database and a tool that can process it correctly, it's not possible to reconstruct the original file names and tree structure.
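The core of the reconstruction is walking the parentID links in the files table until the root is reached, which yields the original path for each content ID. A minimal sketch with Python's built-in sqlite3 module, using an invented toy database in place of the real index.db:

```python
# Minimal sketch of rebuilding the original tree from a files table with
# the columns described above (id, name, parentID, contentID).
# The sample rows are invented for illustration.

import sqlite3

def build_paths(db):
    """Map each contentID back to its original path via parentID links."""
    rows = db.execute("SELECT id, name, parentID, contentID FROM files").fetchall()
    by_id = {r[0]: r for r in rows}
    paths = {}
    for file_id, name, parent, content_id in rows:
        parts = [name]
        while parent in by_id:
            parts.append(by_id[parent][1])   # parent folder name
            parent = by_id[parent][2]        # climb one level up
        if content_id:                        # folders carry no content blob
            paths[content_id] = "/".join(reversed(parts))
    return paths

# Toy database standing in for /restsdk/data/db/index.db:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (id TEXT, name TEXT, parentID TEXT, contentID TEXT)")
db.executemany("INSERT INTO files VALUES (?,?,?,?)", [
    ("1", "Photos", None, None),
    ("2", "2019", "1", None),
    ("3", "IMG_0042.jpg", "2", "0a3f9b2e1c8d4567"),
])

print(build_paths(db))  # {'0a3f9b2e1c8d4567': 'Photos/2019/IMG_0042.jpg'}
```

The real database is larger and messier than this sketch suggests (orphaned rows, multiple revisions of the same file, entries for deleted items), which is why professional tooling and manual verification are still needed on top of the basic mapping.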
For My Cloud Home and Home Duo we proceed as follows:
Complications arise if the index.db database itself is damaged—for example, after an improper shutdown during a write, a failed firmware update, or on a drive with a degraded surface in the area where the database lies. In such cases we attempt to recover the database from the SQLite journal or from previous versions. In rare cases where the database cannot be reconstructed, what remains are files without their original names—we can at least sort the data by content (mime type signatures, EXIF metadata for photos), but the result lacks the structure the user is accustomed to.
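The last-resort sorting by content relies on well-known magic bytes at the start of each blob. A simple sketch of the idea (a handful of common signatures; real tooling checks many more formats and uses EXIF data to group photos):

```python
# Sketch of last-resort sorting by content signature when index.db is lost:
# identify likely file types from well-known magic bytes.

SIGNATURES = [
    (b"\xff\xd8\xff", "jpg"),          # JPEG
    (b"\x89PNG\r\n\x1a\n", "png"),     # PNG
    (b"%PDF-", "pdf"),                 # PDF
    (b"PK\x03\x04", "zip"),            # ZIP (also docx/xlsx containers)
]

def guess_extension(first_bytes: bytes) -> str:
    """Return a likely extension based on the leading magic bytes."""
    for magic, ext in SIGNATURES:
        if first_bytes.startswith(magic):
            return ext
    return "bin"   # unknown content

print(guess_extension(b"\xff\xd8\xff\xe0" + b"\x00" * 12))  # jpg
print(guess_extension(b"%PDF-1.7\n"))                        # pdf
```

This recovers file types, not names or folders—which is exactly why the outcome without index.db remains a sorted pile rather than the original tree.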
Some My Cloud Home units are encrypted via Linux LUKS—the user typically set a password during setup, but it isn’t stored anywhere visible. Without knowledge of the password, decrypting the LUKS partition is practically infeasible (the algorithms are designed to resist brute-force attacks). If you know the password and the device requires recovery, have it available when you submit the job.
Warning—do not do the following with My Cloud Home: Do not run a factory reset from the mobile app if you have unbacked-up data on the device. The reset overwrites the REST SDK structure and index.db—the data physically remains on the drive for some time as unallocated space, but its reconstruction becomes substantially harder. Likewise, do not insert a replacement drive from another My Cloud Home into the unit—the units use proprietary keys tied to the specific hardware unit, and the operation will end up overwriting key structures.
Free consultation, diagnostics, pickup
→ General information about NAS and network storage: NAS data recovery.
1 photo: disassembled My Cloud Home (3.5" drive + controller PCB)
The multi-bay My Cloud series (EX, PR and DL) target more demanding home users and small businesses. Architecturally, these units are closer to Synology and QNAP competitors—Linux operating system, mdadm software RAID, EXT4 file system over the array.
Categories by bay count and class:
SoCs differ between series: the EX line uses ARM Marvell Armada (370, 385 and 388), the PR line Intel Pentium. This affects performance but not the recovery procedure—mdadm metadata is platform-agnostic.
Multi-bay recovery is a sequence of steps that broadly resemble any Linux mdadm RAID work—the WD specifics are in the details:
Complications arise when the drives in the array were in a state that the WD firmware should never have allowed into a RAID—typically WD Red drives with SMR in a 4-bay configuration. The basic WD Red line uses SMR (Shingled Magnetic Recording) in some models manufactured in recent years, and in multi-bay arrays it behaves problematically: during a rebuild it doesn’t hold stable timeouts, drops out of the array, and is generally unsuitable for RAID 5/6 use. If the array contains a mix of SMR and CMR drives, the situation tends to be even more complicated. Recovery is achievable, but requires a more cautious approach and more time.
→ Detailed techniques and approaches to RAID arrays: RAID data recovery and NAS storage in general.
1 photo: 4-bay PR/EX with 4 removed drives alongside
WD My Cloud failures, ordered by how often we see them in our lab:
The dominant failure mode of the years 2021–2024. Symptoms: solid or blinking red LED on the front panel, dashboard unavailable or reporting a missing volume, the unit in a boot loop. Data on the internal drives is typically intact—the problem is in the damaged system partition after a failed flash operation. The recovery procedure is essentially standard (remove drives, image, mount the EXT4 data partition outside the device), but it requires deciding whether to risk another firmware repair attempt or to stop and hand the data back to the client.
The WD Red drives originally fitted in older My Cloud units now have 5–8 years of 24/7 operation behind them. Symptoms: SMART errors in the dashboard, spontaneous unmounting of volumes, knocking sounds, repeated spin-ups. For single-drive models, recovery is handled as for any 3.5" hard drive (potentially including head transplantation). For multi-bay models the situation is more complex—the drives are often all at the same stage of wear, and a rebuild after replacing one of them risks finishing off another.
The classic scenario for multi-bay models. The user replaces the failed drive, the dashboard starts a rebuild, and during the rebuild (often after tens of hours of operation) another drive fails and the array enters a double-degraded state beyond RAID 5’s ability to reconstruct the data. Recovery procedure: image all drives (including the original “failed” one, which is often in better condition than the post-rebuild state suggests), virtually reconstruct the array, recover the data. Failed rebuilds are more common in arrays with WD Red SMR drives.
A scenario specific to My Cloud Home and Home Duo. The user opened the device after a failure, removed the drive, mounted it in Linux or via a USB-SATA adapter on Windows, and discovered only thousands of files with 16-character hex names, no extensions and no folders. This is the expected state—the user-facing structure is in a SQLite database that needs to be processed with a professional lab tool. See the My Cloud Home section.
Synology offers an “Erase All Data” option in its menu, QNAP “Restore Factory Defaults & Format All Volumes”—WD My Cloud has analogous options. A panicked user, after a unit failure, often reaches for a reset hoping it will “fix something.” Depending on the type of reset, the impact ranges from minimal (password and network settings reset—data is safe) to destructive (data volume formatting). In RAID 5 arrays this additionally overwrites the mdadm superblocks; the data is still physically on the drives, but its reconstruction requires a deeper forensic procedure.
Multi-bay units sharing a single power supply are vulnerable to a power surge as a whole—if a lightning strike or surge pulse reaches the unit, it typically affects all drives equally. The difference is in the severity. Sometimes the bridge and SATA controllers on the unit's PCB survive but the drive electronics are destroyed; sometimes the other way around. The recovery procedure depends on the extent of the damage—it often combines drive PCB swaps (ROM transplantation) and array reconstruction.
Less common, but always individual. Brief contact with water typically affects the unit’s PCB first—oxidation of contacts, corrosion, short circuits; the drives themselves usually remain unharmed. With longer exposure, immersion, or flooding, water reaches even the sealed drive bodies and can damage platters and heads. Helium-filled drives (typically 12 TB and above) are more resistant in this respect thanks to their hermetically sealed bodies. In any case: do not power on the device after contact with water, do not dry it with any “home” methods, and bring it to the lab as soon as possible.
For single-drive models, drops are less common than with portable external drives—the unit sits in one place and isn't handled daily—but they happen. Typically during moves, cleaning, cable handling, or when something falls onto the device. For multi-bay models the risk is higher, especially during moves: a 4-bay PR4100 or EX4100 with full drives weighs over 5 kg, and a fall from desk height usually damages the mechanical components of multiple drives at once. Symptoms: the unit doesn't power up after the incident, audible clicking or scraping sounds, dashboard reports of multiple drive failures, or complete array unavailability. The recovery procedure is standard lab work—remove the drives, identify the extent of mechanical damage, work in the laminar flow box, and if necessary transplant read heads. The key is not to power the device back on after a fall; repeated spin-ups of damaged drives only make things worse.
→ Main WD pillar with overview of all series: Western Digital (WD) data recovery.
1 photo: head-stack damage of 3.5" drive from My Cloud
Not necessarily. A red LED on My Cloud signals a system-level error—most often a damaged system area on the drive after a failed OS 3 to OS 5 migration, or a failing internal drive. The data on the user volume usually remains intact, because it sits on a separate data partition.
The recovery procedure usually involves removing the drive, imaging it, and mounting the EXT4 data partition outside the device. We always verify the specific situation through free diagnostics—we’ll tell you what happened, what the recovery path would look like, and what it will cost.
If your unit is in a “migration failed / red LED” state, we don’t recommend repeated migration attempts or manual firmware flashing. While in some cases a new migration attempt does succeed, if the original cause of the failure was the condition of the drive itself (degraded sectors, weakening heads), each additional write to the system area worsens this condition. In a worse case, the process can overwrite the partition table in such a way that finding the boundaries of the data volume requires a forensic procedure.
If you have data in the unit that matters to you, we recommend leaving the device powered off and contacting us. Diagnostics is free, and the decision whether to retry the migration or recover data outside the device is based on the specific drive condition.
If the rebuild is currently running and proceeding without errors (no new red LEDs, dashboard reports progress), let it finish—interrupting a rebuild risks corrupting the array. But if you see another drive reporting a problem during the rebuild (red LED on another bay, warning in the dashboard), power the unit off and disconnect it immediately. Continuing the rebuild in this situation typically leads to a double-degraded state from which RAID 5 cannot recover the data on its own.
After powering off, contact us. The standard procedure is to remove all drives (noting the slot order), image them, virtually reconstruct the array in lab software, and recover the data outside the original array. Important: please send the original “failed” drive too—it’s often in better condition than the post-rebuild state suggests, and contains data critical for the virtual reconstruction.
That’s the expected state. My Cloud Home stores user files on the EXT4 partition under hexadecimal content IDs without their original names or folders. The mapping back to original file names is in a separate SQLite database (index.db in /restsdk/data/db/). Without proper processing of this database in the lab, you can’t access the data in its original form.
Standard recovery software (Disk Drill, Recuva, EaseUS) can’t process this layer—it scans EXT4 and returns the same anonymous result. If you have a drive from a My Cloud Home, we recommend not performing any further operations on it and contacting us. Diagnostics is free, and in our lab we have the experience and equipment for My Cloud Home that standard recovery software doesn’t offer.
My Cloud Home uses encryption via Linux LUKS. If you know the password, decrypting the partition after removing it from the original unit is a standard procedure, and recovery proceeds the usual way (reconstruction of the REST SDK tree from the index.db database, copying the data). If you don’t know the password, the situation is substantially more complicated—LUKS uses AES-256, and the algorithm is designed to resist brute-force attacks. Without the key, decryption is not feasible in any reasonable time.
If you have the password stored somewhere (password manager, paper note from setup, screenshot on your phone), try to find it before contacting us. If the password truly doesn’t exist, we can discuss the situation individually, but the outcome can’t be guaranteed in advance in such a case.
EXALAB Data Recovery
Microshop s.r.o.
Pod Marjánkou 4
169 00 Praha 6
Česká Republika
Opening hours:
Monday to Thursday
9.00 - 18.00
Friday 9.00 - 17.30
other opening hours are possible upon agreement
Hotline: +420 608 177 773
Office: +420 233 357 122
E-mail: [email protected]