No Data Corruption & Data Integrity
What exactly does the 'No Data Corruption & Data Integrity' slogan mean for you as an Internet hosting account owner?
Data corruption is the process of files being damaged as a result of some hardware or software failure, and it is one of the main problems Internet hosting companies face: the larger a hard drive is and the more information is stored on it, the more likely it is for data to get corrupted. There are various fail-safes, yet data often gets corrupted silently, so neither the file system nor the administrators notice a thing. As a result, a corrupted file is treated as a regular one, and if the hard drive is part of a RAID, that file is copied to all the other drives. In theory this is done for redundancy, but in practice it makes the damage worse. Once a file is corrupted, it becomes partly or fully unreadable: a text document can no longer be opened, an image shows a random blend of colors if it opens at all, and an archive cannot be unpacked, so you risk losing your content. Although the most widely used server file systems include various integrity checks, they often fail to detect a problem early enough, or they need a long time to scan all the files, and the server is not operational during that period.
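The problem described above can be illustrated with a small Python sketch (the file contents and names here are made up for the example): a single flipped byte leaves the file's size and metadata unchanged, so a plain read reports nothing wrong, and only a comparison against a previously recorded checksum reveals the damage.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Original file contents and their checksum, recorded at write time.
original = b"Hello, visitor! Welcome to our site."
recorded_checksum = sha256(original)

# Simulate silent corruption: a single byte flips on disk, but the
# file's size and metadata stay exactly the same.
corrupted = bytearray(original)
corrupted[7] ^= 0x01
corrupted = bytes(corrupted)

# A plain read cannot tell the difference...
print(len(corrupted) == len(original))          # True - same size, no error raised

# ...but comparing against the recorded checksum exposes the damage.
print(sha256(corrupted) == recorded_checksum)   # False - silent corruption detected
```

This is the core idea behind checksumming file systems: without the recorded digest, the damaged file would pass every superficial check and, in a conventional RAID, would be faithfully mirrored to the other drives.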
No Data Corruption & Data Integrity in Cloud Web Hosting
The integrity of the data you upload to your new cloud web hosting account will be guaranteed by the ZFS file system that we use on our cloud platform. Like most hosting providers, we use multiple hard drives to store content, and because the drives work in a RAID, the same information is synchronized between them at all times. If a file on one drive is damaged for whatever reason, however, it is quite likely to be copied to the other drives, because conventional file systems do not include special checks for this. Unlike them, ZFS keeps a digital fingerprint, or checksum, for every file. If a file gets damaged, its checksum will no longer match the one ZFS has on record, so the damaged copy is replaced with a healthy one from another drive. Since this happens in real time, there is virtually no risk that any of your files will ever be corrupted.
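The verify-and-heal behavior described above can be sketched in miniature (this is a toy model for illustration, not ZFS's actual implementation; the `MirroredStore` class and its drives are invented for the example): a checksum is recorded on write, every read is verified against it, and a copy that fails verification is silently repaired from a healthy mirror.

```python
import hashlib

class MirroredStore:
    """Toy two-way mirror that, like a checksumming file system,
    records a digest per file and heals a damaged copy on read."""

    def __init__(self):
        self.drives = [{}, {}]   # two mirrored "drives" (name -> bytes)
        self.checksums = {}      # name -> sha256 recorded at write time

    def write(self, name: str, data: bytes) -> None:
        # Store the same data on both drives and record its checksum.
        for drive in self.drives:
            drive[name] = data
        self.checksums[name] = hashlib.sha256(data).hexdigest()

    def read(self, name: str) -> bytes:
        # Find a copy whose checksum still matches the record.
        good = None
        for drive in self.drives:
            if hashlib.sha256(drive[name]).hexdigest() == self.checksums[name]:
                good = drive[name]
                break
        if good is None:
            raise IOError(f"{name}: all copies are corrupted")
        # Self-heal: overwrite any copy that fails verification.
        for drive in self.drives:
            if hashlib.sha256(drive[name]).hexdigest() != self.checksums[name]:
                drive[name] = good
        return good

store = MirroredStore()
store.write("index.html", b"<html>welcome</html>")

# Silently damage the copy on drive 0.
store.drives[0]["index.html"] = b"<html>garbage</html>"

print(store.read("index.html"))       # the intact copy from drive 1 is returned
print(store.drives[0]["index.html"])  # and drive 0 has been repaired in passing
```

Real ZFS checksums data at the block level rather than per file and verifies on every read, but the principle is the same: a mismatch is detected before bad data can propagate, and the damaged copy is rewritten from a replica that still verifies.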