Technical properties

  1. LSFS is a log-structured file system that records every change in a journal. It never overwrites data in place; it stores only the changes.
  2. Old data remains on the file system until online defragmentation cleans it up.
  3. LSFS is a snapshot-based file system; its snapshots build on the journaling concept. Every snapshot is incremental and occupies additional space equal to the changes made since the previous snapshot was created.
  4. The journal is divided into file segments for convenience. The minimum file-segment size is 128 MB, the maximum is 512 MB. LSFS always keeps one additional empty file segment.
  5. The junk rate determines the maximum allowed LSFS growth (overprovisioning) compared to the declared LSFS size. The default rate is 60%, which means LSFS file segments may use up to 2.5 times the initial LSFS size. Additionally, metadata occupies up to 20% of the initial LSFS size, resulting in a total overprovisioning of LSFS devices of 200%.
  6. The metadata contribution is usually small, but some I/O patterns cause rapid metadata growth, and metadata may occupy as much space as the useful data; the worst such pattern is 4k random writes. Nevertheless, once the whole disk is full, the ratio between metadata and useful data stabilizes, and 3x growth is the maximum possible. Information about useful data size, metadata, and fragmentation is shown in StarWind Management Console when the LSFS device is selected.
  7. Regardless of the data access pattern, the underlying storage always receives writes in 4 MB blocks. Reads may be served in blocks as small as 4k. Because of this large sequential write pattern, the underlying storage can be an inexpensive spinning drive.
  8. If available disk space on the physical storage is low, LSFS switches to a more aggressive defragmentation policy and access speed for the end user decreases. If there is no space left on the physical drive, the LSFS device becomes read-only.
  9. LSFS uses a 4k block size. If write requests are not 4k-aligned, write speed will be low and deduplication will not be performed. This should not be a problem, since all modern drives use 4k blocks and Windows supports them starting from Windows Vista. Hyper-V aligns VHDX files, but VHD files might not be aligned. ESX cannot work with 4k-sector drives, but VMFS should be aligned as well, so there is no problem.
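The space overhead described in items 5 and 6 can be sketched with a few lines of arithmetic. This is an illustrative estimate only, assuming the figures above: a 60% junk rate lets file segments grow to 1 / (1 - 0.6) = 2.5 times the initial device size, and metadata can push the worst-case total to about 3x (200% overprovisioning). The function name and the 3x total factor are assumptions for illustration, not taken from the StarWind implementation.

```python
def lsfs_worst_case_footprint(initial_size_gb: float,
                              junk_rate: float = 0.60,
                              total_factor: float = 3.0) -> dict:
    """Estimate the worst-case on-disk footprint of an LSFS device.

    Hypothetical helper based on the documented figures; not a StarWind API.
    """
    # With junk_rate of the segment space allowed to be stale data,
    # only (1 - junk_rate) of it is useful, so segments grow by this factor.
    segment_factor = 1.0 / (1.0 - junk_rate)   # 2.5 for the default 60%
    segments_gb = initial_size_gb * segment_factor
    # Whatever remains up to the documented 3x ceiling is metadata headroom.
    metadata_gb = initial_size_gb * total_factor - segments_gb
    return {
        "file_segments_gb": segments_gb,
        "metadata_gb_max": metadata_gb,
        "total_gb_max": initial_size_gb * total_factor,
    }

# A 1 TB (1024 GB) initial LSFS device:
print(lsfs_worst_case_footprint(1024))
```

For a 1 TB device this yields 2560 GB of file segments and a 3072 GB worst-case total, matching the 200% overprovisioning figure quoted in the Limits section.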
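The alignment rule in item 9 is simple to state in code: a write request takes the fast, deduplicable path only if both its offset and its length fall on 4k boundaries. The following check is a hypothetical illustration of that rule, not part of any StarWind interface.

```python
BLOCK = 4096  # LSFS block size in bytes

def is_aligned(offset: int, length: int, block: int = BLOCK) -> bool:
    """True if a write request starts and ends on LSFS block boundaries."""
    return offset % block == 0 and length % block == 0

# A VHDX-style aligned write vs. a legacy 512-byte-offset write:
print(is_aligned(offset=8192, length=65536))  # True: aligned, deduplicable
print(is_aligned(offset=512, length=4096))    # False: unaligned, slow path
```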

Limits and requirements

  1. Required free RAM (not related to the L1 cache):
    • 4.6 GB of RAM per 1 TB initial LSFS size (with deduplication disabled)
    • 7.6 GB of RAM per 1 TB initial LSFS size (with deduplication enabled)
  2. Maximum size of LSFS device is 64 TB.
  3. Over-provisioning is 200% (LSFS files can occupy up to 3 times the initial LSFS size). Snapshots require additional space on top of that.
  4. The physical block size of an LSFS device is 4k; the virtual block size is either 512b or 4k. 4k virtual blocks are recommended for Microsoft Windows environments, and 512b blocks for VMware ESX.
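The RAM requirement in item 1 scales linearly with the initial LSFS size. A small illustrative helper (the function name is invented here) applies the documented figures of 4.6 GB per TB without deduplication and 7.6 GB per TB with it:

```python
def lsfs_ram_required_gb(initial_size_tb: float, dedup: bool) -> float:
    """Free RAM (GB) needed for an LSFS device of the given initial size.

    Illustrative calculation from the documented per-TB figures.
    """
    per_tb_gb = 7.6 if dedup else 4.6
    return initial_size_tb * per_tb_gb

# A 10 TB LSFS device:
print(lsfs_ram_required_gb(10, dedup=False))  # about 46 GB
print(lsfs_ram_required_gb(10, dedup=True))   # about 76 GB
```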

Features and benefits

  1. Defragmentation works continuously in the background. A file segment is defragmented when its junk data exceeds the allowed value: live data from the old file segment is moved to an empty file segment, and the old fragmented segment is then deleted. The maximum allowed junk rate before defragmentation starts is 60%; this value can be changed via the device context menu in StarWind Management Console.
  2. Full defragmentation can be started manually via the device context menu (for standalone devices only). Manually started defragmentation ignores the default junk rate and cleans up all junk blocks inside the file segments.
  3. HA devices based on LSFS use snapshots for synchronization. By default, snapshots are created every 5 minutes and then removed. After a failure, HA synchronizes only the latest changes: each HA partner holds a healthy snapshot taken before the failure, so there is no need to synchronize all the data, only the changes made after that snapshot was created. Full synchronization takes place only after initial replica creation, and even then only useful data is replicated, skipping junk data.
  4. LSFS creates restore points during normal operation. These are part of the journal and are used in case of failure. Restore points can be viewed via Snapshot Manager in Device Recovery Mode.
  5. Deduplication identifies unique chunks of data and stores them. As analysis continues, other chunks are compared to the stored copies, and whenever a match occurs the redundant chunk is replaced with a reference that points to the stored chunk. Given that the same byte pattern may occur dozens, hundreds, or even thousands of times (the matching frequency depends on the chunk size), the amount of data to be stored or transferred can be greatly reduced. For example, similar VMs residing on an LSFS device deduplicate well, but the pagefile of each VM does not. In our observations, deduplication is not applicable to pagefiles, so the size of every pagefile is added as unique data. In short, 10 similar VMs of 12 GB each (2 GB of which is pagefile) occupy (12-2)+2*10=30 GB.
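The segment-cleaning rule from item 1 can be sketched as a simple threshold check: a file segment is rewritten once its junk (stale) fraction exceeds the allowed junk rate, 60% by default. All names here are illustrative assumptions, not StarWind internals.

```python
JUNK_RATE = 0.60  # default maximum allowed junk fraction per file segment

def needs_defrag(live_bytes: int, segment_bytes: int,
                 junk_rate: float = JUNK_RATE) -> bool:
    """True once stale data in a segment exceeds the allowed junk rate."""
    junk_fraction = 1.0 - live_bytes / segment_bytes
    return junk_fraction > junk_rate

SEG = 256 * 2**20                        # a 256 MB file segment
print(needs_defrag(200 * 2**20, SEG))    # mostly live data -> False
print(needs_defrag(64 * 2**20, SEG))     # 75% junk -> True
```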
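The VM arithmetic from item 5 can be generalized into a small sketch. It assumes, as the example does, that pagefiles never deduplicate while the rest of the VM data deduplicates perfectly across identical VMs; the function name is invented for illustration.

```python
def dedup_footprint_gb(vm_count: int, vm_size_gb: float,
                       pagefile_gb: float) -> float:
    """On-disk size of vm_count identical VMs under ideal deduplication."""
    shared = vm_size_gb - pagefile_gb   # stored once for all similar VMs
    unique = pagefile_gb * vm_count     # each pagefile stored in full
    return shared + unique

# 10 similar VMs, 12 GB each with a 2 GB pagefile: (12-2) + 2*10 = 30
print(dedup_footprint_gb(10, 12, 2))  # 30.0
```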

Request a Product Feature

To request a new product feature or to provide feedback on a StarWind product, please email our support at support@starwind.com with “Request a Product Feature” as the subject.
