Read/Write Compression: Combining UnionFS and SquashFS

Need to have write capability on your SquashFS compressed filesystem? UnionFS to the rescue!

Taking another small detour from our coverage of the 2.6.30 kernel (code named file-systems-o-plenty), this article discusses using UnionFS with SquashFS to couple a read-only file system with a write-capable file system (ext3, for example), creating a read-only file system that appears writable.

The combination allows you to take data that isn’t used very often, compress it, and mount it read-only, while still allowing changes (edits, deletes, etc.). We’ll walk through a simple example that takes a directory in a user’s account, mounts it read-only with SquashFS, and uses UnionFS to couple a writable directory so that the user can still edit the data.

Quick Review of SquashFS

Recently, we tackled the subject of SquashFS (as good a place to start as any), an addition to the latest Linux kernel. SquashFS is a compressed file system: it creates a compressed image of a directory tree and mounts it as a read-only file system. It’s attractive for many reasons, among them:


  • Very large file systems – 2^64 bytes
  • Very large files (2 TiB)
  • Adjustable block sizes (up to 1 MiB)
  • Compresses metadata
  • Sparse file support
  • Exportable via NFS
  • Maintains file change times
  • Small cache for decompression of data (helps performance)

It is capable of compressing data to a very high level, potentially saving a great deal of space. In the article mentioned above, SquashFS achieved roughly 3:1 compression, though the ratio depends upon the data being compressed.

SquashFS is fairly easy to use. The user-space tools let you create an image of the directory tree you want to compress; you then mount the image and access the data read-only (please see the previously mentioned article or the SquashFS website for details).
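As a minimal sketch of that workflow (assuming squashfs-tools is installed and the kernel has SquashFS support; the user name, paths, and image name here are placeholders, not from the article):

    # Build a compressed, read-only image of the user's "project" directory
    # (run mksquashfs as the user, the mount commands as root)
    mksquashfs /home/jeff/project /home/jeff/project.sqsh

    # Mount the image read-only through a loop device
    mkdir -p /mnt/project-ro
    mount -t squashfs -o loop,ro /home/jeff/project.sqsh /mnt/project-ro

At this point the data under /mnt/project-ro is visible but cannot be modified, which is where UnionFS comes in.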

UnionFS

UnionFS is a stackable unification file system that merges the contents of several file systems (called branches) into a single coherent file system view. The branches are overlaid on top of each other (this is an important point) and can be mixed in read-only and read-write modes. Branch priorities determine which copy of a file the user sees when the same file exists in more than one branch. In addition, UnionFS allows the insertion or deletion of branches anywhere in the fanout of the combined directory structure. UnionFS is based on a concept called “union mounts.”

Union mounts were derived from the concept of union directories in Plan 9. Union directories were developed so that you can mount directories from other devices without the contents of the mounted device taking precedence over the local directory. For example, if you NFS-mount a directory on top of a local one, the contents of the local directory are no longer accessible. Union directories were created so that any device could be shared over a network without special code.

For more information on union mounts and UnionFS, I recommend this series of three articles by Valerie Aurora: Part 1, Part 2, and Part 3. They are an excellent set of articles that discuss the background of union mounts and how they are implemented.

At the core of UnionFS, or any union file system, are a few basic concepts. The first is to take a set of directories, called branches, and overlay them so that all of the intended data is visible in the final virtual file system. Each branch can be added to the union as read-only, read-write, or some variation that depends upon the ownership of the branches. The branches are then ordered (also called stacked) to create the final file system. Depending upon the implementation, branches can sometimes be re-ordered, added, removed, or have their permissions changed on the fly.
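For illustration, stacking a writable scratch directory on top of the read-only SquashFS mount from the earlier sketch might look like the following. This assumes the unionfs kernel module and its dirs= mount option; the branch paths are hypothetical, and branches are listed highest-priority first:

    # Writable branch that will hold new or modified files
    mkdir -p /home/jeff/project-rw
    mkdir -p /mnt/project

    # Overlay the branches: the rw branch (left) takes priority over
    # the read-only SquashFS branch (right)
    mount -t unionfs -o dirs=/home/jeff/project-rw=rw:/mnt/project-ro=ro none /mnt/project

The user then works in /mnt/project as if it were an ordinary writable directory; reads fall through to the compressed branch, while writes land in the rw branch.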

The second concept in union file systems is the handling of deleted files and directories. If a file is deleted through a read-write branch, it should not reappear even if it still exists on a lower-level branch. Typically, this is handled through a combination of whiteout entries and opaque directories. As the name implies, a whiteout is used to “cover up” all entries for a particular file in the lower branches. An opaque directory is somewhat similar, but it covers up the lower branches only from that point downward in the file system tree. These concepts are very important when using a file system such as SquashFS, because a user may erase a file that lives only in the read-only SquashFS branch, and that file has to appear as though it has been erased.
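To make that concrete, here is roughly what a deletion looks like with the union from the previous sketch (again a hedged example with hypothetical file names; the .wh. prefix is the whiteout naming convention UnionFS stores in the writable branch):

    # "Delete" a file that actually lives only in the read-only SquashFS branch
    rm /mnt/project/notes.txt

    # The SquashFS image is untouched; the rw branch now carries a whiteout
    # entry that hides notes.txt from the union view
    ls -a /home/jeff/project-rw
    # expect an entry along the lines of:  .wh.notes.txt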

Comments on "Read/Write Compression: Combining UnionFS and SquashFS"

rrolsberg

Puppy Linux has been doing this for at least four years and it works great! http://www.puppylinux.com
