Read/Write Compression: Combining UnionFS and SquashFS

Need to have write capability on your SquashFS compressed filesystem? UnionFS to the rescue!

Taking another small detour from our coverage of the 2.6.30 kernel (code-named file-systems-o-plenty), this article discusses using UnionFS with SquashFS to couple a read-only file system with a write-capable file system (ext3, for example), creating a read-only file system that appears writable.

The combination allows you to take data that isn’t used very often, compress it, and mount it read-only, while still allowing changes (edits, deletes, &c.). We’ll demonstrate a simple example that shows how to take a directory in a user’s account, use SquashFS to mount it read-only, and use UnionFS to couple a writable directory so that the user can still edit the data.

Quick Review of SquashFS

Recently, we tackled the subject of SquashFS (as good a place to start as any), an addition to the latest Linux kernel. SquashFS is a compressed file system that can create and mount a compressed file system as a read-only file system. It’s attractive for many reasons, among them:

  • Very large file systems – 2^64 bytes
  • Very large files (2 TiB)
  • Adjustable block sizes (up to 1 MiB)
  • Compresses metadata
  • Sparse file support
  • Exportable via NFS
  • Maintains file change times
  • Small cache for decompression of data (helps performance)

It is capable of compressing data to a very high level, potentially saving a great deal of space. In the article previously mentioned, SquashFS achieved roughly 3:1 compression (but be aware that the ratio depends upon the data being compressed).

SquashFS is fairly easy to use. The user space tools allow you to create an image of the directory tree that you want. Then you mount the image and you can access the data in a read-only mode (please see the previously mentioned article or the SquashFS website for details).
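As a rough sketch of those two steps, the commands below build an image and loop-mount it read-only. The directory and image names are hypothetical, and the commands require root privileges plus the squashfs-tools package:

```shell
# Build a compressed, read-only image of a directory tree
# (paths here are illustrative; adjust to your system)
mksquashfs /home/user/docs /home/user/docs.sqsh

# Mount the image read-only through a loop device (as root)
mkdir -p /mnt/docs-ro
mount -t squashfs -o loop /home/user/docs.sqsh /mnt/docs-ro

# The mount is strictly read-only; any attempt to write,
# such as "touch /mnt/docs-ro/test", fails with
# "Read-only file system"
```

Note that `mksquashfs` leaves the original directory untouched; once you have verified the mounted image, you can remove the original tree to reclaim the space.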


Quick Review of UnionFS

UnionFS is a stackable unification file system that merges the contents of several file systems (called branches) into a single coherent file system view. The various branches are overlaid on top of each other (this is an important point), and they can be mixed in read-only and/or read-write modes. The branches are assigned priorities so that, when the same file exists in more than one branch, the copy from the highest-priority branch is the one the user sees. In addition, UnionFS allows the insertion or deletion of branches anywhere in the fanout of the combined directory structure. UnionFS is based on a concept called “union mounts.”

Union mounts were derived from the concept of union directories in Plan 9. Union directories were developed so that you can mount directories from other devices without the contents of the mounted device taking precedence over the local directory. For example, if you NFS-mount a directory on top of a local one, the contents of the local directory are no longer accessible. Union directories were created so that any device could be shared over a network without special code.

For more information on union mounts and UnionFS, I recommend an excellent three-part series by Valerie Aurora: Part 1, Part 2, and Part 3, which discuss the background of union mounts and how they are implemented.

At the core of UnionFS, or any union file system, are a few basic concepts. The first is to take a set of directories, called branches, and overlay them so that all of the intended data is visible in the final virtual file system. Each branch can be added to the union as read-only, read-write, or some variation that depends upon the ownership of the branches. The branches are then ordered (also called stacked) to create the final file system. Depending upon the union implementation, branches can sometimes be re-ordered, added, removed, or have their permissions changed on the fly.
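The branch stacking described above is what lets a writable directory sit on top of a read-only SquashFS mount. The sketch below uses the classic kernel-module UnionFS mount syntax, in which the leftmost branch in the `dirs=` list has the highest priority; all paths are hypothetical, and the FUSE implementation (unionfs-fuse) uses a different invocation:

```shell
# Overlay a writable branch on top of a read-only SquashFS
# mount; the leftmost branch has the highest priority, so new
# and modified files land in /home/user/docs-rw
mkdir -p /home/user/docs-rw /home/user/docs-union
mount -t unionfs \
      -o dirs=/home/user/docs-rw=rw:/mnt/docs-ro=ro \
      unionfs /home/user/docs-union

# Edits now appear to succeed even for files that live in the
# compressed branch; the changed copy is stored in docs-rw
echo "new data" >> /home/user/docs-union/notes.txt
```

With unionfs-fuse, the equivalent (again assuming these paths) would be roughly `unionfs-fuse -o cow /home/user/docs-rw=RW:/mnt/docs-ro=RO /home/user/docs-union`, where `cow` enables copy-on-write to the writable branch.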

The second concept in union file systems is deleting files and directories. If a file is deleted through the union, it should not appear again even if a copy still exists on a lower-level branch. Typically, this is done through a combination of what are called whiteout entries and opaque directories. As the name implies, a whiteout entry is used to “cover up” all copies of a particular file in the lower branches. An opaque directory is somewhat similar, but it covers up everything beneath that directory in the lower branches. These concepts are very important when using a file system such as SquashFS, because a user may erase a file that lives only on the read-only SquashFS branch, and that file has to appear as though it has been erased.
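You can see the whiteout mechanism in action on a union like the one described above. UnionFS records whiteouts as specially named `.wh.` entries in the writable branch (the file names below are hypothetical):

```shell
# Deleting a file that exists only in the read-only branch
# appears to succeed from the union's point of view
rm /home/user/docs-union/old-report.txt

# The file no longer shows up in the union view, but the
# writable branch now holds a whiteout marker for it,
# e.g. .wh.old-report.txt
ls -a /home/user/docs-rw
```

The read-only SquashFS image is never modified; the whiteout marker in the writable branch simply masks the file from view.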
