Gentoo Optimizations Benchmarked

Gentoo is a source-based distribution which lets the user decide how to optimize their system in many ways. Linux Magazine benchmarks three of the most common GCC optimizations: -Os, -O2 and -O3, and throws in Ubuntu for good measure.

As previously discussed on Linux Magazine, Gentoo is a source-based distribution which lets the end user decide what their system will be.

Binary distributions make these choices for you; by building from source, Gentoo users can decide for themselves. They are able to choose both the CPU their binaries will be built for and the GCC optimizations applied.

Taken directly from the GNU GCC Manual:

“Without any optimization option, the compiler’s goal is to reduce the cost of compilation and to make debugging produce the expected results… Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.

The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.”

The -O1 option begins the optimization, with -O2 and -O3 optimizing further. The special -Os option optimizes code for size: it enables all the options from -O2 which do not increase the size of the code and is especially useful for low-memory systems. Today Linux Magazine benchmarks three of the most common: -Os, -O2 and -O3.
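To illustrate the size trade-off these levels imply, here is a small sketch (the demo.c file and its contents are made up for this example; it assumes gcc is installed) that compiles the same source at each level and compares the resulting binary sizes:

```shell
# Hypothetical demo: build one file at each optimization level and
# compare binary sizes. Nothing here is from the article's benchmark
# setup -- it is purely illustrative.
cat > demo.c <<'EOF'
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 1000000; i++)
        sum += i * i;
    printf("%ld\n", sum);
    return 0;
}
EOF

for opt in -Os -O2 -O3; do
    gcc "$opt" demo.c -o "demo$opt"
    printf '%s: %s bytes\n' "$opt" "$(wc -c < "demo$opt")"
done
```

Typically -Os yields the smallest binary; whether smaller code also runs faster depends on the workload and cache behavior, which is exactly what the benchmarks below probe.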

The computer system used in these tests is an Intel Core 2 CPU, and as such Gentoo was compiled in 64-bit with the "-march=core2" CPU type. Ubuntu is included for comparison purposes. Jaunty Jackalope 9.04 was chosen because it is the current stable version and more closely matches the Gentoo system, with the exception of the kernel, which is two versions behind. Each installed system is just the base with any required dependencies for testing, plus X.Org and Xfce 4.

The Hardware

Processor: Intel Core 2 Duo CPU E8400 @ 3.00GHz (Total Cores: 2) Intel SpeedStep Technology enabled
Motherboard: Gigabyte EP35-DS3P
Chipset: Intel 82G33/G31/P35/P31 + ICH9R
System Memory: 3965MB
Disk: 300GB ST3300831AS
Graphics: GeForce 8800 GT
Monitor: Samsung SyncMaster

Gentoo System

The Gentoo system was built from the stable branch using the same USE flags and packages. For each of the different GCC optimization levels, the system's toolchain was rebuilt, then the entire system was rebuilt, twice. All systems used the same kernel configuration and packages.

Kernel: 2.6.30-gentoo-r5 (x86_64)
Desktop: Xfce 4.6.1
Display Server: X.Org Server 1.5.3
OpenGL: 3.0.0 NVIDIA 180.60
Compiler: GCC 4.3.3
Java: Sun Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
File-System: ext3
Screen Resolution: 1280x1024

The make profile used was amd64/2008.0 with the following additional USE flags set:
custom-cflags custom-optimizations qt3support X sse sse2
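For reference, a make.conf fragment matching the article's -O2 build might look like the following. This is a sketch: the article does not show the actual file, so the CFLAGS values are inferred from its description of the setup.

```shell
# Illustrative /etc/make.conf fragment for the -O2 / core2 build.
# Values inferred from the article, not quoted from its test systems.
CFLAGS="-O2 -march=core2 -pipe"
CXXFLAGS="${CFLAGS}"
USE="custom-cflags custom-optimizations qt3support X sse sse2"
```

For the -Os and -O3 runs, only the first flag in CFLAGS would change; the custom-cflags USE flag is what lets Portage apply these settings to packages that would otherwise filter them.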

Ubuntu System

The Ubuntu system is very similar in terms of software, with the exception of an older kernel and newer version of X.Org.

Kernel: 2.6.28-15-generic (x86_64)
Display Server: X.Org Server 1.6.0
OpenGL: 3.0.0 NVIDIA 180.44
Compiler: GCC 4.3.3
Java: Sun Java(TM) SE Runtime Environment (build 1.6.0_16-b01)
File-System: ext3
Screen Resolution: 1280x1024

Let The Games Begin!

The benchmarking system used is the Phoronix Test Suite, which is widely regarded as the most complete benchmarking tool for Unix systems. The tests themselves are broken down into categories. Low-level hardware tests such as memory and disk benchmarking have been excluded, as their results are effectively the same across systems.

Audio Visual
This first set of results shows the conversion of an audio file into several different formats, including FLAC, Monkey's Audio, MP3, Ogg Vorbis and WavPack.

When encoding a file, there was not much difference between GCC optimizations, and Ubuntu also performed similarly.

MEncoder converts a video file from one format to another. Here, while -O3 has an ever-so-slight advantage, the biggest surprise is Ubuntu, which takes almost twice as long as Gentoo.

When it comes to playing back video, each Gentoo system performed on par while Ubuntu used slightly more CPU resources.

Comments on "Gentoo Optimizations Benchmarked"


Gentoo seems (incorrectly) synonymous with this idea that you can fiddle with your GCC flags and make things faster. This may or may not be the case; after all, it's a piece of cake for Ubuntu to take the output of this test and then make sure that next time all packages are rebuilt with -O2/-Os, etc., as appropriate for best performance.

In fact the magic of Gentoo is its great dependency system, massive software library and very flexible configuration system. The downside of that is complexity in use and sometimes slower speed of installation – definitely Gentoo is NOT for everyone.

However, like all tools, they have their place. Some people like a potato peeler and others prefer a knife. Knives are very flexible, but they can also cause you much grief and require greater skill to learn to use. Which is the "best" tool is really a function of the desired task and the needs of the operator.

We use Gentoo to manage all our servers (many of them virtualised). This gives us the following advantages:

- No need to ever re-install because a new version is released. Instead we have a continuous rolling upgrade which keeps the servers always up to date with the latest software. (Risk of breakage while upgrading is largely mitigated by virtualisation, which makes it easy to test an upgrade first, and software versions can be pegged if you want to prevent an upgrade.)

- We set Gentoo to create binary packages as it compiles – this means that the first server can be somewhat slow to upgrade, but thereafter the other servers (which are set to identical compile settings) all pull the binary packages.

- Gentoo has very flexible options to avoid pulling in a wadge of dependencies. This means you can easily create small virtual servers without a ton of bloat. For example we ask all packages which have bash-completion options to install their scripts for our ease while using the command line, but conversely largely we ask packages on our virtual servers not to install their bulky documentation, python/perl/ruby/snmp, etc dependencies – this keeps the package sizes small

- A near end to dependency hell! No more thrashing around trying to solve a dependency nightmare. I am literally in the middle of upgrading 20 servers from GCC 3.4 to GCC 4.3, at the same time – no breakage whatsoever. Try upgrading major versions of readline/gcc/libhistory/mysql, etc. on any RPM/Deb distro and you are in for a big headache (or more likely it's a major OS update), but on Gentoo you can just pick a package version, then a script (revdep-rebuild) will rebuild any affected packages and correct the dependencies for you. Cool!
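The binary-package workflow described in the points above maps to a couple of Portage settings. The fragment below is a sketch of that setup, not the commenter's actual configuration; the PORTAGE_BINHOST value in particular would be your own.

```shell
# On the build host, in /etc/make.conf: save a binary package
# for everything that gets compiled.
FEATURES="buildpkg"

# On the other, identically configured hosts, prefer the prebuilt
# packages over compiling from source:
#   emerge --usepkg <package>
# PORTAGE_BINHOST can point at the build host's package directory
# so the other machines fetch packages from it over HTTP.
```

Because the hosts share the same CFLAGS and USE flags, the binaries built on the first machine are valid on all of them, which is what makes the "first server slow, rest fast" upgrade pattern work.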

So in conclusion it's entirely possible that our servers run a touch faster because we use Gentoo, but I personally would still use it if they ran only 3/4 the speed of some other distro… Never having to re-install a server and the huge flexibility when installing packages (and having all appropriate dependencies resolved and built correctly) is the main win for me.

However, please don't rush out and assume it works for you – I think Gentoo is a complex tool and will likely only suit the more technically minded users. For everyone else I suspect that Arch/Sabayon, etc. are probably more appropriate…

(This doesn't make it a bad distro – it's just that Ubuntu solves the needs of most users, and therefore only those who need *more* flexibility in package management than Ubuntu brings are likely to want to investigate further.)


I would like to see a similar comparison with Slackware and Arch. Other major distributions (Red Hat, SUSE, Debian) would also be interesting, but the first ones are (supposed to be?) faster because they are "simpler".


I have to agree with ewildgoose. I have been using Gentoo for production servers for several years now, and I much prefer the maintenance routines on Gentoo over RedHat and Debian.

In my current position our servers are hosted by an outside company and they won't do any hands-on for any system that isn't Debian, so I am forced to work with that. Unfortunately, that means that sometimes security updates require a complete system rebuild to a new version. This does not amuse me.

My personal server has been running Gentoo since 2006 with reboots only for new kernels, power outages and physical moves. And even with new kernels those outages are very short. Rebuilding the Debian machines on the other hand …


evardsson – we don't require hands-on support for our servers, and we use a hosting service which gives us serial port access to our machine. This means we can see everything from the end of the BIOS boot onwards, including accessing GRUB. Additionally our host provides a "rescue boot", but this can be simulated by a boot CD left in a drive. Together this allows us to remotely format the machine, install Gentoo and manage everything, including failed kernel upgrades, bootloader issues, etc.

pattakosn – you are like many people debating whether a 10% speed difference between two distros is relevant – for most purposes it's not. The key feature for most users is how well the distro works for you and fits your needs. So evaluate Arch/Gentoo/Slackware/Ubuntu in terms of "how much work you can get done over the next week", and then performance, ease of management, etc. all effectively get boiled down to a much more objective benchmark. For me and my servers, Gentoo lets me get stuff done and not spend time managing packages – for a desktop machine I may have different priorities though…

Slackware and Arch are kind of Gentoo-esque, but have headed in different directions. Arch (which I don't know that well) is a mostly binary packaging system, but the source used to generate the packages is more accessible than if, say, you decided you wanted to recompile Red Hat from the source RPMs – also the default compile architecture is a bit more aggressive (i686, I think). Slackware is even less user-friendly than Gentoo: where Gentoo is effectively a bunch of scripts which pull down all your source, compile it and basically do all the hard work for you, Slack is really a very nice manual on how to do all this yourself.

Personally Slack doesn't work for me because there doesn't appear to be a strong package manager which watches files and can remove them or manage them for you (with Gentoo, RPM/Deb you can ask the package manager who installed any file, files are cleaned up when the package is uninstalled, and config files are treated differently to binaries).

Arch could probably suit me nicely, but I don't have much experience with it…


I stumbled on a forum post pointing out that Gentoo does not use the make.conf optimizations when building the kernel.

The biggest reasons I run Gentoo besides my use of older hardware are:

1. I can build the system from the ground up on whatever convoluted file-system I can create with the live CD (RAID, EVMS, LUKS, …). Many of the other distros require you to install a working system first, then rearrange things to your liking.

2. Like ewildgoose said above, Gentoo's Portage provides a rolling upgrade to the latest system. You can upgrade as time allows, but be prepared for blocked and circular dependencies if you allow too much time between updates. Definitely test updates on non-production machines first.

3. You can still install binaries instead of compiling from source if the optimizations are not critical.


I'm a huge Ubuntu fan, but having tried my hand at Gentoo as well, I have to note that these benchmarks have the potential to be inaccurate. For example, prelink (a third-party program) is enabled on Ubuntu, which would certainly speed up certain actions. And what about compiler flags? How was the kernel compiled – with a generic install or built specifically for the system?

Even using different kernel versions could make for huge variations – probably in Gentoo's favor. In fact, I know 2.6.28 is in Portage, so why wasn't that used? Being an nVidia user, I can also tell you that the binary nVidia drivers can differ night and day between versions, with one version tying up resources or causing unnecessary processor load, and even a minor version step in either direction eliminating it.

I really, really appreciate the effort, and I'm not trying to be a troll, but the fact that Ubuntu does a lot for you and Gentoo does nothing means that trying to compare them is a very difficult process. For example, if I were benchmarking a Windows program, would you consider the output from Windows 2000 comparable to Windows XP Professional, even though they're largely similar? Or between XP and XP SP3?

EDIT: Kudos to everyone praising Gentoo; I agree that portage is the best package manager out there, binary, source, or otherwise!

@ewildgoose: In my opinion, if you're already on Gentoo, you've surpassed the need for Arch. I tried Arch for a few months, and while it does give you the "compile from source" aspect, it just doesn't compare with Gentoo's user and community base. What I usually recommend is that for people who are looking to branch out from Ubuntu (or a similar distro) and start diving into behind-the-scenes Linux, Arch is a good stepping stone.


The author\’s computers have spent a ton of time compiling to give us this report. Thanks.

So how did -Os beat all in some tests? I suspect two possible causes: one, cache hits in the Intel chip would be higher with a smaller program; two, disk reading time is overwhelming the test time, so smaller programs appear slightly faster.

Might be interesting to re-run the encoder tests with everything in a ram disk.


Add the -march=native optimization flag and be impressed.


    -O2 -march=native -mtune=native -mno-aes -mvzeroupper -pipe = Corei7-AVX or any new Xeon …

    Extreme performance. Sadly these benchmarks are not accurate, as simply using "-O2/-O3/-Os" does not leverage the full capabilities of GCC.
    Besides that, if you read the GCC manual you can in fact use ANY GCC options beyond the standard ones, including those introduced in the newer versions. Of course use your discretion and test; however, if you are looking for these optimizations (the Gentoo documentation does not make this clear), you can edit /usr/portage/eclass/toolchain.eclass and /usr/portage/eclass/toolchain-binutils.eclass (for binutils)…

    Here you can edit linker build-id options, enable ESP/ESPF hardening, libssp, GO, Graphite opts, advanced C++ compilation options (GCC 4.8.x), and the list goes on. Essentially, anything you can find here: and here: is at your disposal.

    I write this as a 13-year Gentoo user, on a Core i7 3930K with SSD RAID arrays and 32GB DDR3 quad-channel RAM, on a Gentoo system compiled with GCC 4.8.2, the latest svn-src binutils, and -O2 -march=native -mtune=native -mno-aes -mvzeroupper -pipe for C/CXXFLAGS (pretty conservative), and I dare say I'd put this machine up against any single-proc server or even dual-proc…


-Os, -O1, -O2, -O3 – these are only the basics.

You need to set -march=native and -mtune=native, and optionally -fomit-frame-pointer if your arch is x86.


    -march overrides -mtune. Also, -fomit-frame-pointer is not supported by many packages when reporting bugs.


      -march overrides -mtune on most packages. -fomit-frame-pointer breaks debugging, but that can easily be circumvented by adding FEATURES="splitdebug" to /etc/make.conf to place debugging info in a separate file. (If you use Valgrind on Gentoo – or any platform – glibc needs to be compiled with debugging symbols: emerge glibc, and your entire system, with FEATURES="splitdebug" and you'll eliminate the binary bloat but still have the debug info.)


The kernel can take different CFLAGS as well; you need to edit the kernel Makefile. Users of Gentoo are, and I say this in the nicest way possible, "usually" more advanced users than deb/RPM-based distro users. Not always the case, of course…



