Gentoo Optimizations Benchmarked – Part 2

Gentoo is a source-based distribution that lets the user decide how to optimize their system in many ways, including building for a specific CPU architecture. Linux Magazine benchmarks four such options: i486, i686, pentium3 and core2, and throws in Ubuntu for good measure.

Gentoo is still the most popular source-based distribution, famous for its ports-like package management system, Portage. It can build custom binaries from source, based on features specified through USE flags. Linux Magazine recently provided an overview of the distribution in celebration of its tenth anniversary.

The ability to build from source introduces other possibilities, like optimizing binaries built with GCC. Previously, Linux Magazine benchmarked three of the most popular GCC optimization levels, namely -Os, -O2 and -O3. The results showed that -Os was often the slowest and that -O2 performed the most consistently.

Now it's time to see what can be gained by building an operating system for a specific CPU architecture. Most Linux distributions are built for the lowest common denominator, somewhere between i386 and i686. This helps ensure that they will run on as many computers as possible. Code built for a specific CPU with GCC's -march option (such as Intel's Core 2 Duo) may not run on other systems. Does building a system from scratch for a specific CPU provide any sizeable gains in performance? Let's take a look!

Test Setup

The computer system used in these tests has a 3GHz Intel Core 2 Duo E8400 CPU on an Intel P35/ICH9 chipset. In order to accurately compare the various CPU compiler options, these tests were all done in a 32-bit environment. Four different CPU optimizations were chosen, namely i486, i686, pentium3 (with MMX and SSE support) and core2 (with MMX, SSE, SSE2, SSE3 and SSE4.1 support). The core2 system was in fact built using GCC's -march=native compiler option, which detects the CPU and applies supported optimizations automatically. The GCC optimization level used for each system was -O2, as this appears to offer the best overall performance, in accordance with our previous findings.
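For readers following along at home, the build flags for the four systems would be set in /etc/make.conf roughly as follows. This is a minimal sketch reconstructing the setup described above, not the exact configuration used for the tests:

```shell
# /etc/make.conf -- sketch of the per-system build flags (illustrative;
# only the CFLAGS line differs between the four test systems)

# i486 system:     CFLAGS="-O2 -march=i486 -pipe"
# i686 system:     CFLAGS="-O2 -march=i686 -pipe"
# pentium3 system: CFLAGS="-O2 -march=pentium3 -pipe"

# core2 system, using GCC's automatic CPU detection on the E8400:
CFLAGS="-O2 -march=native -pipe"
CXXFLAGS="${CFLAGS}"
```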

Ubuntu is included for interest's sake and for basic comparison purposes only; this is not an apples-to-apples comparison. Ubuntu 9.10 (Karmic Koala) was chosen because it is the current stable version. Each installed system is just the base with any required dependencies for testing, plus X.Org and Xfce 4.

The Hardware

Processor: Intel Core 2 Duo CPU E8400 @ 3.00GHz (total cores: 2)
Motherboard: Gigabyte EP35-DS3P
Chipset: Intel 82G33/G31/P35/P31 + ICH9R
System Memory: 3965MB
Disk: 500GB Western Digital WD5001AALS-0
Graphics: GeForce 8800 GT
Monitor: Samsung SyncMaster

Gentoo System

The Gentoo system was built from the testing branch using the same USE flags and packages for each system. For each of the different GCC CPU optimizations, the system's toolchain was rebuilt twice, then the entire system set was rebuilt twice, and finally world. All systems used the same kernel configuration and packages, but were built with the specified GCC optimizations.
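The rebuild sequence described above can be outlined with portage commands. This is an illustrative sketch of the procedure, not the exact script used for the tests:

```shell
# After changing CFLAGS in /etc/make.conf:

# 1. Rebuild the toolchain twice, so the compiler itself ends up
#    built with the new flags:
emerge --oneshot binutils gcc glibc
emerge --oneshot binutils gcc glibc

# 2. Rebuild the entire system set twice against the new toolchain:
emerge --emptytree system
emerge --emptytree system

# 3. Finally, rebuild everything in world:
emerge --emptytree world
```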

Kernel: 2.6.32-gentoo-r5 (i686)
Compiler: GCC 4.4.3
File-System: ext4
Desktop: Xfce 4.7.0
Display Server: X.Org Server 1.7.5
OpenGL: 3.2.0, NVIDIA driver 190.53
Screen Resolution: 1280×1024

The make profile used was x86/10.0 with the following additional USE flags set:
custom-cflags custom-optimizations opengl qt3support sse sse2 threads X

In addition, the pentium3 and core2 systems had both the mmx and sse USE flags enabled, with core2 also having sse2 and ssse3.

Ubuntu System

The Ubuntu system ran the following software.

Kernel: 2.6.31-19-generic (i686)
Compiler: GCC 4.4.1
File-System: ext4
Desktop: Xfce 4.6.1
Display Server: X.Org Server 1.6.4
OpenGL: 3.0.0, NVIDIA driver 185.18.36
Screen Resolution: 1280×1024

Next: Testing method

Comments on "Gentoo Optimizations Benchmarked – Part 2"


Great article. Being a Gentoo and Ubuntu lover, the article certainly provides some insights worth bearing in mind. It's also interesting to see that those hardware specs are hackintosh-friendly! I'm pretty sure those components lead a double life!!


Great article. I've always wondered what performance benefits, if any, you could gain from a source-based distribution like Gentoo.
Not to wish work on anyone, but it would be interesting to see what the advantages would be of using the Intel compiler. Intel has a long history with this compiler; I believe it even predates GCC.


I've always wondered what performance benefits, if any, you could gain from a source-based distribution like Gentoo.

The answer being that this is the wrong question…

In fact my guess would be that the fastest binaries would be generated by carefully benchmarking each bit of software to determine the best compile options and then distributing the compiled binaries…

...However, that gets us back to a "normal" binary distribution again (rpm/deb, etc.), plus all the associated dependency hell that goes with it

In my opinion (Gentoo on a dozen servers), the point of a source-based distro is good control over dependencies and the ability to stay as bleeding-edge or as stable as you need to. So I can have one server running an antique glibc with a cutting-edge MySQL and another server running a cutting-edge glibc with some old version of nginx (or whatever)

Additionally it's very easy to update a bunch of servers over long periods of time without suffering the downtime of a full "upgrade". Show me a CentOS system installed 4 years ago and yet still running bang up to date glibc/mysql/apache, etc.

Finally, given a build template to create SomeSoftware-V1, it's usually fairly trivial to bump the template to create SomeSoftware-V1.1, and hence source distros can usually help you stay up to date with the bleeding edge more easily. Now someone will shout that this is inappropriate for servers, but even there you sometimes want to track some new and fast-moving utility; e.g. my MySQL servers are pinned to a known version, but I track bleeding-edge Maatkit MySQL tools (since they are in fast development mode right now)

More related to Gentoo, and less to source distros in general: Gentoo makes it very easy to have machine "profiles" which mandate software versions, optional features (USE flags), compile options and even base packages which must/must not be installed. So for example I have a web server template which requires a modern GCC, hardened compiler flags, and certain versions of nginx to be installed, again with various default modules that nginx should support. So in many ways it's something like Kickstart for Red Hat... But the cool thing is I can update the template right now and then all my machines pull in the changes and rebuild whatever is required to get to the end result!

The penalty of source-based distros is package generation time (essentially Gentoo still needs a binary package, it just builds it on demand instead of it being pre-built). However, for my needs I mitigate this by keeping all my servers "similar", and then my package repository is automatically re-used after the first machine updates.

So my usual update procedure is to copy one of my virtual servers, test the upgrade in the copy, if all goes well then run the upgrades on the live server(s) (takes only seconds per package).

Gentoo is likely not suitable for the majority audience, but if you have strong admin skills then it's a great fit and will let you run lots of servers with variable configs very easily


Well said, ewildgoose. Source-based distros are really about flexibility and control. You can decide what the system will be, down to the very core libraries and "optimizations", and make it your own. Binary distros make a whole bunch of decisions for you, including basic things such as features, dependencies, even how applications and daemons are configured.

Gentoo provides a flexible framework for you to do whatever it is you want, yourself.



For example I just updated one of my linux-vservers from:
- gcc 3.4 -> 4.3
- glibc 2.9 -> 2.10
- mysql jumped from some older release to 5.0.84
- nginx jumped from 0.6 to 0.7

No particular issues to note from upgrading…

On the other hand I have a CentOS box running my phone system, and it drives me nuts that I remain pinned to antique versions of stuff I would like to upgrade; I either need to ditch the CentOS packages and roll my own, or take some other equally painful path

However, Gentoo likely does NOT suit the average punter who does not need that level of control. There is quite a significant complexity overhead to get to grips with. So it comes down to the old adage of choosing the best tool for the job…


I'm not certain what this article was trying to prove. Testing code compiled for older processors on a newer processor just shows how design changes in newer processors support using older instructions.

If I build an embedded system using a 486 class processor, I would want everything compiled for that processor. The same holds true that if I build a newer system, I would want everything compiled to take advantage of new specialized instructions.

Binary packages can only be compiled to take advantage of a common subset of features found on the oldest supported processor for a given build. Hence some distros have separate builds for x86, i386, i686, …

Building from source does take time, but with the proper compile options selected, you can make better use of CPU features. The gains may be small, but they can help. Gentoo does offer the ability to install using binaries instead of source if desired.

It may be baptism by fire, but anyone who really wants to understand how everything works in Linux needs to build at least one system from scratch using a distro like Gentoo.


I would be interested to see the effect that using "-march=native -ftree-vectorize" has on the benchmarks, as this would more properly allow GCC to make full use of the particular processor, which is the definite benefit of using Gentoo or other source-based distributions.

It most certainly makes a difference on 64-bit code; I'm not so sure about the effect on 32-bit code.

It is also possible to specify per-package CFLAGS/CXXFLAGS which allows tuning of individual packages if there is anything that requires maximum performance, rather than just the generic system CFLAGS/CXXFLAGS.
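For the curious, one way to do this without touching ebuilds is portage's package.env mechanism (available in recent portage versions). The package names and file name below are purely illustrative:

```shell
# /etc/portage/env/highperf.conf -- extra tuning for selected packages
CFLAGS="-march=native -O2 -ftree-vectorize -pipe"
CXXFLAGS="${CFLAGS}"

# /etc/portage/package.env -- map packages to that environment file
# (one "category/package envfile" entry per line), e.g.:
#   media-video/ffmpeg highperf.conf
#   media-libs/x264    highperf.conf
```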


>>It is also possible to specify per-package CFLAGS/CXXFLAGS which allows tuning of individual packages<<
Who will do this for the 1,500 or so individual packages on an average system? I have noticed that while some ebuilds do specify CFLAGS/CXXFLAGS, most don't.

I've been a Gentoo'ist for around 6 to 7 years and have been best served by using minimal flags, just "-march=(arch) -O2 -pipe".

However, USE flags are another matter. I think these make large differences in the system build, and while the USE flags in my make.conf are very few, I do have a very large package.use file. Nearly a rule for every package on my system.
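For anyone unfamiliar with the file, per-package USE rules live in /etc/portage/package.use, one package per line. The entries below are hypothetical examples of the sort of rules described, not an actual file:

```shell
# /etc/portage/package.use -- per-package USE overrides (illustrative)
media-video/mplayer mmx sse sse2 -aalib
app-editors/vim -gtk minimal
net-misc/curl ssl -ldap
```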

Regards, Rob


Couldn't agree more, golding, hence the words "possible to"… I personally run with "-march=native -O2 -ftree-vectorize -pipe"… or at least have done so since gcc-4.4.0 (now on gcc-4.4.3)… previously had no "-ftree-vectorize"…

If you are going to do a lot of video/audio work, then I would be tempted to look at per-package optimisations for some of the video/audio codecs; indeed I am experimenting with hugin and its dependencies, mainly just for the sake of curiosity, using some of the new Graphite flags. BUT I wouldn't use them for the whole system.

USE flags are amazing and, indeed, the main reason for using Gentoo. I think I probably took the opposite approach to yours: I have a large number, mainly video/audio/image flags, in my /etc/make.conf, but I do have some overrides in /etc/portage/package.use.



You should try this test with an ATI video card and robng15's suggestions. It would be very interesting.


I’d be interested to see this comparison with AMD. I suspect that AMD’s contributions to the GCC compiler may result in more substantial benefits on AMD architectures. Intel, I suspect, focused more on their Intel compiler.


