Gentoo is a source-based distribution that lets users decide how to optimize their system in many ways, including building for a specific CPU architecture. Linux Magazine benchmarks four such options (i486, i686, pentium3 and core2) and throws in Ubuntu for good measure.
Summary
The results of the tests show that the biggest jump in performance is from i486 to i686, and that there is not much extra to gain from then on. One would have expected to see a greater advantage for the more optimised binaries when it came to encoding; however, this also comes down to the program itself and whether it can make use of extensions such as MMX and SSE (something which even an i486 binary can still do).
The biggest surprise was in Ogg encoding, where the Core2 system was twice as slow as the others, including Pentium3! This does not seem right; however, the test was repeated several times, all with the same results. It may be an issue with the Ogg encoder itself when used with the highly optimized Core2 binary.
Granted, we are not comparing even systems here, but it is interesting to note that Ubuntu fared well in some tests but fell behind in others. Generally, it performed at or below the Gentoo i486 level. The main benefit Gentoo has over Ubuntu is the ability to fine-tune the system, leaving out what you don't want and including the components you do. This means the system can be leaner, which results in better performance and less overhead. The systems used in these tests were already very lean, with just X.Org and Xfce4.
So does it pay to optimize your operating system specifically for your CPU? It sure does, but only just. The performance of the Core2 binaries over i686 was marginal, and there was certainly a much larger performance gain from i486 to i686. Nevertheless, combining a CPU-optimized binary with a specific GCC optimization (such as -Os) to fit your specific requirements may well be a very attractive proposition. Of course, the question will be whether the trade-off between compile time and performance gain is worthwhile, but then, that's not what Gentoo's really about.
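For readers following along at home, such a combination is set through the CFLAGS variables in /etc/portage/make.conf. A minimal, illustrative fragment follows; the -march value is our example (pick the one matching your CPU), not something taken from the benchmarks:

```shell
# Illustrative /etc/portage/make.conf fragment: pair a CPU-specific
# -march with a size-oriented optimization level such as -Os.
CFLAGS="-march=core2 -Os -pipe"
CXXFLAGS="${CFLAGS}"   # keep C and C++ flags in sync
# (echo only demonstrates the expanded value; it is not part of make.conf)
echo "${CXXFLAGS}"
```

Portage passes these flags to gcc for every package it builds, which is why a single line here shapes the whole system.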
Conclusion
At its core, Gentoo is about flexibility, not about optimizing code to run the fastest. It’s about being able to make your system whatever you want it to be, through the use of USE flags. While primarily referring to extra GCC optimizations (and not the basic CPU architecture and optimization level), the Gentoo Optimization Guide says the following:
“While CFLAGS and CXXFLAGS can be very effective means of getting source code to produce smaller and/or faster binaries, they can also impair the function of your code, bloat its size, slow down its execution time, or even cause compilation failures.
CFLAGS are not a magic bullet; they will not automatically make your system run any faster or your binaries to take up less space on disk. Adding more and more flags in an attempt to optimize your system is a sure recipe for failure. There is a point at which you will reach diminishing returns.
Despite the bragging you’ll find on the internet, aggressive CFLAGS and CXXFLAGS are far more likely to harm your programs than do them any good. Keep in mind that the reason the flags exist in the first place is because they are designed to be used at specific places for specific purposes. Just because one particular CFLAG is good for one bit of code doesn’t mean that it is suited to compiling everything you will ever install on your machine!”
So there you have it, straight from the horse's mouth. Despite the reputation it has gathered over time due to a small minority, Gentoo is not about highly optimizing your system to get that "extra 1% performance." Rather, its strength lies in the flexibility provided by its package management system.
Nevertheless, these tests do show that, as a result of compiling from source, Gentoo offers some ever-so-slight additional performance benefits (from i686 to Core2) which can be put to the user's advantage. In reality, however, these benefits are so tiny that they might simply not justify the effort (and they also make the system less portable). A specific GCC optimization level such as -Os is probably a better reason for building from source.
It's fair to say that optimizing for a specific CPU architecture over i686 does not result in a sizeable performance increase. It was, however, common to see a 20% performance increase over the i486 systems. What does make a huge difference (primarily in encoding and cryptography) is the use of CPU instructions such as MMX and SSE. This really comes down to the coding of a particular application and whether it includes the ability to use them. An application built for i686 can still make use of these, taking advantage of both worlds: remaining a lowest common denominator while still making use of faster, more modern CPUs.
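The key is that applications typically probe for these instruction sets at runtime before choosing a code path. The shell sketch below is our rough analogue of that check, using Linux's standard /proc/cpuinfo interface; real applications do the equivalent in code via the cpuid instruction:

```shell
# Sketch of runtime CPU-feature detection: media applications do the
# equivalent internally (via cpuid) before selecting MMX/SSE code paths,
# which is how even a generic i486/i686 binary can use them.
if grep -qw sse /proc/cpuinfo 2>/dev/null; then
    echo "sse available"
else
    echo "sse not available"
fi
```

On any x86 Linux machine of this era the first branch fires; on a system without /proc/cpuinfo the check degrades gracefully to the fallback path, just as a well-written application falls back to plain code.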
We must also remember that benchmarks are just benchmarks and are not necessarily a true representation of real world environments. There are lots of other factors which come into play, including the amount of other processes running and resources used on the system. This is usually where Gentoo has a major advantage, because users can easily strip out all the extras that their system doesn’t need.
So, while optimizing your computer for a specific CPU may not be everything, if you’re building from source anyway then applying some specific optimizations for your system generally isn’t going to hurt (especially if you can take advantage of CPU instruction sets). And besides, it’s a whole lot of fun!
Christopher Smart has been using Linux since 1999. In 2005 he created Kororaa Linux, which delivered the world's first Live CD showcasing 3D desktop effects. He also founded the MakeTheMove website, which introduces users to free software and encourages them to switch. In his spare time he enjoys writing articles on free software.
Comments on "Gentoo Optimizations Benchmarked – Part 2"
Great article. Being a gentoo and ubuntu lover, the article certainly provides some insights worth bearing in mind. It's also interesting to see those hardware specs are hackintosh-friendly! I'm pretty sure those components lead a double life!!
Great article. I've always wondered what the performance benefits, if any, you could gain from a source based distribution like Gentoo.
Not to wish any work on anyone, but it would be interesting to see what the advantages would be of using the Intel compiler. Intel has a long history with this compiler; I believe that it even predates gcc.
The answer being that this is the wrong question…
In fact my guess would be that the fastest binaries would be generated by carefully benchmarking each bit of software to determine the best compile options and then distributing the compiled binaries…
..However, that gets us back to a "normal" binary distribution again (rpm/deb, etc), plus all the associated dependency hell that goes with it
In my opinion (gentoo on a dozen servers), the point of a source based distro is good control over dependencies and the ability to stay as bleeding edge or as stable as you need to. So I can have one server running an antique glibc with a cutting-edge mysql and another server running a cutting-edge glibc with some old version of nginx (or whatever)
Additionally, it's very easy to update a bunch of servers over long periods of time without suffering the downtime of a full "upgrade". Show me a CentOS system installed 4 years ago that is still running bang up to date glibc/mysql/apache, etc.
Finally, given a build template to create SomeSoftware-V1, it's usually fairly trivial to bump the template to create SomeSoftware-V1.1, and hence source distros can usually help you stay up to date with the bleeding edge more easily. Now someone will shout that this is inappropriate for servers, but even there you sometimes want to track some new and fast-moving utility; e.g. my MySQL servers are pinned to a known version, but I track bleeding-edge Maatkit mysql tools (since they are in fast development mode right now)
More related to Gentoo, and less to source distros in general: Gentoo makes it very easy to have machine "profiles" which mandate software versions, optional features (USE flags), compile options and even base packages which must/must not be installed. For example, I have a web server template which requires a modern gcc, hardened compiler flags, and certain versions of nginx, again with various default modules that nginx should support. So in many ways it is something like Kickstart for Red Hat... But the cool thing is I can update the template right now, and then all my machines pull in the changes and rebuild whatever is required to get to the end result!
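As a sketch of what such a template can pin down, the file paths below follow standard Portage conventions; the package atoms are our illustrative examples, not the commenter's actual configuration (the demo writes under /tmp rather than /etc/portage so it is safe to run):

```shell
# Sketch: per-machine Portage policy lives in plain text files.
# Real files belong under /etc/portage/; /tmp is used for demonstration.
mkdir -p /tmp/portage-demo
# Per-package USE flags (features a package must/must not be built with):
printf '%s\n' 'www-servers/nginx ssl -ipv6' > /tmp/portage-demo/package.use
# Version pinning: mask anything newer than the approved nginx series:
printf '%s\n' '>www-servers/nginx-0.7.999' > /tmp/portage-demo/package.mask
cat /tmp/portage-demo/package.use /tmp/portage-demo/package.mask
```

Because these are just text files, a "template" can be kept in version control and pushed to every machine, which is what makes the rebuild-to-match workflow practical.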
The penalty of source based distros is package generation time (essentially Gentoo still needs a binary package; it just builds it on demand instead of it being pre-built). However, for my needs I mitigate this by keeping all my servers "similar", so my package repository is automatically re-used after the first machine updates.
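The reuse described here maps onto Portage's binary-package support; FEATURES="buildpkg" and the emerge flags below are real Portage options, while the directory and the workflow framing are our sketch of the commenter's setup:

```shell
# Sketch: make.conf settings so the first server to build a package
# keeps a binary copy that similar servers can install without
# recompiling. (Directory path is illustrative.)
FEATURES="buildpkg"            # save a binary package for each build
PKGDIR="/var/tmp/binpkgs-demo" # where the binary packages are stored
echo "${FEATURES} -> ${PKGDIR}"
# Subsequent servers sharing PKGDIR would then run something like:
#   emerge --usepkg --update --deep world
```

This is why updates on the second and later machines "take only seconds per package": the compile happened once, elsewhere.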
So my usual update procedure is to copy one of my virtual servers, test the upgrade in the copy, if all goes well then run the upgrades on the live server(s) (takes only seconds per package).
Gentoo is likely not suitable for the majority audience, but if you have strong admin skills then it's a great fit and will let you run lots of servers with variable configs very easily
Well said, ewildgoose. Source based distros are really about flexibility and control. You can decide what the system will be, down to the very core libraries and "optimizations", and make it your own. Binary distros make a whole bunch of decisions for you, including basic things such as features, dependencies, and even configuring applications and daemons.
Gentoo provides a flexible framework for you to do whatever it is you want, yourself.
-c
For example I just updated one of my linux-vservers from:
- gcc 3.4 -> 4.3
- glibc 2.9 -> 2.10
- mysql jumped from some older release to 5.0.84
- nginx jumped from 0.6 to 0.7
No particular issues to note from upgrading…
On the other hand, I have a CentOS box running my phone system, and it drives me nuts that I remain pinned to antique versions of stuff I would like to upgrade; I either need to ditch the CentOS packages and roll my own, or take some other equally painful path
However, Gentoo likely does NOT suit the average punter who does not need that level of control. There is quite a significant complexity overhead to get to grips with. So it comes down to the old adage of choosing the best tool for the job…
I'm not certain what this article was trying to prove. Testing code compiled for older processors on a newer processor just shows how design changes in newer processors support using older instructions.
If I build an embedded system using a 486 class processor, I would want everything compiled for that processor. The same holds true that if I build a newer system, I would want everything compiled to take advantage of new specialized instructions.
Binary packages can only be compiled to take advantage of a common subset of features found on the oldest supported processor for a given build. Hence some distros have separate builds for x86, i386, i686, …
Building from source does take time, but with the proper compile options selected, you can make better use of CPU features. The gains may be small, but they can help. Gentoo does offer the ability to install using binaries instead of source if desired.
It may be baptism by fire, but anyone who really wants to understand how everything works in Linux needs to build at least one system from scratch using a distro like Gentoo.
I would be interested to see the effect that using \”-march=native -ftree-vectorize\” has on the benchmarks, as this would more properly allow gcc to make full use of the particular processor, which is the definite benefit of using Gentoo or other source based distributions.
It most certainly makes a difference on 64-bit code; I am not so sure of the effect on 32-bit code.
It is also possible to specify per-package CFLAGS/CXXFLAGS which allows tuning of individual packages if there is anything that requires maximum performance, rather than just the generic system CFLAGS/CXXFLAGS.
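One way Portage supports this is the /etc/portage/env plus package.env mechanism; the paths and mechanism are standard Portage, while the package atom and flag choices are our illustration (written under /tmp so the sketch is safe to run):

```shell
# Sketch: per-package CFLAGS via Portage's package.env mechanism.
# Real files belong under /etc/portage/; /tmp is used for demonstration.
mkdir -p /tmp/portage/env
# An environment file holding the aggressive flags:
cat > /tmp/portage/env/fast.conf <<'EOF'
CFLAGS="-march=native -O2 -ftree-vectorize"
CXXFLAGS="${CFLAGS}"
EOF
# package.env maps individual packages (atom is illustrative) to it:
echo 'media-video/ffmpeg fast.conf' > /tmp/portage/package.env
cat /tmp/portage/package.env
```

Everything else on the system keeps the conservative global CFLAGS; only the packages listed in package.env pick up the tuned set.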
>>It is also possible to specify per-package CFLAGS/CXXFLAGS which allows tuning of individual packages<<
Who will do this for the fifteen hundred or so individual packages on an average system? I have noticed that while some ebuilds do specify CFLAGS/CXXFLAGS, most don't.
I've been a Gentoo'ist for around 6 to 7 years and have been best served by using minimal flags, just "-march=(arch) -O2 -pipe".
However, USE flags are another matter. I think these make large differences in the system build, and while the USE flags in my make.conf are very few, I do have a very large packages.use file. Nearly a rule for every package on my system.
Regards, Rob
Couldn't agree more, golding, hence the words "possible to"… I personally run with "-march=native -O2 -ftree-vectorize -pipe", or at least have done so since gcc-4.4.0 (now on gcc-4.4.3); previously I had no "-ftree-vectorize".
If you are going to do a lot of video/audio work, then I would be tempted to look at per-package optimisations for some of the video/audio codecs; indeed I am experimenting with hugin and its dependencies, mainly just for the sake of curiosity, using some of the new graphite flags. BUT I wouldn't use them for the whole system.
USE flags are amazing, and indeed the main reason for using gentoo. I think I probably took the opposite approach to yourself, as I have a large number, mainly video/audio/image flags, in my /etc/make.conf, but I do have some overrides in /etc/portage/packages.use.
Rob.
You should try this test with an ATI video card and robng15\’s suggestions. It would be very interesting.
I’d be interested to see this comparison with AMD. I suspect that AMD’s contributions to the GCC compiler may result in more substantial benefits on AMD architectures. Intel, I suspect, focused more on their Intel compiler.