Tuning Up Your Linux Machines

Wednesday marked the beginning of the actual conference, as opposed to just the workshops and training sessions of the first few days. Attendance was much larger than earlier in the week, with people coming for the technical sessions and invited paper talks, and vendors had their booths set up to talk about their latest projects. As full as the schedule of the past few days has been, today's is near the bursting point.

Rather than attend the opening session, I opted to sit in on the “Linux Performance Tuning” training session presented by Theodore Ts’o. While the opening featured a keynote by Werner Vogels, I felt that tuning specific bits of the Linux OS had been a gap in my knowledge for too long, and this training session afforded an excellent opportunity to learn from the best. Ted Ts’o is a key developer of the ext4 file system, has been a kernel developer since the beginning of the Linux kernel, and works for the IBM Linux Technology Center. It’s something of an understatement to say that he’s qualified to teach this class.

Over the long development of Linux, performance has improved as code became more efficient, but also because the way the software works with the hardware has improved. Over time, hardware has grown faster, but it has also gained more configurable settings, which gives the operating system new and interesting ways to tweak performance.

Tuning Linux is hard, because potential system bottlenecks are everywhere. What’s worse, certain bottlenecks often present as entirely different issues; memory under-performance is notorious for this, so learning the tools for proper diagnosis is very important.

Ted started out by working with three tools that nearly everyone uses, but probably not to their fullest extent: free, top, and iostat. The first, ‘free’, displays memory utilization, while ‘top’ displays an updating screen with a system summary at the top and a list of recently active programs at the bottom. The command ‘iostat’, part of the sysstat package, reports I/O performance for the system and its attached storage. Each of these commands has far more columns and options than most people ever use, so make sure to read the man pages for all three.
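As a quick sketch of how these three might be driven from a script rather than interactively (the specific flags below are common procps/sysstat ones, but check the man pages on your distribution):

```shell
# Snapshot of memory usage in mebibytes, including the buffers/cache
# figures that often make "free" memory look smaller than it really is.
free -m

# One batch-mode iteration of top: the system summary plus the process
# list, suitable for logging or piping instead of interactive watching.
top -b -n 1 | head -n 15

# Extended per-device I/O statistics, two samples one second apart.
# iostat ships in the sysstat package, so guard for systems without it.
if command -v iostat >/dev/null 2>&1; then
    iostat -x 1 2
else
    echo "iostat not installed (sysstat package)"
fi
```

The batch flags (`top -b -n 1`, the second `iostat` sample) matter because the first sample of these tools reports averages since boot; later samples reflect current activity.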

It should be noted that regardless of the tool used to evaluate your system, the impact of running the tool is never zero. There is always some resource utilization by the evaluation tool itself, and depending on the tool, it can be nontrivial. Don’t let this stop you; just be aware that by watching the output, you are affecting the output as well.

The most interesting parts of the class were the bits of information that Ted added from his experience working with file systems and storage. These included great pieces of advice, like: for the next year or so, at least, don’t buy a laptop with a solid state drive. The drives that manufacturers install have poor performance and high prices. Instead, get a spindle drive and replace it aftermarket with a good (and somewhat cheaper) solid state drive. Another great tip: if you are using local storage and it’s not solid state, you can partition your drive to take advantage of the fact that the outside of the spinning disk moves much faster than the inside. Put the file systems that need the fastest read/write access in the first partitions to take advantage of this. In fact, Ted suggested that taking a fast terabyte drive and partitioning only the first 300GB or so would yield performance approaching that of solid state media. This class was a wealth of knowledge!

Also covered were network tuning, which can improve performance on extremely fast or extremely slow networks, and file system tuning, which can help when dealing with special cases such as large files, small files, quick access, and the like. In addition, Ted devoted a section to NFS tuning, which is, in many ways, a combination of disk tuning, network tuning, and memory tuning, plus several protocol options that can be tweaked. The good news for NFS is that modern defaults are very good and should only be changed when clients are atypical, although Ted did suggest using the ‘intr’ mount option to let you recover more gracefully, should the NFS server die.
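For illustration, an fstab entry using that option might look like this (the server name and export path are hypothetical; note also that, on kernels since 2.6.25, ‘intr’ is accepted but ignored, since NFS operations are always interruptible by SIGKILL on modern systems):

```
# /etc/fstab -- hypothetical NFS mount; server and paths are made up.
# 'hard' retries indefinitely on server failure; 'intr' lets signals
# interrupt those hung operations so processes aren't stuck forever.
nfsserver:/export/home  /home  nfs  rw,hard,intr,tcp  0  0
```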

Overall, this was definitely one of the most valuable training sessions I took this week, if not the most valuable. I’m not an expert, but I have a lot more tools and knowledge, and as a mostly-Linux administrator, I can tell you that some things in my infrastructure are going to improve as a direct result of being in this class. It’s a tragedy that it was only a half day.

I’m not blogging to do a sales pitch, but let me just say this: if you have considered coming to LISA but weren’t sure, or couldn’t provide justification, take my word that this class, by itself, was more than worth the entire trip.

-
By Matt Simmons, author of the Standalone Sysadmin blog
