<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Storage Pools and Snapshots with Logical Volume Management</title>
	<atom:link href="http://www.linux-mag.com/id/7454/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7454/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: dog</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6803</link>
		<dc:creator>dog</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6803</guid>
		<description>&lt;blockquote&gt;&lt;p&gt;GUI Tools&lt;/p&gt;
&lt;p&gt;The command line tools for LVM are not too complicated but for novices it can be a bit daunting. So to help here are 3 GUI tools for LVM.&lt;/p&gt;
&lt;p&gt;* EVMS
&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;EVMS? Eh? This isn&#039;t a GUI tool; it is an alternative volume management system written mostly by IBM.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<blockquote><p>GUI Tools</p>
<p>The command line tools for LVM are not too complicated but for novices it can be a bit daunting. So to help here are 3 GUI tools for LVM.</p>
<p>* EVMS
</p>
</blockquote>
<p>EVMS? Eh? This isn&#8217;t a GUI tool; it is an alternative volume management system written mostly by IBM.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: littlemonkeymojo</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6804</link>
		<dc:creator>littlemonkeymojo</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6804</guid>
<description>&lt;p&gt;Actually, if you look at the EVMS webpage (linked in the article) you&#039;ll see that EVMS may be an alternative VMS, but it was also written (along with a GUI) to work with Linux LVM and Linux MD/Software RAID devices.
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>Actually, if you look at the EVMS webpage (linked in the article) you&#8217;ll see that EVMS may be an alternative VMS, but it was also written (along with a GUI) to work with Linux LVM and Linux MD/Software RAID devices.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jpappas</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6805</link>
		<dc:creator>jpappas</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6805</guid>
<description>&lt;p&gt;Great article!  The only thing that I thought was slightly unclear was the heuristic for the snapshot sizing.  The referenced method allows for slightly more than 100% change in existing data and requires the VG to contain at least that much free space.  While there is definitely value in such a large COW cache, it is not always possible to have that much coverage.  For the sake of clarity, it bears mention that the &quot;size&quot; given to the snapshot is the cache for changes made to data existing at the time of the snapshot, and does not have to be nearly as large as the existing data, but large enough to cover the changes incurred during the expected life of the snapshot.&lt;br /&gt;
It also may have been good to cover the removal of the snapshot as well, although any user delving into LVM should learn lvremove quickly.&lt;/p&gt;
&lt;p&gt;Next up LVM+MD?
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>Great article!  The only thing that I thought was slightly unclear was the heuristic for the snapshot sizing.  The referenced method allows for slightly more than 100% change in existing data and requires the VG to contain at least that much free space.  While there is definitely value in such a large COW cache, it is not always possible to have that much coverage.  For the sake of clarity, it bears mention that the &#8220;size&#8221; given to the snapshot is the cache for changes made to data existing at the time of the snapshot, and does not have to be nearly as large as the existing data, but large enough to cover the changes incurred during the expected life of the snapshot.<br />
It also may have been good to cover the removal of the snapshot as well, although any user delving into LVM should learn lvremove quickly.</p>
<p>Next up LVM+MD?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: clowenstein</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6806</link>
		<dc:creator>clowenstein</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6806</guid>
		<description>&lt;p&gt;This shows an example of combining physical partitions into a VG.  Is there a good reason to divide a physical device (/dev/sdb1, /dev/sdb2) and then recombine those partitions into /dev/primary_vg?
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>This shows an example of combining physical partitions into a VG.  Is there a good reason to divide a physical device (/dev/sdb1, /dev/sdb2) and then recombine those partitions into /dev/primary_vg?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: thatblackguy</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6807</link>
		<dc:creator>thatblackguy</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6807</guid>
<description>&lt;p&gt;This was a good review.  Here is another tip if you plan to use LVM effectively.  These suggestions will vary based on the distribution that you use, so keep that in mind.&lt;/p&gt;
&lt;p&gt;I tend to split up the system so that it is easier to manage file systems that tend to grow often.  For instance if you take the OS, and separate the DATA, then you might end up with something like this.&lt;/p&gt;
&lt;p&gt;/dev/os/root&lt;br /&gt;
/dev/os/usr&lt;br /&gt;
/dev/os/tmp&lt;br /&gt;
/dev/os/opt&lt;br /&gt;
/dev/os/swap&lt;/p&gt;
&lt;p&gt;and the data you may put in a completely different VG like so.&lt;/p&gt;
&lt;p&gt;/dev/data/home&lt;br /&gt;
/dev/data/var&lt;br /&gt;
/dev/data/srv&lt;br /&gt;
/dev/data/tmp * some people may choose to do this for systems that do a lot of video and sound editing.&lt;/p&gt;
&lt;p&gt;I have experimented, and if you design your servers well then you can utilize your space better, and adapt to any storage issues that may present themselves in the future.  I try to leave extra unallocated space in each volume group just in case a file system fills up; this will allow you to extend the logical volume and the file system that resides on that volume.  Plan which filesystems you use carefully!  Some filesystems (and distribution admin tools) will allow you to extend the logical volume and filesystem in one command while others could turn out to be an ordeal.&lt;/p&gt;
&lt;p&gt;Hope that helps someone.&lt;/p&gt;
&lt;p&gt;http://intelliginix.com
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>This was a good review.  Here is another tip if you plan to use LVM effectively.  These suggestions will vary based on the distribution that you use, so keep that in mind.</p>
<p>I tend to split up the system so that it is easier to manage file systems that tend to grow often.  For instance if you take the OS, and separate the DATA, then you might end up with something like this.</p>
<p>/dev/os/root<br />
/dev/os/usr<br />
/dev/os/tmp<br />
/dev/os/opt<br />
/dev/os/swap</p>
<p>and the data you may put in a completely different VG like so.</p>
<p>/dev/data/home<br />
/dev/data/var<br />
/dev/data/srv<br />
/dev/data/tmp * some people may choose to do this for systems that do a lot of video and sound editing.</p>
<p>I have experimented, and if you design your servers well then you can utilize your space better, and adapt to any storage issues that may present themselves in the future.  I try to leave extra unallocated space in each volume group just in case a file system fills up; this will allow you to extend the logical volume and the file system that resides on that volume.  Plan which filesystems you use carefully!  Some filesystems (and distribution admin tools) will allow you to extend the logical volume and filesystem in one command while others could turn out to be an ordeal.</p>
<p>Hope that helps someone.</p>
<p><a href="http://intelliginix.com" rel="nofollow">http://intelliginix.com</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mxcreep</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6808</link>
		<dc:creator>mxcreep</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6808</guid>
<description>&lt;p&gt;Why doesn&#039;t the article mention the performance penalties involved with LVM snapshots? For us this was the main reason not to build storage boxes based on Linux; instead we use OpenSolaris / ZFS for these solutions now. It is expected these problems will be solved when Btrfs is commonly available.
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>Why doesn&#8217;t the article mention the performance penalties involved with LVM snapshots? For us this was the main reason not to build storage boxes based on Linux; instead we use OpenSolaris / ZFS for these solutions now. It is expected these problems will be solved when Btrfs is commonly available.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6809</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6809</guid>
<description>&lt;p&gt;Thanks, everyone, for the comments and suggestions; I greatly appreciate them.&lt;/p&gt;
&lt;p&gt;Here are some quick replies and/or comments:&lt;/p&gt;
&lt;p&gt;@clowenstein&lt;br /&gt;
The example I used in the article was a bit artificial, but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PVs to improve performance. Then again, you could have used MD to create RAID groups and also improved performance. I&#039;ve never dug into the interaction between LVM and MD, so I don&#039;t know what the &quot;best&quot; configuration is (however you want to define &quot;best&quot;). &lt;/p&gt;
&lt;p&gt;@thatblackguy:&lt;br /&gt;
Cool approach! I think it takes some work to setup everything as you describe but it does give you much more flexibility than a monolithic approach. Very cool - Thanks!&lt;/p&gt;
&lt;p&gt;@mxcreep&lt;br /&gt;
The article wasn&#039;t intended to be an all-encompassing review of LVM with all of the pitfalls and benefits (I think I mentioned that twice, which someone pointed out was bad writing - but oh well, I wanted to make sure my point came across). However, the performance penalty doesn&#039;t really come from LVM. It comes from the fact that the file system has no snapshot capability built in. Consequently, you have to rely on LVM to take the snapshot for you. This forces you to &quot;freeze&quot; the file system (so to speak) during the snapshot and means that the file system takes a performance hit during the snapshot.&lt;/p&gt;
&lt;p&gt;I also have to be the bearer of bad tidings, but even ZFS suffers a little when a snapshot is taken. The performance penalty is smaller than with LVM, but it&#039;s still there.&lt;/p&gt;
&lt;p&gt;The only time you won&#039;t see a performance penalty due to a snapshot is for a log-based file system, because it is &lt;em&gt;designed&lt;/em&gt; for snapshots. That&#039;s one of the really cool features of log-based file systems.&lt;/p&gt;
&lt;p&gt;I hope that answers some questions. If it doesn&#039;t, feel free to repost and we&#039;ll figure out the answer together. If it gets too long, we can write a quick article about the question and the solution (hint, hint) :)&lt;/p&gt;
&lt;p&gt;Thanks!&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>Thanks, everyone, for the comments and suggestions; I greatly appreciate them.</p>
<p>Here are some quick replies and/or comments:</p>
<p>@clowenstein<br />
The example I used in the article was a bit artificial, but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PVs to improve performance. Then again, you could have used MD to create RAID groups and also improved performance. I&#8217;ve never dug into the interaction between LVM and MD, so I don&#8217;t know what the &#8220;best&#8221; configuration is (however you want to define &#8220;best&#8221;). </p>
<p>@thatblackguy:<br />
Cool approach! I think it takes some work to setup everything as you describe but it does give you much more flexibility than a monolithic approach. Very cool &#8211; Thanks!</p>
<p>@mxcreep<br />
The article wasn&#8217;t intended to be an all-encompassing review of LVM with all of the pitfalls and benefits (I think I mentioned that twice, which someone pointed out was bad writing &#8211; but oh well, I wanted to make sure my point came across). However, the performance penalty doesn&#8217;t really come from LVM. It comes from the fact that the file system has no snapshot capability built in. Consequently, you have to rely on LVM to take the snapshot for you. This forces you to &#8220;freeze&#8221; the file system (so to speak) during the snapshot and means that the file system takes a performance hit during the snapshot.</p>
<p>I also have to be the bearer of bad tidings, but even ZFS suffers a little when a snapshot is taken. The performance penalty is smaller than with LVM, but it&#8217;s still there.</p>
<p>The only time you won&#8217;t see a performance penalty due to a snapshot is for a log-based file system, because it is <em>designed</em> for snapshots. That&#8217;s one of the really cool features of log-based file systems.</p>
<p>I hope that answers some questions. If it doesn&#8217;t, feel free to repost and we&#8217;ll figure out the answer together. If it gets too long, we can write a quick article about the question and the solution (hint, hint) :)</p>
<p>Thanks!</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mxcreep</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6810</link>
		<dc:creator>mxcreep</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6810</guid>
		<description>&lt;p&gt;@Jeff, &lt;/p&gt;
&lt;p&gt;My point about performance wasn&#039;t really about the moment the snapshot was taken, but about afterwards. A logical volume with one or more snapshots chained to it performs a lot slower.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>@Jeff, </p>
<p>My point about performance wasn&#8217;t really about the moment the snapshot was taken, but about afterwards. A logical volume with one or more snapshots chained to it performs a lot slower.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: clowenstein</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6811</link>
		<dc:creator>clowenstein</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6811</guid>
		<description>&lt;p&gt;Sorry to be late replying.  You said:&lt;br /&gt;
- - - - -&lt;br /&gt;
@clowenstein&lt;br /&gt;
The example I used in the article was a bit artificial but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PVs to improve performance. Then again you could have used MD to create RAID groups and also improved performance.&lt;br /&gt;
- - - - -&lt;br /&gt;
Surely you realize that a single disk drive has only one head mechanism.  Creating stripes across partitions of one drive can only cause time-consuming head-seek motion.  This is definitely not a performance improver.
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>Sorry to be late replying.  You said:<br />
- - - - -<br />
@clowenstein<br />
The example I used in the article was a bit artificial but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PVs to improve performance. Then again you could have used MD to create RAID groups and also improved performance.<br />
- - - - -<br />
Surely you realize that a single disk drive has only one head mechanism.  Creating stripes across partitions of one drive can only cause time-consuming head-seek motion.  This is definitely not a performance improver.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7454/#comment-6812</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7454/#comment-6812</guid>
<description>&lt;p&gt;@clowenstein&lt;/p&gt;
&lt;p&gt;Sorry, I should have said &quot;disks&quot;.&lt;/p&gt;
&lt;p&gt;You are correct for a disk (that is the singular form). It&#039;s not always a good idea (there are situations where it might be reasonable).&lt;/p&gt;
&lt;p&gt;Multiple disks (that&#039;s plural, indicating more than one) can give you a performance advantage, but it too depends upon the exact configuration.&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
<content:encoded><![CDATA[<p>@clowenstein</p>
<p>Sorry, I should have said &#8220;disks&#8221;.</p>
<p>You are correct for a disk (that is the singular form). It&#8217;s not always a good idea (there are situations where it might be reasonable).</p>
<p>Multiple disks (that&#8217;s plural, indicating more than one) can give you a performance advantage, but it too depends upon the exact configuration.</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
</channel>
</rss>