
Deduping Storage Deduplication

One of the hottest topics in the enterprise storage world is deduplication. We take a look at the technology behind the concept and discuss where it best fits in your storage strategy.

I think everyone can agree that data storage is growing at a fast, some say alarming, rate. This means that administrators have to work overtime to keep everything humming so that users never see the hard work going on behind the scenes: quota management, snapshots, backups, replication, preparing disaster recovery backups and off-site copies of data, restoring user data that has been erased, monitoring data growth and usage, and a thousand other tasks that keep things running smoothly (picture synchronized swimmers who look graceful above the water while underneath the surface their legs and arms are moving at a furious rate).

Now that I have equated storage experts to synchronized swimmers and probably upset all of them (my apologies), let's look at a technology that is trying to make their lives easier while also saving money: data deduplication. While it is marketed as something new, I hope to show that it's really an older technology with a new twist that can be used to great effect on many storage systems. Without further ado, let's examine data deduplication.

Introduction

Data deduplication is, quite simply, removing copies (duplicates) of data and replacing them with pointers to the first (unique) copy of the data. Fundamentally, this technology reduces the total amount of storage required. This can result in many things:


  • Saving money (no need to buy additional capacity)
  • Reducing the size of backups, snapshots, etc. (saving money and time)
  • Reducing power requirements (less disk, less tape, etc.)
  • Reducing network requirements (less data to transmit)
  • Saving time
  • Making disk-based backups more practical, since the amount of storage needed is reduced

These results are the fundamental reason that data deduplication technology is all the rage at the moment. Who doesn't like saving money, time, network bandwidth, etc.? But as with everything, the devil is in the details. This article presents the concepts and issues involved in data deduplication.

Deduplication is really not a new technology. It is really an outgrowth of compression. Compression searches a single file for repeated binary patterns and replaces duplicates with pointers to the original or unique piece of data. Data deduplication extends this concept to include deduplication…


  • Within files (just like compression)
  • Across files
  • Across applications
  • Across clients
  • Over time

A quick illustration of deduplication versus compression: if you have two files that are identical, compression is applied to each file independently, but data deduplication recognizes that the files are duplicates and only stores the first one. In addition, it can also search the first file for duplicate data, further reducing the size of the stored data (much like compression).

A very simple example of data deduplication, derived from an EMC video, is shown in Figure 1.

Figure 1 – Data Deduplication Example

In this example there are three files. The first file, document1.docx, is a simple Microsoft Word file that is 6MB in size. The second file, document2.docx, is just a copy of the first file but with a different file name. And finally, the last file, document_new.docx, is derived from document1.docx with some small changes to the data and is also 6MB in size.

Let's assume that a data deduplication process divides the files into 6 pieces (this is a very small number and is for illustrative purposes only). The first file has pieces A, B, C, D, E, and F. The second file, since it's a copy of the first file, has the exact same pieces. The third file, which is also 6MB in size, has one piece changed; the changed piece is labeled G. Without data deduplication, a backup of the files would have to back up 18MB of data (6MB times 3). But with data deduplication only the first file and the new piece G in the third file are backed up, for a total of 7MB of data.

One additional feature of data deduplication is that after the backup, the pieces A, B, C, D, E, F, and G are typically stored in a list (sometimes called an index). When new files are backed up, their pieces are compared to the ones that have already been stored, and only new pieces are kept. This is what allows deduplication to work over time.
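To make the piece-and-index idea concrete, here is a minimal Python sketch. The fixed 1MB piece size, SHA-1 hashing, and in-memory dictionaries are illustrative assumptions, not how any particular product works, but the sketch reproduces the figure's arithmetic: three 6MB files end up stored as 7MB of unique pieces.

import hashlib

CHUNK_SIZE = 1024 * 1024   # assume fixed 1 MB pieces (real products vary)

chunk_store = {}   # hash -> piece bytes (the unique pieces)
backups = {}       # file name -> ordered list of piece hashes ("pointers")

def backup_file(name, data):
    """Split data into pieces and store only pieces not already indexed."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        piece = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha1(piece).hexdigest()
        if digest not in chunk_store:      # new, unique piece: keep it
            chunk_store[digest] = piece
        recipe.append(digest)              # duplicates become pointers
    backups[name] = recipe

def restore_file(name):
    """Reassemble a file from its list of piece hashes."""
    return b"".join(chunk_store[d] for d in backups[name])

# Mirror the figure: pieces A..F, an exact copy, and a copy where F becomes G.
pieces = [bytes([i]) * CHUNK_SIZE for i in range(6)]
doc1 = b"".join(pieces)                                      # document1.docx
doc2 = b"".join(pieces)                                      # document2.docx (copy)
doc_new = b"".join(pieces[:5] + [bytes([99]) * CHUNK_SIZE])  # document_new.docx

for name, data in (("document1.docx", doc1),
                   ("document2.docx", doc2),
                   ("document_new.docx", doc_new)):
    backup_file(name, data)

stored_mb = sum(len(p) for p in chunk_store.values()) // CHUNK_SIZE
print(f"logical data: 18 MB, unique pieces stored: {stored_mb} MB")   # 7 MB
assert restore_file("document2.docx") == doc1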

One of the first questions asked after "what is data deduplication?" is "what level of deduplication can I expect?" The specific answer depends upon the details of the situation and the dedup implementation, but EMC quotes a range of 20:1 to 50:1 over a period of time.

Devilish Details

Data deduplication is not a "standard" in any sense, so all of the implementations are proprietary and each product does things differently. Understanding the fundamental differences is important for determining when and if they fit into your environment. Typically, deduplication technology is used in conjunction with backups, but it is not necessarily limited to that function. With that in mind, let's examine some of the ways deduplication can be done.

There are really two main types of deduplication with respect to backups: target-based and source-based. The difference is fairly simple. Target-based deduplication dedups the data after it has been transferred across the network for backup; source-based deduplication dedups the data before it is sent. This difference is important for understanding the typical ways that deduplication is deployed.

With target-based deduplication, the work is typically done by a device such as a Virtual Tape Library (VTL). When using a VTL, the data is passed to the backup server and then to the VTL, where it is deduped. The data is therefore sent across the network without being deduped, increasing the amount of data transferred. On the other hand, the target-based approach allows you to continue using your existing backup tools and processes.

Alternatively, in a remote backup situation where you communicate over the WAN, network bandwidth is important. If you still want to use target-based deduplication, the VTL is placed near the servers to dedup the data before sending it over the network to the backup server.

The opposite of target-based dedup is source-based deduplication. In this case the deduplication is done by the backup software itself. The backup software on the client talks to the backup software on the backup server to dedup the data before it is transmitted. In essence, the client offers up the pieces of each file to be backed up, and the backup server compares them to pieces that have already been stored. If a duplicate is found, a pointer is created to the unique piece of data that is already on the server.

Source-based dedup can greatly reduce the amount of data transmitted over the network, although there is some traffic between the clients and the backup server to coordinate the deduplication. In addition, since the dedup takes place in software, no additional hardware is needed. But you have to use specialized backup software, so you may have to give up your existing backup tools to gain the dedup capability.
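Here is a rough sketch of the source-based exchange, under the assumption that the client and server simply trade piece hashes before any data moves. Real products use their own protocols and chunk sizes (the 64KB pieces, class names, and SHA-1 choice below are made up for illustration), but the shape is similar: hashes travel first, and only pieces the server has never seen follow.

import hashlib

CHUNK_SIZE = 64 * 1024   # assumed piece size for the sketch

class BackupServer:
    """Stands in for the server side; self.index is the dedup index."""
    def __init__(self):
        self.index = {}                       # hash -> piece bytes

    def missing(self, digests):
        """Return the hashes the server has never seen."""
        return [d for d in digests if d not in self.index]

    def store(self, digest, piece):
        self.index[digest] = piece

def client_backup(server, data):
    """Hash locally, ask which pieces are new, and send only those."""
    pieces = {}
    for offset in range(0, len(data), CHUNK_SIZE):
        piece = data[offset:offset + CHUNK_SIZE]
        pieces[hashlib.sha1(piece).hexdigest()] = piece
    needed = server.missing(list(pieces))     # small, hash-only exchange
    for digest in needed:                     # only unique pieces cross the wire
        server.store(digest, pieces[digest])
    return len(needed), len(pieces)

server = BackupServer()
data = b"".join(bytes([i]) * CHUNK_SIZE for i in range(10))
print("first backup: sent %d of %d pieces" % client_backup(server, data))
print("second backup: sent %d of %d pieces" % client_backup(server, data))   # sends 0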

So far deduplication looks pretty easy, and the fundamental concepts are indeed simple, but many details have been left out. There are many parts of the whole deduplication technology that have to be developed, integrated, and tested for reliability (it is your data, after all). Deduplication companies differentiate themselves by these details. Is the deduplication target-based or source-based? What is the nature of the device and/or software? At what level is the deduplication performed? How are the data pieces compared to find duplicates? And on and on.

Before diving into a discussion about deduplication deployment, let's talk about dedup algorithms. Recall that deduplication can happen on a file basis, on a block basis (the definition of a block is up to the specific dedup implementation), or even at the bit level. It is extremely inefficient to perform deduplication by comparing raw pieces of data byte by byte against everything that has already been stored. To make things easier, dedup algorithms compute a hash of each piece being deduped using something like MD5 or SHA-1. This hash should produce a practically unique number for that specific piece of data, and the number can be compared quickly against the hashes stored in the dedup index.

One of the problems with using these hash algorithms is hash collisions. A hash collision is something of a "false positive": the hash for a piece of data may actually match the hash of a different piece of data (i.e., the hash is not unique). Consequently, a piece of data may not be backed up because its hash matches one already stored in the index, even though the data itself is different. Obviously this can lead to data corruption. To guard against this, dedup vendors use several hash algorithms, or combinations of them, to make sure a piece of data truly is a duplicate. In addition, some vendors use metadata to help identify and prevent collisions.
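As a hedged illustration of the "several hash algorithms" idea, the snippet below keys the index on a pair of digests (SHA-1 plus MD5, chosen arbitrarily for the sketch), so a false match would require a simultaneous collision in both algorithms for the same piece of data.

import hashlib

index = {}   # (sha1, md5) -> piece bytes

def piece_key(piece: bytes) -> tuple:
    """Key a piece on two independent digests instead of one."""
    return (hashlib.sha1(piece).hexdigest(),
            hashlib.md5(piece).hexdigest())

def is_duplicate(piece: bytes) -> bool:
    key = piece_key(piece)
    if key in index:
        # Matched on both digests; a paranoid implementation could also
        # byte-compare index[key] against piece before discarding it.
        return True
    index[key] = piece
    return False

print(is_duplicate(b"some block of data"))   # False: first time seen
print(is_duplicate(b"some block of data"))   # True: both digests match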

Getting an idea of the likelihood of a hash collision requires a little bit of math. This article does a pretty good job of explaining the odds of a hash collision. The basic conclusion is that for a 160-bit hash the odds are 1 in 2^160, which is a huge number. Put another way, if you have 95 EB (exabytes, or 1,000 petabytes each) of data, you have roughly a 0.00000000000001110223024625156540423631668090820313% chance of getting a false positive in the hash comparison and throwing away a piece of data you should have kept. At 95 EB, it's not likely you will ever encounter this, even over an extended period of time. But never say never (after all, someone supposedly predicted we'd only need 640KB of memory).
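For a rough sanity check of those odds, the standard birthday approximation p ≈ n² / (2 · 2^bits) can be computed directly. The 8KB piece size below is an assumption for illustration, and the article's exact percentage was presumably computed with somewhat different parameters, but the conclusion is the same: the probability is vanishingly small.

# Birthday approximation: p ~= n^2 / (2 * 2^bits) for n pieces and a
# bits-wide hash. The 8 KB piece size is an assumption for illustration.
chunk_bytes = 8 * 1024
total_bytes = 95 * 10**18          # 95 EB
n = total_bytes // chunk_bytes     # number of pieces that get hashed
bits = 160                         # SHA-1 digest width

p = n * n / (2 * 2**bits)
print(f"pieces hashed: {n:.3e}")
print(f"collision probability: {p:.3e}  (~{p * 100:.1e} %)")
# -> on the order of 1e-17, i.e. vanishingly small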

Implementation

Choosing one solution over another is a bit of an art and requires careful consideration of your environment and processes. The previously mentioned video offers a couple of rules of thumb based on the fundamental difference between source-based and target-based deduplication. Source-based dedup is good for situations where network bandwidth is at a premium, such as file systems (you don't want to transfer an entire file system across the network just to deduplicate it), VMware storage, and remote or branch offices (bandwidth to a central backup server may be limited). Don't forget that for source-based dedup you will likely have to switch backup tools to get the dedup features.

On the other hand, target-based deduplication works well for SANs, LANs, and possibly databases. In these cases moving the data around the network is not very expensive, and you may already have backup packages chosen and in production.

Finally, the video also claims that source-based dedup can achieve a deduplication ratio of 50:1 and that target-based dedup can achieve 20:1. Both levels are very impressive. There are a number of articles that discuss how to estimate the deduplication ratio you can expect; a ratio of 20:1 certainly seems possible.
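If you want to estimate a ratio for your own environment, the usual definition is simply logical bytes protected divided by unique bytes actually stored. The toy numbers below (30 daily backups of a 1TB data set with roughly 2% daily change) are made up for illustration, not a vendor's formula, but they show how retention and change rate push the ratio toward the 20:1 neighborhood.

def dedup_ratio(logical_bytes: float, stored_bytes: float) -> float:
    """Logical data protected divided by unique data actually stored."""
    return logical_bytes / stored_bytes

TB = 10**12
days = 30          # retention: 30 daily backups
change = 0.02      # assume ~2% of the data changes each day

logical = days * 1 * TB                          # what full backups would add up to
stored = 1 * TB + (days - 1) * change * 1 * TB   # first copy plus changed pieces per day
print(f"estimated ratio: {dedup_ratio(logical, stored):.0f}:1")   # roughly 19:1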

There are many commercial deduplication products. Any list in this article is incomplete and is not meant as a slight toward a particular company. Nevertheless here is a quick list of companies providing deduplication capabilities:


These are a few of the solutions that are available. There are some smaller companies that offer deduplication products as well.

Deduplication and Open-Source


There are not very many (any?) deduplication projects in the open-source world. You can, however, use a target-based deduplication device, because it allows you to keep using your existing backup software, which could be open-source. It is suggested you talk to the vendor to make sure they have tested their device with Linux.

The only deduplication project that could be found is called LessFS. It is a FUSE-based file system with built-in deduplication. It is still early in its development, but it has demonstrated deduplication capabilities and has incorporated encryption (ah, the beauty of FUSE).

Summary

This has been a fairly short introductory article on deduplication technology, one of the hot technologies in storage right now. It holds the promise of saving money through a reduction in the hardware needed to store data, as well as a reduction in network bandwidth.

This article is intended to whet your appetite for examining data deduplication and how it might (or might not) be applicable to your environment. Take a look at the various articles on the net (there has been some hype around the technology) and judge for yourself whether this is something that might work for you. If you want to try an open-source project, there aren't very many (any?) at all. The only one that could be found is LessFS, a FUSE-based file system that incorporates deduplication. It might be worth investigating, even if only for secondary storage rather than as your primary file storage.

Comments on "Deduping Storage Deduplication"

psevetson

You might want to fix your title: I don't think you meant to say "Depulication" when you're talking about "Deduplication."

Reply
psevetson

Or maybe you meant to say "Duplication"?

Reply
pittendrigh

fdupes -r /wherever > /tmp/dupeslog

deduper.pl /tmp/dupeslog

...where deduper.pl is:

#!/usr/bin/perl
# Read an fdupes log (groups of duplicate paths separated by blank lines),
# keep the first path in each group, and report or delete the rest.
# Pass 'delete' as a second argument to actually remove the duplicates.
use strict;
use warnings;

my $file = shift;
my $mode = shift || '';
open my $fh, '<', $file or die "cannot open $file: $!\n";

my @paths;
while (<$fh>) {
    chomp;
    if (/^\s*$/) {                       # a blank line ends a duplicate group
        process_group(@paths) if @paths;
        @paths = ();
    }
    else {
        push @paths, $_;
    }
}
process_group(@paths) if @paths;         # final group with no trailing blank line

sub process_group {
    my @group = @_;
    print "save: $group[0]\n";
    for my $dup (@group[1 .. $#group]) {
        if ($mode eq 'delete') { unlink $dup or warn "could not delete $dup: $!\n"; }
        else                   { print "delete: $dup\n"; }
    }
    print "\n";
}

Reply
webmanaus

Ugh, backuppc uses de-duplication (of sorts) to store multiple versions of identical files on multiple remote systems with hard-links and is open source.

Also, rsync uses some form of de-duplication at the file level when transferring files between remote systems, frequently used for backups as well, and also open source.

Just two open source apps that I've been using for years, very mature, and work exceptionally well.

Reply
cringer

How about the open source BackupPC (backuppc.sourceforge.net)? It has file-level deduplication and compression, allowing me to back up over 8.5TB of data on a 1TB drive.

Reply
greimer

What\’s the difference between deduplication and compression?

Reply
bofh999

Ahem, and what about the downsides? As a professional you always have to consider the downsides too, but I can never read about them in any article here (the virtualization pieces especially).

The downsides here (especially for a filesystem implementation): you get a higher, sometimes much higher, system load. Since HDD capacity is very cheap, it may not make sense to trade disk space for CPU and RAM load.

Second, think about a filesystem crash. The chance of losing more data than with traditional methods is clearly much higher.

Third, what if you have to split your services onto another server? Then you have to rethink your disk needs, even for backup.

Let's say you have 100GB of backup space. Now you split the server and you have a second backup set, and surprise, you need 150GB because you had many common files which are now in different backup sets.

Only some quick assumptions. Sure, the idea isn't new (Windows has had such a feature for a long time), but I'm not a real fan of making much more complex changes so deep in the system, especially to save HDD capacity, which is unbelievably cheap, when it will slow down performance and may complicate management and failure procedures.

Reply
ttsiodras

There is a way to implement deduplication at BLOCK level using three open source technologies (used in my company for daily backups):

1. OpenSolaris backup server
2. ZFS snapshots
3. rsync --inplace

Notice that when you use "--inplace", rsync writes directly on top of the already existing file in the destination filesystem, and ONLY in the places that changed! This means that by using ZFS (which is copy-on-write) you get the BLOCK-level deduplication that you are talking about. Taking a cron-based daily ZFS snapshot completes the picture.

Using these tools, we are taking daily snapshots of HUGE VMware vmdk files that change in less than 1% of their contents on a daily basis, using amazingly trivial space requirements (something like 3% of the size of the original vmdk is used for one month of daily backups).

I believe that OpenSolaris/ZFS/"rsync --inplace" is a combination that merits a place in your article.

Kind regards,
Thanassis Tsiodras, Dr.-Ing.

Reply
    pgraziano

    ttsiodras:

    I've been running "almost" this exact backup scheme for my clients for years. A major limitation is exposed when a client renames a directory: the backup (in this case rsync) will see it as a new directory and needlessly copy it over and delete the old one, essentially doubling the space used by that directory.

    Example: Client renames /home/user/15TB_Folder to /home/user/15Terabyte_Folder.

    What would happen then? No, seriously. I’m wondering how your snapshot scenario would deal with that, since I don’t use snapshots, I use hardlinks, much like BackupPC and Rsnapshot.

    NOTE: I've also used the nilfs2 LFS combined with the --inplace rsync option with very good results.

    Reply

    ttsiodras, I know this thread is almost 3 years old now but I had a question regarding your comment:

    "Notice that when you use '--inplace', rsync writes directly on top of the already existing file in the destination filesystem, and ONLY in the places that changed! This means that by using ZFS (which is copy-on-write) you get the BLOCK-level deduplication…"

    Have you actually observed this in practice or is this just theory?

    I am attempting a similar backup scheme using rsync --inplace with the destination file residing on a NetApp WAFL filesystem (which is also copy-on-write like ZFS). What I have found is that even with the --inplace option rsync still rewrites the entire file on the destination block by block (note I am also using the --no-whole-file option). It is better than without --inplace, as by default rsync would create an entirely new temporary copy of the file on the destination before overwriting the original (thus causing the COW filesystem to incur a 200% penalty in snapshot space utilization). However, I find that rsync --inplace does not update only the changed blocks on the destination file as you described; rather it still rewrites the whole file "inplace".

    The only advantage to the --inplace option that I see so far is that you don't need double the storage space on the destination to temporarily keep two copies of the file being rsync'ed.

    Reply
hjmangalam

Good intro article, with the exception of: "wet your appetite" should be "whet your appetite", as in to sharpen it. "Wet" implies to dampen or lessen.

The OSS BackupPC provides a crude level of dedupe via filesystem hard links. Therefore it only works at the file level and only across a single file system (but note that cheap single filesystems easily range into the 10s-100s of TB). For small to medium installations, BackupPC and the like can work well. It can also use rsync to transfer only changed blocks over the wire, which decreases bandwidth requirements.

You might note that all this proprietary dedupe technology effectively locks you to a vendor-specific implementation, which reduces your ability to escape when the vendor decides to jack prices.

Also notable is the falling-off-a-cliff price of disk. It might take more disk to ignore dedupe, but if that can be addressed by very cheap, flexible storage, it may weigh in dedupe's disfavor, especially if you're using a no-cost (though admittedly less efficient) mechanism like hard links and rsync.

hjm

Reply
mat_pass

Hi,
I have already worked on such a project; I have published all my Java sources in a repository: http://code.google.com/p/deduplication/

Reply
lescoke

Hash collisions in two different hash algorithms at the same time are highly unlikely. Using two or more hash signatures would be slower, but would go a long way toward avoiding a false file match.

Les

Reply
johneeboy3

I second the previous comments about BackupPC. Whenever I see an article on such subjects (backup or deduping), it never ceases to amaze me how the awesome BackupPC project continually gets overlooked.

It has been backing up 10+ systems to a central backup store here at our small business for years, and has compressed/deduped 1.7TB of backups into 220GB.

Better yet, I\’ve never been able to fault it.

I\’m intrigued by that other posters OpenSolaris+ZFS+rsync solution too. Very clever!

Reply
ryannnnn

Actually, the concept of deduplication at the file system level is not that new. Plan 9 from Bell Labs had this concept in 1995 with its Fossil and Venti file system components. You can still use Venti on Linux as part of the Plan 9 user space tools.

Reply
nikratio

S3QL (http://code.google.com/p/s3ql/) is another open source, de-duplicating file system. It's designed for online storage, but can also store locally if one is just interested in deduplication.

Reply
indulis

One overlooked part of deduplication is recovery from backup. If you have (say) 500 x 10GB files on a 1TB disk and they are all identical, then you only use up 10GB, i.e. 99% free space. When you restore your system from backups, either you need a backup/restore program that is dedup-aware and dedupes as it restores, or you have to restore some of your files (in this example you can only restore 20% before you fill up your 1TB), run the dedupe software over the files you've restored, then restore some more, run the dedupe again, and repeat. In other words, you have to iterate your restore process. Many technologies which save time or space in normal operations can have a large negative effect during restores. People rarely think about the effect of their idea on system recovery. Restoring from backups may actually turn out to be close to impossible without installing enough disk to store the full amount of data you originally had (i.e. the "raw" undeduplicated data size = 5TB).

Reply
