Checksumming Files to Find Bit-Rot

In a previous article, I presented extended file attributes – additional bits of metadata that are tied to a file and can be used in a variety of ways. One of those ways is to add checksums to the file so that corrupted data can be detected. Let's take a look at how we can do this, including some simple Python examples.

Do You Suffer From Bit-Rot?

Storage admins live in fear of corrupted data. This is why we make backups, replicas, and use other methods to ensure we have copies of the data in case the original is corrupted. One of the most feared sources of data corruption is the proverbial bit-rot.

Bit-rot can be caused by a number of sources but the result is always the same – one or more bits in the file have changed, causing silent data corruption. The “silent” part means that nothing tells you it happened – you only find out when you discover that the data has changed (in essence, it is now corrupt).

One source of data corruption is characterized by the URE (Unrecoverable Read Error) or UBER (Unrecoverable Bit Error Rate). These measures are a function of the storage media design and tell us the probability of encountering a bit on a drive that cannot be read. Sometimes specific bits on a drive simply cannot be read due to various factors. Usually the drive reports the error and it is put into the system logs, and many times the OS will also report an error because it cannot read a specific portion of data. In other cases, the drive will return the bit even though it contains bad data (perhaps the bit flipped due to cosmic radiation – which does happen, particularly at higher altitudes), so the bit can still be read but its value is now incorrect.

The actual URE or UBER for a particular storage device is usually published by the manufacturer, although many times it can be hard to find. A typical value for hard drives is around 1 in 10^14 bits, which means that, on average, 1 out of every 10^14 bits read cannot be read back. Some SSD manufacturers will list their UBER as 1 in 10^15, and some hard drive manufacturers use this same number for enterprise-class drives. Let’s convert that number to something a little easier to understand – how many Terabytes (TB) must be read, on average, before encountering a URE.

A 1 TB drive has about 2 billion (2 x 10^9) sectors, assuming 512 byte sectors, and let’s use a URE rate of 1 in 10^14 bits. At 512 * 8 = 4,096 bits per sector, 10^14 bits converts to about 24 x 10^9 sectors. If you then divide that sector count by the number of sectors per drive, you get the following:

24 x 10^9 sectors / 2 x 10^9 sectors per drive = 12 drives


This means that if you read 12 TB – for example, 12x 1TB drives – you expect, on average, to encounter one URE (not a certainty, but the expected count is one). If you have 2TB drives, then reading just 6x 2TB drives gives the same expectation.
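If you want to play with these numbers yourself, here is a quick back-of-the-envelope sketch (Python 2, like the rest of the code in this article). The URE spec of 1 in 10^14 bits is the assumption here – plug in your own drive’s number:

#!/usr/bin/python

# Back-of-the-envelope: how much data do you expect to read before one URE?
ure_bits = 1.0e14            # vendor spec: one unrecoverable error per 1e14 bits read
bits_per_tb = 1.0e12 * 8     # 1 TB = 10^12 bytes = 8 x 10^12 bits

print "Expected TB read per URE: %.1f" % (ure_bits / bits_per_tb)   # ~12.5 TB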

If you have a RAID-5 group with seven 2TB drives and one drive fails, the RAID rebuild has to read all of the remaining disks (all six of them). That is 12 TB of reads, so there is a very good chance that during the RAID-5 rebuild you will hit a URE and the rebuild will fail. This means you have lost all of your data.
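To put a number on “a very good chance”: if we model bit errors as a Poisson process at the quoted rate (a simplifying assumption – real-world errors tend to cluster), the probability of hitting at least one URE while reading those six drives works out to roughly 62%, not 100%. A minimal sketch:

#!/usr/bin/python

import math

# Probability of at least one URE while reading N bits, modeling errors as a
# Poisson process with a rate of p errors per bit (a simplifying assumption).
p = 1.0e-14                        # per-bit unrecoverable error probability
bits_read = 6 * 2 * 1.0e12 * 8     # six 2TB drives read during the rebuild

expected_errors = bits_read * p                 # 0.96 expected UREs
prob = 1.0 - math.exp(-expected_errors)         # P(at least one) ~ 0.62
print "P(at least one URE during rebuild): %.2f" % prob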

This is just one example of a form of bit-rot – the dreaded URE. It can and does happen, and people either don’t notice or go screaming into the night that they lost all of their carefully saved KC and the Sunshine Band videos and MP3s.

So what can be done about bit-rot? There are really two parts to that question: the first is detecting corrupted files, and the second is correcting corrupted files. In this article, I will talk about some simple techniques using extended file attributes that can help you detect corrupted data (recovering it is the subject for a much longer article).

Checksums to the Rescue!

One way to check for corrupted data is through the use of a checksum. A checksum is a simple representation or fingerprint of a block of data (in our case, a file). There are a whole bunch of checksum algorithms, including md5, sha-1, sha-2 (with 256, 384, and 512 bit variants), and sha-3. These algorithms can be used to compute the checksum of a chunk of data, such as a file, with longer or more involved checksums typically requiring more computational work. Note that these algorithms are also used in cryptography, but here we are simply using them as a way to fingerprint a file.

So for a given file we could compute a checksum using one of these techniques, or compute several checksums of the same file using different algorithms. Then, before a file is read, the checksum of the file could be computed and compared against a stored checksum for that same file. If they do not match, then you know the file has changed. And if the time stamps on the file haven’t changed since the checksums were computed, then you know the file is corrupt (since no one changed the file, the data obviously fell victim to bit-rot or some other form of data corruption).
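As a concrete (if simplified) sketch of that logic, here is what the check might look like using Python’s built-in hashlib module. The stored checksum and its time stamp below are hypothetical stand-ins for values recorded earlier:

#!/usr/bin/python

import os
import hashlib

def file_md5(path):
    # Compute the md5 checksum of a file, reading it in chunks
    h = hashlib.md5()
    f = open(path, 'rb')
    for chunk in iter(lambda: f.read(65536), ''):
        h.update(chunk)
    f.close()
    return h.hexdigest()

file_name = "./slides_fenics05.pdf"
stored_checksum = "4052e5dd3d79de6b0a03d5dbc8821c60"   # recorded earlier
checksum_time = 1300000000   # epoch time the checksum was recorded (hypothetical)

if file_md5(file_name) == stored_checksum:
    print "Checksum matches - no corruption detected"
elif os.path.getmtime(file_name) <= checksum_time:
    print "File unchanged since the checksum was taken - likely bit-rot!"
else:
    print "File was legitimately modified - recompute and store new checksums"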

If you have read everything carefully to this point, you can spot at least one flaw in the logic. The flaw I’m thinking of is that this process assumes the checksum itself has not fallen victim to data corruption. So we have to find some way of ensuring the checksum itself does not suffer bit-rot, or keep copies of the checksums stored somewhere that we can reasonably assume is safe from corruption.

You can spot other flaws in the scheme, too (nothing’s perfect). One noticeable flaw is that until the checksum of the file is created and stored in some manner, the file can fall victim to bit-rot undetected. We could go through some gyrations to compute and store the checksum in real time as the file is being created. However, that would dramatically increase the computational requirements and also slow down the I/O.

But for now, let’s assume that we are interested in ensuring data integrity for files that have been around a while and maybe haven’t been used for some period of time. This case is interesting because, with no one using the file, it is difficult to tell whether the data is corrupt. Checksums allow us to detect corruption even when no one is actively using the file.

Checksums and Extended File Attributes

The whole point of this discussion is to help protect against bit-rot in files by using checksums. Since the focus is on files, it makes sense to store the checksums with the file itself. This is easily accomplished using the extended file attributes we learned about in a previous article.

The basic concept is to compute the checksums of the file and store the results in extended attributes associated with the file. That way the checksums are stored with the file itself, which is what we’re really after. To improve things even more, let’s compute several checksums of the file, since this gives us several independent ways to detect file corruption. All of the checksums will be stored in extended attributes as well as in a file or database. However, as mentioned before, there is the possibility that the checksums in the extended attributes might themselves be corrupted – so what do we do?

A very simple solution is to store the checksums in a file or simple database and make sure that several copies of that file or database exist. Then, before you check a file, you first look up its checksum in the file or database and compare it to the checksum in the extended attributes. If they are identical, you know the stored checksum itself is intact, and you can trust it when comparing against a freshly computed checksum of the file.

There are lots of aspects of this process that you can develop to improve the probability of the checksums being valid. For example, you could make three copies of the checksum data and compute the checksum of each copy. Then you compare the checksums of these three files before you read any data. If two of the three values are the same, then you can assume that those two files are correct and that the third is corrupt, and replace it from one of the good copies (a small sketch of this vote follows below). But now we are getting into implementation details, which is not the focus of this article.
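Here is a minimal sketch of that two-out-of-three vote, assuming three copies of the checksum database; the file names are my own invention for illustration:

#!/usr/bin/python

import hashlib

def checksum_of(path):
    # md5 fingerprint of one copy of the checksum database
    f = open(path, 'rb')
    digest = hashlib.md5(f.read()).hexdigest()
    f.close()
    return digest

copies = ["checksums.db.1", "checksums.db.2", "checksums.db.3"]
sums = [checksum_of(name) for name in copies]

for i in range(len(copies)):
    others = [s for j, s in enumerate(sums) if j != i]
    if others[0] == others[1] and sums[i] != others[0]:
        # Two copies agree and this one differs - treat it as the bad copy
        print copies[i] + " disagrees with the other two - replace it from a good copy"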

Let’s take a quick look at some simple Python code to illustrate how we might compute the checksums for a file and store them in extended file attributes.

Sample Python Code

I’m not an expert coder and I don’t play one on television. Also, I’m not a “Pythonic” Python coder, so I’m sure there could be lots of debate about the code. However, the point of this sample code is to illustrate what is possible and how to go about implementing it.

For computing the checksums of a file, I will be using commands that typically come with most Linux distributions. In particular, I will be using md5sum, sha1sum, sha256sum, sha384sum, and sha512sum. To run these commands and grab the output to standard out (stdout), I will use a Python module called commands (note: I’m not using Python 3.x; I wrote this against Python 2.5.2 and have also tested it with Python 2.7.1). This module has functions that allow us to run “shell commands” and capture the output in a tuple (a built-in Python data type).

However, the output from a shell command can have several parts, so we may need to break the string into tokens to find what we want. A simple way to do that is to use the functions in the shlex module (Simple Lexical Analysis), which tokenize a string based on spaces.
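To make that concrete, here is what shlex.split does with a typical line of md5sum output (the example line is copied from the output shown later in this article):

#!/usr/bin/python

import shlex

line = "4052e5dd3d79de6b0a03d5dbc8821c60  ./slides_fenics05.pdf"
tokens = shlex.split(line)   # splits on whitespace
print tokens                 # ['4052e5dd3d79de6b0a03d5dbc8821c60', './slides_fenics05.pdf']
print tokens[0]              # just the checksum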

So let’s get coding! Here is the first part of my Python code to illustrate where I’m headed and how I import modules.

#!/usr/bin/python

#
# Test script for setting checksums on file
#

import sys

try:
   import commands                 # Needed for getstatusoutput
except ImportError:
   print "Cannot import commands module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

try:
   import shlex              # Needed for splitting input lines
except ImportError:
   print "Cannot import shlex module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

if __name__ == '__main__':

    # List of checksum functions:
    checksum_function_list = ["md5sum", "sha1sum", "sha256sum", "sha384sum", "sha512sum"];
    file_name = "./slides_fenics05.pdf";

# end if


At the top of the code, I import the modules, catching the ImportError and exiting if a module can’t be found, since both modules are a key part of the code. Then in the main part of the code I define the list of checksum functions I will be using; these are the exact command names used to compute the checksums. Note that I have chosen to compute the checksum of the file using 5 different algorithms. Having multiple checksums for each file improves the odds of detecting data corruption, because I can check all five. It can also help find a corrupt checksum: if one of the checksums in the extended file attributes is wrong but the other four are correct, then we have most likely found a corrupted extended attribute rather than a corrupted file.
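To make that “four out of five” reasoning concrete, here is a tiny sketch; the checksum strings are shortened, made-up stand-ins:

#!/usr/bin/python

# If exactly one stored checksum disagrees with a freshly computed one,
# suspect the extended attribute; if several disagree, suspect the file.
stored   = {"md5sum": "aaa", "sha1sum": "bbb", "sha256sum": "ccc",
            "sha384sum": "ddd", "sha512sum": "eee"}   # hypothetical values
computed = {"md5sum": "aaa", "sha1sum": "XXX", "sha256sum": "ccc",
            "sha384sum": "ddd", "sha512sum": "eee"}

mismatches = [k for k in stored if stored[k] != computed[k]]
if not mismatches:
    print "file looks clean"
elif len(mismatches) == 1:
    print "one checksum (%s) disagrees - suspect a corrupted xattr" % mismatches[0]
else:
    print "multiple checksums disagree - suspect a corrupted file"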

For the purposes of this article I’m just going to examine one file, slides_fenics05.pdf (a file I happen to have on my laptop).

The next step in the code is to add the code that loops over all five checksum functions.

#!/usr/bin/python

#
# Test script for setting checksums on file
#

import sys

try:
   import commands                 # Needed for getstatusoutput
except ImportError:
   print "Cannot import commands module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

try:
   import shlex              # Needed for splitting input lines
except ImportError:
   print "Cannot import shlex module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

if __name__ == '__main__':

    # List of checksum functions:
    checksum_function_list = ["md5sum", "sha1sum", "sha256sum", "sha384sum", "sha512sum"];
    file_name = "./slides_fenics05.pdf";

    for func in checksum_function_list:
        # Create command string to compute the checksum
        command_str = func + " " + file_name;
        checksum_output = commands.getstatusoutput(command_str);
        print "checksum_output: ",checksum_output
    # end for

# end if


Notice that I create the exact command line I want to run as a string called “command_str”. This is the command executed by the function “commands.getstatusoutput”. Notice that this function returns a 2-tuple, (status, output). You can see this in the output from the sample code below.

laytonjb@laytonjb-laptop:~/$ ./test1.py
checksum_output:  (0, '4052e5dd3d79de6b0a03d5dbc8821c60  ./slides_fenics05.pdf')
checksum_output:  (0, 'cdfcadf4752429f01c8105ff15c3e24fa9041b46  ./slides_fenics05.pdf')
checksum_output:  (0, '3c2ad544ba4245dc9e300afe79b81a3a25b2ff6e71e127724acd51124c47a381  ./slides_fenics05.pdf')
checksum_output:  (0, '0761eac4323d35a62c52f3c49dd2098e8b633724ed8dec2ee2de2ddda0874874a916b99287703a9eb1886af62d4ac0b3  ./slides_fenics05.pdf')
checksum_output:  (0, '42674cebe76d0c0567cf1bed21008b005912f0df76990456b669ef3d3942e607d69079e879ceecbb198e846a042f49ee28c145f9b1dc0b4bb4c9ddadd25777c5  ./slides_fenics05.pdf')
laytonjb@laytonjb-laptop:~/Documents/FEATURES/STORAGE094$


You can see that each time the commands.getstatusoutput function is called, the output tuple has two parts: (1) the status of the command (was it successful?) and (2) the result of the command (the actual output). Ideally we should check the status of the command to determine whether it was successful, but I will leave that as an exercise for the reader :)
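For completeness, the status check might look roughly like the following (a sketch, not part of the final script):

#!/usr/bin/python

import sys
import commands

status, output = commands.getstatusoutput("md5sum ./slides_fenics05.pdf")
if status != 0:
    # A non-zero status means the shell command failed (bad file name, etc.)
    print "Checksum command failed with status", status
    sys.exit(1)
print "checksum output:", output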

At this point we want to grab the output from the command (the second item in the 2-tuple) and extract the first part of the string, which is the checksum. To do this we use the shlex.split function from the shlex module. The code at this point looks like the following:

#!/usr/bin/python

#
# Test script for setting checksums on file
#

import sys

try:
   import commands                 # Needed for getstatusoutput
except ImportError:
   print "Cannot import commands module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

try:
   import shlex              # Needed for splitting input lines
except ImportError:
   print "Cannot import shlex module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

if __name__ == '__main__':

    # List of checksum functions:
    checksum_function_list = ["md5sum", "sha1sum", "sha256sum", "sha384sum", "sha512sum"];
    file_name = "./slides_fenics05.pdf";

    for func in checksum_function_list:
        # Create command string to compute the checksum
        command_str = func + " " + file_name;
        checksum_output = commands.getstatusoutput(command_str);
        print "checksum_output: ",checksum_output
        tokens = shlex.split(checksum_output[1]);
        checksum = tokens[0];
        print "   checksum = ",checksum," \n";
    # end for

# end if


In the code, the output from the checksum command is split (tokenized) based on spaces. Since the first token is the checksum – the part we want to capture and store in the extended file attribute – we take the first token in the list and store it in a variable.

The output from the code at this stage is shown below:

laytonjb@laytonjb-laptop:~$ ./test1.py
checksum_output:  (0, '4052e5dd3d79de6b0a03d5dbc8821c60  ./slides_fenics05.pdf')
   checksum =  4052e5dd3d79de6b0a03d5dbc8821c60  

checksum_output:  (0, 'cdfcadf4752429f01c8105ff15c3e24fa9041b46  ./slides_fenics05.pdf')
   checksum =  cdfcadf4752429f01c8105ff15c3e24fa9041b46  

checksum_output:  (0, '3c2ad544ba4245dc9e300afe79b81a3a25b2ff6e71e127724acd51124c47a381  ./slides_fenics05.pdf')
   checksum =  3c2ad544ba4245dc9e300afe79b81a3a25b2ff6e71e127724acd51124c47a381  

checksum_output:  (0, '0761eac4323d35a62c52f3c49dd2098e8b633724ed8dec2ee2de2ddda0874874a916b99287703a9eb1886af62d4ac0b3  ./slides_fenics05.pdf')
   checksum =  0761eac4323d35a62c52f3c49dd2098e8b633724ed8dec2ee2de2ddda0874874a916b99287703a9eb1886af62d4ac0b3  

checksum_output:  (0, '42674cebe76d0c0567cf1bed21008b005912f0df76990456b669ef3d3942e607d69079e879ceecbb198e846a042f49ee28c145f9b1dc0b4bb4c9ddadd25777c5  ./slides_fenics05.pdf')
   checksum =  42674cebe76d0c0567cf1bed21008b005912f0df76990456b669ef3d3942e607d69079e879ceecbb198e846a042f49ee28c145f9b1dc0b4bb4c9ddadd25777c5

The final step in the code is to create the command that sets the extended attribute for the file. I will create “user” attributes named “user.checksum.[function]”, where [function] is the name of the checksum command. To do this we need to run a command that looks like the following:

setfattr -n user.checksum.md5sum -v [checksum] [file]


where [checksum] is the checksum that we stored and [file] is the name of the file. I’m using the “user” class of extended file attributes for illustration only. If I were doing this in production, I would run the script as root and store the checksums in a namespace that normal users cannot modify, such as the “trusted” class of extended attributes (the “system” class is reserved for kernel uses such as ACLs).

At this point, the code looks like the following with all of the “print” functions removed.

#!/usr/bin/python

#
# Test script for setting checksums on file
#

import sys

try:
   import commands                 # Needed for getstatusoutput
except ImportError:
   print "Cannot import commands module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

try:
   import shlex              # Needed for splitting input lines
except ImportError:
   print "Cannot import shlex module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

if __name__ == '__main__':

    # List of checksum functions:
    checksum_function_list = ["md5sum", "sha1sum", "sha256sum", "sha384sum", "sha512sum"];
    file_name = "./slides_fenics05.pdf";

    for func in checksum_function_list:
        # Create command string to compute the checksum
        command_str = func + " " + file_name;
        checksum_output = commands.getstatusoutput(command_str);
        tokens = shlex.split(checksum_output[1]);
        checksum = tokens[0];

        xattr = "user.checksum." + func;     # attribute name, e.g. user.checksum.md5sum
        command_str = "setfattr -n " + xattr + " -v " + str(checksum) + " " + file_name;
        xattr_output = commands.getstatusoutput(command_str);
    # end for

# end if


The way we check whether the code is working is to look at the extended attributes of the file (recall the previous article for the details of the getfattr command).

laytonjb@laytonjb-laptop:~$ getfattr slides_fenics05.pdf
# file: slides_fenics05.pdf
user.checksum.md5sum
user.checksum.sha1sum
user.checksum.sha256sum
user.checksum.sha384sum
user.checksum.sha512sum


This lists the extended attributes for the file. We can look at each attribute individually. For example, here is the md5sum attribute.

laytonjb@laytonjb-laptop:~$ getfattr -n user.checksum.md5sum slides_fenics05.pdf
# file: slides_fenics05.pdf
user.checksum.md5sum="4052e5dd3d79de6b0a03d5dbc8821c60"


If you look at the md5sum in the earlier output listings, you can see that it matches the md5 checksum in the extended file attribute associated with the file, indicating that the file hasn’t been corrupted.

Ideally we should be checking the status of each command to make sure that it returned successfully. But as I mentioned earlier, that exercise is left up to the reader.

One other aspect we need to consider is that users may have legitimately changed the data. We should record the date and time when the checksums were computed and store that value in the extended file attributes as well. Then, before computing the checksum of a file to see whether it is corrupted, we check whether the time stamps on the file are more recent than the date and time when the checksums were originally computed.
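Recording that time stamp can use exactly the same machinery as the checksums themselves. A minimal sketch; the attribute name user.checksum.date is my own convention, not something the script above already sets:

#!/usr/bin/python

import time
import commands

file_name = "./slides_fenics05.pdf"

# Record when the checksums were computed in another extended attribute
date_str = time.strftime("%Y-%m-%dT%H:%M:%S")
command_str = "setfattr -n user.checksum.date -v " + date_str + " " + file_name
status, output = commands.getstatusoutput(command_str)
if status != 0:
    print "Failed to set date attribute:", output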

Summary

Data corruption is one of the most feared aspects of a storage admin’s life. This is why we do backups, replication, etc. – so we can recover the data if the original gets corrupted. One source of corrupted data is what is called bit-rot: a bit on the storage device goes bad, and the data using that bit either cannot be read or returns an incorrect value, meaning the file is now corrupt. But as we accumulate more and more data, and that data gets colder (i.e., it hasn’t been used in a while), performing backups may not be easy (or even practical) – so how do we determine whether our data is corrupt?

The technique discussed in this article is to compute checksums of the file and store them in extended file attributes. In particular, I compute five different checksums to give us even more data for determining whether the file has been corrupted. By storing all of the checksums in an additional location and ensuring that the stored values aren’t corrupt, we can compare the “correct” checksum to the checksum of the file. If they are the same, then the file is not corrupt. But if they differ and the file has not been changed by the user, then the file is likely corrupt.

To help illustrate these ideas, I wrote some simple Python code to show you how it might be done. Hopefully this simple code will inspire you to think about how you might implement something a bit more robust around checksums of files and checking for data corruption.

Jeff Layton is an Enterprise Technologist for HPC at Dell. He can be found lounging around at a nearby Frys enjoying the coffee and waiting for sales (but never during working hours).

Comments on "Checksumming Files to Find Bit-Rot"

njsharp

“if you have 12x 1TB drives your probability of encountering a URE is one (i.e. it’s going to happen)”

No! Take some easier numbers. If the chance of an error is 1/5 (0.2) on a single device, then the chance of no error is 4/5 (0.8). The chance of no error on two devices is 4/5 x 4/5 = 0.64 which means a 0.36 (1-0.64) chance of an error. And so on. The chance of an error NEVER reaches 1 until infinity devices.

Reply
    frank1985

    Huh, that never occurred to me, but the 1e14 chance of a URE would be just for one device – the odds of a URE would compound the more drives you pile on. So the URE odds for 2 drives = 1E14 x 1E14 = 1E28 following proper algebraic method. One very important reason for having multiple, smaller drives in a RAID – you almost assure that you would never hit a URE, but of course you back up just in case, even if it’s just to another SAN in another location.

    In fact I believe the chance of an error would never hit one, since the odds of an error would be (1E14)^(infinity) or 1E(infinity), at which point the odds of an error would be as close to zero as to be a non-event for all intents and purposes.

    Reply

      This is a really interesting article and I plan on reading it in more detail later (when I get to reading the Extended file attributes article!) :)

      I have one comment on the URE issue.
      I don’t claim to be an expert on bit rot, URE, or computer hardware. I do, however, know a little bit about BER (bit error rate) as it relates to my line of work.
      An error rate of 1 in 10^14 is usually an average!
      To interpret it, we imagine having a very large number of bits and counting all occurrences of URE in them. The error rate is then the number of URE divided by the (large number) of bits we’ve read. The key word here is large! For this measure to have any meaning, you must count a large number of bits.
      What I want to say is, in theory, if your drive consists of 3 bits (bear with me), and the manufacturer has reported an error rate of 1 in 10^14, there’s nothing to prevent all 3 bits from having a URE. It’s very unlikely but possible.
      The complete opposite is also possible: if you have a 12 TB hard drive and you read it bit by bit, you may find absolutely no URE’s! Having read all of the 12 TB except one bit and found no URE’s does NOT make the last bit any more likely to be in error. What usually makes us think this way is a cognitive bias known as the Gambler’s fallacy.

      I’m sorry if this seems like nitpicking or arrogant presumption on my part. I’m not undermining your expertise in any way. Thank you for the great article!

      Reply

      Please please please go find out something about statistics and then correct this article.

      A UBER of 1 in 1E14 is a UBER of 1E-14, not 1E14.

      If the UBER for one drive is 1E-14 then the UBER for two drives is still 1E-14. It doesn’t matter how many disks you have or how big they are – you still expect to get one unrecoverable bit error for every 1E14 bits read from it/them. The bit error rate is not in terms of how much data you have stored, but how much data you read.

      The UBER is an average error rate, not a deterministic process. That is, you expect to get one bit error for every 1E14 bits you read from the disk. If you read 1E15 bits from the disk, you expect ten bit errors. It might be none, it might be ten, it might be every single one of those 1E15 bits that are wrong, but if you do it over and over again you expect the average to be one out of every 1E14 bits.

      To calculate the probability of k bit errors after reading N bits you have to use the binomial distribution. GNU R is the only thing I can find that can calculate it at these sorts of probabilities and numbers of trials. It shows that after reading 1E14 bits, the probability of at least one bit error is 63%, at least two is 26%, at least three is 8%, at least four is 2% and so on. Alternatively, the probability of no errors after reading 1E14 bits is 37%, not 0.

      No matter how much data you read, the probability of having encountered an error never reaches 1.

      Reply
tarax

hi

Thank you very much for this interesting article ! Never thought of using xattr like this.

By the way, commands module is deprecated in recent versions of Python. Here is your script adapted to use the now “unified” subprocess module:


#!/usr/bin/python2.7
# ^^^
# Test script for setting checksums on file
#

import sys

try:
   import subprocess               # Needed for check_output
except ImportError:
   print "Cannot import subprocess module - this is needed for this application";
   print "Exiting..."
   sys.exit();

try:
   import shlex              # Needed for splitting input lines
except ImportError:
   print "Cannot import shlex module - this is needed for this application.";
   print "Exiting..."
   sys.exit();

print ""

if __name__ == '__main__':
    # List of checksum functions:
    checksum_function_list = ["md5sum", "sha1sum", "sha256sum", "sha384sum", "sha512sum"];
    file_name = "~/test_file";

    for func in checksum_function_list:
        # Create command string to compute the checksum
        command_str = func + " " + file_name;
        checksum_output = subprocess.check_output(command_str, shell=True);
        tokens = shlex.split(checksum_output);
        checksum = tokens[0];

        xattr = "user.checksum." + func;
        command_str = "setfattr -n " + xattr + " -v " + str(checksum) + " " + file_name;
        xattr_output = subprocess.check_output(command_str, shell=True);
    # end for

# end if

Reply
tarax

Hoped code tag would have preserved case… sorry folks

Reply
anandv

Hi Jeff, I have a much much simpler way that I am following right now.

Step-1:
cd <directory>
md5sum * > md5sum.txt

Step-2:
When I need to check the file,
md5sum -c md5sum.txt

Reply
x95tobos

“For example, you could make three copies of the checksum data and compute the checksum of these files. Then you compare the checksums of these three files before you read any data. If two of the three values are the same then you can assume that those two files are correct and that the third file is incorrect resulting in it being replaced from one of the other copies.”

This is exactly what Google does, more or less, with their data – a couple of years ago, the idea was to take a “majority vote” among 3, 5 and so on replicas.

PS: running scripts as “root” is NEVER a good idea – you may screw up all the permissions on the files you act on, and they may end up owned by root and therefore “invisible/untouchable” to all the other developers.

Reply
user31416

@anandv
* does not work with subdirectories or spaces in filenames.
find . -type f -print0 | xargs -0 md5sum

Reply

Or, you could spend some money on a decent card enabling you to run a RAID 5 *with* error correction.

Take a look on PAR2 (GNU) or ICE ECC (Windows).

Reply


While your calculations about bit errors/TB are correct, your interpretation is a bit off. The expectation is 1 error per 12TB, but since bit errors are a stochastic process, the number you will actually observe will follow a Poisson distribution with a rate of 1 error/12TB. The probability that you will get one or more bad bits out of 12TB is 1 - Pr(0 bad bits) = 0.63.

Reply

I want to first admit upfront that I am not a statistician and I make no claims about the statistics one way or the other by the author or any of its commenters. :-) I’m sure it is all correct!

That said, is bit rot, as described by the author, even a real world problem any more? I was under the impression that disks were so advanced now that even a single rotted bit can be detected upon a read and marked as such, telling the operating system that the drive had a “read error” at which point you are notified of the defect and can take action. So all it takes is that you just run a disk scan or defrag or do something else that causes a read across your entire drive and you will know. It doesn’t stop the bit from rotting, of course, but it is far from a silent killer.

Reply

Although the statistics part of the article is a little questionable, the “fix” is very thought provoking. In addition, as an individual who is learning Python, I really appreciate the way you described your thought processes in developing your Python program. I also really appreciated the “new” 2.7 update.

These are the type of articles and responses I really like. Thanks and I look forward to more.

Reply
