Firstly, I would like to be up front and point out that I work for Diskeeper EMEA, a company that produces V-locity 3, a product designed to address the points raised in your post. Having said that, I am the Technical Director here in the UK and don't intend for this to sound like a sales pitch.

I very much agree with the post by continuum. His chart is a very handy overview of when to and when not to defragment a file system, depending on the storage technologies being used. I would like to expand a little further, though, if I may.

During the traditional defragmentation process, files are moved around on the virtual volume, and this can have unwanted consequences if you are using certain storage technologies:

With SAN replication - likelihood of additional data replication traffic.
With snapshots/CDP - likelihood of additional storage requirements for the data that is defragmented/moved, plus snapshot-related performance lag.
With thin provisioning - likelihood of additional storage requirements for the data that is defragmented/moved. In other words, you can see your thin-provisioned disk grow faster towards its maximum provision as a result of moving files during defragmentation.
With de-duplication - potential for additional de-duplication overhead: moving files during defragmentation can create duplicate blocks that the de-duplication engine then has to remove again.

Whilst V-locity 3 has the ability to act as a traditional defragmenter, you can also turn the defragmentation feature off and use the more important IntelliWrite facility. This is designed to PREVENT fragmentation from occurring in the first place. What it does is leverage the Windows write driver so that it looks more intelligently at the NTFS volume and finds a large enough area of contiguous (non-fragmented) free space to write the file to as a single non-fragmented file (or in as few fragments as possible). Because of the way this works, it is perfectly safe to use with the storage technologies mentioned above, as it prevents fragmentation from occurring at the point the file is written, instead of moving files around to clean up fragmentation after it has occurred.

This matters when you consider the I/Os that have to travel down the storage stack from the virtual NTFS volume to the SAN. If Windows writes a file to the NTFS volume and fragments it into 10 pieces, that is 10 separate I/Os that need to travel down the storage stack to the SAN. Given that Windows can conceivably split files into far more fragments than that, and that fragmentation tends to accumulate over time, you can easily see how this can have a negative effect on I/O queue lengths at the SAN level.

So, in conclusion: even though there are instances where you definitely DON'T want to defragment virtual volumes, fragmentation prevention will keep the machines running at peak I/O performance whilst not interfering with storage technologies like replication, snapshots, de-duplication, etc.

My apologies if it sounded a little 'salesy'. I hope that helps. You are more than welcome to come back to me if you would like to discuss this further.
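The fragment-count point can be sketched with a toy model. This is an illustrative sketch only, NOT how NTFS allocation or IntelliWrite actually work internally: it simply contrasts a naive writer that scatters a file across the first free clusters it finds with a prevention-style writer that looks for one contiguous free run first, and counts the fragments (and hence the I/Os needed to read the file back) each approach produces.

```python
def write_first_fit(volume, size):
    """Naive writer: grab the first `size` free clusters, wherever they are.

    Returns the number of fragments (contiguous runs) the file landed in,
    which is also the number of I/Os needed to read it back.
    """
    placed, fragments, in_run = 0, 0, False
    for i, cluster in enumerate(volume):
        if placed == size:
            break
        if cluster == 0:          # free cluster
            volume[i] = 1
            placed += 1
            if not in_run:
                fragments += 1
                in_run = True
        else:
            in_run = False
    return fragments

def write_contiguous(volume, size):
    """Prevention-style writer: look for one free run of >= `size` first."""
    run_start = run_len = 0
    for i, cluster in enumerate(volume):
        if cluster == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == size:   # found a big enough contiguous hole
                for j in range(run_start, run_start + size):
                    volume[j] = 1
                return 1          # one fragment, one I/O
        else:
            run_len = 0
    return write_first_fit(volume, size)  # no run big enough: fall back

# A volume with small scattered holes plus one large free region at the end
# (1 = used cluster, 0 = free cluster):
vol_a = [1, 0, 1, 0, 1, 0, 1, 0] + [0] * 10
vol_b = list(vol_a)

print(write_first_fit(vol_a, 5))    # 4 fragments -> 4 I/Os to read the file
print(write_contiguous(vol_b, 5))   # 1 fragment  -> 1 I/O
```

The same five-cluster file costs four read I/Os when scattered but only one when placed in a contiguous run, which is the queue-length effect described above, just at toy scale.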