Hi:
Somebody told me that if I format my database disk with 64K NTFS blocks, I can get better performance. Is that true? Is there any problem in SQL with this block size? What do you think?
Well that was meant to be...
Say what?|||Somebody told me that I will get better performance by formatting the disks for database files with a 64K block size in the NTFS file system.
Questions:
Is that true?
Is there any problem in SQL with this block size?|||Do you mean instead of FAT?
NTFS is more efficient than FAT
But this is at the OS level...and I've never seen anything but NTFS...|||Theoretically, it makes sense. What I've been taught is that larger block allocations should be faster at the expense of wasted disk space. Hence, the smaller the files stored on the disk, the greater the amount of wasted space used to store them. Smaller block allocations better utilize space at the expense of slower read/write performance. It would be interesting to see if it provides any improvement. I've been wanting to test this for a while but have never had a chance. It would be interesting to see the results from different block sizes on one and multiple hard drives with different RAID configurations, especially if you were to compare read and write performance.|||Just talked to my MS insiders. They said SQL Server writes in 8K chunks, so the larger allocation on the disk won't matter as far as SQL Server is concerned. However, it'll help performance for the OS and pagefile.
So I guess anything >=8K shouldn't matter to SQL.|||SQL does not write directly to the disk. Everything is preallocated into 64K extents. The only "writing" occurs to memory in that amount (8K).|||I agree with you, but when SQL Server reads or writes an extent between memory and disk due to a checkpoint or page fault, it works with eight pages of 8K each (8K being the SQL Server data page size). Doesn't that reduce the number of I/Os?
:confused:|||I agree with you, but when SQL Server reads or writes an extent between memory and disk due to a checkpoint or page fault, it works with eight pages of 8K each (8K being the SQL Server data page size). Doesn't that reduce the number of I/Os?|||Personally, I would not set the allocation size of the disk to 64K. The main reason is that you can (or at least should) have queries that are only interested in single pages, or fewer than 8 pages at a time. Notably, code tables that store data for integrity purposes would be stored in mixed extents. Also, some index seeks should cause single pages to be read into memory. This would cause the O/S to read 56K of extra data every time you do one of these index seeks. I am not sure what the writing side of it would be, but I suspect it would be just as bad. A general rule of thumb from the Oracle side of the world is that if you believe it will really help you (i.e. you really have major performance problems that cannot be solved otherwise), you should make the O/S allocation size equal to the database block size. Hope this helps.
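For what it's worth, you can see how many 8K pages a particular query actually touches with SET STATISTICS IO; a minimal sketch, using a hypothetical dbo.Orders table and key value:

SET STATISTICS IO ON;

-- A hypothetical single-row index seek; the Messages output reports logical reads,
-- physical reads, and read-ahead reads for dbo.Orders, all counted in 8K pages
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE OrderID = 10248;

SET STATISTICS IO OFF;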
|||Also, some index seeks should cause single pages to be read into memory. This would cause the O/S to read 56K of extra data every time you do one of these index seeks.
Pages read don't have anything to do with allocation size. If the optimizer needs to put 8K worth of data into cache, only 1 page will be put into cache, not 8 (8K x 8 = 64K = extent = suggested allocation size).
And if you really want to optimize the performance of a SQL box by messing with allocation size, set the file growth to the same number as the allocation unit size. Preallocate the database files to a multiple of the allocation unit size by using the formula:
cast((file_size + allocation_unit - 1) / allocation_unit as int) * allocation_unit
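To make that formula concrete, here is a minimal T-SQL sketch of the rounding arithmetic; the variable names and example sizes are hypothetical, and integer division performs the truncation that the cast expresses above:

-- Round a requested file size up to the next multiple of the allocation unit
DECLARE @allocation_unit_kb int, @requested_kb int, @preallocate_kb int;

SET @allocation_unit_kb = 64;       -- 64K NTFS allocation unit
SET @requested_kb = 500000;         -- requested initial size (about 488 MB), not a 64K multiple

SET @preallocate_kb =
    ((@requested_kb + @allocation_unit_kb - 1) / @allocation_unit_kb) * @allocation_unit_kb;

SELECT @preallocate_kb AS preallocate_kb;   -- returns 500032, the next multiple of 64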
|||Pages read don't have anything to do with allocation size. If the optimizer needs to put 8K worth of data into cache, only 1 page will be put into cache, not 8 (8K x 8 = 64K = extent = suggested allocation size).
True, but you are likely to spend more time reading or updating the data than inserting it. If the disk allocation size were 64K, the disk would be doing a lot more work than it has to for each physical read.
Think of it like Costco or BJ's Wholesaler. You might want a single can of coke, but your only option at Costco is to buy the whole case, because that is the smallest allocation size. You can toss 23 of the cans, since you are only interested in one, but it still causes you more work than before. (somehow in the back of my mind, I just know this metaphor is going to bite me but good)
Also, with an 8K allocation size, you can turn off the torn page detection option on your databases. I have never turned that off, so I am not sure how much you would save by doing that.
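For reference, torn page detection is a per-database option; a minimal sketch of toggling it, assuming a hypothetical database named MyDB:

-- Turn torn page detection off for a hypothetical database MyDB...
ALTER DATABASE MyDB SET TORN_PAGE_DETECTION OFF;

-- ...and back on
ALTER DATABASE MyDB SET TORN_PAGE_DETECTION ON;

-- On SQL Server 2005 and later, checksum page verification is generally preferred instead
-- ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;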
|||Damn...now I really HAVE to order Kalen's book...
And don't you just love his disposition!
Good stuff guys|||I also read this:
SQL Server performance can be improved when databases are created on NTFS volumes with a 64-KB extent size.
Reference: SQL Server 2000 System Administration (Microsoft Press).
64K is the maximum size in the list when formatting a drive in WIN/NT.
Why is the large size recommended in the reference, then?|||Dang! I could be very wrong, then. Maybe I will have to pick up that book today. I will have to see if BOL has that, too.
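If you want to see what allocation unit (cluster) size a volume was actually formatted with, one hedged option, assuming a Windows version that ships fsutil, xp_cmdshell enabled, and data files on a hypothetical drive D:, is to check the "Bytes Per Cluster" value:

-- Report NTFS volume details for drive D:; "Bytes Per Cluster" is the allocation unit size
EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo D:';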
|||Any I/O operation to a disk-drive has the following three components of wait-time:
Seek time: When the read/write heads are not positioned at the correct cylinder, they must be moved to the proper position. This is the slowest, by far, of the three delays. (Operating-systems routinely sort pending I/O requests so that the r/w heads' movements are orderly, so your request must "wait its turn.")
Rotational latency: Once the r/w heads are positioned at the correct cylinder, we must wait for the disk to spin to the correct sector.
Transfer time: When the correct sector spins around, the drive reconnects to the data-bus and data is transferred to/from RAM. But, depending on the hardware, it's possible that as much as a full cylinder's worth of data might be moved in just one rotation.
A large block-size might be advantageous on a very large disk simply because it requires less "housekeeping" information to manage it, but I seriously doubt that it would meaningfully affect the only component it realistically could: transfer-time, already the fastest of the three.
Operating systems routinely "buffer" information, reading more than they need to in hopes that the next I/O request can be satisfied without doing I/O at all. DBMSes normally inform the operating-system of the nature of their requests (for example, "I'm doing a big sequential read," or, "I'm doing random access work involving small blocks"). This is far-and-away the most effective strategy... avoidance!
The type of hardware you have can make an enormous difference... probably more, imho, than diddling with block-sizes. Having copious amounts of RAM ("hey, chips are cheap now...") is the single biggest factor. Having fast and redundant I/O channels, and hardware such as SCSI that can have multiple I/O transfers in-progress at the same time, will make the most difference.
Instead of diddling, measure. Find out what your various system processes are waiting on. Find out if, and how often, and for how long, user-requests are being delayed. Then respond accordingly. Don't "guess."
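In that spirit, a minimal sketch of where such measurement might start on SQL Server, assuming a version that has the DMVs below (on SQL Server 2000, DBCC SQLPERF(WAITSTATS) and ::fn_virtualfilestats are rough equivalents):

-- Top wait types accumulated since the server last started (or the stats were cleared)
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Per-file I/O counts and stall times, to see whether reads or writes are actually waiting on disk
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       num_of_reads,
       io_stall_read_ms,
       num_of_writes,
       io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);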
|||GREAT STUFF all...
but isn't this really only a concern for MASSIVE amounts of data...like many terabytes?|||GREAT STUFF all...
but isn't this really only a concern for MASSIVE amounts of data...like many terabytes?
Probably so, but at least it's one helluva interview question.:eek:|||Heh. I don't think anyone is actually advocating doing any of this in practice. We are merely answering the question as given. And a .00001% increase in performance is still an increase, right? ;-)|||Maybe if RogerWilco had used 64K allocation it wouldn't have taken 100+ hours to add his non-nullable BIT column...LOL!|||Cluster size also affects fragmentation. The smaller the size, the more fragmented each file becomes if its size is greater than the cluster size. Given the autogrowth setting for database files, it may be very beneficial to use a multiple of the cluster size when allocating the initial file size, as well as when specifying the growth value; see the sketch below (and I hope by now nobody uses percent for file growth ;)).|||At least I got a good discussion out of the subject, right? Thanks to all for your answers, but I still have my doubts :D I'm joking|||Maybe if RogerWilco had used 64K allocation it wouldn't have taken 100+ hours to add his non-nullable BIT column...LOL!
It took that long because it was rolling back.
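On the file-growth point, a minimal sketch of spotting percent growth and switching to a fixed increment, assuming SQL Server 2005 or later for sys.database_files; the MyDB and MyDB_Data names are hypothetical:

-- Run in the database in question: is_percent_growth = 1 flags files still growing by percent
SELECT name, size, growth, is_percent_growth
FROM sys.database_files;

-- Switch a hypothetical data file to a fixed 64MB growth increment instead
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 64MB);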