FC and IBM DB2 Tablespaces
Posted: Thu Dec 06, 2012 6:41 am
I have been reading up on using cache systems to increase the performance of database software such as IBM's DB2. I am presently running Win 7 Pro SP1 x64 (on an i5-2400 processor) with 16 GB RAM and DB2 LUW 10.1 Express-C, which is limited to 4 GB of memory and 2 of the 4 available cores.
I do NOT presently have an SSD installed.
My storage is 2 x 500 GB 7200 rpm HDDs (C: and D:), with 5 tablespaces each split via automatic storage into 2 containers, one on each drive (10 containers in total, 5 per drive).
The storage group containers range from roughly 1 GB to 15 GB on each drive. Total database size is about 60 GB uncompressed.
So I am interested in your comments on any expected performance increase from using FC with DB2 SQL data. This is not an OLTP system but a data warehouse, so there is considerable sequential reading via indexes; I would hazard a guess at a 70/30 sequential vs random read mix. Writes, however, are primarily random.
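To frame the question, here is a rough back-of-envelope estimate of what a RAM block cache could do for average read time, given the 70/30 read mix above. All the latency figures and the 50% hit ratio are illustrative assumptions on my part, not measurements of FC or of my drives:

```python
# Back-of-envelope estimate of average per-page read time with a RAM block
# cache in front of 7200 rpm HDDs. All numbers below are assumptions
# for illustration only, not measurements.

HDD_SEQ_MS = 0.1    # assumed per-page cost of a sequential HDD read
HDD_RAND_MS = 10.0  # assumed per-page cost of a random HDD read (seek + rotation)
RAM_MS = 0.01       # assumed per-page cost of a cache hit in RAM

SEQ_FRACTION = 0.7  # the guessed 70/30 sequential-vs-random read mix

def avg_read_ms(hit_ratio):
    """Average per-page read time for a given cache hit ratio."""
    miss_ms = SEQ_FRACTION * HDD_SEQ_MS + (1 - SEQ_FRACTION) * HDD_RAND_MS
    return hit_ratio * RAM_MS + (1 - hit_ratio) * miss_ms

# No cache vs. an assumed 50% hit ratio (e.g. ~12 GB of spare RAM
# caching hot pages of a ~60 GB database).
print(f"no cache:  {avg_read_ms(0.0):.2f} ms/page")
print(f"50% hits:  {avg_read_ms(0.5):.2f} ms/page")
```

Even under these made-up numbers, most of the benefit comes from absorbing the random reads, since they dominate the miss cost.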
Given that DB2 writes containers as large individual files based on the tablespace configuration (as above), how does FC handle file sizes in excess of 10 GB? Also, if the container for indexes is split into 2 files (one on each of the C: and D: drives, i.e. the index tablespace exceeds 20 GB in total), how will FC handle reading portions of very large files like this?
I defragment these container files, so once contiguous they very rarely (if ever) fragment again, and then mostly only when a resize is required.
Given that my version of DB2 can only use 4 GB of RAM and I have 16 GB available, I am wondering whether a product like FC will increase read performance. I look forward to any comments or suggestions you may have.
Many thanks, Fin.