DB2 Performance - buffer pool and buffer quality
Question on performance and buffer pool quality. I have been monitoring the buffer pool quality in ST04 (buffer pool) and have noticed that the Overall Buffer Quality is 78%, the Data Hit Ratio is 65%, and the Index Hit Ratio is 95%. I have increased the size of the buffer pool (parameter BUFFPAGE) from 300,000 to 400,000 KB. After a week's monitoring I see that the buffer quality has not changed much; it is still around 80% and not reaching the 95% or higher that we would like to see.
Do you perhaps think that the buffer pool size is too big, or do I need to increase it more? The system is quite busy, so I don't think it's a case of the system not being used enough.
Can someone please shed some light on this?
The buffer quality is represented by the equation:
buffer quality = [(logical accesses - physical accesses) / logical accesses] x 100%
What are the figures for your logical and physical accesses? From these we can see whether your buffer size is large enough.
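To make the calculation concrete, here is a minimal sketch of that formula in Python (the figures used are placeholders for illustration, not actual counters from any system):

```python
def buffer_quality(logical_accesses: int, physical_accesses: int) -> float:
    """Buffer pool quality in percent: (logical - physical) / logical x 100."""
    if logical_accesses == 0:
        return 0.0
    return (logical_accesses - physical_accesses) / logical_accesses * 100.0

# Placeholder figures: 1,000,000 logical accesses, of which 220,000
# had to go to disk -> 78% quality, i.e. the range reported above.
print(round(buffer_quality(1_000_000, 220_000), 2))  # 78.0
```

The closer physical accesses get to zero, the closer the quality gets to 100%.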
I forgot to mention - how big is your system (no. of users, nationwide access, etc.)?
Just so I get a feel for its size in relation to the amount of data going through the buffers.
If you have increased your buffer size by 25% without any noticeable improvement, then the problem may lie elsewhere.
Are the optimiser analyse-statistics and optimiser update-statistics jobs scheduled in DB13? And how often do they run?
If you are having to read in large quantities of data before finding the record that you need, then buffer performance will decrease.
Also, in transaction ST04, under 'Detailed Analysis', select 'SQL request' to analyse the shared cursor cache - execute the following screen without changing the default values.
Sort on the column 'buffer gets per execution'.
Which tables have the highest values here?
Do you have any very large custom tables? Are these indexed properly?
Do you use the SD or CO-PA modules? Tables there are accessed in a different way and need optimising.
Stats are as follows:
no. of users - 300 (including nationwide)
job "Run_DBSTATC" scheduled daily after hours
job "Runstat_ALL" once a week (weekend)
Buffer pool stats
Logical Reads 795,571,728
Physical Reads 272,574,445
Physical Writes 680,172
Synchronous Reads 14,518,600
Synchronous Writes 5,593
Logical Reads 542,723,338
Physical Reads 6,678,312
Physical Writes 319,835
Synchronous Reads 3,922,968
Synchronous Writes 4,872
Size 10,240 KB
Quality 97.38 %
Heap Full 0
Size 20,480 KB
Quality 99.75 %
High-Water Mark 0 KB
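For reference, plugging those counters into the hit-ratio formula quoted earlier in the thread gives the following (this assumes the first block of reads is the data part and the second the index part, which lines up with the 65% data / 95% index ratios mentioned at the start of the thread):

```python
def quality(logical_reads: int, physical_reads: int) -> float:
    """Hit quality in percent: (logical - physical) / logical x 100."""
    return (logical_reads - physical_reads) / logical_reads * 100.0

# First block of reads (presumably the data pages):
data_quality = quality(795_571_728, 272_574_445)
# Second block of reads (presumably the index pages):
index_quality = quality(542_723_338, 6_678_312)

print(f"data:  {data_quality:.2f}%")   # ~65.74%
print(f"index: {index_quality:.2f}%")  # ~98.77%
```

So the index side looks healthy; the low overall quality is coming from the data reads.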
hope this helps
Any help with this problem yet?
Looking at your reads, I see that you have a lot of table reads. Look in ST04 and check the SQL cache. Sort on execution time and check the explain plans. See if there are any queries doing full table scans instead of using an index.
Then sort on rows read and do the same there.
That should bring down the number of reads you are doing.
Second, how long did you wait after the change? Overall you want to monitor it until the % no longer increases. As long as the % is still nicely increasing, it means that not enough has been read yet to fill the buffer pool.
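That warm-up check can be made mechanical: sample the quality periodically and only draw conclusions once consecutive samples stop improving. A minimal sketch (the sample values and the 0.5-point tolerance are hypothetical, not from any real monitoring run):

```python
def warmed_up(quality_samples, tolerance=0.5):
    """True once the last two quality samples differ by less than
    `tolerance` percentage points, i.e. the pool has stopped filling."""
    if len(quality_samples) < 2:
        return False
    return abs(quality_samples[-1] - quality_samples[-2]) < tolerance

print(warmed_up([40.0, 62.0, 74.0, 78.5]))  # False: still climbing
print(warmed_up([40.0, 62.0, 78.3, 78.5]))  # True: plateaued around 78%
```

Only the plateau value is meaningful for deciding whether the buffer pool is big enough.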
How much memory are you currently allocating to the bufferpool? How much memory does your system have?
If you cannot increase your buffer pool further and you do not have queries that just do full table scans, then you might need to look at splitting up your buffer pool.
Hope this helps.
One of the fundamental issues you do not seem to be concerned with is whether end users experience good response times. In my 8 years of experience, I've seen systems with lousy hit ratios and perfectly happy users, and systems with 99.99% hit ratios grinding to a halt.
Once again, performance is about service levels and end user experience (response time), not about abstract metrics such as hit ratios.
Will users ever really be happy with the response time?
Seriously though, I would imagine that poor buffer pool hit ratios would have an impact on DB request times, which would then impact response times.
1) How do you get to know your logical and physical accesses? Am I right to say that physical access refers to fetching data from the database, and logical refers to the buffer?
2) To find out "how big is your system / no. of users, nationwide access", USMM can do it. If so, read /forums/viewtopic.php?t=34913&highlight=usmm
3) The earlier point - "One of the fundamental issues you do not seem to be concerned with is whether end users experience good response times. In my 8 years of experience, I've seen systems with lousy hit ratios and perfectly happy users, and systems with 99.99% hit ratios grinding to a halt. Once again, performance is about service levels and end user experience (response time), not about abstract metrics such as hit ratios." - makes sense. One point: 8 years of Basis experience... wow.
Although response time for the end user is a measurement, my experience has also shown that it differs from user to user.
For example, where I work they do not use certain reports they asked us to create (we just found out about this) because they run too slow.
They never told us about this.
If your hit ratios are low, then you need to look into why they are low. Any improvement in them has the potential to improve response times for users.
However, you also need to look at the SQL statements run against the database. Our hit ratios, for example, have been dropping since we started improving statements (i.e. rewrites, added indexes, ...). Currently I do not even really look at the ratios, since I know they are skewed by statements doing partial table scans etc. Have a statement that does a table scan, have it run 1000 times per hour, and of course your hit ratios go up.
If you feel that your statements are good and you have low hit ratios, then you might want to look at having multiple buffer pools. Give indexes good hit ratios; they should be the most important anyway. If after doing all that your ratios are still low, you need more memory.
Of course, all this work (since overall it will be a lengthy process) will depend on what the response time is. Especially compare the database response time to the overall response time. If it is acceptable, then you might not want to do this work (except for the statements, of course). But you will then need to keep a close eye on database response time. Once it starts going bad, you might need to start doing this work.
As IBM stated (I hope I get this right), 80% of the performance improvement is in the first 20% of the work. This would be things like ensuring statements are not doing table scans and that good indexes are used (sometimes you see that an index is used, but looking closer reveals that only MANDT is used in the index. To me this is the same as a table scan, unless you have a multitude of clients in your system).
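The MANDT-only point can be illustrated with a back-of-the-envelope selectivity estimate (a sketch under a uniform-distribution assumption; the distinct-value counts below are made up for illustration):

```python
def index_selectivity(distinct_values_per_column):
    """Rough fraction of table rows an equality lookup over the
    indexed columns must still touch, assuming values are uniformly
    distributed and the lookup matches one value per column."""
    fraction = 1.0
    for distinct in distinct_values_per_column:
        fraction /= max(distinct, 1)
    return fraction

# Index on MANDT only, in a single-client system (1 distinct value):
# it filters nothing, so the lookup is effectively a table scan.
print(index_selectivity([1]))           # 1.0
# Composite index on MANDT plus a column with 10,000 distinct values:
print(index_selectivity([1, 10_000]))   # 0.0001
```

The more selective the additional index columns are, the smaller the fraction of the table each lookup touches, and the less pressure on the buffer pool.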
I hope all this helps. Listen to the users, but just because they do not say something does not mean it is not there. Also look at all the data both the database and SAP give you, to get an idea of what the end user sees with regard to performance.