Wednesday, December 3, 2014

Talks at UKOUG TECH 2014 with CERN Speakers

This is a short blog post with a list of the talks at the UKOUG TECH 2014 conference in Liverpool with CERN speakers. Come and say hi, we hope to see many of you there!


(*) slides can be downloaded from: http://cern.ch/canali

Tuesday, November 4, 2014

Life of an Oracle I/O: Tracing Logical and Physical I/O with SystemTap

Topic: This post is about tracing logical and physical reads in Oracle using SystemTap. You will find here a few examples illustrating the main mechanisms used by Oracle to perform physical and logical I/O and learn how to build SystemTap scripts to further explore and troubleshoot Oracle I/O.

Introduction: Performing input/output from and to the storage media, i.e. physical I/O, is one of the critical tasks underlying all database engines. When accessing data stored in RAM in the database cache we talk of logical I/O. Physical I/O is performed by the database using system calls to the O/S kernel; the type of system calls used depends on database settings, the O/S and the type of storage. Logical I/O is performed by the database processes using Oracle kernel functions. Oracle instruments the timing of physical I/O calls with the wait event interface, while logical I/O time is generically accounted for as CPU time. Tracing the I/O operations performed by Oracle processes, together with the details provided by the Oracle wait event interface, is a powerful technique to investigate Oracle I/O internals. With the help of a few real-life examples we investigate the mechanisms Oracle uses to access storage, how the I/O operations correlate to the wait event interface and what the wait event time actually measures for the most common wait events associated with I/O.

Building the tracing tool: Investigations of what Oracle and the O/S do when an I/O operation is performed require specialized tools. SystemTap is a dynamic tracing tool for Linux that allows tracing of both the O/S and the Oracle functions (Linux kernel functions, system calls and Oracle database kernel functions). In the following you can find a description of a SystemTap script that can be used to trace Oracle logical and physical I/O: it is composed of probes that attach to the function calls of interest and can read function parameters, memory and CPU registers.
SystemTap requires relatively recent kernels to be able to probe userspace: either the utrace or uprobe_events functionality needs to be available. This is the case, for example, with RHEL/OL 6.5 and higher, but not with RHEL 5. Moreover, for tracing system calls the kernel debuginfo package needs to be installed.
More details on this and on how to build a lab environment can be found in this article. The script used in this blog post to trace Oracle logical and physical I/O and the wait event interface is: trace_oracle_logicalio_wait_events_physicalio_12102.stp (for Oracle 12.1.0.2). A version for 11.2.0.4 is also available: trace_oracle_logicalio_wait_events_physicalio_11204.stp. Before moving to the examples and studies of Oracle I/O, I would like to review the three main parts of the script: tracing physical I/O, logical I/O and wait events.

Tracing physical I/O: Oracle ultimately issues system calls to execute I/O operations. The type of calls depends on database settings, the O/S type and the type of storage. In this article you can find examples of Oracle on Linux using ASM storage on block devices. Notably, direct NFS, local storage and ASM with asmlib are left out and may be addressed in future studies. System calls can also be traced in Linux using the strace(1) utility. However, in this article you can see how SystemTap probes can be built to gather information on the system calls of interest: pread, pwrite, io_submit and io_getevents. Additional useful SystemTap probes for the underlying block I/O layer are: ioblock.request, ioblock_trace.request and ioblock.end (note, these are not system calls, but rather tracepoints for function calls inside the Linux kernel).

Probe name | Purpose | Selected parameters
syscall.pread/pwrite | synchronous I/O: read/write from/to a file descriptor | fd (file descriptor number), offset, count. Return value: bytes read/written.
syscall.io_submit | asynchronous I/O: submit blocks for processing | nr (number of I/Os); for each I/O: file descriptor, offset, bytes, opcode (0 for read, 1 for write). Return value: number of I/O operations performed.
syscall.io_getevents | asynchronous I/O: read events from the completion queue | min_nr (minimum number of I/Os to read), timeout for event waiting. Return value: number of I/Os reaped; for each I/O: file descriptor, offset, bytes.
ioblock.request, ioblock_trace.request | I/O requests sent to the block device interface kernel layer | devname, sector, size, rw, block_io structure address
ioblock.end | return from the block device interface kernel layer | devname, sector, rw, block_io structure address

Note added, August 2015: Systemtap probes on kernel.trace("block_rq_issue") and kernel.trace("block_rq_complete") can also be used to investigate the block I/O interface with the added advantage of not requiring kernel debuginfo. You can find examples of such probes in the page with download material or on GitHub.
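
As a starting point, here is a minimal sketch (not the full script) of entry probes on two of the calls listed in the table above; it prints the pread system calls and the corresponding block I/O requests of the process passed to stap with -x <pid>:

# a minimal sketch: trace pread system calls and block I/O requests of the traced process
probe syscall.pread {
    if (pid() == target())
        printf("pread: fd=%d, offset=%d, count=%d\n", fd, offset, count)
}

probe ioblock.request {
    if (pid() == target())
        printf("ioblock.request: devname=%s, sector=%d, size=%d, rw=%d\n", devname, sector, size, rw)
}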

Tracing logical I/O: This is about providing the history and details of the logical I/O operations, which is when Oracle reads a data block from memory (buffer cache). Logical I/O operations are a super-set of physical I/O: if Oracle does not find data in memory (buffer cache) it will issue a physical read. You can find a few examples of this mechanism in action later in this post.
Oracle instrumentation provides extensive information on logical reads in several V$ views, such as V$SYSSTAT, V$SESSTAT and V$SYSMETRIC. Logical read data is also available from SQL*Plus with "set autotrace on" and from tkprof reports of 10046 traces. Logical reads come in two main "flavors": consistent reads and current reads. Oracle uses consistent reads when reading data at a given system change number (SCN); current reads are used when performing DML operations on a block. Consistent reads can be traced in Oracle using the events 10200 and 10201. More details can be found in the presentation by Julian Dyke on logical I/O.
In this article we show how to trace logical I/O by attaching SystemTap probes to a set of Oracle kernel functions. The role of the functions kcbgtcr and kcbgcur for tracing consistent and current reads has already been discussed by Alexander Anokhin in his excellent blog articles on DTrace LIO. More recently I have investigated this topic and found it useful to also trace the functions kcbzib, kcbzgb and kcbzvb.
A summary of the main findings on these functions is in the table here below. Note: the expressions %rdi, %rsi, %r8 indicate the values of the corresponding CPU registers. With SystemTap the register value can be read with the function register, for example: register("rdi"). The CPU registers are used to extract the function call arguments, as SystemTap cannot provide the call arguments when no debug symbols are available for the executable. Notably, the calling conventions for x86-64, as documented in www.x86-64.org/documentation/abi.pdf, state that the integer parameters of a function call are passed in the following registers (in order): %rdi, %rsi, %rdx, %rcx, %r8 and %r9.


kcbgtcr
Purpose: Kernel Cache Buffers Get Consistent Read. Note: key function for consistent reads.
Selected parameters:
  tbs#=user_int32(%rdi)
  rel file n#=user_int32(%rdi+4) >> 22 & 0x003FFFFF
  block#=user_int32(%rdi+4) & 0x003FFFFF
  data_object_id#=user_int32(%rdi+8)
  object_id#=user_int32(%rdi+12)
  Note: for bigfile tablespaces: block#=user_int32(%rdi+4)

kcbgcur
Purpose: Kernel Cache Buffers Current. Note: key function for current reads.
Selected parameters:
  tbs#=user_int32(%rdi)
  rel file n#=user_int32(%rdi+4) >> 22 & 0x003FFFFF
  block#=user_int32(%rdi+4) & 0x003FFFFF
  data_object_id#=user_int32(%rdi+8)
  object_id#=user_int32(%rdi+12)
  Note: for bigfile tablespaces: block#=user_int32(%rdi+4)

kcbzib
Purpose: kcbZIB should stand for: Kernel Cache Buffers, Z (kcbz.o is a module of physical I/O helper functions), IB: Input Buffer. Note: it indicates that Oracle will perform physical read(s) into the buffer cache.

kcbzgb
Purpose: the suffix GB in kcbZGB should stand for: Get (space for) Buffer. Oracle allocates space in the buffer cache for a given block (typically before I/O operations).
Selected parameters:
  tbs n#=%rsi
  rel file n#=%rdx >> 22 & 0x003FFFFF
  block#=%rdx & 0x003FFFFF
  data_object_id#=%r8
  object_id#=%r9
  Note: for bigfile tablespaces: block#=%rdx

kcbzvb
Purpose: invoked after Oracle has read a given block. It is part of the call chain for checking blocks governed by the db_block_checking parameter. Name guess: Kernel Cache Buffers Verify Block (thanks to @FritsHoogland for pointing this out). This function is used both for reads via the buffer cache and for direct reads.
Selected parameters:
  tbs n#=%rsi
  rel file n#=%rdx >> 22 & 0x003FFFFF
  block#=%rdx & 0x003FFFFF
  data_object_id=user_int32(%rdi+24)
  Note: for bigfile tablespaces: block#=%rdx
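
To show how the register values in the table translate into SystemTap code, here is a minimal sketch of a probe on kcbgtcr for the smallfile tablespace case; as in the rest of this post, it assumes the oracle executable can be resolved by SystemTap and that the session of interest is passed with stap -x <pid>:

# a minimal sketch of a consistent read probe, following the decoding in the table above
probe process("oracle").function("kcbgtcr") {
    if (pid() == target()) {
        rdba = user_int32(register("rdi")+4)    # relative data block address at %rdi+4
        printf("kcbgtcr: tbs#=%d, rfile#=%d, block#=%d, data_object_id#=%d\n",
               user_int32(register("rdi")), rdba >> 22 & 0x003FFFFF,
               rdba & 0x003FFFFF, user_int32(register("rdi")+8))
    }
}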

Note: as I write this, only uprobes supports return probes for userspace functions, and this requires kernel version 3.10 or higher. This means that this functionality is not available with RHEL/OL 6.x, but it is available, for example, on RHEL 7.x (3.10.x kernel).

Tracing wait events: The wait event interface is probably one of the best features in Oracle performance tuning as it provides a quantitative method to identify performance issues and bottlenecks. Cary Millsap has published inspiring work on the subject.
For our purposes it suffices to say that wait events are instrumentation points where selected Oracle kernel actions are timed. In particular, Oracle times the start and end calls of most physical I/O operations, therefore providing a good way to understand how much time is spent during physical I/O. There are, however, limitations to using the wait event interface to study I/O latency; in particular, in the examples section you can find some interesting challenges related to timing asynchronous I/O events.
Wait event tracing is typically activated in Oracle using the 10046 event or by calling DBMS_MONITOR.SESSION_TRACE_ENABLE. In this article we get the wait event details by attaching SystemTap probes to the relevant Oracle kernel functions, in particular to kskthbwt and kskthewt, to gather information respectively at the start and at the end of each wait event. Enabling wait event tracing with event 10046 at level 8 (or with other methods) is a way to externalize the wait event timing information (as well as other SQL processing details), not to enable the timing itself: that is governed by the database parameter TIMED_STATISTICS, which is, and should be, set to TRUE in modern Oracle databases. One key piece of information that the Oracle kernel functions kskthbwt and kskthewt give us is a pointer into the X$KSUSE fixed table (or rather into the underlying memory structure in the SGA), which is the base table of V$SESSION; this opens the road to correlating the wait event with much useful information on the session executing the SQL. Resolving the pointer in register r13 to the base value of X$KSUSE and computing the offsets for the fields of interest requires additional work, described in a previous article (see also the script: ksuse_find_offsets.sql).


kskthbwt
Purpose: Kernel service Kompile thread begin wait. This function is called at the start of an Oracle wait event. kslwtbctx is its parent function call and marks the start of a wait event. The suffix "bwt" stands for "begin wait".
Useful information when probing the function:
  register r13 or r14 (behavior is version-dependent) -> pointer into the X$KSUSE (V$SESSION) array in the SGA, with a fixed offset (version-dependent)
  register rsi -> timestamp of the beginning of the wait (in microseconds)
  register rdx -> wait event number

kskthewt
Purpose: Kernel service Kompile thread end wait. This function is called at the end of an Oracle wait event. kslwtectx is its parent function call, marking the end of a wait event. The suffix "ewt" stands for "end wait".
Useful information when probing the function:
  register r13 -> pointer into the X$KSUSE (V$SESSION) array in the SGA, with a fixed offset (version-dependent)
  register rdi -> timestamp of the beginning of the wait (in microseconds)
  register rsi -> wait event number
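
A minimal sketch of probes on these two functions is shown here; it is simplified compared to the full script (in particular it does not resolve the X$KSUSE fields) and measures the elapsed wait time with SystemTap's own clock rather than from the Oracle timestamps:

# a minimal sketch: print begin/end of each wait event of the traced process
global wait_start

probe process("oracle").function("kskthbwt") {
    if (pid() == target()) {
        wait_start[tid()] = gethrtime()
        printf("begin wait: event#=%d, timestamp=%d\n", register("rdx"), register("rsi"))
    }
}

probe process("oracle").function("kskthewt") {
    if (pid() == target() && tid() in wait_start) {
        printf("end wait:   event#=%d, elapsed=%d us\n",
               register("rsi"), (gethrtime() - wait_start[tid()]) / 1000)
        delete wait_start[tid()]
    }
}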


Running the script:
Below you find a few examples of traces for common Oracle I/O access patterns. The traces have been obtained by running stap -v trace_oracle_logicalio_wait_events_physicalio_12102.stp -x <pid>, where <pid> is the process id of the session being traced. The output is piped into sed -f eventsname.sed, which takes care of translating the event numbers into the associated event names. The file eventsname.sed is generated by running the script eventsname.sql. It is important to regenerate the event names before testing, because the wait event numbers can change without warning, for example with migrations to newer versions but potentially also after applying a patch. The system used to collect the traces in the following examples is Oracle 12.1.0.2 running on OEL 7.0 with UEK3 under VirtualBox.


Example 1: single-block random reads

This example is about single-block (random) reads. This access path to data is very important for many OLTP workloads as it is used for table access via an index key. In Figure 1 here below you can see a snippet of a trace file for a query of the type: select <col_list> from table <tab> where <indexed_col>=<val>.

Figure 1: Oracle single-block random read and db file sequential read wait event

The key points are:
  • Oracle issues a logical read on one of the blocks needed to retrieve data for the SQL of this example
    • additional details are provided in the logical read call, where we can see that Oracle wants to read block#=2505675 from tbs#=7, belonging to obj#=32174 (which is the table segment in this example)
  • The block is not found in the cache, so a new block is allocated in the buffer cache and a physical read is initiated
  • The database marks the start of a wait event for 'db file sequential read'
  • The physical read request for a block becomes a pread call to the O/S
    • The I/O request enters the kernel via a system call; any function encountered after the system call is a kernel function (such as ioblock.request), until the system call returns and execution gets back to userland, which means the Oracle executable in this case.
  • This read call is passed on to the block I/O interface to read 8192 bytes (1 database block) from the block device (/dev/sdl in the example of Figure 1)
  • The block interface and then the OS call return the read data to the Oracle kernel
  • The Oracle wait interface marks the end of the wait and computes the elapsed time
    • the wait event is 'db file sequential read' and the parameters are: p1 is the file number, p2 is the block number, p3 is the number of blocks read (1 block).
    • from the timestamp values in the trace you can see that the duration of the db file sequential read wait is dominated by the time it took to do the pread call, with a small overhead (note also that the overhead could have been more significant if we had high CPU utilization on the DB server, for example because of workload from other Oracle sessions)
  • We can conclude that in this example the measured wait time for db file sequential read corresponds to physical read latency.
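
A simple way to cross-check this observation on your own test system is to time the pread calls of the traced process independently of Oracle; a minimal sketch, reusing the stap -x <pid> filtering described above:

# a minimal sketch: measure pread latency, to compare with db file sequential read durations
global pread_start

probe syscall.pread {
    if (pid() == target())
        pread_start[tid()] = gethrtime()
}

probe syscall.pread.return {
    if (pid() == target() && tid() in pread_start) {
        printf("pread latency: %d us\n", (gethrtime() - pread_start[tid()]) / 1000)
        delete pread_start[tid()]
    }
}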

Example 2: sequential I/O into the buffer cache with synchronous I/O

This example is about multi-block reads into the cache. This is an access path that is used, for example, when Oracle performs a full table scan. There are multiple ways in which Oracle can access the storage to perform full scans; in this example you see the case where Oracle uses synchronous I/O (i.e. the pread call) to perform O/S reads, while Examples 3 and 4 cover sequential reads with asynchronous I/O. In Figure 2 here below you can see a snippet from tracing the workload of a query of the type: select <col_list> from table <tab>, where a full scan of <tab> is performed by Oracle.

Figure 2: Oracle sequential I/O in a case where reads are performed using pread O/S calls

The key points of the trace in Figure 2:
  • Oracle issues a consistent read for tbs#=7 block#=2621443 and finds it in the cache
  • Oracle issues a consistent read for the next block, does not find it in the cache, so it prepares to read a list of blocks from storage in one physical read. The list is 124 blocks long (it has been shortened in this example for clarity)
  • The database marks the start of a wait event for "db file scattered read"
  • The physical read requests become a single pread call to the OS for 1015808 bytes (124 blocks)
  • This request moves down the I/O chain to the block I/O interface, which splits it into 2 reads from the underlying block device, as on the system used for these tests the maximum size of an I/O is 524288 bytes (see also the sketch after this list)
    • As investigated by Frits Hoogland: the submitted I/O request lands in the I/O scheduler scatter/gather list, where it is broken down to the I/O size advertised by the block device.
  • The I/Os are completed by the block I/O interface
  • The pread call returns
  • The wait event interface times the end of the wait
    • The wait event is db file scattered read and the parameters are: p1 is the file number, p2 is the block number, p3 is the number of blocks.
    • From the timestamp values in the trace you can see that the duration of the db file scattered read wait is mostly accounted for by the time it took to do the pread call, with a small overhead 
  • From the trace of calls to the Oracle kernel function kcbzvb we get additional confirmation of the blocks that have been read
  • We can conclude that in this example the wait time for db file scattered read corresponds to I/O latency.
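
As referenced in the list above, here is a minimal sketch that aggregates the sizes of the requests reaching the block I/O layer, which makes the split at the maximum I/O size directly visible; the probe and variable names are those listed in the block I/O table of the previous section:

# a minimal sketch: histogram of block I/O request sizes per device for the traced process
global req_size

probe ioblock.request {
    if (pid() == target())
        req_size[devname] <<< size
}

probe end {
    foreach (dev in req_size) {
        printf("block I/O request sizes for device %s:\n", dev)
        print(@hist_log(req_size[dev]))
    }
}
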
Example 3: sequential I/O into the buffer cache with asynchronous I/O O/S calls

In Figure 3 you can see a snippet from the trace of the SQL: select <col_list> from table <tab>, where Oracle performs a full scan of table <tab>. This is the same query that was used for Example 2 above; the difference is that the trace in Figure 3 has been taken at a later time in the execution of the query, when the Oracle process switched from using synchronous I/O with pread calls to using the asynchronous I/O interface. Asynchronous I/O is implemented in Linux with io_submit and io_getevents calls, to send the I/O requests and then to reap the results. Although Oracle uses the asynchronous interface to perform read operations in this example, the final result is very similar to what is described in Example 2 (i.e. with synchronous I/O): the Oracle wait event is the same, "db file scattered read", the reads are also performed into the buffer cache and the I/Os are reaped with blocking calls, therefore making the process effectively synchronous. A more 'aggressive' use of the asynchronous I/O interface by Oracle is found with direct reads, which are discussed in Example 4 below.

Figure 3: Oracle performing sequential I/O reads into the buffer cache using asynchronous I/O calls

The key points are:
  • Oracle issues a consistent read for tbs#=7 block#=2522760; it does not find it in the cache, so it prepares to read a list of blocks into the buffer cache. The list is 128 blocks long and has been shortened in this example
  • The database marks the start of a wait event for "db file scattered read"
  • The physical read requests are passed on to the OS as a call to io_submit with 2 requests for a total of 1 MB (that is 128 blocks): the first request is a read (opcode 0) of 65536 bytes, the second a read of 983040 bytes (that is 1 MB - 64 KB); see also the sketch after this list
  • These requests are passed to the block I/O interface, which splits the largest request into 2 reads from the underlying block device (/dev/sdi in this example). The read of 64 KB is passed on unchanged in size to /dev/sdr
  • Oracle calls io_getevents to reap the results of the asynchronous I/O requests that have just been submitted and waits. The timeout.tv_sec field is set to 600, which indicates this is a blocking wait (which means io_getevents waits for 600 secs (tv_sec) and 0 nano secs (tv_nsec)) for min_nr=2 of IOs to finish. 
  • The block device interface returns the data upstream.
  • The io_getevents call returns the 2 I/O reads.
  • The wait event interface marks the end of the wait. The wait event in this case is db file scattered read. 
    • The wait event parameters are: p1 is the file number, p2 is the block number, p3 is the number of blocks.
    • The wait event duration is basically the time it took to do the io_submit and io_getevents calls plus a small overhead.
    • Note that although Oracle has used the asynchronous interface, it has also waited for all the I/Os to complete, therefore making the I/O essentially synchronous
  • From the calls to kcbzvb we get additional confirmation of the blocks that have been read.
  • Conclusion: the wait time for db file scattered read in this example is not a precise measure of the I/O latency, as two reads on two different devices were submitted in parallel and the total time was reported as if it was a single read.
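
As mentioned in the list above, here is a minimal sketch of entry probes on the two asynchronous I/O system calls used in this example. The variable names are those provided by the SystemTap syscall tapset and may vary with the SystemTap version; they can be checked with stap -L 'syscall.io_*':

# a minimal sketch: print io_submit and io_getevents calls of the traced process
probe syscall.io_submit {
    if (pid() == target())
        printf("io_submit: nr=%d\n", nr)
}

probe syscall.io_getevents {
    if (pid() == target())
        printf("io_getevents: min_nr=%d, timeout.tv_sec=%d\n", min_nr,
               timeout_uaddr ? user_long(timeout_uaddr) : -1)   # tv_sec is the first field of the timespec
}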

Example 4: full scans with direct reads and asynchronous I/O

In this paragraph you can find two examples of Oracle I/O for the case of direct reads on block storage configured with ASM. The same query that was used in Examples 2 and 3 above is also used here: it is a query that performs a full table scan. In this example, differently from Examples 2 and 3, Oracle uses serial direct reads to access data. Serial direct read is a feature introduced in 11gR2 for optimizing full segment scan operations by bypassing the buffer cache layer, similarly to what happens with parallel queries. Oracle chooses when to use serial direct reads vs. cached reads based on a few parameters. In first approximation, direct reads are preferred when the table is "big" compared to the size of the buffer cache; the actual mechanism is more complex and can potentially change between Oracle versions. You can read more on this topic in this blog post by Tanel Poder.
For the particular setup I used to run these tests I had to force the use of serial direct reads before running the query; this was done with: alter session set "_serial_direct_read"=always;
Note that alter session set "_serial_direct_read"=never; can be used to disable serial direct reads and force Oracle to use cached reads (that is the type of I/O you have seen in Examples 2 and 3 above). The default value of the underscore parameter "_serial_direct_read" is "auto".
For the investigations in this example we use a slightly different SystemTap probe set from the one used in the previous examples. I have removed the tracing of logical I/O and block I/O to reduce clutter, and added a new probe on the userspace function io_getevents_0_4 of libaio (libaio is the Linux library that implements asynchronous I/O). This is because serial direct reads use non-blocking calls to reap I/Os, and these calls are best traced at the libaio interface, for reasons described by Frits Hoogland in this blog post. The SystemTap script represented in Figures 4a and 4b is: trace_oracle_wait_events_asyncio_libaio_12102.stp (a version for 11.2.0.4 is available at trace_oracle_wait_events_asyncio_libaio_11204.stp).
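
A minimal sketch of such an additional libaio probe is shown here; the library path is an assumption (adjust it for your system), and the argument positions follow the x86-64 calling convention for io_getevents(ctx, min_nr, nr, events, timeout):

# a minimal sketch of a libaio userspace probe; /usr/lib64/libaio.so.1 is an assumed path
probe process("/usr/lib64/libaio.so.1").function("io_getevents_0_4") {
    if (pid() == target()) {
        # x86-64 calling convention: rsi=min_nr, rdx=nr, r8=pointer to the struct timespec timeout
        tmo = register("r8")
        printf("libaio io_getevents_0_4: min_nr=%d, nr=%d, timeout.tv_sec=%d\n",
               register("rsi"), register("rdx"), tmo ? user_long(tmo) : -1)
    }
}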

Figure 4a: Trace of the wait event and I/O calls of Oracle during a full table scan with serial direct read and asynchronous I/O, this is what happens in the first phases of the full scan

The key points of Figure 4a are:
  • Oracle requests I/O in chunks of contiguous storage space of 1MB. This is to reduce the overhead of seek time and potentially make use of readahead when available. 
    • The value of 1MB for the read request comes from the parameter db_file_multiblock_read_count=128 multiplied by the block size of 8 KB. 
  • Oracle submits one I/O of 1 MB with a call to io_submit and then calls io_getevents_0_4 in the libaio library in non-blocking mode to try to reap the I/O. This happens twice, with no results. Oracle then submits another I/O of 1 MB and tries 4 times to reap I/Os in non-blocking mode, again with no results (the I/O operations have not yet finished).
  • The wait event timing starts just before the calls to io_getevents, which are used to reap the I/O results this time in blocking mode (with a timeout of 600 seconds)
    • The wait event in this case is: "direct path read"
    • Oracle reaps the two I/Os in two subsequent calls to io_getevents
  • After reaping the I/Os Oracle marks the end of wait for the event. 
  • The result is that 2 reads of 1 MB each have been performed 
  • The wait event interface does not report the correct number of I/Os performed:
    • the "direct path read" wait in this example reports one read of 128 blocks (1 MB), while 2 MB were read
  • The elapsed time for the wait event "direct path read" does not measure the I/O latency
    • Oracle is only timing the blocking calls to io_getevents, which start after the I/Os are submitted. 
    • Oracle is timing 2 reads together.
The use of serial direct reads in Oracle is more complex than the simple example of Figure 4a. Frits Hoogland has discussed the internals of direct reads in Oracle in more detail in this presentation. In particular, Oracle uses an adaptive mechanism for direct reads with the goal of driving the storage system to high throughput when possible. This is implemented by increasing the number of outstanding I/Os when calling the asynchronous I/O interface. Internally this makes use of I/O slots. Note, from a comment on Oracle diagnostic event 10353 we have additional indirect information on this topic: "Slots are a unit of I/O and this factor controls the number of outstanding I/Os".
Another finding of Frits' is that there are I/O operations performed for direct reads that are not timed by the wait event interface. This happens because Oracle can issue io_getevents with timeout=0, that is, reaping the I/Os without having to block. You can see these mechanisms in action in the trace of Figure 4b, which is for the same full scan as reported above in Figure 4a, just at a later time, when the mechanism of adaptive serial direct reads has tuned Oracle I/O requests to be "more aggressive" in the use of storage resources.

Figure 4b: Trace of the wait event and I/O calls of Oracle during a full table scan with serial direct read and asynchronous I/O, this is an example of what happens when the adaptive mechanism increases the number of outstanding I/Os for optimizing the throughput

The key points of Figure 4b are:
  • This is the same full scan as shown in Figure 4a, the difference is that the trace in Figure 4b has been taken at a later time of the full scan operation (the table has 1 million blocks) when adaptive serial direct reads has increased the number of I/O slots (outstanding asynchronous I/O operations).
  • You can see that several io_submit operations are performed before the wait event timing is started. 
  • Oracle is trying to keep several outstanding I/Os in the queue: up to 9 outstanding asynchronous I/O operations can be seen in Figure 4b, while only 2 were present in the simpler case of Figure 4a, that is at the beginning of the full scan.
  • Several attempts to reap I/Os with io_getevents operations are run by the Oracle process before the wait event timing is started
    • 2 reads of 1 MB each are reaped by one io_getevents call before the wait event timing has started, therefore these reads will not be accounted for in the wait event interface.
    • It is important to note that the timeout for this particular io_getevents call (as well as for the io_getevents_0_4 calls in libaio which have not returned any I/O) is set to 0 by the Oracle process, which means that Oracle does not wait for I/O results to be available, it will only reap them if they are available.
  • Between the moment at which Oracle starts timing the wait event "direct path read" and the end of the wait event, 3 io_getevents operations are performed (with the timeout set to 600, therefore blocking until I/O reads are available) and a total of 9 reads of 1 MB each are completed
    • The wait event interface does not report all the I/O that has been performed:
      • In the wait event parameters we can only find trace that 1 MB has been read (128 blocks), while in reality 9 MB have been read.
      • This is in addition to the 2 MB read outside of the wait event instrumentation.
    • The elapsed time for "direct path read" does not accurately reflect I/O latency but rather the wait time for reaping I/Os, as seen in Figure 4b.

Example 5: asynchronous I/O for random reads

Oracle will optimize single-block reads when it can do prefetching and/or batching of I/O. This is an optimization for random I/O using the asynchronous I/O interface: Oracle can group multiple I/O requests into a single io_submit call, as opposed to sending them one after the other as in the case of Example 1, where the OS call pread was used. The example in Figure 5 comes from running Kevin Closson's SLOB on a 12c database. In the execution plan please note step 3, where Oracle uses the operation "TABLE ACCESS BY INDEX ROWID BATCHED" to access table data.

Figure 5: execution plan for a SLOB query illustrating batch table access

Here below in Figure 6 you can see a snippet of the Oracle I/O tracing when running the SLOB workload. You can see that the asynchronous I/O interface is being used to perform random I/O. In the case of Figure 6, 126 requests are submitted in a single io_submit operation and subsequently reaped by io_getevents, which waits for all the requests to finish. The wait event in this case is "db file parallel read" and the calculated wait time is the time for the batch of 126 requests to finish. Therefore the wait time of the db file parallel read event does not provide an accurate measurement of the I/O latency for single-block random reads.

Figure 6: Oracle performing I/O for random reads with asynchronous I/Os, for the case of batched reads. The trace has been edited, replacing a long list of similar trace lines (typically 126 lines varying only in the block#) with "..."

Example 6: tracing DML

In Figure 7 you can see a snippet of tracing Oracle logical I/O and wait events when inserting a row into a table, followed by a commit. No physical I/O was performed in this operation (the table had been previously cached). You can see that most of the block reads have been done in current read mode, as expected for DML. Moreover, the blocks involved belong both to the table being inserted into (you can identify them in Figure 7 as belonging to tbs#=11) and to rollback segments (belonging to tbs#=2). Note that for simplicity the table in this example has no indexes.

Figure 7: Logical I/O trace of Oracle performing an insert and commit operation

Example 7: tracing log writer
In Figure 8 you can see the trace of the log writer during a commit operation. This is the same commit operation traced in Example 6 (see also Figure 7 above). The wait event is "log file parallel write" and its duration is timed by Oracle from the beginning to the end of the asynchronous I/O operations performed. Therefore the measured wait time of "log file parallel write" is not an accurate measure of block write latency, as its duration depends on the number of operations performed. In the case of Figure 8 you can see that the log writer writes to two distinct block devices, which is consistent with the use of 1 logfile member per redo log group allocated on an ASM diskgroup with normal redundancy: the log writer sends 2 write I/O requests simultaneously, in order to write the primary and mirror extents of the logfile, and waits for both of them to finish.

Figure 8: log writer tracing during a commit operation

Conclusions
Dynamic tracing, and in particular SystemTap, provides the opportunity to investigate Oracle physical and logical I/O and opens the way to drill-down investigations of Oracle's I/O through the various layers of the Oracle engine down to the Linux kernel layer. This post describes some of the methods, tools and examples of how these investigations can be performed.
The examples from real-world Oracle workloads highlight some of the most common cases of Oracle logical and physical I/O and the connections between the wait event interface and the O/S calls issued by Oracle processes. The Oracle wait event interface can provide useful data for measuring I/O latency and in general for troubleshooting I/O problems. This is true for random reads instrumented by the wait event "db file sequential read". The information provided by wait events related to asynchronous I/O, however, requires additional attention: such events typically do not provide an accurate measure of the I/O latency. Moreover, this post provides examples where some of the asynchronous I/Os for Oracle direct reads are not instrumented at all by the wait event interface.

Acknowledgements: I would like to thank Tanel Poder and Frits Hoogland for original ideas that have inspired this work. Special thanks go to Frits Hoogland for providing comments and additional ideas from his deep expertise on the subject.

Download the scripts discussed in this post from the webpage with download material or from GitHub.

Monday, September 22, 2014

SystemTap into Oracle for Fun and Profit

Topic: This post is about using SystemTap for investigating and troubleshooting Oracle RDBMS. In particular you will learn how to probe Oracle processes and their userspace functions. These techniques aim to be useful, as well as fun to learn, for those keen on peeking under the hood of the technology and improving their effectiveness in troubleshooting and performance investigations.

Introduction


Userspace probing is a very powerful technique that can be used to complement the available Oracle performance and profiling data, such as V$ views and 'event 10046' trace files, for advanced troubleshooting and investigations into the Oracle kernel. These types of investigations have been available for several years already, mostly on Solaris, with the introduction of DTrace (see for example Tanel's blog).
Dynamic tracing tools that allow kernel and userspace probing are also gaining momentum in the Linux world thanks to recent additions to the kernel. Several tools are available to provide the interface for collecting, aggregating and printing dynamic tracing data. SystemTap is one of the more mature tools, however more tools are now emerging, including 2 ports of DTrace. For a review of this topic see Brendan Gregg's presentation at LinuxCon 2014.
I first learned about the possibility of using SystemTap for userspace tracing from Frits Hoogland and have since been quite excited by the possibilities that this opens, in particular for investigating Oracle workloads for performance tuning and advanced troubleshooting.
I was also quite happy to discover that the techniques described in this article also work under RHEL 6.5: this is definitely very good news for me, as it means that I can start using SystemTap on many of the existing systems without waiting for upgrades to the very latest kernels.

SystemTap documentation has additional details on the implementation of userspace probes: with Linux kernels higher than 3.5 (such as when using RHEL 7.0, OL 7.0 or OL 6.5 with UEK3) SystemTap will use the UPROBES module for userspace tracing, while with older kernels SystemTap can use the UTRACE kernel extensions if available. SystemTap is able to provide userspace probing both with UTRACE and with UPROBES; most other dynamic tracing tools seem to work only with UPROBES and therefore only with more recent versions of Linux. As reported by Frits in the above-mentioned article, you will need SystemTap 2.5 or higher for userspace tracing. As I write this, SystemTap 2.6 has just come out.

This is how to check if UTRACE extensions are configured:
# grep CONFIG_UTRACE /boot/config-`uname -r`
CONFIG_UTRACE=y

This is how to check if UPROBES are available:
# grep CONFIG_UPROB /boot/config-`uname -r`
CONFIG_UPROBES=y
CONFIG_UPROBE_EVENT=y

In the following you will find a few examples, ideas and test code to get started with SystemTap userspace probing of Oracle. The scripts discussed in this blog can be found at this link.


Deep Dive: SystemTap reads Oracle wait event interface data and X$ tables


You will see in this paragraph how to hook SystemTap into the Oracle wait event interface. This provides a way to link together the two types of instrumentation, SystemTap probe-based data and Oracle-based wait event interface data, and therefore opens the way for more advanced investigations.
The main implementation detail is to create a SystemTap probe into the Oracle kernel at the 'right point' and make it read the information we want.
As discussed in previous work I have published about DTrace and Oracle, kskthewt is an Oracle kernel function that can be used for this purpose: it is called by the Oracle kernel at the end of each wait event, presumably to update the relevant performance counters. Our first step is to create a probe that starts executing once the Oracle executable steps into the function kskthewt. The next step is to make SystemTap read the information we want. What is available are CPU registers and memory areas. Some CPU registers are used to pass arguments to the function, as detailed in the x86-64 calling conventions. In other cases the CPU registers can contain useful information 'left over' from parent function(s) processing. Reading directly from memory areas, and in particular from the SGA, opens the way to most of Oracle's instrumentation data.
Additional facts that I have found recently, in collaboration with Frits Hoogland, and that help in building a probe on the Oracle function kskthewt are:
  • When execution jumps into the function kskthewt, the CPU register R13 turns out to point into the X$KSUSE memory structure (with an offset). R12 seems to point to the same address most of the time, but I have seen some cases when it didn't, so I prefer to use R13.
  • X$KSUSE is a base fixed table for V$SESSION and therefore carries a lot of very interesting information. X$KSUSE is allocated in memory as a segmented array in the SGA, see also Julian Dyke's presentation on SGA internals for more details on this interesting topic.
  • V$SESSION.SADDR, the base address of a given session's V$SESSION data and of X$KSUSE, can be derived from the value of R13 by subtracting an offset. The value of the offset depends on the Oracle version and port and can be found 'experimentally' by using SystemTap or gdb (see more details in the script directory: trace_oracle_events_debug.stp).
  • The fields of interest of X$KSUSE (i.e. V$SESSION column values) can be found from the base value of X$KSUSE with an offset calculated with a query from X$KQFCO and X$KQFTA (see the script: ksuse_find_offsets.sql).
  • The register RDI (which is mapped as arg1 in the SystemTap probe) is found to be set to the timestamp value. This is a timestamp of the wait event, the same value as what can be seen in a trace file for the event 10046, if activated.
  • The register RSI (that is arg2 in SystemTap probes) is found to be set to the wait event number.

If you put all of this together you can write a SystemTap script to collect and print wait event and selected V$SESSION details. An example of such a script is demonstrated in the figure here. The scripts can be downloaded from here as a zip file; direct links: trace_oracle_events_11204.stp and trace_oracle_events_12102.stp, tested with Oracle 11.2.0.4 on RHEL 6.5 and with Oracle 12.1.0.2 on OEL 7 respectively.

Figure 1: Script and example output of a SystemTap probe that collects wait event and V$SESSION details from Oracle kernel functions and SGA memory.

Note: In the example above you can see that SystemTap output is piped to sed -f eventsname.sed. This is to transform event id numbers into strings. The file eventsname.sed is generated by running the script eventsname.sql.
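
To make the facts listed above more concrete, here is a minimal sketch of a probe on kskthewt, heavily simplified compared to the downloadable scripts: it prints the timestamp and event number from the function arguments and the raw r13 pointer, to which the version-dependent offset still needs to be applied in order to reach the X$KSUSE (V$SESSION) fields:

# a minimal sketch: print wait event end details of the traced process
probe process("oracle").function("kskthewt") {
    if (pid() == target())
        printf("end of wait: timestamp=%d, event#=%d, r13=%x\n",
               u64_arg(1), u64_arg(2), register("r13"))   # rdi=arg1, rsi=arg2 as noted above
}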


Systemtap aggregations and filtering of wait event data


One of the strengths of SystemTap, as with the rest of the dynamic tracing tools, is the ability to do on-the-fly aggregations and filtering. This helps reduce the amount of data collected, therefore minimizing the footprint of the tracing operations, while still producing meaningful reports and real-time tracing. Building on the results discussed above on how to write SystemTap probes hooking into the wait event interface, you can now add aggregations and filtering to compute wait event histograms with microsecond precision.
One simple and powerful result that you can reproduce easily is to write a script that effectively reproduces/back-ports the functionality of the 12c feature of microsecond wait event histograms. This can be very useful when studying events of short duration such as random disk reads from solid state devices. In addition to aggregation, with custom filtering you can aggregate wait event details for a given user or on a given file, or any other custom filter condition. Overall SystemTap allows for much freedom in programming our probes, which is quite empowering.
Two example scripts can be downloaded from here; direct links: histograms_oracle_events_11204.stp and histograms_oracle_events_12102.stp

Figure 2: Script and example output of a SystemTap probe that computes microsecond precision wait event histograms.

Another, alternative way to build a SystemTap probe for gathering and optionally aggregating and filtering Oracle wait event latency data is described in the following. For this you also need an additional piece of research on Oracle kernel function calls: the function kews_update_wait_time is called just before kskthewt with the CPU register RSI set to the wait time in microseconds (register RSI is also available in SystemTap probes as arg2). See also the details reported in this previous work. The result is that you can build a probe computing event histograms that is independent of the Oracle version. An example script is: histograms_oracle_events_version_independent.stp
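
A minimal sketch along these lines, simplified compared to the downloadable script, aggregates the wait time read from register RSI at kews_update_wait_time and prints a logarithmic histogram when the script is stopped:

# a minimal sketch: microsecond wait time histogram for all wait events of the traced process
global wait_time_us

probe process("oracle").function("kews_update_wait_time") {
    if (pid() == target())
        wait_time_us <<< register("rsi")    # wait time in microseconds, as noted above
}

probe end {
    print(@hist_log(wait_time_us))
}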

Figure 2: Script and example output of a SystemTap probe that computes microsecond precision wait event histograms and is independent of the Oracle version.


Tracing Oracle logical I/O with SystemTap userspace probes


You can use SystemTap userspace probes to trace Oracle logical I/O too. The main research underlying this probe comes from Tanel Poder (search for the DTrace-based script qer_trace.sh) and the extensive work of Alexander Anokhin.
By hooking on the Oracle kernel function kcbgtcr we can get the details of the logical reads, or rather cache consistent reads, as explained by Alexander Anokhin: kcbgtcr = Kernel Cache Buffer Get Consistent Read, and the first argument of the function call points to a memory structure with the details of the tablespace, relative file number, block number and object number of the consistent read. We can use all this to build a profiler probe: trace_oracle_logical_io_basic.stp

probe process("oracle").function("kcbgtcr") {
    printf("tbs#=%d, rfile=%d, block#=%d, obj#=%d\n",user_int32(u64_arg(1)), user_int32(u64_arg(1)+4) >> 22 & 0x003FFFFF, user_int32(u64_arg(1)+4) & 0x003FFFFF, user_int32(u64_arg(1)+8))
}

From there you can also build probes that perform aggregates and compute statistics of the logical read operations; see for example: trace_oracle_logical_io_count.stp
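
For instance, here is a minimal sketch, a simplified variant of the scripts referenced above, that counts consistent reads per object:

# a minimal sketch: count consistent reads per object and print the top 10 at the end of the run
global lio_count

probe process("oracle").function("kcbgtcr") {
    if (pid() == target())
        lio_count[user_int32(u64_arg(1)+8)]++    # index by obj#, as decoded in the probe above
}

probe end {
    foreach (obj in lio_count- limit 10)
        printf("obj#=%d, consistent reads=%d\n", obj, lio_count[obj])
}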

Figure 3: Script and example output of a SystemTap probe to capture and aggregate details of logical I/O (for consistent read) in Oracle.


Systemtap can integrate Oracle wait event data with OS tracing


SystemTap's strength and first use case is to trace system calls. You can use this to go deeper with Oracle tracing by combining Oracle wait event interface data (collected with SystemTap as shown above) with data coming from SystemTap OS probes. 
In the following you can see an example based on Oracle using ASM, where storage is made visible to the OS as block devices (asmlib is not used in this example). The I/O calls issued by Oracle processes in this case will typically be pread and pwrite for synchronous single-block I/O, and io_submit and io_getevents for multi-block and/or asynchronous I/O. At a lower level, all I/O to block devices can be traced as ioblock requests.
You can start exploring the relationship between Oracle wait events, I/O system calls and block I/O with this example starter script from: trace_oracle_iocalls_12102.stp

Figure 4: SystemTap probes for OS I/O tracing and Oracle wait event tracing. This allows getting an integrated view of the actions underlying an Oracle I/O event and the related OS actions.

In Figure 5 here below you can see an example of tracing Oracle I/O where single-block (synchronous) I/O is performed via a pread call to the OS. This is a typical access path for Oracle random I/O. On the OS side the pread call is cascaded to the block device stack as io_block_request. In the Oracle wait event interface the I/O operation is accounted for as a db file sequential read wait, with indication of the file number and block number (p1 and p2 parameters).

Figure 5: Oracle performing single-block I/O profiled using SystemTap probes. This provides end-to-end tracing of the I/O operation: we can see the OS call details and the corresponding Oracle wait event data.

The Linux asynchronous I/O interface is often used by Oracle for multi-block I/O. This is mainly for performance reasons and can only be done when the underlying filesystem allows it (which is the case in the examples reported here, as we are using ASM). A study of Oracle async I/O would take us too far from our main subject; we will limit the discussion to some main observations and an example of how you can use SystemTap probes to further investigate this topic.
With AIO the kernel is actually doing the I/O; the user process just asks for an I/O to be done and checks back to get the requested information. When investigating async I/O you need to trace at least two calls: one for submitting the I/O requests (io_submit) and one for reaping the results (io_getevents). The I/O requests are then passed on to the block I/O layer. At this stage they can be split into more operations of smaller size, as is the case in the examples of Figure 6. See also Martin Bach's post on the topic of "Increasing the maximum I/O size in Linux". More details on tracing Oracle I/O can be found in Frits Hoogland's excellent work on profiling Oracle with gdb and study on multiblock I/O.


Figure 6: SystemTap probe example showing the profiling of Oracle I/O in a case of multi-block reads where Oracle has chosen to perform asynchronous I/O. The I/O requests at the OS level are submitted using io_submit and reaped with io_getevents. SystemTap also allows seeing what happens at the block device level, where the I/Os are split into smaller chunks of maximum size 512 KB on the system used for this test.


Lab setup and documentation


Here below you can find some pointers on how to setup a test environment to experiment with the techniques and scripts discussed here above:

- Download and install Oracle Virtual Box
- Download OL 7.0 (or OL 6.x) for example from https://edelivery.oracle.com/linux
- Notably install also the kernel-devel rpm

If you want to use OS probes with SystemTap download and install the debuginfo and debuginfo-common packages for the kernel you are using. The rpms can be found from https://oss.oracle.com/ol7/debuginfo/ and https://oss.oracle.com/el6/debuginfo/ respectively for OEL 7 and OEL 6. Similarly install libaio-debuginfo and libaio-devel if you want to trace OS calls for asynchronous I/O.
Please note that if you want only to run probes for the Oracle userspace functions you can skip the installation of the kernel and libaio debuginfo packages.

The scripts discussed here require systemtap 2.5 or higher. Consider uninstalling systemtap if already installed with an older version.
Download the latest version of systemtap from https://sourceware.org/systemtap/ftp/releases/ (version 2.6 as I write this).
To install SystemTap from the downloaded source run:

./configure
# the output will tell you if additional OS packages are needed, in case install them and repeat
make
make install

As post-installation check run:
stap --help

A good point to start with the SystemTap documentation is the beginners guide, see also the language reference and tapset reference manual.

Download and install Oracle version 12.1.0.2 or 11.2.0.4.
The software can be downloaded from https://edelivery.oracle.com/
Pre-installed VMs with Oracle database are also available for download from OTN (for example at this link)
Installation guides, besides the Oracle documentation, are also available at http://www.oracle-base.com/articles/12c/articles-12c.php and in the racattack wiki book.


Conclusions


Dynamic tracing makes available userspace-level probes into the Oracle kernel and Linux probes at the OS kernel level, which can be used to complement Oracle's performance data, notably data from the wait event interface and the statistics counters. These techniques can considerably extend the amount of data available when investigating performance issues and for advanced troubleshooting, therefore helping the troubleshooter to use systematic approaches to problem solving and overall to be more successful and consistent.

You have seen in this short article a few pointers and get-started examples (download from this link or from GitHub). There is a lot of potential in applying and extending these techniques for investigating Oracle and for advanced troubleshooting of Oracle and other complex platforms. I hope to cover some more examples in future posts. For the time being, I hope you enjoyed going through the listed examples, and I wish all the readers who have been patient enough to make it this far fun and success in trying these techniques on their systems!


Acknowledgements


I would like to thank the authors of many of the original ideas that have inspired the material discussed here: Brendan Gregg, Tanel Poder and Frits Hoogland. Additional thanks go to Frits Hoogland for reviewing this post.

Tuesday, August 5, 2014

Scaling up Cardinality Estimates in 12.1.0.2

Topic: Counting the number of distinct values (NDV) for a table column has important applications in the database domain, ranging from query optimization to optimizing reports for large data warehouses. However the legacy SQL method of using SELECT COUNT (DISTINCT <COL>) can be very slow. This is a well known problem and Oracle 12.1.0.2 provides a new function APPROX_COUNT_DISTINCT implemented with a new-generation algorithm to address this issue by providing fast and scalable cardinality estimates.

Is SELECT COUNT( DISTINCT <col> ) slow? 

The answer, as is often the case, is 'it depends'. Let's start with an example to get an idea of what the key variables are in this context. I will describe a few simple tests here below using a table called TESTTABLE (see SQL creation scripts later on in this post) of 10M rows, 4 GB in size and with 3 columns: a unique numeric ID, a column of 200 bytes with cardinality 10, and another column of 200 bytes with cardinality 10M. I have cached the table in the buffer cache to simplify the interpretation of the test results. As a reference, the sqlplus output here below shows that it takes about 1 second to count the number of rows in the table and that no physical reads are performed while doing so, because the table is cached in the buffer cache.

SQL> select count(*) from testtable; 

  COUNT(*)
----------
  10000000

Elapsed: 00:00:00.98

Statistics
-------------------------------
     526343  consistent gets
          0  physical reads
             ...
          0  redo size


Let's now count the NDV (number of distinct values) for the 3 different columns. SELECT COUNT(DISTINCT <col>) from TESTTABLE; 
The table here below shows the results rounded to the nearest integer value:

Column name | Measured NDV | Avg Col Length (bytes) | Elapsed time (sec)
id | 10M | 6 | 11
val_lowcard | 10 | 202 | 6
val_unique | 10M | 208 | 124

The figure here below shows the execution plan and the tkprof report for the execution with the longest time (NDV calculation for the column VAL_UNIQUE). We can see a pattern emerging: the algorithm used for count distinct needs large amounts of memory, and therefore CPU, and eventually the processing spills out of the memory work areas and performs I/O operations to and from the temporary tablespace. This is why it can become very slow.


Meet the new algorithm in 12.1.0.2

APPROX_COUNT_DISTINCT is a new function available in Oracle 12.1.0.2 SQL and is implemented with an algorithm that has a low memory footprint and high scalability (more on that later in this post). The other difference from the legacy count distinct approach is that this new function provides estimates of the cardinality, as opposed to exact counts; in many cases this is 'good enough', with the speed difference making up for the lack of precision. Let's see how this new approach performs on the same test system as for the legacy SQL example above. I have run a similar set of tests as in the previous case; the SQL is:
SELECT APPROX_COUNT_DISTINCT(<col>) from TESTTABLE;

Column name | Measured NDV | Avg Col Length (bytes) | Elapsed time (sec)
id | 9705425 | 6 | 2
val_lowcard | 10 | 202 | 5
val_unique | 9971581 | 208 | 6

Here below is the tkprof report and execution plan for the case with the longest time (NDV calculation for the column VAL_UNIQUE). We can see that almost all the SQL execution time is accounted for as CPU by Oracle and no I/O is performed.


Legacy vs. New

The new algorithm and the legacy approach have similar performance when counting columns of low cardinality; this is illustrated in the example above when counting the column 'val_lowcard' (cardinality 10). In the simple test performed here the new algorithm was marginally faster than the legacy SQL for the low-cardinality example, however I can imagine that the opposite may also be true in some cases (this has been reported on a different test system, where the legacy SQL was faster than APPROX_COUNT_DISTINCT for counting NDV in a low-cardinality case).
We expect the elapsed time in the case of low NDV to be close to the time needed for a full scan of the column being measured: the most expensive operation when estimating the cardinality of columns with low NDV is typically reading the column data.

Let's now talk about the more interesting case of large NDV. Estimating columns of high cardinality is where the new algorithm shines. As the NDV increases, the 'volume of distinct values' (the number of distinct values multiplied by their size) also increases. As this 'volume of distinct values' gets bigger, the legacy algorithm starts to fall behind in performance compared to the new 12.1.0.2 function. This is because the operations requiring sorting/hashing and temporary space start taking more and more resources: first CPU is needed to process the additional memory structures, then also I/O to read and write from the temporary tablespace. This can potentially make the legacy SQL with count distinct orders of magnitude slower than APPROX_COUNT_DISTINCT.

Parallelism

APPROX_COUNT_DISTINCT scales very well when using parallel query. Looking at the execution plan and tkprof report above, this should appear clear, as the processing is entirely on CPU. Moreover, additional details discussed below on the algorithm used by APPROX_COUNT_DISTINCT confirm the scalability of this approach when using parallel query.
The legacy approach instead does not scale very well, in particular when multiple parallel query slaves have to handle a concurrent count distinct while also using TEMP space in the temporary tablespace. In the interest of brevity I will not add here additional details and tests performed for this case.

HyperLogLog

I first learned about the HyperLogLog (HLL) algorithm from Alex Fatkulin's blog and the references therein. I find the whole concept quite neat; in particular I enjoy how clever use of math can help in tackling difficult problems when scaling up data processing to very large data sets. The reason I mention it here is that hints that HLL is used can be found by tracing Oracle process execution with stack profiling. For example, see the flame graph below and the annotation with my guess on the function name qesandvHllAdd.
The flame graph data comes from the Linux perf command and the graph itself is generated using Brendan Gregg's FlameGraph tool. More details on this technique of stack profiling applied to investigating Oracle can be found in this blog entry.


Here below are pointers to the commands used for perf data collection and flame graph generation:

# perf record -a -g -F 99999 -p <oracle_pid>
# perf script >perf.script
$ cat perf.script|sed -f os_explain.sed| FlameGraph-master/stackcollapse-perf.pl |FlameGraph-master/flamegraph.pl --title "Flamegraph" >Fig1.svg


Conclusions

Oracle 12.1.0.2 makes available to users a new and fast algorithm for computing cardinality estimates that outperforms the legacy SQL select count(distinct) for most cases of interest. As is often the case, the methods used for investigating the new feature can be more interesting than the actual results. Moreover, while investigating the new, we learn (or re-learn) something important about the old!
The test workload discussed above in this post also shows that caching a table into RAM, while often fast, does not fix all performance use cases. Caching a table does not necessarily bring all I/O operations to zero, notably because of I/O for TEMP space. This is a well-known fact that appears in several contexts when doing performance tuning, although it may be overlooked at design time. Simplifying for future reference: table caching is not always fast=true.


References

Alex Fatkulin has covered the topic of HyperLogLog in a series of very interesting blog posts. Particularly notable is the effort to explain the HyperLogLog algorithm in simple terms here.
It is also worth noting that Oracle introduced in 11g an algorithm for computing NDV, restricted to the usage of the DBMS_STATS package. For additional details see Maria Colgan's blog post and Amit Poddar's presentation at Hotsos (currently hosted on Jonathan Lewis' blog).
Tuning of Oracle work areas is discussed in Oracle Support Note "How To Super-Size Work Area Memory Size Used By Sessions?" (Doc ID 453540.1). Alex Fatkulin has presented at Hotsos 2014 "Leveraging In-Memory Storage to Overcome Oracle Database PGA Memory Limits". 


Test definition scripts

Here below you can find the statements I used to create the test table for this post's examples. The test system where I ran the examples was configured with 5 GB of db_cache_size (6 GB for the whole SGA); automatic PGA management was used with pga_aggregate_target set to 3 GB. Different memory configurations for the buffer cache and for the PGA may produce different test results. In the tests discussed here above, testtable is intended to fit in the buffer cache (and be cached there). Test results for select count distinct have a strong dependence on the size of the memory work areas allocated in the PGA and on the thresholds for spilling to disk (see also the references section above).
No competing/concurrent workload was run during the tests. I have also added some pointers on how I cached the table; note that table caching can be achieved with several alternative methods, such as defining and using a keep cache, or by using one of the 2 new methods available in 12.1.0.2: the in-memory option or the big table caching feature.

create table testtable pctfree 0 cache
tablespace DATA01 nologging
as
select rownum id, to_char(mod(rownum,10))||rpad('mystring',200,'P') val_lowcard, to_char(rownum)||rpad('mystring',200,'P') val_unique
from dba_source,dba_source where rownum<=1e7;

exec dbms_stats.gather_table_stats('SYSTEM','TESTTABLE')

--use this to avoid Oracle's direct reads bypassing the buffer cache 
alter session set "_serial_direct_read"=never;

alter system flush buffer_cache;
--make sure that there is enough space in the buffer cache and clean it first

set autotrace on -- allows to measure logical and physical IO
select count(*) from testtable;
select count(*) from testtable; --repeat if needed