This is the second part of our Oracle Database Testing series. The first part can be found here.
In the first installment, we discussed what a “database” is, why Oracle is a leader in the field, the significance of database testing, and different testing methodologies.
This part walks through systematic steps for testing the database for Memory, Space, and CPU processing. In the final installment of this series, we will explore testing with Oracle Real Application Testing.
#1) MEMORY Testing
We begin by evaluating your memory requirements. To optimize memory usage, we first need to become acquainted with Oracle's memory structures, which fall into three major categories:
a) System or Shared Global Area (SGA) – This is a collective memory segment available to every Oracle process.
b) Process or Program Global Area (PGA) – This is private memory consumed by individual Oracle processes.
c) User Global Area (UGA) – This memory is associated with a user session, and depending on the connection mechanism, it can be a part of the PGA (with dedicated server) or part of the SGA (with shared server).
To test application performance and determine optimal memory sizing, these memory areas must be fine-tuned, which in turn requires understanding how much memory the application needs to run efficiently.
A simplified pictorial representation of the Memory Architecture is given below:
To inspect your existing SGA values, log in to the SQLPLUS command prompt:
SQL> show sga

Total System Global Area  521936896 bytes
Fixed Size                  2177328 bytes
Variable Size             465569488 bytes
Database Buffers           46137344 bytes
Redo Buffers                8052736 bytes
These components together make up the total SGA size. For instance, if the server has 8 GB of physical RAM, we might allocate half of it (4 GB) to the SGA and then test whether that size suits the application by running it with users logged in and executing SQL queries against the database.
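Assuming the instance uses a server parameter file and Automatic Shared Memory Management, a sketch of applying the 4 GB figure from this example might look like the following (the values are illustrative, not a recommendation):

```sql
-- Illustrative only: target a 4 GB SGA for the 8 GB server in the example.
-- SGA_MAX_SIZE is static, so the instance must be restarted for it to take effect.
ALTER SYSTEM SET sga_max_size = 4G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 4G SCOPE = SPFILE;
```

After the restart, re-run the application workload and compare `show sga` and paging statistics against the earlier baseline.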
Another major aspect to consider when sizing memory is the possibility of paging. Paging occurs when the operating system lacks sufficient physical memory and transfers data to disk to make room for new data. Elevated levels of paging usually indicate performance degradation.
A standard command to check paging activity is the “sar” command:
$ sar -g

00:00:00  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
01:00:00     0.00     0.00     0.00     0.00     0.00
The example above shows an idle system with no paging. High values for pgfree/s and pgscan/s indicate that the memory configuration is insufficient for the application’s needs and warrants further investigation.
To optimize the Program Global Area (PGA), it’s vital to evaluate the complexity of the SQL queries executed by the users. Work areas in the PGA are consumed for in-memory sorting operations like GROUP BY, ORDER BY, Hash-join, and Bitmap Merge.
Typically, the PGA does not require as much memory as the SGA; a common starting point is to size it as a fraction of the SGA, often around 20%. The split also depends on whether the system is an OLTP (Online Transaction Processing) system or a DSS (Decision Support System).
For an OLTP system, using the earlier example of 8 GB total physical memory and 4 GB allocated to SGA, PGA would be 20% of SGA, equating to 0.8 GB.
For a DSS system, which often involves memory-intensive queries, we could allocate 50% of SGA, resulting in a PGA size of 2 GB.
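The rule-of-thumb arithmetic above can be sketched in a few lines of Python. The 20% and 50% ratios are the illustrative figures from this example, not fixed Oracle recommendations:

```python
def pga_target_gb(sga_gb: float, workload: str) -> float:
    """Rule-of-thumb PGA sizing as a fraction of the SGA.

    Ratios follow the illustrative example in the text:
    OLTP -> 20% of the SGA, DSS -> 50% of the SGA.
    """
    ratios = {"OLTP": 0.20, "DSS": 0.50}
    return sga_gb * ratios[workload]

# 8 GB server with a 4 GB SGA, as in the example above:
print(pga_target_gb(4, "OLTP"))  # 0.8 (GB)
print(pga_target_gb(4, "DSS"))   # 2.0 (GB)
```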
Bear in mind that these examples are only illustrative. In practice, many production systems run with SGAs of 100 to 500 GB, and some servers have more than 1 TB of physical memory.
#2) CPU Processing Testing
Any SQL queries executed on a database that involve significant operations like sorting, data querying, or data writing will consume CPU cycles. Evaluating CPU usage and testing if there is sufficient processing power to satisfy application needs is crucial.
How can we ascertain if there is a CPU bottleneck?
Running at 100% CPU utilization is not, by itself, a red flag: it means the processors are fully used, which is what you paid for. The real warning sign is sustained saturation with processes queuing for CPU time, which indicates a genuine bottleneck.
An SQL statement typically undergoes three phases during processing by the Oracle instance:
- Parse phase
- Execution phase
- Fetch phase
The parse phase performs syntax and semantic checks to ensure that the SQL statement is valid and can be executed by the Oracle database engine.
The Execution phase involves creating an execution plan and accessing the required objects step by step.
The Fetch phase retrieves the rows from database blocks based on the previously computed execution plan.
For a deeper comprehension of SQL statement processing, you can refer to Oracle Documentation.
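To see the execution plan that Oracle computes during the execution phase, you can use EXPLAIN PLAN together with DBMS_XPLAN. The table and column below are hypothetical; substitute objects from your own schema:

```sql
-- Hypothetical table; replace with one from your own schema.
EXPLAIN PLAN FOR
  SELECT last_name FROM employees ORDER BY last_name;

-- Display the plan computed by the statement above.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The plan output shows, step by step, which objects are accessed and how, which is useful when investigating CPU-heavy statements.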
Each of these phases consumes CPU resources, and parsing in particular can drive high CPU usage. Dynamic performance views such as V$SYSSTAT and V$SESSTAT can be used to identify processes and sessions consuming CPU:
Example:
SQL> select (a.value / b.value)*100 "% CPU for parsing"
     from V$SYSSTAT a, V$SYSSTAT b
     where a.name = 'parse time cpu'
     and b.name = 'CPU used by this session';

% CPU for parsing
-----------------
       7.70263467
In this example, roughly 7.7% of CPU time is spent parsing SQL statements. Measure this while testing the application with users running SQL queries. If the parsing percentage is high, check whether the queries use literals, which cause hard parses that consume substantial CPU. Encourage application developers to use bind variables so that existing cursors are reused and hard parsing is avoided.
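To illustrate the difference, the first pair of statements below is hard parsed twice because the literals make each statement text unique, while the bind-variable form is parsed once and reused (the table and column names are hypothetical):

```sql
-- Literals: each distinct value produces a new statement text,
-- forcing a hard parse for every value.
SELECT * FROM orders WHERE order_id = 1001;
SELECT * FROM orders WHERE order_id = 1002;

-- Bind variable: one shared cursor is reused for every value.
VARIABLE oid NUMBER
EXEC :oid := 1001
SELECT * FROM orders WHERE order_id = :oid;
```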
To determine the CPU usage of the sessions accessing the database, you can use the following query:
SQL> SELECT n.username, s.sid, s.value
     FROM v$sesstat s, v$statname t, v$session n
     WHERE s.statistic# = t.statistic#
     AND n.sid = s.sid
     AND t.name = 'CPU used by this session'
     ORDER BY s.value desc;

USERNAME                      SID      VALUE
------------------------- ---------- ----------
SYS                              191        125
                                 190         64
                                 134         50
                                  66         45
                                 192          4
                                   4          3
                                 133          3
                                 126          2
                                  72          1
                                  67          0
                                 125          0
In this example, only the SYS user is logged in, while the rest are background processes that consume minimal CPU resources.
From the operating system perspective, the “sar” command can again be used to check CPU usage. For full details, run $ man sar in a Unix console or consult the Linux documentation.
$ sar -u 10 5    ---- Reports CPU utilization every 10 seconds, 5 times.

Linux 3.8.13-26.2.1.el6uek.x86_64 (abcdefg)  08/07/2014  _x86_64_  (6 CPU)

05:17:58 PM  CPU   %user  %nice  %system  %iowait  %steal   %idle
05:18:08 PM  all    0.65   0.00     0.53     0.00    0.03   98.78
05:18:18 PM  all    0.00   0.00     0.00     0.00    0.00  100.00
05:18:28 PM  all    0.02   0.00     0.00     0.00    0.00   99.98
05:18:38 PM  all    0.02   0.00     0.02     0.00    0.00   99.97
05:18:48 PM  all    0.02   0.00     0.00     0.00    0.00   99.98
Average:     all    0.14   0.00     0.11     0.00    0.01   99.74
In this example, the CPU consumption is virtually zero due to the system being idle.
After evaluating the application’s memory and CPU performance, we move on to verifying how much storage space the database needs to meet user requirements.
#3) SPACE/Storage Testing
To assess storage requirements, we need to comprehend both logical and physical database storage structures in Oracle. Physical structures like data files, control files, and online redo log files are visible at the operating system level. Logical structures like data blocks, extents, segments, and tablespaces are recognized and managed solely within Oracle.
For details about these structures, you can check out the respective Oracle documentation.
Briefly, data files belong to logical structures called tablespaces, and a tablespace can comprise multiple data files. To compute the total size of a fully operational database, sum the sizes of all tablespaces, which is essentially the sum of all data files linked to the Oracle database at the operating system level.
Each application user has their own schema, which is represented as segments within a tablespace.
To monitor growth trends and determine the size of logical objects and physical files such as data files, you can query data dictionary views such as DBA_SEGMENTS, DBA_EXTENTS, DBA_TABLESPACES, and DBA_DATA_FILES.
For instance, to know the size of your tablespaces, you can run the following query:
SQL> select tablespace_name "Tablespace",
     sum(bytes)/1024/1024 "Size in MB"
     from dba_data_files
     group by tablespace_name;

Tablespace                Size in MB
------------------------- ----------
UNDOTBS1                         100
SYSAUX                           680
USERS                             45
SYSTEM                           830
EXAMPLE                          100
Adding up the sizes of these tablespaces will yield the total size of your database instance.
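Rather than adding the tablespace sizes by hand, a single query can sum every permanent data file (include DBA_TEMP_FILES as well if you also want temporary space):

```sql
-- Total size of all permanent data files, in GB.
SELECT sum(bytes)/1024/1024/1024 "Total DB Size (GB)"
FROM dba_data_files;
```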
Various storage products and engineered solutions are available, such as NAS, SAN, flash, and solid-state devices, to help achieve optimal application performance.
Conclusion:
In conclusion, we have covered effective strategies for testing an Oracle database for memory, CPU, and storage constraints. Application developers, architects, and administrators should take these factors into account before deploying an application to production.
In the concluding part of this Oracle Database Testing series, we will explore Oracle Real Application Testing in more depth.