Creation Zone

Saturday, 11 December 2010

Oracle's Optimized Solution for Siebel CRM 8.1.1


A brief explanation of what an optimized solution is -- and what it is not -- can be found in the previous blog entry, Oracle's Optimized Solution for PeopleSoft HCM 9.0. We went through a similar exercise to publish another optimized solution, this time for Siebel CRM 8.1.1.

The Siebel solution implements Oracle Siebel CRM using a unique combination of SPARC servers, Sun storage, Solaris OS virtualization, Oracle application middleware and Oracle database products.

URLs to the Siebel CRM white papers:
  • Optimized Solution for Siebel CRM 8.1.1 : A Technical White Paper
  • Optimized Solution for Siebel CRM 8.1.1 : A Business White Paper

While you are at it, do not forget to check out the 13,000 user Siebel CRM benchmark on the latest SPARC T3 platform.


Oracle's Optimized Solution for PeopleSoft HCM 9.0


According to Oracle Corporation: Oracle's optimized solutions are applications-to-disk solutions comprising Oracle's Sun servers, storage and networking components, Oracle Solaris, Oracle Enterprise Linux, Oracle Database, Oracle Fusion Middleware and Oracle Applications.

To be clear, an optimized/optimal solution is neither a software package nor a hardware system bundled with pre-tuned software. It is simply a set of recommendations based on some testing performed in labs. The recommendations typically provide sizing guidelines for small, medium and large configurations, best practices, tuning tips and some performance data. Customers can refer to these guidelines when deploying enterprise applications on Oracle hardware to achieve optimal configuration for better TCO and ROI.

The PeopleSoft solution implements two modules in Oracle PeopleSoft Enterprise Human Capital Management (HCM) 9.0 to demonstrate how Oracleʼs servers, disk storage and advanced flash based storage technology can be used to accelerate database transactions to achieve unprecedented application performance. Workload consolidation is achieved through server consolidation while maintaining the appropriate balance of performance, availability, cost and expected future capacity requirements.

The optimized solution technical white paper can be accessed from the following URL:

    Oracleʼs Optimized Solution for PeopleSoft Human Capital Management Consolidation using M-series servers, Flash and Enterprise Storage

A corresponding solution brief, targeting less patient readers, is available at:

Oracle's Optimized Solution for PeopleSoft HCM - A Business White Paper

(Original blogpost location:
http://blogs.sun.com/mandalika/entry/oracle_s_optimized_solution_for)

Thursday, 2 December 2010

Instructions to Turn ON/OFF Hardware Prefetch on SPARC64 Systems


Hardware prefetch is ON by default on M-series servers such as the M8000/M9000, M4000/M5000 and M3000.

The following excerpt is from a SPARC64 document:

Hardware speculatively issues the prefetch operation based on the prediction that there is high possibility to access to the following continuous address in the future, if there have been load accesses for a consecutive address.

Although this feature is designed to improve the performance of various workloads, due to its speculative nature not all workloads benefit from the default behavior. For example, in our experiments we noticed a 10+% improvement in CPU utilization while running some of the PeopleSoft workloads on M-series hardware with hardware prefetch turned off. Hence, irrespective of the application/workload, the recommended approach is to conduct a few experiments by running representative customer workloads on the target M-series hardware with and without hardware prefetch turned on.

Instructions to Turn On/Off Hardware Prefetch:

  1. Connect to the system Service Processor (XSCF)


    % ssh -l <userid> <host>
  2. Check the current prefetch mode by running the following command at the XSCF> prompt


    XSCF> showprefetchmode
  3. Find the domain id of all mounted system boards (or skip to the next step)


    XSCF> showboards -a
  4. Power-off all configured domains


    XSCF> poweroff -d <domainid> [OR]
    XSCF> poweroff -a

    From my experience, on larger systems with multiple domains configured, all domains must be powered off before the SP allows the prefetch mode to be changed. If someone has a correction to this information, or better instructions that minimize disruption, please let me know. I'd be happy to update these instructions.

  5. Wait until the domain(s) are completely powered off. Check the status by running the showlogs command


    XSCF> showlogs power
  6. Change the prefetch mode to the desired value


    XSCF> setprefetchmode -s [on|off]
  7. Verify the prefetch mode


    XSCF> showprefetchmode
  8. Finally power-on all configured domains


    XSCF> poweron -d <domainid> [OR]
    XSCF> poweron -a
  9. Disconnect from the SP, and wait for the OS to boot up

Note to Sun-Oracle customers:

If the default value of hardware prefetch is changed, please make sure to mention it in any service requests, bug reports, etc., that you may file with Oracle Corporation. Unfortunately, none of the standard Solaris commands report the status of hardware prefetch -- so providing this additional piece of information up front will help the person analyzing/diagnosing the case.

(Original blog post URL:
http://blogs.sun.com/mandalika/entry/instructions_to_turn_on_off)

Tuesday, 30 November 2010

Instructions to Restore Files from a Windows Filesystem using Ubuntu Live CD


Windows users who have not interacted with any kind of UNIX or Linux distribution before are the target audience of this HOW-TO blog post.

  1. Download the latest version of the Ubuntu 32-bit ISO image from the following location:

            http://www.ubuntu.com/getubuntu/download
  2. Burn the ISO image onto a CD or DVD so you can try running Ubuntu Linux from the CD (Live CD option)

  3. On the target system (one running Windows, or any other OS that does not recognize the disk drive(s) containing the Windows filesystem), boot Linux from the CD or DVD that you burned

  4. Once the OS boots up, launch the partition editor

  5. Look for NTFS, FAT or FAT32 filesystems and note down the corresponding partition names.

    In the example, /dev/sda1 and /dev/sda2 are the Windows filesystems.

  6. Launch the system terminal (command line interface)

  7. Mount the Windows filesystem using the "mount" command (see the example after this list).

  8. Finally, navigate to the mounted filesystem using the graphical user interface.

  9. Insert a USB flash device

    Ubuntu recognizes the USB drive and mounts it automatically. Type "df -h" in the command line terminal and look for "NO NAME" -- the "NO NAME" filesystem is the USB filesystem.

  10. Backup your files from the Windows filesystem to the USB filesystem
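
Here is a minimal sketch of steps 7 through 10 on the command line. The device name /dev/sda1 comes from the example above; the mount point, the volume label and the copied path are illustrative and will differ on your system:

    $ sudo mkdir -p /mnt/windows                       # create a mount point
    $ sudo mount -t ntfs -o ro /dev/sda1 /mnt/windows  # read-only is safest for recovery
    $ df -h                                            # find the USB stick's mount point
    $ cp -r "/mnt/windows/Documents and Settings" "/media/NO NAME/"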


Sunday, 31 October 2010

SPARC T3 reiterates Siebel CRM's Supremacy on T-series Hardware


It has been mentioned and proven several times that Sun/Oracle's T-series hardware is the best fit for deploying and running Siebel CRM. Feel free to browse through the list of Siebel benchmarks that Sun published on T-series in the past:

        2004-2010 : A Look Back at Sun Published Oracle Benchmarks

Oracle Corporation announced the availability of SPARC T3 servers at Oracle OpenWorld 2010, and sure enough there is a Siebel CRM benchmark on the SPARC T3-1 server to support the server launch event. Check the following web page for the high-level details of the benchmark.

        SPARC T3-1 Server Posts a High Score on New Siebel CRM 8.1.1 Benchmark

I intend to provide the missing pieces of information in this blog post.

First of all, this is not a "Platform Sizing and Performance Program" (PSPP) benchmark. Siebel 8.1.1 was used to run the benchmark, and no Siebel PSPP benchmark kit is available for v8.1.1 as of today. Hence the test results from this benchmark exercise are not directly comparable to the Siebel 8.0 PSPP benchmark results.

Workload

The benchmark workload consists of a mix of Siebel Financial Services Call Center and Siebel Web Services / EAI transactions. The FINS Call Center transactions create a bunch of Opportunities, Quotes and Orders, whereas the Web Services / EAI transactions submit new Service Requests (SR) and search for and update existing SRs. The transaction mix is 40% FINS Call Center transactions and 60% Web Services / EAI transactions.

Software Versions

  • Siebel CRM 8.1.1
  • Oracle RDBMS 11g R2 (11.2.0.1), 64-bit
  • iPlanet Web Server 7.0 Update 8, 32-bit
  • Solaris 10 09/10 in the application-tier and
  • Solaris 10 10/09 in the web- and database-tiers

Hardware Configuration

  • Application Server : 1 x SPARC T3-1 Server (2 RU system)
    • One socket 16-Core 1.65 GHz SPARC T3 processor, 128 hardware threads, 6 MB L2 Cache, 64 GB RAM
  • Web Server + Database Server : 1 x Sun SPARC Enterprise T5240 Server (2 RU system)
    • Two socket 16-Core 1.165 GHz UltraSPARC T2 Plus processors, 128 hardware threads, 4 MB L2 Cache, 64 GB RAM

Virtualization Technology

iPlanet Web Server and the Oracle 11g Database Server were configured on a single Sun SPARC Enterprise T5240 Server. Those software layers were isolated from each other with the help of Oracle Solaris Containers virtualization technology. Resource allocations are shown below.

Tier       #vCPU   Memory (GB)
--------   -----   -----------
Database   96      48
Web        32      16
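
For illustration only, a vCPU/memory split like the one above could be expressed with Solaris Containers roughly as follows. The zone name, zone path and exact resource controls here are assumptions for the sketch, not the configuration actually used in the benchmark:

    # zonecfg -z dbzone
    zonecfg:dbzone> create
    zonecfg:dbzone> set zonepath=/zones/dbzone
    zonecfg:dbzone> add dedicated-cpu
    zonecfg:dbzone:dedicated-cpu> set ncpus=96
    zonecfg:dbzone:dedicated-cpu> end
    zonecfg:dbzone> add capped-memory
    zonecfg:dbzone:capped-memory> set physical=48g
    zonecfg:dbzone:capped-memory> end
    zonecfg:dbzone> commit
    zonecfg:dbzone> exit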

Test Results

#vUsers   Avg Trx Resp Time (sec)   Business Trx Throughput/HR   Avg CPU Utilization (%)   Avg Memory Footprint (GB)
          FINS     EAI              FINS      EAI                App   DB   Web             App   DB + Web
13,000    0.43     0.24             48,409    116,449            58    42   37              52    35

Why stop at 13K users?

Notice that the average CPU utilization on the application server node (SPARC T3-1) is only ~58%. The application server node has room to accommodate more online vusers; however, there is not enough free memory left on the server to scale beyond 13,000 concurrent users. That is the main reason for stopping at the 13,000 user count in this benchmark.

Siebel Best Practices

Check the following presentation:

        Siebel on Oracle Solaris : Best Practices, Tuning Tips

Acknowledgments

Credit to all our peers at Oracle Corporation who helped us with the hardware, workload, verification, validation, etc., in a timely manner. Jenny also deserves special credit for patiently spending an enormous amount of time running the benchmark.

Original blog post URL:
http://blogs.sun.com/mandalika/entry/sparc_t3_reiterates_siebel_s


Friday, 8 October 2010

Is it really Solaris Versus Windows & Linux?


(Even though the title explicitly states "Solaris Versus ..", this blog entry is equally applicable, with a few changes, to all the operating systems in the world.)

Lately I have seen quite a few e-mails and heard a few customer representatives talking about the performance of their application(s) on Solaris, Windows and Linux. Typically the claims go like the following, with a bunch of supporting data (all numbers) and no hardware configuration specified whatsoever.

  • "Transaction X is nearly twice as slow on Solaris compared to the same transaction running on Windows or Linux"
  • "Transaction X runs much faster on my Windows laptop than on a Solaris box"

Lack of awareness and taking the hardware completely out of the discussions and context are the biggest problems with complaints like these. Those claims make sense only when the underlying hardware is the same in all test cases. For example, comparing a single user, single threaded transaction running on Windows, Linux and Solaris on x86 hardware is appropriate (as long as the type and speed of the processor are identical), but not against Solaris running on SPARC hardware. This is mainly because the processor architecture is completely different for x86 and SPARC platforms.

Besides, these days Oracle offers two types of SPARC hardware -- T-series and M-series -- which serve different purposes even though they are compatible with each other. It is also hard to compare and analyze the performance differences between the different SPARC offerings (T- and M-series) without a proper understanding of the characteristics of the CPUs in use. Choosing the right hardware for the right job is the key.

It is improper to compare business transactions running on x86 systems with SPARC systems, or even between different types of SPARC systems, and to incorrectly attribute a hardware strength or weakness to the operating system that runs on top of the bare metal. If there is that much discrepancy among different operating environments, it is recommended to spend some time understanding the nuances of the test hardware before spending enormous amounts of time trying to tune the application and the operating system; a quick sanity check is sketched below.
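
As a sketch of the kind of due diligence this implies, record the processor details on every system under test before comparing any numbers. For example:

    # Solaris: physical processor type, clock speed and virtual CPU count
    % psrinfo -pv

    # Linux: processor model and logical CPU count
    % grep -E 'model name|processor' /proc/cpuinfo | sort -u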

The bottom line: in addition to the software (application + OS), hardware plays an important role in the performance and scalability of an application. So unless the test hardware is identical across all test cases on different operating systems, do not focus on the operating system alone and make hasty decisions to switch to another operating platform. Carefully choose appropriate hardware for the task at hand.


Thursday, 23 September 2010

OOW 2010 : Accelerate and Bullet-Proof Your Siebel CRM Deployment with Oracle's Sun Servers


The best practices slides from today's OpenWorld presentation can be downloaded from the following location.

        Siebel on Oracle Solaris : Best Practices, Tuning Tips

The entire presentation with proper disclaimers and Oracle Solaris Cluster specific slides will be posted on Oracle's web site soon. Stay tuned.


Monday, 9 August 2010

Identifying Ideal Oracle Database Objects for Flash Storage and Accelerators


(Originally posted on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/identifying_ideal_oracle_database_objects)

The Sun Storage F5100 Flash Array and the Sun Flash Accelerator F20 PCIe Card help accelerate I/O-bound applications such as databases. The following guidelines help identify the Oracle database objects that can benefit from flash storage. Even though the title explicitly states "Oracle", some of these guidelines are applicable to other databases and non-database products. Exercise discretion, evaluate and experiment before implementing these recommendations as they are.

  • Heavily used database tables and indexes are ideal for flash storage.

    • Database workloads with no I/O bottlenecks may not show significant performance gains.
    • Database workloads with severe I/O bottlenecks can fully realize the benefits of flash devices.

      • The "Top 5 Timed Foreground Events" section in any AWR report collected on the target database system is useful in finding out whether disk I/O is a bottleneck.

        • A large number of waits and a large amount of DB time spent waiting for a blocked resource under the User I/O wait class are an indication of I/O contention on the system.
  • Identify the I/O-intensive tables and indexes in a database with the help of Oracle Enterprise Manager Database Control, a web-based tool for managing Oracle database(s).

    • The "Performance" page in OEM Database Control helps you quickly identify and analyze performance problems.
    • Historical and real-time database activity can be viewed from the "Performance" page.
      • The same page also provides information about the top resource-consuming database objects.
  • An alternative way to identify the I/O-intensive objects in a database is to analyze the AWR reports generated over a period of time, especially when the database is busy.

    • Scan through the "SQL ordered by .." tables in each AWR report.
    • Look for the top INSERT & UPDATE statements with high elapsed and DB times.
      • Database tables that are updated frequently and repeatedly, along with the indexes created on such tables, are good candidates for flash devices.

    • "SQL ordered by Reads" is useful in identifying the database tables with a large number of physical reads.
      • The database table(s) from which large amounts of data are read/fetched from physical disk(s) are also good candidates for flash devices.

        • To identify I/O-intensive indexes, look through the explain plans of the top SQLs sorted by physical reads.

  • Examine the "File IO Stats" section in any AWR report collected on the target database system.

    • Consider moving the database files with heavy reads, heavy writes and relatively high average buffer wait times to flash volumes.
  • Examine the "Segments by Physical Reads", "Segments by Physical Writes" and "Segments by Buffer Busy Waits" sections in the AWR report (see the example query after this list).

    • The database tables and indexes with a large number of physical reads, physical writes and buffer busy waits may benefit from flash acceleration.
  • Sun flash storage may not be ideal for storing Oracle redo logs.

    • Sun Flash Modules (FMOD) in the F5100 array and the F20 Flash Accelerator Card are optimized for a 4K sector size.

        A redo log write that is not aligned with the beginning of the 4K physical sector results in a significant performance degradation.

    • In general, Oracle redo log files default to a block size that is equal to the physical sector size of the disk, which is typically 512 bytes.

      • The majority of recent Oracle Database platforms detect the 4K sector size on Sun flash devices.
      • The Oracle database automatically creates redo log files with a 4K block size on file systems created on Sun flash devices.
        • However, with a 4K block size for the redo logs, there will be a significant increase in redo wastage that may offset the expected performance gains.
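
The AWR sections mentioned above are interval-based; as a quick cumulative alternative, a query along the following lines (a sketch against the standard V$SEGMENT_STATISTICS view; the rownum cutoff is arbitrary) lists the top ten segments by physical reads since instance startup:

    SQL> -- top 10 segments by physical reads since instance startup
    SQL> select *
      2    from (select owner, object_name, object_type, value
      3            from v$segment_statistics
      4           where statistic_name = 'physical reads'
      5           order by value desc)
      6   where rownum <= 10;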

F5100 Flash Storage and F20 PCIe Flash Accelerator Card as Oracle Database Smart Flash Cache

In addition to the I/O intensive database objects, customers running Oracle 11g Release 2 or later versions have the flexibility of using flash devices to turn on the "Database Smart Flash Cache" feature to reduce physical disk I/O. The Database Smart Flash Cache is a transparent extension of the database buffer cache using flash storage technology. The flash storage acts as a Level 2 cache to the (Level 1) SGA. Database Smart Flash Cache can significantly improve the performance of Oracle databases by reducing the amount of disk I/O at a much lower cost than adding an equivalent amount of RAM.

F20 Flash Accelerator offers an additional benefit - since it is a PCIe card, the I/O operations bypass disk controller overhead.

The database flash cache can be enabled by setting appropriate values to the following Oracle database parameters.


db_flash_cache_file
db_flash_cache_size


Check Oracle Database Administrator's Guide 11g Release 2 (11.2) : Configuring Database Smart Flash Cache documentation for the step-by-step instructions to configure Database Smart Flash Cache on flash devices.
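
As a minimal sketch (the file path and the cache size below are illustrative assumptions, not values from the original post), both parameters are static, so they are set in the spfile and take effect after an instance restart:

    SQL> ALTER SYSTEM SET db_flash_cache_file = '/flash/oradb_flash_cache.dat' SCOPE=SPFILE;
    SQL> ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE=SPFILE;
    SQL> -- restart the instance for both settings to take effect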


Wednesday, 7 July 2010

PeopleSoft NA Payroll 500K EE Benchmark on Solaris : The Saga Continues ..

(Original post is at:
http://blogs.sun.com/mandalika/entry/peoplesoft_na_payroll_500k_ee
)

A few clarifications before we start.

Difference between 240K and 500K EE PeopleSoft NA Payroll benchmarks

Not too long ago, Sun published PeopleSoft NA Payroll 240K EE benchmark results with 16 job streams and with 8 job streams. First of all, I want to make sure everyone understands that the PeopleSoft NA Payroll 240K and 500K EE benchmarks are two completely different benchmarks. The 240K database model represents a large-sized organization, whereas the 500K database model represents an extra-large-sized organization. Vendors [who benchmark] have the flexibility of configuring 8, 16, 24 or 32 parallel job streams (or threads) in those two benchmarks to parallelize the work being done.

Now that the clarifications are out of the way, here is the direct URL for the 500K Payroll benchmark results that Oracle|Sun published last week. (The document will be updated shortly to fix the branding issues.)

    PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M5000 (500K EE 32 job streams)

What's changed at Sun & Oracle?

The 500K payroll benchmark work was started a few months ago when Sun was an independent entity. By the time the benchmark work was complete, Sun was part of Oracle Corporation. However, that had no impact whatsoever on the way we have been operating and interacting with the PeopleSoft benchmark team for the past few years. We (Sun) still have to package all the benchmark results and submit them for validation just like any other vendor. It is still the same laborious process from both ends of Oracle (that is, PeopleSoft & Sun). I mention this only to highlight Oracle's uncompromising stance, at every level, on publishing quality benchmarks.

SUMMARY OF 500K NA PAYROLL BENCHMARK RESULTS

The following bar chart summarizes all the published benchmark results from different vendors. Each 3D bar on the X-axis represents one vendor, and the Y-axis shows the throughput (#payments/hour) achieved by the corresponding vendor. The actual throughput and the vendor name are also shown in each 3D bar for clarity. Common sense dictates that the higher the throughput, the better.

The numbers in the following table were extracted from the very first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the results and the actual numbers that are of interest to customers. The results in the following table are sorted by hourly throughput (payments/hour) in descending order. The goal of this benchmark is to achieve as much hourly throughput as possible. Click on the link underneath each hourly throughput value to open the corresponding benchmark result.

Oracle PeopleSoft North American Payroll 9.0 - Number of employees: 500,480 & Number of payments: 750,720
Sun -- Solaris 10 10/09
    1 x Sun SPARC Enterprise M5000 with 8 x 2.53 GHz SPARC64 VII Quad-Core CPUs & 64 GB RAM
    1 x Sun Storage F5100 Flash Array with 40 Flash Modules for data and indexes. Capacity: 960 GB
    1 x Sun Storage 2510 Array for redo logs. Capacity: 272 GB. Total storage capacity: 1.2 TB
    #Job Streams: 32 | Elapsed Time: 50.11 min | Hourly Throughput: 898,886 payments/hour

IBM -- z/OS 1.10
    1 x IBM System z10 Enterprise Class Model 2097-709 with 8 x 4.4 GHz IBM System z10 Gen1 CPUs & 32 GB RAM
    1 x IBM TotalStorage DS8300. Total capacity: 9 TB
    #Job Streams: 8* | Elapsed Time: 58.96 min | Hourly Throughput: 763,962 payments/hour

HP -- HP-UX B.11.31
    1 x HP Integrity rx7640 with 8 x 1.6 GHz Intel Itanium2 9150 Dual-Core CPUs & 64 GB RAM
    1 x HP StorageWorks EVA 8100. Total capacity: 8 TB
    #Job Streams: 32 | Elapsed Time: 96.17 min | Hourly Throughput: 468,370 payments/hour

This is all public information. Feel free to compare the hardware configurations and the data presented for all three vendors and draw your own conclusions. Since all vendors used the same benchmark toolkit, comparisons should be pretty straightforward.

Sun Storage F5100 Flash Array, the differentiator

Of all these benchmark results, clearly the F5100 storage array is the key differentiator. The payroll workload is I/O intensive, and requires low latency storage for better throughput (it is implicit that less latency means less I/O waits).

There was a lot of noise from some blog readers (I do not know who those readers are or whom they work for) when Sun published the very first NA Payroll 240K EE benchmark with eight job streams using an F5100 array with 40 flash modules (FMOD). A few people thought that many flash modules were necessary to get the kind of performance that Sun demonstrated in the benchmark. Now that we have the 500K benchmark result as well, I want to highlight the fact that the same F5100 was used in all three NA Payroll benchmarks that Sun published in the last 10 months. Even though other vendors increased the number of disk drives when moving from the 240K to the 500K EE benchmark environment, Sun did not increase the number of flash modules in the F5100 -- the FMOD count remained at 40 even in the 500K EE benchmark. This implicitly suggests at least two things: 1. the F5100 array is resilient, and scales and performs consistently even with increased load, and 2. maybe 40 flash modules are not needed in the 240K EE Payroll benchmark. Hopefully this will silence those naysayers and critics.

While we are on the same topic, the storage capacity of the other array, used to store the redo logs, was in fact reduced -- from 5.3 TB in the J4200 array used in the 240K EE/16 job stream benchmark to 272 GB in the 2510 array used in the 500K EE/32 job stream benchmark. Of course, in both cases the redo logs consumed only 20 GB on disk -- but since the arrays were connected to the database server, we have to report the total capacity of the array(s) whether it is used or not.

Notes on CPU utilization and IBM's #job streams

Even though I highlighted the I/O factor in the above paragraph, it is hard to ignore the fact that the NA Payroll workload is CPU-intensive too. Even when multiple job streams are configured, each stream runs as a single-threaded process -- hence it is vital to have a server with powerful processors for better [overall] performance.

Observant readers might have noticed a couple of interesting things.

  1. The maximum average CPU usage that Sun reported in 500K EE benchmark in any scenario by any process is only 43.99% (less than half of the total processing capacity)

    The reason is simple. The SUT, M5000, has eight quad-core processors and each core is capable of running two hardware threads in parallel. Hence there are 64 virtual CPUs on the system, and since we ran only 32 job streams, only half of the total available CPU power was in use.

    Customers in a similar situation have the flexibility to consolidate another workload onto the same system to take advantage of the available/remaining CPU cycles.

  2. IBM's 500K EE benchmark result is only with 8 job streams

    I do not know the exact reason, and any speculation is as good as anyone's guess. Based on the benchmark results white paper, it appears that the z10 system (mainframe) has eight single-core processors, and perhaps that is why they ran the whole benchmark with only eight job streams.

Also See:

    Benchmark Results White Papers

  • PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 -- 240K EE 16 stream benchmark
  • PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 -- 240K EE 8 stream benchmark

    Best Practices White Paper

  • Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card

    Blogs

  • Expensive non-performance by Joerg Moellenkamp
  • PeopleSoft NA Payroll 240K EE Benchmark with 16 Job Streams : Another Home Run for Sun
  • PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise
  • App benchmarks, incorrect conclusions and the Sun Storage F5100
  • Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance

Tuesday, 15 June 2010

Book Review: Oracle Database 11g – Underground Advice for Database Administrators

(06/15/2010: This blog post will be edited multiple times to add reviews for the remaining chapters in the book.)

Author: April C. Sims
Publisher: Packt
Target Audience: Oracle Database Administrators

Chapter #1, "When to step away from the keyboard", starts off with an interesting example, cautions DBAs to exercise self-restraint, and encourages them to do the right thing at the end of the day. I liked the idea of listing a whole bunch of graphical and command-line Oracle tools [with brief descriptions] that an Oracle DBA may need for day-to-day activities. A couple of pages are also dedicated to the various tasks performed by Oracle DBAs on a daily, weekly, monthly, quarterly & yearly basis. It was interesting. Finally, the chapter concludes with a bunch of useful tips to help administrators avoid making unwanted errors.

The only thing that probably didn't fit in this chapter is the very brief discussion on staying away from dinosaurs. In my opinion, it is completely off-topic.

Chapter #2 "Maintaining Oracle Standards" is available for download. Get it from this location and read it yourself. You be the judge.

Chapter #3, "Tracking the Bits and Bytes". The first half of the chapter talks about the Oracle data block -- the finest level of granularity that contains actual data -- and the methods to view the data at the block level. The author follows up with a decent, brief explanation of how transaction integrity is maintained in an Oracle database. The hands-on exercise makes the reader sweat a little, but may help in better understanding the material presented earlier. The key is to focus and try to understand what is happening when running all those scripts and commands. I would have liked a much simpler example, though. Admittedly this is not something that Oracle administrators do every day, but it does not hurt to gain some insight into Oracle's internal workings and to be prepared to leverage this knowledge when disaster strikes.

The second half of the chapter is dedicated to LogMiner, a PL/SQL package utility that can be used to extract the database transactions that were executed over a period of time. April did a nice job briefly explaining why protecting the [physical] redo, undo and archive log files is very important -- to keep the data & database transactions from falling into the wrong hands. An example using the "Flashback Transaction Backout" method demonstrates how to use the LogMiner utility to retrieve changes that were made to the database a few minutes earlier.

I am not impressed with the example on page 92 in the section "Identifying data in undo segments by flashing back to timestamp". There are a bunch of SQL statements in the example with no output from a test environment. I strongly believe that showing the actual output keeps the material interesting and easy to follow.

Also, I did not like the idea of pointing to blogs and random web sites, as they may disappear at any time without warning.

To be continued ..

Monday, 31 May 2010

Oracle RDBMS : Flushing a Single SQL Statement out of the Object Library Cache


It is well known that the entire shared pool can be flushed with a simple ALTER SYSTEM statement.

SQL> ALTER SYSTEM FLUSH SHARED_POOL;

System altered.

What if the execution plan of a single SQL statement has to be invalidated or flushed out of the shared pool, so that the subsequent query execution forces a hard parse of that SQL statement? Oracle 11g introduced a new procedure called PURGE in the DBMS_SHARED_POOL package to flush a specific object -- such as a cursor, package, sequence or trigger -- out of the object library cache.

The syntax for the PURGE procedure is shown below.

procedure PURGE (
name VARCHAR2,
flag CHAR DEFAULT 'P',
heaps NUMBER DEFAULT 1)

Explanation for each of the arguments is documented in detail in $ORACLE_HOME/rdbms/admin/dbmspool.sql file.

If a single SQL statement has to be flushed out of the object library cache, the first step is to find the address of the handle and the hash value of the cursor that has to go away. The name of the object [to be purged] is the concatenation of the ADDRESS and HASH_VALUE columns from the V$SQLAREA view. Here is an example:

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

ADDRESS HASH_VALUE
---------------- ----------
000000085FD77CF0 808321886

SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');

PL/SQL procedure successfully completed.

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

no rows selected

Note to Oracle 10g R2 Customers

The enhanced DBMS_SHARED_POOL package with the PURGE procedure is included in the 10.2.0.4 patchset release.

10.2.0.2 and 10.2.0.3 customers can download and install RDBMS patch 5614566 to get access to these enhancements in DBMS_SHARED_POOL package.

Also see:

  • Oracle Support Document ID 457309.1 "How To Flush an Object out the Library Cache [SGA]"
  • Oracle Support Document ID 751876.1 "DBMS_SHARED_POOL.PURGE Is Not Working On 10.2.0.4"
  • DBMS_SHARED_POOL.PURGE() procedure documentation
(Original post is at:
http://blogs.sun.com/mandalika/entry/oracle_rdbms_flushing_a_single
)

Sunday, 23 May 2010

Oracle Database: Say No Thanks! to a New Index

(Original post is at blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/oracle_database_say_i_no
)

.. unless you are working with a database that is largely read-only or if the new index is supposed to be fixing a critical performance issue with no side effect(s).

Two topics covered in this blog entry with plenty of simple examples:

  1. Why creating new indexes on a heavily modified table may not be a good idea, and
  2. How to identify unused indexes.
Read on.

Indexes are double-edged swords that may improve the performance of targeted queries, but in some cases they may accidentally degrade the performance of other queries that are not targeted. In any case, exercise caution while adding a new index to the database. After adding a new index, monitor the overall performance of the database, not just the targeted query.

If DML statements that modify data (INSERT, UPDATE or DELETE) are executed a large number of times on a table, make sure that the addition of a new index on the same table does not negatively affect the performance of those DML operations. Usually this is not a problem if the SQLs being executed are simply retrieving, but not adding or modifying, the existing data. In all other cases there is some performance overhead induced by each new index. For example, if there are 10 indexes created on a table DUMMY, adding a new row of data to the table DUMMY may require the database management system to update all 10 indexes behind the scenes.

Here is an example demonstrating the performance overhead of a new index on a table.

SQL> CREATE TABLE VIDEO
2 (BARCODE VARCHAR(10) NOT NULL,
3 TITLE VARCHAR2(25) NOT NULL,
4 FORMAT VARCHAR2(10),
5 PRICE NUMBER,
6 DATA_OF_RELEASE DATE)
7 /

Table created.

SQL> insert into VIDEO values ('9301224321', 'AVATAR', 'BLU-RAY', 19.99, '22-APR-2010');

1 row created.

..

SQL> insert into VIDEO values ('3782460017', 'THE SIMPSONS - SEASON 20', 'BLU-RAY', 29.99, '04-JUL-2009');

1 row created.

SQL> select * from VIDEO;

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321 AVATAR BLU-RAY 19.99 22-APR-10
7619203043 BEN-HUR VHS 9.79 12-MAR-63
7305832093 THE MATRIX DVD 12.29 03-DEC-99
4810218795 MEMENTO DVD 8.49 02-FEB-02
3782460017 THE SIMPSONS - SEASON 20 BLU-RAY 29.99 04-JUL-09

SQL> select * from USER_INDEXES where TABLE_NAME = 'VIDEO';

no rows selected

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321 AVATAR BLU-RAY 19.99 22-APR-10
3782460017 THE SIMPSONS - SEASON 20 BLU-RAY 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

SQL trace file has the following contents.

SQL ID: 0pu5s70nsdnzv
Plan Hash: 3846322456
SELECT *
FROM
VIDEO WHERE FORMAT = :"SYS_B_0"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 16 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 16 0 2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28

Rows Row Source Operation
------- ---------------------------------------------------
2 TABLE ACCESS FULL VIDEO (cr=16 pr=0 pw=0 time=3 us cost=4 size=100 card=2)

Let's create an index and see what happens.

SQL> create index VIDEO_IDX1 on VIDEO (FORMAT);

Index created.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select * from VIDEO where FORMAT = 'BLU-RAY';

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321 AVATAR BLU-RAY 19.99 22-APR-10
3782460017 THE SIMPSONS - SEASON 20 BLU-RAY 29.99 04-JUL-09

SQL> alter session set events '10046 trace name context off';

Session altered.

The latest contents of the trace file are as follows. Notice the reduction in buffer gets from 16 to 4. That is, the new index improved the query performance by 75%.

SQL ID: 0pu5s70nsdnzv
Plan Hash: 2773508764
SELECT *
FROM
VIDEO WHERE FORMAT = :"SYS_B_0"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 2
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 4 0 2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28 (CS90)

Rows Row Source Operation
------- ---------------------------------------------------
2 TABLE ACCESS BY INDEX ROWID VIDEO (cr=4 pr=0 pw=0 time=12 us cost=2 size=100 card=2)
2 INDEX RANGE SCAN VIDEO_IDX1 (cr=2 pr=0 pw=0 time=10 us cost=1 size=0 card=2)(object id 76899)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
2 TABLE ACCESS (BY INDEX ROWID) OF 'VIDEO' (TABLE)
2 INDEX MODE: ANALYZED (RANGE SCAN) OF 'VIDEO_IDX1' (INDEX)

So far so good. Let's add a new row of data and examine the trace file one more time. From here on, keep an eye on the "current" column (logical I/Os performed due to an INSERT, UPDATE or DELETE) and notice how it changes with different actions -- adding and removing indexes, new row(s) of data, etc.

SQL ID: dnb2d8cpdj56p
Plan Hash: 0
INSERT INTO VIDEO
VALUES
(:"SYS_B_0", :"SYS_B_1", :"SYS_B_2", :"SYS_B_3", :"SYS_B_4")


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 7 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 7 1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 28 (CS90)

Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL (cr=1 pr=0 pw=0 time=0 us)


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
0 LOAD TABLE CONVENTIONAL OF 'VIDEO'

Now drop the index, re-insert the last row and get the tracing data again.

SQL> drop index VIDEO_IDX1;

Index dropped.

SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.

The contents of the latest trace file are shown below.

call     count       cpu    elapsed       disk      query    current        rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.00 0 2 5 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.01 0.00 0 2 5 1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 28 (CS90)

Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL (cr=1 pr=0 pw=0 time=0 us)


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
0 LOAD TABLE CONVENTIONAL OF 'VIDEO'

This time create two indexes and see what happens.

SQL> CREATE INDEX VIDEO_IDX1 ON VIDEO (FORMAT);

Index created.

SQL> CREATE INDEX VIDEO_IDX2 ON VIDEO (TITLE);

Index created.

Trace file contents:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 9 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 9 1

Notice the two additional logical I/Os (look under the "current" column). Those additional logical input/output operations are the result of the new indexes. The number goes up as we add more indexes and more data to the table VIDEO.

SQL> delete from VIDEO where BARCODE ='4457332907';

1 row deleted.

SQL> commit;

Commit complete.

SQL> create index VIDEO_IDX3 on VIDEO (PRICE, DATA_OF_RELEASE);

Index created.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> insert into VIDEO values ('4457332907', 'KING OF THE HILL - ALL', 'DVD', 90.00, '01-JAN-2011');

1 row created.

SQL> alter session set events '10046 trace name context off';

Session altered.


SQL trace:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 11 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 1 11 1

You can try other operations such as UPDATE, DELETE on your own.

Since there are only a few rows of data in the table VIDEO, it is hard to notice the real performance impact in these examples. If you really want to see the negative performance impact of a large number of indexes on a heavily updated table, try adding thousands or millions of rows of data and a few more indexes.

Moral of the story: indexes aren't always cheap -- they may have some overhead associated with them. Be aware of those overheads, and ensure that the index maintenance overhead does not offset the performance gains resulting from the indexes created on a particular table.


Monitoring Index Usage

Now we know the possible disadvantage of having too many indexes on a heavily updated table. One way to reduce the index maintenance overhead is to instrument the indexes, so we can monitor their usage from time to time and remove the unused ones. To start monitoring the index usage, alter the index by specifying the keywords MONITORING USAGE.

SQL> select index_name from user_indexes where table_name = 'VIDEO';

INDEX_NAME
--------------------------------------------------------------------------------
VIDEO_IDX3
VIDEO_IDX1
VIDEO_IDX2

SQL> alter index VIDEO_IDX1 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX2 MONITORING USAGE;

Index altered.

SQL> alter index VIDEO_IDX3 MONITORING USAGE;

Index altered.

Once the indexes are instrumented, query the V$OBJECT_USAGE view occasionally to see if the instrumented indexes are being used in executing SQL queries.

SQL> select * from VIDEO where BARCODE LIKE '%22%';

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
9301224321 AVATAR BLU-RAY 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'VHS';

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
7619203043 BEN-HUR VHS 9.79 12-MAR-63

SQL> select * from VIDEO where PRICE < 20;

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
4810218795 MEMENTO DVD 8.49 02-FEB-02
7619203043 BEN-HUR VHS 9.79 12-MAR-63
7305832093 THE MATRIX DVD 12.29 03-DEC-99
9301224321 AVATAR BLU-RAY 19.99 22-APR-10

SQL> select * from VIDEO where FORMAT = 'BLU-RAY' AND DATA_OF_RELEASE < '01-JAN-2010';

BARCODE TITLE FORMAT PRICE DATA_OF_RELEASE
--------------- ----------------------------------- --------------- ---------- ---------------
3782460017 THE SIMPSONS - SEASON 20 BLU-RAY 29.99 04-JUL-09


SQL> column INDEX_NAME format A25
SQL> column START_MONITORING format A30

SQL> select INDEX_NAME, USED, START_MONITORING
2 from V$OBJECT_USAGE
3 where INDEX_NAME LIKE 'VIDEO_IDX%'
4 /

INDEX_NAME USED START_MONITORING
------------------------- --------- ------------------------------
VIDEO_IDX1 YES 04/27/2010 01:10:20
VIDEO_IDX2 NO 04/27/2010 01:10:25
VIDEO_IDX3 YES 04/27/2010 01:10:31

In the above example, the index VIDEO_IDX2 was not used during the period of index monitoring. If we are convinced that the queries that will be executed are similar to the ones executed during the monitoring period, we can go ahead and remove the index VIDEO_IDX2 to reduce the performance overhead of updates on the table VIDEO (see the example below).
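
Removing the unused index is then a one-liner:

    SQL> drop index VIDEO_IDX2;

    Index dropped.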

To stop monitoring the index usage, alter the index with the keywords NOMONITORING USAGE.

SQL> alter index VIDEO_IDX1 NOMONITORING USAGE;

Index altered.


Monday, 10 May 2010

Music : Few Mixed Tunes with GarageBand

(Originally posted on 12/09/2009. This blog post will be updated as new compositions come along.)

For the past couple of weeks I have had fun playing with Apple's GarageBand. It is a nice piece of software with an intuitive user interface and tons of free, ready-to-use music loops. It took only a couple of tries and about 6 hours to produce my first-ever mixed tune with software of any kind. The second one took just two hours, as my main focus was on only two loops. I'm happy with the output and decided to share it with my family and friends, so I uploaded those two tracks to iCompositions, an internet community web site that lets individuals share their creative work with the rest of the community free of cost. Click on the following music player images to listen to these instrumental tracks.
[12/09/09] Track #1 Hokum 
 
[12/09/09] Track #2 Thrum 
 
[12/18/09] Track #3 Kabuki Dance 
 
[01/05/10] Track #4 Phantasm 
 
[05/10/10] Track #5 Transgression 


Now if only I knew how to play a real instrument ..

Friday, 7 May 2010

Oracle Database 11g – Underground Advice for Database Administrators



.. review coming soon ..

Meanwhile, feel free to explore the Table of Contents and check out the freely downloadable Chapter 2: Maintaining Oracle Standards.

Monday, 3 May 2010

Oracle 11g R1: Poor Data Pump Performance when Exporting a Partitioned Table

(Originally posted on blogs.sun.com at http://blogs.sun.com/mandalika/entry/oracle_11g_r1_poor_data)

Symptom(s)

The Data Pump Export utility, expdp, performs well with non-partitioned tables, but exhibits extremely poor performance when exporting objects from a partitioned table of similar size. In some cases the degradation can be as high as 3X or worse.

SQL traces may show that much of the time is being spent in a SQL statement that is similar to:

UPDATE "schema"."TABLE" mtu 
SET mtu.base_process_order = NVL((SELECT mts1.process_order FROM "schema"."TABLE" mts1
WHERE ..

Here is an example data export session:

Export: Release 11.1.0.7.0 - 64bit Production on Wednesday, 31 March, 2010 6:56:50

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCHMA"."SYS_EXPORT_TABLE_01": SCHMA/******** DIRECTORY=exp_dir DUMPFILE=SOME_DUMMY_PART_FULL.DMP TABLES=SOME_DUMMY_PART
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.56 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P01" 1.143 GB 13788224 rows
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P02" 1.143 GB 13788224 rows
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P03" 1.143 GB 13788224 rows
...
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P32" 151.1 MB 1789216 rows
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P33" 11.37 MB 136046 rows
. . exported "SCHMA"."SOME_DUMMY_PART":"DUMMY_PART_P00" 0 KB 0 rows
Master table "SCHMA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCHMA.SYS_EXPORT_TABLE_01 is:
/DBDUMP/SOME_DUMMY_PART_FULL.DMP
Job "SCHMA"."SYS_EXPORT_TABLE_01" successfully completed at 11:22:36

Solution(s) / Workaround

This is a known issue (that is, a bug) and a solution is readily available. Try any of the following to resolve the issue:

  • Apply the 11g database patch 8845859
  • Upgrade to 11.2.0.2 patchset when it is available, or
  • Specify the "VERSION=10.2.0.3" expdp option as a workaround (see the example below)

I ran into this issue and chose the workaround to make some quick progress. With the string "VERSION=10.2.0.3" appended, the export time went down from 265 minutes to 60+ minutes.
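
For reference, the workaround is simply the extra VERSION parameter appended to the export command line. Using the names from the export session shown above, the invocation looks like:

    % expdp SCHMA/******** DIRECTORY=exp_dir DUMPFILE=SOME_DUMMY_PART_FULL.DMP TABLES=SOME_DUMMY_PART VERSION=10.2.0.3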

Wednesday, 21 April 2010

2004-2010 : A Look Back at Sun Published Oracle Benchmarks

(Originally published on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/2004_2010_a_look_back
)

Since Sun Microsystems became a legacy, I got the idea of a reminiscent [farewell] blog post for the company that gave me the much-needed break when I was a graduate student back in 2002. As I spend more than 50% of my time benchmarking different Oracle products on Sun hardware, it'd be fitting to fill this blog entry with a recollection of the benchmarks I was actively involved in over the past 6+ years. Without further ado, the list follows.

2004

 1.  10,000 user Siebel 7.5.2 PSPP benchmark on a combination of SunFire v440, v890 and E2900 servers. Database: Oracle 9i
  • Benchmark Report
  • Blog: Sun achieves winning Siebel benchmark
  • Blog: Sun and Siebel Kick Some Benchmark Butt
  • Blog: When Good Benchmarks Go Bad

2005

 2.  8,000 user Siebel 7.7 PSPP benchmark on a combination of SunFire v490, v890, T2000 and E2900 servers. Database: Oracle 9i

  • Benchmark Report

 3.  12,500 user Siebel 7.7 PSPP benchmark on a combination of SunFire v490, v890, T2000 and E2900 servers. Database: Oracle 9i

  • Benchmark Report

2006

 4.  10,000 user Siebel Analytics 7.8.4 benchmark on multiple SunFire T2000 servers. Database: Oracle 10g

  • Benchmark Report

2007

 5.  10,000 user Siebel 8.0 PSPP benchmark on two T5220 servers. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Sun publishes 10,000 user Siebel 8.0 PSPP benchmark on Niagara 2 systems

2008

 6.  Oracle E-Business Suite 11i Payroll benchmark for 5,000 employees. Database: Oracle 10g R1

  • White Paper (didn't qualify as a benchmark since we configured more than 4 payroll threads)
  • Blog: Running Batch Workloads on Sun's CMT Servers

 7.  14,000 user Siebel 8.0 PSPP benchmark on a single T5440 server. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Siebel 8.0 on Sun SPARC Enterprise T5440 - More Bang for the Buck!!
  • Blog: Siebel on Sun CMT hardware : Best Practices
  • Blueprint: Consolidating Oracle Siebel CRM 8 on a Single Sun SPARC Enterprise Server

 8.  10,000 user Siebel 8.0 PSPP benchmark on a single T5240 server. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Yet Another Siebel 8.0 PSPP Benchmark on Sun CMT Hardware ..

2009

 9.  4,000 user PeopleSoft HR Self-Service 8.9 benchmark on a combination of M3000 and T5120 servers. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: PeopleSoft HRMS 8.9 Self-Service Benchmark on M3000 & T5120 Servers

 10.  28,000 user Oracle Business Intelligence Enterprise Edition (OBIEE) 10.1.3.4 benchmark on a single T5440 server. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: T5440 Rocks [again] with Oracle Business Intelligence Enterprise Edition Workload
  • Blog: Oracle Business Intelligence on Sun : Few Best Practices

 11.  50,000 user Oracle Business Intelligence Enterprise Edition (OBIEE) 10.1.3.4 benchmark on two T5440 servers. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: Sun achieves the Magic Number 50,000 on T5440 with Oracle Business Intelligence EE 10.1.3.4
  • Blueprint: Deploying Oracle Business Intelligence Enterprise Edition

 12.  PeopleSoft North American Payroll 9.0 240K EE 8-stream benchmark on a single M4000 server with F5100 Flash Array storage. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise
  • Blog: Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance
  • Blog: App benchmarks, incorrect conclusions and the Sun Storage F5100
  • Blueprint: Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card

2010

 13.  PeopleSoft North American Payroll 9.0 240K EE 16-stream benchmark on a single M4000 server with F5100 Flash Array storage. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft NA Payroll 240K EE Benchmark with 16 Job Streams : Another Home Run for Sun

 14.  6,000 user PeopleSoft Campus Solutions 9.0 benchmark on a combination of X6270 blades and M4000 server. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades

Although challenging and exhilarating, benchmarks aren't always pleasant to work on, and they are really not for people with weak hearts. While running most of these benchmarks my blood pressure shot up several times, leaving me wondering why I keep working on time-sensitive and/or politically, strategically incorrect benchmarks (apparently not every benchmark finds a home somewhere on the public network). Nevertheless, in the best interest of my employer, the showdown must go on.

Monday, 1 March 2010

PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades


Oracle|Sun published PeopleSoft Campus Solutions 9.0 benchmark results on February 18, 2010. Here is the direct URL to the benchmark results white paper:

      PeopleSoft Enterprise Campus Solutions 9.0 using Oracle 11g on a Sun SPARC Enterprise M4000 & Sun Blade X6270 Modules

Sun published three PeopleSoft benchmarks on the SPARC platform over the last 12-month period -- one OLTP and two batch benchmarks[1]. The latest benchmark is somewhat special for at least a couple of reasons:
  • Campus Solutions 9.0 workload has both online transactions and batch processes, and
  • This is the very first time Sun has published a PeopleSoft benchmark on x64 hardware running Oracle Enterprise Linux

The summary of the benchmark test results is shown below. These numbers were extracted from the very first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the test results and the actual numbers that are of interest to customers. Test results are sorted by hourly throughput (invoices & transcripts per hour) in descending order. Click on the link underneath each vendor name to open the corresponding benchmark result.

While analyzing these test results, remember that the higher the throughput, the better. In the case of online transactions, it is desirable to keep the response times as low as possible.


Oracle PeopleSoft Campus Solutions 9.0 Benchmark Test Results

Resource utilization and response/elapsed times are reported at a peak load of 6,000 concurrent users.
(*) Averaged across the application server nodes; see footnote 2.

Sun
  DB tier     : 1 x M4000 with 2 x 2.53GHz SPARC64 VII QC processors, 32GB RAM;
                1 x Sun Storage Flash Accelerator F20 with 4 x 24GB FMODs;
                1 x ST2540 array with 11 x 136.7GB SAS 15K RPM drives
                OS: Solaris 10 -- CPU: 37.29%, Mem: 20.94 GB
  APP tier    : 2 x X6270 blades with 2 x 2.93GHz Xeon 5570 QC processors, 24GB RAM
                OS: OEL4 U8 -- CPU: 41.69%*, Mem: 4.99 GB*
  WEB+PS tier : 1 x X6270 blade with 2 x 2.8GHz Xeon 5560 QC processors, 24GB RAM
                OS: OEL4 U8 -- CPU: 33.08%, Mem: 6.03 GB
  Online avg response times (sec): Logon 0.64, LSC 0.78, Page Load 0.82, Page Save 1.57
  Batch throughput/hr: Invoices 31,797; Transcripts 36,652

HP
  DB tier     : 1 x Integrity rx6600 with 4 x 1.6GHz Itanium 9050 DC procs, 32GB RAM;
                1 x HP StorageWorks EVA8100 array with 58 x 146GB drives
                OS: HP-UX 11iv3 -- CPU: 61%, Mem: 30 GB
  APP tier    : 2 x BL460c blades with 2 x 3.16GHz Xeon 5460 QC procs, 16GB RAM
                OS: RHEL4 U5 -- CPU: 61.81%, Mem: 3.6 GB
  WEB tier    : 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM
                OS: RHEL4 U5 -- CPU: 44.36%, Mem: 3.77 GB
  PS tier     : 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM
                OS: RHEL4 U5 -- CPU: 21.90%, Mem: 1.48 GB
  Online avg response times (sec): Logon 0.71, LSC 0.91, Page Load 0.83, Page Save 1.63
  Batch throughput/hr: Invoices 22,753; Transcripts 30,257

HP
  DB tier     : 1 x ProLiant DL580 G4 with 4 x 3.4GHz Xeon 7140M DC procs, 32GB RAM;
                1 x HP StorageWorks XP128 array with 28 x 73GB drives
                OS: Win2003 R2 -- CPU: 70.37%, Mem: 21.26 GB
  APP tier    : 4 x BL480c G1 blades with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
                OS: Win2003 R2 -- CPU: 65.61%, Mem: 2.17 GB
  WEB tier    : 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
                OS: Win2003 R2 -- CPU: 54.11%, Mem: 3.13 GB
  PS tier     : 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM
                OS: Win2003 R2 -- CPU: 32.44%, Mem: 1.40 GB
  Online avg response times (sec): Logon 0.72, LSC 1.17, Page Load 0.94, Page Save 1.80
  Batch throughput/hr: Invoices 17,621; Transcripts 25,423


This is all public information. Feel free to compare the hardware configurations and the data presented in the table, and draw your own conclusions. Since Sun and HP used the same benchmark workload and toolkit, and ran the benchmark with the same number of concurrent users and job streams for the batch processes, the comparison should be pretty straightforward.

Hopefully the following paragraphs will provide relevant insights into the benchmark and the application.

Caution in interpreting the Online Transaction Response Times

Average response times for the online transactions were measured using HP's QuickTest Pro (QTP) tool; this is a benchmark requirement. QTP test scripts depend on the web browser (Internet Explorer in particular), and hence they are extremely sensitive to web browser latencies, remote desktop/VNC latencies and other latencies induced by the operating system. Be aware that all of these latencies are factored into the transaction response times, so the final averages might be somewhat skewed. In other words, the reported average transaction response times may not be very accurate -- in most cases we are looking at approximate values, and the actual response times might be better than the ones reported in the benchmark report. (I really wish Oracle|PeopleSoft would throw away the skewed samples to make the data more accurate and reliable.) Please keep this in mind when looking at the response times of the online transactions.

Quick note about Consolidation

In our benchmark environment, we had the PeopleSoft Process Scheduler (batch server) set up on the same node as the web server. In general, Oracle recommends setting up the Process Scheduler either on the database server node or on a dedicated system. In the benchmark environment, however, we chose not to run the Process Scheduler on the database server node because it would hurt the performance of the online transactions. At the same time, we noticed plenty of idle CPU cycles on the web server node even at the peak load of 6,000 concurrent users, so we decided to run the Process Scheduler there. Customers who are not comfortable with this kind of setup can use any supported virtualization technology (e.g., Logical Domains or Containers on Solaris, Oracle VM on OEL) to separate the Process Scheduler from the web server and allocate the system resources as they see fit. It is simply a matter of choice.
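
If, for instance, Solaris Containers were used for that separation, the configuration could be as small as the following sketch. The zone name, zonepath and CPU count here are hypothetical values and would need to be sized for the actual workload:

    # minimal sketch: a Solaris Container to fence off the Process Scheduler
    # ("psched", /zones/psched and ncpus=4 are hypothetical values)
    zonecfg -z psched 'create; set zonepath=/zones/psched; add dedicated-cpu; set ncpus=4; end; verify; commit'
    zoneadm -z psched install       # install the zone
    zoneadm -z psched boot          # boot it, then deploy the Process Scheduler inside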

PeopleSoft Load Balancing

PeopleSoft has a load-balancing mechanism built into the web server to forward incoming requests to an appropriate application server in the enterprise, and within the application server to send each request to an appropriate application server process, PSAPPSRV. (I'm not 100% sure, but I believe the application server balances the load among its server processes in a round-robin fashion on *nix platforms, whereas on Windows it forwards all requests to a single server process until that process reaches its configured limit before moving on to the next available one.) However, this built-in load balancing is not perfect: most of the time, the number of requests processed by each of the identically configured application server processes [running on different application server nodes in the enterprise] is not even. This minor shortcoming can lead to uneven resource usage across the nodes in a PeopleSoft deployment. You can see this in the CPU and memory usage reported for the two application server nodes in the benchmark environment (check the benchmark results white paper).
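
For the curious, the per-process distribution is easy to observe from the Tuxedo administration console that the PeopleSoft application server is built on; a sketch, assuming it is run from an environment set up for the application server domain:

    % tmadmin
    > psr        # 'print server' -- compare the RqDone (requests completed)
    >            # column across the PSAPPSRV instances; uneven counts
    >            # confirm the imbalance
    > quit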

Sun Flash Accelerator F20 PCIe Card

To reduce I/O latency, hot tables and hot indexes were placed on a Sun Flash Accelerator F20 PCIe Card in this benchmark. The F20 card has a total capacity of 96 GB across 4 x 24GB Flash Modules (FMODs). Although the workload is only moderately I/O intensive, the batch processes generate a burst of I/O for a few minutes during the steady state of the benchmark. The flash accelerator handled that burst of I/O activity very well, and as a result the performance of the batch processing improved.
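
One way to observe such a burst on Solaris is iostat; a minimal sketch (the 5-second sampling interval is arbitrary):

    % iostat -xnz 5
    # -x extended statistics, -n descriptive device names, -z hides idle devices;
    # during the batch window, watch r/s, w/s and %b on the F20 FMOD devices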

Check the white paper Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card to learn more about the flash products offered by Oracle|Sun and how they can be deployed in a PeopleSoft environment for maximum benefit.

Solaris-specific Tuning

On almost all versions of Solaris 10, the kernel uses 4M as the maximum page size even though the underlying hardware supports pages as large as 256M. Large pages may improve the performance of memory-intensive workloads such as the Oracle database by reducing the number of virtual-to-physical address translations, thereby reducing expensive dTLB/iTLB misses. In the benchmark environment, the following values were set in the /etc/system configuration file of the database server node to enable 256M pages for the process heap and ISM.

* 256M pages for process heap
set max_uheap_lpsize=0x10000000

* 256M pages for ISM
set mmu_ism_pagesize=0x10000000
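
To confirm that the settings took effect after a reboot, one can list the page sizes the platform supports and inspect the page sizes actually backing a running Oracle process; a quick sketch (the process id is a placeholder):

    % pagesize -a              # lists the page sizes the kernel/hardware support
    % pmap -s <oracle_pid>     # the Pgsz column shows 256M where large pages are in use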


While we are on the topic of tuning: the Linux configuration was left out-of-the-box. No OS tuning was performed on the Linux nodes in this benchmark.

Tuning Tip for Solaris Customers

Even though we did not set up the middle tier on a Solaris box in this benchmark, this tuning tip is still valid and may help customers running the application server on Solaris. Consider lowering the shell limit for file descriptors to 512 or less if it is currently set to anything higher. As of today (until the release of PeopleTools 8.50), certain parts of the PeopleSoft code call the file control routine, fcntl(), and the file close routine, fclose(), in a loop "ulimit -n" times in order to close a set of files that were opened to perform a specific task. PeopleSoft processes generally do not open hundreds of files, so this behavior results in a ton of dummy calls that simply error out. Those system calls are not cheap, either -- they consume CPU cycles -- and it gets worse when a number of PeopleSoft processes exhibit this behavior simultaneously. (High system CPU% is one symptom that helps identify this behavior.) Oracle|PeopleSoft is currently working to address this performance issue; meanwhile, customers can lower the file descriptor shell limit to reduce its intensity and impact, as in the sketch below.
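
A minimal sketch of the workaround, run in the (Bourne-compatible) shell from which the application server domain is started:

    % ulimit -n            # check the current file descriptor limit
    % ulimit -n 512        # lower it; processes started from this shell inherit the limit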

We did not observe this behavior on OEL while running the benchmark. But be sure to trace the system calls and figure out whether the shell limit for file descriptors needs to be lowered on Linux or other supported platforms as well.
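
A per-system-call summary makes the pattern easy to spot; a sketch with placeholder process ids:

    # Solaris: count system calls made by a running PSAPPSRV process
    % truss -c -p <psappsrv_pid>     # interrupt with Ctrl-C; look for inflated fcntl/close counts

    # Linux: the equivalent summary with strace
    % strace -c -p <psappsrv_pid>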

______________________________________

Footnotes:



1. PeopleSoft benchmarks on the Sun platform in 2009-2010

  1. PeopleSoft HRMS 8.9 SELF-SERVICE Using ORACLE on Sun SPARC Enterprise M3000 and Enterprise T5120 Servers -- online transactions (OLTP)
  2. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams) -- batch workload
  3. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 streams) -- batch workload


2. * HP's benchmark results white paper did not show the CPU and memory breakdown separately for each of the application server nodes; it showed only the average of the per-node average CPU and memory utilization for all app server nodes under "App Servers". Sun's average CPU and memory numbers [shown in the above table] were calculated the same way for consistency.

(Copied from the original post at Oracle|Sun blogs @
http://blogs.sun.com/mandalika/entry/peoplesoft_campus_solutions_9_0
)

Thursday, 11 February 2010

Extracting DDL Statements from a PeopleSoft Data Mover exported DAT file

Posted on 01:07 by Unknown
Case in hand: given a PeopleSoft Data Mover exported data file (db or dat file), how do you extract the DDL statements [from that data file] that get executed as part of Data Mover's data import process?

Here is a quick way to do it:

  1. Insert the SET EXTRACT statements in the Data Mover script (DMS) before the IMPORT .. statement.

    e.g.,

    % cat /tmp/retrieveddl.dms

    ..
    SET EXTRACT OUTPUT /tmp/ddl_stmts.log;
    SET EXTRACT DDL;

    ..

    IMPORT *;


    It is mandatory that the SET EXTRACT OUTPUT statement appear before any other SET EXTRACT statements.

  2. Run the Data Mover utility with the modified DMS script as an argument.

    e.g., on Solaris:


    % psdmtx -CT ORACLE -CD NAP11 -CO NAP11 -CP NAP11 -CI people -CW peop1e -FP /tmp/retrieveddl.dms
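    #   (flags, roughly: -CT database type, -CD database name, -CO/-CP the
    #    PeopleSoft operator id and password, -CI/-CW the connect id and
    #    password, -FP the path to the DMS script)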


    On successful completion, you will find the DDL statements in the /tmp/ddl_stmts.log file (see the quick filtering sketch after this list).

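The extracted statements land in a plain text file, so standard tools suffice to filter them; a trivial sketch:

    % grep -i "create" /tmp/ddl_stmts.log     # pull out just the CREATE TABLE/INDEX lines
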
Check chapter 2, "Using PeopleSoft Data Mover," in the Enterprise PeopleTools x.xx PeopleBook: Data Management for more ideas.

______
(Originally posted on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/extracting_ddl_statements_from_a
)