Creation Zone


Wednesday, 21 April 2010

2004-2010 : A Look Back at Sun Published Oracle Benchmarks

Posted on 01:25 by Unknown
(Originally published on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/2004_2010_a_look_back
)

Now that Sun Microsystems has become a legacy, I got the idea of writing a reminiscent [farewell] blog post for the company that gave me a much-needed break when I was a graduate student back in 2002. Since I spent more than 50% of my time benchmarking different Oracle products on Sun hardware, it seems fitting to fill this blog entry with a recollection of the benchmarks I was actively involved in over the past 6+ years. Without further ado, the list follows.

2004

 1.  10,000 user Siebel 7.5.2 PSPP benchmark on a combination of SunFire v440, v890 and E2900 servers. Database: Oracle 9i
  • Benchmark Report
  • Blog: Sun achieves winning Siebel benchmark
  • Blog: Sun and Siebel Kick Some Benchmark Butt
  • Blog: When Good Benchmarks Go Bad

2005

 2.  8,000 user Siebel 7.7 PSPP benchmark on a combination of SunFire v490, v890, T2000 and E2900 servers. Database: Oracle 9i

  • Benchmark Report

 3.  12,500 user Siebel 7.7 PSPP benchmark on a combination of SunFire v490, v890, T2000 and E2900 servers. Database: Oracle 9i

  • Benchmark Report

2006

 4.  10,000 user Siebel Analytics 7.8.4 benchmark on multiple SunFire T2000 servers. Database: Oracle 10g

  • Benchmark Report

2007

 5.  10,000 user Siebel 8.0 PSPP benchmark on two T5220 servers. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Sun publishes 10,000 user Siebel 8.0 PSPP benchmark on Niagara 2 systems

2008

 6.  Oracle E-Business Suite 11i Payroll benchmark for 5,000 employees. Database: Oracle 10g R1

  • White Paper (didn't qualify as a benchmark since we configured more than 4 payroll threads)
  • Blog: Running Batch Workloads on Sun's CMT Servers

 7.  14,000 user Siebel 8.0 PSPP benchmark on a single T5440 server. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Siebel 8.0 on Sun SPARC Enterprise T5440 - More Bang for the Buck!!
  • Blog: Siebel on Sun CMT hardware : Best Practices
  • Blueprint: Consolidating Oracle Siebel CRM 8 on a Single Sun SPARC Enterprise Server

 8.  10,000 user Siebel 8.0 PSPP benchmark on a single T5240 server. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: Yet Another Siebel 8.0 PSPP Benchmark on Sun CMT Hardware ..

2009

 9.  4,000 user PeopleSoft HR Self-Service 8.9 benchmark on a combination of M3000 and T5120 servers. Database: Oracle 10g R2

  • Benchmark Report
  • Blog: PeopleSoft HRMS 8.9 Self-Service Benchmark on M3000 & T5120 Servers

 10.  28,000 user Oracle Business Intelligence Enterprise Edition (OBIEE) 10.1.3.4 benchmark on a single T5440 server. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: T5440 Rocks [again] with Oracle Business Intelligence Enterprise Edition Workload
  • Blog: Oracle Business Intelligence on Sun : Few Best Practices

 11.  50,000 user Oracle Business Intelligence Enterprise Edition (OBIEE) 10.1.3.4 benchmark on two T5440 servers. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: Sun achieves the Magic Number 50,000 on T5440 with Oracle Business Intelligence EE 10.1.3.4
  • Blueprint: Deploying Oracle Business Intelligence Enterprise Edition

 12.  PeopleSoft North American Payroll 9.0 240K EE 8-stream benchmark on a single M4000 server with F5100 Flash Array storage. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise
  • Blog: Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance
  • Blog: App benchmarks, incorrect conclusions and the Sun Storage F5100
  • Blueprint: Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card

2010

 13.  PeopleSoft North American Payroll 9.0 240K EE 16-stream benchmark on a single M4000 server with F5100 Flash Array storage. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft NA Payroll 240K EE Benchmark with 16 Job Streams : Another Home Run for Sun

 14.  6,000 user PeopleSoft Campus Solutions 9.0 benchmark on a combination of X6270 blades and M4000 server. Database: Oracle 11g R1

  • Benchmark Report
  • Blog: PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades

Although challenging and exhilarating, benchmarks aren't always pleasant to work on, and they are certainly not for the faint of heart. While running most of these benchmarks, my blood pressure shot up several times, leaving me wondering why I keep working on time-sensitive and/or politically and strategically inconvenient benchmarks (apparently not every benchmark finds a home somewhere on the public network). Nevertheless, in the best interest of my employer, the showdown must go on.

Monday, 1 March 2010

PeopleSoft Campus Solutions 9.0 benchmark on Sun SPARC Enterprise M4000 and X6270 blades

Posted on 00:54 by Unknown

Oracle|Sun published PeopleSoft Campus Solutions 9.0 benchmark results on February 18, 2010. Here is the direct URL to the benchmark results white paper:

      PeopleSoft Enterprise Campus Solutions 9.0 using Oracle 11g on a Sun SPARC Enterprise M4000 & Sun Blade X6270 Modules

Sun published three PeopleSoft benchmarks on the SPARC platform over the last 12-month period -- one OLTP and two batch benchmarks[1]. The latest benchmark is somewhat special for at least a couple of reasons:
  • The Campus Solutions 9.0 workload has both online transactions and batch processes, and
  • This is the first time Sun has published a PeopleSoft benchmark on x64 hardware running Oracle Enterprise Linux

The summary of the benchmark test results is shown below. These numbers were extracted from the first page of the benchmark results white paper, where Oracle|PeopleSoft highlights the significance of the test results and the actual numbers that are of interest to customers. Test results are sorted by hourly throughput (invoices & transcripts per hour) in descending order. Click on the link underneath the vendor name to open the corresponding benchmark result.

While analyzing these test results, remember that the higher the throughput, the better. In the case of online transactions, it is desirable to keep the response times as low as possible.


Oracle PeopleSoft Campus Solutions 9.0 Benchmark Test Results

(All response times and batch throughput figures are at the peak load of 6,000 concurrent users. Entries marked with an asterisk are explained in footnote 2.)

Vendor: Sun
  DB node     : 1 x M4000 with 2 x 2.53GHz SPARC64 VII QC processors, 32GB RAM -- Solaris 10
                1 x Sun Storage Flash Accelerator F20 with 4 x 24GB FMODs
                1 x ST2540 array with 11 x 136.7GB SAS 15K RPM drives
                Resource utilization: 37.29% CPU, 20.94GB memory
  APP nodes   : 2 x X6270 blades with 2 x 2.93GHz Xeon 5570 QC processors, 24GB RAM -- OEL4 U8
                Resource utilization: 41.69%* CPU, 4.99GB* memory
  WEB+PS node : 1 x X6270 blade with 2 x 2.8GHz Xeon 5560 QC processors, 24GB RAM -- OEL4 U8
                Resource utilization: 33.08% CPU, 6.03GB memory
  Online transactions, avg response times (sec) : Logon 0.64 | LSC 0.78 | Page Load 0.82 | Page Save 1.57
  Batch throughput/hr : Invoices 31,797 | Transcripts 36,652

Vendor: HP
  DB node  : 1 x Integrity rx6600 with 4 x 1.6GHz Itanium 9050 DC procs, 32GB RAM -- HP-UX 11iv3
             1 x HP StorageWorks EVA8100 array with 58 x 146GB drives
             Resource utilization: 61% CPU, 30GB memory
  APP nodes: 2 x BL460c blades with 2 x 3.16GHz Xeon 5460 QC procs, 16GB RAM -- RHEL4 U5
             Resource utilization: 61.81% CPU, 3.6GB memory
  WEB node : 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM -- RHEL4 U5
             Resource utilization: 44.36% CPU, 3.77GB memory
  PS node  : 1 x BL460c blade with 2 x 3GHz Xeon 5160 DC procs, 8GB RAM -- RHEL4 U5
             Resource utilization: 21.90% CPU, 1.48GB memory
  Online transactions, avg response times (sec) : Logon 0.71 | LSC 0.91 | Page Load 0.83 | Page Save 1.63
  Batch throughput/hr : Invoices 22,753 | Transcripts 30,257

Vendor: HP
  DB node  : 1 x ProLiant DL580 G4 with 4 x 3.4GHz Xeon 7140M DC procs, 32GB RAM -- Win2003 R2
             1 x HP StorageWorks XP128 array with 28 x 73GB drives
             Resource utilization: 70.37% CPU, 21.26GB memory
  APP nodes: 4 x BL480c G1 blades with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM -- Win2003 R2
             Resource utilization: 65.61% CPU, 2.17GB memory
  WEB node : 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM -- Win2003 R2
             Resource utilization: 54.11% CPU, 3.13GB memory
  PS node  : 1 x BL460c G1 blade with 2 x 3GHz Xeon 5160 DC procs, 12GB RAM -- Win2003 R2
             Resource utilization: 32.44% CPU, 1.40GB memory
  Online transactions, avg response times (sec) : Logon 0.72 | LSC 1.17 | Page Load 0.94 | Page Save 1.80
  Batch throughput/hr : Invoices 17,621 | Transcripts 25,423


This is all public information. Feel free to compare the hardware configurations and the data presented in the table and draw your own conclusions. Since both Sun and HP used the same benchmark workload and toolkit, and ran the benchmark with the same number of concurrent users and job streams for the batch processes, the comparison should be pretty straightforward.

Hopefully the following paragraphs will provide relevant insights into the benchmark and the application.

Caution in interpreting the Online Transaction Response Times

Average response times for the online transactions were measured using HP's QuickTest Pro (QTP) tool, which is a benchmark requirement. QTP test scripts depend on the web browser (Internet Explorer in particular), so they are extremely sensitive to web browser latencies, remote desktop/VNC latencies and other latencies induced by the operating system. Be aware that all of these latencies are factored into the transaction response times, so the final average transaction response times might be skewed a little. In other words, the reported average transaction response times may not be very accurate; in most cases we are looking at approximate values, and the actual values might be better than the ones reported in the benchmark report. (I really wish Oracle|PeopleSoft would throw away some of the skewed samples to make the data more accurate and reliable.) Please keep this in mind when looking at the response times of the online transactions.
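To see how just a couple of latency-skewed samples can inflate an average, here is a small illustration with entirely made-up response-time numbers (nothing below comes from the benchmark kit; it only demonstrates why discarding outlier samples would make the reported averages more reliable):

```shell
# Ten hypothetical response-time samples (seconds); two were inflated by
# desktop/VNC latency rather than by the application itself.
samples="0.6 0.7 0.6 0.8 0.7 0.6 5.2 0.7 4.8 0.6"

# Plain average over all ten samples:
echo $samples | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "mean=%.2f\n", s / NF }'
# prints: mean=1.53

# Average after discarding the two largest (skewed) samples:
echo $samples | tr ' ' '\n' | sort -n | head -8 | awk '{ s += $1 } END { printf "trimmed=%.2f\n", s / NR }'
# prints: trimmed=0.66
```

Two bad samples out of ten more than double the reported average in this toy case, which is exactly the kind of skew described above.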

Quick note about Consolidation

In our benchmark environment, we had the PeopleSoft Process Scheduler (batch server) set up on the same node as the web server. In general, Oracle recommends setting up the process scheduler either on the database server node or on a dedicated system. However, in the benchmark environment we chose not to run the process scheduler on the database server node, as it would hurt the performance of the online transactions. At the same time, we noticed plenty of idle CPU cycles on the web server node even at the peak load of 6,000 concurrent users, so we decided to run the process scheduler on the web server node. If customers are not comfortable with this kind of setup, they can use any supported virtualization technology (e.g., Logical Domains or Containers on Solaris, Oracle VM on OEL) to separate the process scheduler from the web server, allocating the system resources as they like. It is just a matter of choice.
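As one possible illustration of that separation (everything below is hypothetical: the zone name, path and CPU cap are not from the benchmark environment), a Solaris 10 Container for the process scheduler could be carved out on the web server node roughly like this:

```shell
# Hypothetical sketch: create a Solaris 10 zone for the PeopleSoft
# Process Scheduler on the web server node and cap its CPU share so the
# web server keeps the cycles it needs.
zonecfg -z psched <<EOF
create
set zonepath=/zones/psched
set autoboot=true
add capped-cpu
set ncpus=2
end
commit
EOF

zoneadm -z psched install   # install the zone
zoneadm -z psched boot      # boot it; the scheduler domain runs inside
```

Logical Domains or Oracle VM would achieve the same isolation with different trade-offs; the point is simply that the process scheduler need not share an OS instance with the web server.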

PeopleSoft Load Balancing

PeopleSoft has a load balancing mechanism built into the web server to forward incoming requests to an appropriate application server in the enterprise, and within the application server to send each request to an appropriate application server process, PSAPPSRV. (I'm not 100% sure, but I think the application server balances the load among application server processes in a round-robin fashion on *nix platforms, whereas on Windows it forwards all requests to a single application server process until that process reaches the configured limit before moving on to the next available one.) However, this built-in load balancing is not perfect. Most of the time, the number of requests processed by each of the identically configured application server processes [running on different application server nodes in the enterprise] will not be even. This minor shortcoming can lead to uneven resource usage across the nodes in a PeopleSoft deployment. You can see this in the CPU and memory usage reported for the two app server nodes in the benchmark environment (check the benchmark results white paper).

Sun Flash Accelerator F20 PCIe Card

To reduce I/O latency, hot tables and hot indexes were placed on a Sun Flash Accelerator F20 PCIe Card in this benchmark. The F20 card has a total capacity of 96 GB with 4 x 24GB Flash Modules (FMODs). Although this workload is only moderately I/O intensive, the batch processes in this benchmark generate a lot of I/O for a few minutes during the steady state of the run. The flash accelerator handled the burst of I/O activity well, and as a result the performance of the batch processing improved.

Check the white paper Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card to know more about the top flash products offered by Oracle|Sun and how they can be deployed in a PeopleSoft environment for maximum benefit.

Solaris specific Tuning

On almost all versions of Solaris 10, the kernel uses 4M as the maximum page size even though the underlying hardware supports pages as large as 256M. Large pages may improve the performance of memory-intensive workloads, such as the Oracle database, by reducing the number of virtual-to-physical address translations and thereby reducing expensive dTLB/iTLB misses. In the benchmark environment, the following values were set in the /etc/system configuration file of the database server node to enable 256M pages for the process heap and ISM.

* 256M pages for process heap
set max_uheap_lpsize=0x10000000

* 256M pages for ISM
set mmu_ism_pagesize=0x10000000


While we are on the same topic, Linux configuration is out-of-the-box. No OS tuning was performed in this benchmark.

Tuning Tip for Solaris Customers

Even though we did not set up the middle tier on a Solaris box in this benchmark, this particular tuning tip is still valid and may help customers running the application server on Solaris. Consider lowering the shell limit for file descriptors to 512 or less if it is currently set to anything greater than 512. As of today (until the release of PeopleTools 8.50), certain parts of the PeopleSoft code call the file control routine, fcntl(), and the file close routine, fclose(), in a loop "ulimit -n" times to close a set of files that were opened to perform a specific task. In general, PeopleSoft processes do not open hundreds of files, so this behavior results in a ton of dummy calls that error out. Those system calls are not cheap, either -- they consume CPU cycles -- and it gets worse when a number of PeopleSoft processes exhibit this behavior simultaneously. (High system CPU% is one symptom that helps identify this behavior.) Oracle|PeopleSoft is currently trying to address this performance issue; meanwhile, customers can lower the file descriptor shell limit to reduce its intensity and impact.
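As a quick sketch (the 512 value is simply the ceiling suggested above), the limit can be checked and lowered in the shell that launches the application server domain:

```shell
# Check the current per-process file descriptor limit for this shell:
ulimit -n

# Lower it to 512 for this shell and every process it spawns
# (e.g., a PeopleSoft application server domain started from here):
ulimit -n 512
ulimit -n    # now reports 512
```

Note that the change only affects the current shell and its children, so it belongs in whatever startup script brings up the domain.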

We did not observe this behavior on OEL when running the benchmark. But be sure to trace the system calls and figure out whether the shell limit for file descriptors needs to be lowered even on Linux or other supported platforms.

______________________________________

Footnotes:



1. PeopleSoft benchmarks on Sun platform in year 2009-2010

  1. PeopleSoft HRMS 8.9 SELF-SERVICE Using ORACLE on Sun SPARC Enterprise M3000 and Enterprise T5120 Servers -- online transactions (OLTP)
  2. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams) -- batch workload
  3. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 streams) -- batch workload


2. *HP's benchmark results white paper did not show the CPU and memory breakdown separately for each of the application server nodes; it only shows the average of the average CPU and memory utilization across all app server nodes under "App Servers". Sun's average CPU and memory numbers [shown in the above table] were calculated in the same way for consistency.

(Copied from the original post at Oracle|Sun blogs @
http://blogs.sun.com/mandalika/entry/peoplesoft_campus_solutions_9_0
)

Thursday, 11 February 2010

Extracting DDL Statements from a PeopleSoft Data Mover exported DAT file

Posted on 01:07 by Unknown
Case in hand: given a PeopleSoft Data Mover exported data file (db or dat file), how do you extract the DDL statements [from that data file] that get executed as part of Data Mover's data import process?

Here is a quick way to do it:

  1. Insert the SET EXTRACT statements in the Data Mover script (DMS) before the IMPORT .. statement.

    eg.,

    % cat /tmp/retrieveddl.dms

    ..
    SET EXTRACT OUTPUT /tmp/ddl_stmts.log;
    SET EXTRACT DDL;

    ..

    IMPORT *;


It is mandatory that the SET EXTRACT OUTPUT statement appear before any other SET EXTRACT statements.

  2. Run the Data Mover utility with the modified DMS script as an argument.

    eg., OS: Solaris


    % psdmtx -CT ORACLE -CD NAP11 -CO NAP11 -CP NAP11 -CI people -CW peop1e -FP /tmp/retrieveddl.dms


    On successful completion, you will find the DDL statements in the /tmp/ddl_stmts.log file.

Check chapter #2 "Using PeopleSoft Data Mover" in Enterprise PeopleTools x.xx PeopleBook: Data Management document for more ideas.

______
(Originally posted on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/extracting_ddl_statements_from_a
)

Sunday, 31 January 2010

PeopleSoft NA Payroll 240K EE Benchmark with 16 Job Streams : Another Home Run for Sun

Posted on 20:44 by Unknown
(Originally posted on blogs.sun.com at:
http://blogs.sun.com/mandalika/entry/peoplesoft_na_payroll_240k_ee
)

Poor Steve A.[1] ... This entry is not about Steve A. though. It is about the new PeopleSoft NA Payroll benchmark result that Sun published today.

First things first. Here is the direct URL to our latest benchmark results:

        PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (16 job streams[2] -- simply referred to as 'streams' from here on)

The summary of the benchmark test results is shown below, only for the 16-stream benchmarks. These numbers were extracted from the first page of the benchmark results white papers, where Oracle|PeopleSoft highlights the significance of the results and the actual numbers that are of interest to customers. The results in the following table are sorted by hourly throughput (payments/hour) in descending order; the goal is to achieve as much hourly throughput as possible. Click on the link underneath the hourly throughput values to open the corresponding benchmark result.


Oracle PeopleSoft North American Payroll 9.0 - Number of employees: 240,000 & Number of payments: 360,000
Vendor       : Sun
OS           : Solaris 10 5/09
Hardware     : 1 x Sun SPARC Enterprise M4000 with 4 x 2.53 GHz SPARC64-VII Quad-Core processors and 32 GB memory
               1 x Sun Storage F5100 Flash Array with 40 Flash Modules for data, indexes
               1 x Sun Storage J4200 Array for redo logs
#Job Streams : 16    Elapsed Time: 43.78 min    Hourly Throughput: 493,376 payments/hour

Vendor       : HP
OS           : HP-UX
Hardware     : 1 x HP Integrity rx6600 with 4 x 1.6 GHz Intel Itanium2 9000 Dual-Core processors and 32 GB memory
               1 x HP StorageWorks EVA 8100
#Job Streams : 16    Elapsed Time: 68.07 min    Hourly Throughput: 317,320 payments/hour


This is all public information. Feel free to compare the hardware configurations and the data presented in both rows and draw your own conclusions. Since both Sun and HP used the same benchmark toolkit and workload, and ran the benchmark with the same number of job streams, the comparison should be pretty straightforward.

If you want to compare the 8-stream results, check the other blog entry: PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise. Sun used the same hardware to run both benchmark tests, with 8 and 16 streams respectively. We could have gotten away with 20+ Flash Modules (FMODs), but we wanted to keep the benchmark environment consistent with our prior benchmark effort around the same workload with 8 job streams. Because the hardware setup is identical, we can now easily demonstrate the advantage of parallelism (simply by comparing the results from the 8- and 16-stream benchmarks) and how resilient and scalable the F5100 Flash Array is.

Our benchmarks showed an improvement of ~55% in overall throughput when the number of job streams was increased from 8 to 16. Our 16-stream results also showed a ~55% improvement in overall throughput over HP's published results with the same number of streams, at a maximum average CPU utilization of 45% compared to HP's 89%. The half-populated Sun Storage F5100 Flash Array played the key role in both benchmark efforts, demonstrating superior I/O performance over traditional disk-based arrays.
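The ~55% figure over HP can be checked directly from the throughput numbers in the table above:

```shell
# Sun (16 streams): 493,376 payments/hr; HP (16 streams): 317,320 payments/hr
awk 'BEGIN { printf "improvement = %.1f%%\n", (493376 - 317320) / 317320 * 100 }'
# prints: improvement = 55.5%
```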

Before concluding, I would like to highlight a few known facts (just for the benefit of those people who may fall for the PR trickery):
  1. 8 job streams != 16 job streams. In other words, the results from an 8-stream effort are not comparable to those of a 16-stream effort.
  2. Throughput should go up as the number of job streams increases [only up to a point -- do not forget that there is a saturation point for everything]. For example, the throughput with 16 streams is likely to be higher than the 8-stream throughput.
  3. The Law of Diminishing Returns applies to the software world too, not just to economics. So there is no guarantee that the throughput will be much better with 24 or 32 job streams.


Other blog posts and documents of interest:
  1. Best Practices for Oracle PeopleSoft Enterprise Payroll for North America using the Sun Storage F5100 Flash Array or Sun Flash Accelerator F20 PCIe Card
  2. PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000 (8 streams benchmark white paper)
  3. PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise
  4. App benchmarks, incorrect conclusions and the Sun Storage F5100
  5. Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance

Notes:

[1] Steve A. tried his best to make everyone believe that HP's 16 job stream NA Payroll 240K EE benchmark results are on par with Sun's 8-stream results. Apparently Steve A. failed and gave up after we showed the world a few screenshots from a published and eventually withdrawn benchmark [by HP]. You can read all of his arguments, comparisons, etc. in the comments section of my other blog entry, PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise, as well as in Joerg Moellenkamp's blog entries on the same topic.

[2] In PeopleSoft terminology, a job stream is roughly equivalent to a thread.

Wednesday, 30 December 2009

Accessing MySQL Database(s) from StarOffice / OpenOffice.org Suite of Applications

Posted on 08:37 by Unknown
This blog post is organized into two major sections and several sub-sections. The major sections focus on the tasks to be performed at the MySQL server and the *Office client while the sub-sections talk about the steps to be performed in detail.

To illustrate the examples in this exercise, we will create a new MySQL database user with user ID SOUSER. This new user will be granted read-only access to a couple of tables in a MySQL database called ISVe. The database can be accessed from any host in the network. ben10.sfbay is the hostname of the MySQL server.

Tasks to be Performed at the MySQL Server


This section is intended only for the MySQL Server Administrators. If you are an end-user, skip ahead to Tasks to be Performed at the Client side.

Create a new MySQL user and grant required privileges.

eg.,

% mysql -u root -p
Enter password: *****
Server version: 5.1.25-rc-standard Source distribution
..

mysql> CREATE USER SOUSER IDENTIFIED BY 'SOUSER';
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for SOUSER;
+-------------------------------------------------------------------------------------------------------+
| Grants for SOUSER@% |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'SOUSER'@'%' IDENTIFIED BY PASSWORD '*8370607DA2602E52F463FF3B2FFEA53E81B9314C' |
+-------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> USE ISVe;
Database changed

mysql> show tables;
+--------------------------+
| Tables_in_ISVe |
+--------------------------+
| CustomReport |
| CustomSQL |
| ISVeOldProjects |
| ISVeOrg |
| ISVeProject |
| ISVeProjectExecution |
| ISVeProjectGoalAlignment |
| ISVeProjectMiscInfo |
| ISVeProjectScoping |
| ISVeProjectStatus |
| ISVeProjects |
| ISVeProjectsVW |
| ISVeSearchLog |
| LastRefreshed |
+--------------------------+
14 rows in set (0.00 sec)

mysql> GRANT SELECT ON ISVe.ISVeOldProjects TO 'SOUSER'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT SELECT ON ISVe.ISVeProjects TO 'SOUSER'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for SOUSER;
+-------------------------------------------------------------------------------------------------------+
| Grants for SOUSER@% |
+-------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'SOUSER'@'%' IDENTIFIED BY PASSWORD '*8370607DA2602E52F463FF3B2FFEA53E81B9314C' |
| GRANT SELECT ON `ISVe`.`ISVeOldProjects` TO 'SOUSER'@'%' |
| GRANT SELECT ON `ISVe`.`ISVeProjects` TO 'SOUSER'@'%' |
+-------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

mysql> quit
Bye


Check the database connectivity and the accessibility from a remote location.


% mysql -h ben10.sfbay -D ISVe -u SOUSER -pSOUSER
Server version: 5.1.25-rc-standard Source distribution

mysql> show tables;
+-----------------+
| Tables_in_ISVe |
+-----------------+
| ISVeOldProjects |
| ISVeProjects |
+-----------------+
2 rows in set (0.03 sec)

mysql> select count(*) from ISVeOldProjects;
+----------+
| count(*) |
+----------+
| 2880 |
+----------+
1 row in set (0.04 sec)

mysql> select count(*) from ISVeProjects;
+----------+
| count(*) |
+----------+
| 4967 |
+----------+
1 row in set (0.33 sec)

mysql> delete from ISVeOldProjects;
ERROR 1142 (42000): DELETE command denied to user 'SOUSER'@'vpn-192-155-222-19.SFBay.Sun.COM' for table 'ISVeOldProjects'

mysql> delete from ISVeProjects;
ERROR 1142 (42000): DELETE command denied to user 'SOUSER'@'vpn-192-155-222-19.SFBay.Sun.COM' for table 'ISVeProjects'

mysql> quit
Bye


Tasks to be Performed at the Client side (End-User's Workstation)


The StarOffice and OpenOffice.org suites of applications can access the MySQL Server using JDBC or native drivers.

MySQL Connector/J is a platform independent JDBC Type 4 driver that is developed specifically to connect to a MySQL database. Using Connector/J, it is possible to connect to almost any version of MySQL Server from any version of StarOffice or OpenOffice.org

Sun|MySQL recently developed a native MySQL driver to facilitate connecting from the StarOffice / OpenOffice.org suites of applications to a MySQL database. The new native driver is called MySQL Connector/OpenOffice.org. However, the current version of the MySQL Connector for OO.o is compatible only with OpenOffice.org 3.1 / StarOffice 9.1 or newer, and it can connect only to MySQL Server 5.1 or later. This native connector is supposed to be faster than the Java connector.

We will explore both MySQL connectors in this section.

Note:

As an end user, you need not be concerned with the internal workings of these MySQL connectors. You only need to install and configure the drivers so that the *Office applications can connect to the MySQL database seamlessly.

I. Connector/J approach


  1. Installation steps for MySQL Connector/J

    Using the following navigation, find the location of the JRE that is being used by StarOffice / OpenOffice.org


    • Launch StarOffice / OpenOffice.org
    • Tools Menu -> Options
    • In the 'Options' window, StarOffice / OpenOffice.org -> Java


    Here is a sample screen capture from a Mac running StarOffice 9.





    In the above example, /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home is the location of the JRE. Here onwards, this location will be referred as JRE_HOME.


    1. Download the connector from the following web page:
              http://dev.mysql.com/downloads/connector/j/

      As of this writing, 5.1.10 is the current version for Connector/J

    2. Extract the driver and the rest of the files from the compressed [downloaded] archive

      eg.,

      % gunzip -c mysql-connector-java-5.1.10.tar.gz | tar -xvf -



    3. Locate the jar file that contains the driver --- mysql-connector-java-5.1.10-bin.jar, and copy it into the <JRE_HOME>/lib/ext directory with 'root' privileges.

      eg.,

      % sudo cp mysql-connector-java-5.1.10-bin.jar /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext

      % ls -l /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext/*connector*jar
      /System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/lib/ext/mysql-connector-java-5.1.10-bin.jar



    4. Restart StarOffice / OpenOffice.org


    This concludes the installation of MySQL Connector/J.


    2. Configuration steps for Connector/J


    1. Launch StarOffice / OpenOffice.org

    2. In the Welcome screen, click on "Database". A database wizard pops up to help us create, open or connect to an existing database from StarOffice / OpenOffice.org.

    3. Since our main interest in this exercise is only to connect to an existing database, click on the "Connect to an existing database" radio button and select "MySQL" from the drop-down menu underneath the selected radio button.





      Click on "Next >>" button

    4. In the next screen, select JDBC by clicking on "Connect using JDBC (Java Database Connectivity)" radio button





      Click on "Next >>" button

    5. In the "Set up connection to a MySQL database using JDBC" screen, provide the name of the database, the hostname or IP address of the MySQL database server (server URL) that you want to connect to, and the port# on which the MySQL server is listening for new database connections.

      The MySQL JDBC driver class text field will automatically be filled with the string com.mysql.jdbc.Driver. Leave that string intact, and click on the "Test Class" button to make sure that the class can be loaded without issues. Unless the driver class loads successfully, you will not be able to connect to the MySQL database. In case of an unsuccessful class load, double-check the installation steps for MySQL Connector/J.





      Click on "Next >>" button

      Note:

      In the above screenshot, notice that the "Name of the database" was filled with ISVe?zeroDateTimeBehavior=convertToNull (It is not completely visible in the above screen capture, but you just have to believe me). In this example, ISVe is the database name and zeroDateTimeBehavior is the configuration property which was set to a value of convertToNull. Without this configuration property, Connector/J throws an exception when it encounters date values such as 0000-00-00. In such cases, the error message will be something similar to java.sql.SQLException: Value '0000-00-00' can not be represented as java.sql.Date.

      Configuration properties define how Connector/J makes a connection to a MySQL server. The list of Connector/J configuration properties is documented on the following web page:

              http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html

      If you have more than one configuration property, you can specify all of them in the "Name of the database" field. The syntax is:
          <MySQL_DB_Name>?<Config_Property1=Value>&<Config_Property2=Value>&..&<Config_Propertyn=Value>

    6. Provide the database user name and password details in the "Set up the user authentication" screen. Check the "Password required" box if a password is set up for the database user.





      Click the "Test Connection" button to verify that a connection to the MySQL database can be made with the credentials provided in this window.

      Click the "Next >>" button.

    7. In the final screen, simply accept the default values and click the 'Finish' button.

      "Yes, register the database for me" and "Open the database for edition" are the defaults selected in this screen.

      When you click the 'Finish' button, you will be prompted to provide a name for the database and save it as a file on your local machine. The saved file contains information about the database, including the queries, reports and forms linked to the MySQL database. The actual data remains in the MySQL database, so you need not worry about the size of the file being saved; it will be small.





    8. Finally, the Database area of the Base main window appears as shown in the following screen capture.





      Notice the RDBMS name, type of connectivity, MySQL database name (along with the configuration properties), database user name, and database server hostname at the bottom of the window.

      You will be able to query the database, create new forms and reports, etc., from this window. Discussion of those topics is beyond the scope of this blog post, so we will stop here.
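Programmatically, the "database name plus configuration properties" string from step 5 is just the database name followed by '?' and '&'-joined key=value pairs. Here is a small illustrative sketch in Java (the class and method names are my own, purely for illustration, not part of Connector/J):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class DbNameField {
    // Builds "<MySQL_DB_Name>?<prop1=value1>&<prop2=value2>&..."
    static String build(String dbName, Map<String, String> props) {
        if (props.isEmpty()) {
            return dbName;
        }
        String query = props.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return dbName + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("zeroDateTimeBehavior", "convertToNull");
        // Prints: ISVe?zeroDateTimeBehavior=convertToNull
        System.out.println(build("ISVe", props));
    }
}
```

The same string, prefixed with jdbc:mysql://host:port/, is what Connector/J ultimately receives as its JDBC URL.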


II Connector/OpenOffice.org approach


MySQL Connector for OpenOffice.org is a MySQL driver for the OpenOffice.org suite of applications. Even though it appears to be a native driver, MySQL Connector/OpenOffice.org has no implementation of the MySQL client/server protocol; it is in reality a proxy on top of MySQL Connector for C++, aka MySQL Connector/C++.

Unlike MySQL Connector/J, Connector/OpenOffice.org has no dependency on a JRE, and it can easily be installed using the OpenOffice.org Extension Manager. Due to the underlying native code, Connector/OpenOffice.org may outperform Connector/J.

1. Installation steps for MySQL Connector/OpenOffice.org

Before installing the connector, make sure that you have OpenOffice.org 3.1 [or later] -OR- StarOffice 9.1 [or later] installed, and that the MySQL server hosting the database is at least version 5.1. If any of these requirements is not met, skip this entire section and check the Connector/J approach in section I for instructions that may work with your current versions of StarOffice / OpenOffice.org and MySQL server.


  1. Download the connector for your platform from the following location:
            http://extensions.services.openoffice.org/project/mysql_connector

  2. Launch StarOffice / OpenOffice.org

  3. Bring up the "Extension Manager" by clicking on Tools Menu -> Extension Manager ...

  4. Click the "Add" button, locate the connector that you downloaded in step #1, and click the "Open" button. The name of the connector will be similar to mysql-connector-ooo-....oxt.

  5. Choose the appropriate response to the question "For whom do you want to install the extension?". In this example, I chose the option "Only for me".

  6. Read the "Extension Software License Agreement" and accept the agreement to install Connector/OpenOffice.org as an extension to StarOffice / OpenOffice.org.





  7. Restart StarOffice / OpenOffice.org to complete the installation.


2. Configuration steps for MySQL Connector/OpenOffice.org


  1. Launch StarOffice / OpenOffice.org

  2. In the Welcome screen, click on "Database". A database wizard pops up to help us create, open or connect to an existing database from StarOffice / OpenOffice.org.

  3. Since our main interest in this exercise is simply to connect to an existing database, select the "Connect to an existing database" radio button and choose "MySQL" from the drop-down menu underneath it.





    Click the "Next >>" button.

  4. In the next screen, select the "Connect native" radio button.





    Click the "Next >>" button.

  5. In the "Set up connection to a MySQL database" screen, provide the name of the database, the hostname or IP address of the MySQL database server (server URL) that you want to connect to, and the port on which the MySQL server is listening for new database connections. If the MySQL server is running on the same machine as the StarOffice / OpenOffice.org application, you can provide the location of the socket in the "Socket" field; otherwise, leave it blank.





    Click the "Next >>" button.

  6. Provide the database user name and password details in the "Set up the user authentication" screen. Check the "Password required" box if a password is set up for the database user.

    Click the "Test Connection" button to verify that a connection to the MySQL database can be made with the credentials provided in this window.





    Click the "Next >>" button.

  7. In the final screen, simply accept the default values and click the 'Finish' button.

    "Yes, register the database for me" and "Open the database for edition" are the defaults selected in this screen.

    When you click the 'Finish' button, you will be prompted to provide a name for the database and save it as a file on your local machine. The saved file contains information about the database, including the queries, reports and forms linked to the MySQL database. The actual data remains in the MySQL database, so you need not worry about the size of the file being saved; it will be small.





  8. Finally, the Database area of the Base main window appears as shown in the following screen capture.





    Notice the RDBMS name, type of connectivity, MySQL database name (along with the configuration properties), database user name, and database server hostname at the bottom of the window.

    You will be able to query the database, create new forms and reports, etc., from this window. Discussion of those topics is beyond the scope of this blog post, so we will stop here.


That is all there is to installing and configuring the MySQL connectors for the *Office suite of applications. Now enjoy the flexibility of fetching data from your favorite office productivity software.

(Original blog post is at the following location:
http://blogs.sun.com/mandalika/entry/setting_up_staroffice_openoffice_org
)

Sunday, 29 November 2009

PeopleSoft North American Payroll on Sun Solaris with F5100 Flash Array : A blog Reprise

Posted on 22:59 by Unknown
(Copied from my other blog at blogs.sun.com. Original post is at: http://blogs.sun.com/mandalika/entry/peoplesoft_north_american_payroll_on)

During the "Sun day" keynote at OOW 09, John Fowler stated that we are #1 in PeopleSoft North American Payroll performance. Later, Vince Carbone from our Performance Technologies group compared our benchmark numbers with HP's and IBM's in BestPerf's group blog at Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance. Meanwhile, Joerg Moellenkamp had been clarifying a few things in his blog at App benchmarks, incorrect conclusions and the Sun Storage F5100. Interestingly, all of this happened while we had no concrete evidence in our hands to show the outside world. We got our benchmark results validated right before Oracle OpenWorld, which gave us the ability to speak about them publicly [and we used that to the extent we could]. However, the Oracle folks were busy with their scheduled tasks for OOW 09 and could not work on the benchmark results white paper until now. Finally, the white paper with the NA Payroll benchmark results is available on the Oracle Applications benchmark web site. Here is the URL:

        PeopleSoft Enterprise Payroll 9.0 using Oracle for Solaris on a Sun SPARC Enterprise M4000

Once again, the summary of results is shown below, but in a slightly different format. These numbers were extracted from the very first page of the benchmark results white papers, where PeopleSoft usually highlights the significance of the results and the actual numbers they are interested in. The results are sorted by hourly throughput (payments/hour) in descending order; the goal is to achieve as much hourly throughput as possible. Since the following table also includes one 16-stream result, exercise caution when comparing 8-stream results with 16-stream results. In general, 16 parallel job streams are expected to yield better throughput than 8 parallel job streams, so comparing a 16-stream number with an 8-stream number is not an exact apples-to-apples comparison. It is more like comparing an apple to another apple that is half its size. Click on the link underneath each hourly throughput value to open the corresponding benchmark result.

Oracle PeopleSoft North American Payroll 9.0 - Number of employees: 240,000 & Number of payments: 360,000

Sun (Solaris 10 5/09)
    1 x Sun SPARC Enterprise M4000 with 4 x 2.53 GHz SPARC64-VII Quad-Core processors and 32 GB memory
    1 x Sun Storage F5100 Flash Array with 40 Flash Modules for data, indexes
    1 x Sun Storage J4200 Array for redo logs
    Job streams: 8 | Elapsed time: 67.85 min | Hourly throughput: 318,349 payments/hour

HP (HP-UX)
    1 x HP Integrity rx6600 with 4 x 1.6 GHz Intel Itanium2 9000 Dual-Core processors and 32 GB memory
    1 x HP StorageWorks EVA 8100
    Job streams: 16 | Elapsed time: 68.07 min | Hourly throughput: 317,320 payments/hour

HP (HP-UX)
    1 x HP Integrity rx6600 with 4 x 1.6 GHz Intel Itanium2 9000 Dual-Core processors and 32 GB memory
    1 x HP StorageWorks EVA 8100
    Job streams: 8 | Elapsed time: 89.77 min | Hourly throughput: 240,615* payments/hour

IBM (z/OS)
    1 x IBM zSeries 990 model 2084-B16 with 313 Feature with 6 x IBM z990 Gen1 processors (populated: 13, used: 6) and 32 GB memory
    1 x IBM TotalStorage DS8300 with dual 4-way processors
    Job streams: 8 | Elapsed time: 91.7 min | Hourly throughput: 235,551 payments/hour
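The hourly throughput column follows directly from the fixed payment count and the elapsed time (payments / elapsed minutes * 60). A quick sanity check of the published numbers, as an illustrative Java snippet (class and method names are mine):

```java
public class ThroughputCheck {
    // payments/hour = payments / elapsed_minutes * 60, rounded to the nearest whole payment
    static long paymentsPerHour(long payments, double elapsedMinutes) {
        return Math.round(payments / elapsedMinutes * 60);
    }

    public static void main(String[] args) {
        // Sun M4000 run: 360,000 payments in 67.85 minutes
        System.out.println(paymentsPerHour(360_000, 67.85)); // 318349
        // HP 16-stream run: 360,000 payments in 68.07 minutes
        System.out.println(paymentsPerHour(360_000, 68.07)); // 317320
    }
}
```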

This is all public information -- so, feel free to draw your own conclusions. *At the time of writing, HP's 8-stream result had been pulled from the Oracle Applications benchmark web site for reasons unknown to me. Hopefully it will show up again on the same web site soon. If it does not re-appear even after a month, we can probably assume that the result has been withdrawn.

As these benchmark results have already been discussed by different people in different blogs, I have little to add. The only thing I want to highlight is that this particular workload is moderately CPU-intensive but very I/O-bound; hence, the better the I/O subsystem, the better the performance. Vince provided insight on why the Sun Storage F5100 is a good option for this workload, while Jignesh Shah from our ISV Engineering organization focused on the performance of this benchmark workload with the F20 PCIe card.

Also, when dealing with NA Payroll, good out-of-the-box performance is very unlikely; a lot of database tuning is required as well. As the data sets are very large, we partitioned the data in some of the hottest objects, which showed good improvement in query response times. So if you are a PeopleSoft customer running the Payroll application against millions of rows of non-partitioned data, consider partitioning the data. We are currently working on a best-practices blueprint document for PeopleSoft North American Payroll that presents a variety of tuning tips like these, in addition to the recommended practices for the F5100 flash array and the flash accelerator F20 PCIe card. Stay tuned ..

Related Blog Post:
  • PeopleSoft HRMS 8.9 Self-Service Benchmark on M3000 & T5120 Servers

Monday, 9 November 2009

Java Code to Convert TWiki URLs to HTML Links

Posted on 01:50 by Unknown
A few days ago I was looking for code or a tool capable of converting TWiki URLs to equivalent HTML URLs. (For the uninitiated: TWiki supports a wide variety of URL formats in addition to the standard HTML format.) I spent almost three hours searching the web and trying out different code, Java classes, etc. In the end, none of what I found could convert all of the TWiki URL formats I needed to handle [in HTML]. So I gave up looking for ready-made code and quickly wrote some Java code to solve the problem at hand.



Now that I have the code, I thought publishing it on my blog might help other people looking for something similar. So here it is. Download the source code from the following location:



        TwikiToHTML.java



The code assumes that the TWiki system is deployed at http://www.mandalika.com (a fictitious web site), and it is capable of handling the following TWiki URL formats:





[[http://twiki.etclabs.com/pub/MDE/IsveSeeameRomanIvanovMeraJanuaryTechEval/MeraJanuaryScoping.pdf][MeraJanuaryScoping]]

[[Main.JNetX#April_22nd_2008_Whats_new_at_Sun][report]]

[[Main.RomanIvanovLDomsCheatSheet][Executiontrackingpage]]

[[MDE.IsveBravoJeffTaylorProgressApamaSolarisx64Pott][Link]]

[[IsveSeeameOrgadKimchiIGTvLabProcedures][docs]]

[[http://scweb.france.com/~fparient/pub/europa08solaris.odp][solarisprezo]]<br>[[http://scweb.france.com/~fparient/pub/europa08fx.pdf][fxbenchmark]]

[[OmalleyDataProject]]

[[http://twiki.etclabs.com/bin/view/Main/MySQL64NvPlan][Plan]],[[http://twiki.etclabs.com/bin/view/Main/IntegrateSFW][Detail]]

MDE.IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization

http://sidc.israel.com/~mk147459/Reports/ana.html

SofCheck

<a href="http://twiki.org">twiki</a>





Sample inputs and outputs:





% java TwikiToHTML



TWiki URL: NotSoPl@inY0uKnow

HTTP equivalent:



TWiki URL: ItIsJustplain

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/ItIsJustplain" target="_new">ItIsJustplain</a>



TWiki URL: GiriK@lyanaChakravarth!

HTTP equivalent:



TWiki URL: &ersand

HTTP equivalent:



TWiki URL: Harisha

HTTP equivalent:



TWiki URL: Janaki Lakshmi

HTTP equivalent:



TWiki URL: JanakiLakshmi

HTTP equivalent:



TWiki URL: [[OmalleyDataProject]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/OmalleyDataProject" target="_new">OmalleyDataProject</a>



TWiki URL: [[Giri][Mandalika]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/Giri" target="_new">Mandalika</a>



TWiki URL: [[Giri][Mandalika]], [[Kalyana][Chakravarthy]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/Giri" target="_new">Mandalika</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/Kalyana" target="_new">Chakravarthy</a>



TWiki URL: [[Giri][Mandalika]], [[Kalyana][Chakravarthy]], [[IsveSeeameOrgadKimchiIGTvLabProcedures][docs]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/Giri" target="_new">Mandalika</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/Kalyana" target="_new">Chakravarthy</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/IsveSeeameOrgadKimchiIGTvLabProcedures" target="_new">docs</a>



TWiki URL: [[Giri][Mandalika]], [[Kalyana][Chakravarthy]], [[IsveSeeameOrgadKimchiIGTvLabProcedures][docs]], [[OmalleyDataProject][Omalley]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Downloads/Giri" target="_new">Mandalika</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/Kalyana" target="_new">Chakravarthy</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/IsveSeeameOrgadKimchiIGTvLabProcedures" target="_new">docs</a><BR><a href="http://www.mandalika.com/bin/view/Downloads/OmalleyDataProject" target="_new">Omalley</a>



TWiki URL: http://sidc.israel.etclabs.com/~mk147459/Reports/ana.html

HTTP equivalent: <a href="http://sidc.israel.etclabs.com/~mk147459/Reports/ana.html" target="_new">ana.html</a>



TWiki URL: http://sidc.israel.etclabs.com/~mk147459/Reports/ana.html,http://technopark02.blogspot.com

HTTP equivalent: <a href="http://sidc.israel.etclabs.com/~mk147459/Reports/ana.html" target="_new">ana.html</a><BR><a href="http://technopark02.blogspot.com" target="_new">technopark02.blogspot.com</a>



TWiki URL: [[http://www.mandalika.com/pub/MDE/IsveSeeameRomanIvanovMeraJanuaryTechEval/MeraJanuaryScoping.pdf][MeraJanuaryScoping]]

HTTP equivalent: <a href="http://www.mandalika.com/pub/MDE/IsveSeeameRomanIvanovMeraJanuaryTechEval/MeraJanuaryScoping.pdf" target="_new">MeraJanuaryScoping</a>



TWiki URL: [[http://www.mandalika.com/bin/view/Main/RomanIvanovRStyleSoftlabRSBankV6][execution details]] <BR> [[http://www.mandalika.com/bin/view/Main/OracleRAConLDom][OracleRAConLDom]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/RomanIvanovRStyleSoftlabRSBankV6" target="_new">execution details</a><BR><a href="http://www.mandalika.com/bin/view/Main/OracleRAConLDom" target="_new">OracleRAConLDom</a>



TWiki URL: [[http://www.mandalika.com/pub/ISVeProjects/ISVeProject20080213161851/Project20080213161851scoping.odt][scoping document]] , [[http://www.mandalika.com/pub/MDE/IsveSeeameRomanIvanovMeraJanuaryTechEval/MeraJanuaryScoping.pdf][MeraJanuaryScoping]] <BR> [[http://sidc.israel.etclabs.com/~ok134283/techeval/snooggie.html][techeval]] , [[http://sidc.israel.etclabs.com/~ok134283/techeval/servision.html][techeval]]

HTTP equivalent: <a href="http://www.mandalika.com/pub/ISVeProjects/ISVeProject20080213161851/Project20080213161851scoping.odt" target="_new">scoping document</a><BR><a href="http://www.mandalika.com/pub/MDE/IsveSeeameRomanIvanovMeraJanuaryTechEval/MeraJanuaryScoping.pdf" target="_new">MeraJanuaryScoping</a><BR><a href="http://sidc.israel.etclabs.com/~ok134283/techeval/snooggie.html" target="_new">techeval</a><BR><a href="http://sidc.israel.etclabs.com/~ok134283/techeval/servision.html" target="_new">techeval</a>



TWiki URL: [[Main.RomanIvanovLDomsCheatSheet][Execution tracking page]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/RomanIvanovLDomsCheatSheet" target="_new">Execution tracking page</a>



TWiki URL: [[MDE.IsveBravoJeffTaylorProgressApamaSolarisx64Pott][Link]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/MDE/IsveBravoJeffTaylorProgressApamaSolarisx64Pott" target="_new">Link</a>



TWiki URL: [[Main.JNetX#April_22nd_2008_Whats_new_at_Sun][report]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/JNetX#April_22nd_2008_Whats_new_at_Sun" target="_new">report</a>



TWiki URL: [[Main.RomanIvanovLDomsCheatSheet][Execution tracking page]], [[MDE.IsveBravoJeffTaylorProgressApamaSolarisx64Pott][Link]] <BR> [[Main.JNetX#April_22nd_2008_Whats_new_at_Sun][report]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/RomanIvanovLDomsCheatSheet" target="_new">Execution tracking page</a><BR><a href="http://www.mandalika.com/bin/view/MDE/IsveBravoJeffTaylorProgressApamaSolarisx64Pott" target="_new">Link</a><BR><a href="http://www.mandalika.com/bin/view/Main/JNetX#April_22nd_2008_Whats_new_at_Sun" target="_new">report</a>



TWiki URL: [[http://www.mandalika.com/bin/view/MDE/IsveKylinPDSChinaVolumeISVWorldonline][World-Online]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/MDE/IsveKylinPDSChinaVolumeISVWorldonline" target="_new">World-Online</a>



TWiki URL: [[ISVeProjects.IsveIndividualSummaryCarylTakvorian][Caryl Takvorian]] Tom Duell

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/ISVeProjects/IsveIndividualSummaryCarylTakvorian" target="_new">Caryl Takvorian</a>



TWiki URL: [[http://www.mandalika.com/bin/view/MDE/IsveGANEIndividualSummaryAndrewWalton][Andrew Walton]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/MDE/IsveGANEIndividualSummaryAndrewWalton" target="_new">Andrew Walton</a>



TWiki URL: [[http://etclabswebcollab.east.etclabs.com/gm/folder-1.11.1890618][Project Folder]]

HTTP equivalent: <a href="http://etclabswebcollab.east.etclabs.com/gm/folder-1.11.1890618" target="_new">Project Folder</a>



TWiki URL: [[http://etclabswebcollab.east.etclabs.com/gm/document-1.9.3262994/SUN%25E4%25B8%25BB%25E6%259C%25BAOCS%25E6%2580%25A7%25E8%2583%25BD%25E6%25B5%258B%25E8%25AF%2595%25E6%258A%25A5%25E5%2591%258A.doc][Cust. Report]], [[http://etclabswebcollab.east.etclabs.com/gm/document-1.9.3270163/ocs_perftune_t2000.odt][ISVE Report]]

HTTP equivalent: <a href="http://etclabswebcollab.east.etclabs.com/gm/document-1.9.3262994/SUN%25E4%25B8%25BB%25E6%259C%25BAOCS%25E6%2580%25A7%25E8%2583%25BD%25E6%25B5%258B%25E8%25AF%2595%25E6%258A%25A5%25E5%2591%258A.doc" target="_new">Cust. Report</a><BR><a href="http://etclabswebcollab.east.etclabs.com/gm/document-1.9.3270163/ocs_perftune_t2000.odt" target="_new">ISVE Report</a>



TWiki URL: MDE.IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/MDE/IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization" target="_new">IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization</a>



TWiki URL: [[Main.RomanIvanovLDomsCheatSheet]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/RomanIvanovLDomsCheatSheet" target="_new">RomanIvanovLDomsCheatSheet</a>



TWiki URL: MDE.PostgreSQLAutoTune <BR> [[Main.RomanIvanovLDomsCheatSheet]] , MDE.IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/Main/RomanIvanovLDomsCheatSheet" target="_new">RomanIvanovLDomsCheatSheet</a><BR><a href="http://www.mandalika.com/bin/view/MDE/PostgreSQLAutoTune" target="_new">PostgreSQLAutoTune</a><BR><a href="http://www.mandalika.com/bin/view/MDE/IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization" target="_new">IsveAlphaBogdanVasiliuSynopsysVCSPerformanceOptimization</a>



TWiki URL: [[http://www.mandalika.com/bin/view/ISVeProjects/IsveIndividualSummaryMohammedYousuf][Mohammed Yousuf]], Thiagarajan Chandrasekaran

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/ISVeProjects/IsveIndividualSummaryMohammedYousuf" target="_new">Mohammed Yousuf</a>



TWiki URL: MDE.IsveDabuboirProsDamienCookeMyStaffMyStaffPostgreSQLAdoption [[http://twiki.etclabs/pub/MDE/IsveDabuboirProsDamienCookeMyStaffMyStaffPostgreSQLAdoption/MyStaff.html][Project Status]]

HTTP equivalent: <a href="http://twiki.etclabs/pub/MDE/IsveDabuboirProsDamienCookeMyStaffMyStaffPostgreSQLAdoption/MyStaff.html" target="_new">Project Status</a><BR><a href="http://www.mandalika.com/bin/view/MDE/IsveDabuboirProsDamienCookeMyStaffMyStaffPostgreSQLAdoption" target="_new">IsveDabuboirProsDamienCookeMyStaffMyStaffPostgreSQLAdoption</a>



TWiki URL: [[http://www.xxx.yyy]], <a href="http://www.xyz.com">xyz</a>, http://www.yahoo.com , <a href="www.giril.com">giri</a> , [[http://www.harisha.com][harisha]], <a href="www.etclabs.com">etclabs</a>

HTTP equivalent: <a href="http://www.xyz.com">xyz</a><BR><a href="www.giril.com">giri</a><BR><a href="www.etclabs.com">etclabs</a><BR><a href="http://www.xxx.yyy" target="_new">www.xxx.yyy<BR><a href="http://www.harisha.com" target="_new">harisha</a><BR><a href="http://www.yahoo.com" target="_new">www.yahoo.com</a>



TWiki URL: New link YousufCertificationofOracleRAConSolaris10u6Containers , old link http://blogs.etclabs.com/rac

HTTP equivalent: <a href="http://blogs.etclabs.com/rac" target="_new">rac</a>



TWiki URL: attached email.

HTTP equivalent:



TWiki URL: [[http://www.mandalika.com/bin/view/MDE/Caucho Caucho]] [[http://www.mandalika.com/bin/view/MDE/ISVeProject20070812164315 20070812164315]]

HTTP equivalent: <a href="http://www.mandalika.com/bin/view/MDE/Caucho" target="_new">Caucho</a><BR><a href="http://www.mandalika.com/bin/view/MDE/ISVeProject20070812164315" target="_new">20070812164315</a>



TWiki URL: http://download.sap.com/download.epd?context=40E2D9D5E00EEF7CB82813BD3F95797BAEB80527B36EC026E03386D659E4DE48

HTTP equivalent: <a href="http://download.sap.com/download.epd?context=40E2D9D5E00EEF7CB82813BD3F95797BAEB80527B36EC026E03386D659E4DE48" target="_new">download.epd?context=40E2D9D5E00E ... BAEB80527B36EC026E03386D659E4DE48</a>







Disclaimer:

This code is poorly documented, not perfect, and most importantly may not even meet minimum Java coding standards. However, it does the intended job with minor modifications. Feel free to use, modify, reproduce or redistribute it - but please do not come back complaining. I have no intention of fixing or improving this code.