Creation Zone


Saturday, 28 December 2013

Blast from the Past : The Weekend Playlist #3

Posted on 00:36 by Unknown

The 80s, contd.

The 80s witnessed the rise of fine talent - so, it is only fitting to dedicate another complete playlist to the 80s. Here it is. Enjoy. Earlier playlists can be accessed from the following locations:

    Blast from the Past : The Weekend Playlist #2 (80s)
    Blast from the Past : The Weekend Playlist #1 (50s, 60s and 70s)

Audio-Visual material courtesy: YouTube

1. Aerosmith - Dude (Looks Like a Lady) (1987)

Featured in Robin Williams' Mrs. Doubtfire.

2. Kool & the Gang - Celebration (1980)

The San Francisco Bay Area's Star 101.3 audience must be hating this one.

3. Wham - Wake Me Up Before You Go-Go (1984)

That was George Michael before his solo career.

4. Toto - Rosanna (1982)

Grammy winner

5. Club Nouveau - Lean on Me (1987)

Cover version. Original by Bill Withers in 1972.

6. Tom Petty - Free Fallin' (1989)

Enjoy

7. Kenny Loggins - Footloose (1984)

Of course, it was featured in Kevin Bacon's Footloose.

8. Simple Minds - Don't You (Forget About Me) (1985)

This is my brother Vishu's pick.

9. Fun Boy Three - Our Lips Are Sealed (1983)

Another cover. Original by The Go-Go's just two years earlier.

10. Men without Hats - The Safety Dance (1983)

S s s s A a a a F f f f E e e e T t t t Y y y y

Posted in 80s music playlist

Saturday, 21 December 2013

Measuring Network Bandwidth Using iperf

Posted on 01:04 by Unknown

iperf is a simple, open source tool for measuring network bandwidth. It can test TCP or UDP throughput. Tools like iperf are useful for quickly checking the performance of a network by comparing the achieved bandwidth against expectations. The example in this blog post is from a Solaris system, but the instructions and testing methodology are applicable to all supported platforms, including Linux.

Download the source code from iperf's home page, and build the iperf binary. Those running Solaris 10 or later can download the pre-built binary (file size: 245K) from this location to give it a quick try (right click and "Save Link As .." or similar option).

Testing methodology:

iperf's network performance measurements are based on the client-server communication model - hence it requires establishing both a server and a client. The same iperf binary can be used to run the process in server and client modes.

  1. Start iperf in server mode
    iperf -s -i <interval>

    Option -s or --server starts the process in server mode. -i or --interval is the sampling interval in seconds.

  2. Start iperf in client mode, and test the network connection between the client and the server with arbitrary data transfers.


    iperf -n <bytes> -i <interval> -c <ServerIP>

    Option -c or --client starts the process in client mode. Option -n or --bytes specifies the amount of data to transmit in bytes, KB (use suffix K) or MB (use suffix M). -i or --interval is the sampling interval in seconds. The last option is the IP address or the hostname of the server to connect to. By default, the client connects to the server using TCP. -u or --udp switches to UDP.

  3. Check the network link speed on server and client, and compare the throughput achieved.

Check out the man page for the full list of options supported by iperf in client and server modes.

Here is a simple demonstration.

On server node:


iperfserv% dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full igb0

iperfserv% ifconfig net0 | grep inet
inet 10.129.193.63 netmask ffffff00 broadcast 10.129.193.255

iperfserv% ./iperf -v
iperf version 3.0-BETA5 (28 March 2013)
SunOS iperfserv 5.11 11.1 sun4v sparc sun4v


iperfserv% ./iperf -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

On client node:


client% dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full igb0

client% ifconfig net0 | grep inet
inet 10.129.193.151 netmask ffffff00 broadcast 10.129.193.255

client% ./iperf -n 1024M -i 1 -c 10.129.193.63
Connecting to host 10.129.193.63, port 5201
[ 4] local 10.129.193.151 port 63507 connected to 10.129.193.63 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.01 sec 105 MBytes 875 Mbits/sec
[ 4] 1.01-2.02 sec 112 MBytes 934 Mbits/sec
[ 4] 2.02-3.00 sec 110 MBytes 934 Mbits/sec
[...]
[ 4] 8.02-9.01 sec 110 MBytes 933 Mbits/sec
[ 4] 9.01-9.27 sec 30.0 MBytes 934 Mbits/sec
[ ID] Interval Transfer Bandwidth
Sent
[ 4] 0.00-9.27 sec 1.00 GBytes 927 Mbits/sec
Received
[ 4] 0.00-9.27 sec 1.00 GBytes 927 Mbits/sec

iperf Done.

At the same time, similar messages are written to stdout on the server node.


iperfserv% ./iperf -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.129.193.151, port 33457
[ 5] local 10.129.193.63 port 5201 connected to 10.129.193.151 port 63507
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 104 MBytes 874 Mbits/sec
[ 5] 1.00-2.00 sec 111 MBytes 934 Mbits/sec
[ 5] 2.00-3.00 sec 111 MBytes 934 Mbits/sec
[...]
[ ID] Interval Transfer Bandwidth
Sent
[ 5] 0.00-9.28 sec 1.00 GBytes 927 Mbits/sec
Received
[ 5] 0.00-9.28 sec 1.00 GBytes 927 Mbits/sec
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

The link speed is specified in Mbps (megabits per second). In the above example, the network link is operating at 1000 Mbps, and the achieved bandwidth is 927 Mbps, which is 92.7% of the advertised bandwidth.

Notes:

  • It is not necessary to execute iperf in client and server modes as root or a privileged user
  • In server mode, iperf uses port 5201 by default. It can be changed to something else using the -p or --port option
  • Restart the iperf server after each client test to get reliable, consistent results
  • iperf is just one of many ways to measure network bandwidth. There are other tools such as uperf, ttcp, netperf, bwping, udpmon and tcpmon, to name a few. Research and pick the one that best suits your requirements.
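
As a quick extension of the demonstration above, UDP throughput can be measured as well. Here is a hedged sketch (the server IP is reused from the demo; the 500M target rate is arbitrary, and option spellings may vary slightly across iperf versions):

client% ./iperf -u -b 500M -i 1 -c 10.129.193.63

With UDP, also pay attention to the reported packet loss and jitter - a throughput number achieved with heavy packet loss is not very meaningful.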
Posted in bandwidth iperf network solaris

Saturday, 7 December 2013

Blast from the Past : The Weekend Playlist #2

Posted on 03:19 by Unknown

The 80s

The quality of music steadily improved over the decades of the mid-1900s, and eventually peaked in the 80s. Some may disagree, but in my opinion, the 80s were easily one of the best decades for music in the United States. The decade witnessed the emergence of many successful artists who delivered solid hits that are still relevant and part of many pop culture references today. The launch of MTV in 1981 upped the ante to produce interesting videos in an effort to increase global outreach.

In this iteration, let's focus on the decade of the eighties. The following playlist has some random songs from the 80s in no particular order. The previous playlist can be accessed from this location:

    Blast from the Past : The Weekend Playlist #1 (50s, 60s and 70s)

Audio-Visual material courtesy: YouTube

1. INXS - Need You Tonight (1987)

Love the guitar riff.

2. Queen - I Want to Break Free (1984)

Featured in some of the Coke commercials in the mid-2000s.

3. John Farnham - You're the Voice (1986)

Wasn't so popular in the US, I believe. Great song nevertheless.

4. Phil Collins - Another Day in Paradise (1989)

Grammy winner.

5. Journey - Don't Stop Believin' (1981)

Very popular in pop culture.

6. Tears for Fears - Shout (1984)

#1 on the Billboard Hot 100 chart for 3 weeks.

7. Fine Young Cannibals - She Drives Me Crazy (1989)

Enjoy!

8. Mr Mister - Kyrie (1985)

Has 80s feel all over it.

9. Dream Academy - Life In A Northern Town (1985)

Expect to be hit with something unexpected @00:00:52s. Featured in one of the episodes of King of the Hill.

10. Robert Palmer - Addicted To Love (1986)

Who wouldn't like models in uniform fiddling with guitars and musical instruments?

Posted in 80s music playlist

Saturday, 30 November 2013

Things to Consider when Planning the Redo logs for Oracle Database

Posted on 13:51 by Unknown

This is a very basic, generic discussion from a performance point of view. Customers still have to do their due diligence in understanding redo logs and how they work in an Oracle database before finalizing the redo log configuration for their deployments.

  • size them properly
    • the log writer writes to a single redo log file until either it is full or a manual log switch is requested
          Oracle supports multiplexed redo logs for availability, but this behavior of writing to a file until it is full or a log switch happens still holds
    • if the transactions generate a lot of redo before a database commit, consider large sizes in tens of gigabytes for redo logs
    • if not sized properly, it leads to unnecessary log switches, which in turn increase checkpoint activity, resulting in an unnecessary slowdown of database operations
          two redo logs, each at least 5 GB in size, might be a good start. Observe the log switches and checkpoints, and increase (or decrease, though there is no performance benefit) the file size accordingly (see the SQL sketch after this list)

  • do not mix redo logs with the rest of the database or anything else
    • in a normally functioning database, most of the time, the log writer simply writes redo entries sequentially to the redo logs
    • any slowdown in writing the redo data to the logs hurts the performance of the database
    • it is best not to share the disks/volumes on which redo logs are hosted with anything else
          that is, a set of disks/volumes exclusive to redo logs

  • ensure that the underlying disks or I/O medium used to store the redo logs are fast, optimally configured and can sustain the I/O bandwidth needed to write the redo entries to the redo logs
        if those requirements are not met, it could lead to 'log file sync' waits, which will slow down database transactions

  • redo logs on non-volatile flash storage may have performance benefits over traditional hard disk drives
    • check out the blog post Redo logs on F40 PCIe Cards for a related discussion (keywords: 4K block size for redo logs, block alignment)
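
Here is the SQL sketch referenced in the sizing bullet above - a hedged example of checking the hourly log switch rate and then adding larger redo log groups (group numbers, file paths and sizes are illustrative, not prescriptive):

SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') hr, COUNT(*) switches
     FROM v$log_history
     GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
     ORDER BY 1;

SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/redo/redo05.log') SIZE 5G;
SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/redo/redo06.log') SIZE 5G;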
Posted in Oracle Database RDBMS Redo Flash+Storage

Sunday, 17 November 2013

Blast from the Past : The Weekend Playlist #1

Posted on 03:14 by Unknown

Music and Movies - the two powerful forms of entertainment, are very subjective. In general, there is no good or bad music, and there are no good or bad movies. Their success depends on a combination of various factors such as the mood we are in, our biases, memories, brand (eg., Pixar in movies; The Beatles in music, although I never came across any palatable original work from this boy-band group), derivation from other successful work -- remixes (Fatboy Slim), mashups (DJs), mockery (spoof movies, comedy skits), influence/herd mentality (Bieber, Miley on Billboard top singles), celebrity endorsements, timing of their release (cliched superhero summer tent-poles), their competition (someone will emerge as a winner even when the actual content and its presentation are equally weak), marketing push (Kelly Clarkson, The Avengers movie), viral status (Gangnam Style), cult following (Office Space), audience desperation, events that evoke feelings such as happiness, smugness, sympathy (Heath Ledger's death helping Nolan's Batman series) and pity (MJ's "This is it"), luck, the artists, crew and of course, the actual content. In short, if something was "well liked", it does not necessarily mean that it was really "well liked" -- multiple factors surrounding that piece of work must have properly aligned and helped in one way or the other, to make it more successful.

Now that the obvious disclaimer is out of the way, I think I can openly list out some of the stuff that I think is worth listening to. If you don't like or really hate something that you see here, too bad I guess - just deal with it. :-)

To kick start this series, I chose a handful of oldies from the 50s, 60s and 70s (yes, I intend to publish a few more playlists down the line, unfortunately). Here they go, in no particular order. I really like those shorter 2+ minute durations. Audio-Visual material courtesy: YouTube.

1. BJ Thomas - Raindrops Keep Fallin' on My Head (1969)

Featured in the movie, Butch Cassidy and the Sundance Kid.

2. Sonny & Cher - I Got You Babe (1965)

Prominently featured in Groundhog Day.

3. Stealers Wheel - Stuck in the Middle with You (1972)

Featured in the movie Reservoir Dogs.

4. Donovan - Jennifer Juniper (1968)

Featured in one of the episodes of The Simpsons.

5. Barry McGuire - Eve of Destruction (1965)


6. Billy Joe Royal - Down in the Boondocks (1965)


7. Link Wray - Rumble (1958)

Sounds unreal - way ahead of its time. Featured in Tarantino's Pulp Fiction.

8. America - A Horse with No Name (1972)

Earned a bad-song reputation for some of its lines.

9. Ella Fitzgerald & Louis Armstrong - Let's Call the Whole Thing Off (1959)

Featured in one of the episodes of The Simpsons animated series.

10. The Tokens - The Lion Sleeps Tonight (1961)

I guess this one too got a bad rap.

Before concluding: I do not know much about composing music, or playing musical instruments of any kind. Still I attempted composing a few instrumental tracks with the help of software. It's been a fun exercise so far. Listen to those amateur tracks @ icompositions | giri04 webpage.

Posted in music

Tuesday, 15 October 2013

[Script] Breakdown of Oracle SGA into Solaris Locality Groups

Posted on 00:01 by Unknown

Goal: for a given process, find out how the SGA was allocated in different locality groups on a system running Solaris operating system.

Download the shell script, sga_in_lgrp.sh. The script accepts any Oracle database process id as input, and prints out the memory allocated in each locality group.

Usage: ./sga_in_lgrp.sh <pid>

eg.,


# prstat -p 12820

PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
12820 oracle 32G 32G sleep 60 -20 0:00:16 0.0% oracle/2

# ./sga_in_lgrp.sh 12820

Number of Locality Groups (lgrp): 4
------------------------------------

lgroup 1 : 8.56 GB
lgroup 2 : 6.56 GB
lgroup 3 : 6.81 GB
lgroup 4 : 10.07 GB

Total allocated memory: 32.00 GB

For those who want to have a quick look at the source code, here it is.


# cat sga_in_lgrp.sh

#!/bin/bash

# check the argument count
if [ $# -lt 1 ]; then
    echo "usage: ./sga_in_lgrp.sh <oracle pid>"
    exit 1
fi

# find the number of locality groups
lgrp_count=$(kstat -l lgrp | tail -1 | awk -F':' '{ print $2 }')
# printf is used here - bash's echo does not interpret \n without -e
printf "\nNumber of Locality Groups (lgrp): %s\n" "$lgrp_count"
printf -- "------------------------------------\n\n"

# save the ISM segment mappings reported by pmap, sorted by locality group
pmap -sL $1 | grep ism | sort -k5 > /tmp/tmp_pmap_$1

# calculate the total amount of memory allocated in each lgroup
for i in `seq 1 $lgrp_count`
do
    echo -n "lgroup $i : "
    grep "$i \[" /tmp/tmp_pmap_$1 | awk '{ print $2 }' | sed 's/K//g' | \
        awk '{ sum+=$1 } END { printf ("%6.2f GB\n", sum/(1024*1024)) }'
done

echo
echo -n "Total allocated memory: "
awk '{ print $2 }' /tmp/tmp_pmap_$1 | sed 's/K//g' | \
    awk '{ sum+=$1 } END { printf ("%6.2f GB\n\n", sum/(1024*1024)) }'

rm /tmp/tmp_pmap_$1

Like many things in life, there will always be a better or simpler way to achieve this. If you find one, do not fret over this approach. Please share, if possible.

Posted in breakdown database groups locality oracle pmap sga solaris

Monday, 30 September 2013

Miscellaneous Tips: Solaris, Oracle Database, Java, FMW

Posted on 21:52 by Unknown

[Solaris] Cleanup all IPC resources

Run the following wrapper script with root user privileges.


# iterate over all IPC ids reported by ipcs; for each id, attempt to remove it
# as a shared memory segment (-m), message queue (-q) and semaphore (-s).
# errors from non-matching IPC types and from the ipcs header lines are discarded
for i in `ipcs -a | awk '{ print $2 }'`
do
    ipcrm -m $i 2> /dev/null
    ipcrm -q $i 2> /dev/null
    ipcrm -s $i 2> /dev/null
done

[Java, WebLogic] Find the process id (pid) of a WebLogic managed server instance

Run the following as the user who owns the process, or with root user privileges.

/usr/java/bin/jps -v | grep <WLS_server_name> | awk '{ print $1 }'

I think this tip is applicable on all supported platforms.

eg.,
Finding the pid of a managed server, bi_server1.


# /usr/java/bin/jps -v | grep bi_server1 | awk '{ print $1 }'
18659

# pargs 18659 | grep weblogic.Name
argv[7]: -Dweblogic.Name=bi_server1

[Oracle Database] Make Oracle ignore hints

Set the following hidden parameter.

_optimizer_ignore_hints=TRUE

(in general, Oracle does not recommend playing with hidden parameters. Check with Oracle support when in doubt).
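
For instance, a hedged sketch of setting it at the session level rather than instance-wide (note the underscore parameter must be quoted):

SQL> ALTER SESSION SET "_optimizer_ignore_hints" = TRUE;

Limiting the change to a session keeps the blast radius small while testing.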


[Oracle Database] Data Pump Export in a RAC environment fails with ORA-31693, ORA-31617, ORA-19505, ORA-27037 errors

eg.,


ORA-31693: Table data object "<SCHEMA>"."<TABLE>":"P_1147" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "<FILE>" for write
ORA-19505: failed to identify file "<FILE>"
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

Workaround:

Add CLUSTER=N to the list of existing expdp options.
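
For example, a hedged sketch (userid, directory, dump file and schema names are placeholders):

expdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp schemas=<schema> CLUSTER=N

CLUSTER=N restricts the Data Pump worker processes to the instance the client is connected to, so the dump files need to be accessible only from that node.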


[Solaris, ZFS] Check the current ARC size and its breakdown


kstat -m zfs | grep size           (any user)
echo ::arc | mdb -k | grep size    (root user)
echo ::memstat | mdb -k            (root user)


eg.,

# echo ::arc | mdb -k | grep size
size = 259391 MB
buf_size = 3218 MB
data_size = 249309 MB
other_size = 6863 MB
l2_hdr_size = 0 MB

# kstat -m zfs | grep size
buf_size 3375105344
data_size 261419672320
l2_hdr_size 0
other_size 7197048560
size 271991826224

# echo ::memstat | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 14646326 114424 4%
ZFS File Data 31948806 249600 10%
Anon 24660113 192657 7%
Exec and libs 8912 69 0%
Page cache 126052 984 0%
Free (cachelist) 24517 191 0%
Free (freelist) 263965754 2062232 79%
Total 335380480 2620160

[Fusion Middleware] Disable Fusion Middleware Diagnostic Framework (DFW) Dump Sampling

The Diagnostic Framework (DFW) in FMW 11g environments detects, diagnoses and helps resolve critical errors such as uncaught exceptions, deadlocked threads and out-of-memory errors. It is enabled by default.

Though DFW is supposed to diagnose and fix some of the issues transparently, due to the inevitable bugs in [all kinds of] software and misconfigurations, DFW itself may sometimes become a major issue. For instance, a bug reported very high system CPU time on a SPARC server running FMW 11g. Per the bug description, system CPU utilization spiked every minute, exactly at the 00s mark of the minute; utilization dropped back within a few seconds, but the pattern persisted and the spiky behavior returned within a minute. Another symptom was a sudden drop in available swap space from tens of gigabytes to a few megabytes whenever the CPU spike occurred.

Upon close examination, it was found that DFW was forking tens of jstack processes to collect thread dumps from an equal number of Java processes running in that FMW environment, causing the sudden spike in CPU (each process was busy gathering thread dumps at the same time) and the steep drop in swap space (each jstack process forked a jmap process, and both jstack and jmap consume virtual memory just like any other process). All this happened because DFW thought it found a critical issue, and that issue was not noticed or addressed by anyone, including the administrators (DFW couldn't fix this particular issue on its own) - so, it kept gathering diagnostic data continuously. In this example, DFW did the right thing, but the diagnostic data collection interval was too short - only one minute - which diminished the value of DFW and made it a liability.

In such dire situations, it is probably best to disable the dump sampling feature of the Diagnostic Framework temporarily while the underlying issue is being fixed in that application environment. Dump sampling can be enabled again once the critical issue is fixed and no longer an issue.

Steps to disable Fusion Middleware Diagnostic Framework (DFW) Dump Sampling: (courtesy: Shashidhara Varamballi)

Method (1) Using WLST:

  1. run wlst.sh
  2. connect to the AdminServer
  3. execute command: enableDumpSampling (enable=0, server='<server_name>')
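
A hedged sketch of such a WLST session (the wlst.sh path, admin URL, credentials and server name are placeholders):

$ $MW_HOME/oracle_common/common/bin/wlst.sh
wls:/offline> connect('weblogic', '<password>', 't3://adminhost:7001')
wls:/mydomain/serverConfig> enableDumpSampling(enable=0, server='bi_server1')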

Method (2) Manual editing of config file:

  1. Edit $DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dfw_config.xml
  2. Change the "enabled" attribute from "true" to "false".
    eg.,
    <dumpSampling enabled="false">
  3. Change the "useExternalCommands" attribute from "true" to "false".
    eg.,
    <threadDump useExternalCommands="false"/>
  4. Save the changes

SEE ALSO:

Fusion Middleware Diagnostics weblog


[Solaris 11] Virtual-to-physical link (NIC) mapping

Check the output of /sbin/dladm show-phys (any user). By default, only those physical links that are available on the running system are displayed. Option -P shows the physical device and attributes of all physical links.

eg.,


$ /sbin/dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full ixgbe0
net5 Infiniband down 0 unknown ibp2
net1 Ethernet up 1000 full ixgbe1
net6 Infiniband down 0 unknown ibp3
net4 Ethernet up 10 full usbecm2

$ /sbin/dladm show-phys -P
LINK DEVICE MEDIA FLAGS
net8 ibp1 Infiniband r----
net0 ixgbe0 Ethernet -----
net7 ibp0 Infiniband r----
net3 ixgbe3 Ethernet r----
net5 ibp2 Infiniband -----
net1 ixgbe1 Ethernet -----
net6 ibp3 Infiniband -----
net4 usbecm2 Ethernet -----
net2 vsw0 Ethernet r----
Posted in solaris oracle database fmw weblogic java dfw

Saturday, 31 August 2013

[Oracle Database] Unreliable AWR reports on T5 & Redo logs on F40 PCIe Cards

Posted on 15:19 by Unknown

(1) AWR report shows bogus wait events and times on SPARC T5 servers

Here is a sample from one of the Oracle 11g R2 databases running on a SPARC T5 server with Solaris 11.1 SRU 7.5.

Top 5 Timed Foreground Events

Event                          Waits     Time(s)      Avg wait (ms)  % DB time    Wait Class
latch: cache buffers chains    278,727   812,447,335  2914850        13307324.15  Concurrency
library cache: mutex X         212,595   449,966,330  2116542        7370136.56   Concurrency
buffer busy waits              219,844   349,975,251  1591925        5732352.01   Concurrency
latch: In memory undo latch    25,468    37,496,800   1472310        614171.59    Concurrency
latch free                     2,602     24,998,583   9607449        409459.46    Other

Reason:
Unknown. There is a pending bug 17214885 - Implausible top foreground wait times reported in AWR report.

Tentative workaround:
Disable power management as shown below.


# poweradm set administrative-authority=none

# svcadm disable power
# svcadm enable power

Verify the setting by running poweradm list.

Also disable NUMA I/O object binding by setting the following parameter in /etc/system (requires a system reboot).

set numaio_bind_objects=0

Oracle Solaris 11 added support for NUMA I/O architecture. Here is a brief explanation of NUMA I/O from Solaris 11 : What's New web page.

Non-Uniform Memory Access (NUMA) I/O : Many modern systems are based on a NUMA architecture, where each CPU or set of CPUs is associated with its own physical memory and I/O devices. For best I/O performance, the processing associated with a device should be performed close to that device, and the memory used by that device for DMA (Direct Memory Access) and PIO (Programmed I/O) should be allocated close to that device as well. Oracle Solaris 11 adds support for this architecture by placing operating system resources (kernel threads, interrupts, and memory) on physical resources according to criteria such as the physical topology of the machine, specific high-level affinity requirements of I/O frameworks, actual load on the machine, and currently defined resource control and power management policies.

Do not forget to rollback these changes after applying the fix for the database bug 17214885, when available.

(2) Redo logs on F40 PCIe cards (non-volatile flash storage)

Per the F40 PCIe card user's guide, the Sun Flash Accelerator F40 PCIe Card is designed to provide the best performance for data transfers that are multiples of 8K in size and use addresses that are 8K aligned. To achieve optimal performance, the size of the read/write data should be an integer multiple of this block size and the data transferred should be block aligned. I/O operations that are not block aligned and that do not use sizes that are a multiple of the block size may suffer performance degradation, especially for write operations.

Oracle redo log files default to a block size that is equal to the physical sector size of the disk, typically 512 bytes. In a normally functioning environment, the database writes to the redo log most of the time. Oracle database supports a maximum block size of 4K for redo logs. Hence, to achieve optimal performance for redo write operations on F40 PCIe cards, tune the environment as shown below.

  1. Configure the following init parameters

    _disk_sector_size_override=TRUE
    _simulate_disk_sectorsize=4096
  2. Create redo log files with 4K block size
    eg.,

    SQL> ALTER DATABASE ADD LOGFILE '/REDO/redo.dbf' size 20G blocksize 4096;
  3. [Solaris only] Append the following line to /kernel/drv/sd.conf (requires a reboot)

    sd-config-list="ATA 3E128-TS2-550B01","disksort:false, cache-nonvolatile:true, physical-block-size:4096";
  4. [Solaris only][F20] To enable maximum throughput from the MPT driver, append the following line to /kernel/drv/mpt.conf and reboot the system.

    mpt_doneq_thread_n_prop=8;
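
To verify that the new logs picked up the intended block size, here is a hedged query sketch (v$log exposes a BLOCKSIZE column in 11g R2):

    SQL> SELECT group#, blocksize, bytes/1024/1024/1024 size_gb FROM v$log;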

This tip is applicable to all kinds of flash storage that Oracle sells or sold, including the F20/F40 PCIe cards and the F5100 storage array. sd-config-list in sd.conf may need some adjustment to reflect the correct vendor id and product id.

Posted in Oracle Solaris Database RDBMS Redo Flash F40 AWR

Tuesday, 30 July 2013

Oracle Tips : Solaris lgroups, CT optimization, Data Pump, Recompilation of Objects, ..

Posted on 02:09 by Unknown
1. [Re]compiling all objects in a schema


exec DBMS_UTILITY.compile_schema(schema => 'SCHEMA');

To recompile only the invalid objects in parallel:


exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');

A NULL value for SCHEMA recompiles all invalid objects in the database.


2. SGA breakdown in Solaris Locality Groups (lgroup)

To find the breakdown, execute pmap -L <pid> | grep shm. Then separate the lines that are related to each locality group, and sum up the values in the 2nd column to arrive at the total SGA memory allocated in that locality group.

(I'm pretty sure there will be a much easier way that I am not currently aware of.)
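
Meanwhile, here is a hedged one-liner sketch that automates the grouping and summing with awk. It assumes the locality group number is the 5th field of pmap -sL output, as it is in the pmap output processed by the sga_in_lgrp.sh script elsewhere on this blog:

pmap -sL <pid> | grep shm | awk '{ sub("K", "", $2); sum[$5] += $2 }
    END { for (l in sum) printf("lgroup %s : %.2f GB\n", l, sum[l]/(1024*1024)) }'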


3. Default values for shared pool, java pool, large pool, ..

If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.

eg.,

SQL> select * from v$sgainfo;

NAME BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size 2171296 No
Redo Buffers 373620736 No
Buffer Cache Size 8.2410E+10 Yes
Shared Pool Size 1.7180E+10 Yes
Large Pool Size 536870912 Yes
Java Pool Size 1879048192 Yes
Streams Pool Size 268435456 Yes
Shared IO Pool Size 0 Yes
Granule Size 268435456 No
Maximum SGA Size 1.0265E+11 No
Startup overhead in Shared Pool 2717729536 No
Free SGA Memory Available 0

12 rows selected.

4. Fix to PLS-00201: identifier 'GV$SESSION' must be declared error

Grant select privilege on gv_$SESSION to the owner of the database object that failed to compile.

eg.,

SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
390/22 PL/SQL: Item ignored
390/22 PLS-00201: identifier 'GV$SESSION' must be declared

SQL> grant select on gv_$SESSION to OWF_MGR;
Grant succeeded.

SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
Package body altered.

5. Solaris Critical Thread (CT) optimization for Oracle logwriter (lgrw)

Critical Thread is a scheduler optimization available in Oracle Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded components of software, such as the Oracle database's log writer, benefit from CT optimization.

On a high level, LWPs marked as critical will be granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread will be assigned exclusive access to a core as much as possible. CT optimization won't delay scheduling of any runnable thread in the system.

Critical Thread optimization is enabled by default. However the users of the system have to hint the OS by marking a thread or two "critical" explicitly as shown below.


priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>

From the database point of view, the log writer (lgwr) is one such process that can benefit from CT optimization on the Solaris platform. Oracle DBAs can either mark the lgwr process 'critical' once the database is up and running, or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does it by default. Future releases of 11g software may make lgwr critical out of the box.

Those who install the database patch 12951619 need to carefully follow the post installation steps documented in the patch README to avoid running into unwanted surprises.


6. ORA-14519 error while importing a table from a Data Pump export dump

ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block size 32768 [partition specification] conflicts with previously specified/implied tablespace YYY block size 8192
[object-level default]
Failing sql is:
CREATE TABLE XYZ
..

All partitions in table XYZ are using 32K blocks whereas the implicit default partition is pointing to an 8K block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions reside.
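
A hedged sketch of the workaround (userid, directory, dump file and tablespace names are placeholders; YYY is the implied 8K default tablespace, XXX is the 32K tablespace holding the partitions):

impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp tables=XYZ remap_tablespace=YYY:XXX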


7. Index building task in Data Pump import process

When a Data Pump import process is running, by default, index building is performed with just one thread, which becomes a bottleneck and causes the data import to take a long time, especially if many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import with the help of the EXCLUDE=INDEX impdp command line option, and then extract the index definitions for all the skipped indexes from the Data Pump dump file as shown below.


impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp sqlfile=<index_def_file>.sql INCLUDE=INDEX

Edit <index_def_file>.sql to set the desired number of parallel threads to build each index. And finally execute the <index_def_file>.sql to build the indexes once the data import task is complete.

Posted in oracle database solaris

Sunday, 30 June 2013

Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

Posted on 00:27 by Unknown

1. Most Oracle software installers need assembler

Assembler (as) is not installed by default on Solaris 11.
     Find and install it:

eg.,

# pkg search assembler
INDEX ACTION VALUE PACKAGE
pkg.fmri set solaris/developer/assembler pkg:/developer/assembler@0.5.11-0.175.1.5.0.3.0

# pkg install pkg:/developer/assembler

The assembler binary used to be under the /usr/ccs/bin directory on Solaris 10 and prior versions.
     There is no /usr/ccs/bin on Solaris 11; its contents were moved to /usr/bin



2. Non-interactive retrieval of the entire list of disks that format reports

If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts the user - hit space for more or s to select - to move to the next screen showing a few more disks. Run either of the following commands to retrieve the entire list of disks in one shot.


format < /dev/null

-or-

echo "\n" | format


3. Finding system wide file descriptors/handles in use

Run the following kstat command as any user (privileged or non-privileged).


kstat -n file_cache -s buf_inuse

Going through /proc (process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles.



4. ssh connection to a Solaris 11 host fails with error Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour)

Solution: add 3des-cbc to the list of accepted ciphers in the sshd configuration file.

Steps:

  1. Append the following line to /etc/ssh/sshd_config
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour,3des-cbc
  2. Restart ssh daemon
    svcadm -v restart ssh


5. UFS: Finding the last mount point for a device

The fsck utility reports the last mount point on which the filesystem was mounted (it won't show the mount options, though). The filesystem should be unmounted when running fsck.

eg.,

# fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6
** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE)
** Last Mounted on /export/oracle
** Phase 1 - Check Blocks and Sizes
...
...
Posted in Oracle Solaris Tips

Friday, 31 May 2013

Oracle Internet Directory 11g Benchmark on SPARC T5

Posted on 16:44 by Unknown

SUMMARY

System Under Test (SUT)    Oracle's SPARC T5-2 server
Software    Oracle Internet Directory 11gR1-PS6
Target Load    50 million user entries
Reference URL    OID/T5 benchmark white paper

Oracle Internet Directory (OID) is an LDAP v3 Directory Server that has multi-threaded, multi-process, multi-instance process architecture with Oracle database as the directory store.

BENCHMARK WORKLOAD DESCRIPTION

Five test scenarios were executed in this benchmark - each test scenario performing a different type of LDAP operation. The key metrics are throughput -- the number of operations completed per second, and latency -- the time it took in milliseconds to complete an operation.

TEST SCENARIOS & RESULTS

1. LDAP Search operation : search for and retrieve specific entries from the directory

In this test scenario, each LDAP search operation matches a single unique entry. Each Search operation results in the lookup of an entry in such a way that no client looks up the same entry twice and no two clients lookup the same entry, and all entries are looked-up randomly.

#clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       944,624                           1.05

2. LDAP Add operation : add entries, their object classes, attributes and values to the directory

In this test scenario, 16 concurrent LDAP clients added 500,000 entries of object class InetOrgPerson with 21 attributes to the directory.

#clients    Throughput (Operations/Second)    Latency (milliseconds)
16          1,000                             15.95

3. LDAP Compare operation : compare a given attribute value to the attribute value in a directory entry

In this test scenario, the userpassword attribute was compared. That is, each LDAP Compare operation matches the user password of a user.

#clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       594,426                           1.68

4. LDAP Modify operation : add, delete or replace attributes for entries

In this test scenario, 50 concurrent LDAP clients updated a unique entry each time, and a total of 50 million entries were updated. The attribute being modified was not indexed.

#clients    Throughput (Operations/Second)    Latency (milliseconds)
50          16,735                            2.98

5. LDAP Authentication operation : authenticates the credentials of a user

In this test scenario, 1000 concurrent LDAP clients authenticated 50 million users.

#clients    Throughput (Operations/Second)    Latency (milliseconds)
1,000       305,307                           3.27

BONUS: LDAP Mixed operations Test

In this test scenario, 1000 LDAP clients were used to perform LDAP Search, Bind and Modify operations concurrently.
Operation breakdown (load distribution): Search: 65%. Bind: 30%. Modify: 5%

LDAP Operation    #clients    Throughput (Operations/Second)    Latency (milliseconds)
Search            650         188,832                           3.86
Bind              300         87,159                            1.08
Modify            50          14,528                            12

And finally:

HARDWARE CONFIGURATION

 1 x Oracle SPARC T5-2 Server
    » 2 x 3.6 GHz SPARC T5 sockets each with 16 Cores (Total Cores: 32) and 8 MB L3 cache
    » 512 GB physical memory
    » 2 x 10 GbE cards
    » 1 x Sun Storage F5100 Flash Array with 80 flash modules
    » Oracle Solaris 11.1 operating system

ACKNOWLEDGEMENTS

Major credit goes to our colleague, Ramaprakash Sathyanarayan.

Posted in Oracle OID LDAP SPARC T5 T5-2 Benchmark

Friday, 12 April 2013

Siebel 8.1.1.4 Benchmark on SPARC T5

Posted on 02:14 by Unknown

Hardly six months after announcing the Siebel 8.1.1.4 benchmark results on Oracle SPARC T4 servers, we have a brand new set of Siebel 8.1.1.4 benchmark results on Oracle SPARC T5 servers. There have been no updates to the Siebel benchmark kit in the last couple of years - so, we continued to use the Siebel 8.1.1.4 benchmark workload to measure the performance of Siebel Financial Services Call Center and Order Management business transactions on the recently announced SPARC T5 servers.

Benchmark Details

The latest Siebel 8.1.1.4 benchmark was executed on a mix of SPARC T5-2, SPARC T4-2 and SPARC T4-1 servers. The benchmark test simulated the actions of a large corporation with 40,000 concurrent active users. To date, this is the highest user count we have achieved in a Siebel benchmark.

User Load Breakdown & Achieved Throughput

Siebel Application Module          %Total Load    #Users    Business Trx per Hour
Financial Services Call Center     70             28,000    273,786
Order Management                   30             12,000    59,553
Total                              100            40,000    333,339

Average Transaction Response Times for both Financial Services Call Center and Order Management transactions were under one second.

Software & Hardware Specification

Test Component         Software                       Version        Server Model   Qty  Chips  Cores  vCPUs  CPU Speed  CPU Type            Memory   OS
Application Server     Siebel                         8.1.1.4        SPARC T5-2     2    2      32     256    3.6 GHz    SPARC-T5            512 GB   Solaris 10 1/13 (S10U11)
Database Server        Oracle 11g R2                  11.2.0.2       SPARC T4-2     1    2      16     128    2.85 GHz   SPARC-T4            256 GB   Solaris 10 8/11 (S10U10)
Web Server             iPlanet Web Server             7.0.9 (7 U9)   SPARC T4-1     1    1      8      64     2.85 GHz   SPARC-T4            128 GB   Solaris 10 8/11 (S10U10)
Load Generator         Oracle Application Test Suite  9.21.0043      SunFire X4200  1    2      4      4      2.6 GHz    AMD Opteron 285 SE  16 GB    Windows 2003 R2 SP2
Load Drivers (Agents)  Oracle Application Test Suite  9.21.0043      SunFire X4170  8    2      12     12     2.93 GHz   Intel Xeon X5670    48 GB    Windows 2003 R2 SP2

(Qty, Chips, Cores, vCPUs, CPU Speed, CPU Type and Memory are per-server specifications.)

Additional Notes:

  • Siebel Gateway Server was configured to run on one of the application server nodes
  • Four Siebel application servers were configured in the Siebel Enterprise to handle 40,000 concurrent users
    - Each SPARC T5-2 was configured to run two Siebel application server instances
    - Each of the Siebel application server instances on the SPARC T5-2 servers was separated using Solaris virtualization technology, Zones
    - 40,000 concurrent user sessions were load balanced across all four Siebel application server instances
  • Siebel database was hosted on a Sun Storage F5100 Flash Array consisting of 80 x 24 GB flash modules (FMODs)
    - The Siebel 8.1.1.4 benchmark workload is not I/O intensive and does not require flash storage for better I/O performance
  • Fourteen iPlanet Web Server virtual servers were configured with the Siebel Web Server Extension (SWSE) plug-in to handle the 40,000 concurrent user load
    - All fourteen iPlanet Web Server instances forwarded HTTP requests from Siebel clients to all four Siebel application server instances in a round robin fashion
  • Oracle Application Test Suite (OATS) was stable and held up amazingly well over the entire duration of the test run
    - The test ran for more than five hours, including a three hour ramp up state
    - While we are at it, do not forget to check the Oracle Application Testing Suite (OATS): Few Tips & Tricks page
  • The benchmark test results were validated and thoroughly audited by the Siebel benchmark and PSR teams
    - Nothing new here. All Sun published Siebel benchmarks, including the SPARC T4 one, were properly audited before releasing them to the outside world

Resource Utilization

Component                     #Users    CPU%     Memory Footprint
Gateway/Application Server    20,000    67.03    205.54 GB
Application Server            20,000    66.09    206.24 GB
Database Server               40,000    33.43    108.72 GB
Web Server                    40,000    29.48    14.03 GB

Finally, how does this benchmark stack up against other published benchmarks? Short answer is "very well". Head over to the Oracle Siebel Benchmark White Papers webpage to do the comparison yourself.


[Credit to our hard working colleagues in SAE, Siebel PSR, benchmark and Oracle Platform Integration (OPI) teams. Special thanks to Sumti Jairath and Venkat Krishnaswamy for the last minute fire drill]

Posted in Oracle Siebel Sun SPARC T5 Benchmark T5-2

Tuesday, 5 March 2013

SuperCluster Best Practices : Deploying Oracle 11g Database in Zones

Posted on 23:47 by Unknown

To be clear, this post is about a white paper that has been out there for more than two months. Access it through the following URL.

  Best Practices for Deploying Oracle Solaris Zones with Oracle Database 11g on SPARC SuperCluster

The focus of the paper is on databases and zones. On SuperCluster, customers have the choice of running their databases in logical domains that are dedicated to running Oracle Database 11g R2. With exclusive access to Exadata Storage Servers, those domains are aptly called "Database" domains. If the requirement mandates, it is possible to create and use all logical domains as "database domains" or "application domains" or a mix of those. Since the focus is on databases, the paper talks only about the database domains and how zones can be created, configured and used within each database domain for fine grained control over multiple databases consolidated in a SuperCluster environment.

When multiple databases (including RAC databases) are being consolidated in database logical domains, zones are one of the options that fulfill requirements such as fault, operation, network, security and resource isolation; multiple RAC instances in a single logical domain; and separate identity and independent manageability for database instances.

The best practices cover the following topics. Some of those are applicable to standalone, non-engineered environments as well.

Solaris Zones

  • CPU, memory and disk space allocation
  • Zone Root on Sun ZFS Storage Appliance
  • Network configuration
  • Use of DISM
  • Use of ZFS filesystem
  • SuperCluster specific zone deployment tool, ssc_exavm
  • ssctuner utility

Oracle Database

  • Exadata Storage Grid (Disk Group) Configuration
  • Disk Group Isolation
    • Shared Storage approach
    • Dedicated Storage Server approach
  • Resizing Grid Disks

Oracle RAC Configuration
Securing the Databases, and

Example Database Consolidation Scenarios

  • Consolidation example using Half-Rack SuperCluster
  • Consolidation example using Full-Rack SuperCluster

Acknowledgements

A large group of experts reviewed the material and provided quality feedback. Hence they deserve credit for their work and time. Listed below are some of those reviewers (sincere apologies if I missed listing any major contributors).

Kesari Mandyam, Binoy Sukumaran, Gowri Suserla, Allan Packer, Jennifer Glore, Hazel Alabado, Tom Daly, Krishnan Shankar, Gurubalan T, Rich long, Prasad Bagal, Lawrence To, Rene Kundersma, Raymond Dutcher, David Brean, Jeremy Ward, Suzi McDougall, Ken Kutzer, Larry Mctintosh, Roger Bitar, Mikel Manitius

Posted in SuperCluster Oracle Database RDBMS RAC Solaris Zones

Tuesday, 12 February 2013

OBIEE 11g Benchmark on SPARC T4

Posted on 17:31 by Unknown

Just like the Siebel 8.1.x/SPARC T4 benchmark post, this one too was overdue by at least four months. In any case, I hope Oracle BI customers already knew about the OBIEE 11g/SPARC T4 benchmark effort. Here I will try to provide a few additional, interesting details that aren't covered in the following Oracle PR, which was posted on oracle.com on 09/30/2012.

    SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g


Benchmark Details

System Under Test

The entire BI middleware stack, including the WebLogic 11g Server, OBI Server, OBI Presentation Server and Java Host, was installed and configured on a single SPARC T4-4 server consisting of four 8-core 3.0 GHz SPARC T4 processors (total #cores: 32) and 128 GB physical memory. Oracle Solaris 10 8/11 is the operating system.

BI users were authenticated against Oracle Internet Directory (OID) in this benchmark - hence the OID software, which was part of Oracle Identity Management 11.1.1.6.0, was also installed and configured on the system under test (SUT). Oracle BI Server's Query Cache was turned on; as a result, most of the query results were cached in the OBIS layer, which resulted in minimal database activity, making it ideal to have the Oracle 11g R2 database server with the OBIEE database running on the same box as well.

Oracle BI database was hosted on a Sun ZFS Storage 7120 Appliance. The BI Web Catalog was under a ZFS/zpool on a couple of SSDs.


Test Scenario

In this benchmark, 25,000 concurrent users assumed five different business user roles -- Marketing Executive, Sales Representative, Sales Manager, Sales Vice-president, and Service Manager. The load was distributed equally among those five business user roles. Each of those BI users accessed five different pre-built dashboards, with each dashboard having an average of five reports - a mix of charts, tables and pivot tables - and returning 50-500 rows of aggregated data. The benchmark test scenario included drilling down into multiple levels from a table or chart within a dashboard. There was a 60 second think time between requests, per user.


BI Setup & Test Results

OBIEE 11g 11.1.1.6.0 was deployed on SUT in a vertical scale-out fashion. Two Oracle BI Presentation Server processes, one Oracle BI Server process, one Java Host process and two instances of WebLogic Managed Servers handled 25,000 concurrent user sessions smoothly. This configuration resulted in a sub-second overall average transaction response time (average of averages over a duration of 120 minutes or 2 hours). On average, 450 business transactions were executed per second, which triggered 750 SQL executions per second.

It took only 52% of CPU on average (~5% system CPU and the rest in user land) to do all this work and achieve the throughput outlined above. Since 25,000 unique test/BI users hammered different dashboards consistently, not so surprisingly the bulk of the CPU was spent in the Oracle BI Presentation Server layer, which took a whopping 29%. The BI Server consumed about 10-11%, and the rest was shared by the Java Host, OID, WebLogic Managed Server instances and the Oracle database.


So, what is the key take away from this whole exercise?

SPARC T4 rocks the Oracle BI world. OBIEE 11g/SPARC T4 is an ideal combination that may work well for the majority of OBIEE deployments on the Solaris platform. Or, in marketing jargon: the excellent vertical and horizontal scalability of the SPARC T4 server gives customers the option to scale up as well as scale out, to support large BI EE installations with minimal hardware investment.

Evaluate and decide for yourself.

[Credit to our colleagues in Oracle FMW PSR, ISVe teams and SCA lab support engineers]
Posted in Oracle Business+Intelligence Analytics Solaris SPARC T4

Sunday, 3 February 2013

[Tip] Samsung Galaxy S II: Turning off Camera Shutter Sound

Posted on 11:43 by Unknown

It is one of the hot topics among Galaxy S II users. In web forums, some of the recurring solutions appear to be rooting the phone or muting the "system" sounds. They seem to work in some cases. However, there is a much simpler solution for Galaxy S II phones running the Ice Cream Sandwich (ICS) version of Android.

Steps:

  1. Launch the Camera application

  2. Tap the Menu key button to bring up "Edit Shortcuts" menu

  3. Tap "Edit Shortcuts" menu item to list out all available shortcuts

  4. Look for "Shutter sound" shortcut

  5. Press, hold and drag the "Shutter sound" option to one of the empty boxes shown on top. If there are no empty boxes, simply drop it on to one of the non-empty boxes that contain the least desired shortcut/option.

  6. Finally tap on "Shutter sound" icon and select the "Off" button to keep the camera shutter silent

PS:
Screenshots were captured using T-Mobile Samsung Galaxy S II (SGH-T989) device running Android 4.0.3

Posted in Smartphone Samsung Galaxy S2 Phone+Shutter Tip Android ICS

Wednesday, 30 January 2013

Siebel 8.1.1.4 Benchmark on SPARC T4

Posted on 16:50 by Unknown

Siebel is a multi-threaded native application that performs well on Oracle's T-series SPARC hardware. We have several versions of Siebel benchmarks published on previous generation T-series servers, ranging from the Sun Fire T2000 to the Oracle SPARC T3-4. So, it is natural to see that tradition extended to the current generation SPARC T4 as well.

Benchmark Details

The 29,000 user Siebel 8.1.1.4 benchmark on a mix of SPARC T4-1 and T4-2 servers was announced during the Oracle OpenWorld 2012 event. In this benchmark, Siebel application server instances ran on three SPARC T4-2/Solaris 10 8/11 systems, whereas the Oracle 11gR2 database server was configured on a single SPARC T4-1/Solaris 11 11/11 system. Several iPlanet Web Server 7 U9 instances with the Siebel Web Plug-in (SWE) installed ran on one SPARC T4-1/Solaris 10 8/11 system. The Siebel database was hosted on a single Sun Storage F5100 flash array consisting of 80 flash modules (FMODs), each with a capacity of 24 GB.

Siebel Call Center and Order Management System are the modules that were tested in the benchmark. The benchmark workload had 70% of the virtual users running Siebel Call Center transactions and the remaining 30% running Siebel Order Management System transactions. This benchmark on T4 exhibited sub-second response times on average for both the Siebel Call Center and Order Management System modules.

Load balancing at various layers including web and test client systems ensured near uniform load across all web and application server instances. All three Siebel application server systems consumed ~78% CPU on average. The database and web server systems consumed ~53% and ~18% CPU respectively.

All these details are supposed to be available in a standard Oracle|Siebel benchmark template - but for some reason, I couldn't find it on Oracle's Siebel Benchmark White Papers web page yet. Meanwhile check out the following PR that was posted on oracle.com on 09/28/2012.

    SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark

Looks like the large number of vusers (29,000 to be precise) sets this benchmark apart from the other benchmarks published with the same Siebel 8.1.1.4 benchmark workload.

[Credit to our colleagues in Siebel PSR, benchmark, SAE and ISVe teams]

Posted in Oracle Siebel Sun SPARC T4 Benchmark