Tuesday, August 24, 2010

Sizing Oracle Coherence Applications

The following is the last of the five posts which will list the best practices and performance suggestions for tuning one’s Oracle Coherence environment.  This is a general guide that will evolve as new product features are added to Oracle Coherence.

Sizing Oracle Coherence

Remember there are 4 types of virtual machines in Coherence:
  1. JVMs that are used for data storage (“storage JVMs”)
  2. JVMs that run client applications and do not store data (“client JVMs”)
  3. JVMs that run client applications and connect to the cluster via Coherence*Extend (TCP/IP)
  4. .NET client applications that connect to the cluster via Coherence*Extend (TCP/IP)
The protocol for storage and client JVMs (type 1 and 2 above) is TCMP (“Tangosol Cluster Management Protocol”, based on UDP unicast).

 

Tips for Sizing Oracle Coherence

  • Allow extra space for overhead
  • Every object has one full backup on another JVM on another machine
  • If a JVM fails, Oracle Coherence automatically fails over AND creates new backups on other JVMs
  • This means that if a JVM fails, other JVMs will need to accommodate the backups of the objects
  • Rule-of-thumb: each 1 GB JVM can store 350 MB of actual object data
  • That means a 16 GB machine will support about 4.5 GB of raw object data. 13 JVMs * 350 MB = 4.55 GB


    Question #1:    How many 1 GB JVMs can you run on a box with 16 GB of RAM?

    Answer: At most 13
    Start with 16 GB of RAM
    Subtract RAM required for the OS and other apps
    Divide by 1.2 (remember, 1 GB of heap uses 1.2 GB of RAM)
    (16 GB - 400 MB) / 1.2 GB = ~13


    Question #2:    How many 16 GB machines will be required to support 20 GB of data in the grid?

    Answer: At least five (six for HA)
    * Each JVM handles 350 MB
    * You have 13 JVMs per machine
    * You have 4.5 GB per machine (13 * 350 MB)
    * 20 GB / 4.5 GB per box = 4.44
    * Round up to 5 (add a sixth machine if full capacity must survive the loss of one machine)
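The rule-of-thumb arithmetic behind both questions can be sketched as a small calculation. This is a minimal sketch using the post's own figures (1.2 GB of RAM per 1 GB heap, ~400 MB OS reserve, 350 MB of object data per JVM); the class and method names are made up for illustration, and 1 GB is treated as 1000 MB to match the post's math:

```java
public class CoherenceSizing {
    // Rule-of-thumb figures from the post (treat as assumptions; 1 GB = 1000 MB here):
    static final int RAM_PER_JVM_MB  = 1200; // a 1 GB heap uses ~1.2 GB of physical RAM
    static final int OS_RESERVE_MB   = 400;  // leave ~400 MB for the operating system
    static final int DATA_PER_JVM_MB = 350;  // each 1 GB JVM stores ~350 MB of object data

    // Question #1: how many 1 GB JVMs fit on a machine?
    static int jvmsPerMachine(int machineRamMb) {
        return (machineRamMb - OS_RESERVE_MB) / RAM_PER_JVM_MB;
    }

    // Question #2: how many machines are needed for a given amount of data (round up)?
    static int machinesForData(int dataMb, int machineRamMb) {
        int dataPerMachineMb = jvmsPerMachine(machineRamMb) * DATA_PER_JVM_MB;
        return (dataMb + dataPerMachineMb - 1) / dataPerMachineMb;
    }

    public static void main(String[] args) {
        System.out.println("JVMs per 16 GB machine: " + jvmsPerMachine(16_000));          // 13
        System.out.println("Machines for 20 GB of data: " + machinesForData(20_000, 16_000)); // 5
    }
}
```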


    I hope this series of five posts on Oracle Coherence has been helpful to you!

    Tuesday, August 17, 2010

    Tuning Oracle Coherence*Web Applications

    The following is the fourth of five posts which will list the best practices and performance suggestions for tuning one’s Oracle Coherence environment.  This is a general guide that will evolve as new product features are added to Oracle Coherence.

    Oracle Coherence*Web

    Oracle Coherence*Web is an HTTP session management module and a drop-in replacement for application server container session management. It "wraps" existing web applications: no runtime byte code manipulation is done, and any requests to use sessions (from servlets, JSPs, filters, etc.) are intercepted by Oracle Coherence*Web wrappers.  For more details you can look at:

    Coherence*Web Session Management Module
    Coherence*Web and WebLogic Server
    Coherence*Web and Other Application Server Containers

     

    High-Level Steps to enable Oracle Coherence*Web

    • Run the inspector on the existing WAR/EAR file (This generates a coherence-web.xml configuration file. This file wraps all servlets, filters, etc with Coherence implementations. It also contains configuration settings for Coherence*Web)
    • Inspect and (if any changes are required) modify the coherence-web.xml file
    • Run the installer process on the existing WAR/EAR which generates a new WAR/EAR and backs up the original WAR/EAR.
    • You now deploy the new WAR/EAR to the Application Server Container
    • The complete steps for the Oracle WebLogic Server are listed here.

     

    Troubleshooting Oracle Coherence*Web

    • Obtain a baseline for the application without Coherence to properly determine how sessions are being used and replicated. This will make it easier to compare against Coherence and troubleshoot further.
    • Network throughput can be an issue as well. Run a datagram test to determine how much one can push between machines. This will help tune the network between the web application tier and the data grid. Review the first post in this series, specifically the networking section.
    • The session model will be a factor in performance; the split session model is the default and keeps small session attributes in the near cache while large ones are accessed from the grid. If the application regularly uses lots of large attributes, another model may be more appropriate. Review the Session Models at this link.
    • Review the cache configuration file. From a web application caching perspective, Coherence*Web gets a big benefit from a near caching scheme, where objects smaller than 1 KB are kept in the local JVM, avoiding the network hop and marshalling/deserialization.
    • If one is deploying multiple web applications, sometimes it is desirable to share session attributes and sometimes it is not. There is configuration for session scoping linked on this page.
    • For sizing, the proper size of the grid depends on the web application and how it uses the session. Testing with some average test cases should be used to arrive at a metric such as "1 user takes X MB in the grid".
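As an illustration of why the near caching scheme above pays off, here is a stdlib-only Java sketch of the near-cache read path. The plain Map standing in for the distributed cache, and all class and field names, are assumptions for illustration; Coherence's real near cache also handles invalidation of the front map, which this sketch omits:

```java
import java.util.*;

public class NearCacheSketch<K, V> {
    private final Map<K, V> front = new HashMap<>(); // local, in-JVM "front" map
    private final Map<K, V> back;                    // stands in for the distributed cache
    public int backReads = 0;                        // instrumentation for the demo

    public NearCacheSketch(Map<K, V> back) { this.back = back; }

    public V get(K key) {
        V v = front.get(key);
        if (v == null) {                 // miss: pay the "network hop" once...
            v = back.get(key);
            backReads++;
            if (v != null) front.put(key, v); // ...then serve later reads locally
        }
        return v;
    }

    public static void main(String[] args) {
        Map<String, String> grid = new HashMap<>();
        grid.put("session-1", "small session attributes");
        NearCacheSketch<String, String> cache = new NearCacheSketch<>(grid);
        cache.get("session-1");
        cache.get("session-1");
        System.out.println("back-map reads after two gets: " + cache.backReads); // 1
    }
}
```

The second `get` never touches the back map, which is exactly the network hop and deserialization the near cache avoids for small, frequently read session attributes.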

    Tuesday, August 10, 2010

    Tuning/Troubleshooting Oracle Coherence Applications

    The following is the third of five posts which will list the best practices and performance suggestions for tuning one’s Oracle Coherence environment.  This is a general guide that will evolve as new product features are added to Oracle Coherence.


    Troubleshooting Multicast Issues  

     

    1. If you have Oracle Coherence installed on the hosts between which you're testing multicasting, you can use its multicast connectivity test
    2. In addition you can use its datagram test to measure network throughput. The practical max on a well-tuned gigabit Ethernet link is ~115MB/sec.
    3. Finally make sure to use: -Djava.net.preferIPv4Stack=true
    4. Optionally one can use Well-Known Addresses (WKA, i.e. unicast) to eliminate any multicast issues.
    5. If one is on Windows 2003, 2008, Vista, or Windows 7 and is experiencing problems with sharing ports for multicast, check the registry for HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Afd\Parameters\DisableAddressSharing and see if it is set to "1". If so, change this to "0", reboot the machine, and retest. From Microsoft: "Enhanced socket security was added with the release of Windows Server 2003. In previous Microsoft server operating system releases, the default socket security easily allowed processes to hijack ports from unsuspecting applications. In Windows Server 2003, sockets are not in a sharable state by default. Therefore, if an application wants to allow other processes to reuse a port on which a socket is already bound, it must specifically enable it."
    6. If one is on Windows, run the following command to generate some further information on the machine’s networking:
    netsh firewall show state verbose=enable

     

    Log Messages Explanation


    Review the following link for causes and actions to common TCMP (Tangosol Cluster Management Protocol, based on UDP unicast) log messages from Coherence.

    Tuesday, August 3, 2010

    Troubleshooting Checklist for Oracle Coherence Applications

    The following is the second of five posts which will list the best practices and performance suggestions for tuning one’s Oracle Coherence environment.  This is a general guide that will evolve as new product features are added to Oracle Coherence.

    General Oracle Coherence Performance Questions

    The following is a general list of questions to review when troubleshooting performance issues with Coherence.
    1. What application server is being used in conjunction with Oracle Coherence?
    2. Is Oracle Coherence being run within the same JVM as the app-server container, or is the data grid set up outside of the app-server container? (i.e. is storage disabled there with -Dtangosol.coherence.distributed.localstorage=false)
    3. How many storage nodes are being used for Oracle Coherence? (Is there adequate storage for all the data?)
    4. What size are the Java heaps for these storage nodes?
    5. Are the out-of-the-box Oracle Coherence configuration files being used from within coherence.jar? (i.e. Coherence itself has not been tuned to the environment/application?) See Sample Cache Configuration Files for details.
    6. Are configuration files specified via a -D flag to the Oracle Coherence cache servers or within a jar file? i.e. are -Dtangosol.coherence.override=<file> and -Dtangosol.coherence.cacheconfig=<file> being used?
    7. What is the Thread-count set for Oracle Coherence?
    8. What type of partitioning is being used? Is a near cache being used, or a replicated cache? "The partitioned/distributed cache gives real linear scalability and should be used in pretty much all scenarios. With the replicated cache the same data is copied to all of the nodes, which is very performance-taxing if the data changes." Information on the near cache, partitioned cache and replicated cache.
    9. Multicast or Unicast? (Review the Multicast Troubleshooting section below.)
    10. Is this a 32 bit JVM or a 64 bit JVM? JRockit or Sun JVM?
    11. What garbage collection algorithm is being used?
    12. Review the Platform-Specific Deployment Considerations section of the documentation.

    Tuesday, July 27, 2010

    Tuning an Oracle Coherence Application

    The following is the first of five posts which will list the best practices and performance suggestions for tuning one’s Oracle Coherence environment.  This is a general guide that will evolve as new product features are added to Oracle Coherence.  In general, the key performance killers for any data grid are the network, the Java Virtual Machine (JVM) configuration, and the grid configuration.  The network is critical, and running a multicast test will help validate the environment.

    Tuning an Oracle Coherence Application

      Network Topology:

        1. Check the Network 
        2. Check the MTU size.   Oracle Coherence uses a packet size based on the standard 1500 byte MTU.  If one is on Windows, this operating system includes a fast I/O path for "small" packets, where small is defined as 1024 bytes.  Increasing this limit to match the MTU can significantly improve network performance. FastSendDatagramThreshold is the registry value for this on Windows. To make these changes to your registry, run the included "optimize.reg" registry file in the Oracle Coherence product installation.
        3. Run the Datagram test to measure network throughput and check the success rate.  The practical max on a well-tuned gigabit Ethernet link is ~115MB/sec.
        4. Make sure there is a gigabit (or faster) network between all servers (avoid mixed-speed networks)
        5. Network Switch tuning (avoid multiple switches)
        6. Are Cisco switches involved?  If so, make sure unicast is being used with WKA (Well-Known Addresses).  See the following link about Cisco switches and packet pauses (visible when Coherence logging is turned up).
        7. Check the JMX MBean for packet delivery/receiver success rates when running these basic tests.
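Coherence ships its own datagram test utility for step 3; as a stand-alone illustration of what such a test measures, the following is a minimal loopback UDP throughput sketch (this is not the Coherence tool; the class name, packet count, and packet size are arbitrary assumptions). A real test should run machine-to-machine over the actual cluster network, where loss and the ~115 MB/sec gigabit ceiling become visible:

```java
import java.net.*;
import java.util.concurrent.*;

public class UdpThroughputSketch {
    static final int PACKETS = 1000;
    static final int SIZE = 1468; // roughly the UDP payload of a 1500-byte MTU frame

    // Sends PACKETS datagrams over loopback and returns the measured MB/sec.
    static double measure() throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            receiver.setSoTimeout(2000);
            int port = receiver.getLocalPort();
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<Integer> received = pool.submit(() -> {
                byte[] buf = new byte[SIZE];
                int count = 0;
                try {
                    for (int i = 0; i < PACKETS; i++) {
                        receiver.receive(new DatagramPacket(buf, buf.length));
                        count++;
                    }
                } catch (SocketTimeoutException e) {
                    // UDP is lossy; report whatever arrived
                }
                return count;
            });
            long start = System.nanoTime();
            byte[] payload = new byte[SIZE];
            for (int i = 0; i < PACKETS; i++) {
                sender.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getLoopbackAddress(), port));
            }
            int ok = received.get();
            pool.shutdown();
            double secs = (System.nanoTime() - start) / 1e9;
            return (ok * (double) SIZE) / (1024.0 * 1024.0) / secs;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("loopback throughput: ~%.1f MB/sec%n", measure());
    }
}
```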

          Operating System:
            1. Adjust UDP buffer size on the operating systems where the Oracle Coherence Servers are running.  

              JVM Tuning:
                • Start with a Java heap of 1 GB (-Xms1g -Xmx1g) for each Oracle Coherence server and tune from there based on performance tests, the application profile, and the operating system (32-bit or 64-bit OS). A JVM with a 1 GB heap uses about 1.2 GB of physical RAM.
                • Do not configure your JVMs to exceed physical RAM since this will cause swapping and bad performance.
              1. Run "swap -l", "top", or "vmstat" to verify the system is not swapping and RAM is available
              2. Allow ~400 MB for the operating system
              3. Take into account other software running on the system

                • If one is using Oracle HotSpot VM, make sure to use the “-server” argument.   This is a link to all HotSpot VM Options.
                • If one is using Oracle JRockit RealTime VM, the following parameters are a good starting point
              -Xms1g -Xmx1g -XgcPrio:deterministic -XpauseTarget=10ms -Xverbose:opt,memory,gc,gcpause,memdbg,compaction,starttime,load,cpuinfo,systemgc -Xverbosetimestamp -Xgcreport -Xverboselog:/full/path/logfile.log

                • If one is running into strange Network issues, make sure to first add the following flag to the Oracle Coherence Server nodes: -Djava.net.preferIPv4Stack=true

                  Oracle Coherence Application Tuning
                    1. Use the getAll() and putAll() APIs, which batch many operations into a single network call and will result in a huge performance improvement
                    2. Serialization:  Use either the ExternalizableLite interface or POF (Portable Object Format) from Coherence.  POF will result in the best performance gains (up to seven times (7x) compared to java.io.Serializable); however, there is more initial coding work with POF.
                    3. Entry Processors can be used to update data in place instead of doing: lock(id), value = get(id), put(id, newValue), unlock(id)
                    4. Send the operation once across the wire, instead of multiple locks/gets/puts, in order to execute business rules where the data lives.  Three times (3x) improvement from fewer network hops.  Use the invokeAll() API and look into setting the thread count in the cache configuration to higher than one for this.
                    5. Use "lite" events: event listeners can be configured to receive or not receive the old/new values, reducing the data sent with each event.
                    6. Database integration – Caching Strategy 
                      1. Coherence Behind - Use Oracle Coherence as L2 Cache for ORM (Oracle TopLink JPA)
                      2. Coherence to the side - the application manages data CRUD in Oracle Coherence alongside the OR/M
                      3. Coherence on Top - Coherence is the system of record, use cacheloaders and cache stores to integrate with Data Sources
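On the serialization point above: Coherence's ExternalizableLite follows the same pattern as the JDK's own java.io.Externalizable, where you write and read the field values yourself in a fixed order instead of letting default serialization emit per-field metadata. As a stdlib-only illustration of why this shrinks the payload (the Trade classes and their fields are made up; this is not Coherence's API):

```java
import java.io.*;

public class SerializationSizeDemo {
    // Default Java serialization: per-field metadata travels in the class descriptor.
    public static class DefaultTrade implements Serializable {
        String symbol; int quantity; double price;
        public DefaultTrade(String s, int q, double p) { symbol = s; quantity = q; price = p; }
    }

    // Hand-written serialization, the same pattern as Coherence's ExternalizableLite:
    // only the field values are written, in a fixed order.
    public static class CompactTrade implements Externalizable {
        String symbol; int quantity; double price;
        public CompactTrade() { } // Externalizable requires a public no-arg constructor
        public CompactTrade(String s, int q, double p) { symbol = s; quantity = q; price = p; }
        @Override public void writeExternal(ObjectOutput out) throws IOException {
            out.writeUTF(symbol); out.writeInt(quantity); out.writeDouble(price);
        }
        @Override public void readExternal(ObjectInput in) throws IOException {
            symbol = in.readUTF(); quantity = in.readInt(); price = in.readDouble();
        }
    }

    // Serializes the object to a byte array and returns its size in bytes.
    public static int serializedSize(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(o);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        int plain = serializedSize(new DefaultTrade("ORCL", 100, 23.5));
        int compact = serializedSize(new CompactTrade("ORCL", 100, 23.5));
        System.out.println("Serializable: " + plain + " bytes, Externalizable: " + compact + " bytes");
    }
}
```

POF goes further still by removing the Java class descriptor entirely in favor of compact integer type IDs, which is where the larger gains come from.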

                  Review the Production Checklist which contains information on the following topics:
                      1. Network
                      2. Hardware
                      3. Operating System
                      4. JVM
                      5. Java Security Manager
                      6. Application Instrumentation
                      7. Coherence Editions and Modes
                      8. Coherence Operational Configuration
                      9. Coherence Cache Configuration
                      10. Large Cluster Configuration
                      11. Other Resources

                     

                    Oracle Coherence Networking Links

                    Coherence TCMP Network Protocol Explanation  TCMP stands for “Tangosol Cluster Management Protocol” which is based on UDP Unicast.

                    Important Links

                    Best Practices 
                    Performance Tuning
                    Production Checklist
                    Coherence Technical FAQ
                    Testing and Tuning Coherence
                    Sample Cache Configuration Files
                    Platform-Specific Deployment Considerations
                    Patterns for Pre-Loading the Cache

                    Wednesday, May 26, 2010

                    Oracle Coherence 3.5 Book Review Results

                    Review of Oracle Coherence 3.5 Book

                    The following is my review of the Oracle Coherence 3.5 eBook from Packt Publishing.

                    High-Level Overview
                    • I thought this was an excellent book on Oracle Coherence! 
                    • Each chapter builds upon the previous like building blocks to form a solid foundation of knowledge, and it makes an excellent reference book. 
                    • The core chapters (5, 6, 7, 8) are very powerful and a great read. I highly recommend reading this book!

                    Detail:

                    • Preface: Review/Refresh of "What Coherence is..." for everyone.  More advanced users of Coherence can skip this part.
                    • Chapter 1: A review and background of RASP, basic testing and foundations of applications/application testing.  "Why/What Coherence is..."  A good review for people of all levels.
                    • Chapter 2: Getting started with Coherence from installation to the first application.  Excellent starting part for Coherence users to get their hands on starting and troubleshooting a basic Coherence application and setup.
                    • Chapter 3: Great information on the different types of caches and when to best make use of each one.
                    • Chapter 4: This goes into the concept of Domain Models and Serialization choices (Portable Object Format - POF)  for your application.
                    • Chapters 5, 6, 7, 8: I really enjoyed these next several chapters since they get into the fun stuff of  Coherence!   Including:
                      • Querying the data-grid with a variety of options in the most performant ways
                      • Submitting work-requests to the Coherence Data Grid from your application where the power of the Data Grid is used to parallel process and aggregate your request
                      • Data Grid Events where you can do multiple things like listen for data changes and take various actions.
                      • Finally using Coherence and backing data stores (the database is the classic example here) to use the power of Coherence as the Data Grid layer and read/write changes to/from the backing data store.    All-in-all, these core chapters are very powerful and a great read!
                    • Chapter 9:  This goes into good depth on what a Coherence*Extend client is to the data grid, when to use this, and the how-to's on setting it up.
                    • Chapter 10 and 11: Integration from .NET and C++ clients into a Coherence Data Grid gives very good insight if this is a use-case for you.  If not, it is a great read on the power and flexibility of Coherence to be accessed from multiple technologies outside of Java.


                    Suggestions:
                    • I really liked how Chapter 2 included a hands-on part, so more hands-on examples per chapter (where appropriate) would have been very nice. 
                    • The Coherent Bank application contains *a lot* of great code to review and make use of, and it ties all of the chapters together.  I had one issue with the version of the code sample I downloaded from the website when running the application, so hopefully this is updated on the website in the future.  My specific exception was when accessing the deployed web application:

                    Problem accessing /bank/login. Reason:

                    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'testDataCreator' defined in ServletContext resource
                    [/WEB-INF/bank-servlet.xml]: Invocation of init method failed; nested exception is (Wrapped) java.io.NotSerializableException:   com.seovic.samples.bank.domain.Customer

                    Monday, April 19, 2010

                    Coherence 3.5 Book Review

                    I am currently reviewing the new Oracle Coherence 3.5 eBook out from Packt Publishing and hope to have some comments out soon.  Looks to be a great book and I look forward to reading it!

                    Monday, February 15, 2010

                    Setting up Oracle Data Integrator (ODI) with Change Data Capture (CDC): DB Table to WebLogic Server JMS Queue

                    Introduction:

                    The following shows detailed steps on how to setup ODI with Change Data Capture on a database table so that all changes to a database table are loaded into a target. In this case the target is a JMS Queue hosted on a WebLogic Server 10.3 container.

                    Pre-requisites

                    • Download Oracle 10gR2 Database if it is not already installed
                    • Download ODI 10.1.3.5
                    • Install ODI 10.1.3.5
                    • Review ODI Documentation
                    • Download and install WebLogic Server 10.3
                    • Understanding of JMS Queues

                     

                    Setup ODI Repositories:

                    The following assumes that this will be set up on a Windows machine; however, the same concepts apply to a UNIX/Linux environment.
                    • Create a Master Repository in an Oracle DB. In this case we will set this up under a user in our local Oracle 10gR2 Database. We will use the “system” user and connect to our local DB which was set up earlier (jdbc:oracle:thin:@localhost:1541:orcl). Go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Repository Management -> Create Master Repository
                    • This will take a couple of minutes to create the necessary ODI information within the database and schema one provided.
                    • Start up the Topology Manager and create a Work repository. This is needed prior to starting the ODI Designer. Go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Topology Manager.
                    • Create a connection to the repository by clicking on the “New” icon and fill in the data like below: (SUPERVISOR/SUNOPSIS are the default user/password and need to be in capitals)
                    clip_image001
                    • Go to the Repositories tab (the 5th tab) and create a named work repository. Right click and select “Insert new work repository”.
                    clip_image003
                    • In the Topology Manager, create a Physical Architecture for the Database Schema you will be connecting to (the 1st tab at the bottom). (NOTE: It is assumed that there are some tables in this schema already.) Select the technology (in this case “Oracle”), then right click and select “Insert Data Server”. Before clicking “ok” to save, one will have to create the Logical Schema as well (see later steps). Fill in the user/password for the schema one will be connecting to and fill in the JDBC information in the JDBC tab.
                    clip_image005
                    clip_image007
                    • Next right click this Data Server and select “Insert Physical Schema”. Select the schema one wants, and on the “Context” tab select the Logical Schema. The Logical Schema will first need to be created in the next step before selecting it here and clicking “ok”. (You may get an error while saving this; first save it and then open it up so the Logical Schema can be created.)
                    clip_image009
                    clip_image011
                    • In the Topology Manager, create a Logical Architecture for the Database Schema you will be connecting to (the 3rd tab at the bottom). This will be needed for the Physical Architecture layer you created. Select the technology (in this case “Oracle”), then right click and select “Insert Logical Schema”.
                    clip_image013

                     

                    Setup ODI for use with WebLogic JMS Queues:

                    The following outlines the pre-work necessary to setup ODI to work with WebLogic JMS Queues.
                    Create the JMS queues on WebLogic Server
                    The following outlines the steps necessary to create a basic JMS queue for this example. If one already has a JMS queue defined on a WebLogic Server, this section can be skipped.
                    WLS Domain Creation
                    • 1. Create a default WLS 10.3 Domain. Run <BEA_HOME>\wlserver10.3\common\bin\config.cmd
                    • 2. Select the defaults (7001 as the port, weblogic/weblogic) into some named directory.
                    • 3. Start the newly created domain by running startWebLogic.cmd in the directory where the Domain was created.
                    • 4. Log in to the WLS console: http://localhost:7001/console as weblogic/weblogic

                    Create a JMS Server:
                    • 1. Services -> Messaging -> JMS Server -> Select New
                    • 2. Create a name, select Persistent Store (File Store).
                    • 3. Target this to the AdminServer

                    Create a JMS Module:
                    • 1. Services -> Messaging -> JMS Module -> Select New
                    • 2. At the end, select add resources to it.

                    Create a JMS Resource (Queue in this case):
                    • 1. Select "new" and click "Queue". In this example the JNDI Name is “TestQueueJNDI” and the Queue name is “weblogic.jms.poc.TestQueue”. The Queue name specified here will be the same string used in ODI when defining the Physical Architecture. Please make a note of it.
                    • 2. Click "Create a new Subdeployment"
                    • 3. Target the JMS Server which was just created.
                     
                    Update ODI to work with WLS JMS
                    The following outlines the pre-work necessary to setup ODI to work/connect to WLS JMS Queues.
                    • Within the WebLogic Server installation one will need to create a jar file which ODI will use for connecting to a WLS JMS Queue. Review the documentation
                    • Build the wlfullclient5.jar file for use with ODI by doing the following:
                      • First change the directory to the WL_HOME/server/lib directory
                      • Run the command: java -jar wljarbuilder.jar -profile wlfullclient5
                      • Copy the resulting wlfullclient5.jar file to the <ORACLE_HOME>\oracledi\drivers directory
                    • Modify the <ORACLE_HOME>\oracledi\bin\odiparams.bat file to use JDK 1.5 instead of the default of JDK 1.4. This is the only ODI file which needs to be modified to use JDK 1.5. Specifically in this file add the following line right before the existing “if” statement:
                    set ODI_JAVA_HOME=C:\bea10x\jrockit_150_11
                    if "%ODI_JAVA_HOME%" == "" set ODI_JAVA_HOME=%JAVA_HOME%
                    • If the ODI Designer is running, shut it down and restart it so the changes take effect. Otherwise one will not be able to successfully test the JMS connection from ODI to WLS.
                     
                    Create JMS Architecture in ODI Topology Manager
                    After restarting ODI Designer, one can create and test the connections to WebLogic Server JMS Queues. The following outlines the steps required to setup JMS Queues in ODI.
                    • In the Topology Manager, create a Physical Architecture for the WLS JMS Queue you will be connecting to (the 1st tab at the bottom).
                    • Right click on JMS Queue in the Physical Architecture. Fill in the Definition and JNDI tabs to look like the following screenshots. Modify the host:port name and user/password to what is defined in the WebLogic Server Domain one is connecting to.
                    clip_image015
                    clip_image017
                    • Click “ok” to save. (There may be a message about the Logical Layer; however, that is created in the next step.)
                    • Create the Logical Layer for this JMS Queue.
                    clip_image019
                    • Make sure the Physical Schema for the JMS Data Server has the Logical Layer defined in the “Context” tab.
                    clip_image021
                    • Now one can test the connection to WLS JMS to make sure everything is setup correctly. Open up the Data Server defined for the WLS JMS Queue and click the “Test” button at the bottom.
                    clip_image023

                     

                    ODI Designer:

                    • Start up the ODI Designer: Go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Designer
                    • Create a new connection to this new Master Repository and Work Repository. Click the “New” icon next to the default “Login Name”
                    • Connect with the SUPERVISOR/SUNOPSIS (user/password and all capitals), supply the Oracle Database information where the master repository was created, and the name of the Work Repository which was just created.
                    clip_image024
                    • Back in the Oracle Designer we will now create a Model based on the Logical Schema we created in the Topology Manager.
                    • Create a Project under the Projects tab.
                    • Next import the Knowledge Modules. Right click the project, select Import, and then “Import Knowledge Modules”. Select the directory under <ORACLE_HOME>\oracledi\impexp, and then select each knowledge module individually. We will select:
                      • CKM SQL
                      • IKM SQL Incremental Update
                      • IKM SQL to JMS Append
                      • JKM Oracle 10g Consistent (LOGMINER)
                      • LKM SQL to SQL
                    clip_image025

                     

                    Create a Model for the Database table

                    • Right click and select “Insert Model”.
                    • In the definition tab, select the technology and Logical Schema (created in the Topology Manager) and then click the “Reverse” button to get the tables for this schema into the model view. Click “ok”. NOTE: There are SQL statements to create the test table for the schema user in the APPENDIX section which one can use to test with.
                    clip_image027
                    • Double click the Model name you created and select the Journalizing tab. Make sure that the tab is filled out and looks like the following. Because you want to capture changes to the tables in a consistent fashion, you select the Consistent option and the JKM Oracle 10g Consistent (LOGMINER) knowledge module. This knowledge module, shown in the figure below, will capture new and changed data, using the LogMiner feature of Oracle Database 10g. Read the notes section for additional details on the user accessing the tables.
                    clip_image029
                    • Click “ok”.
                    • Next right-click the tables in turn, and choose Changed Data Capture -> Add to CDC. Then edit the model again to select the Journalized Tables tab. If there are multiple tables, use the up and down arrows to place the tables in the correct order.
                    • Next add a subscriber to the journal by returning to the Designer application, right-clicking the Model, and choosing Changed Data Capture -> Subscriber -> Subscribe. You add a new subscriber and execute the code locally to ensure that the code executes correctly. Once this step is complete, you have set up the changed-data capture process and you are ready to begin building your interfaces.
                    • Now one is ready to create the journal that captures changed data from these two tables. To do this, right-click the model again and choose Changed Data Capture -> Start Journal. Click OK to execute the code locally, and then start up the Operator application to check the progress of the operation. If all has gone well, you will be presented with a list of completed steps similar to the following. To start the Operator, go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Operator and log in to the repository when presented with the same login screen.
                    clip_image031
                    • At this point if CDC is working properly the table should have a green icon instead of the yellow/orange icon. If this is not the case, then CDC is not properly setup.

                     

                    Create a Model for JMS DataStore in ODI Designer

                    • Right click and select “Insert Model”.
                    • In the definition tab, select the technology and Logical Schema (created in the Topology Manager).
                    clip_image033
                    • Next within this Model, right click and select “Insert Datastore”.
                    • Create a Datastore, making sure the Name and Resource Name are set to the name of the JMS Queue which was created within WebLogic Server. Also note that the “Datastore Type” is set to “table”.
                    clip_image035
                    • Within this Datastore, click on the “Files” tab and define the format of the JMS Text Message. In this case the JMS Text Message sent to the queue will contain all of the columns of data from the DB table separated by “,”. The delimiter can be anything one chooses.
                    clip_image037
                    • Next define the columns of data which will be added to this JMS Text Message. In this case we are adding all of the columns of data from the DB table into the resulting JMS Message. Right click on the Column name and select “Insert Column”. These columns will be used in the mappings of the interface defined later on.
                    clip_image039
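The delimited message layout above can be illustrated with a short sketch. The following Python snippet is illustrative only (ODI generates the message itself); the field names are taken from the sample SUPPORT_CASE table in the appendix, and it shows how one row maps to the comma-delimited text payload and back:

```python
# Sketch of the comma-delimited JMS text payload defined in the "Files" tab.
# Field names follow the sample SUPPORT_CASE table; the delimiter (",")
# matches the one chosen in the datastore definition.

FIELDS = ["customerid", "name", "email", "address", "city",
          "state", "zip", "country", "phone", "orderid"]
DELIMITER = ","

def build_payload(record: dict) -> str:
    """Serialize one row into the delimited text message body."""
    return DELIMITER.join(str(record[f]) for f in FIELDS)

def parse_payload(payload: str) -> dict:
    """Split a received text message back into named columns."""
    return dict(zip(FIELDS, payload.split(DELIMITER)))

row = {"customerid": "2", "name": "Steven Pozarycki", "email": "steve@oracle.com",
       "address": "22 Pinckney Street", "city": "Morris Plains", "state": "NJ",
       "zip": "07950", "country": "USA", "phone": "617-863-4444", "orderid": "500"}
msg = build_payload(row)
assert parse_payload(msg) == row
```

Note that this naive split assumes the delimiter never appears inside the data, which is one reason the delimiter chosen in the “Files” tab should not occur in any column value.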

                     

                    Create the Interface (DB to WLS JMS)

                    • Now we can create a new interface which will take the data from the SUPPORT_CASE table in an Oracle Database when CDC happens and load it into a JMS Queue hosted in a WebLogic Server container.
                    • In the Project tab, right click the interface to create a new interface.
                    • Make sure the checkbox for “Staging Area Different from Target” is selected. Select the Logical Schema for the staging area. In this case it is the Oracle DB.
                    clip_image041
                    • Click on the Diagram tab and drag and drop the tables from the model for the target (“weblogic.jms.poc.testQueue” Datastore in this example) and the table for the source onto the respective locations on the Diagram Tab.
                    • Join and map any columns to the target column in the Diagram tab.
                    • Also on the Diagram tab of the Interface, select the source and enable “Journalized Data Only”. Make sure the filter created uses the desired subscriber name. The resulting picture should look something like this:
                    clip_image043
                    • Click on the Flow tab to see how the data will be combined. Here one will see two boxes: one for the Staging Area (which is on the Database) and one for the Target Area (the WLS JMS Queue). Click on the Target Area and in the IKM dropdown box select “IKM SQL to JMS Append”. Click “apply” and there should be no errors.
                    clip_image045
                    • Next select the Controls tab to make sure the Control Knowledge Module was selected. This is used for handling constraint errors in the target table. Select the CKM SQL Knowledge Module, which will handle erroneous data for any ISO-92-compliant database. Click “OK”.
                    • The interface is now successfully created to load journalized data from the DB table into a WLS JMS Queue. Next the Package and Scenario will be created to automate the flow of this at runtime.

                     

                    Create the ODI Packages and Execute:

                    Now create an Oracle Data Integrator package to carry out the following steps:
                    • Check the SUPPORT_CASE journalized data to see if new or changed data records have been added.
                    • If journalized data is detected, extend the journal window.
                    • Execute the interface to read from the journalized data and load the target data store which in this case will be a WebLogic Server JMS Queue.
                    • Purge the journal window.
                    • Start this package again. (This will be in a loop ready for new journal changes.) Creating this package and then deploying it as an Oracle Data Integrator scenario effectively creates a real-time, continuously running ETL process.
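The control flow of the package described above can be sketched as follows. This Python snippet is only a simulation of the loop (detect changes, extend and lock the window, run the interface, purge, repeat); the real steps are ODI tools and model operations, and all names here are hypothetical stand-ins:

```python
# Illustrative simulation of the package's control flow: wait for journalized
# rows, extend/lock the journal window, run the interface, then purge.
# All names are hypothetical stand-ins for the ODI steps, not a real API.
from collections import deque

journal = deque()          # stands in for the journal of captured changes
target = []                # stands in for the WLS JMS queue / target datastore

def run_package_once():
    """One pass of the package: detect -> extend window -> load -> purge."""
    if not journal:                      # OdiWaitForLogData: nothing to do yet
        return False
    window = list(journal)               # Extend Window + Lock Subscribers:
                                         # snapshot a consistent set of changes
    target.extend(window)                # interface: load the journalized rows
    for _ in window:                     # Purge Journal + Unlock Subscribers:
        journal.popleft()                # drop only the consumed changes
    return True

journal.extend(["row1-changed", "row2-changed"])
run_package_once()
assert target == ["row1-changed", "row2-changed"] and not journal
```

In the real package the final step re-invokes the scenario asynchronously, so each pass runs as a fresh session in an effectively endless loop.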





                    Create the Package and Scenario for setting up CDC

                    The purpose of this is to automate setting up CDC within one’s environment.
                    • 1. To create this package, navigate to the Projects tab in the Designer application, locate the folder containing the interfaces you defined earlier, find the Packages entry, right-click it, and select Insert Package. Give the package a name and then navigate to the Diagram tab in the package details dialog box.
                    • 2. The next step in this package will be to drag the Model just created onto the canvas. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 3. Select “Stop” under Journalizing so that, if journalizing was already running, it is stopped and the package can be successfully rerun.
                    clip_image047
                    • 4. The next step in this package will be to drag the Model just created onto the canvas again. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 5. Under the “Journalizing” section of the General tab, select “Start” and “Add Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image049
                    • 6. Connect the first step to this step with a green “ok” arrow and a red “ko” arrow.
                    • 7. Right click on this package to create a Scenario.
                    • 8. This scenario can be executed either from the tool by right clicking and selecting “Execute” or by running this from the command line.
                    • 9. Execute this Scenario so that Journaling is initially started. The next section will explain how to create and setup the main CDC process in a loop so that changes are continuously consumed.

                     

                    Create the Package and Scenario for the Main CDC process

                    • 1. To create this package, navigate to the Projects tab in the Designer application, locate the folder containing the interfaces you defined earlier, find the Packages entry, right-click it, and select Insert Package. Give the package a name and then navigate to the Diagram tab in the package details dialog box.
                    • 2. Using the toolbox on the right, go to the Event Detection folder and add the OdiWaitForLogData tool to the package canvas, as shown in the figure below. This tool will monitor the journalized data on a regular basis. In this case the step name has been changed to “Waiting for Changes”.
                    • 3. In this step update the following parameters:
                      • a. Logical Schema: Change this to the Logical Schema being used in your interface
                      • b. Subscriber: The subscriber name
                      • c. CDC Set: Change this to your Model Name and the Logical Schema name. It is in the form of: <%=odiRef.getObjectName("L","model_code","logical_schema", "D")%>
                    Example: <%=odiRef.getObjectName("L","ORACLE_SPOZ2_MODEL","Oracle 10gR2 Logical Schema SPOZ2", "D")%>
                    clip_image051
                    • 4. The next step in this package will be to drag the Model just created containing the Database table onto the canvas. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 5. Under the “Consumption” section of the General tab, select “Extend Window” and “Lock Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image053
                    • 6. Connect the first step to this step with a green “ok” arrow.
                    • 7. Next drag the Interface just created for populating the WLS JMS Queue onto the canvas after the previous step.
                    clip_image055
                    • 8. Connect the second step to this step for the interface with a green “ok” arrow.
                    • 9. The next step in this package will be to drag the Model again onto the canvas after the interface. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 10. Under the “Consumption” section of the General tab, select “Purge Journal” and “Unlock Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image057
                    • 11. Connect the third step from the interface to this step with a green “ok” arrow.
                    • 12. Right click on this package to create a Scenario.
                    clip_image059
                    • 13. After creating this Scenario, open up the package just created.
                    • 14. In this package, drag and drop the scenario to the last step in the list after “Purge and Unlock”.
                    • 15. Connect the fourth step from the “Purge and UnLock” step to this final step (the scenario) with a green “ok” arrow.
                    • 16. Click on the named scenario which was just added to the package and make sure to select “Asynchronous” in the parameters listed under the General tab.
                    clip_image061
                    • 17. Click “ok” to save and close the package.
                    • 18. Right click the scenario created in step 12 and click “Regenerate”.
                    clip_image063
                    • 19. The reason for doing steps 13 through 17 is that each asynchronous restart begins a new session, which keeps the log files in the work repository from growing too quickly and too large; the log of a single long-running session could not otherwise be cleaned.
                    • 20. This scenario can be executed either from the tool by right clicking and selecting “Execute” or by running this from the command line.

                     

                    Testing the Main CDC Scenario:

                    • 1. Make sure the Journal has been started by executing the scenario you created. Example: “SETTING_UP_CDC_SCEN”.
                    • 2. Make sure the main CDC scenario is executing by right clicking the named Scenario (for example “SUPPORT_CASE_TO_WLS_JMS_Q_SCEN”) and then starting the “Operator” utility. Here you should see something like this (notice the green icon that does not have a checkmark):
                    clip_image065
                    • 3. Go to the Model section, right click on the table with CDC setup on it and select “Data”.
                    • 4. In the window which comes up, change some data and hit Enter. Click “ok” so the changes take effect. (Notice that changes were made to the name by appending “7777” to it.)
                    clip_image067
                    • 5. Go back to the Operator utility, click Refresh and you’ll notice that the Package ran and the interface to populate the JMS Queue executed. Double click on the “Pop. WLS JMS Queue” and select the “Execution” tab. There you will notice that the “Number of Inserts” shows a “1” meaning that one JMS message was sent to the target WLS JMS Queue.
                    clip_image069
                    • 6. Also notice that the scenario was stopped and restarted (by adding the scenario to the package to start again) so as to avoid any log issues in the ODI Work Repository.
                    • 7. Login into the WebLogic Server Console, click on “Services”, “Messaging”, “JMS Modules”, the name of the JMS Module created, the name of the Queue created, click the Monitoring tab, select the box next to the Queue name and then click “Show Messages”.
                    clip_image071
                    • 8. Click on the Message ID to view the contents of the message in the JMS Queue.
                    clip_image073
                    • 9. Notice the change which was made to the “name” and that each field from the DB table is separated by a “,” (as was previously setup in the “Files” tab of the data store defined in the Model).
                    • 10. This scenario will continue to run (by design in the Package created) until one chooses to stop it.

                     

                    APPENDIX:

                    SQL statements to create the sample SUPPORT_CASE and SUPPORT_CASE_TRG tables.

                    drop table support_case;
                    create table support_case
                    (
                    customerid varchar2(20),
                    name varchar2(50),
                    email varchar2(30),
                    address varchar2(50),
                    city varchar2(40),
                    state varchar2(20),
                    zip varchar2(15),
                    country varchar2(30),
                    phone varchar2(20),
                    orderid varchar2(20),
                    constraint supportcase_pk primary key(customerid)
                    );
                    INSERT INTO support_case VALUES('1','Scott King','sking@oracle.com', '200 Main Street', 'New York', 'NY', '10022', 'USA', '212-555-1212', '200');
                    INSERT INTO support_case VALUES('2','Steven Pozarycki', 'steve@oracle.com', '22 Pinckney Street', 'Morris Plains', 'NJ', '07950', 'USA', '617-863-4444', '500');
                    INSERT INTO support_case VALUES('3', 'Murali M', 'murali@oracle.com', '180 Patriots Road', 'Boston', 'MA', '02116', 'USA', '617-285-2222', '300');
                    INSERT INTO support_case VALUES('4','Lloyd Doe', 'lloyd@oracle.com', '56 Cambridge Street', 'Boston', 'MA', '02116', 'USA', '617-452-3333', '400');
                    commit;

                    drop table support_case_trg;
                    create table support_case_trg
                    (
                    customerid varchar2(20),
                    name varchar2(50),
                    email varchar2(30),
                    address varchar2(50),
                    city varchar2(40),
                    state varchar2(20),
                    zip varchar2(15),
                    country varchar2(30),
                    phone varchar2(20),
                    orderid varchar2(20),
                    constraint supportcase_trg_pk primary key(customerid)
                    );
                    commit;

                    Tuesday, February 2, 2010

                    Setting up Oracle Data Integrator (ODI) with Change Data Capture (CDC) between two Database Tables

                    Introduction:

                    The following shows detailed steps on how to set up ODI with Change Data Capture on a database table so that all changes to the table are loaded into a target. In this case a different database table is the target; however, the target could be anything.

                    Pre-requisites


                    Setup ODI:

                    The following assumes that this will be set up on a Windows machine; however, the same concepts apply to a UNIX/Linux environment.
                    • Create a Master Repository in an Oracle DB. In this case we will set this up under a user in our local Oracle 10gR2 Database. We will use the “system” user and connect to our local DB which was set up earlier (jdbc:oracle:thin:@localhost:1541:orcl). Go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Repository Management -> Create Master Repository
                    • This will take a couple of minutes to create the necessary ODI information within the database and schema one provided.
                    • Startup the Topology Manager and create a Work repository. This is needed prior to starting up the Oracle Designer. Goto Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Topology Manager.
                    • Create a connection to the repository by clicking on the “New” icon and fill in the data like below: (SUPERVISOR/SUNOPSIS are the default user/password and need to be in capitals)
                    clip_image001
                    • Go to the Repositories tab (the 5th tab from the beginning) and create a named work repository. Right click and select “Insert new work repository”.
                    clip_image003
                    • In the Topology Manager, create a Physical Architecture for the Database Schema to which you will be connecting (the 1st tab at the bottom). (NOTE: It is assumed that there are some tables in this schema already.) Select the technology (in this case “Oracle”), then right click and select “Insert Data Server”. Before clicking “ok” to save, one will have to create the Logical Layer as well (see later steps). Fill in the user/password for the schema one will be connecting to and fill in the JDBC information in the JDBC tab.
                    clip_image005
                    clip_image007
                    • Next right click this Data Server and select “Insert Physical Schema”. Select the schema one wants, and then on the “Context” tab one will select the Logical Schema. The Logical Schema will first need to be created in the next step before selecting it here and clicking “ok”. (You may get an error while saving this; save it anyway and then reopen it so the Logical Schema can be created.)
                    clip_image009
                    clip_image011
                    • In the Topology Manager, create a Logical Architecture for the Database Schema to which you will be connecting (the 3rd tab at the bottom). This will be needed for the Physical Architecture layer you created. Select the technology (in this case “Oracle”), then right click and select “Insert Logical Schema”.
                    clip_image013
                    • Startup the ODI Designer: Goto Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Designer
                    • Create a new connection to this new Master Repository and Work Repository. Click the “New” icon next to the default “Login Name”.
                    • Connect with the SUPERVISOR/SUNOPSIS (user/password and all capitals), supply the Oracle Database information where the master repository was created, and the name of the Work Repository which was just created.
                    clip_image014
                    • Back in the Oracle Designer we will now create a Model based on the Logical Schema we created in the Topology Manager.
                    • Create a Project under the Projects tab.
                    • Next import the Knowledge Modules. Right click the project, select Import, and then “Import Knowledge Modules”. Select the directory under <ORACLE_HOME>\oracledi\impexp, and then select each knowledge module individually. We will select:
                              • CKM SQL
                              • IKM SQL Incremental Update
                              • JKM Oracle 10g Consistent (LOGMINER)
                              • LKM SQL to SQL
                    clip_image015
                    • Create a Model. Right click and select “Insert Model”.
                    • In the definition tab, select the technology and Logical Schema (created in the Topology Manager) and then click the “Reverse” button to get the tables for this schema into the model view. Click “ok”. NOTE: There are SQL statements to create the test tables for the schema user in the APPENDIX section which one can use to test with.
                    clip_image017
                    • Double click the Model name you created and select the Journalizing tab. Make sure that the tab is filled out and looks like the following. Because you want to capture changes to the tables in a consistent fashion, you select the Consistent option and the JKM Oracle 10g Consistent (LOGMINER) knowledge module. This knowledge module, shown in the figure below, will capture new and changed data, using the LogMiner feature of Oracle Database 10g. Read the notes section for additional details on the user accessing the tables.
                    clip_image019
                    • Click “ok”.
                    • Next right-click the tables in turn, and choose Changed Data Capture ->Add to CDC. Then edit the model again to select the Journalized Tables tab. If there are multiple tables use the up and down arrow keys to place the tables in the correct order.
                    • Next add a subscriber to the journal by returning to the Designer application, right-clicking the Model, and choosing Changed Data Capture ->Subscriber->Subscribe. You add a new subscriber and execute the code locally to ensure that the code executes correctly. Once this step is complete, you have set up the changed-data capture process and you are ready to begin building your interfaces.
                    • Now one is ready to create the journal that captures changed data from these two tables. To do this, right-click the model again and choose Changed Data Capture ->Start Journal. Click OK to execute the code locally, and then start up the Operator application to check the progress of the operation. If all has gone well, you will be presented with a list of completed steps similar to the following. To start the Operator, go to Start -> All Programs -> <ORACLE_HOME> -> Oracle Data Integrator -> Operator and log in to the repository when presented with the same login screen.
                    clip_image021
                    • At this point if CDC is working properly the table should have a green icon instead of the yellow/orange icon. If this is not the case, then CDC is not properly setup.
                    • Now we can create a new interface which will take the data from the SUPPORT_CASE table in an Oracle Database when CDC happens and load it into a different table called SUPPORT_CASE_TRG in an Oracle Database.
                    • In the Project tab, right click the interface to create a new interface. Drag and drop the tables from the model onto the respective target and source locations on the Diagram Tab.
                    • Join and map any columns to the target column in the Diagram tab.
                    • Click on the Flow tab to see how the data will be combined.
                    • Next select the Controls tab to make sure the Control Knowledge Module was selected, used for handling constraint errors in the target table. Select the CKM SQL Knowledge Module, which will handle erroneous data for any ISO-92-compliant database. Click “apply”.
                    • Next test the interface. To do this, click Execute at the bottom right corner of the interface dialog and then open the Operator application to check the progress of the interface. Within the Operator application it will show the execution of the interface loading the data into the SUPPORT_CASE_TRG table:
                    clip_image023
                    • Now that we know the interface works, we can update the interface to load the changed data via the Journal tables that were created earlier. You can verify that the four records were put into the SUPPORT_CASE_TRG table by right clicking on the table and selecting “View Data”.
                    • Prepare the Data with Consistent Journaling only: On the model: right click -> Changed Data Capture -> Consumption -> Extend window
                    • Then Lock the subscriber which will be consuming on the Model. On the model: right click -> Changed Data Capture -> Consumption -> Lock Subscribers
                    • Go back into the Diagram tab on the Interface, select the source, enable “Journalized Data Only” and click “Apply”.
                    clip_image025
                    • Click on the filter icon in the Diagram and make sure the subscriber is the one you locked earlier. In this case “SUBA”. The Implementation tab should look like this:
                    clip_image027
                    • To test this updated interface, you update some records in the SUPPORT_CASE table (right click, select “Data”, modify a record and click “apply”). Right click the SUPPORT_CASE table: Changed Data Capture -> Journal Data and it should show changes in the Journal table.
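The Extend Window / Lock Subscribers and Purge Journal / Unlock Subscribers steps above bracket the consumption for a reason: they freeze a consistent set of changes, so anything that arrives mid-load stays in the journal for the next pass. The following Python sketch illustrates that behavior with hypothetical names (the real journal lives in database tables managed by the JKM, not in Python lists):

```python
# Sketch of why Extend Window / Lock Subscribers and Purge Journal must
# bracket the load: a change arriving *after* the window is extended stays
# in the journal and is picked up on the next pass. Names are illustrative.

journal = []                                 # stands in for the journal table
consumed = []

def extend_window():
    return list(journal)                     # freeze a consistent set

def purge(window):
    for change in window:                    # remove only the frozen set,
        journal.remove(change)               # never later arrivals

journal.append("update name -> 'Steven 7777'")
window = extend_window()
journal.append("late update")                # arrives mid-consumption
consumed.extend(window)                      # interface reads only the window
purge(window)

assert consumed == ["update name -> 'Steven 7777'"]
assert journal == ["late update"]            # left for the next pass
```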

                     

                    Create the ODI Package and Execute:

                    Now create an Oracle Data Integrator package to carry out the following steps:
                      • Check the SUPPORT_CASE journalized data to see if new or changed data records have been added.
                      • If journalized data is detected, extend the journal window.
                      • Execute the interface to read from the journalized data and load the target data store.
                      • Purge the journal window.
                      • Start this package again. (This will be in a loop ready for new journal changes.) Creating this package and then deploying it as an Oracle Data Integrator scenario effectively creates a real-time, continuously running ETL process.





                    Create the Package and Scenario for setting up CDC

                    The purpose of this is to automate setting up CDC within one’s environment.
                    • 1. To create this package, navigate to the Projects tab in the Designer application, locate the folder containing the interfaces you defined earlier, find the Packages entry, right-click it, and select Insert Package. Give the package a name and then navigate to the Diagram tab in the package details dialog box.
                    • 2. The next step in this package will be to drag the Model just created onto the canvas. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 3. Select “Stop” under Journalizing so that, if journalizing was already running, it is stopped and the package can be successfully rerun.
                    clip_image029
                    • 4. The next step in this package will be to drag the Model just created onto the canvas again. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 5. Under the “Journalizing” section of the General tab, select “Start” and “Add Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image031
                    • 6. Connect the first step to this step with a green “ok” arrow and a red “ko” arrow.
                    • 7. Right click on this package to create a Scenario.
                    • 8. This scenario can be executed either from the tool by right clicking and selecting “Execute” or by running this from the command line.
                    • 9. Execute this Scenario so that Journaling is initially started. The next section will explain how to create and setup the main CDC process in a loop so that changes are continuously consumed.

                    Create the Package and Scenario for the Main CDC process

                    • 1. To create this package, navigate to the Projects tab in the Designer application, locate the folder containing the interfaces you defined earlier, find the Packages entry, right-click it, and select Insert Package. Give the package a name and then navigate to the Diagram tab in the package details dialog box.
                    • 2. Using the toolbox on the right, go to the Event Detection folder and add the OdiWaitForLogData tool to the package canvas, as shown in the figure below. This tool will monitor the journalized data on a regular basis. In this case the step name has been changed to “Waiting for Changes”.
                    • 3. In this step update the following parameters:
                            • a. Logical Schema: Change this to the Logical Schema being used in your interface
                            • b. Subscriber: The subscriber name
                            • c. CDC Set: Change this to your Model Name and the Logical Schema name. It is in the form of: <%=odiRef.getObjectName("L","model_code","logical_schema", "D")%>
                    Example: <%=odiRef.getObjectName("L","ORACLE_SPOZ2_MODEL","Oracle 10gR2 Logical Schema SPOZ2", "D")%>
                    clip_image033
                    • 4. The next step in this package will be to drag the Model just created onto the canvas. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 5. Under the “Consumption” section of the General tab, select “Extend Window” and “Lock Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image035
                    • 6. Connect the first step to this step with a green “ok” arrow.
                    • 7. Next drag the Interface just created onto the canvas after the previous step.
                    • 8. Connect the second step to this step for the interface with a green “ok” arrow.
                    • 9. The next step in this package will be to drag the Model again onto the canvas after the interface. Within the General Tab of this step, select the Drop down under “Type” and select “Journalizing Mode”.
                    • 10. Under the “Consumption” section of the General tab, select “Purge Journal” and “Unlock Subscribers”. Add the “SUBA” subscriber to the list.
                    clip_image037
                    • 11. Connect the third step from the interface to this step with a green “ok” arrow.
                    • 12. Right click on this package to create a Scenario.
                    clip_image039
                    • 13. After creating this Scenario, open up the package just created.
                    • 14. In this package, drag and drop the scenario to the last step in the list after “Purge and Unlock”.
                    • 15. Connect the fourth step from the “Purge and UnLock” step to this final step (the scenario) with a green “ok” arrow.
                    • 16. Click on the named scenario which was just added to the package and make sure to select “Asynchronous” in the parameters listed under the General tab.
                    clip_image041
                    • 17. Click “ok” to save and close the package.
                    • 18. Right click the scenario created in step 12 and click “Regenerate”.
                    clip_image043
                    • 19. The reason for doing steps 13 through 17 is that each asynchronous restart begins a new session, which keeps the log files in the work repository from growing too quickly and too large; the log of a single long-running session could not otherwise be cleaned.
                    • 20. This scenario can be executed either from the tool by right clicking and selecting “Execute” or by running this from the command line.

                    Testing the Main CDC Scenario:

                    • 1. Make sure the Journal has been started by executing the scenario you created. Example: “SETTING_UP_CDC_SCEN”.
                    • 2. Make sure the main CDC scenario is executing by right clicking the named Scenario and then starting the “Operator” utility. Here you should see something like this (notice the green icon that does not have a checkmark):
                    clip_image045
                    • 3. Go to the Model section, right click on the table with CDC setup on it and select “Data”.
                    • 4. In the window which comes up, change some data and hit Enter. Click “ok” so the changes take effect. (Notice that changes were made to the name by appending “7777” to it.)
                    clip_image047
                    • 5. Go back to the Operator utility, click Refresh and you’ll notice that the Package ran.
                    clip_image049
                    • 6. Double click on the “CDC Changes Consumed” and select the “Execution” tab. There you will notice that the “Number of Updates” shows a “1” meaning that one row was updated in the target table.
                    clip_image051
                    • 7. Verify that changes were made to the target table in the database by right clicking on the target table and selecting “Data”.
                    • 8. This scenario will continue to run (by design in the Package created) until one chooses to stop it.
                    clip_image053
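                    Step 7 can also be done directly in SQL. A minimal check, assuming the sample tables from the Appendix below and that the row for customer 2 was the one edited in step 4; after the CDC cycle runs, the source and target names should match:

```sql
-- Compare the edited row in the source and target tables.
SELECT s.customerid,
       s.name AS source_name,
       t.name AS target_name
FROM   support_case s
JOIN   support_case_trg t ON t.customerid = s.customerid
WHERE  s.customerid = '2';
```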

                     

                    APPENDIX:

                    SQL statements to create the sample SUPPORT_CASE and SUPPORT_CASE_TRG tables.

                    drop table support_case;
                    create table support_case
                    (
                    customerid varchar2(20),
                    name varchar2(50),
                    email varchar2(30),
                    address varchar2(50),
                    city varchar2(40),
                    state varchar2(20),
                    zip varchar2(15),
                    country varchar2(30),
                    phone varchar2(20),
                    orderid varchar2(20),
                    constraint supportcase_pk primary key(customerid)
                    );
                    INSERT INTO support_case VALUES('1','Scott King','sking@oracle.com', '200 Main Street', 'New York', 'NY', '10022', 'USA', '212-555-1212', '200');
                    INSERT INTO support_case VALUES('2','Steven Pozarycki', 'steve@oracle.com', '22 Pinckney Street', 'Morris Plains', 'NJ', '07950', 'USA', '617-863-4444', '500');
                    INSERT INTO support_case VALUES('3', 'Murali M', 'murali@oracle.com', '180 Patriots Road', 'Boston', 'MA', '02116', 'USA', '617-285-2222', '300');
                    INSERT INTO support_case VALUES('4','Lloyd Smith', 'lloyd.e.smith@oracle.com', '56 Cambridge Street', 'Boston', 'MA', '02116', 'USA', '617-452-3333', '400');
                    commit;

                    drop table support_case_trg;
                    create table support_case_trg
                    (
                    customerid varchar2(20),
                    name varchar2(50),
                    email varchar2(30),
                    address varchar2(50),
                    city varchar2(40),
                    state varchar2(20),
                    zip varchar2(15),
                    country varchar2(30),
                    phone varchar2(20),
                    orderid varchar2(20),
                    constraint supportcase_trg_pk primary key(customerid)
                    );
                    commit;