
Introduction
Oracle Database 11g provides a unified solution for information sharing: Oracle Streams. Oracle Streams provides a set of elements designed to facilitate the capture, staging, and consumption of events within the Oracle database. Each element of Streams can be configured explicitly or implicitly. Explicit capture allows user applications to enqueue messages directly into a staging area on the source database. Using the implicit capture mechanism, changes made to a database can be efficiently captured and replicated to one or more remote systems with little impact on the originating system.
Oracle Streams replication captures changes from a source database, stages and propagates those changes to one or more remote databases, and then consumes (applies) the changes at each target database.
ORACLE STREAMS REPLICATION
The following illustration demonstrates the Streams architectural elements as used in replication. The figure shows user DML activity on one or more database tables, for example, an update statement that changes a row in the EMP table for an employee whose employee id (empid) is 100. As usual, the change is recorded in the redo log. The capture process, previously configured to collect changes made to the EMP table, retrieves the change from the redo log, formats the information into a logical change record (LCR), and places the LCR into the staging area at the local database. Note that, although in this example the capture process is shown at the source database, for performance reasons you may choose to run it at another location. The captured LCR is then propagated from the staging area (or queue) at the source database and delivered to another database that has subscribed to the changes on the EMP table. The apply engine at this second database is configured to receive the changes on the EMP table. Once the change has been propagated to the new site, the local default apply process at this second database automatically applies the change to the database.
Use Case
This use case covers configuring a replication environment maintained by Oracle Streams. We will demonstrate the steps involved in replicating all the objects of a source schema from the source database into a target schema in the target database.
What we need to do:
- Prerequisites
- Configuration of the Source and Target Databases.
- Replication of Source database into Target database.
Solution
Prerequisites
Here we are replicating all the objects of the SCOTT schema from the DBSOURCE database to the SCOTT schema in the DBTARGET database, where:
Source Database: DBSOURCE
Target Database: DBTARGET
Source Schema Name: SCOTT
Target Schema Name: SCOTT
The following prerequisite steps need to be executed as given:
- Check Streams Unsupported objects present within the Schema
- Run the below query on DBA_STREAMS_UNSUPPORTED to get the list of tables that Streams does not support during replication, along with the reason:

SQL> SELECT table_name, reason FROM dba_streams_unsupported WHERE owner = 'SCOTT';
- Add supplemental logging for all the tables which are part of the Streams replication
- Add supplemental logging for all the tables present in the SCOTT schema at the source side. Connect as SYS with the SYSDBA privilege:

CONN SYS@DBSOURCE AS SYSDBA
- You can use the following options in the ADD SUPPLEMENTAL LOG DATA clause of an ALTER TABLE statement to enable supplemental logging of data at the table level:
alter database force logging;
alter database add supplemental log data;
alter table SCOTT.EMP add supplemental log data (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) columns;
alter table SCOTT.DEPT add supplemental log data (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) columns;
alter table SCOTT.EMPLOYEES add supplemental log data (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) columns;
spool off
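To confirm that the supplemental log groups were created for each table, you can query the DBA_LOG_GROUPS dictionary view; the output will vary with your schema:

```sql
-- List the supplemental log groups created for the SCOTT tables
SELECT table_name, log_group_name, log_group_type
  FROM dba_log_groups
 WHERE owner = 'SCOTT';
```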
Configuration of Source and Target Databases
The below steps need to be executed on both the source database (DBSOURCE) and the target database (DBTARGET), in the following sequence:
STEP 1a: Setting the Environment Variables at DBSOURCE
The database must run in archive log mode. Execute the below set of commands:
CONN SYS@DBSOURCE AS SYSDBA
select * from global_name;
alter system set global_names=true scope=both;
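Before proceeding, you can confirm the archive log mode requirement with a quick dictionary query:

```sql
-- Should return ARCHIVELOG; if it returns NOARCHIVELOG, enable archiving first
SELECT log_mode FROM v$database;
```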
Next, restart the database and make the same changes on the target database.
STEP 1b: Setting the Environment Variables at DBTARGET
The database must run in archive log mode. Execute the below set of commands:
CONN SYS@DBTARGET AS SYSDBA
select * from global_name;
alter system set global_names=true scope=both;
STEP 2a: Creation of Streams Administrator User at DBSOURCE
To perform the forthcoming tasks, the Streams administrator user must be granted the DBA role. Execute the below set of commands to create the user and grant it admin privileges:
CONN SYS@DBSOURCE AS SYSDBA
create tablespace strepadm datafile '/oradata/DBSOURCE/strepadm01.dbf' size 1000m;
create user STRMADMIN identified by STRMADMIN
  default tablespace strepadm
  temporary tablespace temp;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
STEP 2b: Creation of Streams Administrator User at DBTARGET
Similarly, on the target database side, execute the below set of commands:
CONN SYS@DBTARGET AS SYSDBA
create tablespace strepadm datafile '/oradata/DBTARGET/strepadm01.dbf' size 1000m;
create user STRMADMIN identified by STRMADMIN
  default tablespace strepadm
  temporary tablespace temp;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
(If the SCOTT schema is not present on the target side, create it.)
STEP 3: Creating DB Link at DBSOURCE
Connected as the Streams administrator, create the database link that will be used for propagation from DBSOURCE.
Add the TNS entry details for DBTARGET in the tnsnames.ora file, then execute the below set of commands:
CONN STRMADMIN@DBSOURCE
create database link DBTARGET connect to STRMADMIN identified by STRMADMIN using 'DBTARGET';
(This step need not be executed on the target DB side.)
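A quick way to confirm the database link and TNS entry are working is a remote query over the link:

```sql
-- A single row (DUMMY = 'X') indicates the link resolves and connects
SELECT * FROM dual@DBTARGET;
```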
STEP 4a: Create Queue and Queue Table at DBSOURCE
Connected as the Streams administrator, create the Streams queue and queue table at DBSOURCE. This queue belongs to the source database side. Set the parameters as given and run the below commands:
connect STRMADMIN@DBSOURCE
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name  => 'STREAMS_QUEUE_Q',
    queue_user  => 'STRMADMIN');
END;
/
STEP 4b: Create Queue and Queue Table at DBTARGET
Connected as the Streams administrator, create the Streams queue and queue table at DBTARGET. This queue belongs to the target database side. Execute the below set of commands on DBTARGET:
conn STRMADMIN@DBTARGET
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name  => 'STREAMS_QUEUE_Q',
    queue_user  => 'STRMADMIN');
END;
/
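On either database, you can confirm that the queue was created and is enabled for enqueue and dequeue by querying DBA_QUEUES:

```sql
-- Verify the Streams queue; enqueue_enabled and dequeue_enabled should be YES
SELECT name, queue_table, enqueue_enabled, dequeue_enabled
  FROM dba_queues
 WHERE owner = 'STRMADMIN';
```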
STEP 5a: Export dump from the Source Database.
The contents of the parameter file exp_streams.par are set as given below before exporting the dump from the source database.
file=exp_streams.dmp
log=exp_streams.log
object_consistent=y
OWNER=SCOTT
STATISTICS=NONE

$ vi exp_streams.par
$ exp USERNAME/PASSWORD parfile=exp_streams.par
The scp command is used to copy the exported dump file from the source server to the target server, where it will be imported into DBTARGET.
$ scp dump_filename.dmp username@hostname:/dir1/imp/
STEP 5b: Import Dump into the Target Database.
Next, to import the dump into the target database, execute the below command with parameters set as follows:
$ imp USERNAME/PASSWORD FROMUSER=SCOTT TOUSER=SCOTT FILE=exp_streams.dmp LOG=imp_streams.log STREAMS_INSTANTIATION=Y IGNORE=Y COMMIT=Y
(If on the target side, the SCOTT schema doesn’t exist, we need to execute export steps from the source database and import into the target DB).
Replication of Source Database into Target Database
The Capture, Propagate, and Apply steps are the three major elements of the replication mechanism. The below diagram gives a clear picture of how data from the source is replicated into the destination tables.
Continue processing with the below Rule creation steps at DBSOURCE and DBTARGET to enable successful Replication.
STEP 6a: Create Propagation Process at DBSOURCE
Login to the source database, and then create a rule for the propagation process, which sends changes from the source queue to the target queue over the database link. Execute the below set of commands:
conn strmadmin@DBSOURCE
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'SCOTT',
    streams_name           => 'STREAM_PROPAGATE_P1',
    source_queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE_Q@DBTARGET',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'DBSOURCE');
END;
/
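Once created, the propagation can be checked from the DBA_PROPAGATION dictionary view; its status should show ENABLED:

```sql
-- Verify the propagation between the source and target queues
SELECT propagation_name, destination_dblink, status
  FROM dba_propagation;
```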
STEP 6b: Create Apply Process at DBTARGET
Login to the target database, and then create a rule for the apply process. After creating the rule, we need to start the apply process at DBTARGET. To create an apply process, the STRMADMIN user must be granted the DBA role. Execute the below set of commands with the following parameters:
CONN STRMADMIN/STRMADMIN@DBTARGET
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'APPLY',
    streams_name    => 'STREAM_APPLY_A1',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'DBSOURCE');
END;
/
STEP 7a: Create Capture Process at DBSOURCE
Login to the source database, and then create a rule for the Capture process. After creating the rule, we need to start the Capture process at the source database. Execute the below set of commands with the parameters as given below:
CONN strmadmin@DBSOURCE
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAM_CAPTURE_C1',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'DBSOURCE');
END;
/
STEP 7b: Start the Apply Process at DBTARGET
Now, execute the below set of commands to start the Apply process at the target database:
connect STRMADMIN@DBTARGET
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'STREAM_APPLY_A1');
END;
/
Set the parameter disable_on_error to 'n', so that the apply process does not abort on every error:
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAM_APPLY_A1',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
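Any change that the apply process cannot apply is moved to the error queue; you can inspect such errors with a query along these lines:

```sql
-- List apply errors recorded for the target apply process
SELECT apply_name, local_transaction_id, error_number, error_message
  FROM dba_apply_error;
```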
STEP 8: Create Negative Rule at DBSOURCE for Unsupported Tables
Execute the below commands to set a negative rule for all the tables which are unsupported by Streams (the list of tables initially obtained by querying DBA_STREAMS_UNSUPPORTED):
conn strmadmin@DBSOURCE
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'SCOTT.<UNSUPPORTED TABLE NAME>',
    streams_type   => 'capture',
    streams_name   => 'STREAM_CAPTURE_C1',
    queue_name     => 'strmadmin.STREAMS_QUEUE_Q',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => false);
END;
/
(This step is not required at the target.)
STEP 9 : Start the Capture Process at DBSOURCE
We run the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package to start a capture process named STREAM_CAPTURE_C1 at the source:
connect STRMADMIN@DBSOURCE
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'STREAM_CAPTURE_C1');
END;
/
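With both ends started, their state can be verified from the data dictionary; each query should report a status of ENABLED:

```sql
-- On DBSOURCE: check the capture process
SELECT capture_name, status FROM dba_capture;

-- On DBTARGET: check the apply process
SELECT apply_name, status FROM dba_apply;
```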
Conclusion
Oracle Streams simplifies sharing data between databases and database clusters with its concept of unifying message queuing and data replication capabilities. With Oracle Streams replication, the basic elements of capture, staging, and consumption, in combination with user-defined functions, enable us to overcome the issues that hampered older replication methods and help boost replication performance.
References
- http://docs.oracle.com/cd/B28359_01/server.111/b28321/toc.htm
- http://www.oracle.com/technetwork/database/features/data-integration/twp-streams-11gr1-134658.pdf