[Oracle] Create a cascade dataguard configuration



Environment:

  • Red Hat Linux 7
  • Oracle Database 12c


Naming conventions used in this article:

  • PRIMARY_DB: db_unique_name of the primary database of the configuration
  • CASCADING_DB: db_unique_name of the standby database that forwards logs to the CASCADED_DB
  • CASCADED_DB: db_unique_name of the standby database that receives logs from the CASCADING_DB

Hi all,

In this tutorial we are going to create a cascade dataguard configuration. In this configuration, one or more of the standby databases do not receive archived logs directly from the primary database, but from one of the other standby databases. This can be useful in many situations, for example when the primary site is not directly reachable from the site of the cascaded database, or simply to offload the work of shipping archived logs from the primary database.

NOTE: the standby databases in this configuration are physical standby databases; some restrictions apply to logical standby databases, so refer to the Oracle documentation for those.

Refer to this article to create a standard dataguard configuration between the PRIMARY_DB and the CASCADING_DB:
Create dataguard configuration in Oracle

Then, follow these steps to add a cascaded standby database to the configuration.

First, as in every dataguard configuration, add the following lines to the tnsnames.ora file of every node:
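The exact entries depend on your network; a minimal sketch, assuming a hypothetical hostname cascaded-host and the default listener port 1521, might look like this (similar entries for PRIMARY_DB and CASCADING_DB must also exist on the cascaded node):

```
CASCADED_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cascaded-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CASCADED_DB)
    )
  )
```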


And the following lines to the static listener description in the listener.ora of the cascaded site:
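Hostnames and paths here are hypothetical; assuming an ORACLE_HOME of /u01/app/oracle/product/12.1.0/dbhome_1, the static registration might look like this (if the broker is used, 12.1 also expects a second SID_DESC with GLOBAL_DBNAME = CASCADED_DB_DGMGRL):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CASCADED_DB)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = CASCADED_DB)
    )
  )
```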


Next, be sure that the log_archive_config parameter is the same in all databases and includes every db_unique_name of the dataguard configuration:
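For example, run on every database of the configuration (the parameter value must list all three db_unique_names):

```sql
alter system set log_archive_config='dg_config=(PRIMARY_DB,CASCADING_DB,CASCADED_DB)' scope=both;
```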


And check that the fal_server parameter of each site includes the tns aliases of the other sites as defined in the tnsnames.ora files. In this example, and as a best practice, each tns alias is equal to the db_unique_name of the database it points to.

-- On the CASCADED_DB:
alter system set fal_server='PRIMARY_DB,CASCADING_DB';

-- On the CASCADING_DB:
alter system set fal_server='PRIMARY_DB,CASCADED_DB';

Next, do not create a log_archive_dest on the primary that points to the cascaded database: the primary should only have a log_archive_dest that points to the cascading database. Instead, create a log_archive_dest on the cascading database that points to the cascaded database:

alter system set log_archive_dest_2 = 'service="CASCADED_DB" ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="CASCADED_DB" net_timeout=30 valid_for=(standby_logfile,all_roles)';
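To verify that the new destination on the cascading database is shipping without errors, you can query v$archive_dest (dest_id 2 matches log_archive_dest_2 here):

```sql
select dest_id, status, destination, error
  from v$archive_dest
 where dest_id = 2;
```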

It should work now, but if a broker is used, a few additional steps need to be performed.

Add the database to the broker configuration:

add database 'CASCADED_DB' as connect identifier is CASCADED_DB;

Note that the newly added database remains disabled until you run enable database 'CASCADED_DB'; do this after setting the RedoRoutes in the next step.

And then make use of the RedoRoutes property (only available from version 12.1 onward):

edit database 'PRIMARY_DB' set property RedoRoutes='(local:"CASCADING_DB")';
edit database 'CASCADING_DB' set property RedoRoutes='("PRIMARY_DB":"CASCADED_DB" async)';

These lines tell the broker that the cascading database receives archived logs from the primary database and forwards them to the cascaded database in asynchronous mode.
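Before relying on the new route, it can be worth sanity-checking the cascaded database from DGMGRL (validate database is available from 12.1):

```
validate database 'CASCADED_DB';
```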

The final configuration, as reported by the broker's show configuration command, should be similar to this one:


  Protection Mode: MaxPerformance
  PRIMARY_DB - Primary database
    CASCADING_DB - Physical standby database
      CASCADED_DB - Physical standby database (receiving current redo)

Note the indentation, which highlights the dependencies: each standby receives redo from the database directly above it.

That’s all, see you in the next tutorial!
