Some big updates for the Python Oracle library (cx_Oracle)

There are some really interesting updates to the open source Python driver for Oracle Database (currently known as cx_Oracle, though a rename is part of this change) – check them out here:

https://cjones-oracle.medium.com/open-source-python-thin-driver-for-oracle-database-e82aac7ecf5a


Cross-region event propagation with Oracle Transactional Event Queues

In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ (about 2,000 miles apart).

Of course, there are a lot of reasons why you might want to propagate events like this, and you don’t necessarily have to do it across geographic regions – you might just want to do it across two database instances in the same data center, or even between two topics in the same database!

Here’s a quick diagram of what we are going to build. We are going to use the JMS Pub/Sub model. Our producer will connect to the ASHPROD1 instance and put messages onto the topic ASH_TOPIC. Messages will be propagated from this topic to the topic PHX_TOPIC in the PHXPROD1 instance. Our consumer will connect to PHXPROD1 and consume messages from there.

To get started, let’s create two databases. To follow along, you’ll need an Oracle Cloud account – you can do this with the “Always Free” account using the 21c version of the Autonomous Database, so you can try this without spending any money 🙂 You can also use 19c if you prefer.

Creating the databases

First we log into the Oracle Cloud Infrastructure (OCI) Console at https://cloud.oracle.com. Enter your cloud account name and hit the “Next” button.

After you log in, click on the “hamburger” (three lines) menu (1) and go to “Oracle Database” (2) and then “Autonomous Database” (3) as shown:

Choose your compartment (1), and the region (2) (I’ll use Ashburn and Phoenix – use any two regions you like; even two databases in the same region will work), then click on the “Create Autonomous Database” (3) button:

In the dialog, we need to give the database a name; I used ASHPROD1. Choose “Transaction Processing” as the workload type and “Shared Infrastructure” as the deployment type:

You can accept the default 19c database (or toggle that “Always Free” switch to use 21c). The default 1 OCPU, 1 TB is fine for this exercise. Also provide a password for the administrator (don’t forget it!):

In the “Choose network access” section, choose the option for secure access and click on the “Add My IP Address” button. Choose the “Bring Your Own License (BYOL)” option and provide an email address for the administrator:

Then click on the “Create Autonomous Database” button to create the database.

Now choose the second region, e.g. Phoenix, in the top right corner of the OCI Console and repeat this same process to create a second database, for example called PHXPROD1. This time though, choose the “secure access from anywhere” option, since ASHPROD1 will also need to be able to connect to this instance.

Obtain Database Wallets

So now we have our two databases. Let’s download the wallets so that we can connect to them. The database wallets contain the information needed to connect to, and authenticate with, the database.

In the OCI Console, click on the database name to see the details of the database:

Next, click on the “DB Connection” button:

You will click on the “Download wallet” button (1) to get the wallet file, but while you are here, notice the connection strings (2) – we’ll use one of those later.

After you click on the button, provide a password for the wallet, and then click on the “Download” button:

Repeat this for the other database.

Creating our consumer

Let’s create a new project and write our consumer code. We’ll use Maven to simplify the dependency management and to make it easy to run our consumer. Let’s create a new directory and unzip our two wallets into this directory. So we should see something like this:


/home/mark/src/redstack
├── ASHPROD1
│   ├── README
│   ├── cwallet.sso
│   ├── ewallet.p12
│   ├── ewallet.pem
│   ├── keystore.jks
│   ├── ojdbc.properties
│   ├── sqlnet.ora
│   ├── tnsnames.ora
│   └── truststore.jks
└── PHXPROD1
    ├── README
    ├── cwallet.sso
    ├── ewallet.p12
    ├── ewallet.pem
    ├── keystore.jks
    ├── ojdbc.properties
    ├── sqlnet.ora
    ├── tnsnames.ora
    └── truststore.jks

You will need to edit both of those sqlnet.ora files and update the DIRECTORY so that it is correct. For example, we would change this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

To this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA =
  (DIRECTORY="/home/mark/src/redstack/PHXPROD1")))
SSL_SERVER_DN_MATCH=yes

Let’s add a Maven POM file to set up our project. I am assuming you have Maven and a JDK installed. If not – go get those now 🙂 I am using Maven 3.8.4 and Java 17.0.3. Create a file called pom.xml with this content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.wordpress.redstack</groupId>
    <artifactId>propagation</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>propagation</name>
    <description>Demo of TEQ propagation</description>

    <properties>
        <maven.compiler.target>17</maven.compiler.target>
        <maven.compiler.source>17</maven.compiler.source>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>javax.transaction-api</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.jdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.messaging</groupId>
            <artifactId>aqapi</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>oraclepki</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_core</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.oracle.database.security</groupId>
            <artifactId>osdt_cert</artifactId>
            <version>19.3.0.0</version>
        </dependency>
        <dependency>
            <groupId>javax.jms</groupId>
            <artifactId>javax.jms-api</artifactId>
            <version>2.0.1</version>
        </dependency>
        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.1</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>consumer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.wordpress.redstack.Consumer</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
        <profile>
            <id>producer</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>3.0.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>exec</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <executable>java</executable>
                            <arguments>
                                <argument>-Doracle.jdbc.fanEnabled=false</argument>
                                <argument>-classpath</argument>
                                <classpath/>
                                <argument>com.wordpress.redstack.Producer</argument>
                            </arguments>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>

This defines the Maven coordinates for our project, the dependencies we need to compile and run our code, and also a convenience goal to run the consumer (or producer) directly from Maven so that we don’t have to worry about constructing the class path manually. Let’s also create some directories to store our code:

mkdir -p src/main/java/com/wordpress/redstack
mkdir -p src/main/resources

Now let’s create a Java class in src/main/java/com/wordpress/redstack/Consumer.java with the following content:

package com.wordpress.redstack;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicSubscriber;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Consumer {

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=/home/mark/src/redstack/PHXPROD1";
    private static String topicName = "phx_topic";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = 
           (AQjmsSession) conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber = 
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");

        System.out.println("Waiting for messages...");

        // wait forever for messages to arrive and print them out
        while (true) {

            // the 1_000 is a one second timeout
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000); 
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
            session.commit();
        }
    }

}

Let’s take a look at the interesting parts of that code.

    private static String url = "jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=/home/mark/src/redstack/PHXPROD1";

This defines the URL that we will use to connect to the database. Notice that it is using an alias (phxprod1_high) – that might look familiar; we saw those on the OCI Console when we were downloading the wallet. If you take a look at the tnsnames.ora file in the PHXPROD1 wallet, you will see how it is defined, something like this:

phxprod1_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))(connect_data=(service_name=xxx_phxprod1_high.adb.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))
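The URL is just the thin-driver prefix, the TNS alias from tnsnames.ora, and the wallet directory passed via the TNS_ADMIN parameter glued together. A tiny hypothetical helper (buildUrl is my own name, not part of any Oracle API) makes the structure explicit:

```java
public class JdbcUrl {
    // Hypothetical helper: the thin-driver URL is the alias from
    // tnsnames.ora plus the wallet directory passed via TNS_ADMIN.
    static String buildUrl(String tnsAlias, String walletDir) {
        return "jdbc:oracle:thin:@" + tnsAlias + "?TNS_ADMIN=" + walletDir;
    }

    public static void main(String[] args) {
        // prints jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=/home/mark/src/redstack/PHXPROD1
        System.out.println(buildUrl("phxprod1_high", "/home/mark/src/redstack/PHXPROD1"));
    }
}
```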

In our main() method, we start by connecting to the database instance:

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

Notice that we are reading the password from an environment variable – so you’ll need to set that variable wherever you are going to run this (note – this is not my real password, just an example):

export DB_PASSWORD=Welcome123##
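If DB_PASSWORD is not set, the failure only shows up later as an opaque login error, so it can help to fail fast before creating the data source. A small hypothetical guard (requirePassword is my own name, not part of the JDBC or JMS APIs), written to take the environment map so it is easy to test:

```java
import java.util.Map;

public class PasswordCheck {
    // Hypothetical guard: fail fast with a clear message if DB_PASSWORD
    // is missing, instead of handing null to the data source.
    static String requirePassword(Map<String, String> env) {
        String pw = env.get("DB_PASSWORD");
        if (pw == null || pw.isBlank()) {
            throw new IllegalStateException(
                "DB_PASSWORD is not set - export it before running the consumer or producer");
        }
        return pw;
    }

    public static void main(String[] args) {
        // In the consumer you would call requirePassword(System.getenv())
        // before ds.setPassword(...). Demonstrated here with a sample map:
        System.out.println(requirePassword(Map.of("DB_PASSWORD", "example-only")));
    }
}
```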

Next we set up a TopicConnection, start a JMS Session, look up our Topic and create a Subscriber. This is all fairly standard JMS stuff 🙂

        // create a JMS topic connection and session
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) 
           conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // create a subscriber on the topic
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicSubscriber subscriber =
           (AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");

        System.out.println("Waiting for messages...");

I created a Durable Subscriber and named it BOOK. We’ll see that name again later, remember that!

Finally, we are going to just wait for messages forever and print them out.

        // wait forever for messages to arrive and print them out
        while (true) {
            AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000);
            if (message != null) {
                if (message.getText() != null) {
                    System.out.println(message.getText());
                } else {
                    System.out.println();
                }
            }
            session.commit();
        }

Normally, we would not wait forever, and we’d clean up our resources, but since this is just a small example consumer, we’ll make some allowances 🙂

Ok, that takes care of our consumer. We won’t run it yet, since we have not created the topics. Let’s do that now!

Create the topics

We are going to create two topics, one in each database instance/region, and configure propagation between them. Let’s review what we want:

Ashburn (producer side)        Phoenix (consumer side)
ASHPROD1 database instance     PHXPROD1 database instance
ASH_TOPIC topic                PHX_TOPIC topic

Navigate back to your ASHPROD1 Autonomous Database in the OCI Console and click on the “Database Actions” button:

Note that your browser might think this is a pop-up and block it. If so, clicking on the button again usually lets the browser know you really meant to open it 🙂

In the Database Actions page, click on the “SQL” card to open the SQL Worksheet:

If you get the tour, you can click on “Next” or the “X” to close it.

We are just going to create our topics in the ADMIN schema. In real life, you would probably create a new user/schema to keep your topics in, perhaps several so that you can group them for easier administration. You can create topics with Java or PL/SQL. For this example, we will use PL/SQL.

Here are the commands to create and start our new topic, ASH_TOPIC:

begin
    dbms_aqadm.create_sharded_queue(
        queue_name => 'ash_topic',
        multiple_consumers => TRUE
    ); 
    dbms_aqadm.start_queue('ash_topic');
end;

If you are using 21c, instead of create_sharded_queue, you should use create_transactional_event_queue – that procedure was renamed in 21c.

You can put these commands into the worksheet at the top (1), then click on the “Run Statement” button (2). You will see the result in the “Script Output” window (3) as shown below:

If you want to check, you can run this query to see details of the queues and topics in your schema:

select * from user_queues;

Now, we need to go to our PHXPROD1 database and create the PHX_TOPIC there. Repeat what you did for ASHPROD1 on the PHXPROD1 database, and remember to change the name of the topic in the commands that you run!

Create the Database Link

Great, our topics are ready to go! Next, we need to create a Database Link from the ASHPROD1 database to the PHXPROD1 database. The Database Link will allow us to perform actions against the remote database, in this case, to enqueue messages on the remote topic.

Since our databases are using TLS, we need to make the remote database (PHXPROD1) wallet available to the ASHPROD1 database, so that it can authenticate. The easiest way to do this is to upload the files we need into an Object Store bucket.

Let’s create the bucket. In the OCI Console, make sure you are in the Ashburn region and then click on the “hamburger” menu (the three lines at the top left), then “Storage” and then “Buckets”:

Then click on the “Create Bucket” button. Give your bucket a name (I used dblinks) and click on the “Create” button. All the defaults are fine for what we need:

Notice that your bucket is private:

Click on the “Upload” button to upload a file:

Then click on the “select files” link to choose the file. We need the file called cwallet.sso in the wallet we downloaded for the PHXPROD1 database (the remote database):

Once the upload completes you can close that dialog and then click on the “three dots” (1) next to the file we just uploaded and choose the “Create Pre-Authenticated Request” (2) option:

The defaults are what we want here – we want to be able to read this one object only. If you want to change the expiration to something like 2 days, just to be on the safe side, that’s not a bad idea at all! Click on the “Create Pre-Authenticated Request” button:

Make sure you take a copy of the URL – you won’t be able to get it again!

Ok, now we are ready to create the link. Open the SQL Worksheet for the ASHPROD1 database (the local/source database) and run these commands. You will need to get the right values for several fields before you run this; I’ll tell you where to get them next:

create or replace directory AQ_DBLINK_CREDENTIALS
as 'aq_dblink_credentials';

BEGIN
  DBMS_CLOUD.GET_OBJECT(
    object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/p/xxxx/n/xxxx/b/dblinks/o/cwallet.sso',
    directory_name => 'AQ_DBLINK_CREDENTIALS',
    file_name => 'cwallet.sso');

  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'CRED',
    username => 'ADMIN', -- remote db has case-sensitive login enabled, must be uppercase
    password => 'Welcome123##');

  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name => 'PHXPROD1',
    hostname => 'adb.us-phoenix-1.oraclecloud.com',
    port => '1522',
    service_name => 'xxxxx.adb.oraclecloud.com',
    ssl_server_cert_dn => 'CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US',
    credential_name => 'CRED',
    directory_name => 'AQ_DBLINK_CREDENTIALS');
END;

In the GET_OBJECT call, the object_uri needs to be that URL that you just copied from the Pre-Authenticated Request.

In the CREATE_CREDENTIAL call, the username should be the user for the remote (PHXPROD1) database – we can just use ADMIN. Note that this must be in upper case since Autonomous Database is configured for case-sensitive login by default. The password should be the password for that user.

In the CREATE_DATABASE_LINK call, the db_link_name is what we are going to use to refer to the remote database. I just used the name of the database – you’ll see later why that makes things more intuitive. You can get the values for the hostname, port, service_name and ssl_server_cert_dn fields from the wallet you downloaded. Make sure you use the wallet for the PHXPROD1 database. You will find the right values in the tnsnames.ora file, and you can just copy them in here. Here’s an example; the values we need are the port, host, service_name and ssl_server_cert_dn shown in this entry:

phxprod1_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))(connect_data=(service_name=xxxx_phxprod1_high.adb.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))
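If you’d rather not copy those values out by eye, a small hypothetical helper can pull them out of the entry. This is my own illustration (the extract name is not an Oracle API), and it is not a full TNS parser – just enough for the flat (key=value) pairs in an entry like the one above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TnsEntry {
    // Hypothetical helper: pull one keyword value (host, port, service_name)
    // out of a tnsnames.ora entry. Nested values are out of scope, and quoted
    // values like ssl_server_cert_dn come back with their quotes included.
    static String extract(String tnsEntry, String key) {
        Matcher m = Pattern.compile("\\(" + key + "=([^)]+)\\)").matcher(tnsEntry);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String entry = "(description=(address=(protocol=tcps)(port=1522)"
                + "(host=adb.us-phoenix-1.oraclecloud.com))"
                + "(connect_data=(service_name=xxxx_phxprod1_high.adb.oraclecloud.com)))";
        System.out.println(extract(entry, "host"));         // prints adb.us-phoenix-1.oraclecloud.com
        System.out.println(extract(entry, "port"));         // prints 1522
        System.out.println(extract(entry, "service_name"));
    }
}
```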

Once you have all the right values, paste this into the SQL Worksheet and click on the “Run Script” button:

You can check that it worked by running a query through the database link. For example, let’s get a list of the queues/topics on the remote database. We enter this query on the ASHPROD1 instance, using the database link (“@PHXPROD1”) to have it run on the other database. Notice that the output shows the topic PHX_TOPIC we created in the PHXPROD1 database:

Start message propagation

Ok, now we are ready to start propagating messages! (Yay!)

We want to run these commands in the SQL Worksheet on the ASHPROD1 database (the source/local database):

begin
   dbms_aqadm.schedule_propagation(
      queue_name         => 'ash_topic',
      destination        => 'phxprod1',
      destination_queue  => 'phx_topic');
   dbms_aqadm.enable_propagation_schedule(
      queue_name         => 'ash_topic',
      destination        => 'phxprod1',
      destination_queue  => 'phx_topic');
end;

You can view the schedule you just created with this query:

select destination, LAST_RUN_TIME, NEXT_RUN_TIME, LAST_ERROR_TIME, LAST_ERROR_MSG 
from dba_queue_schedules;

Start the consumer

Now we can start up our consumer! Back in our directory with our code (the one with the pom.xml in it) run this command to start the consumer:

export DB_PASSWORD=Welcome123##          <-- use your real password!
mvn clean compile exec:exec -P consumer

After a few moments, the consumer will start up and we will see this message indicating that it is connected and waiting for messages:

[INFO] 
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...

So now, we need to send some messages to it!

Creating a Producer

Let’s create another Java file called src/main/java/com/wordpress/redstack/Producer.java with this content:

package com.wordpress.redstack;

import java.sql.SQLException;

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;

import oracle.AQ.AQException;
import oracle.jms.AQjmsAgent;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;
import oracle.jms.AQjmsTextMessage;
import oracle.jms.AQjmsTopicPublisher;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class Producer {

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@ashprod1_high?TNS_ADMIN=/home/mark/src/redstack/ASHPROD1";
    private static String topicName = "ash_topic";

    public static void main(String[] args) throws AQException, SQLException, JMSException {

        // create a topic session
        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setURL(url);
        ds.setUser(username);
        ds.setPassword(System.getenv("DB_PASSWORD"));

        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
        TopicConnection conn = tcf.createTopicConnection();
        conn.start();
        TopicSession session = (AQjmsSession) 
           conn.createSession(true, Session.AUTO_ACKNOWLEDGE);

        // publish message
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicPublisher publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

        AQjmsTextMessage message = (AQjmsTextMessage) 
           session.createTextMessage("hello from ashburn, virginia!");
        publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("bob", null) });
        session.commit();

        // clean up
        publisher.close();
        session.close();
        conn.close();
    }

}

Let’s walk through this code. It’s very similar to the consumer, so I’ll just point out the important differences.

    private static String username = "admin";
    private static String url = "jdbc:oracle:thin:@ashprod1_high?TNS_ADMIN=/home/mark/src/redstack/ASHPROD1";
    private static String topicName = "ash_topic";

Notice that we are using the ASHPROD1 instance in the producer and the ASH_TOPIC.

        // publish message
        Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
        AQjmsTopicPublisher publisher = (AQjmsTopicPublisher) session.createPublisher(topic);

        AQjmsTextMessage message = (AQjmsTextMessage) 
           session.createTextMessage("hello from ashburn, virginia!");
        publisher.publish(message, new AQjmsAgent[] { new AQjmsAgent("bob", null) });
        session.commit();

We create a TopicPublisher, and we are sending a simple JMS text message to the topic.

Let’s run our producer now:

export DB_PASSWORD=Welcome123##          <-- use your real password!
mvn clean compile exec:exec -P producer

When that finishes (you’ll see a “BUILD SUCCESS” message), go and have a look at your consumer; you should see something like this:

[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...
hello from ashburn, virginia!

Yay! It worked! We just published a message on the ASH_TOPIC in the ASHPROD1 instance and it was propagated to PHXPROD1 for us and our consumer read it off the PHX_TOPIC in PHXPROD1.

Here’s an interesting query we can run to see what happened:

select queue, msg_id, msg_state, enq_timestamp, deq_timestamp, deq_user_id, user_data, consumer_name
 from aq$ash_topic;

You can also run that on the remote database like this:

select queue, msg_id, msg_state, enq_timestamp, deq_timestamp, deq_user_id, user_data, consumer_name
 from aq$phx_topic@phxprod1;

Notice the consumer names – in the local ASHPROD1 instance, the consumer is AQ$_P_106126_92PHXPROD1 (yours will probably be slightly different.) That’s the propagation consumer that is running to propagate the messages to PHXPROD1.

But in the PHXPROD1 instance, the consumer is BOOK! That’s the name we gave to our consumer:

        AQjmsTopicSubscriber subscriber = (AQjmsTopicSubscriber) 
           session.createDurableSubscriber(topic, "BOOK");

Go ahead and send some more messages with the producer! Enjoy!


Installing Oracle REST Data Services (standalone)

I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install my own (“customer managed” or “standalone”) ORDS instance. It took me a few goes, and some help from Jeff Smith (yes, that Jeff Smith) to get it right, so I thought it would be good to document how I got it working!

In this example, I am going to use an Oracle 21c database, and I will set up ORDS 22.1 in one of the pluggable databases. Once we have it up and running, we will use Database Actions and look at some of the services in the REST service catalog.

Setting up a database

First, of course I needed a database to play with. I fired up a 21c database in a container for this exercise. You will need to go accept the license agreement before you can pull the container image.

Go to https://container-registry.oracle.com and navigate to “Database” and then “enterprise” and click on the button to accept the agreement. You may have to log in if you are not already.

You will also need to log in to Oracle Container Registry with your container runtime; in this post I am using Docker. This will prompt for your username and password:

docker login container-registry.oracle.com

Now we can start up a database in a container. Here is the command I used; it sets a password and the SID/service names, and exposes the database port so we can access the database from outside the container:

docker run -d \
  --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \
  -e ORACLE_SID=ORCL \
  -e ORACLE_PDB=PDB1 \
  container-registry.oracle.com/database/enterprise:21.3.0.0

Note: If you use different names, you will need to adjust the example commands appropriately! Also, if you want to be able to restart this database without losing all your data, you’ll want to mount a volume – the OCR page has details on how to do that.

It takes a few minutes to start up. You can watch the logs using this command, and you will need to wait until you see the message indicating it is ready to use:

docker logs -f oracle-db

When the database is ready, we can log on and give the necessary privileges to our PDB admin user.

sqlplus sys/Welcome123##@//localhost:1521/orcl as sysdba
SQL> alter session set container = PDB1;
SQL> grant dba to pdbadmin;

Ok, now we are ready to install ORDS!

Installing ORDS

First step is to download it, of course. Here is the site to get the latest version of ORDS:

https://www.oracle.com/database/technologies/appdev/rest-data-services-downloads.html

There is also a direct link to the latest version: https://download.oracle.com/otn_software/java/ords/ords-latest.zip

Once you have it downloaded, just unzip it into a new directory. I unzipped it into /home/mark/ords.

The steps that I am going to describe here are described in more detail in the documentation.

Now we want to run the pre-install script to set up the necessary privileges. I am using the pdbadmin user, the admin user in my PDB. This script will take just a few moments to run:

sqlplus sys/Welcome123##@//localhost:1521/pdb1 as sysdba \
        @scripts/installer/ords_installer_privileges.sql pdbadmin

Great, now we can run the installer. I used the interactive installer, which will ask you for the necessary information and let you type it in as you go. It is also possible to do a “silent” install by providing all of the information on the command line – the documentation explains how to do this.

Create a directory to hold the configuration and start the interactive installer:

cd /home/mark/ords
export PATH=/home/mark/ords/bin:$PATH
mkdir config
ords --config /home/mark/ords/config install

Here’s what the interactive install dialog looks like; the values I typed appear after each prompt, and mostly I just took the defaults:

Oracle REST Data Services - Interactive Install

  Enter a number to select the type of installation
    [1] Install or upgrade ORDS in the database only
    [2] Create or update a database pool and install/upgrade ORDS in the database
    [3] Create or update a database pool only
  Choose [2]:
  Enter a number to select the database connection type to use
    [1] Basic (host name, port, service name)
    [2] TNS (TNS alias, TNS directory)
    [3] Custom database URL
  Choose [1]:
  Enter the database host name [localhost]:
  Enter the database listen port [1521]:
  Enter the database service name [orcl]: pdb1

  Provide database user name with administrator privileges.
    Enter the administrator username: pdbadmin

  Enter the database password for pdbadmin:
Connecting to database user: pdbadmin url: jdbc:oracle:thin:@//localhost:1521/pdb1

Retrieving information.
  Enter the default tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [SYSAUX]:
  Enter the temporary tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [TEMP]:
  Enter a number to select additional feature(s) to enable:
    [1] Database Actions  (Enables all features)
    [2] REST Enabled SQL and Database API
    [3] REST Enabled SQL
    [4] Database API
    [5] None
  Choose [1]:
  Enter a number to configure and start ORDS in standalone mode
    [1] Configure and start ORDS in standalone mode
    [2] Skip
  Choose [1]:
  Enter a number to use HTTP or HTTPS protocol
    [1] HTTP
    [2] HTTPS
  Choose [1]:
  Enter the HTTP port [8080]:

Note: I just used HTTP, but if you want to use HTTPS, you will probably want to create some certificates and configure them in the installer. Here are some commands to create a self-signed certificate and convert the key to the DER format ORDS requires:

# these are optional - only required if you want to use HTTPS

openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout ords.key.pem \
            -days 365 -out ords.pem
openssl x509 -in ords.pem  -text -noout
openssl pkcs8 -topk8 -inform PEM -outform DER -in ords.key.pem -out ords.der -nocrypt

Once you complete the interview, the installer will perform the installation. It takes just a couple of minutes, and it will start up the ORDS standalone server for you. If you need to stop it (with Ctrl-C) you can restart it with this command:

ords --config /home/mark/ords/config serve

Ok, now that we have ORDS up and running, we are going to need a user!

Preparing an ORDS user

Let’s create a regular database user and give them access to ORDS.

Using the PDB admin user, we can create a new user and give them the necessary permissions to use ORDS:

sqlplus pdbadmin/Welcome123##@//localhost:1521/pdb1

SQL> create user mark identified by Welcome123##;
SQL> grant connect, resource to mark;
SQL> grant unlimited tablespace to mark;
SQL> begin
    ords.enable_schema(
        p_enabled => true,
        p_schema => 'mark',
        p_url_mapping_type => 'BASE_PATH',
        p_url_mapping_pattern => 'mark',
        p_auto_rest_auth => false
    );
    commit;
end;
/

Great, now we are ready to use ORDS!

Log in to ORDS Database Actions

To log into ORDS, open a browser and go to this URL: http://localhost:8080/ords/sql-developer

You may need to change the hostname or port if you used something different.

You should see the login page:

Enter your username (I used mark), then press Next and enter the password (I used Welcome123##). This will take you to the main “Database Actions” page:

Let’s create a table and enter some data 🙂

Click on the SQL card (top left) to open the SQL worksheet, and enter these statements:

create table city (
    name varchar2(64),
    population number
);

insert into city values ('Tokyo', 37000000);
insert into city values ('Delhi', 29000000);
insert into city values ('Shanghai', 26000000);
insert into city values ('Sao Paulo', 21000000);
insert into city values ('Mexico City', 21000000);

Click on the “Run Script” icon to execute these statements – it’s the one the red arrow is highlighting:
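With the data in place, a quick query in the same worksheet confirms what we just inserted (plain SQL, nothing ORDS-specific):

```sql
-- sanity check: list the cities we just inserted, largest first
select name, population
from city
order by population desc;
```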

Let’s expose that table as a REST service!

Creating a REST service

ORDS allows us to easily expose an SQL statement or a PL/SQL block as a REST service. Let’s navigate to the REST page – click on the “hamburger menu” (1) and then the REST page (2):

The basic structure is that we create a “module” which contains “templates” which in turn contain “handlers.” So let’s start by creating a module. Click on the “Modules” option in the top menu.

Then click on the “Create Module” button in the top right corner:

Give the module a name (I used mark) and a base path (I used /api/). Since we are just playing here, set the “Protected by Privilege” to Not protected. Obviously, in real life you’d set up authentication, for example using OAuth, which ORDS provides out of the box – but that’s another post 🙂 Finally, click on the “Create” button to create the module. It will now appear in the modules page:

Click on the module name (“mark” in the example above) to open the module, and click on the “Create Template” button on the right hand side:

Enter a URI Template for this service (I used cities), then click on the “Create” button:

Now you will see the template page. Click on the “Create Handler” button on the right:

In the “Create Handler” dialog, we provide details of the service. Notice that you can choose the HTTP Method (GET, POST, DELETE, etc.) and you can control paging. For this service, we want to create a GET handler and we want the “Source Type” to be Collection Query which lets us enter an SQL statement. While you are here, have a look in the pull down and notice that you can also use PL/SQL! You can also use bind variables in here, so we can accept parameters and use them in the query or PL/SQL code.

For now, enter this simple query, then click on the “Create” button:

select * from city

Note: You should not include the semi-colon!
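As mentioned above, handlers can use bind variables. For example, if you later added a second template such as cities/:name (hypothetical – not one of the steps in this post), a GET handler on that template could filter using the URI parameter as a bind variable:

```sql
select name, population
from city
where name = :name
```

Again, no semi-colon at the end.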

Once you have created the handler, you will see the details view, where you can test it by clicking on the “Run” button (1):

Notice that you see the results in the bottom half of the screen. You also have the URL for the service provided (2) and there is a “copy” button on the right hand side. Let’s test our service using cURL:

$ curl http://localhost:8080/ords/mark/api/cities
{"items":[{"name":"Tokyo","population":37000000},{"name":"Delhi","population":29000000},{"name":"Shanghai","population":26000000},{"name":"Sao Paulo","population":21000000},{"name":"Mexico City","population":21000000}],"hasMore":false,"limit":25,"offset":0,"count":5,"links":[{"rel":"self","href":"http://localhost:8080/ords/mark/api/cities"},{"rel":"describedby","href":"http://localhost:8080/ords/mark/metadata-catalog/api/item"},{"rel":"first","href":"http://localhost:8080/ords/mark/api/cities"}]}

Great! We made a service!

Explore the out of the box services

ORDS also provides a heap of out of the box services for us automatically. To explore these, let’s use Postman, which is a very popular tool for REST testing. You can download it from the Postman website.

Jeff Smith has a great post here that explains how to import all the ORDS REST APIs into Postman.

When you open Postman, click on the “Import” button:

Now you need the right URL! If you have been following along and using the same names as me, your URL will be:

http://localhost:8080/ords/mark/_/db-api/latest/metadata-catalog/openapi.json

If you used a different user, you will need to change “mark” to your username in that URL. After you click on “Import”, choose “Link” as the type and enter your URL:

One good thing about Postman is that we can set the authentication parameters at the top level, on that “ORDS Database API” folder that you just created. Open that and click on the “Authorization” tab, choose “Basic Auth” as the type and enter the database user and password:

In this folder you will see a whole collection of services for all kinds of things. Let’s try a simple one! Navigate to the “Get Database version” service and click on the “Send” button in the top right corner. You’ll see the result data in the bottom pane:

Well, there you go! We installed ORDS, used the Database Actions and REST interfaces, created a service and explored the out of the box services! I hope you enjoyed!


The OCI Service Mesh is now available!

Dusko Vukmanovic just announced the general availability of OCI Service Mesh in this blog post.

It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications.

It’s a free managed service and it’s available in all commercial regions today. Check it out!


Playing with Kafka Java Client for TEQ – creating the simplest of producers and consumers

Today I was playing with the Kafka Java Client for TEQ, which allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka.

Kafka Java Client for TEQ is available as a preview on GitHub here: http://github.com/oracle/okafka

In this preview version, there are some limitations documented in the repository, but the main one to be aware of is that you need to use the okafka library, not the regular kafka one, so you would need to change existing kafka client code if you wanted to try out the preview.

Preparing the database

To get started, I grabbed a new Oracle Autonomous Database instance on Oracle Cloud, and I opened up the SQL Worksheet in Database Actions and created myself a user. As the ADMIN user, I ran the following commands:

create user mark identified by SomePassword;  -- that's not the real password!
grant connect, resource to mark;
grant create session to mark;
grant unlimited tablespace to mark;
grant execute on dbms_aqadm to mark;
grant execute on dbms_aqin to mark;
grant execute on dbms_aqjms to mark;
grant select_catalog_role to mark;
grant select on gv$instance to mark;
grant select on gv$listener_network to mark;
commit;

And of course, I needed a topic to work with, so I logged on to SQL Worksheet as my new MARK user and created a topic called topic1 with these commands:

begin
    sys.dbms_aqadm.create_sharded_queue(queue_name => 'topic1', multiple_consumers => TRUE); 
    sys.dbms_aqadm.set_queue_parameter('topic1', 'SHARD_NUM', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'STICKY_DEQUEUE', 1);
    sys.dbms_aqadm.set_queue_parameter('topic1', 'KEY_BASED_ENQUEUE', 1);
    sys.dbms_aqadm.start_queue('topic1');
end;
/

Note that this is for Oracle Database 19c. If you are using 21c, create_sharded_queue is renamed to create_transactional_event_queue, so you will have to update that line.

The topic is empty right now, since we just created it, but here are a couple of queries that will be useful later. We can see the messages in the topic, with details including the enqueue time, status, etc., using this query:

select * from topic1;

This is a useful query to see a count of messages in each status:

select msg_state, count(*)
from aq$topic1
group by msg_state;
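If you re-run the producer several times while experimenting, you may also want to empty the topic between runs. Here is a sketch using DBMS_AQADM.PURGE_QUEUE_TABLE – for TEQ the queue table has the same name as the topic – but treat this as illustrative and check the documentation for your database version before using it on anything important:

```sql
declare
    po dbms_aqadm.aq$_purge_options_t;
begin
    -- a null purge_condition means purge every message in the queue table
    dbms_aqadm.purge_queue_table(
        queue_table     => 'TOPIC1',
        purge_condition => null,
        purge_options   => po);
end;
/
```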

Building the OKafka library

We need to build the OKafka library and install it in our local Maven repository so that it will be available to use as a dependency since the preview is not currently available in Maven Central.

First, clone the repository:

git clone https://github.com/oracle/okafka

Now we can build the uberjar with the included Gradle wrapper:

cd okafka
./gradlew fullJar

This will put the JAR file in clients/build/libs and we can install this into our local Maven repository using this command:

mvn install:install-file \
    -DgroupId=org.oracle.okafka \
    -DartifactId=okafka \
    -Dversion=0.8 \
    -Dfile=clients/build/libs/okafka-0.8-full.jar \
    -DpomFile=clients/okafka-0.8.pom 

Now we are ready to start writing our code!

Creating the Producer

Let’s start by creating our Maven POM file. In a new directory, called okafka, I created a file called pom.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId>
	<artifactId>okafka</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>okafka</name>

	<properties>
		<java.version>17</java.version>
		<maven.compiler.source>17</maven.compiler.source>
		<maven.compiler.target>17</maven.compiler.target>
		<okafka.version>0.8</okafka.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.oracle.okafka</groupId>
			<artifactId>okafka</artifactId>
			<version>${okafka.version}</version>
		</dependency>
	</dependencies>
</project>

I am using Java 17 for this example, but you could use anything from 1.8 onwards; just update the versions in the properties if you are using an earlier version.

Now let’s create our producer class:

mkdir -p src/main/java/com/example/okafka
touch src/main/java/com/example/okafka/Producer.java

Here’s the content for Producer.java:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.Properties;

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

public class Producer {

    private static final String propertiesFilename = "producer.properties";

    public static void main(String[] args) {
        // configure logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");

        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<String, String>(
                    topicName, 0, "key", "value " + i));
            }
            System.out.println("sent 100 messages");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Producer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                     "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

Let’s walk through this code and talk about what it does.

First, let’s notice the imports. We are importing the OKafka versions of the familiar Kafka classes. These have the same interfaces as the standard Kafka ones, but they work with Oracle TEQ instead:

import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

In the main() method we first set the log level and then we load some properties from our producer.properties config file. The getProperties() method at the end of the file is fairly standard; it just reads the file and returns the contents as a new Properties object.

Let’s see what’s in that producer.properties file, which is located in the src/main/resources directory:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
batch.size=200
linger.ms=100
buffer.memory=326760
key.serializer=org.oracle.okafka.common.serialization.StringSerializer
value.serializer=org.oracle.okafka.common.serialization.StringSerializer
topic.name=TOPIC1

There are two groups of properties in there. The first group provides details about my Oracle Autonomous Database instance, including the location of the wallet file – we’ll get that and set it up in a moment.

The second group contains the normal Kafka properties that you might expect to see, assuming you are familiar with Kafka. Notice that bootstrap.servers lists the address of my Oracle Autonomous Database, not a Kafka broker! Also notice that we are using the serializers (and later, deserializers) provided in the OKafka library, not the standard Kafka ones.

Next, we set the topic name by reading it from the properties file. If it is not there, the second argument provides a default/fallback value:

String topicName = props.getProperty("topic.name", "TOPIC1");

And now we are ready to create the producer and send some messages:

try(KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<String, String>(
            topicName, 0, "key", "value " + i));
    }
    System.out.println("sent 100 messages");
} catch (Exception e) {
    e.printStackTrace();
}

We created the KafkaProducer and for this example, we are using String for both the key and the value.

We have a loop to send 100 messages, which we create with the ProducerRecord class. We are just setting them to some placeholder data.

Ok, that’s all we need in the code. But we will need to get the wallet and set it up so Java programs can use it to authenticate – have a look at this post for details on how to do that! In short: download the wallet from the OCI console and unzip it into a directory called wallet in the same directory as the pom.xml. Then edit the sqlnet.ora to set the DIRECTORY to the right location (e.g. /home/mark/src/okafka/wallet for me), and add your credentials using the setup_wallet.sh script I showed in that post.

Finally, you need to add these lines to the ojdbc.properties file in the wallet directory to tell OKafka the user to connect to the database with:

user=mark
password=SomePassword
oracle.net.ssl_server_dn_match=true

With the wallet set up, we are ready to build and run our code!

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Producer

The output should look like this:

[main] INFO org.oracle.okafka.clients.producer.ProducerConfig - ProducerConfig values: 
        acks = 1
        batch.size = 200
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        buffer.memory = 326760
        client.id = 
        compression.type = none
        connections.max.idle.ms = 540000
        enable.idempotence = false
        interceptor.classes = []
        key.serializer = class org.oracle.okafka.common.serialization.StringSerializer
        linger.ms = 100
        max.block.ms = 60000
        max.in.flight.requests.per.connection = 5
        max.request.size = 1048576
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        partitioner.class = class org.oracle.okafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 0
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        tns.alias = prod_high
        transaction.timeout.ms = 60000
        transactional.id = null
        value.serializer = class org.oracle.okafka.common.serialization.StringSerializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
[kafka-producer-network-thread | producer-1] INFO org.oracle.okafka.clients.Metadata - Cluster ID: 
sent 100 messages
[main] INFO org.oracle.okafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.

You can see it dumps out the properties, and then after some informational messages you see the “sent 100 messages” output. Now you might want to go and run that query to look at the messages in the database!

Now, let’s move on to creating a consumer, so we can read those messages back.

Creating the Consumer

The consumer is going to look very similar to the producer, and it will also have its own properties file. Here’s the contents of the properties file first – put this in src/main/resources/consumer.properties:

oracle.service.name=xxxxx_prod_high.adb.oraclecloud.com
oracle.instance.name=prod_high
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
tns.alias=prod_high

bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
group.id=bob
enable.auto.commit=true
auto.commit.interval.ms=10000
key.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
value.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
max.poll.records=100

And here is the content for Consumer.java which you create in src/main/java/com/example/okafka:

package com.example.okafka;

import java.io.FileNotFoundException;
import java.io.InputStream;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.oracle.okafka.clients.consumer.ConsumerRecord;
import org.oracle.okafka.clients.consumer.ConsumerRecords;
import org.oracle.okafka.clients.consumer.KafkaConsumer;

public class Consumer {
    private static final String propertiesFilename = "consumer.properties";

    public static void main(String[] args) {
        // logging level
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "INFO");


        // load props
        Properties props = getProperties();

        String topicName = props.getProperty("topic.name", "TOPIC1");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();
    }

    private static Properties getProperties() {
        Properties props = new Properties();

        try (
                InputStream inputStream = Consumer.class
                    .getClassLoader()
                    .getResourceAsStream(propertiesFilename);
        ) {
            if (inputStream != null) {
                props.load(inputStream);
            } else {
                throw new FileNotFoundException(
                    "could not find properties file: " + propertiesFilename);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return props;
    }
}

A lot of this is the same as the producer, so let’s walk through the parts that are different.

First, we load a different properties file (the consumer one), which has a few properties that are relevant for consumers. In particular, we are setting max.poll.records to 100, so we’ll only be reading at most 100 messages off the topic at a time.

Here’s how we create the consumer:

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topicName));

Again, you may notice that this is very similar to Kafka. We are using String as the type for both the key and value. Notice we provided the appropriate deserializers in the property file, the ones from the OKafka library, not the standard Kafka ones.

Here’s the actual consumer code:

        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30_000));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(
                record.topic() + " " + 
                record.partition() + " " + 
                record.key() + " " + 
                record.value());
        }
        consumer.close();

We open our consumer, poll for messages (for up to 30 seconds), print out some information about each message, and then close our consumer. Again, this is very simple, but it’s enough to test consuming messages.

We can run this and we should see all of the message data in the output, here’s how to run it, and an excerpt of the output:

mvn clean package
CLASSPATH=target/okafka-0.0.1-SNAPSHOT.jar
CLASSPATH=$CLASSPATH:$HOME/.m2/repository/org/oracle/okafka/okafka/0.8/okafka-0.8.jar
java -classpath $CLASSPATH com.example.okafka.Consumer

[main] INFO org.oracle.okafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
        auto.commit.interval.ms = 10000
        auto.offset.reset = latest
        bootstrap.servers = [adb.us-ashburn-1.oraclecloud.com:1522]
        check.crcs = true
        client.id = 
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = true
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = bob
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        isolation.level = read_uncommitted
        key.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 100
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        oracle.instance.name = prod_high
        oracle.net.tns_admin = /home/mark/src/okafka/wallet
        oracle.service.name = xxxxx_prod_high.adb.oraclecloud.com
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        security.protocol = SSL
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        tns.alias = prod_high
        value.deserializer = class org.oracle.okafka.common.serialization.StringDeserializer

[main] WARN org.oracle.okafka.common.utils.AppInfoParser - Error while loading kafka-version.properties :inStream parameter is null
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka version : unknown
[main] INFO org.oracle.okafka.common.utils.AppInfoParser - Kafka commitId : unknown
TOPIC1 0 key value 0
TOPIC1 0 key value 1
TOPIC1 0 key value 2
...

So there you go! We successfully created a very simple producer and consumer and we sent and received messages from a topic using the OKafka library and Oracle Transactional Event Queues!


Loading data into Autonomous Data Warehouse using Datapump

Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed!

Thanks to Ankur Saini for sharing!


Configuring a Java application to connect to Autonomous Database using Mutual TLS

In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS.

The first thing you are going to need is an Oracle Autonomous Database instance. If you are reading this post, you probably already know how to get one. But just in case you don’t – here’s a good reference to get you started – and remember, this is available in the “always free” tier, so you can try this out for free!

When you look at your instance in the Oracle Cloud (OCI) console, you will see there is a button labelled DB Connection – go ahead and click on that:

Viewing the Autonomous Database instance in the Oracle Cloud Console.

In the slide out details page, there is a button labelled Download wallet – click on that and save the file somewhere convenient.

Downloading the wallet.

When you unzip the wallet file, you will see it contains a number of files, as shown below, including a tnsnames.ora and sqlnet.ora to tell your client how to access the database server, as well as some wallet files that contain certificates to authenticate to the database:

$ ls
Wallet_MYQUICKSTART.zip

$ unzip Wallet_MYQUICKSTART.zip
Archive:  Wallet_MYQUICKSTART.zip
  inflating: README
  inflating: cwallet.sso
  inflating: tnsnames.ora
  inflating: truststore.jks
  inflating: ojdbc.properties
  inflating: sqlnet.ora
  inflating: ewallet.p12
  inflating: keystore.jks

The first thing you need to do is edit the sqlnet.ora file and make sure the DIRECTORY entry matches the location where you unzipped the wallet, and then add the SSL_SERVER_DN_MATCH=yes option to the file; it should look something like this:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/blog")))
SSL_SERVER_DN_MATCH=yes

Before we set up Mutual TLS – let’s review how we can use this wallet as-is to connect to the database using a username and password. Let’s take a look at a simple Java application that we can use to validate connectivity – you can grab the source code from GitHub:

$ git clone https://github.com/markxnelson/adb-mtls-sample

This repository contains a very simple, single class Java application that just connects to the database, checks that the connection was successful and then exits. It includes a Maven POM file to get the dependencies and to run the application.

Make sure you can compile the application successfully:

$ cd adb-mtls-sample
$ mvn clean compile

Before you run the sample, you will need to edit the Java class file to set the database JDBC URL and user to match your own environment. Notice these lines in the file src/main/java/com/github/markxnelson/SimpleJDBCTest.java:

// set the database JDBC URL - note that the alias ("myquickstart_high" in this example) and 
// the location of the wallet must be changed to match your own environment
private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
    
// the username to connect to the database with
private static String username = "admin";

You need to update these with the correct alias name for your database (it is defined in the tnsnames.ora file in the wallet you downloaded) and the location of the wallet, i.e. the directory where you unzipped the wallet, the same directory where the tnsnames.ora is located.
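For reference, the aliases are defined in the tnsnames.ora file in the wallet. An entry looks roughly like the sketch below – the host, service name, and security settings here are placeholders, and your downloaded file will contain the real values (usually on a single line):

```
myquickstart_high =
  (description=
    (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))
    (connect_data=(service_name=xxxxx_myquickstart_high.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
```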

You also need to set the correct username that the sample should use to connect to your database. Note that the user must exist and have at least the connect privilege in the database.

Once you have made these updates, you can compile and run the sample. Note that this code expects you to provide the password for that user in an environment variable called DB_PASSWORD:

$ export DB_PASSWORD=whatever_it_is
$ mvn clean compile exec:exec

You will see the output from Maven, and toward the end, something like this:

[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Great! We can connect to the database normally, using a username and password. If you want to be sure, try commenting out the two lines that set the user and password on the data source and run this again – the connection will fail and you will get an error!

Now let’s configure it to use mutual TLS instead.

I included a script called setup_wallet.sh in the sample repository. If you prefer, you can just run that script and provide the username and passwords when asked. If you want to do it manually, then read on!

First, we need to configure the Java class path to include the Oracle Wallet JAR files. Maven will have downloaded these from Maven Central for you when you compiled the application above, so you can find them in your local Maven repository:

  • $HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
  • $HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

You’ll need these for the command we run below – you can put them into an environment variable called CLASSPATH for easy access.

export CLASSPATH=$HOME/.m2/repository/com/oracle/database/security/oraclepki/19.3.0.0/oraclepki-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_core/19.3.0.0/osdt_core-19.3.0.0.jar
export CLASSPATH=$CLASSPATH:$HOME/.m2/repository/com/oracle/database/security/osdt_cert/19.3.0.0/osdt_cert-19.3.0.0.jar

Here’s the command you will need to run to add your credentials to the wallet (don’t run it yet!):

java \
    -Doracle.pki.debug=true \
    -classpath ${CLASSPATH} \
    oracle.security.pki.OracleSecretStoreTextUI \
    -nologo \
    -wrl "$USER_DEFINED_WALLET" \
    -createCredential "myquickstart_high" \
    $USER >/dev/null <<EOF
$DB_PASSWORD
$DB_PASSWORD
$WALLET_PASSWORD
EOF

First, set the environment variable USER_DEFINED_WALLET to the directory where you unzipped the wallet, i.e. the directory where the tnsnames.ora is located.

export USER_DEFINED_WALLET=/home/mark/blog

You’ll also need to change the alias in this command to match your database alias. In the example above it is myquickstart_high. You get this value from your tnsnames.ora – it’s the same one you used in the Java code earlier.
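If you want to sanity-check which aliases your tnsnames.ora defines, a quick sketch like this (hypothetical, not part of the sample) lists the alias names at the start of each entry – real tnsnames.ora files can be more complex, so treat this as an illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: lists alias names in tnsnames.ora-style text.
public class TnsAliasLister {

    public static List<String> aliases(String tnsnames) {
        List<String> result = new ArrayList<>();
        // an alias is an identifier at the start of a line, followed by '='
        Matcher m = Pattern.compile("(?m)^\\s*([A-Za-z0-9_]+)\\s*=").matcher(tnsnames);
        while (m.find()) {
            result.add(m.group(1));
        }
        return result;
    }

    public static void main(String[] args) {
        String sample = "myquickstart_high = (description=...)\nmyquickstart_low = (description=...)";
        System.out.println(aliases(sample)); // [myquickstart_high, myquickstart_low]
    }
}
```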

Now we are ready to run the command. This will update the wallet to add your user’s credentials and associate them with that database alias.

Once we have done that, we can edit the Java source code to comment out (or remove) the two lines that set the user and password:

//ds.setUser(username);
//ds.setPassword(password);

Now you can compile and run the program again, and this time it will get the credentials from the wallet and will use mutual TLS to connect to the database.

$ mvn clean compile exec:exec
... (lines omitted) ...
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ adb-mtls-sample ---
Trying to connect...
Connected!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

There you have it! We can now use this wallet to allow Java applications to connect to our database securely. The example we used was pretty simple, but you could imagine putting this wallet into a Kubernetes secret and mounting that secret as a volume for a pod running a Java microservice. This separates the code from the credentials and certificates needed to connect to and validate the database, and helps us build more secure microservices. Enjoy!

Posted in Uncategorized | Tagged , , | 1 Comment

Can Java microservices be as fast as Go?

I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.

Posted in Uncategorized | Leave a comment

Storing ATP Wallets in a Kubernetes Secret

In this previous post, we talked about how to create a WebLogic data source for an ATP database. In that example we put the ATP wallet into the domain directly, which is fine if your domain runs in a secure environment. But if you want to use ATP from a WebLogic domain running in Kubernetes, you might not want to burn the wallet into the Docker image, since doing so would allow anyone with access to the image to retrieve the wallet.

A more reasonable thing to do in the Kubernetes environment would be to put the ATP wallet into a Kubernetes secret and mount that secret into the container.

You will, of course, need to decide where you are going to mount it and update the sqlnet.ora with the right path, like we did in the previous post. Once that is taken care of, you can create the secret from the wallet using a small script like this:

#!/bin/bash
# Copyright 2019, Oracle Corporation and/or its affiliates. All rights reserved.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
  namespace: default
type: Opaque
data:
  ojdbc.properties: `cat ojdbc.properties | base64 -w0`
  tnsnames.ora: `cat tnsnames.ora | base64 -w0`
  sqlnet.ora: `cat sqlnet.ora | base64 -w0`
  cwallet.sso: `cat cwallet.sso | base64 -w0`
  ewallet.p12: `cat ewallet.p12 | base64 -w0`
  keystore.jks: `cat keystore.jks | base64 -w0`
  truststore.jks: `cat truststore.jks | base64 -w0`
EOF

We need to base64 encode the data that we put into the secret. When you mount the secret on a container (in a pod), Kubernetes will decode it, so it appears to the container in its original form.
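The round trip is symmetric – what the script encodes with base64, Kubernetes decodes byte-for-byte when it mounts the secret. Here is a small Java illustration of that round trip (the file content is just a placeholder):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrates the base64 round trip performed by the script (encode)
// and by Kubernetes when mounting the secret (decode).
public class SecretEncodingDemo {

    public static String roundTrip(String fileContent) {
        // what goes into the Secret manifest
        String encoded = Base64.getEncoder().encodeToString(fileContent.getBytes(StandardCharsets.UTF_8));
        // what the container sees once the secret is mounted
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String sqlnet = "WALLET_LOCATION = (SOURCE = (METHOD = file))";
        // prints the original content unchanged
        System.out.println(roundTrip(sqlnet));
    }
}
```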

Here is an example of how to mount the secret in a container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-weblogic-server
  labels:
    app: my-weblogic-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-weblogic-server
  template:
    metadata:
      labels:
        app: my-weblogic-server
    spec:
      containers:
      - name: my-weblogic-server
        image: my-weblogic-server:1.2
        volumeMounts:
        - mountPath: /shared
          name: atp-secret
          readOnly: true
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: atp-secret

You will obviously still need to control access to the secret and the running containers, but overall this approach does help to provide a better security stance.

Posted in Uncategorized | Leave a comment

Configuring a WebLogic Data Source to use ATP

In this post I am going to share details about how to configure a WebLogic data source to use ATP.

If you are not familiar with ATP, it is the new Autonomous Transaction Processing service on Oracle Cloud. It provides a fully managed autonomous database. You can create a new database in the OCI console in the Database menu under “Autonomous Transaction Processing” by clicking on that big blue button:

You need to give it a name, choose the number of cores and set an admin password:

It will take a few minutes to provision the database. Once it is ready, click on the database to view details.

Then click on the “DB Connection” button to download the wallet that we will need to connect to the database.

You need to provide a password for the wallet, and then you can download it:

Copy the wallet to your WebLogic server and unzip it. You will see the following files:

[oracle@domain1-admin-server atp]$ ls -l
total 40
-rw-rw-r--. 1 oracle oracle 6661 Feb  4 17:40 cwallet.sso
-rw-rw-r--. 1 oracle oracle 6616 Feb  4 17:40 ewallet.p12
-rw-rw-r--. 1 oracle oracle 3241 Feb  4 17:40 keystore.jks
-rw-rw-r--. 1 oracle oracle   87 Feb  4 17:40 ojdbc.properties
-rw-rw-r--. 1 oracle oracle  114 Feb  4 17:40 sqlnet.ora
-rw-rw-r--. 1 oracle oracle 6409 Feb  4 17:40 tnsnames.ora
-rw-rw-r--. 1 oracle oracle 3336 Feb  4 17:40 truststore.jks

I put these in a directory called /shared/atp. You need to update the sqlnet.ora to have the correct location as shown below:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/shared/atp")))
SSL_SERVER_DN_MATCH=yes

You will need to grab the hostname, port and service name from the tnsnames.ora to create the data source; here is an example:

productiondb_high = (description=
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=feqamosccwtl3ac_productiondb_high.atp.oraclecloud.com))
    (security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

You can now log in to the WebLogic console and create a data source, give it a name on the first page:

You can take the defaults on the second page:

And the third:

On the next page, you need to set the database name, hostname and port to the values from the tnsnames.ora:

On the next page you can provide the username and password. In this example I am just using the admin user. In a real-life scenario you would probably create a “normal” user and use that. You can find details about how to set up SQL*Plus here.

You also need to configure a set of properties that are required for ATP, as shown below – you can find more details in the ATP documentation:

oracle.net.tns_admin=/shared/atp
oracle.net.ssl_version=1.2
javax.net.ssl.trustStore=/shared/atp/truststore.jks
oracle.net.ssl_server_dn_match=true
user=admin
javax.net.ssl.keyStoreType=JKS
javax.net.ssl.trustStoreType=JKS
javax.net.ssl.keyStore=/shared/atp/keystore.jks
javax.net.ssl.keyStorePassword=WebLogicCafe1
javax.net.ssl.trustStorePassword=WebLogicCafe1
oracle.jdbc.fanEnabled=false
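For comparison, if you were connecting from a plain JDBC client instead of the WebLogic console, the same settings would map onto a java.util.Properties object along these lines (the paths and values are the placeholders from this post – substitute your own):

```java
import java.util.Properties;

// The ATP connection settings from above, expressed in code.
// Values are placeholders - use your own wallet directory, user and passwords.
public class AtpConnectionProperties {

    public static Properties build(String walletDir, String user, String storePassword) {
        Properties props = new Properties();
        props.setProperty("oracle.net.tns_admin", walletDir);
        props.setProperty("oracle.net.ssl_version", "1.2");
        props.setProperty("oracle.net.ssl_server_dn_match", "true");
        props.setProperty("javax.net.ssl.trustStore", walletDir + "/truststore.jks");
        props.setProperty("javax.net.ssl.trustStoreType", "JKS");
        props.setProperty("javax.net.ssl.trustStorePassword", storePassword);
        props.setProperty("javax.net.ssl.keyStore", walletDir + "/keystore.jks");
        props.setProperty("javax.net.ssl.keyStoreType", "JKS");
        props.setProperty("javax.net.ssl.keyStorePassword", storePassword);
        props.setProperty("oracle.jdbc.fanEnabled", "false");
        props.setProperty("user", user);
        return props;
    }
}
```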

Also notice that the URL format is jdbc:oracle:thin:@cafedatabase_high – you just need to put the alias from the tnsnames.ora file in there:

On the next page you can target the data source to the appropriate servers, and we are done! Click on the “Finish” button and then you can activate changes if you are in production mode.

You can now go and test the data source: in the “Monitoring” tab, go to “Testing”, select the data source and click on the “Test Data Source” button.

You will see the success message:

Enjoy!

Posted in Uncategorized | 1 Comment