In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ).
Having a small, discrete piece of code that processes events off a queue is a pretty common use case. You could even call it a microservice I guess 🙂 since it does meet the well-established criteria of having its own code base, being loosely coupled, independently deployable and testable. One thing I find really interesting about writing a “microservice” like this and deploying it in the Oracle Database is that I can essentially scale it to zero instances, and it will only run (and consume resources) when there is actually work for it to do. I could also use the Database Resource Manager to control how many resources it is able to consume if I wanted to 🙂 And it would not be all that hard to instrument it so I could get logs, metrics and even distributed tracing – but that’s another story!
So, let’s go ahead and build this thing!
We’ll start with a new Oracle Database. I am going to run it in a Docker container using the standard image provided on Oracle Container Registry. If you have not used it before, you will need to login to Oracle Container Registry at https://container-registry.oracle.com and then navigate to the “Database” section, and then “enterprise” and read and accept the license.
Make sure you are logged into Oracle Container Registry in your Docker client too:
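The commands are not shown above, but they would look something like this (a sketch: the container name oracle-db matches the log command below, while the image tag and password are examples you should adjust for your setup):

```
# log in to Oracle Container Registry with your Docker client
docker login container-registry.oracle.com

# start the database container (name, password and tag are examples)
docker run -d --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \
  container-registry.oracle.com/database/enterprise:latest
```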
It will take a few minutes (the first time only) for the database files to be created and the instance to start up. You can watch the logs, or use this little shell command to do the waiting for you:
while ! docker logs oracle-db | grep -q "DATABASE IS READY TO USE!"; do
    sleep 10
done
Great, now that we have a database running, let’s set up the necessary permissions for our user. I am going to use the pdbadmin user in the PDB1 pluggable database. So let’s give that user permissions to use the TEQ packages (I am using the new SQLcl command line tool, but you can use SQL*Plus or SQL Developer, or whatever tool you prefer):
# sql sys/Welcome123##@//localhost:1521/pdb1 as sysdba
SQL> alter session set container = pdb1;
SQL> grant dba to pdbadmin;
SQL> grant execute on dbms_aqadm to pdbadmin;
SQL> grant execute on dbms_aq to pdbadmin;
SQL> commit;
SQL> exit
Ok, now we can connect with our pdbadmin user and start setting up our environment:
# sql pdbadmin/Welcome123##@//localhost:1521/pdb1
First we want to create our (TEQ) queue (or topic) and start it. We’ll call the queue my_teq:
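The original commands are not reproduced here; creating and starting the queue, and adding the subscriber we will reference later, would look roughly like this (a sketch: the JMS payload type matches the sys.aq$_jms_text_message used by the consumer below, and my_subscriber is the subscriber name used throughout this post):

```
begin
    -- create a TEQ with a JMS payload (21c name; on 19c use create_sharded_queue)
    dbms_aqadm.create_transactional_event_queue(
        queue_name         => 'my_teq',
        multiple_consumers => true);
    dbms_aqadm.start_queue(queue_name => 'my_teq');
    -- add the subscriber that our notification will be registered for
    dbms_aqadm.add_subscriber(
        queue_name => 'my_teq',
        subscriber => sys.aq$_agent('my_subscriber', null, null));
end;
/
```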
We’ll keep our microservice super-simple for this demonstration: we’ll just have it record the messages it receives in an “output” table. So let’s create that table now:
create table my_log (
    message varchar2(256),
    when    timestamp(6)
);
Ok, so here is our consumer microservice:
create or replace procedure receiver (
    context  in raw,
    reginfo  in sys.aq$_reg_info,
    descr    in sys.aq$_descriptor,
    payload  in varchar2,
    payloadl in number
) as
    dequeue_options    dbms_aq.dequeue_options_t;
    message_properties dbms_aq.message_properties_t;
    message_handle     raw(16);
    message            sys.aq$_jms_text_message;
    no_messages        exception;
    pragma exception_init ( no_messages, -25228 );
begin
    dequeue_options.msgid         := descr.msg_id;
    dequeue_options.consumer_name := descr.consumer_name;
    dequeue_options.navigation    := dbms_aq.first_message;
    -- don't block waiting for more messages; raise ORA-25228 when the queue is drained
    dequeue_options.wait          := dbms_aq.no_wait;
    loop
        dbms_aq.dequeue (
            queue_name         => 'my_teq',
            dequeue_options    => dequeue_options,
            message_properties => message_properties,
            payload            => message,
            msgid              => message_handle
        );
        insert into my_log values ( message.text_vc, sysdate );
        commit;
    end loop;
exception
    when no_messages then
        dbms_output.put_line ( 'No more messages for processing' );
        commit;
end;
/
Let’s walk through that and talk about the details. First, the procedure must have this signature/interface:
procedure receiver (
    context  in raw,
    reginfo  in sys.aq$_reg_info,
    descr    in sys.aq$_descriptor,
    payload  in varchar2,
    payloadl in number
)
The name of the procedure is up to you, but it must have those exact parameters in that order, since this is a callback, and the TEQ notification is expecting this signature so that it can pass the data about new messages to the consumer.
When we get the callback, we need to perform a dequeue operation to get the actual message/event off the queue/topic. Since it is possible that there is more than one, it’s a good idea to use a loop to read and process all of them before we complete. In the body above, we have a simple loop that dequeues each message and then saves the details in our log table.
We’ve also defined an exception handler for when there are no messages left to dequeue; it’s good practice to cater for this so the procedure ends cleanly:
when no_messages then
    dbms_output.put_line ( 'No more messages for processing' );
    commit;
I used the JMS message format in this example, but of course you could use RAW or JSON or a user-defined type instead.
Ok, so now that our microservice is ready, we need to tell the database to call it when there is a message to process. To do this, we create a notification as follows:
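The registration block is not shown above; it would look something like this (a sketch assuming the pdbadmin schema and the my_subscriber subscriber used earlier in this post):

```
begin
    dbms_aq.register(
        sys.aq$_reg_info_list(
            sys.aq$_reg_info(
                'PDBADMIN.MY_TEQ:MY_SUBSCRIBER',   -- queue:subscriber
                dbms_aq.namespace_aq,              -- AQ/TEQ notifications
                'plsql://receiver',                -- the PL/SQL callback to run
                hextoraw('FF'))),                  -- user context payload
        1);
end;
/
```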
Ok, so let’s talk about what is happening here. This register function that we are running will set up the connection between the queue, the subscriber and the consumer. In the aq$_reg_info you can see the first parameter has the queue name followed by a colon and the subscriber name – so this is telling us “when we have a message on my_teq and it is addressed to the subscriber my_subscriber…”
The next parameter tells us that we are interested in AQ (and TEQ) notifications, and the third parameter tells us the callback address. In this case we are telling it to run the PL/SQL procedure called receiver.
Once that is done, you can check on the details with this query:
select r.reg_id, subscription_name, location_name, num_ntfns, num_pending_ntfns
from USER_SUBSCR_REGISTRATIONS r, V$SUBSCR_REGISTRATION_STATS s
where r.reg_id = s.reg_id;
REG_ID SUBSCRIPTION_NAME LOCATION_NAME NUM_NTFNS NUM_PENDING_NTFNS
______ ______________________________________ ________________ _________ _________________
301 "PDBADMIN"."MY_TEQ":"MY_SUBSCRIBER" plsql://receiver 40 0
If you come back and run this again later, you will see the number of notifications sent, and pending (i.e. the last two columns) will increase each time we send a message.
Ok, let’s enqueue a message (publish an event) to test this out!
We can use this command to send a test message. This creates and sends a JMS message on our my_teq queue/topic addressed to our my_subscriber consumer:
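The enqueue block itself is not shown above; a sketch of what it would look like (the message text matches the output shown later, and addressing the message to my_subscriber via the recipient list is one way to do it):

```
declare
    enqueue_options    dbms_aq.enqueue_options_t;
    message_properties dbms_aq.message_properties_t;
    message_handle     raw(16);
    message            sys.aq$_jms_text_message;
begin
    message := sys.aq$_jms_text_message.construct;
    message.set_text('hello from mark');
    -- address the message to our subscriber
    message_properties.recipient_list(1) :=
        sys.aq$_agent('my_subscriber', null, null);
    dbms_aq.enqueue(
        queue_name         => 'my_teq',
        enqueue_options    => enqueue_options,
        message_properties => message_properties,
        payload            => message,
        msgid              => message_handle);
    commit;
end;
/
```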
Once that is run, the notification will kick in, and the callback will occur and our microservice will run, and consume all of the messages and dump them out into our “output table.” We can check the results with this query:
SQL> select * from my_log;
MESSAGE WHEN
__________________ __________________________________
hello from mark 23-JUN-22 04.44.06.000000000 PM
Feel free to go run that a few more times to see what happens.
So there we go! We created a nice simple, loosely coupled consumer that will process messages/events as they arrive, and will scale to zero (consume no resources) when there is no work to do. Enjoy!
Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread!
There are some really interesting updates for the open source Python Oracle library (known as cx_oracle, but changing its name is part of this) – check it out here:
In this post I want to demonstrate how to use Oracle Transactional Event Queues (TEQ) to propagate messages/events across regions. I will use two Oracle Autonomous Databases running in Oracle Cloud, one in Ashburn, VA and one in Phoenix, AZ (about 2,000 miles apart).
Of course, there are a lot of reasons why you might want to propagate events like this, and you don’t necessarily have to do it across geographic regions: you might just want to do it across two database instances in the same data center, or even just between two topics in the same database!
Here’s a quick diagram of what we are going to build. We are going to use the JMS Pub/Sub model. Our producer will connect to the ASHPROD1 instance and put messages onto the topic ASH_TOPIC. Messages will be propagated from this topic to the topic PHX_TOPIC in the PHXPROD1 instance. Our consumer will connect to PHXPROD1 and consume messages from there.
To get started, let’s create two databases. To follow along, you’ll need an Oracle Cloud account – you can do this with the “Always Free” account using the 21c version of the Autonomous Database, so you can try this without spending any money 🙂 You can also use 19c if you prefer.
Creating the databases
First we log into the Oracle Cloud Infrastructure (OCI) Console at https://cloud.oracle.com. Enter your cloud account name and hit the “Next” button.
After you log in, click on the “hamburger” (three lines) menu (1) and go to “Oracle Database” (2) and then “Autonomous Database” (3) as shown:
Choose your compartment (1), and the region (2) (I’ll use Ashburn and Phoenix – use any two regions you like, or two in the same region will work too), then click on the “Create Autonomous Database” (3) button:
In the dialog, we need to give the database a name, I used ASHPROD1. Choose “Transaction Processing” as the workload type and “Shared Infrastructure” as the deployment type:
You can accept the default 19c database (or toggle that “Always Free” switch to use 21c). The default 1 OCPU, 1 TB is fine for this exercise. Also provide a password for the administrator (don’t forget it!):
In the “Choose network access” section, choose the option for secure access and click on the “Add My IP Address” button. Choose the “Bring Your Own License (BYOL)” option and provide an email address for the administrator:
Then click on the “Create Autonomous Database” button to create the database.
Now choose the second region, e.g. Phoenix, in the top right corner of the OCI Console and repeat this same process to create a second database, for example called PHXPROD1. This time though, choose the “secure access from anywhere” option, since we are going to need to be able to have ASHPROD1 connect to this instance too.
Obtain Database Wallets
So now we have our two databases. Let’s download the wallets so that we can connect to them. The database wallets contain the information needed to connect to, and authenticate with, the database.
In the OCI Console, click on the database name to see the details of the database:
Next, click on the “DB Connection” button:
You will click on the “Download wallet” button (1) to get the wallet file, but while you are here, notice the connection strings (2) – we’ll use one of those later.
After you click on the button, provide a password for the wallet, and then click on the “Download” button:
Repeat this for the other database.
Creating our consumer
Let’s create a new project and write our consumer code. We’ll use Maven to simplify the dependency management and to make it easy to run our consumer. Let’s create a new directory and unzip our two wallets into this directory. So we should see something like this:
Let’s add a Maven POM file to set up our project. I am assuming you have Maven and a JDK installed. If not – go get those now 🙂 I am using Maven 3.8.4 and Java 17.0.3. Create a file called pom.xml with this content:
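The POM content is not reproduced here; a minimal sketch with the dependencies this consumer needs (JDBC driver with wallet support, the AQ JMS client, and the JMS API; the coordinates are real Maven Central artifacts but the versions are examples) would look roughly like this:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>propagation</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
  </properties>
  <dependencies>
    <!-- JDBC driver, UCP and wallet/PKI support -->
    <dependency>
      <groupId>com.oracle.database.jdbc</groupId>
      <artifactId>ojdbc11-production</artifactId>
      <version>21.5.0.0</version>
      <type>pom</type>
    </dependency>
    <!-- AQ JMS client and the JMS API -->
    <dependency>
      <groupId>com.oracle.database.messaging</groupId>
      <artifactId>aqapi</artifactId>
      <version>19.3.0.0</version>
    </dependency>
    <dependency>
      <groupId>javax.jms</groupId>
      <artifactId>javax.jms-api</artifactId>
      <version>2.0.1</version>
    </dependency>
  </dependencies>
</project>
```

The exec-maven-plugin configuration with the producer and consumer profiles used later is omitted here for brevity.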
This defines the Maven coordinates for our project, the dependencies we need to compile and run our code, and also a convenience goal to run the consumer (or producer) directly from Maven so that we don’t have to worry about constructing the class path manually. Let’s also create some directories to store our code:
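For example (the package path com/example is an assumption, use whatever package you like):

```shell
mkdir -p src/main/java/com/example
```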
Our consumer connects using a JDBC URL containing an alias (phxprod1_high) – that might look familiar; remember we saw those aliases on the OCI Console when we were downloading the wallet. If you take a look at the tnsnames.ora file in the PHXPROD1 wallet you will see how this alias is defined, something like this:
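Here is a redacted example of what such an entry looks like (the host, service name and certificate DN below are placeholders, not real values; your wallet will contain the actual ones):

```
phxprod1_high = (description=
  (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=xxxxx_phxprod1_high.adb.oraclecloud.com))
  (security=(ssl_server_cert_dn="CN=...")))
```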
Notice that the consumer reads the database password from an environment variable – so you’ll need to set that variable wherever you are going to run it (note – this is not my real password, just an example):
export DB_PASSWORD=Welcome123##
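The connection setup itself is not shown above; a sketch using Oracle UCP (the wallet directory name Wallet_PHXPROD1 and the admin user are assumptions) might look like this:

```java
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

// build a pooled DataSource that connects via the wallet alias
PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
ds.setURL("jdbc:oracle:thin:@phxprod1_high?TNS_ADMIN=./Wallet_PHXPROD1");
ds.setUser("admin");
// the password comes from the environment variable, never hard-coded
ds.setPassword(System.getenv("DB_PASSWORD"));
```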
Next we set up a TopicConnection, start a JMS Session, look up our Topic and create a Subscriber. This is all fairly standard JMS stuff 🙂
// create a JMS topic connection and session
TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(ds);
TopicConnection conn = tcf.createTopicConnection();
conn.start();
TopicSession session = (AQjmsSession)
conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
// create a subscriber on the topic
Topic topic = ((AQjmsSession) session).getTopic(username, topicName);
AQjmsTopicSubscriber subscriber =
(AQjmsTopicSubscriber) session.createDurableSubscriber(topic, "BOOK");
System.out.println("Waiting for messages...");
I created a Durable Subscriber and named it BOOK. We’ll see that name again later, remember that!
Finally, we are going to just wait for messages forever and print them out.
// wait forever for messages to arrive and print them out
while (true) {
    AQjmsTextMessage message = (AQjmsTextMessage) subscriber.receive(1_000);
    if (message != null) {
        if (message.getText() != null) {
            System.out.println(message.getText());
        } else {
            System.out.println();
        }
    }
    session.commit();
}
Normally, we would not wait forever, and we’d clean up our resources, but since this is just a small example consumer, we’ll make some allowances 🙂
Ok, that takes care of our consumer. We won’t run it yet, since we have not created the topics. Let’s do that now!
Create the topics
We are going to create two topics, one in each database instance/region, and configure propagation between them. Let’s review what we want:
Ashburn (producer side): the ASHPROD1 database instance with the ASH_TOPIC topic
Phoenix (consumer side): the PHXPROD1 database instance with the PHX_TOPIC topic
Navigate back to your ASHPROD1 Autonomous Database in the OCI Console and click on the “Database Actions” button:
Note that your browser might think this is a pop-up and block it. If so, clicking on the button again usually lets the browser know you really meant to open it 🙂
In the Database Actions page, click on the “SQL” card to open the SQL Worksheet:
If you get the tour, you can click on “Next” or the “X” to close it.
We are just going to create our topics in the ADMIN schema. In real life, you would probably create a new user/schema to keep your topics in, perhaps several so that you can group them for easier administration. You can create topics with Java or PL/SQL. For this example, we will use PL/SQL.
Here are the commands to create and start our new topic, ASH_TOPIC:
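The commands themselves are not reproduced here; they would look roughly like this sketch (using the 19c procedure name, per the note below):

```
begin
    dbms_aqadm.create_sharded_queue(
        queue_name         => 'ASH_TOPIC',
        multiple_consumers => true);
    dbms_aqadm.start_queue('ASH_TOPIC');
end;
/
```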
If you are using 21c, instead of create_sharded_queue, you should use create_transactional_event_queue – that procedure was renamed in 21c.
You can put these commands into the worksheet at the top (1), then click on the “Run Statement” button (2). You will see the result in the “Script Output” window (3) as shown below:
If you want to check, you can run this query to see details of the queues and topics in your schema:
select * from user_queues;
Now, we need to go to our PHXPROD1 database and create the PHX_TOPIC there. Just repeat what you just did for ASHPROD1 on the PHXPROD1 database and remember to change the name of the topic in the commands that you run!
Create the Database Link
Great, our topics are ready to go! Next, we need to create a Database Link from the ASHPROD1 database to the PHXPROD1 database. The Database Link will allow us to perform actions against the remote database, in this case, to enqueue messages on the remote topic.
Since our databases are using TLS, we need to make the remote database (PHXPROD1) wallet available to the ASHPROD1 database, so that it can authenticate. The easiest way to do this is to upload the files we need into an Object Store bucket.
Let’s create the bucket. In the OCI Console, make sure you are in the Ashburn region, then click on the “hamburger” menu (the three lines at the top left), then “Storage” and then “Buckets”:
Then click on the “Create Bucket” button. Give your bucket a name, I used dblinks and click on the “Create” button. All the defaults are fine for what we need:
Notice that your bucket is private:
Click on the “Upload” button to upload a file:
Then click on the “select files” link to choose the file. We need the file called cwallet.sso in the wallet we downloaded for the PHXPROD1 database (the remote database):
Once the upload completes you can close that dialog and then click on the “three dots” (1) next to the file we just uploaded and choose the “Create Pre-Authenticated Request” (2) option:
The defaults are what we want here – we want to be able to read this one object only. If you want to change the expiration to something like 2 days, just to be on the safe side, that’s not a bad idea at all! Click on the “Create Pre-Authenticated Request” button:
Make sure you take a copy of the URL, you won’t be able to get it again!
Ok, now we are ready to create the link. Open the SQL Worksheet for the ASHPROD1 database (the local/source database) and run these commands. You will need to get the right values for several fields before you run this, I’ll tell you where to get them next:
create or replace directory AQ_DBLINK_CREDENTIALS
    as 'aq_dblink_credentials';

BEGIN
    DBMS_CLOUD.GET_OBJECT(
        object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/p/xxxx/n/xxxx/b/dblinks/o/cwallet.sso',
        directory_name => 'AQ_DBLINK_CREDENTIALS',
        file_name => 'cwallet.sso');
    DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'CRED',
        username => 'ADMIN', -- remote db has case-sensitive login enabled, must be uppercase
        password => 'Welcome123##');
    DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
        db_link_name => 'PHXPROD1',
        hostname => 'adb.us-phoenix-1.oraclecloud.com',
        port => '1522',
        service_name => 'xxxxx.adb.oraclecloud.com',
        ssl_server_cert_dn => 'CN=adwc.uscom-east-1.oraclecloud.com, OU=Oracle BMCS US, O=Oracle Corporation, L=Redwood City, ST=California, C=US',
        credential_name => 'CRED',
        directory_name => 'AQ_DBLINK_CREDENTIALS');
END;
/
In the GET_OBJECT call, the object_uri needs to be that URL that you just copied from the Pre-Authenticated Request.
In the CREATE_CREDENTIAL call, the username should be the user for the remote (PHXPROD1) database – we can just use ADMIN. Note that this must be in upper case since Autonomous Database is configured for case-sensitive login by default. The password should be the password for that user.
In the CREATE_DATABASE_LINK call, the db_link_name is what we are going to use to refer to the remote database. I just used the name of the database – you’ll see later why that makes things more intuitive. You can get the values for the hostname, port, service_name and ssl_server_cert_dn fields from the wallet you downloaded. Make sure you use the wallet for the PHXPROD1 database. You will find the right values in the tnsnames.ora file, and you can just copy them in here. Here’s an example, I’ve bolded the values we need:
Once you have all the right values, paste this into the SQL Worksheet and click on the “Run Script” button:
You can check it worked by doing a query through the database link. For example, let’s get a list of the queues/topics on the remote database. We are entering this query on the ASHPROD1 instance, using the database link (“@PHXPROD1“) to have it run on the other database, notice that the output shows the topic PHX_TOPIC we created in the PHXPROD1 database:
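For example, a query along these lines (the column list is illustrative):

```
select name, queue_type from user_queues@PHXPROD1;
```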
Start message propagation
Ok, now we are ready to start propagating messages! (Yay!)
We want to run these commands in the SQL Worksheet on the ASHPROD1 database (the source/local database):
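The commands are not reproduced here, but the setup would look roughly like this sketch: subscribe the remote topic (through the database link) to the local one, then schedule propagation with no latency. The ADMIN schema prefix matches where we created the topics:

```
begin
    -- subscribe the remote topic (via the database link) to the local topic
    dbms_aqadm.add_subscriber(
        queue_name     => 'ASH_TOPIC',
        subscriber     => sys.aq$_agent(null, 'ADMIN.PHX_TOPIC@PHXPROD1', null),
        queue_to_queue => true);
    -- start propagating immediately
    dbms_aqadm.schedule_propagation(
        queue_name        => 'ASH_TOPIC',
        destination       => 'PHXPROD1',
        destination_queue => 'ADMIN.PHX_TOPIC',
        latency           => 0);
end;
/
```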
In the producer, we create a TopicPublisher and send a simple JMS text message to the topic.
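The producer’s core, sketched under the same assumptions as the consumer (the session setup is the same as shown earlier and is omitted; the message text matches the output shown below):

```java
// look up the topic and create a publisher on it
Topic topic = ((AQjmsSession) session).getTopic(username, "ASH_TOPIC");
AQjmsTopicPublisher publisher =
    (AQjmsTopicPublisher) session.createPublisher(topic);

// create and publish a simple JMS text message, then commit
AQjmsTextMessage message =
    (AQjmsTextMessage) session.createTextMessage("hello from ashburn, virginia!");
publisher.publish(topic, message);
session.commit();
```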
Let’s run our producer now:
export DB_PASSWORD=Welcome123## <-- use your real password!
mvn clean compile exec:exec -P producer
When that finishes (you’ll see a “BUILD SUCCESS” message) go and have a look at your consumer, you should see something like this:
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ propagation ---
Waiting for messages...
hello from ashburn, virginia!
Yay! It worked! We just published a message on the ASH_TOPIC in the ASHPROD1 instance and it was propagated to PHXPROD1 for us and our consumer read it off the PHX_TOPIC in PHXPROD1.
Here’s an interesting query we can run to see what happened:
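The query is not reproduced here; something along these lines, run against AQ$ASH_TOPIC on the Ashburn side and AQ$PHX_TOPIC on the Phoenix side, shows the consumer for each message (a sketch, the exact column list may differ from the original):

```
select consumer_name, msg_state, count(*)
from aq$ash_topic
group by consumer_name, msg_state;
```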
Notice the consumer names – in the local ASHPROD1 instance, the consumer is AQ$_P_106126_92PHXPROD1 (yours will probably be slightly different.) That’s the propagation consumer that is running to propagate the messages to PHXPROD1.
But in the PHXPROD1 instance, the consumer is BOOK! That’s the name we gave to our consumer:
I have been using Oracle REST Data Services (you might know it as “Database Actions”) with my Oracle Autonomous Database for a while now, and I wanted to play with some new features, which led me to want to install my own (“customer managed” or “standalone”) ORDS instance. It took me a few goes, and some help from Jeff Smith (yes, that Jeff Smith) to get it right, so I thought it would be good to document how I got it working!
In this example, I am going to use an Oracle 21c database, and I will set up ORDS 22.1 in one of the pluggable databases. Once we have it up and running, we will use Database Actions and look at some of the services in the REST service catalog.
Setting up a database
First, of course I needed a database to play with. I fired up a 21c database in a container for this exercise. You will need to go accept the license agreement before you can pull the container image.
Go to https://container-registry.oracle.com and navigate to “Database” and then “enterprise” and click on the button to accept the agreement. You may have to log in if you are not already.
You will also need to log in to Oracle Container Registry with your container runtime, in this post I am using Docker. This will prompt for your username and password:
docker login container-registry.oracle.com
Now we can start up a database in a container. Here is the command I used – I set a password and the SID/service names, and made sure to expose the database port so we can access the database from outside the container:
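The command is not shown above; it would look something like this (a sketch: the container name and password match the examples used below, and ORACLE_SID/ORACLE_PDB/ORACLE_PWD are the environment variables the OCR enterprise image documents):

```
docker run -d --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_SID=orcl \
  -e ORACLE_PDB=pdb1 \
  -e ORACLE_PWD=Welcome123## \
  container-registry.oracle.com/database/enterprise:latest
```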
Note: If you use different names, you will need to adjust the example commands appropriately! Also, if you want to be able to restart this database without losing all your data, you’ll want to mount a volume – the OCR page has details on how to do that.
It takes a few minutes to start up. You can watch the logs using this command, and you will need to wait until you see the message indicating it is ready to use:
docker logs -f oracle-db
When the database is ready, we can log on and give the necessary privileges to our PDB admin user.
sqlplus sys/Welcome123##@//localhost:1521/orcl as sysdba
SQL> alter session set container = PDB1;
SQL> grant dba to pdbadmin;
Ok, now we are ready to install ORDS!
Installing ORDS
First step is to download it, of course. Here is the site to get the latest version of ORDS:
Once you have it downloaded, just unzip it into a new directory. I unzipped it into /home/mark/ords.
The steps that I am going to describe here are described in more detail in the documentation.
Now we want to run the pre-install script to set up the necessary privileges. I am using the pdbadmin user, the admin user in my PDB. This script will take just a few moments to run:
sqlplus sys/Welcome123##@//localhost:1521/pdb1 as sysdba \
@scripts/installer/ords_installer_privileges.sql pdbadmin
Great, now we can run the installer. I used the interactive installer, which will ask you for the necessary information and let you type it in as you go. It is also possible to do a “silent” install by providing all of the information on the command line – the documentation explains how to do this.
Create a directory to hold the configuration and start the interactive installer:
cd /home/mark/ords
export PATH=/home/mark/ords/bin:$PATH
mkdir config
ords --config /home/mark/ords/config install
Here’s what the interactive install dialog looks like, I highlighted the data I entered in bold, mostly I just took the defaults:
Oracle REST Data Services - Interactive Install
Enter a number to select the type of installation
[1] Install or upgrade ORDS in the database only
[2] Create or update a database pool and install/upgrade ORDS in the database
[3] Create or update a database pool only
Choose [2]:
Enter a number to select the database connection type to use
[1] Basic (host name, port, service name)
[2] TNS (TNS alias, TNS directory)
[3] Custom database URL
Choose [1]:
Enter the database host name [localhost]:
Enter the database listen port [1521]:
Enter the database service name [orcl]: pdb1
Provide database user name with administrator privileges.
Enter the administrator username: pdbadmin
Enter the database password for pdbadmin:
Connecting to database user: pdbadmin url: jdbc:oracle:thin:@//localhost:1521/pdb1
Retrieving information.
Enter the default tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA and ORDS_PUBLIC_USER [TEMP]:
Enter a number to select additional feature(s) to enable:
[1] Database Actions (Enables all features)
[2] REST Enabled SQL and Database API
[3] REST Enabled SQL
[4] Database API
[5] None
Choose [1]:
Enter a number to configure and start ORDS in standalone mode
[1] Configure and start ORDS in standalone mode
[2] Skip
Choose [1]:
Enter a number to use HTTP or HTTPS protocol
[1] HTTP
[2] HTTPS
Choose [1]:
Enter the HTTP port [8080]:
Note: I just used HTTP, but if you want to use HTTPS, you will probably want to create some certificates and configure them in the installer. Here are some commands to create a self-signed certificate and convert the key to the DER format ORDS requires:
# these are optional - only required if you want to use HTTPS
openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout ords.key.pem \
-days 365 -out ords.pem
openssl x509 -in ords.pem -text -noout
openssl pkcs8 -topk8 -inform PEM -outform DER -in ords.key.pem -out ords.der -nocrypt
Once you complete the interview, the installer will perform the installation. It takes just a couple of minutes, and it will start up the ORDS standalone server for you. If you need to stop it (with Ctrl-C) you can restart it with this command:
ords --config /home/mark/ords/config serve
Ok, now we have ORDS up and running, we are going to need a user!
Preparing an ORDS user
Let’s create a regular database user and give them access to ORDS.
Using the PDB admin user, we can create a new user and give them the necessary permissions to use ORDS:
sqlplus pdbadmin/Welcome123##@//localhost:1521/pdb1
SQL> create user mark identified by Welcome123##;
SQL> grant connect, resource to mark;
SQL> grant unlimited tablespace to mark;
SQL> begin
ords.enable_schema(
p_enabled => true,
p_schema => 'mark',
p_url_mapping_type => 'BASE_PATH',
p_url_mapping_pattern => 'mark',
p_auto_rest_auth => false
);
commit;
end;
/
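With ORDS running in standalone mode, you can now open Database Actions in your browser. Assuming the default HTTP port chosen above, the sign-in page should be at a URL like this:

```
http://localhost:8080/ords/sql-developer
```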
You may need to change the hostname or port if you used something different.
You should see the login page:
Enter your username (I used mark), then press Next and enter the password (I used Welcome123##). This will take you to the main “Database Actions” page:
Let’s create a table and enter some data 🙂
Click on the SQL card (top left) to open the SQL worksheet, and enter these statements:
create table city (
    name       varchar2(64),
    population number
);
insert into city values ('Tokyo', 37000000);
insert into city values ('Delhi', 29000000);
insert into city values ('Shanghai', 26000000);
insert into city values ('Sao Paulo', 21000000);
insert into city values ('Mexico City', 21000000);
Click on the “Run Script” icon to execute these statements – it’s the one the red arrow is highlighting:
Let’s expose that table as a REST service!
Creating a REST service
ORDS allows us to easily expose an SQL statement, or a PL/SQL block as a REST service. Let’s navigate to the REST page – click on the “hamburger menu” (1) and then the REST page (2):
The basic structure is that we create a “module” which contains “templates” which in turn contain “handlers.” So let’s start by creating a module. Click on the “Modules” option in the top menu.
Then click on the “Create Module” button in the top right corner:
Give the module a name, I used mark and a base path, I used /api/. Since we are just playing here, set the “Protected by Privilege” to Not protected. Obviously, in real life, you’d set up authentication, for example using OAuth, which ORDS provides out of the box – but that’s another post 🙂 Finally, click on the “Create” button to create the module. It will now appear in the modules page:
Click on the module name (“mark” in the example above) to open the module, and click on the “Create Template” button on the right hand side:
Enter a URI Template for this service, I used cities for mine, then click on the “Create” button:
Now you will see the template page. Click on the “Create Handler” button on the right:
In the “Create Handler” dialog, we provide details of the service. Notice that you can choose the HTTP Method (GET, POST, DELETE, etc.) and you can control paging. For this service, we want to create a GET handler and we want the “Source Type” to be Collection Query which lets us enter an SQL statement. While you are here, have a look in the pull down and notice that you can also use PL/SQL! You can also use bind variables in here, so we can accept parameters and use them in the query or PL/SQL code.
For now, enter this simple query, then click on the “Create” button:
select * from city
Note: You should not include the semi-colon!
Once you have created the handler, you will see the details view, where you can test it by clicking on the “Run” button (1):
Notice that you see the results in the bottom half of the screen. You also have the URL for the service provided (2) and there is a “copy” button on the right hand side. Let’s test our service using cURL:
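Given the module base path /api/ and the cities template created above (and assuming ORDS is listening on localhost:8080), the call would look something like this:

```
curl http://localhost:8080/ords/mark/api/cities
```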
ORDS also provides a heap of out of the box services for us automatically. To explore these, let’s use Postman, which is a very popular tool for REST testing. You can download it from the Postman website.
Jeff Smith has a great post here that explains how to import all the ORDS REST APIs into Postman.
When you open Postman, click on the “Import” button:
Now you need the right URL! If you have been following along and using the same names as me your URL will be:
If you used a different user, you will need to change “mark” to your username in that URL. After you click on “Import” chose “Link” as the type and enter your URL:
One good thing about Postman is that we can set the authentication parameters at the top level, on that “ORDS Database API” folder that you just created. Open that and click on the “Authorization” tab, choose “Basic Auth” as the type and enter the database user and password:
In this folder you will see a whole collection of services for all kinds of things. Let’s try a simple one! Navigate to the “Get Database version” service and click on the “Send” button in the top right corner. You’ll see the result data in the bottom pane:
Well, there you go! We installed ORDS, used the Database Actions and REST interfaces, created a service and explored the out of the box services! I hope you enjoyed!
It provides security, observability, and network traffic management for cloud native applications without requiring any changes to the applications.
It’s a free managed service and it’s available in all commercial regions today. Check it out!
Today I was playing with the Kafka Java Client for TEQ, which allows you to use Oracle Transactional Event Queues (formerly known as Sharded Queues) in the Oracle Database just like Kafka.
In this preview version, there are some limitations documented in the repository, but the main one to be aware of is that you need to use the okafka library, not the regular kafka one, so you would need to change existing kafka client code if you wanted to try out the preview.
Preparing the database
To get started, I grabbed a new Oracle Autonomous Database instance on Oracle Cloud, and I opened up the SQL Worksheet in Database Actions and created myself a user. As the ADMIN user, I ran the following commands:
create user mark identified by SomePassword; -- that's not the real password!
grant connect, resource to mark;
grant create session to mark;
grant unlimited tablespace to mark;
grant execute on dbms_aqadm to mark;
grant execute on dbms_aqin to mark;
grant execute on dbms_aqjms to mark;
grant select_catalog_role to mark;
grant select on gv$instance to mark;
grant select on gv$listener_network to mark;
commit;
And of course, I needed a topic to work with, so I logged on to SQL Worksheet as my new MARK user and created a topic called topic1 with these commands:
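Creating and starting a topic with DBMS_AQADM looks something like this – the parameter values here are illustrative, so check the package documentation for the full set of options:

```sql
begin
    -- create a sharded queue (19c name; renamed in 21c)
    dbms_aqadm.create_sharded_queue(
        queue_name => 'topic1',
        multiple_consumers => true);
    -- the queue must be started before messages can be enqueued or dequeued
    dbms_aqadm.start_queue('topic1');
end;
/
```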
Note that this is for Oracle Database 19c. If you are using 21c, create_sharded_queue is renamed to create_transactional_event_queue, so you will have to update that line.
The topic is empty right now, since we just created it, but here are a couple of queries that will be useful later. We can see the messages in the topic, with details including the enqueue time, status, etc., using this query:
select * from topic1;
This is a useful query to see a count of messages in each status:
select msg_state, count(*)
from aq$topic1
group by msg_state;
Building the OKafka library
Since the preview is not currently available in Maven Central, we need to build the OKafka library ourselves and install it into our local Maven repository so it is available as a dependency.
First, clone the repository:
git clone https://github.com/oracle/okafka
Now we can build the uberjar with the included Gradle wrapper:
cd okafka
./gradlew fullJar
This will put the JAR file in gradle/build/libs and we can install this into our local Maven repository using this command:
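The install command will look something like the following – note that the exact JAR file name and the Maven coordinates depend on the version you built, so treat these values as placeholders and adjust them to match your build:

```shell
# install the uberjar into the local Maven repository
# (adjust the file name, group/artifact and version to match your build)
mvn install:install-file \
    -Dfile=gradle/build/libs/okafka-0.8-full.jar \
    -DgroupId=org.oracle.okafka \
    -DartifactId=okafka \
    -Dversion=0.8 \
    -Dpackaging=jar
```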
I am using Java 17 for this example, but you could use anything from 1.8 onwards – just update the version in the properties if you are using an earlier version.
Let’s walk through this code and talk about what it does.
First, let’s notice the imports. We are importing the OKafka versions of the familiar Kafka classes. These have the same interfaces as the standard Kafka ones, but they work with Oracle TEQ instead:
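For the producer they look something like this – the package names are taken from the okafka repository, so double-check them against the version you built:

```java
// OKafka's drop-in equivalents of the standard Kafka producer classes
import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;
```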
In the main() method we first set the log level and then load some properties from our producer.properties config file. The getProperties() method at the end of the file is fairly standard: it just reads the file and returns the contents as a new Properties object.
Let’s see what’s in that producer.properties file, which is located in the src/main/resources directory:
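As a sketch, the file looks something like this – the connection values are placeholders for your own instance, and the property names are based on the examples in the okafka repository, so check them against the version you built:

```properties
# Oracle Autonomous Database connection details (placeholders)
oracle.service.name=myquickstart_high.adb.oraclecloud.com
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL

# familiar Kafka-style properties, but using the OKafka serializers
bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522
topic.name=topic1
key.serializer=org.oracle.okafka.common.serialization.StringSerializer
value.serializer=org.oracle.okafka.common.serialization.StringSerializer
```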
There are two groups of properties in there. The first group provides details about my Oracle Autonomous Database instance, including the location of the wallet file – we’ll get that and set it up in a moment.
The second group are the normal Kafka properties that you might expect to see, assuming you are familiar with Kafka. Notice that the bootstrap.servers lists the address of my Oracle Autonomous Database, not a Kafka broker! Also notice that we are using the serializers (and later, deserializers) provided in the OKafka library, not the standard Kafka ones.
Next, we set the topic name by reading it from the properties file. If it is not there, the second argument provides a default/fallback value:
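Assuming the property is called topic.name (my guess – use whatever key your properties file defines), reading it with a fallback is a one-liner, since getProperty takes the default as its second argument:

```java
import java.util.Properties;

public class TopicNameExample {
    public static void main(String[] args) {
        // in the real code, props is loaded from producer.properties
        Properties props = new Properties();
        // the second argument is the default used when the key is absent
        String topicName = props.getProperty("topic.name", "topic1");
        System.out.println(topicName);
    }
}
```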
And now we are ready to create the producer and send some messages:
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<String, String>(topicName, 0, "key", "value " + i));
    }
    System.out.println("sent 100 messages");
} catch (Exception e) {
    e.printStackTrace();
}
We created the KafkaProducer and for this example, we are using String for both the key and the value.
We have a loop to send 100 messages, which we create with the ProducerRecord class. We are just setting them to some placeholder data.
Ok, that’s all we need in the code. But we will need to get the wallet and set it up so Java programs can use it to authenticate – have a look at this post for the details. In short: download the wallet from the OCI console, unzip it into a directory called wallet in the same directory as the pom.xml, edit the sqlnet.ora to set the DIRECTORY to the right location (e.g. /home/mark/src/okafka/wallet for me), and then add your credentials using the setup_wallet.sh script I showed in that post.
Finally, you need to add these lines to the ojdbc.properties file in the wallet directory to tell OKafka the user to connect to the database with:
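Something like the following, with your own credentials of course – and keep in mind a cleartext password in a file is really only OK for experimenting:

```properties
# tell OKafka which database user to connect as (placeholder values)
user=mark
password=SomePassword
```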
You can see it dumps out the properties, and then after some informational messages you see the “sent 100 messages” output. Now you might want to go and run that query to look at the messages in the database!
Now, let’s move on to creating a consumer, so we can read those messages back.
Creating the Consumer
The consumer is going to look very similar to the producer, and it will also have its own properties file. Here’s the contents of the properties file first – put this in src/main/resources/consumer.properties:
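Here is a sketch of what it might contain – as with the producer, the connection values are placeholders and the property names should be checked against the okafka repository’s examples:

```properties
# Oracle Autonomous Database connection details (placeholders)
oracle.service.name=myquickstart_high.adb.oraclecloud.com
oracle.net.tns_admin=/home/mark/src/okafka/wallet
security.protocol=SSL
bootstrap.servers=adb.us-ashburn-1.oraclecloud.com:1522

# consumer-specific properties
topic.name=topic1
group.id=consumer_group_1
enable.auto.commit=true
max.poll.records=100
key.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
value.deserializer=org.oracle.okafka.common.serialization.StringDeserializer
```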
A lot of this is the same as the producer, so let’s walk through the parts that are different.
First, we load a different properties file – the consumer one – which has a few properties that are relevant for consumers. In particular, we are setting max.poll.records to 100, so we’ll only read at most 100 messages off the topic at a time.
Here’s how we create the consumer:
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList(topicName));
Again, you may notice that this is very similar to Kafka. We are using String as the type for both the key and value. Notice we provided the appropriate deserializers in the property file, the ones from the OKafka library, not the standard Kafka ones.
We open our consumer, poll for messages (for 30 seconds), print out some information about each message, and then close our consumer. Again, this is very simple, but it’s enough to test consuming messages.
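The poll-and-print part is a sketch like this, using the standard Kafka accessor methods – note that depending on the okafka version, poll may take a long in milliseconds rather than a Duration:

```java
// poll for up to 30 seconds, print some details of each record, then clean up
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(30000));
for (ConsumerRecord<String, String> record : records) {
    System.out.println("key: " + record.key()
            + ", value: " + record.value()
            + ", partition: " + record.partition());
}
consumer.close();
```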
We can run this and we should see all of the message data in the output, here’s how to run it, and an excerpt of the output:
So there you go! We successfully created a very simple producer and consumer and we sent and received messages from a topic using the OKafka library and Oracle Transactional Event Queues!
Today I needed to load some data in my Oracle Autonomous Database running on Oracle Cloud (OCI). I found this great article that explained just what I needed!
In this post, I am going to explain how to configure a standalone Java (SE) application to connect to an Oracle Autonomous Database instance running in Oracle Cloud using Mutual TLS.
The first thing you are going to need is an Oracle Autonomous Database instance. If you are reading this post, you probably already know how to get one. But just in case you don’t – here’s a good reference to get you started – and remember, this is available in the “always free” tier, so you can try this out for free!
When you look at your instance in the Oracle Cloud (OCI) console, you will see there is a button labelled DB Connection – go ahead and click on that:
Viewing the Autonomous Database instance in the Oracle Cloud Console.
In the slide out details page, there is a button labelled Download wallet – click on that and save the file somewhere convenient.
Downloading the wallet.
When you unzip the wallet file, you will see it contains a number of files, as shown below, including a tnsnames.ora and sqlnet.ora to tell your client how to access the database server, as well as some wallet files that contain certificates to authenticate to the database:
The first thing you need to do is edit the sqlnet.ora file and make sure the DIRECTORY entry matches the location where you unzipped the wallet, and then add the SSL_SERVER_DN_MATCH=yes option to the file. It should look something like this:
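Assuming the wallet was unzipped into /home/mark/blog (substitute your own directory), the edited file would contain something like:

```
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/mark/blog")))
SSL_SERVER_DN_MATCH=yes
```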
Before we set up Mutual TLS – let’s review how we can use this wallet as-is to connect to the database using a username and password. Let’s take a look at a simple Java application that we can use to validate connectivity – you can grab the source code from GitHub:
This repository contains a very simple, single class Java application that just connects to the database, checks that the connection was successful and then exits. It includes a Maven POM file to get the dependencies and to run the application.
Make sure you can compile the application successfully:
$ cd adb-mtls-sample
$ mvn clean compile
Before you run the sample, you will need to edit the Java class file to set the database JDBC URL and user to match your own environment. Notice these lines in the file src/main/java/com/github/markxnelson/SimpleJDBCTest.java:
// set the database JDBC URL - note that the alias ("myquickstart_high" in this example) and
// the location of the wallet must be changed to match your own environment
private static String url = "jdbc:oracle:thin:@myquickstart_high?TNS_ADMIN=/home/mark/blog";
// the username to connect to the database with
private static String username = "admin";
You need to update these with the correct alias name for your database (it is defined in the tnsnames.ora file in the wallet you downloaded) and the location of the wallet, i.e. the directory where you unzipped the wallet, the same directory where the tnsnames.ora is located.
You also need to set the correct username that the sample should use to connect to your database. Note that the user must exist and have at least the connect privilege in the database.
Once you have made these updates, you can compile and run the sample. Note that this code expects you to provide the password for that user in an environment variable called DB_PASSWORD:
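Running it looks something like this – I am assuming the POM is configured with the exec-maven-plugin, so check the repository’s pom.xml for the actual goal to use:

```shell
# the sample reads the database password from this environment variable
export DB_PASSWORD=SomePassword   # not the real password!
mvn clean compile exec:java
```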
Great! We can connect to the database normally, using a username and password. If you want to be sure, try commenting out the two lines that set the user and password on the data source and run this again – the connection will fail and you will get an error!
Now let’s configure it to use mutual TLS instead.
I included a script called setup_wallet.sh in the sample repository. If you prefer, you can just run that script and provide the username and passwords when asked. If you want to do it manually, then read on!
First, we need to configure the Java class path to include the Oracle Wallet JAR files. Maven will have downloaded these from Maven Central for you when you compiled the application above, so you can find them in your local Maven repository:
Next, set the environment variable USER_DEFINED_WALLET to the directory where you unzipped the wallet, i.e. the directory where the tnsnames.ora is located.
export USER_DEFINED_WALLET=/home/mark/blog
You’ll also want to change the alias in this command to match your database alias. In the example above it is myquickstart_high. You get this value from your tnsnames.ora – it’s the same one you used in the Java code earlier.
Now we are ready to run the command. This will update the wallet to add your user’s credentials and associate them with that database alias.
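As a sketch, the command uses the OracleSecretStoreTextUI class from the Oracle PKI JARs – the class name and Maven coordinates below are my best understanding of the tooling, and the setup_wallet.sh script in the repository is the authoritative version, so adjust the JAR versions to match what Maven actually downloaded:

```shell
# add your user's credentials to the wallet and associate them with the alias
# (you will be prompted for the wallet password and the user's password)
java -classpath "$HOME/.m2/repository/com/oracle/database/security/oraclepki/21.5.0.0/oraclepki-21.5.0.0.jar:$HOME/.m2/repository/com/oracle/database/security/osdt_core/21.5.0.0/osdt_core-21.5.0.0.jar:$HOME/.m2/repository/com/oracle/database/security/osdt_cert/21.5.0.0/osdt_cert-21.5.0.0.jar" \
    oracle.security.pki.OracleSecretStoreTextUI \
    -wrl "$USER_DEFINED_WALLET" \
    -createCredential myquickstart_high admin
```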
Once we have done that, we can edit the Java source code to comment out (or remove) the two lines that set the user and password:
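After the edit, those lines end up looking something like this – I am assuming the data source variable is called ods, so use whatever name the sample actually has:

```java
// the wallet now supplies the credentials, so these are no longer needed:
// ods.setUser(username);
// ods.setPassword(password);
```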
Now you can compile and run the program again, and this time it will get the credentials from the wallet and will use mutual TLS to connect to the database.
There you have it! We can now use this wallet to allow Java applications to connect to our database securely. This example we used was pretty simple, but you could imagine perhaps putting this wallet into a Kubernetes secret and mounting that secret as a volume for a pod running a Java microservice. This provides separation of the code from the credentials and certificates needed to connect to and validate the database, and helps us to build more secure microservices. Enjoy!
I recently did a talk with Peter Nagy where we compared Java and Go microservices performance. We published a write up in the Helidon blog over at Medium.
Copyright 2009-2022 Mark Nelson and other contributors. All Rights Reserved. The views expressed in this blog are our own and do not necessarily reflect the views of Oracle Corporation. All content is provided on an ‘as is’ basis, without warranties or conditions of any kind, either express or implied, including, without limitation, any warranties or conditions of title, non-infringement, merchantability, or fitness for a particular purpose. You are solely responsible for determining the appropriateness of using or redistributing and assume any risks.
The header image is a photograph of a cafe in Ravello, Italy, taken by one of my favourite photographers, Roland Slee. Used with permission. Copyright Roland Slee.