Session catalog for DevLive Level Up 2023 released!

Hi again! In this earlier post, I mentioned that I am speaking at Level Up 2023. The session catalog has just been released on the event website. You can find my sessions in this stream:

Data strategies for developers – Sessions at a glance

I hope to see some of you there!

Posted in Uncategorized | Tagged , , , , , , | Leave a comment

I’m speaking at Level Up 2023

Hi! I am going to be speaking at the Level Up 2023 event at Oracle Redwood Shores in March. I will be talking about our new Developer Previews for both Oracle Backend for Spring Boot and Oracle Backend for Parse Platform, and running a hands-on lab where we will use those to build a “Cloud Banking” application in Spring Boot, complete with a web and mobile front end user interface. In the lab we’ll explore topics like service discovery, external configuration, workflow, API management, fault tolerance and observability.

If you’re in the Bay Area and you’d like to attend in person – or if you’d like to attend from anywhere digitally – you can find more information and register here:


A first Spring Boot microservice with Oracle

In this post, I want to walk through creating a first simple Spring Boot microservice using Oracle. If you want to follow along, see this earlier post about setting up a development environment.

I want to create a “customer” microservice that I can use to create/register customers, and to get customer details. I want the customer information to be stored in my Oracle database. I am going to create a dedicated schema for this microservice, where it will keep its data. I could create a separate pluggable database, but that seems a little excessive given the simplicity of this service.

So my “customer” data will have the following attributes:

  • Customer ID
  • First name
  • Surname
  • Email address

My service will have endpoints to:

  • Create a customer
  • List all customers
  • Get a customer by ID

I am going to use Spring Boot 3.0.0 with Java 17 and Maven. Spring Boot 3.0.0 was just released (when I started writing this post) and has support for GraalVM native images and better observability and tracing.

Create the project

Let’s start by creating a project. If you set up your development environment like mine, with Visual Studio Code and the Spring Extension Pack, you can type Ctrl+Shift+P to bring up the actions and type in “Spring Init” to find the “Spring Initializr: Create a Maven project” action, then hit enter.

It will now ask you a series of questions. Here’s how I set up my project:

  • Spring Boot Version = 3.0.0
  • Language = Java
  • Group ID = com.redstack
  • Artifact ID = customer
  • Packaging = JAR
  • Java version = 17
  • Dependencies:
    • Spring Web
    • Spring Data JPA
    • Oracle Driver

After that, it will ask you which directory to create the project in. Once you answer all the questions, it will create the project for you and then give you the option to open it (in a new Visual Studio Code window).

Note: If you prefer, you can go to the Spring Initializr website and answer the same questions there instead. It will then generate the project and give you a zip file to download. If you choose this option, just unzip the file and open it in Visual Studio Code.

Whichever approach you take, you should end up with a project open in Code that looks a lot like this:

I like to trim out a few things that we don’t really need. I tend to delete the “.mvn” directory, the “mvnw” and “mvnw.cmd” files and the “HELP.md” file. Now is also a great time to create a git repository for this code. I like to add/commit all of these remaining files and keep that as my starting point.

Explore the generated code

Here’s the Maven POM (pom.xml) that was generated:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
	 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.0.0</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.redstack</groupId>
	<artifactId>customer</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>customer</name>
	<description>Demo project for Spring Boot</description>
	<!-- ... the dependencies (Spring Web, Spring Data JPA, the Oracle JDBC
	     driver) and a build section with the spring-boot-maven-plugin
	     follow, as described below ... -->
</project>

There are a few things to note here. The parent is the standard spring-boot-starter-parent and this will bring in a bunch of useful defaults for us. The dependencies list contains the items we chose in the Spring Initializr (as expected) and finally, note the build section has the spring-boot-maven-plugin included. This will let us build and run the Spring Boot application easily from Maven (with “mvn spring-boot:run“).

Let’s add one more dependency – Lombok:

		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>

Lombok offers various annotations aimed at replacing Java code that is well known for being boilerplate, repetitive, or tedious to write. We’ll use it to avoid writing getters, setters, constructors and builders.

And here is the main CustomerApplication Java class file:

package com.redstack.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CustomerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CustomerApplication.class, args);
	}

}

Nothing much to see here. Notice it has the @SpringBootApplication annotation.

Define the Customer Entity

Let’s go ahead and define our data model now. Since we are using JPA, we define our data model using a POJO. Create a file called Customer.java in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

@Entity
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Customer {

    @Id
    @SequenceGenerator(
            name = "customer_id_sequence",
            sequenceName = "customer_id_sequence"
    )
    @GeneratedValue(
            strategy = GenerationType.SEQUENCE,
            generator = "customer_id_sequence"
    )
    private Integer id;
    private String firstName;
    private String lastName;
    private String email;
}
Starting from the bottom, we see the definition of the four fields that we wanted for our Customer entity – ID, first and last names, and email address.

The id field has some annotations on it. First it has @Id which identifies it as the key. Then we have a @SequenceGenerator annotation, which tells JPA that we want to create a “sequence” in the database and gives it a name. A sequence is a database object from which multiple users may generate unique integers. The last annotation, @GeneratedValue tells JPA that this field should be populated from that sequence.

The class also has some annotations on it. It has the JPA @Entity annotation which tells JPA that this is an entity that we want to store in the database. The other annotations are Lombok annotations to save us writing a bunch of boilerplate code. @Data generates getters for all fields, a useful toString method, and hashCode and equals implementations that check all non-transient fields. It will also generate setters for all non-final fields, as well as a constructor. @Builder generates some nice APIs to create instances of our object – we’ll see how we use it later on. And @AllArgsConstructor and @NoArgsConstructor generate pretty much what their names suggest they do.
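To make the Lombok annotations a little more concrete, here is a hand-written sketch of (roughly) what @Builder generates for a class shaped like our Customer – the nested Builder class and the static builder() factory are exactly the boilerplate Lombok saves us from writing. The field set matches the entity, but everything else (class names, toString format) is illustrative only:

```java
// A hand-written sketch of (roughly) what Lombok's @Builder generates for
// a class shaped like Customer. Field names match the entity; everything
// else here is illustrative.
public class BuilderSketch {

    static class Customer {
        private final String firstName;
        private final String lastName;
        private final String email;

        private Customer(String firstName, String lastName, String email) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
        }

        // Lombok generates this static factory and the nested Builder for us.
        static Builder builder() {
            return new Builder();
        }

        static class Builder {
            private String firstName;
            private String lastName;
            private String email;

            Builder firstName(String v) { this.firstName = v; return this; }
            Builder lastName(String v)  { this.lastName = v;  return this; }
            Builder email(String v)     { this.email = v;     return this; }

            Customer build() {
                return new Customer(firstName, lastName, email);
            }
        }

        @Override
        public String toString() {
            // @Data generates a similar (not identical) toString for us
            return "Customer(" + firstName + " " + lastName + ", " + email + ")";
        }
    }

    public static void main(String[] args) {
        Customer c = Customer.builder()
                .firstName("Mark")
                .lastName("Nelson")
                .email("mark@example.com")
                .build();
        System.out.println(c); // prints: Customer(Mark Nelson, mark@example.com)
    }
}
```

With Lombok, all of that collapses to the one-line @Builder annotation on the class.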

Set up the Spring Boot Application Properties

Ok, next let’s set up the JPA configuration in the Spring Boot Application Properties. You will find a file called application.properties in src/main/resources. This file can be in either the “properties” format, or in YAML. I personally prefer to use YAML, so I renamed that file to application.yaml and here is the content:

server:
  port: 8080

spring:
  application:
    name: customer
  datasource:
    username: 'customer'
    url: jdbc:oracle:thin:@//
    password: 'Welcome123'
    driver-class-name: oracle.jdbc.driver.OracleDriver
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.OracleDialect
        format-sql: 'true'
    show-sql: 'true'

Let’s look at what we have here. First we set the port to 8080, and the application’s name to “customer”. If you prefer to use the properties format, these first two settings would look like this:

server.port=8080
spring.application.name=customer
After that we set up the data source. You can provide the JDBC URL for your Oracle Database, the username and password, and the JDBC driver class, as shown. Note that the user will need to actually exist. You can create the user in the database by running these statements as an admin user:

create user customer identified by Welcome123;
grant connect, resource to customer;
alter user customer quota unlimited on users;

The final section of config we see here is the JPA configuration where we need to declare which “dialect” we are using – this identifies what kind of SQL should be generated, in our case Oracle. The format-sql and show-sql settings are just there to make the SQL statements we see in logs easier for us to read.

The ddl-auto setting is interesting. Here’s a good article that explains the possible values and what they do. We’ve used update in this example, which “instructs Hibernate to update the database schema by comparing the existing schema with the entity mappings and generate the appropriate schema migration scripts.” That’s a reasonable choice for this scenario, but you should be aware that there are probably better choices in some cases. For example, if you are actively developing the entity and making changes to it, create-drop might be better for you. And if the database objects already exist and you just want to use them, then none might be the best choice – we’ll talk more about this in a future post!

Create the JPA Repository Class

Next, let’s create the JPA Repository class which we can use to save, retrieve and delete entities in/from the database. Create a file called CustomerRepository.java in src/main/java/com/redstack/customer with this content:

package com.redstack.customer;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Integer> {
}
Ok, that takes care of our JPA work. Now, let’s get started on our services.

Create the Customer Service

Let’s start with a service to register (create) a new customer. We can start by defining the input data that we expect. Let’s create a CustomerRegistrationRequest.java in the same directory with this content:

package com.redstack.customer;

public record CustomerRegistrationRequest(
    String firstName,
    String lastName,
    String email) {
}

Notice that we did not include the ID, because we are going to get that from the database sequence. So we just need the client/caller to give us the remaining three fields.

Next, we can create our controller. Create a new file called CustomerController.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.http.ResponseEntity;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("api/v1/customers")
public record CustomerController(CustomerService service) {

    @PostMapping
    @ResponseBody
    public ResponseEntity<String> registerCustomer(@RequestBody CustomerRegistrationRequest req) {
        service.registerCustomer(req);
        return ResponseEntity.status(HttpStatus.CREATED).body("Customer registered successfully.\n");
    }
}
So here we used a Java record to define the controller, and we ask Spring to inject the CustomerService for us. Obviously, we have not created that yet, we’ll get to that in a minute! The record has two annotations – @RestController tells Spring to expose a REST API for this record, and @RequestMapping lets us set up the URL path for this controller. Since we set the port to 8080 earlier, and assuming we just run this on our development machine for now, this REST API will have a URL of http://localhost:8080/api/v1/customers.

Next we can define the handlers. Here we have just the first one, to handle HTTP POST requests. We will add others later. Our registerCustomer method will be exposed as the handler for POST requests, because we gave it the @PostMapping annotation, and it will be able to return an HTTP response with a status code and body because we gave it the @ResponseBody annotation. This method accepts the CustomerRegistrationRequest that we defined earlier. Notice that we add the @RequestBody annotation to that method argument. This tells Spring that the data will be provided by the caller as JSON in the HTTP Request Body (as opposed to being in a query or header, etc.) And this handler simply calls the registerCustomer method in the service and passes through the data.

So, it’s time to write the service! Create a new file called CustomerService.java in the same directory with this content:

package com.redstack.customer;

import org.springframework.stereotype.Service;

@Service
public record CustomerService(CustomerRepository repository) {

    public void registerCustomer(CustomerRegistrationRequest req) {
        Customer customer = Customer.builder()
                .firstName(req.firstName())
                .lastName(req.lastName())
                .email(req.email())
                .build();
        repository.saveAndFlush(customer);
    }
}
Again, we are using a Java record for the service. Records are immutable data classes that require only the type and name of fields. The equals, hashCode, and toString methods, as well as the private, final fields and public constructor, are generated by the Java compiler. You can also include static variables and methods in records. I’m using them here to save a bunch of boilerplate code that I do not want to write.
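If records are new to you, here is a tiny standalone example showing what the compiler generates. The record components mirror our CustomerRegistrationRequest, but the class name and values here are illustrative only:

```java
// Standalone demo of what the compiler generates for a record.
public class RecordDemo {

    record Request(String firstName, String lastName, String email) { }

    public static void main(String[] args) {
        Request a = new Request("Mark", "Nelson", "mark@example.com");
        Request b = new Request("Mark", "Nelson", "mark@example.com");

        // Accessors are named after the components: firstName(), not getFirstName()
        System.out.println(a.firstName());

        // equals/hashCode are value-based: two records with the same
        // component values are equal
        System.out.println(a.equals(b)); // prints: true

        // toString lists every component and its value
        System.out.println(a);
    }
}
```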

We put the @Service annotation on the record to tell Spring that this is a service. In the record arguments, we have Spring inject an instance of our CustomerRepository which we will need to talk to the database.

For now, we just need one method in our service, registerCustomer(). We’ll add more later. This method also accepts the CustomerRegistrationRequest and the first thing we do with it is create a new Customer entity object. Notice that we are using the builder that we auto-generated with Lombok – we never wrote any code to create this builder! Yay! Then, all we need to do is use our JPA repository’s saveAndFlush() method to save that customer in the database. saveAndFlush will do an INSERT and then a COMMIT in the database.

Time to test the application!

Let’s start up our service and test it! Before we start, you might want to connect to your database and satisfy yourself that there is no CUSTOMER table there:

sql customer/Welcome123@//
SQL> select table_name from user_tables;

no rows selected

To run the service, run this Maven command:

mvn spring-boot:run

This will compile the code and then run the service. You will see a bunch of log messages appear. Around the middle, you should see something like this:

2023-02-03T11:15:37.827-05:00  INFO 8488 --- [           main] SQL dialect                              : HHH000400: Using dialect: org.hibernate.dialect.OracleDialect
Hibernate: create global temporary table HTE_customer(id number(10,0), email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), rn_ number(10,0) not null, primary key (rn_)) on commit delete rows
Hibernate: create table customer (id number(10,0) not null, email varchar2(255 char), first_name varchar2(255 char), last_name varchar2(255 char), primary key (id))

There’s the SQL that it ran to create the CUSTOMER table for us! If you’d like to, you can check in the database with this statement:

SQL> describe customer;

Name          Null?       Type
_____________ ___________ _____________________
ID            NOT NULL    NUMBER(10)
EMAIL                     VARCHAR2(255 CHAR)
FIRST_NAME                VARCHAR2(255 CHAR)
LAST_NAME                 VARCHAR2(255 CHAR)

You can also take a look at the sequence if you would like to:

SQL> select sequence_name, min_value, increment_by, last_number from user_sequences;

SEQUENCE_NAME              MIN_VALUE    INCREMENT_BY    LAST_NUMBER
_______________________ ____________ _______________ ______________
CUSTOMER_ID_SEQUENCE               1              50           1001

Now, let’s invoke the service to test it! We can invoke the service using cURL; we need to do a POST, set the Content-Type header, and provide the data in JSON format:

$ curl -i \
   -X POST \
   -H 'Content-Type: application/json' \
   -d '{"firstName": "Mark", "lastName": "Nelson", "email": ""}' \
   http://localhost:8080/api/v1/customers
HTTP/1.1 201
Content-Type: text/plain;charset=UTF-8
Content-Length: 34
Date: Fri, 03 Feb 2023 17:41:39 GMT

Customer registered successfully.

The “-i” option tells cURL to print out the response headers too. You can see that we got an HTTP 201 (created), i.e., success!

Now we see the new record in the database, as expected:

SQL> select * from customer ;

   ID EMAIL            FIRST_NAME    LAST_NAME
_____ ________________ _____________ ____________
    1                  Mark          Nelson

Great, that is working the way we wanted, so we can create customers and have them stored in the database. Now let’s add some endpoints to query customers from the database.

Add a “get all customers” endpoint

The first endpoint we want to add will allow us to get a list of all customers. To do this, let’s add this new method to our controller:

// add these imports
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseStatus;

// ...

    @GetMapping(produces = {MediaType.APPLICATION_JSON_VALUE})
    @ResponseStatus(HttpStatus.OK)
    public List<Customer> getAllCustomers() {
        return service.getAllCustomers();
    }
Here we have a getAllCustomers() method that simply calls the corresponding method in the service (we’ll write that in a moment) and returns the results. Of course, we have some annotations too. The @GetMapping tells Spring Boot that this method will be exposed as an HTTP GET method handler. The produces defines the output body’s Content-Type, in this case it will be “application/json“. The @ResponseStatus sets the HTTP status code.

Here’s the method we need to add to our CustomerService. Notice it just uses a built-in method on the repository to get the data; it’s very simple:

// add this import
import java.util.List;

// ...

    public List<Customer> getAllCustomers() {
        return repository.findAll();
    }

With those changes in place, we can restart the service and call this new GET endpoint like this:

$ curl -i http://localhost:8080/api/v1/customers
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 17:55:17 GMT


You might like to do a few more POSTs and another GET to observe what happens.

Add a “get customer by ID” endpoint

Let’s add the final endpoint that we wanted in our service. We want to be able to get a specific customer using the ID. Here’s the code to add to the controller:

// add these imports
import java.util.Optional;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// ...

    @GetMapping(path="/{id}", produces = {MediaType.APPLICATION_JSON_VALUE})
    public ResponseEntity<Customer> getCustomer(@PathVariable Integer id) {
        Optional<Customer> c = service.getCustomer(id);
        if (c.isPresent()) {
            return ResponseEntity.status(HttpStatus.OK).body(c.get());
        } else {
            return ResponseEntity.status(HttpStatus.NOT_FOUND).body(null);
        }
    }
Here we see some differences to the previous endpoint implementation. This one is a little more sophisticated. First, we have added a path to the @GetMapping annotation to add a positional parameter to the end of the path, so this endpoint will be /api/v1/customers/{id}. In the method arguments we have a @PathVariable annotation to grab that {id} from the path and use it as an argument to our method.

Also, notice that the method returns ResponseEntity<Customer>. This gives us some more control over the response, and allows us to set different HTTP status codes (and if we wanted to we could also control the headers, body, etc.) based on our own business logic.

Inside this method we call our service’s (soon to be written) getCustomer(id) method which returns an Optional<Customer>. Then we check if the Optional actually contains a Customer, indicating that a customer entity/record was found for the specified id, and if so we return it along with an HTTP 200 (OK). If the Optional is empty, we return an HTTP 404 (not found).
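As an aside, the isPresent()/get() pair can also be collapsed using Optional's map and orElse methods. Here is a plain-Java sketch of the same 200-or-404 decision, outside Spring – the Map stands in for the repository (findById's Optional behaves the same way), and the names and data are illustrative:

```java
import java.util.Map;
import java.util.Optional;

// Plain-Java sketch of the controller's 200-or-404 decision.
public class OptionalDemo {

    // Stand-in for the repository: one known customer with id 1
    static final Map<Integer, String> CUSTOMERS = Map.of(1, "Mark Nelson");

    static int statusFor(Integer id) {
        Optional<String> c = Optional.ofNullable(CUSTOMERS.get(id));
        // map() runs only when a value is present; orElse supplies the
        // "not found" result, replacing the isPresent()/get() pair
        return c.map(v -> 200).orElse(404);
    }

    public static void main(String[] args) {
        System.out.println(statusFor(1)); // prints: 200
        System.out.println(statusFor(5)); // prints: 404
    }
}
```

Both styles are fine; the explicit if/else in the controller is arguably easier to read when you also want to set a body on each branch.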

Here’s the new method to add to the service:

// add this import
import java.util.Optional;

// ...

    public Optional<Customer> getCustomer(Integer id) {
        return repository.findById(id);
    }

This one is fairly simple; we are just calling a standard built-in method on the JPA Repository class to get the data.

Now we can restart the application, and test the new endpoint by asking for customers that we know exist, and do not exist to observe the different outcomes:

$ curl -i http://localhost:8080/api/v1/customers/1
HTTP/1.1 200
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 03 Feb 2023 18:15:30 GMT


$ curl -i http://localhost:8080/api/v1/customers/5
HTTP/1.1 404
Content-Length: 0
Date: Fri, 03 Feb 2023 18:15:37 GMT

Notice the HTTP status codes are different in each case. Also, notice that the JSON returned when a customer is found is just one JSON object {…} not a list [{…}, … ,{…}] as in the get all customers endpoint.


Well, there you have it: we have completed our simple customer microservice built using Spring Boot and Oracle Database. I hope you followed along and built it too, and enjoyed learning a bit about Spring Boot and Oracle! Stay tuned for more posts on this topic, each covering a slightly more advanced topic than the last. See you soon!


Two new Backend as a Service offerings live now!

Hi everyone!

For the last few months I have been working on two projects which have just gone live with their first “Developer Preview” releases.

If you’d like to check them out and see what I have been up to, have a look at:

It’s been a lot of fun working on these, and I am really happy to be able to tell you about them at last!

The Oracle Mobile Backend as a Service offering is built on top of Parse and lets you easily build mobile and web apps using any of the Parse SDKs and have all your data stored in an Oracle Autonomous Database in JSON collections. It also includes the Parse Dashboard for managing your application data. It’s easy to install from OCI Marketplace and once the install is done, you can start hitting those APIs and building your apps right away!

The Oracle Backend as a Service for Spring Cloud lets you easily install a comprehensive runtime environment for Spring Boot applications including a Kubernetes (OKE) cluster, Spring Config Server (with the config data in Oracle Autonomous Database), Spring Eureka Service Registry, APISIX API Gateway and Dashboard, Netflix Conductor, Spring Admin Dashboard, Prometheus, Grafana, Jaeger and Open Telemetry. You can build apps using Spring Data with JPA or JDBC access to the Oracle Autonomous Database. We have included a sample custom Spring component for using Oracle Transactional Event Queueing. There’s a CLI to manage deploying apps into the environment, managing configuration and database schema for services. We also included a set of sample applications that demonstrate how to use the platform – these include service discovery, fault tolerance, distributed tracing and so on.

As “Developer Preview” implies – there’s much more to come in this space!

I am planning to write more blog posts really soon to demonstrate how to use both of these offerings. I hope you’ll check them out!


Development environment setup for Spring Boot with Oracle

Hi again! I am starting a series of posts about writing Spring Boot microservice applications with the Oracle Database. I plan to cover topics like database access, messaging, external configuration, service discovery, fault tolerance, workflow, observability and so on. But before I get started, I wanted to document how I set up my development environment.

Personally, I work on Windows 11 with the Windows Subsystem for Linux and Ubuntu 20.04. Of course you can adjust these instructions to work on macOS or Linux.


Java

First thing we need is the Java Development Kit. I used Java 17; here’s a permalink to download the latest tar for x64:

You can just decompress that in your home directory and then add it to your path:

export JAVA_HOME=$HOME/jdk-17.0.3
export PATH=$JAVA_HOME/bin:$PATH

You can verify it is installed with this command:

$ java -version
java version "17.0.3" 2022-04-19 LTS
Java(TM) SE Runtime Environment (build 17.0.3+8-LTS-111)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.3+8-LTS-111, mixed mode, sharing)

Great! Now, let’s move on to build automation.


Maven

You can use Maven or Gradle to build Spring Boot projects, and when you generate a new project from Spring Initializr (more on that later) it will give you a choice of these two. Personally, I prefer Maven, so that’s what I document here. If you prefer Gradle, I’m pretty sure you’ll already know how to set it up 🙂

I use Maven 3.8.6, which you can download from the Apache Maven website in various formats. Here’s a direct link for the zip file:

You can also just decompress this in your home directory and add it to your path:

export PATH=$HOME/apache-maven-3.8.6/bin:$PATH

You can verify it is installed with this command:

$ mvn -v
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /home/mark/apache-maven-3.8.6
Java version: 17.0.3, vendor: Oracle Corporation, runtime: /home/mark/jdk-17.0.3
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "", arch: "amd64", family: "unix"

Ok, now we are going to need an IDE!

Visual Studio Code

These days I find I am using Visual Studio Code for most of my coding. It’s free, lightweight, has a lot of plugins, and is well supported. Of course, you can use a different IDE if you prefer.

Another great feature of Visual Studio Code that I really like is the support for “remote coding.” This lets you run Visual Studio Code itself on Windows but it connects to a remote Linux machine and that’s where the actual code is stored, built, run, etc. This could be an SSH connection, or it could be connecting to a WSL2 “VM” on your machine. This latter option is what I do most often. So I get a nice friendly, well-behaved native desktop application, but I am coding on Linux. Kind of the best of both worlds!

You can download Visual Studio Code from its website and install it.

I use a few extensions (plugins) that you will probably want to get too! These add support for the languages and frameworks and give you things like completion and syntax checking and so on:

You can install these by opening the extensions tab (Ctrl-Shift-X) and using the search bar at the top to find and install them.

Containers and Kubernetes

Since our microservices applications are almost certainly going to end up running in Kubernetes, it’s a good idea to have a local test environment. I like to use “docker compose” for initial testing locally and then move to Kubernetes later.

I use Rancher Desktop for both containers and Kubernetes on my laptop. There are other options if you prefer to use something different.

Oracle Database

And last, but not least, you will need the Oracle Database container image so we can run a local database to test against. If you don’t already have it, you will need to go to Oracle Container Registry first, and navigate to “Database,” then “Enterprise,” and accept the license agreement, then pull the image with these commands:

docker login container-registry.oracle.com
docker pull container-registry.oracle.com/database/enterprise:latest

Then you can start a database with this command:

docker run -d \
   --name oracle-db \
   -p 1521:1521 \
   -e ORACLE_PWD=Welcome123 \
   container-registry.oracle.com/database/enterprise:latest

The first time you start it up, it will create a database instance for you. This takes a few minutes; you can watch the logs to see when it is done:

docker logs -f oracle-db

You will see this message in the logs when it is ready:

#########################
DATABASE IS READY TO USE!
#########################
You can then stop and start the database container as needed – you won’t need to wait for it to create the database instance each time, it will stop and start in just a second or two.

docker stop oracle-db
docker start oracle-db

You are going to want to grab its IP address for later on; you can do that with this command:

docker inspect oracle-db | grep IPAddress

This container image has SQL*Plus in it, and you can use that as a database command line client, but I prefer the new Oracle SQLcl which is a lot nicer – it has completion and arrow key navigation and lots of other cool new features. Here’s a permalink for the latest version:

You can just unzip this and add it to your path too, like Maven and Java.

You can connect to the database using SQLcl like this (use the IP address you got above):

sql sys/Welcome123@// as sysdba

Well, that’s about everything we need! In the next post we’ll get started building a Spring Boot microservice!


Oracle REST Data Services 22.3 brings new REST APIs for Transactional Event Queueing

Oracle REST Data Services 22.3 was released a couple of weeks ago, and it is now available on Oracle Autonomous Database as well! This release has a slew of new REST APIs for Oracle Transactional Event Queueing (or TxEventQ). If you have not heard of it, TxEventQ is essentially a new, faster implementation of Oracle Advanced Queueing which has been in the database for around twenty years.

Many of these new REST APIs are very similar to the Kafka REST APIs, since TxEventQ provides Kafka compatibility as one of its features.

In this post, I want to show you how to use a few of the APIs, and then I’ll give you an idea of what kinds of APIs are available and where to find more information.

The first thing to do is to grab an Autonomous Database instance. It’s available in the Always Free tier, so you can try this for free! If you are not familiar with creating one, and accessing SQL and so on – check out this free LiveLab for details.

Make sure you get a 21c database – you may need to toggle the “Always Free” option to see 21c. The APIs described in this post are supported in Oracle Database 21c (and later).

When you get into your SQL worksheet, grab the URL from the browser, it will be something like this:

Now, chop off the end of the URL and replace it with the base URL for the TxEventQ REST APIs, and save that in an environment variable to save us some typing!

export ADDR=""

And let’s create another environment variable with the authentication details. You can encode them using base64 like this, assuming your userid is admin and your password is your_password:

$ echo -n "admin:your_password" | base64

Then we can use that value to create the authentication header details:

export AUTH="Authorization: Basic YWRtaW46eW91cl9wYXNzd29yZA=="

Great, that will save us from typing those each time!
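If you prefer, you can compute the same header value in Java instead of with the base64 command – java.util.Base64 produces the identical encoding:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Computes the same Basic auth header value as `echo -n ... | base64`.
public class BasicAuthHeader {

    static String encode(String credentials) {
        return Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String header = "Authorization: Basic " + encode("admin:your_password");
        System.out.println(header);
        // prints: Authorization: Basic YWRtaW46eW91cl9wYXNzd29yZA==
    }
}
```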

Create a topic

Let’s start by creating a topic. We are going to need to know the database name for this – you can find that by running this query in your SQL worksheet:

select sys_context('userenv','db_name') from dual

You’ll need to put that database name into the URL below after “clusters” and before “topics”, in this example my database name is “XYZABC_RED1“:

curl -X POST -H "$AUTH" -H "Content-Type: application/json" -d '{"topic_name": "mark1", "partitions_count": "6"}' "$ADDR/clusters/XYZABC_RED1/topics/"

In the body we specified the name of the topic (“mark1” in this case) and how many partitions we want the topic to have. When you run this request, you’ll see output something like this:


It created our topic for us!
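If you would rather call the API from Java than cURL, the same request can be assembled with the JDK's built-in HttpClient. Here's a sketch that just builds and prints the request – the host below is a placeholder standing in for your ADDR value, and the auth string is the Basic header we built earlier:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds (but does not send) the same create-topic request as the cURL
// command above. The host is a placeholder, not a real endpoint.
public class CreateTopicRequest {

    static HttpRequest build(String addr, String auth) {
        String body = "{\"topic_name\": \"mark1\", \"partitions_count\": \"6\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(addr + "/clusters/XYZABC_RED1/topics/"))
                .header("Authorization", auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("https://example.invalid/txeventq",
                                "Basic YWRtaW46eW91cl9wYXNzd29yZA==");
        // To actually send it: HttpClient.newHttpClient().send(req, ...)
        System.out.println(req.method() + " " + req.uri());
    }
}
```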

List topics

Let’s list the topics now, try this request:

curl -X GET -H "$AUTH" "$ADDR/topics/"

The output will be a JSON list of topic names, like this. You might want to create a few more to make it more interesting!:


Get a topic

We can also get details of a single topic like this, the topic name is in the last part of the URL:

curl -X GET -H "$AUTH" "$ADDR/topics/mark1/"

The output looks like this:


Create a consumer group

Now let’s create a consumer group. Here’s the request – notice the topic name is in the body, and the name of the consumer group is the last part of the URL (“sub1” in this case):

curl -X POST -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{"topic_name": "mark1"}' \

The output from this is empty (unless you use curl’s verbose option, or there is an error), but we can easily check the result in the database by running this query:

select * from user_queue_subscribers

Publish messages

Ok, I think we’re ready to publish a message! Here’s the request:

curl -X POST -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{"records": [{"key":1,"value":"bob"}]}' \

The output will look something like this:


You can put multiple records in the body to send more than one message to the topic.
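For example, a body like this (the keys and values here are hypothetical) would publish two messages in a single request – you would pass it to the same publish endpoint with -d "$BODY". The snippet just sanity-checks that the JSON is well-formed:

```shell
# Hypothetical two-record body for the publish request above
BODY='{"records": [{"key":1,"value":"bob"},{"key":2,"value":"alice"}]}'
# Confirm it parses and count the records it contains
echo "$BODY" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)["records"]), "records")'
```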

Consume messages

Now, let’s consume the messages off that topic with our consumer sub1. Here’s the request – notice the topic name is in the body, and the consumer name is in the URL after “consumers”:

curl -X GET -H "$AUTH" \
     -H "Content-Type: application/json" \
     -d '{ "partitions": [ { "topic": "mark1", "partition": 0 } ] }' \

The output from this one (not surprisingly) looks like this:


Great, hopefully that gives you a feel for how these REST APIs for TxEventQ work!

But wait, there’s more!

Of course there are a lot more APIs available than the few I have shown you so far. They all follow a fairly similar pattern. Let’s take a look at a list of what’s available:

Topics APIs

  • Create topic, optionally with partition count
  • List topics
  • Get a topic
  • Create a consumer group
  • Publish message(s)
  • List topics in a specific cluster
  • Get a topic in a specific cluster
  • Delete a topic

Partitions APIs

  • List partitions in a topic
  • Get details of one partition in a topic
  • Get partition message offsets
  • List partitions in a topic in a cluster
  • Get details of one partition in a topic in a cluster
  • List consumer lags for a partition
  • Publish message(s) in a particular partition

Consumer Group APIs

  • List consumer groups
  • Details of one consumer group
  • Get consumer group lag summary
  • Get consumer group lags for all partitions
  • Get consumer group lags for a given partition
  • Delete consumer group
  • Create consumer group

Consumer APIs

  • Create consumer instance on consumer group
  • Delete consumer instance on consumer group
  • List topics that a consumer is subscribed to
  • Subscribe to topics
  • Unsubscribe from topics
  • Send message to given partition

Move Offset APIs

  • Fetch messages
  • Fetch messages from offset in specified partition
  • Move to beginning of partition
  • Move to end of partition
  • Get offsets in specific topics and partitions
  • Commit (set) offsets in specific partitions

Where to find more information

You can find more information about the TxEventQ REST APIs in the documentation, here:

Or if you prefer, you can open the OpenAPI specification on your database instance. The URL will be something like this, and you can search the output for “teq” to find the APIs:

I hope you enjoyed this quick introduction to the new REST APIs for Transactional Event Queueing! Of course, this is available in any Oracle database, not just Autonomous Database. If you want to use Oracle REST Data Services with your own database, you might find this post about installing a standalone version interesting too!

Posted in Uncategorized | Tagged , , , , | Leave a comment

Getting started with the new observability exporter for Oracle database

My colleague Paul Parkinson recently published our new unified observability exporter for Oracle Database on GitHub; you can read about it here. I wanted to start playing around with it to see what we can do with it.

In this post I will start with a really simple example that just gets the exporter up and running and collects a few simple metrics from the database into Prometheus. In subsequent posts, I’ll go further and look at dashboards in Grafana, and also cover the logging and tracing capabilities! But you have to start somewhere, right?

First thing we need is a database of course! I just fired one up in a container like this:

docker run -d \
       --name oracle-db \
       -p 1521:1521 \
       -e ORACLE_PWD=Welcome123 \
       -e ORACLE_SID=ORCL \
       -e ORACLE_PDB=PDB1 \

If you have not used this image before, you will first need to go to Oracle Container Registry, log in, navigate to the Database category and then the “enterprise” image, and accept the license agreement. You will also need to log in with your Docker client so you can pull the image:

docker login
# this will prompt you for your username and password

The image will take a short time to pull the first time, and the first startup will actually create the database instance, and that takes a few minutes too. You can watch the logs to see when the database is ready:

docker logs -f oracle-db

You only incur these delays the first time you start the image. After that, you can stop and start the container as needed; it will retain the data and start up very quickly.

# to stop the container:
docker stop oracle-db

# to start the container:
docker start oracle-db

Ok, so now we have a database available. Let’s connect to it and create some data to play with. You can use your favorite client – there’s SQL*Plus in that image if you don’t have anything else available. You can start it and connect to the database like this:

docker exec -ti oracle-db sqlplus pdbadmin/Welcome123@//localhost:1521/pdb1

Note: If you have not already, you might want to check out the new SQLcl command line tool which features command line completion and many other great features – check it out at

Let’s create a “customer” table and add a record:

create table customer (id number, name varchar2(256));
insert into customer (id, name) values (1, 'mark');

Great, and let’s just leave that session connected – that will come in handy later!

Now, let’s get the observability exporter and set it up.

First, you’ll need to clone the project from GitHub:

git clone
cd oracle-db-appdev-monitoring

You can build the project and create a container image (assuming you have Maven, Java and Docker installed) like this:

mvn clean package -DskipTests
docker build -t observability-exporter:0.1.0 .

If you don’t have those installed and you don’t want to – you can skip this step and just grab a pre-built container image from Oracle Container Registry:

docker pull	

If you do it this way, make sure to use the full name later when we start the exporter, not the short version!

Now we need to create a configuration file and define our metrics. I called mine mark-metrics.toml and here’s the content:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

[[metric]]
context = "system"
request = "select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')"
metricsdesc = { session_count = "Current session count." }

[[metric]]
context = "system"
request = "select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')"
metricsdesc = { active_sessions = "Active sessions." }

[[metric]]
context = "system"
request = "select (c.session_count - a.active_sessions) as inactive_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')) c, (select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')) a"
metricsdesc = { inactive_sessions = "Inactive sessions." }

[[metric]]
context = "system"
request = "select b.session_count as blocked_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and blocking_session_status = 'VALID' and con_id = sys_context('userenv','con_id')) b"
metricsdesc = { blocked_sessions = "Blocked sessions." }

I defined five metrics in this file. Each metric starts with the [[metric]] heading and can have several fields. You can see more information in the documentation here. In the spirit of keeping this first post simple, I just created basic metrics with no labels or anything fancy 🙂

Let’s take a close look at the first metric. Here it is again:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

It is in the context (or group) called customers. The metric itself is called num_custs. You can see how we use the metricsdesc to create a human-readable documentation/description for the metric. And the metric itself is defined with an SQL statement. Wow! That’s pretty cool, right? That means that anything I can write an SQL statement to get from the database can be exported as a metric! In this one I just count the number of entries in that customer table we just created.

The other four metrics are some simple queries that get the number of sessions in the database as well as how many are active, inactive and blocked. These are all in the system context. You can define whatever contexts you like.
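As an aside, the exporter’s configuration format also supports fancier things than these basic metrics. The sketch below is an assumption on my part – the labels field and the column names here are hypothetical, so check the exporter documentation before relying on it – but it shows the general idea of exposing a query column as a Prometheus label:

```toml
# Hypothetical labelled metric - the "labels" field (if supported by your
# exporter version) names query columns that become Prometheus labels
# rather than part of the metric value.
[[metric]]
context = "sessions"
labels = [ "status" ]
request = "select status, count(*) as count from v$session where username is not null and type = 'USER' group by status"
metricsdesc = { count = "Session count by session status." }
```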

When you later look at a metric in Prometheus its name will be something like this:

oracledb_customers_num_custs
Notice how the context (customers) and the metric name (num_custs) are in there.
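In other words, the exporter builds the name as oracledb_, then the context, then the metric name. A quick shell sketch of that convention:

```shell
# The exporter names metrics oracledb_<context>_<metric>
context="customers"
metric="num_custs"
echo "oracledb_${context}_${metric}"
```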

Ok, now that we have defined our metrics, we can start up the exporter. Let’s run it in another container, alongside the database. We can start it like this:

docker run -d \
       -v /home/mark/oracle-db-appdev-monitoring/mark-metrics.toml:/metrics.toml \
       -p 9161:9161 \
       -e DEFAULT_METRICS=/metrics.toml \
       -e DATA_SOURCE_NAME=pdbadmin/Welcome123@ \
       --name exporter \

There are a couple of things to note here. First, I am providing the configuration file we just created using the -v mount, which gives the exporter access to the metric definitions. Second, we need to tell it how to connect to the database. You’ll need to get the IP address of the database container using this command:

docker inspect oracle-db | grep IPAddress

Yours will probably be different to mine, so you’ll need to update the value of DATA_SOURCE_NAME to match your environment. And finally, a reminder – if you pulled the pre-built image down from Oracle Container Registry, you’ll need to use the fully qualified name on the last line.

Once this container starts up, grab its IP address too – we’ll need that in a minute:

docker inspect exporter | grep IPAddress

The exporter should start right up, and assuming we got the address right and made no typos, it should be working and we can fetch metrics like this:

$ curl localhost:9161/metrics
# HELP oracledb_system_inactive_sessions Inactive sessions.
# TYPE oracledb_system_inactive_sessions gauge
oracledb_system_inactive_sessions 1.0
# HELP oracledb_up Whether the Oracle database server is up.
# TYPE oracledb_up gauge
oracledb_up 1.0
# HELP oracledb_system_blocked_sessions Blocked sessions.
# TYPE oracledb_system_blocked_sessions gauge
oracledb_system_blocked_sessions 0.0
# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0
# HELP oracledb_system_active_sessions Active sessions.
# TYPE oracledb_system_active_sessions gauge
oracledb_system_active_sessions 1.0
# HELP oracledb_system_session_count Current session count.
# TYPE oracledb_system_session_count gauge
oracledb_system_session_count 2.0

If you don’t see this, check the container logs to see what the error was:

docker logs exporter

Assuming everything is working now, let’s start up Prometheus and configure it to scrape these metrics.

First, let’s create a configuration file called prometheus.yml with this content:

global:
  scrape_interval:     10s
  evaluation_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['']

  - job_name: 'oracle-exporter'
    metrics_path: '/metrics'
    scrape_interval: 10s
    scrape_timeout: 8s
    static_configs:
      - targets: ['']

The only thing you’ll need to change here is the very last line – you need to put the IP address of your exporter container in there.

Then you can start up Prometheus using this configuration like this:

docker run -d \
       --name prometheus \
       -p 9090:9090 \
       -v /home/mark/prometheus.yml:/etc/prometheus/prometheus.yml \
       prom/prometheus --config.file=/etc/prometheus/prometheus.yml

It should start right up and you can access it at http://localhost:9090

The user interface looks like this, and you can type into that search field to find a metric. If you start typing “num_custs” it should find our metric. Then hit enter, or click on the Execute button to see the value of the metric. It might take up to 10 seconds for data to be available, since we configured the scrape interval as 10 seconds in our configuration file. You should see something like this – yours will probably say 1, not 2:

If you insert some more records into that table and then check again, you’ll see the value is updated. You can also click on the Graph tab to view that as a time series graph. Try adding and removing records to see what happens. Remember to wait a little while between each update so that new metrics are collected.

You can also try the other metrics we created! So there we go, that’s covered the very basic starting steps of defining some metrics, running the exporter and scraping the metrics into Prometheus! Stay tuned for some follow up posts where I will build dashboards in Grafana, and also look at exporting logs and distributed tracing!

Bonus info: If you use WSL2 like I do, you might see a warning on the Prometheus web user interface about clock skew. If you do, you can fix that by updating the time in WSL like this:

sudo hwclock -s
Posted in Uncategorized | Tagged , , , , | Leave a comment

New web page for Oracle Transactional Event Queueing

The new web page for Oracle Transactional Event Queueing is live and has lots of great information including sample code, links to hands-on labs, documentation and some user stories! Hope you can check it out!

Posted in Uncategorized | Tagged | Leave a comment

Creating a stored procedure (dare I call it a microservice?) to automatically process events on a queue

In this post I want to look at how to create a stored procedure in the database to automatically process events as they are produced on a Transactional Event Queue (TEQ).

Having a small, discrete piece of code that processes events off a queue is a pretty common use case. You could even call it a microservice I guess 🙂 since it does meet the well-established criteria of having its own code base, being loosely coupled, independently deployable and testable. One thing I find really interesting about writing a “microservice” like this and deploying it in the Oracle Database is that I can essentially scale it to zero instances, and it will only run (and consume resources) when there is actually work for it to do. I could also use the Database Resource Manager to control how many resources it is able to consume if I wanted to 🙂 And it would not be all that hard to instrument it so I could get logs, metrics and even distributed tracing – but that’s another story!

So, let’s go ahead and build this thing!

We’ll start with a new Oracle Database. I am going to run it in a Docker container using the standard image provided on Oracle Container Registry. If you have not used it before, you will need to log in to Oracle Container Registry, navigate to the “Database” section and then “enterprise”, and read and accept the license.

Make sure you are logged into Oracle Container Registry in your Docker client too:

docker login -u

Once you have authenticated, you can start up an Oracle 21c Database using this command:

docker run -d \
  --name oracle-db \
  -p 1521:1521 \
  -e ORACLE_PWD=Welcome123## \

It will take a few minutes (the first time only) for the database files to be created and the instance to start up. You can watch the logs, or use this little shell command to do the waiting for you:

while ! docker logs oracle-db | grep -q "DATABASE IS READY TO USE!"; do
  sleep 10
done

Great, now we have a database running, let’s set up the necessary permissions for our user. I am going to use the pdbadmin user in the PDB1 pluggable database. So let’s give that user permissions to use the TEQ packages (I am using the new SQLcl command line tool, but you can use SQL*Plus or SQL Developer, or whatever tool you prefer):

# sql sys/Welcome123##@//localhost:1521/pdb1 as sysdba

SQL> alter session set container = pdb1;

SQL> grant dba to pdbadmin;

SQL> grant execute on dbms_aqadm to pdbadmin;
SQL> grant execute on dbms_aq to pdbadmin;

SQL> commit;
SQL> exit

Ok, now we can connect with our pdbadmin user and start setting up our environment:

# sql pdbadmin/Welcome123##@//localhost:1521/pdb1

First we want to create our (TEQ) queue (or topic) and start it. We’ll call the queue my_teq:

begin
    dbms_aqadm.create_transactional_event_queue(
        queue_name         => 'my_teq',
        multiple_consumers => true
    );
    dbms_aqadm.start_queue(
        queue_name         => 'my_teq'
    );
end;
/

Since we are using a multi-consumer queue (i.e. a topic) we need to add a subscriber too. Let’s call it my_subscriber:

begin
    dbms_aqadm.add_subscriber(
        queue_name => 'my_teq',
        subscriber => sys.aq$_agent('my_subscriber', null, 0)
    );
end;
/

We’ll keep our microservice super-simple for this demonstration – we’ll just have it record the messages it receives in an “output” table, so let’s create that table now:

create table my_log (
    message varchar(256),
    when    timestamp(6)
);

Ok, so here is our consumer microservice:

create or replace procedure receiver (
    context  in raw,
    reginfo  in sys.aq$_reg_info,
    descr    in sys.aq$_descriptor,
    payload  in varchar2,
    payloadl in number
) as
  dequeue_options    dbms_aq.dequeue_options_t;
  message_properties dbms_aq.message_properties_t;
  message_handle     raw ( 16 );
  message            sys.aq$_jms_text_message;
  no_messages        exception;
  pragma exception_init ( no_messages, -25228 );
begin
  dequeue_options.msgid := descr.msg_id;
  dequeue_options.consumer_name := descr.consumer_name;
  dequeue_options.navigation := dbms_aq.first_message;
  loop
    dbms_aq.dequeue (
      queue_name => 'my_teq',
      dequeue_options => dequeue_options,
      message_properties => message_properties,
      payload => message,
      msgid => message_handle
    );
    insert into my_log values ( message.text_vc, sysdate );
  end loop;
exception
when no_messages then
  dbms_output.put_line ( 'No more messages for processing' );
  commit;
end;
/
Let’s walk through that and talk about the details. First, the procedure must have this signature/interface:

procedure receiver (
    context  in raw,
    reginfo  in sys.aq$_reg_info,
    descr    in sys.aq$_descriptor,
    payload  in varchar2,
    payloadl in number
)

The name of the procedure is up to you, but it must have those exact parameters in that order, since this is a callback, and the TEQ notification is expecting this signature so that it can pass the data about new messages to the consumer.

When we get the callback, we need to perform a dequeue operation to get the actual message/event off the queue/topic. Since it is possible that there is more than one, it’s a good idea to use a loop to read and process all of them before we complete. Here we have a simple loop to dequeue a message and then save the details in our log table:

  loop
    dbms_aq.dequeue (
      queue_name => 'my_teq',
      dequeue_options => dequeue_options,
      message_properties => message_properties,
      payload => message,
      msgid => message_handle
    );
    insert into my_log values ( message.text_vc, sysdate );
  end loop;

We’ve also defined an exception handler for when there are no messages available (this should not ever happen, but it’s still good practice to cater for it anyway):

exception
when no_messages then
  dbms_output.put_line ( 'No more messages for processing' );

I used the JMS message format in this example, but of course you could use RAW or JSON or a user-defined type instead.

Ok, so now that our microservice is ready, we need to tell the database to call it when there is a message to process. To do this, we create a notification as follows:

begin
  dbms_aq.register(
    sys.aq$_reg_info_list(
      sys.aq$_reg_info(
        'pdbadmin.my_teq:my_subscriber',
        dbms_aq.namespace_aq,
        'plsql://receiver',
        hextoraw('FF')
      )
    ), 1);
end;
/

Ok, so let’s talk about what is happening here. This register function that we are running will set up the connection between the queue, the subscriber and the consumer. In the aq$_reg_info you can see the first parameter has the queue name followed by a colon and the subscriber name – so this is telling us “when we have a message on my_teq and it is addressed to the subscriber my_subscriber…”

The next parameter tells us that we are interested in AQ (and TEQ) notifications, and the third parameter tells us the callback address. In this case we are telling it to run the PL/SQL procedure called receiver.

Once that is done, you can check on the details with this query:

select r.reg_id, subscription_name, location_name, num_ntfns, num_pending_ntfns
from user_subscr_registrations r, v$subscr_registration_stats s
where r.reg_id = s.reg_id;

______ ______________________________________ ________________ _________ _________________
   301 "PDBADMIN"."MY_TEQ":"MY_SUBSCRIBER"    plsql://receiver        40                 0

If you come back and run this again later, you will see the number of notifications sent, and pending (i.e. the last two columns) will increase each time we send a message.

Ok, let’s enqueue a message (publish an event) to test this out!

We can use this command to send a test message. This creates and sends a JMS message on our my_teq queue/topic addressed to our my_subscriber consumer:

declare
  enqueue_options    dbms_aq.enqueue_options_t;
  message_properties dbms_aq.message_properties_t;
  message_handle     raw(16);
  message            sys.aq$_jms_text_message;
begin
  message := sys.aq$_jms_text_message.construct;
  message.set_text('hello from mark');
  message_properties.recipient_list(0) := sys.aq$_agent('my_subscriber', null, null);
  dbms_aq.enqueue(
    queue_name => 'my_teq',
    enqueue_options => enqueue_options,
    message_properties => message_properties,
    payload => message,
    msgid => message_handle);
  commit;
end;
/

Once that is run, the notification will fire, the callback will occur, and our microservice will run, consume all of the messages and write them out into our “output” table. We can check the results with this query:

SQL> select * from my_log;

           MESSAGE                               WHEN
__________________ __________________________________
hello from mark    23-JUN-22 PM

Feel free to go run that a few more times to see what happens.

So there we go! We created a nice simple, loosely coupled consumer that will process messages/events as they arrive, and will scale to zero (consume no resources) when there is no work to do. Enjoy!

Posted in Uncategorized | Tagged , , , | Leave a comment

Java to get go-routine-like virtual threads!

Yay! Java is finally going to get some lightweight threads, a bit like go-routines, which allow you to create threads in the JVM without each one consuming an OS thread!
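To give a taste of what that looks like, here is a minimal sketch of the virtual threads API. Note the hedge: this API was a preview in Java 19 (you need the --enable-preview flag there) and only became final in Java 21, so details may differ depending on the JDK you try it on:

```java
// Minimal virtual thread demo (preview in Java 19, final in Java 21)
public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Start a virtual thread; it is multiplexed onto a small pool of
        // carrier OS threads instead of consuming a whole OS thread itself.
        Thread t = Thread.ofVirtual()
                         .start(() -> System.out.println("hello from a virtual thread"));
        t.join();
        System.out.println("is virtual: " + t.isVirtual());
    }
}
```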

I’m looking forward to trying it out in Java 19.

Read all the details in this article by Nicolai Parlog.

Posted in Uncategorized | Tagged , , | Leave a comment