Getting started with the new observability exporter for Oracle database

My colleague Paul Parkinson recently published our new unified observability exporter for Oracle Database on GitHub. I wanted to start playing around with it to see what we can do with it.

In this post I will start with a really simple example that just gets the exporter up and running and collects a few simple metrics from the database into Prometheus. In subsequent posts, I’ll go further and look at dashboards in Grafana, and also cover the logging and tracing capabilities! But you have to start somewhere, right?

First thing we need is a database of course! I just fired one up in a container like this:

docker run -d \
       --name oracle-db \
       -p 1521:1521 \
       -e ORACLE_PWD=Welcome123 \
       -e ORACLE_SID=ORCL \
       -e ORACLE_PDB=PDB1 \
       container-registry.oracle.com/database/enterprise:latest

If you have not used this image before, you will first need to go to the Oracle Container Registry, log in, navigate to the Database category and then the “enterprise” image, and accept the license agreement. You will also need to log in with your Docker client so you can pull the image:

docker login
# this will prompt you for your username and password

The image will take a short time to pull the first time, and the first startup will actually create the database instance, which takes a few minutes too. You can watch the logs to see when the database is ready:

docker logs -f oracle-db
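If you’d rather script the wait than watch the logs, a small polling loop works too. This is just a sketch: it assumes the image prints the banner “DATABASE IS READY TO USE!” to its logs when startup completes – check your image’s output if it differs:

```shell
# Poll a log command until the ready banner appears.
# Assumption: the enterprise image prints "DATABASE IS READY TO USE!"
# when the database has finished being created/started.
wait_for_db() {
  until "$@" 2>/dev/null | grep -q "DATABASE IS READY TO USE!"; do
    sleep 5
  done
}

# usage:
# wait_for_db docker logs oracle-db
```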

You only incur these delays the first time you start the image. After that you can stop and start the container as needed; it will retain the data and start up very quickly.

# to stop the container:
docker stop oracle-db

# to start the container:
docker start oracle-db

Ok, so now we have a database available. Let’s connect to it and create some data to play with. You can use your favorite client – there’s SQL*Plus in that image if you don’t have anything else available. You can start it and connect to the database like this:

docker exec -ti oracle-db sqlplus pdbadmin/Welcome123@//localhost:1521/pdb1

Note: If you have not already, you might want to check out the new SQLcl command line tool, which features command line completion and many other great features.

Let’s create a “customer” table and add a record:

create table customer (id number, name varchar2(256));
insert into customer (id, name) values (1, 'mark');

Great, and let’s just leave that session connected – that will come in handy later!

Now, let’s get the observability exporter and set it up.

First, you’ll need to clone the project from GitHub:

git clone https://github.com/oracle/oracle-db-appdev-monitoring.git
cd oracle-db-appdev-monitoring

You can build the project and create a container image (assuming you have Maven, Java and Docker installed) like this:

mvn clean package -DskipTests
docker build -t observability-exporter:0.1.0 .

If you don’t have those installed and don’t want to install them, you can skip this step and just grab a pre-built container image from Oracle Container Registry:

docker pull container-registry.oracle.com/database/observability-exporter:latest

If you do it this way, make sure to use the full name later when we start the exporter, not the short version!

Now we need to create a configuration file and define our metrics. I called mine mark-metrics.toml and here’s the content:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

[[metric]]
context = "system"
request = "select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')"
metricsdesc = { session_count = "Current session count." }

[[metric]]
context = "system"
request = "select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')"
metricsdesc = { active_sessions = "Active sessions." }

[[metric]]
context = "system"
request = "select (c.session_count - a.active_sessions) as inactive_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and con_id = sys_context('userenv','con_id')) c, (select count(*) as active_sessions from v$session where username is not null and type = 'USER' and status = 'ACTIVE' and con_id = sys_context('userenv','con_id')) a"
metricsdesc = { inactive_sessions = "Inactive sessions." }

[[metric]]
context = "system"
request = "select b.session_count as blocked_sessions from (select count(*) as session_count from v$session where username is not null and type = 'USER' and blocking_session_status = 'VALID' and con_id = sys_context('userenv','con_id')) b"
metricsdesc = { blocked_sessions = "Blocked sessions." }

I defined five metrics in this file. Each metric starts with the [[metric]] heading and can have several fields. You can find more information in the project documentation. In the spirit of keeping this first post simple, I just created basic metrics with no labels or anything fancy 🙂

Let’s take a close look at the first metric, here it is again:

[[metric]]
context = "customers"
request = "SELECT count(*) as num_custs FROM customer"
metricsdesc = { num_custs = "Number of customers." }

It is in the context (or group) called customers. The metric itself is called num_custs. You can see how we use metricsdesc to create a human-readable description for the metric. And the metric value itself is defined with a SQL statement. Wow! That’s pretty cool, right? That means that anything I can write a SQL statement to get from the database can be exported as a metric! In this one I just count the number of entries in the customer table we just created.

The other four metrics are some simple queries that get the number of sessions in the database as well as how many are active, inactive and blocked. These are all in the system context. You can define whatever contexts you like.
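As an aside, the exporter’s configuration format also supports a labels field, so one query can produce a family of metrics broken out by a column value. Here’s a sketch – the labels field follows the exporter’s documented config shape, but this particular query, context and metric name are my own invention, not from this post:

```toml
[[metric]]
context = "sessions_by_status"
labels = [ "status" ]
request = "select status, count(*) as count from v$session where username is not null and type = 'USER' group by status"
metricsdesc = { count = "Session count by status." }
```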

When you later look at this metric in Prometheus, its name will be something like this:

oracledb_customers_num_custs

Notice how the context (customers) and the metric name (num_custs) are in there.
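You can see the naming convention in a tiny shell sketch – this just illustrates the oracledb_<context>_<column> pattern, it is not the exporter’s actual code:

```shell
# The exporter names metrics as oracledb_<context>_<column>.
context="customers"
column="num_custs"
echo "oracledb_${context}_${column}"
# prints: oracledb_customers_num_custs
```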

Ok, now that we have defined our metrics, we can start up the exporter. Let’s run it in another container, alongside the database. We can start it like this:

docker run -d \
       -v /home/mark/oracle-db-appdev-monitoring/mark-metrics.toml:/metrics.toml \
       -p 9161:9161 \
       -e DEFAULT_METRICS=/metrics.toml \
       -e DATA_SOURCE_NAME=pdbadmin/Welcome123@172.17.0.2:1521/pdb1 \
       --name exporter \
       observability-exporter:0.1.0
There are a couple of things to note here. First, I am providing the configuration file we just created using the -v mount, which gives the exporter access to the metric definitions. Second, we need to tell it how to connect to the database. You’ll need to get the IP address of the database container using this command:

docker inspect oracle-db | grep IPAddress

Yours will probably be different from mine, so you’ll need to update the value of DATA_SOURCE_NAME to match your environment. And finally, a reminder – if you pulled the pre-built image down from Oracle Container Registry, you’ll need to use the fully qualified image name on the last line.

Once this container starts up, grab its IP address too – we’ll need it in a minute:

docker inspect exporter | grep IPAddress

The exporter should start right up and, assuming we got the address right and made no typos, we can get metrics like this:

$ curl localhost:9161/metrics
# HELP oracledb_system_inactive_sessions Inactive sessions.
# TYPE oracledb_system_inactive_sessions gauge
oracledb_system_inactive_sessions 1.0
# HELP oracledb_up Whether the Oracle database server is up.
# TYPE oracledb_up gauge
oracledb_up 1.0
# HELP oracledb_system_blocked_sessions Blocked sessions.
# TYPE oracledb_system_blocked_sessions gauge
oracledb_system_blocked_sessions 0.0
# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0
# HELP oracledb_system_active_sessions Active sessions.
# TYPE oracledb_system_active_sessions gauge
oracledb_system_active_sessions 1.0
# HELP oracledb_system_session_count Current session count.
# TYPE oracledb_system_session_count gauge
oracledb_system_session_count 2.0
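If you want to pull a single value out of that output in a script, a little awk goes a long way. Here’s a sketch using a saved sample of the scrape above – in real use you would pipe curl -s localhost:9161/metrics in instead of the sample text:

```shell
# Extract one metric's value from Prometheus exposition-format text.
sample='# HELP oracledb_customers_num_custs Number of customers.
# TYPE oracledb_customers_num_custs gauge
oracledb_customers_num_custs 2.0'

printf '%s\n' "$sample" | awk '/^oracledb_customers_num_custs / {print $2}'
# prints: 2.0
```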

If you don’t see this, check the container logs to see what the error was:

docker logs exporter

Assuming everything is working now, let’s start up Prometheus and configure it to scrape these metrics.

First, let’s create a configuration file called prometheus.yml with this content:

global:
  scrape_interval:     10s
  evaluation_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'oracle-exporter'
    metrics_path: '/metrics'
    scrape_interval: 10s
    scrape_timeout: 8s
    static_configs:
      - targets: ['172.17.0.3:9161']

The only thing you’ll need to change here is the very last line – you need to put the IP address of your exporter container in there.

Then you can start up Prometheus using this configuration like this:

docker run -d \
       --name prometheus \
       -p 9090:9090 \
       -v /home/mark/prometheus.yml:/etc/prometheus/prometheus.yml \
       prom/prometheus --config.file=/etc/prometheus/prometheus.yml

It should start right up and you can access it at http://localhost:9090

The user interface looks like this, and you can type into the search field to find a metric. If you start typing “num_custs” it should find our metric. Then hit Enter, or click the Execute button, to see the value of the metric. It might take up to 10 seconds for data to be available, since we configured the scrape interval as 10 seconds in our configuration file. You should see something like this – yours will probably say 1, not 2:

If you insert some more records into that table and then check again, you’ll see the value updated. You can also click on the Graph tab to view the metric as a time series graph. Try adding and removing records to see what happens. Remember to wait a little while between each update so that new values are scraped.
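If you’d rather see the change than the raw count, a PromQL expression works in that same search field. A sketch using the standard delta() function over our gauge – adjust the range to taste:

```promql
delta(oracledb_customers_num_custs[5m])
```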

You can also try the other metrics we created! So there we go, that’s covered the very basic starting steps of defining some metrics, running the exporter and scraping the metrics into Prometheus! Stay tuned for some follow up posts where I will build dashboards in Grafana, and also look at exporting logs and distributed tracing!

Bonus info: If you use WSL2 like I do, you might see a warning on the Prometheus web user interface about clock skew. If you do, you can fix that by updating the time in WSL like this:

sudo hwclock -s

About Mark Nelson

Mark Nelson is a Developer Evangelist at Oracle, focusing on microservices and messaging. Before this role, Mark was an Architect in the Enterprise Cloud-Native Java Team, worked on the Verrazzano Enterprise Container Platform project, Wercker, and WebLogic, and was a senior member of the A-Team from 2010. He worked in Sales Consulting at Oracle from 2006 and held various roles at IBM from 1994.
