Create PostgreSQL database for integration testing with Docker

In this article, we will create a Dockerfile for a PostgreSQL database. We can use it, for example, to run integration tests in a separate CI/CD workflow.

Introduction

The idea is to use a separate database for integration testing. Creating such a database requires writing a Dockerfile for a PostgreSQL database. We can then build and run a dedicated CI/CD workflow for the integration tests. This workflow takes our Docker PostgreSQL image and starts the database that the integration tests will work against.

Creating Dockerfile

We will need three files to create the database testing environment. Technically, only one of them is mandatory – the Dockerfile. The two other files, database init scripts, prepare our containerized database for the integration tests.

First, we need to write the PostgreSQL Dockerfile:

FROM postgres:latest
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=admin
ENV POSTGRES_DB=postgres
COPY ./sql/db_user_creation.sql /docker-entrypoint-initdb.d/
COPY ./sql/init_tables.sql /docker-entrypoint-initdb.d/

We take the latest image of the PostgreSQL database from the official postgres Docker repository, set the image’s environment variables POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB to default values, and copy our database init scripts to the image’s entrypoint folder.

The official postgres Docker image runs any .sql scripts found in the /docker-entrypoint-initdb.d/ folder on the container’s first start. I highly encourage you to check the official documentation, since future postgres releases might change this behavior.
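The COPY instructions above assume a particular layout next to the Dockerfile. A minimal sketch of that layout (the touch commands only create empty placeholders for illustration – you will fill the files with the scripts shown later):

```shell
# Layout expected by the Dockerfile's COPY instructions:
#   ./Dockerfile
#   ./sql/db_user_creation.sql
#   ./sql/init_tables.sql
mkdir -p sql
touch sql/db_user_creation.sql sql/init_tables.sql
```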

Best practice

A short digression here. It is good practice to repeat, at the end of a custom Dockerfile, the last Docker commands of the source image.

ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]

The last three lines of our Dockerfile are copied from the postgres image; restating them is considered good practice, although you can omit them. If you want to extend this Dockerfile further, keep in mind that all commands of the postgres source image run first, and only then your custom Docker commands. Restating the final commands from PostgreSQL’s image keeps the container’s interface consistent and explicit.
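If you are unsure what the source image’s final commands are, you can read them straight from the image metadata (this assumes the postgres:latest image has already been pulled locally):

```shell
# Show the entrypoint, default command, and exposed ports of the base image
docker image inspect postgres:latest \
  --format '{{.Config.Entrypoint}} {{.Config.Cmd}} {{.Config.ExposedPorts}}'
```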

Therefore, the whole Dockerfile should look like this:

FROM postgres:latest
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=admin
ENV POSTGRES_DB=postgres
COPY ./sql/db_user_creation.sql /docker-entrypoint-initdb.d/
COPY ./sql/init_tables.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]

The environment variables instruct the container to create the postgres database owned by the postgres user (with the password admin) on its first run. Any .sql files found in the /docker-entrypoint-initdb.d/ folder of the PostgreSQL container are then executed. If you want to execute .sh scripts, you can place them in the same folder.
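These ENV values are only defaults; docker run -e overrides them at container start. A minimal sketch, assuming the image built from this Dockerfile was tagged pg-integration (a hypothetical name):

```shell
# Override the baked-in credentials at run time; the tag name is hypothetical
docker run -d --name pg-it \
  -e POSTGRES_PASSWORD=supersecret \
  -e POSTGRES_DB=testdb \
  -p 5432:5432 \
  pg-integration
```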

Custom init scripts

As you may have noticed in the Dockerfile, we copy two other files into the PostgreSQL image. The first file is db_user_creation.sql:

CREATE USER mycustomsuperadmin WITH PASSWORD 'changethispassword';
GRANT ALL PRIVILEGES ON DATABASE postgres TO mycustomsuperadmin;

In this file we could, for example, also add a command for creating another database. For demonstration purposes, however, we only create the mycustomsuperadmin user with all privileges on the postgres database. Creating a new user is not mandatory.

The second file initializes our database with tables and content. This file is also just for demonstration purposes; adapt it to whatever you need in the database. Here is an example of some tables we will put into the postgres database.

CREATE TABLE "public".items (
    "id"           integer NOT NULL GENERATED ALWAYS AS IDENTITY (start 1),
    name           varchar(50) NOT NULL,
    price          double precision NOT NULL,
    release        timestamptz(3) NOT NULL,
    CONSTRAINT     pk_items PRIMARY KEY ("id", "name", "price", "release")
);

CREATE TABLE "public".users (
    "id"           integer NOT NULL GENERATED ALWAYS AS IDENTITY (start 1),
    name           varchar(50) NOT NULL,
    salary         double precision,
    "timestamp"    timestamptz(3) NOT NULL,
    CONSTRAINT     pk_users PRIMARY KEY ("id", "name", "salary", "timestamp")
);

Again, the following code is just for demonstration purposes, and naturally, you can place your own init scripts here.

Note: the fully qualified name of a table in PostgreSQL follows the pattern "database_name"."schema_name"."table_name". Within a single database connection, you reference tables as "schema_name"."table_name", as in the script above.

Build Dockerfile

Now that we have all the files, we need to run the docker build command. Go to the folder where you placed the PostgreSQL Dockerfile and run:

docker build .

This builds your Dockerfile and places the image in your local Docker image store.

IntelliJ configuration

You can also easily build and run your Docker image from IntelliJ. Check the configuration image below for setting up the runner:

IntelliJ PostgreSQL Docker build runner settings

Running this setup in IntelliJ achieves the same result as the command above.
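On the command line, a plain docker build . only produces an untagged image ID. To reference the image easily afterwards, you will usually give it a tag; the name pg-integration below is an assumption for illustration:

```shell
# Build and tag the image (the tag name is hypothetical)
docker build -t pg-integration .

# Start a container from it and check that PostgreSQL accepts connections
docker run -d --name pg-it -p 5432:5432 pg-integration
sleep 5  # give the init scripts a moment to run
docker exec pg-it pg_isready -U postgres
```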

Alternative with docker-compose

There is an alternative way to build and run PostgreSQL locally: instead of creating a Docker image, you can just run docker-compose. Here is the code for the docker-compose.yaml file:

version: '3.7'
services:
    postgres:
        image: postgres:latest
        restart: always
        environment:
            POSTGRES_USER: postgres
            POSTGRES_PASSWORD: admin
            POSTGRES_DB: postgres
        logging:
            options:
                max-size: 10m
                max-file: "3"
        ports:
            - '5432:5432'
        volumes:
            - ./postgres-data:/var/lib/postgresql/data
            # copy the sql script to create tables
            - ./sql/init_tables.sql:/docker-entrypoint-initdb.d/init_tables.sql
            # copy the sql script to create user
            - ./sql/db_user_creation.sql:/docker-entrypoint-initdb.d/db_user_creation.sql

Running docker-compose up -d in the folder with docker-compose.yaml will start the service in detached mode. To shut the service down, execute docker-compose down.
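One caveat worth knowing: the init scripts only run when the data directory is empty. Because the compose file bind-mounts ./postgres-data, the scripts will not run again on later starts until that folder is removed. A sketch of a full reset:

```shell
# Stop the service and wipe the persisted data so the init
# scripts in /docker-entrypoint-initdb.d/ run again on the next start
docker-compose down
rm -rf ./postgres-data
docker-compose up -d
```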

Note: docker-compose looks for docker-compose.yaml in the current working directory and, if it finds none, walks up through the parent directories. So if you placed the file in project-root/src/test/resources but run docker-compose from elsewhere without a compose file there, it might end up running a docker-compose.yaml from your project-root/ folder instead.

Conclusion

This article has shown how to create a PostgreSQL database Dockerfile. We can develop this idea further and, for example, use it for integration tests in a separate CI/CD workflow.

Did you find creating the Dockerfile easy? Do you have a trick of your own, or do you know another way to create it? Let us know in the comments below the article. We would like to hear your ideas and stories.
