How to run a FastAPI/React App using Docker and Docker Compose

Why should you use Docker to run your application?

First of all, here are some reasons why you should use Docker to run your application:

Portability: Docker allows you to package an application along with all its dependencies into a single container. This container can be easily moved from one environment to another without any changes, making it easier to deploy applications consistently across different environments.

Isolation: Each Docker container provides an isolated environment for an application to run in, which helps prevent conflicts with other applications or services on the same host. This also makes it easier to manage dependencies and avoid version conflicts.

Efficiency: Docker containers are lightweight and use fewer system resources than traditional virtual machines. This means you can run more containers on a single host, reducing infrastructure costs and improving resource utilization.

Consistency: By using Docker, you can ensure that all developers on your team are using the same environment to develop and test applications. This helps avoid issues that can arise from differences in operating systems, libraries, or other dependencies.

Scalability: Docker makes it easy to scale applications horizontally by adding or removing containers as needed. This allows you to respond quickly to changes in traffic or demand without having to manually configure servers or infrastructure.

What I started with before using Docker

I had a FastAPI application which I ran locally on port 8000.

I had a React application, built using create-react-app, which ran on port 3000 on my local computer.

To bring the two separate applications together, the React app sends HTTP requests to the FastAPI app and then processes the data in the response for the end user.

I also had a PostgreSQL database which ran on port 5432 on my computer.

In this article I will explain how to build both the React and FastAPI docker images and then how to run these images using docker compose so you have a working application for development purposes.

Making a Dockerfile for both your React and FastAPI apps

First of all, make sure you have both Docker and Docker Compose installed on your computer.

Then make Dockerfiles in both your FastAPI and React application directories.

Here is my Dockerfile for my FastAPI application. This Dockerfile is inside the backend directory, at the same level as the app directory.

FROM python:3.8
WORKDIR /usr/src/backend_app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

Here is what it does:

First of all, I build my image off the official Python 3.8 image. Make sure you specify the version of Python you used to develop your app; for me this was 3.8.

Next I specified a working directory for my Docker container using the WORKDIR command. The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.

Next I copied the requirements.txt file from my local computer onto the Docker container.

The next step uses pip to install all of the Python packages the FastAPI application needs onto the Docker container.

Finally, I copy all the code from my backend directory on my local computer onto the Docker container.
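For reference, here is a sketch of what a minimal requirements.txt might contain for a setup like this one (Alembic and Uvicorn appear later in the compose command, and psycopg2 is a common driver for PostgreSQL). The exact package list and versions are assumptions, so list whatever your app actually imports:

```
fastapi
uvicorn[standard]
sqlalchemy
alembic
psycopg2-binary
```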

Below is my Dockerfile for my React application. This Dockerfile is inside the frontend directory, at the same level as package.json and package-lock.json.

FROM node:16.18-alpine
WORKDIR /usr/src/frontend_app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

First of all, I build my image off the official Node 16.18 Alpine image. Make sure you specify the version of Node your React app runs on.

Next I specified a working directory for my Docker container using the WORKDIR command.

I copy both the package.json and package-lock.json from my local computer to the Docker container.

I install all the npm packages needed for the React application onto my Docker container.

I copy all the code from my frontend directory on my local computer onto my Docker container.

Finally, I run the npm start command so the React application runs inside the container.
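One optional extra that is not part of the article's setup: a .dockerignore file next to each Dockerfile stops COPY . . from pulling build artifacts into the image and keeps builds fast. A minimal sketch for the frontend (a backend equivalent would list things like __pycache__ and a virtualenv directory):

```
node_modules
build
.git
```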

How to run your containers using Docker Compose

So at the moment we have two different Docker images, one for the FastAPI application and one for the React application. The problem is that these run as two separate containers. So how can we run both images and allow them to “communicate” with each other as they do on my local computer? There is also the question of setting up a PostgreSQL database and how the FastAPI container can communicate with it. This is where Docker Compose comes in.

Docker Compose is a tool that allows you to define and run multi-container Docker applications. It provides a simple way to define the services, networks, and volumes that your application requires and then starts and stops them all with a single command.

This is my Docker Compose file. It is located at the same level as the backend and frontend directories.

version: "3"
services:
  api:
    build: ./backend
    depends_on:
      - postgres-db
    ports:
    - 8000:8000
    volumes:
    - ./backend:/usr/src/backend_app:ro
    command: bash -c "alembic upgrade head && uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload"
    environment:
      - DATABASE_HOSTNAME=postgres-db
      - DATABASE_PORT=${DATABASE_PORT}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_NAME=${DATABASE_NAME}
      - DATABASE_USERNAME=${DATABASE_USERNAME}
      - SECRET_KEY=${SECRET_KEY}
      - ALGORITHM=${ALGORITHM}
      - ACCESS_TOKEN_EXPIRE_MINUTES=${ACCESS_TOKEN_EXPIRE_MINUTES}
      - REFRESH_SECRET_KEY=${REFRESH_SECRET_KEY}
      - REFRESH_TOKEN_EXPIRE_MINUTES=${REFRESH_TOKEN_EXPIRE_MINUTES}
  postgres-db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DATABASE_USERNAME}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DATABASE_NAME}
    volumes:
    - postgres-db:/var/lib/postgresql/data
    ports:
    - 5434:5432

  client:
    build: ./frontend
    depends_on:
      - api
    ports:
      - 3000:3000
    volumes:
      - ./frontend/:/usr/src/frontend_app

volumes:
  postgres-db:

As you can see, this Docker Compose file defines three services: one called api, which will create a container for my FastAPI application; one called postgres-db, which will create a container for my PostgreSQL database; and one called client, which will create a container for my React application.

For my postgres-db service, this is what every line does:

postgres-db: This is the name of the service that will be used to reference it in the Docker Compose file and by other services.

  1. image: postgres: This specifies the Docker image to use for the service. In this case, it’s the official PostgreSQL image from Docker Hub.
  2. restart: always: This specifies that the container should always be restarted if it fails or is stopped.
  3. environment: ...: This defines the environment variables that will be passed to the container at runtime. Here, we’re setting the PostgreSQL username, password, and database name to values that are defined in the environment variables DATABASE_USERNAME, DATABASE_PASSWORD, and DATABASE_NAME, respectively.
  4. volumes: ...: This specifies the volumes that will be mounted inside the container. In this case, we’re creating a named volume called postgres-db and mounting it to the container’s /var/lib/postgresql/data directory. This will persist the database data between container restarts.
  5. ports: ...: This specifies the port mapping between the container and the host machine. In this case, we’re mapping port 5432 inside the container to port 5434 on the host machine. This will allow us to connect to the PostgreSQL database from outside the container via port 5434.

Overall, this service definition gives us a PostgreSQL database that uses the official PostgreSQL Docker image, persists data between restarts using a named volume, and exposes the database on port 5434 on the host machine.
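The ${...} placeholders in the compose file are interpolated by Docker Compose, by default from a .env file sitting next to the compose file. Here is a sketch of such a file — the variable names match the compose file above, but every value is a made-up placeholder:

```
DATABASE_PORT=5432
DATABASE_USERNAME=postgres
DATABASE_PASSWORD=changeme
DATABASE_NAME=todo_db
SECRET_KEY=replace-with-a-long-random-string
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_SECRET_KEY=replace-with-another-long-random-string
REFRESH_TOKEN_EXPIRE_MINUTES=10080
```

Remember to keep this file out of version control, since it holds secrets.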

Now on to the api service.

api: This is the name of the service that will be used to reference it in the Docker Compose file and by other services.

  1. build: ./backend: This specifies that the Docker image for this service should be built using the Dockerfile located in the ./backend directory.
  2. depends_on: ...: This specifies that the api service depends on the postgres-db service. This means that the postgres-db service will be started before the api service.
  3. ports: ...: This specifies the port mapping between the container and the host machine. In this case, we’re mapping port 8000 inside the container to port 8000 on the host machine. This will allow us to connect to the API from outside the container via port 8000.
  4. volumes: ...: This specifies the volumes that will be mounted inside the container. In this case, we’re mounting the ./backend directory on the host machine to the /usr/src/backend_app directory inside the container, in read-only mode.
  5. command: ...: This specifies the command to run when the container is started. In this case, we’re running a Bash command that first upgrades the database schema using Alembic and then starts the Uvicorn web server with app.main:app as the entry point, with --reload enabled for development.
  6. environment: ...: This defines the environment variables that will be passed to the container at runtime. DATABASE_HOSTNAME is set directly to postgres-db, the name of the database service, so the API reaches the database container over the Compose network. The database port, password, name, and username are taken from the environment variables DATABASE_PORT, DATABASE_PASSWORD, DATABASE_NAME, and DATABASE_USERNAME. We’re also setting various other environment variables related to security and token expiration.

Overall, this service definition builds a Docker image from the ./backend directory, depends on the postgres-db service, exposes the API on port 8000, mounts the ./backend directory in read-only mode, and starts the API server with the specified command and environment variables.
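Inside the container, the FastAPI app simply reads these values from its environment. As an illustration (this is not the article's actual config code, just a stdlib-only sketch using the variable names from the compose file), a settings helper might assemble the database URL like this:

```python
import os

def database_url() -> str:
    """Assemble a SQLAlchemy-style PostgreSQL URL from the environment
    variables that the compose file passes to the api container."""
    return "postgresql://{user}:{password}@{host}:{port}/{name}".format(
        user=os.environ["DATABASE_USERNAME"],
        password=os.environ["DATABASE_PASSWORD"],
        host=os.environ["DATABASE_HOSTNAME"],  # "postgres-db" inside the Compose network
        port=os.environ["DATABASE_PORT"],
        name=os.environ["DATABASE_NAME"],
    )

# Simulate the environment the api service would see (example values only).
os.environ.update({
    "DATABASE_USERNAME": "postgres",
    "DATABASE_PASSWORD": "changeme",
    "DATABASE_HOSTNAME": "postgres-db",
    "DATABASE_PORT": "5432",
    "DATABASE_NAME": "todo_db",
})
print(database_url())  # postgresql://postgres:changeme@postgres-db:5432/todo_db
```

The key point is that the hostname is the service name from the compose file, not localhost.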

And finally the client service.

client: This is the name of the service.

  1. build: ./frontend: This specifies that the client container should be built from the Dockerfile located in the ./frontend directory.
  2. depends_on: ...: This specifies that the client service depends on the api service, which means that the api service will be started before the client service.
  3. ports: ...: This maps port 3000 on the host machine to port 3000 in the container.
  4. volumes: ...: This mounts the ./frontend directory on the host machine at the /usr/src/frontend_app directory inside the container.

Together, these options create a container named client that is built from the Dockerfile in the ./frontend directory. The container depends on the api service and maps port 3000 on the host machine to port 3000 in the container. Additionally, the ./frontend directory on the host machine is mounted at /usr/src/frontend_app in the container.

But how do the containers communicate with each other?

So I’ve explained how each container gets set up and run. But how do the containers know about and communicate with each other?

The answer to this is docker networking. Docker networking allows containers to communicate with each other and with the outside world. Docker networking also provides a DNS server, which allows containers to refer to each other by their container name instead of IP address. This makes it easier to connect containers together.

The beauty of using Docker Compose to define and run your application is that, by default, Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network and discoverable by them at a hostname identical to the service name.

Your app’s network is given a name based on the “project name”, which is based on the name of the directory it lives in.

For example, suppose your app is in a directory called myapp, then docker compose will set up a network called myapp_default automatically for you.
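To make the addressing concrete: the same PostgreSQL instance from the compose file above is reachable at two different addresses depending on where you connect from. A small hypothetical helper illustrating the rule (service name and ports come from the compose file; the function itself is just for illustration):

```python
def postgres_address(from_inside_compose_network: bool) -> str:
    """Where to reach the postgres-db service defined in the compose file.

    Inside the Compose network, the service name resolves via Docker's
    built-in DNS and the container's own port (5432) is used. From the
    host machine, you go through the published port mapping (5434:5432).
    """
    if from_inside_compose_network:
        return "postgres-db:5432"  # what the api container uses
    return "localhost:5434"        # what, say, psql on your laptop uses

print(postgres_address(True))
print(postgres_address(False))
```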

Running Docker Compose

Before running Docker Compose, add this line at the bottom of the React application’s package.json file:

"proxy": "http://api:8000",

This is so when the React application makes HTTP requests to the backend, it knows where to send them. “api” here is the backend/FastAPI Docker container defined in the Docker Compose file. If your FastAPI service were called foo in your Docker Compose file, you would put “http://foo:8000” instead.

In the commands below, docker-compose-dev.yml is the name of the file I used. If you called it something different, use that name instead; if it is called docker-compose.yml, you do not need the -f flag or the file name, as that is the default.

Run this command to build the images and start the containers.

docker compose -f docker-compose-dev.yml up -d

You can then navigate to localhost:3000 on your own computer, and the project should be running via your Docker containers.

To stop the running containers without destroying them, run this command:

docker compose -f docker-compose-dev.yml stop

To restart the stopped containers, run this command:

docker compose -f docker-compose-dev.yml start

To stop and remove the containers and networks created by the up command, run this command. Note that named volumes such as postgres-db are only removed if you also pass the -v flag.

docker compose -f docker-compose-dev.yml down

And that is how to run a FastAPI/React application using Docker and Docker Compose.

To see the code repository, use this link:

https://github.com/garethbreeze1993/to-do-app

Thanks for reading
