How to install multiple projects on a single Docker setup
Why I chose to run multiple projects on a single Docker setup instead of XAMPP for local development
For many years I used XAMPP on my local environment, and it was good enough for the work I was doing with PHP (especially WordPress); JS and CSS don't actually need a server to run and compile.
For a long time XAMPP worked well, but in recent years MySQL crashed a few times, leaving databases corrupted and in need of repair. Repairing them is fairly straightforward (many developers explain online how to do it, so I will not go deeper into this), but the suggested solutions do not always work as expected. A few times I ended up installing XAMPP from scratch instead of just recovering the databases, and that was an even longer process, since I had to back up and reinstall each project with all its data, as on the live site.
Because I had spent many hours, over multiple sessions, trying to recover databases, I decided to try Docker, but in a way I hadn't used it before.
I had worked with Docker on some projects, so I was familiar with it and how it works, but on those projects I had one Docker image per project.
This time, on my local machine, I didn't want that many Docker setups and images: I have around 8 personal projects that I update from time to time, sometimes working on 2 in parallel and copying code from one to another as needed. So I wanted what I had with XAMPP: one setup for all the projects. By starting one single Docker setup, I wanted all the projects running straight away.
It took me a long time to find the right setup because I was looking for a few specific things:
- to install multiple instances of Wordpress
- to use custom local domains
- to have SSL (secure domains), though this was less important for a website on a local machine; it was more of a challenge.
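Custom local domains are usually mapped in the operating system's hosts file; a minimal sketch, assuming the site1.local and site2.local names used later in this article:

```
# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1   site1.local
127.0.0.1   site2.local
```

With these entries in place, the browser resolves both domains to the local machine, where the reverse proxy will answer.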
How I installed multiple WordPress projects on a single Docker setup - the challenge
While doing my research I found a few ways to achieve this setup.
One involved nginx (with the stable-alpine image), and I found out quite fast how to do it, including with SSL (https:// links), except that for some reason I was not able to make the PHP function file_get_contents() work. I checked and changed the PHP configuration to be sure PHP allows the use of this function, but without success.
I use file_get_contents() to read the content of a generated CSS file that contains only the CSS for the first fold. Because of how Docker resolves localhost and the local IP address behind a custom domain, this PHP function was not working at all, returning false.
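One common workaround, which I note here only as a sketch (the path and the theme-relative location are assumptions, not my actual code), is to read the generated CSS through the filesystem instead of over HTTP, so Docker's domain resolution is never involved:

```php
<?php
// Hypothetical sketch: read the above-the-fold CSS from disk instead of
// requesting it over HTTP, which can fail inside Docker behind a custom domain.
$css_path = get_template_directory() . '/assets/critical.css'; // assumed location

if ( is_readable( $css_path ) ) {
    // A filesystem path involves no network lookup, so Docker's handling
    // of localhost and custom domains cannot make this call return false.
    echo '<style>' . file_get_contents( $css_path ) . '</style>';
}
```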
After a while of being unsuccessful with it, I abandoned that nginx setup.
I will say little about the failed attempts, because there were many and I removed them for space reasons, and I will jump straight to the setup that worked and that I have already been using for many months.
My final setup doesn't have SSL yet, but all the other conditions above are working. Here is the docker-compose.yml file that I am using now:
version: '3.7'
services:
  revproxy:
    build: ./nginx_revproxy # build from the Dockerfile in this directory
    depends_on:
      - site_1
      - site_2
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - mynet
    container_name: revproxy
    volumes:
      #- ./src/backend:/var/www/backend
      - ./site1:/var/www/html/
      - ./nginx_revproxy/conf/:/etc/nginx/conf.d/:ro
      - ./certbot/conf/:/etc/letsencrypt
      - ./certbot/www/:/var/www/certbot
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot:latest
    restart: unless-stopped
    volumes:
      - ./certbot/www/:/var/www/certbot
      - ./certbot/conf/:/etc/letsencrypt
  db_site_1:
    command: --default-authentication-plugin=mysql_native_password
    image: mariadb:10.5.8
    container_name: db_site_1
    restart: always
    environment:
      MYSQL_ROOT_USER: 'root'
      MYSQL_ROOT_PASSWORD: 'password'
      MYSQL_DATABASE: 'dbname_site_1'
      MYSQL_USER: 'dbuser_site_1'
      MYSQL_PASSWORD: 'dbpass_site_1'
    volumes:
      - ./site1_mysql:/var/lib/mysql
    ports:
      - 3906:3306
    networks:
      - mynet
  db_site_2: # 2nd instance
    command: --default-authentication-plugin=mysql_native_password
    image: mariadb:10.5.8
    container_name: db_site_2
    restart: always
    environment:
      MYSQL_ROOT_USER: 'root'
      MYSQL_ROOT_PASSWORD: 'password'
      MYSQL_DATABASE: 'dbname_site_2'
      MYSQL_USER: 'dbuser_site_2'
      MYSQL_PASSWORD: 'dbpass_site_2'
    volumes:
      - ./site2_mysql:/var/lib/mysql
    ports:
      - 3907:3306
    networks:
      - mynet
  phpmyadmin:
    depends_on:
      - db_site_1
      - db_site_2
    links:
      - db_site_1
      - db_site_2
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      PMA_HOSTS: db_site_1,db_site_2
      PMA_PORT: 3306
      PMA_ARBITRARY: 1
      #MYSQL_USER: wp-user
      #MYSQL_PASSWORD: wp-pass
      #MYSQL_ROOT_PASSWORD: rootPassword
      UPLOAD_LIMIT: 64M
    restart: always
    ports:
      - 3001:80
    networks:
      - mynet
    # phpMyAdmin talks to the databases over the network, so it does not need
    # the MySQL data directories mounted (and two volumes sharing the same
    # container path would conflict anyway):
    #volumes:
    #  - ./site1_mysql:/var/lib/mysql
    #  - ./site2_mysql:/var/lib/mysql
  site_1:
    image: wordpress
    #restart: always
    container_name: site_1
    ports:
      - 8097:80 # change this to something above 1024
    depends_on:
      - db_site_1
    environment:
      WORDPRESS_DB_HOST: db_site_1:3306
      WORDPRESS_DB_USER: dbuser_site_1
      WORDPRESS_DB_PASSWORD: dbpass_site_1
      WORDPRESS_DB_NAME: dbname_site_1
    volumes:
      - ./site1:/var/www/html/
      - "/tmp/wp_site1:/tmp"
    networks:
      - mynet
  site_2: # 2nd instance
    image: wordpress
    #restart: always
    container_name: site_2
    ports:
      - 8098:80 # change this to something above 1024
    depends_on:
      - db_site_2
    environment:
      WORDPRESS_DB_HOST: db_site_2:3306 # point to the 2nd db instance
      WORDPRESS_DB_USER: dbuser_site_2
      WORDPRESS_DB_PASSWORD: dbpass_site_2
      WORDPRESS_DB_NAME: dbname_site_2
    volumes:
      - ./site2:/var/www/html/
      - "/tmp/wp_site2:/tmp"
    networks:
      - mynet
networks:
  mynet:
    #driver: bridge
    #ipam:
    #  driver: default
    #  config:
    #    - subnet: "192.168.0.0/24"
    #      gateway: "192.168.0.1"
In bash you will use the following commands.
To stop and remove the containers:
docker-compose down
To build and start the WordPress websites:
docker-compose up -d --build
To reset everything, run the two commands in sequence:
docker-compose down && docker-compose up -d --build
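To check that everything came up correctly, the standard docker-compose inspection commands help; for example (service names taken from the docker-compose.yml above):

```shell
# list the containers of this setup and their current state
docker-compose ps

# follow the logs of a single service, e.g. the reverse proxy
docker-compose logs -f revproxy
```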
You will notice Let's Encrypt (certbot) in the setup above; its installation was unsuccessful for me at that moment, but I know that, in order to have the SSL certificate working, you need an nginx configuration for both the http URL (responding on port 80) and the https URL (responding on port 443).
Inside the nginx_revproxy directory, create a directory named conf and add a file named default.conf inside it, containing a setup for every site that you plan to add to this single Docker setup:
server {
    listen 0.0.0.0:8097;
    server_name site1.local;
    charset utf-8;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        # forward to the port that the site_1 container publishes on the host
        proxy_pass http://host.docker.internal:8097;
    }
}
server {
    listen 0.0.0.0:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site1.local;
    ssl_certificate /etc/nginx/ssl/_wildcard.site1.local.pem;
    ssl_certificate_key /etc/nginx/ssl/_wildcard.site1.local-key.pem;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://host.docker.internal:8097;
    }
}
server {
    listen 0.0.0.0:8098;
    server_name site2.local;
    charset utf-8;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        # forward to the port that the site_2 container publishes on the host
        proxy_pass http://host.docker.internal:8098;
    }
}
server {
    listen 0.0.0.0:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name site2.local;
    ssl_certificate /etc/nginx/ssl/_wildcard.site2.local.pem;
    ssl_certificate_key /etc/nginx/ssl/_wildcard.site2.local-key.pem;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://host.docker.internal:8098;
    }
}
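After editing default.conf, you can validate the configuration and reload nginx inside the running container; a sketch, using the revproxy container name from the docker-compose.yml above:

```shell
# validate the nginx configuration inside the revproxy container
docker-compose exec revproxy nginx -t

# reload nginx without restarting the container
docker-compose exec revproxy nginx -s reload
```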
Note carefully how the ports from docker-compose.yml (8097 and 8098) correspond to the ports in the server blocks for each domain in the default.conf file.
If you want to give your setup a better structure, you can also create a file named .env and define global names in it, to be reused in multiple places such as the docker-compose.yml and default.conf files (like the ports above). Here is an example .env file:
SITE_1_PORT=8097
SITE_2_PORT=8098
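docker-compose substitutes these variables on its own; nginx does not read .env, so for default.conf you would still need an extra templating step (for example envsubst). A sketch of how the ports could then be referenced in docker-compose.yml:

```yaml
# fragment of docker-compose.yml using the variables from .env
site_1:
  ports:
    - "${SITE_1_PORT}:80"
site_2:
  ports:
    - "${SITE_2_PORT}:80"
```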
If you need to remove a project from your Docker setup, use, with extreme caution, the following command:
rm -rf site1_mysql/* site1/*
which removes the folder containing that website's database and the folder containing the website's files. You can also add to the command the folder where the SSL certificates are (if you have generated them) and any nginx log files.
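Before deleting the files, it is safer to stop and remove that project's containers first; a sketch, with service names taken from the docker-compose.yml above:

```shell
# stop and remove only site 1's containers, without touching the other sites
docker-compose rm --stop --force site_1 db_site_1
```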
The downsides of having multiple projects under a single Docker setup
I personally like this setup because, when I want to run multiple projects at the same time and all the projects share the same configuration, I don't need to run multiple Docker stacks in parallel, which would consume a lot of my computer's resources (disk space, memory and CPU).
On the other hand, I find my projects very slow when they run from Docker, even when I keep only the containers of a single website running in the Docker app. I can't say exactly why this happens, but it seems related to how Docker runs on a local machine.
At the same time, when I run a Node server on one of the projects, Docker returns Bad Gateway in the browser, and I have to either refresh the browser (which makes me think it happens because Docker is too slow) or stop the Node server.
Another downside of having multiple projects under a single Docker setup is that you cannot see all the databases in phpMyAdmin at the same time, as you were used to with XAMPP.
To see a database you need to log in to phpMyAdmin for that specific site. To see another website's database, you have to log in to phpMyAdmin again and select that website's database.
The login credentials you need are in docker-compose.yml, under db_site_1 -> environment.
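Since docker-compose.yml maps the databases to host ports 3906 and 3907, you can also skip phpMyAdmin and inspect a database with any MySQL client from the host; a sketch using the db_site_1 credentials above:

```shell
# connect to site 1's database from the host
# (port 3906 comes from the ports mapping of db_site_1)
mysql -h 127.0.0.1 -P 3906 -u dbuser_site_1 -pdbpass_site_1 dbname_site_1
```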
Even if you are logged in to one project's database and want to switch to another, you cannot stay logged in to two databases at the same time. Switching to a new database requires a new login, even for the previous database if you plan to return to it, and this can be a time-consuming step.