Docker: Running Apache Web Server In A Container

This is my second post in this blog series on Docker. If you haven’t already read my previous post, I highly recommend reading that article first. Here, I’m going to dive a little deeper into container management by working on a more complex application and some of Docker’s advanced features.

Until now, I’ve covered the introduction, basic container usage, and default networking in Docker. So let’s now get into more advanced concepts in container virtualization. For this post, my goal is to build and run a container serving a website through a web server. And as usual, there will be some challenges along the way, which we’ll tackle during this article. With that said, let’s get into action.

Docker Architecture

Building a Docker Image

I’m using the latest CentOS image as before. If you plan to test this on some other platform, the procedure might vary a little. As in the previous post, let’s first create a Dockerfile to build an image with the required packages and configuration. My initial Dockerfile looks something like this:

[code language="bash"]
sajjan@learner:~$ mkdir -p Dockerfiles/httpd
sajjan@learner:~$ vi Dockerfiles/httpd/Dockerfile
FROM centos
MAINTAINER sajjanbh <>
RUN yum -y --setopt=tsflags=nodocs install httpd
RUN yum clean all
CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]

Then, I tried to build an image using this file. Note that I’m using Ubuntu as my host operating system. Here are the results I obtained:

[code language="bash"]
sajjan@learner:~$ docker build -t sajjanbh/centhttpd:v1 Dockerfiles/httpd/

error: unpacking of archive failed on file /usr/sbin/suexec;589aec9b: cpio: cap_set_file
Error unpacking rpm package httpd-2.4.6-45.el7.centos.x86_64


Upon failing to build the image, I researched the error I received. It turns out to be a well-known issue with the AUFS storage driver: it occurs when installing packages that set file capabilities (e.g. httpd) in CentOS-like containers (Fedora, RHEL, Oracle Linux, etc.) on non-CentOS-like hosts (e.g. Ubuntu). There are some kernel patches for this issue, but I couldn’t find a solid method to apply them successfully, so I used a workaround. First, I set up a CentOS machine, installed the Docker engine on it, and built my httpd image there. After building it, I saved the image as a tar file, copied it to my main Ubuntu system, and loaded it there. This worked without hitting the above issue.

[code language="bash"]
# These are on my new CentOS host
[root@centos /]$ yum install docker-io
[root@centos /]$ docker pull centos
[root@centos /]$ mkdir -p docker/httpd
# Copying Dockerfile from Ubuntu host to newly created folder; Note: <ubuntu-host> is Ubuntu’s IP or hostname
[root@centos /]$ scp sajjan@<ubuntu-host>:/home/sajjan/Dockerfiles/httpd/Dockerfile docker/httpd
[root@centos /]$ docker build -t sajjanbh/centhttpd:v1 docker/httpd
# build image completed successfully; saving this image to tar file
[root@centos /]$ docker save -o centhttpd.tar sajjanbh/centhttpd:v1
[root@centos /]$ scp centhttpd.tar sajjan@<ubuntu-host>:/home/sajjan/
# This is in Ubuntu host. Loading copied tar file as docker image
sajjan@learner:~$ docker load -i centhttpd.tar
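Before moving on, it’s worth confirming that the image survived the trip intact; docker inspect can even show the CMD it will run on start (image name as built above):

[code language="bash"]
# Confirm the loaded image is present in the local image store
sajjan@learner:~$ docker images | grep centhttpd
# Double-check the command the image will run when started
sajjan@learner:~$ docker inspect --format '{{.Config.Cmd}}' sajjanbh/centhttpd:v1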

Encouraged by this small success, I continued my setup, only to face another problem immediately. When I tried to run my image, it didn’t work as expected: although I started it as a daemon, the container exited prematurely instead of serving a website. After referring to working Dockerfiles online, I learned that httpd can refuse to start when stale PID and lock files are left behind by an unclean shutdown, e.g. during a container restart. So we need to remove any existing httpd runtime data before starting the server. Better yet, we can put the cleanup and startup commands in a small Bash script and call that script from the Dockerfile; this approach is very useful when deploying more advanced containers. Accordingly, my new Dockerfile and its corresponding script look like these:

[code language="bash"]
# Create a startup script for httpd (the filename run-httpd.sh is assumed; any name works if used consistently)
[root@centos /]$ vi docker/httpd/run-httpd.sh
#!/bin/bash
# Remove any existing httpd data
rm -rf /run/httpd/* /tmp/httpd*
# Start Apache server in foreground
exec /usr/sbin/apachectl -DFOREGROUND

[root@centos /]$ vi docker/httpd/Dockerfile
# Make this image from CentOS base image
FROM centos
MAINTAINER sajjanbh <>
# Install httpd and clean all
RUN yum -y --setopt=tsflags=nodocs install httpd && \
yum clean all
# Open port 80 for the container
EXPOSE 80
# Add above script (filename assumed as run-httpd.sh) into the container
ADD run-httpd.sh /run-httpd.sh
# Make the script executable inside container
RUN chmod -v +x /run-httpd.sh
# Execute the script on running the container
CMD ["/run-httpd.sh"]

[root@centos /]$ docker build -t sajjanbh/centhttpd:v2 docker/httpd/

Note: I built this image on the CentOS host as mentioned above. After building it and copying it to my Ubuntu system, I loaded and ran it. Here, I’m running the container in daemon mode with the host’s port 8080 mapped to the container’s port 80. That means I can access the website served by this container via port 8080 of the host machine.

[code language="bash"]
sajjan@learner:~$ docker run -d --name web -p 8080:80 sajjanbh/centhttpd:v2

Upon running this HTTPD container, I am able to access its page as shown in the below screenshot:

Docker Container - Test Web Page
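The same check can be done from the terminal too, assuming curl is available on the host:

[code language="bash"]
# Fetch the response status line through the mapped host port
sajjan@learner:~$ curl -sI http://localhost:8080/ | head -n 1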

Let’s also quickly mount our website’s source code into this container so that it serves our web page instead of Apache’s test page. To do that, let’s stop the container; you may also remove it from the process list altogether. Then let’s re-run the httpd image so that it serves the page defined by us.
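The stop and removal steps look like this, using the container name from the earlier run command:

[code language="bash"]
# Gracefully stop the running container
sajjan@learner:~$ docker stop web
# Remove it so the name and ports can be reused
sajjan@learner:~$ docker rm web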

[code language="bash"]
# Create a folder to store my web pages
sajjan@learner:~$ mkdir test-web && cd test-web
# Put your web page here
sajjan@learner:~$ vi index.html
# Run the Apache container with source code volume mapped to it
sajjan@learner:~$ docker run -d --name myweb -p 80:80 -v /home/sajjan/test-web/:/var/www/html/ sajjanbh/centhttpd:v2

My Test Web Page

Now that I’ve got my web server running in default mode, I’d like to configure it to serve my own web application. However, there is a hurdle. Traditionally, Docker containers run only one process or service, unlike physical or virtual machines, which can run any number of services. So when I ran the container above, the Apache server started in the foreground. That means if I try to attach to its TTY or console, I attach to that httpd process instead of the container’s shell. Thus, I cannot manage my container the way I would a traditional server.
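As an aside, if all you need is a shell inside a running container, docker exec (available since Docker 1.3) starts a separate process alongside the foreground one instead of attaching to it; a quick sketch, using the container name from above:

[code language="bash"]
# attach joins the foreground httpd process (PID 1), so there is no shell:
# sajjan@learner:~$ docker attach web
# exec spawns a new shell process inside the same running container:
sajjan@learner:~$ docker exec -it web /bin/bash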

Well, at this point it’s common to think that containers aren’t as beneficial and useful as they’re said to be. I once thought so too. However, the problem doesn’t lie with container virtualization or Docker. Rather, it lies in our perception and our reluctance to embrace change. We tend to view and use new technologies the way we’ve always used the old ones. Until we unlearn some of what we already know and approach new technologies from a new perspective, this impression won’t change.

Back to the topic: Docker by default runs only one foreground service per container, and I believe they have a good reason for it. But there are tools like Supervisor and Runit for running multiple services in a container. Whether to run a single service or several depends on our requirements and preferences. Based on this, I’d like to have sshd running alongside the httpd service in my container, so that I can log in and perform my preferred configuration and administration tasks. Note that SSH access to containers isn’t a necessity; in fact, I’d like to keep my container as light as possible. So once my container is fully configured, I won’t be using Supervisor or sshd alongside my web server.

[code language="bash"]
# Defining a Dockerfile
[root@centos /]$ vi docker/supervisor/Dockerfile
# Build an image from CentOS base image
FROM centos
MAINTAINER sajjanbh <>

# Install SSH, Apache packages
RUN yum -y --setopt=tsflags=nodocs install openssh-server httpd python-setuptools && yum clean all

# Install Supervisor. Note: CentOS 7 base image repo doesn’t have supervisor
RUN easy_install supervisor

# Backup original sshd_config
RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
RUN chmod a-w /etc/ssh/sshd_config.orig
# Make directories to store services’ data
RUN mkdir /var/run/sshd /var/log/supervisor

# Set root’s password
RUN echo 'root:najjas123' | chpasswd

# Permit SSH login for root user
RUN sed -i 's/#PermitRootLogin/PermitRootLogin/' /etc/ssh/sshd_config

# Generate keys for SSH setup
RUN /usr/bin/ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
RUN /usr/bin/ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''

# Add supervisor’s configuration file to the container. Note: This config file is referred by supervisord once started inside container
COPY supervisord.conf /usr/etc/supervisord.conf

# Allow ports 22 and 80 for container
EXPOSE 22 80

# Execute supervisord as entrypoint into the container
CMD ["/usr/bin/supervisord"]

# let’s also create a supervisord.conf file
[root@centos /]$ vi docker/supervisor/supervisord.conf
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:httpd]
command=/bin/bash -c "rm -rf /run/httpd/* /tmp/httpd* && exec /usr/sbin/apachectl -DFOREGROUND"

Then, let’s build an image out of it. Note that I’m building and saving this image in a CentOS host machine.

[code language="bash"]
# Building an image using above Dockerfile and config file
[root@centos /]$ docker build -t sajjanbh/centsupervisor:v1 docker/supervisor/
# Export the docker image as a tar ball
[root@centos /]$ docker save -o centsupervisor.tar sajjanbh/centsupervisor:v1

Next, let’s fetch that tar ball file to our Ubuntu docker host and start using it as follows:

[code language="bash"]
# Download this tar ball to Ubuntu host
sajjan@learner:~$ scp root@docker2:/root/centsupervisor.tar .
# Import the docker image from tar ball
sajjan@learner:~$ docker load -i centsupervisor.tar

# Verify the image in docker local image repository
sajjan@learner:~$ docker images | grep sajjanbh/centsupervisor
sajjanbh/centsupervisor v1 ccf548acd958 39 minutes ago 250 MB

# Run this supervised container
sajjan@learner:~$ docker run -d --name mysupweb -p 2222:22 -p 80:80 -v /home/sajjan/test-web/:/var/www/html/ sajjanbh/centsupervisor:v1

# Verify SSH to container
sajjan@learner:~$ ssh root@localhost -p 2222
The authenticity of host '[localhost]:2222 ([]:2222)' can't be established.
RSA key fingerprint is SHA256:z9vUQ0SODzZlVFDKmwH9TTCbouAkoSbzy8FQ8iXBtRY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:2222' (RSA) to the list of known hosts.
root@localhost's password:

# Let's also check Apache's access log to verify it's working
[root@2aff9cd532ae ~]# tailf /var/log/httpd/access_log
- - [12/Feb/2017:09:36:57 +0000] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0"
- - [12/Feb/2017:09:36:57 +0000] "GET /style.css HTTP/1.1" 304 - "http://localhost/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0) Gecko/20100101 Firefox/51.0"
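Similarly, to confirm that supervisord really is managing both services, supervisorctl can be queried through docker exec (a sketch: it assumes supervisorctl picks up the config at /usr/etc/supervisord.conf, which is where this easy_install layout places it):

[code language="bash"]
# List the programs supervisord is managing and their current states
sajjan@learner:~$ docker exec mysupweb /usr/bin/supervisorctl status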

Upon successfully running this container, I saw the same web page as before. The main difference between this container and the previous one is that it runs its services (SSH and Apache) in supervised mode. Here’s a video tutorial implementing the whole setup procedure:

Well, this is it for now. Today, I covered running services like Apache and SSH inside a container, in both standalone and supervised modes. I hope this has been informative and useful for you. Please let me know your opinion in the Comments section below. And as always, thanks for reading!

