Docker Series #4: Dockerfile

Welcome to Part 4 of our Docker blog post series. If you're following along, you've seen how Docker helps in managing containers, running web servers, and handling container lifecycle and restart policies. Now, we're going to dive into an incredibly useful concept: the Dockerfile.

A Dockerfile is a text document that contains all the instructions Docker needs to assemble an image. Using a Dockerfile, you can automate the process of building, packaging, and configuring an application or service, ensuring consistency and efficiency across different stages of development and deployment. In this post, we'll explore how to containerize an app using a Dockerfile, turning your code into something that can be easily run and managed by Docker.

Containerizing a Simple Python Script with Docker

print('Hello World')

test_script.py

Think of a scenario where you have an awesome Python script that does some amazing thing (just kidding, it only prints out "Hello World", hehe – but that's awesome, right?). The script works perfectly on your computer, and now you want to send it to your colleagues for them to test.

💡
If you're not familiar with Python, don't worry! The examples in this post use simple Python scripts that primarily print out text to the terminal. You don't need extensive knowledge of Python to follow along.

Here's the problem: they might have different versions of Python, or some of them might not even have Python installed at all. How can you make sure the script runs consistently across all of their machines?

Of course, we know Docker can help here, but you'd then have to send along all the commands your colleagues need to run, and that can get a bit messy. Fortunately, there's a better way to ensure that everyone can run your script without any hassle.

This is where the Dockerfile comes in. Within the Dockerfile, you describe the application and tell Docker how to build an image from it. By using a Dockerfile, you encapsulate all the necessary information in a single file, ensuring that your Python script will run the same way on any system that has Docker installed, no matter what version of Python they have or even if they don't have Python at all. Let's look at a very basic Dockerfile. By default, Docker expects the file to be named Dockerfile, with no extension.

FROM python:3

WORKDIR /app

COPY ./test_script.py .

CMD [ "python", "./test_script.py" ]

Dockerfile

  1. FROM python:3: This line specifies the base image that Docker will use to build the new image. In this case, it's using the official Python 3 image from the Docker Hub. This image comes with Python 3 pre-installed, so you don't have to worry about installing it yourself.
  2. WORKDIR /app: This command sets the working directory inside the container to /app. All subsequent commands (like COPY and CMD) will be run from this directory. If the directory doesn't exist, Docker will create it.
  3. COPY ./test_script.py .: This line copies the test_script.py file from your local machine (the current directory) into the working directory inside the container (which is /app, as set by the WORKDIR command). The . at the end signifies the current directory inside the container.
  4. CMD [ "python", "./test_script.py" ]: Finally, this line defines the command that Docker will run when the container is started. In this case, it's executing the Python interpreter on the test_script.py file, effectively running your script.

The Dockerfile takes a simple approach: it starts from an existing Python 3 image, sets up the working directory, copies in the script, and then runs the script when the container starts. By defining these steps in a Dockerfile, you can ensure that anyone who builds a Docker container from this Dockerfile will have the same environment, dependencies, and execution behaviour, making it easy to share and run the code across different systems.
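Before we build anything, the project folder simply needs to contain the two files side by side. Assuming a folder called python_dockerfile (the same name that appears in the prompts below), the layout looks like this:

python_dockerfile/
├── Dockerfile
└── test_script.py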

Docker Build - Create Image

To demonstrate this, navigate to the directory that contains both of the files above and run the docker build command as shown below. Note that the file is named Dockerfile, with no extension.

PS C:\Users\vsurr\Documents\python_dockerfile> docker build -t py_image:v1 .
[+] Building 0.9s (8/8) FINISHED
{......TRUNCATED......}
=> => naming to docker.io/library/py_image:v1

The docker build command is used to build a Docker image from a Dockerfile. In this specific command:

  • -t py_image:v1: This part of the command names the image and assigns a tag to it. The name py_image is an identifier that you can use when you want to refer to the image later. The v1 part is a tag that can be useful for versioning. If you don't specify a tag, Docker will use the latest tag by default.
  • .: This is the path to the directory containing the Dockerfile. In this case, the . means the current directory. Docker will look for a file named Dockerfile in this directory and use it to build the image.

So the whole command can be read as: "Build a Docker image using the Dockerfile in the current directory and name the resulting image py_image with the tag v1."

After running this command, the image will be available on your local system, and you can use it to create containers with the exact environment and behaviour defined in the Dockerfile.

PS C:\Users\vsurr\Documents\python_dockerfile> docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
py_image     v1        8a59595dffed   25 minutes ago   1.01GB
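You may have noticed the image is just over 1 GB; that's because the python:3 base image ships with a full Debian environment. If image size matters to you, a common option is to start from the smaller python:3-slim base image instead. Here's a minimal sketch of that variant (we'll stick with python:3 for the rest of this post):

# Hypothetical variant of the same Dockerfile, using the smaller python:3-slim base image
FROM python:3-slim

WORKDIR /app

COPY ./test_script.py .

CMD [ "python", "./test_script.py" ]

Everything else stays the same; only the base image changes, and the resulting image is usually a fraction of the size.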

Docker Run - Container

PS C:\Users\vsurr\Documents\python_dockerfile> docker run --name py_container py_image:v1
Hello World
Let's break down the command:

  • --name py_container: This part of the command names the container py_container. This name can be used to refer to the container in other Docker commands, like docker stop or docker start.
  • py_image:v1: This specifies the image from which the container should be created. In this case, it's the image named py_image with the tag v1 that you created earlier with the docker build command.

So, the entire command can be read as: "Create and start a new container named py_container, using the py_image:v1 image."

When you run this command, Docker will create a new container from the specified image, and the command defined in the Dockerfile CMD [ "python", "./test_script.py" ] will be executed. In this case, that command runs a Python script, so you see the output Hello World printed to the terminal.
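Because the script exits as soon as it has printed its output, the container stops straight away, but it isn't deleted. A few standard Docker CLI commands are handy at this point; the snippet below is just a sketch of a typical workflow:

docker ps -a                    # list all containers, including the exited py_container
docker start -a py_container    # run the container again and attach to its output ("Hello World")
docker rm py_container          # remove the container once you're done with it
docker run --rm py_image:v1     # or do a one-off run; --rm removes the container automatically when it exits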

Including Dependencies with requirements.txt

In our previous example, we successfully ran a simple Python script inside a Docker container. Now, we're going to take a step further and include a requirements.txt file to manage the necessary packages for our script. In many real-world scenarios, scripts depend on external libraries, and managing these dependencies across different environments can be a challenge. In this case, we'll be using the popular requests library in our script.

The requirements.txt file is a standard way to define the dependencies in Python projects, and it can be effortlessly utilized within a Docker container. Before we dive into the Dockerfile and build commands, let's first take a look at the contents of the requirements.txt file and the new version of our Python script.

certifi==2023.7.22
charset-normalizer==3.2.0
idna==3.4
requests==2.31.0
urllib3==2.0.4

requirements.txt

import requests

output = requests.get('https://httpbin.org/get')
print(output.json())

api_example.py
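If you're wondering where a requirements.txt like this comes from, it's typically generated from a working local environment with pip. As a quick sketch (the exact packages and versions will depend on your own setup):

pip install requests                 # install the library locally, ideally inside a virtual environment
pip freeze > requirements.txt        # write the currently installed packages and their versions to requirements.txt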

New Dockerfile

FROM python:3

WORKDIR /app

COPY . .

RUN pip install --no-cache-dir -r requirements.txt

CMD [ "python", "./api_example.py" ]
  • FROM python:3 - As before, this sets the base image to Python 3.
  • WORKDIR /app - This instruction also remains the same, setting the working directory inside the container to /app.
  • COPY . . - This time, we're copying all the files from the current directory on the host machine into the working directory inside the container. This includes both the Python script (api_example.py) and the requirements.txt file.
  • RUN pip install --no-cache-dir -r requirements.txt - This is a new instruction that executes the pip command to install the dependencies listed in the requirements.txt file. The --no-cache-dir option is used to avoid storing the cache, keeping the image size down.
  • CMD [ "python", "./api_example.py" ] - This command remains the same but points to the new script, api_example.py, which will be executed when the container starts.

This Dockerfile builds upon our previous example by adding a layer of complexity, showing how you can easily manage dependencies within a Docker container. It ensures that the specific packages listed in the requirements.txt file are installed, creating a consistent environment across different systems.
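One thing worth knowing, although we don't need it for this small example, is that Docker caches each instruction as a layer. Because COPY . . runs before pip install, any change to the script invalidates that cache and forces the dependencies to be reinstalled on the next build. A common refinement, shown here purely as a sketch, is to copy requirements.txt on its own first:

FROM python:3

WORKDIR /app

# Copy only the requirements file first, so the pip install layer is cached
# and only re-runs when requirements.txt itself changes
COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

# Now copy the rest of the project, including api_example.py
COPY . .

CMD [ "python", "./api_example.py" ]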

Build the Image and Run the Container

Similar to the previous example, let's build the image and run the container.

PS C:\Users\vsurr\Documents\docker_deep_dive\python_dockerfile> docker build -t api_image:v1 .

[+] Building 3.7s (9/9) FINISHED   docker:default
 => [internal] load build definition from Dockerfile      0.0s 
 {......TRUNCATED......}
 => => naming to docker.io/library/api_image:v1
PS C:\Users\vsurr\Documents\docker_deep_dive\python_dockerfile> docker run --name api_script api_image:v1

{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.31.0', 'X-Amzn-Trace-Id': 'Root=1-64d1ed3c-3c7772f3194880982ab563cf'}, 'origin': '82.31.82.173', 'url': 'https://httpbin.org/get'}

When we run the container, it executes the Python script inside the container. This script uses the requests library to connect to the specified URL and fetch a response. The response, returned in JSON format, contains various details such as the headers, origin, and URL.

By including the requirements.txt file in the Dockerfile, we have ensured that the necessary library is installed in the container, allowing the script to execute consistently across different environments.
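If you later change the script or add another package to requirements.txt, you simply rebuild the image; bumping the tag is an easy way to keep versions apart. For example, reusing the naming convention from earlier (the v2 tag here is just an illustrative choice):

docker build -t api_image:v2 .    # rebuild the image after your changes, tagged as v2
docker run --rm api_image:v2      # run a container from the new image; it's removed automatically on exit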

Conclusion

The examples we explored in this post focused on Python, but of course the Dockerfile is not limited to just one language or simple scripts. It can be used to containerize more complex applications across various programming languages and frameworks.
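For instance, a Node.js application follows exactly the same pattern, just with a different base image and package manager. The sketch below assumes a typical project with a package.json and an index.js entry point (names chosen purely for illustration):

FROM node:20

WORKDIR /app

# Install the dependencies defined in package.json
COPY package*.json ./
RUN npm install

# Copy the rest of the application and define the startup command
COPY . .
CMD [ "node", "index.js" ]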

Additionally, the Dockerfile not only instructs Docker on how to build an image but also acts as simple documentation. Anyone new to the project can look at the Dockerfile to understand exactly what it does, how the image is constructed, and what dependencies are needed. It's a powerful tool that enhances consistency and collaboration in the development process, allowing you to share your applications seamlessly with others.
