topics/containers/README.md
<a name="exercises-running-containers"></a>
| Name | Topic | Objective & Instructions | Solution | Comments |
|---|---|---|---|---|
| Running Containers | Basics | Exercise | Solution | |
| Containerized Web Server | Applications | Exercise | Solution | |
| Containerized Database | Applications | Exercise | Solution | |
| Containerized Database with Persistent Storage | Applications | Exercise | Solution | |
<a name="exercises-containers-images"></a>
| Name | Topic | Objective & Instructions | Solution | Comments |
|---|---|---|---|---|
| Working with Images | Image | Exercise | Solution | |
| Sharing Images (without a registry) | Images | Exercise | Solution | |
| Creating images on the fly | Images | Exercise | Solution | |
| My First Containerfile | Containerfile | Exercise | | |
<a name="exercises-containers-misc"></a>
| Name | Topic | Objective & Instructions | Solution | Comments |
|---|---|---|---|---|
| Run, Forest, Run! | Restart Policies | Exercise | Solution | |
| Layer by Layer | Image Layers | Exercise | Solution | |
| Containerize an application | Containerization | Exercise | Solution | |
| Multi-Stage Builds | Multi-Stage Builds | Exercise | Solution | |
<a name="questions-containers-101"></a>
This can be tricky to answer since there are many ways to create a container:
If we focus on OCI (Open Container Initiative) based containers, the OCI offers the following definition: "An environment for executing processes with configurable isolation and resource limitations. For example, namespaces, resource limits, and mounts are all part of the container environment." </b></details>
<details> <summary>Why are containers needed? What is their goal?</summary> <b>OCI provides a good explanation: "Define a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container." </b></details>
<details> <summary>What is a container image?</summary> <b>A container image contains the application, its dependencies and the operating system userland in which the application is executed.
It's a collection of read-only layers. These layers are loosely coupled.
The primary difference between containers and VMs is that containers allow you to virtualize multiple workloads on a single operating system while in the case of VMs, the hardware is being virtualized to run multiple machines each with its own guest OS. You can also think about it as containers are for OS-level virtualization while VMs are for hardware virtualization.
You should choose VMs when:
You should choose containers when:
<a name="questions-common-commands"></a>
Note: I've used Podman in the answers, but other container engines can be used as well (e.g. Docker)
podman run ubuntu
</b></details>
Because the container exits immediately after running the ubuntu image. This is completely normal and expected, as containers are designed to run a task or service and exit when they are done running it. To see the container you can run podman ps -a
If you want the container to keep running, you can run a command like sleep 100, which will run for 100 seconds, or you can attach to the container's terminal with a command similar to: podman container run -it ubuntu /bin/bash
</b></details>
podman container ls
</b></details>
podman container exec -it [container id/name] bash
This can be done in advance while running the container: podman container run -it [image:tag] /bin/bash
</b></details>
False. You have to stop the container before removing it (or pass the --force/-f flag to remove a running container). </b></details>
<details> <summary>How to stop and remove a container?</summary> <b>podman container stop <container id/name> && podman container rm <container id/name>
</b></details>
With the -d flag. The container will run in the background and won't be attached to the terminal.
docker container run -d httpd or podman container run -d httpd
</b></details>
False. Running that command will override the image's default command (CMD), so the httpd service won't run; instead, podman will run the ls command.
</b></details>
False. podman restart creates an entirely new container with the same ID while reusing the filesystem and state of the original container.
</b></details>
podman run -d --name apache1 -p 8080:8080 registry.redhat.io/rhel8/httpd-24
curl 127.0.0.1:8080
</b></details>
<details> <summary>After running a container, it stopped. <code>podman ps</code> shows nothing. How can you show its details?</summary> <b>podman ps -a will also show the details of stopped containers.
</b></details>
podman search --list-tags IMAGE_NAME
</b></details>
<a name="questions-images"></a>
podman search snake-game. Surprisingly, there are a couple of matches :)
INDEX NAME DESCRIPTION STARS
docker.io docker.io/dyego/snake-game 0
docker.io docker.io/ainizetap/snake-game 0
docker.io docker.io/islamifauzi/snake-games 0
docker.io docker.io/harish1551/snake-game 0
docker.io docker.io/spkane/snake-game A console based snake game in a container 0
docker.io docker.io/rahulgadre/snake-game This repository contains all the files to ru... 0
</b></details>
<details> <summary>How to list the container images on a certain host?</summary> <b>CONTAINER_BINARY=podman
$CONTAINER_BINARY images
Note: you can also use $CONTAINER_BINARY image ls
</b></details>
CONTAINER_BINARY=podman
$CONTAINER_BINARY pull rhel
</b></details>
<details> <summary>True or False? It's not possible to remove an image if a certain container is using it</summary> <b>True. You should stop and remove the container before trying to remove the image it uses. </b></details>
<details> <summary>True or False? If a tag isn't specified when pulling an image, the 'latest' tag is being used</summary> <b>True </b></details>
<details> <summary>True or False? Using the 'latest' tag when pulling an image means you are pulling the most recently published image</summary> <b>False. While this might be true in some cases, it's not guaranteed that you'll pull the most recently published image when using the 'latest' tag.
For example, in some images, the 'edge' tag is used for the most recently published images. </b></details>
<details> <summary>Where are pulled images stored?</summary> <b>Depends on the container technology being used. For example, in the case of Docker, images are stored in /var/lib/docker/
</b></details>
True. These hashes are content based and since images (and their layers) are immutable, any change will cause the hashes to change. </b></details>
<details> <summary>How to list the layers of an image?</summary> <b>In case of Docker, you can use docker image inspect <name>
</b></details>
False. They share and access the one used by the host on which they are running. </b></details>
<details> <summary>True or False? A single container image can have multiple tags</summary> <b>True. When listing images, you might be able to see two images with the same ID but different tags. </b></details>
<details> <summary>What is a dangling image?</summary> <b>It's an image without tags attached to it. One way to reach this situation is by building an image with the exact same name and tag as another already existing image. The old image can still be referenced by using its full SHA. </b></details>
<details> <summary>How to see changes done to a given image over time?</summary> <b>In the case of Docker, you could use docker history <name>
</b></details>
Creates a new image from a running container. Users can apply extra changes to be saved in the new image.
Most of the time the use case for podman commit is to capture changes that help debug the container, not to create new images for distribution: commit adds the overhead of logs and processes that aren't required for running the application, so images created with podman commit end up bigger due to the additional data stored in them.
</b></details>
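A minimal sketch of such a debugging workflow (the container name `my-app` and the image name `my-app-debug` are hypothetical placeholders):

```shell
# Install a debugging tool inside a running container (changes its writable layer)
podman exec -it my-app dnf install -y strace
# Snapshot the modified container as a new image so a teammate can inspect the same state
podman commit my-app my-app-debug:latest
# The committed image can now be run like any other image
podman run -it my-app-debug:latest /bin/bash
```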
True.
One piece of evidence for that can be found when pulling images. Sometimes when you pull an image, you'll see a line similar to the following:
fa20momervif17: already exists
This is because it recognizes that such a layer already exists on the host, so there is no need to pull the same layer twice. </b></details>
<details> <summary>What is the digest of an image? What problem does it solve?</summary> <b>Tags are mutable. This means that we can have two different images with the same name and the same tag. It can be very confusing to see two images with the same name and tag in your environment. How would you know if they are truly the same or different?
This is where "digests" come in handy. A digest is a content-addressable identifier. Unlike a tag, it isn't mutable. Its value is deterministic, so it tells you whether two images have the same content, not merely the same name and tag. </b></details>
<details> <summary>True or False? A single image can support multiple architectures (Linux x64, Windows x64, ...)</summary> <b>True. </b></details>
<details> <summary>What is a distribution hash in regards to layers?</summary> <b>docker manifest inspect <name>
</b></details>
Look for the "Cmd" or "Entrypoint" fields in the output of docker image inspect <image name>
</b></details>
docker image history <image name>:<tag>
</b></details>
When you build an image for the first time, the different layers are cached. So, while the first build of the image might take time, any subsequent build of the same image (given that the Containerfile/Dockerfile, and the content used by its instructions, didn't change) will be almost instant thanks to this caching mechanism.
In a little more detail, it works this way: each instruction is compared against the layers cached from previous builds; the first instruction that differs invalidates the cache for itself and for every instruction after it.
Note: in some cases (like COPY and ADD instructions) the instruction might stay the same, but if the content of what is being copied changed, the cache is invalidated. This check is done by comparing the checksum of each file being copied. </b></details>
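This caching behavior is why instruction order matters. A hedged sketch (assuming a hypothetical Node.js app; file names are illustrative): copying the dependency manifests and installing dependencies before copying the full source means that source-only changes still reuse the cached dependency layers.

```dockerfile
FROM node:20
WORKDIR /app
# Dependency manifests change rarely, so these layers are usually served from cache
COPY package.json package-lock.json ./
RUN npm ci
# Source code changes often, so only the layers from here down get rebuilt
COPY . .
CMD ["node", "server.js"]
```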
<details> <summary>How to remove an image from the host?</summary> <b>podman rmi IMAGE
It will fail if some container is using the image. You can then use the --force flag, but generally it's better to inspect the containers using the image before doing so.
To delete all images: podman rmi -a
</b></details>
Use && to concatenate RUN instructions, so that several commands produce a single layer.
# On the local host
podman save -o some_image.tar IMAGE
rsync some_image.tar SOME_HOST
# On the remote host
podman load -i some_image.tar
</b></details>
<details> <summary>True or False? Once a container is stopped and removed, its image is removed as well from the host</summary> <b>False. The image will still be available for use by potential containers in the future.
To remove the image, run podman rmi IMAGE
</b></details>
docker image history <image name>:<tag>
</b></details>
podman diff IMAGE_NAME
</b></details>
True. For mounted files you can use podman inspect CONTAINER_NAME/ID
</b></details>
Registry </b></details>
A registry contains one or more repositories which in turn contain one or more images. </b></details>
<details> <summary>How to find out which registry you use by default in your environment?</summary> <b>Depends on the container technology you are using. For example, in the case of Docker, it can be done with docker info
> docker info
Registry: https://index.docker.io/v1
</b></details>
<details> <summary>How to configure registries with the containers engine you are using?</summary> <b>For podman, registries can be configured in /etc/containers/registries.conf this way:
[registries.search]
registries = ["quay.io"]
</b></details>
<details> <summary>How to retrieve the latest ubuntu image?</summary> <b>podman image pull ubuntu:latest
</b></details>
podman push IMAGE
You can specify a specific registry: podman push IMAGE REGISTRY_ADDRESS
</b></details>
latest is quite common (which can mean latest build or latest release)
3.1 can be used to reference the latest release/tag within that series (e.g. 3.1.6)
Avoid using podman commit on a running container (after making changes to it) for creating new official images: committed containers include the overhead of logs and processes, so they usually end up as bigger images
</b></details>Image tags are used to distinguish between multiple versions of the same software or project. Let's say you developed a project called "FluffyUnicorn" and the current release is 1.0. You are about to release 1.1, but you still want to keep 1.0 as a stable release for anyone interested in it. What would you do? If your answer is to create another, separate image, then you probably want to rethink the idea and just create a new image tag for the new release.
In addition, it's important to note that container registries support tags. So when pulling an image, you can specify a specific tag of that image. </b></details>
<details> <summary>How to tag an image?</summary> <b>podman tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
For example: podman tag FluffyUnicorn:latest FluffyUnicorn:1.1
</b></details>
False. You can run podman rmi IMAGE:TAG.
</b></details>
True. </b></details>
Different container engines (e.g. Docker, Podman) can build images automatically by reading the instructions from a Containerfile/Dockerfile. A Containerfile/Dockerfile is a text file that contains all the instructions for building an image which containers can use. </b></details>
<details> <summary>What instruction exists in every Containerfile/Dockerfile and what does it do?</summary> <b>In every Containerfile/Dockerfile, you can find the instruction FROM <image name>, which is almost always the first instruction (only ARG may appear before it).
It specifies the base layer of the image to be used. Every other instruction is a layer on top of that base image. </b></details>
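A minimal illustration (the base image and package are arbitrary examples): ARG is the only instruction that may precede FROM, and it can parameterize the base image.

```dockerfile
# ARG before FROM is allowed; it parameterizes the base image
ARG BASE_TAG=3.19
FROM alpine:${BASE_TAG}
# every subsequent instruction is a layer (or metadata) on top of the base image
RUN apk add --no-cache curl
```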
<details> <summary>List five different instructions that are available for use in a Containerfile/Dockerfile</summary> <b>FROM, RUN, COPY, EXPOSE and CMD. Other common instructions include ENTRYPOINT, ENV, ADD, WORKDIR and USER.
</b></details><details> <summary>What is a "build context"?</summary> <b>Docker docs: "A build’s context is the set of files located in the specified PATH or URL" </b></details>
<details> <summary>What is the difference between ADD and COPY in Containerfile/Dockerfile?</summary> <b>COPY takes in a source and destination. It lets you copy in a file or directory from the build context into the Docker image itself.
ADD lets you do the same, but it also supports two other sources. You can use a URL instead of a file or directory from the build context. In addition, you can extract a tar file from the source directly into the destination. </b></details>
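To make the distinction concrete, a sketch (the paths and URL are placeholders, not taken from a real project):

```dockerfile
FROM alpine:3.19
# COPY: only files/directories from the build context
COPY app/ /opt/app/
# ADD: can also fetch from a URL (the downloaded file is not extracted)...
ADD https://example.com/archive.tar.gz /tmp/archive.tar.gz
# ...or auto-extract a local tar archive from the context into the destination
ADD vendor.tar.gz /opt/vendor/
```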
<details> <summary>What is the difference between CMD and RUN in Containerfile/Dockerfile?</summary> <b>RUN lets you execute commands inside of your Docker image. These commands get executed once at build time and get written into your Docker image as a new layer. CMD is the command the container executes by default when you launch the built image. A Containerfile/Dockerfile can only have one CMD. You could say that CMD is a Docker run-time operation, meaning it’s not something that gets executed at build time. It happens when you run an image. A running image is called a container. </b></details>
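A short sketch of the difference (the package and command are chosen arbitrarily):

```dockerfile
FROM debian:bookworm-slim
# RUN executes at build time; its result is baked into an image layer
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# CMD executes at run time, when a container starts from this image
CMD ["curl", "--version"]
```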
<details> <summary>How to create a new image using a Containerfile/Dockerfile?</summary> <b>The following command is executed from within the directory where the Containerfile/Dockerfile resides:
docker image build -t some_app:latest .
podman image build -t some_app:latest .
</b></details>
One option is to use hadolint project which is a linter based on Containerfile/Dockerfile best practices. </b></details>
<details> <summary>Which instructions in Containerfile/Dockerfile create new layers?</summary> <b>Instructions such as FROM, COPY and RUN, create new image layers instead of just adding metadata. </b></details>
<details> <summary>Which instructions in Containerfile/Dockerfile create image metadata and don't create new layers?</summary> <b>Instructions such as ENTRYPOINT, ENV, EXPOSE, create image metadata and they don't create new layers. </b></details>
<details> <summary>Is it possible to identify which instructions create a new layer from the output of <code>podman image history</code>?</summary> <b>Yes. Instructions that create layers typically show a non-zero SIZE in the output, while metadata-only instructions show a size of 0. </b></details> <details> <summary>True or False? Each Containerfile instruction runs in an independent container using an image built from every previous layer/entry</summary> <b>True </b></details>
<details> <summary>What's the difference between these two forms:ENTRYPOINT ["cmd", "param0", "param1"]
CMD ["param0"]
ENTRYPOINT cmd param0 param1
CMD param0
The first form is also referred to as the "Exec form" and the second one as the "Shell form".
The second one (Shell form) wraps the command in /bin/sh -c, hence it creates a shell process for it.
While using either the Exec form or the Shell form on its own might be fine, it's mixing them that can lead to unexpected results.
Consider:
ENTRYPOINT ["ls"]
CMD /tmp
That would result in running: ls /bin/sh -c /tmp
</b></details>
True, but in the case of ENTRYPOINT and CMD only the last instruction takes effect. </b></details>
<details> <summary>What happens when CMD instruction is defined but not an ENTRYPOINT instruction in a Containerfile/Dockerfile?</summary> <b>The ENTRYPOINT from the base image is being used in such case. </b></details>
<details> <summary>In the case of running <code>podman run -it IMAGE ls</code> the <code>ls</code> overrides the <code>___</code> instruction</summary> <b>CMD </b></details>
<a name="questions-containers-storage"></a>
It means the contents of the container, and the data generated by it, are gone when the container is removed. </b></details>
<details> <summary>True or False? Applications running on containers, should use the container storage to store persistent data</summary> <b>False. Containers are not built to store persistent data and even if it's possible with some implementations, it might not perform well in case of applications with intensive I/O operations. </b></details>
<details> <summary>You stopped a running container, but it still uses the storage in case you ever resume it. How can you reclaim the storage of a container?</summary> <b>In order to reclaim the storage of a container, you have to remove it. </b></details>
<details> <summary>How to create a new volume?</summary> <b>CONTAINER_BINARY=podman
$CONTAINER_BINARY volume create some_volume
</b></details>
<details> <summary>How to mount a directory from the host to a container?</summary> <b>CONTAINER_BINARY=podman
mkdir /tmp/dir_on_the_host
$CONTAINER_BINARY run -v /tmp/dir_on_the_host:/tmp/dir_on_the_container IMAGE_NAME
In some systems you'll have also to adjust security on the host itself:
podman unshare chown -R UID:GID /tmp/dir_on_the_host
sudo semanage fcontext -a -t container_file_t '/tmp/dir_on_the_host(/.*)?'
sudo restorecon -Rv /tmp/dir_on_the_host
</b></details>
<a name="questions-containerfile"></a>
<a name="questions-architecture"></a>
Through the use of namespaces and cgroups. The Linux kernel provides several features that containers build on:
cgroups (Control Groups): used for limiting the amount of resources a certain group of processes (and their children, of course) can use. This way, one group of processes can't consume all the host resources, and other groups can run and use part of the resources as well
namespaces: like cgroups, namespaces isolate system resources so they are available only to processes in the namespace. Differently from cgroups, the focus of namespaces is on resources like mount points, IPC, network, ... and not on memory and CPU as in cgroups
SELinux: the access control mechanism used to protect processes. Unfortunately, to this date many users don't actually understand SELinux and some turn it off, but nonetheless it's a very important security feature of the Linux kernel, used by containers as well
Seccomp: similarly to SELinux, it's also a security mechanism, but its focus is on limiting which system calls and file descriptors processes can use </b></details>
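These building blocks can be observed directly from a shell on any Linux host; a small sketch (the unshare call may be blocked on hardened systems, hence the fallback message):

```shell
# Every process already belongs to a set of namespaces:
ls /proc/self/ns
# ...and to a cgroup:
cat /proc/self/cgroup
# Create a new user+PID namespace; inside it, the process tree starts at PID 1
unshare --user --map-root-user --pid --fork --mount-proc ps -o pid,comm 2>/dev/null \
  || echo "unshare not permitted in this environment"
```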
The Docker CLI passes your request to the Docker daemon. The daemon downloads the image from Docker Hub (if it isn't already present locally), creates a new container using the downloaded image, and redirects output from the container to the Docker CLI, which redirects it to the standard output. (Podman is daemonless: the podman process itself performs these steps.) </b></details>
<details> <summary>Describe the difference between cgroups and namespaces</summary> <b> cgroup: Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behavior. namespace: wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. In short:
Cgroups = limit how much you can use; namespaces = limit what you can see (and therefore use)
Cgroups involve resource metering and limiting: memory, CPU, block I/O, network
Namespaces provide processes with their own view of the system
Multiple namespaces: pid, net, mnt, uts, ipc, user
</b></details>
<details> <summary>Which of the following are Linux features that containers use?
cspaces
namegroups
namespaces
cgroups
ELlinux
SElinux</summary>
<b>namespaces
cgroups
SElinux </b></details>
True. The ephemeral storage layer is added on top of the base image layer and is exclusive to the running container. This way, containers created from the same base image, don't share the same storage. </b></details>
<a name="questions-docker-architecture"></a>
Note: if you run ps -ef | grep -i containerd on a system with Docker installed and running, you should see a containerd process
</b></details>
False. The Docker daemon performs higher-level tasks compared to containerd.
It's responsible for managing networks, volumes, images, ... </b></details>
<details> <summary>Describe in detail what happens when you run `docker pull image:tag`</summary> <b>The Docker CLI passes your request to the Docker daemon. The dockerd logs show the process:
docker.io/library/busybox:latest resolved to a manifestList object with 9 entries; looking for a unknown/amd64 match
found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:400ee2ed939df769d4681023810d2e4fb9479b8401d97003c710d0e20f7c49c6
pulling blob "sha256:61c5ed1cbdf8e801f3b73d906c61261ad916b2532d6756e7c4fbcacb975299fb Downloaded 61c5ed1cbdf8 to tempfile /var/lib/docker/tmp/GetImageBlob909736690
Applying tar in /var/lib/docker/overlay2/507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7/diff" storage-driver=overlay2
Applied tar sha256:514c3a3e64d4ebf15f482c9e8909d130bcd53bcc452f0225b0a04744de7b8c43 to 507df36fe373108f19df4b22a07d10de7800f33c9613acb139827ba2645444f7, size: 1223534 </b></details>
<details> <summary>True or False? Stopping or killing the Docker daemon stops or kills all the running containers</summary> <b>False. While this was true at some point, today the container runtime isn't part of the daemon (it's part of containerd and runc), so stopping or killing the daemon will not affect running containers. </b></details>
<details> <summary>True or False? containerd forks a new instance runc for every container it creates</summary> <b>True </b></details>
<details> <summary>True or False? Running a dozen containers will result in having a dozen runc processes</summary> <b>False. Once a container is created, the parent runc process exits. </b></details>
<details> <summary>What is a shim in regards to Docker?</summary> <b>The shim is the process that becomes the container's parent when the runc process exits. It's responsible for keeping the container's STDIN/STDOUT streams open and for reporting the container's exit status back to the daemon.
Via the local socket at /var/run/docker.sock
</b></details>
A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Containerfile/Dockerfile. Each layer except the very last one is read-only. Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged. Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state. </b></details>
<details> <summary>What best practices are you familiar related to working with containers?</summary> <b> </b></details> <details> <summary>How do you manage persistent storage in Docker?</summary> <b> </b></details> <details> <summary>How can you connect from the inside of your container to the localhost of your host, where the container runs?</summary> <b> </b></details> <details> <summary>How do you copy files from Docker container to the host and vice versa?</summary> <b> </b></details><a name="questions-docker-compose"></a>
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
For example, you can use it to set up ELK stack where the services are: elasticsearch, logstash and kibana. Each running in its own container.
In general, it's useful for running applications composed of several different services. It lets you manage them as one deployed app, instead of multiple separate services. </b></details>
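For example, a minimal docker-compose.yml for a two-service app might look like this (the service names, images and ports are illustrative, not prescribed):

```yaml
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8080:80"       # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```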
<details> <summary>Describe the process of using Docker Compose</summary> <b>Define each part of your application in a Dockerfile, describe your application's services in a docker-compose.yml file, and then run docker-compose up to create and start all the services
</b></details><details> <summary>What are multi-stage builds?</summary> <b>Multi-stage builds allow you to produce smaller container images by splitting the build process into multiple stages.
As an example, imagine you have one Containerfile/Dockerfile where you first build the application and then run it. The whole build process of the application might use packages and libraries you don't really need for running the application later. Moreover, the build process might produce different artifacts, not all of which are needed for running the application.
How do you deal with that? Sure, one option is to add more instructions to remove all the unnecessary stuff, but there are a couple of issues with this approach: every extra instruction adds complexity, and removing files in later layers doesn't necessarily shrink the image, since the files still exist in the earlier layers.
A better solution might be to use multi-stage builds where one stage (the build process) is passing the relevant artifacts/outputs to the stage that runs the application. </b></details>
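A hedged sketch of such a Containerfile/Dockerfile (assuming a hypothetical Go application; the paths are illustrative):

```dockerfile
# Stage 1: build environment with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: the final image contains only the compiled binary
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```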
<details> <summary>True or False? In multi-stage builds, artifacts can be copied between stages</summary> <b>True. This allows us to eventually produce smaller images. </b></details>
<details> <summary>What is <code>.dockerignore</code> used for?</summary> <b>By default, Docker uses everything (all the files and directories) in the directory you use as the build context.
.dockerignore is used for excluding files and directories from the build context
</b></details>
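A typical .dockerignore might look like this (the entries are common examples, not mandated ones):

```
# exclude VCS data, local dependencies and logs from the build context
.git
node_modules
*.log
# the Containerfile/Dockerfile itself usually isn't needed inside the image
Dockerfile
```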
<a name="questions-networking"></a>
CNM (Container Network Model):
<a name="questions-docker-networking"></a>
Docker is using the CNM (Container Network Model) design specification.
The implementation of CNM specification by Docker is called "libnetwork". It's written in Go. </b></details>
<details> <summary>Explain the following blocks in regards to CNM:Networks
Endpoints
Sandboxes</summary>
<b>Networks: a software implementation of a switch. They are used for grouping and isolating a collection of endpoints.
Endpoints: Virtual network interfaces. Used for making connections.
Sandboxes: Isolated network stack (interfaces, routing tables, ports, ...) </b></details>
True. An endpoint can connect only to a single network. </b></details>
<details> <summary>What are some features of libnetwork?</summary> <b><a name="questions-security"></a>
The --privileged flag
</b></details>The --privileged flag
</b></details><a name="questions-docker-in-production"></a>
Images:
False. Communication between client and server shouldn't be done over plain HTTP since it's insecure. It's better to configure the daemon to only accept network connections that are secured with TLS.
Basically, the Docker daemon will only accept secured connections with certificates from trusted CA. </b></details>
<details> <summary>What forms of self-healing options available for Docker containers?</summary> <b>Restart Policies. It allows you to automatically restart containers after certain events. </b></details>
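A hedged sketch of applying restart policies (the image name `myimage` is a placeholder):

```shell
# never restart automatically (the default)
podman run -d --restart=no myimage
# restart on non-zero exit status, at most 3 attempts
podman run -d --restart=on-failure:3 myimage
# always restart when the container stops
podman run -d --restart=always myimage
```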
<details> <summary>What restart policies are you familiar with?</summary> <b>no (the default): don't restart the container automatically
on-failure: restart only when the container exits with a non-zero status
always: always restart the container when it stops
unless-stopped: like always, but don't restart a container that was explicitly stopped (e.g. with docker container stop)
</b></details><a name="questions-rootless-containers"></a>
<details> <summary>Explain Rootless Containers</summary> <b>Historically, users needed root privileges to run containers, while one of the most basic security recommendations is to give users the minimum privileges they need.
Rootless containers address exactly that. Requiring root was the situation for a long time, and even today, running some containers from docker.io still requires root privileges. </b></details>
<details> <summary>Are there disadvantages in running rootless containers?</summary> <b>Yes, the full list can be found here.
Some worth mentioning:
In rootless containers, the process inside the user namespace appears to be running as root, but it is actually executed with regular user privileges. If attackers manage to break out of the user namespace to the host, they keep those same unprivileged rights, so there's not much they can do, as opposed to containers that run with root privileges. </b></details>
<details> <summary>When running a container, usually a virtual ethernet device is created. To do so, root privileges are required. How is this managed in rootless containers?</summary> <b>Networking in rootless containers is usually managed by Slirp. Slirp creates a tap device, which is also the default route, inside the network namespace of the container. The device's file descriptor is passed to the parent process, which runs in the default namespace, and the default namespace is connected to the internet. This enables both external and internal communication. </b></details>
<details> <summary>When running a container, usually a layered file system is created, but this requires root privileges. How is it managed in rootless containers?</summary> <b>New drivers were created to allow creating filesystems in a user namespace, such as FUSE-OverlayFS. </b></details>
<a name="questions-oci"></a>
OCI (Open Container Initiative) is an open governance body established in 2015 to standardize container creation - mostly the image format and the runtime. At that time there were a number of parties involved, the most prominent one being Docker.
Specifications published by OCI:
Create, Kill, Delete, Start and Query State. </b></details>
<a name="questions-containers-scenarios"></a>
podman commit can be a good choice for that. You can create a new image of the running container (with the issue) and share that new image with your team members.
What you probably want to avoid using:
podman save/load, as it applies to an image, not a running container (so you'll share the image, but the issue might not be reproduced when your team members run a container using it)

Use tags. You can distinguish between different releases of a project using image tags. There is no need to create an entirely separate image per version/release of a project. </b></details>