
100 Docker Interview Questions and Answers for 2026

1. What is Docker?

Docker is an open-source platform designed to automate the deployment, scaling, and management of applications within lightweight, portable containers. From a theoretical perspective, Docker embodies the principles of containerization, which is rooted in operating system-level virtualization. This approach leverages kernel features like namespaces and cgroups in Linux to isolate processes, providing a consistent environment across development, testing, and production stages.

Theoretically, Docker addresses the challenges of software deployment by encapsulating an application and its dependencies into a single unit, ensuring reproducibility and eliminating the “it works on my machine” problem. It draws from concepts in computer science such as process isolation, resource limiting, and union file systems, allowing for efficient resource utilization compared to traditional methods.

Key advantages include faster startup times, lower overhead, and improved portability, but trade-offs involve potential security risks if not properly configured, as containers share the host kernel.

2. What is a Docker container?

A Docker container is a standardized, executable unit that packages an application along with its dependencies, libraries, and configuration files, ensuring it runs consistently across different computing environments. Theoretically, containers are based on the concept of virtualization at the OS level, utilizing Linux kernel primitives like namespaces (for process isolation), cgroups (for resource control), and capabilities (for fine-grained permissions).

At a deeper level, a container represents an instance of a Docker image running as an isolated process on the host machine. This isolation provides a sandboxed environment, preventing interference between applications while sharing the host’s kernel, which contrasts with hypervisor-based virtualization.

Containers are ephemeral by design, promoting immutable infrastructure patterns where changes are made by rebuilding rather than modifying running instances, aligning with modern DevOps principles.

Note: Containers do not include a full OS; they rely on the host’s kernel, making them lightweight with startup times in seconds.
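
As a quick illustration (the alpine image and the container ID placeholder are just examples), a container is simply an isolated process started from an image:

# Start an interactive, throwaway container
docker run -it --rm alpine sh

# From another terminal, the container's processes appear as ordinary host processes
docker top <container_id>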

3. What is the difference between a Docker container and a virtual machine?

Docker containers and virtual machines (VMs) both provide isolation for applications, but they differ fundamentally in their architecture and resource usage. VMs operate through hardware virtualization, emulating physical hardware using a hypervisor (e.g., VMware, Hyper-V), which runs a full guest OS on top of the host OS. This results in higher overhead due to the duplication of OS layers.

Theoretically, VMs are based on full virtualization theory, where the hypervisor traps and emulates privileged instructions, providing strong isolation but at the cost of performance. Containers, however, use OS-level virtualization, sharing the host kernel and isolating processes via namespaces and cgroups, leading to near-native performance.

Key differences include:

  • Resource Efficiency: Containers are lightweight (MBs vs. GBs for VMs), with lower CPU/memory overhead.
  • Startup Time: Containers start in seconds; VMs in minutes.
  • Isolation: VMs offer stronger security isolation; containers share the kernel, potentially vulnerable to kernel exploits.
  • Portability: Both are portable, but containers are more so due to standardization.

Provisioning is also faster: a container starts almost immediately because no guest OS has to boot, whereas a VM's provisioning time is dominated by the OS boot. Trade-offs: Containers excel in microservices architectures but may require additional security measures like seccomp or AppArmor.

4. What is a Docker image?

A Docker image is a read-only template that contains the instructions for creating a Docker container, including the application code, runtime, libraries, environment variables, and configuration files. Theoretically, images are built on the union file system concept (e.g., AUFS, OverlayFS), where layers are stacked to form a cohesive file system view, enabling efficient storage and versioning.

From a computer science perspective, images represent immutable artifacts, aligning with functional programming principles where state changes create new versions rather than mutating existing ones. Each layer in an image corresponds to a Dockerfile instruction, cached for incremental builds.

Images are stored in registries and can be versioned with tags, promoting reproducibility in software deployment pipelines.

Note: Images are not runnable; they serve as blueprints for containers.
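
A quick way to observe the layered, read-only nature of images (nginx here is illustrative):

# List local images with their tags and sizes
docker image ls

# Show each layer and the Dockerfile instruction that produced it
docker history nginx:latest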

5. What is a Dockerfile?

A Dockerfile is a text-based script that contains a series of instructions used to build a Docker image automatically. It defines the base image, environment setup, file copies, and runtime commands, embodying declarative programming principles where the desired state is specified rather than procedural steps.

Theoretically, Dockerfiles facilitate infrastructure as code (IaC), drawing from build automation concepts in software engineering. They enable reproducible builds, version control integration, and CI/CD workflows.

Example of a simple Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

This example demonstrates layering: each instruction adds a layer, optimizing for cache reuse.

6. What is Docker Engine?

Docker Engine is the core component of Docker that creates, manages, and runs containers on a host machine. It consists of a server (daemon), a REST API, and a CLI client, theoretically implementing a client-server architecture for container orchestration.

At a conceptual level, it leverages Linux kernel features to provide container runtime, image management, and networking. The engine abstracts complexities like union file systems and network namespaces, allowing developers to focus on applications.

It supports plugins for storage, networking, and authorization, extending its capabilities in distributed systems.

7. What is Docker Hub?

Docker Hub is a cloud-based registry service provided by Docker for finding, storing, and sharing Docker images. Theoretically, it functions as a centralized repository in a distributed version control system analogy, enabling collaboration and image distribution across teams.

It supports public and private repositories, automated builds from GitHub/Bitbucket, and webhooks for CI/CD integration. From an architectural view, it’s built on scalable cloud infrastructure, handling image layers with content-addressable storage for efficiency.

Trade-offs: Convenience vs. vendor lock-in; alternatives include self-hosted registries for sensitive data.

8. What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications using a YAML file to configure services, networks, and volumes. Theoretically, it extends single-container management to orchestration of interconnected services, aligning with microservices architecture principles.

It simplifies development workflows by managing dependencies and startup order, using declarative syntax for reproducibility. Compose files define a directed acyclic graph (DAG) of service relationships implicitly.

Example compose file:

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

This defines a web service built from the current directory and a Redis service from an official image.

9. What is Docker Swarm?

Docker Swarm is Docker’s native clustering and orchestration tool for managing a cluster of Docker nodes as a single virtual system. Theoretically, it implements distributed systems concepts like leader election (via Raft consensus), service discovery, and load balancing.

It turns multiple machines into a swarm, with managers handling orchestration and workers running containers. Services are defined declaratively, with desired state reconciliation ensuring high availability.

Advantages: Built-in, simple setup; trade-offs: Less feature-rich than Kubernetes for complex workloads.

10. How do Docker containers differ from traditional virtualization?

Docker containers differ from traditional virtualization (e.g., VMs) by virtualizing at the OS level rather than hardware level. Traditional virtualization uses hypervisors to emulate hardware, running full guest OSes, while containers share the host kernel, isolating processes.

Theoretically, this leads to better resource utilization (no OS duplication), faster provisioning, and lower latency. System calls from a container go straight to the shared host kernel, whereas in a VM privileged operations may be trapped and mediated by the hypervisor, adding overhead.

Trade-offs: Weaker isolation in containers, suitable for trusted workloads; VMs for multi-tenant environments.

11. What is the Docker daemon?

The Docker daemon (dockerd) is a persistent background process that manages Docker objects like images, containers, networks, and volumes. Theoretically, it acts as the server in a client-server model, exposing a REST API for CLI and other clients.

It handles low-level interactions with the kernel, implementing container lifecycle management. Because it traditionally runs as root, it carries security implications, mitigated by rootless mode in newer versions.
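
A minimal sketch for checking on the daemon, assuming a systemd-based Linux host:

# Verify the daemon is running (systemd hosts)
systemctl status docker

# Query daemon-level details such as storage driver, cgroup version, and root directory
docker info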

12. What command is used to build a Docker image?

The command used to build a Docker image is docker build. It reads a Dockerfile and creates an image by executing instructions layer by layer.

Example:

# Build an image from the current directory's Dockerfile, tagging it as myapp:v1
docker build -t myapp:v1 .

Theoretically, this process uses caching to optimize rebuilds, reducing time complexity for incremental changes.

13. What is the purpose of the FROM instruction in a Dockerfile?

The FROM instruction specifies the base image from which to build, serving as the foundation layer. Theoretically, it enables image inheritance, promoting reuse and layering in a hierarchical structure akin to object-oriented design.

It must be the first non-comment instruction, pulling from a registry if not local.

Example:

FROM ubuntu:20.04

14. What is the difference between CMD and ENTRYPOINT in a Dockerfile?

CMD and ENTRYPOINT both define the default executable for a container, but differ in override behavior. CMD provides default arguments that can be overridden by command-line args, while ENTRYPOINT sets a fixed executable, with args appending to it.

Theoretically, ENTRYPOINT enforces a command pattern, useful for wrapper scripts; CMD for flexibility.

Example with CMD:

CMD ["echo", "Hello World"]

Running with args overrides it.

Example with ENTRYPOINT:

ENTRYPOINT ["echo", "Hello"]
CMD ["World"]

Args append to ENTRYPOINT.
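
A quick sketch of the runtime behavior, assuming two illustrative images: cmd-img built from the CMD-only snippet above, and ep-img built from the ENTRYPOINT/CMD pair above:

# cmd-img: arguments replace CMD entirely
docker run cmd-img              # prints "Hello World"
docker run cmd-img date         # runs `date` instead of echo

# ep-img: ENTRYPOINT stays fixed, arguments replace CMD
docker run ep-img               # prints "Hello World"
docker run ep-img Docker        # prints "Hello Docker"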

15. What is the EXPOSE instruction in a Dockerfile?

The EXPOSE instruction documents which ports the container listens on at runtime, informing users and tools like Compose for networking.

It doesn’t publish ports; that’s done with -p in run. Theoretically, it’s metadata for service discovery in container ecosystems.

Example:

EXPOSE 80/tcp
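
To actually make a port reachable, publish it at run time; EXPOSE alone is only documentation (nginx here is illustrative):

# Publish container port 80 on host port 8080
docker run -d -p 8080:80 nginx

# Or publish all EXPOSEd ports on random host ports
docker run -d -P nginx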

16. What is the COPY instruction in a Dockerfile?

The COPY instruction copies files or directories from the build context to the container’s filesystem, creating a new layer.

Theoretically, it ensures build reproducibility by embedding assets immutably.

Example:

COPY ./app /usr/src/app

17. What is the ADD instruction in a Dockerfile, and how does it differ from COPY?

The ADD instruction copies files/directories like COPY but also supports URL fetching and tar extraction. However, it’s less predictable, so COPY is preferred for local files.

Difference: ADD auto-unpacks tars and handles remote URLs; COPY is stricter and more secure.

Example:

ADD http://example.com/file.tar /app/

vs.

COPY file.tar /app/

Theoretically, ADD introduces non-determinism from external sources, affecting build caches.

18. What is a Docker volume?

A Docker volume is a persistent data storage mechanism that exists independently of containers, managed by Docker. Theoretically, volumes decouple data from container lifecycle, supporting stateful applications in stateless container paradigms.

They use host storage but abstract the path, improving portability.
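
A minimal sketch of the volume lifecycle (names and images are illustrative):

# Create, list, and inspect a named volume
docker volume create appdata
docker volume ls
docker volume inspect appdata

# Mount it into a container at /data; the data outlives the container
docker run -d -v appdata:/data --name web nginx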

19. What is the difference between a volume and a bind mount?

Volumes are managed by Docker, stored in a Docker-controlled directory, while bind mounts map host paths directly into containers.

Differences:

  • Management: Volumes are portable; bind mounts depend on host paths.
  • Performance: Broadly similar on Linux; on Docker Desktop (macOS/Windows) volumes generally outperform bind mounts.
  • Security: Volumes safer as they isolate data.

Theoretically, volumes align with container abstraction; bind mounts for development.
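
The two mount types side by side using the --mount syntax (paths, names, and the nginx image are illustrative):

# Named volume: Docker manages where the data lives
docker run -d --mount type=volume,source=appdata,target=/data nginx

# Bind mount: an explicit host path is mapped into the container
docker run -d --mount type=bind,source=/home/user/site,target=/usr/share/nginx/html nginx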

20. How do you run a container in detached mode?

To run a container in detached mode, use the -d flag with docker run, running it in the background.

Example:

docker run -d --name mycontainer nginx

This detaches the process, useful for long-running services.

21. What command lists all running containers?

The command to list all running containers is docker ps. Theoretically, this command interacts with the Docker daemon to query the state of container processes managed by the container runtime, leveraging kernel namespaces and control groups (cgroups) to enumerate active isolated environments.

From a computer science perspective, it reflects process management in operating systems, where the daemon maintains a registry of process IDs (PIDs) and their associated metadata, filtered to show only those in the “running” state. This operation has O(n) time complexity where n is the number of containers, as it iterates over the daemon’s internal data structures.

Advantages include quick diagnostics; trade-offs involve potential performance impact on heavily loaded systems due to repeated queries.

# List all running containers with details like ID, image, command, status
docker ps

Note: Use with flags like -q for quiet mode to get only IDs, useful in scripting.

22. What command lists all containers, including stopped ones?

The command to list all containers, including stopped ones, is docker ps -a. Theoretically, this extends the basic ps command by removing the filter on container status, providing a comprehensive view of the container lifecycle states as defined in Docker’s state machine model.

Conceptually, it draws from finite state machine theory, where containers transition between states like created, running, paused, stopped, and dead. The -a flag ensures enumeration of all states, aiding in resource management and debugging.

Time complexity remains O(n), but with potentially larger n including historical containers. Trade-offs: Useful for cleanup but may overwhelm output on systems with many exited containers.

# List all containers, showing status for each
docker ps -a

23. How do you stop a running container?

To stop a running container, use docker stop <container_id or name>. Theoretically, this command sends a SIGTERM signal to the container’s main process (PID 1), allowing graceful shutdown, followed by SIGKILL after a grace period if unresponsive, embodying signal handling in Unix-like systems.

From an OS theory viewpoint, it utilizes process signaling mechanisms, ensuring resource cleanup like releasing network ports and file descriptors. Advantages: Prevents data corruption; trade-offs: Potential delays if processes ignore SIGTERM.

# Stop a container named mycontainer
docker stop mycontainer

# Force stop with immediate SIGKILL using docker kill
docker kill mycontainer

Note: Always prefer stop over kill to allow cleanup hooks to run.

24. How do you remove a container?

To remove a container, use docker rm <container_id or name>. The container must be stopped first; use -f to force removal of running ones. Theoretically, this operation deallocates resources associated with the container’s namespace, removing its filesystem overlay and metadata from the daemon’s store.

Conceptually, it aligns with garbage collection in memory management, freeing up disk space and preventing namespace clutter. Time complexity is O(1) for metadata removal, but filesystem deletion may vary. Trade-offs: Irreversible; use with caution to avoid data loss if volumes are attached.

# Remove a stopped container
docker rm mycontainer

# Force remove a running container
docker rm -f mycontainer

25. What is docker ps -a used for?

docker ps -a is used to list all containers, including those that are stopped or exited. Theoretically, it provides a holistic view of container history, facilitating auditing and resource reclamation in container orchestration systems.

Drawing from database query theory, it’s akin to a SELECT * without WHERE clause on status, enabling users to inspect exit codes and logs for troubleshooting. Advantages: Comprehensive insights; trade-offs: Verbose output requiring filtering.

# View all containers for cleanup
docker ps -a

26. How do you pull an image from Docker Hub?

To pull an image from Docker Hub, use docker pull <image:tag>. Theoretically, this command downloads image layers from the registry using HTTP/HTTPS, verifying manifests and decompressing layers into the local storage driver (e.g., overlay2).

From a distributed systems perspective, it involves content-addressable storage with SHA256 digests for integrity, reducing bandwidth via layer caching. Time complexity depends on network; advantages: Atomicity ensures consistent pulls; trade-offs: Requires internet access.

# Pull the latest nginx image
docker pull nginx:latest

27. What command pushes an image to a registry?

The command to push an image to a registry is docker push <repository:tag>. Theoretically, it uploads image layers and manifest to the registry API, using authentication tokens for secure transmission.

Conceptually, it mirrors version control push operations, enabling distribution in CI/CD pipelines. Advantages: Facilitates sharing; trade-offs: Exposes images to potential vulnerabilities if not scanned.

# Push after tagging
docker push myrepo/myimage:v1

Note: Login with docker login first for private registries.

28. What is a Docker registry?

A Docker registry is a stateless, highly scalable server-side application that stores and distributes Docker images. Theoretically, it implements the Docker Registry HTTP API V2, using blob storage for layers and manifests, supporting push/pull operations in a client-server model.

From an architectural theory, it’s a key-value store where keys are repository/tags and values are image metadata, enabling microservices deployment. Advantages: Centralized management; trade-offs: Single point of failure without HA setup.

Example usage:

# Pull from a private registry
docker pull myregistry.com/image:tag

29. How do you tag a Docker image?

To tag a Docker image, use docker tag <source_image:tag> <target_repository:tag>. Theoretically, tagging creates an alias in the local image store without duplicating data, leveraging hard links in the storage backend.

Conceptually, it’s similar to symbolic links in filesystems, facilitating versioning and registry preparation. Advantages: No overhead; trade-offs: Can lead to confusion if overused.

# Tag local image for push
docker tag myimage:latest myrepo/myimage:v1

30. What is multi-stage build in Docker?

Multi-stage build in Docker uses multiple FROM instructions in a Dockerfile to create intermediate images for building artifacts, then copying only essentials to the final image. Theoretically, it optimizes the software supply chain by separating compile-time and runtime dependencies, reducing image size and attack surface.

From a build system theory, it’s akin to multi-phase compilation in languages like C++, where intermediate objects are discarded. Build time grows with the number of stages, but only the final stage’s layers end up in the resulting image, saving storage. Advantages: Smaller images; trade-offs: Complex debugging.

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o main .

# Final stage
FROM alpine:latest
COPY --from=builder /app/main /usr/local/bin/main
CMD ["main"]

Note: Use named stages for selective copying.

31. Why would you use multi-stage builds?

Multi-stage builds are used to optimize Docker images by separating the build environment from the runtime environment, resulting in smaller, more secure final images. Theoretically, this approach draws from software engineering principles of separation of concerns and minimalism, where compile-time dependencies (e.g., build tools, compilers) are isolated in intermediate stages and discarded, leaving only runtime artifacts in the final image.

From a computer science perspective, it aligns with layered file systems in Docker, where each stage creates temporary layers that can be referenced via --from for selective copying, reducing the overall image size and attack surface. Build time remains roughly linear in the number of instructions, while storage efficiency improves substantially for large dependency graphs.

Theoretical advantages include reduced deployment times, lower bandwidth usage in registries, and enhanced security by minimizing exposed binaries. Trade-offs involve increased Dockerfile complexity and potential debugging challenges across stages.

Example in a Go application:

# Build stage (discarded after build)
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go mod download && go build -o main .

# Runtime stage (final image)
FROM alpine:3.18
WORKDIR /app
COPY --from=builder /app/main .
CMD ["./main"]

Note: This reduces image size from ~800MB (with Go tools) to ~10MB (Alpine with binary).

32. What is the role of .dockerignore file?

The .dockerignore file specifies patterns for files and directories to exclude from the Docker build context, preventing unnecessary data from being sent to the daemon. Theoretically, it functions like .gitignore in version control systems, optimizing the build process by reducing the context size, which is tarred and transmitted during docker build.

From an architectural theory, this minimizes I/O operations and network transfer in distributed build environments, improving build performance. It addresses the principle of least inclusion, ensuring only essential files are layered into images, thus avoiding bloat and potential security leaks from sensitive files.

Advantages: Faster builds, smaller contexts; trade-offs: Risk of excluding needed files if patterns are too broad.

Example .dockerignore:

# Exclude Git files and logs
.git
*.log
node_modules/

Note: Place it in the build context root; patterns follow glob syntax.

33. How do you create a Docker network?

To create a Docker network, use the command docker network create <network_name>. Theoretically, this instantiates a virtual network using Docker’s libnetwork, which abstracts networking primitives like bridges, overlays, or macvlan, enabling container isolation and communication.

From a networking theory perspective, it leverages software-defined networking (SDN) concepts, where the daemon configures kernel modules (e.g., iptables, veth pairs) to create namespaces for IP allocation and routing.

# Create a bridge network named mynet
docker network create mynet

# Create with subnet
docker network create --subnet=192.168.0.0/16 mynet

Advantages: Custom isolation; trade-offs: Overhead in complex topologies.

34. What are the different Docker network drivers?

Docker supports several network drivers: bridge (default for single-host), host (shares host network), overlay (for Swarm multi-host), macvlan (direct L2 access), ipvlan (L3 variant), and none (no networking). Theoretically, these drivers implement different OSI layers: bridge at L2, overlay at L3 with VXLAN encapsulation for distributed systems.

From computer science, they embody polymorphism in networking APIs, allowing pluggable backends for various topologies. Per-packet overhead also varies: host mode adds none, bridge adds a veth hop and NAT, and overlay adds VXLAN encapsulation and decapsulation.

  • Bridge: Isolated single-host networks.
  • Host: No isolation, uses host stack.
  • Overlay: Multi-host with service discovery.
  • Macvlan: Containers appear as physical devices.

Trade-offs: Bridge is simple but not scalable; overlay adds latency but enables clustering.
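
A sketch of creating networks with different drivers; overlay requires Swarm mode, and the macvlan parent interface eth0 is an assumption about the host:

# User-defined bridge network
docker network create --driver bridge appnet

# Overlay network for Swarm (run on a manager node)
docker network create --driver overlay --attachable swarmnet

# Macvlan bound to a host interface
docker network create --driver macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 macnet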

35. What is the bridge network in Docker?

The bridge network is Docker’s default single-host network driver, creating a private internal network (docker0 bridge) for container communication. Theoretically, it uses Linux bridge utilities to form a virtual switch at L2, with NAT for external access via iptables masquerading.

Conceptually, it applies graph theory where containers are nodes connected via veth pairs to the bridge, enabling broadcast domains. Advantages: Automatic DNS resolution; trade-offs: Limited to one host, potential IP conflicts.

# Inspect default bridge
docker network inspect bridge

36. How do containers in the same network communicate?

Containers in the same network communicate using container names as hostnames via Docker’s built-in DNS server, or directly via IP addresses. Theoretically, this relies on network namespaces and the daemon’s resolver, implementing service discovery without external tools.

From distributed systems theory, it’s a form of name resolution in overlay networks, with O(1) lookup via in-memory maps. Packets route through virtual interfaces, ensuring isolation.

# Run two containers on mynet
docker run -d --network mynet --name web nginx
docker run -it --network mynet alpine ping web

Trade-offs: DNS failures if daemon restarts; use --link for legacy aliasing.

37. What is Docker host network mode?

Docker host network mode allows a container to share the host’s network stack, bypassing namespace isolation. Theoretically, it eliminates networking overhead, directly binding to host interfaces, akin to running processes natively.

From OS theory, it disables net namespace, sharing IPs and ports, useful for performance-critical apps. Advantages: Low latency; trade-offs: No port mapping, security risks from exposure.

# Run in host mode
docker run --network host nginx

38. What is the purpose of the --network host flag when running a container?

The --network host flag runs the container in the host’s network namespace, allowing direct access to host interfaces without NAT or port publishing. Theoretically, it optimizes for throughput by avoiding virtual networking layers, reducing context switches in kernel packet processing.

Conceptually, it trades isolation for efficiency, suitable for monitoring tools. Trade-offs: Port conflicts, reduced security.

docker run -d --network host --name myapp myimage

39. How do you inspect a Docker container?

To inspect a Docker container, use docker inspect <container_id or name>, which returns JSON metadata. Theoretically, this queries the daemon’s internal state, reflecting container configuration from namespaces to mounts.

From data structures perspective, it’s a serialized graph of object relationships. Useful for debugging.

# Inspect container
docker inspect mycontainer

40. What command shows logs of a container?

The command to show logs is docker logs <container_id or name>. Theoretically, it retrieves stdout/stderr from the daemon’s log driver (e.g., json-file), implementing logging as a decoupled service.

Advantages: Centralized viewing; trade-offs: Log rotation needed for large outputs.

# Tail logs
docker logs -f mycontainer

41. How do you execute a command inside a running container?

To execute a command inside a running container, use docker exec -it <container> <command>. Theoretically, it attaches to the container’s namespace via nsenter, executing in its isolated environment.

From process theory, it’s like ptrace for injection, enabling debugging.

# Run bash inside
docker exec -it mycontainer bash

42. What is docker exec used for?

docker exec is used to run commands in a running container without restarting it. Theoretically, it provides ad-hoc access to container processes, supporting dynamic management in immutable infrastructures.

Trade-offs: Potential state mutation conflicting with immutability principles.

docker exec mycontainer ls /app

43. How do you commit changes to a new image from a container?

To commit changes, use docker commit <container> <new_image:tag>. Theoretically, it creates a new layer from the container’s writable overlay, snapshotting state.

From versioning theory, it’s like a commit in VCS, but discouraged for reproducibility.

docker commit mycontainer newimage:v1

44. What is the difference between docker commit and docker build?

docker commit creates an image from a running container’s changes, while docker build uses a Dockerfile for reproducible builds. Theoretically, commit is imperative (state-based), build declarative (instruction-based), aligning with IaC principles.

Advantages of build: Caching, versioning; commit: Quick prototyping. Trade-offs: Commit lacks auditability.

# Commit example
docker commit c1 newimg

# Build example
docker build -t newimg .

45. How do you save a Docker image to a tar file?

Use docker save -o <file.tar> <image:tag> to export an image to tar. Theoretically, it serializes layers and manifest for offline transfer.

docker save -o myimage.tar myimage:v1

46. How do you load a Docker image from a tar file?

Use docker load -i <file.tar> to import. Theoretically, it deserializes and loads into storage.

docker load -i myimage.tar

47. What is Docker Content Trust?

Docker Content Trust (DCT) enforces image signing and verification using The Update Framework (TUF), ensuring integrity and authenticity. Theoretically, it applies cryptographic principles like digital signatures and root-of-trust to supply chain security.

Advantages: Mitigates MiTM; trade-offs: Key management overhead.

48. How do you enable Docker Content Trust?

Enable by setting export DOCKER_CONTENT_TRUST=1. Theoretically, it toggles verification in the client, rejecting unsigned images.

export DOCKER_CONTENT_TRUST=1
docker pull signedimage

49. What are Docker secrets?

Docker secrets are secure, encrypted blobs for sensitive data like passwords, managed by Swarm. Theoretically, they implement secret management patterns, using Raft for replication and in-memory mounting.

Advantages: No plaintext exposure; trade-offs: Swarm-only.

50. How do you manage secrets in Docker Swarm?

Manage secrets with docker secret create, then attach to services. Theoretically, secrets are stored encrypted in the Raft log, mounted as tmpfs in containers.

# Create secret
echo "mypassword" | docker secret create my_secret -

# Use in service
docker service create --secret my_secret nginx

Note: Access via /run/secrets/my_secret in container.

51. What is a HEALTHCHECK instruction in Dockerfile?

The HEALTHCHECK instruction in a Dockerfile defines a command that Docker runs periodically to check if the container is healthy. Theoretically, it embodies monitoring principles in distributed systems, where health checks ensure service availability by probing runtime state, aligning with fault-tolerant design patterns like circuit breakers and self-healing architectures.

From a computer science perspective, it leverages periodic polling (with configurable intervals) to execute a user-defined command, returning exit codes: 0 for healthy, 1 for unhealthy. This integrates with orchestrators like Swarm or Kubernetes for automated recovery, drawing from state machine models where container health transitions affect scheduling decisions.

Theoretical advantages include proactive failure detection, reducing downtime; trade-offs involve added overhead from repeated executions, potentially impacting performance in resource-constrained environments.

Example in Dockerfile:

# Healthcheck to verify if the app responds to HTTP
HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost/ || exit 1

Note: Only one HEALTHCHECK per Dockerfile; subsequent ones override.

52. How do restart policies work in Docker?

Restart policies in Docker define how the daemon handles container exits, automatically restarting based on conditions. Theoretically, they implement resilience patterns in container orchestration, rooted in supervisor process models from operating systems, ensuring high availability by reconciling desired vs. actual state.

Conceptually, policies use finite state automata: no (default, no restart), always (restart indefinitely), on-failure[:max-retries] (restart on non-zero exit), unless-stopped (restart except if manually stopped). The daemon monitors exit codes, applying backoff strategies to prevent restart loops.

Time complexity for monitoring is O(1) per container, but aggregate overhead grows with fleet size. Advantages: Simplifies ops; trade-offs: May mask underlying issues if misconfigured.

# Run with on-failure policy
docker run -d --restart on-failure:3 myimage

53. What restart policy should be used for critical services?

For critical services, the “always” restart policy is recommended, ensuring the container restarts regardless of exit code. Theoretically, this aligns with always-available architectures in fault-tolerant systems, drawing from redundancy principles where automatic recovery maintains service levels.

From an academic view, it mitigates transient failures (e.g., network blips) but requires idempotency in applications to handle restarts gracefully. Trade-offs: Potential resource exhaustion from faulty loops; combine with health checks for robustness.

# Apply always policy
docker run -d --restart always critical-service

Note: In Swarm, use service replicas for better HA.

54. How do you limit CPU usage for a container?

To limit CPU usage, use flags like --cpus or --cpu-shares in docker run. Theoretically, this utilizes Linux cgroups (control groups) for resource isolation, enforcing proportional sharing or hard limits based on Completely Fair Scheduler (CFS) algorithms.

Conceptually, --cpus sets a hard cap (e.g., 1.5 cores), while --cpu-shares defines relative weights in contention scenarios, embodying fair-share scheduling theory. Time complexity for enforcement is O(log n) per scheduling decision.

Advantages: Prevents noisy neighbors; trade-offs: Over-limiting may cause throttling.

# Limit to 2 CPUs
docker run -d --cpus=2 myimage

55. How do you limit memory for a container?

To limit memory, use the --memory and --memory-swap flags. Theoretically, this leverages cgroups for memory control, setting hard limits to prevent OOM (out-of-memory) kills by the kernel’s OOM killer heuristic.

From OS theory, it isolates virtual memory, with --memory as the RAM limit and --memory-swap including swap. Advantages: Resource guarantees; trade-offs: Swapping degrades performance.

# Limit to 512MB memory
docker run -d --memory=512m myimage

56. What is Docker BuildKit?

Docker BuildKit is an advanced image build toolkit that enhances the build process with concurrency, caching, and extensibility. Theoretically, it rearchitects the builder as a graph-based executor, using a DAG (directed acyclic graph) for instructions to enable parallel execution and incremental builds.

Conceptually, rooted in build system theories like Make or Bazel, it supports frontend plugins and secrets mounting, improving efficiency. Advantages: Faster builds; trade-offs: Compatibility with older Dockerfiles.
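
A sketch of one BuildKit-specific feature, a cache mount that keeps a package cache between builds without baking it into a layer; the syntax directive and the Node project here are illustrative assumptions:

# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# The npm cache persists across builds but never lands in the image
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
CMD ["node", "server.js"]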

57. How do you enable BuildKit?

Enable BuildKit by setting DOCKER_BUILDKIT=1 environment variable or using docker buildx. Theoretically, this switches to the BuildKit backend, optimizing the build pipeline with lazy loading and cache exports.

# Enable for a build
DOCKER_BUILDKIT=1 docker build -t myimage .

58. What is the difference between docker build and docker buildx?

docker build uses the classic builder, while docker buildx extends it with BuildKit features like multi-platform builds. Theoretically, buildx implements a pluggable architecture, supporting emulated architectures via QEMU.

Differences:

  • build: Sequential, single-platform.
  • buildx: Parallel, cross-platform.
# Classic
docker build -t img .

# Buildx multi-arch
docker buildx build --platform linux/amd64,linux/arm64 -t img .

59. What are Docker labels?

Docker labels are key-value metadata pairs attached to images, containers, etc., for organization and querying. Theoretically, they enable annotation in resource models, akin to tags in cloud computing for filtering and automation.

Advantages: Enhances discoverability; trade-offs: No enforcement, potential sprawl.
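
Labels become useful when filtering; the label keys and values here are illustrative:

# Run a container with labels
docker run -d --label env=staging --label team=payments nginx

# Filter containers and images by label
docker ps --filter "label=env=staging"
docker images --filter "label=maintainer=user@example.com"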

60. How do you add labels to an image?

Add labels using LABEL in Dockerfile or –label in commands. Theoretically, labels are stored in image manifests, supporting queries via filters.

LABEL version="1.0" maintainer="user@example.com"

61. What is Docker Scout?

Docker Scout is a security tool for analyzing images for vulnerabilities. Theoretically, it integrates SBOM (Software Bill of Materials) generation with vulnerability databases, applying static analysis for supply chain security.

Advantages: Proactive scanning; trade-offs: Requires Docker account.

docker scout cves myimage

62. How does Docker handle storage drivers?

Docker handles storage drivers by abstracting filesystem operations via pluggable backends like overlay2, aufs. Theoretically, drivers manage union file systems for layering, ensuring copy-on-write efficiency.

Conceptually, they optimize I/O with deduplication, but choice depends on kernel support.
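
To check which driver a given host uses (a minimal sketch):

# Show the active storage driver (typically overlay2 on modern kernels)
docker info --format '{{.Driver}}'

# Show the daemon root directory where layer data is stored
docker info --format '{{.DockerRootDir}}'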

63. What is the overlay2 storage driver?

Overlay2 is Docker’s default storage driver using OverlayFS for efficient layering. Theoretically, it stacks directories with lowerdir/upperdir, providing O(1) access via merged views.

Advantages: Fast, native; trade-offs: Inode limits on large images.

64. What happens to data when a container is removed?

When a container is removed, its writable layer and non-persisted data are deleted. Theoretically, this enforces ephemerality, but volumes persist independently.

Trade-offs: Data loss if not mounted; use volumes for state.

65. How do you persist data in Docker?

Persist data using volumes or bind mounts. Theoretically, volumes decouple storage from container lifecycle, supporting durable state in stateless designs.

# Create and mount volume
docker volume create myvol
docker run -v myvol:/data myimage

66. What is docker-compose.yml?

The docker-compose.yml file is a YAML-based configuration file used by Docker Compose to define and manage multi-container applications. Theoretically, it embodies declarative programming principles in container orchestration, where the desired state of services, networks, and volumes is specified in a human-readable format, allowing for reproducible environments across development, testing, and production.

From a computer science perspective, it represents a domain-specific language (DSL) for infrastructure as code (IaC), drawing from configuration management theories like those in Puppet or Ansible. The file structures a graph of dependencies, with services as nodes and links as edges, enabling topological sorting for startup order. Theoretical advantages include simplified complexity management in microservices architectures; trade-offs involve limited scalability for large clusters compared to full orchestrators like Kubernetes.

Example docker-compose.yml:

version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
networks:
  default:

Note: The version key specifies the Compose file format, influencing feature availability.

67. How do you start services with Docker Compose?

To start services with Docker Compose, use the command docker-compose up. Theoretically, this command parses the docker-compose.yml file to build a dependency graph, then provisions resources in a breadth-first manner, ensuring networks and volumes are created before services, aligning with graph theory in resource allocation.

Conceptually, it implements a reconciliation loop similar to control theory in systems engineering, where the current state is brought to match the declared state. Advantages: Atomic operations for consistency; trade-offs: Blocking behavior unless detached with -d.

# Start services in foreground
docker-compose up

# Start in detached mode with build
docker-compose up -d --build

Note: Use docker-compose down to stop and remove resources gracefully.

68. What command scales a service in Docker Compose?

The command to scale a service in Docker Compose is docker-compose up --scale <service>=<replicas>. Theoretically, scaling adjusts the replication factor in the service definition, drawing from distributed systems theory where horizontal scaling improves fault tolerance and load distribution via replicas.

From an architectural perspective, it emulates sharding patterns but is limited to single-host, lacking Swarm’s multi-node capabilities. Time complexity for scaling is O(n) with n replicas. Trade-offs: Resource contention on one host.

# Scale web service to 3 instances
docker-compose up -d --scale web=3

69. What is the difference between Docker Compose and Docker Swarm?

Docker Compose is for single-host multi-container orchestration, while Docker Swarm is for multi-host clustering. Theoretically, Compose follows a monolithic orchestration model, suitable for development, whereas Swarm implements distributed consensus via Raft algorithm for fault-tolerant, scalable production environments.

Key differences:

  • Scope: Compose: Local; Swarm: Cluster-wide.
  • Scaling: Compose: Manual on host; Swarm: Automatic across nodes.
  • Features: Swarm adds services, stacks, secrets; Compose focuses on simplicity.

Theoretical advantages of Swarm: High availability; trade-offs: Increased complexity in management.

70. How do you deploy a stack in Docker Swarm?

To deploy a stack in Docker Swarm, use docker stack deploy -c docker-compose.yml <stack_name>. Theoretically, this translates Compose declarations into Swarm objects (services, networks), applying desired state reconciliation in a distributed manner, rooted in control plane architectures.

Conceptually, it deploys a DAG of resources, ensuring eventual consistency across the cluster. Advantages: Blue-green deployments; trade-offs: Requires Swarm initialization.

# Deploy stack named mystack
docker stack deploy -c docker-compose.yml mystack

71. What is a Docker service?

A Docker service is a declarative abstraction for running replicated tasks (containers) in Swarm mode. Theoretically, it represents a unit of work in cluster scheduling, drawing from job scheduling theories like in Apache Mesos, where tasks are distributed based on constraints and resources.

Services maintain desired replicas, handling failures via rescheduling. Advantages: Scalability; trade-offs: Overhead from orchestration.

72. How do you create a service in Swarm mode?

To create a service in Swarm mode, use docker service create. Theoretically, this instantiates a service object in the Raft store, triggering the allocator to schedule tasks on nodes meeting constraints, embodying bin-packing algorithms.

# Create nginx service with 3 replicas
docker service create --name web --replicas 3 -p 80:80 nginx

73. What are overlay networks in Swarm?

Overlay networks in Swarm are virtual networks spanning multiple hosts using VXLAN encapsulation. Theoretically, they implement SDN principles, creating L2 overlays over L3 underlays for seamless container communication, addressing IPAM and routing in distributed systems.

Advantages: Multi-host connectivity; trade-offs: Encapsulation overhead.

# Create overlay network
docker network create -d overlay myoverlay

74. How does Docker Swarm handle load balancing?

Docker Swarm handles load balancing via built-in routing mesh and ingress network, distributing traffic round-robin across replicas. Theoretically, it uses IPVS (IP Virtual Server) for L4 balancing, drawing from kernel-level load balancing theories for efficiency.

Advantages: Automatic; trade-offs: No advanced algorithms like least connections.
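
A minimal sketch of the routing mesh: publishing a port on a replicated service makes every swarm node accept traffic on that port and spread it across the replicas (image, name, and ports are illustrative):

# Port 8080 on any swarm node load-balances across the 3 replicas
docker service create --name web --replicas 3 --publish published=8080,target=80 nginx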

75. What is the difference between replicas and global mode in Swarm?

Replicas mode runs a fixed number of tasks distributed across nodes, while global mode runs one task per node. Theoretically, replicas follows partitioned replication for scalability; global ensures ubiquitous presence like in sensor networks.

  • Replicas: Horizontal scaling.
  • Global: Daemon-like services.
# Replicas
docker service create --replicas 5 nginx

# Global
docker service create --mode global nginx

76. How do you update a service in Swarm without downtime?

To update a service without downtime, use docker service update --image <new_image> --update-parallelism 1 --update-delay 10s. Theoretically, this implements rolling updates, a strategy from deployment patterns ensuring zero-downtime by gradual replacement, based on canary release theories.

Advantages: Minimizes risk; trade-offs: Slower rollout.

docker service update --image nginx:newtag web

77. What is docker stack deploy?

docker stack deploy is a command to deploy or update a stack from a Compose file in Swarm. Theoretically, it orchestrates multi-service deployments, applying declarative updates to achieve convergence in distributed state management.

docker stack deploy -c compose.yml mystack

78. How do you drain a node in Docker Swarm?

To drain a node, use docker node update --availability drain <node>. Theoretically, draining evokes graceful shutdown in cluster management, rescheduling tasks elsewhere to maintain availability, rooted in fault isolation theories.

docker node update --availability drain worker1

79. What is the role of Docker in CI/CD pipelines?

Docker’s role in CI/CD is to provide consistent, portable environments for building, testing, and deploying applications. Theoretically, it enables immutable artifacts in pipelines, drawing from DevOps principles for reducing variability, with containers as units in workflow DAGs.

Advantages: Reproducibility; trade-offs: Image management overhead.
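
A hedged sketch of typical pipeline steps as plain shell; the registry name, tag variable, and test command are placeholders for whatever the pipeline defines:

# Build an immutable, uniquely tagged image for this commit
docker build -t registry.example.com/myapp:${GIT_COMMIT} .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/myapp:${GIT_COMMIT} npm test

# Push the artifact so later stages deploy exactly the same bits
docker push registry.example.com/myapp:${GIT_COMMIT}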

80. How do you integrate Docker with Jenkins?

To integrate Docker with Jenkins, install Docker plugins, configure agents, and use Docker steps in pipelines. Theoretically, this creates hybrid orchestration, where Jenkins acts as the control plane, leveraging Docker for isolated builds, aligning with agent-based CI theories.

// Jenkinsfile example
pipeline {
  agent { docker { image 'node:14' } }
  stages {
    stage('Build') {
      steps { sh 'npm install' }
    }
  }
}

Note: Use docker.build() for image creation in pipelines.

81. What is Docker in a microservices architecture?

Docker plays a pivotal role in microservices architecture by providing containerization, which enables the decomposition of monolithic applications into independently deployable, scalable services. Theoretically, microservices draw from service-oriented architecture (SOA) principles and domain-driven design (DDD), where each service represents a bounded context with its own data and logic. Docker encapsulates these services into containers, leveraging OS-level virtualization via Linux kernel features like namespaces and cgroups to achieve isolation, portability, and consistency across environments.

From a computer science perspective, Docker aligns with the actor model in concurrent systems, where containers act as isolated actors communicating via APIs, reducing coupling and enhancing fault isolation. Deployment effort grows roughly linearly with the number of services, though images can be built in parallel and each container starts in near-constant time. Theoretical advantages include improved scalability through horizontal replication and easier rollback; trade-offs involve increased network overhead and complexity in service discovery, often mitigated by orchestrators like Kubernetes.

Note: In microservices, Docker enables a polyglot approach, allowing each service to use its own tech stack and data store.
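
As an illustration, a minimal Compose sketch of two independently deployable services; the service names, images, and ports are assumptions:

version: '3.8'
services:
  orders:                                  # one bounded context with its own image
    image: myorg/orders-service:1.2.0
    ports:
      - "8081:8080"
  payments:                                # a second service, scaled and released independently
    image: myorg/payments-service:2.0.1
    ports:
      - "8082:8080"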

82. How do you debug a crashing container?

To debug a crashing container, start by examining logs with docker logs, then inspect configuration via docker inspect, and if needed, override entrypoints for interactive shells. Theoretically, debugging embodies fault diagnosis in distributed systems, rooted in observability principles like logging, metrics, and tracing (three pillars of observability), ensuring root cause analysis without disrupting production.

Conceptually, crashes often stem from resource exhaustion or misconfigurations, analyzed through state inspection akin to postmortem debugging in OS theory. Advantages: Minimizes downtime; trade-offs: Security risks from interactive access.

# View logs for errors
docker logs --tail 100 mycontainer

# Run with bash to debug
docker run -it --entrypoint /bin/bash myimage

# Attach to running if possible
docker exec -it mycontainer bash

Note: For systematic debugging, integrate tools like gdb if compiled with debug symbols.

83. What tools can be used to monitor Docker containers?

Tools for monitoring Docker containers include Prometheus for metrics, Grafana for visualization, cAdvisor for container-specific metrics, and ELK Stack (Elasticsearch, Logstash, Kibana) for logs. Theoretically, monitoring draws from control theory and telemetry in systems engineering, where agents collect time-series data to model system behavior, enabling anomaly detection via statistical models like Z-scores or ML-based forecasting.

From an academic view, these tools implement the observer pattern, decoupling data collection from analysis. Advantages: Proactive alerting; trade-offs: Overhead from agents, potentially 5-10% CPU.

  • Prometheus: Pull-based, query language (PromQL).
  • cAdvisor: Google-developed, exposes container metrics.
  • Datadog / New Relic: SaaS with Docker integrations.
# Run cAdvisor and expose its UI on port 8080
docker run -d --name=cadvisor -p 8080:8080 --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro google/cadvisor:latest

84. How do you secure a Docker image?

To secure a Docker image, use minimal base images, scan for vulnerabilities, sign with Content Trust, and apply least privilege principles. Theoretically, security is grounded in defense-in-depth and zero-trust models, where images are treated as untrusted artifacts, verified through cryptographic hashes and multi-stage builds to minimize attack surfaces.

Conceptually, it addresses supply chain attacks via SBOM (Software Bill of Materials) and provenance tracking. Advantages: Reduces exploits; trade-offs: Build time increases.

# Enable Content Trust
export DOCKER_CONTENT_TRUST=1

# Scan with Docker Scout
docker scout cves myimage

85. What are best practices for writing Dockerfiles?

Best practices for Dockerfiles include using official minimal bases, multi-stage builds, .dockerignore, non-root users, and caching optimization. Theoretically, these practices stem from build automation theory and immutable infrastructure, ensuring reproducibility and efficiency via layered caching, akin to memoization in functional programming.

Time complexity: Optimal ordering reduces rebuilds to O(1) for unchanged layers. Advantages: Smaller, secure images; trade-offs: Learning curve for optimization.

# Example optimized Dockerfile
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o main .

FROM alpine:3.18
WORKDIR /app
COPY --from=builder /app/main .
USER 1000
CMD ["./main"]
  • Minimize RUN instructions to reduce layers.
  • Pin versions for determinism.

86. Why avoid running containers as root?

Avoid running containers as root to mitigate privilege escalation risks, as root in container equates to host root via shared kernel. Theoretically, this follows the principle of least privilege (PoLP) from access control theory, reducing blast radius in case of breaches, aligned with capability-based security models.

Conceptually, non-root users limit filesystem access, preventing arbitrary writes. Advantages: Enhanced security; trade-offs: Requires USER instruction, potential permission issues.

# Run as non-root
RUN adduser -D myuser
USER myuser

Note: Use the --user flag with docker run for overrides.

87. What is the --security-opt flag used for?

The --security-opt flag configures security options like seccomp profiles, AppArmor, or no-new-privileges. Theoretically, it fine-tunes kernel security modules, drawing from mandatory access control (MAC) theories to restrict syscalls and capabilities, enhancing confinement beyond defaults.

Advantages: Custom hardening; trade-offs: Misconfiguration may break apps.

# Apply seccomp profile
docker run --security-opt seccomp=/path/to/profile.json myimage

# No new privileges
docker run --security-opt no-new-privileges myimage

88. How do you scan Docker images for vulnerabilities?

Scan Docker images using tools like Docker Scout, Clair, or Trivy, which compare packages against CVE databases. Theoretically, scanning implements static analysis in security pipelines, rooted in vulnerability management lifecycle, using graph-based dependency resolution to identify risks.

Advantages: Early detection; trade-offs: False positives require triage.

# Using Docker Scout
docker scout cves myimage:latest --exit-code

89. What is Trivy, and how is it used with Docker?

Trivy is an open-source vulnerability scanner for containers, detecting OS and language-specific issues. Theoretically, it leverages offline databases for fast scans, embodying shift-left security in DevSecOps, with O(n) complexity over packages.

Used in CI/CD for gating builds. Advantages: Comprehensive, no daemon; trade-offs: Database updates needed.

# Install and scan
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
trivy image myimage:latest

90. How do you optimize Docker image size?

Optimize Docker image size using multi-stage builds, minimal bases (e.g., alpine), cleaning caches in RUN, and compressing assets. Theoretically, optimization reduces storage and transfer costs, drawing from data compression theories and efficient packing algorithms, minimizing layers via union FS.

Space complexity drops from GBs to MBs. Advantages: Faster pulls; trade-offs: Debugging harder in minimal envs.

# Optimized example
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
  • Remove unnecessary files post-install.
  • Use .dockerignore to exclude build artifacts.

91. What is the difference between docker run and docker create?

The difference between docker run and docker create lies in their lifecycle management of containers. Theoretically, docker create instantiates a container from an image, configuring it but not starting it, akin to allocating resources in a paused state within container runtime models. This draws from process creation theories in operating systems, where fork() separates creation from execution, allowing pre-configuration without resource consumption.

In contrast, docker run combines creation and starting, executing the container immediately, embodying an atomic operation in state transition diagrams from “created” to “running”. From a computer science perspective, docker create enables deferred execution, useful for setups requiring post-creation modifications, while docker run optimizes for immediate deployment in CI/CD pipelines.

Theoretical advantages of docker create: Flexibility in orchestration, reduced race conditions; trade-offs: Additional steps for starting. Time complexity is O(1) for both, but run incurs immediate runtime overhead.

# Using docker create: Creates but doesn't start
docker create --name mycontainer nginx

# Then start it separately
docker start mycontainer

# Using docker run: Creates and starts in one command
docker run -d --name mycontainer nginx

Note: docker create is ideal for scripting complex setups, like attaching volumes post-creation.

92. How do you use environment variables in Docker?

Environment variables in Docker are used to configure containers dynamically, passing values at runtime without hardcoding. Theoretically, this aligns with twelve-factor app methodology and configuration management theories, separating config from code for portability and scalability in distributed systems.

From a foundational perspective, variables are injected into the container’s process environment, leveraging OS env vars mechanisms, enabling parameterization akin to functional programming closures. Advantages: Enhances security (e.g., secrets), flexibility; trade-offs: Potential leakage if not managed, overhead in large sets.

# Set env var at run time
docker run -e MY_VAR="value" myimage

# Use in Dockerfile for build-time
FROM alpine
ENV APP_ENV=production
CMD ["echo", "$APP_ENV"]

# From file
docker run --env-file .env myimage

Contents of .env:

MY_VAR=value
ANOTHER_VAR=secret

Note: Use --env-file for multiple vars to avoid command-line exposure.

93. What is docker system prune?

docker system prune is a command to remove unused Docker objects like containers, images, networks, and volumes. Theoretically, it implements garbage collection in resource management, drawing from memory management theories where unreferenced objects are reclaimed to free storage, preventing resource exhaustion in long-running systems.

Conceptually, it scans the daemon’s store for dangling resources, using reference counting similar to GC algorithms. Advantages: Automates cleanup, saves disk; trade-offs: Irreversible, potential data loss if volumes are pruned.

# Prune all unused resources
docker system prune -f  # -f to force without prompt

# Prune with filters
docker system prune --filter "until=24h"

Note: Add --volumes to include volumes; use cautiously in production.

94. How do you clean up unused Docker resources?

To clean up unused Docker resources, use targeted prune commands or docker system prune. Theoretically, this follows resource lifecycle management in virtualization, ensuring efficient allocation via periodic reclamation, rooted in OS resource theories like paging and swapping to optimize utilization.

Advantages: Prevents bloat; trade-offs: Requires monitoring to avoid over-pruning. Time complexity O(n) over objects.

# Remove stopped containers
docker container prune -f

# Remove dangling images
docker image prune -f

# Remove unused networks
docker network prune -f

# Remove unused volumes
docker volume prune -f

# All in one
docker system prune -a -f --volumes  # -a for all unused images

Note: Schedule in cron jobs for automated maintenance.

95. What are Docker contexts?

Docker contexts are configurations for managing multiple Docker environments, like local, remote daemons, or Swarm clusters. Theoretically, they embody context switching in multiprocessing, allowing seamless transitions between endpoints, drawing from OS context management for efficiency in multi-environment workflows.

Conceptually, a context stores API endpoints, TLS info, akin to profiles in tools like kubectl. Advantages: Simplifies ops in hybrid setups; trade-offs: Security if creds are mishandled.

# List contexts
docker context ls

# Create a new context
docker context create remote --docker "host=ssh://user@remotehost"

96. How do you switch between Docker contexts?

To switch between Docker contexts, use docker context use <name>. Theoretically, this updates the CLI’s active configuration, similar to environment variables in shells, enabling polymorphic behavior across infrastructures without reconfiguration.

Advantages: Streamlines DevOps; trade-offs: Context mismatches can cause errors.

# Switch to remote context
docker context use remote

# Verify current
docker context show

Note: Export DOCKER_CONTEXT for scripting.

97. What is Docker Desktop?

Docker Desktop is a GUI-based application for managing Docker on macOS/Windows, including a VM for Linux containers. Theoretically, it abstracts hypervisor interactions (Hyper-V, WSL2), providing a unified interface for container lifecycle, rooted in desktop virtualization theories for cross-platform consistency.

Advantages: User-friendly for devs; trade-offs: Resource-heavy, licensing for enterprises.

98. How does Docker integrate with Kubernetes?

Docker integrates with Kubernetes via container runtimes like containerd, where Kubernetes uses Docker’s images and CLI for building/pushing. Theoretically, Kubernetes orchestrates at a higher abstraction, using CRI (Container Runtime Interface) to interface with Docker, drawing from layered architecture in systems design for modularity.

Advantages: Leverages Docker ecosystem; trade-offs: Docker’s deprecation in k8s favors CRI-O/containerd.

# Build and push for k8s
docker build -t myimage .
docker push myrepo/myimage

# In k8s yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myrepo/myimage

99. What container runtimes does Kubernetes support besides Docker?

Besides Docker, Kubernetes supports runtimes like containerd, CRI-O, and gVisor via CRI. Theoretically, these implement low-level execution, focusing on security/isolation, with containerd as a lightweight daemon from Docker’s components, CRI-O optimized for OpenShift.

  • containerd: High-performance, OCI-compliant.
  • CRI-O: Kubernetes-native, minimal.
  • gVisor: Sandboxed for security.

Advantages: Flexibility; trade-offs: Compatibility variations.

100. How would you migrate from Docker Swarm to Kubernetes?

Migrating from Docker Swarm to Kubernetes involves translating Compose files to YAML manifests, setting up a k8s cluster, and redeploying services. Theoretically, this shifts from simple orchestration to declarative, API-driven models, drawing from cluster computing evolutions for better scalability.

Steps: Assess services, use kompose for conversion, handle stateful data with PVs. Advantages: Rich ecosystem; trade-offs: Steeper curve, resource demands.

# Install kompose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
chmod +x kompose && sudo mv ./kompose /usr/local/bin/kompose

# Convert Compose to k8s
kompose convert -f docker-compose.yml

# Apply to k8s
kubectl apply -f .

Note: Manually adjust for Swarm-specific features like secrets.
