Compare commits


No commits in common. "main" and "5.0.1" have entirely different histories.
main ... 5.0.1

48 changed files with 756 additions and 1237 deletions


@@ -95,6 +95,17 @@ stages:
             test: '2.18/sanity/1'
           - name: Units
             test: '2.18/units/1'
+- stage: Ansible_2_17
+  displayName: Sanity & Units 2.17
+  dependsOn: []
+  jobs:
+    - template: templates/matrix.yml
+      parameters:
+        targets:
+          - name: Sanity
+            test: '2.17/sanity/1'
+          - name: Units
+            test: '2.17/units/1'

 ### Docker
 - stage: Docker_devel
@@ -163,6 +174,23 @@ stages:
         groups:
           - 4
           - 5
+- stage: Docker_2_17
+  displayName: Docker 2.17
+  dependsOn: []
+  jobs:
+    - template: templates/matrix.yml
+      parameters:
+        testFormat: 2.17/linux/{0}
+        targets:
+          - name: Fedora 39
+            test: fedora39
+          - name: Ubuntu 20.04
+            test: ubuntu2004
+          - name: Alpine 3.19
+            test: alpine319
+        groups:
+          - 4
+          - 5

 ### Community Docker
 - stage: Docker_community_devel
@@ -257,6 +285,22 @@ stages:
           - 3
           - 4
           - 5
+- stage: Remote_2_17
+  displayName: Remote 2.17
+  dependsOn: []
+  jobs:
+    - template: templates/matrix.yml
+      parameters:
+        testFormat: 2.17/{0}
+        targets:
+          - name: RHEL 9.3
+            test: rhel/9.3
+        groups:
+          - 1
+          - 2
+          - 3
+          - 4
+          - 5

 ## Finally

@@ -267,14 +311,17 @@ stages:
     - Ansible_2_20
     - Ansible_2_19
     - Ansible_2_18
+    - Ansible_2_17
     - Remote_devel
     - Remote_2_20
     - Remote_2_19
     - Remote_2_18
+    - Remote_2_17
     - Docker_devel
     - Docker_2_20
     - Docker_2_19
     - Docker_2_18
+    - Docker_2_17
     - Docker_community_devel
   jobs:
     - template: templates/coverage.yml


@@ -45,7 +45,7 @@ jobs:
     steps:
       - name: Check out repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v5
         with:
           persist-credentials: false


@@ -30,6 +30,6 @@ jobs:
       upload-codecov-pr: false
       upload-codecov-push: false
       upload-codecov-schedule: true
-      max-ansible-core: "2.17"
+      max-ansible-core: "2.16"
     secrets:
       CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -388,8 +388,6 @@ disable=raw-checker-failed,
     unused-argument,
     # Cannot remove yet due to inadequacy of rules
     inconsistent-return-statements,  # doesn't notice that fail_json() does not return
-    # Buggy impementation in pylint:
-    relative-beyond-top-level,  # TODO

 # Enable the message, report, category or checker with the given id(s). You can
 # either give multiple identifier separated by comma (,) or put this option

File diff suppressed because it is too large.


@@ -4,58 +4,6 @@ Docker Community Collection Release Notes

 .. contents:: Topics

-v5.0.4
-======
-
-Release Summary
----------------
-
-Bugfix release.
-
-Bugfixes
---------
-
-- CLI-based modules - when parsing JSON output fails, also provide standard error output. Also provide information on the command and its result in machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216, https://github.com/ansible-collections/community.docker/pull/1221).
-- docker_compose_v2, docker_compose_v2_pull - adjust parsing from image pull events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
-
-v5.0.3
-======
-
-Release Summary
----------------
-
-Bugfix release.
-
-Bugfixes
---------
-
-- docker_container - when the same port is mapped more than once for the same protocol without specifying an interface, a bug caused an invalid value to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213, https://github.com/ansible-collections/community.docker/pull/1214).
-
-v5.0.2
-======
-
-Release Summary
----------------
-
-Bugfix release for Docker 29.
-
-Bugfixes
---------
-
-- Docker CLI based modules - work around bug in Docker 29.0.0 that caused a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185, https://github.com/ansible-collections/community.docker/pull/1187).
-- docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-- docker_container - fix handling of exposed port ranges. So far, the module used an undocumented feature of Docker that was removed from Docker 29.0.0, that allowed to pass the range to the deamon and let handle it. Now the module explodes ranges into a list of all contained ports, same as the Docker CLI does. For backwards compatibility with Docker < 29.0.0, it also explodes ranges returned by the API for existing containers so that comparison should only indicate a difference if the ranges actually change (https://github.com/ansible-collections/community.docker/pull/1192).
-- docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-- docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-- docker_image, docker_image_push - adjust image push detection to Docker 29 (https://github.com/ansible-collections/community.docker/pull/1199).
-- docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-- docker_network - fix idempotency for IPv6 addresses and networks with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
-
-Known Issues
-------------
-
-- docker_image, docker_image_export - idempotency for archiving images depends on whether the image IDs used by the image storage backend correspond to the IDs used in the tarball's ``manifest.json`` files. The new default backend in Docker 29 apparently uses image IDs that no longer correspond, whence idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
-
 v5.0.1
 ======
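The 5.0.2 entry above describes exploding exposed port ranges into a list of all contained ports, the same way the Docker CLI does. A minimal standalone sketch of that idea (the function name and exact semantics are illustrative, not the collection's actual code):

```python
def explode_port_range(spec: str) -> list[str]:
    """Expand a port spec like '8000-8002/tcp' into individual 'port/protocol' entries.

    Hypothetical helper mirroring the behavior described in the changelog;
    a single port such as '53/udp' or '80' passes through unchanged.
    """
    port, _, protocol = spec.partition("/")
    protocol = protocol or "tcp"  # Docker defaults to TCP when no protocol is given
    if "-" in port:
        start, end = (int(p) for p in port.split("-", 1))
        if start > end:
            raise ValueError("start port must be smaller or equal to end port.")
        return [f"{p}/{protocol}" for p in range(start, end + 1)]
    return [f"{int(port)}/{protocol}"]
```

Exploding on both sides of the comparison keeps idempotency checks stable across daemons that do or do not expand ranges themselves.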


@@ -2282,66 +2282,3 @@ releases:
       - 5.0.1.yml
       - typing.yml
     release_date: '2025-11-09'
-  5.0.2:
-    changes:
-      bugfixes:
-        - Docker CLI based modules - work around bug in Docker 29.0.0 that caused
-          a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185,
-          https://github.com/ansible-collections/community.docker/pull/1187).
-        - docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-        - docker_container - fix handling of exposed port ranges. So far, the module
-          used an undocumented feature of Docker that was removed from Docker 29.0.0,
-          that allowed to pass the range to the deamon and let handle it. Now the
-          module explodes ranges into a list of all contained ports, same as the Docker
-          CLI does. For backwards compatibility with Docker < 29.0.0, it also explodes
-          ranges returned by the API for existing containers so that comparison should
-          only indicate a difference if the ranges actually change (https://github.com/ansible-collections/community.docker/pull/1192).
-        - docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0
-          (https://github.com/ansible-collections/community.docker/pull/1192).
-        - docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-        - docker_image, docker_image_push - adjust image push detection to Docker
-          29 (https://github.com/ansible-collections/community.docker/pull/1199).
-        - docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
-        - docker_network - fix idempotency for IPv6 addresses and networks with Docker
-          29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
-      known_issues:
-        - docker_image, docker_image_export - idempotency for archiving images depends
-          on whether the image IDs used by the image storage backend correspond to
-          the IDs used in the tarball's ``manifest.json`` files. The new default backend
-          in Docker 29 apparently uses image IDs that no longer correspond, whence
-          idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
-      release_summary: Bugfix release for Docker 29.
-    fragments:
-      - 1187-docker.yml
-      - 1192-docker_container.yml
-      - 1199-docker_image-push.yml
-      - 1201-docker_network.yml
-      - 5.0.2.yml
-    release_date: '2025-11-16'
-  5.0.3:
-    changes:
-      bugfixes:
-        - docker_container - when the same port is mapped more than once for the same
-          protocol without specifying an interface, a bug caused an invalid value
-          to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213,
-          https://github.com/ansible-collections/community.docker/pull/1214).
-      release_summary: Bugfix release.
-    fragments:
-      - 1214-docker_container-ports.yml
-      - 5.0.3.yml
-    release_date: '2025-11-29'
-  5.0.4:
-    changes:
-      bugfixes:
-        - CLI-based modules - when parsing JSON output fails, also provide standard
-          error output. Also provide information on the command and its result in
-          machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216,
-          https://github.com/ansible-collections/community.docker/pull/1221).
-        - docker_compose_v2, docker_compose_v2_pull - adjust parsing from image pull
-          events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
-      release_summary: Bugfix release.
-    fragments:
-      - 1219-compose-v2-pull.yml
-      - 1221-cli-json-errors.yml
-      - 5.0.4.yml
-    release_date: '2025-12-06'


@@ -7,7 +7,7 @@
 namespace: community
 name: docker
-version: 5.1.0
+version: 5.0.1
 readme: README.md
 authors:
   - Ansible Docker Working Group


@@ -266,9 +266,7 @@ class Connection(ConnectionBase):
             if not isinstance(val, str):
                 raise AnsibleConnectionFailure(
                     f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
-                    "wrapped in quotes to avoid them being interpreted when directly specified "
-                    "in YAML, or explicitly converted to strings when the option is templated. "
-                    f"{what}: {val!r}"
+                    f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
                 )
             local_cmd += [
                 b"-e",


@@ -282,11 +282,11 @@ class Connection(ConnectionBase):
             if not isinstance(val, str):
                 raise AnsibleConnectionFailure(
                     f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
-                    "wrapped in quotes to avoid them being interpreted when directly specified "
-                    "in YAML, or explicitly converted to strings when the option is templated. "
-                    f"{what}: {val!r}"
+                    f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
                 )
-            data["Env"].append(f"{k}={v}")
+            kk = to_text(k, errors="surrogate_or_strict")
+            vv = to_text(v, errors="surrogate_or_strict")
+            data["Env"].append(f"{kk}={vv}")

         if self.get_option("working_dir") is not None:
             data["WorkingDir"] = self.get_option("working_dir")


@@ -43,8 +43,10 @@ docker_version: str | None  # pylint: disable=invalid-name
 try:
     from docker import __version__ as docker_version
-    from docker.errors import APIError, TLSParameterError
+    from docker import auth
+    from docker.errors import APIError, NotFound, TLSParameterError
     from docker.tls import TLSConfig
+    from requests.exceptions import SSLError

     if LooseVersion(docker_version) >= LooseVersion("3.0.0"):
         HAS_DOCKER_PY_3 = True  # pylint: disable=invalid-name
@@ -389,6 +391,242 @@ class AnsibleDockerClientBase(Client):
             )
         self.fail(f"SSL Exception: {error}")

+    def get_container_by_id(self, container_id: str) -> dict[str, t.Any] | None:
+        try:
+            self.log(f"Inspecting container Id {container_id}")
+            result = self.inspect_container(container=container_id)
+            self.log("Completed container inspection")
+            return result
+        except NotFound:
+            return None
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            self.fail(f"Error inspecting container: {exc}")
+
+    def get_container(self, name: str | None) -> dict[str, t.Any] | None:
+        """
+        Lookup a container and return the inspection results.
+        """
+        if name is None:
+            return None
+
+        search_name = name
+        if not name.startswith("/"):
+            search_name = "/" + name
+
+        result = None
+        try:
+            for container in self.containers(all=True):
+                self.log(f"testing container: {container['Names']}")
+                if (
+                    isinstance(container["Names"], list)
+                    and search_name in container["Names"]
+                ):
+                    result = container
+                    break
+                if container["Id"].startswith(name):
+                    result = container
+                    break
+                if container["Id"] == name:
+                    result = container
+                    break
+        except SSLError as exc:
+            self._handle_ssl_error(exc)
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            self.fail(f"Error retrieving container list: {exc}")
+
+        if result is None:
+            return None
+
+        return self.get_container_by_id(result["Id"])
+
+    def get_network(
+        self, name: str | None = None, network_id: str | None = None
+    ) -> dict[str, t.Any] | None:
+        """
+        Lookup a network and return the inspection results.
+        """
+        if name is None and network_id is None:
+            return None
+
+        result = None
+        if network_id is None:
+            try:
+                for network in self.networks():
+                    self.log(f"testing network: {network['Name']}")
+                    if name == network["Name"]:
+                        result = network
+                        break
+                    if network["Id"].startswith(name):
+                        result = network
+                        break
+            except SSLError as exc:
+                self._handle_ssl_error(exc)
+            except Exception as exc:  # pylint: disable=broad-exception-caught
+                self.fail(f"Error retrieving network list: {exc}")
+
+        if result is not None:
+            network_id = result["Id"]
+
+        if network_id is not None:
+            try:
+                self.log(f"Inspecting network Id {network_id}")
+                result = self.inspect_network(network_id)
+                self.log("Completed network inspection")
+            except NotFound:
+                return None
+            except Exception as exc:  # pylint: disable=broad-exception-caught
+                self.fail(f"Error inspecting network: {exc}")
+
+        return result
+
+    def find_image(self, name: str, tag: str) -> dict[str, t.Any] | None:
+        """
+        Lookup an image (by name and tag) and return the inspection results.
+        """
+        if not name:
+            return None
+
+        self.log(f"Find image {name}:{tag}")
+        images = self._image_lookup(name, tag)
+        if not images:
+            # In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
+            registry, repo_name = auth.resolve_repository_name(name)
+            if registry == "docker.io":
+                # If docker.io is explicitly there in name, the image
+                # is not found in some cases (#41509)
+                self.log(f"Check for docker.io image: {repo_name}")
+                images = self._image_lookup(repo_name, tag)
+                if not images and repo_name.startswith("library/"):
+                    # Sometimes library/xxx images are not found
+                    lookup = repo_name[len("library/") :]
+                    self.log(f"Check for docker.io image: {lookup}")
+                    images = self._image_lookup(lookup, tag)
+                if not images:
+                    # Last case for some Docker versions: if docker.io was not there,
+                    # it can be that the image was not found either
+                    # (https://github.com/ansible/ansible/pull/15586)
+                    lookup = f"{registry}/{repo_name}"
+                    self.log(f"Check for docker.io image: {lookup}")
+                    images = self._image_lookup(lookup, tag)
+                if not images and "/" not in repo_name:
+                    # This seems to be happening with podman-docker
+                    # (https://github.com/ansible-collections/community.docker/issues/291)
+                    lookup = f"{registry}/library/{repo_name}"
+                    self.log(f"Check for docker.io image: {lookup}")
+                    images = self._image_lookup(lookup, tag)
+
+        if len(images) > 1:
+            self.fail(f"Daemon returned more than one result for {name}:{tag}")
+
+        if len(images) == 1:
+            try:
+                inspection = self.inspect_image(images[0]["Id"])
+            except NotFound:
+                self.log(f"Image {name}:{tag} not found.")
+                return None
+            except Exception as exc:  # pylint: disable=broad-exception-caught
+                self.fail(f"Error inspecting image {name}:{tag} - {exc}")
+            return inspection
+
+        self.log(f"Image {name}:{tag} not found.")
+        return None
+
+    def find_image_by_id(
+        self, image_id: str, accept_missing_image: bool = False
+    ) -> dict[str, t.Any] | None:
+        """
+        Lookup an image (by ID) and return the inspection results.
+        """
+        if not image_id:
+            return None
+
+        self.log(f"Find image {image_id} (by ID)")
+        try:
+            inspection = self.inspect_image(image_id)
+        except NotFound as exc:
+            if not accept_missing_image:
+                self.fail(f"Error inspecting image ID {image_id} - {exc}")
+            self.log(f"Image {image_id} not found.")
+            return None
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            self.fail(f"Error inspecting image ID {image_id} - {exc}")
+        return inspection
+
+    def _image_lookup(self, name: str, tag: str) -> list[dict[str, t.Any]]:
+        """
+        Including a tag in the name parameter sent to the Docker SDK for Python images method
+        does not work consistently. Instead, get the result set for name and manually check
+        if the tag exists.
+        """
+        try:
+            response = self.images(name=name)
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            self.fail(f"Error searching for image {name} - {exc}")
+        images = response
+        if tag:
+            lookup = f"{name}:{tag}"
+            lookup_digest = f"{name}@{tag}"
+            images = []
+            for image in response:
+                tags = image.get("RepoTags")
+                digests = image.get("RepoDigests")
+                if (tags and lookup in tags) or (digests and lookup_digest in digests):
+                    images = [image]
+                    break
+        return images
+
+    def pull_image(
+        self, name: str, tag: str = "latest", image_platform: str | None = None
+    ) -> tuple[dict[str, t.Any] | None, bool]:
+        """
+        Pull an image
+        """
+        kwargs = {
+            "tag": tag,
+            "stream": True,
+            "decode": True,
+        }
+        if image_platform is not None:
+            kwargs["platform"] = image_platform
+        self.log(f"Pulling image {name}:{tag}")
+        old_tag = self.find_image(name, tag)
+        try:
+            for line in self.pull(name, **kwargs):
+                self.log(line, pretty_print=True)
+                if line.get("error"):
+                    if line.get("errorDetail"):
+                        error_detail = line.get("errorDetail")
+                        self.fail(
+                            f"Error pulling {name} - code: {error_detail.get('code')} message: {error_detail.get('message')}"
+                        )
+                    else:
+                        self.fail(f"Error pulling {name} - {line.get('error')}")
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            self.fail(f"Error pulling image {name}:{tag} - {exc}")
+        new_tag = self.find_image(name, tag)
+        return new_tag, old_tag == new_tag
+
+    def inspect_distribution(self, image: str, **kwargs: t.Any) -> dict[str, t.Any]:
+        """
+        Get image digest by directly calling the Docker API when running Docker SDK < 4.0.0
+        since prior versions did not support accessing private repositories.
+        """
+        if self.docker_py_version < LooseVersion("4.0.0"):
+            registry = auth.resolve_repository_name(image)[0]
+            header = auth.get_config_header(self, registry)
+            if header:
+                return self._result(
+                    self._get(
+                        self._url("/distribution/{0}/json", image),
+                        headers={"X-Registry-Auth": header},
+                    ),
+                    json=True,
+                )
+        return super().inspect_distribution(image, **kwargs)
+
 class AnsibleDockerClient(AnsibleDockerClientBase):
     def __init__(


@@ -519,17 +519,6 @@ class AnsibleDockerClientBase(Client):
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(f"Error inspecting image ID {image_id} - {exc}")

-    @staticmethod
-    def _compare_images(
-        img1: dict[str, t.Any] | None, img2: dict[str, t.Any] | None
-    ) -> bool:
-        if img1 is None or img2 is None:
-            return img1 == img2
-        filter_keys = {"Metadata"}
-        img1_filtered = {k: v for k, v in img1.items() if k not in filter_keys}
-        img2_filtered = {k: v for k, v in img2.items() if k not in filter_keys}
-        return img1_filtered == img2_filtered
-
     def pull_image(
         self, name: str, tag: str = "latest", image_platform: str | None = None
     ) -> tuple[dict[str, t.Any] | None, bool]:
@@ -537,7 +526,7 @@ class AnsibleDockerClientBase(Client):
         Pull an image
         """
         self.log(f"Pulling image {name}:{tag}")
-        old_image = self.find_image(name, tag)
+        old_tag = self.find_image(name, tag)
         try:
             repository, image_tag = parse_repository_tag(name)
             registry, dummy_repo_name = auth.resolve_repository_name(repository)
@@ -574,9 +563,9 @@ class AnsibleDockerClientBase(Client):
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(f"Error pulling image {name}:{tag} - {exc}")

-        new_image = self.find_image(name, tag)
-        return new_image, self._compare_images(old_image, new_image)
+        new_tag = self.find_image(name, tag)
+        return new_tag, old_tag == new_tag

 class AnsibleDockerClient(AnsibleDockerClientBase):
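The `_compare_images` helper removed in this hunk compares two `inspect_image` results while ignoring the volatile `Metadata` key (which carries fields like the last tag time that change without the image content changing). A standalone sketch of the same comparison, with a hypothetical name rather than the collection's API:

```python
from typing import Any, Optional


def compare_images(
    img1: Optional[dict[str, Any]],
    img2: Optional[dict[str, Any]],
    ignore_keys: frozenset[str] = frozenset({"Metadata"}),
) -> bool:
    """Return True if two image inspection results are equal, ignoring volatile keys."""
    if img1 is None or img2 is None:
        # A missing image only equals another missing image.
        return img1 == img2
    f1 = {k: v for k, v in img1.items() if k not in ignore_keys}
    f2 = {k: v for k, v in img2.items() if k not in ignore_keys}
    return f1 == f2
```

Filtering `Metadata` out prevents a pull from being reported as "changed" just because the daemon updated a timestamp on an otherwise identical image.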


@@ -126,16 +126,13 @@ class AnsibleDockerClientBase:
         self._info: dict[str, t.Any] | None = None

         if needs_api_version:
-            api_version_string = self._version["Server"].get(
-                "ApiVersion"
-            ) or self._version["Server"].get("APIVersion")
             if not isinstance(self._version.get("Server"), dict) or not isinstance(
-                api_version_string, str
+                self._version["Server"].get("ApiVersion"), str
             ):
                 self.fail(
                     "Cannot determine Docker Daemon information. Are you maybe using podman instead of docker?"
                 )
-            self.docker_api_version_str = to_text(api_version_string)
+            self.docker_api_version_str = to_text(self._version["Server"]["ApiVersion"])
             self.docker_api_version = LooseVersion(self.docker_api_version_str)
             min_docker_api_version = min_docker_api_version or "1.25"
             if self.docker_api_version < LooseVersion(min_docker_api_version):
@@ -197,11 +194,7 @@ class AnsibleDockerClientBase:
             data = json.loads(stdout)
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(
-                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
-                cmd=self._compose_cmd_str(args),
-                rc=rc,
-                stdout=stdout,
-                stderr=stderr,
+                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
             )
         return rc, data, stderr
@@ -227,11 +220,7 @@ class AnsibleDockerClientBase:
                 result.append(json.loads(line))
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(
-                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
-                cmd=self._compose_cmd_str(args),
-                rc=rc,
-                stdout=stdout,
-                stderr=stderr,
+                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
            )
        return rc, result, stderr
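The hunks above concern parsing the JSON-lines output that `docker compose` emits (one JSON document per line), and how much context a parse failure reports. A standalone sketch of that parsing pattern, using a hypothetical helper rather than the collection's API:

```python
import json


def parse_json_lines(stdout: str) -> list[dict]:
    """Parse newline-delimited JSON, reporting the offending line on failure.

    Illustrative only: `docker compose --progress json` style output is one
    JSON object per line; blank lines are skipped.
    """
    result = []
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            result.append(json.loads(line))
        except ValueError as exc:
            # Include the unparseable output in the error, as the richer
            # (main-branch) error message above does.
            raise RuntimeError(
                f"Error while parsing JSON output: {exc}\nJSON output: {line}"
            ) from exc
    return result
```

Carrying the raw output (and, in the real module, the command, return code, and stderr) in the failure message makes parse errors diagnosable without re-running the command.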


@@ -132,7 +132,7 @@ DOCKER_PULL_PROGRESS_DONE = frozenset(
         "Pull complete",
     )
 )

-DOCKER_PULL_PROGRESS_WORKING_OLD = frozenset(
+DOCKER_PULL_PROGRESS_WORKING = frozenset(
     (
         "Pulling fs layer",
         "Waiting",
@@ -141,7 +141,6 @@ DOCKER_PULL_PROGRESS_WORKING_OLD = frozenset(
         "Extracting",
     )
 )
-DOCKER_PULL_PROGRESS_WORKING = frozenset(DOCKER_PULL_PROGRESS_WORKING_OLD | {"Working"})


 class ResourceType:
@@ -192,7 +191,7 @@ _RE_PULL_EVENT = re.compile(
 )

 _DOCKER_PULL_PROGRESS_WD = sorted(
-    DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING_OLD
+    DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING
 )

 _RE_PULL_PROGRESS = re.compile(
@@ -495,17 +494,7 @@ def parse_json_events(
             # {"dry-run":true,"id":"ansible-docker-test-dc713f1f-container ==> ==>","text":"naming to ansible-docker-test-dc713f1f-image"}
             # (The longer form happens since Docker Compose 2.39.0)
             continue
-        if (
-            status in ("Working", "Done")
-            and isinstance(line_data.get("parent_id"), str)
-            and line_data["parent_id"].startswith("Image ")
-        ):
-            # Compose 5.0.0+:
-            # {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}
-            # {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}
-            resource_type = ResourceType.IMAGE_LAYER
-            resource_id = line_data["parent_id"][len("Image ") :]
-        elif isinstance(resource_id, str) and " " in resource_id:
+        if isinstance(resource_id, str) and " " in resource_id:
             resource_type_str, resource_id = resource_id.split(" ", 1)
             try:
                 resource_type = ResourceType.from_docker_compose_event(
@@ -524,7 +513,7 @@ def parse_json_events(
             status, text = text, status
         elif (
             text in DOCKER_PULL_PROGRESS_DONE
-            or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING_OLD
+            or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING
         ):
             resource_type = ResourceType.IMAGE_LAYER
             status, text = text, status


@@ -659,9 +659,7 @@ def _preprocess_env(
         if not isinstance(value, str):
             module.fail_json(
                 msg="Non-string value found for env option. Ambiguous env options must be "
-                "wrapped in quotes to avoid them being interpreted when directly specified "
-                "in YAML, or explicitly converted to strings when the option is templated. "
-                f"Key: {name}"
+                f"wrapped in quotes to avoid them being interpreted. Key: {name}"
             )
         final_env[name] = to_text(value, errors="surrogate_or_strict")
     formatted_env = []
@@ -949,8 +947,7 @@ def _preprocess_log(
             value = to_text(v, errors="surrogate_or_strict")
             module.warn(
                 f"Non-string value found for log_options option '{k}'. The value is automatically converted to {value!r}. "
-                "If this is not correct, or you want to avoid such warnings, please quote the value,"
-                " or explicitly convert the values to strings when templating them."
+                "If this is not correct, or you want to avoid such warnings, please quote the value."
             )
             v = value
         options[k] = v
@@ -1019,7 +1016,7 @@ def _preprocess_ports(
             else:
                 port_binds = len(container_ports) * [(ipaddr,)]
         else:
-            module.fail_json(
+            return module.fail_json(
                 msg=f'Invalid port description "{port}" - expected 1 to 3 colon-separated parts, but got {p_len}. '
                 "Maybe you forgot to use square brackets ([...]) around an IPv6 address?"
             )
@@ -1040,43 +1037,38 @@ def _preprocess_ports(
             binds[idx] = bind
         values["published_ports"] = binds

-    exposed: set[tuple[int, str]] = set()
+    exposed = []
    if "exposed_ports" in values:
        for port in values["exposed_ports"]:
            port = to_text(port, errors="surrogate_or_strict").strip()
            protocol = "tcp"
-            parts = port.split("/", maxsplit=1)
-            if len(parts) == 2:
-                port, protocol = parts
-            parts = port.split("-", maxsplit=1)
-            if len(parts) < 2:
-                try:
-                    exposed.add((int(port), protocol))
-                except ValueError as e:
-                    module.fail_json(msg=f"Cannot parse port {port!r}: {e}")
-            else:
-                try:
-                    start_port = int(parts[0])
-                    end_port = int(parts[1])
-                    if start_port > end_port:
-                        raise ValueError(
-                            "start port must be smaller or equal to end port."
-                        )
-                except ValueError as e:
-                    module.fail_json(msg=f"Cannot parse port range {port!r}: {e}")
-                for port in range(start_port, end_port + 1):
-                    exposed.add((port, protocol))
+            matcher = re.search(r"(/.+$)", port)
+            if matcher:
+                protocol = matcher.group(1).replace("/", "")
+                port = re.sub(r"/.+$", "", port)
+            exposed.append((port, protocol))

    if "published_ports" in values:
        # Any published port should also be exposed
        for publish_port in values["published_ports"]:
+            match = False
            if isinstance(publish_port, str) and "/" in publish_port:
                port, protocol = publish_port.split("/")
                port = int(port)
            else:
                protocol = "tcp"
                port = int(publish_port)
-            exposed.add((port, protocol))
-    values["ports"] = sorted(exposed)
+            for exposed_port in exposed:
+                if exposed_port[1] != protocol:
+                    continue
+                if isinstance(exposed_port[0], str) and "-" in exposed_port[0]:
+                    start_port, end_port = exposed_port[0].split("-")
+                    if int(start_port) <= port <= int(end_port):
+                        match = True
+                elif exposed_port[0] == port:
+                    match = True
+            if not match:
+                exposed.append((port, protocol))
+    values["ports"] = exposed

    return values
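The 5.0.1-side code above matches each published port against exposed entries that may still be unexploded range strings like `"8000-8005"`. A standalone sketch of that matching logic (hypothetical helper, not the collection's API):

```python
def port_covered(
    exposed: list[tuple[str, str]], port: int, protocol: str = "tcp"
) -> bool:
    """Return True if a published port is already covered by an exposed entry.

    `exposed` holds (port_spec, protocol) tuples where port_spec is a string,
    either a single port ("53") or a range ("8000-8005"), mirroring the
    pre-explosion representation used in the old code path.
    """
    for spec, proto in exposed:
        if proto != protocol:
            continue  # e.g. an exposed UDP port does not cover a published TCP port
        if "-" in spec:
            start_port, end_port = spec.split("-", 1)
            if int(start_port) <= port <= int(end_port):
                return True
        elif int(spec) == port:
            return True
    return False
```

The 5.0.2 fix replaced this range-aware matching with explicit explosion of ranges, so both sides of the comparison work with individual ports.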


@ -29,7 +29,6 @@ from ansible_collections.community.docker.plugins.module_utils._common_api impor
RequestException, RequestException,
) )
from ansible_collections.community.docker.plugins.module_utils._module_container.base import ( from ansible_collections.community.docker.plugins.module_utils._module_container.base import (
_DEFAULT_IP_REPLACEMENT_STRING,
OPTION_AUTO_REMOVE, OPTION_AUTO_REMOVE,
OPTION_BLKIO_WEIGHT, OPTION_BLKIO_WEIGHT,
OPTION_CAP_DROP, OPTION_CAP_DROP,
@ -128,6 +127,11 @@ if t.TYPE_CHECKING:
Sentry = object Sentry = object
_DEFAULT_IP_REPLACEMENT_STRING = (
"[[DEFAULT_IP:iewahhaeB4Sae6Aen8IeShairoh4zeph7xaekoh8Geingunaesaeweiy3ooleiwi]]"
)
_SENTRY: Sentry = object() _SENTRY: Sentry = object()
@@ -1966,20 +1970,10 @@ def _get_values_ports(
     config = container["Config"]
     # "ExposedPorts": null returns None type & causes AttributeError - PR #5517
-    expected_exposed: list[str] = []
     if config.get("ExposedPorts") is not None:
-        for port_and_protocol in config.get("ExposedPorts", {}):
-            port, protocol = _normalize_port(port_and_protocol).rsplit("/")
-            try:
-                start, end = port.split("-", 1)
-                start_port = int(start)
-                end_port = int(end)
-                for port_no in range(start_port, end_port + 1):
-                    expected_exposed.append(f"{port_no}/{protocol}")
-                continue
-            except ValueError:
-                # Either it is not a range, or a broken one - in both cases, simply add the original form
-                expected_exposed.append(f"{port}/{protocol}")
+        expected_exposed = [_normalize_port(p) for p in config.get("ExposedPorts", {})]
+    else:
+        expected_exposed = []
     return {
         "published_ports": host_config.get("PortBindings"),
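The left-hand (main) side of the hunk above unfolds an exposed port range such as `"9010-9012/tcp"` into individual entries so it can be compared against explicitly listed ports. A minimal sketch of that expansion (`expand_exposed` is an illustrative name, not the collection's actual helper):

```python
def expand_exposed(port_and_protocol):
    """Expand "9010-9012/tcp" into ["9010/tcp", "9011/tcp", "9012/tcp"]."""
    port, protocol = port_and_protocol.rsplit("/", 1)
    try:
        start, end = port.split("-", 1)
        return [f"{port_no}/{protocol}" for port_no in range(int(start), int(end) + 1)]
    except ValueError:
        # Not a range (or a broken one): keep the original form, as the diff does
        return [f"{port}/{protocol}"]


print(expand_exposed("9010-9012/tcp"))  # → ['9010/tcp', '9011/tcp', '9012/tcp']
print(expand_exposed("80/tcp"))         # → ['80/tcp']
```

The `except ValueError` branch catches both the unpacking failure for non-ranges and broken numeric bounds, mirroring the fallback comment in the hunk.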
@@ -2033,14 +2027,17 @@ def _get_expected_values_ports(
         ]
         expected_values["published_ports"] = expected_bound_ports
-    image_ports: set[str] = set()
+    image_ports = []
     if image:
         image_exposed_ports = image["Config"].get("ExposedPorts") or {}
-        image_ports = {_normalize_port(p) for p in image_exposed_ports}
+        image_ports = [_normalize_port(p) for p in image_exposed_ports]
-    param_ports: set[str] = set()
+    param_ports = []
     if "ports" in values:
-        param_ports = {f"{p[0]}/{p[1]}" for p in values["ports"]}
-    result = sorted(image_ports | param_ports)
+        param_ports = [
+            to_text(p[0], errors="surrogate_or_strict") + "/" + p[1]
+            for p in values["ports"]
+        ]
+    result = list(set(image_ports + param_ports))
     expected_values["exposed_ports"] = result
     if "publish_all_ports" in values:
@@ -2089,26 +2086,16 @@ def _preprocess_value_ports(
     if "published_ports" not in values:
         return values
     found = False
-    for port_specs in values["published_ports"].values():
-        if not isinstance(port_specs, list):
-            port_specs = [port_specs]
-        for port_spec in port_specs:
-            if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
-                found = True
-                break
+    for port_spec in values["published_ports"].values():
+        if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
+            found = True
+            break
     if not found:
         return values
     default_ip = _get_default_host_ip(module, client)
-    for port, port_specs in values["published_ports"].items():
-        if isinstance(port_specs, list):
-            for index, port_spec in enumerate(port_specs):
-                if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
-                    port_specs[index] = tuple([default_ip] + list(port_spec[1:]))
-        else:
-            if port_specs[0] == _DEFAULT_IP_REPLACEMENT_STRING:
-                values["published_ports"][port] = tuple(
-                    [default_ip] + list(port_specs[1:])
-                )
+    for port, port_spec in values["published_ports"].items():
+        if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
+            values["published_ports"][port] = tuple([default_ip] + list(port_spec[1:]))
     return values
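The hunk above shows the sentinel pattern both sides use: a magic placeholder string stands in for the daemon's default host IP until the real address is known, and the main side additionally handles ports bound multiple times (a list of bindings per port). A hedged sketch under assumed data shapes — the short sentinel value and `replace_default_ip` are illustrative, not the module's real constant or API:

```python
# Illustrative stand-in for the much longer _DEFAULT_IP_REPLACEMENT_STRING in the diff
_DEFAULT_IP_REPLACEMENT_STRING = "[[DEFAULT_IP]]"


def replace_default_ip(published_ports, default_ip):
    """Swap the placeholder IP in single or multi-binding port specs."""
    result = {}
    for port, port_specs in published_ports.items():
        if isinstance(port_specs, list):
            # main-side behavior: a port may carry several (ip, host_port) bindings
            result[port] = [
                tuple([default_ip] + list(spec[1:]))
                if spec[0] == _DEFAULT_IP_REPLACEMENT_STRING
                else spec
                for spec in port_specs
            ]
        elif port_specs[0] == _DEFAULT_IP_REPLACEMENT_STRING:
            result[port] = tuple([default_ip] + list(port_specs[1:]))
        else:
            result[port] = port_specs
    return result
```

Handling the list case is precisely what lets the main branch keep duplicate `published_ports` entries (the integration tests removed further down exercise exactly that).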

View File

@@ -25,7 +25,6 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
     DockerBaseClass,
     compare_generic,
     is_image_name_id,
-    normalize_ip_address,
     sanitize_result,
 )
@@ -926,13 +925,13 @@ class ContainerManager(DockerBaseClass, t.Generic[Client]):
                 else:
                     diff = False
                     network_info_ipam = network_info.get("IPAMConfig") or {}
-                    if network.get("ipv4_address") and normalize_ip_address(
-                        network["ipv4_address"]
-                    ) != normalize_ip_address(network_info_ipam.get("IPv4Address")):
+                    if network.get("ipv4_address") and network[
+                        "ipv4_address"
+                    ] != network_info_ipam.get("IPv4Address"):
                         diff = True
-                    if network.get("ipv6_address") and normalize_ip_address(
-                        network["ipv6_address"]
-                    ) != normalize_ip_address(network_info_ipam.get("IPv6Address")):
+                    if network.get("ipv6_address") and network[
+                        "ipv6_address"
+                    ] != network_info_ipam.get("IPv6Address"):
                         diff = True
                     if network.get("aliases") and not compare_generic(
                         network["aliases"],

View File

@@ -7,7 +7,6 @@
 from __future__ import annotations
-import ipaddress
 import json
 import re
 import typing as t
@@ -506,47 +505,3 @@ def omit_none_from_dict(d: dict[str, t.Any]) -> dict[str, t.Any]:
     Return a copy of the dictionary with all keys with value None omitted.
     """
     return {k: v for (k, v) in d.items() if v is not None}
-@t.overload
-def normalize_ip_address(ip_address: str) -> str: ...
-@t.overload
-def normalize_ip_address(ip_address: str | None) -> str | None: ...
-def normalize_ip_address(ip_address: str | None) -> str | None:
-    """
-    Given an IP address as a string, normalize it so that it can be
-    used to compare IP addresses as strings.
-    """
-    if ip_address is None:
-        return None
-    try:
-        return ipaddress.ip_address(ip_address).compressed
-    except ValueError:
-        # Fallback for invalid addresses: simply return the input
-        return ip_address
-@t.overload
-def normalize_ip_network(network: str) -> str: ...
-@t.overload
-def normalize_ip_network(network: str | None) -> str | None: ...
-def normalize_ip_network(network: str | None) -> str | None:
-    """
-    Given a network in CIDR notation as a string, normalize it so that it can be
-    used to compare networks as strings.
-    """
-    if network is None:
-        return None
-    try:
-        return ipaddress.ip_network(network).compressed
-    except ValueError:
-        # Fallback for invalid networks: simply return the input
-        return network
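The removed helpers above exist because textually different IP strings can denote the same address, so comparisons against the daemon's reported values should go through the standard `ipaddress` module. A self-contained re-sketch of `normalize_ip_address` from the hunk:

```python
import ipaddress


def normalize_ip_address(ip_address):
    """Canonicalize an IP address string for comparison; pass invalid input through."""
    if ip_address is None:
        return None
    try:
        # .compressed gives the shortest canonical form, e.g. "::1" for IPv6
        return ipaddress.ip_address(ip_address).compressed
    except ValueError:
        # Fallback for invalid addresses: simply return the input
        return ip_address


print(normalize_ip_address("0:0:0:0:0:0:0:1"))  # → ::1
print(normalize_ip_address("not-an-ip"))        # → not-an-ip
```

Without this, an `ipv4_address`/`ipv6_address` written in a non-canonical form would spuriously diff against the address Docker reports, causing needless container or network recreation.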

View File

@@ -210,14 +210,13 @@ class ExecManager(BaseComposeManager):
                 self.stdin += "\n"
         if self.env is not None:
-            for name, value in self.env.items():
+            for name, value in list(self.env.items()):
                 if not isinstance(value, str):
                     self.fail(
                         "Non-string value found for env option. Ambiguous env options must be "
-                        "wrapped in quotes to avoid them being interpreted when directly specified "
-                        "in YAML, or explicitly converted to strings when the option is templated. "
-                        f"Key: {name}"
+                        f"wrapped in quotes to avoid them being interpreted. Key: {name}"
                     )
+                self.env[name] = to_text(value, errors="surrogate_or_strict")
     def get_exec_cmd(self, dry_run: bool) -> list[str]:
         args = self.get_base_args(plain_progress=True) + ["exec"]

View File

@@ -296,14 +296,13 @@ class ExecManager(BaseComposeManager):
                 self.stdin += "\n"
         if self.env is not None:
-            for name, value in self.env.items():
+            for name, value in list(self.env.items()):
                 if not isinstance(value, str):
                     self.fail(
                         "Non-string value found for env option. Ambiguous env options must be "
-                        "wrapped in quotes to avoid them being interpreted when directly specified "
-                        "in YAML, or explicitly converted to strings when the option is templated. "
-                        f"Key: {name}"
+                        f"wrapped in quotes to avoid them being interpreted. Key: {name}"
                     )
+                self.env[name] = to_text(value, errors="surrogate_or_strict")
     def get_run_cmd(self, dry_run: bool) -> list[str]:
         args = self.get_base_args(plain_progress=True) + ["run"]

View File

@@ -221,17 +221,16 @@ def main() -> None:
     stdin: str | None = client.module.params["stdin"]
     strip_empty_ends: bool = client.module.params["strip_empty_ends"]
     tty: bool = client.module.params["tty"]
-    env: dict[str, t.Any] | None = client.module.params["env"]
+    env: dict[str, t.Any] = client.module.params["env"]
     if env is not None:
-        for name, value in env.items():
+        for name, value in list(env.items()):
             if not isinstance(value, str):
                 client.module.fail_json(
                     msg="Non-string value found for env option. Ambiguous env options must be "
-                    "wrapped in quotes to avoid them being interpreted when directly specified "
-                    "in YAML, or explicitly converted to strings when the option is templated. "
-                    f"Key: {name}"
+                    f"wrapped in quotes to avoid them being interpreted. Key: {name}"
                 )
+            env[name] = to_text(value, errors="surrogate_or_strict")
     if command is not None:
         argv = shlex.split(command)
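The env hunks above all guard against the same YAML pitfall: an unquoted value like `retries: 3` arrives as an `int`, not a `str`, and the main branch now rejects it outright rather than converting silently. A standalone sketch of that strict check (`validate_env` is an illustrative name, not the modules' API):

```python
def validate_env(env):
    """Reject non-string env values, mirroring the strict check in the diff above."""
    for name, value in env.items():
        if not isinstance(value, str):
            raise ValueError(
                "Non-string value found for env option. Ambiguous env options "
                "must be wrapped in quotes to avoid them being interpreted "
                "when directly specified in YAML, or explicitly converted to "
                f"strings when the option is templated. Key: {name}"
            )
    return env


validate_env({"HOME": "/root"})   # passes through unchanged
# validate_env({"RETRIES": 3})    # would raise ValueError mentioning "Key: RETRIES"
```

Failing loudly is the safer contract: YAML also reinterprets values like `on`, `3.10`, or `0x1F`, so a silent `to_text` conversion can mask genuine user mistakes.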

View File

@@ -21,8 +21,6 @@ description:
 notes:
   - Building images is done using Docker daemon's API. It is not possible to use BuildKit / buildx this way. Use M(community.docker.docker_image_build)
     to build images with BuildKit.
-  - Exporting images is generally not idempotent. It depends on whether the image ID equals the IDs found in the generated tarball's C(manifest.json).
-    This was the case with the default storage backend up to Docker 28, but seems to have changed in Docker 29.
 extends_documentation_fragment:
   - community.docker._docker.api_documentation
   - community.docker._attributes
@@ -805,7 +803,7 @@ class ImageManager(DockerBaseClass):
                     if line.get("errorDetail"):
                         raise RuntimeError(line["errorDetail"]["message"])
                     status = line.get("status")
-                    if status in ("Pushing", "Pushed"):
+                    if status == "Pushing":
                         changed = True
             self.results["changed"] = changed
         except Exception as exc:  # pylint: disable=broad-exception-caught
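The status check changed above (and again in `ImagePusher` further down): the daemon streams JSON progress lines during a push, and the main branch treats both `"Pushing"` and `"Pushed"` as evidence that a layer was actually transferred, since daemons may emit only one of the two for a given layer. A hedged sketch of that scan (`detect_push_change` is illustrative, not the module's real interface):

```python
def detect_push_change(log_lines):
    """Scan streamed push-progress dicts; True if any layer was (being) pushed."""
    changed = False
    for line in log_lines:
        if line.get("errorDetail"):
            # Surface daemon-side errors immediately, as the diff does
            raise RuntimeError(line["errorDetail"]["message"])
        if line.get("status") in ("Pushing", "Pushed"):
            changed = True
    return changed


# Layers that already exist remotely report "Layer already exists" and
# therefore do not count as a change.
```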

View File

@@ -28,13 +28,7 @@ attributes:
   diff_mode:
     support: none
   idempotent:
-    support: partial
-    details:
-      - Whether the module is idempotent depends on the storage API used for images,
-        which determines how the image ID is computed. The idempotency check needs
-        that the image ID equals the ID stored in archive's C(manifest.json).
-        This seemed to have worked fine with the default storage backend up to Docker 28,
-        but seems to have changed in Docker 29.
+    support: full
 options:
   names:

View File

@@ -159,7 +159,7 @@ class ImagePusher(DockerBaseClass):
                 if line.get("errorDetail"):
                     raise RuntimeError(line["errorDetail"]["message"])
                 status = line.get("status")
-                if status in ("Pushing", "Pushed"):
+                if status == "Pushing":
                     results["changed"] = True
         except Exception as exc:  # pylint: disable=broad-exception-caught
             if "unauthorized" in str(exc):

View File

@@ -219,7 +219,6 @@ class ImageRemover(DockerBaseClass):
             elif is_image_name_id(name):
                 deleted.append(image["Id"])
-            # TODO: the following is no longer correct with Docker 29+...
             untagged[:] = sorted(
                 (image.get("RepoTags") or []) + (image.get("RepoDigests") or [])
             )

View File

@@ -299,8 +299,6 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
     DifferenceTracker,
     DockerBaseClass,
     clean_dict_booleans_for_docker_api,
-    normalize_ip_address,
-    normalize_ip_network,
     sanitize_labels,
 )
@@ -362,7 +360,6 @@ def validate_cidr(cidr: str) -> t.Literal["ipv4", "ipv6"]:
     :rtype: str
     :raises ValueError: If ``cidr`` is not a valid CIDR
     """
-    # TODO: Use ipaddress for this instead of rolling your own...
     if CIDR_IPV4.match(cidr):
         return "ipv4"
     if CIDR_IPV6.match(cidr):
@@ -392,19 +389,6 @@ def dicts_are_essentially_equal(a: dict[str, t.Any], b: dict[str, t.Any]) -> boo
     return True
-def normalize_ipam_values(ipam_config: dict[str, t.Any]) -> dict[str, t.Any]:
-    result = {}
-    for key, value in ipam_config.items():
-        if key in ("subnet", "iprange"):
-            value = normalize_ip_network(value)
-        elif key in ("gateway",):
-            value = normalize_ip_address(value)
-        elif key in ("aux_addresses",) and value is not None:
-            value = {k: normalize_ip_address(v) for k, v in value.items()}
-        result[key] = value
-    return result
 class DockerNetworkManager:
     def __init__(self, client: AnsibleDockerClient) -> None:
         self.client = client
@@ -529,35 +513,24 @@ class DockerNetworkManager:
             else:
                 # Put network's IPAM config into the same format as module's IPAM config
                 net_ipam_configs = []
-                net_ipam_configs_normalized = []
                 for net_ipam_config in net["IPAM"]["Config"]:
                     config = {}
                     for k, v in net_ipam_config.items():
                         config[normalize_ipam_config_key(k)] = v
                     net_ipam_configs.append(config)
-                    net_ipam_configs_normalized.append(normalize_ipam_values(config))
                 # Compare lists of dicts as sets of dicts
                 for idx, ipam_config in enumerate(self.parameters.ipam_config):
-                    ipam_config_normalized = normalize_ipam_values(ipam_config)
                     net_config = {}
-                    net_config_normalized = {}
-                    for net_ipam_config, net_ipam_config_normalized in zip(
-                        net_ipam_configs, net_ipam_configs_normalized
-                    ):
-                        if dicts_are_essentially_equal(
-                            ipam_config_normalized, net_ipam_config_normalized
-                        ):
+                    for net_ipam_config in net_ipam_configs:
+                        if dicts_are_essentially_equal(ipam_config, net_ipam_config):
                             net_config = net_ipam_config
-                            net_config_normalized = net_ipam_config_normalized
                             break
                     for key, value in ipam_config.items():
                         if value is None:
                             # due to recursive argument_spec, all keys are always present
                             # (but have default value None if not specified)
                             continue
-                        if ipam_config_normalized[key] != net_config_normalized.get(
-                            key
-                        ):
+                        if value != net_config.get(key):
                             differences.add(
                                 f"ipam_config[{idx}].{key}",
                                 parameter=value,
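The `normalize_ipam_values` helper on the main side of the hunk above canonicalizes subnets, gateways, and auxiliary addresses before the dict comparison, so `2001:DB8:0::/32` in a playbook matches the `2001:db8::/32` Docker reports. A self-contained sketch (the private `_norm_*` helpers stand in for the collection's `normalize_ip_network`/`normalize_ip_address`):

```python
import ipaddress


def _norm_net(value):
    """Canonicalize a CIDR string; pass None or invalid input through."""
    if value is None:
        return None
    try:
        return ipaddress.ip_network(value).compressed
    except ValueError:
        return value


def _norm_addr(value):
    """Canonicalize an address string; pass None or invalid input through."""
    if value is None:
        return None
    try:
        return ipaddress.ip_address(value).compressed
    except ValueError:
        return value


def normalize_ipam_values(ipam_config):
    """Normalize the network-valued and address-valued keys of an IPAM config."""
    result = {}
    for key, value in ipam_config.items():
        if key in ("subnet", "iprange"):
            value = _norm_net(value)
        elif key in ("gateway",):
            value = _norm_addr(value)
        elif key in ("aux_addresses",) and value is not None:
            value = {k: _norm_addr(v) for k, v in value.items()}
        result[key] = value
    return result
```

Comparing normalized copies while still reporting the user's original values in the diff output is why the hunk tracks `*_normalized` variables alongside the raw configs.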

View File

@@ -914,10 +914,8 @@ def get_docker_environment(
         for name, value in env.items():
             if not isinstance(value, str):
                 raise ValueError(
-                    "Non-string value found for env option. Ambiguous env options must be "
-                    "wrapped in quotes to avoid them being interpreted when directly specified "
-                    "in YAML, or explicitly converted to strings when the option is templated. "
-                    f"Key: {name}"
+                    "Non-string value found for env option. "
+                    f"Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: {name}"
                 )
             env_dict[name] = str(value)
     elif env is not None and isinstance(env, list):

View File

@@ -124,16 +124,13 @@
       # - present_3_check is changed -- whether this is true depends on a combination of Docker CLI and Docker Compose version...
       # Compose 2.37.3 with Docker 28.2.x results in 'changed', while Compose 2.37.3 with Docker 28.3.0 results in 'not changed'.
       # It seems that Docker is now clever enough to notice that nothing is rebuilt...
-      # With Docker 29.0.0, the behavior seems to change again... I'm currently tending to simply ignore this check, for that
-      # reason the next three lines are commented out:
-      # - present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-      # - ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
-      # - present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-      # Same as above:
+      - present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
+      - ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
+      - present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
       # - present_4_check is changed
-      # Same as above...
       - present_4_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-      # Also seems like a hopeless case with Docker 29:
-      # - present_4 is not changed
+      - present_4 is not changed
       - present_4.warnings | default([]) | select('regex', ' please report this at ') | length == 0
     always:

View File

@@ -81,19 +81,16 @@
     - ansible.builtin.assert:
         that:
           - present_1_check is failed or present_1_check is changed
-          - present_1_check is changed or 'General error:' in present_1_check.msg
+          - present_1_check is changed or present_1_check.msg.startswith('General error:')
          - present_1_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
           - present_1 is failed
-          - >-
-            'General error:' in present_1.msg
+          - present_1.msg.startswith('General error:')
           - present_1.warnings | default([]) | select('regex', ' please report this at ') | length == 0
           - present_2_check is failed
-          - present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':') or
-            present_2_check.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
+          - present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':')
           - present_2_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
           - present_2 is failed
-          - present_2.msg.startswith('Error when processing ' ~ cname ~ ':') or
-            present_2.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
+          - present_2.msg.startswith('Error when processing ' ~ cname ~ ':')
           - present_2.warnings | default([]) | select('regex', ' please report this at ') | length == 0
 ####################################################################

View File

@@ -9,10 +9,12 @@
     non_existing_image: does-not-exist:latest
     project_src: "{{ remote_tmp_dir }}/{{ pname }}"
     test_service_non_existing: |
+      version: '3'
       services:
         {{ cname }}:
           image: {{ non_existing_image }}
     test_service_simple: |
+      version: '3'
       services:
         {{ cname }}:
           image: {{ docker_test_image_simple_1 }}

View File

@@ -77,8 +77,7 @@
 - ansible.builtin.assert:
     that:
       - result_1.rc == 0
-      # Since Compose 5, unrelated output shows up in stderr...
-      - result_1.stderr == "" or ("Creating" in result_1.stderr and "Created" in result_1.stderr)
+      - result_1.stderr == ""
       - >-
         "usr" in result_1.stdout_lines
         and

View File

@@ -37,10 +37,7 @@
       register: docker_host_info
     # Run the tests
-    - module_defaults:
-        community.docker.docker_container:
-          debug: true
-      block:
+    - block:
         - ansible.builtin.include_tasks: run-test.yml
           with_fileglob:
             - "tests/*.yml"

View File

@@ -128,7 +128,6 @@
     image: "{{ docker_test_image_digest_base }}@sha256:{{ docker_test_image_digest_v1 }}"
     name: "{{ cname }}"
     pull: true
-    debug: true
     state: present
   force_kill: true
   register: digest_3

View File

@@ -3077,14 +3077,10 @@
     that:
       - log_options_1 is changed
       - log_options_2 is not changed
-      - message in (log_options_2.warnings | default([]))
+      - "'Non-string value found for log_options option \\'max-file\\'. The value is automatically converted to \\'5\\'. If this is not correct, or you want to
+        avoid such warnings, please quote the value.' in (log_options_2.warnings | default([]))"
       - log_options_3 is not changed
       - log_options_4 is changed
-  vars:
-    message: >-
-      Non-string value found for log_options option 'max-file'. The value is automatically converted to '5'.
-      If this is not correct, or you want to avoid such warnings, please quote the value,
-      or explicitly convert the values to strings when templating them.
 ####################################################################
 ## mac_address #####################################################
@@ -3690,6 +3686,18 @@
   register: platform_5
   ignore_errors: true
+- name: platform (idempotency)
+  community.docker.docker_container:
+    image: "{{ docker_test_image_simple_1 }}"
+    name: "{{ cname }}"
+    state: present
+    pull: true
+    platform: 386
+    force_kill: true
+    debug: true
+  register: platform_6
+  ignore_errors: true
 - name: cleanup
   community.docker.docker_container:
     name: "{{ cname }}"
@@ -3704,6 +3712,7 @@
       - platform_3 is not changed and platform_3 is not failed
       - platform_4 is not changed and platform_4 is not failed
       - platform_5 is changed
+      - platform_6 is not changed and platform_6 is not failed
     when: docker_api_version is version('1.41', '>=')
 - ansible.builtin.assert:
     that:

View File

@@ -106,101 +106,6 @@
     force_kill: true
   register: published_ports_3
-- name: published_ports -- port range (same range, but listed explicitly)
-  community.docker.docker_container:
-    image: "{{ docker_test_image_alpine }}"
-    command: '/bin/sh -c "sleep 10m"'
-    name: "{{ cname }}"
-    state: started
-    exposed_ports:
-      - "9001"
-      - "9010"
-      - "9011"
-      - "9012"
-      - "9013"
-      - "9014"
-      - "9015"
-      - "9016"
-      - "9017"
-      - "9018"
-      - "9019"
-      - "9020"
-      - "9021"
-      - "9022"
-      - "9023"
-      - "9024"
-      - "9025"
-      - "9026"
-      - "9027"
-      - "9028"
-      - "9029"
-      - "9030"
-      - "9031"
-      - "9032"
-      - "9033"
-      - "9034"
-      - "9035"
-      - "9036"
-      - "9037"
-      - "9038"
-      - "9039"
-      - "9040"
-      - "9041"
-      - "9042"
-      - "9043"
-      - "9044"
-      - "9045"
-      - "9046"
-      - "9047"
-      - "9048"
-      - "9049"
-      - "9050"
-    published_ports:
-      - "9001:9001"
-      - "9020:9020"
-      - "9021:9021"
-      - "9022:9022"
-      - "9023:9023"
-      - "9024:9024"
-      - "9025:9025"
-      - "9026:9026"
-      - "9027:9027"
-      - "9028:9028"
-      - "9029:9029"
-      - "9030:9030"
-      - "9031:9031"
-      - "9032:9032"
-      - "9033:9033"
-      - "9034:9034"
-      - "9035:9035"
-      - "9036:9036"
-      - "9037:9037"
-      - "9038:9038"
-      - "9039:9039"
-      - "9040:9040"
-      - "9041:9041"
-      - "9042:9042"
-      - "9043:9043"
-      - "9044:9044"
-      - "9045:9045"
-      - "9046:9046"
-      - "9047:9047"
-      - "9048:9048"
-      - "9049:9049"
-      - "9050:9050"
-      - "9051:9051"
-      - "9052:9052"
-      - "9053:9053"
-      - "9054:9054"
-      - "9055:9055"
-      - "9056:9056"
-      - "9057:9057"
-      - "9058:9058"
-      - "9059:9059"
-      - "9060:9060"
-    force_kill: true
-  register: published_ports_4
 - name: cleanup
   community.docker.docker_container:
     name: "{{ cname }}"
@@ -213,7 +118,6 @@
       - published_ports_1 is changed
       - published_ports_2 is not changed
       - published_ports_3 is changed
-      - published_ports_4 is not changed
 ####################################################################
 ## published_ports: one-element container port range ###############
@@ -277,58 +181,6 @@
       - published_ports_2 is not changed
       - published_ports_3 is changed
-####################################################################
-## published_ports: duplicate ports ################################
-####################################################################
-- name: published_ports -- duplicate ports
-  community.docker.docker_container:
-    image: "{{ docker_test_image_alpine }}"
-    command: '/bin/sh -c "sleep 10m"'
-    name: "{{ cname }}"
-    state: started
-    published_ports:
-      - 8000:80
-      - 10000:80
-  register: published_ports_1
-- name: published_ports -- duplicate ports (idempotency)
-  community.docker.docker_container:
-    image: "{{ docker_test_image_alpine }}"
-    command: '/bin/sh -c "sleep 10m"'
-    name: "{{ cname }}"
-    state: started
-    published_ports:
-      - 8000:80
-      - 10000:80
-    force_kill: true
-  register: published_ports_2
-- name: published_ports -- duplicate ports (idempotency w/ protocol)
-  community.docker.docker_container:
-    image: "{{ docker_test_image_alpine }}"
-    command: '/bin/sh -c "sleep 10m"'
-    name: "{{ cname }}"
-    state: started
-    published_ports:
-      - 8000:80/tcp
-      - 10000:80/tcp
-    force_kill: true
-  register: published_ports_3
-- name: cleanup
-  community.docker.docker_container:
-    name: "{{ cname }}"
-    state: absent
-    force_kill: true
-  diff: false
-- ansible.builtin.assert:
-    that:
-      - published_ports_1 is changed
-      - published_ports_2 is not changed
-      - published_ports_3 is not changed
 ####################################################################
 ## published_ports: IPv6 addresses #################################
 ####################################################################

View File

@@ -256,10 +256,6 @@
 - ansible.builtin.assert:
     that:
       - archive_image_2 is not changed
-  when: docker_cli_version is version("29.0.0", "<")
-  # Apparently idempotency no longer works with the default storage backend
-  # in Docker 29.0.0.
-  # https://github.com/ansible-collections/community.docker/pull/1199
 - name: Archive image 3rd time, should overwrite due to different id
   community.docker.docker_image:

View File

@@ -67,7 +67,3 @@
     manifests_json: "{{ manifests.results | map(attribute='stdout') | map('from_json') }}"
     manifest_json_images: "{{ item.2 | map(attribute='Config') | map('regex_replace', '.json$', '') | map('regex_replace', '^blobs/sha256/', '') | sort }}"
     export_image_ids: "{{ item.1 | map('regex_replace', '^sha256:', '') | unique | sort }}"
-  when: docker_cli_version is version("29.0.0", "<")
-  # Apparently idempotency no longer works with the default storage backend
-  # in Docker 29.0.0.
-  # https://github.com/ansible-collections/community.docker/pull/1199

View File

@@ -73,17 +73,11 @@
   loop: "{{ all_images }}"
   when: remove_all_images is failed
-- name: Show all images
-  ansible.builtin.command: docker image ls
 - name: Load all images (IDs)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-2.tar"
   register: result
-- name: Show all images
-  ansible.builtin.command: docker image ls
 - name: Print loaded image names
   ansible.builtin.debug:
     var: result.image_names
@@ -116,17 +110,11 @@
     name: "{{ item }}"
   loop: "{{ all_images }}"
-- name: Show all images
-  ansible.builtin.command: docker image ls
 - name: Load all images (mixed images and IDs)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-3.tar"
   register: result
-- name: Show all images
-  ansible.builtin.command: docker image ls
 - name: Print loading log
   ansible.builtin.debug:
     var: result.stdout_lines
@ -139,14 +127,10 @@
that: that:
- result is changed - result is changed
# For some reason, *sometimes* only the named image is found; in fact, in that case, the log only mentions that image and nothing else # For some reason, *sometimes* only the named image is found; in fact, in that case, the log only mentions that image and nothing else
# With Docker 29, a third possibility appears: just two entries. - "result.images | length == 3 or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout"
- >- - (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0]]]
result.images | length == 3 - result.images | length in [1, 3]
or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout - (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
or result.images | length == 2
- (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0], image_ids[1]] | sort, [image_names[0]]]
- result.images | length in [1, 2, 3]
- (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
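The assertions above accept several outcomes because different Docker versions load a different set of images from the same archive. Stripped of Jinja, the "result must match one of several acceptable variants" pattern reduces to the following plain-Python sketch (all names here are hypothetical, not part of the test suite):

```python
# Illustrative sketch of the multi-outcome assertion pattern used above:
# sort the observed result and check membership in a list of sorted
# acceptable variants, so ordering differences do not matter.

def check_loaded(image_names, acceptable):
    """Return True if the sorted result matches any acceptable variant."""
    return sorted(image_names) in [sorted(variant) for variant in acceptable]

acceptable = [
    ["name-a:latest", "id-1", "id-2"],  # e.g. Docker <= 28: all three entries
    ["name-a:latest", "id-2"],          # e.g. Docker 29: only two entries
    ["name-a:latest"],                  # sometimes only the named image
]
print(check_loaded(["id-2", "name-a:latest", "id-1"], acceptable))  # True
print(check_loaded(["something-else"], acceptable))                 # False
```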
# Same image twice # Same image twice
@ -155,17 +139,11 @@
name: "{{ item }}" name: "{{ item }}"
loop: "{{ all_images }}" loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (same image twice) - name: Load all images (same image twice)
community.docker.docker_image_load: community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-4.tar" path: "{{ remote_tmp_dir }}/archive-4.tar"
register: result register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names - name: Print loaded image names
ansible.builtin.debug: ansible.builtin.debug:
var: result.image_names var: result.image_names
@ -173,11 +151,10 @@
- ansible.builtin.assert: - ansible.builtin.assert:
that: that:
- result is changed - result is changed
- result.image_names | length in [1, 2] - result.image_names | length == 1
- (result.image_names | sort) in [[image_names[0]], [image_names[0], image_ids[0]] | sort] - result.image_names[0] == image_names[0]
- result.images | length in [1, 2] - result.images | length == 1
- result.images[0].Id == image_ids[0] - result.images[0].Id == image_ids[0]
- result.images[1].Id | default(image_ids[0]) == image_ids[0]
# Single image by ID # Single image by ID
@ -186,17 +163,11 @@
name: "{{ item }}" name: "{{ item }}"
loop: "{{ all_images }}" loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (single image by ID) - name: Load all images (single image by ID)
community.docker.docker_image_load: community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-5.tar" path: "{{ remote_tmp_dir }}/archive-5.tar"
register: result register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names - name: Print loaded image names
ansible.builtin.debug: ansible.builtin.debug:
var: result.image_names var: result.image_names
@ -226,17 +197,11 @@
name: "{{ item }}" name: "{{ item }}"
loop: "{{ all_images }}" loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (names) - name: Load all images (names)
community.docker.docker_image_load: community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-1.tar" path: "{{ remote_tmp_dir }}/archive-1.tar"
register: result register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names - name: Print loaded image names
ansible.builtin.debug: ansible.builtin.debug:
var: result.image_names var: result.image_names
View File
@ -142,8 +142,6 @@
- present_3_check.actions[0] == ('Pulled image ' ~ image_name) - present_3_check.actions[0] == ('Pulled image ' ~ image_name)
- present_3_check.diff.before.id == present_1.diff.after.id - present_3_check.diff.before.id == present_1.diff.after.id
- present_3_check.diff.after.id == 'unknown' - present_3_check.diff.after.id == 'unknown'
- ansible.builtin.assert:
that:
- present_3 is changed - present_3 is changed
- present_3.actions | length == 1 - present_3.actions | length == 1
- present_3.actions[0] == ('Pulled image ' ~ image_name) - present_3.actions[0] == ('Pulled image ' ~ image_name)
@ -168,11 +166,6 @@
- present_5.actions[0] == ('Pulled image ' ~ image_name) - present_5.actions[0] == ('Pulled image ' ~ image_name)
- present_5.diff.before.id == present_3.diff.after.id - present_5.diff.before.id == present_3.diff.after.id
- present_5.diff.after.id == present_1.diff.after.id - present_5.diff.after.id == present_1.diff.after.id
when: docker_cli_version is version("29.0.0", "<")
# From Docker 29 on, Docker won't pull images for other architectures
# if there are better matching ones. The above tests assume it will
# just do what it is told, and thus fail from 29.0.0 on.
# https://github.com/ansible-collections/community.docker/pull/1199
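The tests above are gated on `docker_cli_version is version("29.0.0", "<")`. At its core, such a dotted-version comparison is a component-wise numeric compare, sketched below (Ansible's `version` test is more featureful, handling pre-release suffixes and alternative comparison schemes):

```python
# Minimal sketch of a dotted-version "older than" check, similar in
# spirit to Ansible's version(..., "<") test on purely numeric versions.

def version_lt(a: str, b: str) -> bool:
    """Compare dotted numeric versions component-wise."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return pa < pb  # Python compares lists element by element

print(version_lt("28.3.1", "29.0.0"))  # True: the gated tests would run
print(version_lt("29.0.0", "29.0.0"))  # False: the gated tests are skipped
```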
always: always:
- name: cleanup - name: cleanup
View File
@ -7,9 +7,11 @@
block: block:
- name: Make sure images are not there - name: Make sure images are not there
community.docker.docker_image_remove: community.docker.docker_image_remove:
name: "sha256:{{ item }}" name: "{{ item }}"
force: true force: true
loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}" loop:
- "sha256:{{ docker_test_image_digest_v1_image_id }}"
- "sha256:{{ docker_test_image_digest_v2_image_id }}"
- name: Pull image 1 - name: Pull image 1
community.docker.docker_image_pull: community.docker.docker_image_pull:
@ -80,6 +82,8 @@
always: always:
- name: cleanup - name: cleanup
community.docker.docker_image_remove: community.docker.docker_image_remove:
name: "sha256:{{ item }}" name: "{{ item }}"
force: true force: true
loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}" loop:
- "sha256:{{ docker_test_image_digest_v1_image_id }}"
- "sha256:{{ docker_test_image_digest_v2_image_id }}"
View File
@ -18,7 +18,7 @@
- name: Push image ID (must fail) - name: Push image ID (must fail)
community.docker.docker_image_push: community.docker.docker_image_push:
name: "sha256:{{ docker_test_image_digest_v1_image_ids[0] }}" name: "sha256:{{ docker_test_image_digest_v1_image_id }}"
register: fail_2 register: fail_2
ignore_errors: true ignore_errors: true
View File
@ -80,6 +80,4 @@
that: that:
- push_4 is failed - push_4 is failed
- >- - >-
push_4.msg.startswith('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': ') push_4.msg == ('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': no basic auth credentials')
- >-
push_4.msg.endswith(': no basic auth credentials')
View File
@ -8,16 +8,15 @@
# and should not be used as examples of how to write Ansible roles # # and should not be used as examples of how to write Ansible roles #
#################################################################### ####################################################################
- vars: - block:
image: "{{ docker_test_image_hello_world }}"
image_ids: "{{ docker_test_image_hello_world_image_ids }}"
block:
- name: Pick image prefix - name: Pick image prefix
ansible.builtin.set_fact: ansible.builtin.set_fact:
iname_prefix: "{{ 'ansible-docker-test-%0x' % ((2**32) | random) }}" iname_prefix: "{{ 'ansible-docker-test-%0x' % ((2**32) | random) }}"
- name: Define image names - name: Define image names
ansible.builtin.set_fact: ansible.builtin.set_fact:
image: "{{ docker_test_image_hello_world }}"
image_id: "{{ docker_test_image_hello_world_image_id }}"
image_names: image_names:
- "{{ iname_prefix }}-tagged-1:latest" - "{{ iname_prefix }}-tagged-1:latest"
- "{{ iname_prefix }}-tagged-1:foo" - "{{ iname_prefix }}-tagged-1:foo"
@ -25,9 +24,8 @@
- name: Remove image complete - name: Remove image complete
community.docker.docker_image_remove: community.docker.docker_image_remove:
name: "{{ item }}" name: "{{ image_id }}"
force: true force: true
loop: "{{ image_ids }}"
- name: Remove tagged images - name: Remove tagged images
community.docker.docker_image_remove: community.docker.docker_image_remove:
@ -104,11 +102,10 @@
- remove_2 is changed - remove_2 is changed
- remove_2.diff.before.id == pulled_image.image.Id - remove_2.diff.before.id == pulled_image.image.Id
- remove_2.diff.before.tags | length == 4 - remove_2.diff.before.tags | length == 4
# With Docker 29, there are now two digests in before and after: - remove_2.diff.before.digests | length == 1
- remove_2.diff.before.digests | length in [1, 2]
- remove_2.diff.after.id == pulled_image.image.Id - remove_2.diff.after.id == pulled_image.image.Id
- remove_2.diff.after.tags | length == 3 - remove_2.diff.after.tags | length == 3
- remove_2.diff.after.digests | length in [1, 2] - remove_2.diff.after.digests | length == 1
- remove_2.deleted | length == 0 - remove_2.deleted | length == 0
- remove_2.untagged | length == 1 - remove_2.untagged | length == 1
- remove_2.untagged[0] == (iname_prefix ~ '-tagged-1:latest') - remove_2.untagged[0] == (iname_prefix ~ '-tagged-1:latest')
@ -177,11 +174,10 @@
- remove_4 is changed - remove_4 is changed
- remove_4.diff.before.id == pulled_image.image.Id - remove_4.diff.before.id == pulled_image.image.Id
- remove_4.diff.before.tags | length == 3 - remove_4.diff.before.tags | length == 3
# With Docker 29, there are now two digests in before and after: - remove_4.diff.before.digests | length == 1
- remove_4.diff.before.digests | length in [1, 2]
- remove_4.diff.after.id == pulled_image.image.Id - remove_4.diff.after.id == pulled_image.image.Id
- remove_4.diff.after.tags | length == 2 - remove_4.diff.after.tags | length == 2
- remove_4.diff.after.digests | length in [1, 2] - remove_4.diff.after.digests | length == 1
- remove_4.deleted | length == 0 - remove_4.deleted | length == 0
- remove_4.untagged | length == 1 - remove_4.untagged | length == 1
- remove_4.untagged[0] == (iname_prefix ~ '-tagged-1:foo') - remove_4.untagged[0] == (iname_prefix ~ '-tagged-1:foo')
@ -249,22 +245,16 @@
- remove_6 is changed - remove_6 is changed
- remove_6.diff.before.id == pulled_image.image.Id - remove_6.diff.before.id == pulled_image.image.Id
- remove_6.diff.before.tags | length == 2 - remove_6.diff.before.tags | length == 2
# With Docker 29, there are now two digests in before and after: - remove_6.diff.before.digests | length == 1
- remove_6.diff.before.digests | length in [1, 2]
- remove_6.diff.after.exists is false - remove_6.diff.after.exists is false
- remove_6.deleted | length >= 1 - remove_6.deleted | length > 1
- pulled_image.image.Id in remove_6.deleted - pulled_image.image.Id in remove_6.deleted
- remove_6.untagged | length in [2, 3] - remove_6.untagged | length == 3
- (iname_prefix ~ '-tagged-1:bar') in remove_6.untagged - (iname_prefix ~ '-tagged-1:bar') in remove_6.untagged
- image in remove_6.untagged - image in remove_6.untagged
- remove_6_check.deleted | length == 1 - remove_6_check.deleted | length == 1
- remove_6_check.deleted[0] == pulled_image.image.Id - remove_6_check.deleted[0] == pulled_image.image.Id
# The following is only true for Docker < 29... - remove_6_check.untagged == remove_6.untagged
# We use the CLI version as a proxy...
- >-
remove_6_check.untagged == remove_6.untagged
or
docker_cli_version is version("29.0.0", ">=")
- info_5.images | length == 0 - info_5.images | length == 0
- name: Remove image ID (force, idempotent, check mode) - name: Remove image ID (force, idempotent, check mode)
View File
@ -133,24 +133,8 @@
- name: Get proxied daemon URLs - name: Get proxied daemon URLs
ansible.builtin.set_fact: ansible.builtin.set_fact:
# Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists. docker_daemon_frontend_https: "https://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
# Use the bridge network's IP address instead... docker_daemon_frontend_http: "http://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:6000"
docker_daemon_frontend_https: >-
https://{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else (
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:5000
docker_daemon_frontend_http: >-
http://{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else (
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:6000
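The Jinja expression above implements a three-way fallback over docker-inspect data: prefer a known network's address, then the legacy top-level `NetworkSettings.IPAddress` (gone in Docker 29), then the default `bridge` network. A hedged Python equivalent over an inspect-shaped dict (sample data below is invented):

```python
# Sketch of the IP-resolution fallback used in the set_fact above,
# operating on docker-inspect-style NetworkSettings data.

def container_ip(network_settings, preferred_network=None):
    networks = network_settings.get("Networks", {})
    if preferred_network and preferred_network in networks:
        return networks[preferred_network]["IPAddress"]
    if network_settings.get("IPAddress"):        # Docker <= 28 top-level field
        return network_settings["IPAddress"]
    return networks["bridge"]["IPAddress"]       # Docker >= 29 fallback

old_style = {"IPAddress": "172.17.0.2",
             "Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}
new_style = {"Networks": {"bridge": {"IPAddress": "172.17.0.3"}}}
print(container_ip(old_style))  # 172.17.0.2
print(container_ip(new_style))  # 172.17.0.3
```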
- name: Wait for registry frontend - name: Wait for registry frontend
ansible.builtin.uri: ansible.builtin.uri:
View File
@ -4,18 +4,12 @@
# SPDX-License-Identifier: GPL-3.0-or-later # SPDX-License-Identifier: GPL-3.0-or-later
docker_test_image_digest_v1: e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker_test_image_digest_v1: e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9
docker_test_image_digest_v1_image_ids: docker_test_image_digest_v1_image_id: 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee
- 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee # Docker 28 and before
- e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 # Docker 29
docker_test_image_digest_v2: ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b docker_test_image_digest_v2: ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b
docker_test_image_digest_v2_image_ids: docker_test_image_digest_v2_image_id: dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843
- dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843 # Docker 28 and before
- ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b # Docker 29
docker_test_image_digest_base: quay.io/ansible/docker-test-containers docker_test_image_digest_base: quay.io/ansible/docker-test-containers
docker_test_image_hello_world: quay.io/ansible/docker-test-containers:hello-world docker_test_image_hello_world: quay.io/ansible/docker-test-containers:hello-world
docker_test_image_hello_world_image_ids: docker_test_image_hello_world_image_id: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
- sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b # Docker 28 and before
- sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 # Docker 29
docker_test_image_hello_world_base: quay.io/ansible/docker-test-containers docker_test_image_hello_world_base: quay.io/ansible/docker-test-containers
docker_test_image_busybox: quay.io/ansible/docker-test-containers:busybox docker_test_image_busybox: quay.io/ansible/docker-test-containers:busybox
docker_test_image_alpine: quay.io/ansible/docker-test-containers:alpine3.8 docker_test_image_alpine: quay.io/ansible/docker-test-containers:alpine3.8
View File
@ -102,17 +102,7 @@
# This host/port combination cannot be used if the tests are running inside a docker container. # This host/port combination cannot be used if the tests are running inside a docker container.
docker_registry_frontend_address: localhost:{{ nginx_container.container.NetworkSettings.Ports['5000/tcp'].0.HostPort }} docker_registry_frontend_address: localhost:{{ nginx_container.container.NetworkSettings.Ports['5000/tcp'].0.HostPort }}
# The following host/port combination can be used from inside the docker container. # The following host/port combination can be used from inside the docker container.
docker_registry_frontend_address_internal: >- docker_registry_frontend_address_internal: "{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else
(
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:5000
# Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists.
# Use the bridge network's IP address instead...
- name: Wait for registry frontend - name: Wait for registry frontend
ansible.builtin.uri: ansible.builtin.uri:
View File
@ -27,7 +27,7 @@
- name: Install cryptography (Darwin, and potentially upgrade for other OSes) - name: Install cryptography (Darwin, and potentially upgrade for other OSes)
become: true become: true
ansible.builtin.pip: ansible.builtin.pip:
name: cryptography>=3.3.0 name: cryptography>=1.3.0
extra_args: "-c {{ remote_constraints }}" extra_args: "-c {{ remote_constraints }}"
- name: Register cryptography version - name: Register cryptography version
View File
@ -9,7 +9,6 @@ import pytest
from ansible_collections.community.docker.plugins.module_utils._compose_v2 import ( from ansible_collections.community.docker.plugins.module_utils._compose_v2 import (
Event, Event,
parse_events, parse_events,
parse_json_events,
) )
from .compose_v2_test_cases import EVENT_TEST_CASES from .compose_v2_test_cases import EVENT_TEST_CASES
@ -385,208 +384,3 @@ def test_parse_events(
assert collected_events == events assert collected_events == events
assert collected_warnings == warnings assert collected_warnings == warnings
JSON_TEST_CASES: list[tuple[str, str, str, list[Event], list[str]]] = [
(
"pull-compose-2",
"2.40.3",
'{"level":"warning","msg":"/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version`'
' is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:16:30Z"}\n'
'{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulling"}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pulling fs layer"}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Downloading","status":"[\\u003e '
' ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Download complete","percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[==\\u003e '
' ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
'======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
'======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pull complete","percent":100}\n'
'{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulled"}\n',
[
Event(
"unknown",
None,
"Warning",
"/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version` is obsolete,"
" it will be ignored, please remove it to avoid potential confusion",
),
Event(
"image",
"ansible-docker-test-3c46cd06-cont",
"Pulling",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Pulling fs layer",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Downloading",
"[> ] 6.89kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Download complete",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==> ] 32.77kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Pull complete",
None,
),
Event(
"image",
"ansible-docker-test-3c46cd06-cont",
"Pulled",
None,
),
],
[],
),
(
"pull-compose-5",
"5.0.0",
'{"level":"warning","msg":"/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute'
' `version` is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:08:22Z"}\n'
'{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"Pulling"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[\\u003e '
' ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[==\\u003e '
' ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
'{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","text":"Pulled"}\n',
[
Event(
"unknown",
None,
"Warning",
"/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute `version`"
" is obsolete, it will be ignored, please remove it to avoid potential confusion",
),
Event(
"image",
"ghcr.io/ansible-collections/simple-1:tag",
"Pulling",
"Working",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
None,
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[> ] 6.89kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
None,
),
Event(
"image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==> ] 32.77kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
),
Event(
"image", "ghcr.io/ansible-collections/simple-1:tag", "Pulled", "Done"
),
],
[],
),
]
@pytest.mark.parametrize(
"test_id, compose_version, stderr, events, warnings",
JSON_TEST_CASES,
ids=[tc[0] for tc in JSON_TEST_CASES],
)
def test_parse_json_events(
test_id: str,
compose_version: str,
stderr: str,
events: list[Event],
warnings: list[str],
) -> None:
collected_warnings = []
def collect_warning(msg: str) -> None:
collected_warnings.append(msg)
collected_events = parse_json_events(
stderr.encode("utf-8"),
warn_function=collect_warning,
)
print(collected_events)
print(collected_warnings)
assert collected_events == events
assert collected_warnings == warnings
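The removed tests feed Compose's JSON-lines progress stream to `parse_json_events` and expect `Event` tuples plus collected warnings. The simplified, illustrative parser below shows the general shape of that stream handling; it is NOT the collection's actual `parse_json_events` implementation, and the event classification is an assumption for demonstration:

```python
import json

# Simplified JSON-lines progress parser, loosely modeled on the event
# shapes the test cases above expect. Purely illustrative.

def parse_lines(stderr: bytes):
    events, warnings = [], []
    for line in stderr.decode("utf-8").splitlines():
        if not line.strip():
            continue
        data = json.loads(line)
        if data.get("level") == "warning":
            warnings.append(data["msg"])
            continue
        # Entries with a parent_id describe an image layer; others, an image.
        kind = "image-layer" if "parent_id" in data else "image"
        events.append((kind, data.get("id"), data.get("text") or data.get("status")))
    return events, warnings

sample = (
    b'{"level":"warning","msg":"version is obsolete","time":"t"}\n'
    b'{"id":"cont","text":"Pulling"}\n'
    b'{"id":"63a26ae4e8a8","parent_id":"cont","text":"Pull complete","percent":100}\n'
)
events, warnings = parse_lines(sample)
print(warnings)  # ['version is obsolete']
print(events)    # [('image', 'cont', 'Pulling'), ('image-layer', '63a26ae4e8a8', 'Pull complete')]
```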