Compare commits


26 Commits
5.0.1 ... main

Author SHA1 Message Date
Felix Fontein
947ec9a442 The next expected release will be 5.1.0. 2025-12-06 23:32:17 +01:00
Felix Fontein
25e7ba222e Release 5.0.4. 2025-12-06 22:45:11 +01:00
Felix Fontein
6ab8cc0d82
Improve JSON parsing error handling. (#1221) 2025-12-06 22:25:30 +01:00
Felix Fontein
159df0ab91 Prepare 5.0.4. 2025-12-06 17:57:12 +01:00
Felix Fontein
174c0c8058
Docker Compose 5+: improve image layer event parsing (#1219)
* Remove long deprecated version fields.

* Add first JSON event parsing tests.

* Improve image layer event parsing for Compose 5+.

* Add 'Working' to image working actions.

* Add changelog fragment.

* Shorten lines.

* Adjust docker_compose_v2_run tests.
2025-12-06 17:48:17 +01:00
Felix Fontein
2efcd6b2ec
Adjust test for error message for Compose 5.0.0. (#1217) 2025-12-06 14:04:39 +01:00
Felix Fontein
faa7dee456 The next release will be 5.1.0. 2025-11-29 23:16:22 +01:00
Felix Fontein
908c23a3c3 Release 5.0.3. 2025-11-29 22:35:55 +01:00
Felix Fontein
350f67d971 Prepare 5.0.3. 2025-11-26 07:30:53 +01:00
Felix Fontein
846fc8564b
docker_container: do not send wrong host IP for duplicate ports (#1214)
* DRY.

* Port spec can be a list of port specs.

* Add changelog fragment.

* Add test.
2025-11-26 07:29:30 +01:00
dependabot[bot]
d2947476f7
Bump actions/checkout from 5 to 6 in the ci group (#1211)
Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 06:20:29 +01:00
Felix Fontein
5d2b4085ec
Remove code that's not used. (#1209) 2025-11-23 09:48:34 +01:00
Felix Fontein
a869184ad4 Shut up pylint due to bugs. 2025-11-23 08:56:42 +01:00
Felix Fontein
a985e05482 The next expected release will be 5.1.0. 2025-11-16 13:54:23 +01:00
Felix Fontein
13e74e58fa Release 5.0.2. 2025-11-16 12:48:11 +01:00
Felix Fontein
c61c0e24b8
Improve error/warning messages w.r.t. YAML quoting (#1205)
* Remove superfluous conversions/assignments.

* Improve messages.
2025-11-16 12:32:51 +01:00
Felix Fontein
e42423b949 Forgot to update the version number. 2025-11-16 11:57:17 +01:00
Felix Fontein
0d37f20100 Prepare 5.0.2. 2025-11-16 11:56:18 +01:00
Felix Fontein
a349c5eed7
Fix connection tests. (#1202) 2025-11-16 10:55:07 +01:00
Felix Fontein
3da2799e03
Fix IP subnet and address idempotency. (#1201) 2025-11-16 10:47:35 +01:00
Felix Fontein
d207643e0c
docker_image(_push): fix push detection (#1199)
* Fix IP address retrieval for registry setup.

* Adjust push detection to Docker 29.

* Idempotency for export no longer works.

* Disable pull idempotency checks that play with architecture.

* Add more known image IDs.

* Adjust load tests.

* Adjust error message check.

* Allow for more digests.

* Make sure a new enough cryptography version is installed.
2025-11-16 10:09:23 +01:00
Felix Fontein
90c4b4c543
docker_image(_pull), docker_container: fix compatibility with Docker 29.0.0 (#1192)
* Add debug flag to failing task.

* Add more debug output.

* Fix pull idempotency.

* Revert "Add more debug output."

This reverts commit 64020149bf.

* Fix casing.

* Remove unreliable test.

* Add 'debug: true' to all tasks.

* Reformat.

* Fix idempotency problem for IPv6 addresses.

* Fix expose ranges handling.

* Update changelog fragment to also mention other affected modules.
2025-11-15 17:13:46 +01:00
Felix Fontein
68993fe353
docker_compose_v2: ignore result of build idempotency test since this seems like a hopeless case (#1196)
* Ignore result of idempotency test since this seems like a hopeless cause...

* And another one.
2025-11-15 17:06:21 +01:00
Felix Fontein
97314ec892
Move ansible-core 2.17 to EOL CI. (#1189) 2025-11-12 19:41:25 +01:00
Felix Fontein
ec14568b22
Work around Docker 29.0.0 bug. (#1187) 2025-11-12 19:21:55 +01:00
Felix Fontein
94d22f758b The next planned release will be 5.1.0. 2025-11-09 21:32:51 +01:00
48 changed files with 1254 additions and 773 deletions


@@ -95,17 +95,6 @@ stages:
test: '2.18/sanity/1'
- name: Units
test: '2.18/units/1'
- stage: Ansible_2_17
displayName: Sanity & Units 2.17
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.17/sanity/1'
- name: Units
test: '2.17/units/1'
### Docker
- stage: Docker_devel
@@ -174,23 +163,6 @@ stages:
groups:
- 4
- 5
- stage: Docker_2_17
displayName: Docker 2.17
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.17/linux/{0}
targets:
- name: Fedora 39
test: fedora39
- name: Ubuntu 20.04
test: ubuntu2004
- name: Alpine 3.19
test: alpine319
groups:
- 4
- 5
### Community Docker
- stage: Docker_community_devel
@@ -285,22 +257,6 @@ stages:
- 3
- 4
- 5
- stage: Remote_2_17
displayName: Remote 2.17
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.17/{0}
targets:
- name: RHEL 9.3
test: rhel/9.3
groups:
- 1
- 2
- 3
- 4
- 5
## Finally
@@ -311,17 +267,14 @@ stages:
- Ansible_2_20
- Ansible_2_19
- Ansible_2_18
- Ansible_2_17
- Remote_devel
- Remote_2_20
- Remote_2_19
- Remote_2_18
- Remote_2_17
- Docker_devel
- Docker_2_20
- Docker_2_19
- Docker_2_18
- Docker_2_17
- Docker_community_devel
jobs:
- template: templates/coverage.yml


@@ -45,7 +45,7 @@ jobs:
steps:
- name: Check out repository
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
persist-credentials: false


@@ -30,6 +30,6 @@ jobs:
upload-codecov-pr: false
upload-codecov-push: false
upload-codecov-schedule: true
max-ansible-core: "2.16"
max-ansible-core: "2.17"
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


@@ -388,6 +388,8 @@ disable=raw-checker-failed,
unused-argument,
# Cannot remove yet due to inadequacy of rules
inconsistent-return-statements, # doesn't notice that fail_json() does not return
# Buggy implementation in pylint:
relative-beyond-top-level, # TODO
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option

File diff suppressed because it is too large


@@ -4,6 +4,58 @@ Docker Community Collection Release Notes
.. contents:: Topics
v5.0.4
======
Release Summary
---------------
Bugfix release.
Bugfixes
--------
- CLI-based modules - when parsing JSON output fails, also provide standard error output. Also provide information on the command and its result in a machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216, https://github.com/ansible-collections/community.docker/pull/1221).
- docker_compose_v2, docker_compose_v2_pull - adjust parsing from image pull events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
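The first fix above can be illustrated with a minimal sketch. The helper name and the exact shape of the failure payload are invented for illustration; the collection's actual `fail()` signature and result keys may differ.

```python
import json


def parse_cli_json(cmd, rc, stdout, stderr):
    """Parse JSON emitted by a CLI command.

    On failure, include the standard error output in the message and
    attach the command and its result in machine-readable form, as the
    changelog entry above describes.
    """
    try:
        return json.loads(stdout)
    except ValueError as exc:
        # Carry structured context instead of only the unparsable stdout.
        raise RuntimeError({
            "msg": f"Error while parsing JSON output of {cmd}: {exc}",
            "cmd": cmd,
            "rc": rc,
            "stdout": stdout,
            "stderr": stderr,
        }) from exc
```

This way a caller (or an Ansible task) can inspect `rc` and `stderr` programmatically instead of scraping the error message.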
v5.0.3
======
Release Summary
---------------
Bugfix release.
Bugfixes
--------
- docker_container - when the same port is mapped more than once for the same protocol without specifying an interface, a bug caused an invalid value to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213, https://github.com/ansible-collections/community.docker/pull/1214).
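The fixed data shape can be sketched as follows (function and variable names are invented for illustration, not the module's actual API): when the same container port is published more than once, the binding value becomes a list of per-mapping specs instead of a single tuple carrying an invalid interface value.

```python
def add_port_binding(bindings, container_port, host_port, host_ip=None):
    """Record one published-port mapping.

    A container port mapped once keeps a single spec; mapping it again
    turns the value into a list of specs, so no bogus host IP is
    synthesized for the duplicates.
    """
    spec = (host_ip, host_port) if host_ip else (host_port,)
    existing = bindings.get(container_port)
    if existing is None:
        bindings[container_port] = spec
    elif isinstance(existing, list):
        existing.append(spec)
    else:
        bindings[container_port] = [existing, spec]
    return bindings
```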
v5.0.2
======
Release Summary
---------------
Bugfix release for Docker 29.
Bugfixes
--------
- Docker CLI based modules - work around bug in Docker 29.0.0 that caused a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185, https://github.com/ansible-collections/community.docker/pull/1187).
- docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_container - fix handling of exposed port ranges. So far, the module used an undocumented Docker feature, removed in Docker 29.0.0, that allowed passing the range to the daemon and letting it handle the expansion. Now the module explodes ranges into a list of all contained ports, the same as the Docker CLI does. For backwards compatibility with Docker < 29.0.0, it also explodes ranges returned by the API for existing containers, so that comparison only indicates a difference if the ranges actually change (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_image, docker_image_push - adjust image push detection to Docker 29 (https://github.com/ansible-collections/community.docker/pull/1199).
- docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_network - fix idempotency for IPv6 addresses and networks with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
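The range handling described in the exposed-port-ranges entry above can be sketched like this. This is a simplified stand-in for the module's actual logic; the helper name is made up.

```python
def explode_port_range(spec):
    """Expand an exposed-port spec such as '8000-8002/udp' into the
    individual ports, mirroring what the Docker CLI does, since
    Docker 29.0.0 no longer expands ranges daemon-side."""
    port, _, protocol = str(spec).partition("/")
    protocol = protocol or "tcp"  # protocol defaults to tcp when omitted
    if "-" in port:
        start, end = (int(p) for p in port.split("-", 1))
        if start > end:
            raise ValueError("start port must be smaller or equal to end port.")
        return [f"{p}/{protocol}" for p in range(start, end + 1)]
    return [f"{int(port)}/{protocol}"]
```

Applying the same expansion to ranges reported by the API for existing containers keeps comparisons stable across daemon versions.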
Known Issues
------------
- docker_image, docker_image_export - idempotency for archiving images depends on whether the image IDs used by the image storage backend correspond to the IDs used in the tarball's ``manifest.json`` files. The new default backend in Docker 29 apparently uses image IDs that no longer correspond, whence idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
v5.0.1
======


@@ -2282,3 +2282,66 @@ releases:
- 5.0.1.yml
- typing.yml
release_date: '2025-11-09'
5.0.2:
changes:
bugfixes:
- Docker CLI based modules - work around bug in Docker 29.0.0 that caused
a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185,
https://github.com/ansible-collections/community.docker/pull/1187).
- docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_container - fix handling of exposed port ranges. So far, the module
used an undocumented feature of Docker, removed in Docker 29.0.0, that
allowed passing the range to the daemon and let it handle it. Now the
module explodes ranges into a list of all contained ports, same as the Docker
CLI does. For backwards compatibility with Docker < 29.0.0, it also explodes
ranges returned by the API for existing containers so that comparison should
only indicate a difference if the ranges actually change (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0
(https://github.com/ansible-collections/community.docker/pull/1192).
- docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_image, docker_image_push - adjust image push detection to Docker
29 (https://github.com/ansible-collections/community.docker/pull/1199).
- docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
- docker_network - fix idempotency for IPv6 addresses and networks with Docker
29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
known_issues:
- docker_image, docker_image_export - idempotency for archiving images depends
on whether the image IDs used by the image storage backend correspond to
the IDs used in the tarball's ``manifest.json`` files. The new default backend
in Docker 29 apparently uses image IDs that no longer correspond, whence
idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
release_summary: Bugfix release for Docker 29.
fragments:
- 1187-docker.yml
- 1192-docker_container.yml
- 1199-docker_image-push.yml
- 1201-docker_network.yml
- 5.0.2.yml
release_date: '2025-11-16'
5.0.3:
changes:
bugfixes:
- docker_container - when the same port is mapped more than once for the same
protocol without specifying an interface, a bug caused an invalid value
to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213,
https://github.com/ansible-collections/community.docker/pull/1214).
release_summary: Bugfix release.
fragments:
- 1214-docker_container-ports.yml
- 5.0.3.yml
release_date: '2025-11-29'
5.0.4:
changes:
bugfixes:
- CLI-based modules - when parsing JSON output fails, also provide standard
error output. Also provide information on the command and its result in
a machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216,
https://github.com/ansible-collections/community.docker/pull/1221).
- docker_compose_v2, docker_compose_v2_pull - adjust parsing from image pull
events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
release_summary: Bugfix release.
fragments:
- 1219-compose-v2-pull.yml
- 1221-cli-json-errors.yml
- 5.0.4.yml
release_date: '2025-12-06'


@@ -7,7 +7,7 @@
namespace: community
name: docker
version: 5.0.1
version: 5.1.0
readme: README.md
authors:
- Ansible Docker Working Group


@@ -266,7 +266,9 @@ class Connection(ConnectionBase):
if not isinstance(val, str):
raise AnsibleConnectionFailure(
f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"{what}: {val!r}"
)
local_cmd += [
b"-e",


@@ -282,11 +282,11 @@ class Connection(ConnectionBase):
if not isinstance(val, str):
raise AnsibleConnectionFailure(
f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"{what}: {val!r}"
)
kk = to_text(k, errors="surrogate_or_strict")
vv = to_text(v, errors="surrogate_or_strict")
data["Env"].append(f"{kk}={vv}")
data["Env"].append(f"{k}={v}")
if self.get_option("working_dir") is not None:
data["WorkingDir"] = self.get_option("working_dir")


@@ -43,10 +43,8 @@ docker_version: str | None # pylint: disable=invalid-name
try:
from docker import __version__ as docker_version
from docker import auth
from docker.errors import APIError, NotFound, TLSParameterError
from docker.errors import APIError, TLSParameterError
from docker.tls import TLSConfig
from requests.exceptions import SSLError
if LooseVersion(docker_version) >= LooseVersion("3.0.0"):
HAS_DOCKER_PY_3 = True # pylint: disable=invalid-name
@@ -391,242 +389,6 @@ class AnsibleDockerClientBase(Client):
)
self.fail(f"SSL Exception: {error}")
def get_container_by_id(self, container_id: str) -> dict[str, t.Any] | None:
try:
self.log(f"Inspecting container Id {container_id}")
result = self.inspect_container(container=container_id)
self.log("Completed container inspection")
return result
except NotFound:
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting container: {exc}")
def get_container(self, name: str | None) -> dict[str, t.Any] | None:
"""
Lookup a container and return the inspection results.
"""
if name is None:
return None
search_name = name
if not name.startswith("/"):
search_name = "/" + name
result = None
try:
for container in self.containers(all=True):
self.log(f"testing container: {container['Names']}")
if (
isinstance(container["Names"], list)
and search_name in container["Names"]
):
result = container
break
if container["Id"].startswith(name):
result = container
break
if container["Id"] == name:
result = container
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error retrieving container list: {exc}")
if result is None:
return None
return self.get_container_by_id(result["Id"])
def get_network(
self, name: str | None = None, network_id: str | None = None
) -> dict[str, t.Any] | None:
"""
Lookup a network and return the inspection results.
"""
if name is None and network_id is None:
return None
result = None
if network_id is None:
try:
for network in self.networks():
self.log(f"testing network: {network['Name']}")
if name == network["Name"]:
result = network
break
if network["Id"].startswith(name):
result = network
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error retrieving network list: {exc}")
if result is not None:
network_id = result["Id"]
if network_id is not None:
try:
self.log(f"Inspecting network Id {network_id}")
result = self.inspect_network(network_id)
self.log("Completed network inspection")
except NotFound:
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting network: {exc}")
return result
def find_image(self, name: str, tag: str) -> dict[str, t.Any] | None:
"""
Lookup an image (by name and tag) and return the inspection results.
"""
if not name:
return None
self.log(f"Find image {name}:{tag}")
images = self._image_lookup(name, tag)
if not images:
# In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
registry, repo_name = auth.resolve_repository_name(name)
if registry == "docker.io":
# If docker.io is explicitly there in name, the image
# is not found in some cases (#41509)
self.log(f"Check for docker.io image: {repo_name}")
images = self._image_lookup(repo_name, tag)
if not images and repo_name.startswith("library/"):
# Sometimes library/xxx images are not found
lookup = repo_name[len("library/") :]
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images:
# Last case for some Docker versions: if docker.io was not there,
# it can be that the image was not found either
# (https://github.com/ansible/ansible/pull/15586)
lookup = f"{registry}/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images and "/" not in repo_name:
# This seems to be happening with podman-docker
# (https://github.com/ansible-collections/community.docker/issues/291)
lookup = f"{registry}/library/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if len(images) > 1:
self.fail(f"Daemon returned more than one result for {name}:{tag}")
if len(images) == 1:
try:
inspection = self.inspect_image(images[0]["Id"])
except NotFound:
self.log(f"Image {name}:{tag} not found.")
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting image {name}:{tag} - {exc}")
return inspection
self.log(f"Image {name}:{tag} not found.")
return None
def find_image_by_id(
self, image_id: str, accept_missing_image: bool = False
) -> dict[str, t.Any] | None:
"""
Lookup an image (by ID) and return the inspection results.
"""
if not image_id:
return None
self.log(f"Find image {image_id} (by ID)")
try:
inspection = self.inspect_image(image_id)
except NotFound as exc:
if not accept_missing_image:
self.fail(f"Error inspecting image ID {image_id} - {exc}")
self.log(f"Image {image_id} not found.")
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting image ID {image_id} - {exc}")
return inspection
def _image_lookup(self, name: str, tag: str) -> list[dict[str, t.Any]]:
"""
Including a tag in the name parameter sent to the Docker SDK for Python images method
does not work consistently. Instead, get the result set for name and manually check
if the tag exists.
"""
try:
response = self.images(name=name)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error searching for image {name} - {exc}")
images = response
if tag:
lookup = f"{name}:{tag}"
lookup_digest = f"{name}@{tag}"
images = []
for image in response:
tags = image.get("RepoTags")
digests = image.get("RepoDigests")
if (tags and lookup in tags) or (digests and lookup_digest in digests):
images = [image]
break
return images
def pull_image(
self, name: str, tag: str = "latest", image_platform: str | None = None
) -> tuple[dict[str, t.Any] | None, bool]:
"""
Pull an image
"""
kwargs = {
"tag": tag,
"stream": True,
"decode": True,
}
if image_platform is not None:
kwargs["platform"] = image_platform
self.log(f"Pulling image {name}:{tag}")
old_tag = self.find_image(name, tag)
try:
for line in self.pull(name, **kwargs):
self.log(line, pretty_print=True)
if line.get("error"):
if line.get("errorDetail"):
error_detail = line.get("errorDetail")
self.fail(
f"Error pulling {name} - code: {error_detail.get('code')} message: {error_detail.get('message')}"
)
else:
self.fail(f"Error pulling {name} - {line.get('error')}")
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error pulling image {name}:{tag} - {exc}")
new_tag = self.find_image(name, tag)
return new_tag, old_tag == new_tag
def inspect_distribution(self, image: str, **kwargs: t.Any) -> dict[str, t.Any]:
"""
Get image digest by directly calling the Docker API when running Docker SDK < 4.0.0
since prior versions did not support accessing private repositories.
"""
if self.docker_py_version < LooseVersion("4.0.0"):
registry = auth.resolve_repository_name(image)[0]
header = auth.get_config_header(self, registry)
if header:
return self._result(
self._get(
self._url("/distribution/{0}/json", image),
headers={"X-Registry-Auth": header},
),
json=True,
)
return super().inspect_distribution(image, **kwargs)
class AnsibleDockerClient(AnsibleDockerClientBase):
def __init__(


@@ -519,6 +519,17 @@ class AnsibleDockerClientBase(Client):
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting image ID {image_id} - {exc}")
@staticmethod
def _compare_images(
img1: dict[str, t.Any] | None, img2: dict[str, t.Any] | None
) -> bool:
if img1 is None or img2 is None:
return img1 == img2
filter_keys = {"Metadata"}
img1_filtered = {k: v for k, v in img1.items() if k not in filter_keys}
img2_filtered = {k: v for k, v in img2.items() if k not in filter_keys}
return img1_filtered == img2_filtered
def pull_image(
self, name: str, tag: str = "latest", image_platform: str | None = None
) -> tuple[dict[str, t.Any] | None, bool]:
@@ -526,7 +537,7 @@ class AnsibleDockerClientBase(Client):
Pull an image
"""
self.log(f"Pulling image {name}:{tag}")
old_tag = self.find_image(name, tag)
old_image = self.find_image(name, tag)
try:
repository, image_tag = parse_repository_tag(name)
registry, dummy_repo_name = auth.resolve_repository_name(repository)
@@ -563,9 +574,9 @@ class AnsibleDockerClientBase(Client):
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error pulling image {name}:{tag} - {exc}")
new_tag = self.find_image(name, tag)
new_image = self.find_image(name, tag)
return new_tag, old_tag == new_tag
return new_image, self._compare_images(old_image, new_image)
class AnsibleDockerClient(AnsibleDockerClientBase):


@@ -126,13 +126,16 @@ class AnsibleDockerClientBase:
self._info: dict[str, t.Any] | None = None
if needs_api_version:
api_version_string = self._version["Server"].get(
"ApiVersion"
) or self._version["Server"].get("APIVersion")
if not isinstance(self._version.get("Server"), dict) or not isinstance(
self._version["Server"].get("ApiVersion"), str
api_version_string, str
):
self.fail(
"Cannot determine Docker Daemon information. Are you maybe using podman instead of docker?"
)
self.docker_api_version_str = to_text(self._version["Server"]["ApiVersion"])
self.docker_api_version_str = to_text(api_version_string)
self.docker_api_version = LooseVersion(self.docker_api_version_str)
min_docker_api_version = min_docker_api_version or "1.25"
if self.docker_api_version < LooseVersion(min_docker_api_version):
@@ -194,7 +197,11 @@ class AnsibleDockerClientBase:
data = json.loads(stdout)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
cmd=self._compose_cmd_str(args),
rc=rc,
stdout=stdout,
stderr=stderr,
)
return rc, data, stderr
@@ -220,7 +227,11 @@ class AnsibleDockerClientBase:
result.append(json.loads(line))
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
cmd=self._compose_cmd_str(args),
rc=rc,
stdout=stdout,
stderr=stderr,
)
return rc, result, stderr


@@ -132,7 +132,7 @@ DOCKER_PULL_PROGRESS_DONE = frozenset(
"Pull complete",
)
)
DOCKER_PULL_PROGRESS_WORKING = frozenset(
DOCKER_PULL_PROGRESS_WORKING_OLD = frozenset(
(
"Pulling fs layer",
"Waiting",
@@ -141,6 +141,7 @@ DOCKER_PULL_PROGRESS_WORKING = frozenset(
"Extracting",
)
)
DOCKER_PULL_PROGRESS_WORKING = frozenset(DOCKER_PULL_PROGRESS_WORKING_OLD | {"Working"})
class ResourceType:
@@ -191,7 +192,7 @@ _RE_PULL_EVENT = re.compile(
)
_DOCKER_PULL_PROGRESS_WD = sorted(
DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING
DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING_OLD
)
_RE_PULL_PROGRESS = re.compile(
@@ -494,7 +495,17 @@ def parse_json_events(
# {"dry-run":true,"id":"ansible-docker-test-dc713f1f-container ==> ==>","text":"naming to ansible-docker-test-dc713f1f-image"}
# (The longer form happens since Docker Compose 2.39.0)
continue
if isinstance(resource_id, str) and " " in resource_id:
if (
status in ("Working", "Done")
and isinstance(line_data.get("parent_id"), str)
and line_data["parent_id"].startswith("Image ")
):
# Compose 5.0.0+:
# {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}
# {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}
resource_type = ResourceType.IMAGE_LAYER
resource_id = line_data["parent_id"][len("Image ") :]
elif isinstance(resource_id, str) and " " in resource_id:
resource_type_str, resource_id = resource_id.split(" ", 1)
try:
resource_type = ResourceType.from_docker_compose_event(
@@ -513,7 +524,7 @@ def parse_json_events(
status, text = text, status
elif (
text in DOCKER_PULL_PROGRESS_DONE
or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING
or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING_OLD
):
resource_type = ResourceType.IMAGE_LAYER
status, text = text, status


@@ -659,7 +659,9 @@ def _preprocess_env(
if not isinstance(value, str):
module.fail_json(
msg="Non-string value found for env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. Key: {name}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"Key: {name}"
)
final_env[name] = to_text(value, errors="surrogate_or_strict")
formatted_env = []
@@ -947,7 +949,8 @@ def _preprocess_log(
value = to_text(v, errors="surrogate_or_strict")
module.warn(
f"Non-string value found for log_options option '{k}'. The value is automatically converted to {value!r}. "
"If this is not correct, or you want to avoid such warnings, please quote the value."
"If this is not correct, or you want to avoid such warnings, please quote the value,"
" or explicitly convert the values to strings when templating them."
)
v = value
options[k] = v
@@ -1016,7 +1019,7 @@ def _preprocess_ports(
else:
port_binds = len(container_ports) * [(ipaddr,)]
else:
return module.fail_json(
module.fail_json(
msg=f'Invalid port description "{port}" - expected 1 to 3 colon-separated parts, but got {p_len}. '
"Maybe you forgot to use square brackets ([...]) around an IPv6 address?"
)
@@ -1037,38 +1040,43 @@ def _preprocess_ports(
binds[idx] = bind
values["published_ports"] = binds
exposed = []
exposed: set[tuple[int, str]] = set()
if "exposed_ports" in values:
for port in values["exposed_ports"]:
port = to_text(port, errors="surrogate_or_strict").strip()
protocol = "tcp"
matcher = re.search(r"(/.+$)", port)
if matcher:
protocol = matcher.group(1).replace("/", "")
port = re.sub(r"/.+$", "", port)
exposed.append((port, protocol))
parts = port.split("/", maxsplit=1)
if len(parts) == 2:
port, protocol = parts
parts = port.split("-", maxsplit=1)
if len(parts) < 2:
try:
exposed.add((int(port), protocol))
except ValueError as e:
module.fail_json(msg=f"Cannot parse port {port!r}: {e}")
else:
try:
start_port = int(parts[0])
end_port = int(parts[1])
if start_port > end_port:
raise ValueError(
"start port must be smaller or equal to end port."
)
except ValueError as e:
module.fail_json(msg=f"Cannot parse port range {port!r}: {e}")
for port in range(start_port, end_port + 1):
exposed.add((port, protocol))
if "published_ports" in values:
# Any published port should also be exposed
for publish_port in values["published_ports"]:
match = False
if isinstance(publish_port, str) and "/" in publish_port:
port, protocol = publish_port.split("/")
port = int(port)
else:
protocol = "tcp"
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], str) and "-" in exposed_port[0]:
start_port, end_port = exposed_port[0].split("-")
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
values["ports"] = exposed
exposed.add((port, protocol))
values["ports"] = sorted(exposed)
return values


@@ -29,6 +29,7 @@ from ansible_collections.community.docker.plugins.module_utils._common_api impor
RequestException,
)
from ansible_collections.community.docker.plugins.module_utils._module_container.base import (
_DEFAULT_IP_REPLACEMENT_STRING,
OPTION_AUTO_REMOVE,
OPTION_BLKIO_WEIGHT,
OPTION_CAP_DROP,
@@ -127,11 +128,6 @@ if t.TYPE_CHECKING:
Sentry = object
_DEFAULT_IP_REPLACEMENT_STRING = (
"[[DEFAULT_IP:iewahhaeB4Sae6Aen8IeShairoh4zeph7xaekoh8Geingunaesaeweiy3ooleiwi]]"
)
_SENTRY: Sentry = object()
@@ -1970,10 +1966,20 @@ def _get_values_ports(
config = container["Config"]
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
expected_exposed: list[str] = []
if config.get("ExposedPorts") is not None:
expected_exposed = [_normalize_port(p) for p in config.get("ExposedPorts", {})]
else:
expected_exposed = []
for port_and_protocol in config.get("ExposedPorts", {}):
port, protocol = _normalize_port(port_and_protocol).rsplit("/")
try:
start, end = port.split("-", 1)
start_port = int(start)
end_port = int(end)
for port_no in range(start_port, end_port + 1):
expected_exposed.append(f"{port_no}/{protocol}")
continue
except ValueError:
# Either it is not a range, or a broken one - in both cases, simply add the original form
expected_exposed.append(f"{port}/{protocol}")
return {
"published_ports": host_config.get("PortBindings"),
@@ -2027,17 +2033,14 @@ def _get_expected_values_ports(
]
expected_values["published_ports"] = expected_bound_ports
image_ports = []
image_ports: set[str] = set()
if image:
image_exposed_ports = image["Config"].get("ExposedPorts") or {}
image_ports = [_normalize_port(p) for p in image_exposed_ports]
param_ports = []
image_ports = {_normalize_port(p) for p in image_exposed_ports}
param_ports: set[str] = set()
if "ports" in values:
param_ports = [
to_text(p[0], errors="surrogate_or_strict") + "/" + p[1]
for p in values["ports"]
]
result = list(set(image_ports + param_ports))
param_ports = {f"{p[0]}/{p[1]}" for p in values["ports"]}
result = sorted(image_ports | param_ports)
expected_values["exposed_ports"] = result
if "publish_all_ports" in values:
@ -2086,16 +2089,26 @@ def _preprocess_value_ports(
if "published_ports" not in values:
return values
found = False
for port_spec in values["published_ports"].values():
if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
found = True
break
for port_specs in values["published_ports"].values():
if not isinstance(port_specs, list):
port_specs = [port_specs]
for port_spec in port_specs:
if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
found = True
break
if not found:
return values
default_ip = _get_default_host_ip(module, client)
for port, port_spec in values["published_ports"].items():
if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
values["published_ports"][port] = tuple([default_ip] + list(port_spec[1:]))
for port, port_specs in values["published_ports"].items():
if isinstance(port_specs, list):
for index, port_spec in enumerate(port_specs):
if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
port_specs[index] = tuple([default_ip] + list(port_spec[1:]))
else:
if port_specs[0] == _DEFAULT_IP_REPLACEMENT_STRING:
values["published_ports"][port] = tuple(
[default_ip] + list(port_specs[1:])
)
return values
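The fix above teaches the substitution step that a published port's value can be either a single port spec tuple or a list of them (duplicate host ports). A sketch of the normalized handling, using a placeholder sentinel instead of the module's real marker string:

```python
# Placeholder sentinel for illustration; the module uses its own marker string.
_DEFAULT_IP_SENTINEL = "[[DEFAULT_IP]]"

def replace_default_ip(published_ports: dict, default_ip: str) -> dict:
    """Replace the sentinel host IP in both single specs and lists of specs."""
    for port, specs in published_ports.items():
        if isinstance(specs, list):
            # A list of port specs: patch each matching entry in place.
            for index, spec in enumerate(specs):
                if spec[0] == _DEFAULT_IP_SENTINEL:
                    specs[index] = (default_ip,) + tuple(spec[1:])
        elif specs[0] == _DEFAULT_IP_SENTINEL:
            # A single port spec tuple.
            published_ports[port] = (default_ip,) + tuple(specs[1:])
    return published_ports
```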

View File

@ -25,6 +25,7 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
DockerBaseClass,
compare_generic,
is_image_name_id,
normalize_ip_address,
sanitize_result,
)
@ -925,13 +926,13 @@ class ContainerManager(DockerBaseClass, t.Generic[Client]):
else:
diff = False
network_info_ipam = network_info.get("IPAMConfig") or {}
if network.get("ipv4_address") and network[
"ipv4_address"
] != network_info_ipam.get("IPv4Address"):
if network.get("ipv4_address") and normalize_ip_address(
network["ipv4_address"]
) != normalize_ip_address(network_info_ipam.get("IPv4Address")):
diff = True
if network.get("ipv6_address") and network[
"ipv6_address"
] != network_info_ipam.get("IPv6Address"):
if network.get("ipv6_address") and normalize_ip_address(
network["ipv6_address"]
) != normalize_ip_address(network_info_ipam.get("IPv6Address")):
diff = True
if network.get("aliases") and not compare_generic(
network["aliases"],

View File

@ -7,6 +7,7 @@
from __future__ import annotations
import ipaddress
import json
import re
import typing as t
@ -505,3 +506,47 @@ def omit_none_from_dict(d: dict[str, t.Any]) -> dict[str, t.Any]:
Return a copy of the dictionary with all keys with value None omitted.
"""
return {k: v for (k, v) in d.items() if v is not None}
@t.overload
def normalize_ip_address(ip_address: str) -> str: ...
@t.overload
def normalize_ip_address(ip_address: str | None) -> str | None: ...
def normalize_ip_address(ip_address: str | None) -> str | None:
"""
Given an IP address as a string, normalize it so that it can be
used to compare IP addresses as strings.
"""
if ip_address is None:
return None
try:
return ipaddress.ip_address(ip_address).compressed
except ValueError:
# Fallback for invalid addresses: simply return the input
return ip_address
@t.overload
def normalize_ip_network(network: str) -> str: ...
@t.overload
def normalize_ip_network(network: str | None) -> str | None: ...
def normalize_ip_network(network: str | None) -> str | None:
"""
Given a network in CIDR notation as a string, normalize it so that it can be
used to compare networks as strings.
"""
if network is None:
return None
try:
return ipaddress.ip_network(network).compressed
except ValueError:
# Fallback for invalid networks: simply return the input
return network
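These helpers lean on the standard library's `ipaddress` module: comparing the `compressed` form makes equivalent spellings of the same address compare equal as strings. A small self-contained sketch mirroring `normalize_ip_address`:

```python
import ipaddress

def normalize_ip_address(ip_address):
    """Return the compressed form, or the input unchanged if it is invalid."""
    if ip_address is None:
        return None
    try:
        return ipaddress.ip_address(ip_address).compressed
    except ValueError:
        # Fallback for invalid addresses: simply return the input.
        return ip_address

# Different spellings of the same IPv6 address normalize to one string:
assert normalize_ip_address("2001:0db8:0000::0001") == "2001:db8::1"
assert normalize_ip_address("0:0:0:0:0:0:0:1") == normalize_ip_address("::1")
```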

View File

@ -210,13 +210,14 @@ class ExecManager(BaseComposeManager):
self.stdin += "\n"
if self.env is not None:
for name, value in list(self.env.items()):
for name, value in self.env.items():
if not isinstance(value, str):
self.fail(
"Non-string value found for env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. Key: {name}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"Key: {name}"
)
self.env[name] = to_text(value, errors="surrogate_or_strict")
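The check above rejects non-string env values, since unquoted YAML values like `yes` or `123` get parsed into booleans or integers before the module ever sees them. A standalone sketch of the same validation (hypothetical function name, shortened message):

```python
def validate_env(env: dict) -> dict:
    """Ensure all env values are strings; raise for ambiguous non-string values.

    Sketch of the check above: YAML parses unquoted `true` or `123` into
    non-string types, so such values must be quoted (or explicitly
    converted to strings when the option is templated).
    """
    for name, value in env.items():
        if not isinstance(value, str):
            raise ValueError(
                "Non-string value found for env option. Ambiguous env options "
                f"must be wrapped in quotes. Key: {name}"
            )
    return env
```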
def get_exec_cmd(self, dry_run: bool) -> list[str]:
args = self.get_base_args(plain_progress=True) + ["exec"]

View File

@ -296,13 +296,14 @@ class ExecManager(BaseComposeManager):
self.stdin += "\n"
if self.env is not None:
for name, value in list(self.env.items()):
for name, value in self.env.items():
if not isinstance(value, str):
self.fail(
"Non-string value found for env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. Key: {name}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"Key: {name}"
)
self.env[name] = to_text(value, errors="surrogate_or_strict")
def get_run_cmd(self, dry_run: bool) -> list[str]:
args = self.get_base_args(plain_progress=True) + ["run"]

View File

@ -221,16 +221,17 @@ def main() -> None:
stdin: str | None = client.module.params["stdin"]
strip_empty_ends: bool = client.module.params["strip_empty_ends"]
tty: bool = client.module.params["tty"]
env: dict[str, t.Any] = client.module.params["env"]
env: dict[str, t.Any] | None = client.module.params["env"]
if env is not None:
for name, value in list(env.items()):
for name, value in env.items():
if not isinstance(value, str):
client.module.fail_json(
msg="Non-string value found for env option. Ambiguous env options must be "
f"wrapped in quotes to avoid them being interpreted. Key: {name}"
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"Key: {name}"
)
env[name] = to_text(value, errors="surrogate_or_strict")
if command is not None:
argv = shlex.split(command)

View File

@ -21,6 +21,8 @@ description:
notes:
- Building images is done using Docker daemon's API. It is not possible to use BuildKit / buildx this way. Use M(community.docker.docker_image_build)
to build images with BuildKit.
- Exporting images is not always idempotent. Whether it is depends on whether the image ID equals the IDs found in the generated
tarball's C(manifest.json). This was the case with the default storage backend up to Docker 28, but seems to have changed in Docker 29.
extends_documentation_fragment:
- community.docker._docker.api_documentation
- community.docker._attributes
@ -803,7 +805,7 @@ class ImageManager(DockerBaseClass):
if line.get("errorDetail"):
raise RuntimeError(line["errorDetail"]["message"])
status = line.get("status")
if status == "Pushing":
if status in ("Pushing", "Pushed"):
changed = True
self.results["changed"] = changed
except Exception as exc: # pylint: disable=broad-exception-caught

View File

@ -28,7 +28,13 @@ attributes:
diff_mode:
support: none
idempotent:
support: full
support: partial
details:
- Whether the module is idempotent depends on the storage API used for images,
which determines how the image ID is computed. The idempotency check requires
that the image ID equals the ID stored in the archive's C(manifest.json).
This worked fine with the default storage backend up to Docker 28,
but seems to have changed in Docker 29.
options:
names:

View File

@ -159,7 +159,7 @@ class ImagePusher(DockerBaseClass):
if line.get("errorDetail"):
raise RuntimeError(line["errorDetail"]["message"])
status = line.get("status")
if status == "Pushing":
if status in ("Pushing", "Pushed"):
results["changed"] = True
except Exception as exc: # pylint: disable=broad-exception-caught
if "unauthorized" in str(exc):

View File

@ -219,6 +219,7 @@ class ImageRemover(DockerBaseClass):
elif is_image_name_id(name):
deleted.append(image["Id"])
# TODO: the following is no longer correct with Docker 29+...
untagged[:] = sorted(
(image.get("RepoTags") or []) + (image.get("RepoDigests") or [])
)

View File

@ -299,6 +299,8 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
DifferenceTracker,
DockerBaseClass,
clean_dict_booleans_for_docker_api,
normalize_ip_address,
normalize_ip_network,
sanitize_labels,
)
@ -360,6 +362,7 @@ def validate_cidr(cidr: str) -> t.Literal["ipv4", "ipv6"]:
:rtype: str
:raises ValueError: If ``cidr`` is not a valid CIDR
"""
# TODO: Use ipaddress for this instead of rolling your own...
if CIDR_IPV4.match(cidr):
return "ipv4"
if CIDR_IPV6.match(cidr):
@ -389,6 +392,19 @@ def dicts_are_essentially_equal(a: dict[str, t.Any], b: dict[str, t.Any]) -> boo
return True
def normalize_ipam_values(ipam_config: dict[str, t.Any]) -> dict[str, t.Any]:
result = {}
for key, value in ipam_config.items():
if key in ("subnet", "iprange"):
value = normalize_ip_network(value)
elif key in ("gateway",):
value = normalize_ip_address(value)
elif key in ("aux_addresses",) and value is not None:
value = {k: normalize_ip_address(v) for k, v in value.items()}
result[key] = value
return result
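`normalize_ipam_values` normalizes `subnet` and `iprange` as networks, `gateway` as an address, and the values of `aux_addresses` as addresses, so that equivalent IPAM configs compare equal as strings. A self-contained sketch with inlined `ipaddress`-based helpers (helper names are illustrative):

```python
import ipaddress

def _norm_net(value):
    """Compressed network form, or the input unchanged if invalid."""
    try:
        return ipaddress.ip_network(value).compressed
    except ValueError:
        return value

def _norm_addr(value):
    """Compressed address form, or the input unchanged if invalid."""
    try:
        return ipaddress.ip_address(value).compressed
    except ValueError:
        return value

def normalize_ipam_values(ipam_config):
    """Normalize IP values so equivalent IPAM configs compare equal."""
    result = {}
    for key, value in ipam_config.items():
        if key in ("subnet", "iprange"):
            value = _norm_net(value)
        elif key == "gateway":
            value = _norm_addr(value)
        elif key == "aux_addresses" and value is not None:
            value = {k: _norm_addr(v) for k, v in value.items()}
        result[key] = value
    return result
```

For example, `2001:db8:0::/64` and `2001:db8::/64` describe the same subnet and normalize to the same string.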
class DockerNetworkManager:
def __init__(self, client: AnsibleDockerClient) -> None:
self.client = client
@ -513,24 +529,35 @@ class DockerNetworkManager:
else:
# Put network's IPAM config into the same format as module's IPAM config
net_ipam_configs = []
net_ipam_configs_normalized = []
for net_ipam_config in net["IPAM"]["Config"]:
config = {}
for k, v in net_ipam_config.items():
config[normalize_ipam_config_key(k)] = v
net_ipam_configs.append(config)
net_ipam_configs_normalized.append(normalize_ipam_values(config))
# Compare lists of dicts as sets of dicts
for idx, ipam_config in enumerate(self.parameters.ipam_config):
ipam_config_normalized = normalize_ipam_values(ipam_config)
net_config = {}
for net_ipam_config in net_ipam_configs:
if dicts_are_essentially_equal(ipam_config, net_ipam_config):
net_config_normalized = {}
for net_ipam_config, net_ipam_config_normalized in zip(
net_ipam_configs, net_ipam_configs_normalized
):
if dicts_are_essentially_equal(
ipam_config_normalized, net_ipam_config_normalized
):
net_config = net_ipam_config
net_config_normalized = net_ipam_config_normalized
break
for key, value in ipam_config.items():
if value is None:
# due to recursive argument_spec, all keys are always present
# (but have default value None if not specified)
continue
if value != net_config.get(key):
if ipam_config_normalized[key] != net_config_normalized.get(
key
):
differences.add(
f"ipam_config[{idx}].{key}",
parameter=value,

View File

@ -914,8 +914,10 @@ def get_docker_environment(
for name, value in env.items():
if not isinstance(value, str):
raise ValueError(
"Non-string value found for env option. "
f"Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: {name}"
"Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"Key: {name}"
)
env_dict[name] = str(value)
elif env is not None and isinstance(env, list):

View File

@ -124,13 +124,16 @@
# - present_3_check is changed -- whether this is true depends on a combination of Docker CLI and Docker Compose version...
# Compose 2.37.3 with Docker 28.2.x results in 'changed', while Compose 2.37.3 with Docker 28.3.0 results in 'not changed'.
# It seems that Docker is now clever enough to notice that nothing is rebuilt...
- present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
- ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
- present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
# With Docker 29.0.0, the behavior seems to change again... For now we simply ignore this check,
# which is why the next three lines are commented out:
# - present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
# - ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
# - present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
# Same as above:
# - present_4_check is changed
# Same as above...
- present_4_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
- present_4 is not changed
# Also seems like a hopeless case with Docker 29:
# - present_4 is not changed
- present_4.warnings | default([]) | select('regex', ' please report this at ') | length == 0
always:

View File

@ -81,16 +81,19 @@
- ansible.builtin.assert:
that:
- present_1_check is failed or present_1_check is changed
- present_1_check is changed or present_1_check.msg.startswith('General error:')
- present_1_check is changed or 'General error:' in present_1_check.msg
- present_1_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
- present_1 is failed
- present_1.msg.startswith('General error:')
- >-
'General error:' in present_1.msg
- present_1.warnings | default([]) | select('regex', ' please report this at ') | length == 0
- present_2_check is failed
- present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':')
- present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':') or
present_2_check.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
- present_2_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
- present_2 is failed
- present_2.msg.startswith('Error when processing ' ~ cname ~ ':')
- present_2.msg.startswith('Error when processing ' ~ cname ~ ':') or
present_2.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
- present_2.warnings | default([]) | select('regex', ' please report this at ') | length == 0
####################################################################

View File

@ -9,12 +9,10 @@
non_existing_image: does-not-exist:latest
project_src: "{{ remote_tmp_dir }}/{{ pname }}"
test_service_non_existing: |
version: '3'
services:
{{ cname }}:
image: {{ non_existing_image }}
test_service_simple: |
version: '3'
services:
{{ cname }}:
image: {{ docker_test_image_simple_1 }}

View File

@ -77,7 +77,8 @@
- ansible.builtin.assert:
that:
- result_1.rc == 0
- result_1.stderr == ""
# Since Compose 5, unrelated output shows up in stderr...
- result_1.stderr == "" or ("Creating" in result_1.stderr and "Created" in result_1.stderr)
- >-
"usr" in result_1.stdout_lines
and

View File

@ -37,7 +37,10 @@
register: docker_host_info
# Run the tests
- block:
- module_defaults:
community.docker.docker_container:
debug: true
block:
- ansible.builtin.include_tasks: run-test.yml
with_fileglob:
- "tests/*.yml"

View File

@ -128,6 +128,7 @@
image: "{{ docker_test_image_digest_base }}@sha256:{{ docker_test_image_digest_v1 }}"
name: "{{ cname }}"
pull: true
debug: true
state: present
force_kill: true
register: digest_3

View File

@ -3077,10 +3077,14 @@
that:
- log_options_1 is changed
- log_options_2 is not changed
- "'Non-string value found for log_options option \\'max-file\\'. The value is automatically converted to \\'5\\'. If this is not correct, or you want to
avoid such warnings, please quote the value.' in (log_options_2.warnings | default([]))"
- message in (log_options_2.warnings | default([]))
- log_options_3 is not changed
- log_options_4 is changed
vars:
message: >-
Non-string value found for log_options option 'max-file'. The value is automatically converted to '5'.
If this is not correct, or you want to avoid such warnings, please quote the value,
or explicitly convert the values to strings when templating them.
####################################################################
## mac_address #####################################################
@ -3686,18 +3690,6 @@
register: platform_5
ignore_errors: true
- name: platform (idempotency)
community.docker.docker_container:
image: "{{ docker_test_image_simple_1 }}"
name: "{{ cname }}"
state: present
pull: true
platform: 386
force_kill: true
debug: true
register: platform_6
ignore_errors: true
- name: cleanup
community.docker.docker_container:
name: "{{ cname }}"
@ -3712,7 +3704,6 @@
- platform_3 is not changed and platform_3 is not failed
- platform_4 is not changed and platform_4 is not failed
- platform_5 is changed
- platform_6 is not changed and platform_6 is not failed
when: docker_api_version is version('1.41', '>=')
- ansible.builtin.assert:
that:

View File

@ -106,6 +106,101 @@
force_kill: true
register: published_ports_3
- name: published_ports -- port range (same range, but listed explicitly)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9010"
- "9011"
- "9012"
- "9013"
- "9014"
- "9015"
- "9016"
- "9017"
- "9018"
- "9019"
- "9020"
- "9021"
- "9022"
- "9023"
- "9024"
- "9025"
- "9026"
- "9027"
- "9028"
- "9029"
- "9030"
- "9031"
- "9032"
- "9033"
- "9034"
- "9035"
- "9036"
- "9037"
- "9038"
- "9039"
- "9040"
- "9041"
- "9042"
- "9043"
- "9044"
- "9045"
- "9046"
- "9047"
- "9048"
- "9049"
- "9050"
published_ports:
- "9001:9001"
- "9020:9020"
- "9021:9021"
- "9022:9022"
- "9023:9023"
- "9024:9024"
- "9025:9025"
- "9026:9026"
- "9027:9027"
- "9028:9028"
- "9029:9029"
- "9030:9030"
- "9031:9031"
- "9032:9032"
- "9033:9033"
- "9034:9034"
- "9035:9035"
- "9036:9036"
- "9037:9037"
- "9038:9038"
- "9039:9039"
- "9040:9040"
- "9041:9041"
- "9042:9042"
- "9043:9043"
- "9044:9044"
- "9045:9045"
- "9046:9046"
- "9047:9047"
- "9048:9048"
- "9049:9049"
- "9050:9050"
- "9051:9051"
- "9052:9052"
- "9053:9053"
- "9054:9054"
- "9055:9055"
- "9056:9056"
- "9057:9057"
- "9058:9058"
- "9059:9059"
- "9060:9060"
force_kill: true
register: published_ports_4
- name: cleanup
community.docker.docker_container:
name: "{{ cname }}"
@ -118,6 +213,7 @@
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is changed
- published_ports_4 is not changed
####################################################################
## published_ports: one-element container port range ###############
@ -181,6 +277,58 @@
- published_ports_2 is not changed
- published_ports_3 is changed
####################################################################
## published_ports: duplicate ports ################################
####################################################################
- name: published_ports -- duplicate ports
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80
- 10000:80
register: published_ports_1
- name: published_ports -- duplicate ports (idempotency)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80
- 10000:80
force_kill: true
register: published_ports_2
- name: published_ports -- duplicate ports (idempotency w/ protocol)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80/tcp
- 10000:80/tcp
force_kill: true
register: published_ports_3
- name: cleanup
community.docker.docker_container:
name: "{{ cname }}"
state: absent
force_kill: true
diff: false
- ansible.builtin.assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is not changed
####################################################################
## published_ports: IPv6 addresses #################################
####################################################################

View File

@ -256,6 +256,10 @@
- ansible.builtin.assert:
that:
- archive_image_2 is not changed
when: docker_cli_version is version("29.0.0", "<")
# Apparently idempotency no longer works with the default storage backend
# in Docker 29.0.0.
# https://github.com/ansible-collections/community.docker/pull/1199
- name: Archive image 3rd time, should overwrite due to different id
community.docker.docker_image:

View File

@ -67,3 +67,7 @@
manifests_json: "{{ manifests.results | map(attribute='stdout') | map('from_json') }}"
manifest_json_images: "{{ item.2 | map(attribute='Config') | map('regex_replace', '.json$', '') | map('regex_replace', '^blobs/sha256/', '') | sort }}"
export_image_ids: "{{ item.1 | map('regex_replace', '^sha256:', '') | unique | sort }}"
when: docker_cli_version is version("29.0.0", "<")
# Apparently idempotency no longer works with the default storage backend
# in Docker 29.0.0.
# https://github.com/ansible-collections/community.docker/pull/1199

View File

@ -73,11 +73,17 @@
loop: "{{ all_images }}"
when: remove_all_images is failed
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (IDs)
community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-2.tar"
register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names
ansible.builtin.debug:
var: result.image_names
@ -110,11 +116,17 @@
name: "{{ item }}"
loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (mixed images and IDs)
community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-3.tar"
register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loading log
ansible.builtin.debug:
var: result.stdout_lines
@ -127,10 +139,14 @@
that:
- result is changed
# For some reason, *sometimes* only the named image is found; in fact, in that case, the log only mentions that image and nothing else
- "result.images | length == 3 or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout"
- (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0]]]
- result.images | length in [1, 3]
- (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
# With Docker 29, a third possibility appears: just two entries.
- >-
result.images | length == 3
or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout
or result.images | length == 2
- (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0], image_ids[1]] | sort, [image_names[0]]]
- result.images | length in [1, 2, 3]
- (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
# Same image twice
@ -139,11 +155,17 @@
name: "{{ item }}"
loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (same image twice)
community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-4.tar"
register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names
ansible.builtin.debug:
var: result.image_names
@ -151,10 +173,11 @@
- ansible.builtin.assert:
that:
- result is changed
- result.image_names | length == 1
- result.image_names[0] == image_names[0]
- result.images | length == 1
- result.image_names | length in [1, 2]
- (result.image_names | sort) in [[image_names[0]], [image_names[0], image_ids[0]] | sort]
- result.images | length in [1, 2]
- result.images[0].Id == image_ids[0]
- result.images[1].Id | default(image_ids[0]) == image_ids[0]
# Single image by ID
@ -163,11 +186,17 @@
name: "{{ item }}"
loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (single image by ID)
community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-5.tar"
register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names
ansible.builtin.debug:
var: result.image_names
@ -197,11 +226,17 @@
name: "{{ item }}"
loop: "{{ all_images }}"
- name: Show all images
ansible.builtin.command: docker image ls
- name: Load all images (names)
community.docker.docker_image_load:
path: "{{ remote_tmp_dir }}/archive-1.tar"
register: result
- name: Show all images
ansible.builtin.command: docker image ls
- name: Print loaded image names
ansible.builtin.debug:
var: result.image_names

View File

@ -142,6 +142,8 @@
- present_3_check.actions[0] == ('Pulled image ' ~ image_name)
- present_3_check.diff.before.id == present_1.diff.after.id
- present_3_check.diff.after.id == 'unknown'
- ansible.builtin.assert:
that:
- present_3 is changed
- present_3.actions | length == 1
- present_3.actions[0] == ('Pulled image ' ~ image_name)
@ -166,6 +168,11 @@
- present_5.actions[0] == ('Pulled image ' ~ image_name)
- present_5.diff.before.id == present_3.diff.after.id
- present_5.diff.after.id == present_1.diff.after.id
when: docker_cli_version is version("29.0.0", "<")
# From Docker 29 on, Docker won't pull images for other architectures
# if there are better matching ones. The above tests assume it will
# just do what it is told, and thus fail from 29.0.0 on.
# https://github.com/ansible-collections/community.docker/pull/1199
always:
- name: cleanup

View File

@ -7,11 +7,9 @@
block:
- name: Make sure images are not there
community.docker.docker_image_remove:
name: "{{ item }}"
name: "sha256:{{ item }}"
force: true
loop:
- "sha256:{{ docker_test_image_digest_v1_image_id }}"
- "sha256:{{ docker_test_image_digest_v2_image_id }}"
loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}"
- name: Pull image 1
community.docker.docker_image_pull:
@ -82,8 +80,6 @@
always:
- name: cleanup
community.docker.docker_image_remove:
name: "{{ item }}"
name: "sha256:{{ item }}"
force: true
loop:
- "sha256:{{ docker_test_image_digest_v1_image_id }}"
- "sha256:{{ docker_test_image_digest_v2_image_id }}"
loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}"

View File

@ -18,7 +18,7 @@
- name: Push image ID (must fail)
community.docker.docker_image_push:
name: "sha256:{{ docker_test_image_digest_v1_image_id }}"
name: "sha256:{{ docker_test_image_digest_v1_image_ids[0] }}"
register: fail_2
ignore_errors: true

View File

@ -80,4 +80,6 @@
that:
- push_4 is failed
- >-
push_4.msg == ('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': no basic auth credentials')
push_4.msg.startswith('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': ')
- >-
push_4.msg.endswith(': no basic auth credentials')

View File

@ -8,15 +8,16 @@
# and should not be used as examples of how to write Ansible roles #
####################################################################
- block:
- vars:
image: "{{ docker_test_image_hello_world }}"
image_ids: "{{ docker_test_image_hello_world_image_ids }}"
block:
- name: Pick image prefix
ansible.builtin.set_fact:
iname_prefix: "{{ 'ansible-docker-test-%0x' % ((2**32) | random) }}"
- name: Define image names
ansible.builtin.set_fact:
image: "{{ docker_test_image_hello_world }}"
image_id: "{{ docker_test_image_hello_world_image_id }}"
image_names:
- "{{ iname_prefix }}-tagged-1:latest"
- "{{ iname_prefix }}-tagged-1:foo"
@ -24,8 +25,9 @@
- name: Remove image complete
community.docker.docker_image_remove:
name: "{{ image_id }}"
name: "{{ item }}"
force: true
loop: "{{ image_ids }}"
- name: Remove tagged images
community.docker.docker_image_remove:
@ -102,10 +104,11 @@
- remove_2 is changed
- remove_2.diff.before.id == pulled_image.image.Id
- remove_2.diff.before.tags | length == 4
- remove_2.diff.before.digests | length == 1
# With Docker 29, there are now two digests in before and after:
- remove_2.diff.before.digests | length in [1, 2]
- remove_2.diff.after.id == pulled_image.image.Id
- remove_2.diff.after.tags | length == 3
- remove_2.diff.after.digests | length == 1
- remove_2.diff.after.digests | length in [1, 2]
- remove_2.deleted | length == 0
- remove_2.untagged | length == 1
- remove_2.untagged[0] == (iname_prefix ~ '-tagged-1:latest')
@ -174,10 +177,11 @@
- remove_4 is changed
- remove_4.diff.before.id == pulled_image.image.Id
- remove_4.diff.before.tags | length == 3
- remove_4.diff.before.digests | length == 1
# With Docker 29, there are now two digests in before and after:
- remove_4.diff.before.digests | length in [1, 2]
- remove_4.diff.after.id == pulled_image.image.Id
- remove_4.diff.after.tags | length == 2
- remove_4.diff.after.digests | length == 1
- remove_4.diff.after.digests | length in [1, 2]
- remove_4.deleted | length == 0
- remove_4.untagged | length == 1
- remove_4.untagged[0] == (iname_prefix ~ '-tagged-1:foo')
@ -245,16 +249,22 @@
- remove_6 is changed
- remove_6.diff.before.id == pulled_image.image.Id
- remove_6.diff.before.tags | length == 2
- remove_6.diff.before.digests | length == 1
# With Docker 29, there are now two digests in before and after:
- remove_6.diff.before.digests | length in [1, 2]
- remove_6.diff.after.exists is false
- remove_6.deleted | length > 1
- remove_6.deleted | length >= 1
- pulled_image.image.Id in remove_6.deleted
- remove_6.untagged | length == 3
- remove_6.untagged | length in [2, 3]
- (iname_prefix ~ '-tagged-1:bar') in remove_6.untagged
- image in remove_6.untagged
- remove_6_check.deleted | length == 1
- remove_6_check.deleted[0] == pulled_image.image.Id
- remove_6_check.untagged == remove_6.untagged
# The following is only true for Docker < 29...
# We use the CLI version as a proxy...
- >-
remove_6_check.untagged == remove_6.untagged
or
docker_cli_version is version("29.0.0", ">=")
- info_5.images | length == 0
- name: Remove image ID (force, idempotent, check mode)

View File

@ -133,8 +133,24 @@
- name: Get proxied daemon URLs
ansible.builtin.set_fact:
docker_daemon_frontend_https: "https://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
docker_daemon_frontend_http: "http://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:6000"
# Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists.
# Use the bridge network's IP address instead...
docker_daemon_frontend_https: >-
https://{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else (
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:5000
docker_daemon_frontend_http: >-
http://{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else (
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:6000
- name: Wait for registry frontend
ansible.builtin.uri:

View File

@ -4,12 +4,18 @@
# SPDX-License-Identifier: GPL-3.0-or-later
docker_test_image_digest_v1: e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9
docker_test_image_digest_v1_image_id: 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee
docker_test_image_digest_v1_image_ids:
- 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee # Docker 28 and before
- e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 # Docker 29
docker_test_image_digest_v2: ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b
docker_test_image_digest_v2_image_id: dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843
docker_test_image_digest_v2_image_ids:
- dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843 # Docker 28 and before
- ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b # Docker 29
docker_test_image_digest_base: quay.io/ansible/docker-test-containers
docker_test_image_hello_world: quay.io/ansible/docker-test-containers:hello-world
docker_test_image_hello_world_image_id: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
docker_test_image_hello_world_image_ids:
- sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b # Docker 28 and before
- sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 # Docker 29
docker_test_image_hello_world_base: quay.io/ansible/docker-test-containers
docker_test_image_busybox: quay.io/ansible/docker-test-containers:busybox
docker_test_image_alpine: quay.io/ansible/docker-test-containers:alpine3.8


@@ -102,7 +102,17 @@
# This host/port combination cannot be used if the tests are running inside a docker container.
docker_registry_frontend_address: localhost:{{ nginx_container.container.NetworkSettings.Ports['5000/tcp'].0.HostPort }}
# The following host/port combination can be used from inside the docker container.
docker_registry_frontend_address_internal: "{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
docker_registry_frontend_address_internal: >-
{{
nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
if current_container_network_ip else
(
nginx_container.container.NetworkSettings.IPAddress
| default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
)
}}:5000
# Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists.
# Use the bridge network's IP address instead...
- name: Wait for registry frontend
ansible.builtin.uri:


@@ -27,7 +27,7 @@
- name: Install cryptography (Darwin, and potentially upgrade for other OSes)
become: true
ansible.builtin.pip:
name: cryptography>=1.3.0
name: cryptography>=3.3.0
extra_args: "-c {{ remote_constraints }}"
- name: Register cryptography version


@@ -9,6 +9,7 @@ import pytest
from ansible_collections.community.docker.plugins.module_utils._compose_v2 import (
Event,
parse_events,
parse_json_events,
)
from .compose_v2_test_cases import EVENT_TEST_CASES
@@ -384,3 +385,208 @@ def test_parse_events(
assert collected_events == events
assert collected_warnings == warnings
JSON_TEST_CASES: list[tuple[str, str, str, list[Event], list[str]]] = [
(
"pull-compose-2",
"2.40.3",
'{"level":"warning","msg":"/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version`'
' is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:16:30Z"}\n'
'{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulling"}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pulling fs layer"}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Downloading","status":"[\\u003e '
' ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Download complete","percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[==\\u003e '
' ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
'======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
'======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pull complete","percent":100}\n'
'{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulled"}\n',
[
Event(
"unknown",
None,
"Warning",
"/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version` is obsolete,"
" it will be ignored, please remove it to avoid potential confusion",
),
Event(
"image",
"ansible-docker-test-3c46cd06-cont",
"Pulling",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Pulling fs layer",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Downloading",
"[> ] 6.89kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Download complete",
None,
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==> ] 32.77kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Extracting",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"63a26ae4e8a8",
"Pull complete",
None,
),
Event(
"image",
"ansible-docker-test-3c46cd06-cont",
"Pulled",
None,
),
],
[],
),
(
"pull-compose-5",
"5.0.0",
'{"level":"warning","msg":"/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute'
' `version` is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:08:22Z"}\n'
'{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"Pulling"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[\\u003e '
' ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[==\\u003e '
' ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
'====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
'{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
'{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","text":"Pulled"}\n',
[
Event(
"unknown",
None,
"Warning",
"/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute `version`"
" is obsolete, it will be ignored, please remove it to avoid potential confusion",
),
Event(
"image",
"ghcr.io/ansible-collections/simple-1:tag",
"Pulling",
"Working",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
None,
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[> ] 6.89kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
None,
),
Event(
"image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==> ] 32.77kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer",
"ghcr.io/ansible-collections/simple-1:tag",
"Working",
"[==================================================>] 599.9kB/599.9kB",
),
Event(
"image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
),
Event(
"image", "ghcr.io/ansible-collections/simple-1:tag", "Pulled", "Done"
),
],
[],
),
]


@pytest.mark.parametrize(
"test_id, compose_version, stderr, events, warnings",
JSON_TEST_CASES,
ids=[tc[0] for tc in JSON_TEST_CASES],
)
def test_parse_json_events(
test_id: str,
compose_version: str,
stderr: str,
events: list[Event],
warnings: list[str],
) -> None:
collected_warnings = []
def collect_warning(msg: str) -> None:
collected_warnings.append(msg)
collected_events = parse_json_events(
stderr.encode("utf-8"),
warn_function=collect_warning,
)
print(collected_events)
print(collected_warnings)
assert collected_events == events
assert collected_warnings == warnings