Compare commits


36 Commits

Author SHA1 Message Date
Felix Fontein
947ec9a442 The next expected release will be 5.1.0. 2025-12-06 23:32:17 +01:00
Felix Fontein
25e7ba222e Release 5.0.4. 2025-12-06 22:45:11 +01:00
Felix Fontein
6ab8cc0d82
Improve JSON parsing error handling. (#1221) 2025-12-06 22:25:30 +01:00
Felix Fontein
159df0ab91 Prepare 5.0.4. 2025-12-06 17:57:12 +01:00
Felix Fontein
174c0c8058
Docker Compose 5+: improve image layer event parsing (#1219)
* Remove long deprecated version fields.

* Add first JSON event parsing tests.

* Improve image layer event parsing for Compose 5+.

* Add 'Working' to image working actions.

* Add changelog fragment.

* Shorten lines.

* Adjust docker_compose_v2_run tests.
2025-12-06 17:48:17 +01:00
Felix Fontein
2efcd6b2ec
Adjust test for error message for Compose 5.0.0. (#1217) 2025-12-06 14:04:39 +01:00
Felix Fontein
faa7dee456 The next release will be 5.1.0. 2025-11-29 23:16:22 +01:00
Felix Fontein
908c23a3c3 Release 5.0.3. 2025-11-29 22:35:55 +01:00
Felix Fontein
350f67d971 Prepare 5.0.3. 2025-11-26 07:30:53 +01:00
Felix Fontein
846fc8564b
docker_container: do not send wrong host IP for duplicate ports (#1214)
* DRY.

* Port spec can be a list of port specs.

* Add changelog fragment.

* Add test.
2025-11-26 07:29:30 +01:00
dependabot[bot]
d2947476f7
Bump actions/checkout from 5 to 6 in the ci group (#1211)
Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 06:20:29 +01:00
Felix Fontein
5d2b4085ec
Remove code that's not used. (#1209) 2025-11-23 09:48:34 +01:00
Felix Fontein
a869184ad4 Shut up pylint due to bugs. 2025-11-23 08:56:42 +01:00
Felix Fontein
a985e05482 The next expected release will be 5.1.0. 2025-11-16 13:54:23 +01:00
Felix Fontein
13e74e58fa Release 5.0.2. 2025-11-16 12:48:11 +01:00
Felix Fontein
c61c0e24b8
Improve error/warning messages w.r.t. YAML quoting (#1205)
* Remove superfluous conversions/assignments.

* Improve messages.
2025-11-16 12:32:51 +01:00
Felix Fontein
e42423b949 Forgot to update the version number. 2025-11-16 11:57:17 +01:00
Felix Fontein
0d37f20100 Prepare 5.0.2. 2025-11-16 11:56:18 +01:00
Felix Fontein
a349c5eed7
Fix connection tests. (#1202) 2025-11-16 10:55:07 +01:00
Felix Fontein
3da2799e03
Fix IP subnet and address idempotency. (#1201) 2025-11-16 10:47:35 +01:00
Felix Fontein
d207643e0c
docker_image(_push): fix push detection (#1199)
* Fix IP address retrieval for registry setup.

* Adjust push detection to Docker 29.

* Idempotency for export no longer works.

* Disable pull idempotency checks that play with architecture.

* Add more known image IDs.

* Adjust load tests.

* Adjust error message check.

* Allow for more digests.

* Make sure a new enough cryptography version is installed.
2025-11-16 10:09:23 +01:00
Felix Fontein
90c4b4c543
docker_image(_pull), docker_container: fix compatibility with Docker 29.0.0 (#1192)
* Add debug flag to failing task.

* Add more debug output.

* Fix pull idempotency.

* Revert "Add more debug output."

This reverts commit 64020149bf.

* Fix casing.

* Remove unreliable test.

* Add 'debug: true' to all tasks.

* Reformat.

* Fix idempotency problem for IPv6 addresses.

* Fix expose ranges handling.

* Update changelog fragment to also mention other affected modules.
2025-11-15 17:13:46 +01:00
Felix Fontein
68993fe353
docker_compose_v2: ignore result of build idempotency test since this seems like a hopeless case (#1196)
* Ignore result of idempotency test since this seems like a hopeless cause...

* And another one.
2025-11-15 17:06:21 +01:00
Felix Fontein
97314ec892
Move ansible-core 2.17 to EOL CI. (#1189) 2025-11-12 19:41:25 +01:00
Felix Fontein
ec14568b22
Work around Docker 29.0.0 bug. (#1187) 2025-11-12 19:21:55 +01:00
Felix Fontein
94d22f758b The next planned release will be 5.1.0. 2025-11-09 21:32:51 +01:00
Felix Fontein
aedf8f9674 Release 5.0.1. 2025-11-09 21:12:23 +01:00
Felix Fontein
86ea32b214 Prepare 5.0.1. 2025-11-08 10:02:08 +01:00
Nik Reiman
9d7dda7292
Fix error for "Cannot locate specified Dockerfile" (#1184)
In 3350283bcc, a subtle bug was introduced
by renaming this variable. Image builds that go down the `else` branch
never set the variable, which is then referenced below when constructing
the `params` dict. This results in a very confusing error from the Docker
backend when trying to build images:

> An unexpected Docker error occurred: 500 Server Error for
> http+docker://localhost/v1.51/build?t=molecule_local%2Fubuntu%3A24.04&q=False&nocache=False&rm=True&forcerm=True&pull=True&dockerfile=%2Fhome%2Fci%2F.ansible%2Ftmp%2Fmolecule.IaMj.install-github%2FDockerfile_ubuntu_24_04:
> Internal Server Error ("Cannot locate specified Dockerfile:
> /home/ci/.ansible/tmp/molecule.IaMj.install-github/Dockerfile_ubuntu_24_04")

Within the Docker daemon logs, the actual error presents itself like
this:

> level=debug msg="FIXME: Got an API for which error does not match any
> expected type!!!" error="Cannot locate specified Dockerfile:
> $HOME/.ansible/tmp/molecule.5DrS.install-package/Dockerfile_ubuntu_24_04"
> error_type="*errors.fundamental" module=api

Unfortunately, these are all red herrings and the actual cause of the
problem isn't Docker itself or the missing file, but in fact the
`docker_image` module not passing the correct parameter data here.
2025-11-08 10:01:05 +01:00
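
A hypothetical reduction of the bug pattern this commit describes (illustrative names only, not the module's actual code): a variable is renamed in one branch but not the other, so the `else` path leaves the new name unset and the later reference fails or picks up a stale value.

# Hypothetical reduction of the renamed-variable bug; names are illustrative.
def build_params(use_custom_dockerfile: bool) -> dict:
    if use_custom_dockerfile:
        dockerfile_path = "Dockerfile.custom"  # the renamed variable is set here...
    else:
        dockerfile = "Dockerfile"  # ...but the `else` branch still assigns the old name

    # The new name is referenced unconditionally, so the `else` path either
    # raises UnboundLocalError or, with an unlucky outer binding, sends a
    # stale value to the daemon - which then reports a misleading error.
    return {"dockerfile": dockerfile_path}


print(build_params(True))  # {'dockerfile': 'Dockerfile.custom'}
try:
    build_params(False)
except UnboundLocalError as exc:
    print(f"bug: {exc}")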
Felix Fontein
dee138bc4b
Fix typing info. (#1183) 2025-11-06 07:15:05 +01:00
Felix Fontein
00c480254d The next expected release will be 5.1.0. 2025-11-02 12:51:01 +01:00
Felix Fontein
02f787a930 Release 5.0.0. 2025-11-02 12:30:18 +01:00
Felix Fontein
ea76592af6 Prepare 5.0.0. 2025-10-29 21:15:29 +01:00
Felix Fontein
dbc7b0ec18
Cleanup with ruff check (#1182)
* Implement improvements suggested by ruff check.

* Add ruff check to CI.
2025-10-28 06:58:15 +01:00
Felix Fontein
3bade286f8 Fix mypy config. 2025-10-26 10:02:49 +01:00
Felix Fontein
3dcf394aa5 Remove stable-3 from weekly CI run. 2025-10-25 13:36:34 +02:00
80 changed files with 1601 additions and 1044 deletions

View File

@@ -30,7 +30,6 @@ schedules:
     branches:
       include:
         - stable-4
-        - stable-3

 variables:
   - name: checkoutPath
@@ -96,17 +95,6 @@ stages:
               test: '2.18/sanity/1'
             - name: Units
               test: '2.18/units/1'
-  - stage: Ansible_2_17
-    displayName: Sanity & Units 2.17
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          targets:
-            - name: Sanity
-              test: '2.17/sanity/1'
-            - name: Units
-              test: '2.17/units/1'

   ### Docker
   - stage: Docker_devel
@@ -175,23 +163,6 @@ stages:
           groups:
             - 4
             - 5
-  - stage: Docker_2_17
-    displayName: Docker 2.17
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          testFormat: 2.17/linux/{0}
-          targets:
-            - name: Fedora 39
-              test: fedora39
-            - name: Ubuntu 20.04
-              test: ubuntu2004
-            - name: Alpine 3.19
-              test: alpine319
-          groups:
-            - 4
-            - 5

   ### Community Docker
   - stage: Docker_community_devel
@@ -286,22 +257,6 @@ stages:
             - 3
             - 4
             - 5
-  - stage: Remote_2_17
-    displayName: Remote 2.17
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          testFormat: 2.17/{0}
-          targets:
-            - name: RHEL 9.3
-              test: rhel/9.3
-          groups:
-            - 1
-            - 2
-            - 3
-            - 4
-            - 5

   ## Finally
@@ -312,17 +267,14 @@ stages:
       - Ansible_2_20
       - Ansible_2_19
       - Ansible_2_18
-      - Ansible_2_17
       - Remote_devel
       - Remote_2_20
       - Remote_2_19
       - Remote_2_18
-      - Remote_2_17
       - Docker_devel
       - Docker_2_20
       - Docker_2_19
       - Docker_2_18
-      - Docker_2_17
       - Docker_community_devel
     jobs:
       - template: templates/coverage.yml

View File

@@ -45,7 +45,7 @@ jobs:
     steps:
       - name: Check out repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           persist-credentials: false

View File

@@ -30,6 +30,6 @@ jobs:
       upload-codecov-pr: false
       upload-codecov-push: false
       upload-codecov-schedule: true
-      max-ansible-core: "2.16"
+      max-ansible-core: "2.17"
     secrets:
       CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -22,10 +22,6 @@ follow_untyped_imports = True
 # Docker SDK for Python has partial typing information
 follow_untyped_imports = True

-[mypy-ansible_collections.community.internal_test_tools.*]
-# community.internal_test_tools has no typing information
-ignore_missing_imports = True
-
 [mypy-jsondiff.*]
 # jsondiff has no typing information
 ignore_missing_imports = True

View File

@@ -388,6 +388,8 @@ disable=raw-checker-failed,
         unused-argument,
         # Cannot remove yet due to inadequacy of rules
         inconsistent-return-statements,  # doesn't notice that fail_json() does not return
+        # Buggy implementation in pylint:
+        relative-beyond-top-level,  # TODO

 # Enable the message, report, category or checker with the given id(s). You can
 # either give multiple identifier separated by comma (,) or put this option

File diff suppressed because it is too large

View File

@@ -4,20 +4,86 @@ Docker Community Collection Release Notes

 .. contents:: Topics

-v5.0.0-a1
-=========
+v5.0.4
+======

 Release Summary
 ---------------

-First alpha release of community.docker 5.0.0.
+Bugfix release.

-The main changes are that the collection dropped support for some ansible-core versions that are End of Life, and thus dropped support for Python 2.7.
+Bugfixes
+--------
+
+- CLI-based modules - when parsing JSON output fails, also provide standard error output. Also provide information on the command and its result in a machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216, https://github.com/ansible-collections/community.docker/pull/1221).
+- docker_compose_v2, docker_compose_v2_pull - adjust parsing of image pull events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
+
+v5.0.3
+======
+
+Release Summary
+---------------
+
+Bugfix release.
+
+Bugfixes
+--------
+
+- docker_container - when the same port is mapped more than once for the same protocol without specifying an interface, a bug caused an invalid value to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213, https://github.com/ansible-collections/community.docker/pull/1214).
+
+v5.0.2
+======
+
+Release Summary
+---------------
+
+Bugfix release for Docker 29.
+
+Bugfixes
+--------
+
+- Docker CLI based modules - work around a bug in Docker 29.0.0 that caused a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185, https://github.com/ansible-collections/community.docker/pull/1187).
+- docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+- docker_container - fix handling of exposed port ranges. So far, the module used an undocumented feature of Docker that was removed in Docker 29.0.0 and that allowed passing the range to the daemon and letting it handle it. Now the module explodes ranges into a list of all contained ports, the same as the Docker CLI does. For backwards compatibility with Docker < 29.0.0, it also explodes ranges returned by the API for existing containers, so that the comparison only indicates a difference if the ranges actually change (https://github.com/ansible-collections/community.docker/pull/1192).
+- docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+- docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+- docker_image, docker_image_push - adjust image push detection to Docker 29 (https://github.com/ansible-collections/community.docker/pull/1199).
+- docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+- docker_network - fix idempotency for IPv6 addresses and networks with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
+
+Known Issues
+------------
+
+- docker_image, docker_image_export - idempotency for archiving images depends on whether the image IDs used by the image storage backend correspond to the IDs used in the tarball's ``manifest.json`` files. The new default backend in Docker 29 apparently uses image IDs that no longer correspond, whence idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
+
+v5.0.1
+======
+
+Release Summary
+---------------
+
+Bugfix release.
+
+Bugfixes
+--------
+
+- docker_compose_v2_run - when ``detach=true``, ensure that the returned container ID is not a bytes string (https://github.com/ansible-collections/community.docker/pull/1183).
+- docker_image - fix 'Cannot locate specified Dockerfile' error (https://github.com/ansible-collections/community.docker/pull/1184).
+
+v5.0.0
+======
+
+Release Summary
+---------------
+
+New major release.
+
+The main changes are that the collection dropped support for some ansible-core
+versions that are End of Life, and thus dropped support for Python 2.7.

 This allowed to modernize the Python code, in particular with type hints.
-Also all module and plugin utils are now private to the collection, which makes it easier to refactor code.
-All these changes should have no effect on end-users.
-
-The current plan is to release 5.0.0 in time for Ansible 13's feature freeze, so in roughly one week.
+Also all module and plugin utils are now private to the collection, which
+makes it easier to refactor code. All these changes should have no effect on
+end-users.

 Minor Changes
 -------------
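
The exposed-port-range entry in the 5.0.2 notes above describes the new client-side behavior: ranges are exploded into individual ports instead of being handed to the daemon. A rough sketch of that semantics (an assumed simplification, not the module's exact code; the real implementation is in the _preprocess_ports diff later on this page):

# Sketch of the range explosion described in the 5.0.2 changelog entry above
# (assumed semantics; see the _preprocess_ports diff further down for the
# actual implementation).
def explode_exposed_port(spec: str) -> set[tuple[int, str]]:
    port, _, protocol = spec.partition("/")
    protocol = protocol or "tcp"
    if "-" in port:
        start, end = (int(p) for p in port.split("-", 1))
        return {(p, protocol) for p in range(start, end + 1)}
    return {(int(port), protocol)}


print(sorted(explode_exposed_port("8000-8002/udp")))
# [(8000, 'udp'), (8001, 'udp'), (8002, 'udp')]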

View File

@@ -19,6 +19,8 @@ stable_branches = [ "stable-*" ]
 run_isort = true
 isort_config = ".isort.cfg"
 run_black = true
+run_ruff_check = true
+ruff_check_config = "ruff.toml"
 run_flake8 = true
 flake8_config = ".flake8"
 run_pylint = true

View File

@@ -2250,3 +2250,98 @@ releases:
       - 5.0.0-a1.yml
       - 5.0.0.yml
     release_date: '2025-10-25'
+  5.0.0:
+    changes:
+      release_summary: 'New major release.
+
+        The main changes are that the collection dropped support for some ansible-core
+        versions that are End of Life, and thus dropped support for Python 2.7.
+
+        This allowed to modernize the Python code, in particular with type hints.
+
+        Also all module and plugin utils are now private to the collection, which
+        makes it easier to refactor code. All these changes should have no effect
+        on end-users.'
+    fragments:
+      - 5.0.0.yml
+    release_date: '2025-11-02'
+  5.0.1:
+    changes:
+      bugfixes:
+        - docker_compose_v2_run - when ``detach=true``, ensure that the returned container
+          ID is not a bytes string (https://github.com/ansible-collections/community.docker/pull/1183).
+        - docker_image - fix 'Cannot locate specified Dockerfile' error (https://github.com/ansible-collections/community.docker/pull/1184).
+      release_summary: Bugfix release.
+    fragments:
+      - 1185-fix.yml
+      - 5.0.1.yml
+      - typing.yml
+    release_date: '2025-11-09'
+  5.0.2:
+    changes:
+      bugfixes:
+        - Docker CLI based modules - work around a bug in Docker 29.0.0 that caused
+          a breaking change in ``docker version --format json`` output (https://github.com/ansible-collections/community.docker/issues/1185,
+          https://github.com/ansible-collections/community.docker/pull/1187).
+        - docker_container - fix ``pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+        - docker_container - fix handling of exposed port ranges. So far, the module
+          used an undocumented feature of Docker that was removed in Docker 29.0.0
+          and that allowed passing the range to the daemon and letting it handle it.
+          Now the module explodes ranges into a list of all contained ports, the
+          same as the Docker CLI does. For backwards compatibility with Docker < 29.0.0,
+          it also explodes ranges returned by the API for existing containers, so
+          that the comparison only indicates a difference if the ranges actually
+          change (https://github.com/ansible-collections/community.docker/pull/1192).
+        - docker_container - fix idempotency for IPv6 addresses with Docker 29.0.0
+          (https://github.com/ansible-collections/community.docker/pull/1192).
+        - docker_image - fix ``source=pull`` idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+        - docker_image, docker_image_push - adjust image push detection to Docker
+          29 (https://github.com/ansible-collections/community.docker/pull/1199).
+        - docker_image_pull - fix idempotency with Docker 29.0.0 (https://github.com/ansible-collections/community.docker/pull/1192).
+        - docker_network - fix idempotency for IPv6 addresses and networks with Docker
+          29.0.0 (https://github.com/ansible-collections/community.docker/pull/1201).
+      known_issues:
+        - docker_image, docker_image_export - idempotency for archiving images depends
+          on whether the image IDs used by the image storage backend correspond to
+          the IDs used in the tarball's ``manifest.json`` files. The new default backend
+          in Docker 29 apparently uses image IDs that no longer correspond, whence
+          idempotency no longer works (https://github.com/ansible-collections/community.docker/pull/1199).
+      release_summary: Bugfix release for Docker 29.
+    fragments:
+      - 1187-docker.yml
+      - 1192-docker_container.yml
+      - 1199-docker_image-push.yml
+      - 1201-docker_network.yml
+      - 5.0.2.yml
+    release_date: '2025-11-16'
+  5.0.3:
+    changes:
+      bugfixes:
+        - docker_container - when the same port is mapped more than once for the same
+          protocol without specifying an interface, a bug caused an invalid value
+          to be passed for the interface (https://github.com/ansible-collections/community.docker/issues/1213,
+          https://github.com/ansible-collections/community.docker/pull/1214).
+      release_summary: Bugfix release.
+    fragments:
+      - 1214-docker_container-ports.yml
+      - 5.0.3.yml
+    release_date: '2025-11-29'
+  5.0.4:
+    changes:
+      bugfixes:
+        - CLI-based modules - when parsing JSON output fails, also provide standard
+          error output. Also provide information on the command and its result in
+          a machine-readable way (https://github.com/ansible-collections/community.docker/issues/1216,
+          https://github.com/ansible-collections/community.docker/pull/1221).
+        - docker_compose_v2, docker_compose_v2_pull - adjust parsing of image pull
+          events to changes in Docker Compose 5.0.0 (https://github.com/ansible-collections/community.docker/pull/1219).
+      release_summary: Bugfix release.
+    fragments:
+      - 1219-compose-v2-pull.yml
+      - 1221-cli-json-errors.yml
+      - 5.0.4.yml
+    release_date: '2025-12-06'

View File

@@ -7,7 +7,7 @@
 namespace: community
 name: docker
-version: 5.0.0-a1
+version: 5.1.0
 readme: README.md
 authors:
   - Ansible Docker Working Group

View File

@@ -228,12 +228,12 @@ class Connection(ConnectionBase):
             stdout=subprocess.PIPE,
             stderr=subprocess.PIPE,
         ) as p:
-            out, err = p.communicate()
-            out = to_text(out, errors="surrogate_or_strict")
+            out_b, err_b = p.communicate()
+            out = to_text(out_b, errors="surrogate_or_strict")
         if p.returncode != 0:
             display.warning(
-                f"unable to retrieve default user from docker container: {out} {to_text(err)}"
+                f"unable to retrieve default user from docker container: {out} {to_text(err_b)}"
             )
             self._container_user_cache[container] = None
             return None
@@ -266,7 +266,9 @@ class Connection(ConnectionBase):
             if not isinstance(val, str):
                 raise AnsibleConnectionFailure(
                     f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
-                    f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
+                    "wrapped in quotes to avoid them being interpreted when directly specified "
+                    "in YAML, or explicitly converted to strings when the option is templated. "
+                    f"{what}: {val!r}"
                 )
             local_cmd += [
                 b"-e",

View File

@@ -282,11 +282,11 @@ class Connection(ConnectionBase):
             if not isinstance(val, str):
                 raise AnsibleConnectionFailure(
                     f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
-                    f"wrapped in quotes to avoid them being interpreted. {what}: {val!r}"
+                    "wrapped in quotes to avoid them being interpreted when directly specified "
+                    "in YAML, or explicitly converted to strings when the option is templated. "
+                    f"{what}: {val!r}"
                 )
-            kk = to_text(k, errors="surrogate_or_strict")
-            vv = to_text(v, errors="surrogate_or_strict")
-            data["Env"].append(f"{kk}={vv}")
+            data["Env"].append(f"{k}={v}")

         if self.get_option("working_dir") is not None:
             data["WorkingDir"] = self.get_option("working_dir")

View File

@@ -116,9 +116,9 @@ class Connection(ConnectionBase):
         ]
         cmd_parts = nsenter_cmd_parts + [cmd]
-        cmd = to_bytes(" ".join(cmd_parts))
-        display.vvv(f"EXEC {to_text(cmd)}", host=self._play_context.remote_addr)
+        cmd_b = to_bytes(" ".join(cmd_parts))
+        display.vvv(f"EXEC {to_text(cmd_b)}", host=self._play_context.remote_addr)
         display.debug("opening command with Popen()")

         master = None
@@ -137,9 +137,9 @@ class Connection(ConnectionBase):
             display.debug(f"Unable to open pty: {e}")

         with subprocess.Popen(
-            cmd,
-            shell=isinstance(cmd, (str, bytes)),
-            executable=executable if isinstance(cmd, (str, bytes)) else None,
+            cmd_b,
+            shell=True,
+            executable=executable,
             cwd=self.cwd,
             stdin=stdin,
             stdout=subprocess.PIPE,

View File

@@ -698,9 +698,7 @@ class APIClient(_Session):
         if auth.INDEX_URL not in auth_data and auth.INDEX_NAME in auth_data:
             auth_data[auth.INDEX_URL] = auth_data.get(auth.INDEX_NAME, {})

-        log.debug(
-            "Sending auth config (%s)", ", ".join(repr(k) for k in auth_data.keys())
-        )
+        log.debug("Sending auth config (%s)", ", ".join(repr(k) for k in auth_data))

         if auth_data:
             headers["X-Registry-Config"] = auth.encode_header(auth_data)

View File

@@ -292,7 +292,7 @@ class AuthConfig(dict):
                 log.debug("No entry found")
             return None
         except StoreError as e:
-            raise errors.DockerException(f"Credentials store error: {e}")
+            raise errors.DockerException(f"Credentials store error: {e}") from e

     def _get_store_instance(self, name: str) -> Store:
         if name not in self._stores:
@@ -310,7 +310,7 @@ class AuthConfig(dict):
         if self.creds_store:
             # Retrieve all credentials from the default store
             store = self._get_store_instance(self.creds_store)
-            for k in store.list().keys():
+            for k in store.list():
                 auth_data[k] = self._resolve_authconfig_credstore(k, self.creds_store)
                 auth_data[convert_to_hostname(k)] = auth_data[k]

View File

@@ -102,8 +102,7 @@ def get_tls_dir(name: str | None = None, endpoint: str = "") -> str:

 def get_context_host(path: str | None = None, tls: bool = False) -> str:
     host = parse_host(path, IS_WINDOWS_PLATFORM, tls)
-    if host == DEFAULT_UNIX_SOCKET:
+    if host == DEFAULT_UNIX_SOCKET and host.startswith("http+"):
         # remove http+ from default docker socket url
-        if host.startswith("http+"):
-            host = host[5:]
+        host = host[5:]
     return host

View File

@@ -90,13 +90,13 @@ class Store:
                 env=env,
             )
         except subprocess.CalledProcessError as e:
-            raise errors.process_store_error(e, self.program)
+            raise errors.process_store_error(e, self.program) from e
         except OSError as e:
             if e.errno == errno.ENOENT:
                 raise errors.StoreError(
                     f"{self.program} not installed or not available in PATH"
-                )
+                ) from e
             raise errors.StoreError(
                 f'Unexpected OS error "{e.strerror}", errno={e.errno}'
-            )
+            ) from e

         return output

View File

@@ -98,7 +98,7 @@ def create_archive(
     extra_files = extra_files or []
     if not fileobj:
         # pylint: disable-next=consider-using-with
-        fileobj = tempfile.NamedTemporaryFile()
+        fileobj = tempfile.NamedTemporaryFile()  # noqa: SIM115

     with tarfile.open(mode="w:gz" if gzip else "w", fileobj=fileobj) as tarf:
         if files is None:
@@ -146,7 +146,8 @@ def create_archive(

 def mkbuildcontext(dockerfile: io.BytesIO | t.IO[bytes]) -> t.IO[bytes]:
-    f = tempfile.NamedTemporaryFile()  # pylint: disable=consider-using-with
+    # pylint: disable-next=consider-using-with
+    f = tempfile.NamedTemporaryFile()  # noqa: SIM115
     try:
         with tarfile.open(mode="w", fileobj=f) as tarf:
             if isinstance(dockerfile, io.StringIO):  # type: ignore
@@ -195,11 +196,14 @@ class PatternMatcher:
         for pattern in self.patterns:
             negative = pattern.exclusion
             match = pattern.match(filepath)
-            if not match and parent_path != "":
-                if len(pattern.dirs) <= len(parent_path_dirs):
-                    match = pattern.match(
-                        os.path.sep.join(parent_path_dirs[: len(pattern.dirs)])
-                    )
+            if (
+                not match
+                and parent_path != ""
+                and len(pattern.dirs) <= len(parent_path_dirs)
+            ):
+                match = pattern.match(
+                    os.path.sep.join(parent_path_dirs[: len(pattern.dirs)])
+                )

             if match:
                 matched = not negative
View File

@@ -22,7 +22,7 @@ from ..transport.npipesocket import NpipeSocket

 if t.TYPE_CHECKING:
-    from collections.abc import Iterable, Sequence
+    from collections.abc import Sequence

     from ..._socket_helper import SocketLike

@@ -59,8 +59,8 @@ def read(socket: SocketLike, n: int = 4096) -> bytes | None:
     try:
         if hasattr(socket, "recv"):
             return socket.recv(n)
-        if isinstance(socket, getattr(pysocket, "SocketIO")):
-            return socket.read(n)
+        if isinstance(socket, pysocket.SocketIO):  # type: ignore
+            return socket.read(n)  # type: ignore[unreachable]
         return os.read(socket.fileno(), n)
     except EnvironmentError as e:
         if e.errno not in recoverable_errors:

View File

@@ -36,7 +36,6 @@ from ..tls import TLSConfig

 if t.TYPE_CHECKING:
-    import ssl
     from collections.abc import Mapping, Sequence

@@ -298,7 +297,7 @@ def parse_host(addr: str | None, is_win32: bool = False, tls: bool = False) -> str:
     if proto == "unix" and parsed_url.hostname is not None:
         # For legacy reasons, we consider unix://path
         # to be valid and equivalent to unix:///path
-        path = "/".join((parsed_url.hostname, path))
+        path = f"{parsed_url.hostname}/{path}"

     netloc = parsed_url.netloc
     if proto in ("tcp", "ssh"):
@@ -429,9 +428,8 @@ def parse_bytes(s: int | float | str) -> int | float:
     if len(s) == 0:
         return 0

-    if s[-2:-1].isalpha() and s[-1].isalpha():
-        if s[-1] == "b" or s[-1] == "B":
-            s = s[:-1]
+    if s[-2:-1].isalpha() and s[-1].isalpha() and (s[-1] == "b" or s[-1] == "B"):
+        s = s[:-1]

     units = BYTE_UNITS
     suffix = s[-1].lower()

View File

@@ -43,10 +43,8 @@ docker_version: str | None  # pylint: disable=invalid-name
 try:
     from docker import __version__ as docker_version
-    from docker import auth
-    from docker.errors import APIError, NotFound, TLSParameterError
+    from docker.errors import APIError, TLSParameterError
     from docker.tls import TLSConfig
-    from requests.exceptions import SSLError

     if LooseVersion(docker_version) >= LooseVersion("3.0.0"):
         HAS_DOCKER_PY_3 = True  # pylint: disable=invalid-name
@@ -391,242 +389,6 @@ class AnsibleDockerClientBase(Client):
             )
             self.fail(f"SSL Exception: {error}")

-    def get_container_by_id(self, container_id: str) -> dict[str, t.Any] | None:
-        try:
-            self.log(f"Inspecting container Id {container_id}")
-            result = self.inspect_container(container=container_id)
-            self.log("Completed container inspection")
-            return result
-        except NotFound:
-            return None
-        except Exception as exc:  # pylint: disable=broad-exception-caught
-            self.fail(f"Error inspecting container: {exc}")
-
-    def get_container(self, name: str | None) -> dict[str, t.Any] | None:
-        """
-        Lookup a container and return the inspection results.
-        """
-        if name is None:
-            return None
-
-        search_name = name
-        if not name.startswith("/"):
-            search_name = "/" + name
-
-        result = None
-        try:
-            for container in self.containers(all=True):
-                self.log(f"testing container: {container['Names']}")
-                if (
-                    isinstance(container["Names"], list)
-                    and search_name in container["Names"]
-                ):
-                    result = container
-                    break
-                if container["Id"].startswith(name):
-                    result = container
-                    break
-                if container["Id"] == name:
-                    result = container
-                    break
-        except SSLError as exc:
-            self._handle_ssl_error(exc)
-        except Exception as exc:  # pylint: disable=broad-exception-caught
-            self.fail(f"Error retrieving container list: {exc}")
-
-        if result is None:
-            return None
-
-        return self.get_container_by_id(result["Id"])
-
-    def get_network(
-        self, name: str | None = None, network_id: str | None = None
-    ) -> dict[str, t.Any] | None:
-        """
-        Lookup a network and return the inspection results.
-        """
-        if name is None and network_id is None:
-            return None
-
-        result = None
-
-        if network_id is None:
-            try:
-                for network in self.networks():
-                    self.log(f"testing network: {network['Name']}")
-                    if name == network["Name"]:
-                        result = network
-                        break
-                    if network["Id"].startswith(name):
-                        result = network
-                        break
-            except SSLError as exc:
-                self._handle_ssl_error(exc)
-            except Exception as exc:  # pylint: disable=broad-exception-caught
-                self.fail(f"Error retrieving network list: {exc}")
-
-        if result is not None:
-            network_id = result["Id"]
-
-        if network_id is not None:
-            try:
-                self.log(f"Inspecting network Id {network_id}")
-                result = self.inspect_network(network_id)
-                self.log("Completed network inspection")
-            except NotFound:
-                return None
-            except Exception as exc:  # pylint: disable=broad-exception-caught
-                self.fail(f"Error inspecting network: {exc}")
-
-        return result
-
-    def find_image(self, name: str, tag: str) -> dict[str, t.Any] | None:
-        """
-        Lookup an image (by name and tag) and return the inspection results.
-        """
-        if not name:
-            return None
-
-        self.log(f"Find image {name}:{tag}")
-        images = self._image_lookup(name, tag)
-        if not images:
-            # In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
-            registry, repo_name = auth.resolve_repository_name(name)
-            if registry == "docker.io":
-                # If docker.io is explicitly there in name, the image
-                # is not found in some cases (#41509)
-                self.log(f"Check for docker.io image: {repo_name}")
-                images = self._image_lookup(repo_name, tag)
-                if not images and repo_name.startswith("library/"):
-                    # Sometimes library/xxx images are not found
-                    lookup = repo_name[len("library/") :]
-                    self.log(f"Check for docker.io image: {lookup}")
-                    images = self._image_lookup(lookup, tag)
-                if not images:
-                    # Last case for some Docker versions: if docker.io was not there,
-                    # it can be that the image was not found either
-                    # (https://github.com/ansible/ansible/pull/15586)
-                    lookup = f"{registry}/{repo_name}"
-                    self.log(f"Check for docker.io image: {lookup}")
-                    images = self._image_lookup(lookup, tag)
-                if not images and "/" not in repo_name:
-                    # This seems to be happening with podman-docker
-                    # (https://github.com/ansible-collections/community.docker/issues/291)
-                    lookup = f"{registry}/library/{repo_name}"
-                    self.log(f"Check for docker.io image: {lookup}")
-                    images = self._image_lookup(lookup, tag)
-
-        if len(images) > 1:
-            self.fail(f"Daemon returned more than one result for {name}:{tag}")
-
-        if len(images) == 1:
-            try:
-                inspection = self.inspect_image(images[0]["Id"])
-            except NotFound:
-                self.log(f"Image {name}:{tag} not found.")
-                return None
-            except Exception as exc:  # pylint: disable=broad-exception-caught
-                self.fail(f"Error inspecting image {name}:{tag} - {exc}")
-            return inspection
-
-        self.log(f"Image {name}:{tag} not found.")
-        return None
-
-    def find_image_by_id(
-        self, image_id: str, accept_missing_image: bool = False
-    ) -> dict[str, t.Any] | None:
-        """
-        Lookup an image (by ID) and return the inspection results.
-        """
-        if not image_id:
-            return None
-
-        self.log(f"Find image {image_id} (by ID)")
-        try:
-            inspection = self.inspect_image(image_id)
-        except NotFound as exc:
-            if not accept_missing_image:
-                self.fail(f"Error inspecting image ID {image_id} - {exc}")
-            self.log(f"Image {image_id} not found.")
-            return None
-        except Exception as exc:  # pylint: disable=broad-exception-caught
-            self.fail(f"Error inspecting image ID {image_id} - {exc}")
-        return inspection
-
-    def _image_lookup(self, name: str, tag: str) -> list[dict[str, t.Any]]:
-        """
-        Including a tag in the name parameter sent to the Docker SDK for Python images method
-        does not work consistently. Instead, get the result set for name and manually check
-        if the tag exists.
-        """
-        try:
-            response = self.images(name=name)
-        except Exception as exc:  # pylint: disable=broad-exception-caught
-            self.fail(f"Error searching for image {name} - {exc}")
-        images = response
-        if tag:
-            lookup = f"{name}:{tag}"
-            lookup_digest = f"{name}@{tag}"
-            images = []
-            for image in response:
-                tags = image.get("RepoTags")
-                digests = image.get("RepoDigests")
-                if (tags and lookup in tags) or (digests and lookup_digest in digests):
-                    images = [image]
-                    break
-        return images
-
-    def pull_image(
-        self, name: str, tag: str = "latest", image_platform: str | None = None
-    ) -> tuple[dict[str, t.Any] | None, bool]:
-        """
-        Pull an image
-        """
-        kwargs = {
-            "tag": tag,
-            "stream": True,
-            "decode": True,
-        }
-        if image_platform is not None:
-            kwargs["platform"] = image_platform
-        self.log(f"Pulling image {name}:{tag}")
-        old_tag = self.find_image(name, tag)
-        try:
-            for line in self.pull(name, **kwargs):
-                self.log(line, pretty_print=True)
-                if line.get("error"):
-                    if line.get("errorDetail"):
-                        error_detail = line.get("errorDetail")
-                        self.fail(
-                            f"Error pulling {name} - code: {error_detail.get('code')} message: {error_detail.get('message')}"
-                        )
-                    else:
-                        self.fail(f"Error pulling {name} - {line.get('error')}")
-        except Exception as exc:  # pylint: disable=broad-exception-caught
-            self.fail(f"Error pulling image {name}:{tag} - {exc}")
-
-        new_tag = self.find_image(name, tag)
-
-        return new_tag, old_tag == new_tag
-
-    def inspect_distribution(self, image: str, **kwargs: t.Any) -> dict[str, t.Any]:
-        """
-        Get image digest by directly calling the Docker API when running Docker SDK < 4.0.0
-        since prior versions did not support accessing private repositories.
-        """
-        if self.docker_py_version < LooseVersion("4.0.0"):
-            registry = auth.resolve_repository_name(image)[0]
-            header = auth.get_config_header(self, registry)
-            if header:
-                return self._result(
-                    self._get(
-                        self._url("/distribution/{0}/json", image),
-                        headers={"X-Registry-Auth": header},
-                    ),
-                    json=True,
-                )
-        return super().inspect_distribution(image, **kwargs)
-

 class AnsibleDockerClient(AnsibleDockerClientBase):
     def __init__(
@@ -718,9 +480,8 @@ class AnsibleDockerClient(AnsibleDockerClientBase):
     ) -> None:
         self.option_minimal_versions: dict[str, dict[str, t.Any]] = {}
         for option in self.module.argument_spec:
-            if ignore_params is not None:
-                if option in ignore_params:
-                    continue
+            if ignore_params is not None and option in ignore_params:
+                continue
             self.option_minimal_versions[option] = {}

         self.option_minimal_versions.update(option_minimal_versions)
View File

@@ -519,6 +519,17 @@ class AnsibleDockerClientBase(Client):
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(f"Error inspecting image ID {image_id} - {exc}")

+    @staticmethod
+    def _compare_images(
+        img1: dict[str, t.Any] | None, img2: dict[str, t.Any] | None
+    ) -> bool:
+        if img1 is None or img2 is None:
+            return img1 == img2
+        filter_keys = {"Metadata"}
+        img1_filtered = {k: v for k, v in img1.items() if k not in filter_keys}
+        img2_filtered = {k: v for k, v in img2.items() if k not in filter_keys}
+        return img1_filtered == img2_filtered
+
     def pull_image(
         self, name: str, tag: str = "latest", image_platform: str | None = None
     ) -> tuple[dict[str, t.Any] | None, bool]:
@@ -526,7 +537,7 @@ class AnsibleDockerClientBase(Client):
         Pull an image
         """
         self.log(f"Pulling image {name}:{tag}")
-        old_tag = self.find_image(name, tag)
+        old_image = self.find_image(name, tag)
         try:
             repository, image_tag = parse_repository_tag(name)
             registry, dummy_repo_name = auth.resolve_repository_name(repository)
@@ -563,9 +574,9 @@ class AnsibleDockerClientBase(Client):
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(f"Error pulling image {name}:{tag} - {exc}")

-        new_tag = self.find_image(name, tag)
+        new_image = self.find_image(name, tag)

-        return new_tag, old_tag == new_tag
+        return new_image, self._compare_images(old_image, new_image)

 class AnsibleDockerClient(AnsibleDockerClientBase):
@@ -654,9 +665,8 @@ class AnsibleDockerClient(AnsibleDockerClientBase):
     ) -> None:
         self.option_minimal_versions: dict[str, dict[str, t.Any]] = {}
        for option in self.module.argument_spec:
-            if ignore_params is not None:
-                if option in ignore_params:
-                    continue
+            if ignore_params is not None and option in ignore_params:
+                continue
             self.option_minimal_versions[option] = {}

         self.option_minimal_versions.update(option_minimal_versions)
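
The `_compare_images` helper added above replaces the plain `old == new` dict comparison for pull idempotency; it drops the `Metadata` key, which can differ between inspections even when the image itself is unchanged (assumption: fields like `LastTagTime` in the inspect output; the diff itself only shows that `Metadata` is excluded). A toy illustration of the semantics:

# Toy illustration of the comparison semantics of _compare_images above;
# the dicts are fake, not real `docker inspect` output.
def compare_images(img1, img2):
    if img1 is None or img2 is None:
        return img1 == img2
    filter_keys = {"Metadata"}
    return {k: v for k, v in img1.items() if k not in filter_keys} == {
        k: v for k, v in img2.items() if k not in filter_keys
    }


old = {"Id": "sha256:abc", "Metadata": {"LastTagTime": "2025-11-15T10:00:00Z"}}
new = {"Id": "sha256:abc", "Metadata": {"LastTagTime": "2025-11-16T09:00:00Z"}}
print(compare_images(old, new))  # True - only Metadata differs, so no change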

View File

@@ -126,13 +126,16 @@ class AnsibleDockerClientBase:
         self._info: dict[str, t.Any] | None = None

         if needs_api_version:
+            api_version_string = self._version["Server"].get(
+                "ApiVersion"
+            ) or self._version["Server"].get("APIVersion")
             if not isinstance(self._version.get("Server"), dict) or not isinstance(
-                self._version["Server"].get("ApiVersion"), str
+                api_version_string, str
             ):
                 self.fail(
                     "Cannot determine Docker Daemon information. Are you maybe using podman instead of docker?"
                 )
-            self.docker_api_version_str = to_text(self._version["Server"]["ApiVersion"])
+            self.docker_api_version_str = to_text(api_version_string)
             self.docker_api_version = LooseVersion(self.docker_api_version_str)
             min_docker_api_version = min_docker_api_version or "1.25"
             if self.docker_api_version < LooseVersion(min_docker_api_version):
@@ -194,7 +197,11 @@ class AnsibleDockerClientBase:
             data = json.loads(stdout)
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(
-                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
+                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
+                cmd=self._compose_cmd_str(args),
+                rc=rc,
+                stdout=stdout,
+                stderr=stderr,
             )
         return rc, data, stderr

@@ -220,7 +227,11 @@ class AnsibleDockerClientBase:
                 result.append(json.loads(line))
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(
-                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}"
+                f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
+                cmd=self._compose_cmd_str(args),
+                rc=rc,
+                stdout=stdout,
+                stderr=stderr,
            )
         return rc, result, stderr

View File

@@ -132,7 +132,7 @@ DOCKER_PULL_PROGRESS_DONE = frozenset(
         "Pull complete",
     )
 )
-DOCKER_PULL_PROGRESS_WORKING = frozenset(
+DOCKER_PULL_PROGRESS_WORKING_OLD = frozenset(
     (
         "Pulling fs layer",
         "Waiting",
@@ -141,6 +141,7 @@ DOCKER_PULL_PROGRESS_WORKING = frozenset(
         "Extracting",
     )
 )
+DOCKER_PULL_PROGRESS_WORKING = frozenset(DOCKER_PULL_PROGRESS_WORKING_OLD | {"Working"})

 class ResourceType:
@@ -191,7 +192,7 @@ _RE_PULL_EVENT = re.compile(
 )

 _DOCKER_PULL_PROGRESS_WD = sorted(
-    DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING
+    DOCKER_PULL_PROGRESS_DONE | DOCKER_PULL_PROGRESS_WORKING_OLD
 )

 _RE_PULL_PROGRESS = re.compile(
@@ -494,7 +495,17 @@ def parse_json_events(
                 # {"dry-run":true,"id":"ansible-docker-test-dc713f1f-container ==> ==>","text":"naming to ansible-docker-test-dc713f1f-image"}
                 # (The longer form happens since Docker Compose 2.39.0)
                 continue
-            if isinstance(resource_id, str) and " " in resource_id:
+            if (
+                status in ("Working", "Done")
+                and isinstance(line_data.get("parent_id"), str)
+                and line_data["parent_id"].startswith("Image ")
+            ):
+                # Compose 5.0.0+:
+                # {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}
+                # {"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}
+                resource_type = ResourceType.IMAGE_LAYER
+                resource_id = line_data["parent_id"][len("Image ") :]
+            elif isinstance(resource_id, str) and " " in resource_id:
                 resource_type_str, resource_id = resource_id.split(" ", 1)
                 try:
                     resource_type = ResourceType.from_docker_compose_event(
@@ -513,7 +524,7 @@ def parse_json_events(
                 status, text = text, status
             elif (
                 text in DOCKER_PULL_PROGRESS_DONE
-                or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING
+                or line_data.get("text") in DOCKER_PULL_PROGRESS_WORKING_OLD
             ):
                 resource_type = ResourceType.IMAGE_LAYER
                 status, text = text, status
@@ -556,8 +567,8 @@ def parse_events(
     stderr_lines = stderr.splitlines()
     if stderr_lines and stderr_lines[-1] == b"":
         del stderr_lines[-1]
-    for index, line in enumerate(stderr_lines):
-        line = to_text(line.strip())
+    for index, line_b in enumerate(stderr_lines):
+        line = to_text(line_b.strip())
         if not line:
             continue
         warn_missing_dry_run_prefix = False
@@ -690,9 +701,7 @@ def emit_warnings(

 def is_failed(events: Sequence[Event], rc: int) -> bool:
-    if rc:
-        return True
-    return False
+    return bool(rc)

 def update_failed(
View File

@@ -479,9 +479,8 @@ def fetch_file(
             reader = tar.extractfile(member)
             if reader:
-                with reader as in_f:
-                    with open(b_out_path, "wb") as out_f:
-                        shutil.copyfileobj(in_f, out_f)
+                with reader as in_f, open(b_out_path, "wb") as out_f:
+                    shutil.copyfileobj(in_f, out_f)
                 return in_path

 def process_symlink(in_path: str, member: tarfile.TarInfo) -> str:

View File

@@ -659,7 +659,9 @@ def _preprocess_env(
         if not isinstance(value, str):
             module.fail_json(
                 msg="Non-string value found for env option. Ambiguous env options must be "
-                f"wrapped in quotes to avoid them being interpreted. Key: {name}"
+                "wrapped in quotes to avoid them being interpreted when directly specified "
+                "in YAML, or explicitly converted to strings when the option is templated. "
+                f"Key: {name}"
             )
         final_env[name] = to_text(value, errors="surrogate_or_strict")
     formatted_env = []
@@ -890,14 +892,15 @@ def _preprocess_mounts(
                 check_collision(container, "volumes")
                 new_vols.append(f"{host}:{container}:{mode}")
                 continue
-            if len(parts) == 2:
-                if not _is_volume_permissions(parts[1]) and re.match(
-                    r"[.~]", parts[0]
-                ):
-                    host = os.path.abspath(os.path.expanduser(parts[0]))
-                    check_collision(parts[1], "volumes")
-                    new_vols.append(f"{host}:{parts[1]}:rw")
-                    continue
+            if (
+                len(parts) == 2
+                and not _is_volume_permissions(parts[1])
+                and re.match(r"[.~]", parts[0])
+            ):
+                host = os.path.abspath(os.path.expanduser(parts[0]))
+                check_collision(parts[1], "volumes")
+                new_vols.append(f"{host}:{parts[1]}:rw")
+                continue
             check_collision(parts[min(1, len(parts) - 1)], "volumes")
             new_vols.append(vol)
         values["volumes"] = new_vols
@@ -946,7 +949,8 @@ def _preprocess_log(
                 value = to_text(v, errors="surrogate_or_strict")
                 module.warn(
                     f"Non-string value found for log_options option '{k}'. The value is automatically converted to {value!r}. "
-                    "If this is not correct, or you want to avoid such warnings, please quote the value."
+                    "If this is not correct, or you want to avoid such warnings, please quote the value,"
+                    " or explicitly convert the values to strings when templating them."
                 )
                 v = value
             options[k] = v
@@ -1015,7 +1019,7 @@ def _preprocess_ports(
             else:
                 port_binds = len(container_ports) * [(ipaddr,)]
         else:
-            return module.fail_json(
+            module.fail_json(
                 msg=f'Invalid port description "{port}" - expected 1 to 3 colon-separated parts, but got {p_len}. '
                 "Maybe you forgot to use square brackets ([...]) around an IPv6 address?"
             )
@@ -1036,38 +1040,43 @@ def _preprocess_ports(
             binds[idx] = bind
         values["published_ports"] = binds

-    exposed = []
+    exposed: set[tuple[int, str]] = set()
     if "exposed_ports" in values:
         for port in values["exposed_ports"]:
             port = to_text(port, errors="surrogate_or_strict").strip()
             protocol = "tcp"
-            matcher = re.search(r"(/.+$)", port)
-            if matcher:
-                protocol = matcher.group(1).replace("/", "")
-                port = re.sub(r"/.+$", "", port)
-            exposed.append((port, protocol))
+            parts = port.split("/", maxsplit=1)
+            if len(parts) == 2:
+                port, protocol = parts
+            parts = port.split("-", maxsplit=1)
+            if len(parts) < 2:
+                try:
+                    exposed.add((int(port), protocol))
+                except ValueError as e:
+                    module.fail_json(msg=f"Cannot parse port {port!r}: {e}")
+            else:
+                try:
+                    start_port = int(parts[0])
+                    end_port = int(parts[1])
+                    if start_port > end_port:
+                        raise ValueError(
+                            "start port must be smaller or equal to end port."
+                        )
+                except ValueError as e:
+                    module.fail_json(msg=f"Cannot parse port range {port!r}: {e}")
+                for port in range(start_port, end_port + 1):
+                    exposed.add((port, protocol))

     if "published_ports" in values:
         # Any published port should also be exposed
         for publish_port in values["published_ports"]:
-            match = False
             if isinstance(publish_port, str) and "/" in publish_port:
                 port, protocol = publish_port.split("/")
                 port = int(port)
             else:
                 protocol = "tcp"
                 port = int(publish_port)
-            for exposed_port in exposed:
-                if exposed_port[1] != protocol:
-                    continue
-                if isinstance(exposed_port[0], str) and "-" in exposed_port[0]:
-                    start_port, end_port = exposed_port[0].split("-")
-                    if int(start_port) <= port <= int(end_port):
-                        match = True
-                elif exposed_port[0] == port:
-                    match = True
-            if not match:
-                exposed.append((port, protocol))
-    values["ports"] = exposed
+            exposed.add((port, protocol))
+    values["ports"] = sorted(exposed)
     return values

View File

@ -29,6 +29,7 @@ from ansible_collections.community.docker.plugins.module_utils._common_api impor
RequestException, RequestException,
) )
from ansible_collections.community.docker.plugins.module_utils._module_container.base import ( from ansible_collections.community.docker.plugins.module_utils._module_container.base import (
_DEFAULT_IP_REPLACEMENT_STRING,
OPTION_AUTO_REMOVE, OPTION_AUTO_REMOVE,
OPTION_BLKIO_WEIGHT, OPTION_BLKIO_WEIGHT,
OPTION_CAP_DROP, OPTION_CAP_DROP,
@ -127,11 +128,6 @@ if t.TYPE_CHECKING:
Sentry = object Sentry = object
_DEFAULT_IP_REPLACEMENT_STRING = (
"[[DEFAULT_IP:iewahhaeB4Sae6Aen8IeShairoh4zeph7xaekoh8Geingunaesaeweiy3ooleiwi]]"
)
_SENTRY: Sentry = object() _SENTRY: Sentry = object()
@ -219,12 +215,11 @@ class DockerAPIEngineDriver(EngineDriver[AnsibleDockerClient]):
return False return False
def is_container_running(self, container: dict[str, t.Any]) -> bool: def is_container_running(self, container: dict[str, t.Any]) -> bool:
if container.get("State"): return bool(
if container["State"].get("Running") and not container["State"].get( container.get("State")
"Ghost", False and container["State"].get("Running")
): and not container["State"].get("Ghost", False)
return True )
return False
def is_container_paused(self, container: dict[str, t.Any]) -> bool: def is_container_paused(self, container: dict[str, t.Any]) -> bool:
if container.get("State"): if container.get("State"):
@ -1706,9 +1701,8 @@ def _get_expected_values_mounts(
parts = vol.split(":") parts = vol.split(":")
if len(parts) == 3: if len(parts) == 3:
continue continue
if len(parts) == 2: if len(parts) == 2 and not _is_volume_permissions(parts[1]):
if not _is_volume_permissions(parts[1]): continue
continue
expected_vols[vol] = {} expected_vols[vol] = {}
if expected_vols: if expected_vols:
expected_values["volumes"] = expected_vols expected_values["volumes"] = expected_vols
@ -1805,9 +1799,8 @@ def _set_values_mounts(
parts = volume.split(":") parts = volume.split(":")
if len(parts) == 3: if len(parts) == 3:
continue continue
if len(parts) == 2: if len(parts) == 2 and not _is_volume_permissions(parts[1]):
if not _is_volume_permissions(parts[1]): continue
continue
volumes[volume] = {} volumes[volume] = {}
data["Volumes"] = volumes data["Volumes"] = volumes
if "volume_binds" in values: if "volume_binds" in values:
@ -1973,10 +1966,20 @@ def _get_values_ports(
config = container["Config"] config = container["Config"]
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517 # "ExposedPorts": null returns None type & causes AttributeError - PR #5517
expected_exposed: list[str] = []
if config.get("ExposedPorts") is not None: if config.get("ExposedPorts") is not None:
expected_exposed = [_normalize_port(p) for p in config.get("ExposedPorts", {})] for port_and_protocol in config.get("ExposedPorts", {}):
else: port, protocol = _normalize_port(port_and_protocol).rsplit("/")
expected_exposed = [] try:
start, end = port.split("-", 1)
start_port = int(start)
end_port = int(end)
for port_no in range(start_port, end_port + 1):
expected_exposed.append(f"{port_no}/{protocol}")
continue
except ValueError:
# Either it is not a range, or a broken one - in both cases, simply add the original form
expected_exposed.append(f"{port}/{protocol}")
return { return {
"published_ports": host_config.get("PortBindings"), "published_ports": host_config.get("PortBindings"),
@@ -2030,17 +2033,14 @@ def _get_expected_values_ports(
         ]
         expected_values["published_ports"] = expected_bound_ports

-    image_ports = []
+    image_ports: set[str] = set()
     if image:
         image_exposed_ports = image["Config"].get("ExposedPorts") or {}
-        image_ports = [_normalize_port(p) for p in image_exposed_ports]
-    param_ports = []
+        image_ports = {_normalize_port(p) for p in image_exposed_ports}
+    param_ports: set[str] = set()
     if "ports" in values:
-        param_ports = [
-            to_text(p[0], errors="surrogate_or_strict") + "/" + p[1]
-            for p in values["ports"]
-        ]
-    result = list(set(image_ports + param_ports))
+        param_ports = {f"{p[0]}/{p[1]}" for p in values["ports"]}
+    result = sorted(image_ports | param_ports)
     expected_values["exposed_ports"] = result

     if "publish_all_ports" in values:
@@ -2089,16 +2089,26 @@ def _preprocess_value_ports(
     if "published_ports" not in values:
         return values
     found = False
-    for port_spec in values["published_ports"].values():
-        if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
-            found = True
-            break
+    for port_specs in values["published_ports"].values():
+        if not isinstance(port_specs, list):
+            port_specs = [port_specs]
+        for port_spec in port_specs:
+            if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
+                found = True
+                break
     if not found:
         return values
     default_ip = _get_default_host_ip(module, client)
-    for port, port_spec in values["published_ports"].items():
-        if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
-            values["published_ports"][port] = tuple([default_ip] + list(port_spec[1:]))
+    for port, port_specs in values["published_ports"].items():
+        if isinstance(port_specs, list):
+            for index, port_spec in enumerate(port_specs):
+                if port_spec[0] == _DEFAULT_IP_REPLACEMENT_STRING:
+                    port_specs[index] = tuple([default_ip] + list(port_spec[1:]))
+        else:
+            if port_specs[0] == _DEFAULT_IP_REPLACEMENT_STRING:
+                values["published_ports"][port] = tuple(
+                    [default_ip] + list(port_specs[1:])
+                )
     return values
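The shape change handled above is that a value in published_ports may now be either a single spec tuple or a list of spec tuples when the same container port is published to several host ports. A minimal illustration of the two shapes both branches must handle (values made up):

    # One container port published once: a single (ip, host_port) spec.
    single = {"80/tcp": ("0.0.0.0", 8000)}

    # The same container port published twice: a list of specs, which is why
    # the code iterates and patches each entry in place.
    multiple = {"80/tcp": [("0.0.0.0", 8000), ("0.0.0.0", 10000)]}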

View File

@@ -25,6 +25,7 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
     DockerBaseClass,
     compare_generic,
     is_image_name_id,
+    normalize_ip_address,
     sanitize_result,
 )
@@ -217,11 +218,13 @@ class ContainerManager(DockerBaseClass, t.Generic[Client]):
                     "The wildcard can only be used with comparison modes 'strict' and 'ignore'!"
                 )
             for option in self.all_options.values():
-                if option.name == "networks":
-                    # `networks` is special: only update if
-                    # some value is actually specified
-                    if self.module.params["networks"] is None:
-                        continue
+                # `networks` is special: only update if
+                # some value is actually specified
+                if (
+                    option.name == "networks"
+                    and self.module.params["networks"] is None
+                ):
+                    continue
                 option.comparison = value
         # Now process all other comparisons.
         comp_aliases_used: dict[str, str] = {}
@@ -679,13 +682,17 @@ class ContainerManager(DockerBaseClass, t.Generic[Client]):
     def _image_is_different(
         self, image: dict[str, t.Any] | None, container: Container
     ) -> bool:
-        if image and image.get("Id"):
-            if container and container.image:
-                if image.get("Id") != container.image:
-                    self.diff_tracker.add(
-                        "image", parameter=image.get("Id"), active=container.image
-                    )
-                    return True
+        if (
+            image
+            and image.get("Id")
+            and container
+            and container.image
+            and image.get("Id") != container.image
+        ):
+            self.diff_tracker.add(
+                "image", parameter=image.get("Id"), active=container.image
+            )
+            return True
         return False

     def _compose_create_parameters(self, image: str) -> dict[str, t.Any]:
@@ -919,22 +926,21 @@ class ContainerManager(DockerBaseClass, t.Generic[Client]):
             else:
                 diff = False
                 network_info_ipam = network_info.get("IPAMConfig") or {}
-                if network.get("ipv4_address") and network[
-                    "ipv4_address"
-                ] != network_info_ipam.get("IPv4Address"):
+                if network.get("ipv4_address") and normalize_ip_address(
+                    network["ipv4_address"]
+                ) != normalize_ip_address(network_info_ipam.get("IPv4Address")):
                     diff = True
-                if network.get("ipv6_address") and network[
-                    "ipv6_address"
-                ] != network_info_ipam.get("IPv6Address"):
+                if network.get("ipv6_address") and normalize_ip_address(
+                    network["ipv6_address"]
+                ) != normalize_ip_address(network_info_ipam.get("IPv6Address")):
                     diff = True
-                if network.get("aliases"):
-                    if not compare_generic(
-                        network["aliases"],
-                        network_info.get("Aliases"),
-                        "allow_more_present",
-                        "set",
-                    ):
-                        diff = True
+                if network.get("aliases") and not compare_generic(
+                    network["aliases"],
+                    network_info.get("Aliases"),
+                    "allow_more_present",
+                    "set",
+                ):
+                    diff = True
                 if network.get("links"):
                     expected_links = []
                     for link, alias in network["links"]:

View File

@@ -73,7 +73,7 @@ class DockerSocketHandlerBase:
     def __exit__(
         self,
-        type_: t.Type[BaseException] | None,
+        type_: type[BaseException] | None,
         value: BaseException | None,
         tb: TracebackType | None,
     ) -> None:
@@ -199,10 +199,9 @@ class DockerSocketHandlerBase:
             if event & selectors.EVENT_WRITE != 0:
                 self._write()
         result = len(events)
-        if self._paramiko_read_workaround and len(self._write_buffer) > 0:
-            if self._sock.send_ready():  # type: ignore
-                self._write()
-                result += 1
+        if self._paramiko_read_workaround and len(self._write_buffer) > 0 and self._sock.send_ready():  # type: ignore
+            self._write()
+            result += 1
         return result > 0

     def is_eof(self) -> bool:

View File

@@ -64,8 +64,8 @@ def shutdown_writing(
         # probably: "TypeError: shutdown() takes 1 positional argument but 2 were given"
         log(f"Shutting down for writing not possible; trying shutdown instead: {e}")
         sock.shutdown()  # type: ignore
-    elif isinstance(sock, getattr(pysocket, "SocketIO")):
-        sock._sock.shutdown(pysocket.SHUT_WR)
+    elif isinstance(sock, pysocket.SocketIO):  # type: ignore
+        sock._sock.shutdown(pysocket.SHUT_WR)  # type: ignore[unreachable]
     else:
         log("No idea how to signal end of writing")

View File

@@ -115,9 +115,7 @@ class AnsibleDockerSwarmClient(AnsibleDockerClient):
         :return: True if node is Swarm Worker, False otherwise
         """
-        if self.check_if_swarm_node() and not self.check_if_swarm_manager():
-            return True
-        return False
+        return bool(self.check_if_swarm_node() and not self.check_if_swarm_manager())

     def check_if_swarm_node_is_down(
         self, node_id: str | None = None, repeat_check: int = 1
@@ -181,9 +179,8 @@ class AnsibleDockerSwarmClient(AnsibleDockerClient):
                 self.fail(
                     "Cannot inspect node: To inspect node execute module on Swarm Manager"
                 )
-            if exc.status_code == 404:
-                if skip_missing:
-                    return None
+            if exc.status_code == 404 and skip_missing:
+                return None
             self.fail(f"Error while reading from Swarm manager: {exc}")
         except Exception as exc:  # pylint: disable=broad-exception-caught
             self.fail(f"Error inspecting swarm node: {exc}")
@@ -191,19 +188,18 @@ class AnsibleDockerSwarmClient(AnsibleDockerClient):
         json_str = json.dumps(node_info, ensure_ascii=False)
         node_info = json.loads(json_str)

-        if "ManagerStatus" in node_info:
-            if node_info["ManagerStatus"].get("Leader"):
-                # This is workaround of bug in Docker when in some cases the Leader IP is 0.0.0.0
-                # Check moby/moby#35437 for details
-                count_colons = node_info["ManagerStatus"]["Addr"].count(":")
-                if count_colons == 1:
-                    swarm_leader_ip = (
-                        node_info["ManagerStatus"]["Addr"].split(":", 1)[0]
-                        or node_info["Status"]["Addr"]
-                    )
-                else:
-                    swarm_leader_ip = node_info["Status"]["Addr"]
-                node_info["Status"]["Addr"] = swarm_leader_ip
+        if "ManagerStatus" in node_info and node_info["ManagerStatus"].get("Leader"):
+            # This is workaround of bug in Docker when in some cases the Leader IP is 0.0.0.0
+            # Check moby/moby#35437 for details
+            count_colons = node_info["ManagerStatus"]["Addr"].count(":")
+            if count_colons == 1:
+                swarm_leader_ip = (
+                    node_info["ManagerStatus"]["Addr"].split(":", 1)[0]
+                    or node_info["Status"]["Addr"]
+                )
+            else:
+                swarm_leader_ip = node_info["Status"]["Addr"]
+            node_info["Status"]["Addr"] = swarm_leader_ip
         return node_info

     def get_all_nodes_inspect(self) -> list[dict[str, t.Any]]:

View File

@@ -7,6 +7,7 @@
 from __future__ import annotations

+import ipaddress
 import json
 import re
 import typing as t
@@ -27,7 +28,7 @@ if t.TYPE_CHECKING:
     from ._common_api import AnsibleDockerClientBase as CAPIADCB
     from ._common_cli import AnsibleDockerClientBase as CCLIADCB

-    Client = t.Union[CADCB, CAPIADCB, CCLIADCB]
+    Client = t.Union[CADCB, CAPIADCB, CCLIADCB]  # noqa: UP007

 DEFAULT_DOCKER_HOST = "unix:///var/run/docker.sock"
@@ -94,9 +95,7 @@ BYTE_SUFFIXES = ["B", "KB", "MB", "GB", "TB", "PB"]

 def is_image_name_id(name: str) -> bool:
     """Check whether the given image name is in fact an image ID (hash)."""
-    if re.match("^sha256:[0-9a-fA-F]{64}$", name):
-        return True
-    return False
+    return bool(re.match("^sha256:[0-9a-fA-F]{64}$", name))

 def is_valid_tag(tag: str, allow_empty: bool = False) -> bool:
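The simplification is behavior-preserving: re.match returns a match object or None, so bool() yields exactly what the old two-branch version did. For example:

    assert is_image_name_id("sha256:" + "0" * 64)
    assert not is_image_name_id("ubuntu:22.04")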
@@ -507,3 +506,47 @@ def omit_none_from_dict(d: dict[str, t.Any]) -> dict[str, t.Any]:
    Return a copy of the dictionary with all keys with value None omitted.
    """
    return {k: v for (k, v) in d.items() if v is not None}
@t.overload
def normalize_ip_address(ip_address: str) -> str: ...
@t.overload
def normalize_ip_address(ip_address: str | None) -> str | None: ...
def normalize_ip_address(ip_address: str | None) -> str | None:
"""
Given an IP address as a string, normalize it so that it can be
used to compare IP addresses as strings.
"""
if ip_address is None:
return None
try:
return ipaddress.ip_address(ip_address).compressed
except ValueError:
# Fallback for invalid addresses: simply return the input
return ip_address
@t.overload
def normalize_ip_network(network: str) -> str: ...
@t.overload
def normalize_ip_network(network: str | None) -> str | None: ...
def normalize_ip_network(network: str | None) -> str | None:
"""
Given a network in CIDR notation as a string, normalize it so that it can be
used to compare networks as strings.
"""
if network is None:
return None
try:
return ipaddress.ip_network(network).compressed
except ValueError:
# Fallback for invalid networks: simply return the input
return network
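To see why this normalization matters when comparing as strings, note that the same address or network can be spelled several equivalent ways; the ipaddress module collapses them into one canonical compressed form:

    import ipaddress

    # Two spellings of the same IPv6 address differ as strings ...
    assert "2001:0db8::0001" != "2001:db8::1"
    # ... but agree once compressed, which is what normalize_ip_address returns.
    assert ipaddress.ip_address("2001:0db8::0001").compressed == "2001:db8::1"
    # The same holds for networks in CIDR notation.
    assert ipaddress.ip_network("2001:db8:0:0::/64").compressed == "2001:db8::/64"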

View File

@@ -585,10 +585,10 @@ class ServicesManager(BaseComposeManager):
         return args

     def _are_containers_stopped(self) -> bool:
-        for container in self.list_containers_raw():
-            if container["State"] not in ("created", "exited", "stopped", "killed"):
-                return False
-        return True
+        return all(
+            container["State"] in ("created", "exited", "stopped", "killed")
+            for container in self.list_containers_raw()
+        )

     def cmd_stop(self) -> dict[str, t.Any]:
         # Since 'docker compose stop' **always** claims it is stopping containers, even if they are already
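The rewritten predicate is equivalent to the loop it replaces: all() returns True for an empty container list and False as soon as one container is in a non-terminal state. A quick illustration with made-up 'docker compose ps'-style records:

    containers = [{"State": "exited"}, {"State": "running"}]
    assert not all(
        c["State"] in ("created", "exited", "stopped", "killed") for c in containers
    )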

View File

@@ -210,13 +210,14 @@ class ExecManager(BaseComposeManager):
             self.stdin += "\n"
         if self.env is not None:
-            for name, value in list(self.env.items()):
+            for name, value in self.env.items():
                 if not isinstance(value, str):
                     self.fail(
                         "Non-string value found for env option. Ambiguous env options must be "
-                        f"wrapped in quotes to avoid them being interpreted. Key: {name}"
+                        "wrapped in quotes to avoid them being interpreted when directly specified "
+                        "in YAML, or explicitly converted to strings when the option is templated. "
+                        f"Key: {name}"
                     )
-                self.env[name] = to_text(value, errors="surrogate_or_strict")

     def get_exec_cmd(self, dry_run: bool) -> list[str]:
         args = self.get_base_args(plain_progress=True) + ["exec"]
@@ -250,11 +251,11 @@ class ExecManager(BaseComposeManager):
             kwargs["data"] = self.stdin.encode("utf-8")
         if self.detach:
             kwargs["check_rc"] = True
-        rc, stdout, stderr = self.client.call_cli(*args, **kwargs)
+        rc, stdout_b, stderr_b = self.client.call_cli(*args, **kwargs)
         if self.detach:
             return {}
-        stdout = to_text(stdout)
-        stderr = to_text(stderr)
+        stdout = to_text(stdout_b)
+        stderr = to_text(stderr_b)
         if self.strip_empty_ends:
             stdout = stdout.rstrip("\r\n")
             stderr = stderr.rstrip("\r\n")

View File

@@ -296,13 +296,14 @@ class ExecManager(BaseComposeManager):
             self.stdin += "\n"
         if self.env is not None:
-            for name, value in list(self.env.items()):
+            for name, value in self.env.items():
                 if not isinstance(value, str):
                     self.fail(
                         "Non-string value found for env option. Ambiguous env options must be "
-                        f"wrapped in quotes to avoid them being interpreted. Key: {name}"
+                        "wrapped in quotes to avoid them being interpreted when directly specified "
+                        "in YAML, or explicitly converted to strings when the option is templated. "
+                        f"Key: {name}"
                     )
-                self.env[name] = to_text(value, errors="surrogate_or_strict")

     def get_run_cmd(self, dry_run: bool) -> list[str]:
         args = self.get_base_args(plain_progress=True) + ["run"]
@@ -368,13 +369,13 @@ class ExecManager(BaseComposeManager):
             kwargs["data"] = self.stdin.encode("utf-8")
         if self.detach:
             kwargs["check_rc"] = True
-        rc, stdout, stderr = self.client.call_cli(*args, **kwargs)
+        rc, stdout_b, stderr_b = self.client.call_cli(*args, **kwargs)
         if self.detach:
             return {
-                "container_id": stdout.strip(),
+                "container_id": to_text(stdout_b.strip()),
             }
-        stdout = to_text(stdout)
-        stderr = to_text(stderr)
+        stdout = to_text(stdout_b)
+        stderr = to_text(stderr_b)
         if self.strip_empty_ends:
             stdout = stdout.rstrip("\r\n")
             stderr = stderr.rstrip("\r\n")

View File

@@ -287,20 +287,20 @@ def are_fileobjs_equal_read_first(

 def is_container_file_not_regular_file(container_stat: dict[str, t.Any]) -> bool:
-    for bit in (
-        # https://pkg.go.dev/io/fs#FileMode
-        32 - 1,  # ModeDir
-        32 - 4,  # ModeTemporary
-        32 - 5,  # ModeSymlink
-        32 - 6,  # ModeDevice
-        32 - 7,  # ModeNamedPipe
-        32 - 8,  # ModeSocket
-        32 - 11,  # ModeCharDevice
-        32 - 13,  # ModeIrregular
-    ):
-        if container_stat["mode"] & (1 << bit) != 0:
-            return True
-    return False
+    return any(
+        container_stat["mode"] & 1 << bit != 0
+        for bit in (
+            # https://pkg.go.dev/io/fs#FileMode
+            32 - 1,  # ModeDir
+            32 - 4,  # ModeTemporary
+            32 - 5,  # ModeSymlink
+            32 - 6,  # ModeDevice
+            32 - 7,  # ModeNamedPipe
+            32 - 8,  # ModeSocket
+            32 - 11,  # ModeCharDevice
+            32 - 13,  # ModeIrregular
+        )
+    )

 def get_container_file_mode(container_stat: dict[str, t.Any]) -> int:
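The bit positions come from Go's io/fs.FileMode, whose flag bits occupy the most significant bits of a 32-bit value (ModeDir is bit 31, hence 32 - 1, and so on). A minimal sketch of testing a single flag, with the mode value made up for illustration:

    GO_MODE_DIR = 1 << (32 - 1)  # io/fs.ModeDir: the top bit of the 32-bit mode

    mode = GO_MODE_DIR | 0o755  # a directory with 0755 permissions
    assert mode & GO_MODE_DIR != 0  # flag set: not a regular file
    assert 0o644 & GO_MODE_DIR == 0  # plain 0644 mode: regular file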
@@ -420,7 +420,7 @@ def retrieve_diff(

 def is_binary(content: bytes) -> bool:
-    if b"\x00" in content:
+    if b"\x00" in content:  # noqa: SIM103
         return True
     # TODO: better detection
     # (ansible-core also just checks for 0x00, and even just sticks to the first 8k, so this is not too bad...)
@@ -695,11 +695,10 @@ def is_file_idempotent(
             mf = tar.extractfile(member)
             if mf is None:
                 raise AssertionError("Member should be present for regular file")
-            with mf as tar_f:
-                with open(managed_path, "rb") as local_f:
-                    is_equal = are_fileobjs_equal_with_diff_of_first(
-                        tar_f, local_f, member.size, diff, max_file_size_for_diff, in_path
-                    )
+            with mf as tar_f, open(managed_path, "rb") as local_f:
+                is_equal = are_fileobjs_equal_with_diff_of_first(
+                    tar_f, local_f, member.size, diff, max_file_size_for_diff, in_path
+                )
             return container_path, mode, is_equal

 def process_symlink(in_path: str, member: tarfile.TarInfo) -> tuple[str, int, bool]:

View File

@@ -221,16 +221,17 @@ def main() -> None:
     stdin: str | None = client.module.params["stdin"]
     strip_empty_ends: bool = client.module.params["strip_empty_ends"]
     tty: bool = client.module.params["tty"]
-    env: dict[str, t.Any] = client.module.params["env"]
+    env: dict[str, t.Any] | None = client.module.params["env"]

     if env is not None:
-        for name, value in list(env.items()):
+        for name, value in env.items():
             if not isinstance(value, str):
                 client.module.fail_json(
                     msg="Non-string value found for env option. Ambiguous env options must be "
-                    f"wrapped in quotes to avoid them being interpreted. Key: {name}"
+                    "wrapped in quotes to avoid them being interpreted when directly specified "
+                    "in YAML, or explicitly converted to strings when the option is templated. "
+                    f"Key: {name}"
                 )
-            env[name] = to_text(value, errors="surrogate_or_strict")

     if command is not None:
         argv = shlex.split(command)
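The reworded message describes where the ambiguity comes from: unquoted YAML scalars are typed by the YAML parser before the module ever sees them, so an env value written as 5 or true arrives as a non-string. A short demonstration with PyYAML (assuming the pyyaml package is available):

    import yaml

    loaded = yaml.safe_load("max_file: 5\nflag: true\nquoted: '5'")
    assert loaded == {"max_file": 5, "flag": True, "quoted": "5"}
    # Only the quoted value stays a string, which is what the check above enforces.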

View File

@@ -21,6 +21,8 @@ description:
 notes:
   - Building images is done using Docker daemon's API. It is not possible to use BuildKit / buildx this way. Use M(community.docker.docker_image_build)
     to build images with BuildKit.
+  - Exporting images is generally not idempotent. It depends on whether the image ID equals the IDs found in the generated tarball's C(manifest.json).
+    This was the case with the default storage backend up to Docker 28, but seems to have changed in Docker 29.
 extends_documentation_fragment:
   - community.docker._docker.api_documentation
   - community.docker._attributes
@@ -803,7 +805,7 @@ class ImageManager(DockerBaseClass):
                     if line.get("errorDetail"):
                         raise RuntimeError(line["errorDetail"]["message"])
                     status = line.get("status")
-                    if status == "Pushing":
+                    if status in ("Pushing", "Pushed"):
                         changed = True
             self.results["changed"] = changed
         except Exception as exc:  # pylint: disable=broad-exception-caught
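Docker streams push progress as JSON lines whose status field changes per layer; the broadened check counts a layer reported as Pushed the same as one reported as Pushing, since the intermediate Pushing event apparently cannot be relied upon from Docker 29 on. A hedged sketch of the detection over a made-up event stream:

    import json

    stream = [
        '{"status": "Preparing", "id": "a1b2c3"}',
        '{"status": "Pushed", "id": "a1b2c3"}',
    ]
    assert any(
        json.loads(line).get("status") in ("Pushing", "Pushed") for line in stream
    )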
@@ -902,11 +904,13 @@ class ImageManager(DockerBaseClass):
                 buildargs[key] = to_text(value)

         container_limits = self.container_limits or {}
-        for key in container_limits.keys():
+        for key in container_limits:
             if key not in CONTAINER_LIMITS_KEYS:
                 raise DockerException(f"Invalid container_limits key {key}")

-        dockerfile = self.dockerfile
+        dockerfile: tuple[str, str | None] | tuple[None, None] | str | None = (
+            self.dockerfile
+        )
         if self.build_path.startswith(
             ("http://", "https://", "git://", "github.com/", "git@")
         ):
@@ -924,7 +928,8 @@ class ImageManager(DockerBaseClass):
                         [line.strip() for line in f.read().splitlines()],
                     )
                 )
-            dockerfile_data = process_dockerfile(dockerfile, self.build_path)
+            dockerfile_data = process_dockerfile(self.dockerfile, self.build_path)
+            dockerfile = dockerfile_data
             context = tar(
                 self.build_path, exclude=exclude, dockerfile=dockerfile_data, gzip=False
             )
@@ -1207,13 +1212,13 @@ def main() -> None:
     if not is_valid_tag(client.module.params["tag"], allow_empty=True):
         client.fail(f'"{client.module.params["tag"]}" is not a valid docker tag!')

-    if client.module.params["source"] == "build":
-        if not client.module.params["build"] or not client.module.params["build"].get(
-            "path"
-        ):
-            client.fail(
-                'If "source" is set to "build", the "build.path" option must be specified.'
-            )
+    if client.module.params["source"] == "build" and (
+        not client.module.params["build"]
+        or not client.module.params["build"].get("path")
+    ):
+        client.fail(
+            'If "source" is set to "build", the "build.path" option must be specified.'
+        )

     try:
         results = {"changed": False, "actions": [], "image": {}}

View File

@@ -368,16 +368,20 @@ class ImageBuilder(DockerBaseClass):
         if self.secrets:
             for secret in self.secrets:
-                if secret["type"] in ("env", "value"):
-                    if LooseVersion(buildx_version) < LooseVersion("0.6.0"):
-                        self.fail(
-                            f"The Docker buildx plugin has version {buildx_version}, but 0.6.0 is needed for secrets of type=env and type=value"
-                        )
-        if self.outputs and len(self.outputs) > 1:
-            if LooseVersion(buildx_version) < LooseVersion("0.13.0"):
-                self.fail(
-                    f"The Docker buildx plugin has version {buildx_version}, but 0.13.0 is needed to specify more than one output"
-                )
+                if secret["type"] in ("env", "value") and LooseVersion(
+                    buildx_version
+                ) < LooseVersion("0.6.0"):
+                    self.fail(
+                        f"The Docker buildx plugin has version {buildx_version}, but 0.6.0 is needed for secrets of type=env and type=value"
+                    )
+        if (
+            self.outputs
+            and len(self.outputs) > 1
+            and LooseVersion(buildx_version) < LooseVersion("0.13.0")
+        ):
+            self.fail(
+                f"The Docker buildx plugin has version {buildx_version}, but 0.13.0 is needed to specify more than one output"
+            )

         self.path = parameters["path"]
         if not os.path.isdir(self.path):
@@ -530,9 +534,8 @@ class ImageBuilder(DockerBaseClass):
             "image": image or {},
         }

-        if image:
-            if self.rebuild == "never":
-                return results
+        if image and self.rebuild == "never":
+            return results

         results["changed"] = True
         if not self.check_mode:

View File

@@ -28,7 +28,13 @@ attributes:
   diff_mode:
     support: none
   idempotent:
-    support: full
+    support: partial
+    details:
+      - Whether the module is idempotent depends on the storage API used for images,
+        which determines how the image ID is computed. The idempotency check needs
+        that the image ID equals the ID stored in archive's C(manifest.json).
+        This seemed to have worked fine with the default storage backend up to Docker 28,
+        but seems to have changed in Docker 29.
 options:
   names:

View File

@@ -159,7 +159,7 @@ class ImagePusher(DockerBaseClass):
                     if line.get("errorDetail"):
                         raise RuntimeError(line["errorDetail"]["message"])
                     status = line.get("status")
-                    if status == "Pushing":
+                    if status in ("Pushing", "Pushed"):
                         results["changed"] = True
         except Exception as exc:  # pylint: disable=broad-exception-caught
             if "unauthorized" in str(exc):

View File

@@ -219,6 +219,7 @@ class ImageRemover(DockerBaseClass):
             elif is_image_name_id(name):
                 deleted.append(image["Id"])

+        # TODO: the following is no longer correct with Docker 29+...
         untagged[:] = sorted(
             (image.get("RepoTags") or []) + (image.get("RepoDigests") or [])
         )

View File

@@ -299,6 +299,8 @@ from ansible_collections.community.docker.plugins.module_utils._util import (
     DifferenceTracker,
     DockerBaseClass,
     clean_dict_booleans_for_docker_api,
+    normalize_ip_address,
+    normalize_ip_network,
     sanitize_labels,
 )
@@ -360,6 +362,7 @@ def validate_cidr(cidr: str) -> t.Literal["ipv4", "ipv6"]:
     :rtype: str
     :raises ValueError: If ``cidr`` is not a valid CIDR
     """
+    # TODO: Use ipaddress for this instead of rolling your own...
     if CIDR_IPV4.match(cidr):
         return "ipv4"
     if CIDR_IPV6.match(cidr):
@@ -389,6 +392,19 @@ def dicts_are_essentially_equal(a: dict[str, t.Any], b: dict[str, t.Any]) -> bool:
    return True
def normalize_ipam_values(ipam_config: dict[str, t.Any]) -> dict[str, t.Any]:
result = {}
for key, value in ipam_config.items():
if key in ("subnet", "iprange"):
value = normalize_ip_network(value)
elif key in ("gateway",):
value = normalize_ip_address(value)
elif key in ("aux_addresses",) and value is not None:
value = {k: normalize_ip_address(v) for k, v in value.items()}
result[key] = value
return result
class DockerNetworkManager:
    def __init__(self, client: AnsibleDockerClient) -> None:
        self.client = client
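Combined with the normalized comparison further down, this keeps IPAM idempotency checks stable against equivalent spellings reported back by the daemon. An illustration of what the normalization produces (input values made up):

    ipam_config = {
        "subnet": "2001:db8:0:0::/64",
        "gateway": "2001:db8::0001",
        "iprange": None,
    }
    # normalize_ipam_values(ipam_config) should yield:
    # {"subnet": "2001:db8::/64", "gateway": "2001:db8::1", "iprange": None}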
@@ -478,23 +494,21 @@ class DockerNetworkManager:
                 )
             else:
                 for key, value in self.parameters.driver_options.items():
-                    if not (key in net["Options"]) or value != net["Options"][key]:
+                    if key not in net["Options"] or value != net["Options"][key]:
                         differences.add(
                             f"driver_options.{key}",
                             parameter=value,
                             active=net["Options"].get(key),
                         )

-        if self.parameters.ipam_driver:
-            if (
-                not net.get("IPAM")
-                or net["IPAM"]["Driver"] != self.parameters.ipam_driver
-            ):
-                differences.add(
-                    "ipam_driver",
-                    parameter=self.parameters.ipam_driver,
-                    active=net.get("IPAM"),
-                )
+        if self.parameters.ipam_driver and (
+            not net.get("IPAM") or net["IPAM"]["Driver"] != self.parameters.ipam_driver
+        ):
+            differences.add(
+                "ipam_driver",
+                parameter=self.parameters.ipam_driver,
+                active=net.get("IPAM"),
+            )

         if self.parameters.ipam_driver_options is not None:
             ipam_driver_options = net["IPAM"].get("Options") or {}
@@ -515,24 +529,35 @@ class DockerNetworkManager:
             else:
                 # Put network's IPAM config into the same format as module's IPAM config
                 net_ipam_configs = []
+                net_ipam_configs_normalized = []
                 for net_ipam_config in net["IPAM"]["Config"]:
                     config = {}
                     for k, v in net_ipam_config.items():
                         config[normalize_ipam_config_key(k)] = v
                     net_ipam_configs.append(config)
+                    net_ipam_configs_normalized.append(normalize_ipam_values(config))

                 # Compare lists of dicts as sets of dicts
                 for idx, ipam_config in enumerate(self.parameters.ipam_config):
+                    ipam_config_normalized = normalize_ipam_values(ipam_config)
                     net_config = {}
-                    for net_ipam_config in net_ipam_configs:
-                        if dicts_are_essentially_equal(ipam_config, net_ipam_config):
+                    net_config_normalized = {}
+                    for net_ipam_config, net_ipam_config_normalized in zip(
+                        net_ipam_configs, net_ipam_configs_normalized
+                    ):
+                        if dicts_are_essentially_equal(
+                            ipam_config_normalized, net_ipam_config_normalized
+                        ):
                             net_config = net_ipam_config
+                            net_config_normalized = net_ipam_config_normalized
                             break
                     for key, value in ipam_config.items():
                         if value is None:
                             # due to recursive argument_spec, all keys are always present
                             # (but have default value None if not specified)
                             continue
-                        if value != net_config.get(key):
+                        if ipam_config_normalized[key] != net_config_normalized.get(
+                            key
+                        ):
                             differences.add(
                                 f"ipam_config[{idx}].{key}",
                                 parameter=value,
@@ -597,7 +622,7 @@ class DockerNetworkManager:
                 )
             else:
                 for key, value in self.parameters.labels.items():
-                    if not (key in net["Labels"]) or value != net["Labels"][key]:
+                    if key not in net["Labels"] or value != net["Labels"][key]:
                         differences.add(
                             f"labels.{key}",
                             parameter=value,

View File

@@ -216,14 +216,14 @@ class SwarmNodeManager(DockerBaseClass):
         if self.parameters.role is None:
             node_spec["Role"] = node_info["Spec"]["Role"]
         else:
-            if not node_info["Spec"]["Role"] == self.parameters.role:
+            if node_info["Spec"]["Role"] != self.parameters.role:
                 node_spec["Role"] = self.parameters.role
                 changed = True

         if self.parameters.availability is None:
             node_spec["Availability"] = node_info["Spec"]["Availability"]
         else:
-            if not node_info["Spec"]["Availability"] == self.parameters.availability:
+            if node_info["Spec"]["Availability"] != self.parameters.availability:
                 node_info["Spec"]["Availability"] = self.parameters.availability
                 changed = True

View File

@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# coding: utf-8
 #
 # Copyright (c) 2021 Red Hat | Ansible Sakar Mehra<@sakarmehra100@gmail.com | @sakar97>
 # Copyright (c) 2019, Vladimir Porshkevich (@porshkevich) <neosonic@mail.ru>
@@ -281,7 +280,7 @@ class DockerPluginManager:
                 stream=True,
             )
             self.client._raise_for_status(response)
-            for data in self.client._stream_helper(response, decode=True):
+            for dummy in self.client._stream_helper(response, decode=True):
                 pass

         # Inspect and configure plugin
         self.existing_plugin = self.client.get_json(

View File

@@ -322,7 +322,7 @@ def main() -> None:
             before_after_differences = json_diff(
                 before_stack_services, after_stack_services
             )
-            for k in before_after_differences.keys():
+            for k in before_after_differences:
                 if isinstance(before_after_differences[k], dict):
                     before_after_differences[k].pop("UpdatedAt", None)
                     before_after_differences[k].pop("Version", None)

View File

@@ -554,9 +554,8 @@ class SwarmManager(DockerBaseClass):
         except APIError as exc:
             self.client.fail(f"Can not create a new Swarm Cluster: {exc}")

-        if not self.client.check_if_swarm_manager():
-            if not self.check_mode:
-                self.client.fail("Swarm not created or other error!")
+        if not self.client.check_if_swarm_manager() and not self.check_mode:
+            self.client.fail("Swarm not created or other error!")

         self.created = True
         self.inspect_swarm()

View File

@@ -914,8 +914,10 @@ def get_docker_environment(
         for name, value in env.items():
             if not isinstance(value, str):
                 raise ValueError(
-                    "Non-string value found for env option. "
-                    f"Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: {name}"
+                    "Non-string value found for env option. Ambiguous env options must be "
+                    "wrapped in quotes to avoid them being interpreted when directly specified "
+                    "in YAML, or explicitly converted to strings when the option is templated. "
+                    f"Key: {name}"
                 )
             env_dict[name] = str(value)
     elif env is not None and isinstance(env, list):
@@ -2380,13 +2382,13 @@ class DockerServiceManager:
         ds.container_labels = task_template_data["ContainerSpec"].get("Labels")

         mode = raw_data["Spec"]["Mode"]
-        if "Replicated" in mode.keys():
-            ds.mode = to_text("replicated", encoding="utf-8")
+        if "Replicated" in mode:
+            ds.mode = to_text("replicated", encoding="utf-8")  # type: ignore
             ds.replicas = mode["Replicated"]["Replicas"]
-        elif "Global" in mode.keys():
+        elif "Global" in mode:
             ds.mode = "global"
-        elif "ReplicatedJob" in mode.keys():
-            ds.mode = to_text("replicated-job", encoding="utf-8")
+        elif "ReplicatedJob" in mode:
+            ds.mode = to_text("replicated-job", encoding="utf-8")  # type: ignore
             ds.replicas = mode["ReplicatedJob"]["TotalCompletions"]
         else:
             raise ValueError(f"Unknown service mode: {mode}")
@@ -2649,10 +2651,9 @@ class DockerServiceManager:

 def _detect_publish_mode_usage(client: AnsibleDockerClient) -> bool:
-    for publish_def in client.module.params["publish"] or []:
-        if publish_def.get("mode"):
-            return True
-    return False
+    return any(
+        publish_def.get("mode") for publish_def in client.module.params["publish"] or []
+    )

 def _detect_healthcheck_start_period(client: AnsibleDockerClient) -> bool:

View File

@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# coding: utf-8
 #
 # Copyright 2017 Red Hat | Ansible, Alex Grönholm <alex.gronholm@nextday.fi>
 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)

View File

@@ -1,5 +1,4 @@
 #!/usr/bin/python
-# coding: utf-8
 #
 # Copyright 2017 Red Hat | Ansible, Alex Grönholm <alex.gronholm@nextday.fi>
 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)

ruff.toml (new file, 31 lines)
View File

@@ -0,0 +1,31 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
line-length = 160
[lint]
# https://docs.astral.sh/ruff/rules/
select = ["A", "B", "E", "F", "FA", "FLY", "UP", "SIM"]
ignore = [
# Better keep ignored (for now)
"F811", # Redefinition of unused `xxx` (happens a lot for fixtures in unit tests)
"E402", # Module level import not at top of file
"E741", # Ambiguous variable name
"UP012", # unnecessary-encode-utf8
"UP015", # Unnecessary mode argument
"SIM105", # suppressible-exception
"SIM108", # if-else-block-instead-of-if-exp
# To fix later:
"B905", # zip-without-explicit-strict - needs Python 3.10+
# To fix:
"UP024", # Replace aliased errors with `OSError`
]
# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
# Allow unused variables when underscore-prefixed or starting with dummy
dummy-variable-rgx = "^(_|dummy).*$"
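With this configuration at the repository root, running 'ruff check .' lints the collection against the selected rule families, and 'ruff check --fix .' applies fixes automatically since all enabled rules are marked fixable; the dummy-variable-rgx setting is what permits the 'for dummy in ...' rename seen in the plugin change above.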

View File

@@ -124,13 +124,16 @@
         # - present_3_check is changed -- whether this is true depends on a combination of Docker CLI and Docker Compose version...
         #   Compose 2.37.3 with Docker 28.2.x results in 'changed', while Compose 2.37.3 with Docker 28.3.0 results in 'not changed'.
         #   It seems that Docker is now clever enough to notice that nothing is rebuilt...
-        - present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-        - ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
-        - present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
+        #   With Docker 29.0.0, the behvaior seems to change again... I'm currently tending to simply ignore this check, for that
+        #   reason the next three lines are commented out:
+        # - present_3_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
+        # - ((present_3 is changed) if docker_compose_version is version('2.31.0', '>=') and docker_compose_version is version('2.32.2', '<') else (present_3 is not changed))
+        # - present_3.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-        # Same as above:
         # - present_4_check is changed
+        # Same as above...
         - present_4_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
-        - present_4 is not changed
+        # Also seems like a hopeless case with Docker 29:
+        # - present_4 is not changed
         - present_4.warnings | default([]) | select('regex', ' please report this at ') | length == 0
     always:

View File

@@ -81,16 +81,19 @@
 - ansible.builtin.assert:
     that:
       - present_1_check is failed or present_1_check is changed
-      - present_1_check is changed or present_1_check.msg.startswith('General error:')
+      - present_1_check is changed or 'General error:' in present_1_check.msg
       - present_1_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
       - present_1 is failed
-      - present_1.msg.startswith('General error:')
+      - >-
+        'General error:' in present_1.msg
       - present_1.warnings | default([]) | select('regex', ' please report this at ') | length == 0
       - present_2_check is failed
-      - present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':')
+      - present_2_check.msg.startswith('Error when processing ' ~ cname ~ ':') or
+        present_2_check.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
       - present_2_check.warnings | default([]) | select('regex', ' please report this at ') | length == 0
       - present_2 is failed
-      - present_2.msg.startswith('Error when processing ' ~ cname ~ ':')
+      - present_2.msg.startswith('Error when processing ' ~ cname ~ ':') or
+        present_2.msg.startswith('Error when processing image ' ~ non_existing_image ~ ':')
       - present_2.warnings | default([]) | select('regex', ' please report this at ') | length == 0

 ####################################################################

View File

@@ -9,12 +9,10 @@
     non_existing_image: does-not-exist:latest
     project_src: "{{ remote_tmp_dir }}/{{ pname }}"
     test_service_non_existing: |
-      version: '3'
       services:
         {{ cname }}:
           image: {{ non_existing_image }}
     test_service_simple: |
-      version: '3'
       services:
         {{ cname }}:
           image: {{ docker_test_image_simple_1 }}

View File

@@ -77,7 +77,8 @@
 - ansible.builtin.assert:
     that:
       - result_1.rc == 0
-      - result_1.stderr == ""
+      # Since Compose 5, unrelated output shows up in stderr...
+      - result_1.stderr == "" or ("Creating" in result_1.stderr and "Created" in result_1.stderr)
       - >-
         "usr" in result_1.stdout_lines
         and

View File

@@ -37,7 +37,10 @@
   register: docker_host_info

 # Run the tests
-- block:
+- module_defaults:
+    community.general.docker_container:
+      debug: true
+  block:
     - ansible.builtin.include_tasks: run-test.yml
       with_fileglob:
         - "tests/*.yml"

View File

@@ -128,6 +128,7 @@
     image: "{{ docker_test_image_digest_base }}@sha256:{{ docker_test_image_digest_v1 }}"
     name: "{{ cname }}"
     pull: true
+    debug: true
     state: present
     force_kill: true
   register: digest_3

View File

@@ -3077,10 +3077,14 @@
       that:
         - log_options_1 is changed
        - log_options_2 is not changed
-        - "'Non-string value found for log_options option \\'max-file\\'. The value is automatically converted to \\'5\\'. If this is not correct, or you want to
-          avoid such warnings, please quote the value.' in (log_options_2.warnings | default([]))"
+        - message in (log_options_2.warnings | default([]))
         - log_options_3 is not changed
         - log_options_4 is changed
+    vars:
+      message: >-
+        Non-string value found for log_options option 'max-file'. The value is automatically converted to '5'.
+        If this is not correct, or you want to avoid such warnings, please quote the value,
+        or explicitly convert the values to strings when templating them.

 ####################################################################
 ## mac_address #####################################################
@@ -3686,18 +3690,6 @@
   register: platform_5
   ignore_errors: true

-- name: platform (idempotency)
-  community.docker.docker_container:
-    image: "{{ docker_test_image_simple_1 }}"
-    name: "{{ cname }}"
-    state: present
-    pull: true
-    platform: 386
-    force_kill: true
-    debug: true
-  register: platform_6
-  ignore_errors: true
-
 - name: cleanup
   community.docker.docker_container:
     name: "{{ cname }}"

@@ -3712,7 +3704,6 @@
       - platform_3 is not changed and platform_3 is not failed
       - platform_4 is not changed and platform_4 is not failed
       - platform_5 is changed
-      - platform_6 is not changed and platform_6 is not failed
   when: docker_api_version is version('1.41', '>=')

 - ansible.builtin.assert:
     that:

View File

@@ -106,6 +106,101 @@
    force_kill: true
  register: published_ports_3
- name: published_ports -- port range (same range, but listed explicitly)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
exposed_ports:
- "9001"
- "9010"
- "9011"
- "9012"
- "9013"
- "9014"
- "9015"
- "9016"
- "9017"
- "9018"
- "9019"
- "9020"
- "9021"
- "9022"
- "9023"
- "9024"
- "9025"
- "9026"
- "9027"
- "9028"
- "9029"
- "9030"
- "9031"
- "9032"
- "9033"
- "9034"
- "9035"
- "9036"
- "9037"
- "9038"
- "9039"
- "9040"
- "9041"
- "9042"
- "9043"
- "9044"
- "9045"
- "9046"
- "9047"
- "9048"
- "9049"
- "9050"
published_ports:
- "9001:9001"
- "9020:9020"
- "9021:9021"
- "9022:9022"
- "9023:9023"
- "9024:9024"
- "9025:9025"
- "9026:9026"
- "9027:9027"
- "9028:9028"
- "9029:9029"
- "9030:9030"
- "9031:9031"
- "9032:9032"
- "9033:9033"
- "9034:9034"
- "9035:9035"
- "9036:9036"
- "9037:9037"
- "9038:9038"
- "9039:9039"
- "9040:9040"
- "9041:9041"
- "9042:9042"
- "9043:9043"
- "9044:9044"
- "9045:9045"
- "9046:9046"
- "9047:9047"
- "9048:9048"
- "9049:9049"
- "9050:9050"
- "9051:9051"
- "9052:9052"
- "9053:9053"
- "9054:9054"
- "9055:9055"
- "9056:9056"
- "9057:9057"
- "9058:9058"
- "9059:9059"
- "9060:9060"
force_kill: true
register: published_ports_4
- name: cleanup
  community.docker.docker_container:
    name: "{{ cname }}"

@@ -118,6 +213,7 @@
       - published_ports_1 is changed
       - published_ports_2 is not changed
       - published_ports_3 is changed
+      - published_ports_4 is not changed
####################################################################
## published_ports: one-element container port range ###############

@@ -181,6 +277,58 @@
       - published_ports_2 is not changed
       - published_ports_3 is changed
####################################################################
## published_ports: duplicate ports ################################
####################################################################
- name: published_ports -- duplicate ports
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80
- 10000:80
register: published_ports_1
- name: published_ports -- duplicate ports (idempotency)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80
- 10000:80
force_kill: true
register: published_ports_2
- name: published_ports -- duplicate ports (idempotency w/ protocol)
community.docker.docker_container:
image: "{{ docker_test_image_alpine }}"
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
published_ports:
- 8000:80/tcp
- 10000:80/tcp
force_kill: true
register: published_ports_3
- name: cleanup
community.docker.docker_container:
name: "{{ cname }}"
state: absent
force_kill: true
diff: false
- ansible.builtin.assert:
that:
- published_ports_1 is changed
- published_ports_2 is not changed
- published_ports_3 is not changed
####################################################################
## published_ports: IPv6 addresses #################################
####################################################################

View File

@@ -256,6 +256,10 @@
 - ansible.builtin.assert:
     that:
       - archive_image_2 is not changed
+  when: docker_cli_version is version("29.0.0", "<")
+  # Apparently idempotency no longer works with the default storage backend
+  # in Docker 29.0.0.
+  # https://github.com/ansible-collections/community.docker/pull/1199

 - name: Archive image 3rd time, should overwrite due to different id
   community.docker.docker_image:

View File

@@ -67,3 +67,7 @@
     manifests_json: "{{ manifests.results | map(attribute='stdout') | map('from_json') }}"
     manifest_json_images: "{{ item.2 | map(attribute='Config') | map('regex_replace', '.json$', '') | map('regex_replace', '^blobs/sha256/', '') | sort }}"
     export_image_ids: "{{ item.1 | map('regex_replace', '^sha256:', '') | unique | sort }}"
+  when: docker_cli_version is version("29.0.0", "<")
+  # Apparently idempotency no longer works with the default storage backend
+  # in Docker 29.0.0.
+  # https://github.com/ansible-collections/community.docker/pull/1199

View File

@@ -73,11 +73,17 @@
   loop: "{{ all_images }}"
   when: remove_all_images is failed

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Load all images (IDs)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-2.tar"
   register: result

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Print loaded image names
   ansible.builtin.debug:
     var: result.image_names
@@ -110,11 +116,17 @@
     name: "{{ item }}"
   loop: "{{ all_images }}"

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Load all images (mixed images and IDs)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-3.tar"
   register: result

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Print loading log
   ansible.builtin.debug:
     var: result.stdout_lines
@@ -127,10 +139,14 @@
     that:
       - result is changed
       # For some reason, *sometimes* only the named image is found; in fact, in that case, the log only mentions that image and nothing else
-      - "result.images | length == 3 or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout"
-      - (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0]]]
-      - result.images | length in [1, 3]
-      - (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
+      # With Docker 29, a third possibility appears: just two entries.
+      - >-
+        result.images | length == 3
+        or ('Loaded image: ' ~ docker_test_image_hello_world) == result.stdout
+        or result.images | length == 2
+      - (result.image_names | sort) in [[image_names[0], image_ids[0], image_ids[1]] | sort, [image_names[0], image_ids[1]] | sort, [image_names[0]]]
+      - result.images | length in [1, 2, 3]
+      - (result.images | map(attribute='Id') | sort) in [[image_ids[0], image_ids[0], image_ids[1]] | sort, [image_ids[0], image_ids[1]] | sort, [image_ids[0]]]
# Same image twice # Same image twice
@@ -139,11 +155,17 @@
     name: "{{ item }}"
   loop: "{{ all_images }}"

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Load all images (same image twice)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-4.tar"
   register: result

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Print loaded image names
   ansible.builtin.debug:
     var: result.image_names
@@ -151,10 +173,11 @@
 - ansible.builtin.assert:
     that:
       - result is changed
-      - result.image_names | length == 1
-      - result.image_names[0] == image_names[0]
-      - result.images | length == 1
+      - result.image_names | length in [1, 2]
+      - (result.image_names | sort) in [[image_names[0]], [image_names[0], image_ids[0]] | sort]
+      - result.images | length in [1, 2]
       - result.images[0].Id == image_ids[0]
+      - result.images[1].Id | default(image_ids[0]) == image_ids[0]
# Single image by ID # Single image by ID
@@ -163,11 +186,17 @@
     name: "{{ item }}"
   loop: "{{ all_images }}"

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Load all images (single image by ID)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-5.tar"
   register: result

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Print loaded image names
   ansible.builtin.debug:
     var: result.image_names
@@ -197,11 +226,17 @@
     name: "{{ item }}"
   loop: "{{ all_images }}"

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Load all images (names)
   community.docker.docker_image_load:
     path: "{{ remote_tmp_dir }}/archive-1.tar"
   register: result

+- name: Show all images
+  ansible.builtin.command: docker image ls
+
 - name: Print loaded image names
   ansible.builtin.debug:
     var: result.image_names

View File

@@ -142,6 +142,8 @@
       - present_3_check.actions[0] == ('Pulled image ' ~ image_name)
       - present_3_check.diff.before.id == present_1.diff.after.id
       - present_3_check.diff.after.id == 'unknown'
+- ansible.builtin.assert:
+    that:
       - present_3 is changed
       - present_3.actions | length == 1
       - present_3.actions[0] == ('Pulled image ' ~ image_name)
@ -166,6 +168,11 @@
- present_5.actions[0] == ('Pulled image ' ~ image_name) - present_5.actions[0] == ('Pulled image ' ~ image_name)
- present_5.diff.before.id == present_3.diff.after.id - present_5.diff.before.id == present_3.diff.after.id
- present_5.diff.after.id == present_1.diff.after.id - present_5.diff.after.id == present_1.diff.after.id
when: docker_cli_version is version("29.0.0", "<")
# From Docker 29 on, Docker won't pull images for other architectures
# if there are better matching ones. The above tests assume it will
# just do what it is told, and thus fail from 29.0.0 on.
# https://github.com/ansible-collections/community.docker/pull/1199
always: always:
- name: cleanup - name: cleanup
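The new `when:` gate keys off the CLI version because, from Docker 29 on, the daemon may refuse to re-pull a foreign-architecture image when a better match exists. A rough standalone equivalent of that gate, assuming the third-party packaging library (Ansible's `version` test has comparable semantics; the function is a placeholder):

from packaging.version import Version

def run_strict_pull_assertions() -> None:
    # Stand-in for the assertions the `when:` above would skip.
    print("checking Docker < 29 pull behavior")

docker_cli_version = "29.0.0"  # hypothetical; the tests read this from the CLI

if Version(docker_cli_version) < Version("29.0.0"):
    run_strict_pull_assertions()
else:
    print("Docker 29+ picks best-matching architectures; assertions skipped")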

View File

@@ -7,11 +7,9 @@
   block:
     - name: Make sure images are not there
       community.docker.docker_image_remove:
-        name: "{{ item }}"
+        name: "sha256:{{ item }}"
         force: true
-      loop:
-        - "sha256:{{ docker_test_image_digest_v1_image_id }}"
-        - "sha256:{{ docker_test_image_digest_v2_image_id }}"
+      loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}"

     - name: Pull image 1
       community.docker.docker_image_pull:
@@ -82,8 +80,6 @@
   always:
     - name: cleanup
       community.docker.docker_image_remove:
-        name: "{{ item }}"
+        name: "sha256:{{ item }}"
         force: true
-      loop:
-        - "sha256:{{ docker_test_image_digest_v1_image_id }}"
-        - "sha256:{{ docker_test_image_digest_v2_image_id }}"
+      loop: "{{ docker_test_image_digest_v1_image_ids + docker_test_image_digest_v2_image_ids }}"

View File

@@ -18,7 +18,7 @@
   - name: Push image ID (must fail)
     community.docker.docker_image_push:
-      name: "sha256:{{ docker_test_image_digest_v1_image_id }}"
+      name: "sha256:{{ docker_test_image_digest_v1_image_ids[0] }}"
    register: fail_2
    ignore_errors: true

View File

@@ -80,4 +80,6 @@
       that:
         - push_4 is failed
         - >-
-          push_4.msg == ('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': no basic auth credentials')
+          push_4.msg.startswith('Error pushing image ' ~ image_name_base2 ~ ':' ~ image_tag ~ ': ')
+        - >-
+          push_4.msg.endswith(': no basic auth credentials')
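Replacing the exact comparison with a startswith/endswith pair keeps the assertion valid even if the daemon adds detail between the stable prefix and suffix. The pattern in isolation, with an invented error message:

msg = "Error pushing image registry.example.com/foo:latest: push access denied: no basic auth credentials"

# Pin down only the parts of the message that are stable across Docker versions.
assert msg.startswith("Error pushing image registry.example.com/foo:latest: ")
assert msg.endswith(": no basic auth credentials")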

View File

@@ -8,15 +8,16 @@
 # and should not be used as examples of how to write Ansible roles #
 ####################################################################

-- block:
+- vars:
+    image: "{{ docker_test_image_hello_world }}"
+    image_ids: "{{ docker_test_image_hello_world_image_ids }}"
+  block:
     - name: Pick image prefix
       ansible.builtin.set_fact:
         iname_prefix: "{{ 'ansible-docker-test-%0x' % ((2**32) | random) }}"

     - name: Define image names
       ansible.builtin.set_fact:
-        image: "{{ docker_test_image_hello_world }}"
-        image_id: "{{ docker_test_image_hello_world_image_id }}"
         image_names:
           - "{{ iname_prefix }}-tagged-1:latest"
           - "{{ iname_prefix }}-tagged-1:foo"
@@ -24,8 +25,9 @@
     - name: Remove image complete
       community.docker.docker_image_remove:
-        name: "{{ image_id }}"
+        name: "{{ item }}"
         force: true
+      loop: "{{ image_ids }}"

     - name: Remove tagged images
       community.docker.docker_image_remove:
@@ -102,10 +104,11 @@
          - remove_2 is changed
          - remove_2.diff.before.id == pulled_image.image.Id
          - remove_2.diff.before.tags | length == 4
-          - remove_2.diff.before.digests | length == 1
+          # With Docker 29, there are now two digests in before and after:
+          - remove_2.diff.before.digests | length in [1, 2]
          - remove_2.diff.after.id == pulled_image.image.Id
          - remove_2.diff.after.tags | length == 3
-          - remove_2.diff.after.digests | length == 1
+          - remove_2.diff.after.digests | length in [1, 2]
          - remove_2.deleted | length == 0
          - remove_2.untagged | length == 1
          - remove_2.untagged[0] == (iname_prefix ~ '-tagged-1:latest')
@@ -174,10 +177,11 @@
          - remove_4 is changed
          - remove_4.diff.before.id == pulled_image.image.Id
          - remove_4.diff.before.tags | length == 3
-          - remove_4.diff.before.digests | length == 1
+          # With Docker 29, there are now two digests in before and after:
+          - remove_4.diff.before.digests | length in [1, 2]
          - remove_4.diff.after.id == pulled_image.image.Id
          - remove_4.diff.after.tags | length == 2
-          - remove_4.diff.after.digests | length == 1
+          - remove_4.diff.after.digests | length in [1, 2]
          - remove_4.deleted | length == 0
          - remove_4.untagged | length == 1
          - remove_4.untagged[0] == (iname_prefix ~ '-tagged-1:foo')
@@ -245,16 +249,22 @@
          - remove_6 is changed
          - remove_6.diff.before.id == pulled_image.image.Id
          - remove_6.diff.before.tags | length == 2
-          - remove_6.diff.before.digests | length == 1
+          # With Docker 29, there are now two digests in before and after:
+          - remove_6.diff.before.digests | length in [1, 2]
          - remove_6.diff.after.exists is false
-          - remove_6.deleted | length > 1
+          - remove_6.deleted | length >= 1
          - pulled_image.image.Id in remove_6.deleted
-          - remove_6.untagged | length == 3
+          - remove_6.untagged | length in [2, 3]
          - (iname_prefix ~ '-tagged-1:bar') in remove_6.untagged
          - image in remove_6.untagged
          - remove_6_check.deleted | length == 1
          - remove_6_check.deleted[0] == pulled_image.image.Id
-          - remove_6_check.untagged == remove_6.untagged
+          # The following is only true for Docker < 29...
+          # We use the CLI version as a proxy...
+          - >-
+            remove_6_check.untagged == remove_6.untagged
+            or
+            docker_cli_version is version("29.0.0", ">=")
          - info_5.images | length == 0

     - name: Remove image ID (force, idempotent, check mode)

View File

@@ -133,8 +133,24 @@
     - name: Get proxied daemon URLs
       ansible.builtin.set_fact:
-        docker_daemon_frontend_https: "https://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
-        docker_daemon_frontend_http: "http://{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:6000"
+        # Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists.
+        # Use the bridge network's IP address instead...
+        docker_daemon_frontend_https: >-
+          https://{{
+            nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
+            if current_container_network_ip else (
+              nginx_container.container.NetworkSettings.IPAddress
+              | default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
+            )
+          }}:5000
+        docker_daemon_frontend_http: >-
+          http://{{
+            nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
+            if current_container_network_ip else (
+              nginx_container.container.NetworkSettings.IPAddress
+              | default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
+            )
+          }}:6000

     - name: Wait for registry frontend
       ansible.builtin.uri:
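The fallback chain above exists because Docker 29 removed the top-level NetworkSettings.IPAddress field from inspect output. A rough Python rendering of the same lookup; `inspect_data` and `frontend_ip` are illustrative names for a parsed `docker inspect` result and a helper, not anything from the collection:

def frontend_ip(inspect_data: dict, current_network: str | None) -> str:
    # Mirrors the Jinja expression: prefer the shared network's address,
    # otherwise fall back from the legacy field to the default bridge.
    settings = inspect_data["NetworkSettings"]
    if current_network:
        return settings["Networks"][current_network]["IPAddress"]
    return settings.get("IPAddress") or settings["Networks"]["bridge"]["IPAddress"]

# Docker 29-style data: no top-level IPAddress anymore.
inspect_data = {"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}
print(f"https://{frontend_ip(inspect_data, None)}:5000")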

View File

@@ -4,12 +4,18 @@
 # SPDX-License-Identifier: GPL-3.0-or-later

 docker_test_image_digest_v1: e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9
-docker_test_image_digest_v1_image_id: 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee
+docker_test_image_digest_v1_image_ids:
+  - 758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee  # Docker 28 and before
+  - e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9  # Docker 29
 docker_test_image_digest_v2: ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b
-docker_test_image_digest_v2_image_id: dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843
+docker_test_image_digest_v2_image_ids:
+  - dc3bacd8b5ea796cea5d6070c8f145df9076f26a6bc1c8981fd5b176d37de843  # Docker 28 and before
+  - ee44b399df993016003bf5466bd3eeb221305e9d0fa831606bc7902d149c775b  # Docker 29
 docker_test_image_digest_base: quay.io/ansible/docker-test-containers
 docker_test_image_hello_world: quay.io/ansible/docker-test-containers:hello-world
-docker_test_image_hello_world_image_id: sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b
+docker_test_image_hello_world_image_ids:
+  - sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b  # Docker 28 and before
+  - sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042  # Docker 29
 docker_test_image_hello_world_base: quay.io/ansible/docker-test-containers
 docker_test_image_busybox: quay.io/ansible/docker-test-containers:busybox
 docker_test_image_alpine: quay.io/ansible/docker-test-containers:alpine3.8
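Each `*_image_ids` list now enumerates every ID a supported Docker version may report for the same image (note that the Docker 29 entries match the corresponding manifest digests above). A test that has to accept either generation can check membership, sketched here with the v1 values from the vars file; `is_known_id` is an illustrative helper:

docker_test_image_digest_v1_image_ids = [
    "758ec7f3a1ee85f8f08399b55641bfb13e8c1109287ddc5e22b68c3d653152ee",  # Docker 28 and before
    "e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9",  # Docker 29
]

def is_known_id(image_id: str) -> bool:
    # Accept the ID with or without the "sha256:" prefix.
    return image_id.removeprefix("sha256:") in docker_test_image_digest_v1_image_ids

print(is_known_id("sha256:" + docker_test_image_digest_v1_image_ids[1]))  # True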

View File

@@ -102,7 +102,17 @@
         # This host/port combination cannot be used if the tests are running inside a docker container.
         docker_registry_frontend_address: localhost:{{ nginx_container.container.NetworkSettings.Ports['5000/tcp'].0.HostPort }}
         # The following host/port combination can be used from inside the docker container.
-        docker_registry_frontend_address_internal: "{{ nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress if current_container_network_ip else nginx_container.container.NetworkSettings.IPAddress }}:5000"
+        docker_registry_frontend_address_internal: >-
+          {{
+            nginx_container.container.NetworkSettings.Networks[current_container_network_ip].IPAddress
+            if current_container_network_ip else
+            (
+              nginx_container.container.NetworkSettings.IPAddress
+              | default(nginx_container.container.NetworkSettings.Networks['bridge'].IPAddress)
+            )
+          }}:5000
+        # Since Docker 29, nginx_container.container.NetworkSettings.IPAddress no longer exists.
+        # Use the bridge network's IP address instead...

     - name: Wait for registry frontend
       ansible.builtin.uri:

View File

@@ -27,7 +27,7 @@
 - name: Install cryptography (Darwin, and potentially upgrade for other OSes)
   become: true
   ansible.builtin.pip:
-    name: cryptography>=1.3.0
+    name: cryptography>=3.3.0
    extra_args: "-c {{ remote_constraints }}"

 - name: Register cryptography version

View File

@@ -226,7 +226,7 @@ class DockerApiTest(BaseAPIClientTest):
     def test_retrieve_server_version(self) -> None:
         client = APIClient(version="auto")
         assert isinstance(client._version, str)
-        assert not (client._version == "auto")
+        assert client._version != "auto"
         client.close()

     def test_auto_retrieve_server_version(self) -> None:
@@ -323,8 +323,8 @@ class DockerApiTest(BaseAPIClientTest):
         # mock a stream interface
         raw_resp = urllib3.HTTPResponse(body=body)
-        setattr(raw_resp._fp, "chunked", True)
-        setattr(raw_resp._fp, "chunk_left", len(body.getvalue()) - 1)
+        raw_resp._fp.chunked = True
+        raw_resp._fp.chunk_left = len(body.getvalue()) - 1

         # pass `decode=False` to the helper
         raw_resp._fp.seek(0)
@@ -339,7 +339,7 @@ class DockerApiTest(BaseAPIClientTest):
         assert result == content

         # non-chunked response, pass `decode=False` to the helper
-        setattr(raw_resp._fp, "chunked", False)
+        raw_resp._fp.chunked = False
         raw_resp._fp.seek(0)
         resp = create_response(status_code=status_code, content=content, raw=raw_resp)
         result = next(self.client._stream_helper(resp))
@@ -503,7 +503,7 @@ class TCPSocketStreamTest(unittest.TestCase):
         cls.thread.join()

     @classmethod
-    def get_handler_class(cls) -> t.Type[BaseHTTPRequestHandler]:
+    def get_handler_class(cls) -> type[BaseHTTPRequestHandler]:
         stdout_data = cls.stdout_data
         stderr_data = cls.stderr_data

View File

@@ -581,7 +581,7 @@ fake_responses: dict[str | tuple[str, str], Callable] = {
     f"{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/restart": post_fake_restart_container,
     f"{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b": delete_fake_remove_container,
     # TODO: the following is a duplicate of the import endpoint further above!
-    f"{prefix}/{CURRENT_VERSION}/images/create": post_fake_image_create,
+    f"{prefix}/{CURRENT_VERSION}/images/create": post_fake_image_create,  # noqa: F601
     f"{prefix}/{CURRENT_VERSION}/images/e9aa60c60128": delete_fake_remove_image,
     f"{prefix}/{CURRENT_VERSION}/images/e9aa60c60128/get": get_fake_get_image,
     f"{prefix}/{CURRENT_VERSION}/images/load": post_fake_load_image,

View File

@@ -256,7 +256,7 @@ class ResolveAuthTest(unittest.TestCase):
             m.return_value = None
             ac = auth.resolve_authconfig(auth_config, None)
             assert ac is not None
-            assert "indexuser" == ac["username"]
+            assert ac["username"] == "indexuser"


 class LoadConfigTest(unittest.TestCase):

View File

@@ -421,18 +421,18 @@ class TarTest(unittest.TestCase):
         base = make_tree(dirs, files)
         self.addCleanup(shutil.rmtree, base)

-        with tar(base, exclude=exclude) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == sorted(expected_names)
+        with tar(base, exclude=exclude) as archive, tarfile.open(
+            fileobj=archive
+        ) as tar_data:
+            assert sorted(tar_data.getnames()) == sorted(expected_names)

     def test_tar_with_empty_directory(self) -> None:
         base = tempfile.mkdtemp()
         self.addCleanup(shutil.rmtree, base)
         for d in ["foo", "bar"]:
             os.makedirs(os.path.join(base, d))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == ["bar", "foo"]
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert sorted(tar_data.getnames()) == ["bar", "foo"]

     @pytest.mark.skipif(
         IS_WINDOWS_PLATFORM or os.geteuid() == 0,
@@ -458,9 +458,8 @@
             f.write("content")
         os.makedirs(os.path.join(base, "bar"))
         os.symlink("../foo", os.path.join(base, "bar/foo"))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]

     @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason="No symlinks on Windows")
     def test_tar_with_directory_symlinks(self) -> None:
@@ -469,9 +468,8 @@
         for d in ["foo", "bar"]:
             os.makedirs(os.path.join(base, d))
         os.symlink("../foo", os.path.join(base, "bar/foo"))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]

     @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason="No symlinks on Windows")
     def test_tar_with_broken_symlinks(self) -> None:
@@ -481,9 +479,8 @@
             os.makedirs(os.path.join(base, d))
         os.symlink("../baz", os.path.join(base, "bar/foo"))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert sorted(tar_data.getnames()) == ["bar", "bar/foo", "foo"]

     @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason="No UNIX sockets on Win32")
     def test_tar_socket_file(self) -> None:
@@ -494,9 +491,8 @@
         sock = socket.socket(socket.AF_UNIX)
         self.addCleanup(sock.close)
         sock.bind(os.path.join(base, "test.sock"))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert sorted(tar_data.getnames()) == ["bar", "foo"]
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert sorted(tar_data.getnames()) == ["bar", "foo"]

     def tar_test_negative_mtime_bug(self) -> None:
         base = tempfile.mkdtemp()
@@ -505,10 +501,9 @@
         with open(filename, "wt", encoding="utf-8") as f:
             f.write("Invisible Full Moon")
         os.utime(filename, (12345, -3600.0))
-        with tar(base) as archive:
-            with tarfile.open(fileobj=archive) as tar_data:
-                assert tar_data.getnames() == ["th.txt"]
-                assert tar_data.getmember("th.txt").mtime == -3600
+        with tar(base) as archive, tarfile.open(fileobj=archive) as tar_data:
+            assert tar_data.getnames() == ["th.txt"]
+            assert tar_data.getmember("th.txt").mtime == -3600

     @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason="No symlinks on Windows")
     def test_tar_directory_link(self) -> None:
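Each rewrite above merges two nested with blocks into one statement holding both context managers; entry happens left to right (so the second manager can use the first's result) and cleanup still runs in reverse order. The same transformation on a self-contained example:

import tarfile
import tempfile

# Before: nested context managers.
with tempfile.TemporaryFile() as fileobj:
    with tarfile.open(fileobj=fileobj, mode="w") as archive:
        pass

# After: one `with`; `archive` can reference `fileobj` because the
# managers are entered left to right.
with tempfile.TemporaryFile() as fileobj, tarfile.open(
    fileobj=fileobj, mode="w"
) as archive:
    pass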

View File

@@ -58,7 +58,12 @@ class KwargsFromEnvTest(unittest.TestCase):
         self.os_environ = os.environ.copy()

     def tearDown(self) -> None:
-        os.environ = self.os_environ  # type: ignore
+        for k, v in self.os_environ.items():
+            if os.environ.get(k) != v:
+                os.environ[k] = v
+        for k in os.environ:
+            if k not in self.os_environ:
+                os.environ.pop(k)

     def test_kwargs_from_env_empty(self) -> None:
         os.environ.update(DOCKER_HOST="", DOCKER_CERT_PATH="")
@@ -75,7 +80,7 @@
             DOCKER_TLS_VERIFY="1",
         )
         kwargs = kwargs_from_env(assert_hostname=False)
-        assert "tcp://192.168.59.103:2376" == kwargs["base_url"]
+        assert kwargs["base_url"] == "tcp://192.168.59.103:2376"
         assert "ca.pem" in kwargs["tls"].ca_cert
         assert "cert.pem" in kwargs["tls"].cert[0]
         assert "key.pem" in kwargs["tls"].cert[1]
@@ -99,7 +104,7 @@
             DOCKER_TLS_VERIFY="",
         )
         kwargs = kwargs_from_env(assert_hostname=True)
-        assert "tcp://192.168.59.103:2376" == kwargs["base_url"]
+        assert kwargs["base_url"] == "tcp://192.168.59.103:2376"
         assert "ca.pem" in kwargs["tls"].ca_cert
         assert "cert.pem" in kwargs["tls"].cert[0]
         assert "key.pem" in kwargs["tls"].cert[1]
@@ -125,7 +130,7 @@
         )
         os.environ.pop("DOCKER_CERT_PATH", None)
         kwargs = kwargs_from_env(assert_hostname=True)
-        assert "tcp://192.168.59.103:2376" == kwargs["base_url"]
+        assert kwargs["base_url"] == "tcp://192.168.59.103:2376"

     def test_kwargs_from_env_no_cert_path(self) -> None:
         try:
@@ -157,7 +162,7 @@
                 "DOCKER_HOST": "http://docker.gensokyo.jp:2581",
             }
         )
-        assert "http://docker.gensokyo.jp:2581" == kwargs["base_url"]
+        assert kwargs["base_url"] == "http://docker.gensokyo.jp:2581"
         assert "tls" not in kwargs

View File

@@ -9,6 +9,7 @@ import pytest
 from ansible_collections.community.docker.plugins.module_utils._compose_v2 import (
     Event,
     parse_events,
+    parse_json_events,
 )

 from .compose_v2_test_cases import EVENT_TEST_CASES
@@ -384,3 +385,208 @@ def test_parse_events(
     assert collected_events == events
     assert collected_warnings == warnings
+
+
+JSON_TEST_CASES: list[tuple[str, str, str, list[Event], list[str]]] = [
+    (
+        "pull-compose-2",
+        "2.40.3",
+        '{"level":"warning","msg":"/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version`'
+        ' is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:16:30Z"}\n'
+        '{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulling"}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pulling fs layer"}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Downloading","status":"[\\u003e '
+        '                                                 ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Download complete","percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[==\\u003e '
+        '                                               ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
+        '======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Extracting","status":"[============'
+        '======================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"ansible-docker-test-3c46cd06-cont","text":"Pull complete","percent":100}\n'
+        '{"id":"ansible-docker-test-3c46cd06-cont","text":"Pulled"}\n',
+        [
+            Event(
+                "unknown",
+                None,
+                "Warning",
+                "/tmp/ansible.f9pcm_i3.test/ansible-docker-test-3c46cd06-pull/docker-compose.yml: the attribute `version` is obsolete,"
+                " it will be ignored, please remove it to avoid potential confusion",
+            ),
+            Event(
+                "image",
+                "ansible-docker-test-3c46cd06-cont",
+                "Pulling",
+                None,
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Pulling fs layer",
+                None,
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Downloading",
+                "[>                                                  ] 6.89kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Download complete",
+                None,
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Extracting",
+                "[==>                                                ] 32.77kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Extracting",
+                "[==================================================>] 599.9kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Extracting",
+                "[==================================================>] 599.9kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "63a26ae4e8a8",
+                "Pull complete",
+                None,
+            ),
+            Event(
+                "image",
+                "ansible-docker-test-3c46cd06-cont",
+                "Pulled",
+                None,
+            ),
+        ],
+        [],
+    ),
+    (
+        "pull-compose-5",
+        "5.0.0",
+        '{"level":"warning","msg":"/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute'
+        ' `version` is obsolete, it will be ignored, please remove it to avoid potential confusion","time":"2025-12-06T13:08:22Z"}\n'
+        '{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"Pulling"}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[\\u003e '
+        '                                                 ] 6.89kB/599.9kB","current":6890,"total":599883,"percent":1}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
+        '====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working"}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[==\\u003e '
+        '                                               ] 32.77kB/599.9kB","current":32768,"total":599883,"percent":5}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
+        '====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Working","text":"[=============='
+        '====================================\\u003e] 599.9kB/599.9kB","current":599883,"total":599883,"percent":100}\n'
+        '{"id":"63a26ae4e8a8","parent_id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","percent":100}\n'
+        '{"id":"Image ghcr.io/ansible-collections/simple-1:tag","status":"Done","text":"Pulled"}\n',
+        [
+            Event(
+                "unknown",
+                None,
+                "Warning",
+                "/tmp/ansible.1n0q46aj.test/ansible-docker-test-b2fa9191-pull/docker-compose.yml: the attribute `version`"
+                " is obsolete, it will be ignored, please remove it to avoid potential confusion",
+            ),
+            Event(
+                "image",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Pulling",
+                "Working",
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                None,
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                "[>                                                  ] 6.89kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                "[==================================================>] 599.9kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                None,
+            ),
+            Event(
+                "image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                "[==>                                                ] 32.77kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                "[==================================================>] 599.9kB/599.9kB",
+            ),
+            Event(
+                "image-layer",
+                "ghcr.io/ansible-collections/simple-1:tag",
+                "Working",
+                "[==================================================>] 599.9kB/599.9kB",
+            ),
+            Event(
+                "image-layer", "ghcr.io/ansible-collections/simple-1:tag", "Done", None
+            ),
+            Event(
+                "image", "ghcr.io/ansible-collections/simple-1:tag", "Pulled", "Done"
+            ),
+        ],
+        [],
+    ),
+]
+
+
+@pytest.mark.parametrize(
+    "test_id, compose_version, stderr, events, warnings",
+    JSON_TEST_CASES,
+    ids=[tc[0] for tc in JSON_TEST_CASES],
+)
+def test_parse_json_events(
+    test_id: str,
+    compose_version: str,
+    stderr: str,
+    events: list[Event],
+    warnings: list[str],
+) -> None:
+    collected_warnings = []
+
+    def collect_warning(msg: str) -> None:
+        collected_warnings.append(msg)
+
+    collected_events = parse_json_events(
+        stderr.encode("utf-8"),
+        warn_function=collect_warning,
+    )
+
+    print(collected_events)
+    print(collected_warnings)
+
+    assert collected_events == events
+    assert collected_warnings == warnings
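parse_json_events digests the line-delimited JSON that newer Compose versions emit on stderr, one object per line, as exercised above. What follows is only an illustrative sketch of that style of parsing, not the module's implementation; the Event field names and the (deliberately simplified) classification rules are inferred from the test data:

import json
from typing import Callable, NamedTuple

class Event(NamedTuple):  # illustrative stand-in with four fields, as above
    kind: str
    resource_id: str | None
    status: str
    text: str | None

def parse_json_lines(stderr: bytes, warn_function: Callable[[str], None]) -> list[Event]:
    events: list[Event] = []
    for line in stderr.decode("utf-8").splitlines():
        if not line.strip():
            continue
        try:
            data = json.loads(line)
        except ValueError:
            # Mirrors the improved JSON parsing error handling: warn, don't crash.
            warn_function(f"Cannot parse JSON from line {line!r}")
            continue
        if data.get("level") == "warning":
            events.append(Event("unknown", None, "Warning", data.get("msg")))
        elif data.get("parent_id"):
            events.append(Event("image-layer", data["id"], data.get("text") or data.get("status", ""), data.get("status")))
        else:
            events.append(Event("image", data.get("id"), data.get("text", ""), data.get("status")))
    return events

warnings: list[str] = []
sample = b'{"id":"63a26ae4e8a8","parent_id":"img","text":"Pulling fs layer"}\nnot json\n'
print(parse_json_lines(sample, warnings.append))
print(warnings)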

View File

@@ -23,7 +23,6 @@ from ..test_support.docker_image_archive_stubbing import (

 if t.TYPE_CHECKING:
     from collections.abc import Callable
-    from pathlib import Path


 def assert_no_logging(msg: str) -> t.NoReturn:

View File

@@ -156,7 +156,7 @@ def test_has_list_changed() -> None:
         [{"a": 1}, {"a": 2}], [{"a": 1}, {"a": 2}], sort_key="a"
     )
-    with pytest.raises(Exception):
+    with pytest.raises(ValueError):
         docker_swarm_service.has_list_changed(
             [{"a": 1}, {"a": 2}], [{"a": 1}, {"a": 2}]
         )
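Tightening `pytest.raises(Exception)` to `pytest.raises(ValueError)` makes the test fail if the call blows up for an unrelated reason. The mechanics in miniature (the helper and its message are stand-ins):

import pytest

def require_sort_key() -> None:
    raise ValueError("sort_key is required for non-dict elements")

# A bare `Exception` would also accept, say, a TypeError from a typo;
# naming the concrete type keeps the test honest.
with pytest.raises(ValueError):
    require_sort_key()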

View File

@@ -36,15 +36,14 @@ def write_imitation_archive(
 def write_imitation_archive_with_manifest(
     file_name: str, manifest: list[dict[str, t.Any]]
 ) -> None:
-    with tarfile.open(file_name, "w") as tf:
-        with TemporaryFile() as f:
-            f.write(json.dumps(manifest).encode("utf-8"))
+    with tarfile.open(file_name, "w") as tf, TemporaryFile() as f:
+        f.write(json.dumps(manifest).encode("utf-8"))

-            ti = tarfile.TarInfo("manifest.json")
-            ti.size = f.tell()
+        ti = tarfile.TarInfo("manifest.json")
+        ti.size = f.tell()

-            f.seek(0)
-            tf.addfile(ti, f)
+        f.seek(0)
+        tf.addfile(ti, f)


 def write_irrelevant_tar(file_name: str) -> None:
@@ -55,12 +54,11 @@ def write_irrelevant_tar(file_name: str) -> None:
     :type file_name: str
     """
-    with tarfile.open(file_name, "w") as tf:
-        with TemporaryFile() as f:
-            f.write("Hello, world.".encode("utf-8"))
+    with tarfile.open(file_name, "w") as tf, TemporaryFile() as f:
+        f.write("Hello, world.".encode("utf-8"))

-            ti = tarfile.TarInfo("hi.txt")
-            ti.size = f.tell()
+        ti = tarfile.TarInfo("hi.txt")
+        ti.size = f.tell()

-            f.seek(0)
-            tf.addfile(ti, f)
+        f.seek(0)
+        tf.addfile(ti, f)