Compare commits


696 Commits
2.7.13 ... main

Author SHA1 Message Date
Felix Fontein
947ec9a442 The next expected release will be 5.1.0. 2025-12-06 23:32:17 +01:00
Felix Fontein
25e7ba222e Release 5.0.4. 2025-12-06 22:45:11 +01:00
Felix Fontein
6ab8cc0d82
Improve JSON parsing error handling. (#1221) 2025-12-06 22:25:30 +01:00
Felix Fontein
159df0ab91 Prepare 5.0.4. 2025-12-06 17:57:12 +01:00
Felix Fontein
174c0c8058
Docker Compose 5+: improve image layer event parsing (#1219)
* Remove long deprecated version fields.

* Add first JSON event parsing tests.

* Improve image layer event parsing for Compose 5+.

* Add 'Working' to image working actions.

* Add changelog fragment.

* Shorten lines.

* Adjust docker_compose_v2_run tests.
2025-12-06 17:48:17 +01:00
Felix Fontein
2efcd6b2ec
Adjust test for error message for Compose 5.0.0. (#1217) 2025-12-06 14:04:39 +01:00
Felix Fontein
faa7dee456 The next release will be 5.1.0. 2025-11-29 23:16:22 +01:00
Felix Fontein
908c23a3c3 Release 5.0.3. 2025-11-29 22:35:55 +01:00
Felix Fontein
350f67d971 Prepare 5.0.3. 2025-11-26 07:30:53 +01:00
Felix Fontein
846fc8564b
docker_container: do not send wrong host IP for duplicate ports (#1214)
* DRY.

* Port spec can be a list of port specs.

* Add changelog fragment.

* Add test.
2025-11-26 07:29:30 +01:00
dependabot[bot]
d2947476f7
Bump actions/checkout from 5 to 6 in the ci group (#1211)
Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 06:20:29 +01:00
Felix Fontein
5d2b4085ec
Remove code that's not used. (#1209) 2025-11-23 09:48:34 +01:00
Felix Fontein
a869184ad4 Shut up pylint due to bugs. 2025-11-23 08:56:42 +01:00
Felix Fontein
a985e05482 The next expected release will be 5.1.0. 2025-11-16 13:54:23 +01:00
Felix Fontein
13e74e58fa Release 5.0.2. 2025-11-16 12:48:11 +01:00
Felix Fontein
c61c0e24b8
Improve error/warning messages w.r.t. YAML quoting (#1205)
* Remove superfluous conversions/assignments.

* Improve messages.
2025-11-16 12:32:51 +01:00
Felix Fontein
e42423b949 Forgot to update the version number. 2025-11-16 11:57:17 +01:00
Felix Fontein
0d37f20100 Prepare 5.0.2. 2025-11-16 11:56:18 +01:00
Felix Fontein
a349c5eed7
Fix connection tests. (#1202) 2025-11-16 10:55:07 +01:00
Felix Fontein
3da2799e03
Fix IP subnet and address idempotency. (#1201) 2025-11-16 10:47:35 +01:00
Felix Fontein
d207643e0c
docker_image(_push): fix push detection (#1199)
* Fix IP address retrieval for registry setup.

* Adjust push detection to Docker 29.

* Idempotency for export no longer works.

* Disable pull idempotency checks that play with architecture.

* Add more known image IDs.

* Adjust load tests.

* Adjust error message check.

* Allow for more digests.

* Make sure a new enough cryptography version is installed.
2025-11-16 10:09:23 +01:00
Felix Fontein
90c4b4c543
docker_image(_pull), docker_container: fix compatibility with Docker 29.0.0 (#1192)
* Add debug flag to failing task.

* Add more debug output.

* Fix pull idempotency.

* Revert "Add more debug output."

This reverts commit 64020149bf.

* Fix casing.

* Remove unreliable test.

* Add 'debug: true' to all tasks.

* Reformat.

* Fix idempotency problem for IPv6 addresses.

* Fix expose ranges handling.

* Update changelog fragment to also mention other affected modules.
2025-11-15 17:13:46 +01:00
Felix Fontein
68993fe353
docker_compose_v2: ignore result of build idempotency test since this seems like a hopeless case (#1196)
* Ignore result of idempotency test since this seems like a hopeless cause...

* And another one.
2025-11-15 17:06:21 +01:00
Felix Fontein
97314ec892
Move ansible-core 2.17 to EOL CI. (#1189) 2025-11-12 19:41:25 +01:00
Felix Fontein
ec14568b22
Work around Docker 29.0.0 bug. (#1187) 2025-11-12 19:21:55 +01:00
Felix Fontein
94d22f758b The next planned release will be 5.1.0. 2025-11-09 21:32:51 +01:00
Felix Fontein
aedf8f9674 Release 5.0.1. 2025-11-09 21:12:23 +01:00
Felix Fontein
86ea32b214 Prepare 5.0.1. 2025-11-08 10:02:08 +01:00
Nik Reiman
9d7dda7292
Fix error for "Cannot locate specified Dockerfile" (#1184)
In 3350283bcc, a subtle bug was introduced
by renaming this variable. Image builds that go down the `else`
branch never set this variable, which is then referenced below
when constructing the `params` dict. This results in a very confusing
error from the Docker backend when trying to build images:

> An unexpected Docker error occurred: 500 Server Error for
> http+docker://localhost/v1.51/build?t=molecule_local%2Fubuntu%3A24.04&q=False&nocache=False&rm=True&forcerm=True&pull=True&dockerfile=%2Fhome%2Fci%2F.ansible%2Ftmp%2Fmolecule.IaMj.install-github%2FDockerfile_ubuntu_24_04:
> Internal Server Error ("Cannot locate specified Dockerfile:
> /home/ci/.ansible/tmp/molecule.IaMj.install-github/Dockerfile_ubuntu_24_04")

Within the Docker daemon logs, the actual error presents itself like
this:

> level=debug msg="FIXME: Got an API for which error does not match any
> expected type!!!" error="Cannot locate specified Dockerfile:
> $HOME/.ansible/tmp/molecule.5DrS.install-package/Dockerfile_ubuntu_24_04"
> error_type="*errors.fundamental" module=api

Unfortunately, these are all red herrings and the actual cause of the
problem isn't Docker itself or the missing file, but in fact the
`docker_image` module not passing the correct parameter data here.
2025-11-08 10:01:05 +01:00
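
The failure mode described above is a branch-dependent assignment bug. A minimal sketch of the pattern, with hypothetical names rather than the module's actual code:

import os

def build_params(path, dockerfile=None):
    # After the rename, only the 'if' branch assigns dockerfile_path.
    if dockerfile is not None:
        dockerfile_path = os.path.join(path, dockerfile)
    # The 'else' path assigns nothing, yet the name is read unconditionally
    # below, so builds taking that path hand wrong parameter data to Docker.
    params = {
        "path": path,
        "dockerfile": dockerfile_path,  # unbound/stale when dockerfile is None
    }
    return params

# The fix is to give dockerfile_path a correct value on every path, e.g.:
# dockerfile_path = os.path.join(path, dockerfile) if dockerfile else None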
Felix Fontein
dee138bc4b
Fix typing info. (#1183) 2025-11-06 07:15:05 +01:00
Felix Fontein
00c480254d The next expected release will be 5.1.0. 2025-11-02 12:51:01 +01:00
Felix Fontein
02f787a930 Release 5.0.0. 2025-11-02 12:30:18 +01:00
Felix Fontein
ea76592af6 Prepare 5.0.0. 2025-10-29 21:15:29 +01:00
Felix Fontein
dbc7b0ec18
Cleanup with ruff check (#1182)
* Implement improvements suggested by ruff check.

* Add ruff check to CI.
2025-10-28 06:58:15 +01:00
Felix Fontein
3bade286f8 Fix mypy config. 2025-10-26 10:02:49 +01:00
Felix Fontein
3dcf394aa5 Remove stable-3 from weekly CI run. 2025-10-25 13:36:34 +02:00
Felix Fontein
7afd659459 Release 5.0.0-a1. 2025-10-25 11:29:02 +02:00
Felix Fontein
54084defd0 Prepare 5.0.0-a1. 2025-10-25 11:07:52 +02:00
Felix Fontein
95bdce75e6
Add ansible-lint to CI (#1181)
* Improve Ansible code.

* Add some ansible-lint ignores.

* Add ansible-lint to CI.
2025-10-25 11:07:40 +02:00
Felix Fontein
b24bce77b6
Use FQCNs. (#1180) 2025-10-25 10:12:21 +02:00
Felix Fontein
be000755fc
Python code modernization, 8/n (#1179)
* Use to_text instead of to_native.

* Remove no longer needed pylint ignores.

* Remove another pylint ignore.

* Remove no longer needed ignore.

* Address redefined-outer-name.

* Address consider-using-with.
2025-10-25 00:36:04 +00:00
Felix Fontein
6ad4bfcd40
Add typing information, 2/n (#1178)
* Add typing to Docker Stack modules. Clean modules up.

* Add typing to Docker Swarm modules.

* Add typing to unit tests.

* Add more typing.

* Add ignore.txt entries.
2025-10-25 01:16:04 +02:00
Felix Fontein
3350283bcc
Add typing information, 1/2 (#1176)
* Re-enable typing and improve config.

* Make mypy pass.

* Improve settings.

* First batch of types.

* Add more type hints.

* Fixes.

* Format.

* Fix split_port() without returning to previous type chaos.

* Continue with type hints (and ignores).
2025-10-23 07:05:42 +02:00
Felix Fontein
24f35644e3 Adjust checks. 2025-10-16 17:45:05 +02:00
Felix Fontein
6b5d76bdee
Adjust 'report this' messages to only report if the latest version still has this problem. (#1173) 2025-10-16 17:41:11 +02:00
Felix Fontein
3ff2cfe615
Drop support for docker-py. (#1171) 2025-10-15 21:55:07 +02:00
Felix Fontein
0646e52bae
Python code modernization, 7/n (#1170)
* Address abstract-method.

* Fix broken signature.
2025-10-15 21:27:20 +02:00
Felix Fontein
04fa3fe352
Ansible-core devel's version was bumped to 2.21.0.dev0, add stable-2.20 to CI (#1168)
* Ansible-core devel's version was bumped to 2.21.0.dev0.

* Add stable-2.20 to CI.
2025-10-15 11:39:45 +00:00
Felix Fontein
597162b153
Avoid Python 2 compat (conditional) imports. (#1167) 2025-10-13 22:31:59 +02:00
Felix Fontein
6f9ebc3f14 Fix issues with pylint 4.0. 2025-10-13 22:09:31 +02:00
Felix Fontein
16b5bfa27b Disable type checking for now. 2025-10-12 22:32:40 +02:00
Felix Fontein
17e30adb93
selectors is now part of stdlib. (#1166) 2025-10-12 22:00:51 +02:00
Felix Fontein
c75aa5dd64
Python code modernization, 5/n (#1165)
* Address raise-missing-from.

* Address simplifiable-if-expression.

* Address unnecessary-dunder-call.

* Address unnecessary-pass.

* Address use-list-literal.

* Address unused-variable.

* Address use-dict-literal.
2025-10-12 16:02:27 +02:00
Felix Fontein
cad22de628
Python code modernization, 4/n (#1162)
* Address attribute-defined-outside-init.

* Address broad-exception-raised.

* Address broad-exception-caught.

* Address consider-iterating-dictionary.

* Address consider-using-dict-comprehension.

* Address consider-using-f-string.

* Address consider-using-in.

* Address consider-using-max-builtin.

* Address some consider-using-with.

* Address invalid-name.

* Address keyword-arg-before-vararg.

* Address line-too-long.

* Address no-else-continue.

* Address no-else-raise.

* Address no-else-return.

* Remove broken dead code.

* Make consider-using-f-string changes compatible with older Python versions.

* Python 3.11 and earlier apparently do not like multi-line f-strings.
2025-10-11 23:06:50 +02:00
Felix Fontein
33c8a49191
Fix crashes due to wrong names. (#1161) 2025-10-11 15:29:14 +02:00
Felix Fontein
892e9d9cbd Reorganize imports due to https://github.com/ansible-community/antsibull-nox/pull/136. 2025-10-10 21:19:28 +02:00
Felix Fontein
f7e976f3da
Avoid losing data from events if multiple arrive at the same time. (#1158) 2025-10-10 20:21:21 +02:00
Felix Fontein
e8ec22d3b1
Python code modernization, 3/n (#1157)
* Remove __metaclass__ = type.

for i in $(grep -REl '__metaclass__ = type' plugins/ tests/); do
  sed -e '/^__metaclass__ = type/d' -i "$i";
done

* Remove super arguments, and stop inheriting from object.
2025-10-10 08:11:58 +02:00
Felix Fontein
741c318b1d
Python code modernization, 2/n (#1156)
* Adjust all __future__ imports:

for i in $(grep -REl "__future__.*absolute_import" plugins/ tests/); do
  sed -e 's/from __future__ import .*/from __future__ import annotations/g' -i $i;
done

* Remove all UTF-8 encoding specifications for Python source files:

for i in $(grep -REl '[-][*]- coding: utf-8 -[*]-' plugins/ tests/); do
  sed -e '/^# -\*- coding: utf-8 -\*-/d' -i "$i";
done

* Reformat.
2025-10-09 20:46:48 +02:00
Felix Fontein
a3efa26e2e
Address some pylint issues (#1155)
* Address cyclic-import.

* Address redefined-builtin.

* Address redefined-argument-from-local.

* Address many redefined-outer-name.

* Address pointless-string-statement.

* No longer needed due to separate bugfix.

* Address useless-return.

* Address possibly-used-before-assignment.

* Add TODOs.

* Address super-init-not-called.

* Address function-redefined.

* Address unspecified-encoding.

* Clean up more imports.
2025-10-09 20:11:36 +02:00
Felix Fontein
db09affaea Fix isort config. 2025-10-07 23:00:42 +02:00
Felix Fontein
ec5f7682a1
Prevent loss of data. (#1152) 2025-10-07 22:05:05 +02:00
Felix Fontein
0acb773127 Install more deps for type checking. 2025-10-07 19:40:32 +02:00
Felix Fontein
acf18f0ade
Add more CI checks (#1150)
* Enable mypy.

* Add flake8.

* Add pylint with a long list of ignores to be removed.
2025-10-07 19:37:16 +02:00
Felix Fontein
449b37e1c9
Fix docker_container_exec's detach=true. (#1145) 2025-10-07 18:49:20 +02:00
Felix Fontein
54c2e49fdf
Fix diff for plugin options. (#1146) 2025-10-07 18:31:27 +02:00
salty
ebb8569b5f
docker_container: add driver_opts and gw_priority (#1143)
closes #1142
2025-10-07 18:26:25 +02:00
Felix Fontein
d0f4ef57a4 Forgot to adjust antsibull-nox config. 2025-10-07 07:49:49 +02:00
Felix Fontein
117271579e
Make all doc fragments, module utils, and plugin utils private (#1144)
* Make all doc fragments, module utils, and plugin utils private.

* Remove some unused and no longer needed imports.

This hopefully also fixes the CI issues, which do not happen locally for me...

* Fix formatting.

* Try to make CI happy, again.

* Fix imports.

* Lint.
2025-10-07 07:32:33 +02:00
Felix Fontein
bb39e67c8f Make CI pass; add black and isort to CI; add reformat commit to .git-blame-ignore-revs. 2025-10-06 18:57:33 +02:00
Felix Fontein
d65d37e9e9 Reformat code with black and isort. 2025-10-06 18:34:59 +02:00
Felix Fontein
f45232635c
Python code modernization, 1/n (#1141)
* Remove unicode text prefixes.

* Replace str.format() uses with f-strings.

* Replace % with f-strings, and do some cleanup.

* Fix wrong variable.

* Avoid unnecessary string conversion.
2025-10-06 18:30:54 +02:00
Felix Fontein
1f2817fa20
Prepare 5.0.0 (#1123)
* Bump version to 5.0.0-a1.

* Drop support for ansible-core 2.15 and 2.16.

* Remove Python 2 and early Python 3 compatibility.
2025-10-05 20:22:50 +02:00
Felix Fontein
b9cf9015c4 Add stable-4 to CI. 2025-10-05 17:41:40 +02:00
Felix Fontein
d757294540 Release 4.8.1. 2025-10-05 16:30:40 +02:00
Felix Fontein
626426c199 Prepare 4.8.1. 2025-10-05 16:29:41 +02:00
Felix Fontein
251e4eca49
Remove remaining usages of ansible.module_utils.six. (#1140) 2025-10-05 16:17:50 +02:00
Felix Fontein
ebe42308cc
Replace ansible.module_utils.six with own module utils in some cases (#1138)
* Replace ansible.module_utils.six with own module utils in some cases.

* Add ignore.txt entries.
2025-10-04 23:45:27 +02:00
Felix Fontein
82b49c7cf2
Fix wrong replacements. (#1139) 2025-10-04 23:18:11 +02:00
Felix Fontein
1902e0fdf2
Avoid six in plugin code. (#1137) 2025-10-04 21:51:59 +02:00
Felix Fontein
de9794ffe8 The next expected release will be 4.9.0. 2025-10-03 22:53:57 +02:00
Felix Fontein
8723784cf0 Release 4.8.0. 2025-10-03 22:30:21 +02:00
Felix Fontein
82b3184605
Another try to add RHEL 10 to CI. (#1136) 2025-10-03 21:16:18 +02:00
Felix Fontein
f8ea3fcba3 Prepare 4.8.0. 2025-09-29 22:50:37 +02:00
Felix Fontein
fd011d3871
Support missing fields and missing types in mounts. (#1134) 2025-09-29 22:35:07 +02:00
Felix Fontein
8e2056fcb1
Only upload code coverage data for scheduled CI runs. (#1135) 2025-09-29 22:34:57 +02:00
Felix Fontein
a3093604fa
Put all integration test sessions into the nox config. (#1133)
This includes sessions for ansible-core versions that only run with AZP,
as well as remote sessions that do not work with GHA and therefore
require AZP even for older ansible-core versions.
2025-09-29 20:49:06 +02:00
Felix Fontein
c9c420c036
CI: Move ansible-core 2.16 from AZP to GHA (#1132)
* Move ansible-core 2.16 to EOL CI.

* Remove no longer relevant EOL CI badge.

* CentOS 7 does not work in GHA.
2025-09-27 12:27:44 +02:00
Felix Fontein
1e038c072f
CI: replace felixfontein/ansible-test-gh-action@main with antsibull-nox. (#1131) 2025-09-27 11:35:25 +02:00
Felix Fontein
ad7397a332 Add repository configuration to antsibull-nox.toml. 2025-09-26 06:58:05 +02:00
Felix Fontein
5e58fee998 Disable Ubuntu 24.04 for now since it's a lot slower than RHEL 9.6. 2025-09-16 06:55:47 +02:00
Felix Fontein
2b5c06da20
CI: Start using Ubuntu VMs instead of RHEL VMs (#1128)
* Start using Ubuntu VMs instead of RHEL VMs.

* Use correct Python executable.

* Fix starting podman on non-RHEL systems.
2025-09-14 23:27:52 +02:00
Felix Fontein
93d165e10b
Restrict cffi on Python 2. (#1126) 2025-09-12 19:58:48 +02:00
dependabot[bot]
3e2b149dc2
Bump actions/setup-go from 5 to 6 in the ci group (#1124)
Bumps the ci group with 1 update: [actions/setup-go](https://github.com/actions/setup-go).


Updates `actions/setup-go` from 5 to 6
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 18:50:34 +02:00
Felix Fontein
1f53619edf
Add ignores necessary for ansible-core 2.20 if Python 2.7 is still supported by the collection. (#1122) 2025-08-28 21:15:43 +02:00
Laurent Commarieu
ba58752646
Rename login_results to login_result in docker_login docstring (#1121) 2025-08-26 21:48:57 +02:00
dependabot[bot]
3d44b9569c
Bump actions/checkout from 4 to 5 in the ci group (#1120)
Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 4 to 5
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-25 12:06:08 +02:00
doftmoon
fdf9f83454
fix: typo in scenario_guide.rst (#1118)
the duplication typo in listing
2025-08-22 22:25:50 +02:00
Felix Fontein
68ac6fecb1
Avoid deprecated functionality. (#1117) 2025-08-17 20:15:40 +02:00
Felix Fontein
1ba34b9b7c
CI: Add Debian 13 Trixie (#1113)
* Add Debian 13 Trixie to CI.

* I don't think this is needed any longer.

* Debian: adjust way GPG signature is installed for Docker's software repo.
2025-08-10 20:11:54 +02:00
Felix Fontein
0631d15656 The next expected release will be 4.8.0. 2025-08-04 19:49:55 +02:00
Felix Fontein
8c66e6299c Release 4.7.0. 2025-08-04 19:24:43 +02:00
Felix Fontein
da76583d6b
Use distribution:3.0.0. (#1112) 2025-08-03 17:56:32 +02:00
Felix Fontein
47197cf7d2 Update release summary. 2025-08-03 15:20:10 +02:00
Felix Fontein
e1920d1cc7
Work around bug in Docker 28.3.3 that prevents pushing to registry without authentication. (#1110) 2025-08-03 15:19:16 +02:00
Felix Fontein
cfd59ac9e5 Add Distribution 3.0.0 image. 2025-08-03 14:18:29 +02:00
Felix Fontein
c565698f09 Prepare 4.7.0. 2025-08-03 13:13:28 +02:00
tpourcelot
449448e820
docker_swarm_service: add support for replicated jobs (#1108)
* feat(docker_swarm_service): Add support for replicated jobs

* chore(docker_swarm_plugin): Fixes after review

* chore(docker_swarm_service): Add a check for minimum version

* chore(docker_swarm_service): Add changelog fragment for #1108

* fix(docker_swarm_service): Fix typo in version check

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Tristan Pourcelot <tristan.pourcelot@exatrack.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-03 13:12:29 +02:00
Felix Fontein
920015706b
Ansible-core devel EE: use Python 3.12. (#1107) 2025-07-31 21:48:10 +02:00
Felix Fontein
a45d5b6d2f
Bump AZP container. (#1104) 2025-07-29 17:38:46 +02:00
Felix Fontein
0224e5faef Normalize changelog configs. 2025-07-27 16:35:31 +02:00
Felix Fontein
e0139eee10 The next expected release will be 4.7.0. 2025-07-26 15:24:40 +02:00
Felix Fontein
3913a9aec1 Release 4.6.2. 2025-07-26 14:40:37 +02:00
Felix Fontein
e0a4f37e31
Bump Alpine 3.21 to 3.22, Fedora 41 to 42, and RHEL 9.5 to 9.6. (#1103)
Add old versions to stable-2.19 if not present yet.
2025-07-26 12:23:40 +02:00
Felix Fontein
89c58cd171 Prepare 4.6.2. 2025-07-25 22:00:45 +02:00
Felix Fontein
ac301beebd
Adjust to Compose 2.39.0+. (#1101) 2025-07-25 21:59:41 +02:00
Felix Fontein
8365810b52
Move EE tests to antsibull-nox (#1100)
* Move EE tests to antsibull-nox.

* Make EE tests work.
2025-07-25 21:23:06 +02:00
Felix Fontein
8b55159279 Skip tabs. 2025-07-06 18:08:11 +02:00
Felix Fontein
9fa966dca6 Adjust README. 2025-07-01 22:34:34 +02:00
Felix Fontein
e1347723d1
CI: Add stable-2.19 (#1095)
* Add ignore-2.20.txt.

* Add stable-2.19 to CI.
2025-07-01 07:30:08 +02:00
Felix Fontein
c8fc5bc175
Apparently Compose 2.37.3 + Docker 28.3.0 result in a behavior change. (#1093) 2025-06-25 22:00:42 +02:00
Felix Fontein
ebaf1c73ff
Adjust scenario guide to reality. (#1091) 2025-06-22 00:40:17 +02:00
Felix Fontein
c1df4bc8da Add linting check for RST code blocks. Fix code block languages. 2025-06-18 21:49:42 +02:00
Felix Fontein
ae1d457b49
ContainerName is also missing in 2.37.1. (#1088) 2025-06-14 17:03:27 +02:00
Felix Fontein
d354f6b40d Next release is expected to be 4.7.0. 2025-06-09 18:52:55 +02:00
Felix Fontein
2e20028392 Release 4.6.1. 2025-06-09 13:16:44 +02:00
Felix Fontein
2108c3dc71 Forgot community.general. 2025-06-08 20:25:00 +02:00
Felix Fontein
eba578cc92 Prepare 4.6.1. 2025-06-08 19:08:22 +02:00
Felix Fontein
e9f4553b01
docker_container idempotency: work around Docker not returning true configured command when command is [] (#1085)
* Work around Docker not returning true configured command when command is [].

* Lint.

* Add test.

* Add changelog fragment.
2025-06-08 19:05:09 +02:00
Felix Fontein
8ecbd9a5cc
docker_compose_v2: work around bug in docker compose images --format json (#1083)
* Work around bug in docker compose images --format json.

* ContainerName is no longer in image record.
2025-06-07 23:28:51 +02:00
Felix Fontein
72d827a9e2 Enable no-trailing-whitespace test. 2025-06-04 15:18:19 +02:00
Felix Fontein
5a992bb34d
Use community.crypto 2.x.y for ansible-core < 2.17. (#1078) 2025-05-06 21:43:59 +02:00
Felix Fontein
38591969dd Use community.crypto 2.x.y on ubuntu2004. 2025-05-03 13:24:31 +02:00
Felix Fontein
c9835e9b65 The next expected release will be 4.7.0. 2025-05-02 10:16:42 +02:00
Felix Fontein
9a93812d3b Release 4.6.0. 2025-05-02 09:51:05 +02:00
Felix Fontein
9a9372dd06 Prepare 4.6.0. 2025-05-02 06:22:34 +02:00
Felix Fontein
cdf02b642c Lint doc fragments. 2025-05-01 16:46:13 +02:00
Felix Fontein
295428167b
Use community.crypto 2.x.y for older ansible-core versions / Python versions. (#1076) 2025-05-01 12:30:07 +02:00
Felix Fontein
e0b9c45579
Re-enable RHEL 8 in CI (#1075)
* Re-enable RHEL 8 in CI.

* Skip podman on RHEL 8.
2025-04-28 21:42:31 +02:00
Felix Fontein
ad989c1942
docker_container_copy_into: add mode_parse option (#1074)
* Add mode_parse option.

* Make yamllint config strict.

* Lint.
2025-04-28 20:46:11 +02:00
Felix Fontein
424b39fe36 Since we require ansible-core >= 2.15, nothing before Python 2.7 is supported or used in tests anyway. 2025-04-27 12:24:53 +02:00
Felix Fontein
961acd9120 Remove no longer needed file. 2025-04-27 12:24:10 +02:00
Felix Fontein
62d0b3d1cb Add reformat commit to .git-blame-ignore-revs. 2025-04-26 12:39:53 +02:00
Felix Fontein
2487d1a0bf Fix linting errors. 2025-04-26 12:39:00 +02:00
Felix Fontein
795e6b23dc Add yamllint to antsibull-nox and add config files. 2025-04-26 12:39:00 +02:00
Felix Fontein
3a3ece3ba5
Warn about octal modes. (#1072) 2025-04-26 12:22:16 +02:00
Felix Fontein
545d99e7c1 Fix info on blanket license statement for changelog fragments. 2025-04-24 22:48:20 +02:00
Felix Fontein
5cbd81e7a7 Adjust times. 2025-04-19 19:57:11 +02:00
Felix Fontein
e20118b68f
Run extra sanity tests with nox. (#1068) 2025-04-19 17:54:12 +02:00
Felix Fontein
8694f488d7
CI: fix certificates for HTTPS connection tests (#1066)
* Try to fix CA cert for HTTPS connection tests.

* Try to fix leaf certificate.

* Add more properties.
2025-04-11 14:09:01 +02:00
Felix Fontein
106c3d33d6 Migrate .reuse/dep5 to REUSE.toml. 2025-03-29 12:17:22 +01:00
Felix Fontein
80329beade The next expected release will be 4.6.0. 2025-03-22 13:11:15 +01:00
Felix Fontein
2d65015e86 Release 4.5.2. 2025-03-22 12:52:14 +01:00
Felix Fontein
585595187b Prepare 4.5.2. 2025-03-22 12:37:05 +01:00
Felix Fontein
a1e9412bed
Use new tools from community.internal_test_tools. (#1061) 2025-03-22 11:59:27 +01:00
Felix Fontein
635716c07b
docker_compose_v2: use --yes when available instead of -y (#1060)
* Use --yes if available.

* Add smoke test.
2025-03-21 22:05:10 +01:00
londondaintta
c13b891bc9
fix(docker_compose_v2): fix version check (#1059)
* fix version check

* add changelog fragment
2025-03-17 20:20:21 +01:00
Felix Fontein
9730b2a3c3
Use shared unit test utils from community.internal_test_tools. (#1056) 2025-03-12 21:39:05 +01:00
Felix Fontein
9972eee967 Make sure that community.internal_test_tools is installed for unit tests. 2025-03-12 21:00:09 +01:00
Felix Fontein
5b5b4e7204 The next expected release will be 4.6.0. 2025-03-11 20:31:01 +01:00
Felix Fontein
7355c7de0d Release 4.5.1. 2025-03-11 20:09:11 +01:00
Felix Fontein
9d015a2563 Prepare 4.5.1. 2025-03-10 22:19:36 +01:00
Jonas Geiler
a3f9c21228
fix(docker_compose_v2): use correct flag for assume_yes (#1054)
* fix: use correct flag for `assume_yes`

The correct flag is `--y`, not `--yes`.

* refactor(docker_compose_v2): use `-y` instead of `--y` to ensure future compatibility

Maybe they'll change it back to `--yes` sometime, so I'll use the short form that most likely won't change.

* docs(docker_compose_v2): add note about `-y` flag

Co-authored-by: Felix Fontein <felix@fontein.de>

* chore: add changelog fragment

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-03-10 22:18:45 +01:00
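
The pattern behind this fix (and the later follow-up in #1060, "use --yes when available instead of -y") is a version-gated CLI flag: detect the Compose release and emit whichever spelling of the non-interactive flag it understands. A minimal sketch; the version boundary below is an assumption for illustration, not the module's actual cutoff:

from packaging.version import Version

# Assumption: '--yes' is only accepted from this release on; older
# releases that prompt at all only understand the short form '-y'.
LONG_FORM_MIN = Version("2.34.0")

def assume_yes_flag(compose_version):
    # Pick the non-interactive flag matching the detected Compose release.
    if Version(compose_version) >= LONG_FORM_MIN:
        return "--yes"
    return "-y"

print(assume_yes_flag("2.33.1"))  # -> -y
print(assume_yes_flag("2.35.0"))  # -> --yes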
Felix Fontein
fdb97428a3
Fix/improve tests (#1052)
* Improve unit test condition.

* Improve/fix tests.
2025-03-08 09:54:41 +01:00
Felix Fontein
ca7f3eb82f Group CI updates. 2025-03-03 19:00:27 +01:00
Felix Fontein
efe50114a7 The next expected release will be 4.6.0. 2025-03-03 18:46:48 +01:00
Felix Fontein
799fe434e5 Release 4.5.0. 2025-03-03 18:26:31 +01:00
Felix Fontein
6a69fbc0b0 Prepare 4.5.0. 2025-03-01 15:41:25 +01:00
Felix Fontein
cfb970bd53
docker_network: add enable_ipv4 option (#1049)
* Add enable_ipv4 option.

* Add changelog fragment.
2025-03-01 15:03:43 +01:00
londondaintta
187014101b
add --yes parameter to docker compose up (#1045)
* add yes option for compose up to assume yes and prevent hanging

* fix type

* add default

* add changelog fragment

* Apply doc suggestion

Co-authored-by: Felix Fontein <felix@fontein.de>

* set version_added to 4.5.0

* use `assume_yes` to avoid clashing with yaml `yes` keyword

* add version check

* default self.yes to False

* update description

* Fail on older version

Co-authored-by: Felix Fontein <felix@fontein.de>

* update changelog fragment

* update description

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-03-01 15:03:23 +01:00
Felix Fontein
f99a9b618c Next expected release will be 4.5.0. 2025-02-13 21:47:39 +01:00
Felix Fontein
e36ee04ea6 Release 4.4.0. 2025-02-13 21:27:43 +01:00
Felix Fontein
5ce4783053 Prepare 4.4.0 release. 2025-02-13 21:10:00 +01:00
Felix Fontein
ab53cb2e80 Clean up workflow. 2025-02-11 22:52:54 +01:00
Felix Fontein
22ab85fe2b
docker_context_info: fix some aspects (#1043)
* Extend docker_context_info tests.

* Fix a bug in the context code.

* Fix TLS handling for contexts.

* Adjust code to fix tests.
2025-02-10 23:54:36 +01:00
Felix Fontein
20042ea780
Add basic podman tests to CI (#1040)
* Setup podman and run some basic tests with it.

* Clean up Docker setup.
2025-02-10 23:19:54 +01:00
Felix Fontein
18ca4184cc
Cleanup AZP config similarly to ansible-core did some years ago. (#1041) 2025-02-10 22:44:36 +01:00
Felix Fontein
3b6068e44b
Add docker_context_info module (#1039)
* Vendor parts of the Docker SDK for Python

This is a combination of the latest git version
(db7f8b8bb6)
with some fixes to make it compatible with Python 2.7
and adjusting some imports.

* Polishing.

* Fix bug that prevents contexts from being found when no Docker config file is present.

Ref: https://github.com/docker/docker-py/issues/3190

* Linting.

* Fix typos.

* Adjust more to behavior of Docker CLI.

* Add first iteration of docker_context_info module.

* Improvements.

* Add basic CI.

* Add caveat on contexts[].config result.
2025-02-10 21:59:05 +01:00
Alexandre Díaz
ea3ac5f195
fix: docker_compose_v2_run: no need to sanitize labels (#1034) 2025-02-02 17:27:28 +01:00
Felix Fontein
bcd6e57450
Vendored Docker SDK for Python code: remove unused constants (#1037)
* Remove constants that are never used.

* Adjust unit tests.
2025-02-01 23:14:19 +01:00
Felix Fontein
511cfe52ca
Improve error handling. (#1035) 2025-01-31 19:39:08 +01:00
Felix Fontein
8bae4e9c6d
Also mention glob patterns and redirects. (#1032) 2025-01-25 22:26:17 +01:00
Felix Fontein
ce074ba8f0 The next expected release is 4.4.0. 2025-01-23 20:48:06 +01:00
Felix Fontein
05eb3b90eb Release 4.3.1. 2025-01-23 20:13:09 +01:00
Felix Fontein
b1bba23507 Prepare 4.3.1 release. 2025-01-22 21:46:43 +01:00
Felix Fontein
9cc70f5202
Fix label sanitization error handling. (#1029) 2025-01-22 20:45:08 +00:00
Felix Fontein
9e26c4794e
docker_compose_v2: fix tests (#1027)
* Since docker-compose 2.32.2 present_3 is no longer changed.

This has been caused by https://github.com/docker/compose/pull/12442,
since that PR removes the "building" event.

* Remove deprecated 'version' fields.
2025-01-14 21:17:05 +01:00
Felix Fontein
993d66971d
CI: Try to get more targets for SSH connection test (#1026)
* Try to get more targets for SSH connection test.

* Install paramiko from system repos on CentOS 7.
2025-01-11 12:54:11 +01:00
Felix Fontein
b72e17cc53
Add Fedora 41, Alpine 3.21, RHEL 9.5 to CI for devel. (#1024) 2025-01-08 07:09:16 +01:00
Felix Fontein
29ff1241ce
Use multiple YAML documents for inventory plugin examples. (#1023) 2025-01-07 21:21:01 +01:00
Felix Fontein
4aed658919 Fix CI badge image URL. 2025-01-04 11:22:50 +01:00
Felix Fontein
1a218f3c5e Improve extra sanity test for docker action group. 2025-01-03 14:50:33 +01:00
Felix Fontein
8c1e3eb5cf The next expected release will be 4.4.0. 2024-12-30 22:29:08 +01:00
Felix Fontein
3da95fcebf Release 4.3.0. 2024-12-30 22:04:17 +01:00
Felix Fontein
0ae405a3e1 Prepare 4.3.0 release. 2024-12-30 21:17:00 +01:00
Felix Fontein
5bfec5d4d2
Add 'idempotent' attribute (#1022)
* Add 'idempotent' attribute.

* Mention check mode in attribute description.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2024-12-30 21:11:14 +01:00
Felix Fontein
c10ae4a24d Fix doc fragments indents. 2024-12-29 15:48:30 +01:00
Florian Apolloner
6172a9291c
Determine the compose version via a CLI call and not the docker API. (#1021)
* Determine the compose version via a CLI call and not the docker API.

* Update plugins/module_utils/compose_v2.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-12-29 14:13:39 +01:00
Felix Fontein
bd992583c2 Improve formulations. 2024-12-28 17:09:33 +01:00
Felix Fontein
769d15de63
Reformat documentation with 'andebox yaml-doc' (#1020)
* Reformat documentation with 'andebox yaml-doc'.

* Revert unwanted changes.

* Fix too long lines.

* Fix broken quotes.

* Forgot two line breaks.
2024-12-28 16:40:50 +01:00
Felix Fontein
f69536ef3b Improve language. 2024-12-28 14:30:49 +01:00
Felix Fontein
04c97728dc
Arch Linux updated to Python 3.13. (#1018) 2024-12-22 21:27:52 +01:00
Felix Fontein
d17ee667ce
docker_network: adjust documentation to reality for state=absent + force=true. (#1016) 2024-12-20 22:51:04 +01:00
Felix Fontein
742a373e59 Next expected release will be 4.3.0. 2024-12-16 21:09:59 +01:00
Felix Fontein
e638e02124 Release 4.2.0. 2024-12-16 20:42:31 +01:00
Felix Fontein
18287220ab Fix README. 2024-12-14 22:05:30 +01:00
Felix Fontein
6a377eefdc Use correct workflow. 2024-12-14 21:54:27 +01:00
Felix Fontein
e6bfd9bda3 Prepare 4.2.0. 2024-12-14 21:34:19 +01:00
Felix Fontein
8616e7f6f2
docker_image_build: work around strange behavior of docker buildx build when --output is provided (#1006)
* Work around strange behavior of docker buildx build when --output is provided.

* Adjust tests.

* Allow to pass multiple image names; correctly quote --output values.

* Return executed command.

* Adjust tests.
2024-12-14 21:32:33 +01:00
Felix Fontein
2e7b4e4605
docker_compose_v2: add ignore_build_events option; ignore build events by default (#1011)
* Add ignore_build_events option.

* Adjust docs and tests.

* Switch default to true.

* Remove unnecessary parts from tests.
2024-12-14 19:54:40 +01:00
Felix Fontein
80770ed972
Fix some issues pointed out by zizmor. (#1009) 2024-12-14 15:31:16 +01:00
Felix Fontein
7583ea82ac
Prevent crash if Mode isn't present, which happens for Swarm jobs. (#1003) 2024-12-04 21:39:50 +01:00
Maksim Vorobyev
e19812917d
Add 'ingress' option to docker_network module (#999)
* Add 'ingress' option to docker_network module

* sanity fixes

* add changelog fragment

* Update plugins/modules/docker_network.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/999-add-ingress-option-to-docker_network-module.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/docker_network.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* move 'ingress' tests to overlay.yml

* move Swarm init and Swarm cleanup to block

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-12-04 21:39:09 +01:00
Felix Fontein
d8548ef55f The next expected release will be 4.2.0. 2024-11-23 14:51:08 +01:00
Felix Fontein
c294fa4063 Release 4.1.0. 2024-11-23 14:32:32 +01:00
Michael
6595d299e2
Doc fix for docker_container image_name_mismatch (#991)
* doc-fix-image-name-mismatch

* Update description.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-11-23 14:12:27 +01:00
Felix Fontein
78bdccd453
Correctly set can_talk_is_docker. (#995) 2024-11-23 13:19:32 +01:00
Felix Fontein
8344999c0c Prepare 4.1.0 release. 2024-11-23 12:54:02 +01:00
Sánta Balázs Levente
e3b36e5f0a
module docker_compose_v2_run: fix env argument (#992)
* module docker_compose_v2_run: fix env argument

* fix missing "--env" in docker_compose_v2_exec, and added changelog fragment

* Update changelogs/fragments/992-module-docker_compose_v2_run-fix-env-argument.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-11-23 00:06:08 +01:00
dependabot[bot]
9b2a371c00
Bump fsfe/reuse-action from 4 to 5 (#989)
Bumps [fsfe/reuse-action](https://github.com/fsfe/reuse-action) from 4 to 5.
- [Release notes](https://github.com/fsfe/reuse-action/releases)
- [Commits](https://github.com/fsfe/reuse-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: fsfe/reuse-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 19:36:11 +01:00
aliou-sidibe
fb9784e4c7
Add 'detach' option to docker_stack module to control immediate exit behavior on stack deployment/removal (#987) 2024-11-17 15:30:32 +01:00
Felix Fontein
c17fef37b3 Next expected release is 4.1.0. 2024-11-10 12:33:47 +01:00
Felix Fontein
385839d891 Release 4.0.1. 2024-11-10 12:08:04 +01:00
Felix Fontein
4157bd8269 Prepare 4.0.1 release. 2024-11-09 23:54:27 +01:00
Felix Fontein
1e10834905
Sanitize labels. (#985) 2024-11-09 23:53:22 +01:00
Felix Fontein
6daeff69f6 Update CI for old stable branches. 2024-10-20 11:04:00 +02:00
Felix Fontein
9da9e35df7 The next expected release will be 4.1.0. 2024-10-20 10:56:49 +02:00
Felix Fontein
90cf544dba Release 4.0.0 2024-10-20 10:38:33 +02:00
Felix Fontein
a740cfa0c4
Add more tests. (#980) 2024-10-19 22:07:06 +02:00
Christoph
be5564d4de
add renew_anon_volumes parameter to docker compose up (#977)
* add `renew_anon_volumes` parameter to `docker compose up`

* Apply suggestions from code review

Apply suggested changes to Documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix sanity check error

apply suggestion from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* add changelog fragment for PR #977

* apply suggested changes to changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Christoph Sieber <Christoph.Sieber@telekom.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2024-10-19 14:21:13 +02:00
Felix Fontein
309a30e9be Fix reuse workflow branches. 2024-10-19 12:34:47 +02:00
Felix Fontein
f7823ea626
Prepare 4.0.0 release. (#971) 2024-10-18 21:01:49 +02:00
Felix Fontein
8c5b90df55 The next expected release will be 3.13.2. 2024-10-15 20:49:29 +02:00
Felix Fontein
0749d61513 Release 3.13.1. 2024-10-15 20:30:18 +02:00
Felix Fontein
9f55d1c5b7 Prepare 3.13.1 release. 2024-10-14 20:58:16 +02:00
Felix Fontein
28e87f4602
Compose v2: improve parsing of dry-run building JSON events (#976)
* Catch more warnings that shouldn't be there.

* Add explicit handling of dry-run image build JSON events.

These produce some one-off ID values that do not make sense as IDs.
2024-10-14 20:56:49 +02:00
Felix Fontein
ea38591dec Next expected release is 3.14.0. 2024-10-04 11:03:37 +02:00
Felix Fontein
54d70d9afc Release 3.13.0. 2024-10-04 10:31:38 +02:00
Felix Fontein
1485adce29
Make clear that inventory config files need to have a very specific ending. (#974) 2024-10-04 10:17:23 +02:00
Felix Fontein
0806996f82 Prepare 3.13.0 release. 2024-09-28 08:37:31 +03:00
Felix Fontein
423a9bbf61
Add Docker Compose v2 exec and run modules. (#969) 2024-09-27 13:00:48 +03:00
Felix Fontein
d478174786
Add stable-2.18 to CI. (#970) 2024-09-24 13:46:51 +03:00
Felix Fontein
e87ad6188c Next expected release will be 3.13.0. 2024-09-17 21:21:11 +02:00
Felix Fontein
bfb0fed227 Release 3.12.2. 2024-09-17 21:02:04 +02:00
Felix Fontein
ca648a0390 Prepare 3.12.2 release. 2024-09-17 20:52:58 +02:00
Felix Fontein
3802e424d9
docker_prune: improve docs, fix handling of lists for filters (#966)
* Improve docs.

* Fix handling of lists for filters.
2024-09-17 20:50:48 +02:00
x4e-jonas
d8cefc4190
Fix typo in Docker connection tests. (#964)
Co-authored-by: x4e-jonas <x4e-jonas@users.noreply.github.com>
2024-09-09 15:28:17 +02:00
Felix Fontein
37df0e8e28
Remove link to Google Groups mailing list. (#962)
Ref: https://groups.google.com/g/ansible-project/c/B0oKR0aQqXs
2024-09-08 16:16:06 +02:00
Felix Fontein
df9f84f216 Improve communication link description. 2024-08-15 21:40:17 +02:00
Felix Fontein
ca5fe4dc10 Next expected release is 3.13.0. 2024-08-13 10:17:23 +02:00
Felix Fontein
6791364105 Release 3.12.1. 2024-08-13 10:03:00 +02:00
Felix Fontein
dbe99e3a63 Prepare 3.12.1. 2024-08-13 09:47:07 +02:00
Felix Fontein
a4aa8d3224
Announce dropping support for ansible-core < 2.15 in next major release. (#954) 2024-08-12 21:23:03 +02:00
Felix Fontein
d36768609d
Improve communication info. (#953) 2024-08-12 17:05:18 +02:00
Andrew Klychkov
65ead853e7
README: Add Communication section with Forum information (#950)
* README: Add Communication section with Forum information

* Insert tag, remove category.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-08-12 13:00:24 +02:00
Felix Fontein
e1e4d5df1a Next expected release is 3.13.0. 2024-08-07 16:32:46 +02:00
Felix Fontein
d797af0d67 Release 3.12.0. 2024-08-07 16:15:23 +02:00
Felix Fontein
3cc27ecd65
Handle yet another random unstructured error output. (#949) 2024-08-07 15:58:23 +02:00
Felix Fontein
d91f854d45
Fix composition of --output parameters. (#947) 2024-08-01 17:22:10 +02:00
Felix Fontein
41445def33
Upload Docker image used for connection tests to GHCR. (#944) 2024-07-25 20:46:43 +00:00
Felix Fontein
c3aceebd7d
Docker* connection plugins: add working_dir and privileged options (#943)
* Add working_dir option.

* Add privileged option.

* Add basic tests.

* Also test privileged.
2024-07-25 20:35:32 +00:00
Felix Fontein
7464002bc3
Docker* connection plugins: allow to pass extra environment variables when running commands (#940)
* Allow to pass extra environment variables when running commands.

* Make compatible with older Python.

* Remove env and ini sources for extra_env.
2024-07-25 21:26:15 +02:00
Felix Fontein
0fe84b510b
docker_compose_v2_pull: add new options ignore_buildable, include_deps, and services; fix service CLI for docker_compose_v2 module (#942)
* Add new options for --ignore-buildable, --include-deps, and for providing services.

* Add services after -- and not before.
2024-07-25 20:47:32 +02:00
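
The second bullet applies a standard CLI rule: positional arguments belong after the '--' separator, which ends option parsing so a service name can never be misread as a flag. A sketch of the command assembly (function name hypothetical):

def compose_pull_argv(options, services):
    # Build the 'docker compose pull' command line.
    argv = ["docker", "compose", "pull"] + list(options)
    if services:
        # '--' terminates option parsing; services must follow it, not precede it.
        argv += ["--"] + list(services)
    return argv

print(compose_pull_argv(["--ignore-buildable"], ["web", "db"]))
# ['docker', 'compose', 'pull', '--ignore-buildable', '--', 'web', 'db']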
Felix Fontein
45b2531129
docker_compose_v2* modules: use --progress json for Compose 2.29.0+ (#931)
* Use --progress json for Compose 2.29.0+.

* Add changelog fragment.

* Fix/improve handling of warnings.

* Improve parsing of warnings and some one-off messages.

* Improve warnings.

* Handle tail messages.

* Fix bug in regular event parsing.
2024-07-25 18:33:42 +02:00
Felix Fontein
ebec16d42c
Handle network_mode=default correctly for Docker 26.1.0+. (#936) 2024-07-25 18:30:12 +02:00
Felix Fontein
2ddadf1e2b
docker_container: pass networks to Daemon on container creation (#933)
* Pass networks to Daemon on container creation.

* Restore old behavior, and only provide all networks on creation for API 1.44+.
2024-07-23 17:34:26 +02:00
Felix Fontein
11ce793f7d Prepare 3.12.0 release. 2024-07-20 17:07:34 +02:00
Felix Fontein
22bbfbaf8b
CLI modules: improve docker version/info output processing, avoid querying for API version if it's not needed (#935)
* Don't assume that docker version/info JSON output contains the expected fields.

* Allow CLI modules to not require the API version.

* Add changelog fragment.
2024-07-20 15:51:02 +02:00
Felix Fontein
a30fd93a44
Check for unparsable messages. (#932) 2024-07-17 23:54:31 +02:00
Felix Fontein
609fa2c8b4 Reformat and re-order changelogs/changelog.yaml. 2024-07-11 22:44:27 +02:00
Felix Fontein
852cf73e2d Next expected release is 3.12.0. 2024-07-09 22:33:00 +02:00
Felix Fontein
65d8dc8908 Release 3.11.0. 2024-07-09 22:13:38 +02:00
Felix Fontein
4b7e74b75e
docker_container: allow to wait for a container to become healthy (#921)
* Allow to wait for a container to become healthy.

* Improve wording.

Co-authored-by: Don Naro <dnaro@redhat.com>

* Improve explanation.

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2024-07-09 20:07:00 +02:00
Felix Fontein
ec37166a6c
Use registry image from ghcr.io. (#927) 2024-07-08 19:22:18 +00:00
Felix Fontein
8e8a091469
Get rid of hello-world image, 2/2 (#925)
* Use our image for container platform test.

* Remove docker_test_image_hello_world_platform image.
2024-07-08 09:27:27 +02:00
Felix Fontein
f9461bb441
Get rid of hello-world image, 1/2 (#924)
* Use our image for pull test.

* Add 386 versions of the images.
2024-07-08 09:04:06 +02:00
dependabot[bot]
4277b60340
Bump fsfe/reuse-action from 3 to 4 (#923)
Bumps [fsfe/reuse-action](https://github.com/fsfe/reuse-action) from 3 to 4.
- [Release notes](https://github.com/fsfe/reuse-action/releases)
- [Commits](https://github.com/fsfe/reuse-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: fsfe/reuse-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-08 07:51:45 +02:00
Felix Fontein
8efbd560f9 Use variable instead of image directly. 2024-07-07 23:20:42 +02:00
Felix Fontein
6fcbd34e23
Prevent infinite loop. (#922) 2024-07-07 23:18:58 +02:00
Felix Fontein
7ec56d33cb Fix health check so that Docker also likes it... 2024-07-07 22:30:05 +02:00
Felix Fontein
569880486f
Improve health check image. (#919) 2024-07-07 21:19:26 +02:00
Felix Fontein
ff412f475e Disable Go module info caching. 2024-07-07 20:20:10 +02:00
Felix Fontein
f69a29403b
Add health check test image. (#918) 2024-07-07 19:31:16 +02:00
Felix Fontein
259f2cf8b7
Fix docker_compose_v2 example. (#917) 2024-07-06 21:20:13 +02:00
Felix Fontein
48c0cdf2c5
Improve parsing of skipped messages. (#916) 2024-07-06 21:10:39 +02:00
Ethan Williams
e2f93a0c66
fix mis-named keys and invalid values in copy into example (#915) 2024-07-06 20:55:12 +02:00
Felix Fontein
bbf163e61d
Add link to forum. (#913) 2024-07-05 22:38:24 +02:00
Fran Jurinec
9b5dbd4543
Add support for device_cgroup_rules parameter (#910) 2024-07-04 09:51:32 +02:00
Felix Fontein
7fe2f57951 'alternatives' is now required. 2024-07-04 08:01:06 +02:00
Felix Fontein
1713995bfc
Fix CI for CentOS 7. (#908) 2024-07-01 13:51:02 +02:00
Andrew Dawes
d98850e9e9
Support ansible-test integration tests for arm64 (#906)
* Support ansible-test integration for arm64

* Replace set_fact with inline templated conditional
2024-06-30 14:07:25 +02:00
Felix Fontein
2ce838ab92
Use new images for export/import tests. (#905) 2024-06-29 19:20:24 +02:00
Felix Fontein
81cabbf697
CI: Run some tests with the latest development versions of Docker SDK for Python, requests, and urllib3 (#902)
* Run some tests with the latest development versions of Docker SDK for Python, requests, and urllib3.

* Use LooseVersion instead of StrictVersion to parse urllib3 versions.
2024-06-29 18:57:08 +02:00
Felix Fontein
165571f5cf
Adjust docs publishing workflow. (#903)
Ref: https://github.com/ansible-community/github-docs-build/issues/92
2024-06-29 17:27:43 +02:00
Felix Fontein
7efc6381d0
CI: use new container images for Compose v2 pull tests (#900)
* Use simple-1 image instead of Alpine image for docker_compose_v2_pull tests.

* Use simple-1 image instead of Alpine image for docker_compose_v2 pull tests.
2024-06-29 11:43:00 +02:00
Felix Fontein
37c639f6e8
Publish test images under another tag to work around strange behavior of Compose's pull policy. (#901) 2024-06-29 11:16:32 +02:00
Felix Fontein
d334c2362f
Create helper OCI images for use in CI. (#899) 2024-06-28 22:59:20 +02:00
Felix Fontein
ad9d362336
Make docker_host and cli_context mutually exclusive. (#895) 2024-06-28 22:26:34 +02:00
Felix Fontein
36dcb94b39
Document host-gateway. (#897) 2024-06-28 17:03:32 +02:00
Felix Fontein
ace4ee4f70
Make sure that Docker SDK for Python is installed for docker_stack* tests. (#896) 2024-06-28 16:46:26 +02:00
Felix Fontein
08063a0439
Skip certain tests on Docker 27.0.0+. (#893) 2024-06-28 07:40:45 +02:00
Felix Fontein
eddeb91697
Adjust CI matrix for ansible-core devel's ansible-test (#889)
* Adjust CI matrix for ansible-core devel's ansible-test.

* Kick docker-compose v1 out from devel docker tests.
2024-06-18 08:00:13 +02:00
Felix Fontein
801d65c610 Next expected release is 3.11.0. 2024-06-16 22:23:34 +02:00
Felix Fontein
3383cd551e Release 3.10.4. 2024-06-16 22:04:40 +02:00
Felix Fontein
4cac2ac021
Make sure that one of project_src and definition is provided. (#886) 2024-06-16 21:49:20 +02:00
Felix Fontein
a5b5681608 Prepare 3.10.4 release. 2024-06-16 21:10:13 +02:00
Felix Fontein
6fc9727f60
Remove Fedora 32 tests from CI. (#882) 2024-06-14 08:23:29 +02:00
Felix Fontein
691bc6de72
Docker Compose v1 tests: restrict API version to 1.44 if default API version is 1.45+ (#881)
* Restrict API version to 1.44 if default API version is 1.45+.

* Set COMPOSE_API_VERSION if api_version is provided.

* Add changelog.
2024-06-14 08:02:12 +02:00
Felix Fontein
fd5110c94c
Fix shellcheck errors. (#880) 2024-06-13 21:39:07 +02:00
Felix Fontein
0616fb12df
Try to fix docker-compose v1 tests on Arch. (#879) 2024-06-10 21:19:04 +02:00
Felix Fontein
4cb29220e5
Bump Azure test container to 6.0.0. (#877) 2024-06-10 20:38:53 +02:00
Sih Sîng-hông薛丞宏
a22e92cdc0
Update the example of docker_compose_v2.py (#874)
`docker-compose` => `docker compose`.
2024-06-04 13:01:23 +02:00
Felix Fontein
c2c47636b4
Stop building EE with CentOS Stream 8, which no longer has builds. (#873) 2024-06-04 08:01:32 +02:00
Felix Fontein
a3952f0068 The next expected release is 3.11.0. 2024-05-26 21:10:18 +02:00
Felix Fontein
9e7b5407fd Release 3.10.3. 2024-05-26 20:48:21 +02:00
Felix Fontein
de7729c33c Prepare 3.10.3 release. 2024-05-26 16:39:21 +02:00
Felix Fontein
205867e392
Avoid using the deprecated selectors compat module utils. (#871) 2024-05-25 09:00:18 +02:00
Felix Fontein
7867390473
Force requests<2.32.0 for docker-compose. (#867) 2024-05-22 07:43:03 +02:00
Felix Fontein
54fd5284d9 Next expected release is 3.11.0. 2024-05-21 21:31:12 +02:00
Felix Fontein
260b2859c5 Release 3.10.2. 2024-05-21 21:03:51 +02:00
Felix Fontein
32612dc6ec Prepare 3.10.2 release. 2024-05-21 19:23:25 +02:00
Felix Fontein
1b50cee901
Add fix for requests 2.32.2+. (#864) 2024-05-21 19:22:39 +02:00
Felix Fontein
e242a41bda The next expected release is 3.11.0. 2024-05-20 22:13:17 +02:00
Felix Fontein
b9add7b415 Release 3.10.1. 2024-05-20 21:44:19 +02:00
Felix Fontein
570f5fb524 Add known_issues instead of extended release summary. 2024-05-20 21:27:43 +02:00
Felix Fontein
8cbec47816 Prepare 3.10.1 release. 2024-05-20 21:12:32 +02:00
Felix Fontein
ab8b6662c2
Add hotfix for requests 2.32.0. (#861) 2024-05-20 21:08:25 +02:00
Felix Fontein
daa253a62d
From now on automatically add period to new plugins in changelog, and use FQCNs. (#859) 2024-05-20 08:50:29 +02:00
Felix Fontein
427a7d4f0c Next expected release is 3.11.0. 2024-05-19 21:20:48 +02:00
Felix Fontein
b6e698c1de Release 3.10.0. 2024-05-19 21:05:29 +02:00
Felix Fontein
97ea49cc17 Prepare 3.10.0 release. 2024-05-18 16:27:10 +02:00
Felix Fontein
16c345f6fd
Add REUSE badge. (#858) 2024-05-15 21:37:37 +02:00
x4rd0o1Vtx
5016a96eba
Allow healthcheck override without test option (#847)
* Add healthcheck test_cli_compatible option

* Update plugins/module_utils/util.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/docker_container.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-05-15 18:45:01 +02:00
Felix Fontein
2eb2c9febf
Add test for unsafe plugin util. (#856) 2024-05-12 01:00:50 +02:00
Felix Fontein
8cbdf5400c
Pass codecov token to ansible-test-gh-action. (#854) 2024-05-11 21:29:30 +02:00
Felix Fontein
36c118d154
Deprecate ssl_version. (#853) 2024-05-11 15:53:19 +02:00
Felix Fontein
f2a5d6f872
docker_image_build: allow to specify multiple platforms, allow to specify secrets and outputs (#852)
* Add note on idempotency.

* Make platform a list of strings.

* Support specifying secrets.

* Add test for secrets.

* Support specifying outputs.

* Ignore invalid choices syntax for ansible-core <= 2.16.

It actually works with ansible-core 2.14+ (though not with <= 2.13),
but the sanity tests only accept it from 2.17 on.

* Only use --secret with type=env for buildx 0.6.0+, and multiple --output for buildx 0.13.0+.
2024-05-11 15:52:47 +02:00
Felix Fontein
e176a8a17b Improve tasks, show images before docker_compose_v2_pull tests. 2024-05-10 13:06:54 +02:00
x4rd0o1Vtx
a4a05e7fa5
Add healthcheck start-interval option (#848) 2024-05-09 21:22:06 +02:00
Felix Fontein
f51ca84197
docker_prune: add new options for cleaning build caches (#845)
* Add new options for cleaning build caches to docker_prune.

* Add tests.
2024-05-09 17:12:36 +02:00
Kenny Millington
9beac01ce1
docker_network: Add support for --config-from and --config-only (#843)
Co-authored-by: Felix Fontein <felix@fontein.de>
2024-05-04 15:16:34 +02:00
Felix Fontein
30faf0b8e6
Deprecate Docker Compose v1. (#833) 2024-05-04 13:15:53 +00:00
Florian
368d616229
Add sysctls option to docker_swarm_service (#836)
* add sysctls option to docker_swarm_service

* Add added version number

Co-authored-by: Felix Fontein <felix@fontein.de>

* version added -> 3.10.0

Co-authored-by: Felix Fontein <felix@fontein.de>

* changelog fragment for docker_swarm_service sysctls

* add minimal docker_py / docker_api versions to use for sysctls

* set expected sysctls to null on integration test

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-04-30 22:44:25 +02:00
Felix Fontein
f09a2540aa
Arch Linux switched to Python 3.12. (#842) 2024-04-28 19:07:19 +02:00
Felix Fontein
e2f293ce2d Next expected release is 3.10.0. 2024-04-21 16:51:15 +02:00
Felix Fontein
379ce23270 Release 3.9.0 2024-04-21 16:20:52 +02:00
Felix Fontein
8bcc3519d4
Add check_files_existing option. (#839) 2024-04-21 16:01:07 +02:00
Felix Fontein
6368854a8c Prepare 3.9.0 release. 2024-04-20 10:51:16 +02:00
Felix Fontein
cab1bcb96e
Include changelog in docsite (#837)
* Include changelog in docsite.

* Fix changelog.
2024-04-18 12:50:33 +02:00
Felix Fontein
1ee9109a73
Make wrapping variables as unsafe smarter to avoid triggering an AWX bug. (#835) 2024-04-18 07:52:15 +02:00
Felix Fontein
8ad45286a3
Remove unused code that relies on functionality deprecated in Python 3.12. (#834) 2024-04-15 11:21:46 +00:00
Felix Fontein
9e8c367c47
docker_compose_v2: allow to specify inline compose definitions (#832)
* Allow to specify inline compose definitions.

* Remove comma that trips Python 2.7.

* Add tests.

* Add PyYAML as EE dependency.

* Be more explicit on PyYAML.
2024-04-09 17:41:12 +02:00
Felix Fontein
2925334a1a
Make sure project_src is an absolute path. (#828) 2024-04-04 21:39:38 +02:00
Felix Fontein
9ff53bc143
CI: Add stable-2.17; copy ignore.txt files from 2.17 to 2.18; move stable-2.14 from AZP to GHA (#830)
* Add stable-2.17 to CI; copy ignore files from 2.17 to 2.18.

* Move stable-2.14 from AZP to GHA.
2024-04-03 08:32:28 +02:00
Felix Fontein
7102d38923
Better error message if Compose version is 'dev'. (#826) 2024-03-29 19:29:14 +01:00
Felix Fontein
8f3f310c78
Docker Compose v1 no longer runs on Docker 26, which is now installed on the VM. (#822) 2024-03-24 12:57:19 +01:00
Felix Fontein
7d120ab42e
Ignore pylint warnings for construct that does not work with Python 2 (#821)
* Ignore pylint warnings for construct that does not work with Python 2.

* Revert "Ignore pylint warnings for construct that does not work with Python 2."

This reverts commit 92c19c78dc.

* Different approach: use ignore.txt since otherwise ansible-core 2.14 tests fail.
2024-03-23 12:28:38 +01:00
Felix Fontein
8ed1dddbba
Move Alpine 3.18 docker to stable-2.16, add Alpine 3.19 docker. (#820) 2024-03-22 13:58:37 +01:00
Felix Fontein
63483b2724 Next expected release will be 3.9.0. 2024-03-16 20:44:13 +01:00
Felix Fontein
59a8220c7f Release 3.8.1. 2024-03-16 20:16:57 +01:00
Felix Fontein
61c54874fd Prepare 3.8.1 release. 2024-03-15 07:34:43 +01:00
Felix Fontein
bf1281ae7f
Prevent RCE via inventory plugins (#815)
* Prevent RCE via inventory plugins.

* Do not make ansible_connection unsafe.

* Add test.
2024-03-14 20:08:41 +01:00
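
The RCE vector here is inventory data sourced from containers (names, labels, and similar) reaching the Jinja2 templar, which would evaluate any embedded template expression. Marking such values as unsafe stops that evaluation; ansible-core ships wrap_var for exactly this. A sketch of the idea, not the plugin's exact code (note the second bullet: ansible_connection is deliberately left unwrapped):

from ansible.utils.unsafe_proxy import wrap_var

# A label an attacker controls on their own container:
label = '{{ lookup("pipe", "id") }}'

# Unwrapped, the templar would evaluate the lookup when the variable is
# used in a play; wrapped, it stays an inert literal string.
safe_label = wrap_var(label)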
Felix Fontein
14e2f92974
Improve inventory integration tests. (#817) 2024-03-13 13:56:22 +01:00
Felix Fontein
4bab9a6b0e
Fix idempotency of docker_compose_v2_pull. (#814) 2024-03-13 13:20:11 +01:00
Felix Fontein
6600f501ae
Fix Python deps setup in callback/inventory tests. (#816) 2024-03-13 07:47:51 +01:00
dependabot[bot]
83d2d0ef8e
Bump fsfe/reuse-action from 2 to 3 (#812)
Bumps [fsfe/reuse-action](https://github.com/fsfe/reuse-action) from 2 to 3.
- [Release notes](https://github.com/fsfe/reuse-action/releases)
- [Commits](https://github.com/fsfe/reuse-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: fsfe/reuse-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-04 06:23:23 +01:00
Felix Fontein
6aea7efed9
Improve parsing of warnings and errors (#811)
* Add logfmt message parser.

* Parse logfmt formatted warnings.

* Follow-up for #810.

* Fix handling of warning and error messages.

* Make Python 2 compatible.

* Linting. Improving tests.
2024-03-03 13:38:55 +00:00
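
logfmt is the key=value line format Docker tooling emits for diagnostics (e.g. level=warning msg="..."). A minimal parsing sketch, assuming well-formed input; this is not the collection's actual parser:

import shlex

def parse_logfmt(line):
    # shlex.split honors the double quotes around values containing spaces.
    fields = {}
    for token in shlex.split(line):
        key, sep, value = token.partition("=")
        if sep:
            fields[key] = value
    return fields

print(parse_logfmt('level=warning msg="image not found" module=api'))
# {'level': 'warning', 'msg': 'image not found', 'module': 'api'}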
Felix Fontein
37e28b62d3
Do not fail on non-fatal errors. (#810) 2024-02-28 21:43:30 +01:00
Felix Fontein
d4b654793e The next expected release will be 3.9.0. 2024-02-25 21:32:55 +01:00
Felix Fontein
eafa7d03a8 Release 3.8.0. 2024-02-25 20:58:22 +01:00
Felix Fontein
a7c7adce2f
Add docker_container_exec note on env variables; remove superfluous notes (#806)
* Remove unnecessary notes.

* Add note for evaluating environment variables.
2024-02-24 20:45:13 +01:00
Felix Fontein
bbc36e9923 Prepare 3.8.0 release. 2024-02-23 20:06:07 +01:00
Felix Fontein
45d32d53c9
Do not consider 'Waiting' events as changes/actions. (#804) 2024-02-23 19:58:40 +01:00
Felix Fontein
6f5d67860c
docker_compose_v2: ignore some pull events (#803)
* Ignore some pull events.

* Adjust tests.
2024-02-23 18:24:16 +01:00
tigattack
f0c91ef5f9
docs(docker_plugin): note that --grant-all-permissions is true by default (#800)
* docs(docker_plugin): note that `--grant-all-permissions` is true by default

Fixes #145

* Update plugins/modules/docker_plugin.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2024-02-16 13:48:53 +01:00
Felix Fontein
6366464812
docker_container: allow pull=never, and make check mode behavior configurable (#797)
* Allow to configure behavior of pull=true in check mode.

* Change pull to option that accepts some strings as well, such as pull=never.

* Adjust values.
2024-02-14 22:49:22 +01:00
Felix Fontein
e494464e56
Add wait and wait_timeout options. (#796) 2024-02-14 22:48:36 +01:00
Felix Fontein
b5de1fd1ad
Add MarkDown changelog and use it by default. (#788) 2024-02-09 13:08:24 +01:00
tigattack
4c2e7ebfbc
Fix typo in docker_image_build docs (#793) 2024-02-06 23:34:39 +01:00
Felix Fontein
7b554082ea
Improve parsing. (#786) 2024-02-01 17:52:31 +00:00
Felix Fontein
c97ac2337f
Zuul third-party-check: disable ansible-doc part of galaxy-importer. (#781) 2024-01-27 14:49:28 +01:00
Felix Fontein
9f35743ab9 Next release will be 3.8.0. 2024-01-27 13:32:35 +01:00
Felix Fontein
810bf738d7 Release 3.7.0. 2024-01-27 13:03:00 +01:00
Felix Fontein
b5d085bb88
Parse build events from stderr. (#779) 2024-01-25 06:40:32 +01:00
Felix Fontein
b5391c7971
Add action group sanity test. (#777) 2024-01-24 08:25:17 +01:00
Felix Fontein
cc32e0e6ad Prepare 3.7.0. 2024-01-24 07:27:56 +01:00
Felix Fontein
32cb76b164
Add scale option. (#776) 2024-01-24 07:16:33 +01:00
Felix Fontein
eebb73a503
docker_compose_v2: add files option (#775)
* Add files option.

* Shorten lines.
2024-01-24 07:15:00 +01:00
Felix Fontein
b2a79d9eb7
Add docker_image_export module (#774)
* Add docker_image_export module.

* Add basic tests.

* Add more seealsos.
2024-01-22 22:03:38 +01:00
Felix Fontein
a53ecb6e66
Forgot to add docker_compose_v2_pull to action group. (#773) 2024-01-21 22:04:27 +01:00
Felix Fontein
901c1c4f9c Next expected release is 3.7.0. 2024-01-21 09:16:29 +01:00
Felix Fontein
564d3f389f Release 3.6.0. 2024-01-21 09:01:17 +01:00
Felix Fontein
fcf608b334
Add networks[].mac_address option. (#763) 2024-01-20 14:23:12 +01:00
Felix Fontein
37d0a44c0b
Adjust descriptions. (#766) 2024-01-20 14:13:11 +01:00
Felix Fontein
ac41379119
Fix archive idempotency. (#765) 2024-01-20 14:12:55 +01:00
Felix Fontein
648e0652d5
mac_address no longer works with Docker API v1.44+. (#764) 2024-01-20 14:06:29 +01:00
Felix Fontein
b2cee5677a Next expected release will be 3.6.0. 2024-01-18 08:20:29 +01:00
Felix Fontein
31540c43d6 Release 3.6.0-rc1. 2024-01-18 08:06:02 +01:00
Felix Fontein
eb3e0b17cd Prepare 3.6.0-rc1 release. 2024-01-18 07:41:45 +01:00
Felix Fontein
7129cc5a30
Simplify workflows. (#762) 2024-01-17 23:13:35 +01:00
Felix Fontein
c3322fd55b Fix typo. 2024-01-17 12:58:20 +01:00
Felix Fontein
6082efc855
Improve docs sharing for docker_compose_v2* modules; fix examples and return docs for docker_compose_v2_pull (#761)
* Move more common documentation to docs fragment.

* Fix examples and return values for docker_compose_v2_pull.

* Remove ignore.
2024-01-17 07:53:44 +01:00
Felix Fontein
22d956efa8
Allow to pass --build or --no-build to 'docker compose up'. (#760) 2024-01-17 06:57:35 +01:00
Simon Baerlocher
98a74b1f9c
feat: add docker-compose services support. (#758)
* feat: add docker-compose services support.

fix: typo

* fix: error

* fix: ci Job

* feat: add argument_spec

* fix: whitespace

* feat: refactored docker_compose_v2 in response to feedback
2024-01-16 19:07:12 +01:00
Felix Fontein
ab73061a5f
Also check for compose.yaml and compose.yml, and do not require the Compose file to be an actual file. (#759) 2024-01-16 19:03:29 +01:00
Felix Fontein
d4a5280512 Mention new module in docs. 2024-01-14 17:00:02 +01:00
Felix Fontein
4dd671248c Next expected release is 3.6.0. 2024-01-14 16:57:14 +01:00
Felix Fontein
5eb115cb10 Release 3.6.0-b2. 2024-01-14 16:34:25 +01:00
Felix Fontein
daf32ed6ec Remove part that's already in the regular changelog. 2024-01-14 08:55:32 +01:00
Felix Fontein
1c8272f821
Change Docker Stack modules to use common CLI module framework. (#745) 2024-01-14 08:54:06 +01:00
Felix Fontein
5adac5216a
Add Galaxy import workflow (#754)
* Add Galaxy import workflow.

* A dependency of galaxy-importer does not work on Python 3.12.

* Make yamllint happy.
2024-01-13 19:25:22 +01:00
Felix Fontein
b84c771fc5 Prepare 3.6.0-b2. 2024-01-13 16:21:27 +01:00
Felix Fontein
f04cdb7e06
Remove sanity ignore files for Ansible 2.9 and ansible-base 2.10. (#753) 2024-01-13 16:05:59 +01:00
Felix Fontein
f429017d94
Add inventory filter capability (#698)
* Add inventory filter capability.

* Use community.library_inventory_filtering_v1 collection.

* Bump dependency to 1.0.0.

* Mention the new dependency in the changelog.
2024-01-13 15:51:02 +01:00
Felix Fontein
97a0610f25
Docker Compose v2: extend/improve event parsing tests (#752)
* Normalize ansible-docker-test-xxx in stderr logs.

* Deduplicate.

* Add new testcases including the new module.
2024-01-13 15:49:30 +01:00
Felix Fontein
307dc4045a
Add docker_compose_v2_pull module (#751)
* Add docker_compose_v2_pull module.

* Improve and extend parsing of events.

* Add ignores.

* --policy is only available since Compose 2.22.0.
2024-01-13 14:36:26 +01:00
Felix Fontein
8ca5e2f810
Extract more common code and docs fragment for Docker Compose. (#748) 2024-01-07 18:17:10 +01:00
Felix Fontein
cb4dd2fed1
docker_compose_v2: move some code to module_utils (#747)
* Move some code to module_utils.

* Add unit tests.

Test cases are auto-generated from integration test logs.

* Rename ResourceEvent → Event.
2024-01-07 16:17:31 +01:00
Felix Fontein
eed89f32eb
docker_compose_v2: allow to specify pull policy; parse pull events; improve error handling; always return stderr (#746)
* Add pull option for 'docker compose up'.

* Improve dry-mode event parsing, and also parse pull-related events.

* Improve error handling, and add first tests.

* Fix action status documentation.

* Add more tests.

* Always return stderr.

This makes debugging misbehavior a lot easier since you can see
what 'docker compose' actually returned.

* Reformat existing tests.
2024-01-07 08:45:20 +01:00
Felix Fontein
4a5293503e
Rename ca_cert option to ca_path (#744)
* Rename ca_cert option to ca_path.

* Two more.
2024-01-06 17:03:39 +01:00
Felix Fontein
5f9f78ede6
Update/improve documentation (#743)
* Mention new modules in guide.

* Improve formatting.

* Improve docs for SSL version option.

* Add docs and example for module defaults group.

* Remove not applicable comment.

* Improvements.

* Remove dead link for Ansible Operator.

* Ansible-bender seems to be no longer actively maintained, and it's more aimed at podman.

* Add note and preamble for example.
2024-01-06 10:07:53 +01:00
Felix Fontein
22d595eddb Next expected release is 3.6.0. 2024-01-04 23:14:15 +01:00
Felix Fontein
7d680aa102 Release 3.6.0-b1. 2024-01-04 22:44:56 +01:00
Felix Fontein
5256f94342
Adjust to new shellcheck in ansible-core devel's sanity tests. (#741) 2024-01-04 22:27:34 +01:00
Felix Fontein
7c61325a83 3.6.0 -> 3.6.0-b1. 2024-01-04 21:53:06 +01:00
Felix Fontein
b774837183
Add docker_compose_v2 module (#739)
* Add docker_compose_v2 module.

* Add note on compatibility.

* Parse more events.

Emit warnings (or things we assume are warnings), and report unparsable
messages to the user so they can report them to us.
2024-01-03 07:05:08 +00:00
Felix Fontein
762ce3e1cf
Remove 'debug' parameter from new CLI modules. Move log writing to single function. (#740) 2024-01-02 21:10:59 +01:00
Felix Fontein
7aa9791ea6 Prepare 3.6.0 release. 2024-01-02 15:31:58 +01:00
Felix Fontein
39717d380e
Avoid shadowing loop variables. (#738) 2024-01-02 14:21:19 +01:00
Felix Fontein
2caa77c032
Remove superfluous timeout argument. (#737) 2024-01-02 14:05:27 +01:00
Felix Fontein
ce7402dc9f
Add docker_image_build module. (#735) 2024-01-02 09:21:45 +01:00
Felix Fontein
199d9e50d3
Fix Unix socket path. (#736) 2024-01-01 22:53:58 +01:00
Felix Fontein
56bbef2b44 Fix example. 2024-01-01 18:09:08 +01:00
Felix Fontein
42453444ff
Compose digest instead of accidentally using wrong one. (#733) 2023-12-31 15:31:43 +01:00
Felix Fontein
02bb4ceaf7 Update docs. 2023-12-31 15:14:01 +01:00
Felix Fontein
c3f8f80a75
Add docker_image_remove module. (#732) 2023-12-31 15:13:04 +01:00
Felix Fontein
66b341aa9e
Add docker_image_tag module (#730)
* Add docker_image_tag module.

* Add check mode tests.

* Improve and test image ID/digest handling.

* Adjust more tests.
2023-12-31 10:41:18 +01:00
Felix Fontein
20e78a92e0
Add docker_image_pull module (#728)
* Add docker_image_pull module.

* Support platform during idempotency check.

* Add diff mode, extend tests.

* Add image ID tests.
2023-12-31 09:51:42 +01:00
Felix Fontein
e22cee2c41
Add docker_image_push module. (#731) 2023-12-31 08:33:32 +00:00
Felix Fontein
8ee0452776
Run registry tests only when registry is present. (#729) 2023-12-29 11:27:49 +01:00
Felix Fontein
b1dfe49e7d Fix documentation link. 2023-12-28 22:09:32 +01:00
Felix Fontein
0812d0b495
Support labels and shm_size for image build. Allow to specify (swap) memory limits in other units than bytes. (#727) 2023-12-28 21:42:55 +01:00
dependabot[bot]
74636e7f0e
Bump actions/setup-python from 4 to 5 (#724)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-11 06:56:50 +01:00
Felix Fontein
48f48a0ef8 Install community.library_inventory_filtering_v1 for PR docs builds.
This is required to make CI in #698 pass.
2023-12-10 09:53:59 +01:00
Felix Fontein
46e6070041 The next expected release is 3.6.0. 2023-12-10 09:51:23 +01:00
Felix Fontein
080a2d68c1 Release 3.5.0. 2023-12-10 09:27:58 +01:00
Felix Fontein
c4c347c626
Add proper platform handling. (#705) 2023-12-10 09:03:32 +01:00
Felix Fontein
b3ef5f5196
Clean up vendored Docker SDK for Python TLS handling code. (#722) 2023-12-09 23:19:36 +01:00
Felix Fontein
26772304f9
Do not accept tls_hostname for Docker SDK for Python 7.0.0+. (#721) 2023-12-09 23:16:03 +01:00
Felix Fontein
a120794958 Prepare 3.5.0 release. 2023-12-09 22:06:03 +01:00
Felix Fontein
3aa1ddcca0
Docker SDK for Python 7+: make sure that ssl_version is not passed, and error out if it was explicitly set (#715)
* Do not accept ssl_version for Docker SDK for Python 7.0.0+.

* Add changelog fragment.

* Generally avoid sending None values to TLSConfig. This potentially prevents similar errors in the future, assuming users do not pass such values in.

* Python 2.6 compatibility.
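
The "avoid sending None values" bullet above boils down to only forwarding parameters that are actually set, so SDK versions that removed a keyword (like ssl_version in 7.0.0) never see it. A hedged sketch:

    from docker.tls import TLSConfig

    def build_tls_config(**params):
        # Drop unset parameters entirely instead of passing explicit None;
        # the call then works on SDK versions with and without ssl_version.
        return TLSConfig(**{k: v for k, v in params.items() if v is not None})

    # e.g.: build_tls_config(client_cert=('cert.pem', 'key.pem'), verify='ca.pem')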
2023-12-09 17:59:06 +00:00
Felix Fontein
4929ef603a
Integration tests: split up Docker setup, move docker_compose tests into own group (#718)
* Rename setup role.

* Create new CI group 6, and move the docker_compose v1 tests there.

* Split up Docker setup in integration tests.

* Change setup_docker_compose_v1 to install its own Docker SDK for Python.

* Docker SDK for Python is not needed to set up the registry or query host info.
2023-12-09 17:35:54 +01:00
Felix Fontein
80e39f84d8
Update docker_compose docs to indicate that it is incompatible with Docker SDK for Python 7+. (#717) 2023-12-09 14:40:15 +01:00
Felix Fontein
907dc28f73
Deprecate default 'ignore' of 'image_name_mismatch'. (#703) 2023-12-07 12:32:50 +01:00
Felix Fontein
d8cef6c71e
docker_container: refactoring preparing better comparisons (#713)
* Always get the container's image as well to allow get_value() to use that one too.

* Allow options and engines to overwrite comparison functions.

* Do not fail if image (by ID) cannot be found.

* Allow to control when container image is needed.

* Pass option to compare function.

* Allow to pass the host info for retrieving a value.

* Add changelog fragment.
2023-12-05 07:26:11 +01:00
Felix Fontein
b8afdc52b1
Fix bad expressions in tests. (#711)
ci_complete
2023-11-28 22:52:43 +01:00
Felix Fontein
cbdaab3e42
Remove Fedora 36 from CI. (#709) 2023-11-24 21:21:12 +01:00
Felix Fontein
64847ad875
devel supports Fedora 39, and no longer Fedora 38. (#707) 2023-11-17 21:17:27 +01:00
Felix Fontein
5630e3e4f3
Add rhel/9.3 for devel, remove rhel/9.2. (#706) 2023-11-15 21:40:30 +01:00
Felix Fontein
a50be1abf6 Next expected release is 3.5.0. 2023-11-12 12:27:56 +01:00
Felix Fontein
1052ce2ded Release 3.4.11. 2023-11-12 12:04:56 +01:00
Alexander Jähnel
4c220c4d74
fix(community.docker.docker_volume): labels can be none (#702)
* fix(community.docker.docker_volume): labels can be none

catch the case where volume labels can be null (the default), e.g.:

$ docker volume inspect foo
[
    {
        "CreatedAt": "2023-11-11T12:55:23+01:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/foo/_data",
        "Name": "foo",
        "Options": {},
        "Scope": "local"
    }
]
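
The fix amounts to normalizing that null before comparing labels; a minimal sketch:

    def volume_labels(volume):
        # 'Labels' can be null in the inspect output above, so fall back
        # to an empty dict before any label comparison.
        return volume.get('Labels') or {}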

* Update plugins/modules/docker_volume.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* add(community.docker.docker_volume): changelog fragment

* Update changelogs/fragments/702-docker-volume-label-none.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-11-12 11:19:56 +01:00
Felix Fontein
9ba09432a7 Prepare 3.4.11 release. 2023-11-12 10:02:23 +01:00
Felix Fontein
14683421b5
Fix failing sanity test. (#700) 2023-11-08 13:23:39 +01:00
Felix Fontein
70695a8dcd Next expected release is 3.5.0. 2023-10-29 15:54:29 +01:00
Felix Fontein
ee054c6bf7 Release 3.4.10. 2023-10-29 15:30:46 +01:00
Felix Fontein
a0775fe194 Prepare 3.4.10 release. 2023-10-29 08:32:13 +01:00
Felix Fontein
1c66f880ee
Fix typos, improve markup, improve scenario guide (#699)
* Fix typos.

* Improve markup.

* Mention missing 'new' modules in scenario guide.
2023-10-29 08:30:24 +01:00
Felix Fontein
fbc2750b6a
Do not pass data_path_addr for older Docker SDK for Python versions. (#696) 2023-10-14 23:48:46 +02:00
Felix Fontein
33c0957292 Next expected release will be 3.5.0. 2023-10-08 22:18:48 +02:00
Felix Fontein
70ea796914 Release 3.4.9. 2023-10-08 21:35:36 +02:00
Felix Fontein
be610963b5 Prepare 3.4.9 release. 2023-10-08 18:38:22 +02:00
Felix Fontein
4d9b85c975
Update vendored Docker SDK for Python code (#694)
* vendored Docker SDK for Python code: volume: added support for bind propagation

https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation

Cherry-picked from bea63224e0

Co-authored-by: Janne Jakob Fleischer <janne.fleischer@ils-forschung.de>
Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* vendored Docker SDK for Python code: fix: eventlet compatibility

Check if poll attribute exists on select module instead of win32 platform check

The implementation done in #2865 is breaking usage of docker-py library within eventlet.
As per the Python `select.poll` documentation (https://docs.python.org/3/library/select.html#select.poll) and eventlet select removal advice (eventlet/eventlet#608 (comment)), it is preferable to use an implementation based on the availability of the `poll()` method rather than to check whether the platform is `win32`.

Fixes https://github.com/docker/docker-py/issues/3131
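
A sketch of the feature-detection approach described above (the function name is illustrative):

    import select

    def wait_readable(sock, timeout):
        # Detect poll() by its presence instead of checking for win32:
        # eventlet ships a select module without poll() on any platform.
        if hasattr(select, 'poll'):
            poller = select.poll()
            poller.register(sock.fileno(), select.POLLIN)
            return bool(poller.poll(int(timeout * 1000)))  # milliseconds
        ready, _, _ = select.select([sock], [], [], timeout)
        return bool(ready)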

Cherry-picked from 78439ebbe1

Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>

* vendored Docker SDK for Python code: fix: use response.text to get string rather than bytes

Adjusted from 0618951093

Co-authored-by: Mehmet Nuri Deveci <5735811+mndeveci@users.noreply.github.com>
Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* vendored Docker SDK for Python code: Fix missing asserts or assignments

Cherry-picked from 0566f1260c

Co-authored-by: Aarni Koskela <akx@iki.fi>

---------

Co-authored-by: Janne Jakob Fleischer <janne.fleischer@ils-forschung.de>
Co-authored-by: Milas Bowman <milas.bowman@docker.com>
Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>
Co-authored-by: Mehmet Nuri Deveci <5735811+mndeveci@users.noreply.github.com>
Co-authored-by: Aarni Koskela <akx@iki.fi>
2023-10-08 18:16:27 +02:00
Ethan Paul
78801088ae
Update docker_stack_info module documentation to clarify functionality (#693)
* Update documentation to reflect module functionality

Clarify that this module is used for accessing information on all stacks
Add link to docker_stack_task_info module for users looking for detailed info on a single stack

Fixes #690

* Remove trailing whitespace, add trailing period.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-10-07 14:04:59 +02:00
Felix Fontein
2c633dadeb
CI: ansible-core devel drops support for Python 2.7 and 3.6 (#691)
* ansible-core devel drops support for Python 2.7 and 3.6.

* Force PyYAML 5.3.1 on Alpine.
2023-10-04 23:22:08 +02:00
Felix Fontein
e7133f8b1b
Fix Galaxy URLs. (#688) 2023-09-30 21:36:05 +02:00
Felix Fontein
d9f49fc073
Add ansible-core 2.16 to the matrix. (#686) 2023-09-19 17:51:26 +02:00
dependabot[bot]
128117bb1c
Bump actions/checkout from 3 to 4 (#685)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-11 18:57:44 +02:00
Felix Fontein
d266c69ddc
Fix documentation (#684)
* Fix documentation.

* Fix line length.
2023-09-10 21:27:26 +02:00
Felix Fontein
6f6dd14492
Ignore sanity check. (#679) 2023-08-11 09:01:39 +02:00
bastantoine
e21d6d380c
Fix example of docker-compose module (#674) 2023-07-25 10:09:47 +02:00
Felix Fontein
92fc542c00
Make unit tests work with Python 3.12. (#673) 2023-07-23 22:33:07 +02:00
Felix Fontein
017536953a
Force PyYAML to 5.3.1. (#669) 2023-07-19 15:57:00 +02:00
Felix Fontein
0a8f3fa7d6
Remove no longer needed ignore. (#668) 2023-07-15 12:40:52 +02:00
Felix Fontein
1d0b6ddef3
Install and use Python 3.11 on RHEL UBI 9. (#666) 2023-07-12 19:24:48 +02:00
Felix Fontein
3f6d5a96d9
Disable EE with ansible-core devel for now until UBI 9 has Python 3.10 support. (#664) 2023-07-12 08:13:05 +02:00
Felix Fontein
e04111550d
Remove Fedora 37 from devel; add Fedora 38. (#658) 2023-06-26 22:01:21 +02:00
Felix Fontein
285bbf54cb
Add Debian Bookworm to CI. (#657) 2023-06-24 16:29:13 +02:00
Felix Fontein
8da385f39f
Bump AZP container. (#655) 2023-06-24 11:11:31 +02:00
Felix Fontein
ad1da2dc6c Next expected release is 3.5.0. 2023-06-22 14:10:11 +02:00
Felix Fontein
377f0f7355 Release 3.4.8. 2023-06-22 10:49:45 +02:00
Felix Fontein
a1c5a2d342 Update release summary. 2023-06-22 07:02:40 +02:00
Felix Fontein
024bdec919
Use semantic markup (#645)
* Use semantic markup.

* Linting.

* Define docsite targets.

* Forgot one env var.

* Add array stubs.
2023-06-22 07:01:31 +02:00
Felix Fontein
f94beeb027
Add RHEL 8.7 and 9.2 to CI (#649)
* Add RHEL 8.7, 8.8, and 9.2 to CI.

* Skip RHEL 8.8 for now.
2023-06-20 08:10:28 +02:00
Felix Fontein
3f9f41e5a9
SuSE: install docker-compose v1 from pip instead of system packages. (#650)
The system package switched to docker-compose v2.
2023-06-20 07:58:54 +02:00
Felix Fontein
5b287b6650 Next expected release is 3.4.8. 2023-06-15 13:35:05 +02:00
Felix Fontein
e5d289a650 Release 3.4.7. 2023-06-15 13:15:25 +02:00
Felix Fontein
440669e76d Prepare 3.4.7 release. 2023-06-15 07:22:20 +02:00
Kendi
861988fd36
Update docker_container_exec.py documentation (#642)
Should be 'or', not 'and'.
2023-06-01 12:59:49 +02:00
Felix Fontein
cad2ecca3d
Move ansible-core 2.12 to EOL CI (#640)
* https://github.com/ansible/ansible/pull/79734 has been merged and backported for all branches but stable-2.10 and stable-2.11.

* Move ansible-core 2.12 to EOL CI.
2023-05-29 17:01:07 +02:00
Felix Fontein
748d619fb2
Fix EndpointSpec KeyError. (#637) 2023-05-26 17:58:09 +02:00
Felix Fontein
74b70f81c8
Switch to Ansible Galaxy compatible requirements files for tests. (#633) 2023-05-21 13:54:35 +02:00
Felix Fontein
db71c974e3 Next expected release is 3.5.0. 2023-05-20 21:38:07 +02:00
Felix Fontein
a284137d15 Release 3.4.6. 2023-05-20 21:14:49 +02:00
Felix Fontein
cdccf955a8 Prepare 3.4.6. 2023-05-20 19:38:28 +02:00
Felix Fontein
1660bf4104
vendored Docker SDK for Python code: update to latest version (#619)
* socket: fix for errors on pipe close in Windows (https://github.com/docker/docker-py/pull/3099)

Need to return data, not size. By returning an empty
string, EOF will be detected properly since `len()`
will be `0`.

Fixes https://github.com/docker/docker-py/issues/3098.
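
The essence of the fix, as a hedged sketch (the recv-based shape is illustrative):

    def read(sock, n=4096):
        try:
            data = sock.recv(n)
        except BrokenPipeError:
            data = b''
        # Return the data itself, never its size: b'' has len() == 0,
        # which is how callers detect EOF.
        return data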

Cherry-picked from f84623225e

Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* socket: use poll() instead of select() except on Windows (https://github.com/docker/docker-py/pull/2865)

Fixes https://github.com/docker/docker-py/issues/2278, which was originally addressed in https://github.com/docker/docker-py/pull/2279, but was not
properly merged. Additionally, it did not address the problem
of poll not existing on Windows. This patch falls back on the
more limited select method if the host system is Windows.

Cherry-picked from a02ba74333

Co-authored-by: Tyler Westland <tylerofthewest@gmail.com>

* api: respect timeouts on Windows named pipes (https://github.com/docker/docker-py/pull/3112)

Cherry-picked from 9cadad009e

Co-authored-by: Imogen <59090860+ImogenBits@users.noreply.github.com>

* Add URL to changelog.

* api: avoid socket timeouts when executing commands (https://github.com/docker/docker-py/pull/3125)

Only listen to read events when polling a socket in order
to avoid incorrectly trying to read from a socket that is
not actually ready.
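
In poll() terms this means registering the socket for read events only; a minimal sketch:

    import select

    def wait_for_data(sock, timeout_ms):
        poller = select.poll()
        # POLLIN only: also registering POLLOUT would wake the poller for
        # a merely writable socket and trigger a read that then blocks.
        poller.register(sock.fileno(), select.POLLIN)
        return bool(poller.poll(timeout_ms))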

Cherry-picked from c5e582c413

Co-authored-by: Loïc Leyendecker <loic.leyendecker@gmail.com>

---------

Co-authored-by: Milas Bowman <milas.bowman@docker.com>
Co-authored-by: Tyler Westland <tylerofthewest@gmail.com>
Co-authored-by: Imogen <59090860+ImogenBits@users.noreply.github.com>
Co-authored-by: Loïc Leyendecker <loic.leyendecker@gmail.com>
2023-05-20 19:35:56 +02:00
Felix Fontein
2259246f4f
Rewrite EE test workflow to use ansible-builder 3.0.0. (#630) 2023-05-20 11:21:25 +02:00
Felix Fontein
d7f7e44b9e
Make sure plugins/module_utils/socket_handler.py works when Docker SDK for Python is not installed. (#620) 2023-05-15 21:43:31 +02:00
Felix Fontein
7bdb2127e0
Improve examples: use FQCNs and always add name: to tasks (#624)
* Improve examples: use FQCNs and always add name: to tasks.

* Improvements.

Co-authored-by: Don Naro <dnaro@redhat.com>

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2023-05-15 21:41:58 +02:00
Felix Fontein
245ab76b09
Warn that SSLSocket cannot send close_notify TLS alerts (#621)
* Warn that SSLSocket cannot send close_notify TLS alerts.

* Improve formulation.

Co-authored-by: Don Naro <dnaro@redhat.com>

---------

Co-authored-by: Don Naro <dnaro@redhat.com>
2023-05-15 21:41:23 +02:00
Felix Fontein
6187068ee5
Improve time units of docker_swarm documentation. (#623) 2023-05-13 15:25:42 +00:00
Felix Fontein
c3b523a11e Next expected release is 3.5.0. 2023-05-05 22:46:17 +02:00
Felix Fontein
ce16a0d5f1 Release 3.4.5. 2023-05-05 22:23:19 +02:00
Felix Fontein
39f2e9b9c4
Make compatible with requests 2.29.0 and urllib3 2.0 (#613)
* Make compatible with requests 2.29.0.

* This fix should also work with urllib3 2.0 according to urllib3 maintainer.

* Add changelog fragment.

* We still need the constraint for CI until Docker SDK for Python has a new release with a fix.

* Make modifications to response_class as small as possible.

* Revert "We still need the constraint for CI until Docker SDK for Python has a new release with a fix."

This reverts commit 698d544a1e08308e8bf8b4e56ab78c5079f9a17b.

* The pip coming with the ansible-core 2.11 alpine3 image seems to be too old.
2023-05-05 22:09:02 +02:00
Felix Fontein
5a26eee6d4 Prepare 3.4.5. 2023-05-05 21:22:57 +02:00
Felix Fontein
35f2d1617f
Arch Linux now uses Python 3.11. (#616) 2023-05-04 07:12:13 +02:00
Felix Fontein
054353bb14
Simplify test setup. (#615) 2023-05-03 19:39:04 +02:00
Felix Fontein
a3642616c2 Next expected release is 3.5.0. 2023-05-01 10:27:11 +02:00
Felix Fontein
0475be5166 Release 3.4.4. 2023-05-01 09:54:42 +02:00
Felix Fontein
ba5b3306a7 Also mention urllib3 2.0.0 in known_issues. 2023-05-01 09:51:38 +02:00
Felix Fontein
4ab05500e3 Prepare 3.4.4 release. 2023-05-01 09:47:48 +02:00
Felix Fontein
088cbaed4e
Restrict requests to < 2.29.0 (#612)
* Restrict requests to < 2.29.0.

* Also avoid urllib3, which gets installed in some cases even though it shouldn't.
2023-04-29 16:25:07 +02:00
Felix Fontein
9e1a0a6fb8
Do extra docs validation; explicitly disallow semantic markup in docs (#607)
* Do extra docs validation. Explicitly disallow semantic markup in docs.

* Forgot to add new requirement.

* Improve test.

* TEMP - make CI fail.

* Revert "TEMP - make CI fail."

This reverts commit d381f1a431.

* Remove unnecessary import.

* Make sure ANSIBLE_COLLECTIONS_PATH is set.

* Make sure sanity tests from older Ansible versions don't complain.
2023-04-16 18:18:12 +02:00
Ville Ojamo
634da44f67
docker_swarm: document docker_node module for manager removal (#602)
* docker_swarm: document manager removal

Add note that community.docker.swarm_node needs to
be used to demote a manager before it can be removed.

Fixes #601.

* docker_swarm: improve wording

* docker_swarm: fix formatting
2023-04-08 18:09:57 +02:00
Felix Fontein
5d61cb2b8d
Update CI matrix: add stable-2.15 (#600)
* Add ignore files for bumped devel version.

* Update CI matrix.
2023-04-04 06:12:39 +00:00
Felix Fontein
334d34e205 Next expected release is 3.5.0. 2023-03-24 07:44:47 +01:00
Felix Fontein
e0f4c33782 Release 3.4.3. 2023-03-24 07:19:39 +01:00
Felix Fontein
65407001ff Prepare 3.4.3 release. 2023-03-23 21:27:15 +01:00
Felix Fontein
d0a3e587a5
More true/false normalization. (#597) 2023-03-06 22:17:13 +01:00
Felix Fontein
c504c87404 Expected next release is 3.5.0. 2023-02-25 16:04:03 +01:00
Felix Fontein
82dde7cadd Release 3.4.2. 2023-02-25 15:37:21 +01:00
Felix Fontein
18584f6b40 Prepare 3.4.2 release. 2023-02-24 21:36:26 +01:00
Felix Fontein
08bfcf7e5f
docker_prune: correctly return 'changed' result (#593)
* Correctly return 'changed' status.

* Extend tests.

* Fix typo.
2023-02-24 17:24:16 +01:00
Felix Fontein
d0e61097f1 Cancel concurrent workflow runs in PRs. 2023-02-23 09:56:26 +01:00
Felix Fontein
dfa60dc91d Next expected release is 3.5.0. 2023-02-20 23:05:27 +01:00
Felix Fontein
5dd90e5884 Release 3.4.1. 2023-02-20 22:39:26 +01:00
Felix Fontein
dd19db8c8f
Normalize bools in tests. (#589) 2023-02-15 22:29:41 +01:00
Felix Fontein
a426232523
Fix imports. (#585) 2023-02-12 22:09:02 +01:00
Felix Fontein
449b91d489
Remove unnecessary test imports. (#583) 2023-02-12 20:59:51 +01:00
Felix Fontein
983b2b4783
exec: fix file handle leak with container.exec_* APIs (https://github.com/docker/docker-py/pull/2320) (#582)
Requests with stream=True MUST be closed or else the connection will
never be returned to the connection pool. Both ContainerApiMixin.attach
and ExecApiMixin.exec_start were leaking in the stream=False case.
exec_start was modified to follow attach for the stream=True case as
that allows the caller to close the stream when done (untested).

Tested with:

    # Test exec_run (stream=False) - observe one less leak
    make integration-test-py3 file=models_containers_test.py' -k test_exec_run_success -vs -W error::ResourceWarning'
    # Test exec_start (stream=True, fully reads from CancellableStream)
    make integration-test-py3 file=api_exec_test.py' -k test_execute_command -vs -W error::ResourceWarning'

After this change, one resource leak is removed, the remaining resource
leaks occur because none of the tests call client.close().

Fixes https://github.com/docker/docker-py/issues/1293
(Regression from https://github.com/docker/docker-py/pull/1130)
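
On the caller side, the stream=True case now works like attach: the caller closes the stream when done so the connection returns to the pool. A hedged usage sketch (container name hypothetical):

    import docker

    client = docker.APIClient(base_url='unix://var/run/docker.sock')
    exec_id = client.exec_create('mycontainer', 'cat /etc/hostname')['Id']
    stream = client.exec_start(exec_id, stream=True)
    try:
        for chunk in stream:
            print(chunk.decode('utf-8', 'replace'), end='')
    finally:
        stream.close()  # returns the HTTP connection to the pool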

Cherry-picked from 34e6829dd4

Co-authored-by: Peter Wu <pwu@cloudflare.com>
Co-authored-by: Milas Bowman <milas.bowman@docker.com>
2023-02-12 08:29:28 +01:00
Felix Fontein
5c70d8fd7a Prepare 3.4.1 release. 2023-02-10 14:06:22 +01:00
Kristof Mattei
d2f551fc5d
fix: fix tmpfs_size and tmpfs_mode not being set (#580)
* fix: fix tmpfs_size and tmpfs_mode not being set

* fix: wrong file

* fix: add changelog fragment

* fix: update changelog fragment to match formatting

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2023-02-10 14:05:09 +01:00
Felix Fontein
54a3dc151d
Remove unnecessary imports (#575)
* Remove unnecessary imports.

* Add noqas.
2023-02-09 15:25:45 +01:00
Felix Fontein
eb186f0098 Restrict regular CI runs on old branches to stable-2. 2023-02-06 09:28:15 +01:00
Felix Fontein
3a1bfc4be2
CI: Make tests work with Docker API version 1.42 (#576)
* Make tests work with API version 1.42.

* Make sure anonymous volume is separated from container a bit earlier.

* Another try.
2023-02-04 22:16:25 +01:00
Felix Fontein
421bae419d
Improve current_container_facts docs (#574)
* Improve current_container_facts docs.

* [TEMP] Run current_container_facts module in CI outside of ansible-test.

* Revert "[TEMP] Run current_container_facts module in CI outside of ansible-test."

This reverts commit 1cdd3e3550.

* Describe current state of return values.
2023-02-03 17:13:31 +01:00
Felix Fontein
0e1152a630 Use branch from https://github.com/ansible-community/github-docs-build/pull/67. 2023-02-03 15:37:02 +01:00
Felix Fontein
d57b26269a
Looks like BuilderSize was never documented and eventually got removed. Replace with something that is documented (https://docs.docker.com/engine/api/v1.42/#tag/System/operation/SystemDataUsage). (#569) 2023-02-03 11:33:29 +01:00
Felix Fontein
a78bd6f443
Fix check in SSH connection test (#567)
* Fix check.

* Adjust error check.
2023-01-22 19:11:52 +00:00
Felix Fontein
c0d9ca67c4
Restrict to old enough paramiko on RHEL 8 or other systems using Python 3.6. (#563) 2023-01-22 17:15:27 +01:00
David Jack Wange Olrik
c24ea78f6e
docs: Fix json path in asserts (#560)
##### SUMMARY

The current path to the running state does not include `output.services.`, which it should.

##### ISSUE TYPE

- Docs Pull Request

+label: docsite_pr
2023-01-20 13:19:33 +01:00
Felix Fontein
dc611a05d1 Next expected release is 3.5.0. 2023-01-14 11:47:49 +01:00
Felix Fontein
96b6f5917d Release 3.4.0. 2023-01-14 11:20:16 +01:00
Felix Fontein
b114d451fd Forgot to add version_added. 2023-01-14 11:19:42 +01:00
Felix Fontein
c7cbec0163
docker_plugin: do not crash when plugin doesn't exist (#553)
* Do not crash when plugin doesn't exist.

* Improve style.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2023-01-13 20:49:06 +01:00
Felix Fontein
01429108d3 Use stable-2.x branches from my work to get the patch from https://github.com/ansible/ansible/pull/79734. 2023-01-13 20:30:02 +01:00
Felix Fontein
4e6ac335f3
Improve envvar fallback handling. (#554) 2023-01-13 06:37:04 +01:00
Felix Fontein
757b02cc15 Prepare 3.4.0 release. 2023-01-09 11:54:13 +01:00
Felix Fontein
e198e4ab43
Add docker_container_copy_into module (#545)
* Move copying functionality to module_utils.

* Add docker_container_copy_into module.

* Use new module in other tests.

* Fix copyright and attributes.

* Improve idempotency, improve stat code.

* Document and test when a stopped container works.

* Improve owner/group detection error handling when container is stopped.

* Fix formulation.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Improve file comparison.

* Avoid reading whole file at once.

* Stream when fetching files from daemon.

* Fix comment.

* Use read() instead of read1().

* Stream files when copying into container.

* Linting.

* Add force parameter.

* Simplify library code.

* Linting.

* Add content and content_is_b64 options.

* Make force=false work as for copy module: only copy if the destination does not exist.

* Improve docs.

* content should be no_log.

* Implement diff mode.

* Improve error handling.

* Lint and improve.

* Set owner/group ID to avoid ID lookup (which fails in paused containers).

* Apply suggestions from code review

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2023-01-09 11:52:29 +01:00
Felix Fontein
134d32cae6
CI: improve Docker setup (#550)
* Cache has already been updated a few lines before.

* When skipping Docker cleanup, create a flag to avoid running the expensive part of the setup (including the package manager cache update) again.
2023-01-08 22:21:59 +01:00
Felix Fontein
dc5af8985b
Update CI matrix. (#549) 2023-01-07 11:25:10 +01:00
Felix Fontein
18091193de
Fix error handling. (#546) 2023-01-05 09:42:42 +01:00
Felix Fontein
311926aaad
Forgot to switch docs fragment for docker_container: it no longer depends on the Docker SDK for Python. (#544) 2023-01-01 21:54:35 +01:00
Felix Fontein
3470e5effb
Add setup role for Docker Compose v2. (#542) 2022-12-30 15:19:20 +01:00
Felix Fontein
c6aca384ed Delete stopped container as well. 2022-12-28 16:33:45 +01:00
Felix Fontein
faa7fef504
docker_host_info: allow to list all containers (#538)
* Allow to list all containers.

* Fix typo.
2022-12-27 21:39:17 +01:00
Felix Fontein
44b98609fd
CI: add CentOS Stream 8 with Python 3.6 to matrix (#531)
* Add more VMs.

* Disable corresponding docker containers.

* Remove VMs.
2022-12-20 22:57:21 +01:00
Felix Fontein
839ad6086e Improve docsite build. 2022-12-18 21:50:41 +01:00
Felix Fontein
f3e77c193d Switch to my fork of ansible-test-gh-action. 2022-12-18 09:54:09 +01:00
Felix Fontein
5cb24d2e69 The ansible-test patch has been backported to stable-2.12. 2022-12-17 19:39:36 +01:00
Mark Mercado
cc9191f8cb
Fix the collection name (#532) 2022-12-14 07:45:21 +01:00
Felix Fontein
b6034929bd Fix CI names. 2022-12-12 21:27:07 +01:00
Felix Fontein
02915cd22c
Improve CI (#528)
* Update CI scripts to be closer to the ones in ansible-core.

* Extend CI matrix.

* Make sure that docker daemon is running (when not in a container).

* Make sure that connection plugin tests do not uninstall Docker daemon.

* Check some conditions.

* Fix error ignores.

* Skip SSH test on Alpine VMs.

* Take care of more errors.

* Adjust for more errors.

* Improve conditions.

* Remove new entries from CI matrix; make CI matrix nicer.
2022-12-11 17:30:37 +01:00
Felix Fontein
7e213200ce Next expected release is 3.4.0. 2022-12-09 21:46:55 +01:00
Felix Fontein
b8fb4740a3 Release 3.3.2. 2022-12-09 21:27:38 +01:00
Felix Fontein
970b95e3fa
Bump CentOS Stream 8 Python from 3.8 to 3.9. (#529) 2022-12-09 14:58:00 +01:00
Felix Fontein
6db26903ed Prepare 3.3.2 release. 2022-12-08 22:16:57 +01:00
Felix Fontein
3a40112a76
Remove timeout when waiting for container to finish. (#527) 2022-12-08 22:15:42 +01:00
Felix Fontein
b318c02148 Allow triggering docs workflow manually. 2022-12-07 19:54:23 +01:00
Felix Fontein
f823555ae3
Backports to stable-2.13 and stable-2.14 have been merged. (#525)
https://github.com/ansible/ansible/pull/79538
https://github.com/ansible/ansible/pull/79507
2022-12-07 08:58:58 +01:00
Felix Fontein
13968bda22 Next expected release is 3.4.0. 2022-12-06 13:55:14 +01:00
Felix Fontein
080a043c79 Release 3.3.1. 2022-12-06 13:21:27 +01:00
Felix Fontein
ed48629399 Prepare 3.3.1 release. 2022-12-06 08:15:55 +01:00
Felix Fontein
e87b327764
Improve container detection. (#522) 2022-12-06 08:11:44 +01:00
Felix Fontein
3ed7c56704 The next expected release is 3.4.0. 2022-12-03 21:40:51 +01:00
Felix Fontein
ffff6b7e77 Release 3.3.0. 2022-12-03 21:15:34 +01:00
Felix Fontein
11351839ee
Fix CI image selection. (#521) 2022-12-03 15:31:00 +01:00
Felix Fontein
019712b09f
Only use ubuntu-20.04 if necessary. (#520) 2022-12-02 07:59:10 +01:00
Felix Fontein
6ccbde9f98
Fix chdir option. (#518) 2022-12-02 06:48:39 +01:00
Felix Fontein
549de87ab5
Switch CI from ubuntu-latest to ubuntu-20.04 to avoid problems with ansible-test from ansible-core 2.12, 2.13, 2.14. (#519) 2022-12-01 23:02:19 +01:00
Felix Fontein
2957138153
latest docker-py bugfix (npipe) (#513)
* socket: handle npipe close on Windows (https://github.com/docker/docker-py/pull/3056)

Fixes https://github.com/docker/docker-py/issues/3045

Cherry-picked from 30022984f6

Co-authored-by: Nick Santos <nick.santos@docker.com>

* Add changelog fragment.

Co-authored-by: Nick Santos <nick.santos@docker.com>
2022-12-01 06:59:05 +01:00
Felix Fontein
a239c0b2db Prepare 3.3.0 release. 2022-12-01 00:04:00 +01:00
Felix Fontein
6e04e1f172
Handle ansible_default_ipv4 not there in tests. (#514) 2022-12-01 00:02:02 +01:00
iamjpotts
166d485216
Make image archive/save idempotent, using image id and repo tags as keys (#500) 2022-11-30 23:45:36 +01:00
Felix Fontein
c2d84efccb
Make current_container_facts work with newer Docker versions and latest ansible-test container changes (#510)
* Add more debug output.

* Add basic integration test.

* Split into lines.

* Fix docker detection, add podman detection.

ci_complete

* Improve regular expression.

* Document that this module is trying its best, but might not be perfect.

* Update comment.
2022-11-30 22:25:33 +01:00
iamjpotts
ee9ddb954f
Add docstring to ImageManager.__init__ and fix docstring for ImageManager.archive_image (#509) 2022-11-30 22:04:11 +01:00
Felix Fontein
90086f00ad Next expected release is 3.3.0. 2022-11-28 22:49:30 +01:00
Felix Fontein
5b4d24a817 Release 3.2.2. 2022-11-28 22:30:20 +01:00
Felix Fontein
9ab3130b43 Prepare 3.2.2 release. 2022-11-28 22:11:07 +01:00
Felix Fontein
edf0d3ec99
Make kill_signal accept strings. (#506) 2022-11-28 22:10:07 +01:00
Felix Fontein
70d68dd2bd
ansible-core 2.11 is EOL. Move CI runs to GHA. (#504) 2022-11-27 22:37:54 +01:00
Felix Fontein
d043fc6cbc Include collection name into docs workflows. 2022-11-27 17:44:21 +01:00
Felix Fontein
b2c86af64c Reference documentation in README. 2022-11-26 09:53:44 +01:00
Felix Fontein
20492e940f Add GH Pages publishing. 2022-11-26 09:39:13 +01:00
iamjpotts
ce4b0ddcab
Ignore PyCharm related files (#501) 2022-11-25 21:19:47 +01:00
Felix Fontein
f17e6d52bd Allow changelog fragments with .yaml ending. 2022-11-17 12:41:54 +01:00
Felix Fontein
0d42792f97 Next expected release is 3.3.0. 2022-11-06 22:20:49 +01:00
Felix Fontein
1427a7ccdd Release 3.2.1. 2022-11-06 22:02:10 +01:00
Felix Fontein
c7f5c74f15 Prepare 3.2.1 release. 2022-11-06 21:16:18 +01:00
Felix Fontein
2261dff49f
Document attributes (#497)
* Add 'docker' action group attribute.

* Compatibility with older ansible-core releases.

* Fix typo.

* Document standard attributes.

* Improve docs.

* Add shortcuts for common combinations.
2022-11-06 21:15:09 +01:00
Felix Fontein
7ea99edf07 Next expected release is 3.3.0. 2022-11-01 21:41:17 +01:00
Felix Fontein
79b05f5e1d Release 3.2.0. 2022-11-01 21:18:46 +01:00
Felix Fontein
5b31f17016
Add image_name_mismatch option. (#488) 2022-11-01 19:48:58 +00:00
Felix Fontein
4e1bb64b0a Prepare 3.2.0 release. 2022-11-01 20:48:21 +01:00
Felix Fontein
51d5744cb0
docker_container: deprecate ignore_image and purge_networks (#487)
* Deprecate ignore_image and purge_networks.

* Fix YAML.

* Simple replacement doesn't work in this case.
2022-11-01 19:57:56 +01:00
Felix Fontein
1ac3a99e7c
Fix non-matching defaults. (#494) 2022-11-01 18:08:45 +01:00
James A. Robinson
df864221d6
added documentation to indicate docker_swarm_service does not currently support operating on stack based services. (#491) 2022-10-26 12:49:59 +02:00
Felix Fontein
af854ed63b Use dependabot to update GHAs. 2022-10-17 22:57:32 +02:00
Felix Fontein
a380607717 Bump one more. 2022-10-17 22:43:07 +02:00
Felix Fontein
25d44fe061
Bump checkout version. (#486) 2022-10-17 22:37:13 +02:00
Felix Fontein
ac606cd2bf
Change CI group identifiers. (#484) 2022-10-10 22:39:27 +02:00
Felix Fontein
1e93feed2b Fail on docs build errors. 2022-09-22 06:40:57 +02:00
Felix Fontein
e412c0d081
Add stable-2.14 to CI. (#478) 2022-09-21 08:16:55 +02:00
Felix Fontein
a309a1b2f0 Next expected release is 3.2.0. 2022-09-08 07:18:27 +02:00
Felix Fontein
a72d7795c4 Release 3.1.0. 2022-09-08 06:51:36 +02:00
Felix Fontein
3b41e7d6a8
Improve docker_compose example (#470)
* Improve compose docs.

* Also adjust inline v1 example.
2022-09-08 06:33:06 +02:00
Felix Fontein
9458bc6e62
Clarify that BuildKit / buildx cannot be used with docker_image. (#468) 2022-09-07 21:52:38 +02:00
Felix Fontein
d159479615 Prepare 3.1.0 release. 2022-09-03 12:08:05 +02:00
Max
c9ea1d3f92
docker_swarm: add data_path_port option for swarm init (#466)
* Add data_path_port option for swarm init and swarm join

* Add changelog fragment

* Update changelogs/fragments/466-add-data-path-port.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/docker_swarm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* add change for docker sdk, remove reference to swarm join

* remove duplicate entry

Co-authored-by: Felix Fontein <felix@fontein.de>
2022-09-03 11:20:02 +02:00
Felix Fontein
1e24120014
Fix two more booleans. (#464) 2022-08-23 21:26:50 +02:00
Felix Fontein
8254e72da0 Fix workflow's permissions. 2022-08-21 11:35:25 +02:00
Felix Fontein
68ea9c5f41
Make reuse conformant (#462)
* Add .license files.

* Add reuse test.

* Update README.

* Add changelog fragment.

* Normalize licenses extra sanity test.

* Declare REUSE conformance.

* Update README.
2022-08-21 08:29:15 +02:00
Felix Fontein
8eabc7fe00 Release 3.0.2. 2022-08-16 22:47:57 +02:00
Felix Fontein
d8297df7d0
Fix docker_image's build.args (#456)
* Add tests for build.args.

* Fix bug: store build args in correct dict

* Add changelog fragment.

* Update copyright notice.
2022-08-16 22:37:51 +02:00
Felix Fontein
ad05773e34
Fix docs fragment. (#460) 2022-08-16 21:54:56 +02:00
Felix Fontein
9b8c10b8cf
Move Fedora 35 from devel to stable-2.13 CI runs. (#458) 2022-08-16 21:41:36 +02:00
Felix Fontein
03ecea94c0 Prepare 3.0.2 release. 2022-08-16 13:48:02 +02:00
Evgeni Golov
91caf49988
correctly document incompatibility with Python 3.12+ (#454)
there was no, and there will not be, a Python 2.12 ;-)
2022-08-15 14:00:17 +02:00
Felix Fontein
6c9152567b Next expected release is 3.1.0. 2022-08-15 08:58:41 +02:00
Felix Fontein
fbac2ecc3d Release 3.0.1. 2022-08-15 08:02:34 +02:00
Felix Fontein
b720c8f486
Forgot to update copied version of deprecation notice. (#453) 2022-08-15 08:01:42 +02:00
Felix Fontein
f7cf12555c
docker_container: fix env_file option (#452)
* Add better tests for env and env_file.

* Make sure that non-container options are also passed to preprocessing code.

* Add changelog fragment.

* Add env_file override test.
2022-08-15 07:45:59 +02:00
Felix Fontein
f9741b7457 Prepare 3.0.1 release. 2022-08-14 12:26:23 +02:00
Felix Fontein
9acc75be85 Next expected release is 3.1.0. 2022-08-12 22:35:33 +02:00
Felix Fontein
be4f333696 Release 3.0.0. 2022-08-12 22:03:48 +02:00
Felix Fontein
a50257381f
Fix docker_plugin crash when handling plugin options (#447)
* Fix docker_plugin crash when handling plugin options.

* Try to add tests.
2022-08-12 19:29:45 +02:00
Felix Fontein
f513ba2c59
Fix error formatting bug. (#448) 2022-08-12 13:53:59 +02:00
Felix Fontein
5b76f05bef Prepare 3.0.0 release. 2022-08-10 21:49:56 +02:00
Felix Fontein
be58ccc13f
Normalize booleans in all other plugins and modules. (#440) 2022-08-10 21:25:10 +02:00
Felix Fontein
1bf8da2390
Normalize booleans in docker_container docs. (#439) 2022-08-09 18:32:05 +02:00
Felix Fontein
74134eda33
Fix docker_container tests (#441)
* Add diff output to figure out a bit more why the test fails.

* Make sure that both images have been pulled in advance.

* Dump the correct image.

* Allow tty test to fail in certain circumstances.
2022-08-08 23:23:23 +02:00
Felix Fontein
1e4633a606
For Python > 2, always use shutil.which instead of custom Windows helper code. (#438)
This is related to
42789818be
in the sense that for Python > 2, we also exclusively use shutil.which now,
but we do not remove the helper function since we need it for Python 2 on Windows.

Co-authored-by: Daniel Möller <n1ngu@riseup.net>

Co-authored-by: Daniel Möller <n1ngu@riseup.net>
2022-08-08 20:58:12 +02:00
Felix Fontein
bc6757d3b8
Fix docs. (#436) 2022-08-04 14:23:52 +02:00
Felix Fontein
b2bb064e47 Release 3.0.0-rc2. 2022-07-31 17:50:47 +02:00
Felix Fontein
da9b076904 Prepare 3.0.0-rc2. 2022-07-31 17:11:48 +02:00
Felix Fontein
ae708a7333
Vendored Docker SDK for Python updates (#434)
* utils: fix IPv6 address w/ port parsing

This was using a deprecated function (`urllib.splitnport`),
ostensibly to work around issues with brackets on IPv6 addresses.

Ironically, its usage was broken, and would result in mangled IPv6
addresses if they had a port specified in some instances.

Usage of the deprecated function has been eliminated and extra test
cases added where missing. All existing cases pass as-is. (The only
other change to the test was to improve assertion messages.)
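
A deprecation-free way to get the same result is to let urlsplit deal with the brackets; a sketch, not the vendored code itself:

    from urllib.parse import urlsplit

    def split_host_port(address):
        # urlsplit needs a '//' prefix to treat the input as a netloc;
        # it strips IPv6 brackets and parses the port correctly.
        parsed = urlsplit('//' + address)
        return parsed.hostname, parsed.port

    split_host_port('[fd00::1]:2376')        # -> ('fd00::1', 2376)
    split_host_port('tcp.example.com:2375')  # -> ('tcp.example.com', 2375)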

Cherry-picked from
f16c4e1147

Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* client: fix exception semantics in _raise_for_status

We want "The above exception was the direct cause of the following exception:" instead of "During handling of the above exception, another exception occurred:"

Cherry-picked from
bb11197ee3

Co-authored-by: Maor Kleinberger <kmaork@gmail.com>

* tls: use auto-negotiated highest version

Specific TLS versions are deprecated in latest Python, which
causes test failures due to treating deprecation errors as
warnings.

Luckily, the fix here is straightforward: we can eliminate some
custom version selection logic by using `PROTOCOL_TLS_CLIENT`,
which is the recommended method and will select the highest TLS
version supported by both client and server.
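
A sketch of the recommended pattern:

    import ssl

    # PROTOCOL_TLS_CLIENT auto-negotiates the highest mutually supported
    # TLS version and enables certificate verification and hostname
    # checking by default; no per-version constants are needed.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_default_certs()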

Cherry-picked from
56dd6de7df

Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* transport: fix ProxyCommand for SSH conn

Cherry-picked from
4e19cc48df

Co-authored-by: Guy Lichtman <glicht@users.noreply.github.com>

* ssh: do not create unnecessary subshell on exec

Cherry-picked from
bb40ba051f

Co-authored-by: liubo <liubo@uniontech.com>

* ssh: reject unknown host keys when using Python SSH impl

In the Secure Shell (SSH) protocol, host keys are used to verify the identity of remote hosts. Accepting unknown host keys may leave the connection open to man-in-the-middle attacks.

Do not accept unknown host keys. In particular, do not set the default missing host key policy for the Paramiko library to either AutoAddPolicy or WarningPolicy. Both of these policies continue even when the host key is unknown. The default setting of RejectPolicy is secure because it throws an exception when it encounters an unknown host key.

Reference: https://cwe.mitre.org/data/definitions/295.html

NOTE: This only affects SSH connections using the native Python SSH implementation (Paramiko), when `use_ssh_client=False` (default). If using the system SSH client (`use_ssh_client=True`), the host configuration
(e.g. `~/.ssh/config`) will apply.
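
A hedged sketch of the secure default (host name hypothetical):

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    # RejectPolicy raises on unknown host keys instead of accepting them;
    # do not replace it with AutoAddPolicy or WarningPolicy.
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    client.connect('docker-host.example.com', username='deploy')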

Cherry-picked from
d9298647d9

Co-authored-by: Audun Nes <audun.nes@gmail.com>

* lint: fix deprecation warnings from threading package

Set `daemon` attribute instead of using `setDaemon` method that
was deprecated in Python 3.10.
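
The replacement in a nutshell:

    import threading

    t = threading.Thread(target=print, args=('working',))
    t.daemon = True  # instead of the deprecated t.setDaemon(True)
    # or simply: threading.Thread(target=..., daemon=True)
    t.start()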

Cherry-picked from
adf5a97b12

Co-authored-by: Karthikeyan Singaravelan <tir.karthi@gmail.com>

* api: preserve cause when re-raising error

Use `from e` to ensure that the error context is propagated
correctly.

Cherry-picked from
05e143429e

Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* build: trim trailing whitespace from dockerignore entries

Cherry-picked from
3ee3a2486f

Co-authored-by: Clément Loiselet <clement.loiselet@capgemini.com>

* Improve formulation, also mention the security change as a breaking change.

Co-authored-by: Milas Bowman <milas.bowman@docker.com>
Co-authored-by: Maor Kleinberger <kmaork@gmail.com>
Co-authored-by: Guy Lichtman <glicht@users.noreply.github.com>
Co-authored-by: liubo <liubo@uniontech.com>
Co-authored-by: Audun Nes <audun.nes@gmail.com>
Co-authored-by: Karthikeyan Singaravelan <tir.karthi@gmail.com>
Co-authored-by: Clément Loiselet <clement.loiselet@capgemini.com>
2022-07-31 17:09:18 +02:00
Maxwell G
a33e51e04a
Prefer unittest.mock by universally using compat.mock (#433)
* Prefer unittest.mock by using compat.mock

`mock` is a backport of the `unittest.mock` module from the stdlib, and
there's no reason to use it on newer Python versions. `mock` is deprecated
in Fedora, so I figured I'd propose this here before downstream patching
our ansible-collection-community-docker package.
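
The compat shim reduces to a guarded import that prefers the stdlib; a sketch:

    try:
        from unittest import mock  # stdlib since Python 3.3
    except ImportError:
        import mock  # backport, only needed on very old Pythons

    with mock.patch('os.getcwd', return_value='/tmp'):
        pass  # code under test sees the patched os.getcwd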

* Remove compat.mock code for older Python 3 versions

This removes compatibility for older versions of Python 3 that are no
longer supported.
2022-07-31 16:39:31 +02:00
Felix Fontein
e90647b209 Release 3.0.0-rc1. 2022-07-26 08:59:01 +02:00
Felix Fontein
cc55aeb882 Prepare 3.0.0-rc1. 2022-07-26 08:28:25 +02:00
Felix Fontein
9c5d562c0e
Fix bug when TLS is used (#432)
* Fix bug when TLS is used.

* Add HTTP/HTTPS connection test.
2022-07-26 08:25:53 +02:00
Felix Fontein
6caaa3a90b Release 3.0.0-a3. 2022-07-23 15:21:05 +02:00
Felix Fontein
ca3d7a3609 Prepare 3.0.0-a3 release. 2022-07-23 15:10:51 +02:00
Felix Fontein
a4539a309e
Move licenses to LICENSES/, use SPDX-License-Identifier, mention all licenses in galaxy.yml (#430)
* Move licenses to LICENSES/, use SPDX-License-Identifier, mention all licenses in galaxy.yml.

* ignore.txt lines cannot be empty or contain only a comment.

* Cleanup.

* This particular __init__.py seems to be crucial.

* Try extra newline.

* Markdown comments are a real mess. I hope this won't break Galaxy...

* More licenses.

* Add sanity test.

* Skip some files, lint.

* Make sure there is a copyright line everywhere.

* Also check for copyright line in sanity tests.

* Remove colon after 'Copyright'.

* Normalize lint script.

* Avoid colon after 'Copyright' in lint script.

* Improve license checker.

* Update README.md

Co-authored-by: Maxwell G <9920591+gotmax23@users.noreply.github.com>

* Remove superfluous space.

* Referencing target instead of symlink

Co-authored-by: Maxwell G <9920591+gotmax23@users.noreply.github.com>
2022-07-20 07:45:33 +02:00
Felix Fontein
0e3d7d4802 Release 3.0.0-a2. 2022-07-15 19:28:10 +02:00
Felix Fontein
e26890a909
Implement platform parameter for docker_container, first version. (#426) 2022-07-15 17:14:57 +02:00
Felix Fontein
5d0a036819
docker_container: add image_comparison parameter (#428)
* Add image_comparison parameter.

* Forgot version_added.
2022-07-15 17:14:40 +02:00
Felix Fontein
37c868e192
Add support for cgroupns_mode parameter. (#427) 2022-07-15 17:14:23 +02:00
Felix Fontein
2f1d9b3ff9 Prepare 3.0.0-a2 release. 2022-07-15 14:10:49 +02:00
Felix Fontein
77e63e2cca
Rewrite docker_container to use Docker API directly (#422)
* Begin experiments for docker_container rewrite.

* Continued.

* We support API >= 1.25 only anyway.

* Continued.

* Fix bugs.

* Complete first basic implementation.

* Continuing.

* Improvements and fixes.

* Continuing.

* More 'easy' options.

* More options.

* Work on volumes and mounts.

* Add more options.

* The last option.

* Copy over.

* Fix exposed ports.

* Fix bugs.

* Fix command and entrypoint.

* More fixes.

* Fix more bugs.

* ci_complete

* Lint, fix Python 2.7 bugs, work around ansible-test bug.

ci_complete

* Remove no longer applicable test.

ci_complete

* Remove unnecessary ignore.

ci_complete

* Start with engine driver.

* Refactoring.

* Avoid using anything Docker specific from self.client.

* Refactor.

* Add Python 2.6 ignore.txt entries for ansible-core < 2.12.

* Improve healthcheck handling.

* Fix container removal logic.

* ci_complete

* Remove handling of older Docker SDK for Python versions from integration tests.

* Avoid recreation if a pure update is possible without losing the diff data.

* Cover the case that blkio_weight does not work.

* Update plugins/module_utils/module_container/docker_api.py

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Improve memory_swap tests.

* Fix URLs in changelog fragment.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2022-07-15 07:24:14 +02:00
Felix Fontein
04121b5882
Rewrite docker_plugin to not use the Docker SDK for Python (#429)
* Rewrite the docker_plugin module to use the low-level client from Docker SDK for Python.

* Rewrite to no longer use the Docker SDK for Python.

* Remove Docker SDK for Python version from tests.
2022-07-14 16:29:37 +02:00
Felix Fontein
c00b4ec9be
Adjust to b1dd2af4ca. (#423) 2022-07-12 17:01:50 +02:00
Felix Fontein
f6d4cad46e
Fix tests on Ubuntu 22.04 (#419)
* Try to fix tests on Ubuntu 22.04.

* Let Ansible handle the apt repo install.
2022-07-07 22:54:14 +02:00
Felix Fontein
58211153db
Improve README (#418)
* List missing plugins.

* Fix short description.

* Improve section on requirements.

* Apply suggestions from code review

Co-authored-by: Don Naro <dnaro@redhat.com>

Co-authored-by: Don Naro <dnaro@redhat.com>
2022-07-07 13:56:48 +02:00
Felix Fontein
b90cc8b3f9 Release 3.0.0-a1. 2022-07-07 07:07:55 +02:00
Felix Fontein
848e21d253 Prepare 3.0.0-a1 release. 2022-07-06 22:40:00 +02:00
Felix Fontein
da9252a67e
Rewrite the docker_api connection plugin (#414)
* Rewrite the docker_api connection plugin.

* Improve formulation.

* Improve error messages.
2022-07-06 21:48:55 +02:00
Felix Fontein
23a90668c9
Rewrite the docker_containers inventory plugin (#413)
* Rewrite the docker_containers inventory plugin.

* Improve error messages.
2022-07-06 21:48:32 +02:00
Felix Fontein
c3a76007d0
Rewrite the docker_volume_info module (#412)
* Rewrite the docker_volume_info module.

* Improve error messages.
2022-07-06 21:48:22 +02:00
Felix Fontein
6869eaf869
Rewrite the docker_volume module (#411)
* Rewrite the docker_volume module.

* Improve error messages.
2022-07-06 21:48:16 +02:00
Felix Fontein
e60ce69102
Rewrite the docker_prune module (#410)
* Rewrite the docker_prune module.

* Improve error messages.
2022-07-06 21:47:43 +02:00
Felix Fontein
18fdd04782
Rewrite the docker_network_info module (#409)
* Rewrite the docker_network_info module.

* Improve error messages.
2022-07-06 21:47:37 +02:00
Felix Fontein
4c026307fb
Rewrite the docker_network module (#408)
* Rewrite the docker_network module.

* Update plugins/modules/docker_network.py

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Improve error messages.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2022-07-06 21:47:32 +02:00
Felix Fontein
a406b08981
Rewrite the docker_login module (#407)
* Rewrite the docker_login module.

* Improve error messages.
2022-07-06 21:47:27 +02:00
Felix Fontein
f82c8401c2
Rewrite the docker_image_load module (#406)
* Rewrite the docker_image_load module.

* Improve error messages.
2022-07-06 21:46:19 +02:00
Felix Fontein
e4f3402035
Rewrite the docker_image_info module (#405)
* Rewrite the docker_image_info module.

* Improve error messages.
2022-07-06 21:46:14 +02:00
Felix Fontein
4f2f45b953
Rewrite the docker_image module (#404)
* Rewrite the docker_image module.

* Improve error messages.
2022-07-06 21:46:02 +02:00
Felix Fontein
9e168b75cf
Rewrite the docker_host_info module (#403)
* Rewrite the docker_host_info module.

* Improve error messages.
2022-07-06 21:45:57 +02:00
Felix Fontein
37ff980a44
Rewrite the docker_container_info module (#402)
* Rewrite the docker_container_info module.

* Improve error messages.

* Remove wrong requirement.
2022-07-06 21:45:51 +02:00
Felix Fontein
1101997844
Rewrite the docker_container_exec module (#401)
* Rewrite docker_container_exec.

* Improve error messages.
2022-07-06 21:45:44 +02:00
Felix Fontein
9e57f29b3b
Refactoring. (#415) 2022-07-03 13:28:11 +02:00
Felix Fontein
623786c659
Implement all remaining deprecations for 3.0.0 (#400)
* Remove support for Ansible 2.9 and ansible-base 2.10.

* Remove Ansible 2.9 compatibility code.

* Remove docker-compose from EE.

* Drop support for Python 2.6. Stop advertising docker-py for Python 2.6.

* Drop support for API versions 1.20 to 1.24.

* Fix condition.
2022-07-02 17:13:53 +02:00
Felix Fontein
4d508b4c37
Vendor API connection code from Docker SDK for Python (#398)
* Vendor parts of the Docker SDK for Python.

This is a combination of the latest git version
(a48a5a9647)
and the version before Python 2.7 support was removed
(650aad3a5f),
including some modifications to work with Ansible module_utils's
system (i.e. third-party imports are guarded, and errors are
reported during runtime through a new exception
MissingRequirementException).

* Create module_utils and plugin_utils for working with the vendored code.

The delete call cannot be named delete(), since a method of that name already exists in requests.

* Vendor more code from Docker SDK for Python.

* Adjust code from common module_utils.

* Add unit tests from Docker SDK for Python.

* Make tests compile with Python 2.6, but skip them when running on Python 2.6.

* Skip test that requires a network server.

* Add changelog.

* Update changelogs/fragments/398-docker-api.yml

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Minimum API version is 1.25.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
2022-07-02 16:40:44 +02:00
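The guarded-import mechanism this commit message describes is a standard Ansible module_utils pattern. A minimal Python sketch with illustrative names (the collection's actual helper names and exception signature may differ):

```python
# A sketch of the guarded-import pattern described above (names are
# illustrative; the collection's actual helpers may differ).
import traceback

try:
    import requests  # third-party dependency used by the vendored client code
    HAS_REQUESTS = True
    REQUESTS_IMPORT_ERROR = None
except ImportError:
    requests = None
    HAS_REQUESTS = False
    REQUESTS_IMPORT_ERROR = traceback.format_exc()


class MissingRequirementException(Exception):
    """Raised at runtime when an optional third-party library is missing."""

    def __init__(self, msg, requirement, import_exception=None):
        super().__init__(msg)
        self.requirement = requirement
        self.import_exception = import_exception


def _ensure_requests():
    # Call this from code paths that actually need the library, so that
    # importing the module_utils itself never fails.
    if not HAS_REQUESTS:
        raise MissingRequirementException(
            "The 'requests' library is required but could not be imported.",
            "requests",
            REQUESTS_IMPORT_ERROR,
        )
```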
Felix Fontein
21d112bddb
Unvendor distutils.version (#271)
* Unvendor distutils.version.

* Document breaking change.

* Update comment, add StrictVersion.
2022-07-02 15:06:58 +02:00
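For context, ansible-core has shipped a distutils.version replacement in ansible.module_utils.compat.version since ansible-core 2.11, which is what makes unvendoring possible once support for older Ansible versions is dropped. A minimal sketch with illustrative version strings:

```python
# Sketch, assuming ansible-core >= 2.11 (where the compat shim exists);
# the version strings are illustrative, not taken from the collection.
from ansible.module_utils.compat.version import LooseVersion, StrictVersion

docker_api_version = "1.41"
MINIMAL_API_VERSION = "1.25"

if LooseVersion(docker_api_version) < LooseVersion(MINIMAL_API_VERSION):
    raise RuntimeError("Docker API version %s is too old" % docker_api_version)

# StrictVersion additionally understands pre-release suffixes:
assert StrictVersion("3.0.0") > StrictVersion("3.0.0a1")
```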
Felix Fontein
0071b343f1
Drop CentOS 8 from CI. (#289) 2022-07-02 15:01:19 +02:00
Felix Fontein
bc64aef5ca Revert "Revert "Remove deprecated functionality. (#363)""
This reverts commit e6d597b539.
2022-07-02 14:28:31 +02:00
Felix Fontein
6206976dbb Revert "Revert "Remove deprecations from docker_container, bump collection version to 3.0.0 (#399)""
This reverts commit 57e19ca596.
2022-07-02 14:28:27 +02:00
703 changed files with 75873 additions and 24873 deletions

30
.ansible-lint Normal file

@ -0,0 +1,30 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
skip_list:
# Ignore rules that make no sense:
- galaxy[tags]
- galaxy[version-incorrect]
- meta-runtime[unsupported-version]
- no-changed-when
- sanity[cannot-ignore] # some of the rules you cannot ignore actually MUST be ignored, like yamllint:unparsable-with-libyaml
- yaml # we're using yamllint ourselves
- run-once[task] # wtf???
# To be checked and maybe fixed:
- ignore-errors
- key-order[task]
- name[casing]
- name[missing]
- name[play]
- name[template]
- no-free-form
- no-handler
- risky-file-permissions
- risky-shell-pipe
- var-naming[no-reserved]
- var-naming[no-role-prefix]
- var-naming[pattern]
- var-naming[read-only]


@ -1,3 +1,9 @@
<!--
Copyright (c) Ansible Project
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
-->
## Azure Pipelines Configuration
Please see the [Documentation](https://github.com/ansible/community/wiki/Testing:-Azure-Pipelines) for more information.


@ -1,3 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
trigger:
batch: true
branches:
@ -24,15 +29,13 @@ schedules:
always: true
branches:
include:
- stable-*
- stable-4
variables:
- name: checkoutPath
value: ansible_collections/community/docker
- name: coverageBranches
value: main
- name: pipelinesCoverage
value: coverage
- name: entryPoint
value: tests/utils/shippable/shippable.sh
- name: fetchDepth
@ -41,7 +44,7 @@ variables:
resources:
containers:
- container: default
image: quay.io/ansible/azure-pipelines-test-container:3.0.0
image: quay.io/ansible/azure-pipelines-test-container:7.0.0
pool: Standard
@ -57,65 +60,41 @@ stages:
targets:
- name: Sanity
test: 'devel/sanity/1'
- name: Sanity Extra # Only on devel
test: 'devel/sanity/extra'
- name: Units
test: 'devel/units/1'
- stage: Ansible_2_13
displayName: Sanity & Units 2.13
- stage: Ansible_2_20
displayName: Sanity & Units 2.20
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.13/sanity/1'
test: '2.20/sanity/1'
- name: Units
test: '2.13/units/1'
- stage: Ansible_2_12
displayName: Sanity & Units 2.12
test: '2.20/units/1'
- stage: Ansible_2_19
displayName: Sanity & Units 2.19
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.12/sanity/1'
test: '2.19/sanity/1'
- name: Units
test: '2.12/units/1'
- stage: Ansible_2_11
displayName: Sanity & Units 2.11
test: '2.19/units/1'
- stage: Ansible_2_18
displayName: Sanity & Units 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.11/sanity/1'
test: '2.18/sanity/1'
- name: Units
test: '2.11/units/1'
- stage: Ansible_2_10
displayName: Sanity & Units 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.10/sanity/1'
- name: Units
test: '2.10/units/1'
- stage: Ansible_2_9
displayName: Sanity & Units 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
targets:
- name: Sanity
test: '2.9/sanity/1'
- name: Units
test: '2.9/units/1'
test: '2.18/units/1'
### Docker
- stage: Docker_devel
@ -126,97 +105,61 @@ stages:
parameters:
testFormat: devel/linux/{0}
targets:
- name: CentOS 7
test: centos7
- name: Fedora 36
test: fedora36
- name: Fedora 35
test: fedora35
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Fedora 42
test: fedora42
- name: Ubuntu 22.04
test: ubuntu2204
- name: Alpine 3
test: alpine3
- name: Ubuntu 24.04
test: ubuntu2404
- name: Alpine 3.22
test: alpine322
groups:
- 4
- 5
- stage: Docker_2_13
displayName: Docker 2.13
- stage: Docker_2_20
displayName: Docker 2.20
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.13/linux/{0}
testFormat: 2.20/linux/{0}
targets:
- name: CentOS 7
test: centos7
- name: openSUSE 15 py2
test: opensuse15py2
- name: Alpine 3
test: alpine3
- name: Fedora 42
test: fedora42
- name: Alpine 3.22
test: alpine322
groups:
- 4
- 5
- stage: Docker_2_12
displayName: Docker 2.12
- stage: Docker_2_19
displayName: Docker 2.19
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.12/linux/{0}
testFormat: 2.19/linux/{0}
targets:
- name: CentOS 8
test: centos8
- name: Fedora 34
test: fedora34
- name: Ubuntu 18.04
test: ubuntu1804
- name: Fedora 41
test: fedora41
- name: Alpine 3.21
test: alpine321
groups:
- 4
- 5
- stage: Docker_2_11
displayName: Docker 2.11
- stage: Docker_2_18
displayName: Docker 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.11/linux/{0}
testFormat: 2.18/linux/{0}
targets:
- name: Fedora 33
test: fedora33
- name: Alpine 3
test: alpine3
groups:
- 4
- 5
- stage: Docker_2_10
displayName: Docker 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.10/linux/{0}
targets:
- name: Fedora 32
test: fedora32
- name: Ubuntu 16.04
test: ubuntu1604
groups:
- 4
- 5
- stage: Docker_2_9
displayName: Docker 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: 2.9/linux/{0}
targets:
- name: Fedora 31
test: fedora31
- name: Fedora 40
test: fedora40
- name: Ubuntu 22.04
test: ubuntu2204
- name: Alpine 3.20
test: alpine320
groups:
- 4
- 5
@ -230,12 +173,14 @@ stages:
parameters:
testFormat: devel/linux-community/{0}
targets:
- name: Debian Bullseye
- name: Debian 11 Bullseye
test: debian-bullseye/3.9
- name: Debian 12 Bookworm
test: debian-bookworm/3.11
- name: Debian 13 Trixie
test: debian-13-trixie/3.13
- name: ArchLinux
test: archlinux/3.10
- name: CentOS Stream 8
test: centos-stream8/3.8
test: archlinux/3.13
groups:
- 4
- 5
@ -247,95 +192,71 @@ stages:
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: devel/rhel/{0}
testFormat: devel/{0}
targets:
- test: '7.9'
- test: '9.0-pypi-latest'
- name: RHEL 10.0
test: rhel/10.0
- name: RHEL 9.6 with Docker SDK, urllib3, requests from sources
test: rhel/9.6-dev-latest
# For some reason, Ubuntu 24.04 is *much* slower than RHEL 9.6
# - name: Ubuntu 24.04
# test: ubuntu/24.04
groups:
- 1
- 2
- 3
- 4
- 5
- stage: Remote_2_13
displayName: Remote 2.13
- stage: Remote_2_20
displayName: Remote 2.20
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: 2.13/rhel/{0}
testFormat: 2.20/{0}
targets:
- test: '8.5'
- name: RHEL 9.6
test: rhel/9.6
groups:
- 1
- 2
- 3
- 4
- 5
- stage: Remote_2_12
displayName: Remote 2.12
- stage: Remote_2_19
displayName: Remote 2.19
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: 2.12/rhel/{0}
testFormat: 2.19/{0}
targets:
- test: '8.4'
- name: RHEL 9.5
test: rhel/9.5
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 1
- 2
- 3
- 4
- 5
- stage: Remote_2_11
displayName: Remote 2.11
- stage: Remote_2_18
displayName: Remote 2.18
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: 2.11/rhel/{0}
testFormat: 2.18/{0}
targets:
- test: '8.3'
- name: RHEL 9.4
test: rhel/9.4
groups:
- 1
- 2
- 3
- 4
- 5
- stage: Remote_2_10
displayName: Remote 2.10
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: 2.10/rhel/{0}
targets:
- test: '7.8'
groups:
- 1
- 2
- 3
- 4
- stage: Remote_2_9
displayName: Remote 2.9
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: RHEL {0}
testFormat: 2.9/rhel/{0}
targets:
- test: '8.2'
groups:
- 1
- 2
- 3
- 4
## Finally
@ -343,23 +264,17 @@ stages:
condition: succeededOrFailed()
dependsOn:
- Ansible_devel
- Ansible_2_13
- Ansible_2_12
- Ansible_2_11
- Ansible_2_10
- Ansible_2_9
- Ansible_2_20
- Ansible_2_19
- Ansible_2_18
- Remote_devel
- Remote_2_13
- Remote_2_12
- Remote_2_11
- Remote_2_10
- Remote_2_9
- Remote_2_20
- Remote_2_19
- Remote_2_18
- Docker_devel
- Docker_2_13
- Docker_2_12
- Docker_2_11
- Docker_2_10
- Docker_2_9
- Docker_2_20
- Docker_2_19
- Docker_2_18
- Docker_community_devel
jobs:
- template: templates/coverage.yml


@ -1,6 +1,10 @@
#!/usr/bin/env bash
# Aggregate code coverage results for later processing.
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -o pipefail -eu
agent_temp_directory="$1"


@ -1,4 +1,8 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Combine coverage data from multiple jobs, keeping the data only from the most recent attempt from each job.
Coverage artifacts must be named using the format: "Coverage $(System.JobAttempt) {StableUniqueNameForEachJob}"
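The selection rule in this docstring (keep only the highest System.JobAttempt per stable job name) can be sketched in a few lines of Python; the helper name and exact artifact-name parsing below are assumptions based on the docstring, not the script's actual code:

```python
# Sketch of the "highest attempt wins" selection described above.
import re


def latest_attempts(artifact_names):
    """Map each stable job name to the artifact of its highest attempt."""
    latest = {}
    for name in artifact_names:
        match = re.match(r"Coverage (\d+) (.+)", name)
        if not match:
            continue
        attempt, job = int(match.group(1)), match.group(2)
        if job not in latest or attempt > latest[job][0]:
            latest[job] = (attempt, name)
    return {job: name for job, (attempt, name) in latest.items()}


print(latest_attempts(["Coverage 1 Sanity devel", "Coverage 2 Sanity devel"]))
# {'Sanity devel': 'Coverage 2 Sanity devel'}
```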


@ -1,6 +1,10 @@
#!/usr/bin/env bash
# Check the test results and set variables for use in later steps.
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -o pipefail -eu
if [[ "$PWD" =~ /ansible_collections/ ]]; then


@ -1,4 +1,8 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Upload code coverage reports to codecov.io.
Multiple coverage files from multiple languages are accepted and aggregated after upload.


@ -1,6 +1,10 @@
#!/usr/bin/env bash
# Generate code coverage reports for uploading to Azure Pipelines and codecov.io.
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -o pipefail -eu
PATH="${PWD}/bin:${PATH}"


@ -1,6 +1,10 @@
#!/usr/bin/env bash
# Configure the test environment and run the tests.
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -o pipefail -eu
entry_point="$1"


@ -1,4 +1,8 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
"""Prepends a relative timestamp to each input line from stdin and writes it to stdout."""
from __future__ import (absolute_import, division, print_function)
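The script's body is not part of this diff; as a rough sketch (not the repository's actual implementation), a stdin-to-stdout relative-timestamp filter of the kind the docstring describes could look like this:

```python
#!/usr/bin/env python
"""Sketch of a relative-timestamp filter as described by the docstring above."""
from __future__ import absolute_import, division, print_function

import sys
import time

start = time.time()
for line in sys.stdin:
    elapsed = time.time() - start
    # Prefix each line with minutes:seconds since the script started.
    sys.stdout.write("%02d:%05.2f %s" % (elapsed // 60, elapsed % 60, line))
    sys.stdout.flush()
```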


@ -1,3 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# This template adds a job for processing code coverage data.
# It will upload results to Azure Pipelines and codecov.io.
# Use it from a job stage that completes after all other jobs have completed.
@ -23,16 +28,6 @@ jobs:
- bash: .azure-pipelines/scripts/report-coverage.sh
displayName: Generate Coverage Report
condition: gt(variables.coverageFileCount, 0)
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
# Azure Pipelines only accepts a single coverage data file.
# That means only Python or PowerShell coverage can be uploaded, but not both.
# Set the "pipelinesCoverage" variable to determine which type is uploaded.
# Use "coverage" for Python and "coverage-powershell" for PowerShell.
summaryFileLocation: "$(outputPath)/reports/$(pipelinesCoverage).xml"
displayName: Publish to Azure Pipelines
condition: gt(variables.coverageFileCount, 0)
- bash: .azure-pipelines/scripts/publish-codecov.py "$(outputPath)"
displayName: Publish to codecov.io
condition: gt(variables.coverageFileCount, 0)


@ -1,3 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# This template uses the provided targets and optional groups to generate a matrix which is then passed to the test template.
# If this matrix template does not provide the required functionality, consider using the test template directly instead.
@ -45,11 +50,11 @@ jobs:
parameters:
jobs:
- ${{ if eq(length(parameters.groups), 0) }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(parameters.nameFormat, coalesce(target.name, target.test)) }}
test: ${{ format(parameters.testFormat, coalesce(target.test, target.name)) }}
- ${{ if not(eq(length(parameters.groups), 0)) }}:
- ${{ each group in parameters.groups }}:
- ${{ each target in parameters.targets }}:
- name: ${{ format(format(parameters.nameGroupFormat, parameters.nameFormat), coalesce(target.name, target.test), group) }}
test: ${{ format(format(parameters.testGroupFormat, parameters.testFormat), coalesce(target.test, target.name), group) }}
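The nested format() calls in this template compose two format strings: the inner call substitutes the per-target nameFormat/testFormat into the group format, and the outer call then fills in the target name and the group number. A rough Python emulation, assuming (as the nested calls here rely on) that Azure Pipelines' format() leaves placeholders without a matching argument untouched:

```python
import re


def az_format(fmt, *args):
    """Rough emulation of Azure Pipelines format(): {i} becomes args[i];
    placeholders without a matching argument pass through unchanged."""
    def sub(match):
        index = int(match.group(1))
        return str(args[index]) if index < len(args) else match.group(0)
    return re.sub(r"\{(\d+)\}", sub, fmt)


# Hypothetical parameter values to show the two-level composition:
name_format = "RHEL {0}"
name_group_format = "{0} - {1}"

inner = az_format(name_group_format, name_format)  # "RHEL {0} - {1}"
print(az_format(inner, "9.6", 3))                  # "RHEL 9.6 - 3"
```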


@ -1,3 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# This template uses the provided list of jobs to create one or more test jobs.
# It can be used directly if needed, or through the matrix template.
@ -9,37 +14,37 @@ parameters:
jobs:
- ${{ each job in parameters.jobs }}:
- job: test_${{ replace(replace(replace(job.test, '/', '_'), '.', '_'), '-', '_') }}
displayName: ${{ job.name }}
container: default
workspace:
clean: all
steps:
- checkout: self
fetchDepth: $(fetchDepth)
path: $(checkoutPath)
- bash: .azure-pipelines/scripts/run-tests.sh "$(entryPoint)" "${{ job.test }}" "$(coverageBranches)"
displayName: Run Tests
- bash: .azure-pipelines/scripts/process-results.sh
condition: succeededOrFailed()
displayName: Process Results
- bash: .azure-pipelines/scripts/aggregate-coverage.sh "$(Agent.TempDirectory)"
condition: eq(variables.haveCoverageData, 'true')
displayName: Aggregate Coverage Data
- task: PublishTestResults@2
condition: eq(variables.haveTestResults, 'true')
inputs:
testResultsFiles: "$(outputPath)/junit/*.xml"
displayName: Publish Test Results
- task: PublishPipelineArtifact@1
condition: eq(variables.haveBotResults, 'true')
displayName: Publish Bot Results
inputs:
targetPath: "$(outputPath)/bot/"
artifactName: "Bot $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"
- task: PublishPipelineArtifact@1
condition: eq(variables.haveCoverageData, 'true')
displayName: Publish Coverage Data
inputs:
targetPath: "$(Agent.TempDirectory)/coverage/"
artifactName: "Coverage $(System.JobAttempt) $(System.StageDisplayName) $(System.JobDisplayName)"

13
.flake8 Normal file

@ -0,0 +1,13 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
[flake8]
extend-ignore = E203, E402, F401
count = true
# TODO: decrease this to ~10
max-complexity = 60
# black's max-line-length is 88, but it doesn't touch long string literals.
# Since ansible-test's limit is 160, let's use that here.
max-line-length = 160
statistics = true

8
.git-blame-ignore-revs Normal file

@ -0,0 +1,8 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Reformat YAML: https://github.com/ansible-collections/community.docker/pull/1071
2487d1a0bf4f2c79d3ab5a9e7d0f969432bf32a2
# Reformat with black and isort
d65d37e9e9a78e03a35643704b413121515ee39c

15
.github/dependabot.yml vendored Normal file

@ -0,0 +1,15 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
groups:
ci:
patterns:
- "*"


@ -1,4 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
backport_branch_prefix: patchback/backports/
backport_label_prefix: backport-
target_branch_prefix: stable-

90
.github/workflows/docker-images.yml vendored Normal file

@ -0,0 +1,90 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: Helper Docker images for testing
'on':
# Run CI against all pushes (direct commits, also merged PRs), Pull Requests
push:
branches:
- main
paths:
- .github/workflows/docker-images.yml
- tests/images/**
pull_request:
branches:
- main
paths:
- .github/workflows/docker-images.yml
- tests/images/**
# Run CI once per day (at 03:00 UTC)
schedule:
- cron: '0 3 * * *'
env:
CONTAINER_REGISTRY: ghcr.io/ansible-collections
jobs:
build:
name: Build image ${{ matrix.name }}:${{ matrix.tag }}
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
include:
- name: simple-1
tag: tag
tag-as-latest: true
- name: simple-2
tag: tag
tag-as-latest: true
- name: healthcheck
tag: check
tag-as-latest: true
steps:
- name: Check out repository
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Install dependencies
run: |
sudo apt-get install podman buildah
- name: Set up Go 1.22
uses: actions/setup-go@v6
with:
go-version: '1.22'
cache: false # true (default) results in warnings since we don't use Go modules
- name: Build ${{ matrix.name }} image
run: |
./build.sh "${CONTAINER_REGISTRY}/${{ matrix.name }}:${{ matrix.tag }}"
working-directory: tests/images/${{ matrix.name }}
- name: Tag image as latest
if: matrix.tag-as-latest && matrix.tag != 'latest'
run: |
podman tag "${CONTAINER_REGISTRY}/${{ matrix.name }}:${{ matrix.tag }}" "${CONTAINER_REGISTRY}/${{ matrix.name }}:latest"
- name: Publish container image ${{ env.CONTAINER_REGISTRY }}/${{ matrix.name }}:${{ matrix.tag }}
if: github.event_name != 'pull_request'
uses: redhat-actions/push-to-registry@v2
with:
registry: ${{ env.CONTAINER_REGISTRY }}
image: ${{ matrix.name }}
tags: ${{ matrix.tag }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Publish container image ${{ env.CONTAINER_REGISTRY }}/${{ matrix.name }}:latest
if: github.event_name != 'pull_request' && matrix.tag-as-latest && matrix.tag != 'latest'
uses: redhat-actions/push-to-registry@v2
with:
registry: ${{ env.CONTAINER_REGISTRY }}
image: ${{ matrix.name }}
tags: latest
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}


@ -1,23 +1,61 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: Collection Docs
concurrency:
group: docs-${{ github.head_ref }}
group: docs-pr-${{ github.head_ref }}
cancel-in-progress: true
on:
'on':
pull_request_target:
types: [opened, synchronize, reopened, closed]
env:
GHP_BASE_URL: https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}
jobs:
build-docs:
permissions:
contents: read
name: Build Ansible Docs
uses: ansible-community/github-docs-build/.github/workflows/_shared-docs-build-pr.yml@main
with:
collection-name: community.docker
init-lenient: false
init-fail-on-error: true
squash-hierarchy: true
init-project: Community.Docker Collection
init-copyright: Community.Docker Contributors
init-title: Community.Docker Collection Documentation
init-html-short-title: Community.Docker Collection Docs
init-extra-html-theme-options: |
documentation_home_url=https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}/branch/main/
render-file-line: '> * `$<status>` [$<path_tail>](https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}/pr/${{ github.event.number }}/$<path_tail>)'
extra-collections: community.library_inventory_filtering_v1
publish-docs-gh-pages:
# for now we won't run this on forks
if: github.repository == 'ansible-collections/community.docker'
permissions:
contents: write
pages: write
id-token: write
needs: [build-docs]
name: Publish Ansible Docs
uses: ansible-community/github-docs-build/.github/workflows/_shared-docs-build-publish-gh-pages.yml@main
with:
artifact-name: ${{ needs.build-docs.outputs.artifact-name }}
action: ${{ (github.event.action == 'closed' || needs.build-docs.outputs.changed != 'true') && 'teardown' || 'publish' }}
publish-gh-pages-branch: true
secrets:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
comment:
permissions:
pull-requests: write
runs-on: ubuntu-latest
needs: build-docs
needs: [build-docs, publish-docs-gh-pages]
name: PR comments
steps:
- name: PR comment
@ -35,13 +73,20 @@ jobs:
Thank you for your contribution! ✨
This PR has been merged and your docs changes will be incorporated when they are next published.
This PR has been merged and the docs are now incorporated into `main`:
${{ env.GHP_BASE_URL }}/branch/main
body: |
## Docs Build 📝
Thank you for your contribution! ✨
The docsite for **this PR** is available for download as an artifact from this run:
The docs for **this PR** have been published here:
${{ env.GHP_BASE_URL }}/pr/${{ github.event.number }}
You can compare to the docs for the `main` branch here:
${{ env.GHP_BASE_URL }}/branch/main
The docsite for **this PR** is also available for download as an artifact from this run:
${{ needs.build-docs.outputs.artifact-url }}
File changes:

56
.github/workflows/docs-push.yml vendored Normal file

@ -0,0 +1,56 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: Collection Docs
concurrency:
group: docs-push-${{ github.sha }}
cancel-in-progress: true
'on':
push:
branches:
- main
- stable-*
tags:
- '*'
# Run CI once per day (at 09:00 UTC)
schedule:
- cron: '0 9 * * *'
# Allow manual trigger (for newer antsibull-docs, sphinx-ansible-theme, ... versions)
workflow_dispatch:
jobs:
build-docs:
permissions:
contents: read
name: Build Ansible Docs
uses: ansible-community/github-docs-build/.github/workflows/_shared-docs-build-push.yml@main
with:
collection-name: community.docker
init-lenient: false
init-fail-on-error: true
squash-hierarchy: true
init-project: Community.Docker Collection
init-copyright: Community.Docker Contributors
init-title: Community.Docker Collection Documentation
init-html-short-title: Community.Docker Collection Docs
init-extra-html-theme-options: |
documentation_home_url=https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}/branch/main/
extra-collections: community.library_inventory_filtering_v1
publish-docs-gh-pages:
# for now we won't run this on forks
if: github.repository == 'ansible-collections/community.docker'
permissions:
contents: write
pages: write
id-token: write
needs: [build-docs]
name: Publish Ansible Docs
uses: ansible-community/github-docs-build/.github/workflows/_shared-docs-build-publish-gh-pages.yml@main
with:
artifact-name: ${{ needs.build-docs.outputs.artifact-name }}
publish-gh-pages-branch: true
secrets:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@ -1,114 +0,0 @@
---
name: execution environment
on:
# Run CI against all pushes (direct commits, also merged PRs), Pull Requests
push:
branches:
- main
- stable-*
pull_request:
# Run CI once per day (at 04:30 UTC)
# This ensures that even if there haven't been commits, we are still testing against the latest version of ansible-builder
schedule:
- cron: '30 4 * * *'
env:
NAMESPACE: community
COLLECTION_NAME: docker
jobs:
build:
name: Build and test EE (Ⓐ${{ matrix.runner_tag }})
strategy:
matrix:
runner_tag:
- devel
- stable-2.12-latest
- stable-2.11-latest
- stable-2.9-latest
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
with:
path: ansible_collections/${{ env.NAMESPACE }}/${{ env.COLLECTION_NAME }}
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: '3.10'
- name: Install ansible-builder and ansible-navigator
run: pip install ansible-builder ansible-navigator
- name: Verify requirements
run: ansible-builder introspect --sanitize .
- name: Make sure galaxy.yml has version entry
run: >-
python -c
'import yaml ;
f = open("galaxy.yml", "rb") ;
data = yaml.safe_load(f) ;
f.close() ;
data["version"] = data.get("version") or "0.0.1" ;
f = open("galaxy.yml", "wb") ;
f.write(yaml.dump(data).encode("utf-8")) ;
f.close() ;
'
working-directory: ansible_collections/${{ env.NAMESPACE }}/${{ env.COLLECTION_NAME }}
- name: Build collection
run: |
ansible-galaxy collection build --output-path ../../../
working-directory: ansible_collections/${{ env.NAMESPACE }}/${{ env.COLLECTION_NAME }}
- name: Create files for building execution environment
run: |
COLLECTION_FILENAME="$(ls "${{ env.NAMESPACE }}-${{ env.COLLECTION_NAME }}"-*.tar.gz)"
# EE config
cat > execution-environment.yml <<EOF
---
version: 1
build_arg_defaults:
EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:${{ matrix.runner_tag }}'
dependencies:
galaxy: requirements.yml
EOF
echo "::group::execution-environment.yml"
cat execution-environment.yml
echo "::endgroup::"
# Requirements
cat > requirements.yml <<EOF
---
collections:
- name: ${COLLECTION_FILENAME}
type: file
EOF
echo "::group::requirements.yml"
cat requirements.yml
echo "::endgroup::"
- name: Build image based on ${{ matrix.runner_tag }}
run: |
mkdir -p context/_build/
cp "${{ env.NAMESPACE }}-${{ env.COLLECTION_NAME }}"-*.tar.gz context/_build/
ansible-builder build -v 3 -t test-ee:latest --container-runtime=docker
- name: Make /var/run/docker.sock accessible by everyone
run: sudo chmod a+rw /var/run/docker.sock
- name: Run basic tests
run: >
ansible-navigator run
--mode stdout
--pull-policy never
--set-environment-variable ANSIBLE_PRIVATE_ROLE_VARS=true
--container-engine docker
--container-options=-v --container-options=/var/run/docker.sock:/var/run/docker.sock
--execution-environment-image test-ee:latest
-v
all.yml
working-directory: ansible_collections/${{ env.NAMESPACE }}/${{ env.COLLECTION_NAME }}/tests/ee
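The "Make sure galaxy.yml has version entry" step in the removed workflow packs its logic into a semicolon-chained python -c one-liner. Written out as a plain script (requires PyYAML), the equivalent logic is:

```python
# Equivalent of the inline galaxy.yml version check above.
import yaml

with open("galaxy.yml", "rb") as f:
    data = yaml.safe_load(f)

# Fall back to a dummy version so the collection build is accepted.
data["version"] = data.get("version") or "0.0.1"

with open("galaxy.yml", "wb") as f:
    f.write(yaml.dump(data).encode("utf-8"))
```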

35
.github/workflows/nox.yml vendored Normal file

@ -0,0 +1,35 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: nox
'on':
push:
branches:
- main
- stable-*
pull_request:
# Run CI once per day (at 09:00 UTC)
schedule:
- cron: '0 9 * * *'
workflow_dispatch:
jobs:
nox:
uses: ansible-community/antsibull-nox/.github/workflows/reusable-nox-run.yml@main
with:
session-name: Run extra sanity tests
change-detection-in-prs: true
ansible-test:
uses: ansible-community/antsibull-nox/.github/workflows/reusable-nox-matrix.yml@main
with:
change-detection-in-prs: true
upload-codecov: true
upload-codecov-pr: false
upload-codecov-push: false
upload-codecov-schedule: true
max-ansible-core: "2.17"
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

8
.gitignore vendored

@ -1,5 +1,10 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
/tests/output/
/changelogs/.plugin-cache.yaml
/tests/integration/inventory
# Byte-compiled / optimized / DLL files
__pycache__/
@ -130,3 +135,6 @@ dmypy.json
# Pyre type checker
.pyre/
# PyCharm
.idea

7
.isort.cfg Normal file

@ -0,0 +1,7 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
[isort]
profile=black
lines_after_imports = 2

27
.mypy.ini Normal file

@ -0,0 +1,27 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
[mypy]
check_untyped_defs = True
disallow_untyped_defs = True
# strict = True -- only try to enable once everything (including dependencies!) is typed
strict_equality = True
strict_bytes = True
warn_redundant_casts = True
# warn_return_any = True
warn_unreachable = True
[mypy-ansible.*]
# ansible-core has partial typing information
follow_untyped_imports = True
[mypy-docker.*]
# Docker SDK for Python has partial typing information
follow_untyped_imports = True
[mypy-jsondiff.*]
# jsondiff has no typing information
ignore_missing_imports = True

598
.pylintrc Normal file

@ -0,0 +1,598 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
[MAIN]
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# all available extensions.
#enable-all-extensions=
# Specify a score threshold under which the program will exit with error.
fail-under=10
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=0
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.7
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
#variable-rgx=
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=160
# Maximum number of lines in a module.
max-module-lines=1000
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=old
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=raw-checker-failed,
bad-inline-option,
deprecated-pragma,
duplicate-code,
file-ignored,
import-outside-toplevel,
missing-class-docstring,
missing-function-docstring,
missing-module-docstring,
locally-disabled,
suppressed-message,
use-implicit-booleaness-not-comparison,
use-implicit-booleaness-not-comparison-to-string,
use-implicit-booleaness-not-comparison-to-zero,
superfluous-parens,
too-few-public-methods,
too-many-ancestors,
too-many-arguments,
too-many-boolean-expressions,
too-many-branches,
too-many-function-args,
too-many-instance-attributes,
too-many-lines,
too-many-locals,
too-many-nested-blocks,
too-many-positional-arguments,
too-many-public-methods,
too-many-return-statements,
too-many-statements,
ungrouped-imports,
useless-parent-delegation,
wrong-import-order,
wrong-import-position,
# To clean up:
fixme,
import-error, # TODO figure out why pylint cannot find the module
no-name-in-module, # TODO figure out why pylint cannot find the module
protected-access,
subprocess-popen-preexec-fn,
unexpected-keyword-arg,
unused-argument,
# Cannot remove yet due to inadequacy of rules
inconsistent-return-statements, # doesn't notice that fail_json() does not return
# Buggy implementation in pylint:
relative-beyond-top-level, # TODO
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries : You need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io

53
.yamllint Normal file

@ -0,0 +1,53 @@
---
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
extends: default
ignore: |
/changelogs/
rules:
line-length:
max: 300
level: error
document-start:
present: true
document-end: false
truthy:
level: error
allowed-values:
- 'true'
- 'false'
indentation:
spaces: 2
indent-sequences: true
key-duplicates: enable
trailing-spaces: enable
new-line-at-end-of-file: disable
hyphens:
max-spaces-after: 1
empty-lines:
max: 2
max-start: 0
max-end: 0
commas:
max-spaces-before: 0
min-spaces-after: 1
max-spaces-after: 1
colons:
max-spaces-before: 0
max-spaces-after: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 0
braces:
min-spaces-inside: 0
max-spaces-inside: 1
octal-values:
forbid-implicit-octal: true
forbid-explicit-octal: true
comments:
min-spaces-from-content: 1
comments-indentation: false

54
.yamllint-docs Normal file

@ -0,0 +1,54 @@
---
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
extends: default
ignore: |
/changelogs/
rules:
line-length:
max: 160
level: error
document-start:
present: false
document-end:
present: false
truthy:
level: error
allowed-values:
- 'true'
- 'false'
indentation:
spaces: 2
indent-sequences: true
key-duplicates: enable
trailing-spaces: enable
new-line-at-end-of-file: disable
hyphens:
max-spaces-after: 1
empty-lines:
max: 2
max-start: 0
max-end: 0
commas:
max-spaces-before: 0
min-spaces-after: 1
max-spaces-after: 1
colons:
max-spaces-before: 0
max-spaces-after: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 0
braces:
min-spaces-inside: 0
max-spaces-inside: 1
octal-values:
forbid-implicit-octal: true
forbid-explicit-octal: true
comments:
min-spaces-from-content: 1
comments-indentation: false

54
.yamllint-examples Normal file

@ -0,0 +1,54 @@
---
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
extends: default
ignore: |
/changelogs/
rules:
line-length:
max: 160
level: error
document-start:
present: true
document-end:
present: false
truthy:
level: error
allowed-values:
- 'true'
- 'false'
indentation:
spaces: 2
indent-sequences: true
key-duplicates: enable
trailing-spaces: enable
new-line-at-end-of-file: disable
hyphens:
max-spaces-after: 1
empty-lines:
max: 2
max-start: 0
max-end: 0
commas:
max-spaces-before: 0
min-spaces-after: 1
max-spaces-after: 1
colons:
max-spaces-before: 0
max-spaces-after: 1
brackets:
min-spaces-inside: 0
max-spaces-inside: 0
braces:
min-spaces-inside: 0
max-spaces-inside: 1
octal-values:
forbid-implicit-octal: true
forbid-explicit-octal: true
comments:
min-spaces-from-content: 1
comments-indentation: false

2107
CHANGELOG.md Normal file

File diff suppressed because it is too large

3
CHANGELOG.md.license Normal file

@ -0,0 +1,3 @@
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
SPDX-FileCopyrightText: Ansible Project

File diff suppressed because it is too large

3
CHANGELOG.rst.license Normal file

@ -0,0 +1,3 @@
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
SPDX-FileCopyrightText: Ansible Project

191
LICENSES/Apache-2.0.txt Normal file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

(new file)

@@ -0,0 +1 @@
../COPYING

(deleted file)

@@ -1,48 +0,0 @@
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.

README.md (modified file)

@@ -1,43 +1,95 @@
<!--
Copyright (c) Ansible Project
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
-->
# Docker Community Collection
[![Doc](https://img.shields.io/badge/docs-brightgreen.svg)](https://docs.ansible.com/ansible/latest/collections/community/docker/)
[![Documentation](https://img.shields.io/badge/docs-brightgreen.svg)](https://docs.ansible.com/ansible/devel/collections/community/docker/)
[![Build Status](https://dev.azure.com/ansible/community.docker/_apis/build/status/CI?branchName=main)](https://dev.azure.com/ansible/community.docker/_build?definitionId=25)
[![Nox CI](https://github.com/ansible-collections/community.docker/actions/workflows/nox.yml/badge.svg?branch=main)](https://github.com/ansible-collections/community.docker/actions)
[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.docker)](https://codecov.io/gh/ansible-collections/community.docker)
[![REUSE status](https://api.reuse.software/badge/github.com/ansible-collections/community.docker)](https://api.reuse.software/info/github.com/ansible-collections/community.docker)
This repo contains the `community.docker` Ansible Collection. The collection includes many modules and plugins to work with Docker.
Please note that this collection does **not** support Windows targets. The connection plugins included in this collection support Windows targets on a best-effort basis, but we are not testing this in CI.
## Code of Conduct
We follow [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) in all our interactions within this project.
If you encounter abusive behavior violating the [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html), please refer to the [policy violations](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html#policy-violations) section of the Code of Conduct for information on how to raise a complaint.
## Communication
* Join the Ansible forum:
* [Get Help](https://forum.ansible.com/c/help/6): get help or help others. Please add appropriate tags if you start new discussions, for example the `docker`, `docker-compose`, or `docker-swarm` tags.
* [Posts tagged with 'docker'](https://forum.ansible.com/tag/docker): subscribe to participate in Docker related conversations.
* [Posts tagged with 'docker-compose'](https://forum.ansible.com/tag/docker-compose): subscribe to participate in Docker Compose related conversations.
* [Posts tagged with 'docker-swarm'](https://forum.ansible.com/tag/docker-swarm): subscribe to participate in Docker Swarm related conversations.
* [Social Spaces](https://forum.ansible.com/c/chat/4): gather and interact with fellow enthusiasts.
* [News & Announcements](https://forum.ansible.com/c/news/5): track project-wide announcements including social events.
* The Ansible [Bullhorn newsletter](https://docs.ansible.com/ansible/devel/community/communication.html#the-bullhorn): used to announce releases and important changes.
For more information about communication, see the [Ansible communication guide](https://docs.ansible.com/ansible/devel/community/communication.html).
## Tested with Ansible
Tested with the current Ansible 2.9, ansible-base 2.10, ansible-core 2.11, ansible-core 2.12 and ansible-core 2.13 releases and the current development version of ansible-core. Ansible versions before 2.9.10 are not supported.
Please note that support for Ansible 2.9 and ansible-base 2.10 has been deprecated and will be dropped from community.docker 3.0.0 on.
Tested with the current ansible-core 2.17, ansible-core 2.18, and ansible-core 2.19 releases, and the current development version of ansible-core. ansible-core versions before 2.17.0 are not supported.
## External requirements
Most modules and plugins require the [Docker SDK for Python](https://pypi.org/project/docker/). For Python 2.6 support, use [the deprecated docker-py library](https://pypi.org/project/docker-py/) instead.
Some modules and plugins require the Docker CLI or other external programs. Some require the [Docker SDK for Python](https://pypi.org/project/docker/) and some use [requests](https://pypi.org/project/requests/) to directly communicate with the Docker daemon API. All modules and plugins require Python 2.7 or later. Python 2.6 is no longer supported; use community.docker 2.x.y if you need to use Python 2.6.
Please note that Python 2.6 support has been deprecated and will be dropped from community.docker 3.0.0 on.
Installing the Docker SDK for Python also installs the requirements for the modules and plugins that use `requests`. If you want to directly install the Python libraries instead of the SDK, you need the following ones:
Both libraries cannot be installed at the same time. If you accidentally did install them simultaneously, you have to uninstall *both* before re-installing one of them.
- [requests](https://pypi.org/project/requests/);
- [pywin32](https://pypi.org/project/pywin32/) when using named pipes on Windows with the Windows 32 API;
- [paramiko](https://pypi.org/project/paramiko/) when using SSH to connect to the Docker daemon with `use_ssh_client=false`;
- [pyOpenSSL](https://pypi.org/project/pyOpenSSL/) when using TLS to connect to the Docker daemon;
- [backports.ssl_match_hostname](https://pypi.org/project/backports.ssl_match_hostname/) when using TLS to connect to the Docker daemon on Python 2.
If you have Docker SDK for Python < 2.0.0 installed ([docker-py](https://pypi.org/project/docker-py/)), you can still use it for modules that support it, though we recommend uninstalling it and then installing [docker](https://pypi.org/project/docker/), the Docker SDK for Python >= 2.0.0. Note that both libraries cannot be installed at the same time. If you accidentally did install them simultaneously, you have to uninstall *both* before re-installing one of them.
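If you manage the target hosts with Ansible anyway, a task like the following can make sure the SDK is present before any Docker modules run (a minimal sketch, assuming `pip` is available for the Python interpreter that Ansible uses on the target):

```yaml
- name: Ensure the Docker SDK for Python is installed
  ansible.builtin.pip:
    name: docker
```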
## Collection Documentation
Browsing the [**latest** collection documentation](https://docs.ansible.com/ansible/latest/collections/community/docker) will show docs for the _latest version released in the Ansible package_, not the latest version of the collection released on Galaxy.
Browsing the [**devel** collection documentation](https://docs.ansible.com/ansible/devel/collections/community/docker) shows docs for the _latest version released on Galaxy_.
We also separately publish [**latest commit** collection documentation](https://ansible-collections.github.io/community.docker/branch/main/) which shows docs for the _latest commit in the `main` branch_.
If you use the Ansible package and do not update collections independently, use **latest**. If you install or update this collection directly from Galaxy, use **devel**. If you are looking to contribute, use **latest commit**.
## Included content
* Connection plugins:
- community.docker.docker: use Docker containers as remotes
- community.docker.docker: use Docker containers as remotes using the Docker CLI program
- community.docker.docker_api: use Docker containers as remotes using the Docker API
- community.docker.nsenter: execute commands on the host running the controller container
* Inventory plugins:
- community.docker.docker_containers: dynamic inventory plugin for Docker containers
- community.docker.docker_machine: collect Docker machines as inventory
- community.docker.docker_swarm: collect Docker Swarm nodes as inventory
* Modules:
* Docker:
- community.docker.docker_container: manage Docker containers
- community.docker.docker_container_copy_into: copy a file into a Docker container
- community.docker.docker_container_exec: run commands in Docker containers
- community.docker.docker_container_info: retrieve information on Docker containers
- community.docker.docker_host_info: retrieve information on the Docker daemon
- community.docker.docker_image: manage Docker images
- community.docker.docker_image_build: build Docker images using Docker buildx
- community.docker.docker_image_export: export (archive) Docker images
- community.docker.docker_image_info: retrieve information on Docker images
- community.docker.docker_image_load: load Docker images from archives
- community.docker.docker_image_pull: pull Docker images from registries
- community.docker.docker_image_push: push Docker images to registries
- community.docker.docker_image_remove: remove Docker images
- community.docker.docker_image_tag: tag Docker images with new names and/or tags
- community.docker.docker_login: log in and out to/from registries
- community.docker.docker_network: manage Docker networks
- community.docker.docker_network_info: retrieve information on Docker networks
@@ -46,7 +98,10 @@ Both libraries cannot be installed at the same time. If you accidentally did ins
- community.docker.docker_volume: manage Docker volumes
- community.docker.docker_volume_info: retrieve information on Docker volumes
* Docker Compose:
- community.docker.docker_compose: manage Docker Compose files
- community.docker.docker_compose_v2: manage Docker Compose files (Docker compose CLI plugin)
- community.docker.docker_compose_v2_exec: run command in a container of a Compose service
- community.docker.docker_compose_v2_pull: pull a Docker compose project
- community.docker.docker_compose_v2_run: run command in a new container of a Compose service
* Docker Swarm:
- community.docker.docker_config: manage configurations
- community.docker.docker_node: manage Docker Swarm nodes
@@ -65,7 +120,7 @@ Both libraries cannot be installed at the same time. If you accidentally did ins
## Using this collection
Before using the General community collection, you need to install the collection with the `ansible-galaxy` CLI:
Before using the Docker community collection, you need to install the collection with the `ansible-galaxy` CLI:
ansible-galaxy collection install community.docker
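
If you track collection dependencies in a `requirements.yml` file instead, the equivalent entry could look like this (a sketch; the version range is illustrative):

```yaml
---
collections:
  - name: community.docker
    # Optional: pin to a major version range that you have tested
    version: ">=5.0.0,<6.0.0"
```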
@@ -86,7 +141,7 @@ You can find more information in the [developer guide for collections](https://d
## Release notes
See the [changelog](https://github.com/ansible-collections/community.docker/tree/main/CHANGELOG.rst).
See the [changelog](https://github.com/ansible-collections/community.docker/tree/main/CHANGELOG.md).
## More information
@@ -100,6 +155,10 @@ See the [changelog](https://github.com/ansible-collections/community.docker/tree
## Licensing
GNU General Public License v3.0 or later.
This collection is primarily licensed and distributed as a whole under the GNU General Public License v3.0 or later.
See [COPYING](https://www.gnu.org/licenses/gpl-3.0.txt) to see the full text.
See [LICENSES/GPL-3.0-or-later.txt](https://github.com/ansible-collections/community.docker/blob/main/COPYING) for the full text.
Parts of the collection are licensed under the [Apache 2.0 license](https://github.com/ansible-collections/community.docker/blob/main/LICENSES/Apache-2.0.txt). This mostly applies to files vendored from the [Docker SDK for Python](https://github.com/docker/docker-py/).
All files have a machine-readable `SPDX-License-Identifier:` comment denoting their respective license(s) or an equivalent entry in an accompanying `.license` file. Only changelog fragments (which will not be part of a release) are covered by a blanket statement in `REUSE.toml`. This conforms to the [REUSE specification](https://reuse.software/spec/).

REUSE.toml (new file, 11 lines)

@@ -0,0 +1,11 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
version = 1
[[annotations]]
path = "changelogs/fragments/**"
precedence = "aggregate"
SPDX-FileCopyrightText = "Ansible Project"
SPDX-License-Identifier = "GPL-3.0-or-later"

antsibull-nox.toml (new file, 265 lines)

@@ -0,0 +1,265 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
[collection_sources]
"ansible.posix" = "git+https://github.com/ansible-collections/ansible.posix.git,main"
"community.general" = "git+https://github.com/ansible-collections/community.general.git,main"
"community.internal_test_tools" = "git+https://github.com/ansible-collections/community.internal_test_tools.git,main"
"community.library_inventory_filtering_v1" = "git+https://github.com/ansible-collections/community.library_inventory_filtering.git,stable-1"
[vcs]
vcs = "git"
development_branch = "main"
stable_branches = [ "stable-*" ]
[sessions]
[sessions.lint]
run_isort = true
isort_config = ".isort.cfg"
run_black = true
run_ruff_check = true
ruff_check_config = "ruff.toml"
run_flake8 = true
flake8_config = ".flake8"
run_pylint = true
pylint_rcfile = ".pylintrc"
run_yamllint = true
yamllint_config = ".yamllint"
yamllint_config_plugins = ".yamllint-docs"
yamllint_config_plugins_examples = ".yamllint-examples"
run_mypy = true
mypy_ansible_core_package = "ansible-core>=2.19.0"
mypy_config = ".mypy.ini"
mypy_extra_deps = [
"docker",
"paramiko",
"urllib3",
"requests",
"types-mock",
"types-paramiko",
"types-pywin32",
"types-PyYAML",
"types-requests",
]
[sessions.docs_check]
validate_collection_refs = "all"
codeblocks_restrict_types = [
"ansible-output",
"console",
"yaml",
"yaml+jinja",
]
codeblocks_restrict_type_exact_case = true
codeblocks_allow_without_type = false
codeblocks_allow_literal_blocks = false
[sessions.license_check]
[sessions.extra_checks]
run_no_unwanted_files = true
no_unwanted_files_module_extensions = [".py"]
no_unwanted_files_yaml_extensions = [".yml"]
run_action_groups = true
run_no_trailing_whitespace = true
run_avoid_characters = true
[[sessions.extra_checks.action_groups_config]]
name = "docker"
pattern = "^.*$"
exclusions = [
"current_container_facts",
"docker_context_info",
]
doc_fragment = "community.docker._attributes.actiongroup_docker"
[[sessions.extra_checks.avoid_character_group]]
name = "tab"
regex = "\\x09"
skip_directories = [
"tests/images/",
]
[sessions.build_import_check]
run_galaxy_importer = true
[sessions.ansible_test_sanity]
include_devel = true
[sessions.ansible_test_units]
include_devel = true
[sessions.ansible_test_integration]
session_name_template = "ansible-test-integration-{ansible_core}{dash_docker_short}{dash_remote}{dash_python_version}{dash_target_dashized}"
display_name_template = "main+Ⓐ{ansible_core}{plus_docker_short}{plus_remote}{plus_py_python_version}{plus_target}{plus_force_docker_sdk_for_python_dev}{plus_force_docker_sdk_for_python_pypi}"
description_template = "Run main integration tests with ansible-core {ansible_core}{comma_docker_short}{comma_remote}{comma_py_python_version}{comma_target}{comma_force_docker_sdk_for_python_dev}{comma_force_docker_sdk_for_python_pypi}"
[sessions.ansible_test_integration.ansible_vars]
force_docker_sdk_for_python_dev = { type = "value", value = false, template_value = "" }
force_docker_sdk_for_python_pypi = { type = "value", value = false, template_value = "" }
##################################################################################################
# Ansible-core 2.17:
[[sessions.ansible_test_integration.groups]]
session_name = "ansible-test-integration-2.17"
description = "Meta session for running all ansible-test-integration-2.17-* sessions."
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.17"
target = [ "azp/4/", "azp/5/" ]
docker = [ "fedora39", "ubuntu2004", "alpine319" ]
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.17"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [ "rhel/9.3" ]
# Ansible-core 2.18:
[[sessions.ansible_test_integration.groups]]
session_name = "ansible-test-integration-2.18"
description = "Meta session for running all ansible-test-integration-2.18-* sessions."
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.18"
target = [ "azp/4/", "azp/5/" ]
docker = [ "fedora40", "ubuntu2204", "alpine320" ]
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.18"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [ "rhel/9.4" ]
# Ansible-core 2.19:
[[sessions.ansible_test_integration.groups]]
session_name = "ansible-test-integration-2.19"
description = "Meta session for running all ansible-test-integration-2.19-* sessions."
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.19"
target = [ "azp/4/", "azp/5/" ]
docker = [ "fedora41", "alpine321" ]
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.19"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [ "rhel/9.5", "ubuntu/22.04" ]
# Ansible-core 2.20:
[[sessions.ansible_test_integration.groups]]
session_name = "ansible-test-integration-2.20"
description = "Meta session for running all ansible-test-integration-2.20-* sessions."
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.20"
target = [ "azp/4/", "azp/5/" ]
docker = [ "fedora42", "alpine322" ]
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "2.20"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [ "rhel/9.6" ]
# Ansible-core devel:
[[sessions.ansible_test_integration.groups]]
session_name = "ansible-test-integration-devel"
description = "Meta session for running all ansible-test-integration-devel-* sessions."
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/4/", "azp/5/" ]
docker = [ "fedora42", "ubuntu2204", "ubuntu2404", "alpine322" ]
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/4/", "azp/5/" ]
python_version = "3.9"
docker = "quay.io/ansible-community/test-image:debian-bullseye"
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/4/", "azp/5/" ]
python_version = "3.11"
docker = "quay.io/ansible-community/test-image:debian-bookworm"
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/4/", "azp/5/" ]
python_version = "3.13"
docker = "quay.io/ansible-community/test-image:debian-13-trixie"
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/4/", "azp/5/" ]
python_version = "3.13"
docker = "quay.io/ansible-community/test-image:archlinux"
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [ "rhel/9.6" ]
ansible_vars = { force_docker_sdk_for_python_dev = { type = "value", value = true, template_value = "sdk-dev-latest" } }
[[sessions.ansible_test_integration.groups.sessions]]
ansible_core = "devel"
target = [ "azp/1/", "azp/2/", "azp/3/", "azp/4/", "azp/5/" ]
remote = [
"rhel/10.0",
# For some reason, Ubuntu 24.04 is *extremely* slow compared to RHEL 9.6
# "ubuntu/24.04",
]
##################################################################################################
[sessions.ansible_lint]
ansible_lint_package = [
"ansible-lint",
"ansible-compat < 25.8.2",
]
[[sessions.ee_check.execution_environments]]
name = "devel-ubi-9"
description = "ansible-core devel @ RHEL UBI 9"
test_playbooks = ["tests/ee/all.yml"]
config.images.base_image.name = "docker.io/redhat/ubi9:latest"
config.dependencies.ansible_core.package_pip = "https://github.com/ansible/ansible/archive/devel.tar.gz"
config.dependencies.ansible_runner.package_pip = "ansible-runner"
config.dependencies.python_interpreter.package_system = "python3.12 python3.12-pip python3.12-wheel python3.12-cryptography"
config.dependencies.python_interpreter.python_path = "/usr/bin/python3.12"
runtime_environment = {"ANSIBLE_PRIVATE_ROLE_VARS" = "true"}
runtime_container_options = [
# Mount Docker socket into the container so we can talk to Docker outside the container
"-v",
"/var/run/docker.sock:/var/run/docker.sock",
# Need to be root so we can access /var/run/docker.sock, which usually isn't accessible by the user,
# but only by the group the user is in (but that group membership isn't there in the container)
"--user",
"0",
]
[[sessions.ee_check.execution_environments]]
name = "2.17-rocky-9"
description = "ansible-core 2.17 @ Rocky Linux 9"
test_playbooks = ["tests/ee/all.yml"]
config.images.base_image.name = "quay.io/rockylinux/rockylinux:9"
config.dependencies.ansible_core.package_pip = "https://github.com/ansible/ansible/archive/stable-2.17.tar.gz"
config.dependencies.ansible_runner.package_pip = "ansible-runner"
config.dependencies.python_interpreter.package_system = "python3.11 python3.11-pip python3.11-wheel python3.11-cryptography"
config.dependencies.python_interpreter.python_path = "/usr/bin/python3.11"
runtime_environment = {"ANSIBLE_PRIVATE_ROLE_VARS" = "true"}
runtime_container_options = [
# Mount Docker socket into the container so we can talk to Docker outside the container
"-v",
"/var/run/docker.sock:/var/run/docker.sock",
# Need to be root so we can access /var/run/docker.sock, which usually isn't accessible by the user,
# but only by the group the user is in (but that group membership isn't there in the container)
"--user",
"0",
]

File diff suppressed because it is too large.

(new file)

@@ -0,0 +1,3 @@
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
SPDX-FileCopyrightText: Ansible Project

changelogs/config.yaml (modified file)

@@ -1,29 +1,43 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
changelog_filename_template: ../CHANGELOG.rst
changelog_filename_version_depth: 0
changes_file: changelog.yaml
changes_format: combined
ignore_other_fragment_extensions: true
keep_fragments: false
mention_ancestor: true
new_plugins_after_name: removed_features
notesdir: fragments
output_formats:
  - md
  - rst
prelude_section_name: release_summary
prelude_section_title: Release Summary
sections:
  - - major_changes
    - Major Changes
  - - minor_changes
    - Minor Changes
  - - breaking_changes
    - Breaking Changes / Porting Guide
  - - deprecated_features
    - Deprecated Features
  - - removed_features
    - Removed Features (previously deprecated)
  - - security_fixes
    - Security Fixes
  - - bugfixes
    - Bugfixes
  - - known_issues
    - Known Issues
title: Docker Community Collection
trivial_section_name: trivial
use_fqcn: true
add_plugin_period: true
changelog_nice_yaml: true
changelog_sort: version
vcs: auto

docs/docsite/config.yml (new file, 18 lines)

@@ -0,0 +1,18 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# The following `.. envvar::` directives are defined in the extra docsite docs:
envvar_directives:
  - DOCKER_HOST
  - DOCKER_API_VERSION
  - DOCKER_TIMEOUT
  - DOCKER_CERT_PATH
  - DOCKER_SSL_VERSION
  - DOCKER_TLS
  - DOCKER_TLS_HOSTNAME
  - DOCKER_TLS_VERIFY

changelog:
  write_changelog: true

(modified file)

@@ -1,4 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
sections:
  - title: Scenario Guide
    toctree:

(modified file)

@@ -1,10 +1,20 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
edit_on_github:
  repository: ansible-collections/community.docker
  branch: main
  path_prefix: ''

extra_links:
  - description: Ask for help (Docker)
    url: https://forum.ansible.com/tags/c/help/6/none/docker
  - description: Ask for help (Docker Compose)
    url: https://forum.ansible.com/tags/c/help/6/none/docker-compose
  - description: Ask for help (Docker Swarm)
    url: https://forum.ansible.com/tags/c/help/6/none/docker-swarm
  - description: Submit a bug report
    url: https://github.com/ansible-collections/community.docker/issues/new?assignees=&labels=&template=bug_report.md
  - description: Request a feature

@@ -18,6 +28,16 @@ communication:
    - topic: General usage and support questions
      network: Libera
      channel: '#ansible'
  mailing_lists:
    - topic: Ansible Project List
      url: https://groups.google.com/g/ansible-project
  forums:
    - topic: "Ansible Forum: General usage and support questions"
      # The following URL directly points to the "Get Help" section
      url: https://forum.ansible.com/c/help/6/none
    - topic: "Ansible Forum: Discussions about Docker"
      # The following URL directly points to the "docker" tag
      url: https://forum.ansible.com/tag/docker
    - topic: "Ansible Forum: Discussions about Docker Compose"
      # The following URL directly points to the "docker-compose" tag
      url: https://forum.ansible.com/tag/docker-compose
    - topic: "Ansible Forum: Discussions about Docker Swarm"
      # The following URL directly points to the "docker-swarm" tag
      url: https://forum.ansible.com/tag/docker-swarm

(modified file)

@@ -1,9 +1,14 @@
..
Copyright (c) Ansible Project
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
.. _ansible_collections.community.docker.docsite.scenario_guide:
Docker Guide
============
The `community.docker collection <https://galaxy.ansible.com/community/docker>`_ offers several modules and plugins for orchestrating Docker containers and Docker Swarm.
The `community.docker collection <https://galaxy.ansible.com/ui/repo/published/community/docker/>`_ offers several modules and plugins for orchestrating Docker containers and Docker Swarm.
.. contents::
   :local:
@@ -13,31 +18,23 @@ The `community.docker collection <https://galaxy.ansible.com/community/docker>`_
Requirements
------------
Most of the modules and plugins in community.docker require the `Docker SDK for Python <https://docker-py.readthedocs.io/en/stable/>`_. The SDK needs to be installed on the machines where the modules and plugins are executed, and for the Python version(s) with which the modules and plugins are executed. You can use the :ref:`community.general.python_requirements_info module <ansible_collections.community.general.python_requirements_info_module>` to make sure that the Docker SDK for Python is installed on the correct machine and for the Python version used by Ansible.
Most of the modules and plugins in community.docker require the `Docker SDK for Python <https://docker-py.readthedocs.io/en/stable/>`_. The SDK needs to be installed on the machines where the modules and plugins are executed, and for the Python version(s) with which the modules and plugins are executed. You can use the :ansplugin:`community.general.python_requirements_info module <community.general.python_requirements_info#module>` to make sure that the Docker SDK for Python is installed on the correct machine and for the Python version used by Ansible.
Note that plugins (inventory plugins and connection plugins) are always executed in the context of Ansible itself. If you use a plugin that requires the Docker SDK for Python, you need to install it on the machine running ``ansible`` or ``ansible-playbook`` and for the same Python interpreter used by Ansible. To see which Python is used, run ``ansible --version``.
You can install the Docker SDK for Python for Python 3.6 or later as follows:
.. code-block:: bash
.. code-block:: console
$ pip install docker
For Python 2.7, you need to use a version between 2.0.0 and 4.4.4, since the Python package for Docker removed support for Python 2.7 in version 5.0.0. You can install the specific version of the Docker SDK for Python as follows:
.. code-block:: bash
.. code-block:: console
$ pip install 'docker==4.4.4'
For Python 2.6, you need a version before 2.0.0. For these versions, the SDK was called ``docker-py``, so you need to install it as follows:
.. code-block:: bash
$ pip install 'docker-py>=1.10.0'
Please install only one of ``docker`` or ``docker-py``. Installing both will result in a broken installation. If this happens, Ansible will detect it and inform you about it. If that happens, you must uninstall both and reinstall the correct version.
If in doubt, always install ``docker`` and never ``docker-py``.
Note that the Docker SDK for Python was called ``docker-py`` on PyPI before version 2.0.0. Please avoid installing this really old version, and make sure not to install both ``docker`` and ``docker-py``. Installing both will result in a broken installation. If this happens, Ansible will detect it and inform you about it. If that happens, you must uninstall both and reinstall the correct version. If in doubt, always install ``docker`` and never ``docker-py``.
Connecting to the Docker API
@@ -52,7 +49,7 @@ Parameters
Most plugins and modules can be configured by the following parameters:
docker_host
The URL or Unix socket path used to connect to the Docker API. Defaults to ``unix://var/run/docker.sock``. To connect to a remote host, provide the TCP connection string (for example: ``tcp://192.0.2.23:2376``). If TLS is used to encrypt the connection to the API, then the module will automatically replace 'tcp' in the connection URL with 'https'.
The URL or Unix socket path used to connect to the Docker API. Defaults to ``unix:///var/run/docker.sock``. To connect to a remote host, provide the TCP connection string (for example: ``tcp://192.0.2.23:2376``). If TLS is used to encrypt the connection to the API, then the module will automatically replace ``tcp`` in the connection URL with ``https``.
api_version
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python installed.
@@ -66,7 +63,7 @@ Most plugins and modules can be configured by the following parameters:
validate_certs
Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server. Default is ``false``.
cacert_path
ca_path
Use a CA certificate when performing server verification by providing the path to a CA certificate file.
cert_path
@@ -81,6 +78,58 @@ Most plugins and modules can be configured by the following parameters:
ssl_version
Provide a valid SSL version number. The default value is determined by the Docker SDK for Python.
This option is not available for the CLI based plugins. It is mainly needed for legacy systems and should be avoided.
Module default group
....................
To avoid having to specify common parameters for all the modules in every task, you can use the ``community.docker.docker`` :ref:`module defaults group <module_defaults_groups>`, or its short name ``docker``.
.. note::

   Module default groups only work for modules, not for plugins (connection and inventory plugins).
The following example shows how the module default group can be used in a playbook:
.. code-block:: yaml+jinja

   ---
   - name: Pull image and start the container
     hosts: localhost
     gather_facts: false
     module_defaults:
       group/community.docker.docker:
         # Select Docker Daemon on other host
         docker_host: tcp://192.0.2.23:2376
         # Configure TLS
         tls: true
         validate_certs: true
         tls_hostname: docker.example.com
         ca_path: /path/to/cacert.pem
         # Increase timeout
         timeout: 120
     tasks:
       - name: Pull image
         community.docker.docker_image_pull:
           name: python
           tag: 3.12

       - name: Start container
         community.docker.docker_container:
           cleanup: true
           command: python --version
           detach: false
           image: python:3.12
           name: my-python-container
           output_logs: true
         register: output

       - name: Show output
         ansible.builtin.debug:
           msg: "{{ output.container.Output }}"
Here the two ``community.docker`` tasks will use the options set for the module defaults group.
Environment variables
.....................
@@ -89,27 +138,38 @@ You can also control how the plugins and modules connect to the Docker API by se
For plugins, they have to be set for the environment Ansible itself runs in. For modules, they have to be set for the environment the modules are executed in. For modules running on remote machines, the environment variables have to be set on that machine for the user that executes the modules.
DOCKER_HOST
The URL or Unix socket path used to connect to the Docker API.
.. envvar:: DOCKER_HOST
DOCKER_API_VERSION
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported
by docker-py.
The URL or Unix socket path used to connect to the Docker API.
DOCKER_TIMEOUT
The maximum amount of time in seconds to wait on a response from the API.
.. envvar:: DOCKER_API_VERSION
DOCKER_CERT_PATH
Path to the directory containing the client certificate, client key and CA certificate.
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python.
DOCKER_SSL_VERSION
Provide a valid SSL version number.
.. envvar:: DOCKER_TIMEOUT
DOCKER_TLS
Secure the connection to the API by using TLS without verifying the authenticity of the Docker Host.
The maximum amount of time in seconds to wait on a response from the API.
DOCKER_TLS_VERIFY
Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
.. envvar:: DOCKER_CERT_PATH
Path to the directory containing the client certificate, client key and CA certificate.
.. envvar:: DOCKER_SSL_VERSION
Provide a valid SSL version number.
.. envvar:: DOCKER_TLS
Secure the connection to the API by using TLS without verifying the authenticity of the Docker Host.
.. envvar:: DOCKER_TLS_HOSTNAME
When verifying the authenticity of the Docker Host, uses this hostname to compare to the host's certificate.
.. envvar:: DOCKER_TLS_VERIFY
Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
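
As an illustration, these variables can also be set per task with the ``environment`` keyword so that the module process sees them (a minimal sketch; the daemon address and certificate path are placeholders):

.. code-block:: yaml

   - name: Retrieve daemon information from a remote, TLS-protected host
     community.docker.docker_host_info:
     environment:
       DOCKER_HOST: tcp://192.0.2.23:2376
       DOCKER_TLS_VERIFY: "1"
       DOCKER_CERT_PATH: /path/to/certs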
Plain Docker daemon: images, networks, volumes, and containers
@@ -118,70 +178,114 @@ Plain Docker daemon: images, networks, volumes, and containers
For working with a plain Docker daemon, that is without Swarm, there are connection plugins, an inventory plugin, and several modules available:
docker connection plugin
The :ref:`community.docker.docker connection plugin <ansible_collections.community.docker.docker_connection>` uses the Docker CLI utility to connect to Docker containers and execute modules in them. It essentially wraps ``docker exec`` and ``docker cp``. This connection plugin is supported by the :ref:`ansible.posix.synchronize module <ansible_collections.ansible.posix.synchronize_module>`.
The :ansplugin:`community.docker.docker connection plugin <community.docker.docker#connection>` uses the Docker CLI utility to connect to Docker containers and execute modules in them. It essentially wraps ``docker exec`` and ``docker cp``. This connection plugin is supported by the :ansplugin:`ansible.posix.synchronize module <ansible.posix.synchronize#module>`.
docker_api connection plugin
The :ref:`community.docker.docker_api connection plugin <ansible_collections.community.docker.docker_api_connection>` talks directly to the Docker daemon to connect to Docker containers and execute modules in them.
The :ansplugin:`community.docker.docker_api connection plugin <community.docker.docker_api#connection>` talks directly to the Docker daemon to connect to Docker containers and execute modules in them.
docker_containers inventory plugin
The :ref:`community.docker.docker_containers inventory plugin <ansible_collections.community.docker.docker_containers_inventory>` allows you to dynamically add Docker containers from a Docker Daemon to your Ansible inventory. See :ref:`dynamic_inventory` for details on dynamic inventories.
The :ansplugin:`community.docker.docker_containers inventory plugin <community.docker.docker_containers#inventory>` allows you to dynamically add Docker containers from a Docker Daemon to your Ansible inventory. See :ref:`dynamic_inventory` for details on dynamic inventories.
The `docker inventory script <https://github.com/ansible-community/contrib-scripts/blob/main/inventory/docker.py>`_ is deprecated. Please use the inventory plugin instead. The inventory plugin has several compatibility options. If you need to collect Docker containers from multiple Docker daemons, you need to add every Docker daemon as an individual inventory source. A minimal configuration file for this plugin is sketched after this list.
docker_host_info module
The :ref:`community.docker.docker_host_info module <ansible_collections.community.docker.docker_host_info_module>` allows you to retrieve information on a Docker daemon, such as all containers, images, volumes, networks and so on.
The :ansplugin:`community.docker.docker_host_info module <community.docker.docker_host_info#module>` allows you to retrieve information on a Docker daemon, such as all containers, images, volumes, networks and so on.
docker_login module
The :ref:`community.docker.docker_login module <ansible_collections.community.docker.docker_login_module>` allows you to log in and out of a remote registry, such as Docker Hub or a private registry. It provides similar functionality to the ``docker login`` and ``docker logout`` CLI commands.
The :ansplugin:`community.docker.docker_login module <community.docker.docker_login#module>` allows you to log in and out of a remote registry, such as Docker Hub or a private registry. It provides similar functionality to the ``docker login`` and ``docker logout`` CLI commands.
docker_prune module
The :ref:`community.docker.docker_prune module <ansible_collections.community.docker.docker_prune_module>` allows you to prune no longer needed containers, images, volumes and so on. It provides similar functionality to the ``docker prune`` CLI command.
The :ansplugin:`community.docker.docker_prune module <community.docker.docker_prune#module>` allows you to prune no longer needed containers, images, volumes and so on. It provides similar functionality to the ``docker prune`` CLI command.
docker_image module
The :ref:`community.docker.docker_image module <ansible_collections.community.docker.docker_image_module>` provides full control over images, including: build, pull, push, tag and remove.
The :ansplugin:`community.docker.docker_image module <community.docker.docker_image#module>` provides full control over images, including: build, pull, push, tag and remove.
docker_image_build
The :ansplugin:`community.docker.docker_image_build module <community.docker.docker_image_build#module>` allows you to build a Docker image using Docker buildx.
docker_image_export module
The :ansplugin:`community.docker.docker_image_export module <community.docker.docker_image_export#module>` allows you to export (archive) images.
docker_image_info module
The :ref:`community.docker.docker_image_info module <ansible_collections.community.docker.docker_image_info_module>` allows you to list and inspect images.
The :ansplugin:`community.docker.docker_image_info module <community.docker.docker_image_info#module>` allows you to list and inspect images.
docker_image_load
The :ansplugin:`community.docker.docker_image_load module <community.docker.docker_image_load#module>` allows you to import one or multiple images from tarballs.
docker_image_pull
The :ansplugin:`community.docker.docker_image_pull module <community.docker.docker_image_pull#module>` allows you to pull a Docker image from a registry.
docker_image_push
The :ansplugin:`community.docker.docker_image_push module <community.docker.docker_image_push#module>` allows you to push a Docker image to a registry.
docker_image_remove
The :ansplugin:`community.docker.docker_image_remove module <community.docker.docker_image_remove#module>` allows you to remove and/or untag a Docker image from the Docker daemon.
docker_image_tag
The :ansplugin:`community.docker.docker_image_tag module <community.docker.docker_image_tag#module>` allows you to tag a Docker image with additional names and/or tags.
docker_network module
The :ref:`community.docker.docker_network module <ansible_collections.community.docker.docker_network_module>` provides full control over Docker networks.
The :ansplugin:`community.docker.docker_network module <community.docker.docker_network#module>` provides full control over Docker networks.
docker_network_info module
The :ref:`community.docker.docker_network_info module <ansible_collections.community.docker.docker_network_info_module>` allows you to inspect Docker networks.
The :ansplugin:`community.docker.docker_network_info module <community.docker.docker_network_info#module>` allows you to inspect Docker networks.
docker_volume_info module
The :ref:`community.docker.docker_volume_info module <ansible_collections.community.docker.docker_volume_info_module>` provides full control over Docker volumes.
The :ansplugin:`community.docker.docker_volume_info module <community.docker.docker_volume_info#module>` allows you to inspect Docker volumes.
docker_volume module
The :ref:`community.docker.docker_volume module <ansible_collections.community.docker.docker_volume_module>` allows you to inspect Docker volumes.
The :ansplugin:`community.docker.docker_volume module <community.docker.docker_volume#module>` provides full control over Docker volumes.
docker_container module
The :ref:`community.docker.docker_container module <ansible_collections.community.docker.docker_container_module>` manages the container lifecycle by providing the ability to create, update, stop, start and destroy a Docker container.
The :ansplugin:`community.docker.docker_container module <community.docker.docker_container#module>` manages the container lifecycle by providing the ability to create, update, stop, start and destroy a Docker container.
docker_container_copy_into
The :ansplugin:`community.docker.docker_container_copy_into module <community.docker.docker_container_copy_into#module>` allows you to copy files from the control node into a container.
docker_container_exec
The :ansplugin:`community.docker.docker_container_exec module <community.docker.docker_container_exec#module>` allows you to execute commands in a running container.
docker_container_info module
The :ref:`community.docker.docker_container_info module <ansible_collections.community.docker.docker_container_info_module>` allows you to inspect a Docker container.
The :ansplugin:`community.docker.docker_container_info module <community.docker.docker_container_info#module>` allows you to inspect a Docker container.
docker_plugin
The :ansplugin:`community.docker.docker_plugin module <community.docker.docker_plugin#module>` allows you to manage Docker plugins.
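
To give a concrete starting point for the docker_containers inventory plugin mentioned above, a minimal inventory source might look like this (a sketch; the file name must end in ``docker.yml`` or ``docker.yaml``, and the daemon address is a placeholder):

.. code-block:: yaml

   # myhosts.docker.yml
   plugin: community.docker.docker_containers
   # Optional: talk to a remote daemon instead of the local socket
   docker_host: tcp://192.0.2.23:2376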
Docker Compose
--------------
The :ref:`community.docker.docker_compose module <ansible_collections.community.docker.docker_compose_module>`
allows you to use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm.
Supports compose versions 1 and 2.
Docker Compose v2
.................
In addition to the Docker SDK for Python, you need to install `docker-compose <https://github.com/docker/compose>`_ on the remote machines to use the module.
There are several modules for working with Docker Compose projects:
community.docker.docker_compose_v2
The :ansplugin:`community.docker.docker_compose_v2 module <community.docker.docker_compose_v2#module>` allows you to use your existing Docker Compose files to orchestrate containers on a single Docker daemon or on Swarm.
community.docker.docker_compose_v2_exec
The :ansplugin:`community.docker.docker_compose_v2_exec module <community.docker.docker_compose_v2_exec#module>` allows you to run a command in a container of Docker Compose projects.
community.docker.docker_compose_v2_pull
The :ansplugin:`community.docker.docker_compose_v2_pull module <community.docker.docker_compose_v2_pull#module>` allows you to pull Docker Compose projects.
community.docker.docker_compose_v2_run
The :ansplugin:`community.docker.docker_compose_v2_run module <community.docker.docker_compose_v2_run#module>` allows you to run a command in a new container of a Docker Compose project.
These modules use the Docker CLI ``compose`` plugin (``docker compose``) and thus need access to the Docker CLI tool. No requirements beyond the CLI tool and its Docker Compose plugin are needed.
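
For example, a task that brings up an existing Compose project could be sketched as follows (the project path is a placeholder):

.. code-block:: yaml

   - name: Bring up the Compose project in /opt/myapp
     community.docker.docker_compose_v2:
       project_src: /opt/myapp
       state: present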
Docker Machine
--------------
The :ref:`community.docker.docker_machine inventory plugin <ansible_collections.community.docker.docker_machine_inventory>` allows you to dynamically add Docker Machine hosts to your Ansible inventory.
The :ansplugin:`community.docker.docker_machine inventory plugin <community.docker.docker_machine#inventory>` allows you to dynamically add Docker Machine hosts to your Ansible inventory.
Docker stack
------------
Docker Swarm stack
------------------
The :ref:`community.docker.docker_stack module <ansible_collections.community.docker.docker_stack_module>` module allows you to control Docker stacks. Information on stacks can be retrieved by the :ref:`community.docker.docker_stack_info module <ansible_collections.community.docker.docker_stack_info_module>`, and information on stack tasks can be retrieved by the :ref:`community.docker.docker_stack_task_info module <ansible_collections.community.docker.docker_stack_task_info_module>`.
The :ansplugin:`community.docker.docker_stack module <community.docker.docker_stack#module>` allows you to control Docker Swarm stacks. Information on Swarm stacks can be retrieved by the :ansplugin:`community.docker.docker_stack_info module <community.docker.docker_stack_info#module>`, and information on Swarm stack tasks can be retrieved by the :ansplugin:`community.docker.docker_stack_task_info module <community.docker.docker_stack_task_info#module>`.
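
A deployment with the module could be sketched as follows (the stack name and Compose file path are placeholders):

.. code-block:: yaml

   - name: Deploy or update a stack from a Compose file
     community.docker.docker_stack:
       name: myapp
       state: present
       compose:
         - /opt/myapp/docker-compose.yml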
Docker Swarm
@ -195,19 +299,19 @@ Swarm management
One inventory plugin and several modules are provided to manage Docker Swarms:
docker_swarm inventory plugin
The :ref:`community.docker.docker_swarm inventory plugin <ansible_collections.community.docker.docker_swarm_inventory>` allows you to dynamically add all Docker Swarm nodes to your Ansible inventory.
The :ansplugin:`community.docker.docker_swarm inventory plugin <community.docker.docker_swarm#inventory>` allows you to dynamically add all Docker Swarm nodes to your Ansible inventory.
docker_swarm module
The :ref:`community.docker.docker_swarm module <ansible_collections.community.docker.docker_swarm_module>` allows you to globally configure Docker Swarm manager nodes to join and leave swarms, and to change the Docker Swarm configuration.
The :ansplugin:`community.docker.docker_swarm module <community.docker.docker_swarm#module>` allows you to globally configure Docker Swarm manager nodes to join and leave swarms, and to change the Docker Swarm configuration.
docker_swarm_info module
The :ref:`community.docker.docker_swarm_info module <ansible_collections.community.docker.docker_swarm_info_module>` allows you to retrieve information on Docker Swarm.
The :ansplugin:`community.docker.docker_swarm_info module <community.docker.docker_swarm_info#module>` allows you to retrieve information on Docker Swarm.
docker_node module
The :ref:`community.docker.docker_node module <ansible_collections.community.docker.docker_node_module>` allows you to manage Docker Swarm nodes.
The :ansplugin:`community.docker.docker_node module <community.docker.docker_node#module>` allows you to manage Docker Swarm nodes.
docker_node_info module
The :ref:`community.docker.docker_node_info module <ansible_collections.community.docker.docker_node_info_module>` allows you to retrieve information on Docker Swarm nodes.
The :ansplugin:`community.docker.docker_node_info module <community.docker.docker_node_info#module>` allows you to retrieve information on Docker Swarm nodes.
Configuration management
........................
@ -215,21 +319,12 @@ Configuration management
The community.docker collection offers modules to manage Docker Swarm configurations and secrets:
docker_config module
The :ref:`community.docker.docker_config module <ansible_collections.community.docker.docker_config_module>` allows you to create and modify Docker Swarm configs.
The :ansplugin:`community.docker.docker_config module <community.docker.docker_config#module>` allows you to create and modify Docker Swarm configs.
docker_secret module
The :ref:`community.docker.docker_secret module <ansible_collections.community.docker.docker_secret_module>` allows you to create and modify Docker Swarm secrets.
The :ansplugin:`community.docker.docker_secret module <community.docker.docker_secret#module>` allows you to create and modify Docker Swarm secrets.
Swarm services
..............
Docker Swarm services can be created and updated with the :ref:`community.docker.docker_swarm_service module <ansible_collections.community.docker.docker_swarm_service_module>`, and information on them can be queried by the :ref:`community.docker.docker_swarm_service_info module <ansible_collections.community.docker.docker_swarm_service_info_module>`.
Helpful links
-------------
Still using Dockerfile to build images? Check out `ansible-bender <https://github.com/ansible-community/ansible-bender>`_, and start building images from your Ansible playbooks.
Use `Ansible Operator <https://learn.openshift.com/ansibleop/ansible-operator-overview/>`_ to launch your docker-compose file on `OpenShift <https://www.okd.io/>`_. Go from an app on your laptop to a fully scalable app in the cloud with Kubernetes in just a few moments.
Docker Swarm services can be created and updated with the :ansplugin:`community.docker.docker_swarm_service module <community.docker.docker_swarm_service#module>`, and information on them can be queried by the :ansplugin:`community.docker.docker_swarm_service_info module <community.docker.docker_swarm_service_info#module>`.

View File

@ -1,17 +1,27 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# See https://docs.ansible.com/ansible/latest/dev_guide/collections_galaxy_meta.html
namespace: community
name: docker
version: 2.7.0
version: 5.1.0
readme: README.md
authors:
- Ansible Docker Working Group
description: Modules and plugins for working with Docker
license_file: COPYING
license:
- GPL-3.0-or-later
- Apache-2.0
# license_file: COPYING
tags:
- docker
dependencies:
community.library_inventory_filtering_v1: '>=1.0.0'
repository: https://github.com/ansible-collections/community.docker
#documentation: https://github.com/ansible-collection-migration/community.REPO_NAME/tree/main/docs
documentation: https://docs.ansible.com/ansible/latest/collections/community/docker/
homepage: https://github.com/ansible-collections/community.docker
issues: https://github.com/ansible-collections/community.docker/issues
build_ignore:

View File

@ -0,0 +1,3 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later

View File

@ -1,2 +1,16 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
docker
docker-compose
urllib3
requests
paramiko
pyyaml
# We assume that EEs are not based on Windows, and have Python >= 3.5.
# (ansible-builder does not support conditionals, it will simply add
# the following unconditionally to the requirements)
#
# pywin32 ; sys_platform == 'win32'
# backports.ssl-match-hostname ; python_version < '3.5'

View File

@ -1,4 +1,8 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
version: 1
dependencies:
python: meta/ee-requirements.txt

View File

@ -1,27 +1,51 @@
---
requires_ansible: '>=2.9.10'
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
requires_ansible: '>=2.17.0'
action_groups:
docker:
- docker_compose
- docker_config
- docker_container
- docker_container_exec
- docker_container_info
- docker_host_info
- docker_image
- docker_image_info
- docker_image_load
- docker_login
- docker_network
- docker_network_info
- docker_node
- docker_node_info
- docker_plugin
- docker_prune
- docker_secret
- docker_swarm
- docker_swarm_info
- docker_swarm_service
- docker_swarm_service_info
- docker_volume
- docker_volume_info
- docker_compose_v2
- docker_compose_v2_exec
- docker_compose_v2_pull
- docker_compose_v2_run
- docker_config
- docker_container
- docker_container_copy_into
- docker_container_exec
- docker_container_info
- docker_host_info
- docker_image
- docker_image_build
- docker_image_export
- docker_image_info
- docker_image_load
- docker_image_pull
- docker_image_push
- docker_image_remove
- docker_image_tag
- docker_login
- docker_network
- docker_network_info
- docker_node
- docker_node_info
- docker_plugin
- docker_prune
- docker_secret
- docker_stack
- docker_stack_info
- docker_stack_task_info
- docker_swarm
- docker_swarm_info
- docker_swarm_service
- docker_swarm_service_info
- docker_volume
- docker_volume_info
plugin_routing:
modules:
docker_compose:
tombstone:
removal_version: 4.0.0
warning_text: This module uses docker-compose v1, which is End of Life since July 2022. Please migrate to community.docker.docker_compose_v2.

noxfile.py Normal file
View File

@ -0,0 +1,27 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Felix Fontein <felix@fontein.de>
# /// script
# dependencies = ["nox>=2025.02.09", "antsibull-nox"]
# ///
import sys
import nox
try:
import antsibull_nox
except ImportError:
print("You need to install antsibull-nox in the same Python environment as nox.")
sys.exit(1)
antsibull_nox.load_antsibull_nox_toml()
# Allow running the noxfile with `python noxfile.py`, `pipx run noxfile.py`, or similar.
# Requires nox >= 2025.02.09
if __name__ == "__main__":
nox.main()

View File

@ -0,0 +1,49 @@
# Copyright (c) 2022, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
import base64
import typing as t
from ansible import constants as C
from ansible.plugins.action import ActionBase
from ansible.utils.vars import merge_hash
from ansible_collections.community.docker.plugins.module_utils._scramble import (
unscramble,
)
class ActionModule(ActionBase):
# Set to True when transferring files to the remote
TRANSFERS_FILES = False
def run(
self, tmp: str | None = None, task_vars: dict[str, t.Any] | None = None
) -> dict[str, t.Any]:
self._supports_check_mode = True
self._supports_async = True
result = super().run(tmp, task_vars)
del tmp # tmp no longer has any effect
# pylint: disable-next=no-member
max_file_size_for_diff: int = C.MAX_FILE_SIZE_FOR_DIFF # type: ignore
self._task.args["_max_file_size_for_diff"] = max_file_size_for_diff
result = merge_hash(
result,
self._execute_module(task_vars=task_vars, wrap_async=self._task.async_val),
)
if "diff" in result and result["diff"].get("scrambled_diff"):
# Scrambling is not done for security, but to avoid no_log screwing up the diff
diff = result["diff"]
key = base64.b64decode(diff.pop("scrambled_diff"))
for k in ("before", "after"):
if k in diff:
diff[k] = unscramble(diff[k], key)
return result
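For illustration, here is a minimal sketch of what the unscramble step could look like, under the assumption of a repeating-key XOR over base64-encoded data; the collection's _scramble module_utils is the authoritative implementation:

# Hypothetical unscramble: base64-decode, then XOR with a repeating key.
# This is an assumption for illustration, not the collection's actual code.
import base64
import itertools

def unscramble_sketch(value: str, key: bytes) -> str:
    data = base64.b64decode(value)
    plain = bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))
    return plain.decode("utf-8")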

View File

@ -4,200 +4,250 @@
# (c) 2015, Leendert Brouwer (https://github.com/objectified)
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from __future__ import annotations
DOCUMENTATION = '''
author:
- Lorin Hochestein (!UNKNOWN)
- Leendert Brouwer (!UNKNOWN)
name: docker
short_description: Run tasks in docker containers
DOCUMENTATION = r"""
author:
- Lorin Hochestein (!UNKNOWN)
- Leendert Brouwer (!UNKNOWN)
name: docker
short_description: Run tasks in docker containers
description:
- Run commands or put/fetch files to an existing docker container.
- Uses the Docker CLI to execute commands in the container. If you prefer to directly connect to the Docker daemon, use
the P(community.docker.docker_api#connection) connection plugin.
options:
remote_addr:
description:
- Run commands or put/fetch files to an existing docker container.
- Uses the Docker CLI to execute commands in the container. If you prefer
to directly connect to the Docker daemon, use the
R(community.docker.docker_api,ansible_collections.community.docker.docker_api_connection)
connection plugin.
options:
remote_addr:
description:
- The name of the container you want to access.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_docker_host
remote_user:
description:
- The user to execute as inside the container.
- If Docker is too old to allow this (< 1.7), the one set by Docker itself will be used.
vars:
- name: ansible_user
- name: ansible_docker_user
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
cli:
- name: user
keyword:
- name: remote_user
docker_extra_args:
description:
- Extra arguments to pass to the docker command line.
default: ''
vars:
- name: ansible_docker_extra_args
ini:
- section: docker_connection
key: extra_cli_args
container_timeout:
default: 10
description:
- Controls how long we can wait for output from the container once execution has started.
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_DOCKER_TIMEOUT
version_added: 2.2.0
ini:
- key: timeout
section: defaults
- key: timeout
section: docker_connection
version_added: 2.2.0
vars:
- name: ansible_docker_timeout
version_added: 2.2.0
cli:
- name: timeout
type: integer
'''
- The name of the container you want to access.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_docker_host
remote_user:
description:
- The user to execute as inside the container.
- If Docker is too old to allow this (< 1.7), the one set by Docker itself will be used.
vars:
- name: ansible_user
- name: ansible_docker_user
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
cli:
- name: user
keyword:
- name: remote_user
docker_extra_args:
description:
- Extra arguments to pass to the docker command line.
default: ''
vars:
- name: ansible_docker_extra_args
ini:
- section: docker_connection
key: extra_cli_args
container_timeout:
default: 10
description:
- Controls how long we can wait for output from the container once execution has started.
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_DOCKER_TIMEOUT
version_added: 2.2.0
ini:
- key: timeout
section: defaults
- key: timeout
section: docker_connection
version_added: 2.2.0
vars:
- name: ansible_docker_timeout
version_added: 2.2.0
cli:
- name: timeout
type: integer
extra_env:
description:
- Provide extra environment variables to set when running commands in the Docker container.
- This option can currently only be provided as Ansible variables due to limitations of ansible-core's configuration
manager.
vars:
- name: ansible_docker_extra_env
type: dict
version_added: 3.12.0
working_dir:
description:
- The directory inside the container to run commands in.
- Requires Docker CLI version 18.06 or later.
env:
- name: ANSIBLE_DOCKER_WORKING_DIR
ini:
- key: working_dir
section: docker_connection
vars:
- name: ansible_docker_working_dir
type: string
version_added: 3.12.0
privileged:
description:
- Whether commands should be run with extended privileges.
- B(Note) that this allows commands to potentially break out of the container. Use with care!
env:
- name: ANSIBLE_DOCKER_PRIVILEGED
ini:
- key: privileged
section: docker_connection
vars:
- name: ansible_docker_privileged
type: boolean
default: false
version_added: 3.12.0
"""
import fcntl
import os
import os.path
import subprocess
import re
import selectors
import subprocess
import typing as t
from shlex import quote
from ansible.compat import selectors
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.six.moves import shlex_quote
from ansible.errors import AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import BUFSIZE, ConnectionBase
from ansible.utils.display import Display
from ansible_collections.community.docker.plugins.module_utils.version import LooseVersion
from ansible_collections.community.docker.plugins.module_utils._version import (
LooseVersion,
)
display = Display()
class Connection(ConnectionBase):
''' Local docker based connections '''
"""Local docker based connections"""
transport = 'community.docker.docker'
transport = "community.docker.docker"
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
super().__init__(*args, **kwargs)
# Note: docker supports running as non-root in some configurations.
# (For instance, setting the UNIX socket file to be readable and
# writable by a specific UNIX group and then putting users into that
# group). Therefore we don't check that the user is root when using
# group). Therefore we do not check that the user is root when using
# this connection. But if the user is getting a permission denied
# error it probably means that docker on their system is only
# configured to be connected to by root and they are not running as
# root.
self._docker_args = []
self._container_user_cache = {}
self._version = None
self._docker_args: list[bytes | str] = []
self._container_user_cache: dict[str, str | None] = {}
self._version: str | None = None
self.remote_user: str | None = None
self.timeout: int | float | None = None
# Windows uses Powershell modules
if getattr(self._shell, "_IS_WINDOWS", False):
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.module_implementation_preferences = (".ps1", ".exe", "")
if 'docker_command' in kwargs:
self.docker_cmd = kwargs['docker_command']
if "docker_command" in kwargs:
self.docker_cmd = kwargs["docker_command"]
else:
try:
self.docker_cmd = get_bin_path('docker')
except ValueError:
raise AnsibleError("docker command not found in PATH")
self.docker_cmd = get_bin_path("docker")
except ValueError as exc:
raise AnsibleError("docker command not found in PATH") from exc
@staticmethod
def _sanitize_version(version):
version = re.sub(u'[^0-9a-zA-Z.]', u'', version)
version = re.sub(u'^v', u'', version)
def _sanitize_version(version: str) -> str:
version = re.sub("[^0-9a-zA-Z.]", "", version)
version = re.sub("^v", "", version)
return version
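For example, the quoted output of `docker version --format '{{.Server.Version}}'` reduces to a bare version string; a standalone copy of the logic shows the effect:

# Standalone copy of the sanitizing above, with example inputs.
import re

def sanitize_version(version: str) -> str:
    version = re.sub("[^0-9a-zA-Z.]", "", version)  # drop quotes, whitespace, dashes
    return re.sub("^v", "", version)                # drop a leading "v"

print(sanitize_version("'27.3.1'\n"))  # -> 27.3.1
print(sanitize_version("v1.12.6"))     # -> 1.12.6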
def _old_docker_version(self):
def _old_docker_version(self) -> tuple[list[str], str, bytes, int]:
cmd_args = self._docker_args
old_version_subcommand = ['version']
old_version_subcommand = ["version"]
old_docker_cmd = [self.docker_cmd] + cmd_args + old_version_subcommand
p = subprocess.Popen(old_docker_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
cmd_output, err = p.communicate()
with subprocess.Popen(
old_docker_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
) as p:
cmd_output, err = p.communicate()
return old_docker_cmd, to_native(cmd_output), err, p.returncode
return old_docker_cmd, to_text(cmd_output), err, p.returncode
def _new_docker_version(self):
def _new_docker_version(self) -> tuple[list[str], str, bytes, int]:
# no result yet, must be newer Docker version
cmd_args = self._docker_args
new_version_subcommand = ['version', '--format', "'{{.Server.Version}}'"]
new_version_subcommand = ["version", "--format", "'{{.Server.Version}}'"]
new_docker_cmd = [self.docker_cmd] + cmd_args + new_version_subcommand
p = subprocess.Popen(new_docker_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
cmd_output, err = p.communicate()
return new_docker_cmd, to_native(cmd_output), err, p.returncode
def _get_docker_version(self):
with subprocess.Popen(
new_docker_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
) as p:
cmd_output, err = p.communicate()
return new_docker_cmd, to_text(cmd_output), err, p.returncode
def _get_docker_version(self) -> str:
cmd, cmd_output, err, returncode = self._old_docker_version()
if returncode == 0:
for line in to_text(cmd_output, errors='surrogate_or_strict').split(u'\n'):
if line.startswith(u'Server version:'): # old docker versions
for line in to_text(cmd_output, errors="surrogate_or_strict").split("\n"):
if line.startswith("Server version:"): # old docker versions
return self._sanitize_version(line.split()[2])
cmd, cmd_output, err, returncode = self._new_docker_version()
if returncode:
raise AnsibleError('Docker version check (%s) failed: %s' % (to_native(cmd), to_native(err)))
raise AnsibleError(
f"Docker version check ({to_text(cmd)}) failed: {to_text(err)}"
)
return self._sanitize_version(to_text(cmd_output, errors='surrogate_or_strict'))
return self._sanitize_version(to_text(cmd_output, errors="surrogate_or_strict"))
def _get_docker_remote_user(self):
""" Get the default user configured in the docker container """
container = self.get_option('remote_addr')
def _get_docker_remote_user(self) -> str | None:
"""Get the default user configured in the docker container"""
container = self.get_option("remote_addr")
if container in self._container_user_cache:
return self._container_user_cache[container]
p = subprocess.Popen([self.docker_cmd, 'inspect', '--format', '{{.Config.User}}', container],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with subprocess.Popen(
[self.docker_cmd, "inspect", "--format", "{{.Config.User}}", container],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
) as p:
out_b, err_b = p.communicate()
out = to_text(out_b, errors="surrogate_or_strict")
out, err = p.communicate()
out = to_text(out, errors='surrogate_or_strict')
if p.returncode != 0:
display.warning(u'unable to retrieve default user from docker container: %s %s' % (out, to_text(err)))
self._container_user_cache[container] = None
return None
if p.returncode != 0:
display.warning(
f"unable to retrieve default user from docker container: {out} {to_text(err_b)}"
)
self._container_user_cache[container] = None
return None
# The default exec user is root, unless it was changed in the Dockerfile with USER
user = out.strip() or u'root'
user = out.strip() or "root"
self._container_user_cache[container] = user
return user
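Outside the plugin, the same default-user lookup is a single CLI call; a sketch with a made-up container name:

# Sketch: query a container's configured default user via docker inspect.
import subprocess

out = subprocess.run(
    ["docker", "inspect", "--format", "{{.Config.User}}", "web-1"],
    capture_output=True, text=True, check=True,
).stdout
user = out.strip() or "root"  # empty output means the image default, root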
def _build_exec_cmd(self, cmd):
""" Build the local docker exec command to run cmd on remote_host
def _build_exec_cmd(self, cmd: list[bytes | str]) -> list[bytes | str]:
"""Build the local docker exec command to run cmd on remote_host
If remote_user is available and is supported by the docker
version we are using, it will be provided to docker exec.
If remote_user is available and is supported by the docker
version we are using, it will be provided to docker exec.
"""
local_cmd = [self.docker_cmd]
@ -205,247 +255,371 @@ class Connection(ConnectionBase):
if self._docker_args:
local_cmd += self._docker_args
local_cmd += [b'exec']
local_cmd += [b"exec"]
if self.remote_user is not None:
local_cmd += [b'-u', self.remote_user]
local_cmd += [b"-u", self.remote_user]
if self.get_option("extra_env"):
for k, v in self.get_option("extra_env").items():
for val, what in ((k, "Key"), (v, "Value")):
if not isinstance(val, str):
raise AnsibleConnectionFailure(
f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"{what}: {val!r}"
)
local_cmd += [
b"-e",
b"%s=%s"
% (
to_bytes(k, errors="surrogate_or_strict"),
to_bytes(v, errors="surrogate_or_strict"),
),
]
if self.get_option("working_dir") is not None:
local_cmd += [
b"-w",
to_bytes(self.get_option("working_dir"), errors="surrogate_or_strict"),
]
if self.docker_version != "dev" and LooseVersion(
self.docker_version
) < LooseVersion("18.06"):
# https://github.com/docker/cli/pull/732, first appeared in release 18.06.0
raise AnsibleConnectionFailure(
f"Providing the working directory requires Docker CLI version 18.06 or newer. You have Docker CLI version {self.docker_version}."
)
if self.get_option("privileged"):
local_cmd += [b"--privileged"]
# -i is needed to keep stdin open which allows pipelining to work
local_cmd += [b'-i', self.get_option('remote_addr')] + cmd
local_cmd += [b"-i", self.get_option("remote_addr")] + cmd
return local_cmd
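Put together, the assembled command line looks roughly as follows; the option values here are made up for illustration:

# Sketch of the argv produced above for made-up option values:
# remote_user=root, extra_env={"HTTP_PROXY": "http://proxy:3128"},
# working_dir="/app", privileged=true, remote_addr="web-1".
local_cmd = [
    "docker", "exec",
    "-u", "root",
    "-e", "HTTP_PROXY=http://proxy:3128",
    "-w", "/app",
    "--privileged",
    "-i", "web-1",
    "/bin/sh", "-c", "echo hello",
]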
def _set_docker_args(self):
def _set_docker_args(self) -> None:
# TODO: this is mostly for backwards compatibility, play_context is used as fallback for older versions
# docker arguments
del self._docker_args[:]
extra_args = self.get_option('docker_extra_args') or getattr(self._play_context, 'docker_extra_args', '')
extra_args = self.get_option("docker_extra_args") or getattr(
self._play_context, "docker_extra_args", ""
)
if extra_args:
self._docker_args += extra_args.split(' ')
self._docker_args += extra_args.split(" ")
def _set_conn_data(self):
''' initialize for the connection; cannot be done only in __init__ since not all data is ready at that point '''
def _set_conn_data(self) -> None:
"""initialize for the connection; cannot be done only in __init__ since not all data is ready at that point"""
self._set_docker_args()
self.remote_user = self.get_option('remote_user')
self.remote_user = self.get_option("remote_user")
if self.remote_user is None and self._play_context.remote_user is not None:
self.remote_user = self._play_context.remote_user
# timeout, use unless default and pc is different, backwards compat
self.timeout = self.get_option('container_timeout')
self.timeout = self.get_option("container_timeout")
if self.timeout == 10 and self.timeout != self._play_context.timeout:
self.timeout = self._play_context.timeout
@property
def docker_version(self):
def docker_version(self) -> str:
if not self._version:
self._set_docker_args()
self._version = self._get_docker_version()
if self._version == u'dev':
display.warning(u'Docker version number is "dev". Will assume latest version.')
if self._version != u'dev' and LooseVersion(self._version) < LooseVersion(u'1.3'):
raise AnsibleError('docker connection type requires docker 1.3 or higher')
if self._version == "dev":
display.warning(
'Docker version number is "dev". Will assume latest version.'
)
if self._version != "dev" and LooseVersion(self._version) < LooseVersion(
"1.3"
):
raise AnsibleError(
"docker connection type requires docker 1.3 or higher"
)
return self._version
def _get_actual_user(self):
def _get_actual_user(self) -> str | None:
if self.remote_user is not None:
# An explicit user is provided
if self.docker_version == u'dev' or LooseVersion(self.docker_version) >= LooseVersion(u'1.7'):
if self.docker_version == "dev" or LooseVersion(
self.docker_version
) >= LooseVersion("1.7"):
# Support for specifying the exec user was added in docker 1.7
return self.remote_user
else:
self.remote_user = None
actual_user = self._get_docker_remote_user()
if actual_user != self.get_option('remote_user'):
display.warning(u'docker {0} does not support remote_user, using container default: {1}'
.format(self.docker_version, self.actual_user or u'?'))
return actual_user
elif self._display.verbosity > 2:
# Since we're not setting the actual_user, look it up so we have it for logging later
self.remote_user = None
actual_user = self._get_docker_remote_user()
if actual_user != self.get_option("remote_user"):
display.warning(
f"docker {self.docker_version} does not support remote_user, using container default: {actual_user or '?'}"
)
return actual_user
if self._display.verbosity > 2:
# Since we are not setting the actual_user, look it up so we have it for logging later
# Only do this if display verbosity is high enough that we'll need the value
# This saves overhead from calling into docker when we don't need to.
# This saves overhead from calling into docker when we do not need to.
return self._get_docker_remote_user()
else:
return None
return None
def _connect(self, port=None):
""" Connect to the container. Nothing to do """
super(Connection, self)._connect()
def _connect(self) -> t.Self:
"""Connect to the container. Nothing to do"""
super()._connect() # type: ignore[safe-super]
if not self._connected:
self._set_conn_data()
actual_user = self._get_actual_user()
display.vvv(u"ESTABLISH DOCKER CONNECTION FOR USER: {0}".format(
actual_user or u'?'), host=self.get_option('remote_addr')
display.vvv(
f"ESTABLISH DOCKER CONNECTION FOR USER: {actual_user or '?'}",
host=self.get_option("remote_addr"),
)
self._connected = True
return self
def exec_command(self, cmd, in_data=None, sudoable=False):
""" Run a command on the docker host """
def exec_command(
self, cmd: str, in_data: bytes | None = None, sudoable: bool = False
) -> tuple[int, bytes, bytes]:
"""Run a command on the docker host"""
self._set_conn_data()
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
super().exec_command(cmd, in_data=in_data, sudoable=sudoable) # type: ignore[safe-super]
local_cmd = self._build_exec_cmd([self._play_context.executable, '-c', cmd])
local_cmd = self._build_exec_cmd([self._play_context.executable, "-c", cmd])
display.vvv(u"EXEC {0}".format(to_text(local_cmd)), host=self.get_option('remote_addr'))
display.vvv(f"EXEC {to_text(local_cmd)}", host=self.get_option("remote_addr"))
display.debug("opening command with Popen()")
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
p = subprocess.Popen(
with subprocess.Popen(
local_cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
display.debug("done running command with Popen()")
) as p:
assert p.stdin is not None
assert p.stdout is not None
assert p.stderr is not None
display.debug("done running command with Popen()")
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(
p.stdout,
fcntl.F_SETFL,
fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK,
)
fcntl.fcntl(
p.stderr,
fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK,
)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
become_output = b''
try:
while not self.become.check_success(become_output) and not self.become.check_password_prompt(become_output):
events = selector.select(self.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError('timeout waiting for privilege escalation password prompt:\n' + to_native(become_output))
become_output = b""
try:
while not self.become.check_success(
become_output
) and not self.become.check_password_prompt(become_output):
events = selector.select(self.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError(
"timeout waiting for privilege escalation password prompt:\n"
+ to_text(become_output)
)
for key, event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
chunks = b""
for key, dummy_event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
if chunk:
chunks += chunk
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
if chunk:
chunks += chunk
if not chunk:
stdout, stderr = p.communicate()
raise AnsibleError('privilege output closed while waiting for password prompt:\n' + to_native(become_output))
become_output += chunk
finally:
selector.close()
if not chunks:
stdout, stderr = p.communicate()
raise AnsibleError(
"privilege output closed while waiting for password prompt:\n"
+ to_text(become_output)
)
become_output += chunks
finally:
selector.close()
if not self.become.check_success(become_output):
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
p.stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
if not self.become.check_success(become_output):
become_pass = self.become.get_option(
"become_pass", playcontext=self._play_context
)
p.stdin.write(
to_bytes(become_pass, errors="surrogate_or_strict") + b"\n"
)
fcntl.fcntl(
p.stdout,
fcntl.F_SETFL,
fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK,
)
fcntl.fcntl(
p.stderr,
fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK,
)
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
display.debug("done with docker.exec_command()")
return (p.returncode, stdout, stderr)
display.debug("done with docker.exec_command()")
return (p.returncode, stdout, stderr)
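The become handling above follows a generic pattern: switch the pipes to non-blocking mode and poll them with a selector until a prompt (or EOF) appears. A condensed standalone sketch of that pattern, where p is any subprocess.Popen with stdout/stderr pipes:

# Standalone sketch of the non-blocking prompt wait used above.
import fcntl
import os
import selectors

def wait_for_prompt(p, prompt: bytes, timeout: float = 10.0) -> bytes:
    for pipe in (p.stdout, p.stderr):
        flags = fcntl.fcntl(pipe, fcntl.F_GETFL)
        fcntl.fcntl(pipe, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    output = b""
    with selectors.DefaultSelector() as sel:
        sel.register(p.stdout, selectors.EVENT_READ)
        sel.register(p.stderr, selectors.EVENT_READ)
        while prompt not in output:
            events = sel.select(timeout)
            if not events:
                raise TimeoutError(f"no prompt after {timeout}s: {output!r}")
            for key, dummy_event in events:
                chunk = key.fileobj.read()
                if not chunk:  # stream closed before the prompt appeared
                    raise EOFError(f"stream closed: {output!r}")
                output += chunk
    return output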
def _prefix_login_path(self, remote_path):
''' Make sure that we put files into a standard path
def _prefix_login_path(self, remote_path: str) -> str:
"""Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we are not guaranteed that a home dir will
exist in any given chroot. So for now we are choosing "/" instead.
This also happens to be the former default.
Can revisit using $HOME instead if it's a problem
'''
Can revisit using $HOME instead if it is a problem
"""
if getattr(self._shell, "_IS_WINDOWS", False):
import ntpath
return ntpath.normpath(remote_path)
else:
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
""" Transfer a file from local to docker container """
return ntpath.normpath(remote_path)
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path: str, out_path: str) -> None:
"""Transfer a file from local to docker container"""
self._set_conn_data()
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.get_option('remote_addr'))
super().put_file(in_path, out_path) # type: ignore[safe-super]
display.vvv(f"PUT {in_path} TO {out_path}", host=self.get_option("remote_addr"))
out_path = self._prefix_login_path(out_path)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
if not os.path.exists(to_bytes(in_path, errors="surrogate_or_strict")):
raise AnsibleFileNotFound(
"file or module does not exist: %s" % to_native(in_path))
f"file or module does not exist: {to_text(in_path)}"
)
out_path = shlex_quote(out_path)
# Older docker doesn't have native support for copying files into
out_path = quote(out_path)
# Older docker does not have native support for copying files into
# running containers, so we use docker exec to implement this
# Although docker version 1.8 and later provide support, the
# owner and group of the files are always set to root
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
if not os.fstat(in_file.fileno()).st_size:
count = ' count=0'
count = " count=0"
else:
count = ''
args = self._build_exec_cmd([self._play_context.executable, "-c", "dd of=%s bs=%s%s" % (out_path, BUFSIZE, count)])
args = [to_bytes(i, errors='surrogate_or_strict') for i in args]
count = ""
args = self._build_exec_cmd(
[
self._play_context.executable,
"-c",
f"dd of={out_path} bs={BUFSIZE}{count}",
]
)
args = [to_bytes(i, errors="surrogate_or_strict") for i in args]
try:
p = subprocess.Popen(args, stdin=in_file, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError:
raise AnsibleError("docker connection requires dd command in the container to put files")
# pylint: disable-next=consider-using-with
p = subprocess.Popen(
args, stdin=in_file, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
except OSError as exc:
raise AnsibleError(
"docker connection requires dd command in the container to put files"
) from exc
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
raise AnsibleError(
f"failed to transfer file {to_text(in_path)} to {to_text(out_path)}:\n{to_text(stdout)}\n{to_text(stderr)}"
)
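Stripped of the Ansible plumbing, the upload is equivalent to streaming the local file into dd running inside the container; a sketch with made-up names (65536 matches ansible-core's BUFSIZE):

# Sketch: docker exec -i CONTAINER /bin/sh -c 'dd of=/dest bs=65536' < local_file
import subprocess

def dd_put(container: str, local_path: str, remote_path: str) -> None:
    with open(local_path, "rb") as in_file:
        subprocess.run(
            ["docker", "exec", "-i", container,
             "/bin/sh", "-c", f"dd of={remote_path} bs=65536"],
            stdin=in_file, check=True, capture_output=True,
        )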
def fetch_file(self, in_path, out_path):
""" Fetch a file from container to local. """
def fetch_file(self, in_path: str, out_path: str) -> None:
"""Fetch a file from container to local."""
self._set_conn_data()
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.get_option('remote_addr'))
super().fetch_file(in_path, out_path) # type: ignore[safe-super]
display.vvv(
f"FETCH {in_path} TO {out_path}", host=self.get_option("remote_addr")
)
in_path = self._prefix_login_path(in_path)
# out_path is the final file path, but docker takes a directory, not a
# file path
out_dir = os.path.dirname(out_path)
args = [self.docker_cmd, "cp", "%s:%s" % (self.get_option('remote_addr'), in_path), out_dir]
args = [to_bytes(i, errors='surrogate_or_strict') for i in args]
args = [
self.docker_cmd,
"cp",
f"{self.get_option('remote_addr')}:{in_path}",
out_dir,
]
args = [to_bytes(i, errors="surrogate_or_strict") for i in args]
p = subprocess.Popen(args, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
with subprocess.Popen(
args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
) as p:
p.communicate()
if getattr(self._shell, "_IS_WINDOWS", False):
import ntpath
actual_out_path = ntpath.join(out_dir, ntpath.basename(in_path))
else:
actual_out_path = os.path.join(out_dir, os.path.basename(in_path))
if getattr(self._shell, "_IS_WINDOWS", False):
import ntpath
if p.returncode != 0:
# Older docker doesn't have native support for fetching files (the `cp` command)
# If `cp` fails, try to use `dd` instead
args = self._build_exec_cmd([self._play_context.executable, "-c", "dd if=%s bs=%s" % (in_path, BUFSIZE)])
args = [to_bytes(i, errors='surrogate_or_strict') for i in args]
with open(to_bytes(actual_out_path, errors='surrogate_or_strict'), 'wb') as out_file:
try:
p = subprocess.Popen(args, stdin=subprocess.PIPE,
stdout=out_file, stderr=subprocess.PIPE)
except OSError:
raise AnsibleError("docker connection requires dd command in the container to put files")
stdout, stderr = p.communicate()
actual_out_path = ntpath.join(out_dir, ntpath.basename(in_path))
else:
actual_out_path = os.path.join(out_dir, os.path.basename(in_path))
if p.returncode != 0:
raise AnsibleError("failed to fetch file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
if p.returncode != 0:
# Older docker does not have native support for fetching files (the `cp` command)
# If `cp` fails, try to use `dd` instead
args = self._build_exec_cmd(
[
self._play_context.executable,
"-c",
f"dd if={in_path} bs={BUFSIZE}",
]
)
args = [to_bytes(i, errors="surrogate_or_strict") for i in args]
with open(
to_bytes(actual_out_path, errors="surrogate_or_strict"), "wb"
) as out_file:
try:
# pylint: disable-next=consider-using-with
pp = subprocess.Popen(
args,
stdin=subprocess.PIPE,
stdout=out_file,
stderr=subprocess.PIPE,
)
except OSError as exc:
raise AnsibleError(
"docker connection requires dd command in the container to put files"
) from exc
stdout, stderr = pp.communicate()
if pp.returncode != 0:
raise AnsibleError(
f"failed to fetch file {in_path} to {out_path}:\n{stdout!r}\n{stderr!r}"
)
# Rename if needed
if actual_out_path != out_path:
os.rename(to_bytes(actual_out_path, errors='strict'), to_bytes(out_path, errors='strict'))
os.rename(
to_bytes(actual_out_path, errors="strict"),
to_bytes(out_path, errors="strict"),
)
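The happy path of the fetch is a plain `docker cp` into the destination directory, with dd only as the fallback; a sketch with made-up names:

# Sketch of the primary fetch path: docker cp copies into a directory,
# so the file keeps its original basename until the rename above.
import os
import subprocess

def cp_fetch(container: str, in_path: str, out_dir: str) -> str:
    subprocess.run(
        ["docker", "cp", f"{container}:{in_path}", out_dir],
        check=True, capture_output=True,
    )
    return os.path.join(out_dir, os.path.basename(in_path))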
def close(self):
""" Terminate the connection. Nothing to do for Docker"""
super(Connection, self).close()
def close(self) -> None:
"""Terminate the connection. Nothing to do for Docker"""
super().close() # type: ignore[safe-super]
self._connected = False
def reset(self):
def reset(self) -> None:
# Clear container user cache
self._container_user_cache = {}

View File

@ -1,102 +1,150 @@
# Copyright (c) 2019-2020, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from __future__ import annotations
DOCUMENTATION = '''
DOCUMENTATION = r"""
author:
- Felix Fontein (@felixfontein)
- Felix Fontein (@felixfontein)
name: docker_api
short_description: Run tasks in docker containers
version_added: 1.1.0
description:
- Run commands or put/fetch files to an existing docker container.
- Uses Docker SDK for Python to interact directly with the Docker daemon instead of
using the Docker CLI. Use the
R(community.docker.docker,ansible_collections.community.docker.docker_connection)
connection plugin if you want to use the Docker CLI.
options:
remote_user:
type: str
description:
- The user to execute as inside the container.
vars:
- name: ansible_user
- name: ansible_docker_user
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
cli:
- name: user
keyword:
- name: remote_user
remote_addr:
type: str
description:
- The name of the container you want to access.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_docker_host
container_timeout:
default: 10
description:
- Controls how long we can wait for output from the container once execution has started.
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_DOCKER_TIMEOUT
version_added: 2.2.0
ini:
- key: timeout
section: defaults
- key: timeout
section: docker_connection
version_added: 2.2.0
vars:
- name: ansible_docker_timeout
version_added: 2.2.0
cli:
- name: timeout
type: integer
- Run commands or put/fetch files to an existing docker container.
- Uses the L(requests library,https://pypi.org/project/requests/) to interact directly with the Docker daemon instead of
using the Docker CLI. Use the P(community.docker.docker#connection) connection plugin if you want to use the Docker CLI.
notes:
- Does B(not work with TCP TLS sockets)! This is caused by the inability to send C(close_notify) without closing the connection
with Python's C(SSLSocket)s. See U(https://github.com/ansible-collections/community.docker/issues/605) for more information.
extends_documentation_fragment:
- community.docker.docker
- community.docker.docker.var_names
- community.docker.docker.docker_py_1_documentation
'''
- community.docker._docker.api_documentation
- community.docker._docker.var_names
options:
remote_user:
type: str
description:
- The user to execute as inside the container.
vars:
- name: ansible_user
- name: ansible_docker_user
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
cli:
- name: user
keyword:
- name: remote_user
remote_addr:
type: str
description:
- The name of the container you want to access.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_docker_host
container_timeout:
default: 10
description:
- Controls how long we can wait for output from the container once execution has started.
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_DOCKER_TIMEOUT
version_added: 2.2.0
ini:
- key: timeout
section: defaults
- key: timeout
section: docker_connection
version_added: 2.2.0
vars:
- name: ansible_docker_timeout
version_added: 2.2.0
cli:
- name: timeout
type: integer
extra_env:
description:
- Provide extra environment variables to set when running commands in the Docker container.
- This option can currently only be provided as Ansible variables due to limitations of ansible-core's configuration
manager.
vars:
- name: ansible_docker_extra_env
type: dict
version_added: 3.12.0
working_dir:
description:
- The directory inside the container to run commands in.
- Requires Docker API version 1.35 or later.
env:
- name: ANSIBLE_DOCKER_WORKING_DIR
ini:
- key: working_dir
section: docker_connection
vars:
- name: ansible_docker_working_dir
type: string
version_added: 3.12.0
privileged:
description:
- Whether commands should be run with extended privileges.
- B(Note) that this allows commands to potentially break out of the container. Use with care!
env:
- name: ANSIBLE_DOCKER_PRIVILEGED
ini:
- key: privileged
section: docker_connection
vars:
- name: ansible_docker_privileged
type: boolean
default: false
version_added: 3.12.0
"""
import io
import os
import os.path
import shutil
import tarfile
import typing as t
from ansible.errors import AnsibleFileNotFound, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.errors import AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible_collections.community.docker.plugins.module_utils.common import (
from ansible_collections.community.docker.plugins.module_utils._api.errors import (
APIError,
DockerException,
NotFound,
)
from ansible_collections.community.docker.plugins.module_utils._common_api import (
RequestException,
)
from ansible_collections.community.docker.plugins.plugin_utils.socket_handler import (
DockerSocketHandler,
from ansible_collections.community.docker.plugins.module_utils._copy import (
DockerFileCopyError,
DockerFileNotFound,
fetch_file,
put_file,
)
from ansible_collections.community.docker.plugins.plugin_utils.common import (
from ansible_collections.community.docker.plugins.module_utils._version import (
LooseVersion,
)
from ansible_collections.community.docker.plugins.plugin_utils._common_api import (
AnsibleDockerClient,
)
from ansible_collections.community.docker.plugins.plugin_utils._socket_handler import (
DockerSocketHandler,
)
if t.TYPE_CHECKING:
from collections.abc import Callable
_T = t.TypeVar("_T")
try:
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible_collections.community.docker.plugins.module_utils.common
pass
MIN_DOCKER_PY = '1.7.0'
MIN_DOCKER_API = None
@ -104,128 +152,206 @@ display = Display()
class Connection(ConnectionBase):
''' Local docker based connections '''
"""Local docker based connections"""
transport = 'community.docker.docker_api'
transport = "community.docker.docker_api"
has_pipelining = True
def _call_client(self, callable, not_found_can_be_resource=False):
def _call_client(
self,
f: Callable[[AnsibleDockerClient], _T],
not_found_can_be_resource: bool = False,
) -> _T:
if self.client is None:
raise AssertionError("Client must be present")
remote_addr = self.get_option("remote_addr")
try:
return callable()
return f(self.client)
except NotFound as e:
if not_found_can_be_resource:
raise AnsibleConnectionFailure('Could not find container "{1}" or resource in it ({0})'.format(e, self.get_option('remote_addr')))
else:
raise AnsibleConnectionFailure('Could not find container "{1}" ({0})'.format(e, self.get_option('remote_addr')))
raise AnsibleConnectionFailure(
f'Could not find container "{remote_addr}" or resource in it ({e})'
) from e
raise AnsibleConnectionFailure(
f'Could not find container "{remote_addr}" ({e})'
) from e
except APIError as e:
if e.response and e.response.status_code == 409:
raise AnsibleConnectionFailure('The container "{1}" has been paused ({0})'.format(e, self.get_option('remote_addr')))
if e.response is not None and e.response.status_code == 409:
raise AnsibleConnectionFailure(
f'The container "{remote_addr}" has been paused ({e})'
) from e
self.client.fail(
'An unexpected docker error occurred for container "{1}": {0}'.format(e, self.get_option('remote_addr'))
f'An unexpected Docker error occurred for container "{remote_addr}": {e}'
)
except DockerException as e:
self.client.fail(
'An unexpected docker error occurred for container "{1}": {0}'.format(e, self.get_option('remote_addr'))
f'An unexpected Docker error occurred for container "{remote_addr}": {e}'
)
except RequestException as e:
self.client.fail(
'An unexpected requests error occurred for container "{1}" when docker-py tried to talk to the docker daemon: {0}'
.format(e, self.get_option('remote_addr'))
f'An unexpected requests error occurred for container "{remote_addr}" when trying to talk to the Docker daemon: {e}'
)
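The method above acts as a single choke point that translates client-library exceptions into Ansible connection failures. A self-contained sketch of that shape, with stand-in exception classes instead of the real ones:

# Generic sketch of the exception-translation choke point.
# NotFound and AnsibleConnectionFailure are stand-ins for the real classes.
import typing as t

class NotFound(Exception):
    pass

class AnsibleConnectionFailure(Exception):
    pass

def call_translated(f: t.Callable[[], t.Any], container: str) -> t.Any:
    try:
        return f()
    except NotFound as e:
        # Map a low-level "no such container" into a connection failure.
        raise AnsibleConnectionFailure(
            f'Could not find container "{container}" ({e})'
        ) from e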
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
super().__init__(*args, **kwargs)
self.client = None
self.ids = dict()
self.client: AnsibleDockerClient | None = None
self.ids: dict[str | None, tuple[int, int]] = {}
# Windows uses Powershell modules
if getattr(self._shell, "_IS_WINDOWS", False):
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.module_implementation_preferences = (".ps1", ".exe", "")
self.actual_user = None
self.actual_user: str | None = None
def _connect(self, port=None):
""" Connect to the container. Nothing to do """
super(Connection, self)._connect()
def _connect(self) -> Connection:
"""Connect to the container. Nothing to do"""
super()._connect() # type: ignore[safe-super]
if not self._connected:
self.actual_user = self.get_option('remote_user')
display.vvv(u"ESTABLISH DOCKER CONNECTION FOR USER: {0}".format(
self.actual_user or u'?'), host=self.get_option('remote_addr')
self.actual_user = self.get_option("remote_user")
display.vvv(
f"ESTABLISH DOCKER CONNECTION FOR USER: {self.actual_user or '?'}",
host=self.get_option("remote_addr"),
)
if self.client is None:
self.client = AnsibleDockerClient(self, min_docker_version=MIN_DOCKER_PY, min_docker_api_version=MIN_DOCKER_API)
self.client = AnsibleDockerClient(
self, min_docker_api_version=MIN_DOCKER_API
)
self._connected = True
if self.actual_user is None and display.verbosity > 2:
# Since we're not setting the actual_user, look it up so we have it for logging later
# Since we are not setting the actual_user, look it up so we have it for logging later
# Only do this if display verbosity is high enough that we'll need the value
# This saves overhead from calling into docker when we don't need to
display.vvv(u"Trying to determine actual user")
result = self._call_client(lambda: self.client.inspect_container(self.get_option('remote_addr')))
if result.get('Config'):
self.actual_user = result['Config'].get('User')
# This saves overhead from calling into docker when we do not need to
display.vvv("Trying to determine actual user")
result = self._call_client(
lambda client: client.get_json(
"/containers/{0}/json", self.get_option("remote_addr")
)
)
if result.get("Config"):
self.actual_user = result["Config"].get("User")
if self.actual_user is not None:
display.vvv(u"Actual user is '{0}'".format(self.actual_user))
display.vvv(f"Actual user is '{self.actual_user}'")
def exec_command(self, cmd, in_data=None, sudoable=False):
""" Run a command on the docker host """
return self
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
def exec_command(
self, cmd: str, in_data: bytes | None = None, sudoable: bool = False
) -> tuple[int, bytes, bytes]:
"""Run a command on the docker host"""
command = [self._play_context.executable, '-c', to_text(cmd)]
super().exec_command(cmd, in_data=in_data, sudoable=sudoable) # type: ignore[safe-super]
if self.client is None:
raise AssertionError("Client must be present")
command = [self._play_context.executable, "-c", cmd]
do_become = self.become and self.become.expect_prompt() and sudoable
stdin_part = (
f", with stdin ({len(in_data)} bytes)" if in_data is not None else ""
)
become_part = ", with become prompt" if do_become else ""
display.vvv(
u"EXEC {0}{1}{2}".format(
to_text(command),
', with stdin ({0} bytes)'.format(len(in_data)) if in_data is not None else '',
', with become prompt' if do_become else '',
),
host=self.get_option('remote_addr')
f"EXEC {to_text(command)}{stdin_part}{become_part}",
host=self.get_option("remote_addr"),
)
need_stdin = True if (in_data is not None) or do_become else False
need_stdin = bool((in_data is not None) or do_become)
exec_data = self._call_client(lambda: self.client.exec_create(
self.get_option('remote_addr'),
command,
stdout=True,
stderr=True,
stdin=need_stdin,
user=self.get_option('remote_user') or '',
# workdir=None, - only works for Docker SDK for Python 3.0.0 and later
))
exec_id = exec_data['Id']
data = {
"Container": self.get_option("remote_addr"),
"User": self.get_option("remote_user") or "",
"Privileged": self.get_option("privileged"),
"Tty": False,
"AttachStdin": need_stdin,
"AttachStdout": True,
"AttachStderr": True,
"Cmd": command,
}
if "detachKeys" in self.client._general_configs:
data["detachKeys"] = self.client._general_configs["detachKeys"]
if self.get_option("extra_env"):
data["Env"] = []
for k, v in self.get_option("extra_env").items():
for val, what in ((k, "Key"), (v, "Value")):
if not isinstance(val, str):
raise AnsibleConnectionFailure(
f"Non-string {what.lower()} found for extra_env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted when directly specified "
"in YAML, or explicitly converted to strings when the option is templated. "
f"{what}: {val!r}"
)
data["Env"].append(f"{k}={v}")
if self.get_option("working_dir") is not None:
data["WorkingDir"] = self.get_option("working_dir")
if self.client.docker_api_version < LooseVersion("1.35"):
raise AnsibleConnectionFailure(
"Providing the working directory requires Docker API version 1.35 or newer."
f" The Docker daemon the connection is using has API version {self.client.docker_api_version_str}."
)
exec_data = self._call_client(
lambda client: client.post_json_to_json(
"/containers/{0}/exec", self.get_option("remote_addr"), data=data
)
)
exec_id = exec_data["Id"]
data = {"Tty": False, "Detach": False}
if need_stdin:
exec_socket = self._call_client(lambda: self.client.exec_start(
exec_id,
detach=False,
socket=True,
))
exec_socket = self._call_client(
lambda client: client.post_json_to_stream_socket(
"/exec/{0}/start", exec_id, data=data
)
)
try:
with DockerSocketHandler(display, exec_socket, container=self.get_option('remote_addr')) as exec_socket_handler:
with DockerSocketHandler(
display, exec_socket, container=self.get_option("remote_addr")
) as exec_socket_handler:
if do_become:
become_output = [b'']
assert self.become is not None
def append_become_output(stream_id, data):
become_output = [b""]
def append_become_output(stream_id: int, data: bytes) -> None:
become_output[0] += data
exec_socket_handler.set_block_done_callback(append_become_output)
exec_socket_handler.set_block_done_callback(
append_become_output
)
while not self.become.check_success(become_output[0]) and not self.become.check_password_prompt(become_output[0]):
if not exec_socket_handler.select(self.get_option('container_timeout')):
while not self.become.check_success(
become_output[0]
) and not self.become.check_password_prompt(become_output[0]):
if not exec_socket_handler.select(
self.get_option("container_timeout")
):
stdout, stderr = exec_socket_handler.consume()
raise AnsibleConnectionFailure('timeout waiting for privilege escalation password prompt:\n' + to_native(become_output[0]))
raise AnsibleConnectionFailure(
"timeout waiting for privilege escalation password prompt:\n"
+ to_text(become_output[0])
)
if exec_socket_handler.is_eof():
raise AnsibleConnectionFailure('privilege output closed while waiting for password prompt:\n' + to_native(become_output[0]))
raise AnsibleConnectionFailure(
"privilege output closed while waiting for password prompt:\n"
+ to_text(become_output[0])
)
if not self.become.check_success(become_output[0]):
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
exec_socket_handler.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
become_pass = self.become.get_option(
"become_pass", playcontext=self._play_context
)
exec_socket_handler.write(
to_bytes(become_pass, errors="surrogate_or_strict")
+ b"\n"
)
if in_data is not None:
exec_socket_handler.write(in_data)
@ -234,155 +360,122 @@ class Connection(ConnectionBase):
finally:
exec_socket.close()
else:
stdout, stderr = self._call_client(lambda: self.client.exec_start(
exec_id,
detach=False,
stream=False,
socket=False,
demux=True,
))
stdout, stderr = self._call_client(
lambda client: client.post_json_to_stream(
"/exec/{0}/start",
exec_id,
stream=False,
demux=True,
tty=False,
data=data,
)
)
result = self._call_client(lambda: self.client.exec_inspect(exec_id))
result = self._call_client(
lambda client: client.get_json("/exec/{0}/json", exec_id)
)
return result.get("ExitCode") or 0, stdout or b"", stderr or b""
def _prefix_login_path(self, remote_path: str) -> str:
    """Make sure that we put files into a standard path

    If a path is relative, then we need to choose where to put it.
    ssh chooses $HOME but we are not guaranteed that a home dir will
    exist in any given chroot. So for now we are choosing "/" instead.
    This also happens to be the former default.

    Can revisit using $HOME instead if it is a problem
    """
if getattr(self._shell, "_IS_WINDOWS", False):
import ntpath
    return ntpath.normpath(remote_path)
if not remote_path.startswith(os.path.sep):
    remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path: str, out_path: str) -> None:
"""Transfer a file from local to docker container"""
super().put_file(in_path, out_path) # type: ignore[safe-super]
display.vvv(f"PUT {in_path} TO {out_path}", host=self.get_option("remote_addr"))
if self.client is None:
raise AssertionError("Client must be present")
out_path = self._prefix_login_path(out_path)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound(
"file or module does not exist: %s" % to_native(in_path))
if self.actual_user not in self.ids:
dummy, ids, dummy2 = self.exec_command("id -u && id -g")
remote_addr = self.get_option("remote_addr")
try:
b_user_id, b_group_id = ids.splitlines()
user_id, group_id = int(b_user_id), int(b_group_id)
self.ids[self.actual_user] = user_id, group_id
display.vvvv(
    f'PUT: Determined uid={user_id} and gid={group_id} for user "{self.actual_user}"',
    host=remote_addr,
)
except Exception as e:
raise AnsibleConnectionFailure(
    f'Error while determining user and group ID of current user in container "{remote_addr}": {e}\nGot value: {ids!r}'
) from e
user_id, group_id = self.ids[self.actual_user]
try:
self._call_client(
lambda client: put_file(
client,
container=self.get_option("remote_addr"),
in_path=in_path,
out_path=out_path,
user_id=user_id,
group_id=group_id,
user_name=self.actual_user,
follow_links=True,
),
not_found_can_be_resource=True,
)
except DockerFileNotFound as exc:
raise AnsibleFileNotFound(to_text(exc)) from exc
except DockerFileCopyError as exc:
raise AnsibleConnectionFailure(to_text(exc)) from exc
def fetch_file(self, in_path: str, out_path: str) -> None:
"""Fetch a file from container to local."""
super().fetch_file(in_path, out_path) # type: ignore[safe-super]
display.vvv(
f"FETCH {in_path} TO {out_path}", host=self.get_option("remote_addr")
)
if self.client is None:
raise AssertionError("Client must be present")
in_path = self._prefix_login_path(in_path)
try:
self._call_client(
lambda client: fetch_file(
client,
container=self.get_option("remote_addr"),
in_path=in_path,
out_path=out_path,
follow_links=True,
log=lambda msg: display.vvvv(
msg, host=self.get_option("remote_addr")
),
),
not_found_can_be_resource=True,
)
except DockerFileNotFound as exc:
raise AnsibleFileNotFound(to_text(exc)) from exc
except DockerFileCopyError as exc:
raise AnsibleConnectionFailure(to_text(exc)) from exc
def close(self) -> None:
"""Terminate the connection. Nothing to do for Docker"""
super().close() # type: ignore[safe-super]
self._connected = False
def reset(self) -> None:
self.ids.clear()
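The plugin above resolves the target container from its remote_addr option, which defaults to the inventory host name. A minimal inventory sketch for managing a running container through the Docker API; the host and container names are illustrative:

all:
  hosts:
    web-1:
      # web-1 must be the name or ID of a running container
      ansible_connection: community.docker.docker_api
      ansible_user: root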


@ -1,77 +1,76 @@
# Copyright (c) 2021 Jeff Goldschrafe <jeff@holyhandgrenade.org>
# Based on Ansible local connection plugin by:
# Copyright (c) 2012 Michael DeHaan <michael.dehaan@gmail.com>
# Copyright (c) 2015, 2017 Toshio Kuratomi <tkuratomi@ansible.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: nsenter
short_description: execute on host running controller container
version_added: 1.9.0
description:
  - This connection plugin allows Ansible, running in a privileged container, to execute tasks on the container host instead
    of in the container itself.
  - This is useful for running Ansible in a pull model, while still keeping the Ansible control node containerized.
  - It relies on having privileged access to run C(nsenter) in the host's PID namespace, allowing it to enter the namespaces
    of the provided PID (default PID 1, or init/systemd).
author: Jeff Goldschrafe (@jgoldschrafe)
options:
  nsenter_pid:
    description:
      - PID to attach with using nsenter.
      - The default should be fine unless you are attaching as a non-root user.
    type: int
    default: 1
    vars:
      - name: ansible_nsenter_pid
    env:
      - name: ANSIBLE_NSENTER_PID
    ini:
      - section: nsenter_connection
        key: nsenter_pid
notes:
  - The remote user is ignored; this plugin always runs as root.
  - "This plugin requires the Ansible controller container to be launched in the following way: (1) The container image contains
    the C(nsenter) program; (2) The container is launched in privileged mode; (3) The container is launched in the host's
    PID namespace (C(--pid host))."
"""
import fcntl
import os
import pty
import selectors
import shlex
import shutil
import subprocess
import typing as t

import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
display = Display()
class Connection(ConnectionBase):
"""Connections to a container host using nsenter"""
transport = "community.docker.nsenter"
has_pipelining = False
def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
    super().__init__(*args, **kwargs)
self.cwd = None
self._nsenter_pid = None
def _connect(self) -> t.Self:
self._nsenter_pid = self.get_option("nsenter_pid")
# Because nsenter requires very high privileges, our remote user
@ -80,24 +79,28 @@ class Connection(ConnectionBase):
if not self._connected:
display.vvv(
u"ESTABLISH NSENTER CONNECTION FOR USER: {0}".format(
self._play_context.remote_user
),
f"ESTABLISH NSENTER CONNECTION FOR USER: {self._play_context.remote_user}",
host=self._play_context.remote_addr,
)
self._connected = True
return self
def exec_command(
    self, cmd: str, in_data: bytes | None = None, sudoable: bool = True
) -> tuple[int, bytes, bytes]:
    super().exec_command(cmd, in_data=in_data, sudoable=sudoable)  # type: ignore[safe-super]
display.debug("in nsenter.exec_command()")
# pylint: disable-next=no-member
def_executable: str | None = C.DEFAULT_EXECUTABLE  # type: ignore[attr-defined]
executable = def_executable.split()[0] if def_executable else None
if not os.path.exists(to_bytes(executable, errors="surrogate_or_strict")):
    raise AnsibleError(
        f"failed to find the executable specified {executable}."
        " Please verify if the executable exists and re-try."
    )
# Rewrite the provided command to prefix it with nsenter
nsenter_cmd_parts = [
@ -108,18 +111,14 @@ class Connection(ConnectionBase):
"--pid",
"--uts",
"--preserve-credentials",
"--target={0}".format(self._nsenter_pid),
f"--target={self._nsenter_pid}",
"--",
]
cmd_parts = nsenter_cmd_parts + [cmd]
cmd_b = to_bytes(" ".join(cmd_parts))
display.vvv(u"EXEC {0}".format(to_text(cmd)), host=self._play_context.remote_addr)
display.vvv(f"EXEC {to_text(cmd_b)}", host=self._play_context.remote_addr)
display.debug("opening command with Popen()")
master = None
@ -128,112 +127,162 @@ class Connection(ConnectionBase):
# This plugin does not support pipelining. This diverges from the behavior of
# the core "local" connection plugin that this one derives from.
if sudoable and self.become and self.become.expect_prompt():
# Create a pty if sudoable for privilege escalation that needs it.
# Falls back to using a standard pipe if this fails, which may
# cause the command to fail in certain situations where we are escalating
# privileges or the command otherwise needs a pty.
try:
master, stdin = pty.openpty()
except (IOError, OSError) as e:
display.debug("Unable to open pty: %s" % to_native(e))
display.debug(f"Unable to open pty: {e}")
with subprocess.Popen(
    cmd_b,
    shell=True,
    executable=executable,
cwd=self.cwd,
stdin=stdin,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
) as p:
assert p.stderr is not None
assert p.stdin is not None
assert p.stdout is not None
    # if we created a master, we can close the other half of the pty now, otherwise master is stdin
    if master is not None:
        os.close(stdin)

    display.debug("done running command with Popen()")
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(
p.stdout,
fcntl.F_SETFL,
fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK,
)
fcntl.fcntl(
p.stderr,
fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK,
)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
become_output = b""
try:
while not self.become.check_success(
become_output
) and not self.become.check_password_prompt(become_output):
events = selector.select(self._play_context.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError(
"timeout waiting for privilege escalation password prompt:\n"
+ to_text(become_output)
)
chunks = b""
for key, dummy_event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
if chunk:
chunks += chunk
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
if chunk:
chunks += chunk
if not chunks:
stdout, stderr = p.communicate()
raise AnsibleError(
"privilege output closed while waiting for password prompt:\n"
+ to_text(become_output)
)
become_output += chunks
finally:
selector.close()
if not self.become.check_success(become_output):
become_pass = self.become.get_option(
"become_pass", playcontext=self._play_context
)
if master is None:
p.stdin.write(
to_bytes(become_pass, errors="surrogate_or_strict") + b"\n"
)
else:
os.write(
master,
to_bytes(become_pass, errors="surrogate_or_strict") + b"\n",
)
fcntl.fcntl(
p.stdout,
fcntl.F_SETFL,
fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK,
)
fcntl.fcntl(
p.stderr,
fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK,
)
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
# finally, close the other half of the pty, if it was created
if master:
os.close(master)
display.debug("done with nsenter.exec_command()")
return (p.returncode, stdout, stderr)
display.debug("done with nsenter.exec_command()")
return (p.returncode, stdout, stderr)
def put_file(self, in_path: str, out_path: str) -> None:
super().put_file(in_path, out_path) # type: ignore[safe-super]
in_path = unfrackpath(in_path, basedir=self.cwd)
out_path = unfrackpath(out_path, basedir=self.cwd)
display.vvv(u"PUT {0} to {1}".format(in_path, out_path), host=self._play_context.remote_addr)
display.vvv(f"PUT {in_path} to {out_path}", host=self._play_context.remote_addr)
try:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
in_data = in_file.read()
rc, out, err = self.exec_command(cmd=["tee", out_path], in_data=in_data)
rc, dummy_out, err = self.exec_command(
cmd=f"tee {shlex.quote(out_path)}", in_data=in_data
)
if rc != 0:
raise AnsibleError("failed to transfer file to {0}: {1}".format(out_path, err))
raise AnsibleError(
f"failed to transfer file to {out_path}: {to_text(err)}"
)
except IOError as e:
raise AnsibleError("failed to transfer file to {0}: {1}".format(out_path, to_native(e)))
raise AnsibleError(f"failed to transfer file to {out_path}: {e}") from e
def fetch_file(self, in_path: str, out_path: str) -> None:
    super().fetch_file(in_path, out_path)  # type: ignore[safe-super]
in_path = unfrackpath(in_path, basedir=self.cwd)
out_path = unfrackpath(out_path, basedir=self.cwd)
try:
rc, out, err = self.exec_command(cmd=["cat", in_path])
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
rc, out, err = self.exec_command(cmd=f"cat {shlex.quote(in_path)}")
display.vvv(
f"FETCH {in_path} TO {out_path}", host=self._play_context.remote_addr
)
if rc != 0:
raise AnsibleError("failed to transfer file to {0}: {1}".format(in_path, err))
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb') as out_file:
raise AnsibleError(
f"failed to transfer file to {in_path}: {to_text(err)}"
)
with open(
to_bytes(out_path, errors="surrogate_or_strict"), "wb"
) as out_file:
out_file.write(out)
except IOError as e:
raise AnsibleError("failed to transfer file to {0}: {1}".format(to_native(out_path), to_native(e)))
raise AnsibleError(
f"failed to transfer file to {to_text(out_path)}: {e}"
) from e
def close(self) -> None:
    """terminate the connection; nothing to do here"""
self._connected = False


@ -0,0 +1,110 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
class ModuleDocFragment:
# Standard documentation fragment
DOCUMENTATION = r"""
options: {}
attributes:
check_mode:
description: Can run in C(check_mode) and return changed status prediction without modifying target.
diff_mode:
description: Will return details on what has changed (or possibly needs changing in C(check_mode)), when in diff mode.
idempotent:
description:
- When run twice in a row outside check mode, with the same arguments, the second invocation indicates no change.
- This assumes that the system controlled/queried by the module has not changed in a relevant way.
"""
# Should be used together with the standard fragment
IDEMPOTENT_NOT_MODIFY_STATE = r"""
options: {}
attributes:
idempotent:
support: full
details:
- This action does not modify state.
"""
# Should be used together with the standard fragment
INFO_MODULE = r"""
options: {}
attributes:
check_mode:
support: full
details:
- This action does not modify state.
diff_mode:
support: N/A
details:
- This action does not modify state.
"""
ACTIONGROUP_DOCKER = r"""
options: {}
attributes:
action_group:
description: Use C(group/docker) or C(group/community.docker.docker) in C(module_defaults) to set defaults for this module.
support: full
membership:
- community.docker.docker
- docker
"""
CONN = r"""
options: {}
attributes:
become:
description: Is usable alongside C(become) keywords.
connection:
description: Uses the target's configured connection information to execute code on it.
delegation:
description: Can be used in conjunction with C(delegate_to) and related keywords.
"""
FACTS = r"""
options: {}
attributes:
facts:
description: Action returns an C(ansible_facts) dictionary that will update existing host facts.
"""
# Should be used together with the standard fragment and the FACTS fragment
FACTS_MODULE = r"""
options: {}
attributes:
check_mode:
support: full
details:
- This action does not modify state.
diff_mode:
support: N/A
details:
- This action does not modify state.
facts:
support: full
"""
FILES = r"""
options: {}
attributes:
safe_file_operations:
description: Uses Ansible's strict file operation functions to ensure proper permissions and avoid data corruption.
"""
FLOW = r"""
options: {}
attributes:
action:
description: Indicates this has a corresponding action plugin so some parts of the options can be executed on the controller.
async:
description: Supports being used with the C(async) keyword.
"""


@ -0,0 +1,82 @@
# Copyright (c) 2023, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
class ModuleDocFragment:
# Docker doc fragment
DOCUMENTATION = r"""
options:
project_src:
description:
- Path to a directory containing a Compose file (C(compose.yml), C(compose.yaml), C(docker-compose.yml), or C(docker-compose.yaml)).
- If O(files) is provided, will look for these files in this directory instead.
- Mutually exclusive with O(definition). One of O(project_src) and O(definition) must be provided.
type: path
project_name:
description:
- Provide a project name. If not provided, the project name is taken from the basename of O(project_src).
- Required when O(definition) is provided.
type: str
files:
description:
- List of Compose file names relative to O(project_src) to be used instead of the main Compose file (C(compose.yml),
C(compose.yaml), C(docker-compose.yml), or C(docker-compose.yaml)).
- Files are loaded and merged in the order given.
- Mutually exclusive with O(definition).
type: list
elements: path
version_added: 3.7.0
definition:
description:
- Compose file describing one or more services, networks and volumes.
- Mutually exclusive with O(project_src) and O(files). One of O(project_src) and O(definition) must be provided.
- If provided, PyYAML must be available to this module, and O(project_name) must be specified.
- Note that a temporary directory will be created and deleted afterwards when using this option.
type: dict
version_added: 3.9.0
env_files:
description:
- By default environment files are loaded from a C(.env) file located directly under the O(project_src) directory.
- O(env_files) can be used to specify the path of one or multiple custom environment files instead.
- The path is relative to the O(project_src) directory.
type: list
elements: path
profiles:
description:
- List of profiles to enable when starting services.
- Equivalent to C(docker compose --profile).
type: list
elements: str
check_files_existing:
description:
- If set to V(false), the module will not check whether one of the files C(compose.yaml), C(compose.yml), C(docker-compose.yaml),
or C(docker-compose.yml) exists in O(project_src) if O(files) is not provided.
- This can be useful if environment files with C(COMPOSE_FILE) are used to configure a different filename. The module
currently does not check for C(COMPOSE_FILE) in environment files or the current environment.
type: bool
default: true
version_added: 3.9.0
requirements:
- "PyYAML if O(definition) is used"
notes:
  - |-
    The Docker compose CLI plugin has no stable output format (see for example U(https://github.com/docker/compose/issues/10872)),
    and for the main operations also no machine-friendly output format. The module tries to accommodate this with various
    version-dependent behavior adjustments and with testing older and newer versions of the Docker compose CLI plugin.
    Currently the module is tested with multiple plugin versions between 2.18.1 and 2.23.3. The exact list of plugin versions
    will change over time. New releases of the Docker compose CLI plugin can break this module at any time.
"""
# The following needs to be kept in sync with the compose_v2 module utils
MINIMUM_VERSION = r"""
options: {}
requirements:
- "Docker CLI with Docker compose plugin 2.18.0 or later"
"""


@ -0,0 +1,389 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
class ModuleDocFragment:
# Docker doc fragment
DOCUMENTATION = r"""
options:
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the TCP connection
string. For example, V(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection, the module will automatically
replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable E(DOCKER_HOST) will be used instead.
If the environment variable is not set, the default value will be used.
type: str
default: unix:///var/run/docker.sock
aliases:
- docker_url
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_HOSTNAME) will be used instead.
If the environment variable is not set, the default value will be used.
- Note that this option had a default value V(localhost) in older versions. It was removed in community.docker 3.0.0.
- B(Note:) this option is no longer supported for Docker SDK for Python 7.0.0+. Specifying it with Docker SDK for Python
7.0.0 or newer will lead to an error.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by Docker SDK for Python and the docker daemon.
- If the value is not specified in the task, the value of environment variable E(DOCKER_API_VERSION) will be used instead.
If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases:
- docker_api_version
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TIMEOUT) will be used instead.
If the environment variable is not set, the default value will be used.
type: int
default: 60
ca_path:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set, the file C(ca.pem)
from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
- This option was called O(ca_cert) and got renamed to O(ca_path) in community.docker 3.6.0. The old name has been added
as an alias and can still be used.
type: path
aliases:
- ca_cert
- tls_ca_cert
- cacert_path
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set, the file C(cert.pem)
from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_cert
- cert_path
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set, the file C(key.pem)
from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_key
- key_path
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server. Note that
if O(validate_certs) is set to V(true) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS) will be used instead. If
the environment variable is not set, the default value will be used.
type: bool
default: false
use_ssh_client:
description:
- For SSH transports, use the C(ssh) CLI tool instead of paramiko.
- Requires Docker SDK for Python 4.4.0 or newer.
type: bool
default: false
version_added: 1.5.0
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_VERIFY) will be used instead.
If the environment variable is not set, the default value will be used.
type: bool
default: false
aliases:
- tls_verify
debug:
description:
- Debug mode.
type: bool
default: false
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables. You can define
E(DOCKER_HOST), E(DOCKER_TLS_HOSTNAME), E(DOCKER_API_VERSION), E(DOCKER_CERT_PATH), E(DOCKER_TLS), E(DOCKER_TLS_VERIFY)
and E(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped with the product that sets up the environment.
It will set these variables for you. See U(https://docs.docker.com/machine/reference/env/) for more details.
- When connecting to Docker daemon with TLS, you might need to install additional Python packages. For the Docker SDK for
Python, version 2.4 or newer, this can be done by installing C(docker[tls]) with M(ansible.builtin.pip).
- Note that the Docker SDK for Python only allows specifying the path to the Docker configuration for very few functions.
In general, it will use C($HOME/.docker/config.json) if the E(DOCKER_CONFIG) environment variable is not specified, and
use C($DOCKER_CONFIG/config.json) otherwise.
"""
# For plugins: allow to define common options with Ansible variables
VAR_NAMES = r"""
options:
docker_host:
vars:
- name: ansible_docker_docker_host
tls_hostname:
vars:
- name: ansible_docker_tls_hostname
api_version:
vars:
- name: ansible_docker_api_version
timeout:
vars:
- name: ansible_docker_timeout
ca_path:
vars:
- name: ansible_docker_ca_cert
- name: ansible_docker_ca_path
version_added: 3.6.0
client_cert:
vars:
- name: ansible_docker_client_cert
client_key:
vars:
- name: ansible_docker_client_key
tls:
vars:
- name: ansible_docker_tls
validate_certs:
vars:
- name: ansible_docker_validate_certs
"""
# Additional, more specific stuff for minimal Docker SDK for Python version >= 2.0.
DOCKER_PY_2_DOCUMENTATION = r"""
options: {}
notes:
- This module uses the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
communicate with the Docker daemon.
requirements:
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
This module does B(not) work with docker-py."
"""
# Docker doc fragment when using the vendored API access code
API_DOCUMENTATION = r"""
options:
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
TCP connection string. For example, V(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
the module will automatically replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable E(DOCKER_HOST) will be used
instead. If the environment variable is not set, the default value will be used.
type: str
default: unix:///var/run/docker.sock
aliases:
- docker_url
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_HOSTNAME) will
be used instead. If the environment variable is not set, the default value will be used.
- Note that this option had a default value V(localhost) in older versions. It was removed in community.docker 3.0.0.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by this collection and the docker daemon.
- If the value is not specified in the task, the value of environment variable E(DOCKER_API_VERSION) will be
used instead. If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases:
- docker_api_version
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TIMEOUT) will be used
instead. If the environment variable is not set, the default value will be used.
type: int
default: 60
ca_path:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(ca.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
- This option was called O(ca_cert) and got renamed to O(ca_path) in community.docker 3.6.0. The old name has
been added as an alias and can still be used.
type: path
aliases:
- ca_cert
- tls_ca_cert
- cacert_path
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(cert.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_cert
- cert_path
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(key.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_key
- key_path
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
server. Note that if O(validate_certs) is set to V(true) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS) will be used
instead. If the environment variable is not set, the default value will be used.
type: bool
default: false
use_ssh_client:
description:
- For SSH transports, use the C(ssh) CLI tool instead of paramiko.
type: bool
default: false
version_added: 1.5.0
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_VERIFY) will be
used instead. If the environment variable is not set, the default value will be used.
type: bool
default: false
aliases:
- tls_verify
debug:
description:
- Debug mode.
type: bool
default: false
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
You can define E(DOCKER_HOST), E(DOCKER_TLS_HOSTNAME), E(DOCKER_API_VERSION), E(DOCKER_CERT_PATH),
E(DOCKER_TLS), E(DOCKER_TLS_VERIFY) and E(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
with the product that sets up the environment. It will set these variables for you. See
U(https://docs.docker.com/machine/reference/env/) for more details.
# - Note that the Docker SDK for Python only allows to specify the path to the Docker configuration for very few functions.
# In general, it will use C($HOME/.docker/config.json) if the E(DOCKER_CONFIG) environment variable is not specified,
# and use C($DOCKER_CONFIG/config.json) otherwise.
- This module does B(not) use the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
communicate with the Docker daemon. It uses code derived from the Docker SDK for Python that is included in this
collection.
requirements:
- requests
- pywin32 (when using named pipes on Windows)
- paramiko (when using SSH with O(use_ssh_client=false))
- pyOpenSSL (when using TLS)
"""
# Docker doc fragment when using the Docker CLI
CLI_DOCUMENTATION = r"""
options:
docker_cli:
description:
- Path to the Docker CLI. If not provided, will search for Docker CLI on the E(PATH).
type: path
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
TCP connection string. For example, V(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
the module will automatically replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable E(DOCKER_HOST) will be used
instead. If the environment variable is not set, the default value will be used.
- Mutually exclusive with O(cli_context). If neither O(docker_host) nor O(cli_context) are provided, the
value V(unix:///var/run/docker.sock) is used.
type: str
aliases:
- docker_url
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_HOSTNAME) will
be used instead. If the environment variable is not set, the default value will be used.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by this collection and the docker daemon.
- If the value is not specified in the task, the value of environment variable E(DOCKER_API_VERSION) will be
used instead. If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases:
- docker_api_version
ca_path:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(ca.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- ca_cert
- tls_ca_cert
- cacert_path
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(cert.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_cert
- cert_path
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable E(DOCKER_CERT_PATH) is set,
the file C(key.pem) from the directory specified in the environment variable E(DOCKER_CERT_PATH) will be used.
type: path
aliases:
- tls_client_key
- key_path
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
server. Note that if O(validate_certs) is set to V(true) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS) will be used
instead. If the environment variable is not set, the default value will be used.
type: bool
default: false
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TLS_VERIFY) will be
used instead. If the environment variable is not set, the default value will be used.
type: bool
default: false
aliases:
- tls_verify
# debug:
# description:
# - Debug mode
# type: bool
# default: false
cli_context:
description:
- The Docker CLI context to use.
- Mutually exclusive with O(docker_host).
type: str
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
You can define E(DOCKER_HOST), E(DOCKER_TLS_HOSTNAME), E(DOCKER_API_VERSION), E(DOCKER_CERT_PATH),
E(DOCKER_TLS), E(DOCKER_TLS_VERIFY) and E(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
with the product that sets up the environment. It will set these variables for you. See
U(https://docs.docker.com/machine/reference/env/) for more details.
- This module does B(not) use the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
communicate with the Docker daemon. It directly calls the Docker CLI program.
"""


@ -1,187 +0,0 @@
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Docker doc fragment
DOCUMENTATION = r'''
options:
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
TCP connection string. For example, C(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
the module will automatically replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable C(DOCKER_HOST) will be used
instead. If the environment variable is not set, the default value will be used.
type: str
default: unix://var/run/docker.sock
aliases: [ docker_url ]
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_HOSTNAME) will
be used instead. If the environment variable is not set, the default value will be used.
- The current default value is C(localhost). This default is deprecated and will change in community.docker
2.0.0 to be a value computed from I(docker_host). Explicitly specify C(localhost) to make sure this value
will still be used, and to disable the deprecation message which will be shown otherwise.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by Docker SDK for Python and the docker daemon.
- If the value is not specified in the task, the value of environment variable C(DOCKER_API_VERSION) will be
used instead. If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases: [ docker_api_version ]
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TIMEOUT) will be used
instead. If the environment variable is not set, the default value will be used.
type: int
default: 60
ca_cert:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(ca.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_ca_cert, cacert_path ]
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(cert.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_cert, cert_path ]
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(key.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_key, key_path ]
ssl_version:
description:
- Provide a valid SSL version number. Default value determined by ssl.py module.
- If the value is not specified in the task, the value of environment variable C(DOCKER_SSL_VERSION) will be
used instead.
type: str
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
server. Note that if I(validate_certs) is set to C(yes) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS) will be used
instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
use_ssh_client:
description:
- For SSH transports, use the C(ssh) CLI tool instead of paramiko.
- Requires Docker SDK for Python 4.4.0 or newer.
type: bool
default: no
version_added: 1.5.0
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_VERIFY) will be
used instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
aliases: [ tls_verify ]
debug:
description:
- Debug mode
type: bool
default: no
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
You can define C(DOCKER_HOST), C(DOCKER_TLS_HOSTNAME), C(DOCKER_API_VERSION), C(DOCKER_CERT_PATH), C(DOCKER_SSL_VERSION),
C(DOCKER_TLS), C(DOCKER_TLS_VERIFY) and C(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
with the product that sets up the environment. It will set these variables for you. See
U(https://docs.docker.com/machine/reference/env/) for more details.
- When connecting to Docker daemon with TLS, you might need to install additional Python packages.
For the Docker SDK for Python, version 2.4 or newer, this can be done by installing C(docker[tls]) with M(ansible.builtin.pip).
- Note that the Docker SDK for Python only allows to specify the path to the Docker configuration for very few functions.
In general, it will use C($HOME/.docker/config.json) if the C(DOCKER_CONFIG) environment variable is not specified,
and use C($DOCKER_CONFIG/config.json) otherwise.
'''
# For plugins: allow to define common options with Ansible variables
VAR_NAMES = r'''
options:
docker_host:
vars:
- name: ansible_docker_docker_host
tls_hostname:
vars:
- name: ansible_docker_tls_hostname
api_version:
vars:
- name: ansible_docker_api_version
timeout:
vars:
- name: ansible_docker_timeout
ca_cert:
vars:
- name: ansible_docker_ca_cert
client_cert:
vars:
- name: ansible_docker_client_cert
client_key:
vars:
- name: ansible_docker_client_key
ssl_version:
vars:
- name: ansible_docker_ssl_version
tls:
vars:
- name: ansible_docker_tls
validate_certs:
vars:
- name: ansible_docker_validate_certs
'''
# Additional, more specific stuff for minimal Docker SDK for Python version < 2.0
DOCKER_PY_1_DOCUMENTATION = r'''
options: {}
notes:
- This module uses the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
communicate with the Docker daemon.
requirements:
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
For Python 2.6, C(docker-py) must be used. Otherwise, it is recommended to
install the C(docker) Python module. Note that both modules should *not*
be installed at the same time. Also note that when both modules are installed
and one of them is uninstalled, the other might no longer function and a
reinstall of it is required."
'''
# Additional, more specific stuff for minimal Docker SDK for Python version >= 2.0.
# Note that Docker SDK for Python >= 2.0 requires Python 2.7 or newer.
DOCKER_PY_2_DOCUMENTATION = r'''
options: {}
notes:
- This module uses the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
communicate with the Docker daemon.
requirements:
- "Python >= 2.7"
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
This module does *not* work with docker-py."
'''


@ -1,132 +1,138 @@
# Copyright (c) 2020, Felix Fontein <felix@fontein.de>
# For the parts taken from the docker inventory script:
# Copyright (c) 2016, Paul Durivage <paul.durivage@gmail.com>
# Copyright (c) 2016, Chris Houseknecht <house@redhat.com>
# Copyright (c) 2016, James Tanner <jtanner@redhat.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: docker_containers
short_description: Ansible dynamic inventory plugin for Docker containers
version_added: 1.1.0
author:
  - Felix Fontein (@felixfontein)
extends_documentation_fragment:
  - ansible.builtin.constructed
  - community.docker._docker.api_documentation
  - community.library_inventory_filtering_v1.inventory_filter
description:
  - Reads inventories from the Docker API.
  - Uses a YAML configuration file that ends with V(docker.(yml|yaml\)).
notes:
  - The configuration file must be a YAML file whose filename ends with V(docker.yml) or V(docker.yaml). Other filenames will
    not be accepted.
options:
  plugin:
    description:
      - The name of this plugin, it should always be set to V(community.docker.docker_containers) for this plugin to recognize
        it as its own.
    type: str
    required: true
    choices: [community.docker.docker_containers]
  connection_type:
    description:
      - Which connection type to use for the containers.
      - One way to connect to containers is to use SSH (V(ssh)). For this, the options O(default_ip) and O(private_ssh_port)
        are used. This requires that a SSH daemon is running inside the containers.
      - Alternatively, V(docker-cli) selects the P(community.docker.docker#connection) connection plugin, and V(docker-api)
        (default) selects the P(community.docker.docker_api#connection) connection plugin.
      - When V(docker-api) is used, all Docker daemon configuration values are passed from the inventory plugin to the connection
        plugin. This can be controlled with O(configure_docker_daemon).
      - Note that the P(community.docker.docker_api#connection) connection plugin does B(not work with TCP TLS sockets)!
        See U(https://github.com/ansible-collections/community.docker/issues/605) for more information.
    type: str
    default: docker-api
    choices:
      - ssh
      - docker-cli
      - docker-api
configure_docker_daemon:
description:
- Whether to pass all Docker daemon configuration from the inventory plugin to the connection plugin.
- Only used when I(connection_type=docker-api).
type: bool
default: true
version_added: 1.8.0
configure_docker_daemon:
description:
- Whether to pass all Docker daemon configuration from the inventory plugin to the connection plugin.
- Only used when O(connection_type=docker-api).
type: bool
default: true
version_added: 1.8.0
verbose_output:
description:
- Toggle to (not) include all available inspection metadata.
- Note that all top-level keys will be transformed to the format C(docker_xxx).
For example, C(HostConfig) is converted to C(docker_hostconfig).
- If this is C(false), these values can only be used during I(constructed), I(groups), and I(keyed_groups).
- The C(docker) inventory script always added these variables, so for compatibility set this to C(true).
type: bool
default: false
verbose_output:
description:
- Toggle to (not) include all available inspection metadata.
- Note that all top-level keys will be transformed to the format C(docker_xxx). For example, C(HostConfig) is converted
to C(docker_hostconfig).
- If this is V(false), these values can only be used during O(compose), O(groups), and O(keyed_groups).
- The C(docker) inventory script always added these variables, so for compatibility set this to V(true).
type: bool
default: false
default_ip:
description:
- The IP address to assign to ansible_host when the container's SSH port is mapped to interface
'0.0.0.0'.
- Only used if I(connection_type) is C(ssh).
type: str
default: 127.0.0.1
default_ip:
description:
- The IP address to assign to ansible_host when the container's SSH port is mapped to interface '0.0.0.0'.
- Only used if O(connection_type) is V(ssh).
type: str
default: 127.0.0.1
private_ssh_port:
description:
- The port containers use for SSH.
- Only used if I(connection_type) is C(ssh).
type: int
default: 22
private_ssh_port:
description:
- The port containers use for SSH.
- Only used if O(connection_type) is V(ssh).
type: int
default: 22
add_legacy_groups:
description:
- "Add the same groups as the C(docker) inventory script does. These are the following:"
- "C(<container id>): contains the container of this ID."
- "C(<container name>): contains the container that has this name."
- "C(<container short id>): contains the containers that have this short ID (first 13 letters of ID)."
- "C(image_<image name>): contains the containers that have the image C(<image name>)."
- "C(stack_<stack name>): contains the containers that belong to the stack C(<stack name>)."
- "C(service_<service name>): contains the containers that belong to the service C(<service name>)"
- "C(<docker_host>): contains the containers which belong to the Docker daemon I(docker_host).
Useful if you run this plugin against multiple Docker daemons."
- "C(running): contains all containers that are running."
- "C(stopped): contains all containers that are not running."
- If this is not set to C(true), you should use keyed groups to add the containers to groups.
See the examples for how to do that.
type: bool
default: false
'''
add_legacy_groups:
description:
- 'Add the same groups as the C(docker) inventory script does. These are the following:'
- 'C(<container id>): contains the container of this ID.'
- 'C(<container name>): contains the container that has this name.'
- 'C(<container short id>): contains the containers that have this short ID (first 13 letters of ID).'
- 'C(image_<image name>): contains the containers that have the image C(<image name>).'
- 'C(stack_<stack name>): contains the containers that belong to the stack C(<stack name>).'
- 'C(service_<service name>): contains the containers that belong to the service C(<service name>).'
- 'C(<docker_host>): contains the containers which belong to the Docker daemon O(docker_host). Useful if you run this
plugin against multiple Docker daemons.'
- 'C(running): contains all containers that are running.'
- 'C(stopped): contains all containers that are not running.'
- If this is not set to V(true), you should use keyed groups to add the containers to groups. See the examples for how
to do that.
type: bool
default: false
EXAMPLES = '''
filters:
version_added: 3.5.0
"""
EXAMPLES = """
---
# Minimal example using local Docker daemon
plugin: community.docker.docker_containers
docker_host: unix://var/run/docker.sock
docker_host: unix:///var/run/docker.sock
---
# Minimal example using remote Docker daemon
plugin: community.docker.docker_containers
docker_host: tcp://my-docker-host:2375
---
# Example using remote Docker daemon with unverified TLS
plugin: community.docker.docker_containers
docker_host: tcp://my-docker-host:2376
tls: true
---
# Example using remote Docker daemon with verified TLS and client certificate verification
plugin: community.docker.docker_containers
docker_host: tcp://my-docker-host:2376
validate_certs: true
ca_cert: /somewhere/ca.pem
ca_path: /somewhere/ca.pem
client_key: /somewhere/key.pem
client_cert: /somewhere/cert.pem
---
# Example using constructed features to create groups
plugin: community.docker.docker_containers
docker_host: tcp://my-docker-host:2375
@ -139,6 +145,7 @@ keyed_groups:
- prefix: os
key: docker_platform
---
# Example using SSH connection with an explicit fallback for when port 22 has not been
# exported: use container name as ansible_ssh_host and 22 as ansible_ssh_port
plugin: community.docker.docker_containers
@ -146,203 +153,273 @@ connection_type: ssh
compose:
ansible_ssh_host: ansible_ssh_host | default(docker_name[1:], true)
ansible_ssh_port: ansible_ssh_port | default(22, true)
'''
---
# Only consider containers which have a label 'foo', or whose name starts with 'a'
plugin: community.docker.docker_containers
filters:
# Accept all containers which have a label called 'foo'
- include: >-
"foo" in docker_config.Labels
# Next accept all containers whose inventory_hostname starts with 'a'
- include: >-
inventory_hostname.startswith("a")
# Exclude all containers that did not match any of the above filters
- exclude: true
"""
import re
import typing as t
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_native
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible_collections.community.library_inventory_filtering_v1.plugins.plugin_utils.inventory_filter import (
filter_host,
parse_filters,
)
from ansible_collections.community.docker.plugins.module_utils.common import (
from ansible_collections.community.docker.plugins.module_utils._api.errors import (
APIError,
DockerException,
)
from ansible_collections.community.docker.plugins.module_utils._common_api import (
RequestException,
)
from ansible_collections.community.docker.plugins.module_utils.util import (
from ansible_collections.community.docker.plugins.module_utils._util import (
DOCKER_COMMON_ARGS_VARS,
)
from ansible_collections.community.docker.plugins.plugin_utils.common import (
from ansible_collections.community.docker.plugins.plugin_utils._common_api import (
AnsibleDockerClient,
)
from ansible_collections.community.docker.plugins.plugin_utils._unsafe import (
make_unsafe,
)
if t.TYPE_CHECKING:
from ansible.inventory.data import InventoryData
from ansible.parsing.dataloader import DataLoader
try:
from docker.errors import DockerException, APIError
except Exception:
# missing Docker SDK for Python handled in ansible_collections.community.docker.plugins.module_utils.common
pass
MIN_DOCKER_PY = '1.7.0'
MIN_DOCKER_API = None
class InventoryModule(BaseInventoryPlugin, Constructable):
''' Host inventory parser for ansible using Docker daemon as source. '''
"""Host inventory parser for ansible using Docker daemon as source."""
NAME = 'community.docker.docker_containers'
NAME = "community.docker.docker_containers"
def _slugify(self, value):
return 'docker_%s' % (re.sub(r'[^\w-]', '_', value).lower().lstrip('_'))
def _slugify(self, value: str) -> str:
slug = re.sub(r"[^\w-]", "_", value).lower().lstrip("_")
return f"docker_{slug}"
def _populate(self, client):
strict = self.get_option('strict')
def _populate(self, client: AnsibleDockerClient) -> None:
strict = self.get_option("strict")
ssh_port = self.get_option('private_ssh_port')
default_ip = self.get_option('default_ip')
hostname = self.get_option('docker_host')
verbose_output = self.get_option('verbose_output')
connection_type = self.get_option('connection_type')
add_legacy_groups = self.get_option('add_legacy_groups')
ssh_port = self.get_option("private_ssh_port")
default_ip = self.get_option("default_ip")
hostname = self.get_option("docker_host")
verbose_output = self.get_option("verbose_output")
connection_type = self.get_option("connection_type")
add_legacy_groups = self.get_option("add_legacy_groups")
if self.inventory is None:
raise AssertionError("Inventory must be there")
try:
containers = client.containers(all=True)
params = {
"limit": -1,
"all": 1,
"size": 0,
"trunc_cmd": 0,
"since": None,
"before": None,
}
containers = client.get_json("/containers/json", params=params)
except APIError as exc:
raise AnsibleError("Error listing containers: %s" % to_native(exc))
raise AnsibleError(f"Error listing containers: {exc}") from exc
if add_legacy_groups:
self.inventory.add_group('running')
self.inventory.add_group('stopped')
self.inventory.add_group("running")
self.inventory.add_group("stopped")
extra_facts = {}
if self.get_option('configure_docker_daemon'):
if self.get_option("configure_docker_daemon"):
for option_name, var_name in DOCKER_COMMON_ARGS_VARS.items():
value = self.get_option(option_name)
if value is not None:
extra_facts[var_name] = value
filters = parse_filters(self.get_option("filters"))
for container in containers:
id = container.get('Id')
short_id = id[:13]
container_id = container.get("Id")
short_container_id = container_id[:13]
try:
name = container.get('Names', list())[0].lstrip('/')
name = container.get("Names", [])[0].lstrip("/")
full_name = name
except IndexError:
name = short_id
full_name = id
name = short_container_id
full_name = container_id
self.inventory.add_host(name)
facts = dict(
docker_name=name,
docker_short_id=short_id
)
full_facts = dict()
facts = {
"docker_name": make_unsafe(name),
"docker_short_id": make_unsafe(short_container_id),
}
full_facts = {}
try:
inspect = client.inspect_container(id)
inspect = client.get_json("/containers/{0}/json", container_id)
except APIError as exc:
raise AnsibleError("Error inspecting container %s - %s" % (name, str(exc)))
raise AnsibleError(
f"Error inspecting container {name} - {exc}"
) from exc
state = inspect.get('State') or dict()
config = inspect.get('Config') or dict()
labels = config.get('Labels') or dict()
state = inspect.get("State") or {}
config = inspect.get("Config") or {}
labels = config.get("Labels") or {}
running = state.get('Running')
running = state.get("Running")
groups = []
# Add container to groups
image_name = config.get('Image')
image_name = config.get("Image")
if image_name and add_legacy_groups:
self.inventory.add_group('image_{0}'.format(image_name))
self.inventory.add_host(name, group='image_{0}'.format(image_name))
groups.append(f"image_{image_name}")
stack_name = labels.get('com.docker.stack.namespace')
stack_name = labels.get("com.docker.stack.namespace")
if stack_name:
full_facts['docker_stack'] = stack_name
full_facts["docker_stack"] = stack_name
if add_legacy_groups:
self.inventory.add_group('stack_{0}'.format(stack_name))
self.inventory.add_host(name, group='stack_{0}'.format(stack_name))
groups.append(f"stack_{stack_name}")
service_name = labels.get('com.docker.swarm.service.name')
service_name = labels.get("com.docker.swarm.service.name")
if service_name:
full_facts['docker_service'] = service_name
full_facts["docker_service"] = service_name
if add_legacy_groups:
self.inventory.add_group('service_{0}'.format(service_name))
self.inventory.add_host(name, group='service_{0}'.format(service_name))
groups.append(f"service_{service_name}")
if connection_type == 'ssh':
ansible_connection = None
if connection_type == "ssh":
# Figure out ssh IP and Port
try:
# Lookup the public facing port NATed to the ssh port.
port = client.port(container, ssh_port)[0]
network_settings = inspect.get("NetworkSettings") or {}
port_settings = network_settings.get("Ports") or {}
port = port_settings.get(f"{ssh_port}/tcp")[0] # type: ignore[index]
except (IndexError, AttributeError, TypeError):
port = dict()
port = {}
try:
ip = default_ip if port['HostIp'] == '0.0.0.0' else port['HostIp']
ip = default_ip if port["HostIp"] == "0.0.0.0" else port["HostIp"]
except KeyError:
ip = ''
ip = ""
facts.update(dict(
ansible_ssh_host=ip,
ansible_ssh_port=port.get('HostPort', 0),
))
elif connection_type == 'docker-cli':
facts.update(dict(
ansible_host=full_name,
ansible_connection='community.docker.docker',
))
elif connection_type == 'docker-api':
facts.update(dict(
ansible_host=full_name,
ansible_connection='community.docker.docker_api',
))
facts.update(
{
"ansible_ssh_host": ip,
"ansible_ssh_port": port.get("HostPort", 0),
}
)
elif connection_type == "docker-cli":
facts.update(
{
"ansible_host": full_name,
}
)
ansible_connection = "community.docker.docker"
elif connection_type == "docker-api":
facts.update(
{
"ansible_host": full_name,
}
)
facts.update(extra_facts)
ansible_connection = "community.docker.docker_api"
full_facts.update(facts)
for key, value in inspect.items():
fact_key = self._slugify(key)
full_facts[fact_key] = value
full_facts = make_unsafe(full_facts)
if ansible_connection:
for d in (facts, full_facts):
if "ansible_connection" not in d:
d["ansible_connection"] = ansible_connection
if not filter_host(self, name, full_facts, filters):
continue
if verbose_output:
facts.update(full_facts)
self.inventory.add_host(name)
for group in groups:
self.inventory.add_group(group)
self.inventory.add_host(name, group=group)
for key, value in facts.items():
self.inventory.set_variable(name, key, value)
# Use constructed if applicable
# Composed variables
self._set_composite_vars(self.get_option('compose'), full_facts, name, strict=strict)
self._set_composite_vars(
self.get_option("compose"), full_facts, name, strict=strict
)
# Complex groups based on jinja2 conditionals, hosts that meet the conditional are added to group
self._add_host_to_composed_groups(self.get_option('groups'), full_facts, name, strict=strict)
self._add_host_to_composed_groups(
self.get_option("groups"), full_facts, name, strict=strict
)
# Create groups based on variable values and add the corresponding hosts to it
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), full_facts, name, strict=strict)
self._add_host_to_keyed_groups(
self.get_option("keyed_groups"), full_facts, name, strict=strict
)
# We need to do this last since we also add a group called `name`.
# When we do this before a set_variable() call, the variables are assigned
# to the group, and not to the host.
if add_legacy_groups:
self.inventory.add_group(id)
self.inventory.add_host(name, group=id)
self.inventory.add_group(container_id)
self.inventory.add_host(name, group=container_id)
self.inventory.add_group(name)
self.inventory.add_host(name, group=name)
self.inventory.add_group(short_id)
self.inventory.add_host(name, group=short_id)
self.inventory.add_group(short_container_id)
self.inventory.add_host(name, group=short_container_id)
self.inventory.add_group(hostname)
self.inventory.add_host(name, group=hostname)
if running is True:
self.inventory.add_host(name, group='running')
self.inventory.add_host(name, group="running")
else:
self.inventory.add_host(name, group='stopped')
self.inventory.add_host(name, group="stopped")
def verify_file(self, path):
def verify_file(self, path: str) -> bool:
"""Return the possibly of a file being consumable by this plugin."""
return (
super(InventoryModule, self).verify_file(path) and
path.endswith(('docker.yaml', 'docker.yml')))
return super().verify_file(path) and path.endswith(
("docker.yaml", "docker.yml")
)
def _create_client(self):
return AnsibleDockerClient(self, min_docker_version=MIN_DOCKER_PY, min_docker_api_version=MIN_DOCKER_API)
def _create_client(self) -> AnsibleDockerClient:
return AnsibleDockerClient(self, min_docker_api_version=MIN_DOCKER_API)
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path, cache)
def parse(
self,
inventory: InventoryData,
loader: DataLoader,
path: str,
cache: bool = True,
) -> None:
super().parse(inventory, loader, path, cache)
self._read_config_data(path)
client = self._create_client()
try:
self._populate(client)
except DockerException as e:
raise AnsibleError(
'An unexpected docker error occurred: {0}'.format(e)
)
raise AnsibleError(f"An unexpected Docker error occurred: {e}") from e
except RequestException as e:
raise AnsibleError(
'An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e)
)
f"An unexpected requests error occurred when trying to talk to the Docker daemon: {e}"
) from e


@ -1,63 +1,70 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Ximon Eighteen <ximon.eighteen@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from __future__ import annotations
DOCUMENTATION = '''
name: docker_machine
author: Ximon Eighteen (@ximon18)
short_description: Docker Machine inventory source
requirements:
- L(Docker Machine,https://docs.docker.com/machine/)
extends_documentation_fragment:
- constructed
DOCUMENTATION = r"""
name: docker_machine
author: Ximon Eighteen (@ximon18)
short_description: Docker Machine inventory source
requirements:
- L(Docker Machine,https://docs.docker.com/machine/)
extends_documentation_fragment:
- ansible.builtin.constructed
- community.library_inventory_filtering_v1.inventory_filter
description:
- Get inventory hosts from Docker Machine.
- Uses a YAML configuration file that ends with V(docker_machine.(yml|yaml\)).
- The plugin sets standard host variables C(ansible_host), C(ansible_port), C(ansible_user) and C(ansible_ssh_private_key).
- The plugin stores the Docker Machine 'env' output variables in C(dm_) prefixed host variables.
notes:
- The configuration file must be a YAML file whose filename ends with V(docker_machine.yml) or V(docker_machine.yaml). Other
filenames will not be accepted.
options:
plugin:
description: Token that ensures this is a source file for the C(docker_machine) plugin.
required: true
choices: ['docker_machine', 'community.docker.docker_machine']
daemon_env:
description:
- Get inventory hosts from Docker Machine.
- Uses a YAML configuration file that ends with docker_machine.(yml|yaml).
- The plugin sets standard host variables C(ansible_host), C(ansible_port), C(ansible_user) and C(ansible_ssh_private_key).
- The plugin stores the Docker Machine 'env' output variables in I(dm_) prefixed host variables.
- Whether docker daemon connection environment variables should be fetched, and how to behave if they cannot be fetched.
- With V(require) and V(require-silently), fetch them and skip any host for which they cannot be fetched. A warning
will be issued for any skipped host if the choice is V(require).
- With V(optional) and V(optional-silently), fetch them and do not skip hosts for which they cannot be fetched. A warning
will be issued for hosts where they cannot be fetched if the choice is V(optional).
- With V(skip), do not attempt to fetch the docker daemon connection environment variables.
- If fetched successfully, the variables will be prefixed with C(dm_) and stored as host variables.
type: str
choices:
- require
- require-silently
- optional
- optional-silently
- skip
default: require
running_required:
description:
- When V(true), hosts which Docker Machine indicates are in a state other than C(running) will be skipped.
type: bool
default: true
verbose_output:
description:
- When V(true), include all available nodes metadata (for example C(Image), C(Region), C(Size)) as a JSON object named
C(docker_machine_node_attributes).
type: bool
default: true
filters:
version_added: 3.5.0
"""
options:
plugin:
description: token that ensures this is a source file for the C(docker_machine) plugin.
required: yes
choices: ['docker_machine', 'community.docker.docker_machine']
daemon_env:
description:
- Whether docker daemon connection environment variables should be fetched, and how to behave if they cannot be fetched.
- With C(require) and C(require-silently), fetch them and skip any host for which they cannot be fetched.
A warning will be issued for any skipped host if the choice is C(require).
- With C(optional) and C(optional-silently), fetch them and do not skip hosts for which they cannot be fetched.
A warning will be issued for hosts where they cannot be fetched if the choice is C(optional).
- With C(skip), do not attempt to fetch the docker daemon connection environment variables.
- If fetched successfully, the variables will be prefixed with I(dm_) and stored as host variables.
type: str
choices:
- require
- require-silently
- optional
- optional-silently
- skip
default: require
running_required:
description:
- When C(true), hosts which Docker Machine indicates are in a state other than C(running) will be skipped.
type: bool
default: yes
verbose_output:
description:
- When C(true), include all available nodes metadata (for exmaple C(Image), C(Region), C(Size)) as a JSON object
named C(docker_machine_node_attributes).
type: bool
default: yes
'''
EXAMPLES = '''
EXAMPLES = """
---
# Minimal example
plugin: community.docker.docker_machine
---
# Example using constructed features to create a group per Docker Machine driver
# (https://docs.docker.com/machine/drivers/), for example:
# $ docker-machine create --driver digitalocean ... mymachine
@ -70,68 +77,95 @@ plugin: community.docker.docker_machine
# ]
# ...
# }
strict: no
plugin: community.docker.docker_machine
strict: false
keyed_groups:
- separator: ''
key: docker_machine_node_attributes.DriverName
---
# Example grouping hosts by Digital Machine tag
strict: no
plugin: community.docker.docker_machine
strict: false
keyed_groups:
- prefix: tag
key: 'dm_tags'
---
# Example using compose to override the default SSH behaviour of asking the user to accept the remote host key
plugin: community.docker.docker_machine
compose:
ansible_ssh_common_args: '"-o StrictHostKeyChecking=accept-new"'
'''
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
from ansible.utils.display import Display
"""
import json
import re
import subprocess
import typing as t
from ansible.errors import AnsibleError
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable, Constructable
from ansible.utils.display import Display
from ansible_collections.community.library_inventory_filtering_v1.plugins.plugin_utils.inventory_filter import (
filter_host,
parse_filters,
)
from ansible_collections.community.docker.plugins.plugin_utils._unsafe import (
make_unsafe,
)
if t.TYPE_CHECKING:
from ansible.inventory.data import InventoryData
from ansible.parsing.dataloader import DataLoader
DaemonEnv = t.Literal[
"require", "require-silently", "optional", "optional-silently", "skip"
]
display = Display()
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
''' Host inventory parser for ansible using Docker machine as source. '''
"""Host inventory parser for ansible using Docker machine as source."""
NAME = 'community.docker.docker_machine'
NAME = "community.docker.docker_machine"
DOCKER_MACHINE_PATH = None
docker_machine_path: str | None = None
def _run_command(self, args):
if not self.DOCKER_MACHINE_PATH:
def _run_command(self, args: list[str]) -> str:
if not self.docker_machine_path:
try:
self.DOCKER_MACHINE_PATH = get_bin_path('docker-machine')
self.docker_machine_path = get_bin_path("docker-machine")
except ValueError as e:
raise AnsibleError(to_native(e))
raise AnsibleError(to_text(e)) from e
command = [self.DOCKER_MACHINE_PATH]
command = [self.docker_machine_path]
command.extend(args)
display.debug('Executing command {0}'.format(command))
display.debug(f"Executing command {command}")
try:
result = subprocess.check_output(command)
except subprocess.CalledProcessError as e:
display.warning('Exception {0} caught while executing command {1}, this was the original exception: {2}'.format(type(e).__name__, command, e))
display.warning(
f"Exception {type(e).__name__} caught while executing command {command}, this was the original exception: {e}"
)
raise e
return to_text(result).strip()
def _get_docker_daemon_variables(self, machine_name):
'''
def _get_docker_daemon_variables(self, machine_name: str) -> list[tuple[str, str]]:
"""
Capture settings from Docker Machine that would be needed to connect to the remote Docker daemon installed on
the Docker Machine remote host. Note: passing '--shell=sh' is a workaround for 'Error: Unknown shell'.
'''
"""
try:
env_lines = self._run_command(['env', '--shell=sh', machine_name]).splitlines()
env_lines = self._run_command(
["env", "--shell=sh", machine_name]
).splitlines()
except subprocess.CalledProcessError:
# This can happen when the machine is created but provisioning is incomplete
return []
@ -146,22 +180,22 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
# capture any of the DOCKER_xxx variables that were output and create Ansible host vars
# with the same name and value but with a dm_ name prefix.
vars = []
env_vars = []
for line in env_lines:
match = re.search('(DOCKER_[^=]+)="([^"]+)"', line)
if match:
env_var_name = match.group(1)
env_var_value = match.group(2)
vars.append((env_var_name, env_var_value))
env_vars.append((env_var_name, env_var_value))
return vars
return env_vars
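# Editor's sketch (not part of the diff): what the regex parsing above extracts
# from hypothetical `docker-machine env --shell=sh mymachine` output.
example_lines = [
    'export DOCKER_TLS_VERIFY="1"',
    'export DOCKER_HOST="tcp://203.0.113.10:2376"',
    'export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/mymachine"',
]
example_vars = []
for example_line in example_lines:
    example_match = re.search(r'(DOCKER_[^=]+)="([^"]+)"', example_line)
    if example_match:
        example_vars.append((example_match.group(1), example_match.group(2)))
print(example_vars)  # each pair later becomes a dm_-prefixed host variable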
def _get_machine_names(self):
# Filter out machines that are not in the Running state as we probably can't perform any useful actions
def _get_machine_names(self) -> list[str]:
# Filter out machines that are not in the Running state as we probably cannot perform any useful actions
# with them.
ls_command = ['ls', '-q']
if self.get_option('running_required'):
ls_command.extend(['--filter', 'state=Running'])
ls_command = ["ls", "-q"]
if self.get_option("running_required"):
ls_command.extend(["--filter", "state=Running"])
try:
ls_lines = self._run_command(ls_command)
@ -170,47 +204,62 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
return ls_lines.splitlines()
def _inspect_docker_machine_host(self, node):
def _inspect_docker_machine_host(self, node: str) -> t.Any | None:
try:
inspect_lines = self._run_command(['inspect', self.node])
inspect_lines = self._run_command(["inspect", node])
except subprocess.CalledProcessError:
return None
return json.loads(inspect_lines)
def _ip_addr_docker_machine_host(self, node):
def _ip_addr_docker_machine_host(self, node: str) -> t.Any | None:
try:
ip_addr = self._run_command(['ip', self.node])
ip_addr = self._run_command(["ip", node])
except subprocess.CalledProcessError:
return None
return ip_addr
def _should_skip_host(self, machine_name, env_var_tuples, daemon_env):
def _should_skip_host(
self,
machine_name: str,
env_var_tuples: list[tuple[str, str]],
daemon_env: DaemonEnv,
) -> bool:
if not env_var_tuples:
warning_prefix = 'Unable to fetch Docker daemon env vars from Docker Machine for host {0}'.format(machine_name)
if daemon_env in ('require', 'require-silently'):
if daemon_env == 'require':
display.warning('{0}: host will be skipped'.format(warning_prefix))
warning_prefix = f"Unable to fetch Docker daemon env vars from Docker Machine for host {machine_name}"
if daemon_env in ("require", "require-silently"):
if daemon_env == "require":
display.warning(f"{warning_prefix}: host will be skipped")
return True
else: # 'optional', 'optional-silently'
if daemon_env == 'optional':
display.warning('{0}: host will lack dm_DOCKER_xxx variables'.format(warning_prefix))
if daemon_env == "optional":
display.warning(
f"{warning_prefix}: host will lack dm_DOCKER_xxx variables"
)
# daemon_env is 'optional-silently'
return False
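# Editor's sketch (not part of the diff): the skip decision above, restated as
# a hypothetical standalone helper with two sample calls.
def example_should_skip(env_var_tuples, daemon_env):
    if env_var_tuples:
        return False
    return daemon_env in ("require", "require-silently")

print(example_should_skip([], "require"))            # -> True (host skipped, warning)
print(example_should_skip([], "optional-silently"))  # -> False (host kept, silent)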
def _populate(self):
daemon_env = self.get_option('daemon_env')
def _populate(self) -> None:
if self.inventory is None:
raise AssertionError("Inventory must be there")
daemon_env: DaemonEnv = self.get_option("daemon_env")
filters = parse_filters(self.get_option("filters"))
try:
for self.node in self._get_machine_names():
self.node_attrs = self._inspect_docker_machine_host(self.node)
if not self.node_attrs:
for node in self._get_machine_names():
node_attrs = self._inspect_docker_machine_host(node)
if not node_attrs:
continue
machine_name = self.node_attrs['Driver']['MachineName']
unsafe_node_attrs = make_unsafe(node_attrs)
machine_name = unsafe_node_attrs["Driver"]["MachineName"]
if not filter_host(self, machine_name, unsafe_node_attrs, filters):
continue
# query `docker-machine env` to obtain remote Docker daemon connection settings in the form of commands
# that could be used to set environment variables to influence a local Docker client:
if daemon_env == 'skip':
if daemon_env == "skip":
env_var_tuples = []
else:
env_var_tuples = self._get_docker_daemon_variables(machine_name)
@ -223,52 +272,90 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
# check for valid ip address from inspect output, else explicitly use ip command to find host ip address
# this works around an issue seen with Google Compute Platform where the IP address was not available
via the 'inspect' subcommand but was via the 'ip' subcommand.
if self.node_attrs['Driver']['IPAddress']:
ip_addr = self.node_attrs['Driver']['IPAddress']
if unsafe_node_attrs["Driver"]["IPAddress"]:
ip_addr = unsafe_node_attrs["Driver"]["IPAddress"]
else:
ip_addr = self._ip_addr_docker_machine_host(self.node)
ip_addr = self._ip_addr_docker_machine_host(node)
# set standard Ansible remote host connection settings to details captured from `docker-machine`
# see: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
self.inventory.set_variable(machine_name, 'ansible_host', ip_addr)
self.inventory.set_variable(machine_name, 'ansible_port', self.node_attrs['Driver']['SSHPort'])
self.inventory.set_variable(machine_name, 'ansible_user', self.node_attrs['Driver']['SSHUser'])
self.inventory.set_variable(machine_name, 'ansible_ssh_private_key_file', self.node_attrs['Driver']['SSHKeyPath'])
self.inventory.set_variable(
machine_name, "ansible_host", make_unsafe(ip_addr)
)
self.inventory.set_variable(
machine_name, "ansible_port", unsafe_node_attrs["Driver"]["SSHPort"]
)
self.inventory.set_variable(
machine_name, "ansible_user", unsafe_node_attrs["Driver"]["SSHUser"]
)
self.inventory.set_variable(
machine_name,
"ansible_ssh_private_key_file",
unsafe_node_attrs["Driver"]["SSHKeyPath"],
)
# set variables based on Docker Machine tags
tags = self.node_attrs['Driver'].get('Tags') or ''
self.inventory.set_variable(machine_name, 'dm_tags', tags)
tags = unsafe_node_attrs["Driver"].get("Tags") or ""
self.inventory.set_variable(machine_name, "dm_tags", make_unsafe(tags))
# set variables based on Docker Machine env variables
for kv in env_var_tuples:
self.inventory.set_variable(machine_name, 'dm_{0}'.format(kv[0]), kv[1])
self.inventory.set_variable(
machine_name, f"dm_{kv[0]}", make_unsafe(kv[1])
)
if self.get_option('verbose_output'):
self.inventory.set_variable(machine_name, 'docker_machine_node_attributes', self.node_attrs)
if self.get_option("verbose_output"):
self.inventory.set_variable(
machine_name,
"docker_machine_node_attributes",
unsafe_node_attrs,
)
# Use constructed if applicable
strict = self.get_option('strict')
strict = self.get_option("strict")
# Composed variables
self._set_composite_vars(self.get_option('compose'), self.node_attrs, machine_name, strict=strict)
self._set_composite_vars(
self.get_option("compose"),
unsafe_node_attrs,
machine_name,
strict=strict,
)
# Complex groups based on jinja2 conditionals, hosts that meet the conditional are added to group
self._add_host_to_composed_groups(self.get_option('groups'), self.node_attrs, machine_name, strict=strict)
self._add_host_to_composed_groups(
self.get_option("groups"),
unsafe_node_attrs,
machine_name,
strict=strict,
)
# Create groups based on variable values and add the corresponding hosts to it
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), self.node_attrs, machine_name, strict=strict)
self._add_host_to_keyed_groups(
self.get_option("keyed_groups"),
unsafe_node_attrs,
machine_name,
strict=strict,
)
except Exception as e:
raise AnsibleError('Unable to fetch hosts from Docker Machine, this was the original exception: %s' %
to_native(e), orig_exc=e)
raise AnsibleError(
f"Unable to fetch hosts from Docker Machine, this was the original exception: {e}"
) from e
def verify_file(self, path):
def verify_file(self, path: str) -> bool:
"""Return the possibility of a file being consumable by this plugin."""
return (
super(InventoryModule, self).verify_file(path) and
path.endswith(('docker_machine.yaml', 'docker_machine.yml')))
return super().verify_file(path) and path.endswith(
("docker_machine.yaml", "docker_machine.yml")
)
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path, cache)
def parse(
self,
inventory: InventoryData,
loader: DataLoader,
path: str,
cache: bool = True,
) -> None:
super().parse(inventory, loader, path, cache)
self._read_config_data(path)
self._populate()


@ -1,136 +1,139 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Stefan Heitmueller <stefan.heitmueller@gmx.com>
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
from __future__ import annotations
__metaclass__ = type
DOCUMENTATION = '''
name: docker_swarm
author:
- Stefan Heitmüller (@morph027) <stefan.heitmueller@gmx.com>
short_description: Ansible dynamic inventory plugin for Docker swarm nodes.
requirements:
- python >= 2.7
- L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.10.0
extends_documentation_fragment:
- constructed
DOCUMENTATION = r"""
name: docker_swarm
author:
- Stefan Heitmüller (@morph027) <stefan.heitmueller@gmx.com>
short_description: Ansible dynamic inventory plugin for Docker swarm nodes
requirements:
- L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.10.0
extends_documentation_fragment:
- ansible.builtin.constructed
- community.library_inventory_filtering_v1.inventory_filter
description:
- Reads inventories from the Docker swarm API.
- Uses a YAML configuration file that ends with V(docker_swarm.(yml|yaml\)).
- 'The plugin returns the following groups of swarm nodes: C(all) - all hosts; C(workers) - all worker nodes; C(managers) -
all manager nodes; C(leader) - the swarm leader node; C(nonleaders) - all nodes except the swarm leader.'
notes:
- The configuration file must be a YAML file whose filename ends with V(docker_swarm.yml) or V(docker_swarm.yaml). Other
filenames will not be accepted.
options:
plugin:
description: The name of this plugin, it should always be set to V(community.docker.docker_swarm) for this plugin to recognize
it as its own.
type: str
required: true
choices: [docker_swarm, community.docker.docker_swarm]
docker_host:
description:
- Reads inventories from the Docker swarm API.
- Uses a YAML configuration file docker_swarm.[yml|yaml].
- "The plugin returns following groups of swarm nodes: I(all) - all hosts; I(workers) - all worker nodes;
I(managers) - all manager nodes; I(leader) - the swarm leader node;
I(nonleaders) - all nodes except the swarm leader."
options:
plugin:
description: The name of this plugin, it should always be set to C(community.docker.docker_swarm)
for this plugin to recognize it as it's own.
type: str
required: true
choices: [ docker_swarm, community.docker.docker_swarm ]
docker_host:
description:
- Socket of a Docker swarm manager node (C(tcp), C(unix)).
- "Use C(unix://var/run/docker.sock) to connect via local socket."
type: str
required: true
aliases: [ docker_url ]
verbose_output:
description: Toggle to (not) include all available nodes metadata (for example C(Platform), C(Architecture), C(OS),
C(EngineVersion))
type: bool
default: yes
tls:
description: Connect using TLS without verifying the authenticity of the Docker host server.
type: bool
default: no
validate_certs:
description: Toggle if connecting using TLS with or without verifying the authenticity of the Docker
host server.
type: bool
default: no
aliases: [ tls_verify ]
client_key:
description: Path to the client's TLS key file.
type: path
aliases: [ tls_client_key, key_path ]
ca_cert:
description: Use a CA certificate when performing server verification by providing the path to a CA
certificate file.
type: path
aliases: [ tls_ca_cert, cacert_path ]
client_cert:
description: Path to the client's TLS certificate file.
type: path
aliases: [ tls_client_cert, cert_path ]
tls_hostname:
description: When verifying the authenticity of the Docker host server, provide the expected name of
the server.
type: str
ssl_version:
description: Provide a valid SSL version number. Default value determined by ssl.py module.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by docker-py.
type: str
aliases: [ docker_api_version ]
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TIMEOUT)
will be used instead. If the environment variable is not set, the default value will be used.
type: int
default: 60
aliases: [ time_out ]
use_ssh_client:
description:
- For SSH transports, use the C(ssh) CLI tool instead of paramiko.
- Requires Docker SDK for Python 4.4.0 or newer.
type: bool
default: no
version_added: 1.5.0
include_host_uri:
description: Toggle to return the additional attribute C(ansible_host_uri) which contains the URI of the
swarm leader in the format C(tcp://172.16.0.1:2376). This value may be used without additional
modification as the value of option I(docker_host) in Docker Swarm modules when connecting via API.
The port always defaults to C(2376).
type: bool
default: no
include_host_uri_port:
description: Override the detected port number included in I(ansible_host_uri)
type: int
'''
- Socket of a Docker swarm manager node (C(tcp), C(unix)).
- Use V(unix:///var/run/docker.sock) to connect through a local socket.
type: str
required: true
aliases: [docker_url]
verbose_output:
description: Toggle to (not) include all available nodes metadata (for example C(Platform), C(Architecture), C(OS), C(EngineVersion)).
type: bool
default: true
tls:
description: Connect using TLS without verifying the authenticity of the Docker host server.
type: bool
default: false
validate_certs:
description: Toggle if connecting using TLS with or without verifying the authenticity of the Docker host server.
type: bool
default: false
aliases: [tls_verify]
client_key:
description: Path to the client's TLS key file.
type: path
aliases: [tls_client_key, key_path]
ca_path:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- This option was called O(ca_cert) and got renamed to O(ca_path) in community.docker 3.6.0. The old name has been added
as an alias and can still be used.
type: path
aliases: [ca_cert, tls_ca_cert, cacert_path]
client_cert:
description: Path to the client's TLS certificate file.
type: path
aliases: [tls_client_cert, cert_path]
tls_hostname:
description: When verifying the authenticity of the Docker host server, provide the expected name of the server.
type: str
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by Docker SDK for Python.
type: str
aliases: [docker_api_version]
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable E(DOCKER_TIMEOUT) will be used instead.
If the environment variable is not set, the default value will be used.
type: int
default: 60
aliases: [time_out]
use_ssh_client:
description:
- For SSH transports, use the C(ssh) CLI tool instead of paramiko.
- Requires Docker SDK for Python 4.4.0 or newer.
type: bool
default: false
version_added: 1.5.0
include_host_uri:
description: Toggle to return the additional attribute C(ansible_host_uri) which contains the URI of the swarm leader
in the format V(tcp://172.16.0.1:2376). This value may be used without additional modification as the value of option
O(docker_host) in Docker Swarm modules when connecting through the API. The port always defaults to V(2376).
type: bool
default: false
include_host_uri_port:
description: Override the detected port number included in C(ansible_host_uri).
type: int
filters:
version_added: 3.5.0
"""
EXAMPLES = '''
EXAMPLES = """
---
# Minimal example using local docker
plugin: community.docker.docker_swarm
docker_host: unix://var/run/docker.sock
docker_host: unix:///var/run/docker.sock
---
# Minimal example using remote docker
plugin: community.docker.docker_swarm
docker_host: tcp://my-docker-host:2375
---
# Example using remote docker with unverified TLS
plugin: community.docker.docker_swarm
docker_host: tcp://my-docker-host:2376
tls: yes
tls: true
---
# Example using remote docker with verified TLS and client certificate verification
plugin: community.docker.docker_swarm
docker_host: tcp://my-docker-host:2376
validate_certs: yes
ca_cert: /somewhere/ca.pem
validate_certs: true
ca_path: /somewhere/ca.pem
client_key: /somewhere/key.pem
client_cert: /somewhere/cert.pem
---
# Example using constructed features to create groups and set ansible_host
plugin: community.docker.docker_swarm
docker_host: tcp://my-docker-host:2375
strict: False
strict: false
keyed_groups:
# add for example x86_64 hosts to an arch_x86_64 group
- prefix: arch
@ -143,121 +146,195 @@ keyed_groups:
# hint: labels containing special characters will be converted to safe names
- key: 'Spec.Labels'
prefix: label
'''
"""
import typing as t
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_native
from ansible_collections.community.docker.plugins.module_utils.common import get_connect_params
from ansible_collections.community.docker.plugins.module_utils.util import update_tls_hostname
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible.parsing.utils.addresses import parse_address
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible_collections.community.library_inventory_filtering_v1.plugins.plugin_utils.inventory_filter import (
filter_host,
parse_filters,
)
from ansible_collections.community.docker.plugins.module_utils._common import (
get_connect_params,
)
from ansible_collections.community.docker.plugins.module_utils._util import (
update_tls_hostname,
)
from ansible_collections.community.docker.plugins.plugin_utils._unsafe import (
make_unsafe,
)
if t.TYPE_CHECKING:
from ansible.inventory.data import InventoryData
from ansible.parsing.dataloader import DataLoader
try:
import docker
HAS_DOCKER = True
except ImportError:
HAS_DOCKER = False
class InventoryModule(BaseInventoryPlugin, Constructable):
''' Host inventory parser for ansible using Docker swarm as source. '''
"""Host inventory parser for ansible using Docker swarm as source."""
NAME = 'community.docker.docker_swarm'
NAME = "community.docker.docker_swarm"
def _fail(self, msg):
def _fail(self, msg: str) -> t.NoReturn:
raise AnsibleError(msg)
def _populate(self):
raw_params = dict(
docker_host=self.get_option('docker_host'),
tls=self.get_option('tls'),
tls_verify=self.get_option('validate_certs'),
key_path=self.get_option('client_key'),
cacert_path=self.get_option('ca_cert'),
cert_path=self.get_option('client_cert'),
tls_hostname=self.get_option('tls_hostname'),
api_version=self.get_option('api_version'),
timeout=self.get_option('timeout'),
ssl_version=self.get_option('ssl_version'),
use_ssh_client=self.get_option('use_ssh_client'),
debug=None,
)
def _populate(self) -> None:
if self.inventory is None:
raise AssertionError("Inventory must be there")
raw_params = {
"docker_host": self.get_option("docker_host"),
"tls": self.get_option("tls"),
"tls_verify": self.get_option("validate_certs"),
"key_path": self.get_option("client_key"),
"cacert_path": self.get_option("ca_path"),
"cert_path": self.get_option("client_cert"),
"tls_hostname": self.get_option("tls_hostname"),
"api_version": self.get_option("api_version"),
"timeout": self.get_option("timeout"),
"use_ssh_client": self.get_option("use_ssh_client"),
"debug": None,
}
update_tls_hostname(raw_params)
connect_params = get_connect_params(raw_params, fail_function=self._fail)
self.client = docker.DockerClient(**connect_params)
self.inventory.add_group('all')
self.inventory.add_group('manager')
self.inventory.add_group('worker')
self.inventory.add_group('leader')
self.inventory.add_group('nonleaders')
client = docker.DockerClient(**connect_params)
self.inventory.add_group("all")
self.inventory.add_group("manager")
self.inventory.add_group("worker")
self.inventory.add_group("leader")
self.inventory.add_group("nonleaders")
if self.get_option('include_host_uri'):
if self.get_option('include_host_uri_port'):
host_uri_port = str(self.get_option('include_host_uri_port'))
elif self.get_option('tls') or self.get_option('validate_certs'):
host_uri_port = '2376'
filters = parse_filters(self.get_option("filters"))
if self.get_option("include_host_uri"):
if self.get_option("include_host_uri_port"):
host_uri_port = str(self.get_option("include_host_uri_port"))
elif self.get_option("tls") or self.get_option("validate_certs"):
host_uri_port = "2376"
else:
host_uri_port = '2375'
host_uri_port = "2375"
try:
self.nodes = self.client.nodes.list()
for self.node in self.nodes:
self.node_attrs = self.client.nodes.get(self.node.id).attrs
self.inventory.add_host(self.node_attrs['ID'])
self.inventory.add_host(self.node_attrs['ID'], group=self.node_attrs['Spec']['Role'])
self.inventory.set_variable(self.node_attrs['ID'], 'ansible_host',
self.node_attrs['Status']['Addr'])
if self.get_option('include_host_uri'):
self.inventory.set_variable(self.node_attrs['ID'], 'ansible_host_uri',
'tcp://' + self.node_attrs['Status']['Addr'] + ':' + host_uri_port)
if self.get_option('verbose_output'):
self.inventory.set_variable(self.node_attrs['ID'], 'docker_swarm_node_attributes', self.node_attrs)
if 'ManagerStatus' in self.node_attrs:
if self.node_attrs['ManagerStatus'].get('Leader'):
nodes = client.nodes.list()
for node in nodes:
node_attrs = client.nodes.get(node.id).attrs
unsafe_node_attrs = make_unsafe(node_attrs)
if not filter_host(
self, unsafe_node_attrs["ID"], unsafe_node_attrs, filters
):
continue
self.inventory.add_host(unsafe_node_attrs["ID"])
self.inventory.add_host(
unsafe_node_attrs["ID"], group=unsafe_node_attrs["Spec"]["Role"]
)
self.inventory.set_variable(
unsafe_node_attrs["ID"],
"ansible_host",
unsafe_node_attrs["Status"]["Addr"],
)
if self.get_option("include_host_uri"):
self.inventory.set_variable(
unsafe_node_attrs["ID"],
"ansible_host_uri",
make_unsafe(
"tcp://"
+ unsafe_node_attrs["Status"]["Addr"]
+ ":"
+ host_uri_port
),
)
if self.get_option("verbose_output"):
self.inventory.set_variable(
unsafe_node_attrs["ID"],
"docker_swarm_node_attributes",
unsafe_node_attrs,
)
if "ManagerStatus" in unsafe_node_attrs:
if unsafe_node_attrs["ManagerStatus"].get("Leader"):
# This is workaround of bug in Docker when in some cases the Leader IP is 0.0.0.0
# Check moby/moby#35437 for details
swarm_leader_ip = parse_address(self.node_attrs['ManagerStatus']['Addr'])[0] or \
self.node_attrs['Status']['Addr']
if self.get_option('include_host_uri'):
self.inventory.set_variable(self.node_attrs['ID'], 'ansible_host_uri',
'tcp://' + swarm_leader_ip + ':' + host_uri_port)
self.inventory.set_variable(self.node_attrs['ID'], 'ansible_host', swarm_leader_ip)
self.inventory.add_host(self.node_attrs['ID'], group='leader')
swarm_leader_ip = (
parse_address(node_attrs["ManagerStatus"]["Addr"])[0]
or unsafe_node_attrs["Status"]["Addr"]
)
if self.get_option("include_host_uri"):
self.inventory.set_variable(
unsafe_node_attrs["ID"],
"ansible_host_uri",
make_unsafe(
"tcp://" + swarm_leader_ip + ":" + host_uri_port
),
)
self.inventory.set_variable(
unsafe_node_attrs["ID"],
"ansible_host",
make_unsafe(swarm_leader_ip),
)
self.inventory.add_host(unsafe_node_attrs["ID"], group="leader")
else:
self.inventory.add_host(self.node_attrs['ID'], group='nonleaders')
self.inventory.add_host(
unsafe_node_attrs["ID"], group="nonleaders"
)
else:
self.inventory.add_host(self.node_attrs['ID'], group='nonleaders')
self.inventory.add_host(unsafe_node_attrs["ID"], group="nonleaders")
# Use constructed if applicable
strict = self.get_option('strict')
strict = self.get_option("strict")
# Composed variables
self._set_composite_vars(self.get_option('compose'),
self.node_attrs,
self.node_attrs['ID'],
strict=strict)
self._set_composite_vars(
self.get_option("compose"),
unsafe_node_attrs,
unsafe_node_attrs["ID"],
strict=strict,
)
# Complex groups based on jinja2 conditionals, hosts that meet the conditional are added to group
self._add_host_to_composed_groups(self.get_option('groups'),
self.node_attrs,
self.node_attrs['ID'],
strict=strict)
self._add_host_to_composed_groups(
self.get_option("groups"),
unsafe_node_attrs,
unsafe_node_attrs["ID"],
strict=strict,
)
# Create groups based on variable values and add the corresponding hosts to it
self._add_host_to_keyed_groups(self.get_option('keyed_groups'),
self.node_attrs,
self.node_attrs['ID'],
strict=strict)
self._add_host_to_keyed_groups(
self.get_option("keyed_groups"),
unsafe_node_attrs,
unsafe_node_attrs["ID"],
strict=strict,
)
except Exception as e:
raise AnsibleError('Unable to fetch hosts from Docker swarm API, this was the original exception: %s' %
to_native(e))
raise AnsibleError(
f"Unable to fetch hosts from Docker swarm API, this was the original exception: {e}"
) from e
def verify_file(self, path):
def verify_file(self, path: str) -> bool:
"""Return the possibly of a file being consumable by this plugin."""
return (
super(InventoryModule, self).verify_file(path) and
path.endswith(('docker_swarm.yaml', 'docker_swarm.yml')))
return super().verify_file(path) and path.endswith(
("docker_swarm.yaml", "docker_swarm.yml")
)
def parse(self, inventory, loader, path, cache=True):
def parse(
self,
inventory: InventoryData,
loader: DataLoader,
path: str,
cache: bool = True,
) -> None:
if not HAS_DOCKER:
raise AnsibleError('The Docker swarm dynamic inventory plugin requires the Docker SDK for Python: '
'https://github.com/docker/docker-py.')
super(InventoryModule, self).parse(inventory, loader, path, cache)
raise AnsibleError(
"The Docker swarm dynamic inventory plugin requires the Docker SDK for Python: "
"https://github.com/docker/docker-py."
)
super().parse(inventory, loader, path, cache)
self._read_config_data(path)
self._populate()


@ -0,0 +1,103 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import traceback
import typing as t
REQUESTS_IMPORT_ERROR: str | None # pylint: disable=invalid-name
try:
from requests import Session # noqa: F401, pylint: disable=unused-import
from requests.adapters import ( # noqa: F401, pylint: disable=unused-import
HTTPAdapter,
)
from requests.exceptions import ( # noqa: F401, pylint: disable=unused-import
HTTPError,
InvalidSchema,
)
except ImportError:
REQUESTS_IMPORT_ERROR = traceback.format_exc() # pylint: disable=invalid-name
class Session: # type: ignore
__attrs__: list[t.Never] = []
class HTTPAdapter: # type: ignore
__attrs__: list[t.Never] = []
class HTTPError(Exception): # type: ignore
pass
class InvalidSchema(Exception): # type: ignore
pass
else:
REQUESTS_IMPORT_ERROR = None # pylint: disable=invalid-name
URLLIB3_IMPORT_ERROR: str | None = None # pylint: disable=invalid-name
try:
from requests.packages import urllib3 # pylint: disable=unused-import
from requests.packages.urllib3 import ( # type: ignore # pylint: disable=unused-import # isort: skip
connection as urllib3_connection,
)
except ImportError:
try:
import urllib3 # pylint: disable=unused-import
from urllib3 import (
connection as urllib3_connection, # pylint: disable=unused-import
)
except ImportError:
URLLIB3_IMPORT_ERROR = traceback.format_exc() # pylint: disable=invalid-name
class _HTTPConnectionPool:
pass
class _HTTPConnection:
pass
class FakeURLLIB3:
def __init__(self) -> None:
self._collections = self
self.poolmanager = self
self.connection = self
self.connectionpool = self
self.RecentlyUsedContainer = object() # pylint: disable=invalid-name
self.PoolManager = object() # pylint: disable=invalid-name
self.match_hostname = object()
self.HTTPConnectionPool = ( # pylint: disable=invalid-name
_HTTPConnectionPool
)
class FakeURLLIB3Connection:
def __init__(self) -> None:
self.HTTPConnection = _HTTPConnection # pylint: disable=invalid-name
urllib3 = FakeURLLIB3()
urllib3_connection = FakeURLLIB3Connection()
def fail_on_missing_imports() -> None:
if REQUESTS_IMPORT_ERROR is not None:
from .errors import MissingRequirementException # pylint: disable=cyclic-import
raise MissingRequirementException(
"You have to install requests", "requests", REQUESTS_IMPORT_ERROR
)
if URLLIB3_IMPORT_ERROR is not None:
from .errors import MissingRequirementException # pylint: disable=cyclic-import
raise MissingRequirementException(
"You have to install urllib3", "urllib3", URLLIB3_IMPORT_ERROR
)
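# Editor's sketch (not part of the diff): the intended caller-side pattern, as
# assumed from the fallbacks above. Symbols import unconditionally; calling
# fail_on_missing_imports() turns a missing dependency into a clear error
# instead of a confusing AttributeError later on.
if __name__ == "__main__":
    try:
        fail_on_missing_imports()
    except Exception as exc:  # MissingRequirementException in the real error module
        raise SystemExit(f"Docker transport unavailable: {exc}")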

File diff suppressed because it is too large.


@ -0,0 +1,407 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import base64
import json
import logging
import typing as t
from . import errors
from .credentials.errors import CredentialsNotFound, StoreError
from .credentials.store import Store
from .utils import config
if t.TYPE_CHECKING:
from ansible_collections.community.docker.plugins.module_utils._api.api.client import (
APIClient,
)
INDEX_NAME = "docker.io"
INDEX_URL = f"https://index.{INDEX_NAME}/v1/"
TOKEN_USERNAME = "<token>"
log = logging.getLogger(__name__)
def resolve_repository_name(repo_name: str) -> tuple[str, str]:
if "://" in repo_name:
raise errors.InvalidRepository(
f"Repository name cannot contain a scheme ({repo_name})"
)
index_name, remote_name = split_repo_name(repo_name)
if index_name[0] == "-" or index_name[-1] == "-":
raise errors.InvalidRepository(
f"Invalid index name ({index_name}). Cannot begin or end with a hyphen."
)
return resolve_index_name(index_name), remote_name
def resolve_index_name(index_name: str) -> str:
index_name = convert_to_hostname(index_name)
if index_name == "index." + INDEX_NAME:
index_name = INDEX_NAME
return index_name
def get_config_header(client: APIClient, registry: str) -> bytes | None:
log.debug("Looking for auth config")
if not client._auth_configs or client._auth_configs.is_empty:
log.debug("No auth config in memory - loading from filesystem")
client._auth_configs = load_config(credstore_env=client.credstore_env)
authcfg = resolve_authconfig(
client._auth_configs, registry, credstore_env=client.credstore_env
)
# Do not fail here if no authentication exists for this
# specific registry as we can have a readonly pull. Just
# put the header if we can.
if authcfg:
log.debug("Found auth config")
# auth_config needs to be a dict in the format used by
        # auth.py: username, password, serveraddress, email
return encode_header(authcfg)
log.debug("No auth config found")
return None
def split_repo_name(repo_name: str) -> tuple[str, str]:
parts = repo_name.split("/", 1)
if len(parts) == 1 or (
"." not in parts[0] and ":" not in parts[0] and parts[0] != "localhost"
):
# This is a docker index repo (ex: username/foobar or ubuntu)
return INDEX_NAME, repo_name
return tuple(parts) # type: ignore
def get_credential_store(
authconfig: dict[str, t.Any] | AuthConfig, registry: str
) -> str | None:
if not isinstance(authconfig, AuthConfig):
authconfig = AuthConfig(authconfig)
return authconfig.get_credential_store(registry)
class AuthConfig(dict):
def __init__(
self, dct: dict[str, t.Any], credstore_env: dict[str, str] | None = None
    ) -> None:
if "auths" not in dct:
dct["auths"] = {}
self.update(dct)
self._credstore_env = credstore_env
self._stores: dict[str, Store] = {}
@classmethod
def parse_auth(
cls, entries: dict[str, dict[str, t.Any]], raise_on_error: bool = False
) -> dict[str, dict[str, t.Any]]:
"""
Parses authentication entries
Args:
entries: Dict of authentication entries.
raise_on_error: If set to true, an invalid format will raise
InvalidConfigFile
Returns:
Authentication registry.
"""
conf: dict[str, dict[str, t.Any]] = {}
for registry, entry in entries.items():
if not isinstance(entry, dict):
log.debug("Config entry for key %s is not auth config", registry) # type: ignore
# We sometimes fall back to parsing the whole config as if it
# was the auth config by itself, for legacy purposes. In that
# case, we fail silently and return an empty conf if any of the
# keys is not formatted properly.
if raise_on_error:
raise errors.InvalidConfigFile(
f"Invalid configuration for registry {registry}"
)
return {}
if "identitytoken" in entry:
log.debug("Found an IdentityToken entry for registry %s", registry)
conf[registry] = {"IdentityToken": entry["identitytoken"]}
continue # Other values are irrelevant if we have a token
if "auth" not in entry:
# Starting with engine v1.11 (API 1.23), an empty dictionary is
# a valid value in the auths config.
# https://github.com/docker/compose/issues/3265
log.debug(
"Auth data for %s is absent. Client might be using a credentials store instead.",
registry,
)
conf[registry] = {}
continue
username, password = decode_auth(entry["auth"])
log.debug(
"Found entry (registry=%s, username=%s)", repr(registry), repr(username)
)
conf[registry] = {
"username": username,
"password": password,
"email": entry.get("email"),
"serveraddress": registry,
}
return conf
@classmethod
def load_config(
cls,
config_path: str | None,
config_dict: dict[str, t.Any] | None,
credstore_env: dict[str, str] | None = None,
) -> t.Self:
"""
Loads authentication data from a Docker configuration file in the given
root directory or if config_path is passed use given path.
Lookup priority:
explicit config_path parameter > DOCKER_CONFIG environment
variable > ~/.docker/config.json > ~/.dockercfg
"""
if not config_dict:
config_file = config.find_config_file(config_path)
if not config_file:
return cls({}, credstore_env)
try:
with open(config_file, "rt", encoding="utf-8") as f:
config_dict = json.load(f)
except (IOError, KeyError, ValueError) as e:
# Likely missing new Docker config file or it is in an
# unknown format, continue to attempt to read old location
# and format.
log.debug(e)
return cls(_load_legacy_config(config_file), credstore_env)
res = {}
if config_dict.get("auths"):
log.debug("Found 'auths' section")
res.update(
{"auths": cls.parse_auth(config_dict.pop("auths"), raise_on_error=True)}
)
if config_dict.get("credsStore"):
log.debug("Found 'credsStore' section")
res.update({"credsStore": config_dict.pop("credsStore")})
if config_dict.get("credHelpers"):
log.debug("Found 'credHelpers' section")
res.update({"credHelpers": config_dict.pop("credHelpers")})
if res:
return cls(res, credstore_env)
log.debug(
"Could not find auth-related section ; attempting to interpret "
"as auth-only file"
)
return cls({"auths": cls.parse_auth(config_dict)}, credstore_env)
@property
def auths(self) -> dict[str, dict[str, t.Any]]:
return self.get("auths", {})
@property
def creds_store(self) -> str | None:
return self.get("credsStore", None)
@property
def cred_helpers(self) -> dict[str, t.Any]:
return self.get("credHelpers", {})
@property
def is_empty(self) -> bool:
return not self.auths and not self.creds_store and not self.cred_helpers
def resolve_authconfig(
self, registry: str | None = None
) -> dict[str, t.Any] | None:
"""
Returns the authentication data from the given auth configuration for a
specific registry. As with the Docker client, legacy entries in the
config with full URLs are stripped down to hostnames before checking
for a match. Returns None if no match was found.
"""
if self.creds_store or self.cred_helpers:
store_name = self.get_credential_store(registry)
if store_name is not None:
log.debug('Using credentials store "%s"', store_name)
cfg = self._resolve_authconfig_credstore(registry, store_name)
if cfg is not None:
return cfg
log.debug("No entry in credstore - fetching from auth dict")
# Default to the public index server
registry = resolve_index_name(registry) if registry else INDEX_NAME
log.debug("Looking for auth entry for %s", repr(registry))
if registry in self.auths:
log.debug("Found %s", repr(registry))
return self.auths[registry]
for key, conf in self.auths.items():
if resolve_index_name(key) == registry:
log.debug("Found %s", repr(key))
return conf
log.debug("No entry found")
return None
def _resolve_authconfig_credstore(
self, registry: str | None, credstore_name: str
) -> dict[str, t.Any] | None:
if not registry or registry == INDEX_NAME:
            # The ecosystem is a little inconsistent about index.docker.io
            # vs docker.io - in that case, it seems the full URL is necessary.
registry = INDEX_URL
log.debug("Looking for auth entry for %s", repr(registry))
store = self._get_store_instance(credstore_name)
try:
data = store.get(registry)
res = {
"ServerAddress": registry,
}
if data["Username"] == TOKEN_USERNAME:
res["IdentityToken"] = data["Secret"]
else:
res.update(
{
"Username": data["Username"],
"Password": data["Secret"],
}
)
return res
except CredentialsNotFound:
log.debug("No entry found")
return None
except StoreError as e:
raise errors.DockerException(f"Credentials store error: {e}") from e
def _get_store_instance(self, name: str) -> Store:
if name not in self._stores:
self._stores[name] = Store(name, environment=self._credstore_env)
return self._stores[name]
def get_credential_store(self, registry: str | None) -> str | None:
if not registry or registry == INDEX_NAME:
registry = INDEX_URL
return self.cred_helpers.get(registry) or self.creds_store
def get_all_credentials(self) -> dict[str, dict[str, t.Any] | None]:
auth_data: dict[str, dict[str, t.Any] | None] = self.auths.copy() # type: ignore
if self.creds_store:
# Retrieve all credentials from the default store
store = self._get_store_instance(self.creds_store)
for k in store.list():
auth_data[k] = self._resolve_authconfig_credstore(k, self.creds_store)
auth_data[convert_to_hostname(k)] = auth_data[k]
# credHelpers entries take priority over all others
for reg, store_name in self.cred_helpers.items():
auth_data[reg] = self._resolve_authconfig_credstore(reg, store_name)
auth_data[convert_to_hostname(reg)] = auth_data[reg]
return auth_data
def add_auth(self, reg: str, data: dict[str, t.Any]) -> None:
self["auths"][reg] = data
def resolve_authconfig(
authconfig: AuthConfig | dict[str, t.Any],
registry: str | None = None,
credstore_env: dict[str, str] | None = None,
) -> dict[str, t.Any] | None:
if not isinstance(authconfig, AuthConfig):
authconfig = AuthConfig(authconfig, credstore_env)
return authconfig.resolve_authconfig(registry)
def convert_to_hostname(url: str) -> str:
return url.replace("http://", "").replace("https://", "").split("/", 1)[0]
def decode_auth(auth: str | bytes) -> tuple[str, str]:
if isinstance(auth, str):
auth = auth.encode("ascii")
s = base64.b64decode(auth)
login, pwd = s.split(b":", 1)
return login.decode("utf8"), pwd.decode("utf8")
def encode_header(auth: dict[str, t.Any]) -> bytes:
auth_json = json.dumps(auth).encode("ascii")
return base64.urlsafe_b64encode(auth_json)
def parse_auth(
entries: dict[str, dict[str, t.Any]], raise_on_error: bool = False
) -> dict[str, dict[str, t.Any]]:
"""
Parses authentication entries
Args:
entries: Dict of authentication entries.
raise_on_error: If set to true, an invalid format will raise
InvalidConfigFile
Returns:
Authentication registry.
"""
return AuthConfig.parse_auth(entries, raise_on_error)
def load_config(
config_path: str | None = None,
config_dict: dict[str, t.Any] | None = None,
credstore_env: dict[str, str] | None = None,
) -> AuthConfig:
return AuthConfig.load_config(config_path, config_dict, credstore_env)
def _load_legacy_config(config_file: str) -> dict[str, dict[str, t.Any]]:
log.debug("Attempting to parse legacy auth file format")
try:
data = []
with open(config_file, "rt", encoding="utf-8") as f:
for line in f.readlines():
data.append(line.strip().split(" = ")[1])
if len(data) < 2:
# Not enough data
raise errors.InvalidConfigFile("Invalid or empty configuration file!")
username, password = decode_auth(data[0])
return {
"auths": {
INDEX_NAME: {
"username": username,
"password": password,
"email": data[1],
"serveraddress": INDEX_URL,
}
}
}
except Exception as e: # pylint: disable=broad-exception-caught
log.debug(e)
log.debug("All parsing attempts failed - returning empty config")
return {}
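
A few behaviors of the helpers above, as an illustrative sketch (assuming the functions are imported from this module util):

import base64

# decode_auth() reverses the base64 "user:password" encoding used in
# config.json "auth" entries.
blob = base64.b64encode(b"alice:s3cret").decode("ascii")
assert decode_auth(blob) == ("alice", "s3cret")

# Names without a registry host resolve to the public index; names whose
# first component contains "." or ":" (or is "localhost") keep their registry.
assert resolve_repository_name("ubuntu") == ("docker.io", "ubuntu")
assert resolve_repository_name("registry.example.com:5000/app") == (
    "registry.example.com:5000",
    "app",
)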


@@ -0,0 +1,41 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import sys
MINIMUM_DOCKER_API_VERSION = "1.21"
DEFAULT_TIMEOUT_SECONDS = 60
STREAM_HEADER_SIZE_BYTES = 8
CONTAINER_LIMITS_KEYS = ["memory", "memswap", "cpushares", "cpusetcpus"]
DEFAULT_HTTP_HOST = "127.0.0.1"
DEFAULT_UNIX_SOCKET = "http+unix:///var/run/docker.sock"
DEFAULT_NPIPE = "npipe:////./pipe/docker_engine"
BYTE_UNITS = {"b": 1, "k": 1024, "m": 1024 * 1024, "g": 1024 * 1024 * 1024}
IS_WINDOWS_PLATFORM = sys.platform == "win32"
WINDOWS_LONGPATH_PREFIX = "\\\\?\\"
DEFAULT_USER_AGENT = "ansible-community.docker"
DEFAULT_NUM_POOLS = 25
# The OpenSSH server default value for MaxSessions is 10 which means we can
# use up to 9, leaving the final session for the underlying SSH connection.
# For more details see: https://github.com/docker/docker-py/issues/2246
DEFAULT_NUM_POOLS_SSH = 9
DEFAULT_MAX_POOL_SIZE = 10
DEFAULT_DATA_CHUNK_SIZE = 1024 * 2048
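
For illustration, BYTE_UNITS is the lookup table for converting human-readable size suffixes; a hypothetical helper (not part of the file above) could use it like this:

def to_bytes(value: str) -> int:
    # Interpret a trailing b/k/m/g suffix; fall back to plain bytes.
    unit = value[-1].lower()
    if unit in BYTE_UNITS:
        return int(value[:-1]) * BYTE_UNITS[unit]
    return int(value)

assert to_bytes("512m") == 512 * 1024 * 1024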


@@ -0,0 +1,254 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2025 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import os
import typing as t
from .. import errors
from .config import (
METAFILE,
get_current_context_name,
get_meta_dir,
write_context_name_to_docker_config,
)
from .context import Context
if t.TYPE_CHECKING:
from ..tls import TLSConfig
def create_default_context() -> Context:
host = None
if os.environ.get("DOCKER_HOST"):
host = os.environ.get("DOCKER_HOST")
return Context(
"default", "swarm", host, description="Current DOCKER_HOST based configuration"
)
class ContextAPI:
"""Context API.
Contains methods for context management:
create, list, remove, get, inspect.
"""
    DEFAULT_CONTEXT: Context | None = None
@classmethod
def get_default_context(cls) -> Context:
context = cls.DEFAULT_CONTEXT
if context is None:
context = create_default_context()
cls.DEFAULT_CONTEXT = context
return context
@classmethod
def create_context(
cls,
name: str,
orchestrator: str | None = None,
host: str | None = None,
tls_cfg: TLSConfig | None = None,
default_namespace: str | None = None,
skip_tls_verify: bool = False,
) -> Context:
"""Creates a new context.
Returns:
(Context): a Context object.
Raises:
:py:class:`docker.errors.MissingContextParameter`
If a context name is not provided.
:py:class:`docker.errors.ContextAlreadyExists`
If a context with the name already exists.
:py:class:`docker.errors.ContextException`
If name is default.
Example:
>>> from docker.context import ContextAPI
>>> ctx = ContextAPI.create_context(name='test')
>>> print(ctx.Metadata)
{
"Name": "test",
"Metadata": {},
"Endpoints": {
"docker": {
"Host": "unix:///var/run/docker.sock",
"SkipTLSVerify": false
}
}
}
"""
if not name:
raise errors.MissingContextParameter("name")
if name == "default":
raise errors.ContextException('"default" is a reserved context name')
ctx = Context.load_context(name)
if ctx:
raise errors.ContextAlreadyExists(name)
endpoint = "docker"
if orchestrator and orchestrator != "swarm":
endpoint = orchestrator
ctx = Context(name, orchestrator)
ctx.set_endpoint(
endpoint,
host,
tls_cfg,
skip_tls_verify=skip_tls_verify,
def_namespace=default_namespace,
)
ctx.save()
return ctx
@classmethod
def get_context(cls, name: str | None = None) -> Context | None:
"""Retrieves a context object.
Args:
name (str): The name of the context
Example:
>>> from docker.context import ContextAPI
>>> ctx = ContextAPI.get_context(name='test')
>>> print(ctx.Metadata)
{
"Name": "test",
"Metadata": {},
"Endpoints": {
"docker": {
"Host": "unix:///var/run/docker.sock",
"SkipTLSVerify": false
}
}
}
"""
if not name:
name = get_current_context_name()
if name == "default":
return cls.get_default_context()
return Context.load_context(name)
@classmethod
def contexts(cls) -> list[Context]:
"""Context list.
Returns:
            (list[Context]): List of context objects.
Raises:
:py:class:`docker.errors.APIError`
If something goes wrong.
"""
names = []
for dirname, dummy, fnames in os.walk(get_meta_dir()):
for filename in fnames:
if filename == METAFILE:
filepath = os.path.join(dirname, filename)
try:
with open(filepath, "rt", encoding="utf-8") as f:
data = json.load(f)
name = data["Name"]
if name == "default":
raise ValueError('"default" is a reserved context name')
names.append(name)
except Exception as e:
raise errors.ContextException(
f"Failed to load metafile {filepath}: {e}"
) from e
contexts = [cls.get_default_context()]
for name in names:
context = Context.load_context(name)
if not context:
raise errors.ContextException(f"Context {name} cannot be found")
contexts.append(context)
return contexts
@classmethod
def get_current_context(cls) -> Context | None:
"""Get current context.
Returns:
(Context): current context object.
"""
return cls.get_context()
@classmethod
def set_current_context(cls, name: str = "default") -> None:
ctx = cls.get_context(name)
if not ctx:
raise errors.ContextNotFound(name)
err = write_context_name_to_docker_config(name)
if err:
raise errors.ContextException(f"Failed to set current context: {err}")
@classmethod
def remove_context(cls, name: str) -> None:
"""Remove a context. Similar to the ``docker context rm`` command.
Args:
name (str): The name of the context
Raises:
:py:class:`docker.errors.MissingContextParameter`
If a context name is not provided.
:py:class:`docker.errors.ContextNotFound`
If a context with the name does not exist.
:py:class:`docker.errors.ContextException`
If name is default.
Example:
>>> from docker.context import ContextAPI
>>> ContextAPI.remove_context(name='test')
>>>
"""
if not name:
raise errors.MissingContextParameter("name")
if name == "default":
raise errors.ContextException('context "default" cannot be removed')
ctx = Context.load_context(name)
if not ctx:
raise errors.ContextNotFound(name)
if name == get_current_context_name():
write_context_name_to_docker_config(None)
ctx.remove()
@classmethod
def inspect_context(cls, name: str = "default") -> dict[str, t.Any]:
"""Inspect a context. Similar to the ``docker context inspect`` command.
Args:
name (str): The name of the context
Raises:
:py:class:`docker.errors.MissingContextParameter`
If a context name is not provided.
:py:class:`docker.errors.ContextNotFound`
If a context with the name does not exist.
Example:
>>> from docker.context import ContextAPI
        >>> ContextAPI.inspect_context(name='test')
>>>
"""
if not name:
raise errors.MissingContextParameter("name")
        if name == "default":
            # calling a Context returns its dict representation (see Context.__call__)
            return cls.get_default_context()()
ctx = Context.load_context(name)
if not ctx:
raise errors.ContextNotFound(name)
return ctx()


@@ -0,0 +1,108 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2025 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import hashlib
import json
import os
from ..constants import DEFAULT_UNIX_SOCKET, IS_WINDOWS_PLATFORM
from ..utils.config import find_config_file, get_default_config_file
from ..utils.utils import parse_host
METAFILE = "meta.json"
def get_current_context_name_with_source() -> tuple[str, str]:
if os.environ.get("DOCKER_HOST"):
return "default", "DOCKER_HOST environment variable set"
if os.environ.get("DOCKER_CONTEXT"):
return os.environ["DOCKER_CONTEXT"], "DOCKER_CONTEXT environment variable set"
docker_cfg_path = find_config_file()
if docker_cfg_path:
try:
with open(docker_cfg_path, "rt", encoding="utf-8") as f:
return (
json.load(f).get("currentContext", "default"),
f"configuration file {docker_cfg_path}",
)
except Exception: # pylint: disable=broad-exception-caught
pass
return "default", "fallback value"
def get_current_context_name() -> str:
return get_current_context_name_with_source()[0]
def write_context_name_to_docker_config(name: str | None = None) -> Exception | None:
if name == "default":
name = None
docker_cfg_path = find_config_file()
config = {}
if docker_cfg_path:
try:
with open(docker_cfg_path, "rt", encoding="utf-8") as f:
config = json.load(f)
except Exception as e: # pylint: disable=broad-exception-caught
return e
current_context = config.get("currentContext", None)
if current_context and not name:
del config["currentContext"]
elif name:
config["currentContext"] = name
else:
return None
if not docker_cfg_path:
docker_cfg_path = get_default_config_file()
try:
with open(docker_cfg_path, "wt", encoding="utf-8") as f:
json.dump(config, f, indent=4)
return None
except Exception as e: # pylint: disable=broad-exception-caught
return e
def get_context_id(name: str) -> str:
return hashlib.sha256(name.encode("utf-8")).hexdigest()
def get_context_dir() -> str:
docker_cfg_path = find_config_file() or get_default_config_file()
return os.path.join(os.path.dirname(docker_cfg_path), "contexts")
def get_meta_dir(name: str | None = None) -> str:
meta_dir = os.path.join(get_context_dir(), "meta")
if name:
return os.path.join(meta_dir, get_context_id(name))
return meta_dir
def get_meta_file(name: str) -> str:
return os.path.join(get_meta_dir(name), METAFILE)
def get_tls_dir(name: str | None = None, endpoint: str = "") -> str:
context_dir = get_context_dir()
if name:
return os.path.join(context_dir, "tls", get_context_id(name), endpoint)
return os.path.join(context_dir, "tls")
def get_context_host(path: str | None = None, tls: bool = False) -> str:
host = parse_host(path, IS_WINDOWS_PLATFORM, tls)
if host == DEFAULT_UNIX_SOCKET and host.startswith("http+"):
# remove http+ from default docker socket url
host = host[5:]
return host
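
Putting the path helpers together, a context's on-disk layout is keyed by the SHA-256 of its name; a small sketch:

import hashlib

# get_context_id() is simply the hex SHA-256 digest of the context name.
assert get_context_id("test") == hashlib.sha256(b"test").hexdigest()

# Resulting layout (relative to the Docker config directory, typically ~/.docker):
#   contexts/meta/<id>/meta.json    <- get_meta_file("test")
#   contexts/tls/<id>/<endpoint>/   <- get_tls_dir("test", "<endpoint>")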


@@ -0,0 +1,287 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2025 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import os
import typing as t
from shutil import copyfile, rmtree
from ..errors import ContextException
from ..tls import TLSConfig
from .config import (
get_context_host,
get_meta_dir,
get_meta_file,
get_tls_dir,
)
IN_MEMORY = "IN MEMORY"
class Context:
"""A context."""
def __init__(
self,
name: str,
orchestrator: str | None = None,
host: str | None = None,
endpoints: dict[str, dict[str, t.Any]] | None = None,
skip_tls_verify: bool = False,
tls: bool = False,
description: str | None = None,
) -> None:
if not name:
raise ValueError("Name not provided")
self.name = name
self.context_type = None
self.orchestrator = orchestrator
self.endpoints = {}
self.tls_cfg: dict[str, TLSConfig] = {}
self.meta_path = IN_MEMORY
self.tls_path = IN_MEMORY
self.description = description
if not endpoints:
# set default docker endpoint if no endpoint is set
default_endpoint = (
"docker"
if (not orchestrator or orchestrator == "swarm")
else orchestrator
)
self.endpoints = {
default_endpoint: {
"Host": get_context_host(host, skip_tls_verify or tls),
"SkipTLSVerify": skip_tls_verify,
}
}
return
# check docker endpoints
for k, v in endpoints.items():
if not isinstance(v, dict):
# unknown format
raise ContextException(
f"Unknown endpoint format for context {name}: {v}",
)
self.endpoints[k] = v
if k != "docker":
continue
self.endpoints[k]["Host"] = v.get(
"Host", get_context_host(host, skip_tls_verify or tls)
)
self.endpoints[k]["SkipTLSVerify"] = bool(
v.get("SkipTLSVerify", skip_tls_verify)
)
def set_endpoint(
self,
name: str = "docker",
host: str | None = None,
tls_cfg: TLSConfig | None = None,
skip_tls_verify: bool = False,
def_namespace: str | None = None,
) -> None:
self.endpoints[name] = {
"Host": get_context_host(host, not skip_tls_verify or tls_cfg is not None),
"SkipTLSVerify": skip_tls_verify,
}
if def_namespace:
self.endpoints[name]["DefaultNamespace"] = def_namespace
if tls_cfg:
self.tls_cfg[name] = tls_cfg
def inspect(self) -> dict[str, t.Any]:
return self()
@classmethod
def load_context(cls, name: str) -> t.Self | None:
meta = Context._load_meta(name)
if meta:
instance = cls(
meta["Name"],
orchestrator=meta["Metadata"].get("StackOrchestrator", None),
endpoints=meta.get("Endpoints", None),
description=meta["Metadata"].get("Description"),
)
instance.context_type = meta["Metadata"].get("Type", None)
instance._load_certs()
instance.meta_path = get_meta_dir(name)
return instance
return None
@classmethod
def _load_meta(cls, name: str) -> dict[str, t.Any] | None:
meta_file = get_meta_file(name)
if not os.path.isfile(meta_file):
return None
metadata: dict[str, t.Any] = {}
try:
with open(meta_file, "rt", encoding="utf-8") as f:
metadata = json.load(f)
except (OSError, KeyError, ValueError) as e:
# unknown format
raise RuntimeError(
f"Detected corrupted meta file for context {name} : {e}"
) from e
# for docker endpoints, set defaults for
# Host and SkipTLSVerify fields
for k, v in metadata["Endpoints"].items():
if k != "docker":
continue
metadata["Endpoints"][k]["Host"] = v.get(
"Host", get_context_host(None, False)
)
metadata["Endpoints"][k]["SkipTLSVerify"] = bool(
v.get("SkipTLSVerify", True)
)
return metadata
def _load_certs(self) -> None:
certs = {}
tls_dir = get_tls_dir(self.name)
for endpoint in self.endpoints:
if not os.path.isdir(os.path.join(tls_dir, endpoint)):
continue
ca_cert = None
cert = None
key = None
for filename in os.listdir(os.path.join(tls_dir, endpoint)):
if filename.startswith("ca"):
ca_cert = os.path.join(tls_dir, endpoint, filename)
elif filename.startswith("cert"):
cert = os.path.join(tls_dir, endpoint, filename)
elif filename.startswith("key"):
key = os.path.join(tls_dir, endpoint, filename)
if all([cert, key]) or ca_cert:
verify = None
if endpoint == "docker" and not self.endpoints["docker"].get(
"SkipTLSVerify", False
):
verify = True
certs[endpoint] = TLSConfig(
client_cert=(cert, key) if cert and key else None,
ca_cert=ca_cert,
verify=verify,
)
self.tls_cfg = certs
self.tls_path = tls_dir
def save(self) -> None:
meta_dir = get_meta_dir(self.name)
if not os.path.isdir(meta_dir):
os.makedirs(meta_dir)
with open(get_meta_file(self.name), "wt", encoding="utf-8") as f:
f.write(json.dumps(self.Metadata))
tls_dir = get_tls_dir(self.name)
for endpoint, tls in self.tls_cfg.items():
if not os.path.isdir(os.path.join(tls_dir, endpoint)):
os.makedirs(os.path.join(tls_dir, endpoint))
ca_file = tls.ca_cert
if ca_file:
copyfile(
ca_file, os.path.join(tls_dir, endpoint, os.path.basename(ca_file))
)
if tls.cert:
cert_file, key_file = tls.cert
copyfile(
cert_file,
os.path.join(tls_dir, endpoint, os.path.basename(cert_file)),
)
copyfile(
key_file,
os.path.join(tls_dir, endpoint, os.path.basename(key_file)),
)
self.meta_path = get_meta_dir(self.name)
self.tls_path = get_tls_dir(self.name)
def remove(self) -> None:
if os.path.isdir(self.meta_path):
rmtree(self.meta_path)
if os.path.isdir(self.tls_path):
rmtree(self.tls_path)
def __repr__(self) -> str:
return f"<{self.__class__.__name__}: '{self.name}'>"
def __str__(self) -> str:
return json.dumps(self.__call__(), indent=2)
def __call__(self) -> dict[str, t.Any]:
result = self.Metadata
result.update(self.TLSMaterial)
result.update(self.Storage)
return result
def is_docker_host(self) -> bool:
return self.context_type is None
@property
def Name(self) -> str: # pylint: disable=invalid-name
return self.name
@property
def Host(self) -> str | None: # pylint: disable=invalid-name
if not self.orchestrator or self.orchestrator == "swarm":
endpoint = self.endpoints.get("docker", None)
if endpoint:
return endpoint.get("Host", None) # type: ignore
return None
return self.endpoints[self.orchestrator].get("Host", None) # type: ignore
@property
def Orchestrator(self) -> str | None: # pylint: disable=invalid-name
return self.orchestrator
@property
def Metadata(self) -> dict[str, t.Any]: # pylint: disable=invalid-name
meta: dict[str, t.Any] = {}
if self.orchestrator:
meta = {"StackOrchestrator": self.orchestrator}
return {"Name": self.name, "Metadata": meta, "Endpoints": self.endpoints}
@property
def TLSConfig(self) -> TLSConfig | None: # pylint: disable=invalid-name
key = self.orchestrator
if not key or key == "swarm":
key = "docker"
if key in self.tls_cfg:
return self.tls_cfg[key]
return None
@property
def TLSMaterial(self) -> dict[str, t.Any]: # pylint: disable=invalid-name
certs: dict[str, t.Any] = {}
for endpoint, tls in self.tls_cfg.items():
paths = [tls.ca_cert, *tls.cert] if tls.cert else [tls.ca_cert]
certs[endpoint] = [
os.path.basename(path) if path else None for path in paths
]
return {"TLSMaterial": certs}
@property
def Storage(self) -> dict[str, t.Any]: # pylint: disable=invalid-name
return {"Storage": {"MetadataPath": self.meta_path, "TLSPath": self.tls_path}}


@@ -0,0 +1,18 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
PROGRAM_PREFIX = "docker-credential-"
DEFAULT_LINUX_STORE = "secretservice"
DEFAULT_OSX_STORE = "osxkeychain"
DEFAULT_WIN32_STORE = "wincred"


@@ -0,0 +1,39 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
if t.TYPE_CHECKING:
from subprocess import CalledProcessError
class StoreError(RuntimeError):
pass
class CredentialsNotFound(StoreError):
pass
class InitializationError(StoreError):
pass
def process_store_error(cpe: CalledProcessError, program: str) -> StoreError:
message = cpe.output.decode("utf-8")
if "credentials not found in native keychain" in message:
return CredentialsNotFound(f"No matching credentials in {program}")
    return StoreError(
        f'Credentials store {program} exited with "{message.strip()}".'
    )
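
A quick sketch of the mapping implemented above (the helper name is made up for illustration):

from subprocess import CalledProcessError

cpe = CalledProcessError(
    1, "docker-credential-example",
    output=b"credentials not found in native keychain",
)
err = process_store_error(cpe, "docker-credential-example")
assert isinstance(err, CredentialsNotFound)  # any other output -> StoreError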


@@ -0,0 +1,102 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import errno
import json
import subprocess
import typing as t
from . import constants, errors
from .utils import create_environment_dict, find_executable
class Store:
def __init__(self, program: str, environment: dict[str, str] | None = None) -> None:
"""Create a store object that acts as an interface to
perform the basic operations for storing, retrieving
and erasing credentials using `program`.
"""
self.program = constants.PROGRAM_PREFIX + program
self.exe = find_executable(self.program)
self.environment = environment
if self.exe is None:
raise errors.InitializationError(
f"{self.program} not installed or not available in PATH"
)
def get(self, server: str | bytes) -> dict[str, t.Any]:
"""Retrieve credentials for `server`. If no credentials are found,
a `StoreError` will be raised.
"""
if not isinstance(server, bytes):
server = server.encode("utf-8")
data = self._execute("get", server)
result = json.loads(data.decode("utf-8"))
# docker-credential-pass will return an object for inexistent servers
# whereas other helpers will exit with returncode != 0. For
# consistency, if no significant data is returned,
# raise CredentialsNotFound
if result["Username"] == "" and result["Secret"] == "":
raise errors.CredentialsNotFound(
f"No matching credentials in {self.program}"
)
return result
def store(self, server: str, username: str, secret: str) -> bytes:
"""Store credentials for `server`. Raises a `StoreError` if an error
occurs.
"""
data_input = json.dumps(
{"ServerURL": server, "Username": username, "Secret": secret}
).encode("utf-8")
return self._execute("store", data_input)
def erase(self, server: str | bytes) -> None:
"""Erase credentials for `server`. Raises a `StoreError` if an error
occurs.
"""
if not isinstance(server, bytes):
server = server.encode("utf-8")
self._execute("erase", server)
def list(self) -> t.Any:
"""List stored credentials. Requires v0.4.0+ of the helper."""
data = self._execute("list", None)
return json.loads(data.decode("utf-8"))
def _execute(self, subcmd: str, data_input: bytes | None) -> bytes:
if self.exe is None:
raise errors.StoreError(
f"{self.program} not installed or not available in PATH"
)
output = None
env = create_environment_dict(self.environment)
try:
output = subprocess.check_output(
[self.exe, subcmd],
input=data_input,
env=env,
)
except subprocess.CalledProcessError as e:
raise errors.process_store_error(e, self.program) from e
except OSError as e:
if e.errno == errno.ENOENT:
raise errors.StoreError(
f"{self.program} not installed or not available in PATH"
) from e
raise errors.StoreError(
f'Unexpected OS error "{e.strerror}", errno={e.errno}'
) from e
return output
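
A usage sketch, assuming a credential helper such as docker-credential-secretservice is installed and on PATH (Store prepends the docker-credential- prefix itself, and raises InitializationError if the executable is missing):

store = Store("secretservice")
store.store(
    server="https://index.docker.io/v1/", username="alice", secret="s3cret"
)
creds = store.get("https://index.docker.io/v1/")
# typically: {"ServerURL": "https://index.docker.io/v1/",
#             "Username": "alice", "Secret": "s3cret"}
store.erase("https://index.docker.io/v1/")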


@@ -0,0 +1,35 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import os
from shutil import which
def find_executable(executable: str, path: str | None = None) -> str | None:
"""
As distutils.spawn.find_executable, but on Windows, look up
every extension declared in PATHEXT instead of just `.exe`
"""
# shutil.which() already uses PATHEXT on Windows, so on
# Python 3 we can simply use shutil.which() in all cases.
# (https://github.com/docker/docker-py/commit/42789818bed5d86b487a030e2e60b02bf0cfa284)
return which(executable, path=path)
def create_environment_dict(overrides: dict[str, str] | None) -> dict[str, str]:
"""
Create and return a copy of os.environ with the specified overrides
"""
result = os.environ.copy()
result.update(overrides or {})
return result
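
For illustration, create_environment_dict() layers the overrides on a copy of os.environ without mutating it; this copy is what Store passes to the helper subprocess:

import os

env = create_environment_dict({"EXAMPLE_VAR": "1"})  # hypothetical override
assert env["EXAMPLE_VAR"] == "1"
assert env is not os.environ  # the original environment is untouched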


@@ -0,0 +1,245 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
from ansible.module_utils.common.text.converters import to_text
from ._import_helper import HTTPError as _HTTPError
if t.TYPE_CHECKING:
from requests import Response
class DockerException(Exception):
"""
A base class from which all other exceptions inherit.
If you want to catch all errors that the Docker SDK might raise,
catch this base exception.
"""
def create_api_error_from_http_exception(e: _HTTPError) -> t.NoReturn:
"""
Create a suitable APIError from requests.exceptions.HTTPError.
"""
response = e.response
try:
explanation = response.json()["message"]
except ValueError:
explanation = to_text((response.content or "").strip())
cls = APIError
if response.status_code == 404:
if explanation and (
"No such image" in str(explanation)
or "not found: does not exist or no pull access" in str(explanation)
or "repository does not exist" in str(explanation)
):
cls = ImageNotFound
else:
cls = NotFound
raise cls(e, response=response, explanation=explanation) from e
class APIError(_HTTPError, DockerException):
"""
An HTTP error from the API.
"""
def __init__(
self,
message: str | Exception,
response: Response | None = None,
explanation: str | None = None,
) -> None:
# requests 1.2 supports response as a keyword argument, but
# requests 1.1 does not
super().__init__(message)
self.response = response
self.explanation = explanation or ""
def __str__(self) -> str:
message = super().__str__()
if self.is_client_error():
message = f"{self.response.status_code} Client Error for {self.response.url}: {self.response.reason}"
elif self.is_server_error():
message = f"{self.response.status_code} Server Error for {self.response.url}: {self.response.reason}"
if self.explanation:
message = f'{message} ("{self.explanation}")'
return message
@property
def status_code(self) -> int | None:
if self.response is not None:
return self.response.status_code
return None
def is_error(self) -> bool:
return self.is_client_error() or self.is_server_error()
def is_client_error(self) -> bool:
if self.status_code is None:
return False
return 400 <= self.status_code < 500
def is_server_error(self) -> bool:
if self.status_code is None:
return False
return 500 <= self.status_code < 600
class NotFound(APIError):
pass
class ImageNotFound(NotFound):
pass
class InvalidVersion(DockerException):
pass
class InvalidRepository(DockerException):
pass
class InvalidConfigFile(DockerException):
pass
class InvalidArgument(DockerException):
pass
class DeprecatedMethod(DockerException):
pass
class TLSParameterError(DockerException):
def __init__(self, msg: str) -> None:
self.msg = msg
def __str__(self) -> str:
return self.msg + (
". TLS configurations should map the Docker CLI "
"client configurations. See "
"https://docs.docker.com/engine/articles/https/ "
"for API details."
)
class NullResource(DockerException, ValueError):
pass
class ContainerError(DockerException):
"""
Represents a container that has exited with a non-zero exit code.
"""
def __init__(
self,
container: str,
exit_status: int,
command: list[str],
image: str,
stderr: str | None,
    ) -> None:
self.container = container
self.exit_status = exit_status
self.command = command
self.image = image
self.stderr = stderr
err = f": {stderr}" if stderr is not None else ""
msg = f"Command '{command}' in image '{image}' returned non-zero exit status {exit_status}{err}"
super().__init__(msg)
class StreamParseError(RuntimeError):
def __init__(self, reason: Exception) -> None:
self.msg = reason
class BuildError(DockerException):
def __init__(self, reason: str, build_log: str) -> None:
super().__init__(reason)
self.msg = reason
self.build_log = build_log
class ImageLoadError(DockerException):
pass
def create_unexpected_kwargs_error(name: str, kwargs: dict[str, t.Any]) -> TypeError:
quoted_kwargs = [f"'{k}'" for k in sorted(kwargs)]
text = [f"{name}() "]
if len(quoted_kwargs) == 1:
text.append("got an unexpected keyword argument ")
else:
text.append("got unexpected keyword arguments ")
text.append(", ".join(quoted_kwargs))
return TypeError("".join(text))
class MissingContextParameter(DockerException):
def __init__(self, param: str) -> None:
self.param = param
def __str__(self) -> str:
return f"missing parameter: {self.param}"
class ContextAlreadyExists(DockerException):
def __init__(self, name: str) -> None:
self.name = name
def __str__(self) -> str:
return f"context {self.name} already exists"
class ContextException(DockerException):
def __init__(self, msg: str) -> None:
self.msg = msg
def __str__(self) -> str:
return self.msg
class ContextNotFound(DockerException):
def __init__(self, name: str) -> None:
self.name = name
def __str__(self) -> str:
return f"context '{self.name}' not found"
class MissingRequirementException(DockerException):
def __init__(
self, msg: str, requirement: str, import_exception: ImportError | str
) -> None:
self.msg = msg
self.requirement = requirement
self.import_exception = import_exception
def __str__(self) -> str:
return self.msg
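
Two small behaviors of the classes above, as an illustrative sketch:

# An APIError without a response cannot be classified by status code.
assert APIError("boom").status_code is None
assert not APIError("boom").is_error()

# create_unexpected_kwargs_error() sorts and quotes the offending names.
err = create_unexpected_kwargs_error("run", {"foo": 1, "bar": 2})
assert str(err) == "run() got unexpected keyword arguments 'bar', 'foo'"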


@@ -0,0 +1,108 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import os
import typing as t
from . import errors
from .transport.ssladapter import SSLHTTPAdapter
if t.TYPE_CHECKING:
from ansible_collections.community.docker.plugins.module_utils._api.api.client import (
APIClient,
)
class TLSConfig:
"""
TLS configuration.
Args:
client_cert (tuple of str): Path to client cert, path to client key.
ca_cert (str): Path to CA cert file.
verify (bool or str): This can be ``False`` or a path to a CA cert
file.
assert_hostname (bool): Verify the hostname of the server.
.. _`SSL version`:
https://docs.python.org/3.5/library/ssl.html#ssl.PROTOCOL_TLSv1
"""
cert: tuple[str, str] | None = None
ca_cert: str | None = None
verify: bool | None = None
def __init__(
self,
client_cert: tuple[str, str] | None = None,
ca_cert: str | None = None,
verify: bool | None = None,
assert_hostname: bool | None = None,
    ) -> None:
# Argument compatibility/mapping with
# https://docs.docker.com/engine/articles/https/
# This diverges from the Docker CLI in that users can specify 'tls'
# here, but also disable any public/default CA pool verification by
# leaving verify=False
self.assert_hostname = assert_hostname
# "client_cert" must have both or neither cert/key files. In
# either case, Alert the user when both are expected, but any are
# missing.
if client_cert:
try:
tls_cert, tls_key = client_cert
except ValueError:
raise errors.TLSParameterError(
"client_cert must be a tuple of (client certificate, key file)"
) from None
if not (tls_cert and tls_key) or (
not os.path.isfile(tls_cert) or not os.path.isfile(tls_key)
):
raise errors.TLSParameterError(
"Path to a certificate and key files must be provided"
" through the client_cert param"
)
self.cert = (tls_cert, tls_key)
# If verify is set, make sure the cert exists
self.verify = verify
self.ca_cert = ca_cert
if self.verify and self.ca_cert and not os.path.isfile(self.ca_cert):
raise errors.TLSParameterError(
"Invalid CA certificate provided for `ca_cert`."
)
def configure_client(self, client: APIClient) -> None:
"""
Configure a client with these TLS options.
"""
if self.verify and self.ca_cert:
client.verify = self.ca_cert
else:
client.verify = self.verify
if self.cert:
client.cert = self.cert
client.mount(
"https://",
SSLHTTPAdapter(
assert_hostname=self.assert_hostname,
),
)
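
A construction sketch (the PEM paths are placeholders and must exist on disk, otherwise TLSParameterError is raised):

tls_config = TLSConfig(
    client_cert=("/certs/cert.pem", "/certs/key.pem"),  # placeholder paths
    ca_cert="/certs/ca.pem",
    verify=True,
)
# tls_config.configure_client(api_client) then points client.verify at the
# CA bundle, sets client.cert, and mounts SSLHTTPAdapter for https:// URLs.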


@@ -0,0 +1,35 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
from .._import_helper import HTTPAdapter as _HTTPAdapter
class BaseHTTPAdapter(_HTTPAdapter):
def close(self) -> None:
# pylint finds our HTTPAdapter stub instead of requests.adapters.HTTPAdapter:
# pylint: disable-next=no-member
super().close()
if hasattr(self, "pools"):
self.pools.clear()
# Hotfix for requests 2.32.0 and 2.32.1: its commit
# https://github.com/psf/requests/commit/c0813a2d910ea6b4f8438b91d315b8d181302356
# changes requests.adapters.HTTPAdapter to no longer call get_connection() from
# send(), but instead call _get_connection().
def _get_connection(self, request, *args, **kwargs): # type: ignore
return self.get_connection(request.url, kwargs.get("proxies"))
# Fix for requests 2.32.2+:
# https://github.com/psf/requests/commit/c98e4d133ef29c46a9b68cd783087218a8075e05
def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None): # type: ignore
return self.get_connection(request.url, proxies)


@@ -0,0 +1,124 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
from queue import Empty
from .. import constants
from .._import_helper import HTTPAdapter, urllib3, urllib3_connection
from .basehttpadapter import BaseHTTPAdapter
from .npipesocket import NpipeSocket
if t.TYPE_CHECKING:
from collections.abc import Mapping
from requests import PreparedRequest
RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
class NpipeHTTPConnection(urllib3_connection.HTTPConnection):
def __init__(self, npipe_path: str, timeout: int | float = 60) -> None:
super().__init__("localhost", timeout=timeout)
self.npipe_path = npipe_path
self.timeout = timeout
def connect(self) -> None:
sock = NpipeSocket()
sock.settimeout(self.timeout)
sock.connect(self.npipe_path)
self.sock = sock
class NpipeHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
def __init__(
self, npipe_path: str, timeout: int | float = 60, maxsize: int = 10
) -> None:
super().__init__("localhost", timeout=timeout, maxsize=maxsize)
self.npipe_path = npipe_path
self.timeout = timeout
def _new_conn(self) -> NpipeHTTPConnection:
return NpipeHTTPConnection(self.npipe_path, self.timeout)
# When re-using connections, urllib3 tries to call select() on our
# NpipeSocket instance, causing a crash. To circumvent this, we override
# _get_conn, where that check happens.
def _get_conn(self, timeout: int | float) -> NpipeHTTPConnection:
conn = None
try:
conn = self.pool.get(block=self.block, timeout=timeout)
except AttributeError as exc: # self.pool is None
raise urllib3.exceptions.ClosedPoolError(self, "Pool is closed.") from exc
except Empty as exc:
if self.block:
raise urllib3.exceptions.EmptyPoolError(
self,
"Pool reached maximum size and no more connections are allowed.",
) from exc
# Oh well, we'll create a new connection then
return conn or self._new_conn()
class NpipeHTTPAdapter(BaseHTTPAdapter):
__attrs__ = HTTPAdapter.__attrs__ + [
"npipe_path",
"pools",
"timeout",
"max_pool_size",
]
def __init__(
self,
base_url: str,
timeout: int | float = 60,
pool_connections: int = constants.DEFAULT_NUM_POOLS,
max_pool_size: int = constants.DEFAULT_MAX_POOL_SIZE,
) -> None:
self.npipe_path = base_url.replace("npipe://", "")
self.timeout = timeout
self.max_pool_size = max_pool_size
self.pools = RecentlyUsedContainer(
pool_connections, dispose_func=lambda p: p.close()
)
super().__init__()
def get_connection(
self, url: str | bytes, proxies: Mapping[str, str] | None = None
) -> NpipeHTTPConnectionPool:
with self.pools.lock:
pool = self.pools.get(url)
if pool:
return pool
pool = NpipeHTTPConnectionPool(
self.npipe_path, self.timeout, maxsize=self.max_pool_size
)
self.pools[url] = pool
return pool
def request_url(
self, request: PreparedRequest, proxies: Mapping[str, str] | None
) -> str:
# The select_proxy utility in requests errors out when the provided URL
# does not have a hostname, like is the case when using a UNIX socket.
# Since proxies are an irrelevant notion in the case of UNIX sockets
# anyway, we simply return the path URL directly.
# See also: https://github.com/docker/docker-sdk-python/issues/811
return request.path_url


@@ -0,0 +1,278 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import functools
import io
import time
import traceback
import typing as t
PYWIN32_IMPORT_ERROR: str | None # pylint: disable=invalid-name
try:
import pywintypes
import win32api
import win32event
import win32file
import win32pipe
except ImportError:
PYWIN32_IMPORT_ERROR = traceback.format_exc() # pylint: disable=invalid-name
else:
PYWIN32_IMPORT_ERROR = None # pylint: disable=invalid-name
if t.TYPE_CHECKING:
from collections.abc import Buffer, Callable
_Self = t.TypeVar("_Self")
_P = t.ParamSpec("_P")
_R = t.TypeVar("_R")
ERROR_PIPE_BUSY = 0xE7
SECURITY_SQOS_PRESENT = 0x100000
SECURITY_ANONYMOUS = 0
MAXIMUM_RETRY_COUNT = 10
def check_closed(
f: Callable[t.Concatenate[_Self, _P], _R],
) -> Callable[t.Concatenate[_Self, _P], _R]:
@functools.wraps(f)
def wrapped(self: _Self, *args: _P.args, **kwargs: _P.kwargs) -> _R:
if self._closed: # type: ignore
raise RuntimeError("Can not reuse socket after connection was closed.")
return f(self, *args, **kwargs)
return wrapped
class NpipeSocket:
"""Partial implementation of the socket API over windows named pipes.
This implementation is only designed to be used as a client socket,
and server-specific methods (bind, listen, accept...) are not
implemented.
"""
def __init__(self, handle: t.Any | None = None) -> None:
self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT
self._handle = handle
self._address: str | None = None
self._closed = False
self.flags: int | None = None
def accept(self) -> t.NoReturn:
raise NotImplementedError()
def bind(self, address: t.Any) -> t.NoReturn:
raise NotImplementedError()
def close(self) -> None:
if self._handle is None:
raise ValueError("Handle not present")
self._handle.Close()
self._closed = True
@check_closed
def connect(self, address: str, retry_count: int = 0) -> None:
try:
handle = win32file.CreateFile(
address,
win32file.GENERIC_READ | win32file.GENERIC_WRITE,
0,
None,
win32file.OPEN_EXISTING,
(
SECURITY_ANONYMOUS
| SECURITY_SQOS_PRESENT
| win32file.FILE_FLAG_OVERLAPPED
),
0,
)
except win32pipe.error as e:
# See Remarks:
# https://msdn.microsoft.com/en-us/library/aa365800.aspx
if e.winerror == ERROR_PIPE_BUSY:
# Another program or thread has grabbed our pipe instance
# before we got to it. Wait for availability and attempt to
# connect again.
retry_count = retry_count + 1
if retry_count < MAXIMUM_RETRY_COUNT:
time.sleep(1)
return self.connect(address, retry_count)
raise e
self.flags = win32pipe.GetNamedPipeInfo(handle)[0] # type: ignore
self._handle = handle
self._address = address
@check_closed
def connect_ex(self, address: str) -> None:
self.connect(address)
@check_closed
def detach(self) -> t.Any:
self._closed = True
return self._handle
@check_closed
def dup(self) -> NpipeSocket:
return NpipeSocket(self._handle)
def getpeername(self) -> str | None:
return self._address
def getsockname(self) -> str | None:
return self._address
def getsockopt(
self, level: t.Any, optname: t.Any, buflen: t.Any = None
) -> t.NoReturn:
raise NotImplementedError()
def ioctl(self, control: t.Any, option: t.Any) -> t.NoReturn:
raise NotImplementedError()
def listen(self, backlog: t.Any) -> t.NoReturn:
raise NotImplementedError()
def makefile(self, mode: str, bufsize: int | None = None) -> t.IO[bytes]:
if mode.strip("b") != "r":
raise NotImplementedError()
rawio = NpipeFileIOBase(self)
if bufsize is None or bufsize <= 0:
bufsize = io.DEFAULT_BUFFER_SIZE
return io.BufferedReader(rawio, buffer_size=bufsize)
@check_closed
    def recv(self, bufsize: int, flags: int = 0) -> bytes:
if self._handle is None:
raise ValueError("Handle not present")
dummy_err, data = win32file.ReadFile(self._handle, bufsize)
return data
@check_closed
    def recvfrom(self, bufsize: int, flags: int = 0) -> tuple[bytes, str | None]:
data = self.recv(bufsize, flags)
return (data, self._address)
@check_closed
def recvfrom_into(
self, buf: Buffer, nbytes: int = 0, flags: int = 0
) -> tuple[int, str | None]:
return self.recv_into(buf, nbytes), self._address
@check_closed
def recv_into(self, buf: Buffer, nbytes: int = 0) -> int:
if self._handle is None:
raise ValueError("Handle not present")
readbuf = buf if isinstance(buf, memoryview) else memoryview(buf)
event = win32event.CreateEvent(None, True, True, None)
try:
overlapped = pywintypes.OVERLAPPED()
overlapped.hEvent = event
dummy_err, dummy_data = win32file.ReadFile( # type: ignore
self._handle, readbuf[:nbytes] if nbytes else readbuf, overlapped
)
wait_result = win32event.WaitForSingleObject(event, self._timeout)
if wait_result == win32event.WAIT_TIMEOUT:
win32file.CancelIo(self._handle)
raise TimeoutError
return win32file.GetOverlappedResult(self._handle, overlapped, 0)
finally:
win32api.CloseHandle(event)
@check_closed
def send(self, string: Buffer, flags: int = 0) -> int:
if self._handle is None:
raise ValueError("Handle not present")
event = win32event.CreateEvent(None, True, True, None)
try:
overlapped = pywintypes.OVERLAPPED()
overlapped.hEvent = event
win32file.WriteFile(self._handle, string, overlapped) # type: ignore
wait_result = win32event.WaitForSingleObject(event, self._timeout)
if wait_result == win32event.WAIT_TIMEOUT:
win32file.CancelIo(self._handle)
raise TimeoutError
return win32file.GetOverlappedResult(self._handle, overlapped, 0)
finally:
win32api.CloseHandle(event)
@check_closed
def sendall(self, string: Buffer, flags: int = 0) -> int:
return self.send(string, flags)
@check_closed
def sendto(self, string: Buffer, address: str) -> int:
self.connect(address)
return self.send(string)
def setblocking(self, flag: bool) -> None:
if flag:
return self.settimeout(None)
return self.settimeout(0)
def settimeout(self, value: int | float | None) -> None:
if value is None:
# Blocking mode
self._timeout = win32event.INFINITE
elif not isinstance(value, (float, int)) or value < 0:
raise ValueError("Timeout value out of range")
else:
# Timeout mode - Value converted to milliseconds
self._timeout = int(value * 1000)
def gettimeout(self) -> int | float | None:
return self._timeout
def setsockopt(self, level: t.Any, optname: t.Any, value: t.Any) -> t.NoReturn:
raise NotImplementedError()
@check_closed
def shutdown(self, how: t.Any) -> None:
return self.close()
class NpipeFileIOBase(io.RawIOBase):
def __init__(self, npipe_socket: NpipeSocket | None) -> None:
self.sock = npipe_socket
def close(self) -> None:
super().close()
self.sock = None
def fileno(self) -> int:
if self.sock is None:
raise RuntimeError("socket is closed")
# TODO: This is definitely a bug, NpipeSocket.fileno() does not exist!
return self.sock.fileno() # type: ignore
def isatty(self) -> bool:
return False
def readable(self) -> bool:
return True
def readinto(self, buf: Buffer) -> int:
if self.sock is None:
raise RuntimeError("socket is closed")
return self.sock.recv_into(buf)
def seekable(self) -> bool:
return False
def writable(self) -> bool:
return False


@@ -0,0 +1,312 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import logging
import os
import signal
import socket
import subprocess
import traceback
import typing as t
from queue import Empty
from urllib.parse import urlparse
from .. import constants
from .._import_helper import HTTPAdapter, urllib3, urllib3_connection
from .basehttpadapter import BaseHTTPAdapter
PARAMIKO_IMPORT_ERROR: str | None # pylint: disable=invalid-name
try:
import paramiko
except ImportError:
PARAMIKO_IMPORT_ERROR = traceback.format_exc() # pylint: disable=invalid-name
else:
PARAMIKO_IMPORT_ERROR = None # pylint: disable=invalid-name
if t.TYPE_CHECKING:
from collections.abc import Buffer, Mapping
RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
class SSHSocket(socket.socket):
def __init__(self, host: str) -> None:
super().__init__(socket.AF_INET, socket.SOCK_STREAM)
self.host = host
self.port = None
self.user = None
if ":" in self.host:
self.host, self.port = self.host.split(":")
if "@" in self.host:
self.user, self.host = self.host.split("@")
self.proc: subprocess.Popen | None = None
def connect(self, *args_: t.Any, **kwargs: t.Any) -> None:
args = ["ssh"]
if self.user:
args = args + ["-l", self.user]
if self.port:
args = args + ["-p", self.port]
args = args + ["--", self.host, "docker system dial-stdio"]
preexec_func = None
if not constants.IS_WINDOWS_PLATFORM:
def f() -> None:
signal.signal(signal.SIGINT, signal.SIG_IGN)
preexec_func = f
env = dict(os.environ)
# drop LD_LIBRARY_PATH and SSL_CERT_FILE
env.pop("LD_LIBRARY_PATH", None)
env.pop("SSL_CERT_FILE", None)
self.proc = subprocess.Popen( # pylint: disable=consider-using-with
args,
env=env,
stdout=subprocess.PIPE,
stdin=subprocess.PIPE,
preexec_fn=preexec_func,
)
def _write(self, data: Buffer) -> int:
if not self.proc:
raise RuntimeError(
"SSH subprocess not initiated. connect() must be called first."
)
assert self.proc.stdin is not None
if self.proc.stdin.closed:
raise RuntimeError(
"SSH subprocess not initiated. connect() must be called first after close()."
)
written = self.proc.stdin.write(data)
self.proc.stdin.flush()
return written
def sendall(self, data: Buffer, *args: t.Any, **kwargs: t.Any) -> None:
self._write(data)
def send(self, data: Buffer, *args: t.Any, **kwargs: t.Any) -> int:
return self._write(data)
def recv(self, n: int, *args: t.Any, **kwargs: t.Any) -> bytes:
if not self.proc:
raise RuntimeError(
"SSH subprocess not initiated. connect() must be called first."
)
assert self.proc.stdout is not None
return self.proc.stdout.read(n)
def makefile(self, mode: str, *args: t.Any, **kwargs: t.Any) -> t.IO: # type: ignore
if not self.proc:
self.connect()
assert self.proc is not None
assert self.proc.stdout is not None
self.proc.stdout.channel = self # type: ignore
return self.proc.stdout
def close(self) -> None:
if not self.proc:
return
assert self.proc.stdin is not None
if self.proc.stdin.closed:
return
self.proc.stdin.write(b"\n\n")
self.proc.stdin.flush()
self.proc.terminate()
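# Usage sketch (editor's illustration, not part of the vendored file): the class
# above tunnels the Docker API through "ssh ... docker system dial-stdio".
# "user@example.com:2222" is a hypothetical address; a reachable host with the
# docker CLI installed is assumed.
def _example_ssh_socket() -> None:
    sock = SSHSocket("user@example.com:2222")
    sock.connect()
    # The subprocess's stdin/stdout now behave like a raw connection to the
    # daemon's API socket, so a plain HTTP/1.0 request works:
    sock.sendall(b"GET /_ping HTTP/1.0\r\n\r\n")
    print(sock.recv(4096))  # expect an HTTP response whose body is b"OK"
    sock.close()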
class SSHConnection(urllib3_connection.HTTPConnection):
def __init__(
self,
*,
ssh_transport: paramiko.Transport | None = None,
timeout: int | float = 60,
host: str,
) -> None:
super().__init__("localhost", timeout=timeout)
self.ssh_transport = ssh_transport
self.timeout = timeout
self.ssh_host = host
self.sock: paramiko.Channel | SSHSocket | None = None
def connect(self) -> None:
if self.ssh_transport:
channel = self.ssh_transport.open_session()
channel.settimeout(self.timeout)
channel.exec_command("docker system dial-stdio")
self.sock = channel
else:
sock = SSHSocket(self.ssh_host)
sock.settimeout(self.timeout)
sock.connect()
self.sock = sock
class SSHConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
scheme = "ssh"
def __init__(
self,
*,
ssh_client: paramiko.SSHClient | None = None,
timeout: int | float = 60,
maxsize: int = 10,
host: str,
) -> None:
super().__init__("localhost", timeout=timeout, maxsize=maxsize)
self.ssh_transport: paramiko.Transport | None = None
self.timeout = timeout
if ssh_client:
self.ssh_transport = ssh_client.get_transport()
self.ssh_host = host
def _new_conn(self) -> SSHConnection:
return SSHConnection(
ssh_transport=self.ssh_transport,
timeout=self.timeout,
host=self.ssh_host,
)
# When re-using connections, urllib3 calls fileno() on our
# SSH channel instance, quickly overloading our fd limit. To avoid this,
# we override _get_conn
def _get_conn(self, timeout: int | float) -> SSHConnection:
conn = None
try:
conn = self.pool.get(block=self.block, timeout=timeout)
except AttributeError as exc: # self.pool is None
raise urllib3.exceptions.ClosedPoolError(self, "Pool is closed.") from exc
except Empty as exc:
if self.block:
raise urllib3.exceptions.EmptyPoolError(
self,
"Pool reached maximum size and no more connections are allowed.",
) from exc
# Oh well, we'll create a new connection then
return conn or self._new_conn()
class SSHHTTPAdapter(BaseHTTPAdapter):
__attrs__ = HTTPAdapter.__attrs__ + [
"pools",
"timeout",
"ssh_client",
"ssh_params",
"max_pool_size",
]
def __init__(
self,
base_url: str,
timeout: int | float = 60,
pool_connections: int = constants.DEFAULT_NUM_POOLS,
max_pool_size: int = constants.DEFAULT_MAX_POOL_SIZE,
shell_out: bool = False,
) -> None:
self.ssh_client: paramiko.SSHClient | None = None
if not shell_out:
self._create_paramiko_client(base_url)
self._connect()
self.ssh_host = base_url
if base_url.startswith("ssh://"):
self.ssh_host = base_url[len("ssh://") :]
self.timeout = timeout
self.max_pool_size = max_pool_size
self.pools = RecentlyUsedContainer(
pool_connections, dispose_func=lambda p: p.close()
)
super().__init__()
def _create_paramiko_client(self, base_url: str) -> None:
logging.getLogger("paramiko").setLevel(logging.WARNING)
self.ssh_client = paramiko.SSHClient()
base_url_p = urlparse(base_url)
assert base_url_p.hostname is not None
self.ssh_params: dict[str, t.Any] = {
"hostname": base_url_p.hostname,
"port": base_url_p.port,
"username": base_url_p.username,
}
ssh_config_file = os.path.expanduser("~/.ssh/config")
if os.path.exists(ssh_config_file):
conf = paramiko.SSHConfig()
with open(ssh_config_file, "rt", encoding="utf-8") as f:
conf.parse(f)
host_config = conf.lookup(base_url_p.hostname)
if "proxycommand" in host_config:
self.ssh_params["sock"] = paramiko.ProxyCommand(
host_config["proxycommand"]
)
if "hostname" in host_config:
self.ssh_params["hostname"] = host_config["hostname"]
if base_url_p.port is None and "port" in host_config:
self.ssh_params["port"] = host_config["port"]
if base_url_p.username is None and "user" in host_config:
self.ssh_params["username"] = host_config["user"]
if "identityfile" in host_config:
self.ssh_params["key_filename"] = host_config["identityfile"]
self.ssh_client.load_system_host_keys()
self.ssh_client.set_missing_host_key_policy(paramiko.RejectPolicy())
def _connect(self) -> None:
if self.ssh_client:
self.ssh_client.connect(**self.ssh_params)
def get_connection(
self, url: str | bytes, proxies: Mapping[str, str] | None = None
) -> SSHConnectionPool:
if not self.ssh_client:
return SSHConnectionPool(
ssh_client=self.ssh_client,
timeout=self.timeout,
maxsize=self.max_pool_size,
host=self.ssh_host,
)
with self.pools.lock:
pool = self.pools.get(url)
if pool:
return pool
# Connection is closed, try a reconnect
if self.ssh_client and not self.ssh_client.get_transport():
self._connect()
pool = SSHConnectionPool(
ssh_client=self.ssh_client,
timeout=self.timeout,
maxsize=self.max_pool_size,
host=self.ssh_host,
)
self.pools[url] = pool
return pool
def close(self) -> None:
super().close()
if self.ssh_client:
self.ssh_client.close()
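# Illustration (editor's sketch, not part of the vendored file):
# _create_paramiko_client() above merges ~/.ssh/config entries into ssh_params
# only for fields the URL left blank. The lookup itself behaves like this,
# assuming paramiko is installed ("docker-host" is a hypothetical alias):
def _example_ssh_config_lookup() -> None:
    import io as _io
    conf = paramiko.SSHConfig()
    conf.parse(_io.StringIO("Host docker-host\n  HostName 10.0.0.5\n  User deploy\n  Port 2222\n"))
    host_config = conf.lookup("docker-host")
    print(host_config["hostname"], host_config["user"], host_config["port"])
    # -> 10.0.0.5 deploy 2222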

View File

@@ -0,0 +1,72 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
from .._import_helper import HTTPAdapter, urllib3
from .basehttpadapter import BaseHTTPAdapter
# Resolves OpenSSL issues in some servers:
# https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/
# https://github.com/kennethreitz/requests/pull/799
PoolManager = urllib3.poolmanager.PoolManager
class SSLHTTPAdapter(BaseHTTPAdapter):
"""An HTTPS Transport Adapter that uses an arbitrary SSL version."""
__attrs__ = HTTPAdapter.__attrs__ + ["assert_hostname"]
def __init__(
self,
assert_hostname: bool | None = None,
**kwargs: t.Any,
) -> None:
self.assert_hostname = assert_hostname
super().__init__(**kwargs)
def init_poolmanager(
self, connections: int, maxsize: int, block: bool = False, **kwargs: t.Any
) -> None:
kwargs = {
"num_pools": connections,
"maxsize": maxsize,
"block": block,
}
if self.assert_hostname is not None:
kwargs["assert_hostname"] = self.assert_hostname
self.poolmanager = PoolManager(**kwargs)
def get_connection(self, *args: t.Any, **kwargs: t.Any) -> urllib3.ConnectionPool:
"""
Ensure assert_hostname is set correctly on our pool.
We already take care of a normal poolmanager via init_poolmanager,
but we still need to take care of the case when there is a proxy poolmanager.
Note that this method is no longer called for newer requests versions.
"""
# pylint finds our HTTPAdapter stub instead of requests.adapters.HTTPAdapter:
# pylint: disable-next=no-member
conn = super().get_connection(*args, **kwargs)
if (
self.assert_hostname is not None
and conn.assert_hostname != self.assert_hostname # type: ignore
):
conn.assert_hostname = self.assert_hostname # type: ignore
return conn
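# Usage sketch (editor's illustration, not part of the vendored file): like any
# requests transport adapter, SSLHTTPAdapter is mounted on a session by URL
# prefix; assert_hostname=False disables hostname checking on every pool the
# adapter creates.
def _example_mount_ssl_adapter() -> None:
    import requests
    session = requests.Session()
    session.mount("https://", SSLHTTPAdapter(assert_hostname=False))
    # All https:// requests made through this session now use the adapter.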

View File

@@ -0,0 +1,127 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import socket
import typing as t
from .. import constants
from .._import_helper import HTTPAdapter, urllib3, urllib3_connection
from .basehttpadapter import BaseHTTPAdapter
if t.TYPE_CHECKING:
from collections.abc import Mapping
from requests import PreparedRequest
from ..._socket_helper import SocketLike
RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
class UnixHTTPConnection(urllib3_connection.HTTPConnection):
def __init__(
self, base_url: str | bytes, unix_socket: str, timeout: int | float = 60
) -> None:
super().__init__("localhost", timeout=timeout)
self.base_url = base_url
self.unix_socket = unix_socket
self.timeout = timeout
self.disable_buffering = False
def connect(self) -> None:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(self.timeout)
sock.connect(self.unix_socket)
self.sock = sock
def putheader(self, header: str, *values: str) -> None:
super().putheader(header, *values)
if header == "Connection" and "Upgrade" in values:
self.disable_buffering = True
def response_class(self, sock: SocketLike, *args: t.Any, **kwargs: t.Any) -> t.Any:
# FIXME: We may need to disable buffering on Py3,
# but there's no clear way to do it at the moment. See:
# https://github.com/docker/docker-py/issues/1799
return super().response_class(sock, *args, **kwargs)
class UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
def __init__(
self,
base_url: str | bytes,
socket_path: str,
timeout: int | float = 60,
maxsize: int = 10,
) -> None:
super().__init__("localhost", timeout=timeout, maxsize=maxsize)
self.base_url = base_url
self.socket_path = socket_path
self.timeout = timeout
def _new_conn(self) -> UnixHTTPConnection:
return UnixHTTPConnection(self.base_url, self.socket_path, self.timeout)
class UnixHTTPAdapter(BaseHTTPAdapter):
__attrs__ = HTTPAdapter.__attrs__ + [
"pools",
"socket_path",
"timeout",
"max_pool_size",
]
def __init__(
self,
socket_url: str,
timeout: int | float = 60,
pool_connections: int = constants.DEFAULT_NUM_POOLS,
max_pool_size: int = constants.DEFAULT_MAX_POOL_SIZE,
) -> None:
socket_path = socket_url.replace("http+unix://", "")
if not socket_path.startswith("/"):
socket_path = "/" + socket_path
self.socket_path = socket_path
self.timeout = timeout
self.max_pool_size = max_pool_size
def f(p: t.Any) -> None:
p.close()
self.pools = RecentlyUsedContainer(pool_connections, dispose_func=f)
super().__init__()
def get_connection(
self, url: str | bytes, proxies: Mapping[str, str] | None = None
) -> UnixHTTPConnectionPool:
with self.pools.lock:
pool = self.pools.get(url)
if pool:
return pool
pool = UnixHTTPConnectionPool(
url, self.socket_path, self.timeout, maxsize=self.max_pool_size
)
self.pools[url] = pool
return pool
def request_url(self, request: PreparedRequest, proxies: Mapping[str, str]) -> str:
# The select_proxy utility in requests errors out when the provided URL
# does not have a hostname, like is the case when using a UNIX socket.
# Since proxies are an irrelevant notion in the case of UNIX sockets
# anyway, we simply return the path URL directly.
# See also: https://github.com/docker/docker-py/issues/811
return request.path_url
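# Usage sketch (editor's illustration, not part of the vendored file): mounting
# the adapter lets requests speak HTTP over a UNIX socket. The path below is the
# conventional daemon socket; a running daemon is assumed.
def _example_unix_adapter() -> None:
    import requests
    session = requests.Session()
    session.mount("http+unix://", UnixHTTPAdapter("http+unix:///var/run/docker.sock"))
    response = session.get("http+unix://localhost/version")
    print(response.json()["ApiVersion"])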

View File

@@ -0,0 +1,91 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import socket
import typing as t
from .._import_helper import urllib3
from ..errors import DockerException
if t.TYPE_CHECKING:
from requests import Response
_T = t.TypeVar("_T")
class CancellableStream(t.Generic[_T]):
"""
Stream wrapper for real-time events, logs, etc. from the server.
Example:
>>> events = client.events()
>>> for event in events:
... print(event)
>>> # and cancel from another thread
>>> events.close()
"""
def __init__(self, stream: t.Generator[_T], response: Response) -> None:
self._stream = stream
self._response = response
def __iter__(self) -> t.Self:
return self
def __next__(self) -> _T:
try:
return next(self._stream)
except urllib3.exceptions.ProtocolError as exc:
raise StopIteration from exc
except socket.error as exc:
raise StopIteration from exc
next = __next__
def close(self) -> None:
"""
Closes the event streaming.
"""
if not self._response.raw.closed:
# find the underlying socket object
# based on api.client._get_raw_response_socket
sock_fp = self._response.raw._fp.fp # type: ignore
if hasattr(sock_fp, "raw"):
sock_raw = sock_fp.raw
if hasattr(sock_raw, "sock"):
sock = sock_raw.sock
elif hasattr(sock_raw, "_sock"):
sock = sock_raw._sock
elif hasattr(sock_fp, "channel"):
# We are working with a paramiko (SSH) channel, which does not
# support cancelable streams with the current implementation
raise DockerException(
"Cancellable streams not supported for the SSH protocol"
)
else:
sock = sock_fp._sock # type: ignore
if hasattr(urllib3.contrib, "pyopenssl") and isinstance(
sock, urllib3.contrib.pyopenssl.WrappedSocket
):
sock = sock.socket
sock.shutdown(socket.SHUT_RDWR)
sock.close()

View File

@@ -0,0 +1,311 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import io
import os
import random
import re
import tarfile
import tempfile
import typing as t
from ..constants import IS_WINDOWS_PLATFORM, WINDOWS_LONGPATH_PREFIX
from . import fnmatch
if t.TYPE_CHECKING:
from collections.abc import Sequence
_SEP = re.compile("/|\\\\") if IS_WINDOWS_PLATFORM else re.compile("/")
def tar(
path: str,
exclude: list[str] | None = None,
dockerfile: tuple[str, str | None] | tuple[None, None] | None = None,
fileobj: t.IO[bytes] | None = None,
gzip: bool = False,
) -> t.IO[bytes]:
root = os.path.abspath(path)
exclude = exclude or []
dockerfile = dockerfile or (None, None)
extra_files: list[tuple[str, str]] = []
if dockerfile[1] is not None:
assert dockerfile[0] is not None
dockerignore_contents = "\n".join(
(exclude or [".dockerignore"]) + [dockerfile[0]]
)
extra_files = [
(".dockerignore", dockerignore_contents),
dockerfile, # type: ignore
]
return create_archive(
files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile[0])),
root=root,
fileobj=fileobj,
gzip=gzip,
extra_files=extra_files,
)
def exclude_paths(
root: str, patterns: list[str], dockerfile: str | None = None
) -> set[str]:
"""
Given a root directory path and a list of .dockerignore patterns, return
an iterator of all paths (both regular files and directories) in the root
directory that do *not* match any of the patterns.
All paths returned are relative to the root.
"""
if dockerfile is None:
dockerfile = "Dockerfile"
patterns.append("!" + dockerfile)
pm = PatternMatcher(patterns)
return set(pm.walk(root))
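# Worked example (editor's sketch, not part of the vendored file; assumes the
# helpers in this module are in scope): .dockerignore semantics as implemented
# by exclude_paths().
def _example_exclude_paths() -> None:
    root = tempfile.mkdtemp()
    os.makedirs(os.path.join(root, "docs"))
    for name in ("app.py", "app.pyc", os.path.join("docs", "readme.md")):
        with open(os.path.join(root, name), "w", encoding="utf-8"):
            pass
    print(sorted(exclude_paths(root, ["*.pyc", "docs"])))
    # -> ['app.py']  ("docs" is pruned entirely, "*.pyc" filters out app.pyc)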
def build_file_list(root: str) -> list[str]:
files = []
for dirname, dirnames, fnames in os.walk(root):
for filename in fnames + dirnames:
longpath = os.path.join(dirname, filename)
files.append(longpath.replace(root, "", 1).lstrip("/"))
return files
def create_archive(
root: str,
files: Sequence[str] | None = None,
fileobj: t.IO[bytes] | None = None,
gzip: bool = False,
extra_files: Sequence[tuple[str, str]] | None = None,
) -> t.IO[bytes]:
extra_files = extra_files or []
if not fileobj:
# pylint: disable-next=consider-using-with
fileobj = tempfile.NamedTemporaryFile() # noqa: SIM115
with tarfile.open(mode="w:gz" if gzip else "w", fileobj=fileobj) as tarf:
if files is None:
files = build_file_list(root)
extra_names = set(e[0] for e in extra_files)
for path in files:
if path in extra_names:
# Extra files override context files with the same name
continue
full_path = os.path.join(root, path)
i = tarf.gettarinfo(full_path, arcname=path)
if i is None:
# This happens when we encounter a socket file. We can safely
# ignore it and proceed.
continue # type: ignore
# Workaround https://bugs.python.org/issue32713
if i.mtime < 0 or i.mtime > 8**11 - 1:
i.mtime = int(i.mtime)
if IS_WINDOWS_PLATFORM:
# Windows does not keep track of the execute bit, so we make files
# and directories executable by default.
i.mode = i.mode & 0o755 | 0o111
if i.isfile():
try:
with open(full_path, "rb") as f:
tarf.addfile(i, f)
except IOError as exc:
raise IOError(f"Can not read file in context: {full_path}") from exc
else:
# Directories, FIFOs, symlinks... do not need to be read.
tarf.addfile(i, None)
for name, contents in extra_files:
info = tarfile.TarInfo(name)
contents_encoded = contents.encode("utf-8")
info.size = len(contents_encoded)
tarf.addfile(info, io.BytesIO(contents_encoded))
fileobj.seek(0)
return fileobj
def mkbuildcontext(dockerfile: io.BytesIO | t.IO[bytes]) -> t.IO[bytes]:
# pylint: disable-next=consider-using-with
f = tempfile.NamedTemporaryFile() # noqa: SIM115
try:
with tarfile.open(mode="w", fileobj=f) as tarf:
if isinstance(dockerfile, io.StringIO): # type: ignore
raise TypeError("Please use io.BytesIO to create in-memory Dockerfiles")
if isinstance(dockerfile, io.BytesIO):
dfinfo = tarfile.TarInfo("Dockerfile")
dfinfo.size = len(dockerfile.getvalue())
dockerfile.seek(0)
else:
dfinfo = tarf.gettarinfo(fileobj=dockerfile, arcname="Dockerfile")
tarf.addfile(dfinfo, dockerfile)
f.seek(0)
except Exception: # noqa: E722
f.close()
raise
return f
def split_path(p: str) -> list[str]:
return [pt for pt in re.split(_SEP, p) if pt and pt != "."]
def normalize_slashes(p: str) -> str:
if IS_WINDOWS_PLATFORM:
return "/".join(split_path(p))
return p
def walk(root: str, patterns: Sequence[str], default: bool = True) -> t.Generator[str]:
pm = PatternMatcher(patterns)
return pm.walk(root)
# Heavily based on
# https://github.com/moby/moby/blob/master/pkg/fileutils/fileutils.go
class PatternMatcher:
def __init__(self, patterns: Sequence[str]) -> None:
self.patterns = list(filter(lambda p: p.dirs, [Pattern(p) for p in patterns]))
self.patterns.append(Pattern("!.dockerignore"))
def matches(self, filepath: str) -> bool:
matched = False
parent_path = os.path.dirname(filepath)
parent_path_dirs = split_path(parent_path)
for pattern in self.patterns:
negative = pattern.exclusion
match = pattern.match(filepath)
if (
not match
and parent_path != ""
and len(pattern.dirs) <= len(parent_path_dirs)
):
match = pattern.match(
os.path.sep.join(parent_path_dirs[: len(pattern.dirs)])
)
if match:
matched = not negative
return matched
def walk(self, root: str) -> t.Generator[str]:
def rec_walk(current_dir: str) -> t.Generator[str]:
for f in os.listdir(current_dir):
fpath = os.path.join(os.path.relpath(current_dir, root), f)
if fpath.startswith("." + os.path.sep):
fpath = fpath[2:]
match = self.matches(fpath)
if not match:
yield fpath
cur = os.path.join(root, fpath)
if not os.path.isdir(cur) or os.path.islink(cur):
continue
if match:
# If we want to skip this file and it is a directory
# then we should first check to see if there's an
# excludes pattern (e.g. !dir/file) that starts with this
# dir. If so then we cannot skip this dir.
skip = True
for pat in self.patterns:
if not pat.exclusion:
continue
if pat.cleaned_pattern.startswith(normalize_slashes(fpath)):
skip = False
break
if skip:
continue
yield from rec_walk(cur)
return rec_walk(root)
class Pattern:
def __init__(self, pattern_str: str) -> None:
self.exclusion = False
if pattern_str.startswith("!"):
self.exclusion = True
pattern_str = pattern_str[1:]
self.dirs = self.normalize(pattern_str)
self.cleaned_pattern = "/".join(self.dirs)
@classmethod
def normalize(cls, p: str) -> list[str]:
# Remove trailing spaces
p = p.strip()
# Leading and trailing slashes are not relevant. Yes,
# "foo.py/" must exclude the "foo.py" regular file. "."
# components are not relevant either, even if the whole
# pattern is only ".", as the Docker reference states: "For
# historical reasons, the pattern . is ignored."
# ".." component must be cleared with the potential previous
# component, regardless of whether it exists: "A preprocessing
# step [...] eliminates . and .. elements using Go's
# filepath.".
i = 0
split = split_path(p)
while i < len(split):
if split[i] == "..":
del split[i]
if i > 0:
del split[i - 1]
i -= 1
else:
i += 1
return split
def match(self, filepath: str) -> bool:
return fnmatch.fnmatch(normalize_slashes(filepath), self.cleaned_pattern)
def process_dockerfile(
dockerfile: str | None, path: str
) -> tuple[str, str | None] | tuple[None, None]:
if not dockerfile:
return (None, None)
abs_dockerfile = dockerfile
if not os.path.isabs(dockerfile):
abs_dockerfile = os.path.join(path, dockerfile)
if IS_WINDOWS_PLATFORM and path.startswith(WINDOWS_LONGPATH_PREFIX):
abs_dockerfile = f"{WINDOWS_LONGPATH_PREFIX}{os.path.normpath(abs_dockerfile[len(WINDOWS_LONGPATH_PREFIX) :])}"
if (
os.path.splitdrive(path)[0] != os.path.splitdrive(abs_dockerfile)[0]
or os.path.relpath(abs_dockerfile, path).startswith("..")
):
# Dockerfile not in context - read data to insert into tar later
with open(abs_dockerfile, "rt", encoding="utf-8") as df:
return (f".dockerfile.{random.getrandbits(160):x}", df.read())
# Dockerfile is inside the context - return path relative to context root
if dockerfile == abs_dockerfile:
# Only calculate relpath if necessary to avoid errors
# on Windows client -> Linux Docker
# see https://github.com/docker/compose/issues/5969
dockerfile = os.path.relpath(abs_dockerfile, path)
return (dockerfile, None)

View File

@@ -0,0 +1,90 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import logging
import os
import typing as t
from ..constants import IS_WINDOWS_PLATFORM
DOCKER_CONFIG_FILENAME = os.path.join(".docker", "config.json")
LEGACY_DOCKER_CONFIG_FILENAME = ".dockercfg"
log = logging.getLogger(__name__)
def get_default_config_file() -> str:
return os.path.join(home_dir(), DOCKER_CONFIG_FILENAME)
def find_config_file(config_path: str | None = None) -> str | None:
homedir = home_dir()
paths = list(
filter(
None,
[
config_path, # 1
config_path_from_environment(), # 2
os.path.join(homedir, DOCKER_CONFIG_FILENAME), # 3
os.path.join(homedir, LEGACY_DOCKER_CONFIG_FILENAME), # 4
],
)
)
log.debug("Trying paths: %s", repr(paths))
for path in paths:
if os.path.exists(path):
log.debug("Found file at path: %s", path)
return path
log.debug("No config file found")
return None
def config_path_from_environment() -> str | None:
config_dir = os.environ.get("DOCKER_CONFIG")
if not config_dir:
return None
return os.path.join(config_dir, os.path.basename(DOCKER_CONFIG_FILENAME))
def home_dir() -> str:
"""
Get the user's home directory, using the same logic as the Docker Engine
client - use %USERPROFILE% on Windows, $HOME/getuid on POSIX.
"""
if IS_WINDOWS_PLATFORM:
return os.environ.get("USERPROFILE", "")
return os.path.expanduser("~")
def load_general_config(config_path: str | None = None) -> dict[str, t.Any]:
config_file = find_config_file(config_path)
if not config_file:
return {}
try:
with open(config_file, "rt", encoding="utf-8") as f:
return json.load(f)
except (IOError, ValueError) as e:
# In the case of a legacy `.dockercfg` file, we will not
# be able to load any JSON data.
log.debug(e)
log.debug("All parsing attempts failed - returning empty config")
return {}

View File

@@ -0,0 +1,68 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import functools
import typing as t
from .. import errors
from . import utils
if t.TYPE_CHECKING:
from collections.abc import Callable
from ..api.client import APIClient
_Self = t.TypeVar("_Self")
_P = t.ParamSpec("_P")
_R = t.TypeVar("_R")
def minimum_version(
version: str,
) -> Callable[
[Callable[t.Concatenate[_Self, _P], _R]],
Callable[t.Concatenate[_Self, _P], _R],
]:
def decorator(
f: Callable[t.Concatenate[_Self, _P], _R],
) -> Callable[t.Concatenate[_Self, _P], _R]:
@functools.wraps(f)
def wrapper(self: _Self, *args: _P.args, **kwargs: _P.kwargs) -> _R:
# We use _Self instead of APIClient since this is used for mixins for APIClient.
# This unfortunately means that self._version does not exist in the mixin,
# it only exists after mixing in. This is why we ignore types here.
if utils.version_lt(self._version, version): # type: ignore
raise errors.InvalidVersion(
f"{f.__name__} is not available for version < {version}"
)
return f(self, *args, **kwargs)
return wrapper
return decorator
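# Usage sketch (editor's illustration, not part of the vendored file): the
# decorator guards API-version-dependent methods. _FakeClient is a hypothetical
# stand-in providing the _version attribute the wrapper reads after mixing in.
def _example_minimum_version() -> None:
    class _FakeClient:
        _version = "1.24"

        @minimum_version("1.25")
        def prune_containers(self) -> str:
            return "pruned"

    _FakeClient().prune_containers()
    # raises errors.InvalidVersion: prune_containers is not available for version < 1.25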
def update_headers(
f: Callable[t.Concatenate[APIClient, _P], _R],
) -> Callable[t.Concatenate[APIClient, _P], _R]:
def inner(self: APIClient, *args: _P.args, **kwargs: _P.kwargs) -> _R:
if "HttpHeaders" in self._general_configs:
if not kwargs.get("headers"):
kwargs["headers"] = self._general_configs["HttpHeaders"]
else:
# We cannot (yet) model that kwargs["headers"] should be a dictionary
kwargs["headers"].update(self._general_configs["HttpHeaders"]) # type: ignore
return f(self, *args, **kwargs)
return inner

View File

@@ -0,0 +1,129 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
"""Filename matching with shell patterns.
fnmatch(FILENAME, PATTERN) matches according to the local convention.
fnmatchcase(FILENAME, PATTERN) always takes case into account.
The functions operate by translating the pattern into a regular
expression. They cache the compiled regular expressions for speed.
The function translate(PATTERN) returns a regular expression
corresponding to PATTERN. (It does not compile it.)
"""
from __future__ import annotations
import re
__all__ = ["fnmatch", "fnmatchcase", "translate"]
_cache: dict[str, re.Pattern] = {}
_MAXCACHE = 100
def _purge() -> None:
"""Clear the pattern cache"""
_cache.clear()
def fnmatch(name: str, pat: str) -> bool:
"""Test whether FILENAME matches PATTERN.
Patterns are Unix shell style:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
An initial period in FILENAME is not special.
Both FILENAME and PATTERN are first case-normalized
if the operating system requires it.
If you do not want this, use fnmatchcase(FILENAME, PATTERN).
"""
name = name.lower()
pat = pat.lower()
return fnmatchcase(name, pat)
def fnmatchcase(name: str, pat: str) -> bool:
"""Test whether FILENAME matches PATTERN, including case.
This is a version of fnmatch() which does not case-normalize
its arguments.
"""
try:
re_pat = _cache[pat]
except KeyError:
res = translate(pat)
if len(_cache) >= _MAXCACHE:
_cache.clear()
_cache[pat] = re_pat = re.compile(res)
return re_pat.match(name) is not None
def translate(pat: str) -> str:
"""Translate a shell PATTERN to a regular expression.
There is no way to quote meta-characters.
"""
i, n = 0, len(pat)
res = "^"
while i < n:
c = pat[i]
i = i + 1
if c == "*":
if i < n and pat[i] == "*":
# is some flavor of "**"
i = i + 1
# Treat **/ as ** so eat the "/"
if i < n and pat[i] == "/":
i = i + 1
if i >= n:
# is "**EOF" - to align with .gitignore just accept all
res = res + ".*"
else:
# is "**"
# Note that this allows for any number of /'s (even 0) because
# the .* will eat everything, even /'s
res = res + "(.*/)?"
else:
# is "*" so map it to anything but "/"
res = res + "[^/]*"
elif c == "?":
# "?" is any char except "/"
res = res + "[^/]"
elif c == "[":
j = i
if j < n and pat[j] == "!":
j = j + 1
if j < n and pat[j] == "]":
j = j + 1
while j < n and pat[j] != "]":
j = j + 1
if j >= n:
res = res + "\\["
else:
stuff = pat[i:j].replace("\\", "\\\\")
i = j + 1
if stuff[0] == "!":
stuff = "^" + stuff[1:]
elif stuff[0] == "^":
stuff = "\\" + stuff
res = f"{res}[{stuff}]"
else:
res = res + re.escape(c)
return res + "$"

View File

@@ -0,0 +1,101 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import json.decoder
import typing as t
from ..errors import StreamParseError
if t.TYPE_CHECKING:
import re
from collections.abc import Callable
_T = t.TypeVar("_T")
json_decoder = json.JSONDecoder()
def stream_as_text(stream: t.Generator[bytes | str]) -> t.Generator[str]:
"""
Given a stream of bytes or text, if any of the items in the stream
are bytes, convert them to text.
This function can be removed once we return text streams
instead of byte streams.
"""
for data in stream:
if not isinstance(data, str):
data = data.decode("utf-8", "replace")
yield data
def json_splitter(buffer: str) -> tuple[t.Any, str] | None:
"""Attempt to parse a json object from a buffer. If there is at least one
object, return it and the rest of the buffer, otherwise return None.
"""
buffer = buffer.strip()
try:
obj, index = json_decoder.raw_decode(buffer)
ws: re.Pattern = json.decoder.WHITESPACE # type: ignore[attr-defined]
m = ws.match(buffer, index)
rest = buffer[m.end() :] if m else buffer[index:]
return obj, rest
except ValueError:
return None
def json_stream(stream: t.Generator[str | bytes]) -> t.Generator[t.Any]:
"""Given a stream of text, return a stream of json objects.
This handles streams which are inconsistently buffered (some entries may
be newline delimited, and others are not).
"""
return split_buffer(stream, json_splitter, json_decoder.decode)
def line_splitter(buffer: str, separator: str = "\n") -> tuple[str, str] | None:
index = buffer.find(str(separator))
if index == -1:
return None
return buffer[: index + 1], buffer[index + 1 :]
def split_buffer(
stream: t.Generator[str | bytes],
splitter: Callable[[str], tuple[_T, str] | None],
decoder: Callable[[str], _T],
) -> t.Generator[_T | str]:
"""Given a generator which yields strings and a splitter function,
joins all input, splits on the separator and yields each chunk.
Unlike string.split(), each chunk includes the trailing
separator, except for the last one if none was found on the end
of the input.
"""
buffered = ""
for data in stream_as_text(stream):
buffered += data
while True:
buffer_split = splitter(buffered)
if buffer_split is None:
break
item, buffered = buffer_split
yield item
if buffered:
try:
yield decoder(buffered)
except Exception as e:
raise StreamParseError(e) from e
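# Worked example (editor's sketch, not part of the vendored file): json_stream()
# reassembles objects from arbitrarily chunked input, newline-delimited or not.
def _example_json_stream() -> None:
    chunks = iter(['{"status": "pul', 'ling"}\n{"status"', ': "done"}'])
    for obj in json_stream(chunks):
        print(obj)
    # -> {'status': 'pulling'}
    # -> {'status': 'done'}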

View File

@@ -0,0 +1,137 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import re
import typing as t
if t.TYPE_CHECKING:
from collections.abc import Collection, Sequence
PORT_SPEC = re.compile(
"^" # Match full string
"(" # External part
r"(\[?(?P<host>[a-fA-F\d.:]+)\]?:)?" # Address
r"(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:" # External range
")?"
r"(?P<int>[\d]+)(-(?P<int_end>[\d]+))?" # Internal range
"(?P<proto>/(udp|tcp|sctp))?" # Protocol
"$" # Match full string
)
def add_port_mapping(
port_bindings: dict[str, list[str | tuple[str, str | None] | None]],
internal_port: str,
external: str | tuple[str, str | None] | None,
) -> None:
if internal_port in port_bindings:
port_bindings[internal_port].append(external)
else:
port_bindings[internal_port] = [external]
def add_port(
port_bindings: dict[str, list[str | tuple[str, str | None] | None]],
internal_port_range: list[str],
external_range: list[str] | list[tuple[str, str | None]] | None,
) -> None:
if external_range is None:
for internal_port in internal_port_range:
add_port_mapping(port_bindings, internal_port, None)
else:
for internal_port, external_port in zip(internal_port_range, external_range):
# mypy loses the exact type of external_port elements for some reason...
add_port_mapping(port_bindings, internal_port, external_port) # type: ignore
def build_port_bindings(
ports: Collection[str],
) -> dict[str, list[str | tuple[str, str | None] | None]]:
port_bindings: dict[str, list[str | tuple[str, str | None] | None]] = {}
for port in ports:
internal_port_range, external_range = split_port(port)
add_port(port_bindings, internal_port_range, external_range)
return port_bindings
def _raise_invalid_port(port: str) -> t.NoReturn:
raise ValueError(
f'Invalid port "{port}", should be '
"[[remote_ip:]remote_port[-remote_port]:]"
"port[/protocol]"
)
@t.overload
def port_range(
start: str,
end: str | None,
proto: str,
randomly_available_port: bool = False,
) -> list[str]: ...
@t.overload
def port_range(
start: str | None,
end: str | None,
proto: str,
randomly_available_port: bool = False,
) -> list[str] | None: ...
def port_range(
start: str | None,
end: str | None,
proto: str,
randomly_available_port: bool = False,
) -> list[str] | None:
if start is None:
return start
if end is None:
return [f"{start}{proto}"]
if randomly_available_port:
return [f"{start}-{end}{proto}"]
return [f"{port}{proto}" for port in range(int(start), int(end) + 1)]
def split_port(
port: str | int,
) -> tuple[list[str], list[str] | list[tuple[str, str | None]] | None]:
port = str(port)
match = PORT_SPEC.match(port)
if match is None:
_raise_invalid_port(port)
parts = match.groupdict()
host: str | None = parts["host"]
proto: str = parts["proto"] or ""
int_p: str = parts["int"]
ext_p: str = parts["ext"]
internal: list[str] = port_range(int_p, parts["int_end"], proto) # type: ignore
external = port_range(ext_p or None, parts["ext_end"], "", len(internal) == 1)
if host is None:
if (external is not None and len(internal) != len(external)) or ext_p == "":
raise ValueError("Port ranges don't match in length")
return internal, external
external_or_none: Sequence[str | None]
if not external:
external_or_none = [None] * len(internal)
else:
external_or_none = external
if len(internal) != len(external_or_none):
raise ValueError("Port ranges don't match in length")
return internal, [(host, ext_port) for ext_port in external_or_none]
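# Illustrations (editor's sketch, not part of the vendored file) of the grammar
# accepted by PORT_SPEC and the shapes returned by split_port():
def _example_split_port() -> None:
    print(split_port("8080:80/udp"))          # -> (['80/udp'], ['8080'])
    print(split_port("127.0.0.1:8080:80"))    # -> (['80'], [('127.0.0.1', '8080')])
    print(split_port("8000-8001:9000-9001"))  # -> (['9000', '9001'], ['8000', '8001'])
    print(split_port(80))                     # -> (['80'], None)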

View File

@@ -0,0 +1,98 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
from .utils import format_environment
class ProxyConfig(dict):
"""
Hold the client's proxy configuration
"""
@property
def http(self) -> str | None:
return self.get("http")
@property
def https(self) -> str | None:
return self.get("https")
@property
def ftp(self) -> str | None:
return self.get("ftp")
@property
def no_proxy(self) -> str | None:
return self.get("no_proxy")
@staticmethod
def from_dict(config: dict[str, str]) -> ProxyConfig:
"""
Instantiate a new ProxyConfig from a dictionary that represents a
client configuration, as described in `the documentation`_.
.. _the documentation:
https://docs.docker.com/network/proxy/#configure-the-docker-client
"""
return ProxyConfig(
http=config.get("httpProxy"),
https=config.get("httpsProxy"),
ftp=config.get("ftpProxy"),
no_proxy=config.get("noProxy"),
)
def get_environment(self) -> dict[str, str]:
"""
Return a dictionary representing the environment variables used to
set the proxy settings.
"""
env = {}
if self.http:
env["http_proxy"] = env["HTTP_PROXY"] = self.http
if self.https:
env["https_proxy"] = env["HTTPS_PROXY"] = self.https
if self.ftp:
env["ftp_proxy"] = env["FTP_PROXY"] = self.ftp
if self.no_proxy:
env["no_proxy"] = env["NO_PROXY"] = self.no_proxy
return env
@t.overload
def inject_proxy_environment(self, environment: list[str]) -> list[str]: ...
@t.overload
def inject_proxy_environment(
self, environment: list[str] | None
) -> list[str] | None: ...
def inject_proxy_environment(
self, environment: list[str] | None
) -> list[str] | None:
"""
Given a list of strings representing environment variables, prepend the
environment variables corresponding to the proxy settings.
"""
if not self:
return environment
proxy_env = format_environment(self.get_environment())
if not environment:
return proxy_env
# It is important to prepend our variables, because we want the
# variables defined in "environment" to take precedence.
return proxy_env + environment
def __str__(self) -> str:
return f"ProxyConfig(http={self.http}, https={self.https}, ftp={self.ftp}, no_proxy={self.no_proxy})"

View File

@@ -0,0 +1,243 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import errno
import os
import select
import socket as pysocket
import struct
import typing as t
from ..transport.npipesocket import NpipeSocket
if t.TYPE_CHECKING:
from collections.abc import Sequence
from ..._socket_helper import SocketLike
STDOUT = 1
STDERR = 2
class SocketError(Exception):
pass
# NpipeSockets have their own error types
# pywintypes.error: (109, 'ReadFile', 'The pipe has been ended.')
NPIPE_ENDED = 109
def read(socket: SocketLike, n: int = 4096) -> bytes | None:
"""
Reads at most n bytes from socket
"""
recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)
if not isinstance(socket, NpipeSocket): # type: ignore[unreachable]
if not hasattr(select, "poll"):
# select.select() is limited to 1024 file descriptors
select.select([socket], [], [])
else:
poll = select.poll()
poll.register(socket, select.POLLIN | select.POLLPRI)
poll.poll()
try:
if hasattr(socket, "recv"):
return socket.recv(n)
if isinstance(socket, pysocket.SocketIO): # type: ignore
return socket.read(n) # type: ignore[unreachable]
return os.read(socket.fileno(), n)
except EnvironmentError as e:
if e.errno not in recoverable_errors:
raise
return None # TODO ???
except Exception as e:
is_pipe_ended = (
isinstance(socket, NpipeSocket) # type: ignore[unreachable]
and len(e.args) > 0
and e.args[0] == NPIPE_ENDED
)
if is_pipe_ended:
# npipes do not support duplex sockets, so we interpret
# a PIPE_ENDED error as a close operation (0-length read).
return b""
raise
def read_exactly(socket: SocketLike, n: int) -> bytes:
"""
Reads exactly n bytes from socket
Raises SocketError if there is not enough data
"""
data = b""
while len(data) < n:
next_data = read(socket, n - len(data))
if not next_data:
raise SocketError("Unexpected EOF")
data += next_data
return data
def next_frame_header(socket: SocketLike) -> tuple[int, int]:
"""
Returns the stream and size of the next frame of data waiting to be read
from socket, according to the protocol defined here:
https://docs.docker.com/engine/api/v1.24/#attach-to-a-container
"""
try:
data = read_exactly(socket, 8)
except SocketError:
return (-1, -1)
stream, actual = struct.unpack(">BxxxL", data)
return (stream, actual)
def frames_iter(socket: SocketLike, tty: bool) -> t.Generator[tuple[int, bytes]]:
"""
Return a generator of frames read from socket. A frame is a tuple where
the first item is the stream number and the second item is a chunk of data.
If the tty setting is enabled, the streams are multiplexed into the stdout
stream.
"""
if tty:
return ((STDOUT, frame) for frame in frames_iter_tty(socket))
return frames_iter_no_tty(socket)
def frames_iter_no_tty(socket: SocketLike) -> t.Generator[tuple[int, bytes]]:
"""
Returns a generator of data read from the socket when the tty setting is
not enabled.
"""
while True:
(stream, n) = next_frame_header(socket)
if n < 0:
break
while n > 0:
result = read(socket, n)
if result is None:
continue
data_length = len(result)
if data_length == 0:
# We have reached EOF
return
n -= data_length
yield (stream, result)
def frames_iter_tty(socket: SocketLike) -> t.Generator[bytes]:
"""
Return a generator of data read from the socket when the tty setting is
enabled.
"""
while True:
result = read(socket)
if not result:
# We have reached EOF
return
yield result
@t.overload
def consume_socket_output(
frames: Sequence[bytes] | t.Generator[bytes], demux: t.Literal[False] = False
) -> bytes: ...
@t.overload
def consume_socket_output(
frames: (
Sequence[tuple[bytes | None, bytes | None]]
| t.Generator[tuple[bytes | None, bytes | None]]
),
demux: t.Literal[True],
) -> tuple[bytes, bytes]: ...
@t.overload
def consume_socket_output(
frames: (
Sequence[bytes]
| Sequence[tuple[bytes | None, bytes | None]]
| t.Generator[bytes]
| t.Generator[tuple[bytes | None, bytes | None]]
),
demux: bool = False,
) -> bytes | tuple[bytes, bytes]: ...
def consume_socket_output(
frames: (
Sequence[bytes]
| Sequence[tuple[bytes | None, bytes | None]]
| t.Generator[bytes]
| t.Generator[tuple[bytes | None, bytes | None]]
),
demux: bool = False,
) -> bytes | tuple[bytes, bytes]:
"""
Iterate through frames read from the socket and return the result.
Args:
demux (bool):
If False, stdout and stderr are multiplexed, and the result is the
concatenation of all the frames. If True, the streams are
demultiplexed, and the result is a 2-tuple where each item is the
concatenation of frames belonging to the same stream.
"""
if demux is False:
# If the streams are multiplexed, the generator returns byte strings
# that we just need to concatenate.
return b"".join(frames) # type: ignore
# If the streams are demultiplexed, the generator yields tuples
# (stdout, stderr)
out: list[bytes | None] = [None, None]
frame: tuple[bytes | None, bytes | None]
for frame in frames: # type: ignore
# It is guaranteed that for each frame, one and only one stream
# is not None.
if frame == (None, None):
raise AssertionError(f"frame must not be (None, None), but got {frame}")
if frame[0] is not None:
if out[0] is None:
out[0] = frame[0]
else:
out[0] += frame[0]
else:
if out[1] is None:
out[1] = frame[1]
else:
out[1] += frame[1] # type: ignore[operator]
return tuple(out) # type: ignore
def demux_adaptor(stream_id: int, data: bytes) -> tuple[bytes | None, bytes | None]:
"""
Utility to demultiplex stdout and stderr when reading frames from the
socket.
"""
if stream_id == STDOUT:
return (data, None)
if stream_id == STDERR:
return (None, data)
raise ValueError(f"{stream_id} is not a valid stream")

View File

@@ -0,0 +1,520 @@
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import base64
import collections
import json
import os
import os.path
import shlex
import string
import typing as t
from urllib.parse import urlparse, urlunparse
from ansible_collections.community.docker.plugins.module_utils._version import (
StrictVersion,
)
from .. import errors
from ..constants import (
BYTE_UNITS,
DEFAULT_HTTP_HOST,
DEFAULT_NPIPE,
DEFAULT_UNIX_SOCKET,
)
from ..tls import TLSConfig
if t.TYPE_CHECKING:
from collections.abc import Mapping, Sequence
URLComponents = collections.namedtuple(
"URLComponents",
"scheme netloc url params query fragment",
)
def decode_json_header(header: str | bytes) -> dict[str, t.Any]:
data = base64.b64decode(header).decode("utf-8")
return json.loads(data)
def compare_version(v1: str, v2: str) -> t.Literal[-1, 0, 1]:
"""Compare docker versions
>>> v1 = '1.9'
>>> v2 = '1.10'
>>> compare_version(v1, v2)
1
>>> compare_version(v2, v1)
-1
>>> compare_version(v2, v2)
0
"""
s1 = StrictVersion(v1)
s2 = StrictVersion(v2)
if s1 == s2:
return 0
if s1 > s2:
return -1
return 1
def version_lt(v1: str, v2: str) -> bool:
return compare_version(v1, v2) > 0
def version_gte(v1: str, v2: str) -> bool:
return not version_lt(v1, v2)
def _convert_port_binding(
binding: (
tuple[str, str | int | None]
| tuple[str | int | None]
| dict[str, str]
| str
| int
),
) -> dict[str, str]:
result = {"HostIp": "", "HostPort": ""}
host_port: str | int | None = ""
if isinstance(binding, tuple):
if len(binding) == 2:
host_port = binding[1] # type: ignore
result["HostIp"] = binding[0]
elif isinstance(binding[0], str):
result["HostIp"] = binding[0]
else:
host_port = binding[0]
elif isinstance(binding, dict):
if "HostPort" in binding:
host_port = binding["HostPort"]
if "HostIp" in binding:
result["HostIp"] = binding["HostIp"]
else:
raise ValueError(binding)
else:
host_port = binding
result["HostPort"] = str(host_port) if host_port is not None else ""
return result
def convert_port_bindings(
port_bindings: dict[
str | int,
tuple[str, str | int | None]
| tuple[str | int | None]
| dict[str, str]
| str
| int
| list[
tuple[str, str | int | None]
| tuple[str | int | None]
| dict[str, str]
| str
| int
],
],
) -> dict[str, list[dict[str, str]]]:
result = {}
for k, v in port_bindings.items():
key = str(k)
if "/" not in key:
key += "/tcp"
if isinstance(v, list):
result[key] = [_convert_port_binding(binding) for binding in v]
else:
result[key] = [_convert_port_binding(v)]
return result
def convert_volume_binds(
binds: (
list[str]
| Mapping[
str | bytes, dict[str, str | bytes] | dict[str, str] | bytes | str | int
]
),
) -> list[str]:
if isinstance(binds, list):
return binds # type: ignore
result = []
for k, v in binds.items():
if isinstance(k, bytes):
k = k.decode("utf-8")
if isinstance(v, dict):
if "ro" in v and "mode" in v:
raise ValueError(f'Binding cannot contain both "ro" and "mode": {v!r}')
bind = v["bind"]
if isinstance(bind, bytes):
bind = bind.decode("utf-8")
if "ro" in v:
mode = "ro" if v["ro"] else "rw"
elif "mode" in v:
mode = v["mode"] # type: ignore # TODO
else:
mode = "rw"
# NOTE: this is only relevant for Linux hosts
# (does not apply in Docker Desktop)
propagation_modes = [
"rshared",
"shared",
"rslave",
"slave",
"rprivate",
"private",
]
if "propagation" in v and v["propagation"] in propagation_modes:
if mode:
mode = ",".join([mode, v["propagation"]]) # type: ignore # TODO
else:
mode = v["propagation"] # type: ignore # TODO
result.append(f"{k}:{bind}:{mode}")
else:
if isinstance(v, bytes):
v = v.decode("utf-8")
result.append(f"{k}:{v}:rw")
return result
def convert_tmpfs_mounts(tmpfs: dict[str, str] | list[str]) -> dict[str, str]:
if isinstance(tmpfs, dict):
return tmpfs
if not isinstance(tmpfs, list):
raise ValueError(
f"Expected tmpfs value to be either a list or a dict, found: {type(tmpfs).__name__}"
)
result = {}
for mount in tmpfs:
if isinstance(mount, str):
if ":" in mount:
name, options = mount.split(":", 1)
else:
name = mount
options = ""
else:
raise ValueError(
f"Expected item in tmpfs list to be a string, found: {type(mount).__name__}"
)
result[name] = options
return result
def convert_service_networks(
networks: list[str | dict[str, str]],
) -> list[dict[str, str]]:
if not networks:
return networks # type: ignore
if not isinstance(networks, list):
raise TypeError("networks parameter must be a list.")
result = []
for n in networks:
if isinstance(n, str):
n = {"Target": n}
result.append(n)
return result
def parse_repository_tag(repo_name: str) -> tuple[str, str | None]:
parts = repo_name.rsplit("@", 1)
if len(parts) == 2:
return tuple(parts) # type: ignore
parts = repo_name.rsplit(":", 1)
if len(parts) == 2 and "/" not in parts[1]:
return tuple(parts) # type: ignore
return repo_name, None
def parse_host(addr: str | None, is_win32: bool = False, tls: bool = False) -> str:
# Sensible defaults
if not addr and is_win32:
return DEFAULT_NPIPE
if not addr or addr.strip() == "unix://":
return DEFAULT_UNIX_SOCKET
addr = addr.strip()
parsed_url = urlparse(addr)
proto = parsed_url.scheme
if not proto or any(x not in string.ascii_letters + "+" for x in proto):
# https://bugs.python.org/issue754016
parsed_url = urlparse("//" + addr, "tcp")
proto = "tcp"
if proto == "fd":
raise errors.DockerException("fd protocol is not implemented")
# These protos are valid aliases for our library but not for the
# official spec
if proto in ("http", "https"):
tls = proto == "https"
proto = "tcp"
elif proto == "http+unix":
proto = "unix"
if proto not in ("tcp", "unix", "npipe", "ssh"):
raise errors.DockerException(f"Invalid bind address protocol: {addr}")
if proto == "tcp" and not parsed_url.netloc:
# "tcp://" is exceptionally disallowed by convention;
# omitting a hostname for other protocols is fine
raise errors.DockerException(f"Invalid bind address format: {addr}")
if any(
[parsed_url.params, parsed_url.query, parsed_url.fragment, parsed_url.password]
):
raise errors.DockerException(f"Invalid bind address format: {addr}")
if parsed_url.path and proto == "ssh":
raise errors.DockerException(
f"Invalid bind address format: no path allowed for this protocol: {addr}"
)
path = parsed_url.path
if proto == "unix" and parsed_url.hostname is not None:
# For legacy reasons, we consider unix://path
# to be valid and equivalent to unix:///path
path = f"{parsed_url.hostname}/{path}"
netloc = parsed_url.netloc
if proto in ("tcp", "ssh"):
port = parsed_url.port or 0
if port <= 0:
port = 22 if proto == "ssh" else (2376 if tls else 2375)
netloc = f"{parsed_url.netloc}:{port}"
if not parsed_url.hostname:
netloc = f"{DEFAULT_HTTP_HOST}:{port}"
# Rewrite schemes to fit library internals (requests adapters)
if proto == "tcp":
proto = f"http{'s' if tls else ''}"
elif proto == "unix":
proto = "http+unix"
if proto in ("http+unix", "npipe"):
return f"{proto}://{path}".rstrip("/")
return urlunparse(
URLComponents(
scheme=proto,
netloc=netloc,
url=path,
params="",
query="",
fragment="",
)
).rstrip("/")
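# Illustrative inputs/outputs (editor's sketch, not part of the vendored file)
# for the rewriting rules above:
def _example_parse_host() -> None:
    print(parse_host(None))  # -> DEFAULT_UNIX_SOCKET (the http+unix:// form)
    print(parse_host("tcp://192.168.0.2:2375"))  # -> http://192.168.0.2:2375
    print(parse_host("tcp://192.168.0.2:2376", tls=True))  # -> https://192.168.0.2:2376
    print(parse_host("ssh://user@host"))  # -> ssh://user@host:22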
def parse_devices(devices: Sequence[dict[str, str] | str]) -> list[dict[str, str]]:
device_list = []
for device in devices:
if isinstance(device, dict):
device_list.append(device)
continue
if not isinstance(device, str):
raise errors.DockerException(f"Invalid device type {type(device)}")
device_mapping = device.split(":")
if device_mapping:
path_on_host = device_mapping[0]
if len(device_mapping) > 1:
path_in_container = device_mapping[1]
else:
path_in_container = path_on_host
if len(device_mapping) > 2:
permissions = device_mapping[2]
else:
permissions = "rwm"
device_list.append(
{
"PathOnHost": path_on_host,
"PathInContainer": path_in_container,
"CgroupPermissions": permissions,
}
)
return device_list
def kwargs_from_env(
assert_hostname: bool | None = None,
environment: Mapping[str, str] | None = None,
) -> dict[str, t.Any]:
if not environment:
environment = os.environ
host = environment.get("DOCKER_HOST")
# empty string for cert path is the same as unset.
cert_path = environment.get("DOCKER_CERT_PATH") or None
# empty string for tls verify counts as "false".
# Any other value (even the literal string 'unset') counts as true,
# while a missing variable counts as false.
tls_verify_str = environment.get("DOCKER_TLS_VERIFY")
if tls_verify_str == "":
tls_verify = False
else:
tls_verify = tls_verify_str is not None
enable_tls = cert_path or tls_verify
params: dict[str, t.Any] = {}
if host:
params["base_url"] = host
if not enable_tls:
return params
if not cert_path:
cert_path = os.path.join(os.path.expanduser("~"), ".docker")
if not tls_verify and assert_hostname is None:
# assert_hostname is a subset of TLS verification,
# so if it is not set already then set it to false.
assert_hostname = False
params["tls"] = TLSConfig(
client_cert=(
os.path.join(cert_path, "cert.pem"),
os.path.join(cert_path, "key.pem"),
),
ca_cert=os.path.join(cert_path, "ca.pem"),
verify=tls_verify,
assert_hostname=assert_hostname,
)
return params
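# Illustrative sketch (the environment values are made up): with a TLS-enabled
# daemon configured via the standard DOCKER_* variables, kwargs_from_env()
# yields keyword arguments suitable for the API client constructor:
#
#     >>> env = {
#     ...     "DOCKER_HOST": "tcp://192.0.2.10:2376",
#     ...     "DOCKER_TLS_VERIFY": "1",
#     ...     "DOCKER_CERT_PATH": "/home/user/.docker",
#     ... }
#     >>> params = kwargs_from_env(environment=env)
#     >>> params["base_url"]
#     'tcp://192.0.2.10:2376'
#     >>> params["tls"].verify  # TLSConfig built from cert.pem/key.pem/ca.pem
#     True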
def convert_filters(
filters: Mapping[str, bool | str | int | list[int] | list[str] | list[str | int]],
) -> str:
result = {}
for k, v in filters.items():
if isinstance(v, bool):
v = "true" if v else "false"
if not isinstance(v, list):
v = [
v,
]
result[k] = [str(item) if not isinstance(item, str) else item for item in v]
return json.dumps(result)
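# Illustrative sketch: booleans become "true"/"false", scalars are wrapped in
# lists, and the result is the JSON document the daemon's `filters` query
# parameter expects:
#
#     >>> convert_filters({"dangling": True, "label": ["foo", "bar=baz"]})
#     '{"dangling": ["true"], "label": ["foo", "bar=baz"]}'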
def parse_bytes(s: int | float | str) -> int | float:
if isinstance(s, (int, float)):
return s
if len(s) == 0:
return 0
if s[-2:-1].isalpha() and s[-1].isalpha() and (s[-1] == "b" or s[-1] == "B"):
s = s[:-1]
units = BYTE_UNITS
suffix = s[-1].lower()
# Check if the variable is a string representation of an int
# without a units part. Assuming that the units are bytes.
if suffix.isdigit():
digits_part = s
suffix = "b"
else:
digits_part = s[:-1]
if suffix in units or suffix.isdigit():
try:
digits = float(digits_part)
except ValueError as exc:
raise errors.DockerException(
f"Failed converting the string value for memory ({digits_part}) to an integer."
) from exc
# Reconvert to long for the final result
s = int(digits * units[suffix])
else:
raise errors.DockerException(
f"The specified value for memory ({s}) should specify the units. The postfix should be one of the `b` `k` `m` `g` characters"
)
return s
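# Illustrative sketch, assuming BYTE_UNITS maps "b"/"k"/"m"/"g" to 1024-based
# multipliers as in the Docker SDK:
#
#     >>> parse_bytes("512")      # no suffix: plain bytes
#     512
#     >>> parse_bytes("64k")      # 64 * 1024
#     65536
#     >>> parse_bytes("1gb")      # trailing "b" is stripped first
#     1073741824
#     >>> parse_bytes(42)         # ints/floats pass through unchanged
#     42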
def normalize_links(links: dict[str, str] | Sequence[tuple[str, str]]) -> list[str]:
if isinstance(links, dict):
sorted_links = sorted(links.items())
else:
sorted_links = sorted(links)
return [f"{k}:{v}" if v else k for k, v in sorted_links]
def parse_env_file(env_file: str | os.PathLike) -> dict[str, str]:
"""
Reads a line-separated environment file.
The format of each line should be "key=value".
"""
environment = {}
with open(env_file, "rt", encoding="utf-8") as f:
for line in f:
if line[0] == "#":
continue
line = line.strip()
if not line:
continue
parse_line = line.split("=", 1)
if len(parse_line) == 2:
k, v = parse_line
environment[k] = v
else:
raise errors.DockerException(
f"Invalid line in environment file {env_file}:\n{line}"
)
return environment
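# Illustrative sketch (file name and contents are made up): for an env file
# "app.env" containing
#
#     # comment line
#     FOO=bar
#     EMPTY=
#
# the parser skips comments and blank lines and splits on the first "=":
#
#     >>> parse_env_file("app.env")
#     {'FOO': 'bar', 'EMPTY': ''}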
def split_command(command: str) -> list[str]:
return shlex.split(command)
def format_environment(environment: Mapping[str, str | bytes | None]) -> list[str]:
def format_env(key: str, value: str | bytes | None) -> str:
if value is None:
return key
if isinstance(value, bytes):
value = value.decode("utf-8")
return f"{key}={value}"
return [format_env(*var) for var in environment.items()]
def format_extra_hosts(extra_hosts: Mapping[str, str], task: bool = False) -> list[str]:
# Use format dictated by Swarm API if container is part of a task
if task:
return [f"{v} {k}" for k, v in sorted(extra_hosts.items())]
return [f"{k}:{v}" for k, v in sorted(extra_hosts.items())]

View File

@@ -0,0 +1,556 @@
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import abc
import os
import platform
import re
import sys
import traceback
import typing as t
from collections.abc import Mapping, Sequence
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
from ansible_collections.community.docker.plugins.module_utils._util import (
DEFAULT_DOCKER_HOST,
DEFAULT_TIMEOUT_SECONDS,
DEFAULT_TLS,
DEFAULT_TLS_VERIFY,
DOCKER_COMMON_ARGS,
DOCKER_MUTUALLY_EXCLUSIVE,
DOCKER_REQUIRED_TOGETHER,
sanitize_result,
update_tls_hostname,
)
from ansible_collections.community.docker.plugins.module_utils._version import (
LooseVersion,
)
HAS_DOCKER_PY_2 = False # pylint: disable=invalid-name
HAS_DOCKER_PY_3 = False # pylint: disable=invalid-name
HAS_DOCKER_ERROR: None | str # pylint: disable=invalid-name
HAS_DOCKER_TRACEBACK: None | str # pylint: disable=invalid-name
docker_version: str | None # pylint: disable=invalid-name
try:
from docker import __version__ as docker_version
from docker.errors import APIError, TLSParameterError
from docker.tls import TLSConfig
if LooseVersion(docker_version) >= LooseVersion("3.0.0"):
HAS_DOCKER_PY_3 = True # pylint: disable=invalid-name
from docker import APIClient as Client
elif LooseVersion(docker_version) >= LooseVersion("2.0.0"):
HAS_DOCKER_PY_2 = True # pylint: disable=invalid-name
from docker import APIClient as Client
else:
from docker import Client # type: ignore
except ImportError as exc:
HAS_DOCKER_ERROR = str(exc) # pylint: disable=invalid-name
HAS_DOCKER_TRACEBACK = traceback.format_exc() # pylint: disable=invalid-name
HAS_DOCKER_PY = False # pylint: disable=invalid-name
docker_version = None # pylint: disable=invalid-name
else:
HAS_DOCKER_PY = True # pylint: disable=invalid-name
HAS_DOCKER_ERROR = None # pylint: disable=invalid-name
HAS_DOCKER_TRACEBACK = None # pylint: disable=invalid-name
try:
from requests.exceptions import ( # noqa: F401, pylint: disable=unused-import
RequestException,
)
except ImportError:
# Either the Docker SDK for Python no longer uses requests, or it is not
# installed at all, or its requests dependency is missing. In any case, define
# an exception class RequestException so that our code does not break.
class RequestException(Exception): # type: ignore
pass
if t.TYPE_CHECKING:
from collections.abc import Callable
MIN_DOCKER_VERSION = "2.0.0"
if not HAS_DOCKER_PY:
# No Docker SDK for Python. Create a placeholder client to allow
# instantiation of AnsibleModule and proper error handling
class Client: # type: ignore # noqa: F811, pylint: disable=function-redefined
def __init__(self, **kwargs: t.Any) -> None:
pass
class APIError(Exception): # type: ignore # noqa: F811, pylint: disable=function-redefined
pass
class NotFound(Exception): # type: ignore # noqa: F811, pylint: disable=function-redefined
pass
def _get_tls_config(
fail_function: Callable[[str], t.NoReturn], **kwargs: t.Any
) -> TLSConfig:
if "assert_hostname" in kwargs and LooseVersion(docker_version) >= LooseVersion(
"7.0.0b1"
):
assert_hostname = kwargs.pop("assert_hostname")
if assert_hostname is not None:
fail_function(
"tls_hostname is not compatible with Docker SDK for Python 7.0.0+. You are using"
f" Docker SDK for Python {docker_version}. The tls_hostname option (value: {assert_hostname})"
" has either been set directly or with the environment variable DOCKER_TLS_HOSTNAME."
" Make sure it is not set, or switch to an older version of Docker SDK for Python."
)
# Filter out all None parameters
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None)
try:
return TLSConfig(**kwargs)
except TLSParameterError as exc:
fail_function(f"TLS config error: {exc}")
def is_using_tls(auth_data: dict[str, t.Any]) -> bool:
return auth_data["tls_verify"] or auth_data["tls"]
def get_connect_params(
auth_data: dict[str, t.Any], fail_function: Callable[[str], t.NoReturn]
) -> dict[str, t.Any]:
if is_using_tls(auth_data):
auth_data["docker_host"] = auth_data["docker_host"].replace(
"tcp://", "https://"
)
result = {
"base_url": auth_data["docker_host"],
"version": auth_data["api_version"],
"timeout": auth_data["timeout"],
}
if auth_data["tls_verify"]:
# TLS with verification
tls_config = {
"verify": True,
"assert_hostname": auth_data["tls_hostname"],
"fail_function": fail_function,
}
if auth_data["cert_path"] and auth_data["key_path"]:
tls_config["client_cert"] = (auth_data["cert_path"], auth_data["key_path"])
if auth_data["cacert_path"]:
tls_config["ca_cert"] = auth_data["cacert_path"]
result["tls"] = _get_tls_config(**tls_config)
elif auth_data["tls"]:
# TLS without verification
tls_config = {
"verify": False,
"fail_function": fail_function,
}
if auth_data["cert_path"] and auth_data["key_path"]:
tls_config["client_cert"] = (auth_data["cert_path"], auth_data["key_path"])
result["tls"] = _get_tls_config(**tls_config)
if auth_data.get("use_ssh_client"):
if LooseVersion(docker_version) < LooseVersion("4.4.0"):
fail_function(
"use_ssh_client=True requires Docker SDK for Python 4.4.0 or newer"
)
result["use_ssh_client"] = True
# No TLS
return result
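# Illustrative sketch (the auth_data values and the fail function are made up):
# with verification enabled, the tcp:// scheme is rewritten to https:// and a
# TLSConfig is attached to the connection parameters:
#
#     auth_data = {
#         "docker_host": "tcp://192.0.2.10:2376", "api_version": "auto",
#         "timeout": 60, "tls": False, "tls_verify": True,
#         "tls_hostname": None, "cert_path": None, "key_path": None,
#         "cacert_path": None, "use_ssh_client": False,
#     }
#     get_connect_params(auth_data, fail_function=some_fail_function)
#     # -> {"base_url": "https://192.0.2.10:2376", "version": "auto",
#     #     "timeout": 60,
#     #     "tls": TLSConfig(verify=True, assert_hostname=None)}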
DOCKERPYUPGRADE_SWITCH_TO_DOCKER = (
"Try `pip uninstall docker-py` followed by `pip install docker`."
)
DOCKERPYUPGRADE_UPGRADE_DOCKER = "Use `pip install --upgrade docker` to upgrade."
class AnsibleDockerClientBase(Client):
def __init__(
self,
min_docker_version: str | None = None,
min_docker_api_version: str | None = None,
) -> None:
if min_docker_version is None:
min_docker_version = MIN_DOCKER_VERSION
if not HAS_DOCKER_PY:
msg = missing_required_lib("Docker SDK for Python: docker>=5.0.0")
msg = f"{msg}, for example via `pip install docker`. The error was: {HAS_DOCKER_ERROR}"
self.fail(msg, exception=HAS_DOCKER_TRACEBACK)
# Only parse the version once we know the import succeeded; docker_version
# is None when the Docker SDK for Python is missing.
self.docker_py_version = LooseVersion(docker_version)
if self.docker_py_version < LooseVersion(min_docker_version):
msg = (
f"Error: Docker SDK for Python version is {docker_version} ({platform.node()}'s Python {sys.executable})."
f" Minimum version required is {min_docker_version}."
)
if docker_version < LooseVersion("2.0"):
msg += DOCKERPYUPGRADE_SWITCH_TO_DOCKER
else:
msg += DOCKERPYUPGRADE_UPGRADE_DOCKER
self.fail(msg)
self._connect_params = get_connect_params(
self.auth_params, fail_function=self.fail
)
try:
super().__init__(**self._connect_params)
self.docker_api_version_str = self.api_version
except APIError as exc:
self.fail(f"Docker API error: {exc}")
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error connecting: {exc}")
self.docker_api_version = LooseVersion(self.docker_api_version_str)
min_docker_api_version = min_docker_api_version or "1.25"
if self.docker_api_version < LooseVersion(min_docker_api_version):
self.fail(
f"Docker API version is {self.docker_api_version_str}. Minimum version required is {min_docker_api_version}."
)
def log(self, msg: t.Any, pretty_print: bool = False) -> None:
pass
# if self.debug:
# from .util import log_debug
# log_debug(msg, pretty_print=pretty_print)
@abc.abstractmethod
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
pass
@abc.abstractmethod
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
pass
@staticmethod
def _get_value(
param_name: str,
param_value: t.Any,
env_variable: str | None,
default_value: t.Any | None,
value_type: t.Literal["str", "bool", "int"] = "str",
) -> t.Any:
if param_value is not None:
# take module parameter value
if value_type == "bool":
if param_value in BOOLEANS_TRUE:
return True
if param_value in BOOLEANS_FALSE:
return False
return bool(param_value)
if value_type == "int":
return int(param_value)
return param_value
if env_variable is not None:
env_value = os.environ.get(env_variable)
if env_value is not None:
# take the env variable value
if param_name == "cert_path":
return os.path.join(env_value, "cert.pem")
if param_name == "cacert_path":
return os.path.join(env_value, "ca.pem")
if param_name == "key_path":
return os.path.join(env_value, "key.pem")
if value_type == "bool":
if env_value in BOOLEANS_TRUE:
return True
if env_value in BOOLEANS_FALSE:
return False
return bool(env_value)
if value_type == "int":
return int(env_value)
return env_value
# take the default
return default_value
@abc.abstractmethod
def _get_params(self) -> dict[str, t.Any]:
pass
@property
def auth_params(self) -> dict[str, t.Any]:
# Get authentication credentials.
# Precedence: module parameters -> environment variables -> defaults.
self.log("Getting credentials")
client_params = self._get_params()
params = {}
for key in DOCKER_COMMON_ARGS:
params[key] = client_params.get(key)
result = {
"docker_host": self._get_value(
"docker_host",
params["docker_host"],
"DOCKER_HOST",
DEFAULT_DOCKER_HOST,
value_type="str",
),
"tls_hostname": self._get_value(
"tls_hostname",
params["tls_hostname"],
"DOCKER_TLS_HOSTNAME",
None,
value_type="str",
),
"api_version": self._get_value(
"api_version",
params["api_version"],
"DOCKER_API_VERSION",
"auto",
value_type="str",
),
"cacert_path": self._get_value(
"cacert_path",
params["ca_path"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"cert_path": self._get_value(
"cert_path",
params["client_cert"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"key_path": self._get_value(
"key_path",
params["client_key"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"tls": self._get_value(
"tls", params["tls"], "DOCKER_TLS", DEFAULT_TLS, value_type="bool"
),
"tls_verify": self._get_value(
"validate_certs",
params["validate_certs"],
"DOCKER_TLS_VERIFY",
DEFAULT_TLS_VERIFY,
value_type="bool",
),
"timeout": self._get_value(
"timeout",
params["timeout"],
"DOCKER_TIMEOUT",
DEFAULT_TIMEOUT_SECONDS,
value_type="int",
),
"use_ssh_client": self._get_value(
"use_ssh_client",
params["use_ssh_client"],
None,
False,
value_type="bool",
),
}
update_tls_hostname(result)
return result
def _handle_ssl_error(self, error: Exception) -> t.NoReturn:
match = re.match(r"hostname.*doesn\'t match (\'.*\')", str(error))
if match:
hostname = self.auth_params["tls_hostname"]
self.fail(
f"You asked for verification that Docker daemons certificate's hostname matches {hostname}. "
f"The actual certificate's hostname is {match.group(1)}. Most likely you need to set DOCKER_TLS_HOSTNAME "
f"or pass `tls_hostname` with a value of {match.group(1)}. You may also use TLS without verification by "
"setting the `tls` parameter to true."
)
self.fail(f"SSL Exception: {error}")
class AnsibleDockerClient(AnsibleDockerClientBase):
def __init__(
self,
argument_spec: dict[str, t.Any] | None = None,
supports_check_mode: bool = False,
mutually_exclusive: Sequence[Sequence[str]] | None = None,
required_together: Sequence[Sequence[str]] | None = None,
required_if: (
Sequence[
tuple[str, t.Any, Sequence[str]]
| tuple[str, t.Any, Sequence[str], bool]
]
| None
) = None,
required_one_of: Sequence[Sequence[str]] | None = None,
required_by: dict[str, Sequence[str]] | None = None,
min_docker_version: str | None = None,
min_docker_api_version: str | None = None,
option_minimal_versions: dict[str, t.Any] | None = None,
option_minimal_versions_ignore_params: Sequence[str] | None = None,
fail_results: dict[str, t.Any] | None = None,
):
# Modules can put information in here which will always be returned
# in case client.fail() is called.
self.fail_results = fail_results or {}
merged_arg_spec = {}
merged_arg_spec.update(DOCKER_COMMON_ARGS)
if argument_spec:
merged_arg_spec.update(argument_spec)
self.arg_spec = merged_arg_spec
mutually_exclusive_params: list[Sequence[str]] = []
mutually_exclusive_params += DOCKER_MUTUALLY_EXCLUSIVE
if mutually_exclusive:
mutually_exclusive_params += mutually_exclusive
required_together_params: list[Sequence[str]] = []
required_together_params += DOCKER_REQUIRED_TOGETHER
if required_together:
required_together_params += required_together
self.module = AnsibleModule(
argument_spec=merged_arg_spec,
supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive_params,
required_together=required_together_params,
required_if=required_if,
required_one_of=required_one_of,
required_by=required_by or {},
)
self.debug = self.module.params.get("debug")
self.check_mode = self.module.check_mode
super().__init__(
min_docker_version=min_docker_version,
min_docker_api_version=min_docker_api_version,
)
if option_minimal_versions is not None:
self._get_minimal_versions(
option_minimal_versions, option_minimal_versions_ignore_params
)
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
self.fail_results.update(kwargs)
self.module.fail_json(msg=msg, **sanitize_result(self.fail_results))
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
self.module.deprecate(
msg, version=version, date=date, collection_name=collection_name
)
def _get_params(self) -> dict[str, t.Any]:
return self.module.params
def _get_minimal_versions(
self,
option_minimal_versions: dict[str, t.Any],
ignore_params: Sequence[str] | None = None,
) -> None:
self.option_minimal_versions: dict[str, dict[str, t.Any]] = {}
for option in self.module.argument_spec:
if ignore_params is not None and option in ignore_params:
continue
self.option_minimal_versions[option] = {}
self.option_minimal_versions.update(option_minimal_versions)
for option, data in self.option_minimal_versions.items():
# Test whether option is supported, and store result
support_docker_py = True
support_docker_api = True
if "docker_py_version" in data:
support_docker_py = self.docker_py_version >= LooseVersion(
data["docker_py_version"]
)
if "docker_api_version" in data:
support_docker_api = self.docker_api_version >= LooseVersion(
data["docker_api_version"]
)
data["supported"] = support_docker_py and support_docker_api
# Fail if option is not supported but used
if not data["supported"]:
# Test whether option is specified
if "detect_usage" in data:
used = data["detect_usage"](self)
else:
used = self.module.params.get(option) is not None
if used and "default" in self.module.argument_spec[option]:
used = (
self.module.params[option]
!= self.module.argument_spec[option]["default"]
)
if used:
# If the option is used, compose error message.
if "usage_msg" in data:
usg = data["usage_msg"]
else:
usg = f"set {option} option"
if not support_docker_api:
msg = f"Docker API version is {self.docker_api_version_str}. Minimum version required is {data['docker_api_version']} to {usg}."
elif not support_docker_py:
msg = (
f"Docker SDK for Python version is {docker_version} ({platform.node()}'s Python {sys.executable})."
f" Minimum version required is {data['docker_py_version']} to {usg}. {DOCKERPYUPGRADE_UPGRADE_DOCKER}"
)
else:
# should not happen
msg = f"Cannot {usg} with your configuration."
self.fail(msg)
def report_warnings(
self, result: t.Any, warnings_key: Sequence[str] | None = None
) -> None:
"""
Checks result of client operation for warnings, and if present, outputs them.
warnings_key should be a list of keys used to crawl the result dictionary.
For example, if warnings_key == ['a', 'b'], the function will consider
result['a']['b'] if these keys exist. If the result is a non-empty string, it
will be reported as a warning. If the result is a list, every entry will be
reported as a warning.
In most cases (if warnings are returned at all), warnings_key should be
['Warnings'] or ['Warning']. The default value (if not specified) is ['Warnings'].
"""
if warnings_key is None:
warnings_key = ["Warnings"]
for key in warnings_key:
if not isinstance(result, Mapping):
return
result = result.get(key)
# str is itself a Sequence; check it first so that a string warning is
# reported once instead of once per character.
if isinstance(result, str):
if result:
self.module.warn(f"Docker warning: {result}")
elif isinstance(result, Sequence):
for warning in result:
self.module.warn(f"Docker warning: {warning}")

View File

@@ -0,0 +1,731 @@
# Copyright 2016 Red Hat | Ansible
# Copyright (c) 2022 Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import abc
import os
import re
import typing as t
from collections.abc import Mapping, Sequence
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
from ansible_collections.community.docker.plugins.module_utils._version import (
LooseVersion,
)
try:
from requests.exceptions import ( # noqa: F401, pylint: disable=unused-import
RequestException,
SSLError,
)
except ImportError:
# Define an exception class RequestException so that our code does not break.
class RequestException(Exception): # type: ignore
pass
from ansible_collections.community.docker.plugins.module_utils._api import auth
from ansible_collections.community.docker.plugins.module_utils._api.api.client import (
APIClient as Client,
)
from ansible_collections.community.docker.plugins.module_utils._api.errors import (
APIError,
MissingRequirementException,
NotFound,
TLSParameterError,
)
from ansible_collections.community.docker.plugins.module_utils._api.tls import TLSConfig
from ansible_collections.community.docker.plugins.module_utils._api.utils.utils import (
convert_filters,
parse_repository_tag,
)
from ansible_collections.community.docker.plugins.module_utils._util import (
DEFAULT_DOCKER_HOST,
DEFAULT_TIMEOUT_SECONDS,
DEFAULT_TLS,
DEFAULT_TLS_VERIFY,
DOCKER_COMMON_ARGS,
DOCKER_MUTUALLY_EXCLUSIVE,
DOCKER_REQUIRED_TOGETHER,
sanitize_result,
update_tls_hostname,
)
if t.TYPE_CHECKING:
from collections.abc import Callable
def _get_tls_config(
fail_function: Callable[[str], t.NoReturn], **kwargs: t.Any
) -> TLSConfig:
try:
return TLSConfig(**kwargs)
except TLSParameterError as exc:
fail_function(f"TLS config error: {exc}")
def is_using_tls(auth_data: dict[str, t.Any]) -> bool:
return auth_data["tls_verify"] or auth_data["tls"]
def get_connect_params(
auth_data: dict[str, t.Any], fail_function: Callable[[str], t.NoReturn]
) -> dict[str, t.Any]:
if is_using_tls(auth_data):
auth_data["docker_host"] = auth_data["docker_host"].replace(
"tcp://", "https://"
)
result = {
"base_url": auth_data["docker_host"],
"version": auth_data["api_version"],
"timeout": auth_data["timeout"],
}
if auth_data["tls_verify"]:
# TLS with verification
tls_config = {
"verify": True,
"assert_hostname": auth_data["tls_hostname"],
"fail_function": fail_function,
}
if auth_data["cert_path"] and auth_data["key_path"]:
tls_config["client_cert"] = (auth_data["cert_path"], auth_data["key_path"])
if auth_data["cacert_path"]:
tls_config["ca_cert"] = auth_data["cacert_path"]
result["tls"] = _get_tls_config(**tls_config)
elif auth_data["tls"]:
# TLS without verification
tls_config = {
"verify": False,
"fail_function": fail_function,
}
if auth_data["cert_path"] and auth_data["key_path"]:
tls_config["client_cert"] = (auth_data["cert_path"], auth_data["key_path"])
result["tls"] = _get_tls_config(**tls_config)
if auth_data.get("use_ssh_client"):
result["use_ssh_client"] = True
# No TLS
return result
class AnsibleDockerClientBase(Client):
def __init__(self, min_docker_api_version: str | None = None) -> None:
self._connect_params = get_connect_params(
self.auth_params, fail_function=self.fail
)
try:
super().__init__(**self._connect_params)
self.docker_api_version_str = self.api_version
except MissingRequirementException as exc:
self.fail(
missing_required_lib(exc.requirement), exception=exc.import_exception
)
except APIError as exc:
self.fail(f"Docker API error: {exc}")
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error connecting: {exc}")
self.docker_api_version = LooseVersion(self.docker_api_version_str)
min_docker_api_version = min_docker_api_version or "1.25"
if self.docker_api_version < LooseVersion(min_docker_api_version):
self.fail(
f"Docker API version is {self.docker_api_version_str}. Minimum version required is {min_docker_api_version}."
)
def log(self, msg: t.Any, pretty_print: bool = False) -> None:
pass
# if self.debug:
# from .util import log_debug
# log_debug(msg, pretty_print=pretty_print)
@abc.abstractmethod
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
pass
@abc.abstractmethod
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
pass
@staticmethod
def _get_value(
param_name: str,
param_value: t.Any,
env_variable: str | None,
default_value: t.Any | None,
value_type: t.Literal["str", "bool", "int"] = "str",
) -> t.Any:
if param_value is not None:
# take module parameter value
if value_type == "bool":
if param_value in BOOLEANS_TRUE:
return True
if param_value in BOOLEANS_FALSE:
return False
return bool(param_value)
if value_type == "int":
return int(param_value)
return param_value
if env_variable is not None:
env_value = os.environ.get(env_variable)
if env_value is not None:
# take the env variable value
if param_name == "cert_path":
return os.path.join(env_value, "cert.pem")
if param_name == "cacert_path":
return os.path.join(env_value, "ca.pem")
if param_name == "key_path":
return os.path.join(env_value, "key.pem")
if value_type == "bool":
if env_value in BOOLEANS_TRUE:
return True
if env_value in BOOLEANS_FALSE:
return False
return bool(env_value)
if value_type == "int":
return int(env_value)
return env_value
# take the default
return default_value
@abc.abstractmethod
def _get_params(self) -> dict[str, t.Any]:
pass
@property
def auth_params(self) -> dict[str, t.Any]:
# Get authentication credentials.
# Precedence: module parameters -> environment variables -> defaults.
self.log("Getting credentials")
client_params = self._get_params()
params = {}
for key in DOCKER_COMMON_ARGS:
params[key] = client_params.get(key)
result = {
"docker_host": self._get_value(
"docker_host",
params["docker_host"],
"DOCKER_HOST",
DEFAULT_DOCKER_HOST,
value_type="str",
),
"tls_hostname": self._get_value(
"tls_hostname",
params["tls_hostname"],
"DOCKER_TLS_HOSTNAME",
None,
value_type="str",
),
"api_version": self._get_value(
"api_version",
params["api_version"],
"DOCKER_API_VERSION",
"auto",
value_type="str",
),
"cacert_path": self._get_value(
"cacert_path",
params["ca_path"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"cert_path": self._get_value(
"cert_path",
params["client_cert"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"key_path": self._get_value(
"key_path",
params["client_key"],
"DOCKER_CERT_PATH",
None,
value_type="str",
),
"tls": self._get_value(
"tls", params["tls"], "DOCKER_TLS", DEFAULT_TLS, value_type="bool"
),
"tls_verify": self._get_value(
"validate_certs",
params["validate_certs"],
"DOCKER_TLS_VERIFY",
DEFAULT_TLS_VERIFY,
value_type="bool",
),
"timeout": self._get_value(
"timeout",
params["timeout"],
"DOCKER_TIMEOUT",
DEFAULT_TIMEOUT_SECONDS,
value_type="int",
),
"use_ssh_client": self._get_value(
"use_ssh_client",
params["use_ssh_client"],
None,
False,
value_type="bool",
),
}
def depr(*args: t.Any, **kwargs: t.Any) -> None:
self.deprecate(*args, **kwargs)
update_tls_hostname(
result,
old_behavior=True,
deprecate_function=depr,
uses_tls=is_using_tls(result),
)
return result
def _handle_ssl_error(self, error: Exception) -> t.NoReturn:
match = re.match(r"hostname.*doesn't match ('.*')", str(error))
if match:
hostname = self.auth_params["tls_hostname"]
self.fail(
f"You asked for verification that the Docker daemon's certificate hostname matches {hostname}. "
f"The actual certificate's hostname is {match.group(1)}. Most likely you need to set DOCKER_TLS_HOSTNAME "
f"or pass `tls_hostname` with a value of {match.group(1)}. You may also use TLS without verification by "
"setting the `tls` parameter to true."
)
self.fail(f"SSL Exception: {error}")
def get_container_by_id(self, container_id: str) -> dict[str, t.Any] | None:
try:
self.log(f"Inspecting container Id {container_id}")
result = self.get_json("/containers/{0}/json", container_id)
self.log("Completed container inspection")
return result
except NotFound:
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting container: {exc}")
def get_container(self, name: str | None) -> dict[str, t.Any] | None:
"""
Lookup a container and return the inspection results.
"""
if name is None:
return None
search_name = name
if not name.startswith("/"):
search_name = "/" + name
result = None
try:
params = {
"limit": -1,
"all": 1,
"size": 0,
"trunc_cmd": 0,
}
containers = self.get_json("/containers/json", params=params)
for container in containers:
self.log(f"testing container: {container['Names']}")
if (
isinstance(container["Names"], list)
and search_name in container["Names"]
):
result = container
break
if container["Id"].startswith(name):
result = container
break
if container["Id"] == name:
result = container
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error retrieving container list: {exc}")
if result is None:
return None
return self.get_container_by_id(result["Id"])
def get_network(
self, name: str | None = None, network_id: str | None = None
) -> dict[str, t.Any] | None:
"""
Lookup a network and return the inspection results.
"""
if name is None and network_id is None:
return None
result = None
if network_id is None:
try:
networks = self.get_json("/networks")
for network in networks:
self.log(f"testing network: {network['Name']}")
if name == network["Name"]:
result = network
break
if network["Id"].startswith(name):
result = network
break
except SSLError as exc:
self._handle_ssl_error(exc)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error retrieving network list: {exc}")
if result is not None:
network_id = result["Id"]
if network_id is not None:
try:
self.log(f"Inspecting network Id {network_id}")
result = self.get_json("/networks/{0}", network_id)
self.log("Completed network inspection")
except NotFound:
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting network: {exc}")
return result
def _image_lookup(self, name: str, tag: str | None) -> list[dict[str, t.Any]]:
"""
Including a tag in the name parameter sent to the Docker SDK for Python images method
does not work consistently. Instead, get the result set for name and manually check
if the tag exists.
"""
try:
params: dict[str, t.Any] = {
"only_ids": 0,
"all": 0,
}
if LooseVersion(self.api_version) < LooseVersion("1.25"):
# only use "filter" on API 1.24 and under, as it is deprecated
params["filter"] = name
else:
params["filters"] = convert_filters({"reference": name})
images = self.get_json("/images/json", params=params)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error searching for image {name} - {exc}")
if tag:
lookup = f"{name}:{tag}"
lookup_digest = f"{name}@{tag}"
response = images
images = []
for image in response:
tags = image.get("RepoTags")
digests = image.get("RepoDigests")
if (tags and lookup in tags) or (digests and lookup_digest in digests):
images = [image]
break
return images
def find_image(self, name: str, tag: str | None) -> dict[str, t.Any] | None:
"""
Lookup an image (by name and tag) and return the inspection results.
"""
if not name:
return None
self.log(f"Find image {name}:{tag}")
images = self._image_lookup(name, tag)
if not images:
# In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
registry, repo_name = auth.resolve_repository_name(name)
if registry == "docker.io":
# If docker.io is explicitly there in name, the image
# is not found in some cases (#41509)
self.log(f"Check for docker.io image: {repo_name}")
images = self._image_lookup(repo_name, tag)
if not images and repo_name.startswith("library/"):
# Sometimes library/xxx images are not found
lookup = repo_name[len("library/") :]
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images:
# Last case for some Docker versions: if docker.io was not there,
# it can be that the image was not found either
# (https://github.com/ansible/ansible/pull/15586)
lookup = f"{registry}/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images and "/" not in repo_name:
# This seems to be happening with podman-docker
# (https://github.com/ansible-collections/community.docker/issues/291)
lookup = f"{registry}/library/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if len(images) > 1:
self.fail(f"Daemon returned more than one result for {name}:{tag}")
if len(images) == 1:
try:
return self.get_json("/images/{0}/json", images[0]["Id"])
except NotFound:
self.log(f"Image {name}:{tag} not found.")
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting image {name}:{tag} - {exc}")
self.log(f"Image {name}:{tag} not found.")
return None
def find_image_by_id(
self, image_id: str, accept_missing_image: bool = False
) -> dict[str, t.Any] | None:
"""
Lookup an image (by ID) and return the inspection results.
"""
if not image_id:
return None
self.log(f"Find image {image_id} (by ID)")
try:
return self.get_json("/images/{0}/json", image_id)
except NotFound as exc:
if not accept_missing_image:
self.fail(f"Error inspecting image ID {image_id} - {exc}")
self.log(f"Image {image_id} not found.")
return None
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error inspecting image ID {image_id} - {exc}")
@staticmethod
def _compare_images(
img1: dict[str, t.Any] | None, img2: dict[str, t.Any] | None
) -> bool:
if img1 is None or img2 is None:
return img1 == img2
filter_keys = {"Metadata"}
img1_filtered = {k: v for k, v in img1.items() if k not in filter_keys}
img2_filtered = {k: v for k, v in img2.items() if k not in filter_keys}
return img1_filtered == img2_filtered
def pull_image(
self, name: str, tag: str = "latest", image_platform: str | None = None
) -> tuple[dict[str, t.Any] | None, bool]:
"""
Pull an image
"""
self.log(f"Pulling image {name}:{tag}")
old_image = self.find_image(name, tag)
try:
repository, image_tag = parse_repository_tag(name)
registry, dummy_repo_name = auth.resolve_repository_name(repository)
params = {
"tag": tag or image_tag or "latest",
"fromImage": repository,
}
if image_platform is not None:
params["platform"] = image_platform
headers = {}
header = auth.get_config_header(self, registry)
if header:
headers["X-Registry-Auth"] = header
response = self._post(
self._url("/images/create"),
params=params,
headers=headers,
stream=True,
timeout=None,
)
self._raise_for_status(response)
for line in self._stream_helper(response, decode=True):
self.log(line, pretty_print=True)
if line.get("error"):
if line.get("errorDetail"):
error_detail = line.get("errorDetail")
self.fail(
f"Error pulling {name} - code: {error_detail.get('code')} message: {error_detail.get('message')}"
)
else:
self.fail(f"Error pulling {name} - {line.get('error')}")
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(f"Error pulling image {name}:{tag} - {exc}")
new_image = self.find_image(name, tag)
return new_image, self._compare_images(old_image, new_image)
class AnsibleDockerClient(AnsibleDockerClientBase):
def __init__(
self,
argument_spec: dict[str, t.Any] | None = None,
supports_check_mode: bool = False,
mutually_exclusive: Sequence[Sequence[str]] | None = None,
required_together: Sequence[Sequence[str]] | None = None,
required_if: (
Sequence[
tuple[str, t.Any, Sequence[str]]
| tuple[str, t.Any, Sequence[str], bool]
]
| None
) = None,
required_one_of: Sequence[Sequence[str]] | None = None,
required_by: dict[str, Sequence[str]] | None = None,
min_docker_api_version: str | None = None,
option_minimal_versions: dict[str, t.Any] | None = None,
option_minimal_versions_ignore_params: Sequence[str] | None = None,
fail_results: dict[str, t.Any] | None = None,
):
# Modules can put information in here which will always be returned
# in case client.fail() is called.
self.fail_results = fail_results or {}
merged_arg_spec = {}
merged_arg_spec.update(DOCKER_COMMON_ARGS)
if argument_spec:
merged_arg_spec.update(argument_spec)
self.arg_spec = merged_arg_spec
mutually_exclusive_params: list[Sequence[str]] = []
mutually_exclusive_params += DOCKER_MUTUALLY_EXCLUSIVE
if mutually_exclusive:
mutually_exclusive_params += mutually_exclusive
required_together_params: list[Sequence[str]] = []
required_together_params += DOCKER_REQUIRED_TOGETHER
if required_together:
required_together_params += required_together
self.module = AnsibleModule(
argument_spec=merged_arg_spec,
supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive_params,
required_together=required_together_params,
required_if=required_if,
required_one_of=required_one_of,
required_by=required_by or {},
)
self.debug = self.module.params.get("debug")
self.check_mode = self.module.check_mode
super().__init__(min_docker_api_version=min_docker_api_version)
if option_minimal_versions is not None:
self._get_minimal_versions(
option_minimal_versions, option_minimal_versions_ignore_params
)
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
self.fail_results.update(kwargs)
self.module.fail_json(msg=msg, **sanitize_result(self.fail_results))
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
self.module.deprecate(
msg, version=version, date=date, collection_name=collection_name
)
def _get_params(self) -> dict[str, t.Any]:
return self.module.params
def _get_minimal_versions(
self,
option_minimal_versions: dict[str, t.Any],
ignore_params: Sequence[str] | None = None,
) -> None:
self.option_minimal_versions: dict[str, dict[str, t.Any]] = {}
for option in self.module.argument_spec:
if ignore_params is not None and option in ignore_params:
continue
self.option_minimal_versions[option] = {}
self.option_minimal_versions.update(option_minimal_versions)
for option, data in self.option_minimal_versions.items():
# Test whether option is supported, and store result
support_docker_api = True
if "docker_api_version" in data:
support_docker_api = self.docker_api_version >= LooseVersion(
data["docker_api_version"]
)
data["supported"] = support_docker_api
# Fail if option is not supported but used
if not data["supported"]:
# Test whether option is specified
if "detect_usage" in data:
used = data["detect_usage"](self)
else:
used = self.module.params.get(option) is not None
if used and "default" in self.module.argument_spec[option]:
used = (
self.module.params[option]
!= self.module.argument_spec[option]["default"]
)
if used:
# If the option is used, compose error message.
if "usage_msg" in data:
usg = data["usage_msg"]
else:
usg = f"set {option} option"
if not support_docker_api:
msg = f"Docker API version is {self.docker_api_version_str}. Minimum version required is {data['docker_api_version']} to {usg}."
else:
# should not happen
msg = f"Cannot {usg} with your configuration."
self.fail(msg)
def report_warnings(
self, result: t.Any, warnings_key: Sequence[str] | None = None
) -> None:
"""
Checks result of client operation for warnings, and if present, outputs them.
warnings_key should be a list of keys used to crawl the result dictionary.
For example, if warnings_key == ['a', 'b'], the function will consider
result['a']['b'] if these keys exist. If the result is a non-empty string, it
will be reported as a warning. If the result is a list, every entry will be
reported as a warning.
In most cases (if warnings are returned at all), warnings_key should be
['Warnings'] or ['Warning']. The default value (if not specified) is ['Warnings'].
"""
if warnings_key is None:
warnings_key = ["Warnings"]
for key in warnings_key:
if not isinstance(result, Mapping):
return
result = result.get(key)
# str is itself a Sequence; check it first so that a string warning is
# reported once instead of once per character.
if isinstance(result, str):
if result:
self.module.warn(f"Docker warning: {result}")
elif isinstance(result, Sequence):
for warning in result:
self.module.warn(f"Docker warning: {warning}")

View File

@@ -0,0 +1,490 @@
# Copyright (c) 2023, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import abc
import json
import shlex
import typing as t
from ansible.module_utils.basic import AnsibleModule, env_fallback
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_text
from ansible_collections.community.docker.plugins.module_utils._api.auth import (
resolve_repository_name,
)
from ansible_collections.community.docker.plugins.module_utils._util import (
DEFAULT_DOCKER_HOST,
DEFAULT_TLS,
DEFAULT_TLS_VERIFY,
DOCKER_MUTUALLY_EXCLUSIVE,
DOCKER_REQUIRED_TOGETHER,
sanitize_result,
)
from ansible_collections.community.docker.plugins.module_utils._version import (
LooseVersion,
)
if t.TYPE_CHECKING:
from collections.abc import Mapping, Sequence
DOCKER_COMMON_ARGS = {
"docker_cli": {"type": "path"},
"docker_host": {
"type": "str",
"fallback": (env_fallback, ["DOCKER_HOST"]),
"aliases": ["docker_url"],
},
"tls_hostname": {
"type": "str",
"fallback": (env_fallback, ["DOCKER_TLS_HOSTNAME"]),
},
"api_version": {
"type": "str",
"default": "auto",
"fallback": (env_fallback, ["DOCKER_API_VERSION"]),
"aliases": ["docker_api_version"],
},
"ca_path": {"type": "path", "aliases": ["ca_cert", "tls_ca_cert", "cacert_path"]},
"client_cert": {"type": "path", "aliases": ["tls_client_cert", "cert_path"]},
"client_key": {"type": "path", "aliases": ["tls_client_key", "key_path"]},
"tls": {
"type": "bool",
"default": DEFAULT_TLS,
"fallback": (env_fallback, ["DOCKER_TLS"]),
},
"validate_certs": {
"type": "bool",
"default": DEFAULT_TLS_VERIFY,
"fallback": (env_fallback, ["DOCKER_TLS_VERIFY"]),
"aliases": ["tls_verify"],
},
# "debug": {"type": "bool", "default: False},
"cli_context": {"type": "str"},
}
class DockerException(Exception):
pass
class AnsibleDockerClientBase:
docker_api_version_str: str | None
docker_api_version: LooseVersion | None
def __init__(
self,
common_args: dict[str, t.Any],
min_docker_api_version: str | None = None,
needs_api_version: bool = True,
) -> None:
self._environment: dict[str, str] = {}
if common_args["tls_hostname"]:
self._environment["DOCKER_TLS_HOSTNAME"] = common_args["tls_hostname"]
if common_args["api_version"] and common_args["api_version"] != "auto":
self._environment["DOCKER_API_VERSION"] = common_args["api_version"]
cli = common_args.get("docker_cli")
if cli is None:
try:
cli = get_bin_path("docker")
except ValueError:
self.fail(
"Cannot find docker CLI in path. Please provide it explicitly with the docker_cli parameter"
)
self._cli = cli
self._cli_base = [self._cli]
docker_host = common_args["docker_host"]
if not docker_host and not common_args["cli_context"]:
docker_host = DEFAULT_DOCKER_HOST
if docker_host:
self._cli_base.extend(["--host", docker_host])
if common_args["validate_certs"]:
self._cli_base.append("--tlsverify")
elif common_args["tls"]:
self._cli_base.append("--tls")
if common_args["ca_path"]:
self._cli_base.extend(["--tlscacert", common_args["ca_path"]])
if common_args["client_cert"]:
self._cli_base.extend(["--tlscert", common_args["client_cert"]])
if common_args["client_key"]:
self._cli_base.extend(["--tlskey", common_args["client_key"]])
if common_args["cli_context"]:
self._cli_base.extend(["--context", common_args["cli_context"]])
# `--format json` was only added as a shorthand for `--format {{ json . }}` in Docker 23.0
dummy, self._version, dummy2 = self.call_cli_json(
"version", "--format", "{{ json . }}", check_rc=True
)
self._info: dict[str, t.Any] | None = None
if needs_api_version:
api_version_string = self._version["Server"].get(
"ApiVersion"
) or self._version["Server"].get("APIVersion")
if not isinstance(self._version.get("Server"), dict) or not isinstance(
api_version_string, str
):
self.fail(
"Cannot determine Docker Daemon information. Are you maybe using podman instead of docker?"
)
self.docker_api_version_str = to_text(api_version_string)
self.docker_api_version = LooseVersion(self.docker_api_version_str)
min_docker_api_version = min_docker_api_version or "1.25"
if self.docker_api_version < LooseVersion(min_docker_api_version):
self.fail(
f"Docker API version is {self.docker_api_version_str}. Minimum version required is {min_docker_api_version}."
)
else:
self.docker_api_version_str = None
self.docker_api_version = None
if min_docker_api_version is not None:
self.fail(
"Internal error: cannot have needs_api_version=False with min_docker_api_version not None"
)
def log(self, msg: str, pretty_print: bool = False) -> None:
pass
# if self.debug:
# from .util import log_debug
# log_debug(msg, pretty_print=pretty_print)
def get_cli(self) -> str:
return self._cli
def get_version_info(self) -> str:
return self._version
def _compose_cmd(self, args: t.Sequence[str]) -> list[str]:
return self._cli_base + list(args)
def _compose_cmd_str(self, args: t.Sequence[str]) -> str:
return " ".join(shlex.quote(a) for a in self._compose_cmd(args))
@abc.abstractmethod
def call_cli(
self,
*args: str,
check_rc: bool = False,
data: bytes | None = None,
cwd: str | None = None,
environ_update: dict[str, str] | None = None,
) -> tuple[int, bytes, bytes]:
pass
def call_cli_json(
self,
*args: str,
check_rc: bool = False,
data: bytes | None = None,
cwd: str | None = None,
environ_update: dict[str, str] | None = None,
warn_on_stderr: bool = False,
) -> tuple[int, t.Any, bytes]:
rc, stdout, stderr = self.call_cli(
*args, check_rc=check_rc, data=data, cwd=cwd, environ_update=environ_update
)
if warn_on_stderr and stderr:
self.warn(to_text(stderr))
try:
data = json.loads(stdout)
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
cmd=self._compose_cmd_str(args),
rc=rc,
stdout=stdout,
stderr=stderr,
)
return rc, data, stderr
def call_cli_json_stream(
self,
*args: str,
check_rc: bool = False,
data: bytes | None = None,
cwd: str | None = None,
environ_update: dict[str, str] | None = None,
warn_on_stderr: bool = False,
) -> tuple[int, list[t.Any], bytes]:
rc, stdout, stderr = self.call_cli(
*args, check_rc=check_rc, data=data, cwd=cwd, environ_update=environ_update
)
if warn_on_stderr and stderr:
self.warn(to_text(stderr))
result = []
try:
for line in stdout.splitlines():
line = line.strip()
if line.startswith(b"{"):
result.append(json.loads(line))
except Exception as exc: # pylint: disable=broad-exception-caught
self.fail(
f"Error while parsing JSON output of {self._compose_cmd_str(args)}: {exc}\nJSON output: {to_text(stdout)}\n\nError output:\n{to_text(stderr)}",
cmd=self._compose_cmd_str(args),
rc=rc,
stdout=stdout,
stderr=stderr,
)
return rc, result, stderr
@abc.abstractmethod
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
pass
@abc.abstractmethod
def warn(self, msg: str) -> None:
pass
@abc.abstractmethod
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
pass
def get_cli_info(self) -> dict[str, t.Any]:
if self._info is None:
dummy, self._info, dummy2 = self.call_cli_json(
"info", "--format", "{{ json . }}", check_rc=True
)
return self._info
def get_client_plugin_info(self, component: str) -> dict[str, t.Any] | None:
cli_info = self.get_cli_info()
if not isinstance(cli_info.get("ClientInfo"), dict):
self.fail(
"Cannot determine Docker client information. Are you maybe using podman instead of docker?"
)
for plugin in cli_info["ClientInfo"].get("Plugins") or []:
if plugin.get("Name") == component:
return plugin
return None
def _image_lookup(self, name: str, tag: str) -> list[dict[str, t.Any]]:
"""
Including a tag in the name parameter sent to the Docker SDK for Python images method
does not work consistently. Instead, get the result set for name and manually check
if the tag exists.
"""
dummy, images, dummy2 = self.call_cli_json_stream(
"image",
"ls",
"--format",
"{{ json . }}",
"--no-trunc",
"--filter",
f"reference={name}",
check_rc=True,
)
if tag:
response = images
images = []
for image in response:
if image.get("Tag") == tag or image.get("Digest") == tag:
images = [image]
break
return images
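# Illustrative sketch: for name "ubuntu" and tag "22.04", the call above is
# roughly equivalent to running
#
#     docker image ls --format '{{ json . }}' --no-trunc --filter reference=ubuntu
#
# and then keeping only the entry whose "Tag" (or "Digest") equals "22.04".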
@t.overload
def find_image(self, name: None, tag: str) -> None: ...
@t.overload
def find_image(self, name: str, tag: str) -> dict[str, t.Any] | None: ...
def find_image(self, name: str | None, tag: str) -> dict[str, t.Any] | None:
"""
Lookup an image (by name and tag) and return the inspection results.
"""
if not name:
return None
self.log(f"Find image {name}:{tag}")
images = self._image_lookup(name, tag)
if not images:
# In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
registry, repo_name = resolve_repository_name(name)
if registry == "docker.io":
# If docker.io is explicitly there in name, the image
# is not found in some cases (#41509)
self.log(f"Check for docker.io image: {repo_name}")
images = self._image_lookup(repo_name, tag)
if not images and repo_name.startswith("library/"):
# Sometimes library/xxx images are not found
lookup = repo_name[len("library/") :]
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images:
# Last case for some Docker versions: if docker.io was not there,
# it can be that the image was not found either
# (https://github.com/ansible/ansible/pull/15586)
lookup = f"{registry}/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if not images and "/" not in repo_name:
# This seems to be happening with podman-docker
# (https://github.com/ansible-collections/community.docker/issues/291)
lookup = f"{registry}/library/{repo_name}"
self.log(f"Check for docker.io image: {lookup}")
images = self._image_lookup(lookup, tag)
if len(images) > 1:
self.fail(f"Daemon returned more than one result for {name}:{tag}")
if len(images) == 1:
rc, image, stderr = self.call_cli_json("image", "inspect", images[0]["ID"])
if not image:
self.log(f"Image {name}:{tag} not found.")
return None
if rc != 0:
self.fail(f"Error inspecting image {name}:{tag} - {to_text(stderr)}")
return image[0]
self.log(f"Image {name}:{tag} not found.")
return None
@t.overload
def find_image_by_id(
self, image_id: None, accept_missing_image: bool = False
) -> None: ...
@t.overload
def find_image_by_id(
self, image_id: str | None, accept_missing_image: bool = False
) -> dict[str, t.Any] | None: ...
def find_image_by_id(
self, image_id: str | None, accept_missing_image: bool = False
) -> dict[str, t.Any] | None:
"""
Lookup an image (by ID) and return the inspection results.
"""
if not image_id:
return None
self.log(f"Find image {image_id} (by ID)")
rc, image, stderr = self.call_cli_json("image", "inspect", image_id)
if not image:
if not accept_missing_image:
self.fail(f"Error inspecting image ID {image_id} - {to_text(stderr)}")
self.log(f"Image {image_id} not found.")
return None
if rc != 0:
self.fail(f"Error inspecting image ID {image_id} - {to_text(stderr)}")
return image[0]
class AnsibleModuleDockerClient(AnsibleDockerClientBase):
def __init__(
self,
argument_spec: dict[str, t.Any] | None = None,
supports_check_mode: bool = False,
mutually_exclusive: Sequence[Sequence[str]] | None = None,
required_together: Sequence[Sequence[str]] | None = None,
required_if: (
Sequence[
tuple[str, t.Any, Sequence[str]]
| tuple[str, t.Any, Sequence[str], bool]
]
| None
) = None,
required_one_of: Sequence[Sequence[str]] | None = None,
required_by: Mapping[str, Sequence[str]] | None = None,
min_docker_api_version: str | None = None,
fail_results: dict[str, t.Any] | None = None,
needs_api_version: bool = True,
) -> None:
# Modules can put information in here which will always be returned
# in case client.fail() is called.
self.fail_results = fail_results or {}
merged_arg_spec = {}
merged_arg_spec.update(DOCKER_COMMON_ARGS)
if argument_spec:
merged_arg_spec.update(argument_spec)
self.arg_spec = merged_arg_spec
mutually_exclusive_params: list[Sequence[str]] = [
("docker_host", "cli_context")
]
mutually_exclusive_params += DOCKER_MUTUALLY_EXCLUSIVE
if mutually_exclusive:
mutually_exclusive_params += mutually_exclusive
required_together_params: list[Sequence[str]] = []
required_together_params += DOCKER_REQUIRED_TOGETHER
if required_together:
required_together_params += required_together
self.module = AnsibleModule(
argument_spec=merged_arg_spec,
supports_check_mode=supports_check_mode,
mutually_exclusive=mutually_exclusive_params,
required_together=required_together_params,
required_if=required_if,
required_one_of=required_one_of,
required_by=required_by or {},
)
self.debug = False # self.module.params['debug']
self.check_mode = self.module.check_mode
self.diff = self.module._diff
common_args = dict((k, self.module.params[k]) for k in DOCKER_COMMON_ARGS)
super().__init__(
common_args,
min_docker_api_version=min_docker_api_version,
needs_api_version=needs_api_version,
)
def call_cli(
self,
*args: str,
check_rc: bool = False,
data: bytes | None = None,
cwd: str | None = None,
environ_update: dict[str, str] | None = None,
) -> tuple[int, bytes, bytes]:
environment = self._environment.copy()
if environ_update:
environment.update(environ_update)
rc, stdout, stderr = self.module.run_command(
self._compose_cmd(args),
binary_data=True,
check_rc=check_rc,
cwd=cwd,
data=data,
encoding=None,
environ_update=environment,
expand_user_and_vars=False,
ignore_invalid_cwd=False,
)
return rc, stdout, stderr
def fail(self, msg: str, **kwargs: t.Any) -> t.NoReturn:
self.fail_results.update(kwargs)
self.module.fail_json(msg=msg, **sanitize_result(self.fail_results))
def warn(self, msg: str) -> None:
self.module.warn(msg)
def deprecate(
self,
msg: str,
version: str | None = None,
date: str | None = None,
collection_name: str | None = None,
) -> None:
self.module.deprecate(
msg, version=version, date=date, collection_name=collection_name
)

File diff suppressed because it is too large

View File

@@ -0,0 +1,591 @@
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import base64
import datetime
import io
import json
import os
import os.path
import shutil
import stat
import tarfile
import typing as t
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible_collections.community.docker.plugins.module_utils._api.errors import (
APIError,
NotFound,
)
if t.TYPE_CHECKING:
from collections.abc import Callable
from _typeshed import WriteableBuffer
from ansible_collections.community.docker.plugins.module_utils._api.api.client import (
APIClient,
)
class DockerFileCopyError(Exception):
pass
class DockerUnexpectedError(DockerFileCopyError):
pass
class DockerFileNotFound(DockerFileCopyError):
pass
def _put_archive(
client: APIClient, container: str, path: str, data: bytes | t.Generator[bytes]
) -> bool:
# data can also be file object for streaming. This is because _put uses requests's put().
# See https://requests.readthedocs.io/en/latest/user/advanced/#streaming-uploads
url = client._url("/containers/{0}/archive", container)
res = client._put(url, params={"path": path}, data=data)
client._raise_for_status(res)
return res.status_code == 200
def _symlink_tar_creator(
b_in_path: bytes,
file_stat: os.stat_result,
out_file: str | bytes,
user_id: int,
group_id: int,
mode: int | None = None,
user_name: str | None = None,
) -> bytes:
if not stat.S_ISLNK(file_stat.st_mode):
raise DockerUnexpectedError("stat information is not for a symlink")
bio = io.BytesIO()
with tarfile.open(
fileobj=bio, mode="w|", dereference=False, encoding="utf-8"
) as tar:
# Note that without both name (bytes) and arcname (unicode), this either fails
# for Python 2.7, Python 3.5/3.6, or Python 3.7+. Only when passing both (in
# this form) does it work with Python 2.7, 3.5, 3.6, and 3.7 up to 3.11.
tarinfo = tar.gettarinfo(b_in_path, arcname=to_text(out_file))
tarinfo.uid = user_id
tarinfo.uname = ""
if user_name:
tarinfo.uname = user_name
tarinfo.gid = group_id
tarinfo.gname = ""
tarinfo.mode &= 0o700
if mode is not None:
tarinfo.mode = mode
if not tarinfo.issym():
raise DockerUnexpectedError("stat information is not for a symlink")
tar.addfile(tarinfo)
return bio.getvalue()
def _symlink_tar_generator(
b_in_path: bytes,
file_stat: os.stat_result,
out_file: str | bytes,
user_id: int,
group_id: int,
mode: int | None = None,
user_name: str | None = None,
) -> t.Generator[bytes]:
yield _symlink_tar_creator(
b_in_path, file_stat, out_file, user_id, group_id, mode, user_name
)
def _regular_file_tar_generator(
b_in_path: bytes,
file_stat: os.stat_result,
out_file: str | bytes,
user_id: int,
group_id: int,
mode: int | None = None,
user_name: str | None = None,
) -> t.Generator[bytes]:
if not stat.S_ISREG(file_stat.st_mode):
raise DockerUnexpectedError("stat information is not for a regular file")
tarinfo = tarfile.TarInfo()
tarinfo.name = (
os.path.splitdrive(to_text(out_file))[1].replace(os.sep, "/").lstrip("/")
)
tarinfo.mode = (file_stat.st_mode & 0o700) if mode is None else mode
tarinfo.uid = user_id
tarinfo.gid = group_id
tarinfo.size = file_stat.st_size
tarinfo.mtime = file_stat.st_mtime
tarinfo.type = tarfile.REGTYPE
tarinfo.linkname = ""
if user_name:
tarinfo.uname = user_name
tarinfo_buf = tarinfo.tobuf()
total_size = len(tarinfo_buf)
yield tarinfo_buf
size = tarinfo.size
total_size += size
with open(b_in_path, "rb") as f:
while size > 0:
to_read = min(size, 65536)
buf = f.read(to_read)
if not buf:
break
size -= len(buf)
yield buf
if size:
# If for some reason the file shrank, fill up to the announced size with zeros.
# (If it grew, ignore the remainder.)
yield tarfile.NUL * size
remainder = tarinfo.size % tarfile.BLOCKSIZE
if remainder:
# We need to write a multiple of 512 bytes. Fill up with zeros.
yield tarfile.NUL * (tarfile.BLOCKSIZE - remainder)
total_size += tarfile.BLOCKSIZE - remainder
# End with two zeroed blocks
yield tarfile.NUL * (2 * tarfile.BLOCKSIZE)
total_size += 2 * tarfile.BLOCKSIZE
remainder = total_size % tarfile.RECORDSIZE
if remainder > 0:
yield tarfile.NUL * (tarfile.RECORDSIZE - remainder)
def _regular_content_tar_generator(
content: bytes,
out_file: str | bytes,
user_id: int,
group_id: int,
mode: int,
user_name: str | None = None,
) -> t.Generator[bytes]:
tarinfo = tarfile.TarInfo()
tarinfo.name = (
os.path.splitdrive(to_text(out_file))[1].replace(os.sep, "/").lstrip("/")
)
tarinfo.mode = mode
tarinfo.uid = user_id
tarinfo.gid = group_id
tarinfo.size = len(content)
tarinfo.mtime = int(datetime.datetime.now().timestamp())
tarinfo.type = tarfile.REGTYPE
tarinfo.linkname = ""
if user_name:
tarinfo.uname = user_name
tarinfo_buf = tarinfo.tobuf()
total_size = len(tarinfo_buf)
yield tarinfo_buf
total_size += len(content)
yield content
remainder = tarinfo.size % tarfile.BLOCKSIZE
if remainder:
# We need to write a multiple of 512 bytes. Fill up with zeros.
yield tarfile.NUL * (tarfile.BLOCKSIZE - remainder)
total_size += tarfile.BLOCKSIZE - remainder
# End with two zeroed blocks
yield tarfile.NUL * (2 * tarfile.BLOCKSIZE)
total_size += 2 * tarfile.BLOCKSIZE
remainder = total_size % tarfile.RECORDSIZE
if remainder > 0:
yield tarfile.NUL * (tarfile.RECORDSIZE - remainder)
def put_file(
client: APIClient,
container: str,
in_path: str,
out_path: str,
user_id: int,
group_id: int,
mode: int | None = None,
user_name: str | None = None,
follow_links: bool = False,
) -> None:
"""Transfer a file from local to Docker container."""
if not os.path.exists(to_bytes(in_path, errors="surrogate_or_strict")):
raise DockerFileNotFound(f"file or module does not exist: {to_text(in_path)}")
b_in_path = to_bytes(in_path, errors="surrogate_or_strict")
out_dir, out_file = os.path.split(out_path)
if follow_links:
file_stat = os.stat(b_in_path)
else:
file_stat = os.lstat(b_in_path)
if stat.S_ISREG(file_stat.st_mode):
stream = _regular_file_tar_generator(
b_in_path,
file_stat,
out_file,
user_id,
group_id,
mode=mode,
user_name=user_name,
)
elif stat.S_ISLNK(file_stat.st_mode):
stream = _symlink_tar_generator(
b_in_path,
file_stat,
out_file,
user_id,
group_id,
mode=mode,
user_name=user_name,
)
else:
file_part = " referenced by" if follow_links else ""
raise DockerFileCopyError(
f"File{file_part} {in_path} is neither a regular file nor a symlink (stat mode {oct(file_stat.st_mode)})."
)
ok = _put_archive(client, container, out_dir, stream)
if not ok:
raise DockerUnexpectedError(
f'Unknown error while creating file "{out_path}" in container "{container}".'
)
def put_file_content(
client: APIClient,
container: str,
content: bytes,
out_path: str,
user_id: int,
group_id: int,
mode: int,
user_name: str | None = None,
) -> None:
"""Transfer a file from local to Docker container."""
out_dir, out_file = os.path.split(out_path)
stream = _regular_content_tar_generator(
content, out_file, user_id, group_id, mode, user_name=user_name
)
ok = _put_archive(client, container, out_dir, stream)
if not ok:
raise DockerUnexpectedError(
f'Unknown error while creating file "{out_path}" in container "{container}".'
)
def stat_file(
client: APIClient,
container: str,
in_path: str,
follow_links: bool = False,
log: Callable[[str], None] | None = None,
) -> tuple[str, dict[str, t.Any] | None, str | None]:
"""Fetch information on a file from a Docker container to local.
Return a tuple ``(path, stat_data, link_target)`` where:
:path: is the resolved path in case ``follow_links=True``;
:stat_data: is ``None`` if the file does not exist, or a dictionary with fields
``name`` (string), ``size`` (integer), ``mode`` (integer, see https://pkg.go.dev/io/fs#FileMode),
``mtime`` (string), and ``linkTarget`` (string);
:link_target: is ``None`` if the file is not a symlink or when ``follow_links=False``,
and a string with the symlink target otherwise.
"""
considered_in_paths = set()
while True:
if in_path in considered_in_paths:
raise DockerFileCopyError(
f"Found infinite symbolic link loop when trying to stating {in_path!r}"
)
considered_in_paths.add(in_path)
if log:
log(f"FETCH: Stating {in_path!r}")
response = client._head(
client._url("/containers/{0}/archive", container),
params={"path": in_path},
)
if response.status_code == 404:
return in_path, None, None
client._raise_for_status(response)
header = response.headers.get("x-docker-container-path-stat")
try:
if header is None:
raise ValueError("x-docker-container-path-stat header not present")
stat_data = json.loads(base64.b64decode(header))
except Exception as exc:
raise DockerUnexpectedError(
f"When retrieving information for {in_path} from {container}, obtained header {header!r} that cannot be loaded as JSON: {exc}"
) from exc
# https://pkg.go.dev/io/fs#FileMode: bit 32 - 5 means ModeSymlink
if stat_data["mode"] & (1 << (32 - 5)) != 0:
link_target = stat_data["linkTarget"]
if not follow_links:
return in_path, stat_data, link_target
in_path = os.path.join(os.path.split(in_path)[0], link_target)
continue
return in_path, stat_data, None
class _RawGeneratorFileobj(io.RawIOBase):
def __init__(self, stream: t.Generator[bytes]):
self._stream = stream
self._buf = b""
def readable(self) -> bool:
return True
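# Drain as much of the internal buffer as fits into b[index:length];
# return the new write position.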
def _readinto_from_buf(self, b: WriteableBuffer, index: int, length: int) -> int:
cpy = min(length - index, len(self._buf))
if cpy:
b[index : index + cpy] = self._buf[:cpy] # type: ignore # TODO!
self._buf = self._buf[cpy:]
index += cpy
return index
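# RawIOBase hook: fill b from the buffer first, then from at most one more
# generator chunk; returning 0 means the stream is exhausted.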
def readinto(self, b: WriteableBuffer) -> int:
index = 0
length = len(b) # type: ignore # TODO!
index = self._readinto_from_buf(b, index, length)
if index == length:
return index
try:
self._buf += next(self._stream)
except StopIteration:
return index
return self._readinto_from_buf(b, index, length)
def _stream_generator_to_fileobj(stream: t.Generator[bytes]) -> io.BufferedReader:
"""Given a generator that generates chunks of bytes, create a readable buffered stream."""
raw = _RawGeneratorFileobj(stream)
return io.BufferedReader(raw)
_T = t.TypeVar("_T")
def fetch_file_ex(
client: APIClient,
container: str,
in_path: str,
process_none: Callable[[str], _T],
process_regular: Callable[[str, tarfile.TarFile, tarfile.TarInfo], _T],
process_symlink: Callable[[str, tarfile.TarInfo], _T],
process_other: Callable[[str, tarfile.TarInfo], _T],
follow_links: bool = False,
log: Callable[[str], None] | None = None,
) -> _T:
"""Fetch a file (as a tar file entry) from a Docker container to local."""
considered_in_paths: set[str] = set()
while True:
if in_path in considered_in_paths:
raise DockerFileCopyError(
f'Found infinite symbolic link loop when trying to fetch "{in_path}"'
)
considered_in_paths.add(in_path)
if log:
log(f'FETCH: Fetching "{in_path}"')
try:
stream = client.get_raw_stream(
"/containers/{0}/archive",
container,
params={"path": in_path},
headers={"Accept-Encoding": "identity"},
)
except NotFound:
return process_none(in_path)
with tarfile.open(
fileobj=_stream_generator_to_fileobj(stream), mode="r|"
) as tar:
symlink_member: tarfile.TarInfo | None = None
result: _T | None = None
found = False
for member in tar:
if found:
raise DockerUnexpectedError(
"Received tarfile contains more than one file!"
)
found = True
if member.issym():
symlink_member = member
continue
if member.isfile():
result = process_regular(in_path, tar, member)
continue
result = process_other(in_path, member)
if symlink_member:
if not follow_links:
return process_symlink(in_path, symlink_member)
in_path = os.path.join(
os.path.split(in_path)[0], symlink_member.linkname
)
if log:
log(f'FETCH: Following symbolic link to "{in_path}"')
continue
if found:
return result # type: ignore
raise DockerUnexpectedError("Received tarfile is empty!")
def fetch_file(
client: APIClient,
container: str,
in_path: str,
out_path: str,
follow_links: bool = False,
log: Callable[[str], None] | None = None,
) -> str:
b_out_path = to_bytes(out_path, errors="surrogate_or_strict")
def process_none(in_path: str) -> str:
raise DockerFileNotFound(
f"File {in_path} does not exist in container {container}"
)
def process_regular(
in_path: str, tar: tarfile.TarFile, member: tarfile.TarInfo
) -> str:
if not follow_links and os.path.exists(b_out_path):
os.unlink(b_out_path)
reader = tar.extractfile(member)
if reader:
with reader as in_f, open(b_out_path, "wb") as out_f:
shutil.copyfileobj(in_f, out_f)
return in_path
def process_symlink(in_path: str, member: tarfile.TarInfo) -> str:
if os.path.exists(b_out_path):
os.unlink(b_out_path)
os.symlink(member.linkname, b_out_path)
return in_path
def process_other(in_path: str, member: tarfile.TarInfo) -> str:
raise DockerFileCopyError(
f'Remote file "{in_path}" is not a regular file or a symbolic link'
)
return fetch_file_ex(
client,
container,
in_path,
process_none,
process_regular,
process_symlink,
process_other,
follow_links=follow_links,
log=log,
)
def _execute_command(
client: APIClient,
container: str,
command: list[str],
log: Callable[[str], None] | None = None,
check_rc: bool = False,
) -> tuple[int, bytes, bytes]:
if log:
log(f"Executing {command} in {container}")
data = {
"Container": container,
"User": "",
"Privileged": False,
"Tty": False,
"AttachStdin": False,
"AttachStdout": True,
"AttachStderr": True,
"Cmd": command,
}
if "detachKeys" in client._general_configs:
data["detachKeys"] = client._general_configs["detachKeys"]
try:
exec_data = client.post_json_to_json(
"/containers/{0}/exec", container, data=data
)
except NotFound as e:
raise DockerFileCopyError(f'Could not find container "{container}"') from e
except APIError as e:
if e.response is not None and e.response.status_code == 409:
raise DockerFileCopyError(
f'Cannot execute command in paused container "{container}"'
) from e
raise
exec_id = exec_data["Id"]
data = {"Tty": False, "Detach": False}
stdout, stderr = client.post_json_to_stream(
"/exec/{0}/start", exec_id, data=data, stream=False, demux=True, tty=False
)
result = client.get_json("/exec/{0}/json", exec_id)
rc: int = result.get("ExitCode") or 0
stdout = stdout or b""
stderr = stderr or b""
if log:
log(f"Exit code {rc}, stdout {stdout!r}, stderr {stderr!r}")
if check_rc and rc != 0:
command_str = " ".join(command)
raise DockerUnexpectedError(
f'Obtained unexpected exit code {rc} when running "{command_str}" in {container}.\nSTDOUT: {stdout!r}\nSTDERR: {stderr!r}'
)
return rc, stdout, stderr
def determine_user_group(
client: APIClient, container: str, log: Callable[[str], None] | None = None
) -> tuple[int, int]:
dummy_rc, stdout, dummy_stderr = _execute_command(
client, container, ["/bin/sh", "-c", "id -u && id -g"], check_rc=True, log=log
)
stdout_lines = stdout.splitlines()
if len(stdout_lines) != 2:
raise DockerUnexpectedError(
f"Expected two-line output to obtain user and group ID for container {container}, but got {len(stdout_lines)} lines:\n{stdout!r}"
)
user_id, group_id = stdout_lines
try:
return int(user_id), int(group_id)
except ValueError as exc:
raise DockerUnexpectedError(
f"Expected two-line output with numeric IDs to obtain user and group ID for container {container}, but got {user_id!r} and {group_id!r} instead"
) from exc
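Taken together, a typical round trip through these helpers looks roughly like the following sketch (illustrative only; `client` is assumed to be an already-constructed APIClient from the collection's vendored Docker API layer, and the container name and paths are made up):

# Illustrative sketch; `client`, the container name, and the paths are assumptions.
uid, gid = determine_user_group(client, "web")
put_file_content(
    client,
    "web",
    content=b"hello from the controller\n",
    out_path="/tmp/hello.txt",
    user_id=uid,
    group_id=gid,
    mode=0o644,
)
resolved, stat_data, link_target = stat_file(client, "web", "/tmp/hello.txt")
fetch_file(client, "web", "/tmp/hello.txt", "/tmp/copy-back.txt")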


@@ -0,0 +1,166 @@
# Copyright 2022 Red Hat | Ansible
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import os
import tarfile
class ImageArchiveManifestSummary:
"""
Represents data extracted from a manifest.json found in the tar archive output of the
"docker image save some:tag > some.tar" command.
"""
def __init__(self, image_id: str, repo_tags: list[str]) -> None:
"""
:param image_id: File name portion of Config entry, e.g. abcde12345 from abcde12345.json
:param repo_tags: Docker image names, e.g. ["hello-world:latest"]
"""
self.image_id = image_id
self.repo_tags = repo_tags
class ImageArchiveInvalidException(Exception):
pass
def api_image_id(archive_image_id: str) -> str:
"""
Accepts an image hash in the format stored in manifest.json, and returns an equivalent identifier
that represents the same image hash, but in the format presented by the Docker Engine API.
:param archive_image_id: plain image hash
:returns: Prefixed hash used by the REST API
"""
return f"sha256:{archive_image_id}"
def load_archived_image_manifest(
archive_path: str,
) -> list[ImageArchiveManifestSummary] | None:
"""
Attempts to get image IDs and image names from metadata stored in the image
archive tar file.
The tar should contain a file "manifest.json" with an array with one or more entries,
and every entry should have a Config field with the image ID in its file name, as
well as a RepoTags list, which typically has only one entry.
:raises:
ImageArchiveInvalidException: A file exists at archive_path, but an image ID could not be extracted from it.
:param archive_path: Tar file to read
:return: None if there is no file at archive_path, otherwise a list of ImageArchiveManifestSummary objects.
"""
try:
# Check for existence up front instead of relying on FileNotFoundError.
if not os.path.isfile(archive_path):
return None
with tarfile.open(archive_path, "r") as tf:
try:
try:
reader = tf.extractfile("manifest.json")
if reader is None:
raise ImageArchiveInvalidException(
"Failed to read manifest.json"
)
with reader as ef:
manifest = json.load(ef)
except ImageArchiveInvalidException:
raise
except Exception as exc:
raise ImageArchiveInvalidException(
f"Failed to decode and deserialize manifest.json: {exc}"
) from exc
if len(manifest) == 0:
raise ImageArchiveInvalidException(
"Expected to have at least one entry in manifest.json but found none"
)
result = []
for index, meta in enumerate(manifest):
try:
config_file = meta["Config"]
except KeyError as exc:
raise ImageArchiveInvalidException(
f"Failed to get Config entry from {index + 1}th manifest in manifest.json: {exc}"
) from exc
# Extracts hash without 'sha256:' prefix
try:
# Strip off .json filename extension, leaving just the hash.
image_id = os.path.splitext(config_file)[0]
except Exception as exc:
raise ImageArchiveInvalidException(
f"Failed to extract image id from config file name {config_file}: {exc}"
) from exc
for prefix in ("blobs/sha256/",): # Moby 25.0.0, Docker API 1.44
if image_id.startswith(prefix):
image_id = image_id[len(prefix) :]
try:
repo_tags = meta["RepoTags"]
except KeyError as exc:
raise ImageArchiveInvalidException(
f"Failed to get RepoTags entry from {index + 1}th manifest in manifest.json: {exc}"
) from exc
result.append(
ImageArchiveManifestSummary(
image_id=image_id, repo_tags=repo_tags
)
)
return result
except ImageArchiveInvalidException:
raise
except Exception as exc:
raise ImageArchiveInvalidException(
f"Failed to extract manifest.json from tar file {archive_path}: {exc}"
) from exc
except ImageArchiveInvalidException:
raise
except Exception as exc:
raise ImageArchiveInvalidException(
f"Failed to open tar file {archive_path}: {exc}"
) from exc
def archived_image_manifest(archive_path: str) -> ImageArchiveManifestSummary | None:
"""
Attempts to get Image.Id and image name from metadata stored in the image
archive tar file.
The tar should contain a file "manifest.json" with an array with a single entry,
and the entry should have a Config field with the image ID in its file name, as
well as a RepoTags list, which typically has only one entry.
:raises:
ImageArchiveInvalidException: A file exists at archive_path, but an image ID could not be extracted from it.
:param archive_path: Tar file to read
:return: None if there is no file at archive_path, otherwise the archive's single ImageArchiveManifestSummary; its image_id has no sha256: prefix.
"""
results = load_archived_image_manifest(archive_path)
if results is None:
return None
if len(results) == 1:
return results[0]
raise ImageArchiveInvalidException(
f"Expected to have one entry in manifest.json but found {len(results)}"
)
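As a usage sketch (the archive path is hypothetical), reading back the metadata of an archive produced by "docker image save":

# Illustrative sketch; the archive path is an assumption.
summary = archived_image_manifest("/tmp/hello-world.tar")
if summary is None:
    print("no archive at that path")
else:
    print(api_image_id(summary.image_id))  # e.g. sha256:<64-hex-digit hash>
    print(summary.repo_tags)               # e.g. ['hello-world:latest']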


@@ -0,0 +1,212 @@
# Copyright (c) 2024, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
"""
Parse go logfmt messages.
See https://pkg.go.dev/github.com/kr/logfmt?utm_source=godoc for information on the format.
"""
from __future__ import annotations
import typing as t
from enum import Enum
# The format is defined in https://pkg.go.dev/github.com/kr/logfmt?utm_source=godoc
# (look for "EBNFish")
class InvalidLogFmt(Exception):
pass
class _Mode(Enum):
GARBAGE = 0
KEY = 1
EQUAL = 2
IDENT_VALUE = 3
QUOTED_VALUE = 4
_ESCAPE_DICT = {
'"': '"',
"\\": "\\",
"'": "'",
"/": "/",
"b": "\b",
"f": "\f",
"n": "\n",
"r": "\r",
"t": "\t",
}
_HEX_DICT = {
"0": 0,
"1": 1,
"2": 2,
"3": 3,
"4": 4,
"5": 5,
"6": 6,
"7": 7,
"8": 8,
"9": 9,
"a": 0xA,
"b": 0xB,
"c": 0xC,
"d": 0xD,
"e": 0xE,
"f": 0xF,
"A": 0xA,
"B": 0xB,
"C": 0xC,
"D": 0xD,
"E": 0xE,
"F": 0xF,
}
def _is_ident(cur: str) -> bool:
return cur > " " and cur not in ('"', "=")
class _Parser:
def __init__(self, line: str) -> None:
self.line = line
self.index = 0
self.length = len(line)
def done(self) -> bool:
return self.index >= self.length
def cur(self) -> str:
return self.line[self.index]
def next(self) -> None:
self.index += 1
def prev(self) -> None:
self.index -= 1
def parse_unicode_sequence(self) -> str:
if self.index + 6 > self.length:
raise InvalidLogFmt("Not enough space for unicode escape")
if self.line[self.index : self.index + 2] != "\\u":
raise InvalidLogFmt("Invalid unicode escape start")
v = 0
for index in range(self.index + 2, self.index + 6):
v <<= 4
try:
v += _HEX_DICT[self.line[index]]
except KeyError:
raise InvalidLogFmt(
f"Invalid unicode escape digit {self.line[index]!r}"
) from None
self.index += 6
return chr(v)
def parse_line(line: str, logrus_mode: bool = False) -> dict[str, t.Any]:
result: dict[str, t.Any] = {}
parser = _Parser(line)
key: list[str] = []
value: list[str] = []
mode = _Mode.GARBAGE
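# Each mode has a handler below that consumes the current character and
# returns the next mode; the loop at the bottom drives the state machine.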
def handle_kv(has_no_value: bool = False) -> None:
k = "".join(key)
v = None if has_no_value else "".join(value)
result[k] = v
del key[:]
del value[:]
def parse_garbage(cur: str) -> _Mode:
if _is_ident(cur):
return _Mode.KEY
parser.next()
return _Mode.GARBAGE
def parse_key(cur: str) -> _Mode:
if _is_ident(cur):
key.append(cur)
parser.next()
return _Mode.KEY
if cur == "=":
parser.next()
return _Mode.EQUAL
if logrus_mode:
raise InvalidLogFmt('Key must always be followed by "=" in logrus mode')
handle_kv(has_no_value=True)
parser.next()
return _Mode.GARBAGE
def parse_equal(cur: str) -> _Mode:
if _is_ident(cur):
value.append(cur)
parser.next()
return _Mode.IDENT_VALUE
if cur == '"':
parser.next()
return _Mode.QUOTED_VALUE
handle_kv()
parser.next()
return _Mode.GARBAGE
def parse_ident_value(cur: str) -> _Mode:
if _is_ident(cur):
value.append(cur)
parser.next()
return _Mode.IDENT_VALUE
handle_kv()
parser.next()
return _Mode.GARBAGE
def parse_quoted_value(cur: str) -> _Mode:
if cur == "\\":
parser.next()
if parser.done():
raise InvalidLogFmt("Unterminated escape sequence in quoted string")
cur = parser.cur()
if cur in _ESCAPE_DICT:
value.append(_ESCAPE_DICT[cur])
elif cur != "u":
es = f"\\{cur}"
raise InvalidLogFmt(f"Unknown escape sequence {es!r}")
else:
parser.prev()
value.append(parser.parse_unicode_sequence())
parser.next()
return _Mode.QUOTED_VALUE
if cur == '"':
handle_kv()
parser.next()
return _Mode.GARBAGE
if cur < " ":
raise InvalidLogFmt("Control characters in quoted string are not allowed")
value.append(cur)
parser.next()
return _Mode.QUOTED_VALUE
parsers = {
_Mode.GARBAGE: parse_garbage,
_Mode.KEY: parse_key,
_Mode.EQUAL: parse_equal,
_Mode.IDENT_VALUE: parse_ident_value,
_Mode.QUOTED_VALUE: parse_quoted_value,
}
while not parser.done():
mode = parsers[mode](parser.cur())
if mode == _Mode.KEY and logrus_mode:
raise InvalidLogFmt('Key must always be followed by "=" in logrus mode')
if mode in (_Mode.KEY, _Mode.EQUAL):
handle_kv(has_no_value=True)
elif mode == _Mode.IDENT_VALUE:
handle_kv()
elif mode == _Mode.QUOTED_VALUE:
raise InvalidLogFmt("Unterminated quoted string")
return result
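For example (illustrative input), a logrus-style line parses into a plain dict:

# Illustrative sketch only.
line = 'time="2025-11-16T10:00:00Z" level=info msg="pulling image" id=abc123'
print(parse_line(line, logrus_mode=True))
# {'time': '2025-11-16T10:00:00Z', 'level': 'info', 'msg': 'pulling image', 'id': 'abc123'}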

Some files were not shown because too many files have changed in this diff.