mirror of
https://github.com/ansible-collections/community.docker.git
synced 2026-03-31 09:04:49 +00:00
Vendor API connection code from Docker SDK for Python (#398)
* Vendor parts of the Docker SDK for Python. This is a combination of the latest git version (a48a5a9647) and the version before Python 2.7 support was removed (650aad3a5f), including some modifications to work with Ansible's module_utils system (i.e. third-party imports are guarded, and errors are reported at runtime through a new exception, MissingRequirementException).
* Create module_utils and plugin_utils for working with the vendored code. The delete call cannot be called delete() since that method already exists on requests' Session.
* Vendor more code from the Docker SDK for Python.
* Adjust code from common module_utils.
* Add unit tests from the Docker SDK for Python.
* Make the tests compile with Python 2.6, but skip them on Python 2.6.
* Skip a test that requires a network server.
* Add changelog.
* Update changelogs/fragments/398-docker-api.yml

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>

* Minimum API version is 1.25.

Co-authored-by: Brian Scholer <1260690+briantist@users.noreply.github.com>
This commit is contained in:
parent
21d112bddb
commit
4d508b4c37
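The commit message above describes guarding third-party imports so that Ansible module_utils code can report a missing library at runtime instead of crashing at import time. A minimal sketch of that pattern, with made-up names (`fakelib_that_is_not_installed` and the `RuntimeError` stand in for the real dependency and the collection's MissingRequirementException):

```python
import traceback

# Record the import failure instead of letting the module crash at import time.
FAKELIB_IMPORT_ERROR = None

try:
    import fakelib_that_is_not_installed as fakelib  # hypothetical dependency
except ImportError:
    FAKELIB_IMPORT_ERROR = traceback.format_exc()
    fakelib = None  # placeholder so later references do not raise NameError


def fail_on_missing_imports():
    # Called from module code at runtime, where a clean error can be reported
    # (e.g. via AnsibleModule.fail_json) together with the original traceback.
    if FAKELIB_IMPORT_ERROR is not None:
        raise RuntimeError(
            'You have to install fakelib_that_is_not_installed:\n'
            + FAKELIB_IMPORT_ERROR)
```

The vendored `_import_helper.py` added by this commit follows the same shape for requests, urllib3, and backports.ssl-match-hostname.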
191
Apache-2.0.txt
Normal file
@@ -0,0 +1,191 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   Copyright 2016 Docker, Inc.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
5
changelogs/fragments/398-docker-api.yml
Normal file
@@ -0,0 +1,5 @@
major_changes:
  - "The collection now contains vendored code from the Docker SDK for Python to talk to the Docker daemon.
    Modules and plugins using this code no longer need the Docker SDK for Python installed on the machine
    the module or plugin is running on
    (https://github.com/ansible-collections/community.docker/pull/398)."
@@ -1,2 +1,11 @@
docker
docker-compose
requests
paramiko

# We assume that EEs are not based on Windows, and have Python >= 3.5.
# (ansible-builder does not support conditionals, it will simply add
# the following unconditionally to the requirements)
#
# pywin32 ; sys_platform == 'win32'
# backports.ssl-match-hostname ; python_version < '3.5'
@@ -10,7 +10,6 @@ class ModuleDocFragment(object):

    # Docker doc fragment
    DOCUMENTATION = r'''

options:
    docker_host:
        description:
@@ -183,3 +182,118 @@ requirements:
      (see L(here,https://github.com/docker/docker-py/issues/1310) for details).
      This module does *not* work with docker-py."
'''

    # Docker doc fragment when using the vendored API access code
    API_DOCUMENTATION = r'''
options:
    docker_host:
        description:
            - The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
              TCP connection string. For example, C(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
              the module will automatically replace C(tcp) in the connection URL with C(https).
            - If the value is not specified in the task, the value of environment variable C(DOCKER_HOST) will be used
              instead. If the environment variable is not set, the default value will be used.
        type: str
        default: unix://var/run/docker.sock
        aliases: [ docker_url ]
    tls_hostname:
        description:
            - When verifying the authenticity of the Docker Host server, provide the expected name of the server.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_HOSTNAME) will
              be used instead. If the environment variable is not set, the default value will be used.
            - The current default value is C(localhost). This default is deprecated and will change in community.docker
              2.0.0 to be a value computed from I(docker_host). Explicitly specify C(localhost) to make sure this value
              will still be used, and to disable the deprecation message which will be shown otherwise.
        type: str
    api_version:
        description:
            - The version of the Docker API running on the Docker Host.
            - Defaults to the latest version of the API supported by this collection and the docker daemon.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_API_VERSION) will be
              used instead. If the environment variable is not set, the default value will be used.
        type: str
        default: auto
        aliases: [ docker_api_version ]
    timeout:
        description:
            - The maximum amount of time in seconds to wait on a response from the API.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_TIMEOUT) will be used
              instead. If the environment variable is not set, the default value will be used.
        type: int
        default: 60
    ca_cert:
        description:
            - Use a CA certificate when performing server verification by providing the path to a CA certificate file.
            - If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
              the file C(ca.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
        type: path
        aliases: [ tls_ca_cert, cacert_path ]
    client_cert:
        description:
            - Path to the client's TLS certificate file.
            - If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
              the file C(cert.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
        type: path
        aliases: [ tls_client_cert, cert_path ]
    client_key:
        description:
            - Path to the client's TLS key file.
            - If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
              the file C(key.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
        type: path
        aliases: [ tls_client_key, key_path ]
    ssl_version:
        description:
            - Provide a valid SSL version number. Default value determined by ssl.py module.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_SSL_VERSION) will be
              used instead.
        type: str
    tls:
        description:
            - Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
              server. Note that if I(validate_certs) is set to C(yes) as well, it will take precedence.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_TLS) will be used
              instead. If the environment variable is not set, the default value will be used.
        type: bool
        default: no
    use_ssh_client:
        description:
            - For SSH transports, use the C(ssh) CLI tool instead of paramiko.
        type: bool
        default: no
        version_added: 1.5.0
    validate_certs:
        description:
            - Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
            - If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_VERIFY) will be
              used instead. If the environment variable is not set, the default value will be used.
        type: bool
        default: no
        aliases: [ tls_verify ]
    debug:
        description:
            - Debug mode
        type: bool
        default: no

notes:
    - Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
      You can define C(DOCKER_HOST), C(DOCKER_TLS_HOSTNAME), C(DOCKER_API_VERSION), C(DOCKER_CERT_PATH), C(DOCKER_SSL_VERSION),
      C(DOCKER_TLS), C(DOCKER_TLS_VERIFY) and C(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
      with the product that sets up the environment. It will set these variables for you. See
      U(https://docs.docker.com/machine/reference/env/) for more details.
    # - When connecting to Docker daemon with TLS, you might need to install additional Python packages.
    #   For the Docker SDK for Python, version 2.4 or newer, this can be done by installing C(docker[tls]) with M(ansible.builtin.pip).
    # - Note that the Docker SDK for Python only allows to specify the path to the Docker configuration for very few functions.
    #   In general, it will use C($HOME/.docker/config.json) if the C(DOCKER_CONFIG) environment variable is not specified,
    #   and use C($DOCKER_CONFIG/config.json) otherwise.
    - This module does B(not) use the L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) to
      communicate with the Docker daemon. It uses code derived from the Docker SDK for Python that is included in this
      collection.

requirements:
    - requests
    - pywin32 (when using named pipes on Windows 32)
    - paramiko (when using SSH with I(use_ssh_client=false))
    - pyOpenSSL (when using TLS)
    - backports.ssl_match_hostname (when using TLS on Python 2)
'''
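The doc fragment above repeats the same fallback rule for every connection option: an explicit task parameter wins, then the documented environment variable, then the documented default. A minimal sketch of that resolution order (`resolve_option` and the `FALLBACKS` table are illustrative, not the collection's actual code):

```python
import os

# Option name -> (environment variable, default), mirroring the fallbacks
# documented in the doc fragment. Illustrative subset only.
FALLBACKS = {
    'docker_host': ('DOCKER_HOST', 'unix://var/run/docker.sock'),
    'api_version': ('DOCKER_API_VERSION', 'auto'),
    'timeout': ('DOCKER_TIMEOUT', 60),
}


def resolve_option(name, task_value=None, env=None):
    if env is None:
        env = os.environ
    # 1. An explicit task parameter always wins.
    if task_value is not None:
        return task_value
    env_var, default = FALLBACKS[name]
    # 2. Otherwise consult the documented environment variable.
    if env_var in env:
        return env[env_var]
    # 3. Fall back to the documented default.
    return default
```

For example, with no task value and no `DOCKER_HOST` in the environment, `resolve_option('docker_host')` yields the documented default `unix://var/run/docker.sock`.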
96
plugins/module_utils/_api/_import_helper.py
Normal file
@@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import traceback

from ansible.module_utils.six import PY2


REQUESTS_IMPORT_ERROR = None
URLLIB3_IMPORT_ERROR = None
BACKPORTS_SSL_MATCH_HOSTNAME_IMPORT_ERROR = None


try:
    from requests import Session
    from requests.adapters import HTTPAdapter
    from requests.exceptions import HTTPError, InvalidSchema
except ImportError:
    REQUESTS_IMPORT_ERROR = traceback.format_exc()

    class Session(object):
        __attrs__ = []

    class HTTPAdapter(object):
        __attrs__ = []

    class HTTPError(Exception):
        pass

    class InvalidSchema(Exception):
        pass


try:
    from requests.packages import urllib3
except ImportError:
    try:
        import urllib3
    except ImportError:
        URLLIB3_IMPORT_ERROR = traceback.format_exc()

        class _HTTPConnectionPool(object):
            pass

        class FakeURLLIB3(object):
            def __init__(self):
                self._collections = self
                self.poolmanager = self
                self.connection = self
                self.connectionpool = self

                self.RecentlyUsedContainer = object()
                self.PoolManager = object()
                self.match_hostname = object()
                self.HTTPConnectionPool = _HTTPConnectionPool

        urllib3 = FakeURLLIB3()


# Monkey-patching match_hostname with a version that supports
# IP-address checking. Not necessary for Python 3.5 and above
if PY2:
    try:
        from backports.ssl_match_hostname import match_hostname
        urllib3.connection.match_hostname = match_hostname
    except ImportError:
        BACKPORTS_SSL_MATCH_HOSTNAME_IMPORT_ERROR = traceback.format_exc()


def fail_on_missing_imports():
    if REQUESTS_IMPORT_ERROR is not None:
        from .errors import MissingRequirementException

        raise MissingRequirementException(
            'You have to install requests',
            'requests', REQUESTS_IMPORT_ERROR)
    if URLLIB3_IMPORT_ERROR is not None:
        from .errors import MissingRequirementException

        raise MissingRequirementException(
            'You have to install urllib3',
            'urllib3', URLLIB3_IMPORT_ERROR)
    if BACKPORTS_SSL_MATCH_HOSTNAME_IMPORT_ERROR is not None:
        from .errors import MissingRequirementException

        raise MissingRequirementException(
            'You have to install backports.ssl-match-hostname',
            'backports.ssl-match-hostname', BACKPORTS_SSL_MATCH_HOSTNAME_IMPORT_ERROR)
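Callers of the vendored code are expected to invoke `fail_on_missing_imports()` before issuing API calls and to translate the resulting exception into a module failure. A self-contained sketch of that flow, with stand-in definitions (the real MissingRequirementException lives in the vendored `errors` module, and real modules report via `AnsibleModule.fail_json`):

```python
class MissingRequirementException(Exception):
    # Stand-in for the exception added by this commit: it carries the
    # missing requirement's name and the original import traceback.
    def __init__(self, msg, requirement, import_exc):
        super(MissingRequirementException, self).__init__(msg)
        self.requirement = requirement
        self.import_exc = import_exc


def connect(requests_import_error):
    # Mirrors fail_on_missing_imports(): raise at call time, not import time.
    if requests_import_error is not None:
        raise MissingRequirementException(
            'You have to install requests', 'requests', requests_import_error)
    return 'connected'


def run_module(requests_import_error):
    # A module entry point catches the exception and reports it cleanly,
    # the way AnsibleModule.fail_json() would.
    try:
        return {'changed': False, 'result': connect(requests_import_error)}
    except MissingRequirementException as exc:
        return {'failed': True, 'msg': str(exc), 'missing': exc.requirement}
```

This keeps the import-time failure recoverable: the module still loads, and the user sees which requirement to install instead of an opaque traceback.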
591
plugins/module_utils/_api/api/client.py
Normal file
@ -0,0 +1,591 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# This code is part of the Ansible collection community.docker, but is an independent component.
|
||||
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
|
||||
#
|
||||
# Copyright (c) 2016-2022 Docker, Inc.
|
||||
#
|
||||
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)
|
||||
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
import json
|
||||
import logging
|
||||
import struct
|
||||
from functools import partial
|
||||
|
||||
from ansible.module_utils.six import PY3, binary_type, iteritems, string_types
|
||||
from ansible.module_utils.six.moves.urllib.parse import quote
|
||||
|
||||
from .. import auth
|
||||
from .._import_helper import fail_on_missing_imports
|
||||
from .._import_helper import HTTPError as _HTTPError
|
||||
from .._import_helper import InvalidSchema as _InvalidSchema
|
||||
from .._import_helper import Session as _Session
|
||||
from ..constants import (DEFAULT_NUM_POOLS, DEFAULT_NUM_POOLS_SSH,
|
||||
DEFAULT_MAX_POOL_SIZE, DEFAULT_TIMEOUT_SECONDS,
|
||||
DEFAULT_USER_AGENT, IS_WINDOWS_PLATFORM,
|
||||
MINIMUM_DOCKER_API_VERSION, STREAM_HEADER_SIZE_BYTES,
|
||||
DEFAULT_DATA_CHUNK_SIZE)
|
||||
from ..errors import (DockerException, InvalidVersion, TLSParameterError, MissingRequirementException,
|
||||
create_api_error_from_http_exception)
|
||||
from ..tls import TLSConfig
|
||||
from ..transport.npipeconn import NpipeHTTPAdapter
|
||||
from ..transport.npipesocket import PYWIN32_IMPORT_ERROR
|
||||
from ..transport.unixconn import UnixHTTPAdapter
|
||||
from ..transport.sshconn import SSHHTTPAdapter, PARAMIKO_IMPORT_ERROR
|
||||
from ..transport.ssladapter import SSLHTTPAdapter
|
||||
from ..utils import config, utils, json_stream
|
||||
from ..utils.decorators import check_resource, update_headers
|
||||
from ..utils.proxy import ProxyConfig
|
||||
from ..utils.socket import consume_socket_output, demux_adaptor, frames_iter
|
||||
|
||||
from .daemon import DaemonApiMixin
|
||||
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class APIClient(
|
||||
_Session,
|
||||
DaemonApiMixin):
|
||||
"""
|
||||
A low-level client for the Docker Engine API.
|
||||
|
||||
Example:
|
||||
|
||||
>>> import docker
|
||||
>>> client = docker.APIClient(base_url='unix://var/run/docker.sock')
|
||||
>>> client.version()
|
||||
{u'ApiVersion': u'1.33',
|
||||
u'Arch': u'amd64',
|
||||
u'BuildTime': u'2017-11-19T18:46:37.000000000+00:00',
|
||||
u'GitCommit': u'f4ffd2511c',
|
||||
u'GoVersion': u'go1.9.2',
|
||||
u'KernelVersion': u'4.14.3-1-ARCH',
|
||||
u'MinAPIVersion': u'1.12',
|
||||
u'Os': u'linux',
|
||||
u'Version': u'17.10.0-ce'}
|
||||
|
||||
Args:
|
||||
base_url (str): URL to the Docker server. For example,
|
||||
``unix:///var/run/docker.sock`` or ``tcp://127.0.0.1:1234``.
|
||||
version (str): The version of the API to use. Set to ``auto`` to
|
||||
automatically detect the server's version. Default: ``1.35``
|
||||
timeout (int): Default timeout for API calls, in seconds.
|
||||
tls (bool or :py:class:`~docker.tls.TLSConfig`): Enable TLS. Pass
|
||||
``True`` to enable it with default options, or pass a
|
||||
:py:class:`~docker.tls.TLSConfig` object to use custom
|
||||
configuration.
|
||||
user_agent (str): Set a custom user agent for requests to the server.
|
||||
credstore_env (dict): Override environment variables when calling the
|
||||
credential store process.
|
||||
use_ssh_client (bool): If set to `True`, an ssh connection is made
|
||||
via shelling out to the ssh client. Ensure the ssh client is
|
||||
installed and configured on the host.
|
||||
max_pool_size (int): The maximum number of connections
|
||||
to save in the pool.
|
||||
"""
|
||||
|
||||
__attrs__ = _Session.__attrs__ + ['_auth_configs',
|
||||
'_general_configs',
|
||||
'_version',
|
||||
'base_url',
|
||||
'timeout']
|
||||
|
||||
def __init__(self, base_url=None, version=None,
|
||||
timeout=DEFAULT_TIMEOUT_SECONDS, tls=False,
|
||||
user_agent=DEFAULT_USER_AGENT, num_pools=None,
|
||||
credstore_env=None, use_ssh_client=False,
|
||||
max_pool_size=DEFAULT_MAX_POOL_SIZE):
|
||||
super(APIClient, self).__init__()
|
||||
|
||||
fail_on_missing_imports()
|
||||
|
||||
if tls and not base_url:
|
||||
raise TLSParameterError(
|
||||
'If using TLS, the base_url argument must be provided.'
|
||||
)
|
||||
|
||||
self.base_url = base_url
|
||||
self.timeout = timeout
|
||||
self.headers['User-Agent'] = user_agent
|
||||
|
||||
self._general_configs = config.load_general_config()
|
||||
|
||||
proxy_config = self._general_configs.get('proxies', {})
|
||||
try:
|
||||
proxies = proxy_config[base_url]
|
||||
except KeyError:
|
||||
proxies = proxy_config.get('default', {})
|
||||
|
||||
self._proxy_configs = ProxyConfig.from_dict(proxies)
|
||||
|
||||
self._auth_configs = auth.load_config(
|
||||
config_dict=self._general_configs, credstore_env=credstore_env,
|
||||
)
|
||||
self.credstore_env = credstore_env
|
||||
|
||||
base_url = utils.parse_host(
|
||||
base_url, IS_WINDOWS_PLATFORM, tls=bool(tls)
|
||||
)
|
||||
# SSH has a different default for num_pools to all other adapters
|
||||
num_pools = num_pools or DEFAULT_NUM_POOLS_SSH if \
|
||||
base_url.startswith('ssh://') else DEFAULT_NUM_POOLS
|
||||
|
||||
if base_url.startswith('http+unix://'):
|
||||
self._custom_adapter = UnixHTTPAdapter(
|
||||
base_url, timeout, pool_connections=num_pools,
|
||||
max_pool_size=max_pool_size
|
||||
)
|
||||
self.mount('http+docker://', self._custom_adapter)
|
||||
self._unmount('http://', 'https://')
|
||||
# host part of URL should be unused, but is resolved by requests
|
||||
# module in proxy_bypass_macosx_sysconf()
|
||||
self.base_url = 'http+docker://localhost'
|
||||
elif base_url.startswith('npipe://'):
|
||||
if not IS_WINDOWS_PLATFORM:
|
||||
raise DockerException(
|
||||
'The npipe:// protocol is only supported on Windows'
|
||||
)
|
||||
if PYWIN32_IMPORT_ERROR is not None:
|
||||
raise MissingRequirementException(
|
||||
'Install pypiwin32 package to enable npipe:// support',
|
||||
'pywin32',
|
||||
PYWIN32_IMPORT_ERROR)
|
||||
self._custom_adapter = NpipeHTTPAdapter(
|
||||
base_url, timeout, pool_connections=num_pools,
|
||||
max_pool_size=max_pool_size
|
||||
)
|
||||
self.mount('http+docker://', self._custom_adapter)
|
||||
self.base_url = 'http+docker://localnpipe'
|
||||
elif base_url.startswith('ssh://'):
|
||||
if PARAMIKO_IMPORT_ERROR is not None and not use_ssh_client:
|
||||
raise MissingRequirementException(
|
||||
'Install paramiko package to enable ssh:// support',
|
||||
'paramiko',
|
||||
PARAMIKO_IMPORT_ERROR)
|
||||
self._custom_adapter = SSHHTTPAdapter(
|
||||
base_url, timeout, pool_connections=num_pools,
|
||||
max_pool_size=max_pool_size, shell_out=use_ssh_client
|
||||
)
|
||||
self.mount('http+docker://ssh', self._custom_adapter)
|
||||
self._unmount('http://', 'https://')
|
||||
self.base_url = 'http+docker://ssh'
|
||||
else:
|
||||
# Use SSLAdapter for the ability to specify SSL version
|
||||
if isinstance(tls, TLSConfig):
|
||||
tls.configure_client(self)
|
||||
elif tls:
|
||||
self._custom_adapter = SSLHTTPAdapter(
|
||||
pool_connections=num_pools)
|
||||
self.mount('https://', self._custom_adapter)
|
||||
self.base_url = base_url
|
||||
|
||||
# version detection needs to be after unix adapter mounting
|
||||
if version is None or (isinstance(version, string_types) and version.lower() == 'auto'):
|
||||
self._version = self._retrieve_server_version()
|
||||
else:
|
||||
self._version = version
|
||||
if not isinstance(self._version, string_types):
|
||||
raise DockerException(
|
||||
'Version parameter must be a string or None. Found {0}'.format(
|
||||
type(version).__name__
|
||||
)
|
||||
)
|
||||
if utils.version_lt(self._version, MINIMUM_DOCKER_API_VERSION):
|
||||
raise InvalidVersion(
|
||||
'API versions below {0} are no longer supported by this '
|
||||
'library.'.format(MINIMUM_DOCKER_API_VERSION)
|
||||
)
|
||||
|
||||
    def _retrieve_server_version(self):
        try:
            return self.version(api_version=False)["ApiVersion"]
        except KeyError:
            raise DockerException(
                'Invalid response from docker daemon: key "ApiVersion"'
                ' is missing.'
            )
        except Exception as e:
            raise DockerException(
                'Error while fetching server API version: {0}'.format(e)
            )

    def _set_request_timeout(self, kwargs):
        """Prepare the kwargs for an HTTP request by inserting the timeout
        parameter, if not already present."""
        kwargs.setdefault('timeout', self.timeout)
        return kwargs

    @update_headers
    def _post(self, url, **kwargs):
        return self.post(url, **self._set_request_timeout(kwargs))

    @update_headers
    def _get(self, url, **kwargs):
        return self.get(url, **self._set_request_timeout(kwargs))

    @update_headers
    def _put(self, url, **kwargs):
        return self.put(url, **self._set_request_timeout(kwargs))

    @update_headers
    def _delete(self, url, **kwargs):
        return self.delete(url, **self._set_request_timeout(kwargs))

    def _url(self, pathfmt, *args, **kwargs):
        for arg in args:
            if not isinstance(arg, string_types):
                raise ValueError(
                    'Expected a string but found {0} ({1}) '
                    'instead'.format(arg, type(arg))
                )

        quote_f = partial(quote, safe="/:")
        args = map(quote_f, args)

        if kwargs.get('versioned_api', True):
            return '{0}/v{1}{2}'.format(
                self.base_url, self._version, pathfmt.format(*args)
            )
        else:
            return '{0}{1}'.format(self.base_url, pathfmt.format(*args))

    def _raise_for_status(self, response):
        """Raises stored :class:`APIError`, if one occurred."""
        try:
            response.raise_for_status()
        except _HTTPError as e:
            raise create_api_error_from_http_exception(e)

    def _result(self, response, json=False, binary=False):
        if json and binary:
            raise AssertionError('json and binary must not be both True')
        self._raise_for_status(response)

        if json:
            return response.json()
        if binary:
            return response.content
        return response.text

    def _post_json(self, url, data, **kwargs):
        # Go <1.1 can't unserialize null to a string
        # so we do this disgusting thing here.
        data2 = {}
        if data is not None and isinstance(data, dict):
            for k, v in iteritems(data):
                if v is not None:
                    data2[k] = v
        elif data is not None:
            data2 = data

        if 'headers' not in kwargs:
            kwargs['headers'] = {}
        kwargs['headers']['Content-Type'] = 'application/json'
        return self._post(url, data=json.dumps(data2), **kwargs)

    def _attach_params(self, override=None):
        return override or {
            'stdout': 1,
            'stderr': 1,
            'stream': 1
        }

    def _get_raw_response_socket(self, response):
        self._raise_for_status(response)
        if self.base_url == "http+docker://localnpipe":
            sock = response.raw._fp.fp.raw.sock
        elif self.base_url.startswith('http+docker://ssh'):
            sock = response.raw._fp.fp.channel
        elif PY3:
            sock = response.raw._fp.fp.raw
            if self.base_url.startswith("https://"):
                sock = sock._sock
        else:
            sock = response.raw._fp.fp._sock
        try:
            # Keep a reference to the response to stop it being garbage
            # collected. If the response is garbage collected, it will
            # close TLS sockets.
            sock._response = response
        except AttributeError:
            # UNIX sockets can't have attributes set on them, but that's
            # fine because we won't be doing TLS over them
            pass

        return sock

    def _stream_helper(self, response, decode=False):
        """Generator for data coming from a chunked-encoded HTTP response."""

        if response.raw._fp.chunked:
            if decode:
                for chunk in json_stream.json_stream(self._stream_helper(response, False)):
                    yield chunk
            else:
                reader = response.raw
                while not reader.closed:
                    # this read call will block until we get a chunk
                    data = reader.read(1)
                    if not data:
                        break
                    if reader._fp.chunk_left:
                        data += reader.read(reader._fp.chunk_left)
                    yield data
        else:
            # Response isn't chunked, meaning we probably
            # encountered an error immediately
            yield self._result(response, json=decode)

    def _multiplexed_buffer_helper(self, response):
        """A generator of multiplexed data blocks read from a buffered
        response."""
        buf = self._result(response, binary=True)
        buf_length = len(buf)
        walker = 0
        while True:
            if buf_length - walker < STREAM_HEADER_SIZE_BYTES:
                break
            header = buf[walker:walker + STREAM_HEADER_SIZE_BYTES]
            dummy, length = struct.unpack_from('>BxxxL', header)
            start = walker + STREAM_HEADER_SIZE_BYTES
            end = start + length
            walker = end
            yield buf[start:end]

    def _multiplexed_response_stream_helper(self, response):
        """A generator of multiplexed data blocks coming from a response
        stream."""

        # Disable timeout on the underlying socket to prevent
        # Read timed out(s) for long running processes
        socket = self._get_raw_response_socket(response)
        self._disable_socket_timeout(socket)

        while True:
            header = response.raw.read(STREAM_HEADER_SIZE_BYTES)
            if not header:
                break
            dummy, length = struct.unpack('>BxxxL', header)
            if not length:
                continue
            data = response.raw.read(length)
            if not data:
                break
            yield data

    def _stream_raw_result(self, response, chunk_size=1, decode=True):
        ''' Stream result for TTY-enabled container and raw binary data'''
        self._raise_for_status(response)

        # Disable timeout on the underlying socket to prevent
        # Read timed out(s) for long running processes
        socket = self._get_raw_response_socket(response)
        self._disable_socket_timeout(socket)

        for out in response.iter_content(chunk_size, decode):
            yield out

    def _read_from_socket(self, response, stream, tty=True, demux=False):
        socket = self._get_raw_response_socket(response)

        gen = frames_iter(socket, tty)

        if demux:
            # The generator will output tuples (stdout, stderr)
            gen = (demux_adaptor(*frame) for frame in gen)
        else:
            # The generator will output strings
            gen = (data for (dummy, data) in gen)

        if stream:
            return gen
        else:
            # Wait for all the frames, concatenate them, and return the result
            return consume_socket_output(gen, demux=demux)

    def _disable_socket_timeout(self, socket):
        """ Depending on the combination of python version and whether we're
        connecting over http or https, we might need to access _sock, which
        may or may not exist; or we may need to just settimeout on socket
        itself, which also may or may not have settimeout on it. To avoid
        missing the correct one, we try both.

        We also do not want to set the timeout if it is already disabled, as
        you run the risk of changing a socket that was non-blocking to
        blocking, for example when using gevent.
        """
        sockets = [socket, getattr(socket, '_sock', None)]

        for s in sockets:
            if not hasattr(s, 'settimeout'):
                continue

            timeout = -1

            if hasattr(s, 'gettimeout'):
                timeout = s.gettimeout()

            # Don't change the timeout if it is already disabled.
            if timeout is None or timeout == 0.0:
                continue

            s.settimeout(None)

    @check_resource('container')
    def _check_is_tty(self, container):
        cont = self.inspect_container(container)
        return cont['Config']['Tty']

    def _get_result(self, container, stream, res):
        return self._get_result_tty(stream, res, self._check_is_tty(container))

    def _get_result_tty(self, stream, res, is_tty):
        # We should also use raw streaming (without keep-alives)
        # if we're dealing with a tty-enabled container.
        if is_tty:
            return self._stream_raw_result(res) if stream else \
                self._result(res, binary=True)

        self._raise_for_status(res)
        sep = binary_type()
        if stream:
            return self._multiplexed_response_stream_helper(res)
        else:
            return sep.join(
                list(self._multiplexed_buffer_helper(res))
            )

    def _unmount(self, *args):
        for proto in args:
            self.adapters.pop(proto)

    def get_adapter(self, url):
        try:
            return super(APIClient, self).get_adapter(url)
        except _InvalidSchema as e:
            if self._custom_adapter:
                return self._custom_adapter
            else:
                raise e

    @property
    def api_version(self):
        return self._version

    def reload_config(self, dockercfg_path=None):
        """
        Force a reload of the auth configuration

        Args:
            dockercfg_path (str): Use a custom path for the Docker config file
                (default ``$HOME/.docker/config.json`` if present,
                otherwise ``$HOME/.dockercfg``)

        Returns:
            None
        """
        self._auth_configs = auth.load_config(
            dockercfg_path, credstore_env=self.credstore_env
        )

    def _set_auth_headers(self, headers):
        log.debug('Looking for auth config')

        # If we don't have any auth data so far, try reloading the config
        # file one more time in case anything showed up in there.
        if not self._auth_configs or self._auth_configs.is_empty:
            log.debug("No auth config in memory - loading from filesystem")
            self._auth_configs = auth.load_config(
                credstore_env=self.credstore_env
            )

        # Send the full auth configuration (if any exists), since the build
        # could use any (or all) of the registries.
        if self._auth_configs:
            auth_data = self._auth_configs.get_all_credentials()

            # See https://github.com/docker/docker-py/issues/1683
            if (auth.INDEX_URL not in auth_data and
                    auth.INDEX_NAME in auth_data):
                auth_data[auth.INDEX_URL] = auth_data.get(auth.INDEX_NAME, {})

            log.debug(
                'Sending auth config (%s)',
                ', '.join(repr(k) for k in auth_data.keys())
            )

            if auth_data:
                headers['X-Registry-Config'] = auth.encode_header(
                    auth_data
                )
        else:
            log.debug('No auth config found')

    def get_binary(self, pathfmt, *args, **kwargs):
        return self._result(self._get(self._url(pathfmt, *args, versioned_api=True), **kwargs), binary=True)

    def get_json(self, pathfmt, *args, **kwargs):
        return self._result(self._get(self._url(pathfmt, *args, versioned_api=True), **kwargs), json=True)

    def get_text(self, pathfmt, *args, **kwargs):
        return self._result(self._get(self._url(pathfmt, *args, versioned_api=True), **kwargs))

    def get_raw_stream(self, pathfmt, *args, **kwargs):
        chunk_size = kwargs.pop('chunk_size', DEFAULT_DATA_CHUNK_SIZE)
        res = self._get(self._url(pathfmt, *args, versioned_api=True), stream=True, **kwargs)
        self._raise_for_status(res)
        return self._stream_raw_result(res, chunk_size, False)

    def delete_call(self, pathfmt, *args, **kwargs):
        self._raise_for_status(self._delete(self._url(pathfmt, *args, versioned_api=True), **kwargs))

    def delete_json(self, pathfmt, *args, **kwargs):
        return self._result(self._delete(self._url(pathfmt, *args, versioned_api=True), **kwargs), json=True)

    def post_json(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        self._raise_for_status(self._post_json(self._url(pathfmt, *args, versioned_api=True), data, **kwargs))

    def post_json_to_binary(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        return self._result(self._post_json(self._url(pathfmt, *args, versioned_api=True), data, **kwargs), binary=True)

    def post_json_to_json(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        return self._result(self._post_json(self._url(pathfmt, *args, versioned_api=True), data, **kwargs), json=True)

    def post_json_to_text(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        return self._result(self._post_json(self._url(pathfmt, *args, versioned_api=True), data, **kwargs))

    def post_json_to_stream_socket(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        headers = (kwargs.pop('headers', None) or {}).copy()
        headers.update({
            'Connection': 'Upgrade',
            'Upgrade': 'tcp',
        })
        return self._get_raw_response_socket(
            self._post_json(self._url(pathfmt, *args, versioned_api=True), data, headers=headers, stream=True, **kwargs))

    def post_json_to_stream(self, pathfmt, *args, **kwargs):
        data = kwargs.pop('data', None)
        headers = (kwargs.pop('headers', None) or {}).copy()
        headers.update({
            'Connection': 'Upgrade',
            'Upgrade': 'tcp',
        })
        stream = kwargs.pop('stream', False)
        demux = kwargs.pop('demux', False)
        tty = kwargs.pop('tty', False)
        return self._read_from_socket(
            self._post_json(self._url(pathfmt, *args, versioned_api=True), data, headers=headers, stream=True, **kwargs),
            stream,
            tty=tty,
            demux=demux
        )

    def post_to_json(self, pathfmt, *args, **kwargs):
        return self._result(self._post(self._url(pathfmt, *args, versioned_api=True), **kwargs), json=True)
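
The multiplexed helpers above (`_multiplexed_buffer_helper`, `_multiplexed_response_stream_helper`) both decode the same wire format: an 8-byte `>BxxxL` header (one stream-id byte, three padding bytes, a big-endian 32-bit payload length) followed by the payload. A minimal standalone sketch of that framing; the `iter_frames` helper here is illustrative and not part of the vendored API:

```python
import struct

STREAM_HEADER_SIZE_BYTES = 8


def iter_frames(buf):
    # Walk a buffer of multiplexed frames. Each frame starts with an
    # 8-byte '>BxxxL' header: stream id (1 = stdout, 2 = stderr),
    # three pad bytes, then a big-endian 32-bit payload length.
    walker = 0
    while len(buf) - walker >= STREAM_HEADER_SIZE_BYTES:
        stream_id, length = struct.unpack_from('>BxxxL', buf, walker)
        start = walker + STREAM_HEADER_SIZE_BYTES
        yield stream_id, buf[start:start + length]
        walker = start + length


# Build two frames and parse them back out.
frames = b''.join(
    struct.pack('>BxxxL', sid, len(payload)) + payload
    for sid, payload in [(1, b'hello'), (2, b'oops')]
)
print(list(iter_frames(frames)))  # [(1, b'hello'), (2, b'oops')]
```

When `stream` is false, `_get_result_tty` concatenates exactly these payloads with `sep.join(...)`.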
195
plugins/module_utils/_api/api/daemon.py
Normal file
@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
from datetime import datetime

from .. import auth
from ..utils.utils import datetime_to_timestamp, convert_filters
from ..utils.decorators import minimum_version
from ..types.daemon import CancellableStream


class DaemonApiMixin(object):
    @minimum_version('1.25')
    def df(self):
        """
        Get data usage information.

        Returns:
            (dict): A dictionary representing different resource categories
            and their respective data usage.

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """
        url = self._url('/system/df')
        return self._result(self._get(url), True)

    def events(self, since=None, until=None, filters=None, decode=None):
        """
        Get real-time events from the server. Similar to the ``docker events``
        command.

        Args:
            since (UTC datetime or int): Get events from this point
            until (UTC datetime or int): Get events until this point
            filters (dict): Filter the events by event time, container or image
            decode (bool): If set to true, stream will be decoded into dicts on
                the fly. False by default.

        Returns:
            A :py:class:`docker.types.daemon.CancellableStream` generator

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.

        Example:

            >>> for event in client.events(decode=True):
            ...     print(event)
            {u'from': u'image/with:tag',
             u'id': u'container-id',
             u'status': u'start',
             u'time': 1423339459}
            ...

            or

            >>> events = client.events()
            >>> for event in events:
            ...     print(event)
            >>> # and cancel from another thread
            >>> events.close()
        """

        if isinstance(since, datetime):
            since = datetime_to_timestamp(since)

        if isinstance(until, datetime):
            until = datetime_to_timestamp(until)

        if filters:
            filters = convert_filters(filters)

        params = {
            'since': since,
            'until': until,
            'filters': filters
        }
        url = self._url('/events')

        response = self._get(url, params=params, stream=True, timeout=None)
        stream = self._stream_helper(response, decode=decode)

        return CancellableStream(stream, response)

    def info(self):
        """
        Display system-wide information. Identical to the ``docker info``
        command.

        Returns:
            (dict): The info as a dict

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """
        return self._result(self._get(self._url("/info")), True)

    def login(self, username, password=None, email=None, registry=None,
              reauth=False, dockercfg_path=None):
        """
        Authenticate with a registry. Similar to the ``docker login`` command.

        Args:
            username (str): The registry username
            password (str): The plaintext password
            email (str): The email for the registry account
            registry (str): URL to the registry. E.g.
                ``https://index.docker.io/v1/``
            reauth (bool): Whether or not to refresh existing authentication on
                the Docker server.
            dockercfg_path (str): Use a custom path for the Docker config file
                (default ``$HOME/.docker/config.json`` if present,
                otherwise ``$HOME/.dockercfg``)

        Returns:
            (dict): The response from the login request

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """

        # If we don't have any auth data so far, try reloading the config file
        # one more time in case anything showed up in there.
        # If dockercfg_path is passed check to see if the config file exists,
        # if so load that config.
        if dockercfg_path and os.path.exists(dockercfg_path):
            self._auth_configs = auth.load_config(
                dockercfg_path, credstore_env=self.credstore_env
            )
        elif not self._auth_configs or self._auth_configs.is_empty:
            self._auth_configs = auth.load_config(
                credstore_env=self.credstore_env
            )

        authcfg = self._auth_configs.resolve_authconfig(registry)
        # If we found an existing auth config for this registry and username
        # combination, we can return it immediately unless reauth is requested.
        if authcfg and authcfg.get('username', None) == username \
                and not reauth:
            return authcfg

        req_data = {
            'username': username,
            'password': password,
            'email': email,
            'serveraddress': registry,
        }

        response = self._post_json(self._url('/auth'), data=req_data)
        if response.status_code == 200:
            self._auth_configs.add_auth(registry or auth.INDEX_NAME, req_data)
        return self._result(response, json=True)

    def ping(self):
        """
        Checks the server is responsive. An exception will be raised if it
        isn't responding.

        Returns:
            (bool) The response from the server.

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """
        return self._result(self._get(self._url('/_ping'))) == 'OK'

    def version(self, api_version=True):
        """
        Returns version information from the server. Similar to the ``docker
        version`` command.

        Returns:
            (dict): The server version information

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """
        url = self._url("/version", versioned_api=api_version)
        return self._result(self._get(url), json=True)
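
`events()` above accepts `since`/`until` either as integers or as UTC datetimes, converting the latter with `datetime_to_timestamp` before building the query parameters. A sketch of that conversion; the standalone `datetime_to_timestamp` below is an assumed equivalent of the helper imported from `..utils.utils`, restated here only for illustration:

```python
import calendar
from datetime import datetime


def datetime_to_timestamp(dt):
    # Assumption: seconds since the epoch, treating the naive
    # datetime as UTC (mirrors the vendored helper's behavior).
    return calendar.timegm(dt.utctimetuple())


since = datetime(2015, 1, 1)
params = {
    'since': datetime_to_timestamp(since) if isinstance(since, datetime) else since,
    'until': None,
    'filters': None,
}
print(params['since'])  # 1420070400
```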
387
plugins/module_utils/_api/auth.py
Normal file
@ -0,0 +1,387 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import base64
import json
import logging

from ansible.module_utils.six import iteritems, string_types

from . import errors
from .credentials.store import Store
from .credentials.errors import StoreError, CredentialsNotFound
from .utils import config

INDEX_NAME = 'docker.io'
INDEX_URL = 'https://index.{0}/v1/'.format(INDEX_NAME)
TOKEN_USERNAME = '<token>'

log = logging.getLogger(__name__)


def resolve_repository_name(repo_name):
    if '://' in repo_name:
        raise errors.InvalidRepository(
            'Repository name cannot contain a scheme ({0})'.format(repo_name)
        )

    index_name, remote_name = split_repo_name(repo_name)
    if index_name[0] == '-' or index_name[-1] == '-':
        raise errors.InvalidRepository(
            'Invalid index name ({0}). Cannot begin or end with a'
            ' hyphen.'.format(index_name)
        )
    return resolve_index_name(index_name), remote_name

def resolve_index_name(index_name):
    index_name = convert_to_hostname(index_name)
    if index_name == 'index.' + INDEX_NAME:
        index_name = INDEX_NAME
    return index_name


def get_config_header(client, registry):
    log.debug('Looking for auth config')
    if not client._auth_configs or client._auth_configs.is_empty:
        log.debug(
            "No auth config in memory - loading from filesystem"
        )
        client._auth_configs = load_config(credstore_env=client.credstore_env)
    authcfg = resolve_authconfig(
        client._auth_configs, registry, credstore_env=client.credstore_env
    )
    # Do not fail here if no authentication exists for this
    # specific registry as we can have a readonly pull. Just
    # put the header if we can.
    if authcfg:
        log.debug('Found auth config')
        # auth_config needs to be a dict in the format used by
        # auth.py username , password, serveraddress, email
        return encode_header(authcfg)
    log.debug('No auth config found')
    return None


def split_repo_name(repo_name):
    parts = repo_name.split('/', 1)
    if len(parts) == 1 or (
        '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'
    ):
        # This is a docker index repo (ex: username/foobar or ubuntu)
        return INDEX_NAME, repo_name
    return tuple(parts)


def get_credential_store(authconfig, registry):
    if not isinstance(authconfig, AuthConfig):
        authconfig = AuthConfig(authconfig)
    return authconfig.get_credential_store(registry)

class AuthConfig(dict):
    def __init__(self, dct, credstore_env=None):
        if 'auths' not in dct:
            dct['auths'] = {}
        self.update(dct)
        self._credstore_env = credstore_env
        self._stores = {}

    @classmethod
    def parse_auth(cls, entries, raise_on_error=False):
        """
        Parses authentication entries

        Args:
            entries: Dict of authentication entries.
            raise_on_error: If set to true, an invalid format will raise
                InvalidConfigFile

        Returns:
            Authentication registry.
        """

        conf = {}
        for registry, entry in iteritems(entries):
            if not isinstance(entry, dict):
                log.debug('Config entry for key %s is not auth config', registry)
                # We sometimes fall back to parsing the whole config as if it
                # was the auth config by itself, for legacy purposes. In that
                # case, we fail silently and return an empty conf if any of the
                # keys is not formatted properly.
                if raise_on_error:
                    raise errors.InvalidConfigFile(
                        'Invalid configuration for registry {0}'.format(
                            registry
                        )
                    )
                return {}
            if 'identitytoken' in entry:
                log.debug('Found an IdentityToken entry for registry %s', registry)
                conf[registry] = {
                    'IdentityToken': entry['identitytoken']
                }
                continue  # Other values are irrelevant if we have a token

            if 'auth' not in entry:
                # Starting with engine v1.11 (API 1.23), an empty dictionary is
                # a valid value in the auths config.
                # https://github.com/docker/compose/issues/3265
                log.debug('Auth data for %s is absent. Client might be using a credentials store instead.', registry)
                conf[registry] = {}
                continue

            username, password = decode_auth(entry['auth'])
            log.debug('Found entry (registry=%s, username=%s)', repr(registry), repr(username))

            conf[registry] = {
                'username': username,
                'password': password,
                'email': entry.get('email'),
                'serveraddress': registry,
            }
        return conf

    @classmethod
    def load_config(cls, config_path, config_dict, credstore_env=None):
        """
        Loads authentication data from a Docker configuration file in the given
        root directory or if config_path is passed use given path.
        Lookup priority:
            explicit config_path parameter > DOCKER_CONFIG environment
            variable > ~/.docker/config.json > ~/.dockercfg
        """

        if not config_dict:
            config_file = config.find_config_file(config_path)

            if not config_file:
                return cls({}, credstore_env)
            try:
                with open(config_file) as f:
                    config_dict = json.load(f)
            except (IOError, KeyError, ValueError) as e:
                # Likely missing new Docker config file or it's in an
                # unknown format, continue to attempt to read old location
                # and format.
                log.debug(e)
                return cls(_load_legacy_config(config_file), credstore_env)

        res = {}
        if config_dict.get('auths'):
            log.debug("Found 'auths' section")
            res.update({
                'auths': cls.parse_auth(
                    config_dict.pop('auths'), raise_on_error=True
                )
            })
        if config_dict.get('credsStore'):
            log.debug("Found 'credsStore' section")
            res.update({'credsStore': config_dict.pop('credsStore')})
        if config_dict.get('credHelpers'):
            log.debug("Found 'credHelpers' section")
            res.update({'credHelpers': config_dict.pop('credHelpers')})
        if res:
            return cls(res, credstore_env)

        log.debug(
            "Couldn't find auth-related section; attempting to interpret "
            "as auth-only file"
        )
        return cls({'auths': cls.parse_auth(config_dict)}, credstore_env)

    @property
    def auths(self):
        return self.get('auths', {})

    @property
    def creds_store(self):
        return self.get('credsStore', None)

    @property
    def cred_helpers(self):
        return self.get('credHelpers', {})

    @property
    def is_empty(self):
        return (
            not self.auths and not self.creds_store and not self.cred_helpers
        )

    def resolve_authconfig(self, registry=None):
        """
        Returns the authentication data from the given auth configuration for a
        specific registry. As with the Docker client, legacy entries in the
        config with full URLs are stripped down to hostnames before checking
        for a match. Returns None if no match was found.
        """

        if self.creds_store or self.cred_helpers:
            store_name = self.get_credential_store(registry)
            if store_name is not None:
                log.debug('Using credentials store "%s"', store_name)
                cfg = self._resolve_authconfig_credstore(registry, store_name)
                if cfg is not None:
                    return cfg
                log.debug('No entry in credstore - fetching from auth dict')

        # Default to the public index server
        registry = resolve_index_name(registry) if registry else INDEX_NAME
        log.debug("Looking for auth entry for %s", repr(registry))

        if registry in self.auths:
            log.debug("Found %s", repr(registry))
            return self.auths[registry]

        for key, conf in iteritems(self.auths):
            if resolve_index_name(key) == registry:
                log.debug("Found %s", repr(key))
                return conf

        log.debug("No entry found")
        return None

    def _resolve_authconfig_credstore(self, registry, credstore_name):
        if not registry or registry == INDEX_NAME:
            # The ecosystem is a little schizophrenic with index.docker.io VS
            # docker.io - in that case, it seems the full URL is necessary.
            registry = INDEX_URL
        log.debug("Looking for auth entry for %s", repr(registry))
        store = self._get_store_instance(credstore_name)
        try:
            data = store.get(registry)
            res = {
                'ServerAddress': registry,
            }
            if data['Username'] == TOKEN_USERNAME:
                res['IdentityToken'] = data['Secret']
            else:
                res.update({
                    'Username': data['Username'],
                    'Password': data['Secret'],
                })
            return res
        except CredentialsNotFound:
            log.debug('No entry found')
            return None
        except StoreError as e:
            raise errors.DockerException(
                'Credentials store error: {0}'.format(repr(e))
            )

    def _get_store_instance(self, name):
        if name not in self._stores:
            self._stores[name] = Store(
                name, environment=self._credstore_env
            )
        return self._stores[name]

    def get_credential_store(self, registry):
        if not registry or registry == INDEX_NAME:
            registry = INDEX_URL

        return self.cred_helpers.get(registry) or self.creds_store

    def get_all_credentials(self):
        auth_data = self.auths.copy()
        if self.creds_store:
            # Retrieve all credentials from the default store
            store = self._get_store_instance(self.creds_store)
            for k in store.list().keys():
                auth_data[k] = self._resolve_authconfig_credstore(
                    k, self.creds_store
                )
                auth_data[convert_to_hostname(k)] = auth_data[k]

        # credHelpers entries take priority over all others
        for reg, store_name in self.cred_helpers.items():
            auth_data[reg] = self._resolve_authconfig_credstore(
                reg, store_name
            )
            auth_data[convert_to_hostname(reg)] = auth_data[reg]

        return auth_data

    def add_auth(self, reg, data):
        self['auths'][reg] = data

def resolve_authconfig(authconfig, registry=None, credstore_env=None):
|
||||
if not isinstance(authconfig, AuthConfig):
|
||||
authconfig = AuthConfig(authconfig, credstore_env)
|
||||
return authconfig.resolve_authconfig(registry)
|
||||
|
||||
|
||||
def convert_to_hostname(url):
|
||||
return url.replace('http://', '').replace('https://', '').split('/', 1)[0]
|
||||
|
||||
|
||||
def decode_auth(auth):
|
||||
if isinstance(auth, string_types):
|
||||
auth = auth.encode('ascii')
|
||||
s = base64.b64decode(auth)
|
||||
login, pwd = s.split(b':', 1)
|
||||
return login.decode('utf8'), pwd.decode('utf8')
|
||||
|
||||
|
||||
def encode_header(auth):
|
||||
auth_json = json.dumps(auth).encode('ascii')
|
||||
return base64.urlsafe_b64encode(auth_json)
|
||||
|
||||
|
||||
def parse_auth(entries, raise_on_error=False):
|
||||
"""
|
||||
Parses authentication entries
|
||||
|
||||
Args:
|
||||
entries: Dict of authentication entries.
|
||||
raise_on_error: If set to true, an invalid format will raise
|
||||
InvalidConfigFile
|
||||
|
||||
Returns:
|
||||
Authentication registry.
|
||||
"""
|
||||
|
||||
return AuthConfig.parse_auth(entries, raise_on_error)
|
||||
|
||||
|
||||
def load_config(config_path=None, config_dict=None, credstore_env=None):
|
||||
return AuthConfig.load_config(config_path, config_dict, credstore_env)
|
||||
|
||||
|
||||
def _load_legacy_config(config_file):
|
||||
log.debug("Attempting to parse legacy auth file format")
|
||||
try:
|
||||
data = []
|
||||
with open(config_file) as f:
|
||||
for line in f.readlines():
|
||||
data.append(line.strip().split(' = ')[1])
|
||||
if len(data) < 2:
|
||||
# Not enough data
|
||||
raise errors.InvalidConfigFile(
|
||||
'Invalid or empty configuration file!'
|
||||
)
|
||||
|
||||
username, password = decode_auth(data[0])
|
||||
return {'auths': {
|
||||
INDEX_NAME: {
|
||||
'username': username,
|
||||
'password': password,
|
||||
'email': data[1],
|
||||
'serveraddress': INDEX_URL,
|
||||
}
|
||||
}}
|
||||
except Exception as e:
|
||||
log.debug(e)
|
||||
pass
|
||||
|
||||
log.debug("All parsing attempts failed - returning empty config")
|
||||
return {}
|
||||
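The `decode_auth()`/`encode_header()` pair above is how registry credentials travel between the local config file (base64 `username:password` entries) and the `X-Registry-Auth` request header (URL-safe base64 JSON). A minimal standalone sketch of that round-trip; the function bodies mirror the vendored helpers, simplified to Python 3 (`str` instead of `six.string_types`) so the snippet runs on its own:

```python
import base64
import json


def decode_auth(auth):
    # Mirrors the vendored helper: a config "auth" entry is
    # base64("username:password"); split only once, since passwords
    # may themselves contain ':'.
    if isinstance(auth, str):
        auth = auth.encode('ascii')
    s = base64.b64decode(auth)
    login, pwd = s.split(b':', 1)
    return login.decode('utf8'), pwd.decode('utf8')


def encode_header(auth):
    # Mirrors the vendored helper: the X-Registry-Auth header is
    # URL-safe base64 over a JSON document.
    auth_json = json.dumps(auth).encode('ascii')
    return base64.urlsafe_b64encode(auth_json)


token = base64.b64encode(b'alice:s3cr:et').decode('ascii')
user, pwd = decode_auth(token)
print(user, pwd)  # alice s3cr:et

header = encode_header({'username': user, 'password': pwd})
print(json.loads(base64.urlsafe_b64decode(header))['username'])  # alice
```

The single `split(b':', 1)` is the important detail: it keeps colons inside the password intact.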
49
plugins/module_utils/_api/constants.py
Normal file
@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys

DEFAULT_DOCKER_API_VERSION = '1.41'
MINIMUM_DOCKER_API_VERSION = '1.25'
DEFAULT_TIMEOUT_SECONDS = 60
STREAM_HEADER_SIZE_BYTES = 8
CONTAINER_LIMITS_KEYS = [
    'memory', 'memswap', 'cpushares', 'cpusetcpus'
]

DEFAULT_HTTP_HOST = "127.0.0.1"
DEFAULT_UNIX_SOCKET = "http+unix:///var/run/docker.sock"
DEFAULT_NPIPE = 'npipe:////./pipe/docker_engine'

BYTE_UNITS = {
    'b': 1,
    'k': 1024,
    'm': 1024 * 1024,
    'g': 1024 * 1024 * 1024
}

IS_WINDOWS_PLATFORM = (sys.platform == 'win32')
WINDOWS_LONGPATH_PREFIX = '\\\\?\\'

DEFAULT_USER_AGENT = "ansible-community.docker"
DEFAULT_NUM_POOLS = 25

# The OpenSSH server default value for MaxSessions is 10 which means we can
# use up to 9, leaving the final session for the underlying SSH connection.
# For more details see: https://github.com/docker/docker-py/issues/2246
DEFAULT_NUM_POOLS_SSH = 9

DEFAULT_MAX_POOL_SIZE = 10

DEFAULT_DATA_CHUNK_SIZE = 1024 * 2048

DEFAULT_SWARM_ADDR_POOL = ['10.0.0.0/8']
DEFAULT_SWARM_SUBNET_SIZE = 24
15
plugins/module_utils/_api/credentials/constants.py
Normal file
@@ -0,0 +1,15 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

PROGRAM_PREFIX = 'docker-credential-'
DEFAULT_LINUX_STORE = 'secretservice'
DEFAULT_OSX_STORE = 'osxkeychain'
DEFAULT_WIN32_STORE = 'wincred'
37
plugins/module_utils/_api/credentials/errors.py
Normal file
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


class StoreError(RuntimeError):
    pass


class CredentialsNotFound(StoreError):
    pass


class InitializationError(StoreError):
    pass


def process_store_error(cpe, program):
    message = cpe.output.decode('utf-8')
    if 'credentials not found in native keychain' in message:
        return CredentialsNotFound(
            'No matching credentials in {0}'.format(
                program
            )
        )
    return StoreError(
        'Credentials store {0} exited with "{1}".'.format(
            program, cpe.output.decode('utf-8').strip()
        )
    )
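`process_store_error()` above distinguishes a missing credential (a soft failure callers can swallow) from a genuine helper failure purely by inspecting the helper's output. A self-contained sketch of that dispatch; the classes and function mirror the vendored ones so the behaviour can be exercised without any credential helper installed, and the sample outputs are synthetic:

```python
import subprocess


class StoreError(RuntimeError):
    pass


class CredentialsNotFound(StoreError):
    pass


def process_store_error(cpe, program):
    # A helper that exits non-zero with this message means "no entry";
    # anything else is treated as a hard store error.
    message = cpe.output.decode('utf-8')
    if 'credentials not found in native keychain' in message:
        return CredentialsNotFound(
            'No matching credentials in {0}'.format(program))
    return StoreError('Credentials store {0} exited with "{1}".'.format(
        program, cpe.output.decode('utf-8').strip()))


# Simulate the two failure modes with synthetic CalledProcessError objects.
missing = subprocess.CalledProcessError(
    returncode=1, cmd='get',
    output=b'credentials not found in native keychain\n')
broken = subprocess.CalledProcessError(
    returncode=1, cmd='get', output=b'keychain locked')

print(type(process_store_error(missing, 'docker-credential-osxkeychain')).__name__)  # CredentialsNotFound
print(type(process_store_error(broken, 'docker-credential-osxkeychain')).__name__)   # StoreError
```

Because `CredentialsNotFound` subclasses `StoreError`, callers that only care about hard failures can catch `StoreError` alone.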
118
plugins/module_utils/_api/credentials/store.py
Normal file
@@ -0,0 +1,118 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import errno
import json
import subprocess

from ansible.module_utils.six import PY3, binary_type

from . import constants
from . import errors
from .utils import create_environment_dict
from .utils import find_executable


class Store(object):
    def __init__(self, program, environment=None):
        """ Create a store object that acts as an interface to
            perform the basic operations for storing, retrieving
            and erasing credentials using `program`.
        """
        self.program = constants.PROGRAM_PREFIX + program
        self.exe = find_executable(self.program)
        self.environment = environment
        if self.exe is None:
            raise errors.InitializationError(
                '{0} not installed or not available in PATH'.format(
                    self.program
                )
            )

    def get(self, server):
        """ Retrieve credentials for `server`. If no credentials are found,
            a `StoreError` will be raised.
        """
        if not isinstance(server, binary_type):
            server = server.encode('utf-8')
        data = self._execute('get', server)
        result = json.loads(data.decode('utf-8'))

        # docker-credential-pass will return an object for inexistent servers
        # whereas other helpers will exit with returncode != 0. For
        # consistency, if no significant data is returned,
        # raise CredentialsNotFound
        if result['Username'] == '' and result['Secret'] == '':
            raise errors.CredentialsNotFound(
                'No matching credentials in {0}'.format(self.program)
            )

        return result

    def store(self, server, username, secret):
        """ Store credentials for `server`. Raises a `StoreError` if an error
            occurs.
        """
        data_input = json.dumps({
            'ServerURL': server,
            'Username': username,
            'Secret': secret
        }).encode('utf-8')
        return self._execute('store', data_input)

    def erase(self, server):
        """ Erase credentials for `server`. Raises a `StoreError` if an error
            occurs.
        """
        if not isinstance(server, binary_type):
            server = server.encode('utf-8')
        self._execute('erase', server)

    def list(self):
        """ List stored credentials. Requires v0.4.0+ of the helper.
        """
        data = self._execute('list', None)
        return json.loads(data.decode('utf-8'))

    def _execute(self, subcmd, data_input):
        output = None
        env = create_environment_dict(self.environment)
        try:
            if PY3:
                output = subprocess.check_output(
                    [self.exe, subcmd], input=data_input, env=env,
                )
            else:
                process = subprocess.Popen(
                    [self.exe, subcmd], stdin=subprocess.PIPE,
                    stdout=subprocess.PIPE, env=env,
                )
                output, dummy = process.communicate(data_input)
                if process.returncode != 0:
                    raise subprocess.CalledProcessError(
                        returncode=process.returncode, cmd='', output=output
                    )
        except subprocess.CalledProcessError as e:
            raise errors.process_store_error(e, self.program)
        except OSError as e:
            if e.errno == errno.ENOENT:
                raise errors.StoreError(
                    '{0} not installed or not available in PATH'.format(
                        self.program
                    )
                )
            else:
                raise errors.StoreError(
                    'Unexpected OS error "{0}", errno={1}'.format(
                        e.strerror, e.errno
                    )
                )
        return output
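`Store._execute()` above talks to the helper binary over a tiny stdin/stdout protocol: `get` and `erase` receive a bare server URL, `store` receives a JSON document, and `get`/`list` return JSON. A sketch of the payloads involved, with made-up server and credentials for illustration:

```python
import json

# What Store.store() writes to `docker-credential-<name> store` on stdin:
store_payload = json.dumps({
    'ServerURL': 'https://index.docker.io/v1/',
    'Username': 'alice',
    'Secret': 'hunter2',
}).encode('utf-8')

# What `docker-credential-<name> get` prints on stdout. Store.get() parses
# this and treats empty Username+Secret as "not found" (the
# docker-credential-pass quirk noted in the comment in get() above).
get_reply = (b'{"ServerURL": "https://index.docker.io/v1/",'
             b' "Username": "alice", "Secret": "hunter2"}')
result = json.loads(get_reply.decode('utf-8'))
found = not (result['Username'] == '' and result['Secret'] == '')
print(found)  # True
```

Keeping the payload construction in one place (`json.dumps(...).encode('utf-8')`) is what lets `_execute()` stay a thin wrapper around `subprocess.check_output(..., input=data_input)`.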
58
plugins/module_utils/_api/credentials/utils.py
Normal file
@@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import sys

from ansible.module_utils.six import PY2

if PY2:
    from distutils.spawn import find_executable as which
else:
    from shutil import which


def find_executable(executable, path=None):
    """
    As distutils.spawn.find_executable, but on Windows, look up
    every extension declared in PATHEXT instead of just `.exe`
    """
    if sys.platform != 'win32':
        if PY2:
            return which(executable, path)
        else:
            return which(executable, path=path)

    if path is None:
        path = os.environ['PATH']

    paths = path.split(os.pathsep)
    extensions = os.environ.get('PATHEXT', '.exe').split(os.pathsep)
    base, ext = os.path.splitext(executable)

    if not os.path.isfile(executable):
        for p in paths:
            for ext in extensions:
                f = os.path.join(p, base + ext)
                if os.path.isfile(f):
                    return f
        return None
    else:
        return executable


def create_environment_dict(overrides):
    """
    Create and return a copy of os.environ with the specified overrides
    """
    result = os.environ.copy()
    result.update(overrides or {})
    return result
220
plugins/module_utils/_api/errors.py
Normal file
@@ -0,0 +1,220 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ._import_helper import HTTPError as _HTTPError


class DockerException(Exception):
    """
    A base class from which all other exceptions inherit.

    If you want to catch all errors that the Docker SDK might raise,
    catch this base exception.
    """


def create_api_error_from_http_exception(e):
    """
    Create a suitable APIError from requests.exceptions.HTTPError.
    """
    response = e.response
    try:
        explanation = response.json()['message']
    except ValueError:
        explanation = (response.content or '').strip()
    cls = APIError
    if response.status_code == 404:
        if explanation and ('No such image' in str(explanation) or
                            'not found: does not exist or no pull access'
                            in str(explanation) or
                            'repository does not exist' in str(explanation)):
            cls = ImageNotFound
        else:
            cls = NotFound
    raise cls(e, response=response, explanation=explanation)


class APIError(_HTTPError, DockerException):
    """
    An HTTP error from the API.
    """
    def __init__(self, message, response=None, explanation=None):
        # requests 1.2 supports response as a keyword argument, but
        # requests 1.1 doesn't
        super(APIError, self).__init__(message)
        self.response = response
        self.explanation = explanation

    def __str__(self):
        message = super(APIError, self).__str__()

        if self.is_client_error():
            message = '{0} Client Error for {1}: {2}'.format(
                self.response.status_code, self.response.url,
                self.response.reason)

        elif self.is_server_error():
            message = '{0} Server Error for {1}: {2}'.format(
                self.response.status_code, self.response.url,
                self.response.reason)

        if self.explanation:
            message = '{0} ("{1}")'.format(message, self.explanation)

        return message

    @property
    def status_code(self):
        if self.response is not None:
            return self.response.status_code

    def is_error(self):
        return self.is_client_error() or self.is_server_error()

    def is_client_error(self):
        if self.status_code is None:
            return False
        return 400 <= self.status_code < 500

    def is_server_error(self):
        if self.status_code is None:
            return False
        return 500 <= self.status_code < 600


class NotFound(APIError):
    pass


class ImageNotFound(NotFound):
    pass


class InvalidVersion(DockerException):
    pass


class InvalidRepository(DockerException):
    pass


class InvalidConfigFile(DockerException):
    pass


class InvalidArgument(DockerException):
    pass


class DeprecatedMethod(DockerException):
    pass


class TLSParameterError(DockerException):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg + (". TLS configurations should map the Docker CLI "
                           "client configurations. See "
                           "https://docs.docker.com/engine/articles/https/ "
                           "for API details.")


class NullResource(DockerException, ValueError):
    pass


class ContainerError(DockerException):
    """
    Represents a container that has exited with a non-zero exit code.
    """
    def __init__(self, container, exit_status, command, image, stderr):
        self.container = container
        self.exit_status = exit_status
        self.command = command
        self.image = image
        self.stderr = stderr

        err = ": {0}".format(stderr) if stderr is not None else ""
        msg = ("Command '{0}' in image '{1}' returned non-zero exit "
               "status {2}{3}").format(command, image, exit_status, err)

        super(ContainerError, self).__init__(msg)


class StreamParseError(RuntimeError):
    def __init__(self, reason):
        self.msg = reason


class BuildError(DockerException):
    def __init__(self, reason, build_log):
        super(BuildError, self).__init__(reason)
        self.msg = reason
        self.build_log = build_log


class ImageLoadError(DockerException):
    pass


def create_unexpected_kwargs_error(name, kwargs):
    quoted_kwargs = ["'{0}'".format(k) for k in sorted(kwargs)]
    text = ["{0}() ".format(name)]
    if len(quoted_kwargs) == 1:
        text.append("got an unexpected keyword argument ")
    else:
        text.append("got unexpected keyword arguments ")
    text.append(', '.join(quoted_kwargs))
    return TypeError(''.join(text))


class MissingContextParameter(DockerException):
    def __init__(self, param):
        self.param = param

    def __str__(self):
        return ("missing parameter: {0}".format(self.param))


class ContextAlreadyExists(DockerException):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return ("context {0} already exists".format(self.name))


class ContextException(DockerException):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return (self.msg)


class ContextNotFound(DockerException):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return ("context '{0}' not found".format(self.name))


class MissingRequirementException(DockerException):
    def __init__(self, msg, requirement, import_exception):
        self.msg = msg
        self.requirement = requirement
        self.import_exception = import_exception

    def __str__(self):
        return (self.msg)
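`APIError.status_code` drives the `is_client_error()`/`is_server_error()` split above. The classification can be exercised without a real HTTP stack by substituting a stub response object; `FakeResponse` and `MiniAPIError` below are purely illustrative re-creations of that logic (the real class wraps `requests` responses):

```python
class FakeResponse(object):
    # Stand-in for a requests.Response; only status_code is needed here.
    def __init__(self, status_code):
        self.status_code = status_code


class MiniAPIError(Exception):
    def __init__(self, message, response=None):
        super(MiniAPIError, self).__init__(message)
        self.response = response

    @property
    def status_code(self):
        # Mirrors APIError: None when there is no response attached.
        if self.response is not None:
            return self.response.status_code

    def is_client_error(self):
        if self.status_code is None:
            return False
        return 400 <= self.status_code < 500

    def is_server_error(self):
        if self.status_code is None:
            return False
        return 500 <= self.status_code < 600


print(MiniAPIError('x', FakeResponse(404)).is_client_error())  # True
print(MiniAPIError('x', FakeResponse(502)).is_server_error())  # True
print(MiniAPIError('x').is_client_error())                     # False: no response
```

The explicit `None` guard matters: an error raised before any response arrives (e.g. a connection failure) classifies as neither client nor server error.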
123
plugins/module_utils/_api/tls.py
Normal file
@@ -0,0 +1,123 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import ssl

from . import errors
from .transport.ssladapter import SSLHTTPAdapter


class TLSConfig(object):
    """
    TLS configuration.

    Args:
        client_cert (tuple of str): Path to client cert, path to client key.
        ca_cert (str): Path to CA cert file.
        verify (bool or str): This can be ``False`` or a path to a CA cert
            file.
        ssl_version (int): A valid `SSL version`_.
        assert_hostname (bool): Verify the hostname of the server.

    .. _`SSL version`:
        https://docs.python.org/3.5/library/ssl.html#ssl.PROTOCOL_TLSv1
    """
    cert = None
    ca_cert = None
    verify = None
    ssl_version = None

    def __init__(self, client_cert=None, ca_cert=None, verify=None,
                 ssl_version=None, assert_hostname=None,
                 assert_fingerprint=None):
        # Argument compatibility/mapping with
        # https://docs.docker.com/engine/articles/https/
        # This diverges from the Docker CLI in that users can specify 'tls'
        # here, but also disable any public/default CA pool verification by
        # leaving verify=False

        self.assert_hostname = assert_hostname
        self.assert_fingerprint = assert_fingerprint

        # TODO(dperny): according to the python docs, PROTOCOL_TLSvWhatever is
        # deprecated, and it's recommended to use OPT_NO_TLSvWhatever instead
        # to exclude versions. But I think that might require a bigger
        # architectural change, so I've opted not to pursue it at this time

        # If the user provides an SSL version, we should use their preference
        if ssl_version:
            self.ssl_version = ssl_version
        else:
            # If the user provides no ssl version, we should default to
            # TLSv1_2. This option is the most secure, and will work for the
            # majority of users with reasonably up-to-date software. However,
            # before doing so, detect openssl version to ensure we can support
            # it.
            if ssl.OPENSSL_VERSION_INFO[:3] >= (1, 0, 1) and hasattr(
                    ssl, 'PROTOCOL_TLSv1_2'):
                # If the OpenSSL version is high enough to support TLSv1_2,
                # then we should use it.
                self.ssl_version = getattr(ssl, 'PROTOCOL_TLSv1_2')
            else:
                # Otherwise, TLS v1.0 seems to be the safest default;
                # SSLv23 fails in mysterious ways:
                # https://github.com/docker/docker-py/issues/963
                self.ssl_version = ssl.PROTOCOL_TLSv1

        # "client_cert" must have both or neither cert/key files. In
        # either case, alert the user when both are expected, but any are
        # missing.

        if client_cert:
            try:
                tls_cert, tls_key = client_cert
            except ValueError:
                raise errors.TLSParameterError(
                    'client_cert must be a tuple of'
                    ' (client certificate, key file)'
                )

            if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or
                                              not os.path.isfile(tls_key)):
                raise errors.TLSParameterError(
                    'Path to a certificate and key files must be provided'
                    ' through the client_cert param'
                )
            self.cert = (tls_cert, tls_key)

        # If verify is set, make sure the cert exists
        self.verify = verify
        self.ca_cert = ca_cert
        if self.verify and self.ca_cert and not os.path.isfile(self.ca_cert):
            raise errors.TLSParameterError(
                'Invalid CA certificate provided for `ca_cert`.'
            )

    def configure_client(self, client):
        """
        Configure a client with these TLS options.
        """
        client.ssl_version = self.ssl_version

        if self.verify and self.ca_cert:
            client.verify = self.ca_cert
        else:
            client.verify = self.verify

        if self.cert:
            client.cert = self.cert

        client.mount('https://', SSLHTTPAdapter(
            ssl_version=self.ssl_version,
            assert_hostname=self.assert_hostname,
            assert_fingerprint=self.assert_fingerprint,
        ))
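The constructor above picks `PROTOCOL_TLSv1_2` when both OpenSSL and the `ssl` module support it, and falls back to `PROTOCOL_TLSv1` otherwise. The selection logic in isolation; `pick_default_ssl_version` is an illustrative name, and on any reasonably modern Python the TLSv1.2 branch is the one taken:

```python
import ssl


def pick_default_ssl_version():
    # Same decision the vendored TLSConfig makes when the caller passes no
    # explicit ssl_version: prefer TLSv1.2 when OpenSSL >= 1.0.1 and the
    # ssl module exposes the constant.
    if ssl.OPENSSL_VERSION_INFO[:3] >= (1, 0, 1) and hasattr(
            ssl, 'PROTOCOL_TLSv1_2'):
        return ssl.PROTOCOL_TLSv1_2
    return ssl.PROTOCOL_TLSv1


print(pick_default_ssl_version() == ssl.PROTOCOL_TLSv1_2)
```

The `hasattr` guard is what made this safe on old Python builds whose `ssl` module predated the TLSv1.2 constant.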
19
plugins/module_utils/_api/transport/basehttpadapter.py
Normal file
@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from .._import_helper import HTTPAdapter as _HTTPAdapter


class BaseHTTPAdapter(_HTTPAdapter):
    def close(self):
        super(BaseHTTPAdapter, self).close()
        if hasattr(self, 'pools'):
            self.pools.clear()
118
plugins/module_utils/_api/transport/npipeconn.py
Normal file
@@ -0,0 +1,118 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves.queue import Empty

from .. import constants
from .._import_helper import HTTPAdapter, urllib3

from .basehttpadapter import BaseHTTPAdapter
from .npipesocket import NpipeSocket

if PY3:
    import http.client as httplib
else:
    import httplib

RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer


class NpipeHTTPConnection(httplib.HTTPConnection, object):
    def __init__(self, npipe_path, timeout=60):
        super(NpipeHTTPConnection, self).__init__(
            'localhost', timeout=timeout
        )
        self.npipe_path = npipe_path
        self.timeout = timeout

    def connect(self):
        sock = NpipeSocket()
        sock.settimeout(self.timeout)
        sock.connect(self.npipe_path)
        self.sock = sock


class NpipeHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
    def __init__(self, npipe_path, timeout=60, maxsize=10):
        super(NpipeHTTPConnectionPool, self).__init__(
            'localhost', timeout=timeout, maxsize=maxsize
        )
        self.npipe_path = npipe_path
        self.timeout = timeout

    def _new_conn(self):
        return NpipeHTTPConnection(
            self.npipe_path, self.timeout
        )

    # When re-using connections, urllib3 tries to call select() on our
    # NpipeSocket instance, causing a crash. To circumvent this, we override
    # _get_conn, where that check happens.
    def _get_conn(self, timeout):
        conn = None
        try:
            conn = self.pool.get(block=self.block, timeout=timeout)

        except AttributeError:  # self.pool is None
            raise urllib3.exceptions.ClosedPoolError(self, "Pool is closed.")

        except Empty:
            if self.block:
                raise urllib3.exceptions.EmptyPoolError(
                    self,
                    "Pool reached maximum size and no more "
                    "connections are allowed."
                )
            pass  # Oh well, we'll create a new connection then

        return conn or self._new_conn()


class NpipeHTTPAdapter(BaseHTTPAdapter):

    __attrs__ = HTTPAdapter.__attrs__ + ['npipe_path',
                                         'pools',
                                         'timeout',
                                         'max_pool_size']

    def __init__(self, base_url, timeout=60,
                 pool_connections=constants.DEFAULT_NUM_POOLS,
                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE):
        self.npipe_path = base_url.replace('npipe://', '')
        self.timeout = timeout
        self.max_pool_size = max_pool_size
        self.pools = RecentlyUsedContainer(
            pool_connections, dispose_func=lambda p: p.close()
        )
        super(NpipeHTTPAdapter, self).__init__()

    def get_connection(self, url, proxies=None):
        with self.pools.lock:
            pool = self.pools.get(url)
            if pool:
                return pool

            pool = NpipeHTTPConnectionPool(
                self.npipe_path, self.timeout,
                maxsize=self.max_pool_size
            )
            self.pools[url] = pool

        return pool

    def request_url(self, request, proxies):
        # The select_proxy utility in requests errors out when the provided URL
        # doesn't have a hostname, like is the case when using a UNIX socket.
        # Since proxies are an irrelevant notion in the case of UNIX sockets
        # anyway, we simply return the path URL directly.
        # See also: https://github.com/docker/docker-sdk-python/issues/811
        return request.path_url
235
plugins/module_utils/_api/transport/npipesocket.py
Normal file
@@ -0,0 +1,235 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import functools
import io
import time
import traceback

from ansible.module_utils.six import PY2

PYWIN32_IMPORT_ERROR = None
try:
    import win32file
    import win32pipe
except ImportError:
    PYWIN32_IMPORT_ERROR = traceback.format_exc()


cERROR_PIPE_BUSY = 0xe7
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0

MAXIMUM_RETRY_COUNT = 10


def check_closed(f):
    @functools.wraps(f)
    def wrapped(self, *args, **kwargs):
        if self._closed:
            raise RuntimeError(
                'Can not reuse socket after connection was closed.'
            )
        return f(self, *args, **kwargs)
    return wrapped


class NpipeSocket(object):
    """ Partial implementation of the socket API over windows named pipes.
        This implementation is only designed to be used as a client socket,
        and server-specific methods (bind, listen, accept...) are not
        implemented.
    """

    def __init__(self, handle=None):
        self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT
        self._handle = handle
        self._closed = False

    def accept(self):
        raise NotImplementedError()

    def bind(self, address):
        raise NotImplementedError()

    def close(self):
        self._handle.Close()
        self._closed = True

    @check_closed
    def connect(self, address, retry_count=0):
        try:
            handle = win32file.CreateFile(
                address,
                win32file.GENERIC_READ | win32file.GENERIC_WRITE,
                0,
                None,
                win32file.OPEN_EXISTING,
                cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,
                0
            )
        except win32pipe.error as e:
            # See Remarks:
            # https://msdn.microsoft.com/en-us/library/aa365800.aspx
            if e.winerror == cERROR_PIPE_BUSY:
                # Another program or thread has grabbed our pipe instance
                # before we got to it. Wait for availability and attempt to
                # connect again.
                retry_count = retry_count + 1
                if (retry_count < MAXIMUM_RETRY_COUNT):
                    time.sleep(1)
                    return self.connect(address, retry_count)
            raise e

        self.flags = win32pipe.GetNamedPipeInfo(handle)[0]

        self._handle = handle
        self._address = address

    @check_closed
    def connect_ex(self, address):
        return self.connect(address)

    @check_closed
    def detach(self):
        self._closed = True
        return self._handle

    @check_closed
    def dup(self):
        return NpipeSocket(self._handle)

    def getpeername(self):
        return self._address

    def getsockname(self):
        return self._address

    def getsockopt(self, level, optname, buflen=None):
        raise NotImplementedError()

    def ioctl(self, control, option):
        raise NotImplementedError()

    def listen(self, backlog):
        raise NotImplementedError()

    def makefile(self, mode=None, bufsize=None):
        if mode.strip('b') != 'r':
            raise NotImplementedError()
        rawio = NpipeFileIOBase(self)
        if bufsize is None or bufsize <= 0:
            bufsize = io.DEFAULT_BUFFER_SIZE
        return io.BufferedReader(rawio, buffer_size=bufsize)

    @check_closed
    def recv(self, bufsize, flags=0):
        err, data = win32file.ReadFile(self._handle, bufsize)
        return data

    @check_closed
    def recvfrom(self, bufsize, flags=0):
        data = self.recv(bufsize, flags)
        return (data, self._address)

    @check_closed
    def recvfrom_into(self, buf, nbytes=0, flags=0):
        return self.recv_into(buf, nbytes, flags), self._address

    @check_closed
    def recv_into(self, buf, nbytes=0):
        if PY2:
            return self._recv_into_py2(buf, nbytes)

        readbuf = buf
        if not isinstance(buf, memoryview):
            readbuf = memoryview(buf)

        err, data = win32file.ReadFile(
            self._handle,
            readbuf[:nbytes] if nbytes else readbuf
        )
        return len(data)

    def _recv_into_py2(self, buf, nbytes):
        err, data = win32file.ReadFile(self._handle, nbytes or len(buf))
        n = len(data)
|
||||
buf[:n] = data
|
||||
return n
|
||||
|
||||
@check_closed
|
||||
def send(self, string, flags=0):
|
||||
err, nbytes = win32file.WriteFile(self._handle, string)
|
||||
return nbytes
|
||||
|
||||
@check_closed
|
||||
def sendall(self, string, flags=0):
|
||||
return self.send(string, flags)
|
||||
|
||||
@check_closed
|
||||
def sendto(self, string, address):
|
||||
self.connect(address)
|
||||
return self.send(string)
|
||||
|
||||
def setblocking(self, flag):
|
||||
if flag:
|
||||
return self.settimeout(None)
|
||||
return self.settimeout(0)
|
||||
|
||||
def settimeout(self, value):
|
||||
if value is None:
|
||||
# Blocking mode
|
||||
self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER
|
||||
elif not isinstance(value, (float, int)) or value < 0:
|
||||
raise ValueError('Timeout value out of range')
|
||||
elif value == 0:
|
||||
# Non-blocking mode
|
||||
self._timeout = win32pipe.NMPWAIT_NO_WAIT
|
||||
else:
|
||||
# Timeout mode - Value converted to milliseconds
|
||||
self._timeout = value * 1000
|
||||
|
||||
def gettimeout(self):
|
||||
return self._timeout
|
||||
|
||||
def setsockopt(self, level, optname, value):
|
||||
raise NotImplementedError()
|
||||
|
||||
@check_closed
|
||||
def shutdown(self, how):
|
||||
return self.close()
|
||||
|
||||
|
||||
class NpipeFileIOBase(io.RawIOBase):
|
||||
def __init__(self, npipe_socket):
|
||||
self.sock = npipe_socket
|
||||
|
||||
def close(self):
|
||||
super(NpipeFileIOBase, self).close()
|
||||
self.sock = None
|
||||
|
||||
def fileno(self):
|
||||
return self.sock.fileno()
|
||||
|
||||
def isatty(self):
|
||||
return False
|
||||
|
||||
def readable(self):
|
||||
return True
|
||||
|
||||
def readinto(self, buf):
|
||||
return self.sock.recv_into(buf)
|
||||
|
||||
def seekable(self):
|
||||
return False
|
||||
|
||||
def writable(self):
|
||||
return False
|
||||
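The `check_closed` guard used throughout `NpipeSocket` is plain Python and can be exercised without pywin32. This is an illustrative sketch, not part of the vendored file; `FakeSocket` is a hypothetical stand-in used only to show the decorator refusing calls after `close()`:

```python
import functools


def check_closed(f):
    # Same pattern as the vendored decorator: refuse to run the wrapped
    # method once the object has been marked closed.
    @functools.wraps(f)
    def wrapped(self, *args, **kwargs):
        if self._closed:
            raise RuntimeError(
                'Can not reuse socket after connection was closed.'
            )
        return f(self, *args, **kwargs)
    return wrapped


class FakeSocket(object):
    # Hypothetical stand-in for NpipeSocket, used only for this demo.
    def __init__(self):
        self._closed = False

    def close(self):
        self._closed = True

    @check_closed
    def send(self, data):
        return len(data)


sock = FakeSocket()
print(sock.send(b'ping'))  # 4
sock.close()
try:
    sock.send(b'ping')
except RuntimeError as e:
    print('refused:', e)
```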
275
plugins/module_utils/_api/transport/sshconn.py
Normal file
@@ -0,0 +1,275 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import logging
import os
import signal
import socket
import subprocess
import traceback

from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves.queue import Empty
from ansible.module_utils.six.moves.urllib_parse import urlparse

from .basehttpadapter import BaseHTTPAdapter
from .. import constants

if PY3:
    import http.client as httplib
else:
    import httplib

from .._import_helper import HTTPAdapter, urllib3

PARAMIKO_IMPORT_ERROR = None
try:
    import paramiko
except ImportError:
    PARAMIKO_IMPORT_ERROR = traceback.format_exc()


RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer


class SSHSocket(socket.socket):
    def __init__(self, host):
        super(SSHSocket, self).__init__(
            socket.AF_INET, socket.SOCK_STREAM)
        self.host = host
        self.port = None
        self.user = None
        if ':' in self.host:
            self.host, self.port = self.host.split(':')
        if '@' in self.host:
            self.user, self.host = self.host.split('@')

        self.proc = None

    def connect(self, **kwargs):
        args = ['ssh']
        if self.user:
            args = args + ['-l', self.user]

        if self.port:
            args = args + ['-p', self.port]

        args = args + ['--', self.host, 'docker system dial-stdio']

        preexec_func = None
        if not constants.IS_WINDOWS_PLATFORM:
            def f():
                signal.signal(signal.SIGINT, signal.SIG_IGN)
            preexec_func = f

        env = dict(os.environ)

        # drop LD_LIBRARY_PATH and SSL_CERT_FILE
        env.pop('LD_LIBRARY_PATH', None)
        env.pop('SSL_CERT_FILE', None)

        self.proc = subprocess.Popen(
            ' '.join(args),
            env=env,
            shell=True,
            stdout=subprocess.PIPE,
            stdin=subprocess.PIPE,
            preexec_fn=preexec_func)

    def _write(self, data):
        if not self.proc or self.proc.stdin.closed:
            raise Exception('SSH subprocess not initiated. '
                            'connect() must be called first.')
        written = self.proc.stdin.write(data)
        self.proc.stdin.flush()
        return written

    def sendall(self, data):
        self._write(data)

    def send(self, data):
        return self._write(data)

    def recv(self, n):
        if not self.proc:
            raise Exception('SSH subprocess not initiated. '
                            'connect() must be called first.')
        return self.proc.stdout.read(n)

    def makefile(self, mode):
        if not self.proc:
            self.connect()
        if PY3:
            self.proc.stdout.channel = self

        return self.proc.stdout

    def close(self):
        if not self.proc or self.proc.stdin.closed:
            return
        self.proc.stdin.write(b'\n\n')
        self.proc.stdin.flush()
        self.proc.terminate()


class SSHConnection(httplib.HTTPConnection, object):
    def __init__(self, ssh_transport=None, timeout=60, host=None):
        super(SSHConnection, self).__init__(
            'localhost', timeout=timeout
        )
        self.ssh_transport = ssh_transport
        self.timeout = timeout
        self.ssh_host = host

    def connect(self):
        if self.ssh_transport:
            sock = self.ssh_transport.open_session()
            sock.settimeout(self.timeout)
            sock.exec_command('docker system dial-stdio')
        else:
            sock = SSHSocket(self.ssh_host)
            sock.settimeout(self.timeout)
            sock.connect()

        self.sock = sock


class SSHConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
    scheme = 'ssh'

    def __init__(self, ssh_client=None, timeout=60, maxsize=10, host=None):
        super(SSHConnectionPool, self).__init__(
            'localhost', timeout=timeout, maxsize=maxsize
        )
        self.ssh_transport = None
        self.timeout = timeout
        if ssh_client:
            self.ssh_transport = ssh_client.get_transport()
        self.ssh_host = host

    def _new_conn(self):
        return SSHConnection(self.ssh_transport, self.timeout, self.ssh_host)

    # When re-using connections, urllib3 calls fileno() on our
    # SSH channel instance, quickly overloading our fd limit. To avoid this,
    # we override _get_conn
    def _get_conn(self, timeout):
        conn = None
        try:
            conn = self.pool.get(block=self.block, timeout=timeout)

        except AttributeError:  # self.pool is None
            raise urllib3.exceptions.ClosedPoolError(self, "Pool is closed.")

        except Empty:
            if self.block:
                raise urllib3.exceptions.EmptyPoolError(
                    self,
                    "Pool reached maximum size and no more "
                    "connections are allowed."
                )
            pass  # Oh well, we'll create a new connection then

        return conn or self._new_conn()


class SSHHTTPAdapter(BaseHTTPAdapter):

    __attrs__ = HTTPAdapter.__attrs__ + [
        'pools', 'timeout', 'ssh_client', 'ssh_params', 'max_pool_size'
    ]

    def __init__(self, base_url, timeout=60,
                 pool_connections=constants.DEFAULT_NUM_POOLS,
                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE,
                 shell_out=False):
        self.ssh_client = None
        if not shell_out:
            self._create_paramiko_client(base_url)
            self._connect()

        self.ssh_host = base_url
        if base_url.startswith('ssh://'):
            self.ssh_host = base_url[len('ssh://'):]

        self.timeout = timeout
        self.max_pool_size = max_pool_size
        self.pools = RecentlyUsedContainer(
            pool_connections, dispose_func=lambda p: p.close()
        )
        super(SSHHTTPAdapter, self).__init__()

    def _create_paramiko_client(self, base_url):
        logging.getLogger("paramiko").setLevel(logging.WARNING)
        self.ssh_client = paramiko.SSHClient()
        base_url = urlparse(base_url)
        self.ssh_params = {
            "hostname": base_url.hostname,
            "port": base_url.port,
            "username": base_url.username,
        }
        ssh_config_file = os.path.expanduser("~/.ssh/config")
        if os.path.exists(ssh_config_file):
            conf = paramiko.SSHConfig()
            with open(ssh_config_file) as f:
                conf.parse(f)
            host_config = conf.lookup(base_url.hostname)
            if 'proxycommand' in host_config:
                self.ssh_params["sock"] = paramiko.ProxyCommand(
                    host_config['proxycommand']
                )
            if 'hostname' in host_config:
                self.ssh_params['hostname'] = host_config['hostname']
            if base_url.port is None and 'port' in host_config:
                self.ssh_params['port'] = host_config['port']
            if base_url.username is None and 'user' in host_config:
                self.ssh_params['username'] = host_config['user']
            if 'identityfile' in host_config:
                self.ssh_params['key_filename'] = host_config['identityfile']

        self.ssh_client.load_system_host_keys()
        self.ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy())

    def _connect(self):
        if self.ssh_client:
            self.ssh_client.connect(**self.ssh_params)

    def get_connection(self, url, proxies=None):
        if not self.ssh_client:
            return SSHConnectionPool(
                ssh_client=self.ssh_client,
                timeout=self.timeout,
                maxsize=self.max_pool_size,
                host=self.ssh_host
            )
        with self.pools.lock:
            pool = self.pools.get(url)
            if pool:
                return pool

            # Connection is closed try a reconnect
            if self.ssh_client and not self.ssh_client.get_transport():
                self._connect()

            pool = SSHConnectionPool(
                ssh_client=self.ssh_client,
                timeout=self.timeout,
                maxsize=self.max_pool_size,
                host=self.ssh_host
            )
            self.pools[url] = pool

        return pool

    def close(self):
        super(SSHHTTPAdapter, self).close()
        if self.ssh_client:
            self.ssh_client.close()
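`SSHSocket.__init__` derives the user, host, and port from a single `user@host:port` string by splitting off `:port` first and then `user@`. This standalone sketch of that parsing uses a hypothetical `parse_ssh_host` helper (not part of the vendored file):

```python
def parse_ssh_host(host):
    # Mirrors the split order in SSHSocket.__init__:
    # strip ':port' first, then 'user@'.
    port = None
    user = None
    if ':' in host:
        host, port = host.split(':')
    if '@' in host:
        user, host = host.split('@')
    return user, host, port


print(parse_ssh_host('alice@docker-host:2222'))  # ('alice', 'docker-host', '2222')
print(parse_ssh_host('docker-host'))             # (None, 'docker-host', None)
```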
73
plugins/module_utils/_api/transport/ssladapter.py
Normal file
@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

""" Resolves OpenSSL issues in some servers:
    https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/
    https://github.com/kennethreitz/requests/pull/799
"""
import sys

from ansible_collections.community.docker.plugins.module_utils.version import StrictVersion

from .._import_helper import HTTPAdapter, urllib3
from .basehttpadapter import BaseHTTPAdapter


PoolManager = urllib3.poolmanager.PoolManager


class SSLHTTPAdapter(BaseHTTPAdapter):
    '''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''

    __attrs__ = HTTPAdapter.__attrs__ + ['assert_fingerprint',
                                         'assert_hostname',
                                         'ssl_version']

    def __init__(self, ssl_version=None, assert_hostname=None,
                 assert_fingerprint=None, **kwargs):
        self.ssl_version = ssl_version
        self.assert_hostname = assert_hostname
        self.assert_fingerprint = assert_fingerprint
        super(SSLHTTPAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False):
        kwargs = {
            'num_pools': connections,
            'maxsize': maxsize,
            'block': block,
            'assert_hostname': self.assert_hostname,
            'assert_fingerprint': self.assert_fingerprint,
        }
        if self.ssl_version and self.can_override_ssl_version():
            kwargs['ssl_version'] = self.ssl_version

        self.poolmanager = PoolManager(**kwargs)

    def get_connection(self, *args, **kwargs):
        """
        Ensure assert_hostname is set correctly on our pool

        We already take care of a normal poolmanager via init_poolmanager

        But we still need to take care of when there is a proxy poolmanager
        """
        conn = super(SSLHTTPAdapter, self).get_connection(*args, **kwargs)
        if conn.assert_hostname != self.assert_hostname:
            conn.assert_hostname = self.assert_hostname
        return conn

    def can_override_ssl_version(self):
        urllib_ver = urllib3.__version__.split('-')[0]
        if urllib_ver is None:
            return False
        if urllib_ver == 'dev':
            return True
        return StrictVersion(urllib_ver) > StrictVersion('1.5')
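`can_override_ssl_version` is a plain version gate: dev builds always allow overriding, otherwise the urllib3 version must be newer than 1.5. A simplified standalone sketch of the same decision, using a tuple comparison instead of the collection's `StrictVersion` (and assuming purely numeric version strings):

```python
def can_override_ssl_version(urllib_ver):
    # Simplified mirror of SSLHTTPAdapter.can_override_ssl_version:
    # strip any '-suffix', treat 'dev' as always overridable, and
    # otherwise require a version newer than 1.5.
    urllib_ver = urllib_ver.split('-')[0]
    if not urllib_ver:
        return False
    if urllib_ver == 'dev':
        return True
    return tuple(int(x) for x in urllib_ver.split('.')) > (1, 5)


print(can_override_ssl_version('1.26.5'))  # True
print(can_override_ssl_version('1.4'))     # False
print(can_override_ssl_version('dev'))     # True
```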
122
plugins/module_utils/_api/transport/unixconn.py
Normal file
@@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import socket

from ansible.module_utils.six import PY2
from ansible.module_utils.six.moves import http_client as httplib

from .basehttpadapter import BaseHTTPAdapter
from .. import constants

from .._import_helper import HTTPAdapter, urllib3


RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer


class UnixHTTPResponse(httplib.HTTPResponse, object):
    def __init__(self, sock, *args, **kwargs):
        disable_buffering = kwargs.pop('disable_buffering', False)
        if PY2:
            # FIXME: We may need to disable buffering on Py3 as well,
            # but there's no clear way to do it at the moment. See:
            # https://github.com/docker/docker-py/issues/1799
            kwargs['buffering'] = not disable_buffering
        super(UnixHTTPResponse, self).__init__(sock, *args, **kwargs)


class UnixHTTPConnection(httplib.HTTPConnection, object):

    def __init__(self, base_url, unix_socket, timeout=60):
        super(UnixHTTPConnection, self).__init__(
            'localhost', timeout=timeout
        )
        self.base_url = base_url
        self.unix_socket = unix_socket
        self.timeout = timeout
        self.disable_buffering = False

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self.unix_socket)
        self.sock = sock

    def putheader(self, header, *values):
        super(UnixHTTPConnection, self).putheader(header, *values)
        if header == 'Connection' and 'Upgrade' in values:
            self.disable_buffering = True

    def response_class(self, sock, *args, **kwargs):
        if self.disable_buffering:
            kwargs['disable_buffering'] = True

        return UnixHTTPResponse(sock, *args, **kwargs)


class UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
    def __init__(self, base_url, socket_path, timeout=60, maxsize=10):
        super(UnixHTTPConnectionPool, self).__init__(
            'localhost', timeout=timeout, maxsize=maxsize
        )
        self.base_url = base_url
        self.socket_path = socket_path
        self.timeout = timeout

    def _new_conn(self):
        return UnixHTTPConnection(
            self.base_url, self.socket_path, self.timeout
        )


class UnixHTTPAdapter(BaseHTTPAdapter):

    __attrs__ = HTTPAdapter.__attrs__ + ['pools',
                                         'socket_path',
                                         'timeout',
                                         'max_pool_size']

    def __init__(self, socket_url, timeout=60,
                 pool_connections=constants.DEFAULT_NUM_POOLS,
                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE):
        socket_path = socket_url.replace('http+unix://', '')
        if not socket_path.startswith('/'):
            socket_path = '/' + socket_path
        self.socket_path = socket_path
        self.timeout = timeout
        self.max_pool_size = max_pool_size
        self.pools = RecentlyUsedContainer(
            pool_connections, dispose_func=lambda p: p.close()
        )
        super(UnixHTTPAdapter, self).__init__()

    def get_connection(self, url, proxies=None):
        with self.pools.lock:
            pool = self.pools.get(url)
            if pool:
                return pool

            pool = UnixHTTPConnectionPool(
                url, self.socket_path, self.timeout,
                maxsize=self.max_pool_size
            )
            self.pools[url] = pool

        return pool

    def request_url(self, request, proxies):
        # The select_proxy utility in requests errors out when the provided URL
        # doesn't have a hostname, like is the case when using a UNIX socket.
        # Since proxies are an irrelevant notion in the case of UNIX sockets
        # anyway, we simply return the path URL directly.
        # See also: https://github.com/docker/docker-py/issues/811
        return request.path_url
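`UnixHTTPAdapter.__init__` normalizes a `http+unix://` URL into a filesystem socket path by stripping the scheme and ensuring a leading slash. A standalone sketch of that normalization (hypothetical `unix_socket_path` helper, extracted for illustration):

```python
def unix_socket_path(socket_url):
    # Same normalization as in UnixHTTPAdapter.__init__.
    socket_path = socket_url.replace('http+unix://', '')
    if not socket_path.startswith('/'):
        socket_path = '/' + socket_path
    return socket_path


print(unix_socket_path('http+unix://var/run/docker.sock'))   # /var/run/docker.sock
print(unix_socket_path('http+unix:///var/run/docker.sock'))  # /var/run/docker.sock
```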
82
plugins/module_utils/_api/types/daemon.py
Normal file
@@ -0,0 +1,82 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import socket

from .._import_helper import urllib3

from ..errors import DockerException


class CancellableStream(object):
    """
    Stream wrapper for real-time events, logs, etc. from the server.

    Example:
        >>> events = client.events()
        >>> for event in events:
        ...     print(event)
        >>> # and cancel from another thread
        >>> events.close()
    """

    def __init__(self, stream, response):
        self._stream = stream
        self._response = response

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._stream)
        except urllib3.exceptions.ProtocolError:
            raise StopIteration
        except socket.error:
            raise StopIteration

    next = __next__

    def close(self):
        """
        Closes the event streaming.
        """

        if not self._response.raw.closed:
            # find the underlying socket object
            # based on api.client._get_raw_response_socket

            sock_fp = self._response.raw._fp.fp

            if hasattr(sock_fp, 'raw'):
                sock_raw = sock_fp.raw

                if hasattr(sock_raw, 'sock'):
                    sock = sock_raw.sock

                elif hasattr(sock_raw, '_sock'):
                    sock = sock_raw._sock

            elif hasattr(sock_fp, 'channel'):
                # We're working with a paramiko (SSH) channel, which doesn't
                # support cancelable streams with the current implementation
                raise DockerException(
                    'Cancellable streams not supported for the SSH protocol'
                )
            else:
                sock = sock_fp._sock

            if hasattr(urllib3.contrib, 'pyopenssl') and isinstance(
                    sock, urllib3.contrib.pyopenssl.WrappedSocket):
                sock = sock.socket

            sock.shutdown(socket.SHUT_RDWR)
            sock.close()
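The iteration contract of `CancellableStream` — turn transport-level errors into a clean end of iteration — can be sketched without a real HTTP response. `FlakyStream` below is a hypothetical wrapper over a plain iterator, illustrating only the `__next__` behaviour:

```python
class FlakyStream(object):
    # Minimal sketch of CancellableStream.__next__: any socket-level
    # error from the underlying stream simply ends iteration instead
    # of propagating to the caller.
    def __init__(self, stream):
        self._stream = stream

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._stream)
        except (OSError, IOError):
            raise StopIteration

    next = __next__


def events():
    # Simulated event source whose transport fails mid-stream.
    yield {'status': 'start'}
    yield {'status': 'die'}
    raise OSError('connection reset')


print(list(FlakyStream(events())))  # [{'status': 'start'}, {'status': 'die'}]
```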
301
plugins/module_utils/_api/utils/build.py
Normal file
301
plugins/module_utils/_api/utils/build.py
Normal file
@ -0,0 +1,301 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# This code is part of the Ansible collection community.docker, but is an independent component.
|
||||
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
|
||||
#
|
||||
# Copyright (c) 2016-2022 Docker, Inc.
|
||||
#
|
||||
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)
|
||||
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
import io
|
||||
import os
|
||||
import random
|
||||
import re
|
||||
import tarfile
|
||||
import tempfile
|
||||
|
||||
from ansible.module_utils.six import PY3
|
||||
|
||||
from . import fnmatch
|
||||
from ..constants import IS_WINDOWS_PLATFORM, WINDOWS_LONGPATH_PREFIX
|
||||
|
||||
|
||||
_SEP = re.compile('/|\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')
|
||||
|
||||
|
||||
def tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):
|
||||
root = os.path.abspath(path)
|
||||
exclude = exclude or []
|
||||
dockerfile = dockerfile or (None, None)
|
||||
extra_files = []
|
||||
if dockerfile[1] is not None:
|
||||
dockerignore_contents = '\n'.join(
|
||||
(exclude or ['.dockerignore']) + [dockerfile[0]]
|
||||
)
|
||||
extra_files = [
|
||||
('.dockerignore', dockerignore_contents),
|
||||
dockerfile,
|
||||
]
|
||||
return create_archive(
|
||||
files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile[0])),
|
||||
root=root, fileobj=fileobj, gzip=gzip, extra_files=extra_files
|
||||
)
|
||||
|
||||
|
||||
def exclude_paths(root, patterns, dockerfile=None):
|
||||
"""
|
||||
Given a root directory path and a list of .dockerignore patterns, return
|
||||
an iterator of all paths (both regular files and directories) in the root
|
||||
directory that do *not* match any of the patterns.
|
||||
|
||||
All paths returned are relative to the root.
|
||||
"""
|
||||
|
||||
if dockerfile is None:
|
||||
dockerfile = 'Dockerfile'
|
||||
|
||||
patterns.append('!' + dockerfile)
|
||||
pm = PatternMatcher(patterns)
|
||||
return set(pm.walk(root))
|
||||
|
||||
|
||||
def build_file_list(root):
|
||||
files = []
|
||||
for dirname, dirnames, fnames in os.walk(root):
|
||||
for filename in fnames + dirnames:
|
||||
longpath = os.path.join(dirname, filename)
|
||||
files.append(
|
||||
longpath.replace(root, '', 1).lstrip('/')
|
||||
)
|
||||
|
||||
return files
|
||||
|
||||
|
||||
def create_archive(root, files=None, fileobj=None, gzip=False,
|
||||
extra_files=None):
|
||||
extra_files = extra_files or []
|
||||
if not fileobj:
|
||||
fileobj = tempfile.NamedTemporaryFile()
|
||||
t = tarfile.open(mode='w:gz' if gzip else 'w', fileobj=fileobj)
|
||||
if files is None:
|
||||
files = build_file_list(root)
|
||||
extra_names = set(e[0] for e in extra_files)
|
||||
for path in files:
|
||||
if path in extra_names:
|
||||
# Extra files override context files with the same name
|
||||
continue
|
||||
full_path = os.path.join(root, path)
|
||||
|
||||
i = t.gettarinfo(full_path, arcname=path)
|
||||
if i is None:
|
||||
# This happens when we encounter a socket file. We can safely
|
||||
# ignore it and proceed.
|
||||
continue
|
||||
|
||||
# Workaround https://bugs.python.org/issue32713
|
||||
if i.mtime < 0 or i.mtime > 8**11 - 1:
|
||||
i.mtime = int(i.mtime)
|
||||
|
||||
if IS_WINDOWS_PLATFORM:
|
||||
# Windows doesn't keep track of the execute bit, so we make files
|
||||
# and directories executable by default.
|
||||
i.mode = i.mode & 0o755 | 0o111
|
||||
|
||||
if i.isfile():
|
||||
try:
|
||||
with open(full_path, 'rb') as f:
|
||||
t.addfile(i, f)
|
||||
except IOError:
|
||||
raise IOError(
|
||||
'Can not read file in context: {0}'.format(full_path)
|
||||
)
|
||||
else:
|
||||
# Directories, FIFOs, symlinks... don't need to be read.
|
||||
t.addfile(i, None)
|
||||
|
||||
for name, contents in extra_files:
|
||||
info = tarfile.TarInfo(name)
|
||||
contents_encoded = contents.encode('utf-8')
|
||||
info.size = len(contents_encoded)
|
||||
t.addfile(info, io.BytesIO(contents_encoded))
|
||||
|
||||
t.close()
|
||||
fileobj.seek(0)
|
||||
return fileobj
|
||||
|
||||
|
||||
def mkbuildcontext(dockerfile):
|
||||
f = tempfile.NamedTemporaryFile()
|
||||
t = tarfile.open(mode='w', fileobj=f)
|
||||
if isinstance(dockerfile, io.StringIO):
|
||||
dfinfo = tarfile.TarInfo('Dockerfile')
|
||||
if PY3:
|
||||
raise TypeError('Please use io.BytesIO to create in-memory '
|
||||
'Dockerfiles with Python 3')
|
||||
else:
|
||||
dfinfo.size = len(dockerfile.getvalue())
|
||||
dockerfile.seek(0)
|
||||
elif isinstance(dockerfile, io.BytesIO):
|
||||
dfinfo = tarfile.TarInfo('Dockerfile')
|
||||
dfinfo.size = len(dockerfile.getvalue())
|
||||
dockerfile.seek(0)
|
||||
else:
|
||||
dfinfo = t.gettarinfo(fileobj=dockerfile, arcname='Dockerfile')
|
||||
t.addfile(dfinfo, dockerfile)
|
||||
t.close()
|
||||
f.seek(0)
|
||||
return f
|
||||
|
||||
|
||||
def split_path(p):
|
||||
return [pt for pt in re.split(_SEP, p) if pt and pt != '.']
|
||||
|
||||
|
||||
def normalize_slashes(p):
|
||||
if IS_WINDOWS_PLATFORM:
|
||||
return '/'.join(split_path(p))
|
||||
return p
|
||||
|
||||
|
||||
def walk(root, patterns, default=True):
|
||||
pm = PatternMatcher(patterns)
|
||||
return pm.walk(root)
|
||||
|
||||
|
||||
# Heavily based on
|
||||
# https://github.com/moby/moby/blob/master/pkg/fileutils/fileutils.go
|
||||
class PatternMatcher(object):
|
||||
def __init__(self, patterns):
|
||||
self.patterns = list(filter(
|
||||
lambda p: p.dirs, [Pattern(p) for p in patterns]
|
||||
))
|
||||
self.patterns.append(Pattern('!.dockerignore'))
|
||||
|
||||
def matches(self, filepath):
|
||||
matched = False
|
||||
parent_path = os.path.dirname(filepath)
|
||||
parent_path_dirs = split_path(parent_path)
|
||||
|
||||
for pattern in self.patterns:
|
||||
negative = pattern.exclusion
|
||||
match = pattern.match(filepath)
|
||||
if not match and parent_path != '':
|
||||
if len(pattern.dirs) <= len(parent_path_dirs):
|
||||
match = pattern.match(
|
||||
os.path.sep.join(parent_path_dirs[:len(pattern.dirs)])
|
||||
)
|
||||
|
||||
if match:
|
||||
matched = not negative
|
||||
|
||||
return matched
|
||||
|
||||
def walk(self, root):
|
||||
def rec_walk(current_dir):
|
||||
for f in os.listdir(current_dir):
|
||||
fpath = os.path.join(
|
||||
os.path.relpath(current_dir, root), f
|
||||
)
|
||||
if fpath.startswith('.' + os.path.sep):
|
||||
fpath = fpath[2:]
|
||||
match = self.matches(fpath)
|
||||
if not match:
|
||||
yield fpath
|
||||
|
||||
cur = os.path.join(root, fpath)
|
||||
if not os.path.isdir(cur) or os.path.islink(cur):
|
||||
continue
|
||||
|
||||
if match:
|
||||
# If we want to skip this file and it's a directory
|
||||
# then we should first check to see if there's an
|
||||
# excludes pattern (e.g. !dir/file) that starts with this
|
||||
                    # dir. If so then we can't skip this dir.
                    skip = True

                    for pat in self.patterns:
                        if not pat.exclusion:
                            continue
                        if pat.cleaned_pattern.startswith(
                                normalize_slashes(fpath)):
                            skip = False
                            break
                    if skip:
                        continue
                for sub in rec_walk(cur):
                    yield sub

        return rec_walk(root)


class Pattern(object):
    def __init__(self, pattern_str):
        self.exclusion = False
        if pattern_str.startswith('!'):
            self.exclusion = True
            pattern_str = pattern_str[1:]

        self.dirs = self.normalize(pattern_str)
        self.cleaned_pattern = '/'.join(self.dirs)

    @classmethod
    def normalize(cls, p):

        # Leading and trailing slashes are not relevant. Yes,
        # "foo.py/" must exclude the "foo.py" regular file. "."
        # components are not relevant either, even if the whole
        # pattern is only ".", as the Docker reference states: "For
        # historical reasons, the pattern . is ignored."
        # ".." component must be cleared with the potential previous
        # component, regardless of whether it exists: "A preprocessing
        # step [...] eliminates . and .. elements using Go's
        # filepath.".
        i = 0
        split = split_path(p)
        while i < len(split):
            if split[i] == '..':
                del split[i]
                if i > 0:
                    del split[i - 1]
                    i -= 1
            else:
                i += 1
        return split

    def match(self, filepath):
        return fnmatch.fnmatch(normalize_slashes(filepath), self.cleaned_pattern)


def process_dockerfile(dockerfile, path):
    if not dockerfile:
        return (None, None)

    abs_dockerfile = dockerfile
    if not os.path.isabs(dockerfile):
        abs_dockerfile = os.path.join(path, dockerfile)
        if IS_WINDOWS_PLATFORM and path.startswith(
                WINDOWS_LONGPATH_PREFIX):
            abs_dockerfile = '{0}{1}'.format(
                WINDOWS_LONGPATH_PREFIX,
                os.path.normpath(
                    abs_dockerfile[len(WINDOWS_LONGPATH_PREFIX):]
                )
            )
    if (os.path.splitdrive(path)[0] != os.path.splitdrive(abs_dockerfile)[0] or
            os.path.relpath(abs_dockerfile, path).startswith('..')):
        # Dockerfile not in context - read data to insert into tar later
        with open(abs_dockerfile) as df:
            return (
                '.dockerfile.{random:x}'.format(random=random.getrandbits(160)),
                df.read()
            )

    # Dockerfile is inside the context - return path relative to context root
    if dockerfile == abs_dockerfile:
        # Only calculate relpath if necessary to avoid errors
        # on Windows client -> Linux Docker
        # see https://github.com/docker/compose/issues/5969
        dockerfile = os.path.relpath(abs_dockerfile, path)
    return (dockerfile, None)
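The `..` handling in `Pattern.normalize` mirrors the preprocessing step Go's `filepath.Clean` performs: each `..` cancels itself and the component before it. A minimal standalone sketch of that loop, assuming the collection's `split_path` helper splits on `/` and drops empty and `.` components:

```python
def normalize_components(pattern):
    # Assumption: split_path splits on "/" and drops "" and "." parts.
    split = [c for c in pattern.split('/') if c not in ('', '.')]
    i = 0
    while i < len(split):
        if split[i] == '..':
            # ".." removes itself and, if present, the previous component.
            del split[i]
            if i > 0:
                del split[i - 1]
                i -= 1
        else:
            i += 1
    return split

print(normalize_components('a/b/../c'))   # "b" is cancelled by ".."
print(normalize_components('../a'))       # a leading ".." simply disappears
```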
77 plugins/module_utils/_api/utils/config.py Normal file
@@ -0,0 +1,77 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json
import logging
import os

from ..constants import IS_WINDOWS_PLATFORM

DOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')
LEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'

log = logging.getLogger(__name__)


def find_config_file(config_path=None):
    paths = list(filter(None, [
        config_path,  # 1
        config_path_from_environment(),  # 2
        os.path.join(home_dir(), DOCKER_CONFIG_FILENAME),  # 3
        os.path.join(home_dir(), LEGACY_DOCKER_CONFIG_FILENAME),  # 4
    ]))

    log.debug("Trying paths: %s", repr(paths))

    for path in paths:
        if os.path.exists(path):
            log.debug("Found file at path: %s", path)
            return path

    log.debug("No config file found")

    return None


def config_path_from_environment():
    config_dir = os.environ.get('DOCKER_CONFIG')
    if not config_dir:
        return None
    return os.path.join(config_dir, os.path.basename(DOCKER_CONFIG_FILENAME))


def home_dir():
    """
    Get the user's home directory, using the same logic as the Docker Engine
    client - use %USERPROFILE% on Windows, $HOME/getuid on POSIX.
    """
    if IS_WINDOWS_PLATFORM:
        return os.environ.get('USERPROFILE', '')
    else:
        return os.path.expanduser('~')


def load_general_config(config_path=None):
    config_file = find_config_file(config_path)

    if not config_file:
        return {}

    try:
        with open(config_file) as f:
            return json.load(f)
    except (IOError, ValueError) as e:
        # In the case of a legacy `.dockercfg` file, we won't
        # be able to load any JSON data.
        log.debug(e)

    log.debug("All parsing attempts failed - returning empty config")
    return {}
58 plugins/module_utils/_api/utils/decorators.py Normal file
@@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import functools

from .. import errors
from . import utils


def check_resource(resource_name):
    def decorator(f):
        @functools.wraps(f)
        def wrapped(self, resource_id=None, *args, **kwargs):
            if resource_id is None and kwargs.get(resource_name):
                resource_id = kwargs.pop(resource_name)
            if isinstance(resource_id, dict):
                resource_id = resource_id.get('Id', resource_id.get('ID'))
            if not resource_id:
                raise errors.NullResource(
                    'Resource ID was not provided'
                )
            return f(self, resource_id, *args, **kwargs)
        return wrapped
    return decorator


def minimum_version(version):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            if utils.version_lt(self._version, version):
                raise errors.InvalidVersion(
                    '{0} is not available for version < {1}'.format(
                        f.__name__, version
                    )
                )
            return f(self, *args, **kwargs)
        return wrapper
    return decorator


def update_headers(f):
    def inner(self, *args, **kwargs):
        if 'HttpHeaders' in self._general_configs:
            if not kwargs.get('headers'):
                kwargs['headers'] = self._general_configs['HttpHeaders']
            else:
                kwargs['headers'].update(self._general_configs['HttpHeaders'])
        return f(self, *args, **kwargs)
    return inner
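To illustrate how `check_resource` normalizes its first argument (accepting a positional ID, a keyword, or a dict with an `Id`/`ID` key), here is a self-contained sketch with a hypothetical `DummyClient` and a local stand-in for the collection's `errors.NullResource`:

```python
import functools


class NullResource(Exception):
    # Local stand-in for errors.NullResource, for this sketch only.
    pass


def check_resource(resource_name):
    # Same shape as the vendored decorator above.
    def decorator(f):
        @functools.wraps(f)
        def wrapped(self, resource_id=None, *args, **kwargs):
            if resource_id is None and kwargs.get(resource_name):
                resource_id = kwargs.pop(resource_name)
            if isinstance(resource_id, dict):
                resource_id = resource_id.get('Id', resource_id.get('ID'))
            if not resource_id:
                raise NullResource('Resource ID was not provided')
            return f(self, resource_id, *args, **kwargs)
        return wrapped
    return decorator


class DummyClient(object):
    # Hypothetical client, not part of the collection.
    @check_resource('container')
    def inspect(self, container):
        return container


client = DummyClient()
print(client.inspect({'Id': 'abc123'}))    # dict is reduced to its Id
print(client.inspect(container='abc123'))  # keyword form also works
```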
126 plugins/module_utils/_api/utils/fnmatch.py Normal file
@@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

"""Filename matching with shell patterns.

fnmatch(FILENAME, PATTERN) matches according to the local convention.
fnmatchcase(FILENAME, PATTERN) always takes case into account.

The functions operate by translating the pattern into a regular
expression. They cache the compiled regular expressions for speed.

The function translate(PATTERN) returns a regular expression
corresponding to PATTERN. (It does not compile it.)
"""

import re

__all__ = ["fnmatch", "fnmatchcase", "translate"]

_cache = {}
_MAXCACHE = 100


def _purge():
    """Clear the pattern cache"""
    _cache.clear()


def fnmatch(name, pat):
    """Test whether FILENAME matches PATTERN.

    Patterns are Unix shell style:

    *       matches everything
    ?       matches any single character
    [seq]   matches any character in seq
    [!seq]  matches any char not in seq

    An initial period in FILENAME is not special.
    Both FILENAME and PATTERN are first case-normalized
    if the operating system requires it.
    If you don't want this, use fnmatchcase(FILENAME, PATTERN).
    """

    name = name.lower()
    pat = pat.lower()
    return fnmatchcase(name, pat)


def fnmatchcase(name, pat):
    """Test whether FILENAME matches PATTERN, including case.
    This is a version of fnmatch() which doesn't case-normalize
    its arguments.
    """

    try:
        re_pat = _cache[pat]
    except KeyError:
        res = translate(pat)
        if len(_cache) >= _MAXCACHE:
            _cache.clear()
        _cache[pat] = re_pat = re.compile(res)
    return re_pat.match(name) is not None


def translate(pat):
    """Translate a shell PATTERN to a regular expression.

    There is no way to quote meta-characters.
    """
    i, n = 0, len(pat)
    res = '^'
    while i < n:
        c = pat[i]
        i = i + 1
        if c == '*':
            if i < n and pat[i] == '*':
                # is some flavor of "**"
                i = i + 1
                # Treat **/ as ** so eat the "/"
                if i < n and pat[i] == '/':
                    i = i + 1
                if i >= n:
                    # is "**EOF" - to align with .gitignore just accept all
                    res = res + '.*'
                else:
                    # is "**"
                    # Note that this allows for any # of /'s (even 0) because
                    # the .* will eat everything, even /'s
                    res = res + '(.*/)?'
            else:
                # is "*" so map it to anything but "/"
                res = res + '[^/]*'
        elif c == '?':
            # "?" is any char except "/"
            res = res + '[^/]'
        elif c == '[':
            j = i
            if j < n and pat[j] == '!':
                j = j + 1
            if j < n and pat[j] == ']':
                j = j + 1
            while j < n and pat[j] != ']':
                j = j + 1
            if j >= n:
                res = res + '\\['
            else:
                stuff = pat[i:j].replace('\\', '\\\\')
                i = j + 1
                if stuff[0] == '!':
                    stuff = '^' + stuff[1:]
                elif stuff[0] == '^':
                    stuff = '\\' + stuff
                res = '%s[%s]' % (res, stuff)
        else:
            res = res + re.escape(c)

    return res + '$'
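The net effect of the translation rules in `translate`: `**` may span directory separators while `*` and `?` may not, matching `.dockerignore` semantics. Reproducing the vendored logic as a standalone function and compiling its output shows the difference:

```python
import re


def translate(pat):
    """Shell pattern -> regex source (copy of the vendored logic)."""
    i, n = 0, len(pat)
    res = '^'
    while i < n:
        c = pat[i]
        i = i + 1
        if c == '*':
            if i < n and pat[i] == '*':
                i = i + 1
                if i < n and pat[i] == '/':
                    i = i + 1          # treat "**/" as "**"
                if i >= n:
                    res = res + '.*'   # trailing "**" accepts everything
                else:
                    res = res + '(.*/)?'   # "**/" spans any depth, even 0
            else:
                res = res + '[^/]*'        # "*" stops at "/"
        elif c == '?':
            res = res + '[^/]'
        elif c == '[':
            j = i
            if j < n and pat[j] == '!':
                j = j + 1
            if j < n and pat[j] == ']':
                j = j + 1
            while j < n and pat[j] != ']':
                j = j + 1
            if j >= n:
                res = res + '\\['
            else:
                stuff = pat[i:j].replace('\\', '\\\\')
                i = j + 1
                if stuff[0] == '!':
                    stuff = '^' + stuff[1:]
                elif stuff[0] == '^':
                    stuff = '\\' + stuff
                res = '%s[%s]' % (res, stuff)
        else:
            res = res + re.escape(c)
    return res + '$'


deep = re.compile(translate('**/*.log'))
shallow = re.compile(translate('*.log'))
print(bool(deep.match('a/b/c.log')))     # "**" crosses directories
print(bool(shallow.match('a/b/c.log')))  # "*" does not
```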
88 plugins/module_utils/_api/utils/json_stream.py Normal file
@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json
import json.decoder

from ansible.module_utils.six import text_type

from ..errors import StreamParseError


json_decoder = json.JSONDecoder()


def stream_as_text(stream):
    """
    Given a stream of bytes or text, if any of the items in the stream
    are bytes convert them to text.
    This function can be removed once we return text streams
    instead of byte streams.
    """
    for data in stream:
        if not isinstance(data, text_type):
            data = data.decode('utf-8', 'replace')
        yield data


def json_splitter(buffer):
    """Attempt to parse a json object from a buffer. If there is at least one
    object, return it and the rest of the buffer, otherwise return None.
    """
    buffer = buffer.strip()
    try:
        obj, index = json_decoder.raw_decode(buffer)
        rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end():]
        return obj, rest
    except ValueError:
        return None


def json_stream(stream):
    """Given a stream of text, return a stream of json objects.
    This handles streams which are inconsistently buffered (some entries may
    be newline delimited, and others are not).
    """
    return split_buffer(stream, json_splitter, json_decoder.decode)


def line_splitter(buffer, separator=u'\n'):
    index = buffer.find(text_type(separator))
    if index == -1:
        return None
    return buffer[:index + 1], buffer[index + 1:]


def split_buffer(stream, splitter=None, decoder=lambda a: a):
    """Given a generator which yields strings and a splitter function,
    joins all input, splits on the separator and yields each chunk.
    Unlike string.split(), each chunk includes the trailing
    separator, except for the last one if none was found on the end
    of the input.
    """
    splitter = splitter or line_splitter
    buffered = text_type('')

    for data in stream_as_text(stream):
        buffered += data
        while True:
            buffer_split = splitter(buffered)
            if buffer_split is None:
                break

            item, buffered = buffer_split
            yield item

    if buffered:
        try:
            yield decoder(buffered)
        except Exception as e:
            raise StreamParseError(e)
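How `split_buffer` plus `json_splitter` reassemble complete JSON objects from arbitrarily chunked input can be sketched standalone. This is a simplified re-implementation for illustration (the vendored module additionally converts byte chunks to text via `stream_as_text` and uses `json.decoder.WHITESPACE` instead of `lstrip`):

```python
import json

decoder = json.JSONDecoder()


def json_splitter(buffer):
    # Return (object, remainder) if the buffer holds at least one
    # complete JSON value, else None - same contract as the vendored helper.
    buffer = buffer.strip()
    try:
        obj, index = decoder.raw_decode(buffer)
        return obj, buffer[index:].lstrip()
    except ValueError:
        return None


def json_stream(chunks):
    # Accumulate chunks, yielding each object as soon as it is complete.
    buffered = ''
    for chunk in chunks:
        buffered += chunk
        while True:
            split = json_splitter(buffered)
            if split is None:
                break
            obj, buffered = split
            yield obj


# Three objects arriving in three uneven chunks:
chunks = ['{"a": 1}{"b"', ': 2}', '\n{"c": 3}']
print(list(json_stream(chunks)))
```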
94 plugins/module_utils/_api/utils/ports.py Normal file
@@ -0,0 +1,94 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import re

PORT_SPEC = re.compile(
    "^"  # Match full string
    "("  # External part
    r"(\[?(?P<host>[a-fA-F\d.:]+)\]?:)?"  # Address
    r"(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:"  # External range
    ")?"
    r"(?P<int>[\d]+)(-(?P<int_end>[\d]+))?"  # Internal range
    "(?P<proto>/(udp|tcp|sctp))?"  # Protocol
    "$"  # Match full string
)


def add_port_mapping(port_bindings, internal_port, external):
    if internal_port in port_bindings:
        port_bindings[internal_port].append(external)
    else:
        port_bindings[internal_port] = [external]


def add_port(port_bindings, internal_port_range, external_range):
    if external_range is None:
        for internal_port in internal_port_range:
            add_port_mapping(port_bindings, internal_port, None)
    else:
        ports = zip(internal_port_range, external_range)
        for internal_port, external_port in ports:
            add_port_mapping(port_bindings, internal_port, external_port)


def build_port_bindings(ports):
    port_bindings = {}
    for port in ports:
        internal_port_range, external_range = split_port(port)
        add_port(port_bindings, internal_port_range, external_range)
    return port_bindings


def _raise_invalid_port(port):
    raise ValueError('Invalid port "%s", should be '
                     '[[remote_ip:]remote_port[-remote_port]:]'
                     'port[/protocol]' % port)


def port_range(start, end, proto, randomly_available_port=False):
    if not start:
        return start
    if not end:
        return [start + proto]
    if randomly_available_port:
        return ['{0}-{1}'.format(start, end) + proto]
    return [str(port) + proto for port in range(int(start), int(end) + 1)]


def split_port(port):
    if hasattr(port, 'legacy_repr'):
        # This is the worst hack, but it prevents a bug in Compose 1.14.0
        # https://github.com/docker/docker-py/issues/1668
        # TODO: remove once fixed in Compose stable
        port = port.legacy_repr()
    port = str(port)
    match = PORT_SPEC.match(port)
    if match is None:
        _raise_invalid_port(port)
    parts = match.groupdict()

    host = parts['host']
    proto = parts['proto'] or ''
    internal = port_range(parts['int'], parts['int_end'], proto)
    external = port_range(
        parts['ext'], parts['ext_end'], '', len(internal) == 1)

    if host is None:
        if external is not None and len(internal) != len(external):
            raise ValueError('Port ranges don\'t match in length')
        return internal, external
    else:
        if not external:
            external = [None] * len(internal)
        elif len(internal) != len(external):
            raise ValueError('Port ranges don\'t match in length')
        return internal, [(host, ext_port) for ext_port in external]
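The behaviour of `split_port` on the common spec shapes can be demonstrated with a trimmed standalone copy of the regex and helpers (the `legacy_repr` Compose workaround and the collection's error message are omitted here):

```python
import re

# Same PORT_SPEC as the vendored module.
PORT_SPEC = re.compile(
    "^"
    "("
    r"(\[?(?P<host>[a-fA-F\d.:]+)\]?:)?"
    r"(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:"
    ")?"
    r"(?P<int>[\d]+)(-(?P<int_end>[\d]+))?"
    "(?P<proto>/(udp|tcp|sctp))?"
    "$"
)


def port_range(start, end, proto, randomly_available_port=False):
    if not start:
        return start
    if not end:
        return [start + proto]
    if randomly_available_port:
        return ['{0}-{1}'.format(start, end) + proto]
    return [str(port) + proto for port in range(int(start), int(end) + 1)]


def split_port(port):
    match = PORT_SPEC.match(str(port))
    if match is None:
        raise ValueError('Invalid port "%s"' % port)
    parts = match.groupdict()
    host = parts['host']
    proto = parts['proto'] or ''
    internal = port_range(parts['int'], parts['int_end'], proto)
    external = port_range(parts['ext'], parts['ext_end'], '', len(internal) == 1)
    if host is None:
        if external is not None and len(internal) != len(external):
            raise ValueError("Port ranges don't match in length")
        return internal, external
    if not external:
        external = [None] * len(internal)
    elif len(internal) != len(external):
        raise ValueError("Port ranges don't match in length")
    return internal, [(host, ext_port) for ext_port in external]


print(split_port('8080:80/tcp'))
print(split_port('127.0.0.1:8080-8081:80-81'))
```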
84 plugins/module_utils/_api/utils/proxy.py Normal file
@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from .utils import format_environment


class ProxyConfig(dict):
    '''
    Hold the client's proxy configuration
    '''
    @property
    def http(self):
        return self.get('http')

    @property
    def https(self):
        return self.get('https')

    @property
    def ftp(self):
        return self.get('ftp')

    @property
    def no_proxy(self):
        return self.get('no_proxy')

    @staticmethod
    def from_dict(config):
        '''
        Instantiate a new ProxyConfig from a dictionary that represents a
        client configuration, as described in `the documentation`_.

        .. _the documentation:
            https://docs.docker.com/network/proxy/#configure-the-docker-client
        '''
        return ProxyConfig(
            http=config.get('httpProxy'),
            https=config.get('httpsProxy'),
            ftp=config.get('ftpProxy'),
            no_proxy=config.get('noProxy'),
        )

    def get_environment(self):
        '''
        Return a dictionary representing the environment variables used to
        set the proxy settings.
        '''
        env = {}
        if self.http:
            env['http_proxy'] = env['HTTP_PROXY'] = self.http
        if self.https:
            env['https_proxy'] = env['HTTPS_PROXY'] = self.https
        if self.ftp:
            env['ftp_proxy'] = env['FTP_PROXY'] = self.ftp
        if self.no_proxy:
            env['no_proxy'] = env['NO_PROXY'] = self.no_proxy
        return env

    def inject_proxy_environment(self, environment):
        '''
        Given a list of strings representing environment variables, prepend the
        environment variables corresponding to the proxy settings.
        '''
        if not self:
            return environment

        proxy_env = format_environment(self.get_environment())
        if not environment:
            return proxy_env
        # It is important to prepend our variables, because we want the
        # variables defined in "environment" to take precedence.
        return proxy_env + environment

    def __str__(self):
        return 'ProxyConfig(http={0}, https={1}, ftp={2}, no_proxy={3})'.format(
            self.http, self.https, self.ftp, self.no_proxy)
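A client config dictionary maps onto proxy environment variables as follows. This is a trimmed standalone copy of `from_dict` and `get_environment` for illustration (the property accessors and `format_environment` dependency are omitted):

```python
class ProxyConfig(dict):
    # Trimmed copy of the vendored class for demonstration purposes.
    @staticmethod
    def from_dict(config):
        return ProxyConfig(
            http=config.get('httpProxy'),
            https=config.get('httpsProxy'),
            ftp=config.get('ftpProxy'),
            no_proxy=config.get('noProxy'),
        )

    def get_environment(self):
        # Each configured proxy is exported in lower- and upper-case form.
        env = {}
        if self.get('http'):
            env['http_proxy'] = env['HTTP_PROXY'] = self['http']
        if self.get('https'):
            env['https_proxy'] = env['HTTPS_PROXY'] = self['https']
        if self.get('ftp'):
            env['ftp_proxy'] = env['FTP_PROXY'] = self['ftp']
        if self.get('no_proxy'):
            env['no_proxy'] = env['NO_PROXY'] = self['no_proxy']
        return env


cfg = ProxyConfig.from_dict({'httpProxy': 'http://proxy:3128',
                             'noProxy': 'localhost'})
print(sorted(cfg.get_environment()))
```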
178 plugins/module_utils/_api/utils/socket.py Normal file
@@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import errno
import os
import select
import socket as pysocket
import struct

from ansible.module_utils.six import PY3, binary_type

from ..transport.npipesocket import NpipeSocket


STDOUT = 1
STDERR = 2


class SocketError(Exception):
    pass


def read(socket, n=4096):
    """
    Reads at most n bytes from socket
    """

    recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)

    if PY3 and not isinstance(socket, NpipeSocket):
        select.select([socket], [], [])

    try:
        if hasattr(socket, 'recv'):
            return socket.recv(n)
        if PY3 and isinstance(socket, getattr(pysocket, 'SocketIO')):
            return socket.read(n)
        return os.read(socket.fileno(), n)
    except EnvironmentError as e:
        if e.errno not in recoverable_errors:
            raise


def read_exactly(socket, n):
    """
    Reads exactly n bytes from socket
    Raises SocketError if there isn't enough data
    """
    data = binary_type()
    while len(data) < n:
        next_data = read(socket, n - len(data))
        if not next_data:
            raise SocketError("Unexpected EOF")
        data += next_data
    return data


def next_frame_header(socket):
    """
    Returns the stream and size of the next frame of data waiting to be read
    from socket, according to the protocol defined here:

    https://docs.docker.com/engine/api/v1.24/#attach-to-a-container
    """
    try:
        data = read_exactly(socket, 8)
    except SocketError:
        return (-1, -1)

    stream, actual = struct.unpack('>BxxxL', data)
    return (stream, actual)


def frames_iter(socket, tty):
    """
    Return a generator of frames read from socket. A frame is a tuple where
    the first item is the stream number and the second item is a chunk of data.

    If the tty setting is enabled, the streams are multiplexed into the stdout
    stream.
    """
    if tty:
        return ((STDOUT, frame) for frame in frames_iter_tty(socket))
    else:
        return frames_iter_no_tty(socket)


def frames_iter_no_tty(socket):
    """
    Returns a generator of data read from the socket when the tty setting is
    not enabled.
    """
    while True:
        (stream, n) = next_frame_header(socket)
        if n < 0:
            break
        while n > 0:
            result = read(socket, n)
            if result is None:
                continue
            data_length = len(result)
            if data_length == 0:
                # We have reached EOF
                return
            n -= data_length
            yield (stream, result)


def frames_iter_tty(socket):
    """
    Return a generator of data read from the socket when the tty setting is
    enabled.
    """
    while True:
        result = read(socket)
        if len(result) == 0:
            # We have reached EOF
            return
        yield result


def consume_socket_output(frames, demux=False):
    """
    Iterate through frames read from the socket and return the result.

    Args:

        demux (bool):
            If False, stdout and stderr are multiplexed, and the result is the
            concatenation of all the frames. If True, the streams are
            demultiplexed, and the result is a 2-tuple where each item is the
            concatenation of frames belonging to the same stream.
    """
    if demux is False:
        # If the streams are multiplexed, the generator returns strings, that
        # we just need to concatenate.
        return binary_type().join(frames)

    # If the streams are demultiplexed, the generator yields tuples
    # (stdout, stderr)
    out = [None, None]
    for frame in frames:
        # It is guaranteed that for each frame, one and only one stream
        # is not None.
        if frame == (None, None):
            raise AssertionError('frame must not be (None, None), but got %s' % (frame, ))
        if frame[0] is not None:
            if out[0] is None:
                out[0] = frame[0]
            else:
                out[0] += frame[0]
        else:
            if out[1] is None:
                out[1] = frame[1]
            else:
                out[1] += frame[1]
    return tuple(out)


def demux_adaptor(stream_id, data):
    """
    Utility to demultiplex stdout and stderr when reading frames from the
    socket.
    """
    if stream_id == STDOUT:
        return (data, None)
    elif stream_id == STDERR:
        return (None, data)
    else:
        raise ValueError('{0} is not a valid stream'.format(stream_id))
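How `demux_adaptor` and `consume_socket_output` cooperate can be shown without a live socket by feeding pre-built frames. This standalone sketch simplifies the vendored accumulation loop but produces the same results:

```python
STDOUT = 1
STDERR = 2


def demux_adaptor(stream_id, data):
    # Same mapping as the vendored helper: tag each frame with its stream.
    if stream_id == STDOUT:
        return (data, None)
    elif stream_id == STDERR:
        return (None, data)
    raise ValueError('{0} is not a valid stream'.format(stream_id))


def consume_socket_output(frames, demux=False):
    if not demux:
        # Multiplexed: just concatenate the raw chunks.
        return b''.join(frames)
    # Demultiplexed: accumulate per stream; exactly one side of each
    # (stdout, stderr) tuple is not None.
    out = [None, None]
    for stdout, stderr in frames:
        if stdout is not None:
            out[0] = stdout if out[0] is None else out[0] + stdout
        else:
            out[1] = stderr if out[1] is None else out[1] + stderr
    return tuple(out)


raw = [(STDOUT, b'hello '), (STDERR, b'oops '), (STDOUT, b'world')]
print(consume_socket_output([data for _, data in raw]))
print(consume_socket_output(
    [demux_adaptor(sid, data) for sid, data in raw], demux=True))
```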
514 plugins/module_utils/_api/utils/utils.py Normal file
@@ -0,0 +1,514 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import base64
import json
import os
import os.path
import shlex
import string
from datetime import datetime

from ansible_collections.community.docker.plugins.module_utils.version import StrictVersion

from ansible.module_utils.six import PY2, PY3, binary_type, integer_types, iteritems, string_types, text_type

from .. import errors
from .. import tls
from ..constants import DEFAULT_HTTP_HOST
from ..constants import DEFAULT_UNIX_SOCKET
from ..constants import DEFAULT_NPIPE
from ..constants import BYTE_UNITS

if PY2:
    from urllib import splitnport
    from urlparse import urlparse
else:
    from urllib.parse import splitnport, urlparse


def create_ipam_pool(*args, **kwargs):
    raise errors.DeprecatedMethod(
        'utils.create_ipam_pool has been removed. Please use a '
        'docker.types.IPAMPool object instead.'
    )


def create_ipam_config(*args, **kwargs):
    raise errors.DeprecatedMethod(
        'utils.create_ipam_config has been removed. Please use a '
        'docker.types.IPAMConfig object instead.'
    )


def decode_json_header(header):
    data = base64.b64decode(header)
    if PY3:
        data = data.decode('utf-8')
    return json.loads(data)


def compare_version(v1, v2):
    """Compare docker versions

    >>> v1 = '1.9'
    >>> v2 = '1.10'
    >>> compare_version(v1, v2)
    1
    >>> compare_version(v2, v1)
    -1
    >>> compare_version(v2, v2)
    0
    """
    s1 = StrictVersion(v1)
    s2 = StrictVersion(v2)
    if s1 == s2:
        return 0
    elif s1 > s2:
        return -1
    else:
        return 1


def version_lt(v1, v2):
    return compare_version(v1, v2) > 0


def version_gte(v1, v2):
    return not version_lt(v1, v2)


def _convert_port_binding(binding):
    result = {'HostIp': '', 'HostPort': ''}
    if isinstance(binding, tuple):
        if len(binding) == 2:
            result['HostPort'] = binding[1]
            result['HostIp'] = binding[0]
        elif isinstance(binding[0], string_types):
            result['HostIp'] = binding[0]
        else:
            result['HostPort'] = binding[0]
    elif isinstance(binding, dict):
        if 'HostPort' in binding:
            result['HostPort'] = binding['HostPort']
            if 'HostIp' in binding:
                result['HostIp'] = binding['HostIp']
        else:
            raise ValueError(binding)
    else:
        result['HostPort'] = binding

    if result['HostPort'] is None:
        result['HostPort'] = ''
    else:
        result['HostPort'] = str(result['HostPort'])

    return result


def convert_port_bindings(port_bindings):
    result = {}
    for k, v in iteritems(port_bindings):
        key = str(k)
        if '/' not in key:
            key += '/tcp'
        if isinstance(v, list):
            result[key] = [_convert_port_binding(binding) for binding in v]
        else:
            result[key] = [_convert_port_binding(v)]
    return result

def convert_volume_binds(binds):
    if isinstance(binds, list):
        return binds

    result = []
    for k, v in binds.items():
        if isinstance(k, binary_type):
            k = k.decode('utf-8')

        if isinstance(v, dict):
            if 'ro' in v and 'mode' in v:
                raise ValueError(
                    'Binding cannot contain both "ro" and "mode": {0}'
                    .format(repr(v))
                )

            bind = v['bind']
            if isinstance(bind, binary_type):
                bind = bind.decode('utf-8')

            if 'ro' in v:
                mode = 'ro' if v['ro'] else 'rw'
            elif 'mode' in v:
                mode = v['mode']
            else:
                mode = 'rw'

            result.append(
                text_type('{0}:{1}:{2}').format(k, bind, mode)
            )
        else:
            if isinstance(v, binary_type):
                v = v.decode('utf-8')
            result.append(
                text_type('{0}:{1}:rw').format(k, v)
            )
    return result


def convert_tmpfs_mounts(tmpfs):
    if isinstance(tmpfs, dict):
        return tmpfs

    if not isinstance(tmpfs, list):
        raise ValueError(
            'Expected tmpfs value to be either a list or a dict, found: {0}'
            .format(type(tmpfs).__name__)
        )

    result = {}
    for mount in tmpfs:
        if isinstance(mount, string_types):
            if ":" in mount:
                name, options = mount.split(":", 1)
            else:
                name = mount
                options = ""

        else:
            raise ValueError(
                "Expected item in tmpfs list to be a string, found: {0}"
                .format(type(mount).__name__)
            )

        result[name] = options
    return result


def convert_service_networks(networks):
    if not networks:
        return networks
    if not isinstance(networks, list):
        raise TypeError('networks parameter must be a list.')

    result = []
    for n in networks:
        if isinstance(n, string_types):
            n = {'Target': n}
        result.append(n)
    return result


def parse_repository_tag(repo_name):
    parts = repo_name.rsplit('@', 1)
    if len(parts) == 2:
        return tuple(parts)
    parts = repo_name.rsplit(':', 1)
    if len(parts) == 2 and '/' not in parts[1]:
        return tuple(parts)
    return repo_name, None
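`parse_repository_tag` splits an image reference so that a digest wins over a tag, and a `:` that belongs to a registry port is not mistaken for a tag separator. A standalone copy of the function makes the three cases concrete:

```python
def parse_repository_tag(repo_name):
    # Copy of the vendored function: "@digest" takes priority, and a ":"
    # followed by a "/" (a registry port) is not treated as a tag.
    parts = repo_name.rsplit('@', 1)
    if len(parts) == 2:
        return tuple(parts)
    parts = repo_name.rsplit(':', 1)
    if len(parts) == 2 and '/' not in parts[1]:
        return tuple(parts)
    return repo_name, None


print(parse_repository_tag('ubuntu:20.04'))           # plain tag
print(parse_repository_tag('registry:5000/ubuntu'))   # port, no tag
print(parse_repository_tag('ubuntu@sha256:abc123'))   # digest
```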
|
||||
|
||||
def parse_host(addr, is_win32=False, tls=False):
    path = ''
    port = None
    host = None

    # Sensible defaults
    if not addr and is_win32:
        return DEFAULT_NPIPE
    if not addr or addr.strip() == 'unix://':
        return DEFAULT_UNIX_SOCKET

    addr = addr.strip()

    parsed_url = urlparse(addr)
    proto = parsed_url.scheme
    if not proto or any(x not in string.ascii_letters + '+' for x in proto):
        # https://bugs.python.org/issue754016
        parsed_url = urlparse('//' + addr, 'tcp')
        proto = 'tcp'

    if proto == 'fd':
        raise errors.DockerException('fd protocol is not implemented')

    # These protos are valid aliases for our library but not for the
    # official spec
    if proto == 'http' or proto == 'https':
        tls = proto == 'https'
        proto = 'tcp'
    elif proto == 'http+unix':
        proto = 'unix'

    if proto not in ('tcp', 'unix', 'npipe', 'ssh'):
        raise errors.DockerException(
            "Invalid bind address protocol: {0}".format(addr)
        )

    if proto == 'tcp' and not parsed_url.netloc:
        # "tcp://" is exceptionally disallowed by convention;
        # omitting a hostname for other protocols is fine
        raise errors.DockerException(
            'Invalid bind address format: {0}'.format(addr)
        )

    if any([
        parsed_url.params, parsed_url.query, parsed_url.fragment,
        parsed_url.password
    ]):
        raise errors.DockerException(
            'Invalid bind address format: {0}'.format(addr)
        )

    if parsed_url.path and proto == 'ssh':
        raise errors.DockerException(
            'Invalid bind address format: no path allowed for this protocol:'
            ' {0}'.format(addr)
        )
    else:
        path = parsed_url.path
        if proto == 'unix' and parsed_url.hostname is not None:
            # For legacy reasons, we consider unix://path
            # to be valid and equivalent to unix:///path
            path = '/'.join((parsed_url.hostname, path))

    if proto in ('tcp', 'ssh'):
        # parsed_url.hostname strips brackets from IPv6 addresses,
        # which can be problematic hence our use of splitnport() instead.
        host, port = splitnport(parsed_url.netloc)
        if port is None or port < 0:
            if proto != 'ssh':
                raise errors.DockerException(
                    'Invalid bind address format: port is required:'
                    ' {0}'.format(addr)
                )
            port = 22

        if not host:
            host = DEFAULT_HTTP_HOST

    # Rewrite schemes to fit library internals (requests adapters)
    if proto == 'tcp':
        proto = 'http{0}'.format('s' if tls else '')
    elif proto == 'unix':
        proto = 'http+unix'

    if proto in ('http+unix', 'npipe'):
        return "{0}://{1}".format(proto, path).rstrip('/')
    return '{0}://{1}:{2}{3}'.format(proto, host, port, path).rstrip('/')


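The scheme-defaulting trick above (prefixing `//` so a bare `host:port` parses as a netloc) can be sketched in isolation. `parse_addr` is a hypothetical helper name, and it uses `urlparse(...).port` instead of the `splitnport()` call the real code needs for bracketed IPv6 addresses:

```python
import string
from urllib.parse import urlparse

def parse_addr(addr):
    parsed = urlparse(addr)
    proto = parsed.scheme
    if not proto or any(c not in string.ascii_letters + '+' for c in proto):
        # A bare "host:port" may be misread as scheme:path
        # (https://bugs.python.org/issue754016); force a tcp netloc.
        parsed = urlparse('//' + addr, 'tcp')
        proto = 'tcp'
    return proto, parsed.hostname, parsed.port

print(parse_addr('127.0.0.1:2376'))
print(parse_addr('tcp://example.com:2375'))
```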
def parse_devices(devices):
    device_list = []
    for device in devices:
        if isinstance(device, dict):
            device_list.append(device)
            continue
        if not isinstance(device, string_types):
            raise errors.DockerException(
                'Invalid device type {0}'.format(type(device))
            )
        device_mapping = device.split(':')
        if device_mapping:
            path_on_host = device_mapping[0]
            if len(device_mapping) > 1:
                path_in_container = device_mapping[1]
            else:
                path_in_container = path_on_host
            if len(device_mapping) > 2:
                permissions = device_mapping[2]
            else:
                permissions = 'rwm'
            device_list.append({
                'PathOnHost': path_on_host,
                'PathInContainer': path_in_container,
                'CgroupPermissions': permissions
            })
    return device_list


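The `host[:container[:permissions]]` expansion above reduces to a small standalone rule (`expand_device` is a hypothetical name for illustration):

```python
def expand_device(device):
    # Missing parts default to: container path = host path, permissions = 'rwm'.
    mapping = device.split(':')
    path_on_host = mapping[0]
    path_in_container = mapping[1] if len(mapping) > 1 else path_on_host
    permissions = mapping[2] if len(mapping) > 2 else 'rwm'
    return {
        'PathOnHost': path_on_host,
        'PathInContainer': path_in_container,
        'CgroupPermissions': permissions,
    }

print(expand_device('/dev/sda'))
print(expand_device('/dev/sda:/dev/xvda:r'))
```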
def kwargs_from_env(ssl_version=None, assert_hostname=None, environment=None):
    if not environment:
        environment = os.environ
    host = environment.get('DOCKER_HOST')

    # empty string for cert path is the same as unset.
    cert_path = environment.get('DOCKER_CERT_PATH') or None

    # empty string for tls verify counts as "false".
    # Any other value counts as true; unset counts as false.
    tls_verify = environment.get('DOCKER_TLS_VERIFY')
    if tls_verify == '':
        tls_verify = False
    else:
        tls_verify = tls_verify is not None
    enable_tls = cert_path or tls_verify

    params = {}

    if host:
        params['base_url'] = host

    if not enable_tls:
        return params

    if not cert_path:
        cert_path = os.path.join(os.path.expanduser('~'), '.docker')

    if not tls_verify and assert_hostname is None:
        # assert_hostname is a subset of TLS verification,
        # so if it's not set already then set it to false.
        assert_hostname = False

    params['tls'] = tls.TLSConfig(
        client_cert=(os.path.join(cert_path, 'cert.pem'),
                     os.path.join(cert_path, 'key.pem')),
        ca_cert=os.path.join(cert_path, 'ca.pem'),
        verify=tls_verify,
        ssl_version=ssl_version,
        assert_hostname=assert_hostname,
    )

    return params


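The `DOCKER_TLS_VERIFY` semantics above are easy to get wrong: an empty string means false, unset also means false, and any other value means true. A minimal standalone sketch (`env_tls_flags` is a hypothetical helper name):

```python
def env_tls_flags(environment):
    # Empty string for the cert path is the same as unset.
    cert_path = environment.get('DOCKER_CERT_PATH') or None
    tls_verify = environment.get('DOCKER_TLS_VERIFY')
    # '' -> False, unset -> False, any other value -> True
    tls_verify = False if tls_verify == '' else tls_verify is not None
    return cert_path, tls_verify, bool(cert_path or tls_verify)

print(env_tls_flags({'DOCKER_TLS_VERIFY': '1'}))   # (None, True, True)
print(env_tls_flags({'DOCKER_TLS_VERIFY': ''}))    # (None, False, False)
```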
def convert_filters(filters):
    result = {}
    for k, v in iteritems(filters):
        if isinstance(v, bool):
            v = 'true' if v else 'false'
        if not isinstance(v, list):
            v = [v, ]
        result[k] = [
            str(item) if not isinstance(item, string_types) else item
            for item in v
        ]
    return json.dumps(result)


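The JSON shape produced above matches what the Engine API expects for the `filters` query parameter: a map from filter name to a list of strings. A standalone sketch, using plain dict iteration instead of `iteritems`:

```python
import json

def convert_filters(filters):
    result = {}
    for k, v in filters.items():
        if isinstance(v, bool):
            v = 'true' if v else 'false'
        if not isinstance(v, list):
            v = [v]
        result[k] = [str(item) for item in v]
    return json.dumps(result)

print(convert_filters({'dangling': True, 'label': ['a=b']}))
```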
def datetime_to_timestamp(dt):
    """Convert a UTC datetime to a Unix timestamp"""
    delta = dt - datetime.utcfromtimestamp(0)
    return delta.seconds + delta.days * 24 * 3600


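The conversion above avoids `datetime.timestamp()` (which interprets naive datetimes as local time) by subtracting the epoch directly; for example:

```python
from datetime import datetime

def datetime_to_timestamp(dt):
    # Seconds since the Unix epoch for a naive UTC datetime.
    delta = dt - datetime.utcfromtimestamp(0)
    return delta.seconds + delta.days * 24 * 3600

print(datetime_to_timestamp(datetime(1970, 1, 2)))   # 86400
print(datetime_to_timestamp(datetime(2000, 1, 1)))   # 946684800
```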
def parse_bytes(s):
    if isinstance(s, integer_types + (float,)):
        return s
    if len(s) == 0:
        return 0

    if s[-2:-1].isalpha() and s[-1].isalpha():
        if s[-1] == "b" or s[-1] == "B":
            s = s[:-1]
    units = BYTE_UNITS
    suffix = s[-1].lower()

    # Check if the variable is a string representation of an int
    # without a units part. Assuming that the units are bytes.
    if suffix.isdigit():
        digits_part = s
        suffix = 'b'
    else:
        digits_part = s[:-1]

    if suffix in units.keys() or suffix.isdigit():
        try:
            digits = float(digits_part)
        except ValueError:
            raise errors.DockerException(
                'Failed converting the string value for memory ({0}) to'
                ' an integer.'.format(digits_part)
            )

        # Reconvert to long for the final result
        s = int(digits * units[suffix])
    else:
        raise errors.DockerException(
            'The specified value for memory ({0}) should specify the'
            ' units. The postfix should be one of the `b` `k` `m` `g`'
            ' characters'.format(s)
        )

    return s


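The suffix handling above (a trailing `b`/`B` is stripped first so both `2k` and `2kb` work, and a bare number counts as bytes) can be condensed into a standalone sketch without the error handling:

```python
BYTE_UNITS = {'b': 1, 'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}

def parse_bytes(s):
    if isinstance(s, (int, float)):
        return s
    if not s:
        return 0
    # '2kb' -> '2k': drop a trailing b/B when another letter precedes it.
    if s[-2:-1].isalpha() and s[-1] in ('b', 'B'):
        s = s[:-1]
    suffix = s[-1].lower()
    if suffix.isdigit():
        digits, suffix = s, 'b'   # bare number: assume bytes
    else:
        digits = s[:-1]
    return int(float(digits) * BYTE_UNITS[suffix])

print(parse_bytes('512m'))  # 536870912
print(parse_bytes('2kb'))   # 2048
```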
def normalize_links(links):
    if isinstance(links, dict):
        links = iteritems(links)

    return ['{0}:{1}'.format(k, v) if v else k for k, v in sorted(links)]


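The link normalization above accepts both a dict and a list of pairs, and sorting keeps the output deterministic. A standalone sketch:

```python
def normalize_links(links):
    # {'db': 'database'} or [('db', 'database')] -> sorted "name:alias"
    # strings; a falsy alias keeps just the name.
    if isinstance(links, dict):
        links = links.items()
    return ['{0}:{1}'.format(k, v) if v else k for k, v in sorted(links)]

print(normalize_links({'db': 'database', 'cache': None}))  # ['cache', 'db:database']
```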
def parse_env_file(env_file):
    """
    Reads a line-separated environment file.
    The format of each line should be "key=value".
    """
    environment = {}

    with open(env_file, 'r') as f:
        for line in f:

            if line[0] == '#':
                continue

            line = line.strip()
            if not line:
                continue

            parse_line = line.split('=', 1)
            if len(parse_line) == 2:
                k, v = parse_line
                environment[k] = v
            else:
                raise errors.DockerException(
                    'Invalid line in environment file {0}:\n{1}'.format(
                        env_file, line))

    return environment


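The per-line rule above can be tested without touching the filesystem by extracting it into a hypothetical `parse_env_lines` that works on any iterable of lines:

```python
def parse_env_lines(lines):
    # "KEY=value" per line; '#' comment lines and blank lines are skipped.
    environment = {}
    for line in lines:
        if line.startswith('#'):
            continue
        line = line.strip()
        if not line:
            continue
        key, sep, value = line.partition('=')
        if not sep:
            raise ValueError('Invalid line: {0}'.format(line))
        environment[key] = value
    return environment

print(parse_env_lines(['# comment', 'PATH=/usr/bin', '', 'DEBUG=1']))
```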
def split_command(command):
    if PY2 and not isinstance(command, binary_type):
        command = command.encode('utf-8')
    return shlex.split(command)


def format_environment(environment):
    def format_env(key, value):
        if value is None:
            return key
        if isinstance(value, binary_type):
            value = value.decode('utf-8')

        return u'{key}={value}'.format(key=key, value=value)
    return [format_env(*var) for var in iteritems(environment)]


def format_extra_hosts(extra_hosts, task=False):
    # Use format dictated by Swarm API if container is part of a task
    if task:
        return [
            '{0} {1}'.format(v, k) for k, v in sorted(iteritems(extra_hosts))
        ]

    return [
        '{0}:{1}'.format(k, v) for k, v in sorted(iteritems(extra_hosts))
    ]


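The two output shapes above differ only in separator and order (`IP hostname` for Swarm tasks, `hostname:IP` otherwise); a standalone copy for illustration:

```python
def format_extra_hosts(extra_hosts, task=False):
    if task:
        # Swarm task format: "IP hostname"
        return ['{0} {1}'.format(v, k) for k, v in sorted(extra_hosts.items())]
    # Container format: "hostname:IP"
    return ['{0}:{1}'.format(k, v) for k, v in sorted(extra_hosts.items())]

print(format_extra_hosts({'web': '10.0.0.2'}))             # ['web:10.0.0.2']
print(format_extra_hosts({'web': '10.0.0.2'}, task=True))  # ['10.0.0.2 web']
```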
def create_host_config(self, *args, **kwargs):
    raise errors.DeprecatedMethod(
        'utils.create_host_config has been removed. Please use a '
        'docker.types.HostConfig object instead.'
    )
582	plugins/module_utils/common_api.py	Normal file
@@ -0,0 +1,582 @@
# Copyright 2016 Red Hat | Ansible
# Copyright (c) 2022 Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


import abc
import os
import re

from ansible.module_utils.basic import AnsibleModule, env_fallback, missing_required_lib
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.parsing.convert_bool import BOOLEANS_TRUE, BOOLEANS_FALSE

from ansible_collections.community.docker.plugins.module_utils.version import LooseVersion

try:
    from requests.exceptions import RequestException, SSLError
except ImportError:
    # Define placeholder exception classes so that our code doesn't break
    # when requests is not installed.
    class RequestException(Exception):
        pass

    class SSLError(Exception):
        pass

from ansible_collections.community.docker.plugins.module_utils._api import auth
from ansible_collections.community.docker.plugins.module_utils._api.api.client import APIClient as Client
from ansible_collections.community.docker.plugins.module_utils._api.errors import (
    APIError,
    NotFound,
    MissingRequirementException,
    TLSParameterError,
)
from ansible_collections.community.docker.plugins.module_utils._api.tls import TLSConfig
from ansible_collections.community.docker.plugins.module_utils._api.utils.utils import (
    convert_filters,
    parse_repository_tag,
)

from ansible_collections.community.docker.plugins.module_utils.util import (
    DEFAULT_DOCKER_HOST,
    DEFAULT_TLS,
    DEFAULT_TLS_VERIFY,
    DEFAULT_TLS_HOSTNAME,
    DEFAULT_TIMEOUT_SECONDS,
    DOCKER_COMMON_ARGS,
    DOCKER_MUTUALLY_EXCLUSIVE,
    DOCKER_REQUIRED_TOGETHER,
    DEFAULT_DOCKER_REGISTRY,
    is_image_name_id,
    is_valid_tag,
    sanitize_result,
    update_tls_hostname,
)


def _get_tls_config(fail_function, **kwargs):
    try:
        tls_config = TLSConfig(**kwargs)
        return tls_config
    except TLSParameterError as exc:
        fail_function("TLS config error: %s" % exc)


def is_using_tls(auth_data):
    return auth_data['tls_verify'] or auth_data['tls']


def get_connect_params(auth_data, fail_function):
    if is_using_tls(auth_data):
        auth_data['docker_host'] = auth_data['docker_host'].replace('tcp://', 'https://')

    result = dict(
        base_url=auth_data['docker_host'],
        version=auth_data['api_version'],
        timeout=auth_data['timeout'],
    )

    if auth_data['tls_verify']:
        # TLS with verification
        tls_config = dict(
            verify=True,
            assert_hostname=auth_data['tls_hostname'],
            ssl_version=auth_data['ssl_version'],
            fail_function=fail_function,
        )
        if auth_data['cert_path'] and auth_data['key_path']:
            tls_config['client_cert'] = (auth_data['cert_path'], auth_data['key_path'])
        if auth_data['cacert_path']:
            tls_config['ca_cert'] = auth_data['cacert_path']
        result['tls'] = _get_tls_config(**tls_config)
    elif auth_data['tls']:
        # TLS without verification
        tls_config = dict(
            verify=False,
            ssl_version=auth_data['ssl_version'],
            fail_function=fail_function,
        )
        if auth_data['cert_path'] and auth_data['key_path']:
            tls_config['client_cert'] = (auth_data['cert_path'], auth_data['key_path'])
        result['tls'] = _get_tls_config(**tls_config)

    if auth_data.get('use_ssh_client'):
        result['use_ssh_client'] = True

    # No TLS
    return result


class AnsibleDockerClientBase(Client):
    def __init__(self, min_docker_api_version=None):
        self._connect_params = get_connect_params(self.auth_params, fail_function=self.fail)

        try:
            super(AnsibleDockerClientBase, self).__init__(**self._connect_params)
            self.docker_api_version_str = self.api_version
        except MissingRequirementException as exc:
            self.fail(missing_required_lib(exc.requirement), exception=exc.import_exception)
        except APIError as exc:
            self.fail("Docker API error: %s" % exc)
        except Exception as exc:
            self.fail("Error connecting: %s" % exc)

        self.docker_api_version = LooseVersion(self.docker_api_version_str)
        min_docker_api_version = min_docker_api_version or '1.25'
        if self.docker_api_version < LooseVersion(min_docker_api_version):
            self.fail('Docker API version is %s. Minimum version required is %s.' % (self.docker_api_version_str, min_docker_api_version))

    def log(self, msg, pretty_print=False):
        pass
        # if self.debug:
        #     log_file = open('docker.log', 'a')
        #     if pretty_print:
        #         log_file.write(json.dumps(msg, sort_keys=True, indent=4, separators=(',', ': ')))
        #         log_file.write(u'\n')
        #     else:
        #         log_file.write(msg + u'\n')

    @abc.abstractmethod
    def fail(self, msg, **kwargs):
        pass

    def deprecate(self, msg, version=None, date=None, collection_name=None):
        pass

    @staticmethod
    def _get_value(param_name, param_value, env_variable, default_value):
        if param_value is not None:
            # take module parameter value
            if param_value in BOOLEANS_TRUE:
                return True
            if param_value in BOOLEANS_FALSE:
                return False
            return param_value

        if env_variable is not None:
            env_value = os.environ.get(env_variable)
            if env_value is not None:
                # take the env variable value
                if param_name == 'cert_path':
                    return os.path.join(env_value, 'cert.pem')
                if param_name == 'cacert_path':
                    return os.path.join(env_value, 'ca.pem')
                if param_name == 'key_path':
                    return os.path.join(env_value, 'key.pem')
                if env_value in BOOLEANS_TRUE:
                    return True
                if env_value in BOOLEANS_FALSE:
                    return False
                return env_value

        # take the default
        return default_value

    @abc.abstractmethod
    def _get_params(self):
        pass

    @property
    def auth_params(self):
        # Get authentication credentials.
        # Precedence: module parameters -> environment variables -> defaults.

        self.log('Getting credentials')

        client_params = self._get_params()

        params = dict()
        for key in DOCKER_COMMON_ARGS:
            params[key] = client_params.get(key)

        result = dict(
            docker_host=self._get_value('docker_host', params['docker_host'], 'DOCKER_HOST',
                                        DEFAULT_DOCKER_HOST),
            tls_hostname=self._get_value('tls_hostname', params['tls_hostname'],
                                         'DOCKER_TLS_HOSTNAME', None),
            api_version=self._get_value('api_version', params['api_version'], 'DOCKER_API_VERSION',
                                        'auto'),
            cacert_path=self._get_value('cacert_path', params['ca_cert'], 'DOCKER_CERT_PATH', None),
            cert_path=self._get_value('cert_path', params['client_cert'], 'DOCKER_CERT_PATH', None),
            key_path=self._get_value('key_path', params['client_key'], 'DOCKER_CERT_PATH', None),
            ssl_version=self._get_value('ssl_version', params['ssl_version'], 'DOCKER_SSL_VERSION', None),
            tls=self._get_value('tls', params['tls'], 'DOCKER_TLS', DEFAULT_TLS),
            tls_verify=self._get_value('tls_verify', params['validate_certs'], 'DOCKER_TLS_VERIFY',
                                       DEFAULT_TLS_VERIFY),
            timeout=self._get_value('timeout', params['timeout'], 'DOCKER_TIMEOUT',
                                    DEFAULT_TIMEOUT_SECONDS),
            use_ssh_client=self._get_value('use_ssh_client', params['use_ssh_client'], None, False),
        )

        def depr(*args, **kwargs):
            self.deprecate(*args, **kwargs)

        update_tls_hostname(result, old_behavior=True, deprecate_function=depr, uses_tls=is_using_tls(result))

        return result

    def _handle_ssl_error(self, error):
        match = re.match(r"hostname.*doesn\'t match (\'.*\')", str(error))
        if match:
            self.fail("You asked for verification that the Docker daemon certificate's hostname matches %s. "
                      "The actual certificate's hostname is %s. Most likely you need to set DOCKER_TLS_HOSTNAME "
                      "or pass `tls_hostname` with a value of %s. You may also use TLS without verification by "
                      "setting the `tls` parameter to true."
                      % (self.auth_params['tls_hostname'], match.group(1), match.group(1)))
        self.fail("SSL Exception: %s" % (error))

    def get_container_by_id(self, container_id):
        try:
            self.log("Inspecting container Id %s" % container_id)
            result = self.get_json('/containers/{0}/json', container_id)
            self.log("Completed container inspection")
            return result
        except NotFound as dummy:
            return None
        except Exception as exc:
            self.fail("Error inspecting container: %s" % exc)

    def get_container(self, name=None):
        '''
        Lookup a container and return the inspection results.
        '''
        if name is None:
            return None

        search_name = name
        if not name.startswith('/'):
            search_name = '/' + name

        result = None
        try:
            params = {
                'limit': -1,
                'all': 1,
                'size': 0,
                'trunc_cmd': 0,
            }
            containers = self.get_json("/containers/json", params=params)
            for container in containers:
                self.log("testing container: %s" % (container['Names']))
                if isinstance(container['Names'], list) and search_name in container['Names']:
                    result = container
                    break
                if container['Id'].startswith(name):
                    result = container
                    break
                if container['Id'] == name:
                    result = container
                    break
        except SSLError as exc:
            self._handle_ssl_error(exc)
        except Exception as exc:
            self.fail("Error retrieving container list: %s" % exc)

        if result is None:
            return None

        return self.get_container_by_id(result['Id'])

    def get_network(self, name=None, network_id=None):
        '''
        Lookup a network and return the inspection results.
        '''
        if name is None and network_id is None:
            return None

        result = None

        if network_id is None:
            try:
                networks = self.get_json("/networks")
                for network in networks:
                    self.log("testing network: %s" % (network['Name']))
                    if name == network['Name']:
                        result = network
                        break
                    if network['Id'].startswith(name):
                        result = network
                        break
            except SSLError as exc:
                self._handle_ssl_error(exc)
            except Exception as exc:
                self.fail("Error retrieving network list: %s" % exc)

        if result is not None:
            network_id = result['Id']

        if network_id is not None:
            try:
                self.log("Inspecting network Id %s" % network_id)
                result = self.get_json('/networks/{0}', network_id)
                self.log("Completed network inspection")
            except NotFound as dummy:
                return None
            except Exception as exc:
                self.fail("Error inspecting network: %s" % exc)

        return result

    def _image_lookup(self, name, tag):
        '''
        Including a tag in the name parameter sent to the Docker SDK for Python images method
        does not work consistently. Instead, get the result set for name and manually check
        if the tag exists.
        '''
        try:
            params = {
                'only_ids': 0,
                'all': 0,
            }
            if LooseVersion(self.api_version) < LooseVersion('1.25'):
                # only use "filter" on API 1.24 and under, as it is deprecated
                params['filter'] = name
            else:
                params['filters'] = convert_filters({'reference': name})
            images = self.get_json("/images/json", params=params)
        except Exception as exc:
            self.fail("Error searching for image %s - %s" % (name, str(exc)))
        if tag:
            lookup = "%s:%s" % (name, tag)
            lookup_digest = "%s@%s" % (name, tag)
            response = images
            images = []
            for image in response:
                tags = image.get('RepoTags')
                digests = image.get('RepoDigests')
                if (tags and lookup in tags) or (digests and lookup_digest in digests):
                    images = [image]
                    break
        return images

    def find_image(self, name, tag):
        '''
        Lookup an image (by name and tag) and return the inspection results.
        '''
        if not name:
            return None

        self.log("Find image %s:%s" % (name, tag))
        images = self._image_lookup(name, tag)
        if not images:
            # In API <= 1.20 seeing 'docker.io/<name>' as the name of images pulled from docker hub
            registry, repo_name = auth.resolve_repository_name(name)
            if registry == 'docker.io':
                # If docker.io is explicitly there in name, the image
                # isn't found in some cases (#41509)
                self.log("Check for docker.io image: %s" % repo_name)
                images = self._image_lookup(repo_name, tag)
                if not images and repo_name.startswith('library/'):
                    # Sometimes library/xxx images are not found
                    lookup = repo_name[len('library/'):]
                    self.log("Check for docker.io image: %s" % lookup)
                    images = self._image_lookup(lookup, tag)
                if not images:
                    # Last case for some Docker versions: if docker.io wasn't there,
                    # it can be that the image wasn't found either
                    # (https://github.com/ansible/ansible/pull/15586)
                    lookup = "%s/%s" % (registry, repo_name)
                    self.log("Check for docker.io image: %s" % lookup)
                    images = self._image_lookup(lookup, tag)
                if not images and '/' not in repo_name:
                    # This seems to be happening with podman-docker
                    # (https://github.com/ansible-collections/community.docker/issues/291)
                    lookup = "%s/library/%s" % (registry, repo_name)
                    self.log("Check for docker.io image: %s" % lookup)
                    images = self._image_lookup(lookup, tag)

        if len(images) > 1:
            self.fail("Registry returned more than one result for %s:%s" % (name, tag))

        if len(images) == 1:
            try:
                return self.get_json('/images/{0}/json', images[0]['Id'])
            except NotFound:
                self.log("Image %s:%s not found." % (name, tag))
                return None
            except Exception as exc:
                self.fail("Error inspecting image %s:%s - %s" % (name, tag, str(exc)))

        self.log("Image %s:%s not found." % (name, tag))
        return None

    def find_image_by_id(self, image_id, accept_missing_image=False):
        '''
        Lookup an image (by ID) and return the inspection results.
        '''
        if not image_id:
            return None

        self.log("Find image %s (by ID)" % image_id)
        try:
            return self.get_json('/images/{0}/json', image_id)
        except NotFound as exc:
            if not accept_missing_image:
                self.fail("Error inspecting image ID %s - %s" % (image_id, str(exc)))
            self.log("Image %s not found." % image_id)
            return None
        except Exception as exc:
            self.fail("Error inspecting image ID %s - %s" % (image_id, str(exc)))

    def pull_image(self, name, tag="latest", platform=None):
        '''
        Pull an image
        '''
        self.log("Pulling image %s:%s" % (name, tag))
        old_tag = self.find_image(name, tag)
        try:
            repository, image_tag = parse_repository_tag(name)
            registry, repo_name = auth.resolve_repository_name(repository)
            params = {
                'tag': tag or image_tag or 'latest',
                'fromImage': repository,
            }
            if platform is not None:
                params['platform'] = platform

            headers = {}
            header = auth.get_config_header(self, registry)
            if header:
                headers['X-Registry-Auth'] = header

            response = self._post(
                self._url('/images/create'), params=params, headers=headers,
                stream=True, timeout=None
            )
            self._raise_for_status(response)
            for line in self._stream_helper(response, decode=True):
                self.log(line, pretty_print=True)
                if line.get('error'):
                    if line.get('errorDetail'):
                        error_detail = line.get('errorDetail')
                        self.fail("Error pulling %s - code: %s message: %s" % (name,
                                                                               error_detail.get('code'),
                                                                               error_detail.get('message')))
                    else:
                        self.fail("Error pulling %s - %s" % (name, line.get('error')))
        except Exception as exc:
            self.fail("Error pulling image %s:%s - %s" % (name, tag, str(exc)))

        new_tag = self.find_image(name, tag)

        return new_tag, old_tag == new_tag


class AnsibleDockerClient(AnsibleDockerClientBase):

    def __init__(self, argument_spec=None, supports_check_mode=False, mutually_exclusive=None,
                 required_together=None, required_if=None, required_one_of=None,
                 min_docker_api_version=None, option_minimal_versions=None,
                 option_minimal_versions_ignore_params=None, fail_results=None):

        # Modules can put information in here which will always be returned
        # in case client.fail() is called.
        self.fail_results = fail_results or {}

        merged_arg_spec = dict()
        merged_arg_spec.update(DOCKER_COMMON_ARGS)
        if argument_spec:
            merged_arg_spec.update(argument_spec)
        self.arg_spec = merged_arg_spec

        mutually_exclusive_params = []
        mutually_exclusive_params += DOCKER_MUTUALLY_EXCLUSIVE
        if mutually_exclusive:
            mutually_exclusive_params += mutually_exclusive

        required_together_params = []
        required_together_params += DOCKER_REQUIRED_TOGETHER
        if required_together:
            required_together_params += required_together

        self.module = AnsibleModule(
            argument_spec=merged_arg_spec,
            supports_check_mode=supports_check_mode,
            mutually_exclusive=mutually_exclusive_params,
            required_together=required_together_params,
            required_if=required_if,
            required_one_of=required_one_of,
        )

        self.debug = self.module.params.get('debug')
        self.check_mode = self.module.check_mode

        super(AnsibleDockerClient, self).__init__(min_docker_api_version=min_docker_api_version)

        if option_minimal_versions is not None:
            self._get_minimal_versions(option_minimal_versions, option_minimal_versions_ignore_params)

    def fail(self, msg, **kwargs):
        self.fail_results.update(kwargs)
        self.module.fail_json(msg=msg, **sanitize_result(self.fail_results))

    def deprecate(self, msg, version=None, date=None, collection_name=None):
        self.module.deprecate(msg, version=version, date=date, collection_name=collection_name)

    def _get_params(self):
        return self.module.params

    def _get_minimal_versions(self, option_minimal_versions, ignore_params=None):
        self.option_minimal_versions = dict()
        for option in self.module.argument_spec:
            if ignore_params is not None:
                if option in ignore_params:
                    continue
            self.option_minimal_versions[option] = dict()
        self.option_minimal_versions.update(option_minimal_versions)

        for option, data in self.option_minimal_versions.items():
            # Test whether option is supported, and store result
            support_docker_api = True
            if 'docker_api_version' in data:
                support_docker_api = self.docker_api_version >= LooseVersion(data['docker_api_version'])
            data['supported'] = support_docker_api
            # Fail if option is not supported but used
            if not data['supported']:
                # Test whether option is specified
                if 'detect_usage' in data:
                    used = data['detect_usage'](self)
                else:
                    used = self.module.params.get(option) is not None
                    if used and 'default' in self.module.argument_spec[option]:
                        used = self.module.params[option] != self.module.argument_spec[option]['default']
                if used:
                    # If the option is used, compose error message.
                    if 'usage_msg' in data:
                        usg = data['usage_msg']
                    else:
                        usg = 'set %s option' % (option, )
                    if not support_docker_api:
                        msg = 'Docker API version is %s. Minimum version required is %s to %s.'
                        msg = msg % (self.docker_api_version_str, data['docker_api_version'], usg)
                    else:
                        # should not happen
                        msg = 'Cannot %s with your configuration.' % (usg, )
                    self.fail(msg)

    def report_warnings(self, result, warnings_key=None):
        '''
        Checks result of client operation for warnings, and if present, outputs them.

        warnings_key should be a list of keys used to crawl the result dictionary.
        For example, if warnings_key == ['a', 'b'], the function will consider
        result['a']['b'] if these keys exist. If the result is a non-empty string, it
        will be reported as a warning. If the result is a list, every entry will be
        reported as a warning.

        In most cases (if warnings are returned at all), warnings_key should be
        ['Warnings'] or ['Warning']. The default value (if not specified) is ['Warnings'].
        '''
        if warnings_key is None:
            warnings_key = ['Warnings']
        for key in warnings_key:
            if not isinstance(result, Mapping):
                return
            result = result.get(key)
        if isinstance(result, Sequence):
            for warning in result:
                self.module.warn('Docker warning: {0}'.format(warning))
        elif isinstance(result, string_types) and result:
            self.module.warn('Docker warning: {0}'.format(result))
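The key-crawling behavior of `report_warnings` can be sketched standalone; `crawl_warnings` is a hypothetical variant that returns the warnings instead of calling `module.warn`:

```python
def crawl_warnings(result, warnings_key=None):
    # Walk result[k1][k2]... along warnings_key; collect list entries
    # or a single non-empty string as warnings.
    if warnings_key is None:
        warnings_key = ['Warnings']
    for key in warnings_key:
        if not isinstance(result, dict):
            return []
        result = result.get(key)
    if isinstance(result, (list, tuple)):
        return list(result)
    if isinstance(result, str) and result:
        return [result]
    return []

print(crawl_warnings({'Warnings': ['disk space low']}))  # ['disk space low']
print(crawl_warnings({}))                                # []
```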
39	plugins/plugin_utils/common_api.py	Normal file
@@ -0,0 +1,39 @@
# Copyright (c) 2019-2020, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type


from ansible.errors import AnsibleConnectionFailure
from ansible.utils.display import Display

from ansible_collections.community.docker.plugins.module_utils.common_api import (
    AnsibleDockerClientBase,
)

from ansible_collections.community.docker.plugins.module_utils.util import (
    DOCKER_COMMON_ARGS,
)


class AnsibleDockerClient(AnsibleDockerClientBase):
    def __init__(self, plugin, min_docker_api_version=None):
        self.plugin = plugin
        self.display = Display()
        super(AnsibleDockerClient, self).__init__(
            min_docker_api_version=min_docker_api_version)

    def fail(self, msg, **kwargs):
        if kwargs:
            msg += '\nContext:\n' + '\n'.join('  {0} = {1!r}'.format(k, v) for (k, v) in kwargs.items())
        raise AnsibleConnectionFailure(msg)

    def deprecate(self, msg, version=None, date=None, collection_name=None):
        self.display.deprecated(msg, version=version, date=date, collection_name=collection_name)

    def _get_params(self):
        return dict([
            (option, self.plugin.get_option(option))
            for option in DOCKER_COMMON_ARGS
        ])
701	tests/unit/plugins/module_utils/_api/api/test_client.py	Normal file
@@ -0,0 +1,701 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import datetime
import io
import json
import os
import re
import shutil
import socket
import struct
import tempfile
import threading
import time
import unittest
import sys

from ansible.module_utils import six

import pytest
import requests

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api import constants, errors
from ansible_collections.community.docker.plugins.module_utils._api.api.client import APIClient
from ansible_collections.community.docker.plugins.module_utils._api.constants import DEFAULT_DOCKER_API_VERSION
from ansible_collections.community.docker.plugins.module_utils._api.utils.utils import convert_filters
from requests.packages import urllib3

from .. import fake_api

try:
    from unittest import mock
except ImportError:
    import mock


DEFAULT_TIMEOUT_SECONDS = constants.DEFAULT_TIMEOUT_SECONDS


def response(status_code=200, content='', headers=None, reason=None, elapsed=0,
             request=None, raw=None):
    res = requests.Response()
    res.status_code = status_code
    if not isinstance(content, six.binary_type):
        content = json.dumps(content).encode('ascii')
    res._content = content
    res.headers = requests.structures.CaseInsensitiveDict(headers or {})
    res.reason = reason
    res.elapsed = datetime.timedelta(elapsed)
    res.request = request
    res.raw = raw
    return res


def fake_resolve_authconfig(authconfig, registry=None, *args, **kwargs):
    return None


def fake_inspect_container(self, container, tty=False):
    return fake_api.get_fake_inspect_container(tty=tty)[1]


def fake_resp(method, url, *args, **kwargs):
    key = None
    if url in fake_api.fake_responses:
        key = url
    elif (url, method) in fake_api.fake_responses:
        key = (url, method)
    if not key:
        raise Exception('{method} {url}'.format(method=method, url=url))
    status_code, content = fake_api.fake_responses[key]()
    return response(status_code=status_code, content=content)


fake_request = mock.Mock(side_effect=fake_resp)


def fake_get(self, url, *args, **kwargs):
    return fake_request('GET', url, *args, **kwargs)


def fake_post(self, url, *args, **kwargs):
    return fake_request('POST', url, *args, **kwargs)


def fake_put(self, url, *args, **kwargs):
    return fake_request('PUT', url, *args, **kwargs)


def fake_delete(self, url, *args, **kwargs):
    return fake_request('DELETE', url, *args, **kwargs)


def fake_read_from_socket(self, response, stream, tty=False, demux=False):
    return six.binary_type()


url_base = '{prefix}/'.format(prefix=fake_api.prefix)
url_prefix = '{0}v{1}/'.format(
    url_base,
    constants.DEFAULT_DOCKER_API_VERSION)

class BaseAPIClientTest(unittest.TestCase):
    def setUp(self):
        self.patcher = mock.patch.multiple(
            'ansible_collections.community.docker.plugins.module_utils._api.api.client.APIClient',
            get=fake_get,
            post=fake_post,
            put=fake_put,
            delete=fake_delete,
            _read_from_socket=fake_read_from_socket
        )
        self.patcher.start()
        self.client = APIClient(version=DEFAULT_DOCKER_API_VERSION)

    def tearDown(self):
        self.client.close()
        self.patcher.stop()

    def base_create_payload(self, img='busybox', cmd=None):
        if not cmd:
            cmd = ['true']
        return {"Tty": False, "Image": img, "Cmd": cmd,
                "AttachStdin": False,
                "AttachStderr": True, "AttachStdout": True,
                "StdinOnce": False,
                "OpenStdin": False, "NetworkDisabled": False,
                }

class DockerApiTest(BaseAPIClientTest):
    def test_ctor(self):
        with pytest.raises(errors.DockerException) as excinfo:
            APIClient(version=1.12)

        assert str(
            excinfo.value
        ) == 'Version parameter must be a string or None. Found float'

    def test_url_valid_resource(self):
        url = self.client._url('/hello/{0}/world', 'somename')
        assert url == '{0}{1}'.format(url_prefix, 'hello/somename/world')

        url = self.client._url(
            '/hello/{0}/world/{1}', 'somename', 'someothername'
        )
        assert url == '{0}{1}'.format(
            url_prefix, 'hello/somename/world/someothername'
        )

        url = self.client._url('/hello/{0}/world', 'some?name')
        assert url == '{0}{1}'.format(url_prefix, 'hello/some%3Fname/world')

        url = self.client._url("/images/{0}/push", "localhost:5000/image")
        assert url == '{0}{1}'.format(
            url_prefix, 'images/localhost:5000/image/push'
        )

    def test_url_invalid_resource(self):
        with pytest.raises(ValueError):
            self.client._url('/hello/{0}/world', ['sakuya', 'izayoi'])

    def test_url_no_resource(self):
        url = self.client._url('/simple')
        assert url == '{0}{1}'.format(url_prefix, 'simple')

    def test_url_unversioned_api(self):
        url = self.client._url(
            '/hello/{0}/world', 'somename', versioned_api=False
        )
        assert url == '{0}{1}'.format(url_base, 'hello/somename/world')

    def test_version(self):
        self.client.version()

        fake_request.assert_called_with(
            'GET',
            url_prefix + 'version',
            timeout=DEFAULT_TIMEOUT_SECONDS
        )

    def test_version_no_api_version(self):
        self.client.version(False)

        fake_request.assert_called_with(
            'GET',
            url_base + 'version',
            timeout=DEFAULT_TIMEOUT_SECONDS
        )

    def test_retrieve_server_version(self):
        client = APIClient(version="auto")
        assert isinstance(client._version, six.string_types)
        assert not (client._version == "auto")
        client.close()

    def test_auto_retrieve_server_version(self):
        version = self.client._retrieve_server_version()
        assert isinstance(version, six.string_types)

    def test_info(self):
        self.client.info()

        fake_request.assert_called_with(
            'GET',
            url_prefix + 'info',
            timeout=DEFAULT_TIMEOUT_SECONDS
        )

    def test_search(self):
        self.client.get_json('/images/search', params={'term': 'busybox'})

        fake_request.assert_called_with(
            'GET',
            url_prefix + 'images/search',
            params={'term': 'busybox'},
            timeout=DEFAULT_TIMEOUT_SECONDS
        )

    def test_login(self):
        self.client.login('sakuya', 'izayoi')
        args = fake_request.call_args
        assert args[0][0] == 'POST'
        assert args[0][1] == url_prefix + 'auth'
        assert json.loads(args[1]['data']) == {
            'username': 'sakuya', 'password': 'izayoi'
        }
        assert args[1]['headers'] == {'Content-Type': 'application/json'}
        assert self.client._auth_configs.auths['docker.io'] == {
            'email': None,
            'password': 'izayoi',
            'username': 'sakuya',
            'serveraddress': None,
        }

    def test_events(self):
        self.client.events()

        fake_request.assert_called_with(
            'GET',
            url_prefix + 'events',
            params={'since': None, 'until': None, 'filters': None},
            stream=True,
            timeout=None
        )

    def test_events_with_since_until(self):
        ts = 1356048000
        now = datetime.datetime.utcfromtimestamp(ts)
        since = now - datetime.timedelta(seconds=10)
        until = now + datetime.timedelta(seconds=10)

        self.client.events(since=since, until=until)

        fake_request.assert_called_with(
            'GET',
            url_prefix + 'events',
            params={
                'since': ts - 10,
                'until': ts + 10,
                'filters': None
            },
            stream=True,
            timeout=None
        )

    def test_events_with_filters(self):
        filters = {'event': ['die', 'stop'],
                   'container': fake_api.FAKE_CONTAINER_ID}

        self.client.events(filters=filters)

        expected_filters = convert_filters(filters)
        fake_request.assert_called_with(
            'GET',
            url_prefix + 'events',
            params={
                'since': None,
                'until': None,
                'filters': expected_filters
            },
            stream=True,
            timeout=None
        )

    def _socket_path_for_client_session(self, client):
        socket_adapter = client.get_adapter('http+docker://')
        return socket_adapter.socket_path

    def test_url_compatibility_unix(self):
        c = APIClient(
            base_url="unix://socket",
            version=DEFAULT_DOCKER_API_VERSION)

        assert self._socket_path_for_client_session(c) == '/socket'

    def test_url_compatibility_unix_triple_slash(self):
        c = APIClient(
            base_url="unix:///socket",
            version=DEFAULT_DOCKER_API_VERSION)

        assert self._socket_path_for_client_session(c) == '/socket'

    def test_url_compatibility_http_unix_triple_slash(self):
        c = APIClient(
            base_url="http+unix:///socket",
            version=DEFAULT_DOCKER_API_VERSION)

        assert self._socket_path_for_client_session(c) == '/socket'

    def test_url_compatibility_http(self):
        c = APIClient(
            base_url="http://hostname:1234",
            version=DEFAULT_DOCKER_API_VERSION)

        assert c.base_url == "http://hostname:1234"

    def test_url_compatibility_tcp(self):
        c = APIClient(
            base_url="tcp://hostname:1234",
            version=DEFAULT_DOCKER_API_VERSION)

        assert c.base_url == "http://hostname:1234"

    def test_remove_link(self):
        self.client.delete_call('/containers/{0}', '3cc2351ab11b', params={'v': False, 'link': True, 'force': False})

        fake_request.assert_called_with(
            'DELETE',
            url_prefix + 'containers/3cc2351ab11b',
            params={'v': False, 'link': True, 'force': False},
            timeout=DEFAULT_TIMEOUT_SECONDS
        )

    def test_stream_helper_decoding(self):
        status_code, content = fake_api.fake_responses[url_prefix + 'events']()
        content_str = json.dumps(content)
        if six.PY3:
            content_str = content_str.encode('utf-8')
        body = io.BytesIO(content_str)

        # mock a stream interface
        raw_resp = urllib3.HTTPResponse(body=body)
        setattr(raw_resp._fp, 'chunked', True)
        setattr(raw_resp._fp, 'chunk_left', len(body.getvalue()) - 1)

        # pass `decode=False` to the helper
        raw_resp._fp.seek(0)
        resp = response(status_code=status_code, content=content, raw=raw_resp)
        result = next(self.client._stream_helper(resp))
        assert result == content_str

        # pass `decode=True` to the helper
        raw_resp._fp.seek(0)
        resp = response(status_code=status_code, content=content, raw=raw_resp)
        result = next(self.client._stream_helper(resp, decode=True))
        assert result == content

        # non-chunked response, pass `decode=False` to the helper
        setattr(raw_resp._fp, 'chunked', False)
        raw_resp._fp.seek(0)
        resp = response(status_code=status_code, content=content, raw=raw_resp)
        result = next(self.client._stream_helper(resp))
        assert result == content_str.decode('utf-8')

        # non-chunked response, pass `decode=True` to the helper
        raw_resp._fp.seek(0)
        resp = response(status_code=status_code, content=content, raw=raw_resp)
        result = next(self.client._stream_helper(resp, decode=True))
        assert result == content

class UnixSocketStreamTest(unittest.TestCase):
    def setUp(self):
        socket_dir = tempfile.mkdtemp()
        self.build_context = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, socket_dir)
        self.addCleanup(shutil.rmtree, self.build_context)
        self.socket_file = os.path.join(socket_dir, 'test_sock.sock')
        self.server_socket = self._setup_socket()
        self.stop_server = False
        server_thread = threading.Thread(target=self.run_server)
        server_thread.setDaemon(True)
        server_thread.start()
        self.response = None
        self.request_handler = None
        self.addCleanup(server_thread.join)
        self.addCleanup(self.stop)

    def stop(self):
        self.stop_server = True

    def _setup_socket(self):
        server_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server_sock.bind(self.socket_file)
        # Non-blocking mode so that we can shut the test down easily
        server_sock.setblocking(0)
        server_sock.listen(5)
        return server_sock

    def run_server(self):
        try:
            while not self.stop_server:
                try:
                    connection, client_address = self.server_socket.accept()
                except socket.error:
                    # Probably no connection to accept yet
                    time.sleep(0.01)
                    continue

                connection.setblocking(1)
                try:
                    self.request_handler(connection)
                finally:
                    connection.close()
        finally:
            self.server_socket.close()

    def early_response_sending_handler(self, connection):
        data = b''
        headers = None

        connection.sendall(self.response)
        while not headers:
            data += connection.recv(2048)
            parts = data.split(b'\r\n\r\n', 1)
            if len(parts) == 2:
                headers, data = parts

        mo = re.search(r'Content-Length: ([0-9]+)', headers.decode())
        assert mo
        content_length = int(mo.group(1))

        while True:
            if len(data) >= content_length:
                break

            data += connection.recv(2048)

    @pytest.mark.skipif(
        constants.IS_WINDOWS_PLATFORM, reason='Unix only'
    )
    def test_early_stream_response(self):
        self.request_handler = self.early_response_sending_handler
        lines = []
        for i in range(0, 50):
            line = str(i).encode()
            lines += [('%x' % len(line)).encode(), line]
        lines.append(b'0')
        lines.append(b'')

        self.response = (
            b'HTTP/1.1 200 OK\r\n'
            b'Transfer-Encoding: chunked\r\n'
            b'\r\n'
        ) + b'\r\n'.join(lines)

        with APIClient(
                base_url="http+unix://" + self.socket_file,
                version=DEFAULT_DOCKER_API_VERSION) as client:
            for i in range(5):
                try:
                    params = {
                        't': None,
                        'remote': None,
                        'q': False,
                        'nocache': False,
                        'rm': False,
                        'forcerm': False,
                        'pull': False,
                        'dockerfile': 'Dockerfile',
                    }
                    headers = {'Content-Type': 'application/tar'}
                    data = b'...'
                    response = client._post(client._url('/build'), params=params, headers=headers, data=data, stream=True)
                    stream = client._stream_helper(response, decode=False)
                    break
                except requests.ConnectionError as e:
                    if i == 4:
                        raise e

            assert list(stream) == [
                str(i).encode() for i in range(50)
            ]

@pytest.mark.skip(
    'This test requires starting a networking server and tries to access it. '
    'This does not work with network separation with Docker-based unit tests, '
    'but it does work with podman-based unit tests.'
)
class TCPSocketStreamTest(unittest.TestCase):
    stdout_data = b'''
    Now, those children out there, they're jumping through the
    flames in the hope that the god of the fire will make them fruitful.
    Really, you can't blame them. After all, what girl would not prefer the
    child of a god to that of some acne-scarred artisan?
    '''
    stderr_data = b'''
    And what of the true God? To whose glory churches and monasteries have been
    built on these islands for generations past? Now shall what of Him?
    '''

    @classmethod
    def setup_class(cls):
        cls.server = six.moves.socketserver.ThreadingTCPServer(
            ('', 0), cls.get_handler_class())
        cls.thread = threading.Thread(target=cls.server.serve_forever)
        cls.thread.setDaemon(True)
        cls.thread.start()
        cls.address = 'http://{0}:{1}'.format(
            socket.gethostname(), cls.server.server_address[1])

    @classmethod
    def teardown_class(cls):
        cls.server.shutdown()
        cls.server.server_close()
        cls.thread.join()

    @classmethod
    def get_handler_class(cls):
        stdout_data = cls.stdout_data
        stderr_data = cls.stderr_data

        class Handler(six.moves.BaseHTTPServer.BaseHTTPRequestHandler, object):
            def do_POST(self):
                resp_data = self.get_resp_data()
                self.send_response(101)
                self.send_header(
                    'Content-Type', 'application/vnd.docker.raw-stream')
                self.send_header('Connection', 'Upgrade')
                self.send_header('Upgrade', 'tcp')
                self.end_headers()
                self.wfile.flush()
                time.sleep(0.2)
                self.wfile.write(resp_data)
                self.wfile.flush()

            def get_resp_data(self):
                path = self.path.split('/')[-1]
                if path == 'tty':
                    return stdout_data + stderr_data
                elif path == 'no-tty':
                    data = b''
                    data += self.frame_header(1, stdout_data)
                    data += stdout_data
                    data += self.frame_header(2, stderr_data)
                    data += stderr_data
                    return data
                else:
                    raise Exception('Unknown path {path}'.format(path=path))

            @staticmethod
            def frame_header(stream, data):
                return struct.pack('>BxxxL', stream, len(data))

        return Handler

    def request(self, stream=None, tty=None, demux=None):
        assert stream is not None and tty is not None and demux is not None
        with APIClient(
            base_url=self.address,
            version=DEFAULT_DOCKER_API_VERSION,
        ) as client:
            if tty:
                url = client._url('/tty')
            else:
                url = client._url('/no-tty')
            resp = client._post(url, stream=True)
            return client._read_from_socket(
                resp, stream=stream, tty=tty, demux=demux)

    def test_read_from_socket_tty(self):
        res = self.request(stream=True, tty=True, demux=False)
        assert next(res) == self.stdout_data + self.stderr_data
        with self.assertRaises(StopIteration):
            next(res)

    def test_read_from_socket_tty_demux(self):
        res = self.request(stream=True, tty=True, demux=True)
        assert next(res) == (self.stdout_data + self.stderr_data, None)
        with self.assertRaises(StopIteration):
            next(res)

    def test_read_from_socket_no_tty(self):
        res = self.request(stream=True, tty=False, demux=False)
        assert next(res) == self.stdout_data
        assert next(res) == self.stderr_data
        with self.assertRaises(StopIteration):
            next(res)

    def test_read_from_socket_no_tty_demux(self):
        res = self.request(stream=True, tty=False, demux=True)
        assert (self.stdout_data, None) == next(res)
        assert (None, self.stderr_data) == next(res)
        with self.assertRaises(StopIteration):
            next(res)

    def test_read_from_socket_no_stream_tty(self):
        res = self.request(stream=False, tty=True, demux=False)
        assert res == self.stdout_data + self.stderr_data

    def test_read_from_socket_no_stream_tty_demux(self):
        res = self.request(stream=False, tty=True, demux=True)
        assert res == (self.stdout_data + self.stderr_data, None)

    def test_read_from_socket_no_stream_no_tty(self):
        res = self.request(stream=False, tty=False, demux=False)
        assert res == self.stdout_data + self.stderr_data

    def test_read_from_socket_no_stream_no_tty_demux(self):
        res = self.request(stream=False, tty=False, demux=True)
        assert res == (self.stdout_data, self.stderr_data)

class UserAgentTest(unittest.TestCase):
    def setUp(self):
        self.patcher = mock.patch.object(
            APIClient,
            'send',
            return_value=fake_resp("GET", "%s/version" % fake_api.prefix)
        )
        self.mock_send = self.patcher.start()

    def tearDown(self):
        self.patcher.stop()

    def test_default_user_agent(self):
        client = APIClient(version=DEFAULT_DOCKER_API_VERSION)
        client.version()

        assert self.mock_send.call_count == 1
        headers = self.mock_send.call_args[0][0].headers
        expected = 'ansible-community.docker'
        assert headers['User-Agent'] == expected

    def test_custom_user_agent(self):
        client = APIClient(
            user_agent='foo/bar',
            version=DEFAULT_DOCKER_API_VERSION)
        client.version()

        assert self.mock_send.call_count == 1
        headers = self.mock_send.call_args[0][0].headers
        assert headers['User-Agent'] == 'foo/bar'


class DisableSocketTest(unittest.TestCase):
    class DummySocket:
        def __init__(self, timeout=60):
            self.timeout = timeout

        def settimeout(self, timeout):
            self.timeout = timeout

        def gettimeout(self):
            return self.timeout

    def setUp(self):
        self.client = APIClient(version=DEFAULT_DOCKER_API_VERSION)

    def test_disable_socket_timeout(self):
        """Test that the timeout is disabled on a generic socket object."""
        socket = self.DummySocket()

        self.client._disable_socket_timeout(socket)

        assert socket.timeout is None

    def test_disable_socket_timeout2(self):
        """Test that the timeouts are disabled on a generic socket object
        and its _sock object if present."""
        socket = self.DummySocket()
        socket._sock = self.DummySocket()

        self.client._disable_socket_timeout(socket)

        assert socket.timeout is None
        assert socket._sock.timeout is None

    def test_disable_socket_timeout_non_blocking(self):
        """Test that a non-blocking socket does not get set to blocking."""
        socket = self.DummySocket()
        socket._sock = self.DummySocket(0.0)

        self.client._disable_socket_timeout(socket)

        assert socket.timeout is None
        assert socket._sock.timeout == 0.0
667	tests/unit/plugins/module_utils/_api/fake_api.py	Normal file
@@ -0,0 +1,667 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible_collections.community.docker.plugins.module_utils._api import constants

from . import fake_stat

CURRENT_VERSION = 'v{api_version}'.format(api_version=constants.DEFAULT_DOCKER_API_VERSION)

FAKE_CONTAINER_ID = '3cc2351ab11b'
FAKE_IMAGE_ID = 'e9aa60c60128'
FAKE_EXEC_ID = 'd5d177f121dc'
FAKE_NETWORK_ID = '33fb6a3462b8'
FAKE_IMAGE_NAME = 'test_image'
FAKE_TARBALL_PATH = '/path/to/tarball'
FAKE_REPO_NAME = 'repo'
FAKE_TAG_NAME = 'tag'
FAKE_FILE_NAME = 'file'
FAKE_URL = 'myurl'
FAKE_PATH = '/path'
FAKE_VOLUME_NAME = 'perfectcherryblossom'
FAKE_NODE_ID = '24ifsmvkjbyhk'
FAKE_SECRET_ID = 'epdyrw4tsi03xy3deu8g8ly6o'
FAKE_SECRET_NAME = 'super_secret'

# Each method is prefixed with HTTP method (get, post...)
# for clarity and readability


def get_fake_version():
    status_code = 200
    response = {
        'ApiVersion': '1.35',
        'Arch': 'amd64',
        'BuildTime': '2018-01-10T20:09:37.000000000+00:00',
        'Components': [{
            'Details': {
                'ApiVersion': '1.35',
                'Arch': 'amd64',
                'BuildTime': '2018-01-10T20:09:37.000000000+00:00',
                'Experimental': 'false',
                'GitCommit': '03596f5',
                'GoVersion': 'go1.9.2',
                'KernelVersion': '4.4.0-112-generic',
                'MinAPIVersion': '1.12',
                'Os': 'linux'
            },
            'Name': 'Engine',
            'Version': '18.01.0-ce'
        }],
        'GitCommit': '03596f5',
        'GoVersion': 'go1.9.2',
        'KernelVersion': '4.4.0-112-generic',
        'MinAPIVersion': '1.12',
        'Os': 'linux',
        'Platform': {'Name': ''},
        'Version': '18.01.0-ce'
    }

    return status_code, response


def get_fake_info():
    status_code = 200
    response = {'Containers': 1, 'Images': 1, 'Debug': False,
                'MemoryLimit': False, 'SwapLimit': False,
                'IPv4Forwarding': True}
    return status_code, response


def post_fake_auth():
    status_code = 200
    response = {'Status': 'Login Succeeded',
                'IdentityToken': '9cbaf023786cd7'}
    return status_code, response


def get_fake_ping():
    return 200, "OK"


def get_fake_search():
    status_code = 200
    response = [{'Name': 'busybox', 'Description': 'Fake Description'}]
    return status_code, response


def get_fake_images():
    status_code = 200
    response = [{
        'Id': FAKE_IMAGE_ID,
        'Created': '2 days ago',
        'Repository': 'busybox',
        'RepoTags': ['busybox:latest', 'busybox:1.0'],
    }]
    return status_code, response


def get_fake_image_history():
    status_code = 200
    response = [
        {
            "Id": "b750fe79269d",
            "Created": 1364102658,
            "CreatedBy": "/bin/bash"
        },
        {
            "Id": "27cf78414709",
            "Created": 1364068391,
            "CreatedBy": ""
        }
    ]

    return status_code, response


def post_fake_import_image():
    status_code = 200
    response = 'Import messages...'

    return status_code, response


def get_fake_containers():
    status_code = 200
    response = [{
        'Id': FAKE_CONTAINER_ID,
        'Image': 'busybox:latest',
        'Created': '2 days ago',
        'Command': 'true',
        'Status': 'fake status'
    }]
    return status_code, response


def post_fake_start_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_resize_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_create_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def get_fake_inspect_container(tty=False):
    status_code = 200
    response = {
        'Id': FAKE_CONTAINER_ID,
        'Config': {'Labels': {'foo': 'bar'}, 'Privileged': True, 'Tty': tty},
        'ID': FAKE_CONTAINER_ID,
        'Image': 'busybox:latest',
        'Name': 'foobar',
        "State": {
            "Status": "running",
            "Running": True,
            "Pid": 0,
            "ExitCode": 0,
            "StartedAt": "2013-09-25T14:01:18.869545111+02:00",
            "Ghost": False
        },
        "HostConfig": {
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
        },
        "MacAddress": "02:42:ac:11:00:0a"
    }
    return status_code, response


def get_fake_inspect_image():
    status_code = 200
    response = {
        'Id': FAKE_IMAGE_ID,
        'Parent': "27cf784147099545",
        'Created': "2013-03-23T22:24:18.818426-07:00",
        'Container': FAKE_CONTAINER_ID,
        'Config': {'Labels': {'bar': 'foo'}},
        'ContainerConfig':
        {
            "Hostname": "",
            "User": "",
            "Memory": 0,
            "MemorySwap": 0,
            "AttachStdin": False,
            "AttachStdout": False,
            "AttachStderr": False,
            "PortSpecs": "",
            "Tty": True,
            "OpenStdin": True,
            "StdinOnce": False,
            "Env": "",
            "Cmd": ["/bin/bash"],
            "Dns": "",
            "Image": "base",
            "Volumes": "",
            "VolumesFrom": "",
            "WorkingDir": ""
        },
        'Size': 6823592
    }
    return status_code, response


def get_fake_insert_image():
    status_code = 200
    response = {'StatusCode': 0}
    return status_code, response


def get_fake_wait():
    status_code = 200
    response = {'StatusCode': 0}
    return status_code, response


def get_fake_logs():
    status_code = 200
    response = (b'\x01\x00\x00\x00\x00\x00\x00\x00'
                b'\x02\x00\x00\x00\x00\x00\x00\x00'
                b'\x01\x00\x00\x00\x00\x00\x00\x11Flowering Nights\n'
                b'\x01\x00\x00\x00\x00\x00\x00\x10(Sakuya Iyazoi)\n')
    return status_code, response


def get_fake_diff():
    status_code = 200
    response = [{'Path': '/test', 'Kind': 1}]
    return status_code, response


def get_fake_events():
    status_code = 200
    response = [{'status': 'stop', 'id': FAKE_CONTAINER_ID,
                 'from': FAKE_IMAGE_ID, 'time': 1423247867}]
    return status_code, response


def get_fake_export():
    status_code = 200
    response = 'Byte Stream....'
    return status_code, response


def post_fake_exec_create():
    status_code = 200
    response = {'Id': FAKE_EXEC_ID}
    return status_code, response


def post_fake_exec_start():
    status_code = 200
    response = (b'\x01\x00\x00\x00\x00\x00\x00\x11bin\nboot\ndev\netc\n'
                b'\x01\x00\x00\x00\x00\x00\x00\x12lib\nmnt\nproc\nroot\n'
                b'\x01\x00\x00\x00\x00\x00\x00\x0csbin\nusr\nvar\n')
    return status_code, response


def post_fake_exec_resize():
    status_code = 201
    return status_code, ''


def get_fake_exec_inspect():
    return 200, {
        'OpenStderr': True,
        'OpenStdout': True,
        'Container': get_fake_inspect_container()[1],
        'Running': False,
        'ProcessConfig': {
            'arguments': ['hello world'],
            'tty': False,
            'entrypoint': 'echo',
            'privileged': False,
            'user': ''
        },
        'ExitCode': 0,
        'ID': FAKE_EXEC_ID,
        'OpenStdin': False
    }


def post_fake_stop_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_kill_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_pause_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_unpause_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_restart_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_rename_container():
    status_code = 204
    return status_code, None


def delete_fake_remove_container():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
    return status_code, response


def post_fake_image_create():
    status_code = 200
    response = {'Id': FAKE_IMAGE_ID}
    return status_code, response


def delete_fake_remove_image():
    status_code = 200
    response = {'Id': FAKE_IMAGE_ID}
    return status_code, response


def get_fake_get_image():
    status_code = 200
    response = 'Byte Stream....'
    return status_code, response


def post_fake_load_image():
    status_code = 200
    response = {'Id': FAKE_IMAGE_ID}
    return status_code, response


def post_fake_commit():
    status_code = 200
    response = {'Id': FAKE_CONTAINER_ID}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def post_fake_push():
|
||||
status_code = 200
|
||||
response = {'Id': FAKE_IMAGE_ID}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def post_fake_build_container():
|
||||
status_code = 200
|
||||
response = {'Id': FAKE_CONTAINER_ID}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def post_fake_tag_image():
|
||||
status_code = 200
|
||||
response = {'Id': FAKE_IMAGE_ID}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def get_fake_stats():
|
||||
status_code = 200
|
||||
response = fake_stat.OBJ
|
||||
return status_code, response
|
||||
|
||||
|
||||
def get_fake_top():
|
||||
return 200, {
|
||||
'Processes': [
|
||||
[
|
||||
'root',
|
||||
'26501',
|
||||
'6907',
|
||||
'0',
|
||||
'10:32',
|
||||
'pts/55',
|
||||
'00:00:00',
|
||||
'sleep 60',
|
||||
],
|
||||
],
|
||||
'Titles': [
|
||||
'UID',
|
||||
'PID',
|
||||
'PPID',
|
||||
'C',
|
||||
'STIME',
|
||||
'TTY',
|
||||
'TIME',
|
||||
'CMD',
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
def get_fake_volume_list():
|
||||
status_code = 200
|
||||
response = {
|
||||
'Volumes': [
|
||||
{
|
||||
'Name': 'perfectcherryblossom',
|
||||
'Driver': 'local',
|
||||
'Mountpoint': '/var/lib/docker/volumes/perfectcherryblossom',
|
||||
'Scope': 'local'
|
||||
}, {
|
||||
'Name': 'subterraneananimism',
|
||||
'Driver': 'local',
|
||||
'Mountpoint': '/var/lib/docker/volumes/subterraneananimism',
|
||||
'Scope': 'local'
|
||||
}
|
||||
]
|
||||
}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def get_fake_volume():
|
||||
status_code = 200
|
||||
response = {
|
||||
'Name': 'perfectcherryblossom',
|
||||
'Driver': 'local',
|
||||
'Mountpoint': '/var/lib/docker/volumes/perfectcherryblossom',
|
||||
'Labels': {
|
||||
'com.example.some-label': 'some-value'
|
||||
},
|
||||
'Scope': 'local'
|
||||
}
|
||||
return status_code, response
|
||||
|
||||
|
||||
def fake_remove_volume():
|
||||
return 204, None
|
||||
|
||||
|
||||
def post_fake_update_container():
|
||||
return 200, {'Warnings': []}
|
||||
|
||||
|
||||
def post_fake_update_node():
|
||||
return 200, None
|
||||
|
||||
|
||||
def post_fake_join_swarm():
|
||||
return 200, None
|
||||
|
||||
|
||||
def get_fake_network_list():
|
||||
return 200, [{
|
||||
"Name": "bridge",
|
||||
"Id": FAKE_NETWORK_ID,
|
||||
"Scope": "local",
|
||||
"Driver": "bridge",
|
||||
"EnableIPv6": False,
|
||||
"Internal": False,
|
||||
"IPAM": {
|
||||
"Driver": "default",
|
||||
"Config": [
|
||||
{
|
||||
"Subnet": "172.17.0.0/16"
|
||||
}
|
||||
]
|
||||
},
|
||||
"Containers": {
|
||||
FAKE_CONTAINER_ID: {
|
||||
"EndpointID": "ed2419a97c1d99",
|
||||
"MacAddress": "02:42:ac:11:00:02",
|
||||
"IPv4Address": "172.17.0.2/16",
|
||||
"IPv6Address": ""
|
||||
}
|
||||
},
|
||||
"Options": {
|
||||
"com.docker.network.bridge.default_bridge": "true",
|
||||
"com.docker.network.bridge.enable_icc": "true",
|
||||
"com.docker.network.bridge.enable_ip_masquerade": "true",
|
||||
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
|
||||
"com.docker.network.bridge.name": "docker0",
|
||||
"com.docker.network.driver.mtu": "1500"
|
||||
}
|
||||
}]
|
||||
|
||||
|
||||
def get_fake_network():
|
||||
return 200, get_fake_network_list()[1][0]
|
||||
|
||||
|
||||
def post_fake_network():
|
||||
return 201, {"Id": FAKE_NETWORK_ID, "Warnings": []}
|
||||
|
||||
|
||||
def delete_fake_network():
|
||||
return 204, None
|
||||
|
||||
|
||||
def post_fake_network_connect():
|
||||
return 200, None
|
||||
|
||||
|
||||
def post_fake_network_disconnect():
|
||||
return 200, None
|
||||
|
||||
|
||||
def post_fake_secret():
|
||||
status_code = 200
|
||||
response = {'ID': FAKE_SECRET_ID}
|
||||
return status_code, response
|
||||
|
||||
|
||||
# Maps real api url to fake response callback
|
||||
prefix = 'http+docker://localhost'
|
||||
if constants.IS_WINDOWS_PLATFORM:
|
||||
prefix = 'http+docker://localnpipe'
|
||||
|
||||
fake_responses = {
|
||||
'{prefix}/version'.format(prefix=prefix):
|
||||
get_fake_version,
|
||||
'{prefix}/{CURRENT_VERSION}/version'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_version,
|
||||
'{prefix}/{CURRENT_VERSION}/info'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_info,
|
||||
'{prefix}/{CURRENT_VERSION}/auth'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_auth,
|
||||
'{prefix}/{CURRENT_VERSION}/_ping'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_ping,
|
||||
'{prefix}/{CURRENT_VERSION}/images/search'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_search,
|
||||
'{prefix}/{CURRENT_VERSION}/images/json'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_images,
|
||||
'{prefix}/{CURRENT_VERSION}/images/test_image/history'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_image_history,
|
||||
'{prefix}/{CURRENT_VERSION}/images/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_import_image,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/json'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_containers,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/start'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_start_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/resize'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_resize_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/json'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_inspect_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/rename'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_rename_container,
|
||||
'{prefix}/{CURRENT_VERSION}/images/e9aa60c60128/tag'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_tag_image,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/wait'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_wait,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/logs'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_logs,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/changes'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_diff,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/export'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_export,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/update'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_update_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/exec'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_exec_create,
|
||||
'{prefix}/{CURRENT_VERSION}/exec/d5d177f121dc/start'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_exec_start,
|
||||
'{prefix}/{CURRENT_VERSION}/exec/d5d177f121dc/json'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_exec_inspect,
|
||||
'{prefix}/{CURRENT_VERSION}/exec/d5d177f121dc/resize'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_exec_resize,
|
||||
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/stats'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_stats,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/top'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_top,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/stop'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_stop_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/kill'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_kill_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/pause'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_pause_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/unpause'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_unpause_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b/restart'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_restart_container,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/3cc2351ab11b'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
delete_fake_remove_container,
|
||||
'{prefix}/{CURRENT_VERSION}/images/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_image_create,
|
||||
'{prefix}/{CURRENT_VERSION}/images/e9aa60c60128'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
delete_fake_remove_image,
|
||||
'{prefix}/{CURRENT_VERSION}/images/e9aa60c60128/get'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_get_image,
|
||||
'{prefix}/{CURRENT_VERSION}/images/load'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_load_image,
|
||||
'{prefix}/{CURRENT_VERSION}/images/test_image/json'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_inspect_image,
|
||||
'{prefix}/{CURRENT_VERSION}/images/test_image/insert'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_insert_image,
|
||||
'{prefix}/{CURRENT_VERSION}/images/test_image/push'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_push,
|
||||
'{prefix}/{CURRENT_VERSION}/commit'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_commit,
|
||||
'{prefix}/{CURRENT_VERSION}/containers/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_create_container,
|
||||
'{prefix}/{CURRENT_VERSION}/build'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_build_container,
|
||||
'{prefix}/{CURRENT_VERSION}/events'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
get_fake_events,
|
||||
('{prefix}/{CURRENT_VERSION}/volumes'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION), 'GET'):
|
||||
get_fake_volume_list,
|
||||
('{prefix}/{CURRENT_VERSION}/volumes/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION), 'POST'):
|
||||
get_fake_volume,
|
||||
('{1}/{0}/volumes/{2}'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_VOLUME_NAME
|
||||
), 'GET'):
|
||||
get_fake_volume,
|
||||
('{1}/{0}/volumes/{2}'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_VOLUME_NAME
|
||||
), 'DELETE'):
|
||||
fake_remove_volume,
|
||||
('{1}/{0}/nodes/{2}/update?version=1'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_NODE_ID
|
||||
), 'POST'):
|
||||
post_fake_update_node,
|
||||
('{prefix}/{CURRENT_VERSION}/swarm/join'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION), 'POST'):
|
||||
post_fake_join_swarm,
|
||||
('{prefix}/{CURRENT_VERSION}/networks'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION), 'GET'):
|
||||
get_fake_network_list,
|
||||
('{prefix}/{CURRENT_VERSION}/networks/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION), 'POST'):
|
||||
post_fake_network,
|
||||
('{1}/{0}/networks/{2}'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_NETWORK_ID
|
||||
), 'GET'):
|
||||
get_fake_network,
|
||||
('{1}/{0}/networks/{2}'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_NETWORK_ID
|
||||
), 'DELETE'):
|
||||
delete_fake_network,
|
||||
('{1}/{0}/networks/{2}/connect'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_NETWORK_ID
|
||||
), 'POST'):
|
||||
post_fake_network_connect,
|
||||
('{1}/{0}/networks/{2}/disconnect'.format(
|
||||
CURRENT_VERSION, prefix, FAKE_NETWORK_ID
|
||||
), 'POST'):
|
||||
post_fake_network_disconnect,
|
||||
'{prefix}/{CURRENT_VERSION}/secrets/create'.format(prefix=prefix, CURRENT_VERSION=CURRENT_VERSION):
|
||||
post_fake_secret,
|
||||
}
144	tests/unit/plugins/module_utils/_api/fake_stat.py	Normal file
@@ -0,0 +1,144 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

OBJ = {
    "read": "2015-02-11T19:20:46.667237763+02:00",
    "network": {
        "rx_bytes": 567224,
        "rx_packets": 3773,
        "rx_errors": 0,
        "rx_dropped": 0,
        "tx_bytes": 1176,
        "tx_packets": 13,
        "tx_errors": 0,
        "tx_dropped": 0
    },
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 157260874053,
            "percpu_usage": [
                52196306950,
                24118413549,
                53292684398,
                27653469156
            ],
            "usage_in_kernelmode": 37140000000,
            "usage_in_usermode": 62140000000
        },
        "system_cpu_usage": 3.0881377e+14,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "memory_stats": {
        "usage": 179314688,
        "max_usage": 258166784,
        "stats": {
            "active_anon": 90804224,
            "active_file": 2195456,
            "cache": 3096576,
            "hierarchical_memory_limit": 1.844674407371e+19,
            "inactive_anon": 85516288,
            "inactive_file": 798720,
            "mapped_file": 2646016,
            "pgfault": 101034,
            "pgmajfault": 1207,
            "pgpgin": 115814,
            "pgpgout": 75613,
            "rss": 176218112,
            "rss_huge": 12582912,
            "total_active_anon": 90804224,
            "total_active_file": 2195456,
            "total_cache": 3096576,
            "total_inactive_anon": 85516288,
            "total_inactive_file": 798720,
            "total_mapped_file": 2646016,
            "total_pgfault": 101034,
            "total_pgmajfault": 1207,
            "total_pgpgin": 115814,
            "total_pgpgout": 75613,
            "total_rss": 176218112,
            "total_rss_huge": 12582912,
            "total_unevictable": 0,
            "total_writeback": 0,
            "unevictable": 0,
            "writeback": 0
        },
        "failcnt": 0,
        "limit": 8039038976
    },
    "blkio_stats": {
        "io_service_bytes_recursive": [
            {
                "major": 8,
                "minor": 0,
                "op": "Read",
                "value": 72843264
            }, {
                "major": 8,
                "minor": 0,
                "op": "Write",
                "value": 4096
            }, {
                "major": 8,
                "minor": 0,
                "op": "Sync",
                "value": 4096
            }, {
                "major": 8,
                "minor": 0,
                "op": "Async",
                "value": 72843264
            }, {
                "major": 8,
                "minor": 0,
                "op": "Total",
                "value": 72847360
            }
        ],
        "io_serviced_recursive": [
            {
                "major": 8,
                "minor": 0,
                "op": "Read",
                "value": 10581
            }, {
                "major": 8,
                "minor": 0,
                "op": "Write",
                "value": 1
            }, {
                "major": 8,
                "minor": 0,
                "op": "Sync",
                "value": 1
            }, {
                "major": 8,
                "minor": 0,
                "op": "Async",
                "value": 10581
            }, {
                "major": 8,
                "minor": 0,
                "op": "Total",
                "value": 10582
            }
        ],
        "io_queue_recursive": [],
        "io_service_time_recursive": [],
        "io_wait_time_recursive": [],
        "io_merged_recursive": [],
        "io_time_recursive": [],
        "sectors_recursive": []
    }
}
818	tests/unit/plugins/module_utils/_api/test_auth.py	Normal file
@@ -0,0 +1,818 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import base64
import json
import os
import os.path
import random
import shutil
import tempfile
import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api import auth, errors
from ansible_collections.community.docker.plugins.module_utils._api.credentials.errors import CredentialsNotFound
from ansible_collections.community.docker.plugins.module_utils._api.credentials.store import Store

try:
    from unittest import mock
except ImportError:
    import mock


class RegressionTest(unittest.TestCase):
    def test_803_urlsafe_encode(self):
        auth_data = {
            'username': 'root',
            'password': 'GR?XGR?XGR?XGR?X'
        }
        encoded = auth.encode_header(auth_data)
        assert b'/' not in encoded
        assert b'_' in encoded


class ResolveRepositoryNameTest(unittest.TestCase):
    def test_resolve_repository_name_hub_library_image(self):
        assert auth.resolve_repository_name('image') == (
            'docker.io', 'image'
        )

    def test_resolve_repository_name_dotted_hub_library_image(self):
        assert auth.resolve_repository_name('image.valid') == (
            'docker.io', 'image.valid'
        )

    def test_resolve_repository_name_hub_image(self):
        assert auth.resolve_repository_name('username/image') == (
            'docker.io', 'username/image'
        )

    def test_explicit_hub_index_library_image(self):
        assert auth.resolve_repository_name('docker.io/image') == (
            'docker.io', 'image'
        )

    def test_explicit_legacy_hub_index_library_image(self):
        assert auth.resolve_repository_name('index.docker.io/image') == (
            'docker.io', 'image'
        )

    def test_resolve_repository_name_private_registry(self):
        assert auth.resolve_repository_name('my.registry.net/image') == (
            'my.registry.net', 'image'
        )

    def test_resolve_repository_name_private_registry_with_port(self):
        assert auth.resolve_repository_name('my.registry.net:5000/image') == (
            'my.registry.net:5000', 'image'
        )

    def test_resolve_repository_name_private_registry_with_username(self):
        assert auth.resolve_repository_name(
            'my.registry.net/username/image'
        ) == ('my.registry.net', 'username/image')

    def test_resolve_repository_name_no_dots_but_port(self):
        assert auth.resolve_repository_name('hostname:5000/image') == (
            'hostname:5000', 'image'
        )

    def test_resolve_repository_name_no_dots_but_port_and_username(self):
        assert auth.resolve_repository_name(
            'hostname:5000/username/image'
        ) == ('hostname:5000', 'username/image')

    def test_resolve_repository_name_localhost(self):
        assert auth.resolve_repository_name('localhost/image') == (
            'localhost', 'image'
        )

    def test_resolve_repository_name_localhost_with_username(self):
        assert auth.resolve_repository_name('localhost/username/image') == (
            'localhost', 'username/image'
        )

    def test_invalid_index_name(self):
        with pytest.raises(errors.InvalidRepository):
            auth.resolve_repository_name('-gecko.com/image')


def encode_auth(auth_info):
    return base64.b64encode(
        auth_info.get('username', '').encode('utf-8') + b':' +
        auth_info.get('password', '').encode('utf-8'))


class ResolveAuthTest(unittest.TestCase):
    index_config = {'auth': encode_auth({'username': 'indexuser'})}
    private_config = {'auth': encode_auth({'username': 'privateuser'})}
    legacy_config = {'auth': encode_auth({'username': 'legacyauth'})}

    auth_config = auth.AuthConfig({
        'auths': auth.parse_auth({
            'https://index.docker.io/v1/': index_config,
            'my.registry.net': private_config,
            'http://legacy.registry.url/v1/': legacy_config,
        })
    })

    def test_resolve_authconfig_hostname_only(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'my.registry.net'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_no_protocol(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'my.registry.net/v1/'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_no_path(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'http://my.registry.net'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_no_path_trailing_slash(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'http://my.registry.net/'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_no_path_wrong_secure_proto(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'https://my.registry.net'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_no_path_wrong_insecure_proto(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'http://index.docker.io'
        )['username'] == 'indexuser'

    def test_resolve_authconfig_path_wrong_proto(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'https://my.registry.net/v1/'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_default_registry(self):
        assert auth.resolve_authconfig(
            self.auth_config
        )['username'] == 'indexuser'

    def test_resolve_authconfig_default_explicit_none(self):
        assert auth.resolve_authconfig(
            self.auth_config, None
        )['username'] == 'indexuser'

    def test_resolve_authconfig_fully_explicit(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'http://my.registry.net/v1/'
        )['username'] == 'privateuser'

    def test_resolve_authconfig_legacy_config(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'legacy.registry.url'
        )['username'] == 'legacyauth'

    def test_resolve_authconfig_no_match(self):
        assert auth.resolve_authconfig(
            self.auth_config, 'does.not.exist'
        ) is None

    def test_resolve_registry_and_auth_library_image(self):
        image = 'image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        )['username'] == 'indexuser'

    def test_resolve_registry_and_auth_hub_image(self):
        image = 'username/image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        )['username'] == 'indexuser'

    def test_resolve_registry_and_auth_explicit_hub(self):
        image = 'docker.io/username/image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        )['username'] == 'indexuser'

    def test_resolve_registry_and_auth_explicit_legacy_hub(self):
        image = 'index.docker.io/username/image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        )['username'] == 'indexuser'

    def test_resolve_registry_and_auth_private_registry(self):
        image = 'my.registry.net/image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        )['username'] == 'privateuser'

    def test_resolve_registry_and_auth_unauthenticated_registry(self):
        image = 'other.registry.net/image'
        assert auth.resolve_authconfig(
            self.auth_config, auth.resolve_repository_name(image)[0]
        ) is None

    def test_resolve_auth_with_empty_credstore_and_auth_dict(self):
        auth_config = auth.AuthConfig({
            'auths': auth.parse_auth({
                'https://index.docker.io/v1/': self.index_config,
            }),
            'credsStore': 'blackbox'
        })
        with mock.patch(
            'ansible_collections.community.docker.plugins.module_utils._api.auth.AuthConfig._resolve_authconfig_credstore'
        ) as m:
            m.return_value = None
            assert 'indexuser' == auth.resolve_authconfig(
                auth_config, None
            )['username']


class LoadConfigTest(unittest.TestCase):
    def test_load_config_no_file(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        cfg = auth.load_config(folder)
        assert cfg is not None

    def test_load_legacy_config(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        cfg_path = os.path.join(folder, '.dockercfg')
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        with open(cfg_path, 'w') as f:
            f.write('auth = {auth}\n'.format(auth=auth_))
            f.write('email = sakuya@scarlet.net')

        cfg = auth.load_config(cfg_path)
        assert auth.resolve_authconfig(cfg) is not None
        assert cfg.auths[auth.INDEX_NAME] is not None
        cfg = cfg.auths[auth.INDEX_NAME]
        assert cfg['username'] == 'sakuya'
        assert cfg['password'] == 'izayoi'
        assert cfg['email'] == 'sakuya@scarlet.net'
        assert cfg.get('Auth') is None

    def test_load_json_config(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        cfg_path = os.path.join(folder, '.dockercfg')
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        email = 'sakuya@scarlet.net'
        with open(cfg_path, 'w') as f:
            json.dump(
                {auth.INDEX_URL: {'auth': auth_, 'email': email}}, f
            )
        cfg = auth.load_config(cfg_path)
        assert auth.resolve_authconfig(cfg) is not None
        assert cfg.auths[auth.INDEX_URL] is not None
        cfg = cfg.auths[auth.INDEX_URL]
        assert cfg['username'] == 'sakuya'
        assert cfg['password'] == 'izayoi'
        assert cfg['email'] == email
        assert cfg.get('Auth') is None

    def test_load_modern_json_config(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        cfg_path = os.path.join(folder, 'config.json')
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        email = 'sakuya@scarlet.net'
        with open(cfg_path, 'w') as f:
            json.dump({
                'auths': {
                    auth.INDEX_URL: {
                        'auth': auth_, 'email': email
                    }
                }
            }, f)
        cfg = auth.load_config(cfg_path)
        assert auth.resolve_authconfig(cfg) is not None
        assert cfg.auths[auth.INDEX_URL] is not None
        cfg = cfg.auths[auth.INDEX_URL]
        assert cfg['username'] == 'sakuya'
        assert cfg['password'] == 'izayoi'
        assert cfg['email'] == email

    def test_load_config_with_random_name(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)

        dockercfg_path = os.path.join(folder,
                                      '.{0}.dockercfg'.format(
                                          random.randrange(100000)))
        registry = 'https://your.private.registry.io'
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        config = {
            registry: {
                'auth': '{auth}'.format(auth=auth_),
                'email': 'sakuya@scarlet.net'
            }
        }

        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)

        cfg = auth.load_config(dockercfg_path).auths
        assert registry in cfg
        assert cfg[registry] is not None
        cfg = cfg[registry]
        assert cfg['username'] == 'sakuya'
        assert cfg['password'] == 'izayoi'
        assert cfg['email'] == 'sakuya@scarlet.net'
        assert cfg.get('auth') is None

    def test_load_config_custom_config_env(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)

        dockercfg_path = os.path.join(folder, 'config.json')
        registry = 'https://your.private.registry.io'
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        config = {
            registry: {
                'auth': '{auth}'.format(auth=auth_),
                'email': 'sakuya@scarlet.net'
            }
        }

        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)

        with mock.patch.dict(os.environ, {'DOCKER_CONFIG': folder}):
            cfg = auth.load_config(None).auths
            assert registry in cfg
            assert cfg[registry] is not None
            cfg = cfg[registry]
            assert cfg['username'] == 'sakuya'
            assert cfg['password'] == 'izayoi'
            assert cfg['email'] == 'sakuya@scarlet.net'
            assert cfg.get('auth') is None

    def test_load_config_custom_config_env_with_auths(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)

        dockercfg_path = os.path.join(folder, 'config.json')
        registry = 'https://your.private.registry.io'
        auth_ = base64.b64encode(b'sakuya:izayoi').decode('ascii')
        config = {
            'auths': {
                registry: {
                    'auth': '{auth}'.format(auth=auth_),
                    'email': 'sakuya@scarlet.net'
                }
            }
        }

        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)

        with mock.patch.dict(os.environ, {'DOCKER_CONFIG': folder}):
            cfg = auth.load_config(None)
            assert registry in cfg.auths
            cfg = cfg.auths[registry]
            assert cfg['username'] == 'sakuya'
            assert cfg['password'] == 'izayoi'
            assert cfg['email'] == 'sakuya@scarlet.net'
            assert cfg.get('auth') is None

    def test_load_config_custom_config_env_utf8(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)

        dockercfg_path = os.path.join(folder, 'config.json')
        registry = 'https://your.private.registry.io'
        auth_ = base64.b64encode(
            b'sakuya\xc3\xa6:izayoi\xc3\xa6').decode('ascii')
        config = {
            'auths': {
                registry: {
                    'auth': '{auth}'.format(auth=auth_),
                    'email': 'sakuya@scarlet.net'
                }
            }
        }

        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)

        with mock.patch.dict(os.environ, {'DOCKER_CONFIG': folder}):
            cfg = auth.load_config(None)
            assert registry in cfg.auths
            cfg = cfg.auths[registry]
            assert cfg['username'] == b'sakuya\xc3\xa6'.decode('utf8')
            assert cfg['password'] == b'izayoi\xc3\xa6'.decode('utf8')
            assert cfg['email'] == 'sakuya@scarlet.net'
            assert cfg.get('auth') is None

    def test_load_config_unknown_keys(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        dockercfg_path = os.path.join(folder, 'config.json')
        config = {
            'detachKeys': 'ctrl-q, ctrl-u, ctrl-i'
        }
        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)

        cfg = auth.load_config(dockercfg_path)
        assert dict(cfg) == {'auths': {}}

    def test_load_config_invalid_auth_dict(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        dockercfg_path = os.path.join(folder, 'config.json')
        config = {
            'auths': {
                'scarlet.net': {'sakuya': 'izayoi'}
            }
        }
        with open(dockercfg_path, 'w') as f:
            json.dump(config, f)
|
||||
|
||||
cfg = auth.load_config(dockercfg_path)
|
||||
assert dict(cfg) == {'auths': {'scarlet.net': {}}}
|
||||
|
||||
def test_load_config_identity_token(self):
|
||||
folder = tempfile.mkdtemp()
|
||||
registry = 'scarlet.net'
|
||||
token = '1ce1cebb-503e-7043-11aa-7feb8bd4a1ce'
|
||||
self.addCleanup(shutil.rmtree, folder)
|
||||
dockercfg_path = os.path.join(folder, 'config.json')
|
||||
auth_entry = encode_auth({'username': 'sakuya'}).decode('ascii')
|
||||
config = {
|
||||
'auths': {
|
||||
registry: {
|
||||
'auth': auth_entry,
|
||||
'identitytoken': token
|
||||
}
|
||||
}
|
||||
}
|
||||
with open(dockercfg_path, 'w') as f:
|
||||
json.dump(config, f)
|
||||
|
||||
cfg = auth.load_config(dockercfg_path)
|
||||
assert registry in cfg.auths
|
||||
cfg = cfg.auths[registry]
|
||||
assert 'IdentityToken' in cfg
|
||||
assert cfg['IdentityToken'] == token
|
||||
|
||||
|
||||
class CredstoreTest(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.authconfig = auth.AuthConfig({'credsStore': 'default'})
|
||||
self.default_store = InMemoryStore('default')
|
||||
self.authconfig._stores['default'] = self.default_store
|
||||
self.default_store.store(
|
||||
'https://gensokyo.jp/v2', 'sakuya', 'izayoi',
|
||||
)
|
||||
self.default_store.store(
|
||||
'https://default.com/v2', 'user', 'hunter2',
|
||||
)
|
||||
|
||||
def test_get_credential_store(self):
|
||||
auth_config = auth.AuthConfig({
|
||||
'credHelpers': {
|
||||
'registry1.io': 'truesecret',
|
||||
'registry2.io': 'powerlock'
|
||||
},
|
||||
'credsStore': 'blackbox',
|
||||
})
|
||||
|
||||
assert auth_config.get_credential_store('registry1.io') == 'truesecret'
|
||||
assert auth_config.get_credential_store('registry2.io') == 'powerlock'
|
||||
assert auth_config.get_credential_store('registry3.io') == 'blackbox'
|
||||
|
||||
def test_get_credential_store_no_default(self):
|
||||
auth_config = auth.AuthConfig({
|
||||
'credHelpers': {
|
||||
'registry1.io': 'truesecret',
|
||||
'registry2.io': 'powerlock'
|
||||
},
|
||||
})
|
||||
assert auth_config.get_credential_store('registry2.io') == 'powerlock'
|
||||
assert auth_config.get_credential_store('registry3.io') is None
|
||||
|
||||
def test_get_credential_store_default_index(self):
|
||||
auth_config = auth.AuthConfig({
|
||||
'credHelpers': {
|
||||
'https://index.docker.io/v1/': 'powerlock'
|
||||
},
|
||||
'credsStore': 'truesecret'
|
||||
})
|
||||
|
||||
assert auth_config.get_credential_store(None) == 'powerlock'
|
||||
assert auth_config.get_credential_store('docker.io') == 'powerlock'
|
||||
assert auth_config.get_credential_store('images.io') == 'truesecret'
|
||||
|
||||
def test_get_credential_store_with_plain_dict(self):
|
||||
auth_config = {
|
||||
'credHelpers': {
|
||||
'registry1.io': 'truesecret',
|
||||
'registry2.io': 'powerlock'
|
||||
},
|
||||
'credsStore': 'blackbox',
|
||||
}
|
||||
|
||||
assert auth.get_credential_store(
|
||||
auth_config, 'registry1.io'
|
||||
) == 'truesecret'
|
||||
assert auth.get_credential_store(
|
||||
auth_config, 'registry2.io'
|
||||
) == 'powerlock'
|
||||
assert auth.get_credential_store(
|
||||
auth_config, 'registry3.io'
|
||||
) == 'blackbox'
|
||||
|
||||
def test_get_all_credentials_credstore_only(self):
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_with_empty_credhelper(self):
|
||||
self.authconfig['credHelpers'] = {
|
||||
'registry1.io': 'truesecret',
|
||||
}
|
||||
self.authconfig._stores['truesecret'] = InMemoryStore()
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'registry1.io': None,
|
||||
}
|
||||
|
||||
def test_get_all_credentials_with_credhelpers_only(self):
|
||||
del self.authconfig['credsStore']
|
||||
assert self.authconfig.get_all_credentials() == {}
|
||||
|
||||
self.authconfig['credHelpers'] = {
|
||||
'https://gensokyo.jp/v2': 'default',
|
||||
'https://default.com/v2': 'default',
|
||||
}
|
||||
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_with_auths_entries(self):
|
||||
self.authconfig.add_auth('registry1.io', {
|
||||
'ServerAddress': 'registry1.io',
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
})
|
||||
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'registry1.io': {
|
||||
'ServerAddress': 'registry1.io',
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_with_empty_auths_entry(self):
|
||||
self.authconfig.add_auth('default.com', {})
|
||||
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_credstore_overrides_auth_entry(self):
|
||||
self.authconfig.add_auth('default.com', {
|
||||
'Username': 'shouldnotsee',
|
||||
'Password': 'thisentry',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
})
|
||||
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_helpers_override_default(self):
|
||||
self.authconfig['credHelpers'] = {
|
||||
'https://default.com/v2': 'truesecret',
|
||||
}
|
||||
truesecret = InMemoryStore('truesecret')
|
||||
truesecret.store('https://default.com/v2', 'reimu', 'hakurei')
|
||||
self.authconfig._stores['truesecret'] = truesecret
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
}
|
||||
|
||||
def test_get_all_credentials_3_sources(self):
|
||||
self.authconfig['credHelpers'] = {
|
||||
'registry1.io': 'truesecret',
|
||||
}
|
||||
truesecret = InMemoryStore('truesecret')
|
||||
truesecret.store('registry1.io', 'reimu', 'hakurei')
|
||||
self.authconfig._stores['truesecret'] = truesecret
|
||||
self.authconfig.add_auth('registry2.io', {
|
||||
'ServerAddress': 'registry2.io',
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
})
|
||||
|
||||
assert self.authconfig.get_all_credentials() == {
|
||||
'https://gensokyo.jp/v2': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'gensokyo.jp': {
|
||||
'Username': 'sakuya',
|
||||
'Password': 'izayoi',
|
||||
'ServerAddress': 'https://gensokyo.jp/v2',
|
||||
},
|
||||
'https://default.com/v2': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'default.com': {
|
||||
'Username': 'user',
|
||||
'Password': 'hunter2',
|
||||
'ServerAddress': 'https://default.com/v2',
|
||||
},
|
||||
'registry1.io': {
|
||||
'ServerAddress': 'registry1.io',
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
},
|
||||
'registry2.io': {
|
||||
'ServerAddress': 'registry2.io',
|
||||
'Username': 'reimu',
|
||||
'Password': 'hakurei',
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
class InMemoryStore(Store):
|
||||
def __init__(self, *args, **kwargs):
|
||||
self.__store = {}
|
||||
|
||||
def get(self, server):
|
||||
try:
|
||||
return self.__store[server]
|
||||
except KeyError:
|
||||
raise CredentialsNotFound()
|
||||
|
||||
def store(self, server, username, secret):
|
||||
self.__store[server] = {
|
||||
'ServerURL': server,
|
||||
'Username': username,
|
||||
'Secret': secret,
|
||||
}
|
||||
|
||||
def list(self):
|
||||
return dict(
|
||||
(k, v['Username']) for k, v in self.__store.items()
|
||||
)
|
||||
|
||||
def erase(self, server):
|
||||
del self.__store[server]
|
||||
141	tests/unit/plugins/module_utils/_api/test_errors.py	Normal file
@@ -0,0 +1,141 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest
import requests

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.errors import (
    APIError, ContainerError, DockerException,
    create_unexpected_kwargs_error,
    create_api_error_from_http_exception,
)
from .fake_api import FAKE_CONTAINER_ID, FAKE_IMAGE_ID


class APIErrorTest(unittest.TestCase):
    def test_api_error_is_caught_by_dockerexception(self):
        try:
            raise APIError("this should be caught by DockerException")
        except DockerException:
            pass

    def test_status_code_200(self):
        """The status_code property is present with 200 response."""
        resp = requests.Response()
        resp.status_code = 200
        err = APIError('', response=resp)
        assert err.status_code == 200

    def test_status_code_400(self):
        """The status_code property is present with 400 response."""
        resp = requests.Response()
        resp.status_code = 400
        err = APIError('', response=resp)
        assert err.status_code == 400

    def test_status_code_500(self):
        """The status_code property is present with 500 response."""
        resp = requests.Response()
        resp.status_code = 500
        err = APIError('', response=resp)
        assert err.status_code == 500

    def test_is_server_error_200(self):
        """Report not server error on 200 response."""
        resp = requests.Response()
        resp.status_code = 200
        err = APIError('', response=resp)
        assert err.is_server_error() is False

    def test_is_server_error_300(self):
        """Report not server error on 300 response."""
        resp = requests.Response()
        resp.status_code = 300
        err = APIError('', response=resp)
        assert err.is_server_error() is False

    def test_is_server_error_400(self):
        """Report not server error on 400 response."""
        resp = requests.Response()
        resp.status_code = 400
        err = APIError('', response=resp)
        assert err.is_server_error() is False

    def test_is_server_error_500(self):
        """Report server error on 500 response."""
        resp = requests.Response()
        resp.status_code = 500
        err = APIError('', response=resp)
        assert err.is_server_error() is True

    def test_is_client_error_500(self):
        """Report not client error on 500 response."""
        resp = requests.Response()
        resp.status_code = 500
        err = APIError('', response=resp)
        assert err.is_client_error() is False

    def test_is_client_error_400(self):
        """Report client error on 400 response."""
        resp = requests.Response()
        resp.status_code = 400
        err = APIError('', response=resp)
        assert err.is_client_error() is True

    def test_is_error_300(self):
        """Report no error on 300 response."""
        resp = requests.Response()
        resp.status_code = 300
        err = APIError('', response=resp)
        assert err.is_error() is False

    def test_is_error_400(self):
        """Report error on 400 response."""
        resp = requests.Response()
        resp.status_code = 400
        err = APIError('', response=resp)
        assert err.is_error() is True

    def test_is_error_500(self):
        """Report error on 500 response."""
        resp = requests.Response()
        resp.status_code = 500
        err = APIError('', response=resp)
        assert err.is_error() is True

    def test_create_error_from_exception(self):
        resp = requests.Response()
        resp.status_code = 500
        err = APIError('')
        try:
            resp.raise_for_status()
        except requests.exceptions.HTTPError as e:
            try:
                create_api_error_from_http_exception(e)
            except APIError as e:
                err = e
        assert err.is_server_error() is True


class CreateUnexpectedKwargsErrorTest(unittest.TestCase):
    def test_create_unexpected_kwargs_error_single(self):
        e = create_unexpected_kwargs_error('f', {'foo': 'bar'})
        assert str(e) == "f() got an unexpected keyword argument 'foo'"

    def test_create_unexpected_kwargs_error_multiple(self):
        e = create_unexpected_kwargs_error('f', {'foo': 'bar', 'baz': 'bosh'})
        assert str(e) == "f() got unexpected keyword arguments 'baz', 'foo'"
@@ -0,0 +1,56 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.transport.sshconn import SSHSocket, SSHHTTPAdapter


class SSHAdapterTest(unittest.TestCase):
    @staticmethod
    def test_ssh_hostname_prefix_trim():
        conn = SSHHTTPAdapter(
            base_url="ssh://user@hostname:1234", shell_out=True)
        assert conn.ssh_host == "user@hostname:1234"

    @staticmethod
    def test_ssh_parse_url():
        c = SSHSocket(host="user@hostname:1234")
        assert c.host == "hostname"
        assert c.port == "1234"
        assert c.user == "user"

    @staticmethod
    def test_ssh_parse_hostname_only():
        c = SSHSocket(host="hostname")
        assert c.host == "hostname"
        assert c.port is None
        assert c.user is None

    @staticmethod
    def test_ssh_parse_user_and_hostname():
        c = SSHSocket(host="user@hostname")
        assert c.host == "hostname"
        assert c.port is None
        assert c.user == "user"

    @staticmethod
    def test_ssh_parse_hostname_and_port():
        c = SSHSocket(host="hostname:22")
        assert c.host == "hostname"
        assert c.port == "22"
        assert c.user is None
@@ -0,0 +1,95 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.transport import ssladapter

try:
    from backports.ssl_match_hostname import (
        match_hostname, CertificateError
    )
except ImportError:
    from ssl import (
        match_hostname, CertificateError
    )

try:
    from ssl import OP_NO_SSLv3, OP_NO_SSLv2, OP_NO_TLSv1
except ImportError:
    OP_NO_SSLv2 = 0x1000000
    OP_NO_SSLv3 = 0x2000000
    OP_NO_TLSv1 = 0x4000000


class SSLAdapterTest(unittest.TestCase):
    def test_only_uses_tls(self):
        ssl_context = ssladapter.urllib3.util.ssl_.create_urllib3_context()

        assert ssl_context.options & OP_NO_SSLv3
        # if OpenSSL is compiled without SSL2 support, OP_NO_SSLv2 will be 0
        assert not bool(OP_NO_SSLv2) or ssl_context.options & OP_NO_SSLv2
        assert not ssl_context.options & OP_NO_TLSv1


class MatchHostnameTest(unittest.TestCase):
    cert = {
        'issuer': (
            (('countryName', 'US'),),
            (('stateOrProvinceName', 'California'),),
            (('localityName', 'San Francisco'),),
            (('organizationName', 'Docker Inc'),),
            (('organizationalUnitName', 'Docker-Python'),),
            (('commonName', 'localhost'),),
            (('emailAddress', 'info@docker.com'),)
        ),
        'notAfter': 'Mar 25 23:08:23 2030 GMT',
        'notBefore': 'Mar 25 23:08:23 2016 GMT',
        'serialNumber': 'BD5F894C839C548F',
        'subject': (
            (('countryName', 'US'),),
            (('stateOrProvinceName', 'California'),),
            (('localityName', 'San Francisco'),),
            (('organizationName', 'Docker Inc'),),
            (('organizationalUnitName', 'Docker-Python'),),
            (('commonName', 'localhost'),),
            (('emailAddress', 'info@docker.com'),)
        ),
        'subjectAltName': (
            ('DNS', 'localhost'),
            ('DNS', '*.gensokyo.jp'),
            ('IP Address', '127.0.0.1'),
        ),
        'version': 3
    }

    def test_match_ip_address_success(self):
        assert match_hostname(self.cert, '127.0.0.1') is None

    def test_match_localhost_success(self):
        assert match_hostname(self.cert, 'localhost') is None

    def test_match_dns_success(self):
        assert match_hostname(self.cert, 'touhou.gensokyo.jp') is None

    def test_match_ip_address_failure(self):
        with pytest.raises(CertificateError):
            match_hostname(self.cert, '192.168.0.25')

    def test_match_dns_failure(self):
        with pytest.raises(CertificateError):
            match_hostname(self.cert, 'foobar.co.uk')
514	tests/unit/plugins/module_utils/_api/utils/test_build.py	Normal file
@@ -0,0 +1,514 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import os.path
import shutil
import socket
import tarfile
import tempfile
import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.constants import IS_WINDOWS_PLATFORM
from ansible_collections.community.docker.plugins.module_utils._api.utils.build import exclude_paths, tar


def make_tree(dirs, files):
    base = tempfile.mkdtemp()

    for path in dirs:
        os.makedirs(os.path.join(base, path))

    for path in files:
        with open(os.path.join(base, path), 'w') as f:
            f.write("content")

    return base


def convert_paths(collection):
    return set(map(convert_path, collection))


def convert_path(path):
    return path.replace('/', os.path.sep)


class ExcludePathsTest(unittest.TestCase):
    dirs = [
        'foo',
        'foo/bar',
        'bar',
        'target',
        'target/subdir',
        'subdir',
        'subdir/target',
        'subdir/target/subdir',
        'subdir/subdir2',
        'subdir/subdir2/target',
        'subdir/subdir2/target/subdir'
    ]

    files = [
        'Dockerfile',
        'Dockerfile.alt',
        '.dockerignore',
        'a.py',
        'a.go',
        'b.py',
        'cde.py',
        'foo/a.py',
        'foo/b.py',
        'foo/bar/a.py',
        'bar/a.py',
        'foo/Dockerfile3',
        'target/file.txt',
        'target/subdir/file.txt',
        'subdir/file.txt',
        'subdir/target/file.txt',
        'subdir/target/subdir/file.txt',
        'subdir/subdir2/file.txt',
        'subdir/subdir2/target/file.txt',
        'subdir/subdir2/target/subdir/file.txt',
    ]

    all_paths = set(dirs + files)

    def setUp(self):
        self.base = make_tree(self.dirs, self.files)

    def tearDown(self):
        shutil.rmtree(self.base)

    def exclude(self, patterns, dockerfile=None):
        return set(exclude_paths(self.base, patterns, dockerfile=dockerfile))

    def test_no_excludes(self):
        assert self.exclude(['']) == convert_paths(self.all_paths)

    def test_no_dupes(self):
        paths = exclude_paths(self.base, ['!a.py'])
        assert sorted(paths) == sorted(set(paths))

    def test_wildcard_exclude(self):
        assert self.exclude(['*']) == set(['Dockerfile', '.dockerignore'])

    def test_exclude_dockerfile_dockerignore(self):
        """
        Even if the .dockerignore file explicitly says to exclude
        Dockerfile and/or .dockerignore, don't exclude them from
        the actual tar file.
        """
        assert self.exclude(['Dockerfile', '.dockerignore']) == convert_paths(
            self.all_paths
        )

    def test_exclude_custom_dockerfile(self):
        """
        If we're using a custom Dockerfile, make sure that's not
        excluded.
        """
        assert self.exclude(['*'], dockerfile='Dockerfile.alt') == set(['Dockerfile.alt', '.dockerignore'])

        assert self.exclude(
            ['*'], dockerfile='foo/Dockerfile3'
        ) == convert_paths(set(['foo/Dockerfile3', '.dockerignore']))

        # https://github.com/docker/docker-py/issues/1956
        assert self.exclude(
            ['*'], dockerfile='./foo/Dockerfile3'
        ) == convert_paths(set(['foo/Dockerfile3', '.dockerignore']))

    def test_exclude_dockerfile_child(self):
        includes = self.exclude(['foo/'], dockerfile='foo/Dockerfile3')
        assert convert_path('foo/Dockerfile3') in includes
        assert convert_path('foo/a.py') not in includes

    def test_single_filename(self):
        assert self.exclude(['a.py']) == convert_paths(
            self.all_paths - set(['a.py'])
        )

    def test_single_filename_leading_dot_slash(self):
        assert self.exclude(['./a.py']) == convert_paths(
            self.all_paths - set(['a.py'])
        )

    # As odd as it sounds, a filename pattern with a trailing slash on the
    # end *will* result in that file being excluded.
    def test_single_filename_trailing_slash(self):
        assert self.exclude(['a.py/']) == convert_paths(
            self.all_paths - set(['a.py'])
        )

    def test_wildcard_filename_start(self):
        assert self.exclude(['*.py']) == convert_paths(
            self.all_paths - set(['a.py', 'b.py', 'cde.py'])
        )

    def test_wildcard_with_exception(self):
        assert self.exclude(['*.py', '!b.py']) == convert_paths(
            self.all_paths - set(['a.py', 'cde.py'])
        )

    def test_wildcard_with_wildcard_exception(self):
        assert self.exclude(['*.*', '!*.go']) == convert_paths(
            self.all_paths - set([
                'a.py', 'b.py', 'cde.py', 'Dockerfile.alt',
            ])
        )

    def test_wildcard_filename_end(self):
        assert self.exclude(['a.*']) == convert_paths(
            self.all_paths - set(['a.py', 'a.go'])
        )

    def test_question_mark(self):
        assert self.exclude(['?.py']) == convert_paths(
            self.all_paths - set(['a.py', 'b.py'])
        )

    def test_single_subdir_single_filename(self):
        assert self.exclude(['foo/a.py']) == convert_paths(
            self.all_paths - set(['foo/a.py'])
        )

    def test_single_subdir_single_filename_leading_slash(self):
        assert self.exclude(['/foo/a.py']) == convert_paths(
            self.all_paths - set(['foo/a.py'])
        )

    def test_exclude_include_absolute_path(self):
        base = make_tree([], ['a.py', 'b.py'])
        assert exclude_paths(
            base,
            ['/*', '!/*.py']
        ) == set(['a.py', 'b.py'])

    def test_single_subdir_with_path_traversal(self):
        assert self.exclude(['foo/whoops/../a.py']) == convert_paths(
            self.all_paths - set(['foo/a.py'])
        )

    def test_single_subdir_wildcard_filename(self):
        assert self.exclude(['foo/*.py']) == convert_paths(
            self.all_paths - set(['foo/a.py', 'foo/b.py'])
        )

    def test_wildcard_subdir_single_filename(self):
        assert self.exclude(['*/a.py']) == convert_paths(
            self.all_paths - set(['foo/a.py', 'bar/a.py'])
        )

    def test_wildcard_subdir_wildcard_filename(self):
        assert self.exclude(['*/*.py']) == convert_paths(
            self.all_paths - set(['foo/a.py', 'foo/b.py', 'bar/a.py'])
        )

    def test_directory(self):
        assert self.exclude(['foo']) == convert_paths(
            self.all_paths - set([
                'foo', 'foo/a.py', 'foo/b.py', 'foo/bar', 'foo/bar/a.py',
                'foo/Dockerfile3'
            ])
        )

    def test_directory_with_trailing_slash(self):
        assert self.exclude(['foo']) == convert_paths(
            self.all_paths - set([
                'foo', 'foo/a.py', 'foo/b.py',
                'foo/bar', 'foo/bar/a.py', 'foo/Dockerfile3'
            ])
        )

    def test_directory_with_single_exception(self):
        assert self.exclude(['foo', '!foo/bar/a.py']) == convert_paths(
            self.all_paths - set([
                'foo/a.py', 'foo/b.py', 'foo', 'foo/bar',
                'foo/Dockerfile3'
            ])
        )

    def test_directory_with_subdir_exception(self):
        assert self.exclude(['foo', '!foo/bar']) == convert_paths(
            self.all_paths - set([
                'foo/a.py', 'foo/b.py', 'foo', 'foo/Dockerfile3'
            ])
        )

    @pytest.mark.skipif(
        not IS_WINDOWS_PLATFORM, reason='Backslash patterns only on Windows'
    )
    def test_directory_with_subdir_exception_win32_pathsep(self):
        assert self.exclude(['foo', '!foo\\bar']) == convert_paths(
            self.all_paths - set([
                'foo/a.py', 'foo/b.py', 'foo', 'foo/Dockerfile3'
            ])
        )

    def test_directory_with_wildcard_exception(self):
        assert self.exclude(['foo', '!foo/*.py']) == convert_paths(
            self.all_paths - set([
                'foo/bar', 'foo/bar/a.py', 'foo', 'foo/Dockerfile3'
            ])
        )

    def test_subdirectory(self):
        assert self.exclude(['foo/bar']) == convert_paths(
            self.all_paths - set(['foo/bar', 'foo/bar/a.py'])
        )

    @pytest.mark.skipif(
        not IS_WINDOWS_PLATFORM, reason='Backslash patterns only on Windows'
    )
    def test_subdirectory_win32_pathsep(self):
        assert self.exclude(['foo\\bar']) == convert_paths(
            self.all_paths - set(['foo/bar', 'foo/bar/a.py'])
        )

    def test_double_wildcard(self):
        assert self.exclude(['**/a.py']) == convert_paths(
            self.all_paths - set([
                'a.py', 'foo/a.py', 'foo/bar/a.py', 'bar/a.py'
            ])
        )

        assert self.exclude(['foo/**/bar']) == convert_paths(
            self.all_paths - set(['foo/bar', 'foo/bar/a.py'])
        )

    def test_single_and_double_wildcard(self):
|
||||
assert self.exclude(['**/target/*/*']) == convert_paths(
|
||||
self.all_paths - set([
|
||||
'target/subdir/file.txt',
|
||||
'subdir/target/subdir/file.txt',
|
||||
'subdir/subdir2/target/subdir/file.txt',
|
||||
])
|
||||
)
|
||||
|
||||
def test_trailing_double_wildcard(self):
|
||||
assert self.exclude(['subdir/**']) == convert_paths(
|
||||
self.all_paths - set([
|
||||
'subdir/file.txt',
|
||||
'subdir/target/file.txt',
|
||||
'subdir/target/subdir/file.txt',
|
||||
'subdir/subdir2/file.txt',
|
||||
'subdir/subdir2/target/file.txt',
|
||||
'subdir/subdir2/target/subdir/file.txt',
|
||||
'subdir/target',
|
||||
'subdir/target/subdir',
|
||||
'subdir/subdir2',
|
||||
'subdir/subdir2/target',
|
||||
'subdir/subdir2/target/subdir',
|
||||
])
|
||||
)
|
||||
|
||||
def test_double_wildcard_with_exception(self):
|
||||
assert self.exclude(['**', '!bar', '!foo/bar']) == convert_paths(
|
||||
set([
|
||||
'foo/bar', 'foo/bar/a.py', 'bar', 'bar/a.py', 'Dockerfile',
|
||||
'.dockerignore',
|
||||
])
|
||||
)
|
||||
|
||||
def test_include_wildcard(self):
|
||||
# This may be surprising but it matches the CLI's behavior
|
||||
# (tested with 18.05.0-ce on linux)
|
||||
base = make_tree(['a'], ['a/b.py'])
|
||||
assert exclude_paths(
|
||||
base,
|
||||
['*', '!*/b.py']
|
||||
) == set()
|
||||
|
||||
def test_last_line_precedence(self):
|
||||
base = make_tree(
|
||||
[],
|
||||
['garbage.md',
|
||||
'trash.md',
|
||||
'README.md',
|
||||
'README-bis.md',
|
||||
'README-secret.md'])
|
||||
assert exclude_paths(
|
||||
base,
|
||||
['*.md', '!README*.md', 'README-secret.md']
|
||||
) == set(['README.md', 'README-bis.md'])
|
||||
|
||||
def test_parent_directory(self):
|
||||
base = make_tree(
|
||||
[],
|
||||
['a.py',
|
||||
'b.py',
|
||||
'c.py'])
|
||||
# Dockerignore reference stipulates that absolute paths are
|
||||
# equivalent to relative paths, hence /../foo should be
|
||||
# equivalent to ../foo. It also stipulates that paths are run
|
||||
# through Go's filepath.Clean, which explicitly "replace
|
||||
# "/.." by "/" at the beginning of a path".
|
||||
assert exclude_paths(
|
||||
base,
|
||||
['../a.py', '/../b.py']
|
||||
) == set(['c.py'])
|
||||
|
||||
|
||||
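The last-match-wins semantics exercised above (for example in `test_last_line_precedence`) can be sketched with the standard library's `fnmatch`. This is a hypothetical simplification, not the vendored `exclude_paths` implementation: it ignores directory recursion and `**` handling and only shows how a later pattern overrides an earlier one.

```python
import fnmatch

def exclude_paths_sketch(paths, patterns):
    # Keep each path unless the *last* matching pattern excludes it.
    kept = set()
    for path in paths:
        excluded = False
        for pattern in patterns:
            negate = pattern.startswith('!')
            cleaned = pattern.lstrip('!').lstrip('/')
            if fnmatch.fnmatch(path, cleaned):
                excluded = not negate
        if not excluded:
            kept.add(path)
    return kept

paths = {'garbage.md', 'trash.md', 'README.md', 'README-bis.md', 'README-secret.md'}
print(sorted(exclude_paths_sketch(paths, ['*.md', '!README*.md', 'README-secret.md'])))
# ['README-bis.md', 'README.md']
```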
class TarTest(unittest.TestCase):
    def test_tar_with_excludes(self):
        dirs = [
            'foo',
            'foo/bar',
            'bar',
        ]

        files = [
            'Dockerfile',
            'Dockerfile.alt',
            '.dockerignore',
            'a.py',
            'a.go',
            'b.py',
            'cde.py',
            'foo/a.py',
            'foo/b.py',
            'foo/bar/a.py',
            'bar/a.py',
        ]

        exclude = [
            '*.py',
            '!b.py',
            '!a.go',
            'foo',
            'Dockerfile*',
            '.dockerignore',
        ]

        expected_names = set([
            'Dockerfile',
            '.dockerignore',
            'a.go',
            'b.py',
            'bar',
            'bar/a.py',
        ])

        base = make_tree(dirs, files)
        self.addCleanup(shutil.rmtree, base)

        with tar(base, exclude=exclude) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == sorted(expected_names)

    def test_tar_with_empty_directory(self):
        base = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base)
        for d in ['foo', 'bar']:
            os.makedirs(os.path.join(base, d))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == ['bar', 'foo']

    @pytest.mark.skipif(
        IS_WINDOWS_PLATFORM or os.geteuid() == 0,
        reason='root user always has access; no chmod on Windows'
    )
    def test_tar_with_inaccessible_file(self):
        base = tempfile.mkdtemp()
        full_path = os.path.join(base, 'foo')
        self.addCleanup(shutil.rmtree, base)
        with open(full_path, 'w') as f:
            f.write('content')
        os.chmod(full_path, 0o222)
        with pytest.raises(IOError) as ei:
            tar(base)

        assert 'Can not read file in context: {full_path}'.format(full_path=full_path) in (
            ei.exconly()
        )

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='No symlinks on Windows')
    def test_tar_with_file_symlinks(self):
        base = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base)
        with open(os.path.join(base, 'foo'), 'w') as f:
            f.write("content")
        os.makedirs(os.path.join(base, 'bar'))
        os.symlink('../foo', os.path.join(base, 'bar/foo'))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == ['bar', 'bar/foo', 'foo']

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='No symlinks on Windows')
    def test_tar_with_directory_symlinks(self):
        base = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base)
        for d in ['foo', 'bar']:
            os.makedirs(os.path.join(base, d))
        os.symlink('../foo', os.path.join(base, 'bar/foo'))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == ['bar', 'bar/foo', 'foo']

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='No symlinks on Windows')
    def test_tar_with_broken_symlinks(self):
        base = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base)
        for d in ['foo', 'bar']:
            os.makedirs(os.path.join(base, d))

        os.symlink('../baz', os.path.join(base, 'bar/foo'))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == ['bar', 'bar/foo', 'foo']

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='No UNIX sockets on Win32')
    def test_tar_socket_file(self):
        base = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base)
        for d in ['foo', 'bar']:
            os.makedirs(os.path.join(base, d))
        sock = socket.socket(socket.AF_UNIX)
        self.addCleanup(sock.close)
        sock.bind(os.path.join(base, 'test.sock'))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert sorted(tar_data.getnames()) == ['bar', 'foo']

    def test_tar_negative_mtime_bug(self):
        base = tempfile.mkdtemp()
        filename = os.path.join(base, 'th.txt')
        self.addCleanup(shutil.rmtree, base)
        with open(filename, 'w') as f:
            f.write('Invisible Full Moon')
        os.utime(filename, (12345, -3600.0))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            assert tar_data.getnames() == ['th.txt']
            assert tar_data.getmember('th.txt').mtime == -3600

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='No symlinks on Windows')
    def test_tar_directory_link(self):
        dirs = ['a', 'b', 'a/c']
        files = ['a/hello.py', 'b/utils.py', 'a/c/descend.py']
        base = make_tree(dirs, files)
        self.addCleanup(shutil.rmtree, base)
        os.symlink(os.path.join(base, 'b'), os.path.join(base, 'a/c/b'))
        with tar(base) as archive:
            tar_data = tarfile.open(fileobj=archive)
            names = tar_data.getnames()
            for member in dirs + files:
                assert member in names
            assert 'a/c/b' in names
            assert 'a/c/b/utils.py' not in names
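For reference, the archive-building behavior these tests exercise can be approximated with the standard library. `tar_sketch` below is a hypothetical helper, not the collection's `tar()` (which applies dockerignore-style excludes recursively and streams to a temporary file); it only shows the shape of "tar a directory, skipping some names".

```python
import io
import os
import tarfile
import tempfile

def tar_sketch(base, exclude=()):
    # Build an in-memory tar of `base`, skipping excluded top-level names.
    fileobj = io.BytesIO()
    with tarfile.open(mode='w', fileobj=fileobj) as t:
        for name in sorted(os.listdir(base)):
            if name not in exclude:
                t.add(os.path.join(base, name), arcname=name)
    fileobj.seek(0)
    return fileobj

base = tempfile.mkdtemp()
for name in ['a.py', 'b.py']:
    open(os.path.join(base, name), 'w').close()
archive = tarfile.open(fileobj=tar_sketch(base, exclude=['a.py']))
print(archive.getnames())  # ['b.py']
```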
140  tests/unit/plugins/module_utils/_api/utils/test_config.py  Normal file
@@ -0,0 +1,140 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os
import unittest
import shutil
import tempfile
import json
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from pytest import mark, fixture

from ansible_collections.community.docker.plugins.module_utils._api.utils import config

try:
    from unittest import mock
except ImportError:
    import mock


class FindConfigFileTest(unittest.TestCase):

    @fixture(autouse=True)
    def tmpdir(self, tmpdir):
        self.mkdir = tmpdir.mkdir

    def test_find_config_fallback(self):
        tmpdir = self.mkdir('test_find_config_fallback')

        with mock.patch.dict(os.environ, {'HOME': str(tmpdir)}):
            assert config.find_config_file() is None

    def test_find_config_from_explicit_path(self):
        tmpdir = self.mkdir('test_find_config_from_explicit_path')
        config_path = tmpdir.ensure('my-config-file.json')

        assert config.find_config_file(str(config_path)) == str(config_path)

    def test_find_config_from_environment(self):
        tmpdir = self.mkdir('test_find_config_from_environment')
        config_path = tmpdir.ensure('config.json')

        with mock.patch.dict(os.environ, {'DOCKER_CONFIG': str(tmpdir)}):
            assert config.find_config_file() == str(config_path)

    @mark.skipif("sys.platform == 'win32'")
    def test_find_config_from_home_posix(self):
        tmpdir = self.mkdir('test_find_config_from_home_posix')
        config_path = tmpdir.ensure('.docker', 'config.json')

        with mock.patch.dict(os.environ, {'HOME': str(tmpdir)}):
            assert config.find_config_file() == str(config_path)

    @mark.skipif("sys.platform == 'win32'")
    def test_find_config_from_home_legacy_name(self):
        tmpdir = self.mkdir('test_find_config_from_home_legacy_name')
        config_path = tmpdir.ensure('.dockercfg')

        with mock.patch.dict(os.environ, {'HOME': str(tmpdir)}):
            assert config.find_config_file() == str(config_path)

    @mark.skipif("sys.platform != 'win32'")
    def test_find_config_from_home_windows(self):
        tmpdir = self.mkdir('test_find_config_from_home_windows')
        config_path = tmpdir.ensure('.docker', 'config.json')

        with mock.patch.dict(os.environ, {'USERPROFILE': str(tmpdir)}):
            assert config.find_config_file() == str(config_path)


class LoadConfigTest(unittest.TestCase):
    def test_load_config_no_file(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        cfg = config.load_general_config(folder)
        assert cfg is not None
        assert isinstance(cfg, dict)
        assert not cfg

    def test_load_config_custom_headers(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)

        dockercfg_path = os.path.join(folder, 'config.json')
        config_data = {
            'HttpHeaders': {
                'Name': 'Spike',
                'Surname': 'Spiegel'
            },
        }

        with open(dockercfg_path, 'w') as f:
            json.dump(config_data, f)

        cfg = config.load_general_config(dockercfg_path)
        assert 'HttpHeaders' in cfg
        assert cfg['HttpHeaders'] == {
            'Name': 'Spike',
            'Surname': 'Spiegel'
        }

    def test_load_config_detach_keys(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        dockercfg_path = os.path.join(folder, 'config.json')
        config_data = {
            'detachKeys': 'ctrl-q, ctrl-u, ctrl-i'
        }
        with open(dockercfg_path, 'w') as f:
            json.dump(config_data, f)

        cfg = config.load_general_config(dockercfg_path)
        assert cfg == config_data

    def test_load_config_from_env(self):
        folder = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, folder)
        dockercfg_path = os.path.join(folder, 'config.json')
        config_data = {
            'detachKeys': 'ctrl-q, ctrl-u, ctrl-i'
        }
        with open(dockercfg_path, 'w') as f:
            json.dump(config_data, f)

        with mock.patch.dict(os.environ, {'DOCKER_CONFIG': folder}):
            cfg = config.load_general_config(None)
            assert cfg == config_data
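The lookup order these tests exercise (explicit path, then `$DOCKER_CONFIG/config.json`, then `~/.docker/config.json`, then the legacy `~/.dockercfg`) can be sketched as follows. `find_config_file_sketch` is a hypothetical stand-in for `config.find_config_file`, reduced to the candidate ordering:

```python
import os

def find_config_file_sketch(config_path=None):
    # Candidates in priority order; the first one that exists wins.
    candidates = []
    if config_path:
        candidates.append(config_path)
    if os.environ.get('DOCKER_CONFIG'):
        candidates.append(os.path.join(os.environ['DOCKER_CONFIG'], 'config.json'))
    home = os.path.expanduser('~')
    candidates.append(os.path.join(home, '.docker', 'config.json'))
    candidates.append(os.path.join(home, '.dockercfg'))
    for path in candidates:
        if os.path.exists(path):
            return path
    return None
```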
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.api.client import APIClient
from ansible_collections.community.docker.plugins.module_utils._api.constants import DEFAULT_DOCKER_API_VERSION
from ansible_collections.community.docker.plugins.module_utils._api.utils.decorators import update_headers


class DecoratorsTest(unittest.TestCase):
    def test_update_headers(self):
        sample_headers = {
            'X-Docker-Locale': 'en-US',
        }

        def f(self, headers=None):
            return headers

        client = APIClient(version=DEFAULT_DOCKER_API_VERSION)
        client._general_configs = {}

        g = update_headers(f)
        assert g(client, headers=None) is None
        assert g(client, headers={}) == {}
        assert g(client, headers={'Content-type': 'application/json'}) == {
            'Content-type': 'application/json',
        }

        client._general_configs = {
            'HttpHeaders': sample_headers
        }

        assert g(client, headers=None) == sample_headers
        assert g(client, headers={}) == sample_headers
        assert g(client, headers={'Content-type': 'application/json'}) == {
            'Content-type': 'application/json',
            'X-Docker-Locale': 'en-US',
        }
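The merge behavior asserted above can be sketched without the vendored client. `update_headers_sketch` and `FakeClient` are hypothetical illustrations of the decorator pattern (configured `HttpHeaders` are folded into each call's headers, with explicit headers taking precedence), not the vendored `update_headers`:

```python
import functools

def update_headers_sketch(f):
    # Merge client._general_configs['HttpHeaders'] into the call's headers.
    @functools.wraps(f)
    def inner(self, *args, **kwargs):
        extra = self._general_configs.get('HttpHeaders')
        if extra:
            if kwargs.get('headers') is None:
                kwargs['headers'] = dict(extra)
            else:
                merged = dict(extra)
                merged.update(kwargs['headers'])  # explicit headers win
                kwargs['headers'] = merged
        return f(self, *args, **kwargs)
    return inner

class FakeClient(object):
    _general_configs = {'HttpHeaders': {'X-Docker-Locale': 'en-US'}}

    @update_headers_sketch
    def request(self, headers=None):
        return headers

print(FakeClient().request(headers=None))  # {'X-Docker-Locale': 'en-US'}
```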
@@ -0,0 +1,76 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.utils.json_stream import json_splitter, stream_as_text, json_stream


class TestJsonSplitter:

    def test_json_splitter_no_object(self):
        data = '{"foo": "bar'
        assert json_splitter(data) is None

    def test_json_splitter_with_object(self):
        data = '{"foo": "bar"}\n \n{"next": "obj"}'
        assert json_splitter(data) == ({'foo': 'bar'}, '{"next": "obj"}')

    def test_json_splitter_leading_whitespace(self):
        data = '\n \r{"foo": "bar"}\n\n {"next": "obj"}'
        assert json_splitter(data) == ({'foo': 'bar'}, '{"next": "obj"}')


class TestStreamAsText:

    def test_stream_with_non_utf_unicode_character(self):
        stream = [b'\xed\xf3\xf3']
        output, = stream_as_text(stream)
        assert output == u'\ufffd\ufffd\ufffd'

    def test_stream_with_utf_character(self):
        stream = [u'ěĝ'.encode('utf-8')]
        output, = stream_as_text(stream)
        assert output == u'ěĝ'


class TestJsonStream:

    def test_with_falsy_entries(self):
        stream = [
            '{"one": "two"}\n{}\n',
            "[1, 2, 3]\n[]\n",
        ]
        output = list(json_stream(stream))
        assert output == [
            {'one': 'two'},
            {},
            [1, 2, 3],
            [],
        ]

    def test_with_leading_whitespace(self):
        stream = [
            '\n \r\n {"one": "two"}{"x": 1}',
            ' {"three": "four"}\t\t{"x": 2}'
        ]
        output = list(json_stream(stream))
        assert output == [
            {'one': 'two'},
            {'x': 1},
            {'three': 'four'},
            {'x': 2}
        ]
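The splitting behavior these tests pin down can be sketched with `json.JSONDecoder.raw_decode`, which parses one value off the front of a string and reports where it stopped. `json_splitter_sketch` is a hypothetical stand-in for the vendored `json_splitter`:

```python
import json

def json_splitter_sketch(buffer):
    # Split one leading JSON value off `buffer`; return (obj, rest) or None
    # when no complete value is available yet.
    buffer = buffer.lstrip()
    try:
        obj, index = json.JSONDecoder().raw_decode(buffer)
        return obj, buffer[index:].lstrip()
    except ValueError:
        return None

print(json_splitter_sketch('{"foo": "bar"}\n{"next": "obj"}'))
# ({'foo': 'bar'}, '{"next": "obj"}')
```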
161  tests/unit/plugins/module_utils/_api/utils/test_ports.py  Normal file
@@ -0,0 +1,161 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.utils.ports import build_port_bindings, split_port


class PortsTest(unittest.TestCase):
    def test_split_port_with_host_ip(self):
        internal_port, external_port = split_port("127.0.0.1:1000:2000")
        assert internal_port == ["2000"]
        assert external_port == [("127.0.0.1", "1000")]

    def test_split_port_with_protocol(self):
        for protocol in ['tcp', 'udp', 'sctp']:
            internal_port, external_port = split_port(
                "127.0.0.1:1000:2000/" + protocol
            )
            assert internal_port == ["2000/" + protocol]
            assert external_port == [("127.0.0.1", "1000")]

    def test_split_port_with_host_ip_no_port(self):
        internal_port, external_port = split_port("127.0.0.1::2000")
        assert internal_port == ["2000"]
        assert external_port == [("127.0.0.1", None)]

    def test_split_port_range_with_host_ip_no_port(self):
        internal_port, external_port = split_port("127.0.0.1::2000-2001")
        assert internal_port == ["2000", "2001"]
        assert external_port == [("127.0.0.1", None), ("127.0.0.1", None)]

    def test_split_port_with_host_port(self):
        internal_port, external_port = split_port("1000:2000")
        assert internal_port == ["2000"]
        assert external_port == ["1000"]

    def test_split_port_range_with_host_port(self):
        internal_port, external_port = split_port("1000-1001:2000-2001")
        assert internal_port == ["2000", "2001"]
        assert external_port == ["1000", "1001"]

    def test_split_port_random_port_range_with_host_port(self):
        internal_port, external_port = split_port("1000-1001:2000")
        assert internal_port == ["2000"]
        assert external_port == ["1000-1001"]

    def test_split_port_no_host_port(self):
        internal_port, external_port = split_port("2000")
        assert internal_port == ["2000"]
        assert external_port is None

    def test_split_port_range_no_host_port(self):
        internal_port, external_port = split_port("2000-2001")
        assert internal_port == ["2000", "2001"]
        assert external_port is None

    def test_split_port_range_with_protocol(self):
        internal_port, external_port = split_port(
            "127.0.0.1:1000-1001:2000-2001/udp")
        assert internal_port == ["2000/udp", "2001/udp"]
        assert external_port == [("127.0.0.1", "1000"), ("127.0.0.1", "1001")]

    def test_split_port_with_ipv6_address(self):
        internal_port, external_port = split_port(
            "2001:abcd:ef00::2:1000:2000")
        assert internal_port == ["2000"]
        assert external_port == [("2001:abcd:ef00::2", "1000")]

    def test_split_port_with_ipv6_square_brackets_address(self):
        internal_port, external_port = split_port(
            "[2001:abcd:ef00::2]:1000:2000")
        assert internal_port == ["2000"]
        assert external_port == [("2001:abcd:ef00::2", "1000")]

    def test_split_port_invalid(self):
        with pytest.raises(ValueError):
            split_port("0.0.0.0:1000:2000:tcp")

    def test_split_port_invalid_protocol(self):
        with pytest.raises(ValueError):
            split_port("0.0.0.0:1000:2000/ftp")

    def test_non_matching_length_port_ranges(self):
        with pytest.raises(ValueError):
            split_port("0.0.0.0:1000-1010:2000-2002/tcp")

    def test_port_and_range_invalid(self):
        with pytest.raises(ValueError):
            split_port("0.0.0.0:1000:2000-2002/tcp")

    def test_port_only_with_colon(self):
        with pytest.raises(ValueError):
            split_port(":80")

    def test_host_only_with_colon(self):
        with pytest.raises(ValueError):
            split_port("localhost:")

    def test_with_no_container_port(self):
        with pytest.raises(ValueError):
            split_port("localhost:80:")

    def test_split_port_empty_string(self):
        with pytest.raises(ValueError):
            split_port("")

    def test_split_port_non_string(self):
        assert split_port(1243) == (['1243'], None)

    def test_build_port_bindings_with_one_port(self):
        port_bindings = build_port_bindings(["127.0.0.1:1000:1000"])
        assert port_bindings["1000"] == [("127.0.0.1", "1000")]

    def test_build_port_bindings_with_matching_internal_ports(self):
        port_bindings = build_port_bindings(
            ["127.0.0.1:1000:1000", "127.0.0.1:2000:1000"])
        assert port_bindings["1000"] == [
            ("127.0.0.1", "1000"), ("127.0.0.1", "2000")
        ]

    def test_build_port_bindings_with_nonmatching_internal_ports(self):
        port_bindings = build_port_bindings(
            ["127.0.0.1:1000:1000", "127.0.0.1:2000:2000"])
        assert port_bindings["1000"] == [("127.0.0.1", "1000")]
        assert port_bindings["2000"] == [("127.0.0.1", "2000")]

    def test_build_port_bindings_with_port_range(self):
        port_bindings = build_port_bindings(["127.0.0.1:1000-1001:1000-1001"])
        assert port_bindings["1000"] == [("127.0.0.1", "1000")]
        assert port_bindings["1001"] == [("127.0.0.1", "1001")]

    def test_build_port_bindings_with_matching_internal_port_ranges(self):
        port_bindings = build_port_bindings(
            ["127.0.0.1:1000-1001:1000-1001", "127.0.0.1:2000-2001:1000-1001"])
        assert port_bindings["1000"] == [
            ("127.0.0.1", "1000"), ("127.0.0.1", "2000")
        ]
        assert port_bindings["1001"] == [
            ("127.0.0.1", "1001"), ("127.0.0.1", "2001")
        ]

    def test_build_port_bindings_with_nonmatching_internal_port_ranges(self):
        port_bindings = build_port_bindings(
            ["127.0.0.1:1000:1000", "127.0.0.1:2000:2000"])
        assert port_bindings["1000"] == [("127.0.0.1", "1000")]
        assert port_bindings["2000"] == [("127.0.0.1", "2000")]
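The simplest `split_port` cases above can be sketched with a plain colon split. `split_port_sketch` is a hypothetical simplification, not the vendored parser: it handles only `container`, `host:container`, and `ip:host:container` forms, and deliberately ignores ranges, protocols, and IPv6.

```python
def split_port_sketch(port_spec):
    # Parse 'container', 'host:container', or 'ip:host:container'.
    parts = str(port_spec).split(':')
    if len(parts) == 1:
        return [parts[0]], None
    if len(parts) == 2:
        host_port, container_port = parts
        return [container_port], [host_port]
    if len(parts) == 3:
        ip, host_port, container_port = parts
        return [container_port], [(ip, host_port or None)]
    raise ValueError('Invalid port spec: %s' % port_spec)

print(split_port_sketch("127.0.0.1:1000:2000"))  # (['2000'], [('127.0.0.1', '1000')])
```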
99  tests/unit/plugins/module_utils/_api/utils/test_proxy.py  Normal file
@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import unittest
import sys

import pytest

if sys.version_info < (2, 7):
    pytestmark = pytest.mark.skip('Python 2.6 is not supported')

from ansible_collections.community.docker.plugins.module_utils._api.utils.proxy import ProxyConfig


HTTP = 'http://test:80'
HTTPS = 'https://test:443'
FTP = 'ftp://user:password@host:23'
NO_PROXY = 'localhost,.localdomain'
CONFIG = ProxyConfig(http=HTTP, https=HTTPS, ftp=FTP, no_proxy=NO_PROXY)
ENV = {
    'http_proxy': HTTP,
    'HTTP_PROXY': HTTP,
    'https_proxy': HTTPS,
    'HTTPS_PROXY': HTTPS,
    'ftp_proxy': FTP,
    'FTP_PROXY': FTP,
    'no_proxy': NO_PROXY,
    'NO_PROXY': NO_PROXY,
}


class ProxyConfigTest(unittest.TestCase):

    def test_from_dict(self):
        config = ProxyConfig.from_dict({
            'httpProxy': HTTP,
            'httpsProxy': HTTPS,
            'ftpProxy': FTP,
            'noProxy': NO_PROXY
        })
        self.assertEqual(CONFIG.http, config.http)
        self.assertEqual(CONFIG.https, config.https)
        self.assertEqual(CONFIG.ftp, config.ftp)
        self.assertEqual(CONFIG.no_proxy, config.no_proxy)

    def test_new(self):
        config = ProxyConfig()
        self.assertIsNone(config.http)
        self.assertIsNone(config.https)
        self.assertIsNone(config.ftp)
        self.assertIsNone(config.no_proxy)

        config = ProxyConfig(http='a', https='b', ftp='c', no_proxy='d')
        self.assertEqual(config.http, 'a')
        self.assertEqual(config.https, 'b')
        self.assertEqual(config.ftp, 'c')
        self.assertEqual(config.no_proxy, 'd')

    def test_truthiness(self):
        assert not ProxyConfig()
        assert ProxyConfig(http='non-zero')
        assert ProxyConfig(https='non-zero')
        assert ProxyConfig(ftp='non-zero')
        assert ProxyConfig(no_proxy='non-zero')

    def test_environment(self):
        self.assertDictEqual(CONFIG.get_environment(), ENV)
        empty = ProxyConfig()
        self.assertDictEqual(empty.get_environment(), {})

    def test_inject_proxy_environment(self):
        # Proxy config is non null, env is None.
        self.assertSetEqual(
            set(CONFIG.inject_proxy_environment(None)),
            set('{k}={v}'.format(k=k, v=v) for k, v in ENV.items()))

        # Proxy config is null, env is None.
        self.assertIsNone(ProxyConfig().inject_proxy_environment(None))

        env = ['FOO=BAR', 'BAR=BAZ']

        # Proxy config is non null, env is non null.
        actual = CONFIG.inject_proxy_environment(env)
        expected = ['{k}={v}'.format(k=k, v=v) for k, v in ENV.items()] + env
        # It's important that the first 8 variables are the ones from the proxy
        # config, and the last 2 are the ones from the input environment.
        self.assertSetEqual(set(actual[:8]), set(expected[:8]))
        self.assertSetEqual(set(actual[-2:]), set(expected[-2:]))

        # Proxy config is null, env is non null.
        self.assertListEqual(ProxyConfig().inject_proxy_environment(env), env)
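The ordering constraint asserted above (proxy variables first, input environment last, so explicit entries can still override) can be sketched with a hypothetical stand-in. `ProxyConfigSketch` is not the vendored `ProxyConfig`; it only illustrates the merge order:

```python
class ProxyConfigSketch(object):
    # Minimal stand-in: proxy vars go first so the input env comes after them.
    def __init__(self, **proxies):
        self.proxies = dict((k, v) for k, v in proxies.items() if v)

    def get_environment(self):
        # Emit each proxy in both lowercase and uppercase variable forms.
        env = {}
        for key, value in self.proxies.items():
            env[key.lower() + '_proxy'] = value
            env[key.upper() + '_PROXY'] = value
        return env

    def inject_proxy_environment(self, environment):
        if not self.proxies:
            return environment
        proxy_env = ['%s=%s' % (k, v) for k, v in self.get_environment().items()]
        return proxy_env + (environment or [])
```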
480  tests/unit/plugins/module_utils/_api/utils/test_utils.py  Normal file
@@ -0,0 +1,480 @@
# -*- coding: utf-8 -*-
|
||||
# This code is part of the Ansible collection community.docker, but is an independent component.
|
||||
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
|
||||
#
|
||||
# Copyright (c) 2016-2022 Docker, Inc.
|
||||
#
|
||||
# It is licensed under the Apache 2.0 license (see Apache-2.0.txt in this collection)
|
||||
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
import base64
|
||||
import json
|
||||
import os
|
||||
import os.path
|
||||
import shutil
|
||||
import tempfile
|
||||
import unittest
|
||||
import sys
|
||||
|
||||
from ansible.module_utils.six import PY3
|
||||
|
||||
import pytest
|
||||
|
||||
if sys.version_info < (2, 7):
|
||||
pytestmark = pytest.mark.skip('Python 2.6 is not supported')
|
||||
|
||||
from ansible_collections.community.docker.plugins.module_utils._api.api.client import APIClient
|
||||
from ansible_collections.community.docker.plugins.module_utils._api.constants import IS_WINDOWS_PLATFORM, DEFAULT_DOCKER_API_VERSION
|
||||
from ansible_collections.community.docker.plugins.module_utils._api.errors import DockerException
|
||||
from ansible_collections.community.docker.plugins.module_utils._api.utils.utils import (
|
||||
convert_filters, convert_volume_binds,
|
||||
decode_json_header, kwargs_from_env, parse_bytes,
|
||||
parse_devices, parse_env_file, parse_host,
|
||||
parse_repository_tag, split_command, format_environment,
|
||||
)
|
||||
|
||||
|
||||
TEST_CERT_DIR = os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
'testdata/certs',
|
||||
)
|
||||
|
||||
|
||||
class KwargsFromEnvTest(unittest.TestCase):
    def setUp(self):
        self.os_environ = os.environ.copy()

    def tearDown(self):
        os.environ = self.os_environ

    def test_kwargs_from_env_empty(self):
        os.environ.update(DOCKER_HOST='',
                          DOCKER_CERT_PATH='')
        os.environ.pop('DOCKER_TLS_VERIFY', None)

        kwargs = kwargs_from_env()
        assert kwargs.get('base_url') is None
        assert kwargs.get('tls') is None

    def test_kwargs_from_env_tls(self):
        os.environ.update(DOCKER_HOST='tcp://192.168.59.103:2376',
                          DOCKER_CERT_PATH=TEST_CERT_DIR,
                          DOCKER_TLS_VERIFY='1')
        kwargs = kwargs_from_env(assert_hostname=False)
        assert 'tcp://192.168.59.103:2376' == kwargs['base_url']
        assert 'ca.pem' in kwargs['tls'].ca_cert
        assert 'cert.pem' in kwargs['tls'].cert[0]
        assert 'key.pem' in kwargs['tls'].cert[1]
        assert kwargs['tls'].assert_hostname is False
        assert kwargs['tls'].verify

        parsed_host = parse_host(kwargs['base_url'], IS_WINDOWS_PLATFORM, True)
        kwargs['version'] = DEFAULT_DOCKER_API_VERSION
        try:
            client = APIClient(**kwargs)
            assert parsed_host == client.base_url
            assert kwargs['tls'].ca_cert == client.verify
            assert kwargs['tls'].cert == client.cert
        except TypeError as e:
            self.fail(e)

    def test_kwargs_from_env_tls_verify_false(self):
        os.environ.update(DOCKER_HOST='tcp://192.168.59.103:2376',
                          DOCKER_CERT_PATH=TEST_CERT_DIR,
                          DOCKER_TLS_VERIFY='')
        kwargs = kwargs_from_env(assert_hostname=True)
        assert 'tcp://192.168.59.103:2376' == kwargs['base_url']
        assert 'ca.pem' in kwargs['tls'].ca_cert
        assert 'cert.pem' in kwargs['tls'].cert[0]
        assert 'key.pem' in kwargs['tls'].cert[1]
        assert kwargs['tls'].assert_hostname is True
        assert kwargs['tls'].verify is False
        parsed_host = parse_host(kwargs['base_url'], IS_WINDOWS_PLATFORM, True)
        kwargs['version'] = DEFAULT_DOCKER_API_VERSION
        try:
            client = APIClient(**kwargs)
            assert parsed_host == client.base_url
            assert kwargs['tls'].cert == client.cert
            assert not kwargs['tls'].verify
        except TypeError as e:
            self.fail(e)

    def test_kwargs_from_env_tls_verify_false_no_cert(self):
        temp_dir = tempfile.mkdtemp()
        cert_dir = os.path.join(temp_dir, '.docker')
        shutil.copytree(TEST_CERT_DIR, cert_dir)

        os.environ.update(DOCKER_HOST='tcp://192.168.59.103:2376',
                          HOME=temp_dir,
                          DOCKER_TLS_VERIFY='')
        os.environ.pop('DOCKER_CERT_PATH', None)
        kwargs = kwargs_from_env(assert_hostname=True)
        assert 'tcp://192.168.59.103:2376' == kwargs['base_url']

    def test_kwargs_from_env_no_cert_path(self):
        try:
            temp_dir = tempfile.mkdtemp()
            cert_dir = os.path.join(temp_dir, '.docker')
            shutil.copytree(TEST_CERT_DIR, cert_dir)

            os.environ.update(HOME=temp_dir,
                              DOCKER_CERT_PATH='',
                              DOCKER_TLS_VERIFY='1')

            kwargs = kwargs_from_env()
            assert kwargs['tls'].verify
            assert cert_dir in kwargs['tls'].ca_cert
            assert cert_dir in kwargs['tls'].cert[0]
            assert cert_dir in kwargs['tls'].cert[1]
        finally:
            if temp_dir:
                shutil.rmtree(temp_dir)

    def test_kwargs_from_env_alternate_env(self):
        # Values in os.environ are entirely ignored if an alternate is
        # provided
        os.environ.update(
            DOCKER_HOST='tcp://192.168.59.103:2376',
            DOCKER_CERT_PATH=TEST_CERT_DIR,
            DOCKER_TLS_VERIFY=''
        )
        kwargs = kwargs_from_env(environment={
            'DOCKER_HOST': 'http://docker.gensokyo.jp:2581',
        })
        assert 'http://docker.gensokyo.jp:2581' == kwargs['base_url']
        assert 'tls' not in kwargs


||||
class ConvertVolumeBindsTest(unittest.TestCase):
    def test_convert_volume_binds_empty(self):
        assert convert_volume_binds({}) == []
        assert convert_volume_binds([]) == []

    def test_convert_volume_binds_list(self):
        data = ['/a:/a:ro', '/b:/c:z']
        assert convert_volume_binds(data) == data

    def test_convert_volume_binds_complete(self):
        data = {
            '/mnt/vol1': {
                'bind': '/data',
                'mode': 'ro'
            }
        }
        assert convert_volume_binds(data) == ['/mnt/vol1:/data:ro']

    def test_convert_volume_binds_compact(self):
        data = {
            '/mnt/vol1': '/data'
        }
        assert convert_volume_binds(data) == ['/mnt/vol1:/data:rw']

    def test_convert_volume_binds_no_mode(self):
        data = {
            '/mnt/vol1': {
                'bind': '/data'
            }
        }
        assert convert_volume_binds(data) == ['/mnt/vol1:/data:rw']

    def test_convert_volume_binds_unicode_bytes_input(self):
        expected = [u'/mnt/지연:/unicode/박:rw']

        data = {
            u'/mnt/지연'.encode('utf-8'): {
                'bind': u'/unicode/박'.encode('utf-8'),
                'mode': u'rw'
            }
        }
        assert convert_volume_binds(data) == expected

    def test_convert_volume_binds_unicode_unicode_input(self):
        expected = [u'/mnt/지연:/unicode/박:rw']

        data = {
            u'/mnt/지연': {
                'bind': u'/unicode/박',
                'mode': u'rw'
            }
        }
        assert convert_volume_binds(data) == expected


class ParseEnvFileTest(unittest.TestCase):
    def generate_tempfile(self, file_content=None):
        """
        Generates a temporary file for tests with the content
        of 'file_content' and returns the filename.
        Don't forget to unlink the file with os.unlink() after.
        """
        local_tempfile = tempfile.NamedTemporaryFile(delete=False)
        local_tempfile.write(file_content.encode('UTF-8'))
        local_tempfile.close()
        return local_tempfile.name

    def test_parse_env_file_proper(self):
        env_file = self.generate_tempfile(
            file_content='USER=jdoe\nPASS=secret')
        get_parse_env_file = parse_env_file(env_file)
        assert get_parse_env_file == {'USER': 'jdoe', 'PASS': 'secret'}
        os.unlink(env_file)

    def test_parse_env_file_with_equals_character(self):
        env_file = self.generate_tempfile(
            file_content='USER=jdoe\nPASS=sec==ret')
        get_parse_env_file = parse_env_file(env_file)
        assert get_parse_env_file == {'USER': 'jdoe', 'PASS': 'sec==ret'}
        os.unlink(env_file)

    def test_parse_env_file_commented_line(self):
        env_file = self.generate_tempfile(
            file_content='USER=jdoe\n#PASS=secret')
        get_parse_env_file = parse_env_file(env_file)
        assert get_parse_env_file == {'USER': 'jdoe'}
        os.unlink(env_file)

    def test_parse_env_file_newline(self):
        env_file = self.generate_tempfile(
            file_content='\nUSER=jdoe\n\n\nPASS=secret')
        get_parse_env_file = parse_env_file(env_file)
        assert get_parse_env_file == {'USER': 'jdoe', 'PASS': 'secret'}
        os.unlink(env_file)

    def test_parse_env_file_invalid_line(self):
        env_file = self.generate_tempfile(
            file_content='USER jdoe')
        with pytest.raises(DockerException):
            parse_env_file(env_file)
        os.unlink(env_file)


class ParseHostTest(unittest.TestCase):
    def test_parse_host(self):
        invalid_hosts = [
            '0.0.0.0',
            'tcp://',
            'udp://127.0.0.1',
            'udp://127.0.0.1:2375',
            'ssh://:22/path',
            'tcp://netloc:3333/path?q=1',
            'unix:///sock/path#fragment',
            'https://netloc:3333/path;params',
            'ssh://:clearpassword@host:22',
        ]

        valid_hosts = {
            '0.0.0.1:5555': 'http://0.0.0.1:5555',
            ':6666': 'http://127.0.0.1:6666',
            'tcp://:7777': 'http://127.0.0.1:7777',
            'http://:7777': 'http://127.0.0.1:7777',
            'https://kokia.jp:2375': 'https://kokia.jp:2375',
            'unix:///var/run/docker.sock': 'http+unix:///var/run/docker.sock',
            'unix://': 'http+unix:///var/run/docker.sock',
            '12.234.45.127:2375/docker/engine': (
                'http://12.234.45.127:2375/docker/engine'
            ),
            'somehost.net:80/service/swarm': (
                'http://somehost.net:80/service/swarm'
            ),
            'npipe:////./pipe/docker_engine': 'npipe:////./pipe/docker_engine',
            '[fd12::82d1]:2375': 'http://[fd12::82d1]:2375',
            'https://[fd12:5672::12aa]:1090': 'https://[fd12:5672::12aa]:1090',
            '[fd12::82d1]:2375/docker/engine': (
                'http://[fd12::82d1]:2375/docker/engine'
            ),
            'ssh://': 'ssh://127.0.0.1:22',
            'ssh://user@localhost:22': 'ssh://user@localhost:22',
            'ssh://user@remote': 'ssh://user@remote:22',
        }

        for host in invalid_hosts:
            with pytest.raises(DockerException):
                parse_host(host, None)

        for host, expected in valid_hosts.items():
            assert parse_host(host, None) == expected

    def test_parse_host_empty_value(self):
        unix_socket = 'http+unix:///var/run/docker.sock'
        npipe = 'npipe:////./pipe/docker_engine'

        for val in [None, '']:
            assert parse_host(val, is_win32=False) == unix_socket
            assert parse_host(val, is_win32=True) == npipe

    def test_parse_host_tls(self):
        host_value = 'myhost.docker.net:3348'
        expected_result = 'https://myhost.docker.net:3348'
        assert parse_host(host_value, tls=True) == expected_result

    def test_parse_host_tls_tcp_proto(self):
        host_value = 'tcp://myhost.docker.net:3348'
        expected_result = 'https://myhost.docker.net:3348'
        assert parse_host(host_value, tls=True) == expected_result

    def test_parse_host_trailing_slash(self):
        host_value = 'tcp://myhost.docker.net:2376/'
        expected_result = 'http://myhost.docker.net:2376'
        assert parse_host(host_value) == expected_result


class ParseRepositoryTagTest(unittest.TestCase):
    sha = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'

    def test_index_image_no_tag(self):
        assert parse_repository_tag("root") == ("root", None)

    def test_index_image_tag(self):
        assert parse_repository_tag("root:tag") == ("root", "tag")

    def test_index_user_image_no_tag(self):
        assert parse_repository_tag("user/repo") == ("user/repo", None)

    def test_index_user_image_tag(self):
        assert parse_repository_tag("user/repo:tag") == ("user/repo", "tag")

    def test_private_reg_image_no_tag(self):
        assert parse_repository_tag("url:5000/repo") == ("url:5000/repo", None)

    def test_private_reg_image_tag(self):
        assert parse_repository_tag("url:5000/repo:tag") == (
            "url:5000/repo", "tag"
        )

    def test_index_image_sha(self):
        assert parse_repository_tag("root@sha256:{sha}".format(sha=self.sha)) == (
            "root", "sha256:{sha}".format(sha=self.sha)
        )

    def test_private_reg_image_sha(self):
        assert parse_repository_tag(
            "url:5000/repo@sha256:{sha}".format(sha=self.sha)
        ) == ("url:5000/repo", "sha256:{sha}".format(sha=self.sha))


class ParseDeviceTest(unittest.TestCase):
    def test_dict(self):
        devices = parse_devices([{
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/mnt1',
            'CgroupPermissions': 'r'
        }])
        assert devices[0] == {
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/mnt1',
            'CgroupPermissions': 'r'
        }

    def test_partial_string_definition(self):
        devices = parse_devices(['/dev/sda1'])
        assert devices[0] == {
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/sda1',
            'CgroupPermissions': 'rwm'
        }

    def test_permissionless_string_definition(self):
        devices = parse_devices(['/dev/sda1:/dev/mnt1'])
        assert devices[0] == {
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/mnt1',
            'CgroupPermissions': 'rwm'
        }

    def test_full_string_definition(self):
        devices = parse_devices(['/dev/sda1:/dev/mnt1:r'])
        assert devices[0] == {
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/mnt1',
            'CgroupPermissions': 'r'
        }

    def test_hybrid_list(self):
        devices = parse_devices([
            '/dev/sda1:/dev/mnt1:rw',
            {
                'PathOnHost': '/dev/sda2',
                'PathInContainer': '/dev/mnt2',
                'CgroupPermissions': 'r'
            }
        ])

        assert devices[0] == {
            'PathOnHost': '/dev/sda1',
            'PathInContainer': '/dev/mnt1',
            'CgroupPermissions': 'rw'
        }
        assert devices[1] == {
            'PathOnHost': '/dev/sda2',
            'PathInContainer': '/dev/mnt2',
            'CgroupPermissions': 'r'
        }


class ParseBytesTest(unittest.TestCase):
    def test_parse_bytes_valid(self):
        assert parse_bytes("512MB") == 536870912
        assert parse_bytes("512M") == 536870912
        assert parse_bytes("512m") == 536870912

    def test_parse_bytes_invalid(self):
        with pytest.raises(DockerException):
            parse_bytes("512MK")
        with pytest.raises(DockerException):
            parse_bytes("512L")
        with pytest.raises(DockerException):
            parse_bytes("127.0.0.1K")

    def test_parse_bytes_float(self):
        assert parse_bytes("1.5k") == 1536


class UtilsTest(unittest.TestCase):
    longMessage = True

    def test_convert_filters(self):
        tests = [
            ({'dangling': True}, '{"dangling": ["true"]}'),
            ({'dangling': "true"}, '{"dangling": ["true"]}'),
            ({'exited': 0}, '{"exited": ["0"]}'),
            ({'exited': [0, 1]}, '{"exited": ["0", "1"]}'),
        ]

        for filters, expected in tests:
            assert convert_filters(filters) == expected

    def test_decode_json_header(self):
        obj = {'a': 'b', 'c': 1}
        data = None
        if PY3:
            data = base64.urlsafe_b64encode(bytes(json.dumps(obj), 'utf-8'))
        else:
            data = base64.urlsafe_b64encode(json.dumps(obj))
        decoded_data = decode_json_header(data)
        assert obj == decoded_data


class SplitCommandTest(unittest.TestCase):
    def test_split_command_with_unicode(self):
        assert split_command(u'echo μμ') == ['echo', 'μμ']

    @pytest.mark.skipif(PY3, reason="shlex doesn't support bytes in py3")
    def test_split_command_with_bytes(self):
        assert split_command('echo μμ') == ['echo', 'μμ']


class FormatEnvironmentTest(unittest.TestCase):
    def test_format_env_binary_unicode_value(self):
        env_dict = {
            'ARTIST_NAME': b'\xec\x86\xa1\xec\xa7\x80\xec\x9d\x80'
        }
        assert format_environment(env_dict) == [u'ARTIST_NAME=송지은']

    def test_format_env_no_value(self):
        env_dict = {
            'FOO': None,
            'BAR': '',
        }
        assert sorted(format_environment(env_dict)) == ['BAR=', 'FOO']
0 tests/unit/plugins/module_utils/_api/utils/testdata/certs/ca.pem vendored Normal file
0 tests/unit/plugins/module_utils/_api/utils/testdata/certs/cert.pem vendored Normal file
0 tests/unit/plugins/module_utils/_api/utils/testdata/certs/key.pem vendored Normal file
@@ -1,2 +1,5 @@
unittest2 ; python_version < '2.7'
importlib ; python_version < '2.7'

requests
backports.ssl-match-hostname ; python_version < '3.5'