community.docker/plugins/module_utils/_api/transport/sshconn.py
Commit ae708a7333 by Felix Fontein (2022-07-31 17:09:18 +02:00)
Vendored Docker SDK for Python updates (#434)
* utils: fix IPv6 address w/ port parsing

This was using a deprecated function (`urllib.splitnport`),
ostensibly to work around issues with brackets on IPv6 addresses.

Ironically, its usage was itself broken and, in some instances, would
mangle IPv6 addresses that had a port specified.

Usage of the deprecated function has been eliminated and extra test
cases added where missing. All existing cases pass as-is. (The only
other change to the test was to improve assertion messages.)

Cherry-picked from f16c4e1147

Co-authored-by: Milas Bowman <milas.bowman@docker.com>
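
For illustration, a minimal sketch (not the vendored helper itself) of
splitting a "host:port" value without mangling bracketed IPv6 addresses,
using the standard library instead of the deprecated function:

    # Illustrative sketch only; the real helper lives in the vendored utils.
    from urllib.parse import urlsplit

    def split_host_port(addr):
        # urlsplit() understands "[::1]:2376" once a netloc marker is added.
        parts = urlsplit('//' + addr)
        return parts.hostname, parts.port

    print(split_host_port('[::1]:2376'))      # ('::1', 2376)
    print(split_host_port('127.0.0.1:2375'))  # ('127.0.0.1', 2375)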

* client: fix exception semantics in _raise_for_status

We want "The above exception was the direct cause of the following exception:" instead of "During handling of the above exception, another exception occurred:"

Cherry-picked from bb11197ee3

Co-authored-by: Maor Kleinberger <kmaork@gmail.com>
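
For reference, a self-contained sketch (hypothetical exception classes) of
the difference between the two traceback messages:

    class HTTPError(Exception):
        pass

    class APIError(Exception):
        pass

    try:
        raise HTTPError('500 Server Error')
    except HTTPError as e:
        # `raise ... from e` marks the original error as the direct cause:
        # "The above exception was the direct cause of the following exception:"
        # A bare `raise APIError(...)` inside this block would instead print
        # "During handling of the above exception, another exception occurred:"
        raise APIError('server error') from e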

* tls: use auto-negotiated highest version

Specific TLS versions are deprecated in the latest Python releases,
which causes test failures because deprecation warnings are treated
as errors.

Luckily, the fix here is straightforward: we can eliminate some
custom version selection logic by using `PROTOCOL_TLS_CLIENT`,
which is the recommended method and will select the highest TLS
version supported by both client and server.

Cherry-picked from 56dd6de7df

Co-authored-by: Milas Bowman <milas.bowman@docker.com>
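
For reference, a minimal sketch of the approach (standard library only, not
the vendored code itself):

    import ssl

    # PROTOCOL_TLS_CLIENT negotiates the highest TLS version both sides
    # support, and enables certificate and hostname verification by default.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

    # What it replaces: pinning a specific version such as
    # ssl.SSLContext(ssl.PROTOCOL_TLSv1_2), which is deprecated and emits a
    # DeprecationWarning on recent Python versions.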

* transport: fix ProxyCommand for SSH conn

Cherry-picked from 4e19cc48df

Co-authored-by: Guy Lichtman <glicht@users.noreply.github.com>

* ssh: do not create unnecessary subshell on exec

Cherry-picked from bb40ba051f

Co-authored-by: liubo <liubo@uniontech.com>

* ssh: reject unknown host keys when using Python SSH impl

In the Secure Shell (SSH) protocol, host keys are used to verify the identity of remote hosts. Accepting unknown host keys may leave the connection open to man-in-the-middle attacks.

Do not accept unknown host keys. In particular, do not set the default missing host key policy for the Paramiko library to either AutoAddPolicy or WarningPolicy. Both of these policies continue even when the host key is unknown. The default setting of RejectPolicy is secure because it raises an exception when it encounters an unknown host key.

Reference: https://cwe.mitre.org/data/definitions/295.html

NOTE: This only affects SSH connections using the native Python SSH implementation (Paramiko), when `use_ssh_client=False` (default). If using the system SSH client (`use_ssh_client=True`), the host configuration
(e.g. `~/.ssh/config`) will apply.

Cherry-picked from d9298647d9

Co-authored-by: Audun Nes <audun.nes@gmail.com>
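
A condensed sketch of the resulting Paramiko setup (the host name is
hypothetical):

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()  # trust keys already in known_hosts
    # Unknown host keys now make paramiko raise an SSHException instead of
    # being silently accepted (AutoAddPolicy) or merely logged (WarningPolicy).
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    # client.connect('docker-host.example.com', username='root')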

* lint: fix deprecation warnings from threading package

Set the `daemon` attribute instead of calling the `setDaemon` method,
which was deprecated in Python 3.10.

Cherry-picked from adf5a97b12

Co-authored-by: Karthikeyan Singaravelan <tir.karthi@gmail.com>
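
For reference, a minimal before/after sketch:

    import threading

    t = threading.Thread(target=print, args=('worker',))
    t.daemon = True        # preferred
    # t.setDaemon(True)    # deprecated since Python 3.10
    t.start()
    t.join()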

* api: preserve cause when re-raising error

Use `from e` to ensure that the error context is propagated
correctly.

Cherry-picked from 05e143429e

Co-authored-by: Milas Bowman <milas.bowman@docker.com>

* build: trim trailing whitespace from dockerignore entries

Cherry-picked from 3ee3a2486f

Co-authored-by: Clément Loiselet <clement.loiselet@capgemini.com>

* Improve formulation; also mention the security change as a breaking change.

Co-authored-by: Milas Bowman <milas.bowman@docker.com>
Co-authored-by: Maor Kleinberger <kmaork@gmail.com>
Co-authored-by: Guy Lichtman <glicht@users.noreply.github.com>
Co-authored-by: liubo <liubo@uniontech.com>
Co-authored-by: Audun Nes <audun.nes@gmail.com>
Co-authored-by: Karthikeyan Singaravelan <tir.karthi@gmail.com>
Co-authored-by: Clément Loiselet <clement.loiselet@capgemini.com>


# -*- coding: utf-8 -*-
# This code is part of the Ansible collection community.docker, but is an independent component.
# This particular file, and this file only, is based on the Docker SDK for Python (https://github.com/docker/docker-py/)
#
# Copyright (c) 2016-2022 Docker, Inc.
#
# It is licensed under the Apache 2.0 license (see LICENSES/Apache-2.0.txt in this collection)
# SPDX-License-Identifier: Apache-2.0
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import logging
import os
import signal
import socket
import subprocess
import traceback

from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves.queue import Empty
from ansible.module_utils.six.moves.urllib_parse import urlparse

from .basehttpadapter import BaseHTTPAdapter
from .. import constants

if PY3:
    import http.client as httplib
else:
    import httplib

from .._import_helper import HTTPAdapter, urllib3

PARAMIKO_IMPORT_ERROR = None
try:
    import paramiko
except ImportError:
    PARAMIKO_IMPORT_ERROR = traceback.format_exc()

RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
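

# SSHSocket shells out to the local `ssh` binary and tunnels the Docker API
# over the stdin/stdout of `docker system dial-stdio` running on the remote
# host; it is used when the adapter is created with shell_out=True.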
class SSHSocket(socket.socket):
    def __init__(self, host):
        super(SSHSocket, self).__init__(
            socket.AF_INET, socket.SOCK_STREAM)
        self.host = host
        self.port = None
        self.user = None
        if ':' in self.host:
            self.host, self.port = self.host.split(':')
        if '@' in self.host:
            self.user, self.host = self.host.split('@')

        self.proc = None

    def connect(self, **kwargs):
        args = ['ssh']
        if self.user:
            args = args + ['-l', self.user]

        if self.port:
            args = args + ['-p', self.port]

        args = args + ['--', self.host, 'docker system dial-stdio']

        preexec_func = None
        if not constants.IS_WINDOWS_PLATFORM:
            def f():
                signal.signal(signal.SIGINT, signal.SIG_IGN)
            preexec_func = f

        env = dict(os.environ)

        # drop LD_LIBRARY_PATH and SSL_CERT_FILE
        env.pop('LD_LIBRARY_PATH', None)
        env.pop('SSL_CERT_FILE', None)

        self.proc = subprocess.Popen(
            args,
            env=env,
            stdout=subprocess.PIPE,
            stdin=subprocess.PIPE,
            preexec_fn=preexec_func)
    def _write(self, data):
        if not self.proc or self.proc.stdin.closed:
            raise Exception('SSH subprocess not initiated. '
                            'connect() must be called first.')
        written = self.proc.stdin.write(data)
        self.proc.stdin.flush()
        return written

    def sendall(self, data):
        self._write(data)

    def send(self, data):
        return self._write(data)

    def recv(self, n):
        if not self.proc:
            raise Exception('SSH subprocess not initiated. '
                            'connect() must be called first.')
        return self.proc.stdout.read(n)
    def makefile(self, mode):
        if not self.proc:
            self.connect()
        if PY3:
            self.proc.stdout.channel = self

        return self.proc.stdout

    def close(self):
        if not self.proc or self.proc.stdin.closed:
            return
        self.proc.stdin.write(b'\n\n')
        self.proc.stdin.flush()
        self.proc.terminate()


class SSHConnection(httplib.HTTPConnection, object):
    def __init__(self, ssh_transport=None, timeout=60, host=None):
        super(SSHConnection, self).__init__(
            'localhost', timeout=timeout
        )
        self.ssh_transport = ssh_transport
        self.timeout = timeout
        self.ssh_host = host

    def connect(self):
        if self.ssh_transport:
            sock = self.ssh_transport.open_session()
            sock.settimeout(self.timeout)
            sock.exec_command('docker system dial-stdio')
        else:
            sock = SSHSocket(self.ssh_host)
            sock.settimeout(self.timeout)
            sock.connect()

        self.sock = sock


class SSHConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
    scheme = 'ssh'

    def __init__(self, ssh_client=None, timeout=60, maxsize=10, host=None):
        super(SSHConnectionPool, self).__init__(
            'localhost', timeout=timeout, maxsize=maxsize
        )
        self.ssh_transport = None
        self.timeout = timeout
        if ssh_client:
            self.ssh_transport = ssh_client.get_transport()
        self.ssh_host = host

    def _new_conn(self):
        return SSHConnection(self.ssh_transport, self.timeout, self.ssh_host)

    # When re-using connections, urllib3 calls fileno() on our
    # SSH channel instance, quickly overloading our fd limit. To avoid this,
    # we override _get_conn
    def _get_conn(self, timeout):
        conn = None
        try:
            conn = self.pool.get(block=self.block, timeout=timeout)
        except AttributeError:  # self.pool is None
            raise urllib3.exceptions.ClosedPoolError(self, "Pool is closed.")
        except Empty:
            if self.block:
                raise urllib3.exceptions.EmptyPoolError(
                    self,
                    "Pool reached maximum size and no more "
                    "connections are allowed."
                )
            pass  # Oh well, we'll create a new connection then

        return conn or self._new_conn()


class SSHHTTPAdapter(BaseHTTPAdapter):

    __attrs__ = HTTPAdapter.__attrs__ + [
        'pools', 'timeout', 'ssh_client', 'ssh_params', 'max_pool_size'
    ]

    def __init__(self, base_url, timeout=60,
                 pool_connections=constants.DEFAULT_NUM_POOLS,
                 max_pool_size=constants.DEFAULT_MAX_POOL_SIZE,
                 shell_out=False):
        self.ssh_client = None
        if not shell_out:
            self._create_paramiko_client(base_url)
            self._connect()

        self.ssh_host = base_url
        if base_url.startswith('ssh://'):
            self.ssh_host = base_url[len('ssh://'):]

        self.timeout = timeout
        self.max_pool_size = max_pool_size
        self.pools = RecentlyUsedContainer(
            pool_connections, dispose_func=lambda p: p.close()
        )
        super(SSHHTTPAdapter, self).__init__()

    def _create_paramiko_client(self, base_url):
        logging.getLogger("paramiko").setLevel(logging.WARNING)
        self.ssh_client = paramiko.SSHClient()
        base_url = urlparse(base_url)
        self.ssh_params = {
            "hostname": base_url.hostname,
            "port": base_url.port,
            "username": base_url.username,
        }
        ssh_config_file = os.path.expanduser("~/.ssh/config")
        if os.path.exists(ssh_config_file):
            conf = paramiko.SSHConfig()
            with open(ssh_config_file) as f:
                conf.parse(f)
            host_config = conf.lookup(base_url.hostname)
            if 'proxycommand' in host_config:
                self.ssh_params["sock"] = paramiko.ProxyCommand(
                    host_config['proxycommand']
                )
            if 'hostname' in host_config:
                self.ssh_params['hostname'] = host_config['hostname']
            if base_url.port is None and 'port' in host_config:
                self.ssh_params['port'] = host_config['port']
            if base_url.username is None and 'user' in host_config:
                self.ssh_params['username'] = host_config['user']
            if 'identityfile' in host_config:
                self.ssh_params['key_filename'] = host_config['identityfile']
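
        # Reject unknown host keys: only keys already present in the
        # known_hosts files loaded below are trusted; for anything else
        # paramiko raises an SSHException instead of silently accepting
        # the key (see CWE-295).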
        self.ssh_client.load_system_host_keys()
        self.ssh_client.set_missing_host_key_policy(paramiko.RejectPolicy())

    def _connect(self):
        if self.ssh_client:
            self.ssh_client.connect(**self.ssh_params)

    def get_connection(self, url, proxies=None):
        if not self.ssh_client:
            return SSHConnectionPool(
                ssh_client=self.ssh_client,
                timeout=self.timeout,
                maxsize=self.max_pool_size,
                host=self.ssh_host
            )
        with self.pools.lock:
            pool = self.pools.get(url)
            if pool:
                return pool

            # Connection is closed, try a reconnect
            if self.ssh_client and not self.ssh_client.get_transport():
                self._connect()

            pool = SSHConnectionPool(
                ssh_client=self.ssh_client,
                timeout=self.timeout,
                maxsize=self.max_pool_size,
                host=self.ssh_host
            )
            self.pools[url] = pool

        return pool

    def close(self):
        super(SSHHTTPAdapter, self).close()
        if self.ssh_client:
            self.ssh_client.close()