Overhaul ansible-test container management.

This brings ansible-test closer to being able to support split controller/remote testing.
pull/74265/head
Matt Clay 4 years ago
parent 9f856a4964
commit b752d07163

@ -0,0 +1,40 @@
major_changes:
- ansible-test - SSH port forwarding and redirection is now used exclusively to make container ports available on non-container hosts.
When testing on POSIX systems this requires SSH login as root.
Previously SSH port forwarding was combined with firewall rules or other port redirection methods, with some platforms being unsupported.
- ansible-test - All "cloud" plugins which use containers can now be used with all POSIX and Windows hosts.
Previously the plugins did not work with Windows at all, and support for hosts created with the ``--remote`` option was inconsistent.
- ansible-test - Most container features are now supported under Podman.
Previously a symbolic link for ``docker`` pointing to ``podman`` was required.
minor_changes:
- ansible-test - All "cloud" plugins have been refactored for more consistency.
For those that use docker containers, management of the containers has been standardized.
- ansible-test - All "cloud" plugins now use fixed hostnames and ports in tests.
Previously some tests used IP addresses and/or randomly assigned ports.
- ansible-test - The HTTP Tester has been converted to a "cloud" plugin and can now be requested using the ``cloud/httptester`` alias.
The original ``needs/httptester`` alias is still supported for backwards compatibility.
- ansible-test - The HTTP Tester can now be used without the ``--docker`` or ``--remote`` options.
It still requires use of the ``docker`` command to run the container.
- ansible-test - The ``docker run`` option ``--link`` is no longer used to connect test containers.
As a result, changes are made to the ``/etc/hosts`` file as needed on all test containers.
Previously containers which were used with the ``--link`` option did not require changes to the ``/etc/hosts`` file.
- ansible-test - Changes to the ``hosts`` file on test systems are now made using an Ansible playbook for both POSIX and Windows systems.
Changes are applied before a test target runs and are reverted after the test target finishes.
- ansible-test - Environment variables exposed by "cloud" plugins are now available to the controller for role based tests.
Previously only script based tests had access to the exposed environment variables.
breaking_changes:
- ansible-test - The ``--httptester`` option is no longer available.
To override the container used for HTTP Tester tests, set the ``ANSIBLE_HTTP_TEST_CONTAINER`` environment variable instead; a sketch follows this changelog fragment.
- ansible-test - The ``--disable-httptester`` option is no longer available.
The HTTP Tester is no longer optional for tests that specify it.
- ansible-test - The HTTP Tester is no longer available with the ``ansible-test shell`` command.
Only the ``integration`` and ``windows-integration`` commands provide the HTTP Tester.
bugfixes:
- ansible-test - Running tests in a single test run with multiple "cloud" plugins no longer results in port conflicts.
Previously two or more containers with overlapping ports could not be used in the same test run.
- ansible-test - Random port selection is no longer handled by ``ansible-test``, avoiding possible port conflicts.
Previously ``ansible-test`` would, under some circumstances, use one host's available ports to determine those of another host.
- ansible-test - The ``docker inspect`` command is now used to check for existing images instead of the ``docker images`` command.
This resolves an issue where a ``docker pull`` would be unnecessarily executed for an image referenced by checksum.
- ansible-test - Failure to download test results from a remote host no longer hides test failures.
If a download failure occurs after tests fail, a warning will be issued instead.
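
For callers migrating off the removed ``--httptester``/``--disable-httptester`` options, a minimal sketch of the replacement workflow (hypothetical wrapper code; the image tag is the old option default and the target name is illustrative):

    import os
    import subprocess

    # The --httptester IMAGE option is gone; the override now travels through
    # the environment (assumption: ansible-test is invoked as a subprocess).
    env = dict(os.environ)
    env['ANSIBLE_HTTP_TEST_CONTAINER'] = 'quay.io/ansible/http-test-container:1.3.0'
    subprocess.check_call(['ansible-test', 'integration', 'shippable/posix/group1'], env=env)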

@ -0,0 +1,2 @@
cloud/acme
shippable/generic/group1

@ -0,0 +1,7 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
with_items:
- http://{{ acme_host }}:5000/
- https://{{ acme_host }}:14000/dir

@ -0,0 +1,2 @@
cloud/cs
shippable/generic/group1

@ -0,0 +1,8 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
register: this
failed_when: "this.status != 401" # authentication is required, but not provided (requests must be signed)
with_items:
- "{{ ansible_env.CLOUDSTACK_ENDPOINT }}"

@ -0,0 +1,2 @@
cloud/foreman
shippable/generic/group1

@ -0,0 +1,6 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
with_items:
- http://{{ ansible_env.FOREMAN_HOST }}:{{ ansible_env.FOREMAN_PORT }}/ping

@ -0,0 +1,3 @@
shippable/galaxy/group1
shippable/galaxy/smoketest
cloud/galaxy

@ -0,0 +1,25 @@
# The pulp container has a long start up time.
# The first task to interact with pulp needs to wait until it responds appropriately.
- name: Wait for Pulp API
uri:
url: '{{ pulp_api }}/pulp/api/v3/distributions/ansible/ansible/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: this
until: this is successful
delay: 1
retries: 60
- name: Verify Galaxy NG server
uri:
url: "{{ galaxy_ng_server }}"
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
- name: Verify Pulp server
uri:
url: "{{ pulp_server }}"
status_code:
- 404 # endpoint responds without authentication

@ -0,0 +1,3 @@
cloud/httptester
windows
shippable/windows/group1

@ -0,0 +1,15 @@
- name: Verify HTTPTESTER environment variable
assert:
that:
- "lookup('env', 'HTTPTESTER') == '1'"
- name: Verify endpoints respond
ansible.windows.win_uri:
url: "{{ item }}"
validate_certs: no
with_items:
- http://ansible.http.tests/
- https://ansible.http.tests/
- https://sni1.ansible.http.tests/
- https://fail.ansible.http.tests/
- https://self-signed.ansible.http.tests/

@ -0,0 +1,2 @@
needs/httptester # using legacy alias for testing purposes
shippable/posix/group1

@ -0,0 +1,15 @@
- name: Verify HTTPTESTER environment variable
assert:
that:
- "lookup('env', 'HTTPTESTER') == '1'"
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
with_items:
- http://ansible.http.tests/
- https://ansible.http.tests/
- https://sni1.ansible.http.tests/
- https://fail.ansible.http.tests/
- https://self-signed.ansible.http.tests/

@ -0,0 +1,2 @@
cloud/nios
shippable/generic/group1

@ -0,0 +1,10 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
url_username: "{{ nios_provider.username }}"
url_password: "{{ nios_provider.password }}"
validate_certs: no
register: this
failed_when: "this.status != 404" # authentication succeeded, but the requested path was not found
with_items:
- https://{{ nios_provider.host }}/

@ -0,0 +1,3 @@
cloud/openshift
shippable/generic/group1
disabled # disabled due to requirements conflict: botocore 1.20.6 has requirement urllib3<1.27,>=1.25.4, but you have urllib3 1.24.3.

@ -0,0 +1,6 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
with_items:
- https://openshift-origin:8443/

@ -0,0 +1,2 @@
cloud/vcenter
shippable/generic/group1

@ -0,0 +1,6 @@
- name: Verify endpoints respond
uri:
url: "{{ item }}"
validate_certs: no
with_items:
- http://{{ vcenter_hostname }}:5000/ # control endpoint for the simulator

@ -3,4 +3,4 @@ freebsd/12.2 python=3.7,2.7,3.8 python_dir=/usr/local/bin
 macos/11.1 python=3.9 python_dir=/usr/local/bin
 rhel/7.9 python=2.7
 rhel/8.3 python=3.6,3.8
-aix/7.2 python=2.7 httptester=disabled temp-unicode=disabled pip-check=disabled
+aix/7.2 python=2.7 temp-unicode=disabled pip-check=disabled

@ -0,0 +1,8 @@
- hosts: all
gather_facts: no
tasks:
- name: Add container hostname(s) to hosts file
blockinfile:
path: /etc/hosts
block: "{{ '\n'.join(hosts_entries) }}"
unsafe_writes: yes

@ -0,0 +1,9 @@
- hosts: all
gather_facts: no
tasks:
- name: Remove container hostname(s) from hosts file
blockinfile:
path: /etc/hosts
block: "{{ '\n'.join(hosts_entries) }}"
unsafe_writes: yes
state: absent

@ -0,0 +1,34 @@
<#
.SYNOPSIS
Add one or more hosts entries to the Windows hosts file.
.PARAMETER Hosts
A list of hosts entries, delimited by '|'.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$true, Position=0)][String]$Hosts
)
$ProgressPreference = "SilentlyContinue"
$ErrorActionPreference = "Stop"
Write-Verbose -Message "Adding host file entries"
$hosts_entries = $Hosts.Split('|')
$hosts_file = "$env:SystemRoot\System32\drivers\etc\hosts"
$hosts_file_lines = [System.IO.File]::ReadAllLines($hosts_file)
$changed = $false
foreach ($entry in $hosts_entries) {
if ($entry -notin $hosts_file_lines) {
$hosts_file_lines += $entry
$changed = $true
}
}
if ($changed) {
Write-Verbose -Message "Host file is missing entries, adding missing entries"
[System.IO.File]::WriteAllLines($hosts_file, $hosts_file_lines)
}

@ -0,0 +1,6 @@
- hosts: all
gather_facts: no
tasks:
- name: Add container hostname(s) to hosts file
script:
cmd: "\"{{ playbook_dir }}/windows_hosts_prepare.ps1\" -Hosts \"{{ '|'.join(hosts_entries) }}\""

@ -0,0 +1,37 @@
<#
.SYNOPSIS
Remove one or more hosts entries from the Windows hosts file.
.PARAMETER Hosts
A list of hosts entries, delimited by '|'.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$true, Position=0)][String]$Hosts
)
$ProgressPreference = "SilentlyContinue"
$ErrorActionPreference = "Stop"
Write-Verbose -Message "Removing host file entries"
$hosts_entries = $Hosts.Split('|')
$hosts_file = "$env:SystemRoot\System32\drivers\etc\hosts"
$hosts_file_lines = [System.IO.File]::ReadAllLines($hosts_file)
$changed = $false
$new_lines = [System.Collections.ArrayList]@()
foreach ($host_line in $hosts_file_lines) {
if ($host_line -in $hosts_entries) {
$changed = $true
} else {
$new_lines += $host_line
}
}
if ($changed) {
Write-Verbose -Message "Host file has extra entries, removing extra entries"
[System.IO.File]::WriteAllLines($hosts_file, $new_lines)
}

@ -0,0 +1,6 @@
- hosts: all
gather_facts: no
tasks:
- name: Remove container hostname(s) from hosts file
script:
cmd: "\"{{ playbook_dir }}/windows_hosts_restore.ps1\" -Hosts \"{{ '|'.join(hosts_entries) }}\""

@ -43,5 +43,7 @@ good-names=
 k,
 Run,
-method-rgx=[a-z_][a-z0-9_]{2,40}$
-function-rgx=[a-z_][a-z0-9_]{2,40}$
+class-attribute-rgx=[A-Za-z_][A-Za-z0-9_]{1,40}$
+attr-rgx=[a-z_][a-z0-9_]{1,40}$
+method-rgx=[a-z_][a-z0-9_]{1,40}$
+function-rgx=[a-z_][a-z0-9_]{1,40}$

@ -1,229 +0,0 @@
<#
.SYNOPSIS
Designed to set a Windows host to connect to the httptester container running
on the Ansible host. This will setup the Windows host file and forward the
local ports to use this connection. This will continue to run in the background
until the script is deleted.
Run this with SSH with the -R arguments to forward ports 8080, 8443 and 8444 to the
httptester container.
.PARAMETER Hosts
A list of hostnames, delimited by '|', to add to the Windows hosts file for the
httptester container, e.g. 'ansible.host.com|secondary.host.test'.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory=$true, Position=0)][String]$Hosts
)
$Hosts = $Hosts.Split('|')
$ProgressPreference = "SilentlyContinue"
$ErrorActionPreference = "Stop"
$os_version = [Version](Get-Item -Path "$env:SystemRoot\System32\kernel32.dll").VersionInfo.ProductVersion
Write-Verbose -Message "Configuring HTTP Tester on Windows $os_version for '$($Hosts -join "', '")'"
Function Get-PmapperRuleBytes {
<#
.SYNOPSIS
Create the byte values that configures a rule in the PMapper configuration
file. This isn't really documented but because PMapper is only used for
Server 2008 R2 we will stick to 1 version and just live with the legacy
work for now.
.PARAMETER ListenPort
The port to listen on localhost, this will be forwarded to the host defined
by ConnectAddress and ConnectPort.
.PARAMETER ConnectAddress
The hostname or IP to map the traffic to.
.PARAMETER ConnectPort
This port of ConnectAddress to map the traffic to.
#>
param(
[Parameter(Mandatory=$true)][UInt16]$ListenPort,
[Parameter(Mandatory=$true)][String]$ConnectAddress,
[Parameter(Mandatory=$true)][Int]$ConnectPort
)
$connect_field = "$($ConnectAddress):$ConnectPort"
$connect_bytes = [System.Text.Encoding]::ASCII.GetBytes($connect_field)
$data_length = [byte]($connect_bytes.Length + 6) # size of payload minus header, length, and footer
$port_bytes = [System.BitConverter]::GetBytes($ListenPort)
$payload = [System.Collections.Generic.List`1[Byte]]@()
$payload.Add([byte]16) > $null # header is \x10, means Configure Mapping rule
$payload.Add($data_length) > $null
$payload.AddRange($connect_bytes)
$payload.AddRange($port_bytes)
$payload.AddRange([byte[]]@(0, 0)) # 2 extra bytes of padding
$payload.Add([byte]0) > $null # 0 is TCP, 1 is UDP
$payload.Add([byte]0) > $null # 0 is Any, 1 is Internet
$payload.Add([byte]31) > $null # footer is \x1f, means end of Configure Mapping rule
return ,$payload.ToArray()
}
Write-Verbose -Message "Adding host file entries"
$hosts_file = "$env:SystemRoot\System32\drivers\etc\hosts"
$hosts_file_lines = [System.IO.File]::ReadAllLines($hosts_file)
$changed = $false
foreach ($httptester_host in $Hosts) {
$host_line = "127.0.0.1 $httptester_host # ansible-test httptester"
if ($host_line -notin $hosts_file_lines) {
$hosts_file_lines += $host_line
$changed = $true
}
}
if ($changed) {
Write-Verbose -Message "Host file is missing entries, adding missing entries"
[System.IO.File]::WriteAllLines($hosts_file, $hosts_file_lines)
}
# forward ports
$forwarded_ports = @{
80 = 8080
443 = 8443
444 = 8444
}
if ($os_version -ge [Version]"6.2") {
Write-Verbose -Message "Using netsh to configure forwarded ports"
foreach ($forwarded_port in $forwarded_ports.GetEnumerator()) {
$port_set = netsh interface portproxy show v4tov4 | `
Where-Object { $_ -match "127.0.0.1\s*$($forwarded_port.Key)\s*127.0.0.1\s*$($forwarded_port.Value)" }
if (-not $port_set) {
Write-Verbose -Message "Adding netsh portproxy rule for $($forwarded_port.Key) -> $($forwarded_port.Value)"
$add_args = @(
"interface",
"portproxy",
"add",
"v4tov4",
"listenaddress=127.0.0.1",
"listenport=$($forwarded_port.Key)",
"connectaddress=127.0.0.1",
"connectport=$($forwarded_port.Value)"
)
$null = netsh $add_args 2>&1
}
}
} else {
Write-Verbose -Message "Using Port Mapper to configure forwarded ports"
# netsh interface portproxy doesn't work on local addresses in older
# versions of Windows. Use custom application Port Mapper to achieve the
# same outcome
# http://www.analogx.com/contents/download/Network/pmapper/Freeware.htm
$s3_url = "https://ansible-ci-files.s3.amazonaws.com/ansible-test/pmapper-1.04.exe"
# download the Port Mapper executable to a temporary directory
$pmapper_folder = Join-Path -Path ([System.IO.Path]::GetTempPath()) -ChildPath ([System.IO.Path]::GetRandomFileName())
$pmapper_exe = Join-Path -Path $pmapper_folder -ChildPath pmapper.exe
$pmapper_config = Join-Path -Path $pmapper_folder -ChildPath pmapper.dat
New-Item -Path $pmapper_folder -ItemType Directory > $null
$stop = $false
do {
try {
Write-Verbose -Message "Attempting download of '$s3_url'"
(New-Object -TypeName System.Net.WebClient).DownloadFile($s3_url, $pmapper_exe)
$stop = $true
} catch { Start-Sleep -Second 5 }
} until ($stop)
# create the Port Mapper rule file that contains our forwarded ports
$fs = [System.IO.File]::Create($pmapper_config)
try {
foreach ($forwarded_port in $forwarded_ports.GetEnumerator()) {
Write-Verbose -Message "Creating forwarded port rule for $($forwarded_port.Key) -> $($forwarded_port.Value)"
$pmapper_rule = Get-PmapperRuleBytes -ListenPort $forwarded_port.Key -ConnectAddress 127.0.0.1 -ConnectPort $forwarded_port.Value
$fs.Write($pmapper_rule, 0, $pmapper_rule.Length)
}
} finally {
$fs.Close()
}
Write-Verbose -Message "Starting Port Mapper '$pmapper_exe' in the background"
$start_args = @{
CommandLine = $pmapper_exe
CurrentDirectory = $pmapper_folder
}
$res = Invoke-CimMethod -ClassName Win32_Process -MethodName Create -Arguments $start_args
if ($res.ReturnValue -ne 0) {
$error_msg = switch($res.ReturnValue) {
2 { "Access denied" }
3 { "Insufficient privilege" }
8 { "Unknown failure" }
9 { "Path not found" }
21 { "Invalid parameter" }
default { "Undefined Error: $($res.ReturnValue)" }
}
Write-Error -Message "Failed to start pmapper: $error_msg"
}
$pmapper_pid = $res.ProcessId
Write-Verbose -Message "Port Mapper PID: $pmapper_pid"
}
Write-Verbose -Message "Wait for current script at '$PSCommandPath' to be deleted before running cleanup"
$fsw = New-Object -TypeName System.IO.FileSystemWatcher
$fsw.Path = Split-Path -Path $PSCommandPath -Parent
$fsw.Filter = Split-Path -Path $PSCommandPath -Leaf
$fsw.WaitForChanged([System.IO.WatcherChangeTypes]::Deleted, 3600000) > $null
Write-Verbose -Message "Script delete or timeout reached, cleaning up Windows httptester artifacts"
Write-Verbose -Message "Cleanup host file entries"
$hosts_file_lines = [System.IO.File]::ReadAllLines($hosts_file)
$new_lines = [System.Collections.ArrayList]@()
$changed = $false
foreach ($host_line in $hosts_file_lines) {
if ($host_line.EndsWith("# ansible-test httptester")) {
$changed = $true
continue
}
$new_lines.Add($host_line) > $null
}
if ($changed) {
Write-Verbose -Message "Host file has extra entries, removing extra entries"
[System.IO.File]::WriteAllLines($hosts_file, $new_lines)
}
if ($os_version -ge [Version]"6.2") {
Write-Verbose -Message "Cleanup of forwarded port configured in netsh"
foreach ($forwarded_port in $forwarded_ports.GetEnumerator()) {
$port_set = netsh interface portproxy show v4tov4 | `
Where-Object { $_ -match "127.0.0.1\s*$($forwarded_port.Key)\s*127.0.0.1\s*$($forwarded_port.Value)" }
if ($port_set) {
Write-Verbose -Message "Removing netsh portproxy rule for $($forwarded_port.Key) -> $($forwarded_port.Value)"
$delete_args = @(
"interface",
"portproxy",
"delete",
"v4tov4",
"listenaddress=127.0.0.1",
"listenport=$($forwarded_port.Key)"
)
$null = netsh $delete_args 2>&1
}
}
} else {
Write-Verbose -Message "Stopping Port Mapper executable based on pid $pmapper_pid"
Stop-Process -Id $pmapper_pid -Force
# the process may not stop straight away, try multiple times to delete the Port Mapper folder
$attempts = 1
do {
try {
Write-Verbose -Message "Cleanup temporary files for Port Mapper at '$pmapper_folder' - Attempt: $attempts"
Remove-Item -Path $pmapper_folder -Force -Recurse
break
} catch {
Write-Verbose -Message "Cleanup temporary files for Port Mapper failed, waiting 5 seconds before trying again:$($_ | Out-String)"
if ($attempts -ge 5) {
break
}
$attempts += 1
Start-Sleep -Second 5
}
} until ($true)
}

@ -31,6 +31,7 @@ from .util_common import (
 create_temp_dir,
 run_command,
 ResultType,
+intercept_command,
 )
 from .config import (
@ -295,3 +296,15 @@ def get_collection_detail(args, python): # type: (EnvironmentConfig, str) -> Co
 detail.version = str(version) if version is not None else None
 return detail
+def run_playbook(args, inventory_path, playbook, run_playbook_vars):  # type: (CommonConfig, str, str, t.Dict[str, t.Any]) -> None
+    """Run the specified playbook using the given inventory file and playbook variables."""
+    playbook_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'playbooks', playbook)
+    command = ['ansible-playbook', '-i', inventory_path, playbook_path, '-e', json.dumps(run_playbook_vars)]
+    if args.verbosity:
+        command.append('-%s' % ('v' * args.verbosity))
+    env = ansible_environment(args)
+    intercept_command(args, command, '', env, disable_coverage=True)
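
As a usage illustration for ``run_playbook`` (a sketch, not code from this commit; the playbook filename and hosts entry are assumptions), the hosts-file playbooks shown earlier would be driven through this helper with the entries passed as JSON extra vars:

    # Hypothetical call site: apply the POSIX hosts-file prepare playbook,
    # passing the /etc/hosts lines the support containers need.
    run_playbook(args, inventory_path, 'posix_hosts_prepare.yml', dict(
        hosts_entries=['172.17.0.2 acme-simulator'],  # illustrative entry only
    ))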

@ -3,7 +3,6 @@ from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type
 import os
-import re
 import tempfile
 import uuid

@ -176,7 +176,7 @@ def main():
 delegate_args = None
 except Delegate as ex:
     # save delegation args for use once we exit the exception handler
-    delegate_args = (ex.exclude, ex.require, ex.integration_targets)
+    delegate_args = (ex.exclude, ex.require)
 if delegate_args:
     # noinspection PyTypeChecker
@ -324,7 +324,7 @@ def parse_args():
 help='base branch used for change detection')
 add_changes(test, argparse)
-add_environments(test)
+add_environments(test, argparse)
 integration = argparse.ArgumentParser(add_help=False, parents=[test])
@ -423,7 +423,6 @@ def parse_args():
 config=PosixIntegrationConfig)
 add_extra_docker_options(posix_integration)
-add_httptester_options(posix_integration, argparse)
 network_integration = subparsers.add_parser('network-integration',
                                             parents=[integration],
@ -469,7 +468,6 @@ def parse_args():
 config=WindowsIntegrationConfig)
 add_extra_docker_options(windows_integration, integration=False)
-add_httptester_options(windows_integration, argparse)
 windows_integration.add_argument('--windows',
                                  metavar='VERSION',
@ -564,13 +562,12 @@ def parse_args():
 action='store_true',
 help='direct to shell with no setup')
-add_environments(shell)
+add_environments(shell, argparse)
 add_extra_docker_options(shell)
-add_httptester_options(shell, argparse)
 coverage_common = argparse.ArgumentParser(add_help=False, parents=[common])
-add_environments(coverage_common, isolated_delegation=False)
+add_environments(coverage_common, argparse, isolated_delegation=False)
 coverage = subparsers.add_parser('coverage',
                                  help='code coverage management and reporting')
@ -896,9 +893,10 @@ def add_changes(parser, argparse):
 changes.add_argument('--changed-path', metavar='PATH', action='append', help=argparse.SUPPRESS)
-def add_environments(parser, isolated_delegation=True):
+def add_environments(parser, argparse, isolated_delegation=True):
     """
     :type parser: argparse.ArgumentParser
+    :type argparse: argparse
     :type isolated_delegation: bool
     """
     parser.add_argument('--requirements',
@ -934,6 +932,7 @@ def add_environments(parser, isolated_delegation=True):
 if not isolated_delegation:
     environments.set_defaults(
+        containers=None,
         docker=None,
         remote=None,
         remote_stage=None,
@ -945,6 +944,9 @@ def add_environments(parser, isolated_delegation=True):
 return
+parser.add_argument('--containers',
+                    help=argparse.SUPPRESS)  # internal use only
 environments.add_argument('--docker',
                           metavar='IMAGE',
                           nargs='?',
@ -1001,32 +1003,6 @@ def add_extra_coverage_options(parser):
 help='generate empty report of all python/powershell source files')
-def add_httptester_options(parser, argparse):
-    """
-    :type parser: argparse.ArgumentParser
-    :type argparse: argparse
-    """
-    group = parser.add_mutually_exclusive_group()
-    group.add_argument('--httptester',
-                       metavar='IMAGE',
-                       default='quay.io/ansible/http-test-container:1.3.0',
-                       help='docker image to use for the httptester container')
-    group.add_argument('--disable-httptester',
-                       dest='httptester',
-                       action='store_const',
-                       const='',
-                       help='do not use the httptester container')
-    parser.add_argument('--inject-httptester',
-                        action='store_true',
-                        help=argparse.SUPPRESS)  # internal use only
-    parser.add_argument('--httptester-krb5-password',
-                        help=argparse.SUPPRESS)  # internal use only
 def add_extra_docker_options(parser, integration=True):
     """
     :type parser: argparse.ArgumentParser
@ -1119,9 +1095,8 @@ def complete_remote_shell(prefix, parsed_args, **_):
 images = sorted(get_remote_completion().keys())
-# 2008 doesn't support SSH so we do not add to the list of valid images
 windows_completion_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'completion', 'windows.txt')
-images.extend(["windows/%s" % i for i in read_lines_without_comments(windows_completion_path, remove_blank_lines=True) if i != '2008'])
+images.extend(["windows/%s" % i for i in read_lines_without_comments(windows_completion_path, remove_blank_lines=True)])
 return [i for i in images if i.startswith(prefix)]

@ -50,6 +50,10 @@ from ..data import (
 data_context,
 )
+from ..docker_util import (
+    get_docker_command,
+)
 PROVIDERS = {}
 ENVIRONMENTS = {}
@ -197,6 +201,9 @@ class CloudBase(ABC):
 def config_callback(files):  # type: (t.List[t.Tuple[str, str]]) -> None
     """Add the config file to the payload file list."""
+    if self.platform not in self.args.metadata.cloud_config:
+        return  # platform was initialized, but not used -- such as being skipped due to all tests being disabled
     if self._get_cloud_config(self._CONFIG_PATH, ''):
         pair = (self.config_path, os.path.relpath(self.config_path, data_context().content.root))
@ -297,18 +304,38 @@ class CloudProvider(CloudBase):
 self.config_template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, '%s.template' % self.config_static_name)
 self.config_extension = config_extension
+self.uses_config = False
+self.uses_docker = False
 def filter(self, targets, exclude):
     """Filter out the cloud tests when the necessary config and resources are not available.
     :type targets: tuple[TestTarget]
     :type exclude: list[str]
     """
+    if not self.uses_docker and not self.uses_config:
+        return
+    if self.uses_docker and get_docker_command():
+        return
+    if self.uses_config and os.path.exists(self.config_static_path):
+        return
     skip = 'cloud/%s/' % self.platform
     skipped = [target.name for target in targets if skip in target.aliases]
     if skipped:
         exclude.append(skip)
-        display.warning('Excluding tests marked "%s" which require config (see "%s"): %s'
-                        % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
+        if not self.uses_docker and self.uses_config:
+            display.warning('Excluding tests marked "%s" which require config (see "%s"): %s'
+                            % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
+        elif self.uses_docker and not self.uses_config:
+            display.warning('Excluding tests marked "%s" which requires container support: %s'
+                            % (skip.rstrip('/'), ', '.join(skipped)))
+        elif self.uses_docker and self.uses_config:
+            display.warning('Excluding tests marked "%s" which requires container support or config (see "%s"): %s'
+                            % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
 def setup(self):
     """Setup the cloud resource before delegation and register a cleanup callback."""
@ -317,18 +344,6 @@ class CloudProvider(CloudBase):
 atexit.register(self.cleanup)
-def get_remote_ssh_options(self):
-    """Get any additional options needed when delegating tests to a remote instance via SSH.
-    :rtype: list[str]
-    """
-    return []
-def get_docker_run_options(self):
-    """Get any additional options needed when delegating tests to a docker container.
-    :rtype: list[str]
-    """
-    return []
 def cleanup(self):
     """Clean up the cloud resource and any temporary configuration files after tests complete."""
     if self.remove_config:
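
To make the new contract concrete, a minimal sketch (hypothetical provider, not part of this commit): a plugin now only declares what it consumes, and the base class ``filter`` above decides whether targets must be skipped.

    class ExampleProvider(CloudProvider):  # hypothetical subclass for illustration
        def __init__(self, args):
            super(ExampleProvider, self).__init__(args)
            self.uses_docker = True  # can start a support container when docker/podman is present
            self.uses_config = True  # or run against a static config file if one exists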

@ -3,7 +3,6 @@ from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type
 import os
-import time
 from . import (
     CloudProvider,
@ -11,27 +10,8 @@ from . import (
 CloudEnvironmentConfig,
 )
-from ..util import (
-    find_executable,
-    display,
-    ApplicationError,
-    SubprocessError,
-)
-from ..http import (
-    HttpClient,
-)
-from ..docker_util import (
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    get_docker_container_id,
-    get_docker_hostname,
-    get_docker_container_ip,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
-)
+from ..containers import (
+    run_support_container,
+)
@ -50,46 +30,8 @@ class ACMEProvider(CloudProvider):
 self.image = os.environ.get('ANSIBLE_ACME_CONTAINER')
 else:
     self.image = 'quay.io/ansible/acme-test-container:2.0.0'
-self.container_name = ''
+self.uses_docker = True
-def _wait_for_service(self, protocol, acme_host, port, local_part, name):
-    """Wait for an endpoint to accept connections."""
-    if self.args.explain:
-        return
-    client = HttpClient(self.args, always=True, insecure=True)
-    endpoint = '%s://%s:%d/%s' % (protocol, acme_host, port, local_part)
-    for dummy in range(1, 30):
-        display.info('Waiting for %s: %s' % (name, endpoint), verbosity=1)
-        try:
-            client.get(endpoint)
-            return
-        except SubprocessError:
-            pass
-        time.sleep(1)
-    raise ApplicationError('Timeout waiting for %s.' % name)
-def filter(self, targets, exclude):
-    """Filter out the cloud tests when the necessary config and resources are not available.
-    :type targets: tuple[TestTarget]
-    :type exclude: list[str]
-    """
-    docker = find_executable('docker', required=False)
-    if docker:
-        return
-    skip = 'cloud/%s/' % self.platform
-    skipped = [target.name for target in targets if skip in target.aliases]
-    if skipped:
-        exclude.append(skip)
-        display.warning('Excluding tests marked "%s" which require the "docker" command: %s'
-                        % (skip.rstrip('/'), ', '.join(skipped)))
 def setup(self):
     """Setup the cloud resource before delegation and register a cleanup callback."""
@ -100,79 +42,26 @@ class ACMEProvider(CloudProvider):
 else:
     self._setup_dynamic()
-def get_docker_run_options(self):
-    """Get any additional options needed when delegating tests to a docker container.
-    :rtype: list[str]
-    """
-    network = get_docker_preferred_network_name(self.args)
-    if self.managed and not is_docker_user_defined_network(network):
-        return ['--link', self.DOCKER_SIMULATOR_NAME]
-    return []
-def cleanup(self):
-    """Clean up the cloud resource and any temporary configuration files after tests complete."""
-    if self.container_name:
-        docker_rm(self.args, self.container_name)
-    super(ACMEProvider, self).cleanup()
 def _setup_dynamic(self):
     """Create a ACME test container using docker."""
-    container_id = get_docker_container_id()
-    self.container_name = self.DOCKER_SIMULATOR_NAME
-    results = docker_inspect(self.args, self.container_name)
-    if results and not results[0].get('State', {}).get('Running'):
-        docker_rm(self.args, self.container_name)
-        results = []
-    if results:
-        display.info('Using the existing ACME docker test container.', verbosity=1)
-    else:
-        display.info('Starting a new ACME docker test container.', verbosity=1)
-        if not container_id:
-            # publish the simulator ports when not running inside docker
-            publish_ports = [
-                '-p', '5000:5000',  # control port for flask app in container
-                '-p', '14000:14000',  # Pebble ACME CA
-            ]
-        else:
-            publish_ports = []
-        if not os.environ.get('ANSIBLE_ACME_CONTAINER'):
-            docker_pull(self.args, self.image)
-        docker_run(
-            self.args,
-            self.image,
-            ['-d', '--name', self.container_name] + publish_ports,
-        )
-    if self.args.docker:
-        acme_host = self.DOCKER_SIMULATOR_NAME
-    elif container_id:
-        acme_host = self._get_simulator_address()
-        display.info('Found ACME test container address: %s' % acme_host, verbosity=1)
-    else:
-        acme_host = get_docker_hostname()
-    if container_id:
-        acme_host_ip = self._get_simulator_address()
-    else:
-        acme_host_ip = get_docker_hostname()
-    self._set_cloud_config('acme_host', acme_host)
-    self._wait_for_service('http', acme_host_ip, 5000, '', 'ACME controller')
-    self._wait_for_service('https', acme_host_ip, 14000, 'dir', 'ACME CA endpoint')
-def _get_simulator_address(self):
-    return get_docker_container_ip(self.args, self.container_name)
+    ports = [
+        5000,  # control port for flask app in container
+        14000,  # Pebble ACME CA
+    ]
+    descriptor = run_support_container(
+        self.args,
+        self.platform,
+        self.image,
+        self.DOCKER_SIMULATOR_NAME,
+        ports,
+        allow_existing=True,
+        cleanup=True,
+    )
+    descriptor.register(self.args)
+    self._set_cloud_config('acme_host', self.DOCKER_SIMULATOR_NAME)
 def _setup_static(self):
     raise NotImplementedError()

@ -23,14 +23,19 @@ from ..core_ci import (
 class AwsCloudProvider(CloudProvider):
     """AWS cloud provider plugin. Sets up cloud resources before delegation."""
+    def __init__(self, args):
+        """
+        :type args: TestConfig
+        """
+        super(AwsCloudProvider, self).__init__(args)
+        self.uses_config = True
     def filter(self, targets, exclude):
         """Filter out the cloud tests when the necessary config and resources are not available.
         :type targets: tuple[TestTarget]
         :type exclude: list[str]
         """
-        if os.path.isfile(self.config_static_path):
-            return
         aci = self._create_ansible_core_ci()
         if aci.available:

@ -44,14 +44,13 @@ class AzureCloudProvider(CloudProvider):
 self.aci = None
+self.uses_config = True
 def filter(self, targets, exclude):
     """Filter out the cloud tests when the necessary config and resources are not available.
     :type targets: tuple[TestTarget]
     :type exclude: list[str]
     """
-    if os.path.isfile(self.config_static_path):
-        return
     aci = self._create_ansible_core_ci()
     if aci.available:

@ -22,22 +22,13 @@ class CloudscaleCloudProvider(CloudProvider):
"""Cloudscale cloud provider plugin. Sets up cloud resources before """Cloudscale cloud provider plugin. Sets up cloud resources before
delegation. delegation.
""" """
def __init__(self, args): def __init__(self, args):
""" """
:type args: TestConfig :type args: TestConfig
""" """
super(CloudscaleCloudProvider, self).__init__(args) super(CloudscaleCloudProvider, self).__init__(args)
def filter(self, targets, exclude): self.uses_config = True
"""Filter out the cloud tests when the necessary config and resources are not available.
:type targets: tuple[TestTarget]
:type exclude: list[str]
"""
if os.path.isfile(self.config_static_path):
return
super(CloudscaleCloudProvider, self).filter(targets, exclude)
def setup(self): def setup(self):
"""Setup the cloud resource before delegation and register a cleanup callback.""" """Setup the cloud resource before delegation and register a cleanup callback."""

@ -4,8 +4,6 @@ __metaclass__ = type
 import json
 import os
-import re
-import time
 from . import (
     CloudProvider,
@ -14,30 +12,22 @@ from . import (
 )
 from ..util import (
-    find_executable,
     ApplicationError,
     display,
-    SubprocessError,
     ConfigParser,
 )
 from ..http import (
-    HttpClient,
-    HttpError,
     urlparse,
 )
 from ..docker_util import (
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    docker_network_inspect,
     docker_exec,
-    get_docker_container_id,
-    get_docker_preferred_network_name,
-    get_docker_hostname,
-    is_docker_user_defined_network,
 )
+from ..containers import (
+    run_support_container,
+    wait_for_file,
+)
@ -52,31 +42,11 @@ class CsCloudProvider(CloudProvider):
 super(CsCloudProvider, self).__init__(args)
 self.image = os.environ.get('ANSIBLE_CLOUDSTACK_CONTAINER', 'quay.io/ansible/cloudstack-test-container:1.4.0')
-self.container_name = ''
-self.endpoint = ''
 self.host = ''
 self.port = 0
+self.uses_docker = True
+self.uses_config = True
-def filter(self, targets, exclude):
-    """Filter out the cloud tests when the necessary config and resources are not available.
-    :type targets: tuple[TestTarget]
-    :type exclude: list[str]
-    """
-    if os.path.isfile(self.config_static_path):
-        return
-    docker = find_executable('docker', required=False)
-    if docker:
-        return
-    skip = 'cloud/%s/' % self.platform
-    skipped = [target.name for target in targets if skip in target.aliases]
-    if skipped:
-        exclude.append(skip)
-        display.warning('Excluding tests marked "%s" which require the "docker" command or config (see "%s"): %s'
-                        % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
 def setup(self):
     """Setup the cloud resource before delegation and register a cleanup callback."""
@ -87,49 +57,19 @@ class CsCloudProvider(CloudProvider):
 else:
     self._setup_dynamic()
-def get_remote_ssh_options(self):
-    """Get any additional options needed when delegating tests to a remote instance via SSH.
-    :rtype: list[str]
-    """
-    if self.managed:
-        return ['-R', '8888:%s:8888' % get_docker_hostname()]
-    return []
-def get_docker_run_options(self):
-    """Get any additional options needed when delegating tests to a docker container.
-    :rtype: list[str]
-    """
-    network = get_docker_preferred_network_name(self.args)
-    if self.managed and not is_docker_user_defined_network(network):
-        return ['--link', self.DOCKER_SIMULATOR_NAME]
-    return []
-def cleanup(self):
-    """Clean up the cloud resource and any temporary configuration files after tests complete."""
-    if self.container_name:
-        if self.ci_provider.code:
-            docker_rm(self.args, self.container_name)
-        elif not self.args.explain:
-            display.notice('Remember to run `docker rm -f %s` when finished testing.' % self.container_name)
-    super(CsCloudProvider, self).cleanup()
 def _setup_static(self):
     """Configure CloudStack tests for use with static configuration."""
     parser = ConfigParser()
     parser.read(self.config_static_path)
-    self.endpoint = parser.get('cloudstack', 'endpoint')
-    parts = urlparse(self.endpoint)
+    endpoint = parser.get('cloudstack', 'endpoint')
+    parts = urlparse(endpoint)
     self.host = parts.hostname
     if not self.host:
-        raise ApplicationError('Could not determine host from endpoint: %s' % self.endpoint)
+        raise ApplicationError('Could not determine host from endpoint: %s' % endpoint)
     if parts.port:
         self.port = parts.port
@ -138,50 +78,35 @@ class CsCloudProvider(CloudProvider):
 elif parts.scheme == 'https':
     self.port = 443
 else:
-    raise ApplicationError('Could not determine port from endpoint: %s' % self.endpoint)
+    raise ApplicationError('Could not determine port from endpoint: %s' % endpoint)
 display.info('Read cs host "%s" and port %d from config: %s' % (self.host, self.port, self.config_static_path), verbosity=1)
-self._wait_for_service()
 def _setup_dynamic(self):
     """Create a CloudStack simulator using docker."""
     config = self._read_config_template()
-    self.container_name = self.DOCKER_SIMULATOR_NAME
-    results = docker_inspect(self.args, self.container_name)
-    if results and not results[0]['State']['Running']:
-        docker_rm(self.args, self.container_name)
-        results = []
-    if results:
-        display.info('Using the existing CloudStack simulator docker container.', verbosity=1)
-    else:
-        display.info('Starting a new CloudStack simulator docker container.', verbosity=1)
-        docker_pull(self.args, self.image)
-        docker_run(self.args, self.image, ['-d', '-p', '8888:8888', '--name', self.container_name])
-        # apply work-around for OverlayFS issue
-        # https://github.com/docker/for-linux/issues/72#issuecomment-319904698
-        docker_exec(self.args, self.container_name, ['find', '/var/lib/mysql', '-type', 'f', '-exec', 'touch', '{}', ';'])
-        if not self.args.explain:
-            display.notice('The CloudStack simulator will probably be ready in 2 - 4 minutes.')
-    container_id = get_docker_container_id()
-    if container_id:
-        self.host = self._get_simulator_address()
-        display.info('Found CloudStack simulator container address: %s' % self.host, verbosity=1)
-    else:
-        self.host = get_docker_hostname()
-    self.port = 8888
-    self.endpoint = 'http://%s:%d' % (self.host, self.port)
-    self._wait_for_service()
+    self.port = 8888
+    ports = [
+        self.port,
+    ]
+    descriptor = run_support_container(
+        self.args,
+        self.platform,
+        self.image,
+        self.DOCKER_SIMULATOR_NAME,
+        ports,
+        allow_existing=True,
+        cleanup=True,
+    )
+    descriptor.register(self.args)
+    # apply work-around for OverlayFS issue
+    # https://github.com/docker/for-linux/issues/72#issuecomment-319904698
+    docker_exec(self.args, self.DOCKER_SIMULATOR_NAME, ['find', '/var/lib/mysql', '-type', 'f', '-exec', 'touch', '{}', ';'])
 if self.args.explain:
     values = dict(
@ -189,17 +114,10 @@ class CsCloudProvider(CloudProvider):
 PORT=str(self.port),
 )
 else:
-    credentials = self._get_credentials()
-    if self.args.docker:
-        host = self.DOCKER_SIMULATOR_NAME
-    elif self.args.remote:
-        host = 'localhost'
-    else:
-        host = self.host
+    credentials = self._get_credentials(self.DOCKER_SIMULATOR_NAME)
     values = dict(
-        HOST=host,
+        HOST=self.DOCKER_SIMULATOR_NAME,
         PORT=str(self.port),
         KEY=credentials['apikey'],
         SECRET=credentials['secretkey'],
@ -211,62 +129,23 @@ class CsCloudProvider(CloudProvider):
 self._write_config(config)
-def _get_simulator_address(self):
-    current_network = get_docker_preferred_network_name(self.args)
-    networks = docker_network_inspect(self.args, current_network)
-    try:
-        network = [network for network in networks if network['Name'] == current_network][0]
-        containers = network['Containers']
-        container = [containers[container] for container in containers if containers[container]['Name'] == self.DOCKER_SIMULATOR_NAME][0]
-        return re.sub(r'/[0-9]+$', '', container['IPv4Address'])
-    except Exception:
-        display.error('Failed to process the following docker network inspect output:\n%s' %
-                      json.dumps(networks, indent=4, sort_keys=True))
-        raise
-def _wait_for_service(self):
-    """Wait for the CloudStack service endpoint to accept connections."""
-    if self.args.explain:
-        return
-    client = HttpClient(self.args, always=True)
-    endpoint = self.endpoint
-    for _iteration in range(1, 30):
-        display.info('Waiting for CloudStack service: %s' % endpoint, verbosity=1)
-        try:
-            client.get(endpoint)
-            return
-        except SubprocessError:
-            pass
-        time.sleep(10)
-    raise ApplicationError('Timeout waiting for CloudStack service.')
-def _get_credentials(self):
+def _get_credentials(self, container_name):
     """Wait for the CloudStack simulator to return credentials.
+    :type container_name: str
     :rtype: dict[str, str]
     """
-    client = HttpClient(self.args, always=True)
-    endpoint = '%s/admin.json' % self.endpoint
-    for _iteration in range(1, 30):
-        display.info('Waiting for CloudStack credentials: %s' % endpoint, verbosity=1)
-        response = client.get(endpoint)
-        if response.status_code == 200:
-            try:
-                return response.json()
-            except HttpError as ex:
-                display.error(ex)
-        time.sleep(10)
-    raise ApplicationError('Timeout waiting for CloudStack credentials.')
+    def check(value):
+        # noinspection PyBroadException
+        try:
+            json.loads(value)
+        except Exception:  # pylint: disable=broad-except
+            return False  # sometimes the file exists but is not yet valid JSON
+        return True
+    stdout = wait_for_file(self.args, container_name, '/var/www/html/admin.json', sleep=10, tries=30, check=check)
+    return json.loads(stdout)
 class CsCloudEnvironment(CloudEnvironment):

@ -10,21 +10,8 @@ from . import (
 CloudEnvironmentConfig,
 )
-from ..util import (
-    find_executable,
-    display,
-)
-from ..docker_util import (
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    get_docker_container_id,
-    get_docker_hostname,
-    get_docker_container_ip,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
-)
+from ..containers import (
+    run_support_container,
+)
@ -61,30 +48,8 @@ class ForemanProvider(CloudProvider):
""" """
self.image = self.__container_from_env or self.DOCKER_IMAGE self.image = self.__container_from_env or self.DOCKER_IMAGE
self.container_name = ''
def filter(self, targets, exclude):
"""Filter out the tests with the necessary config and res unavailable.
:type targets: tuple[TestTarget]
:type exclude: list[str]
"""
docker_cmd = 'docker'
docker = find_executable(docker_cmd, required=False)
if docker: self.uses_docker = True
return
skip = 'cloud/%s/' % self.platform
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
display.warning(
'Excluding tests marked "%s" '
'which require the "%s" command: %s'
% (skip.rstrip('/'), docker_cmd, ', '.join(skipped))
)
def setup(self): def setup(self):
"""Setup cloud resource before delegation and reg cleanup callback.""" """Setup cloud resource before delegation and reg cleanup callback."""
@ -95,81 +60,31 @@ class ForemanProvider(CloudProvider):
 else:
     self._setup_dynamic()
-def get_docker_run_options(self):
-    """Get additional options needed when delegating tests to a container.
-    :rtype: list[str]
-    """
-    network = get_docker_preferred_network_name(self.args)
-    if self.managed and not is_docker_user_defined_network(network):
-        return ['--link', self.DOCKER_SIMULATOR_NAME]
-    return []
-def cleanup(self):
-    """Clean up the resource and temporary configs files after tests."""
-    if self.container_name:
-        docker_rm(self.args, self.container_name)
-    super(ForemanProvider, self).cleanup()
 def _setup_dynamic(self):
     """Spawn a Foreman stub within docker container."""
     foreman_port = 8080
-    container_id = get_docker_container_id()
-    self.container_name = self.DOCKER_SIMULATOR_NAME
-    results = docker_inspect(self.args, self.container_name)
-    if results and not results[0].get('State', {}).get('Running'):
-        docker_rm(self.args, self.container_name)
-        results = []
-    display.info(
-        '%s Foreman simulator docker container.'
-        % ('Using the existing' if results else 'Starting a new'),
-        verbosity=1,
-    )
-    if not results:
-        if self.args.docker or container_id:
-            publish_ports = []
-        else:
-            # publish the simulator ports when not running inside docker
-            publish_ports = [
-                '-p', ':'.join((str(foreman_port), ) * 2),
-            ]
-        if not self.__container_from_env:
-            docker_pull(self.args, self.image)
-        docker_run(
-            self.args,
-            self.image,
-            ['-d', '--name', self.container_name] + publish_ports,
-        )
-    if self.args.docker:
-        foreman_host = self.DOCKER_SIMULATOR_NAME
-    elif container_id:
-        foreman_host = self._get_simulator_address()
-        display.info(
-            'Found Foreman simulator container address: %s'
-            % foreman_host, verbosity=1
-        )
-    else:
-        foreman_host = get_docker_hostname()
-    self._set_cloud_config('FOREMAN_HOST', foreman_host)
+    ports = [
+        foreman_port,
+    ]
+    descriptor = run_support_container(
+        self.args,
+        self.platform,
+        self.image,
+        self.DOCKER_SIMULATOR_NAME,
+        ports,
+        allow_existing=True,
+        cleanup=True,
+    )
+    descriptor.register(self.args)
+    self._set_cloud_config('FOREMAN_HOST', self.DOCKER_SIMULATOR_NAME)
     self._set_cloud_config('FOREMAN_PORT', str(foreman_port))
-def _get_simulator_address(self):
-    return get_docker_container_ip(self.args, self.container_name)
 def _setup_static(self):
-    raise NotImplementedError
+    raise NotImplementedError()
 class ForemanEnvironment(CloudEnvironment):

@ -11,23 +11,12 @@ from . import (
 CloudEnvironmentConfig,
 )
-from ..util import (
-    find_executable,
-    display,
-)
-from ..docker_util import (
-    docker_command,
-    docker_run,
-    docker_start,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    get_docker_container_id,
-    get_docker_hostname,
-    get_docker_container_ip,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
-)
+from ..docker_util import (
+    docker_cp_to,
+)
+from ..containers import (
+    run_support_container,
+)
@ -103,68 +92,35 @@ class GalaxyProvider(CloudProvider):
 'docker.io/pulp/pulp-galaxy-ng@sha256:b79a7be64eff86d8f58db9ca83ed4967bd8b4e45c99addb17a91d11926480cf1'
 )
-self.containers = []
+self.uses_docker = True
-def filter(self, targets, exclude):
-    """Filter out the tests with the necessary config and res unavailable.
-    :type targets: tuple[TestTarget]
-    :type exclude: list[str]
-    """
-    docker_cmd = 'docker'
-    docker = find_executable(docker_cmd, required=False)
-    if docker:
-        return
-    skip = 'cloud/%s/' % self.platform
-    skipped = [target.name for target in targets if skip in target.aliases]
-    if skipped:
-        exclude.append(skip)
-        display.warning('Excluding tests marked "%s" which require the "%s" command: %s'
-                        % (skip.rstrip('/'), docker_cmd, ', '.join(skipped)))
 def setup(self):
     """Setup cloud resource before delegation and reg cleanup callback."""
     super(GalaxyProvider, self).setup()
-    container_id = get_docker_container_id()
-    p_results = docker_inspect(self.args, 'ansible-ci-pulp')
-    if p_results and not p_results[0].get('State', {}).get('Running'):
-        docker_rm(self.args, 'ansible-ci-pulp')
-        p_results = []
-    display.info('%s ansible-ci-pulp docker container.'
-                 % ('Using the existing' if p_results else 'Starting a new'),
-                 verbosity=1)
     galaxy_port = 80
+    pulp_host = 'ansible-ci-pulp'
     pulp_port = 24817
-    if not p_results:
-        if self.args.docker or container_id:
-            publish_ports = []
-        else:
-            # publish the simulator ports when not running inside docker
-            publish_ports = [
-                '-p', ':'.join((str(galaxy_port),) * 2),
-                '-p', ':'.join((str(pulp_port),) * 2),
-            ]
-        docker_pull(self.args, self.pulp)
-        # Create the container, don't run it, we need to inject configs before it starts
-        stdout, _dummy = docker_run(
-            self.args,
-            self.pulp,
-            ['--name', 'ansible-ci-pulp'] + publish_ports,
-            create_only=True
-        )
-        pulp_id = stdout.strip()
+    ports = [
+        galaxy_port,
+        pulp_port,
+    ]
+    # Create the container, don't run it, we need to inject configs before it starts
+    descriptor = run_support_container(
+        self.args,
+        self.platform,
+        self.pulp,
+        pulp_host,
+        ports,
+        start=False,
+        allow_existing=True,
+        cleanup=None,
+    )
+    if not descriptor.running:
+        pulp_id = descriptor.container_id
     injected_files = {
         '/etc/pulp/settings.py': SETTINGS,
@ -175,20 +131,11 @@ class GalaxyProvider(CloudProvider):
 with tempfile.NamedTemporaryFile() as temp_fd:
     temp_fd.write(content)
     temp_fd.flush()
-    docker_command(self.args, ['cp', temp_fd.name, '%s:%s' % (pulp_id, path)])
+    docker_cp_to(self.args, pulp_id, temp_fd.name, path)
-# Start the container
-docker_start(self.args, 'ansible-ci-pulp', [])
-self.containers.append('ansible-ci-pulp')
+descriptor.start(self.args)
+descriptor.register(self.args)
-if self.args.docker:
-    pulp_host = 'ansible-ci-pulp'
-elif container_id:
-    pulp_host = self._get_simulator_address('ansible-ci-pulp')
-    display.info('Found Galaxy simulator container address: %s' % pulp_host, verbosity=1)
-else:
-    pulp_host = get_docker_hostname()
 self._set_cloud_config('PULP_HOST', pulp_host)
 self._set_cloud_config('PULP_PORT', str(pulp_port))
@ -196,28 +143,6 @@ class GalaxyProvider(CloudProvider):
 self._set_cloud_config('PULP_USER', 'admin')
 self._set_cloud_config('PULP_PASSWORD', 'password')
-def get_docker_run_options(self):
-    """Get additional options needed when delegating tests to a container.
-    :rtype: list[str]
-    """
-    network = get_docker_preferred_network_name(self.args)
-    if not is_docker_user_defined_network(network):
-        return ['--link', 'ansible-ci-pulp']
-    return []
-def cleanup(self):
-    """Clean up the resource and temporary configs files after tests."""
-    for container_name in self.containers:
-        docker_rm(self.args, container_name)
-    super(GalaxyProvider, self).cleanup()
-def _get_simulator_address(self, container_name):
-    return get_docker_container_ip(self.args, container_name)
 class GalaxyEnvironment(CloudEnvironment):
     """Galaxy environment plugin.

@ -4,8 +4,6 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type
-import os
 from ..util import (
     display,
     ConfigParser,
@ -20,17 +18,13 @@ from . import (
 class GcpCloudProvider(CloudProvider):
     """GCP cloud provider plugin. Sets up cloud resources before delegation."""
-    def filter(self, targets, exclude):
-        """Filter out the cloud tests when the necessary config and resources are not available.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        if os.path.isfile(self.config_static_path):
-            return
-        super(GcpCloudProvider, self).filter(targets, exclude)
+    def __init__(self, args):
+        """Set up container references for provider.
+        :type args: TestConfig
+        """
+        super(GcpCloudProvider, self).__init__(args)
+        self.uses_config = True
     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""

@ -2,8 +2,6 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type
-import os
 from ..util import (
     display,
     ConfigParser,
@ -31,14 +29,13 @@ class HcloudCloudProvider(CloudProvider):
""" """
super(HcloudCloudProvider, self).__init__(args) super(HcloudCloudProvider, self).__init__(args)
self.uses_config = True
def filter(self, targets, exclude): def filter(self, targets, exclude):
"""Filter out the cloud tests when the necessary config and resources are not available. """Filter out the cloud tests when the necessary config and resources are not available.
:type targets: tuple[TestTarget] :type targets: tuple[TestTarget]
:type exclude: list[str] :type exclude: list[str]
""" """
if os.path.isfile(self.config_static_path):
return
aci = self._create_ansible_core_ci() aci = self._create_ansible_core_ci()
if aci.available: if aci.available:

@@ -0,0 +1,92 @@
"""HTTP Tester plugin for integration tests."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from . import (
CloudProvider,
CloudEnvironment,
CloudEnvironmentConfig,
)
from ..util import (
display,
generate_password,
)
from ..config import (
IntegrationConfig,
)
from ..containers import (
run_support_container,
)
KRB5_PASSWORD_ENV = 'KRB5_PASSWORD'
class HttptesterProvider(CloudProvider):
"""HTTP Tester provider plugin. Sets up resources before delegation."""
def __init__(self, args): # type: (IntegrationConfig) -> None
super(HttptesterProvider, self).__init__(args)
self.image = os.environ.get('ANSIBLE_HTTP_TEST_CONTAINER', 'quay.io/ansible/http-test-container:1.3.0')
self.uses_docker = True
def setup(self): # type: () -> None
"""Setup resources before delegation."""
super(HttptesterProvider, self).setup()
ports = [
80,
88,
443,
444,
749,
]
aliases = [
'ansible.http.tests',
'sni1.ansible.http.tests',
'fail.ansible.http.tests',
'self-signed.ansible.http.tests',
]
descriptor = run_support_container(
self.args,
self.platform,
self.image,
'http-test-container',
ports,
aliases=aliases,
start=True,
allow_existing=True,
cleanup=True,
env={
KRB5_PASSWORD_ENV: generate_password(),
},
)
descriptor.register(self.args)
# Read the password from the container environment.
# This allows the tests to work when re-using an existing container.
# The password is marked as sensitive, since it may differ from the one we generated.
krb5_password = descriptor.details.container.env_dict()[KRB5_PASSWORD_ENV]
display.sensitive.add(krb5_password)
self._set_cloud_config(KRB5_PASSWORD_ENV, krb5_password)
class HttptesterEnvironment(CloudEnvironment):
"""HTTP Tester environment plugin. Updates integration test environment after delegation."""
def get_environment_config(self): # type: () -> CloudEnvironmentConfig
"""Returns the cloud environment config."""
return CloudEnvironmentConfig(
env_vars=dict(
HTTPTESTER='1', # backwards compatibility for tests intended to work with or without HTTP Tester
KRB5_PASSWORD=self._get_cloud_config(KRB5_PASSWORD_ENV),
)
)
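# For illustration only: a test script delegated with this plugin could read the
# exposed values like so (hypothetical test code, not part of the plugin):
#
#   import os
#
#   krb5_password = os.environ['KRB5_PASSWORD']  # exposed by HttptesterEnvironment
#   assert os.environ.get('HTTPTESTER') == '1'   # legacy flag kept for backwards compatibility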

@@ -10,21 +10,8 @@ from . import (
     CloudEnvironmentConfig,
 )

-from ..util import (
-    find_executable,
-    display,
-)
-
-from ..docker_util import (
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    get_docker_container_id,
-    get_docker_hostname,
-    get_docker_container_ip,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
+from ..containers import (
+    run_support_container,
 )
@@ -48,7 +35,6 @@ class NiosProvider(CloudProvider):
     def __init__(self, args):
         """Set up container references for provider.
         :type args: TestConfig
         """
         super(NiosProvider, self).__init__(args)
@@ -61,30 +47,8 @@ class NiosProvider(CloudProvider):
         """
         self.image = self.__container_from_env or self.DOCKER_IMAGE

-        self.container_name = ''
-
-    def filter(self, targets, exclude):
-        """Filter out the tests with the necessary config and res unavailable.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        docker_cmd = 'docker'
-        docker = find_executable(docker_cmd, required=False)
-
-        if docker:
-            return
-
-        skip = 'cloud/%s/' % self.platform
-        skipped = [target.name for target in targets if skip in target.aliases]
-
-        if skipped:
-            exclude.append(skip)
-            display.warning(
-                'Excluding tests marked "%s" '
-                'which require the "%s" command: %s'
-                % (skip.rstrip('/'), docker_cmd, ', '.join(skipped))
-            )
+        self.uses_docker = True

     def setup(self):
         """Setup cloud resource before delegation and reg cleanup callback."""
@@ -95,80 +59,30 @@ class NiosProvider(CloudProvider):
         else:
             self._setup_dynamic()

-    def get_docker_run_options(self):
-        """Get additional options needed when delegating tests to a container.
-        :rtype: list[str]
-        """
-        network = get_docker_preferred_network_name(self.args)
-
-        if self.managed and not is_docker_user_defined_network(network):
-            return ['--link', self.DOCKER_SIMULATOR_NAME]
-
-        return []
-
-    def cleanup(self):
-        """Clean up the resource and temporary configs files after tests."""
-        if self.container_name:
-            docker_rm(self.args, self.container_name)
-
-        super(NiosProvider, self).cleanup()
-
     def _setup_dynamic(self):
         """Spawn a NIOS simulator within docker container."""
         nios_port = 443
-        container_id = get_docker_container_id()
-
-        self.container_name = self.DOCKER_SIMULATOR_NAME
-
-        results = docker_inspect(self.args, self.container_name)
-
-        if results and not results[0].get('State', {}).get('Running'):
-            docker_rm(self.args, self.container_name)
-            results = []
-
-        display.info(
-            '%s NIOS simulator docker container.'
-            % ('Using the existing' if results else 'Starting a new'),
-            verbosity=1,
-        )
-
-        if not results:
-            if self.args.docker or container_id:
-                publish_ports = []
-            else:
-                # publish the simulator ports when not running inside docker
-                publish_ports = [
-                    '-p', ':'.join((str(nios_port), ) * 2),
-                ]
-
-            if not self.__container_from_env:
-                docker_pull(self.args, self.image)
-
-            docker_run(
-                self.args,
-                self.image,
-                ['-d', '--name', self.container_name] + publish_ports,
-            )
-
-        if self.args.docker:
-            nios_host = self.DOCKER_SIMULATOR_NAME
-        elif container_id:
-            nios_host = self._get_simulator_address()
-            display.info(
-                'Found NIOS simulator container address: %s'
-                % nios_host, verbosity=1
-            )
-        else:
-            nios_host = get_docker_hostname()
-
-        self._set_cloud_config('NIOS_HOST', nios_host)
-
-    def _get_simulator_address(self):
-        return get_docker_container_ip(self.args, self.container_name)
+
+        ports = [
+            nios_port,
+        ]
+
+        descriptor = run_support_container(
+            self.args,
+            self.platform,
+            self.image,
+            self.DOCKER_SIMULATOR_NAME,
+            ports,
+            allow_existing=True,
+            cleanup=True,
+        )
+
+        descriptor.register(self.args)
+
+        self._set_cloud_config('NIOS_HOST', self.DOCKER_SIMULATOR_NAME)

     def _setup_static(self):
-        raise NotImplementedError
+        raise NotImplementedError()


 class NiosEnvironment(CloudEnvironment):

@@ -16,10 +16,6 @@ from ..util import (
 class OpenNebulaCloudProvider(CloudProvider):
     """Checks if a configuration file has been passed or fixtures are going to be used for testing"""

-    def filter(self, targets, exclude):
-        """ no need to filter modules, they can either run from config file or from fixtures"""
-
     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""
         super(OpenNebulaCloudProvider, self).setup()
@@ -27,6 +23,8 @@ class OpenNebulaCloudProvider(CloudProvider):
         if not self._use_static_config():
             self._setup_dynamic()

+        self.uses_config = True
+
     def _setup_dynamic(self):
         display.info('No config file provided, will run test from fixtures')

@@ -2,10 +2,7 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

-import json
-import os
 import re
-import time

 from . import (
     CloudProvider,
@@ -18,27 +15,12 @@ from ..io import (
 )

 from ..util import (
-    find_executable,
-    ApplicationError,
     display,
-    SubprocessError,
 )

-from ..http import (
-    HttpClient,
-)
-
-from ..docker_util import (
-    docker_exec,
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    docker_network_inspect,
-    get_docker_container_id,
-    get_docker_preferred_network_name,
-    get_docker_hostname,
-    is_docker_user_defined_network,
+from ..containers import (
+    run_support_container,
+    wait_for_file,
 )
@@ -54,28 +36,9 @@ class OpenShiftCloudProvider(CloudProvider):
         # The image must be pinned to a specific version to guarantee CI passes with the version used.
         self.image = 'openshift/origin:v3.9.0'

-        self.container_name = ''
-
-    def filter(self, targets, exclude):
-        """Filter out the cloud tests when the necessary config and resources are not available.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        if os.path.isfile(self.config_static_path):
-            return
-
-        docker = find_executable('docker', required=False)
-
-        if docker:
-            return
-
-        skip = 'cloud/%s/' % self.platform
-        skipped = [target.name for target in targets if skip in target.aliases]
-
-        if skipped:
-            exclude.append(skip)
-            display.warning('Excluding tests marked "%s" which require the "docker" command or config (see "%s"): %s'
-                            % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
+        self.uses_docker = True
+        self.uses_config = True

     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""
@@ -86,133 +49,52 @@ class OpenShiftCloudProvider(CloudProvider):
         else:
             self._setup_dynamic()

-    def get_remote_ssh_options(self):
-        """Get any additional options needed when delegating tests to a remote instance via SSH.
-        :rtype: list[str]
-        """
-        if self.managed:
-            return ['-R', '8443:%s:8443' % get_docker_hostname()]
-
-        return []
-
-    def get_docker_run_options(self):
-        """Get any additional options needed when delegating tests to a docker container.
-        :rtype: list[str]
-        """
-        network = get_docker_preferred_network_name(self.args)
-
-        if self.managed and not is_docker_user_defined_network(network):
-            return ['--link', self.DOCKER_CONTAINER_NAME]
-
-        return []
-
-    def cleanup(self):
-        """Clean up the cloud resource and any temporary configuration files after tests complete."""
-        if self.container_name:
-            docker_rm(self.args, self.container_name)
-
-        super(OpenShiftCloudProvider, self).cleanup()
-
     def _setup_static(self):
         """Configure OpenShift tests for use with static configuration."""
         config = read_text_file(self.config_static_path)

         match = re.search(r'^ *server: (?P<server>.*)$', config, flags=re.MULTILINE)

-        if match:
-            endpoint = match.group('server')
-            self._wait_for_service(endpoint)
-        else:
-            display.warning('Could not find OpenShift endpoint in kubeconfig. Skipping check for OpenShift service availability.')
+        if not match:
+            display.warning('Could not find OpenShift endpoint in kubeconfig.')

     def _setup_dynamic(self):
         """Create a OpenShift container using docker."""
-        self.container_name = self.DOCKER_CONTAINER_NAME
-
-        results = docker_inspect(self.args, self.container_name)
-
-        if results and not results[0]['State']['Running']:
-            docker_rm(self.args, self.container_name)
-            results = []
-
-        if results:
-            display.info('Using the existing OpenShift docker container.', verbosity=1)
-        else:
-            display.info('Starting a new OpenShift docker container.', verbosity=1)
-            docker_pull(self.args, self.image)
-            cmd = ['start', 'master', '--listen', 'https://0.0.0.0:8443']
-            docker_run(self.args, self.image, ['-d', '-p', '8443:8443', '--name', self.container_name], cmd)
-
-        container_id = get_docker_container_id()
-
-        if container_id:
-            host = self._get_container_address()
-            display.info('Found OpenShift container address: %s' % host, verbosity=1)
-        else:
-            host = get_docker_hostname()
-
         port = 8443
-        endpoint = 'https://%s:%s/' % (host, port)
-
-        self._wait_for_service(endpoint)
+
+        ports = [
+            port,
+        ]
+
+        cmd = ['start', 'master', '--listen', 'https://0.0.0.0:%d' % port]
+
+        descriptor = run_support_container(
+            self.args,
+            self.platform,
+            self.image,
+            self.DOCKER_CONTAINER_NAME,
+            ports,
+            allow_existing=True,
+            cleanup=True,
+            cmd=cmd,
+        )
+
+        descriptor.register(self.args)

         if self.args.explain:
             config = '# Unknown'
         else:
-            if self.args.docker:
-                host = self.DOCKER_CONTAINER_NAME
-            elif self.args.remote:
-                host = 'localhost'
-
-            server = 'https://%s:%s' % (host, port)
-
-            config = self._get_config(server)
+            config = self._get_config(self.DOCKER_CONTAINER_NAME, 'https://%s:%s/' % (self.DOCKER_CONTAINER_NAME, port))

         self._write_config(config)

-    def _get_container_address(self):
-        current_network = get_docker_preferred_network_name(self.args)
-        networks = docker_network_inspect(self.args, current_network)
-
-        try:
-            network = [network for network in networks if network['Name'] == current_network][0]
-            containers = network['Containers']
-            container = [containers[container] for container in containers if containers[container]['Name'] == self.DOCKER_CONTAINER_NAME][0]
-            return re.sub(r'/[0-9]+$', '', container['IPv4Address'])
-        except Exception:
-            display.error('Failed to process the following docker network inspect output:\n%s' %
-                          json.dumps(networks, indent=4, sort_keys=True))
-            raise
-
-    def _wait_for_service(self, endpoint):
-        """Wait for the OpenShift service endpoint to accept connections.
-        :type endpoint: str
-        """
-        if self.args.explain:
-            return
-
-        client = HttpClient(self.args, always=True, insecure=True)
-
-        for dummy in range(1, 30):
-            display.info('Waiting for OpenShift service: %s' % endpoint, verbosity=1)
-
-            try:
-                client.get(endpoint)
-                return
-            except SubprocessError:
-                pass
-
-            time.sleep(10)
-
-        raise ApplicationError('Timeout waiting for OpenShift service.')
-
-    def _get_config(self, server):
+    def _get_config(self, container_name, server):
         """Get OpenShift config from container.
+        :type container_name: str
         :type server: str
         :rtype: dict[str, str]
         """
-        cmd = ['cat', '/var/lib/origin/openshift.local.config/master/admin.kubeconfig']
-
-        stdout, dummy = docker_exec(self.args, self.container_name, cmd, capture=True)
+        stdout = wait_for_file(self.args, container_name, '/var/lib/origin/openshift.local.config/master/admin.kubeconfig', sleep=10, tries=30)

         config = stdout
         config = re.sub(r'^( *)certificate-authority-data: .*$', r'\1insecure-skip-tls-verify: true', config, flags=re.MULTILINE)

@@ -25,15 +25,7 @@ class ScalewayCloudProvider(CloudProvider):
         """
         super(ScalewayCloudProvider, self).__init__(args)

-    def filter(self, targets, exclude):
-        """Filter out the cloud tests when the necessary config and resources are not available.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        if os.path.isfile(self.config_static_path):
-            return
-
-        super(ScalewayCloudProvider, self).filter(targets, exclude)
+        self.uses_config = True

     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""

@@ -11,22 +11,13 @@ from . import (
 )

 from ..util import (
-    find_executable,
     display,
     ConfigParser,
     ApplicationError,
 )

-from ..docker_util import (
-    docker_run,
-    docker_rm,
-    docker_inspect,
-    docker_pull,
-    get_docker_container_id,
-    get_docker_hostname,
-    get_docker_container_ip,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
+from ..containers import (
+    run_support_container,
 )
@@ -45,44 +36,24 @@ class VcenterProvider(CloudProvider):
             self.image = os.environ.get('ANSIBLE_VCSIM_CONTAINER')
         else:
             self.image = 'quay.io/ansible/vcenter-test-container:1.7.0'
-        self.container_name = ''

         # VMware tests can be run on govcsim or BYO with a static config file.
         # The simulator is the default if no config is provided.
         self.vmware_test_platform = os.environ.get('VMWARE_TEST_PLATFORM', 'govcsim')
-        self.insecure = False
-        self.proxy = None
-        self.platform = 'vcenter'

-    def filter(self, targets, exclude):
-        """Filter out the cloud tests when the necessary config and resources are not available.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        if self.vmware_test_platform == 'govcsim' or (self.vmware_test_platform == '' and not os.path.isfile(self.config_static_path)):
-            docker = find_executable('docker', required=False)
-
-            if docker:
-                return
-
-            skip = 'cloud/%s/' % self.platform
-            skipped = [target.name for target in targets if skip in target.aliases]
-
-            if skipped:
-                exclude.append(skip)
-                display.warning('Excluding tests marked "%s" which require the "docker" command or config (see "%s"): %s'
-                                % (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
+        if self.vmware_test_platform == 'govcsim':
+            self.uses_docker = True
+            self.uses_config = False
         elif self.vmware_test_platform == 'static':
-            if os.path.isfile(self.config_static_path):
-                return
-
-            super(VcenterProvider, self).filter(targets, exclude)
+            self.uses_docker = False
+            self.uses_config = True

     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""
         super(VcenterProvider, self).setup()

         self._set_cloud_config('vmware_test_platform', self.vmware_test_platform)

         if self.vmware_test_platform == 'govcsim':
             self._setup_dynamic_simulator()
             self.managed = True
@@ -92,91 +63,33 @@ class VcenterProvider(CloudProvider):
         else:
             raise ApplicationError('Unknown vmware_test_platform: %s' % self.vmware_test_platform)

-    def get_docker_run_options(self):
-        """Get any additional options needed when delegating tests to a docker container.
-        :rtype: list[str]
-        """
-        network = get_docker_preferred_network_name(self.args)
-
-        if self.managed and not is_docker_user_defined_network(network):
-            return ['--link', self.DOCKER_SIMULATOR_NAME]
-
-        return []
-
-    def cleanup(self):
-        """Clean up the cloud resource and any temporary configuration files after tests complete."""
-        if self.container_name:
-            docker_rm(self.args, self.container_name)
-
-        super(VcenterProvider, self).cleanup()
-
     def _setup_dynamic_simulator(self):
         """Create a vcenter simulator using docker."""
-        container_id = get_docker_container_id()
-
-        self.container_name = self.DOCKER_SIMULATOR_NAME
-
-        results = docker_inspect(self.args, self.container_name)
-
-        if results and not results[0].get('State', {}).get('Running'):
-            docker_rm(self.args, self.container_name)
-            results = []
-
-        if results:
-            display.info('Using the existing vCenter simulator docker container.', verbosity=1)
-        else:
-            display.info('Starting a new vCenter simulator docker container.', verbosity=1)
-
-            if not self.args.docker and not container_id:
-                # publish the simulator ports when not running inside docker
-                publish_ports = [
-                    '-p', '1443:443',
-                    '-p', '8080:8080',
-                    '-p', '8989:8989',
-                    '-p', '5000:5000',  # control port for flask app in simulator
-                ]
-            else:
-                publish_ports = []
-
-            if not os.environ.get('ANSIBLE_VCSIM_CONTAINER'):
-                docker_pull(self.args, self.image)
-
-            docker_run(
-                self.args,
-                self.image,
-                ['-d', '--name', self.container_name] + publish_ports,
-            )
-
-        if self.args.docker:
-            vcenter_hostname = self.DOCKER_SIMULATOR_NAME
-        elif container_id:
-            vcenter_hostname = self._get_simulator_address()
-            display.info('Found vCenter simulator container address: %s' % vcenter_hostname, verbosity=1)
-        else:
-            vcenter_hostname = get_docker_hostname()
-
-        self._set_cloud_config('vcenter_hostname', vcenter_hostname)
-
-    def _get_simulator_address(self):
-        return get_docker_container_ip(self.args, self.container_name)
+        ports = [
+            443,
+            8080,
+            8989,
+            5000,  # control port for flask app in simulator
+        ]
+
+        descriptor = run_support_container(
+            self.args,
+            self.platform,
+            self.image,
+            self.DOCKER_SIMULATOR_NAME,
+            ports,
+            allow_existing=True,
+            cleanup=True,
+        )
+
+        descriptor.register(self.args)
+
+        self._set_cloud_config('vcenter_hostname', self.DOCKER_SIMULATOR_NAME)

     def _setup_static(self):
         if not os.path.exists(self.config_static_path):
             raise ApplicationError('Configuration file does not exist: %s' % self.config_static_path)
-
-        parser = ConfigParser({
-            'vcenter_port': '443',
-            'vmware_proxy_host': '',
-            'vmware_proxy_port': '8080'})
-        parser.read(self.config_static_path)
-
-        if parser.get('DEFAULT', 'vmware_validate_certs').lower() in ('no', 'false'):
-            self.insecure = True
-        proxy_host = parser.get('DEFAULT', 'vmware_proxy_host')
-        proxy_port = int(parser.get('DEFAULT', 'vmware_proxy_port'))
-        if proxy_host and proxy_port:
-            self.proxy = 'http://%s:%d' % (proxy_host, proxy_port)


 class VcenterEnvironment(CloudEnvironment):
     """VMware vcenter/esx environment plugin. Updates integration test environment after delegation."""
@@ -208,10 +121,6 @@ class VcenterEnvironment(CloudEnvironment):
             vcenter_username='user',
             vcenter_password='pass',
         )

-        # Shippable starts ansible-test from withing an existing container,
-        # and in this case, we don't have to change the vcenter port.
-        if not self.args.docker and not get_docker_container_id():
-            ansible_vars['vcenter_port'] = '1443'
-
         for key, value in ansible_vars.items():
             if key.endswith('_password'):

@@ -18,22 +18,13 @@ from ..util import (
 class VultrCloudProvider(CloudProvider):
     """Checks if a configuration file has been passed or fixtures are going to be used for testing"""

     def __init__(self, args):
         """
         :type args: TestConfig
         """
         super(VultrCloudProvider, self).__init__(args)

-    def filter(self, targets, exclude):
-        """Filter out the cloud tests when the necessary config and resources are not available.
-        :type targets: tuple[TestTarget]
-        :type exclude: list[str]
-        """
-        if os.path.isfile(self.config_static_path):
-            return
-
-        super(VultrCloudProvider, self).filter(targets, exclude)
+        self.uses_config = True

     def setup(self):
         """Setup the cloud resource before delegation and register a cleanup callback."""

@@ -9,7 +9,6 @@ from . import types as t
 from .util import (
     find_python,
-    generate_password,
     generate_pip_command,
     ApplicationError,
 )
@@ -126,13 +125,7 @@ class EnvironmentConfig(CommonConfig):
         if self.delegate:
             self.requirements = True

-        self.inject_httptester = args.inject_httptester if 'inject_httptester' in args else False  # type: bool
-        self.httptester = docker_qualify_image(args.httptester if 'httptester' in args else '')  # type: str
-
-        krb5_password = args.httptester_krb5_password if 'httptester_krb5_password' in args else ''
-        self.httptester_krb5_password = krb5_password or generate_password()  # type: str
-
-        if self.get_delegated_completion().get('httptester', 'enabled') == 'disabled':
-            self.httptester = False
+        self.containers = args.containers  # type: t.Optional[t.Dict[str, t.Dict[str, t.Dict[str, t.Any]]]]

         if self.get_delegated_completion().get('pip-check', 'enabled') == 'disabled':
             self.pip_check = False
@@ -233,9 +226,6 @@ class ShellConfig(EnvironmentConfig):
         self.raw = args.raw  # type: bool

-        if self.raw:
-            self.httptester = False


 class SanityConfig(TestConfig):
     """Configuration for the sanity command."""

@@ -0,0 +1,755 @@
"""High level functions for working with containers."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import contextlib
import json
import random
import time
import uuid
from . import types as t
from .encoding import (
Text,
)
from .util import (
ApplicationError,
SubprocessError,
display,
get_host_ip,
sanitize_host_name,
)
from .util_common import (
named_temporary_file,
)
from .config import (
EnvironmentConfig,
IntegrationConfig,
WindowsIntegrationConfig,
)
from .docker_util import (
ContainerNotFoundError,
DockerInspect,
docker_exec,
docker_inspect,
docker_pull,
docker_rm,
docker_run,
docker_start,
get_docker_command,
get_docker_container_id,
get_docker_host_ip,
)
from .ansible_util import (
run_playbook,
)
from .core_ci import (
SshKey,
)
from .target import (
IntegrationTarget,
)
from .ssh import (
SshConnectionDetail,
SshProcess,
create_ssh_port_forwards,
create_ssh_port_redirects,
generate_ssh_inventory,
)
# information about support containers provisioned by the current ansible-test instance
support_containers = {} # type: t.Dict[str, ContainerDescriptor]
class HostType:
"""Enum representing the types of hosts involved in running tests."""
origin = 'origin'
control = 'control'
managed = 'managed'
def run_support_container(
args, # type: EnvironmentConfig
context, # type: str
image, # type: str
        name,  # type: str
ports, # type: t.List[int]
aliases=None, # type: t.Optional[t.List[str]]
start=True, # type: bool
allow_existing=False, # type: bool
cleanup=None, # type: t.Optional[bool]
cmd=None, # type: t.Optional[t.List[str]]
env=None, # type: t.Optional[t.Dict[str, str]]
): # type: (...) -> ContainerDescriptor
"""
Start a container used to support tests, but not run them.
Containers created this way will be accessible from tests.
"""
if name in support_containers:
raise Exception('Container already defined: %s' % name)
# SSH is required for publishing ports, as well as modifying the hosts file.
# Initializing the SSH key here makes sure it is available for use after delegation.
SshKey(args)
aliases = aliases or [sanitize_host_name(name)]
current_container_id = get_docker_container_id()
publish_ports = True
docker_command = get_docker_command().command
if docker_command == 'docker':
if args.docker:
publish_ports = False # publishing ports is not needed when test hosts are on the docker network
if current_container_id:
publish_ports = False # publishing ports is pointless if already running in a docker container
options = ['--name', name]
if start:
options.append('-d')
if publish_ports:
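        # A bare '-p PORT' (no host port) publishes the container port to an
        # ephemeral host port chosen by the container engine; register() later
        # reads the assigned ports back via `docker inspect`.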
for port in ports:
options.extend(['-p', str(port)])
if env:
for key, value in env.items():
options.extend(['--env', '%s=%s' % (key, value)])
support_container_id = None
if allow_existing:
try:
container = docker_inspect(args, name)
except ContainerNotFoundError:
container = None
if container:
support_container_id = container.id
if not container.running:
display.info('Ignoring existing "%s" container which is not running.' % name, verbosity=1)
support_container_id = None
elif not container.image:
display.info('Ignoring existing "%s" container which has the wrong image.' % name, verbosity=1)
support_container_id = None
elif publish_ports and not all(port and len(port) == 1 for port in [container.get_tcp_port(port) for port in ports]):
display.info('Ignoring existing "%s" container which does not have the required published ports.' % name, verbosity=1)
support_container_id = None
if not support_container_id:
docker_rm(args, name)
if support_container_id:
display.info('Using existing "%s" container.' % name)
running = True
existing = True
else:
display.info('Starting new "%s" container.' % name)
docker_pull(args, image)
support_container_id = docker_run(args, image, options, create_only=not start, cmd=cmd)
running = start
existing = False
if cleanup is None:
cleanup = not existing
descriptor = ContainerDescriptor(
image,
context,
name,
support_container_id,
ports,
aliases,
publish_ports,
running,
existing,
cleanup,
env,
)
if not support_containers:
atexit.register(cleanup_containers, args)
support_containers[name] = descriptor
return descriptor
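# For illustration, a minimal hypothetical caller (image, context and name are
# made up; real callers appear in the cloud plugins above):
#
#   descriptor = run_support_container(args, 'example', 'quay.io/example/simulator:1.0', 'example-sim', [8080])
#   descriptor.register(args)  # record runtime details once the container is running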
def get_container_database(args): # type: (EnvironmentConfig) -> ContainerDatabase
"""Return the current container database, creating it as needed, or returning the one provided on the command line through delegation."""
if not args.containers:
args.containers = create_container_database(args)
elif isinstance(args.containers, (str, bytes, Text)):
args.containers = ContainerDatabase.from_dict(json.loads(args.containers))
display.info('>>> Container Database\n%s' % json.dumps(args.containers.to_dict(), indent=4, sort_keys=True), verbosity=3)
return args.containers
class ContainerAccess:
"""Information needed for one test host to access a single container supporting tests."""
def __init__(self, host_ip, names, ports, forwards): # type: (str, t.List[str], t.Optional[t.List[int]], t.Optional[t.Dict[int, int]]) -> None
# if forwards is set
# this is where forwards are sent (it is the host that provides an indirect connection to the containers on alternate ports)
# /etc/hosts uses 127.0.0.1 (since port redirection will be used)
# else
# this is what goes into /etc/hosts (it is the container's direct IP)
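        # For example (hypothetical values): forwards={80: 8080} means the
        # container's port 80 is reached through access port 8080, while
        # ports=[80] with no forwards means port 80 is reached directly at host_ip.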
self.host_ip = host_ip
# primary name + any aliases -- these go into the hosts file and reference the appropriate ip for the origin/control/managed host
self.names = names
# ports available (set if forwards is not set)
self.ports = ports
# port redirections to create through host_ip -- if not set, no port redirections will be used
self.forwards = forwards
def port_map(self): # type: () -> t.List[t.Tuple[int, int]]
"""Return a port map for accessing this container."""
if self.forwards:
ports = list(self.forwards.items())
else:
ports = [(port, port) for port in self.ports]
return ports
@staticmethod
def from_dict(data): # type: (t.Dict[str, t.Any]) -> ContainerAccess
"""Return a ContainerAccess instance from the given dict."""
forwards = data.get('forwards')
if forwards:
forwards = dict((int(key), value) for key, value in forwards.items())
return ContainerAccess(
host_ip=data['host_ip'],
names=data['names'],
ports=data.get('ports'),
forwards=forwards,
)
def to_dict(self): # type: () -> t.Dict[str, t.Any]
"""Return a dict of the current instance."""
value = dict(
host_ip=self.host_ip,
names=self.names,
)
if self.ports:
value.update(ports=self.ports)
if self.forwards:
value.update(forwards=self.forwards)
return value
class ContainerDatabase:
"""Database of running containers used to support tests."""
def __init__(self, data): # type: (t.Dict[str, t.Dict[str, t.Dict[str, ContainerAccess]]]) -> None
self.data = data
@staticmethod
def from_dict(data): # type: (t.Dict[str, t.Any]) -> ContainerDatabase
"""Return a ContainerDatabase instance from the given dict."""
return ContainerDatabase(dict((access_name,
dict((context_name,
dict((container_name, ContainerAccess.from_dict(container))
for container_name, container in containers.items()))
for context_name, containers in contexts.items()))
for access_name, contexts in data.items()))
def to_dict(self): # type: () -> t.Dict[str, t.Any]
"""Return a dict of the current instance."""
return dict((access_name,
dict((context_name,
dict((container_name, container.to_dict())
for container_name, container in containers.items()))
for context_name, containers in contexts.items()))
for access_name, contexts in self.data.items())
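# For illustration, a serialized database (as passed to delegated instances via
# the --containers option) nests access type, context and container name; the
# values below are hypothetical:
#
#   {
#       "control": {
#           "httptester": {
#               "http-test-container": {
#                   "host_ip": "172.17.0.2",
#                   "names": ["ansible.http.tests", "sni1.ansible.http.tests"],
#                   "ports": [80, 443]
#               }
#           }
#       }
#   }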
def local_ssh(args): # type: (EnvironmentConfig) -> SshConnectionDetail
"""Return SSH connection details for localhost, connecting as root to the default SSH port."""
return SshConnectionDetail('localhost', 'localhost', None, 'root', SshKey(args).key, args.python_executable)
def create_container_database(args): # type: (EnvironmentConfig) -> ContainerDatabase
"""Create and return a container database with information necessary for all test hosts to make use of relevant support containers."""
origin = {} # type: t.Dict[str, t.Dict[str, ContainerAccess]]
control = {} # type: t.Dict[str, t.Dict[str, ContainerAccess]]
managed = {} # type: t.Dict[str, t.Dict[str, ContainerAccess]]
for name, container in support_containers.items():
if container.details.published_ports:
published_access = ContainerAccess(
host_ip=get_docker_host_ip(),
names=container.aliases,
ports=None,
forwards=dict((port, published_port) for port, published_port in container.details.published_ports.items()),
)
else:
published_access = None # no published access without published ports (ports are only published if needed)
if container.details.container_ip:
# docker containers, and rootfull podman containers should have a container IP address
container_access = ContainerAccess(
host_ip=container.details.container_ip,
names=container.aliases,
ports=container.ports,
forwards=None,
)
elif get_docker_command().command == 'podman':
# published ports for rootless podman containers should be accessible from the host's IP
container_access = ContainerAccess(
host_ip=get_host_ip(),
names=container.aliases,
ports=None,
forwards=dict((port, published_port) for port, published_port in container.details.published_ports.items()),
)
else:
container_access = None # no container access without an IP address
if get_docker_container_id():
if not container_access:
raise Exception('Missing IP address for container: %s' % name)
origin_context = origin.setdefault(container.context, {})
origin_context[name] = container_access
elif not published_access:
pass # origin does not have network access to the containers
else:
origin_context = origin.setdefault(container.context, {})
origin_context[name] = published_access
if args.remote:
pass # SSH forwarding required
elif args.docker or get_docker_container_id():
if container_access:
control_context = control.setdefault(container.context, {})
control_context[name] = container_access
else:
raise Exception('Missing IP address for container: %s' % name)
else:
if not published_access:
raise Exception('Missing published ports for container: %s' % name)
control_context = control.setdefault(container.context, {})
control_context[name] = published_access
data = {
HostType.origin: origin,
HostType.control: control,
HostType.managed: managed,
}
data = dict((key, value) for key, value in data.items() if value)
return ContainerDatabase(data)
class SupportContainerContext:
"""Context object for tracking information relating to access of support containers."""
def __init__(self, containers, process): # type: (ContainerDatabase, t.Optional[SshProcess]) -> None
self.containers = containers
self.process = process
def close(self): # type: () -> None
"""Close the process maintaining the port forwards."""
if not self.process:
return # forwarding not in use
self.process.terminate()
display.info('Waiting for the session SSH port forwarding process to terminate.', verbosity=1)
self.process.wait()
@contextlib.contextmanager
def support_container_context(
args, # type: EnvironmentConfig
ssh, # type: t.Optional[SshConnectionDetail]
): # type: (...) -> t.Optional[ContainerDatabase]
"""Create a context manager for integration tests that use support containers."""
if not isinstance(args, IntegrationConfig):
yield None # containers are only used for integration tests
return
containers = get_container_database(args)
if not containers.data:
yield ContainerDatabase({}) # no containers are being used, return an empty database
return
context = create_support_container_context(args, ssh, containers)
try:
yield context.containers
finally:
context.close()
def create_support_container_context(
args, # type: EnvironmentConfig
ssh, # type: t.Optional[SshConnectionDetail]
containers, # type: ContainerDatabase
): # type: (...) -> SupportContainerContext
"""Context manager that provides SSH port forwards. Returns updated container metadata."""
host_type = HostType.control
revised = ContainerDatabase(containers.data.copy())
source = revised.data.pop(HostType.origin, None)
container_map = {} # type: t.Dict[t.Tuple[str, int], t.Tuple[str, str, int]]
if host_type not in revised.data:
if not source:
raise Exception('Missing origin container details.')
for context_name, context in source.items():
for container_name, container in context.items():
for port, access_port in container.port_map():
container_map[(container.host_ip, access_port)] = (context_name, container_name, port)
if not container_map:
return SupportContainerContext(revised, None)
if not ssh:
raise Exception('The %s host was not pre-configured for container access and SSH forwarding is not available.' % host_type)
forwards = list(container_map.keys())
process = create_ssh_port_forwards(args, ssh, forwards)
result = SupportContainerContext(revised, process)
try:
port_forwards = process.collect_port_forwards()
contexts = {}
for forward, forwarded_port in port_forwards.items():
access_host, access_port = forward
context_name, container_name, container_port = container_map[(access_host, access_port)]
container = source[context_name][container_name]
context = contexts.setdefault(context_name, {})
forwarded_container = context.setdefault(container_name, ContainerAccess('127.0.0.1', container.names, None, {}))
forwarded_container.forwards[container_port] = forwarded_port
display.info('Container "%s" port %d available at %s:%d is forwarded over SSH as port %d.' % (
container_name, container_port, access_host, access_port, forwarded_port,
), verbosity=1)
revised.data[host_type] = contexts
return result
except Exception:
result.close()
raise
class ContainerDescriptor:
"""Information about a support container."""
def __init__(self,
image, # type: str
context, # type: str
name, # type: str
container_id, # type: str
ports, # type: t.List[int]
aliases, # type: t.List[str]
publish_ports, # type: bool
running, # type: bool
existing, # type: bool
cleanup, # type: bool
env, # type: t.Optional[t.Dict[str, str]]
): # type: (...) -> None
self.image = image
self.context = context
self.name = name
self.container_id = container_id
self.ports = ports
self.aliases = aliases
self.publish_ports = publish_ports
self.running = running
self.existing = existing
self.cleanup = cleanup
self.env = env
self.details = None # type: t.Optional[SupportContainer]
def start(self, args): # type: (EnvironmentConfig) -> None
"""Start the container. Used for containers which are created, but not started."""
docker_start(args, self.name)
def register(self, args): # type: (EnvironmentConfig) -> SupportContainer
"""Record the container's runtime details. Must be used after the container has been started."""
if self.details:
raise Exception('Container already registered: %s' % self.name)
try:
container = docker_inspect(args, self.container_id)
except ContainerNotFoundError:
if not args.explain:
raise
# provide enough mock data to keep --explain working
container = DockerInspect(args, dict(
Id=self.container_id,
NetworkSettings=dict(
IPAddress='127.0.0.1',
Ports=dict(('%d/tcp' % port, [dict(HostPort=random.randint(30000, 40000) if self.publish_ports else port)]) for port in self.ports),
),
Config=dict(
Env=['%s=%s' % (key, value) for key, value in self.env.items()] if self.env else [],
),
))
support_container_ip = container.get_ip_address()
if self.publish_ports:
# inspect the support container to locate the published ports
tcp_ports = dict((port, container.get_tcp_port(port)) for port in self.ports)
if any(not config or len(config) != 1 for config in tcp_ports.values()):
raise ApplicationError('Unexpected `docker inspect` results for published TCP ports:\n%s' % json.dumps(tcp_ports, indent=4, sort_keys=True))
published_ports = dict((port, int(config[0]['HostPort'])) for port, config in tcp_ports.items())
else:
published_ports = {}
self.details = SupportContainer(
container,
support_container_ip,
published_ports,
)
return self.details
class SupportContainer:
"""Information about a running support container available for use by tests."""
def __init__(self,
container, # type: DockerInspect
container_ip, # type: str
published_ports, # type: t.Dict[int, int]
): # type: (...) -> None
self.container = container
self.container_ip = container_ip
self.published_ports = published_ports
def wait_for_file(args, # type: EnvironmentConfig
container_name, # type: str
path, # type: str
sleep, # type: int
tries, # type: int
check=None, # type: t.Optional[t.Callable[[str], bool]]
): # type: (...) -> str
"""Wait for the specified file to become available in the requested container and return its contents."""
display.info('Waiting for container "%s" to provide file: %s' % (container_name, path))
    # make up to `tries` attempts, sleeping between attempts
    for _iteration in range(1, tries + 1):
        if _iteration > 1:
            time.sleep(sleep)
try:
stdout = docker_exec(args, container_name, ['dd', 'if=%s' % path], capture=True)[0]
except SubprocessError:
continue
if not check or check(stdout):
return stdout
raise ApplicationError('Timeout waiting for container "%s" to provide file: %s' % (container_name, path))
def cleanup_containers(args): # type: (EnvironmentConfig) -> None
"""Clean up containers."""
for container in support_containers.values():
if container.cleanup:
docker_rm(args, container.container_id)
else:
display.notice('Remember to run `docker rm -f %s` when finished testing.' % container.name)
def create_hosts_entries(context): # type: (t.Dict[str, ContainerAccess]) -> t.List[str]
"""Return hosts entries for the specified context."""
entries = []
unique_id = uuid.uuid4()
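    # The unique ID marks the entries added for this test run so they can be
    # matched and removed when the hosts file is restored.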
for container in context.values():
# forwards require port redirection through localhost
if container.forwards:
host_ip = '127.0.0.1'
else:
host_ip = container.host_ip
entries.append('%s %s # ansible-test %s' % (host_ip, ' '.join(container.names), unique_id))
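        # e.g. "172.17.0.2 ansible.http.tests sni1.ansible.http.tests # ansible-test <uuid>" (hypothetical values)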
return entries
def create_container_hooks(
args, # type: IntegrationConfig
managed_connections, # type: t.Optional[t.List[SshConnectionDetail]]
): # type: (...) -> t.Tuple[t.Optional[t.Callable[[IntegrationTarget], None]], t.Optional[t.Callable[[IntegrationTarget], None]]]
"""Return pre and post target callbacks for enabling and disabling container access for each test target."""
containers = get_container_database(args)
control_contexts = containers.data.get(HostType.control)
if control_contexts:
managed_contexts = containers.data.get(HostType.managed)
if not managed_contexts:
managed_contexts = create_managed_contexts(control_contexts)
control_type = 'posix'
if isinstance(args, WindowsIntegrationConfig):
managed_type = 'windows'
else:
managed_type = 'posix'
control_state = {}
managed_state = {}
control_connections = [local_ssh(args)]
def pre_target(target):
forward_ssh_ports(args, control_connections, '%s_hosts_prepare.yml' % control_type, control_state, target, HostType.control, control_contexts)
forward_ssh_ports(args, managed_connections, '%s_hosts_prepare.yml' % managed_type, managed_state, target, HostType.managed, managed_contexts)
def post_target(target):
cleanup_ssh_ports(args, control_connections, '%s_hosts_restore.yml' % control_type, control_state, target, HostType.control)
cleanup_ssh_ports(args, managed_connections, '%s_hosts_restore.yml' % managed_type, managed_state, target, HostType.managed)
else:
pre_target, post_target = None, None
return pre_target, post_target
def create_managed_contexts(control_contexts): # type: (t.Dict[str, t.Dict[str, ContainerAccess]]) -> t.Dict[str, t.Dict[str, ContainerAccess]]
"""Create managed contexts from the given control contexts."""
managed_contexts = {}
for context_name, control_context in control_contexts.items():
managed_context = managed_contexts[context_name] = {}
for container_name, control_container in control_context.items():
managed_context[container_name] = ContainerAccess(control_container.host_ip, control_container.names, None, dict(control_container.port_map()))
return managed_contexts
def forward_ssh_ports(
args, # type: IntegrationConfig
ssh_connections, # type: t.Optional[t.List[SshConnectionDetail]]
playbook, # type: str
target_state, # type: t.Dict[str, t.Tuple[t.List[str], t.List[SshProcess]]]
target, # type: IntegrationTarget
host_type, # type: str
contexts, # type: t.Dict[str, t.Dict[str, ContainerAccess]]
): # type: (...) -> None
"""Configure port forwarding using SSH and write hosts file entries."""
if ssh_connections is None:
return
test_context = None
for context_name, context in contexts.items():
context_alias = 'cloud/%s/' % context_name
if context_alias in target.aliases:
test_context = context
break
if not test_context:
return
if not ssh_connections:
raise Exception('The %s host was not pre-configured for container access and SSH forwarding is not available.' % host_type)
redirects = [] # type: t.List[t.Tuple[int, str, int]]
messages = []
for container_name, container in test_context.items():
explain = []
for container_port, access_port in container.port_map():
if container.forwards:
redirects.append((container_port, container.host_ip, access_port))
explain.append('%d -> %s:%d' % (container_port, container.host_ip, access_port))
else:
explain.append('%s:%d' % (container.host_ip, container_port))
if explain:
if container.forwards:
message = 'Port forwards for the "%s" container have been established on the %s host' % (container_name, host_type)
else:
message = 'Ports for the "%s" container are available on the %s host as' % (container_name, host_type)
messages.append('%s:\n%s' % (message, '\n'.join(explain)))
hosts_entries = create_hosts_entries(test_context)
inventory = generate_ssh_inventory(ssh_connections)
with named_temporary_file(args, 'ssh-inventory-', '.json', None, inventory) as inventory_path:
run_playbook(args, inventory_path, playbook, dict(hosts_entries=hosts_entries))
ssh_processes = [] # type: t.List[SshProcess]
if redirects:
for ssh in ssh_connections:
ssh_processes.append(create_ssh_port_redirects(args, ssh, redirects))
target_state[target.name] = (hosts_entries, ssh_processes)
for message in messages:
display.info(message, verbosity=1)
def cleanup_ssh_ports(
args, # type: IntegrationConfig
ssh_connections, # type: t.List[SshConnectionDetail]
playbook, # type: str
target_state, # type: t.Dict[str, t.Tuple[t.List[str], t.List[SshProcess]]]
target, # type: IntegrationTarget
host_type, # type: str
): # type: (...) -> None
"""Stop previously configured SSH port forwarding and remove previously written hosts file entries."""
state = target_state.pop(target.name, None)
if not state:
return
(hosts_entries, ssh_processes) = state
inventory = generate_ssh_inventory(ssh_connections)
with named_temporary_file(args, 'ssh-inventory-', '.json', None, inventory) as inventory_path:
run_playbook(args, inventory_path, playbook, dict(hosts_entries=hosts_entries))
if ssh_processes:
for process in ssh_processes:
process.terminate()
        display.info('Waiting for the %s host SSH port forwarding process(es) to terminate.' % host_type, verbosity=1)
for process in ssh_processes:
process.wait()
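# For illustration, the hooks returned by create_container_hooks() are intended
# to wrap each test target roughly as follows (run_target is hypothetical):
#
#   pre_target, post_target = create_container_hooks(args, managed_connections)
#
#   if pre_target:
#       pre_target(target)  # write hosts entries and start SSH port forwards
#   try:
#       run_target(target)
#   finally:
#       if post_target:
#           post_target(target)  # restore hosts file and stop SSH port forwards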

@@ -567,6 +567,9 @@ class SshKey:
        if not os.path.isfile(key) or not os.path.isfile(pub):
            run_command(args, ['ssh-keygen', '-m', 'PEM', '-q', '-t', self.KEY_TYPE, '-N', '', '-f', key])

+            if args.explain:
+                return key, pub
+
            # newer ssh-keygen PEM output (such as on RHEL 8.1) is not recognized by paramiko
            key_contents = read_text_file(key)
            key_contents = re.sub(r'(BEGIN|END) PRIVATE KEY', r'\1 RSA PRIVATE KEY', key_contents)

@ -2,6 +2,7 @@
from __future__ import (absolute_import, division, print_function) from __future__ import (absolute_import, division, print_function)
__metaclass__ = type __metaclass__ = type
import json
import os import os
import re import re
import sys import sys
@ -16,11 +17,8 @@ from .io import (
from .executor import ( from .executor import (
SUPPORTED_PYTHON_VERSIONS, SUPPORTED_PYTHON_VERSIONS,
HTTPTESTER_HOSTS,
create_shell_command, create_shell_command,
run_httptester,
run_pypi_proxy, run_pypi_proxy,
start_httptester,
get_python_interpreter, get_python_interpreter,
get_python_version, get_python_version,
) )
@ -69,24 +67,19 @@ from .util_common import (
from .docker_util import ( from .docker_util import (
docker_exec, docker_exec,
docker_get, docker_get,
docker_inspect,
docker_pull, docker_pull,
docker_put, docker_put,
docker_rm, docker_rm,
docker_run, docker_run,
docker_available,
docker_network_disconnect, docker_network_disconnect,
get_docker_networks, get_docker_command,
get_docker_preferred_network_name,
get_docker_hostname, get_docker_hostname,
is_docker_user_defined_network,
) )
from .cloud import ( from .containers import (
get_cloud_providers, SshConnectionDetail,
) support_container_context,
from .target import (
IntegrationTarget,
) )
from .data import ( from .data import (
@ -119,12 +112,11 @@ def check_delegation_args(args):
get_python_version(args, get_remote_completion(), args.remote) get_python_version(args, get_remote_completion(), args.remote)
def delegate(args, exclude, require, integration_targets): def delegate(args, exclude, require):
""" """
:type args: EnvironmentConfig :type args: EnvironmentConfig
:type exclude: list[str] :type exclude: list[str]
:type require: list[str] :type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
:rtype: bool :rtype: bool
""" """
if isinstance(args, TestConfig): if isinstance(args, TestConfig):
@ -137,31 +129,30 @@ def delegate(args, exclude, require, integration_targets):
args.metadata.to_file(args.metadata_path) args.metadata.to_file(args.metadata_path)
try: try:
return delegate_command(args, exclude, require, integration_targets) return delegate_command(args, exclude, require)
finally: finally:
args.metadata_path = None args.metadata_path = None
else: else:
return delegate_command(args, exclude, require, integration_targets) return delegate_command(args, exclude, require)
def delegate_command(args, exclude, require, integration_targets): def delegate_command(args, exclude, require):
""" """
:type args: EnvironmentConfig :type args: EnvironmentConfig
:type exclude: list[str] :type exclude: list[str]
:type require: list[str] :type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
:rtype: bool :rtype: bool
""" """
if args.venv: if args.venv:
delegate_venv(args, exclude, require, integration_targets) delegate_venv(args, exclude, require)
return True return True
if args.docker: if args.docker:
delegate_docker(args, exclude, require, integration_targets) delegate_docker(args, exclude, require)
return True return True
if args.remote: if args.remote:
delegate_remote(args, exclude, require, integration_targets) delegate_remote(args, exclude, require)
return True return True
return False return False
@ -170,7 +161,6 @@ def delegate_command(args, exclude, require, integration_targets):
def delegate_venv(args, # type: EnvironmentConfig def delegate_venv(args, # type: EnvironmentConfig
exclude, # type: t.List[str] exclude, # type: t.List[str]
require, # type: t.List[str] require, # type: t.List[str]
integration_targets, # type: t.Tuple[IntegrationTarget, ...]
): # type: (...) -> None ): # type: (...) -> None
"""Delegate ansible-test execution to a virtual environment using venv or virtualenv.""" """Delegate ansible-test execution to a virtual environment using venv or virtualenv."""
if args.python: if args.python:
@ -178,12 +168,6 @@ def delegate_venv(args, # type: EnvironmentConfig
else: else:
versions = SUPPORTED_PYTHON_VERSIONS versions = SUPPORTED_PYTHON_VERSIONS
if args.httptester:
needs_httptester = sorted(target.name for target in integration_targets if 'needs/httptester/' in target.aliases)
if needs_httptester:
display.warning('Use --docker or --remote to enable httptester for tests marked "needs/httptester": %s' % ', '.join(needs_httptester))
if args.venv_system_site_packages: if args.venv_system_site_packages:
suffix = '-ssp' suffix = '-ssp'
else: else:
@ -224,30 +208,26 @@ def delegate_venv(args, # type: EnvironmentConfig
PYTHONPATH=library_path, PYTHONPATH=library_path,
) )
run_command(args, cmd, env=env) with support_container_context(args, None) as containers:
if containers:
cmd.extend(['--containers', json.dumps(containers.to_dict())])
run_command(args, cmd, env=env)
def delegate_docker(args, exclude, require, integration_targets): def delegate_docker(args, exclude, require):
""" """
:type args: EnvironmentConfig :type args: EnvironmentConfig
:type exclude: list[str] :type exclude: list[str]
:type require: list[str] :type require: list[str]
:type integration_targets: tuple[IntegrationTarget]
""" """
get_docker_command(required=True) # fail early if docker is not available
test_image = args.docker test_image = args.docker
privileged = args.docker_privileged privileged = args.docker_privileged
if isinstance(args, ShellConfig):
use_httptester = args.httptester
else:
use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in integration_targets)
if use_httptester:
docker_pull(args, args.httptester)
docker_pull(args, test_image) docker_pull(args, test_image)
httptester_id = None
test_id = None test_id = None
success = False success = False
@ -295,11 +275,6 @@ def delegate_docker(args, exclude, require, integration_targets):
try: try:
create_payload(args, local_source_fd.name) create_payload(args, local_source_fd.name)
if use_httptester:
httptester_id = run_httptester(args)
else:
httptester_id = None
test_options = [ test_options = [
'--detach', '--detach',
'--volume', '/sys/fs/cgroup:/sys/fs/cgroup:ro', '--volume', '/sys/fs/cgroup:/sys/fs/cgroup:ro',
@ -320,28 +295,7 @@ def delegate_docker(args, exclude, require, integration_targets):
if get_docker_hostname() != 'localhost' or os.path.exists(docker_socket): if get_docker_hostname() != 'localhost' or os.path.exists(docker_socket):
test_options += ['--volume', '%s:%s' % (docker_socket, docker_socket)] test_options += ['--volume', '%s:%s' % (docker_socket, docker_socket)]
if httptester_id: test_id = docker_run(args, test_image, options=test_options)
test_options += ['--env', 'HTTPTESTER=1', '--env', 'KRB5_PASSWORD=%s' % args.httptester_krb5_password]
network = get_docker_preferred_network_name(args)
if not is_docker_user_defined_network(network):
# legacy links are required when using the default bridge network instead of user-defined networks
for host in HTTPTESTER_HOSTS:
test_options += ['--link', '%s:%s' % (httptester_id, host)]
if isinstance(args, IntegrationConfig):
cloud_platforms = get_cloud_providers(args)
for cloud_platform in cloud_platforms:
test_options += cloud_platform.get_docker_run_options()
test_id = docker_run(args, test_image, options=test_options)[0]
if args.explain:
test_id = 'test_id'
else:
test_id = test_id.strip()
setup_sh = read_text_file(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'docker.sh')) setup_sh = read_text_file(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'docker.sh'))
@ -377,7 +331,8 @@ def delegate_docker(args, exclude, require, integration_targets):
docker_exec(args, test_id, cmd + ['--requirements-mode', 'only'], options=cmd_options) docker_exec(args, test_id, cmd + ['--requirements-mode', 'only'], options=cmd_options)
networks = get_docker_networks(args, test_id) container = docker_inspect(args, test_id)
networks = container.get_network_names()
if networks is not None: if networks is not None:
for network in networks: for network in networks:
@@ -391,7 +346,11 @@ def delegate_docker(args, exclude, require, integration_targets):
                 cmd_options += ['--user', 'pytest']

             try:
-                docker_exec(args, test_id, cmd, options=cmd_options)
+                with support_container_context(args, None) as containers:
+                    if containers:
+                        cmd.extend(['--containers', json.dumps(containers.to_dict())])
+
+                    docker_exec(args, test_id, cmd, options=cmd_options)
                 # docker_exec will throw SubprocessError if not successful
                 # If we make it here, all the prep work earlier and the docker_exec line above were all successful.
                 success = True
@@ -402,16 +361,21 @@ def delegate_docker(args, exclude, require, integration_targets):
             remote_results_name = os.path.basename(remote_results_root)
             remote_temp_file = os.path.join('/root', remote_results_name + '.tgz')

-            make_dirs(local_test_root)  # make sure directory exists for collections which have no tests
-
-            with tempfile.NamedTemporaryFile(prefix='ansible-result-', suffix='.tgz') as local_result_fd:
-                docker_exec(args, test_id, ['tar', 'czf', remote_temp_file, '--exclude', ResultType.TMP.name, '-C', remote_test_root, remote_results_name])
-                docker_get(args, test_id, remote_temp_file, local_result_fd.name)
-                run_command(args, ['tar', 'oxzf', local_result_fd.name, '-C', local_test_root])
+            try:
+                make_dirs(local_test_root)  # make sure directory exists for collections which have no tests
+
+                with tempfile.NamedTemporaryFile(prefix='ansible-result-', suffix='.tgz') as local_result_fd:
+                    docker_exec(args, test_id, ['tar', 'czf', remote_temp_file, '--exclude', ResultType.TMP.name, '-C', remote_test_root,
+                                                remote_results_name])
+                    docker_get(args, test_id, remote_temp_file, local_result_fd.name)
+                    run_command(args, ['tar', 'oxzf', local_result_fd.name, '-C', local_test_root])
+            except Exception as ex:  # pylint: disable=broad-except
+                if success:
+                    raise  # download errors are fatal, but only if tests succeeded
+
+                # handle download error here to avoid masking test failures
+                display.warning('Failed to download results while handling an exception: %s' % ex)
     finally:
-        if httptester_id:
-            docker_rm(args, httptester_id)
-
         if pypi_proxy_id:
             docker_rm(args, pypi_proxy_id)

@@ -420,42 +384,26 @@ def delegate_docker(args, exclude, require, integration_targets):
             docker_rm(args, test_id)
-def delegate_remote(args, exclude, require, integration_targets):
+def delegate_remote(args, exclude, require):
     """
     :type args: EnvironmentConfig
     :type exclude: list[str]
     :type require: list[str]
-    :type integration_targets: tuple[IntegrationTarget]
     """
     remote = args.parsed_remote

     core_ci = AnsibleCoreCI(args, remote.platform, remote.version, stage=args.remote_stage, provider=args.remote_provider, arch=remote.arch)
     success = False
-    raw = False
-
-    if isinstance(args, ShellConfig):
-        use_httptester = args.httptester
-        raw = args.raw
-    else:
-        use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in integration_targets)
-
-    if use_httptester and not docker_available():
-        display.warning('Assuming --disable-httptester since `docker` is not available.')
-        use_httptester = False
-
-    httptester_id = None
+
     ssh_options = []
     content_root = None

     try:
         core_ci.start()
-
-        if use_httptester:
-            httptester_id, ssh_options = start_httptester(args)
-
         core_ci.wait()

         python_version = get_python_version(args, get_remote_completion(), args.remote)
+        python_interpreter = None

         if remote.platform == 'windows':
             # Windows doesn't need the ansible-test fluff, just run the SSH command
@@ -463,7 +411,7 @@ def delegate_remote(args, exclude, require, integration_targets):
             manage.setup(python_version)

             cmd = ['powershell.exe']
-        elif raw:
+        elif isinstance(args, ShellConfig) and args.raw:
             manage = ManagePosixCI(core_ci)
             manage.setup(python_version)
@@ -487,9 +435,6 @@ def delegate_remote(args, exclude, require, integration_targets):
             cmd = generate_command(args, python_interpreter, os.path.join(ansible_root, 'bin'), content_root, options, exclude, require)

-            if httptester_id:
-                cmd += ['--inject-httptester', '--httptester-krb5-password', args.httptester_krb5_password]
-
             if isinstance(args, TestConfig):
                 if args.coverage and not args.coverage_label:
                     cmd += ['--coverage-label', 'remote-%s-%s' % (remote.platform, remote.version)]
@@ -502,14 +447,16 @@ def delegate_remote(args, exclude, require, integration_targets):
             if isinstance(args, UnitsConfig) and not args.python:
                 cmd += ['--python', 'default']

-            if isinstance(args, IntegrationConfig):
-                cloud_platforms = get_cloud_providers(args)
-
-                for cloud_platform in cloud_platforms:
-                    ssh_options += cloud_platform.get_remote_ssh_options()
-
         try:
-            manage.ssh(cmd, ssh_options)
+            ssh_con = core_ci.connection
+            ssh = SshConnectionDetail(core_ci.name, ssh_con.hostname, ssh_con.port, ssh_con.username, core_ci.ssh_key.key, python_interpreter)
+
+            with support_container_context(args, ssh) as containers:
+                if containers:
+                    cmd.extend(['--containers', json.dumps(containers.to_dict())])
+
+                manage.ssh(cmd, ssh_options)
+
             success = True
         finally:
             download = False
@@ -532,15 +479,21 @@ def delegate_remote(args, exclude, require, integration_targets):
                 # pattern and achieve the same goal
                 cp_opts = '-hr' if remote.platform in ['aix', 'ibmi'] else '-a'

-                manage.ssh('rm -rf {0} && mkdir {0} && cp {1} {2}/* {0}/ && chmod -R a+r {0}'.format(remote_temp_path, cp_opts, remote_results_root))
-                manage.download(remote_temp_path, local_test_root)
+                try:
+                    command = 'rm -rf {0} && mkdir {0} && cp {1} {2}/* {0}/ && chmod -R a+r {0}'.format(remote_temp_path, cp_opts, remote_results_root)
+
+                    manage.ssh(command, capture=True)  # pylint: disable=unexpected-keyword-arg
+                    manage.download(remote_temp_path, local_test_root)
+                except Exception as ex:  # pylint: disable=broad-except
+                    if success:
+                        raise  # download errors are fatal, but only if tests succeeded
+
+                    # handle download error here to avoid masking test failures
+                    display.warning('Failed to download results while handling an exception: %s' % ex)
     finally:
         if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
             core_ci.stop()
-
-        if httptester_id:
-            docker_rm(args, httptester_id)
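For orientation, here is a minimal, self-contained sketch of the hand-off pattern used above: a context manager yields support container endpoints, which the caller serializes into the delegated command via ``--containers``. The names ``ContainerInfo`` and ``provide_containers`` are hypothetical stand-ins for the real support_container_context machinery, not its actual API.

# Hypothetical sketch of the support_container_context hand-off pattern.
import contextlib
import json

class ContainerInfo:
    """Endpoint details for one support container."""
    def __init__(self, host, ports):
        self.host = host
        self.ports = ports

    def to_dict(self):
        return {'host': self.host, 'ports': self.ports}

@contextlib.contextmanager
def provide_containers(descriptors):
    containers = {name: ContainerInfo(*spec) for name, spec in descriptors.items()}
    try:
        yield containers  # the caller serializes these into the delegated command line
    finally:
        pass  # real code would stop containers and tear down port forwards here

with provide_containers({'http-test-container': ('localhost', [80, 443])}) as containers:
    cmd = ['ansible-test', 'integration']
    cmd.extend(['--containers', json.dumps({n: c.to_dict() for n, c in containers.items()})])
    print(cmd)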


 def generate_command(args, python_interpreter, ansible_bin_path, content_root, options, exclude, require):
     """
@@ -4,6 +4,8 @@ __metaclass__ = type

 import json
 import os
+import random
+import socket
 import time

 from . import types as t
@@ -27,6 +29,7 @@ from .http import (

 from .util_common import (
     run_command,
+    raw_command,
 )

 from .config import (
@@ -35,12 +38,68 @@ from .config import (

 BUFFER_SIZE = 256 * 256

+DOCKER_COMMANDS = [
+    'docker',
+    'podman',
+]
+

-def docker_available():
-    """
-    :rtype: bool
-    """
-    return find_executable('docker', required=False)
+class DockerCommand:
+    """Details about the available docker command."""
+    def __init__(self, command, executable, version):  # type: (str, str, str) -> None
+        self.command = command
+        self.executable = executable
+        self.version = version
+
+    @staticmethod
+    def detect():  # type: () -> t.Optional[DockerCommand]
+        """Detect and return the available docker command, or None."""
+        if os.environ.get('ANSIBLE_TEST_PREFER_PODMAN'):
+            commands = list(reversed(DOCKER_COMMANDS))
+        else:
+            commands = DOCKER_COMMANDS
+
+        for command in commands:
+            executable = find_executable(command, required=False)
+
+            if executable:
+                version = raw_command([command, '-v'], capture=True)[0].strip()
+
+                if command == 'docker' and 'podman' in version:
+                    continue  # avoid detecting podman as docker
+
+                display.info('Detected "%s" container runtime version: %s' % (command, version), verbosity=1)
+
+                return DockerCommand(command, executable, version)
+
+        return None
+
+
+def get_docker_command(required=False):  # type: (bool) -> t.Optional[DockerCommand]
+    """Return the docker command to invoke. Raises an exception if docker is not available."""
+    try:
+        return get_docker_command.cmd
+    except AttributeError:
+        get_docker_command.cmd = DockerCommand.detect()
+
+    if required and not get_docker_command.cmd:
+        raise ApplicationError("No container runtime detected. Supported commands: %s" % ', '.join(DOCKER_COMMANDS))
+
+    return get_docker_command.cmd
+
+
+def get_docker_host_ip():  # type: () -> str
+    """Return the IP of the Docker host."""
+    try:
+        return get_docker_host_ip.ip
+    except AttributeError:
+        pass
+
+    docker_host_ip = get_docker_host_ip.ip = socket.gethostbyname(get_docker_hostname())
+
+    display.info('Detected docker host IP: %s' % docker_host_ip, verbosity=1)
+
+    return docker_host_ip
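Both new helpers above cache their result as an attribute on the function object itself. A minimal runnable sketch of that idiom, with ``detect_runtime`` as a hypothetical stand-in for DockerCommand.detect():

# Function-attribute caching: the first call computes and stores the result on
# the function object; later calls return it without recomputing.
def detect_runtime():
    print('detecting...')  # ansible-test probes `docker -v` / `podman -v` here
    return 'docker'

def get_runtime():
    try:
        return get_runtime.cached  # already detected on an earlier call
    except AttributeError:
        get_runtime.cached = detect_runtime()
    return get_runtime.cached

print(get_runtime())  # prints "detecting..." then "docker"
print(get_runtime())  # prints "docker" only; detection ran once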
 def get_docker_hostname():  # type: () -> str

@@ -101,45 +160,6 @@ def get_docker_container_id():
     return container_id


-def get_docker_container_ip(args, container_id):
-    """
-    :type args: EnvironmentConfig
-    :type container_id: str
-    :rtype: str
-    """
-    results = docker_inspect(args, container_id)
-    network_settings = results[0]['NetworkSettings']
-    networks = network_settings.get('Networks')
-
-    if networks:
-        network_name = get_docker_preferred_network_name(args) or 'bridge'
-        ipaddress = networks[network_name]['IPAddress']
-    else:
-        # podman doesn't provide Networks, fall back to using IPAddress
-        ipaddress = network_settings['IPAddress']
-
-    if not ipaddress:
-        raise ApplicationError('Cannot retrieve IP address for container: %s' % container_id)
-
-    return ipaddress
-
-
-def get_docker_network_name(args, container_id):  # type: (EnvironmentConfig, str) -> str
-    """
-    Return the network name of the specified container.
-    Raises an exception if zero or more than one network is found.
-    """
-    networks = get_docker_networks(args, container_id)
-
-    if not networks:
-        raise ApplicationError('No network found for Docker container: %s.' % container_id)
-
-    if len(networks) > 1:
-        raise ApplicationError('Found multiple networks for Docker container %s instead of only one: %s' % (container_id, ', '.join(networks)))
-
-    return networks[0]


 def get_docker_preferred_network_name(args):  # type: (EnvironmentConfig) -> str
     """
     Return the preferred network name for use with Docker. The selection logic is:
@@ -147,6 +167,11 @@ def get_docker_preferred_network_name(args):  # type: (EnvironmentConfig) -> str
     - the network of the currently running docker container (if any)
     - the default docker network (returns None)
     """
+    try:
+        return get_docker_preferred_network_name.network
+    except AttributeError:
+        pass
+
     network = None

     if args.docker_network:
@@ -157,7 +182,10 @@ def get_docker_preferred_network_name(args):  # type: (EnvironmentConfig) -> str
     if current_container_id:
         # Make sure any additional containers we launch use the same network as the current container we're running in.
         # This is needed when ansible-test is running in a container that is not connected to Docker's default network.
-        network = get_docker_network_name(args, current_container_id)
+        container = docker_inspect(args, current_container_id, always=True)
+        network = container.get_network_name()
+
+    get_docker_preferred_network_name.network = network

     return network
@@ -167,26 +195,12 @@ def is_docker_user_defined_network(network):  # type: (str) -> bool
     return network and network != 'bridge'


-def get_docker_networks(args, container_id):
-    """
-    :param args: EnvironmentConfig
-    :param container_id: str
-    :rtype: list[str]
-    """
-    results = docker_inspect(args, container_id)
-    # podman doesn't return Networks- just silently return None if it's missing...
-    networks = results[0]['NetworkSettings'].get('Networks')
-    if networks is None:
-        return None
-    return sorted(networks)
 def docker_pull(args, image):
     """
     :type args: EnvironmentConfig
     :type image: str
     """
-    if ('@' in image or ':' in image) and docker_images(args, image):
+    if ('@' in image or ':' in image) and docker_image_exists(args, image):
         display.info('Skipping docker pull of existing image with tag or digest: %s' % image, verbosity=2)
         return
@@ -205,6 +219,11 @@ def docker_pull(args, image):
     raise ApplicationError('Failed to pull docker image "%s".' % image)


+def docker_cp_to(args, container_id, src, dst):  # type: (EnvironmentConfig, str, str, str) -> None
+    """Copy a file to the specified container."""
+    docker_command(args, ['cp', src, '%s:%s' % (container_id, dst)])
+
+
 def docker_put(args, container_id, src, dst):
     """
     :type args: EnvironmentConfig
@@ -238,7 +257,7 @@ def docker_run(args, image, options, cmd=None, create_only=False):
     :type options: list[str] | None
     :type cmd: list[str] | None
     :type create_only[bool] | False
-    :rtype: str | None, str | None
+    :rtype: str
     """
     if not options:
         options = []
@@ -255,12 +274,16 @@ def docker_run(args, image, options, cmd=None, create_only=False):
         if is_docker_user_defined_network(network):
             # Only when the network is not the default bridge network.
-            # Using this with the default bridge network results in an error when using --link: links are only supported for user-defined networks
             options.extend(['--network', network])

     for _iteration in range(1, 3):
         try:
-            return docker_command(args, [command] + options + [image] + cmd, capture=True)
+            stdout = docker_command(args, [command] + options + [image] + cmd, capture=True)[0]
+
+            if args.explain:
+                return ''.join(random.choice('0123456789abcdef') for _iteration in range(64))
+
+            return stdout.strip()
         except SubprocessError as ex:
             display.error(ex)
             display.warning('Failed to run docker image "%s". Waiting a few seconds before trying again.' % image)
@@ -269,7 +292,7 @@ def docker_run(args, image, options, cmd=None, create_only=False):
     raise ApplicationError('Failed to run docker image "%s".' % image)


-def docker_start(args, container_id, options):  # type: (EnvironmentConfig, str, t.List[str]) -> (t.Optional[str], t.Optional[str])
+def docker_start(args, container_id, options=None):  # type: (EnvironmentConfig, str, t.Optional[t.List[str]]) -> (t.Optional[str], t.Optional[str])
     """
     Start a docker container by name or ID
     """
@@ -287,33 +310,6 @@ def docker_start(args, container_id, options=None):
     raise ApplicationError('Failed to run docker container "%s".' % container_id)


-def docker_images(args, image):
-    """
-    :param args: CommonConfig
-    :param image: str
-    :rtype: list[dict[str, any]]
-    """
-    try:
-        stdout, _dummy = docker_command(args, ['images', image, '--format', '{{json .}}'], capture=True, always=True)
-    except SubprocessError as ex:
-        if 'no such image' in ex.stderr:
-            return []  # podman does not handle this gracefully, exits 125
-
-        if 'function "json" not defined' in ex.stderr:
-            # podman > 2 && < 2.2.0 breaks with --format {{json .}}, and requires --format json
-            # So we try this as a fallback. If it fails again, we just raise the exception and bail.
-            stdout, _dummy = docker_command(args, ['images', image, '--format', 'json'], capture=True, always=True)
-        else:
-            raise ex
-
-    if stdout.startswith('['):
-        # modern podman outputs a pretty-printed json list. Just load the whole thing.
-        return json.loads(stdout)
-
-    # docker outputs one json object per line (jsonl)
-    return [json.loads(line) for line in stdout.splitlines()]
 def docker_rm(args, container_id):
     """
     :type args: EnvironmentConfig

@@ -328,25 +324,135 @@ def docker_rm(args, container_id):
     raise ex
-def docker_inspect(args, container_id):
-    """
-    :type args: EnvironmentConfig
-    :type container_id: str
-    :rtype: list[dict]
-    """
-    if args.explain:
-        return []
-
-    try:
-        stdout = docker_command(args, ['inspect', container_id], capture=True)[0]
-        return json.loads(stdout)
-    except SubprocessError as ex:
-        if 'no such image' in ex.stderr:
-            return []  # podman does not handle this gracefully, exits 125
-        try:
-            return json.loads(ex.stdout)
-        except Exception:
-            raise ex
+class DockerError(Exception):
+    """General Docker error."""
+
+
+class ContainerNotFoundError(DockerError):
+    """The container identified by `identifier` was not found."""
+    def __init__(self, identifier):
+        super(ContainerNotFoundError, self).__init__('The container "%s" was not found.' % identifier)
+
+        self.identifier = identifier
+
+
+class DockerInspect:
+    """The results of `docker inspect` for a single container."""
+    def __init__(self, args, inspection):  # type: (EnvironmentConfig, t.Dict[str, t.Any]) -> None
+        self.args = args
+        self.inspection = inspection
+
+    # primary properties
+
+    @property
+    def id(self):  # type: () -> str
+        """Return the ID of the container."""
+        return self.inspection['Id']
+
+    @property
+    def network_settings(self):  # type: () -> t.Dict[str, t.Any]
+        """Return a dictionary of the container network settings."""
+        return self.inspection['NetworkSettings']
+
+    @property
+    def state(self):  # type: () -> t.Dict[str, t.Any]
+        """Return a dictionary of the container state."""
+        return self.inspection['State']
+
+    @property
+    def config(self):  # type: () -> t.Dict[str, t.Any]
+        """Return a dictionary of the container configuration."""
+        return self.inspection['Config']
+
+    # nested properties
+
+    @property
+    def ports(self):  # type: () -> t.Dict[str, t.List[t.Dict[str, str]]]
+        """Return a dictionary of ports the container has published."""
+        return self.network_settings['Ports']
+
+    @property
+    def networks(self):  # type: () -> t.Optional[t.Dict[str, t.Dict[str, t.Any]]]
+        """Return a dictionary of the networks the container is attached to, or None if running under podman, which does not support networks."""
+        return self.network_settings.get('Networks')
+
+    @property
+    def running(self):  # type: () -> bool
+        """Return True if the container is running, otherwise False."""
+        return self.state['Running']
+
+    @property
+    def env(self):  # type: () -> t.List[str]
+        """Return a list of the environment variables used to create the container."""
+        return self.config['Env']
+
+    @property
+    def image(self):  # type: () -> str
+        """Return the image used to create the container."""
+        return self.config['Image']
+
+    # functions
+
+    def env_dict(self):  # type: () -> t.Dict[str, str]
+        """Return a dictionary of the environment variables used to create the container."""
+        return dict((item[0], item[1]) for item in [e.split('=', 1) for e in self.env])
+
+    def get_tcp_port(self, port):  # type: (int) -> t.Optional[t.List[t.Dict[str, str]]]
+        """Return a list of the endpoints published by the container for the specified TCP port, or None if it is not published."""
+        return self.ports.get('%d/tcp' % port)
+
+    def get_network_names(self):  # type: () -> t.Optional[t.List[str]]
+        """Return a list of the network names the container is attached to."""
+        if self.networks is None:
+            return None
+
+        return sorted(self.networks)
+
+    def get_network_name(self):  # type: () -> str
+        """Return the network name the container is attached to. Raises an exception if no network, or more than one, is attached."""
+        networks = self.get_network_names()
+
+        if not networks:
+            raise ApplicationError('No network found for Docker container: %s.' % self.id)
+
+        if len(networks) > 1:
+            raise ApplicationError('Found multiple networks for Docker container %s instead of only one: %s' % (self.id, ', '.join(networks)))
+
+        return networks[0]
+
+    def get_ip_address(self):  # type: () -> t.Optional[str]
+        """Return the IP address of the container for the preferred docker network."""
+        if self.networks:
+            network_name = get_docker_preferred_network_name(self.args) or 'bridge'
+            ipaddress = self.networks[network_name]['IPAddress']
+        else:
+            ipaddress = self.network_settings['IPAddress']
+
+        if not ipaddress:
+            return None
+
+        return ipaddress
+
+
+def docker_inspect(args, identifier, always=False):  # type: (EnvironmentConfig, str, bool) -> DockerInspect
+    """
+    Return the results of `docker inspect` for the specified container.
+    Raises a ContainerNotFoundError if the container was not found.
+    """
+    try:
+        stdout = docker_command(args, ['inspect', identifier], capture=True, always=always)[0]
+    except SubprocessError as ex:
+        stdout = ex.stdout
+
+    if args.explain and not always:
+        items = []
+    else:
+        items = json.loads(stdout)
+
+    if len(items) == 1:
+        return DockerInspect(args, items[0])
+
+    raise ContainerNotFoundError(identifier)
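To make the accessors above concrete, a small self-contained sketch that reads a hand-written dict in the same shape as `docker inspect` output (the values here are made up, not real inspection data):

# Reading docker-inspect-shaped data the way DockerInspect does.
inspection = {
    'Id': 'abc123',
    'State': {'Running': True},
    'Config': {'Env': ['KRB5_PASSWORD=example', 'PATH=/usr/bin'], 'Image': 'example/image:1.0'},
    'NetworkSettings': {
        'IPAddress': '',
        'Ports': {'80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '32768'}]},
        'Networks': {'bridge': {'IPAddress': '172.17.0.2'}},
    },
}

env = dict(item.split('=', 1) for item in inspection['Config']['Env'])  # env_dict()
print(env['KRB5_PASSWORD'])                                   # example
print(inspection['NetworkSettings']['Ports'].get('80/tcp'))   # get_tcp_port(80)
print(sorted(inspection['NetworkSettings']['Networks']))      # get_network_names() -> ['bridge']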
 def docker_network_disconnect(args, container_id, network):

@@ -358,6 +464,16 @@ def docker_network_disconnect(args, container_id, network):
     docker_command(args, ['network', 'disconnect', network, container_id], capture=True)


+def docker_image_exists(args, image):  # type: (EnvironmentConfig, str) -> bool
+    """Return True if the image exists, otherwise False."""
+    try:
+        docker_command(args, ['image', 'inspect', image], capture=True)
+    except SubprocessError:
+        return False
+
+    return True
+
+
 def docker_network_inspect(args, network):
     """
     :type args: EnvironmentConfig
@@ -428,7 +544,8 @@ def docker_command(args, cmd, capture=False, stdin=None, stdout=None, always=Fal
     :rtype: str | None, str | None
     """
     env = docker_environment()
-    return run_command(args, ['docker'] + cmd, env=env, capture=capture, stdin=stdin, stdout=stdout, always=always, data=data)
+    command = get_docker_command(required=True).command
+    return run_command(args, [command] + cmd, env=env, capture=capture, stdin=stdin, stdout=stdout, always=always, data=data)


 def docker_environment():
@@ -22,7 +22,6 @@ from .io import (

 from .util import (
     display,
-    find_executable,
     SubprocessError,
     ApplicationError,
     get_ansible_version,

@@ -36,6 +35,7 @@ from .util_common import (
 )

 from .docker_util import (
+    get_docker_command,
     docker_info,
     docker_version
 )
@@ -269,11 +269,15 @@ def get_docker_details(args):
     :type args: CommonConfig
     :rtype: dict[str, any]
     """
-    docker = find_executable('docker', required=False)
+    docker = get_docker_command()

+    executable = None
     info = None
     version = None

     if docker:
+        executable = docker.executable
+
         try:
             info = docker_info(args)
         except SubprocessError as ex:

@@ -285,7 +289,7 @@ def get_docker_details(args):
             display.warning('Failed to collect docker version:\n%s' % ex)

     docker_details = dict(
-        executable=docker,
+        executable=executable,
         info=info,
         version=version,
     )
@@ -56,14 +56,11 @@ from .util import (
     remove_tree,
     find_executable,
     raw_command,
-    get_available_port,
     generate_pip_command,
     find_python,
     cmd_quote,
-    ANSIBLE_LIB_ROOT,
     ANSIBLE_TEST_DATA_ROOT,
     ANSIBLE_TEST_CONFIG_ROOT,
-    get_ansible_version,
     tempdir,
     open_zipfile,
     SUPPORTED_PYTHON_VERSIONS,
@@ -88,18 +85,18 @@ from .util_common import (
 from .docker_util import (
     docker_pull,
     docker_run,
-    docker_available,
-    docker_rm,
-    get_docker_container_id,
-    get_docker_container_ip,
-    get_docker_hostname,
-    get_docker_preferred_network_name,
-    is_docker_user_defined_network,
+    docker_inspect,
+)
+
+from .containers import (
+    SshConnectionDetail,
+    create_container_hooks,
 )

 from .ansible_util import (
     ansible_environment,
     check_pyyaml,
+    run_playbook,
 )

 from .target import (
@@ -153,13 +150,6 @@ from .http import (
     urlparse,
 )

-HTTPTESTER_HOSTS = (
-    'ansible.http.tests',
-    'sni1.ansible.http.tests',
-    'fail.ansible.http.tests',
-    'self-signed.ansible.http.tests',
-)
-

 def check_startup():
     """Checks to perform at startup before running commands."""
@@ -514,9 +504,6 @@ def command_shell(args):
     install_command_requirements(args)

-    if args.inject_httptester:
-        inject_httptester(args)
-
     cmd = create_shell_command(['bash', '-i'])
     run_command(args, cmd)
@@ -532,7 +519,12 @@ def command_posix_integration(args):
     all_targets = tuple(walk_posix_integration_targets(include_hidden=True))
     internal_targets = command_integration_filter(args, all_targets)

-    command_integration_filtered(args, internal_targets, all_targets, inventory_path)
+    managed_connections = None  # type: t.Optional[t.List[SshConnectionDetail]]
+
+    pre_target, post_target = create_container_hooks(args, managed_connections)
+
+    command_integration_filtered(args, internal_targets, all_targets, inventory_path, pre_target=pre_target, post_target=post_target)
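The pre_target/post_target hooks returned by create_container_hooks wrap each test target. A minimal runnable sketch of that calling protocol, with ``Target`` as a hypothetical stand-in for IntegrationTarget:

# Hooks are invoked around each target so support containers can be set up
# before a test and torn down afterwards, even when the test fails.
class Target:
    def __init__(self, name):
        self.name = name

def run_targets(targets, pre_target=None, post_target=None):
    for target in targets:
        if pre_target:
            pre_target(target)  # e.g. start SSH port forwards for support containers
        try:
            print('running %s' % target.name)
        finally:
            if post_target:
                post_target(target)  # always clean up, even on failure

run_targets([Target('ping'), Target('copy')],
            pre_target=lambda t: print('setup %s' % t.name),
            post_target=lambda t: print('teardown %s' % t.name))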


 def command_network_integration(args):
@@ -749,9 +741,7 @@ def command_windows_integration(args):
     all_targets = tuple(walk_windows_integration_targets(include_hidden=True))
     internal_targets = command_integration_filter(args, all_targets, init_callback=windows_init)
     instances = []  # type: t.List[WrappedThread]
-    pre_target = None
-    post_target = None
-    httptester_id = None
+    managed_connections = []  # type: t.List[SshConnectionDetail]

     if args.windows:
         get_python_path(args, args.python_executable)  # initialize before starting threads
@@ -777,76 +767,41 @@ def command_windows_integration(args):
         if not args.explain:
             write_text_file(inventory_path, inventory)

-        use_httptester = args.httptester and any('needs/httptester/' in target.aliases for target in internal_targets)
-        # if running under Docker delegation, the httptester may have already been started
-        docker_httptester = bool(os.environ.get("HTTPTESTER", False))
-
-        if use_httptester and not docker_available() and not docker_httptester:
-            display.warning('Assuming --disable-httptester since `docker` is not available.')
-        elif use_httptester:
-            if docker_httptester:
-                # we are running in a Docker container that is linked to the httptester container, we just need to
-                # forward these requests to the linked hostname
-                first_host = HTTPTESTER_HOSTS[0]
-                ssh_options = [
-                    "-R", "8080:%s:80" % first_host,
-                    "-R", "8443:%s:443" % first_host,
-                    "-R", "8444:%s:444" % first_host
-                ]
-            else:
-                # we are running directly and need to start the httptester container ourselves and forward the port
-                # from there manually set so HTTPTESTER env var is set during the run
-                args.inject_httptester = True
-                httptester_id, ssh_options = start_httptester(args)
-
-            # to get this SSH command to run in the background we need to set to run in background (-f) and disable
-            # the pty allocation (-T)
-            ssh_options.insert(0, "-fT")
-
-            # create a script that will continue to run in the background until the script is deleted, this will
-            # cleanup and close the connection
-            def forward_ssh_ports(target):
-                """
-                :type target: IntegrationTarget
-                """
-                if 'needs/httptester/' not in target.aliases:
-                    return
-
-                for remote in [r for r in remotes if r.version != '2008']:
-                    manage = ManageWindowsCI(remote)
-                    manage.upload(os.path.join(ANSIBLE_TEST_DATA_ROOT, 'setup', 'windows-httptester.ps1'), watcher_path)
-
-                    # We cannot pass an array of string with -File so we just use a delimiter for multiple values
-                    script = "powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\\%s -Hosts \"%s\"" \
-                             % (watcher_path, "|".join(HTTPTESTER_HOSTS))
-                    if args.verbosity > 3:
-                        script += " -Verbose"
-
-                    manage.ssh(script, options=ssh_options, force_pty=False)
-
-            def cleanup_ssh_ports(target):
-                """
-                :type target: IntegrationTarget
-                """
-                if 'needs/httptester/' not in target.aliases:
-                    return
-
-                for remote in [r for r in remotes if r.version != '2008']:
-                    # delete the tmp file that keeps the http-tester alive
-                    manage = ManageWindowsCI(remote)
-                    manage.ssh("cmd.exe /c \"del %s /F /Q\"" % watcher_path, force_pty=False)
-
-            watcher_path = "ansible-test-http-watcher-%s.ps1" % time.time()
-            pre_target = forward_ssh_ports
-            post_target = cleanup_ssh_ports
-
-    def run_playbook(playbook, run_playbook_vars):  # type: (str, t.Dict[str, t.Any]) -> None
-        playbook_path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'playbooks', playbook)
-        command = ['ansible-playbook', '-i', inventory_path, playbook_path, '-e', json.dumps(run_playbook_vars)]
-        if args.verbosity:
-            command.append('-%s' % ('v' * args.verbosity))
-
-        env = ansible_environment(args)
-        intercept_command(args, command, '', env, disable_coverage=True)
+        for core_ci in remotes:
+            ssh_con = core_ci.connection
+            ssh = SshConnectionDetail(core_ci.name, ssh_con.hostname, 22, ssh_con.username, core_ci.ssh_key.key, shell_type='powershell')
+            managed_connections.append(ssh)
+    elif args.explain:
+        identity_file = SshKey(args).key
+
+        # mock connection details to prevent tracebacks in explain mode
+        managed_connections = [SshConnectionDetail(
+            name='windows',
+            host='windows',
+            port=22,
+            user='administrator',
+            identity_file=identity_file,
+            shell_type='powershell',
+        )]
+    else:
+        inventory = parse_inventory(args, inventory_path)
+        hosts = get_hosts(inventory, 'windows')
+        identity_file = SshKey(args).key
+
+        managed_connections = [SshConnectionDetail(
+            name=name,
+            host=config['ansible_host'],
+            port=22,
+            user=config['ansible_user'],
+            identity_file=identity_file,
+            shell_type='powershell',
+        ) for name, config in hosts.items()]
+
+        if managed_connections:
+            display.info('Generated SSH connection details from inventory:\n%s' % (
+                '\n'.join('%s %s@%s:%d' % (ssh.name, ssh.user, ssh.host, ssh.port) for ssh in managed_connections)), verbosity=1)
+
+    pre_target, post_target = create_container_hooks(args, managed_connections)

     remote_temp_path = None
@@ -854,7 +809,7 @@ def command_windows_integration(args):
         # Create the remote directory that is writable by everyone. Use Ansible to talk to the remote host.
         remote_temp_path = 'C:\\ansible_test_coverage_%s' % time.time()
         playbook_vars = {'remote_temp_path': remote_temp_path}
-        run_playbook('windows_coverage_setup.yml', playbook_vars)
+        run_playbook(args, inventory_path, 'windows_coverage_setup.yml', playbook_vars)

     success = False
@@ -863,14 +818,11 @@ def command_windows_integration(args):
                                     post_target=post_target, remote_temp_path=remote_temp_path)
         success = True
     finally:
-        if httptester_id:
-            docker_rm(args, httptester_id)
-
         if remote_temp_path:
             # Zip up the coverage files that were generated and fetch it back to localhost.
             with tempdir() as local_temp_path:
                 playbook_vars = {'remote_temp_path': remote_temp_path, 'local_temp_path': local_temp_path}
-                run_playbook('windows_coverage_teardown.yml', playbook_vars)
+                run_playbook(args, inventory_path, 'windows_coverage_teardown.yml', playbook_vars)

                 for filename in os.listdir(local_temp_path):
                     with open_zipfile(os.path.join(local_temp_path, filename)) as coverage_zip:
@@ -887,6 +839,9 @@ def windows_init(args, internal_targets):  # pylint: disable=locally-disabled, u
     :type args: WindowsIntegrationConfig
     :type internal_targets: tuple[IntegrationTarget]
    """
+    # generate an ssh key (if needed) up front once, instead of for each instance
+    SshKey(args)
+
     if not args.windows:
         return
@@ -955,14 +910,7 @@ def windows_inventory(remotes):
         if remote.ssh_key:
             options["ansible_ssh_private_key_file"] = os.path.abspath(remote.ssh_key.key)

-        if remote.name == 'windows-2008':
-            options.update(
-                # force 2008 to use PSRP for the connection plugin
-                ansible_connection='psrp',
-                ansible_psrp_auth='basic',
-                ansible_psrp_cert_validation='ignore',
-            )
-        elif remote.name == 'windows-2016':
+        if remote.name == 'windows-2016':
             options.update(
                 # force 2016 to use NTLM + HTTP message encryption
                 ansible_connection='winrm',
@@ -1053,24 +1001,23 @@ def command_integration_filter(args,  # type: TIntegrationConfig
     data_context().register_payload_callback(integration_config_callback)

     if args.delegate:
-        raise Delegate(require=require, exclude=exclude, integration_targets=internal_targets)
+        raise Delegate(require=require, exclude=exclude)

     install_command_requirements(args)

     return internal_targets
-def command_integration_filtered(args, targets, all_targets, inventory_path, pre_target=None, post_target=None,
-                                 remote_temp_path=None):
-    """
-    :type args: IntegrationConfig
-    :type targets: tuple[IntegrationTarget]
-    :type all_targets: tuple[IntegrationTarget]
-    :type inventory_path: str
-    :type pre_target: (IntegrationTarget) -> None | None
-    :type post_target: (IntegrationTarget) -> None | None
-    :type remote_temp_path: str | None
-    """
+def command_integration_filtered(
+        args,  # type: IntegrationConfig
+        targets,  # type: t.Tuple[IntegrationTarget]
+        all_targets,  # type: t.Tuple[IntegrationTarget]
+        inventory_path,  # type: str
+        pre_target=None,  # type: t.Optional[t.Callable[IntegrationTarget]]
+        post_target=None,  # type: t.Optional[t.Callable[IntegrationTarget]]
+        remote_temp_path=None,  # type: t.Optional[str]
+):
+    """Run integration tests for the specified targets."""
     found = False
     passed = []
     failed = []
@@ -1108,10 +1055,6 @@ def command_integration_filtered(args, targets, all_targets, inventory_path, pre
             display.warning('SSH service not responding. Waiting %d second(s) before checking again.' % seconds)
             time.sleep(seconds)

-    # Windows is different as Ansible execution is done locally but the host is remote
-    if args.inject_httptester and not isinstance(args, WindowsIntegrationConfig):
-        inject_httptester(args)
-
     start_at_task = args.start_at_task
     results = {}
@@ -1158,6 +1101,9 @@ def command_integration_filtered(args, targets, all_targets, inventory_path, pre
                 start_time = time.time()

+                if pre_target:
+                    pre_target(target)
+
                 run_setup_targets(args, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path, common_temp_path, True)

                 if not args.explain:

@@ -1165,9 +1111,6 @@ def command_integration_filtered(args, targets, all_targets, inventory_path, pre
                     remove_tree(test_dir)
                     make_dirs(test_dir)

-                if pre_target:
-                    pre_target(target)
-
                 try:
                     if target.script_path:
                         command_integration_script(args, target, test_dir, inventory_path, common_temp_path,
@@ -1261,155 +1204,21 @@ def command_integration_filtered(args, targets, all_targets, inventory_path, pre
                          len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed)))
-def start_httptester(args):
-    """
-    :type args: EnvironmentConfig
-    :rtype: str, list[str]
-    """
-
-    # map ports from remote -> localhost -> container
-    # passing through localhost is only used when ansible-test is not already running inside a docker container
-    ports = [
-        dict(
-            remote=8080,
-            container=80,
-        ),
-        dict(
-            remote=8088,
-            container=88,
-        ),
-        dict(
-            remote=8443,
-            container=443,
-        ),
-        dict(
-            remote=8444,
-            container=444,
-        ),
-        dict(
-            remote=8749,
-            container=749,
-        ),
-    ]
-
-    container_id = get_docker_container_id()
-
-    if not container_id:
-        for item in ports:
-            item['localhost'] = get_available_port()
-
-    docker_pull(args, args.httptester)
-
-    httptester_id = run_httptester(args, dict((port['localhost'], port['container']) for port in ports if 'localhost' in port))
-
-    if container_id:
-        container_host = get_docker_container_ip(args, httptester_id)
-        display.info('Found httptester container address: %s' % container_host, verbosity=1)
-    else:
-        container_host = get_docker_hostname()
-
-    ssh_options = []
-
-    for port in ports:
-        ssh_options += ['-R', '%d:%s:%d' % (port['remote'], container_host, port.get('localhost', port['container']))]
-
-    return httptester_id, ssh_options
-
-
-def run_httptester(args, ports=None):
-    """
-    :type args: EnvironmentConfig
-    :type ports: dict[int, int] | None
-    :rtype: str
-    """
-    options = [
-        '--detach',
-        '--env', 'KRB5_PASSWORD=%s' % args.httptester_krb5_password,
-    ]
-
-    if ports:
-        for localhost_port, container_port in ports.items():
-            options += ['-p', '%d:%d' % (localhost_port, container_port)]
-
-    network = get_docker_preferred_network_name(args)
-
-    if is_docker_user_defined_network(network):
-        # network-scoped aliases are only supported for containers in user defined networks
-        for alias in HTTPTESTER_HOSTS:
-            options.extend(['--network-alias', alias])
-
-    httptester_id = docker_run(args, args.httptester, options=options)[0]
-
-    if args.explain:
-        httptester_id = 'httptester_id'
-    else:
-        httptester_id = httptester_id.strip()
-
-    return httptester_id
-
-
-def inject_httptester(args):
-    """
-    :type args: CommonConfig
-    """
-    comment = ' # ansible-test httptester\n'
-    append_lines = ['127.0.0.1 %s%s' % (host, comment) for host in HTTPTESTER_HOSTS]
-    hosts_path = '/etc/hosts'
-
-    original_lines = read_text_file(hosts_path).splitlines(True)
-
-    if not any(line.endswith(comment) for line in original_lines):
-        write_text_file(hosts_path, ''.join(original_lines + append_lines))
-
-    # determine which forwarding mechanism to use
-    pfctl = find_executable('pfctl', required=False)
-    iptables = find_executable('iptables', required=False)
-
-    if pfctl:
-        kldload = find_executable('kldload', required=False)
-
-        if kldload:
-            try:
-                run_command(args, ['kldload', 'pf'], capture=True)
-            except SubprocessError:
-                pass  # already loaded
-
-        rules = '''
-rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
-rdr pass inet proto tcp from any to any port 88 -> 127.0.0.1 port 8088
-rdr pass inet proto tcp from any to any port 443 -> 127.0.0.1 port 8443
-rdr pass inet proto tcp from any to any port 444 -> 127.0.0.1 port 8444
-rdr pass inet proto tcp from any to any port 749 -> 127.0.0.1 port 8749
-'''
-        cmd = ['pfctl', '-ef', '-']
-
-        try:
-            run_command(args, cmd, capture=True, data=rules)
-        except SubprocessError:
-            pass  # non-zero exit status on success
-
-    elif iptables:
-        ports = [
-            (80, 8080),
-            (88, 8088),
-            (443, 8443),
-            (444, 8444),
-            (749, 8749),
-        ]
-
-        for src, dst in ports:
-            rule = ['-o', 'lo', '-p', 'tcp', '--dport', str(src), '-j', 'REDIRECT', '--to-port', str(dst)]
-
-            try:
-                # check for existing rule
-                cmd = ['iptables', '-t', 'nat', '-C', 'OUTPUT'] + rule
-                run_command(args, cmd, capture=True)
-            except SubprocessError:
-                # append rule when it does not exist
-                cmd = ['iptables', '-t', 'nat', '-A', 'OUTPUT'] + rule
-                run_command(args, cmd, capture=True)
-    else:
-        raise ApplicationError('No supported port forwarding mechanism detected.')
+def parse_inventory(args, inventory_path):  # type: (IntegrationConfig, str) -> t.Dict[str, t.Any]
+    """Return a dict parsed from the given inventory file."""
+    cmd = ['ansible-inventory', '-i', inventory_path, '--list']
+    env = ansible_environment(args)
+    inventory = json.loads(intercept_command(args, cmd, '', env, capture=True, disable_coverage=True)[0])
+    return inventory
+
+
+def get_hosts(inventory, group_name):  # type: (t.Dict[str, t.Any], str) -> t.Dict[str, t.Dict[str, t.Any]]
+    """Return a dict of hosts from the specified group in the given inventory."""
+    hostvars = inventory.get('_meta', {}).get('hostvars', {})
+    group = inventory.get(group_name, {})
+    host_names = group.get('hosts', [])
+    hosts = dict((name, hostvars[name]) for name in host_names)
+    return hosts
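A self-contained sketch of how get_hosts() reads `ansible-inventory --list` output: group membership comes from the group entry, per-host variables from `_meta.hostvars`. The inventory dict below is hand-written sample data, not real output.

# Reading ansible-inventory JSON the way parse_inventory()/get_hosts() do.
inventory = {
    '_meta': {'hostvars': {'win1': {'ansible_host': '10.0.0.5', 'ansible_user': 'administrator'}}},
    'windows': {'hosts': ['win1']},
}

def get_hosts(inventory, group_name):
    hostvars = inventory.get('_meta', {}).get('hostvars', {})
    host_names = inventory.get(group_name, {}).get('hosts', [])
    return dict((name, hostvars[name]) for name in host_names)

for name, config in get_hosts(inventory, 'windows').items():
    print('%s -> %s@%s' % (name, config['ansible_user'], config['ansible_host']))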
 def run_pypi_proxy(args):  # type: (EnvironmentConfig) -> t.Tuple[t.Optional[str], t.Optional[str]]

@@ -1441,14 +1250,14 @@ def run_pypi_proxy(args):  # type: (EnvironmentConfig) -> t.Tuple[t.Optional[str
     docker_pull(args, proxy_image)

-    container_id = docker_run(args, proxy_image, options=options)[0]
-
-    if args.explain:
-        container_id = 'pypi_id'
-        container_ip = '127.0.0.1'
-    else:
-        container_id = container_id.strip()
-        container_ip = get_docker_container_ip(args, container_id)
+    container_id = docker_run(args, proxy_image, options=options)
+
+    container = docker_inspect(args, container_id)
+
+    container_ip = container.get_ip_address()
+
+    if not container_ip:
+        raise Exception('PyPI container IP not available.')

     endpoint = 'http://%s:%d/root/pypi/+simple/' % (container_ip, port)
@@ -1586,12 +1395,6 @@ def integration_environment(args, target, test_dir, inventory_path, ansible_conf
     """
     env = ansible_environment(args, ansible_config=ansible_config)

-    if args.inject_httptester:
-        env.update(dict(
-            HTTPTESTER='1',
-            KRB5_PASSWORD=args.httptester_krb5_password,
-        ))
-
     callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else [])

     integration = dict(
@@ -1636,6 +1439,14 @@ def command_integration_script(args, target, test_dir, inventory_path, temp_path
     if cloud_environment:
         env_config = cloud_environment.get_environment_config()

+    if env_config:
+        display.info('>>> Environment Config\n%s' % json.dumps(dict(
+            env_vars=env_config.env_vars,
+            ansible_vars=env_config.ansible_vars,
+            callback_plugins=env_config.callback_plugins,
+            module_defaults=env_config.module_defaults,
+        ), indent=4, sort_keys=True), verbosity=3)
+
     with integration_test_environment(args, target, inventory_path) as test_env:
         cmd = ['./%s' % os.path.basename(target.script_path)]
@@ -1658,6 +1469,7 @@ def command_integration_script(args, target, test_dir, inventory_path, temp_path
             cmd += ['-e', '@%s' % config_path]

         module_coverage = 'non_local/' not in target.aliases
+
         intercept_command(args, cmd, target_name=target.name, env=env, cwd=cwd, temp_path=temp_path,
                           remote_temp_path=remote_temp_path, module_coverage=module_coverage)
@@ -1694,11 +1506,20 @@ def command_integration_role(args, target, start_at_task, test_dir, inventory_pa
         hosts = 'testhost'
         gather_facts = True

-        if not isinstance(args, NetworkIntegrationConfig):
-            cloud_environment = get_cloud_environment(args, target)
-
-            if cloud_environment:
-                env_config = cloud_environment.get_environment_config()
+    cloud_environment = get_cloud_environment(args, target)
+
+    if cloud_environment:
+        env_config = cloud_environment.get_environment_config()
+
+    if env_config:
+        display.info('>>> Environment Config\n%s' % json.dumps(dict(
+            env_vars=env_config.env_vars,
+            ansible_vars=env_config.ansible_vars,
+            callback_plugins=env_config.callback_plugins,
+            module_defaults=env_config.module_defaults,
+        ), indent=4, sort_keys=True), verbosity=3)

     with integration_test_environment(args, target, inventory_path) as test_env:
         if os.path.exists(test_env.vars_file):
             vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir))
@@ -1758,6 +1579,9 @@ def command_integration_role(args, target, start_at_task, test_dir, inventory_pa
             ANSIBLE_PLAYBOOK_DIR=cwd,
         ))

+        if env_config and env_config.env_vars:
+            env.update(env_config.env_vars)
+
         env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir

         module_coverage = 'non_local/' not in target.aliases
@@ -2278,17 +2102,15 @@ class NoTestsForChanges(ApplicationWarning):

 class Delegate(Exception):
     """Trigger command delegation."""
-    def __init__(self, exclude=None, require=None, integration_targets=None):
+    def __init__(self, exclude=None, require=None):
         """
         :type exclude: list[str] | None
         :type require: list[str] | None
-        :type integration_targets: tuple[IntegrationTarget] | None
         """
         super(Delegate, self).__init__()

         self.exclude = exclude or []
         self.require = require or []
-        self.integration_targets = integration_targets or tuple()


 class AllTargetsSkipped(ApplicationWarning):
@@ -271,10 +271,17 @@ class IntegrationAliasesTest(SanityVersionNeutral):
         )

         for cloud in clouds:
+            if cloud == 'httptester':
+                find = self.format_test_group_alias('linux').replace('linux', 'posix')
+                find_incidental = ['%s/posix/incidental/' % self.TEST_ALIAS_PREFIX]
+            else:
+                find = self.format_test_group_alias(cloud, 'generic')
+                find_incidental = ['%s/%s/incidental/' % (self.TEST_ALIAS_PREFIX, cloud), '%s/cloud/incidental/' % self.TEST_ALIAS_PREFIX]
+
             messages += self.check_ci_group(
                 targets=tuple(filter_targets(posix_targets, ['cloud/%s/' % cloud], include=True, directories=False, errors=False)),
-                find=self.format_test_group_alias(cloud, 'cloud'),
-                find_incidental=['%s/%s/incidental/' % (self.TEST_ALIAS_PREFIX, cloud), '%s/cloud/incidental/' % self.TEST_ALIAS_PREFIX],
+                find=find,
+                find_incidental=find_incidental,
             )

         return messages
@@ -0,0 +1,264 @@
"""High level functions for working with SSH."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import random
import re
import subprocess
from . import types as t
from .encoding import (
to_bytes,
to_text,
)
from .util import (
ApplicationError,
cmd_quote,
common_environment,
devnull,
display,
exclude_none_values,
sanitize_host_name,
)
from .config import (
EnvironmentConfig,
)
class SshConnectionDetail:
"""Information needed to establish an SSH connection to a host."""
def __init__(self,
name, # type: str
host, # type: str
port, # type: t.Optional[int]
user, # type: str
identity_file, # type: str
python_interpreter=None, # type: t.Optional[str]
shell_type=None, # type: t.Optional[str]
): # type: (...) -> None
self.name = sanitize_host_name(name)
self.host = host
self.port = port
self.user = user
self.identity_file = identity_file
self.python_interpreter = python_interpreter
self.shell_type = shell_type
class SshProcess:
"""Wrapper around an SSH process."""
def __init__(self, process): # type: (t.Optional[subprocess.Popen]) -> None
self._process = process
self.pending_forwards = None # type: t.Optional[t.Set[t.Tuple[str, int]]]
self.forwards = {} # type: t.Dict[t.Tuple[str, int], int]
def terminate(self): # type: () -> None
"""Terminate the SSH process."""
if not self._process:
return # explain mode
# noinspection PyBroadException
try:
self._process.terminate()
except Exception: # pylint: disable=broad-except
pass
def wait(self): # type: () -> None
"""Wait for the SSH process to terminate."""
if not self._process:
return # explain mode
self._process.wait()
def collect_port_forwards(self): # type: (SshProcess) -> t.Dict[t.Tuple[str, int], int]
"""Collect port assignments for dynamic SSH port forwards."""
errors = []
display.info('Collecting %d SSH port forward(s).' % len(self.pending_forwards), verbosity=2)
while self.pending_forwards:
if self._process:
line_bytes = self._process.stderr.readline()
if not line_bytes:
if errors:
details = ':\n%s' % '\n'.join(errors)
else:
details = '.'
raise ApplicationError('SSH port forwarding failed%s' % details)
line = to_text(line_bytes).strip()
match = re.search(r'^Allocated port (?P<src_port>[0-9]+) for remote forward to (?P<dst_host>[^:]+):(?P<dst_port>[0-9]+)$', line)
if not match:
if re.search(r'^Warning: Permanently added .* to the list of known hosts\.$', line):
continue
display.warning('Unexpected SSH port forwarding output: %s' % line, verbosity=2)
errors.append(line)
continue
src_port = int(match.group('src_port'))
dst_host = str(match.group('dst_host'))
dst_port = int(match.group('dst_port'))
dst = (dst_host, dst_port)
else:
# explain mode
dst = list(self.pending_forwards)[0]
src_port = random.randint(40000, 50000)
self.pending_forwards.remove(dst)
self.forwards[dst] = src_port
display.info('Collected %d SSH port forward(s):\n%s' % (
len(self.forwards), '\n'.join('%s -> %s:%s' % (src_port, dst[0], dst[1]) for dst, src_port in sorted(self.forwards.items()))), verbosity=2)
return self.forwards
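The stderr parsing above is easy to check in isolation. A runnable sketch feeding made-up ssh log lines through the same regular expression:

# Map (host, port) forward targets to the ports ssh reports it allocated.
import re

pattern = r'^Allocated port (?P<src_port>[0-9]+) for remote forward to (?P<dst_host>[^:]+):(?P<dst_port>[0-9]+)$'

sample_stderr = [  # made-up sample lines in the format ssh emits at LogLevel=INFO
    'Warning: Permanently added 203.0.113.10 to the list of known hosts.',
    'Allocated port 40123 for remote forward to localhost:80',
    'Allocated port 40124 for remote forward to localhost:443',
]

forwards = {}
for line in sample_stderr:
    match = re.search(pattern, line)
    if match:
        forwards[(match.group('dst_host'), int(match.group('dst_port')))] = int(match.group('src_port'))

print(forwards)  # {('localhost', 80): 40123, ('localhost', 443): 40124}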
def create_ssh_command(
ssh, # type: SshConnectionDetail
options=None, # type: t.Optional[t.Dict[str, t.Union[str, int]]]
cli_args=None, # type: t.List[str]
command=None, # type: t.Optional[str]
): # type: (...) -> t.List[str]
"""Create an SSH command using the specified options."""
cmd = [
'ssh',
'-n', # prevent reading from stdin
'-i', ssh.identity_file, # file from which the identity for public key authentication is read
]
if not command:
cmd.append('-N') # do not execute a remote command
if ssh.port:
cmd.extend(['-p', str(ssh.port)]) # port to connect to on the remote host
if ssh.user:
cmd.extend(['-l', ssh.user]) # user to log in as on the remote machine
ssh_options = dict(
BatchMode='yes',
ExitOnForwardFailure='yes',
LogLevel='ERROR',
ServerAliveCountMax=4,
ServerAliveInterval=15,
StrictHostKeyChecking='no',
UserKnownHostsFile='/dev/null',
)
ssh_options.update(options or {})
for key, value in sorted(ssh_options.items()):
cmd.extend(['-o', '='.join([key, str(value)])])
cmd.extend(cli_args or [])
cmd.append(ssh.host)
if command:
cmd.append(command)
return cmd
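To show what the builder above produces, a self-contained sketch expanding the same default options into `-o key=value` flags; the key path and host are made-up example values:

# Expanding an ssh options dict the way create_ssh_command() does.
ssh_options = dict(
    BatchMode='yes',
    ExitOnForwardFailure='yes',
    LogLevel='ERROR',
    ServerAliveCountMax=4,
    ServerAliveInterval=15,
    StrictHostKeyChecking='no',
    UserKnownHostsFile='/dev/null',
)

cmd = ['ssh', '-n', '-i', '/tmp/example-key', '-N', '-p', '22', '-l', 'root']

for key, value in sorted(ssh_options.items()):
    cmd.extend(['-o', '='.join([key, str(value)])])

cmd.append('203.0.113.10')  # example host

print(' '.join(cmd))
# ssh -n -i /tmp/example-key -N -p 22 -l root -o BatchMode=yes ... 203.0.113.10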
def run_ssh_command(
args, # type: EnvironmentConfig
ssh, # type: SshConnectionDetail
options=None, # type: t.Optional[t.Dict[str, t.Union[str, int]]]
cli_args=None, # type: t.List[str]
command=None, # type: t.Optional[str]
): # type: (...) -> SshProcess
"""Run the specified SSH command, returning the created SshProcess instance created."""
cmd = create_ssh_command(ssh, options, cli_args, command)
env = common_environment()
cmd_show = ' '.join([cmd_quote(c) for c in cmd])
display.info('Run background command: %s' % cmd_show, verbosity=1, truncate=True)
cmd_bytes = [to_bytes(c) for c in cmd]
env_bytes = dict((to_bytes(k), to_bytes(v)) for k, v in env.items())
if args.explain:
process = SshProcess(None)
else:
process = SshProcess(subprocess.Popen(cmd_bytes, env=env_bytes, bufsize=-1, stdin=devnull(), stdout=subprocess.PIPE, stderr=subprocess.PIPE))
return process
def create_ssh_port_forwards(
args, # type: EnvironmentConfig
ssh, # type: SshConnectionDetail
forwards, # type: t.List[t.Tuple[str, int]]
): # type: (...) -> SshProcess
"""
Create SSH port forwards using the provided list of tuples (target_host, target_port).
Port bindings will be automatically assigned by SSH and must be collected with a subsequent call to collect_port_forwards.
"""
options = dict(
LogLevel='INFO', # info level required to get messages on stderr indicating the ports assigned to each forward
)
cli_args = []
for forward_host, forward_port in forwards:
cli_args.extend(['-R', ':'.join([str(0), forward_host, str(forward_port)])])
process = run_ssh_command(args, ssh, options, cli_args)
process.pending_forwards = forwards
return process
def create_ssh_port_redirects(
args, # type: EnvironmentConfig
ssh, # type: SshConnectionDetail
redirects, # type: t.List[t.Tuple[int, str, int]]
): # type: (...) -> SshProcess
"""Create SSH port redirections using the provided list of tuples (bind_port, target_host, target_port)."""
options = {}
cli_args = []
for bind_port, target_host, target_port in redirects:
cli_args.extend(['-R', ':'.join([str(bind_port), target_host, str(target_port)])])
process = run_ssh_command(args, ssh, options, cli_args)
return process
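The only difference between the two helpers above is the bind port passed to `-R`: forwards use 0 so ssh picks a free port (collected later from stderr), while redirects pin the port. A small runnable comparison:

# The `-R` arguments built by create_ssh_port_forwards() vs create_ssh_port_redirects().
forwards = [('localhost', 80), ('localhost', 443)]               # (target_host, target_port)
redirects = [(8080, 'localhost', 80), (8443, 'localhost', 443)]  # (bind_port, target_host, target_port)

forward_args = []
for forward_host, forward_port in forwards:
    forward_args.extend(['-R', ':'.join([str(0), forward_host, str(forward_port)])])

redirect_args = []
for bind_port, target_host, target_port in redirects:
    redirect_args.extend(['-R', ':'.join([str(bind_port), target_host, str(target_port)])])

print(forward_args)   # ['-R', '0:localhost:80', '-R', '0:localhost:443']
print(redirect_args)  # ['-R', '8080:localhost:80', '-R', '8443:localhost:443']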
def generate_ssh_inventory(ssh_connections): # type: (t.List[SshConnectionDetail]) -> str
"""Return an inventory file in JSON format, created from the provided SSH connection details."""
inventory = dict(
all=dict(
hosts=dict((ssh.name, exclude_none_values(dict(
ansible_host=ssh.host,
ansible_port=ssh.port,
ansible_user=ssh.user,
ansible_ssh_private_key_file=os.path.abspath(ssh.identity_file),
ansible_connection='ssh',
ansible_ssh_pipelining='yes',
ansible_python_interpreter=ssh.python_interpreter,
ansible_shell_type=ssh.shell_type,
ansible_ssh_extra_args='-o UserKnownHostsFile=/dev/null', # avoid changing the test environment
ansible_ssh_host_key_checking='no',
))) for ssh in ssh_connections),
),
)
inventory_text = json.dumps(inventory, indent=4, sort_keys=True)
display.info('>>> SSH Inventory\n%s' % inventory_text, verbosity=3)
return inventory_text
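A runnable sketch of the inventory layout generated above, for one made-up host; None-valued variables are dropped the same way exclude_none_values() does:

# Building the JSON inventory shape produced by generate_ssh_inventory().
import json

connection = dict(
    ansible_host='203.0.113.10',
    ansible_port=None,  # dropped below, like exclude_none_values()
    ansible_user='root',
    ansible_connection='ssh',
    ansible_ssh_pipelining='yes',
)

inventory = dict(
    all=dict(
        hosts={'example-host': dict((k, v) for k, v in connection.items() if v is not None)},
    ),
)

print(json.dumps(inventory, indent=4, sort_keys=True))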
@ -614,6 +614,9 @@ class IntegrationTarget(CompletionTarget):
if 'destructive' not in groups:
groups.append('non_destructive')
if 'needs/httptester' in groups:
groups.append('cloud/httptester') # backwards compatibility for when it was not a cloud plugin
if '_' in self.name:
prefix = self.name[:self.name.find('_')]
else:

@ -72,6 +72,13 @@ try:
except AttributeError:
MAXFD = -1
try:
TKey = t.TypeVar('TKey')
TValue = t.TypeVar('TValue')
except AttributeError:
TKey = None # pylint: disable=invalid-name
TValue = None # pylint: disable=invalid-name
COVERAGE_CONFIG_NAME = 'coveragerc'
ANSIBLE_TEST_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@ -148,6 +155,11 @@ def read_lines_without_comments(path, remove_blank_lines=False, optional=False):
return lines
def exclude_none_values(data): # type: (t.Dict[TKey, t.Optional[TValue]]) -> t.Dict[TKey, TValue]
"""Return the provided dictionary with any None values excluded."""
return dict((key, value) for key, value in data.items() if value is not None)
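A quick illustration with made-up values:

exclude_none_values(dict(ansible_host='10.0.0.1', ansible_python_interpreter=None))
# -> {'ansible_host': '10.0.0.1'}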
def find_executable(executable, cwd=None, path=None, required=True):
"""
:type executable: str
@ -365,8 +377,6 @@ def common_environment():
)
optional = (
'HTTPTESTER',
'KRB5_PASSWORD',
'LD_LIBRARY_PATH',
'SSH_AUTH_SOCK',
# MacOS High Sierra Compatibility
@ -725,18 +735,6 @@ def parse_to_list_of_dict(pattern, value):
return matched
def get_available_port():
"""
:rtype: int
"""
# this relies on the kernel not reusing previously assigned ports immediately
socket_fd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
with contextlib.closing(socket_fd):
socket_fd.bind(('', 0))
return socket_fd.getsockname()[1]
def get_subclasses(class_type):  # type: (t.Type[C]) -> t.Set[t.Type[C]]
"""Returns the set of types that are concrete subclasses of the given type."""
subclasses = set()  # type: t.Set[t.Type[C]]
@ -859,6 +857,21 @@ def open_zipfile(path, mode='r'):
zib_obj.close()
def sanitize_host_name(name):
"""Return a sanitized version of the given name, suitable for use as a hostname."""
return re.sub('[^A-Za-z0-9]+', '-', name)[:63].strip('-')
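For example, a container image reference is reduced to alphanumerics and dashes (illustrative input and output):

sanitize_host_name('quay.io/ansible/centos7-test-container:1.17.0')
# -> 'quay-io-ansible-centos7-test-container-1-17-0'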
def devnull():
"""Return a file descriptor for /dev/null, using a previously cached version if available."""
try:
return devnull.fd
except AttributeError:
devnull.fd = os.open('/dev/null', os.O_RDONLY)
return devnull.fd
def get_hash(path):
"""
:type path: str
@ -874,4 +887,20 @@ def get_hash(path):
return file_hash.hexdigest()
def get_host_ip():
"""Return the host's IP address."""
try:
return get_host_ip.ip
except AttributeError:
pass
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
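# Note: connecting a UDP socket sends no packets; it only makes the kernel select the local address it would use to route to the destination.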
sock.connect(('10.255.255.255', 22))
host_ip = get_host_ip.ip = sock.getsockname()[0]
display.info('Detected host IP: %s' % host_ip, verbosity=1)
return host_ip
display = Display()  # pylint: disable=locally-disabled, invalid-name

@ -219,7 +219,7 @@ def named_temporary_file(args, prefix, suffix, directory, content):
:rtype: str
"""
if args.explain:
yield os.path.join(directory, '%stemp%s' % (prefix, suffix))
yield os.path.join(directory or '/tmp', '%stemp%s' % (prefix, suffix))
else:
with tempfile.NamedTemporaryFile(prefix=prefix, suffix=suffix, dir=directory) as tempfile_fd:
tempfile_fd.write(to_bytes(content))

@ -218,7 +218,6 @@ test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/requirements/sanity.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/support/integration/plugins/module_utils/aws/core.py pylint:property-with-parameters
test/support/integration/plugins/module_utils/cloud.py future-import-boilerplate
test/support/integration/plugins/module_utils/cloud.py metaclass-boilerplate

@ -0,0 +1,518 @@
# Copyright (c) 2020 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
Function Get-AnsibleWindowsWebRequest {
<#
.SYNOPSIS
Creates a System.Net.WebRequest object based on common URL module options in Ansible.
.DESCRIPTION
Will create a WebRequest based on common input options within Ansible. This can be used manually or with
Invoke-AnsibleWindowsWebRequest.
.PARAMETER Uri
The URI to create the web request for.
.PARAMETER UrlMethod
The protocol method to use; if omitted, the default value for the specified URI protocol is used.
.PARAMETER FollowRedirects
Whether to follow redirect responses. This is only valid when using a HTTP URI.
all - Will follow all redirects
none - Will follow no redirects
safe - Will only follow redirects when GET or HEAD is used as the UrlMethod
.PARAMETER Headers
A hashtable or dictionary of header values to set on the request. This is only valid for a HTTP URI.
.PARAMETER HttpAgent
A string to set for the 'User-Agent' header. This is only valid for a HTTP URI.
.PARAMETER MaximumRedirection
The maximum number of redirections that will be followed. This is only valid for a HTTP URI.
.PARAMETER UrlTimeout
The time, in seconds, to wait before the request times out.
.PARAMETER ValidateCerts
Whether to validate SSL certificates, defaults to True.
.PARAMETER ClientCert
The path to the PFX file to use for X509 authentication. This is only valid for a HTTP URI. This path can either
be a filesystem path (C:\folder\cert.pfx) or a PSPath to a credential (Cert:\CurrentUser\My\<thumbprint>).
.PARAMETER ClientCertPassword
The password for the PFX certificate if required. This is only valid for a HTTP URI.
.PARAMETER ForceBasicAuth
Whether to set the Basic auth header on the first request instead of when required. This is only valid for a
HTTP URI.
.PARAMETER UrlUsername
The username to use for authenticating with the target.
.PARAMETER UrlPassword
The password to use for authenticating with the target.
.PARAMETER UseDefaultCredential
Whether to use the current user's credentials if available. This will only work when using Become, using SSH with
password auth, or WinRM with CredSSP or Kerberos with credential delegation.
.PARAMETER UseProxy
Whether to use the default proxy defined in IE (WinINet) for the user or set no proxy at all. This should not
be set to True when ProxyUrl is also defined.
.PARAMETER ProxyUrl
An explicit proxy server to use for the request instead of relying on the default proxy in IE. This is only
valid for a HTTP URI.
.PARAMETER ProxyUsername
An optional username to use for proxy authentication.
.PARAMETER ProxyPassword
The password for ProxyUsername.
.PARAMETER ProxyUseDefaultCredential
Whether to use the current user's credentials for proxy authentication if available. This will only work when
using Become, using SSH with password auth, or WinRM with CredSSP or Kerberos with credential delegation.
.PARAMETER Module
The AnsibleBasic module that can be used as a backup parameter source or a way to return warnings back to the
Ansible controller.
.EXAMPLE
$spec = @{
options = @{}
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-AnsibleWindowsWebRequestSpec))
$web_request = Get-AnsibleWindowsWebRequest -Module $module
#>
[CmdletBinding()]
[OutputType([System.Net.WebRequest])]
Param (
[Alias("url")]
[System.Uri]
$Uri,
[Alias("url_method")]
[System.String]
$UrlMethod,
[Alias("follow_redirects")]
[ValidateSet("all", "none", "safe")]
[System.String]
$FollowRedirects = "safe",
[System.Collections.IDictionary]
$Headers,
[Alias("http_agent")]
[System.String]
$HttpAgent = "ansible-httpget",
[Alias("maximum_redirection")]
[System.Int32]
$MaximumRedirection = 50,
[Alias("url_timeout")]
[System.Int32]
$UrlTimeout = 30,
[Alias("validate_certs")]
[System.Boolean]
$ValidateCerts = $true,
# Credential params
[Alias("client_cert")]
[System.String]
$ClientCert,
[Alias("client_cert_password")]
[System.String]
$ClientCertPassword,
[Alias("force_basic_auth")]
[Switch]
$ForceBasicAuth,
[Alias("url_username")]
[System.String]
$UrlUsername,
[Alias("url_password")]
[System.String]
$UrlPassword,
[Alias("use_default_credential")]
[Switch]
$UseDefaultCredential,
# Proxy params
[Alias("use_proxy")]
[System.Boolean]
$UseProxy = $true,
[Alias("proxy_url")]
[System.String]
$ProxyUrl,
[Alias("proxy_username")]
[System.String]
$ProxyUsername,
[Alias("proxy_password")]
[System.String]
$ProxyPassword,
[Alias("proxy_use_default_credential")]
[Switch]
$ProxyUseDefaultCredential,
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
[System.Object]
$Module
)
# Set module options for parameters unless they were explicitly passed in.
if ($Module) {
foreach ($param in $PSCmdlet.MyInvocation.MyCommand.Parameters.GetEnumerator()) {
if ($PSBoundParameters.ContainsKey($param.Key)) {
# Was set explicitly; we want to use that value
continue
}
foreach ($alias in @($Param.Key) + $param.Value.Aliases) {
if ($Module.Params.ContainsKey($alias)) {
$var_value = $Module.Params.$alias -as $param.Value.ParameterType
Set-Variable -Name $param.Key -Value $var_value
break
}
}
}
}
# Disable certificate validation if requested
# FUTURE: set this on ServerCertificateValidationCallback of the HttpWebRequest once .NET 4.5 is the minimum
if (-not $ValidateCerts) {
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
}
# Enable TLS1.1/TLS1.2 if they're available but disabled (eg. .NET 4.5)
$security_protocols = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::SystemDefault
if ([System.Net.SecurityProtocolType].GetMember("Tls11").Count -gt 0) {
$security_protocols = $security_protocols -bor [System.Net.SecurityProtocolType]::Tls11
}
if ([System.Net.SecurityProtocolType].GetMember("Tls12").Count -gt 0) {
$security_protocols = $security_protocols -bor [System.Net.SecurityProtocolType]::Tls12
}
[System.Net.ServicePointManager]::SecurityProtocol = $security_protocols
$web_request = [System.Net.WebRequest]::Create($Uri)
if ($UrlMethod) {
$web_request.Method = $UrlMethod
}
$web_request.Timeout = $UrlTimeout * 1000
if ($UseDefaultCredential -and $web_request -is [System.Net.HttpWebRequest]) {
$web_request.UseDefaultCredentials = $true
} elseif ($UrlUsername) {
if ($ForceBasicAuth) {
$auth_value = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $UrlUsername, $UrlPassword)))
$web_request.Headers.Add("Authorization", "Basic $auth_value")
} else {
$credential = New-Object -TypeName System.Net.NetworkCredential -ArgumentList $UrlUsername, $UrlPassword
$web_request.Credentials = $credential
}
}
if ($ClientCert) {
# Expecting either a filepath or PSPath (Cert:\CurrentUser\My\<thumbprint>)
$cert = Get-Item -LiteralPath $ClientCert -ErrorAction SilentlyContinue
if ($null -eq $cert) {
Write-Error -Message "Client certificate '$ClientCert' does not exist" -Category ObjectNotFound
return
}
$crypto_ns = 'System.Security.Cryptography.X509Certificates'
if ($cert.PSProvider.Name -ne 'Certificate') {
try {
$cert = New-Object -TypeName "$crypto_ns.X509Certificate2" -ArgumentList @(
$ClientCert, $ClientCertPassword
)
} catch [System.Security.Cryptography.CryptographicException] {
Write-Error -Message "Failed to read client certificate at '$ClientCert'" -Exception $_.Exception -Category SecurityError
return
}
}
$web_request.ClientCertificates = New-Object -TypeName "$crypto_ns.X509Certificate2Collection" -ArgumentList @(
$cert
)
}
if (-not $UseProxy) {
$proxy = $null
} elseif ($ProxyUrl) {
$proxy = New-Object -TypeName System.Net.WebProxy -ArgumentList $ProxyUrl, $true
} else {
$proxy = $web_request.Proxy
}
# $web_request.Proxy may return $null for a FTP web request. We only set the credentials if we have an actual
# proxy to work with, otherwise just ignore the credentials property.
if ($null -ne $proxy) {
if ($ProxyUseDefaultCredential) {
# Weird hack, $web_request.Proxy returns an IWebProxy object which only guarantees the Credentials
# property. We cannot set UseDefaultCredentials so we just set the Credentials to the
# DefaultCredentials in the CredentialCache which does the same thing.
$proxy.Credentials = [System.Net.CredentialCache]::DefaultCredentials
} elseif ($ProxyUsername) {
$proxy.Credentials = New-Object -TypeName System.Net.NetworkCredential -ArgumentList @(
$ProxyUsername, $ProxyPassword
)
} else {
$proxy.Credentials = $null
}
}
$web_request.Proxy = $proxy
# Some parameters only apply when dealing with a HttpWebRequest
if ($web_request -is [System.Net.HttpWebRequest]) {
if ($Headers) {
foreach ($header in $Headers.GetEnumerator()) {
switch ($header.Key) {
Accept { $web_request.Accept = $header.Value }
Connection { $web_request.Connection = $header.Value }
Content-Length { $web_request.ContentLength = $header.Value }
Content-Type { $web_request.ContentType = $header.Value }
Expect { $web_request.Expect = $header.Value }
Date { $web_request.Date = $header.Value }
Host { $web_request.Host = $header.Value }
If-Modified-Since { $web_request.IfModifiedSince = $header.Value }
Range { $web_request.AddRange($header.Value) }
Referer { $web_request.Referer = $header.Value }
Transfer-Encoding {
$web_request.SendChunked = $true
$web_request.TransferEncoding = $header.Value
}
User-Agent { continue }
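# User-Agent is deliberately skipped here; it is handled after this loop so an explicit http_agent option keeps priority.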
default { $web_request.Headers.Add($header.Key, $header.Value) }
}
}
}
# For backwards compatibility we need to support setting the User-Agent if the header was set in the task.
# We just need to make sure that if an explicit http_agent module was set then that takes priority.
if ($Headers -and $Headers.ContainsKey("User-Agent")) {
$options = (Get-AnsibleWindowsWebRequestSpec).options
if ($HttpAgent -eq $options.http_agent.default) {
$HttpAgent = $Headers['User-Agent']
} elseif ($null -ne $Module) {
$Module.Warn("The 'User-Agent' header and the 'http_agent' was set, using the 'http_agent' for web request")
}
}
$web_request.UserAgent = $HttpAgent
switch ($FollowRedirects) {
none { $web_request.AllowAutoRedirect = $false }
safe {
if ($web_request.Method -in @("GET", "HEAD")) {
$web_request.AllowAutoRedirect = $true
} else {
$web_request.AllowAutoRedirect = $false
}
}
all { $web_request.AllowAutoRedirect = $true }
}
if ($MaximumRedirection -eq 0) {
$web_request.AllowAutoRedirect = $false
} else {
$web_request.MaximumAutomaticRedirections = $MaximumRedirection
}
}
return $web_request
}
Function Invoke-AnsibleWindowsWebRequest {
<#
.SYNOPSIS
Invokes a ScriptBlock with the WebRequest.
.DESCRIPTION
Invokes the ScriptBlock and handles extra information like accessing the response stream, closing those streams
safely as well as setting common module return values.
.PARAMETER Module
The Ansible.Basic module to set the return values for. This will set the following return values:
elapsed - The total time, in seconds, that it took to send the web request and process the response
msg - The human readable description of the response status code
status_code - An int that is the response status code
.PARAMETER Request
The System.Net.WebRequest to call. This can either be manually crafted or created with
Get-AnsibleWindowsWebRequest.
.PARAMETER Script
The ScriptBlock to invoke during the web request. This ScriptBlock should take in the params
Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
This scriptblock should manage the response based on what it needs to do.
.PARAMETER Body
An optional Stream to send to the target during the request.
.PARAMETER IgnoreBadResponse
By default a WebException will be raised for a non-2xx status code and the Script will not be invoked. This
parameter can be set to process all responses regardless of the status code.
.EXAMPLE Basic module that downloads a file
$spec = @{
options = @{
path = @{ type = "path"; required = $true }
}
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-AnsibleWindowsWebRequestSpec))
$web_request = Get-AnsibleWindowsWebRequest -Module $module
Invoke-AnsibleWindowsWebRequest -Module $module -Request $web_request -Script {
Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
$fs = [System.IO.File]::Create($module.Params.path)
try {
$Stream.CopyTo($fs)
$fs.Flush()
} finally {
$fs.Dispose()
}
}
#>
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[System.Object]
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
$Module,
[Parameter(Mandatory=$true)]
[System.Net.WebRequest]
$Request,
[Parameter(Mandatory=$true)]
[ScriptBlock]
$Script,
[AllowNull()]
[System.IO.Stream]
$Body,
[Switch]
$IgnoreBadResponse
)
$start = Get-Date
if ($null -ne $Body) {
$request_st = $Request.GetRequestStream()
try {
$Body.CopyTo($request_st)
$request_st.Flush()
} finally {
$request_st.Close()
}
}
try {
try {
$web_response = $Request.GetResponse()
} catch [System.Net.WebException] {
# A WebResponse with a status code not in the 200 range will raise a WebException. We check if the
# exception raised contains the actual response and continue on if IgnoreBadResponse is set. We also
# make sure we set the status_code return value on the Module object if possible
if ($_.Exception.PSObject.Properties.Name -match "Response") {
$web_response = $_.Exception.Response
if (-not $IgnoreBadResponse -or $null -eq $web_response) {
$Module.Result.msg = $_.Exception.StatusDescription
$Module.Result.status_code = $_.Exception.Response.StatusCode
throw $_
}
} else {
throw $_
}
}
if ($Request.RequestUri.IsFile) {
# A FileWebResponse won't have these properties set
$Module.Result.msg = "OK"
$Module.Result.status_code = 200
} else {
$Module.Result.msg = $web_response.StatusDescription
$Module.Result.status_code = $web_response.StatusCode
}
$response_stream = $web_response.GetResponseStream()
try {
# Invoke the ScriptBlock and pass in WebResponse and ResponseStream
&$Script -Response $web_response -Stream $response_stream
} finally {
$response_stream.Dispose()
}
} finally {
if ($web_response) {
$web_response.Close()
}
$Module.Result.elapsed = ((Get-date) - $start).TotalSeconds
}
}
Function Get-AnsibleWindowsWebRequestSpec {
<#
.SYNOPSIS
Used by modules to get the argument spec fragment for AnsibleModule.
.EXAMPLE
$spec = @{
options = @{}
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-AnsibleWindowsWebRequestSpec))
.NOTES
The options here are reflected in the doc fragment 'ansible.windows.web_request' at
'plugins/doc_fragments/web_request.py'.
#>
@{
options = @{
url_method = @{ type = 'str' }
follow_redirects = @{ type = 'str'; choices = @('all', 'none', 'safe'); default = 'safe' }
headers = @{ type = 'dict' }
http_agent = @{ type = 'str'; default = 'ansible-httpget' }
maximum_redirection = @{ type = 'int'; default = 50 }
url_timeout = @{ type = 'int'; default = 30 }
validate_certs = @{ type = 'bool'; default = $true }
# Credential options
client_cert = @{ type = 'str' }
client_cert_password = @{ type = 'str'; no_log = $true }
force_basic_auth = @{ type = 'bool'; default = $false }
url_username = @{ type = 'str' }
url_password = @{ type = 'str'; no_log = $true }
use_default_credential = @{ type = 'bool'; default = $false }
# Proxy options
use_proxy = @{ type = 'bool'; default = $true }
proxy_url = @{ type = 'str' }
proxy_username = @{ type = 'str' }
proxy_password = @{ type = 'str'; no_log = $true }
proxy_use_default_credential = @{ type = 'bool'; default = $false }
}
}
}
$export_members = @{
Function = "Get-AnsibleWindowsWebRequest", "Get-AnsibleWindowsWebRequestSpec", "Invoke-AnsibleWindowsWebRequest"
}
Export-ModuleMember @export_members

@ -0,0 +1,219 @@
#!powershell
# Copyright: (c) 2015, Corwin Brown <corwin@corwinbrown.com>
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <dag@wieers.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Module Ansible.ModuleUtils.CamelConversion
#Requires -Module Ansible.ModuleUtils.FileUtil
#Requires -Module Ansible.ModuleUtils.Legacy
#AnsibleRequires -PowerShell ..module_utils.WebRequest
$spec = @{
options = @{
url = @{ type = "str"; required = $true }
content_type = @{ type = "str" }
body = @{ type = "raw" }
dest = @{ type = "path" }
creates = @{ type = "path" }
removes = @{ type = "path" }
return_content = @{ type = "bool"; default = $false }
status_code = @{ type = "list"; elements = "int"; default = @(200) }
# Defined for ease of use and backwards compatibility
url_timeout = @{
aliases = "timeout"
}
url_method = @{
aliases = "method"
default = "GET"
}
# Defined for the alias backwards compatibility, remove once aliases are removed
url_username = @{
aliases = @("user", "username")
deprecated_aliases = @(
@{ name = "user"; date = [DateTime]::ParseExact("2022-07-01", "yyyy-MM-dd", $null); collection_name = 'ansible.windows' },
@{ name = "username"; date = [DateTime]::ParseExact("2022-07-01", "yyyy-MM-dd", $null); collection_name = 'ansible.windows' }
)
}
url_password = @{
aliases = @("password")
deprecated_aliases = @(
@{ name = "password"; date = [DateTime]::ParseExact("2022-07-01", "yyyy-MM-dd", $null); collection_name = 'ansible.windows' }
)
}
}
supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-AnsibleWindowsWebRequestSpec))
$url = $module.Params.url
$method = $module.Params.url_method.ToUpper()
$content_type = $module.Params.content_type
$body = $module.Params.body
$dest = $module.Params.dest
$creates = $module.Params.creates
$removes = $module.Params.removes
$return_content = $module.Params.return_content
$status_code = $module.Params.status_code
$JSON_CANDIDATES = @('text', 'json', 'javascript')
$module.Result.elapsed = 0
$module.Result.url = $url
Function ConvertFrom-SafeJson {
<#
.SYNOPSIS
Safely convert a JSON string to an object, this is like ConvertFrom-Json except it respects -ErrorAction.
.PARAMETER InputObject
The input object string to convert from.
#>
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[AllowEmptyString()]
[AllowNull()]
[String]
$InputObject
)
if (-not $InputObject) {
return
}
try {
# Make sure we output the actual object without unpacking with the unary comma
,[Ansible.Basic.AnsibleModule]::FromJson($InputObject)
} catch [System.ArgumentException] {
Write-Error -Message "Invalid json string as input object: $($_.Exception.Message)" -Exception $_.Exception
}
}
if (-not ($method -cmatch '^[A-Z]+$')) {
$module.FailJson("Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
}
if ($creates -and (Test-AnsiblePath -Path $creates)) {
$module.Result.skipped = $true
$module.Result.msg = "The 'creates' file or directory ($creates) already exists."
$module.ExitJson()
}
if ($removes -and -not (Test-AnsiblePath -Path $removes)) {
$module.Result.skipped = $true
$module.Result.msg = "The 'removes' file or directory ($removes) does not exist."
$module.ExitJson()
}
$client = Get-AnsibleWindowsWebRequest -Uri $url -Module $module
if ($null -ne $content_type) {
$client.ContentType = $content_type
}
$response_script = {
param($Response, $Stream)
ForEach ($prop in $Response.PSObject.Properties) {
$result_key = Convert-StringToSnakeCase -string $prop.Name
$prop_value = $prop.Value
# convert any DateTime values to the ISO 8601 standard
if ($prop_value -is [System.DateTime]) {
$prop_value = $prop_value.ToString("o", [System.Globalization.CultureInfo]::InvariantCulture)
}
$module.Result.$result_key = $prop_value
}
# manually get the headers as not all of them are in the response properties
foreach ($header_key in $Response.Headers.GetEnumerator()) {
$header_value = $Response.Headers[$header_key]
$header_key = $header_key.Replace("-", "") # remove '-' so the snake_case conversion yields keys like content_length
$header_key = Convert-StringToSnakeCase -string $header_key
$module.Result.$header_key = $header_value
}
# we only care about the return body if we need to return the content or create a file
if ($return_content -or $dest) {
# copy to a MemoryStream so we can read it multiple times
$memory_st = New-Object -TypeName System.IO.MemoryStream
try {
$Stream.CopyTo($memory_st)
if ($return_content) {
$memory_st.Seek(0, [System.IO.SeekOrigin]::Begin) > $null
$content_bytes = $memory_st.ToArray()
$module.Result.content = [System.Text.Encoding]::UTF8.GetString($content_bytes)
if ($module.Result.ContainsKey("content_type") -and $module.Result.content_type -Match ($JSON_CANDIDATES -join '|')) {
$json = ConvertFrom-SafeJson -InputObject $module.Result.content -ErrorAction SilentlyContinue
if ($json) {
$module.Result.json = $json
}
}
}
if ($dest) {
$memory_st.Seek(0, [System.IO.SeekOrigin]::Begin) > $null
$changed = $true
if (Test-AnsiblePath -Path $dest) {
$actual_checksum = Get-FileChecksum -path $dest -algorithm "sha1"
$sp = New-Object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider
$content_checksum = [System.BitConverter]::ToString($sp.ComputeHash($memory_st)).Replace("-", "").ToLower()
if ($actual_checksum -eq $content_checksum) {
$changed = $false
}
}
$module.Result.changed = $changed
if ($changed -and (-not $module.CheckMode)) {
$memory_st.Seek(0, [System.IO.SeekOrigin]::Begin) > $null
$file_stream = [System.IO.File]::Create($dest)
try {
$memory_st.CopyTo($file_stream)
} finally {
$file_stream.Flush()
$file_stream.Close()
}
}
}
} finally {
$memory_st.Close()
}
}
if ($status_code -notcontains $Response.StatusCode) {
$module.FailJson("Status code of request '$([int]$Response.StatusCode)' is not in list of valid status codes $status_code : $($Response.StatusCode)'.")
}
}
$body_st = $null
if ($null -ne $body) {
if ($body -is [System.Collections.IDictionary] -or $body -is [System.Collections.IList]) {
$body_string = ConvertTo-Json -InputObject $body -Compress
} elseif ($body -isnot [String]) {
$body_string = $body.ToString()
} else {
$body_string = $body
}
$buffer = [System.Text.Encoding]::UTF8.GetBytes($body_string)
$body_st = New-Object -TypeName System.IO.MemoryStream -ArgumentList @(,$buffer)
}
try {
Invoke-AnsibleWindowsWebRequest -Module $module -Request $client -Script $response_script -Body $body_st -IgnoreBadResponse
} catch {
$module.FailJson("Unhandled exception occurred when sending web request. Exception: $($_.Exception.Message)", $_)
} finally {
if ($null -ne $body_st) {
$body_st.Dispose()
}
}
$module.ExitJson()

@ -0,0 +1,155 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Corwin Brown <corwin@corwinbrown.com>
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <dag@wieers.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: win_uri
short_description: Interacts with webservices
description:
- Interacts with FTP, HTTP and HTTPS web services.
- Supports Digest, Basic and WSSE HTTP authentication mechanisms.
- For non-Windows targets, use the M(ansible.builtin.uri) module instead.
options:
url:
description:
- Supports FTP, HTTP or HTTPS URLs in the form of (ftp|http|https)://host.domain:port/path.
type: str
required: yes
content_type:
description:
- Sets the "Content-Type" header.
type: str
body:
description:
- The body of the HTTP request/response to the web service.
type: raw
dest:
description:
- Output the response body to a file.
type: path
creates:
description:
- A filename; when it already exists, this step will be skipped.
type: path
removes:
description:
- A filename; when it does not exist, this step will be skipped.
type: path
return_content:
description:
- Whether or not to return the body of the response as a "content" key in
the dictionary result. If the reported Content-type is
"application/json", then the JSON is additionally loaded into a key
called C(json) in the dictionary results.
type: bool
default: no
status_code:
description:
- A valid, numeric, HTTP status code that signifies success of the request.
- Can also be a comma-separated list of status codes.
type: list
elements: int
default: [ 200 ]
url_method:
default: GET
aliases:
- method
url_timeout:
aliases:
- timeout
# The following options are defined in the web_request fragment, but the module contains deprecated aliases for backwards compatibility.
url_username:
description:
- The username to use for authentication.
- The aliases I(user) and I(username) are deprecated and will be removed on
the major release after C(2022-07-01).
aliases:
- user
- username
url_password:
description:
- The password for I(url_username).
- The alias I(password) is deprecated and will be removed on the major
release after C(2022-07-01).
aliases:
- password
extends_documentation_fragment:
- ansible.windows.web_request
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Corwin Brown (@blakfeld)
- Dag Wieers (@dagwieers)
'''
EXAMPLES = r'''
- name: Perform a GET and Store Output
ansible.windows.win_uri:
url: http://example.com/endpoint
register: http_output
# Set a HOST header to hit an internal webserver:
- name: Hit a Specific Host on the Server
ansible.windows.win_uri:
url: http://example.com/
method: GET
headers:
host: www.somesite.com
- name: Perform a HEAD on an Endpoint
ansible.windows.win_uri:
url: http://www.example.com/
method: HEAD
- name: POST a Body to an Endpoint
ansible.windows.win_uri:
url: http://www.somesite.com/
method: POST
body: "{ 'some': 'json' }"
'''
RETURN = r'''
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: always
type: float
sample: 23.2
url:
description: The Target URL.
returned: always
type: str
sample: https://www.ansible.com
status_code:
description: The HTTP Status Code of the response.
returned: success
type: int
sample: 200
status_description:
description: A summary of the status.
returned: success
type: str
sample: OK
content:
description: The raw content of the HTTP response.
returned: success and return_content is True
type: str
sample: '{"foo": "bar"}'
content_length:
description: The byte size of the response.
returned: success
type: int
sample: 54447
json:
description: The json structure returned under content as a dictionary.
returned: success and Content-Type is "application/json" or "application/javascript" and return_content is True
type: dict
sample: {"this-is-dependent": "on the actual return content"}
'''

@ -1,131 +0,0 @@
# This file is part of Ansible
# -*- coding: utf-8 -*-
#
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pytest
from units.compat.mock import call, patch, MagicMock
# docker images quay.io/ansible/centos7-test-container --format '{{json .}}'
DOCKER_OUTPUT_MULTIPLE = """
{"Containers":"N/A","CreatedAt":"2020-06-11 17:05:58 -0500 CDT","CreatedSince":"3 months ago","Digest":"\u003cnone\u003e","ID":"b0f914b26cc1","Repository":"quay.io/ansible/centos7-test-container","SharedSize":"N/A","Size":"556MB","Tag":"1.17.0","UniqueSize":"N/A","VirtualSize":"555.6MB"}
{"Containers":"N/A","CreatedAt":"2020-06-11 17:05:58 -0500 CDT","CreatedSince":"3 months ago","Digest":"\u003cnone\u003e","ID":"b0f914b26cc1","Repository":"quay.io/ansible/centos7-test-container","SharedSize":"N/A","Size":"556MB","Tag":"latest","UniqueSize":"N/A","VirtualSize":"555.6MB"}
{"Containers":"N/A","CreatedAt":"2019-04-01 19:59:39 -0500 CDT","CreatedSince":"18 months ago","Digest":"\u003cnone\u003e","ID":"dd3d10e03dd3","Repository":"quay.io/ansible/centos7-test-container","SharedSize":"N/A","Size":"678MB","Tag":"1.8.0","UniqueSize":"N/A","VirtualSize":"678MB"}
""".lstrip() # noqa: E501
PODMAN_OUTPUT = """
[
{
"id": "dd3d10e03dd3580de865560c3440c812a33fd7a1fca8ed8e4a1219ff3d809e3a",
"names": [
"quay.io/ansible/centos7-test-container:1.8.0"
],
"digest": "sha256:6e5d9c99aa558779715a80715e5cf0c227a4b59d95e6803c148290c5d0d9d352",
"created": "2019-04-02T00:59:39.234584184Z",
"size": 702761933
},
{
"id": "b0f914b26cc1088ab8705413c2f2cf247306ceeea51260d64c26894190d188bd",
"names": [
"quay.io/ansible/centos7-test-container:latest"
],
"digest": "sha256:d8431aa74f60f4ff0f1bd36bc9a227bbb2066330acd8bf25e29d8614ee99e39c",
"created": "2020-06-11T22:05:58.382459136Z",
"size": 578513505
}
]
""".lstrip()
@pytest.fixture
def docker_images():
from ansible_test._internal.docker_util import docker_images
return docker_images
@pytest.fixture
def ansible_test(ansible_test):
import ansible_test
return ansible_test
@pytest.fixture
def subprocess_error():
from ansible_test._internal.util import SubprocessError
return SubprocessError
@pytest.mark.parametrize(
('returned_items_count', 'patched_dc_stdout'),
(
(3, (DOCKER_OUTPUT_MULTIPLE, '')),
(2, (PODMAN_OUTPUT, '')),
(0, ('', '')),
),
ids=('docker JSONL', 'podman JSON sequence', 'empty output'))
def test_docker_images(docker_images, mocker, returned_items_count, patched_dc_stdout):
mocker.patch(
'ansible_test._internal.docker_util.docker_command',
return_value=patched_dc_stdout)
ret = docker_images('', 'quay.io/ansible/centos7-test-container')
assert len(ret) == returned_items_count
def test_podman_fallback(ansible_test, docker_images, subprocess_error, mocker):
'''Test podman >2 && <2.2 fallback'''
cmd = ['docker', 'images', 'quay.io/ansible/centos7-test-container', '--format', '{{json .}}']
docker_command_results = [
subprocess_error(cmd, status=1, stderr='function "json" not defined'),
(PODMAN_OUTPUT, ''),
]
mocker.patch(
'ansible_test._internal.docker_util.docker_command',
side_effect=docker_command_results)
ret = docker_images('', 'quay.io/ansible/centos7-test-container')
calls = [
call(
'',
['images', 'quay.io/ansible/centos7-test-container', '--format', '{{json .}}'],
capture=True,
always=True),
call(
'',
['images', 'quay.io/ansible/centos7-test-container', '--format', 'json'],
capture=True,
always=True),
]
ansible_test._internal.docker_util.docker_command.assert_has_calls(calls)
assert len(ret) == 2
def test_podman_no_such_image(ansible_test, docker_images, subprocess_error, mocker):
'''Test podman "no such image" error'''
cmd = ['docker', 'images', 'quay.io/ansible/centos7-test-container', '--format', '{{json .}}']
exc = subprocess_error(cmd, status=1, stderr='no such image'),
mocker.patch(
'ansible_test._internal.docker_util.docker_command',
side_effect=exc)
ret = docker_images('', 'quay.io/ansible/centos7-test-container')
assert ret == []