ansible-galaxy - add signature verification of the MANIFEST.json (#76681)

* ansible-galaxy collection install|verify:

  - Support verifying the origin of the MANIFEST.json when the Galaxy server has provided signatures.
  - Allow supplemental signatures, supplied on the CLI or in a requirements file, to be used during verification.

* ansible-galaxy collection install:

  - Support disabling signature verification. This silences the warning ansible-galaxy emits when the Galaxy server provides signatures that cannot be used because no keyring is configured.
  - Store Galaxy server metadata alongside installed collections for provenance. This is used by 'ansible-galaxy collection verify --offline'.

* Add unit tests for the method that gets signatures from a Galaxy server

* Add integration tests for user-provided signature sources

- Test CLI option combinations
- Test installing collections with valid/invalid signature sources
- Test disabling GPG verification when installing collections
- Test verifying collections with valid/invalid signature sources

* Make signature verification advisory-by-default if signatures are provided by the Galaxy server

- Make the default keyring None
- Warn if the keyring is None but the Galaxy server provided signatures
- Error if the keyring is None but the user supplied signatures
- Error if the keyring is not None but is invalid
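
The four rules above can be read as a small decision table; a hedged sketch (the function name and return values are hypothetical, and an invalid keyring is only detected later when gpg actually runs):

```python
def gpg_verification_mode(keyring, server_signatures, user_signatures):
    """Sketch of the advisory-by-default policy described above."""
    if keyring is None:
        if user_signatures:
            return "error"  # user explicitly supplied signatures but no --keyring
        if server_signatures:
            return "warn"   # advisory: warn and skip verification
        return "skip"       # nothing to verify
    return "verify"         # an invalid keyring surfaces as an error during verification
```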

* changelog

* add ansible-galaxy user documentation for new options

Co-authored-by: Matt Martz <matt@sivel.net>
Co-authored-by: Sviatoslav Sydorenko <wk.cvs.github@sydorenko.org.ua>
Co-authored-by: Martin Krizek <martin.krizek@gmail.com>
Co-authored-by: Sandra McCann <samccann@redhat.com>
Co-authored-by: Andy Mott <amott@redhat.com>
Co-authored-by: John R Barker <john@johnrbarker.com>
Sloane Hertel 4 years ago committed by GitHub
parent d35bef68f5
commit 43e55db208

@ -0,0 +1,24 @@
minor_changes:
- >-
``ansible-galaxy collection [install|verify]`` - use gpg to verify the authenticity of
the signed ``MANIFEST.json`` with ASCII armored detached signatures provided by the Galaxy
server. The keyring (which is not managed by ``ansible-galaxy``) must be provided with
the ``--keyring`` option to use signature verification.
If no ``--keyring`` is specified and the collection to ``install|verify`` has associated
detached signatures on the Galaxy server, a warning is provided.
- >-
``ansible-galaxy collection [install|verify]`` - allow user-provided signature sources
in addition to those from the Galaxy server.
Each collection entry in a requirements file can specify a ``signatures`` key followed by
a list of sources.
Collection name(s) provided on the CLI can specify additional signature sources by using
the ``--signatures`` CLI option.
Signature sources should be URIs that can be opened with ``urllib.request.urlopen()``, such as
"https://example.com/path/to/detached_signature.asc" or "file:///path/to/detached_signature.asc".
The ``--keyring`` option must be specified if signature sources are provided.
- >-
``ansible-galaxy collection install`` - Store Galaxy server metadata alongside installed
collections for provenance. Signatures obtained from the Galaxy server can be used for offline
verification with ``ansible-galaxy collection verify --offline``.
- >-
``ansible-galaxy collection install`` - Add a global toggle to turn off GPG signature verification.
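
The URI handling described in the entries above can be sketched with plain ``urllib`` (the function name is hypothetical, not the ansible-galaxy internal API):

```python
from urllib.request import urlopen


def get_signature_from_uri(source):
    """Fetch an ASCII-armored detached signature from a URI (sketch).

    Any URI that urllib.request.urlopen() accepts works, e.g.
    https://example.com/detached_signature.asc or
    file:///path/to/detached_signature.asc.
    """
    with urlopen(source) as resp:
        return resp.read().decode("utf-8")
```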

@ -69,3 +69,11 @@ You can also keep a collection adjacent to the current playbook, under a ``colle
See :ref:`collection_structure` for details on the collection directory structure.
Collections signed by a Galaxy server can be verified during installation with GnuPG. To opt into signature verification, configure a keyring for ``ansible-galaxy`` with native GnuPG tooling and provide the file path with the ``--keyring`` CLI option. Signatures provided by the Galaxy server will be used to verify the collection's ``MANIFEST.json``. If verification is unsuccessful, the collection will not be installed. GnuPG signature verification can be disabled with ``--disable-gpg-verify`` or by configuring :ref:`GALAXY_DISABLE_GPG_VERIFY`.
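Both toggles also map to persistent configuration (the ``[galaxy]`` ini section keys added by this change); a sketch of the equivalent ``ansible.cfg``:

.. code-block:: ini

   [galaxy]
   # path to a GnuPG keyring used for collection signature verification
   gpg_keyring = ~/.ansible/pubring.kbx
   # set to true to skip GPG signature verification entirely
   disable_gpg_verify = false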
Use the ``--signature`` option to verify the collection's ``MANIFEST.json`` with signatures in addition to those provided by the Galaxy server. Supplemental signatures should be provided as URIs.
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx

@ -13,15 +13,28 @@ You can set up a ``requirements.yml`` file to install multiple collections in on
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
You can specify four keys for each collection entry:
You can specify the following keys for each collection entry:
* ``name``
* ``version``
* ``signatures``
* ``source``
* ``type``
The ``version`` key uses the same range identifier format documented in :ref:`collections_older_version`.
The ``signatures`` key accepts a list of signature sources that are used to supplement those found on the Galaxy server during collection installation and ``ansible-galaxy collection verify``. Signature sources should be URIs that contain the detached signature. These are only used for collections on Galaxy servers. The ``--keyring`` CLI option must be provided if signatures are specified.
.. code-block:: yaml
collections:
- name: namespace.name
version: 1.0.0
type: galaxy
signatures:
- https://examplehost.com/detached_signature.asc
- file:///path/to/local/detached_signature.asc
The ``type`` key can be set to ``file``, ``galaxy``, ``git``, ``url``, ``dir``, or ``subdirs``. If ``type`` is omitted, the ``name`` key is used to implicitly determine the source of the collection.
When you install a collection with ``type: git``, the ``version`` key can refer to a branch or to a `git commit-ish <https://git-scm.com/docs/gitglossary#def_commit-ish>`_ object (commit or tag). For example:
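The truncated example can be sketched as a requirements entry (the repository URL and branch are hypothetical):

.. code-block:: yaml

   collections:
     - name: https://github.com/organization/repo_name.git
       type: git
       version: devel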

@ -270,6 +270,41 @@ In addition to the ``namespace.collection_name:version`` format, you can provide
Verifying against ``tar.gz`` files is not supported. If your ``requirements.yml`` contains paths to tar files or URLs for installation, you can use the ``--ignore-errors`` flag to ensure that all collections using the ``namespace.name`` format in the file are processed.
Signature verification
----------------------
If a collection has been signed by the Galaxy server, the server will provide ASCII armored, detached signatures to verify the authenticity of the MANIFEST.json before using it to verify the collection's contents. You must opt into signature verification by configuring a keyring for ``ansible-galaxy`` to use and providing the path with the ``--keyring`` option.
In addition to any signatures provided by the Galaxy server, signature sources can also be provided in the requirements file and on the command line. Signature sources should be URIs.
Use the ``--signature`` option to verify collection name(s) provided on the CLI with an additional signature. This option can be used multiple times to provide multiple signatures.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx
Collections in a requirements file should list any additional signature sources following the collection's ``signatures`` key.
.. code-block:: yaml
# requirements.yml
collections:
- name: ns.coll
version: 1.0.0
signatures:
- https://examplehost.com/detached_signature.asc
- file:///path/to/local/detached_signature.asc
.. code-block:: bash
ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx
When a collection is installed from a Galaxy server, the signatures provided by the server to verify the collection's authenticity are saved alongside the installed collections. This data is used to verify the internal consistency of the collection without querying the Galaxy server again when the ``--offline`` option is provided.
.. code-block:: bash
ansible-galaxy collection verify my_namespace.my_collection --offline --keyring ~/.ansible/pubring.kbx
.. _collections_using_playbook:
Using collections in a Playbook

@ -82,9 +82,14 @@ def with_collection_artifacts_manager(wrapped_method):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
keyring = GalaxyCLI._resolve_path(keyring)
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
validate_certs=not context.CLIARGS['ignore_certs'],
keyring=keyring,
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
@ -385,6 +390,12 @@ class GalaxyCLI(CLI):
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
@ -425,9 +436,26 @@ class GalaxyCLI(CLI):
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
if self._implicit_role and ('-r' in self._raw_args or '--role-file' in self._raw_args):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
@ -816,6 +844,7 @@ class GalaxyCLI(CLI):
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
@ -823,6 +852,12 @@ class GalaxyCLI(CLI):
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
@ -832,7 +867,7 @@ class GalaxyCLI(CLI):
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager)
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
@ -1107,9 +1142,13 @@ class GalaxyCLI(CLI):
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
@ -1140,6 +1179,9 @@ class GalaxyCLI(CLI):
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
@ -1155,6 +1197,7 @@ class GalaxyCLI(CLI):
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
@ -1219,6 +1262,7 @@ class GalaxyCLI(CLI):
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
@ -1239,6 +1283,7 @@ class GalaxyCLI(CLI):
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0

@ -1440,6 +1440,27 @@ GALAXY_CACHE_DIR:
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration

@ -229,7 +229,7 @@ CollectionMetadata = collections.namedtuple('CollectionMetadata', ['namespace',
class CollectionVersionMetadata:
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies):
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies, signatures_url, signatures):
"""
Contains common information about a collection on a Galaxy server to smooth through API differences for
Collection and define a standard meta info for a collection.
@ -240,6 +240,8 @@ class CollectionVersionMetadata:
:param download_url: The URL to download the collection.
:param artifact_sha256: The SHA256 of the collection artifact for later verification.
:param dependencies: A dict of dependencies of the collection.
:param signatures_url: The URL of the collection version from which the signatures were retrieved.
:param signatures: The list of signatures found at the signatures_url.
"""
self.namespace = namespace
self.name = name
@ -247,6 +249,8 @@ class CollectionVersionMetadata:
self.download_url = download_url
self.artifact_sha256 = artifact_sha256
self.dependencies = dependencies
self.signatures_url = signatures_url
self.signatures = signatures
@functools.total_ordering
@ -780,9 +784,11 @@ class GalaxyAPI:
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
signatures = data.get('signatures') or []
return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
data['download_url'], data['artifact']['sha256'],
data['metadata']['dependencies'])
data['metadata']['dependencies'], data['href'], signatures)
@g_connect(['v2', 'v3'])
def get_collection_versions(self, namespace, name):
@ -870,3 +876,31 @@ class GalaxyAPI:
self._set_cache()
return versions
@g_connect(['v2', 'v3'])
def get_collection_signatures(self, namespace, name, version):
"""
Gets the collection signatures from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Version of the collection to get the information for.
:return: A list of signature strings.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
try:
signatures = data["signatures"]
except KeyError:
# Noisy since this is used by the dep resolver, so require more verbosity than Galaxy calls
display.vvvvvv(f"Server {self.api_server} has not signed {namespace}.{name}:{version}")
return []
else:
return [signature_info["signature"] for signature_info in signatures]
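
The extraction above assumes a response in which ``signatures`` is a list of objects each carrying a ``signature`` field; a minimal illustration of that shape (payload values are invented):

```python
# Invented response fragment mirroring the fields the parsing above reads
data = {
    "signatures": [
        {"signature": "-----BEGIN PGP SIGNATURE-----\n...\n-----END PGP SIGNATURE-----"},
        {"signature": "-----BEGIN PGP SIGNATURE-----\n...\n-----END PGP SIGNATURE-----"},
    ],
}
signatures = [signature_info["signature"] for signature_info in data.get("signatures") or []]
```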

@ -16,6 +16,7 @@ import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import yaml
@ -49,7 +50,6 @@ if TYPE_CHECKING:
else: # Python 2 + Python 3.4-3.7
from typing_extensions import Literal
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
@ -97,6 +97,7 @@ if TYPE_CHECKING:
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
@ -105,6 +106,11 @@ from ansible.galaxy.collection.concrete_artifact_manager import (
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source
)
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
@ -133,6 +139,45 @@ MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
@ -166,13 +211,60 @@ def verify_local_collection(
modified_content = [] # type: List[ModifiedContent]
verify_local_only = remote_collection is None
if verify_local_only:
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = []
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]]
elif not verify_local_only and remote_collection.signatures:
signatures = remote_collection.signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
else:
for signature in signatures:
try:
verify_file_signature(manifest_file, to_text(signature, errors='surrogate_or_strict'), artifacts_manager.keyring)
except CollectionSignatureError as error:
display.vvvv(error.report(local_collection.fqcn))
result.success = False
if not result.success:
return result
elif signatures:
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
@ -189,20 +281,6 @@ def verify_local_collection(
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
@ -246,6 +324,27 @@ def verify_local_collection(
return result
def verify_file_signature(manifest_file, detached_signature, keyring):
# type: (str, str, str) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = set(error.get_gpg_error_description() for error in chain([error], errors))
raise CollectionSignatureError(reasons=reasons, stdout=gpg_result, rc=gpg_verification_rc)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
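
``run_gpg_verify`` and ``parse_gpg_errors`` are not shown in this diff; a minimal, hypothetical sketch of the ``--status-fd`` parsing they imply (status tokens such as ``BADSIG`` and ``NO_PUBKEY`` come from the GnuPG DETAILS documentation, while the descriptions here are invented):

```python
GPG_ERROR_DESCRIPTIONS = {
    # Subset of GnuPG status tokens that indicate verification failure
    "BADSIG": "The signature did not verify against the MANIFEST.json",
    "EXPKEYSIG": "The signature was made with an expired key",
    "NO_PUBKEY": "The public key is not in the configured keyring",
}


def parse_gpg_status(status_output):
    """Yield human-readable reasons from `gpg --status-fd 1` output (sketch)."""
    for line in status_output.splitlines():
        if not line.startswith("[GNUPG:] "):
            continue
        token = line.split()[1]
        if token in GPG_ERROR_DESCRIPTIONS:
            yield GPG_ERROR_DESCRIPTIONS[token]


status = "[GNUPG:] NEWSIG\n[GNUPG:] NO_PUBKEY ABCDEF0123456789\n"
reasons = list(parse_gpg_status(status))
```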
def build_collection(u_collection_path, u_output_path, force):
# type: (Text, Text, bool) -> Text
"""Creates the Ansible collection artifact in a .tar.gz file.
@ -319,6 +418,8 @@ def download_collections(
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
@ -443,6 +544,7 @@ def install_collections(
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
@ -456,7 +558,7 @@ def install_collections(
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type)
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
@ -506,7 +608,8 @@ def install_collections(
else existing_collections
)
preferred_collections = {
Candidate(coll.fqcn, coll.ver, coll.src, coll.type)
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
@ -518,8 +621,10 @@ def install_collections(
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
@ -536,6 +641,17 @@ def install_collections(
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
@ -647,20 +763,31 @@ def verify_collections(
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download collection on a galaxy server for comparison
try:
# NOTE: Trigger the lookup. If found, it'll cache
# NOTE: download URL and token in artifact manager.
api_proxy.get_collection_version_metadata(
remote_collection,
)
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
if not signatures:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
@ -1079,7 +1206,19 @@ def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(b_artifact_path, b_collection_path, artifacts_manager._b_working_directory)
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
@ -1087,20 +1226,67 @@ def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
)
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path):
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring):
# type: (str, List[str], str) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
for signature in signatures:
try:
verify_file_signature(manifest_file, to_text(signature, errors='surrogate_or_strict'), keyring)
except CollectionSignatureError as error:
display.vvvv(error.report(collection_name))
failed_verify = True
if failed_verify:
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
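
The ``coll_path_parts`` slicing above can be illustrated with a concrete (hypothetical) install path:

```python
import posixpath  # use a fixed separator so the example is platform-independent

# get 'ns' and 'coll' from .../ansible_collections/ns/coll/MANIFEST.json
manifest_file = "/root/.ansible/collections/ansible_collections/ns/coll/MANIFEST.json"
coll_path_parts = manifest_file.split(posixpath.sep)
collection_name = "%s.%s" % (coll_path_parts[-3], coll_path_parts[-2])
```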
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if signatures and keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
@ -1314,6 +1500,7 @@ def _resolve_depenency_map(
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
): # type: (...) -> Dict[str, Candidate]
"""Return the resolved dependency map."""
collection_dep_resolver = build_collection_dependency_resolver(
@ -1324,6 +1511,7 @@ def _resolve_depenency_map(
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
)
try:
return collection_dep_resolver.resolve(

@ -37,6 +37,7 @@ if TYPE_CHECKING:
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
@ -62,19 +63,51 @@ class ConcreteArtifactsManager:
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
* caching all of above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True):
# type: (bytes, bool) -> None
def __init__(self, b_working_directory, validate_certs=True, keyring=None):
# type: (bytes, bool, str) -> None
"""Initialize ConcreteArtifactsManager caches and constraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: Dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: Dict[Union[Candidate, Requirement], bytes]
self._artifact_meta_cache = {} # type: Dict[bytes, Dict[str, Optional[Union[str, List[str], Dict[str, str]]]]]
self._galaxy_collection_cache = {} # type: Dict[Union[Candidate, Requirement], Tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: Dict[Candidate, Tuple[str, List[Dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: Dict[str, str]
self._keyring = keyring # type: str
@property
def keyring(self):
return self._keyring
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> Dict[str, Union[str, List[Dict[str, str]]]]
server = collection.src.api_server
try:
download_url, _dummy, _dummy = self._galaxy_collection_cache[collection]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (Union[Candidate, Requirement]) -> bytes
@ -280,14 +313,15 @@ class ConcreteArtifactsManager:
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token):
# type: (Candidate, str, str, GalaxyToken) -> None
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, List[Dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
@ -295,6 +329,7 @@ class ConcreteArtifactsManager:
cls, # type: Type[ConcreteArtifactsManager]
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
): # type: (...) -> Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
@ -309,7 +344,7 @@ class ConcreteArtifactsManager:
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(b_temp_path, validate_certs)
yield cls(b_temp_path, validate_certs, keyring=keyring)
finally:
rmtree(b_temp_path)

@ -14,7 +14,7 @@ except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Dict, Iterable, Iterator, Tuple
from typing import Dict, Iterable, Iterator, Tuple, List
from ansible.galaxy.api import CollectionVersionMetadata
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
@ -144,6 +144,8 @@ class MultiGalaxyAPIProxy:
version_metadata.download_url,
version_metadata.artifact_sha256,
api.token,
version_metadata.signatures_url,
version_metadata.signatures,
)
return version_metadata
@ -165,3 +167,39 @@ class MultiGalaxyAPIProxy:
get_collection_version_metadata(collection_candidate).
dependencies
)
def get_signatures(self, collection_candidate):
# type: (Candidate) -> List[str]
namespace = collection_candidate.namespace
name = collection_candidate.name
version = collection_candidate.ver
last_err = None
api_lookup_order = (
(collection_candidate.src, )
if isinstance(collection_candidate.src, GalaxyAPI)
else self._apis
)
for api in api_lookup_order:
try:
return api.get_collection_signatures(namespace, name, version)
except GalaxyError as api_err:
last_err = api_err
except Exception as unknown_err:
# Warn for debugging purposes, since the Galaxy server may be unexpectedly down.
last_err = unknown_err
display.warning(
"Skipping Galaxy server {server!s}. "
"Got an unexpected error when getting "
"available versions of collection {fqcn!s}: {err!s}".
format(
server=api.api_server,
fqcn=collection_candidate.fqcn,
err=to_text(unknown_err),
)
)
if last_err:
raise last_err
return []
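The fallback behaviour here (try each candidate server in order, remember the last error, and only raise once all attempts have failed) can be illustrated standalone; the helper name below is hypothetical:

```python
# Try callables in order and return the first successful result; if every
# attempt raises, re-raise the last error, mirroring get_signatures above.
def first_success(attempts):
    last_err = None
    for attempt in attempts:
        try:
            return attempt()
        except Exception as err:
            last_err = err
    if last_err is not None:
        raise last_err
    return []

def broken_server():
    raise RuntimeError("server down")

result = first_success([broken_server, lambda: ['-----BEGIN PGP SIGNATURE-----']])
print(result)
```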

@ -0,0 +1,291 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2022, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Signature verification helpers."""
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.urls import open_url
import contextlib
import os
import subprocess
import sys
from dataclasses import dataclass, fields as dc_fields
from functools import partial
from urllib.error import HTTPError, URLError
try:
# NOTE: It's in Python 3 stdlib and can be installed on Python 2
# NOTE: via `pip install typing`. Unnecessary at runtime.
# NOTE: `TYPE_CHECKING` is True during mypy-typecheck-time.
from typing import TYPE_CHECKING
except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from ansible.utils.display import Display
from typing import Tuple, Iterator, Optional
IS_PY310_PLUS = sys.version_info[:2] >= (3, 10)
frozen_dataclass = partial(dataclass, frozen=True, **({'slots': True} if IS_PY310_PLUS else {}))
def get_signature_from_source(source, display=None): # type: (str, Optional[Display]) -> str
if display is not None:
display.vvvv(f"Using signature at {source}")
try:
with open_url(
source,
http_agent=user_agent(),
validate_certs=True,
follow_redirects='safe'
) as resp:
signature = resp.read()
except (HTTPError, URLError) as e:
raise AnsibleError(
f"Failed to get signature for collection verification from '{source}': {e}"
) from e
return signature
def run_gpg_verify(
manifest_file, # type: str
signature, # type: str
keyring, # type: str
display, # type: Display
): # type: (...) -> Tuple[str, int]
status_fd_read, status_fd_write = os.pipe()
# running the gpg command will create the keyring if it does not exist
remove_keybox = not os.path.exists(keyring)
cmd = [
'gpg',
f'--status-fd={status_fd_write}',
'--verify',
'--batch',
'--no-tty',
'--no-default-keyring',
f'--keyring={keyring}',
'-',
manifest_file,
]
cmd_str = ' '.join(cmd)
display.vvvv(f"Running command '{cmd_str}'")
try:
p = subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
pass_fds=(status_fd_write,),
encoding='utf8',
)
except (FileNotFoundError, subprocess.SubprocessError) as err:
raise AnsibleError(
f"Failed during GnuPG verification with command '{cmd_str}': {err}"
) from err
else:
stdout, stderr = p.communicate(input=signature)
finally:
os.close(status_fd_write)
if remove_keybox:
with contextlib.suppress(OSError):
os.remove(keyring)
with os.fdopen(status_fd_read) as f:
stdout = f.read()
display.vvvv(
f"stdout: \n{stdout}\nstderr: \n{stderr}\n(exit code {p.returncode})"
)
return stdout, p.returncode
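The `--status-fd` plumbing above relies on handing the pipe's write end to the child process via `pass_fds`; a minimal stand-in (using a Python child instead of gpg, on a POSIX system) shows the pattern:

```python
import os
import subprocess
import sys

# Create a pipe, let the child write machine-readable status lines to the
# write end, and read them back from the read end in the parent.
status_fd_read, status_fd_write = os.pipe()
child_code = f"import os; os.write({status_fd_write}, b'[GNUPG:] GOODSIG ABCDEF Example User\\n')"
p = subprocess.Popen([sys.executable, '-c', child_code], pass_fds=(status_fd_write,))
p.wait()
os.close(status_fd_write)
with os.fdopen(status_fd_read) as f:
    status_out = f.read()
print(status_out.strip())
```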
def parse_gpg_errors(status_out): # type: (str) -> Iterator[GpgBaseError]
for line in status_out.splitlines():
if not line:
continue
try:
_dummy, status, remainder = line.split(maxsplit=2)
except ValueError:
_dummy, status = line.split(maxsplit=1)
remainder = None
try:
cls = GPG_ERROR_MAP[status]
except KeyError:
continue
fields = [status]
if remainder:
fields.extend(
remainder.split(
None,
len(dc_fields(cls)) - 2
)
)
yield cls(*fields)
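A status line from gpg can be traced through this parser with a cut-down error map (only `BADSIG`; the dataclass is a simplified stand-in for `GpgBadSig` below):

```python
from dataclasses import dataclass, fields as dc_fields

@dataclass(frozen=True)
class BadSig:
    status: str
    keyid: str
    username: str

ERROR_MAP = {'BADSIG': BadSig}

def parse_status_line(line):
    # "[GNUPG:] BADSIG <keyid> <username>" -> BadSig instance
    _prefix, status, remainder = line.split(maxsplit=2)
    cls = ERROR_MAP.get(status)
    if cls is None:
        return None
    fields = [status] + remainder.split(None, len(dc_fields(cls)) - 2)
    return cls(*fields)

err = parse_status_line("[GNUPG:] BADSIG ABCDEF0123456789 Example User <user@example.com>")
print(err.keyid, '/', err.username)
```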
@frozen_dataclass
class GpgBaseError(Exception):
status: str
@classmethod
def get_gpg_error_description(cls) -> str:
"""Return the current class description."""
return ' '.join(cls.__doc__.split())
def __post_init__(self):
for field in dc_fields(self):
super(GpgBaseError, self).__setattr__(field.name, field.type(getattr(self, field.name)))
@frozen_dataclass
class GpgExpSig(GpgBaseError):
"""The signature with the keyid is good, but the signature is expired."""
keyid: str
username: str
@frozen_dataclass
class GpgExpKeySig(GpgBaseError):
"""The signature with the keyid is good, but the signature was made by an expired key."""
keyid: str
username: str
@frozen_dataclass
class GpgRevKeySig(GpgBaseError):
"""The signature with the keyid is good, but the signature was made by a revoked key."""
keyid: str
username: str
@frozen_dataclass
class GpgBadSig(GpgBaseError):
"""The signature with the keyid has not been verified okay."""
keyid: str
username: str
@frozen_dataclass
class GpgErrSig(GpgBaseError):
"""It was not possible to check the signature. This may be caused by
a missing public key or an unsupported algorithm. An RC of 4
indicates an unknown algorithm, a 9 indicates a missing public
key.
"""
keyid: str
pkalgo: int
hashalgo: int
sig_class: str
time: int
rc: int
fpr: str
@frozen_dataclass
class GpgNoPubkey(GpgBaseError):
"""The public key is not available."""
keyid: str
@frozen_dataclass
class GpgMissingPassPhrase(GpgBaseError):
"""No passphrase was supplied."""
@frozen_dataclass
class GpgBadPassphrase(GpgBaseError):
"""The supplied passphrase was wrong or not given."""
keyid: str
@frozen_dataclass
class GpgNoData(GpgBaseError):
"""No data has been found. Codes for WHAT are:
- 1 :: No armored data.
- 2 :: Expected a packet but did not find one.
- 3 :: Invalid packet found, this may indicate a non OpenPGP
message.
- 4 :: Signature expected but not found.
"""
what: str
@frozen_dataclass
class GpgUnexpected(GpgBaseError):
"""Unexpected data has been encountered."""
what: str
@frozen_dataclass
class GpgError(GpgBaseError):
"""This is a generic error status message, it might be followed by error location specific data."""
location: str
code: int
more: str = ""
@frozen_dataclass
class GpgFailure(GpgBaseError):
"""This is the counterpart to SUCCESS and used to indicate a program failure."""
location: str
code: int
@frozen_dataclass
class GpgBadArmor(GpgBaseError):
"""The ASCII armor is corrupted."""
@frozen_dataclass
class GpgKeyExpired(GpgBaseError):
"""The key has expired."""
timestamp: int
@frozen_dataclass
class GpgKeyRevoked(GpgBaseError):
"""The used key has been revoked by its owner."""
@frozen_dataclass
class GpgNoSecKey(GpgBaseError):
"""The secret key is not available."""
keyid: str
GPG_ERROR_MAP = {
'EXPSIG': GpgExpSig,
'EXPKEYSIG': GpgExpKeySig,
'REVKEYSIG': GpgRevKeySig,
'BADSIG': GpgBadSig,
'ERRSIG': GpgErrSig,
'NO_PUBKEY': GpgNoPubkey,
'MISSING_PASSPHRASE': GpgMissingPassPhrase,
'BAD_PASSPHRASE': GpgBadPassphrase,
'NODATA': GpgNoData,
'UNEXPECTED': GpgUnexpected,
'ERROR': GpgError,
'FAILURE': GpgFailure,
'BADARMOR': GpgBadArmor,
'KEYEXPIRED': GpgKeyExpired,
'KEYREVOKED': GpgKeyRevoked,
'NO_SECKEY': GpgNoSecKey,
}

@ -36,6 +36,7 @@ def build_collection_dependency_resolver(
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
include_signatures=True, # type: bool
): # type: (...) -> CollectionDependencyResolver
"""Return a collection dependency resolver.
@ -51,6 +52,7 @@ def build_collection_dependency_resolver(
with_deps=with_deps,
with_pre_releases=with_pre_releases,
upgrade=upgrade,
include_signatures=include_signatures,
),
CollectionDependencyReporter(),
)

@ -7,11 +7,12 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
from collections import namedtuple
from collections.abc import MutableSequence
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
try:
from typing import TYPE_CHECKING
@ -19,7 +20,7 @@ except ImportError:
TYPE_CHECKING = False
if TYPE_CHECKING:
from typing import Tuple, Type, TypeVar
from typing import Type, TypeVar
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
@ -29,12 +30,12 @@ if TYPE_CHECKING:
'_ComputedReqKindsMixin',
)
import yaml
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import raise_from
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
@ -42,11 +43,70 @@ from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
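For illustration, the same identity checks can be expressed without Ansible's `ArgumentSpecValidator`; the helper below is a simplified stand-in, not the actual validator:

```python
# Check that the stored source metadata matches the installed collection's
# identity, returning a list of error messages (empty means valid).
def validate_source_info(metadata, namespace, name, version):
    errors = []
    expected = {
        'format_version': '1.0.0',
        'namespace': namespace,
        'name': name,
        'version': version,
    }
    for key, value in expected.items():
        if metadata.get(key) != value:
            errors.append(f"{key!r} must be {value!r}, got {metadata.get(key)!r}")
    return errors

good = {'format_version': '1.0.0', 'namespace': 'ns', 'name': 'coll', 'version': '1.0.0'}
print(validate_source_info(good, 'ns', 'coll', '1.0.0'))  # []
```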
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
@ -112,6 +172,25 @@ def _is_concrete_artifact_pointer(tested_str):
class _ComputedReqKindsMixin:
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(self.src, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_path = to_bytes(os.path.sep).join(b_path_parts)
info_path = self.construct_galaxy_info_path(b_path)
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: Type[Collection]
@ -158,11 +237,11 @@ class _ComputedReqKindsMixin:
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir')
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(req_name, req_version, dir_path, 'dir')
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
@ -179,10 +258,10 @@ class _ComputedReqKindsMixin:
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
return cls(req_name, '*', dir_path, 'dir') # type: ignore[call-arg]
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
@classmethod
def from_string(cls, collection_input, artifacts_manager):
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
req = {}
if _is_concrete_artifact_pointer(collection_input):
# Arg is a file path or URL to a collection
@ -191,6 +270,7 @@ class _ComputedReqKindsMixin:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
@ -201,6 +281,16 @@ class _ComputedReqKindsMixin:
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
@ -312,7 +402,7 @@ class _ComputedReqKindsMixin:
format(not_url=req_source.api_server),
)
tmp_inst_req = cls(req_name, req_version, req_source, req_type)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
@ -323,6 +413,7 @@ class _ComputedReqKindsMixin:
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
@ -346,6 +437,26 @@ class _ComputedReqKindsMixin:
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_metadata_dir):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
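The resulting on-disk layout can be sketched with plain `os.path` calls (directory and file names follow the comments above):

```python
import os

# collections/ansible_collections/<ns>.<name>-<ver>.info/GALAXY.yml
def galaxy_info_path(metadata_dir, namespace, name, version):
    dir_name = f"{namespace}.{name}-{version}.info"
    return os.path.join(metadata_dir, dir_name, "GALAXY.yml")

path = galaxy_info_path(os.path.join("collections", "ansible_collections"), "ns", "coll", "1.0.0")
print(path)
```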
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@ -412,16 +523,40 @@ class _ComputedReqKindsMixin:
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources'))
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures'))
class Requirement(
_ComputedReqKindsMixin,
namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type')),
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type'))
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()

@ -20,6 +20,7 @@ if TYPE_CHECKING:
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import get_signature_from_source
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate,
Requirement,
@ -31,9 +32,38 @@ from ansible.galaxy.dependency_resolution.versioning import (
from ansible.module_utils.six import string_types
from ansible.utils.version import SemanticVersion
from collections.abc import Set
from resolvelib import AbstractProvider
class PinnedCandidateRequests(Set):
"""Custom set class to store Candidate objects. Excludes the 'signatures' attribute when determining if a Candidate instance is in the set."""
CANDIDATE_ATTRS = ('fqcn', 'ver', 'src', 'type')
def __init__(self, candidates):
self._candidates = set(candidates)
def __iter__(self):
return iter(self._candidates)
def __contains__(self, value):
if not isinstance(value, Candidate):
raise ValueError(f"Expected a Candidate object but got {value!r}")
for candidate in self._candidates:
# Compare Candidate attributes excluding "signatures" since it is
# unrelated to whether or not a matching Candidate is user-requested.
# Candidate objects in the set are not expected to have signatures.
for attr in PinnedCandidateRequests.CANDIDATE_ATTRS:
if getattr(value, attr) != getattr(candidate, attr):
break
else:
return True
return False
def __len__(self):
return len(self._candidates)
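Membership that ignores `signatures` can be demonstrated with a plain namedtuple standing in for `Candidate`:

```python
from collections import namedtuple

Candidate = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures'))

class PinnedSet:
    # Compare every attribute except 'signatures', as in the class above.
    ATTRS = ('fqcn', 'ver', 'src', 'type')

    def __init__(self, candidates):
        self._candidates = set(candidates)

    def __contains__(self, value):
        return any(
            all(getattr(value, attr) == getattr(c, attr) for attr in self.ATTRS)
            for c in self._candidates
        )

pinned = PinnedSet({Candidate('ns.coll', '1.0.0', None, 'galaxy', None)})
signed = Candidate('ns.coll', '1.0.0', None, 'galaxy', frozenset({'sig'}))
print(signed in pinned)  # True
```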
class CollectionDependencyProvider(AbstractProvider):
"""Delegate providing a requirement interface for the resolver."""
@ -46,6 +76,7 @@ class CollectionDependencyProvider(AbstractProvider):
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
include_signatures=True, # type: bool
): # type: (...) -> None
r"""Initialize helper attributes.
@ -61,14 +92,25 @@ class CollectionDependencyProvider(AbstractProvider):
:param with_pre_releases: A flag specifying whether the \
resolver should skip pre-releases. \
Off by default.
:param upgrade: A flag specifying whether the resolver should \
skip matching versions that are not upgrades. \
Off by default.
:param include_signatures: A flag to determine whether to retrieve \
signatures from the Galaxy APIs and \
include signatures in matching Candidates. \
On by default.
"""
self._api_proxy = apis
self._make_req_from_dict = functools.partial(
Requirement.from_requirement_dict,
art_mgr=concrete_artifacts_manager,
)
self._pinned_candidate_requests = set(
Candidate(req.fqcn, req.ver, req.src, req.type)
self._pinned_candidate_requests = PinnedCandidateRequests(
# NOTE: User-provided signatures are supplemental, so signatures
# NOTE: are not used to determine if a candidate is user-requested
Candidate(req.fqcn, req.ver, req.src, req.type, None)
for req in (user_requirements or ())
if req.is_concrete_artifact or (
req.ver != '*' and
@ -79,6 +121,7 @@ class CollectionDependencyProvider(AbstractProvider):
self._with_deps = with_deps
self._with_pre_releases = with_pre_releases
self._upgrade = upgrade
self._include_signatures = include_signatures
def _is_user_requested(self, candidate): # type: (Candidate) -> bool
"""Check if the candidate is requested by the user."""
@ -107,8 +150,11 @@ class CollectionDependencyProvider(AbstractProvider):
# NOTE: with the `source:` set, it'll match the first check
# NOTE: but it still can have entries with `src=None` so this
# NOTE: normalized check is still necessary.
# NOTE:
# NOTE: User-provided signatures are supplemental, so signatures
# NOTE: are not used to determine if a candidate is user-requested
return Candidate(
candidate.fqcn, candidate.ver, None, candidate.type,
candidate.fqcn, candidate.ver, None, candidate.type, None
) in self._pinned_candidate_requests
return False
@ -252,23 +298,41 @@ class CollectionDependencyProvider(AbstractProvider):
raise ValueError(version_err) from ex
return [
Candidate(fqcn, version, _none_src_server, first_req.type)
Candidate(fqcn, version, _none_src_server, first_req.type, None)
for version, _none_src_server in coll_versions
]
latest_matches = sorted(
{
candidate for candidate in (
Candidate(fqcn, version, src_server, 'galaxy')
for version, src_server in coll_versions
)
if all(self.is_satisfied_by(requirement, candidate) for requirement in requirements)
latest_matches = []
signatures = []
extra_signature_sources = []
for version, src_server in coll_versions:
tmp_candidate = Candidate(fqcn, version, src_server, 'galaxy', None)
unsatisfied = False
for requirement in requirements:
unsatisfied |= not self.is_satisfied_by(requirement, tmp_candidate)
# FIXME
# if all(self.is_satisfied_by(requirement, candidate) and (
# requirement.src is None or # if this is true for some candidates but not all it will break key param - Nonetype can't be compared to str
# requirement.src == candidate.src
# ))
},
# unsatisfied |= not self.is_satisfied_by(requirement, tmp_candidate) or not (
# requirement.src is None or # if this is true for some candidates but not all it will break key param - Nonetype can't be compared to str
# or requirement.src == candidate.src
# )
if unsatisfied:
break
if not self._include_signatures:
continue
extra_signature_sources.extend(requirement.signature_sources or [])
if not unsatisfied:
if self._include_signatures:
signatures = src_server.get_collection_signatures(first_req.namespace, first_req.name, version)
for extra_source in extra_signature_sources:
signatures.append(get_signature_from_source(extra_source))
latest_matches.append(
Candidate(fqcn, version, src_server, 'galaxy', frozenset(signatures))
)
latest_matches.sort(
key=lambda candidate: (
SemanticVersion(candidate.ver), candidate.src,
),

@ -78,6 +78,8 @@ RETURN = '''
'''
import os
import subprocess
import tarfile
import tempfile
import yaml
@ -141,6 +143,21 @@ def publish_collection(module, collection):
'stderr': stderr,
}
if module.params['signature_dir'] is not None:
# To test user-provided signatures, we need to sign the MANIFEST.json before publishing
# Extract the tarfile to sign the MANIFEST.json
with tarfile.open(collection_path, mode='r') as collection_tar:
collection_tar.extractall(path=os.path.join(collection_dir, '%s-%s-%s' % (namespace, name, version)))
manifest_path = os.path.join(collection_dir, '%s-%s-%s' % (namespace, name, version), 'MANIFEST.json')
signature_path = os.path.join(module.params['signature_dir'], '%s-%s-%s-MANIFEST.json.asc' % (namespace, name, version))
sign_manifest(signature_path, manifest_path, module, result)
# Create the tarfile containing the signed MANIFEST.json
with tarfile.open(collection_path, "w:gz") as tar:
tar.add(os.path.join(collection_dir, '%s-%s-%s' % (namespace, name, version)), arcname=os.path.sep)
publish_args = ['ansible-galaxy', 'collection', 'publish', collection_path, '--server', module.params['server']]
if module.params['token']:
publish_args.extend(['--token', module.params['token']])
@ -155,6 +172,49 @@ def publish_collection(module, collection):
return result
def sign_manifest(signature_path, manifest_path, module, collection_setup_result):
collection_setup_result['gpg_detach_sign'] = {'signature_path': signature_path}
status_fd_read, status_fd_write = os.pipe()
gpg_cmd = [
"gpg",
"--batch",
"--pinentry-mode",
"loopback",
"--yes",
"--passphrase",
"SECRET",
"--homedir",
module.params['signature_dir'],
"--detach-sign",
"--armor",
"--output",
signature_path,
manifest_path,
]
try:
p = subprocess.Popen(
gpg_cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
pass_fds=(status_fd_write,),
encoding='utf8',
)
except (FileNotFoundError, subprocess.SubprocessError) as err:
collection_setup_result['gpg_detach_sign']['error'] = "Failed during GnuPG verification with command '{gpg_cmd}': {err}".format(
gpg_cmd=gpg_cmd, err=err
)
else:
stdout, stderr = p.communicate()
collection_setup_result['gpg_detach_sign']['stdout'] = stdout
if stderr:
error = "Failed during GnuPG verification with command '{gpg_cmd}':\n{stderr}".format(gpg_cmd=gpg_cmd, stderr=stderr)
collection_setup_result['gpg_detach_sign']['error'] = error
finally:
os.close(status_fd_write)
def run_module():
module_args = dict(
server=dict(type='str', required=True),
@ -171,6 +231,7 @@ def run_module():
use_symlink=dict(type='bool', default=False),
),
),
signature_dir=dict(type='path', default=None),
)
module = AnsibleModule(

@ -352,6 +352,19 @@
- skip.me
dest: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
- name: install roles from requirements file with collection-only keyring option
command: ansible-galaxy role install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }}
vars:
req_file: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_opt
- assert:
that:
- invalid_opt is failed
- "'unrecognized arguments: --keyring' in invalid_opt.stderr"
# Need to run with -vvv to validate the roles will be skipped msg
- name: install collections only with requirements-with-role.yml - {{ test_name }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml' -s '{{ test_name }}' -vvv
@ -410,6 +423,137 @@
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: uninstall collections for next requirements file test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: rewrite requirements file with collections and signatures
copy:
content: |
collections:
- name: namespace7.name
version: "1.0.0"
signatures:
- "file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
- "{{ not_mine }}"
- "{{ also_not_mine }}"
- namespace8.name
- name: namespace9.name
signatures:
- "file://{{ gpg_homedir }}/namespace9-name-1.0.0-MANIFEST.json.asc"
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
vars:
not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
also_not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
- name: install collection with mutually exclusive options
command: ansible-galaxy collection install -r {{ req_file }} -s {{ test_name }} {{ cli_signature }}
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
# --signature is an 'ansible-galaxy collection install' option, but it is mutually exclusive with -r
cli_signature: "--signature file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: mutually_exclusive_opts
- assert:
that:
- mutually_exclusive_opts is failed
- expected_error in actual_error
vars:
expected_error: >-
The --signatures option and --requirements-file are mutually exclusive.
Use the --signatures with positional collection_name args or provide a
'signatures' key for requirements in the --requirements-file.
actual_error: "{{ mutually_exclusive_opts.stderr }}"
- name: install a collection with user-supplied signatures for verification but no keyring
command: ansible-galaxy collection install namespace1.name1:1.0.0 {{ cli_signature }}
vars:
cli_signature: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: required_together
- assert:
that:
- required_together is failed
- '"ERROR! Signatures were provided to verify namespace1.name1 but no keyring was configured." in required_together.stderr'
- name: install collections with ansible-galaxy install -r with invalid signatures - {{ test_name }}
# Note that --keyring is a valid option for 'ansible-galaxy install -r ...', not just 'ansible-galaxy collection ...'
command: ansible-galaxy install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }} {{ galaxy_verbosity }}
register: install_req
ignore_errors: yes
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: assert invalid signature is fatal with ansible-galaxy install - {{ test_name }}
assert:
that:
- install_req is failed
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed" in install_req.stderr'
# The other collections shouldn't be installed because they're listed
# after the failing collection and --ignore-errors was not provided
- '"Installing ''namespace8.name:1.0.0'' to" not in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" not in install_req.stdout'
- name: install collections with ansible-galaxy install and --ignore-errors - {{ test_name }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }}
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }} --ignore-errors"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_name }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install --ignore-errors - {{ test_name }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." in install_stderr'
- '"Failed to install collection namespace7.name:1.0.0 but skipping due to --ignore-errors being set." in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
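The assertions above match substrings only after stripping ANSI color sequences and collapsing newlines with a chain of `regex_replace` filters. The same normalization can be sketched in Python (a hypothetical helper mirroring that filter chain, not part of this change):

```python
import re

# Strip ANSI color escapes (e.g. "\x1b[1;31m", "\x1b[0m") and collapse all
# whitespace runs to single spaces, so multi-line wrapped error text can be
# matched as one line.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def normalize_output(text):
    without_color = ANSI_RE.sub("", text)
    return re.sub(r"\s+", " ", without_color).strip()
```
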
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
# Uncomment once pulp container is at pulp>=0.5.0
#- name: install cache.cache at the current latest version
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' -vvv
@ -528,6 +672,164 @@
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
- name: install collection with signature with invalid keyring
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_option }} {{ keyring_option }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_option: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring_option: '--keyring {{ gpg_homedir }}/i_do_not_exist.kbx'
ignore_errors: yes
register: keyring_error
- assert:
that:
- keyring_error is failed
- expected_errors[0] in actual_error
- expected_errors[1] in actual_error
- expected_errors[2] in actual_error
- unexpected_warning not in actual_warning
vars:
keyring: "{{ gpg_homedir }}/i_do_not_exist.kbx"
expected_errors:
- "Signature verification failed for 'namespace1.name1' (return code 2):"
- "* The public key is not available."
- >-
* It was not possible to check the signature. This may be caused
by a missing public key or an unsupported algorithm. A RC of 4
indicates unknown algorithm, a 9 indicates a missing public key.
unexpected_warning: >-
The GnuPG keyring used for collection signature
verification was not configured but signatures were
provided by the Galaxy server to verify authenticity.
Configure a keyring for ansible-galaxy to use
or disable signature verification.
Skipping signature verification.
actual_warning: "{{ keyring_error.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
stdout_no_color: "{{ keyring_error.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
actual_error: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace('  ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
# TODO: Uncomment once signatures are provided by pulp-galaxy-ng
#- name: install collection with signature provided by Galaxy server (no keyring)
# command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
# ignore_errors: yes
# register: keyring_warning
#
#- name: assert a warning was given but signature verification did not occur without configuring the keyring
# assert:
# that:
# - keyring_warning is not failed
#    - '"Installing ''namespace1.name1:1.0.9'' to" in keyring_warning.stdout'
# # TODO: Don't just check the stdout, make sure the collection was installed.
# - expected_warning in actual_warning
# vars:
# expected_warning: >-
# The GnuPG keyring used for collection signature
# verification was not configured but signatures were
# provided by the Galaxy server to verify authenticity.
# Configure a keyring for ansible-galaxy to use
# or disable signature verification.
# Skipping signature verification.
# actual_warning: "{{ keyring_warning.stderr | regex_replace(reset_color) | regex_replace(color) | regex_replace('\\n', ' ') }}"
# reset_color: '\x1b\[0m'
# color: '\x1b\[[0-9];[0-9]{2}m'
- name: install simple collection from first accessible server with valid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_signature
- assert:
that:
- invalid_signature is failed
- "'Not installing namespace1.name1 because GnuPG signature verification failed.' in invalid_signature.stderr"
- expected_errors[0] in install_stdout
- expected_errors[1] in install_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
stdout_no_color: "{{ invalid_signature.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
install_stdout: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace('  ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: validate collection directory was not created
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
register: collection_dir
check_mode: yes
failed_when: collection_dir is changed
- name: disable signature verification and install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }} --disable-gpg-verify"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: ignore_invalid_signature
- assert:
that:
- ignore_invalid_signature is success
- '"Installing ''namespace1.name1:1.0.9'' to" in ignore_invalid_signature.stdout'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: download collections with pre-release dep - {{ test_name }}
command: ansible-galaxy collection download dep_with_beta.parent namespace1.name1:1.1.0-beta.1 -p '{{ galaxy_dir }}/scratch'

@ -70,6 +70,8 @@
server: '{{ galaxy_ng_server }}'
v3: true
- include_tasks: setup_gpg.yml
# We use a module for this to speed up the test.
# For pulp interactions, we only upload to galaxy_ng which shares
# the same repo and distribution with pulp_ansible
@ -79,6 +81,7 @@
setup_collections:
server: galaxy_ng
collections: '{{ collection_list }}'
signature_dir: '{{ gpg_homedir }}'
environment:
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
@ -174,6 +177,7 @@
args:
apply:
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
vars:
test_api_fallback: 'pulp_v2'

@ -0,0 +1,14 @@
- name: generate revocation certificate
expect:
command: "gpg --homedir {{ gpg_homedir }} --output {{ gpg_homedir }}/revoke.asc --gen-revoke {{ fingerprint }}"
responses:
"Create a revocation certificate for this key": "y"
"Please select the reason for the revocation": "0"
"Enter an optional description": ""
"Is this okay": "y"
- name: revoke key
command: "gpg --no-tty --homedir {{ gpg_homedir }} --import {{ gpg_homedir }}/revoke.asc"
- name: list keys for debugging
command: "gpg --no-tty --homedir {{ gpg_homedir }} --list-keys {{ gpg_user }}"

@ -0,0 +1,24 @@
- name: create empty gpg homedir
file:
state: "{{ item }}"
path: "{{ gpg_homedir }}"
mode: 0700
loop:
- absent
- directory
- name: get username for generating key
command: whoami
register: user
- name: generate key for user with gpg
command: "gpg --no-tty --homedir {{ gpg_homedir }} --passphrase SECRET --pinentry-mode loopback --quick-gen-key {{ user.stdout }} default default"
- name: list gpg keys for user
command: "gpg --no-tty --homedir {{ gpg_homedir }} --list-keys {{ user.stdout }}"
register: gpg_list_keys
- name: save gpg user and fingerprint of new key
set_fact:
gpg_user: "{{ user.stdout }}"
fingerprint: "{{ gpg_list_keys.stdout_lines[1] | trim }}"
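The `set_fact` above extracts the fingerprint by trimming `stdout_lines[1]`, which relies on gpg's human-readable layout. A sturdier approach parses gpg's machine-readable `--with-colons` output, where the `fpr` record carries the fingerprint in its tenth field. A sketch of that parsing (a hypothetical helper, not part of this change):

```python
# Parse 'gpg --list-keys --with-colons' output and return the first key
# fingerprint found. Colon-format records are stable across gpg versions,
# unlike the human-readable listing.
def extract_fingerprint(colons_output):
    for line in colons_output.splitlines():
        fields = line.split(":")
        if fields[0] == "fpr":
            return fields[9]  # field 10 of an 'fpr' record is the fingerprint
    return None
```
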

@ -26,13 +26,9 @@
- name: install the collection from the server
command: ansible-galaxy collection install ansible_test.verify:1.0.0 -s {{ test_api_fallback }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
- name: verify the collection against the first valid server
command: ansible-galaxy collection verify ansible_test.verify:1.0.0 -vvv {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
register: verify
- assert:
@ -43,8 +39,6 @@
- name: verify the installed collection against the server
command: ansible-galaxy collection verify ansible_test.verify:1.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
register: verify
- assert:
@ -54,11 +48,11 @@
- name: verify the installed collection against the server, with unspecified version in CLI
command: ansible-galaxy collection verify ansible_test.verify -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
- name: verify a collection that doesn't appear to be installed
command: ansible-galaxy collection verify ansible_test.verify:1.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/nonexistent_dir'
register: verify
failed_when: verify.rc == 0
@ -95,8 +89,6 @@
- name: verify a version of a collection that isn't installed
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
register: verify
failed_when: verify.rc == 0
@ -107,13 +99,9 @@
- name: install the new version from the server
command: ansible-galaxy collection install ansible_test.verify:2.0.0 --force -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
- name: verify the installed collection against the server
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
register: verify
- assert:
@ -159,8 +147,6 @@
- name: test verifying checksums of the modified collection
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
register: verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
failed_when: verify.rc == 0
- assert:
@ -179,8 +165,6 @@
- name: ensure a modified FILES.json is validated
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
register: verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
failed_when: verify.rc == 0
- assert:
@ -203,8 +187,6 @@
- name: ensure the MANIFEST.json is validated against the uncorrupted file from the server
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
register: verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
failed_when: verify.rc == 0
- assert:
@ -233,8 +215,6 @@
- name: test we only verify collections containing a MANIFEST.json with the version on the server
command: ansible-galaxy collection verify ansible_test.verify:2.0.0 -s {{ test_name }} {{ galaxy_verbosity }}
register: verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
failed_when: verify.rc == 0
- assert:
@ -255,13 +235,9 @@
- name: force-install from local artifact
command: ansible-galaxy collection install '{{ galaxy_dir }}/ansible_test-verify-3.0.0.tar.gz' --force
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
- name: verify locally only, no download or server manifest hash check
command: ansible-galaxy collection verify --offline ansible_test.verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
register: verify
- assert:
@ -278,8 +254,6 @@
- name: verify modified collection locally-only (should fail)
command: ansible-galaxy collection verify --offline ansible_test.verify
register: verify
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}'
failed_when: verify.rc == 0
- assert:
@ -287,3 +261,80 @@
- verify.rc != 0
- "'Collection ansible_test.verify contains modified content in the following files:' in verify.stdout"
- "'plugins/modules/test_module.py' in verify.stdout"
# TODO: add a test for offline Galaxy signature metadata
- name: install a collection that was signed by setup_collections
command: ansible-galaxy collection install namespace1.name1:1.0.0
- name: verify the installed collection with a detached signature
command: ansible-galaxy collection verify namespace1.name1:1.0.0 {{ galaxy_verbosity }} {{ signature_options }}
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: verify
- assert:
that:
- verify.rc == 0
- name: verify the installed collection with invalid detached signature
command: ansible-galaxy collection verify namespace1.name1:1.0.0 {{ galaxy_verbosity }} {{ signature_options }}
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: verify
ignore_errors: yes
- assert:
that:
- verify.rc != 0
- '"Signature verification failed for ''namespace1.name1'' (return code 1)" in verify.stdout'
- expected_errors[0] in verify_stdout
- expected_errors[1] in verify_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
stdout_no_color: "{{ verify.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
verify_stdout: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace('  ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- include_tasks: revoke_gpg_key.yml
- name: verify the installed collection with a revoked detached signature
command: ansible-galaxy collection verify namespace1.name1:1.0.0 {{ galaxy_verbosity }} {{ signature_options }}
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: verify
ignore_errors: yes
- assert:
that:
- verify.rc != 0
- '"Signature verification failed for ''namespace1.name1'' (return code 0)" in verify.stdout'
- expected_errors[0] in verify_stdout
- expected_errors[1] in verify_stdout
vars:
expected_errors:
- "* The used key has been revoked by its owner."
- "* The signature with the keyid is good, but the signature was made by a revoked key."
stdout_no_color: "{{ verify.stdout | regex_replace(reset_color) | regex_replace(color) }}"
# Remove formatting from the reason so it's one line
verify_stdout: "{{ stdout_no_color | regex_replace('\"') | regex_replace('\\n') | regex_replace('  ', ' ') }}"
reset_color: '\x1b\[0m'
color: '\x1b\[[0-9];[0-9]{2}m'
- name: empty installed collections
file:
path: "{{ galaxy_dir }}/ansible_collections"
state: "{{ item }}"
loop:
- absent
- directory
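The invalid-signature and revoked-key cases above assert on human-readable reasons derived from GnuPG's machine-readable status output (`GOODSIG`, `ERRSIG`, `REVKEYSIG`, and similar keywords on `--status-fd` lines). A minimal, hypothetical sketch of that classification, not the actual ansible implementation:

```python
# Classify 'gpg --verify --status-fd 1' output the way the tests above
# expect: a GOODSIG alone is not sufficient, while ERRSIG, REVKEYSIG, or
# NO_PUBKEY mark the verification as failed with an explanatory reason.
def classify_gpg_status(status_output):
    """Return (ok, reasons) from gpg status-fd output lines."""
    ok = True
    reasons = []
    for line in status_output.splitlines():
        if not line.startswith("[GNUPG:] "):
            continue
        keyword = line.split()[1]
        if keyword == "ERRSIG":
            ok = False
            reasons.append("The signature with the keyid has not been verified okay.")
        elif keyword == "REVKEYSIG":
            ok = False
            reasons.append("The signature was made by a revoked key.")
        elif keyword == "NO_PUBKEY":
            ok = False
            reasons.append("The public key is not available.")
    return ok, reasons
```
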

@ -1,5 +1,7 @@
galaxy_verbosity: "{{ '' if not ansible_verbosity else '-' ~ ('v' * ansible_verbosity) }}"
gpg_homedir: "{{ galaxy_dir }}/gpg"
pulp_repositories:
- published
- secondary

@ -8,6 +8,7 @@ lib/ansible/cli/galaxy.py import-3.8 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.8 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.8 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.8 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.8 # circular imports
@ -19,6 +20,7 @@ lib/ansible/cli/galaxy.py import-3.9 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.9 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.9 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.9 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.9 # circular imports
@ -30,6 +32,7 @@ lib/ansible/cli/galaxy.py import-3.10 # unguarded indirect resolvelib import
lib/ansible/galaxy/collection/__init__.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/concrete_artifact_manager.py import-3.10 # unguarded resolvelib import
lib/ansible/galaxy/collection/galaxy_api_proxy.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/collection/gpg.py import-3.10 # unguarded resolvelib imports
lib/ansible/galaxy/dependency_resolution/__init__.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/dataclasses.py import-3.10 # circular imports
lib/ansible/galaxy/dependency_resolution/errors.py import-3.10 # circular imports

@ -14,7 +14,7 @@ from ansible.galaxy.dependency_resolution.dataclasses import Requirement
@pytest.fixture
def collection_object():
def _cobj(fqcn='sandwiches.ham'):
return Requirement(fqcn, '1.5.0', None, 'galaxy')
return Requirement(fqcn, '1.5.0', None, 'galaxy', None)
return _cobj

@ -57,12 +57,14 @@ def mock_collection_objects(mocker):
'1.5.0',
None,
'dir',
None,
),
(
'sandwiches.reuben',
'2.5.0',
None,
'dir',
None,
),
)
@ -72,12 +74,14 @@ def mock_collection_objects(mocker):
'1.0.0',
None,
'dir',
None,
),
(
'sandwiches.ham',
'1.0.0',
None,
'dir',
None,
),
)
@ -97,12 +101,14 @@ def mock_from_path(mocker):
'1.5.0',
None,
'dir',
None,
),
(
'sandwiches.pbj',
'1.0.0',
None,
'dir',
None,
),
),
'sandwiches.ham': (
@ -111,6 +117,7 @@ def mock_from_path(mocker):
'1.0.0',
None,
'dir',
None,
),
),
}

@ -13,11 +13,11 @@ from ansible.galaxy.dependency_resolution.dataclasses import Requirement
@pytest.fixture
def collection_objects():
collection_ham = Requirement('sandwiches.ham', '1.5.0', None, 'galaxy')
collection_ham = Requirement('sandwiches.ham', '1.5.0', None, 'galaxy', None)
collection_pbj = Requirement('sandwiches.pbj', '2.5', None, 'galaxy')
collection_pbj = Requirement('sandwiches.pbj', '2.5', None, 'galaxy', None)
collection_reuben = Requirement('sandwiches.reuben', '4', None, 'galaxy')
collection_reuben = Requirement('sandwiches.reuben', '4', None, 'galaxy', None)
return [collection_ham, collection_pbj, collection_reuben]
@ -27,7 +27,7 @@ def test_get_collection_widths(collection_objects):
def test_get_collection_widths_single_collection(mocker):
mocked_collection = Requirement('sandwiches.club', '3.0.0', None, 'galaxy')
mocked_collection = Requirement('sandwiches.club', '3.0.0', None, 'galaxy', None)
# Make this look like it is not iterable
mocker.patch('ansible.cli.galaxy.is_iterable', return_value=False)

@ -704,6 +704,7 @@ def test_get_collection_version_metadata_no_version(api_version, token_type, ver
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'href': 'https://galaxy.server.com/api/{api}/namespace/name/versions/{version}/'.format(api=api_version, version=version),
'download_url': 'https://downloadme.com',
'artifact': {
'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f',
@ -741,6 +742,85 @@ def test_get_collection_version_metadata_no_version(api_version, token_type, ver
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, version', [
('v2', None, None, '2.1.13'),
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/api/automation-hub/'), '1.0.0'),
])
def test_get_collection_signatures_backwards_compat(api_version, token_type, token_ins, version, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO("{}")
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_signatures('namespace', 'collection', version)
assert actual == []
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s/' \
% (api.api_server, api_version, version)
# v2 calls don't need auth, so no authz header or token_type
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, version', [
('v2', None, None, '2.1.13'),
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/api/automation-hub/'), '1.0.0'),
])
def test_get_collection_signatures(api_version, token_type, token_ins, version, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'signatures': [
{
"signature": "-----BEGIN PGP SIGNATURE-----\nSIGNATURE1\n-----END PGP SIGNATURE-----\n",
"pubkey_fingerprint": "FINGERPRINT",
"signing_service": "ansible-default",
"pulp_created": "2022-01-14T14:05:53.835605Z",
},
{
"signature": "-----BEGIN PGP SIGNATURE-----\nSIGNATURE2\n-----END PGP SIGNATURE-----\n",
"pubkey_fingerprint": "FINGERPRINT",
"signing_service": "ansible-default",
"pulp_created": "2022-01-14T14:05:53.835605Z",
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_signatures('namespace', 'collection', version)
assert actual == [
"-----BEGIN PGP SIGNATURE-----\nSIGNATURE1\n-----END PGP SIGNATURE-----\n",
"-----BEGIN PGP SIGNATURE-----\nSIGNATURE2\n-----END PGP SIGNATURE-----\n"
]
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s/' \
% (api.api_server, api_version, version)
# v2 calls don't need auth, so no authz header or token_type
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, response', [
('v2', None, None, {
'count': 2,

@ -168,6 +168,7 @@ def collection_artifact(request, tmp_path_factory):
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com')
galaxy_api.get_collection_signatures = MagicMock(return_value=[])
return galaxy_api
@ -449,7 +450,9 @@ def test_build_requirement_from_name(galaxy_server, monkeypatch, tmp_path_factor
requirements = cli._require_one_of_collections_requirements(
collections, requirements_file, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False)['namespace.collection']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@ -466,7 +469,7 @@ def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch,
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {})
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
@ -476,7 +479,9 @@ def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch,
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False)['namespace.collection']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@ -494,7 +499,7 @@ def test_build_requirment_from_name_with_prerelease_explicit(galaxy_server, monk
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None,
{})
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
@ -504,7 +509,9 @@ def test_build_requirment_from_name_with_prerelease_explicit(galaxy_server, monk
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:2.0.1-beta.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False)['namespace.collection']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@ -521,7 +528,7 @@ def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, t
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {})
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
broken_server = copy.copy(galaxy_server)
@@ -538,7 +545,7 @@ def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, t
['namespace.collection:>1.0.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
@@ -569,7 +576,7 @@ def test_build_requirement_from_name_missing(galaxy_server, monkeypatch, tmp_pat
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n* namespace.collection:* (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False)
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch, tmp_path_factory):
@@ -589,7 +596,7 @@ def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch
expected = "error (HTTP Code: 401, Message: msg)"
with pytest.raises(api.GalaxyError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False)
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False, False)
def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch, tmp_path_factory):
@@ -608,7 +615,7 @@ def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch,
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None,
{})
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:==2.0.0'])
@@ -616,7 +623,7 @@ def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch,
['namespace.collection:==2.0.0'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False)['namespace.collection']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@@ -644,7 +651,7 @@ def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server,
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None,
{})
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>=2.0.1,<2.0.2'])
@@ -652,7 +659,7 @@ def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server,
['namespace.collection:>=2.0.1,<2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False)['namespace.collection']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@@ -678,7 +685,7 @@ def test_build_requirement_from_name_multiple_version_results(galaxy_server, mon
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {})
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
@@ -693,7 +700,7 @@ def test_build_requirement_from_name_multiple_version_results(galaxy_server, mon
['namespace.collection:!=2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False)['namespace.collection']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
@@ -712,7 +719,7 @@ def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {})
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
@@ -727,7 +734,7 @@ def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=2.0.5 (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False)
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
@@ -735,8 +742,8 @@ def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_serve
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info_return = [
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}),
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}, None, None),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}, None, None),
]
mock_get_info = MagicMock(side_effect=mock_get_info_return)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
@@ -752,12 +759,12 @@ def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_serve
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=1.0.0 (dependency of parent.collection:2.0.5)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False)
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_server):
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir')])
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir', None)])
monkeypatch.setattr(collection, 'find_existing_collections', mock_installed_collections)
@@ -768,7 +775,7 @@ def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_serv
monkeypatch.setattr(Display, 'display', mock_display)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {})
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(return_value=['1.2.3', '1.3.0'])
@@ -795,7 +802,7 @@ def test_install_collection(collection_artifact, monkeypatch):
collection_path = os.path.join(output_path, b'ansible_namespace', b'collection')
os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file')
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)
collection.install(candidate, to_text(output_path), concrete_artifact_cm)
# Ensure the temp directory is empty, nothing is left behind
@@ -834,7 +841,7 @@ def test_install_collection_with_download(galaxy_server, collection_artifact, mo
mock_download.return_value = collection_tar
monkeypatch.setattr(concrete_artifact_cm, 'get_galaxy_artifact_path', mock_download)
req = Requirement('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy')
req = Candidate('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy', None)
collection.install(req, to_text(collections_dir), concrete_artifact_cm)
actual_files = os.listdir(collection_path)
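The `Candidate` and `Requirement` tuples in these hunks also gain a fifth positional value, always `None` in the tests (note the hunk above additionally changes the object under test from a `Requirement` to a `Candidate`). A namedtuple sketch of the assumed shape; the final field name (`signatures`) is a guess tied to the PR's signature-source feature, not shown in the diff:

```python
from collections import namedtuple

# Sketch of the widened Candidate tuple. Only the arity change (4 -> 5 fields)
# is visible in the tests; 'signatures' is a hypothetical name for the new slot.
Candidate = namedtuple('Candidate', ['fqcn', 'ver', 'src', 'type', 'signatures'])

candidate = Candidate(
    'ansible_namespace.collection', '0.1.0', '/tmp/collection.tar.gz', 'file', None
)
assert candidate.type == 'file' and candidate.signatures is None
```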
@@ -862,8 +869,8 @@ def test_install_collections_from_tar(collection_artifact, monkeypatch):
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file')]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
@@ -898,8 +905,8 @@ def test_install_collections_existing_without_force(collection_artifact, monkeyp
assert os.path.isdir(collection_path)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file')]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
@@ -930,8 +937,8 @@ def test_install_missing_metadata_warning(collection_artifact, monkeypatch):
os.unlink(b_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file')]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
@@ -951,8 +958,8 @@ def test_install_collection_with_circular_dependency(collection_artifact, monkey
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file')]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
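The recurring change to `collection.install_collections(...)` in the last four hunks is one extra trailing positional argument, `True`. Its meaning is not visible here (per the PR description, plausibly the new signature-verification toggle). A mock-based sketch of the updated call shape, using only the standard library so it does not depend on ansible internals:

```python
from unittest.mock import MagicMock

# Stand-in for collection.install_collections; the tests now pass 11 positional
# arguments instead of 10, with the new trailing flag set to True.
install_collections = MagicMock(name='install_collections')

requirements = [('ansible_namespace.collection', '0.1.0',
                 '/tmp/collection.tar.gz', 'file', None)]
install_collections(requirements, '/tmp/collections', [],
                    False, False, False, False, False, False,
                    'concrete_artifact_cm', True)

# The new flag rides at the end of the positional argument list.
assert len(install_collections.call_args.args) == 11
assert install_collections.call_args.args[-1] is True
```

Because the argument is positional, any out-of-tree caller of `install_collections` with the old ten-argument shape would fail until updated the same way these tests were.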
