Merge branch 'ansible:devel' into 85206-faster-dependency-resolution

pull/85249/head
Nils Brinkmann 3 months ago committed by GitHub
commit c6ab8134f4

@@ -13,6 +13,9 @@ else
     target="shippable/generic/"
 fi

+stage="${S:-prod}"
+
 # shellcheck disable=SC2086
 ansible-test integration --color -v --retry-on-error "${target}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
+    --remote-terminate always --remote-stage "${stage}" \
     --docker default --python "${python}"

@ -0,0 +1,2 @@
minor_changes:
- "ansible-doc - show ``notes``, ``seealso``, and top-level ``version_added`` for role entrypoints (https://github.com/ansible/ansible/pull/81796)."

@ -0,0 +1,2 @@
minor_changes:
- "default callback plugin - add option to configure indentation for JSON and YAML output (https://github.com/ansible/ansible/pull/85497)."

@ -0,0 +1,2 @@
bugfixes:
- callback plugins - improve consistency when accessing the Task object's ``resolved_action`` attribute.

@ -0,0 +1,6 @@
minor_changes:
- >-
setup - added new subkey ``lvs`` within each entry of ``ansible_facts['vgs']``
to provide complete logical volume data scoped by volume group.
    The top-level ``lvs`` fact, by comparison, deduplicates logical volume names
across volume groups and may be incomplete. (https://github.com/ansible/ansible/issues/85632)

@ -0,0 +1,2 @@
bugfixes:
- "validate-modules sanity test - fix handling of missing doc fragments (https://github.com/ansible/ansible/pull/85638)."

@ -0,0 +1,2 @@
bugfixes:
- The ``ansible_failed_task`` variable is now correctly exposed in a rescue section, even when a failing handler is triggered by the ``flush_handlers`` task in the corresponding ``block`` (https://github.com/ansible/ansible/issues/85682)

@ -0,0 +1,2 @@
bugfixes:
- "``ternary`` filter - evaluate values lazily (https://github.com/ansible/ansible/issues/85743)"
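Lazy evaluation here means the branch that is not selected is never evaluated, so an error hiding in it cannot trip. A minimal sketch of the idea (illustrative only; the helper name and the use of callables to stand in for deferred template values are assumptions, not the filter's actual implementation):

```python
def lazy_ternary(condition, true_val, false_val):
    """Pick one branch and evaluate only that one. Callables stand in
    for deferred (lazy) template values in this sketch."""
    chosen = true_val if condition else false_val
    return chosen() if callable(chosen) else chosen


def boom():
    raise RuntimeError("should never be evaluated")


# The losing branch is never called, so it cannot raise.
print(lazy_ternary(True, lambda: "yes", boom))
```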

@ -0,0 +1,2 @@
minor_changes:
- ansible-test - Implement new authentication methods for accessing the Ansible Core CI service.

@@ -1,2 +1,2 @@
 minor_changes:
-  - ansible-test - Upgrade to ``coverage`` version 7.10.0 for Python 3.9 and later.
+  - ansible-test - Upgrade to ``coverage`` version 7.10.5 for Python 3.9 and later.

@ -0,0 +1,2 @@
bugfixes:
- ansible-test - Fix a traceback that can occur when using delegation before the ansible-test temp directory is created.

@ -0,0 +1,3 @@
---
minor_changes:
- apt_repository - use correct debug method to print debug message.

@ -0,0 +1,2 @@
minor_changes:
- blockinfile - add new module option ``encoding`` to support files in encodings other than UTF-8 (https://github.com/ansible/ansible/pull/85291).

@ -0,0 +1,3 @@
bugfixes:
- templating - Multi-node template results coerce embedded ``None`` nodes to empty string (instead of rendering literal ``None`` to the output).
- argspec validation - The ``str`` argspec type treats ``None`` values as empty string for better consistency with pre-2.19 templating conversions.
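Both rules above can be sketched in a few lines (the helper names are hypothetical; the real logic lives in the templar and the argspec validator):

```python
def finalize_multi_node(nodes):
    """Multi-node template results: embedded None nodes are coerced to
    an empty string instead of rendering the literal text "None"."""
    return "".join("" if node is None else str(node) for node in nodes)


def argspec_str(value):
    """The ``str`` argspec type: None is treated as an empty string,
    matching pre-2.19 templating conversions."""
    return "" if value is None else str(value)
```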

@ -0,0 +1,3 @@
deprecated_features:
- The ``INJECT_FACTS_AS_VARS`` configuration currently defaults to ``True``; this default is now deprecated and will switch to ``False`` in Ansible 2.24.
  You will only be notified if you access 'injected' facts (for example, ansible_os_distribution vs ansible_facts['os_distribution']).

@ -0,0 +1,3 @@
---
bugfixes:
- dnf - Check if installroot is directory or not (https://github.com/ansible/ansible/issues/85680).
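The check described above amounts to rejecting an ``installroot`` that exists but is not a directory. A sketch (the function name is illustrative, not the module's internals):

```python
import os


def validate_installroot(path):
    """Fail early when the configured installroot exists but is not a
    directory; a nonexistent path is left alone for later creation."""
    if os.path.exists(path) and not os.path.isdir(path):
        raise ValueError(f"installroot {path!r} must be a directory")
    return path
```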

@ -0,0 +1,3 @@
bugfixes:
- conditionals - When displaying a broken conditional error or deprecation warning,
the origin of the non-boolean result is included (if available), and the raw result is omitted.

@ -0,0 +1,3 @@
---
removed_features:
- encrypt - remove deprecated passlib_or_crypt API.

@ -0,0 +1,4 @@
bugfixes:
- templating - Undefined marker values sourced from the Jinja ``getattr->getitem`` fallback are now accessed correctly,
raising AnsibleUndefinedVariable for user plugins that do not understand markers.
Previously, these values were erroneously returned to user plugin code that had not opted in to marker acceptance.

@ -0,0 +1,6 @@
---
deprecated_features:
- include_vars - Specifying 'ignore_files' as a string is deprecated.
minor_changes:
- include_vars - Raise an error if 'ignore_files' is not specified as a list.
- include_vars - Raise an error if 'extensions' is not specified as a list.
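The stricter handling above can be sketched as a small validator (illustrative names; not the module's actual code):

```python
def validate_list_option(name, value):
    """Raise if the option is not a list. A bare string such as
    ignore_files='README' was previously tolerated; it is now an
    error (and deprecated as a string for 'ignore_files')."""
    if not isinstance(value, list):
        raise TypeError(f"'{name}' must be a list, got {type(value).__name__}")
    return value
```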

@ -0,0 +1,4 @@
known_issues:
- templating - Exceptions raised in a Jinja ``set`` or ``with`` block which are not accessed by the template are ignored in the same manner as undefined values.
- templating - Passing a container created in a Jinja ``set`` or ``with`` block to a method results in a copy of that container.
Mutations to that container which are not returned by the method will be discarded.

@ -0,0 +1,2 @@
minor_changes:
- lineinfile - add new module option ``encoding`` to support files in encodings other than UTF-8 (https://github.com/ansible/ansible/pull/84999).

@ -0,0 +1,4 @@
breaking_changes:
- >-
  powershell - Removed code that tried to remove quotes from paths when performing Windows operations like copying
  and fetching files. This should not affect normal playbooks unless a value is quoted too many times.

@ -0,0 +1,4 @@
---
minor_changes:
- regex - Document the match_type fullmatch.
- regex - Ensure that match_type is one of match, fullmatch, or search (https://github.com/ansible/ansible/pull/85629).
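The restriction maps directly onto the three ``re`` module functions of the same names; a sketch (the helper name is assumed, not the filter's real implementation):

```python
import re


def apply_regex(value, pattern, match_type='search'):
    """Only 'match', 'fullmatch', or 'search' are accepted; each maps
    to the re function of the same name."""
    if match_type not in ('match', 'fullmatch', 'search'):
        raise ValueError(f"match_type must be one of match, fullmatch, search; got {match_type!r}")
    return getattr(re, match_type)(pattern, value) is not None
```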

@ -0,0 +1,2 @@
removed_features:
- "ansible-doc - role entrypoint attributes are no longer shown"

@ -0,0 +1,2 @@
deprecated_features:
- The ``hash_params`` function in ``roles/__init__`` is deprecated, as it is no longer in use.

@ -0,0 +1,5 @@
deprecated_features:
- >-
Deprecated the shell plugin's ``wrap_for_exec`` function. This API is not used in Ansible or any known collection
and is being removed to simplify the plugin API. Plugin authors should wrap their command to execute within an
explicit shell or other known executable.

@ -0,0 +1,4 @@
removed_features:
- >-
  Removed the option to set the ``DEFAULT_TRANSPORT`` configuration to ``smart``, which selected the default transport
  as either ``ssh`` or ``paramiko`` based on the underlying platform configuration.

@ -0,0 +1,2 @@
minor_changes:
- tags - a warning is now issued when reserved keywords are used as tags.

@ -0,0 +1,2 @@
bugfixes:
- templating - Fix slicing of tuples in templating (https://github.com/ansible/ansible/issues/85606).
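The underlying issue is that slicing a lazy container must return another lazy wrapper rather than a plain tuple, so deferred values keep their templar context. A stand-in sketch (the class here is illustrative, not Ansible's ``_AnsibleLazyAccessTuple``):

```python
class LazyTuple(tuple):
    """Tuple subclass whose slices are rewrapped in the same type;
    plain tuple.__getitem__ would otherwise return a bare tuple and
    drop the wrapper's behavior."""

    def __getitem__(self, key):
        result = super().__getitem__(key)
        return LazyTuple(result) if isinstance(key, slice) else result
```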

@ -0,0 +1,6 @@
bugfixes:
- template lookup - Skip finalization on the internal templating operation to allow markers to be returned and handled by, e.g. the ``default`` filter.
Previously, finalization tripped markers, causing an exception to end processing of the current template pipeline.
(https://github.com/ansible/ansible/issues/85674)
- templating - Avoid tripping markers within Jinja generated code.
(https://github.com/ansible/ansible/issues/85674)

@ -0,0 +1,3 @@
bugfixes:
- templating - Ensure filter plugin result processing occurs under the correct call context.
(https://github.com/ansible/ansible/issues/85585)

@ -0,0 +1,2 @@
minor_changes:
- Python type hints applied to ``to_text`` and ``to_bytes`` functions for better type hint interactions with code utilizing these functions.

@ -0,0 +1,4 @@
removed_features:
- >-
  vars plugins - removed the deprecated ``get_host_vars`` and ``get_group_vars`` fallback for vars plugins that
  neither inherit from ``BaseVarsPlugin`` nor define a ``get_vars`` method.

@ -0,0 +1,2 @@
deprecated_features:
- vars - the internal variable cache will be removed in 2.24. This cache, once used internally, exposes variables in inconsistent states; the 'vars' and 'varnames' lookups should be used instead.

@ -0,0 +1,2 @@
removed_features:
- "``vault``/``unvault`` filters - remove the deprecated ``vaultid`` parameter."

@ -0,0 +1,2 @@
minor_changes:
- Ansible now warns when you use reserved tags, which are meant only for selection and not for use in a play.

@@ -6,7 +6,6 @@ from __future__ import annotations
 import copy
 import dataclasses
 import enum
-import textwrap
 import typing as t
 import collections.abc as c
 import re
@@ -44,7 +43,7 @@ from ._jinja_bits import (
     _finalize_template_result,
     FinalizeMode,
 )
-from ._jinja_common import _TemplateConfig, MarkerError, ExceptionMarker
+from ._jinja_common import _TemplateConfig, MarkerError, ExceptionMarker, JinjaCallContext
 from ._lazy_containers import _AnsibleLazyTemplateMixin
 from ._marker_behaviors import MarkerBehavior, FAIL_ON_UNDEFINED
 from ._transform import _type_transform_mapping
@@ -260,6 +259,7 @@ class TemplateEngine:
         with (
             TemplateContext(template_value=variable, templar=self, options=options, stop_on_template=stop_on_template) as ctx,
             DeprecatedAccessAuditContext.when(ctx.is_top_level),
+            JinjaCallContext(accept_lazy_markers=True),  # let default Jinja marker behavior apply, since we're descending into a new template
         ):
             try:
                 if not value_is_str:
@@ -559,9 +559,11 @@ class TemplateEngine:
         bool_result = bool(result)

+        result_origin = Origin.get_tag(result) or Origin.UNKNOWN
+
         msg = (
-            f'Conditional result was {textwrap.shorten(str(result), width=40)!r} of type {native_type_name(result)!r}, '
-            f'which evaluates to {bool_result}. Conditionals must have a boolean result.'
+            f'Conditional result ({bool_result}) was derived from value of type {native_type_name(result)!r} at {str(result_origin)!r}. '
+            'Conditionals must have a boolean result.'
         )

         if _TemplateConfig.allow_broken_conditionals:

@@ -811,7 +811,7 @@ class AnsibleEnvironment(SandboxedEnvironment):
         try:
             value = obj[attribute]
         except (TypeError, LookupError):
-            return self.undefined(obj=obj, name=attribute) if is_safe else self.unsafe_undefined(obj, attribute)
+            value = self.undefined(obj=obj, name=attribute) if is_safe else self.unsafe_undefined(obj, attribute)

         AnsibleAccessContext.current().access(value)
@@ -891,6 +891,8 @@ def _flatten_nodes(nodes: t.Iterable[t.Any]) -> t.Iterable[t.Any]:
     else:
         if type(node) is TemplateModule:  # pylint: disable=unidiomatic-typecheck
             yield from _flatten_nodes(node._body_stream)
+        elif node is None:
+            continue  # avoid yielding `None`-valued nodes to avoid literal "None" in stringified template results
         else:
             yield node

@@ -114,7 +114,13 @@ class JinjaPluginIntercept(c.MutableMapping):
         try:
             with JinjaCallContext(accept_lazy_markers=instance.accept_lazy_markers):
-                return instance.j2_function(*lazify_container_args(args), **lazify_container_kwargs(kwargs))
+                result = instance.j2_function(*lazify_container_args(args), **lazify_container_kwargs(kwargs))
+
+                if instance.plugin_type == 'filter':
+                    # ensure list conversion occurs under the call context
+                    result = _wrap_plugin_output(result)
+
+                return result
         except MarkerError as ex:
             return ex.source
         except Exception as ex:
@@ -155,7 +161,6 @@ class JinjaPluginIntercept(c.MutableMapping):
         @functools.wraps(instance.j2_function)
         def wrapper(*args, **kwargs) -> t.Any:
             result = self._invoke_plugin(instance, *args, **kwargs)
-            result = _wrap_plugin_output(result)
             return result

@@ -229,8 +229,6 @@ class _AnsibleLazyTemplateDict(_AnsibleTaggedDict, _AnsibleLazyTemplateMixin):
     __slots__ = _AnsibleLazyTemplateMixin._SLOTS

     def __init__(self, contents: t.Iterable | _LazyValueSource, /, **kwargs) -> None:
-        _AnsibleLazyTemplateMixin.__init__(self, contents)
-
         if isinstance(contents, _AnsibleLazyTemplateDict):
             super().__init__(dict.items(contents), **kwargs)
         elif isinstance(contents, _LazyValueSource):
@@ -238,6 +236,8 @@ class _AnsibleLazyTemplateDict(_AnsibleTaggedDict, _AnsibleLazyTemplateMixin):
         else:
             raise UnsupportedConstructionMethodError()

+        _AnsibleLazyTemplateMixin.__init__(self, contents)
+
     def get(self, key: t.Any, default: t.Any = None) -> t.Any:
         if (value := super().get(key, _NoKeySentinel)) is _NoKeySentinel:
             return default
@@ -372,8 +372,6 @@ class _AnsibleLazyTemplateList(_AnsibleTaggedList, _AnsibleLazyTemplateMixin):
     __slots__ = _AnsibleLazyTemplateMixin._SLOTS

     def __init__(self, contents: t.Iterable | _LazyValueSource, /) -> None:
-        _AnsibleLazyTemplateMixin.__init__(self, contents)
-
         if isinstance(contents, _AnsibleLazyTemplateList):
             super().__init__(list.__iter__(contents))
         elif isinstance(contents, _LazyValueSource):
@@ -381,6 +379,8 @@ class _AnsibleLazyTemplateList(_AnsibleTaggedList, _AnsibleLazyTemplateMixin):
         else:
             raise UnsupportedConstructionMethodError()

+        _AnsibleLazyTemplateMixin.__init__(self, contents)
+
     def __getitem__(self, key: t.SupportsIndex | slice, /) -> t.Any:
         if type(key) is slice:  # pylint: disable=unidiomatic-typecheck
             return _AnsibleLazyTemplateList(_LazyValueSource(source=super().__getitem__(key), templar=self._templar, lazy_options=self._lazy_options))
@@ -567,7 +567,7 @@ class _AnsibleLazyAccessTuple(_AnsibleTaggedTuple, _AnsibleLazyTemplateMixin):
     def __getitem__(self, key: t.SupportsIndex | slice, /) -> t.Any:
         if type(key) is slice:  # pylint: disable=unidiomatic-typecheck
-            return _AnsibleLazyAccessTuple(super().__getitem__(key))
+            return _AnsibleLazyAccessTuple(_LazyValueSource(source=super().__getitem__(key), templar=self._templar, lazy_options=self._lazy_options))

         value = super().__getitem__(key)

@@ -107,7 +107,6 @@ from ansible import context
 from ansible.utils import display as _display
 from ansible.cli.arguments import option_helpers as opt_help
 from ansible.inventory.manager import InventoryManager
-from ansible.module_utils.six import string_types
 from ansible.module_utils.common.text.converters import to_bytes, to_text
 from ansible.module_utils.common.collections import is_sequence
 from ansible.module_utils.common.file import is_executable
@@ -403,8 +402,8 @@ class CLI(ABC):
                 options = super(MyCLI, self).post_process_args(options)
                 if options.addition and options.subtraction:
                     raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
-                if isinstance(options.listofhosts, string_types):
-                    options.listofhosts = string_types.split(',')
+                if isinstance(options.listofhosts, str):
+                    options.listofhosts = options.listofhosts.split(',')
                 return options
         """
@@ -440,7 +439,7 @@ class CLI(ABC):
         if options.inventory:
             # should always be list
-            if isinstance(options.inventory, string_types):
+            if isinstance(options.inventory, str):
                 options.inventory = [options.inventory]

             # Ensure full paths when needed

@@ -24,7 +24,6 @@ from ansible.config.manager import ConfigManager
 from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleRequiredOptionError
 from ansible.module_utils.common.text.converters import to_native, to_text, to_bytes
 from ansible._internal import _json
-from ansible.module_utils.six import string_types
 from ansible.parsing.quoting import is_quoted
 from ansible.parsing.yaml.dumper import AnsibleDumper
 from ansible.utils.color import stringc
@@ -288,21 +287,21 @@ class ConfigCLI(CLI):
                         default = '0'
                 elif default:
                     if stype == 'list':
-                        if not isinstance(default, string_types):
+                        if not isinstance(default, str):
                             # python lists are not valid env ones
                             try:
                                 default = ', '.join(default)
                             except Exception as e:
                                 # list of other stuff
                                 default = '%s' % to_native(default)
-                    if isinstance(default, string_types) and not is_quoted(default):
+                    if isinstance(default, str) and not is_quoted(default):
                         default = shlex.quote(default)
                 elif default is None:
                     default = ''

                 if subkey in settings[setting] and settings[setting][subkey]:
                     entry = settings[setting][subkey][-1]['name']
-                    if isinstance(settings[setting]['description'], string_types):
+                    if isinstance(settings[setting]['description'], str):
                         desc = settings[setting]['description']
                     else:
                         desc = '\n#'.join(settings[setting]['description'])
@@ -343,7 +342,7 @@ class ConfigCLI(CLI):
                     sections[s] = new_sections[s]
                 continue

-            if isinstance(opt['description'], string_types):
+            if isinstance(opt['description'], str):
                 desc = '# (%s) %s' % (opt.get('type', 'string'), opt['description'])
             else:
                 desc = "# (%s) " % opt.get('type', 'string')
@@ -361,7 +360,7 @@ class ConfigCLI(CLI):
             seen[entry['section']].append(entry['key'])

             default = self.config.template_default(opt.get('default', ''), get_constants())
-            if opt.get('type', '') == 'list' and not isinstance(default, string_types):
+            if opt.get('type', '') == 'list' and not isinstance(default, str):
                 # python lists are not valid ini ones
                 default = ', '.join(default)
             elif default is None:

@@ -32,7 +32,6 @@ from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
 from ansible.module_utils.common.text.converters import to_native, to_text
 from ansible.module_utils.common.collections import is_sequence
 from ansible.module_utils.common.yaml import yaml_dump
-from ansible.module_utils.six import string_types
 from ansible.parsing.plugin_docs import read_docstub
 from ansible.parsing.yaml.dumper import AnsibleDumper
 from ansible.parsing.yaml.loader import AnsibleLoader
@@ -1274,7 +1273,7 @@ class DocCLI(CLI, RoleMixin):
             sub_indent = inline_indent + extra_indent
             if is_sequence(opt['description']):
                 for entry_idx, entry in enumerate(opt['description'], 1):
-                    if not isinstance(entry, string_types):
+                    if not isinstance(entry, str):
                         raise AnsibleError("Expected string in description of %s at index %s, got %s" % (o, entry_idx, type(entry)))
                     if entry_idx == 1:
                         text.append(key + DocCLI.warp_fill(DocCLI.tty_ify(entry), limit,
@@ -1282,7 +1281,7 @@ class DocCLI(CLI, RoleMixin):
                 else:
                     text.append(DocCLI.warp_fill(DocCLI.tty_ify(entry), limit, initial_indent=sub_indent, subsequent_indent=sub_indent))
             else:
-                if not isinstance(opt['description'], string_types):
+                if not isinstance(opt['description'], str):
                     raise AnsibleError("Expected string in description of %s, got %s" % (o, type(opt['description'])))
                 text.append(key + DocCLI.warp_fill(DocCLI.tty_ify(opt['description']), limit,
                                                    initial_indent=inline_indent, subsequent_indent=sub_indent, initial_extra=len(extra_indent)))
@@ -1344,6 +1343,51 @@ class DocCLI(CLI, RoleMixin):
                 text.append("%s%s:" % (opt_indent, subkey))
                 DocCLI.add_fields(text, subdata, limit, opt_indent + '  ', return_values, opt_indent)

+    @staticmethod
+    def _add_seealso(text: list[str], seealsos: list[dict[str, t.Any]], limit: int, opt_indent: str) -> None:
+        for item in seealsos:
+            if 'module' in item:
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify('Module %s' % item['module']),
+                            limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
+                description = item.get('description')
+                if description is None and item['module'].startswith('ansible.builtin.'):
+                    description = 'The official documentation on the %s module.' % item['module']
+                if description is not None:
+                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(description),
+                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+                if item['module'].startswith('ansible.builtin.'):
+                    relative_url = 'collections/%s_module.html' % item['module'].replace('.', '/', 2)
+                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
+                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent))
+            elif 'plugin' in item and 'plugin_type' in item:
+                plugin_suffix = ' plugin' if item['plugin_type'] not in ('module', 'role') else ''
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify('%s%s %s' % (item['plugin_type'].title(), plugin_suffix, item['plugin'])),
+                            limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
+                description = item.get('description')
+                if description is None and item['plugin'].startswith('ansible.builtin.'):
+                    description = 'The official documentation on the %s %s%s.' % (item['plugin'], item['plugin_type'], plugin_suffix)
+                if description is not None:
+                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(description),
+                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+                if item['plugin'].startswith('ansible.builtin.'):
+                    relative_url = 'collections/%s_%s.html' % (item['plugin'].replace('.', '/', 2), item['plugin_type'])
+                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
+                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent))
+            elif 'name' in item and 'link' in item and 'description' in item:
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['name']),
+                            limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['description']),
+                            limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['link']),
+                            limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+            elif 'ref' in item and 'description' in item:
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
+                            limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['description']),
+                            limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+                text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
+                            limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+
     def get_role_man_text(self, role, role_json):
         """Generate text for the supplied role suitable for display.
@@ -1371,6 +1415,9 @@ class DocCLI(CLI, RoleMixin):
             text.append("ENTRY POINT: %s %s" % (_format(entry_point, "BOLD"), desc))
             text.append('')

+            if version_added := doc.pop('version_added', None):
+                text.append(_format("ADDED IN:", 'bold') + " %s\n" % DocCLI._format_version_added(version_added))
+
             if doc.get('description'):
                 if isinstance(doc['description'], list):
                     descs = doc['description']
@@ -1384,29 +1431,24 @@ class DocCLI(CLI, RoleMixin):
                 text.append(_format("Options", 'bold') + " (%s indicates it is required):" % ("=" if C.ANSIBLE_NOCOLOR else 'red'))
                 DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)

-            if doc.get('attributes', False):
-                display.deprecated(
-                    f'The role {role}\'s argument spec {entry_point} contains the key "attributes", '
-                    'which will not be displayed by ansible-doc in the future. '
-                    'This was unintentionally allowed when plugin attributes were added, '
-                    'but the feature does not map well to role argument specs.',
-                    version='2.20',
-                )
-                text.append("")
-                text.append(_format("ATTRIBUTES:", 'bold'))
-                for k in doc['attributes'].keys():
-                    text.append('')
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(_format('%s:' % k, 'UNDERLINE')), limit - 6, initial_indent=opt_indent,
-                                                 subsequent_indent=opt_indent))
-                    text.append(DocCLI._indent_lines(DocCLI._dump_yaml(doc['attributes'][k]), opt_indent))
-                del doc['attributes']
+            if notes := doc.pop('notes', False):
+                text.append("")
+                text.append(_format("NOTES:", 'bold'))
+                for note in notes:
+                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(note), limit - 6,
+                                initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
+
+            if seealso := doc.pop('seealso', False):
+                text.append("")
+                text.append(_format("SEE ALSO:", 'bold'))
+                DocCLI._add_seealso(text, seealso, limit=limit, opt_indent=opt_indent)

             # generic elements we will handle identically
             for k in ('author',):
                 if k not in doc:
                     continue
                 text.append('')
-                if isinstance(doc[k], string_types):
+                if isinstance(doc[k], str):
                     text.append('%s: %s' % (k.upper(), DocCLI.warp_fill(DocCLI.tty_ify(doc[k]),
                                 limit - (len(k) + 2), subsequent_indent=opt_indent)))
                 elif isinstance(doc[k], (list, tuple)):
@@ -1418,7 +1460,7 @@ class DocCLI(CLI, RoleMixin):
         if doc.get('examples', False):
             text.append('')
             text.append(_format("EXAMPLES:", 'bold'))
-            if isinstance(doc['examples'], string_types):
+            if isinstance(doc['examples'], str):
                 text.append(doc.pop('examples').strip())
             else:
                 try:
@@ -1497,49 +1539,7 @@ class DocCLI(CLI, RoleMixin):
         if doc.get('seealso', False):
             text.append("")
             text.append(_format("SEE ALSO:", 'bold'))
-            for item in doc['seealso']:
-                if 'module' in item:
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify('Module %s' % item['module']),
-                                limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
-                    description = item.get('description')
-                    if description is None and item['module'].startswith('ansible.builtin.'):
-                        description = 'The official documentation on the %s module.' % item['module']
-                    if description is not None:
-                        text.append(DocCLI.warp_fill(DocCLI.tty_ify(description),
-                                    limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
-                    if item['module'].startswith('ansible.builtin.'):
-                        relative_url = 'collections/%s_module.html' % item['module'].replace('.', '/', 2)
-                        text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
-                                    limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent))
-                elif 'plugin' in item and 'plugin_type' in item:
-                    plugin_suffix = ' plugin' if item['plugin_type'] not in ('module', 'role') else ''
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify('%s%s %s' % (item['plugin_type'].title(), plugin_suffix, item['plugin'])),
-                                limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
-                    description = item.get('description')
-                    if description is None and item['plugin'].startswith('ansible.builtin.'):
-                        description = 'The official documentation on the %s %s%s.' % (item['plugin'], item['plugin_type'], plugin_suffix)
-                    if description is not None:
-                        text.append(DocCLI.warp_fill(DocCLI.tty_ify(description),
-                                    limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
-                    if item['plugin'].startswith('ansible.builtin.'):
-                        relative_url = 'collections/%s_%s.html' % (item['plugin'].replace('.', '/', 2), item['plugin_type'])
-                        text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink(relative_url)),
-                                    limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent))
-                elif 'name' in item and 'link' in item and 'description' in item:
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['name']),
-                                limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['description']),
-                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['link']),
-                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
-                elif 'ref' in item and 'description' in item:
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
-                                limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(item['description']),
-                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
-                    text.append(DocCLI.warp_fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
-                                limit - 6, initial_indent=opt_indent + '   ', subsequent_indent=opt_indent + '   '))
+            DocCLI._add_seealso(text, doc['seealso'], limit=limit, opt_indent=opt_indent)
             del doc['seealso']

         if doc.get('requirements', False):
@@ -1554,7 +1554,7 @@ class DocCLI(CLI, RoleMixin):
                 continue
             text.append('')
             header = _format(k.upper(), 'bold')
-            if isinstance(doc[k], string_types):
+            if isinstance(doc[k], str):
                 text.append('%s: %s' % (header, DocCLI.warp_fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
             elif isinstance(doc[k], (list, tuple)):
                 text.append('%s: %s' % (header, ', '.join(doc[k])))
@@ -1566,7 +1566,7 @@ class DocCLI(CLI, RoleMixin):
         if doc.get('plainexamples', False):
             text.append('')
             text.append(_format("EXAMPLES:", 'bold'))
if isinstance(doc['plainexamples'], string_types): if isinstance(doc['plainexamples'], str):
text.append(doc.pop('plainexamples').strip()) text.append(doc.pop('plainexamples').strip())
else: else:
try: try:
@ -1603,7 +1603,7 @@ def _do_yaml_snippet(doc):
for o in sorted(doc['options'].keys()): for o in sorted(doc['options'].keys()):
opt = doc['options'][o] opt = doc['options'][o]
if isinstance(opt['description'], string_types): if isinstance(opt['description'], str):
desc = DocCLI.tty_ify(opt['description']) desc = DocCLI.tty_ify(opt['description'])
else: else:
desc = DocCLI.tty_ify(" ".join(opt['description'])) desc = DocCLI.tty_ify(" ".join(opt['description']))

@@ -54,7 +54,6 @@ from ansible.module_utils.common.collections import is_iterable
 from ansible.module_utils.common.yaml import yaml_dump, yaml_load
 from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
 from ansible._internal._datatag._tags import TrustedAsTemplate
-from ansible.module_utils import six
 from ansible.parsing.dataloader import DataLoader
 from ansible.playbook.role.requirement import RoleRequirement
 from ansible._internal._templating._engine import TemplateEngine
@@ -65,7 +64,6 @@ from ansible.utils.plugin_docs import get_versioned_doclink
 from ansible.utils.vars import load_extra_vars

 display = Display()
-urlparse = six.moves.urllib.parse.urlparse


 def with_collection_artifacts_manager(wrapped_method):

@@ -1215,7 +1215,6 @@ DEFAULT_TRANSPORT:
   default: ssh
   description:
     - Can be any connection plugin available to your ansible installation.
-    - There is also a (DEPRECATED) special 'smart' option, that will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions.
   env: [{name: ANSIBLE_TRANSPORT}]
   ini:
     - {key: transport, section: defaults}

@@ -574,7 +574,7 @@ class PlayIterator:
         Given the current HostState state, determines if the current block, or any child blocks,
         are in rescue mode.
         """
-        if state.run_state == IteratingStates.TASKS and state.get_current_block().rescue:
+        if state.run_state in (IteratingStates.TASKS, IteratingStates.HANDLERS) and state.get_current_block().rescue:
             return True
         if state.tasks_child_state is not None:
             return self.is_any_block_rescuing(state.tasks_child_state)

@@ -31,7 +31,6 @@ from ansible.utils.helpers import pct_to_int
 from ansible.utils.collection_loader import AnsibleCollectionConfig
 from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
 from ansible.utils.path import makedirs_safe
-from ansible.utils.ssh_functions import set_default_transport
 from ansible.utils.display import Display
@@ -65,14 +64,6 @@ class PlaybookExecutor:
                 forks=context.CLIARGS.get('forks'),
             )

-        # Note: We run this here to cache whether the default ansible ssh
-        # executable supports control persist. Sometime in the future we may
-        # need to enhance this to check that ansible_ssh_executable specified
-        # in inventory is also cached. We can't do this caching at the point
-        # where it is used (in task_executor) because that is post-fork and
-        # therefore would be discarded after every task.
-        set_default_transport()
-
     def run(self):
         """
         Run the given playbook, based on the settings in the play which

@@ -27,7 +27,6 @@ from ansible._internal._datatag._tags import TrustedAsTemplate
 from ansible.module_utils.parsing.convert_bool import boolean
 from ansible.module_utils.common.text.converters import to_text, to_native
 from ansible.module_utils.connection import write_to_stream
-from ansible.module_utils.six import string_types
 from ansible.playbook.task import Task
 from ansible.plugins import get_plugin_class
 from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
@@ -48,6 +47,7 @@ display = Display()

 RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]

+_INJECT_FACTS, _INJECT_FACTS_ORIGIN = C.config.get_config_value_and_origin('INJECT_FACTS_AS_VARS')

 __all__ = ['TaskExecutor']

@@ -340,7 +340,7 @@ class TaskExecutor:
             })

         # if plugin is loaded, get resolved name, otherwise leave original task connection
-        if self._connection and not isinstance(self._connection, string_types):
+        if self._connection and not isinstance(self._connection, str):
             task_fields['connection'] = getattr(self._connection, 'ansible_name')

         tr = _RawTaskResult(
@@ -664,8 +664,11 @@ class TaskExecutor:
                     # TODO: cleaning of facts should eventually become part of taskresults instead of vars
                     af = result['ansible_facts']
                     vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
-                    if C.INJECT_FACTS_AS_VARS:
+                    if _INJECT_FACTS:
+                        if _INJECT_FACTS_ORIGIN == 'default':
                             cleaned_toplevel = {k: _deprecate_top_level_fact(v) for k, v in clean_facts(af).items()}
+                        else:
+                            cleaned_toplevel = clean_facts(af)
                         vars_copy.update(cleaned_toplevel)

             # set the failed property if it was missing.
@@ -759,9 +762,13 @@ class TaskExecutor:
                 # TODO: cleaning of facts should eventually become part of taskresults instead of vars
                 af = result['ansible_facts']
                 variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
-                if C.INJECT_FACTS_AS_VARS:
-                    # DTFIX-FUTURE: why is this happening twice, esp since we're post-fork and these will be discarded?
+                if _INJECT_FACTS:
+                    # This happens x2 due to loops and being able to use values in subsequent iterations;
+                    # these copies are later discarded in favor of the 'total/final' one on loop end.
+                    if _INJECT_FACTS_ORIGIN == 'default':
                         cleaned_toplevel = {k: _deprecate_top_level_fact(v) for k, v in clean_facts(af).items()}
+                    else:
+                        cleaned_toplevel = clean_facts(af)
                     variables.update(cleaned_toplevel)

         # save the notification target in the result, if it was specified, as
@@ -960,9 +967,6 @@ class TaskExecutor:
             self._play_context.connection = current_connection

-        # TODO: play context has logic to update the connection for 'smart'
-        # (default value, will chose between 'ssh' and 'paramiko') and 'persistent'
-        # (really paramiko), eventually this should move to task object itself.
         conn_type = self._play_context.connection

         connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(

@@ -25,7 +25,6 @@ from ansible.errors import AnsibleError
 from ansible.galaxy.user_agent import user_agent
 from ansible.module_utils.api import retry_with_delays_and_condition
 from ansible.module_utils.api import generate_jittered_backoff
-from ansible.module_utils.six import string_types
 from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
 from ansible.module_utils.urls import open_url, prepare_multipart
 from ansible.utils.display import Display
@@ -595,11 +594,11 @@ class GalaxyAPI:
         page_size = kwargs.get('page_size', None)
         author = kwargs.get('author', None)

-        if tags and isinstance(tags, string_types):
+        if tags and isinstance(tags, str):
             tags = tags.split(',')
             search_url += '&tags_autocomplete=' + '+'.join(tags)

-        if platforms and isinstance(platforms, string_types):
+        if platforms and isinstance(platforms, str):
             platforms = platforms.split(',')
             search_url += '&platforms_autocomplete=' + '+'.join(platforms)
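The tag handling above splits a comma-separated string into a list and joins it with `+` into the search query. A hedged standalone sketch (the helper name and exact nesting are illustrative, not the `GalaxyAPI` method itself):

```python
# Sketch of the comma-split + '+'-join query building shown in the diff.
# add_tag_filter is a hypothetical helper, not part of GalaxyAPI.
def add_tag_filter(search_url, tags):
    if tags and isinstance(tags, str):
        tags = tags.split(',')
    if tags:
        search_url += '&tags_autocomplete=' + '+'.join(tags)
    return search_url


print(add_tag_filter('/search?q=x', 'database,mysql'))
# -> /search?q=x&tags_autocomplete=database+mysql
```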

@@ -339,12 +339,12 @@ def verify_local_collection(local_collection, remote_collection, artifacts_manag
     ]

     # Find any paths not in the FILES.json
-    for root, dirs, files in os.walk(b_collection_path):
-        for name in files:
+    for root, dirs, filenames in os.walk(b_collection_path):
+        for name in filenames:
             full_path = os.path.join(root, name)
             path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')

             if any(fnmatch.fnmatch(full_path, b_pattern) for b_pattern in b_ignore_patterns):
-                display.v("Ignoring verification for %s" % full_path)
+                display.v("Ignoring verification for %s" % to_text(full_path))
                 continue

             if full_path not in collection_files:
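The walk above operates entirely on bytes paths, so the `fnmatch` patterns must be bytes as well; only the log message is converted to text. A self-contained sketch of that technique (paths and patterns here are made up for illustration):

```python
# Sketch: os.walk over a bytes path yields bytes entries, and fnmatch
# accepts bytes patterns, so ignore-matching stays in bytes throughout.
import fnmatch
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, 'FILES.json'), 'w').close()
    open(os.path.join(tmp, 'scratch.pyc'), 'w').close()

    b_root = tmp.encode()
    b_ignore_patterns = [b'*.pyc']

    kept = []
    for root, dirs, filenames in os.walk(b_root):
        for name in filenames:
            full_path = os.path.join(root, name)
            # skip ignored files, exactly as verify_local_collection does
            if any(fnmatch.fnmatch(full_path, b_pattern) for b_pattern in b_ignore_patterns):
                continue
            kept.append(os.path.basename(full_path))

    print(kept)  # -> [b'FILES.json']
```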

@@ -24,7 +24,6 @@ from ansible.galaxy.dependency_resolution.versioning import (
     is_pre_release,
     meets_requirements,
 )
-from ansible.module_utils.six import string_types
 from ansible.utils.version import SemanticVersion, LooseVersion

 try:
@@ -278,7 +277,7 @@ class CollectionDependencyProviderBase(AbstractProvider):
             # NOTE: Another known mistake is setting a minor part of the SemVer notation
             # NOTE: skipping the "patch" bit like "1.0" which is assumed non-compliant even
             # NOTE: after the conversion to string.
-            if not isinstance(version, string_types):
+            if not isinstance(version, str):
                 raise ValueError(version_err)
             elif version != '*':
                 try:

@@ -33,7 +33,6 @@ from ansible._internal import _json, _wrapt
 from ansible._internal._json import EncryptedStringBehavior
 from ansible.errors import AnsibleError, AnsibleOptionsError
 from ansible.inventory.data import InventoryData
-from ansible.module_utils.six import string_types
 from ansible.module_utils.common.text.converters import to_bytes, to_text
 from ansible.parsing.utils.addresses import parse_address
 from ansible.plugins.loader import inventory_loader
@@ -112,7 +111,7 @@ def split_host_pattern(pattern):
         results = (split_host_pattern(p) for p in pattern)
         # flatten the results
         return list(itertools.chain.from_iterable(results))
-    elif not isinstance(pattern, string_types):
+    elif not isinstance(pattern, str):
         pattern = to_text(pattern, errors='surrogate_or_strict')

     # If it's got commas in it, we'll treat it as a straightforward
@@ -162,7 +161,7 @@ class InventoryManager(object):
         # the inventory dirs, files, script paths or lists of hosts
         if sources is None:
             self._sources = []
-        elif isinstance(sources, string_types):
+        elif isinstance(sources, str):
             self._sources = [sources]
         else:
             self._sources = sources
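The last hunk normalizes `sources` so that `None`, a single string, or a list all end up as a list. A minimal sketch of that normalization as a free function (the name `normalize_sources` is illustrative):

```python
# Sketch of the None / str / list normalization from InventoryManager.
def normalize_sources(sources):
    if sources is None:
        return []
    elif isinstance(sources, str):
        return [sources]
    return sources


print(normalize_sources('hosts.ini'))  # -> ['hosts.ini']
```

Dropping the old `string_types` tuple is safe on Python 3, where `str` is the only text type.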

@@ -0,0 +1,86 @@
+from __future__ import annotations
+
+import sys
+import types
+
+from ansible.module_utils.common import warnings
+
+# INLINED FROM THE SIX LIBRARY, see lib/ansible/module_utils/six/__init__.py
+# Copyright (c) 2010-2024 Benjamin Peterson
+
+
+def with_metaclass(meta, *bases):
+    """Create a base class with a metaclass."""
+    # This requires a bit of explanation: the basic idea is to make a dummy
+    # metaclass for one level of class instantiation that replaces itself with
+    # the actual metaclass.
+    class metaclass(type):
+
+        def __new__(cls, name, this_bases, d):
+            if sys.version_info[:2] >= (3, 7):
+                # This version introduced PEP 560 that requires a bit
+                # of extra care (we mimic what is done by __build_class__).
+                resolved_bases = types.resolve_bases(bases)
+                if resolved_bases is not bases:
+                    d['__orig_bases__'] = bases
+            else:
+                resolved_bases = bases
+            return meta(name, resolved_bases, d)
+
+        @classmethod
+        def __prepare__(cls, name, this_bases):
+            return meta.__prepare__(name, bases)
+
+    return type.__new__(metaclass, 'temporary_class', (), {})
+
+
+def add_metaclass(metaclass):
+    """Class decorator for creating a class with a metaclass."""
+    def wrapper(cls):
+        orig_vars = cls.__dict__.copy()
+        slots = orig_vars.get('__slots__')
+        if slots is not None:
+            if isinstance(slots, str):
+                slots = [slots]
+            for slots_var in slots:
+                orig_vars.pop(slots_var)
+        orig_vars.pop('__dict__', None)
+        orig_vars.pop('__weakref__', None)
+        if hasattr(cls, '__qualname__'):
+            orig_vars['__qualname__'] = cls.__qualname__
+        return metaclass(cls.__name__, cls.__bases__, orig_vars)
+    return wrapper
+
+
+def iteritems(d, **kw):
+    return iter(d.items(**kw))
+
+
+_mini_six = {
+    "PY2": False,
+    "PY3": True,
+    "text_type": str,
+    "binary_type": bytes,
+    "string_types": (str,),
+    "integer_types": (int,),
+    "iteritems": iteritems,
+    "add_metaclass": add_metaclass,
+    "with_metaclass": with_metaclass,
+}
+
+# INLINED SIX END
+
+
+def deprecate(importable_name: str, module_name: str, *deprecated_args) -> object:
+    """Inject import-time deprecation warnings."""
+    if not (importable_name in deprecated_args and (importable := _mini_six.get(importable_name, ...)) is not ...):
+        raise AttributeError(f"module {module_name!r} has no attribute {importable_name!r}")
+
+    # TODO Inspect and remove all calls to this function in 2.24
+    warnings.deprecate(
+        msg=f"Importing {importable_name!r} from {module_name!r} is deprecated.",
+        version="2.24",
+    )
+
+    return importable
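`deprecate()` above uses `Ellipsis` as a "missing" sentinel so that falsy replacement values (such as `False` for `PY2`) are still treated as found; the walrus assignment must be parenthesized so the bound value is the lookup result, not the comparison. A small sketch of just that lookup pattern (names below are illustrative):

```python
# Sketch: Ellipsis-as-sentinel lookup with a parenthesized walrus, so falsy
# values like False are distinguishable from "not present".
_replacements = {"PY2": False, "text_type": str}


def lookup(name):
    if (found := _replacements.get(name, ...)) is ...:
        raise AttributeError(f"no attribute {name!r}")
    return found


print(lookup("PY2"))  # -> False (falsy, but correctly found)
```

Without the parentheses, `found := _replacements.get(name, ...) is ...` would bind the boolean result of the `is` comparison instead of the looked-up object.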

@@ -1,15 +1,35 @@
 # Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com> 2016
 # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
-"""
-.. warn:: Use ansible.module_utils.common.text.converters instead.
-"""

 from __future__ import annotations

-# Backwards compat for people still calling it from this package
-# pylint: disable=unused-import
-import codecs
+from ansible.module_utils.common import warnings as _warnings

-from ansible.module_utils.six import PY3, text_type, binary_type
-from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
+_mini_six = {
+    "binary_type": bytes,
+    "text_type": str,
+    "PY3": True,
+}
+
+
+def __getattr__(importable_name: str) -> object:
+    """Inject import-time deprecation warnings."""
+    help_text: str | None = None
+    importable: object
+
+    if importable_name == "codecs":
+        import codecs
+
+        importable = codecs
+    elif importable_name in {"to_bytes", "to_native", "to_text"}:
+        from ansible.module_utils.common.text import converters
+
+        importable = getattr(converters, importable_name)
+        help_text = "Use ansible.module_utils.common.text.converters instead."
+    elif (importable := _mini_six.get(importable_name, ...)) is ...:
+        raise AttributeError(f"module {__name__!r} has no attribute {importable_name!r}")
+
+    _warnings.deprecate(
+        msg=f"Importing {importable_name!r} from {__name__!r} is deprecated.",
+        version="2.24",
+        help_text=help_text,
+    )
+
+    return importable
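The rewrite above relies on PEP 562: a module-level `__getattr__` runs only when a name is not found by normal lookup, which makes it a natural place to emit deprecation warnings for legacy attributes. A self-contained demonstration on a synthetic module (`legacy_text` and its attributes are invented for this sketch):

```python
# Sketch: PEP 562 module __getattr__ as a deprecation shim, demonstrated on a
# synthetic module rather than a file on disk.
import sys
import types
import warnings

mod = types.ModuleType("legacy_text")


def _module_getattr(name):
    # only reached when normal attribute lookup on the module fails
    if name == "text_type":
        warnings.warn(f"Importing {name!r} from 'legacy_text' is deprecated", DeprecationWarning)
        return str
    raise AttributeError(f"module 'legacy_text' has no attribute {name!r}")


mod.__getattr__ = _module_getattr
sys.modules["legacy_text"] = mod

import legacy_text

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert legacy_text.text_type is str       # shim resolves the legacy name
    assert caught[0].category is DeprecationWarning
```

Because the hook fires only on missing names, attributes defined normally in the module are served at full speed with no warning.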

@@ -46,6 +46,15 @@ import tempfile
 import time
 import traceback

+from collections.abc import (
+    KeysView,
+    Mapping,
+    MutableMapping,
+    Sequence,
+    MutableSequence,
+    Set,
+    MutableSet,
+)
 from functools import reduce

 try:
@@ -123,13 +132,6 @@ def _get_available_hash_algorithms():
 AVAILABLE_HASH_ALGORITHMS = _get_available_hash_algorithms()

 from ansible.module_utils.common import json as _json
-from ansible.module_utils.six.moves.collections_abc import (
-    KeysView,
-    Mapping, MutableMapping,
-    Sequence, MutableSequence,
-    Set, MutableSet,
-)
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.process import get_bin_path
 from ansible.module_utils.common.file import (
@@ -2186,6 +2188,18 @@ def get_module_path():
     return os.path.dirname(os.path.realpath(__file__))


+_mini_six = {
+    "b": lambda s: s.encode("latin-1"),
+    "PY2": False,
+    "PY3": True,
+    "text_type": str,
+    "binary_type": bytes,
+    "string_types": (str,),
+    "integer_types": (int,),
+    "iteritems": lambda d, **kw: iter(d.items(**kw)),
+}
+
+
 def __getattr__(importable_name):
     """Inject import-time deprecation warnings."""
     if importable_name == 'datetime':
@@ -2203,24 +2217,12 @@ def __getattr__(importable_name):
     elif importable_name == 'repeat':
         from itertools import repeat

         importable = repeat
-    elif importable_name in {
-        'PY2', 'PY3', 'b', 'binary_type', 'integer_types',
-        'iteritems', 'string_types', 'text_type',
-    }:
-        import importlib
-
-        importable = getattr(
-            importlib.import_module('ansible.module_utils.six'),
-            importable_name
-        )
     elif importable_name == 'map':
         importable = map
     elif importable_name == 'shlex_quote':
         importable = shlex.quote
-    else:
-        raise AttributeError(
-            f'cannot import name {importable_name !r} '
-            f"from '{__name__}' ({__file__ !s})"
-        )
+    elif (importable := _mini_six.get(importable_name, ...)) is ...:
+        raise AttributeError(f"module {__name__!r} has no attribute {importable_name!r}")

     deprecate(
         msg=f"Importing '{importable_name}' from '{__name__}' is deprecated.",

@@ -2,7 +2,7 @@
 # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
 """Collections ABC import shim.

-Use `ansible.module_utils.six.moves.collections_abc` instead, which has been available since ansible-core 2.11.
+Use `collections.abc` instead.

 This module exists only for backwards compatibility.
 """
@@ -10,7 +10,7 @@ from __future__ import annotations

 # Although this was originally intended for internal use only, it has wide adoption in collections.
 # This is due in part to sanity tests previously recommending its use over `collections` imports.
-from ansible.module_utils.six.moves.collections_abc import (  # pylint: disable=unused-import
+from collections.abc import (  # pylint: disable=unused-import
     MappingView,
     ItemsView,
     KeysView,
@@ -25,3 +25,12 @@ from collections.abc import (  # pylint: disable=unused-import
     Iterable,
     Iterator,
 )
+
+from ansible.module_utils.common import warnings as _warnings
+
+_warnings.deprecate(
+    msg="The `ansible.module_utils.common._collections_compat` module is deprecated.",
+    help_text="Use `collections.abc` from the Python standard library instead.",
+    version="2.24",
+)

@@ -6,9 +6,10 @@
 from __future__ import annotations

+from collections.abc import Hashable, Mapping, MutableMapping, Sequence  # pylint: disable=unused-import
+
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.common import warnings as _warnings
-from ansible.module_utils.six import binary_type, text_type
-from ansible.module_utils.six.moves.collections_abc import Hashable, Mapping, MutableMapping, Sequence  # pylint: disable=unused-import


 class ImmutableDict(Hashable, Mapping):
@@ -67,7 +68,7 @@ class ImmutableDict(Hashable, Mapping):

 def is_string(seq):
     """Identify whether the input has a string-like type (including bytes)."""
-    return isinstance(seq, (text_type, binary_type))
+    return isinstance(seq, (str, bytes))


 def is_iterable(seq, include_strings=False):
@@ -114,3 +115,7 @@ def count(seq):
     for elem in seq:
         counters[elem] = counters.get(elem, 0) + 1
     return counters
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "binary_type", "text_type")

@@ -7,10 +7,9 @@ from __future__ import annotations

 import re

+from collections.abc import MutableMapping
 from copy import deepcopy

-from ansible.module_utils.six.moves.collections_abc import MutableMapping


 def camel_dict_to_snake_dict(camel_dict, reversible=False, ignore_list=()):
     """

@@ -6,11 +6,13 @@
 from __future__ import annotations

 import re

+# backward compat
+from builtins import zip  # pylint: disable=unused-import
 from struct import pack
 from socket import inet_ntoa

-from ansible.module_utils.six.moves import zip

 VALID_MASKS = [2**8 - 2**i for i in range(0, 9)]

@@ -9,9 +9,19 @@ import os
 import typing as t

 from collections import deque
-from itertools import chain
+from collections.abc import (
+    KeysView,
+    Set,
+    Sequence,
+    Mapping,
+    MutableMapping,
+    MutableSet,
+    MutableSequence,
+)
+from itertools import chain  # pylint: disable=unused-import

 from ansible.module_utils.common.collections import is_iterable
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils._internal._datatag import AnsibleSerializable, AnsibleTagHelper
 from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
 from ansible.module_utils.common.warnings import warn
@@ -33,26 +43,6 @@ from ansible.module_utils.errors import (
     SubParameterTypeError,
 )
 from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
-from ansible.module_utils.six.moves.collections_abc import (
-    KeysView,
-    Set,
-    Sequence,
-    Mapping,
-    MutableMapping,
-    MutableSet,
-    MutableSequence,
-)
-from ansible.module_utils.six import (
-    binary_type,
-    integer_types,
-    string_types,
-    text_type,
-    PY2,
-    PY3,
-)
 from ansible.module_utils.common.validation import (
     check_mutually_exclusive,
     check_required_arguments,
@@ -243,7 +233,7 @@ def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprec
         if aliases is None:
             continue

-        if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)):
+        if not is_iterable(aliases) or isinstance(aliases, (bytes, str)):
             raise TypeError('internal error: aliases must be a list or tuple')

         for alias in aliases:
@@ -346,7 +336,7 @@ def _list_no_log_values(argument_spec, params):
             for sub_param in sub_parameters:
                 # Validate dict fields in case they came in as strings
-                if isinstance(sub_param, string_types):
+                if isinstance(sub_param, str):
                     sub_param = check_type_dict(sub_param)

                 if not isinstance(sub_param, Mapping):
@@ -362,7 +352,7 @@ def _return_datastructure_name(obj):
     """ Return native stringified values from datastructures.

     For use with removing sensitive values pre-jsonification."""
-    if isinstance(obj, (text_type, binary_type)):
+    if isinstance(obj, (str, bytes)):
         if obj:
             yield to_native(obj, errors='surrogate_or_strict')
         return
@@ -375,7 +365,7 @@ def _return_datastructure_name(obj):
     elif obj is None or isinstance(obj, bool):
         # This must come before int because bools are also ints
         return
-    elif isinstance(obj, tuple(list(integer_types) + [float])):
+    elif isinstance(obj, (int, float)):
         yield to_native(obj, nonstring='simplerepr')
     else:
         raise TypeError('Unknown parameter type: %s' % (type(obj)))
@@ -413,16 +403,13 @@ def _remove_values_conditions(value, no_log_strings, deferred_removals):
     """
     original_value = value

-    if isinstance(value, (text_type, binary_type)):
+    if isinstance(value, (str, bytes)):
         # Need native str type
         native_str_value = value
-        if isinstance(value, text_type):
+        if isinstance(value, str):
             value_is_text = True
-            if PY2:
-                native_str_value = to_bytes(value, errors='surrogate_or_strict')
-        elif isinstance(value, binary_type):
+        elif isinstance(value, bytes):
             value_is_text = False
-            if PY3:
-                native_str_value = to_text(value, errors='surrogate_or_strict')
+            native_str_value = to_text(value, errors='surrogate_or_strict')

         if native_str_value in no_log_strings:
@@ -430,9 +417,9 @@ def _remove_values_conditions(value, no_log_strings, deferred_removals):
         for omit_me in no_log_strings:
             native_str_value = native_str_value.replace(omit_me, '*' * 8)

-        if value_is_text and isinstance(native_str_value, binary_type):
+        if value_is_text and isinstance(native_str_value, bytes):
             value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
-        elif not value_is_text and isinstance(native_str_value, text_type):
+        elif not value_is_text and isinstance(native_str_value, str):
             value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
         else:
             value = native_str_value
@ -514,7 +501,7 @@ def _set_defaults(argument_spec, parameters, set_default=True):
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals): def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """ """ Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)): if isinstance(value, (str, bytes)):
return value return value
if isinstance(value, Sequence): if isinstance(value, Sequence):
@ -541,7 +528,7 @@ def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_remov
deferred_removals.append((value, new_value)) deferred_removals.append((value, new_value))
return new_value return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))): if isinstance(value, (int, float, bool, NoneType)):
return value return value
if isinstance(value, (datetime.datetime, datetime.date, datetime.time)): if isinstance(value, (datetime.datetime, datetime.date, datetime.time)):
@ -560,8 +547,8 @@ def _validate_elements(wanted_type, parameter, values, options_context=None, err
# Get param name for strings so we can later display this value in a useful error message if needed # Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers # Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {} kwargs = {}
if wanted_element_type == 'str' and isinstance(wanted_type, string_types): if wanted_element_type == 'str' and isinstance(wanted_type, str):
if isinstance(parameter, string_types): if isinstance(parameter, str):
kwargs['param'] = parameter kwargs['param'] = parameter
elif isinstance(parameter, dict): elif isinstance(parameter, dict):
kwargs['param'] = list(parameter.keys())[0] kwargs['param'] = list(parameter.keys())[0]
@ -620,7 +607,7 @@ def _validate_argument_types(argument_spec, parameters, prefix='', options_conte
# Get param name for strings so we can later display this value in a useful error message if needed # Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers # Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {} kwargs = {}
if wanted_name == 'str' and isinstance(wanted_type, string_types): if wanted_name == 'str' and isinstance(wanted_type, str):
kwargs['param'] = list(parameters.keys())[0] kwargs['param'] = list(parameters.keys())[0]
# Get the name of the parent key if this is a nested option # Get the name of the parent key if this is a nested option
@ -659,7 +646,7 @@ def _validate_argument_values(argument_spec, parameters, options_context=None, e
if choices is None: if choices is None:
continue continue
if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)): if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (bytes, str)):
if param in parameters: if param in parameters:
# Allow one or more when type='list' param with choices # Allow one or more when type='list' param with choices
if isinstance(parameters[param], list): if isinstance(parameters[param], list):
@ -745,7 +732,7 @@ def _validate_sub_spec(
options_context.append(param) options_context.append(param)
# Make sure we can iterate over the elements # Make sure we can iterate over the elements
if not isinstance(parameters[param], Sequence) or isinstance(parameters[param], string_types): if not isinstance(parameters[param], Sequence) or isinstance(parameters[param], str):
elements = [parameters[param]] elements = [parameters[param]]
else: else:
elements = parameters[param] elements = parameters[param]
@ -940,3 +927,7 @@ def remove_values(value, no_log_strings):
raise TypeError('Unknown container type encountered when removing private values from output') raise TypeError('Unknown container type encountered when removing private values from output')
return new_value return new_value
def __getattr__(importable_name):
return _no_six.deprecate(importable_name, __name__, "binary_type", "text_type", "integer_types", "string_types", "PY2", "PY3")

@@ -8,11 +8,8 @@ from __future__ import annotations
 import codecs
 import json

-from ansible.module_utils.six import (
-    binary_type,
-    iteritems,
-    text_type,
-)
+from ansible.module_utils.compat import typing as _t
+from ansible.module_utils._internal import _no_six

 try:
     codecs.lookup_error('surrogateescape')
@@ -25,8 +22,54 @@ _COMPOSED_ERROR_HANDLERS = frozenset((None, 'surrogate_or_replace',
                                       'surrogate_or_strict',
                                       'surrogate_then_replace'))

+_T = _t.TypeVar('_T')
+
+_NonStringPassthru: _t.TypeAlias = _t.Literal['passthru']
+_NonStringOther: _t.TypeAlias = _t.Literal['simplerepr', 'empty', 'strict']
+_NonStringAll: _t.TypeAlias = _t.Union[_NonStringPassthru, _NonStringOther]

-def to_bytes(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
+
+@_t.overload
+def to_bytes(
+    obj: object,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+) -> bytes: ...
+
+
+@_t.overload
+def to_bytes(
+    obj: bytes | str,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringPassthru = 'passthru',
+) -> bytes: ...
+
+
+@_t.overload
+def to_bytes(
+    obj: _T,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringPassthru = 'passthru',
+) -> _T: ...
+
+
+@_t.overload
+def to_bytes(
+    obj: object,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringOther = 'simplerepr',
+) -> bytes: ...
+
+
+def to_bytes(
+    obj: _T,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringAll = 'simplerepr'
+) -> _T | bytes:
     """Make sure that a string is a byte string

     :arg obj: An object to make sure is a byte string. In most cases this
@@ -84,13 +127,13 @@ def to_bytes(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
     string is valid in the specified encoding. If it's important that the
     byte string is in the specified encoding do::

-        encoded_string = to_bytes(to_text(input_string, 'latin-1'), 'utf-8')
+        encoded_string = to_bytes(to_text(input_string, encoding='latin-1'), encoding='utf-8')

     .. version_changed:: 2.3

         Added the ``surrogate_then_replace`` error handler and made it the default error handler.
     """
-    if isinstance(obj, binary_type):
+    if isinstance(obj, bytes):
         return obj

     # We're given a text string
@@ -104,7 +147,7 @@ def to_bytes(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
         else:
             errors = 'replace'

-    if isinstance(obj, text_type):
+    if isinstance(obj, str):
         try:
             # Try this first as it's the fastest
             return obj.encode(encoding, errors)
@@ -129,21 +172,60 @@ def to_bytes(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
             value = repr(obj)
         except UnicodeError:
             # Giving up
-            return to_bytes('')
+            return b''
     elif nonstring == 'passthru':
         return obj
     elif nonstring == 'empty':
-        # python2.4 doesn't have b''
-        return to_bytes('')
+        return b''
     elif nonstring == 'strict':
         raise TypeError('obj must be a string type')
     else:
         raise TypeError('Invalid value %s for to_bytes\' nonstring parameter' % nonstring)

-    return to_bytes(value, encoding, errors)
+    return to_bytes(value, encoding=encoding, errors=errors)


-def to_text(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
+@_t.overload
+def to_text(
+    obj: object,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+) -> str: ...
+
+
+@_t.overload
+def to_text(
+    obj: str | bytes,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringPassthru = 'passthru',
+) -> str: ...
+
+
+@_t.overload
+def to_text(
+    obj: _T,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringPassthru = 'passthru',
+) -> _T: ...
+
+
+@_t.overload
+def to_text(
+    obj: object,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringOther = 'simplerepr',
+) -> str: ...
+
+
+def to_text(
+    obj: _T,
+    encoding: str = 'utf-8',
+    errors: str | None = None,
+    nonstring: _NonStringAll = 'simplerepr'
+) -> _T | str:
     """Make sure that a string is a text string

     :arg obj: An object to make sure is a text string. In most cases this
@@ -194,7 +276,7 @@ def to_text(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
         Added the surrogate_then_replace error handler and made it the default error handler.
     """
-    if isinstance(obj, text_type):
+    if isinstance(obj, str):
         return obj

     if errors in _COMPOSED_ERROR_HANDLERS:
@@ -205,7 +287,7 @@ def to_text(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
         else:
             errors = 'replace'

-    if isinstance(obj, binary_type):
+    if isinstance(obj, bytes):
         # Note: We don't need special handling for surrogate_then_replace
         # because all bytes will either be made into surrogates or are valid
         # to decode.
@@ -221,17 +303,17 @@ def to_text(obj, encoding='utf-8', errors=None, nonstring='simplerepr'):
             value = repr(obj)
         except UnicodeError:
             # Giving up
-            return u''
+            return ''
     elif nonstring == 'passthru':
         return obj
     elif nonstring == 'empty':
-        return u''
+        return ''
     elif nonstring == 'strict':
         raise TypeError('obj must be a string type')
     else:
         raise TypeError('Invalid value %s for to_text\'s nonstring parameter' % nonstring)

-    return to_text(value, encoding, errors)
+    return to_text(value, encoding=encoding, errors=errors)


 to_native = to_text
@@ -259,10 +341,10 @@ def container_to_bytes(d, encoding='utf-8', errors='surrogate_or_strict'):
     """
     # DTFIX-FUTURE: deprecate
-    if isinstance(d, text_type):
+    if isinstance(d, str):
         return to_bytes(d, encoding=encoding, errors=errors)
     elif isinstance(d, dict):
-        return dict(container_to_bytes(o, encoding, errors) for o in iteritems(d))
+        return dict(container_to_bytes(o, encoding, errors) for o in d.items())
     elif isinstance(d, list):
         return [container_to_bytes(o, encoding, errors) for o in d]
     elif isinstance(d, tuple):
@@ -279,14 +361,18 @@ def container_to_text(d, encoding='utf-8', errors='surrogate_or_strict'):
     """
     # DTFIX-FUTURE: deprecate
-    if isinstance(d, binary_type):
+    if isinstance(d, bytes):
         # Warning, can traceback
         return to_text(d, encoding=encoding, errors=errors)
     elif isinstance(d, dict):
-        return dict(container_to_text(o, encoding, errors) for o in iteritems(d))
+        return dict(container_to_text(o, encoding, errors) for o in d.items())
     elif isinstance(d, list):
         return [container_to_text(o, encoding, errors) for o in d]
     elif isinstance(d, tuple):
         return tuple(container_to_text(o, encoding, errors) for o in d)
     else:
         return d
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "binary_type", "text_type", "iteritems")
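The typed overloads above only annotate the long-standing runtime behavior. A reduced, self-contained sketch of the `to_text` decision order (omitting the composed error handlers such as `surrogate_then_replace` that the real function supports):

```python
# Reduced sketch of to_text() after the rewrite: str passes through, bytes
# are decoded, and the 'nonstring' policy decides everything else.
def to_text(obj, encoding='utf-8', errors='strict', nonstring='simplerepr'):
    if isinstance(obj, str):
        return obj
    if isinstance(obj, bytes):
        return obj.decode(encoding, errors)
    if nonstring == 'passthru':
        return obj  # non-string returned unchanged (the _T -> _T overload)
    if nonstring == 'empty':
        return ''   # native literal, no more u''
    if nonstring == 'strict':
        raise TypeError('obj must be a string type')
    if nonstring == 'simplerepr':
        return to_text(repr(obj), encoding=encoding, errors=errors)
    raise TypeError("Invalid value %s for to_text's nonstring parameter" % nonstring)
```

The four overload shapes correspond to these branches: `passthru` is the only policy that can return a non-string, which is why it alone gets the `_T -> _T` signature.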

@@ -6,7 +6,7 @@ from __future__ import annotations

 import re

-from ansible.module_utils.six import iteritems
+from ansible.module_utils._internal import _no_six

 SIZE_RANGES = {
     'Y': 1 << 80,
@@ -117,7 +117,7 @@ def bytes_to_human(size, isbits=False, unit=None):
         base = 'bits'
         suffix = ''

-    for suffix, limit in sorted(iteritems(SIZE_RANGES), key=lambda item: -item[1]):
+    for suffix, limit in sorted(SIZE_RANGES.items(), key=lambda item: -item[1]):
         if (unit is None and size >= limit) or unit is not None and unit.upper() == suffix[0]:
             break
@@ -127,3 +127,7 @@ def bytes_to_human(size, isbits=False, unit=None):
         suffix = base

     return '%.2f %s' % (size / limit, suffix)
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "iteritems")
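The `bytes_to_human` loop above walks `SIZE_RANGES` largest-first and stops at the first limit the size reaches. A trimmed sketch with a simplified, illustrative suffix table (the real one spans up to `'Y'` and handles bits and explicit units):

```python
# Trimmed sketch of the bytes_to_human() loop: iterate the size table
# largest-first and keep the first suffix whose limit the size reaches.
SIZE_RANGES = {'T': 1 << 40, 'G': 1 << 30, 'M': 1 << 20, 'K': 1 << 10, 'B': 1}


def bytes_to_human(size):
    suffix, limit = 'B', 1
    for suffix, limit in sorted(SIZE_RANGES.items(), key=lambda item: -item[1]):
        if size >= limit:
            break
    return '%.2f %s' % (size / limit, suffix)
```

Sorting by `-item[1]` rather than relying on dict insertion order keeps the function correct even if the table is edited out of order.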

@@ -10,15 +10,13 @@ import os
 import re

 from ast import literal_eval
+
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.common import json as _common_json
 from ansible.module_utils.common.text.converters import to_native
 from ansible.module_utils.common.collections import is_iterable
 from ansible.module_utils.common.text.formatters import human_to_bytes
 from ansible.module_utils.common.warnings import deprecate
 from ansible.module_utils.parsing.convert_bool import boolean
-from ansible.module_utils.six import (
-    string_types,
-)


 def count_terms(terms, parameters):
@@ -43,7 +41,7 @@ def safe_eval(value, locals=None, include_exceptions=False):
         version="2.21",
     )
     # do not allow method calls to modules
-    if not isinstance(value, string_types):
+    if not isinstance(value, str):
         # already templated to a datavaluestructure, perhaps?
         if include_exceptions:
             return (value, None)
@@ -194,7 +192,7 @@ def check_required_by(requirements, parameters, options_context=None):
         if key not in parameters or parameters[key] is None:
             continue
         # Support strings (single-item lists)
-        if isinstance(value, string_types):
+        if isinstance(value, str):
             value = [value]
         if missing := [required for required in value if required not in parameters or parameters[required] is None]:
@@ -373,10 +371,13 @@ def check_type_str(value, allow_conversion=True, param=None, prefix=''):
     :returns: Original value if it is a string, the value converted to a string
         if allow_conversion=True, or raises a TypeError if allow_conversion=False.
     """
-    if isinstance(value, string_types):
+    if isinstance(value, str):
         return value

-    if allow_conversion and value is not None:
+    if value is None:
+        return ''  # approximate pre-2.19 templating None->empty str equivalency here for backward compatibility
+
+    if allow_conversion:
         return to_native(value, errors='surrogate_or_strict')

     msg = "'{0!r}' is not a string and conversion is not allowed".format(value)
@@ -403,7 +404,7 @@ def check_type_list(value):
         return value

     # DTFIX-FUTURE: deprecate legacy comma split functionality, eventually replace with `_check_type_list_strict`
-    if isinstance(value, string_types):
+    if isinstance(value, str):
         return value.split(",")
     elif isinstance(value, int) or isinstance(value, float):
         return [str(value)]
@@ -431,7 +432,7 @@ def check_type_dict(value):
     if isinstance(value, dict):
         return value

-    if isinstance(value, string_types):
+    if isinstance(value, str):
         if value.startswith("{"):
             try:
                 return json.loads(value)
@@ -494,7 +495,7 @@ def check_type_bool(value):
     if isinstance(value, bool):
         return value

-    if isinstance(value, string_types) or isinstance(value, (int, float)):
+    if isinstance(value, str) or isinstance(value, (int, float)):
         return boolean(value)

     raise TypeError('%s cannot be converted to a bool' % type(value))
@@ -594,3 +595,7 @@ def check_type_jsonarg(value):
         return json.dumps(value, cls=_common_json._get_legacy_encoder(), _decode_bytes=True)
     raise TypeError('%s cannot be converted to a json string' % type(value))
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "string_types")
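The `check_type_str` hunk above splits the old combined condition so that `None` now short-circuits to the empty string before the conversion check. The new decision order, sketched with `str()` standing in for Ansible's `to_native`:

```python
# Sketch of the revised check_type_str() ordering: None maps to '' (restoring
# pre-2.19 templating equivalency) before allow_conversion is consulted.
def check_type_str(value, allow_conversion=True):
    if isinstance(value, str):
        return value
    if value is None:
        return ''  # pre-2.19 templating rendered None as an empty string
    if allow_conversion:
        return str(value)
    raise TypeError("'%r' is not a string and conversion is not allowed" % value)
```

Under the old condition `allow_conversion and value is not None`, a `None` with `allow_conversion=True` fell through to the error path; the reordering makes `None` acceptable regardless of the conversion flag.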

@@ -36,9 +36,10 @@ import struct
 import uuid

 from functools import partial
+
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.common.text.converters import to_bytes, to_text
 from ansible.module_utils.common.json import _get_legacy_encoder
-from ansible.module_utils.six import iteritems


 def write_to_stream(stream, obj):
@@ -95,7 +96,7 @@ class ConnectionError(Exception):

     def __init__(self, message, *args, **kwargs):
         super(ConnectionError, self).__init__(message)
-        for k, v in iteritems(kwargs):
+        for k, v in kwargs.items():
             setattr(self, k, v)
@@ -149,7 +150,7 @@ class Connection(object):
                 raise ConnectionError(
                     "Unable to decode JSON from response to {0}. Received '{1}'.".format(name, out)
                 )
-            params = [repr(arg) for arg in args] + ['{0}={1!r}'.format(k, v) for k, v in iteritems(kwargs)]
+            params = [repr(arg) for arg in args] + ['{0}={1!r}'.format(k, v) for k, v in kwargs.items()]
             params = ', '.join(params)
             raise ConnectionError(
                 "Unable to decode JSON from response to {0}({1}). Received '{2}'.".format(name, params, out)
@@ -200,3 +201,7 @@ class Connection(object):
             sf.close()

         return to_text(response, errors='surrogate_or_strict')
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "iteritems")

@@ -24,13 +24,13 @@ import re
 import sys
 import time

+from ansible.module_utils._internal import _no_six
 from ansible.module_utils._internal._concurrent import _futures
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.text.converters import to_text
 from ansible.module_utils.common.text.formatters import bytes_to_human
 from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
 from ansible.module_utils.facts.utils import get_file_content, get_file_lines, get_mount_size
-from ansible.module_utils.six import iteritems

 # import this as a module to ensure we get the same module instance
 from ansible.module_utils.facts import timeout
@@ -653,7 +653,7 @@ class LinuxHardware(Hardware):
                         retval[target].add(entry)
                 except OSError:
                     continue
-            return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
+            return dict((k, list(sorted(v))) for (k, v) in retval.items())
         except OSError:
             return {}
@@ -665,7 +665,7 @@ class LinuxHardware(Hardware):
                 device = elements[3]
                 target = elements[5]
                 retval[target].add(device)
-            return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
+            return dict((k, list(sorted(v))) for (k, v) in retval.items())
         except OSError:
             return {}
@@ -750,7 +750,7 @@ class LinuxHardware(Hardware):
             d = {}
             d['virtual'] = virtual
             d['links'] = {}
-            for (link_type, link_values) in iteritems(links):
+            for (link_type, link_values) in links.items():
                 d['links'][link_type] = link_values.get(block, [])
             diskname = os.path.basename(sysdir)
             for key in ['vendor', 'model', 'sas_address', 'sas_device_handle']:
@@ -801,7 +801,7 @@ class LinuxHardware(Hardware):
                 part_sysdir = sysdir + "/" + partname

                 part['links'] = {}
-                for (link_type, link_values) in iteritems(links):
+                for (link_type, link_values) in links.items():
                     part['links'][link_type] = link_values.get(partname, [])

                 part['start'] = get_file_content(part_sysdir + "/start", 0)
@@ -890,7 +890,8 @@ class LinuxHardware(Hardware):
                     'size_g': items[-2],
                     'free_g': items[-1],
                     'num_lvs': items[2],
-                    'num_pvs': items[1]
+                    'num_pvs': items[1],
+                    'lvs': {},
                 }

         lvs_path = self.module.get_bin_path('lvs')
@@ -901,7 +902,18 @@ class LinuxHardware(Hardware):
             rc, lv_lines, err = self.module.run_command('%s %s' % (lvs_path, lvm_util_options))
             for lv_line in lv_lines.splitlines():
                 items = lv_line.strip().split(',')
-                lvs[items[0]] = {'size_g': items[3], 'vg': items[1]}
+                vg_name = items[1]
+                lv_name = items[0]
+                # The LV name is only unique per VG, so the top level fact lvs can be misleading.
+                # TODO: deprecate lvs in favor of vgs
+                lvs[lv_name] = {'size_g': items[3], 'vg': vg_name}
+                try:
+                    vgs[vg_name]['lvs'][lv_name] = {'size_g': items[3]}
+                except KeyError:
+                    self.module.warn(
+                        "An LVM volume group was created while gathering LVM facts, "
+                        "and is not included in ansible_facts['vgs']."
+                    )

         pvs_path = self.module.get_bin_path('pvs')
         # pvs fields: PV VG #Fmt #Attr PSize PFree
@@ -925,3 +937,7 @@ class LinuxHardwareCollector(HardwareCollector):
     _fact_class = LinuxHardware
     required_facts = set(['platform'])
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "iteritems")
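The new per-VG `lvs` subkey exists because LV names are only unique within a volume group, so the flat top-level `lvs` fact silently drops duplicates. A hypothetical helper (`nest_lvs` is not part of the diff; field positions mirror the comma-separated `lvs` output parsed above) showing the collision:

```python
# Hypothetical helper illustrating why ansible_facts['vgs'][vg]['lvs'] was
# added: two LVs named 'data' in different VGs stay distinct in the nested
# form but collide in the flat top-level 'lvs' fact.
def nest_lvs(lv_lines, vgs):
    lvs = {}
    for line in lv_lines:
        items = line.strip().split(',')
        lv_name, vg_name, size_g = items[0], items[1], items[3]
        lvs[lv_name] = {'size_g': size_g, 'vg': vg_name}  # last one wins
        if vg_name in vgs:  # the real code warns when the VG is unknown
            vgs[vg_name].setdefault('lvs', {})[lv_name] = {'size_g': size_g}
    return lvs, vgs
```

This is the scoping the changelog fragment describes: the nested fact is complete per VG, while the top-level fact deduplicates by name and may be incomplete.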

@@ -19,7 +19,7 @@ import os
 import re
 import time

-from ansible.module_utils.six.moves import reduce
+from functools import reduce

 from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
 from ansible.module_utils.facts.timeout import TimeoutError, timeout

@@ -18,12 +18,13 @@ from __future__ import annotations
 import re
 import time

+from functools import reduce
+
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.text.formatters import bytes_to_human
 from ansible.module_utils.facts.utils import get_file_content, get_mount_size
 from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
 from ansible.module_utils.facts import timeout
-from ansible.module_utils.six.moves import reduce


 class SunOSHardware(Hardware):

@@ -7,7 +7,7 @@ import ansible.module_utils.compat.typing as t

 from abc import ABCMeta, abstractmethod

-from ansible.module_utils.six import with_metaclass
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.basic import missing_required_lib
 from ansible.module_utils.common.process import get_bin_path
 from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
@@ -19,7 +19,7 @@ def get_all_pkg_managers():
     return {obj.__name__.lower(): obj for obj in get_all_subclasses(PkgMgr) if obj not in (CLIMgr, LibMgr, RespawningLibMgr)}


-class PkgMgr(with_metaclass(ABCMeta, object)):  # type: ignore[misc]
+class PkgMgr(metaclass=ABCMeta):

     @abstractmethod
     def is_available(self, handle_exceptions):
@@ -125,3 +125,7 @@ class CLIMgr(PkgMgr):
             if not handle_exceptions:
                 raise
         return found
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "with_metaclass")
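The `with_metaclass` helper removed above was a Python 2/3 shim; on Python 3 it is equivalent to the native `metaclass=` keyword. A minimal demonstration (`AptMgr` is an illustrative subclass, not from the diff):

```python
# Native-metaclass equivalent of six.with_metaclass(ABCMeta, object):
# abstract classes refuse instantiation until every abstract method is defined.
from abc import ABCMeta, abstractmethod


class PkgMgr(metaclass=ABCMeta):

    @abstractmethod
    def is_available(self):
        ...


class AptMgr(PkgMgr):
    # A concrete subclass must implement the abstract API to be instantiable.
    def is_available(self):
        return True
```

Besides dropping the import, the native form no longer needs the `# type: ignore[misc]` comment, since type checkers understand `metaclass=` directly.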

@@ -18,8 +18,7 @@ from __future__ import annotations
 import os
 import typing as t

-from ansible.module_utils.six import iteritems
-
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.facts.collector import BaseFactCollector

@@ -31,7 +30,11 @@ class EnvFactCollector(BaseFactCollector):
         env_facts = {}
         env_facts['env'] = {}

-        for k, v in iteritems(os.environ):
+        for k, v in os.environ.items():
             env_facts['env'][k] = v

         return env_facts
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "iteritems")

@@ -3,16 +3,18 @@

 from __future__ import annotations

+import configparser
 import glob
 import json
 import os
 import stat
 import typing as t

+from io import StringIO
+
 from ansible.module_utils.common.text.converters import to_text
 from ansible.module_utils.facts.utils import get_file_content
 from ansible.module_utils.facts.collector import BaseFactCollector
-from ansible.module_utils.six.moves import configparser, StringIO


 class LocalFactCollector(BaseFactCollector):

@@ -5,7 +5,7 @@ from __future__ import annotations

 import collections.abc as c

-from ansible.module_utils.six import binary_type, text_type
+from ansible.module_utils._internal import _no_six
 from ansible.module_utils.common.text.converters import to_text

@@ -20,7 +20,7 @@ def boolean(value, strict=True):
     normalized_value = value
-    if isinstance(value, (text_type, binary_type)):
+    if isinstance(value, (str, bytes)):
         normalized_value = to_text(value, errors='surrogate_or_strict').lower().strip()

     if not isinstance(value, c.Hashable):
@@ -32,3 +32,7 @@ def boolean(value, strict=True):
         return False

     raise TypeError("The value '%s' is not a valid boolean. Valid booleans include: %s" % (to_text(value), ', '.join(repr(i) for i in BOOLEANS)))
+
+
+def __getattr__(importable_name):
+    return _no_six.deprecate(importable_name, __name__, "binary_type", "text_type")
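`boolean()` above normalizes string input before membership testing. A reduced sketch (the truthy/falsy sets are abbreviated from Ansible's documented vocabulary, and the `Hashable` guard is omitted; treat this as illustrative only):

```python
# Reduced sketch of boolean(): strings and bytes are lowercased and stripped
# before being tested against the truthy/falsy vocabularies.
BOOLEANS_TRUE = frozenset(('y', 'yes', 'on', '1', 'true', 't', 1, 1.0, True))
BOOLEANS_FALSE = frozenset(('n', 'no', 'off', '0', 'false', 'f', 0, 0.0, False))


def boolean(value, strict=True):
    if isinstance(value, bool):
        return value
    normalized = value
    if isinstance(value, (str, bytes)):
        normalized = value.decode() if isinstance(value, bytes) else value
        normalized = normalized.lower().strip()
    if normalized in BOOLEANS_TRUE:
        return True
    if normalized in BOOLEANS_FALSE or not strict:
        return False
    raise TypeError("The value '%s' is not a valid boolean." % value)
```

With `strict=False`, anything outside both vocabularies is treated as falsy instead of raising, which matches the `return False` fall-through shown in the hunk.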

@@ -36,7 +36,6 @@ import select
 import shlex
 import subprocess

-from ansible.module_utils.six import b
 from ansible.module_utils.common.text.converters import to_bytes, to_text

@@ -200,7 +199,7 @@ def daemonize(module, cmd):
             fds = [p.stdout, p.stderr]

             # loop reading output till it is done
-            output = {p.stdout: b(""), p.stderr: b("")}
+            output = {p.stdout: b"", p.stderr: b""}
             while fds:
                 rfd, wfd, efd = select.select(fds, [], fds, 1)
                 if (rfd + wfd + efd) or p.poll() is None:
@@ -234,7 +233,7 @@ def daemonize(module, cmd):
     os.waitpid(pid, 0)

     # Grab response data after child finishes
-    return_data = b("")
+    return_data = b""
     while True:
         rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]])
         if pipe[0] in rfd:

@@ -383,7 +383,6 @@ from ansible.module_utils.common.file import S_IRWXU_RXG_RXO
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
 from ansible.module_utils.common.text.converters import to_native, to_text
-from ansible.module_utils.six import string_types
 from ansible.module_utils.urls import fetch_file

 DPKG_OPTIONS = 'force-confdef,force-confold'
@@ -633,7 +632,7 @@ def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
     if pkgspec:
         for pkgspec_pattern in pkgspec:
-            if not isinstance(pkgspec_pattern, string_types):
+            if not isinstance(pkgspec_pattern, str):
                 m.fail_json(msg="Invalid type for package name, expected string but got %s" % type(pkgspec_pattern))
             pkgname_pattern, version_cmp, version = package_split(pkgspec_pattern)

@@ -508,7 +508,7 @@ class UbuntuSourcesList(SourcesList):
             try:
                 rc, out, err = self.module.run_command([self.gpg_bin, '--list-packets', key_file])
             except OSError as ex:
-                self.debug(f"Could check key against file {key_file!r}: {ex}")
+                self.module.debug(f"Could check key against file {key_file!r}: {ex}")
                 continue
             if key_fingerprint in out:

@@ -131,7 +131,6 @@ import re
 import tempfile

 from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.six import b, indexbytes
 from ansible.module_utils.common.text.converters import to_native
@@ -141,6 +140,7 @@ def assemble_from_fragments(src_path, delimiter=None, compiled_regexp=None, igno
     tmp = os.fdopen(tmpfd, 'wb')
     delimit_me = False
     add_newline = False
+    b_linesep = os.linesep.encode()

     for f in sorted(os.listdir(src_path)):
         if compiled_regexp and not compiled_regexp.search(f):
@@ -153,7 +153,7 @@ def assemble_from_fragments(src_path, delimiter=None, compiled_regexp=None, igno
         # always put a newline between fragments if the previous fragment didn't end with a newline.
         if add_newline:
-            tmp.write(b('\n'))
+            tmp.write(b_linesep)

         # delimiters should only appear between fragments
         if delimit_me:
@@ -163,16 +163,12 @@ def assemble_from_fragments(src_path, delimiter=None, compiled_regexp=None, igno
                 tmp.write(delimiter)

                 # always make sure there's a newline after the
                 # delimiter, so lines don't run together
-
-                # byte indexing differs on Python 2 and 3,
-                # use indexbytes for compat
-                # chr(10) == '\n'
-                if indexbytes(delimiter, -1) != 10:
-                    tmp.write(b('\n'))
+                if not delimiter.endswith(b_linesep):
+                    tmp.write(b_linesep)

         tmp.write(fragment_content)
         delimit_me = True
-        if fragment_content.endswith(b('\n')):
+        if fragment_content.endswith(b_linesep):
             add_newline = False
         else:
             add_newline = True

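The assemble hunk replaces byte-index tricks (`indexbytes(delimiter, -1) != 10`) with a plain `bytes.endswith` check against the platform separator. A minimal sketch of that check in isolation (the helper name is illustrative, not part of the module):

```python
import os


def needs_trailing_newline(fragment: bytes) -> bool:
    """Return True when a line separator must be written after this fragment."""
    b_linesep = os.linesep.encode()  # b'\n' on POSIX, b'\r\n' on Windows
    return not fragment.endswith(b_linesep)
```

`endswith` works on any separator length, which is what makes the old single-byte comparison against `chr(10)` unnecessary.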
@@ -102,6 +102,13 @@ options:
     type: bool
     default: no
     version_added: '2.16'
+  encoding:
+    description:
+      - The character set in which the target file is encoded.
+      - For a list of available built-in encodings, see U(https://docs.python.org/3/library/codecs.html#standard-encodings)
+    type: str
+    default: utf-8
+    version_added: '2.20'
 notes:
   - When using C(with_*) loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.
   - As of Ansible 2.3, the O(dest) option has been changed to O(path) as default, but O(dest) still works as well.
@@ -192,15 +199,16 @@ EXAMPLES = r"""
 import re
 import os
 import tempfile
-from ansible.module_utils.six import b
 from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.common.text.converters import to_bytes, to_native
+from ansible.module_utils.common.text.converters import to_native


-def write_changes(module, contents, path):
+def write_changes(module, contents, path, encoding=None):

     tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
-    with os.fdopen(tmpfd, 'wb') as tf:
+    # newline param set to translate newline sequences with system default line separator
+    with os.fdopen(tmpfd, 'w', encoding=encoding, newline=None) as tf:
         tf.write(contents)

     validate = module.params.get('validate', None)
@@ -246,6 +254,7 @@ def main():
             marker_end=dict(type='str', default='END'),
             append_newline=dict(type='bool', default=False),
             prepend_newline=dict(type='bool', default=False),
+            encoding=dict(type='str', default='utf-8'),
         ),
         mutually_exclusive=[['insertbefore', 'insertafter']],
         add_file_common_args=True,
@@ -254,6 +263,8 @@ def main():

     params = module.params
     path = params['path']
+    encoding = module.params.get('encoding', None)
+
     if os.path.isdir(path):
         module.fail_json(rc=256,
                          msg='Path %s is a directory !' % path)
@@ -274,7 +285,8 @@ def main():
         original = None
         lines = []
     else:
-        with open(path, 'rb') as f:
+        # newline param set to preserve newline sequences read from file
+        with open(path, 'r', encoding=encoding, newline='') as f:
             original = f.read()
         lines = original.splitlines(True)
@@ -288,10 +300,12 @@ def main():
     insertbefore = params['insertbefore']
     insertafter = params['insertafter']
-    block = to_bytes(params['block'])
-    marker = to_bytes(params['marker'])
+    block = params['block']
+    marker = params['marker']
     present = params['state'] == 'present'
-    blank_line = [b(os.linesep)]
+
+    line_separator = os.linesep
+    blank_line = [line_separator]

     if not present and not path_exists:
         module.exit_json(changed=False, msg="File %s not present" % path)
@@ -300,17 +314,19 @@ def main():
         insertafter = 'EOF'

     if insertafter not in (None, 'EOF'):
-        insertre = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
+        insertre = re.compile(insertafter)
     elif insertbefore not in (None, 'BOF'):
-        insertre = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
+        insertre = re.compile(insertbefore)
     else:
         insertre = None

-    marker0 = re.sub(b(r'{mark}'), b(params['marker_begin']), marker) + b(os.linesep)
-    marker1 = re.sub(b(r'{mark}'), b(params['marker_end']), marker) + b(os.linesep)
+    marker0 = re.sub(r'{mark}', params['marker_begin'], marker) + os.linesep
+    marker1 = re.sub(r'{mark}', params['marker_end'], marker) + os.linesep

     if present and block:
-        if not block.endswith(b(os.linesep)):
-            block += b(os.linesep)
+        if not block.endswith(os.linesep):
+            block += os.linesep
         blocklines = [marker0] + block.splitlines(True) + [marker1]
     else:
         blocklines = []
@@ -329,9 +345,9 @@ def main():
             match = insertre.search(original)
             if match:
                 if insertafter:
-                    n0 = to_native(original).count('\n', 0, match.end())
+                    n0 = original.count('\n', 0, match.end())
                 elif insertbefore:
-                    n0 = to_native(original).count('\n', 0, match.start())
+                    n0 = original.count('\n', 0, match.start())
         else:
             for i, line in enumerate(lines):
                 if insertre.search(line):
@@ -352,15 +368,15 @@ def main():

     # Ensure there is a line separator before the block of lines to be inserted
     if n0 > 0:
-        if not lines[n0 - 1].endswith(b(os.linesep)):
-            lines[n0 - 1] += b(os.linesep)
+        if not lines[n0 - 1].endswith(os.linesep):
+            lines[n0 - 1] += os.linesep

     # Before the block: check if we need to prepend a blank line
     # If yes, we need to add the blank line if we are not at the beginning of the file
     # and the previous line is not a blank line
     # In both cases, we need to shift by one on the right the inserting position of the block
     if params['prepend_newline'] and present:
-        if n0 != 0 and lines[n0 - 1] != b(os.linesep):
+        if n0 != 0 and lines[n0 - 1] != os.linesep:
             lines[n0:n0] = blank_line
             n0 += 1
@@ -372,13 +388,13 @@ def main():
     # and the line right after is not a blank line
     if params['append_newline'] and present:
         line_after_block = n0 + len(blocklines)
-        if line_after_block < len(lines) and lines[line_after_block] != b(os.linesep):
+        if line_after_block < len(lines) and lines[line_after_block] != os.linesep:
             lines[line_after_block:line_after_block] = blank_line

     if lines:
-        result = b''.join(lines)
+        result = ''.join(lines)
     else:
-        result = b''
+        result = ''

     if module._diff:
         diff['after'] = result
@@ -402,7 +418,7 @@ def main():
         backup_file = module.backup_local(path)
     # We should always follow symlinks so that we change the real file
     real_path = os.path.realpath(params['path'])
-    write_changes(module, result, real_path)
+    write_changes(module, result, real_path, encoding)

     if module.check_mode and not path_exists:
         module.exit_json(changed=changed, msg=msg, diff=diff)

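The blockinfile rewrite above moves from binary I/O to text I/O with an explicit convention: read with `newline=''` so the file's own line endings survive untouched, write with `newline=None` so `'\n'` is translated to the platform separator. A small round-trip sketch of those two `open` modes (the helper and file handling are illustrative only, not the module's code):

```python
import os
import tempfile


def roundtrip(text: str, encoding: str = 'utf-8') -> str:
    fd, path = tempfile.mkstemp()
    try:
        # Write side: newline=None translates '\n' to os.linesep.
        with os.fdopen(fd, 'w', encoding=encoding, newline=None) as f:
            f.write(text)
        # Read side: newline='' preserves whatever separators are on disk.
        with open(path, 'r', encoding=encoding, newline='') as f:
            return f.read()
    finally:
        os.unlink(path)
```

This pairing is what lets the module compare lines against `os.linesep` directly instead of juggling `to_bytes`/`b()` conversions.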
@@ -219,20 +219,20 @@ import os
 import platform
 import pwd
 import re
+import shlex
 import sys
 import tempfile

 from ansible.module_utils.basic import AnsibleModule
 from ansible.module_utils.common.file import S_IRWU_RWG_RWO
 from ansible.module_utils.common.text.converters import to_bytes, to_native
-from ansible.module_utils.six.moves import shlex_quote


 class CronTabError(Exception):
     pass


-class CronTab(object):
+class CronTab:
     """
     CronTab object to write time based crontab file
@@ -243,8 +243,8 @@ class CronTab(object):
     def __init__(self, module, user=None, cron_file=None):
         self.module = module
         self.user = user
-        self.root = (os.getuid() == 0)
-        self.lines = None
+        self.root = os.getuid() == 0
+        self.lines = []
         self.ansible = "#Ansible: "
         self.n_existing = ''
         self.cron_cmd = self.module.get_bin_path('crontab', required=True)
@@ -264,7 +264,6 @@ class CronTab(object):
     def read(self):
         # Read in the crontab from the system
-        self.lines = []
         if self.cron_file:
             # read the cronfile
             try:
@@ -280,7 +279,7 @@ class CronTab(object):
             # FIXME: using safely quoted shell for now, but this really should be two non-shell calls instead.
             (rc, out, err) = self.module.run_command(self._read_user_execute(), use_unsafe_shell=True)

-            if rc != 0 and rc != 1:  # 1 can mean that there are no jobs.
+            if rc not in (0, 1):  # 1 can mean that there are no jobs.
                 raise CronTabError("Unable to read crontab")

             self.n_existing = out
@@ -300,7 +299,6 @@ class CronTab(object):
     def is_empty(self):
         if len(self.lines) == 0:
             return True
-        else:
-            for line in self.lines:
-                if line.strip():
-                    return False
+        for line in self.lines:
+            if line.strip():
+                return False
@@ -451,12 +449,9 @@ class CronTab(object):
         if special:
             if self.cron_file:
                 return "%s@%s %s %s" % (disable_prefix, special, self.user, job)
-            else:
-                return "%s@%s %s" % (disable_prefix, special, job)
-        else:
-            if self.cron_file:
-                return "%s%s %s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, self.user, job)
-            else:
-                return "%s%s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, job)
+            return "%s@%s %s" % (disable_prefix, special, job)
+        if self.cron_file:
+            return "%s%s %s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, self.user, job)
+        return "%s%s %s %s %s %s %s" % (disable_prefix, minute, hour, day, month, weekday, job)

     def get_jobnames(self):
@@ -495,7 +490,6 @@ class CronTab(object):
         if len(newlines) == 0:
             return True
-        else:
-            return False  # TODO add some more error testing
+        return False  # TODO add some more error testing

     def _update_env(self, name, decl, addenvfunction):
@@ -529,13 +523,13 @@ class CronTab(object):
         user = ''
         if self.user:
             if platform.system() == 'SunOS':
-                return "su %s -c '%s -l'" % (shlex_quote(self.user), shlex_quote(self.cron_cmd))
-            elif platform.system() == 'AIX':
-                return "%s -l %s" % (shlex_quote(self.cron_cmd), shlex_quote(self.user))
-            elif platform.system() == 'HP-UX':
-                return "%s %s %s" % (self.cron_cmd, '-l', shlex_quote(self.user))
-            elif pwd.getpwuid(os.getuid())[0] != self.user:
-                user = '-u %s' % shlex_quote(self.user)
+                return "su %s -c '%s -l'" % (shlex.quote(self.user), shlex.quote(self.cron_cmd))
+            if platform.system() == 'AIX':
+                return "%s -l %s" % (shlex.quote(self.cron_cmd), shlex.quote(self.user))
+            if platform.system() == 'HP-UX':
+                return "%s %s %s" % (self.cron_cmd, '-l', shlex.quote(self.user))
+            if pwd.getpwuid(os.getuid())[0] != self.user:
+                user = '-u %s' % shlex.quote(self.user)
         return "%s %s %s" % (self.cron_cmd, user, '-l')

     def _write_execute(self, path):
@@ -546,10 +540,10 @@ class CronTab(object):
         if self.user:
             if platform.system() in ['SunOS', 'HP-UX', 'AIX']:
                 return "chown %s %s ; su '%s' -c '%s %s'" % (
-                    shlex_quote(self.user), shlex_quote(path), shlex_quote(self.user), self.cron_cmd, shlex_quote(path))
-            elif pwd.getpwuid(os.getuid())[0] != self.user:
-                user = '-u %s' % shlex_quote(self.user)
-        return "%s %s %s" % (self.cron_cmd, user, shlex_quote(path))
+                    shlex.quote(self.user), shlex.quote(path), shlex.quote(self.user), self.cron_cmd, shlex.quote(path))
+            if pwd.getpwuid(os.getuid())[0] != self.user:
+                user = '-u %s' % shlex.quote(self.user)
+        return "%s %s %s" % (self.cron_cmd, user, shlex.quote(path))


 def main():
@@ -668,7 +662,7 @@ def main():
         # if requested make a backup before making a change
         if backup and not module.check_mode:
-            (backuph, backup_file) = tempfile.mkstemp(prefix='crontab')
+            (dummy, backup_file) = tempfile.mkstemp(prefix='crontab')
             crontab.write(backup_file)

         if env:
@@ -763,9 +757,6 @@ def main():
     module.exit_json(**res_args)
-
-    # --- should never get here
-    module.exit_json(msg="Unable to execute cron task.")


 if __name__ == '__main__':
     main()

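The cron diff swaps six's `shlex_quote` shim for the stdlib `shlex.quote`, which escapes a string so it is safe to embed in a shell command built by string formatting. A sketch of the `-u <user> -l` branch in isolation (the function name and argument values are illustrative, not the module's API):

```python
import shlex


def read_user_crontab_cmd(cron_cmd: str, user: str) -> str:
    # Mirrors the "crontab -u <user> -l" construction from the hunk above;
    # quoting the user name prevents shell injection via crafted names.
    return "%s -u %s -l" % (cron_cmd, shlex.quote(user))
```

A benign name passes through unchanged, while anything containing shell metacharacters gets wrapped in single quotes.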
@@ -250,7 +250,6 @@ from ansible.module_utils.common.file import S_IRWXU_RXG_RXO, S_IRWU_RG_RO
 from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
 from ansible.module_utils.common.text.converters import to_bytes
 from ansible.module_utils.common.text.converters import to_native
-from ansible.module_utils.six import raise_from  # type: ignore[attr-defined]
 from ansible.module_utils.urls import generic_urlparse
 from ansible.module_utils.urls import open_url
 from ansible.module_utils.urls import get_user_agent
@@ -339,7 +338,7 @@ def write_signed_by_key(module, v, slug):
         try:
             r = open_url(v, http_agent=get_user_agent())
         except Exception as exc:
-            raise_from(RuntimeError(to_native(exc)), exc)
+            raise RuntimeError('Could not fetch signed_by key.') from exc
         else:
             b_data = r.read()
     else:
@@ -587,14 +586,9 @@ def main():
             elif is_sequence(value):
                 value = format_list(value)
             elif key == 'signed_by':
-                try:
-                    key_changed, signed_by_filename, signed_by_data = write_signed_by_key(module, value, slug)
-                    value = signed_by_filename or signed_by_data
-                    changed |= key_changed
-                except RuntimeError as exc:
-                    module.fail_json(
-                        msg='Could not fetch signed_by key: %s' % to_native(exc)
-                    )
+                key_changed, signed_by_filename, signed_by_data = write_signed_by_key(module, value, slug)
+                value = signed_by_filename or signed_by_data
+                changed |= key_changed

             if value.count('\n') > 0:
                 value = format_multiline(value)

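`raise X from Y` is the native replacement for `six.raise_from(X, Y)` used in the hunk above: it chains the original exception as `__cause__`, so both tracebacks are reported. A self-contained sketch (the failing `OSError` stands in for a failed `open_url` call; the function body is illustrative):

```python
def fetch_key(url: str) -> bytes:
    try:
        # Stand-in for open_url(url) failing at the network layer.
        raise OSError("connection refused")
    except Exception as exc:
        # Chains exc as __cause__ of the RuntimeError.
        raise RuntimeError('Could not fetch signed_by key.') from exc
```

Callers that catch the `RuntimeError` can still inspect the underlying error through `__cause__`, which is why the separate `try/except RuntimeError` block in `main()` could be dropped.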
@@ -537,6 +537,9 @@ class DnfModule(YumDnf):
         conf.sslverify = sslverify

         # Set installroot
+        if not os.path.isdir(installroot):
+            self.module.fail_json(msg=f"Installroot {installroot} must be a directory")
+
         conf.installroot = installroot

         # Load substitutions from the filesystem

@@ -595,6 +595,10 @@ class Dnf5Module(YumDnf):
         conf.localpkg_gpgcheck = not self.disable_gpg_check
         conf.sslverify = self.sslverify
         conf.clean_requirements_on_remove = self.autoremove
+
+        if not os.path.isdir(self.installroot):
+            self.module.fail_json(msg=f"Installroot {self.installroot} must be a directory")
+
         conf.installroot = self.installroot
         conf.use_host_config = True  # needed for installroot
         conf.cacheonly = "all" if self.cacheonly else "none"

@@ -291,7 +291,6 @@ import time

 from ansible.module_utils.common.text.converters import to_text, to_native
 from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.six import string_types


 class _Object:
@@ -496,7 +495,7 @@ def main():
     params = module.params

-    if params['mode'] and not isinstance(params['mode'], string_types):
+    if params['mode'] and not isinstance(params['mode'], str):
         module.fail_json(
             msg="argument 'mode' is not a string and conversion is not allowed, value is of type %s" % params['mode'].__class__.__name__
         )

@@ -374,9 +374,9 @@ import shutil
 import tempfile
 from datetime import datetime, timezone
+from urllib.parse import urlsplit

 from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.six.moves.urllib.parse import urlsplit
 from ansible.module_utils.common.text.converters import to_native
 from ansible.module_utils.urls import fetch_url, url_argument_spec

@@ -343,7 +343,6 @@ from ansible.module_utils.common.text.converters import to_native, to_text
 from ansible.module_utils.basic import AnsibleModule
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.process import get_bin_path
-from ansible.module_utils.six import b, string_types


 def relocate_repo(module, result, repo_dir, old_repo_dir, worktree_dir):
@@ -443,12 +442,12 @@ def write_ssh_wrapper(module):
         fd, wrapper_path = tempfile.mkstemp()

     # use existing git_ssh/ssh_command, fallback to 'ssh'
-    template = b("""#!/bin/sh
-%s $GIT_SSH_OPTS "$@"
-""" % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh')))
+    template = """#!/bin/sh
+%s $GIT_SSH_OPTS "$@"
+""" % os.environ.get('GIT_SSH', os.environ.get('GIT_SSH_COMMAND', 'ssh'))

     # write it
-    with os.fdopen(fd, 'w+b') as fh:
+    with os.fdopen(fd, 'w') as fh:
         fh.write(template)

     # set execute
@@ -1257,7 +1256,7 @@ def main():
     # evaluate and set the umask before doing anything else
     if umask is not None:
-        if not isinstance(umask, string_types):
+        if not isinstance(umask, str):
             module.fail_json(msg="umask must be defined as a quoted octal integer")
         try:
             umask = int(umask, 8)

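The git hunk keeps the module's rule that `umask` must arrive as a quoted octal string and be parsed with `int(value, 8)`. A minimal sketch of that validate-then-parse step (the helper name and the `TypeError` stand-in for `module.fail_json` are illustrative):

```python
def parse_umask(umask):
    """Parse a quoted octal umask string, e.g. "022" -> 0o22."""
    if not isinstance(umask, str):
        # The module calls fail_json here; raising is the stand-in.
        raise TypeError("umask must be defined as a quoted octal integer")
    return int(umask, 8)  # base-8 parse, per the hunk above
```

Quoting matters in YAML: an unquoted `022` would arrive as the integer 22 (or, worse, be reinterpreted), so the string check runs before the base-8 conversion.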
@@ -66,7 +66,7 @@ options:
     description:
       - Ignore unknown file extensions within the directory.
       - This allows users to specify a directory containing vars files that are intermingled with non-vars files extension types
-        (e.g. a directory with a README in it and vars files).
+        (for example, a directory with a README in it and vars files).
     type: bool
     default: no
     version_added: "2.7"

@@ -123,6 +123,13 @@ options:
     type: bool
     default: no
     version_added: "2.5"
+  encoding:
+    description:
+      - The character set in which the target file is encoded.
+      - For a list of available built-in encodings, see U(https://docs.python.org/3/library/codecs.html#standard-encodings)
+    type: str
+    default: utf-8
+    version_added: "2.20"
 extends_documentation_fragment:
   - action_common_attributes
   - action_common_attributes.files
@@ -250,11 +257,11 @@ from ansible.module_utils.basic import AnsibleModule
 from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text


-def write_changes(module, b_lines, dest):
+def write_changes(module, lines, dest, encoding=None):

     tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
-    with os.fdopen(tmpfd, 'wb') as f:
-        f.writelines(b_lines)
+    with os.fdopen(tmpfd, 'w', encoding=encoding) as f:
+        f.writelines(lines)

     validate = module.params.get('validate', None)
     valid = not validate
@@ -293,6 +300,7 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
             'before_header': '%s (content)' % dest,
             'after_header': '%s (content)' % dest}

+    encoding = module.params.get('encoding', None)
     b_dest = to_bytes(dest, errors='surrogate_or_strict')
     if not os.path.exists(b_dest):
         if not create:
@@ -304,30 +312,29 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
             except Exception as e:
                 module.fail_json(msg='Error creating %s (%s)' % (to_text(b_destpath), to_text(e)))
-        b_lines = []
+        lines = []
     else:
-        with open(b_dest, 'rb') as f:
-            b_lines = f.readlines()
+        with open(b_dest, 'r', encoding=encoding) as f:
+            lines = f.readlines()

     if module._diff:
-        diff['before'] = to_native(b''.join(b_lines))
+        diff['before'] = ''.join(lines)

     if regexp is not None:
-        bre_m = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
+        re_m = re.compile(regexp)

     if insertafter not in (None, 'BOF', 'EOF'):
-        bre_ins = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
+        re_ins = re.compile(insertafter)
     elif insertbefore not in (None, 'BOF'):
-        bre_ins = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
+        re_ins = re.compile(insertbefore)
     else:
-        bre_ins = None
+        re_ins = None

     # index[0] is the line num where regexp has been found
     # index[1] is the line num where insertafter/insertbefore has been found
     index = [-1, -1]
     match = None
     exact_line_match = False
-    b_line = to_bytes(line, errors='surrogate_or_strict')

     # The module's doc says
     # "If regular expressions are passed to both regexp and
@@ -339,8 +346,8 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
     # Given the above:
     # 1. First check that there is no match for regexp:
     if regexp is not None:
-        for lineno, b_cur_line in enumerate(b_lines):
-            match_found = bre_m.search(b_cur_line)
+        for lineno, cur_line in enumerate(lines):
+            match_found = re_m.search(cur_line)
             if match_found:
                 index[0] = lineno
                 match = match_found
@@ -349,8 +356,8 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
     # 2. Second check that there is no match for search_string:
     if search_string is not None:
-        for lineno, b_cur_line in enumerate(b_lines):
-            match_found = to_bytes(search_string, errors='surrogate_or_strict') in b_cur_line
+        for lineno, cur_line in enumerate(lines):
+            match_found = search_string in cur_line
             if match_found:
                 index[0] = lineno
                 match = match_found
@@ -360,12 +367,12 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
     # 3. When no match found on the previous step,
     # parse for searching insertafter/insertbefore:
     if not match:
-        for lineno, b_cur_line in enumerate(b_lines):
-            if b_line == b_cur_line.rstrip(b'\r\n'):
+        for lineno, cur_line in enumerate(lines):
+            if line == cur_line.rstrip('\r\n'):
                 index[0] = lineno
                 exact_line_match = True

-            elif bre_ins is not None and bre_ins.search(b_cur_line):
+            elif re_ins is not None and re_ins.search(cur_line):
                 if insertafter:
                     # + 1 for the next line
                     index[1] = lineno + 1
@@ -380,17 +387,17 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
     msg = ''
     changed = False
-    b_linesep = to_bytes(os.linesep, errors='surrogate_or_strict')
+    linesep = os.linesep

     # Exact line or Regexp matched a line in the file
     if index[0] != -1:
         if backrefs and match:
-            b_new_line = match.expand(b_line)
+            new_line = match.expand(line)
         else:
             # Don't do backref expansion if not asked.
-            b_new_line = b_line
+            new_line = line

-        if not b_new_line.endswith(b_linesep):
-            b_new_line += b_linesep
+        if not new_line.endswith(linesep):
+            new_line += linesep

     # If no regexp or search_string was given and no line match is found anywhere in the file,
     # insert the line appropriately if using insertbefore or insertafter
@@ -400,18 +407,18 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
         if insertafter and insertafter != 'EOF':
             # Ensure there is a line separator after the found string
             # at the end of the file.
-            if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'):
-                b_lines[-1] = b_lines[-1] + b_linesep
+            if lines and not lines[-1][-1:] in ('\n', '\r'):
+                lines[-1] = lines[-1] + linesep

             # If the line to insert after is at the end of the file
             # use the appropriate index value.
-            if len(b_lines) == index[1]:
-                if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
-                    b_lines.append(b_line + b_linesep)
+            if len(lines) == index[1]:
+                if lines[index[1] - 1].rstrip('\r\n') != line:
+                    lines.append(line + linesep)
                     msg = 'line added'
                     changed = True
-            elif b_lines[index[1]].rstrip(b'\r\n') != b_line:
-                b_lines.insert(index[1], b_line + b_linesep)
+            elif lines[index[1]].rstrip('\r\n') != line:
+                lines.insert(index[1], line + linesep)
                 msg = 'line added'
                 changed = True
@@ -419,18 +426,18 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
             # If the line to insert before is at the beginning of the file
             # use the appropriate index value.
             if index[1] <= 0:
-                if b_lines[index[1]].rstrip(b'\r\n') != b_line:
-                    b_lines.insert(index[1], b_line + b_linesep)
+                if lines[index[1]].rstrip('\r\n') != line:
+                    lines.insert(index[1], line + linesep)
                     msg = 'line added'
                     changed = True
-            elif b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
-                b_lines.insert(index[1], b_line + b_linesep)
+            elif lines[index[1] - 1].rstrip('\r\n') != line:
+                lines.insert(index[1], line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
elif b_lines[index[0]] != b_new_line: elif lines[index[0]] != new_line:
b_lines[index[0]] = b_new_line lines[index[0]] = new_line
msg = 'line replaced' msg = 'line replaced'
changed = True changed = True
@ -440,7 +447,7 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
pass pass
# Add it to the beginning of the file # Add it to the beginning of the file
elif insertbefore == 'BOF' or insertafter == 'BOF': elif insertbefore == 'BOF' or insertafter == 'BOF':
b_lines.insert(0, b_line + b_linesep) lines.insert(0, line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
# Add it to the end of the file if requested or # Add it to the end of the file if requested or
@ -449,10 +456,10 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
elif insertafter == 'EOF' or index[1] == -1: elif insertafter == 'EOF' or index[1] == -1:
# If the file is not empty then ensure there's a newline before the added line # If the file is not empty then ensure there's a newline before the added line
if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'): if lines and not lines[-1][-1:] in ('\n', '\r'):
b_lines.append(b_linesep) lines.append(linesep)
b_lines.append(b_line + b_linesep) lines.append(line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
@ -460,30 +467,30 @@ def present(module, dest, regexp, search_string, line, insertafter, insertbefore
# Don't insert the line if it already matches at the index. # Don't insert the line if it already matches at the index.
# If the line to insert after is at the end of the file use the appropriate index value. # If the line to insert after is at the end of the file use the appropriate index value.
if len(b_lines) == index[1]: if len(lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line: if lines[index[1] - 1].rstrip('\r\n') != line:
b_lines.append(b_line + b_linesep) lines.append(line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
elif b_line != b_lines[index[1]].rstrip(b'\n\r'): elif line != lines[index[1]].rstrip('\n\r'):
b_lines.insert(index[1], b_line + b_linesep) lines.insert(index[1], line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
# insert matched, but not the regexp or search_string # insert matched, but not the regexp or search_string
else: else:
b_lines.insert(index[1], b_line + b_linesep) lines.insert(index[1], line + linesep)
msg = 'line added' msg = 'line added'
changed = True changed = True
if module._diff: if module._diff:
diff['after'] = to_native(b''.join(b_lines)) diff['after'] = ''.join(lines)
backupdest = "" backupdest = ""
if changed and not module.check_mode: if changed and not module.check_mode:
if backup and os.path.exists(b_dest): if backup and os.path.exists(b_dest):
backupdest = module.backup_local(dest) backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest) write_changes(module, lines, dest, encoding)
if module.check_mode and not os.path.exists(b_dest): if module.check_mode and not os.path.exists(b_dest):
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff) module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff)
@ -510,40 +517,40 @@ def absent(module, dest, regexp, search_string, line, backup):
'before_header': '%s (content)' % dest, 'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest} 'after_header': '%s (content)' % dest}
with open(b_dest, 'rb') as f: encoding = module.params['encoding']
b_lines = f.readlines()
with open(b_dest, 'r', encoding=encoding) as f:
lines = f.readlines()
if module._diff: if module._diff:
diff['before'] = to_native(b''.join(b_lines)) diff['before'] = ''.join(lines)
if regexp is not None: if regexp is not None:
bre_c = re.compile(to_bytes(regexp, errors='surrogate_or_strict')) re_c = re.compile(regexp)
found = [] found = []
b_line = to_bytes(line, errors='surrogate_or_strict') def matcher(cur_line):
def matcher(b_cur_line):
if regexp is not None: if regexp is not None:
match_found = bre_c.search(b_cur_line) match_found = re_c.search(cur_line)
elif search_string is not None: elif search_string is not None:
match_found = to_bytes(search_string, errors='surrogate_or_strict') in b_cur_line match_found = search_string in cur_line
else: else:
match_found = b_line == b_cur_line.rstrip(b'\r\n') match_found = line == cur_line.rstrip('\r\n')
if match_found: if match_found:
found.append(b_cur_line) found.append(cur_line)
return not match_found return not match_found
b_lines = [l for l in b_lines if matcher(l)] lines = [l for l in lines if matcher(l)]
changed = len(found) > 0 changed = len(found) > 0
if module._diff: if module._diff:
diff['after'] = to_native(b''.join(b_lines)) diff['after'] = ''.join(lines)
backupdest = "" backupdest = ""
if changed and not module.check_mode: if changed and not module.check_mode:
if backup: if backup:
backupdest = module.backup_local(dest) backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest) write_changes(module, lines, dest, encoding)
if changed: if changed:
msg = "%s line(s) removed" % len(found) msg = "%s line(s) removed" % len(found)
@ -567,6 +574,7 @@ def main():
regexp=dict(type='str', aliases=['regex']), regexp=dict(type='str', aliases=['regex']),
search_string=dict(type='str'), search_string=dict(type='str'),
line=dict(type='str', aliases=['value']), line=dict(type='str', aliases=['value']),
encoding=dict(type='str', default='utf-8'),
insertafter=dict(type='str'), insertafter=dict(type='str'),
insertbefore=dict(type='str'), insertbefore=dict(type='str'),
backrefs=dict(type='bool', default=False), backrefs=dict(type='bool', default=False),
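The diff above switches the module from byte-oriented I/O to text-mode I/O driven by a new `encoding` option. A minimal standalone sketch of that pattern (a hypothetical helper, not the module's actual code) shows the idempotent read/append/write cycle in text mode with an explicit encoding:

```python
import os
import tempfile


def ensure_line(path, line, encoding='utf-8'):
    """Append ``line`` to the file at ``path`` unless it is already present.

    Illustrative only: text-mode open() with an explicit encoding, mirroring
    the approach the new ``encoding`` option enables.
    """
    with open(path, 'r', encoding=encoding) as f:
        lines = f.readlines()

    linesep = os.linesep
    if any(l.rstrip('\r\n') == line for l in lines):
        return False  # already present, nothing to change

    # Ensure the existing last line ends with a separator before appending.
    if lines and lines[-1][-1:] not in ('\n', '\r'):
        lines[-1] += linesep
    lines.append(line + linesep)

    with open(path, 'w', encoding=encoding) as f:
        f.writelines(lines)
    return True
```

The second call with the same arguments reports no change, which is the check-then-write behaviour the module's `present()` implements.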

@@ -18,7 +18,7 @@ options:
       This is a list and can support multiple package managers per system, since version 2.8.
     - The V(portage) and V(pkg) options were added in version 2.8.
     - The V(apk) option was added in version 2.11.
-    - The V(pkg_info)' option was added in version 2.13.
+    - The V(pkg_info) option was added in version 2.13.
    - Aliases were added in 2.18, to support using C(manager={{ansible_facts['pkg_mgr']}})
    default: ['auto']
    choices:

@@ -180,7 +180,6 @@ from ansible.module_utils.basic import AnsibleModule
 from ansible.module_utils.common.locale import get_best_parsable_locale
 from ansible.module_utils.common.sys_info import get_platform_subclass
 from ansible.module_utils.service import fail_if_missing, is_systemd_managed
-from ansible.module_utils.six import b


 class Service(object):
@@ -292,8 +291,8 @@ class Service(object):
         # chkconfig localizes messages and we're screen scraping so make
         # sure we use the C locale
         p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=lang_env, preexec_fn=lambda: os.close(pipe[1]))
-        stdout = b("")
-        stderr = b("")
+        stdout = b""
+        stderr = b""
         fds = [p.stdout, p.stderr]
         # Wait for all output, or until the main process is dead and its output is done.
         while fds:
@@ -322,7 +321,7 @@ class Service(object):
             os.close(pipe[1])
             os.waitpid(pid, 0)
             # Wait for data from daemon process and process it.
-            data = b("")
+            data = b""
             while True:
                 rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]])
                 if pipe[0] in rfd:

@@ -438,13 +438,12 @@ import os
 import re
 import shutil
 import tempfile
+from collections.abc import Mapping, Sequence
 from datetime import datetime, timezone
+from urllib.parse import urlencode, urljoin

 from ansible.module_utils.basic import AnsibleModule, sanitize_keys
-from ansible.module_utils.six import binary_type, iteritems, string_types
-from ansible.module_utils.six.moves.urllib.parse import urlencode, urljoin
 from ansible.module_utils.common.text.converters import to_native, to_text
-from ansible.module_utils.six.moves.collections_abc import Mapping, Sequence
 from ansible.module_utils.urls import (
     fetch_url,
     get_response_filename,
@@ -479,7 +478,7 @@ def write_file(module, dest, content, resp):
     try:
         fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
         with os.fdopen(fd, 'wb') as f:
-            if isinstance(content, binary_type):
+            if isinstance(content, bytes):
                 f.write(content)
             else:
                 shutil.copyfileobj(content, f)
@@ -521,14 +520,14 @@ def kv_list(data):

 def form_urlencoded(body):
     """ Convert data into a form-urlencoded string """
-    if isinstance(body, string_types):
+    if isinstance(body, str):
         return body

     if isinstance(body, (Mapping, Sequence)):
         result = []
         # Turn a list of lists into a list of tuples that urlencode accepts
         for key, values in kv_list(body):
-            if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
+            if isinstance(values, str) or not isinstance(values, (Mapping, Sequence)):
                 values = [values]
             for value in values:
                 if value is not None:
@@ -641,12 +640,12 @@ def main():
     if body_format == 'json':
         # Encode the body unless its a string, then assume it is pre-formatted JSON
-        if not isinstance(body, string_types):
+        if not isinstance(body, str):
             body = json.dumps(body)
         if 'content-type' not in [header.lower() for header in dict_headers]:
             dict_headers['Content-Type'] = 'application/json'
     elif body_format == 'form-urlencoded':
-        if not isinstance(body, string_types):
+        if not isinstance(body, str):
             try:
                 body = form_urlencoded(body)
             except ValueError as e:
@@ -747,7 +746,7 @@ def main():
     # In python3, the headers are title cased. Lowercase them to be
     # compatible with the python2 behaviour.
     uresp = {}
-    for key, value in iteritems(resp):
+    for key, value in resp.items():
         ukey = key.replace("-", "_").lower()
         uresp[ukey] = value
@@ -755,7 +754,7 @@ def main():
         uresp['location'] = urljoin(url, uresp['location'])

     # Default content_encoding to try
-    if isinstance(content, binary_type):
+    if isinstance(content, bytes):
         u_content = to_text(content, encoding=content_encoding)
         if maybe_json:
             try:
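With `six` gone, the body conversion above relies only on the stdlib: `str`, `collections.abc`, and `urllib.parse`. A hedged standalone sketch of the same idea (`form_urlencode` here is a hypothetical stand-in for the module's `form_urlencoded`, with the key/value flattening inlined instead of delegated to `kv_list`):

```python
from collections.abc import Mapping, Sequence
from urllib.parse import urlencode


def form_urlencode(body):
    """Convert a mapping or list of pairs into a form-urlencoded string.

    Illustrative sketch only; strings are assumed to be pre-encoded and
    passed through untouched, as in the module.
    """
    if isinstance(body, str):
        return body
    if isinstance(body, (Mapping, Sequence)):
        pairs = []
        items = body.items() if isinstance(body, Mapping) else body
        for key, values in items:
            # A scalar value and a list of values per key are both accepted.
            if isinstance(values, str) or not isinstance(values, (Mapping, Sequence)):
                values = [values]
            for value in values:
                if value is not None:
                    pairs.append((key, value))
        return urlencode(pairs)
    raise ValueError('failed to convert body to form-urlencoded')
```

Note the order of the `isinstance` checks matters: `str` is itself a `Sequence`, so it must be handled first, which is exactly why the diff keeps the string test ahead of the container test.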

@@ -19,7 +19,6 @@ from ansible._internal._errors import _error_utils
 from ansible.module_utils.basic import is_executable
 from ansible._internal._datatag._tags import Origin, TrustedAsTemplate, SourceWasEncrypted
 from ansible.module_utils._internal._datatag import AnsibleTagHelper
-from ansible.module_utils.six import binary_type, text_type
 from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
 from ansible.parsing.quoting import unquote
 from ansible.parsing.utils.yaml import from_yaml
@@ -32,7 +31,7 @@ display = Display()

 # Tries to determine if a path is inside a role, last dir must be 'tasks'
 # this is not perfect but people should really avoid 'tasks' dirs outside roles when using Ansible.
-RE_TASKS = re.compile(u'(?:^|%s)+tasks%s?$' % (os.path.sep, os.path.sep))
+RE_TASKS = re.compile('(?:^|%s)+tasks%s?$' % (os.path.sep, os.path.sep))


 class DataLoader:
@@ -54,23 +53,22 @@ class DataLoader:
         ds = dl.load_from_file('/path/to/file')
     """

-    def __init__(self):
-        self._basedir = '.'
+    def __init__(self) -> None:
+        self._basedir: str = '.'

         # NOTE: not effective with forks as the main copy does not get updated.
         # avoids rereading files
-        self._FILE_CACHE = dict()
+        self._FILE_CACHE: dict[str, object] = {}

         # NOTE: not thread safe, also issues with forks not returning data to main proc
         # so they need to be cleaned independently. See WorkerProcess for example.
         # used to keep track of temp files for cleaning
-        self._tempfiles = set()
+        self._tempfiles: set[str] = set()

         # initialize the vault stuff with an empty password
         # TODO: replace with a ref to something that can get the password
         # a creds/auth provider
-        self._vaults = {}
         self._vault = VaultLib()
         self.set_vault_secrets(None)
@@ -230,23 +228,19 @@ class DataLoader:
     def set_basedir(self, basedir: str) -> None:
         """ sets the base directory, used to find files when a relative path is given """
-        if basedir is not None:
-            self._basedir = to_text(basedir)
+        self._basedir = basedir

     def path_dwim(self, given: str) -> str:
         """
         make relative paths work like folks expect.
         """

-        given = to_text(given, errors='surrogate_or_strict')
         given = unquote(given)

-        if given.startswith(to_text(os.path.sep)) or given.startswith(u'~'):
+        if given.startswith(os.path.sep) or given.startswith('~'):
             path = given
         else:
-            basedir = to_text(self._basedir, errors='surrogate_or_strict')
-            path = os.path.join(basedir, given)
+            path = os.path.join(self._basedir, given)

         return unfrackpath(path, follow=False)
@@ -294,10 +288,9 @@ class DataLoader:
         """
         search = []
-        source = to_text(source, errors='surrogate_or_strict')

         # I have full path, nothing else needs to be looked at
-        if source.startswith(to_text(os.path.sep)) or source.startswith(u'~'):
+        if source.startswith(os.path.sep) or source.startswith('~'):
             search.append(unfrackpath(source, follow=False))
         else:
             # base role/play path + templates/files/vars + relative filename
@@ -364,7 +357,7 @@ class DataLoader:
             if os.path.exists(to_bytes(test_path, errors='surrogate_or_strict')):
                 result = test_path
             else:
-                display.debug(u'evaluation_path:\n\t%s' % '\n\t'.join(paths))
+                display.debug('evaluation_path:\n\t%s' % '\n\t'.join(paths))
                 for path in paths:
                     upath = unfrackpath(path, follow=False)
                     b_upath = to_bytes(upath, errors='surrogate_or_strict')
@@ -385,9 +378,9 @@ class DataLoader:
                 search.append(os.path.join(to_bytes(self.get_basedir(), errors='surrogate_or_strict'), b_dirname, b_source))
                 search.append(os.path.join(to_bytes(self.get_basedir(), errors='surrogate_or_strict'), b_source))

-            display.debug(u'search_path:\n\t%s' % to_text(b'\n\t'.join(search)))
+            display.debug('search_path:\n\t%s' % to_text(b'\n\t'.join(search)))
             for b_candidate in search:
-                display.vvvvv(u'looking for "%s" at "%s"' % (source, to_text(b_candidate)))
+                display.vvvvv('looking for "%s" at "%s"' % (source, to_text(b_candidate)))
                 if os.path.exists(b_candidate):
                     result = to_text(b_candidate)
                     break
@@ -418,11 +411,10 @@ class DataLoader:
         Temporary files are cleanup in the destructor
         """

-        if not file_path or not isinstance(file_path, (binary_type, text_type)):
+        if not file_path or not isinstance(file_path, (bytes, str)):
             raise AnsibleParserError("Invalid filename: '%s'" % to_native(file_path))

-        b_file_path = to_bytes(file_path, errors='surrogate_or_strict')
-        if not self.path_exists(b_file_path) or not self.is_file(b_file_path):
+        if not self.path_exists(file_path) or not self.is_file(file_path):
             raise AnsibleFileNotFound(file_name=file_path)

         real_path = self.path_dwim(file_path)
@@ -480,7 +472,7 @@ class DataLoader:
         """
         b_path = to_bytes(os.path.join(path, name))
-        found = []
+        found: list[str] = []

         if extensions is None:
             # Look for file with no extension first to find dir before file
@@ -489,27 +481,29 @@ class DataLoader:
         for ext in extensions:

             if '.' in ext:
-                full_path = b_path + to_bytes(ext)
+                b_full_path = b_path + to_bytes(ext)
             elif ext:
-                full_path = b'.'.join([b_path, to_bytes(ext)])
+                b_full_path = b'.'.join([b_path, to_bytes(ext)])
             else:
-                full_path = b_path
+                b_full_path = b_path
+
+            full_path = to_text(b_full_path)

             if self.path_exists(full_path):
                 if self.is_directory(full_path):
                     if allow_dir:
-                        found.extend(self._get_dir_vars_files(to_text(full_path), extensions))
+                        found.extend(self._get_dir_vars_files(full_path, extensions))
                     else:
                         continue
                 else:
-                    found.append(to_text(full_path))
+                    found.append(full_path)
                 break
         return found

     def _get_dir_vars_files(self, path: str, extensions: list[str]) -> list[str]:
         found = []
         for spath in sorted(self.list_directory(path)):
-            if not spath.startswith(u'.') and not spath.endswith(u'~'):  # skip hidden and backups
+            if not spath.startswith('.') and not spath.endswith('~'):  # skip hidden and backups
                 ext = os.path.splitext(spath)[-1]
                 full_spath = os.path.join(path, spath)

@@ -130,6 +130,7 @@ class ModuleArgsParser:
         # HACK: why are these not FieldAttributes on task with a post-validate to check usage?
         self._task_attrs.update(['local_action', 'static'])
         self._task_attrs = frozenset(self._task_attrs)
+        self._resolved_action = None

     def _split_module_string(self, module_string: str) -> tuple[str, str]:
         """
@@ -344,6 +345,8 @@ class ModuleArgsParser:
                     raise e

             is_action_candidate = context.resolved and bool(context.redirect_list)
+            if is_action_candidate:
+                self._resolved_action = context.resolved_fqcn

         if is_action_candidate:
             # finding more than one module name is a problem

@@ -59,7 +59,6 @@ except ImportError:

 from ansible.errors import AnsibleError, AnsibleAssertionError
 from ansible import constants as C
-from ansible.module_utils.six import binary_type
 from ansible.module_utils.common.text.converters import to_bytes, to_text, to_native
 from ansible.utils.display import Display
 from ansible.utils.path import makedirs_safe, unfrackpath
@@ -1237,7 +1236,7 @@ class VaultAES256:
         It would be nice if there were a library for this but hey.
         """
-        if not (isinstance(b_a, binary_type) and isinstance(b_b, binary_type)):
+        if not (isinstance(b_a, bytes) and isinstance(b_b, bytes)):
             raise TypeError('_is_equal can only be used to compare two byte strings')

         # http://codahale.com/a-lesson-in-timing-attacks/
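The `_is_equal` hunk above only modernizes the type check; the security-relevant part is that the comparison itself must run in constant time, as the timing-attack link in the code notes. Since Python 3.3 the stdlib provides exactly this. A sketch (not the vault's implementation, which hand-rolls the comparison) using `hmac.compare_digest`:

```python
import hmac


def is_equal(b_a, b_b):
    """Compare two byte strings in constant time.

    hmac.compare_digest does the same amount of work no matter where the
    inputs first differ, so an attacker cannot learn a matching prefix by
    timing repeated comparisons.
    """
    if not (isinstance(b_a, bytes) and isinstance(b_b, bytes)):
        raise TypeError('is_equal can only be used to compare two byte strings')
    return hmac.compare_digest(b_a, b_b)
```

An early-exit `==` on secrets leaks how many leading bytes match; the constant-time variant is the standard defence for comparing MACs or password hashes.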

@@ -19,7 +19,6 @@ from ansible import context
 from ansible.errors import AnsibleError, AnsibleParserError, AnsibleAssertionError, AnsibleValueOmittedError, AnsibleFieldAttributeError
 from ansible.module_utils.datatag import native_type_name
 from ansible._internal._datatag._tags import Origin
-from ansible.module_utils.six import string_types
 from ansible.module_utils.parsing.convert_bool import boolean
 from ansible.module_utils.common.sentinel import Sentinel
 from ansible.module_utils.common.text.converters import to_text
@@ -37,7 +36,7 @@ display = Display()
 def _validate_action_group_metadata(action, found_group_metadata, fq_group_name):
     valid_metadata = {
         'extend_group': {
-            'types': (list, string_types,),
+            'types': (list, str,),
             'errortype': 'list',
         },
     }
@@ -204,7 +203,7 @@ class FieldAttributeBase:
         value = self.set_to_context(attr.name)
         valid_values = frozenset(('always', 'on_failed', 'on_unreachable', 'on_skipped', 'never'))
-        if value and isinstance(value, string_types) and value not in valid_values:
+        if value and isinstance(value, str) and value not in valid_values:
             raise AnsibleParserError("'%s' is not a valid value for debugger. Must be one of %s" % (value, ', '.join(valid_values)), obj=self.get_ds())
         return value
@@ -350,14 +349,14 @@ class FieldAttributeBase:
         found_group_metadata = False
         for action in action_group:
             # Everything should be a string except the metadata entry
-            if not isinstance(action, string_types):
+            if not isinstance(action, str):
                 _validate_action_group_metadata(action, found_group_metadata, fq_group_name)

                 if isinstance(action['metadata'], dict):
                     found_group_metadata = True

                     include_groups = action['metadata'].get('extend_group', [])
-                    if isinstance(include_groups, string_types):
+                    if isinstance(include_groups, str):
                         include_groups = [include_groups]
                     if not isinstance(include_groups, list):
                         # Bad entries may be a warning above, but prevent tracebacks by setting it back to the acceptable type.
@@ -472,7 +471,7 @@ class FieldAttributeBase:
         elif attribute.isa == 'percent':
             # special value, which may be an integer or float
             # with an optional '%' at the end
-            if isinstance(value, string_types) and '%' in value:
+            if isinstance(value, str) and '%' in value:
                 value = value.replace('%', '')
             value = float(value)
         elif attribute.isa == 'list':
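The `percent` coercion in the last hunk is compact enough to miss: a string like `'30%'` is stripped of its sign and everything, string or number, is funneled through `float()`. A tiny standalone sketch of that rule (hypothetical helper name, same logic as the hunk, now needing only the built-in `str` check):

```python
def coerce_percent(value):
    """Coerce a 'percent' field value to a float.

    Accepts an int, a float, a numeric string, or a numeric string with a
    trailing '%'; mirrors the attribute coercion shown in the diff above.
    """
    if isinstance(value, str) and '%' in value:
        value = value.replace('%', '')
    return float(value)
```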

@@ -3,7 +3,6 @@

 from __future__ import annotations

-from ansible.module_utils.six import string_types
 from ansible.playbook.attribute import FieldAttribute
 from ansible.utils.collection_loader import AnsibleCollectionConfig
 from ansible.utils.display import Display
@@ -32,7 +31,7 @@ def _ensure_default_collection(collection_list=None):
 class CollectionSearch:

     # this needs to be populated before we can resolve tasks/roles/etc
-    collections = FieldAttribute(isa='list', listof=string_types, priority=100, default=_ensure_default_collection, always_post_validate=True, static=True)
+    collections = FieldAttribute(isa='list', listof=(str,), priority=100, default=_ensure_default_collection, always_post_validate=True, static=True)

     def _load_collections(self, attr, ds):
         # We are always a mixin with Base, so we can validate this untemplated

@@ -20,12 +20,11 @@ from __future__ import annotations

 from ansible.errors import AnsibleAssertionError
 from ansible.playbook.attribute import NonInheritableFieldAttribute
 from ansible.playbook.task import Task
-from ansible.module_utils.six import string_types


 class Handler(Task):

-    listen = NonInheritableFieldAttribute(isa='list', default=list, listof=string_types, static=True)
+    listen = NonInheritableFieldAttribute(isa='list', default=list, listof=(str,), static=True)

     def __init__(self, block=None, role=None, task_include=None):
         self.notified_hosts = []

@@ -22,7 +22,6 @@ from ansible import context
 from ansible.errors import AnsibleError
 from ansible.errors import AnsibleParserError, AnsibleAssertionError
 from ansible.module_utils.common.collections import is_sequence
-from ansible.module_utils.six import binary_type, string_types, text_type
 from ansible.playbook.attribute import NonInheritableFieldAttribute
 from ansible.playbook.base import Base
 from ansible.playbook.block import Block
@@ -53,11 +52,11 @@ class Play(Base, Taggable, CollectionSearch):
     """

     # =================================================================================

-    hosts = NonInheritableFieldAttribute(isa='list', required=True, listof=string_types, always_post_validate=True, priority=-2)
+    hosts = NonInheritableFieldAttribute(isa='list', required=True, listof=(str,), always_post_validate=True, priority=-2)

     # Facts
     gather_facts = NonInheritableFieldAttribute(isa='bool', default=None, always_post_validate=True)
-    gather_subset = NonInheritableFieldAttribute(isa='list', default=None, listof=string_types, always_post_validate=True)
+    gather_subset = NonInheritableFieldAttribute(isa='list', default=None, listof=(str,), always_post_validate=True)
     gather_timeout = NonInheritableFieldAttribute(isa='int', default=None, always_post_validate=True)
     fact_path = NonInheritableFieldAttribute(isa='string', default=None)
@@ -120,10 +119,10 @@ class Play(Base, Taggable, CollectionSearch):
             for entry in value:
                 if entry is None:
                     raise AnsibleParserError("Hosts list cannot contain values of 'None'. Please check your playbook")
-                elif not isinstance(entry, (binary_type, text_type)):
+                elif not isinstance(entry, (bytes, str)):
                     raise AnsibleParserError("Hosts list contains an invalid host value: '{host!s}'".format(host=entry))
-        elif not isinstance(value, (binary_type, text_type, EncryptedString)):
+        elif not isinstance(value, (bytes, str, EncryptedString)):
             raise AnsibleParserError("Hosts list must be a sequence or string. Please check your playbook.")

     def get_name(self):
@@ -303,7 +302,7 @@ class Play(Base, Taggable, CollectionSearch):
         t = Task(block=flush_block)
         t.action = 'meta'
-        t.resolved_action = 'ansible.builtin.meta'
+        t._resolved_action = 'ansible.builtin.meta'
         t.args['_raw_params'] = 'flush_handlers'
         t.implicit = True
         t.set_loader(self._loader)
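The hunks above swap six's compatibility aliases for the Python 3 builtins they resolve to. A minimal sketch of why the rewritten checks accept exactly the same values on Python 3 (the sample entries are hypothetical):

```python
# On Python 3, six defines string_types == (str,), text_type == str and
# binary_type == bytes, so isinstance(entry, (bytes, str)) matches the
# same values as isinstance(entry, (binary_type, text_type)); only the
# compat indirection is removed.
samples = ["web01", b"db01", 42]  # hypothetical host list entries
results = [isinstance(v, (bytes, str)) for v in samples]
assert results == [True, True, False]  # the int would be rejected
```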

@@ -28,7 +28,6 @@ from ansible import constants as C
 from ansible.errors import AnsibleError, AnsibleParserError, AnsibleAssertionError
 from ansible.module_utils.common.sentinel import Sentinel
 from ansible.module_utils.common.text.converters import to_text
-from ansible.module_utils.six import binary_type, text_type
 from ansible.playbook.base import Base
 from ansible.playbook.collectionsearch import CollectionSearch
 from ansible.playbook.conditional import Conditional
@@ -38,6 +37,7 @@ from ansible.playbook.role.metadata import RoleMetadata
 from ansible.playbook.taggable import Taggable
 from ansible.plugins.loader import add_all_plugin_dirs
 from ansible.utils.collection_loader import AnsibleCollectionConfig
+from ansible.utils.display import Display
 from ansible.utils.path import is_subpath
 from ansible.utils.vars import combine_vars
@@ -53,14 +53,12 @@ if _t.TYPE_CHECKING:
 __all__ = ['Role', 'hash_params']

-# TODO: this should be a utility function, but can't be a member of
-# the role due to the fact that it would require the use of self
-# in a static method. This is also used in the base class for
-# strategies (ansible/plugins/strategy/__init__.py)
+_display = Display()


 def hash_params(params):
     """
+    DEPRECATED
     Construct a data structure of parameters that is hashable.

     This requires changing any mutable data structures into immutable ones.
@@ -72,10 +70,16 @@ def hash_params(params):
     1) There shouldn't be any unhashable scalars specified in the yaml
     2) Our only choice would be to return an error anyway.
     """
+    _display.deprecated(
+        msg="The hash_params function is deprecated as its consumers have moved to internal alternatives",
+        version='2.24',
+        help_text='Contact the plugin author to update their code',
+    )
     # Any container is unhashable if it contains unhashable items (for
     # instance, tuple() is a Hashable subclass but if it contains a dict, it
     # cannot be hashed)
-    if isinstance(params, Container) and not isinstance(params, (text_type, binary_type)):
+    if isinstance(params, Container) and not isinstance(params, (str, bytes)):
         if isinstance(params, Mapping):
             try:
                 # Optimistically hope the contents are all hashable

@@ -22,7 +22,6 @@ import os
 from ansible import constants as C
 from ansible.errors import AnsibleError, AnsibleAssertionError
 from ansible.module_utils._internal._datatag import AnsibleTagHelper
-from ansible.module_utils.six import string_types
 from ansible.playbook.attribute import NonInheritableFieldAttribute
 from ansible.playbook.base import Base
 from ansible.playbook.collectionsearch import CollectionSearch
@@ -70,7 +69,7 @@ class RoleDefinition(Base, Conditional, Taggable, CollectionSearch):
         if isinstance(ds, int):
             ds = "%s" % ds

-        if not isinstance(ds, dict) and not isinstance(ds, string_types):
+        if not isinstance(ds, dict) and not isinstance(ds, str):
             raise AnsibleAssertionError()

         if isinstance(ds, dict):
@@ -113,11 +112,11 @@ class RoleDefinition(Base, Conditional, Taggable, CollectionSearch):
         string), just that string
         """

-        if isinstance(ds, string_types):
+        if isinstance(ds, str):
             return ds

         role_name = ds.get('role', ds.get('name'))
-        if not role_name or not isinstance(role_name, string_types):
+        if not role_name or not isinstance(role_name, str):
             raise AnsibleError('role definitions must contain a role name', obj=ds)

         # if we have the required datastructures, and if the role_name
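The second hunk accepts either a bare role name string or a dict keyed by `role`/`name`. A hypothetical standalone version of that extraction logic (the real method lives on `RoleDefinition` and raises `AnsibleError`):

```python
def extract_role_name(ds):
    # A bare string is itself the role name; a dict must carry a
    # non-empty string under 'role' (preferred) or 'name'.
    if isinstance(ds, str):
        return ds
    role_name = ds.get('role', ds.get('name'))
    if not role_name or not isinstance(role_name, str):
        raise ValueError('role definitions must contain a role name')
    return role_name

assert extract_role_name('common') == 'common'
assert extract_role_name({'role': 'web', 'tags': ['deploy']}) == 'web'
assert extract_role_name({'name': 'db'}) == 'db'
```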

@@ -18,7 +18,6 @@
 from __future__ import annotations

 from ansible.errors import AnsibleError, AnsibleParserError
-from ansible.module_utils.six import string_types
 from ansible.playbook.delegatable import Delegatable
 from ansible.playbook.role.definition import RoleDefinition
@@ -40,10 +39,10 @@ class RoleInclude(RoleDefinition, Delegatable):
     @staticmethod
     def load(data, play, current_role_path=None, parent_role=None, variable_manager=None, loader=None, collection_list=None):

-        if not (isinstance(data, string_types) or isinstance(data, dict)):
+        if not (isinstance(data, str) or isinstance(data, dict)):
             raise AnsibleParserError("Invalid role definition.", obj=data)

-        if isinstance(data, string_types) and ',' in data:
+        if isinstance(data, str) and ',' in data:
             raise AnsibleError("Invalid old style role requirement: %s" % data)

         ri = RoleInclude(play=play, role_basedir=current_role_path, variable_manager=variable_manager, loader=loader, collection_list=collection_list)

@@ -20,7 +20,6 @@ from __future__ import annotations
 import os

 from ansible.errors import AnsibleParserError, AnsibleError
-from ansible.module_utils.six import string_types
 from ansible.playbook.attribute import NonInheritableFieldAttribute
 from ansible.playbook.base import Base
 from ansible.playbook.collectionsearch import CollectionSearch
@@ -70,7 +69,7 @@ class RoleMetadata(Base, CollectionSearch):
         for role_def in ds:
             # FIXME: consolidate with ansible-galaxy to keep this in sync
-            if isinstance(role_def, string_types) or 'role' in role_def or 'name' in role_def:
+            if isinstance(role_def, str) or 'role' in role_def or 'name' in role_def:
                 roles.append(role_def)
                 continue
             try:
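The dependency check above treats an entry as a plain role reference when it is a bare string or a dict carrying a `role` or `name` key. A hypothetical sketch of just that predicate (anything else is handled by the `try` block that follows in the real code):

```python
def is_plain_role_ref(role_def):
    # Short-circuit: for a str, the membership checks are never reached,
    # so the dict-style 'in' lookups only run against mappings.
    return isinstance(role_def, str) or 'role' in role_def or 'name' in role_def

assert is_plain_role_ref("geerlingguy.nginx")
assert is_plain_role_ref({'role': 'common', 'vars': {'port': 80}})
assert not is_plain_role_ref({'src': 'https://example.com/role.tar.gz'})
```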
