diff --git a/.github/BOTMETA.yml b/.github/BOTMETA.yml
index 7256da63708..8dbfff935f7 100644
--- a/.github/BOTMETA.yml
+++ b/.github/BOTMETA.yml
@@ -4097,8 +4097,6 @@ files:
   test/integration/targets/zabbix_: $team_zabbix
   test/integration/targets/ucs_: *ucs
   test/integration/targets/vultr: *vultr
-  test/legacy/:
-    notified: mattclay
   test/lib/:
     notified: mattclay
   test/lib/ansible_test/_internal/cloud/acme.py: *crypto
diff --git a/docs/docsite/rst/scenario_guides/guide_scaleway.rst b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
index 82d7ace7570..d579f8fab3b 100644
--- a/docs/docsite/rst/scenario_guides/guide_scaleway.rst
+++ b/docs/docsite/rst/scenario_guides/guide_scaleway.rst
@@ -68,8 +68,6 @@ The ``ssh_pub_key`` parameter contains your ssh public key as a string. Here is

 .. code-block:: yaml

-    # SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway_ssh_playbook.yml
-
     - name: Test SSH key lifecycle on a Scaleway account
       hosts: localhost
       gather_facts: no
@@ -120,8 +118,6 @@ Take a look at this short playbook to see a working example using ``scaleway_com

 .. code-block:: yaml

-    # SCW_TOKEN='XXX' ansible-playbook ./test/legacy/scaleway_compute.yml
-
     - name: Test compute instance lifecycle on a Scaleway account
       hosts: localhost
       gather_facts: no
@@ -253,7 +249,7 @@ Scaleway S3 object storage
 `Object Storage `_ allows you to store any kind of objects (documents, images, videos, etc.).
 As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: :ref:`s3_bucket_module`, :ref:`aws_s3_module`.

-You can find many examples in ``./test/legacy/roles/scaleway_s3``
+You can find many examples in the `scaleway_s3 integration tests `_.

 .. code-block:: yaml+jinja
diff --git a/test/integration/targets/scaleway_compute/tasks/main.yml b/test/integration/targets/scaleway_compute/tasks/main.yml
index 922b1ea30f6..d2d3ed0042f 100644
--- a/test/integration/targets/scaleway_compute/tasks/main.yml
+++ b/test/integration/targets/scaleway_compute/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' SCW_ORG='YYY' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_compute
-
 - include_tasks: state.yml
 - include_tasks: ip.yml
 - include_tasks: security_group.yml
diff --git a/test/integration/targets/scaleway_image_info/tasks/main.yml b/test/integration/targets/scaleway_image_info/tasks/main.yml
index 540810479a1..370855ce8e0 100644
--- a/test/integration/targets/scaleway_image_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_image_info/tasks/main.yml
@@ -1,6 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_image_info
-
-
 - name: Get image informations and register it in a variable
   scaleway_image_info:
     region: par1
diff --git a/test/integration/targets/scaleway_ip/tasks/main.yml b/test/integration/targets/scaleway_ip/tasks/main.yml
index 9b639ad0271..b12ab0270b0 100644
--- a/test/integration/targets/scaleway_ip/tasks/main.yml
+++ b/test/integration/targets/scaleway_ip/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' SCW_ORG='YYY' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_ip
-
 - name: Create IP (Check)
   check_mode: yes
   scaleway_ip:
diff --git a/test/integration/targets/scaleway_ip_info/tasks/main.yml b/test/integration/targets/scaleway_ip_info/tasks/main.yml
index d36c68fb9c7..a3509f3d02b 100644
--- a/test/integration/targets/scaleway_ip_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_ip_info/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_ip_info
-
 - name: Get ip informations and register it in a variable
   scaleway_ip_info:
     region: par1
diff --git a/test/integration/targets/scaleway_lb/tasks/main.yml b/test/integration/targets/scaleway_lb/tasks/main.yml
index 6bbbe0e703a..45d3551a48b 100644
--- a/test/integration/targets/scaleway_lb/tasks/main.yml
+++ b/test/integration/targets/scaleway_lb/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' SCW_ORG='YYY' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_lb
-
 - name: Create a load-balancer (Check)
   check_mode: yes
   scaleway_lb:
diff --git a/test/integration/targets/scaleway_security_group_info/tasks/main.yml b/test/integration/targets/scaleway_security_group_info/tasks/main.yml
index ebba5223d9f..3164fabcbc0 100644
--- a/test/integration/targets/scaleway_security_group_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_security_group_info/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_security_group_info
-
 - name: Get security group informations and register it in a variable
   scaleway_security_group_info:
     region: par1
diff --git a/test/integration/targets/scaleway_security_group_rule/tasks/main.yml b/test/integration/targets/scaleway_security_group_rule/tasks/main.yml
index 812ef1f506b..2b436c12678 100644
--- a/test/integration/targets/scaleway_security_group_rule/tasks/main.yml
+++ b/test/integration/targets/scaleway_security_group_rule/tasks/main.yml
@@ -1,6 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_security_group_rule
-
-
 - name: Create a scaleway security_group
   scaleway_security_group:
     state: present
diff --git a/test/integration/targets/scaleway_server_info/tasks/main.yml b/test/integration/targets/scaleway_server_info/tasks/main.yml
index a6956e2a3b0..585cc61ed31 100644
--- a/test/integration/targets/scaleway_server_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_server_info/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_server_info
-
 - name: Get server informations and register it in a variable
   scaleway_server_info:
     region: par1
diff --git a/test/integration/targets/scaleway_snapshot_info/tasks/main.yml b/test/integration/targets/scaleway_snapshot_info/tasks/main.yml
index 1827bdb3c8a..20ad9695150 100644
--- a/test/integration/targets/scaleway_snapshot_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_snapshot_info/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_snapshot_info
-
 - name: Get snapshot informations and register it in a variable
   scaleway_snapshot_info:
     region: par1
diff --git a/test/integration/targets/scaleway_sshkey/tasks/main.yml b/test/integration/targets/scaleway_sshkey/tasks/main.yml
index f6ae57890e3..ca6beb10942 100644
--- a/test/integration/targets/scaleway_sshkey/tasks/main.yml
+++ b/test/integration/targets/scaleway_sshkey/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_ssh
-
 - scaleway_sshkey:
     ssh_pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf29yyommeGyKSIgSmX0ISVXP+3x6RUY4JDGLoAMFh2efkfDaRVdsvkvnFuUywgP2RewrjTyLE8w0NpCBHVS5Fm1BAn3yvxOUtTMxTbsQcw6HQ8swJ02+1tewJYjHPwc4GrBqiDo3Nmlq354Us0zBOJg/bBzuEnVD5eJ3GO3gKaCSUYTVrYwO0U4eJE0D9OJeUP9J48kl4ULbCub976+mTHdBvlzRw0Tzfl2kxgdDwlks0l2NefY/uiTdz2oMt092bAY3wZHxjto/DXoChxvaf5s2k8Zb+J7CjimUYnzPlH+zA9F6ROjP5AUu6ZWPd0jOIBl1nDWWb2j/qfNLYM43l sieben@sieben-macbook.local"
     state: present
diff --git a/test/integration/targets/scaleway_user_data/tasks/main.yml b/test/integration/targets/scaleway_user_data/tasks/main.yml
index ee62b35003a..68d23bac5e2 100644
--- a/test/integration/targets/scaleway_user_data/tasks/main.yml
+++ b/test/integration/targets/scaleway_user_data/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_user_data
-
 - name: Create a server
   scaleway_compute:
     name: foobar
diff --git a/test/integration/targets/scaleway_volume/tasks/main.yml b/test/integration/targets/scaleway_volume/tasks/main.yml
index 0546dabe0fa..c4182e0036b 100644
--- a/test/integration/targets/scaleway_volume/tasks/main.yml
+++ b/test/integration/targets/scaleway_volume/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' SCW_ORG='YYY' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_volume
-
 - name: Make sure volume is not there before tests
   scaleway_volume:
     name: ansible-test-volume
diff --git a/test/integration/targets/scaleway_volume_info/tasks/main.yml b/test/integration/targets/scaleway_volume_info/tasks/main.yml
index 4463ddda04c..41e8d4bb11b 100644
--- a/test/integration/targets/scaleway_volume_info/tasks/main.yml
+++ b/test/integration/targets/scaleway_volume_info/tasks/main.yml
@@ -1,5 +1,3 @@
-# SCW_API_KEY='XXX' ansible-playbook ./test/legacy/scaleway.yml --tags test_scaleway_volume_info
-
 - name: Get volume informations and register it in a variable
   scaleway_volume_info:
     region: par1
diff --git a/test/legacy/Makefile b/test/legacy/Makefile
deleted file mode 100644
index 65a471a4a35..00000000000
--- a/test/legacy/Makefile
+++ /dev/null
@@ -1,147 +0,0 @@
-# This Makefile is for legacy integration tests.
-# Most new tests should be implemented using ansible-test.
-# Existing tests are slowly being migrated to ansible-test.
-# See: https://docs.ansible.com/ansible/devel/dev_guide/testing_integration.html - -TEST_DIR ?= ~/ansible_testing -INVENTORY ?= inventory -VARS_FILE ?= integration_config.yml - -# Create a semi-random string for use when testing cloud-based resources -ifndef CLOUD_RESOURCE_PREFIX -CLOUD_RESOURCE_PREFIX := $(shell python -c "import string,random; print('ansible-testing-' + ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(8)));") -endif - -CREDENTIALS_FILE ?= credentials.yml -# If credentials.yml exists, use it -ifneq ("$(wildcard $(CREDENTIALS_FILE))","") -CREDENTIALS_ARG = -e @$(CREDENTIALS_FILE) -else -CREDENTIALS_ARG = -endif - -# http://unix.stackexchange.com/questions/30091/fix-or-alternative-for-mktemp-in-os-x -MYTMPDIR = $(shell mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir') - -VAULT_PASSWORD_FILE = vault-password - -CONSUL_RUNNING := $(shell python consul_running.py) -EUID := $(shell id -u -r) - -UNAME := $(shell uname | tr '[:upper:]' '[:lower:]') - -setup: - rm -rf $(TEST_DIR) - mkdir -p $(TEST_DIR) - -cloud: amazon rackspace azure - -cloud_cleanup: amazon_cleanup rackspace_cleanup - -amazon_cleanup: - python cleanup_ec2.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" - -azure_cleanup: - python cleanup_azure.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" - -digital_ocean: $(CREDENTIALS_FILE) - ansible-playbook digital_ocean.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - exit $$RC; - -gce_setup: - python setup_gce.py "$(CLOUD_RESOURCE_PREFIX)" - -gce_cleanup: - python cleanup_gce.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" - -rackspace_cleanup: - python cleanup_rax.py -y --match="^$(CLOUD_RESOURCE_PREFIX)" - -$(CREDENTIALS_FILE): - @echo "No credentials file found. A file named '$(CREDENTIALS_FILE)' is needed to provide credentials needed to run cloud tests. See sample 'credentials.template' file." 
- @exit 1 - -amazon: $(CREDENTIALS_FILE) - ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_PIPELINING=no BOTO_CONFIG=/dev/null ansible-playbook amazon.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make amazon_cleanup ; \ - exit $$RC; - -azure: $(CREDENTIALS_FILE) - ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook azure.yml -i $(INVENTORY) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make azure_cleanup ; \ - exit $$RC; - -gce: $(CREDENTIALS_FILE) - CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make gce_setup ; \ - ansible-playbook gce.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make gce_cleanup ; \ - exit $$RC; - -rackspace: $(CREDENTIALS_FILE) - ansible-playbook rackspace.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - CLOUD_RESOURCE_PREFIX="$(CLOUD_RESOURCE_PREFIX)" make rackspace_cleanup ; \ - exit $$RC; - -exoscale: - ansible-playbook exoscale.yml -i $(INVENTORY) -e @$(VARS_FILE) -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - exit $$RC; - -jenkins: - ansible-playbook jenkins.yml -i $(INVENTORY) -e @$(VARS_FILE) -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - exit $$RC; - -cloudflare: $(CREDENTIALS_FILE) - ansible-playbook cloudflare.yml -i $(INVENTORY) -e @$(VARS_FILE) -e @$(CREDENTIALS_FILE) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - exit $$RC; - -cloudscale: - ansible-playbook cloudscale.yml -i $(INVENTORY) -e @$(VARS_FILE) -e "resource_prefix=$(CLOUD_RESOURCE_PREFIX)" -v $(TEST_FLAGS) ; \ - RC=$$? 
; \ - exit $$RC; - -$(CONSUL_RUNNING): - -consul: -ifeq ($(CONSUL_RUNNING), True) - ansible-playbook -i $(INVENTORY) consul.yml ; \ - ansible-playbook -i ../../contrib/inventory/consul_io.py consul_inventory.yml -else - @echo "Consul agent is not running locally. To run a cluster locally see http://github.com/sgargan/consul-vagrant" -endif - -test_galaxy: test_galaxy_spec test_galaxy_yaml test_galaxy_git - -test_galaxy_spec: setup - mytmpdir=$(MYTMPDIR) ; \ - ansible-galaxy install -r galaxy_rolesfile -p $$mytmpdir/roles -vvvv ; \ - cp galaxy_playbook.yml $$mytmpdir ; \ - ansible-playbook -i $(INVENTORY) $$mytmpdir/galaxy_playbook.yml -e @$(VARS_FILE) -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - rm -rf $$mytmpdir ; \ - exit $$RC - -test_galaxy_yaml: setup - mytmpdir=$(MYTMPDIR) ; \ - ansible-galaxy install -r galaxy_roles.yml -p $$mytmpdir/roles -vvvv; \ - cp galaxy_playbook.yml $$mytmpdir ; \ - ansible-playbook -i $(INVENTORY) $$mytmpdir/galaxy_playbook.yml -e @$(VARS_FILE) -v $(TEST_FLAGS) ; \ - RC=$$? ; \ - rm -rf $$mytmpdir ; \ - exit $$RC - -test_galaxy_git: setup - mytmpdir=$(MYTMPDIR) ; \ - ansible-galaxy install git+https://bitbucket.org/willthames/git-ansible-galaxy,v1.6 -p $$mytmpdir/roles -vvvv; \ - cp galaxy_playbook_git.yml $$mytmpdir ; \ - ansible-playbook -i $(INVENTORY) $$mytmpdir/galaxy_playbook_git.yml -v $(TEST_FLAGS) ; \ - RC=$$? 
; \
-	rm -rf $$mytmpdir ; \
-	exit $$RC
diff --git a/test/legacy/aix_services.yml b/test/legacy/aix_services.yml
deleted file mode 100644
index 4da434c9359..00000000000
--- a/test/legacy/aix_services.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- name: Services/Subsystems tests for AIX
-  hosts: localhost
-  connection: local
-  tasks:
-    - name: spooler shutdown
-      service:
-        name: spooler
-        state: started
-
-    - name: stopping sendmail
-      service:
-        name: sendmail
-        state: stopped
-
-    - name: starting sendmail
-      service:
-        name: sendmail
-        state: started
-
-    - name: starting an inexistent subsystem and group subsystem
-      service:
-        name: fakeservice
-        state: stopped
diff --git a/test/legacy/amazon.yml b/test/legacy/amazon.yml
deleted file mode 100644
index 18cf530eb58..00000000000
--- a/test/legacy/amazon.yml
+++ /dev/null
@@ -1,35 +0,0 @@
-- hosts: amazon
-  gather_facts: true
-  roles:
-    - { role: test_ec2_key, tags: test_ec2_key }
-    - { role: test_ec2_group, tags: test_ec2_group }
-    #- { role: test_ec2_vpc, tags: test_ec2_vpc }
-    #- { role: test_ec2_vol, tags: test_ec2_vol }
-    #- { role: test_ec2_tag, tags: test_ec2_tag }
-    #- { role: test_ec2_facts, tags: test_ec2_facts }
-    - { role: test_ec2_elb_lb, tags: test_ec2_elb_lb }
-    - { role: test_ec2_eip, tags: test_ec2_eip }
-    #- { role: test_ec2_ami, tags: test_ec2_ami }
-    #- { role: test_ec2, tags: test_ec2 }
-    - { role: test_ec2_asg, tags: test_ec2_asg }
-    - { role: test_ec2_vpc_nat_gateway, tags: test_ec2_vpc_nat_gateway }
-    - { role: test_ecs_ecr, tags: test_ecs_ecr }
-
-# complex test for ec2_elb, split up over multiple plays
-# since there is a setup component as well as the test which
-# runs on a different set of hosts (ec2 instances)
-
-- hosts: amazon
-  roles:
-    - { role: ec2_provision_instances, tags: test_ec2_elb, count: 5 }
-
-- hosts: ec2
-  gather_facts: no
-  remote_user: ec2-user
-  become: true
-  roles:
-    - { role: ec2_elb_instance_setup, tags: test_ec2_elb }
-
-- hosts: amazon
-  roles:
-    - { role: test_ec2_elb, 
tags: test_ec2_elb }
diff --git a/test/legacy/azure.yml b/test/legacy/azure.yml
deleted file mode 100644
index 4fceb2a13e7..00000000000
--- a/test/legacy/azure.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-- hosts: localhost
-  connection: local
-  gather_facts: no
-  tags:
-    - test_azure
-  roles:
-    - { role: test_azure }
diff --git a/test/legacy/cleanup_azure.py b/test/legacy/cleanup_azure.py
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/test/legacy/cleanup_ec2.py b/test/legacy/cleanup_ec2.py
deleted file mode 100644
index 826a720b1f8..00000000000
--- a/test/legacy/cleanup_ec2.py
+++ /dev/null
@@ -1,207 +0,0 @@
-'''
-Find and delete AWS resources matching the provided --match string. Unless
---yes|-y is provided, the prompt for confirmation prior to deleting resources.
-Please use caution, you can easily delete you're *ENTIRE* EC2 infrastructure.
-'''
-
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-import boto
-import boto.ec2.elb
-import optparse
-import os
-import os.path
-import re
-import sys
-import time
-import yaml
-
-from ansible.module_utils.six.moves import input
-
-
-def delete_aws_resources(get_func, attr, opts):
-    for item in get_func():
-        val = getattr(item, attr)
-        if re.search(opts.match_re, val):
-            prompt_and_delete(item, "Delete matching %s? [y/n]: " % (item,), opts.assumeyes)
-
-
-def delete_autoscaling_group(get_func, attr, opts):
-    assumeyes = opts.assumeyes
-    group_name = None
-    for item in get_func():
-        group_name = getattr(item, attr)
-        if re.search(opts.match_re, group_name):
-            if not opts.assumeyes:
-                assumeyes = input("Delete matching %s? 
[y/n]: " % (item).lower()) == 'y' - break - if assumeyes and group_name: - groups = asg.get_all_groups(names=[group_name]) - if groups: - group = groups[0] - group.max_size = 0 - group.min_size = 0 - group.desired_capacity = 0 - group.update() - instances = True - while instances: - tmp_groups = asg.get_all_groups(names=[group_name]) - if tmp_groups: - tmp_group = tmp_groups[0] - if not tmp_group.instances: - instances = False - time.sleep(10) - - group.delete() - while len(asg.get_all_groups(names=[group_name])): - time.sleep(5) - print("Terminated ASG: %s" % group_name) - - -def delete_aws_eips(get_func, attr, opts): - - # the file might not be there if the integration test wasn't run - try: - with open(opts.eip_log, 'r') as f: - eip_log = f.read().splitlines() - except IOError: - print('%s not found.' % opts.eip_log) - return - - for item in get_func(): - val = getattr(item, attr) - if val in eip_log: - prompt_and_delete(item, "Delete matching %s? [y/n]: " % (item,), opts.assumeyes) - - -def delete_aws_instances(reservation, opts): - for list in reservation: - for item in list.instances: - prompt_and_delete(item, "Delete matching %s? 
[y/n]: " % (item,), opts.assumeyes) - - -def prompt_and_delete(item, prompt, assumeyes): - if not assumeyes: - assumeyes = input(prompt).lower() == 'y' - assert hasattr(item, 'delete') or hasattr(item, 'terminate'), "Class <%s> has no delete or terminate attribute" % item.__class__ - if assumeyes: - if hasattr(item, 'delete'): - item.delete() - print("Deleted %s" % item) - if hasattr(item, 'terminate'): - item.terminate() - print("Terminated %s" % item) - - -def parse_args(): - # Load details from credentials.yml - default_aws_access_key = os.environ.get('AWS_ACCESS_KEY', None) - default_aws_secret_key = os.environ.get('AWS_SECRET_KEY', None) - if os.path.isfile('credentials.yml'): - credentials = yaml.load(open('credentials.yml', 'r')) - - if default_aws_access_key is None: - default_aws_access_key = credentials['ec2_access_key'] - if default_aws_secret_key is None: - default_aws_secret_key = credentials['ec2_secret_key'] - - parser = optparse.OptionParser( - usage="%s [options]" % (sys.argv[0], ), - description=__doc__ - ) - parser.add_option( - "--access", - action="store", dest="ec2_access_key", - default=default_aws_access_key, - help="Amazon ec2 access id. Can use EC2_ACCESS_KEY environment variable, or a values from credentials.yml." - ) - parser.add_option( - "--secret", - action="store", dest="ec2_secret_key", - default=default_aws_secret_key, - help="Amazon ec2 secret key. Can use EC2_SECRET_KEY environment variable, or a values from credentials.yml." - ) - parser.add_option( - "--eip-log", - action="store", dest="eip_log", - default=None, - help="Path to log of EIPs created during test." 
- ) - parser.add_option( - "--integration-config", - action="store", dest="int_config", - default="integration_config.yml", - help="path to integration config" - ) - parser.add_option( - "--credentials", "-c", - action="store", dest="credential_file", - default="credentials.yml", - help="YAML file to read cloud credentials (default: %default)" - ) - parser.add_option( - "--yes", "-y", - action="store_true", dest="assumeyes", - default=False, - help="Don't prompt for confirmation" - ) - parser.add_option( - "--match", - action="store", dest="match_re", - default="^ansible-testing-", - help="Regular expression used to find AWS resources (default: %default)" - ) - - (opts, args) = parser.parse_args() - for required in ['ec2_access_key', 'ec2_secret_key']: - if getattr(opts, required) is None: - parser.error("Missing required parameter: --%s" % required) - - return (opts, args) - - -if __name__ == '__main__': - - (opts, args) = parse_args() - - int_config = yaml.load(open(opts.int_config).read()) - if not opts.eip_log: - output_dir = os.path.expanduser(int_config["output_dir"]) - opts.eip_log = output_dir + '/' + opts.match_re.replace('^', '') + '-eip_integration_tests.log' - - # Connect to AWS - aws = boto.connect_ec2(aws_access_key_id=opts.ec2_access_key, - aws_secret_access_key=opts.ec2_secret_key) - - elb = boto.connect_elb(aws_access_key_id=opts.ec2_access_key, - aws_secret_access_key=opts.ec2_secret_key) - - asg = boto.connect_autoscale(aws_access_key_id=opts.ec2_access_key, - aws_secret_access_key=opts.ec2_secret_key) - - try: - # Delete matching keys - delete_aws_resources(aws.get_all_key_pairs, 'name', opts) - - # Delete matching security groups - delete_aws_resources(aws.get_all_security_groups, 'name', opts) - - # Delete matching ASGs - delete_autoscaling_group(asg.get_all_groups, 'name', opts) - - # Delete matching launch configs - delete_aws_resources(asg.get_all_launch_configurations, 'name', opts) - - # Delete ELBs - 
delete_aws_resources(elb.get_all_load_balancers, 'name', opts) - - # Delete recorded EIPs - delete_aws_eips(aws.get_all_addresses, 'public_ip', opts) - - # Delete temporary instances - filters = {"tag:Name": opts.match_re.replace('^', ''), "instance-state-name": ['running', 'pending', 'stopped']} - delete_aws_instances(aws.get_all_instances(filters=filters), opts) - - except KeyboardInterrupt as e: - print("\nExiting on user command.") diff --git a/test/legacy/cleanup_gce.py b/test/legacy/cleanup_gce.py deleted file mode 100644 index f98798a6700..00000000000 --- a/test/legacy/cleanup_gce.py +++ /dev/null @@ -1,93 +0,0 @@ -''' -Find and delete GCE resources matching the provided --match string. Unless ---yes|-y is provided, the prompt for confirmation prior to deleting resources. -Please use caution, you can easily delete your *ENTIRE* GCE infrastructure. -''' - -import optparse -import os -import re -import sys -import yaml - -try: - from libcloud.common.google import ( - GoogleBaseError, - QuotaExceededError, - ResourceExistsError, - ResourceInUseError, - ResourceNotFoundError, - ) - from libcloud.compute.providers import get_driver - from libcloud.compute.types import Provider - _ = Provider.GCE -except ImportError: - print("failed=True msg='libcloud with GCE support (0.13.3+) required for this module'") - sys.exit(1) - -import gce_credentials - -from ansible.module_utils.six.moves import input - - -def delete_gce_resources(get_func, attr, opts): - for item in get_func(): - val = getattr(item, attr) - if re.search(opts.match_re, val, re.IGNORECASE): - prompt_and_delete(item, "Delete matching %s? 
[y/n]: " % (item,), opts.assumeyes) - - -def prompt_and_delete(item, prompt, assumeyes): - if not assumeyes: - assumeyes = input(prompt).lower() == 'y' - assert hasattr(item, 'destroy'), "Class <%s> has no delete attribute" % item.__class__ - if assumeyes: - item.destroy() - print("Deleted %s" % item) - - -def parse_args(): - parser = optparse.OptionParser( - usage="%s [options]" % sys.argv[0], - description=__doc__ - ) - gce_credentials.add_credentials_options(parser) - parser.add_option( - "--yes", "-y", - action="store_true", dest="assumeyes", - default=False, - help="Don't prompt for confirmation" - ) - parser.add_option( - "--match", - action="store", dest="match_re", - default="^ansible-testing-", - help="Regular expression used to find GCE resources (default: %default)" - ) - - (opts, args) = parser.parse_args() - gce_credentials.check_required(opts, parser) - return (opts, args) - - -if __name__ == '__main__': - - (opts, args) = parse_args() - - # Connect to GCE - gce = gce_credentials.get_gce_driver(opts) - - try: - # Delete matching instances - delete_gce_resources(gce.list_nodes, 'name', opts) - - # Delete matching snapshots - def get_snapshots(): - for volume in gce.list_volumes(): - for snapshot in gce.list_volume_snapshots(volume): - yield snapshot - delete_gce_resources(get_snapshots, 'name', opts) - # Delete matching disks - delete_gce_resources(gce.list_volumes, 'name', opts) - except KeyboardInterrupt as e: - print("\nExiting on user command.") diff --git a/test/legacy/cleanup_rax.py b/test/legacy/cleanup_rax.py deleted file mode 100755 index 1d7b1309a7e..00000000000 --- a/test/legacy/cleanup_rax.py +++ /dev/null @@ -1,182 +0,0 @@ -#!/usr/bin/env python - -import os -import re -import yaml -import argparse - -try: - import pyrax - HAS_PYRAX = True -except ImportError: - HAS_PYRAX = False - -from ansible.module_utils.six.moves import input - - -def rax_list_iterator(svc, *args, **kwargs): - method = kwargs.pop('method', 'list') - items = 
getattr(svc, method)(*args, **kwargs) - while items: - retrieved = getattr(svc, method)(*args, marker=items[-1].id, **kwargs) - if items and retrieved and items[-1].id == retrieved[0].id: - del items[-1] - items.extend(retrieved) - if len(retrieved) < 2: - break - return items - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument('-y', '--yes', action='store_true', dest='assumeyes', - default=False, help="Don't prompt for confirmation") - parser.add_argument('--match', dest='match_re', - default='^ansible-testing', - help='Regular expression used to find resources ' - '(default: %(default)s)') - - return parser.parse_args() - - -def authenticate(): - try: - with open(os.path.realpath('./credentials.yml')) as f: - credentials = yaml.load(f) - except Exception as e: - raise SystemExit(e) - - try: - pyrax.set_credentials(credentials.get('rackspace_username'), - credentials.get('rackspace_api_key')) - except Exception as e: - raise SystemExit(e) - - -def prompt_and_delete(item, prompt, assumeyes): - if not assumeyes: - assumeyes = input(prompt).lower() == 'y' - assert hasattr(item, 'delete') or hasattr(item, 'terminate'), \ - "Class <%s> has no delete or terminate attribute" % item.__class__ - if assumeyes: - if hasattr(item, 'delete'): - item.delete() - print("Deleted %s" % item) - if hasattr(item, 'terminate'): - item.terminate() - print("Terminated %s" % item) - - -def delete_rax(args): - """Function for deleting CloudServers""" - print("--- Cleaning CloudServers matching '%s'" % args.match_re) - search_opts = dict(name='^%s' % args.match_re) - for region in pyrax.identity.services.compute.regions: - cs = pyrax.connect_to_cloudservers(region=region) - servers = rax_list_iterator(cs.servers, search_opts=search_opts) - for server in servers: - prompt_and_delete(server, - 'Delete matching %s? 
[y/n]: ' % server, - args.assumeyes) - - -def delete_rax_clb(args): - """Function for deleting Cloud Load Balancers""" - print("--- Cleaning Cloud Load Balancers matching '%s'" % args.match_re) - for region in pyrax.identity.services.load_balancer.regions: - clb = pyrax.connect_to_cloud_loadbalancers(region=region) - for lb in rax_list_iterator(clb): - if re.search(args.match_re, lb.name): - prompt_and_delete(lb, - 'Delete matching %s? [y/n]: ' % lb, - args.assumeyes) - - -def delete_rax_keypair(args): - """Function for deleting Rackspace Key pairs""" - print("--- Cleaning Key Pairs matching '%s'" % args.match_re) - for region in pyrax.identity.services.compute.regions: - cs = pyrax.connect_to_cloudservers(region=region) - for keypair in cs.keypairs.list(): - if re.search(args.match_re, keypair.name): - prompt_and_delete(keypair, - 'Delete matching %s? [y/n]: ' % keypair, - args.assumeyes) - - -def delete_rax_network(args): - """Function for deleting Cloud Networks""" - print("--- Cleaning Cloud Networks matching '%s'" % args.match_re) - for region in pyrax.identity.services.network.regions: - cnw = pyrax.connect_to_cloud_networks(region=region) - for network in cnw.list(): - if re.search(args.match_re, network.name): - prompt_and_delete(network, - 'Delete matching %s? [y/n]: ' % network, - args.assumeyes) - - -def delete_rax_cbs(args): - """Function for deleting Cloud Networks""" - print("--- Cleaning Cloud Block Storage matching '%s'" % args.match_re) - for region in pyrax.identity.services.network.regions: - cbs = pyrax.connect_to_cloud_blockstorage(region=region) - for volume in cbs.list(): - if re.search(args.match_re, volume.name): - prompt_and_delete(volume, - 'Delete matching %s? 
[y/n]: ' % volume, - args.assumeyes) - - -def delete_rax_cdb(args): - """Function for deleting Cloud Databases""" - print("--- Cleaning Cloud Databases matching '%s'" % args.match_re) - for region in pyrax.identity.services.database.regions: - cdb = pyrax.connect_to_cloud_databases(region=region) - for db in rax_list_iterator(cdb): - if re.search(args.match_re, db.name): - prompt_and_delete(db, - 'Delete matching %s? [y/n]: ' % db, - args.assumeyes) - - -def _force_delete_rax_scaling_group(manager): - def wrapped(uri): - manager.api.method_delete('%s?force=true' % uri) - return wrapped - - -def delete_rax_scaling_group(args): - """Function for deleting Autoscale Groups""" - print("--- Cleaning Autoscale Groups matching '%s'" % args.match_re) - for region in pyrax.identity.services.autoscale.regions: - asg = pyrax.connect_to_autoscale(region=region) - for group in rax_list_iterator(asg): - if re.search(args.match_re, group.name): - group.manager._delete = \ - _force_delete_rax_scaling_group(group.manager) - prompt_and_delete(group, - 'Delete matching %s? 
[y/n]: ' % group, - args.assumeyes) - - -def main(): - if not HAS_PYRAX: - raise SystemExit('The pyrax python module is required for this script') - - args = parse_args() - authenticate() - - funcs = [f for n, f in globals().items() if n.startswith('delete_rax')] - for func in sorted(funcs, key=lambda f: f.__name__): - try: - func(args) - except Exception as e: - print("---- %s failed (%s)" % (func.__name__, e.message)) - - -if __name__ == '__main__': - try: - main() - except KeyboardInterrupt: - print('\nExiting...') diff --git a/test/legacy/cloudflare.yml b/test/legacy/cloudflare.yml deleted file mode 100644 index ddae6f26da9..00000000000 --- a/test/legacy/cloudflare.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- hosts: localhost - connection: local - gather_facts: no - tags: - - cloudflare - roles: - - { role: test_cloudflare_dns, tags: test_cloudflare_dns } diff --git a/test/legacy/cnos.yaml b/test/legacy/cnos.yaml deleted file mode 100644 index eab7a879431..00000000000 --- a/test/legacy/cnos.yaml +++ /dev/null @@ -1,24 +0,0 @@ -- hosts: cnos - gather_facts: no - connection: local - - vars: - limit_to: "*" - debug: false - - roles: - - { role: cnos_facts, when: "limit_to in ['*', 'cnos_facts']" } - - { role: cnos_vlan, when: "limit_to in ['*', 'cnos_vlan']" } - - { role: cnos_ethernet, when: "limit_to in ['*', 'cnos_ethernet']" } - - { role: cnos_image, when: "limit_to in ['*', 'cnos_image']" } - - { role: cnos_portchannel, when: "limit_to in ['*', 'cnos_portchannel']" } - - { role: cnos_rollback, when: "limit_to in ['*', 'cnos_rollback']" } - - { role: cnos_save, when: "limit_to in ['*', 'cnos_save']" } - - { role: cnos_template, when: "limit_to in ['*', 'cnos_template']" } - - { role: cnos_conditional_template, when: "limit_to in ['*', 'cnos_conditional_template']" } - - { role: cnos_conditional_command, when: "limit_to in ['*', 'cnos_conditional_command']" } - - { role: cnos_vlag, when: "limit_to in ['*', 'cnos_vlag']" } - - { role: cnos_command, when: "limit_to 
in ['*', 'cnos_command']" } - - { role: cnos_bgp, when: "limit_to in ['*', 'cnos_bgp']" } - - { role: cnos_backup, when: "limit_to in ['*', 'cnos_backup']" } - - { role: cnos_showrun, when: "limit_to in ['*', 'cnos_showrun']" } \ No newline at end of file diff --git a/test/legacy/connection-buildah.yaml b/test/legacy/connection-buildah.yaml deleted file mode 100644 index 336857f932f..00000000000 --- a/test/legacy/connection-buildah.yaml +++ /dev/null @@ -1,5 +0,0 @@ -- hosts: buildah-container - connection: buildah - gather_facts: no - roles: - - { role: connection_buildah } diff --git a/test/legacy/consul.yml b/test/legacy/consul.yml deleted file mode 100644 index 90288f2bdb0..00000000000 --- a/test/legacy/consul.yml +++ /dev/null @@ -1,78 +0,0 @@ -- hosts: localhost - connection: local - gather_facts: false - - vars: - # these are the defaults from the consul-vagrant cluster setup - - mgmt_token: '4791402A-D875-4C18-8316-E652DBA53B18' - - acl_host: '11.0.0.2' - - metadata_json: '{"clearance": "top_secret"}' - - pre_tasks: - # this works except for the KV_lookusp - - name: check that the consul agent is running locally - local_action: wait_for port=8500 timeout=5 - ignore_errors: true - register: consul_running - - roles: - - {role: test_consul_service, - when: not consul_running.failed is defined} - - - {role: test_consul_kv, - when: not consul_running.failed is defined} - - - {role: test_consul_acl, - when: not consul_running.failed is defined} - - - {role: test_consul_session, - when: not consul_running.failed is defined} - - tasks: - - name: setup services with passing check for consul inventory test - consul: - service_name: nginx - service_port: 80 - script: "sh -c true" - interval: 5 - token: '4791402A-D875-4C18-8316-E652DBA53B18' - tags: - - dev - - master - - - name: setup failing service for inventory test - consul: - service_name: nginx - service_port: 443 - script: "sh -c false" - interval: 5 - tags: - - qa - - slave - - - name: setup ssh service for 
inventory test - consul: - service_name: ssh - service_port: 2222 - script: "sh -c true" - interval: 5 - token: '4791402A-D875-4C18-8316-E652DBA53B18' - - - name: update the Anonymous token to allow anon access to kv store - consul_acl: - mgmt_token: '{{mgmt_token}}' - host: '{{acl_host}}' - token: 'anonymous' - rules: - - key: '' - policy: write - - - name: add metadata for the node through kv_store - consul_kv: "key=ansible/metadata/dc1/consul-1 value='{{metadata_json}}'" - - - name: add metadata for the node through kv_store - consul_kv: key=ansible/groups/dc1/consul-1 value='a_group, another_group' - - - name: warn that tests are ignored if consul agent is not running - debug: msg="A consul agent needs to be running inorder to run the tests. To setup a vagrant cluster for use in testing see http://github.com/sgargan/consul-vagrant" - when: consul_running.failed is defined diff --git a/test/legacy/consul_inventory.yml b/test/legacy/consul_inventory.yml deleted file mode 100644 index 0007a0965d4..00000000000 --- a/test/legacy/consul_inventory.yml +++ /dev/null @@ -1,19 +0,0 @@ -- hosts: all;!localhost - gather_facts: false - - pre_tasks: - - name: check that the consul agent is running locally - local_action: wait_for port=8500 timeout=5 - ignore_errors: true - register: consul_running - - roles: - - - {role: test_consul_inventory, - when: not consul_running.failed is defined} - - tasks: - - - name: warn that tests are ignored if consul agent is not running - debug: msg="A consul agent needs to be running inorder to run the tests. To setup a vagrant cluster for use in testing see http://github.com/sgargan/consul-vagrant" - when: consul_running.failed is defined diff --git a/test/legacy/consul_running.py b/test/legacy/consul_running.py deleted file mode 100644 index 0772cd40469..00000000000 --- a/test/legacy/consul_running.py +++ /dev/null @@ -1,11 +0,0 @@ -''' Checks that the consul agent is running locally. 
''' - -if __name__ == '__main__': - - try: - import consul - consul = consul.Consul(host='0.0.0.0', port=8500) - consul.catalog.nodes() - print("True") - except Exception: - pass diff --git a/test/legacy/credentials.template b/test/legacy/credentials.template deleted file mode 100644 index 72e30f9f56e..00000000000 --- a/test/legacy/credentials.template +++ /dev/null @@ -1,26 +0,0 @@ ---- -# Rackspace Credentials -rackspace_username: -rackspace_api_key: -rackspace_region: - -# AWS Credentials -ec2_access_key: -ec2_secret_key: -security_token: - -# GCE Credentials -gce_service_account_email: -gce_pem_file: -gce_project_id: - -# Azure Credentials -azure_subscription_id: "{{ lookup('env', 'AZURE_SUBSCRIPTION_ID') }}" -azure_cert_path: "{{ lookup('env', 'AZURE_CERT_PATH') }}" - -# Cloudflare Credentials -cloudflare_api_token: -cloudflare_email: -cloudflare_zone: - -digitalocean_oauth_token: "{{ lookup('env', 'DO_API_TOKEN') }}" diff --git a/test/legacy/digital_ocean.yml b/test/legacy/digital_ocean.yml deleted file mode 100644 index f87a5227b70..00000000000 --- a/test/legacy/digital_ocean.yml +++ /dev/null @@ -1,9 +0,0 @@ -- hosts: localhost - connection: local - gather_facts: no - tags: - - test_digital_ocean - vars: - dummy_ssh_pub_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCzTSH4WVqnK2kUgtbs2VryNUXBaox7SoXPmV4yMP4INPAndrtPTS3BRzBPrJwQSwjsT7y3kBLNIHGppxLFMoQTEL03WlMDfM1TthMT7Y5B65wOUMxdwbSn9zlblAqbbRg7XU/UgNZb+B2kBPepPRJlh1ap4CPTNrbzdKlmwqS4X3+hX/WM3Gt3S09eNxUKDBK18Fbf/yKvhXP4bGtD0cxYNKL4qoGjEZTkjiYQyC4TvfuZaUtOFpiMLPAt0V7ao2S2bKr8hAgxl9MtrJpa2q1FueAjljMSBWUhOjFgmO0SpWDcBu157vMtscmtUC2cMpQwAY2HQyMJAYs0HOa59dpUKtxBR3LwjXyZvL+RbjEbzZjp4JQSer/bB/jekrxHAIABCwdFmx6qBGNVqDdfT7o+OcEJaAvk4gKEFI24OU8k6WF4ss97VfxlvIT6Bq2p04oUsxN0qh9aSjRVfqJmhkSocf+1iGWGfa4DMFpeQAXCzUkhJS5ecYXSDmyMHGtl7OfhLnncUDHVRjhsmrCFb5kkHHVfNd601ixixydInlssUhQRRzhnJ+ciTh/x7ARDfwMendHTTHCj5sO1IvnOJdCcX4FTMKp1GLo6eaK738o9w4rWL0bs3kWJfxWg91QegwZW0r8xSJBtga7HyQafxivhwEN8knN8HuD47iBuAL+VTw== - roles: - - { role: 
test_digital_ocean, tags: test_digital_ocean } diff --git a/test/legacy/exoscale.yml b/test/legacy/exoscale.yml deleted file mode 100644 index a6f5621bd28..00000000000 --- a/test/legacy/exoscale.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -- hosts: localhost - gather_facts: no - roles: - - { role: test_exoscale_dns, tags: test_exoscale_dns } diff --git a/test/legacy/galaxy_playbook.yml b/test/legacy/galaxy_playbook.yml deleted file mode 100644 index 8e64798c70f..00000000000 --- a/test/legacy/galaxy_playbook.yml +++ /dev/null @@ -1,7 +0,0 @@ -- hosts: localhost - connection: local - - roles: - - "git-ansible-galaxy" - - "http-role" - - "hg-ansible-galaxy" diff --git a/test/legacy/galaxy_playbook_git.yml b/test/legacy/galaxy_playbook_git.yml deleted file mode 100644 index 1d9b03b22a2..00000000000 --- a/test/legacy/galaxy_playbook_git.yml +++ /dev/null @@ -1,5 +0,0 @@ -- hosts: localhost - connection: local - - roles: - - "git-ansible-galaxy" diff --git a/test/legacy/galaxy_roles.yml b/test/legacy/galaxy_roles.yml deleted file mode 100644 index b323ffef6f5..00000000000 --- a/test/legacy/galaxy_roles.yml +++ /dev/null @@ -1,16 +0,0 @@ -# change these to some ansible owned test roles -- src: briancoca.oracle_java7 - name: oracle_java7 - -- src: git+http://bitbucket.org/willthames/git-ansible-galaxy - version: pr-10620 - -- src: http://bitbucket.org/willthames/hg-ansible-galaxy - scm: hg - -- src: https://bitbucket.org/willthames/http-ansible-galaxy/get/master.tar.gz - name: http-role - -- src: git@github.com:geerlingguy/ansible-role-php.git - scm: git - name: php diff --git a/test/legacy/galaxy_rolesfile b/test/legacy/galaxy_rolesfile deleted file mode 100644 index 047eef95502..00000000000 --- a/test/legacy/galaxy_rolesfile +++ /dev/null @@ -1,8 +0,0 @@ -# deliberate non-empty whitespace line to follow - - -git+https://bitbucket.org/willthames/git-ansible-galaxy,pr-10620 -hg+https://bitbucket.org/willthames/hg-ansible-galaxy 
-https://bitbucket.org/willthames/http-ansible-galaxy/get/master.tar.gz,,http-role -# comment -git+git@github.com:geerlingguy/ansible-role-php.git diff --git a/test/legacy/gce.yml b/test/legacy/gce.yml deleted file mode 100644 index 72fb4204829..00000000000 --- a/test/legacy/gce.yml +++ /dev/null @@ -1,14 +0,0 @@ -- hosts: testhost - gather_facts: true - roles: - - { role: test_gce, tags: test_gce } - - { role: test_gce_pd, tags: test_gce_pd } - - { role: test_gce_mig, tags: test_gce_mig } - - { role: test_gcdns, tags: test_gcdns } - - { role: test_gce_tag, tags: test_gce_tag } - - { role: test_gce_net, tags: test_gce_net } - - { role: test_gcp_url_map, tags: test_gcp_url_map } - - { role: test_gcp_glb, tags: test_gcp_glb } - - { role: test_gcp_healthcheck, tags: test_gcp_healthcheck } - - { role: test_gce_labels, tags: test_gce_labels } - # TODO: tests for gce_lb, gc_storage diff --git a/test/legacy/gce_credentials.py b/test/legacy/gce_credentials.py deleted file mode 100644 index 4d3b540fe58..00000000000 --- a/test/legacy/gce_credentials.py +++ /dev/null @@ -1,52 +0,0 @@ -import collections -import os -import sys -import yaml - -try: - from libcloud.compute.types import Provider - from libcloud.compute.providers import get_driver - _ = Provider.GCE -except ImportError: - print("failed=True msg='libcloud with GCE support (0.13.3+) required for this module'") - sys.exit(1) - - -def add_credentials_options(parser): - default_service_account_email = None - default_pem_file = None - default_project_id = None - - # Load details from credentials.yml - if os.path.isfile('credentials.yml'): - credentials = yaml.load(open('credentials.yml', 'r')) - default_service_account_email = credentials[ - 'gce_service_account_email'] - default_pem_file = credentials['gce_pem_file'] - default_project_id = credentials['gce_project_id'] - - parser.add_option( - "--service_account_email", action="store", - dest="service_account_email", default=default_service_account_email, - help="GCE 
service account email. Default is loaded from credentials.yml.") - parser.add_option( - "--pem_file", action="store", dest="pem_file", - default=default_pem_file, - help="GCE client key. Default is loaded from credentials.yml.") - parser.add_option( - "--project_id", action="store", dest="project_id", - default=default_project_id, - help="Google Cloud project ID. Default is loaded from credentials.yml.") - - -def check_required(opts, parser): - for required in ['service_account_email', 'pem_file', 'project_id']: - if getattr(opts, required) is None: - parser.error("Missing required parameter: --%s" % required) - - -def get_gce_driver(opts): - # Connect to GCE - gce_cls = get_driver(Provider.GCE) - return gce_cls(opts.service_account_email, opts.pem_file, - project=opts.project_id) diff --git a/test/legacy/group_vars/all b/test/legacy/group_vars/all deleted file mode 100644 index 0f8a2260309..00000000000 --- a/test/legacy/group_vars/all +++ /dev/null @@ -1,17 +0,0 @@ -a: 999 -b: 998 -c: 997 -d: 996 -uno: 1 -dos: 2 -tres: 3 -etest: 'from group_vars' -inventory_beats_default: 'narf' -# variables used for hash merging behavior testing -test_hash: - group_vars_all: "this is in group_vars/all" -# variables used for conditional testing -test_bare: true -test_bare_var: 123 -test_bare_nested_good: "test_bare_var == 123" -test_bare_nested_bad: "{{test_bare_var}} == 321" diff --git a/test/legacy/group_vars/amazon b/test/legacy/group_vars/amazon deleted file mode 100644 index 3d7209ef1b0..00000000000 --- a/test/legacy/group_vars/amazon +++ /dev/null @@ -1,3 +0,0 @@ ---- -ec2_url: ec2.amazonaws.com -ec2_region: us-east-1 diff --git a/test/legacy/group_vars/local b/test/legacy/group_vars/local deleted file mode 100644 index 4bb5f3a24c1..00000000000 --- a/test/legacy/group_vars/local +++ /dev/null @@ -1,3 +0,0 @@ -tres: 'three' -hash_test: - group_vars_local: "this is in group_vars/local" diff --git a/test/legacy/group_vars/vyos.yaml b/test/legacy/group_vars/vyos.yaml deleted 
file mode 100644 index 43c37b11cc4..00000000000 --- a/test/legacy/group_vars/vyos.yaml +++ /dev/null @@ -1,5 +0,0 @@ ---- -cli: - host: "{{ ansible_ssh_host }}" -# username: "{{ vyos_cli_user | default('ansible-admin') }}" -# password: "{{ vyos_cli_pass | default('adminpw') }}" diff --git a/test/legacy/host_vars/testhost b/test/legacy/host_vars/testhost deleted file mode 100644 index 4d04a96cbcf..00000000000 --- a/test/legacy/host_vars/testhost +++ /dev/null @@ -1,10 +0,0 @@ -a: 1 -b: 2 -c: 3 -d: 4 -role_var_beats_inventory: 'should_not_see_this' -test_hash: - host_vars_testhost: "this is in host_vars/testhost" - -# Var precedence testing -defaults_file_var_role3: "overridden from inventory" diff --git a/test/legacy/integration_config.yml b/test/legacy/integration_config.yml deleted file mode 100644 index 3159daf196c..00000000000 --- a/test/legacy/integration_config.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -win_output_dir: 'C:\ansible_testing' -output_dir: ~/ansible_testing -non_root_test_user: ansible -pip_test_package: isort diff --git a/test/legacy/inventory b/test/legacy/inventory deleted file mode 100644 index 3534b8cb8a2..00000000000 --- a/test/legacy/inventory +++ /dev/null @@ -1,55 +0,0 @@ -[local] -testhost ansible_ssh_host=127.0.0.1 ansible_connection=local -testhost2 ansible_ssh_host=127.0.0.1 ansible_connection=local -# For testing delegate_to -testhost3 ansible_ssh_host=127.0.0.3 -testhost4 ansible_ssh_host=127.0.0.4 -# For testing fact gathering -facthost[0:20] ansible_host=127.0.0.1 ansible_connection=local - -[binary_modules] -testhost_binary_modules ansible_host=127.0.0.1 ansible_connection=local - -[local_group] -kube-pippin.knf.local - -# the following inline declarations are accompanied -# by (preferred) group_vars/ and host_vars/ variables -# and are used in testing of variable precedence - -[inven_overridehosts] -invenoverride ansible_ssh_host=127.0.0.1 ansible_connection=local - -[all:vars] -extra_var_override=FROM_INVENTORY 
-inven_var=inventory_var -unicode_host_var=CaféEñyei - -[inven_overridehosts:vars] -foo=foo -var_dir=vars - -[arbitrary_parent:children] -local - -[local:vars] -parent_var=6000 -groups_tree_var=5000 - -[arbitrary_parent:vars] -groups_tree_var=4000 -overridden_in_parent=1000 - -[arbitrary_grandparent:children] -arbitrary_parent - -[arbitrary_grandparent:vars] -groups_tree_var=3000 -grandparent_var=2000 -overridden_in_parent=2000 - -[amazon] -localhost ansible_ssh_host=127.0.0.1 ansible_connection=local - -[azure] -localhost ansible_ssh_host=127.0.0.1 ansible_connection=local diff --git a/test/legacy/inventory.yaml b/test/legacy/inventory.yaml deleted file mode 100644 index 25760b347c8..00000000000 --- a/test/legacy/inventory.yaml +++ /dev/null @@ -1,62 +0,0 @@ -all: - children: - local: - hosts: - testhost: - ansible_host: 127.0.0.1 - ansible_connection: local - testhost2: - ansible_host: 127.0.0.1 - ansible_connection: local - # For testing delegate_to - testhost3: - ansible_ssh_host: 127.0.0.3 - testhost4: - ansible_ssh_host: 127.0.0.4 - # For testing fact gathering - 'facthost[0:20]': - ansible_host: 1270.0.0.1 - ansible_connection: local - vars: - parent_var: 6000 - groups_tree_var: 5000 - - binary_modules: - hosts: - testhost_binary_modules: - ansible_host: 127.0.0.1 - ansible_connection: local - - inven_overridehosts: - desc: | - the following inline declarations are accompanied# by (preferred) group_vars/ and host_vars/ variables and - are used in testing of variable precedence - hosts: - invenoverride: - ansible_ssh_host: 127.0.0.1 - ansible_connection: local - vars: - foo: foo - var_dir: vars - - arbitrary_grandparent: - children: - arbitrary_parent: - children: - local: - vars: - groups_tree_var: 4000 - overridden_in_parent: 1000 - vars: - groups_tree_var: 3000 - grandparent_var: 2000 - overridden_in_parent: 2000 - amazon: - hosts: - localhost: - ansible_ssh_host: 127.0.0.1 - ansible_connection: local - vars: - extra_var_override: FROM_INVENTORY - 
inven_var: inventory_var - unicode_host_var: CaféEñyei diff --git a/test/legacy/jenkins.yml b/test/legacy/jenkins.yml deleted file mode 100644 index a20e4bdc2b0..00000000000 --- a/test/legacy/jenkins.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- hosts: localhost - connection: local - gather_facts: no - tags: - - jenkins - roles: - - test_jenkins_job diff --git a/test/legacy/netscaler.yaml b/test/legacy/netscaler.yaml deleted file mode 100644 index ee35939eb28..00000000000 --- a/test/legacy/netscaler.yaml +++ /dev/null @@ -1,25 +0,0 @@ ---- - -- hosts: netscaler - - gather_facts: no - connection: local - - vars: - limit_to: "*" - debug: false - - roles: - - { role: netscaler_cs_action, when: "limit_to in ['*', 'netscaler_cs_action']" } - - { role: netscaler_cs_policy, when: "limit_to in ['*', 'netscaler_cs_policy']" } - - { role: netscaler_cs_vserver, when: "limit_to in ['*', 'netscaler_cs_vserver']" } - - { role: netscaler_server, when: "limit_to in ['*', 'netscaler_server']" } - - { role: netscaler_lb_vserver, when: "limit_to in ['*', 'netscaler_lb_vserver']" } - - { role: netscaler_lb_monitor, when: "limit_to in ['*', 'netscaler_lb_monitor']" } - - { role: netscaler_save_config, when: "limit_to in ['*', 'netscaler_save_config']" } - - { role: netscaler_service, when: "limit_to in ['*', 'netscaler_service']" } - - { role: netscaler_servicegroup, when: "limit_to in ['*', 'netscaler_servicegroup']" } - - { role: netscaler_gslb_service, when: "limit_to in ['*', 'netscaler_gslb_service']" } - - { role: netscaler_gslb_site, when: "limit_to in ['*', 'netscaler_gslb_site']" } - - { role: netscaler_gslb_vserver, when: "limit_to in ['*', 'netscaler_gslb_vserver']" } - - { role: netscaler_ssl_certkey, when: "limit_to in ['*', 'netscaler_ssl_certkey']" } diff --git a/test/legacy/nuage.yaml b/test/legacy/nuage.yaml deleted file mode 100644 index b59efbdcd98..00000000000 --- a/test/legacy/nuage.yaml +++ /dev/null @@ -1,11 +0,0 @@ ---- -- hosts: nuage - gather_facts: no - 
connection: local - - vars: - limit_to: "*" - debug: false - - roles: - - { role: nuage_vspk, when: "limit_to in ['*', 'nuage_vspk']" } \ No newline at end of file diff --git a/test/legacy/online.yml b/test/legacy/online.yml deleted file mode 100644 index a64a5afa3b4..00000000000 --- a/test/legacy/online.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- hosts: localhost - gather_facts: no - connection: local - - roles: - - { role: online_server_info, tags: test_online_server_info } - - { role: online_user_info, tags: test_online_user_info } diff --git a/test/legacy/opennebula.yml b/test/legacy/opennebula.yml deleted file mode 100644 index 66f38d904e0..00000000000 --- a/test/legacy/opennebula.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- -- hosts: localhost - roles: - - { role: one_vm, tags: test_one_vm } - - { role: one_image, tags: test_one_image } - - { role: one_image_info, tags: test_one_image_info } - - { role: one_service, tags: test_one_service } diff --git a/test/legacy/ovs.yaml b/test/legacy/ovs.yaml deleted file mode 100644 index 35d3acc0fd2..00000000000 --- a/test/legacy/ovs.yaml +++ /dev/null @@ -1,36 +0,0 @@ ---- -- hosts: ovs - gather_facts: no - remote_user: ubuntu - become: yes - - vars: - limit_to: "*" - debug: false - -# Run the tests within blocks allows the next module to be tested if the previous one fails. -# This is done to allow https://github.com/ansible/dci-partner-ansible/ to run the full set of tests. - - - tasks: - - set_fact: - test_failed: false - failed_modules: [] - - block: - - include_role: - name: openvswitch_db - when: "limit_to in ['*', 'openvswitch_db']" - rescue: - - set_fact: - failed_modules: "{{ failed_modules + [ 'openvswitch_db' ]}}" - test_failed: true - - -########### - - debug: var=failed_modules - when: test_failed - - - name: Has any previous test failed? 
- fail: - msg: "One or more tests failed, check log for details" - when: test_failed diff --git a/test/legacy/rackspace.yml b/test/legacy/rackspace.yml deleted file mode 100644 index 0fd56dc300b..00000000000 --- a/test/legacy/rackspace.yml +++ /dev/null @@ -1,45 +0,0 @@ ---- -- hosts: localhost - connection: local - gather_facts: false - tags: - - rackspace - roles: - - role: test_rax - tags: test_rax - - - role: test_rax_facts - tags: test_rax_facts - - - role: test_rax_meta - tags: test_rax_meta - - - role: test_rax_keypair - tags: test_rax_keypair - - - role: test_rax_clb - tags: test_rax_clb - - - role: test_rax_clb_nodes - tags: test_rax_clb_nodes - - - role: test_rax_network - tags: test_rax_network - - - role: test_rax_cbs - tags: test_rax_cbs - - - role: test_rax_cbs_attachments - tags: test_rax_cbs_attachments - - - role: test_rax_identity - tags: test_rax_identity - - - role: test_rax_cdb - tags: test_rax_cdb - - - role: test_rax_cdb_database - tags: test_rax_cdb_database - - - role: test_rax_scaling_group - tags: test_rax_scaling_group diff --git a/test/legacy/roles/azure_rm_networkinterface/tasks/main.yml b/test/legacy/roles/azure_rm_networkinterface/tasks/main.yml deleted file mode 100644 index 9bacb6e42a4..00000000000 --- a/test/legacy/roles/azure_rm_networkinterface/tasks/main.yml +++ /dev/null @@ -1,339 +0,0 @@ -- name: Create virtual network - azure_rm_virtualnetwork: - name: vnet001 - resource_group: "{{ resource_group }}" - address_prefixes_cidr: "10.10.0.0/16" - register: output - -- name: Create subnet - azure_rm_subnet: - name: subnet001 - resource_group: "{{ resource_group }}" - virtual_network_name: vnet001 - address_prefix_cidr: "10.10.0.0/24" - register: output - -- name: Create second virtual network - azure_rm_virtualnetwork: - name: vnet002 - resource_group: "{{ resource_group }}" - address_prefixes_cidr: "10.20.0.0/16" - register: output - -- name: Create second subnet - azure_rm_subnet: - name: subnet002 - resource_group: "{{ 
resource_group }}" - virtual_network_name: vnet002 - address_prefix_cidr: "10.20.0.0/24" - register: output - -- name: Create security group - azure_rm_securitygroup: - name: secgroup001 - resource_group: "{{ resource_group }}" - register: output - -- name: Create second security group - azure_rm_securitygroup: - name: secgroup002 - resource_group: "{{ resource_group }}" - register: output - -- name: Create a public ip - azure_rm_publicipaddress: - name: publicip001 - resource_group: "{{ resource_group }}" - allocation_method: "Static" - register: output - -- name: Create second public ip - azure_rm_publicipaddress: - name: publicip002 - resource_group: "{{ resource_group }}" - allocation_method: "Static" - register: output - -- name: Delete network interface, if it exists - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - state: absent - register: output - -- name: Should require subnet when creating nic - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - virtual_network_name: vnet001 - security_group_name: secgroup001 - public_ip_address_name: publicip001 - register: output - ignore_errors: yes - -- assert: - that: - - output.failed - - "'subnet' in output.msg" - -- name: Should require virtual network when creating nic - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - security_group_name: secgroup001 - public_ip_address_name: publicip001 - subnet: subnet001 - register: output - ignore_errors: yes - -- assert: - that: - - output.failed - - "'virtual_network_name' in output.msg" - -- name: Create nic - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - virtual_network_name: vnet001 - subnet: subnet001 - security_group_name: secgroup001 - public_ip_address_name: publicip001 - register: output - -- name: Should be idempotent - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - 
virtual_network_name: vnet001 - subnet: subnet001 - security_group_name: secgroup001 - public_ip_address_name: publicip001 - register: output - -- assert: - that: not output.changed - -- name: Should change private IP address - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - private_ip_address: 10.10.0.10 - private_ip_allocation_method: Static - virtual_network_name: vnet001 - subnet: subnet001 - security_group_name: secgroup001 - public_ip_address_name: publicip001 - register: output - -- assert: - that: - - output.changed - - output.state.ip_configuration.private_ip_address == '10.10.0.10' - - output.state.ip_configuration.private_ip_allocation_method == 'Static' - -- name: Should change virtual network and subnet - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - private_ip_allocation_method: Dynamic - virtual_network_name: vnet002 - subnet: subnet002 - security_group_name: secgroup002 - public_ip_address_name: publicip002 - register: output - -- assert: - that: - - output.changed - - "'10.20' in output.state.ip_configuration.private_ip_address" - - output.state.ip_configuration.private_ip_allocation_method == 'Dynamic' - - output.state.ip_configuration.subnet.name == 'subnet002' - - output.state.ip_configuration.public_ip_address.name == 'publicip002' - -- name: Add tags - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - tags: - testing: testing - foo: bar - register: output - -- assert: - that: - - output.state.tags | length == 2 - - output.state.tags.testing == 'testing' - -- name: Gather facts for tags - azure_rm_networkinterface_info: - tags: testing - register: output - -- assert: - that: - - azure_networkinterfaces | length >= 1 - -- name: Gather facts for resource group and tags - azure_rm_networkinterface_info: - resource_group: "{{ resource_group }}" - tags: testing - register: output - -- assert: - that: - - azure_networkinterfaces| length == 1 
- -- name: Gather facts for name and tags - azure_rm_networkinterface_info: - resource_group: "{{ resource_group }}" - name: nic003 - tags: testing - register: output - -- assert: - that: - - azure_networkinterfaces | length == 1 - -- name: Purge one tag - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - tags: - testing: testing - register: output - -- assert: - that: - - output.changed - - output.state.tags | length == 1 - -- name: Purge all tags - azure_rm_networkinterface: - name: nic003 - resource_group: "{{ resource_group }}" - tags: {} - register: output - -- assert: - that: - - output.changed - - output.state.tags | length == 0 - -- name: Remove network interface, if it exists - azure_rm_networkinterface: - name: "{{ item }}" - resource_group: "{{ resource_group }}" - state: absent - register: output - with_items: - - nic004 - - nic005 - -- name: Remove publicip, if it exists - azure_rm_publicipaddress: - name: "{{ item }}" - resource_group: "{{ resource_group }}" - state: absent - with_items: - - nic00401 - - nic00501 - -- name: Remove security group, if it exists - azure_rm_securitygroup: - name: "{{ item }}" - resource_group: "{{ resource_group }}" - state: absent - with_items: - - nic00401 - - nic00501 - -- name: Should create default security group and default public ip for linux host - azure_rm_networkinterface: - name: nic004 - resource_group: "{{ resource_group }}" - virtual_network_name: vnet001 - subnet: subnet001 - register: output - -- assert: - that: - - output.state.ip_configuration.public_ip_address.name == 'nic00401' - - output.state.network_security_group.name == 'nic00401' - -- name: Gather facts for security group nic00401 - azure_rm_securitygroup_info: - resource_group: "{{ resource_group }}" - name: nic00401 - register: output - -- assert: - that: - - azure_securitygroups[0].properties.securityRules[0].properties.destinationPortRange == '22' - -- name: Should create default security group and default 
public ip for windows host - azure_rm_networkinterface: - name: nic005 - resource_group: "{{ resource_group }}" - virtual_network_name: vnet001 - subnet: subnet001 - os_type: Windows - open_ports: - - 9000 - - '9005-9010' - register: output - -- assert: - that: - - output.state.ip_configuration.public_ip_address.name == 'nic00501' - - output.state.network_security_group.name == 'nic00501' - -- name: Gather facts for security group nic00501 - azure_rm_securitygroup_info: - resource_group: "{{ resource_group }}" - name: nic00501 - register: output - -- name: Security group should allow RDP access on custom port - assert: - that: - - azure_securitygroups[0].properties.securityRules[0].properties.destinationPortRange == '9000' - - azure_securitygroups[0].properties.securityRules[1].properties.destinationPortRange == '9005-9010' - -- name: Gather facts for one nic - azure_rm_networkinterface_info: - resource_group: "{{ resource_group }}" - name: nic003 - register: output - -- assert: - that: - - azure_networkinterfaces | length == 1 - -- name: Gather facts for all nics in resource groups - azure_rm_networkinterface_info: - resource_group: "{{ resource_group }}" - register: output - -- assert: - that: - - azure_networkinterfaces | length >= 3 - -- name: Gather facts for all nics - azure_rm_networkinterface_info: - register: output - -- assert: - that: - - azure_networkinterfaces | length >= 3 - -- name: Delete nic - azure_rm_networkinterface: - name: "{{ item }}" - resource_group: "{{ resource_group }}" - state: absent - register: output - with_items: - - nic003 - - nic004 - - nic005 diff --git a/test/legacy/roles/azure_rm_resourcegroup/tasks/main.yml b/test/legacy/roles/azure_rm_resourcegroup/tasks/main.yml deleted file mode 100644 index 00c257cce2d..00000000000 --- a/test/legacy/roles/azure_rm_resourcegroup/tasks/main.yml +++ /dev/null @@ -1,142 +0,0 @@ -- name: Get resource group - azure_rm_resourcegroup_info: - name: "{{ resource_group }}" - -- name: Create resource 
group - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - location: "{{ azure_resourcegroups[0].location }}" - tags: - testing: testing - delete: never - register: output - -- assert: - that: - - output.state.tags.testing == 'testing' - - output.state.tags.delete == 'never' - - output.state.location == '{{ location }}' - -- name: Should be idempotent - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - tags: - testing: testing - delete: never - register: output - -- assert: - that: not output.changed - -- name: Change resource group tags - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - tags: - testing: 'no' - delete: 'on-exit' - foo: 'bar' - register: output - -- assert: - that: - - output.state.tags | length == 3 - - output.state.tags.testing == 'no' - - output.state.tags.delete == 'on-exit' - - output.state.tags.foo == 'bar' - -- name: Gather facts by tags - azure_rm_resourcegroup_info: - tags: - - testing - - foo:bar - register: output - -- assert: - that: azure_resourcegroups | length == 1 - -- name: Purge one tag - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - tags: - testing: 'no' - delete: 'on-exit' - debug: yes - register: output - -- assert: - that: - - output.state.tags | length == 2 - - output.state.tags.testing == 'no' - - output.state.tags.delete == 'on-exit' - -- name: Purge no tags - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - register: output - -- assert: - that: - - output.state.tags | length == 2 - -- name: Purge all tags - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - tags: {} - register: output - -- assert: - that: - - output.state.tags | length == 0 - -- name: Add a resource - azure_rm_virtualnetwork: - resource_group: "{{ resource_prefix }}" - name: "virtualnet01" - address_prefixes_cidr: '10.1.0.0/16' - register: output - -- name: Remove resource group should fail - azure_rm_resourcegroup: - name: "{{ resource_prefix }}" - state: absent - register: output - ignore_errors: yes - 
-- assert: - that: - - output.failed - - "'Resources exist' in output.msg" - -- name: Create a second resource group - azure_rm_resourcegroup: - name: Testing2 - location: "{{ location }}" - register: output - -- name: Gather facts for a resource group - azure_rm_resourcegroup_info: - name: "{{ resource_group }}" - register: output - -- assert: - that: azure_resourcegroups | length == 1 - -- name: Gather facts for all resource groups - azure_rm_resourcegroup_info: - register: output - -- assert: - that: azure_resourcegroups | length > 1 - -- name: Force remove resource group - azure_rm_resourcegroup: - name: "{{ resource_group }}" - state: absent - force: yes - register: output - -- name: Remove second resource group - azure_rm_resourcegroup: - name: Testing2 - state: absent - register: output diff --git a/test/legacy/roles/cnos_backup/README.md b/test/legacy/roles/cnos_backup/README.md deleted file mode 100644 index e1ccda36575..00000000000 --- a/test/legacy/roles/cnos_backup/README.md +++ /dev/null @@ -1,113 +0,0 @@ -# Ansible Role: cnos_backup_sample - Saving the switch configuration to a remote server ---- - - -This role is an example of using the *cnos_backup.py* Lenovo module in the context of CNOS switch configuration. This module allows you to work with switch configurations. It provides a way to back up the running or startup configurations of a switch to a remote server. This is achieved by periodically saving a copy of the startup or running configuration of the network device to a remote server using FTP, SFTP, TFTP, or SCP. - -The results of the operation can be viewed in *results* directory. - -For more details, see [Lenovo modules for Ansible: cnos_backup](http://systemx.lenovofiles.com/help/index.jsp?topic=%2Fcom.lenovo.switchmgt.ansible.doc%2Fcnos_backup.html&cp=0_3_1_0_4_4). 
- - -## Requirements ---- - - -- Ansible version 2.2 or later ([Ansible installation documentation](https://docs.ansible.com/ansible/intro_installation.html)) -- Lenovo switches running CNOS version 10.2.1.0 or later -- an SSH connection to the Lenovo switch (SSH must be enabled on the network device) - - -## Role Variables ---- - - -Available variables are listed below, along with description. - -The following are mandatory inventory variables: - -Variable | Description ---- | --- -`username` | Specifies the username used to log into the switch -`password` | Specifies the password used to log into the switch -`enablePassword` | Configures the password used to enter Global Configuration command mode on the switch (this is an optional parameter) -`hostname` | Searches the hosts file at */etc/ansible/hosts* and identifies the IP address of the switch on which the role is going to be applied -`deviceType` | Specifies the type of device from where the configuration will be backed up (**g8272_cnos** - G8272, **g8296_cnos** - G8296) - -The values of the variables used need to be modified to fit the specific scenario in which you are deploying the solution. To change the values of the variables, you need to visits the *vars* directory of each role and edit the *main.yml* file located there. The values stored in this file will be used by Ansible when the template is executed. - -The syntax of *main.yml* file for variables is the following: - -``` -