Compare commits


1 Commits
master ... wip

Author SHA1 Message Date
Felix Stupp fdf19c4e26
WIP 5 years ago

13
.gitignore vendored

@ -1,15 +1,6 @@
/ansible_collections
credentials/**
facts/**
/venv/**
public_keys/**
__pycache__/
!README.md
!public_keys/*.sh
!public_keys/*.py
credentials/
public_keys/
*.retry
*.facts
/*.yml
!/site.yml
!/hosts.yml
!/collection_requirements.yml

3
.gitmodules vendored

@ -1,3 +0,0 @@
[submodule "misc/mitogen"]
path = misc/mitogen
url = https://git.banananet.work/archive/mitogen.git

@ -1,16 +1,11 @@
{
"search.usePCRE2": true,
"files.associations": {
"*.yml": "ansible"
},
"[ansible]": {
"editor.tabSize": 2,
"editor.autoIndent": false
},
"editor.tabSize": 2,
"search.exclude": {
"**/node_modules": true,
"**/bower_components": true,
"playbooks/{credentials,filter_plugins,group_vars,helpers,host_vars,public_keys,roles}/": true
},
"files.exclude": {
"playbooks/{credentials,filter_plugins,group_vars,helpers,host_vars,public_keys,roles}/": true
},
"python.pythonPath": "/home/zocker/Repositories/ansible2/venv/bin/python",
"editor.tabSize": 2
}

@ -1,21 +0,0 @@
MIT License
Copyright (c) 2020 Felix Stupp
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -2,93 +2,50 @@
This playbook defines the configuration for all servers / devices controlled by the BananaNetwork.
All systems are expected to run Debian GNU/Linux or a similar distribution.
## Roles
Following roles have been defined for making a server configuration easy:
The following roles have been defined to make creating a server configuration easy:
- **account** installs a user account preconfigured with tmux, vim and zsh.
- **acme** defines roles for handling the automatic handling of certificates with *certbot*
- **acme** defines roles for the automatic handling of certificates with *acme.sh*
- **application** installs main application
- **certificate** issues a given certificate
- **bootstrap** defines a way to connect to a server which has not been configured yet, changes user password and hardening SSH access
- **bootstrap** defines a way to connect to a server which has not been configured yet
- **common** defines the installation of common packages and common configurations like firewall
- **dns** defines roles for handling dns authorities and slaves, uses *bind9*
- **application** installs main application (installs from bind9 official repository)
- **entries** configures given dns entries on the authoritative dns server (the authoritative server must be configured by this repository)
- **application** installs main application
- **master** configures a dns authority with support of DNSSEC for a domain
- **server_entries** configures default A/AAAA/SSHFP and additional records for current host and given domain (uses **dns/entries**)
- **slave** configures an automatic cloning slave for a domain
- **fail2ban** defines roles for configuring fail2ban for different systems
- **application** installs main application
- **rule** configures a filter + jail for a given server / use case
- **git_auto_update** adds an auto update mechanism for a git repository based on signed release tags
- **hostname** configures the hostname for a given host
- **misc** contains some required but small roles
- **backup_files** configures auto backup for a given directory
- **deb_unstable** enables Debian unstable on low priority
- **docker** installs *Docker* (from official Docker repository)
- **deb_unstable** enables debian unstable on low priority
- **handlers** contains some handlers used by other roles
- **ip_discover** configures a server to automatically report its IP addresses to a supported service
- **overlay_mount** configures an overlay mount with systemd
- **system_user** creates a system user
- **mysql** defines roles for handling mysql databases and users, uses *MariaDB*
- **application** installs the main application with automatic backup
- **backup_database** configures auto backup for a given mysql database
- **database** configures a database for an external application with its own user (uses **mysql/backup_database**)
- **nfs** defines roles to set up NFS file shares
- **export** configures a NFS share
- **server** configures main NFS server without default shares
- **database** configures a database for an external application with its own user
- **nginx** defines roles to set up virtual servers, certificates will be requested by default
- **application** installs and configures the main requirements
- **default_server** configures default server for hostname fqdn with status info (only accessible from localhost)
- **forward** sets up a forwarding from one domain to another
- **php** sets up a PHP webpage with files at the given directory
- **php-fpm** installs php-fpm and requirements
- **php-pool** sets up a php-fpm pool running its own user account
- **php** sets up a PHP webpage with files at the given directory
- **proxy** sets up a reverse proxy to a local port / proxy
- **server** sets up a nginx server with custom directives
- **static** sets up a static web root
- **upstream** sets up an upstream accessible to nginx virtual servers
- **upstream** sets up an upstream accessible to nginx servers
- **node** defines roles for setting up node applications
- **application** installs node (installs from node official repository)
- **application** installs the main application
- **server** defines roles using different kinds of server applications; applications will be configured using separate system users
- **firefox-sync** sets up a Firefox sync server for bookmarks, history, etc.
- **gitea** sets up a git repository using *Gitea* as web overlay (fail2ban)
- **minecraft** sets up a Minecraft server at the given version (AppArmor, no Web UI)
- **firefox-sync** sets up a syncserver for Mozilla Firefox
- **gitea** sets up a git repository using *Gitea* as web overlay
- **nextcloud** sets up a cloud storage using *NextCloud*
- **node** sets up a *Node.js* server from a repository with a database expecting it can be configured using environment variables
- **node** sets up a *Node.js* server from a repository with a database expecting it can be configured by command arguments
- **php** sets up a PHP webpage from a repository
- **spotme** sets up a SpotMe server
- **static** sets up a static virtual server with files from a repository
- **tt-rss** sets up a Tiny Tiny RSS Feed Reader server
- **tt-rss** sets up an RSS feed reader using *TinyTinyRSS*
- **typo3** defines a CMS using *typo3*
- **wireguard** defines roles to handle a *WireGuard* configuration across different servers
- **application** installs and configures the main application
- **backbone** configures a system to allow all other *WireGuard* systems to connect to this server
- **client** configures a system to connect to *WireGuard* backbones
- **handlers** contains special handlers affecting all *WireGuard* backbones and clients
- **special_client** creates a configuration for a device not configurable by Ansible and stores it locally
All roles, but especially the server subroles, are built to include everything required.
For example, some server subroles include support for configuring AppArmor or fail2ban.
Also nearly all server subroles will install and configure nginx and set the required dns entries.
There are some exceptions however, which are stated here, for example the **dns/entries** role.
Some roles require variables to be configured;
look into the role's `defaults/main.yml` file.
All configurable variables are documented there with their default values.
Mandatory variables are commented out or otherwise marked as mandatory.
All roles will use official resources by default, but some of them let you configure those, e.g. **server/tt-rss**.
## Usage
You *may* be able to apply the whole playbook to your server configuration without changes,
but I would not recommend that.
Some roles' defaults are specifically defined to work well in the environment of my servers.
Please use my playbook and roles to build your own, suited to your environment.
## License
This repository is licensed under MIT.
This configuration comes with no warranty.

@ -1,37 +1,6 @@
[defaults]
# always ask for vault pass instead of expecting user to make it available unasked
ask_vault_pass = True
# force handlers to be executed always, especially after a normal task failed to execute
# without this option, a handler might never be executed if its role only completed after multiple tries (e.g. handlers that reload certain services)
force_handlers = True
# select custom inventory parser for setting up inventory
inventory = ./hosts.py
# install & use ansible collections locally (similar to venv) instead of globally
# helps to prevent differences between developer machines from causing trouble
# collections will be automatically set up from the dependency list "collection-requirements.yml" using "make ansible_collections"
# requires devs to document each external dependency inside the repository
collections_path = ./ # ansible then searches for the subdirectory "ansible_collections" for itself
# disable usage of cowsay for ansible-playbook's logging (increases readability drastically, only matters if cowsay is installed)
inventory = ./hosts
nocows = True
# disable storing retry files after failures, as they are not used
retry_files_enabled = False
# automatically select python interpreter, should be sufficient
interpreter_python = auto
# add mitogen strategies and select mitogen as default strategy
# mitogen, see https://mitogen.networkgenomics.com/ansible_detailed.html
strategy_plugins = ./misc/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
[diff]
# always enable --diff option
always = True

12
enter

@ -1,12 +0,0 @@
#!/bin/echo You need to source this script! Use something like: source
# (re-)create env if required (e.g. requirements.txt changed)
make setup
# enable coloring on these tools
export ANSIBLE_FORCE_COLORS=1
export PY_COLORS=1
# enter venv
. ./venv/bin/activate

@ -1,54 +0,0 @@
from ansible.module_utils._text import to_native
import re
ENTRY_RE = re.compile(r'^\s*(?P<domain>\S+)(\s+(?P<ttl>\d+))?(\s+(?P<class>[a-zA-Z]+))?\s+(?P<type>[a-zA-Z]+)\s+(?P<data>\S(.*\S)?)\s*$')
def dns_entry_interpreter(entry):
if isinstance(entry, dict):
return entry
m = ENTRY_RE.match(entry)
if not m:
raise Exception("Entry not in expected format: %s" % to_native(entry))
ret = {}
for key, val in m.groupdict().items():
if val is not None:
if key in ["ttl"]:
ret[key] = int(val)
else:
ret[key] = val
return ret
def dns_entry_equal(a, b):
return a.get("domain", "@") == b.get("domain", "@") and a.get("ttl", -1) == b.get("ttl", -1) and a.get("class", "IN") == b.get("class", "IN") and a["type"] == b["type"]
def dns_entries_combiner(entries):
ret = []
for a in entries:
found = False
for b in ret:
if dns_entry_equal(a, b):
found = True
if not isinstance(b["data"], list):
b["data"] = [b["data"]]
if isinstance(a["data"], list):
b["data"] += a["data"]
else:
b["data"].append(a["data"])
break
if not found:
ret.append(a)
return ret
def dns_entries_interpreter(entries):
if isinstance(entries, str):
entries = [e for e in entries.splitlines() if e]
return dns_entries_combiner(map(dns_entry_interpreter, entries))
class FilterModule(object):
def filters(self):
return {
'dns_entry_interpreter': dns_entry_interpreter,
'dns_entries_combiner': dns_entries_combiner,
'dns_entries_interpreter': dns_entries_interpreter,
}
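For illustration, a minimal standalone sketch of the string format these filters accept (the record values are invented samples):

import re

# Same pattern as ENTRY_RE above.
ENTRY_RE = re.compile(r'^\s*(?P<domain>\S+)(\s+(?P<ttl>\d+))?(\s+(?P<class>[a-zA-Z]+))?\s+(?P<type>[a-zA-Z]+)\s+(?P<data>\S(.*\S)?)\s*$')
m = ENTRY_RE.match("www 300 IN A 192.0.2.10")
print({k: v for k, v in m.groupdict().items() if v is not None})
# -> {'domain': 'www', 'ttl': '300', 'class': 'IN', 'type': 'A', 'data': '192.0.2.10'}
# dns_entry_interpreter additionally casts ttl to int; dns_entries_combiner merges
# entries whose domain/ttl/class/type match into one entry with a list as "data".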

@ -1,10 +0,0 @@
def domain_relative_to(domain, zone):
if domain == '@':
return zone
if domain[-1] != '.':
return f"{domain}.{zone}"
return domain
class FilterModule(object):
def filters(self):
return {'domain_relative_to': domain_relative_to}
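A short sketch of the expected behaviour, restating the filter so it runs standalone (example.org is an invented zone):

def domain_relative_to(domain, zone):
    if domain == '@':
        return zone
    if domain[-1] != '.':
        return f"{domain}.{zone}"
    return domain

# '@' stands for the zone apex; names without a trailing dot are treated as relative.
assert domain_relative_to('@', 'example.org') == 'example.org'
assert domain_relative_to('www', 'example.org') == 'www.example.org'
assert domain_relative_to('mail.example.net.', 'example.org') == 'mail.example.net.'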

@ -1,25 +0,0 @@
from pathlib import Path
import re
import sys
NOT_ALLOWED_CHARS = re.compile(r'[^A-Za-z0-9-]+')
DOMAIN_SHORTS = Path(__file__).parent / '..' / 'public_keys/domain_shorts'
def rreplace(text, to_replace, replacement, count=1):
return replacement.join(text.rsplit(to_replace, count))
def domain_to_username(domain):
with DOMAIN_SHORTS.open() as f:
for l in f:
long_domain, _, short_domain = l.strip().partition(' ')
if domain.endswith(long_domain):
domain = rreplace(domain, long_domain, short_domain)
break
return NOT_ALLOWED_CHARS.sub('-', domain)
class FilterModule(object):
def filters(self):
return {'domain_to_username': domain_to_username}
if __name__ == '__main__':
print(domain_to_username(sys.argv[1]))
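A short sketch of the transformation using the same helpers; the shorts entry "banananet.work bnet" mirrors the table written by the local playbook further below:

import re

NOT_ALLOWED_CHARS = re.compile(r'[^A-Za-z0-9-]+')

def rreplace(text, to_replace, replacement, count=1):
    return replacement.join(text.rsplit(to_replace, count))

# With the shorts entry "banananet.work bnet", the cloud domain first shrinks to
# "cloud.bnet"; remaining dots are then replaced, giving a usable system user name.
domain = rreplace("cloud.banananet.work", "banananet.work", "bnet")
print(NOT_ALLOWED_CHARS.sub('-', domain))  # -> cloud-bnet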

@ -1,29 +0,0 @@
from netaddr import IPNetwork, IPSet
def ip_rev(orig, rev, net):
if orig.isdisjoint(IPSet(net)):
rev.add(net)
return
elif orig.issuperset(IPSet(net)):
return
else:
for net in net.subnet(net.prefixlen + 1):
ip_rev(orig, rev, net)
def ip_net_rev(addresses, version=None):
orig = IPSet(addresses)
rev = IPSet()
if version in [None, 4]:
ip_rev(orig, rev, IPNetwork('0.0.0.0/0'))
if version in [None, 6]:
ip_rev(orig, rev, IPNetwork('::/0'))
return [str(net) for net in rev.iter_cidrs()]
class FilterModule(object):
def filters(self):
return {'ip_net_rev': ip_net_rev}
if __name__ == '__main__':
import sys
for ip in ip_net_rev(sys.argv[1:]):
print(ip)
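As a sanity check of the same idea using netaddr set arithmetic (10.0.0.0/8 is an invented input); the filter's recursive subnet split should yield the same minimal CIDR cover:

from netaddr import IPNetwork, IPSet

blocked = IPSet(['10.0.0.0/8'])
complement = IPSet([IPNetwork('0.0.0.0/0')]) - blocked
print([str(net) for net in complement.iter_cidrs()])
# -> ['0.0.0.0/5', '8.0.0.0/7', '11.0.0.0/8', '12.0.0.0/6',
#     '16.0.0.0/4', '32.0.0.0/3', '64.0.0.0/2', '128.0.0.0/1']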

@ -1,8 +0,0 @@
from collections.abc import Mapping
def mapping(val):
return isinstance(val, Mapping)
class FilterModule(object):
def filters(self):
return {'mapping': mapping}

@ -1,33 +0,0 @@
from functools import partial
import re
import subprocess
import sys
from ansible.errors import AnsibleFilterError
def systemd_escape(text, instance=False, mangle=False, path=False, suffix=None, template=None, unescape=False):
options_map = {
"instance": instance,
"mangle": mangle,
"path": path,
"unescape": unescape,
}
args_map = {
"suffix": suffix,
"template": template,
}
args = ["/usr/bin/env", "systemd-escape"] + [f"--{name}" for name, val in options_map.items() if val] + [f"--{name}={val}" for name, val in args_map.items() if val is not None] + [text]
result = subprocess.run(args, capture_output=True, text=True)
if result.returncode != 0:
raise AnsibleFilterError(re.sub('\u001b\\[.*?[@-~]', '', result.stderr.rstrip('\n')))
return result.stdout.rstrip('\n')
class FilterModule(object):
def filters(self):
return {
'systemd_escape': systemd_escape,
'systemd_escape_mount': partial(systemd_escape, path=True, suffix='mount')
}
if __name__ == '__main__':
print(systemd_escape(sys.argv[1]))
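Roughly what the filter runs under the hood; it requires systemd-escape on the PATH, and the mount path below is an invented sample:

import subprocess

out = subprocess.run(
    ["systemd-escape", "--path", "--suffix=mount", "/var/lib/my app"],
    capture_output=True, text=True, check=True,
).stdout.rstrip("\n")
print(out)  # -> var-lib-my\x20app.mount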

@ -1,92 +0,0 @@
---
# === Constants defined by OS packages / applications
# separated into general system/kernel and application/package constants
# each group is sorted alphabetically
# general system/kernel constants
global_fstab_file: "/etc/fstab"
global_resolv_conf: "/etc/resolv.conf"
global_pamd: "/etc/pam.d"
global_proc_hidepid_service_whitelist:
- "{{ global_systemd_login_service_name }}"
- "{{ global_systemd_user_service_name }}"
global_users_directory: "/home"
# application constants
global_ansible_facts_directory: "/etc/ansible/facts.d"
global_apparmor_profiles_directory: "/etc/apparmor.d"
global_apparmor_profiles_local_directory: "{{ global_apparmor_profiles_directory }}/local"
global_apt_sources_directory: "/etc/apt/sources.list.d"
global_bind_service_name: "named.service"
global_bind_configuration_directory: "/etc/bind"
global_bind_data_directory: "/var/lib/bind"
global_certbot_configuration_directory: "/etc/letsencrypt"
global_certbot_configuration_file: "{{ global_certbot_configuration_directory }}/cli.ini"
global_certbot_certificates_directory: "/etc/letsencrypt/live"
global_chromium_configuration_directory: "/etc/chromium"
global_chromium_managed_policies_file: "{{ global_chromium_configuration_directory }}/policies/managed/managed_policies.json"
global_dnsmasq_configuration_file: "/etc/dnsmasq.conf"
global_dnsmasq_configuration_directory: "/etc/dnsmasq.d"
global_docker_service_name: "docker.service"
global_docker_configuration_directory: "/etc/docker"
global_docker_daemon_configuration_file: "{{ global_docker_configuration_directory }}/daemon.json"
global_fail2ban_service_name: "fail2ban.service"
global_fail2ban_system_directory: "/etc/fail2ban"
global_fail2ban_configuration_directory: "{{ global_fail2ban_system_directory }}/fail2ban.d"
global_fail2ban_actions_directory: "{{ global_fail2ban_system_directory }}/action.d"
global_fail2ban_filters_directory: "{{ global_fail2ban_system_directory }}/filter.d"
global_fail2ban_jails_directory: "{{ global_fail2ban_system_directory }}/jail.d"
global_interfaces_directory: "/etc/network/interfaces.d"
global_lightdm_configuration_directory: "/etc/lightdm"
global_log_directory: "/var/log"
global_mysql_socket_path: "/var/run/mysqld/mysqld.sock"
global_nfs_port: "2049" # for version 4
global_nfs_directory: "{{ global_webservers_directory }}/nfs"
global_nginx_system_user: www-data
global_nginx_service_name: "nginx.service"
global_nginx_installation_directory: "/etc/nginx"
global_plymouth_themes_directory: "/usr/share/plymouth/themes"
global_redis_configuration_directory: "/etc/redis"
global_redis_service_name: "redis-server.service"
global_ssh_service_name: "sshd.service"
global_ssh_configuration_directory: "/etc/ssh/"
global_ssh_configuration_environment_directory: "{{ global_configuration_environment_directory }}/ssh"
global_ssh_configuration_link_name: "config"
global_ssh_configuration_link: "{{ global_ssh_configuration_environment_directory }}/{{ global_ssh_configuration_link_name }}"
global_sudoers_directory: "/etc/sudoers.d"
global_wireguard_configuration_directory: "/etc/wireguard"
global_systemd_preset_directory: "/lib/systemd/system"
global_systemd_configuration_directory: "/etc/systemd/system"
global_systemd_journal_configuration_directory: "/etc/systemd/journald.conf.d"
global_systemd_login_service_name: "systemd-logind.service"
global_systemd_network_directory: "/etc/systemd/network"
global_systemd_network_service_name: "systemd-networkd.service"
global_systemd_network_system_user: "systemd-network"
global_systemd_user_service_name: "user@.service"
global_zsh_antigen_source: "/usr/share/zsh-antigen/antigen.zsh"

@ -2,88 +2,33 @@
TIMEZONE: "Europe/Berlin"
local_user: "{{ lookup('env','USER') }}"
global_username: zocker
global_admin_mail: felix.stupp@outlook.com # TODO change to felix.stupp@banananet.work, verify if all usages will apply change (e.g. lets encrypt)
ansible_user: "{{ global_username }}"
ansible_user: zocker
ansible_become: yes
ansible_become_pass: "{{ zocker_password }}"
default_gpg_keyserver_hostname: "keys.openpgp.org"
default_tg_monitor_recipient_id: "{{ zocker_telegram_id }}"
zocker_authorized_keys_url: "https://git.banananet.work/zocker.keys"
update_scripts_directory: "/root/update"
tailscale_vpn_subnet: "100.64.0.0/10"
backup_gpg_fingerprint: "73D09948B2392D688A45DC8393E1BD26F6B02FB7"
backups_to_keep: 1
backups_directory: "/backups"
backups_databases_directory: "{{ backups_directory }}/databases"
backups_files_directory: "{{ backups_directory }}/files"
backups_mysql_database_directory: "{{ backups_directory }}/mysql_databases"
backup_scripts_directory: "/root/backup"
backup_files_scripts_directory: "{{ backup_scripts_directory }}/files"
backup_mysql_database_scripts_directory: "{{ backup_scripts_directory }}/mysql_databases"
# Enabling "debug mode" allows deploying an debug / transitional instance besides another with the same base configuration
# The debug instance is reachable by using the same domain but prefixed with global_dns_debug_prefix
# Prevents overwriting of original's instance DNS config until debug mode is disabled
# If debug mode is disabled, the compatibility to the "debug domain" will be lost and the original's instance DNS config will be overwritten
# Other variables will need to be adjusted if both instances run on the same server
is_debug_instance: no
has_debug_instance: "{{ is_debug_instance }}"
delete_debug_dns_entries: "{{ not has_debug_instance }}"
debug_domain: "debug-instance.{{ domain }}" # used if is_debug_instance / on "debug mode", should only prefix domain
effective_domain: "{{ is_debug_instance | ternary(debug_domain, domain) }}"
global_local_user: "{{ lookup('env', 'USER') }}"
global_deployment_directory: "/ansible"
global_configuration_environment_directory: "{{ global_deployment_directory }}/configurations"
global_helper_directory: "{{ global_deployment_directory }}/helpers"
global_helper_directory: "/ansible/helpers"
global_webservers_directory: "/var/webservers"
global_socket_directory: "/var/run"
global_credentials_directory: "credentials"
global_public_key_directory: "public_keys"
global_dns_list_directory: "{{ global_public_key_directory }}/dns"
global_dns_session_key_name: "local-ddns"
global_dns_session_key_path: "/var/run/named/session.key"
global_dns_session_key_algorithm: "hmac-sha512"
global_dns_update_key_algorithm: "ED25519"
global_dns_ttl: "{{ 60 * 60 }}" # default if omitted in all cases
global_dns_debug_ttl: "{{ 60 }}" # mostly used if has_debug_instance to allow short transfer times
global_ssh_key_directory: "{{ global_public_key_directory }}/ssh"
global_ssh_host_key_directory: "{{ global_ssh_key_directory }}/hosts"
global_validate_python_script: "/usr/bin/python3 -m pylint --disable=C0114 %s"
global_validate_shell_script: "/usr/bin/shellcheck %s" # TODO add "--format="
global_validate_sshd_config: "/usr/sbin/sshd -t -f %s"
global_validate_sudoers_file: "/usr/sbin/visudo -c -f %s"
global_wireguard_private_directory: "{{ global_credentials_directory }}/wireguard"
global_wireguard_public_directory: "{{ global_public_key_directory }}/wireguard/keys"
global_wireguard_peers_directory: "{{ global_public_key_directory }}/wireguard/peers"
nginx_status_page_acl: |
allow 127.0.0.0/8;
allow ::1;
allow {{ ansible_default_ipv4.address }};
allow {{ ansible_default_ipv6.address }};
allow {{ global_wireguard_ipv4_range }};
deny all;
phpfpm_status_page_path: "/.well-known/php-fpm-status"
global_wireguard_public_directory: "{{ global_public_key_directory }}/wireguard"
ssh_host_key_types:
- ecdsa
- ed25519
- rsa
@ -92,37 +37,16 @@ ssh_host_key_types:
backend_smtp_port: 12891
backend_imap_port: 12892
# OS-specific Default Configuration
debian_repository_mirror: "http://deb.debian.org/debian/"
debian_repository_use_sources: yes
raspbian_repository_mirror: "http://raspbian.raspberrypi.org/raspbian/"
raspbian_archive_repository_mirror: "http://archive.raspberrypi.org/debian/"
raspbian_repository_use_sources: yes
# Application configurations
global_dns_upstream_servers:
# Quad9 DNS with DNSSEC support, without EDNS
- "9.9.9.9"
- "149.112.112.112"
- "2620:fe::fe"
- "2620:fe::9"
global_apt_sources_directory: "/etc/apt/sources.list.d"
global_ip_discover_url: "https://keys.banananet.work/ping"
global_ip_discover_register_pass: "{{ lookup('password', 'credentials/ip_discover/register_pass chars=digits,ascii_letters length=256') }}"
global_ssh_configuration_directory: "/etc/ssh/"
global_ssh_configuration_environment_directory: "/ansible/ssh_configuration"
global_ssh_configuration_link_name: "config"
global_ssh_configuration_link: "{{ global_ssh_configuration_environment_directory }}/{{ global_ssh_configuration_link_name }}"
global_wireguard_port: 51820
global_wireguard_ipv4_subnet: 22
global_wireguard_ipv4_netmask: "{{ ('0.0.0.0/' + (global_wireguard_ipv4_subnet | string)) | ipaddr('netmask') }}"
global_wireguard_ipv4_range: "10.162.4.0/{{ global_wireguard_ipv4_subnet }}"
# TODO Wireguard IPv6 Support
global_systemd_configuration_directory: "/etc/systemd/system"
global_systemd_journal_max_storage: 1G
# Miscellaneous
## IP Blocklist
global_ip_blocklist: "{{ (lookup('file', 'misc/blocklists/ipv4.txt')).split('\n') }}"
# Debian Repository Mirror
debian_repository_mirror: "http://deb.debian.org/debian/"

@ -1,38 +1,18 @@
$ANSIBLE_VAULT;1.1;AES256
66386430666466343732636663313264663933613563643231323066383261616361353234366534
3337323862636537663538343062333064383838653138340a343662326139396634343261396230
65666533626263386465616466663431333339613162373766363937333564323233353930303836
6332366434333437370a666636656534653031303237633863356630393836386137353837303039
33323433343065313135323462316163343364656562303962373634656666353235363537366361
35383031343138376439316365306337636264346434363863623765356161663133653363633533
30613430613333666561303935663833396265363931653133373934363263323362333839366662
62373533643535323430353032386431346462363566323637613736313336373665666631326633
34343830653535623262333730356164636131623735333839663336623735353138313962656564
35643231303461653236373665613339313332386535376665623130646637626531306366316266
36613961653162633639333536333434383332363061653062396163623664316363303561636634
63353263313730313133613537386536616338323533303666653131656262323763616432343664
65626130383432326663303238383233633265393936633934623634366663333862643562383736
38313265306138303431363634656334656530393539636232613962386238613963643161306234
35646136613764353138666431363337393765343233303332663530336261316331383665643536
30663831656566663239656565613535316438666632663236666636383762333432303964333833
33353661623965633630383536613633313437666430623565636635633634646338633666356234
66323966396638316236626234326364633366666266643832333066383735306330366234383533
63386563626264303234303832356662363732356438306234656561373637376137346565653966
65373465303032393939383833386333353461633732623232393761353236306331626164386238
35353464373732346537626464663532653434386564636532623838383937363463633332366534
35613137613933636434336432653964353536303366353832356161653535353165613964333339
30646139316661656363383832313765326234316134393732636262373730386562626233633439
39643862393336653533373731333938343164363233323638353265656139333465363831333431
62323332396537656432343235633735636631646334306265376566343364646566396563386537
32366335313335666436613531356535623364336135636665623233363763663537393538666233
35643431396430336533396137303763333332626439316265383138663639343061656631626463
39386461303866373862626361373836643437346365343531323264386631313834613166393833
34656537326531643962636436393236393537373935346135663335656666343430313335373633
65393066636233653262623031383564393038353730393363356561363936356366636330386264
37383064636433646265396365373330613833623338666638653532363061316261343639323937
33623665316161353035366438663337346532653262366434366138306364343966653235383636
38666263623633356463373963636135656637613164353265613635353733316138626637623364
34386338633363653231643334323161653933613864636338626638323035323233643137353964
35666332346264613136343039336261303964343237373136393139376234363833376164643839
33316566353033363333633966643366303537653766623935643933373062313830316166303961
37393638653064623935356564303236343766393939323561356461656636626534
64333965353537646136656630316237636563383764356461623238323836383466313230333531
6131306336633661373335653663613538633662663438360a343839666263396139343735333462
62333564383633326131646533313566306534623539393533333366356264623562643438653231
6133396364663765300a343766643036613262613062326532373738653538623333303933323237
36313864346161356332663664386635333764393161646332643938623332386562313836653436
63353136373866373238356334363762363961653964333565343364306135616363376565623536
31353737643366353330343266613466343231653033336433343632353465353836616638636231
34313138633238313839616139633431653630306338373065623961656462316432353966363661
30393862373634373161326262363162343139313334613939613636633665613839353862346533
34353366333733303363323164613934633634353866393831333566626565383036373964386633
39316131363732353663626530333634616435316464633937656136386534383635643337323262
33643336616237323533353639666465363563363437306232313266646238623130616235623265
65323665383038343732643064316533666239633738666539373463626332386431303633333934
65386662346361653232643437346663303362623834623063363061396361303861363739373139
36346365366537356565373165663238626335616336373433343834346138656562333464323037
65613336336135343938373064623766353666623763323364343836643262653032626230383566
3466

@ -1,10 +0,0 @@
---
bootstrap_user: "debian"
mysql_query_cache_size: 1G
# Currently disabled because upstream servers do not support forwarding DNSSEC related records
#global_dns_upstream_servers:
# - 129.143.2.1
# - 129.143.2.4

@ -1,5 +0,0 @@
---
global_dns_upstream_servers:
- 213.136.95.10
- 213.136.95.11

@ -1,14 +0,0 @@
---
bootstrap_user: "root"
global_dns_upstream_servers:
- 213.133.100.100
- 213.133.99.99
- 213.133.98.98
- "2a01:4f8:0:1::add:1010"
- "2a01:4f8:0:1::add:9898"
- "2a01:4f8:0:1::add:9999"
debian_repository_mirror: "http://mirror.hetzner.de/debian/packages"
debian_repository_use_sources: no # Not supported by Hetzner mirrors, but also not required

@ -1,6 +0,0 @@
---
ansible_python_interpreter: "/usr/bin/python3"
ansible_distribution_name: "debian"
# debian_repository_mirror

@ -1,12 +0,0 @@
---
ansible_distribution_name: "raspbian"
bootstrap_user: "pi"
bootstrap_become_pass: ""
ansible_ssh_pass: "raspberry"
# raspbian_repository_mirror
# raspbian_archive_repository_mirror
global_systemd_journal_max_storage: 256M

@ -0,0 +1,4 @@
FILES = $(shell ls | grep -vE "^dns$$")
dns: $(FILES)
echo "$(FILES)" | xargs --max-args 1 ssh-keygen -r "$$(basename "$$(pwd)")." -f > "$@"

@ -1,6 +0,0 @@
---
ansible_host: "10.11.11.64"
debian_repository_mirror: "http://10.11.11.64:9999/debian/"
wireguard_ipv4_address: "10.162.4.64"

@ -0,0 +1,3 @@
---
ansible_host: "193.196.36.223"

@ -1,5 +0,0 @@
---
ansible_host: "193.196.37.200"
wireguard_ipv4_address: "10.162.4.2"

@ -2,7 +2,3 @@
ansible_host: "167.86.97.105"
debian_repository_mirror: "http://mirror.de.leaseweb.net/debian/"
mysql_query_cache_size: 4G
wireguard_ipv4_address: "10.162.4.1"

@ -0,0 +1,3 @@
---
ansible_host: "193.196.36.154"

@ -1,5 +0,0 @@
---
ansible_host: "193.196.38.137"
wireguard_ipv4_address: "10.162.4.3"

@ -1,3 +0,0 @@
---
ansible_host: "10.11.11.194"

@ -0,0 +1,9 @@
[bootstrap]
nvak.banananet.work
morska.banananet.work
rurapenthe.banananet.work
[wireguard_nodes]
nvak.banananet.work
morska.banananet.work
rurapenthe.banananet.work

@ -1,212 +0,0 @@
#!/usr/bin/env python3
import json
import re
import sys
import yaml
from ansible.errors import AnsibleError
class LoopPrevention:
def __init__(self, obj):
self.__obj = obj
self.__entered = False
def __enter__(self):
if self.__entered:
raise Exception("detected and prevented infinite loop")
self.__entered = True
return self
def __exit__(self, *args):
self.__entered = False
return False # forward exception
class Group:
def __init__(self, inv):
self.__inv = inv
self.__hosts = set()
self.__children = set()
def add_host(self, host):
if not host in self.__hosts:
self.__hosts.add(host)
def add_hosts(self, hosts):
self.__hosts |= hosts
@property
def direct_hosts(self):
return set(self.__hosts)
@property
def all_hosts(self):
with LoopPrevention(self):
hosts = self.direct_hosts
for child in self.children:
hosts |= self.__inv._group(child).all_hosts
return hosts
def add_child(self, group_name):
if not group_name in self.__children:
self.__children.add(group_name)
@property
def children(self):
return set(self.__children)
def export(self):
return { "hosts": list(self.__hosts), "vars": dict(), "children": list(self.__children) }
class Inventory:
def __init__(self):
self.__groups = dict()
self.add_group("all")
def __group(self, group_name):
if group_name not in self.__groups:
self.__groups[group_name] = Group(self)
return self.__groups[group_name]
def _group(self, group_name):
if group_name not in self.__groups:
raise Exception(f'Unknown group "{group_name}"')
return self.__groups[group_name]
def add_host(self, host):
self.__group("all").add_host(host)
def add_hosts(self, hosts):
self.__group("all").add_hosts(hosts)
def add_group(self, group_name):
self.__group(group_name)
def add_host_to_group(self, host, group_name):
self.add_host(host)
self.__group(group_name).add_host(host)
def add_hosts_to_group(self, hosts, group_name):
self.add_hosts(hosts)
self.__group(group_name).add_hosts(hosts)
def add_child_to_group(self, child_name, parent_name):
self.__group(child_name)
self.__group(parent_name).add_child(child_name)
def all_hosts_of_group(self, group_name):
return self._group(group_name).all_hosts
def export(self):
meta_dict = {
"_meta": {
"hostvars": {},
},
}
group_dict = { group_name: group.export() for group_name, group in self.__groups.items() }
return { **meta_dict , **group_dict }
def _read_yaml(path):
with open(path, 'r') as stream:
try:
return yaml.safe_load(stream)
except yaml.YAMLError as e:
raise AnsibleError(e)
GROUPS_PATTERN_OPS = {
"": lambda old, add: old | add,
"&": lambda old, add: old & add,
"!": lambda old, add: old - add,
}
GROUPS_PATTERN_OPS_NAMES = "".join(GROUPS_PATTERN_OPS.keys())
GROUPS_PATTERN = re.compile(r'^(?P<operation>[' + GROUPS_PATTERN_OPS_NAMES + r']?)(?P<group_name>[^' + GROUPS_PATTERN_OPS_NAMES + r'].*)$')
def _parse_group_aliasses(inv, data):
for group, syntax in data.items():
if isinstance(syntax, str):
group_list = syntax.split(':')
elif isinstance(syntax, list):
group_list = syntax
else:
raise Exception(f'Unknown syntax for alias "{group}": {syntax}')
if len(syntax) <= 0 or len(group_list) <= 0:
raise Exception(f'Empty syntax for alias "{group}": {syntax}')
if group_list[0][0] == '!': # if first entry is an inversion
group_list.insert(0, 'all') # remove group from all for inversion
hosts = set()
for group_name in group_list:
group_matched = GROUPS_PATTERN.match(group_name)
add = inv.all_hosts_of_group(group_matched.group('group_name'))
op = GROUPS_PATTERN_OPS[group_matched.group('operation')]
hosts = op(hosts, add)
inv.add_hosts_to_group(hosts, group)
def _parse_groups(inv, data):
for group, children in data.items():
inv.add_group(group)
if children is None:
continue # as if no children are given
for child in children:
inv.add_child_to_group(child, group)
if isinstance(children, dict):
_parse_groups(inv, children)
def _parse_host_groups(inv, data):
GROUPS_KEY = "_all"
for host_group, hosts in data.items():
inv.add_group(host_group)
if hosts is None:
continue
for host in hosts:
if host != GROUPS_KEY:
inv.add_host_to_group(host, host_group)
if isinstance(hosts, dict):
hosts = dict(hosts) # copy dict for further edits
parents = hosts.pop(GROUPS_KEY, None)
if parents is not None:
for parent in parents:
inv.add_child_to_group(host_group, parent)
_parse_single_hosts(inv, hosts)
def _parse_single_hosts(inv, data):
for host, groups in data.items():
inv.add_host(host)
if groups is not None:
for group in groups:
inv.add_host_to_group(host, group)
def _parse_version_0(inv, data):
return _parse_single_hosts(inv, data)
parser_mapping_v1 = { "groups": _parse_groups, "host_groups": _parse_host_groups, "single_hosts": _parse_single_hosts }
def _parse_version_1(inv, data):
for key_name, parser in parser_mapping_v1.items():
if key_name in data:
parser(inv, data[key_name])
def _parse_version_2(inv, data):
_parse_version_1(inv, data)
_parse_group_aliasses(inv, data["group_aliasses"])
parser_version_mapping = {
None: _parse_version_0, # legacy version without version number, only hosts list with tags
1: _parse_version_1, # adds support for default and inverted group dependencies and for host_groups besides single_hosts (ignores aliases supported with version 2)
2: _parse_version_2, # adds support for aliases (thus destroying the common graph structures where aliases are used)
}
def parse(path):
data = _read_yaml(path)
inv = Inventory()
version = data.get("version", None)
# detect that version was used as hostname
if not isinstance(version, (int, float, complex)):
version = None
if version not in parser_version_mapping:
raise AnsibleError(Exception("Version not supported"))
parser_version_mapping[version](inv, data)
return inv.export()
print(json.dumps(parse("hosts.yml")))
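ansible.cfg points inventory = ./hosts.py at this script, so Ansible runs it and consumes the printed JSON directly. Version-2 alias resolution folds the listed groups left to right with the set operators from GROUPS_PATTERN_OPS above; a plain-Python sketch with invented host names, mirroring the bootstrap alias in the hosts.yml shown next:

bootstrapable = {"a.example.org", "b.example.org", "c.example.org"}
no_bootstrap = {"c.example.org"}

hosts = set()
hosts = hosts | bootstrapable   # ""  -> union (default operation)
hosts = hosts - no_bootstrap    # "!" -> difference
print(sorted(hosts))            # -> ['a.example.org', 'b.example.org']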

@ -1,85 +0,0 @@
version: 2
groups: # a:b meaning b is a, can be nested
# hardware structure
dev_known:
barebones:
- rented_barebones # sub group
# list of all known barebone device groups
- dev_surface3 # Microsoft Surface 3
virtual:
- rented_vserver # sub group
dev_unknown: # for unknown device kinds
# structure of rented servers
rented:
rented_barebones:
- hetzner_server # https://robot.your-server.de/server
rented_vserver:
- bwcloud_vserver # https://portal.bw-cloud.org/
- contabo_vserver # https://my.contabo.com/vps
# OS structure
os_known: # list of all known OS derivatives
- os_debian
- os_raspbian
# applications
bootstrapable: # which OSes/hosts can be bootstrapped
- os_debian
- os_raspbian
group_aliasses: # a:b meaning a equals b, should only depend on groups not defined here
# unknown groups
dev_unknown: "!dev_known"
os_unknown: "!os_known"
# applications
bootstrap: "bootstrapable:!no_bootstrap" # which hosts should be bootstraped
common_roles: "!no_common_roles"
wireguard_backbones: "public_available:!no_wireguard_automatic"
wireguard_clients: "!public_available:!no_wireguard_automatic"
host_groups: # group: host: [*groups]
no_defaults: # do not include in all default playbooks / roles
_all:
- no_bootstrap # do not setup sudo bootstrap
- no_common_roles # do not include in common roles
- no_wireguard_automatic # do not assign the wireguard role automatically, hosts may be excluded from wireguard or assigned to their wireguard role manually
rented:
_all:
- public_available # rented servers are publicly available
# to group similar devices together
common_server: # public common servers
_all:
- os_debian
hatoria.banananet.work:
- hetzner_server
nvak.banananet.work:
- contabo_vserver
morska.banananet.work:
- bwcloud_vserver
rurapenthe.banananet.work:
- bwcloud_vserver
single_hosts: # a:b meaning a is b, cannot be nested
# Local Servers
hardie.eridon.banananet.work:
- os_debian
# Embedded Devices
wgpanel.eridon.banananet.work:
- dev_surface3
- os_debian
- no_wireguard_automatic # no wireguard

@ -1,115 +0,0 @@
#!/usr/bin/python
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r'''
---
module: tsig_interpreter
short_description: Reads BIND9 tsig key files and outputs content to register
# If this is part of a collection, you need to use semantic versioning,
# i.e. the version is of the form "2.5.0" and not "2.4".
version_added: "1.0.0"
description: Reads a BIND9 TSIG key file and returns the key name, algorithm and secret so they can be passed to other modules (e.g. nsupdate).
options:
path:
description: Path where the key file can be found
required: true
type: str
aliases:
- file
- key_file
author:
- Felix Stupp (@zocker1999net)
'''
EXAMPLES = r'''
# Gain and use key
- name: Gain key
my_namespace.my_collection.tsig_interpreter:
path: '/etc/bind/rndc.key'
register: key_data
- name: Use key
nsupdate:
key_algorithm: "{{ key_data.key_algorithm }}"
key_name: "{{ key_data.key_name }}"
key_secret: "{{ key_data.key_secret }}"
'''
RETURN = r'''
key_algorithm:
description: The algorithm used for the key
type: str
returned: always
sample: 'hmac-md5'
key_file:
description: The file that contained the extracted key
type: str
returned: always
sample: '/my/path/my.key'
key_name:
description: The name of the key
type: str
returned: always
sample: 'key.example.com'
key_secret:
description: The secret of the key
type: str
returned: always
sample: 'ABCDEFG=='
'''
import os
import re
from ansible.module_utils.basic import AnsibleModule
def main():
content_regex = re.compile(r'^\s*key\s+"?(?P<name>[^"\s{};]+)"?\s+\{\s*algorithm\s+"?(?P<algo>[^"\s{};]+)"?\s*;\s*secret\s+"?(?P<secret>[^"\s{};]+)"?\s*;\s*}\s*;\s*$')
module_args = {
"path": {
"type": "str",
"required": True,
"aliases": ["file", "key_file"],
},
}
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True, # ignored because only data is retrieved
)
# get params
path = module.params["path"]
# prepare result
result = {
"changed": False,
"key_file": path,
}
# check file
if not os.path.exists(path):
module.fail_json(msg="file not found: %s" % path)
if not os.access(path, os.R_OK):
module.fail_json(msg="file is not readable: %s" % path)
# gain content
with open(path, 'r') as fh:
content = fh.read()
# interpret content
content = content.replace("\n", " ")
match = content_regex.match(content)
if not match:
module.fail_json(msg="content of file not in expected syntax: %s" % path)
result["key_algorithm"] = match.group("algo")
result["key_name"] = match.group("name")
result["key_secret"] = match.group("secret")
# exit
module.exit_json(**result)
if __name__ == '__main__':
main()
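The same regular expression applied to a typical one-line key clause, as a standalone sketch (the secret is an invented sample; the module first joins multi-line files into one line):

import re

content_regex = re.compile(r'^\s*key\s+"?(?P<name>[^"\s{};]+)"?\s+\{\s*algorithm\s+"?(?P<algo>[^"\s{};]+)"?\s*;\s*secret\s+"?(?P<secret>[^"\s{};]+)"?\s*;\s*}\s*;\s*$')
sample = 'key "local-ddns" { algorithm hmac-sha512; secret "c2VjcmV0Cg=="; };'
m = content_regex.match(sample)
print(m.group("name"), m.group("algo"), m.group("secret"))
# -> local-ddns hmac-sha512 c2VjcmV0Cg==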

@ -1,53 +1,17 @@
vault:=group_vars/all/vault.yml
playbooks_dir:=playbooks
playbooks:=$(wildcard ${playbooks_dir}/*.yml)
credentials_dir:=credentials
credentials_file:=misc/credentials.tar.gpg
venv_dir:=venv
# Default Target (must be first target)
.PHONY: main list vault ${playbooks}
.PHONY: main
main:
ansible-playbook site.yml
# Virtual Environment's Setup
.PHONY: setup
setup: ansible_collections ${venv_dir}
ansible_collections: collection-requirements.yml ${venv_dir}
mkdir --parent $@
. ./${venv_dir}/bin/activate && ansible-galaxy install -r $<
${venv_dir}: pip-requirements.txt
python3 -m venv $@
. ./$@/bin/activate && python3 -m pip install -r $<
# Playbook Execution
.PHONY: list
list:
@echo ${playbooks}
.PHONY: ${playbooks}
${playbooks}:
ansible-playbook ${playbooks_dir}/$@.yml
# Vault Handling
.PHONY: vault
vault:
ansible-vault edit ${vault}
# Credential Handling
.PHONY: store-credentials
store-credentials: ${credentials_file}
${credentials_file}: $(shell find "${credentials_dir}")
tar -cf - "${credentials_dir}" | gpg --encrypt --recipient 73D09948B2392D688A45DC8393E1BD26F6B02FB7 > "$@"
.PHONY: load-credentials
load-credentials:
< "${credentials_file}" gpg --decrypt | tar -xf -
${playbooks}:
ansible-playbook $@

@ -1,24 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail;
LIST_FILE="$(dirname "$0")/ipv4.txt";
TMP_FILE="$(mktemp)";
IP_REGEX='(?<!\d)\d+(\.\d+){3}(/\d+)?(?!\d)';
cat "$LIST_FILE" "$@" |
grep --only-matching --perl-regexp "$IP_REGEX" |
sort --version-sort |
uniq > "$TMP_FILE";
echo "$TMP_FILE";
if diff "$LIST_FILE" "$TMP_FILE"; then
echo "No differences found!";
exit 0;
fi
echo "Press enter to approve changes, ^C to abort";
read;
mv "$TMP_FILE" "$LIST_FILE";

File diff suppressed because it is too large

Binary file not shown.

@ -1 +0,0 @@
Subproject commit 36f3e3b28c82611c72a867cdc1f5ddc8bd9325e9

@ -1,27 +0,0 @@
#### Python / PiP Requirements ####
# each group is either sorted alphabetically or, if applicable, by hierarchy
### Main Runtime Dependencies ###
# Ansible itself
ansible ~= 2.10.0 # pinned to 2.10 because upgrade may bring issues
### Test Frameworks ###
ansible-lint # simple linter
yamllint # linter for YAML files in general
## molecule ##
# role based test framework for Ansible
molecule
# enable docker for test environments, requires Docker to be installed on the host and usable without additional permissions
molecule-docker
# allows using Vagrant (VMs) for creating test environments, requires Vagrant and any hypervisor (e.g. VirtualBox) to be installed
molecule-vagrant
python-vagrant # extra module required as not always installed with vagrant

@ -1 +0,0 @@
../credentials

@ -1,143 +0,0 @@
- name: Configure hatoria as dns server
hosts: hatoria.banananet.work
vars:
# Source: https://docs.hetzner.com/dns-console/dns/general/authoritative-name-servers
hetzner_authoritatives:
- ns1.first-ns.de.
- robotns2.second-ns.de.
- robotns3.second-ns.com.
hetzner_authoritatives_ip:
# ns1.first-ns.de.
- "213.239.242.238"
- "2a01:4f8:0:a101::a:1"
# robotns2.second-ns.de.
- "213.133.105.6"
- "2a01:4f8:d0a:2004::2"
# robotns3.second-ns.com.
- "193.47.99.3"
- "2001:67c:192c::add:a3"
mailbox_mx:
- 10 mxext1.mailbox.org.
- 10 mxext2.mailbox.org.
- 20 mxext3.mailbox.org.
mailbox_spf: >-
"v=spf1 include:mailbox.org"
mailbox_dkim_keys:
- name: MBO0001
data: >-
"v=DKIM1; k=rsa; "
"p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2K4PavXoNY8eGK2u61"
"LIQlOHS8f5sWsCK5b+HMOfo0M+aNHwfqlVdzi/IwmYnuDKuXYuCllrgnxZ4fG4yV"
"aux58v9grVsFHdzdjPlAQfp5rkiETYpCMZwgsmdseJ4CoZaosPHLjPumFE/Ua2WA"
"QQljnunsM9TONM9L6KxrO9t5IISD1XtJb0bq1lVI/e72k3mnPd/q77qzhTDmwN4T"
"SNJZN8sxzUJx9HNSMRRoEIHSDLTIJUK+Up8IeCx0B7CiOzG5w/cHyZ3AM5V8lkqB"
"aTDK46AwTkTVGJf59QxUZArG3FEH5vy9HzDmy0tGG+053/x4RqkhqMg5/ClDm+lp"
"ZqWwIDAQAB"
- name: MBO0002
data: >-
"v=DKIM1; k=rsa; "
"p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqxEKIg2c48ecfmy/+r"
"j35sBOhdfIYGNDCMeHy0b36DX6MNtS7zA/VDR2q5ubtHzraL5uUGas8kb/33wtrW"
"FYxierLRXy12qj8ItdYCRugu9tXTByEED05WdBtRzJmrb8YBMfeK0E0K3wwoWfhI"
"k/wzKbjMkbqYBOTYLlIcVGQWzOfN7/n3n+VChfu6sGFK3k2qrJNnw22iFy4C8Ks7"
"j77+tCpm0PoUwA2hOdLrRw3ldx2E9PH0GVwIMJRgekY6cS7DrbHrj/AeGlwfwwCS"
"i9T23mYvc79nVrh2+82ZqmkpZSTD2qq+ukOkyjdRuUPck6e2b+x141Nzd81dIZVf"
"OEiwIDAQAB"
roles:
- role: dns/master
domain: banananet.work
main_nameserver_domain: "ns1.banananet.work" # required glue entry already configured
responsible_mail_name: hostmaster.banananet.work
slaves_ip: "{{ hetzner_authoritatives_ip }}"
entries:
# main NS entry
- type: NS
data: ns1.banananet.work.
# Hetzner NS entries
- type: NS
data: "{{ hetzner_authoritatives }}"
# limit CA
- type: CAA
data: 0 issue "letsencrypt.org"
# Mailbox Mail configuration
- domain: bca8c01774fd59c9756c68532174fd5b85762fee # domain verification
type: TXT
data: 7a99f795a552c812b55c7f809920bf25db96137b
- type: MX
data: "{{ mailbox_mx }}"
- type: TXT
data: "{{ mailbox_spf }}"
- domain: "{{ mailbox_dkim_keys[0].name }}._domainkey"
type: TXT
data: "{{ mailbox_dkim_keys[0].data }}"
- domain: "{{ mailbox_dkim_keys[1].name }}._domainkey"
type: TXT
data: "{{ mailbox_dkim_keys[1].data }}"
- domain: _dmarc
type: TXT
data: v=DMARC1;p=none
- domain: autoconfig
type: CNAME
data: mailbox.org.
- domain: _autodiscover._tcp
type: SRV
data: "0 0 443 mailbox.org."
- domain: _submission._tcp
type: SRV
data: "10 10 465 smtp.mailbox.org."
- domain: _imaps._tcp
type: SRV
data: "10 10 993 imap.mailbox.org."
- domain: _hkps.tcp
type: SRV
data: "10 10 443 pgp.mailbox.org."
# other entries
- domain: _minecraft._tcp.wg
type: SRV
data: "10 10 10110 mc.wg.{{ domain }}."
- role: dns/master
domain: forumderschan.de
main_nameserver_domain: "ns1.banananet.work"
responsible_mail_name: hostmaster.banananet.work
slaves_ip: "{{ hetzner_authoritatives_ip }}"
entries:
# main NS entry
- type: NS
data: ns1.banananet.work.
# Hetzner NS entries
- type: NS
data: "{{ hetzner_authoritatives }}"
# limit CA
- type: CAA
data: 0 issue "letsencrypt.org"
- role: dns/master
domain: stadtpiraten-karlsruhe.de
main_nameserver_domain: "ns1.banananet.work"
responsible_mail_name: hostmaster.banananet.work
entries:
# main NS entry
- type: NS
data: ns1.banananet.work.
# limit CA
- type: CAA
data: 0 issue "letsencrypt.org"
- name: Add public available hosts to dns zones
hosts: public_available
roles:
- role: dns/server_entries
domain: "{{ inventory_hostname }}"
- name: Arbitrary entries
# all tasks/roles here must be local only
hosts: all # select any host as not important
run_once: yes # run only once "for first host"
gather_facts: no # do not gather facts from host as these may not be used
roles:
- role: ext_mail/mailjet
tags:
- mailjet
- wg.banananet.work
domain: wg.banananet.work
verification_name: 5803f0f5
verification_data: 5803f0f5f4278d66327350f7a8141b70

@ -1 +0,0 @@
*.yml

@ -1 +0,0 @@
../filter_plugins

@ -1,22 +0,0 @@
---
- name: Configure bwcloud nodes
hosts: bwcloud_vserver
tasks:
- name: Install special packages for bw cloud nodes
apt:
name:
- linux-headers-cloud-amd64
state: present
- name: Configure cloud-kernel to preserve hostname
copy:
content: |
preserve_hostname: yes
dest: "/etc/cloud/cloud.cfg.d/preserve_hostname.cfg"
owner: root
group: root
mode: u=rw,g=r,o=r
# If something goes wrong with mounting or /etc/hosts, add this back to cloud.cfg using directory:
#mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
#manage_etc_hosts: true

@ -1,42 +0,0 @@
---
- name: Configure Surface 3 device
hosts: dev_surface3
tasks:
- name: Install packages for hardware
apt:
state: present
name:
- intel-media-va-driver-non-free
- intel-microcode
- firmware-linux
- firmware-linux-free
- firmware-linux-nonfree
- xserver-xorg-video-intel
- name: Add apt key for special kernel
apt_key:
state: present
id: 87DEFA4AB94A99A4C8C3112556C464BAAC421453
url: https://raw.githubusercontent.com/linux-surface/linux-surface/master/pkg/keys/surface.asc
- name: Add apt repository for special kernel
apt_repository:
state: present
filename: linux-surface
repo: "deb [arch=amd64] https://pkg.surfacelinux.com/debian release main"
update_cache: yes
- name: Install special kernel
apt:
state: present
name:
- libwacom-surface
- linux-headers-surface
- linux-image-surface
- linux-surface-secureboot-mok # Password: surface
- name: Disable evbug module debug logging # https://ralph.blog.imixs.com/2013/10/02/evbug-auf-die-blacklist-setzen/
copy:
content: |
blacklist evbug
dest: "{{ global_modprode_configuration_directory }}/disable-evbug.conf"
owner: root
group: root
mode: u=rw,g=r,o=

@ -1,16 +0,0 @@
---
- name: Configure Raspberry Pi nodes
hosts: os_raspbian
tasks:
- name: Configure default boot to console shell
file:
state: link
src: "{{ global_systemd_preset_directory }}/multi-user.target"
dest: "{{ global_systemd_configuration_directory }}/default.target"
owner: root
group: root
- name: Remove raspbian-specific apt source file
file:
state: absent
path: "{{ global_apt_sources_directory }}/raspi.list"

@ -1 +0,0 @@
../group_vars

@ -1 +0,0 @@
../helpers

@ -1,386 +0,0 @@
- name: Configure hatoria.banananet.work
hosts: hatoria.banananet.work
vars:
bnet_cloud_domain: "cloud.banananet.work"
bnet_cloud_username: "{{ bnet_cloud_domain | domain_to_username }}"
roles:
- role: nginx/default_server # Would not be configurable otherwise
tags:
- default_server
# Git Server
- role: server/gitea
tags:
- git.banananet.work
domain: git.banananet.work
gitea_system_user: git
database_user: gitea
- role: server/drone.io/server
domain: ci.git.banananet.work
bind_port: 12824
gitea_server_url: https://git.banananet.work
gitea_client_id: "{{ drone_ci_gitea_main_oauth2_client_id }}"
gitea_client_secret: "{{ drone_ci_gitea_main_oauth2_client_secret }}"
- role: server/drone.io/runner
drone_server_host: ci.git.banananet.work
# Banananet.work
- role: server/static
tags:
- banananet.work
domain: banananet.work
repo: git@git.banananet.work:banananetwork/main-static.git
- role: nginx/forward
tags:
- banananet.work
domain: www.banananet.work
dest: banananet.work
# SpotMe Server
- role: server/spotme
tags:
- spotme.banananet.work
domain: spotme.banananet.work
bind_port: 12820
# Firefox Sync Server
- role: server/firefox-sync
tags:
- firefox.banananet.work
domain: firefox.banananet.work
# RSS Server
# TODO Manual initialization of database required
- role: server/tt-rss
tags:
- rss.banananet.work
domain: rss.banananet.work
# Linx Server
- role: server/linx
tags:
- drop.banananet.work
domain: drop.banananet.work
bind_port: 12840
use_hdd_directory: yes
site_name: "BananaNetwork Drop Server"
# # Admin Panel
# - role: server/php
# domain: nvak.banananet.work
# repo: PHPMYADMIN # TODO
# BananaNetwork Keys
# - role: server/node
# domain: keys.banananet.work
# repo: https://git.banananet.work/banananetwork/keys.git
# bind_port: 12822
# system_user: keys-banananet-work
# Nextcloud Server
- role: server/nextcloud
tags:
- cloud.banananet.work
domain: "{{ bnet_cloud_domain }}"
system_user: "{{ bnet_cloud_username }}"
nextcloud_admin_user: "{{ global_username }}"
enabled_apps_list:
- accessibility
- activity
- admin_audit
- apporder
- bruteforcesettings
- calendar
- checksum
- cloud_federation_api
- comments
- contacts
- contactsinteraction
- cospend
- dav
- deck
- external
- federatedfilesharing
- federation
- files
- files_automatedtagging
- files_external
- files_markdown
- files_pdfviewer
- files_rightclick
- files_sharing
- files_trashbin
- files_versions
- files_videoplayer
- firstrunwizard
- logreader
- lookup_server_connector
- mail
- metadata
- nextcloud_announcements
- notes
- notifications
- oauth2
- ocdownloader
- password_policy
- phonetrack
- photos
- polls
- privacy
- provisioning_api
- quota_warning
- ransomware_protection
- serverinfo
- settings
- sharebymail
- sociallogin
- socialsharing_email
- support
- suspicious_login
- systemtags
- tasks
- text
- theming
- twofactor_admin
- twofactor_backupcodes
- twofactor_gateway
- twofactor_nextcloud_notification
- twofactor_totp
- twofactor_u2f
- updatenotification
- viewer
- workflowengine
disabled_apps_list:
- encryption
- files_readmemd
- recommendations
- spreed
- survey_client
- user_ldap
# Forum der Schande
- role: server/php
tags:
- forumderschan.de
domain: forumderschan.de
repo: git@git.banananet.work:strichliste/strichliste-php.git
root: html
installation_includes:
- includes
- role: nginx/forward
tags:
- forumderschan.de
domain: www.forumderschan.de
dest: forumderschan.de
# Monitors
- role: misc/tg_monitor_cmd
tags: tg-monitor-cmd
monitor_name: forumderschan.de-NS
description: "NS entries of forumderschan.de"
command_str: >-
/usr/bin/dig
@a.nic.de.
forumderschan.de. NS
| grep --only-matching --perl-regexp '(?<=\s)(\S+\.)+(?=$)'
| sort
use_shell: yes
# WG Nextcloud
- role: server/nextcloud
tags:
- wg.banananet.work
domain: wg.banananet.work
nextcloud_admin_user: felix
enabled_apps_list:
- accessibility
- activity
- apporder
- bruteforcesettings
- calendar
- checksum
- cloud_federation_api
- comments
- contacts
- cookbook
- cospend
- dav
- deck
- encryption
- external
- federatedfilesharing
- federation
- files
- files_automatedtagging
- files_external
- files_pdfviewer
- files_rightclick
- files_sharing
- files_trashbin
- files_versions
- files_videoplayer
- firstrunwizard
- logreader
- lookup_server_connector
- metadata
- nextcloud_announcements
- notes
- notifications
- oauth2
- ocdownloader
- password_policy
- photos
- polls
- privacy
- provisioning_api
- quota_warning
- ransomware_protection
- serverinfo
- settings
- sharebymail
- side_menu
- sociallogin
- socialsharing_email
- support
- suspicious_login
- systemtags
- tasks
- text
- theming
- twofactor_admin
- twofactor_backupcodes
- twofactor_gateway
- twofactor_nextcloud_notification
- twofactor_totp
- twofactor_u2f
- updatenotification
- viewer
- workflowengine
disabled_apps_list:
- admin_audit
- recommendations
- spreed
- survey_client
- user_ldap
# WG Minecraft
- role: server/minecraft
tags:
- mc.wg.banananet.work
domain: mc.wg.banananet.work
minecraft_version: "1.16.4"
minecraft_ram: "16G"
minecraft_port: 25566
config:
difficulty: normal
motd: ChaosCraft
view-distance: 16
# # Stadtpiraten
# - role: server/typo3
# domain: piraten.dev.banananet.work
# - role: server/php
# domain: forum.piraten.dev.banananet.work
# repo: PHPBB # TODO
# version: master
# # Stadtpiraten (prod)
# - role: nginx/forward
# domain: www.stadtpiraten-karlsruhe.de
# dest: stadtpiraten-karlsruhe.de
# SMD/SFC HST 2020
- role: nginx/forward
tags:
- proj-hst
- hst21.banananet.work
domain: hst20.banananet.work
dest: hst21.banananet.work
- role: server/nextcloud
tags:
- proj-hst
- hst21.banananet.work
domain: hst21.banananet.work
system_user: nc-hst21
nextcloud_admin_user: felix
enabled_apps_list:
- accessibility
- activity
- apporder
- bruteforcesettings
- calendar
- checksum
- cloud_federation_api
- comments
- contacts
- contactsinteraction
- cospend
- dav
- deck
- encryption
- external
- federatedfilesharing
- federation
- files
- files_automatedtagging
- files_linkeditor
- files_mindmap
- files_pdfviewer
- files_rightclick
- files_sharing
- files_trashbin
- files_versions
- files_videoplayer
- firstrunwizard
- forms
- logreader
- lookup_server_connector
- mail
- maps
- metadata
- nextcloud_announcements
- notes
- notifications
- oauth2
- password_policy
- photos
- polls
- privacy
- provisioning_api
- quota_warning
- ransomware_protection
- serverinfo
- settings
- sharebymail
- socialsharing_email
- spreed
- support
- suspicious_login
- systemtags
- tasks
- text
- theming
- twofactor_admin
- twofactor_backupcodes
- twofactor_gateway
- twofactor_totp
- twofactor_u2f
- updatenotification
- viewer
- whiteboard
- workflowengine
disabled_apps_list:
- admin_audit
- dashboard
- files_external
- recommendations
- sociallogin
- survey_client
- user_ldap
- user_status
- weather_status
tasks:
- name: Configure custom archive Nextcloud directory on hdd for personal use
tags:
- cloud.banananet.work
- custom_archive_directory
vars:
archive_directory: "{{ global_hdd_directory }}/{{ bnet_cloud_domain }}~personal-archive"
block:
- name: Create archive directory
file:
state: directory
path: "{{ archive_directory }}"
owner: "{{ bnet_cloud_username }}"
group: "{{ bnet_cloud_username }}"
mode: "u=rwx,g=rx,o="
register: archive_directory_task
- name: Show message to user about the path when it changes
debug:
msg: >-
Changed custom archive directory: Please ensure you (re-)configure this directory properly on your Nextcloud instance: {{ archive_directory | quote }}
when: archive_directory_task.changed

@ -1,10 +0,0 @@
- name: Configure nvak.banananet.work
hosts: nvak.banananet.work
roles:
- role: nginx/default_server # Would not be configurable otherwise
# DSA Seite
# - role: server/node
# domain: dsa.banananet.work
# repo: git@git.banananet.work:dsaGroup/dsaPage.git
# bind_port: 12821
# system_user: dsaPage

@ -1,27 +0,0 @@
- name: Configure rurapenthe
hosts: rurapenthe.banananet.work
roles:
- role: nginx/default_server # Would not be configurable otherwise
# - role: dns/slave
# domain: banananet.work
# masters:
# - nvak.banananet.work
# - role: dns/slave
# domain: forumderschan.de
# masters:
# - nvak.banananet.work
# - role: dns/slave
# domain: stadtpiraten-karlsruhe.de
# masters:
# - nvak.banananet.work
# - role: dns/slave
# domain: spotme.fun
# masters:
# - nvak.banananet.work
- role: server/node
domain: keys.banananet.work
repo: https://git.banananet.work/banananetwork/keys.git
bind_port: 12822
system_user: keys-banananet-work
environment_vars:
REGISTER_PASS: "{{ global_ip_discover_register_pass }}"

@ -1,22 +0,0 @@
---
- name: Configure thinkie ThinkPad Tablet
hosts: thinkie.eridon.banananet.work
tasks:
- name: Increase tty font for readability
debconf:
name: console-setup
question: "{{ item.key }}"
value: "{{ item.value }}"
vtype: select
loop:
- key: console-setup/fontsize-fb47
value: 16x32 (framebuffer only)
- key: console-setup/fontface47
value: Terminus
- key: console-setup/fontsize
value: 16x32
- key: console-setup/fontsize-text47
value: 16x32 (framebuffer only)
loop_control:
label: "{{ item.key }}"

@ -1 +0,0 @@
../host_vars

@ -1,12 +0,0 @@
---
- name: Configure wgpanel
hosts: wgpanel.eridon.banananet.work
roles:
- role: kiosk/boot
system_name: WG Panel
plymouth_theme_pack: pack_1
plymouth_theme: colorful_sliced
- role: kiosk/website
kiosk_website: "http://10.11.11.70:8123/wg-dashboard/default"
zoom_factor: 1.5

@ -1 +0,0 @@
../library

@ -1,41 +0,0 @@
---
- name: Configure local repository
hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: Create local directory for credentials & keys
file:
path: "{{ item }}"
owner: "{{ global_local_user }}"
group: "{{ global_local_user }}"
mode: "u=rwx,g=rx,o=rx"
state: directory
loop:
- "{{ global_credentials_directory }}"
- "{{ global_public_key_directory }}"
- "{{ global_dns_list_directory }}"
- "{{ global_ssh_key_directory }}"
- "{{ global_ssh_host_key_directory }}"
- "{{ global_wireguard_private_directory }}"
- "{{ global_wireguard_public_directory }}"
- name: Configure shorts table
copy:
content: |
banananet.work bnet
forumderschan.de striche
stadtpiraten-karlsruhe.de pirat-ka
dest: "{{ global_public_key_directory }}/domain_shorts"
owner: "{{ global_local_user }}"
group: "{{ global_local_user }}"
mode: u=rw,g=r,o=r
- name: Install required tools
become: yes
become_user: root
become_method: sudo
apt:
name:
- sshpass
- wireguard-tools
state: present

@ -1 +0,0 @@
../public_keys

@ -1 +0,0 @@
../roles

@ -1,22 +0,0 @@
---
- name: Store facts of hosts
hosts: all
gather_facts: yes
tasks:
- name: Create directory for facts
file:
state: directory
path: "./facts"
owner: "{{ local_user }}"
group: "{{ local_user }}"
mode: "u=rwx,g=rx,o=rx"
delegate_to: localhost
- name: Download facts to file
copy:
content: "{{ ansible_facts | to_nice_yaml(indent=2) }}"
dest: "./facts/{{ ansible_fqdn }}.yml"
owner: "{{ local_user }}"
group: "{{ local_user }}"
mode: "u=rw,g=r,o=r"
delegate_to: localhost

@ -1,35 +0,0 @@
---
- name: Configure wireguard backbones
hosts: wireguard_backbones
tags:
- wireguard
- wireguard_backbones
roles:
- role: wireguard/backbone
- name: Configure wireguard clients
hosts: wireguard_clients
tags:
- wireguard
- wireguard_clients
roles:
- role: wireguard/client
- name: Reload all configurations
hosts:
- wireguard_backbones
- wireguard_clients
tags:
- wireguard
- wireguard_backbones
- wireguard_clients
roles:
- name: misc/handlers
tasks:
- name: Reload systemd wireguard network always
become: no
command: /bin/true
delegate_to: localhost
notify:
- restart systemd network

@ -1,13 +0,0 @@
# `/public_keys` for Ansible project
This directory stores all facts and public keys generated by Ansible on the remotes.
In contrast to the `/facts` directory,
it stores each fact in a single file
so that Ansible itself and other scripts can use them more easily.
Also, if this directory needs to be restored,
the full current Ansible playbook needs to be run.
## Scripts
This directory also contains scripts tracked by the repository that make it easier for Ansible or other scripts to look up the stored data.
Each script contains a short description of its usage.
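For illustration, a minimal usage sketch of the two lookup scripts; the script file names shown here are assumptions (the repository does not fix them), while the flags and the data layout (`dns/` zone files, `ssh/hosts/<host>/` key files next to the scripts) follow the scripts themselves:

```sh
# look up which DNS zone file covers a domain
# (the script accepts domains as arguments or, without arguments, reads them from stdin)
./public_keys/find_dns_zone.py mail.banananet.work

# generate SSHFP resource records from the stored SSH host keys of a host
./public_keys/gen_sshfp_records.py --host nvak.banananet.work --domain nvak.banananet.work
```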

@ -1,39 +0,0 @@
#!/usr/bin/env python3
from pathlib import Path
import sys
class DnsRootNoParentError(Exception):
pass
def get_dns_parent(domain):
s = domain.split('.', 1)
if len(s) < 2:
raise DnsRootNoParentError()
return domain.split('.', 1)[1]
def find_dns_zone(map_dir, domain):
dns_file = Path(map_dir) / domain
if dns_file.exists():
return domain
else:
return find_dns_zone(map_dir, get_dns_parent(domain))
def main():
dns_map_dir = Path(sys.argv[0]).parent / "dns"
if len(sys.argv) >= 2:  # domains given as arguments; otherwise read them from stdin
domains = sys.argv[1:]
else:
domains = []
for domain in sys.stdin:
domains.append(domain.strip())
for domain in domains:
domain = domain.strip('.')
try:
print(find_dns_zone(dns_map_dir, domain))
except DnsRootNoParentError:
print(f'No dns zone found for "{domain}"', file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

@ -1,26 +0,0 @@
#!/usr/bin/env python3
import argparse
from pathlib import Path
import subprocess
import sys
def gen_sshfp_rr(keys_dir, host, domain):
key_dir = Path(keys_dir) / host
res = []
for key in key_dir.iterdir():
if key.name != "dns":
res.append(subprocess.check_output(["ssh-keygen", "-r", domain, "-f", str(key)]).decode('utf-8').strip())
return '\n'.join(res)
def main():
ssh_hosts_keys = Path(sys.argv[0]).parent / "ssh/hosts"
parser = argparse.ArgumentParser()
parser.add_argument('--domain', default=None)
parser.add_argument('--host', required=True)
args = parser.parse_args()
args.domain = (args.domain + ".") if args.domain else "@"
print(gen_sshfp_rr(ssh_hosts_keys, args.host, args.domain))
if __name__ == "__main__":
main()

@ -4,27 +4,12 @@
apt:
state: present
name:
- bmon
- exa
- git
- htop
- httpie
- man
- thefuck
- tmux
- vim
- zsh
- zsh-antigen
- name: Configure sudo insults
copy:
content: |
Defaults insults
dest: "{{ global_sudoers_directory }}/insults"
owner: root
group: root
mode: u=r,g=r,o=
validate: "{{ global_validate_sudoers_file }}"
- name: Configure user account {{ username }}
user:
@ -42,7 +27,7 @@
ssh_key_type: ed25519
ssh_key_file: .ssh/id_ed25519
ssh_key_passphrase: "{{ password }}"
ssh_key_comment: "{{ username }}@{{ inventory_hostname }} {{ ansible_date_time.date }}"
ssh_key_comment: "{{ username }}@{{ ansible_fqdn }} {{ ansible_date_time.date }}"
- name: Configure home directory
file:
@ -52,13 +37,13 @@
group: "{{ username }}"
mode: "u=rwx,g=rx,o="
- name: Configure authorized_keys
authorized_key:
state: present
user: "{{ username }}"
key: "{{ authorized_keys }}"
- name: Download oh-my-zsh for user {{ username }}
become_user: "{{ username }}"
git:
repo: https://github.com/robbyrussell/oh-my-zsh.git
dest: ~/.oh-my-zsh
- name: Configure zsh
- name: Configure oh-my-zsh
become_user: "{{ username }}"
template:
src: template.zshrc

@ -1,22 +1,69 @@
source {{ global_zsh_antigen_source | quote }};
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
antigen use oh-my-zsh
# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"
antigen theme {{ zsh_theme | quote }}
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/robbyrussell/oh-my-zsh/wiki/Themes
ZSH_THEME="{{ zsh_theme }}"
MAGIC_ENTER_GIT_COMMAND='git status -u .'
MAGIC_ENTER_OTHER_COMMAND='ls -lh .'
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in ~/.oh-my-zsh/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
ZSH_TMUX_AUTOSTART=true
ZSH_TMUX_AUTOCONNECT=true
ZSH_TMUX_AUTOQUIT=true
# Uncomment the following line to use case-sensitive completion.
# CASE_SENSITIVE="true"
# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
# HYPHEN_INSENSITIVE="true"
# Uncomment the following line to disable bi-weekly auto-update checks.
DISABLE_AUTO_UPDATE="false"
DISABLE_UPDATE_PROMPT="true"
# Uncomment the following line to change how often to auto-update (in days).
export UPDATE_ZSH_DAYS=2
# Uncomment the following line to disable colors in ls.
DISABLE_LS_COLORS="false"
# Uncomment the following line to disable auto-setting terminal title.
DISABLE_AUTO_TITLE="false"
# Uncomment the following line to enable command auto-correction.
ENABLE_CORRECTION="false"
# Uncomment the following line to display red dots whilst waiting for completion.
COMPLETION_WAITING_DOTS="false"
# quit bugging me!
DISABLE_AUTO_UPDATE="true"
DISABLE_LS_COLORS="true" # To remove alias "ls=ls --color=tty" by oh-my-zsh for exa alias
# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"
antigen bundles <<EOBUNDLES
# oh-my-zsh plugins
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"
# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder
# Which plugins would you like to load?
# Standard plugins can be found in ~/.oh-my-zsh/plugins/*
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(
colored-man-pages
colorize
command-not-found
@ -33,21 +80,41 @@ antigen bundles <<EOBUNDLES
themes
tmux
ufw
EOBUNDLES
)
MAGIC_ENTER_GIT_COMMAND='git status -u .'
MAGIC_ENTER_OTHER_COMMAND='ls -lh .'
antigen apply
ZSH_TMUX_AUTOSTART=true
ZSH_TMUX_AUTOCONNECT=true
ZSH_TMUX_AUTOQUIT=true
export ANSIBLE_NOCOWS=1
# Disable flow control
stty -ixon
source $ZSH/oh-my-zsh.sh
# User configuration
# export MANPATH="/usr/local/man:$MANPATH"
# You may need to manually set your language environment
# export LANG=en_US.UTF-8
# Preferred editor for local and remote sessions
# if [[ -n $SSH_CONNECTION ]]; then
# export EDITOR='vim'
# else
# export EDITOR='mvim'
# fi
# aptitude custom
alias api="sudo aptitude"
alias ati="sudo aptitude install"
alias atr="sudo aptitude remove"
alias up='sudo aptitude update ; sudo aptitude safe-upgrade'
# Compilation flags
# export ARCHFLAGS="-arch x86_64"
function fork() {
"$@" >/dev/null 2>&1 &!;
}
# Set personal aliases, overriding those provided by oh-my-zsh libs,
# plugins, and themes. Aliases can be placed here, though oh-my-zsh
# users are encouraged to define aliases within the ZSH_CUSTOM folder.
# For a full list of active aliases, run `alias`.
#
# Example aliases
# alias zshconfig="mate ~/.zshrc"
# alias ohmyzsh="mate ~/.oh-my-zsh"

@ -1,5 +1,15 @@
---
acme_account_mail: "{{ global_admin_mail }}"
acme_system_user: "acme"
acme_user_directory: "/var/{{ acme_system_user }}"
acme_key_size: 4096
acme_source_directory: "{{ acme_user_directory }}/repository"
acme_source_repository: "https://github.com/Neilpang/acme.sh.git"
acme_source_version: "master"
acme_account_mail: felix.stupp@outlook.com
acme_installation_directory: "{{ acme_user_directory }}/application"
acme_configuration_directory: "{{ acme_user_directory }}/configuration"
acme_internal_certificates_directory: "{{ acme_configuration_directory }}/certificates"
acme_certificates_directory: "{{ acme_user_directory }}/certificates"

@ -3,4 +3,6 @@
allow_duplicates: no
dependencies:
- role: nginx/application
- role: misc/system_user
system_user: "{{ acme_system_user }}"
user_directory: "{{ acme_user_directory }}"

@ -1,15 +1,40 @@
---
- name: Install required packages
apt:
state: present
name:
- certbot # main package
- name: Configure certbot
template:
src: cli.ini
dest: "{{ global_certbot_configuration_file }}"
owner: root
group: root
mode: u=rw,g=r,o=r
- name: Download acme.sh
become_user: "{{ acme_system_user }}"
git:
repo: "{{ acme_source_repository }}"
version: "{{ acme_source_version }}"
dest: "{{ acme_source_directory }}"
update: no
- name: Configure acme.sh
become_user: "{{ acme_system_user }}"
command: >-
./acme.sh --install
--home {{ acme_installation_directory | quote }}
--config-home {{ acme_configuration_directory | quote }}
--cert-home {{ acme_internal_certificates_directory | quote }}
--accountemail {{ acme_account_mail | quote }}
args:
chdir: "{{ acme_source_directory }}"
creates: "{{ acme_installation_directory }}"
- name: Upgrade acme.sh
become_user: "{{ acme_system_user }}"
command:
./acme.sh --upgrade
--home {{ acme_installation_directory | quote }}
--config-home {{ acme_configuration_directory | quote }}
args:
chdir: "{{ acme_installation_directory }}"
register: acme_upgrade_results
changed_when: acme_upgrade_results.rc == 0 and "Upgrade success" in acme_upgrade_results.stdout
- name: Create directory for certificates
file:
path: "{{ acme_certificates_directory }}"
state: directory
owner: "{{ acme_system_user }}"
group: "{{ acme_system_user }}"
mode: "u=rwx,g=,o="

@ -1,12 +0,0 @@
# Accept service terms
agree-tos
# Default RSA key size
rsa-key-size = {{ acme_key_size }}
# E-Mail Address for registration
email = {{ acme_account_mail }}
# Use webroot per default
authenticator = webroot
webroot-path = {{ acme_validation_root_directory }}

@ -1,25 +1,5 @@
---
# at least one of domain or domains is required
domain: "{{ domains[0] }}"
domains:
- "{{ effective_domain }}"
# effective_domain from all/vars.yml
acme_must_staple: yes
certificate_name: "{{ effective_domain }}"
# acme_validation_root_directory from nginx/application
acme_certificate_directory: "{{ global_certbot_certificates_directory }}/{{ certificate_name }}"
acme_certificate_location: "{{ acme_certificate_directory }}/cert.pem"
acme_chain_location: "{{ acme_certificate_directory }}/chain.pem"
acme_fullchain_location: "{{ acme_certificate_directory }}/fullchain.pem"
acme_key_location: "{{ acme_certificate_directory }}/privkey.pem"
acme_keyfullchain_location: "{{ acme_certificate_directory }}/keyfullchain.pem"
# at most one of these is used
reload_command: "systemctl reload-or-restart {{ global_nginx_service_name | quote }}"
reload_commands:
- "{{ reload_command }}"
acme_certificate_prefix: "{{ acme_certificates_directory }}/{{ domain }}"
acme_certificate_location: "{{ acme_certificate_prefix }}.crt"
acme_key_location: "{{ acme_certificate_prefix }}.key"

@ -4,5 +4,4 @@ allow_duplicates: yes
dependencies:
- role: acme/application
- role: dns/server_entries
# domain
- role: nginx/application

@ -1,17 +1,35 @@
---
- name: Issue certificate for {{ certificate_name }}
command:
cmd: >-
certbot certonly
--non-interactive
--cert-name {{ certificate_name | quote }}
{% if acme_must_staple %}--must-staple{% endif %}
--disable-hook-validation
--post-hook {{ ( '(' + (all_reload_commands | join(') && (')) + ')' ) | quote }}
{% for d in domains %}
--domain {{ d | quote }}
{% endfor %}
creates: "{{ acme_certificate_location }}"
tags:
- certificate
- meta: flush_handlers
- name: "Issue certificate for {{ domain }}"
become_user: "{{ acme_system_user }}"
command: >-
./acme.sh --issue
--home {{ acme_installation_directory | quote }}
--config-home {{ acme_configuration_directory | quote }}
--domain "{{ domain | quote }}"
--webroot "{{ nginx_validation_root_directory | quote }}"
--ecc
--ocsp-must-staple
args:
chdir: "{{ acme_installation_directory }}"
register: acme_issue_result
changed_when: acme_issue_result.rc != 2 or "Domains not changed" not in acme_issue_result.stdout
failed_when: acme_issue_result.rc != 0 and "Domains not changed" not in acme_issue_result.stdout
- name: "Install certificate for {{ domain }}"
become_user: "{{ acme_system_user }}"
command: >-
./acme.sh --install-cert
--home {{ acme_installation_directory | quote }}
--config-home {{ acme_configuration_directory | quote }}
--domain "{{ domain | quote }}"
--key-file "{{ acme_key_location | quote }}"
--fullchain-file "{{ acme_certificate_location | quote }}"
--reloadcmd "systemctl force-reload nginx"
args:
chdir: "{{ acme_installation_directory }}"
creates: "{{ acme_key_location }}"
register: acme_install_result
failed_when: acme_install_result.rc != 0 and "Reload error for" not in acme_install_result.stderr

@ -1,5 +0,0 @@
---
required_reload_commands:
- "cat {{ acme_key_location | quote }} {{ acme_fullchain_location | quote }} > {{ acme_keyfullchain_location | quote }}"
all_reload_commands: "{{ required_reload_commands + reload_commands }}"

@ -1,10 +1,6 @@
---
- name: Reset connection to remove privileged user
meta: reset_connection
- name: Remove temporary privileged user
user:
- user:
name: "{{ bootstrap_user }}"
state: absent
become: yes

@ -25,20 +25,6 @@
name: "{{ bootstrap_user }}"
state: present
register: bootstrap_user_data
- name: Ensure old user has .ssh directory
file:
state: directory
path: "{{ bootstrap_user_data.home }}/.ssh"
owner: "{{ bootstrap_user }}"
group: "{{ bootstrap_user }}"
mode: "u=rwx,g=rx,o="
- name: Ensure old user has authorized_keys file
file:
state: touch
path: "{{ bootstrap_user_data.home }}/.ssh/authorized_keys"
owner: "{{ bootstrap_user }}"
group: "{{ bootstrap_user }}"
mode: "u=rw,g=r,o="
- name: Create .ssh directory for user {{ bootstrap_expected_user }}
file:
path: "{{ bootstrap_expected_user_data.home }}/.ssh"
@ -55,9 +41,3 @@
group: "{{ bootstrap_expected_user }}"
mode: u=rw,g=r,o=
become: yes
- name: Configure given SSH key for new user
authorized_key:
state: present
user: "{{ bootstrap_expected_user }}"
key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"

@ -1,15 +1,8 @@
---
- name: Set variables for shifting back
set_fact:
- set_fact:
bootstrap_used: no
ansible_user: '{{ bootstrap_expected_user }}'
ansible_become_pass: '{{ bootstrap_expected_become_pass }}'
- meta: reset_connection
- name: Reboot server to properly disable old user
reboot:
when:
- bootstrap_user != "root"
- bootstrap_user != bootstrap_expected_user
- meta: reset_connection

@ -1,13 +1,11 @@
---
- name: Try to ping host with expected credentials
action: ping
- action: ping
ignore_unreachable: true
ignore_errors: yes
register: pingtest
- meta: clear_host_errors
- name: Shift if ping fails
set_fact:
- set_fact:
bootstrap_used: yes
ansible_user: '{{ bootstrap_user }}'
ansible_become_pass: '{{ bootstrap_become_pass }}'

@ -1,10 +1,9 @@
#!/usr/bin/env bash
#!/bin/sh
set -euxo pipefail;
set -e;
dir="$(dirname "$1")";
date="$(date +%Y-%m-%d-%H-%M)";
name="$(basename "$1")";
ext="${name##latest.}";
dir=$(dirname "$1");
date=$(date +%Y-%m-%d-%H-%M);
name=$(basename "$1");
mv "$1" "$dir/$date.$ext";
mv "$1" "$dir/$date-$name";

@ -1,20 +1,15 @@
#!/usr/bin/env sh
#!/bin/sh
# Usage: <url> <fpr> <keyring>
set -euf;
set -e;
return_code=0;
readonly keyfile="$(mktemp --dry-run)";
mkdir --parents ~/.gnupg;
chmod "u=rwx,g=,o=" ~/.gnupg;
/usr/bin/wget --quiet --output-document="$keyfile" -- "$1";
/usr/bin/gpg2 --dry-run --quiet --debug-level 0 --import-options import-show --with-colons --import "$keyfile" | awk -F: '$1 == "fpr" { print $10 }' | head --lines=1 | grep --fixed-strings "$2" > /dev/null;
readonly return_text="$(/usr/bin/gpg2 --no-default-keyring --keyring "$3" --import "$keyfile" 2>&1)";
if echo "$return_text" | grep --basic-regexp ' not changed$' > /dev/null; then
return_code=2;
fi
/usr/bin/wget --output-document="$keyfile" -- "$1";
/usr/bin/gpg2 --dry-run --quiet --import-options import-show --with-colons --import "$keyfile" | awk -F: '$1 == "fpr" { print $10 }' | head --lines=1 | grep --fixed-strings "$2";
/usr/bin/gpg2 --quiet --no-default-keyring --keyring "$3" --import "$keyfile";
rm "$keyfile";
exit $return_code;

@ -1,12 +1,4 @@
---
- name: restart systemd-journald
service:
name: systemd-journald.service
state: restarted
- name: generate locales
command: locale-gen
- name: reload facts
setup:

@ -1,20 +0,0 @@
---
- name: Create custom facts directory
file:
state: directory
path: "{{ global_ansible_facts_directory }}"
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
- name: Store custom apt fact
copy:
content: |
#!/bin/sh
echo "{\"architecture\": \"$(dpkg --print-architecture)\"}";
dest: "{{ global_ansible_facts_directory }}/dpkg.fact"
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
notify: reload facts

@ -7,8 +7,6 @@
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
tags:
- backups
- name: Upload helper scripts
copy:
@ -17,23 +15,9 @@
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
validate: "{{ global_validate_shell_script }}"
loop:
- backup_rename.sh
- gpg_import_url_key.sh
tags:
- backups
- name: Upload python helper scripts
copy:
src: "{{ item }}"
dest: "{{ global_helper_directory }}/{{ item }}"
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
validate: "{{ global_validate_python_script }}"
loop:
- check_subnet.py
- name: Build and upload template helper scripts
template:
@ -42,21 +26,6 @@
owner: root
group: root
mode: "u=rwx,g=rx,o=rx"
validate: "{{ global_validate_shell_script }}"
loop:
- backup_autoremove.sh
- backup_database.sh
- backup_files.sh
- backup_mysql_database.sh
- nsupdate_keygen.sh
tags:
- backups
- name: Configure auto remove older backups
cron:
hour: 0
minute: 30
job: "{{ global_helper_directory }}/backup_autoremove.sh"
name: "Auto remove older backups"
state: present
tags:
- backups

@ -1,16 +0,0 @@
---
- name: Create directory for journald config
file:
state: directory
path: "{{ global_systemd_journal_configuration_directory }}"
owner: root
group: root
mode: u=rwx,g=rx,o=rx
- name: Configure journald log
template:
src: journald.conf
dest: "{{ global_systemd_journal_configuration_directory }}/main.conf"
notify:
- restart systemd-journald

@ -1,39 +0,0 @@
---
# protecting the process list from users other than root
# Source: https://wiki.archlinux.org/index.php/Security#hidepid
- name: Configure group for reading other processes
group:
state: present
name: proc
system: yes
- name: Configure proc mounting in fstab
lineinfile:
path: "{{ global_fstab_file }}"
regexp: '^\S+\s+/proc\s+proc\s+'
line: >-
proc /proc proc
nosuid,nodev,noexec,hidepid=2,gid=proc
0 0
- name: Ensure configuration directory for whitelisted services exist
file:
state: directory
path: "{{ global_systemd_configuration_directory }}/{{ item }}.d"
owner: root
group: root
mode: u=rwx,g=rx,o=rx
loop: "{{ global_proc_hidepid_service_whitelist }}"
- name: Configure whitelisted services to adapt to hidepid setting
copy:
content: |
[Service]
SupplementaryGroups=proc
dest: "{{ global_systemd_configuration_directory }}/{{ item }}.d/proc_hidepid_whitelist.conf"
owner: root
group: root
mode: u=rw,g=r,o=r
loop: "{{ global_proc_hidepid_service_whitelist }}"

@ -1,32 +1,19 @@
---
- name: Configure apt packages
import_tasks: packages.yml
include_tasks: packages.yml
- name: Configure sshd
import_tasks: sshd.yml
include_tasks: sshd.yml
- name: Configure ufw
import_tasks: ufw.yml
- name: Enforce kernel security
import_tasks: kernel_hidepid.yml
tags:
- kernel_hidepid
include_tasks: ufw.yml
- name: Configure locales
import_tasks: locales.yml
- name: Configure journald
import_tasks: journald.yml
tags:
- journald
- name: Configure custom facts
import_tasks: custom_facts.yml
include_tasks: locales.yml
- name: Configure helpers
import_tasks: helpers.yml
include_tasks: helpers.yml
- name: Configure ssh key for root user
user:
@ -34,50 +21,24 @@
state: present
generate_ssh_key: yes
ssh_key_type: ed25519
ssh_key_comment: "root@{{ inventory_hostname }}"
ssh_key_comment: "root@{{ ansible_fqdn }}"
register: root_user
- name: Store ssh public key local
copy:
local_action:
module: copy
content: "{{ root_user.ssh_public_key }}\n"
dest: "{{ global_ssh_key_directory }}/root@{{ inventory_hostname }}"
delegate_to: localhost
dest: "public_keys/ssh/root@{{ ansible_fqdn }}"
vars:
ansible_become: no
- name: Create hdd data directory
- name: Create auto update scripts directory
file:
state: directory
path: "{{ global_hdd_directory }}"
owner: root
group: root
mode: u=rwx,g=rx,o=rx
when:
- global_hdd_directory is defined
- name: Create scripts directories
file:
path: "{{ item }}"
path: "{{ update_scripts_directory }}"
state: directory
owner: root
group: root
mode: "u=rwx,g=rx,o="
loop:
- "{{ backup_scripts_directory }}"
- "{{ backup_files_scripts_directory }}"
- "{{ backup_mysql_database_scripts_directory }}"
- "{{ update_scripts_directory }}"
- name: Configure hdd dir for backups
import_role:
name: misc/hdd_dir
vars:
use_hdd_directory: "{{ global_hdd_directory is defined }}"
hdd_source_dir: "{{ backups_directory }}"
hdd_directory_name: backups
tags:
- backups
- backups_hdd_dir
- name: Create backups directories
file:
@ -88,10 +49,5 @@
mode: "u=rwx,g=rx,o=rx"
loop:
- "{{ backups_directory }}"
- "{{ backups_databases_directory }}"
- "{{ backups_files_directory }}"
- "{{ backups_mysql_database_directory }}"
tags:
- backups
- name: Flush handlers for role
meta: flush_handlers

@ -2,8 +2,8 @@
- name: Configure package source
template:
src: "sources.{{ ansible_distribution_name }}.list"
dest: "/etc/apt/sources.list"
src: "sources.list"
dest: /etc/apt/sources.list
owner: root
group: root
mode: "u=rw,g=r,o=r"
@ -11,9 +11,10 @@
- name: Update packages and install common packages
apt:
name:
- acl # Required for temporary files by Ansible, see https://docs.ansible.com/ansible/latest/user_guide/become.html#risks-of-becoming-an-unprivileged-user
- acl
- aptitude
- apt-transport-https # TODO Can be removed once all hosts run Debian >= buster, as the feature is integrated into apt
- apt-transport-https
- buffer
- ca-certificates
- cron
- curl
@ -21,20 +22,16 @@
- dnsutils
- git
- gnupg2
- pv # Required for scripting
- python3
- python3-apt # required for Ansible
- python3-ipy # required for helper check_subnet.py
- python3-pip
- python3-yaml # required for scripting
- sed # required for scripting
- shellcheck
- htop
- python
- python-pip
- software-properties-common
- unattended-upgrades
- vim # required because it will be configured as the system-wide default editor
- tmux
- ufw
- vim
- wget
state: present
- zsh
state: latest
allow_unauthenticated: no
update_cache: yes
cache_valid_time: 3600

@ -35,20 +35,8 @@
owner: root
group: root
mode: "u=rw,g=r,o=r"
validate: "{{ global_validate_sshd_config }}"
notify: reassemble sshd config
- name: Upload main ssh_config
template:
src: 0_main.ssh_config
dest: "{{ global_ssh_configuration_environment_directory }}/0_main.ssh_config"
owner: root
group: root
mode: "u=rw,g=r,o=r"
notify: reassemble ssh config
tags:
- ssh_config
- name: Collect ssh host keys
command: "cat /etc/ssh/ssh_host_{{ item | quote }}_key.pub"
loop: "{{ ssh_host_key_types }}"
@ -57,22 +45,29 @@
check_mode: no
- name: Create directory for host keys locally
file:
path: "{{ global_ssh_host_key_directory }}/{{ inventory_hostname }}"
local_action:
module: file
path: "{{ global_ssh_host_key_directory }}/{{ ansible_fqdn }}"
state: directory
owner: "{{ global_local_user }}"
group: "{{ global_local_user }}"
mode: "u=rwx,g=rx,o=rx"
delegate_to: localhost
- name: Store ssh host keys locally
copy:
local_action:
module: copy
content: "{{ item.stdout }}\n"
dest: "{{ global_ssh_host_key_directory }}/{{ inventory_hostname }}/{{ item.item }}"
dest: "{{ global_ssh_host_key_directory }}/{{ ansible_fqdn }}/{{ item.item }}"
owner: "{{ global_local_user }}"
group: "{{ global_local_user }}"
mode: "u=rw,g=r,o=r"
delegate_to: localhost
loop: "{{ ssh_host_keys.results }}"
loop_control:
label: "{{ item.item }}"
- name: Generate ssh host key dns fingerprints locally
local_action:
module: make
chdir: "{{ global_ssh_host_key_directory }}/{{ ansible_fqdn }}"
file: "{{ playbook_dir }}/helpers/ssh_dns_fingerprints.makefile"
target: dns

@ -5,14 +5,3 @@
state: enabled
policy: deny
direction: incoming
- name: Block known addresses
ufw:
insert: 1 # Insert before common rules
rule: deny
from_ip: "{{ item }}"
direction: in
comment: "IP from Blocklist"
loop: "{{ global_ip_blocklist }}"
tags:
- ip_blocklist

@ -1,51 +0,0 @@
# This is the ssh client system-wide configuration file. See
# ssh_config(5) for more information. This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.
# Configuration data is parsed as follows:
# 1. command line options
# 2. user-specific file
# 3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.
# Site-wide defaults for some commonly used options. For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.
Host *
# ForwardAgent no
# ForwardX11 no
# ForwardX11Trusted yes
# PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# GSSAPIKeyExchange no
# GSSAPITrustDNS no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# IdentityFile ~/.ssh/id_ecdsa
# IdentityFile ~/.ssh/id_ed25519
# Port 22
# Protocol 2
# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
# MACs hmac-md5,hmac-sha1,umac-64@openssh.com
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
# RekeyLimit 1G 1h
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
VerifyHostKeyDNS yes

@ -29,7 +29,7 @@ HostKey /etc/ssh/ssh_host_{{ type }}_key
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
PermitRootLogin yes
#StrictModes yes
MaxAuthTries 6
#MaxSessions 10
@ -86,7 +86,7 @@ UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
@ -113,11 +113,6 @@ AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
# Disable weak key algorithms
HostKeyAlgorithms -ecdsa-sha2-nistp256
KexAlgorithms -diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521
MACs -hmac-sha1,hmac-sha2-256,hmac-sha2-512,umac-64@openssh.com,umac-128@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no

@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -euxo pipefail;
backupsToKeep={{ backups_to_keep | quote }};
function onlyDatedFiles() {
grep --perl-regexp '/\d+(-\d+)*(\.[^/]+)*$';
}
function getDirName() {
grep --only-matching --perl-regexp '^.+(?=/[^/]+)';
}
find -H {{ backups_directory | quote }} -type f |
onlyDatedFiles |
getDirName |
sort --unique |
while read -r dir; do
find "$dir" -type f | onlyDatedFiles | sort --reverse | tail --lines=+$((backupsToKeep + 1)) | xargs rm --force;
done

@ -0,0 +1,9 @@
#!/bin/sh
set -e;
file={{ backups_databases_directory | quote }}"/$1.sql.gpg";
mysqldump --opt --databases "$1" | buffer -m 128M -s 128K | gpg --quiet --no-verbose --encrypt --recipient 73D09948B2392D688A45DC8393E1BD26F6B02FB7 --trust-model always > "$file";
chmod u+r-wx,g+r-wx,o+r-wx "$file";
{{ global_helper_directory | quote }}/backup_rename.sh "$file";

@ -1,19 +1,17 @@
#!/usr/bin/env bash
#!/bin/sh
set -euxo pipefail;
set -e;
# Arguments
path="$1";
target="$2";
name="$2";
# Variables
dir="$(dirname "$path")";
base="$(basename "$path")";
dest="$target/latest.tar.gpg";
dest={{ backups_files_directory | quote }}"/$name.tar.gpg";
# Execution
tar --directory="$dir" --create --dereference --file=- "$base" |
pv --quiet --buffer-size 256M |
gpg --quiet --no-verbose --compress-level 0 --encrypt --recipient {{ backup_gpg_fingerprint | quote }} --trust-model always > "$dest";
tar -C "$dir" -cf - "$base" | buffer -m 128M -s 128K | gpg --quiet --no-verbose --encrypt --recipient 73D09948B2392D688A45DC8393E1BD26F6B02FB7 --trust-model always > "$dest";
chmod u+r-wx,g+r-wx,o+r-wx "$dest";
{{ global_helper_directory | quote }}/backup_rename.sh "$dest";

@ -1,16 +0,0 @@
#!/usr/bin/env bash
set -euxo pipefail;
# Arguments
db="$1";
# Variables
file={{ backups_mysql_database_directory | quote }}"/$db/latest.sql.gpg";
# Execution
mysqldump --opt "$db" |
pv --quiet --buffer-size 256M |
gpg --quiet --no-verbose --encrypt --recipient {{ backup_gpg_fingerprint | quote }} --trust-model always > "$file";
chmod u+r-wx,g+r-wx,o+r-wx "$file";
{{ global_helper_directory | quote }}/backup_rename.sh "$file";

@ -1,3 +0,0 @@
[Journal]
Storage=persistent
SystemMaxUse={{ global_systemd_journal_max_storage }}

@ -1,23 +0,0 @@
#!/bin/bash
set -euxo pipefail;
if [[ -z "${1+x}" ]]; then
echo "Usage: $(basename "$0") HOST [PATH]" >&2
exit 2;
fi
key_path="${2:-$1}"; # default to the host name when no PATH is given
if [[ "$key_path" = /* ]]; then
target="$key_path";
else
target="$PWD/$key_path";
fi
tmpdir="$(mktemp --directory)";
cd "$tmpdir";
name="$(dnssec-keygen -a {{ global_dns_update_key_algorithm }} -n HOST -T KEY "$1")";
for suffix in "key" "private"; do
mv "$tmpdir/$name.$suffix" "$target.$suffix";
done
rm -rf "$tmpdir";

@ -1,17 +1,11 @@
# Main Repository
deb {{ debian_repository_mirror }} {{ ansible_distribution_release }} main non-free contrib
{% if debian_repository_use_sources %}
deb-src {{ debian_repository_mirror }} {{ ansible_distribution_release }} main non-free contrib
{% endif %}
# Security Repository
deb http://security.debian.org/debian-security {{ ansible_distribution_release }}/updates main non-free contrib
{% if debian_repository_use_sources %}
deb-src http://security.debian.org/debian-security {{ ansible_distribution_release }}/updates main non-free contrib
{% endif %}
# Updates Repository
deb {{ debian_repository_mirror }} {{ ansible_distribution_release }}-updates main non-free contrib
{% if debian_repository_use_sources %}
deb-src {{ debian_repository_mirror }} {{ ansible_distribution_release }}-updates main non-free contrib
{% endif %}
