ansible-core 2.17 raised the minimum Python version for managed nodes to 3.7. This causes issues on systems that ship Python 3.6 (e.g., RHEL 8 based distros). Unfortunately, you can’t just upgrade Python either, as 3.6 is used by system tools such as DNF/YUM.
There are two options:
Upgrade to a RHEL 9 based distribution
Use Ansible 2.16
Ansible 2.16 should be the default installed version on RHEL 8 based distros.
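To confirm what you are working with on a RHEL 8 control node, you can check both versions. A quick sketch (the platform-python path is the RHEL 8 default; adjust if yours differs):

```shell
# Show the installed ansible-core version; the 2.16 series is the last
# one that still supports Python 3.6 on managed nodes
ansible --version | head -n 1

# RHEL 8's system Python (used by DNF/YUM) is 3.6
/usr/libexec/platform-python --version
```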
The following Bash commands were taken and modified from the Twitter link referenced in the playbook below.
Here is a one-liner that checks the version of the xz binaries and reports whether each is safe or vulnerable. You’ll need to run it in a Bash shell; it may have issues in plain sh.
```shell
for xz_p in $(type -a xz | awk '{print $NF}' ); do if ( strings "$xz_p" | grep "xz (XZ Utils)" | grep '5.6.0\|5.6.1' ); then echo $xz_p Vulnerable; else echo $xz_p Safe ; fi ; done
```
Ansible Playbooks
Here are two different Ansible Playbooks to check if the xz package(s) are backdoored.
This one uses the above Bash commands to check the xz binaries.
Ansible Playbook to Check xz Backdoor
```yaml
---
- name: Check if XZ tools are compromised
  # https://twitter.com/kostastsale/status/1773890846250926445
  hosts: all
  tasks:
    - name: Run Bash command
      shell: |
        for xz_p in $(type -a xz | awk '{print $NF}' ); do
          if ( strings "$xz_p" | grep "xz (XZ Utils)" | grep '5.6.0\|5.6.1' ); then
            echo $xz_p Vulnerable!
          else
            echo $xz_p Safe
          fi
        done
      args:
        executable: /bin/bash
      register: result

    - name: Show output
      ansible.builtin.debug:
        msg: "{{ result.stdout_lines }}"
```
The following playbook uses the package manager to check the xz version. On RHEL/Fedora this is the xz package. On Debian/Ubuntu, it is part of the liblzma5 package.
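As a standalone illustration of the version test the playbook below performs, here is a small shell helper (hypothetical, not part of the playbook) that flags the two backdoored releases:

```shell
# Hypothetical helper: report whether a given xz/liblzma version string
# (with any Debian revision suffix already stripped, as the playbook does
# with split('-')[0]) matches one of the two known-backdoored releases
xz_check() {
  case "$1" in
    5.6.0|5.6.1) echo "Vulnerable" ;;
    *)           echo "Safe" ;;
  esac
}

xz_check 5.6.1   # Vulnerable
xz_check 5.4.6   # Safe
```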
Ansible Playbook to Check xz Backdoor using package manager
```yaml
---
- name: Check if XZ tools are compromised
  hosts: all
  tasks:
    - name: Collect package info
      ansible.builtin.package_facts:
        manager: auto

    - name: Check if liblzma5 is vulnerable (Ubuntu/Debian)
      ansible.builtin.debug:
        msg: "Installed version of liblzma5/xz: {{ ansible_facts.packages['liblzma5'] | map(attribute='version') | join(', ') }} Vulnerable!"
      when: >
        ('liblzma5' in ansible_facts.packages) and
        (ansible_facts.packages['liblzma5'][0].version.split('-')[0] is version('5.6.0', '==') or
         ansible_facts.packages['liblzma5'][0].version.split('-')[0] is version('5.6.1', '=='))

    - name: Check if xz is vulnerable (RHEL/Fedora/Rocky/Alma)
      ansible.builtin.debug:
        msg: "Installed version of xz: {{ ansible_facts.packages['xz'] | map(attribute='version') | join(', ') }} is vulnerable"
      when: >
        ('xz' in ansible_facts.packages) and
        (ansible_facts.packages['xz'][0].version is version('5.6.0', '==') or
         ansible_facts.packages['xz'][0].version is version('5.6.1', '=='))
```
This playbook is for updating Mikrotik routers. It will update both the RouterOS version and the firmware.
The playbook executes in the following order.
Check for RouterOS Updates
Update RouterOS (Router will reboot if there is an update)
Sleep 120 seconds to allow the router(s) to boot up
Check current firmware version, and if there is an available upgrade
Update firmware
Reboot router to apply firmware upgrade
This playbook attempts to be smart and will not reboot a router if no update is available. Routers that do have updates available will reboot twice: once to apply the new RouterOS version, and a second time to apply the firmware.
Prerequisites
You should already have an inventory file and the Ansible RouterOS collection installed. If not, check out the following post.
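If the collection is not installed yet, it can be pulled from Ansible Galaxy; for example:

```shell
# Install the RouterOS modules used by this playbook
ansible-galaxy collection install community.routeros

# Verify it is now available
ansible-galaxy collection list | grep routeros
```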
Here is the playbook. A quick note on command syntax: RouterOS 7 and newer typically use slashes between commands, e.g. /system/package/update/install. Older versions of RouterOS use spaces in the command path, e.g. /system package update install. Since the space-separated form still works on newer versions, we use it here.
Mikrotik Update Playbook
```yaml
---
- name: Mikrotik RouterOS and Firmware Upgrades
  hosts: routers
  gather_facts: false

  # Update RouterOS version. The Mikrotik update/install command
  # automatically reboots the router.
  tasks:
    - name: Check for RouterOS updates
      community.routeros.command:
        commands:
          - /system package update check-for-updates
      register: system_update_print

    - name: Update RouterOS version
      community.routeros.command:
        commands:
          - /system package update install
      when: system_update_print is not search('System is already up to date')

    # Check if the firmware needs an upgrade, upgrade, and reboot.
    - name: Sleeping for 120 seconds. Giving time for routers to reboot.
      ansible.builtin.wait_for:
        timeout: 120
      delegate_to: localhost

    - name: Check current firmware
      community.routeros.command:
        commands:
          - ':put [/system routerboard get current-firmware]'
      register: firmware_current

    - name: Check upgrade firmware
      community.routeros.command:
        commands:
          - ':put [/system routerboard get upgrade-firmware]'
      register: firmware_upgrade

    - name: Upgrade firmware
      community.routeros.command:
        commands:
          - ':execute script="/system routerboard upgrade"'
      when: firmware_current != firmware_upgrade

    - name: Wait for firmware upgrade and then reboot
      community.routeros.command:
        commands:
          - /system routerboard print
      register: Reboot_Status
      until: Reboot_Status is search("please reboot")
      retries: 3
      delay: 15
      when: firmware_current != firmware_upgrade
      notify:
        - Reboot Mikrotik

  handlers:
    - name: Reboot Mikrotik
      community.routeros.command:
        commands:
          - ':execute script="/system reboot"'
```
This playbook can be used to report the Linux Distribution, OS Family, Distribution Version, and Distribution Major Version. This can be helpful for verifying all operating systems are up to date, or for working out what to use in other playbooks.
You will need to already have an inventory file.
Playbook yaml file
The playbook is very simple. Copy and paste the following contents into a file named “os_info.yaml”
```yaml
---
- hosts: all
  gather_facts: yes
  become: false
  tasks:
    - name: Distribution
      ansible.builtin.debug:
        msg: >-
          distribution {{ ansible_distribution }} -
          os_family {{ ansible_os_family }} -
          distribution_version {{ ansible_distribution_version }} -
          distribution_major_version {{ ansible_distribution_major_version }}
```
If we wanted to, we could break out each Ansible variable in its own debug line. I prefer having them all on a single line.
Running the Playbook
Run the playbook like any other playbook. Change inventory.ini to your inventory file. If your inventory file is encrypted, use the --ask-vault-pass option.
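For example, assuming the inventory and playbook file names used here (adjust to yours):

```shell
# Run against an encrypted inventory, prompting for the vault password
ansible-playbook -i inventory.ini os_info.yaml --ask-vault-pass
```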
This will do a full update and automatically reboot your servers if needed.
There is a special section for RHEL/CentOS 7 servers. If a server is running, say, CentOS 7, the playbook falls back to using YUM instead of DNF.
You need sudo, or become: yes, to install upgrades and reboot.
Linux OS Upgrade Playbook
```yaml
---
- name: Linux OS Upgrade
  hosts: all
  gather_facts: yes
  become: yes

  tasks:
    - name: Upgrade Debian and Ubuntu systems with apt
      block:
        - name: dist-upgrade
          ansible.builtin.apt:
            upgrade: dist
            update_cache: yes
          register: upgrade_result

        - name: Debian check if reboot is required
          shell: "[ -f /var/run/reboot-required ]"
          failed_when: false
          register: debian_reboot_required
          changed_when: debian_reboot_required.rc == 0
          notify:
            - Reboot server

        - name: Debian remove unneeded dependencies
          ansible.builtin.apt:
            autoremove: yes
          register: autoremove_result

        - name: Debian print errors if upgrade failed
          ansible.builtin.debug:
            msg: |
              Upgrade Result: {{ upgrade_result }}
              Autoremove Result: {{ autoremove_result }}
      when: ansible_os_family == "Debian"

    - name: Upgrade RHEL systems with DNF
      block:
        - name: Get packages that can be upgraded with DNF
          ansible.builtin.dnf:
            list: upgrades
            state: latest
            update_cache: yes
          register: reg_dnf_output_all

        - name: List packages that can be upgraded with DNF
          ansible.builtin.debug:
            msg: "{{ reg_dnf_output_all.results | map(attribute='name') | list }}"

        - name: Upgrade packages with DNF
          become: yes
          ansible.builtin.dnf:
            name: '*'
            state: latest
            update_cache: yes
            update_only: no
          register: reg_upgrade_ok

        - name: Print DNF errors if upgrade failed
          ansible.builtin.debug:
            msg: "Packages upgrade failed"
          when: reg_upgrade_ok is not defined

        - name: Install dnf-utils
          become: yes
          ansible.builtin.dnf:
            name: 'dnf-utils'
            state: latest
            update_cache: yes
          when: reg_dnf_output_all is defined
      when: ansible_os_family == "RedHat" and not (ansible_distribution_major_version == "7")

    - name: Upgrade legacy RHEL systems with YUM
      block:
        - name: Get packages that can be upgraded with YUM
          ansible.builtin.yum:
            list: upgrades
            state: latest
            update_cache: yes
          register: reg_yum_output_all

        - name: List packages that can be upgraded with YUM
          ansible.builtin.debug:
            msg: "{{ reg_yum_output_all.results | map(attribute='name') | list }}"

        - name: Upgrade packages with YUM
          become: yes
          ansible.builtin.yum:
            name: '*'
            state: latest
            update_cache: yes
            update_only: no
          register: reg_yum_upgrade_ok

        - name: Print YUM errors if upgrade failed
          ansible.builtin.debug:
            msg: "Packages upgrade failed"
          when: reg_yum_upgrade_ok is not defined

        - name: Check legacy RHEL system if a reboot is required
          become: yes
          command: needs-restarting -r
          register: reg_reboot_required
          ignore_errors: yes
          failed_when: false
          changed_when: reg_reboot_required.rc != 0
          notify:
            - Reboot server
      when: ansible_os_family == "RedHat" and ansible_distribution_major_version == "7"

  handlers:
    - name: Reboot server
      ansible.builtin.reboot:
        msg: "Reboot initiated by Ansible after OS update"
        reboot_timeout: 3600
        test_command: uptime
```
The first thing we need to do is create an inventory file. This will contain a list of our servers along with the credentials.
touch hosts.txt
Now let’s encrypt the file with Ansible Vault.
ansible-vault encrypt hosts.txt
The file is now encrypted. To edit the file, we need to use `ansible-vault edit`. If you want to, you can configure the hosts.txt file and then encrypt it when you are finished.
ansible-vault edit hosts.txt
Now add some hosts. In this example we add the local Kali machine, because why not. If you have Ubuntu servers, replace debian with ubuntu.
[debian]
kali ansible_host=127.0.0.1 ansible_ssh_user=kali ansible_ssh_port=22 ansible_ssh_password='kali pass' ansible_become_pass='kali sudo pass'
Add as many hosts as you need. For sake of simplicity, we are only adding one, and it is our localhost.
Create Playbook
Create a new playbook.
vi debian_update.yml
Put the following into the playbook. Edit as desired. Change hosts to match the above hosts in the inventory/hosts file.
```yaml
---
- name: OS update
  hosts: debian
  gather_facts: yes
  become: yes
  tasks:
    - name: dist-upgrade
      ansible.builtin.apt:
        upgrade: dist
        update_cache: yes
      register: upgrade_result

    - name: Check if a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
        get_checksum: no
      register: reboot_required_file

    - name: Reboot the server (if required).
      ansible.builtin.reboot:
      when: reboot_required_file.stat.exists
      register: reboot_result

    - name: Remove unneeded dependencies
      ansible.builtin.apt:
        autoremove: yes
      register: autoremove_result

    - name: Print errors if upgrade failed
      ansible.builtin.debug:
        msg: |
          Upgrade Result: {{ upgrade_result }}
          Reboot Result: {{ reboot_result }}
          Autoremove Result: {{ autoremove_result }}
```
A couple of notes
The 3rd line defines which group to run this playbook against, in this case debian.
The playbook will check if a reboot is needed and reboot the machine. Reboots are usually needed when the kernel is updated.
The 5th line contains `become: yes`, which means the playbook will use sudo. You can specify the sudo password in the hosts file with `ansible_become_pass=sudopass`, or prompt for it with the -K or --ask-become-pass option.
The update and reboot modules are natively built into Ansible, hence the ansible.builtin prefix.
Run Playbook
Now that we have our inventory and playbook, we can upgrade our machines.
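Assuming the hosts.txt inventory and debian_update.yml playbook from above, a run could look like this (drop the flags you do not need):

```shell
# --ask-vault-pass prompts for the inventory's vault password;
# -K prompts for the sudo (become) password if it is not in the inventory
ansible-playbook -i hosts.txt debian_update.yml --ask-vault-pass -K
```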