ansible – How to run a master playbook configuring network on 2 hosts at the same time?

I have 2 hosts, and my master playbook, which configures networking among other steps, runs on both at the same time. That is a requirement.
The network playbook, when run on a host, prompts for the IP addresses of the network interfaces (3 interfaces per host, in my case). So when I run this network playbook on 2 hosts at the same time, it gets confused about which addresses belong to the 1st VM and which belong to the 2nd VM.
How to solve this?
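
One approach (a sketch, not from the thread; the file and variable names here are hypothetical) is to replace the interactive prompts with per-host variable files under host_vars/, so each host carries its own interface addresses and nothing needs to be typed at run time:

```yaml
# host_vars/vm1.yml (hypothetical file name; create one per host)
interface_ips:
  ens18: 192.0.2.11
  ens19: 192.0.2.21
  ens20: 192.0.2.31
```

A task can then reference `{{ interface_ips['ens18'] }}` and automatically pick up the values for whichever host it is running on, so both VMs can be configured in the same parallel run without any prompting.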

ssh – Ansible unable to reach multiple hosts

I’m new to ansible but am having basic issues reaching multiple hosts. I’m able to reach all hosts via ssh, and also with ansible if I target any specific host in my inventory. But when I target them all, it successfully reaches one of my hosts and fails all the others.

If run:

ansible all -i inventory.yml -u oytal -m ping

It returns:

192.168.1.90 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}


192.168.1.21 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: oytal@192.168.1.21: Permission denied (publickey).",
    "unreachable": true
}


192.168.1.20 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: oytal@192.168.1.20: Permission denied (publickey).",
    "unreachable": true
}


192.168.1.100 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: oytal@192.168.1.100: Permission denied (publickey).",
    "unreachable": true
}

It’s not consistent which host is unreachable. I shifted around the order of my hosts and even removed the successful one, and it will reach one of the others instead but still fail the rest.

My inventory:

---
all:
    hosts:
        192.168.1.90:
        192.168.1.21:
        192.168.1.20:
        192.168.1.100:
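
One thing worth ruling out (an assumption, not a confirmed fix from this thread) is that the parallel Ansible connections are not using the same key that plain ssh uses. Connection details can be pinned per host directly in the inventory; the key path and user below are examples:

```yaml
all:
  hosts:
    192.168.1.90:
    192.168.1.21:
      # hypothetical: point Ansible at the exact key that works for plain ssh
      ansible_user: oytal
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```

Running the same ad hoc ping again with `-vvv` then shows the full ssh command line per host, which makes it easy to compare against the manual ssh invocation that succeeds.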

ubuntu – Ansible “missing sudo password” even with passwordless sudo enabled

I have in my sudoers file

ALL            ALL = (ALL) NOPASSWD: ALL

Which allows anyone to use sudo without entering a password. And I confirmed that I can sudo without a password when I ssh into the machine.

Yet when I attempt to run a playbook on it, I get an error “missing sudo password”.

The command I’m using to run is

ansible-playbook -i inventory.yaml common_install.yaml --limit vpn.lan.example.com -vvv

I’ve run the same command limited to a different host that has the same sudo rule; both are running Ubuntu 20.04, and it works on that host. But it won’t work on this one.

Why won’t it work?
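
Not a confirmed fix for this case, but one detail worth knowing: when no become password is supplied, Ansible invokes sudo with the -n (non-interactive) flag, which fails immediately if the matching sudoers rule still requires a password for that particular user/host combination. Pinning the become settings explicitly for the host (a sketch; the file name follows host_vars convention) makes the exact sudo invocation visible under -vvv:

```yaml
# host_vars/vpn.lan.example.com.yml (a sketch, not a confirmed fix)
ansible_become: true
ansible_become_method: sudo
ansible_become_user: root
```

Comparing the `-vvv` output of the working and failing hosts side by side usually reveals whether the sudo command lines actually differ.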

Managing Your LowEndEmpire With Ansible, Part 2

In the previous tutorial, we walked through setting up Ansible on both the control (master) node and on target nodes. Now let’s look at putting Ansible’s capabilities to use.

One cool thing about Ansible is that hosts can have various roles, and you can layer these roles. So for example you could have roles like this:

  • a “common” role that contains tasks to be run on all hosts
  • a “backup-client” role for hosts that are backed up
  • a “db” role for hosts that operate as database servers

etc. In this tutorial, we’ll have a “common” role that we intend to run on all hosts. Then we’ll create another role called ‘db’ for database servers.

In /ansible, create the following playbook file called db.yml:

---
- hosts: db

  roles:
    - common

Note that we are using YAML, which is very fussy about syntax, particularly spacing, so if you get an error, make sure your file matches what is shown above.

Ansible comes with a wide variety of modules, which are packages that can be used to issue commands on target hosts. These cover a ton of common tasks. Some examples:

  • the ‘apt’ module can be used to install and remove packages using apt on Debian. There are also ‘yum’, ‘pacman’ and other package manager modules.
  • modules such as ‘user’, ‘group’, etc. can add/remove users and groups
  • locale_gen, timezone, and other modules can be used for system configuration
  • the ‘postgresql*’ set of modules and the ‘mysql*’ modules can be used to add/remove databases, add/remove users, etc.

Etcetera. So if you want to make sure that a certain package is available on your server, you don’t need to code dpkg, apt-get, etc. commands. You can just use the relevant Ansible module.

If something you need is not covered by a task-specific module, there are general-purpose modules for modifying files, making sure a line is present in a file, and so on, which allow for configurations the stock modules don’t address directly. And of course you can always copy a script from your control node and execute it.

See the docs for a full list of all modules and capabilities.

Create the following directory structure:

mkdir -p /ansible/roles/common/tasks

Now we’ll create the actual tasks we’re going to execute for the ‘common’ role. In /ansible/roles/common/tasks, create the file main.yml as follows. You do not need to include the comments – they are explanatory text for purposes of this tutorial.

The ‘name’ portion of each task is whatever you choose to call that task. The following line contains the module name followed by a colon, and then specific arguments and options for that module.

---

  # this task will run dpkg-reconfigure locales and ensure that
  # en_US.UTF-8 is present. Modify for your locale.

  - name: locale generation
    locale_gen: name=en_US.UTF-8 state=present

  # this task will run apt-get update

  - name: apt-get update
    apt: update_cache=yes

  # this task will run apt-get upgrade

  - name: apt-get upgrade
    apt: upgrade=dist

  # this task will install some packages we want on all hosts

  - name: basic packages
    apt: name=bsd-mailx,bzip2,cron,dnsutils,gpg,git,man,sqlite,unzip,vim,wget,whois,zip state=latest

  # this task will set the localtime to US/Pacific. Modify to taste.

  - name: set timezone
    timezone:
      name: US/Pacific

  # this task will make sure that cron is enabled in systemd

  - name: cron enable
    service: name=cron enabled=yes state=started

  # this task ensures that root's .bash_profile exists
  - name: make sure /root/.bash_profile exists
    file:
      path: /root/.bash_profile
      state: touch

  # this task ensures that 'set -o vi' is in root's .bash_profile

  - name: set -o vi for root
    lineinfile:
      path: /root/.bash_profile
      state: present
      regexp: '^set -o vi'
      line: set -o vi
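
As an aside, the key=value shorthand used in several tasks above also has an equivalent pure-YAML dictionary form, which many people find easier to read and which avoids quoting problems with long argument lists. For example, the “basic packages” task could just as well be written like this (same behavior, different syntax):

```yaml
  - name: basic packages
    apt:
      name:
        - bsd-mailx
        - bzip2
        - cron
        - dnsutils
      state: latest
```

Either style works; just be consistent within a file so the indentation stays easy to follow.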

Then run the ansible-playbook file against db.yml:

root@master:/ansible# ansible-playbook db.yml

PLAY [db] **********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [target.example.com]

TASK [common : locale generation] **********************************************
ok: [target.example.com]

TASK [common : apt-get update] *************************************************
changed: [target.example.com]

TASK [common : apt-get upgrade] ************************************************
ok: [target.example.com]

TASK [common : basic packages] *************************************************
ok: [target.example.com]

TASK [common : set timezone] ***************************************************
ok: [target.example.com]

TASK [common : cron enable] ****************************************************
ok: [target.example.com]

TASK [common : make sure /root/.bash_profile exists] ***************************
changed: [target.example.com]

TASK [common : set -o vi for root] *********************************************
changed: [target.example.com]

PLAY RECAP *********************************************************************
target.example.com : ok=9 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Now you may say to yourself “so you’ve set the locale and timezone and installed some packages – so what?” But the key points here are:

  • You didn’t have to do it manually.
  • Because it’s automated, it’s done identically every single time.
  • You can do this as easily on one host as on a thousand hosts.
  • Ansible allows you to define roles that baseline your environment, so every single host you use will be set up exactly as you wish.
  • You can run these commands every night to make sure no host drifts from the configuration you want. Think of these as “policies” that enforce how you want each server to be configured.

Modify your db.yml to add a db-server role:

---
- hosts: db

  roles:
    - common
    - db-server

Then:

mkdir -p /ansible/roles/db-server/tasks

And create a main.yml there with these tasks:

---

  # make sure that postgres is installed

  - name: postgres packages
    apt: name=postgresql-11,python-ipaddress state=latest

  # make sure postgresql is configured to start on boot

  - name: postgres enable
    service: name=postgresql enabled=yes state=started

  # enable md5 (password) connections locally

  - name: modify pg_hba.conf to allow md5 connections
    postgresql_pg_hba:
      dest: /etc/postgresql/11/main/pg_hba.conf
      contype: local
      users: all
      databases: all
      method: md5
      state: present

Now run

ansible-playbook db.yml

You’ll see Ansible walk through all the ‘common’ tasks, and then:

TASK [db-server : postgres packages] *******************************************
changed: [target.example.com]

TASK [db-server : postgres enable] *********************************************
ok: [target.example.com]

TASK [db-server : modify pg_hba.conf to allow md5 connections] *****************
changed: [target.example.com]

Other PostgreSQL modules allow us to create databases, set up users, grant permissions, etc.
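
For instance, the postgresql_db and postgresql_user modules cover the common cases. A sketch (the database name, user name, and password here are placeholders, not part of this tutorial’s setup):

```yaml
  # create an application database (names are hypothetical)
  - name: create application database
    postgresql_db:
      name: myapp
      state: present
    become: true
    become_user: postgres

  # create a user that owns it
  - name: create application user
    postgresql_user:
      db: myapp
      name: myapp_user
      password: changeme
    become: true
    become_user: postgres
```

Tasks like these would slot into the same db-server role alongside the ones above.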

Ansible allows the easy distribution of files, either for configuration or application purposes. It also comes with a powerful templating package called Jinja2 that can modify these templates so that they are customized properly for each system.

Let’s add a mail-server role. We’ll use postfix.

Modify db.yml again:

---
- hosts: db

  roles:
    - common
    - db-server
    - mail-server

Then execute:

mkdir -p /ansible/roles/mail-server/tasks

And edit /ansible/roles/mail-server/tasks/main.yml:

---

  # ensure postfix packages are installed

  - name: postfix package
    apt: name=postfix state=latest

  # template /etc/mailname

  - name: /etc/mailname
    template: src=/ansible/src/mailname.j2 dest=/etc/mailname owner=root group=0 mode=0644

  # copy /etc/postfix/main.cf

  - name: postfix main.cf
    template: src=/ansible/src/main.cf.j2 dest=/etc/postfix/main.cf owner=root group=0 mode=0644

  # make sure postfix is configured to start on boot and restart it
  # in case main.cf is changed

  - name: postfix enable and restart
    service: name=postfix enabled=yes state=restarted

  # set the root: alias in /etc/aliases

  - name: root alias in /etc/aliases
    lineinfile:
      path: /etc/aliases
      state: present
      regexp: '^root:'
      line: 'root: someone@somewhere.com'

  # run newaliases

  - name: newaliases
    command: /usr/bin/newaliases
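
One refinement worth knowing about, though not used in this tutorial: state=restarted restarts Postfix on every playbook run, even when nothing changed. Ansible handlers restart a service only when a task actually reports a change. A sketch of what that would look like:

```yaml
  # in the tasks file: notify a handler instead of forcing a restart
  - name: postfix main.cf
    template: src=/ansible/src/main.cf.j2 dest=/etc/postfix/main.cf owner=root group=0 mode=0644
    notify: restart postfix

  # in /ansible/roles/mail-server/handlers/main.yml
  - name: restart postfix
    service: name=postfix state=restarted
```

With this arrangement, repeated runs of the playbook leave a correctly configured Postfix alone instead of bouncing it every time.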

Now let’s create our templates. In /ansible/src/mailname.j2, enter this text:

{{ ansible_host }}

This is a Jinja2 template (hence the .j2 ending). Text in between the double braces will be replaced with variables. Ansible supports many different variables. In this case we are using ‘ansible_host’ which will be replaced with ‘target.example.com’.

Here is /ansible/src/main.cf.j2, our postfix configuration:

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
compatibility_level = 2
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = {{ ansible_nodename }}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname, localhost.example.com, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

Of course many Postfix variables could be set – this is a tutorial on Ansible not Postfix.

Now run:

ansible-playbook db.yml

And after the ‘common’ and ‘db-server’ sections, you’ll see the ‘mail-server’ tasks run:

TASK [mail-server : postfix package] *******************************************
changed: [target.example.com]

TASK [mail-server : /etc/mailname] *********************************************
changed: [target.example.com]

TASK [mail-server : postfix main.cf] *******************************************
changed: [target.example.com]

TASK [mail-server : postfix enable and restart] ********************************
changed: [target.example.com]

TASK [mail-server : root alias in /etc/aliases] ********************************
changed: [target.example.com]

TASK [mail-server : newaliases] ************************************************
changed: [target.example.com]

And if we look on the system, we see that the proper template substitutions have been made:

root@target:~# cat /etc/mailname
target.example.com

Although we’ve only scratched the surface of Ansible’s capabilities, hopefully you can see the tremendous power and flexibility of the product.  With Ansible playbooks, you can both setup new systems quickly and enforce policies on a continuous basis.

 

raindog308

I’m Andrew, techno polymath and long-time LowEndTalk community Moderator. My technical interests include all things Unix, perl, python, shell scripting, and relational database systems. I enjoy writing technical articles here on LowEndBox to help people get more out of their VPSes.

Merging variables in Ansible with roles

I’m configuring my environment via the following:

inventory.yml

all:
  children:
    production:
      hosts:
        1.2.3.4:
    staging:
      hosts:
        1.2.3.5:

In group_vars/all.yml I’m setting up a hash of users which will be added in a playbook. I’d like to be able to add users specifically to group_vars/staging.yml that would be merged with the same setting in my group_vars/all.yml.

Is there a proper way to merge the hash or declare an inheritance in this case?
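
By default Ansible replaces, rather than merges, a hash defined in more than one group_vars file. Two common approaches (sketches, with hypothetical variable names): set hash_behaviour = merge in ansible.cfg (a global setting that affects every variable, which many consider too blunt), or keep the hashes under different names and merge them explicitly where they are consumed, using the combine filter:

```yaml
# group_vars/all.yml
base_users:
  alice: {uid: 1001}

# group_vars/staging.yml
extra_users:
  bob: {uid: 1002}

# wherever the merged hash is used, e.g. in the playbook's vars
users: "{{ base_users | combine(extra_users | default({})) }}"
```

The `default({})` guards the case where a group defines no extra users at all.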

ansible – Update SLURM node state prior/after playbook execution

I would like to automatically set the state of a node in a SLURM cluster before/after running my Ansible playbook (from idle to drained before, and back to idle afterwards). The scontrol command required for this is only available on the head node of the cluster. The Ansible playbook, however, is applied to the compute nodes. Is there any way to run a remote command on a host other than the one currently connected to? I could just use the built-in shell module and then SSH to the head node, but maybe there’s a nicer way of doing it?

I already looked for ready-to-go Ansible modules but couldn’t find any for my use case. The existing ones are all focused on installing/configuring a SLURM cluster.

My idea is then to use a do-until loop that sets the new cluster node state and then repeatedly checks whether the node already switched to the new state (as there still could be running jobs).
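
Ansible’s delegate_to keyword is the built-in way to run a task on a host other than the one the play is iterating over, which fits this case: the scontrol/sinfo commands can be delegated to the head node while the play targets the compute nodes. A sketch combining that with the do-until idea (the head-node name and the exact state strings are assumptions; check your sinfo output):

```yaml
- name: drain node before changes
  command: "scontrol update nodename={{ inventory_hostname }} state=drain reason=maintenance"
  delegate_to: head-node.example.com

- name: wait until running jobs have finished and the node is fully drained
  command: "sinfo -h -n {{ inventory_hostname }} -o %t"
  register: node_state
  until: node_state.stdout | trim == 'drain'
  retries: 30
  delay: 60
  delegate_to: head-node.example.com
```

A mirror-image pair of tasks (state=resume) at the end of the playbook would return the node to service.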

network – How to use ansible playbook to add new rules to snort docker container?

I have a Snort docker container running on my network to detect attacks, and I would like to use Ansible to run a playbook that adds a new rule to Snort. I have installed the ansible_security.ids_rule role on the host (CentOS 7) and created a playbook designed to add a new rule to Snort. However, when I try to run the playbook I get the following error:

The error appears to have been in '/etc/ansible/roles/ansible_security.ids_rule/tasks/main.yml': line 26, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: include ids_provider tasks
  ^ here

How do I get the ansible playbook to run against the snort container?

linux – Ansible won’t connect via SSH

Hi, I’ve recently installed ansible on centos 7 to manage a remote server. I’ve tried connecting to the server using ansible over ssh, and tried pinging the server using the command “ansible -m ping all”. I get an error message saying:

 "Failed to connect to the host via ssh: ssh: connect to host 176.16.21.138 port 22: Connection timed out", 
    "unreachable": true

I’ve tried turning off the firewalls; nothing works. I’ve posted an image of the error message I get when I execute the ansible command with the -vvv option.

However, I can manually connect to the server using the ssh command. It’s just that ansible won’t work.
Image of error message with -vvv here

redhat – How do I enable access to the BaseOS repository from my control host to my managed hosts on ansible?

I’m training for my RHCE 8 EXAM, and I have to:

Create a playbook with the name setupreposerver.yaml to set up the control host as a repository host. Make sure this host meets the following requirements, which must be done by the playbook:

a. The RHEL 8 installation ISO is loop-mounted on the directory /var/ftp/repo.

b. The firewalld service is disabled.

c. The vsftpd service is started as well as enabled, and it allows anonymous user access to the /var/ftp/repo directory.

Then, create a Bash script that configures the managed servers as repository clients to the repository server that you set up in the previous tasks. This script must use ad hoc commands and perform the following tasks:

a. Disable any currently existing repository.

b. Enable access to the BaseOS repository on control.example.com

c. Enable access to the AppStream repository on control.example.com

I have tried many things, but I can’t enable access to BaseOS and AppStream from my managed nodes. After I mount /dev/sr0, I can’t execute the createrepo command.

https://i.stack.imgur.com/q1Thu.png <— client side repo setup playbook

https://i.stack.imgur.com/qkGW3.png <— client side repo setup playbook error

https://i.stack.imgur.com/ZE9oF.png <— server side repo setup playbook

https://i.stack.imgur.com/KprgT.png <— server side repo setup playbook part2

I need access to BaseOS and AppStream because later I will need to download packages into my managed node using my control node repo through ftp.

I have tried just putting the rpms files straight into /var/ftp/repo and then setting up the repo client with ftp://control.example.com/repo and it works, I can download the package.

But I want to access the rpm packages inside BaseOS and AppStream, and I can’t find a solution. I tried copying all the rpms to another directory, then unmounting, and then moving the backed-up rpms back into /var/ftp/repo, but that wouldn’t be the right way.

Can someone help me find a solution? Thanks.

My server side playbook executes with no errors by the way.
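
One relevant detail (an assumption based on the standard RHEL 8 ISO layout, not a verified walkthrough of this exact exam task): the installation ISO already contains BaseOS/ and AppStream/ directories, each with its own repodata, so createrepo should not be needed at all once the ISO is loop-mounted under /var/ftp/repo. Each client then needs one repo pointing at each subdirectory; as tasks, that would look like:

```yaml
# a sketch; paths assume the standard RHEL 8 ISO layout under /var/ftp/repo
- name: BaseOS repo on managed hosts
  yum_repository:
    name: BaseOS
    description: RHEL 8 BaseOS via control host
    baseurl: ftp://control.example.com/repo/BaseOS
    gpgcheck: no
    enabled: yes

- name: AppStream repo on managed hosts
  yum_repository:
    name: AppStream
    description: RHEL 8 AppStream via control host
    baseurl: ftp://control.example.com/repo/AppStream
    gpgcheck: no
    enabled: yes
```

Since the exam task asks for ad hoc commands, the same module can be driven with something like `ansible all -m yum_repository -a "name=BaseOS description=BaseOS baseurl=ftp://control.example.com/repo/BaseOS gpgcheck=no"` for each repository.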

Managing Your LowEndEmpire With Ansible, Part 1

Ansible is an automation platform that can be used for provisioning, configuration management, and deployments. It does not require deploying an agent and is suitable to manage any system that allows SSH connections and can run Python 2.7.

Ansible is very powerful for fleet management. Currently, many IT organizations espouse a “cattle not pets” or “stamps not snowflakes” model of managing their IT. If your servers are pets, you lovingly create each one by hand, tweaking and tuning it to its specific purpose. This is fine if you have 10 servers but does not scale well. If your servers are cattle, you can create them by the dozen, with the idea that you manage each system identically with common policies.

For managing your own personal LEBs, Ansible can accomplish several things:

  1. Taking the drudgery out of common setup tasks, such as installing packages, tweaking config files, installing creature comforts in dot-files, setting up software, etc.
  2. Enforcing policies so that systems don’t “drift” from your standard configs.
  3. Automating routine tasks such as updating systems, deploying new software, etc.

This tutorial will not attempt to cover everything that Ansible can do, but rather show you how you can quickly make managing your LowEndEmpire more pleasant.

In part one, we’re going to introduce some Ansible concepts and get Ansible set up on your control node (the master). In the next tutorial we’ll use Ansible to manage some hosts.

Control Node: the host that serves as a master, dispatching tasks to other hosts (or itself)

Host (or node): a remote machine that Ansible manages

Roles: you will assign roles to a host or group of hosts, such as “webserver” or “database server” to manage them. A host can have many roles.

Playbooks: these contain a list of plays (think sports), which are rules that say “run these tasks on these hosts”. A task in this context can be something like “update packages” or “copy this file”. Typically you’ll run a playbook, which says “for this group of hosts, run these sets of tasks”.
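
To make the terms concrete, here is about the smallest possible playbook (a minimal illustration; the group name matches the ‘db’ group used later in this series):

```yaml
---
# one play: run these tasks on every host in the "db" group
- hosts: db
  tasks:
    - name: make sure vim is installed
      apt: name=vim state=present
```

A play is the pairing of `hosts:` with its tasks; a playbook file can contain several plays, each targeting a different group.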

The Name Ansible Itself: Science fiction author Ursula K. Le Guin coined the word “ansible” to describe a fictional device for superluminal communication, where messages can be exchanged nearly instantly even over interstellar distances.

There are many more terms but these are enough to get us started.

In this tutorial, we have a control node called master, and a target called (wait for it…) target. Both are going to run Debian 10, but of course Ansible can run on many operating systems.

First, add the following line to /etc/apt/sources.list so you pick up the Ansible repo:

deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main

Then run these commands as root:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
apt update
apt install ansible

Since everything Ansible does is done over ssh, we’ll want to create an ssh key. You can either generate a passwordless ssh key or use ssh-agent.

In this example, I’m running my control node from a secure location – my home, safe behind a NAT’d firewall – so I’ll be using passwordless ssh.

If your control node is in the cloud, you may not want to use a passwordless ssh key because if your control node is compromised, then all of the nodes that ansible controls could be compromised as well. In that configuration, an ssh-agent setup where you need to enter a password to kick off Ansible may be preferable.

To generate a passwordless ssh key, issue the following command, pressing Enter when prompted for a passphrase to leave it empty:

ssh-keygen -t ed25519 -f ~/.ssh/ansible-key -C 'ssh key for ansible'

There’s no software to install on the target, but you do need to set up the ssh key. Some VPS providers allow you to run a script as part of provisioning. If so, putting your Ansible ssh key into root’s authorized_keys is the easy way to go. If your provider doesn’t offer this functionality, you’ll need to set up the SSH key manually.

Here is a script that can be used to ensure that the Ansible key is setup properly. You can either run it as a setup script in your provider’s provisioning, or you can copy it to your target node and run it there. You could even login to your target node, copy/paste this script to a file in /tmp, and then execute it. Be sure to change the SSH key (where it says YOUR SSH PUBLIC KEY) to match your control node’s public key.

#!/bin/bash
set -e # will exit on any error
[ ! -d /root/.ssh ] && mkdir /root/.ssh
[ ! -f /root/.ssh/authorized_keys ] && touch /root/.ssh/authorized_keys
echo "YOUR SSH PUBLIC KEY" >> /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
chown root:root /root/.ssh
chmod 700 /root/.ssh
echo "Ansible ssh key setup successfully"

We need to give Ansible a list of hosts to manage. We can also use this list to define groups.

In this case, ‘target’ is going to be a database host. So we’ll edit /etc/ansible/hosts to create a ‘db’ group and put ‘target’ into it:

[db]
target.example.com

Look at the examples (commented out) in /etc/ansible/hosts to get a feel for how you can group hosts and specify them.
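
As your fleet grows, hosts can belong to several groups at once. A sketch of what a slightly larger inventory might look like (the web and backup hosts here are hypothetical):

```ini
[db]
target.example.com

[web]
web1.example.com
web2.example.com

# hosts can appear in more than one group
[backup-clients]
target.example.com
web1.example.com
```

A playbook targeting `backup-clients` would then hit both the database and web machines.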

You’ll want to modify /etc/ansible/ansible.cfg (the main Ansible options file). If you use the default Ansible file locations, you’ll have less to modify, so here I will just call out some options to consider.

If you’ve changed the SSH port on your target hosts, you can use the remote_port setting to match. For example, if you use port 32222, change this line in the [defaults] section:

remote_port = 32222

You may wish to disable host key checking; otherwise you’ll need to either use ssh-keyscan to add new hosts to your control node’s known_hosts or manually answer ‘yes’ the first time a playbook is run:

host_key_checking = False

Finally, you need to tell Ansible which private key to use, so be sure to set this option:

private_key_file = /root/.ssh/ansible-key
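
Putting the three settings together, the relevant part of /etc/ansible/ansible.cfg would look like this (using the example port from above):

```ini
[defaults]
remote_port = 32222
host_key_checking = False
private_key_file = /root/.ssh/ansible-key
```

Everything else in the shipped ansible.cfg can stay at its default for this tutorial.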

You’ll want a work area on your control node for Ansible, where you can:

  • keep playbooks and plays
  • keep various source files you want to distribute to your target nodes (for example, a vimrc.local you always want on your hosts)

In this tutorial, we’ll keep it simple: the work directory will be /ansible, and any files we want to distribute will live in /ansible/src:

mkdir -p /ansible/src

Now that we’ve got Ansible setup, we’ll show you how to exploit its capabilities in the next tutorial.
