quick Ansible for today

November 15, 2018

I made some changes to the way I update my remote server, which hosts this blog & several other services. This is part of my grand plan to use Ansible for deploying web applications in the future. The immediate goal I have in mind is to migrate from using Docker to launch a MediaWiki instance to using LXD and Ansible. Why? Pure curiosity and educational purposes.

alias-ing

Since I will be using an Ansible playbook for this, I aliased the ansible-playbook command in my alias file. Here is some quick background on how I manage my aliases: I have a file, ~/.aixnr, that holds all of them. I simply added an entry there to alias the ansible-playbook command:

alias playbook='ansible-playbook --ask-become-pass'

The reason for the --ask-become-pass option will become obvious shortly. My current preferred shell is fish (migrated from zsh a few months back; haven't looked back yet), so I had to tell fish to pick up the aliases in my ~/.aixnr. The user configuration files for fish live in ~/.config/fish/. I created a config.fish file inside that directory and told it to source the ~/.aixnr file.

cd ~/.config/fish
echo "source ~/.aixnr" > config.fish
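As a quick sanity check that the wiring works, here is the same setup replayed in a scratch directory (the paths below are stand-ins; the real files live at ~/.aixnr and ~/.config/fish/config.fish):

```shell
# stand-in for the real home directory
tmp=$(mktemp -d)
# the alias file, holding the playbook alias
printf "alias playbook='ansible-playbook --ask-become-pass'\n" > "$tmp/.aixnr"
# the fish config that sources it
mkdir -p "$tmp/.config/fish"
echo "source $tmp/.aixnr" > "$tmp/.config/fish/config.fish"
# config.fish should contain exactly one line sourcing the alias file
grep -c "source" "$tmp/.config/fish/config.fish"
```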

setting up Ansible

Before actually doing anything, Ansible needs to know about the target server. I created a folder in ~/Documents that holds (1) an Ansible configuration file that tells Ansible where to find the target hosts, (2) a hosts file, and (3) a playbook to update & upgrade packages. So, three files for now.

# navigation
cd ~/Documents && mkdir Ansible
cd Ansible

# create stuff
touch ansible.cfg
touch hosts
touch update.yml

First, let’s define the location of the hosts file in the ansible.cfg. In Ansible’s parlance, the hosts file is also known as the inventory. Without this file, Ansible falls back to the default inventory at /etc/ansible/hosts. We don’t want to go there. Whenever you execute Ansible-related commands in a directory containing an ansible.cfg, Ansible respects that file’s directives.

[defaults]
inventory = hosts

Next, let’s define the target server. I only have one for now, so here is what it looks like. Let’s pretend the server is located at 127.0.0.1 with a non-standard SSH port of 2323, and that the main user is horsey. I had to define ansible_user because by default Ansible assumes the same username on both the local and remote machines, and that is not the case for me.

[web]
127.0.0.1:2323 ansible_user=horsey
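The host:port shorthand can also be spelled out with explicit inventory variables, which reads more easily once more settings pile up. Here "blog" is just a label of my choosing; ansible_host, ansible_port, and ansible_user are standard inventory variables:

```
[web]
blog ansible_host=127.0.0.1 ansible_port=2323 ansible_user=horsey
```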

If we were targeting many servers with specific roles, the hosts file would look very different. Now, let’s take a look at the update.yml file.

---
- hosts: all
  become: true
  become_method: sudo

  tasks:
    - name: update packages
      apt: update_cache=yes upgrade=dist
    - name: clean unwanted stuff
      apt: autoremove=yes purge=yes
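The same playbook can also be written in the expanded dictionary style. It behaves identically; the key=value one-liners are just shorthand, and the longer form scales better as module options accumulate:

```yaml
---
- hosts: all
  become: true
  become_method: sudo

  tasks:
    - name: update packages
      apt:
        update_cache: yes
        upgrade: dist
    - name: clean unwanted stuff
      apt:
        autoremove: yes
        purge: yes
```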

Ubuntu’s apt requires sudo to run the update and upgrade process. I did not define any secret, nor did I hard-code my password as a variable in my configuration. To make the playbook work, I invoke ansible-playbook --ask-become-pass, which is what the playbook alias in my ~/.aixnr expands to.

When I call the playbook command, it prompts for the sudo password. See the trick now? Not the cleanest approach, but it works fine for me (for now).
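For the record, a full run looks like this, assuming the alias above is loaded and we start from the directory holding ansible.cfg:

```
$ cd ~/Documents/Ansible
$ playbook update.yml
```

The second line expands to ansible-playbook --ask-become-pass update.yml, so Ansible prompts for the sudo password before running the tasks.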

assumption

That the remote machine already has my SSH public key, since Ansible communicates over SSH.
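If the key is not on the server yet, ssh-copy-id can push it over, reusing the same non-standard port and user from the inventory above:

```
$ ssh-copy-id -p 2323 horsey@127.0.0.1
```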