Manager node

Note

Execute the following commands on the seed node, within the manager environment (cd environments/manager) of the configuration repository.

The manager node is used to manage all other nodes of the environment. The use of a dedicated system is recommended. In many environments, one of the controller nodes is used as the manager node.

You can place the virtual environment that will be created in a different folder by setting the environment variable VENV_PATH. This is required, for example, if your current folder path contains spaces.
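
For example, the virtual environment can be placed below /opt (the path is only an example) by exporting the variable before running any of the following commands:

export VENV_PATH=/opt/venvs/manager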

Various Ansible configurations can be adjusted via environment variables.

  • To have Ansible prompt for the sudo password:

    ANSIBLE_BECOME_ASK_PASS=True
    
  • If secrets.yml files are encrypted with Ansible Vault, let Ansible prompt for the vault password:

    ANSIBLE_ASK_VAULT_PASS=True
    
  • Alternatively, set the location of the vault password file (the password is stored in cleartext!):

    ANSIBLE_VAULT_PASSWORD_FILE=../../secrets/vaultpass
    

An overview of all parameters can be found at http://docs.ansible.com/ansible/devel/reference_appendices/config.html#environment-variables.

Initialization

Note

It is possible to manage more than one manager. In this case it may be useful to work with --limit.
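
A hypothetical example, assuming run.sh forwards extra arguments to ansible-playbook (the hostname manager01 is a placeholder):

./run.sh operator -l manager01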

Creation of the operator user

ANSIBLE_USER=ubuntu ./run.sh operator

Note

The operator user is created on each system. It is used as a service account for OSISM. All Docker containers run with this user. Ansible also uses this user to access the systems. Commands on the manager node need to be run as this user!

  • If a password is required to log in to the manager node, ANSIBLE_ASK_PASS=True must be set.

  • If an SSH key is required to log in to the manager node, the public key has to be added to ~/.ssh/authorized_keys of the user specified as ANSIBLE_USER on the manager node.

  • If the error /bin/sh: 1: /usr/bin/python: not found occurs, Python has to be installed on the manager node by executing:

    ANSIBLE_USER=ubuntu ./run.sh python
    
  • To verify the creation of the operator user, use the private key file id_rsa.operator:

    ssh -i id_rsa.operator dragon@manager01
    
  • A typical call to create the operator user looks like this:

    ANSIBLE_BECOME_ASK_PASS=True \
    ANSIBLE_ASK_VAULT_PASS=True \
    ANSIBLE_ASK_PASS=True \
    ANSIBLE_USER=ubuntu \
    ./run.sh operator
    

Warning

If the operator user was already created when the operating system was provisioned, ./run.sh operator must still be executed. ANSIBLE_USER should be set to a user that has sudo rights and is different from the operator user.

The UID and GID of the operator user need to be 45000. Execute the following commands as the root user on the manager node:

usermod -u 45000 dragon
groupmod -g 45000 dragon

chgrp dragon /home/dragon/
chown dragon /home/dragon/

find /home/dragon -group 1000 -exec chgrp -h dragon {} \;
find /home/dragon -user 1000 -exec chown -h dragon {} \;
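
To verify that the change was applied, check the IDs of the operator user (here dragon, as in the commands above):

id dragon
# should report uid=45000(dragon) and gid=45000(dragon)
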
  • If Ansible Vault is used, direct Ansible to prompt for the Vault password:

    export ANSIBLE_ASK_VAULT_PASS=True
    

    Alternatively, the location of the vault password file can be exported (the password is stored in cleartext!):

    export ANSIBLE_VAULT_PASSWORD_FILE=../../secrets/vaultpass
    
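
    The password file can be created, for example, like this (the password value is a placeholder; restrict the permissions because the password is stored in cleartext):

    echo 'my-vault-password' > ../../secrets/vaultpass
    chmod 600 ../../secrets/vaultpass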

Configuration of the network

./run.sh network

  • The network configuration already present on a system should be saved before this step.

  • Currently we are still using /etc/network/interfaces. Hence rename all files below /etc/netplan so that they end in .unused (see the sketch after this list).

    The default file 01-netcfg.yaml with the following content can remain as is.

    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
      version: 2
      renderer: networkd
    
  • Upon completion of the network configuration, a system reboot should be performed to ensure the configuration is functional and reboot safe. Since network services are not restarted automatically, later changes to the network configuration do not take effect without a manual restart of the network service or a reboot of the nodes.

  • A reboot is performed to activate and test the network configuration. It must be done before the bootstrap.

    ./run.sh reboot
    
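
A minimal sketch of the preparation steps above (saving the existing configuration and renaming the netplan files); the backup path is only an example:

# save the current network configuration
cp -a /etc/netplan /root/netplan.backup

# rename all netplan files except the default 01-netcfg.yaml
cd /etc/netplan
for f in *.yaml; do
  [ "$f" = "01-netcfg.yaml" ] && continue
  mv "$f" "$f.unused"
done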

Bootstrap of the manager node

./run.sh bootstrap

Reboot the manager node afterwards to ensure changes are boot safe:

./run.sh reboot

Deploy the configuration repository on the manager node:

./run.sh configuration

Deploy the manager services:

./run.sh manager
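
A simple way to check that the manager services came up is to list the running containers on the manager node as the operator user, assuming the operator user may run Docker commands (hostname and key file as in the examples above):

ssh -i id_rsa.operator dragon@manager01 docker ps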

Optional infrastructure services

The deployment of these infrastructure services is optional. Deploy them only if they will actually be used.

Cobbler

Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between lots of various commands and applications when rolling out new systems, and, in some cases, changing existing ones. It can help with installation, DNS, DHCP, package updates, power management, configuration management orchestration, and much more. 1

On the manager node execute the following command:

osism-infrastructure cobbler

Mirror

With the mirror services it is possible to store packages for Ubuntu and images for Docker in one central location.

osism-infrastructure mirror

After the bootstrap of the mirror services, they have to be synchronized. Depending on the available bandwidth, this process can take several hours.

osism-mirror images
osism-mirror packages

1 source: https://github.com/cobbler/cobbler/blob/master/README.md