Chrooted SFTP backup destination

Environment: stock CentOS 7 Minimal install with SELinux and yum-cron.

Data directory: /mnt/backup. XFS filesystem on a logical volume.

Mount it with a systemd mount unit similar to:

[Unit]
Description = Backup logical volume

[Mount]
What = /dev/disk/by-uuid/coffee40-dead-beef-1234-ffffff1c2a34
Where = /mnt/backup
Type = xfs
Options = nodev,nosuid,noexec

[Install]
WantedBy = multi-user.target

Add the users:

useradd backup-user-1 -d /mnt/backup/backup-user-1/ -s /sbin/nologin
useradd backup-user-2 -d /mnt/backup/backup-user-2/ -s /sbin/nologin
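
Depending on your useradd defaults the home directories may or may not get created under /mnt/backup. To match the ownership and mode shown in the ACL dumps further down (owned by the backup user, group root), you can set them up explicitly; a sketch:

mkdir -p /mnt/backup/backup-user-1/ /mnt/backup/backup-user-2/
chown backup-user-1:root /mnt/backup/backup-user-1/
chown backup-user-2:root /mnt/backup/backup-user-2/
chmod 700 /mnt/backup/backup-user-1/ /mnt/backup/backup-user-2/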

Create the users' .ssh directory:

mkdir /mnt/backup/.ssh/
chown root:root /mnt/backup/.ssh/

Add the users' keys:

echo 'ssh-rsa AAAA[...]' > /mnt/backup/.ssh/authorized_keys-backup-user-1
chown backup-user-1:backup-user-1 /mnt/backup/.ssh/authorized_keys-backup-user-1
echo 'ssh-rsa AAAA[...]' > /mnt/backup/.ssh/authorized_keys-backup-user-2
chown backup-user-2:backup-user-2 /mnt/backup/.ssh/authorized_keys-backup-user-2

Apply the following ACLs recursively to /mnt/backup/.ssh/:

Note: You could also put all remote users in a group and apply a group ACL instead. Use setfacl --set-file=file to read these ACLs from a file.

# file: .ssh/
# owner: root
# group: root
user::rwx
user:backup-user-1:r-x
user:backup-user-2:r-x
group::r-x
mask::r-x
other::---
default:user::rwx
default:user:backup-user-1:r-x
default:user:backup-user-2:r-x
default:group::r-x
default:mask::r-x
default:other::---
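
A sketch of applying these, assuming the ACL text above has been saved to a file (the name ssh-dir.acl is arbitrary):

setfacl -R --set-file=ssh-dir.acl /mnt/backup/.ssh/
getfacl /mnt/backup/.ssh/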

Apply the following ACLs recursively to /mnt/backup/backup-user-1/:

# file: backup-user-1
# owner: backup-user-1
# group: root
user::rwx
group::---
other::---
default:user::rwx
default:user:backup-user-1:rwx
default:group::---
default:mask::rwx
default:other::---

Apply the following ACLs recursively to /mnt/backup/backup-user-2/:

# file: backup-user-2
# owner: backup-user-2
# group: root
user::rwx
group::---
other::---
default:user::rwx
default:user:backup-user-2:rwx
default:group::---
default:mask::rwx
default:other::---
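
The per-user ACLs can be applied the same way; a sketch, assuming the two listings above are saved as backup-user-1.acl and backup-user-2.acl (arbitrary names):

for u in backup-user-1 backup-user-2; do
    setfacl -R --set-file=${u}.acl /mnt/backup/${u}/
done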

SELinux contexts:

semanage fcontext -at home_root_t /mnt/backup
semanage fcontext -at user_home_dir_t /mnt/backup/backup-user-1
semanage fcontext -at user_home_dir_t /mnt/backup/backup-user-2
semanage fcontext -at ssh_home_t /mnt/backup/.ssh
semanage fcontext -at ssh_home_t /mnt/backup/.ssh/authorized_keys-backup-user-1
semanage fcontext -at ssh_home_t /mnt/backup/.ssh/authorized_keys-backup-user-2
restorecon -Rv /mnt/backup/
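
A quick check that the contexts came out as expected:

ls -lZd /mnt/backup/ /mnt/backup/.ssh/ /mnt/backup/backup-user-1/
ls -lZ /mnt/backup/.ssh/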

SSH config (/etc/ssh/sshd_config):

# override default of no subsystems
# Subsystem sftp /usr/libexec/openssh/sftp-server
Subsystem sftp internal-sftp
Match User backup-user-1,backup-user-2
 PasswordAuthentication no
 X11Forwarding no
 AllowTcpForwarding no
 PermitTTY no
 ForceCommand internal-sftp
 ChrootDirectory /mnt/backup/
 AuthorizedKeysFile /mnt/backup/.ssh/authorized_keys-%u
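
After editing sshd_config, check the syntax, restart sshd and verify from a client (the key path and server name below are just examples):

sshd -t && systemctl restart sshd
# From a client machine:
sftp -i ~/.ssh/backup-user-1_key backup-user-1@backup-server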

Keeping Naemon or Nagios running at all times with a systemd drop-in unit

Due to a silly bug in Naemon 1.0.4, I looked into ways to make sure it always restarts if it dies or is killed. Turns out it’s rather easy thanks to systemd.

Create a naemon.service.d directory in /etc/systemd/system/

cd /etc/systemd/system/
mkdir naemon.service.d
cd naemon.service.d

Create the file 10-restart.conf with the following contents:

[Service]
RestartSec=10s
Restart=always

Now reload systemd:

systemctl daemon-reload

Then confirm that the unit has been extended:

[root@manage naemon.service.d]# systemd-delta | grep naemon
[EXTENDED] /usr/lib/systemd/system/naemon.service -> /etc/systemd/system/naemon.service.d/10-restart.conf

Then try killing Naemon and watch it restart:

killall naemon
watch systemctl status naemon

Upgrading from Naemon 1.0.3 to 1.0.4

Important: First of all, back up /etc/naemon/ before updating.

rsync /etc/naemon/ /etc/naemon-bak/ -av
yum update -y

Note: If you've already upgraded and don't have a backup, you can copy the config from Nagios (if installed):

Skip this step if you have a backup of /etc/naemon/!

cp /etc/nagios/objects/templates.cfg /etc/naemon/conf.d/templates/
cp /etc/nagios/objects/contacts.cfg /etc/naemon/conf.d/
cp /etc/nagios/objects/timeperiods.cfg /etc/naemon/conf.d/
cp /etc/nagios/objects/commands.cfg /etc/naemon/conf.d/

Verify the naemon config:

naemon -vp /etc/naemon/naemon.cfg
Error in configuration file '/etc/naemon/naemon.cfg' - Line 344 (Warning: Failed to open check_result_path '/var/cache/naemon/checkresults': No such file or directory)
 Error processing main config file!

check_result_path is deprecated and you can safely remove it from the config.

sed -i '/check_result_path=/d' /etc/naemon/naemon.cfg

Verify the config again:

naemon -vp /etc/naemon/naemon.cfg
Reading configuration data...
Warning: enable_environment_macros is deprecated and will be removed.
Warning: use_large_installation_tweaks is deprecated and will be removed. Naemon should always be fast
Warning: daemon_dumps_core is deprecated and will be removed. Use system facilities to control coredump behaviour instead
Warning: max_check_result_file_age is deprecated and will be removed. Support for processing check results from disk will be removed
Warning: max_check_result_reaper_time is deprecated and will be removed. Support for processing check results from disk will be removed
Warning: check_result_reaper_frequency is deprecated and will be removed. Support for processing check results from disk will be removed
Warning: naemon_group is deprecated and will be removed. Naemon is compiled to be run as naemon:naemon
Warning: naemon_user is deprecated and will be removed. Naemon is compiled to be run as naemon:naemon
 Read main config file okay...
Error: Template 'generic-host' specified in host definition could not be found (config file '/usr/share/okconfig/templates/misc/hosts.cfg', starting on line 3)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/linux/services.cfg', starting on line 4)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/http/services.cfg', starting on line 41)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/nagios/services.cfg', starting on line 29)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/nagios/services.cfg', starting on line 20)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/nagios/services.cfg', starting on line 11)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/nagios/services.cfg', starting on line 2)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/misc/services.cfg', starting on line 53)
Error: Template 'generic-service' specified in service definition could not be found (config file '/usr/share/okconfig/templates/wmi/wmi.cfg', starting on line 6)
 Error processing object config files!

Rsync the templates directory from the backup and remove the deprecated options from the main config:

rsync /etc/naemon-bak/conf.d/templates/ /etc/naemon/conf.d/templates/ -av
sed -i '/enable_environment_macros/d' /etc/naemon/naemon.cfg
sed -i '/use_large_installation_tweaks/d' /etc/naemon/naemon.cfg
sed -i '/daemon_dumps_core/d' /etc/naemon/naemon.cfg
sed -i '/max_check_result_file_age/d' /etc/naemon/naemon.cfg
sed -i '/max_check_result_reaper_time/d' /etc/naemon/naemon.cfg
sed -i '/check_result_reaper_frequency/d' /etc/naemon/naemon.cfg
sed -i '/naemon_group/d' /etc/naemon/naemon.cfg
sed -i '/naemon_user/d' /etc/naemon/naemon.cfg

Verify the config again:

naemon -vp /etc/naemon/naemon.cfg
Reading configuration data...
 Read main config file okay...
Error: Could not find member group 'admins' specified in contactgroup 'default' (config file '/etc/naemon/okconfig//groups/default.cfg', starting on line 11)
 Error processing object config files!

Rsync the contacts config from the backup:

rsync /etc/naemon-bak/conf.d/contacts.cfg /etc/naemon/conf.d/ -av

Verify the config again:

naemon -vp /etc/naemon/naemon.cfg
Reading configuration data...
 Read main config file okay...
Error: Service notification period '24x7' specified for contact 'naemonadmin' is not defined anywhere!
Error: Could not register contact (config file '/etc/naemon/conf.d/contacts.cfg', starting on line 24)
 Error processing object config files!

Rsync the timeperiods config from the backup:

rsync /etc/naemon-bak/conf.d/timeperiods.cfg /etc/naemon/conf.d/ -av

Verify the config again:

naemon -vp /etc/naemon/naemon.cfg
Reading configuration data...
 Read main config file okay...
Error: Host check command 'check-host-alive' specified for host 'monitor-01' is not defined anywhere!
Error: Could not register host (config file '/etc/naemon/okconfig//hosts/default/monitor-01-host.cfg', starting on line 3)
 Error processing object config files!

Rsync the commands config from the backup:

rsync /etc/naemon-bak/conf.d/commands.cfg /etc/naemon/conf.d/ -av

Verify the config again:

naemon -vp /etc/naemon/naemon.cfg
Reading configuration data...
 Read main config file okay...
 Read object config files okay...

Finally, make sure Livestatus still gets loaded: either keep your existing broker_module=/usr/lib64/naemon/naemon-livestatus/livestatus.so[…] line directly in naemon.cfg, or add the module-conf.d include:

echo 'include_dir=module-conf.d' >> /etc/naemon/naemon.cfg

Next:

Due to a bug in Naemon 1.0.4 you need to configure automatic restarting for the service with a drop-in config for the systemd unit. See: Keeping Naemon or Nagios running at all times with a systemd drop-in unit.
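
Once the config verifies cleanly and the drop-in is in place, restart Naemon and check that it comes up:

naemon -v /etc/naemon/naemon.cfg && systemctl restart naemon
systemctl status naemon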

Mounting partitions with systemd

Example systemd mount unit:

[Unit]
Description = Backup logical volume
After = lvm2-monitor.service

[Mount]
What = /dev/disk/by-uuid/a37505f4-46e7-4496-8926-deadbeef4a79
#What = /dev/vg_backup/lv_backup
Where = /mnt/backup
Type = xfs
Options = nodev,nosuid,noexec

[Install]
WantedBy = multi-user.target

Note that the unit file name needs to match the Where = clause. This unit needs to be named mnt-backup.mount, and goes in /etc/systemd/system/.
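
If in doubt, systemd-escape can generate the unit name from the mount point; then reload systemd and enable the mount:

systemd-escape -p --suffix=mount /mnt/backup
# -> mnt-backup.mount
systemctl daemon-reload
systemctl enable mnt-backup.mount
systemctl start mnt-backup.mount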

Regex process check with Nagios/Adagios

Check Command:

define command {
 command_name check_nrpe_procs_regex
 command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_procs_regex -a $_SERVICE_WARNING$ $_SERVICE_CRITICAL$ $_SERVICE_USER$ $_SERVICE_EREG_ARG_ARRAY$
}

NRPE command:

command[check_procs_regex]=/usr/lib64/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -u $ARG3$ --ereg-argument-array "$ARG4$"

Nagios service:

define service {
 use okc-linux-check_proc
 host_name hostname.domain.com
 __NAME apache2
 __WARNING 1:100
 __CRITICAL 0:200
 service_description Process apache2
 check_command check_nrpe_procs_regex
 __EREG_ARG_ARRAY '/usr/sbin/apache2'
 __USER www-data
}
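
To test the chain by hand from the monitoring host (assuming $USER1$ expands to /usr/lib64/nagios/plugins), pass the arguments in the same order as the command definition: warning, critical, user, regex:

/usr/lib64/nagios/plugins/check_nrpe -H hostname.domain.com -c check_procs_regex \
    -a 1:100 0:200 www-data '/usr/sbin/apache2'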

TUN/TAP device in lxc containers

To create TUN/TAP devices inside LXC containers on Red Hat or Debian based distros, create the following systemd unit:

/etc/systemd/system/tundev.service:
    [Unit]
    Description=Add tun device workaround
    Wants=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/mkdir -p /dev/net
    ExecStart=/usr/bin/mknod -m 666 /dev/net/tun c 10 200

    [Install]
    WantedBy=multi-user.target

To create the tun/tap device before certain units start (e.g. OpenVPN), you can add

Before=openvpn@.service

under [Unit].
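
Then reload systemd and enable the unit inside the container:

systemctl daemon-reload
systemctl enable tundev.service
systemctl start tundev.service
ls -l /dev/net/tun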

To allow the container to create the device, the following line must be in the lxc config file (/var/lib/lxc/100/config):

lxc.cgroup.devices.allow = c 10:200 rwm

For Proxmox, add the following line to the container config (ex. /etc/pve/lxc/100.conf):

lxc.cgroup.devices.allow: c 10:200 rwm

Simple VPN network mesh with tinc

From Wikipedia:

Tinc is an open-source, self-routing, mesh networking protocol, used for compressed, encrypted, virtual private networks.

Network graph:

(Figure: tinc mesh, a full mesh between storage01, media01 and router01.)

Hosts:

storage-01:

public ip:   123.123.123.100
vpn ip:      10.0.0.1
connects to: media01, router01

media-01:

public ip:   123.123.123.200
vpn ip:      10.0.0.2
connects to: storage01, router01

router-01:

public ip:   123.123.123.210
vpn ip:      10.0.0.3
connects to: storage01, media01

Note: Dashes (-) do not work in tinc node names and host file names, which is why the files below are named storage01, media01 and router01.

VPN name:

myvpn

tinc setup:

Identical directory tree on all servers after setup:

/etc/tinc/
└── myvpn
     ├── hosts
     │   ├── media01
     │   ├── router01
     │   └── storage01
     ├── rsa_key.priv
     ├── tinc.conf
     ├── tinc-down
     └── tinc-up

storage-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/

/etc/tinc/myvpn/hosts/storage01:
    Address = 123.123.123.100
    Subnet = 10.0.0.1/32
    
/etc/tinc/myvpn/tinc.conf:
    Name = storage01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = router01
    ConnectTo = media01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.1/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.1/32 dev $INTERFACE
    ip link set $INTERFACE down

media-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/

/etc/tinc/myvpn/hosts/media01:
    Address = 123.123.123.200
    Subnet = 10.0.0.2/32

/etc/tinc/myvpn/tinc.conf:
    Name = media01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = storage01
    ConnectTo = router01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.2/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.2/32 dev $INTERFACE
    ip link set $INTERFACE down

router-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/
    
/etc/tinc/myvpn/hosts/router01:
    Address = 123.123.123.210
    Subnet = 10.0.0.3/32
    
/etc/tinc/myvpn/tinc.conf:
    Name = router01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = storage01
    ConnectTo = media01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.3/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.3/32 dev $INTERFACE
    ip link set $INTERFACE down

On all servers:

# Create the private/public keypair (the public key is appended to this node's file under hosts/)
tincd -n myvpn -K4096

/etc/firewalld/services/tinc.xml:
    <?xml version="1.0" encoding="utf-8"?>
    <service>
        <short>tinc</short>
        <description>tinc VPN daemon</description>
        <port protocol="udp" port="655"/>
        <port protocol="tcp" port="655"/>
    </service>

firewall-cmd --add-service=tinc --permanent
firewall-cmd --reload

All servers should have a copy of all host files with the public keys, so copy them.

[root@media-01 ~]# rsync /etc/tinc/myvpn/hosts/ router-01:/etc/tinc/myvpn/hosts/ -av
[root@media-01 ~]# rsync /etc/tinc/myvpn/hosts/ storage-01:/etc/tinc/myvpn/hosts/ -av
[root@router-01 ~]# rsync /etc/tinc/myvpn/hosts/ media-01:/etc/tinc/myvpn/hosts/ -av 
[root@router-01 ~]# rsync /etc/tinc/myvpn/hosts/ storage-01:/etc/tinc/myvpn/hosts/ -av
[root@storage-01 ~]# rsync /etc/tinc/myvpn/hosts/ media-01:/etc/tinc/myvpn/hosts/ -av
[root@storage-01 ~]# rsync /etc/tinc/myvpn/hosts/ router-01:/etc/tinc/myvpn/hosts/ -av

On all servers:

# Set executable bit on tinc-up and tinc-down
chmod +x /etc/tinc/myvpn/tinc-up
chmod +x /etc/tinc/myvpn/tinc-down
# Enable and start tinc:
systemctl enable tinc@myvpn
systemctl start tinc@myvpn

Now all three servers should be able to communicate on 10.0.0.0/24.
If communication between any two drops, it’ll route through the third one.
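
A quick sanity check, from storage-01 for example:

ip addr show tun8
ping -c 3 10.0.0.2   # media01
ping -c 3 10.0.0.3   # router01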

Note on Debian 8:

Debian 8 doesn't ship a systemd unit for tinc yet, so to get tinc up and running, /etc/tinc/nets.boot should contain the names of all networks to be started. You can then start it normally via init/systemd.

For example:

/etc/tinc/
 ├── myvpn
 │   ├── hosts
 │   │   ├── media01
 │   │   ├── router01
 │   │   └── storage01
 │   ├── rsa_key.priv
 │   ├── tinc.conf
 │   ├── tinc-down
 │   └── tinc-up
 └── nets.boot
/etc/tinc/nets.boot:
    myvpn

# Enable and start tinc
systemctl enable tinc
systemctl start tinc

# Check the status
systemctl status tinc