Simple VPN network mesh with tinc

From Wikipedia:

Tinc is an open-source, self-routing, mesh networking protocol, used for compressed, encrypted, virtual private networks.

Network graph:

[Image: tinc mesh network graph]

Hosts:

storage-01:

public ip:   123.123.123.100
vpn ip:      10.0.0.1
connects to: media01, router01

media-01:

public ip:   123.123.123.200
vpn ip:      10.0.0.2
connects to: storage01, router01

router-01:

public ip:   123.123.123.250
vpn ip:      10.0.0.3
connects to: storage01, media01

Note: tinc node names may only contain alphanumeric characters and underscores, so dashes (-) do not work in the hosts/ file names; that's why the nodes are named storage01, media01, and router01.

VPN name:

myvpn

tinc setup:

Identical directory tree on all servers after setup:

/etc/tinc/
└── myvpn
    ├── hosts
    │   ├── media01
    │   ├── router01
    │   └── storage01
    ├── rsa_key.priv
    ├── tinc.conf
    ├── tinc-down
    └── tinc-up

storage-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/

/etc/tinc/myvpn/hosts/storage01:
    Address = 123.123.123.100
    Subnet = 10.0.0.1/32
    
/etc/tinc/myvpn/tinc.conf:
    Name = storage01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = router01
    ConnectTo = media01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.1/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.1/32 dev $INTERFACE
    ip link set $INTERFACE down

media-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/

/etc/tinc/myvpn/hosts/media01:
    Address = 123.123.123.200
    Subnet = 10.0.0.2/32

/etc/tinc/myvpn/tinc.conf:
    Name = media01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = storage01
    ConnectTo = router01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.2/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.2/32 dev $INTERFACE
    ip link set $INTERFACE down

router-01 (CentOS 7):

# Install tinc
yum install tinc -y

# Create directories
mkdir -p /etc/tinc/myvpn/hosts/
    
/etc/tinc/myvpn/hosts/router01:
    Address = 123.123.123.250
    Subnet = 10.0.0.3/32
    
/etc/tinc/myvpn/tinc.conf:
    Name = router01
    Interface = tun8
    AddressFamily = ipv4
    ConnectTo = storage01
    ConnectTo = media01

/etc/tinc/myvpn/tinc-up:
    #!/bin/sh
    ip link set $INTERFACE up
    ip addr add 10.0.0.3/32 dev $INTERFACE
    ip route add 10.0.0.0/24 dev $INTERFACE

/etc/tinc/myvpn/tinc-down:
    #!/bin/sh
    ip route del 10.0.0.0/24 dev $INTERFACE
    ip addr del 10.0.0.3/32 dev $INTERFACE
    ip link set $INTERFACE down

On all servers:

# Create private/public keypair
tincd -n myvpn -K4096

/etc/firewalld/services/tinc.xml:
    <?xml version="1.0" encoding="utf-8"?>
    <service>
        <short>tinc</short>
        <description>tinc VPN daemon</description>
        <port protocol="udp" port="655"/>
        <port protocol="tcp" port="655"/>
    </service>

firewall-cmd --add-service=tinc --permanent
firewall-cmd --reload

All servers should have a copy of all host files with the public keys, so copy them.

[root@media-01 ~]# rsync /etc/tinc/myvpn/hosts/ router-01:/etc/tinc/myvpn/hosts/ -av
[root@media-01 ~]# rsync /etc/tinc/myvpn/hosts/ storage-01:/etc/tinc/myvpn/hosts/ -av
[root@router-01 ~]# rsync /etc/tinc/myvpn/hosts/ media-01:/etc/tinc/myvpn/hosts/ -av 
[root@router-01 ~]# rsync /etc/tinc/myvpn/hosts/ storage-01:/etc/tinc/myvpn/hosts/ -av
[root@storage-01 ~]# rsync /etc/tinc/myvpn/hosts/ media-01:/etc/tinc/myvpn/hosts/ -av
[root@storage-01 ~]# rsync /etc/tinc/myvpn/hosts/ router-01:/etc/tinc/myvpn/hosts/ -av

On all servers:

# Set executable bit on tinc-up and tinc-down
chmod +x /etc/tinc/myvpn/tinc-up
chmod +x /etc/tinc/myvpn/tinc-down
# Enable and start tinc:
systemctl enable tinc@myvpn
systemctl start tinc@myvpn

Now all three servers should be able to communicate on 10.0.0.0/24.
If communication between any two drops, it’ll route through the third one.
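To verify the mesh, ping the other nodes and ask the running tincd to dump its connection list to syslog (a quick sketch; on CentOS 7 the syslog output typically lands in /var/log/messages):

# From storage-01: reach the other two nodes over the VPN
ping -c 3 10.0.0.2
ping -c 3 10.0.0.3

# Send SIGUSR1 to the daemon; tinc 1.0 dumps its connection list to syslog
tincd -n myvpn -kUSR1
grep tinc /var/log/messages | tail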

Note on Debian 8:

Debian 8 doesn't ship a per-network systemd unit for tinc yet, so to get tinc up and running, /etc/tinc/nets.boot should contain the names of all networks to be started. You can then start tinc normally through init/systemd.

For example:

/etc/tinc/
├── myvpn
│   ├── hosts
│   │   ├── media01
│   │   ├── router01
│   │   └── storage01
│   ├── rsa_key.priv
│   ├── tinc.conf
│   ├── tinc-down
│   └── tinc-up
└── nets.boot

/etc/tinc/nets.boot:
    myvpn

# Enable and start tinc
systemctl enable tinc
systemctl start tinc

# Check the status
systemctl status tinc

iodine client/server on CentOS 7

From http://code.kryo.se/iodine/:

iodine lets you tunnel IPv4 data through a DNS server. This can be usable in different situations where internet access is firewalled, but DNS queries are allowed.

DNS Setup

Name     Type    Value            
iodine   NS      tunnel.domain.com
tunnel   A       123.123.123.123
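Before installing anything, it's worth confirming the delegation actually resolves, for example with dig (assuming the records above live under domain.com):

# The NS record should point the iodine subdomain at the tunnel host
dig +short NS iodine.domain.com

# ...and the tunnel host should resolve to the server's public IP
dig +short A tunnel.domain.com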

Server:

Install:

yum install iodine-server -y

Configure iodine-server.service in “/etc/sysconfig/iodine-server”:

OPTIONS="-f -P 'good password' 172.21.21.1/24 iodine.domain.com"

where 172.21.21.1/24 is the tunnel IP and netmask.

Start the server:

[root@iodine ~]# systemctl start iodine-server.service
[root@iodine ~]# systemctl status iodine-server.service
iodine-server.service - Iodine Server
 Loaded: loaded (/usr/lib/systemd/system/iodine-server.service; enabled)
 Active: active (running) since Sat 2015-06-20 02:24:28 GMT; 42s ago
 Main PID: 1960 (iodined)
 CGroup: /system.slice/iodine-server.service
 └─1960 /usr/sbin/iodined -f -P ... 172.21.21.1/24 iodine.domain.com

Jun 20 02:24:28 iodine.domain.com systemd[1]: Starting Iodine Server...
Jun 20 02:24:28 iodine.domain.com systemd[1]: Started Iodine Server.
Jun 20 02:24:28 iodine.domain.com iodined[1960]: Opened dns0
Jun 20 02:24:28 iodine.domain.com iodined[1960]: Setting IP of dns0 to 172.21.21.1
Jun 20 02:24:28 iodine.domain.com iodined[1960]: Setting MTU of dns0 to 1130
Jun 20 02:24:28 iodine.domain.com iodined[1960]: Opened IPv4 UDP socket
Jun 20 02:24:28 iodine.domain.com iodined[1960]: Listening to dns for domain iodine.domain.com
Jun 20 02:24:28 iodine.domain.com iodined[1960]: started, listening on port 53

Enable IPv4 forwarding in the kernel:

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/99-sysctl.conf
sysctl -p /etc/sysctl.d/99-sysctl.conf
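Forwarding alone only routes packets; for tunnel clients to reach the internet, the server also needs to NAT their traffic. A minimal iptables sketch, assuming eth0 is the internet-facing interface:

# Masquerade traffic from the iodine tunnel subnet out through eth0
iptables -t nat -A POSTROUTING -s 172.21.21.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -s 172.21.21.0/24 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT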

Client:

Install:

yum install iodine-client

Configure iodine-client.service in “/etc/sysconfig/iodine-client”:

OPTIONS="-f -r 123.123.123.123 iodine.domain.com -P 'good password'"

Start the client:

[root@iodine-client ~]# systemctl start iodine-client
[root@iodine-client ~]# systemctl status iodine-client
iodine-client.service - Iodine Client
 Loaded: loaded (/usr/lib/systemd/system/iodine-client.service; disabled)
 Active: active (running) since Sat 2015-06-20 02:27:46 GMT; 3s ago
 Main PID: 2020 (iodine)
 CGroup: /system.slice/iodine-client.service
 └─2020 /usr/sbin/iodine -f -r 123.123.123.123 iodine.domain.com -P

Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Using EDNS0 extension
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Switching upstream to codec Base128
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Server switched upstream to codec Base128
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: No alternative downstream codec available, using default (Raw)
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Switching to lazy mode for low-latency
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Server switched to lazy mode
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Autoprobing max downstream fragment size... (skip with -m fragsize)
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: 768 ok.. 1152 ok.. 1344 ok.. 1440 ok.. 1488 ok.. 1512 ok.. 1524 ok.. will use 1524-2=1522
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Setting downstream fragment size to max 1522...
Jun 20 02:27:46 iodine-client.domain.com iodine[2020]: Connection setup complete, transmitting data.

Test client -> server and server -> client ping:

[root@iodine-client ~]# ping 172.21.21.1
PING 172.21.21.1 (172.21.21.1) 56(84) bytes of data.
64 bytes from 172.21.21.1: icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from 172.21.21.1: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 172.21.21.1: icmp_seq=3 ttl=64 time=0.038 ms
64 bytes from 172.21.21.1: icmp_seq=4 ttl=64 time=0.057 ms
^C
--- 172.21.21.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.038/0.048/0.057/0.010 ms
[root@iodine-client ~]#
[root@iodine ~]# ping 172.21.21.2
PING 172.21.21.2 (172.21.21.2) 56(84) bytes of data.
64 bytes from 172.21.21.2: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 172.21.21.2: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 172.21.21.2: icmp_seq=3 ttl=64 time=0.064 ms
64 bytes from 172.21.21.2: icmp_seq=4 ttl=64 time=0.057 ms
^C
--- 172.21.21.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.057/0.061/0.064/0.008 ms
[root@iodine ~]#

Example of an ssh tunnel (SOCKS proxy) through the iodine server:

ssh 172.21.21.1 -p 443 -D 8080 -f -N
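Applications can then point at localhost:8080 as a SOCKS5 proxy. For example, with curl (any what-is-my-IP service will do):

# The request should appear to come from the iodine server's public IP
curl --socks5-hostname localhost:8080 https://ifconfig.co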

Or if you are using the iodine NetworkManager plugin:

dnf install NetworkManager-iodine-gnome -y

[Screenshots: NetworkManager iodine plugin configuration]

iodine GitHub: https://github.com/yarrick/iodine


Hardened OpenVPN on CentOS 7

This post covers installing and hardening OpenVPN, configuring firewalld to allow VPN traffic, and configuring logrotate to rotate the OpenVPN logs on CentOS 7.

Consider reading this first: https://community.openvpn.net/openvpn/wiki/Hardening

SELinux should be enforcing and key permissions should not allow anyone but root to read them.

Installing

First install the EPEL repo:

yum install epel-release -y

Update the system:

yum update -y

Install openvpn and easy-rsa:

yum install openvpn easy-rsa -y

Copy the easy-rsa scripts to /etc/openvpn/easy-rsa:

cp -R /usr/share/easy-rsa/2.0/ /etc/openvpn/easy-rsa/

Copy the OpenVPN sample server config to /etc/openvpn:

cp /usr/share/doc/openvpn-2.*/sample/sample-config-files/server.conf /etc/openvpn/

Edit the following variables in the /etc/openvpn/easy-rsa/vars file:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="mail@domain"
export KEY_EMAIL=mail@domain

Edit the KEY_SIZE variable to increase the key size to something above 3072 (4096 is probably not a bad idea unless you suffer performance problems):
https://community.openvpn.net/openvpn/wiki/Hardening#X.509keysize
https://www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/algorithms-key-sizes-and-parameters-report

export KEY_SIZE=4096

Create the server side keys and certificates:

cd /etc/openvpn/easy-rsa/
source vars
./clean-all
./build-ca
./build-key-server server

Build the Diffie-Hellman parameters:
Note: this will take a long time, in some cases more than an hour. Consider installing and starting haveged before doing this.

./build-dh

OpenVPN won’t start if the CRL file doesn’t exist or is invalid, so we create a dummy client certificate and revoke it:

./build-key dummy-client
./revoke-full dummy-client

When the OpenVPN daemon drops privileges to "nobody" after initialization, it won't be able to read the crl.pem file, because /etc/openvpn/easy-rsa/keys has 0700 permissions. We work around this by moving the file to /etc/openvpn/crl.pem and symlinking /etc/openvpn/easy-rsa/keys/crl.pem to it. This way we don't have to make /etc/openvpn/easy-rsa/keys world-readable or edit the revoke-full script. Nice.

mv /etc/openvpn/easy-rsa/keys/crl.pem /etc/openvpn/crl.pem
ln -s /etc/openvpn/crl.pem /etc/openvpn/easy-rsa/keys/crl.pem
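A quick sanity check that the unprivileged user can actually read the relocated CRL:

# Should print OK; if not, check the permissions on /etc/openvpn/crl.pem
sudo -u nobody cat /etc/openvpn/crl.pem > /dev/null && echo OK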

Generate a client certificate/key combo:

cd /etc/openvpn/easy-rsa/
source vars
./build-key client1

Generate a TLS pre-shared key:

cd /etc/openvpn/easy-rsa/keys
openvpn --genkey --secret ta.key

Edit the server configuration file /etc/openvpn/server.conf:

vim server.conf

Certificate Authority, Server Certificate, Server Key:

ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/server.crt
key easy-rsa/keys/server.key # This file should be kept secret

Diffie-Hellman parameters:

dh easy-rsa/keys/dh4096.pem

Push a default gateway route:

push "redirect-gateway def1 bypass-dhcp"

Push DNS options:

push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 208.67.220.220"

Use a TLS authentication secret:
https://community.openvpn.net/openvpn/wiki/Hardening#Useof--tls-auth

tls-auth easy-rsa/keys/ta.key 0 # This file is secret

Maximum number of concurrently connected clients (change this if you have more than 10 clients):

max-clients 10

Drop privileges after initialization:

user nobody
group nobody

Append log:

log-append /var/log/openvpn.log

Check the Extended Key Usage on the certificates:

Note: The --remote-cert-tls client option is equivalent to --remote-cert-eku "TLS Web Client Authentication"

remote-cert-tls client

Check for revoked certificates:

crl-verify crl.pem

Set a minimum TLS protocol version:

tls-version-min 1.2

Set a stronger cipher:

cipher AES-256-CBC

Use SHA-2 for message authentication:
Note: The source blog says “SHA-256”, but OpenVPN wouldn’t start unless I changed it to SHA256.

I changed this to SHA512 because why not. Use SHA256 if you suffer performance problems. See Algorithms, Key Sizes and Parameters Report – 2013 (3.3.1 Recommended Hash Functions, page 26).

auth SHA512

Limit the list of supported TLS ciphersuites:
https://community.openvpn.net/openvpn/wiki/Hardening#Useof--tls-cipher

tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256

The final server config
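Pulling the directives above together, a condensed sketch of /etc/openvpn/server.conf. The port, proto, dev, and server lines are the sample config's defaults (the VPN subnet matches the iptables rules further down), and persist-key/persist-tun keep restarts working once privileges are dropped; the sample file contains more than this:

port 1194
proto udp
dev tun
ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/server.crt
key easy-rsa/keys/server.key
dh easy-rsa/keys/dh4096.pem
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 208.67.220.220"
tls-auth easy-rsa/keys/ta.key 0
max-clients 10
user nobody
group nobody
persist-key
persist-tun
log-append /var/log/openvpn.log
remote-cert-tls client
crl-verify crl.pem
tls-version-min 1.2
cipher AES-256-CBC
auth SHA512
tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256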

And a corresponding client config (see server config explanations above for the same directives).

Replace the dots (“…”) in the inline tags with the corresponding certs/keys:

<ca> = certificate authority (contents of: /etc/openvpn/easy-rsa/keys/ca.crt)
<cert> = client certificate (contents of: /etc/openvpn/easy-rsa/keys/client1.crt)
<key> = client private key (contents of: /etc/openvpn/easy-rsa/keys/client1.key)
<tls-auth> = pre-shared TLS key (contents of: /etc/openvpn/easy-rsa/keys/ta.key)

In the client configuration, verify the server certificate subject string.
For example:

verify-x509-name 'C=XX, ST=NA, L=XX, O=XX, OU=XX, CN=XX, name=XX, emailAddress=XX' subject

Note: The string must match the certificate subject, but the text output from openssl puts forward slashes between the CN, name, and emailAddress fields. These must be separated by ", " as shown above, otherwise you will get an error stating that the subject doesn't match.

To generate the subject string:

openssl x509 -in easy-rsa/keys/server.crt -text|grep Subject:|sed 's|/name=|, name=|g;s|/emailAddress=|, emailAddress=|g;s|.*Subject: ||g'
Subject: C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, OU=MyOrganizationalUnit, CN=server, name=EasyRSA, emailAddress=me@myhost.mydomain

Enable and start the OpenVPN service:

systemctl enable openvpn@server
systemctl start openvpn@server

Note: the @server means systemd will start openvpn with the config file “server.conf”.
For multiple servers/clients use systemctl enable openvpn@server2, systemctl enable openvpn@client1, etc.

Firewall and forwarding

Enable IPv4 forwarding in the kernel:

/etc/sysctl.d/99-sysctl.conf:

net.ipv4.ip_forward=1

Load the new setting:

sysctl -p /etc/sysctl.d/99-sysctl.conf

iptables:
Note: Replace ens18 with your internet-facing interface

iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 1194 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o ens18 -j MASQUERADE

firewalld/firewall-cmd:

**todo**
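Until that section is written, something along these lines should be roughly equivalent (untested here; firewalld on CentOS 7 ships a predefined openvpn service, and --add-masquerade applies to the default zone):

firewall-cmd --permanent --add-service=openvpn
firewall-cmd --permanent --add-masquerade
firewall-cmd --reload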

Logrotate:

Put the following in /etc/logrotate.d/openvpn:

/var/log/openvpn.log {
    missingok
    notifempty
    copytruncate
    compress
    delaycompress
    daily
    rotate 7
    create 0600 root root
}

Useful commands:

View effective config without comments or other garbage:

egrep -iv "^(\#|;|$)" server.conf | sort

Sources and further reading:

https://blog.g3rt.nl/openvpn-security-tips.html
https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
https://community.openvpn.net/openvpn/wiki/Hardening
https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility
https://www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/algorithms-key-sizes-and-parameters-report
http://darizotas.blogspot.com/2014/04/openvpn-hardening-cheat-sheet.html


PostgreSQL 9.2 monitoring with Adagios on CentOS 7

On the PostgreSQL server:

Note: You may need to deal with SELinux.

Install some needed perl modules, download the check script and make it executable:

yum install perl-Data-Dumper perl-Digest-MD5 perl-Getopt-Long perl-File-Temp perl-Time-HiRes perl-TimeDate
cd /usr/lib64/nagios/plugins
wget https://raw.githubusercontent.com/bucardo/check_postgres/master/check_postgres.pl
chmod +x check_postgres.pl

Add the following to /usr/lib64/nagios/plugins/check_postgres_stats.sh:

#!/bin/bash
# Collect dbstats from check_postgres.pl and re-emit them as Nagios perfdata
DB="$1"
STATS=$(/usr/lib64/nagios/plugins/check_postgres.pl --datadir /var/lib/pgsql/data/ -db "$DB" --action dbstats | sed 's/:/=/g')
echo "OK: Postgres stats collected | $STATS"

Add the following to /etc/nrpe.d/check_postgres.cfg:

command[check_postgres]=/usr/bin/sudo -u postgres /usr/lib64/nagios/plugins/check_postgres.pl --datadir /var/lib/pgsql/data/ -db '$ARG1$' --action '$ARG2$'
command[check_postgres_w]=/usr/bin/sudo -u postgres "/usr/lib64/nagios/plugins/check_postgres.pl" --datadir /var/lib/pgsql/data/ -db '$ARG1$' --action '$ARG2$' --warning '$ARG3$'
command[check_postgres_wc]=/usr/bin/sudo -u postgres "/usr/lib64/nagios/plugins/check_postgres.pl" --datadir /var/lib/pgsql/data/ -db '$ARG1$' --action '$ARG2$' --warning '$ARG3$' --critical '$ARG4$'
command[check_postgres_stats]=/usr/bin/sudo -u postgres /usr/lib64/nagios/plugins/check_postgres_stats.sh '$ARG1$'

Add the following to /etc/sudoers.d/nrpe using visudo:

visudo -f /etc/sudoers.d/nrpe

Defaults:nrpe !requiretty
nrpe ALL=(postgres) NOPASSWD: /usr/lib64/nagios/plugins/check_postgres.pl
nrpe ALL=(postgres) NOPASSWD: /usr/lib64/nagios/plugins/check_postgres_stats.sh
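Restart nrpe so it picks up the new command definitions, and simulate what it will run (database_1 is a placeholder):

systemctl restart nrpe.service

# The same invocation nrpe will perform via the sudo rule above
sudo -u postgres /usr/lib64/nagios/plugins/check_postgres.pl --datadir /var/lib/pgsql/data/ -db database_1 --action connection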

On the Nagios server:

Create the check commands:

pynag add command command_name="2ks-check_nrpe_postgres" command_line='$USER1$/check_nrpe -H $HOSTADDRESS$ -c check_postgres -a "$_SERVICE_DATABASE$" "$_SERVICE_ACTION$"'
pynag add command command_name="2ks-check_nrpe_postgres_w" command_line='$USER1$/check_nrpe -H $HOSTADDRESS$ -c check_postgres_w -a "$_SERVICE_DATABASE$" "$_SERVICE_ACTION$" "$_SERVICE_WARNING$"'
pynag add command command_name="2ks-check_nrpe_postgres_wc" command_line='$USER1$/check_nrpe -H $HOSTADDRESS$ -c check_postgres_wc -a "$_SERVICE_DATABASE$" "$_SERVICE_ACTION$" "$_SERVICE_WARNING$" "$_SERVICE_CRITICAL$"'
pynag add command command_name="2ks-check_nrpe_postgres_stats" command_line='$USER1$/check_nrpe -H $HOSTADDRESS$ -c check_postgres_stats -a "$_SERVICE_DATABASE$"'

Create the okconfig template /etc/nagios/okconfig/examples/postgres.cfg-example:

define service {
    use                            okc-linux-check_proc
    host_name                      HOSTNAME
    service_description            Process postgres
    check_command                  okc-check_nrpe!check_procs -a $_SERVICE_WARNING$ $_SERVICE_CRITICAL$ $_SERVICE_NAME$
    __NAME                         postgres
    __WARNING                      1:
    __CRITICAL                     :20
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database connection
    check_command                  2ks-check_nrpe_postgres
    __DATABASE                     database_1
    __ACTION                       connection
    notes                          Simply connects and returns version number.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database statistics
    check_command                  2ks-check_nrpe_postgres_stats
    __DATABASE                     database_1
    notes                          Reports information from the pg_stat_database view, and outputs as performance data.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database bloat
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       bloat
    __WARNING                      25%
    __CRITICAL                     50%
    notes                          Checks the amount of bloat in tables and indexes. Bloat is generally the amount of dead unused space taken up in a table or index. This space is usually reclaimed by use of the VACUUM command.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database locks
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       locks
    __WARNING                      150
    __CRITICAL                     300
    notes                          Check the total number of locks on one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database timesync
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       timesync
    __WARNING                      2
    __CRITICAL                     5
    notes                          Compares the local system time with the time reported by one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database last vacuum
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       last_vacuum
    __WARNING                      3d
    __CRITICAL                     7d
    notes                          Checks how long it has been since vacuum (or analyze) was last run on each table in one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database backends
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       backends
    __WARNING                      80
    __CRITICAL                     95
    notes                          Checks the current number of connections for one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database hitratio
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       hitratio
    __WARNING                      90%
    __CRITICAL                     80%
    notes                          Checks the hit ratio of all databases and complains when they are too low.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database query time
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       query_time
    __WARNING                      5
    __CRITICAL                     10
    notes                          Checks the length of running queries on one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database connections idle in transaction
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       txn_idle
    __WARNING                      2 for 5 seconds
    __CRITICAL                     5 for 10 seconds
    notes                          Checks the number and duration of "idle in transaction" queries on one or more databases.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database disabled triggers
    check_command                  2ks-check_nrpe_postgres_w
    __DATABASE                     database_1
    __ACTION                       disabled_triggers
    __WARNING                      1
    notes                          Checks on the number of disabled triggers inside the database. In normal usage having disabled triggers is a dangerous event.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database last checkpoint
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       checkpoint
    __WARNING                      400
    __CRITICAL                     600
    notes                          Determines how long since the last checkpoint has been run.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database settings checksum
    check_command                  2ks-check_nrpe_postgres_w
    __DATABASE                     database_1
    __ACTION                       settings_checksum
    __WARNING                      c6358648f0d06757a8311709be307f24
    notes                          Checks that all the Postgres settings are the same as last time you checked.
}

define service {
    use                            generic-service
    host_name                      HOSTNAME
    service_description            PostgreSQL Database size
    check_command                  2ks-check_nrpe_postgres_wc
    __DATABASE                     database_1
    __ACTION                       database_size
    __WARNING                      15GB
    __CRITICAL                     30GB
    notes                          Checks the size of all databases and complains when they are too big.
}

Add the template to a host:

okconfig addtemplate db-01.domain.com --template postgres

The values provided in the above configuration are examples. You should change them according to your needs.

[Screenshot: PostgreSQL service status in Adagios]

Source: https://bucardo.org/check_postgres/check_postgres.pl.html


Varnish 4 monitoring with Adagios on CentOS 7

On the Varnish server:

Install prerequisites:

yum install git automake libtool varnish-libs-devel

Clone the varnish-nagios repo, autogen, configure, and make:

git clone https://github.com/varnish/varnish-nagios.git
cd varnish-nagios
./autogen.sh
./configure
make

Move the check_varnish binary to /usr/lib64/nagios/plugins/ and restore SELinux context:

mv check_varnish /usr/lib64/nagios/plugins/
restorecon /usr/lib64/nagios/plugins/check_varnish

Create the nrpe command and restart nrpe:

echo 'command[check_varnish]=/usr/lib64/nagios/plugins/check_varnish -p "$ARG1$" -w "$ARG2$" -c "$ARG3$"' > /etc/nrpe.d/check_varnish.cfg
systemctl restart nrpe.service

To see if the check works, run:

/usr/lib64/nagios/plugins/check_varnish -p MAIN.sess_dropped -w 0 -c 5
/usr/lib64/nagios/plugins/check_varnish -p MGT.child_panic -w 0 -c 2
/usr/lib64/nagios/plugins/check_varnish -p SMA.Transient.c_fail -c 0
/usr/lib64/nagios/plugins/check_varnish -p ratio -w 20:90 -c 10:98

It should return:

[root@varnish-host ~]# /usr/lib64/nagios/plugins/check_varnish -p MAIN.sess_dropped -w 0 -c 5
VARNISH OK: Sessions dropped for thread (0)|MAIN.sess_dropped=0
[root@varnish-host ~]# /usr/lib64/nagios/plugins/check_varnish -p MGT.child_panic -w 0 -c 2
VARNISH OK: Child process panic (0)|MGT.child_panic=0
[root@varnish-host ~]# /usr/lib64/nagios/plugins/check_varnish -p SMA.Transient.c_fail -c 0
VARNISH OK: Allocator failures (0)|SMA.Transient.c_fail=0
[root@varnish-host ~]# /usr/lib64/nagios/plugins/check_varnish -p ratio -w 20:90 -c 10:98
VARNISH OK: Cache hit ratio (26)|ratio=26
[root@varnish-host ~]#

On the Nagios server:

Create a check command:

pynag add command command_name="2ks-check_nrpe_varnish_status" command_line='$USER1$/check_nrpe -H $HOSTADDRESS$ -c check_varnish -a "$_SERVICE_PARAMETER$" "$_SERVICE_WARNING$" "$_SERVICE_CRITICAL$"'

NOTE: In my case pynag placed the cfg file in /etc/nagios/commands/, but it was not included as a cfg_dir in nagios.cfg. To fix that, run:

pynag config --append cfg_dir=/etc/nagios/commands/
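With the command in place, a quick end-to-end test from the Nagios server (www-01.domain.com as used below):

/usr/lib64/nagios/plugins/check_nrpe -H www-01.domain.com -c check_varnish -a ratio 20:90 10:98
# VARNISH OK: Cache hit ratio (26)|ratio=26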

Create an okconfig template:

echo 'define service {
    service_description            Varnish: Sessions dropped
    use                            generic-service
    host_name                      HOSTNAME
    __PARAMETER                   MAIN.sess_dropped
    check_command                 2ks-check_nrpe_varnish_status
    __CRITICAL                    5
    __WARNING                     0
    notes                         This counter will show the number of requests that have to be dropped because no more threads were available to handle them.
}
define service {
    service_description            Varnish: Child process panic
    use                            generic-service
    host_name                      HOSTNAME
    __PARAMETER                   MGT.child_panic
    check_command                 2ks-check_nrpe_varnish_status
    __CRITICAL                    2
    __WARNING                     0
    notes                         This counter will count the number of times the child has paniced. The master process will restart the child immediately when it happens, and the cache will be flushed.
}
define service {
    service_description            Varnish: Allocator failures
    use                            generic-service
    host_name                      HOSTNAME
    __PARAMETER                   SMA.Transient.c_fail
    check_command                 2ks-check_nrpe_varnish_status
    __CRITICAL                    0
    __WARNING                     0
    notes                         This counter indicates that the operating system is unable to allocate memory as requested.
}
define service {
    service_description            Varnish: Cache hit ratio
    use                            generic-service
    host_name                      HOSTNAME
    __PARAMETER                   ratio
    check_command                 2ks-check_nrpe_varnish_status
    __CRITICAL                    10:98
    __WARNING                     20:90
}
define service {
    use                            okc-linux-check_proc
    __WARNING                      1:
    __NAME                         varnishd
    host_name                      HOSTNAME
    service_description            Process varnishd
    __CRITICAL                     :10
    check_command                 okc-check_nrpe!check_procs -a $_SERVICE_WARNING$ $_SERVICE_CRITICAL$ $_SERVICE_NAME$
}' > /etc/nagios/okconfig/examples/varnish.cfg-example

Add the template to a host:

okconfig addtemplate www-01.domain.com --template varnish

Reload Nagios and run the service checks from the Adagios web interface; they should be green:

[Screenshot: Varnish service status in Adagios]

Sources:
https://www.varnish-software.com/blog/blog-sysadmin-monitoring-health-varnish-cache
