LAB: Loadbalancing for Nextcloud


I built a new lab environment consisting of 6 to 10 virtual Ubuntu servers (Ubuntu 18.04 LTS x64) on the same virtual network, based on VirtualBox 5.2.

Virtual Lab Description:

Server1: 192.168.2.3/255.255.255.0 Frontend (Loadbalancer and Reverse Proxy)
Server2: 192.168.2.4/255.255.255.0 Backend  (Nextcloud)
Server3: 192.168.2.5/255.255.255.0 Backend  (Nextcloud)
Server4: 192.168.2.6/255.255.255.0 NFS
Server5: 192.168.2.7/255.255.255.0 MariaDB  (as a single db server)
Server6: 192.168.2.8/255.255.255.0 Redis    (Primary/Master)

Special 1 (follow challenge 3: MariaDB Galera Cluster and HAProxy):
Server7: 192.168.2.189/255.255.255.0 DBCluster01
Server8: 192.168.2.190/255.255.255.0 DBCluster02
Server9: 192.168.2.191/255.255.255.0 HAProxy

Special 2 (follow challenge 4: Redis Primary/Secondary (Master/Slave)):
Server10: 192.168.2.9/255.255.255.0 Redis-Server (Secondary/Slave)

To simplify this guide, I reduced the number of servers to three virtual servers.

Simplified Lab Description:

Server1: 192.168.2.3/255.255.255.0 Frontend Server1 (Loadbalancer, ReverseProxy, NFS, MariaDB, Redis)
Server2: 192.168.2.4/255.255.255.0 Backend  Server2 (Nextcloud)
Server3: 192.168.2.5/255.255.255.0 Backend  Server3 (Nextcloud)

All servers are updated and configured with Nextcloud, NGINX 1.15.5, PHP 7.2, MariaDB, Redis, SSH and UFW as described in my Nextcloud installation guide.


LAB: NGINX loadbalancing for Nextcloud

My goal is to have multiple Nextcloud NGINX webserver instances load balanced by one NGINX acting as the Nextcloud frontend server, with multiple NGINX webservers operating as Nextcloud backend servers, all running in a self-hosted environment.

The challenges are:

  1. Challenge 1: load balancing using sticky sessions (ssl enabled)
  2. Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)
  3. Challenge 3: global Nextcloud database
  4. Challenge 4: global Redis-Server

Challenge 1: load balancing for Nextcloud using sticky sessions (ssl)

We start by configuring NGINX to act as a load balancer (extranet) and reverse proxy for multiple NGINX instances in your local area network (intranet). On the first server (the frontend server called ‘Server1’) we will change the nginx.conf and the gateway.conf accordingly.

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

Paste all the following rows and substitute the highlighted values properly:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
http {
proxy_headers_hash_bucket_size 64;
server_names_hash_bucket_size 64;
upstream php-handler {
server unix:/run/php/php7.2-fpm.sock;
}
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.0/24;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
include /etc/nginx/mime.types;
include /etc/nginx/optimization.conf;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$host" sn="$server_name" '
'rt=$request_time '
'ua="$upstream_addr" us="$upstream_status" '
'ut="$upstream_response_time" ul="$upstream_response_length" '
'cs=$upstream_cache_status' ;
access_log /var/log/nginx/access.log main;
sendfile on;
send_timeout 3600;
tcp_nopush on;
tcp_nodelay on;
open_file_cache max=500 inactive=10m;
open_file_cache_errors on;
keepalive_timeout 65;
reset_timedout_connection on;
server_tokens off;
resolver 192.168.2.1;
resolver_timeout 10s;
include /etc/nginx/conf.d/*.conf;
}

Then move the nextcloud.conf to nextcloud.conf.bak (disable this vhost) and modify the gateway.conf:

mv /etc/nginx/conf.d/nextcloud.conf /etc/nginx/conf.d/nextcloud.conf.bak
mv /etc/nginx/conf.d/gateway.conf /etc/nginx/conf.d/gateway.conf.bak
vi /etc/nginx/conf.d/gateway.conf

Paste all the following rows and substitute the highlighted values properly:

upstream NEXTCLOUD-LB {
ip_hash;
server 192.168.2.4; # <- IP Server2
server 192.168.2.5; # <- IP Server3
}
server {
listen 80 default_server;
server_name your.dedyn.io 192.168.2.3; # <- your dyndns name and IP Server1
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2 default_server;
server_name your.dedyn.io 192.168.2.3; # <- your dyndns name and IP Server1
include /etc/nginx/ssl.conf;
include /etc/nginx/header.conf;
location ^~ / {
client_max_body_size 10240M;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_buffering on;
proxy_max_temp_file_size 10240M;
proxy_request_buffering on;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://NEXTCLOUD-LB;
proxy_redirect off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
} 
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
}
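The proxy_set_header lines above forward the original client address to the backends. NGINX's $proxy_add_x_forwarded_for variable appends the client address to any X-Forwarded-For header that is already present, or starts a new one. A minimal Python model of that behaviour (the function name is mine, for illustration only):

```python
def add_x_forwarded_for(existing_header, remote_addr):
    """Mimic NGINX's $proxy_add_x_forwarded_for: append the client
    address to an existing X-Forwarded-For header, or start one."""
    if existing_header:
        return existing_header + ", " + remote_addr
    return remote_addr

# Request arriving directly at the load balancer:
assert add_x_forwarded_for("", "203.0.113.7") == "203.0.113.7"
# Same request after already passing an earlier proxy:
assert add_x_forwarded_for("203.0.113.7", "192.168.2.3") == "203.0.113.7, 192.168.2.3"
```

This is why the backends can log the real client address via real_ip_header X-Forwarded-For even though every connection arrives from ‘Server1’.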

Verify your configuration by issuing

nginx -t

and restart your NGINX on Server1:

service nginx restart

Next switch to the first backend server called ‘Server2’. Move and modify your nginx.conf again and move/modify the nextcloud.conf as described:

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

The nginx.conf is identical on all three servers, so paste the same contents as shown for ‘Server1’ above.

Move and modify the nextcloud.conf

mv /etc/nginx/conf.d/nextcloud.conf /etc/nginx/conf.d/nextcloud.conf.bak 
vi /etc/nginx/conf.d/nextcloud.conf

Paste all the following rows and substitute the highlighted values properly:

server {
server_name 192.168.2.4; # <- local IP Server2
listen 192.168.2.4:80 default_server; # <- local IP:80 Server2
include /etc/nginx/proxy.conf;
root /var/www/nextcloud/;
access_log /var/log/nginx/nextcloud.access.log main;
error_log /var/log/nginx/nextcloud.error.log warn;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
client_max_body_size 10240M;
location / {
rewrite ^ /index.php$uri;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ \.(?:flv|mp4|mov|m4a)$ {
mp4;
mp4_buffer_size 100M;
mp4_max_buffer_size 1024M;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/(?:index|ipcheck|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:css|js|woff|svg|gif|png|html|ttf|ico|jpg|jpeg)$ {
try_files $uri /index.php$uri$is_args$args;
include /etc/nginx/proxy.conf;
access_log off;
expires 360d;
}
}

Verify and restart your webserver on ‘Server2’ by issuing

nginx -t
service nginx restart

and repeat the previous steps from ‘Server2’ for the next backend server called ‘Server3’. Switch to ‘Server3’ and move/modify your nginx.conf and nextcloud.conf as described:

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

The nginx.conf is identical on all three servers, so paste the same contents as shown for ‘Server1’ above.

Move and modify the nextcloud.conf

mv /etc/nginx/conf.d/nextcloud.conf /etc/nginx/conf.d/nextcloud.conf.bak 
vi /etc/nginx/conf.d/nextcloud.conf

Paste the same nextcloud.conf contents as shown for ‘Server2’ above, substituting Server3’s IP address in the two highlighted lines:

server_name 192.168.2.5; # <- local IP Server3
listen 192.168.2.5:80 default_server; # <- local IP:80 Server3

Verify and restart your webserver on ‘Server3’ by issuing

nginx -t
service nginx restart

and create the healthcheck file called “ipcheck.php” on all the backend servers (‘Server2’ and ‘Server3’):

sudo -u www-data vi /var/www/nextcloud/ipcheck.php

Paste the following rows:

<?php
header( 'Content-Type: text/plain' );
echo 'NGINX' . "\n";
echo 'Host: ' . $_SERVER['HTTP_HOST'] . "\n";
echo 'Remote Address: ' . $_SERVER['REMOTE_ADDR'] . "\n";
echo 'X-Forwarded-For: ' . $_SERVER['HTTP_X_FORWARDED_FOR'] . "\n";
echo 'X-Forwarded-Proto: ' . $_SERVER['HTTP_X_FORWARDED_PROTO'] . "\n";
echo 'Server Address: ' . $_SERVER['SERVER_ADDR'] . "\n";
echo 'Server Port: ' . $_SERVER['SERVER_PORT'] . "\n\n";
?>

Call https://your.dedyn.io/ipcheck.php in your browser

https://your.dedyn.io/ipcheck.php

or issue curl and you will be “sticky” load balanced to one of your backend servers (‘Server2’ or ‘Server3’):

curl https://your.dedyn.io/ipcheck.php


This stickiness is caused by the NGINX “ip_hash” statement in your gateway.conf. If you remove the “ip_hash;” statement from the gateway.conf on your frontend server and restart NGINX on ‘Server1’, you can observe round-robin load balancing by issuing

curl https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php
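The stickiness works because ip_hash keys the choice of backend on the first three octets of the client's IPv4 address, so the same client (in fact the whole /24) always reaches the same backend. A simplified Python sketch of the idea (NGINX uses its own hash function; MD5 here is only a stand-in):

```python
import hashlib

# Backends from the NEXTCLOUD-LB upstream block (Server2 and Server3)
BACKENDS = ["192.168.2.4", "192.168.2.5"]

def pick_backend(client_ip, backends=BACKENDS):
    """Sticky backend selection keyed on the first three octets of
    the client's IPv4 address, as NGINX's ip_hash does."""
    key = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(key.encode()).digest()
    return backends[digest[0] % len(backends)]

# Two clients from the same /24 always land on the same backend:
assert pick_backend("203.0.113.10") == pick_backend("203.0.113.99")
```

Removing ip_hash replaces this deterministic mapping with round-robin rotation over the upstream list.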

Modify your Nextcloud config.php by issuing

sudo -u www-data vi /var/www/nextcloud/config/config.php

and modify

...
'trusted_domains' =>
array (
0 => 'localhost',
1 => '192.168.2.3',
2 => 'your.dedyn.io',
),
...
'overwrite.cli.url' => 'https://your.dedyn.io',
...

Your Nextcloud server will be reachable again; just call your IP or dyndns name (your load balancer ‘Server1’).

But please be aware: every balanced Nextcloud instance (‘Server2’ and ‘Server3’) still uses its own Nextcloud configuration and data directory.


√ Challenge 1: load balancing for Nextcloud using sticky sessions (ssl)


Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)

To simplify our lab environment we assume Server1 acts as your load balancer as well as your NFS server. So install NFS4 on Server1 by issuing

sudo -s
apt install nfs-kernel-server

and create the shares by modifying the /etc/exports file on Server1:

vi /etc/exports

Paste the following rows

/var/www/nextcloud 192.168.2.4(rw,async,no_root_squash)
/var/www/nextcloud 192.168.2.5(rw,async,no_root_squash)
/var/nc_data 192.168.2.4(rw,async,no_root_squash)
/var/nc_data 192.168.2.5(rw,async,no_root_squash)

and export the new NFS shares:

exportfs -ra
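The export options matter: rw allows writes, async lets the server acknowledge writes before data is flushed to disk, and no_root_squash keeps remote root as root on the share. A small Python helper (hypothetical, just for illustration) that splits such a simple export line into its parts:

```python
def parse_export(line):
    """Split one simple /etc/exports line into path, client and
    option list (sketch; fits the entries above, not the full syntax)."""
    path, rest = line.split(None, 1)
    client, opts = rest.rstrip(")").split("(")
    return path, client, opts.split(",")

path, client, opts = parse_export("/var/nc_data 192.168.2.4(rw,async,no_root_squash)")
assert path == "/var/nc_data" and client == "192.168.2.4"
assert opts == ["rw", "async", "no_root_squash"]
```

Each backend gets its own line per share, so access can be restricted to exactly the client IPs that need it.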

Don’t forget to open your firewall for the whole subnet (or for specific client IPs):

ufw allow from 192.168.2.0/24 to any port 111
ufw allow from 192.168.2.0/24 to any port 2049
ufw allow from 192.168.2.0/24 to any port 13025

Now switch to your Server2 and Server3 to move both directories

sudo -s
mv  /var/www/nextcloud /var/www/nextcloud.old && mv /var/nc_data /var/nc_data.old

and create empty directories with proper permissions on Server2 and Server3:

mkdir -p /var/nc_data && mkdir -p /var/www/nextcloud && chown -R www-data:www-data /var/nc_data && chown -R www-data:www-data /var/www/nextcloud

Install the NFS client on Server2 and Server3 by issuing

apt install nfs-common

and stop NGINX on Server2 and Server3:

service nginx stop

Then mount the provided shares via /etc/fstab and restart nginx on Server2 and Server3:

vi /etc/fstab

Paste the following rows to /etc/fstab on Server2 and Server3

192.168.2.3:/var/www/nextcloud /var/www/nextcloud nfs rw 0 0
192.168.2.3:/var/nc_data /var/nc_data nfs rw 0 0

and restart NGINX on Server2 and Server3:

mount -a && service nginx restart

From now on, your Nextcloud binaries and data are shared via NFS. All files and data are delivered by Server1, which serves as the NFS server. For production, keep in mind to use a dedicated NFS server or an existing NAS (e.g. Synology).


√ Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)

In the (near) future I will explain how to configure GlusterFS … be patient.


Challenge 3: global database server for Nextcloud

First change MariaDB's binding from 127.0.0.1 to e.g. 192.168.2.3 on Server1:

sudo -s
vi /etc/mysql/my.cnf

Change the binding as follows:

# bind-address = 127.0.0.1
bind-address = 192.168.2.3

and restart MariaDB:

service mysql restart

Remove the nextcloud@localhost user and grant nextcloud@'192.168.2.%' remote database access. Connect to the database server

mysql -u root -p

and issue

SELECT User, Host FROM mysql.user;
DROP USER nextcloud@localhost;
GRANT ALL PRIVILEGES ON nextcloud.* TO nextcloud@'192.168.2.%' IDENTIFIED BY 'nextcloud';
FLUSH PRIVILEGES;
quit;
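The host pattern '192.168.2.%' in the GRANT statement uses MySQL's account-host wildcards: % matches any sequence of characters and _ a single character, so the nextcloud user may connect from any host in the lab subnet. A simplified Python model of the matching (translated to shell-style globs for illustration):

```python
import fnmatch

def host_matches(pattern, host):
    """Simplified model of MySQL account-host matching:
    '%' matches any run of characters, '_' any single character."""
    translated = pattern.replace("%", "*").replace("_", "?")
    return fnmatch.fnmatchcase(host, translated)

assert host_matches("192.168.2.%", "192.168.2.4")   # Server2 may connect
assert host_matches("192.168.2.%", "192.168.2.5")   # Server3 may connect
assert not host_matches("192.168.2.%", "10.0.0.4")  # other subnets may not
```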

Restart MariaDB again

service mysql restart

and modify your firewall by issuing

ufw allow from 192.168.2.0/24 to any port 3306/tcp
ufw allow from 192.168.2.0/24 to any port 4567/tcp
ufw allow from 192.168.2.0/24 to any port 4568/tcp
ufw allow from 192.168.2.0/24 to any port 4444/tcp
ufw allow from 192.168.2.0/24 to any port 4567/udp

Finally change Nextclouds config.php by using

sudo -u www-data vi /var/www/nextcloud/config/config.php
...
 'dbtype' => 'mysql',
 'version' => '13.0.0.14',
 'dbname' => 'nextcloud',
 'dbhost' => '192.168.2.3',
 'dbport' => '',
 'dbtableprefix' => 'oc_',
 'mysql.utf8mb4' => true,
 'dbuser' => 'nextcloud',
 'dbpassword' => 'nextcloud',
...

Nextcloud will now interact with your global remote database from all loadbalanced Nextcloud instances. Please keep in mind to set up a dedicated MariaDB-Server for security and performance reasons.


SPECIAL (MariaDB Galera Cluster with HAProxy):

Instead of one single MariaDB instance we will implement a simple MariaDB Galera cluster, consisting of two MariaDB servers with a separate HAProxy load balancer in front.

On each of your DB cluster nodes and the HAProxy server, modify the hosts file

sudo -s
vi /etc/hosts

accordingly and add

...
192.168.2.189 dbcluster01
192.168.2.190 dbcluster02
192.168.2.191 haproxy
...

Switch into sudo mode on “dbcluster01”:

sudo -s

and perform both the system update and the MariaDB installation:

apt update && apt upgrade -y
apt install mariadb-server mariadb-client rsync -y

Make sure to create and choose the same root password on each cluster node (e.g. “galera” in this guide).

Configure the firewall properly

ufw allow from 192.168.2.0/24 to any port 3306/tcp
ufw allow from 192.168.2.0/24 to any port 4567/tcp
ufw allow from 192.168.2.0/24 to any port 4568/tcp
ufw allow from 192.168.2.0/24 to any port 4444/tcp
ufw allow from 192.168.2.0/24 to any port 4567/udp

Stop MariaDB, back up the my.cnf, and comment out the bind-address:

service mysql stop
cp /etc/mysql/my.cnf /etc/mysql/my.cnf.single-bak
vi /etc/mysql/my.cnf

so that it reads

#bind-address = 127.0.0.1

Prepare the server to act as a Galera cluster node by editing the my.cnf:

vi /etc/mysql/my.cnf

Paste all the following rows:

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_node_address=192.168.2.189
wsrep_node_name=dbcluster01
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#YourSettingsHere
wsrep_cluster_name="galera_cluster"
wsrep_sst_method=rsync
bind-address=192.168.2.189

(Please verify and amend wsrep_node_address and bind-address to the node-specific cluster IP, and wsrep_node_name as needed.)

For “dbcluster01” the wsrep_cluster_address has to be kept empty (gcomm://).

Initialize the cluster and start MariaDB.

galera_new_cluster
service mysql start

The galera_new_cluster statement must not be issued on the other server, only on “dbcluster01”!

Switch over to the second database cluster node “dbcluster02” and perform the following steps:

Stop MariaDB, back up the my.cnf, and comment out the bind-address:

service mysql stop
cp /etc/mysql/my.cnf /etc/mysql/my.cnf.single-bak
vi /etc/mysql/my.cnf

so that it reads

#bind-address = 127.0.0.1

Prepare the server to act as a Galera cluster node by editing the my.cnf:

vi /etc/mysql/my.cnf

Paste all the following rows:

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.2.189,192.168.2.190
wsrep_node_address=192.168.2.190
wsrep_node_name=dbcluster02
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#YourSettingsHere
wsrep_cluster_name="galera_cluster"
wsrep_sst_method=rsync
bind-address=192.168.2.190

(Please verify and amend wsrep_node_address and bind-address to the node-specific cluster IP, and wsrep_node_name as needed.)

For “dbcluster02” the wsrep_cluster_address contains both cluster IPs.

Start the second MariaDB server by issuing

service mysql start

and test your Galera cluster on “dbcluster01”:

mysql -uroot -pgalera -e 'SELECT VARIABLE_VALUE as "cluster size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"'

All cluster information can be shown by

mysql -uroot -pgalera -e "SHOW STATUS LIKE 'wsrep_%'"

Your Galera cluster is already up and running!
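If you want to script this health check, you can parse the wsrep_cluster_size value out of the mysql client's output. A small Python sketch (the sample output below is abbreviated and hypothetical):

```python
def cluster_size(show_status_output):
    """Extract wsrep_cluster_size from the tab-separated
    name/value lines the mysql client prints."""
    for line in show_status_output.splitlines():
        if line.startswith("wsrep_cluster_size"):
            return int(line.split()[-1])
    return 0

sample = ("wsrep_cluster_conf_id\t2\n"
          "wsrep_cluster_size\t2\n"
          "wsrep_cluster_status\tPrimary")
assert cluster_size(sample) == 2  # both nodes joined the cluster
```

A value below 2 in this lab means one node has left the cluster and needs attention.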

Now switch over to your HAProxy server called “haproxy” and install the binaries

sudo -s
apt update && apt upgrade -y && apt install haproxy -y

Move the original configuration and create a new one:

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
vi /etc/haproxy/haproxy.cfg

Paste all the following rows:

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3

defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

frontend mariadb
bind 192.168.2.191:3306
mode tcp
default_backend mariadb_galera

backend mariadb_galera
balance leastconn
mode tcp
option tcpka
option mysql-check user haproxy
server dbcluster01 192.168.2.189:3306 check weight 1
server dbcluster02 192.168.2.190:3306 check weight 1
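With balance leastconn, HAProxy sends each new connection to the backend that currently has the fewest active connections, which suits long-lived database sessions better than round-robin. A minimal sketch of the selection rule (both servers have weight 1 above, so weights are ignored here):

```python
def leastconn(active_connections):
    """Pick the backend with the fewest active connections,
    mirroring HAProxy's 'balance leastconn' for equal weights."""
    return min(active_connections, key=active_connections.get)

# dbcluster01 is busier, so the next connection goes to dbcluster02:
assert leastconn({"dbcluster01": 3, "dbcluster02": 1}) == "dbcluster02"
```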

Verify that HAProxy is enabled in /etc/default/haproxy. If the statement

ENABLED=0

exists, set it to 1

ENABLED=1

or create a new entry. The file should look similar to mine:

# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"
ENABLED=1

Switch back to “dbcluster02” and create the ‘haproxy’ database user to permit HAProxy's database health checks.

mysql -uroot -pgalera -e "CREATE USER 'haproxy'@'192.168.2.191'"

With regard to Nextcloud, I grant access to root and the nextcloud database user as follows:

GRANT ALL PRIVILEGES ON *.* TO 'nextcloud'@'%' IDENTIFIED BY 'nextcloud' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'galera' WITH GRANT OPTION;
FLUSH PRIVILEGES;

These privileges were configured for the lab environment only; I make no claim of completeness or security! You have to amend the permissions to your needs.

With regard to Nextcloud and the lab environment in this guide, you have to restore your database to the cluster and point Nextcloud to the new HAProxy in the config.php:

...
'dbtype' => 'mysql',
'version' => '13.0.5.0',
'dbname' => 'nextcloud',
'dbhost' => '192.168.2.191',
'dbport' => '',
'dbtableprefix' => 'oc_',
'dbuser' => 'nextcloud',
'dbpassword' => 'nextcloud',
...

From now on, your database availability is increased a bit.


√ Challenge 3: global database server for Nextcloud (as a single or as a galera cluster database server)


Challenge 4: global Redis-Server for Nextcloud

First stop Redis-Server on ‘Server1’ (192.168.2.3) by issuing

service redis-server stop

then modify the current Redis-Server configuration

vi /etc/redis/redis.conf

Change all of the following parameters:

bind 127.0.0.1 ::1

to

bind 127.0.0.1 192.168.2.3

protected-mode yes

to

protected-mode no

port 0

to

port 6379

Then create a new rule to your UFW by issuing

ufw allow from 192.168.2.0/24 to any port 6379

to grant both backend servers (‘Server2’ and ‘Server3’) access to the Redis server remotely, and within your local network only. Restart the Redis server and flush the existing Redis data:

service redis-server restart
redis-cli -h 192.168.2.3 -p 6379

FLUSHALL
quit

Open your Nextcloud config.php

sudo -u www-data vi /var/www/nextcloud/config/config.php

and change the redis part from

'redis' => array ( 'host' => '/var/run/redis/redis-server.sock', 'port' => 0, 'timeout' => 0.0, ),

to

'redis' => array ( 'host' => '192.168.2.3', 'port' => 6379, 'timeout' => 1.5, ),

and finally rebuild the database information for Nextcloud:

sudo -u www-data php /var/www/nextcloud/occ files:scan --all -v
sudo -u www-data php /var/www/nextcloud/occ files:scan-app-data -v

If you are interested in a Redis Primary/Secondary (Master/Slave) configuration, just create a further server and install Redis on node 2.

To be more clear:

Primary Redis-Server (Master) - node 1: 192.168.2.3
Secondary Redis-Server (Slave) - node 2: 192.168.2.9

Amend the Redis configuration on the second Redis node (192.168.2.9):

vi /etc/redis/redis.conf
bind 127.0.0.1 192.168.2.9
...
slaveof 192.168.2.3 6379
...
port 6379
...
protected-mode no
...

Restart redis on the second Redis node (192.168.2.9)

service redis-server restart

and verify the Primary/Secondary (Master/Slave) configuration. On Redis node 1 (192.168.2.3) issue the following statements:

redis-cli

127.0.0.1:6379> set 'rieger' 10
OK
127.0.0.1:6379> exit

On Redis node 2 (192.168.2.9) issue

redis-cli

127.0.0.1:6379> get 'rieger'
"10"
127.0.0.1:6379> exit
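Conceptually, the demo above works because every write accepted by the primary is propagated to all attached replicas. A toy Python model of that behaviour (not the real Redis protocol, just the idea):

```python
class Node:
    """Toy model of Redis primary/secondary replication: writes to
    the primary are pushed to every attached replica."""
    def __init__(self):
        self.data, self.replicas = {}, []

    def slaveof(self, primary):
        primary.replicas.append(self)   # like 'slaveof 192.168.2.3 6379'

    def set(self, key, value):
        self.data[key] = value
        for replica in self.replicas:
            replica.data[key] = value   # propagate the write

    def get(self, key):
        return self.data.get(key)

primary, secondary = Node(), Node()
secondary.slaveof(primary)
primary.set("rieger", "10")             # SET on node 1 ...
assert secondary.get("rieger") == "10"  # ... is readable on node 2
```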

Your Primary/Secondary Redis configuration now works properly and as expected.

Redis-Server operates as your global RE(mote) DI(ctionary) S(erver) for all of your Nextcloud instances. Please keep in mind to set up a dedicated Redis-Server for security and performance reasons.


√ Challenge 4: global Redis-Server for Nextcloud


As a remaining manual cleanup job you have to remove the MariaDB and Redis server instances from all the backend servers.



Carsten Rieger


Carsten Rieger is a senior system engineer working full-time and also as an IT freelancer. He has been working with Linux environments for more than 13 years, is an open source enthusiast and highly motivated in Linux installation and troubleshooting. He works mostly with Debian/Ubuntu Linux, the Nginx and Apache web servers, MariaDB/MySQL/PostgreSQL, PHP, cloud infrastructure (e.g. Nextcloud) and other open source projects (e.g. Roundcube), and has done voluntary work for the Dr. Michael & Angela Jacobi Stiftung for more than 6 years.