LAB: NGINX loadbalancing for Nextcloud


We built a new lab environment consisting of three virtual servers (Ubuntu 16.04.3 LTS) on the same virtual network.

Server1: 192.168.56.3/255.255.255.0
Server2: 192.168.56.4/255.255.255.0 (vm clone of Server1)
Server3: 192.168.56.5/255.255.255.0 (vm clone of Server1)

All servers are up to date and configured with NGINX 1.13.9, PHP 7.2, MariaDB, Redis, SSH and ufw as described in my guide: Nextcloud 13, Roundcube, WordPress, Shellinabox and Pi-hole behind a NGINX reverse proxy.
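Since Server2 and Server3 are clones of Server1, a quick version check on each node confirms that all three run the same stack (standard version switches assumed):

nginx -v
php -v
mysql --version
redis-server --version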



Our goal is to have multiple Nextcloud NGINX web server instances load balanced by a single NGINX web server, all running in a self-hosted environment.

The challenges are:

  1. Challenge 1: load balancing using sticky sessions (ssl enabled)
  2. Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)
  3. Challenge 3: global Nextcloud database
  4. Challenge 4: global Redis-Server (scheduled for March, 2018)

Challenge 1: load balancing using sticky sessions (ssl enabled)

We start by configuring NGINX to act as a load balancer (extranet) and reverse proxy for multiple NGINX instances in your local area network (intranet). On the first server (Server1) we change nginx.conf and gateway.conf accordingly.

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    proxy_headers_hash_bucket_size 64;
    server_names_hash_bucket_size 64;
    upstream php-handler {
        server unix:/run/php/php7.2-fpm.sock;
    }
    set_real_ip_from 127.0.0.1;
    set_real_ip_from 192.168.56.0/24;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    include /etc/nginx/mime.types;
    include /etc/nginx/optimization.conf;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" '
        '"$host" sn="$server_name" '
        'rt=$request_time '
        'ua="$upstream_addr" us="$upstream_status" '
        'ut="$upstream_response_time" ul="$upstream_response_length" '
        'cs=$upstream_cache_status';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    send_timeout 3600;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=500 inactive=10m;
    open_file_cache_errors on;
    keepalive_timeout 65;
    reset_timedout_connection on;
    server_tokens off;
    resolver 192.168.56.1;
    resolver_timeout 10s;
    include /etc/nginx/conf.d/*.conf;
}
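This nginx.conf references several files from the base guide (/etc/nginx/mime.types, /etc/nginx/optimization.conf and the snippets in /etc/nginx/conf.d/); assuming those paths, a quick sanity check before reloading could be:

ls -l /etc/nginx/mime.types /etc/nginx/optimization.conf /etc/nginx/conf.d/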

Then move and modify the gateway.conf:

mv /etc/nginx/conf.d/gateway.conf /etc/nginx/conf.d/gateway.conf.bak
vi /etc/nginx/conf.d/gateway.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

upstream NEXTCLOUD-LB {
    ip_hash;
    #least_conn;
    server 192.168.56.4; # <- IP Server2
    server 192.168.56.5; # <- IP Server3
}

server {
    listen 80 default_server;
    server_name your.dedyn.io 192.168.56.3; # <- your dyndns name and IP Server1

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2 default_server;
    server_name your.dedyn.io 192.168.56.3; # <- your dyndns name and IP Server1
    include /etc/nginx/ssl.conf;
    include /etc/nginx/header.conf;

    location ^~ / {
        client_max_body_size 10G;
        proxy_connect_timeout 3600;
        proxy_send_timeout 3600;
        proxy_read_timeout 3600;
        send_timeout 3600;
        proxy_buffering on;
        proxy_max_temp_file_size 10240m;
        proxy_request_buffering on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://NEXTCLOUD-LB;
        proxy_redirect off;
    }
}

Verify your configuration by issuing

nginx -t

and restart your NGINX on Server1:

service nginx restart
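At this point you can already verify that the load balancer is listening and redirecting HTTP to HTTPS (the backends follow in the next steps), for example:

ss -tlnp | grep nginx          # expect listeners on ports 80 and 443
curl -I http://192.168.56.3/   # expect a 301 redirect to https://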

Next switch to Server2. Move and modify your nginx.conf again and move/modify the nextcloud.conf as described:

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    proxy_headers_hash_bucket_size 64;
    server_names_hash_bucket_size 64;
    upstream php-handler {
        server unix:/run/php/php7.2-fpm.sock;
    }
    set_real_ip_from 127.0.0.1;
    set_real_ip_from 192.168.56.0/24;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    include /etc/nginx/mime.types;
    include /etc/nginx/optimization.conf;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" '
        '"$host" sn="$server_name" '
        'rt=$request_time '
        'ua="$upstream_addr" us="$upstream_status" '
        'ut="$upstream_response_time" ul="$upstream_response_length" '
        'cs=$upstream_cache_status';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    send_timeout 3600;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=500 inactive=10m;
    open_file_cache_errors on;
    keepalive_timeout 65;
    reset_timedout_connection on;
    server_tokens off;
    resolver 192.168.56.1;
    resolver_timeout 10s;
    include /etc/nginx/conf.d/*.conf;
}

Move and modify the nextcloud.conf:

mv /etc/nginx/conf.d/nextcloud.conf /etc/nginx/conf.d/nextcloud.conf.bak 
vi /etc/nginx/conf.d/nextcloud.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

server {
    server_name 192.168.56.4; # <- local IP Server2
    listen 192.168.56.4:80 default_server; # <- local IP:80 Server2
    include /etc/nginx/proxy.conf;
    root /var/www/nextcloud/;
    access_log /var/log/nginx/nextcloud.access.log main;
    error_log /var/log/nginx/nextcloud.error.log warn;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    client_max_body_size 10240M;

    location / {
        rewrite ^ /index.php$uri;
    }
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ \.(?:flv|mp4|mov|m4a)$ {
        mp4;
        mp4_buffer_size 100m;
        mp4_max_buffer_size 1024m;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }
    location ~ ^/(?:index|ipcheck|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }
    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$uri$is_args$args;
        include /etc/nginx/proxy.conf;
        access_log off;
        expires 360d;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        access_log off;
        expires 360d;
    }
}

Verify and restart your webserver on Server2 by issuing

nginx -t
service nginx restart

and repeat the previous Server2 steps on Server3 (and on any further servers). Switch to Server3 and move/modify your nginx.conf and nextcloud.conf as described:

sudo -s
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    proxy_headers_hash_bucket_size 64;
    server_names_hash_bucket_size 64;
    upstream php-handler {
        server unix:/run/php/php7.2-fpm.sock;
    }
    set_real_ip_from 127.0.0.1;
    set_real_ip_from 192.168.56.0/24;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    include /etc/nginx/mime.types;
    include /etc/nginx/optimization.conf;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" '
        '"$host" sn="$server_name" '
        'rt=$request_time '
        'ua="$upstream_addr" us="$upstream_status" '
        'ut="$upstream_response_time" ul="$upstream_response_length" '
        'cs=$upstream_cache_status';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    send_timeout 3600;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=500 inactive=10m;
    open_file_cache_errors on;
    keepalive_timeout 65;
    reset_timedout_connection on;
    server_tokens off;
    resolver 192.168.56.1;
    resolver_timeout 10s;
    include /etc/nginx/conf.d/*.conf;
}

Move and modify the nextcloud.conf:

mv /etc/nginx/conf.d/nextcloud.conf /etc/nginx/conf.d/nextcloud.conf.bak 
vi /etc/nginx/conf.d/nextcloud.conf

Paste the following rows and substitute the IP addresses and domain names to match your environment:

server {
    server_name 192.168.56.5; # <- local IP Server3
    listen 192.168.56.5:80 default_server; # <- local IP:80 Server3
    include /etc/nginx/proxy.conf;
    root /var/www/nextcloud/;
    access_log /var/log/nginx/nextcloud.access.log main;
    error_log /var/log/nginx/nextcloud.error.log warn;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    client_max_body_size 10240M;

    location / {
        rewrite ^ /index.php$uri;
    }
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ \.(?:flv|mp4|mov|m4a)$ {
        mp4;
        mp4_buffer_size 100m;
        mp4_max_buffer_size 1024m;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }
    location ~ ^/(?:index|ipcheck|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        include php_optimization.conf;
        fastcgi_pass php-handler;
        fastcgi_param HTTPS on;
    }
    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$uri$is_args$args;
        include /etc/nginx/proxy.conf;
        access_log off;
        expires 360d;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$uri$is_args$args;
        access_log off;
        expires 360d;
    }
}

Verify and restart your webserver on Server3 by issuing

nginx -t
service nginx restart

and create the healthcheck file called “ipcheck.php” on Server2 and Server3:

sudo -u www-data vi /var/www/nextcloud/ipcheck.php

Paste the following rows:

<?php
header( 'Content-Type: text/plain' );
echo 'Host: ' . $_SERVER['HTTP_HOST'] . "\n";
echo 'Remote Address: ' . $_SERVER['REMOTE_ADDR'] . "\n";
echo 'X-Forwarded-For: ' . $_SERVER['HTTP_X_FORWARDED_FOR'] . "\n";
echo 'X-Forwarded-Proto: ' . $_SERVER['HTTP_X_FORWARDED_PROTO'] . "\n";
echo 'Server Address: ' . $_SERVER['SERVER_ADDR'] . "\n";
echo 'Server Port: ' . $_SERVER['SERVER_PORT'] . "\n\n";
?>
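Before testing through the load balancer you can call each backend directly from inside the lab network; the X-Forwarded-* lines will only be filled once the request passes through the proxy:

curl http://192.168.56.4/ipcheck.php
curl http://192.168.56.5/ipcheck.php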

Call the following URL in your browser

https://your.dedyn.io/ipcheck.php

or issue curl; either way you will be “sticky” load balanced to one of your backend servers (Server2 or Server3):

curl https://your.dedyn.io/ipcheck.php


This stickiness comes from the NGINX ip_hash statement in your gateway.conf. If you remove the “ip_hash;” line from gateway.conf and restart NGINX on Server1, you can watch requests being distributed across the backends by issuing

curl https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php https://your.dedyn.io/ipcheck.php


Modify your Nextcloud config.php by issuing

sudo -u www-data vi /var/www/nextcloud/config/config.php

and amend:

...
'trusted_domains' =>
  array (
    0 => 'server1',
    1 => '192.168.56.3',
  ),
...
'overwrite.cli.url' => 'https://192.168.56.3',
...
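Alternatively, the same values can be set with the occ tool instead of editing config.php by hand (a sketch, assuming the default /var/www/nextcloud/occ path):

sudo -u www-data php /var/www/nextcloud/occ config:system:set trusted_domains 1 --value=192.168.56.3
sudo -u www-data php /var/www/nextcloud/occ config:system:set overwrite.cli.url --value=https://192.168.56.3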

Your Nextcloud server will be reachable again; just call your dyndns name (the load balancer).

But please be aware: every balanced Nextcloud instance (Server2 and Server3) is still using its own configuration and data directory.


√ Challenge 1: load balancing using sticky sessions (ssl enabled)


Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)

To simplify our lab environment we assume Server1 serves as both load balancer and NFS server. So install the NFS kernel server on Server1 by issuing

sudo -s
apt install nfs-kernel-server

and create the shares by modifying the /etc/exports file on Server1:

vi /etc/exports

Paste the following rows

/var/www/nextcloud 192.168.56.4(rw,async,no_root_squash)
/var/www/nextcloud 192.168.56.5(rw,async,no_root_squash)
/var/nc_data 192.168.56.4(rw,async,no_root_squash)
/var/nc_data 192.168.56.5(rw,async,no_root_squash)

and export the new NFS shares:

exportfs -ra
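You can verify the exports on Server1 before touching the clients, for example:

exportfs -v
showmount -e localhost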

Don’t forget to open your firewall for the lab subnet (or, more restrictively, for the specific client IPs):

ufw allow from 192.168.56.0/24 to any port 111
ufw allow from 192.168.56.0/24 to any port 2049
ufw allow from 192.168.56.0/24 to any port 13025

Now switch to Server2 and Server3 and move both directories out of the way:

sudo -s
mv /var/www/nextcloud /var/www/nextcloud.old && mv /var/nc_data /var/nc_data.old

and create empty directories with proper permissions on Server2 and Server3:

mkdir -p /var/nc_data && mkdir -p /var/www/nextcloud && chown -R www-data:www-data /var/nc_data && chown -R www-data:www-data /var/www/nextcloud

Install the NFS client on Server2 and Server3 by issuing

apt install nfs-common
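With nfs-common installed you can check from Server2 and Server3 that the shares on Server1 are visible (this also confirms the firewall rules above):

showmount -e 192.168.56.3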

Then stop NGINX on Server2 and Server3:

service nginx stop

Then mount the provided shares via /etc/fstab and restart nginx on Server2 and Server3:

vi /etc/fstab

Paste the following rows to /etc/fstab on Server2 and Server3

192.168.56.3:/var/www/nextcloud /var/www/nextcloud nfs rw 0 0
192.168.56.3:/var/nc_data /var/nc_data nfs rw 0 0
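Before restarting NGINX, mount the new fstab entries and confirm both shares are attached:

mount -a
df -h /var/www/nextcloud /var/nc_data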

Then restart NGINX on Server2 and Server3:

service nginx restart

From now on, your Nextcloud binaries and data are shared via NFS; all files and data are delivered by Server1, which acts as the NFS server. Outside the lab, please use a dedicated NFS server or an existing NAS (e.g. Synology).

For this lab we simplified the topology by using Server1 for both the load balancer and the NFS server. Outside this lab environment we highly recommend separating these roles for security and performance reasons!


√ Challenge 2: global Nextcloud binaries and data (/var/www/nextcloud & /var/nc_data)


Challenge 3: global Nextcloud database

First change MariaDB's bind address from 127.0.0.1 to e.g. 0.0.0.0 on Server1:

sudo -s
vi /etc/mysql/mariadb.conf.d/50-server.cnf

Change the binding as follows:

# bind-address = 127.0.0.1
bind-address = 0.0.0.0

and restart MariaDB:

service mysql restart
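You can confirm that MariaDB now listens on all interfaces (default port 3306 assumed):

ss -tlnp | grep 3306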

Remove the nextcloud@localhost user and grant nextcloud@'192.168.56.%' access for remote database connections. Connect to the database server

mysql -u root -p

and issue

SELECT User, Host FROM mysql.user;
DROP USER nextcloud@localhost;
GRANT ALL PRIVILEGES ON nextcloud.* TO nextcloud@'192.168.56.%' IDENTIFIED BY 'nextcloud';
FLUSH PRIVILEGES;
quit;

Restart MariaDB again

service mysql restart

and modify your firewall by issuing

ufw allow from 192.168.56.0/24 to any port 3306
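From Server2 or Server3 you can now verify the remote grant with the MySQL client (credentials from the example above; the client is already present on the clones):

mysql -h 192.168.56.3 -u nextcloud -p -e "SHOW DATABASES;"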

Finally, change Nextcloud's config.php by using

sudo -u www-data vi /var/www/nextcloud/config/config.php
...
 'dbtype' => 'mysql',
 'version' => '13.0.0.14',
 'dbname' => 'nextcloud',
 'dbhost' => '192.168.56.3',
 'dbport' => '',
 'dbtableprefix' => 'oc_',
 'mysql.utf8mb4' => true,
 'dbuser' => 'nextcloud',
 'dbpassword' => 'nextcloud',
...
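To confirm that all balanced instances talk to the shared database, check the Nextcloud status on one of the nodes and through the load balancer (substitute your own dyndns name):

sudo -u www-data php /var/www/nextcloud/occ status
curl https://your.dedyn.io/status.php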

Nextcloud will now use the global remote database from all load balanced instances. Outside the lab, please set up a dedicated MariaDB server for security and performance reasons.

For this lab we simplified the topology by using Server1 as load balancer, NFS server and database server all at once. Outside this lab environment we highly recommend separating these roles for security and performance reasons!


√ Challenge 3: global Nextcloud database


Challenge 4: global Redis-Server

… scheduled for March, 2018