Nextcloud 13 installation guide advanced

End of regular support – will be maintained sporadically only


Following this advanced guide you will be able to install and configure Nextcloud 13 based on Ubuntu 16.04.4 LTS or 18.04 LTS, NGINX 1.15.2 with ngx_cache_purge enabled, PHP 7.2, MariaDB, Redis, fail2ban and a firewall (ufw). In addition you will gain an A+ rating from both Nextcloud and Qualys SSL Labs. We will request and implement the SSL certificate from Let’s Encrypt in chapter 07.

You only have to amend the red marked values (YOUR.DEDYN.IO, 192.168.2.x, 22, redis-hash, your@dedyn.io, your-ubuntu-user-name, ssh-port) to match your environment!


Table of contents

01. Compile and install NGINX 1.15 with ngx_cache_purge module enabled
02. Install PHP 7.2
03. Install MariaDB
04. Prepare NGINX for Let’s Encrypt and Nextcloud 13
05. Install Nextcloud 13
06. Install Redis-Server
07. Create the ssl certificates
08. Configure Nextcloud 13
The following chapters are optional:
09. Mount additional storage to Nextcloud
09.1 NAS (e.g. Synology)
09.2 external HDD e.g. WD for NextcloudBox
10. Recommended tweaks and hardening
10.1 Make use of ramdisk
10.2 Prevent ctrl+alt+del
10.3 Install and enjoy fail2ban
11. SSL certificate renewal
12. Backup
13. Server hardening
13.1 Disable IPv6 (if not needed)
13.2 Enable and configure the ufw
13.3 Prevent IP spoofing
13.4 Check your environment using nmap
13.5 Install Postfix (SMTP)
13.6 Install and enable apticron
13.7 SSH hardening (2FA)
13.8 Logwatch
14. Monitor your entire system using netdata


Last Updates:

July 8th, 2018:
– MariaDB changes: transaction_isolation = READ-COMMITTED, binlog_format = ROW

… the entire update history


01. Compile and install NGINX 1.15 with ngx_cache_purge module enabled

NGINX 1.15 will be built from source in this guide. We suggest upgrading to OpenSSL 1.1.0h (how to upgrade to openssl 1.1.0h) before starting the installation! Then prepare your server:

sudo -s
cd /usr/local/src

Ubuntu 16.04.4 LTS:

apt update && apt upgrade -y && apt install software-properties-common python-software-properties zip unzip screen curl ffmpeg libfile-fcntllock-perl -y

Ubuntu 18.04 LTS:

apt update && apt upgrade -y && apt install software-properties-common python-software-properties zip unzip screen curl ffmpeg libfile-fcntllock-perl -y
wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key
apt install language-pack-en-base -y && export LC_ALL=en_US.UTF-8

Build NGINX manually and install NGINX:

As sudo (sudo -s) change into the directory “/usr/local/src” and update your system first.

apt update && apt upgrade -y

To remove any previous NGINX-Packages please issue

apt remove nginx nginx-common nginx-full -y --allow-change-held-packages

Add the NGINX key…

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key

… and the NGINX repositories to your system:

vi /etc/apt/sources.list.d/nginx.list

Copy and paste the following lines:

Ubuntu 16.04:

deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx

Ubuntu 18.04:

deb http://nginx.org/packages/mainline/ubuntu/ bionic nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ bionic nginx

Then update your software sources by issuing:

apt update

Some warnings may appear while running ‘apt update’ (e.g. on ARM devices):

N: Skipping acquire of configured file ‘nginx/binary-arm64/Packages’ as repository ‘http://nginx.org/packages/mainline/ubuntu xenial InRelease’ doesn’t support architecture ‘arm64’
N: Skipping acquire of configured file ‘nginx/binary-armhf/Packages’ as repository ‘http://nginx.org/packages/mainline/ubuntu xenial InRelease’ doesn’t support architecture ‘armhf’

Please ignore these warnings and go ahead with downloading the build dependencies and the source code for the new NGINX webserver:

apt build-dep nginx -y && apt source nginx

Another warning/error will be thrown:

W: Can’t drop privileges for downloading as file ‘nginx_1.15.2-1~xenial.dsc’ couldn’t be accessed by user ‘_apt’. – pkgAcquire::Run (13: Permission denied)

Please ignore this warning as well and go ahead with the next step. Create and change into the modules directory:

mkdir /usr/local/src/nginx-1.15.2/debian/modules -p && cd /usr/local/src/nginx-1.15.2/debian/modules

Now, in the modules directory, we are going to download and extract the code for each of the modules we want to include (e.g. ngx_cache_purge 2.3):

wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.tar.gz

Extract the archive and remove the downloaded tarball:

tar -zxvf 2.3.tar.gz && rm 2.3.tar.gz

Change back to the debian directory and edit the compiler information file “rules”:

cd /usr/local/src/nginx-1.15.2/debian && vi rules

You will need to modify two lines in the rules file. Search for --with-ld-opt="$(LDFLAGS)" and immediately after the first occurrence add the following:

--add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3"

and on the second occurrence add the following:

--add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3" --with-debug

Change

dh_shlibdeps -a

to

dh_shlibdeps -a --dpkg-shlibdeps-params=--ignore-missing-info

as well. Save and quit (:wq!) the rules file. We will now build the Debian package; please ensure you are in the nginx source directory:

cd /usr/local/src/nginx-1.15.2

Start building the package by:

dpkg-buildpackage -uc -b -j4

After the package build (which may take a while, ~10 min), change back to the src directory:

cd /usr/local/src

Start installing the new NGINX webserver, choose the package that fits your environment (ARM64 or AMD64):

dpkg --install nginx_1.15.2-1~bionic_arm64.deb

Press ‘N’ when asked whether to replace the default.conf.

The name of the *.deb file depends on your server architecture (amd64, arm64, …). Please adjust the name accordingly, e.g. to dpkg --install nginx_1.15.2-1~xenial_amd64.deb.

Configure NGINX

Mark the NGINX webserver as “hold” to avoid automatic updates via apt upgrade, and set NGINX to autostart:

apt-mark hold nginx && systemctl enable nginx.service

Check the number of CPUs and the process limits of your server hardware:

grep ^processor /proc/cpuinfo | wc -l

Result: 4 (Odroid C2)

As NGINX Amplify recommends, we set this value to auto instead of 4. Then check the open file limit:

ulimit -n

Result: 1024 (Odroid C2)

Change the nginx.conf with regard to the previous values

cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

to:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
http {
server_names_hash_bucket_size 64;
upstream php-handler {
server unix:/run/php/php7.2-fpm.sock;
}
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.0/24;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
include /etc/nginx/mime.types;
#include /etc/nginx/proxy.conf;
#include /etc/nginx/ssl.conf;
#include /etc/nginx/header.conf;
#include /etc/nginx/optimization.conf;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$host" sn="$server_name" '
'rt=$request_time '
'ua="$upstream_addr" us="$upstream_status" '
'ut="$upstream_response_time" ul="$upstream_response_length" '
'cs=$upstream_cache_status' ;
access_log /var/log/nginx/access.log main;
sendfile on;
send_timeout 3600;
tcp_nopush on;
tcp_nodelay on;
open_file_cache max=500 inactive=10m;
open_file_cache_errors on;
keepalive_timeout 65;
reset_timedout_connection on;
server_tokens off;
resolver 192.168.2.1;
# IPv4 and IPv6:
# resolver 192.168.2.1 [0:0:0:0:0:FFFF:C0A8:0201];
# resolver IP is your DNS e.g. your FritzBox/Router
resolver_timeout 10s;
include /etc/nginx/conf.d/*.conf;
}

Start NGINX:

service nginx restart

Create all required folders and apply the proper permissions:

mkdir -p /var/nc_data /var/www/letsencrypt /usr/local/tmp/cache /usr/local/tmp/sessions /usr/local/tmp/apc /upload_tmp
chown -R www-data:www-data /upload_tmp /var/nc_data /var/www
chown -R www-data:root /usr/local/tmp/sessions /usr/local/tmp/cache /usr/local/tmp/apc

Check your new NGINX-webserver by issuing

nginx -t

If the following output appears

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

start NGINX and verify that the module “ngx_cache_purge” is enabled:

service nginx restart && nginx -V 2>&1 | grep ngx_cache_purge -o

If “ngx_cache_purge” appears in the output, your webserver was built correctly. Modify the sources file “nginx.list” to disable its content

vi /etc/apt/sources.list.d/nginx.list

by adding ‘#’ at the beginning of each line:

#deb http://nginx.org/packages/ubuntu/ xenial nginx
#deb-src http://nginx.org/packages/ubuntu/ xenial nginx

Go ahead with the installation of PHP.


02. Install PHP 7.2

Install PHP 7.2 from ondrej’s Ubuntu PPA:

apt install language-pack-en-base -y && sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php -y && apt update
apt install php7.2-fpm php7.2-gd php7.2-mysql php7.2-curl php7.2-xml php7.2-zip php7.2-intl php7.2-mbstring php7.2-json php7.2-bz2 php7.2-ldap php-apcu imagemagick php-imagick php-smbclient -y

Awesome, PHP 7.2 is installed but must still be configured. Let’s configure PHP using the following commands:

cp /etc/php/7.2/fpm/pool.d/www.conf /etc/php/7.2/fpm/pool.d/www.conf.bak
cp /etc/php/7.2/cli/php.ini /etc/php/7.2/cli/php.ini.bak
cp /etc/php/7.2/fpm/php.ini /etc/php/7.2/fpm/php.ini.bak
cp /etc/php/7.2/fpm/php-fpm.conf /etc/php/7.2/fpm/php-fpm.conf.bak
sed -i "s/;env\[HOSTNAME\] = /env[HOSTNAME] = /" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/;env\[TMP\] = /env[TMP] = /" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/;env\[TMPDIR\] = /env[TMPDIR] = /" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/;env\[TEMP\] = /env[TEMP] = /" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/;env\[PATH\] = /env[PATH] = /" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/pm.max_children = .*/pm.max_children = 240/" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/pm.start_servers = .*/pm.start_servers = 20/" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/pm.min_spare_servers = .*/pm.min_spare_servers = 10/" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/pm.max_spare_servers = .*/pm.max_spare_servers = 20/" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/;pm.max_requests = 500/pm.max_requests = 500/" /etc/php/7.2/fpm/pool.d/www.conf
sed -i "s/output_buffering =.*/output_buffering = 'Off'/" /etc/php/7.2/cli/php.ini
sed -i "s/max_execution_time =.*/max_execution_time = 1800/" /etc/php/7.2/cli/php.ini
sed -i "s/max_input_time =.*/max_input_time = 3600/" /etc/php/7.2/cli/php.ini
sed -i "s/post_max_size =.*/post_max_size = 10240M/" /etc/php/7.2/cli/php.ini
sed -i "s/;upload_tmp_dir =.*/upload_tmp_dir = \/upload_tmp/" /etc/php/7.2/cli/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 10240M/" /etc/php/7.2/cli/php.ini
sed -i "s/max_file_uploads =.*/max_file_uploads = 100/" /etc/php/7.2/cli/php.ini
sed -i "s/;date.timezone.*/date.timezone = Europe\/\Berlin/" /etc/php/7.2/cli/php.ini
sed -i "s/;session.cookie_secure.*/session.cookie_secure = True/" /etc/php/7.2/cli/php.ini
sed -i "s/;session.save_path =.*/session.save_path = \"N;700;\/usr\/local\/tmp\/sessions\"/" /etc/php/7.2/cli/php.ini
sed -i '$aapc.enable_cli = 1' /etc/php/7.2/cli/php.ini
sed -i "s/memory_limit = 128M/memory_limit = 512M/" /etc/php/7.2/fpm/php.ini
sed -i "s/output_buffering =.*/output_buffering = 'Off'/" /etc/php/7.2/fpm/php.ini
sed -i "s/max_execution_time =.*/max_execution_time = 1800/" /etc/php/7.2/fpm/php.ini
sed -i "s/max_input_time =.*/max_input_time = 3600/" /etc/php/7.2/fpm/php.ini
sed -i "s/post_max_size =.*/post_max_size = 10240M/" /etc/php/7.2/fpm/php.ini
sed -i "s/;upload_tmp_dir =.*/upload_tmp_dir = \/upload_tmp/" /etc/php/7.2/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 10240M/" /etc/php/7.2/fpm/php.ini
sed -i "s/max_file_uploads =.*/max_file_uploads = 100/" /etc/php/7.2/fpm/php.ini
sed -i "s/;date.timezone.*/date.timezone = Europe\/\Berlin/" /etc/php/7.2/fpm/php.ini
sed -i "s/;session.cookie_secure.*/session.cookie_secure = True/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.enable=.*/opcache.enable=1/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.enable_cli=.*/opcache.enable_cli=1/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.memory_consumption=.*/opcache.memory_consumption=128/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.interned_strings_buffer=.*/opcache.interned_strings_buffer=8/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.max_accelerated_files=.*/opcache.max_accelerated_files=10000/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.revalidate_freq=.*/opcache.revalidate_freq=1/" /etc/php/7.2/fpm/php.ini
sed -i "s/;opcache.save_comments=.*/opcache.save_comments=1/" /etc/php/7.2/fpm/php.ini
sed -i "s/;session.save_path =.*/session.save_path = \"N;700;\/usr\/local\/tmp\/sessions\"/" /etc/php/7.2/fpm/php.ini
sed -i "s/;emergency_restart_threshold =.*/emergency_restart_threshold = 10/" /etc/php/7.2/fpm/php-fpm.conf
sed -i "s/;emergency_restart_interval =.*/emergency_restart_interval = 1m/" /etc/php/7.2/fpm/php-fpm.conf
sed -i "s/;process_control_timeout =.*/process_control_timeout = 10s/" /etc/php/7.2/fpm/php-fpm.conf
sed -i '$aapc.enabled=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.file_update_protection=2' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.optimization=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.shm_size=256M' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.include_once_override=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.shm_segments=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.ttl=7200' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.user_ttl=7200' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.gc_ttl=3600' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.num_files_hint=1024' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.enable_cli=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.max_file_size=5M' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.cache_by_default=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.use_request_time=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.slam_defense=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.mmap_file_mask=/usr/local/tmp/apc/apc.XXXXXX' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.stat_ctime=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.canonicalize=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.write_lock=1' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.report_autofilter=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.rfc1867=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.rfc1867_prefix =upload_' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.rfc1867_name=APC_UPLOAD_PROGRESS' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.rfc1867_freq=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.rfc1867_ttl=3600' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.lazy_classes=0' /etc/php/7.2/fpm/php.ini
sed -i '$aapc.lazy_functions=0' /etc/php/7.2/fpm/php.ini
sed -i "s/09,39.*/# &/" /etc/cron.d/php
(crontab -l ; echo "09,39 * * * * /usr/lib/php/sessionclean 2>&1") | crontab -u root -
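
To double-check that the new cron entry is in place, you can optionally list root’s crontab:

crontab -u root -l | grep sessionclean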

Awesome, PHP is configured and ready for Nextcloud!

Modify /etc/fstab and enable RAMDISK

Determine the uid of your www-data user by issuing

id www-data

and only if it differs from ‘uid=33‘, adjust the ‘uid=33‘ values below accordingly before executing the commands!

sed -i '$atmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0' /etc/fstab
sed -i '$atmpfs /var/tmp tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0' /etc/fstab
sed -i '$atmpfs /usr/local/tmp/apc tmpfs defaults,uid=33,size=300M,noatime,nosuid,nodev,noexec,mode=1777 0 0' /etc/fstab
sed -i '$atmpfs /usr/local/tmp/cache tmpfs defaults,uid=33,size=300M,noatime,nosuid,nodev,noexec,mode=1777 0 0' /etc/fstab
sed -i '$atmpfs /usr/local/tmp/sessions tmpfs defaults,uid=33,size=300M,noatime,nosuid,nodev,noexec,mode=1777 0 0' /etc/fstab

Finally mount the ramdisk and restart PHP and NGINX. From now on, all changes are in place.

mount -a && service php7.2-fpm restart && service nginx restart
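
You can optionally verify that the tmpfs mounts are active:

df -h | grep tmpfs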

Go ahead with the installation of MariaDB.


03. Install MariaDB

You may install MariaDB directly from the Ubuntu repository. Update your system and install MariaDB:

apt update && apt install mariadb-server -y

Now you are already running MariaDB. Configure and secure the database server by running the mysql_secure_installation tool:

mysql_secure_installation

If you already set the database password for the <root> user during the installation process you can skip the first question. All other questions should be answered with ‘Yes’ (Y).

Edit MariaDB’s configuration-file:

mv /etc/mysql/my.cnf /etc/mysql/my.cnf.bak
vi /etc/mysql/my.cnf

Change the MariaDB my.cnf-file to:

[server]
skip-name-resolve
innodb_buffer_pool_size = 128M
innodb_buffer_pool_instances = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_max_dirty_pages_pct = 90
query_cache_type = 1
query_cache_limit = 2M
query_cache_min_res_unit = 2k
query_cache_size = 64M
tmp_table_size= 64M
max_heap_table_size= 64M
slow-query-log = 1
slow-query-log-file = /var/log/mysql/slow.log
long_query_time = 1

[client-server]
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/

[client]
default-character-set = utf8mb4

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
transaction_isolation = READ-COMMITTED
binlog_format = ROW
innodb_large_prefix=on
innodb_file_format=barracuda
innodb_file_per_table=1

Now we will create the database for Nextcloud. Open the MariaDB console

service mysql restart && mysql -uroot -p

and create the database and the database user:

CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER nextcloud@localhost identified by 'nextcloud';
GRANT ALL PRIVILEGES on nextcloud.* to nextcloud@localhost;
FLUSH privileges;
quit;

MariaDB now fits all requirements and is already up and running.
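
You can optionally double-check that the most important settings from the my.cnf above are active (variable names may differ slightly between MariaDB versions):

mysql -uroot -p -e "SHOW VARIABLES LIKE 'binlog_format'; SHOW VARIABLES LIKE '%isolation%'; SHOW VARIABLES LIKE 'character_set_server';"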


04. Prepare NGINX for Let’s Encrypt and Nextcloud

The new file structure will look like this:

/etc/nginx/
  • nginx.conf
    (nginx basic configuration)
  • ssl.conf, header.conf, proxy.conf, optimization.conf, php_optimization.conf
/etc/nginx/conf.d/
  • nextcloud.conf (Nextcloud vhost)
  • letsencrypt.conf (let’s encrypt vhost)

Your Nextcloud will be reachable via https://YOUR.DEDYN.IO. Please substitute your DynDNS name, IP and resolver IP according to your environment.

service nginx stop

Create the nextcloud.conf

mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.bak
vi /etc/nginx/conf.d/nextcloud.conf

and paste the following lines:

fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
map $request_uri $skip_cache {
default 1;
~*/thumbnail.php 0;
~*/apps/galleryplus/ 0;
~*/apps/gallery/ 0;
}
server {
server_name YOUR.DEDYN.IO;
listen 80 default_server;
location ^~ /.well-known/acme-challenge {
proxy_pass http://127.0.0.1:81;
proxy_set_header Host $host;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
server_name YOUR.DEDYN.IO;
listen 443 ssl http2 default_server;
root /var/www/nextcloud/;
access_log /var/log/nginx/nextcloud.access.log main;
error_log /var/log/nginx/nextcloud.error.log warn;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
client_max_body_size 10240M;
location / {
rewrite ^ /index.php$uri;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ \.(?:flv|mp4|mov|m4a)$ {
mp4;
mp4_buffer_size 100m;
mp4_max_buffer_size 1024m;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache NEXTCLOUD;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:css|js|woff|svg|gif|png|html|ttf|ico|jpg|jpeg)$ {
try_files $uri /index.php$uri$is_args$args;
access_log off;
expires 360d;
}
}

If you want your Nextcloud running in a subdir like https://your.dedyn.io/nextcloud use this nextcloud.conf instead:

server {
server_name your.dedyn.io;
listen 80 default_server;
location ^~ /.well-known/acme-challenge {
proxy_pass http://127.0.0.1:81;
proxy_set_header Host $host;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
server_name your.dedyn.io;
listen 443 ssl http2 default_server;
root /var/www/;
access_log /var/log/nginx/nextcloud.access.log main;
error_log /var/log/nginx/nextcloud.error.log warn;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/nextcloud/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/nextcloud/remote.php/dav;
}
client_max_body_size 10240M;
location ^~ /nextcloud {
location /nextcloud {
rewrite ^ /nextcloud/index.php$uri;
}
location ~ ^/nextcloud/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/nextcloud/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ \.(?:flv|mp4|mov|m4a)$ {
mp4;
mp4_buffer_size 100m;
mp4_max_buffer_size 1024m;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/nextcloud/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/nextcloud/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|css|js|woff|svg|gif)$ {
try_files $uri /nextcloud/index.php$uri$is_args$args;
access_log off;
}
}
}

Attention regarding 10240M (10G): the maximum value on a 32-bit OS is 2048M.


Save and quit the file (:wq!) and create the Let’s Encrypt-nginx-configuration file:

vi /etc/nginx/conf.d/letsencrypt.conf

Paste the following lines:

server {
server_name 127.0.0.1;
listen 127.0.0.1:81 default_server;
charset utf-8;
access_log /var/log/nginx/le.access.log main;
error_log /var/log/nginx/le.error.log warn;
location ^~ /.well-known/acme-challenge {
default_type text/plain;
root /var/www/letsencrypt;
}
}

Save and quit (:wq!) the file and create the ssl.conf:

vi /etc/nginx/ssl.conf

Paste the following lines:

ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_trusted_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
#ssl_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem;
#ssl_certificate_key /etc/letsencrypt/live/YOUR.DEDYN.IO/privkey.pem;
#ssl_trusted_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/chain.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384';
ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
ssl_prefer_server_ciphers on;
ssl_stapling on;
ssl_stapling_verify on;

Android users: if you run into trouble, e.g. with CalDAV/CardDAV, decrease the elliptic curve and cipher strength to:

ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_ecdh_curve prime256v1;

Save and quit the file (:wq!) and enhance security by using the Diffie-Hellman-Parameter:

screen -S dhparam
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096

We recommend using screen as shown above to run this command in the background. It will take a long time to compute on an Odroid C2, so please be patient.
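
If you detach from the screen session (ctrl+a, then d) or lose your SSH connection, you can reattach to it later to check the progress:

screen -r dhparam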

While the dhparam.pem is being generated, create the header.conf:

vi /etc/nginx/header.conf

Paste the following lines:

add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer" always;

Save and quit (:wq!) the file and create the proxy.conf

vi /etc/nginx/proxy.conf

Paste the following lines:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
proxy_redirect off;

Save and quit (:wq!) the file and create the optimization.conf

vi /etc/nginx/optimization.conf

Paste the following lines:

fastcgi_read_timeout 3600;
fastcgi_buffers 64 64K;
fastcgi_buffer_size 256k;
fastcgi_busy_buffers_size 3840K;
fastcgi_cache_key $http_cookie$request_method$host$request_uri;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
gzip_disable "MSIE [1-6]\.";

Save and quit (:wq!) the file and create the php_optimization.conf

vi /etc/nginx/php_optimization.conf

Paste the following lines:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_cache_valid 404 1m;
fastcgi_cache_valid any 1h;
fastcgi_cache_methods GET HEAD;

Save and quit (:wq!) the file and then remove the leading ‘#’ in the /etc/nginx/nginx.conf file:

sed -i s/\#\include/\include/g /etc/nginx/nginx.conf

Validate your NGINX webserver

nginx -t

and restart NGINX

service nginx restart

if no errors appear.


05. Install Nextcloud 13

Please find the relevant release information in Nextcloud’s Maintenance and Release Schedule.

The web folders for all applications were already created, so we can start downloading and extracting the software. Change to our working directory again:

cd /usr/local/src

Download the latest Nextcloud package:

wget https://download.nextcloud.com/server/releases/latest.tar.bz2

Extract the Nextcloud package to your web-folder /var/www/nextcloud:

tar -xjf latest.tar.bz2 -C /var/www

Remove the source file:

rm latest.tar.bz2

Reset the permissions:

chown -R www-data:www-data /var/www/

Go ahead with the installation of the Redis Cache Server.


06. Install Redis-Server

Run the installation of redis:

apt update && apt install redis-server php-redis -y

Then edit the redis-configuration:

cp /etc/redis/redis.conf /etc/redis/redis.conf.bak
vi /etc/redis/redis.conf

Change both

a) the default port to ‘0’

# port 6379
port 0

and

b) the unixsocket-entries from

# unixsocket /var/run/redis/redis.sock
# or on Ubuntu 18.04 LTS: # unixsocket /var/run/redis/redis-server.sock
# unixsocketperm 700

to

Ubuntu 16.04.4 LTS:

unixsocket /var/run/redis/redis.sock
unixsocketperm 770

Ubuntu 18.04.4 LTS:

unixsocket /var/run/redis/redis-server.sock
unixsocketperm 770

Now change the value for maxclients from 10000 to an appropriate value to avoid errors like:

# You requested maxclients of 10000 requiring at least 10032 max file descriptors.
# Redis can’t set maximum open files to 10032 because of OS error: Operation not permitted
# Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase ‘ulimit -n’.

Depending on your server hardware, set the value e.g. to 512 for an Odroid C2:

# maxclients 10000
maxclients 512

Save and quit the file (:wq!) and add the web user (www-data) to the redis group, which Nextcloud needs to access Redis via the socket:

usermod -a -G redis www-data
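
You can confirm the new group membership right away; www-data should now also list the redis group:

id www-data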

Optional only:

Create a password hash by issuing the following statement (replace the red value according to your needs):

echo "yourPassWord2BHashed" | sha256sum

Note down the result without the trailing ‘ -‘:

d98c51c882960945f49fe8127cb0eb97dbf435b3532bd58c846bd85c2282c4af -

Edit the redis.conf again

vi /etc/redis/redis.conf

and paste:

requirepass d98c51c882960945f49fe8127cb0eb97dbf435b3532bd58c846bd85c2282c4af
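
Keep in mind: if you set requirepass, later redis-cli calls (e.g. the checks at the end of this chapter) have to authenticate as well, for example (socket path as on Ubuntu 16.04, use redis-server.sock on 18.04):

redis-cli -s /var/run/redis/redis.sock -a d98c51c882960945f49fe8127cb0eb97dbf435b3532bd58c846bd85c2282c4af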

To also fix

# WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.

in the redis-server.log, add “vm.overcommit_memory = 1” to /etc/sysctl.conf and run the sysctl command directly in your terminal:

vi /etc/sysctl.conf

At the end add the following row:

vm.overcommit_memory = 1

Save and quit the file (:wq!) and run this command in your shell

sysctl -p

for this to take effect immediately. Another warning may occur:

“# WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.”

To fix this warning as well, add a new setting to /etc/rc.local so that it will persist across reboots:

vi /etc/rc.local

Add

sysctl -w net.core.somaxconn=65535

Save and quit the file (:wq!). After the next reboot the system will allow 65535 connections instead of the previous 128.
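
Note for Ubuntu 18.04: /etc/rc.local usually does not exist by default. If you create it, it needs a shebang and must be executable so that systemd’s rc-local service runs it at boot; a minimal file could look like this (assuming you use rc.local for nothing else):

#!/bin/bash
sysctl -w net.core.somaxconn=65535
exit 0

Then make it executable:

chmod +x /etc/rc.local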

shutdown -r now

After the reboot please validate that both files exist (/run/redis and /var/run/redis point to the same directory):

sudo -s
ls -la /run/redis && ls -la /var/run/redis

Files:

redis-server.pid
redis.sock (Ubuntu 16.04) or redis-server.sock (Ubuntu 18.04)

If you want to check whether Redis is running correctly, type:

Ubuntu 16.04.4 LTS:

redis-cli -s /var/run/redis/redis.sock

Ubuntu 18.04 LTS:

redis-cli -s /var/run/redis/redis-server.sock

and enter

PING

You will receive

PONG

as a valid response from Redis. Leave the Redis console with quit and review what happens inside Redis:

Ubuntu 16.04.4 LTS:

redis-cli -s /var/run/redis/redis.sock monitor

Ubuntu 18.04 LTS:

redis-cli -s /var/run/redis/redis-server.sock monitor

while you are browsing in Nextcloud. Go ahead and create your ssl certificates.


07. Create the ssl certificates

Install the Let’s Encrypt client software from the certbot PPA:

add-apt-repository ppa:certbot/certbot -y
apt update && apt install letsencrypt -y

Request your certificate(s):

letsencrypt certonly -a webroot --webroot-path=/var/www/letsencrypt --rsa-key-size 4096 -d YOUR.DEDYN.IO

If asked, add your notification email for Let’s Encrypt and agree to their Terms of Service; finally the client will display a success message. All Let’s Encrypt certificates will be stored under:

ls -la /etc/letsencrypt/live/YOUR.DEDYN.IO
cert.pem - certificate (public key)
chain.pem - intermediate certificate(s)
fullchain.pem - bundle (cert.pem + chain.pem)
privkey.pem - private key
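
To inspect the new certificate, e.g. its subject and validity period, you can optionally use openssl:

openssl x509 -in /etc/letsencrypt/live/YOUR.DEDYN.IO/cert.pem -noout -subject -dates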

Now apply the proper permissions using a new permission script called permissions.sh:

vi /home/next/permissions.sh

Paste the following lines:

#!/bin/bash
find /var/www/ -type f -print0 | xargs -0 chmod 0640
find /var/www/ -type d -print0 | xargs -0 chmod 0750
chown -R www-data:www-data /var/www/
chown -R www-data:www-data /upload_tmp/
chown -R www-data:www-data /var/nc_data/
chmod 0644 /var/www/nextcloud/.htaccess
chmod 0644 /var/www/nextcloud/.user.ini
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/privkey.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/chain.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/cert.pem
chmod 600 /etc/ssl/certs/dhparam.pem
exit 0

Save and close (:wq!) the shell script, mark it as executable and issue it:

chmod u+x /home/next/permissions.sh
/home/next/permissions.sh

This script can be reused after every update or modification to your server configuration.

Modify the ssl.conf: remove the self-signed (snakeoil) certificate lines and the leading ‘#’ from the Let’s Encrypt ssl lines:

sed -i '/ssl-cert-snakeoil/d' /etc/nginx/ssl.conf
sed -i s/\#\ssl/\ssl/g /etc/nginx/ssl.conf

Then verify the configuration and restart NGINX:

nginx -t 
service nginx restart

Your server is now configured to use SSL.


08. Configure Nextcloud

Open your browser and call https://your.dedyn.io, then enter the following values:

Username: cloud-root
Password*: cloud-13-root-password!
Data folder: /var/nc_data
Database user: nextcloud
Database password: nextcloud
Database name: nextcloud
Host: localhost

Click ‘Finish setup’ and wait a few seconds … the installation will finish and you will be taken to the Nextcloud welcome screen. Some smaller changes should be applied to the Nextcloud config.php immediately.

egrep "'instanceid' =>.*|'passwordsalt' => '.*|'secret' => '.*" /var/www/nextcloud/config/config.php
'instanceid' => 'ofg69hjknlor0',
'passwordsalt' => 'RrRjXeEeEdddBmJbRnqlnVK7e6R5T3hRX',
'secret' => 'HjKlIz9i8J7G6F5DuGQrqV1L9D8HFj6J8YedSVnTD9d',

Open the config.php as www-data and amend it:

sudo -u www-data cp /var/www/nextcloud/config/config.php /var/www/nextcloud/config/config.php.bak
sudo -u www-data vi /var/www/nextcloud/config/config.php

and adjust the config.php to look like the following:

<?php
$CONFIG = array (
 'activity_expire_days' => 14,
 'auth.bruteforce.protection.enabled' => true,
 'blacklisted_files' => 
 array (
 0 => '.htaccess',
 1 => 'Thumbs.db',
 2 => 'thumbs.db',
 ),
 'cron_log' => true,
 'datadirectory' => '/var/nc_data',
 'dbtype' => 'mysql',
 'dbname' => 'nextcloud',
 'dbhost' => 'localhost',
 'dbport' => '',
 'dbtableprefix' => 'oc_',
 'dbuser' => 'nextcloud',
 'dbpassword' => 'nextcloud',
 'enable_previews' => true,
 'enabledPreviewProviders' => 
 array (
 0 => 'OC\\Preview\\PNG',
 1 => 'OC\\Preview\\JPEG',
 2 => 'OC\\Preview\\GIF',
 3 => 'OC\\Preview\\BMP',
 4 => 'OC\\Preview\\XBitmap',
 5 => 'OC\\Preview\\Movie',
 6 => 'OC\\Preview\\PDF',
 7 => 'OC\\Preview\\MP3',
 8 => 'OC\\Preview\\TXT',
 9 => 'OC\\Preview\\MarkDown',
 ),
 'filesystem_check_changes' => 0,
 'filelocking.enabled' => 'true',
 'htaccess.RewriteBase' => '/',
 'installed' => true,
 'instanceid' => '*KeepYourSettings: ofg69hjknlor0*',
 'integrity.check.disabled' => false,
 'knowledgebaseenabled' => false,
 'logfile' => '/var/nc_data/nextcloud.log',
 'loglevel' => 2,
 'logtimezone' => 'Europe/Berlin',
 'log_rotate_size' => 104857600,
 'maintenance' => false,
 'memcache.local' => '\\OC\\Memcache\\APCu',
 'memcache.locking' => '\\OC\\Memcache\\Redis',
 'mysql.utf8mb4' => true,
 'overwriteprotocol' => 'https',
 'overwrite.cli.url' => 'https://your.dedyn.io',
 'passwordsalt' => '*KeepYourSettings: RrRjXeEeEdddBmJbRnqlnVK7e6R5T3hRX*',
 'preview_max_x' => 1024,
 'preview_max_y' => 768,
 'preview_max_scale_factor' => 1,
 'redis' => 
 array (
 # use only ONE of the following 'host' lines:
 # Ubuntu 16.04.4 LTS:
 #'host' => '/var/run/redis/redis.sock',
 # Ubuntu 18.04 LTS:
 'host' => '/var/run/redis/redis-server.sock',
 # next row only, if set in redis.conf before!
 #'password' => 'd98c51c882960945f49fe8127cb0eb97dbf435b3532bd58c846bd85c2282c4af',
 'port' => 0,
 'timeout' => 0.0,
 ),
 'quota_include_external_storage' => false,
 'secret' => '*KeepYourSettings:HjKlIz9i8J7G6F5DuGQrqV1L9D8HFj6J8YedSVnTD9d*',
 'share_folder' => '/Shares',
 'skeletondirectory' => '',
 'theme' => '',
 'trashbin_retention_obligation' => 'auto, 7',
 'trusted_domains' => 
 array (
 0 => 'your.dedyn.io',
 ),
 'updater.release.channel' => 'stable',
 'version' => '13.0.5.2',
);

We have to edit the file .user.ini as well:

sudo -u www-data sed -i "s/upload_max_filesize=.*/upload_max_filesize=10240M/" /var/www/nextcloud/.user.ini
sudo -u www-data sed -i "s/post_max_size=.*/post_max_size=10240M/" /var/www/nextcloud/.user.ini
sudo -u www-data sed -i "s/output_buffering=.*/output_buffering='Off'/" /var/www/nextcloud/.user.ini

Verify your previously made changes:

egrep "upload_max_filesize=.*|post_max_size=.*|output_buffering=.*" /var/www/nextcloud/.user.ini

If the integrity check within Nextcloud will fail, try to change the config.php

sudo -u www-data vi /var/www/nextcloud/config/config.php

and set :

'integrity.check.disabled' => true,

Then restart all services:

service php7.2-fpm restart && service redis-server restart && service nginx restart

Re-run the integrity check and set the value back to ‘false’:

sudo -u www-data vi /var/www/nextcloud/config/config.php
'integrity.check.disabled' => false,

Restart all services again.

service php7.2-fpm restart && service redis-server restart && service nginx restart

and the message should disappear!


Configure and enable two Nextcloud cron jobs running as the web user (www-data):

crontab -u www-data -e

Paste the following two lines into the crontab:

*/15 * * * * php -f /var/www/nextcloud/cron.php > /dev/null 2>&1
5 1 * * * php -f /var/www/nextcloud/occ files:scan-app-data > /dev/null 2>&1

From now on, one Nextcloud cron job will run every 15 minutes and a second job at 1:05 AM, both as the web user (www-data).

Change the cron-settings from AJAX to Cron by issuing the following command:

sudo -u www-data php /var/www/nextcloud/occ background:cron
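
You can verify at any time that occ works and that your instance reports a healthy state:

sudo -u www-data php /var/www/nextcloud/occ status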

We recommend installing ufw (Uncomplicated Firewall) to secure your data and your server:

apt install ufw -y
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 22/tcp
ufw logging medium
ufw default deny incoming
ufw enable

If you have changed the default SSH port 22, please adjust the port for ufw as well. To verify the ufw settings and state, run:

ufw status verbose

Now we are finished and your Nextcloud is ready to use!

Finally verify your server security level

(1)

https://www.ssllabs.com/ssltest/analyze.html?d=your.dedyn.io

(2)

https://scan.nextcloud.com

(3)

https://observatory.mozilla.org/analyze/your.dedyn.io


Enjoy your Nextcloud!

The following chapters are optional.


09. Mount additional storage to Nextcloud

You may enhance your Nextcloud with data from your NAS share or an external HDD.

09.1. Mount your NAS data to a Nextcloud-user

This chapter is optional, but it is very simple to mount a NAS share into your Nextcloud using CIFS. First install cifs-utils:

apt install cifs-utils -y

Then store your credentials to a special file (e.g. /home/next/.smbcredentials)

vi ~/.smbcredentials

Write down your username and password:

username=NASuser
password=NASPassword

Save and quit (:wq!) the file and change the permissions to 0600:

chmod 0600 ~/.smbcredentials

Detect the ID of the webuser (www-data) using the id-command:

id www-data

and keep the id in mind to reuse it in /etc/fstab:

cp /etc/fstab /etc/fstab.bak
vi /etc/fstab

Paste the following to the end of fstab

//<NAS>/<share> /var/nc_data/next/files cifs user,uid=33,rw,iocharset=utf8,suid,credentials=/home/next/.smbcredentials,file_mode=0770,dir_mode=0770 0 0

Please substitute “//<NAS>/<share>“, “next” and, if necessary, the uid “33”, and then try to mount your NAS manually first:

mount //<NAS>/<share>/

or

mount -a
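
You can optionally confirm that the share is really mounted before continuing:

mount | grep cifs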

To unmount your NAS manually run

umount //<NAS>/<share>/

or

umount -a

It is necessary to rescan your data once before first use. Change to your Nextcloud directory and execute Nextcloud’s files:scan for the relevant Nextcloud user (e.g. next) or for all users (--all):

service nginx stop
cd /var/www/nextcloud
#Ubuntu 16.04.4 LTS
redis-cli -s /var/run/redis/redis.sock
#Ubuntu 18.04 LTS
redis-cli -s /var/run/redis/redis-server.sock
FLUSHALL
quit
sudo -u www-data php occ files:scan --all -v
sudo -u www-data php occ files:scan-app-data -v
service nginx start

After Nextcloud’s files:scan, all of your NAS data will appear in the Nextcloud files app.
The permissions script <permissions.sh> should be enhanced to unmount and remount the NAS share:

vi ~/permissions.sh

Add the red lines to the existing script:

#!/bin/bash
find /var/www/ -type f -print0 | xargs -0 chmod 0640
find /var/www/ -type d -print0 | xargs -0 chmod 0750
chown -R www-data:www-data /var/www/
chown -R www-data:www-data /upload_tmp/
umount //<NAS>/<share>
chown -R www-data:www-data /var/nc_data/
mount //<NAS>/<share>
chmod 0644 /var/www/nextcloud/.htaccess
chmod 0644 /var/www/nextcloud/.user.ini
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/fullchain.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/privkey.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/chain.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/cert.pem
chmod 600 /etc/ssl/certs/dhparam.pem

Please substitute the red ones accordingly to your environment, then save and quit (:wq!) the file. From now, your NAS will always be available in Nextcloud.
If you are interested in mounting an external HDD to your Nextcloud, continue with the following section:

09.2 Mount an external hdd to your Nextcloud

We will prepare the new drive ‘/dev/sda’ for use with Nextcloud: format it with an ‘ext4’ file system and mount it permanently via an entry in /etc/fstab.

Stop your server (NGINX, PHP, MariaDB, Redis) services and check the availability of the new drive:

sudo -s
service nginx stop && service php7.2-fpm stop && service redis-server stop && service mysql stop
fdisk -l /dev/sda

If available, make a new partition with the fdisk command.

fdisk /dev/sda
  1. Type ‘o’ to create a new partition table.
  2. Type ‘n’ to create a new partition.
  3. Choose the primary partition type, input ‘p’.
  4. Partition Number – we just need 1.
  5. Leave all default on the First sector and Last sector – Press Enter.
  6. Type ‘w’ and press enter to write the partition.

The ‘/dev/sda1’ partition has been created, now we have to format it to ‘ext4’ with the mkfs tool. Then check the volume size.

mkfs.ext4 /dev/sda1
fdisk -s /dev/sda1

Next, create a new local ‘nc_data’ directory and mount ‘/dev/sda1’ to that directory.

sudo mkdir -p /nc_data

To mount the new disk permanently, we add the new mount configuration to the fstab file. Open fstab with vi:

vi /etc/fstab

Paste the configuration below at the end of the file.

/dev/sda1     /nc_data     ext4     defaults     0     1

Save fstab and exit:

Now mount the disk and make sure that there is no error.

mount -a
df -h
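
Optionally, instead of /dev/sda1 you can reference the partition by its UUID in /etc/fstab, which is more robust if device names ever change. Determine the UUID with blkid and adjust the fstab entry accordingly (the UUID below is just a placeholder):

blkid /dev/sda1
# example fstab entry:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /nc_data  ext4  defaults  0  1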

Finally you have to move your current Nextcloud data directory to the newly mounted directory

chown -R www-data:www-data /nc_data
rsync -av /var/nc_data/ /nc_data

and point to it in Nextcloud’s config.php.

sudo -u www-data vi /var/www/nextcloud/config/config.php

Change the data-directory

...
'datadirectory' => '/nc_data',
...

Finally restart your server services and perform a new filescan:

service nginx stop && service php7.2-fpm restart && service redis-server restart && service mysql restart
cd /var/www/nextcloud
# Ubuntu 16.04.4 LTS:
redis-cli -s /var/run/redis/redis.sock
# Ubuntu 18.04 LTS:
redis-cli -s /var/run/redis/redis-server.sock
FLUSHALL
quit
sudo -u www-data php occ files:scan --all -v
sudo -u www-data php occ files:scan-app-data -v
service nginx start

From now, your Nextcloud data will be stored on your external HDD.


10. Recommended tweaks and hardening

10.1 Make use of ramdisk
10.2 Prevent ctrl+alt+del
10.3 Install and enjoy fail2ban


10.1 Make further use of ramdisk

Your tmpfs (ramdisk) is already in use by your server, even after a reboot. To also move user cache directories to the ramdisk, create the file “/etc/profile.d/xdg_cache_home.sh”:

vi /etc/profile.d/xdg_cache_home.sh

and paste the following two lines:

#!/bin/bash
export XDG_CACHE_HOME="/dev/shm/.cache"

Save and quit (:wq!) the file and make the script executable:

chmod +x /etc/profile.d/xdg_cache_home.sh

10.2 Prevent ctrl+alt+del

To prevent “ctrl+alt+del” from rebooting your server, run the following commands:

systemctl mask ctrl-alt-del.target
systemctl daemon-reload
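
You can optionally confirm that the target is really masked:

systemctl status ctrl-alt-del.target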

10.3 Install and enjoy fail2ban

Now we will install fail2ban to protect your server against brute-force and DoS attacks.

apt update && apt install fail2ban -y

Create the Nextcloud-filter for fail2ban

vi /etc/fail2ban/filter.d/nextcloud.conf

and paste the following lines

[Definition]
failregex=^{"reqId":".*","remoteAddr":".*","app":"core","message":"Login failed: '.*' \(Remote IP: '<HOST>'\)","level":2,"time":".*"}$
^{"reqId":".*","level":2,"time":".*","remoteAddr":".*","app":"core".*","message":"Login failed: '.*' \(Remote IP: '<HOST>'\)".*}$
^.*\"remoteAddr\":\"<HOST>\".*Trusted domain error.*$

and add the following code to the new file /etc/fail2ban/jail.d/nextcloud.local:

vi /etc/fail2ban/jail.d/nextcloud.local

Paste the following lines:

[nextcloud]
backend = auto
enabled = true
port = 80,443
protocol = tcp
filter = nextcloud
maxretry = 3
bantime = 36000
findtime = 36000
logpath = /var/nc_data/nextcloud.log

[nginx-http-auth]
enabled = true

Save and quit the file (:wq!) and restart the fail2ban service. Then have a look at the fail2ban status:

service fail2ban restart
fail2ban-client status nextcloud
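
You can also test the filter definition directly against your Nextcloud log with fail2ban-regex; it prints how many log lines match the failregex patterns:

fail2ban-regex /var/nc_data/nextcloud.log /etc/fail2ban/filter.d/nextcloud.conf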

Log in to your Nextcloud with wrong credentials at least three times. The browser will display an error message and the fail2ban status will change accordingly.

You can remove the locked IP (banip) using this command:

fail2ban-client set nextcloud unbanip <Banned IP>

Your fail2ban is now working properly.


11. SSL certificate renewal

To renew your certificates automatically, create a renewal script that will be executed weekly or monthly by cron:

sudo -s
sed -i "s/SHELL=*/# &/" /etc/cron.d/certbot
sed -i "s/PATH=*/# &/" /etc/cron.d/certbot
sed -i "s/0 =*/# &/" /etc/cron.d/certbot
cd /root && vi renewal.sh

Paste the following lines:

#!/bin/bash
cd /etc/letsencrypt
letsencrypt renew
result=$(find /etc/letsencrypt/live/ -type l -mtime -1 )
if [ -n "$result" ]; then
/usr/sbin/service nginx stop
/usr/sbin/service mysql restart
/usr/sbin/service redis-server restart
/usr/sbin/service php7.2-fpm restart
/usr/sbin/service postfix restart
/usr/sbin/service nginx restart
fi
exit 0
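
Before wiring the script into cron you may want to simulate a renewal first; recent certbot/letsencrypt versions offer a dry run against the staging environment (assuming your client version supports the --dry-run flag):

letsencrypt renew --dry-run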

Make the script executable and create a new cronjob.

chmod +x renewal.sh
crontab -e

Paste the following line into the crontab

@monthly /root/renewal.sh > /home/<your-ubuntuuser-name>/renewal.txt 2>&1

Please decide whether to run the script weekly (@weekly) or monthly (@monthly). Save and quit (:wq!) the crontab. Enjoy Nextcloud with your new automatic SSL renewal.


12. Backup your Nextcloud and Odroid C2 eMMC

Create a shell script and let cron handle your backups automatically. The script uses two directories:

BACKUP_FOLDER = your preferred working directory
ARCHIVE_STORE = your preferred directory where your backups will be stored

Create the backup.sh file:

sudo -s
mkdir -p /backup_work && mkdir -p /backup_store
cd /root
vi backup.sh

and paste the following lines

#!/bin/bash
BACKUP_FOLDER=/backup_work 
ARCHIVE_STORE=/backup_store
CURRENT_TIME_FORMAT="%w"
echo "-------------------------------------"
echo "START: $(date)"
echo "-------------------------------------"
FOLDERS_TO_BACKUP=(
 "/root/"
 "/etc/apticron/"
 "/etc/fail2ban/"
 "/etc/letsencrypt/"
 "/etc/mysql/"
 "/etc/nginx/"
 "/etc/php/"
 "/etc/postfix/"
 "/etc/ssh/"
 "/etc/ssl/"
 "/var/nc_data/rainloop-storage/"
 "/var/www/"
 )
ARCHIVE_FILE="$ARCHIVE_STORE/nextcloud_backup_$(date +$CURRENT_TIME_FORMAT).tar.gz"
cd $BACKUP_FOLDER
for FOLDER in ${FOLDERS_TO_BACKUP[@]}
do
if [ -d "$FOLDER" ];
then
echo "Copying $FOLDER..."
rsync -AaRx --delete $FOLDER $BACKUP_FOLDER
else
echo "Skipping $FOLDER (since it does not exist)"
fi
done
echo "Copying /etc/fstab..."
cp /etc/fstab $BACKUP_FOLDER/etc/
echo "-------------------------------------"
echo "SQL Dump..."
mysqldump --single-transaction -h localhost -u nextcloud -pnextcloud nextcloud > $BACKUP_FOLDER/nextcloud_`date +"%w"`.sql
mysql -e "SELECT table_schema 'DB',round(sum(data_length+index_length)/1024/1024,4) 'Size (MB)' from information_schema.tables group by table_schema;"
echo "-------------------------------------"
echo "Creating..."
echo $ARCHIVE_FILE
mkdir -p $(dirname $ARCHIVE_FILE)
tar -czf $ARCHIVE_FILE .
echo "Backup size: $(stat --printf='%s' $ARCHIVE_FILE | numfmt --to=iec)"
echo "-------------------------------------"
echo "Purge..."
rm $BACKUP_FOLDER/*.sql
rm $BACKUP_FOLDER/etc/fstab
echo "-------------------------------------"
echo "END: $(date)"
echo "-------------------------------------"
mail -s "Backup - $(date +$CURRENT_TIME_FORMAT)" -a "From: Your Name <your@dedyn.io>" your@dedyn.io < /home/<your-ubuntuuser-name>/backup.txt
exit 0

Then create a regular cron-job

sudo crontab -u root -e

Paste the following line:

55 23 * * * /root/backup.sh > /home/<your-ubuntuuser-name>/backup.txt 2>&1

Save and quit, then make the script executable:

chmod +x backup.sh

From now on, cron will back up your Nextcloud every day, and the archives will rotate daily from “nextcloud_backup_0.tar.gz” (Sunday) to “nextcloud_backup_6.tar.gz” (Saturday). Please also have a look at: Nextcloud backup and restore.
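
For reference, a manual restore from such an archive could look roughly like this (a rough sketch only; stop your services first, adjust the day suffix and paths to your setup, and copy files back selectively instead of blindly overwriting your system):

mkdir -p /tmp/nc_restore && tar -xzf /backup_store/nextcloud_backup_0.tar.gz -C /tmp/nc_restore
# the SQL dump is contained in the archive, e.g. nextcloud_0.sql for Sunday:
mysql -u nextcloud -pnextcloud nextcloud < /tmp/nc_restore/nextcloud_0.sql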

To back up your eMMC module, run

fdisk -l

Disk /dev/mmcblk0: 14,6 GiB, 15634268160 bytes, 30535680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xea4f0000

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1        2048   264191   262144  128M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      264192 30534656 30270465 14,4G 83 Linux

and copy the red value to the parameter “count”:

apt install gzip
dd status=progress if=/dev/mmcblk0 bs=512 count=30534656| gzip > /home/<your-ubuntuuser-name>/server.img.gz

Substitute your <your-ubuntuuser-name> accordingly.

The whole eMMC will now be cloned and compressed. You can run the restore using this server.img.gz file by running:

gunzip -c /home/<your-ubuntuuser-name>/server.img.gz | dd of=/dev/mmcblk0

The whole eMMC will be overwritten by the restore, so please be careful.


13. Server hardening


13.1 Disable IPv6 (if not needed)

You do not have to disable IPv6 – but if you want to, just edit sysctl.conf as follows:

cp /etc/sysctl.conf /etc/sysctl.conf.bak
vi /etc/sysctl.conf
...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
...

and reload the configuration by

sysctl -p

or, more simply, create a new configuration file

vi /etc/sysctl.d/01-disable-ipv6.conf

and paste the following line

net.ipv6.conf.all.disable_ipv6 = 1

Then save and quit the file (:wq!), reboot your server and validate that IPv6 is disabled:

ip a | grep inet6

If no output appears, IPv6 is disabled.

Disable IPv6 in the firewall (ufw)

Edit the ufw-config

apt install ufw -y
ip6tables -P INPUT DROP && ip6tables -P OUTPUT DROP && ip6tables -P FORWARD DROP
cp /etc/default/ufw /etc/default/ufw.bak
vi /etc/default/ufw

and set IPV6 to ‘no’.

IPV6=no

13.2 Enable and configure the ufw

If not already done at the end of chapter 08 please enable the ufw:

ufw enable

Specifically, we will allow only the three required services: http, https and ssh:

ufw allow 80/tcp
ufw allow 443/tcp
ufw allow sshport/tcp
ufw logging medium

Please substitute the sshport according to your sshd_config (e.g. 1234 or 22).

In addition we will set a deny rule for all the other incoming requests

ufw default deny incoming

The status

ufw status verbose

should look like

Status: active
Logging: on (medium)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
1234/tcp                   ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere

13.3 Prevent IP Spoofing

Ubuntu 16.04 only – work in progress for Ubuntu 18.04

Switch back to your terminal and type the following

cp /etc/host.conf /etc/host.conf.bak
vi /etc/host.conf

Add/edit the following lines

# order hosts,bind
# multi on
order bind,hosts
nospoof on

Reboot your server to ensure all changes are in place.


13.4 Check your environment using nmap

Install nmap and check your system.

apt install nmap -y

Run both of the following scans. First:

nmap -v -sT localhost

Your output should look similar to mine

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Completed Connect Scan at 21:30, 0.02s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00013s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
1234/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Additionally run this command:

nmap -v -sS localhost

Your output should look similar to mine once more.

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Completed SYN Stealth Scan at 21:35, 1.60s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000048s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
1234/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 1.68 seconds
           Raw packets sent: 1060 (46.640KB) | Rcvd: 2124 (89.216KB)

13.5 Install POSTFIX to send server mails

Verify that your hostname is set properly. To change the hostname on Ubuntu 18.04, follow this procedure:

sudo -s
vi /etc/cloud/cloud.cfg

Change the value from false to true

preserve_hostname: true

Modify the hostname in two places:

  1. issuing “hostnamectl set-hostname yourhostname”
    hostnamectl set-hostname yourhostname
  2. by editing the host file
    vi /etc/hosts

    paste

    127.0.1.1  yourhostname

On Ubuntu 16.04 it is sufficient to edit the hosts file. Restart your server to apply all settings. Then install the packages postfix, libsasl2-modules and mailutils

sudo -s
apt install postfix libsasl2-modules mailutils -y

and start configuring your mail server.

When the Postfix installation screen appears, select <Satellite system>.

Postfix will ask you for the system mail name; you can confirm the shown entry, e.g. yourcloud. Then you will be asked for the SMTP relay server name, e.g. w12345.kasserver.com. Please fill in your mail server name accordingly.

Finish the installation with <OK>. Now edit the Postfix configuration

cp /etc/postfix/main.cf /etc/postfix/main.cf.bak
vi /etc/postfix/main.cf

and add the following lines

...
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_password

Save and quit (:wq!) this file.

Our complete, exemplary main.cf:

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = yourhostname.dedyn.io
mydomain = yourhostname.dedyn.io
myorigin = $mydomain
smtp_tls_CApath = /etc/ssl/certs
smtpd_tls_CApath = /etc/ssl/certs
smtpd_tls_received_header = yes
smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1
smtpd_use_tls=yes
smtp_use_tls=yes
smtpd_tls_protocols = TLSv1.2, !TLSv1.1, !SSLv2, !SSLv3
smtp_tls_protocols = TLSv1.2, !TLSv1.1, !SSLv2, !SSLv3
smtpd_tls_ciphers = high
smtp_tls_ciphers = high
smtpd_tls_cert_file = /etc/letsencrypt/live/yourhostname.dedyn.io/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/yourhostname.dedyn.io/privkey.pem
smtp_tls_cert_file = /etc/letsencrypt/live/yourhostname.dedyn.io/fullchain.pem
smtp_tls_key_file = /etc/letsencrypt/live/yourhostname.dedyn.io/privkey.pem
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, yourhostname.dedyn.io, localhost.localdomain, localhost
relayhost = your.smtpserver.com:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
sender_canonical_maps = hash:/etc/postfix/sender_canonical
mynetworks = 127.0.0.0/8
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
inet_protocols = all
compatibility_level=2

Create a new file containing your credentials to connect to your mailserver.

vi /etc/postfix/sasl_password

Enter your credentials like exemplarily shown

your.smtpserver.com a987654:PassWorD

and change the access level of this file to 0600.

chmod 600 /etc/postfix/sasl_password

Finally we hand this information over to Postfix.

postmap hash:/etc/postfix/sasl_password

By default, mails would be sent as user@hostname (e.g. root@localhost), but many mail servers would reject such mails. That’s why we add a new row to the Postfix configuration file:

vi /etc/postfix/main.cf

If it does not already exist, add the following line to the config file

...
sender_canonical_maps = hash:/etc/postfix/sender_canonical

Save and quit (:wq!) the configuration and create the referred new file

vi /etc/postfix/sender_canonical

Add the following lines and adjust the parameters according to your environment

root youremail@dedyn.io
www-data youremail@dedyn.io
<your-ubuntuuser-name> youremail@dedyn.io

This will assign your email address to the root user, the www-data user and your Ubuntu user. We have to hand this information over to Postfix again

postmap /etc/postfix/sender_canonical

Finally we add Postfix to the autostart and restart the service

update-rc.d postfix defaults
service postfix restart

From now on you are able to send system mails. Please verify the functionality

vi testmail.txt

Add any kind of text to your demofile, e.g.

My first system mail

Save and quit the testfile (:wq!) and send your first manual system mail

mail -s "Postfix-Testmail" yourmail@dedyn.io < testmail.txt

Check the logfile

cat /var/log/mail.log

and also check your mail client to see whether you have received that mail.
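A quick way to check whether Postfix actually handed the mail over to your relay is to look for the delivery status in the log (a simple check, assuming the default log location):

grep "status=" /var/log/mail.log | tail -n 5

A line containing "status=sent" indicates that the relay accepted the mail.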

Postfix administration tasks:

[a] have a look in your actual mailqueue: mailq

[b] flush / re-send your mail(s)-queue: postfix flush

[c] delete all mails in your mailqueue: postsuper -d ALL

FAIL2BAN – system mails

We substitute the root user in the fail2ban config to receive fail2ban status mails in the future. Those mails will contain both the fail2ban status (stopped/started) and, in case of failed logins, the banned IP(s). Edit the fail2ban configuration file

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.conf.bak
vi /etc/fail2ban/jail.conf

and substitute at least the red marked parameters according to your system:

...
destemail = you@dedyn.io
...
sender = you@dedyn.io
...
mta = mail
...
# action = %(action_)s
action = %(action_mwl)s
...

Save and quit (:wq!) the fail2ban configuration. To avoid (many) mails on every fail2ban-restart just create a new file and copy it as shown below:

vi /etc/fail2ban/action.d/mail-buffered.local

Paste the following rows

[Definition]
actionstart =
actionstop =

Copy the file

cp /etc/fail2ban/action.d/mail-buffered.local /etc/fail2ban/action.d/mail.local
cp /etc/fail2ban/action.d/mail-buffered.local /etc/fail2ban/action.d/mail-whois-lines.local
cp /etc/fail2ban/action.d/mail-buffered.local /etc/fail2ban/action.d/mail-whois.local
cp /etc/fail2ban/action.d/mail-buffered.local /etc/fail2ban/action.d/sendmail-buffered.local
cp /etc/fail2ban/action.d/mail-buffered.local /etc/fail2ban/action.d/sendmail-common.local

Restart the fail2ban service and from now on you will (only) be informed automatically when fail2ban blocks new IPs:

service fail2ban restart
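To confirm that fail2ban is running and to see which jails are active, you can query its client (fail2ban-client ships with the fail2ban package; the jail names depend on your configuration):

fail2ban-client status
fail2ban-client status sshd

The second command shows the details and currently banned IPs of a single jail; replace sshd with the jail you are interested in.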


13.6 Apticron

If you use APTICRON, your system will also send emails whenever system updates are available.

apt install apticron -y

After having installed APTICRON you should edit the config and substitute at least EMAIL, SYSTEM, NOTIFY_NO_UPDATES and CUSTOM_FROM.

cp /etc/apticron/apticron.conf /etc/apticron/apticron.conf.bak
vi /etc/apticron/apticron.conf
...
EMAIL="you@dedyn.io"
...
SYSTEM="you@dedyn.io"
...
NOTIFY_HOLDS="1"
...
NOTIFY_NO_UPDATES="1"
...
CUSTOM_SUBJECT='$SYSTEM: $NUM_PACKAGES package update(s)'
...
CUSTOM_NO_UPDATES_SUBJECT='$SYSTEM: no updates available'
...
CUSTOM_FROM="you@dedyn.io"
...

To run and check APTICRON just call

apticron

and you will receive an email sent by APTICRON. Now you are a little bit more secure.

cp /etc/cron.d/apticron /etc/cron.d/apticron.bak
vi /etc/cron.d/apticron
30 8 * * * root if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi

Apticron will now be executed by cron.d; the entry above runs it daily at 8:30 AM. Adjust the start time to your needs.
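If you prefer a different schedule, simply adjust the cron entry. As an example (a sketch using the same syntax as above), a weekly run on Mondays at 7:00 AM would look like this:

0 7 * * 1 root if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi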


13.7 Two (2)-Factor-Authentication (2FA) for SSH

The following steps are system-relevant (critical) and only recommended for advanced Linux users. If the ssh configuration fails, you won’t be able to log in to your system via ssh anymore. The mandatory prerequisite is an ssh server that you can only log on to using a private/public key!

Install the software for 2FA (two-factor authentication), which works together with your preferred OTP app

apt install libpam-google-authenticator -y

Leave the root-Shell and run the following command as your <your-ubuntu-user-name> and NOT as root:

exit
google-authenticator

You will be asked for:

Do you want authentication tokens to be time-based (y/n) y
Do you want me to update your "~/.google_authenticator" file (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n
If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y
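The secret and the emergency scratch codes are stored in your user’s home directory. It is a good idea to note the scratch codes down before you continue, since they let you log in if you lose your phone (a quick look, still as <your-ubuntu-user-name>):

cat ~/.google_authenticator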

Change back to the root-Shell

sudo -s

Backup the current configuration and configure your ssh server

cp /etc/pam.d/sshd /etc/pam.d/sshd.bak
vi /etc/pam.d/sshd

Change the file to mine:

@include common-auth
@include common-password
auth required pam_google_authenticator.so
account required pam_nologin.so
@include common-account
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
session required pam_loginuid.so
session optional pam_keyinit.so force revoke
@include common-session
session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate
session optional pam_mail.so standard noenv # [1]
session required pam_limits.so
session required pam_env.so # [1]
session required pam_env.so user_readenv=1 envfile=/etc/default/locale
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open

Save and quit (:wq!) the file.

If not already created, please create your 4096-bit RSA key (SSH) first:

cd ~
ssh-keygen -q -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa

If you are asked to overwrite the existing key, confirm with ‘Y’. Then back up, edit and change your SSH config to match the following example

mv /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
vi /etc/ssh/sshd_config
# Port 22
# your decision, but keep UFW in mind!
Port 1234
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
UsePrivilegeSeparation yes
KeyRegenerationInterval 3600
ServerKeyBits 4096
SyslogFacility AUTH
LogLevel INFO
LoginGraceTime 30
PermitRootLogin no
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
IgnoreRhosts yes
RhostsRSAAuthentication no
HostbasedAuthentication no
IgnoreUserKnownHosts yes
PermitEmptyPasswords no
ChallengeResponseAuthentication yes
PasswordAuthentication no
X11Forwarding no
X11DisplayOffset 10
PrintMotd no
PrintLastLog no
TCPKeepAlive yes
Banner /etc/issue
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
# <your-ubuntu-user-name> for e.g. putty or ssh native
AllowUsers ubuntuuser
AuthenticationMethods publickey,password publickey,keyboard-interactive

If you changed the ssh port to e.g. 1234, please ensure you have changed your ufw configuration as well (see the sketch below) and adjust the username in ‘AllowUsers ubuntuuser’.
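A minimal sketch of the corresponding ufw adjustment (assuming ufw was enabled in chapter 13.2 and 1234 is the new ssh port; only remove the old rule after you have verified the login on the new port, and adjust the delete command to whatever rule you originally created):

ufw allow 1234/tcp
ufw status numbered
ufw delete allow 22/tcp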

Paste your public key to the <ubuntuuser>’s keystore (ubuntu’s how-to):

vi ~/.ssh/authorized_keys

and set proper permissions:

sudo chown -R ubuntuuser:ubuntuuser ~/.ssh
sudo chmod 700 ~/.ssh
sudo chmod 600 ~/.ssh/authorized_keys
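Before restarting the ssh daemon it is worth validating the new configuration first; sshd -t only checks the syntax and prints nothing if everything is fine:

sshd -t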

Then restart your ssh server

service ssh restart

and re-logon to your server using a new session window. Keep your current session open; it is your fallback in case you misconfigured your ssh server 😉

From now on your private key is needed; then you will be prompted for your password and finally for your new second factor:

1. Public key authentication and ssh-user password
2. Verification code (OTP 2FA)
3. Logged on

Start your OTP app (e.g. OTP Auth) and read your second factor to gain access to your server.
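From a Linux or macOS client the login then looks e.g. like this (a sketch; 1234 and ubuntuuser are the placeholder port and user from the sshd_config above, and -i is only needed if your key is not in the default location):

ssh -p 1234 -i ~/.ssh/id_rsa ubuntuuser@YOUR.DEDYN.IO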


 

13.8 logwatch

Install logwatch

sudo -s
apt update && apt install logwatch -y

Copy the default configuration files to the logwatch folder:

cp /usr/share/logwatch/default.conf/logfiles/http.conf /etc/logwatch/conf/logfiles/nginx.conf
cp /usr/share/logwatch/default.conf/services/http.conf /etc/logwatch/conf/services/nginx.conf
cp /usr/share/logwatch/scripts/services/http /usr/share/logwatch/scripts/services/nginx
cp /usr/share/logwatch/default.conf/services/http-error.conf /etc/logwatch/conf/services/nginx-error.conf
cp /usr/share/logwatch/scripts/services/http-error /etc/logwatch/scripts/services/nginx-error
cp /etc/logwatch/conf/logfiles/nginx.conf /etc/logwatch/conf/logfiles/nginx.conf.org.bak

Edit the /etc/logwatch/conf/logfiles/nginx.conf to mine

vi /etc/logwatch/conf/logfiles/nginx.conf

Substitute the whole file to:

########################################################
# Define log file group for NGINX
########################################################

# What actual file? Defaults to LogPath if not absolute path....
#LogFile = httpd/*access_log
#LogFile = apache/*access.log.1
#LogFile = apache/*access.log
#LogFile = apache2/*access.log.1
#LogFile = apache2/*access.log
#LogFile = apache2/*access_log
#LogFile = apache-ssl/*access.log.1
#LogFile = apache-ssl/*access.log
LogFile = nginx/*access.log
LogFile = nginx/*error.log
LogFile = nginx/*access.log.1
LogFile = nginx/*error.log.1

# If the archives are searched, here is one or more line
# (optionally containing wildcards) that tell where they are...
#If you use a "-" in naming add that as well -mgt
#Archive = archiv/httpd/*access_log.*
#Archive = httpd/*access_log.*
#Archive = apache/*access.log.*.gz
#Archive = apache2/*access.log.*.gz
#Archive = apache2/*access_log.*.gz
#Archive = apache-ssl/*access.log.*.gz
#Archive = archiv/httpd/*access_log-*
#Archive = httpd/*access_log-*
#Archive = apache/*access.log-*.gz
#Archive = apache2/*access.log-*.gz
#Archive = apache2/*access_log-*.gz
#Archive = apache-ssl/*access.log-*.gz
Archive = nginx/*access.log.*.gz
Archive = nginx/*error.log.*.gz

# Expand the repeats (actually just removes them now)
*ExpandRepeats

# Keep only the lines in the proper date range...
*ApplyhttpDate

# vi: shiftwidth=3 tabstop=3 et

Save and quit (:wq!) this file and edit /etc/logwatch/conf/services/nginx.conf:

cp /etc/logwatch/conf/services/nginx.conf /etc/logwatch/conf/services/nginx.conf.org.bak
vi /etc/logwatch/conf/services/nginx.conf

Change the name from http to NGINX or substitute the whole file to mine:

###########################################################################
# Configuration file for NGINX filter
###########################################################################

Title = "NGINX"

# Which logfile group...
LogFile = NGINX

# Define the log file format
#
# This is now the same as the LogFormat parameter in the configuration file
# for httpd. Multiple instances of declared LogFormats in the httpd
# configuration file can be declared here by concatenating them with the
# '|' character. The default, shown below, includes the Combined Log Format,
# the Common Log Format, and the default SSL log format.
#$LogFormat = "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"|%h %l %u %t \"%r\" %>s %b|%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

# The following is supported for backwards compatibility, but deprecated:
# Define the log file format
#
# the only currently supported fields are:
# client_ip
# request
# http_rc
# bytes_transfered
# agent
#
#$HTTP_FIELDS = "client_ip ident userid timestamp request http_rc bytes_transfered referrer agent"
#$HTTP_FORMAT = "space space space brace quote space space quote quote"
# Define the field formats
#
# the only currently supported formats are:
# space = space delimited field
# quote = quoted ("..") space delimited field
# brace = braced ([..]) space delimited field

# Flag to ignore 4xx and 5xx error messages as possible hack attempts
#
# Set flag to 1 to enable ignore
# or set to 0 to disable
$HTTP_IGNORE_ERROR_HACKS = 0

# Ignore requests
# Note - will not do ANY processing, counts, etc... just skip it and go to
# the next entry in the log file.
# Note - The match will be case insensitive; e.g. /model/ == /MoDel/
# Examples:
# 1. Ignore all URLs starting with /model/ and ending with 1 to 10 digits
# $HTTP_IGNORE_URLS = ^/model/\d{1,10}$
#
# 2. Ignore all URLs starting with /model/ and ending with 1 to 10 digits and
# all URLS starting with /photographer and ending with 1 to 10 digits
# $HTTP_IGNORE_URLS = ^/model/\d{1,10}$|^/photographer/\d{1,10}$
# or simply:
# $HTTP_IGNORE_URLS = ^/(model|photographer)/\d{1,10}$

# To ignore a range of IP addresses completely from the log analysis,
# set $HTTP_IGNORE_IPS. For example, to ignore all local IP addresses:
#
# $HTTP_IGNORE_IPS = ^10\.|^172\.(1[6-9]|2[0-9]|3[01])\.|^192\.168\.|^127\.
#

# For more sophisticated ignore rules, you can define HTTP_IGNORE_EVAL
# to an arbitrary chunk of code.
# The default is not to filter anything:
$HTTP_IGNORE_EVAL = 0
# Example:
# $HTTP_IGNORE_EVAL = "($field{http_rc} == 401) && ($field{client_ip}=~/^192\.168\./) && ($field{url}=~m%^/protected1/%)"
# See the "scripts/services/http" script for other variables that can be tested.

# The variable $HTTP_USER_DISPLAY defines which user accesses are displayed.
# The default is not to display user accesses:
$HTTP_USER_DISPLAY = 0
# To display access failures:
# $HTTP_USER_DISPLAY = "$field{http_rc} >= 400"
# To display all user accesses except "Unauthorized":
# $HTTP_USER_DISPLAY = "$field{http_rc} != 401"

# To raise the needed level of detail for one or more specific
# error codes to display a summary instead of listing each
# occurrence, set a variable like the following ones:
# Raise 403 codes to detail level High
#$http_rc_detail_rep_403 = 10
# Always show only summary for 404 codes
#$http_rc_detail_rep_404 = 20

# vi: shiftwidth=3 tabstop=3 et

Save and quit the file (:wq!) and disable the default apache-configuration files:

cd /usr/share/logwatch/default.conf/services
mv http-error.conf http-error.conf.bak && mv http.conf http.conf.bak

Finally we create a cronjob to send the logwatch report automatically:

crontab -e

Paste the following row:

@daily /usr/sbin/logwatch --output mail --mailto your@mail.com --format html --detail high --range yesterday > /dev/null 2>&1

Save and quit crontab and check if logwatch is configured properly:

/usr/sbin/logwatch --output mail --mailto your@mail.com --format html --detail high --range yesterday

You should receive an email from logwatch containing the report. From now on you will receive daily mails with your system summary.
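If you prefer to check the report on the console first, without sending a mail, logwatch can also print it to stdout (same options as above, only the output target differs; the nginx service refers to the configuration created earlier):

/usr/sbin/logwatch --output stdout --format text --detail high --range yesterday --service nginx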


14. monitor your entire system using netdata

Start downloading netdata – the directory ‘netdata’ will be created

sudo -s
apt install apache2-utils git gcc make autoconf automake pkg-config uuid-dev zlib1g-dev
cd /usr/local/src
git clone https://github.com/firehol/netdata.git --depth=1
cd netdata

Create a passwordfile to protect netdata:

htpasswd -c /etc/nginx/netdata-access YourName

Then run the script netdata-installer.sh with root privileges to build, install and start netdata

./netdata-installer.sh

Netdata is now installed. We will make some small adjustments to netdata’s configuration:

vi /etc/netdata/netdata.conf

First we change the value for “history” to e.g. 14400 (4 hours of chart data retention, uses about 60 MB of RAM) in the [global] section:

 history = 14400

Then we change the binding in the [web] section to localhost (127.0.0.1) only:

 bind to = 127.0.0.1

Finally we disable all the ipv6 charts in the three sections [system.ipv6], [ipv6.packets], [ipv6.errors] by setting “enabled = no”:

...
[system.ipv6]
 # history = 3996
 enabled = no
...
[ipv6.packets]
 # history = 3996
 enabled = no
...
[ipv6.errors]
 # history = 3996
 enabled = no
...
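After saving netdata.conf you can already restart netdata and verify that it only listens on the loopback interface (a quick check; ss is part of iproute2 on Ubuntu):

service netdata restart
ss -tlnp | grep 19999

The output should only show 127.0.0.1:19999 and no public address.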

Now we enhance the nextcloud.conf and nginx.conf files to include the netdata web server configuration:

vi /etc/nginx/conf.d/nextcloud.conf

Paste the red rows as shown below to the nextcloud.conf:

...
location / {
 rewrite ^ /index.php$uri;
 }
location /netdata {
 return 301 /netdata/;
 }
 location ~ /netdata/(?<ndpath>.*) {
 auth_basic "Restricted Area";
 auth_basic_user_file /etc/nginx/netdata-access;
 proxy_http_version 1.1;
 proxy_pass_request_headers on;
 proxy_set_header Connection "keep-alive";
 proxy_store off;
 proxy_pass http://netdata/$ndpath$is_args$args;
 gzip on;
 gzip_proxied any;
 gzip_types *;
 }
 location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
 deny all;
...

Your nextcloud.conf should look like:

fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
map $request_uri $skip_cache {
default 1;
~*/thumbnail.php 0;
~*/apps/galleryplus/ 0;
~*/apps/gallery/ 0;
}
server {
server_name YOUR.DEDYN.IO;
listen 80 default_server;
location ^~ /.well-known/acme-challenge {
proxy_pass http://127.0.0.1:81;
proxy_set_header Host $host;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
server_name YOUR.DEDYN.IO;
listen 443 ssl http2 default_server;
root /var/www/nextcloud/;
access_log /var/log/nginx/nextcloud.access.log main;
error_log /var/log/nginx/nextcloud.error.log warn;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
client_max_body_size 10240M;
location / {
rewrite ^ /index.php$uri;
}
location /netdata {
return 301 /netdata/;
}
location ~ /netdata/(?<ndpath>.*) {
auth_basic "Restricted Area";
auth_basic_user_file /etc/nginx/netdata-access;
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_set_header Connection "keep-alive";
proxy_store off;
proxy_pass http://netdata/$ndpath$is_args$args;
gzip on;
gzip_proxied any;
gzip_types *;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ \.(?:flv|mp4|mov|m4a)$ {
mp4;
mp4_buffer_size 100m;
mp4_max_buffer_size 1024m;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache NEXTCLOUD;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:css|js|woff|svg|gif|png|html|ttf|ico|jpg|jpeg)$ {
try_files $uri /index.php$uri$is_args$args;
access_log off;
expires 360d;
}
}

If you want your Nextcloud and Netdata running in a subdir like https://your.dedyn.io/nextcloud and https://your.dedyn.io/netdata use this nextcloud.conf instead:

fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
map $request_uri $skip_cache {
default 1;
~*/thumbnail.php 0;
~*/apps/galleryplus/ 0;
~*/apps/gallery/ 0;
}
server {
server_name your.dedyn.io;
listen 80 default_server;
location ^~ /.well-known/acme-challenge {
proxy_pass http://127.0.0.1:81;
proxy_set_header Host $host;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
server_name your.dedyn.io;
listen 443 ssl http2 default_server;
root /var/www/;
access_log /var/log/nginx/nextcloud.access.log main;
error_log /var/log/nginx/nextcloud.error.log warn;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $scheme://$host/nextcloud/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/nextcloud/remote.php/dav;
}
client_max_body_size 10240M;
location ^~ /nextcloud {
location /nextcloud {
rewrite ^ /nextcloud/index.php$uri;
}
location /netdata {
return 301 /netdata/;
}
location ~ /netdata/(?<ndpath>.*) {
auth_basic "Restricted Area";
auth_basic_user_file /etc/nginx/netdata-access;
proxy_http_version 1.1;
proxy_pass_request_headers on;
proxy_set_header Connection "keep-alive";
proxy_store off;
proxy_pass http://netdata/$ndpath$is_args$args;
gzip on;
gzip_proxied any;
gzip_types *;
}
location ~ ^/nextcloud/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/nextcloud/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ \.(?:flv|mp4|mov|m4a)$ {
mp4;
mp4_buffer_size 100m;
mp4_max_buffer_size 1024m;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache NEXTCLOUD;
}
location ~ ^/nextcloud/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
include php_optimization.conf;
fastcgi_pass php-handler;
fastcgi_param HTTPS on;
}
location ~ ^/nextcloud/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|css|js|woff|svg|gif)$ {
try_files $uri /nextcloud/index.php$uri$is_args$args;
access_log off;
}
}
}

Create the new /etc/nginx/conf.d/stub_status.conf:

vi /etc/nginx/conf.d/stub_status.conf

Paste all the following rows:

server {
listen 127.0.0.1:80 default_server;
server_name 127.0.0.1;
location /stub_status {
stub_status on;
allow 127.0.0.1;
deny all;
}
}

Save and quit the file (:wq!) and modify the file /etc/nginx/nginx.conf:

...
http {
 server_names_hash_bucket_size 64;
 upstream php-handler {
 server unix:/run/php/php7.2-fpm.sock;
 }
 upstream netdata {
 server 127.0.0.1:19999;
 keepalive 64;
 }
...

Your nginx.conf should look like:

user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
http {
proxy_headers_hash_bucket_size 64;
server_names_hash_bucket_size 64;
upstream php-handler {
server unix:/run/php/php7.2-fpm.sock;
}
upstream netdata {
server 127.0.0.1:19999;
keepalive 64;
}
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.0/24;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
include /etc/nginx/mime.types;
include /etc/nginx/optimization.conf;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$host" sn="$server_name" '
'rt=$request_time '
'ua="$upstream_addr" us="$upstream_status" '
'ut="$upstream_response_time" ul="$upstream_response_length" '
'cs=$upstream_cache_status' ;
access_log /var/log/nginx/access.log main;
sendfile on;
send_timeout 3600;
tcp_nopush on;
tcp_nodelay on;
open_file_cache max=500 inactive=10m;
open_file_cache_errors on;
keepalive_timeout 65;
reset_timedout_connection on;
server_tokens off;
resolver 192.168.2.1;
resolver_timeout 10s;
include /etc/nginx/conf.d/*.conf;
}

Save and quit the file (:wq!) and check NGINX

nginx -t

If no errors appear just restart netdata and nginx

service netdata restart && service nginx restart
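You can also verify both endpoints locally before opening the browser (assuming the stub_status.conf from above is in place; the second command talks to netdata directly on its local port):

curl http://127.0.0.1/stub_status
curl -sI http://127.0.0.1:19999 | head -n 1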

and call netdata in your browser

https://your.dedyn.io/netdata

or as an external site in your Nextcloud.



Carsten Rieger

83 Responses

  1. compuls1v3 says:

    Hi Carsten, when I do the following, I create a new file called “rules”.

    “Change back to the the debian-directory and edit the compiler information file “rules”:

    cd /usr/local/src/nginx-1.15.2/debian && vi rules”

    What am I missing?

    • You don’t need to create this file. If it doesn’t exist, something went wrong! I guess you are on Debian? The advanced guide is based on Ubuntu and does not work 1:1 for Debian.

  2. compuls1v3 says:

    Hi Carsten, thank you very much for the guide. Will you help me please? I’ve successfully mounted my synology device, but when I try to scan files using “sudo -u www-data php occ files:scan –all -v” I get an error as follows:
    “Your data directory is invalid
    Ensure there is a file called “.ocdata” in the root of the data directory.
    An unhandled exception has been thrown:
    Exception: Environment not properly prepared. in /var/www/nextcloud/lib/private/Console/Application.php:148
    Stack trace:
    #0 /var/www/nextcloud/console.php(89): OC\Console\Application->loadCommands(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
    #1 /var/www/nextcloud/occ(11): require_once(‘/var/www/nextcl…’)”

    I used your script to install Nextcloud and cannot find the file it is looking for. Thank for any help!

  3. Ivo says:

    Hi, Carsten, thanks for the guide! I followed its previous version, for Ubuntu 16.04, and everything went pretty fine. The only problem I see is currently, when I have to renew the certificate – I get 403 Forbidden for the /.well-known/acme-challenge directory. Here is the output from my execution of the renewal.sh script (please, bear in mind my current certificate is with 3 SANs, for three domains, cloud.domain1.com, domain1.com, cloud.domain2.com – I have hidden the real domain names as it is a public space, I would share them with you):

    root@NextCloud:/root# ./renewal.sh
    Saving debug log to /var/log/letsencrypt/letsencrypt.log

    ——————————————————————————-
    Processing /etc/letsencrypt/renewal/cloud.domain1.com.conf
    ——————————————————————————-
    Cert is due for renewal, auto-renewing…
    Plugins selected: Authenticator webroot, Installer None
    Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
    Renewing an existing certificate
    Performing the following challenges:
    http-01 challenge for cloud.domain1.com
    http-01 challenge for cloud.domain2.com
    http-01 challenge for domain1.com
    Waiting for verification…
    Cleaning up challenges
    Attempting to renew cert (cloud.domain1.com) from /etc/letsencrypt/renewal/cloud.domain1.com.conf produced an unexpected error: Failed authorization procedure. cloud.domain2.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://cloud.domain2.com/.well-known/acme-challenge/WXkeNhW8jjW0aThGrnnOoRGNp267W-XxWH7T-Y8T9-I: ”
    <html xmlns="http", domain1.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://domain1.com/.well-known/acme-challenge/6hv726bqLC2YBHC5HsLvCUao-QlAFf84yX9aprkct3g: "
    <html xmlns="http", cloud.domain1.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://cloud.domain1.com/.well-known/acme-challenge/7s3KHJFHmdD86-kP_gMckq9w7K1u70Yi_u96OFqVvfk: "
    <html xmlns="http". Skipping.
    All renewal attempts failed. The following certs could not be renewed:
    /etc/letsencrypt/live/cloud.domain1.com/fullchain.pem (failure)

    ——————————————————————————-

    All renewal attempts failed. The following certs could not be renewed:
    /etc/letsencrypt/live/cloud.domain1.com/fullchain.pem (failure)
    ——————————————————————————-
    1 renew failure(s), 0 parse failure(s)

    IMPORTANT NOTES:
    – The following errors were reported by the server:

    Domain: cloud.domain2.com
    Type: unauthorized
    Detail: Invalid response from
    http://cloud.domain2.com/.well-known/acme-challenge/WXkeNhW8jjW0aThGrnnOoRGNp267W-XxWH7T-Y8T9-I:
    "
    <html xmlns="http"

    Domain: domain1.com
    Type: unauthorized
    Detail: Invalid response from
    http://domain1.com/.well-known/acme-challenge/6hv726bqLC2YBHC5HsLvCUao-QlAFf84yX9aprkct3g:
    "
    <html xmlns="http"

    Domain: cloud.domain1.com
    Type: unauthorized
    Detail: Invalid response from
    http://cloud.domain1.com/.well-known/acme-challenge/7s3KHJFHmdD86-kP_gMckq9w7K1u70Yi_u96OFqVvfk:
    "
    <html xmlns="http"

    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A/AAAA record(s) for that domain
    contain(s) the right IP address.

    The nextcloud error log outputs the following:

    2018/07/22 13:10:21 [error] 1653#1653: *12686 access forbidden by rule, client: LAN_IP, server: cloud.domain1.com, request: "GET /.well-known/acme-challenge/sOd9x-MQtNynxRRn5E7kYOUe9mh4xl6Xy6oRARIqGhA: HTTP/2.0", host: "cloud.domain1.com"

    I double and triple checked that my config is the same as yours and I cannot see any discrepancy. If you need me to, I can upload any config you request… And as the certificates expire in just several days, I am stuck 🙁 I would greatly appreciate your assistance here!

    Thanks in advance!

    • good morning ivo. did you configure the letsencrypt part for each sni (vhost) properly? did you verify or apply the proper directories and permissions? please zip your /etc/nginx/conf.d and provide me your dir-structure: ls -lsa /var/www/
      Please drop it here. CARSTEN

      • Ivo says:

        Hi Carsten, sorry for the delay, I have just uploaded the requested files in a zip (Ivo_Nginx.zip). I hope it will help you with helping me 🙂

        Regarding your other questions, I am not sure how to check those. I followed your guide thoroughly on the first place (just adding three domain names on the places you added just one), the issue of the certs went absolutely fine, NC itself works pretty fine. Only the renew is failing. Honestly, I am specializing in the MS world and this is my first interaction with Linux; in this regards, I am not sure how to check the permissions. I believe the dirs exist but that’s all I can say 🙁

        • When trying to access your webserver, an IIS is responding, not an NGINX?!

          • Ivo says:

            Spot on! It turned out a less-than-optimal configuration of the Kemp reverse proxy in front of the NC box. It was throwing port 80 to a Windows machine rather than to NC.
            I am really sorry for taking up your time for such a dull problem! And thank you again for your effort!
            Best regards!

  4. Alex says:

    Thanks for a great info!

  5. Frank says:

    Hi, just found your site and checked which guides may fit my need. I’m a bit confused regarding the “normal” guide and the “advanced” guide. In the advanced one you mention “ngx_cache_purge” but I don find many informations regarding the usage of this module. In the official nextcloud-docs its only mentioned for version 9 and I dont see the nginx-config that makes use of it ???
    Beside that, the official docs mention redis for larger organizations only, but do also have performance-improvements with a small user-base ?

    • Hi Frank, the module is used in your nextcloud.conf (on top fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
      map $request_uri $skip_cache {
      … and fastcgi_cache NEXTCLOUD;). Regarding the usage of this module and Redis: the answere is as simple as that: it is up to you. I would prefer Redis even on smaller installations but do no longer compile the ngx_cache_purge module on my own environment. It is sufficient to follow the normal guide for ~ 99% of the community. Cheers, Carsten

      • Frank says:

        Thank you very much for your fast reply, this helps me on my journey 🙂 I’m aware that it makes no sense for the 99% and I`m heading for a very small user-base. But as I really hate sluggish sites I’m always in search for good hints 🙂

  6. Linux supporter says:

    Hi Carsten, thanks for the awesome guide! I’m stuck at trying to use my synology for storage. When I go to mount the share, I get “mount error(95): Operation not supported
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)”
    -I looked online and found several articles stating I need to add “vers=3.0” in the fstab file, however that did not work. Do you have any suggestions sir?

    • Probably this may help you?
      //192.168.2.111/synologyfolder/synologyshare /var/nc_data/user/files cifs user,uid=33,gid=33,rw,iocharset=utf8,suid,credentials=/path/to/credit,file_mode=0770,dir_mode=0770 0 0
      the credit file contains of the username and the password of the synology user having sufficient permissions.

    • compuls1v3 says:

      I had to use “vers=2.0” to get my Synology mounted.

  7. mario says:

    Hello,

    I get an error ngninx no ssl-certificate is defined for listen…ssl in /etc/nginx/conf.d/nextcloud:19
    Can someone help me

  8. Matthias says:

    Hello Carsten,

    first of all, many thanks for your guides, which are easy to follow and above all very useful!

    While I’m at it, a small remark regarding the file “ssl.conf”. There (in contrast to the non-advanced guide) not only “ssl_certificate” but also “ssl_trusted_certificate” points to “fullchain.pem”, whereas the non-advanced guide points to “chain.pem” instead. And without wanting to come across as a nitpicker: I believe a small typo has crept into the alternative “ssl_ciphers” for Android users: the first quotation mark there is a curly quote, but it should probably be a straight one. No idea whether that affects the functionality 😉

  9. Frank says:

    Hello Carsten,

    I just wanted to install my second Odroid HC1 with the help of your updated guide. Unfortunately it fails right at the start. When I work through the steps for compiling NGINX 1.15.0, there is no file “rules” in the directory “/usr/local/src/nginx-1.15.0/debian/”.

    After changing back into “/usr/local/src/” I noticed that there are no files for version __1.15.0__ but only for version __1.14.0__, which also explains the missing “rules” file in “/usr/local/src/nginx-1.15.0/debian/”.

    What I cannot explain is why I ended up with the files and directories for version 1.14.

    Do you have an idea?

  10. ecbo says:

    Hi Carsten,

    at the end of chapter 13.8 logwatch I saw a screenshot of Nextcloud with the Roundcube app on your site. With which Nextcloud version do you have the Roundcube plugin running there? From version 11 on I could no longer use the Roundcube app, since the Nextcloud developers presumably changed too much in the core/API and the Roundcube app is unfortunately no longer being developed for newer Nextcloud versions. Or do you have a solution for the Roundcube app with Nextcloud 13?

    Regards, ecbo

    • That is Nextcloud’s own “external sites” app, which I use to point to Roundcube (subfolder). A workaround that works great, just unfortunately still without SSO.

      • ecbo says:

        Ah ok. In the past I always had some problems with the external sites app, since the pages should ideally be https and the thing tries to embed websites and content via iframes or outdated/insecure web techniques. Anyway, I have Roundcube running on the same server at domain/roundcube (no subdomain, no extra vhost?). I didn’t manage to embed websites/portals from the same server right away; maybe a clever apache config is needed there. So you have Roundcube running on the same server as Nextcloud? (you do write subfolder) And the trick with the old Roundcube app was that after the Nextcloud login you didn’t have to log in to Roundcube again, since the credentials were stored hashed in the Roundcube app or rather in the DB 😉
        Btw. I’m currently still struggling with SSO as well… May I ask how you implemented your theming? Theming app, a custom template in the themes folder, or bought via Nextcloud enterprise support?

        Regards, ecbo

  11. Zane says:

    Hey, I have an ODROID-X4U with Ubuntu 16.04 minimal.
    fail2ban does not work for me.
    I did it exactly the way you described.
    The data directory and the log are located on the external HDD at:
    /media/usb1tb/usb/data/nextcloud.log
    Do you have an idea what the reason could be?
    For SSH, f2b works without any problems.
    Could it be that f2b cannot read the log because it is located on the external HDD?
    Do I have to set any permissions?
    I have enabled the “Brute Force Protection” in Nextcloud; could the two interfere with each other?

    Thanks!

    • This is how it has to look on your system:

      /etc/fail2ban/jail.d/nextcloud.local:

      [nextcloud]
      #ignoreip = 192.168.2.0/24
      backend = auto
      enabled = true
      port = 80,443
      protocol = tcp
      filter = nextcloud
      maxretry = 3
      bantime = 36000
      findtime = 36000
      logpath = /media/usb1tb/usb/data/nextcloud.log

      /etc/fail2ban/filter.d/nextcloud.conf:

      [Definition]
      failregex=^{“reqId”:”.*”,”remoteAddr”:”.*”,”app”:”core”,”message”:”Login failed: ‘.*’ (Remote IP: ”)”,”level”:2,”time”:”.*”}$
      ^{“reqId”:”.*”,”level”:2,”time”:”.*”,”remoteAddr”:”.*”,”app”:”core”.*”,”message”:”Login failed: ‘.*’ (Remote IP: ”)”.*}$
      ^.*”remoteAddr”:””.*Trusted domain error.*$

      The misbehaviour has nothing to do with the brute force protection…is the loglevel set to 2 in the config.php?
      Do the correct IPs arrive in the nextcloud.log (real IPs and not the local ones)?

      • Zane says:

        Yes, everything seems to be correct.
        But I think the format of the failregex and/or the time don’t match; I did the test at 17:30.
        When I enter “date” in the console to display the time, the correct time is shown.
        Here is an excerpt from the log:
        {“reqId”:”43QSRv8Tl8C9zPhbH9nH”,”level”:2,”time”:”2018-05-17T15:30:19+00:00″,”remoteAddr”:”185.220.70.143″,”user”:”–“,”app”:”core”,”method”:”POST”,”url”:”/index.php/login?user=xxxxxxxx”,”message”:”Login failed: ‘xxxxxxxx’ (Remote IP: ‘185.220.70.143’)”,”userAgent”:”Mozilla/5.0 (Android 7.1.2; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0″,”version”:”13.0.2.1″}

        • Zane says:

          Test:
          $ fail2ban-regex /media/usb1tb/usb/data/nextcloud.log /etc/fail2ban/filter.d/nextcloud.conf

          Running tests
          =============

          Use failregex filter file : nextcloud, basedir: /etc/fail2ban
          Traceback (most recent call last):
          File “/usr/bin/fail2ban-regex”, line 549, in
          fail2banRegex.readRegex(cmd_regex, ‘fail’) or sys.exit(-1)
          File “/usr/bin/fail2ban-regex”, line 319, in readRegex
          ‘add%sRegex’ % regextype.title())(regex.getFailRegex())
          File “/usr/lib/python3/dist-packages/fail2ban/server/filter.py”, line 110, in addFailRegex
          raise e
          File “/usr/lib/python3/dist-packages/fail2ban/server/filter.py”, line 102, in addFailRegex
          regex = FailRegex(value)
          File “/usr/lib/python3/dist-packages/fail2ban/server/failregex.py”, line 215, in __init__
          raise RegexException(“No ‘host’ group in ‘%s'” % self._regex)
          fail2ban.server.failregex.RegexException: No ‘host’ group in ‘^{“reqId”:”.*”,”remoteAddr”:”.*”,”app”:”core”,”message”:”Login failed: ‘.*’ (Remote IP: ‘‘)”,”level”:2,”time”:”.*”}$’

          I cannot really make much of this at the moment.

          • Zane says:

            In the config.php I have now added ‘logtimezone’ => ‘Europe/Berlin’, and now the correct time is logged.
            But f2b still does not show anything.

            sudo fail2ban-client status nextcloud
            Status for the jail: nextcloud
            |- Filter
            | |- Currently failed: 0
            | |- Total failed: 0
            | `- File list: /media/usb1tb/usb/data/nextcloud.log
            `- Actions
            |- Currently banned: 0
            |- Total banned: 0
            `- Banned IP list:

          • Zane says:

            Ok, now I have redirected the log file in config.php to /var/log/nextcloud.log and set permissions with “chmod a+rw nextcloud.log”; then I also added the entry at Remote IP: ‘’ in the regex. Now the test gives me the following output:

            sudo fail2ban-regex /var/log/nextcloud.log /etc/fail2ban/filter.d/nextcloud.conf

            Running tests
            =============

            Use failregex filter file : nextcloud, basedir: /etc/fail2ban
            Use log file : /var/log/nextcloud.log
            Use encoding : UTF-8

            Results
            =======

            Failregex: 0 total

            Ignoreregex: 0 total

            Date template hits:
            |- [# of hits] date format
            | [12] Year-Month-Day[T ]24hour:Minute:Second(?:.Microseconds)?(?:Zone offset)?
            `-

            Lines: 12 lines, 0 ignored, 0 matched, 12 missed [processed in 0.00 sec]
            |- Missed line(s):
            {“reqId”:”Yb9dSbkzUlCmXPXZftO3″,”level”:2,”time”:”2018-05-17T21:41:52+02:00″,”remoteAddr”:”185.216.35.67″,”user”:”–“,”app”:”core”,”method”:”POST”,”url”:”/index.php/login?redirect_url=/index.php/apps/files/&user=admin”,”message”:”Login failed: ‘bad’ (Remote IP: ‘185.216.35.67’)”,”userAgent”:”Mozilla/5.0 (Android 7.1.2; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0″,”version”:”13.0.2.1″}

            So my understanding now is that it does not read the lines because it cannot parse the time, since it is in the wrong format, right?
            Can that be configured somewhere?
            Do you maybe have an idea how I could change that?

            Sorry for writing so much here, thank you for your time.

          • Zane says:

            I have solved it now; it was not the log date format but the failregex.
            Mine now only contains the following and it works (at least for failed logins):

            [Definition]
            failregex = ^.*Login failed: ‘.*’ (Remote IP: ”.*$

            $ sudo fail2ban-client status nextcloud

            Status for the jail: nextcloud
            |- Filter
            | |- Currently failed: 1
            | |- Total failed: 4
            | `- File list: /var/log/nextcloud.log
            `- Actions
            |- Currently banned: 1
            |- Total banned: 1
            `- Banned IP list: 185.216.35.67

    • Falk says:

      Hi, I have the same setup and in my case it fails at the installation of: nginx_1.14.0-1~xenial_arm64.deb
      How did you solve this problem, if I may ask?
      Regards!

      • Do you have a problem with fail2ban or with NGINX?

        • Falk says:

          I had a problem with nginx. The solution that worked for me was neither the amd64 nor the arm64 variant but armhf.

          • If the platform is not a 64-bit architecture, or is based on a different architecture, the repository of course has to be adjusted accordingly. Many thanks for this information.
            32 bit, 64 bit, AMD64, ARMHF, ARM64 …

  12. Matthias says:

    Hi,

    for Ubuntu 18.04, does it have to be

    ‘host’ => ‘/var/run/redis/redis-sock.sock’,

    or rather
    ‘host’ => ‘/var/run/redis/redis-server.sock’,

    in config.php?

    Thanks!

  13. Carsten,

    thanks, and a big thumbs up for your excellent guides!

    Small comment/question: why do you even consider switching off IPv6 support?
    https://internet.nl will not consider putting a website in the hall of fame without it. The site does additional checks compared to ssllabs (where I got 2 A+ scores…)

    Once again: thank you very much; drop by some time and we’ll have a drink. Keep up the good work!

    • IPv6 should only be used by people who really need IPv6 (and know what they are doing and what they have to do regarding IPv6)…that’s the only reason 😉
      I’d be happy to drop by…but where to?! Cheers, Carsten

  14. JC Connell says:

    nginx -t returns:

    nginx: [emerg] no port in upstream “php-handler” in /etc/nginx/conf.d/nextcloud.conf:53

    The line it’s complaining about looks like this:

    fastcgi_pass php-handler;

    It looks correct to me. Any ideas why it’s complaining?

    • JC Connell says:

      Looks like I forgot to copy a section of the nginx.conf. I have added that and the error is gone now.

      I am receiving a new warning now however:
      nginx: [warn] no “fastcgi_cache_key” for “fastcgi_cache” in /etc/nginx/nginx.conf:43

      • Did you install and modify PHP properly?
        Did you start PHP?
        Did you remove the ‘#’ in the nginx.conf after running all steps from my guide?

        Please send me your nginx.conf and the vhosts files per mail if you won’t have success. Cheers, Carsten

  15. Eric says:

    Hey Carsten, just wanted to let you know your fix for the phpsession files with ramdisk is working perfectly. Your guides are enabling a lot of people to operate their Nextcloud services properly. We are fortunate to have you take the time to keep up these guides. Thanks!

  16. Max says:

    Hello Carsten,

    Thank you for the very useful tutorials and for keeping them updated!

    When I do a Nextcloud backup (using your script) I always see this in the email message

    -tar: .: file changed as we read it
    -Size of archive: 286M

    and basically, the file size almost doubles.

    I am not a Linux expert, though I am wondering why I get a “file changed as we read it” from tar given that the script first rsyncs the directories and then creates the tar file. So I would not expect such a message, as tar would be creating the archive from files which should not be accessed by anything but tar itself.

    Thank you.
    Max

  17. Noah Williams says:

    Hi Carsten in step 9.2 mount an external hdd to your nextcloud i get this error

    Your data directory must be an absolute path
    Check the value of “datadirectory” in your configuration

    Your data directory is invalid
    Ensure there is a file called “.ocdata” in the root of the data directory.

    An unhandled exception has been thrown:
    Exception: Environment not properly prepared. in /var/www/nextcloud/lib/private/Console/Application.php:148
    Stack trace:
    #0 /var/www/nextcloud/console.php(89): OCConsoleApplication->loadCommands(Object(SymfonyComponentConsoleInputArgvInput), Object(SymfonyComponentConsoleOutputConsoleOutput))
    #1 /var/www/nextcloud/occ(11): require_once(‘/var/www/nextcl…’)
    #2 {main}root@nextcloud-desktop:/var/www/nextcloud#

    when i run sudo -u www-data php occ files:scan –all -v and when i run sudo -u www-data php occ files:scan-app-data -v

    i did not follow any previous steps on this page for my setup. I followed your guide here https://www.c-rieger.de/nextcloud-13-installation-guide/comment-page-1/#comment-651

    • Does .ocdata exists?
      Issu this:
      sudo -u www-data touch /path-to-your-Nextcloud-data/.ocdata
      e.g. sudo -u www-data touch /sdb1/nc_data/.ocdata
      or
      e.g. sudo -u www-data touch /var/nc_data/.ocdata
      and re-run the filescan.

      • noah williams says:

        Hi Carsten,

        I tried the command sudo -u www-data touch /var/nc_data/.ocdata
        and re-ran the file scan, but I still got the same errors. So I changed the datadirectory in /var/www/nextcloud/config/config.php back to ‘/var/nc_data’ (it was ‘datadirectory’ => ‘/nc_data’), and now I can run the scan commands successfully.

        My question is: do I need to edit /etc/fstab from /nc_data back to /var/nc_data?

        My purpose is to keep the files local. Are my files local as configured? Or should I revert back to ‘datadirectory’ => ‘/nc_data’ and move the files from /var/nc_data, and if so, which commands should I use? Your assistance is greatly appreciated.

        • Please send me a mail with the following information: Does the .ocdata exists? Where do you want to store your data? (path?), Your directory-structure (ls -lsa /yourpathto/nextcloud_data), Your current /etc/fstab, Your current fdisk -l, Your current config.php, Your current error – message(s) when running files:scan
          I will try to assist you. Cheers, Carsten

          • Falk says:

            I got the same error message at first and created a .ocdata file. Now it is working, except that all users are unable to write data. When scanning with:
            sudo -u www-data php occ files:scan –all -v
            I get:
            Scanning files for 2 users

            Starting scan for user 1 out of 2 (*username*)
            Home storage for user *username* not writable
            Make sure you’re running the scan command only as the user the web server runs as

            Starting scan for user 2 out of 2 (*username*)
            Home storage for user *username* not writable
            Make sure you’re running the scan command only as the user the web server runs as

            +———+——-+————–+
            | Folders | Files | Elapsed time |
            +———+——-+————–+
            | 0 | 0 | 00:00:00 |
            +———+——-+————–+

            I thought it would be a permissions/ownership problem but a ls -l gives:
            drwxr-xr-x 7 www-data www-data 4096 Jun 5 20:54 appdata_*instanceid*
            drwxr-xr-x 3 www-data www-data 4096 Jun 5 20:54 *username*
            drwxrwxr-x 2 www-data www-data 4096 Jun 5 21:00 files_external
            drwx—— 2 www-data www-data 16384 Jun 3 19:58 lost+found
            -rw-r–r– 1 www-data www-data 0 Jun 5 20:36 nextcloud.log

            I still use the original config.php file not the one mentioned in this guide since I tried the one mentioned here and it does not work(yet).
            Could it be related to redis?
            If you could point me in a direction where to look at, I would be very pleased.
            Thanks in advance.
            Greetings

          • Hi Falk, I assume you are German? Please send me an email including your config.php, Ubuntu version (lsb_release -a) and your nextcloud.conf. I will have a dive into it. If you are interested, I could log on to your server using ssh … that would be more efficient?! Cheers/Servus, Carsten

  18. V8_Engine says:

    Dear Mr. Rieger,

    Thank you very much for this article, it helps me a lot.

  19. Karsten says:

    Hi Carsten,
    in step 09.2 if you want to move the files directly to the newly created /nc_data the command used should be “rsync -av /var/nc_data/ /nc_data”, else it will create a new subfolder and the hierarchy would be /nc_data/nc_data

  20. JC says:

    Would these steps work on a container based installation of Nextcloud to improve file transfer speeds? Downloading files via WAN is very slow on my install.

  21. Daniel says:

    Thank you for your guide, everything works great. Just one thing: when I want to set up SMTP mail in the Nextcloud admin panel I get an error even though the settings should be working. How can I fix this, or should I just use the PHP method?

  22. Eike says:

    Again thank you Carsten for maintaining this site. It already helped me getting my Nextcloud up in 2017. Today I decided to upgrade nginx using this manual, but I’m not only getting the error you listed when pulling the nginx source but also an error relating to the gpg signatures, though the public key was added with apt-key and is also listed when using apt-key list… Can you reproduce this error? I just don’t get it…
    gpgv: Signature made Di 20 Feb 2018 16:05:02 CET using RSA key ID 7BD9BF62
    gpgv: Can’t check signature: public key not found
    dpkg-source: warning: failed to verify signature on ./nginx_1.13.9-1~xenial.dsc
    dpkg-source: info: extracting nginx in nginx-1.13.9
    dpkg-source: info: unpacking nginx_1.13.9.orig.tar.gz
    dpkg-source: info: unpacking nginx_1.13.9-1~xenial.debian.tar.xz

    • Eike, please find my deb_files here:
      ARM64: nginx w/ngx_cache_purge
      AMD64: nginx w/ngx_cache_purge
      When i am back from vacation i will double check this behaviour. I am sorry.

      • Eike says:

        I noticed these files somewhere in your tutorial, but I’m on armv7l (so 32bit…).

        PS: In your old tutorial you had the following in your /et/nginx/conf.d/nextcloud.conf:

        proxy_set_header Cache-Control “public, max-age=7200”;
        proxy_set_header Strict-Transport-Security “max-age=15768000; includeSubDomains; always;”;
        proxy_set_header X-Content-Type-Options “nosniff; always;”;
        proxy_set_header X-XSS-Protection “1; mode=block; always;”;
        proxy_set_header X-Robots-Tag none;
        proxy_set_header X-Download-Options noopen;
        proxy_set_header X-Permitted-Cross-Domain-Policies none;

        With the current settings I picked up from the tutorials here I don’t know if this settings were removed or if I just didn’t find them anymore. Nonetheless Nextcloud throws errors that seems to relate to this like:

        The “X-XSS-Protection” HTTP header is not configured to equal “1; mode=block”. This is a potential security risk and it is recommended to adjust this setting.
        The “X-Content-Type-Options” HTTP header is not configured to equal “nosniff”. This is a potential security risk and it is recommended to adjust this setting.
        The “X-Robots-Tag” HTTP header is not configured to equal “none”. This is a potential security risk and it is recommended to adjust this setting.
        The “X-Download-Options” HTTP header is not configured to equal “noopen”. This is a potential security risk and it is recommended to adjust this setting.
        The “X-Permitted-Cross-Domain-Policies” HTTP header is not configured to equal “none”. This is a potential security risk and it is recommended to adjust this setting.

        I thought about putting those settings into /etc/nginx/proxy.conf and believe that could be the right choice, but on the other hand in the tutorial version from mid 2017 they were in the nextcloud.conf – but IIRC in that tutorial version you weren’t using the proxy.conf?

        But no need to hurry. Enjoy your vacation 🙂

      • Eike says:

        Should have used the search function.

        include /etc/nginx/proxy.conf;

        in the nextcloud.conf solves this. I found it commented out in some configs here in your tutorials. But maybe I should just read EVERYTHING again instead of writing potentially unnecessary comments 🙂

        • Eike says:

          Last one for today: Why have you removed the geoip blocking mechanism from your tutorials?

          • Eike says:

            …. I meant include /etc/nginx/headers.conf…. but that wasn’t the solution.
            My 2017 version of the ssl.conf caused the issues. It was:

            ssl on;
            ssl_certificate /etc/letsencrypt/live/***/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/***/privkey.pem;
            ssl_trusted_certificate /etc/letsencrypt/live/***/fullchain.pem;
            ssl_dhparam /etc/ssl/certs/dhparam.pem;
            ssl_protocols TLSv1.2;
            ssl_ciphers ‘ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK:!AES128’;
            ssl_ecdh_curve prime256v1;
            ssl_prefer_server_ciphers on;
            ssl_stapling on;
            ssl_stapling_verify on;
            ssl_session_timeout 24h;
            ssl_session_cache shared:SSL:50m;
            ssl_session_tickets off;
            resolver 192.168.1.1;
            resolver_timeout 10s;
            add_header Strict-Transport-Security “max-age=15768000; includeSubDomains” always;
            add_header X-Content-Type-Options “nosniff” always;
            add_header Referrer-Policy “same-origin” always;
            add_header X-Xss-Protection “1; mode=block” always;
            add_header X-Robots-Tag none;
            add_header X-Download-Options noopen;
            add_header X-Permitted-Cross-Domain-Policies none;

            after changing this to the current version from your tutorial I’m back to an a+ rating…

          • From my point of view it isn’t necessary anymore because of TOR, VPN and other techs…I prefer fail2ban, ufw and other mechanisms instead.

  23. Rob says:

    After modifying my .user.ini to:

    upload_max_filesize=10240M
    post_max_size=10240M
    memory_limit=512M
    mbstring.func_overload=0
    always_populate_raw_post_data=-1
    default_charset=’UTF-8′
    output_buffering=’Off’

    NC13 gives an invalid hash for .user.ini.

    The rescan function doesn’t fix the problem.

