nextcloud

Nextcloud installation guide v. 4.1


oDroid C2 / Intel NUC, Nextcloud 12.0.2, Nextcloud A+, SSL A+, Ubuntu 16.04.3 LTS 64Bit, nginx 1.13.4, mariadb 10.0.31, PHP 7.1.8, ufw, fail2ban, redis-server, postfix …


01. Install NGINX 1.13.4 with ngx_cache_purge module
02. Install PHP 7.1.8
03. Install MariaDB 10.0.31
04. Prepare NGINX for Let’s Encrypt and Nextcloud 12.0.2
05. Install Nextcloud 12.0.2
06. Install Redis-server 3
07. Create the ssl certificates
08. Configure Nextcloud 12.0.2
The following chapters are optional:
09. Mount additional storage to Nextcloud
09.1 NAS (e.g. Synology)
09.2 external HDD e.g. WD for NextcloudBox
10. Recommended tweaks and hardenings
11. SSL certificate renewal
12. Backup
13. Server hardening
14. Monitor your entire system using netdata


Update history

v. 4.1 || 2017-08-15 monitor your entire system using netdata


v. 4.0.3 || 2017-08-13 security enhancements by using logwatch


v. 4.0.2 || 2017-08-10 made smaller adjustments to the NGINX-compile process – thx to Tony.


v. 4.0.1 || 2017-08-07 made smaller adjustments to Nextcloud CLI – thx to Jens H.

  • changed “… php occ file:scan” -> to -> “… php occ files:scan”

v. 4.0 || 2017-08-05 Made changes to the backup.sh and nginx.conf – thx to Bego & Dariusz

  • backup.sh:

    chmod -R 600 /bkup

    echo “Delete backups older than 5 days…”
    ls -1 /bkup/ | sort -r | tail -n +6 | xargs rm
  • nginx.conf:

    open_file_cache max=5000 inactive=10m;
    open_file_cache_errors on;

v. 3.9 || 2017-07-20 Added the ubuntu64-16.04-minimal-odroid-c2-20160815.img.xz – image to the download repository


v. 3.8 || 2017-07-05 Removed “kernel.sched_autogroup_enabled = 0” from /etc/sysctl.conf


v. 3.7 || 2017-07-03 Made smaller adjustments to the backup.sh


01. Install NGINX 1.13.4 with ngx_cache_purge module

NGINX 1.13.4 will be built manually from source in this guide. We will work in the directory /usr/local/src for compiling, so please change to it as root and update your system first.

sudo -s
cd /usr/local/src
apt update && apt upgrade -y
apt install zip unzip screen curl -y

Add the NGINX key

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key

and the NGINX repositories to your system:

vi /etc/apt/sources.list.d/nginx.list

Copy and paste the following two rows:

deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx

Then update your software sources:

apt update

Some warnings will appear while running ‘apt update’ on ARM devices such as the oDroid C2:

” …N: Skipping acquire of configured file ‘nginx/binary-arm64/Packages’ as repository ‘http://nginx.org/packages/mainline/ubuntu xenial InRelease’ doesn’t support architecture ‘arm64’
N: Skipping acquire of configured file ‘nginx/binary-armhf/Packages’ as repository ‘http://nginx.org/packages/mainline/ubuntu xenial InRelease’ doesn’t support architecture ‘armhf’ …”

Please ignore these errors and go ahead with downloading the build dependencies and the source code for the new nginx-server:

apt build-dep nginx -y
apt source nginx

Another warning/error will be thrown:

W: Can’t drop privileges for downloading as file ‘nginx_1.13.4-1~xenial.dsc’ couldn’t be accessed by user ‘_apt’. – pkgAcquire::Run (13: Permission denied)

Please ignore this error as well and go ahead with the next step. Create and change into the nginx directory:

mkdir nginx-1.13.4/debian/modules -p
cd nginx-1.13.4/debian/modules

Now, in the modules directory, we are going to download and extract the code for each of the modules we want to include (e.g. ngx_cache_purge 2.3):

wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.tar.gz

Now extract the module sources:

tar -zxvf 2.3.tar.gz

Change back to the debian-directory and edit the compiler information file rules:

cd /usr/local/src/nginx-1.13.4/debian
vi rules

You will need to modify two lines in the rules file. Search for with-ld-opt="$(LDFLAGS)" and immediately after the first occurrence add the following:

--add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3"

and on the second occurrence add the following:

--add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3" --with-debug

During the second pass, when building the debug deb package, an error may occur:

… dh_shlibdeps -a dpkg-shlibdeps: error: no dependency information found for /usr/lib/libz.so.1 (used by debian/nginx/usr/sbin/nginx-debug) …

To fix this error find the line

dh_shlibdeps -a

and modify it to

dh_shlibdeps -a --dpkg-shlibdeps-params=--ignore-missing-info

Save and quit (:wq!) the rules-file. We will now build the debian package, please ensure you are in the nginx source directory:

cd /usr/local/src/nginx-1.13.4

and run

dpkg-buildpackage -uc -b

After the package build has finished (it may take a while, ~10 min), change to the src directory again:

cd /usr/local/src

First remove any old nginx fragments on your server:

apt remove nginx nginx-common nginx-full -y --allow-change-held-packages

Then start installing the new nginx-webserver, choose the package that fits your environment:

dpkg --install nginx_1.13.4-1~xenial_arm64.deb
Press ‘N’ when you are asked about the default.conf.
The name of the *.deb file depends on your server architecture (amd64, arm64, …). Please adjust the name accordingly, e.g. dpkg --install nginx_1.13.4-1~xenial_amd64.deb.

Mark the nginx package as "hold" to prevent it from being overwritten by apt upgrade.

apt-mark hold nginx

Check the number of CPUs and the process limits of your server hardware:

grep ^processor /proc/cpuinfo | wc -l

Result: 4 (Odroid C2)

ulimit -n

Result: 1024 (Odroid C2)

Adjust the nginx.conf with regard to the values above

cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf

to:

user www-data;
worker_processes auto;
# or: worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
 worker_connections 1024;
 multi_accept on;
 use epoll;
}
http {
 server_names_hash_bucket_size 64;
 upstream php-handler {
 server unix:/run/php/php7.1-fpm.sock;
 }
 include /etc/nginx/mime.types;
 limit_req_zone $binary_remote_addr zone=wp_ddos:20m rate=2r/m;
 # include /etc/nginx/ssl.conf;
 # include /etc/nginx/header.conf;
 # include /etc/nginx/optimization.conf;
 default_type application/octet-stream;
 log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 '$status $body_bytes_sent "$http_referer" '
 '"$http_user_agent" "$http_x_forwarded_for"';
 access_log /var/log/nginx/access.log main;
 sendfile on;
 send_timeout 3600;
 tcp_nopush on;
 tcp_nodelay on;
 open_file_cache max=500 inactive=10m;
 open_file_cache_errors on;
 keepalive_timeout 65;
 reset_timedout_connection on;
 server_tokens off;
 resolver 192.168.2.1;
 resolver_timeout 10s;
 include /etc/nginx/conf.d/*.conf;
}

Check your new nginx-webserver:

nginx -t

If the following output appears

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

start and verify your new NGINX webserver with module “ngx_cache_purge” enabled:

service nginx restart && nginx -V 2>&1 | grep ngx_cache_purge -o

If ngx_cache_purge appears in the output, your webserver works correctly. Now modify the source file "nginx.list" to disable its content

vi /etc/apt/sources.list.d/nginx.list

by adding ‘#’ at the beginning of each line:

# deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
# deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx

Save and quit the file (:wq!). With regard to Nextcloud we now create some folders and apply the proper permissions:

mkdir -p /var/nc_data && mkdir -p /var/www/letsencrypt
mkdir -p /usr/local/tmp/cache && mkdir /upload_tmp
chown -R www-data:www-data /upload_tmp
chown -R www-data:www-data /var/nc_data
chown -R www-data:www-data /var/www

Go ahead with the installation of PHP.


02. Install PHP 7.1.8

Install PHP 7.1.8 from the ondrej/php PPA:

apt install language-pack-en-base -y
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php -y
apt update && apt install php7.1-fpm php7.1-gd php7.1-mysql php7.1-curl php7.1-xml php7.1-zip php7.1-intl php7.1-mcrypt php7.1-mbstring php-apcu php-imagick php7.1-json php7.1-bz2 -y

PHP 7.1 is now installed but still has to be configured … let’s start with the pool configuration:

cp /etc/php/7.1/fpm/pool.d/www.conf /etc/php/7.1/fpm/pool.d/www.conf.bak
vi /etc/php/7.1/fpm/pool.d/www.conf

Search for:

Pass environment variables like LD_LIBRARY_PATH. ALL $VARIABLES are taken from the current environment

and remove the semicolon at the beginning of the following lines.

env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp

During our tests we encountered these warnings in our php.log (cat /var/log/php7.1-fpm.log):

[…] WARNING: [pool www] server reached pm.max_children setting (5), consider raising it

You can solve this by editing the php-fpm-configuration

vi /etc/php/7.1/fpm/pool.d/www.conf

and change the following lines to

pm.max_children = 240
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 500

on the oDroid C2. To calculate the above values for your own environment, first stop PHP

service php7.1-fpm stop

and run

free

to display the available memory of your system. Then start PHP again and display the average memory used per php-fpm process:

service php7.1-fpm start && ps --no-headers -o "rss,cmd" -C php-fpm7.1 | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"M") }'

The result should look, for example, like this: 6M

Example calculations:

  • Raspberry Pi 3: PHP 7.1 takes 19 MB => 20 MB for the following calculation
  • oDroid C2: PHP 7.1 takes 6 MB => 7 MB for the following calculation

at runtime, so we calculate with 20 MB (Pi 3) or 7 MB (OC2) to have a small buffer.

Available memory divided by PHP memory usage per process:
Raspberry Pi 3: 650M / 20M = 32.5 (=> 30)
oDroid C2:     1800M /  7M = 257.1 (=> 240)

On Pi3 we will apply ’30’ for the pm.max_children value and calculate the other values as shown.

pm.max_children = 30
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 10

pm.max_requests = 500

On oDroid C2 we will apply ‘240’ for the pm.max_children value and calculate the other values as shown.

pm.max_children = 240
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 20

pm.max_requests = 500

Now the warnings should disappear or at least be reduced.
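If you prefer to script this calculation, a small helper like the one below can estimate pm.max_children from the output of free and ps. This is only a rough sketch of the manual calculation above (the "available" column of free -m and the 1 MB buffer are assumptions), so double-check the result before applying it.

#!/bin/bash
# Rough estimate of pm.max_children: available memory divided by the
# average memory footprint of one php-fpm worker (same figures as above).
AVAIL_MB=$(free -m | awk '/^Mem:/ {print $7}')
PHP_MB=$(ps --no-headers -o "rss,cmd" -C php-fpm7.1 | awk '{ sum+=$1 } END { if (NR>0) printf ("%d", sum/NR/1024); else print 19 }')
PHP_MB=$((PHP_MB + 1))   # small buffer, as in the manual calculation
echo "suggested pm.max_children = $((AVAIL_MB / PHP_MB))"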

Enable APCu and OPcache for PHP and adjust further general PHP settings:

cp /etc/php/7.1/cli/php.ini /etc/php/7.1/cli/php.ini.bak
vi /etc/php/7.1/cli/php.ini

On Odroid C2 set the values to:

...
post_max_size = 10240M
...
upload_tmp_dir = /upload_tmp
...
upload_max_filesize = 10240M
...
max_file_uploads = 100
...
max_execution_time = 1800
...
max_input_time = 3600 
...
output_buffering = Off 
...
apc.enable_cli = 1
...
session.cookie_secure = True
...
date.timezone = Europe/Berlin
...

Attention (10240M): on a 32-bit OS the maximum value is 2048M.


Save and quit the file (:wq!), then modify the php.ini in the fpm directory as well:

cp /etc/php/7.1/fpm/php.ini /etc/php/7.1/fpm/php.ini.bak
vi /etc/php/7.1/fpm/php.ini

On Odroid C2 set the values to:

...
post_max_size = 10240M
...
upload_tmp_dir = /upload_tmp
...
upload_max_filesize = 10240M
...
max_file_uploads = 100
...
max_execution_time = 1800
...
max_input_time = 3600 
...
output_buffering = Off 
...
session.cookie_secure = True
...
date.timezone = Europe/Berlin
...
opcache.enable=1
...
opcache.enable_cli=1
...
opcache.memory_consumption=128
...
opcache.interned_strings_buffer=8
...
opcache.max_accelerated_files=10000
...
opcache.revalidate_freq=1
...
opcache.save_comments=1
...

Attention (10240M): on a 32-bit OS the maximum value is 2048M.


Save and quit the file (:wq!) and then adjust the PHP Settings in php-fpm.conf:

cp /etc/php/7.1/fpm/php-fpm.conf /etc/php/7.1/fpm/php-fpm.conf.bak
vi /etc/php/7.1/fpm/php-fpm.conf

Set the following values

...
emergency_restart_threshold = 10
...
emergency_restart_interval = 1m
...
process_control_timeout = 10s
...

Save and quit the file (:wq!) and finally restart PHP and nginx, so that all changes take effect.

service php7.1-fpm restart && service nginx restart
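Optionally, verify that the modules are loaded and that the FPM configuration parses cleanly. php -m simply lists the loaded PHP modules, and php-fpm7.1 -t only validates the configuration syntax:

# APCu, imagick and OPcache should show up; redis will only appear after chapter 06
php -m | grep -Ei 'apcu|imagick|opcache'
php-fpm7.1 -t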

Go ahead with the installation of MariaDB.


03. Install MariaDB 10.0.31

You may install MariaDB 10.0.31 directly from the Ubuntu repository. Update your system and install MariaDB:

apt update && apt install mariadb-server -y

Now you are already running MariaDB 10.0.31. Configure and secure the database server by running the mysql_secure_installation tool:

mysql_secure_installation

If you already set the database password for the root user during the installation process you can skip the first question. All the following questions should be answered with ‘Yes’ (Y).

Edit MariaDB’s configuration-file:

cp /etc/mysql/my.cnf /etc/mysql/my.cnf.bak
vi /etc/mysql/my.cnf

Change the MariaDB my.cnf-file to:

[server]
skip-name-resolve
innodb_buffer_pool_size = 128M
innodb_buffer_pool_instances = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_max_dirty_pages_pct = 90
query_cache_type = 1
query_cache_limit = 2M
query_cache_min_res_unit = 2k
query_cache_size = 64M
tmp_table_size= 64M
max_heap_table_size= 64M
slow-query-log = 1
slow-query-log-file = /var/log/mysql/slow.log
long_query_time = 1

[client-server]
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/

[client]
default-character-set = utf8mb4

[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
binlog_format = MIXED
innodb_large_prefix=on
innodb_file_format=barracuda
innodb_file_per_table=1

Now we will create the database and user for Nextcloud. Open the MariaDB console

service mysql restart && mysql -uroot -p

and create the database and user:

CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER nextcloud@localhost identified by 'nextcloud';
GRANT ALL PRIVILEGES on nextcloud.* to nextcloud@localhost;
FLUSH privileges;
quit;

MariaDB now fits all requirements and is already up and running.
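If you want to double-check the result, log in as the new database user and inspect the database definition and the grants (standard MariaDB client calls; the password is the one chosen in the CREATE USER statement above):

mysql -unextcloud -p -e "SHOW CREATE DATABASE nextcloud; SHOW GRANTS;"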


04. Prepare NGINX for Let’s Encrypt and Nextcloud

The new file structure will look like this:

root@nextcloud: /etc/nginx/
  • nginx.conf
    (nginx basic configuration)
  • ssl.conf, header.conf, optimization.conf, php_optimization.conf
    (parameters for Nextcloud and other apps)
root@nextcloud: /etc/nginx/conf.d/
  • nextcloud.conf (Nextcloud vhost)
  • letsencrypt.conf (let’s encrypt vhost)

Your Nextcloud will be reachable via <https://YOUR.DDNS.IO>. Please substitute the DynDNS name, IP address and resolver IP according to your environment:

<YOUR.DDNS.IO> -> cloud.dedyn.io
<192.168.2.17> -> 192.168.178.5
<192.168.2.1> -> 192.168.178.1
sudo -s
service nginx stop

Create the nextcloud.conf

mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.bak
vi /etc/nginx/conf.d/nextcloud.conf

and paste the following rows:

fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
map $request_uri $skip_cache {
 default 1;
 ~*/thumbnail.php 0;
 ~*/apps/galleryplus/ 0;
 ~*/apps/gallery/ 0;
}
server {
 listen 80 default_server;
 server_name YOUR.DDNS.IO;
 location ^~ /.well-known/acme-challenge {
 proxy_pass http://127.0.0.1:81;
 proxy_set_header Host $host;
 proxy_set_header X-Real-IP $remote_addr;
 proxy_set_header X-Forwarded-For $remote_addr;
 proxy_set_header X-Forwarded-Host $host;
 proxy_set_header X-Forwarded-Port $server_port;
 proxy_set_header X-Forwarded-Protocol $scheme;
 proxy_redirect off;
 }
 location / {
 return 301 https://$host$request_uri;
 }
}
server {
 listen 443 ssl http2 default_server;
 server_name YOUR.DDNS.IO;
 root /var/www/nextcloud/;
 access_log /var/log/nginx/nextcloud.access.log main;
 error_log /var/log/nginx/nextcloud.error.log warn;
 location = /robots.txt {
 allow all;
 log_not_found off;
 access_log off;
 }
 location = /.well-known/carddav {
 return 301 $scheme://$host/remote.php/dav;
 }
 location = /.well-known/caldav {
 return 301 $scheme://$host/remote.php/dav;
 }
 client_max_body_size 10240M;
 location / {
 rewrite ^ /index.php$uri;
 }
 location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
 deny all;
 }
 location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
 deny all;
 }
 location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
 fastcgi_split_path_info ^(.+\.php)(/.*)$;
 include fastcgi_params;
 include php_optimization.conf;
 fastcgi_pass php-handler;
 fastcgi_param HTTPS on;
 fastcgi_cache_bypass $skip_cache;
 fastcgi_no_cache $skip_cache;
 fastcgi_cache NEXTCLOUD;
 }
 location ~ ^/(?:updater|ocs-provider)(?:$|/) {
 try_files $uri/ =404;
 index index.php;
 }
 location ~ \.(?:css|js|woff|svg|gif)$ {
 try_files $uri /index.php$uri$is_args$args;
 add_header Cache-Control "public, max-age=15778463";
 access_log off;
 expires 30d;
 }
 location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
 try_files $uri /index.php$uri$is_args$args;
 access_log off;
 expires 30d;
 }
}

Attention (10240M): on a 32-bit OS the maximum value is 2048M.


Save and quit the file (:wq!) and create the Let’s Encrypt-nginx-configuration file:

vi /etc/nginx/conf.d/letsencrypt.conf

Paste the following lines:

server {
listen 127.0.0.1:81 default_server;
server_name 127.0.0.1;
charset utf-8;
location ^~ /.well-known/acme-challenge {
default_type text/plain;
root /var/www/letsencrypt;
access_log /var/log/nginx/le.access.log main;
error_log /var/log/nginx/le.error.log warn;
}
}

Save and quit (:wq!) the file and create the ssl.conf:

vi /etc/nginx/ssl.conf

Paste the following rows:

# ssl_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/YOUR.DEDYN.IO/privkey.pem;
# ssl_trusted_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem;
# ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1;
ssl_stapling on;
ssl_stapling_verify on;

Limitation for Android users: use "ssl_ecdh_curve prime256v1;" instead of "ssl_ecdh_curve secp384r1;".

Save and quit the file (:wq!) and enhance security by using the Diffie-Hellman-Parameter:

screen -S dhparam
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096

We recommend using screen as shown above so the command keeps running in the background.

Be aware that the calculation takes a long time on the oDroid C2.

Remove the leading ‘#’ in the /etc/nginx/nginx.conf file:

vi /etc/nginx/nginx.conf
...
include /etc/nginx/ssl.conf;
include /etc/nginx/header.conf;
include /etc/nginx/optimization.conf;
...

Save and quit the file (:wq!) and create the header.conf:

vi /etc/nginx/header.conf

Paste the following rows:

add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
add_header Referrer-Policy "same-origin";

Save and quit (:wq!) the file and create the optimization.conf

vi /etc/nginx/optimization.conf

Paste the following rows:

fastcgi_buffers 64 8K;
fastcgi_cache_key $http_cookie$request_method$host$request_uri;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
gzip_disable "MSIE [1-6]\.";

Save and quit (:wq!) the file and create the php_optimization.conf

vi /etc/nginx/php_optimization.conf

Paste the following rows:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_cache_valid 60m;
fastcgi_cache_methods GET HEAD;

Save and quit (:wq!) the file. Validate your NGINX and restart the Webserver

nginx -t
service nginx restart

if no errors appear.


05. Install Nextcloud

Please find the relevant release information in Nextcloud's Maintenance and Release Schedule.
The web folders were already created, so we can start downloading and extracting the software. Change to our working directory again:

cd /usr/local/src

Download the current Nextcloud package:

wget https://download.nextcloud.com/server/releases/nextcloud-12.0.2.tar.bz2
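Optionally, verify the integrity of the download before extracting it. Nextcloud publishes checksum files next to the release archives; the .sha256 URL below assumes that naming scheme:

wget https://download.nextcloud.com/server/releases/nextcloud-12.0.2.tar.bz2.sha256
sha256sum -c nextcloud-12.0.2.tar.bz2.sha256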

Extract the Nextcloud package to your web-folder /var/www/nextcloud:

tar -xjf nextcloud-12.0.2.tar.bz2 -C /var/www

Remove the sources:

rm nextcloud-12.0.2.tar.bz2

Reset the permissions:

chown -R www-data:www-data /var/www/

Go ahead with the installation of the Redis Cache Server.


06. Install Redis-Server

Run the installation of redis:

apt update && apt install redis-server php-redis -y

Then edit the redis-configuration:

cp /etc/redis/redis.conf /etc/redis/redis.conf.bak
vi /etc/redis/redis.conf

Change both

a) the default port to ‘0’

# port 6379
port 0

and

b) the unixsocket-entries from

# unixsocket /var/run/redis/redis.sock
# unixsocketperm 700

to

unixsocket /var/run/redis/redis.sock
unixsocketperm 770

Now change the value for maxclients from 10000 to an appropriate value to avoid errors like:

# You requested maxclients of 10000 requiring at least 10032 max file descriptors.
# Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
# Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.

Depending on your server hardware set the value to e.g. 512 for oDroid C2:

# maxclients 10000
maxclients 512

Save and quit the file (:wq!) and add the web user (www-data) to the redis group, which is needed for Redis in combination with Nextcloud:

usermod -a -G redis www-data

To also fix

# WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.

in the redis-server.log, add vm.overcommit_memory = 1 to /etc/sysctl.conf and then run the sysctl command directly in the shell:

vi /etc/sysctl.conf

At the end add the following row:

vm.overcommit_memory = 1

Save and quit the file (:wq!) and run this command in your shell

sysctl -p

for this to take effect immediately. Another warning may occur:

"# WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128."

To fix this warning as well, add the setting to /etc/rc.local so that it persists across reboots:

vi /etc/rc.local

Add

sysctl -w net.core.somaxconn=65535

Save and quit the file (:wq!). After the next reboot the backlog will allow 65535 connections instead of the previous 128.

shutdown -r now

After this reboot please validate the existence of both files in both folders:

sudo -s
ls -la /run/redis && ls -la /var/run/redis

Files:

redis-server.pid 
- and -
redis.sock.

If you want to check whether Redis is running correctly, type:

redis-cli -s /var/run/redis/redis.sock

and enter

PING

You will receive

PONG

as a valid response from Redis. Leave the Redis console with quit. You can watch what happens inside Redis:

redis-cli -s /var/run/redis/redis.sock monitor

while you are browsing in Nextcloud. Go ahead and create your ssl certificates.


07. Create the ssl certificates

Install the Let's Encrypt client software from the certbot PPA:

add-apt-repository ppa:certbot/certbot -y
apt update && apt install letsencrypt -y
letsencrypt certonly -a webroot --webroot-path=/var/www/letsencrypt --rsa-key-size 4096 -d YOUR.DEDYN.IO

If asked, add your notification email for Let's Encrypt and agree to their Terms of Service; finally the client will display a success message. All Let's Encrypt certificates will be stored in

ls -la /etc/letsencrypt/live/YOUR.DEDYN.IO
cert.pem - your certificate
chain.pem - the intermediate certificate(s)
fullchain.pem - bundle (cert.pem + chain.pem)
privkey.pem - private key
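If you want to inspect the new certificate (subject and validity period), openssl can read it directly:

openssl x509 -in /etc/letsencrypt/live/YOUR.DEDYN.IO/cert.pem -noout -subject -dates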

Now apply the proper permissions using a new permission script called permissions.sh:

vi ~/permissions.sh

Paste the following rows:

#!/bin/bash
find /var/www/ -type f -print0 | xargs -0 chmod 0640
find /var/www/ -type d -print0 | xargs -0 chmod 0750
chown -R www-data:www-data /var/www/
chown -R www-data:www-data /upload_tmp/
chown -R www-data:www-data /var/nc_data/
chmod 0644 /var/www/nextcloud/.htaccess
chmod 0644 /var/www/nextcloud/.user.ini
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/privkey.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/chain.pem
chmod 600 /etc/letsencrypt/live/YOUR.DEDYN.IO/cert.pem
chmod 600 /etc/ssl/certs/dhparam.pem

Save and close (:wq!) the shell script, mark it as executable and execute it:

chmod u+x ~/permissions.sh
~/permissions.sh

This script can be reused after every update or modification to your server configuration.

Modify the ssl.conf

vi /etc/nginx/ssl.conf

and remove the leading ‘#’ at the beginning of the following rows:

ssl_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/YOUR.DEDYN.IO/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/YOUR.DEDYN.IO/fullchain.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
...

Save and quit the file, then verify and finally restart nginx:

nginx -t 
service nginx restart

Your server is now configured to use SSL.


08. Configure Nextcloud

Open your browser and call https://<yourcloud.dedyn.io>, then enter the following values:

Username: cloudadmin
Password*: NC-Password!

* Please consider using a complex password!

Data folder: /var/nc_data
Database user: nextcloud
Database password*: nextcloud

* Please consider using a complex password!

Database name: nextcloud
Host: localhost

Click ‘Finish setup’ and wait a few seconds … the installation will complete and you will be redirected to the Nextcloud welcome screen. Some smaller changes should be applied to the Nextcloud config.php immediately.

Open the config.php as www-data

sudo -u www-data vi /var/www/nextcloud/config/config.php

and adjust the config.php to the following rows:

<?php
$CONFIG = array (
'instanceid' => '...keep your values...',
'passwordsalt' => '...keep your values...',
'secret' => '...keep your values...',
'trusted_domains' =>
array (
0 => 'YOUR.DEDYN.IO',
),
'datadirectory' => '/var/nc_data',
'dbtype' => 'mysql',
'version' => '12.0.1.5',
'dbname' => 'nextcloud',
'dbhost' => 'localhost',
'dbtableprefix' => 'oc_',
'dbuser' => 'nextcloud',
'dbpassword' => '...keep your values...',
'mysql.utf8mb4' => true,
'htaccess.RewriteBase' => '/',
'overwrite.cli.url' => 'https://YOUR.DEDYN.IO',
'overwriteprotocol' => 'https',
'loglevel' => 1,
'logtimezone' => 'Europe/Berlin',
'logfile' => '/var/nc_data/nextcloud.log',
'log_rotate_size' => 104857600,
'cron_log' => true,
'installed' => true,
'filesystem_check_changes' => 1,
'quota_include_external_storage' => false,
'knowledgebaseenabled' => false,
'memcache.local' => '\\OC\\Memcache\\APCu',
'filelocking.enabled' => 'true',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
'host' => '/var/run/redis/redis.sock',
'port' => 0,
'timeout' => 0.0,
),
'maintenance' => false,
'theme' => '',
'enable_previews' => true,
);

We have to edit the file .user.ini as well:

sudo -u www-data vi /var/www/nextcloud/.user.ini

Replace the file with the following content:

upload_max_filesize=10240M
post_max_size=10240M
memory_limit=512M
mbstring.func_overload=0
always_populate_raw_post_data=-1
default_charset='UTF-8'
output_buffering='Off'

Save and quit (:wq!) the file.


Attention (10240M): on a 32-bit OS the maximum value is 2048M.
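At this point you can optionally verify the installation state from the command line using Nextcloud's built-in occ tool (occ status shows version and installation state, occ check validates the server environment):

cd /var/www/nextcloud
sudo -u www-data php occ status
sudo -u www-data php occ check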


Configure and enable a Nextcloud cron-job running as Webuser (www-data):

crontab -u www-data -e

Paste the following row to the crontab:

*/15 * * * * php -f /var/www/nextcloud/cron.php > /dev/null 2>&1

From now on, a Nextcloud cron job will run every 15 minutes as the web user (www-data). Switch back or log on to Nextcloud as administrator and change the cron setting from AJAX to Cron.

The change is saved automatically as soon as you select another section in the admin panel.

We recommend installing ufw (uncomplicated firewall) to secure your data and your server:

apt install ufw -y
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 22/tcp
ufw logging medium
ufw default deny incoming
ufw enable

If you have changed the default SSH port 22, please adjust the ufw rule as well. To verify the ufw settings and state just run:

ufw status verbose

Now, we are already finished and your Nextcloud is ready to use!


Enjoy your Nextcloud!

The following chapters are optional.


09. Mount additional storage to Nextcloud

You may enhance your Nextcloud with data from your NAS-share or an external hdd.

09.1. Mount your NAS data to a Nextcloud-user

This chapter is optional, but it is very simple to mount a NAS share into your Nextcloud using CIFS. First install cifs-utils:

apt install cifs-utils -y

Then store your credentials to a special file (e.g. /home/next/.smbcredentials)

vi ~/.smbcredentials

Write down your username and password:

username=NASuser
password=NASPassword

Save and quit (:wq!) the file and change the permissions to 0600:

chmod 0600 ~/.smbcredentials

Detect the ID of the webuser (www-data) using the id-command:

id www-data

and keep the id in mind to reuse it in /etc/fstab:

cp /etc/fstab /etc/fstab.bak
vi /etc/fstab

Paste the following to the end of fstab

//<NAS>/<share> /var/nc_data/next/files cifs user,uid=33,rw,iocharset=utf8,suid,credentials=/home/next/.smbcredentials,file_mode=0770,dir_mode=0770 0 0

Please substitute "//<NAS>/<share>", "next" and, if necessary, the uid=33, then try to mount your NAS manually first:

mount //<NAS>/<share>/

or

mount -a
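A quick way to confirm that the share is mounted where Nextcloud expects it (findmnt is part of util-linux; the target path is the one used in the fstab entry above):

findmnt /var/nc_data/next/files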

To unmount your NAS manually run

umount //<NAS>/<share>/

or

umount -a

It is necessary to rescan the data once before first use. Change to your Nextcloud directory and run Nextcloud's files:scan for the relevant Nextcloud user (e.g. next):

cd /var/www/nextcloud
sudo -u www-data php occ files:scan next -v


After Nextcloud's occ rescan, all data from the NAS will appear in the Nextcloud Files app.
The permissions script permissions.sh should be enhanced to unmount and remount the NAS share:

vi ~/permissions.sh

Add the umount and mount lines to the existing script, at the positions shown:

#!/bin/bash
find /var/www/ -type f -print0 | xargs -0 chmod 0640
find /var/www/ -type d -print0 | xargs -0 chmod 0750
chown -R www-data:www-data /var/www/
chown -R www-data:www-data /upload_tmp/
umount //<NAS>/<share>
chown -R www-data:www-data /var/nc_data/
mount //<NAS>/<share>
chmod 0644 /var/www/nextcloud/.htaccess
chmod 0644 /var/www/nextcloud/.user.ini
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/fullchain.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/privkey.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/chain.pem
chmod 600 /etc/letsencrypt/live/<yourcloud.dedyn.io>/cert.pem
chmod 600 /etc/ssl/certs/dhparam.pem

Please substitute the placeholders (<NAS>/<share> and <yourcloud.dedyn.io>) according to your environment, then save and quit (:wq!) the file. From now on, your NAS will always be available in Nextcloud.
If you are interested in mounting an external HDD to Nextcloud, continue with the next section:

09.2 Mount an external hdd to your Nextcloud

We prepare the new drive ‘/dev/sda’ for the use in Nextcloud. Please format it with an ‘ext4’ file system and mount it permanently with an entry in /etc/fstab.

Check the availability of the new drive:

sudo -s
fdisk -l /dev/sda

If available, make a new partition with the fdisk command.

fdisk /dev/sda
  1. Type ‘o’ to create a new partition table.
  2. Type ‘n’ to create a new partition.
  3. Choose the primary partition type, input ‘p’.
  4. Partition Number – we just need 1.
  5. Leave all default on the First sector and Last sector – Press Enter.
  6. Type ‘w’ and press enter to write the partition.

The ‘/dev/sda1’ partition has been created, now we have to format it to ‘ext4’ with the mkfs tool. Then check the volume size.

mkfs.ext4 /dev/sda1
fdisk -s /dev/sda1

Next, create a new local ‘nc_data’ directory and mount ‘/dev/sda1’ to that directory.

sudo mkdir -p /nc_data

To mount the new disk permanently, we add the mount configuration to the fstab file. Open fstab with vi:

vi /etc/fstab

Paste the configuration below at the end of the file.

/dev/sda1     /nc_data     ext4     defaults     0     1

Save fstab and exit.

Now mount the disk and make sure that there is no error.

mount -a
df -h

Lastly, you have to move your current Nextcloud data directory to the newly mounted directory

chown -R www-data:www-data /nc_data
rsync -av /var/nc_data/ /nc_data/

and point to it in Nextcloud’s config.php.

sudo -u www-data vi /var/www/nextcloud/config/config.php

Change the data-directory

...
'datadirectory' => '/nc_data',
...

Finally restart nginx (Nextcloud) and perform a new filescan:

cd /var/www/nextcloud
sudo -u www-data php occ files:scan --all -v

From now on, your Nextcloud data will be stored on your external HDD.


10. Recommended tweaks and hardenings

10.1 Make use of ramdisk
10.2 Prevent ctrl+alt+del
10.3 Install and enjoy fail2ban


10.1 Make use of ramdisk

Open /etc/fstab and add the following lines to enable a ramdisk for /tmp and /var/tmp

vi /etc/fstab

Add the following rows to the end of this file:

...
tmpfs /tmp       tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0
tmpfs /var/tmp   tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0
...

Save and quit (:wq!) the file and mount the tmpfs-filesystem manually.

mount -a
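You can confirm that both ramdisks are active with a standard df call:

df -h /tmp /var/tmp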

tmpfs (ramdisk) is now used by your server, and the setting persists across reboots. To also move cache directories to the ramdisk, create the file /etc/profile.d/xdg_cache_home.sh

vi /etc/profile.d/xdg_cache_home.sh

and paste the following two rows:

#!/bin/bash
export XDG_CACHE_HOME="/dev/shm/.cache"

Save and quit (:wq!) the file and make the script executable:

chmod +x /etc/profile.d/xdg_cache_home.sh

10.2 Prevent ctrl+alt+del

To prevent ctrl+alt+del from rebooting your server, run the following commands:

systemctl mask ctrl-alt-del.target
systemctl daemon-reload

10.3 Install and enjoy fail2ban

Now we will install fail2ban to mitigate brute-force and similar attacks.

apt update && apt install fail2ban -y

Create the Nextcloud-filter for fail2ban

vi /etc/fail2ban/filter.d/nextcloud.conf

and paste the following three lines

[Definition]
failregex = ^.*Login failed: '.*' \(Remote IP: '<HOST>'.*$
ignoreregex =

and add the following code to the new file /etc/fail2ban/jail.d/nextcloud.local:

vi /etc/fail2ban/jail.d/nextcloud.local
[nextcloud]
ignoreip = 192.168.2.0/24
enabled = true
port = 80,443
protocol = tcp
filter = nextcloud
maxretry = 3
bantime = 36000
findtime = 36000
logpath = /var/nc_data/nextcloud.log

Save and quit the file (:wq!) and re-start the fail2ban-service. Then have a look at fail2ban status:

service fail2ban restart
fail2ban-client status nextcloud
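Before relying on the jail, you can optionally test the filter regex against Nextcloud's log with fail2ban's bundled fail2ban-regex tool (the log path is the one configured in the jail above; it only reports matches, nothing is banned):

fail2ban-regex /var/nc_data/nextcloud.log /etc/fail2ban/filter.d/nextcloud.conf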

Log on to your Nextcloud with wrong credentials at least three times. The browser will display an error message and the fail2ban status will now list the banned IP.

You can remove the locked IP (banip) using this command

fail2ban-client set nextcloud unbanip <Banned IP>

Then reboot your system.


11. SSL certificate renewal

To renew your certificates automatically just create and enable a renewal-script that runs weekly or monthly executed by cron:

sudo -s
cd /root
vi renewal.sh

Paste the following lines:

#!/bin/bash
cd /etc/letsencrypt
echo "-------------------------------------"
echo "Renewals:"
certbot renew
echo "-------------------------------------"
result=$(find /etc/letsencrypt/live/ -type l -mtime -1 )
if [ -n "$result" ]; then
echo "Restarting services..."
/usr/sbin/service nginx stop
/usr/sbin/service mysql restart
/usr/sbin/service redis-server restart
/usr/sbin/service php7.1-fpm restart
/usr/sbin/service nginx start
echo "-------------------------------------"
fi
mail -s "ssl renewal" <your@mailserver.de> < /home/<ubuntuuser>/renewal.txt
exit 0

Please substitute the placeholders (mail address and <ubuntuuser>) and save the script (:wq!). Make the script executable and create a new cronjob.

chmod +x renewal.sh
crontab -e

Paste the following line to crontab

@weekly /root/renewal.sh > /home/<ubuntuuser>/renewal.txt 2>&1

Please decide whether to run the script weekly (@weekly) or e.g. monthly (@monthly). Save/quit (:wq!) the crontab and leave root using the exit command.

exit

Enjoy Nextcloud with your new automatic SSL certificate renewal.
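You can also test the renewal setup at any time without touching the real certificates by using certbot's dry-run mode, which talks to the Let's Encrypt staging environment:

certbot renew --dry-run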


12. Backup your Nextcloud and oDroid C2 eMMC

Create a shellscript and let cron handle your backups automatically. Create the backup.sh file

sudo -i
mkdir /work/backup -p
cd /root
vi backup.sh

and paste the following lines

#!/bin/bash
# the script runs as root via root's crontab below; 'sudo -s' is not needed inside a script
CURRENT_TIME_FORMAT="%d.%m.%Y"
BACKUP_FOLDER=/work/backup
FOLDERS_TO_BACKUP=(
 "/root/"
 "/etc/apticron/"
 "/etc/fail2ban/"
 "/etc/letsencrypt/"
 "/etc/logwatch/"
 "/etc/mysql/"
 "/etc/nginx/"
 "/etc/php/"
 "/etc/postfix/"
 "/etc/ssh/"
 "/etc/ssl/"
 "/usr/share/logwatch/"
 "/var/www/"
 )
ARCHIVE_FILE="/bkup/backup_$(date +$CURRENT_TIME_FORMAT).tar.gz"
echo "-------------------------------------"
echo "START: $(date)"
echo "-------------------------------------"
cd $BACKUP_FOLDER
for FOLDER in ${FOLDERS_TO_BACKUP[@]}
do
if [ -d "$FOLDER" ];
then
echo "Copying $FOLDER..."
rsync -AaRx --delete $FOLDER $BACKUP_FOLDER
else
echo "Skipping $FOLDER since it does not exist"
fi
done
echo "Copying fstab..."
cp /etc/fstab /work/backup/etc/
echo "Creating SQL Dumps:"
echo " - Nextcloud..."
mysqldump --lock-tables -unextcloud -p!YourPassword1~ nextcloud --add-drop-table --allow-keywords --complete-insert --quote-names | gzip -c > $BACKUP_FOLDER/nextcloud.sql.gz
# echo " - Roundcube..."
# mysqldump --lock-tables -uroundCube -p!YourPassword2~ roundcube --add-drop-table --allow-keywords --complete-insert --quote-names | gzip -c > $BACKUP_FOLDER/RoundCube.sql.gz
# echo " - WordPress..."
# mysqldump --lock-tables -uwordpress -p!YourPassword3~ wordpress --add-drop-table --allow-keywords --complete-insert --quote-names | gzip -c > $BACKUP_FOLDER/wordpress.sql.gz
echo "Creating archive $ARCHIVE_FILE..."
mkdir -p $(dirname $ARCHIVE_FILE)
tar -czf $ARCHIVE_FILE .
chmod -R 600 /bkup
echo "Size of archive: $(stat --printf='%s' $ARCHIVE_FILE | numfmt --to=iec)"
echo "-------------------------------------"
echo "Cleaning up..."
rm $BACKUP_FOLDER/nextcloud.sql.gz
# rm $BACKUP_FOLDER/RoundCube.sql.gz
# rm $BACKUP_FOLDER/wordpress.sql.gz
rm $BACKUP_FOLDER/etc/fstab
echo "Delete backups older than 5 days..."
ls -1t /bkup/ | tail -n +6 | xargs -r -I{} rm "/bkup/{}"
echo "-------------------------------------"
echo "END: $(date)"
echo "-------------------------------------"
mail -s "Backup - $(date +$CURRENT_TIME_FORMAT)" -a "From: Your Name <your@email.com>" your@email.com < /home/<ubuntuuser>/backup.txt
exit 0

Then create a regular cron-job

sudo crontab -u root -e

Paste the following row:

55 23 * * * /root/backup.sh >> /home/<ubuntuuser>/backup.txt 2>&1

Save and quit, then make the script executable:

chmod +x backup.sh

From now on your Nextcloud will be backed up by cron every day. Leave root:

exit

To backup your eMMC Modul run

fdisk -l

Disk /dev/mmcblk0: 14,6 GiB, 15634268160 bytes, 30535680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xea4f0000

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1        2048   264191   262144  128M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      264192 30534656 30270465 14,4G 83 Linux

and copy the End sector of the last partition (here: 30534656) into the "count" parameter:

dd if=/dev/mmcblk0 bs=512 count=30534656| gzip > /home/<ubuntuuser>/bkup/server.img.gz

Substitute your <ubuntuuser> accordingly.

The whole eMMC will now be cloned and compressed. You can restore from the resulting server.img.gz file by running:

gunzip -c /home/<ubuntuuser>/bkup/server.img.gz | dd of=/dev/mmcblk0

The whole eMMC would be restored, so please be careful.
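For completeness, here is a minimal sketch of how a restore from one of the daily archives could look. Paths and the database name are taken from backup.sh above; the archive date is only an example, and configuration files should be copied back selectively:

# extract a chosen archive into a temporary location
mkdir -p /work/restore
tar -xzf /bkup/backup_15.08.2017.tar.gz -C /work/restore
# example: restore the Nextcloud database from the dump inside the archive
gunzip -c /work/restore/nextcloud.sql.gz | mysql -unextcloud -p nextcloud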


13. Server hardening


13.1 Disable IPv6

Edit sysctl.conf as follows

cp /etc/sysctl.conf /etc/sysctl.conf.bak
vi /etc/sysctl.conf
...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
...

and reload the configuration by

sysctl -p

or, more simply, just create a new configuration file

vi /etc/sysctl.d/01-disable-ipv6.conf

and paste the following row

net.ipv6.conf.all.disable_ipv6 = 1

Then save and quit the file (:wq!), reboot your server and validate ipv6 is disabled:

ip a | grep inet6

If no output appears, IPv6 is disabled.

Disable IPv6 in the firewall (ufw)

Edit the ufw-config

apt install ufw -y
ip6tables -P INPUT DROP && ip6tables -P OUTPUT DROP && ip6tables -P FORWARD DROP
cp /etc/default/ufw /etc/default/ufw.bak
vi /etc/default/ufw

and set IPV6 to 'no' (or comment the line out).

IPV6=no

13.2 Enable and configure the ufw

Enable ufw (first disable, then enable) and you are set.

ufw disable
ufw enable

Specifically, we will allow only the three needed services: HTTP, HTTPS and SSH:

ufw allow 80/tcp
ufw allow 443/tcp
ufw allow sshport/tcp
ufw logging medium

Please substitute sshport according to your sshd_config (e.g. 1234).

In addition we will set a deny rule for all the other incoming requests

ufw default deny incoming

The status

ufw status verbose

should look like


Status: active
Logging: on (medium)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
1234/tcp                   ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere

13.3 Prevent IP Spoofing

Switch back to your terminal and type the following

cp /etc/host.conf /etc/host.conf.bak
vi /etc/host.conf

Add/edit the following lines

# order hosts,bind
# multi on
order bind,hosts
nospoof on

Reboot your server to ensure all changes are in place.


13.4 Check your environment using nmap

Install nmap and check your system.

apt install nmap -y

First run

nmap -v -sT localhost

Your output should look similar to mine

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Completed Connect Scan at 21:30, 0.02s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00013s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
1234/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Additionally run this command:

nmap -v -sS localhost

Your output should look similar to mine once more.

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Completed SYN Stealth Scan at 21:35, 1.60s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000048s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
1234/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 1.68 seconds
           Raw packets sent: 1060 (46.640KB) | Rcvd: 2124 (89.216KB)

13.5 Install POSTFIX to send server mails

Please install the packages postfix, libsasl2-modules and mailutils:

apt install postfix libsasl2-modules mailutils -y

and start configuring your mail relay.

When the Postfix installation screen appears, select <Satellite system>.


Postfix will ask you for the system mail name; you can confirm the shown entry, e.g. yourcloud. Then you will be asked for the SMTP relay server name, e.g. w12345.kasserver.com. Please fill in your mail server name accordingly.


Finish the installation <OK>. Now edit the configuration of postfix

cp /etc/postfix/main.cf /etc/postfix/main.cf.bak
vi /etc/postfix/main.cf

and add the following lines

...
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_password

Save and quit (:wq!) this file.

Our complete but exemplary main.cf:

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = <YOURCLOUD-HOSTNAME-fqn>
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, <YOURCLOUD-HOSTNAME-fqn>, localhost.localdomain, localhost
relayhost = [your.mailserver.de]:587
sender_canonical_maps = hash:/etc/postfix/sender_canonical
mynetworks = 127.0.0.0/8
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
inet_protocols = all
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
compatibility_level=2

Create a new file containing your credentials to connect to your mailserver.

vi /etc/postfix/sasl_password

Enter your credentials as in the following example

your.mailserver.de a987654:PassWorD

and change the access level of this file to 0600.

chmod 600 /etc/postfix/sasl_password

Finally, hand the information over to Postfix.

postmap hash:/etc/postfix/sasl_password

By default, mails would be sent as user@hostname (e.g. root@localhost), but many mail servers reject such mails. That's why we add a new row to the Postfix configuration file:

vi /etc/postfix/main.cf

Add the following line to the config file

...
sender_canonical_maps = hash:/etc/postfix/sender_canonical

Save and quit (:wq!) the configuration and create the referred new file

vi /etc/postfix/sender_canonical

Add both lines and adjust the parameters according to your environment

root you@mailserver.de
www-data you@mailserver.de

This will assign your email address to the root and www-data users. We have to hand this information over to Postfix again

postmap /etc/postfix/sender_canonical

Finally we add postfix to the autostart and start the service

update-rc.d postfix defaults
service postfix restart

From now on you are able to send system mails. Please verify the functionality

vi testmail.txt

Add any kind of text to your demofile, e.g.

My first system mail

Save and quit the testfile (:wq!) and send your first manual system mail

mail -s "yourcloud - Testmail" you@mailserver.de < testmail.txt

Check the logfile

cat /var/log/mail.log

and also check your mail client to see whether the mail arrived.

Postfix administration tasks:

[a] have a look in your actual mailqueue: mailq

[b] flush / re-send your mail(s)-queue: postfix flush

[c] delete all mails in your mailqueue: postsuper -d ALL

FAIL2BAN – system mails

We substitute the root user in the fail2ban config to receive fail2ban status mails in the future. Those mails contain both the fail2ban status (stopped/started) and, in case of failed logins, the banned IP(s). Edit the fail2ban configuration file

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.conf.bak
vi /etc/fail2ban/jail.conf

and substitute at least the following parameters according to your system:

...
destemail = you@mailserver.de
...
sender = you@mailserver.de
...
mta = mail
...
# action = %(action_)s
action = %(action_mwl)s
...

Save and quit (:wq!) the fail2ban configuration, restart fail2ban

service fail2ban restart

and receive emails from FAIL2BAN.


13.6 Apticron

If you use apticron, your system will send emails when system updates are available.

apt install apticron -y

After having installed apticron you should edit the config and substitute at least EMAIL, SYSTEM, NOTIFY_NO_UPDATES and CUSTOM_FROM.

cp /etc/apticron/apticron.conf /etc/apticron/apticron.conf.bak
vi /etc/apticron/apticron.conf
...
EMAIL="you@mailserver.de"
...
SYSTEM="yourcloud.dedyn.io"
...
NOTIFY_HOLDS="1"
...
NOTIFY_NO_UPDATES="1"
...
CUSTOM_SUBJECT='$SYSTEM: $NUM_PACKAGES package update(s)'
...
CUSTOM_NO_UPDATES_SUBJECT='$SYSTEM: no updates available'
...
CUSTOM_FROM="you@mailserver.de"
...

To run and check APTICRON just call

apticron

and you will receive an email sent by APTICRON. Now you are a little bit more secure.

cp /etc/cron.d/apticron /etc/cron.d/apticron.bak
vi /etc/cron.d/apticron
30 8 * * * root if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi

Apticron will now be executed by cron.d. You can change the start time, e.g. to 8:30 AM daily.


13.7 Two (2)-Factor-Authentication (2FA) for SSH

The following steps are critical and only recommended for advanced Linux users. If the SSH configuration fails, you will no longer be able to log in to your system via SSH. The mandatory prerequisite is an SSH server that you can already log on to using public/private key authentication!

Install the software for 2FA (two-factor authentication), to be used with your preferred OTP app:

apt install libpam-google-authenticator -y

Leave the root-Shell and run the following command as your <ubuntuuser>:

exit
google-authenticator

You will be asked for:

Do you want authentication tokens to be time-based (y/n) y
Do you want me to update your "~/.google_authenticator" file (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) n
If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Change to the root-Shell again

sudo -s

Then backup the current configuration and configure your ssh server

cp /etc/pam.d/sshd /etc/pam.d/sshd.bak
vi /etc/pam.d/sshd

Change the file to mine:

@include common-auth
@include common-password
auth required pam_google_authenticator.so
account required pam_nologin.so
@include common-account
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
session required pam_loginuid.so
session optional pam_keyinit.so force revoke
@include common-session
session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate
session optional pam_mail.so standard noenv # [1]
session required pam_limits.so
session required pam_env.so # [1]
session required pam_env.so user_readenv=1 envfile=/etc/default/locale
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open

Save and quit (:wq!) the file.

If not already created please create your 4096 bit RSA Key (SSH) first:

cd ~
ssh-keygen -q -f /etc/ssh/ssh_host_rsa_key -N '' -b 4096 -t rsa

Then backup, edit and change your SSH-config to examplarily mine

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
vi /etc/ssh/sshd_config
# Port 22
Port 1234
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
UsePrivilegeSeparation yes
KeyRegenerationInterval 3600
ServerKeyBits 4096
SyslogFacility AUTH
LogLevel INFO
LoginGraceTime 30
PermitRootLogin no
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
IgnoreRhosts yes
RhostsRSAAuthentication no
HostbasedAuthentication no
IgnoreUserKnownHosts yes
PermitEmptyPasswords no
ChallengeResponseAuthentication yes
PasswordAuthentication no
X11Forwarding no
X11DisplayOffset 10
PrintMotd no
PrintLastLog no
TCPKeepAlive yes
Banner /etc/issue
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
AllowUsers ubuntuuser
AuthenticationMethods publickey,password publickey,keyboard-interactive

If you changed the SSH port to e.g. 1234, please ensure you have also changed your ufw configuration, and adjust the username in ‘AllowUsers ubuntuuser’.
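Before restarting the SSH daemon it is worth validating the new configuration, so that a typo does not lock you out (sshd -t only checks the syntax of sshd_config and the host keys):

/usr/sbin/sshd -t && echo "sshd_config OK"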

Paste your public key into your <ubuntuuser>'s authorized_keys file (see Ubuntu's how-to):

exit
vi ~/.ssh/authorized_keys

Save and quit (:wq!), then switch back to sudo mode:

sudo -s

Then restart your ssh server

service ssh restart

and log on to your server again. You will be prompted for your password first and then for your new second factor.

Public Key authentication and ssh-user password
Verification code (OTP 2FA)
Logged on

Start your OTP app (on iOS or Android) and read off your second factor.

To gain access to your system using ssh you’ll now need the private key, the password and the OTP-password…safe!


13.8 Check your system (nmap)

Install nmap and check your system.

apt install nmap -y

Run

nmap -v -sT localhost && nmap -v -sS localhost

and verify both results:

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 443/tcp on 127.0.0.1
Completed Connect Scan at 21:30, 0.02s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00013s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  https
1234/tcp open  ssh
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

and

...
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 1234/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 443/tcp on 127.0.0.1
Completed SYN Stealth Scan at 21:35, 1.60s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000048s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  https
1234/tcp open  ssh
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 1.68 seconds
           Raw packets sent: 1060 (46.640KB) | Rcvd: 2124 (89.216KB)

Your results should look like mine (ports 25, 80, 443, 1234, 3306).

13.9 logwatch

Install logwatch

sudo -s
apt update && apt install logwatch -y

Copy the default configuration files to the logwatch folder:

cp /usr/share/logwatch/default.conf/logfiles/http.conf /etc/logwatch/conf/logfiles/nginx.conf
cp /usr/share/logwatch/default.conf/services/http.conf /etc/logwatch/conf/services/nginx.conf
cp /usr/share/logwatch/scripts/services/http /usr/share/logwatch/scripts/services/nginx
cp /usr/share/logwatch/default.conf/services/http-error.conf /etc/logwatch/conf/services/nginx-error.conf
cp /usr/share/logwatch/scripts/services/http-error /etc/logwatch/scripts/services/nginx-error
cp /etc/logwatch/conf/logfiles/nginx.conf /etc/logwatch/conf/logfiles/nginx.conf.org.bak

Edit /etc/logwatch/conf/logfiles/nginx.conf so that it matches the version below:

vi /etc/logwatch/conf/logfiles/nginx.conf

Replace the whole file content with the following:

########################################################
# Define log file group for NGINX
########################################################

# What actual file? Defaults to LogPath if not absolute path....
#LogFile = httpd/*access_log
#LogFile = apache/*access.log.1
#LogFile = apache/*access.log
#LogFile = apache2/*access.log.1
#LogFile = apache2/*access.log
#LogFile = apache2/*access_log
#LogFile = apache-ssl/*access.log.1
#LogFile = apache-ssl/*access.log
LogFile = nginx/*access.log
LogFile = nginx/*error.log
LogFile = nginx/*access.log.1
LogFile = nginx/*error.log.1

# If the archives are searched, here is one or more line
# (optionally containing wildcards) that tell where they are...
#If you use a "-" in naming add that as well -mgt
#Archive = archiv/httpd/*access_log.*
#Archive = httpd/*access_log.*
#Archive = apache/*access.log.*.gz
#Archive = apache2/*access.log.*.gz
#Archive = apache2/*access_log.*.gz
#Archive = apache-ssl/*access.log.*.gz
#Archive = archiv/httpd/*access_log-*
#Archive = httpd/*access_log-*
#Archive = apache/*access.log-*.gz
#Archive = apache2/*access.log-*.gz
#Archive = apache2/*access_log-*.gz
#Archive = apache-ssl/*access.log-*.gz
Archive = nginx/*access.log.*.gz
Archive = nginx/*error.log.*.gz

# Expand the repeats (actually just removes them now)
*ExpandRepeats

# Keep only the lines in the proper date range...
*ApplyhttpDate

# vi: shiftwidth=3 tabstop=3 et

Save and quit (:wq!) this file and edit /etc/logwatch/conf/services/nginx.conf:

cp /etc/logwatch/conf/services/nginx.conf /etc/logwatch/conf/services/nginx.conf.org.bak
vi /etc/logwatch/conf/services/nginx.conf

Change the title from http to NGINX, or replace the whole file content with the version below:

###########################################################################
# Configuration file for NGINX filter
###########################################################################

Title = "NGINX"

# Which logfile group...
LogFile = NGINX

# Define the log file format
#
# This is now the same as the LogFormat parameter in the configuration file
# for httpd. Multiple instances of declared LogFormats in the httpd
# configuration file can be declared here by concatenating them with the
# '|' character. The default, shown below, includes the Combined Log Format,
# the Common Log Format, and the default SSL log format.
#$LogFormat = "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"|%h %l %u %t \"%r\" %>s %b|%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

# The following is supported for backwards compatibility, but deprecated:
# Define the log file format
#
# the only currently supported fields are:
# client_ip
# request
# http_rc
# bytes_transfered
# agent
#
#$HTTP_FIELDS = "client_ip ident userid timestamp request http_rc bytes_transfered referrer agent"
#$HTTP_FORMAT = "space space space brace quote space space quote quote"
# Define the field formats
#
# the only currently supported formats are:
# space = space delimited field
# quote = quoted ("..") space delimited field
# brace = braced ([..]) space delimited field

# Flag to ignore 4xx and 5xx error messages as possible hack attempts
#
# Set flag to 1 to enable ignore
# or set to 0 to disable
$HTTP_IGNORE_ERROR_HACKS = 0

# Ignore requests
# Note - will not do ANY processing, counts, etc... just skip it and go to
# the next entry in the log file.
# Note - The match will be case insensitive; e.g. /model/ == /MoDel/
# Examples:
# 1. Ignore all URLs starting with /model/ and ending with 1 to 10 digits
# $HTTP_IGNORE_URLS = ^/model/\d{1,10}$
#
# 2. Ignore all URLs starting with /model/ and ending with 1 to 10 digits and
# all URLS starting with /photographer and ending with 1 to 10 digits
# $HTTP_IGNORE_URLS = ^/model/\d{1,10}$|^/photographer/\d{1,10}$
# or simply:
# $HTTP_IGNORE_URLS = ^/(model|photographer)/\d{1,10}$

# To ignore a range of IP addresses completely from the log analysis,
# set $HTTP_IGNORE_IPS. For example, to ignore all local IP addresses:
#
# $HTTP_IGNORE_IPS = ^10\.|^172\.(1[6-9]|2[0-9]|3[01])\.|^192\.168\.|^127\.
#

# For more sophisticated ignore rules, you can define HTTP_IGNORE_EVAL
# to an arbitrary chunk of code.
# The default is not to filter anything:
$HTTP_IGNORE_EVAL = 0
# Example:
# $HTTP_IGNORE_EVAL = "($field{http_rc} == 401) && ($field{client_ip}=~/^192\.168\./) && ($field{url}=~m%^/protected1/%)"
# See the "scripts/services/http" script for other variables that can be tested.

# The variable $HTTP_USER_DISPLAY defines which user accesses are displayed.
# The default is not to display user accesses:
$HTTP_USER_DISPLAY = 0
# To display access failures:
# $HTTP_USER_DISPLAY = "$field{http_rc} >= 400"
# To display all user accesses except "Unauthorized":
# $HTTP_USER_DISPLAY = "$field{http_rc} != 401"

# To raise the needed level of detail for one or more specific
# error codes to display a summary instead of listing each
# occurrence, set a variable like the following ones:
# Raise 403 codes to detail level High
#$http_rc_detail_rep_403 = 10
# Always show only summary for 404 codes
#$http_rc_detail_rep_404 = 20

# vi: shiftwidth=3 tabstop=3 et

Save and quit the file (:wq!) and disable the default Apache configuration files:

cd /usr/share/logwatch/default.conf/services
mv http-error.conf http-error.conf.bak && mv http.conf http.conf.bak

Finally, we create a cronjob that sends the logwatch report automatically:

crontab -e

Paste the following row:

@daily /usr/sbin/logwatch --output mail --mailto your@mail.com --format html --detail high --range yesterday > /dev/null 2>&1

Save and quit crontab and check if logwatch is configured properly:

/usr/sbin/logwatch --output mail --mailto your@mail.com --format html --detail high --range yesterday

You should receive an email from logwatch. From now on you will receive these daily mails containing your system summary.
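
If you prefer to check the output without sending a mail, logwatch can also print to the terminal, e.g. limited to the nginx service we just configured:

/usr/sbin/logwatch --output stdout --format text --detail high --range today --service nginx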


14. monitor your entire system using netdata

Start by downloading netdata – the directory ‘netdata’ will be created:

sudo -s
apt install apache2-utils
cd /usr/local/src
git clone https://github.com/firehol/netdata.git --depth=1
cd netdata

Create a password file to protect netdata:

htpasswd -c /etc/nginx/netdata-access YourName
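
The -c option creates the file and would overwrite an existing one; to add further users later, run htpasswd without -c (the user name below is just a placeholder):

htpasswd /etc/nginx/netdata-access AnotherUser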

Then run the script netdata-installer.sh with root privileges to build, install and start netdata:

./netdata-installer.sh
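
Once the installer has finished, you can verify that netdata is running and listening on its default port 19999:

ss -tlpn | grep 19999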

Netdata is now installed. We will make a few small adjustments to netdata’s configuration:

vi /etc/netdata/netdata.conf

First we change the value for “history” to e.g. 14400 (4 hours of chart data retention, uses about 60 MB of RAM) in the [global] section:

 history = 14400

Then we change the binding in the [web] section to localhost (127.0.0.1) only:

 bind to = 127.0.0.1

Lastly, we disable all the IPv6 charts in the three sections [system.ipv6], [ipv6.packets] and [ipv6.errors] by setting “enabled = no”:

...
[system.ipv6]
 # history = 3996
 enabled = no
...
[ipv6.packets]
 # history = 3996
 enabled = no
...
[ipv6.errors]
 # history = 3996
 enabled = no
...

Save and quit the file (:wq!). Finally, we extend the nextcloud.conf and nginx.conf files with the netdata web server configuration:

vi /etc/nginx/conf.d/nextcloud.conf

Add the netdata rows (the two /netdata location blocks) as shown below to the nextcloud.conf:

...
location / {
 rewrite ^ /index.php$uri;
}
location /netdata {
 return 301 /netdata/;
}
location ~ /netdata/(?<ndpath>.*) {
 auth_basic "Restricted Area";
 auth_basic_user_file /etc/nginx/netdata-access;
 proxy_http_version 1.1;
 proxy_pass_request_headers on;
 proxy_set_header Connection "keep-alive";
 proxy_store off;
 proxy_pass http://netdata/$ndpath$is_args$args;
 gzip on;
 gzip_proxied any;
 gzip_types *;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
 deny all;
...

Create the new /etc/nginx/proxy.conf and /etc/nginx/conf.d/stub_status.conf:

vi /etc/nginx/proxy.conf

Paste all the following rows:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_redirect off;

Save and quit the file (:wq!) and create the stub_status.conf

vi /etc/nginx/conf.d/stub_status.conf

Paste all the following rows:

server {
 listen 127.0.0.1:80 default_server;
 server_name 127.0.0.1;
 location /nginx_status {
 stub_status on;
 allow 127.0.0.1;
 deny all;
 }
}

Save and quit the file (:wq!) and modify the file /etc/nginx/nginx.conf:

...
http {
 server_names_hash_bucket_size 64;
 proxy_headers_hash_max_size 512;
 upstream php-handler {
 server unix:/run/php/php7.1-fpm.sock;
 }
 upstream netdata {
 server 127.0.0.1:19999;
 keepalive 64;
 }
...

Save and quit the file (:wq!) and adjust the ufw firewall:

ufw allow 19999/tcp
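
You can verify the new rule afterwards:

ufw status numbered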

Then check NGINX

nginx -t

and if no errors appear just restart netdata and nginx

service netdata restart && service nginx restart

and call netdata in your browser

https://your.dedyn.io/netdata

or as an external site in your Nextcloud.
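
You can also run two quick checks from the command line (a sketch, assuming your domain is your.dedyn.io): the first should print the NGINX stub status counters, the second should answer with an HTTP 401 (Unauthorized) response as long as no credentials are supplied, confirming that the basic auth protection is active:

curl http://127.0.0.1/nginx_status
curl -I https://your.dedyn.io/netdata/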

 


Have fun and enjoy your Nextcloud!

Carsten Rieger