
Author: crossRT

Fix sonarr “Unable to communicate with SkyHook.”

Today I set up sonarr on my TrueNAS machine. I thought it would just work the same as my radarr instance, but I ended up with the following error when trying to add a new series: Unable to communicate with SkyHook.

I tried googling it, but it seems not many people are facing the same issue. Luckily I found a discussion on the TrueNAS forum that led me to some clues: https://www.truenas.com/community/threads/sonarr-radarr-probably-other-arr-jails-unable-to-verify-ssl-certificates-after-latest-update.96008/

According to the post, this is a known issue: FreeBSD’s Mono 6.8 package is missing its root CA store. What we are going to do is manually download the root CAs and sync them into the jail.

Here is my solution:

# get into the shell of the jail
# 5 is my sonarr jail ID
# if you don't know your jail ID, run `jls`
jexec 5 sh

# install the useful wget command in order to download file
pkg install wget

# download the root CA and pipe it to cert-sync
wget -O - https://curl.haxx.se/ca/cacert.pem | cert-sync --user /dev/stdin

# at this stage, exit the jail and stop it

# execute following command on your TrueNAS shell. (NOT jail shell)
cp -R /mnt/main-pool/iocage/jails/sonarr/root/root/.config/.mono/ /mnt/main-pool/iocage/releases/12.2-RELEASE/root/usr/share/.mono

Notes for the last command:
1. replace main-pool with your pool’s name.
2. replace sonarr with your jail’s name.
3. replace 12.2-RELEASE with the FreeBSD release that your jail is using.
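The copy step above, with the three substitutions from the notes pulled out as shell variables — a sketch that only prints the final command, so you can review the paths before running it yourself on the TrueNAS shell:

```shell
# assumptions: pool "main-pool", jail "sonarr", release "12.2-RELEASE" --
# change these three variables to match your system
POOL="main-pool"
JAIL="sonarr"
RELEASE="12.2-RELEASE"

SRC="/mnt/${POOL}/iocage/jails/${JAIL}/root/root/.config/.mono/"
DST="/mnt/${POOL}/iocage/releases/${RELEASE}/root/usr/share/.mono"

# print the command for review; run it once the paths look right
echo "cp -R ${SRC} ${DST}"
```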

That’s all. Restart your sonarr jail and try adding a new series again; it should work now.


Laravel 9: avoid using `Storage::exists()`

I use the Storage facade to manage my Laravel applications’ files and folders a lot, either on server storage or on S3-compatible storage like DigitalOcean Spaces.

Recently I kicked off a new microservice project using Laravel 9, and I used the Storage facade the same way I did in previous projects.

This time I needed Storage::exists() to check whether a file exists in a fairly large DigitalOcean Spaces bucket (around 300GB). But the check was pretty slow: sometimes it takes 3 seconds, and sometimes more than 10 seconds for a single file.

The problem seemed to happen quite randomly, because the same method in my older projects running other Laravel versions doesn’t have this issue. I couldn’t reproduce the slowness with another S3 client either, so I decided to dig in and see what was going on.

Eventually I found this in Laravel’s FilesystemAdapter.php — Storage::exists() simply delegates to the underlying driver’s has():

public function exists($path)
{
    return $this->driver->has($path);
}

This leads me to the driver League\Flysystem\Filesystem, where has() is implemented (roughly) like the following:

public function has(string $location): bool
{
    return $this->fileExists($location) || $this->directoryExists($location);
}

And YES. This explains why checking a single file takes longer than expected: has() also has to rule out the path being a directory, and directoryExists() on S3-compatible storage means listing objects under that prefix.

Workaround

Use Storage::disk('s3')->fileExists($path) to check the file directly.

It avoids the call to directoryExists($path), which is what causes the slowness on a huge S3 storage.


Setup AdGuardHome on TrueNAS

Recently I found a good replacement for PiHole called AdGuardHome.

PiHole is good, but it requires at least one Raspberry Pi running. That can be a problem if you are short of power sockets or don’t have enough LAN ports on your router.

For me, it’s a good replacement for PiHole because:
– I already have a TrueNAS server wired into my home network. Installing AdGuardHome on it doesn’t require any additional socket or space.
– I think my TrueNAS spec should perform better than a Raspberry Pi 4. No plan to waste it. =P

Here’s how I set up AdGuardHome on my TrueNAS:

# create a jail
iocage create -n AdGuardHome --release 12.2-RELEASE

# you might want to configure your jail network in your web GUI!

# access into the newly created jail console
iocage console AdGuardHome

# run the installation script from AdGuardHome github repo
# you can check it here: https://github.com/AdguardTeam/AdGuardHome#automated-install-linux-and-mac
curl -s -S -L https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sh -s -- -v

# done!
# note: the AdGuardHome web portal runs on port 3000 by default
# you can access your AdGuardHome web portal using the network IP you set earlier.
# for example: http://192.168.0.xxx:3000

After AdGuardHome is installed, make sure you update your router’s primary DNS setting OR your DHCP server’s primary DNS so that your clients start sending their DNS queries to AdGuardHome.

Once your installation and configuration succeed, you should see statistics like this on your AdGuardHome web portal! =D


My TrueNAS setup

I owned my very first NAS about 10 years ago – a Synology DS110J, a very entry-level NAS for a beginner like me at that time. No-hassle setup; basically everything was plug-and-use.
I learned a lot of things from the whole experience: networking, SMB, FTP, basic shell, multimedia streaming, and so on. It brought me a lot of joy and made me feel I was really good at these IT technologies. Slowly I continued to learn more and more through my life and career. I’d say it’s actually one of my learning motivations.

After years of using the Synology DS110J (around 2015), I felt it was insufficient for me in terms of storage and the customization I wanted for my home media server setup. And most importantly, I had some extra money to spend at that time. =D

Basically, I purchased all the components (CPU, motherboard, RAM, and hard disks) and assembled them myself. In the beginning, I chose to run Ubuntu Server on it, which I was already familiar with from my learning journey. But after a few weeks of using it, I felt it was not what I wanted. I did some research, and finally I found FreeNAS (something new for me to learn at that time =D).

Indeed, jumping to FreeNAS at that time was a bit challenging for me, because Linux and FreeBSD sound identical but are technically totally different things. I made a few fatal mistakes that left my whole setup insecure and put my files at risk of data loss.

But the good thing is: I improved a lot again through the entire experience. I was able to use FreeNAS to set up the home media server that I wanted. And I truly enjoy having it in my life. =)

Until now, I’m using TrueNAS (think of it as FreeNAS renamed) as the primary NAS storage in my home network. I did the “upgrade” recently, which is why I planned to write a post to share my setup.

Basically, the purposes of this TrueNAS in my house:
1. Plex: service to stream my movies
2. Transmission: torrent downloader
3. Nextcloud: sharing my files with friends
4. WordPress backup: with UpdraftPlus S3 support.
5. Mac Time Machine: dedicated backup storage for my Mac.
6. AdGuardHome: a replacement for PiHole that I found recently.

Soon, I’ll share the setup for each individual service running on my TrueNAS instance here. Stay tuned. =)


Containerize old PHP/Laravel application with Apache and SSL

Recently I migrated my websites and some applications from a 7-year-old server to a new one. I then realized some PHP applications don’t work on the latest PHP 7.4, so I had to create a simple Docker container to run them.

Specifically, it’s Invoice Ninja v3.4.1.
This application has been running on my server for more than 5 years.
Because I’m too lazy to upgrade it to the latest version manually, I figured letting it run in a container with a specific PHP version would be good enough for me.

Following is my Dockerfile:

# use any PHP version as you need
FROM php:7.0.33-apache

# install any PHP extensions as you need
# (ext/mysql was removed in PHP 7, so use mysqli instead)
RUN docker-php-ext-install pdo_mysql mysqli

# install ssl-cert to generate a self-signed cert
RUN apt-get update && apt-get install -y ssl-cert

# enable apache2 mod
RUN a2enmod rewrite
RUN a2enmod ssl

# simple virtualhost definition
RUN echo '\n\
<VirtualHost *:443>\n\
  <Directory /srv>\n\
    Options Indexes FollowSymLinks\n\
    AllowOverride All\n\
    Require all granted\n\
  </Directory>\n\
  DocumentRoot /srv/public \n\
  SSLEngine on \n\
  SSLCertificateFile  /etc/ssl/certs/ssl-cert-snakeoil.pem \n\
  SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key \n\
</VirtualHost>\n'\
> /etc/apache2/sites-available/000-default.conf

WORKDIR /srv

CMD ["apache2-foreground"]

With this Dockerfile:
1. it installs the PHP extensions needed to run the application.
2. it has a self-signed certificate to enable simple SSL.
3. it lets us mount any kind of PHP application and use it immediately.

To run the application with this Dockerfile:
1. Put the Dockerfile in your application root directory.

2. Build the image:
docker build -t YOUR-APPLICATION-IMAGE:custom .

3. Run the container with the image, mounting the application directory to the container’s /srv and publishing the HTTPS port:
docker run --rm --name CONTAINER_NAME -p 443:443 -v /path/to/your/application:/srv YOUR-APPLICATION-IMAGE:custom

That’s all~
With this trick, you should be able to run any web application that requires a PHP version lower than 7.4.
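If you’d rather not retype the flags every time, the same build-and-run can be captured in a Compose file — a sketch with placeholder service and image names, which also publishes port 443 so you can reach the Apache SSL virtualhost:

```yaml
# docker-compose.yml -- hypothetical service/image names
services:
  legacy-app:
    build: .                            # uses the Dockerfile above
    image: YOUR-APPLICATION-IMAGE:custom
    container_name: CONTAINER_NAME
    ports:
      - "443:443"                       # Apache SSL virtualhost
    volumes:
      - ./:/srv                         # mount the application root at /srv
```

Then `docker compose up -d` builds the image if needed and starts the container.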


Install PHP Xdebug on M1 Macbook Pro

Installing Xdebug on an M1 MacBook Pro can be very tricky due to the CPU architecture change.

This post shows how I installed it on my machine:

  1. Make sure you installed your PHP through brew. For me, I have PHP v7.4.15 installed.
  2. Install Xdebug with pecl:
    arch -arm64 sudo pecl install xdebug
  3. Locate the Xdebug path on your system. For me, it’s installed at /opt/homebrew/Cellar/php@7.4/7.4.15_1/pecl/20190902/xdebug.so
  4. Make sure xdebug.so is loaded correctly in php.ini. You can check against the following block.
zend_extension="/opt/homebrew/Cellar/php@7.4/7.4.15_1/pecl/20190902/xdebug.so"
xdebug.mode=debug
xdebug.client_host=localhost
xdebug.client_port=9000

DONE~
You may start using Xdebug to debug your PHP application. =)


Fix Nextcloud login form Content Security Policy issue

I got the following error when my Nextcloud instance, hosted on a FreeNAS machine, sat behind an Nginx reverse proxy.

Refused to send form data to ‘http://cloud.example.com/’ because it violates the following Content Security Policy directive: “form-action ‘self’ cloud.example.com”.

It’s because the host is enabled with HTTPS, but the Nextcloud instance runs as HTTP behind the reverse proxy, so Nextcloud itself doesn’t know it should run as HTTPS.

Solution

We can add the following line into config/config.php:

  'overwriteprotocol' => 'https',

Basically, it tells Nextcloud to treat the instance as served over HTTPS. It resolved the problem perfectly, instead of struggling with the Nginx Content Security Policy directive.
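If your instance sits behind a reverse proxy anyway, a slightly fuller config/config.php fragment might look like the following — the hostname and proxy IP here are examples, replace them with your own:

```php
'overwriteprotocol' => 'https',
'overwrite.cli.url' => 'https://cloud.example.com',
'trusted_proxies'   => ['192.168.0.10'],  // your Nginx proxy's IP
```

You can also set these values with the occ tool, e.g. php occ config:system:set overwriteprotocol --value="https".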

My setup info

FreeNAS version: FreeNAS-11.3-U1
Nextcloud version: 21.0.2

Nextcloud directory path in FreeNAS jail: /usr/local/www/nextcloud

Reference

  • https://github.com/nextcloud/server/issues/17409#issuecomment-538684976
  • https://content-security-policy.com/

How to install Netflix on Unifi Plus Box

As of the date of writing this post, Unifi users are still not able to install Netflix on the Unifi Plus Box.

It’s because the official Netflix app verifies the machine when it launches.

But if you insist on having Netflix installed on your Unifi Plus Box, you can still install a modded Netflix app through an APK file.

Disclaimer: Do this at your own risk! I’m not the owner of the modded APK and I don’t take any responsibility. Stop here if you have concerns!

You can get the modded APK from this XDA forum post:
https://forum.xda-developers.com/t/v7-aidans-rom-s905x-atv-9-no-lag-bloat-1-2gb-2021-update-tv-netflix.4191157/

How to install:

Option 1: Copy the APK file into a USB drive, and then install it via file manager.

Option 2: Install it through network with adb command.

# connect your machine first; make sure your device allows the access.
adb connect 192.168.1.XXX

# install the .apk file
adb install /path/to/apk-file

Conclusion

I have been using it for months already, and everything looks OK.
If you’re worried, before installing you can scan the APK on MetaDefender, VirusTotal, or any other online virus scanning platform.

 


Temporary solution when Robomongo SSH support was disabled since 0.9.0 RC1

The previous Robomongo I used was version 0.8.5. It’s great, but there is a bug where floating-point numbers are displayed with incorrect precision.

So I upgraded to the latest 0.9.0 RC7. But unfortunately, the SSH tab is missing because the feature has been disabled since RC1. Here is the link to the announcement.

Luckily there is a temporary workaround from GitHub: open the SSH tunnel yourself. The command below forwards local port 27018 to the server’s mongod on port 27017, so you can point Robomongo at localhost:27018 as if it were a local instance. Give it a try if you need to connect to your server’s mongodb instance over SSH.

ssh -L 27018:localhost:27017 [email protected]

OwnCloud 8.2 configuration for Nginx subdirectory

I installed the latest OwnCloud 8.2 in a subdirectory on https://crossrt.me, so I can share the SSL certificate to protect my files in transit between server and client. I searched for an Nginx configuration for my circumstances, but no luck, since none of them worked. So I decided to modify the config file from the administration manual; here is the link to it.

My circumstances:

  • Want to install OwnCloud 8.2.
  • WordPress already installed on the domain.
  • Share the SSL certificate.

Here is my Nginx config file; modify it according to your needs:


server {
	listen 443 default_server;
	ssl on;
	ssl_certificate /PATH/TO/YOUR/SSL.crt;
	ssl_certificate_key /PATH/TO/YOUR/SSL.key;

	server_name domainname.com;
	root /PATH/TO/YOUR/domainname.com;
	index index.php index.html index.htm;

	location / {
		try_files $uri $uri/ /index.php?q=$uri&$args;
	}

	location ~ \.php$ {
		try_files $uri =404;
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass unix:/run/php/php7.0-fpm.sock;
		fastcgi_index index.php;
		include fastcgi_params;
	}

	# deny access to .htaccess files, if Apache's document root concurs with nginx's one
	location ~ /\.ht {
		deny all;
	}

	error_page 404 /404.html;
	error_page 500 502 503 504 /50x.html;
	location = /50x.html {
		root /usr/share/nginx/html;
	}

	# Add headers to serve security related headers
	# add Strict-Transport-Security to prevent man in the middle attacks
	add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
	add_header X-Content-Type-Options nosniff;
	add_header X-Frame-Options "SAMEORIGIN";
	add_header X-XSS-Protection "1; mode=block";
	add_header X-Robots-Tag none;

	location /owncloud {

		client_max_body_size 2G; # set max upload size
		fastcgi_buffers 64 4K;

		error_page 403 /owncloud/core/templates/403.php;
		error_page 404 /owncloud/core/templates/404.php;

		rewrite ^/owncloud/.well-known/carddav /remote.php/carddav/ permanent;
		rewrite ^/owncloud/.well-known/caldav /remote.php/caldav/ permanent;

		# The following 2 rules are only needed for the user_webfinger app.
		# Uncomment it if you're planning to use this app.
		#rewrite ^/owncloud/.well-known/host-meta /public.php?service=host-meta last;
		#rewrite ^/owncloud/.well-known/host-meta.json /public.php?service=host-meta-json last;

		location = /owncloud/robots.txt {
			allow all;
			log_not_found off;
			access_log off;
		}

		location ~ ^/owncloud/(build|tests|config|lib|3rdparty|templates|data)/ {
			deny all;
		}

		location ~ ^/owncloud/(?:\.|autotest|occ|issue|indie|db_|console) {
			deny all;
		}

		rewrite ^/owncloud/remote/(.*) /remote.php last;
		rewrite ^/owncloud(/core/doc/[^\/]+/)$ $1/index.html;
		try_files $uri $uri/ =404;

		location ~ \.php(?:$|/) {
			fastcgi_split_path_info ^(.+\.php)(/.+)$;
			include fastcgi_params;
			fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
			fastcgi_param PATH_INFO $fastcgi_path_info;
			fastcgi_param HTTPS on;
			fastcgi_param modHeadersAvailable true; #Avoid sending the security headers twice
			fastcgi_pass unix:/run/php/php7.0-fpm.sock;
			fastcgi_intercept_errors on;
		}

		# Adding the cache control header for js and css files
		# Make sure it is BELOW the location ~ \.php(?:$|/) { block
		location ~* \.(?:css|js)$ {
			add_header Cache-Control "public, max-age=7200";
			# Add headers to serve security related headers
			add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
			add_header X-Content-Type-Options nosniff;
			add_header X-Frame-Options "SAMEORIGIN";
			add_header X-XSS-Protection "1; mode=block";
			add_header X-Robots-Tag none;
			# Optional: Don't log access to assets
			access_log off;
		}

		# Optional: Don't log access to other assets
		location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|swf)$ {
			access_log off;
		}
	}
}


So basically, I just modified the rewrite rules in the config file from the official documentation. Again, remember to refer to the OwnCloud official documentation; it always helps.

 
