I have received several requests to post more details on how I set up Pi Lab. The majority of these requests were from people wanting information on how I set up Bludit to work on a Raspberry Pi cluster.
This post will not attempt to be a step-by-step or exhaustive guide to setting up a Raspberry Pi cluster. I will simply highlight the specific challenges that I ran into getting Pi Lab to properly run Bludit.
Note: This is NOT a copy-paste how-to. It is an overview. I will include reference articles throughout that I used during the setup of Pi Lab.
Raspberry Pi Cluster
I am using four Raspberry Pi 2 Model Bs. One is used as the Load Balancer and the other three are set up as web nodes. I am using NGINX as the Load Balancer and Web Server. Here is one of the many articles that I used to set up NGINX and php-fpm on each of the nodes.
Setting up an NGINX web server on a Raspberry Pi
Once each node was set up with NGINX and php-fpm, I set up the load balancing. This is done on the Load Balancing Node.
Tip: I recommend setting a hostname for each Pi so that it is easy to remember which Pi is which. I chose pi0, pi1, pi2, and pi3, with pi0 being the Load Balancer.
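Hostnames alone aren't resolvable between nodes unless something maps them. One simple option (an assumption on my part, not something required by the setup above) is matching /etc/hosts entries on each Pi. The addresses below follow the article's placeholder style, and pi0's address is itself a placeholder:

```
# /etc/hosts on each node -- map placeholder addresses to hostnames
192.168.xx.xx0    pi0    # Load Balancer
192.168.xx.xx1    pi1    # "primary" web node
192.168.xx.xx2    pi2
192.168.xx.xx3    pi3
```

With this in place, commands like `ssh pi@pi2` work without remembering addresses.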
Example Load Balancer NGINX conf
This example NGINX conf file assumes that you have already set up Let's Encrypt for your domain (production-domain.com, www.production-domain.com). I used several articles to learn how best to install Let's Encrypt with the Pi Lab setup, so I don't have a single article to reference.
Port forwarding is also assumed. Since Port Forwarding works differently depending on your home network I will leave that for you to Google. Here is a decent article covering Port Forwarding. How to Port Forward – General Guide to Multiple Router Brands.
This example also includes specific recommendations from the Bludit Docs as well as other specific configuration options such as forcing SSL and non-www on the domain.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    # HTTP Redirect
    server {
        listen 80;
        server_name production-domain.com www.production-domain.com;
        return 301 https://production-domain.com$request_uri;
    }

    upstream picluster {
        least_conn;
        server 192.168.xx.xx1 max_fails=3 fail_timeout=30s;
        server 192.168.xx.xx2 max_fails=3 fail_timeout=30s;
        server 192.168.xx.xx3 max_fails=3 fail_timeout=30s;
    }

    # HTTPS Server
    server {
        listen 443 ssl;
        server_name production-domain.com;
        root /var/www/html;

        ssl_certificate /etc/letsencrypt/live/production-domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/production-domain.com/privkey.pem;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

        location / {
            proxy_pass http://picluster;
        }

        # Bludit recommendations: deny direct access to content internals
        location ^~ /bl-content/databases/ { deny all; }
        location ^~ /bl-content/workspaces/ { deny all; }
        location ^~ /bl-content/pages/ { deny all; }
        location ~ ^/bl-kernel/.*\.php$ { deny all; }

        # Let's Encrypt renewal challenges
        location ~ /.well-known { allow all; }

        gzip on;
        gzip_vary on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
        gzip_disable "MSIE [1-6]\.";
    }
}
}
At this point I had a load-balanced Raspberry Pi cluster.
Bludit Prod Setup
Before attempting to install Bludit, I updated my NGINX conf so that only one of my web nodes was in rotation. Once I had the other web nodes commented out, I restarted NGINX.
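For reference, taking a node out of rotation just means commenting out its server line in the upstream block, something like:

```nginx
upstream picluster {
    least_conn;
    server 192.168.xx.xx1 max_fails=3 fail_timeout=30s;
    #server 192.168.xx.xx2 max_fails=3 fail_timeout=30s;
    #server 192.168.xx.xx3 max_fails=3 fail_timeout=30s;
}
```

After editing, restart (or reload) NGINX so the change takes effect.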
Once that was done I could follow the Bludit Install Instructions. I used the very simple "Installation from zip file" method and extracted the contents to the "/var/www/html" directory on the web node that I left in rotation.
I did run into some PHP modules that needed to be installed.
The following command should install all the necessary prerequisites.
sudo apt-get install php-mbstring php-json php-gd php-xml
At this point I had Bludit set up and installed on my "primary" web node.
Now I simply copied the contents of "/var/www/html" from my "primary" web node to the other two nodes. Once copied I put the other two nodes back into rotation and restarted NGINX.
At this point I had Bludit set up and running on a load-balanced cluster. This was great; however, there was an issue: I couldn't log in to the Bludit admin panel. This is due to the way sessions are handled, since each request can land on a different node. There were several ways I could solve this. I could have gone the route of using NFS so that each web node would read its files from a single source, but I didn't like that option since one of the reasons I wanted to run a cluster was redundancy.
So I decided to set up a separate installation of Bludit on the "primary" web node in a different directory, using a different domain and port. This copy serves as my source of truth and as a "sandbox" to try things before I push them live.
Bludit Dev Setup
For this portion I set up port forwarding for a different domain on a different port, pointing directly to my "primary" web node. Below is an example of the NGINX configuration that I set up in "/etc/nginx/sites-enabled" on the "primary" web node.
Example NGINX conf
server {
    listen 8000 default_server;
    listen [::]:8000 default_server;

    root /var/www/dev;
    index index.php;
    server_name _;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
Now I copied the contents of "/var/www/html" to "/var/www/dev" on my "primary" web node. I also had to edit "/var/www/dev/bl-content/databases/site.php" so that it used the domain and port that I set up to point directly to my "primary" web node.
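For context, Bludit's database files are PHP-guarded JSON, and the URL in site.php is stored with JSON-escaped slashes. A rough sketch of the relevant line (fields abbreviated; exact contents vary by Bludit version):

```php
<?php defined('BLUDIT') or die('Bludit CMS.'); ?>
{
    "url": "http:\/\/development-domain.com:8000\/"
}
```

The escaped slashes matter later, when the promote script rewrites this file with sed.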
Now when I hit this domain I get a working copy of Bludit running on a single node.
For example: development-domain.com:8000
Now I can login to development-domain.com:8000/admin and make changes and add posts to my "dev" copy of Bludit without any issues.
Now for the fun part!
Pushing changes from Bludit Dev to Bludit Prod
For this part to work I needed my "primary" web node to be able to connect to each of the other web nodes. To do this I set up SSH keys so that my "primary" node could log in to each of the other web nodes without a password.
This article covers this setup very well. Passwordless SSH access
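The gist of the passwordless setup, as a sketch (the key path below is a demo placeholder; in practice the default ~/.ssh/id_ed25519 is fine, and the IPs follow the article's placeholder style):

```shell
# One-time key setup on the primary node (pi1).
# /tmp/pilab_demo_key is a placeholder path for this demo;
# normally you would accept the default ~/.ssh/id_ed25519.
rm -f /tmp/pilab_demo_key /tmp/pilab_demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/pilab_demo_key   # -N "" = no passphrase

# Then push the public key to each web node (run once per node):
# ssh-copy-id -i /tmp/pilab_demo_key.pub pi@192.168.xx.xx2
# ssh-copy-id -i /tmp/pilab_demo_key.pub pi@192.168.xx.xx3
```

Once the public key is on each node, ssh and rsync from the primary node no longer prompt for a password.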
Once that was set up, I wrote a simple bash script to "promote" my "dev" copy of Bludit to all three web nodes. This script is run on the "primary" web node once I have made changes to the "dev" copy. These changes could come from modifying files directly in the "/var/www/dev" directory or, more commonly, from updating via the Bludit admin panel on my "dev" copy.
Example Promote script
#!/bin/bash

echo "Updating file permissions to 'pi:www-data'"
sudo chown -R pi:www-data /var/www/dev

# Bludit stores URLs JSON-encoded in site.php, so the slashes are escaped (\/)
echo "Updating url in settings"
sudo sed -i 's/http:\\\/\\\/development-domain.com:8000/https:\\\/\\\/production-domain.com/g' /var/www/dev/bl-content/databases/site.php
echo "Complete"

echo "Updating url in rss"
sudo sed -i 's/http:\/\/development-domain.com:8000/https:\/\/production-domain.com/g' /var/www/dev/bl-content/workspaces/rss/rss.xml
echo "Complete"

echo "Updating url in sitemap"
sudo sed -i 's/http:\/\/development-domain.com:8000/https:\/\/production-domain.com/g' /var/www/dev/bl-content/workspaces/sitemap/sitemap.xml
echo "Complete"

echo "Copying dev to html"
sudo rsync -azvr --delete /var/www/dev/ /var/www/html/
echo "Complete"

echo "Syncing pi1 to pi2"
ssh pi@192.168.xx.xx2 'sudo chown -R pi:www-data /var/www/html'
rsync -azvr --delete -e ssh /var/www/html/ pi@192.168.xx.xx2:/var/www/html/
ssh pi@192.168.xx.xx2 'sudo chown -R www-data:www-data /var/www/html'
echo "Complete"

echo "Syncing pi1 to pi3"
ssh pi@192.168.xx.xx3 'sudo chown -R pi:www-data /var/www/html'
rsync -azvr --delete -e ssh /var/www/html/ pi@192.168.xx.xx3:/var/www/html/
ssh pi@192.168.xx.xx3 'sudo chown -R www-data:www-data /var/www/html'
echo "Complete"

echo "Reverting url in settings"
sudo sed -i 's/https:\\\/\\\/production-domain.com/http:\\\/\\\/development-domain.com:8000/g' /var/www/dev/bl-content/databases/site.php
echo "Complete"

echo "Updating file permissions to 'www-data:www-data'"
sudo chown -R www-data: /var/www/dev
sudo chown -R www-data: /var/www/html

echo "Promote Complete"
So what is this script doing?
- Update permissions of the "/var/www/dev" directory to "pi:www-data". This ensures that the script doesn't run into any permission issues on the "dev" copy.
- Modify the "/var/www/dev/bl-content/databases/site.php" file with the production URL.
- Modify the "/var/www/dev/bl-content/workspaces/rss/rss.xml" file with the production URL.
- Modify the "/var/www/dev/bl-content/workspaces/sitemap/sitemap.xml" file with the production URL.
- Copy the "dev" copy to the "prod" directory of the "primary" web node.
- Synchronize the "primary" web node to the second web node. This includes some temporary permission changes.
- Synchronize the "primary" web node to the third web node. This includes some temporary permission changes.
- Revert the changes made to the "/var/www/dev/bl-content/databases/site.php" file on the "dev" copy.
- Update the file permissions of the "dev" copy back to "www-data:www-data".
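The double-escaped sed in the settings step is worth a note: site.php stores URLs with JSON-escaped slashes (\/), so the pattern has to match a literal backslash-slash pair, hence `\\\/`, while the rss and sitemap files use plain slashes. A quick self-contained demo of that substitution on a throwaway file:

```shell
# Reproduce the site.php rewrite on a sample line.
# Bludit JSON-encodes URLs, so slashes appear as \/ in the file.
printf '%s\n' '"url": "http:\/\/development-domain.com:8000\/"' > /tmp/site-sample.txt

# Same escaping as the promote script: \\\/ matches a literal \/ pair.
sed -i 's/http:\\\/\\\/development-domain.com:8000/https:\\\/\\\/production-domain.com/g' /tmp/site-sample.txt

cat /tmp/site-sample.txt   # -> "url": "https:\/\/production-domain.com\/"
```

A plain `s/http:\/\/.../` pattern would silently fail to match in site.php, which is an easy mistake to make when first writing this script.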
Now all the changes that were made on the "dev" copy are live on each web node!
This workflow is something that I am still polishing, but it is working very well for me. It allows me to try things in a "sandbox" without affecting my live site. It also gives me some redundancy: if I somehow really mess up the "dev" copy, I can always revert to the latest "prod" copy by copying the files from one of the live web nodes back to the "dev" copy.
I actually have an additional step, which I didn't cover here, where my "promote" script also pushes the latest and greatest to GitHub.
I hope this helps someone looking to run Bludit on a Raspberry Pi cluster. I welcome comments and feedback. I will keep this article updated with any changes to my workflow that I feel would be beneficial to share.
Updates:
I have added a few more items to the promote script. I found that the Sitemap and RSS plugins needed to also have the URL updated during the promotion to ensure that the production URL was used.
Check out the plugins that I built specifically for Pi Lab: Plugins
I have upgraded the SD cards for Pi Lab. Pi Lab SD Card Upgrade
I have updated the nginx.conf file. I had issues with Let's Encrypt auto renewal. The updates should allow auto renewal to work with the cluster setup.
I have upgraded Pi Lab with new Prod Nodes including 2 Raspberry Pi 4 4GB nodes. Current Specs
I have done a bit of reorganizing/upgrading. Pi Lab "proper" now uses Raspberry Pi 4 2GBs for all production nodes including the load balancer. The Dev Node is now only used for dev and not in the production rotation. I moved the Raspberry Pi 4 4GBs to use on some of the other "ancillary nodes" such as the BOINC node. Current Specs