Saturday, March 2, 2013

Plex Web Proxy

Plex Media Server works excellently by itself, and there are excellent client interfaces for it on DLNA, Android, Windows, and Mac. In addition, it provides a web interface which can be served from the same system as the server.

Plex provides its own account management via the MyPlex website. You can use these credentials to access the libraries of other Plex members who have shared their servers with you. Since these credentials are needed in addition to my users' usual LDAP login, I was hoping to put the Plex Web client behind my Nginx proxy and use Basic HTTP Authentication. I was successful; unfortunately, the Web client doubles as the server management client if the source IP address of the request matches the local system. Unless I want users managing my account, probably not a good idea.

The second reason I wanted to put Plex Web behind the proxy was so that I could serve it from a subdirectory, like www.domain.com/plexweb. Unfortunately, Plex does not yet offer a way to set a different context path behind a proxy.

The third reason to put Plex Web behind the proxy was to secure it with my SSL certificate and https. This is easily doable, as is proxying to the default Plex Web port of 32400, which keeps the request URLs a little cleaner and reuses the same hole in the NAT as regular https.

The first thing that is necessary is to tweak the Nginx configuration to properly proxy all the necessary subpaths used by the Plex Web HTTP API. In addition, Plex Web HAS to remain at the root of the domain. However, I still want to use the root domain for my web server frontpage. The idea is to look for HTTP headers specific to Plex Web requests, proxy those to the Plex server, and send everything else to the subdirectory /home, where a simple homemade web page will reside.

The first step is to properly redirect the root domain, so edit /etc/nginx/sites-enabled/default, add the home section, and change the root location to the following:
location ^~ /home {
    #nginx appends the full URI to root, so the files live in /var/www/home
    root /var/www;
}

location ^~ / {
    set $test "true";
    #If the web request contains either of these 2 headers, unset the flag
    if ($http_x_plex_product) {
        set $test "false";
    }
    
    if ($http_x_plex_protocol) {
        set $test "false";
    }

    #if the flag is still set, redirect all requests to /home location
    if ($test = "true") {
        rewrite ^(.*)$   /home$1 last;
    }

    #otherwise, we have a Plex header, redirect to plex
    proxy_pass http://www.domain.com:32400;
    proxy_redirect http:// https://;
}
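The pair of if tests can also be collapsed with nginx's map directive, which the nginx documentation generally prefers to chains of if blocks. A sketch, assuming the same two Plex headers; a map must be declared at the http level, outside the server block:

```nginx
#concatenate both headers: the result is empty only when neither is set
map $http_x_plex_product$http_x_plex_protocol $is_plex {
    ""      "false";
    default "true";
}
```

Inside location /, the three tests then reduce to a single if ($is_plex = "false") { rewrite ^(.*)$ /home$1 last; } in front of the proxy_pass.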
After a little packet sniffing, I determined the set of paths needed by Plex, so that I only have to forward those requests. These may change as Plex updates its API. Add the following sections to /etc/nginx/sites-enabled/default under the main server section. This could probably be done with a single location and an or'd regex, but from what I read, separate prefix locations may be faster.
#PlexWeb Section
location ^~ /:/ {
   proxy_pass http://www.domain.com:32400/:/;
   proxy_redirect http:// https://;
}
location ^~ /web {
   proxy_pass http://www.domain.com:32400/web;
   proxy_redirect http:// https://;
}
location ^~ /system {
   proxy_pass http://www.domain.com:32400/system;
   proxy_redirect http:// https://;
}
location ^~ /library {
   proxy_pass http://www.domain.com:32400/library;
   proxy_redirect http:// https://;
}
location ^~ /servers {
   proxy_pass http://www.domain.com:32400/servers;
   proxy_redirect http:// https://;
}
location ^~ /channels {
   proxy_pass http://www.domain.com:32400/channels;
   proxy_redirect http:// https://;
}
location ^~ /identity {
   proxy_pass http://www.domain.com:32400/identity;
   proxy_redirect http:// https://;
}
location ^~ /photo {
   proxy_pass http://www.domain.com:32400/photo;
   proxy_redirect http:// https://;
}
location ^~ /pms {
   proxy_pass http://www.domain.com:32400/pms;
   proxy_redirect http:// https://;
}
location ^~ /video {
   proxy_pass http://www.domain.com:32400/video;
   proxy_redirect http:// https://;
}
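For what it's worth, the or'd-regex variant mentioned above would look something like this (a sketch, untested). In a regex location, proxy_pass may not carry a URI part, so nginx forwards the original request URI unchanged, which matches what the prefix locations do:

```nginx
#one regex location in place of the nine prefix locations above
location ~ ^/(web|system|library|servers|channels|identity|photo|pms|video)(/|$) {
    proxy_pass http://www.domain.com:32400;
    proxy_redirect http:// https://;
}
```

The /:/ location has to stay separate in either case, since it gets its own WebSocket handling.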
This is unfortunately an incomplete solution. The protocol that Plex uses over location /:/ actually uses WebSockets. As such, the above solution to tunnel/proxy Plex kinda works, but the client keeps thinking it's disconnected. Not sure what effect this has. It will be necessary to use the WebSocket proxying feature just made available last month in version 1.3.13 of Nginx. Upgrading to this development version currently breaks my other proxying (subsonic, ldap-manager), so for now I am disabling the Plex proxy until they work out the kinks. However, if the Plex proxy is all you need, just change the /:/ location to
location ^~ /:/ {
    proxy_pass http://www.domain.com:32400/:/;
    proxy_redirect http:// https://;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
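One refinement from nginx's WebSocket proxying documentation: drive the Connection header from a map so that plain requests over this location are not needlessly upgraded. The map goes at the http level:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Then replace proxy_set_header Connection "upgrade"; with proxy_set_header Connection $connection_upgrade; in the /:/ location.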

Subsonic Proxy

Today's goal is to update Subsonic and Nginx so that all requests for Subsonic come through Nginx.  The reason for this is two-fold.
  1. I can use the same port (SSL, 443) and URL (www.domain.com) for all my server apps. Thus I don't need to poke a new hole in my NAT to forward new ports, and users don't have to remember special ports.
  2. I can use the same SSL certificate for all my server apps, and it is an officially signed certificate, unlike the self-signed one that comes with Subsonic.
So, the first step is to configure Subsonic. I know that my Subsonic is going to live under https://www.domain.com/subsonic, so I need to specify the context-path variable in the configuration. Also, Subsonic still needs to run on its own port; I will just have Nginx proxy requests to that port. Lastly, I will increase the max memory available to Subsonic a bit to give it a few more resources. To start, open the startup script for Subsonic
sudo vi /etc/default/subsonic
and change the args to
SUBSONIC_ARGS="--context-path=/subsonic --port=8080 --https-port=0 --max-memory=300"
Finally, for security reasons, change the user for Subsonic from root to www-data, the default user for Nginx. Make sure the permissions on your media files are set to allow this user.
SUBSONIC_USER=www-data
Next step is to configure Nginx.  Open the config
sudo vi /etc/nginx/sites-enabled/default
Then add the following section to the server section for port 443. We need to fix up some headers, and make sure that https is properly redirected.
location ^~ /subsonic/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_max_temp_file_size 0;
        proxy_pass http://localhost:8080;
        proxy_redirect http:// https://;
}
Then just restart both services, and you should be able to access Subsonic via https://www.domain.com/subsonic
sudo service nginx restart
sudo service subsonic restart

Fail2ban

Ok, maybe it's paranoia because of what I see at my job...or maybe it's all the attempted logins I have seen in my authentication log, but it's time to secure my system...at least a little bit. Primarily I'm concerned with what I see in /var/log/auth.log: many repeated (failed) attempts to log in to my ssh daemon from IP addresses not related to myself. Probably some script kiddies or something, but the last thing I want to do is open myself to brute-force or denial-of-service attacks.
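To see just how bad the knocking is, a quick tally of failed attempts per source address helps. A sketch: the helper name is mine, and the "Failed password ... from <ip>" phrasing is the usual sshd message, so adjust the pattern if your log differs.

```shell
# count_failed LOGFILE -- tally failed sshd password attempts per source IP
count_failed() {
    grep 'Failed password' "$1" \
        | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
        | sort | uniq -c | sort -rn
}
```

Run it as count_failed /var/log/auth.log | head to see the worst offenders first.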

After some research I settled on a piece of software called fail2ban. Basically, it monitors various system logs, and after a number of failed accesses from a certain user/IP it bans the associated IP address by adding a rule to iptables. Similar to denyhosts, fail2ban works on many different services in addition to ssh, which is perfect for when I get my web authentication and LDAP server up and running. There are pretty good guides already out there, but this is specific to Ubuntu 12.04 and my server.

To install
sudo apt-get install fail2ban
Whew, with that out of the way you can modify the config file.
sudo vi /etc/fail2ban/jail.conf
Pretty straightforward; there are a couple of particulars to Ubuntu, and to me. First off, I think the 10-minute ban time is a little short, so I bumped it to 60 minutes.
bantime  = 3600
Apparently Debian has some issues with python-gamin (not sure if this is still true with 12.04, but what the hell), so set the following
backend = polling
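While in jail.conf, it's worth a glance at the one jail enabled out of the box. The stock [ssh] section on Debian/Ubuntu looks roughly like this (values may differ between versions):

```
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6
```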
Restart fail2ban
sudo service fail2ban restart
And that's it! By default, ssh is enabled, and checks /var/log/auth.log. However, I did notice an issue while testing. rsyslog is the service responsible for authentication logging. Upon quickly repeated attempts to access the service, it may only print one message for multiple logins and just say something like "Previous message repeated 3 times". As such, fail2ban under-counts the number of accesses. To fix this, you need to change rsyslog.conf.
sudo vi /etc/rsyslog.conf
change the RepeatedMsgReduction directive to
$RepeatedMsgReduction off
And restart the logger
sudo service rsyslog restart
To check the banning, try failing logins from another system more times than maxretry allows. Then do
sudo iptables -L 
You should see a rule for fail2ban-ssh in the INPUT chain.
Chain INPUT (policy ACCEPT)
target        prot opt source               destination
fail2ban-ssh  tcp  --  anywhere             anywhere             multiport dports ssh
And a fail2ban-ssh chain with 1 reference.
Chain fail2ban-ssh (1 references)
target     prot opt source               destination
DROP       all  --  192.168.100.100      anywhere
RETURN     all  --  anywhere             anywhere

Thursday, February 28, 2013

Nginx & SSL Setup


So, I decided to go with Nginx for my webserver instead of Apache. Why? If you are really interested, read something like this. Mainly, I think it will have less overhead, since it's event-driven instead of process-based. I also don't need to scale, so it will serve my needs (see what I did there?).

Easy to install
sudo apt-get install nginx-extras
Easy to run
sudo service nginx start
And easy to configure (once you know how). The config file you will change the most is /etc/nginx/sites-enabled/default. My configuration will only have one virtual host, www.domain.com. I plan to separate individual apps and webpages through subdirectories, like /subsonic, /opds, etc. Also, I only plan to use https for my server. This is because one of the web apps (opds) can only authenticate through Basic HTTP Authentication. I do not want the eventual LDAP credentials to be sent in plain text, so SSL it is. I ended up getting an SSL certificate through the PositiveSSL service with Namecheap/Comodo for around $5-$6 a year.

You can reference kbeezie's blog here for a start on getting the cert ready for nginx. The only difference is that I concatenated the certificates from Comodo into a bundle. This will be needed later for some clients, and for the LDAP server we'll build later.
cat domain.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt > serverall.crt
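A quick sanity check that all three certificates actually made it into the bundle; the helper name is mine, nothing Comodo-specific:

```shell
# count_certs BUNDLE -- count the PEM certificates in a bundle file
count_certs() {
    grep -c 'BEGIN CERTIFICATE' "$1"
}
```

count_certs serverall.crt should print 3 for this bundle.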
Here is my initial nginx configuration file
server {

    listen [::]:443;
    server_name www.domain.com;

    #root /var/www;
    #index index.php index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/certs/serverall.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_session_timeout 5m;

    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log;

    root /usr/share/nginx/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
I also want to redirect any unencrypted requests on port 80 to use SSL on port 443, so I added the below.
server
{
    listen 80;
    server_name www.domain.com;
    rewrite ^ https://$server_name$request_uri? permanent;  # enforce https
}
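The nginx documentation prefers a plain return over a rewrite for this kind of whole-server redirect, since no regex evaluation is needed. An equivalent sketch:

```nginx
server {
    listen 80;
    server_name www.domain.com;
    #301 = permanent; same effect as the rewrite, without regex matching
    return 301 https://$server_name$request_uri;
}
```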

Tuesday, February 26, 2013

Loading the media servers

This is the easy part: install all the base packages that will be needed for the server, starting with the base media serving applications.

1) Plex Media Server
Download the .deb package for your architecture (x64 in my case), then install:
sudo dpkg -i plexmediaserver_0.9.7.12.407-db37d6d_amd64.deb
Setup is fairly easy, you can start the web app from the applications list.  Point it at the media locations, let it finish scanning, then go in and tweak any incorrect matches by the automated title agent. This installs a service, named plexmediaserver. To restart:
sudo service plexmediaserver restart
2) Subsonic
Unfortunately, Plex is not nearly as full-featured as Subsonic when it comes to music. First you need Java; I went with the open package for convenience, but I will probably switch to the official Sun JDK at some point:
sudo apt-get install openjdk-6-jre
Then, download the .deb package from the above site and install:
sudo dpkg -i subsonic-4.7.deb
To restart:
sudo service subsonic restart 
Donate if you can; there's only one developer, and he's done a great job with the project. Once finished, log in via the web interface (http://localhost:4040) and change the admin password. Point it to the music directories, and you're done! Eventually we'll get to setting up SSL; for now it's unsecured.

3) Calibre
I use Calibre to manage my e-book collection. It is very full-featured and open-source. It also comes with an OPDS web server so that you can serve up the library to various e-reader applications, like Aldiko, Moon Reader & FBReader. I ended up going with a different OPDS server, since Calibre does not currently support SSL. Regardless, to install it
sudo apt-get install calibre
4) Calibre OPDS PHP Server
Great server, currently under active development.  This is the other piece of the e-book server.  The setup here is fairly complicated, and has lots of dependencies, namely NGINX, which is going to require a much longer description.  This will be covered later.


And done! In fact, at this point you could expose the various servers to the world over their respective ports on this system. I will not. Instead, I will encapsulate all the servers behind a common web portal on port 443, using my own SSL certificate, web domain, and a unified LDAP login.

Sunday, February 24, 2013

Change of plan

Ok, more issues with the chromebox. First off, something happened while in a chroot, and Chrome OS decided to log itself out. To top it off, I'm having trouble hot-booting my Adata SSD via the USB recovery hack. Screw it, I'm putting Ubuntu on my more stable gaming system: i7 2600K Sandy Bridge, 8 GB RAM, 2x 64 GB Mtron 7800 SSDs in a RAID 0 config, with a GeForce GTX 560 Ti (Fermi) with 1 GB RAM, on an ASRock H67M-ITX HT Mini-ITX mobo.

I've decided to do something else new here. I installed Windows 8 Pro on the system, then VMware Workstation 9.0, and installed Ubuntu 12.04 x64 in a VM. From what I have read, it may even be possible to move the VM to bare metal in the future if I wish, but more than likely it would be hosted on an ESXi server. Maybe that i7 Intel NUC or Gigabyte Brix system.

In the meantime, I have a much more stable system, if a little bit loud (no more so than the highway nearby).  I also get to take advantage of USB 3.0 for the Cineraid data storage I have.  Reinstalled crashplan, and am importing the backup now, F-T-W

External USB Raid and Crashplan

So, the first setup step is to attach the external RAID with all my music, movies, and media to the server. I plugged it in, and the USB device was auto-mounted inside Chrome OS. Not quite what I want; it needs to be mounted inside of Ubuntu.

Open a crosh tab with Ctrl-Alt-T, type shell to drop to an admin shell, then find the drive and unmount it.
mount
In my case the drive is at /dev/sdb1 on /media/removable/Cineraid type ext3 (rw,nosuid,nodev,noexec,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)

so just unmount
sudo umount /dev/sdb1 
Eventually this will be added to a startup script that starts crouton and enters the chroot. This should be enough to allow you to mount the drive inside of Ubuntu. Just open a shell, create a mount point, and mount the drive.
sudo mkdir /media/Cineraid
sudo mount /dev/sdb1 /media/Cineraid
 
Again, some automation will help here.  

Now, in my case, I keep the entire external RAID backed up with a cloud backup service. Crashplan has unlimited storage capacity and works with Linux, so this is what I use. Instead of re-backing up the entire RAID, I can just import the previous backup from my previous server. The only caveat is that the path must be the same on the new computer as on the old computer. In my case, this is /media/Quadra. I also will need to add this drive to /etc/fstab to make automounting a little easier.

Get the UUID of the drive.
sudo blkid
Add this UUID to /etc/fstab so that it automounts to the same location. It should look something like
UUID=3456-3452345-345-345345   /media/Quadra   ext3   defaults   0   0
At this point, you should be able to mount the drive manually with
sudo mount -a
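Since I'll likely rebuild this fstab line on the next reinstall, a small idempotent helper avoids duplicate entries. A sketch with a name of my choosing; run it against /etc/fstab with sudo:

```shell
# ensure_line FILE LINE -- append LINE to FILE only if that exact line
# is not already present (handy for /etc/fstab entries)
ensure_line() {
    grep -qxF "$2" "$1" || printf '%s\n' "$2" >> "$1"
}
```

For example: ensure_line /etc/fstab 'UUID=... /media/Quadra ext3 defaults 0 0' can be run repeatedly without stacking duplicates.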