Thursday, February 28, 2013

Nginx & SSL Setup


So, I decided to go with Nginx for my web server instead of Apache.  Why? If you are really interested, read something like this. Mainly, I think it will have less overhead, since it's event-driven instead of process-based.  I also don't need to scale, so it will serve my needs (see what I did there?).

Easy to install
sudo apt-get install nginx-extras
Easy to run
sudo service nginx start
And easy to configure (once you know how).  The config file you will change the most is /etc/nginx/sites-enabled/default. My configuration will only have one virtual host, www.domain.com; I plan to separate the individual apps and web pages through sub-paths on that host, like /subsonic, /opds, etc.  Also, I only plan to use HTTPS for my server.  This is because one of the web apps (the OPDS server) can only authenticate through Basic HTTP Authentication, and I do not want the eventual LDAP credentials to be sent in plain text, so SSL it is.  I ended up getting an SSL certificate through the PositiveSSL service with Namecheap/Comodo for around $5-$6 a year.

You can reference kbeezie's blog here for a start on getting the cert ready for nginx. The only difference is that I concatenated the certificates from Comodo into a bundle.  This will be needed later for some clients, and for the LDAP server we'll build later.
cat domain.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt > serverall.crt
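As a quick sanity check that the chain is in the right order, you can also verify the server cert against the Comodo intermediates with openssl (assuming the intermediate and root files are sitting in the current directory):
cat PositiveSSLCA2.crt AddTrustExternalCARoot.crt > ca-bundle.crt
openssl verify -CAfile ca-bundle.crt domain.crt
It should print domain.crt: OK; if it doesn't, the files or their order are off.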
Here is my initial nginx configuration file
server {

    listen [::]:443;
    server_name www.domain.com;

    #root /var/www;
    #index index.php index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/certs/serverall.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_session_timeout 5m;

    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log;

    root /usr/share/nginx/www;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
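Later on, the individual apps will get mounted under sub-paths inside this same server block. As a rough sketch of what that will look like (not part of my config yet), a proxied app like Subsonic might end up as something like the below, assuming it listens on localhost port 4040 with a context path of /subsonic:
    # hypothetical example -- Subsonic proxied under /subsonic
    location /subsonic {
        proxy_pass http://127.0.0.1:4040;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }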
I also want to redirect any unencrypted requests on port 80 to use SSL on port 443, so I added the below.
server {
    listen 80;
    server_name www.domain.com;
    rewrite ^ https://$server_name$request_uri? permanent;  # enforce https
}
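Whenever the config changes, it's worth testing it before bouncing the server, since a typo will keep nginx from coming back up:
sudo nginx -t
sudo service nginx restart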

Tuesday, February 26, 2013

Loading the media servers

This is the easy part: install all the base packages that will be needed for the server, starting with the base media-serving applications.

1) Plex Media Server
Download the .deb package for your architecture (x64 in my case), then install:
sudo dpkg -i plexmediaserver_0.9.7.12.407-db37d6d_amd64.deb
Setup is fairly easy; you can start the web app from the applications list.  Point it at the media locations, let it finish scanning, then go in and tweak any incorrect matches made by the automated title agent. This installs a service named plexmediaserver. To restart:
sudo service plexmediaserver restart
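The server listens on port 32400, so a quick local probe should spit back some XML if all is well (just a sanity check, not part of the official setup):
wget -qO- http://localhost:32400/ | head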
2) Subsonic
Unfortunately, Plex is not nearly as full-featured as Subsonic when it comes to music.  First you need Java; I went with the open package for convenience, but I will probably switch to the official Sun JDK at some point:
sudo apt-get install openjdk-6-jre
Then, download the .deb package from the above site and install:
sudo dpkg -i subsonic-4.7.deb
To restart:
sudo service subsonic restart 
Donate if you can; there's only one developer, and he's done a great job with the project.  Once finished, log in via the web interface (http://localhost:4040) and change the admin password.  Point it at the music directories, and you're done! Eventually we'll get to setting up SSL; for now it's unsecured.
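For reference, the .deb keeps its startup options in /etc/default/subsonic. Something like the line below (my guess at sensible values, not what ships by default) bumps the Java heap a bit and moves the app under a /subsonic context path for when it eventually sits behind Nginx; restart the service afterwards as above.
SUBSONIC_ARGS="--port=4040 --context-path=/subsonic --max-memory=256"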

3) Calibre
I use calibre to manage my e-book collection.  It is very full-featured and open-source.  It also comes with an OPDS web server, so you can serve up the library to various e-reader applications, like Aldiko, Moon Reader & FBReader. I ended up going with a different OPDS server, since calibre does not currently support SSL.  Regardless, to install it:
sudo apt-get install calibre
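For what it's worth, calibre's built-in content server can also be started by hand if plain HTTP is good enough; the library path here is just a placeholder for wherever your calibre library lives:
calibre-server --port=8080 --with-library=/media/Quadra/Books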
4) Calibre OPDS PHP Server
Great server, currently under active development.  This is the other piece of the e-book server.  The setup here is fairly complicated and has lots of dependencies, namely Nginx, which is going to require a much longer description.  This will be covered later.


And done! In fact, at this point, you could expose the various servers to the world over their respective ports on this system.  I will not at this point. Instead, I will encapsulate all the servers behind a common web portal on port 443, using my own SSL certificate, web domain, and a unified LDAP login.

Sunday, February 24, 2013

Change of plan

Ok, more issues with the Chromebox.  First off, something happened while in a chroot, and Chrome OS decided to log itself out. To top it off, I'm having trouble hot-booting my ADATA SSD via the USB recovery hack.  Screw it, I'm putting Ubuntu on my more stable gaming system: an i7-2600K Sandy Bridge, 8 GB RAM, 2x 64 GB Mtron 7800 SSDs in a RAID 0 config, and a GeForce GTX 560 Ti (Fermi) with 1 GB RAM, on an ASRock H67M-ITX HT Mini-ITX mobo.

I've decided to do something else new here.  I installed Windows 8 Pro on the system, then VMware Workstation 9.0, and then Ubuntu 12.04 x64 in a VM.  From what I have read, it may even be possible to move the VM to bare metal if I wish in the future, though more than likely it would end up hosted on an ESXi server.  Maybe that i7 Intel NUC or Gigabyte Brix system.

In the meantime, I have a much more stable system, if a little bit loud (no more so than the highway nearby).  I also get to take advantage of USB 3.0 for the Cineraid data storage I have.  Reinstalled CrashPlan, and am importing the backup now. F-T-W!

External USB Raid and Crashplan

So, the first thing to set up is attaching the external RAID with all my music, movies, and media to the server.  I plugged it in, and the USB device was auto-mounted inside Chrome OS.  Not quite what I want; it needs to be mounted inside of Ubuntu.

Open a crosh tab with Ctrl-Alt-T, type shell to drop to an admin shell, then find the drive and unmount it:
mount  
In my case, the drive shows up as /dev/sdb1 on /media/removable/Cineraid type ext3 (rw,nosuid,nodev,noexec,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered).

So just unmount it:
sudo umount /dev/sdb1 
Eventually this will be added to a startup script that starts crouton and enters the chroot.  This should be enough to allow you to mount the drive inside of Ubuntu. Just open a shell, create a mount point, and mount the drive:
sudo mkdir /media/Cineraid
sudo mount /dev/sdb1 /media/Cineraid
 
Again, some automation will help here.  
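Something like the sketch below could be the start of that helper, run from the Chrome OS shell before dropping into the chroot (the device name and workflow are specific to my setup, so treat it as a rough outline):
#!/bin/sh
# release Chrome OS's auto-mount of the RAID
sudo umount /dev/sdb1
# crouton's helper to enter the Ubuntu chroot; the mkdir/mount above then happens inside Ubuntu
sudo enter-chroot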

Now, in my case, I keep the entire external RAID backed up with a cloud backup service.  CrashPlan has unlimited storage capacity and works with Linux, so this is what I use.  Instead of re-backing up the entire RAID, I can just import the previous backup from my previous server.  The only caveat is that the path must be the same on the new computer as on the old computer; in my case, this is /media/Quadra. I also will need to add this drive to /etc/fstab to make automounting a little easier.

Get the UUID of the drive.
sudo blkid
Add an entry with this UUID to /etc/fstab so the drive automounts to the same location.  It should look something like:
UUID=3456-3452345-345-345345   /media/Quadra   ext3   defaults   0   0
At this point, you should be able to mount everything in fstab (including the new drive) with
sudo mount -a
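A quick check that it landed where expected:
df -h /media/Quadra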

Operating System

Initially I used a dual-boot setup, running ChrUbuntu, based on 12.04 Precise Pangolin.  Fortunately, there is an alternative: a fairly new method of running Ubuntu inside a chroot on Chrome OS, called crouton.  The advantage is I actually get to use Chrome OS when I want to, without changing the partition table, and the setup itself is only a couple of commands (sketched below).  Just Ctrl-Alt-F1 for Chrome, Ctrl-Alt-F3 for Ubuntu.  Easy! The disadvantages are many.  Sharing resources with Chrome OS may prove tricky.  Not only is there the extra CPU, memory, and disk overhead of a second operating system, but there are also chores like unmounting the USB RAID in Chrome OS so that I can mount it in Ubuntu.
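For reference, getting the chroot in place is only a couple of commands from a crosh shell once the crouton script has been downloaded to ~/Downloads (the release and target flags here are what I'd expect to use; check crouton's README, since it changes quickly):
sudo sh -e ~/Downloads/crouton -r precise -t unity
sudo startunity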


The biggest disadvantage is that this is a fairly unproven way of running Ubuntu.  I've already run into permission errors with some of the partitions mounted from Chrome OS.  The other issue is that there is no running gnome-session, which means that things like the Ubuntu Software Center, the shutdown menu, etc., are currently broken inside of Unity.  But I figure as long as I document the server-building process, save important config files, AND keep the data secure, I should be able to reconstitute the server much more quickly in the future.

So for now, chroot will be a bit of an experiment.  I would actually prefer to run in some sort of VM or virtual appliance, but this processor is probably a little under-powered for that.  Soon...soon...



The end

Since this blog started as a way for me to vent some of my frustrations after screwing up my web/media server, I should probably explain how I got into this mess.  Hopefully, it will also help others avoid the mistakes I made.

Initially I built a dual-boot system, running ChrUbuntu, based on 12.04 Precise Pangolin. This worked well for just running Ubuntu (even if it's based on a Chrome OS kernel).  Unfortunately, swapping between Chrome OS and Ubuntu requires modifying the SSD partition table priorities, which is how I foobar'd the system in the end: I ended up not being able to boot it at all.


Since Chrome OS booted from USB couldn't see the SSD, I ended up taking out the mSATA SSD, putting it in an mSATA-to-SATA adapter, installing it in another system, mounting it as a virtual drive inside a VirtualBox instance of Chrome OS, then running cgpt to modify the partition table. Unfortunately, cgpt wanted to "fix" the partition table.  I should never have listened.  Fixing the partition table basically blew away the Ubuntu partition and merged it with another Chrome OS partition.  My day was over.  The next couple of days were spent trying to recover important configuration files using photorec, find, and grep.  Ugh!  Notes to self: back up important files, document the process to rebuild the server (this blog), and consider a new line of work.

The beginning

And it starts.  I've consumed so many helpful guides, how-tos, opinions, and the like that I felt the world needed one more blog.  So here it is.  This blog will initially be focused on the untimely death (and hopefully resurrection) of my self-hosted web server.  I acquired a Google/Samsung Chromebox from Google I/O 2012 and decided to re-purpose a somewhat limited cloud PC into my general web/media server.

The system is very quiet and small, ideal for being co-located with my TV and audio system.  Pretty decent specs: a dual-core Intel CPU, 4 GB RAM, 16 GB SSD, etc., etc... But for my use running Ubuntu as a server, it needed some upgrades.  After some inspiration from this article, I started in.  For starters, 16 GB of DDR3 RAM.  Secondly, I wanted at least 64 GB of disk space for Linux; it turns out swapping in a new SSD would prove troublesome.  After working with some folks on Google Groups trying to do the same thing, I decided to go with a 128 GB SSD that is faster than stock, even though it meant I needed to do some tricks when cold-booting the Chromebox.  At least this way I can reuse it when the Intel NUC i5 comes out in April of 2013. Oh, and lastly, to store all the multimedia I have (about 4 TB worth), I attached a four-drive external RAID enclosure over USB 2.0, loaded with 4x 2 TB drives. BTW, the RAID makes an excellent replacement for a Christmas tree, what with the massive amount of blinking LEDs.