Ended up moving my server over to an Intel NUC D54250WYK Haswell i5 with attached storage. Ah, the advantages of running a VM. Intel finally released this low-power desktop system and it seems to work great with ESXi 5.5, with the addition of a few drivers.
The problem is described here. However, the driver the author provided for the e1000e did not work for my system. I used a combination of the 5.5 iso (VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso), ESXi-Customizer 2.7.1, and two drivers: the e1000e (net-e1000e-2.3.2.x86_64.vib) from here and sata-xahci (sata-xahci-1.5-1.x86_64.vib) from here. I've provided a mirror for the drivers and the customizer on Dropbox here.
ESXi-Customizer takes care of most of the nitty gritty, so just follow the instructions there for adding the drivers, one at a time.
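If you would rather patch a host that is already running instead of rebuilding the ISO, the same VIBs can in principle be installed directly with esxcli. This is just a sketch; the datastore path is an example, and you need to allow community-supported packages first:
# allow community VIBs, then install from a datastore path
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/datastore1/sata-xahci-1.5-1.x86_64.vib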
VT-d and PCI passthrough seem to work just fine with this hardware.
Thursday, December 12, 2013
Saturday, June 8, 2013
IRC Web Client
For the web interface, we will install qwebirc and proxy it through Nginx on Ubuntu 12.04 Precise Pangolin. The instructions for qwebirc are kind of in different places, so hopefully this will help someone.
In my case the JDK is already installed, so the dependencies I needed were
sudo apt-get install python python-twisted python-twisted-bin \
python-twisted-core python-twisted-runner python-twisted-names \
python-twisted-mail python-twisted-words python-twisted-web \
python-zope.interface python-openssl mercurial
Download the source somewhere
hg clone http://hg.qwebirc.org/qwebirc qwebirc
I decided to switch to their stable branch
hg up -C stable
OK, in my case I needed to apply two patches, one for SSL and one for server authentication. For the authentication I applied the patch from here, direct link to the patch here. For the SSL support I applied the patch from anacart's post in this thread, direct link to the patch here.
To apply each patch, cd to the source root and do
patch -p1 < patch.diff
Once you're done, put the qwebirc folder somewhere permanent, like /usr/local/qwebirc or /usr/share/qwebirc, and make a copy of the config file.
cp config.py.example config.py
Edit config.py. Change IRCPORT and the SSL port to match the client ports of your IRC server.
Set IDENT to a valid user on your LDAP domain. I created an account called "webirc" in ldap-account-manager.
IDENT = "webirc"Set the NETWORK to the IRC network name, specified in the inspircd.conf
NETWORK_NAME = "IRCNet"I wasn't sure what to see the URLs to, but here's how mine is set. Set REALNAME to the server address.
REALNAME = "https://www.domain.com/webirc"Set BASE_URL to the local address, i don't think this is right, needs checking.
BASE_URL = "http://localhost:9090"For the Nginx proxy, set the following.
FORWARDED_FOR_HEADER="x-forwarded-for"
FORWARDED_FOR_IPS=["127.0.0.1"]
Finally, compile qwebirc.
python compile.py
And test it
python run.py
You should be able to browse to http://localhost:9090/
Lastly, create a file to launch qwebirc as a service. If qwebirc crashes, this script will not restart the process; it needs some tweaking. Edit /etc/init/qwebirc.conf
# qwebirc - qwebirc job file
start on runlevel [2345]
stop on runlevel [016]
chdir /usr/local/qwebirc
expect fork
exec /usr/local/qwebirc/run.py
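If you want Upstart to bring qwebirc back up automatically when it dies, adding a respawn stanza to this job file should do it; this is untested on my setup, so treat it as a sketch:
# restart the process if it exits, but give up if it flaps 10 times in 5 seconds
respawn
respawn limit 10 5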
If all goes well, it should start on reboot, or by running
sudo service qwebirc start
Now to tweak Nginx. I just had to add the following to /etc/nginx/sites-enabled/default
location ^~ /webirc/ {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://127.0.0.1:9090/;
}
Restart nginx
sudo service nginx restart
And browse to https://www.domain.com/webirc
IRC Server
Decided to set up a little IRC server using Inspircd. The plan was as follows: use LDAP authentication for users, SSL encryption using my domain certs, a web-based UI proxied through Nginx over https, and lastly, federation with a friend's IRC server. Ubuntu 12.04 comes with an old version of Inspircd (1.1.2?) that doesn't work well with federation and doesn't come with the LDAP module. I downloaded the newer version of Inspircd that comes with Ubuntu 12.10 from here.
The above link also shows any dependencies you may need to install separately. I was missing a few, solved with...
sudo apt-get install libtre5 libpq5 libmysqlclient18
Then install Inspircd
sudo dpkg -i inspircd_2.0.5-1_amd64.deb
First you need to edit /etc/default/inspircd and change the '0' to '1'
The main config file to edit is /etc/inspircd/inspircd.conf. You'll want to configure some basic info here, like changing the <bind> tag to listen on an IP other than "127.0.0.1" (an empty "" defaults to all network interfaces) and setting the port to listen on.
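For example, a plain (non-SSL) client listener on all interfaces would look something like this; the port here is just the conventional IRC port, not necessarily what you want to expose:
<bind address="" port="6667" type="clients">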
At this point, you can run the Inspircd daemon
sudo service inspircd start
Next, we'll secure the chat client port to use the SSL cert for the server. I store the SSL certs with my nginx server in /etc/nginx/certs.
First you need to tell Inspircd to load the gnutls module and point to your certs, by editing /etc/inspircd/inspircd.conf and adding:
<module name="m_ssl_gnutls.so">If you want, you can create a self-signed cert, and use that, but clients will need to be told to ignore invalid certs.
<gnutls certfile="/etc/nginx/certs/server.crt" keyfile="/etc/nginx/certs/server.key">
Next, change your client's bind tag to something like:
<bind address="" port="5309" type="clients" ssl="gnutls">To add LDAP authentication, you need to load the ldapauth module and point to your ldap server, by editing /etc/inspircd/inspircd.conf and adding:
<module name="m_ldapauth.so">To connect this server to another server, you need to <bind> a port as type server,
<ldapauth baserdn="ou=People,dc=domain,dc=com"
attribute="uid"
server="ldap://localhost"
allowpattern="Guest*"
killreason="Access denied"
searchscope="subtree"
binddn=""
bindauth=""
verbose="yes"
userfield="yes">
<bind address="" port="9799" type="servers">setup a <link> section to define the server connection. The same thing needs to be setup on the other server to be connected.
<link name="irc.otherdomain.com"Lastly, one of the two servers can be set to <autoconnect> to avoid manually maintaining the connection.
ipaddr="irc.otherdomain.com"
port="9799"
sendpass="secret"
recvpass="secret">
<autoconnect period="60" server="irc.otherdomain.com">
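A quick way to confirm the gnutls listener is actually presenting your certificate is to poke the SSL client port with openssl (5309 is the port from the <bind> example above; substitute your real hostname):
openssl s_client -connect www.domain.com:5309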
Part 2 of this blog entry will set up the web interface.
Wednesday, June 5, 2013
Self Service Password
I decided to allow users on the server to change their passwords, when they want, through a web-based tool. I chose LTB's Self Service Password, a simple php tool with lots of neat features like SMS reset, security questions, etc. I only plan to enable the simple form to reset the password. To install, download the latest .deb file (0.8 in my case). Next install the dependencies, and restart php5-fpm.
sudo apt-get install apache2 php5 php5-ldap php5-mcrypt
sudo service php5-fpm restart
Then the .deb
sudo dpkg -i self-service-password_0.8-1_all.deb
You will need to modify the php config file at /usr/share/self-service-password/conf/config.inc.php and make some changes for LDAP.
In my case, the server runs on the localhost.
$ldap_url = "ldap://localhost";ldap_binddn and ldap_bindpw are made blank ("") to not use admin credentials.
$ldap_binddn = "";ldap_base is set to your domain.
$ldap_bindpw = "";
$ldap_base = "dc=domain,dc=com";I'm using simple posix schema for users.
$ldap_filter = "(&(objectClass=posixAccount)($ldap_login_attribute={login}))";Next up, modify your nginx config file at /etc/nginx/sites-enabled/default,and add the following sections.
#Self Service Password Section
location /self-service-password {
alias /usr/share/self-service-password;
index index.html index.php;
}
location ~ ^/self-service-password/.*\.php$ {
root /usr/share;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include /etc/nginx/fastcgi_params;
}
Restart nginx and browse to https://www.domain.com/self-service-password
sudo service nginx restart
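If the tool can't find your users, it helps to replay the same anonymous search it performs from the command line; "jdoe" below is just a placeholder uid:
ldapsearch -x -H ldap://localhost -b dc=domain,dc=com "(&(objectClass=posixAccount)(uid=jdoe))"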
Tuesday, June 4, 2013
Add Authentication to Nginx
The plan is to expose certain web apps behind Basic HTTP Authentication. This is why the web server is only available via HTTPS; we do not want LDAP usernames and passwords going over plaintext. The way we set up LDAP and PAM earlier, it is very easy to secure subdomains using it. Note: Basic HTTP Authentication uses no session cookies or persistence, so the user remains logged in until they close their browser. Be warned!
Make sure that you have nginx-extras installed, and not nginx. Extras includes the PAM module.
sudo apt-get install nginx-extras
The following is taken from this readme.
To protect everything under /secure, add the following to the nginx.conf file. This is secure enough for many purposes.
location /secure {
auth_pam "Secure Zone";
auth_pam_service_name "nginx";
}
Note that the module runs as the web server user, so the PAM modules used must be able to authenticate users without being root; that means that if you want to use the pam_unix.so module to authenticate users, you need to let the web server user read the /etc/shadow file, if that does not scare you (on Debian-like systems you can add the www-data user to the shadow group).
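For reference, a pam_unix-based /etc/pam.d/nginx would look something like this (only worthwhile if the shadow caveat above doesn't bother you):
auth required pam_unix.so
account required pam_unix.so
On Debian/Ubuntu, the shadow group change mentioned above is just:
sudo adduser www-data shadow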
As an example, to authenticate users against an LDAP server (using the pam_ldap.so module) you will use an /etc/pam.d/nginx like the following:
auth required pam_ldap.so
account required pam_ldap.so
If you also want to limit which LDAP users can authenticate, you can use the pam_listfile.so module; to limit who can access resources under /restricted, add the following to the nginx.conf file:
location /restricted {
auth_pam "Restricted Zone";
auth_pam_service_name "nginx_restricted";
}
Use the following /etc/pam.d/nginx_restricted file:
auth required pam_listfile.so onerr=fail item=user \
sense=allow file=/etc/nginx/restricted_users
auth required pam_ldap.so
account required pam_ldap.so
And add the users allowed to authenticate to /etc/nginx/restricted_users (remember that the web server user has to be able to read this file).
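The restricted_users file itself is just one username per line, for example (names are placeholders):
alice
bob
And one way to make sure the web server user can read it:
sudo chown root:www-data /etc/nginx/restricted_users
sudo chmod 640 /etc/nginx/restricted_users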
Build your ownCloud
ownCloud is a way to bring your own cloud storage to the internets. You may want to use this to control your own data, use open standards provided by ownCloud, or just save yourself the monthly subscription costs of the commercial options. The only limitation on storage size is the size of your connected storage, which can actually include other cloud storage services such as Dropbox and Google Drive, in addition to external storage.
Installation is fairly easy on Ubuntu, as a repository with packages is available. The following directions are taken from the installation page on ownCloud. Run the following as root.
echo 'deb http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_12.04/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update
apt-get install owncloud
If you want to add the repository key to apt to avoid a warning:
wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_12.04/Release.key
apt-key add - < Release.key
Once installed, you need to add ownCloud to Nginx. You will presumably have already set up php5-fpm during the instructions in my LDAP server post. With ownCloud 5, some slightly more complicated Nginx rules are needed. Again, we are putting ownCloud in a subdomain on the webserver for clean separation of server services. Note: make sure your fastcgi_pass matches the mechanism you are using for FastCGI, either unix socket or tcp socket. The following was taken from this post.
#owncloud settings
#Some rewrite rules, more to come later
rewrite ^/owncloud/caldav((/|$).*)$ /owncloud/remote.php/caldav$1 last;
rewrite ^/owncloud/carddav((/|$).*)$ /owncloud/remote.php/carddav$1 last;
rewrite ^/owncloud/webdav((/|$).*)$ /owncloud/remote.php/webdav$1 last;
location ~ ^/owncloud/(data|config|\.ht|db_structure.xml|README) {
deny all;
}
# Configure the root location with proper rewrite rule
location /owncloud/ {
rewrite ^/owncloud/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/owncloud/.well-known/host-meta.json /public.php?service=host-meta-json last;
rewrite ^/owncloud/.well-known/carddav /remote.php/carddav/ redirect;
rewrite ^/owncloud/.well-known/caldav /remote.php/caldav/ redirect;
rewrite ^/owncloud/apps/calendar/caldav.php /remote.php/caldav/ last;
rewrite ^/owncloud/apps/contacts/carddav.php /remote.php/carddav/ last;
rewrite ^/owncloud/apps/([^/]*)/(.*\.(css|php))$ /index.php?app=$1&getfile=$2 last;
rewrite ^(/owncloud/core/doc[^\/]+/)$ $1/index.html;
try_files $uri $uri/ index.php;
}
# Configure PHP-FPM stuff
location ~ ^(?<script_name>.+?\.php)(?<path_info>/.*)?$ {
try_files $script_name =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
# This one is a little bit tricky, you need to pass all parameters in a single line, separating them with newline (\n)
fastcgi_param PHP_VALUE "upload_max_filesize = 1024M \n post_max_size = 1024M"; # This finishes the max upload size settings
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # On some systems OC will work without this setting, but it doesn't hurt to leave it here
include /etc/nginx/fastcgi_params;
}
Make sure to restart nginx as usual, sudo service nginx restart. You should now be able to create a default admin account at your domain.com/owncloud.
The next step is to link ownCloud to your LDAP server for authentication. Log in with the admin account, click the settings button, and go to "Apps". Enable the app for "LDAP User and Group Backend".
Click the settings button, and go to "Admin". Under the LDAP section, set your LDAP host, your domain, and the user and group attributes. Test the configuration and save.
Again, if you are using ownCloud 4.5 everything should work out of the box as is, and users can login and share files with group members. In my case, all web users are a member of the group 'webuser'. However, ownCloud 5.0 requires some additional configuration, or the users are not associated with their groups. The solution is to add the memberUid attribute to the associated group in ldap-account-manager, manually add the users to this group, then tell ownCloud to use this attribute.
Login to ldap-account-manager and click on "Tree View". From here, select the group, and click "Add New Attribute". Select "memberUid". Add the name of at least one user. The new attribute should be visible in the group in tree view. From here, you can manually add members by clicking "Modify Group Members" under memberUid. You can add the users in a batch, instead of manually typing them out.
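If you prefer the command line to LAM's tree view, the same memberUid change can be made with an LDIF and ldapmodify; the group, user, and admin DNs below are placeholders for your own:
dn: cn=webuser,ou=Groups,dc=domain,dc=com
changetype: modify
add: memberUid
memberUid: jdoe
Save that as add_memberuid.ldif and apply it with:
ldapmodify -x -D "cn=admin,dc=domain,dc=com" -W -f add_memberuid.ldif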
Log back in to ownCloud as admin. Click on the settings button, and go back into "Admin". In the LDAP section, select the "Advanced" tab. Under "Directory Settings" set "Group-member association" to "memberUid". Save. You may need to remove the users so that ownCloud can repopulate the list with the new group association.
Sunday, March 3, 2013
Setting up LDAP Server
LDAP Server Setup
Lightweight Directory Access Protocol, or LDAP, is a high-level application protocol for managing directory services in a hierarchical manner. Its most common use is to manage domain-related information, such as an email directory or user information. In my case, I will use the Unix-related structures for managing users and their system access to my services. The same user name and login will be used for Subsonic, ownCloud, COPS (e-book server), SSH/SFTP logins, etc.
To start installing OpenLDAP server, you can use two guides over at Ubuntu, here and here. The second link actually corrects a few things in the first guide, but most of it is unnecessary for an initial LDAP server.
First off, install the packages.
sudo apt-get install slapd ldap-utils
In my case, my host is already joined to a domain, so I didn't need the next step, but just to make sure, reconfigure slapd to add the ldap domain and reset the password.
sudo dpkg-reconfigure slapd
That's pretty much all you need to get running. The latest builds of Ubuntu handle the inclusion of various basic schemas, but to verify your ldap is up and running, run the following.
sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
Initially this command should list 10-15 entries and is a good first check.
User Configuration
A great utility for managing users and settings includes a web app called LAM, or ldap-account-manager. There is another great write-up regarding using LAM with Nginx here.
Install the necessary packages
sudo apt-get install php5-fpm php5 php5-ldap php-apc php5-gd php-fpdf ldap-account-manager
Normally php5-fpm is configured to listen on 127.0.0.1 port 9000. We're going to change this to a Unix socket, just to clean up the ports a bit and potentially increase performance under load. In general it won't help much, but it theoretically removes some of the TCP overhead.
sudo vi /etc/php5/fpm/pool.d/www.conf
Look for
listen = 127.0.0.1:9000
Change to
listen = /var/run/php5-fpm.sock
Restart the service
sudo service php5-fpm restart
Add the following section to /etc/nginx/sites-enabled/default to create a sub-domain for the account manager and point it to the main launch page.
location /ldap-account-manager {
alias /usr/share/ldap-account-manager;
index index.html index.php;
}
Add the following section to point Nginx to the LAM directory and the php Unix socket, and to tweak a couple of fastcgi parameters.
location ~ ^/ldap-account-manager/.*\.php$ {
root /usr/share;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include /etc/nginx/fastcgi_params;
}
Restart Nginx
sudo service nginx restart
You should be able to now browse to LAM https://www.domain.com/ldap-account-manager. At this point, I refer you to the ducky-pond.com post for how to initially setup LAM.
Client Access
First, configure the client config used by ldap client apps.
sudo vi /etc/ldap.conf
Make sure the domain is properly specified
base dc=domain,dc=com
and the uri is correct
uri ldap://127.0.0.1:389
I refer you to this thread for the instructions I used to set up local and ssh logins for your users. This will automatically create their home directories if they do not exist. There is a correction by a later contributor, which I have included in my quick setup below.
First install the packages
sudo apt-get install ldap-utils libpam-ldap libnss-ldap nslcd
Next edit /etc/nsswitch.conf and change the lines for passwd, group, and shadow
passwd: compat ldap
group: compat ldap
shadow: compat ldap
Edit /etc/pam.d/lightdm and add
session required pam_mkhomedir.so skel=/etc/skel umask=0022
Edit /etc/pam.d/common-session and add
session required pam_mkhomedir.so skel=/etc/skel umask=0022
Apply changes
sudo update-rc.d nslcd enable
Configure lightdm to allow the user to specify a username for login
sudo /usr/lib/lightdm/lightdm-set-defaults -m true
And reboot. If the user logs in locally or via ssh, her home directory will be created automatically.
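Before testing a full login, a quick sanity check that NSS is actually seeing LDAP accounts is getent; "jdoe" is a placeholder for one of your LDAP users:
getent passwd jdoe
If that prints a passwd-style line, nsswitch and nslcd are doing their job.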
Saturday, March 2, 2013
Plex Web Proxy
Plex Media Server works excellently by itself as a server, and through the various client interfaces they provide: DLNA, Android, Windows, and Mac all have excellent clients for this server. In addition, they provide a web interface which can be served from the same system as the server.
Plex provides their own account management via their MyPlex website. You can use these credentials to access the libraries of other Plex members who have shared their servers with you. Since these credentials are needed in addition to my users' usual LDAP login, I was hoping to put the Plex Web client behind my Nginx proxy and use Basic HTTP Authentication. I was successful; unfortunately, the Web client doubles as the server management client if the source IP address of the request matches the local system. Unless I want users managing my account, probably not a good idea.
The second reason I wanted to put Plex Web behind the proxy was so that I could put it inside a subdomain, like www.domain.com/plexweb. Unfortunately, Plex does not yet provide a way to serve from a different context path behind a proxy.
The third reason to put Plex Web behind the proxy was to secure it with my SSL certificate and https. This is easily doable, as is proxying to the default Plex Web port of 32400, to keep the request URLs a little cleaner, and just use the same hole in the NAT as regular https.
The first thing that is necessary is to tweak the Nginx configuration to properly proxy all the necessary subdomains used by the Plex Web http API. In addition, Plex Web HAS to remain at the root domain. However, I still want to use the root domain for my web server frontpage. The idea is to look for http headers specific to the Plex Web requests, proxy those to the Plex server, and proxy home requests to the subdomain /home, where a simple homemade web page will reside.
The first step is to properly redirect the root domain, so edit /etc/nginx/sites-enabled/default, add the home section, and change the root location to the following:
location ^~ /home {
root /var/www/home;
}
location ^~ / {
set $test "true";
#If the web request contains either of these 2 headers, unset the flag
if ($http_x_plex_product) {
set $test "false";
}
if ($http_x_plex_protocol) {
set $test "false";
}
#if the flag is still set, redirect all requests to /home location
if ($test = "true") {
rewrite ^(.*)$ /home$1 last;
}
#otherwise, we have a Plex header, redirect to plex
proxy_pass http://www.domain.com:32400;
proxy_redirect http:// https://;
}
After a little packet sniffing, I determined the set of subdomains needed by Plex, so that I only have to forward those requests. These may change as Plex updates their API. Add the following sections to /etc/nginx/sites-enabled/default under the main server section. This could probably be done with a single location and an or'd regex, but from what I read, this may be faster.
#PlexWeb Section
location ^~ /:/ {
proxy_pass http://www.domain.com:32400/:/;
proxy_redirect http:// https://;
}
location ^~ /web {
proxy_pass http://www.domain.com:32400/web;
proxy_redirect http:// https://;
}
location ^~ /system {
proxy_pass http://www.domain.com:32400/system;
proxy_redirect http:// https://;
}
location ^~ /library {
proxy_pass http://www.domain.com:32400/library;
proxy_redirect http:// https://;
}
location ^~ /servers {
proxy_pass http://www.domain.com:32400/servers;
proxy_redirect http:// https://;
}
location ^~ /channels {
proxy_pass http://www.domain.com:32400/channels;
proxy_redirect http:// https://;
}
location ^~ /identity {
proxy_pass http://www.domain.com:32400/identity;
proxy_redirect http:// https://;
}
location ^~ /photo {
proxy_pass http://www.domain.com:32400/photo;
proxy_redirect http:// https://;
}
location ^~ /pms {
proxy_pass http://www.domain.com:32400/pms;
proxy_redirect http:// https://;
}
location ^~ /video {
proxy_pass http://www.domain.com:32400/video;
proxy_redirect http:// https://;
}
This is unfortunately an incomplete solution. The protocol that Plex uses over location /:/ actually uses WebSockets. As such, the above solution to tunnel/proxy Plex kinda works, but the client keeps thinking it's disconnected. Not sure what effect this has. It will be necessary to use the WebSocket proxying feature just made available last month in version 1.3.13 of Nginx. Upgrading to this development version currently breaks my other proxying (subsonic, ldap-manager), so for now, I am disabling Plex proxying until they work out the kinks. However, if the Plex proxy is all you need, just change the /:/ location to:
location ^~ /:/ {
proxy_pass http://www.domain.com:32400/:/;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
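To see whether the nginx you are running is new enough for the WebSocket-style /:/ block above (1.3.13 or later), just ask it for its version:
nginx -v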
Subsonic Proxy
Today's goal is to update Subsonic and Nginx so that all requests for Subsonic come through Nginx. The reason for this is two-fold.
- I can use the same port (ssl 443) and URL (www.domain.com) for all my server apps. Thus I don't need to poke a hole in my NAT to forward new ports, and users don't have to remember special ports.
- I can use the same SSL certificate for all my server apps, and it is an officially signed certificate, unlike the self-signed one that comes with Subsonic.
First, edit the Subsonic defaults file
sudo vi /etc/default/subsonic
and change the args to
SUBSONIC_ARGS="--context-path=/subsonic --port=8080 --https-port=0 --max-memory=300"
Finally, for security reasons, change the user for Subsonic from root to www-data, the default user for Nginx. Make sure the permissions on your media files are set to allow this user.
SUBSONIC_USER=www-data
The next step is to configure Nginx. Open the config
sudo vi /etc/nginx/sites-enabled/default
Then add the following section to the server section for port 443. We need to fix up some headers, and make sure that https is properly redirected.
location ^~ /subsonic/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_max_temp_file_size 0;
proxy_pass http://localhost:8080;
proxy_redirect http:// https://;
}
Then just restart both services, and you should be able to access Subsonic via https://www.domain.com/subsonic
sudo service nginx restart
sudo service subsonic restart
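If Subsonic loses access to the media after the user change, loosening read permissions on the media mount is usually enough; adjust the path to your own mount point (mine is /media/Quadra), and treat this as one possible approach rather than the only one:
# give everyone read access, plus directory traversal (execute) on directories only
sudo chmod -R a+rX /media/Quadra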
Fail2ban
Ok, maybe it's paranoia because of what I see at my job.....or maybe it's all the attempted logins I have seen in my authentication log, but it's time to secure my system....at least a little bit. Primarily I'm concerned with what I see in /var/log/auth.log: many repeated (failed) attempts to log in to my ssh daemon from IP addresses not related to myself. Probably some script kiddies or something, but the last thing I want to do is open myself to brute force attacks, or denial of service.
After some research I settled on a piece of software called fail2ban. Basically, it monitors various system logs, and after a number of failed accesses from a certain user/ip/whatever, it bans the IP address associated with that access by making a rule in iptables. Similar to denyhosts, fail2ban will work on many different services in addition to ssh, which is perfect for when I get my web authentication and LDAP server up and running. There are pretty good guides already out there, but this is specific to Ubuntu 12.04 and my server.
To install
sudo apt-get install fail2ban
Whew, with that out of the way you can modify the config file.
sudo vi /etc/fail2ban/jail.conf
Pretty straightforward; there are a couple of particulars to Ubuntu, and myself. First off, I think the 10 minute ban-time is a little short, so I bumped it to 60 minutes.
bantime = 3600
Apparently Debian has some issues with python-gamin (not sure if this is true with 12.04, but what the hell) so set the following
backend = polling
Restart fail2ban
sudo service fail2ban restart
And that's it! By default, ssh is enabled, and checks /var/log/auth.log. However, I did notice an issue while testing. rsyslog is the service responsible for authentication logging. Upon quickly repeated attempts to access the service, it may only print one message for multiple logins and just say something like "Previous message repeated 3 times". As such, fail2ban under-counts the number of accesses. To fix this, you need to change rsyslog.conf.
sudo vi /etc/rsyslog.conf
Change the value of RepeatedMsgReduction to
$RepeatedMsgReduction off
And restart the logger
sudo service rsyslog restart
To check the banning, try failing to log in from another system more than 3 times. Then do
sudo iptables -L
You should see a rule for fail2ban-ssh in the INPUT chain, and a fail2ban-ssh chain with 1 reference.
Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh
Chain fail2ban-ssh (1 references)
target prot opt source destination
DROP all -- 192.168.100.100 anywhere
RETURN all -- anywhere anywhere
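If you manage to ban yourself (or a test machine) while playing with this and don't want to wait out the bantime, the rule can be removed by hand from the fail2ban-ssh chain; the IP here matches the example output above:
sudo iptables -D fail2ban-ssh -s 192.168.100.100 -j DROP
Restarting fail2ban also clears its chains.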
Thursday, February 28, 2013
Nginx & SSL Setup
Easy to install
sudo apt-get install nginx-extras
Easy to run
sudo service nginx start
And easy to configure (once you know how). The default config file you will change the most is /etc/nginx/sites-enabled/default. My configuration will only have one virtual host, www.domain.com. I plan to separate individual apps and webpages through subdomains like /subsonic, /opds, etc. Also, I only plan to use https for my server. This is because one of the web apps (opds) can only authenticate through Basic HTTP Authentication, and I do not want the eventual LDAP credentials to be sent in plain text, so SSL it is. I ended up getting an SSL certificate through the PositiveSSL service with Namecheap/Comodo for around $5-$6 a year.
You can reference kbeezie's blog here for a start to getting the cert ready for nginx. Only difference is that I concatenated the certificates from Comodo into a bundle. This will be needed later for some clients, and the LDAP server we'll build later.
cat domain.crt PositiveSSLCA2.crt AddTrustExternalCARoot.crt > serverall.crt
Here is my initial nginx configuration file
server {
listen [::]:443;
server_name www.domain.com;
#root /var/www;
#index index.php index.html index.htm;
ssl on;
ssl_certificate /etc/nginx/certs/serverall.crt;
ssl_certificate_key /etc/nginx/certs/server.key;
ssl_session_timeout 5m;
access_log /var/log/nginx/server.access.log;
error_log /var/log/nginx/server.error.log;
root /usr/share/nginx/www;
index index.html index.htm;
location / {
try_files $uri $uri/ /index.html;
}
}
I also want to redirect any unencrypted requests on port 80 to use SSL on port 443, so I added the below.
server
{
listen 80;
server_name www.domain.com;
rewrite ^ https://$server_name$request_uri? permanent; # enforce https
}
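With the certificate bundle in place and nginx restarted, you can sanity-check the chain from another machine with openssl (substitute your real hostname):
openssl s_client -connect www.domain.com:443 -showcerts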
Tuesday, February 26, 2013
Loading the media servers
This is the easy part: install all the base packages that will be needed for the server, starting with the base media-serving applications.
1) Plex Media Server
Download the .deb package for your architecture (x64 in my case), then install:
sudo dpkg -i plexmediaserver_0.9.7.12.407-db37d6d_amd64.deb
Setup is fairly easy: you can start the web app from the applications list. Point it at the media locations, let it finish scanning, then go in and tweak any incorrect matches by the automated title agent. This installs a service named plexmediaserver. To restart:
sudo service plexmediaserver restart
2) Subsonic
Unfortunately, Plex is not nearly as full featured when it comes to music, which is where Subsonic comes in. First you need Java; I went with the open package for convenience, but I will probably switch to the official Sun JDK at some point:
sudo apt-get install openjdk-6-jre
Then, download the .deb package from the above site and install:
sudo dpkg -i subsonic-4.7.deb
To restart:
sudo service subsonic restart
Donate if you can; there's only one developer, and he's done a great job with the project. Once finished, log in via the web interface (http://localhost:4040) and change the admin password. Point it to the music directories, and you're done! Eventually we'll get to setting up SSL; for now it's unsecured.
3) Calibre
I use calibre to manage my e-book collection. It is very full-featured and open-source. It also comes with an OPDS web-server so that you can serve up the library to various e-reader applications, like Aldiko, Moon Reader & FBReader. I ended up going with a different OPDS server, since Calibre does not support SSL currently. Regardless, to install it
sudo apt-get install calibre
4) Calibre OPDS PHP Server
Great server, currently under active development. This is the other piece of the e-book server. The setup here is fairly complicated, and has lots of dependencies, namely NGINX, which is going to require a much longer description. This will be covered later.
And done! In fact, at this point, you could expose the various servers to the world over their respective ports on this system. I will not, at this point. Instead I will encapsulate all the servers behind a common web portal on port 443, using my own SSL certificate, web domain, and a unified LDAP login.
Sunday, February 24, 2013
Change of plan
Ok, more issues with the Chromebox. First off, something happened while in a chroot, and Chrome OS decided to log itself out. To top it off, I'm having trouble hot-booting my Adata SSD via the USB recovery hack. Screw it, I'm putting Ubuntu on my more stable gaming system: i7 2600K Sandy Bridge, 8 GB RAM, 2x 64 GB Mtron 7800 SSDs in a RAID 0 config, with a GeForce GTX 560 Ti (Fermi) 1GB RAM, on an Asrock H67M-ITX HT Mini-ITX mobo.
I've decided to do something else new here. Installed Windows 8 Pro on the system, VMWare 9.0 Workstation, and installed Ubuntu 12.04 x64 in a VM. From what I have read, it may even be possible to move the VM to bare metal if I wish in the future, but more than likely hosted on an ESXi server. Maybe that i7 Intel NUC or Gigabyte Brix system.
In the meantime, I have a much more stable system, if a little bit loud (no more so than the highway nearby). I also get to take advantage of USB 3.0 for the Cineraid data storage I have. Reinstalled crashplan, and am importing the backup now, F-T-W
External USB Raid and Crashplan
So, the first thing to set up is attaching the external RAID with all my music, movies, and media to the server. I plugged it in, and the USB device was auto-mounted inside Chrome OS. Not quite what I want; it needs to be mounted inside of Ubuntu.
Open a crosh tab with Ctrl-Alt-T, type shell to drop to an admin shell, then find the drive and unmount it.
mount
In my case the drive is at
/dev/sdb1 on /media/removable/Cineraid type ext3 (rw,nosuid,nodev,noexec,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
so just unmount
sudo umount /dev/sdb1
Eventually this will be added to a startup script starting crouton and entering the chroot. This should be enough to allow you to mount the drive inside of ubuntu. Just open a shell, create a mount point, and mount the drive.
sudo mkdir /media/Cineraid
sudo mount /dev/sdb1 /media/Cineraid
Again, some automation will help here.
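As a rough sketch of that automation (untested, and assuming crouton's enter-chroot script is installed and the RAID still shows up as /dev/sdb1), something like this run from the Chrome OS shell would free the drive and drop into the chroot:
#!/bin/sh
# release the drive from Chrome OS, then enter the Ubuntu chroot
sudo umount /dev/sdb1
sudo enter-chroot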
Now, in my case, I keep the entire external RAID backed up with a cloud backup service. Crashplan has unlimited storage capacity and works with Linux, so this is what I use. Instead of re-backing up the entire RAID, I can just import the previous backup from my previous server. The only caveat is that the path must be the same on the new computer as on the old computer. In my case, this is /media/Quadra. I also will need to add this drive to /etc/fstab to make automounting a little easier.
Get the UUID of the drive.
sudo blkid
Add an entry with this UUID to /etc/fstab so the drive automounts to the same location. It should look something like
UUID=3456-3452345-345-345345 /media/Quadra ext3 defaults 0 0
At this point, you should just be able to mount the drive manually
mount -a
Operating System
Initially I used a dual-boot system, running Chrubuntu, based on 12.04 Precise Pangolin. Fortunately, there is an alternative: a fairly new method of running Ubuntu inside a chroot under Chrome OS, called crouton. The advantage is I actually get to use Chrome OS when I want to, without changing the partition table. Just Ctrl-Alt-F1 for Chrome, Ctrl-Alt-F3 for Ubuntu. Easy! The disadvantages are many. Sharing resources with Chrome may prove tricky. Not only is there additional operating system CPU, memory and disk overhead, but there are things like having to unmount the USB RAID in Chrome so that I can mount it in Ubuntu.
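For reference, the crouton invocation to create a Precise chroot with Unity looks roughly like this, run from a crosh shell (flags per crouton's README at the time; -r picks the release, -t the target):
sudo sh ~/Downloads/crouton -r precise -t unity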
The biggest disadvantage is that this is a fairly unproven method of running Ubuntu. I've already run into permission errors with some of the mounted partitions from Chrome. The other is that there is no running gnome-session, which means that things like the Ubuntu Software Center, the shutdown menu, etc, all are currently broken inside of Unity. But, I figure as long as I document the server building process, save important config files, AND keep the data secure, I should be able to reconstitute the server much more quickly in the future.
So for now, chroot will be a bit of an experiment. I would actually prefer to run in some sort of VM or virtual appliance, but this processor is probably a little under-powered for that. Soon...soon...
The end
Since this blog started as a way for me to vent some of my frustrations in screwing up my web/media server, I should probably explain how I got into this mess. Hopefully, it will also help others avoid the mistakes I made.
Initially I built a dual-boot system, running Chrubuntu, based on 12.04 Precise Pangolin. This worked well for just running Ubuntu (even if it's based on a Chrome kernel). Unfortunately, swapping between Chrome and Ubuntu requires modifying the SSD partition table priorities, which is how I foobar'd the system in the end. I ended up not being able to boot the system.
Since a Chrome OS image booted from USB couldn't see the SSD, I ended up taking out the mSATA SSD, putting it in an mSATA-to-SATA adapter, putting it in another system, mounting it as a virtual drive inside a VirtualBox instance of Chrome OS, then running cgpt to modify the partition table. Unfortunately, cgpt wanted to "fix" the partition table. I should never have listened. Fixing the partition table basically blew away the Ubuntu partition and merged it with another Chrome partition. My day was over. The next couple of days were spent trying to recover important configuration files using photorec, find, and grep. Ugh! Notes to self: back up important files, document the process to rebuild the server (this blog), and consider a new line of work.
The beginning
And it starts. I've consumed so many helpful guides, how-to's, opinions and the like that I felt the world needed one more blog. So here it is. This blog will initially be focused on the untimely death (and hopefully resurrection) of my self-hosted web server. I acquired a Google/Samsung Chromebox from Google IO 2012 and decided to re-purpose a somewhat limited cloud pc, into my general web/media server.
The system is very quiet and small, ideal for being co-located with my TV and audio system. Pretty decent specs: Intel dual-core Celeron, 4 GB RAM, 16 GB SSD, etc, etc... But for my use, running Ubuntu as a server, it needed some upgrades. After some inspiration from this article, I started out with a few. For starters, 16 GB of DDR3 RAM. Secondly, I wanted at least 64 GB of disk space for Linux. Turns out, swapping in a new SSD would prove troublesome. After working with some folks on Google Groups trying to do the same thing, I decided to go with a faster 128 GB SSD than stock, even though it meant I needed to do some tricks when cold-booting the Chromebox. At least this way I can reuse it when the Intel NUC i5 comes out in April of 2013. Oh, and lastly, to store all the multimedia I have (about 4 TB), I attached a four-drive external RAID over USB 2.0 using 4x 2TB drives. BTW, the RAID makes an excellent replacement for a Christmas tree, what with the massive amount of blinking LEDs.