Take back the data – Part 4

I have decided to stop using cloud services and move all my data back to my own computers. Part 1 listed all the cloud services that I use. Part 2 described how I plan to replace my cloud services with my own web server. Part 3 covered the process of setting up the web server hardware and software in more detail.

In this post I’ll describe securing my web server with SSL, setting up my own email server, and the backup system.

SSL

To run my own email server and to connect to my ownCloud web server securely, I first need to set up SSL on my web server.

Many websites use SSL to create a secure connection and prove their identity to their users. You use SSL whenever you connect to a website with https and get the little lock icon. SSL is built on public key cryptography, where each user creates a pair of keys – a public key and a private key. If a message is encrypted with one of the keys, it can only be decrypted with the other key. The public key is shared with everyone, and it can be used to send a message that only the owner of the matching private key can decrypt. If the private key is used to “sign” a message, receivers can verify the sender because the signature will only decrypt with the sender’s public key.

On the internet, websites have public and private keys. When you go to a website, your browser downloads its public key. Your browser then creates a one-time symmetric encryption key, encrypts it with the public key, and sends it back to the website. All traffic is then encrypted with that agreed-upon symmetric key. This system is nice because only the website has to have a public/private key pair. The traffic is encrypted, and you can verify who the website is, even though they can’t verify who you are.
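
As a toy demonstration of the idea, here is the same exchange done by hand with the openssl command-line tool (the file names are made up for the example):

# Create a key pair and extract the public half
openssl genrsa -out demo.key 2048
openssl rsa -in demo.key -pubout -out demo.pub
# Encrypt a small "session secret" with the public key...
echo "session secret" | openssl rsautl -encrypt -pubin -inkey demo.pub -out secret.bin
# ...only the private key can recover it
openssl rsautl -decrypt -inkey demo.key -in secret.bin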

But there is still one problem. How do you know that the public key you are looking at really came from that website? There are quite a few ways an attacker can intercept traffic from a web browser and send it to a fake website that is set up to look like the one you are trying to reach (known as “spoofing”). When a website gives you its public key, there has to be some way to trust that it is really that site’s public key and not an attacker’s.

On the internet, all public keys are signed by a trusted third party. If your browser sees a public key that hasn’t been signed by one of these “Certificate Authorities”, it gives you a warning. All the major browsers are set up to trust the same hundred or so certificate authority companies. So, if you want people to trust that you are really XYZ Corp, you pay one of the authorities to verify that you are really XYZ Corp. Then they sign your public key so that browsers trust it.
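
You can watch this happen with openssl: it will print the certificate a site presents and which authority signed it (example.com here is just a stand-in for any https site):

# Show the subject of a site's certificate and who issued it
openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer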

But there are some serious problems with this system. If ANY of the hundred or so authorities is hacked (which has happened a few times), the hackers can issue themselves certificates to make it look like they are ANYONE. There is an individual certificate revocation system, and some of the “authorities” are no longer trusted at all, but those fixes take a long time to get out to everyone.

There was another option in the early days of the internet called a “web of trust”. Each user on the internet would decide which public keys to trust. This would just be the people you know personally. If a couple of your friends trusted a third party, you would trust them too. Software would look at the web of who trusts who to decide how much you should trust them. This arrangement has its own problems, and requires more work for each user of the internet. But it has one big benefit too. If everyone had a public/private key, we wouldn’t need passwords for websites.

I decided not to pay one of these companies to verify my certificate. Instead I will “self-sign” my public key. The only people who will trust that I am who I say I am are people who get my public key from me directly. This is fine for my purposes as I am not trying to communicate with strangers, and I can directly give my public key to anyone who would actually need it. So if you go to https://billandchad.com, your browser will tell you that the site’s security certificate is not trusted.

Creating a self-signed certificate is pretty easy on Linux. On Windows you need to download the free and open source OpenSSL program. Here are the steps that I used to create a certificate authority called Necropolis.

# Create the private key
sudo openssl genrsa -des3 -out /etc/ssl/private/necropolisRootCA.key 2048
# Use it to self-sign our public key
sudo openssl req -x509 -new -nodes -key /etc/ssl/private/necropolisRootCA.key -days 8192 -out /etc/ssl/necropolisRootCA.pem
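
Before trusting the new root certificate anywhere, a quick sanity check that it looks right:

# Print the subject and validity window of the new root certificate
openssl x509 -in /etc/ssl/necropolisRootCA.pem -noout -subject -dates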

Since I created it, I obviously trust this key, so I install the public key on my Windows machine as a trusted root certificate authority. This is done with the Internet Properties dialog or the Certificates snap-in (certmgr.msc). IE and Chrome use the Windows certificate manager, so they will trust any certificates signed by this new authority. Firefox keeps its own certificate store, so the root certificate has to be imported there separately.
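
If you prefer the command line, Windows also ships a certutil tool that can import into the trusted root store; from an elevated command prompt, something like this should work:

certutil -addstore Root necropolisRootCA.pem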

[Screenshot: the Necropolis root certificate in the Windows certificate manager]

Next I create the certificate for billandchad.com and sign it with the Necropolis private key.

#Create a private key
openssl genrsa -des3 -out billandchad.com.key 2048

#Generate a CSR
openssl req -new -key billandchad.com.key -out billandchad.com.csr

# Sign the CSR and create the certificate (run from /etc/ssl so the relative CA paths resolve)
sudo openssl x509 -req -in billandchad.com.csr -CA necropolisRootCA.pem -CAkey private/necropolisRootCA.key -CAcreateserial -out billandchad.com.crt -days 8196
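
A quick check that the new certificate really chains back to the Necropolis root:

# Should print "billandchad.com.crt: OK"
openssl verify -CAfile necropolisRootCA.pem billandchad.com.crt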

With this private/public key pair for billandchad.com (signed by Necropolis), I can turn on SSL in Apache.

# Enable the SSL site (run from /etc/apache2; "sudo a2ensite default-ssl" does the same thing)
sudo ln -s sites-available/default-ssl sites-enabled/001-default-ssl
# edit sites-available/default-ssl to point to the new certificates and set SSLVerifyClient to none
sudo a2enmod ssl
sudo service apache2 restart
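
The relevant lines in default-ssl end up looking roughly like this (I keep copies of the certificate and key under /etc/apache2/ssl, the same paths the mail configuration below references):

# Inside the <VirtualHost _default_:443> section of sites-available/default-ssl
#   SSLEngine on
#   SSLCertificateFile    /etc/apache2/ssl/billandchad.com.crt
#   SSLCertificateKeyFile /etc/apache2/ssl/billandchad.com.key
#   SSLVerifyClient none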

Next I create a personal key for my user account that I can use to send and receive secure and/or digitally signed email.

# Create a private key
openssl genrsa -des3 -out cweissha.key 2048
# Generate a CSR
openssl req -new -key cweissha.key -out cweissha.csr
# Sign with Necropolis (again from /etc/ssl so the relative paths resolve)
sudo openssl x509 -req -in cweissha.csr -CA necropolisRootCA.pem -CAkey private/necropolisRootCA.key -CAcreateserial -out cweissha.crt -days 8196
# Package the signed certificate and private key together into a .p12 so that we can both encrypt and decrypt with it
sudo openssl pkcs12 -export -in cweissha.crt -inkey cweissha.key -out cweissha.p12

Then I install the .p12 file into the Windows certificate manager. Since it has both my public and private keys, I can use it to digitally sign email and to decrypt email that has been encrypted with my public key.

[Screenshot: the personal certificate in the Windows certificate manager]

When I send a digitally signed email, it includes my public key. If the receiver trusts my public key – either because they already trust Necropolis, or because I call them and tell them I just sent my public key – then they can use it to encrypt email that only I can read. And, if they receive an email signed with my key, they know that it really came from me. It is easy to send an email that appears to be from someone else, so it is nice to have a way to verify the sender.

Email

Like many people online, I have a lot of email accounts. I have accounts on MSN and Yahoo that I never use and a GMail account that I use rarely. I primarily use a few accounts provided by my web hosting company at chadweisshaar.com. I plan to move chadweisshaar.com to my own web server, and part of that move will be setting up my web server to receive and send email.

Linux comes set up with local email tied to your user account. Before the internet was popular, this was the only way to get an email account. You got a user account on an internet-connected machine, and your email address became username@machinename. This is still how it works underneath all the web interfaces and POP/IMAP clients.
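
That local mail is still usable directly. For instance, assuming the mailutils package is installed:

# Send a message to the local cweissha account...
echo "testing local delivery" | mail -s "hello" cweissha
# ...then read that account's mailbox with the same tool
mail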

Making an Ubuntu machine into an email server that can talk to the internet means installing a Mail Transfer Agent. I picked Postfix.

sudo apt-get install postfix
sudo dpkg-reconfigure postfix
# I configured postfix as mail.billandchad.com
#   destinations (mydestination) = billandchad.com localhost Necropolis
#   local networks (mynetworks)  = 127.0.0.0/8 10.0.0.0/8 209.181.65.34
# edit /etc/postfix/main.cf to point to our cert and key
#   smtpd_tls_cert_file=/etc/apache2/ssl/billandchad.com.crt
#   smtpd_tls_key_file=/etc/apache2/ssl/billandchad.com.key

This allows me to send/receive mail to/from the internet on my web server. I can email in and out as cweissha@billandchad.com as long as I am on that machine. I’d like to be able to use my standard Windows email client to send and receive email, which means setting up an IMAP or POP server. I picked Dovecot. I configured Postfix to authenticate users through Dovecot and to offer TLS:

sudo postconf -e 'smtpd_sasl_type = dovecot'
sudo postconf -e 'smtpd_sasl_path = private/auth-client'
sudo postconf -e 'smtpd_sasl_local_domain ='
sudo postconf -e 'smtpd_sasl_security_options = noanonymous'
sudo postconf -e 'broken_sasl_auth_clients = yes'
sudo postconf -e 'smtpd_sasl_auth_enable = yes'
sudo postconf -e 'smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination'
sudo postconf -e 'inet_interfaces = all'
sudo postconf -e 'smtpd_tls_auth_only = no'
sudo postconf -e 'smtp_tls_security_level = may'
sudo postconf -e 'smtpd_tls_security_level = may'
sudo postconf -e 'smtp_tls_note_starttls_offer = yes'
sudo postconf -e 'smtpd_tls_key_file = /etc/apache2/ssl/billandchad.com.key'
sudo postconf -e 'smtpd_tls_cert_file = /etc/apache2/ssl/billandchad.com.crt'
sudo postconf -e 'smtpd_tls_loglevel = 1'
sudo postconf -e 'smtpd_tls_received_header = yes'
sudo postconf -e 'smtpd_tls_session_cache_timeout = 3600s'
sudo postconf -e 'tls_random_source = dev:/dev/urandom'
sudo postconf -e 'myhostname = mail.billandchad.com'
sudo postconf -e 'smtpd_tls_CAfile = /etc/ssl/necropolisRootCA.pem'
sudo /etc/init.d/postfix restart
sudo apt-get install dovecot-common
sudo apt-get install dovecot-postfix
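
At this point the TLS endpoints can be checked from any machine with openssl installed:

# Postfix should offer STARTTLS on port 25
openssl s_client -starttls smtp -connect mail.billandchad.com:25 </dev/null
# Dovecot should answer IMAPS on port 993
openssl s_client -connect mail.billandchad.com:993 </dev/null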

I also wanted to be able to use the address chad@billandchad.com instead of my actual username of cweissha, so I set up an email alias.

sudo postconf -e "virtual_alias_domains = billandchad.com"
sudo postconf -e "virtual_alias_maps = hash:/etc/postfix/virtual"
# Add to /etc/postfix/virtual
#   chad@billandchad.com cweissha
sudo postmap /etc/postfix/virtual
sudo /etc/init.d/postfix restart
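
postmap can also query the map to confirm the alias resolves:

# Should print "cweissha"
postmap -q chad@billandchad.com hash:/etc/postfix/virtual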

With this setup, I can add an email account to Windows Live Mail as chad@billandchad.com with a mail server of mail.billandchad.com and my Linux login credentials. I can then exchange email with any internet recipient.

[Screenshot: account setup in Windows Live Mail]

Finally, I can tell Windows Live Mail to use my personal certificate when I send email:

[Screenshot: selecting the personal certificate in Windows Live Mail]

Backup

Linux has quite a few options for backup. I chose to use the built-in rsync tool bolstered with some scripts for creating snapshots. The goal is frequent backups that each look like a full backup, while saving space by hard-linking unchanged files to the previous snapshot.
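
Snapshot scripts like these are typically built around rsync’s --link-dest option. A minimal sketch of the idea, using made-up snapshot paths that match the layout below:

# Files unchanged since snapshot.001 become hard links into it, so each
# "full" snapshot only costs the space of what actually changed
rsync -a --delete --link-dest=/backup/snapshot/Necropolis/snapshot.001 \
    /home/ /backup/snapshot/Necropolis/snapshot.000/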

First I set up a Linux partition on my external drive. I already had a backup partition, but it was formatted as NTFS.

# unmount everything
sudo umount -a
# partition the disk (in fdisk: t to change the partition type, 1 to select the partition, 83 for Linux, then w to write)
sudo fdisk /dev/sdb
# format the partition
sudo mkfs.ext4 /dev/sdb1
# label the partition
sudo e2label /dev/sdb1 LinuxBackup

# Update /etc/fstab with the new UUID and type ext4
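# e.g. (placeholder UUID - substitute the value blkid prints for /dev/sdb1):
#   UUID=<uuid-from-blkid>  /media/LinuxBackup  ext4  defaults  0  2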
sudo mount -a

Next I set up some links and directories for the backup scripts, which I got from Point Software.

# /backup is a symlink to the mounted backup partition
sudo ln -s /media/LinuxBackup /backup
sudo mkdir -pv /backup/snapshot/{rsync,Necropolis,md5-log}
sudo ln -s /backup/snapshot/Necropolis /backup/snapshot/localhost

Install the three scripts into /backup/snapshot/rsync.

rsync-list.sh - Script for making md5 hashes of files for comparisons.
rsync-include - List of directories to include and exclude from the backups.
rsync-snapshot.sh - Main script that does the backups.

Running the rsync-snapshot.sh script as root makes a full backup of the system as /backup/snapshot/Necropolis/snapshot.001. Previous snapshots are rotated up a number. Snapshots are removed as needed to free disk space, thinned out so that fewer and fewer are kept as they get older.
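
Restoring doesn’t need any special tooling, because every snapshot looks like a complete filesystem tree. For example (assuming the include list covers /etc, and picking an arbitrary snapshot number):

# Recover a single file from an older snapshot
sudo cp -a /backup/snapshot/Necropolis/snapshot.003/etc/fstab /etc/fstab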

I wanted a backup to be created every two days, so I set up the following cron job:

sudo crontab -e
# 0 3 */2 * * /backup/snapshot/rsync/rsync-snapshot.sh

Take back the data – part 3

I have decided to stop using cloud services and move all my data back to my own computers. In Take back the data – Part 1, I listed all the services that I use. In Take back the data – Part 2, I described how I plan to replace my cloud services with my own web server. In this post I’ll describe the process of setting up the web server hardware and software in more detail.

Hardware

A web server typically doesn’t need to be a powerful machine unless it is getting a lot of traffic; an ideal home web server is a low-cost, low-power machine. A computer marketed as a home theater PC would work well. I had spare hardware from my last desktop computer upgrade, so I used that. I did need to buy a power supply, and found that an 80 Plus certified supply pays for itself in energy savings pretty quickly:

Assuming the machine will idle at 180 W, I compared the total cost (purchase price plus electricity) of several power supplies. Our electricity costs 13 cents per kilowatt-hour when all taxes and fees are included.

Power supply rating             | Purchase cost | 1-year total cost | 2-year total cost | 5-year total cost
Non-certified (~70% efficiency) | $20           | $362              | $703              | $1729
80 Plus (80% efficiency)        | $25           | $324              | $623              | $1395
80 Plus Bronze (85%)            | $40           | $321              | $603              | $1448
80 Plus Gold (90%)              | $63           | $328              | $594              | $1392
80 Plus Platinum (92%)          | $95           | $355              | $615              | $1395

As you can see, the sweet spot is either Bronze or Gold, and electricity is a very significant cost to consider when starting up your own web server.

Static IP and Hostname

To get an address that the outside world can use to reach my home web server, I need a static IP from my DSL provider. CenturyLink will provide a single static IP for $6 per month. Ordering one is done on a web page and took less than half an hour.

I was given the address 209.181.65.34. This is like having a phone number that other people can always use to call me. To add myself to the internet version of the phone book, I had to register a domain name that points to that address.

I did that through namecheap.com. This costs about $10 per year, and I registered the name billandchad.com. This was also quick and easy. The default setting at namecheap was to point my name to one of their web servers that serves a standard “squatter” page; I changed that to point to my static IP address. They also had ways to set up email addresses that would forward to another email account, but I set it up to send the mail directly to my machine.

Software

I decided to go with a Linux based machine. Both Windows and Linux can be used to run an Apache web server, but it is a little bit easier to find DNS and mail server software for Linux. Linux is also free.

I installed the latest version of Ubuntu (13.04). I installed the desktop version instead of the server version so that I could use the machine as a home theater PC.

Once the OS was installed and a user created, I installed an ssh server so that I could log in from my main desktop PC.

sudo apt-get install openssh-server

# setup a static ip address and hostname so that I can log in remotely from inside my local network
# Edit /etc/network/interfaces to look like this
# auto eth0
# iface eth0 inet static
#        address 10.0.0.2
#        netmask 255.255.255.0
#        gateway 10.0.0.1
#        broadcast 10.0.0.255
#        dns-nameservers 10.0.0.1 205.171.2.65
# Edit /etc/hostname to have one line with the name of the machine

sudo service networking restart

The “sudo” command runs the rest of the command as root (administrator). Ubuntu strongly recommends that you don’t create a root account and use “sudo” instead. The “apt-get” command is how you install new software in Ubuntu on the command line. With the ssh server running, I can use PuTTY (or something like it) to log into my server from my main Windows PC by going to its hostname or 10.0.0.2.
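
From another Linux machine (or anything with an OpenSSH client), the same login is a single command:

ssh cweissha@10.0.0.2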

I set up my router to forward all incoming traffic on ports 80, 443, 25, 465, 585, 993 and 995 to 10.0.0.2. Port 80 is used for http and port 443 for https; the rest are used for email. These settings mean that if someone online connects to billandchad.com, the connection reaches my home web server instead of being dropped by the router.

Next I installed Apache, PHP, and MySQL. Together with Linux itself, these make up the common web server configuration known as the “LAMP” stack. I also installed phpmyadmin, which is a nice webapp for maintaining a MySQL database.

sudo apt-get install lamp-server^
sudo apt-get install phpmyadmin

# This next command may not be necessary. It should have been done by the phpmyadmin installer
sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf

# Edit /etc/apache2/conf.d/security and make following changes:
# ServerTokens Prod
# ServerSignature Off

Now that a web server is installed, you can point a web browser to billandchad.com and see a default web page served by Apache. Next I installed the ownCloud web app. I had to add the ownCloud repository to apt:

# Add the following line to /etc/apt/sources.list.d/owncloud.list
# deb http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_13.04/ /

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_13.04/Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install owncloud

Use phpmyadmin to create the owncloud user and a database with the same name.

CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE IF NOT EXISTS owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'password';

Now we can use the ownCloud webapp to finish the installation and create ownCloud users. To do this, just point a browser to http://127.0.0.1/owncloud.

USB Drive

We have a network shared drive to store music, photos, and other shared data. Ubuntu auto-mounts the partitions on a USB drive when it is plugged in. However, these mount points are only created after a user logs into the machine. I don’t want to have to log in after a reboot, so I will create my own mount points.

# First get the UUIDs of the drive partitions
blkid
# next add lines to the /etc/fstab file, one line for each partition.
# The first column is the UUID from the first step
# The second column is where the drive will be mounted
# The third column is the filesystem type; ntfs-3g mounts the Windows-standard NTFS
# 	UUID=1294CE3B94CE2159 /media/ChadsDrive ntfs-3g defaults
#	UUID=72601D93601D5EE3 /media/WilliamsDrive ntfs-3g defaults
#	UUID=50C0308BC0307974 /media/DataDrive ntfs-3g defaults
# make the mount points
sudo mkdir -p /media/DataDrive
sudo mkdir -p /media/ChadsDrive
sudo mkdir -p /media/WilliamsDrive
# re-mount drives
sudo mount -a

Now we can go to /media/DataDrive and see the files on the USB drive. I’d like to be able to see these files from my Windows machines too. To do this I’ll use Samba.

sudo apt-get install samba
# Add these lines to /etc/samba/smb.conf for each partition that should be shared
# [DataDrive]
#        path = /media/DataDrive
#        browseable = yes
#        writable = yes
#        guest ok = yes
sudo service smbd restart

Now we can access these shared drives by going to a Windows machine on the local network and pointing the file explorer to \\10.0.0.2\DataDrive.
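
The shares can also be checked from the Linux side, assuming the smbclient package is installed:

# List the shares the server is offering, without a password
smbclient -L //10.0.0.2 -N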

DNS

DNS is what computers use to turn a human-friendly name, like billandchad.com, into an actual IP address. When you get broadband internet service, the ISP provides you with a DNS server that your web browser uses to look up addresses. This works fine, and when I look up billandchad.com at CenturyLink’s DNS, it comes back with my static IP address. This is good, but causes a problem inside my home network. If I try to go to http://billandchad.com it goes to http://209.181.65.34. My DSL router sees that as my own external IP and drops the request (on the theory that you wouldn’t want to route your traffic through the external internet just to get back to a computer in your house). Of course that is exactly what I was trying to do, but there is a “better” way to do this.

I can set up my own DNS server that will tell my local computers how to get to billandchad.com. When I look up billandchad.com I will get the address 10.0.0.2, but when anyone else looks it up they will get 209.181.65.34.

The standard DNS server is called “bind”, and it is a bit of a hassle to set up. I am first going to set up bind as just a caching DNS for my local network. That means it will do all the DNS lookups for my home computers. The first time a site is requested (say google.com), my web server will ask CenturyLink for the address. The second time the site is requested, it will have the answer cached. This is quite a bit faster than going back to CenturyLink every time. Most modern DSL routers already have a DNS cache, and Windows also caches DNS entries, so the actual speed improvement for browsing will be small.

# install bind
sudo apt-get install bind9
# edit /etc/bind/named.conf.options to have the following.
# These are the DNS servers I will use when the site isn't cached
#        forwarders {
#                205.171.2.65;
#                8.8.8.8;
#                156.154.71.25;
#        };

Next, I tell bind that it can answer queries for billandchad.com itself. This makes my DNS server the “master” for billandchad.com. Of course, the only machines using this DNS server are the other computers in my house.

# edit /etc/bind/named.conf.local to:
#	zone "billandchad.com" {
#	        type master;
#	        file "/etc/bind/db.billandchad.com";
#	};
# create the file db.billandchad.com with:
#	$TTL    604800
#	@       IN      SOA     ns.billandchad.com. hostmaster.billandchad.com. (
#	                              2         ; Serial
#	                         604800         ; Refresh
#	                          86400         ; Retry
#	                        2419200         ; Expire
#	                         604800 )       ; Negative Cache TTL
#	;
#	@       IN      NS      ns.billandchad.com.
#	@       IN      A       10.0.0.2
#	@       IN      MX      10 mail
#	@       IN      TXT     "Necropolis"
#	;
#	ns      IN      A       10.0.0.2
#	mail    IN      A       10.0.0.2
#	www     IN      CNAME   ns
#
# Edit /etc/network/interfaces to point to 127.0.0.1 for DNS
sudo /etc/init.d/bind9 restart
sudo service networking restart
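
A quick check that the split view works as intended:

# From inside the LAN, the local bind answers with the internal address
dig @10.0.0.2 billandchad.com +short    # expect 10.0.0.2
# A public resolver still hands out the external address
dig @8.8.8.8 billandchad.com +short     # expect 209.181.65.34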

DHCP

Now I need to set up my own DHCP server so that I can tell all the computers in my house what DNS server to use.

My DSL router has a DHCP server built in, but it insists upon listing itself as the DNS server. So the first step is to turn off the DHCP server in the router.

Next I install and configure the standard DHCP server for Linux:

sudo apt-get install isc-dhcp-server

# edit /etc/dhcp/dhcpd.conf to have the following settings
# option domain-name "billandchad.com";
# option domain-name-servers 10.0.0.2, 205.171.2.65;
# option routers 10.0.0.1;
# option subnet-mask 255.255.255.0;
# option broadcast-address 10.0.0.255;
# authoritative;
# default-lease-time 7200;
# max-lease-time 86400;
# subnet 10.0.0.0 netmask 255.255.255.0 {
#   range 10.0.0.15 10.0.0.254;
# }

sudo service isc-dhcp-server restart
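
Once clients start asking for addresses, the leases the server has handed out can be inspected:

# Each active lease shows up as a block in this file
cat /var/lib/dhcp/dhcpd.leases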

Finally, I can go to http://billandchad.com/owncloud from my phone, my computer, or any other computer connected to the internet and reach my local web server.

My local web server is also providing several network services for the other computers in my house. When my desktop computer is turned on, it asks the network for an IP address. My web server responds that it is the DHCP server and hands out an IP address to my desktop computer. At the same time, it tells my desktop computer that it is the DNS server. When my desktop computer tries to go to billandchad.com, the web server tells it that the IP address is 10.0.0.2 and the connection is made internally.

If a computer on the internet goes to billandchad.com, a real DNS server will tell it that the IP address is 209.181.65.34. Connecting to that address will connect to my DSL router on port 80 or 443. My router will forward that request on to my web server. The apache server on my web server will respond to the request because it is listening on port 80.

Setting all of this up has been a good reminder of what is really going on behind the scenes to make the internet work.
