New Photo gallery (19 Jan 2020)
https://blog.chadweisshaar.com/2020/01/19/new-photo-gallery-2/

I’ve updated the look of my photo gallery and changed how the photos are stored. My goal is to make the gallery work better on mobile devices and to modernize the look and feel. I also moved the hosting of all the photos from my website to Amazon S3. I’m planning to switch my web hosting to a private server, and it will cost less if I don’t store my photos on that server.

Here’s a comparison of the look of the old gallery to the new:

[side-by-side screenshots of the old and new gallery]

Motivation

I created my original photo gallery before web galleries like Google Photos were popular. At that time you could manually upload your photos to a host like Flickr or Shutterfly, but you gave up some control of your photos, and their interfaces weren’t great.

Today it is hard to argue against using Google Photos – especially for people who don’t have a standalone camera. Google Photos provides AI-based search and categorization that replaces hours of captioning and tagging, along with an easy-to-use interface and lots of free storage.

But for me there are still disadvantages to switching to Google Photos. The main one is my stubborn resistance to handing all my photos to Google. Their terms and conditions are pretty good, but they are subject to change at any time, and they give Google permission to use the photos forever. The second is my sub-photo feature: in my albums, a photo can have a set of sub-photos that you see by clicking on the main photo. This keeps an album small, showing just the best photos, while still letting me post everything.

New Look

My existing photo gallery has small buttons and scroll bars that are difficult to use on a mobile device. Along with fixing that, I changed the layout of the page to be responsive to the screen size. I also removed the requirement that every photo take the same amount of space on the page, which allows a better display of panorama and portrait orientations. I based the look of the new site on Google Photos, which has a nice clean look.

Amazon S3

My web hosting company (HostGator) advertises “unlimited” storage space, but in practice there is a limit of 100K files. That sounds like a lot, but between all my webpages, my WordPress blog, and my photos, I’m getting close to it. I also plan to switch to a private web server, like one from Digital Ocean, which will give me more control of the software on my website, but then I’ll have to pay significantly more for storage space. So as part of this gallery rewrite I moved the storage of my images to Amazon S3.

I’m using S3 as a cloud-based hard drive. I added the capability to my photo processing software to upload the original photo, along with a mid-sized and a thumbnail-sized image, to S3. The thumbnails are about 12KB each, the mid-sized image is about 300KB, and the original depends on the camera. All my photos take about 100GB, and adding the thumbnail and mid-sized versions increased that by about 8%. The cost to store all of this in S3 is about $1/month.

S3 was not difficult to set up, but it wasn’t trivial either. I needed to set up permissions and get API keys, and their systems were not intuitive to me. On the other hand, their software SDK for C# was extremely easy to use, and I was able to quickly add the code to upload the files.
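
The uploader lives in my C# desktop software, so there’s no code for this step in the post; as a rough sketch in the blog’s own PHP instead, here is roughly what a three-size upload looks like with the AWS SDK for PHP. The bucket name, key layout, and file paths are all invented for illustration:

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Credentials are read from the environment or ~/.aws/credentials
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-west-2']);

// Upload the original plus the two derived sizes for one photo
foreach (['original', 'mid', 'thumb'] as $size) {
  $s3->putObject([
    'Bucket'     => 'my-photo-gallery',                       // hypothetical bucket
    'Key'        => "photos/2020/paris/IMG_1234_{$size}.jpg", // hypothetical key layout
    'SourceFile' => "/photos/processed/IMG_1234_{$size}.jpg",
  ]);
}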

Database

I’m using a MySQL database on my website to store albums and the photo/sub-photo relationships. I considered using the S3 tag feature for photo metadata (caption, person tags, photo location, camera settings, etc.), but I decided to store that information in the MySQL database instead. This let me write all the web code without needing the S3 APIs, and it will make it easier to switch from Amazon to a competitor if needed.
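
The post doesn’t publish the actual schema, but the photo/sub-photo relationship only needs a self-referencing key. A hypothetical, simplified version (all table and column names are invented):

$db = new mysqli('localhost', 'gallery_user', 'secret', 'gallery');

// One row per album
$db->query("CREATE TABLE album (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  title     VARCHAR(255) NOT NULL,
  is_public TINYINT(1) NOT NULL DEFAULT 1
)");

// One row per photo; a NULL parent_photo_id marks a main photo,
// and sub-photos point at the main photo they sit behind
$db->query("CREATE TABLE photo (
  id              INT AUTO_INCREMENT PRIMARY KEY,
  album_id        INT NOT NULL,
  parent_photo_id INT NULL,
  s3_key          VARCHAR(512) NOT NULL,
  caption         TEXT,
  taken_at        DATETIME,
  FOREIGN KEY (album_id) REFERENCES album(id),
  FOREIGN KEY (parent_photo_id) REFERENCES photo(id)
)");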

This database ended up being pretty similar to the existing gallery system, but I did add people, tags, and camera data. I also moved the code that adds photos to the database from a PHP script on my website into my C# desktop photo software, because old versions of PHP were having trouble processing some of my photos correctly.

AJAX

My old gallery used PHP to generate a static page with bits of JavaScript. The whole page loaded at once, but it could take a while to come up, and the approach made it harder to add JavaScript to the page. In the intervening years I’ve gotten used to building pages in HTML/JavaScript that make an AJAX call to a very basic PHP script that loads data from the database, so I switched the gallery to that method. It made all the code much cleaner and let me add nicer transitions and more JavaScript features. The page comes up quicker, but takes a bit longer to fully load the photos.
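
The post doesn’t show that script, but a minimal sketch of the pattern might look like this: a hypothetical gallery.php that the page calls with fetch() or XMLHttpRequest. Table and column names are made up, and get_result() assumes the mysqlnd driver:

// gallery.php - return one album's main photos as JSON
header('Content-Type: application/json');

$db = new mysqli('localhost', 'gallery_user', 'secret', 'gallery');
$albumId = (int)($_GET['album'] ?? 0);

// Main photos only; sub-photos are fetched by a separate call
$stmt = $db->prepare(
  'SELECT id, s3_key, caption FROM photo
    WHERE album_id = ? AND parent_photo_id IS NULL');
$stmt->bind_param('i', $albumId);
$stmt->execute();

echo json_encode($stmt->get_result()->fetch_all(MYSQLI_ASSOC));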

Future work

There are a couple of features that I’d still like to add to the gallery system. The first is a more advanced search where you could look for specific people or limit the results by year. I’d also like to add a map to each album so that all of its photos show up on the same map. It would also be nice to give people a way to download all the photos in an album at once.

Moving data from MySQL to Google Sheets with PHP (2 Jun 2017)
https://blog.chadweisshaar.com/2017/06/01/moving-data-from-mysql-to-google-sheets-with-php/

I recently needed to pull data from a database and add it to a Google spreadsheet. Google provides an API for working with Sheets, but like many of their APIs, the documentation isn’t great. I’ve got my program working and figured I’d document my steps for future me and anyone else who needs them.

Step 1 – Permissions

The first step is to set up your app in the Google developer console. Create a project and enable the Google Sheets API for that project. Under the API Manager, select Credentials. This is where you set up a way to authenticate your application when it uses the API.

Other ways to authenticate

If you are just reading data from a public spreadsheet, you can simply generate an API key and pass it in calls to the API. If you are acting as a Google user (modifying data on someone’s behalf), you will need to go through the whole OAuth2 process and present that user with a permission screen. That process is fairly complex and not appropriate for a background job.

For an application that runs in the background without a user interface, the best approach is a “service account key”. Create a service account key, give the account a name, and pick a role (I don’t think it matters). Select JSON as the key type. The “Service account ID”, which looks like an email address, can be used to give this fake account access to a private spreadsheet: just share the sheet with that address. If the spreadsheet is public or shared by link, you won’t need it.

Hit done and you will be prompted to download a JSON-encoded private key. We will use this key to authenticate all our API calls.

Step 2 – PHP Library

Next we need to get the Sheets API PHP client library. There are good instructions for doing that on the project’s GitHub page. Basically, you can either use Composer or download the whole API library.
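
With Composer, pulling in the library is a single command (assuming the package is still published as google/apiclient):

composer require google/apiclient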

Then you add the line

require_once '/path/to/your-project/vendor/autoload.php';
-OR-
require_once '/path/to/google-api-php-client/vendor/autoload.php';

Copy the JSON-encoded private key into the directory containing the PHP file. Keep this file secure: it identifies your app, and anyone with it can act as your application. Google also can’t replace it for you, so don’t lose it.

To use the PHP library, you have to set up the environment and then create a Google_Client:

putenv('GOOGLE_APPLICATION_CREDENTIALS=<Your Service Account Key File>.json');
define('SCOPES', implode(' ', array(Google_Service_Sheets::SPREADSHEETS)));

$client = new Google_Client();
$client->useApplicationDefaultCredentials();
$client->setScopes(SCOPES);

$service = new Google_Service_Sheets($client);

The first line tells the client where to find the JSON file with your application’s private key, and the second line tells the client which APIs you are going to use. I just needed the spreadsheet API for this application. (Remember that you have to enable each API you need in the developer console.)

Step 3 – Read data from the database and write to a sheet

Use MySQL to load some data:

$dbLink = mysqli_connect("localhost", "<dbUsername>", "<dbPassword>", "<databaseName>");
if ( !$dbLink )
  die("Couldn't connect to database");

$stmt = $dbLink->prepare("SELECT Name, Address FROM People WHERE City = ?");
$stmt->bind_param("s", <Some variable>);  // "s" binds one string parameter
$stmt->execute();

$data = array();
$rowCount = 0;
$stmt->bind_result($name, $address);      // map the result columns onto variables
while ( $stmt->fetch() )
{
  ++$rowCount;
  // Google rejects NULL values, so substitute empty strings
  $row = array(is_null($name) ? "" : $name, 
               is_null($address) ? "" : $address);
  array_push($data, $row);
}

This code assumes that there isn’t so much data that you need to break up the API calls. It also puts all the data into a single range in the Google sheet. I’m basically building an array of rows, where each element is an array of column values. Google doesn’t like NULL values, so you’ll want to handle that case; I just inserted blank strings.

Now that we have the data loaded, we can write it to the spreadsheet with this code:

$spreadsheetId = <SpreadsheetID>;
$optParams = ['valueInputOption' => 'RAW'];
$requestBody = new Google_Service_Sheets_ValueRange();
$requestBody->setMajorDimension("ROWS");
$requestBody->setValues($data);
$range = "SheetName!A2:B";
$response = $service->spreadsheets_values->update($spreadsheetId, $range, $requestBody, $optParams);

// Clear out old data
$range = sprintf("SheetName!A%d:B", $rowCount+2);
$service->spreadsheets_values->clear($spreadsheetId, $range, new Google_Service_Sheets_ClearValuesRequest());

The $spreadsheetId is the unique identifier that Google gives to any Google doc. It is the long string of characters in the URL of the spreadsheet.
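
For example, in a URL like https://docs.google.com/spreadsheets/d/1aBcDeFgHiJkLmNoPqRsT/edit#gid=0 (the ID here is made up), the identifier is the segment between /d/ and /edit.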

The $range is a standard spreadsheet address, like you would use in any Google Sheets or Excel formula. Google is fairly generous here and allows you to specify a range that is bigger than the data.

The valueInputOption of RAW in $optParams tells Google to treat the input as raw data. The other option, USER_ENTERED, treats the values as if a user had typed them in, so things like number parsing and formulas are applied.

This application fills in data that may be shorter or longer than the previously written data. The last few lines clear out any old data past the end of the new range.
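
If the data ever does get too big for one call, one approach (a sketch, not something the post covers) is to chunk the rows and advance the starting row for each batch; the batch size here is arbitrary:

$batchSize = 500;
foreach (array_chunk($data, $batchSize) as $i => $chunk) {
  $body = new Google_Service_Sheets_ValueRange();
  $body->setMajorDimension("ROWS");
  $body->setValues($chunk);
  // First batch starts at row 2, the next at row 502, and so on
  $range = sprintf("SheetName!A%d:B", 2 + $i * $batchSize);
  $service->spreadsheets_values->update($spreadsheetId, $range, $body, $optParams);
}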

That is all there is to it. The API has lots of other capabilities for manipulating spreadsheets. Here is some sample code for reading data from a sheet:

$response = $service->spreadsheets_values->get($spreadsheetId, $range); // $range as above, e.g. "SheetName!A2:B"
$values = $response->getValues();

if (count($values) == 0) {
  print "No data found.\n";
} else {
  foreach ($values as $row) {
    printf("%s\n", $row[0]);
  }
}
Take back the data – part 3 (5 Nov 2013)
https://blog.chadweisshaar.com/2013/11/04/take-back-the-data-part-3/

I have decided to stop using cloud services and move all my data back to my own computers. In Take back the data – Part 1, I listed all the services that I use. In Take back the data – Part 2, I described how I plan to replace my cloud services with my own web server. In this post I’ll describe the process of setting up the web server hardware and software in more detail.

Hardware

A web server typically doesn’t need to be a powerful machine unless it gets a lot of traffic; an ideal web server is probably a low-cost, low-power machine, and a computer marketed as a home theater PC would work well. I had spare hardware from my last desktop computer upgrade, so I used that. I did need to buy a power supply, and found that an 80 Plus certified supply pays for itself in energy savings pretty quickly:

Assuming that the machine is going to idle at 180 W, I compared several power supplies. Our electricity costs 13 cents per kilowatt-hour when all taxes and fees are included.
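
Each cell in the table is just the purchase price plus the cumulative energy cost. Assuming the machine runs around the clock, that works out as roughly:

total cost ≈ purchase price + years × (idle draw ÷ efficiency) × 8760 h/yr × $0.13/kWh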

Power supply rating                Purchase cost   1-year total   2-year total   5-year total
Non-certified (~70% efficiency)    $20             $362           $703           $1729
80 Plus (80% efficiency)           $25             $324           $623           $1395
80 Plus Bronze (85% efficiency)    $40             $321           $603           $1448
80 Plus Gold (90% efficiency)      $63             $328           $594           $1392
80 Plus Platinum (92% efficiency)  $95             $355           $615           $1395

As you can see, the sweet spot is either Bronze or Gold, and power is a very significant cost to consider when starting up your own web server.

Static IP and Hostname

To get an address that the outside world can use to reach my home web server, I need a static IP from my DSL provider. CenturyLink will provide a single static IP for $6 per month. Requesting one can be done on a web page and took less than half an hour.

I was given the address 209.181.65.34. This is like having a phone number that other people can always use to call me. To add myself to the internet’s version of the phone book, you register a domain name that points to that address.

I did that through namecheap.com. It costs about $10 per year, and I registered the name billandchad.com. This was also quick and easy. The default setting at Namecheap pointed my name at one of their web servers, which shows a standard “squatter” page; I changed it to point to my static IP address. They also had ways to set up email addresses that forward to another email account, but I set it up to send the mail directly to my machine.

Software

I decided to go with a Linux-based machine. Both Windows and Linux can run an Apache web server, but it is a little easier to find DNS and mail server software for Linux. Linux is also free.

I installed the latest version of Ubuntu (13.04). I installed the desktop version instead of the server version so that I could use the machine as a home theater PC.

Once the OS was installed and a user was created, I installed an SSH server so that I could log in from my main desktop PC.

sudo apt-get install openssh-server

# set up a static IP address and hostname so that I can log in remotely from inside my local network
# Edit /etc/network/interfaces to look like this
# auto eth0
# iface eth0 inet static
#        address 10.0.0.2
#        netmask 255.255.255.0
#        gateway 10.0.0.1
#        broadcast 10.0.0.255
#        dns-nameservers 10.0.0.1 205.171.2.65
# Edit /etc/hostname to have one line with the name of the machine

sudo service networking restart

The “sudo” command runs the rest of the command as root (administrator); Ubuntu strongly recommends that you don’t create a root account and use “sudo” instead. The “apt-get” command is how you install new software in Ubuntu on the command line. With an SSH server running, I can use PuTTY (or something like it) to log into my server from my main Windows PC by its hostname or 10.0.0.2.

I set up my router to forward all incoming traffic on ports 80, 443, 25, 465, 585, 993, and 995 to 10.0.0.2. Port 80 is used for HTTP and port 443 for HTTPS; the rest are used for email. These settings mean that if someone online connects to billandchad.com, the connection reaches my home web server instead of being dropped by the router.

Next I installed Apache, PHP, and MySQL. Together with Linux, these make up a common web server configuration known as the “LAMP” stack. I also installed phpMyAdmin, which is a nice web app for maintaining a MySQL database.

sudo apt-get install lamp-server^
sudo apt-get install phpmyadmin

# This next command may not be necessary; it should have been done by the phpmyadmin installer
sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf

# Edit /etc/apache2/conf.d/security and make following changes:
# ServerTokens Prod
# ServerSignature Off

Now that a web server is installed, you can point a web browser at billandchad.com and see a default web page served by Apache. Next I installed the ownCloud web app. I had to add the ownCloud repository to apt:

# Add the following line to /etc/apt/sources.list.d/owncloud.list
# deb http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_13.04/ /

wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_13.04/Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install owncloud

Use phpMyAdmin to create the owncloud user and a database with the same name:

CREATE USER 'owncloud'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE IF NOT EXISTS owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'password';

Now we can use the ownCloud web app to finish the installation and create ownCloud users. To do this, just point a browser at http://127.0.0.1/owncloud.

USB Drive

We have a network-shared drive to store music, photos, and other shared data. Ubuntu auto-mounts the partitions on a USB drive when it is plugged in, but those mount points are only created after a user logs into the machine. I don’t want to have to log in after a reboot, so I created my own mount points.

# First get the UUIDs of the drive partitions
blkid
# next add lines to the /etc/fstab file. One line for each partition.
# The first column is the UUID from the first step
# The second column is where the drive will be mounted
# The third column is the filesystem type; ntfs-3g handles the Windows-standard NTFS
# 	UUID=1294CE3B94CE2159 /media/ChadsDrive ntfs-3g defaults
#	UUID=72601D93601D5EE3 /media/WilliamsDrive ntfs-3g defaults
#	UUID=50C0308BC0307974 /media/DataDrive ntfs-3g defaults
# make the mount points
sudo mkdir -p /media/DataDrive
sudo mkdir -p /media/ChadsDrive
sudo mkdir -p /media/WilliamsDrive
# re-mount drives
sudo mount -a

Now we can go to /media/DataDrive and see the files on the USB drive. I’d like to be able to see these files from my Windows machines too. To do this I’ll use Samba.

sudo apt-get install samba
# Add these lines to /etc/samba/smb.conf for each partition that should be shared
# [DataDrive]
#        path = /media/DataDrive
#        browseable = yes
#        writable = yes
#        guest ok = yes
sudo service smbd restart

Now we can access these shared drives by going to a Windows machine on the local network and pointing the file explorer to \\10.0.0.2\DataDrive.

DNS

DNS is what computers use to turn a human-friendly name, like billandchad.com, into an actual IP address. When you get broadband internet service, the ISP provides a DNS server that your web browser uses to look up addresses. This works fine, and when I look up billandchad.com at CenturyLink’s DNS, it comes back with my static IP address. That is good, but it causes a problem inside my home network: if I try to go to http://billandchad.com, it goes to http://209.181.65.34. My DSL router sees that as its own external IP and drops the request (on the theory that you wouldn’t want to route your traffic through the external internet just to get back to a computer in your house). Of course that is exactly what I was trying to do, but there is a “better” way.

I can set up my own DNS server that tells my local computers how to get to billandchad.com. So if I look up billandchad.com I will get the address 10.0.0.2, but if anyone else looks it up they will get 209.181.65.34.

The standard DNS server is called “bind”, and it is a bit of a hassle to set up. I am going to first set up bind as just a caching DNS for my local network, meaning it will do all the DNS lookups for my home computers. The first time a site is requested (say google.com), my web server will ask CenturyLink for the address; the second time, it will have the answer cached, which is quite a bit faster than going back to CenturyLink every time. Most modern DSL routers already have a DNS cache, and Windows also caches DNS entries, so the actual speed improvement for browsing will be small.

# install bind
sudo apt-get install bind9
# edit /etc/bind/named.conf.options to have the following.
# These are the DNS servers I will use when the site isn't cached
#        forwarders {
#                205.171.2.65;
#                8.8.8.8;
#                156.154.71.25;
#        };

Next, I will tell bind that if someone asks about billandchad.com, it can provide the address itself. This makes my DNS server the “master” for billandchad.com. Of course, the only computers using this DNS server are the other computers in my house.

# edit /etc/bind/named.conf.local to:
#	zone "billandchad.com" {
#	        type master;
#	        file "/etc/bind/db.billandchad.com";
#	};
# create the file db.billandchad.com with:
#	$TTL    604800
#	@       IN      SOA     ns.billandchad.com. hostmaster.billandchad.com. (
#	                              2         ; Serial
#	                         604800         ; Refresh
#	                          86400         ; Retry
#	                        2419200         ; Expire
#	                         604800 )       ; Negative Cache TTL
#	;
#	@       IN      NS      ns.billandchad.com.
#	;
#	        IN      A       10.0.0.2
#	ns      IN      A       10.0.0.2
#	                MX      10 mail
#	                TXT     "Necropolis"
#	www             CNAME   ns
#
# Edit /etc/network/interfaces to point to 127.0.0.1 for DNS
sudo /etc/init.d/bind9 restart
sudo service networking restart

DHCP

Now I need to set up my own DHCP server so that I can tell all the computers in my house which DNS server to use.

My DSL router has a DHCP server built in, but it insists upon listing itself as the DNS server. So the first step is to turn off the DHCP server in the router.

Next I install and configure the default DHCP server for Linux:

sudo apt-get install isc-dhcp-server

#edit /etc/dhcp/dhcpd.conf to have the following settings
# option domain-name "billandchad.com";
# option domain-name-servers 10.0.0.2, 205.171.2.65;
# option routers 10.0.0.1;
# option subnet-mask 255.255.255.0;
# option broadcast-address 10.0.0.255;
# authoritative;
# default-lease-time 7200;
# max-lease-time 86400;
# subnet 10.0.0.0 netmask 255.255.255.0 {
#   range 10.0.0.15 10.0.0.254;
# }

sudo service isc-dhcp-server restart

Finally, I can go to http://billandchad.com/owncloud from my phone, my computer, or another computer connected to the internet and get to my local web server.

My local web server also provides several network services for the other computers in my house. When my desktop computer is turned on, it asks the network for an IP address. My web server responds as the DHCP server and hands out an IP address, and at the same time tells my desktop computer that it is also the DNS server. When my desktop computer then tries to go to billandchad.com, the web server tells it that the IP address is 10.0.0.2, and the connection is made internally.

If a computer on the internet goes to billandchad.com, a public DNS server will tell it that the IP address is 209.181.65.34. Connecting to that address reaches my DSL router on port 80 or 443, and the router forwards the request to my web server, where Apache responds because it is listening on port 80.

Setting all of this up has been a good reminder of what is really going on behind the scenes to make the internet work.

New Photo Gallery (17 May 2008)
https://blog.chadweisshaar.com/2008/05/17/new-photo-gallery/

I have changed web hosting companies and redesigned my web page. The biggest change is to the photo galleries. There are a lot of photo gallery programs out there, but I couldn’t find one with the feature that I wanted: the ability to upload a large set of pictures but only display a subset of them on the main page, with the rest of the pictures “behind” the displayed set. So, if I go to Paris and take a bunch of pictures of the Eiffel Tower, I can put one picture on the main page but allow users to see the rest of them if that is something they are interested in.

I wrote my own system with PHP and MySQL that lets me manage the photos and the galleries. It also lets me separate the galleries into public and private areas. I have been scanning prints of family photos from my childhood, and I put those online for the rest of my family to see.
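
The sub-photo feature is just a self-referencing lookup. A hypothetical sketch of the query behind the “show the rest” click, assuming a mysqli connection in $db, the clicked photo’s id in $mainPhotoId, and invented table and column names:

$stmt = $db->prepare('SELECT filename, caption FROM photo WHERE parent_photo_id = ?');
$stmt->bind_param('i', $mainPhotoId);
$stmt->execute();
$stmt->bind_result($filename, $caption);
while ( $stmt->fetch() )
{
  // render each picture hiding "behind" the main photo
}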
