Updating an 8-Bit Game, 35 Years Later

In 1990, a computer magazine published a game I wrote. It was a big deal for teenage me.

There is now a thriving “retrocomputing” scene, with people making new software and hardware for computers like the Commodore 64 and Apple II. I decided to update my old game, and experience what developing for these classic machines is like now.

The Game

In August 1987, Compute!’s Gazette published Bounty Hunter, an educational game in which you chase a bad guy around the U.S. It was played on a map that flipped between east and west halves. I thought it would be fun to do a similar game with countries, played on a scrolling world map. Compute!’s Gazette published that game, International Bounty Hunter, in March 1990.

The Challenge

My goals for the 2025 edition:

  • An updated map, of course. The original had 2 Germanies and a Soviet Union!
  • A bigger map.
  • Make it run on the Commodore 64. The original was written for the C128, using BASIC 7’s new graphics commands. But I want this to run on the real hardware of the best-selling personal computer ever.

Can I do it with my rusty 8-bit programming skills?

Just for context: when I developed this in 1989, there was no World Wide Web (the first web page went online in late 1990). There were bulletin-board systems (BBSes) but participating in one would have meant a modem call to Winnipeg. Computer class in school was just typing, so learning programming was a solo exercise, using magazines and books.

A boy and his C128

The Data

How I made the map in 1989: I photocopied a map onto graph paper and colored the squares with markers. Which got the aspect ratio wrong, incidentally, because the characters on the C64 aren’t square. 🙄

How I made the map in 2025:

  1. I downloaded a shapefile from Natural Earth.
  2. The original map looks like a Robinson projection, so I projected the new map the same way: ogr2ogr -f "ESRI Shapefile" -lco ENCODING=UTF-8 -t_srs "ESRI:54030" projected_map.shp.zip /vsizip/ne_110m_admin_0_map_units.zip
  3. I used Python libraries Fiona and Pillow to draw the shapes into a GIF.
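
Step 3 boils down to scaling each country’s outline into pixel space and filling it. A rough sketch of that step (the NAME property, the color table, and the 266×101 target size are my placeholders, and holes in polygons are ignored):

import fiona
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 266, 101                      # placeholder target size
COLORS = {"Canada": (160, 160, 160)}          # placeholder country -> RGB table
OCEAN, DEFAULT = (0, 0, 136), (0, 136, 0)

with fiona.open("projected_map.shp") as shp:
    minx, miny, maxx, maxy = shp.bounds
    img = Image.new("RGB", (WIDTH, HEIGHT), OCEAN)
    draw = ImageDraw.Draw(img)

    def to_pixel(x, y):
        # projected metres -> pixel coordinates (y axis flipped)
        return ((x - minx) / (maxx - minx) * (WIDTH - 1),
                (maxy - y) / (maxy - miny) * (HEIGHT - 1))

    for feature in shp:
        color = COLORS.get(feature["properties"]["NAME"], DEFAULT)
        geom = feature["geometry"]
        polygons = (geom["coordinates"] if geom["type"] == "MultiPolygon"
                    else [geom["coordinates"]])
        for polygon in polygons:
            outer_ring = polygon[0]           # skip holes in this sketch
            draw.polygon([to_pixel(x, y) for x, y in outer_ring], fill=color)

img.save("world_map.gif")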

I mostly re-used the country colors from the original game. For the countries in the Balkan Peninsula and the post-Soviet states, I used a greedy graph coloring algorithm to assign new colors.
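
Greedy coloring is about as simple as graph algorithms get: walk the countries in some order and give each one the lowest-numbered color that none of its already-colored neighbours is using. A toy version (the adjacency list here is illustrative, not the full data):

def greedy_coloring(neighbours):
    colors = {}
    for country in neighbours:
        used = {colors[n] for n in neighbours[country] if n in colors}
        colors[country] = next(c for c in range(16) if c not in used)  # 16 C64 colors
    return colors

balkans = {
    "Serbia": ["Croatia", "Bosnia", "Montenegro", "North Macedonia"],
    "Croatia": ["Serbia", "Bosnia", "Montenegro"],
    "Bosnia": ["Serbia", "Croatia", "Montenegro"],
    "Montenegro": ["Serbia", "Croatia", "Bosnia"],
    "North Macedonia": ["Serbia"],
}
print(greedy_coloring(balkans))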

How I got the routes between countries in 1989: I looked at a map and listed the connections by hand.

How I got the routes in 2025: I used The World Factbook’s list of land boundaries. There’s some commentary mixed in with the data, but with a small amount of cleanup the list can be parsed with Beautiful Soup.
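
The scraping itself is only a few lines. This sketch assumes one list item per country with a heading for the name; the tag names and the exact wording of the “border countries” field are assumptions about the page, not gospel:

import re
from bs4 import BeautifulSoup

with open("land-boundaries.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

routes = {}
for entry in soup.find_all("li"):             # assumed structure: one <li> per country
    heading = entry.find("h3")
    if heading is None:
        continue
    country = heading.get_text(strip=True)
    text = entry.get_text(" ", strip=True)
    match = re.search(r"border countries[^:]*:\s*(.+)", text)
    if match:
        # "Austria 801 km, Czechia 704 km, ..." -> ["Austria", "Czechia", ...]
        routes[country] = [re.sub(r"\s*[\d,.]+\s*km", "", part).strip()
                           for part in match.group(1).split(",")]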

I asked ChatGPT what the main hub for international travel is on each continent. It told me: Atlanta, São Paulo, London, Dubai, Johannesburg, Sydney, and Tokyo. I allowed shortcuts between those hubs wherever you could draw a line over just ocean.

In the original game, your starting point—the country you’re “working for”—was random. I’m a tad uncomfortable even “pretend-working” for some unpleasant regimes out there, so in this update, you work only for countries rated “Free” in the Freedom in the World report.

The Code

This time around, I am developing on a Mac with the VICE emulator.

What language to use? I considered compiled BASIC, but opted for the cc65 C compiler. My C skills are rusty, but not as rusty as my BASIC!

I mentioned that I want a bigger map. Will I run into memory limits? The original map was 192×101, but for the correct aspect ratio it should have been 266×101. That already multiplies out to nearly 27,000 bytes, and we only have 38K for the program and data combined. Storing 2 color values per byte (Commodore computers have only 16 colors) is a quick way to cut the map’s bulk in half.
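
Packing and unpacking nibbles is cheap on both ends. On the Mac side, something like this prepares the data (putting the left pixel in the high nibble is just my choice for the sketch):

def pack_row(colors):                 # colors: C64 color numbers, 0..15
    if len(colors) % 2:
        colors = colors + [0]         # pad odd-length rows
    return bytes((colors[i] << 4) | colors[i + 1]
                 for i in range(0, len(colors), 2))

def unpack_byte(b):
    return b >> 4, b & 0x0F

row = [6, 6, 5, 5, 14, 0]             # blue, blue, green, green, light blue, black
packed = pack_row(row)                # 3 bytes instead of 6
assert [c for b in packed for c in unpack_byte(b)] == row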

I began with the map drawing code, which would be the hardest part because it needs to be fast. Specifically, it must run in less than 1/60 of a second.

Why? The C64 used a cathode-ray tube monitor, where an electron beam races from the top to the bottom of the screen, lighting up pixels as it goes. If you’re modifying the whole screen—for example, to redraw a map—then you really don’t want to be drawing where the electron beam currently is, because that produces nasty flickering effects. Ideally, you want to follow along behind the beam, making your screen updates, ready for when the beam returns to the top of the screen.

The screen is redrawn 60 times per second. On a 1 MHz computer, that means I get about 16,666 clock cycles to retrieve the color data, unpack the 2 colors from each byte, write them to the screen, and handle loop counters.

As a baseline, drawing the map using C with nested loops (rows & columns) and *dst++=*src++ style copying takes around 210,500 cycles.

My final version, written in assembly language, comes in around 14,900 cycles. The speed was achieved with 3 techniques:

  • Loop unrolling: I do just 2 loops, one for the top half of the map and one for the bottom half.
  • Self-modifying code: Typically, to retrieve data from dynamically-allocated memory, you would use indirect-indexed addressing: LDA ($addr),Y. Indexed absolute addressing is faster, but to use it you have to know the memory addresses ahead of time and hardcode them. Or, calculate the addresses and insert them into your code on the fly. Self-modifying code sounds dangerous, but cc65 has macros to help make the code readable and the operations safer.
  • Counting down: I loop right-to-left across the screen. If you make your loop variable X go from 0 to 40, then after each INX (increment X) you have to compare to 40 to know if you’ve hit the end of your loop. But if you go from 40 to 0, then after each DEX (decrement X) you don’t have to compare to 0: DEX has a built-in check of whether it has hit zero. That saves one compare operation for every step of the loop.

You can measure speed using the stopwatch and breakpoints in the VICE monitor, but there’s a more fun way to do it. There’s a trick where you change the screen border color as your code runs, and the width of the color stripes shows you how long your code takes:

Scrolling the map is potentially a faster operation than drawing it from scratch. Scrolling is mostly just shifting data that’s already on the screen, and you only have to retrieve & unpack new data for the edges. Maybe that can be an exercise for 2026 🙂.

The other speed concern is the C64’s legendarily slow disk drive. I want to minimize the map size and the number of disk operations to read it. As in the original game, I compressed the map data, using run-length encoding for its simplicity. I tried a few variations of RLE. The one that worked best on this data: any byte with the high bit set to 0 is a color, any byte with the high bit set to 1 is a count between 3–130. I use only 2 disk reads: read 4 bytes to get the map dimensions, then read the remainder of the file in one go, using the same memory allocated for the map as the read buffer.
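
For the curious, here is the scheme in miniature, as a Python sketch of an encoder and decoder running on the Mac side. I’ve written it over single color values, and whether the count byte comes before or after the value it repeats is my choice for the sketch, not necessarily the game’s exact file layout:

def rle_encode(colors):                       # colors: values with the high bit clear
    out = bytearray()
    i = 0
    while i < len(colors):
        run = 1
        while i + run < len(colors) and colors[i + run] == colors[i] and run < 130:
            run += 1
        if run >= 3:
            out.append(0x80 + run - 3)        # high bit set: a count from 3 to 130
            out.append(colors[i])
        else:
            out.extend(colors[i:i + run])     # short runs stay as literal colors
        i += run
    return bytes(out)

def rle_decode(data):
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] & 0x80:                    # count byte, followed by the color
            out.extend([data[i + 1]] * ((data[i] & 0x7F) + 3))
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

sample = bytes([6] * 40 + [5, 14, 14] + [5] * 10)
assert rle_decode(rle_encode(sample)) == sample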

With map scrolling and disk I/O done, the rest of the program logic is simple:

And here’s the game: ibh2025.d64. Not much explanation is needed for playing it. You can generally jump along island chains and from islands to the nearest large landmass. Some tiny countries are missing. Some countries are abbreviated to save your poor typing fingers (like USA and DRC). You can type HELP or HINT or simply press Return to get a suggestion on where you can go next.

So how was the experience of coding for the C64 now? It was fantastic being able to write in C, build with Makefiles, debug with the VICE monitor, and browse advice on Lemon64. I was shocked how many details about the C64 sprang readily from my memory like 1989 was just yesterday.

The C64 is a classic for a reason: a powerful but affordable and friendly computer that savvy programmers pushed to do amazing things. I feel privileged to have grown up in the era of those machines and I am delighted that a community of enthusiasts is keeping them alive.

I’ll close by saying I’m still kinda proud of how teenage me captured those cartoon characters as sprites!

Temperature Alerts with Awair & a Fitbit

I recently had a furnace problem. Sometimes, when the house switched from using the heat pump to the furnace, the furnace would not turn on. And the temperature would drop as the thermostat wasn’t clever enough to recognize something was wrong.

I have 2 tropical birds and they could get quite uncomfortable if the temperature fell overnight. So I wanted some way to wake myself to manually reset the thermostat when this happened.

My plan

I have an Awair Element air quality monitor that measures temperature. The accompanying app does alerts, but you don’t get fine control over the alert thresholds. But Awair has an API too. I turned it on and got my API key; now I could fetch temperature readings.
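
For reference, pulling a reading out of the API looks roughly like this in Python. The endpoint path and response shape are from memory of Awair’s developer docs, so treat them as assumptions; the device ID and threshold are made up:

import requests

TOKEN = "MY_AWAIR_API_TOKEN"                  # placeholder
URL = ("https://developer-apis.awair.is/v1/users/self/devices/"
       "awair-element/12345/air-data/latest")  # device type/id are placeholders

resp = requests.get(URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()
sensors = resp.json()["data"][0]["sensors"]
temp_c = next(s["value"] for s in sensors if s["comp"] == "temp")

if temp_c < 17.0:                             # threshold chosen for the birds
    print(f"Too cold: {temp_c:.1f} C, send the alert")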

Next I turned to Zapier to schedule checks overnight. Strangely, the hour field in a Zapier scheduler event is in 12-hour format, so you can’t tell 2am and 2pm apart. Fortunately, you also get the full date/time in UTC, which you can format using the “H” option to get the hour in 24-hour format. Now I could schedule hourly temperature readings, and pay attention only to the ones that happened during the night.

The final step was to alert me if the temperature dropped below a threshold. I used Zapier to send myself a text, and set an old Fitbit Inspire 2 to buzz when my phone received texts. Together, the phone’s ping and the Fitbit’s buzz were sure to wake me up.

Happily, I only had to run this system for 2 nights before a furnace technician fixed the faulty part. But it was a fun exercise in tying together my existing devices to solve a problem.

Shipping to the EU with AWS Lambda

I’ve been itching to try out Amazon’s serverless services for some time now. The European Union gave me the excuse I was looking for! This post is about how I used AWS Lambda to handle some new EU customs requirements.

A company I work with was preparing for the EU’s requirements for more detailed customs declarations. Even before the new rules came into effect on July 1, customers had started to see more packages get stopped by customs with a request for details on every item inside.

We faced 3 problems:

  1. Their storefront software (Shopify) allows only 1 tariff code per “product”—not enough for gift sets or bundles.
  2. Shopify’s API does not expose customs information, so even if we could enter more details into Shopify, that information couldn’t flow automatically to the shipping software.
  3. Their shipping software (ShipStation) can fill in customs information for orders, but again it allows only 1 tariff code per product.

I came up with a 3-part solution:

  1. When a product in Shopify needs more than 1 tariff code, store the codes (along with description, quantity, and value) in Shopify metafields. A nice thing about metafields is that they are visible in Shopify’s API! We use the Accentuate Custom Fields app to allow store admins to view and edit the metafields in a friendly way.
  2. When a new order appears in ShipStation, use ShipStation’s webhooks to send order details to a small piece of code hosted in AWS Lambda.
  3. If the order needs tariff codes filled in, retrieve the metafields from Shopify, modify the order, and send it back to ShipStation.

AWS Lambda is a good fit for this task: too complicated for a duct-tape tool like Zapier, but too small to justify having a server.

When multiple software tools are communicating about products, SKUs—compact, unique names assigned to each product—are the keys that tie everything together. Strangely, Shopify’s REST API does not let you look up a product by SKU. Thankfully, Shopify’s GraphQL API does allow that. Here’s what a query to retrieve metafields given a product SKU looks like:

{
  products(first:1, query:"sku:SOME-PRODUCT") {
    edges {
      node {
        legacyResourceId
        metafields(first:25, namespace:"accentuate") {
          edges {
            node {
              key
              value
            }
          }
        }
      }
    }
  }
}
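
From the Lambda function, running that query is an ordinary HTTPS POST. Here is a sketch using the requests library; the shop name and API version are placeholders, and in the real setup the access token comes from Secrets Manager rather than being hardcoded:

import requests

SHOP = "example-store"                        # placeholder shop name
TOKEN = "shpat_..."                           # placeholder Shopify admin API token
URL = f"https://{SHOP}.myshopify.com/admin/api/2023-10/graphql.json"

QUERY = """
{
  products(first:1, query:"sku:SOME-PRODUCT") {
    edges {
      node {
        legacyResourceId
        metafields(first:25, namespace:"accentuate") {
          edges { node { key value } }
        }
      }
    }
  }
}
"""

resp = requests.post(URL, json={"query": QUERY},
                     headers={"X-Shopify-Access-Token": TOKEN}, timeout=10)
resp.raise_for_status()
product = resp.json()["data"]["products"]["edges"][0]["node"]
metafields = {e["node"]["key"]: e["node"]["value"]
              for e in product["metafields"]["edges"]}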

Overall verdict on AWS Lambda: Setting up a small piece of code that runs on demand was easy. A facility for testing was right there next to the code editor, which was nice. The Secrets Manager was convenient for safely storing API keys. The policies that govern permissions were ugly, but you can’t win ’em all. 🙂 I will certainly use Lambda for more projects in the future.

Mocking Up an Aviary with POV-Ray

Years ago, I received a fantastic gift: the book Practical Ray Tracing in C. It came with a ray tracing program, DKBTrace, which later became POV-Ray.

I became a little obsessed with ray tracing and made a lot of images with POV-Ray.

This summer, we wanted to build an aviary for our parrots. After some research, we decided on a sturdy outer frame made of pressure-treated wood, with removable inner panels made of untreated wood and stainless steel mesh. (Untreated wood is safer if the parrots decide to chew on it.)

The more we discussed the design—how tall should it be? which way should the door open? how should the panels fit together?—the more I wanted to mock it up somehow. I thought, this is a job for a CAD program. I don’t know any CAD programs—but I know POV-Ray!

We went through 3 versions. With POV-Ray we could examine it from the outside, from the inside, from the top. We could watch how the door swings. A handy resource was MIStupid.com’s list of true lumber dimensions.

POV-Ray comes with some nice wood grain textures, which serve an extra purpose here: they show the kind of wood joints being used.

The wood grain shows the kind of joint being used

POV-Ray’s text-based input format was an advantage too: once the model was finished, I could simply grep for lines containing “PTW” (pressure treated wood) or “Pine” and get a list of what to order from the lumber yard.

To the developers of POV-Ray: two grey birds thank you for helping build their safe outdoor play space!


The sky is based on the Realistic Skies tutorial by Friedrich A. Lohmüller.

A Protractor Test Example Using XPath

Protractor is an amazing tool for testing AngularJS apps, but I’ve had a tough time finding examples of nontrivial tasks in Protractor. This blog post is just to put one more example out there.

This post covers how I automated the use of this wheel scroller widget when testing an AngularJS app, VegUp:

wheel scroller widget
A wheel scroller widget for picking food portion sizes

The introductory examples in the Protractor documentation will look familiar to anyone who has used JUnit, PyUnit, etc. Tests are a one-step-after-the-other process: do step 1, do step 2, check the results, do step 3, check that results have changed as expected, etc.

But the moment you try to do something more complex in Protractor, you discover that Protractor is a different beast. It is built on an asynchronous system, namely “promises”, and it employs tricks to force one-step-after-the-other behavior like other test systems.

Take for example the task of clicking on a dynamically-discovered list of items one after the other. This would normally be a job for a loop. If you Google “protractor loops”, you will find discussions in which one or two very smart people talk about how loops in Protractor are a brilliant use-case for closures in JavaScript, and everyone else seems bewildered and unhappy.

Returning to the wheel widget. Some considerations about interacting with that widget:

  1. All of the options in the wheel exist in the HTML. The ones way up above and way down below the selected item just happen to be hidden. That means we can’t just find the item we want and tap on it, because Protractor won’t let you simulate a tap that’s off-screen.
  2. Simulating a flick-to-spin, tap-to-stop action seems like it would require very precise timing, so we’ll look for alternatives to that.
  3. In this widget you can advance the wheel one item at a time, in either direction, by tapping on the item that is just above or just below the currently-selected item.

So now we have a well-defined task to implement in Protractor: tap on all the items in between the currently-selected item and the desired item, then finally on the desired item itself.

Now I know the Protractor style guide authors say to “NEVER use xpath”, but in this case I think XPath is exactly the right tool for the job. It provides all the pieces we need to accomplish this task. The following-sibling and preceding-sibling axes allow us to do tasks like “get all the items after the currently selected one” or “get all the items before the desired one”. And XPath supports set operations like union and intersection. Given those capabilities, we can do this:

xpath intersections and unions

If you build up the XPath expression to do these selection, union, and intersection operations, and assign it to a variable named “steps”, then all you need in Protractor is this, simple and friendly for future maintainers:

element.all(by.xpath(steps)).each(function(item) {
  item.click();
});

I do agree with the Protractor style guide authors when they say that XPath expressions can be difficult to read. But since we know how to do union and intersection constructions in XPath*, you can write little union and intersection functions in JavaScript and use them to build up the complete XPath expression in a readable step-by-step fashion.

function getWheelSteps(wheel, desired) {
  var wheelXPath = '//*[contains(@class,"mbsc-sc-whl") and @aria-label=' + wheel + ']';
  var selectedItemXPath = wheelXPath + '//*[contains(@class,"mbsc-sc-itm") and @aria-selected="true"]';
  var desiredItemXPath = wheelXPath + '//*[contains(@class,"mbsc-sc-itm") and @data-val=' + desired + ']';
  return xPathUnion(
    // desired is below selected on wheel
    xPathIntersection(
      selectedItemXPath + '//following-sibling::*',
      xPathUnion(desiredItemXPath + '//preceding-sibling::*', desiredItemXPath)
    ),
    // desired is above selected on wheel
    xPathIntersection(
      selectedItemXPath + '//preceding-sibling::*',
      xPathUnion(desiredItemXPath, desiredItemXPath + '//following-sibling::*')
    )
  );
}

Finally, here is the code in action, entering breakfast in a food journal:


* Union of nodesets $ns1 and $ns2 in XPath:

$ns1|$ns2

Intersection of nodesets $ns1 and $ns2:

$ns1[count(. | $ns2) = count($ns2)]

Making a Tartan with LibGD

Every now and then I need to write a program to generate an image. Maybe it’s because the image is naturally an algorithmic one, like a fractal. Or maybe it’s because the image is fiddly to put together, and there are details I may need to tweak, and I don’t want to have to make the image by hand over and over. Whatever the reason, for tasks like these I am quite fond of the GD Graphics Library.

My partner and I wanted to make matching t-shirts for St. Patrick’s Day. Well, not quite matching: her family name is solidly Irish, and I was born in the U.K., so we used the classic “Kiss Me” slogan for her shirt and a slight variant for mine:

T-shirts for St. Patrick’s day

I wanted a tartan pattern in the word “Irish”. Because her surname is a common one, there are many different tartans for different branches of the family. I chose instead to use a special tartan designed “for all those of Irish descent at home in Ireland and around the world.”

Irish Diaspora tartan

The Irish Diaspora tartan

Making this tartan is a super-simple coding task in GD. The stripe colors and widths are read from a text file, then two 4×4 tiles (one for the horizontal stripes, one for the vertical stripes) are used to draw them. The tiles have a transparent background color so that the stripes in the tartan paint over top of one another properly.
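
Here is the idea in miniature, with Pillow standing in for the GD calls (the 4×4 mask pattern, the colors, and the widths are all illustrative; the real program reads them from the input file):

from PIL import Image, ImageColor

# (color, width) stripe pairs; the whole point of the program is that the
# horizontal and vertical sequences are allowed to differ.
h_stripes = [("#006633", 6), ("#ffffff", 2), ("#ff7900", 2), ("#000066", 22)]
v_stripes = list(h_stripes)

# 4x4 twill mask: "1" means the horizontal thread shows on top at that pixel,
# so crossing stripes interleave like woven cloth. The pattern is illustrative.
MASK = ["1100",
        "0110",
        "0011",
        "1001"]

def stripe_color(stripes, pos):
    sett = sum(width for _, width in stripes)
    pos %= sett
    for color, width in stripes:
        if pos < width:
            return ImageColor.getrgb(color)
        pos -= width

size = sum(width for _, width in h_stripes)
img = Image.new("RGB", (size, size))
for y in range(size):
    for x in range(size):
        horizontal_on_top = MASK[y % 4][x % 4] == "1"
        img.putpixel((x, y), stripe_color(h_stripes, y) if horizontal_on_top
                     else stripe_color(v_stripes, x))
img.save("tartan.png")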

Here is the Python code and the input file giving the stripe colors and widths. Call it like this:

python tartan.py < irish_diaspora_pattern.csv > tartan.png

That image tiles nicely, and you can create additional input files to produce any number of different tartan designs.

Why didn’t I just use a tool like the online Tartan Designer? That was an interesting lesson in flag etiquette and small details.

Have a peek at these sites: Heritage of Scotland, Alexis Malcolm Kilts, Nicolson Kiltmakers. Notice the three thin stripes in the colors of the Irish flag: they go green-white-orange left to right, but orange-white-green top to bottom. None of the tartan-making websites let me specify a different ordering of colors for the horizontal and vertical stripes. “Darn,” I thought, “I’ll just have to write a tartan-making program myself to get that detail right.”

Flags right and wrong

A commenter on the blog Broadsheet cites the correct way to display a flag vertically.

Turns out I should have checked more official sources first, like the Scottish Register of Tartans. There, the stripes go green-white-orange top to bottom. How am I sure that’s the correct order? The government of Ireland publishes a guide to the history and correct display of the flag: “the green should be … uppermost in the vertical position.”

So I could have just used the online Tartan Designer after all! Nevertheless, perhaps my tartan-making program can be useful to someone as a cute little demo / tutorial of using GD in Python.

Map Technology Then & Now, Part 3: House Hunting

Continuing to chronicle my fascination with maps and computers…

In my last two posts I described past projects involving computers and maps: creating a game for the Commodore 128, and later creating a website to plot the progress of my running group in a virtual cross-country run. In this post I will describe how two free tools — Google Earth and W. Randolph Franklin’s PNPOLY function — helped me in a very practical task: finding a place to live.

When I was preparing to move to Ottawa in 2009, I looked at the rental listings on Craigslist and Kijiji. My partner knew which neighbourhoods we should consider — she had lived in Ottawa before. But how could I know which rentals were in those neighbourhoods without clicking through all of the listings, looking at the little maps one by one?

Neighbourhoods sketched using Google Earth.

Step 1: Using Wikipedia’s list of Ottawa neighbourhoods, I drew some polygons in Google Earth and saved them as KML files, a handy text-based, human-readable format. Yes, you can draw shapes right on top of the map in Google Earth! A good tutorial is here.

Step 2: Using the Universal Feed Parser Python module, read the Craigslist and Kijiji rental listings. Both websites are consistent in how they display addresses, so it’s not hard to grab the address out of each listing. Run the addresses through Google’s geocoding service.

If you’ve never seen Google’s geocoding service in action, try clicking this link: http://maps.googleapis.com/maps/api/geocode/json?address=181+Queen+Street,+Ottawa,+ON&sensor=false&region=ca

Send an address (in this case, CBC Radio’s offices in downtown Ottawa), get back a latitude and longitude. Such a simple yet powerful service!
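
In Python that call is one HTTP GET plus some JSON digging. A sketch using the requests library for brevity (note that today the service also wants an API key):

import requests

def geocode(address):
    resp = requests.get(
        "http://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "sensor": "false", "region": "ca"},
        timeout=10,
    )
    results = resp.json().get("results", [])
    if not results:
        return None
    location = results[0]["geometry"]["location"]
    return location["lat"], location["lng"]

print(geocode("181 Queen Street, Ottawa, ON"))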

Putting it all together.

Step 3: I had the outlines of neighbourhoods and I had the locations of individual rental listings. How to sort listings into neighbourhoods? That’s where a great snippet of code comes in: W. Randolph Franklin’s PNPOLY function. It’s a 7-line function that tells you whether a point lies within a given polygon.  It’s written in C, but it uses only simple operations so it’s dead easy to translate into the language of your choice.
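
A direct Python translation looks like this (the C original takes parallel x and y arrays; here the polygon is a list of (x, y) vertices, and the sample rectangle standing in for a neighbourhood is made up):

def point_in_polygon(vertices, x, y):
    inside = False
    j = len(vertices) - 1
    for i in range(len(vertices)):
        xi, yi = vertices[i]
        xj, yj = vertices[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A made-up rectangle standing in for a neighbourhood polygon from the KML,
# as (longitude, latitude) pairs.
neighbourhood = [(-75.77, 45.38), (-75.73, 45.38), (-75.73, 45.40), (-75.77, 45.40)]
print(point_in_polygon(neighbourhood, -75.75, 45.39))   # True: this listing is inside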

A little scripting to glue the pieces together and I had a system to feed me new listings in just the neighbourhoods I wanted. No fancy GIS setup required — just free tools that any hobbyist programmer can put to use.

The happy ending to the story: found a place within 20 min. walk to work, fenced outdoor space for the dogs, big kitchen, covered parking. Thanks Google Earth and W. Randolph Franklin!

Map Technology Then & Now, Part 2: Running Across Canada

Continuing to chronicle my fascination with maps and computers…

In my last post I described digitizing a world map by hand and creating a game for the Commodore 128 in which you chase a bad guy around a scrolling world map. That was in 1990, and I didn’t think much about computers and maps again until 2003, when the leader of my running group came up with a neat idea.

Inspired by the Virtual Australia Race, ultrarunner Ryne Melcher had us submit our training mileage each week, and he tracked in a spreadsheet where each of us was on a virtual cross-Canada route from Newfoundland to British Columbia.

The Yellow Toque

The Yellow Toque

I took the idea further and rigged up a website, virtualraces.org. You could enter your mileage each day, and see where you were relative to the other runners. The runner in the lead got the coveted “yellow toque” icon. It was a cute website, and it kept my running group amused. A German running blog even called it “schönen” — “beautiful”. The ultrarunners soon racked up 7200 km and moved on to a second race, a Virtual Route 66 run following the route of the 1928 race from Los Angeles to New York.

The technologies powering the website were OpenGIS Web Map Services, the Python Imaging Library, and the PROJ.4 cartographic projections library.

OpenGIS web map servers impressed the heck out of me with their simplicity.  Send them a request to list their capabilities, and they send back an XML document describing the maps they can provide. Send them a request for a map (in plain old HTTP GET or POST format) and they send back image data.  And people were offering this amazing service for free!

Virtual Cross-Canada race

I was sort of in the middle of the pack.

I used the Demis map server for the virtual cross-Canada race, and used the Python Imaging Library to paste a start marker, a route line, and runner locations over top of the map. The result… well, it looked pretty snazzy in 2003.  Requesting a map was a little slow, but that was OK: the map only expanded in scope when the leader moved ahead.
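
A GetMap request really is just an HTTP GET with the layers, bounding box, image size, and format spelled out. A modern sketch of the fetch-and-paste step (the server URL, layer names, and marker file are placeholders, not the actual Demis details):

from io import BytesIO
import requests
from PIL import Image

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "Countries,Borders,Coastlines",     # placeholder layer names
    "SRS": "EPSG:4326",                           # plain lat/lon coordinates
    "BBOX": "-141,41,-52,84",                     # roughly Canada: min lon, min lat, max lon, max lat
    "WIDTH": "800", "HEIGHT": "400",
    "FORMAT": "image/png",
}
resp = requests.get("https://example.com/wms", params=params, timeout=30)
basemap = Image.open(BytesIO(resp.content)).convert("RGB")

runner_icon = Image.open("yellow_toque.png")      # placeholder marker image with alpha
basemap.paste(runner_icon, (400, 200), runner_icon)
basemap.save("race_map.png")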

Adding the virtual Route 66 race made things a lot more interesting. Maps from the Demis server were in plain unprojected co-ordinates: you could treat latitude as y and longitude as x.  For the American race I wanted to use the better-looking maps from nationalatlas.gov.  But those maps were in a format called “US National Atlas Equal Area”, or EPSG 2163.  To use them, I had to learn about the mathematical transformation between latitude,longitude and x,y in that type of map.

Lambert Azimuthal Equal Area

From USGS

I found a great introduction to map types from the U.S. Geological Survey, and I found the PROJ.4 project, which offers a command-line program to transform co-ordinates to and from a huge variety of map types.  I had all I needed to take the latitude,longitude locations of cities along the race route and work out the equivalent x,y co-ordinates on the maps from nationalatlas.gov.
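
With the modern pyproj bindings (mentioned below), the same transformation is a few lines; the city coordinates here are approximate:

from pyproj import Transformer

# EPSG:4326 is plain latitude/longitude; EPSG:2163 is US National Atlas Equal Area.
to_atlas = Transformer.from_crs("EPSG:4326", "EPSG:2163", always_xy=True)

route_cities = {
    "Los Angeles": (-118.24, 34.05),   # (longitude, latitude), approximate
    "Chicago": (-87.63, 41.88),
    "New York": (-74.01, 40.71),
}
for city, (lon, lat) in route_cities.items():
    x, y = to_atlas.transform(lon, lat)
    print(f"{city}: x = {x:,.0f} m, y = {y:,.0f} m")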

Virtualraces.org lived on for a couple of years, but as runners lost interest and new services like Google Maps made the maps look awfully dated, I eventually retired the site.

Still, in terms of educational value, virtualraces.org was one of the more useful projects I have done. I have found many uses for PROJ.4 in various projects at work and at home. It can be useful on its own (and language bindings like pyproj make it easy to use in your language of choice) but it can also be found as a component of bigger toolkits like the SpatiaLite database and the Quantum GIS application.  A few uses I have found for it:

  • Drawing regular objects (circles, rectangles) on maps. It’s a pain to try to do that working directly in latitude and longitude. But it’s easy to draw shapes in an appropriate projected co-ordinate system, then transform to latitude and longitude to create a KML layer for a Google map, for example.
  • I once needed to transform location data from Irish Grid format and Ontario Ministry of Natural Resources format to plain latitude and longitude.
  • I transformed a shapefile of census subdivisions to a map format that makes distance calculations easier, to help my partner calculate measures of geographic isolation for her work.

So, huge props to the people behind PROJ.4, SpatiaLite, and QGIS, and the easy-reading tutorial on map types!  You helped me entertain my running group and become my own GIS department in the process.

Map Technology Then & Now, Part 1: Chasing Bad Guys on a Commodore 128

When Google Maps first appeared, I remember reading a popular-press-type article that disparaged geeks for getting so excited about a program that draws maps.  That sentiment made me chuckle, but it also made me realize that I have been intrigued by the combination of maps and computers for a long time.

This is the cover of the March 1990 issue of Compute!’s Gazette, a magazine dedicated to the Commodore 64 and 128 personal computers.  On page 26 is a game called “International Bounty Hunter” in which you, the bounty hunter, pursue a bad guy around a world map, moving from country to country by typing in the name (easy setting) or capital city (hard setting) of an adjacent country.

The world map was too large for the screen, so the map had to scroll as you moved around. The map-scrolling code marked my first foray into assembly language; BASIC was too slow for the task. The map itself was run-length encoded to make it easier for the magazine subscriber to type in.  (Yep, type in: that’s how we rolled back in 1990.)

International Bounty Hunter screenshot

I can still play it in VICE!

It was a fun little game, and playing it so much while debugging had an unexpected side benefit: I dominated capital city trivia questions on my school’s Reach for the Top team.  🙂

That game was the beginning and end of my illustrious career as a game designer.  It was also the last time I looked at computerized maps for a while… until 2003, when I decided to create a Virtual Races website for my running group, and I would get to see how far the map tools available to the hobbyist had come.

The world map that I digitized by hand for the game. Look, it’s the Soviet Union! And two Germanies!

Google Chart Tools in a WordPress Website

A few months ago I had the pleasure of contributing to a website by the Humane Research Council.

The website was to present data on 25 measures of the status of animals in the United States – everything from the number of cats and dogs in shelters, to pounds of meat eaten per person.

This blog post will be about the tools I used in this project and tips I discovered for making them work smoothly.  But first, a glimpse of what the finished website looks like.  On the left, for the particular statistic being displayed, a line chart shows the trend over time, a pie chart shows details for one year, and a drop-down menu allows the visitor to select which year the pie chart shows.  On the right, a stacked column chart shows the trend over time.

 

The HRC had 3 requirements for the handling of the data:

  1. The website authors should be able to collaboratively edit the data, and add to it in future years.
  2. The data should be presented to website visitors as easy-to-read pie charts, line charts, maps, etc.
  3. The visitor should be able to “go behind the charts” and see the original numbers.

I recommended storing the data in a Google Docs spreadsheet and using Google Chart Tools to present it. Meanwhile, they selected WordPress as the software with which to build the website.

Google Chart Tools use JavaScript, so the first challenge I encountered with this setup was that when I included JavaScript in a WordPress post, WordPress would attempt to format the code as text: adding paragraph breaks and generally turning functioning JavaScript into broken code. Fortunately, codex.wordpress.org readily provided a solution: use the Text Control Plugin to turn off WordPress’s automatic text formatting features.

But even the Text Control plugin didn’t seem to defeat WordPress’s desire to turn every ampersand in the JavaScript code into “&amp;”.  Which was a big problem, since I needed to include the public URLs of the Google Docs spreadsheet data in order to create the charts!  I’m still not sure whether I missed a setting in Text Control, but I ended up sneaking in the ampersands like so:

var amp = String.fromCharCode(38);
new google.visualization.Query('https://spreadsheets.google.com/spreadsheet/tq?authkey=CI-z4fsJ'
    + amp + 'range=A1:C5'
    + amp + 'key=0AuCiNv5AqRyIdDhMUGlJY3ZuTzFJTHZkcXcxTWNqcnc'
    + amp + 'gid=17'
    + amp + 'headers=1').send(drawTable);

The next decision to make was where to do the data manipulation when data was to be presented in multiple forms.  For example, a pie chart might show all the responses to a survey question, but an accompanying line chart might collapse multiple responses into a single value to display over time (e.g., “strongly agree” plus “somewhat agree”).  Should I keep one simple table of values in the Google Docs spreadsheet, and manipulate it into different forms with JavaScript?  Or should I create derivative tables in the spreadsheet, and keep the JavaScript simple?  Guessing that future maintainers might be more comfortable doing data manipulation in a spreadsheet than in JavaScript, I chose the latter.

Overall I was impressed by Google Chart Tools.  The charts look great and it took very little time to get up-and-running with them.  They are nicely dynamic: the setColumns() method of the DataView class made it easy to create a chart that switches from showing, say, 2009 data to 2010 data when the website visitor selects a year from a drop-down menu.

The Chart Tools team seems to be very responsive to bug reports.  I found a formatting bug in the pop-up info bubble that appears when you hover over a pie slice in a pie chart. The bug had been reported a day or two earlier by another developer; a few days later, it was fixed.

My one gripe about Chart Tools is that CSS support is inconsistent.  Some chart types, such as the Table Chart, let you assign a CSS class name for header row cells, odd and even row cells, selected row cells, hover row cells, even individual cells.  Other chart types, such as the Pie Chart, make you specify a font name, size, and color separately for the title, legend, pie slices, and tooltips.  That adds up to a lot of edits to do in the JavaScript code if the designer wants to tweak the look of the website when you have 25 pages of charts.  But I’m sure the option to use CSS class names in all chart types will come, in time.

Besides Google Chart Tools, I also used the excellent odometer widget by Gavin Brock wherever HRC wanted to illustrate a rate, such as the number of unwanted pets being put to sleep in shelters each day.  The only hand-made chart on the website was the one below.  It’s a simple layered construction: a white layer at the bottom, a map (png) with people-shaped transparent cutouts at the top, and two orange rectangles sandwiched in-between.  The orange rectangles can be re-sized via JavaScript to turn any number of the little people orange.

You can check out the finished Humane Trends website here.