I’ve been playing video games since – well, since there have been video games.

Sort of.  Technically, the first video game patent was granted in December of 1948 (well before I was born), but I – like most of the world – consider Pong to be the first real video game.  And Pong hit the shelves in 1972.

So I have seen video games come and go, and I’ve played a fair number of them.  I don’t have a favorite (or even a favorite genre), but there have certainly been games I have loved:  The Legend of Zelda, Sonic the Hedgehog, Civilization III, Final Fantasy VII, Half-Life 2.  A few video games have even gone so far as to have had a genuine emotional impact on me (if you know who Aeris or Eli Vance are, you know what I’m talking about).  Others have forced me to think to a degree I never expected from a video game (it took me days to fully unravel and digest the final events of BioShock Infinite).  In fact, I would posit that video games are currently on the brink of (if not in the midst of) a process similar to the one cinema underwent about a century ago, and comic books more recently:  transforming from a diversion into a viable (even respectable) medium for storytelling.

Of course, not all video games tell a story, which should not be considered a shortcoming.  Some – by design – leave you to your own devices (to varying degrees).  This is called ‘nonlinear gameplay’, and it comes in a variety of forms.  My personal favorite is what they call ‘sandbox’ games.  Basically, sandbox games give you a world (or a situation) and a set of rules and then leave you to master your own fate.  Kind of like this series of random and happy events we call “life”.  Minecraft is probably the best known sandbox game on the market today.  It is certainly my favorite.

To be honest, though, I probably never would have played Minecraft if it wasn’t for my son.  He started playing, and since I’m one of those annoying parents that actually likes to engage in activities with my child, I joined in.  I’m glad I did.  And I highly recommend it.

Minecraft has a multiplayer mode, which is easy to access and enjoy with a friend or loved one if you happen to have the console version of the game.  If you have the PC version, multiplayer games can be played over your LAN or by accessing a server.  As you may or may not know, accessing most online servers for gaming is much like entering Mos Eisley.  They tend to be cesspits.  However, on this front Minecraft really comes through for us.  Running your own server is as easy as opening a port in your router and running a simple .exe on a computer.  I have a Minecraft server running on an old computer in my home office, whitelisted so that only my son and his buddies can play on it.  You can also run a Minecraft server on an Amazon EC2 instance or on OpenShift, if you’re so inclined.  When it comes to controlled multiplayer experiences, Minecraft really is the Video Game Promised Land.  And since multiplayer games are collaborative and creative, awesome things can result.


Being a Map Dork, it was only a matter of time before I decided to recreate our town as a playable world for Minecraft.  I had heard rumors of GIS data being translated into Minecraft, so I figured I’d try my hand at it.  What follows is a discussion of the process I eventually went through and a few of the things I learned along the way.  This is not a tutorial, or even a set of instructions.  The first thing I learned is that the process is intensely beholden to the particular location chosen.  As usual, the landscape dictates the rules by which we must engage it.

At first, I went at this task in full Map Dork mode.  I had heard of some instances of this being done – entire countries, in fact.  Great Britain has been done, as has Denmark.  I was also aware that Minecraftery was possible using FME, and a minimal amount of searching led me to detailed instructions for doing so.  So I amassed a bunch of data, fired up FME and had at it.

The results were less than satisfactory.  I cannot say whether this was due to a shortcoming on my own part, a drawback of the software, or simply crossed purposes.  I initially suspected the first, but I now lean toward the last.  Looking at the models of Denmark and Great Britain, as well as the FME directions, it would appear that the GIS approach to Minecraft is to use it for building scale models, ostensibly for municipal planning and similar nefarious purposes.  I, however, wanted to build a virtual playground for my son and his friends, modeled on our town – something they could virtually walk around in and explore, not just look at and admire.  Which led me to conclude that the GIS approach would not get me where I wanted to go.  Another pitfall of the GIS approach is the assumption that most (if not all) of the cartographic heavy lifting can be done procedurally.  Some jobs are best done by hand (it is worth noting that in the end this project required a rather large commitment of time and energy on my part.  Of course, in my experience, dedicating enormous amounts of time and energy to pastimes that only marginally interest you pretty much defines ‘parenthood’).  So I abandoned my usual modus operandi and looked instead for a Minecraftian approach.

There are a variety of Minecraft world building/editing programs out there (you can find a good list here).  The one I eventually decided to use is called WorldPainter, mainly for two reasons:  it allows for importing heightmaps, and it allows for semitransparent overlays on top of said heightmaps.  In total I used three programs (four if you count Minecraft itself) for Minecrafting Greenfield:  Quantum GIS, Photoshop (simply a matter of personal preference – any image editor/manipulator would probably suffice), and WorldPainter.  First, I loaded GIS data into Quantum GIS, tweaked it to my purposes and exported it as various rasters.  Next, I used Photoshop to alter and/or combine the various rasters for use in WorldPainter.  Lastly, I either imported the rasters directly into WorldPainter or used them as overlays for guiding further landscape manipulation.  WorldPainter exports Minecraft worlds directly into the folder Minecraft looks to for saved worlds.

Maps are often built in layers, and the order of the layers is both logical and important.  Usually when I make a map I start from the bottom and work my way up.  As an (oversimplified) example, a typical map would start with topography (the bare shape of the land), overlain by hydrography (lakes and streams), then roads and buildings.  Lastly, I might put on some borders (if appropriate) and labels.  In this way features are layered as they are in the real world (we tend to put our roads on bridges over streams) or in the fashion that makes most sense for the map (borders and labels overlying the features they inform or delineate).

Minecraft, however, has its own set of rules.  It has physics (of a sort), but physics only vaguely similar to the physics that operate in reality.  For example, real water flows downhill, follows the path of least resistance, and seeks its own level.  Minecraft only really understands the last of these.  If you tell Minecraft that there is a stream halfway up a mountain, it will want to fill the entire landscape up to that level.  It views your ‘world’ as a flat plain with bumps on it, not as a small portion of a quasi-spherical planet.
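A toy sketch makes the difference concrete (this is entirely my own illustration in plain Python, not Minecraft’s actual logic): treat the world as a grid of surface elevations, and a water “level” floods every cell below that level – which is exactly why a stream placed halfway up a mountain wants to drown the whole valley.

```python
# Toy illustration (not Minecraft's real code): a heightmap is a grid of
# surface elevations.  Setting a water level floods EVERY cell whose surface
# sits below that level, regardless of where you wanted the stream.

def flood(heightmap, water_level):
    """Return a grid marking which cells end up underwater."""
    return [[surface < water_level for surface in row] for row in heightmap]

heightmap = [
    [90, 60, 30],   # a slope from mountain (90) down to valley floor (30)
    [80, 50, 20],
    [70, 40, 10],
]

# Try to put water at elevation 55 (a "stream" halfway up the slope)...
wet = flood(heightmap, 55)

# ...and everything below elevation 55 floods, not just the spot we wanted.
for row in wet:
    print(row)
```

There is no notion of flow here at all – only a level – which is the heart of the problem described above.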

An important point to remember is that I was not trying to replicate our town.  We’re dealing with Minecraft here, which can be fairly well described as the LEGO of video games.  My goal was to create a recognizable representation of the town – a Close Facsimile Thereof.  So don’t get upset when I play fast and loose with data.

All the data I used for this project came from MassGIS and OpenStreetMap – my two go-to sources for geospatial data.  I started with a basic heightmap I had generated myself using 1:5000 contours from MassGIS.  I used this heightmap mainly because I already had it; in fact it was far more detailed than I needed for this project.  A NED DEM at virtually any resolution would probably suffice (but you might as well get the best resolution you can get your hands on, because why not?).  To complete my initial basemap I also used a hydrography polygon shapefile (from MassGIS) and a 2D shapefile of building outlines (this one from OpenStreetMap.  MassGIS has a fine one, but OSM tends to have more current data, and there are a couple of new buildings in town I wanted to be sure to include).

I know.  It sounds strange that I’m including buildings at this point.  Bear with me.  WorldPainter works best if we give it as much three-dimensional information as we can from the outset.

A Minecraft landscape has a maximum height of 256 blocks, and a Minecraft block represents roughly one cubic meter.  Luckily, my neck of the woods is hilly but not very high.  The change in altitude in my area of interest is just over 300 meters, which is close enough to Minecraft’s 256 that I can call it even with a straight face (remember – we’re just building an approximation here).  After loading my chosen DEM into QGIS, I went with this ‘scale’ by zooming in to an area roughly 10 kilometers square.  I then created a new Print Composer, set the size to 210 mm x 210 mm and set the resolution to 1200 dpi (this makes for rather large exported images, but when it’s all said and done they scale to roughly 1 pixel = 1 meter).
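The arithmetic behind those choices can be sketched in a few lines of Python (the figures are the ones given above; the function name and the clamping are mine):

```python
# Back-of-the-envelope math for the scale decisions above.

MAX_BLOCKS = 256          # Minecraft's vertical limit, ~1 m per block
relief_m = 300            # approximate elevation change across my area

def elevation_to_gray(elev_m, min_elev_m=0, relief=relief_m):
    """Map a real-world elevation onto the 0-255 range of an 8-bit heightmap,
    clamping anything outside the expected relief."""
    g = round((elev_m - min_elev_m) / relief * 255)
    return max(0, min(255, g))

# Print Composer math: 210 mm at 1200 dpi...
pixels = 210 / 25.4 * 1200
print(round(pixels))       # ~9921 px on a side

# ...covering a 10 km square, so each pixel works out to about a meter:
print(10000 / pixels)      # ~1.008 m per pixel
```

The 300 m relief gets squeezed into 256 gray levels, and the 210 mm / 1200 dpi export lands within a percent of the 1 pixel = 1 meter target – close enough for an approximation.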

I decided to deal with the aforementioned hydrography problem by simply cutting water features more deeply into the landscape (I was afraid that this would result in ridiculous chasms in the higher elevations of the project area, but the effect turned out to be less severe than I had expected.  Besides – we are talking about a video game here).  I opened my hydrography polygon layer in QGIS and created a smaller version of it by negatively buffering it.  I then exported images of each hydro layer in turn.  Opening the DEM in Photoshop, I brightened the whole thing slightly so that the hydro layers could dominate the lower end of the spectrum.  I set the image of the original hydro layer to an RGB slightly darker than the lowest end of the DEM, the buffered hydro layer slightly darker than the original hydro layer.  I then merged the three layers into a single landscape.  For a final touch I put a bit of a Gaussian blur over the whole thing, to keep WorldPainter from building cliffs at every elevation change.
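Here is roughly what that Photoshop workflow amounts to, sketched in plain Python (all the names and drop values are mine, and a crude box blur stands in for the Gaussian – the real work was done with layers and brightness adjustments, as described above):

```python
# A sketch of the hydro trick: water cells get pushed below the surrounding
# terrain, the negatively-buffered "core" of each water body deeper still,
# and a blur softens the edges so WorldPainter doesn't build cliffs.

def cut_water(dem, water, core, shore_drop=4, core_drop=8):
    """Lower the DEM wherever there's water; buffered core cells go deeper."""
    out = [row[:] for row in dem]
    for y, row in enumerate(dem):
        for x, h in enumerate(row):
            if core[y][x]:
                out[y][x] = h - core_drop
            elif water[y][x]:
                out[y][x] = h - shore_drop
    return out

def box_blur(dem):
    """Crude stand-in for a Gaussian blur: average each cell with neighbors."""
    h, w = len(dem), len(dem[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cells = [dem[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(cells) / len(cells)
    return out

dem   = [[50, 50, 50, 50] for _ in range(4)]    # flat terrain
water = [[False, True, True, False] for _ in range(4)]   # water-body footprint
core  = [[False, False, True, False] for _ in range(4)]  # buffered interior

channel = box_blur(cut_water(dem, water, core))
```

The result is a channel with a deeper centerline and softened banks instead of a sheer-walled trench.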

Lastly, I put every building in town on the DEM.  I didn’t want them all to just be flat, one-story affairs, so I first separated them out by number of floors.  Luckily, there are no buildings in our town taller than 4 floors, and the only 4-story buildings are on Main Street.  So I used QGIS to separate the buildings into separate layers of 1, 2, and 3 floors.  I further subdivided these into flat-roofed or sloped-roofed layers (Main Street I did entirely by hand, since I wanted it to be more easily recognizable to the kids.  Besides, it’s only one road barely over a kilometer long).  Since the buildings rest upon various elevations (rather than a fixed one), I used QGIS to export a simple black and white image of each layer in turn.  These I placed over the base DEM in Photoshop, using the building images to select the buildings.  I then used that selection to copy the appropriate portion of the original DEM, which I then pasted onto a new, blank layer.  I then adjusted the brightness of the new layer according to the size of the buildings (at the scale I was working at, I found a Brightness of +8 worked for single floor buildings, +15 for two-story buildings, +21 and +28 for 3- and 4-floor buildings, respectively).  I further used Photoshop’s blending options to apply a radial gradient to the sloped buildings, giving them an appearance of peaked rooftops (I use ‘appearance of’ only in the grossest sense.  When you only have blocks to work with, you can only get so close to an actual ‘slope’).
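In spirit, the building pass boils down to something like this (a Python sketch; the offsets are the Brightness values quoted above, treated directly as added height, and the function and table names are mine):

```python
# Each building footprint gets lifted above the underlying terrain by an
# offset keyed to its floor count -- the Brightness adjustments from the
# Photoshop step, expressed as blocks of added height.

FLOOR_OFFSET = {1: 8, 2: 15, 3: 21, 4: 28}

def raise_buildings(dem, footprints):
    """footprints: list of (x, y, floors) cells to lift above the terrain.
    Buildings ride on whatever elevation the terrain already has there."""
    out = [row[:] for row in dem]
    for x, y, floors in footprints:
        out[y][x] = dem[y][x] + FLOOR_OFFSET[floors]
    return out

dem = [[100, 100], [100, 100]]
town = raise_buildings(dem, [(0, 0, 1), (1, 1, 4)])
# town[0][0] is now 108 (a one-story building);
# town[1][1] is 128 (a four-story Main Street building)
```

Because the offsets are added to the local terrain rather than to a fixed base, buildings on a hillside keep their footing – which is exactly why the copy-from-the-original-DEM step was necessary.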

Once I had all of these layers situated, I stacked them all up in Photoshop and merged them into a single basemap (I also resized all the rasters to 9984 x 9984 px in Photoshop.  WorldPainter behaves much better if you give it rasters of a size divisible by 128.  I could have achieved the same thing by adjusting options in the Print Composer of QGIS, but I didn’t realize the need until I had everything in Photoshop).

DEM with buildings

I then imported this combined raster as a heightmap into WorldPainter.  I set the water level to 0 (it defaults to 62, but I wanted a dry map, planning to add the water manually in the following step).  I also removed the noise and beaches, and I deleted all the materials but Grass, which I switched to Bare Grass (overwhelmingly, the area I chose to build is forested, but the forests would come later.  As a secondary choice, I figured universal lawn as a starting point made sense for this project).


Yes – I now had a town consisting of buildings made out of grass-covered dirt.


But that’s okay.  I had plans for them – plans I would set in motion just after I finished the water, which is up next.

Believe it or not, the basemap was the easy part.  From then on I had to do most of the heavy lifting myself.  But we’ll get to that next time.


Field Map

The overwhelming majority of my professional choices have been heavily impacted by my love of the outdoors.  I am at my happiest in remote locations, far from the presence of other humans.  This has led to fields like forestry and archaeology and even GIS.  But when it comes to GIS, I’m more of a Field Operative than anything else.  By that, I mean I am more likely to find myself out in the field gathering and/or verifying data than I am to be at a computer manipulating data gathered or created by someone else.  Because of this, field maps have always figured prominently in my tool kit.

For a long time, paper maps more than sufficed (still do in a pinch), mainly because they were the only show in town and therefore had to.  Truth is I still don’t go into the field without a paper map and compass, if only because their batteries never wear out and the only GPS fix they require is the one in my head.

More recently, I expanded my tool kit to include some sort of hand-held unit, usually a Garmin.  It wasn’t long before I figured out how to plug my own custom maps into these devices (although I cannot now remember the software I used to do so), neatly laying the information I needed over the pre-existing base maps built in to the devices.

Then smartphones came along.  At first glance you’d think such devices would be tailor-made for my purposes, but it turned out to be otherwise.  Like much of our world, smartphone apps are driven by the market, and a ridiculously high percentage of smartphone users have absolutely no use whatsoever for custom field maps (they do, however, really need to know where the closest cup of coffee is).

So I set out to write my own.  It turned out to be much easier than I had envisioned.

Besides the ability to use my own maps, the next biggest requirement I had was the ability to work offline.  There isn’t much connectivity in the kinds of places where I need field maps, so I needed the ability to store the maps on the device.  Google Maps allows for local caching (for later offline use), but I’m an archaeologist.  I need a field map that looks less like this:

Google Map

And more like this:


Not surprisingly, Google Play is not overflowing with apps along these lines.  I can’t imagine they’d be big sellers.

Which left me with only one recourse.  What follows is how it went.  This will not be a comprehensive app-writing tutorial.  I simply intend to tell you – in general terms – how I wrote this app, and I will include all the necessary code in case you’re inclined to construct your own.  If you know nothing about app development but want to learn, there are numerous resources available all over the Internet.  Google is your friend.

I use Eclipse when I write apps, which is simply a matter of personal preference.  I mention this only because it means that my description of the process will necessarily be specific to my personal development environment, so if you want to play along you might have to adapt things a bit if your personal preferences differ.

Our target end result is a simple map app with minimal bells and whistles.  I didn’t want to clutter the code too terribly, so wherever I felt a need to explain something, I placed a marker comment saying “Note 1” or similar.  I will address these notes in the body of this post.  Let’s get to it, shall we?

First, you’ll have to create a new Android app.  Call it whatever you want.  I called mine “FieldMap” for obvious reasons.  Before we can start plugging code in, though, we have to import a couple of libraries and add them to our build path.  These libraries are OSMDroid (available here.  I used osmdroid-android-3.0.8.jar) and SLF4J Android (available here.  I used slf4j-android-1.5.8.jar).  Import both of these into your “libs” folder, then right-click on each of them in turn and click on ‘Build Path’ –‘Add to Build Path’.

From here, it’s just a question of plugging the necessary code into the appropriate places.  You should, of course, feel free to alter any or all of it to suit your needs.

Main Activity (MainActivity.java)

package com.fieldmap;

import org.osmdroid.views.MapController;
import org.osmdroid.views.MapView;
import org.osmdroid.views.overlay.MyLocationOverlay;
import org.osmdroid.views.overlay.ScaleBarOverlay;

import android.app.Activity;
import android.app.AlertDialog;
import android.content.DialogInterface;
import android.content.Intent;
import android.location.LocationManager;
import android.os.Bundle;
import android.provider.Settings;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;

public class MainActivity extends Activity {

    MyItemizedOverlay myItemizedOverlay = null;
    MyLocationOverlay myLocationOverlay = null;

    private MapController myMapController;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Check whether the GPS is on (we can't turn it on ourselves -- see the notes below).
        LocationManager service = (LocationManager) getSystemService(LOCATION_SERVICE);
        boolean enabled = service.isProviderEnabled(LocationManager.GPS_PROVIDER);

        if (!enabled) {
            AlertDialog.Builder builder = new AlertDialog.Builder(this);
            builder.setTitle("GPS Disabled")
                .setMessage("You forgot to turn on the GPS.")
                //Note 1
                .setIcon(R.drawable.ic_launcher)
                .setPositiveButton("Fix It", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int id) {
                        // Hand the user off to Location Services to enable the GPS.
                        Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
                        startActivity(intent);
                    }
                })
                .setNegativeButton("Quit", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int id) {
                        finish();
                    }
                })
                .show();
        }

        // Assumes a MapView with android:id="@+id/mapview" in main.xml.
        final MapView mapView = (MapView) findViewById(R.id.mapview);

        //Note 2
        mapView.setBuiltInZoomControls(false);
        mapView.setMultiTouchControls(true);
        mapView.setUseDataConnection(false);

        myMapController = mapView.getController();
        myMapController.setZoom(15);

        myLocationOverlay = new MyLocationOverlay(this, mapView);
        myLocationOverlay.enableMyLocation();
        myLocationOverlay.enableCompass();

        // Center the map on our position as soon as we get a fix.
        myLocationOverlay.runOnFirstFix(new Runnable() {
            public void run() {
                runOnUiThread(new Runnable() {
                    public void run() {
                        myMapController.animateTo(myLocationOverlay.getMyLocation());
                    }
                });
            }
        });
        mapView.getOverlays().add(myLocationOverlay);

        ScaleBarOverlay myScaleBarOverlay = new ScaleBarOverlay(this);
        mapView.getOverlays().add(myScaleBarOverlay);
    }

    @Override
    protected void onResume() {
        super.onResume();
        myLocationOverlay.enableMyLocation();
        myLocationOverlay.enableCompass();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Shut the overlays down in the background -- the GPS is a battery hog.
        myLocationOverlay.disableMyLocation();
        myLocationOverlay.disableCompass();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        MenuInflater inflater = getMenuInflater();
        inflater.inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
        case R.id.menu_center:
            // Re-center the map on our current location.
            myLocationOverlay.runOnFirstFix(new Runnable() {
                public void run() {
                    final MapView mapView = (MapView) findViewById(R.id.mapview);
                    mapView.getController().animateTo(myLocationOverlay.getMyLocation());
                }
            });
            return true;
        case R.id.menu_quit:
            // Back to Location Services on the way out, as a reminder to turn the GPS off.
            Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
            startActivity(intent);
            finish();
            return true;
        default:
            return super.onOptionsItemSelected(item);
        }
    }
}

Note 1:  I include icons in my apps as a matter of form.  It is, however, completely unnecessary.  Whether you include one is up to you.

Note 2:  OSMDroid has built-in zoom controls.  Since pretty much every single device that’s capable of running this app has a touch screen (and it’s safe to assume the person holding the device knows how to use it), I don’t see any reason to clutter up screen real estate with ‘+’ and ‘-’ buttons.  If you would prefer it otherwise, it should be fairly obvious how to bring this about.

The Main Activity calls for an Itemized Overlay to house our compass and scalebar (necessary components of any map).  This particular class doesn’t already exist in the Android SDK, so we’ll have to create it ourselves.  Create the Class inside your package (under ‘src’).

My Itemized Overlay (MyItemizedOverlay.java)

package com.fieldmap;

import java.util.ArrayList;

import org.osmdroid.ResourceProxy;
import org.osmdroid.api.IMapView;
import org.osmdroid.util.GeoPoint;
import org.osmdroid.views.overlay.ItemizedOverlay;
import org.osmdroid.views.overlay.OverlayItem;

import android.graphics.Point;
import android.graphics.drawable.Drawable;

public class MyItemizedOverlay extends ItemizedOverlay<OverlayItem> {

    private ArrayList<OverlayItem> overlayItemList = new ArrayList<OverlayItem>();

    public MyItemizedOverlay(Drawable pDefaultMarker, ResourceProxy pResourceProxy) {
        super(pDefaultMarker, pResourceProxy);
    }

    public void addItem(GeoPoint p, String title, String snippet) {
        OverlayItem newItem = new OverlayItem(title, snippet, p);
        overlayItemList.add(newItem);
        populate();
    }

    @Override
    public boolean onSnapToItem(int arg0, int arg1, Point arg2, IMapView arg3) {
        return false;
    }

    @Override
    protected OverlayItem createItem(int arg0) {
        return overlayItemList.get(arg0);
    }

    @Override
    public int size() {
        return overlayItemList.size();
    }
}

And that pretty much concludes the heavy lifting.  Now all we have to do is tweak a few things.  The drawable folders can be completely ignored if you’re so inclined.  They will already contain launcher icons provided automatically by Eclipse (unless you chose to replace them with your own), and the folders need no other attention unless you want to include your own graphics (see Note 1 above).  We will need to address the layout, as follows:

Layout (main.xml)

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity" >

    <org.osmdroid.views.MapView
        android:id="@+id/mapview"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

</RelativeLayout>



Also, we need to configure a menu:

Menu (main.xml)

<menu xmlns:android="http://schemas.android.com/apk/res/android" >

    <item android:id="@+id/menu_center"
        android:title="@string/menu_center" />
    <item android:id="@+id/menu_quit"
        android:title="@string/menu_quit" />

</menu>


And we also need to give some values to our strings (under ‘values’):

Strings (strings.xml)

<?xml version="1.0" encoding="utf-8"?>
<resources>

    <string name="app_name">Field Map</string>
    <string name="action_settings">Settings</string>
    <string name="menu_center">Center Map</string>
    <string name="menu_quit">Quit</string>

</resources>


Lastly, we have to alter the Manifest.  Eclipse will automatically target the latest Android API, but doing so produces a minor clash with OSMDroid.  This issue arises when we overlay the compass and scalebar.  In order to work around the issue, we simply have to tell the app to use an older API.  It’s a simple matter of changing the target SDK to ‘11’.

Manifest (AndroidManifest.xml)

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.fieldmap"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:targetSdkVersion="11" />

    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

At this point you have a functioning map app, although it currently lacks a map.  In fact, if you changed one line of the Main Activity (just after Note 2) from “mapView.setUseDataConnection(false);” to “mapView.setUseDataConnection(true);”, you would have a fine app that pulls map data from OpenStreetMap (you could even pull your map tiles from other sources, like MapQuest).  However, our original intent was to use this app offline, so we now have to give ourselves some local map data, for which we will turn to one of my favorite tools, TileMill.  I won’t go into the mechanics of map production using TileMill – there are plenty of good resources over at MapBox.

Once you’ve made your map, it’ll be time to get it into the app.  My initial searches on the subject eventually led to this post, which led me to believe such a thing was possible.  It looked as though it may be a little tricky, but I assumed I could beat my head against it long enough to figure out how to make it happen.

This turned out not to be the case.  In truth, it turned out to be rather simple.  In fact, I’d go so far as to call it stupid simple.  It goes like this:

Connect your device as a hard drive to the computer housing the map you exported from TileMill (the .mbtiles file).  On your Android device, navigate to the SD card (/mnt/sdcard).  Create a folder on the SD card called “osmdroid” (without the quotes).  Place the .mbtiles file in the osmdroid folder.
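If your device mounts as a drive, those steps are easy enough to script.  Here is a minimal Python sketch – the function name is mine, and the /mnt/sdcard path is simply the mount point mentioned above (adjust for your device):

```python
# Copy a TileMill .mbtiles export into the folder osmdroid scans for
# offline map archives (an "osmdroid" folder at the root of the SD card).

import os
import shutil

def install_map(mbtiles_path, sdcard="/mnt/sdcard"):
    """Create the osmdroid folder (if needed) and drop the map archive in it."""
    dest = os.path.join(sdcard, "osmdroid")
    os.makedirs(dest, exist_ok=True)
    shutil.copy(mbtiles_path, dest)
    return os.path.join(dest, os.path.basename(mbtiles_path))
```

Running install_map("greenfield.mbtiles") against a mounted device would land the file exactly where the manual steps above put it.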

And you’re done.  The app will now see and play nicely with your TileMill-generated map.  See?  I told you it was stupid simple.

But wait – there’s more.  And it gets even better.  Let’s say you need detailed maps of two different and geographically divergent areas in one day.  The logical conclusion is that you’d have to create a large map encompassing both areas.  While this would indeed suffice, the resulting map would likely be far larger than you’d like to stuff into a mobile device.  So it’s a good thing you don’t have to.  If you instead make two separate maps of the different areas, the app will automatically use the one closest to your geographic location when you start the app.  It will not, however, switch between maps as you travel between them (it stays on the one you started with).  This is easily corrected by stopping and restarting the app (besides – just use Google Maps if you need help getting from A to B).

A couple of notes on the app itself:  Android does not allow for programmatically starting a device’s GPS (for security reasons).  Because of this, the app checks at startup to see whether the GPS is on.  If it isn’t, the app offers to take you to Location Services to turn it on.  Also, if you exit through the app (Menu–Quit), it automatically takes you back to Location Services to remind you to turn the GPS off (it’s a serious battery hog).  The other button on the menu centers the map on your current location (in case you lose your way while looking around).



Popple

As a diehard lover of old maps, I have been especially excited by Map Dorkia’s recent rediscovery of the charm of bygone cartography.  I came to GIS via history and archaeology, so my generalized love of maps stems from an earlier, more specific love of old maps.  I think this also accounts for the fact that while maps come in many shapes, I am most fond of the ones that depict an actual physical landscape.  So I was thrilled a short while ago when Kartograph arrived on the scene (skillfully showcasing a stunning map of Italy), and I was equally happy when MapBox debuted AJ Ashton’s “Pirate Map”.

While both these maps are quite beautiful and therefore just plain nice to look at, they seem to be laboring under the misconception that beauty was the only strong point of old cartography.  Because of this, they are missing an important point (in my opinion the most important point) of old maps.

This shortcoming was perfectly illuminated by my wife when she entered my office just after I encountered the aforementioned Kartograph map of Italy.

“What’s that?”  she asked.

“A very pretty map,” I answered.

“It is pretty,” my wife agreed.  “But there isn’t very much information on it, is there?”

As usual (although don’t tell her I said so) my wife was exactly correct.  The map sure is pretty, but it conveys very little information (the same can be said of the “Pirate Map”).  As I’ve said before, the whole point of a map is to convey information.  Beauty can certainly serve as part of the means toward this end, but we should be careful to remember that it is not the end in and of itself.

Take a look at this map, created in 1733 by Henry Popple (click on the link or the image to see the whole map at the Rumsey Collection):


As easy to read as it is easy to look at, isn’t it?  Anyone have any trouble figuring out where the forests are?  The mountains?  Delightfully easy to tell the difference between large cities and small towns, isn’t it?  The only issue of any sort I can find with this map is that it’s a bit overcrowded in the label department.  This, however, is simply a shortcoming of the medium, and one which we can easily overcome.  More on this in a bit.

Modern maps, on the other hand, tend to be much less easy to read.  Often, features like forests are depicted with colored blobs, the explanations of which are necessarily supplied in a legend of some sort (clearly, if we’re lucky.  If we’re not lucky, we may waste our time trying to differentiate between the green blobs that are forests and the blobs of a slightly different shade of green that are parks).  While techniques such as this do (technically) suffice, they are rather inelegant.  I have long felt (and often remarked) that every entry in a legend represents a failure on the part of the cartographer.

The ideal solution lies in a mixture of modern data and tools coupled with classical cartographic techniques.  Why represent a forest as a green blob instead of populating it with trees?  The capability certainly exists – it’s easy enough to convert a forest polygon into a teeming mass of randomly placed points.  Points which can then be represented graphically as trees.
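For the curious, here is one way that conversion can work – a hedged sketch in plain Python (the real work was done with GIS tools; the names are mine) using rejection sampling over the polygon’s bounding box with a standard ray-casting point-in-polygon test:

```python
# Turn a forest polygon into a "teeming mass of randomly placed points":
# sample random points from the bounding box and keep the ones that land
# inside the polygon.  Each surviving point becomes a tree symbol.

import random

def point_in_polygon(x, y, poly):
    """Standard even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def scatter_trees(poly, n, seed=0):
    """Drop n random 'tree' points inside the polygon (seeded for repeatability)."""
    rng = random.Random(seed)
    xs, ys = zip(*poly)
    trees = []
    while len(trees) < n:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            trees.append((x, y))
    return trees

forest = [(0, 0), (10, 0), (10, 10), (0, 10)]  # a toy square "forest"
trees = scatter_trees(forest, 100)
```

Rejection sampling is wasteful for long, thin polygons (most of the bounding box misses), but for the blocky forest stands on a typical map it gets the job done in a few lines.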

But then we run into the problem of numbers.  A forest measuring in acres (or even square miles) can take a lot of points to fill.  This means a boat-load of tiny little pictures of trees that need to be drawn.  The problem isn’t so bad if we’re just producing a static map – then it’s just my machine that has to do all the work, and it only has to do it once.  But what if we want to serve this up as a web map?  Then we’re talking about rendering insane numbers of little trees at both ends of the exchange.  A ridiculous expenditure of resources under the best of circumstances.

Until TileMill came along, that is.  I’ve talked about TileMill before, and it’s long past time I revisited the subject.  When last we discussed it, TileMill only ran on Apple and Linux machines, and it was no small task to get it up and running (and it tended to be a little buggy at that).  Now it’s a simple matter of searching the Ubuntu repositories (for Linux) or just downloading an installer (for Windows).  I assume it’s equally simple to set up TileMill for OS X.  And today the bugs are few and far between (and tend to be squashed with great alacrity and enthusiasm).

Using TileMill, we can take current data and old cartographic techniques and join them into an accurate, attractive and (above all) easy to read basemap:


Do click on the map and check out the fully functional version.  Please be aware that I created this map simply as a proof of concept, so there are a few limitations:

1) The map is running on a crappy little server on the floor in my office, and it’s being hosted on a stupid little web page my ISP provides free of charge.  So please don’t hit it too hard.  [Update:  After posting this last night, I was contacted by Eric from MapBox (thanks, Bill).  Eric offered me a sweet deal for MapBox hosting, and by sweet I’m referring to another word with a double ‘ee’ (no – not ‘beer’).  While I believe in the abilities of my little server, I am also not stupid.  So I happily took Eric up on his offer.  The upshot of this is that the map is now being served by MapBox and therefore you should all feel free to kick the crap out of it.]

2) Even though I largely confined the map to geographic features, we’re still talking enormous amounts of data here.  Because of this, I restricted the map to 7 levels of zoom and a geographic area that represents roughly the central third of Massachusetts.  And we’re still talking an MBTiles file that takes up well over a gigabyte.

3) I am certain there are numerous errors on the map.  If I had been making this map for a client rather than as an experiment, I would have spent more time seeking them out and removing them.

4) I did not (nor do I now) give a crap about the page surrounding the map.  I simply slapped together a fast and dirty container for the map itself, literally in less time than it has taken me to write this sentence.  If you have any criticisms regarding it, know that I am completely and blissfully indifferent toward them.

While I am less than completely pleased with the end result, I am quite happy with what the result represents.  I don’t feel the techniques I employed are terribly far removed from those employed by Popple and his contemporaries (although I did freely use some colors, I could have achieved much the same effect in greyscale), and in the cases where I diverged I’m sure they would have approved (since I have the luxury of scale and zoom control, my ability to comprehensively label a map without overcrowding it would probably make Popple green with envy).  Mostly, I’m pleased by the fact that the map is almost completely self-explanatory.  In fact, the only explanations of any kind on the map are the labels, all of which are simply placenames.

All things considered, I think I’ll put this project in the ‘win’ column.

Notes:  There are a few changes I would make to the map if I were producing it “for real”.  Besides the aforementioned data cleansing, the most important of these would be the labels.  For this project, I baked the labels directly into the tiles, simply because I felt the map needed some sort of labels.  Ideally, though, the basemap wouldn’t contain any labels at all.  Instead, I would apply all labels over the basemap, well after the fact (I’ve heard that CartoDB plans to eventually allow home-brewed MBTiles to serve as basemaps on their platform.  I can’t think of a better scenario for serving up my nefarious schemes).  And I would most certainly include more labels than are currently present on the map (streets, for example).

The “trees” on the map are placed randomly within the confines of forest polygons.  If the polygon data differentiated between deciduous, coniferous and mixed stands, it would be a simple matter for the graphics to represent them accurately.
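Incidentally, that kind of random placement is easy to sketch.  Here’s a rough Python illustration of the rejection-sampling approach – draw candidate points from the polygon’s bounding box and keep only the ones that fall inside.  The triangular ‘forest’ at the bottom is made up for the example:

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon given
    as a list of (x, y) vertices?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Toggle whenever a horizontal ray from (x, y) crosses this edge.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def scatter_trees(poly, count, seed=None):
    """Place 'count' random points inside a forest polygon by
    rejection sampling from its bounding box."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    trees = []
    while len(trees) < count:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            trees.append((x, y))
    return trees

# A made-up triangular stand, 50 'trees':
forest = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
print(len(scatter_trees(forest, 50, seed=42)))  # 50
```

With real data you’d feed in each forest polygon’s ring from the shapefile, and – as noted above – a stand-type attribute could select a deciduous, coniferous or mixed tree symbol for each point.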

Nuts And Bolts:  The data I used for this map came mostly from MassGIS.  I also availed myself of some data from USGS and OpenStreetMap.  The data was tweaked and altered and arranged and (in some cases) soundly beaten using Quantum GIS and ArcGIS Desktop.  Graphics were produced and/or altered using Photoshop and Inkscape.  The font used for the labels is called Essays 1743.  TileMill was (of course) used to bake the tiles.  TileStream is serving the tiles from my hapless little server (it was a bit of a bear to get TileStream up and running.  These instructions by Nathaniel Kelso finally got me where I needed to be).  If I were going to serve up maps like this “for real”, I would avail myself of MapBox hosting rather than using my own server. [Update 2:  I unforgivably forgot to mention Leaflet, which played a major role in the speed with which I was able to deploy my map on a web page.  Since switching the map over to MapBox hosting, Wax has also played a part in the process.]


Kitchen Analogy

I spend much of my time singing the praises of the vibrant and skilled GIS development community.  There are a lot of very smart people dedicating a ridiculous amount of energy to developing new and interesting data manipulation tools on a regular basis.  These tools are usually quite powerful, but they are often not for the faint of heart.  The people building the tools are aware of the skill level of the people using the tools, so they tend to initially focus their energies underneath the hood, only later working toward ease of use when and if time allows.  This tends to result in tools that achieve astounding results, but that are not necessarily user-friendly.

The upside is that we are able to do fun, useful and amazing things with data long before any proprietary software vendor would dare to release the appropriate tools.

The downside is that performing any but the most routine of GIS tasks usually entails the dexterous employment of a MacGyver-like set of skills.  Because of this, we often spend more of our time wrangling software than we do wrangling data.  And while there’s nothing particularly wrong with this, it is difficult to explain to those who do not work in our (or a similar) field.

So allow me to illustrate.  If meal preparation necessitated a process anything like that of the average GIS project, Daddy’s Night to Cook would go something like this:

It begins with a trip to get ingredients, which entails at least half a dozen stops.  This is not because there are so many ingredients, but because any given stop rarely accounts for more than one or two.  When I get home and start putting the ingredients away, I immediately find that the refrigerator isn’t working properly, so I spend some time fixing it.  Afterwards, I realize that I don’t really have the refrigerator I need.  Some of my ingredients need to be stored at different temperatures and humidity levels than the other ingredients.  I read somewhere that the next generation of the refrigerator will accommodate this need, but the new model is only out in beta and I haven’t yet received an invitation to try it out.

A setback, but only a minor one.  After some thought, I journey to the basement for supplies.  A half-hour’s worth of creativity later, a vegetable drawer has been turned into an adequate secondary storage unit through the application of some rigid foam insulation and duct tape.  It’s a temporary measure, but it will probably suffice if I need it again before I get my hands on the next-generation device.

That done, I can set about the task of food preparation.  I begin by chopping up a variety of vegetables, a task best approached with a food processor.  Unfortunately, the food processor really only exists in theory.  Oh – it’s been talked about for a long time, and in theory it’s certainly attainable (at a conference earlier this year a few guys even had a working prototype), but the reality is still a long way off.

Still, I think I can probably simulate it enough to achieve the desired effects.  I start with a simple knife, one of my favorite tools.  The truth of this quickly becomes evident when I start cutting and discover how dull the blade is (an unavoidable byproduct of frequent use).  So I take a few minutes to sharpen my knife.  While I’m at it I sharpen a few more, just for good measure.  Then I proceed to coarsely chop the vegetables in preparation for the next tool I’ll be using.

The blender.  A fine tool by any standard, not least because it is responsible for bringing us the Daiquiri. Unfortunately, it doesn’t actually have the setting I need.  ‘Mix’ is too coarse, ‘Blend’ too fine.  What I really need is something halfway between.

A fair amount of head-scratching later, I get an idea.  If I lash a pair of chopsticks together with duct tape, the thicker end will be just wide enough to push both buttons at once.  Then, if I turn the buttons toward the wall, I can butt the blender up against the stove while I wedge the chopsticks against the wall, thereby pushing and holding the ‘Mix’ and ‘Blend’ buttons simultaneously.

This plan works so beautifully that I can’t help but stand back and pat myself on the back as I watch the blender work its magic.  When it’s approaching the texture I desire, I suddenly realize I cannot, in fact, turn the appliance off.  The depressed buttons need to pop out in order for this to happen, and the chopsticks are wedged too firmly in place.  A flash of insight reminds me that I can just unplug the device, and a moment of despair informs me that I am unable to reach the outlet because of the position I have wedged the blender into.

Desperation gives rise to inspiration, and I sprint down the basement stairs and head toward the breaker box.  Halfway across the basement I soundly crack my skull against a low-hanging pipe, but the pipe remains intact and I never actually lose consciousness so all is well.  I reach the circuit breakers and successfully kill the power to the blender, then return triumphantly to the kitchen, stopping only long enough to acquire a pair of wire cutters with which to cut the chopsticks.

The end result is better than I had hoped, and I actually feel pretty smug about it.  I sauté the vegetables very quickly, then drown them in wine for a long, low simmer.

I briefly entertain the notion of going the extra mile and baking a nice dessert.  Unfortunately, all the recipes I can find are metric and Celsius.  Of course, all my measuring devices are in imperial units, and my oven only does Fahrenheit.  I could just make the conversions every damn step of the way, but it seems like a whole lot of extra effort for something that’s not really necessary.  I decide to give it a pass.

The next step is making the pasta.  The recipe calls for fresh pasta, and since I am perfectly capable of making fresh pasta I intend to do so.  For this task, though, I have to use the other kitchen (hey – I’m a food dork.  Of course I have more than one kitchen).  The kitchen I’ve been working in thus far is my primary kitchen – the space where I do most of my cooking.  It’s larger, brighter (it’s got all those windows), and I’m just more used to working in it.  When I need to do something more hard core and technical, though, I have to use the kitchen my wife jokingly refers to as the “Formal” kitchen (because it’s decorated all in black and white, and she thinks it’s like being inside a tuxedo).

So I repair to the Formal Kitchen and set about the task of pasta production.  I mix the dough the traditional way, simply building a ‘volcano’ of flour on a chopping block and cracking an egg into it.  The block is dedicated to this task, and the pasta machine is secured directly to it.  I have a joyful, Zen-like experience making a decent amount of fettuccine, which I place upon my lovely pasta drying rack (a gift from the in-laws) for transport to the other kitchen.

Back in the primary kitchen, it’s time to have at the chicken.  This promises to be the fastest and easiest task, which is why it’s left for last.  It should be a simple process of slicing the poultry into appropriately-sized chunks and quickly frying it up.  Should be a cakewalk and it turns out to actually be one.

Or it would be except for the fact that I read through the rest of the recipe at this point.  Buried deep in the nether regions of the document, it explains that this recipe works equally well with dark or light meat (I’m using breasts).  However, use of lighter meat necessitates accompaniment by thinner, faster-cooking pasta (like spaghetti or linguini).  I sadly look at my rack of fettuccine (by now too dry to just run back through the machine) and sigh.

And so back to the Formal Kitchen I go, this time to make a nice batch of fresh spaghetti.  There’s nothing even remotely Zen-like happening this time around.

But the payoff finally arrives.  I plate the food and it looks spectacular and smells even better.  My mouth waters and I realize that I am famished.  I set the table and go in search of my family.

I find my wife – alone – in the living room, reading.

“Where’s the boy?” I ask.

“I took him to bed hours ago,” she responds, not even looking up from her book.  “It’s almost eleven o’clock, honey.”

“What?  It is?  How did… I mean… aw, crap.  What about dinner?”

“Well, we had to eat and it was getting late,”  she explains.  “So I just ordered pizza from Google.”

Map Ninja

There was a time when the road to Map Ninja-hood was long and perilous.  It entailed arduous journeys to spend years studying with monks in temples on Tibetan mountaintops.  Advancement came through fierce, deadly battles against Map Ninjas who ranked above you, precariously fought in windswept mountain passes or on rickety suspension bridges over precipitous chasms.

Or maybe you just had to be able to map temples on Tibetan mountaintops, windswept mountain passes and precipitous chasms.  But it was still pretty tough.  It used to take some serious hard work and dedication to become a Map Ninja.  These days, though, it’s pretty much a walk in the park.  As I discovered this morning.

This morning, my buddy Jake and I went out into the woods to map some property he owns.  We had done so once before (something like eight years ago).  Since then, one of Jake’s neighbors had had a survey done that disagreed with what we thought about their shared boundary.  Today we went out to see who was right (as it turns out, they were).

The last time we did this, I went prepared with a compass, maps, and a Garmin.  We spent an entire day gathering data.  I then spent hours (possibly even days) squeezing the data out of the Garmin and turning it into something I could use to draw a pretty map.  Today, it went like this:

The equipment I used was much the same.  A compass and maps (items I always bring into the woods with me).  Instead of a Garmin, though, this time I just took along my smartphone, which runs the Android operating system.  I installed two apps on my phone in preparation for today’s festivities.

The first one is called GPS Status and Toolbox.  It’s a very nice little nuts-and-bolts kind of GPS utility (and it also has a donation-ware version that I strongly recommend).  It tells you a ton of stuff about your location and status, includes a compass, and even has a level bubble to help you keep your device perpendicular to the planet when you’re taking a bearing.  It doesn’t allow you to store waypoints, but it does allow you to share your current location.  The second app makes this amount to the same thing as waypoint storage.

The second app is Evernote.  If you’re not familiar with Evernote, you should take steps to correct this.  It has been one of the world’s most useful utilities for years, and taking it along in your pocket makes it virtually indispensable.

Armed thus, we set out for adventure (I also tracked our progress with My Tracks, just because I could).  It was a beautiful day to be out on a mountaintop, and the data acquisition process turned out to be deliriously simple.  I started GPS Status and Toolbox running at the outset.  Any time I wanted to save a location, I simply shared it via Evernote (adding any necessary notes to myself in the process).  Since we were miles away from any form of connectivity, Evernote stored the data locally (i.e., in my phone).

Three hours later, Jake and I climbed into his truck and headed back to my house, armed with our newly acquired data (as well as a bonus slew of geo-located photographs).

This is where it goes from deliriously simple to insanely easy.  When we get back to the house, my phone connects to my wireless network.  Immediately, Evernote syncs from my phone to their servers, which in turn immediately syncs with my computer.  So by the time I walk upstairs and into my office, my computer already knows all the data I gathered in the field (if I had had connectivity while in the field, my computer would have received the data virtually as I recorded it).  I then sit down at my desk and open Evernote.  Each note I recorded in the field contains the latitude and longitude of where I was standing when I recorded it but – thanks to GPS Status and Toolbox – the notes also contain links to the exact locations marked on Google Maps.

So I open a browser and sign in to Google.  Then I go to Google Maps and create a new map under ‘My Places’ called “Jake’s Land”.  Then I click on each link stored in Evernote in turn, adding each one to “Jake’s Land” from within Google Maps.  Inside of five minutes, I have a new saved map containing markers at every point I marked while in the field.

From there it’s a simple matter to export the map as a KML (if you don’t know how to do this, it goes like this:  click on the ‘Link’ icon [the one to the right of the ‘Print’ icon].  A window will open with a link to your map already highlighted.  Copy it and paste it into the location bar of a new browser tab or window.  Add ”&output=kml” [without quotes] to the end of it and hit ‘enter’).
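For the scripting-inclined, the URL trick above is trivial to automate.  Here’s a minimal Python sketch; the share link in the example is made up, and the output=kml parameter is simply what Google Maps honors at the time of writing, so no promises about its longevity:

```python
def kml_export_url(my_maps_link: str) -> str:
    """Turn a Google 'My Places' share link into a KML download
    link by appending the output=kml parameter."""
    sep = "&" if "?" in my_maps_link else "?"
    return my_maps_link + sep + "output=kml"

# A made-up share link, just for illustration:
print(kml_export_url("https://maps.google.com/maps/ms?ie=UTF8&msid=123"))
# https://maps.google.com/maps/ms?ie=UTF8&msid=123&output=kml
```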

And now I have a KML including all the waypoints I chose to record during our outing.  Since we’re dealing with a property boundary here, there is an obvious desire to connect the dots.  There are a variety of methods to employ toward bringing this about.  For the uninitiated, there is the simple expedient of Google Earth.  Google Earth has line and polygon drawing tools built into it, so it’s a quick and easy matter to open our KML file and draw lines connecting our waypoints.  If we prefer, we can draw the entire plot of land as a polygon.

Depending on our mapping needs, we could be done at this point.  We gathered our data, drew our map, and now we have what could be considered a finished product already packed into a portable and easily shareable format (KML allows for a great deal more ‘finishing’ if we so desire).  We could send our KML attached to an email, and any recipient could open it up in Google Earth (or even Google Maps in a browser) and play with it to their heart’s content.  And there are many other applications out there that can read KML (in fact, KML is based on XML.  If you change the name of your file from ‘myfile.kml’ to ‘myfile.xml’, Excel can open it).
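To give a sense of just how little is under KML’s hood, here’s a rough Python sketch that builds a waypoints-plus-boundary KML from scratch using only the standard library.  The placenames and coordinates are invented for illustration; a real run would use the points gathered in the field:

```python
import xml.etree.ElementTree as ET

NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", NS)

# Made-up boundary corners; KML wants longitude,latitude order.
points = [
    ("Corner 1", -72.10, 42.50),
    ("Corner 2", -72.09, 42.51),
    ("Corner 3", -72.08, 42.50),
]

kml = ET.Element(f"{{{NS}}}kml")
doc = ET.SubElement(kml, f"{{{NS}}}Document")

# One Placemark per waypoint.
for name, lon, lat in points:
    pm = ET.SubElement(doc, f"{{{NS}}}Placemark")
    ET.SubElement(pm, f"{{{NS}}}name").text = name
    pt = ET.SubElement(pm, f"{{{NS}}}Point")
    ET.SubElement(pt, f"{{{NS}}}coordinates").text = f"{lon},{lat},0"

# And one more Placemark connecting the dots (closing the loop).
boundary = ET.SubElement(doc, f"{{{NS}}}Placemark")
ET.SubElement(boundary, f"{{{NS}}}name").text = "Boundary"
line = ET.SubElement(boundary, f"{{{NS}}}LineString")
ET.SubElement(line, f"{{{NS}}}coordinates").text = " ".join(
    f"{lon},{lat},0" for _, lon, lat in points + points[:1]
)

ET.ElementTree(kml).write("jakes_land.kml",
                          xml_declaration=True, encoding="UTF-8")
```

Open the result in Google Earth and you get three pins and a triangle – nothing you couldn’t draw by hand, but handy once the waypoint count grows.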

There are a slew of GIS applications that can read and manipulate KML files and – being the Map Dork that I am – I will use one or more of them to trick out our gathered data and produce a final map for Jake.  But the fact is that within a half hour of my arrival home I was easily able to produce a passable map.  Given another hour (and using just a little basic knowledge of KML and HTML) I could have attached a decent amount of bells and whistles to it.  And all without dipping into any of the deeper mysteries of Map Ninjutsu.

Not exactly a pitched battle on a rickety suspension bridge, but hey – I’m getting a little old for that kind of thing anyway.

Father Christmas

I love the holidays, and I always have.  I grew up somewhere between the top of the lower class and the bottom of the middle class, so for me the solstice never had much of a materialistic orientation.  In my life, the holidays have always been about the things we all say they’re about:  Sharing.  Warmth.  Love.

I have to admit, though, that despite my upbringing there comes at least one occasion every holiday season upon which I feel powerfully compelled to slap someone.  This occasion is invariably when some bonehead informs me that they neglected to tell their children about Santa because they “didn’t want to lie to”, or “thought it was important to be honest with” their children.

A laudable goal.  I am a fan of honesty.  In fact, I am excruciatingly honest myself.  I say this not to extol some virtue, but simply as a datum, like the color of my eyes or the length of my hair.  It’s just the way I was raised, and I have very little control over it.  For the most part it’s a good thing, but it has been known to land me in trouble.  You’d be amazed at how many people don’t really want honest answers to their questions.  Seemingly innocent questions, too, like:  “Does this dress make me look fat?”, or “Did I make a fool of myself at the party last night?”

But there’s honesty, and then there’s honesty.  And when people tell me they deny the existence of Santa Claus in order to be ‘honest’ with their children, I know they are lying to me (and not just for the obvious reason).  I also know they are lying to their children.  I know this because their children are happy.   And if their parents really tried to be honest with them, this wouldn’t be the case.  Those children who are playing so happily on the swing set probably didn’t have their parents carefully explain to them that most humans are bad people, that evil invariably triumphs over good, and that – in the Grand Scheme of Things – all humans lead brief, pointless lives in this vale of tears before going on to become worm food.  Brutal honesty has no place in parenting, and most parents thankfully steer clear of it.  So why single out Santa?  Let’s face it – you’re not achieving some kind of moral superiority by denying the existence of Santa Claus.  You’re just being a dick.  And a dishonest dick, at that.

I have, on occasion, been asked whether or not I believe in Santa Claus.  The absurdity of this question baffles me.  Of course I believe in Santa.  I also believe in broccoli, and grout.  You’d be amazed at the number of things I believe in just because they exist.

Although I suppose ‘belief’ isn’t quite the correct word.  I don’t actually believe  in grout – I’m simply aware of its existence.  In much the same way, I don’t ‘believe’ in Santa, as such.  Neither do I ‘believe’ in trees or cars or shoes or rocks or hats or pianos or bubble gum or roller skates or blue jeans or cake or grills or chairs or any of the countless things that straightforwardly are.  Santa Claus simply is, and no one’s belief or lack thereof has any effect on this state of affairs.  Luckily, our opinions about Santa are largely immaterial to him.  The only people whose opinions really do matter to Santa are children, and they are all quite aware of him (and think rather highly of him), regardless of whatever nonsense they hear from their parents.

Truth is, it’s not the thought of lying that bothers these misguided parents so much,  but rather the thought of Santa Claus himself.  Why?  No one quite knows the reason.  It could be their heads aren’t screwed on just right.  It could  be, perhaps, that their shoes are too tight.  It could be their hearts aren’t a large enough size.  Or maybe they’re confusing Santa with some other bearded guy.

But the most likely reason of all is that one year during their childhood they really, really wanted the Captain Plastic Adventure Playset and Santa failed to deliver.  This disappointment led to a fundamental misunderstanding of the nature of Santa, which in turn was probably fueled by a purposeful misrepresentation of Santa by their own parents (not to criticize anyone.  Parenting is hard enough.  If parents want to assign responsibility to Santa for their inability or unwillingness to secure a specific gift, who’s to blame them?).

I think a big part of the problem is that there is so much mythology surrounding Santa that many people get confused and think that Santa himself is mythological.  Nothing could be further from the truth, but on some level the confusion is understandable.  Some of the myths surrounding Santa are pretty far-fetched.  There are people who actually believe Santa is a Christian (please don’t bring up the ‘birthday’ thing.  We all know the chance of December 25th actually being the day on which Christ was born is 1 in 365.  It’s the solstice that’s meaningful to Santa).

And the misconceptions don’t end there.  Children are actually taught to send ‘wish lists’ to Santa, as if he was some sort of mail-order business.  The poor kids are being told that their relationship with Santa is one of supply and demand.  They’re being taught that Christmas is about greed, jealousy and gluttony.  They’re being deceived into thinking that the gifts are a measure of Santa’s love, when in fact they are a token of it.

You see, that is what Santa does:  he loves.  He’s not about gifts.  He’s not about trees or wreaths or toys or mistletoe or milk and cookies or sleighs or carols or silver bells or any of that stuff.  What he is about is love.  The best kind of love – the unselfish, unconditional kind.  The kind of love that is blind to behavior, be it naughty or nice (let’s be clear about this:  Santa does not care whether anybody cleans their room).  The kind of love that is big enough and pure enough and… elemental enough to travel the entire world in one night to deliver gifts to children of all sorts.

And each child receives precisely one present.  Because one is all it takes.  Because the gift lies not in the object, but in what the object represents:

Santa’s pure, unconditional love.

Happy Christmas to all.

Those of you who know me know that I have my share of issues with this whole “Occupy (Insert name of Street or City Here)” movement.  For the most part, my issues stem from a general lack of tolerance for hypocrisy.  I don’t take Teabaggers seriously because they whine about taxes while at the same time complaining about potholes that aren’t getting filled.  For much the same reasons I don’t have much use for people who wear designer jeans and drink Starbucks coffee while they Tweet on their iPhones about the evils of corporate America.

For the most part, though, my problems with OWS are not so much with the message as with the messengers.  While the issues surrounding economic inequality in this country are real and important, I don’t feel the reluctance of the white middle class to repay their student loans ranks terribly high among them.

As time has gone by, though, I find myself less and less enamored of the message behind the protests.  In fact, the entire movement has completely failed to impress me.  This concerned me at first, mainly because I felt I should be impressed.  Economic equality is just the sort of socialist idea I can really get behind, so on the surface it really appeared to be my kind of movement.  But once I looked hard at the movement – looked below the surface – I realized it’s not actually my kind of movement at all.

Why?  Because it’s got no soul.  It’s got no heart.  It is a movement that is incapable of seeing beyond itself.  Or maybe it’s just unwilling to.  It has been called an inherently selfish movement by many (myself included), although it may be more fair to call it ‘self-centered’ or ‘self-absorbed’.

The 99 percent

I’ve heard the arguments – that we should endeavor to look beyond the iPhones and the designer jeans to the message beneath.  That the ‘message’ of OWS is in their words, not their behaviors (any 4-year-old can tell you differently).  Here’s a news flash:  the message is getting out to the world, and it is loud and clear.  But it isn’t necessarily the message OWS thinks it’s broadcasting.  If you bring a gun to an anti-war protest, your message is not one of peace, no matter what you say.

I have repeatedly seen attempts to compare OWS to the civil rights movement, as well as to the anti-war counterculture movements of the 1960s.  All of these attempts have failed, and in their failure they underscore the fundamental shortcomings of OWS.  Its lack of a soul.  Its absence of heart.

First off, let’s dismiss any comparisons to the civil rights movement.  I’m sorry, but placing OWS into the same category with Freedom Rides is almost insulting.  Let’s face facts here, people – those actively participating in the major events of the American civil rights movement were risking a great deal more than a dose of pepper spray.  And while a faceful of pepper spray is not exactly a pleasant experience, in comparison to the civil rights movement participating in OWS is practically risk free.  They also were fighting for rights on a far different level than those claimed by OWS.  They weren’t looking for a bigger slice of the pie – they just wanted to be allowed into the restaurant.  Those occupying Wall Street may argue differently, but in the eyes of the law, the 99% have the same rights as the 1% (in theory, at least).  This was not the case for African Americans well into the twentieth century.  Today, no African American can legally be stopped from drinking out of a public water fountain.  The importance of this statement cannot be understood by anyone who would compare OWS to the civil rights movement.

When the proponents of OWS compare it to the anti-war counterculture movements of the 1960s, they are on slightly less shaky ground.  But only slightly.  The movements of the 1960s – like OWS – were primarily white middle-class movements.  And this is pretty much where the comparisons end.  When we start looking for more similarities is when the self-absorption of OWS stands out.

In both cases, we’re talking about the (primarily white) middle class.  We’re talking about people who have every door open to them.  Who have every opportunity available to them.  Who have every right and privilege handed to them.  From this starting point, vastly different messages arose.

OWS looks to the gap between itself and the 1% and says to the world:  “This inequality is inexcusable.  We should not have to settle for what we have when these few have so much.  As a society, we should take steps to reduce what they have so that the rest of us can have more.”

In contrast, those protesting in the 1960s looked to the gap between themselves and those who had less and said to the world:  “This inequality is inexcusable.  We should not allow members of our society to have so little when we have so much.  As a society, we should take steps to increase what they have, even if it means decreasing what we have.”

The movements of the 1960s were selfless (this is not to say that there were no egos involved).  They were about ending war.  They were about treating each other fairly.  They were about striving toward equality by giving – not by taking.

Those on the ground in the 1960s also saw inherent flaws in American consumer culture.  They too saw rampant consumption and pervasive greed, and they feared the results of them.  Their response, though, was almost the opposite of the OWSers’ – they opted out.  When they saw a culture of avarice that they felt had eroded their society and threatened their world, their response was to turn their backs on it – not to demand more of it.  Thus the term ‘counterculture’.

If OWS had occurred in the 1960s, iPhones wouldn’t have been used to Tweet about it.  They would have been used as firewood.

The counterculture movements of the 1960s possessed something that OWS sadly lacks.  They had heart, soul and yes – even magic.  Because of this, they gave birth to greatness.  Heroes don’t give birth to movements – movements give birth to heroes.  The 1960s produced the likes of Abbie Hoffman and the rest of the Chicago Seven.  (The civil rights movement produced even bigger giants.)

This is the soul that OWS lacks. And without it I feel it is doomed to failure. Where is its Hoffman, its Dylan, its Joplin, its Baez?

Speaking of which, where in hell is the music?  How is it that this movement has inspired so little?  Oh – I know that the people on the ground have been attempting to write songs.  I’ve listened to some of them.  And that’s all I’m going to say about that.

And I was going to continue my decades-old practice of ignoring Third Eye Blind, but I will go so far as to give them 10 bonus points for offering their song as a free download.  And then I’ll take 5 of those points away because one of the places they posted it is their Facebook page.  However, you’re fooling yourself if you see their “anthem”  as anything other than a thinly-veiled attempt to resuscitate their dead careers.

The 1960s, though, produced music of a different sort.  The kind of music that never goes out of style.  The kind of music that understands that peace is the answer to war, love is the answer to hatred, and generosity is the answer to greed.  The kind of music that shines light into dark places and makes flowers grow there.

The kind of music that can somehow magically transform half a million sweaty, mud-caked, tripping hippies into Stardust.

There was a time – not too long ago – when a new social media contender appeared on the horizon.  It was supposed to be the first real threat to Facebook, and it was called Diaspora (I’m not really sure what they were thinking when they chose the name.  While the word technically can simply mean a scattering of people, its common usage implies a scattering that takes place against the people’s will).

At first, Diaspora got a lot of press.  The guys proposing it hyped it as a privacy-minded alternative to Facebook – a social network that wouldn’t sell off our private data to the highest bidder.  This proposal was well received.  The developers asked the world for money for startup costs via Kickstarter.  They initially asked for $10,000.  They ended up receiving more than $200,000.  All this without writing a single line of code.

I watched Diaspora with interest, as it sounded like a fine idea to me.  It shouldn’t come as a surprise to anyone that I thought the world could use an alternative to Facebook.  I was also intrigued by the fact that Diaspora intended their code (when they finally wrote it) to be open source, thereby allowing us to run it ourselves on our own servers if we so desired.

But then Google+ hit the interwebs.  It was immediately given the title of Facebook killer, and it seemed like everybody was talking about G+ for weeks.

And nobody – but nobody – seemed to be talking about Diaspora anymore.  I even asked about it a couple of times, at Google+ as well as at Twitter, but no one seemed to have heard anything from or about Diaspora since Google+ launched.  As far as I could tell, the project seemed to be pretty much dead in the water.

Until Diaspora reappeared, just a couple weeks ago.  I first noticed activity on the official Diaspora Twitter account, shortly after which I received an email inviting me to join in on the beta.  Of course, I did so.

And I have been greatly disappointed.  Not by the software but by its user base.  See, Diaspora had a real shot at the limelight, and if they had just gotten off the pot after they received twenty times the funding they asked for, they may have given Facebook a run for its money.  But Google beat them to the punch, and it was a serious beating.

Fact is, the overwhelming majority of Facebook users are really quite happy with Facebook, warts and all.  When it comes to all the various privacy issues, the average user just doesn’t give a crap.  And for most of those who do give a crap, Google+ serves as a perfectly adequate alternative.

So when Diaspora finally hit the scene, they were no longer the only alternative to Facebook.  In fact, they were now just a feature-poor substitute offered by a relatively unknown company with comparatively no resources at their disposal.

And their pickings were pretty slim.  Of the many, many people who actually want to participate in some form of social network, Facebook had already sewn up the majority of the pie.  Of the remainder, Google+ met the needs and/or desires of all but the most rabidly paranoid of the tinfoil hat-wearing crowd, who (sadly) have flocked to Diaspora and claimed it as their own.

As you may have guessed, finding a rational discussion at Diaspora is virtually impossible.  Like the previously mentioned Quora, Diaspora’s narrow and esoteric user base has led to Rule By Douchebaggerati.  I have tried a few times to engage people at Diaspora, and the universal response has been attempts to pick fights with me.  Kind of sad and laughable at the same time, especially the latest instance.

Unsurprisingly, a fair amount of the ‘discussion’ at Diaspora revolves around Facebook- and/or Google-bashing.  My latest exposure to extreme douchebaggery occurred when a guy claimed to ‘know’ of Google’s evil, due to the vast amount of ‘research’ he’s done on the subject.  I politely (really – I worked at it) asked him to share his research.

I got no response from the Google scholar, but I did get numerous responses from the rest of the tinfoil hat-wearing crowd.  Their eventual consensus was (I’m not kidding) that the ‘truth’ about Google is only meaningful to those who do the research themselves.   Seriously.  One of them even went so far as to reference a series of ‘scholarly’ works on the subject of research and how it only really ‘works’ when we do it for ourselves (I’m not really sure how this works.  How far back along the research trail do we have to go ourselves?  Should I start each day by inventing language?).  So it’s not that they can’t back up their claims, but that they choose not to.  For my own good.  And they were quite happy to explain ad nauseam the reasons for this choice.  I don’t know if they’re intensely dumb or if they just think I am.

Which got me to thinking (about Google, that is).  I have, in fact, wondered about Google.  About whether or not it is evil.  My initial assumption was that it is.  I mean – it stands to reason, doesn’t it?  It’s an enormous, ridiculously wealthy and powerful corporation – how could it not be evil?

Being the kind of guy I am, though, I took the time to look into it.  I figured an enormous, wealthy, powerful evil empire would leave some sort of conclusive, verifiable proof of evildoings.  So I looked for them.  And I didn’t find any.  So I looked harder.  And I still didn’t find any.  So I looked even harder.  And still nothing.

What I found was a company that has made a fortune off of advertising.  One way in which they have done this is by gathering data about their users (us) and selling it to the highest bidder.  As far as I can tell, Google has never tried to hide this.  And while the data they gather (data we freely hand over to them, by the way) is – technically – private data, it’s not private in the way most people think.  Google doesn’t sell our account numbers to anyone.  Nor do they sell our email addresses.  In fact, they don’t sell anything that could be called PII (personally identifiable information).  Not even here in Massachusetts, the home of insanely stringent PII legislation.  The kind of data Google gathers and sells about us is data that we generate but that we don’t generally have a use for ourselves.

Years ago, my mother was a regular participant in the Nielsen Ratings.  Every so often, she would get a package in the mail from Nielsen.  It would contain some forms, a pencil and a ridiculous fee (I’m pretty sure it was $1).  For the following couple of weeks, she would religiously (and painfully honestly) record every television program watched in our household.  When the forms were completed, she would send them back to Nielsen.  The idea behind this was to find out what shows people were actually watching so that programming and advertising dollars could be spent appropriately.  I don’t know if the system actually worked, but it came close enough to make all involved happy.

This is the sort of data Google gathers.  The kind of data advertisers really care about, but that is not terribly meaningful to most of us average users.

And Google doesn’t force this upon us.  If you don’t want to give them your personal data, all you have to do is refrain from using their products and services.  There are other search engines out there.  There are other email providers (actually, if you want to use Gmail but don’t want Google to gather your personal information while you do so, all you have to do is pay for it.  It’s the free version that gets paid for through data).  On the other hand, if you’re willing to let Google gather and use your personal data, all those products and services are the payment you receive for the deal.

The other thing I found in my travels is scores – no, hundreds (possibly even thousands) of people who know that Google is evil.  They know because they’ve seen proof.  They’ve walked the walk, they’ve done the research, and they know – beyond doubt – that Google is The Evil Empire.  And every time I have encountered one of these people I have made the same simple request:  that they share this knowledge with me.

Not a single one of them has done so.  In fact, most of them get quite angry as part of the process of not doing so.  Usually I get told how painfully obvious it is – how the universe is practically littered with the proof of it – but no one has actually gone so far as to show me the proof they profess to have, or point me to the proof they profess to have seen.  Other times (like the recent one mentioned above) I get lengthy justifications as to why they are not sharing what they know (always that they are not – never that they cannot.  An important distinction).

At first I wondered if Google was just that good at covering up their evildoing.  They’d have to be better at it than the CIA (who’ve been eating and drinking cover-up for generations), but that wouldn’t be impossible.  Just unlikely.

But that didn’t make sense in light of all the people who have seen evidence of Google’s wrongdoing (they have!  Really!).  Instead, it would mean that of all those people, not one of them was willing to put their money where their mouth is (I mean, they’re all able to, right?  It’s that they’re not willing to).  Of all those people who know how evil Google is, not a single one of them is willing to produce any real proof of it.  Not a single conclusive, verifiable piece of evidence.  Not one.

Of course, the other possibility is that they’re all a bunch of asshats and Google is just a legitimate business.

I’ve got a brother who lives in Connecticut, not far from New York.  I visited him not too long after September 11th, 2001, for no particular reason.  While I was there, a 9/11 benefit concert was held in New York, and we watched it live on television.  We watched a variety of performers come and go, as well as the audience’s varying reactions to them.  Toward the end of the concert, The Who (one of my favorite bands) got up to play.  They played Won’t Get Fooled Again and Baba O’Riley.

And the audience went nuts.  They yelled and screamed and punched the air and waved their flags and laughed and cried.  They cheered themselves hoarse for a band they believed understood their pent-up national pain and anger.  They cheered for their love of country and their faith in the future.  They cheered for America the Beautiful and for four British boys who seemed to understand.

I sat in my brother’s armchair, drinking a beer and watching this spectacle in dumbfounded horror.  Halfway through the second song, I jumped up and shouted at the television:

“Aren’t you people listening to the words?!?!”

I’ve been reminded of this fairly often as of late, most every time I encounter a discussion about the NoGIS ‘movement’.  For those of you who are unfamiliar with the catchphrase, NoGIS is a term adopted by many Map Dorks to signify a perceived need for a paradigm shift within the discipline.

As a concept, NoGIS is meaningful and interesting, and its more sober and informed proponents have supplied me with some lively and enjoyable arguments and/or discussions on the subject.  Of course, these are the same people who currently tend to shy away from the term ‘NoGIS’ as being inappropriate and ill-conceived.  The problem is that the term was adopted while the concept itself was still rather nebulous and unformed.  NoGIS was chosen as a nod toward the NoSQL movement, mainly – I think – because it sounded cool.

Anyway, NoGIS reminds me of that 9/11 benefit concert because for every sober and informed proponent of the concept, there are at least a dozen idiots who have no idea what the whole thing is about but have nonetheless jumped on the bandwagon because they couldn’t pass up an opportunity to wave their flag and shout.  People who are afraid that there’s a revolution brewing and are terrified that it might pass them by.  Kind of sad, actually.

Truth is, there’s no revolution.  Nor is there a looming paradigm shift.  What is occurring is a sort of branching of the discipline.  A fork in the road, as it were.  In fact, we arrived at that fork and passed by it some time ago, but it hasn’t been until now that the need has emerged to sit down and really figure out what it means.

Today’s GIS seems to have such different demands that it’s easy to jump to the conclusion that the entire discipline is due for a shake-up.  And it’s not just a question of size – while shuffling around terabytes certainly poses certain challenges, they’re not terribly different from those presented by shuffling around gigabytes not too long ago.  All other size-related issues fall into a similar category.  While the demands get bigger and bigger, so do our capabilities.

We’re talking about other sorts of change here.  Changes in the primary purpose our data is serving.  Who is using it, how are they using it, and for what purpose?  This is the fork in the road I’m talking about.

A meaningful split occurred at that fork (this is not to imply that there is any sort of divide in the discipline.  We’re all on the same side here).  A large part of the discipline continued happily down the road GIS has been travelling along since its birth, which is why any paradigm shift that happened was not a universal one.

But the new road called for a major reorganization of worldview.  On this new road, the client became the consumer.  The project became the product.  The science of GIS became the business of GIS.

What I’m talking about here is the commoditization of geography.

Yes – it entails a different tool kit, but not a dissimilar one (we are not alone in this – any discipline that has both a theoretical and an applied branch has these sorts of differences.  This is most easily seen by comparing how a discipline is practiced in the academy with how it is practiced in the public sphere).  And many of the tools do much the same job, but in a different way or to a different degree (a hammer and a pneumatic nailgun both drive nails).

What the flag wavers and shouters don’t seem to be noticing here, though, is that everybody wins.  This fork in the road is a very good thing for GIS.  The more directions we have research and development travelling in, the better off we all are.

As long as we all keep talking to each other.  GIS will continue to travel down both roads (and I hope there will be more to come), and the best thing for our discipline and ourselves is to share our advancements so that we can build upon and refine each other’s work.

If we must make distinctions, though, let’s at least do so in a manner that makes sense.  We could apply any number of labels we desire, and many of them would make as much sense as the others.  Personally, I like Theoretical GIS and Applied GIS (I’d like to think which is which is obvious).  They’re fairly descriptive and neither one has any particular negative connotations.

I think it’s about time we drop this NoGIS crap, though.  At the end of the day, we’re all just trying to apply some meaning to geography, or to extract some meaning from it.

And that, my friends, is GIS.

Note: This is the fourth and last part in a series on building your own home-brewed map server.  It is advisable to read the previous installments, found here, here and here.

This is the point at which the tutorial-like flavor of this series breaks down, for a variety of reasons.  From here on, we’ll be dealing with divergent variables that cannot be easily addressed.  We’ll discuss them each as they come up.  Suffice to say that from now on I can only detail the steps I have taken.  Any steps you take will depend on your equipment and circumstances.

Having finished putting my server together, I decided it was time to give it a face to show the world.  Before I could do so, however, I had to give it a more substantial connection to that world, a process that began with establishing a dedicated address (domain).  The most common method of achieving this is to simply purchase one.  There are a variety of web hosts you can turn to for this.  I cannot personally recommend any of them (due only to personal ignorance).

For the purpose of this exercise I didn’t feel I needed a whole lot (all I really wanted was an address, since I intended to host everything myself), so I went to DynDNS and created a free account (thanks, Don).  DynDNS set me up with a personal address, and the use of their updater makes it work for dynamic addresses (which most routers provide).  The web site does a decent job of walking you through the process, including setting up port forwarding in your router.

Exactly how to go about forwarding a port is particular to the router in question, so I won’t go into it in detail.  I will say that it is not something that should be approached lightly.  Port forwarding can pose certain security risks.  It’s a very good idea to do some research into the process before you dabble in it.
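Whatever router you have, it’s worth a quick sanity check before (and after) touching it: confirm that something on the server is actually listening on the port you intend to forward.  A minimal sketch, using ss (part of the iproute2 package that ships with Ubuntu):

```shell
# Check whether anything is listening on TCP port 80 locally.
# ss ships with Ubuntu as part of the iproute2 package.
if ss -tln 2>/dev/null | grep -q ':80 '; then
  status="listening"
else
  status="not listening"
fi
echo "port 80: $status"
```

If Apache is up, you should see ‘listening’; if not, forwarding the port at the router will accomplish exactly nothing.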

Once I had an address and a port through which to use it, I had to choose a front end for my server.  I was tempted to go with Drupal, mainly because it has the best documented means with which to serve up TileStream, but also because I’ve been meaning to learn my way around Drupal for some time now.

In the end, I realized that my little server, despite being almost Thomas-like in its dedication and willingness to serve, just doesn’t have the cojones necessary for serving those kinds of tiles.  Truth is, if I wanted my own custom base map tiles in an enterprise environment, I’d purchase MapBox’s TileStream hosting rather than serving it myself, anyway. (Umm… I really couldn’t have been more wrong about this.)

And so I decided learning Drupal could wait for another day.  Instead I chose to go with WordPress, for several reasons.  I’m reasonably familiar with it, it’s a solid, well-constructed application, it’s extremely customizable, and it has an enormous, dedicated user base that has written huge numbers of themes and plugins.  And while WordPress was originally intended to be a blogging platform (and remains one of the best), it’s easy enough to reconfigure it for other purposes.

Installing WordPress is a snap since we included LAMP (Linux, Apache, MySQL and PHP) while initially installing Ubuntu Server Edition.  In the terminal, type:

sudo apt-get install wordpress

Let it do its thing.  When it asks if you want to continue, hit ‘y’.  When it gets back to the prompt, type:

sudo ln -s /usr/share/wordpress /var/www/wordpress

sudo bash /usr/share/doc/wordpress/examples/setup-mysql -n wordpress example.dyndns.org

But replace ‘example.dyndns.org’ with the address you created at DynDNS.

Back to Webmin.  On the sidebar menu, click on Servers→Apache Webserver→Virtual Server.  Scroll down to the bottom.  Leave the Address at ‘Any’.  Specify the port you configured your router to forward (should be port 80, the default for HTTP).  Set the Document Root by browsing to  /var/www/wordpress.  Specify the Server Name as the address you created at DynDNS (the full address – include http://).  Stop and start Apache for good measure.
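For the curious, those Webmin clicks boil down to writing a virtual host stanza into Apache’s configuration.  A rough sketch of the result – example.dyndns.org is a stand-in for the address you created at DynDNS, and the paths assume the stock Ubuntu layout used throughout this series:

```apache
# Sketch of the virtual server Webmin generates.  Replace
# example.dyndns.org with your own DynDNS-created address.
<VirtualHost *:80>
    ServerName example.dyndns.org
    DocumentRoot /var/www/wordpress
</VirtualHost>
```

Knowing what the form produces makes it much easier to troubleshoot later, since the raw config files are what Apache actually reads.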

Now you should be able to point your browser to your DynDNS-created address (hereafter referred to as your address) to complete your configuration of WordPress.  You will have to make many decisions.  Choose wisely.

Once you have WordPress tweaked to your satisfaction, you’re probably going to want to add some web map functionality to it.  First and easiest is Flex Viewer.  All you have to do is move the ‘flexviewer’ folder from /var/www to  /usr/share/wordpress.  The file manager in Webmin can do this easily.  Once you’re done, placing a Flex Viewer map on a page looks something like this:

<iframe style="border: none;" height="400" width="600" src="http://your address/flexviewer/index.html"></iframe>

Straightforward HTML.  Nothing fancy, once all the machinery is in place.

Which gets a little trickier for GeoServer.  By design, GeoServer only runs locally (localhost).  In order to send GeoServer maps out to the universe at large, we have to do so through a proxy.  This has to be configured in Apache.  Luckily, Webmin makes it a relatively painless process.

We’ll start by enabling the proxy module in Apache.  Click on Servers→Apache Webserver→Global Configuration→Configure Apache Modules.  Click the checkboxes next to ‘proxy’ and ‘proxy_http’, then click on the ‘Enable Selected Modules’ button at the bottom.  When you return to the Apache start page, click on ‘Apply Changes’ in the top right-hand corner.

Having done that, we can point everything in the right direction.  Go to Servers→Apache Webserver→Virtual Server→Aliases and Redirects.  Scroll to the bottom and fill in the boxes thus:


Your server will have a name other than maps.  Most likely, it will be localhost.  In any case, you can find it by looking in the location bar when you access the OpenGeo Suite.  Apply the changes again, and you might as well stop Apache and restart it for good measure.
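In raw Apache terms, the boxes in the Aliases and Redirects form amount to a ProxyPass/ProxyPassReverse pair per application.  A sketch, assuming the OpenGeo Suite is on its default port (8080) under the name localhost – adjust both to match whatever you see in your location bar:

```apache
# Proxy the OpenGeo Suite apps out through Apache.  Port 8080 and
# the localhost name are defaults; yours may differ.
ProxyPass        /geoserver    http://localhost:8080/geoserver
ProxyPassReverse /geoserver    http://localhost:8080/geoserver
ProxyPass        /geoexplorer  http://localhost:8080/geoexplorer
ProxyPassReverse /geoexplorer  http://localhost:8080/geoexplorer
```

Incidentally, if you’d rather skip Webmin for the module step above, sudo a2enmod proxy proxy_http followed by an Apache restart does the same job on Ubuntu.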

You can now configure and publish maps through GeoExplorer.  The only caveat is that GeoExplorer will give you code that needs a minor change.  It will use a local address (i.e., localhost:8080) that needs to be updated.  Example:

<iframe style="border: none;" height="400" width="600" src="http://localhost:8080/geoexplorer/viewer#maps/1"></iframe>

changes to

<iframe style="border: none;" height="400" width="600" src="http://your address/geoexplorer/viewer#maps/1"></iframe>
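If you end up with more than a couple of embeds to fix up, the substitution is easy enough to script.  A sketch, with example.dyndns.org standing in for whatever address you actually created:

```shell
# Swap GeoExplorer's local address for the public one.
# 'example.dyndns.org' is a stand-in -- use your own DynDNS address.
embed='<iframe style="border: none;" height="400" width="600" src="http://localhost:8080/geoexplorer/viewer#maps/1"></iframe>'
fixed=$(echo "$embed" | sed 's|http://localhost:8080|http://example.dyndns.org|')
echo "$fixed"
```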

And that – as they say – is that.  Much of this has entailed individual choices and therefore leaves a lot of room for variation, but I think we’ve covered enough ground to get you up and running.  If you want to see my end result, you can find it at:

The Monster Fun Home Map Server Webby Thing

I won’t make any promises as to how long I will keep it up and running, but it will be there for a short while, at least.  Keep in mind that it is a work in progress.  So be nice.

Update:  My apologies to anyone who may give a crap, but I have pulled the plug on the Webby Thing.  It was really just a showpiece, and I just couldn’t seem to find the time to maintain it properly.  And frankly, I have better uses for the server.  Sorry.

