
Note: This is the third part in a series on building your own home-brewed map server.  It is advisable to read the previous installments, found here and here.

Last time, I walked you through installing TileMill, and I promised a similar treatment for TileStream and Flex Viewer.  I am a man of my word, so here we go.  Don’t worry – this will be easy in comparison to what we’ve already accomplished.

We’ll start with TileStream, simply because we’re going to have to avail ourselves of the command line.  Once again, you can either plug a keyboard and monitor into your server or use whatever SSH client you’ve been using thus far.

Once you’re in the terminal, take control again (‘sudo su’).  For your TileStream installation, you can follow the installation instructions as presented, except for one detail:  the instructions assume we already have an application that we don’t.  Let’s correct that:

sudo apt-get install git

And then proceed with the installation (don’t forget to hit ‘enter’ after each command):

sudo apt-get install curl build-essential libssl-dev libsqlite3-0 libsqlite3-dev

git clone -b master-ndistro git://

cd tilestream


And that’s that (TileStream, even more than TileMill, will throw up errors during the installation.  None of them should stop the process, though, so you can safely ignore them).  Like TileMill, TileStream needs to be started before it can be accessed in a browser.  Since the plan is to run the server headless, let’s set this up in Webmin in a fashion similar to the one employed for TileMill.

Back to Webmin, again open the ‘Other’ menu, and this time click on ‘Custom Commands’.  We’ll create a new Custom Command and configure it as follows (substitute your name for mine as appropriate):

TileStream Command

Save it, and you will now have a Custom Command button to use for starting TileStream (we didn’t do this for TileMill because we cannot.  The Webmin Custom Command function simply won’t accept it.  I think it has to do with the nature of the command.  I think the ‘./’ in the TileMill command confuses it).
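Incidentally, the ‘./’ problem can be worked around by wrapping the start command in a small script and pointing Webmin’s Custom Command at the script’s absolute path.  Here’s a minimal sketch (the username, install path and start command are examples taken from my own setup; adjust them to match yours, and note I’m writing the script to /tmp purely for demonstration):

```shell
# Wrap the TileMill start command in a one-line script so Webmin can
# call it by absolute path instead of choking on './'
cat > /tmp/start-tilemill.sh <<'EOF'
#!/bin/sh
# cd first so the relative './' path resolves; username and path are examples
cd /home/terry/tilemill || exit 1
./tilemill.js
EOF
chmod +x /tmp/start-tilemill.sh
```

Save the script somewhere permanent (your home directory, say) and give Webmin the full path to it.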

At this point, TileStream is fully functional, but it doesn’t yet have a tileset to work with.  Using the same browser with which you just accessed Webmin, go here to download one.  Scroll down the page, pick a tileset you like and click on it to proceed to the download page (I picked World Light).  Download the file to wherever you please.  Once you have the file, go back to Webmin and open the ‘Other’ menu again.  Click on ‘Upload and Download’, then select the ‘Upload to Server’ tab.  Click on one of the buttons labeled ‘Choose File’, then browse to the tileset file you downloaded.  For ‘File or directory to upload to’, click the button and browse your way to /home/terry/tilestream/tiles (by now, you should know you’re not ‘terry’).  Click the ‘Upload’  button.

Once your tileset is finished uploading, you can point the browser to http://maps:8888 (yeah, yeah – not ‘maps’) to access TileStream.  Enjoy:


Our last order of business is Flex Viewer (otherwise known as ‘ArcGIS Viewer for Flex’).  This is the easiest of the lot, mainly because it doesn’t actually have to be installed.  Still using the same browser, go to the download page (you’ll need an ESRI Global Account.  If you don’t have one, create one), agree to the terms and download the current version (again – download it to wherever you please).  Once you have the package, use Webmin to upload it to the server.  This time you’ll want to upload the file to /var/www and you’ll want to check the ‘Yes’ button adjacent to ‘Extract archive or compressed files?’

And you’re in.  Point the browser to http://maps/flexviewer/ (you know the drill) and play with your new toy:


You can see I have customized the Flex Viewer.  You should do so as well (it’s designed for it, after all).  Open the file manager in Webmin (the ‘Other’ menu again) and navigate to /var/www/flexviewer.  Select config.xml, then click the ‘Edit’ button on the toolbar.  The rest is up to you.

* * * * *

So now you have a headless Ubuntu map server up and running, and the question you are probably asking yourself is:  “Do I really need all this stuff running on my server?”  The answer is, of course, ‘no’.  The point of this exercise was to learn a thing or two.  If you’ve actually been following along and have these applications running on your own machine, you are now in a good position to poke around for a while to figure out what sort of server you’d like to run.

For instance, there’s no real reason to run TileMill on a server.  TileMill doesn’t serve tiles; it fires them.  Therefore it’s probably not the best idea to be eating up your server’s resources with TileMill (and it seriously devours resources).  The server doesn’t have a use for the tiles until they’re done being fired, at which point TileStream is the tool for the job.

That said, there’s no compelling reason why you couldn’t run TileMill on your server.  If you’d rather not commit another machine to the task (and if you’re not in any kind of hurry), why not give the job to the server?  It’ll take it a while, but it will get the tiles fired (if your server is an older machine like mine, I would strongly advise you to fire your tiles in sections, then put them together later.  I suggest firing them one zoom level at a time and combining them with SQLite Compare).
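By way of illustration: an .mbtiles file is just a SQLite database, so combining per-zoom-level exports boils down to attaching one file to another and copying rows across (which is essentially what SQLite Compare does behind its GUI).  A toy sketch, with made-up filenames and a simplified one-table layout, run in a scratch directory so it’s safe to try:

```shell
# work in a scratch directory with made-up filenames
cd /tmp
rm -f zoom-low.mbtiles zoom-high.mbtiles
# two toy 'exports', one tile each (real exports carry more tables;
# this is just the tiles table)
sqlite3 zoom-low.mbtiles "CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, tile_data BLOB); INSERT INTO tiles VALUES (0,0,0,x'00');"
sqlite3 zoom-high.mbtiles "CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, tile_data BLOB); INSERT INTO tiles VALUES (6,10,22,x'00');"
# merge: attach the second file and copy its rows into the first
sqlite3 zoom-low.mbtiles "ATTACH 'zoom-high.mbtiles' AS src; INSERT INTO tiles SELECT * FROM src.tiles; DETACH src;"
sqlite3 zoom-low.mbtiles "SELECT count(*) FROM tiles;"
```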

Flex Viewer and the OpenGeo Suite don’t often go together, but there’s no reason why they can’t.  Flex Viewer can serve up layers delivered via WMS – there’s nothing to say GeoServer can’t provide that service.  They are, however, very different applications, with vastly different capabilities, strengths and weaknesses.  They also have a very different ‘feel’, and we should never discount the importance of aesthetics in the decision making process.

A final – and very important – consideration in the final configuration of our home server is the nature of the face it presents to the world.  In order for a server to serve, it must connect to and communicate with the world at large.  This means some kind of front end, the nature of which will influence at least some of our choices.

Which brings us neatly to the next post.  See you there.


Note:  This is the second part in a series on building your own home-brewed map server (I would tell you how many installments the series will entail, but I won’t pretend to have really thought this through.  There will be at least one more.  Probably two).  It assumes you have read the previous installment.  You have been warned.

Last time, I walked you through setting up your very own headless map server using only Free and Open Source Software.  Now, I’m going to show you how to trick it out with a few extra web mapping goodies.  The installation process will be easiest if you re-attach a ‘head’ to your server (i.e., a monitor and keyboard), so go ahead and do that before we begin (alternately, if you’re using PuTTY to access your headless server, you can use it for this purpose).

At the end of my last post, I showed you all a screenshot of my server running TileMill, TileStream and Flex Viewer, and I made a semi-promise to write something up about it.  So here we are.

I tend toward a masochistic approach to most undertakings in my life, and this one will not deviate from that course.  Whenever I am faced with a series of tasks that need completion, I rank them in decreasing order of difficulty and unpleasantness, and I attack them in that order.  In other words, I work from the most demanding to the least troublesome.

I originally intended to write a single post covering TileMill, TileStream and Flex Viewer, but a short way into this post I realized that I had to split it into two pieces.  The next post will cover TileStream and Flex Viewer.  This one will get you through TileMill.

TileMill can be a bear to install – not because you need catlike reflexes or forbidden knowledge or crazy computer skills – but simply because there are many steps, which translate into lots of room for error.  A quick glance at TileMill’s installation instructions may seem a bit daunting (especially if you’re new to this kind of thing):

Install build requirements:

# Mapnik dependencies 
sudo apt-get install -y g++ cpp \ 
libboost-filesystem1.42-dev \ 
libboost-iostreams1.42-dev libboost-program-options1.42-dev \ 
libboost-python1.42-dev libboost-regex1.42-dev \ 
libboost-system1.42-dev libboost-thread1.42-dev \ 
python-dev libxml2 libxml2-dev \ 
libfreetype6 libfreetype6-dev \ 
libjpeg62 libjpeg62-dev \ 
libltdl7 libltdl-dev \ 
libpng12-0 libpng12-dev \ 
libgeotiff-dev libtiff4 libtiff4-dev libtiffxx0c2 \ 
libcairo2 libcairo2-dev python-cairo python-cairo-dev \ 
libcairomm-1.0-1 libcairomm-1.0-dev \ 
ttf-unifont ttf-dejavu ttf-dejavu-core ttf-dejavu-extra \ 
subversion build-essential python-nose 

# Mapnik plugin dependencies 
sudo apt-get install libgdal1-dev python-gdal gdal-bin \
postgresql-8.4 postgresql-server-dev-8.4 postgresql-contrib-8.4 postgresql-8.4-postgis \ 
libsqlite3-0 libsqlite3-dev  

# TileMill dependencies 
sudo apt-get install libzip1 libzip-dev curl 

Install mapnik from source:

svn checkout -r 2638 mapnik 
cd mapnik
python scons/ configure INPUT_PLUGINS=shape,ogr,gdal
python scons/ 
sudo python scons/ install 
sudo ldconfig 

Download and unpack TileMill. Build & install:

cd tilemill
./ndistro

It’s not as scary as it looks (the color-coding is my doing, to make it easy to differentiate things).  The only circumstance that makes this particular process difficult is that the author of these instructions assumes we know a thing or two about Linux and the command line.

Let’s start at the top, with the first ‘paragraph’, which begins: # Mapnik dependencies.  Translation:  We will now proceed to install all the little tools, utilities, accessories and such-rot that Mapnik (a necessary and desirable program) needs to function (i.e., “dependencies”).

It is assumed that we know the entire ‘paragraph’ is one command, and that the backslashes (\) mark line continuations rather than the ends of separate commands (and shouldn’t be followed by spaces).  It is also assumed that we will notice any errors that may occur during this process, know whether we need concern ourselves with them and (if so) be capable of correcting them.

Let’s see what we can do about this, shall we?  Since we’re installing this on our server and actually typing in the commands (rather than copying and pasting the whole thing), we have the luxury of slicing it up into bite-sized pieces.  This way the process becomes much less daunting, and it makes it easier for us to correct any errors that crop up along the way.

We’ll start by taking control.  Type “sudo su” (sans quotation marks), then provide your password.  Now we can proceed to install everything, choosing commands of a size we’re comfortable with.  I found that doing it one line at a time works pretty smoothly.  Two important points here:  start every command with “sudo apt-get install” (not just the first line) and don’t include the backslashes (unless you’re installing more than one line at a time).  I would therefore type in the first two lines like this (don’t forget to hit ‘enter’ at the end of each command):

sudo apt-get install -y g++ cpp

sudo apt-get install libboost-filesystem1.42-dev

You get the idea.  Continue along in this fashion until you have installed all the necessary dependencies for Mapnik.  I strongly recommend doing them all in one sitting.  It just makes it easier to keep track of what has and hasn’t been installed.
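If the backslash business still seems mysterious, it’s easy to demonstrate harmlessly, using echo in place of apt-get:

```shell
# a backslash at the very end of a line is a continuation character:
# the shell joins the lines back into one command before running anything
echo install g++ cpp \
    python-dev
# the command above prints exactly the same thing as:
echo install g++ cpp python-dev
```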

At this stage of the game, any errors you encounter will most likely be spelling errors.  Your computer will let you know when you mistype, usually through the expedient of informing you that it couldn’t find the package you requested.  When this occurs, just double-check your spelling (hitting the ‘up’ cursor key at the command prompt will cause the computer to repeat your last command.  You can then use the cursors to correct the error).  At certain points in the installation process, your server will inform you of disk space consumption and ask you to confirm an install (in the form of yes/no).  Hitting ‘y’ will keep the process moving along.

While packages install in your system, slews of code will fly by on your screen, far too fast to read or comprehend.  Just watch it go by and feel your Geek Cred grow.

By now you should have developed enough Dorkish confidence to have a go at # Mapnik plugin dependencies and # TileMill dependencies.  Have at it.

When you’re done, move on to installing Mapnik from source.  Each line of this section is an individual command that should be followed by ‘enter’.  The first line will throw up your first real error.  Simply paying attention to your server and following the instructions it provides will fix the problem (in case you missed it, the error occurred because you haven’t installed Subversion, an application you attempted to use by typing the command ‘svn’.  Easily fixed by typing sudo apt-get install subversion).  You can then re-type the first line and proceed onward with the installation.  When you get to the scons commands, you will learn a thing or two about patience.  Wait it out.  It will finish eventually.

Now we should be ready to do what we came here to do:  install TileMill.  Unfortunately, TileMill’s installation instructions aren’t very helpful at this point for a headless installation.  All they tell us is to “Download and unpack TileMill”.  There’s a button further up TileMill’s installation page for the purpose of the ‘download’ part of this, but it’s not very helpful for our situation.  We could use Webmin to manage this, but what the hell – let your Geek Flag fly (later on, we’ll use Webmin to install Flex Viewer, so you’ll get a chance to see the process anyway).

Our installation of Mapnik left us within the Mapnik directory, so let’s start by returning to the home directory:

cd ~
Then we can download TileMill:

wget --no-check-certificate

Now let’s check to confirm the name of the file we need to unpack:

ls
This command will return a list of everything in your current directory (in this case, the home directory).  Amongst the files and folders listed, you should see ‘0.1.4’ (probably first).  Let’s unpack it:

unzip 0.1.4

Now we have a workable TileMill folder we can use for installation, but the folder has an unwieldy name (which, inexplicably, the installation instructions fail to address).  Check your directory again to find the name of the file you just unpacked (in my case, the folder was ‘mapbox-tilemill-4ba9aea’).  Let’s change that to something more reasonable:

mv mapbox-tilemill-4ba9aea tilemill
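If you’d rather not type the hash, a glob does the same rename.  Sketched here in a scratch directory with my made-up hash, so it’s safe to try as-is (the glob assumes only one matching directory):

```shell
# scratch-directory demo: rename the hashed folder without typing the hash
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
rm -rf mapbox-tilemill-4ba9aea tilemill
mkdir mapbox-tilemill-4ba9aea
mv mapbox-tilemill-* tilemill
ls
```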

At long last, we can follow the last of the instructions and finish the installation:

cd tilemill

./ndistro
Watch the code flash by.  Enjoy the show.  This package is still in beta, so it will probably throw up some errors during installation.  None of them should be severe enough to interrupt the process, though.  Feel free to ignore them.

Once the installation is complete, we’ll have to start TileMill before we can use it.  This can be achieved by typing ‘./tilemill.js’ in the terminal, but TileMill actually runs in a browser (and we’ll eventually need to be able to run it in a server with no head), so let’s simplify our lives and start it through Webmin.

Go to the other computer on your network through which you usually access your server (or just stay where you are, if you’ve been doing all this through PuTTY), open the browser and start Webmin.  Open the ‘Others’ page and select ‘Command Shell’.  In the box to the right of the ‘Execute Command’ button, type:

cd /home/terry/tilemill (substitute your own username for ‘terry’)

Click the ‘Execute Command’ button, then type in:

./tilemill.js
Click the button again (after you’ve gone through this process a couple of times, Webmin will remember these commands and you’ll be able to select them from a drop-down list of previous commands).

And now enjoy the fruits:  type http://maps:8889 into the location bar of your browser (again, substitute the name of your server for ‘maps’).  Gaze in awe and wonder at what you have wrought:


Take a short break and play around with the program a bit.  You’ve earned it.  When you’re done I’ll be waiting at the beginning of the next post.

Fellow Map Dork and good Twitter friend Don Meltz has been writing a series of blog posts about his trials and tribulations while setting up a homebrewed map server on an old Dell Inspiron (here and here).  I strongly recommend giving them a read.

At the outset, Don ran his GeoSandbox on Windows XP, but recently he switched over to Ubuntu.  While I applaud this decision whole-heartedly, I thought I’d take the extra step and build my own map server on a headless Ubuntu Server box (when I say ‘headless’, I am talking about an eventual goal.  To set this all up, the computer in question will initially need to have a monitor and keyboard plugged into it, as well as an internet connection.  When the dust settles, all that need remain is the internet connection).  The following is a quick walkthrough of the process.  I apologize to any non-Map Dorks who may be reading this.

The process begins, of course, with the installation of Ubuntu 10.04 Server Edition.  Download it, burn it to a disk, and install it on the machine you have chosen to be your server.  Read the screens that come up during installation and make the decisions that are appropriate for your life.  The only one of these I feel compelled to comment on is the software selection:


The above image shows my choices (what the hell – install everything, right?).  Definitely install Samba shares.  Samba lets your Linux server share files with Windows (and other) machines on your network.  Also, be sure to install the OpenSSH server.  You’ll need it.  For our purposes, there’s no real reason to install a print server, and installing a mail server will cause the computer to ask you a slew of configuration questions you’re probably not prepared to answer.  Give it a pass.

During the installation process, you will be asked to give your server a name.  I named mine ‘maps’.  So whenever I write ‘maps’, substitute the name you give your own machine.

Once your installation is complete, you will be asked to log in to your new server (using the username and password you provided during installation), after which you will be presented with a blinking white underscore (_) on a black screen.  This is a command prompt, and you need not fear it.  I’ll walk you through the process of using it to give yourself a better interface with which to communicate with your server.  Hang tight.

Let’s begin the process by taking control of the machine.  Type in “sudo su” (sans quotation marks) and hit ‘enter’.  The server will ask for your password, and after you supply it, you will be able to do pretty much anything you want.  You are now what is sometimes called a superuser, or root.  What it means is that you are now speaking to your computer in terms it cannot ignore.  This circumstance should be treated with respect.  At this stage, your server will erase itself if you tell it to (and it won’t ask you whether or not you’re sure about it – it’ll just go ahead and obey your orders).  So double-check your typing before you hit ‘enter’.
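If you ever lose track of whether you’re still wielding superuser powers, there’s a quick sanity check that’s safe to run at any time:

```shell
# root's user id is always 0, so this tells you whether you are still
# superuser before you type anything destructive
id -u
```

If it prints 0, tread carefully; anything else and you’re back to being a mere mortal.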

Now, let’s get ourselves a GUI (Graphical User Interface).  The server edition we’re using doesn’t have its own GUI, and for good reasons (both resource conservation and security).  Instead, we can install Webmin, a software package that allows us to connect to our server using a web browser on another computer on the same network.  We’ll do this using the command line.  Type in (ignore the bullets before each command.  They are only there to let you know where each new line begins):

And hit ‘enter’ (I’m not going to keep repeating this.  Just assume that hitting ‘enter’ is something you should do after entering each of the dark-colored commands).  Follow this with:

  • sudo dpkg -i webmin-current.deb

And finish it up with:

  • sudo apt-get -f install

Now we have a GUI in place.  If you open a browser on another computer on your network and type: https://maps:10000 into the location bar (remember to replace ‘maps’ with the name you gave your own server), you’ll be asked to supply your username and password, then you’ll see this (you may also be asked to verify Webmin’s certificate, depending on your browser):

Cool, huh?  Don’t get your hopes up, though.  We’re not done with the command line yet (don’t sweat it – I’ll hold your hand along the way.  Besides – you should learn to be comfortable with the command line).  For the moment, though, let’s take a look around the Webmin interface.  There is a lot this program can do, and if you can find the time and determination it would be a good idea to learn your way through it.  For now, you just really need to know a few options.

The first is that the initial page will notify you if any of your packages (Linux for ‘software’) have available updates.  It’s a good idea to take care of them.  If you want, Webmin can be told to do this automatically (on the update page you get to when you click through).

The other important features are both located under the ‘Other’ menu (on the left).  The first is the file manager (which bears a striking resemblance to the Windows File Manager of old), which gives you the ability to explore and modify the file system on your server (this feature runs on Java, so be sure the browser you’re using can handle it).  The other feature is ‘Upload and Download’, which does what it says it does.  Together, these two features give you the ability to put maps on your map server, something I assume you’ll want to do.

Please note the specs on my server (as pictured above).  It’s not terribly different than Don’s Inspiron.  I’m not suggesting you do the same, but it is worth noting that an old machine can handle this job.

Back to the command line.  Let’s get OpenGeo:

Rock and roll.  When your server is done doing what it needs to do, go back to the browser you used for Webmin and type http://maps:8080/dashboard/ into the location bar.  Check out the OpenGeo goodness.

Finally, to make your new server truly headless, you’re going to need some way to login remotely (when you turn the machine on, it won’t do a damn thing until you give it a username and password).  Since you listened to me earlier and installed the OpenSSH server, you’ll be able to do this.  All you need is an SSH client.  If you’re remotely connecting through a Linux machine, chances are you already have one.  In the terminal, just type:

  • ssh <username>@<computer name or IP address>

In my case, this would be:

  • ssh terry@maps

You’ll be asked for a password, and then you’re in (I hear this works the same in OS X, but I cannot confirm it).

If you’re using a Windows machine – or if you just prefer a GUI – you can use PuTTY.  PuTTY is very simple to use (and it comes as an executable.  I love programs that don’t mess with the registry).  Tell it the name of the computer you want to connect to and it opens a console window asking for your username and password.  Tell it what it wants to know.

It’s not a bad idea to install a new, dedicated browser for use with your new server.  I used Safari, but only because I already use Firefox and Chrome for other purposes.  Also, your network will probably give your server a dynamic IP address.  This is not an issue for you, since your network can identify the machine by name.  If you want to (and there are several valid reasons to do so), you can assign a static IP address to your server.  To find out how to do so, just search around a bit at the extraordinary Ubuntu Forums.

Update:  It seems that Webmin provides an easy method to assign a static IP address to your server.  Go to Networking → Network Configuration → Network Interfaces → Activated at Boot.  Click on the name of your active connection, and you will then be able to assign a static IP address just by filling in boxes.
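For the curious, what Webmin (or the Ubuntu Forums route) ends up writing is a stanza in /etc/network/interfaces.  Something along these lines, though the interface name and addresses here are examples – use values that fit your own network:

```text
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```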

Enjoy your map server.  If I can find the time, I’ll write up a post on how I added Flex Viewer, TileMill and TileStream to the server:

And 50 bonus points to anyone who understands the image at the top of this post.

Years back (while I was still in college) I did a few projects that entailed a lot of time studying the small town of Deerfield, MA (often referred to as the most over-studied small town in America).  During this process, my advisor (Bob) introduced me to what was known as The Deerfield Lunch. The Deerfield Lunch was an irregular brown-bag affair, the attendees of which tended to fluctuate for a variety of reasons, the most commonplace being that everyone was quite busy.  You see, the people who sat down to these lunches were all scholars of varying stripe, many of them leaders in their field.

I managed to attend a couple Deerfield Lunches (maybe a few – it was a long time ago).  They were fun and extremely informative.  The first one I attended made the largest impression on me, mainly because it was there that I met Abbott Lowell Cummings.  Those of you who don’t happen to be students of New England architectural history may not have heard of him, but he’s a pretty big dog in the world of Domestic Architecture, or – more accurately – Vernacular Architecture.  He’s also an absolute sweetheart.

Vernacular Architecture is one of those terms that has been defined extensively, but unfortunately in varying ways.  The basic gist tends to hold more or less the same, but the details get muddy at times.  In a nutshell, Vernacular Architecture describes structures that were constructed by someone who had not been formally educated in the discipline of architecture.  This does not mean they were unskilled or unprofessional.  Neither does it mean that they were inexperienced or incompetent.  All it means is that a builder of Vernacular structures has not been formally educated as an architect.  And it most certainly does not mean that Vernacular structures are in any way sub-standard.

To drive the point home, it has been estimated that as much as 90% of the extant structures on the face of the Earth are, in fact, Vernacular Architecture.

Anyway, a slew of conversations I’ve been having lately have brought Abbott to mind.  He’s a big fan of maps in general and GIS in particular, and I think he’d be interested in the ongoing dialogue about data sources.

You know the one I’m talking about.  It usually starts when someone says ‘crowd-sourced’ or ‘authoritative’ and then it tends to degenerate from there.  Pretty soon, everyone’s arguing about data quality and reliability as if they have a direct correlation to the data’s source (they don’t, in case you were wondering).

As far as I can tell, the problems all stem from inadequate and/or ill-defined terminology.  There seems to be a certain amount of determination to doggedly stick to terms that are not appropriate to the task at hand.  While I love the term ‘crowd-sourced’, it has some connotations attached to it that are rather counter-productive.  It’s the crowd, after all.  Barely a step away from the mob. And we all know about the mob – they’re unskilled and unruly.  They’re the Great Unwashed.  They stormed the Bastille, for crying out loud!

All personal interpretations aside, though, ‘crowd-sourced’ doesn’t actually describe what we’re talking about here.  At least, it doesn’t according to the guy who coined the term.  In fact, ‘crowd-sourced’ is far closer to ‘outsourced’ than it is to ‘volunteered’.  When we ‘crowd-source’ a job, we start by looking for volunteers.  Failing that, we hire freelancers.  The waters only get cloudier (no pun intended) when we start adding cost to the definition.

We’re not just talking about free v. not-free data here (and by ‘free’, I’m talking about monetary cost.  Any other form of ‘free’ falls under licensing, which is far too large a discussion to delve into here).  If we were, we could just draw that line.  Neither are we talking about good quality v. poor quality data.  Again, we could simply draw that line and be done with it.  The problem we’re having is that free and not-free, as well as good quality and poor quality, fall on all sides.

The point I’m trying to make is that if we must draw lines (and it appears that there’s no avoiding it), we should do so using terms that do not imply judgment calls in regard to cost and/or quality.

And since everyone seems to be trying to draw lines according to data source, why don’t we go ahead and do just that?  A reasonable attempt was made with Volunteered Geographical Information (VGI), except in that ‘volunteered’ describes the method in which the data is delivered, not the source from whence it comes.  I also think we should avoid narrowing our definitions to geographical data.  Data comes from a variety of sources – applying geography to it is kind of our job, isn’t it?  Besides – I think we Map Dorks tend to put too much emphasis on the ‘G’ in GIS.  GIS should be information that is geographically informed, not information that’s geographically driven.

Another horrible attempt was to label some data as ‘authoritative’ (as opposed to ‘crowd-sourced’).  I hope I don’t have to spell out everything that’s wrong with this one.  ‘Authoritarian’ would probably hit closer to the intent behind this choice.  ‘Official’ would be a better term to describe the source, but it still implies data that is more accurate, correct, or just plain better.

Why don’t we stop worrying about cost (it seems obvious enough) and quality (a judgment call best left to the individual user) and focus our attention on data source?  Where does our data come from (aside from the data we gather ourselves, which need not enter into this conversation)?  While this division usually ends in two categories, I would argue that three is a more appropriate number (and I do not mean to imply that three categories can adequately include all available sources of data.  They can cover a very large percentage of them, though).  Here, presented in an order that is not intended to imply or connote a bloody thing, are the three categories I would separate our data into, including the terms I use to describe them along with a brief definition and examples.

1) Governmental:  This is probably the largest of the three categories.  This describes data that is produced directly by a governmental body, be it national, regional or local.  This data is often free (unless you count the fact that we pay for it through taxes), but not always.  This data is often described as ‘official’ or ‘authoritative’.  While these descriptions are technically correct, they should not be taken to mean that governmental data is necessarily more accurate or ‘correct’ than other sources of data (see the previous post for a brief discussion of this).  The USGS and the oft-mentioned MassGIS are good examples of Governmental data sources.

2) Commercial:  This is data that springs from professional sources.  I use the term ‘commercial’ instead of ‘professional’ because this is not the only category to include data created by professionals.  In fact, professionals contribute enormous amounts of data in all three categories.  What separates this category from the others is that it exists in the private sector and it consists of data that was gathered, created and/or derived for money.  Of course, after the fact much of the data is made freely available for a variety of reasons, not least of which is that government is often the entity paying for the contract (or corporations so large they might as well be governmental bodies.  You know – like Google and Microsoft).  Companies like GeoEye, Digital Globe and Navteq are better known examples of Commercial data sources, but there are many, many more out there.

3) Vernacular:  Vernacular data is data that is provided voluntarily, mostly by private entities, for public consumption.  This is the kind of data that’s provided by people on the ground.  And while most of it is freely accessible to all, a certain amount of knowledge and skill is needed before actually contributing (my mother, for instance, wouldn’t know where to begin).  What separates this kind of data from the others is its self-correcting nature.  Vernacular data tends to be openly editable, which means that anyone who notices errors can correct them.  When this is not the case, the public at large is usually given access to the machinery needed to report errors.  For those of you who don’t know me and have never before read this blog, this comprises my personal favorite source of data, for a variety of reasons (it’s also the one I often refer to as ‘Dork-sourced’).  I choose the word ‘vernacular’ for much the same reason it was chosen to describe structures that have been standing for centuries – because experience and dedication and commitment are often stronger than formal indoctrination.  Examples of these data sources are OpenStreetMap (including the numerous spin-offs that expand upon OSM’s data) and Google Maps (the My Maps aspect, as well as the API).  Other examples (sometimes referred to as ‘passive’ data) include data like geo-tagged photos at Flickr or Panoramio.

The most important thing to remember about all three of these categories is that none of them possess any kind of exclusive claim to accuracy and/or quality.  I’ve chosen the locations of these lines carefully, and I feel they’re safely drawn.  It is important to remember that the lines refer only to source.

In practice, I find that I tend to dance between all three categories, depending on the project at hand and the data I can get my hands on for said project.  I have my personal preferences, and they usually dictate where I start a search for data, but at the end of the day I select my sources based solely upon who offers the best available data for the task I have before me.  I’m sure I’m not alone in this.


In sixty-nine I was twenty-one and I called the road my own
I don’t know when that road turned into the road I’m on

– Jackson Browne

Except in my case it wasn’t ‘69, I was 16, and I’ve never stopped calling the road my own.

See, when I was in High School, I read Kerouac and Kesey, I read about Woody Guthrie and really listened to his music, and I – along with a bunch of my friends – succumbed to the siren song calling us to stand on the highway with our thumbs out.  A month later I returned dirtier, stronger and better than I had ever been.  I came away from the experience with a better understanding of the road, of the United States of America and – most important – a real understanding of freedom.  Travelling just for its own sake (and in a fashion that leaves you at the mercy of fate) – with no money, no real destination and no discrete itinerary – entails a level of freedom that the average person doesn’t really understand.  Reading Kerouac and listening to Woody can net you a glimpse, but you’ll never know the reality of it until you experience it firsthand.

A large part of that freedom is ownership.  When you develop such an intimate relationship with the road, you begin to understand the communal – no, universal – aspect of the road.  The road that – in effect – belongs to everyone.  The road that actually deserves capital letters and will hereafter be given them.  The Road that exists outside of boundaries and municipal spheres of influence (even while passing through them).

This is The Road that OpenStreetMap is about.  The Road that belongs to each and every one of us.  This is why Woody Guthrie would love OSM (although I’m pretty sure Kesey wouldn’t understand it and Kerouac wouldn’t give a rat’s ass about it).  Because a central aspect of OSM is about returning ownership of The Road to us, the people.  You know – the Great Unwashed.

And it is ours, you know.  And not just because we paid for it.

Lately, I’ve been thinking a lot about ownership and how we (the collective, all-inclusive ‘we’) fit into it.  And how ownership differs from possession.  This train of thought started with this discussion.  This post added fuel to the flames.  Now we’re down to the stew my brain has made of it.  You might want to look away.

So the question that bubbles to the surface of my brain stew is:  Do lines in the sand (i.e., political boundaries and/or parcel data) have a place in OSM?

My immediate reaction is to say “no”.  For technical and philosophical reasons.  On a technical level, these are not the sort of data that the average person on the ground is able to provide.  I can easily take my GPS out into the world and accurately record streets, buildings, rivers, railways, bus-stops, parks, bathrooms, pubs, and trees.  These are all concrete physical features that any one of us can locate on the surface of the Earth and record.  More importantly, they’re features that anyone else can check.  Or double-check.  And this – in case you haven’t noticed – is the strength of OSM.  It’s self-correcting.

But we can’t do this with the lines in the sand.  Where – exactly – is the border of your town?  Can you stand on it and take a waypoint?  Sometimes you can.  Most roads have convenient signs telling you when you’re leaving one political sphere of influence and entering into another.  Here in New England, there are often monuments of one sort or another at pertinent locations to mark the dividing line betwixt one town and another.  And these are certainly locations that can be marked – as points.  If you’re not willing to walk the entire border, however, you shouldn’t draw the line in the sand.  Sometimes you can’t just connect the dots.  Actually, most of the time you can’t.

Of course, we often have the option of downloading border data from various (presumably authoritative) governmental sources.  But then we run into the question of whether we have the right to upload that data to OSM.  Personally, I don’t think the payoff is worth the expenditure of neurons necessary to figure it out.  Especially because the ‘authorities’ don’t always agree:

A quick comparison of the counties of western Massachusetts.  The green background with black outlines was provided by the USGS.  The semi-transparent grey foreground with white outlines was provided by MassGIS.  Note the differences.  For the record, the data provided by MassGIS is vastly superior to that provided by the USGS.  Trust me.
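If you want to put a number on that sort of disagreement, one crude approach is to measure the distance between corresponding vertices of the two versions of a border.  Here’s a minimal sketch in Python – the coordinates below are made-up stand-ins, not actual USGS or MassGIS data:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Two versions of the "same" county line, vertex by vertex (hypothetical values)
usgs_like = [(42.30, -73.00), (42.35, -72.95), (42.40, -72.90)]
massgis_like = [(42.3005, -73.0012), (42.3507, -72.9498), (42.4012, -72.8985)]

offsets = [haversine(a[0], a[1], b[0], b[1])
           for a, b in zip(usgs_like, massgis_like)]
print(f"max disagreement: {max(offsets):.0f} m")
```

Disagreements on the order of tens or hundreds of meters between supposedly authoritative sources are entirely plausible – which is exactly the problem with treating any one of them as gospel.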

Parcel data is even worse.  Frankly, I don’t know why anyone would want to include parcel data on any map, but then I’ve had a lot of experience with it and therefore I am cognizant of its uselessness.  Parcel maps are more for bean counting than anything else.  Their primary purpose is to delineate taxation and therefore they tend to conform to a “close is good” standard.  They don’t need to be accurate – tax collectors are quite happy to round up.  Take it from a guy who has had occasion to check a large number of parcel maps against the truth on the ground – they are grossly inaccurate (in these parts, it used to be thought that the ground trumped the map and/or the deed.  I’ve seen many maps that have ‘corrected’ acreages on them.  These days, though, the thinking tends in the other direction.  After all – what you paid for is what the paper says you paid for).

On a purely philosophical level, I feel as though lines in the sand have no place in OSM.  Lines in the sand are all about possession.  They are someone’s way of saying “This land is my land.  It’s not your land”.  In my far from humble opinion, this is pretty much the polar opposite of what OSM is about.  OSM is about taking ownership back from the line-drawers and the so-called authorities.  It’s a declaration that the map belongs to us – all of us – and we’d kind of like it to be an accurate map.  If it’s all the same to you.

But then, Kate had an excellent point (she does that, and is almost never annoying about it):  people tend to want to know where they are.  While I agree that this is, indeed, the case, I don’t think borders need to be a part of the picture.  When people go from Town A to Town B, they like to know where they are when they are actually inside the town proper.  But I question whether the average person cares when they cross over the border between Town A and Town B (except, of course, for the 3-year-old in my back seat who always wants to know.  Lucky for him, the driver’s seat is occupied by a daddy with a very accurate personal GPS in his head).  And while I think there is a place in the world for some borders (as I said before, we need some way to determine who’s responsible for plowing the roads and collecting the garbage), I doubt whether that place is on a ‘People’s Map’ like OSM.

Is there a solution?  I think so, and I think Andy hit upon it pretty soundly in the post linked to above:  labels.  With absolutely no lines whatsoever, people have no difficulty identifying points and areas if a map is sprinkled with labels of judicious size and font.  If you doubt me on this one, just look at this map:

If a couple Hobbits can find their way from the Brandywine River to the bowels of Mount Doom without borders, I think maybe OSM can do without them, as well.

Update:  Just got an email from Barb (Drew’s wife) with details of the upcoming publication of Drew’s book, Red Ink:  Native Americans Picking up the Pen in the Colonial Period.  Feel free to buy it as soon as you can.


At this stage in my life, making a map has become very much like writing or driving – it’s something I largely don’t think about.  These are all tasks that I usually just sit down and do.  Only rarely do I actually have to think about the process involved in what I am doing.  Recently, though, I made a couple of maps that I really thought about, beginning to end.

An old friend (Drew) stopped by for a visit a short while ago (by ‘old friend’ I mean one of those few who has been a good friend through thick and thin since high school and still seems magically blind to my faults).  He asked me to put together maps for a book he’s authored that’s heading for publication.  I assented, of course.  Hell – I would have done so even if we hadn’t been drinking.

Anyway, I anticipated questions about the maps, so I spent a lot of time thinking about the choices I was making and why I was making them.  The questions never materialized (I should have realized that someone who’s been a friend this long would know enough to just get out of my way and let me do what I do), but anticipating them forced me to articulate my creative process in a fashion I haven’t experienced in years.  I thought I’d share it with you all.

Disclaimer:  As a Map Dork, I am almost completely self-taught.  Those of you who have actually taken classes and/or earned degrees in this stuff may find this to be painfully obvious or stupidly off base.  My only response to this is:  “Hey – it works for me.”

As you may have guessed, I spend a lot of my time looking at maps.  All sorts of maps.  These days, I see a lot of ugly maps.  And you know what, kids?  Ugly maps don’t work.  Allow me to explain.

A map is a document.  Like any other document, the point of a map is to convey information to an audience (the audience being whoever is looking at it).  In order for this to occur, the audience actually has to look at the map long enough to absorb whatever information the map is intended to convey.  Since the overwhelming majority of humanity does not particularly enjoy looking at ugly things, chances are an ugly map will fail in its purpose.

Part of the problem (I’m sad to say) is the advent of geographic information systems (GIS).  GIS has taught us that maps are, in fact, collections of data.  While it is a good thing to know this Basic Truth, sometimes it leads to the erroneous conclusion that the map is about the data, rather than the other way around.  When this occurs, bad maps happen.  In an army hygiene film sort of way (“Men – don’t let this happen to you!”), allow me to demonstrate:

This hideous monstrosity is not something I cooked up just to make a point.  It’s a detail of a real map a company actually paid a GIS firm to produce.  It’s so damn ugly it’s almost beautiful.

Simple is good.  I cannot stress that enough.

I’m not entirely clear on the details of Drew’s book (I haven’t read it yet), but I know it has a lot to do with 17th- and 18th-century New England, specifically in regard to English settlements and Native ‘Praying Towns’ therein.  I know this because he needed me to produce maps that depicted these things (a lot of e-commerce has bounced back and forth between here and Texas as of late.  That’s right – Texas.  Poor Drew and his family are Liberals In Exile).

Luckily, the sort of maps Drew wanted are the sort I like to make.  Maps of landscapes.  Coming to GIS through archaeology and history, I’m particularly drawn to maps that depict things as they are (or were) on the ground.  It may (but probably won’t) surprise you to know that maps do not always depict a physical landscape.  This is as it should be – not all maps are trying to convey physical information (such as maps of the internet or mind maps), while others only loosely refer to actual geography (like the annoying red/blue maps we Americans are stuffed into every four years or the iconic London Tube map).  Personally, I’m most comfortable when dealing with the actual face of the planet.

Drew wanted maps of two specific landscapes: one of the Boston/Cape Cod/Rhode Island area (Map 1) and one of the eastern New York/western Massachusetts and Connecticut area (Map 2).  The only restrictions were that the maps needed to be black and white (greyscale, technically) and that they had to be easily readable at a size typical of a scholarly work (let’s say 8” x 6”).  I approached them the same way I approach any area I want to map.  I first looked for the Big Reality.

Every location has the Big Reality.  It is the single enormity that pervades life in a specific location.  The Big Reality is often a geographic feature of some sort (a volcano, a river, a mountain range, a desert, an ocean), but oft-times is of a different nature (making paper, the hostile neighbor to the south, winter, tourism, growing corn, jazz).  Every place has the Big Reality, though (some have more than one).  It is huge, and it affects almost all aspects of life, but usually not overwhelmingly so.  Generally the Big Reality runs in the background, as it were.

Living in close proximity to both areas (as well as having gotten a degree in history from a New England university) made determining the Big Realities for both maps a cakewalk.  For Map 1, the ocean.  Map 2, the mountains.

The problem with the Big Reality is that it can be rather tricky to map (this is especially true if the Big Reality is less tangible than a geographical feature).  It needs to be pervasive but not overwhelming.  Unmistakably present, but not hitting you over the head with its presence.  In a word, the Big Reality should be the background.  In Map 1, this was achieved with a simple drop shadow.  In Map 2 with a slight bump.  It pays to spend a lot of time thinking about a map before you begin to actually produce it.

Both maps needed clear distinctions between British settlements and Native ones.  The looming pitfall here was the lure to overdo it.  We often underestimate the human brain’s ability to notice subtle differences.  For Drew’s maps, I made the distinctions through slightly different symbology and separate fonts.

Which brings me to another point I cannot stress enough:  choose your fonts with care.

For these maps, I used two fonts – one for the English settlements and one for the Native settlements.  For the English settlements I chose a font I use often: Souvenir.  Souvenir has a few things going for it – it’s clean, it’s easy to read and (important for these maps) it has serifs.  In this case Souvenir has an added bonus that caused me to use it for the titles and other miscellany.  It is a font used regularly by the U.S. Geological Survey on their quadrangle maps.  Because of this, when many Americans see this font they automatically think ‘map’.  Also, it serves to lend a certain air of legitimacy to a map, which never hurts.

The second font I used to label Native settlements and spheres of influence.  To make a clear distinction from the British settlements, I wanted a font without serifs (sans-serif).  I also wanted a font that was more organic/natural looking.

A quick aside – many white people have funny ideas about Native American peoples (or any other aboriginal group, for that matter).  There is this tendency to think of them as Tolkien’s Wood Elves – living in an idyllic state, completely in tune with the natural world around them, at peace with all living things.

Crap.  This way of thinking presumes the existence of the ‘Noble Savage’, which is, in fact, a myth.  The truth is that Native Americans were and are human beings, and you know perfectly well that humans tend to be annoying, stupid and downright mean.  Put another way, there is absolutely nothing, then or now, stopping any particular Native from being an asshole.  In fact, if you compiled a list of the meanest people in human history, the Mohawks would clock in pretty high on it.

However, the simple truth is that most Native groups at the time lived considerably ‘greener’ lives than their European contemporaries.  Even more important – they did so by design.  In general, Native American peoples did not view the natural world as something that needed to be beaten into submission.

What I was really looking for was a font that would appear more as a part of the landscape than stamped over it.  Something rounder and smoother.  There are a ridiculous number of fonts to choose from, but my decision became easy when I stumbled across a font called Pigiarniq.  It’s a font adopted by the Government of Nunavut that allows for all of their spoken languages to be represented uniformly.  This isn’t the only Native font out there, but it’s the only one I’ve seen that includes English characters.

Yes – pretty much any clean, sans-serif font would have gotten the job done.  But using Pigiarniq has style.  Don’t underestimate it.

The ‘natural’ feel of the Native labels was further enhanced on Map 2 by the added bump.  Because I added the labels before applying the bump, they take on a subtle appearance of being ‘draped’ over the landscape, as opposed to the British labels, which were applied flat after the fact.

At the end of the day, I have to say I’m pretty happy with the way these maps turned out.  Drew is very happy, which is even more important.  Here’s a detail from each:


Appendix: The nuts and bolts (for any of you who care):

The data used for these maps came mostly from just a few sources – MassGIS, USGS and Google Earth (because I am who I am, I endeavored to place the English settlements in as historically accurate a manner as possible.  The easiest way to do this was to locate the town hall, town common, or original church.  Searching for a town hall, though, often gets you directed to a structure that was built in the 1990s.  I got around this by using Street View in Google Earth to get a look at the building in question.  In New England, it’s a pretty simple matter to identify the structure that’s three or four centuries old [I told you I found a use for Street View, Bill]).  Other data came out of Drew’s brain, based on extensive research.  I filled in a gap or two myself.  The projection used was NAD83 Stateplane.  Software used was QuantumGIS, Bryce and Photoshop.  No British or Native American settlements were harmed in the making of these maps.
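For the curious: ‘NAD83 Stateplane’ in this part of the world generally means the Massachusetts Mainland zone (EPSG:26986), which is a Lambert Conformal Conic projection.  As a peek at what the GIS software is doing under the hood, here’s a sketch of the two-standard-parallel LCC forward projection using that zone’s parameters as I understand them from the EPSG definition – a simplified stand-in for a real projection library, not the code I actually ran:

```python
from math import radians, sin, cos, tan, log, pi, sqrt

# GRS80 ellipsoid
A = 6378137.0
F = 1 / 298.257222101
E2 = 2 * F - F * F
E = sqrt(E2)

# EPSG:26986 (NAD83 / Massachusetts Mainland) parameters
LAT1, LAT2 = radians(41.7166666667), radians(42.6833333333)  # standard parallels
LAT0, LON0 = radians(41.0), radians(-71.5)                   # grid origin
FE, FN = 200000.0, 750000.0                                  # false easting/northing (m)

def _m(phi):
    # Radius-of-parallel factor on the ellipsoid
    return cos(phi) / sqrt(1 - E2 * sin(phi) ** 2)

def _t(phi):
    # Isometric latitude function for the conformal mapping
    return tan(pi / 4 - phi / 2) / ((1 - E * sin(phi)) / (1 + E * sin(phi))) ** (E / 2)

# Cone constant and scaling derived from the two standard parallels
N = (log(_m(LAT1)) - log(_m(LAT2))) / (log(_t(LAT1)) - log(_t(LAT2)))
BIG_F = _m(LAT1) / (N * _t(LAT1) ** N)
RHO0 = A * BIG_F * _t(LAT0) ** N

def to_stateplane(lat_deg, lon_deg):
    """Forward Lambert Conformal Conic: NAD83 lat/lon -> easting/northing in meters."""
    phi, lam = radians(lat_deg), radians(lon_deg)
    rho = A * BIG_F * _t(phi) ** N
    theta = N * (lam - LON0)
    return FE + rho * sin(theta), FN + RHO0 - rho * cos(theta)

x, y = to_stateplane(42.36, -71.06)  # roughly downtown Boston
print(f"easting {x:.0f} m, northing {y:.0f} m")
```

In practice you’d just let QGIS (or PROJ/pyproj, or ogr2ogr) handle the reprojection, but it’s nice to know the math above is pretty much all there is to it.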

