
There was a time – not too long ago – when a new social media contender appeared on the horizon.  It was supposed to be the first real threat to Facebook, and it was called Diaspora (I’m not really sure what they were thinking when they chose the name.  While the word technically can simply mean a scattering of people, its common usage implies a scattering that takes place against the people’s will).

At first, Diaspora got a lot of press.  The guys proposing it hyped it as a privacy-minded alternative to Facebook – a social network that wouldn’t sell off our private data to the highest bidder.  This proposal was well received.  The developers asked the world for money for startup costs via Kickstarter.  They initially asked for $10,000.  They ended up receiving more than $200,000.  All this without writing a single line of code.

I watched Diaspora with interest, as it sounded like a fine idea to me.  It shouldn’t come as a surprise to anyone that I thought the world could use an alternative to Facebook.  I was also intrigued by the fact that Diaspora intended their code (when they finally wrote it) to be open source, thereby allowing us to run it ourselves on our own servers if we so desired.

But then Google+ hit the interwebs.  It was immediately given the title of Facebook killer, and it seemed like everybody was talking about G+ for weeks.

And nobody – but nobody – seemed to be talking about Diaspora anymore.  I even asked about it a couple of times, at Google+ as well as at Twitter, but no one seemed to have heard anything from or about Diaspora since Google+ launched.  As far as I could tell, the project seemed to be pretty much dead in the water.

Until Diaspora reappeared, just a couple weeks ago.  I first noticed activity on the official Diaspora Twitter account, shortly after which I received an email inviting me to join in on the beta.  Of course, I did so.

And I have been greatly disappointed.  Not by the software but by its user base.  See, Diaspora had a real shot at the limelight, and if they had just gotten off the pot after they received twenty times the funding they asked for, they may have given Facebook a run for its money.  But Google beat them to the punch, and it was a serious beating.

Fact is, the overwhelming majority of Facebook users are really quite happy with Facebook, warts and all.  When it comes to all the various privacy issues, the average user just doesn’t give a crap.  And for most of those who do give a crap, Google+ serves as a perfectly adequate alternative.

So when Diaspora finally hit the scene, they were no longer the only alternative to Facebook.  In fact, they were now just a feature-poor substitute offered by a relatively unknown company with comparatively no resources at their disposal.

And their pickings were pretty slim.  Of the many, many people who actually want to participate in some form of social network, Facebook had already sewn up the majority of the pie.  Of the remainder, Google+ met the needs and/or desires of all but the most rabidly paranoid of the tinfoil hat-wearing crowd, who (sadly) have flocked to Diaspora and claimed it as their own.

As you may have guessed, finding a rational discussion at Diaspora is virtually impossible.  Like the previously mentioned Quora, Diaspora’s narrow and esoteric user base has led to Rule By Douchebaggerati.  I have tried a few times to engage people at Diaspora, and the universal response has been attempts to pick fights with me.  Kind of sad and laughable at the same time, especially the latest instance.

Unsurprisingly, a fair amount of the ‘discussion’ at Diaspora revolves around Facebook- and/or Google-bashing.  My latest exposure to extreme douchebaggery occurred when a guy claimed to ‘know’ of Google’s evil, due to the vast amount of ‘research’ he’s done on the subject.  I politely (really – I worked at it) asked him to share his research.

I got no response from the Google scholar, but I did get numerous responses from the rest of the tinfoil hat-wearing crowd.  Their eventual consensus was (I’m not kidding) that the ‘truth’ about Google is only meaningful to those who do the research themselves.   Seriously.  One of them even went so far as to reference a series of ‘scholarly’ works on the subject of research and how it only really ‘works’ when we do it for ourselves (I’m not really sure how this works.  How far back along the research trail do we have to go ourselves?  Should I start each day by inventing language?).  So it’s not that they can’t back up their claims, but that they choose not to.  For my own good.  And they were quite happy to explain ad nauseam the reasons for this choice.  I don’t know if they’re intensely dumb or if they just think I am.

Which got me to thinking (about Google, that is).  I have, in fact, wondered about Google.  About whether or not it is evil.  My initial assumption was that it is.  I mean – it stands to reason, doesn’t it?  It’s an enormous, ridiculously wealthy and powerful corporation – how could it not be evil?

Being the kind of guy I am, though, I took the time to look into it.  I figured an enormous, wealthy, powerful evil empire would leave some sort of conclusive, verifiable proof of evildoings.  So I looked for them.  And I didn’t find any.  So I looked harder.  And I still didn’t find any.  So I looked even harder.  And still nothing.

What I found was a company that has made a fortune off of advertising.  One way in which they have done this is by gathering data about their users (us) and selling it to the highest bidder.  As far as I can tell, Google has never tried to hide this.  And while the data they gather (data we freely hand over to them, by the way) is – technically – private data, it’s not private in the way most people think.  Google doesn’t sell our account numbers to anyone.  Nor do they sell our email addresses.  In fact, they don’t sell anything that could be called PII (personally identifiable information).  Not even here in Massachusetts, the home of insanely stringent PII legislation.  The kind of data Google gathers and sells about us is data that we generate but that we don’t generally have a use for ourselves.

Years ago, my mother was a regular participant in the Nielsen Ratings.  Every so often, she would get a package in the mail from Nielsen.  It would contain some forms, a pencil and a ridiculous fee (I’m pretty sure it was $1).  For the following couple of weeks, she would religiously (and painfully honestly) record every television program watched in our household.  When the forms were completed, she would send them back to Nielsen.  The idea behind this was to find out what shows people were actually watching so that programming and advertising dollars could be spent appropriately.  I don’t know if the system actually worked, but it came close enough to make all involved happy.

This is the sort of data Google gathers.  The kind of data advertisers really care about, but that is not terribly meaningful to most of us average users.

And Google doesn’t force this upon us.  If you don’t want to give them your personal data, all you have to do is refrain from using their products and services.  There are other search engines out there.  There are other email providers (actually, if you want to use Gmail but don’t want Google to gather your personal information while you do so, all you have to do is pay for it.  It’s the free version that gets paid for through data).  On the other hand, if you’re willing to let Google gather and use your personal data, all those products and services are the payment you receive for the deal.

The other thing I found in my travels is scores – no, hundreds (possibly even thousands) of people who know that Google is evil.  They know because they’ve seen proof.  They’ve walked the walk, they’ve done the research, and they know – beyond doubt – that Google is The Evil Empire.  And every time I have encountered one of these people I have made the same simple request:  that they share this knowledge with me.

Not a single one of them has done so.  In fact, most of them get quite angry as part of the process of not doing so.  Usually I get told how painfully obvious it is – how the universe is practically littered with the proof of it – but no one has actually gone so far as to show me the proof they profess to have, or point me to the proof they profess to have seen.  Other times (like the recent one mentioned above) I get lengthy justifications as to why they are not sharing what they know (always that they are not – never that they cannot.  An important distinction).

At first I wondered if Google was just that good at covering up their evildoing.  They’d have to be better at it than the CIA (who’ve been eating and drinking cover-up for generations), but that wouldn’t be impossible.  Just unlikely.

But that didn’t make sense in light of all the people who have seen evidence of Google’s wrongdoing (they have!  Really!).  Instead, it would mean that of all those people, not one of them was willing to put their money where their mouth is (I mean, they’re all able to, right?  It’s that they’re not willing to).  Of all those people who know how evil Google is, not a single one of them is willing to produce any real proof of it.  Not a single conclusive, verifiable piece of evidence.  Not one.

Of course, the other possibility is that they’re all a bunch of asshats and Google is just a legitimate business.


Note: This is the fourth and last part in a series on building your own home-brewed map server.  It is advisable to read the previous installments, found here, here and here.

This is the point at which the tutorial-like flavor of this series breaks down, for a variety of reasons.  From here on, we’ll be dealing with divergent variables that cannot be easily addressed.  We’ll discuss them each as they come up.  Suffice to say that from now on I can only detail the steps I have taken.  Any steps you take will depend on your equipment and circumstances.

Having finished putting my server together, I decided it was time to give it a face to show the world.  Before I could do so, however, I had to give it a more substantial connection to that world, a process that began with establishing a dedicated address (domain).  The most common method of achieving this is to simply purchase one (i.e., http://www.myinternetaddress.com).  There are a variety of web hosts you can turn to for this.  I cannot personally recommend any of them (due only to personal ignorance).

For the purpose of this exercise I didn’t feel I needed a whole lot (all I really wanted was an address, since I intended to host everything myself), so I went to DynDNS and created a free account (thanks, Don).  DynDNS set me up with a personal address, and the use of their updater makes it work for dynamic addresses (which most routers provide).  The web site does a decent job of walking you through the process, including setting up port forwarding in your router.
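(A side note for the Linux-inclined: if you’d rather not run DynDNS’s own updater, Ubuntu’s ddclient package can handle the same job.  Below is a rough sketch of what /etc/ddclient.conf might look like – the hostname and credentials are placeholders, and the details come from ddclient’s documentation rather than my own setup, so double-check them against your account:)

sudo apt-get install ddclient

# /etc/ddclient.conf – update a DynDNS hostname via the dyndns2 protocol
protocol=dyndns2
use=web, web=checkip.dyndns.org/, web-skip='IP Address'
server=members.dyndns.org
login=your-dyndns-username
password='your-dyndns-password'
yourname.dyndns.org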

Exactly how to go about forwarding a port is particular to the router in question, so I won’t go into it in detail.  I will say that it is not something that should be approached lightly.  Port forwarding can pose certain security risks.  It’s a very good idea to do some research into the process before you dabble in it.
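(Once you actually have something listening on that port – we’ll get there soon enough – a quick sanity check is to hit your address from a machine outside your network.  Something like the following, with your own address substituted in, should come back with HTTP headers rather than a timeout:)

curl -I http://yourname.dyndns.org/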

Once I had an address and a port through which to use it, I had to choose a front end for my server.  I was tempted to go with Drupal, mainly because it has the best documented means with which to serve up TileStream, but also because I’ve been meaning to learn my way around Drupal for some time now.

In the end, I realized that my little server, despite being almost Thomas-like in its dedication and willingness to serve, just doesn’t have the cojones necessary for serving those kinds of tiles.  Truth is, if I wanted my own custom base map tiles in an enterprise environment, I’d purchase MapBox’s TileStream hosting rather than serving it myself, anyway. (Umm… I really couldn’t have been more wrong about this.)

And so I decided learning Drupal could wait for another day.  Instead I chose to go with WordPress, for several reasons.  I’m reasonably familiar with it, it’s a solid, well-constructed application, it’s extremely customizable, and it has an enormous, dedicated user base who have written huge amounts of themes and plugins.  And while WordPress was originally intended to be a blogging platform (and remains one of the best), it’s easy enough to reconfigure it for other purposes.

Installing WordPress is a snap since we included LAMP (Linux, Apache, MySQL and PHP) while initially installing Ubuntu Server Edition.  In the terminal, type:

sudo apt-get install wordpress

Let it do its thing.  When it asks if you want to continue, hit ‘y’.  When it gets back to the prompt, type:

sudo ln -s /usr/share/wordpress /var/www/wordpress

sudo bash /usr/share/doc/wordpress/examples/setup-mysql -n wordpress mysite.com

But replace ‘mysite.com’ with the address you created at DynDNS.
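So if – hypothetically – DynDNS had given you mymaps.dyndns.org, the command would look like this:

sudo bash /usr/share/doc/wordpress/examples/setup-mysql -n wordpress mymaps.dyndns.org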

Back to Webmin.  On the sidebar menu, click on Servers→Apache Webserver→Virtual Server.  Scroll down to the bottom.  Leave the Address at ‘Any’.  Specify the port you configured your router to forward (should be port 80, the default for HTTP).  Set the Document Root by browsing to  /var/www/wordpress.  Specify the Server Name as the address you created at DynDNS (the full address – include http://).  Stop and start Apache for good measure.
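(For the curious: all that clicking amounts to Webmin writing a stanza along these lines into the Apache configuration.  A rough sketch, again using a hypothetical address:)

# Roughly what Webmin writes on your behalf (hostname is a stand-in)
<VirtualHost *:80>
    ServerName mymaps.dyndns.org
    DocumentRoot /var/www/wordpress
</VirtualHost>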

Now you should be able to point your browser to your DynDNS-created address (hereafter referred to as your address) to complete your configuration of WordPress.  You will have to make many decisions.  Choose wisely.

Once you have WordPress tweaked to your satisfaction, you’re probably going to want to add some web map functionality to it.  First and easiest is Flex Viewer.  All you have to do is move the ‘flexviewer’ folder from /var/www to  /usr/share/wordpress.  The file manager in Webmin can do this easily.  Once you’re done, placing a Flex Viewer map on a page looks something like this:

<iframe style="border: none;" height="400" width="600" src="http://your address/flexviewer/index.html"></iframe>

Straightforward HTML.  Nothing fancy, once all the machinery is in place.

Which gets a little trickier for GeoServer.  By design, GeoServer only runs locally (localhost).  In order to send GeoServer maps out to the universe at large, we have to do so through a proxy.  This has to be configured in Apache.  Luckily, Webmin makes it a relatively painless process.

We’ll start by enabling the proxy module in Apache.  Click on Servers→Apache Webserver→Global Configuration→Configure Apache Modules.  Click the checkboxes next to ‘proxy’ and ‘proxy_http’, then click on the ‘Enable Selected Modules’ button at the bottom.  When you return to the Apache start page, click on ‘Apply Changes’ in the top right-hand corner.
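(If you ever find yourself needing to do this without Webmin, Ubuntu ships helper scripts for exactly this purpose.  The equivalent from the terminal would be:)

sudo a2enmod proxy proxy_http

sudo /etc/init.d/apache2 restart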

Having done that, we can point everything in the right direction.  Go to Servers→Apache Webserver→Virtual Server→Aliases and Redirects.  Scroll to the bottom and fill in the boxes thus:

[Screenshot: the proxy alias settings, as entered in Webmin]

Your server will have a name other than maps.  Most likely, it will be localhost.  In any case, you can find it by looking in the location bar when you access the OpenGeo Suite.  Apply the changes again, and you might as well stop Apache and restart it for good measure.
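(In Apache terms, those boxes boil down to a few ProxyPass directives.  Here’s a sketch of what ends up in the virtual server’s configuration – it assumes the OpenGeo Suite’s default port and paths, so adjust to match what you see in your own location bar:)

# Assumes the Suite's defaults: GeoServer and GeoExplorer on localhost:8080
ProxyPass /geoserver http://localhost:8080/geoserver
ProxyPassReverse /geoserver http://localhost:8080/geoserver
ProxyPass /geoexplorer http://localhost:8080/geoexplorer
ProxyPassReverse /geoexplorer http://localhost:8080/geoexplorer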

You can now configure and publish maps through GeoExplorer.  The only caveat is that GeoExplorer will give you code that needs a minor change.  It will use a local address (i.e., localhost:8080) that needs to be updated.  Example:

<iframe style="border: none;" height="400" width="600" src="http://localhost:8080/geoexplorer/viewer#maps/1"></iframe>

changes to

<iframe style="border: none;" height="400" width="600" src="http://your address/geoexplorer/viewer#maps/1"></iframe>

And that – as they say – is that.  Much of this has entailed individual choices and therefore leaves a lot of room for variation, but I think we’ve covered enough ground to get you up and running.  If you want to see my end result, you can find it at:

The Monster Fun Home Map Server Webby Thing

I won’t make any promises as to how long I will keep it up and running, but it will be there for a short while, at least.  Keep in mind that it is a work in progress.  So be nice.

Update:  My apologies to anyone who may give a crap, but I have pulled the plug on the Webby Thing.  It was really just a showpiece, and I just couldn’t seem to find the time to maintain it properly.  And frankly, I have better uses for the server.  Sorry.

Note: This is the third part in a series on building your own home-brewed map server.  It is advisable to read the previous installments, found here and here.

Last time, I walked you through installing TileMill, and I promised a similar treatment for TileStream and Flex Viewer.  I am a man of my word, so here we go.  Don’t worry – this will be easy in comparison to what we’ve already accomplished.

We’ll start with TileStream, simply because we’re going to have to avail ourselves of the command line.  Once again, you can either plug a keyboard and monitor into your server or use whatever SSH client you’ve been using thus far.

Once you’re in the terminal, take control again (‘sudo su’).  For your TileStream installation, you can follow the installation instructions as presented, except for one detail:  it’s assumed we already have an application we don’t have.  Let’s correct that:

sudo apt-get install git

And then proceed with the installation (don’t forget to hit ‘enter’ after each command):

sudo apt-get install curl build-essential libssl-dev libsqlite3-0 libsqlite3-dev

git clone -b master-ndistro git://github.com/mapbox/tilestream.git

cd tilestream

./ndistro

And that’s that (TileStream, even more than TileMill, will throw up errors during the installation.  None of them should stop the process, though, so you can safely ignore them).  Like TileMill, TileStream needs to be started before it can be accessed in a browser.  Since the plan is to run the server headless, let’s set this up in Webmin in a fashion similar to the one employed for TileMill.

Back to Webmin, again open the ‘Other’ menu, and this time click on ‘Custom Commands’.  We’ll create a new Custom Command and configure it as follows (substitute your name for mine as appropriate):

[Screenshot: the TileStream custom command configuration in Webmin]

Save it, and you will now have a Custom Command button to use for starting TileStream (we didn’t do this for TileMill because we cannot.  The Webmin Custom Command function simply won’t accept it.  I think it has to do with the nature of the command.  I think the ‘./’ in the TileMill command confuses it).

At this point, TileStream is fully functional, but it doesn’t yet have a tileset to work with.  Using the same browser with which you just accessed Webmin, go here to download one.  Scroll down the page, pick a tileset you like and click on it to proceed to the download page (I picked World Light).  Download the file to wherever you please.  Once you have the file, go back to Webmin and open the ‘Other’ menu again.  Click on ‘Upload and Download’, then select the ‘Upload to Server’ tab.  Click on one of the buttons labeled ‘Choose File’, then browse to the tileset file you downloaded.  For ‘File or directory to upload to’, click the button and browse your way to /home/terry/tilestream/tiles (by now, you should know you’re not ‘terry’).  Click the ‘Upload’  button.
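(If you’re doing this from a Linux or Mac machine and you’d rather skip the browser round-trip, scp will drop the file straight into place.  The filename here is hypothetical – use whatever tileset you downloaded, along with your own username and server name:)

scp world-light.mbtiles terry@maps:/home/terry/tilestream/tiles/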

Once your tileset is finished uploading, you can point the browser to http://maps:8888 (yeah, yeah – not ‘maps’) to access TileStream.  Enjoy:

[Screenshot: TileStream up and running in the browser]

Our last order of business is Flex Viewer (otherwise known as ‘ArcGIS Viewer for Flex’).  This is the easiest of the lot, mainly because it doesn’t actually have to be installed.  Still using the same browser, go to the download page (you’ll need an ESRI Global Account.  If you don’t have one, create one), agree to the terms and download the current version (again – download it to wherever you please).  Once you have the package, use Webmin to upload it to the server.  This time you’ll want to upload the file to /var/www and you’ll want to check the ‘Yes’ button adjacent to ‘Extract archive or compressed files?’

And you’re in.  Point the browser to http://maps/flexviewer/ (you know the drill) and play with your new toy:

[Screenshot: Flex Viewer up and running in the browser]

You can see I have customized the flex viewer.  You should do so as well (it’s designed for it, after all).  Open the file manager in Webmin (the ‘Other’ menu again) and navigate to /var/www/flexviewer.  Select config.xml, then click the ‘Edit’ button on the toolbar.  The rest is up to you.

* * * * *

So now you have a headless Ubuntu map server up and running, and the question you are probably asking yourself is:  “Do I really need all this stuff running in my server?”  The answer is, of course, ‘no’.  The point of this exercise was to learn a thing or two.  If you’ve actually been following along and have these applications running in your own machine, you are now in a good position to poke around for a while to figure out what sort of server you’d like to run.

For instance, there’s no real reason to run TileMill on a server.  TileMill doesn’t serve tiles, it fires them.  Therefore it’s probably not the best idea to be eating up your server’s resources with TileMill (and it seriously devours resources).  The server doesn’t have a use for the tiles until they’re done being fired, at which point TileStream is the tool for the job.

That said, there’s no compelling reason why you couldn’t run TileMill on your server.  If you’d rather not commit another machine to the task (and if you’re not in any kind of hurry), why not give the job to the server?  It’ll take it a while, but it will get the tiles fired (if your server is an older machine like mine, I would strongly advise you to fire your tiles in sections, then put them together later.  I suggest firing them one zoom level at a time and combining them with SQLite Compare).

Flex Viewer and the OpenGeo Suite don’t often go together, but there’s no reason why they can’t.  Flex Viewer can serve up layers delivered via WMS – there’s nothing to say GeoServer can’t provide that service.  They are, however, very different applications, with vastly different capabilities, strengths and weaknesses.  They also have a very different ‘feel’, and we should never discount the importance of aesthetics in the decision making process.

A final – and very important – consideration in the final configuration of our home server is the nature of the face it presents to the world.  In order for a server to serve, it must connect to and communicate with the world at large.  This means some kind of front end, the nature of which will influence at least some of our choices.

Which brings us neatly to the next post.  See you there.

Note:  This is the second part in a series on building your own home-brewed map server (I would tell you how many installments the series will entail, but I won’t pretend to have really thought this through.  There will be at least one more.  Probably two).  It assumes you have read the previous installment.  You have been warned.

Last time, I walked you through setting up your very own headless map server using only Free and Open Source Software.  Now, I’m going to show you how to trick it out with a few extra web mapping goodies.  The installation process will be easiest if you re-attach a ‘head’ to your server (i.e., a monitor and keyboard), so go ahead and do that before we begin (alternately, if you’re using PuTTY to access your headless server, you can use it for this purpose).

At the end of my last post, I showed you all a screenshot of my server running TileMill, TileStream and Flex Viewer, and I made a semi-promise to write something up about it.  So here we are.

I tend toward a masochistic approach to most undertakings in my life, and this one will not deviate from that course.  Whenever I am faced with a series of tasks that need completion, I rank them in decreasing order of difficulty and unpleasantness, and I attack them in that order.  In other words, I work from the most demanding to the least troublesome.

I originally intended to write a single post covering TileMill, TileStream and Flex Viewer, but a short way into this post I realized that I had to split it into two pieces.  The next post will cover TileStream and Flex Viewer.  This one will get you through TileMill.

TileMill can be a bear to install – not because you need catlike reflexes or forbidden knowledge or crazy computer skills – but simply because there are many steps, which translate into lots of room for error.  A quick glance at TileMill’s installation instructions may seem a bit daunting (especially if you’re new to this kind of thing):

Install build requirements:

# Mapnik dependencies 
sudo apt-get install -y g++ cpp \ 
libboost-filesystem1.42-dev \ 
libboost-iostreams1.42-dev libboost-program-options1.42-dev \ 
libboost-python1.42-dev libboost-regex1.42-dev \ 
libboost-system1.42-dev libboost-thread1.42-dev \ 
python-dev libxml2 libxml2-dev \ 
libfreetype6 libfreetype6-dev \ 
libjpeg62 libjpeg62-dev \ 
libltdl7 libltdl-dev \ 
libpng12-0 libpng12-dev \ 
libgeotiff-dev libtiff4 libtiff4-dev libtiffxx0c2 \ 
libcairo2 libcairo2-dev python-cairo python-cairo-dev \ 
libcairomm-1.0-1 libcairomm-1.0-dev \ 
ttf-unifont ttf-dejavu ttf-dejavu-core ttf-dejavu-extra \ 
subversion build-essential python-nose 

# Mapnik plugin dependencies 
sudo apt-get install libgdal1-dev python-gdal libgdal1-dev gdal-bin \ 
postgresql-8.4 postgresql-server-dev-8.4 postgresql-contrib-8.4 postgresql-8.4-postgis \ 
libsqlite3-0 libsqlite3-dev  

# TileMill dependencies 
sudo apt-get install libzip1 libzip-dev curl 

Install mapnik from source:

svn checkout -r 2638 http://svn.mapnik.org/trunk mapnik 
cd mapnik
python scons/scons.py configure INPUT_PLUGINS=shape,ogr,gdal
python scons/scons.py 
sudo python scons/scons.py install 
sudo ldconfig 

Download and unpack TileMill. Build & install:

cd tilemill
./ndistro

It’s not as scary as it looks (the color-coding is my doing, to make it easy to differentiate things).  The only circumstance that makes this particular process difficult is that the author of these instructions assumes we know a thing or two about Linux and the command line.

Let’s start at the top, with the first ‘paragraph’, which begins: # Mapnik dependencies.  Translation:  We will now proceed to install all the little tools, utilities, accessories and such-rot that Mapnik (a necessary and desirable program) needs to function (i.e., “dependencies”).

It is assumed that we know the entire ‘paragraph’ is one command, and that the backslashes (\) are there to escape the line breaks – they are not part of the commands themselves.  It is also assumed that we will notice any errors that may occur during this process, know whether we need concern ourselves with them and (if so) be capable of correcting them.

Let’s see what we can do about this, shall we?  Since we’re installing this on our server and actually typing in the commands (rather than copying and pasting the whole thing), we have the luxury of slicing it up into bite-sized pieces.  This way the process becomes much less daunting, and it makes it easier for us to correct any errors that crop up along the way.

We’ll start by taking control.  Type “sudo su” (sans quotation marks), then provide your password.  Now we can proceed to install everything, choosing commands of a size we’re comfortable with.  I found that doing it one line at a time works pretty smoothly.  Two important points here:  start every command with “sudo apt-get install” (not just the first line) and don’t include the backslashes (unless you’re installing more than one line at a time).  I would therefore type in the first two lines like this (don’t forget to hit ‘enter’ at the end of each command):

sudo apt-get install -y g++ cpp

sudo apt-get install libboost-filesystem1.42-dev

You get the idea.  Continue along in this fashion until you have installed all the necessary dependencies for Mapnik.  I strongly recommend doing them all in one sitting.  It just makes it easier to keep track of what has and hasn’t been installed.

At this stage of the game, any errors you encounter will most likely be spelling errors.  Your computer will let you know when you mistype, usually through the expedient of informing you that it couldn’t find the package you requested.  When this occurs, just double-check your spelling (hitting the ‘up’ cursor key at the command prompt will cause the computer to repeat your last command.  You can then use the cursors to correct the error).  At certain points in the installation process, your server will inform you of disk space consumption and ask you to confirm an install (in the form of yes/no).  Hitting ‘y’ will keep the process moving along.

While packages install in your system, slews of code will fly by on your screen, far too fast to read or comprehend.  Just watch it go by and feel your Geek Cred grow.

By now you should have developed enough Dorkish confidence to have a go at # Mapnik plugin dependencies and # TileMill dependencies.  Have at it.

When you’re done, move on to installing Mapnik from source.  Each line of this section is an individual command that should be followed by ‘enter’.  The first line will throw up your first real error.  Simply paying attention to your server and following the instructions it provides will fix the problem (in case you missed it, the error occurred because you haven’t installed Subversion, an application you attempted to use by typing the command ‘svn’.  Easily fixed by typing sudo apt-get install subversion).  You can then re-type the first line and proceed onward with the installation.  When you get to the scons commands, you will learn a thing or two about patience.  Wait it out.  It will finish eventually.

Now we should be ready to do what we came here to do:  install TileMill.  Unfortunately, TileMill’s installation instructions aren’t very helpful at this point for a headless installation.  All they tell us is to “Download and unpack TileMill”.  There’s a button further up TileMill’s installation page for the purpose of the ‘download’ part of this, but it’s not very helpful for our situation.  We could use Webmin to manage this, but what the hell – let your Geek Flag fly (later on, we’ll use Webmin to install Flex Viewer, so you’ll get a chance to see the process anyway).

Our installation of Mapnik left us within the Mapnik directory, so let’s start by returning to the home directory:

cd ~

Then we can download TileMill:

wget https://github.com/mapbox/tilemill/zipball/0.1.4 --no-check-certificate

Now let’s check to confirm the name of the file we need to unpack:

dir

This command will return a list of everything in your current directory (in this case, the home directory).  Amongst the files and folders listed, you should see ‘0.1.4’ (probably first).  Let’s unpack it:

unzip 0.1.4

Now we have a workable TileMill folder we can use for installation, but the folder has an unwieldy name (which, inexplicably, the installation instructions fail to address).  Check your directory again to find the name of the file you just unpacked (in my case, the folder was ‘mapbox-tilemill-4ba9aea’).  Let’s change that to something more reasonable:

mv mapbox-tilemill-4ba9aea tilemill

At long last, we can follow the last of the instructions and finish the installation:

cd tilemill

./ndistro

Watch the code flash by.  Enjoy the show.  This package is still in beta, so it will probably throw up some errors during installation.  None of them should be severe enough to interrupt the process, though.  Feel free to ignore them.

Once the installation is complete, we’ll have to start TileMill before we can use it.  This can be achieved by typing ‘./tilemill.js’ in the terminal, but TileMill actually runs in a browser (and we’ll eventually need to be able to run it in a server with no head), so let’s simplify our lives and start it through Webmin.

Go to the other computer on your network through which you usually access your server (or just stay where you are, if you’ve been doing all this through PuTTY), open the browser and start Webmin.  Open the ‘Other’ menu and select ‘Command Shell’.  In the box to the right of the ‘Execute Command’ button, type:

cd /home/terry/tilemill (substitute your own username for ‘terry’)

Click the ‘Execute Command’ button, then type in:

./tilemill.js

Click the button again (after you’ve gone through this process a couple of times, Webmin will remember these commands and you’ll be able to select them from a drop-down list of previous commands).
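(Incidentally, those two steps can be collapsed into a single line – the && simply tells the shell to run the second command once the first one succeeds.  Same caveat as always about substituting your own username:)

cd /home/terry/tilemill && ./tilemill.js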

And now enjoy the fruits:  type http://maps:8889 into the location bar of your browser (again, substitute the name of your server for ‘maps’).  Gaze in awe and wonder at what you have wrought:

[Screenshot: TileMill up and running in the browser]

Take a short break and play around with the program a bit.  You’ve earned it.  When you’re done I’ll be waiting at the beginning of the next post.

[Image: a cloud]

Fellow Map Dork and good Twitter friend Don Meltz has been writing a series of blog posts about his trials and tribulations while setting up a homebrewed map server on an old Dell Inspiron (here and here).  I strongly recommend giving them a read.

At the outset, Don ran his GeoSandbox on Windows XP, but recently he switched over to Ubuntu.  While I applaud this decision whole-heartedly, I thought I’d take the extra step and build my own map server on a headless Ubuntu Server box (when I say ‘headless’, I am talking about an eventual goal.  To set this all up, the computer in question will initially need to have a monitor and keyboard plugged into it, as well as an internet connection.  When the dust settles, all that need remain is the internet connection).  The following is a quick walkthrough of the process.  I apologize to any non-Map Dorks who may be reading this.

The process begins, of course, with the installation of Ubuntu 10.04 Server Edition.  Download it, burn it to a disk, and install it on the machine you have chosen to be your server.  Read the screens that come up during installation and make the decisions that are appropriate for your life.  The only one of these I feel compelled to comment on is the software selection:

[Screenshot: the software selection screen during the Ubuntu Server installation]

The above image shows my choices (what the hell – install everything, right?).  Definitely install Samba shares.  It allows Linux machines to talk to others.  Also, be sure to install the OpenSSH server.  You’ll need it.  For our purposes, there’s no real reason to install a print server, and installing a mail server will cause the computer to ask you a slew of configuration questions you’re probably not prepared to answer.  Give it a pass.

During the installation process, you will be asked to give your server a name.  I named mine ‘maps’.  So whenever I write ‘maps’, substitute the name you give your own machine.

Once your installation is complete, you will be asked to login to your new server (using the username and password you provided during installation), after which you will be presented with a blinking white underscore (_) on a black screen.  This is a command prompt, and you need not fear it.  I’ll walk you through the process of using it to give yourself a better interface with which to communicate with your server.  Hang tight.

Let’s begin the process by taking control of the machine.  Type in “sudo su” (sans quotation marks) and hit ‘enter’.  The server will ask for your password, and after you supply it, you will be able to do pretty much anything you want.  You are now what is sometimes called a superuser, or root.  What it means is that you are now speaking to your computer in terms it cannot ignore.  This circumstance should be treated with respect.  At this stage, your server will erase itself if you tell it to (and it won’t ask you whether or not you’re sure about it – it’ll just go ahead and obey your orders).  So double-check your typing before you hit ‘enter’.

Now, let’s get ourselves a GUI (Graphical User Interface).  The server edition we’re using doesn’t have its own GUI, and for good reasons (both resource conservation and security).  Instead, we can install Webmin, a software package that allows us to connect to our server using a web browser on another computer on the same network.  We’ll do this using the command line.  Type in (ignore the bullets before each command.  They are only there to let you know where each new line begins):

  • wget http://www.webmin.com/download/deb/webmin-current.deb

And hit ‘enter’ (I’m not going to keep repeating this.  Just assume that hitting ‘enter’ is something you should do after entering commands).  Follow this with:

  • sudo dpkg -i webmin-current.deb

And finish it up with:

  • sudo apt-get -f install

Now we have a GUI in place.  If you open a browser on another computer on your network and type: https://maps:10000 into the location bar (remember to replace ‘maps’ with the name you gave your own server), you’ll be asked to supply your username and password, then you’ll see this (you may also be asked to verify Webmin’s certificate, depending on your browser):

[Screenshot: the Webmin start page, showing the server’s specs]

Cool, huh?  Don’t get your hopes up, though.  We’re not done with the command line yet (don’t sweat it – I’ll hold your hand along the way.  Besides – you should learn to be comfortable with the command line).  For the moment, though, let’s take a look around the Webmin interface.  There is a lot this program can do, and if you can find the time and determination it would be a good idea to learn your way through it.  For now, you just really need to know a few options.  The first is that the initial page will notify you if any of your packages (Linux for ‘software’) have available updates.  It’s a good idea to take care of them.  If you want, Webmin can be told to do this automatically (on the update page you get to when you click through).  The other important features are both located under the ‘Other’ menu (on the left).  The first is the file manager (which bears a striking resemblance to the Windows File Manager of old), which gives you the ability to explore and modify the file system on your server (this feature runs on Java, so be sure the browser you’re using can handle it).  The other feature is ‘Upload and Download’ which does what it says it does.  Together, these two features give you the ability to put maps on your map server, something I assume you’ll want to do.

Please note the specs on my server (as pictured above).  It’s not terribly different than Don’s Inspiron.  I’m not suggesting you do the same, but it is worth noting that an old machine can handle this job.

Back to the command line.  Let’s get OpenGeo:
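OpenGeo’s installation instructions shift around over time, so I’ll hedge here:  the gist is to add OpenGeo’s apt repository and signing key to your server (the exact lines are on the OpenGeo Suite installation page – treat anything below as an assumption and check their current docs), then let apt do the rest.  Assuming the package name they were using at the time:

  • sudo apt-get update

  • sudo apt-get install opengeo-suite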

Rock and roll.  When your server is done doing what it needs to do, go back to the browser you used for Webmin and type http://maps:8080/dashboard/ into the location bar.  Check out the OpenGeo goodness.

Finally, to make your new server truly headless, you’re going to need some way to login remotely (when you turn the machine on, it won’t do a damn thing until you give it a username and password).  Since you listened to me earlier and installed the OpenSSH server, you’ll be able to do this.  All you need is an SSH client.  If you’re remotely connecting through a Linux machine, chances are you already have one.  In the terminal, just type:

  • ssh <username>@<computer name or IP address>

In my case, this would be:

  • ssh terry@maps

You’ll be asked for a password, and then you’re in (I hear this works the same in OS X, but I cannot confirm it).

If you’re using a Windows machine – or if you just prefer a GUI – you can use PuTTY.  PuTTY is very simple to use (and it comes as an executable.  I love programs that don’t mess with the registry).  Tell it the name of the computer you want to connect to and it opens a console window asking for your username and password.  Tell it what it wants to know.

It’s not a bad idea to install a new, dedicated browser for use with your new server.  I used Safari, but only because I already use Firefox and Chrome for other purposes.  Also, your network will probably give your server a dynamic IP address.  This is not an issue for you, since your network can identify the machine by name.  If you want to (and there are several valid reasons to do so), you can assign a static IP address to your server.  To find out how to do so, just search around a bit at the extraordinary Ubuntu Forums.

Update:  It seems that Webmin provides an easy method to assign a static IP address to your server.  Go to Networking → Network Configuration → Network Interfaces → Activated at Boot.  Click on the name of your active connection, and you will then be able to assign a static IP address just by filling in boxes.
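(And for the command-line diehards: on this vintage of Ubuntu, a static address can also be assigned by editing /etc/network/interfaces.  A minimal sketch, assuming your network card is eth0 and your home network lives at 192.168.1.x – the numbers are placeholders, so adjust them to fit your own network:)

# /etc/network/interfaces – static address sketch (numbers are placeholders)
auto eth0
iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1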

Enjoy your map server.  If I can find the time, I’ll write up a post on how I added Flex Viewer, TileMill and TileStream to the server:

[Screenshot: the server running Flex Viewer, TileMill and TileStream]

And 50 bonus points to anyone who understands the image at the top of this post.


In my time, I’ve seen my share of Zombie films.  Some of them I’ve enjoyed (Shaun of the Dead, Zombieland), some I’ve actively disliked (28 Days Later), and many others have fallen somewhere in between.  Until recently, though, there was one aspect of zombie films that confused me greatly:  I couldn’t figure out why zombies displayed a form of social cohesion.

I mean – we’re talking about mindless, shambling, ravenous, flesh-eating monsters here.  Why do they run in packs?  Why do they work together?  Why, I wondered, do they cooperate?

It just seemed inexplicable that zombies would exhibit a tendency to strive toward a common goal.  I expected more anarchy and less teamwork from the shambling masses.  Just the other day, however, I began to understand the complexities of zombie social dynamics.  Unsurprisingly, this onset of comprehension coincided with my latest foray into the seedy underside of the Social Web.

It occurred to me that zombies were not born zombies but were, in fact, once human.  Therefore, their behavior patterns (both within the narrative and without) would logically fall into line with normal human behavior patterns.  And most humans, I think, are less likely to form a community and more likely to form a mob.  You know – a large group of mindless, shambling, ravenous monsters.

I take a great interest in the Social Web.  On some level, I guess you could say I am a student of it.  Because of this, I am quick to study any new movement/website/idea of the ilk that comes down the road.  This often results in membership and a trial of the newest fad, but not always (see my posts on Facebook.  Sometimes my research shows me that membership is a step I’m unwilling to take).  The Social Web is not terribly different from many other aspects of life – sometimes the best way to get to know it is to just take a deep breath and dive in.

Which is what I did with the latest fad to appear on my radar: Quora.  Quora bills itself as “a continually improving collection of questions and answers created, edited, and organized by everyone who uses it.”  On the surface, this sounds like a good idea (unfortunately, the reality is nothing of the sort.  The general consensus over at Quora seems to be that ideas need to be edited in order to have value.  It’s more like the Ministry of Truth than the Social Web).  So I joined, looked around a bit, then posted a question.  I checked back now and again over a week or so, until I found that someone had edited my question.  Curious as to what I had misspelled, I went to have a look, and discovered that an entire paragraph had been removed.  This made me wonder about the person who had done the editing, so I clicked upon his name to check out his profile.  What I saw disturbed me a bit.  The profiles on Quora show users’ activities on the site.  Specifically, the numbers of questions asked, answers given and edits provided by the user.  This particular user had asked 6 questions, given 8 answers, and provided 1,122 edits (you read those numbers correctly).

Naturally, I assumed I was dealing with some sort of Quora troll.  Being the fan of crowdsourcing that I am (see any of my posts discussing OpenStreetMap), I leapt to the erroneous conclusion that the community’s ability to edit each others’ questions was geared toward fixing errors (like spelling and/or grammar).  It never occurred to me that other users would feel free to radically alter the content of a question.  Such behavior would seem to negate the point of posting questions at all.  How could you expect to get answers to a question if anyone could easily change its meaning?

So I posted a couple more questions to Quora.  The first simply asked if the user base was aware of this sort of thing (it turns out that they were.  Worse – they approve of it).  The second (which, of the two, I thought was less likely to offend) asked whether Quora should have more robust filters in place.  Since Quora provides space to further elaborate, I used it to describe the aforementioned troll and my desire to automatically block such users.

Enter the horde of mindless, shambling, ravenous monsters.  I was stunned by the vitriolic response my second question inspired.  While I am quite aware of the speed with which any group of humans will mutate into the Howling Mob (there’s a reason they make us read Lord of the Flies in school), I am often caught off guard by the seemingly innocuous things that serve as catalyst.  I forget that the average human is a quivering mass of insecurities, and that their desperate need to belong often causes them to lash out at any perceived threat against the pony to which they’ve hitched their wagon.

As you probably know, this is not the first time I have encountered the Howling Mob online.  In fact, it seems to happen to me with alarming frequency.  Considering my own personality type, this is hardly surprising and it doesn’t actually bother me.

It did get me to wondering, though.  Since human nature is what it is, and since every aspect of the Social Web is necessarily teeming with humans, why is it that I’ve never been assaulted by the Howling Mob at my particular favorite corner of the Social Web:  Twitter?  What is it about Twitter that makes it so different from my other experiences with the Social Web?

Of course, this launched a discussion on Twitter.  After much discussion and even more thought, I think I finally figured out what the difference is:  it’s a question of exposure.  See, Quora does new users the disservice of immediately throwing them into the middle of the mob, there to claw their way to whatever position they can attain (Quora is by no means alone in this behavior.  In fact, most of the Social Web functions this way.  Just look at the stats and/or titles attached to users in any forum/group/site on the internet).  Just like in high school, newcomers are forced to find their way in an environment where all the social lines have been drawn and all the camps have been populated, their leadership positions filled.  Sometimes online communities can be open and accepting of new members.  Usually, though, the Lord of the Flies mentality prevails.

Twitter does it differently.  When you first join Twitter, you enter into their universe all alone, and you remain alone until you do something about it.  Until you start following other users, the mob doesn’t really know you exist.  And because you choose who you do and do not interact with on Twitter, the mob only enters into your life if you invite it (I’m pretty sure Facebook works in a very similar fashion, but I’m not positive.  For obvious reasons).

Something else that sets Twitter apart is its general lack of score-keeping.  As far as I know, Twitter tracks precisely three things:  how many people you follow, how many people follow you, and how many times you have ‘Tweeted’ (posted a message).  And that’s it (again, I think Facebook is similar in this).  While this information is tracked and is accessible, it doesn’t appear as though Twitter actually does anything with it.  There never comes a time when you are ‘Super-Followed’ or become a ‘Global Tweeter’.

Herein lie the important differences.  The small area of the Social Web that works for me is the one where the group I spend time amongst is a group of my choosing.  More importantly, it’s the area where people aren’t necessarily trying to prove anything.  Where it’s more about connecting and communicating than about score-keeping and imagined popularity.

So thanks but no thanks, Quora.  If it’s all the same to you, I’ll pass on your Howling Mob and just stick with my neighborhood pub.


As you probably know, Egypt has been going through some crazy political shit as of late.  In a nutshell, the general populace of Egypt decided they weren’t very happy with their sitting government.  In fact, they pretty much concluded that they would prefer it to be a getting up and running away government.

Mubarak, of course, felt differently about this.  Being a reigning scumbag is rather habit-forming, and he obviously desired to keep his personal status as quo as possible.  Toward this end, he thought it would be a good idea to prevent his people from talking to each other.  This, to his thinking, was the crux of the problem – as soon as any group of Egyptians started talking together, the conversation invariably turned to everything that was wrong with Mubarak’s regime.

The solution was elegant in its simplicity.  To stop the conversations, all he had to do was plug the pipes.  To accomplish this, he turned off the internet in Egypt.  In response to which Egypt – well – exploded.  I’m sure you’ve all heard about Tahrir Square.

When all was said and done, Mubarak was out of power and Egypt began a series of political seizures that still haven’t finished playing out.

After watching these events unfold, some members of our political leadership started to revisit the idea of a U.S. government-controlled internet kill switch.  Seriously.  And these people are running the show.  What were we thinking?

Of course, this idea has surfaced before.  The rationalization is that the government may someday have to shut down the internet in the interest of National Cybersecurity (leaving aside the reality that by the time our government actually became aware of such a need it would be far too late).  I haven’t heard an explanation as to why the government would need to shut down the entire internet to achieve this, rather than just their pieces of it.  I assume this is simply because no one who works for our government actually knows anything about the internet, but it could be that they just don’t want to admit that the only real use for an internet kill switch is the one Mubarak employed.

The problem here in the U.S. is that We The People have all those pesky Constitutional rights.

When the Founding Drunkards were drawing up the documents that rule our lives, they produced a Constitution, which they swiftly followed with the Bill of Rights.  There was much arguing over the Bill of Rights (specifically, whether there should even be one), but eventually the majority decided that the document should be ratified.

It is curious that the Constitution – the document intended to serve as the foundation of our nation – was so quickly amended.  Not once but ten times.  It could be that they wanted to drive home the point that the Constitution is meant to be amended.  It’s the whole idea behind the document – that it be something that can change and grow along with the change and growth of the United States of America.

I also think it’s possible that the Bill of Rights was the Framers’ way of saying:  “These are the big ones, folks.  If you don’t have these freedoms, then you are not free.”

I bring this up because I think it’s important to note that an internet kill switch would be seriously flirting with infringing on our Constitutional rights.  Specifically in relation to the First Amendment.

You thought I was talking about Freedom of Speech (more exactly, Freedom of Expression), didn’t you?  Well, I’m not (although that argument could be made).

No, I’m talking about another First Amendment right:  Freedom of Assembly.  This, my friends, is what we do with the internet:  we assemble.  Facebook, Twitter, LinkedIn, Myspace – just to name some of the big guns – for many people, these are the internet.  These days, it seems more and more that the internet exists solely to give Social Media a place to hang out.  In case you haven’t noticed, what most of us do with the internet is connect, reconnect, and stay connected with each other.

You see, the Drunkards were well educated folks.  They had read their history and were quite aware that most revolutions begin in pubs.  They were also aware that this is not due to the presence of alcohol (although it certainly doesn’t hurt).  The reason drinking establishments so often serve as birthplaces for insurrection is that they are public venues where people are able to come together and speak openly.  Where We The People can assemble to discuss our grievances.

Which is precisely what Mubarak feared and tried to stop.  He wasn’t just trying to keep people from talking – he was trying to keep them from talking to each other.

In sixty-nine I was twenty-one and I called the road my own
I don’t know when that road turned into the road I’m on

– Jackson Browne

Except in my case it wasn’t ’69, I was 16, and I’ve never stopped calling the road my own.

See, when I was in High School, I read Kerouac and Kesey, I read about Woody Guthrie and really listened to his music, and I – along with a bunch of my friends – succumbed to the siren song calling us to stand on the highway with our thumbs out.  A month later I returned dirtier, stronger and better than I had ever been.  I came away from the experience with a better understanding of the road, of the United States of America and – most important – a real understanding of freedom.  Travelling just for its own sake (and in a fashion that leaves you at the mercy of fate) – with no money, no real destination and no discrete itinerary – entails a level of freedom that the average person doesn’t really understand.  Reading Kerouac and listening to Woody can net you a glimpse, but you’ll never know the reality of it until you experience it firsthand.

A large part of that freedom is ownership.  When you develop such an intimate relationship with the road, you begin to understand the communal – no, universal – aspect of the road.  The road that – in effect – belongs to everyone.  The road that actually deserves capital letters and will hereafter be given them.  The Road that exists outside of boundaries and municipal spheres of influence (even while passing through them).

This is The Road that OpenStreetMap is about.  The Road that belongs to each and every one of us.  This is why Woody Guthrie would love OSM (although I’m pretty sure Kesey wouldn’t understand it and Kerouac wouldn’t give a rat’s ass about it).  Because a central aspect of OSM is about returning ownership of The Road to us, the people.  You know – the Great Unwashed.

And it is ours, you know.  And not just because we paid for it.

Lately, I’ve been thinking a lot about ownership and how we (the collective, all-inclusive ‘we’) fit into it.  And how ownership differs from possession.  This train of thought started with this discussion.  This post added fuel to the flames.  Now we’re down to the stew my brain has made of it.  You might want to look away.

So the question that bubbles to the surface of my brain stew is:  Do lines in the sand (i.e., political boundaries and/or parcel data) have a place in OSM?

My immediate reaction is to say “no”.  For technical and philosophical reasons.  On a technical level, these are not the sort of data that the average person on the ground is able to provide.  I can easily take my GPS out into the world and accurately record streets, buildings, rivers, railways, bus-stops, parks, bathrooms, pubs, and trees.  These are all concrete physical features that any one of us can locate on the surface of the Earth and record.  More importantly, they’re features that anyone else can check.  Or double-check.  And this – in case you haven’t noticed – is the strength of OSM.  It’s self-correcting.

But we can’t do this with the lines in the sand.  Where – exactly – is the border of your town?  Can you stand on it and take a waypoint?  Sometimes you can.  Most roads have convenient signs telling you when you’re leaving one political sphere of influence and entering into another.  Here in New England, there are often monuments of one sort or another at pertinent locations to mark the dividing line betwixt one town and another.  And these are certainly locations that can be marked – as points.  If you’re not willing to walk the entire border, however, you shouldn’t draw the line in the sand.  Sometimes you can’t just connect the dots.  Actually, most of the time you can’t.

Of course, we often have the option of downloading border data from various (presumably authoritative) governmental sources.  But then we run into the question of whether we have the right to upload that data to OSM.  Personally, I don’t think the payoff is worth the expenditure of neurons necessary to figure it out.  Especially because the ‘authorities’ don’t always agree:

A quick comparison of the counties of western Massachusetts.  The green background with black outlines was provided by the USGS.  The semi-transparent grey foreground with white outlines was provided by MassGIS.  Note the differences.  For the record, the data provided by MassGIS is vastly superior to that provided by the USGS.  Trust me.
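
And if you'd rather quantify the disagreement than squint at it, that's easy enough.  Here's a sketch – assuming geopandas is installed, with hypothetical filenames standing in for the USGS and MassGIS downloads:

```python
# A sketch of measuring how much two 'authoritative' boundary
# files disagree.  Assumes geopandas; the filenames are
# hypothetical stand-ins for the actual downloads.
import geopandas as gpd

usgs = gpd.read_file("usgs_counties.shp")
mass = gpd.read_file("massgis_counties.shp")

# Put both in the same projected CRS so areas come out in meters.
# EPSG:26986 is the Massachusetts State Plane (mainland) CRS.
usgs = usgs.to_crs(epsg=26986)
mass = mass.to_crs(epsg=26986)

# The symmetric difference is everything claimed by one source
# but not by the other -- in other words, the disputed ground.
disputed = usgs.unary_union.symmetric_difference(mass.unary_union)
print(f"The authorities disagree about {disputed.area / 1e6:.1f} square kilometers")
```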

Parcel data is even worse.  Frankly, I don’t know why anyone would want to include parcel data on any map, but then I’ve had a lot of experience with it and therefore I am cognizant of its uselessness.  Parcel maps are more for bean counting than anything else.  Their primary purpose is to delineate taxation and therefore they tend to conform to a “close is good” standard.  They don’t need to be accurate – tax collectors are quite happy to round up.  Take it from a guy who has had occasion to check a large number of parcel maps against the truth on the ground – they are grossly inaccurate (in these parts, it used to be thought that the ground trumped the map and/or the deed.  I’ve seen many maps that have ‘corrected’ acreages on them.  These days, though, the thinking tends in the other direction.  After all – what you paid for is what the paper says you paid for).

On a purely philosophical level, I feel as though lines in the sand have no place in OSM.  Lines in the sand are all about possession.  They are someone’s way of saying “This land is my land.  It’s not your land”.  In my far from humble opinion, this is pretty much the polar opposite of what OSM is about.  OSM is about taking ownership back from the line-drawers and the so-called authorities.  It’s a declaration that the map belongs to us – all of us – and we’d kind of like it to be an accurate map.  If it’s all the same to you.

But then, Kate had an excellent point (she does that, and is almost never annoying about it):  people tend to want to know where they are.  While I agree that this is, indeed, the case, I don’t think borders need to be a part of the picture.  When people go from Town A to Town B, they like to know where they are when they are actually inside the town proper.  But I question whether the average person cares when they cross over the border between Town A and Town B (except, of course, for the 3-year-old in my back seat who always wants to know.  Lucky for him, the driver’s seat is occupied by a daddy with a very accurate personal GPS in his head).  And while I think there is a place in the world for some borders (as I said before, we need some way to determine who’s responsible for plowing the roads and collecting the garbage), I doubt whether that place is on a ‘People’s Map’ like OSM.

Is there a solution?  I think so, and I think Andy hit upon it pretty soundly in the post linked to above:  labels.  With absolutely no lines whatsoever, people have no difficulty identifying points and areas if a map is sprinkled with labels of judicious size and font.  If you doubt me on this one, just look at this map:

If a couple Hobbits can find their way from the Brandywine River to the bowels of Mount Doom without borders, I think maybe OSM can do without them, as well.
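
And for the record, the 'labels, no lines' approach is trivial to mock up yourself.  A rough sketch, assuming geopandas and matplotlib are installed (the filename and the TOWN_NAME column are hypothetical):

```python
# A rough sketch of 'labels, no lines': draw a labeled point for
# each town and no boundaries at all.  Assumes geopandas and
# matplotlib; 'towns.shp' and TOWN_NAME are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

towns = gpd.read_file("towns.shp")

fig, ax = plt.subplots(figsize=(8, 8))
for _, town in towns.iterrows():
    # representative_point() always falls inside the polygon,
    # unlike the centroid of an oddly shaped town.
    pt = town.geometry.representative_point()
    ax.annotate(town["TOWN_NAME"], (pt.x, pt.y), ha="center", fontsize=8)

# Frame the map to the data, then hide the axes entirely.
minx, miny, maxx, maxy = towns.total_bounds
ax.set_xlim(minx, maxx)
ax.set_ylim(miny, maxy)
ax.set_axis_off()
plt.show()
```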

When I attended Oxford about a decade ago, I took an amazingly interesting class called 'British Perspectives of the American Revolution'.  The woman who taught said class was fond of pointing out that the United States of America is really an experiment, and a young experiment at that.  Whether we can call it a successful experiment will have to wait until it reaches maturity.

I think of that statement often when the internet comes up in conversation.  If the United States is a young experiment, the internet is in its infancy.  For some reason, people today don’t seem to realize this.  Even people who were well into adulthood before the internet went mainstream somehow manage to forget that there was life before modems.  While this circumstance always makes me laugh, it becomes especially funny whenever a new Internet Apocalypse looms on the horizon.

Like this latest crap about Google/Verizon and net neutrality.  I’m sure you’ve heard about it – the interwebs are all abuzz and atwitter about it (I’m sure they’re all afacebook about it as well, but I have no way to verify it).  In a nutshell, it’s a proposal of a framework for net neutrality.  It says that the net should be free and neutral, but with notable exceptions.  You can read the proposal here.  First off, don’t let the title of the piece scare you.  Although the word ‘legislative’ is in the title, here in America we don’t yet let major corporations draft legislation (at least not openly).

Anyway, the release of this document has Chicken Little running around and screaming his fool head off.  In all his guises.  Just throw a digital stone and you’ll hit someone who’s whining about it.  One moron even believes that this document will destroy the internet inside of five years.  Why will this occur?  Ostensibly, the very possibility of tiered internet service will cause the internet to implode.  Or something like that.

Let's put that one to rest right now.  The internet isn't going away any time soon.  It won't go away, for the simple reason that it is a commodity that people are willing to pay for.

Allow me to repeat that, this time with fat letters: it is a commodity.  The problem we’re running into here is the mistaken belief that a neutral net is some sort of constitutionally guaranteed human right.  We’re not talking about freedom of expression here (except in a most tangential fashion).  We’re talking about a service – a service that cannot be delivered to us for free.  Truth is, net neutrality is an attempt to dictate to providers the particulars of what it is they provide.

A neutral net would be one in which no provider is allowed to base charges according to site visited or service used.  Period.  It's not about good versus evil, it's not about corporations versus the little guy, it's not about us versus them.  What it is about is who pays for what.  Should I get better access than you because I pay more?  Should Google's service get priority bandwidth because they pay more?

Predictably, our initial response to these questions is to leap to our feet and shout ‘No!’ (and believe me, kids – I’m the first one on my feet).

But should we?  Seriously – what other service or commodity do we buy that follows a model anything like net neutrality?  Chances are, most of you get more channels on your TV than I do.  Why?  Because you pay for it.  I probably get faster down- and upload speeds than many of you.  Why?  Because I pay for it.  Many people today get data plans (read: internet) on their cell phones.  Why?  Because they pay for it.

Doesn’t this happen because the service provider dedicates more resources to the customers who receive more and/or better service?

And then there are the fears about the corporate end of the spectrum.  As one pundit put it:  What would stop Verizon from getting into bed with Hulu and then providing free and open access to Hulu while throttling access to Netflix?

The short answer is:  Nothing would stop them.  The long answer adds:  Net neutrality wouldn’t stop them either.  Does anyone really believe that net neutrality would stop Verizon from emulating Facebook by forcing customers to sign into their accounts and click through 47 screens before they could ‘enable’ Netflix streaming?

And I may be missing something here, but Verizon getting into bed with Hulu and throttling Netflix sounds like a standard business practice to me.  I’m not saying I agree with it, just that it doesn’t strike me as being unusual.  The university I attended was littered with Coke machines.  Really.  Coca-Cola was everywhere on that campus.  Like death and taxes, it was around every corner and behind every door.  But Pepsi was nowhere to be found.  It simply was not possible to procure a Pepsi anywhere on the grounds of the university.  Why was it this way?  Simply because Coke ponied up more money than Pepsi did when push came to shove.  Oddly, nobody ever insisted they had a right to purchase Pepsi.

Why – exactly – do so many of us think that the internet should be exempt from the free market?

Gather ‘round children, and let me tell you a story.  It’s about a mythical time before there was television.  In the midst of that dark age, a Neanderthal hero invented the device we now know as TV.  In those early times, the cavemen ‘made’ television by broadcasting programs from large antennae built for the purpose.  Other cavemen watched these programs on magical boxes that pulled the TV out of thin air.  Because TV came magically out of thin air, it initially seemed to be free of cost.  The cavemen who made the programs and ran the stations paid for it all through advertising.

Eventually, TV became valuable enough for everyone to desire it.  This led to the invention of cable as a means to get programs to the people who lived too far away from the antennae to be able to get TV out of the air.  Because putting cable up on poles and running wire to people’s houses costs money, the people at the ends of the wires were charged for the service.

It wasn’t long before the cable providers hit upon the idea of offering cable to people who didn’t need it, but might want it.  To get more channels, or to get their existing channels at a better quality.  Unsurprisingly, there was much yelling of “I will not pay for something I can get for free!”, but as you know it didn’t last long.  In short order cable went from ‘luxury’ to ‘necessity’.

Does any of that sound familiar?  Can you see a pattern beginning to emerge?  Let me give you a hint:  It’s about money.  The internet has never been free.  It just appeared to be so because someone else was largely footing the bill (or at least it seemed that way.  Truth is, you’ve been paying for it all along, and the coin you’ve been paying with is personal data).  The internet – like so much of our world – is market-driven.  Don’t kid yourself into thinking otherwise.

And I hate to say it, folks, but it looks as though the market is moving away from net neutrality.  The simple fact that it’s being talked about so much is a clear indication that its demise is imminent.  To be honest, I’m not so sure this would be a bad thing.  In the short term, a lack of net neutrality would pretty much suck.  In the long term, though, it could very well be the best thing for us, the average consumers.

You see, while money drives the market, the market drives competition (as well as innovation).  If our Verizon/Hulu scenario actually came to pass, it wouldn’t be long before another ISP appeared in town, one who wasn’t in bed with Hulu and was willing to offer Netflix (providing, of course, that there was a demand for such a thing).  Eventually, we get to reap the benefits of price and/or service wars (much like cell service providers today).  In fact, this could help solve one of America’s largest internet-related problems – the lack of adequate broadband providers (you’d be surprised how many Americans only have one available choice for broadband).

I don’t think we really need to fear losing net neutrality, even if it is legislated away.  If enough of us truly want to have a neutral net, sooner or later someone will come along and offer to sell it to us.


I came across this post the other day, and it made flashy things go on and off inside my braincase as normally underused neurons woke up and stretched lazily (do click on the blue letters and read the post).  While I agree with the crux of the above-linked post, the light show inside my skull was actually related to (mostly) other ideas.  In my usual, intensely dull, Map Dorkish manner, I was thinking about data.

Really.  It’s something I think about.  A lot.  It’s a sickness.

Anyway, I got to thinking about a discussion I had with a fellow Map Dork on Twitter a short while ago, about data and GIS.  About how the majority of the GIS community spends the bulk of its time thinking about what to do with data, and not enough time thinking about the quality of the data itself.

It’s like this – whenever I make a map, there are two primary components involved in the process.  The first is the software that produces said map.  The leader in the field is far and away Esri, the company that produces ArcGIS (which used to be known as ArcView).  Esri does not produce my software of choice, for a variety of reasons, none of which should be taken as a comment on the software itself (okay – some of it should, but not a lot.  Maybe 30% or so).  Truth is that Esri wins Best In Show when it comes to proprietary software.

In Map Dorkia, though, proprietary software doesn't carry the kind of weight it does in other fields.  You see, a fair number of Map Dorks also happen to be coders (maybe even most of them).  Because of this, the market has been flooded with a vast number of good, stable, working, free and open source alternatives.  I can't begin to mention them all, but I will point you to this site, where someone better informed than myself has put together some good overviews (even if parts of them are a bit out of date).

At the end of the day, my go-to GIS application is Quantum GIS (although it’s far from the only one I use).  Like the Esri offerings, Quantum GIS is a good, all-around GIS package (but not as feature-packed).  Unlike EsriWare, Quantum GIS has a huge, talented support base.  Everyone who’s working on Quantum GIS is doing so because they care, not just to get a paycheck.  Think about that.

The second component of any map I make is the data with which I make the map.  This data comes in many shapes and sizes, as well as different formats and/or projections.  The lion’s share of what I actually do involves taking all that crap and turning it into an accurate, useful and (hopefully) visually pleasing map.  The problem that Map Dorks run into at this point is:  Where to get the data?
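
Incidentally, the 'taking all that crap and turning it into a map' part mostly boils down to normalizing formats and projections before any actual cartography happens.  Something like this sketch (assuming geopandas; the filenames are hypothetical):

```python
# A sketch of the unglamorous part of mapmaking: take data in
# assorted formats and projections and normalize all of it.
# Assumes geopandas; the filenames are hypothetical.
import geopandas as gpd

sources = ["roads.shp", "parcels.geojson", "rivers.gpkg"]

for src in sources:
    layer = gpd.read_file(src)       # format is detected automatically
    layer = layer.to_crs(epsg=4326)  # reproject everything to WGS84
    out = src.rsplit(".", 1)[0] + "_wgs84.geojson"
    layer.to_file(out, driver="GeoJSON")
    print(f"{src}: {len(layer)} features written to {out}")
```

But back to the question of where the data comes from.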

Often, we turn to the federal government.  The USGS has been producing quality maps almost since the Boston Tea Party, so we tend to think of them as a pretty safe bet.  However, it’s wise to check the fine print on the quadrangle you’re looking at.  Around these parts, they generally date back to the sixties, although many of them were updated in the eighties or nineties.

Our government also provides census data in the form of TIGER (Topologically Integrated Geographic Encoding and Referencing) files.  TIGER data comes in a variety of shapes and sizes, and is of varying accuracy (see below).
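
Since TIGER files are just shapefiles, it's easy to poke at what you've actually got.  Another sketch, again assuming geopandas – the filename is hypothetical, but MTFCC really is the feature-classification column the Census uses:

```python
# A sketch of inspecting a downloaded TIGER roads shapefile.
# Assumes geopandas; the filename is hypothetical, but MTFCC and
# FULLNAME are real columns in TIGER roads data.
import geopandas as gpd

roads = gpd.read_file("tl_2017_25011_roads.shp")

# MTFCC codes classify features: S1100 is a primary road,
# S1200 secondary, S1400 a local neighborhood street, and so on.
print(roads["MTFCC"].value_counts())

# Pull out just the local streets for a closer look.
local = roads[roads["MTFCC"] == "S1400"]
print(f"{len(local)} local streets, e.g. {local['FULLNAME'].dropna().head().tolist()}")
```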

These days, most state governments have some sort of GIS department, as do many cities and towns.  These tend to be more accurate than federal sources (although not always) due mainly to the fact that they have a much smaller area of focus.  And, of course, some are better than others.  Here in Massachusetts, we are lucky to have MassGIS.  While MassGIS can be rather quirky (their file naming conventions leave a bit to be desired), they freely offer a wealth of data that tends to be pretty accurate (I know because I’ve checked a fair amount of it on the ground).  They do have a budget, however, so some of their data gets a little old between updates.  And while they offer tons of data via WMS, their servers – well – suck.
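
For the curious, pulling a map image off a WMS takes only a few lines when the server is behaving.  A sketch using OWSLib; note that the endpoint URL and layer name below are hypothetical placeholders, not the real MassGIS values:

```python
# A sketch of fetching a map image over WMS with OWSLib.  The
# endpoint URL and layer name are hypothetical placeholders --
# check the provider's documentation for the real ones.
from owslib.wms import WebMapService

wms = WebMapService("https://example-massgis-endpoint/wms", version="1.1.1")

# See what the server actually offers before asking for anything.
print(list(wms.contents))

img = wms.getmap(
    layers=["hypothetical:towns_layer"],
    srs="EPSG:4326",
    bbox=(-73.5, 41.2, -69.9, 42.9),  # roughly Massachusetts
    size=(800, 400),
    format="image/png",
)
with open("towns.png", "wb") as f:
    f.write(img.read())
```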

For my money, the most accurate data around (besides the data I go out and gather myself, of course) is that which comes from OpenStreetMap.  Steve touched upon this in the post mentioned previously, but it bears repeating.  Because data sources are many and various, it is often difficult to assess the accuracy of the data in question (especially if it’s data depicting an area geographically removed from your own location).

What makes OSM (OpenStreetMap) unique among data providers is the workforce that acquires the data.  The OSM workforce isn't made up of people looking only for a paycheck.  The OSM workforce doesn't daydream about something else while they're gathering data.  The OSM workforce is extraordinarily focused on the job at hand because nobody is making them do it – they're mapping because they genuinely want to.  They also really want the data to be accurate.

Possibly the most important aspect of the OSM workforce is their proximity to the area they provide data about.  In the majority of cases, OSM data is collected by people who can vouch for the accuracy of their data because they can see it out their window or because they walked by it on their way home from work.  When it comes to the OSM workforce, the person who mapped any given road has most probably walked down that road.

Because of the nature of the OSM workforce, I tend to trust the accuracy of OSM data more than most.  To my mind it’s just plain common sense.  And in my experience, OSM data is at least as good as any other source, usually better.  Here’s a comparison of road data from three sources:

Lines

You can see the obvious shortcomings of the TIGER data.  You will probably also note the similarity between the MassGIS data and the OSM data.  This is because MassGIS (bless their little hearts) handed a bunch of data to OSM many moons ago (I don’t know exactly when this occurred).  While this is a great thing for Massachusetts, not all of America was so lucky.  And in my experience, even here in Massachusetts OSM data tends to be more up to date than MassGIS’s (the primary reason for this, I think, is that MassGIS dedicates the lion’s share of their budget to flashy projects.  For instance, they just finished gathering new, state-wide aerial imagery – most at 30cm/pixel, some at 15cm.  While the imagery is very cool and very useful, OSM will probably get around to utilizing it before MassGIS does).

As luck would have it, you don't have to take my word for this.  Bing Maps just rolled out a new feature: an OpenStreetMap layer.  I did a quick comparison:

Oxford

This pretty much speaks for itself.  Not only is the OSM data more accurate (note the British Rail lines on the left, as well as the placement of the Oxford Canal), but OSM provides far more information than the Bing data (without overcrowding the map).  In pretty much all ways, it’s just plain better data.

And before anyone points to the fact that OSM started in Great Britain (so of course OSM data is better over there), here’s a section of Boston I visited just the other day:

Boston

Kudos to Microsoft for including the OSM layer.  By all means head on over to Bing Maps and check it out.  It's nice to see that they've finally figured out what many of us Map Dorks figured out long ago:

Always use the best data you can get your hands on.
