In our last outing we put together a simple map. Having reached this point, the wise cartographer asks itself (when discussing cartographers I default to non-specific, non-human pronouns): “Am I done? Does this map suffice?”
Actually, the answer to this question should have been determined before we even started. One of my favorite professors used to say “Begin every project with a proposal. It’s the easiest and best way to know when you’re done.”
Anyway, if the map you’ve already made suits your purposes, congratulations. There is no reason for you to continue reading this, so by all means go back to whatever your favorite activity happens to be. I promise I won’t take it personally. If, however, you need a less simple map – or if you’re just like me (any map worth making is worth making pretty) – then please read on.
The plan here is to avail ourselves of some of QGIS's advanced styling techniques to apply better-looking symbology to the map. We will work in the reverse of the order we used last time (top down as opposed to bottom up), because this neatly follows a least-to-most-complicated trajectory (with one exception).
Our general goal here is to construct a believable illusion of three dimensions. We aren't going to try to achieve true 3D (or – more accurately – 2.5D); instead we'll just use a few tricks to fool the eye. In the case of the structures layer we'll just apply a simple drop shadow. Double-click on the 'Structures' layer to open the Properties window. Click on 'Style' in the left-hand sidebar if it isn't highlighted already. Select a lighter color for the polygons (I used a grey: #707070), then check the 'Draw effects' box in the main area of the window. Click on the small star, which will have turned yellow, to open the Draw Effects window. Check the 'Drop Shadow' box on the left, change the Offset to 4 Map Units, change the Blur Radius to 1, and change the Blend Mode to Normal. Click 'OK' a couple of times and watch the map redraw itself.
Roads come in various shapes and sizes. There are many different attributes of this layer we could use to symbolize our roads, but I think the wisest (read: easiest) way to go is to symbolize them by relative size. If you right-click on the title of the 'Roads' layer (in the Layers Panel), a menu appears. Click on 'Open Attribute Table' and a window opens showing you the database that's attached to the layer (most layers have one). Feel free to have a look around, and don't worry if much of it looks like gibberish. The first column of the attribute table is called 'Class' and it is the one we will use to symbolize the layer. The road classes are single-digit codes ranging from 1 to 6 (scroll down the MassGIS 'Roads' page for an explanation of the classes). The short explanation is that smaller numbers represent bigger roads and vice versa. We want thicker lines for larger roads, so we'll just have to tweak it a bit.

Double-click on the 'Roads' layer to open the Properties window and change the line style from 'Single Symbol' to 'Graduated'. Click on the 'ε' symbol to the far right of the 'Column' selector to open the Expression Dialog window. Enter 10 - "CLASS" for the expression and click 'OK'. Change the Method to 'Size' and input a range of 4 to 16. Bump the Classes up to 6 and change the Mode to 'Quantile (Equal Count)'. Click on the word 'Change' in the Symbol selector to open the Symbol Selector window. Select your line, change the Pen Width to 0.46 Map Units and set the Cap Style to 'Round'. Add another line, move it down in the hierarchy, then change its Pen Width to 0.86 Map Units, change its color to grey (#707070) and set its Cap Style to 'Round' as well. Check the Draw Effects box and click on the yellow star. Select 'Outer Glow', change the Spread to 3 Map Units, set the Transparency to 0 and change the color to the same grey as the line itself. Click 'OK' a couple of times and watch the roads redraw themselves.
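The renderer settings above can be hard to visualize, so here's a rough sketch of the idea in plain Python. This is illustrative only – `road_width` is a hypothetical helper, and the straight linear mapping stands in for the quantile classification QGIS actually performs:

```python
def road_width(road_class, min_w=4.0, max_w=16.0):
    """Mimic the Graduated renderer driven by the expression 10 - "CLASS":
    class 1 (biggest roads) gets the thickest line, class 6 the thinnest."""
    value = 10 - road_class          # class 1 -> 9, class 6 -> 4
    lo, hi = 4, 9                    # range of the expression for classes 1..6
    t = (value - lo) / (hi - lo)     # normalize to 0..1
    return min_w + t * (max_w - min_w)

for cls in range(1, 7):
    print(cls, road_width(cls))      # class 1 -> 16.0, class 6 -> 4.0
```

The point of subtracting from 10 is simply to flip the ordering, since MassGIS codes its biggest roads with the smallest numbers.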
Almost there. If you zoom in to get a closer look at the roads you will find that they look funny. This is because each road is made up of a series of small, straight lines connected together into longer lines that more accurately represent the less-than-straight reality of roads. Each of these small lines is drawn individually, which leaves random grey lines crossing over our roads in annoyingly large numbers. Don't worry. We can fix this.
Reopen the Layer Properties window to get back to the layer styles. Click on the button labeled 'Advanced' and select 'Symbol Levels' to open the Symbol Levels dialog. You will see two columns, Layer 0 and Layer 1. Everything in the Layer 0 column should say '0' and everything in the Layer 1 column should say '1'. Check the 'Enable Symbol Levels' box. This tells QGIS to draw everything in Layer 0 first, then go back and draw everything in Layer 1 – so all the grey casings go down before any of the fill lines, and the fills merge into continuous roads. Click 'OK' twice and watch the roads fix themselves.
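To see why symbol levels fix the problem, it helps to spell out the two drawing orders. A minimal sketch (the street names are made up, and real QGIS is of course drawing geometry, not strings):

```python
segments = ["Main St", "Elm St", "High St"]

# Without symbol levels, each segment is drawn completely (grey casing,
# then fill) before the next segment starts, so a later casing can land
# on top of an earlier segment's fill.
naive_order = [(level, seg) for seg in segments for level in (0, 1)]

# With symbol levels enabled, QGIS makes one full pass per level instead:
# every level-0 casing first, then every level-1 fill on top.
leveled_order = [(level, seg) for level in (0, 1) for seg in segments]

print(naive_order)
print(leveled_order)
```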
This layer – the streams – is the exception. It just needs a little tweaking to make it look different from lines simply painted on. Darken the color a bit (I used #007ac9) and add an Outer Glow: Spread of 2 Map Units, Blur Radius 3, 50% Transparency, color #46f3f7.
When maps are made in 2.5D they are most commonly “lit” from the upper left corner. I don’t know where this convention came from, but it is so strongly ingrained in us that when maps are “lit” from the bottom they look backwards to us (mountains look like valleys and valleys look like mountains). Knowing this, we can take advantage of it to make objects on our maps look ‘3D’, either in a positive or negative fashion. For our bodies of water we’ll go for a negative, making them appear to be cut into the ground surface rather than just sitting upon it. We do this by applying a highlight to the lower right-hand side and a shadow on the upper left-hand side.
But first we’ll have to specify which polygons we want to show and which ones we’d like to hide. When we look at the MassGIS web page for the dataset we’re using, we see two separate columns in the attribute table that describe the water features. The first is called WETCODE, and it separates the features into 28 distinct categories. These categories are also conveniently detailed in another column of the attribute table (IT_VALDESC). For our purposes, though, this is far more detail than we need. Instead, we’ll use another, similar column called POLY_CODE, which serves much the same purpose but with only 11 categories. For our symbology we’ll want to show any features that delineate open water (we’ll include marshland in this group) and hide everything else. We’ll do this using Rule-based styling.
Open the layer style and change it from Single Symbol to Rule-based. Double-click the existing rule (no label) to open the Rule properties dialog. Click on the Expression button (…) to the right of the ‘Filter’ bar to open the Expression String Builder. We want to separate out the open water features, which are those with POLY_CODEs of 1, 2, 6, 9 and 10 (for this map we could skip 9 and 10, but we will include them in the name of thoroughness. Sloppy work is habit forming). To do this we build the somewhat awkward expression POLY_CODE=1 OR POLY_CODE=2 OR POLY_CODE=6 OR POLY_CODE=9 OR POLY_CODE=10 (the tidier "POLY_CODE" IN (1,2,6,9,10) also works). Then we add two additional Simple Fill layers to our symbol – three fills in all – which we style in order from the top down. The topmost we color with the same color we used for Streams (#007ac9), turn off the outline and add an Inner Glow: 20 Map Unit Spread and a darker color (#005b93). The next is the shadow layer, which we color with a darker shade (#005488), turn off the outline and offset negative 8 X and 8 Y Map Units. Then comes the highlight layer, which we color lightly (#70b5d7) and offset positive 8 X and 8 Y Map Units. Finally, we make the entire layer semi-transparent (a setting of 25 works well).
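The filter logic itself is just a membership test. A quick Python sketch of what the rule does (the feature records here are made-up stand-ins for the real attribute table):

```python
OPEN_WATER_CODES = {1, 2, 6, 9, 10}

def is_open_water(poly_code):
    """Same test as the rule filter
    POLY_CODE=1 OR POLY_CODE=2 OR ... OR POLY_CODE=10."""
    return poly_code in OPEN_WATER_CODES

features = [
    {"id": 1, "POLY_CODE": 2},   # open water -> shown
    {"id": 2, "POLY_CODE": 4},   # some other wetland type -> hidden
    {"id": 3, "POLY_CODE": 9},   # open water -> shown
]
shown = [f["id"] for f in features if is_open_water(f["POLY_CODE"])]
print(shown)  # -> [1, 3]
```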
For our contours we’re going to want two discrete thicknesses (or ‘weights’) of lines: a heavier line (called an Index) occurring periodically, and a lighter line (called an Interval) occurring everywhere in between. Because our data makes it easy to do so, our Index lines will occur every 30 meters (if you look at the attribute table you will see a column called ELEV_M. This is the elevation of each line in meters, and the interval of the lines is 3 meters, so every 30 meters we get a line that’s a nice round figure – 60, 90, 120, etc). Since we have hills instead of mountains here in New England, 30 meter Indices are workable without being too crowded.
Before we start, though, we should make our lives easier by combining the two contour layers we’ve been using. Because we opened zipped files, we cannot edit either of the layers as they are, so we first must convert one of them to a shapefile (a format that has more than its share of issues. But it works so we’ll just run with it). Right-click on the Greenfield Contours file and select ‘Save As’. Choose the appropriate co-ordinate system, browse to the folder you want to save it in (creating said folder if necessary. I used Maps→Data→Greenfield→Vector→Contours and called the file Greater Greenfield Contours). Then zoom to the extent of the Montague Contours layer (by right-clicking on it) and select the entirety of it with the Select Features tool. Copy the features to the clipboard and return to the previous zoom using the Zoom Last button. Select the Greater Greenfield Contours layer, toggle editing on, paste the features onto it, then save the changes and toggle editing off. Then clean up by deselecting all features and removing both the Greenfield Contours and Montague Contours layers (by right-clicking them).
Now open the Layer Properties window for Greater Greenfield Contours. Change the style from Single Symbol to Rule-based. Double-click the undefined rule and label it ‘Index’. Open the Expression String Builder and build the expression floor("ELEV_M"/10) = "ELEV_M"/10. This is only true when the elevation divides evenly by 10, and since every contour here is a multiple of 3, the lines that pass are exactly the multiples of 30. Change the color of the line to #665640, set the Pen Width to 3 Map Units, then open the Draw Effects dialog. Give it an Inner Glow (2 Map Unit Spread, 3 Blur Radius, 0% Transparency, color #f8a05c) and an Outer Glow (4 Map Unit Spread, 3 Blur Radius, 0% Transparency, color also #f8a05c).
Add a second rule labeled ‘Interval’. For this one build the following very similar but significantly different expression: floor("ELEV_M"/10) <> "ELEV_M"/10. Change the color to #665640 and set the Pen Width to 1.25 Map Units. Give the line an Inner Glow (0.75 Map Unit Spread, 3 Blur Radius, 50% Transparency, color #f8a05c) and an Outer Glow (4 Map Unit Spread, 3 Blur Radius, 50% Transparency, color #f8a05c). ‘OK’ your way out of all the dialogs and watch the map redraw itself.
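Between them, the two filter expressions place every contour into exactly one of the two rules. Here's the same test in plain Python, assuming (as our data guarantees) that elevations come in 3-meter steps:

```python
def contour_rule(elev_m):
    """Mirror of the QGIS filters: floor(e/10) == e/10 holds only when the
    elevation divides evenly by 10, and since every contour is a multiple
    of 3, the elevations that pass are exactly the multiples of 30."""
    return "Index" if elev_m % 10 == 0 else "Interval"

elevations = list(range(3, 100, 3))   # 3 m contour interval
index_lines = [e for e in elevations if contour_rule(e) == "Index"]
print(index_lines)  # -> [30, 60, 90]
```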
And there we are. We took our basic map and made it pretty. Nice work. It should be noted that not all of this symbology works well at all zoom levels. If you zoom into and out of various areas of the map you will see what I’m referring to. Creating symbology that works well for various zoom levels is its own art form and is beyond the scope of this post. My intent here is only to introduce you to a small sample of the vast styling power of QGIS. Please continue to explore further on your own. You won’t regret it.
So you heard about this GIS thing and you thought to yourself that it might be kind of cool to be able to make your own maps. But then you looked at the software and decided that it’s rather more dense and complicated and esoteric than you’d like, so you decided that maybe you’d pass on making your own maps and instead just use whatever you could find on the Internet.
Fair enough, but the truth is that making your own maps really isn’t very hard. Sure, the software may be a little dense and complicated and esoteric, but so is most every word processor on the market. But this doesn’t stop any of us from firing up Word and using it to write a letter. The same goes for email. How many of you can claim to know more than 2% of the full capabilities of Gmail? I know I can’t.
GIS software can indeed perform a great many complex and wondrous scientific operations, but a working knowledge of them is not a prerequisite for using the software. With a little bit of knowledge and the right data, pretty much anyone can make a respectable map.
Which is what we’re going to do. Relax – I’ll walk you through the process. It probably won’t hurt at all.
The first order of business is to secure the software we’ll be using. There are a bunch of good options out there, but for today’s exercise we’ll be using QGIS, because it’s solid and dependable, but also because it’s my personal favorite. Head on over to their download page and download and install whatever version of the latest release is appropriate for your system (I’ll be using 2.12.3 throughout). I have chosen the latest release over the LTS (long term support) version because it includes a variety of new features that are well worth having. Besides – if you decide to stick with this GIS thing you’ll be updating your software on a regular basis.
Once you have installed QGIS, fire it up. Right off the bat there are a couple of things we’ll want to do. The first is simply to rearrange the UI to suit our purposes. The UI is completely customizable (and easily so – just click on things and move them around), so it’s a simple matter to arrange things to better suit our own habits. In time you’ll determine the setup that best suits you – in the meantime I’ll show you the setup that suits me (out of the box, at least).

The next thing we want to do is change the CRS (co-ordinate reference system) of our project. When we first fire up QGIS it defaults to EPSG 4326 (also known as WGS 84). Maps must be projected, because mapping means representing a three-dimensional object (Earth) in two dimensions, and projecting is inescapably imperfect – every map distorts something in some way. The CRS is what defines that projection. WGS 84 is intended for maps at a global scale, but our map will be at a local scale (in Massachusetts), so we’ll change the CRS to a projection that distorts less at our more focused scale. For this map we’ll use EPSG 26986, NAD 83/Massachusetts Mainland (Meters). If you want to know more about projections and co-ordinate systems, a great resource can be found here.
Now we need data. For this map our task will be easy (but don’t get used to it. Finding appropriate data is usually the hardest part of GIS), as Massachusetts has a wonderful State GIS agency called (you guessed it) MassGIS. We’re only going to use six datasets for our map, which really isn’t much as these projects go. It’s still data, though, and it needs to be managed, so let’s talk about that for a minute.
GIS is driven by data. As such, practitioners tend to amass enormous amounts of data. Data that must be organized. Maps are made out of data, and a failure to properly wrangle said data can be crippling. Where did I put that data from 6 months ago? Which one of the 14 folders called “Project X” did I put it into? Or is it somewhere else? It’s a road dataset, so did I put it in the huge folder called “Roads”?
I didn’t just make these questions up. They are questions I have actually had to ask myself at earlier, less organized points in my career. Trust me. Properly managing data is the single most important part of modern GIS. As an added bonus, well organized data is easier to back up. The value of this cannot be overstated.
This is how I do it. I start with one master folder called simply “Maps” (in Windows you can make this folder a distinct Library. I strongly advise doing so). This folder is where I keep everything – data, projects – everything. The advantage of this is that I simply have to back up my “Maps” folder and I can sleep soundly at night. The disadvantage is that my “Maps” folder can get pretty damn large (and I don’t even work with Big Data. If I did I think I would have to store it elsewhere). Inside my “Maps” folder are two other folders: “Data” and “Projects”. My “Projects” folder contains a myriad of folders (usually one per project), as well as a separate folder called “Saves”. This is where I keep all the ‘Saves’ of projects, whichever software I happen to be using. This way all my software defaults to the correct location whenever I ask it to open a project.
The “Data” folder gets considerably more complicated. And personal. Its contents are more individualized and dependent upon the work most often performed and the types of data needed to do so. In my case, I work most often (almost exclusively) in Massachusetts. Because of this, MassGIS is my go-to source for data. So it just makes sense for me to organize my data in a manner similar to the method MassGIS uses to organize their data. Much of their data is organized by town, and I follow suit. My “Data” folder slowly fills with folders named for towns in Massachusetts. Inside each town’s folder I further separate the data into folders for vector data (points, lines, polygons) and raster data (images, digital elevation models). MassGIS organizes some of its data differently, and for this data I include separate folders, appropriately labeled. For instance, I have a folder called “Statewide” for those datasets that encompass the entirety of Massachusetts.
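For what it's worth, the skeleton of this layout can be stamped out in a few lines. A sketch only – the town and layer names are examples, and it builds in a temporary directory so it's safe to run as-is:

```python
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp()) / "Maps"   # in real life: your actual Maps folder

# One folder per concern: project saves, plus per-town and statewide data
# split into vector and raster.
for sub in (
    "Projects/Saves",
    "Data/Greenfield/Vector/Roads",
    "Data/Greenfield/Raster",
    "Data/Statewide/Vector/Hydro",
):
    (root / sub).mkdir(parents=True, exist_ok=True)

for p in sorted(root.rglob("*")):
    if p.is_dir():
        print(p.relative_to(root).as_posix())
```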
For this project we will be using two statewide datasets and four town datasets (three from one town, one from another). Start here to get them.
Again, the choices you make will be your own. I’m just describing how I do it.
First, I scroll down the MassGIS download page until I reach the section titled ‘Transportation’ under the ‘Infrastructure’ heading in the ‘Vector Data’ section. There I click on ‘Mass DOT Roads’, then ‘Download these layers’. I then scroll down to ‘Greenfield’ (in the left-hand column) and click on the filename (eotroads_114.zip) to initiate the download. I download the file to Maps→Data→Greenfield→Vector→Roads. I click my way back to the main download page, then in the next section (‘Other Facilities and Structures’) I download ‘Building Structures (2D, from 2011-2014 Ortho Imagery)’ in a similar fashion (structures_poly_114.zip to Maps→Data→Greenfield→Vector→Structures). Then, under the ‘Physical Resources’ heading, I download ‘Contours (1:5,000)’ from ‘Elevation and Derived Products’ (while I’m there, I also download this file for the next town over: Montague). Lastly, I move on to ‘Inland Water Features’ and download ‘MassDEP Wetlands (1:12,000)’. This one is Statewide, and it is divided into two layers, a polygon layer (wetlandsdep_poly.zip) and a line layer (wetlandsdep_arc.zip), both of which I download to Maps→Data→Statewide→Vector→Hydro.
Now that we have our data we can start building a map. Our first order of business will be to navigate to our ‘Maps’ folder in QGIS’s Browser panel and then add it to our favorites (right-click on it). This will greatly facilitate our future mapping endeavors. Then let’s add in our two contour datasets (all you have to do is double-click on them in the Browser Panel – QGIS can read a zip file) and symbolize them (by double-clicking on their names in the Layers panel).
Maps are built in layers, and the order of our layers should roughly mimic the world we are attempting to depict. So our next order of business should be to add and symbolize our two hydro (water) layers. For some reason, they come from MassGIS double-zipped, but all that costs us is an extra mouse click.
Next, let’s zoom in to a more focused area of interest, then add our road and structures layers, symbolizing them as we go.
Lastly, let’s rename our layers in the table of contents to something more human-readable (just right-click on them individually) and then save our project.
So now we’ve made a map. What do you say we put it onto a piece of paper so it can actually be a useful thing?
Click on File→New Print Composer. Once it’s open, I first change the paper size to the one my printer spits out (ANSI A: Letter). Then I use the ‘Add New Map’ button to add a map to the composition by clicking and dragging corner to corner. Then I click on the ‘Item Properties’ tab and change the scale of the map to 1:24,000. This is because we here in Massachusetts still backwardly use feet and inches, so at a scale of 1:24,000 one inch on my printed map will equal 2,000 feet. This makes my paper map a useful and easy-to-use tool in the real world. Those of you fortunate enough to live in the modern world can do the same thing with the metric system, just with easier math.
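The arithmetic behind that choice of scale is straightforward – at 1:24,000, one paper inch covers 24,000 ground inches:

```python
scale = 24_000
inches_per_foot = 12

# One inch on paper covers `scale` inches on the ground; divide by 12
# to express that in feet.
ground_feet_per_paper_inch = scale / inches_per_foot
print(ground_feet_per_paper_inch)   # -> 2000.0

# The metric equivalent really is easier: at 1:25,000 one centimeter
# covers 25,000 cm = 250 m on the ground.
print(25_000 / 100)                 # -> 250.0
```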
Once the scale is set, I change my project from a picture into a map by adding a scalebar, North arrow and title.
Switch back to your map window and save the project. Congratulations! You made a map. Feel free to print it out and use it in the real world.
Now, in all truth and in the interest of full disclosure, we’ve barely scratched the surface of the cartographic capabilities of QGIS. And we haven’t touched upon its processing or analytical capabilities at all.
But that’s okay, because we did accomplish what we set out to do – we made a custom map from scratch. A map that we can print out for use in the world (alternately, we could export it as an image for use in a document or on a web page). And it wasn’t terribly difficult, which is the point I set out to make.
Years back (while I was still in college) I did a few projects that entailed a lot of time studying the small town of Deerfield, MA (often referred to as the most over-studied small town in America). During this process, my advisor (Bob) introduced me to what was known as The Deerfield Lunch. The Deerfield Lunch was an irregular brown-bag affair, the attendees of which tended to fluctuate for a variety of reasons, the most commonplace being that everyone was quite busy. You see, the people who sat down to these lunches were all scholars of varying stripe, many of them leaders in their field.
I managed to attend a couple Deerfield Lunches (maybe a few – it was a long time ago). They were fun and extremely informative. The first one I attended made the largest impression on me, mainly because it was there that I met Abbott Lowell Cummings. Those of you who don’t happen to be students of New England architectural history may not have heard of him, but he’s a pretty big dog in the world of Domestic Architecture, or – more accurately – Vernacular Architecture. He’s also an absolute sweetheart.
Vernacular Architecture is one of those terms that has been defined extensively, but unfortunately in varying ways. The basic gist tends to hold more or less the same, but the details get muddy at times. In a nutshell, Vernacular Architecture describes structures that were constructed by someone who had not been formally educated in the discipline of architecture. This does not mean they were unskilled or unprofessional. Neither does it mean that they were inexperienced or incompetent. All it means is that a builder of Vernacular structures has not been formally educated as an architect. And it most certainly does not mean that Vernacular structures are in any way sub-standard.
To drive the point home, it has been estimated that as much as 90% of the extant structures on the face of the Earth are, in fact, Vernacular Architecture.
Anyway, a slew of conversations I’ve been having lately have brought Abbott to mind. He’s a big fan of maps in general and GIS in particular, and I think he’d be interested in the ongoing dialogue about data sources.
You know the one I’m talking about. It usually starts when someone says ‘crowd-sourced’ or ‘authoritative’ and then it tends to degenerate from there. Pretty soon, everyone’s arguing about data quality and reliability as if they have a direct correlation to the data’s source (they don’t, in case you were wondering).
As far as I can tell, the problems all stem from inadequate and/or ill-defined terminology. There seems to be a certain amount of determination to doggedly stick to terms that are not appropriate to the task at hand. While I love the term ‘crowd-sourced’, it has some connotations attached to it that are rather counter-productive. It’s the crowd, after all. Barely a step away from the mob. And we all know about the mob – they’re unskilled and unruly. They’re the Great Unwashed. They stormed the Bastille, for crying out loud!
All personal interpretations aside, though, ‘crowd-sourced’ doesn’t actually describe what we’re talking about here. At least, it doesn’t according to the guy who coined the term. In fact, ‘crowd-sourced’ is far closer to ‘outsourced’ than it is to ‘volunteered’. When we ‘crowd-source’ a job, we start by looking for volunteers. Failing that, we hire free-lancers. The waters only get cloudier (no pun intended) when we start adding cost to the definition.
We’re not just talking about free v. not-free data here (and by ‘free’, I’m talking about monetary cost. Any other form of ‘free’ falls under licensing, which is far too large a discussion to delve into here). If we were, we could just draw that line. Neither are we talking about good quality v. poor quality data. Again, we could simply draw that line and be done with it. The problem we’re having is that free and not-free, as well as good quality and poor quality, fall on all sides.
The point I’m trying to make is that if we must draw lines (and it appears that there’s no avoiding it), we should do so using terms that do not imply judgment calls in regard to cost and/or quality.
And since everyone seems to be trying to draw lines according to data source, why don’t we go ahead and do just that? A reasonable attempt was made with Volunteered Geographical Information (VGI), except in that ‘volunteered’ describes the method in which the data is delivered, not the source from whence it comes. I also think we should avoid narrowing our definitions to geographical data. Data comes from a variety of sources – applying geography to it is kind of our job, isn’t it? Besides – I think we Map Dorks tend to put too much emphasis on the ‘G’ in GIS. GIS should be information that is geographically informed, not information that’s geographically driven.
Another horrible attempt was to label some data as ‘authoritative’ (as opposed to ‘crowd-sourced’). I hope I don’t have to spell out everything that’s wrong with this one. ‘Authoritarian’ would probably hit closer to the intent behind this choice. ‘Official’ would be a better term to describe the source, but it still implies data that is more accurate, correct, or just plain better.
Why don’t we stop worrying about cost (it seems obvious enough) and quality (a judgment call best left to the individual user) and focus our attention on data source? Where does our data come from (aside from the data we gather ourselves, which need not enter into this conversation)? While this division usually ends in two categories, I would argue that three is a more appropriate number (and I do not mean to imply that three categories can adequately include all available sources of data. They can cover a very large percentage of them, though). Here, presented in an order that is not intended to imply or connote a bloody thing, are the three categories I would separate our data into, including the terms I use to describe them along with a brief definition and examples.
1) Governmental – This is probably the largest of the three categories. This describes data that is produced directly by a governmental body, be it national, regional or local. This data is often free (unless you count the fact that we pay for it through taxes), but not always. This data is often described as ‘official’ or ‘authoritative’. While these descriptions are technically correct, they should not be taken to mean that governmental data is necessarily more accurate or ‘correct’ than other sources of data (see the previous post for a brief discussion of this). The USGS and the oft-mentioned MassGIS are good examples of Governmental data sources.
2) Commercial – This is data that springs from professional sources. I use the term ‘commercial’ instead of ‘professional’ because this is not the only category to include data created by professionals. In fact, professionals contribute enormous amounts of data in all three categories. What separates this category from the others is that it exists in the private sector and it consists of data that was gathered, created and/or derived for money. Of course, after the fact much of the data is made freely available for a variety of reasons, not least of which is that government is often the entity paying for the contract (or corporations so large they might as well be governmental bodies. You know – like Google and Microsoft). Companies like GeoEye, Digital Globe and Navteq are better known examples of Commercial data sources, but there are many, many more out there.
3) Vernacular – Vernacular data is data that is provided voluntarily, mostly by private entities, for public consumption. This is the kind of data that’s provided by people on the ground. And while most of it is freely accessible to all, a certain amount of knowledge and skill is needed before actually contributing (my mother, for instance, wouldn’t know where to begin). What separates this kind of data from the others is its self-correcting nature. Vernacular data tends to be openly editable, which means that anyone who notices errors can correct them. When this is not the case, the public at large is usually given access to the machinery needed to report errors. For those of you who don’t know me and have never before read this blog, this comprises my personal favorite source of data, for a variety of reasons (it’s also the one I often refer to as ‘Dork-sourced’). I choose the word ‘vernacular’ for much the same reason it was chosen to describe structures that have been standing for centuries – because experience and dedication and commitment are often stronger than formal indoctrination. Examples of these data sources are OpenStreetMap (including the numerous spin-offs that expand upon OSM’s data) and Google Maps (the My Maps aspect, as well as the API). Other examples (sometimes referred to as ‘passive’ data) include data like geo-tagged photos at Flickr or Panoramio.
The most important thing to remember about all three of these categories is that none of them possess any kind of exclusive claim to accuracy and/or quality. I’ve chosen the locations of these lines carefully, and I feel they’re safely drawn. It is important to remember that the lines refer only to source.
In practice, I find that I tend to dance between all three categories, depending on the project at hand and the data I can get my hands on for said project. I have my personal preferences, and they usually dictate where I start a search for data, but at the end of the day I select my sources based solely upon who offers the best available data for the task I have before me. I’m sure I’m not alone in this.