Category Archives: Online HGIS tools

In search of Canadian HGIS data

Researchers who study Canada have generated large quantities of geohistorical data for many years. As we reflect on the creation of a national geohistorical infrastructure, it is pertinent to identify datasets at different scales which could become part of such a portal. We are therefore trying to enhance the discoverability of existing and available datasets. While it would eventually be preferable to enumerate and describe each layer and each attribute table, such detailed granularity is not necessary for the moment. We hope, at this stage, to identify collections which have emerged from different research projects or from the online deposit of previously georeferenced digital data such as:

  • raster geographic maps
  • aerial photographs
  • vector layers
  • attribute data linked to vector layers

We have already identified datasets offered by different types of creators so that we can present diversity in the nature and the type of data which can interest researchers. We have therefore identified:

  • quality international data (FAO)
  • data from collaborative mapping projects (OpenStreetMap, Natural Earth)
  • data available on GIS company web sites (ESRI)
  • national data (government of Canada, Géogratis)
  • provincial or territorial data (British Columbia, Yukon, Québec, Nova Scotia, Prince Edward Island, New Brunswick)
  • municipal data (Toronto, Montréal, Sherbrooke)
  • research team data (CIEQ, NICHE, LHPM, MAP, VIHistory)
  • data from map library and archive centres (Scholars’ Geoportal, MADGIC, GéoIndex+)
  • personal initiative data (historical railway lines)

Choosing what type of metadata to associate with each dataset has meant achieving a compromise. An insufficient level of detail would prevent effective searches, while requirements for overly detailed metadata could discourage data creators who are not trained to create metadata which meets international standards. According to Rodolphe Devillers, we can use six criteria to define the quality of a geospatial dataset1.

i. Definition: Allows the user to evaluate whether the nature of a datum and of the object it describes, i.e. the “what”, meets his or her requirements (semantic, spatial and temporal definitions);

ii. Coverage: Allows the user to evaluate whether the territory and the period for which the data exists, i.e. the “where” and the “when”, meet his or her requirements;

iii. Genealogy: Allows the user to know where the data came from, the project’s objectives when the data was acquired, and the methods used to obtain the data, i.e. the “how” and the “why”, and to verify whether this meets the user’s requirements;

iv. Precision: Allows the user to evaluate the data’s worth and whether it is acceptable for the user’s requirements (semantic, temporal and spatial precision of the object and of its attributes);

v. Legitimacy: Allows the user to evaluate the official recognition and the legal standing of the data and whether it meets the user’s requirements (de facto standards, recognised good practices, legal or administrative recognition by an official agency, legal guarantee by a supplier, etc.);

vi. Accessibility: Allows the user to evaluate how easily the user can obtain the data (cost, delays, format, privacy, respect of recognised good practices, copyright, etc.).

A metadata standard which would meet all of these criteria may seem overwhelming for many people who would like to make their data available. We therefore propose to use the format defined by the Dublin Core Metadata Initiative, an international standard whose field types are easier to understand for people less familiar with metadata. We have applied and interpreted the DCMI based upon its general definition available on Wikipedia2 and on the interpretation of a few fields proposed by the Bibliothèque nationale de France3. This approach can certainly be criticised, because it is geared towards a simple application rather than perfection. Based on how metadata will be entered in this list, we can refine these principles to improve this compromise. The fields do not appear in the same order as in the DCMI and some are subdivided to provide a slightly finer level of granularity.

Table 1. List of fields used to describe datasets

Élément (French) Element (English) Comment
Créateur Creator The main entity responsible for creating the content of the resource. It can be the name of one or many people, an organisation, or a service.
Format: Last name, First name.
Separate multiple entities with a semicolon.


Contributeur Contributor Entity responsible for contributing to the content of the resource. It can be the name of one or many people, an organisation, or a service.
Format: Last name, First name.
Separate multiple entities with a semicolon.


Titre Title Name given to the resource.
The title is generally the formal name under which the resource is known. Indicate the title in the language of origin of the resource. If the resource does not have a formal title and the title is derived from the content, place the title between square brackets.


Description.Générale Description.General A presentation of the content of the resource. Descriptions are generally free-form text. As much as possible, use the description provided by the creators of the resource.


Description.Nature-du-projet Description.Project-type A key word which allows us to categorise projects according to the following typology:

– governmental
– academic
– individual
– commercial
– collaborative


Description.Méthodologie Description.Methodology Free form text which describes the process used to create the resource.


Description.Sources Description.Sources List of documents which were used to create the resource. This field is different from the field Source, which is used to identify where a user can acquire the resource.


Description.Champs Description.Fields List of fields used in the table or database, preferably with a description.


Date.Publication Date.Published Date when the resource was originally created. This is not necessarily the date represented by the resource.


Date.Mise-à-jour Date.Updated Date of an update event in the life cycle of the resource.


Couverture.Temps Coverage.Time Perimeter or domain of the resource, in this case, the date, the year or the period represented by the resource.


Couverture.Espace Coverage.Space Perimeter or domain of the resource, in this case, the territory. It is recommended to use a value from a controlled vocabulary.


Couverture.Niveau Coverage.Level A key word which identifies the level of the spatial coverage of the resource:

– international
– national
– provincial
– regional
– municipal
– local


Sujet.ISO Subject.ISO A keyword which allows us to link the resource to one of the ISO categories of geospatial data.

– agriculture / farming
– biota / biota
– limites administratives / boundaries
– climatologie / climatology
– économie / economy
– élévation / elevation
– environnement / environment
– information géoscientifique / geoscientific information
– santé / health
– imagerie / imagery
– intelligence / intelligence (military)
– eaux intérieures / inland waters
– localisation / location
– océans / oceans
– urbanisme / planning
– société / society
– structure / structure
– transport / transportation
– services publics / utilities

See:


Sujet Subject One or several keywords which can be used to categorise the resource.


Format Format The physical or, in this case, the digital manifestation of the resource, i.e., the MIME type of the document:

– shp
– kml
– kmz
– zip
– csv
– other formats used in GIS


Langue Language The language of the intellectual content of the resource.
It is recommended to use a value defined in RFC 3066 [RFC3066] which, with the ISO 639 [ISO639] standard, defines two-letter primary language codes, as well as optional subcodes.
Examples:
– en
– fr


Type de ressource Type Type of content.
By default, the resources identified as part of this project are of the dataset type.


Droits.Licence Rights.License Brief indication of the type of licence which applies to the data:

– copyright
– CC (or one of its variations)
– public domain
– open


Droits.Accessibilité Rights.Access One of the following terms will allow us to identify how the data can be accessed.

– free
– one-time payment
– free subscription
– paid subscription


Droits.Conditions d’utilisation Rights.Terms of use Text copied and pasted from the web site where the data is deposited to specify the creators’ terms of use.


Source Source Location from which a user can obtain the resource. This will generally be a URL. A Source.URI could be added should it become pertinent.


Relation Relation Link to other resources. A resource can be derived from another or can be associated with another as part of a project.
Examples: isPartOf [other resource number]
isChildOf [other resource number]
isDerivedFrom [other resource number]


Éditeur Publisher Name of the person, organisation or service which published the document.


Commentaire Comment Any additional information which can help users better understand the resource.


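To make the compromise concrete, here is a minimal sketch of how a single dataset could be described with the fields in Table 1. Only the field names and controlled vocabularies come from the table; every record value below is invented for illustration.

```python
# Controlled vocabularies taken from Table 1.
PROJECT_TYPES = {"governmental", "academic", "individual", "commercial", "collaborative"}
COVERAGE_LEVELS = {"international", "national", "provincial", "regional", "municipal", "local"}
ACCESS_TERMS = {"free", "one-time payment", "free subscription", "paid subscription"}

# A hypothetical dataset record; all values are invented for illustration.
record = {
    "Creator": "Tremblay, Marie",                      # Format: Last name, First name
    "Title": "[Railway lines of Quebec, 1860-1920]",   # derived title, hence the square brackets
    "Description.Project-type": "academic",
    "Coverage.Time": "1860-1920",
    "Coverage.Space": "Québec",
    "Coverage.Level": "provincial",
    "Subject.ISO": "transportation",
    "Format": "shp",
    "Language": "fr",
    "Type": "dataset",
    "Rights.License": "open",
    "Rights.Access": "free",
}

# Basic validation against the controlled vocabularies above.
assert record["Description.Project-type"] in PROJECT_TYPES
assert record["Coverage.Level"] in COVERAGE_LEVELS
assert record["Rights.Access"] in ACCESS_TERMS
```

A few lines of validation like this are enough to keep contributed records consistent without demanding full ISO-grade metadata from creators.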

A list of identified resources is available here: Some of the records are incomplete, and we are working on completing them. If you would like to propose a dataset, you can fill out the form available here:

1  DEVILLERS, Rodolphe (2004). « Conception d’un système multidimensionnel d’information sur la qualité des données géospatiales », [En ligne], Ph. D., Université Laval <>.

2  Collaborateurs de Wikipédia (2016). « Dublin Core » <>.

3  Bibliothèque nationale de France, Direction des Services et des Réseaux, Département de l’Information bibliographique et numérique (2008). « Guide d’utilisation du Dublin Core (DC) à la BnF : Dublin Core simple et Dublin Core qualifié, avec indications pour utiliser le profil d’application de TEL », version 2.0 <>.


OCUL releases over 1000 early topo maps of Ontario…

Guest post by Amber Leahey, Scholars Portal, and Jay Brodeur, McMaster University Library

The Ontario Council of University Libraries (OCUL1) is pleased to announce the release of a shared digital collection of more than 1000 early topographic maps of Ontario, now available online!

Map libraries are really wonderful places–just ask any librarian or staff member who provides patrons with services, guidance, and access to maps and associated cartographic material at university libraries across Ontario. Or better yet, ask the countless patrons who use the collections’ vast and varied information to support their research, education, work, and private lives. Indeed, there is much to be said for sparking interest in maps and GIS by telling stories with old maps, of which there are many at libraries across the province. With such rich and diverse map collections, and thanks to the careful curation and digitization of over 1000 early topographic maps of Ontario, academic libraries continue to play a key role in preserving our national and provincial heritage in the digital age.

Led by the OCUL Geo Community, the OCUL Historical Topographic Map Digitization Project is a province-wide collaboration to inventory, digitize, georeference, and provide broad access to early topographic maps of Ontario. The initiative represents the single most comprehensive digitization project of the early National Topographic Series (NTS) map collection in Canada. The publicly-available collection provides access to georeferenced topographic maps at the 1:25000 and 1:63360 (one inch to one mile) scales, covering towns, cities, and rural areas in Ontario over the period of 1906 to 1977. As the collective achievement of individuals representing university libraries across Ontario, this shared collection exemplifies OCUL’s continuing commitment to collaborative approaches that improve access to knowledge both within and beyond the province. The completion of this project also serves as an opportunity to reflect on the history of the OCUL Geo Community, and celebrate the shared vision and effort that have made possible the current achievement.

The significance of historical maps

Much like a photograph, landscape painting, or textual account, a historical or otherwise superseded map preserves information from the past and provides its viewer an opportunity to explore the ways in which environments, cultures, and human knowledge have changed over time. As a part of their mission, map collections, libraries, and archives have a long tradition of preserving and providing access to a wide array of cartographic and cultural information.

In the present day, early topographic maps are a critical resource for those with an interest in historical events and exploring change over time. For many researchers, local historians, planners, conservationists, engineers, and consulting firms (to name but a few), historical topographic maps provide a unique snapshot of a given time period, showing both man-made and natural features such as spot heights, waterways, shorelines, boundaries, roads, railways, houses, barns, electricity lines, industry, agriculture, and much more.

Ottawa’s Changing Landscape and Growth 1906-1948
Animated compilation of early topographic maps of the Ottawa area, showing changes and growth between 1906 and 1948.
From curation to digitization: The role of the OCUL Geo Community

Among the challenges faced in producing such a comprehensive digital collection is the effort required to inventory and bring together sheets that exist across a multitude of map libraries. Given the variety and quantity of maps created during any given period and the finite nature of storage space and budgets, map collection curators are required to make careful (and often difficult!) choices about the collections they develop, steward, and preserve over time. As a result, many institutions have focused their topographic map collections around items of local relevance and significance. In Ontario, for example, the maps that make up the digitized series–originally produced by the Department of National Defence (until 1923: the Department of Militia and Defence)–are dispersed across many Ontario university libraries. Over the years, Ontario libraries have collaborated to develop a comprehensive inventory of known maps from the series, more recently working closely with the Archives of Ontario and Library and Archives Canada for this digitization project. That the vast majority of sheets in these collections could be found at OCUL institutions is a testament to the foundational work of the early Geo and Map Communities.

As the predecessor of the OCUL Geo Community, the OCUL Map Group (then known as the OULC Map Group) was formed in 1973 with the goal of communicating and collaborating on map-related projects. Among their completed initiatives was the creation of a union catalogue of topographic maps across institutions. The importance of this work to OCUL Geo’s current-day success shouldn’t be overlooked, as these foundational efforts provided a means for coordinating map collections across OCUL institutions, and helped ensure maximal collective coverage in a cost- and space-efficient manner. Today, the OCUL Geo Community continues the goals of its predecessor, with a commitment to fostering dialogue around important issues such as best practices for the digitization of maps in libraries, access to maps and GIS for research, and collaboration on a variety of library activities in these areas.

Moving forward, the group plans to engage with the wider map community in Canada about the project, specifically at the upcoming Association of Canadian Map Libraries and Archives Carto 2017 Conference being held in Vancouver, B.C. in June (ACMLA website). The group hopes to identify opportunities to build on the project, engaging with other university libraries and archives, to digitize maps from this national collection.

We are very excited about this release; please let us know how you may be using the maps for your next project! For more information or to get in touch with us, contact the project members at

We hope to hear from you!

1 OCUL is a consortium of 21 University Libraries in Ontario, and fosters collaboration around library activities and services including map and GIS collections, digitization, and digital curation. Ontario’s university libraries have been working together through OCUL on initiatives such as this since 1967. In 2017, OCUL is celebrating its 50th anniversary, and this project demonstrates the ongoing success of this collaboration.


The Neptis Geoweb: A behind-the-scenes look into the underpinning framework

Guest post by Vishan Guyadeen, Neptis Foundation. Neptis is one of the collaborating partners of the Canadian Historical GIS project. 

The age of data is upon us. Data of different types and quality, from many different fields, have become more and more available. However, in many cases it can be hard to glean valuable information from data because one might not be able to easily visualize it or compare it with other datasets.

In studying the forces that make up and shape urban regions, it is particularly difficult to contextualize data since it exists in many different places. The Neptis Geoweb is an interactive mapping and data visualization platform that aims to address this issue. Specific to the Toronto region, the Geoweb utilizes data that is normally siloed in various government organizations to make the complex policies shaping the region more accessible and easier to understand.

One subject that requires data that is often difficult to obtain, understand and visualize is the history of the Greater Golden Horseshoe. The Neptis Geoweb has a unique feature – the Timeline, which guides the user through milestone policies/events that have helped to shape the region into its current state. The Timeline is an interactive feature that describes and visualizes milestones in the regional context. Users may also compare these historical map layers with other current and historical datasets for further context.

Neptis Geoweb showing historical information about Region of Niagara, keyed to timeline below map

Creating a platform that is capable of showcasing and managing large quantities of data is not an easy undertaking. The Neptis Geoweb was built with a fully customized framework, which allowed for easy access, clear and up-to-date data, as well as the ability to maintain different types of content (e.g. maps, charts, and text). There are two main underlying components that make this possible.

First, the most important component of the Neptis Geoweb is Carto (formerly CartoDB). Carto is a cloud-based GIS platform that houses and queries all of the data layers on the Geoweb. Carto was utilized because it is a powerful and flexible platform that is easy to use. For example, Carto provides the ability to quickly manipulate data in the cloud using SQL and also visualize spatial data using either a user-friendly wizard interface or an advanced CartoCSS editor (see screenshot below). Further, Carto provides the Geoweb with the flexibility of using various data types and the ability to seamlessly interact with other platforms such as Leaflet, Mapbox, and OpenStreetMap. These additional platforms enhance the overall functionality of the Geoweb. Carto offers these and many other benefits to the Geoweb while maintaining an overall ease of use that doesn’t always require a GIS professional.
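As a sketch of the kind of cloud-side SQL manipulation described above, the snippet below builds a request URL for Carto's SQL API (the documented `/api/v2/sql` endpoint). The account name, table, and columns are invented for illustration; check your own account's endpoint before relying on it.

```python
from urllib.parse import urlencode

# Hypothetical Carto account and table names, for illustration only.
account = "example-account"
sql = (
    "SELECT name, ST_AsGeoJSON(the_geom) AS geometry "
    "FROM greenbelt_boundaries "
    "WHERE designation_year <= 2005"
)

# Carto's SQL API takes the query as the `q` parameter; `format` selects the output.
url = f"https://{account}.carto.com/api/v2/sql?{urlencode({'q': sql, 'format': 'GeoJSON'})}"
print(url)
```

Fetching that URL would return the matching rows as GeoJSON, ready to hand to a web-mapping library such as Leaflet.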

Carto graphic interface for editing layer graphics using CartoCSS (one of several methods)

Second, the administrative interface of the Neptis Geoweb was custom-built by Carto developers, to organize and maintain map layers and other non-spatial content. Neptis staff are able to prepare, organize and maintain content such as map layers, municipal data profiles, and short topic stories. When dealing with large quantities of raw data and map layers, it is essential to have a way of managing content. The admin interface contains simple forms that make up the content shown on the Neptis Geoweb. The screenshot below shows part of the form that is required when creating and updating map layers.

Neptis Geoweb custom administrative interface - form for New Layer

This is a brief introduction to the Neptis Geoweb and the two main components that make it an efficient platform. Like Carto and web mapping as a whole, the Neptis Geoweb is an evolving project. As more data becomes available – whether relating to local areas, the region or beyond, the intent is to continue to enrich the Geoweb.

Using MapScholar to visualize maps in the Murray collection

Using MapScholar to put the Murray Maps of Canada ca 1761 online

Guest post by S. Max Edelson, University of Virginia

This semester, I’m leading a group of University of Virginia undergraduates in a collaborative, project-based digital humanities course to put the Murray Map of Canada online in a dynamic digital exhibition. Taught as a selective Pavilion Seminar, this “Digital Practicum in Map History” is a hands-on experience that combines traditional reading, writing, and discussion with a workshop in digital humanities development. It involves an interdisciplinary focus on the history of cartography, visual design, digital humanities, public history, and the global history of empire.

As librarians scan the contents of their map archives, preserving fragile artifacts by creating high-resolution images, new tools are being developed to present these vital historic objects to a broad public audience. One of those tools is MapScholar, a distributed, browser-based visualization authoring tool purpose-built for illustrating scholarship in the history of cartography. With support from the ACLS and the NEH, research scientist Bill Ferster and I built MapScholar at University of Virginia’s SHANTI (Sciences, Humanities, and Arts Network of Technological Initiatives). My primary goal was to build a dynamic platform to display some 300 maps that are the subject of my forthcoming book, The New Map of Empire: How Britain Imagined America before Independence (Harvard University Press, 2017). Among the many maps I examined for this research, I was intrigued by the Murray Map collection at the William L. Clements Library at the University of Michigan. This huge manuscript collection–copies of which are also held by the British Library and Library and Archives Canada–seemed an ideal source to mount and view online, bringing all of its disparate pieces together through georeferencing to fully appreciate the scope and ambition of this eighteenth-century surveying and mapping project.

When British forces occupied New France in 1760, the territory’s military governor, General James Murray, initiated a comprehensive survey of what would become, after the formal cession in 1763, the British colony of Quebec. The impulse to map Quebec came from military rather than administrative designs. Murray expected the province to be handed back to France after the peace had been negotiated, and he wanted to gather strategic intelligence that might be useful in support of a future invasion. As Murray explained to William Pitt in 1762, with this survey in hand to reveal the intricate passages along the waterways of the St. Lawrence River valley, Britain “never again can be at a loss how to attack and conquer this country in one campaign.” Murray dispatched eight army engineers to lead surveys along different sections of the river. The composite map they produced contained seventy-four separately mapped sections that, when joined together, formed an interconnected image forty-five feet long and thirty-six feet tall. Representing space at the scale of two thousand feet to one inch, these maps were among the highest resolution topographic maps produced by eighteenth-century surveyors anywhere. The Murray maps’ design as a strategic profile of the province was made clear by the addition of demographic summaries that enumerated how many men capable of bearing arms lived in each district.
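A quick back-of-envelope computation conveys the scale of the composite. Only the scale (2,000 feet to one inch) and the joined dimensions (forty-five by thirty-six feet) come from the text; the rest is arithmetic.

```python
# Ground coverage implied by the Murray map's scale and physical size.
SCALE_FT_PER_IN = 2000          # two thousand feet to one inch
FT_PER_MILE = 5280

width_in = 45 * 12              # 45-foot-long composite, in inches
height_in = 36 * 12             # 36-foot-tall composite, in inches

ground_width_mi = width_in * SCALE_FT_PER_IN / FT_PER_MILE
ground_height_mi = height_in * SCALE_FT_PER_IN / FT_PER_MILE
print(f"{ground_width_mi:.0f} x {ground_height_mi:.0f} miles")  # roughly 205 x 164 miles
```

In other words, the seventy-four joined sheets depict a swath of the St. Lawrence valley roughly two hundred miles across at a resolution no printed atlas of the period approached.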

Map curators Brian Dunnigan and Mary Pedley at The William L. Clements Library at the University of Michigan provided high-resolution scans of the Murray Map and have met with the class via video conference to help us develop it. As students georeference maps, design dynamic visualizations, record object metadata, manage distributed web resources, and write essays and annotations that provide context and interpretation, they will gain first-hand experience in digital humanities work.

We are beginning to georeference the collection now, and I will provide updates about our progress in a future blog post.

S. Max Edelson is an Associate Professor at the University of Virginia in the Corcoran Department of History.

Map Your History! Building and Sharing a Historical Spatial Data Infrastructure with the Keweenaw Time Traveler Project

While Historical GIS (HGIS) has become a familiar approach in the social sciences and humanities (Gregory and Geddes, 2014), recent trends in the social science use of GIS have called for HGIS implementations that can apply Big Data-based HGIS approaches to more qualitative research questions and, perhaps most importantly, more closely involve the public. Approaches range from allowing users to contribute to HGIS research using improved web interfaces, such as the New York Public Library’s Building Inspector, to the expansion of qualitative HGIS research (Olson, 2011; Lafreniere and Gilliland, 2015). In the broader world of GIScience, researchers have developed hybrid qualitative/quantitative tool combinations that expand the research potential of GIS further still (Kwan and Ding, 2008; Jung and Elwood, 2010); these have more recently become topics of interest in the HGIS community as well. As part of this trend, Michigan Technological University’s Historic Environments Spatial Analytics Lab (HESAL) is preparing to launch the Keweenaw Time Traveler project – combining the latest generation of historical spatial data infrastructure with Web 2.0 technology and public outreach in ways that foster closer connections between research and the public by making history both fun and accessible.

Our Subject in a Nutshell: The Copper Country

The Keweenaw Time Traveler Project (KeTT) brings to the public a regional HGIS focusing on the Copper Country of upper Michigan, a region of the Midwest USA that contains the world’s largest deposits of nearly pure elemental, or native, copper. Native Americans exploited this resource for thousands of years; a subsequent industrial copper boom in the mid 19th century led to the area becoming the world’s largest supplier of copper by the 1880s, with a rapidly growing population and massive mining infrastructure quickly built in what had been a remote, if beautiful, wilderness. By the end of WW I, economic factors coupled with the growing cost of extraction led to a long, slow decline in the Copper Country’s mining economy, ending with the closure of the last mines at the end of the 1960s. When mining activity ceased, the entire region became a vast industrial archaeology site, a relict landscape. Today, with the population a fraction of its historical peak, the Copper Country’s economic base has largely shifted to service and tourism; local identity remains closely tied to Keweenaw’s mining heritage, however, and the area attracts visitors as much for its mining history as for its natural beauty.

Building the Foundation of the KeTT: Datasets and the CC-HSDI

The Keweenaw Time Traveler benefits from the richness of historical data found within its geographic area of focus. The largest historical copper mining companies in the region, such as the Calumet & Hecla and Quincy mining companies, were among the great industrial giants of their day; the scale of their enterprise required a vast industrial infrastructure along with company towns to house their workers, all of which had to be designed, built, and paid for. As a result, most of the towns and large mining locations in the Copper Country are extraordinarily well documented in the form of an extensive body of Sanborn fire insurance plans (FIPs), company-produced maps with detail that even surpasses the FIPs, plan drawings, and blueprints. In an age of corporate paternalism and scientific management practices, mining companies also extensively documented the lives of their workers and their families; the KeTT team has begun digitizing an unprecedented wealth of detailed company housing, employment, and health records that provide far more information than standard census data. These are combined with decennial census data from the Minnesota Population Center, business and phone directories, and school records to provide a uniquely detailed look at the history of an entire region down to the level of the individual over the course of a century.

The core of the project is the Copper Country Historical Spatial Data Infrastructure (CC-HSDI), a next-generation implementation of HGIS designed to better facilitate both quantitative and qualitative research, while also fostering public engagement with both local history and the concept of HGIS itself. Using ArcGIS Desktop, ArcGIS Server and a PostgreSQL geospatial database, the CC-HSDI contains a series of ESRI Map Services consisting of georeferenced maps or FIPs broken into a series of time slices roughly corresponding to census years (in addition to smaller collections of maps from other years). Building these map services presented an early challenge to the KeTT team, as the size of each map service (representing a single town in a single year) ran into the tens of gigabytes and required the establishment of a dedicated PostgreSQL geospatial-enabled server at Michigan Tech. Subsequent expansion of the HSDI will require these services to migrate to an off-premises enterprise-scale server facility (Amazon AWS) in the near future.

The historical built environment of each time slice in the CC-HSDI is then hand-digitized from the map services, resulting in over a hundred thousand building footprint polygons (as well as roads, rail lines and a few other infrastructural components). These polygon shapefiles serve as the geographical anchor point for all of the CC-HSDI’s non-map-based historical data mentioned previously and constitute the “built environment stage” of the HSDI (after Lafreniere and Gilliland, 2015). This stage includes not only the building footprint itself, but other relevant data transcribed from the FIPs, including the spatial arrangement, street address, and number of stories for each building.

Linked to the built environment stage geodatabases are non-map sources including data from the nearest census, business directories, phone directories, and company and school records. These records capture the social environment of each time slice in incredible detail, including whether a primary school student was immunized, or the medical profile of mining company employees. Coupled with the census data and business directory data that are already staples of HGIS, this “social environment stage” (after Lafreniere and Gilliland, 2015) not only represents a step forward in the ability of HGIS to contribute to qualitative research on past social environments, but provides the public with a wealth of local information that fosters a personal connection with the HGIS.
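A minimal sketch of the linking idea described above: social-environment records are attached to the digitized building footprints through a shared key, here the transcribed street address. All field names and data below are invented for illustration; the project's actual schema and linkage method may differ.

```python
# Hypothetical built-environment polygons, keyed by transcribed street address.
footprints = {
    "101 5th St": {"footprint_id": 17, "stories": 2, "use": "dwelling"},
    "103 5th St": {"footprint_id": 18, "stories": 1, "use": "commercial"},
}

# Hypothetical social-environment records from a directory of the same time slice.
directory_1910 = [
    {"name": "Maki, John", "occupation": "miner", "address": "101 5th St"},
    {"name": "Erkkila, Anna", "occupation": "teacher", "address": "103 5th St"},
]

# Join each person to the building they occupied in that time slice.
linked = [
    {**person, "footprint_id": footprints[person["address"]]["footprint_id"]}
    for person in directory_1910
    if person["address"] in footprints
]
print(linked[0])  # the first directory entry, now carrying its footprint_id
```

The same address key then lets a census record, a school record, and a health record all resolve to one polygon on one map sheet.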

The KeTT prototype web apps, developed in ArcGIS Online Web AppBuilder, allowed the team to gain valuable experience in developing requirements for the forthcoming full public launch of the KeTT Project.

Public Outreach and Collaboration: The Keweenaw Time Traveler

While the Copper Country HSDI is an invaluable research tool in its own right, the KeTT project serves to connect the public with a new way of viewing their past environments. This is accomplished through the use of web apps; each app represents a different way of exploring and/or contributing to the HGIS. The KeTT team is currently developing four different web apps that allow the public to interact with and contribute to the HSDI. These web apps provide the user with tasks ranging from basic historical map interaction exercises to more complex storytelling:

  • Recording the built environment by building material (using the fire insurance plan color-codes)
  • Identifying and recording the broad use-type of a structure (dwelling, commercial, institutional etc.)
  • Transcribing descriptive map text for individual buildings
  • Contributing personal stories and recollections about specific places on historic maps

Initially, the team used ArcGIS Online’s Web AppBuilder to build and test these apps for the KeTT. ArcGIS Online apps are an excellent resource for HGIS researchers looking to share data with the public; researchers with little or no programming background can quickly convert GIS data into customizable, publicly accessible web apps that take advantage of ArcGIS Online’s robust back-end infrastructure. However, large raster datasets can become expensive to share this way, as can the implementation of geospatial analysis tools, which consume ESRI credits. After building several prototypes, the KeTT team also realized they wanted more control over the app interfaces and underlying programming logic than the Web AppBuilder provided. This meant hiring a programmer and developing custom web apps in JavaScript that make use of the CC-HSDI’s ESRI map and feature services. Despite this, ArcGIS Online’s Web AppBuilder proved invaluable for creating app prototypes and allowed the team to develop clearer ideas about the look and feel of the final web apps.

GRACE project

The KeTT project has emphasized public outreach and involvement as an integral part of the construction of the HGIS, not just its dissemination or end use. The team has reached out in several ways to accomplish this. Last summer’s GRACE program served as an early example of what the KeTT project could achieve. The GRACE (GIS Resources and Applications for Career Education) Project is an NSF-funded collaboration between Dr. Yichun Xie, Professor and Director of Eastern Michigan University’s Institute of Geospatial Research and Education, Dr. Don Lafreniere at MTU’s HESAL, Michigan’s Virtual University, and several statewide professional GIS organizations to provide hands-on training in the use of GIS to students and teachers in economically disadvantaged communities. Last summer, the GRACE project partnered with the Keweenaw National Historical Park to bring GRACE to the Copper Country. Interns recruited from local high schools joined the HESAL at MTU in Houghton, Michigan to digitize major portions of the KeTT’s built environment stage from Sanborn fire insurance plans. During the course of the internship, GRACE students not only learned resume-building GIS skills, but also explored the history of their local community at a level of detail few people have access to. At the end of the internship, interns used ArcGIS Online StoryMaps to share with members of the public the portions of their local history they had found most interesting during their work. The KeTT team found the GRACE project to be a great way to involve the local community in ways that provided real benefit, and to generate some publicity in the process.

The GRACE project took high school students into the lab and field, helping to build the Copper Country HSDI while also using it to explore the historical built environment of their local community and, ultimately, to share their experiences through public presentations.

Next Steps

While a lot has been accomplished thus far, the KeTT project is just warming up; we plan to “go live” this spring, replacing the current beta web apps on the project website with the final, custom-programmed web apps that allow the public to explore, interact with, and contribute to the Keweenaw Time Traveler. The release of the final apps will coincide with a new season of KeTT team outreach activities in partnership with the Keweenaw National Historical Park and Keweenaw Heritage Sites to spread awareness of the project. In addition to the ongoing GRACE project, we will be bringing custom-built touchscreen kiosks to numerous public events around the Keweenaw that allow people to use the KeTT web apps with the help of KeTT team members and partners. Stay tuned!


Gregory, I. N., & Geddes, A. (2014). Toward spatial humanities: Historical GIS and spatial history. Bloomington: Indiana University Press.

Jung, J.-K., & Elwood, S. (2010). Extending the Qualitative Capabilities of GIS: Computer-Aided Qualitative GIS. Transactions in GIS, 14, 1, 63-87.

Kwan, M.-P., & Ding, G. (2008). Geo-Narrative: Extending Geographic Information Systems for Narrative Analysis in Qualitative and Mixed-Method Research. The Professional Geographer, 60, 4, 443-465.

Lafreniere, D., & Gilliland, J. (2015). “All the World’s a Stage”: A GIS Framework for Recreating Personal Time-Space from Qualitative and Quantitative Sources. Transactions in GIS, 19, 2, 225-246.

Olson, S., & Thornton, P. A. (2011). Peopling the North American city: Montreal 1840-1900. Montreal: McGill-Queen’s University Press.


Breathing new life into old Historical GIS data

— the benefits of the long-tail of the Ontario Historical County Map Project and the Don Valley Historical Mapping Project data

Most academics who’ve written about Historical GIS have discussed the high cost of building HGIS projects (Gregory and Ell, 2007); building any GIS project is an expensive endeavour. Few, however, have mentioned the benefits of the ongoing nature or extended length of some projects. The Ontario Historical County Map Project (OHCMP) and the Don Valley Historical Mapping Project (DVHMP) are two projects that have benefitted from the long tail of their existence, continuing to find useful applications for historical data built long ago (or still being built).

The OHCMP was conceived a few years after the release of the well-known Canadian County Atlas Project at McGill University Libraries in the late 1990s. Nineteenth-century county maps were generally published earlier than the county atlases. The Atlas project focused solely on the bound maps, and the OHCMP focuses only on the earlier large-format maps. Like the Atlas project, however, the main focus of the County Map Project is to allow for the querying of land occupant names found on the maps, and the display of the names on images of the historical maps.

Fortin 2017
Canadian Historical County Map Project: result of a search by name in the Etobicoke Township plate, York County Atlas, 1878

While the McGill project did not use any GIS technology for displaying name information, it did take advantage of the web technology of its day to graphically lay out images of the atlas plates, using PHP to link image locations within the database of land-occupant names. The Atlas project was certainly an inspiration to us in developing the Ontario Historical County Map Project.

In contrast to the types of tools used in the Atlas Project, the OHCMP has been a GIS project from the beginning. Like the Atlas Project, however, we wanted to ensure that users of the County Map project could benefit from web technology to view the maps and GIS data. Because the project is built on a GIS database, a new method of dissemination was needed.

Early tests of web technology were pre-Google and used what is now archaic web-mapping software. Our first attempt, in 2004, utilized Esri’s ArcIMS (Internet Map Server), made available to us as part of our campus site license with Esri Canada. We loaded our entire database, which at the time consisted of only Waterloo and Brant counties, into ArcIMS as a test. Somewhat surprisingly, we were able to build a sophisticated querying tool and managed to display the georeferenced county map scans in the online map.

Ontario Historical County Map Project rendered in Esri’s ArcIMS software

While it yielded relatively impressive results for the time (if one were patient enough to wait for the results of a query or a zoom in or out), it was clear that this setup was less than ideal: the software was extremely difficult to install, very slow to render results, and we had difficulty finding adequate server space on which to permanently install it.
Due to the limitations of available software, developing a web map of the project’s land occupant names was put on hold. Of course, Google Maps changed the entire web-mapping landscape in 2005. Despite the adoption of Google Maps by many to display their data on the web, our attempts were hampered by the now large size of our land occupant database. While MySQL was often used alongside PHP and the Google API at the time, converting our geospatial database into a MySQL database would have been a step backward in the GIS development of the project.

Other more recent attempts at using web-mapping technology, in 2013, included a MapServer configuration with OpenLayers and a PostgreSQL geospatial-enabled database using PostGIS. While the shapefile data did need to be converted to PostGIS, this setup at least promised that our database would be maintained in a GIS environment, compared to using MySQL. The resulting web map was very promising, but required quite a bit of coding and manipulation. Having no programmer on the team nor any funds to hire one, my programming of the application was limited to a six-month research leave and the odd slow day at the Map and Data Library. Without a programmer, it was clear this solution was less than ideal and would take years to complete.
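For readers curious about the shapefile-to-PostGIS conversion step mentioned above, it is typically scripted with GDAL’s ogr2ogr utility. The sketch below only assembles such a command line rather than executing it; the database name, table name, and file path are hypothetical examples, and actually running the command assumes GDAL and a PostGIS-enabled database are installed.

```python
# Sketch: build an ogr2ogr command for loading a shapefile into PostGIS.
# All names (shapefile, database, table) are hypothetical illustrations.

def ogr2ogr_load_command(shapefile, db, table, srs="EPSG:4326"):
    """Return the argument list for loading a shapefile into a PostGIS table."""
    return [
        "ogr2ogr",
        "-f", "PostgreSQL",            # output driver
        f"PG:dbname={db}",             # PostGIS connection string
        shapefile,                     # input shapefile
        "-nln", table,                 # target table name
        "-t_srs", srs,                 # reproject on the way in
        "-lco", "GEOMETRY_NAME=geom",  # layer creation option
    ]

cmd = ogr2ogr_load_command("county_names.shp", "ohcmp", "land_occupants")
print(" ".join(cmd))
```

A list of arguments like this could then be passed to `subprocess.run(cmd)`; building the command separately keeps it easy to inspect before touching the database.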

OpenLayers-MapServer-PostGIS rendition of the Ontario Historical County Map Project

For many years I dismissed ArcGIS Online as possibly an overblown idea from Esri. How could one actually build an online tool with GIS functionality and get us to buy into it, I always wondered. However, its popularity grew so much among our U of T users that I eventually needed to learn it in order to support it. What better way to teach myself ArcGIS Online, I decided, than to load the County Map Project data? To my immediate surprise, ArcGIS Online was not only fun and full of great GIS and web-mapping features; it also had the Web AppBuilder application built in. Along with dozens of Story Map templates, the Web AppBuilder allows you to place your GIS data in a web skin to which you can add customizable widgets that work extremely well, even in mobile browsers. Being able to query or filter the 80,000 or so names in our database was a key consideration in adopting any web technology for the project. ArcGIS Online delivered this amazingly well, and also allowed for the rendering of high-resolution images of the scanned county maps. The ease of use and customization of web apps without the need for coding are also fantastic selling points. Other fun but useful widgets include animated timelines of “time-enabled” data, and a swipe tool that allows for viewing two datasets one on top of the other, sliding a toolbar to switch between displays.

ArcGIS Online version of the Ontario Historical County Map Project with Querying tool display

Adopting ArcGIS Online as a web-mapping tool has allowed the project to be out in the public eye, where users can actually take advantage of the data built over the past 15 years. I never thought we would have a web-mapping solution before we finished the database, but as it stands, I am pretty happy with most of the functionality of the web app at this point, as our database continues to grow and we continue to compile more land-occupant names from historical county maps. Interestingly, while writing this post I received three email messages about the project and requests for further information from users of the County Maps site. Without making our data available in this powerful way, I doubt our project would have drawn so much attention.

Inspired by my success with the Web AppBuilder tool, I decided to build an app for the DVHMP as well, and found that the data we had built over seven years ago really came to life on the web. Being able to query the data and render both polygon and point data together in one online view is empowering.

ArcGIS Online is of course not the only tool that has taken advantage of web-mapping and cloud computing advancements to allow users to build their own web map apps. Products such as Mapbox are also increasing in popularity because of their ease of use, powerful functionality and customizability, and the attractiveness of the final map product.

Web mapping has been around since the 1990s, but with new advanced web-mapping technology like ArcGIS Online and Mapbox, it may be time for many other dormant or long-forgotten HGIS datasets to be pulled off hard drives and USB sticks and given new life in easily created yet powerful web maps. I am excited at the thought of possibly seeing the Montréal, l’avenir du passé data, for instance, available on a web map for all to interact with.

The Canadian HGIS Partnership is investigating many web-mapping tools and visualization methods. We are also working with Esri Canada, as part of the GeoHist project, to provide specific HGIS requirements for online mapping tools. With the powerful components already available in ArcGIS Online, Mapbox, and other web-mapping tools, the future of web mapping for HGIS is certainly very exciting, and accessible to anyone interested in developing web maps without the need to code.

Gregory, Ian., and Paul S. Ell. Historical GIS: Technologies, Methodologies, and Scholarship. New York: Cambridge University Press, 2007.

How do we find and link all this geohist information?

The volume of geohistorical data available on the web and stored in various databases is expanding rapidly as the geospatial turn gains momentum and as online mapping tools become more accessible. Historical maps can be situated with a bounding box or georeferenced with precision. Aerial photographs are assembled and georeferenced to analyse a region or to easily locate a specific sheet. Animated or static maps are increasingly being used to visualise phenomena which affected history at various scales: local (Don Valley Historical Mapping Project), regional (Map of how the Black Death devastated medieval Britain), national (American Panorama: An Atlas of United States History), continental (Mapping the Republic of Letters), trans-Atlantic (The Trans-Atlantic Slave Trade Database) or global (Time-Lapse Map of Every Nuclear Explosion, 1945-1998).

Faced with massive amounts of data, researchers are not just looking for the proverbial needle in the haystack. They need to search for many needles spread across many haystacks. Several initiatives have been undertaken, including by this group, to develop solutions which would improve accessibility to geohistorical data. Portals are generally viewed as a solution to bring together data which pertains to a given location or to the research interests of a group or an institution. Consciously or not, they are designed to showcase the work of a group or institution. We will still need portals as infrastructures to host and distribute geospatial data. But on their own, they will not resolve issues of discoverability, openness and interoperability.

Depending on how effective the developers are at search engine optimisation, a given portal will be more or less easy to find on the web. The user will generally land on the portal’s home page and will then use the system’s own search tools to identify the specific item or items related to her or his research. Some systems, such as GéoIndex+, combine faceted search with a spatial view to facilitate discovery. Others still rely on older, catalogue-inspired search engines.

Whether or not the desired data can be located, it may not be available for download. Apart from commercial licensing issues, many researchers are still reluctant to make their data available for download, but that is an issue for a separate post. Governments are gradually making data freely available, but there is still a chance that a researcher could end up digitising and georeferencing data which already exists in that form. By comparison, the use of a file format incompatible with a researcher’s preferred software is a minor inconvenience.

Even when portal developers have the best intentions to make data available and downloadable, the lack of system interoperability makes cross-portal searches a difficult challenge to overcome unless they open APIs or make data available in a linked and open format. While APIs could resolve immediate issues, they would not solve the problems related to security, system maintenance and overhauls. I will therefore emphasise linked and open data as the most promising long-term solution to the problem.

Linked data “is a method of publishing structured data so that it can be interlinked and become more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages for human readers, it extends them to share information in a way that can be read automatically by computers. This enables data from different sources to be connected and queried.” (Source). A World Wide Web Consortium (W3C) standard, it forms the basis for the semantic web as defined by Tim Berners-Lee.

LOD relies upon the Resource Description Framework (RDF) which uses a subject – predicate – object grammar to make statements about resources. These triples, which could also be seen as entity – attribute – value structures (document X -> is a -> map), are machine-readable and use Uniform Resource Identifiers (URIs) to connect different elements together. LOD is already used to make information available and connected in projects such as DBpedia.
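To make the subject – predicate – object grammar concrete, the sketch below models a tiny graph of triples as plain Python tuples, with a wildcard match function standing in for the pattern matching a SPARQL query would perform. This is a minimal stdlib illustration, not rdflib or a real triple store, and the identifiers (`ex:documentX`, `ex:Montreal`, etc.) are hypothetical, not a real vocabulary.

```python
# Minimal illustration of RDF-style subject-predicate-object triples.
# Plain tuples stand in for URIs; identifiers are hypothetical examples.

triples = [
    ("ex:documentX", "rdf:type",   "ex:Map"),     # document X -> is a -> map
    ("ex:documentX", "ex:depicts", "ex:Montreal"),
    ("ex:Montreal",  "rdf:type",   "ex:Place"),
]

def match(store, s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    much like a variable in a SPARQL basic graph pattern."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What is documentX?" -> [('ex:documentX', 'rdf:type', 'ex:Map')]
print(match(triples, s="ex:documentX", p="rdf:type"))
```

Because every statement shares this uniform shape, triples from different sources (a map catalogue, a gazetteer) can be concatenated into one store and queried together, which is the core promise of linked data.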

The data structures presented as RDF statements are defined by ontologies. The Spatial Data on the Web Working Group has been formed by the W3C:

  • to determine how spatial information can best be integrated with other data on the Web;
  • to determine how machines and people can discover that different facts in different datasets relate to the same place, especially when ‘place’ is expressed in different ways and at different levels of granularity;
  • to identify and assess existing methods and tools and then create a set of best practices for their use;
  • where desirable, to complete the standardization of informal technologies already in widespread use.
    [SDWWG Mission Statement]

Such an initiative will provide us with the tools and the infrastructure to make geohistorical data discoverable and accessible.

Unfortunately, LOD is not a simple solution to implement. Competing ontologies could emerge, which would limit interoperability unless bridges are built to define equivalences. Some institutions’ insistence on defining their own URIs, for place names for example, without connecting them to other authority lists can recreate the silos that we are trying to avoid. Many stakeholders need to open and offer their research data as RDF triples for the web of geohistorical data to emerge, as is already the case with DBpedia, Geonames, and the World Factbook. Designed as infrastructure, LOD tools are still in development and do not have much of a “wow” factor that would bring visibility and investment. A pilot project with a strong front end will be required for people to understand what LOD can do, so that they will invest the resources required to publish geohistorical data as RDF triples.

There are still issues to be resolved, such as a standard ontology or a set of compatible ontologies. The SDWWG proposes compatibility with upper ontologies, as opposed to dependence upon a given world view of linked data [SDWWG Best Practices Statement]. We must also expect that different teams will publish their data at different levels of granularity: some will provide only metadata indicating that a dataset has social and economic information about Montreal in 1825, while others could publish each data element at the household level. With regard to a scholar’s career, how can this type of publication be recognised for hiring, tenure and grants? The Collaborative for Historical Information and Analysis has studied data repository practices which can be useful as we move towards LOD. Finally, how will we flag data which is less than recommended for scholarly research? We will need to define peer review for an LOD world.

There are obviously more questions than answers at the moment, but linked and open data provides a long-term solution to discoverability and accessibility. Such a solution should be part of future portal designs.

To go further, the SDWWG lists a few publications and presentations. Catherine Dolbear and Glen Hart’s Linked Data: A Geographic Perspective (CRC Press, 2013) can also provide further guidance to the use of linked data from a geographic perspective. Any search for linked data or the semantic web will provide many useful results for additional reading. For historians, Philippe Michon’s M.A. thesis, « Vers une nouvelle architecture de l’information historique : L’impact du Web sémantique sur l’organisation du Répertoire du patrimoine culturel du Québec », is highly recommended.

Léon Robichaud
Professeur agrégé
Département d’histoire
Université de Sherbrooke

Accessing digital historical census boundaries just got a whole lot easier!

Finding and mapping historical census data can be a little difficult. Statistics Canada makes census data available for the 2011, 2006, 2001, and 1996 censuses, with some profile tables available back to 1991. For boundary files, fewer censuses are available online: only the 2011, 2006, and 2001 files. Statistics Canada no longer provides access to earlier censuses.

There are some sources for earlier census data and boundary files available through the Data Liberation Initiative (DLI) program, a national consortium of universities formed in the mid-1990s to pay for and access Statistics Canada data, namely Public-Use Microdata Files (PUMFs). Part of the DLI includes access to older census tables and boundary files, including census tracts, dissemination/enumeration areas, census metropolitan areas, census divisions and census subdivisions, with some boundary coverages going back to 1971. These boundary files are among the oldest digital boundary files produced in Canada, and are still used by researchers today. Both English and French data files were produced, and the files are stored in varying GIS and non-GIS formats.

Today, access to the collection is typically mediated by the library at subscribing DLI institutions; some provide links to the data files online, but most offer access only via a local FTP connection. Because the data are not publicly available online, they cannot be found through a Google search. In addition, for some of the censuses, the spatial data are stored as ASCII text or in ESRI’s proprietary E00 interchange format, which presents challenges for use in current GIS software and for loading into open geoportals.
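The E00 hurdle mentioned above is at least scriptable: GDAL’s OGR library ships a read-only AVCE00 driver for Arc/Info E00 interchange files, so ogr2ogr can translate them into a modern format. The sketch below only assembles such a command; the filenames are hypothetical, and running it assumes GDAL is installed.

```python
# Sketch: build an ogr2ogr command converting an ESRI E00 interchange file
# (read via GDAL's AVCE00 driver) into a GeoPackage. Filenames are
# hypothetical illustrations.

def e00_convert_command(e00_file, out_gpkg):
    """Return the argument list for converting an E00 file to a GeoPackage."""
    return [
        "ogr2ogr",
        "-f", "GPKG",   # modern, single-file output format
        out_gpkg,       # destination GeoPackage
        e00_file,       # OGR detects the AVCE00 driver from the input
    ]

print(" ".join(e00_convert_command("ct1971.e00", "ct1971.gpkg")))
```

Batch-converting a directory of legacy files this way is one route to making such collections loadable in current GIS software and open geoportals.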

In Ontario, Scholars Portal and the Ontario Council of University Libraries (OCUL) have begun a year-long project to gather and convert all existing Canadian digital census boundary files, including the DLI collection and other census boundaries digitized over the years by university libraries across Canada. The project will make data and documentation openly available in an interactive geoportal, Scholars GeoPortal. Access to this important historical GIS collection will be greatly improved, and it is hoped that by making the collection publicly available, these data will be shared and reused more effectively, reducing duplication for researchers everywhere.

Here is an overview of the censuses we are almost finished converting and loading, and for which we are creating ISO 19115 North American Profile metadata (some were reused from other national projects, including the Canadian Century Research Infrastructure (CCRI) GIS boundary files):

2011 – Statistics Canada (in portal)
2006 – Statistics Canada (in portal)
2001 – Statistics Canada, DLI (in portal)
1996 – Statistics Canada, DLI (in portal)
1991 – Statistics Canada, DLI (in processing)
1986 – Statistics Canada, DLI (in processing)
1981 – Statistics Canada, DLI & Map and Data Library, University of Toronto Libraries  (Census Tracts in portal; the rest in processing)
1976 – Statistics Canada, DLI (only point files available)
1971 – Statistics Canada, DLI & Map and Data Library, University of Toronto Libraries (Census Tracts in portal; the rest in processing)
1961 – Historical Atlas of Canada (Provided by the GIS & Cartography Office, Department of Geography and Planning, University of Toronto) (in processing)
1951 – University of British Columbia Libraries, and CCRI (University of Alberta Libraries) (CCRI in portal)
1941 – CCRI (University of Alberta Libraries) (in portal)
1931 – CCRI (University of Alberta Libraries) (in portal)
1921 – CCRI (University of Alberta Libraries) (in portal)
1911 – CCRI (University of Alberta Libraries) (in portal)

To check out the progress, you can easily view the boundaries by going directly to the portal.

In the near future, we plan to make the census boundaries inventory available so that gaps can be collaboratively addressed by the community and those who are interested in doing national, comprehensive digitizing and georeferencing work for this important historical census collection.

For questions and more information, please contact me.


I would like to acknowledge the ongoing efforts of university libraries in managing and archiving census data, boundary maps, and GIS collections. These collections are truly valuable to researchers and historians, and access to them would not be possible today were it not for these efforts. I would like to thank the following universities, organizations, and individuals for their kind contributions throughout the project:

Vince Gray, Western University Libraries
Eva Dodsworth, University of Waterloo Libraries
Marcel Fortin, University of Toronto Libraries
Leanne Trimble, University of Toronto Libraries
University of Alberta Libraries
University of British Columbia Libraries
Data Liberation Initiative, Statistics Canada

And, to Jeff Allen, our student assistant at University of Toronto Libraries & Scholars Portal, who has worked tirelessly on this project for almost a year now…

Many thanks,

Amber Leahey
Data and Geospatial Librarian
Scholars Portal, Ontario Council of University Libraries