The map below visualizes the text-mined data produced by the Trading Consequences project. We queried the database to identify all the commodities with a strong relationship to London, and then found every other location where the text mining pipeline identified a relationship with those commodities at least 10 times in a given year. This results in 111,977 rows of data, each representing between 10 and 2,841 commodity-place relationships. I will present this data visualization at the Social Science History Association meeting in Toronto this November.
The map above uses CartoDB’s Torque Cat animation to visualize the data as it changes over time. It distinguishes only 10 different commodities, which is already too many to follow easily, and groups the remaining commodities into an Other category. The word cloud below shows all of the commodities, ranked by the number of places and the number of years in which they met the 10-relationship threshold (i.e. a word is bigger if a commodity had a lot of mined relationships with different places and those relationships remained consistent across the whole century).
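The thresholding step described above can be sketched in a few lines. The miniature table and its column names below are invented for illustration; they stand in for the actual Trading Consequences schema, which is not shown here.

```python
# Hypothetical miniature of the mined relationship table: one record per
# commodity-place-year with the number of mined co-occurrences.
# Column names are illustrative, not the actual database schema.
rows = [
    {"commodity": "sugar",   "place": "Cuba",   "year": 1822, "mentions": 48},
    {"commodity": "sugar",   "place": "Java",   "year": 1822, "mentions": 7},
    {"commodity": "leather", "place": "Calais", "year": 1822, "mentions": 15},
    {"commodity": "tallow",  "place": "Odessa", "year": 1850, "mentions": 3},
]

# Keep only relationships mined at least 10 times in a given year,
# mirroring the threshold used to build the map data.
THRESHOLD = 10
strong = [r for r in rows if r["mentions"] >= THRESHOLD]
print(sorted(r["commodity"] for r in strong))  # → ['leather', 'sugar']
```

Each surviving record is one of the commodity-place relationships counted in the map; the real query runs this filter over the full database rather than an in-memory list.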
It is also possible to look at all of the data from the whole of the nineteenth century to see the locations with a high intensity of relationships with the many commodities that also have a strong relationship with London.
[This map looks better when you zoom in.]
I should note that this data does not confirm a direct relationship with London, and not all of these locations were part of the city’s increasingly global hinterlands. Some locations would have been competing markets sourcing the same materials or producing the same goods as London. British ports were also waystations where goods from around the world were transhipped and sent on to other European centres. The text mining identified when a commodity term, like sugar, appeared in the same sentence as a place name. It shows a strong correlation between London and sugar and a strong correlation between Cuba and sugar. In this case, I know from other sources that Cuba was among the numerous suppliers of sugar to London. We cannot simply assume, however, that the strong correlation between leather and Calais in 1822 means the French port supplied London with leather in that year. Calais could have been a market for London’s leather or a competitor. To focus the map on London’s hinterlands exclusively, I would need to filter out results based on additional research and an extensive ground-truthing exercise. It would probably be more accurate to say that these maps help illuminate the geography of commodities related to London in the nineteenth century; this data and the visualizations remain a starting point for further research (like the research I’m doing with Andrew Watson on leather).
You can download the data as a CSV file with this link.
Here is the abstract for the SSHA paper I’m co-authoring with Bea Alex and Uta Hinrichs:
Visualizing Text Mined Geospatial Results: Exploring the Trading Consequences Database.
I am working on an abstract for the ESEH in France next summer. I plan to focus on the role of an industrialist, J.E. Howard, in supporting the efforts of British government officials and economic botanists to establish cinchona plantations in Asia. I’ve done a lot of archival research on this topic, but I thought it would be interesting to see what I could find in the Trading Consequences database. The Location Cloud Visualization clearly shows the geographic transfer of cinchona to India and Ceylon, but I needed to dig down past our web visualizations to see what the database has to say about a particular person. To do this, I extracted every sentence that mentions the commodity cinchona in the Trading Consequences corpus, ordered them by their year and exported a text file from the database. This yields a file with 3762 sentences that mention cinchona.
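The extraction step can be sketched without the database itself. The toy corpus below is invented for illustration; the real export ran against the full Trading Consequences corpus.

```python
# Toy (year, sentence) pairs standing in for the Trading Consequences corpus.
corpus = [
    (1865, "The chinchona plantations in Ceylon are thriving."),
    (1861, "Mr. Markham carried cinchona plants from Peru to India."),
    (1859, "Cinchona bark yields the quinine used against fever."),
    (1870, "Tea exports from Ceylon rose sharply this season."),
]

# Keep sentences mentioning the commodity, ordered by year, like the
# text-file export described above; matching is case-insensitive.
hits = sorted((year, s) for year, s in corpus if "cinchona" in s.lower())

for year, s in hits:
    print(year, s)
```

Note that a plain substring match like this silently misses the alternative spelling “chinchona”, which is exactly why spelling variants are worth checking separately.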
Uploading this data into Voyant Tools makes it easy to explore some of the patterns in the text as it changes over the course of the nineteenth century. For example, we can see the initial importance of India (which would include mentions of the East India Company) and the growing significance of Ceylon and Java as the century went on. It is also notable that Peru and Peruvian were relatively less significant locations in these British government documents.
Using the same tool, we can see the rise and decline in popularity of an alternative spelling of cinchona, “chinchona”, during the middle of the 19th century.
More to the point, we can search for the last names of five of the key individuals involved in the transfer of cinchona: Clements Markham, Richard Spruce, the father and son William and Joseph Hooker, and John Eliot Howard. Markham was an India Office geographer who led an expedition to Peru to steal cinchona seeds. Spruce, a botanist, collected further seeds from New Granada. The Hookers were both directors of Kew Gardens, with Joseph taking over from his father in 1865. Howard was one of the sons in the Howard & Sons company, which produced much of the quinine manufactured in Britain. In addition to his expertise as a manufacturer, Howard was a leading expert on the botany of cinchona. The visualization below shows that while Markham, Spruce and William Hooker were key figures in the initial planning and transfers of the early 1860s, Howard gained significance in the corpus in the decades that followed.
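A trend chart of this kind is essentially a per-interval term count. Here is a minimal sketch of the idea, with an invented toy corpus standing in for the extracted cinchona sentences:

```python
from collections import Counter

# Toy (year, sentence) pairs standing in for the cinchona corpus.
sentences = [
    (1860, "Markham sailed for Peru to collect seeds."),
    (1861, "Spruce and Markham forwarded seeds to India."),
    (1875, "Howard analysed bark samples from the Ceylon plantations."),
    (1878, "Howard published on the botany of cinchona."),
]

def trend(name, bucket=10):
    """Count mentions of a surname per decade, as in a Voyant trend chart."""
    counts = Counter()
    for year, text in sentences:
        if name in text:
            counts[(year // bucket) * bucket] += 1
    return dict(counts)

print(trend("Markham"))  # → {1860: 2}
print(trend("Howard"))   # → {1870: 2}
```

On the toy data, Markham’s mentions cluster in the 1860s and Howard’s in the 1870s, mirroring the shift the Voyant visualization shows on the real corpus.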
The real power of Voyant is that once you identify an interesting trend in the data, you can click on the spike for Howard in the chart above to update some of the other visualizations. Below you can see “Howard” as a keyword in context during the spike, and further down you can see the actual sentences where Howard is mentioned. With a little more work I could have included the URL for the original document page.
From the Trading Consequences Blog: Today we are delighted to officially announce the launch of Trading Consequences! Over the course of the last two years the project team have been hard at work using text mining, traditional and innovative historical research methods, and visualization techniques to turn digitized nineteenth-century papers and trading records (and their OCR’d text) into a unique database of commodities, with engaging visualization and search interfaces to explore that data. Today we launch the database, searches and visualization tools alongside the Trading Consequences White Paper, which charts our work on the project, including technical approaches, some of the challenges we faced, and what we achieved during the project and how. The White Paper also discusses, in detail, how we built the tools we are launching today and is therefore an essential point of reference for those wanting to better understand how data is presented in our interfaces, how these interfaces came to be, and how you might best use and interpret the data shared in these resources in your own historical research.
I’ve just learned about a great timeline creation tool called Timeline.js. It makes it very easy to create nice-looking and very functional timelines. There is one small problem: the current Google spreadsheet template does not work with dates before 1900 (a common problem with computer date fields). However, those of us interested in pre-1900 history can simply cut and paste the top row of that template into a fresh spreadsheet; the timeline then works fine with all dates (use a negative number for dates before the year zero). I’ve created a very quick and rough timeline of the global tallow supply below. I will fix it up over the next few hours. I think this could be a great tool for undergraduate teaching. Here is what the Google spreadsheet looks like:
I’ve been a part of a lot of discussions lately about the need for an effective way to share HGIS data. As the number of researchers using GIS for history and historical geography increases, so does the need to find ways of sharing resources and avoiding duplicated effort. One way forward is for more of us to post our data on individual websites (see the Don Valley project). We could then try to link the data together through some kind of federated search portal (like NINES.org). Ideally, however, it would be nice to have a system where individuals and teams could collaborate on work in progress or expand upon data created by others and then share it again. Simple websites don’t provide an easy way for people to upload data back to the source. GitHub provides a platform for sharing code and a system for collaboration, and it is widely used by the open-source software community. I’ve created a test repository, and it seems possible to share a few different kinds of vector data, including shapefiles, KML and GeoJSON, all of which work with QGIS (and some of which work with ArcGIS). Is this an established platform that we could adapt to the needs of the HGIS community? Or is Git too confusing and difficult, and are the soft limits of 100 MB per file and 1 GB per repository too small for our needs? Do we need a system where we can also share scanned and georeferenced maps? Is there another existing option we could agree on, or do we need to wait until someone has the time, skills and funding to build something better suited to our needs?
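One practical advantage of GeoJSON for a Git-based workflow is that it is plain text, so it diffs line by line and can be sanity-checked with nothing but the standard library before committing. The feature below is an invented example, not real project data:

```python
import json

# A minimal GeoJSON point feature, invented for illustration; real project
# files would carry many features and richer properties.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-79.36, 43.69]},
    "properties": {"name": "Don Valley"},
}

# A quick sanity check before committing: the data must round-trip through
# JSON and declare a recognised geometry type.
text = json.dumps(feature, indent=2)
parsed = json.loads(text)
assert parsed["type"] == "Feature"
assert parsed["geometry"]["type"] in {"Point", "LineString", "Polygon"}
print(parsed["properties"]["name"])  # → Don Valley
```

Binary shapefiles, by contrast, show up in Git as opaque blobs, which is part of why plain-text formats like GeoJSON and KML suit collaborative version control better.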