I’m Brian Aitken, the Digital Humanities Research Officer for the School of Critical Studies at Glasgow, and the person responsible for designing and developing the technical infrastructure for the REELS project. Since the start of this year I have been working on the public-facing search and browse facilities that we are hoping to launch in the autumn and I wanted to share with you some information about how things are progressing and what facilities we will be offering.
The first step when developing an online resource should always be to document exactly what it is that needs to be developed – to figure out the ‘requirements’, as they are called. This needs to be done in collaboration with the project team to ensure that all the required facilities are considered and there is no confusion between the developer and the researchers as to which features are important. We all met as a team at the start of the year and, through this meeting, email exchanges and follow-up meetings, I produced a document that gives an overview of all of the facilities we hope to offer and how they might work. This document then acts as a sort of checklist or recipe that I can follow as development proceeds. Of course, requirements are never set in stone and things change as features are developed and demonstrated. A developer needs to be flexible enough to deal with these changes, but at the same time the team needs to understand that not every new, exciting feature can necessarily be implemented in the available time, at least not without dropping some other feature.
With my requirements document to hand I could begin work on the actual development. I decided to create an ‘API’ that all of the search and browse facilities would connect to and use. An API (Application Programming Interface) is essentially a web service that allows you to submit queries to it through a URL; it processes the query and outputs the data in a structured format that can then be used either by scripts (e.g. in JSON format) or by humans (e.g. in CSV format that can be viewed in Excel). The main advantage of this approach is that any script can connect to the API, whether it’s the server-side scripts I will write in PHP, the client-side scripts I will write in JavaScript, or indeed any other scripts that other developers might want to create in future. It’s an approach that keeps the querying of the data and the processing of the data for display nicely separate.
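As a sketch of the idea, a client script might build a query URL and ask the API for a particular output format. The endpoint address and parameter names below are invented for this example, not the project's actual API:

```javascript
// Hypothetical illustration of querying an API of this kind.
// The endpoint URL and parameter names are made up for this sketch.
const BASE = 'https://example.org/reels/api/';

// Build a query URL: the search term and desired output format are
// passed as URL parameters, and the API does the rest.
function buildQueryUrl(term, format) {
  const params = new URLSearchParams({ search: term, format: format });
  return BASE + '?' + params.toString();
}

// A JavaScript client would request JSON and parse it directly:
//   fetch(buildQueryUrl('h_ll', 'json')).then(r => r.json())
// while a person could request CSV and open the file in Excel.
```

Because the format is just another parameter, the same endpoint serves both scripts and humans, which is what keeps querying and display processing separate.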
I’m still working on the API, adding facilities to output the data required by each front-end feature as I develop it, but to date I have created outputs for the quick search, the advanced search and other kinds of data, such as that which will be displayed in our map pop-ups. Here’s an example of the structure of a returned search result, structured in the JSON format:
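As a purely illustrative sketch (the field names and values here are invented for this post, not the project's actual schema), a single result might look something like:

```json
{
  "id": 123,
  "name": "Example Hill",
  "type": "settlement",
  "altitude": 120,
  "latitude": 55.6,
  "longitude": -2.4
}
```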
The REELS place-name search and browse facilities will present users with a map-based interface for accessing place-names. As with previous projects, I decided to use Leaflet for the map, as it is a simple, lightweight library with no external dependencies that you can easily install on your own server (unlike Google Maps, where everything has to be sent to Google’s servers for processing). I set Leaflet up with an initial MapBox basemap (which I still intend to refine) and connected the map to the API in order to display the search results. I then split the results up into different map layers based on the place-name type, and assigned each type a different coloured dot. Eventually I will replace these with icons, but this was a good first step. With this in place and the legend visible it then became possible to turn a particular type on or off, for example hiding all of the settlements, or hiding all of the grey dots. Here’s an example of the map, showing a search for ‘h_ll’ (all names with these characters somewhere in them):
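The grouping step can be sketched in plain JavaScript, independent of Leaflet itself. The type names and colours below are illustrative, not the project's actual classification scheme:

```javascript
// Sketch of the layer-grouping idea: split API results into groups by
// place-name classification, so each group can become one map layer
// with its own marker colour. Types and colours are invented here.
const TYPE_COLOURS = {
  'settlement': 'red',
  'water feature': 'blue',
  'relief': 'green',
  'other': 'grey' // the "grey dots"
};

function groupByType(results) {
  const layers = {};
  for (const place of results) {
    // Anything without a recognised type falls into 'other'
    const type = TYPE_COLOURS[place.type] ? place.type : 'other';
    if (!layers[type]) layers[type] = { colour: TYPE_COLOURS[type], places: [] };
    layers[type].places.push(place);
  }
  return layers;
}

// In the real interface each group would become an L.layerGroup() of
// coloured markers, so the legend can toggle a classification at once.
```

Keeping each classification in its own layer is what makes the legend's show/hide toggles cheap: hiding the settlements is just removing one layer from the map.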
I also added in hover-over place-name labels (as you can see for ‘Stichill Linn’) and created popups that appear when you click on a place-name. These are AJAX powered – none of the map markers actually contains its pop-up content until the user clicks on the marker. At that point an AJAX request is sent to the API, the data is retrieved in JSON format and formatted by the script, and the pop-up is displayed. If the user opens a popup a second time the system can tell that the popup is already populated, so a second AJAX request is not made. This is all much more efficient than loading all of the data up front. Here’s an example of a map pop-up; note, however, that more information is still to be added, such as a link through to the complete record for the place-name:
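The load-once caching behaviour can be sketched as a small function. The fetching step is injected here so the sketch stays self-contained; in the real interface it would be the AJAX request to the API:

```javascript
// Sketch of the popup caching logic: fetch content only on the first
// click, reuse it afterwards. fetchContent stands in for the AJAX
// call to the API; the structure here is illustrative.
function makePopupLoader(fetchContent) {
  const cache = new Map(); // place id -> formatted popup content
  return function load(placeId) {
    if (cache.has(placeId)) {
      return cache.get(placeId); // second click: no request made
    }
    const content = fetchContent(placeId); // first click: hit the API
    cache.set(placeId, content);
    return content;
  };
}
```

The pay-off is exactly the trade described above: the initial map load stays light because no popup content is transferred, and repeated clicks on the same marker cost nothing.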
Quite often with map resources the map markers are not put to good use – for example they will all be the same colour and shape. However, more information can be conveyed by markers, such as using different colours to represent different classification types as in the above screenshot. We’re intending to provide multiple ways of categorising map markers, such as by altitude and by date of earliest recorded form, although I still have to implement such features.
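Categorising by altitude could work along the same lines as the classification colours: bucket each place into a band and give each band a colour. The band boundaries and colours below are invented for illustration; the project's actual bands may well differ:

```javascript
// Sketch of an alternative marker categorisation, by altitude rather
// than classification. Bands and colours are illustrative only.
const BAND_COLOURS = { lowland: 'green', upland: 'orange', hill: 'brown' };

function altitudeBand(metres) {
  if (metres < 100) return 'lowland';
  if (metres < 300) return 'upland';
  return 'hill';
}
```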
Another issue with some map-based resources is that it can often be difficult for people to process the data displayed on a map. In many cases just seeing a textual list of the data can be more useful. For this reason we’re giving users the option of switching from the map view to a text view of the data via different tabs, as you can see in the screenshot above. Clicking on the ‘text’ tab allows you to view a list of all of the matching place-names (not including the grey dot data), as you can see here:
The last feature I’m going to mention for now is the advanced search facility, which is pretty much completed. The quick search allows users to enter a term which searches just the current place-names, the place-name elements and grid references (e.g. if you want to see all of the names in a particular square, such as ‘NT7__6__’). The advanced search gives users the freedom to tailor a search across up to 16 different fields, such as historical forms, altitude, classifications, sources, elements and languages. We’ll have to test this facility out, as it might be a little daunting for some, although it is targeted more at advanced users. Here’s a screenshot that shows a part of the search form:
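The wildcard behaviour implied by searches like ‘h_ll’ and ‘NT7__6__’ – each underscore standing for exactly one character, matched anywhere in the name – can be sketched by converting the term into a regular expression. This assumes the search works like SQL's LIKE single-character wildcard, which is how the examples in this post read:

```javascript
// Sketch of the underscore wildcard matching: 'h_ll' should match
// 'hill', 'hall', 'Stichill', etc. Assumes SQL LIKE-style semantics,
// where '_' matches exactly one character.
function wildcardToRegex(term) {
  // Escape any regex metacharacters in the term first...
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  // ...then turn each '_' into '.', which matches any single character
  return new RegExp(escaped.replace(/_/g, '.'), 'i');
}
```

In practice this kind of matching would most likely happen in the database query on the server side, but the principle is the same.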
There is still lots to be done before the front-end has all of the features we intend to include, but to date I’ve made really good progress. The design of the interface (such as colour schemes and the layout of elements) is not something I’ve spent much time on yet, and it is likely to change considerably in future. It’s also not yet possible to access the full place-name data for each record. We’re hoping to have a fully functional initial version completed, and possibly shared with some users for feedback, before the summer. Hopefully we’ll end up with a place-name resource that looks good, works well, is easy to use for both casual users and hardcore place-name researchers, and can serve as a model for future place-name resources.