CityLIS, Events

Utilising the digital: DITA at the BL labs & the usefulness and uselessness of computers

DITA’s visit to the British Library Labs Symposium on the 11th November kicked off a packed last few weeks of the term. What a welcoming and inspiring event; it was particularly lovely to get a shout out to the CityLIS contingent right at the start of proceedings! The keynote presentation by Armand Leroi set the tone of the day with discussion of the joining together of different disciplines in furthering knowledge (reflected in the many projects presented later where the BL collections had been put to use by a myriad of different people to artistic, historical and scientific ends).

At the British Library “Issac Newton” by EarthOwned is licensed under CC BY-NC-SA 2.0 

Armand Leroi’s talk ‘The Science of Culture’ started with the statements that “all of culture is essentially becoming digitised” and that science is concerned with “elucidating causal mechanisms and general causal theories which transcend time and place”. He then advocated for the study of culture to become more scientific, as made possible by its newly digital nature. He illustrated what he meant with many examples, one being the analysis of music into patterns of chords. The distribution of these patterns could then be measured and used to quantitatively answer a common question in cultural studies: ‘has commercialisation minimised diversity in music?’ (the answer is no, btw).
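To make the idea concrete (this is my own toy sketch, not the method from Leroi’s actual study): if each song is reduced to a chord pattern, the diversity of a year’s charts can be measured as the Shannon entropy of the pattern distribution – had commercialisation flattened music, that entropy would fall over time.

```python
from collections import Counter
from math import log2

def diversity(patterns):
    """Shannon entropy (in bits) of a list of chord patterns.
    Higher entropy = a more even mix of patterns = more diversity."""
    counts = Counter(patterns)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical chart years: one dominated by a single progression, one varied
flat_year = ["I-V-vi-IV"] * 9 + ["ii-V-I"]
varied_year = ["I-V-vi-IV", "ii-V-I", "I-IV-V", "i-bVII-bVI", "I-vi-IV-V"] * 2

print(diversity(flat_year))    # low entropy: one pattern dominates
print(diversity(varied_year))  # higher entropy: patterns evenly spread
```

The chord patterns and years here are invented for illustration; the real study worked from audio features of thousands of recordings, but the principle – turning a cultural question into a measurable distribution – is the same.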

In order to achieve the scientific study of culture, Leroi made clear that no one discipline could manage it alone: librarians, scholars, scientists and engineers would all need to contribute. I should mention that Leroi himself is an evolutionary developmental biologist and, as he pointed out, he could not have completed the work he went on to speak about, ‘The evolution of popular music: USA 1960–2010’, without his colleagues from other disciplines.

The following week I attended The National Archives Cataloguing Day, an event giving staff at the Archives an opportunity to present their work. The programme of talks varied widely but, as with the presentations at the BL Labs symposium, I was struck by the common thread of passion that ran through each one; many of these projects were done wholly or partly in the presenters’ own time, and their enthusiasm was infectious. Of particular interest in the DITA context of this post was Mark Bell’s presentation ‘Automating Content’, where he posed the question ‘can a computer ever write catalogue descriptions?’ (Spoiler alert: it can…but not well!)

Some of the things humans do effortlessly – qualitative spatial reasoning and linguistic inference, for example – remain difficult to teach a computer, even using artificial intelligence.

Even using entity recognition, it is difficult to get a computer to differentiate between the numbers on a page: it must be taught where to look, not just what to look for. Training systems to do this is theoretically straightforward; after all, it is by experience of page numbering that humans are able to pick page numbers out of a text. One problem, though, is that many available proprietary text algorithms require a large corpus for their training, and the largest corpora tend, obviously, to be the most readily available ones, like newspaper archives. This makes the resulting AI great at interpreting and extracting meaning (and page numbers) from newspapers, but not so great at doing the same for a digitised 19th-century catalogue.
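A minimal sketch of what ‘where to look, not just what to look for’ might mean in practice (my own illustration, not The National Archives’ actual pipeline): treat a number as a page number only when it sits alone at the very top or bottom of the page, rather than flagging every digit in the text.

```python
def find_page_number(page_text):
    """Guess a page number by position: a bare number standing alone
    on the first or last non-empty line of the page. Numbers embedded
    in the body text are deliberately ignored."""
    lines = [ln.strip() for ln in page_text.splitlines() if ln.strip()]
    if not lines:
        return None
    for candidate in (lines[0], lines[-1]):
        if candidate.isdigit():
            return int(candidate)
    return None

page = """42

In the year 1851 the collection numbered 7,000 volumes,
of which 120 were acquired by bequest."""

print(find_page_number(page))  # 42 – the 1851, 7,000 and 120 are ignored
```

Even this crude positional rule beats ‘extract any number’, which is the point: the system needs knowledge of page structure, not just of what a number looks like.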

Extracting meaning is also not as simple as it may seem. Mark gave the example of wishing to extract the names of embassy staff from a document so they could be used in its record. The passage in which the two names appear does not refer to them directly as embassy staff; rather, the account following the names – “Two men came one day, saying that they were from the Embassy…” – must be understood to refer to them. This kind of linguistic inference is not straightforward, and it is difficult at present to supply a training set which would adequately teach an AI to do it.

Ultimately, he concluded, computers see texts as ‘bags of words’, when it is so often the structure that holds the meaning we wish to extract when creating a good catalogue record – so this will remain a computer–human hybrid task for the foreseeable future.
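The ‘bags of words’ point is easy to demonstrate with a toy example of my own: two descriptions with quite different meanings can reduce to identical bags, so any structure-blind model cannot tell them apart.

```python
from collections import Counter

def bag_of_words(text):
    """Reduce a text to word counts, discarding all order and structure."""
    return Counter(text.lower().split())

a = "letters from the embassy to the minister"
b = "letters from the minister to the embassy"

# Different meanings (who is writing to whom), identical bags of words
print(bag_of_words(a) == bag_of_words(b))  # True
```

For a catalogue description, knowing who sent what to whom is exactly the meaning a cataloguer supplies – which is why the word counts alone cannot do the job.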

This application of AI in LIS linked in nicely with the following week’s ‘AI will replace you’ class, where lively discussion of Floridi’s paper, ‘What the Near Future of Artificial Intelligence Could Be’ kicked off the session.

We need to see the gears inside AI.
“An old pocket watch in brilliant sunlight” by jeremy_buttler is licensed under CC0 1.0 

The problem of closed systems – the ‘black box’ – was highlighted again in relation to AI. We had touched on this the previous week when discussing how varied the results from search and text analysis algorithms can be. In the case of marketing visuals this may be no cause for alarm, but in the text analysis of literature, for example, where conclusions may rely on the data spat out by the algorithm, or in the possibly biased presentation of search results to a user, it certainly becomes more problematic. In the context of automated decision making, it is imperative that we open up the black box to understand how decisions affecting real-world humans are made by digital machines (I’ve written about ethics in AI in a previous post here). I’d like to return to this discussion in another post, seeing as I am trying to write shorter posts and there was far too much good stuff in those two classes to cover quickly here!

Finally, I would like to reflect that it remains one of my favourite aspects of this cohort that we all bring such varied views to the discussion, yet everyone listens and considers other points of view (we could do with more of that in the wider world, imo). It was not surprising at this point, then, to hear from those at one end of the spectrum (‘utterly terrified of what may come from AI’) through to those at the other (‘positively welcoming our robot overlords’). Full disclosure: I started off from a different place entirely, one that can be summed up as ‘not convinced by the whole thing’. Despite regularly thanking self-service checkouts and ‘talking round’ misbehaving IT equipment, I don’t see AI as anything but the complex processing of information by machines – the lights are on, but nobody’s messaging you with them.

Our class has changed my view somewhat; I’m now ‘welcoming the robot overlords’ (I’m sure they will do a better job than our current human overlord contingent), but with the caveat that I don’t believe AI will ever achieve enough sentience to care to take over (though we should probably treat them nicely anyway, just in case…).

“ThankYouKeys” by Fenng(dbanotes) is licensed under CC BY-NC-SA 2.0 
CILIP events, Personal

Information Literacy: a personal perspective from PMLG Conf 19

The CILIP Public and Mobile Libraries Group Conference 2019, in collaboration with the Information Literacy Group, was held at Canada Water Library on Friday 4th October 2019

Information literacy is a big part of the work of a public library service, but it’s not something many of us on the front line often have time to stop and really think about; it’s more an instinct to help than a skill we set out to explicitly teach our users. So it was particularly welcome to take time out to think about what information literacy is, to share ideas and practice, and to hear from some fantastic speakers on the subject at this year’s PMLG conference.

Nick Poole’s introductory address underlined public libraries as being at the forefront of information literacy, and therefore at the forefront of empowering people by equipping them with those skills.

So to a quick summary before I get carried away (as I did at the sight of this pirate ship full of picture books in the library…!)

Picture books in a Pirate ship?!?! This is everything I didn’t know I needed in a library! “Canada Water Library” by quisnovus is licensed under CC BY-NC 2.0 


  • People need information literacy skills more than ever to feel empowered in the modern world
  • There is more than one kind of information literacy, and many ways of becoming information literate
  • Library workers need to be ready to adapt how they approach helping people become information literate to the needs and life context of those people they are serving

Information Landscapes

In the workshop “Everyday Information Literacy”, Dr Pam McKinney of the University of Sheffield iSchool invited us to see information literacy in context – to view it as “lived information behaviour” – and to consider that there is no ‘one size fits all’ approach to providing support in this area.

We were asked to break down our own experience of becoming information literate in a particular area into how we used particular modalities – epistemic, social and corporeal (you can see more information about these in Dr McKinney’s slides from the talk, referenced below, and in Lloyd (2017)). We could then see how we gained our literacy in that area, and whether we favoured a particular modality to do so (which we could see may have been task-dependent but also learning style-dependent). 

This allowed us to consider, in the context of our work, how the modes preferred by (or open to) an individual may vary. It is important, then, to be careful not to unwittingly impose our own preferred approach on our library users, but rather to adapt information to the user’s situation and level of comfort. Some good examples of research in this area are given in the slides mentioned above.

Discussion in the “eSafety of Library Users” workshop, led by Kev John of Kirklees Libraries, also touched on how, just as with communicating traditional literacy skills, a uniform approach doesn’t work for everyone, as some people may not see the relevance of ‘information literacy’ to their lives. One good idea for tackling this was to promote more general, positive-sounding events such as an ‘Internet Shopping Workshop’ as a hook for those put off by the seeming formality of a ‘Digital Literacy Session’ or the perhaps more fear-inducing ‘Digital Safety Day’ – reaching people who struggle to see information literacy itself as important but will jump at the chance to learn something practical with relevance to their lives.

And Finally…

Finally, a personal – and, I’ll be honest, completely unexpected – revelation about information literacy came in the closing talk by Paul Gravett, “And Finally…”, in which he broke down the medium of the graphic novel and its role as a major contributor to literacy in general and information literacy in particular. In the visual space, the story is “performed by the reader”: the author uses different aspects of visual communication to get complex situational information across to the reader in an efficient and specified way (where using the written word to do the same would require lengthy and clumsy exposition). One example was showing a character’s face with apparent perspiration droplets on her cheek, a shorthand way to communicate her tension and anxiety.

Then there was the physical layout of the medium itself and how it could be used to give a particular rhythm to the work: building up the suspense so that, when the page is turned at the specific moment intended by the author, the revealed panel delivers the payoff plot point (or is often a cliff-hanger ending in itself). An interesting and timely illustration of this intentional use of the physical medium was the way mobile phones have gained traction (particularly in South Korea) as a preferred medium for reading graphic novels, manga and comics. Reading on a mobile device allows, for example, a character to ‘fall’ down the screen as the reader scrolls, or a sudden ‘reveal’ where a continuously scrolling comic has a ‘sticky’ panel that flicks onto the screen in its entirety once you scroll past a certain point (mirroring the traditional use of thoughtful layout in hard-copy volumes mentioned above).

I cannot do justice to Paul Gravett’s full talk in a short blog post and would encourage you to read more of his work if this area is of interest to you (see Sources below). But I wanted to convey why this struck a personal chord with me and bring us back to thinking about information literacies. 

On a personal note…

“IKEA high impact online banner” by Rodger Werkhoven, Mark&Rodg ‘ is licensed under CC BY-NC 4.0 

I find visual representations of information somewhat baffling and hard to read – maps, road signs, Ikea instructions (though I think the latter get to everyone to a certain extent!) all have to be worked around on a regular basis. Some things can be learnt by rote (the road signs, certainly), but others are harder: anything which tries to visually represent a process or convey a situation requires a good level of visual literacy – an aspect of information literacy which, just like understanding the written word, many people take for granted. I’d never thought about my limited visual literacy as anything but an annoyance, because our society is far more reliant on reading literacy. But I started to think about how different things would be if society were built around visual literacy: how much my life would be limited, how much I would need to rely on others to interpret the world for me, how much trust I would have to place in them not to mislead me, and how much work I would have to put in to become independent in that skill.

So I came away from that final talk in particular with a greater sense of empathy to take back to my workplace, and a greater sense of how difficult it can be to learn new skills that others take for granted (I read a lot of graphic novels and yet I still thought the ‘sweat beads’ mentioned above were tears, and so my ‘performance’ of that story was not what the author intended as I lacked the visual information literacy skills to interpret it correctly).

“Literacy mountain” by dougbelshaw is licensed under CC BY 2.0 

Which all brings me back to our role as public library workers (and, more broadly, as information professionals): not only recognising the importance of information literacy and our role in helping people gain it, but, perhaps more importantly, recognising that we cannot assume everyone is able to learn these skills in the same way, or that everyone will have the drive to do it for its own sake. Information literacy is important, but it is not going to solve every problem people face. As always, it’s the context that’s important, and the library worker’s skill lies in taking this into account each time.

References & Sources of further information

An overview of the conference can be found here:

The slides from Dr McKinney’s workshop can be accessed here:          

For more information about Information Landscapes and Modalities of Information see: Lloyd, A. (2006) ‘Information Literacy Landscapes: an Emerging Picture’, Journal of Documentation, 62 (5) pp. 570-583 [Online] DOI:10.1108/00220410610688723
Lloyd, A. (2017) ‘Information literacy and literacies of information: a mid-range theory and model’, Journal of Information Literacy, 11 (1) pp. 91-105 [Online] DOI: 10.11645/11.1.2185

The latest on Kirklees Awareness of Online Safety (KAOS):

More information about graphic novels, comics and manga, with recommendations from Paul Gravett at


DITA, Ethics & AI

“oCat” by Annika E.N. is licensed under CC BY-NC-ND 4.0 

This post started life as a reflection on the first session of the Data, Information, Technologies and Applications (DITA) module, where discussion centred around how information and data have the potential to affect the way we see and experience the world. It has, though, followed me as I explored a little further into the realms of AI and information ethics…

Discussion in the first session was initially sparked by the class reading of David Beer’s article “Data and Political Change” and continued by Dr Lyn Robinson’s presentation “Finding the ‘i’ in Data”. I’ve attempted to summarise a few of the main discussion points below by way of introduction:

  • Information becomes fragmentary when we are bombarded with so much of it; we tend to lose sight of the bigger picture.
  • We rely more and more on proxies to cope (think Tripadvisor, Rotten Tomatoes, etc.), which give us a way to make a decision in the face of too much information; we no longer consult a primary source (not always through choice, but through the necessity to get things done and get on with our lives).
  • Dependency on sources of information which use algorithms informed by how we have acted previously (think social media but also shopping sites for example) means we may gradually be ‘nudged’ into opinions and ways of thinking, e.g. by ads continually telling us we are people who align with this certain brand or feeds showing us only news stories which agree with opinions we have expressed or ‘liked’ in the past.
  • What data should be collected, and who does that data belong to?
  • There is a need for greater education in Information and Data Literacy at all levels:
    • At a public level in order to equip people with the skills to see when they may be being influenced in ways with which they do not agree and to know when, how, to whom and to what extent they should limit the personal data they divulge (and indeed when they may be divulging it unwittingly).
    • At a professional level to educate those engineers and designers creating the systems which gather & communicate data/information.

Of course I acknowledge that on this last point, as a room full of LIS students and professionals, everyone present was biased – but I don’t think anyone outside the room could have mounted a convincing argument against the huge role LIS professionals should have in making this education happen.

Two things:

Two things came out of the session which led me in the direction of thinking more about wisdom, ethics and AI.

Lyn spoke of information as structured data, and included in one slide was Ackoff’s ‘knowledge pyramid’: Data as the foundation, followed in order of hierarchy by Information, Knowledge, and Insight & Understanding, with Wisdom at the apex.

The Knowledge Pyramid – image from Rafael Fajardo’s blog post

Given the discussion we’d just been having on the ethics of data collection – indeed, whether we should collect certain data at all – this made me think that the pyramid might serve us better if transmuted into a circle. Wisdom would come before the gathering of data, so that we can decide whether it should be gathered at all, and would thus inform each point around the circle, rather than sitting neatly at the apex as an important but somewhat impotent achievement which only (hopefully) informs the next time we want to gather some data, but does very little else to deal with the implications of the data it’s ultimately resting on.

Following the above discussion, as part of her presentation Lyn made a balancing comment which resonated with me: “AI can be used for good” and it’s actually this that I’d like to explore below, though oddly the pyramid/circle idea comes back into this as, unsurprisingly, I am not the first person to have had this thought!

AI for good

Kriti Sharma is an AI technologist and humanitarian; she trained as a computer scientist and her focus is on ethics in AI. Among her many achievements, she founded ‘AI for Good’, which seeks to encourage AI as an empowering tool to address problems of, for example, social justice and mental health support around the world. Just one example is the rAInbow app, which is “a smart companion, designed to reach millions who are in an abusive, controlling or unhealthy relationships…a companion that uses AI to help people in difficult situations.”

I was reminded of the keynote Kriti gave at this year’s CILIP conference, where she cited many examples of the sort of real-world problems the use of AI has thrown up recently, Amazon’s recruitment system’s bias against women being one that hit the headlines last year. Kriti argues that much of the bias in algorithms is there because they are written by humans with biases and trained on biased data, so it is not surprising when the output is also biased (bias in, bias out). She went on to outline a five-point ‘checklist’ which, in her view, would be a good place to start in addressing the problems around AI.

  1. Does this AI reflect the diversity of its users?
  2. Is oversight built into the system around this AI – is it held accountable in the same way a human completing the same task would be?
  3. Is it transparent enough?
  4. What is it being used for? Is this appropriate?
  5. Are we training people appropriately for the AI future, for the jobs it will create, modify and overtake?

So, to join up with what I was talking about earlier, it strikes me that this checklist introduces wisdom at the start of the process of designing an AI – it puts wisdom before data.

Later, watching Jer Thorpe’s talk ‘Data and humans: a love story’ from the 2016 Library of Congress conference Collections as Data: Stewardship and Use Models to Enhance Access, I was interested to see that he too felt an alteration of the knowledge pyramid was called for; he chooses, though, to add another layer of wisdom at the base.

Screenshot of video at

For a first week exploring Information Science I was already feeling really engaged with the topic of data/information ethics and pleased to be able to find these far more elegantly expressed echoes of my own newbie thinking out there in the wider community.

And then this happened…

Excellent timing, Dr

I happened to see a tweet by the excellent Dr Andrew MacFarlane (if you don’t already, consider following @unixspiders for insight into open source, information ethics & retrieval). He passed on details of a talk about AI being hosted the next day by the Centre for Human-Computer Interaction Design (HCID) at City. The guest speaker, Dr David Leslie, Ethics Fellow at The Alan Turing Institute, gave a talk entitled ‘Explainability with a human face’, covering many interesting background topics in philosophy which ultimately point to the “central role that human interpretation and evaluation should play in the explanation of AI-supported decisions”.

Dr Leslie argues that “responsible implementation of AI is a dialogical process, it involves reaching a mutual understanding” and that without exposing the input of these systems to the humans using them, without translating the technical into the everyday to make the workings of AI meaningful to those humans, then mutual understanding is impossible. AI becomes a ‘black box’ which makes our decisions for us if we let it.

AI can only be a part of ethical decision making, and only if the results of its algorithms are placed into the context of the real-life humans which the decisions will impact.

In his paper ‘Understanding artificial intelligence ethics and safety’ (Leslie, 2019), Dr Leslie illustrates what could be seen as a responsible ‘content lifecycle’ for an AI-based system: human values are translated into the system at the initial design phase; the data processing takes place and produces a trained model; and the model’s output must then be translated back into ‘human-understandable’ values, so that what it produces (statistics, decision guidance, etc.) can be understood transparently by the person using it.

Taken from ‘Understanding artificial intelligence ethics and safety’ (Leslie, 2019)

My understanding of the implication here is that, without this translation, the system becomes another ‘black box’: instead of supporting the human user in working through an evidence-based reasoning process, it simply becomes a ‘computer says no’ system, liable to be blindly followed by those without the technical ability to understand its particular limitations in each situation.

Final thoughts

“When the robots do take over, at least they could be nice”

Kriti Sharma (2019) Can AI create a fairer world? [CILIP conference keynote address, 3 July]

I started this post as a simple reflection on what I felt I had learnt from discussions in the first session of DITA; I soon went down a rabbit hole of reading and talking to people about ethics and AI, ending in a serendipitous encounter with the HCID group and Dr Leslie. I feel I have barely scratched the surface of this topic, and apologise for any errors in my understanding I may have inadvertently communicated above; please visit the primary sources to combat my potentially misleading proxy?!

Overall, whilst I recognise that AI, big data and all their relations present us with ethical risks and challenges, I am inclined to believe that they can be used for good, and I am reassured that there are people like Kriti Sharma and David Leslie out there informing the work of corporations and governments in this area. In order for our robots to be good, it is imperative that information and the ability to understand AI are not siloed. For our own part as LIS professionals, we should engage at every turn to help inform and educate not just the engineers and designers of these technologies but society more widely. What is needed is democratic oversight of these technologies, but people can only choose that path when they understand that they can, and why they should.

If you’ve stuck with me this far, thank you and I promise to make my next post shorter if Lyn promises to make no more intriguing comments (I’ve a feeling that’s not likely).

References & links

Beer, David (2018) ‘Data and Political Change’, 20 September

AI for Good

rAInbow app

Dastin, Jeffrey Amazon scraps secret AI recruiting tool that showed bias against women October 10 2018

Thorpe, Jer (2016) ‘Data and Humans: A love story’, 27 September, at the Library of Congress conference Collections as Data: Stewardship and Use Models to Enhance Access

Leslie, David (2019) Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector doi:10.5281/zenodo.3240529

CityLIS, Personal

A first step

A cartoon owl sits in a branch with autumn leaves swirling around against a blue background

“Owl Illustrations” by david scheirer is licensed under CC BY-NC-ND 4.0

It’s a dangerous business, Frodo, going out of your door. You step onto the road, and if you don’t keep your feet, there’s no knowing where you might be swept off to.

J.R.R. Tolkien, The Lord of the Rings, The Fellowship of the Ring

3 things about me:

I’ve worked in a library in some capacity since I left school; I’ve been qualified but never had a ‘library qualification’.

I’ve written reflective journal entries for personal and professional reasons for years: I’ve never written a blog that other people may read!

I’ve studied for a degree before; I’ve never felt so excited/nervous to do so.


Following #CityLIS over the summer has confirmed that I chose the right institution and course for me. The focus on the application of LIS in the real (and rapidly changing) world is important to me and I want to be a part of that.

Also, cats.

A screenshot of a post by CityLIS course director Lyn Robinson showing a model cat on top of a stack of books with the caption 'we have all the cats #citylis'

The welcome at City has been very warm and I am looking forward to meeting the people I’ll be working with today. Yesterday’s welcome talks were full of advice and encouragement.

The Dean of the school, Professor Muttukrishnan Rajarajan was particularly inspiring, drawing on his own learning experiences and speaking of the fundamental things we would need:




He also spoke of the value of collaboration, of how speaking with those in other departments and schools would help make the most of our time here and build our skills and networks for the future.

This certainly resonates with my own view of work, particularly in libraries. A great deal is achieved, or first conceived of, while chatting through projects with others at lunch or a conference – or even, in one case, with a customer who happened to have some great contacts she was happy for us to leverage.

So I’ve made a start and I feel I’m in good hands; I’m excited to see where I get swept off to.