Karina van Dalen-Oskam on Computers and Literature

Guest blogger, Professor Karina van Dalen-Oskam, will deliver a CTTR-sponsored Faculty of Arts Public Lecture, ‘The Riddle of Literary Quality’, on Wednesday, 13 March, 6-7 p.m., MK405 George Wallis Building – further information here.

Can computers and literature work together?  Yes, they can!

To many it seems scary: algorithms that do things we thought only humans could do. Agreed, some things can be done better by machines than by humans: calculating business profits in a split second, for instance, or performing repetitive actions for days on end. When strict rules and regulations apply, a machine will perform flawlessly, whereas a human may easily be distracted, occasionally make a mistake, or change her mind the next day even when dealing with a case similar to the one from the day before. But surely humans cannot be beaten when things are more complicated and difficult to describe in simple, executable steps. Making ethical decisions, for instance. Or discussing the aesthetic pleasure an art object yields. Or can they?

Sir Kazuo Ishiguro, winner of the 2017 Nobel Prize in Literature

Well, I am finalizing a research project, started in 2012, that takes just such a scary new approach to literature: The Riddle of Literary Quality. The aim of the project is to see whether we can measure all kinds of stylistic features of contemporary novels and find out which features help a novel to be perceived as highly literary and which diminish an author’s chances for the Nobel Prize. Does the amount of direct speech in a novel influence the reader’s perception of literariness? And what about the use of cliché expressions, or the length and complexity of sentences, to mention only some of the issues that Andreas van Cranenburgh has dealt with in his dissertation?

My team and I make use of digital texts and of software that can deal with the intricacies of language on a huge scale. The computer can read many more novels than we could ever do in a lifetime of continuous reading, and it can find patterns that are too massive and complex for human eyes. It can, for example, measure the mean sentence length and the number of verbs in thousands of novels, and visualize the distribution of the results in helpful graphs. The kinds of patterns that then become visible were simply inaccessible to us before. But when we see, for example, how a simple and perhaps boring measure such as sentence length tends to be higher in novels that are marketed as literary fiction than in novels labelled Suspense or Chicklit, this may change our ideas about how literary value is attributed and how literary and other texts may be analyzed in the future – or even how they will be written.
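To make the idea concrete, here is a minimal sketch of the kind of measurement described above: computing the mean sentence length of a text and comparing the averages across marketing genres. The texts and genre labels below are invented stand-ins; a real project like this one would apply proper tokenizers to thousands of full novels rather than naive punctuation splitting.

```python
import re
from statistics import mean

def mean_sentence_length(text):
    """Average number of words per sentence (naive punctuation-based splitting)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return mean(len(s.split()) for s in sentences)

# Tiny invented excerpts standing in for whole novels.
corpus = {
    "literary": "The light failed slowly over the canal, and she watched it go. Nothing moved.",
    "suspense": "He ran. The door slammed. A shot rang out.",
}

for genre, text in corpus.items():
    print(genre, round(mean_sentence_length(text), 2))
```

Run over a large corpus, per-genre averages like these can then be plotted as distributions, which is where unexpected patterns start to show.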

So why is this approach so interesting? First of all: the scale. Until now, we were happy to close-read a couple of novels again and again and develop an interpretation based on this very small corpus. At the end of the day, however, we always had to admit: further research will have to be done to find out whether this observation or pattern is uniquely found in this text, this oeuvre, this genre, this time period, this… whatever. And here is what changes with the advent of the digital age. We can still develop a hypothesis while close-reading one or more novels. But now we actually have the opportunity to check many of these hypotheses, by writing and applying software that reads and analyzes texts on a large scale.

And the fun part is: we are in charge. We choose what to select for analysis, how to model our questions, and how to evaluate the results and arrive at interpretations and conclusions. And we will have to be open to surprises: we will regularly meet with outcomes that we did not expect, and that will set us off in totally new directions for follow-up research. Results may suggest different routes towards places we didn’t even know existed. That’s quite adventurous!

So can we measure literary quality? Yes, we can – partly. I will show how in my lecture on 13 March.

Prof. dr Karina van Dalen-Oskam is Head of the Department of Literary Studies of Huygens Institute for the History of the Netherlands and Professor in Computational Literary Studies at the University of Amsterdam. For more information, see: https://www.wlv.ac.uk/about-us/news-and-events/calendar/?view=fulltext&id=d.en.3834442

