January 30, 2014, by Helen Lovatt
3D-Scanning Fundilia: Research on the statuary finds from the Sanctuary of Diana at Nemi
Can quantitative analysis give a new way of looking at art? By 3D-scanning Roman portraits, Katharina Lorenz aims to answer this question.
In the summer of 2013, the Nottingham Castle Museum and Galleries put on an exhibition of the finds from the Roman Sanctuary of Diana at Nemi. The collection is now back in the storerooms but the research continues – among other things, with work on the statues of Fundilia, and a conference at Nottingham on February 21, 2014.
An extended version of this blog post can be found here.
Over the last two years, I have been working on a project concerned with the 3D scanning of the two portrait statues of Fundilia: the herm statue in the Nottingham Castle Museum and Galleries (NCMG) and the full-body statue in the collection of the Ny Carlsberg Glyptotek in Copenhagen. This research is thoroughly multi-disciplinary: it involves archaeologists (myself, Ann Inscker at NCMG, and Jane Fejfer and Mette Moltesen in Copenhagen), Human-Computer-Interaction specialists (Damian Schofield and Matthew Andrews at SUNY Oswego) and a forensic anthropologist (Stephanie Davy-Jow at the University of South Florida), along with Nottingham University Classics students, who helped scan the herm statue in 2011 (it is perhaps no surprise to hear that Nemi and its finds feature heavily in my teaching…).
So why 3D-scan Roman portrait statues? 3D-scanning produces large data sets – sets of quantitative measurements of the statues – which can support different types of advanced statistical analysis. What interests me is the extent to which such measurements can help us better understand the formal and stylistic relationship between statues. The central questions for an ancient art historian are: how similar are the two portrait representations (we know they both show Fundilia because the inscriptions on the statues tell us)? Can we tell whether one was made before the other, i.e. whether one statue served as the model for the other (this would have implications for the dating of the statues, and – possibly – also for their respective importance)?
Key to answering these questions is close scrutiny of each statue's portrait, that is, its face. Traditionally, ancient art historians rely on visual autopsy: they study the portraits very closely and compare all the individual features in order to arrive at an interpretation. But this type of visual autopsy comes with an error margin caused by subjectivity: different interpreters might quite literally look at things differently.
The field of Roman portraiture studies is continuously concerned with reducing this element of subjectivity. What interested me when starting the project (and still does!) is to what extent the quantification of a portrait – a type of objective description, since it does not depend on the eyesight of a single individual – could offer a new, stronger basis for the formal and stylistic analysis of Roman portraiture.
Comparing facial measurements, even ones obtained by an “objective” machine such as a 3D-scanner, is not easy – for one, there is the question of which measurements to compare. This is where my colleagues from Human-Computer Interaction and Forensic Anthropology come in: they developed a standardised system of facial landmarks, which are particularly relevant for facial identification and against which any face can be mapped. And that is what we also did with the two Fundilias, based on the data collected from 3D-scanning them.
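To give a sense of what such a landmark mapping looks like in practice, here is a minimal sketch. The landmark names follow common anthropometric convention (e.g. "nasion", "pronasale"), but the specific set and all coordinates are invented for illustration; this is not the project's actual landmark scheme.

```python
# A standardised, ordered list of landmarks means every scanned face
# is described by the same measurements in the same order, so any two
# faces can be compared point by point.
LANDMARKS = ["nasion", "pronasale", "gnathion", "exocanthion_l", "exocanthion_r"]

def as_vector(face):
    """Flatten a face's named landmarks into one ordered coordinate vector."""
    vec = []
    for name in LANDMARKS:
        vec.extend(face[name])
    return vec

# Hypothetical landmark coordinates (in cm) picked off a 3D scan.
fundilia_herm = {
    "nasion": (0.0, 8.2, 1.1),
    "pronasale": (0.0, 6.0, 3.4),
    "gnathion": (0.0, 0.0, 1.9),
    "exocanthion_l": (-4.1, 8.0, 0.2),
    "exocanthion_r": (4.1, 8.0, 0.2),
}

print(as_vector(fundilia_herm))
```

The point of the fixed ordering is that subjectivity drops out of the description itself: whatever two faces are compared, the same named points are measured in the same sequence.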
The outcome of the statistical analysis conducted with this data showed that the density of coordinates differs between the Nottingham and the Copenhagen portrait of Fundilia – or, put simply: if the portrait heads were superimposed, there would be significant discrepancy.
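A superimposition comparison of this kind can be sketched as an ordinary Procrustes fit: align one landmark set onto the other (removing differences of position, orientation and overall size) and measure what disagreement remains. The function below is an illustrative sketch, not the project's actual analysis pipeline.

```python
import numpy as np

def superimpose_rmsd(a, b):
    """Align landmark set b onto a (translation + rotation + uniform scale,
    i.e. ordinary Procrustes) and return the residual root-mean-square
    distance: a single number for how much the two faces still disagree
    once pose and size differences are removed."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Remove position: centre both landmark sets on their centroids.
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    # Remove size: scale both to unit Frobenius norm.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Remove orientation: the best rotation comes from the SVD of b.T @ a.
    u, s, vt = np.linalg.svd(b.T @ a)
    rotation = u @ vt
    scale = s.sum()  # optimal residual scaling after unit-norming
    b_aligned = scale * (b @ rotation)
    return float(np.sqrt(((a - b_aligned) ** 2).sum() / len(a)))
```

Two identical shapes yield a residual of zero however they are placed in space; a non-zero residual, like the one found between the two Fundilia portraits, means the faces genuinely differ in shape, not merely in pose or scale.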
We are currently working on the next stage of the project, with further analysis of the mathematical data and visual feature comparisons between the statues – more results to follow soon!
A report on the scanning work can be found here.