September 16, 2017, by Philip Moriarty
In To Bat — Guest post from Oliver Gordon
Oliver (Oli) Gordon, a third year — soon to be fourth year — MSci Physics student has very recently completed a ten week summer intern project in the Nottingham Nanoscience group. The internship, as Oli recounts below, has spanned a rather wider range of topics than he first expected…
Before starting, I feel it worth mentioning that it’s difficult enough as it is to sum up ten weeks of anything even vaguely enjoyable in only a few hundred words. It’s even less trivial when you also have to include (amongst other things) researching with an evolving group of talented people, programming in three languages (two being new to me), writing 1100 lines of code in an evening (only to realise a better solution the next day), using about £1 million worth of scanning probe microscopes (one of which is shown to the right, along with an image of Ag atoms and clusters), moving individual atoms, classifying 6022 images, trying to publish my 3rd year research, report writing, building and training a neural network, and then implementing all of this (and still squeezing in a social life, playing music and following cricket!).
So, let’s start simple…
Rather than opting for a more typical business-oriented internship this summer, I chose to stay in Nottingham to work on a research project. I have always enjoyed the more project-oriented parts of my degree, and a PhD has been a looming question for some time. The project I applied for allowed me to spend the summer applying and developing the aspects of physics I particularly enjoy, whilst also trying to make my mind up about a PhD. I was also able to apply the skills and theory I learned in Imaging & Manipulation at the Nanoscale, Computing for Physical Sciences and my project modules. Working with Professor Moriarty (who also supervised my 3rd year project on correlations in drumming, the results of which are about to be submitted for publication in a journal) was also a real bonus.
The basis of the project involved scanning probe microscopy (SPM). As you may be aware, an SPM operates by bringing an atomically sharp tip to within a nanometre of a surface. (If you’ve not encountered scanning probe microscopy before, this Sixty Symbols video provides an introduction.) By measuring the tiny current that flows between the tip and the sample (thanks to quantum tunnelling) as the tip scans across the surface, it is possible to build up an image. But surprisingly, there isn’t a reliable way of making atomically sharp tips. You crash a tip into the sample, apply a small voltage, and then scan a known surface. If the image doesn’t match well, you make another tip (which could be either better or worse). This ritual is then repeated until various planets align and take pity on you.
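To get a feel for why the tip-sample gap matters so much, it helps to see the numbers. A quick sketch (using a textbook tunnelling model and an illustrative work function of 4.5 eV, not values from our project) shows how violently the tunnelling current responds to tiny changes in distance:

```python
import math

# Textbook model: tunnelling current I ∝ exp(-2*kappa*d), where
# kappa = sqrt(2*m*phi)/hbar. The work function phi = 4.5 eV below is a
# typical metal value chosen for illustration only.
m_e = 9.109e-31    # electron mass, kg
hbar = 1.055e-34   # reduced Planck constant, J*s
eV = 1.602e-19     # one electronvolt in joules
phi = 4.5 * eV     # illustrative work function
kappa = math.sqrt(2 * m_e * phi) / hbar  # decay constant, 1/m

def relative_current(delta_d_angstrom):
    """Factor by which the current grows when the gap shrinks by delta_d (in Å)."""
    return math.exp(2 * kappa * delta_d_angstrom * 1e-10)

print(relative_current(1.0))  # moving ~1 Å closer boosts the current roughly ninefold
```

That exponential sensitivity is exactly what makes the microscope capable of atomic resolution, and also why the quality of the last atom on the tip matters so much.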
Our goal was to make a convolutional neural network that would be able to automatically make a good tip by recognising certain features (and/or their absence) in an image, and re-crashing the tip accordingly. This would save nanoscience researchers a significant amount of time.
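The “convolutional” part of a convolutional neural network refers to sliding small filters over an image, with each filter responding wherever its feature appears. As a toy illustration (not our actual code, which used a full deep-learning framework), here is that core operation by hand, with a hand-made edge filter standing in for the filters a network would learn for itself:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the overlap between the kernel and one patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2:] = 1.0                     # toy image: a vertical edge down the middle
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right brightness jumps

response = convolve2d(image, edge_filter)
print(response)  # nonzero only in the column where the edge sits
```

A trained CNN stacks many layers of learned filters like this, which is what lets it pick out the features of a good (or bad) tip from a scan.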
Even if SPM isn’t your thing, neural networks are fascinating and well worth a YouTube binge! From learning to drive in GTA V to generating surprisingly convincing YouTube comments (or far less convincing baroque music), a lot of research time can be “constructively” spent learning about neural networks. (For a more serious introduction and tutorial to the techniques we used, Siraj Raval’s tutorial is a fantastic place to start.)
Much of our first two weeks was a more typical 9-5, doing background research and shadowing some of the PhD students in the nanoscience group. This helped us not only to better understand what we would have to do, but also to appreciate the potential setbacks we would have to account for. We also spent a day at Newcastle University’s computer science department, where we were introduced to a researcher whose help proved invaluable.
From here, the project was very open-ended, and the days were packed full of intrigue. A lot of our time was spent in and out of the lab, writing analysis programs in MATLAB and Python, filling up whiteboards with systems and algorithms we had devised, and then trying to find faults with them. One thing we didn’t envision, however, was that it would be better to use Zooniverse to classify our training data (neural networks “learn” from thousands of already-solved examples) than to write (and scrap) an 1100-line MATLAB GUI for the job.
It was at this point our project gained serious press attention, with a mention on Test Match Special during the test at Trent Bridge. (My social networks were literally inundated with two entire messages!) Hearing Graeme Swann and Daggers trying to understand AFM and sharing my enthusiasm for Henry Blofeld made for great entertainment on an otherwise rainy day.
The last month of the project was then spent trying to develop a neural network with the Keras module for Python. There were many variables to optimise and issues to overcome (a computer has no common sense, so it will “solve” your problems in ways that aren’t actually solutions, such as simply labelling every image with the most likely outcome). Unfortunately this meant that the project was not quite completed, but hopefully it can be finished next year.
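That “label everything with the most likely outcome” failure is worth spelling out, because the numbers look deceptively good. A toy illustration (invented proportions, not our project’s data): if 90% of the tip images in a training set are “bad”, a lazy model that calls every image bad scores 90% accuracy while being completely useless:

```python
from collections import Counter

# Toy imbalanced dataset: 90 "bad" tip images, 10 "good" ones (illustrative numbers)
labels = ["bad"] * 90 + ["good"] * 10

# The lazy "model": always predict whichever class is most common
majority = Counter(labels).most_common(1)[0][0]
predictions = [majority] * len(labels)

accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
good_recall = sum(p == t == "good" for p, t in zip(predictions, labels)) / 10

print(accuracy)     # 0.9  -- looks impressive on paper...
print(good_recall)  # 0.0  -- ...but it never identifies a single good tip
```

This is why accuracy alone is a poor measure on imbalanced data, and why training a network like ours needs either balanced training examples or metrics (and loss weightings) that penalise ignoring the rare class.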
Before closing, if you want to contribute to the project (you don’t need any particular knowledge!), the Zooniverse page is still open. Our vision was to have an ever-evolving network, and improving the training data is a key part of that.
Despite not seeing a final product, I’m proud of what we accomplished. I had such a sense of ownership and satisfaction with everything I did, and being able to work with some exceptionally talented individuals on a genuinely interesting and fun endeavour was a real pleasure. I’ve learnt so much about programming, researching, problem solving, report writing and more. This summer has also made me appreciate just how much I enjoy physics, and has left me eager to crack on with my fourth year and finish my degree in style.