
March 17, 2025, by Ben Atkinson
Digifest 2025 day 1: Connecting and gaining new insights
In the first of a series of blog posts, Learning Technology Consultant Sally Hanford reflects on her experiences of attending Jisc Digifest 2025, discussing day one of the event in Birmingham. A follow-up post will cover day two, along with thoughts from Learning Technology Consultant Dave Corbett, who also attended Digifest 2025 representing UoN Learning Technology.
---
As I boarded the train to Birmingham, my mind was full of the programme I'd pored over with growing excitement the previous evening (choosing which talks to attend is always tricky, as there are so many and they all look interesting). But the scheduled talks and workshops are not the only reason attending in person is so valuable: the event is a great opportunity to reignite relationships with contacts from other institutions, suppliers and folk from Jisc, as well as to make new connections and gain new insights.
The room was packed, and the sense of expectation was tangible as we waited for the opening welcome talk and the first keynote speaker, Paul Iske. Paul's funny, insightful talk traced the realisation of his idea for the 'Institute of Brilliant Failures' and his 'failure archetypes'. The archetype that jumped out at me was 'Not all relevant parties are involved', for which he gave several real-life examples. It was a good reminder of the importance of including all stakeholders in our work, including students.
The message? That failures should be talked about, reflected upon and learned from, not (as is so often the case) hidden and seen as somehow shameful.
The next session I attended was sponsored by a company called LearnWise. They have partnered with Jisc to deliver some of the AI pilots at institutions across the UK, and had helped South Staffs College and Liverpool John Moores University implement chatbots drawing on the data in their central services databases. The initial target in both cases was the institutional IT helpdesk. The wealth of quantitative and qualitative data generated by users interacting with the system brought all sorts of benefits beyond the obvious, such as providing evidence for business cases and surfacing the real barriers faced by the university population, helping to prioritise the developments that would have the most impact on the student experience.
The final workshop session of the morning was hosted by Kaltura, who demonstrated a new AI tool called 'Class Genie' designed to provide 'hyper-personalised' resources. The tool (currently in beta) organises existing teaching content to generate quizzes, flashcards and video snippets based on a student's engagement patterns and prior activity. Very interesting, and Kaltura's participatory approach to development during the workshop was good to see.
Sana Khareghani, Professor of Practice in AI at King's College London, AI Policy Lead for Responsible AI UK and former Head of the UK Government Office for Artificial Intelligence, closed the event. Her talk touched on the history of AI, citing Marvin Minsky as the 'father' of AI and recounting the major breakthroughs since. She reminded us of the need for thoughtful AI policy and investment, drawing inspiration from countries like Estonia and Canada, which have led efforts in digital governance and AI integration. This focus on collaboration, ethics and public service improvement is key to ensuring that AI benefits society at large.
After an inspirational day (with a disproportionate and unintentional focus on AI), I headed home on the train intending to summarise some of the notes I'd made (on paper, not using AI), but fell into a deep conversation with two young fellow passengers. One was a 27-year-old budding solicitor aiming for a training contract, who had reached stage four of an interview process with a large Birmingham law firm and was optimistic. I asked whether he thought the lawyer's role might be affected by AI, and was struck by how he had already considered this and was aiming for a very specific niche within the profession that would, he predicted, be immune to such developments. A fellow passenger working in finance overheard our conversation and chipped in that her work centred on discerning the gaps in information, something she felt AI would never be skilled at. Clearly, they had both thought through any threat AI might pose to their careers and adjusted their plans accordingly. I was very impressed.