Image: two presenters shown picture-in-picture alongside their presentation at the Digital Accessibility Conference.

August 3, 2023, by Ben Atkinson

The Digital Accessibility Conference: Sessions Review Part 2

On the 29th of June 2023, Learning Technology hosted a Digital Accessibility Conference at the University of Nottingham. In this series of blog posts, we are reflecting on the event and sharing reviews written by members of the team who chaired panels on the day. In this post, Ben Atkinson reflects on the short papers in sessions 1B and 3F.

Session 1B/1: Towards an institutional culture of inclusivity and accessibility

Presenters: Michael Shaw & Andy Beggan, University of Lincoln

In this first paper of Session 1B, Andy Beggan and Dr. Michael Shaw from the University of Lincoln discussed their approach to creating an institutional culture of inclusivity and accessibility at Lincoln. Opening the session, Andy talked about the way in which Lincoln approached building a culture of inclusivity and accessibility both before and after the adoption of the 2018 EU legislation. As many attending the conference will know, the 2018 regulations made it a legal requirement for institutions to consider their approach to accessible practice, and this resulted in a marked change in the adoption of accessible practices across the sector.

At Lincoln, work on accessible practice had already begun, and it was great to hear about support for this approach from the whole University, from the Vice-Chancellor down. Andy and Michael’s presentation focused on four key strategies of their approach, including the tools used, the training and support offered, and approaches to building partnerships across the institution. Beginning with a mandatory accessibility course, the team considered the training on inclusive practice they had already led, and designed a brand-new suite of training resources focused directly on accessibility, while ensuring that colleagues across the institution would find the content useful and empowering. The results were positive, with a marked 42.5% increase in the accessibility score of documents and learning resources following the introduction of the training.

I was particularly interested in the approach taken by Lincoln, with Dr. Michael Shaw outlining the hard work which had taken place to ‘win over the hearts and minds of colleagues’, appealing to them from an empathetic perspective and encouraging them to do the work required. Positive reinforcement seems to have been a successful approach at Lincoln!

Other interesting aspects of this presentation included an accessibility toolkit (available at https://lncn.ac/access) and guidance and support for writing alt text for complex images, including graphs, charts and scientific data. It was great to hear how Lincoln had partnered with a team of student video producers to create video resources that promote accessible practices to students and staff. The students added their own humour to these videos and helped to convey this important topic in a meaningful way. This was a great example of student engagement.

Andy and Michael ended their presentation with an overview of the Blackboard Ally tool, which is used at Lincoln to review documents for their level of accessibility. Using data from the previous academic year, Dr. Michael Shaw shared that over 100k alternative-format documents had been downloaded by students at Lincoln on demand, without them having to ask anyone to produce the files. This was particularly interesting for me and, if nothing else, shows the benefit of moving towards good accessible practice. Clearly students are engaging with and benefiting from having these documents in a variety of accessible formats.

It was very interesting to hear how far Lincoln has come as an institution in embedding accessibility across the University, and to hear the team’s plans for continuing to support its adoption.

Session 1B/2: Creating inclusive and digitally accessible learning experiences for our students using Moodle and H5P

Presenter: Michelle Thompson, Kaplan International Pathways

Michelle Thompson rounded off session 1B with a presentation on H5P, a tool which has been used at Kaplan International Pathways to produce accessible learning resources. Michelle’s presentation was an informative deep-dive into the many accessible features of H5P and the key pitfalls to avoid when using the tool to design your learning resources.

Michelle began with an overview of Kaplan and their work, which is centred around supporting international students with their transition into Higher Education, covering a wide range of subjects and supporting students from 100 nationalities to move into HE study in the UK.

It was interesting for me to hear how Michelle and the wider Kaplan team had approached digital accessibility, with the presentation including a variety of examples of how the team met the WCAG 2.1 guidelines using the H5P tool. Between 2018 and 2022 the team undertook a project to respond to the EU legislation on digital accessibility, deciding to follow the legal guidance despite not being a formal awarding university. This project focused on historical accessibility issues within the organisation, and the main outcome of this stage of the work was that focusing on old content, content that is perhaps no longer used frequently, is not the best approach.

From 2022 onwards, Kaplan colleagues pivoted their approach to ensure their staff teams could reach a consistent understanding of, and approach to, digital accessibility, while at the same time undertaking an audit of all content currently in use. The project was informed by an inclusive model in which support is available for all students and, by following inclusive design principles, the resources produced would be appropriate for every student.

Using H5P, a tool for building interactive learning resources, allowed for the quick rollout of accessible features, including banner templates for the VLE which provided useful information and aided navigation. The team also produced a style guide which advised colleagues on how to build accessible content in H5P and advocated for key accessible practices such as the good use of headings, alt text on images and instructional text for students. H5P was seen as a sensible tool to use, as it comes with built-in accessibility features and has a number of interesting content types, including the ‘Accordion’ activity, which has been used for long descriptions and further reading. There are, however, some content types in H5P which are not accessible, and colleagues were advised not to use these activities.

Overall, it was very interesting to see how an organisation like Kaplan approached the task of reviewing its practice in relation to accessibility and of upskilling colleagues in the key principles of accessible practice.

Session 3F/1: Social media accessibility tips

Presenter: Lizi Green, AbilityNet

Lizi Green must have been one of the most prolific presenters at our conference, stepping in at short notice to present for colleagues who were unable to attend, as well as presenting two papers herself. This was one of several appearances throughout the day, and in session 3F/1 Lizi provided an overview of best-practice accessibility tips for when you’re using social media.

Lizi’s presentation began with an overview of social media and the benefits it provides: people of all kinds use social media platforms, and by their very nature these are diverse spaces, which means we need to take an inclusive approach when designing content for our social media profiles if we wish to avoid excluding a broad range of internet users.

Lizi went on to discuss the use of the Microsoft Inclusive Design Toolkit and the importance of WCAG 2.1, both of which guide approaches at AbilityNet. I was particularly interested to hear some of the tips Lizi recommended for writing accessible content for social media, as many platforms are text-based. Best practice, then, includes building a respectful environment for everyone and, importantly, trying to avoid reinforcing existing stigma or using phrases that suggest victimhood, as well as being aware of content warnings and alarmist language.

When it comes to producing imagery for social media, it is important to ensure you are using contrasting colours, or perhaps even to consider indicating your point with a pattern, shape or another kind of icon which can be interpreted by a variety of users. I particularly enjoyed Lizi’s guidance around writing alt text for images, suggesting you should think about ‘how you would describe an image to someone who you are talking to on the phone’. Great advice! It’s also important to remember that different platforms provide different approaches to alt text. On Twitter you are unable to edit your alt text, but you can do so on Facebook and Instagram.
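As an aside of my own (this was not part of Lizi’s talk): the ‘contrasting colours’ advice can be checked objectively, because WCAG 2.1 defines a contrast ratio based on the relative luminance of the two colours, with 4.5:1 as the minimum for normal-size text. The Python sketch below is a minimal illustration of that calculation for sRGB hex colours; the example colour values are invented.

```python
def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.1 definition."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearise(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)


def contrast_ratio(foreground: str, background: str) -> float:
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


# Mid-grey text on a white background comes out at roughly 3.5:1, so it
# fails the 4.5:1 threshold WCAG 2.1 sets for normal-size text.
ratio = contrast_ratio("#888888", "#ffffff")
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} for normal text")
```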

Something I had not considered is the impact of emojis, which, as Lizi points out, should be used sparingly. I will admit that I had not considered that the alt text for each emoji is read out every time by a screen reader. So, if you add four ‘clapping hands’ emojis in between your words, a screen reader will announce ‘clapping hands’ four separate times!
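To make that concrete, here is a small illustration of my own (not from the talk): screen readers announce each emoji by its accessible name, and you can approximate that expansion in Python with the third-party emoji package (the exact names it produces depend on the package version).

```python
# pip install emoji  -- third-party package, used here only to illustrate
# how a screen reader expands every emoji into its spoken name.
import emoji

post = "Huge 👏 congratulations 👏 to 👏 the 👏 team"

# demojize() replaces each emoji with its textual name, much as a screen
# reader would read it aloud.
print(emoji.demojize(post, delimiters=(" ", " ")))
# Prints something like: "Huge  clapping_hands  congratulations  clapping_hands  to ..."
# Four emojis means "clapping hands" is announced four separate times.
```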

 

Session 3F/2: Factors impacting accurately captioned video content

Presenter: Michael Shaw, University of Lincoln

Dr. Michael Shaw from the University of Lincoln returned in the afternoon to give his second presentation of the day. This session was titled ‘Factors impacting accurately captioned video content’ and was, in my opinion, one of the most interesting presentations of the whole conference. The topic in question, accurately captioned video content, has been much discussed across the higher education sector, but Dr. Shaw approached it in a new way, thinking not only about the practical implications of inaccurate captions, but also about the technical factors which cause captions to be inaccurately transcribed in the first place.

Dr. Shaw opened his presentation with a witty segment on mis-transcribed captions, using images and icons to hint at what his words might have been, and in doing so reminding the audience that the English language comes with its own built-in accessibility constraints: many words in our language sound similar to each other, or are spelt the same but have very different meanings. After this focus on homonyms and homophones, which resulted in a peal of laughter from the audience, the presentation moved on to discuss the ways in which we can adjust our practice as academics to better aid accurate captioning of our live lectures and recorded video content.

At the University of Lincoln, where Panopto is the video hosting tool, Dr. Shaw compared the automatic captioning accuracy of a number of platforms against Panopto to understand how well each performed when generating an auto-captioned transcript. On the whole, across the different platforms, there was an 80%+ accuracy rate. This figure surprised me, given the general disdain for auto-captions across the board. As Dr. Shaw points out, it is often a few dreadful or silly mis-transcribed words which lead us to think that all auto-transcription services are of poor quality. From this presentation alone, I learnt that this is not always the case.
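The presentation did not say exactly how ‘accuracy’ was measured, but a common way to quantify caption quality is word error rate: the word-level edit distance between a reference transcript and the auto-generated captions, divided by the length of the reference. The sketch below is a generic Python illustration of that idea, not Dr. Shaw’s method, and the sample sentences are invented.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,                 # deletion
                dp[i][j - 1] + 1,                 # insertion
                dp[i - 1][j - 1] + substitution,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


reference = "the lecture covers homonyms and homophones"
hypothesis = "the lecture covers hormones and home phones"
accuracy = 1 - word_error_rate(reference, hypothesis)
print(f"Caption accuracy: {accuracy:.0%}")  # about 50% for this mangled example
```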

Dr. Shaw spoke at length about some of the factors which might affect the accuracy of auto-captioning. Accent, it turned out, did not have the great impact one might expect in this study. Neither did the choice of platform, with the three main providers tested, Panopto, Microsoft Stream and YouTube, coming out at 84%, 96% and 98% accuracy respectively. It follows that Google’s YouTube captions would be the most accurate, given the many thousands of hours of video the platform has at its disposal: it can learn much quicker than any other provider. But it was positive to see that Microsoft Stream is not far behind at 96% accuracy.

Many academics believe that technical terminology is what causes the problems with inaccurate captions, that the tools simply do not understand our specialist academic vocabulary. Well, in this research study, it appears not: there were no distinct trends in the data to show that technical terminology was being inaccurately transcribed. And, surprisingly, there appeared to be little benefit to paid services where someone transcribes your video for you. These tended to be costly alternatives, charged by video length, showing no overall benefit over auto-transcribed captions.

In the second half of his presentation, Dr. Shaw focused on the acoustic set-up and audio quality of a recording. Here, it seems, lies the root cause of bad transcriptions: reverb (produced when you stand further away from the microphone in a room with poor acoustics) has a far greater impact on the quality of your audio, and therefore on the quality of your auto-generated captions. The same is true of the type of microphone and of distance: the further you move away from the microphone, the worse your auto-generated captions will be. At 3m from the microphone, accuracy drops below 80%.

Overall, this presentation was an insightful deep-dive into the various nuances of an accessibility topic which causes much frustration amongst the academic community. Dr. Shaw sought to dispel some of the myths about auto-captions, backed up by data captured during the project, and, most interestingly, attempted to establish the root cause of poor auto-captions by examining the theory behind audio, reverb and the design of our teaching spaces.

Posted in Accessibility, Conferences