Lessons on more inclusive conferences
John Allsopp
ID24 2022
This is my script for my recent presentation at ID24, the long-running conference focussed on accessibility and inclusive design.
I spoke on some of the lessons we’ve learned over the last 3 years or so developing our online conferences, and how these might also help make our in-person conferences more accessible and inclusive.
You can watch the presentation, or read on for the script.
Today I’ll share some of the lessons we’ve learned about designing more accessible and inclusive conferences.
Do as I say…
I guess there’s a kind of irony that a lot of what I’ll talk about today is how live online conference presentations have a negative impact on accessibility and inclusion, and how we can address that by pre-recording presentations. And yet here I am presenting live.
Now, I must admit that when we first planned to move our conferences online in the wake of Covid, perhaps the single most daunting decision (though ultimately, as I’ll talk about extensively, the best one we made) was whether we’d have presenters present live, or pre-record their presentations.
Now, as I sit here quite stressed about whether the network will stay up, whether I’ll make a mistake, and myriad other factors that could go wrong, when I really should be focussing on the delivery, perhaps this live presentation will become the best advertisement for the pre-recorded presentations I advocate.
I will, however, record a version, using the technologies and techniques I mention throughout, to give a side-by-side comparison and let you be the judge.
I’ve also tried to minimise the visual aspects of this presentation, for reasons I hope will make sense as we go along.
A word or two about inclusion
A lot of the focus at ID24 this year, as in every year, is accessibility. And today I’ll be talking quite a bit about that–but I also want to address some issues around inclusion, and what role design and intentional decision making can play in making events more inclusive–this includes things like the languages people comfortably speak or listen in, socio-economic status, and where in the world someone lives.
About me, and Web Directions
Very briefly, mostly for context–my name is John Allsopp. I’m a middle-aged man of European origin and my pronouns are he/him.
Since 2004 I’ve been involved in organising conferences for web designers, developers, and all sorts of adjacent areas of professional practice.
I’ve also spoken dozens if not hundreds of times at events around the world, in person and online.
Our programs have always had a considerable focus on assistive technologies and ways of designing and developing with accessibility in mind…
And we’d always tried to make our events as accessible as we believed possible.
The pandemic, as with many things, made us rethink that.
A return to 2019
Let’s start by going back to pre-Covid times (if that’s possible), and thinking about what most people would have considered the essential, indeed sufficient, features of an accessible conference.
We might have expected
Hearing loops (available in some larger venues, can be hired)
Wheelchair access (depending on where the conference was held, this could be a legal requirement, as it is in Australia, though venues are often still far from wheelchair accessible despite this requirement)
Sign language interpreters (something you might see occasionally)
Real time captioning (very rare)
And none of these really address a fundamental but overlooked challenge to accessibility that pretty much every presentation has–presentations are almost always strongly driven by _visuals_–text on slides, graphs, charts, animated gifs, memes, all things that are inaccessible to people with visual disabilities as this graph shows.
This slide intentionally left blank
Right now the screen actually reads “this slide intentionally left blank”. The “joke” is of course that there is no graph–but if you were a person with a visual disability and you suddenly heard a room full of people laugh at a visual joke like this (and think of how many presentations use memes and animated gifs), or once again heard a speaker say ‘as this graph shows’, you’d hopefully start to get a sense of how much we as presenters take for granted that our audience can see our slides.
And we haven’t even started to consider disabilities outside hearing and sight–events, like almost all spaces outside the home, expect neurodistinct people to enter environments that can be distressing and overwhelming, with no accommodations that might reduce this distress, or aid in the ability to focus on and absorb the material.
Now, today for the most part I’m going to focus on online conferences, but these considerations also impact accessibility and inclusion for in-person presentations and events–though some of the accommodations I’ll be talking about are more challenging or perhaps even impossible to adopt in that context.
When Covid struck
As I mentioned, since 2004 I’ve been running events for web and digital folks–designers, engineers, and product managers.
When Covid struck…
We knew we needed to move online
We didn’t want our online events simply to be placeholders until things went “back to normal”
We knew online conferences didn’t have to, indeed *shouldn’t* look like “traditional conferences, just online”
We wanted to see this very real and significant challenge as also an opportunity–to do conferences better
And a key aspect of this was seeing how online-first events and presentations could focus more on accessibility and inclusion.
Today I’d like to talk about some of the things we learned–which as I mentioned don’t just apply online, and which don’t just apply to conferences.
I then want to zoom (sorry, too soon?) in on a couple of specific things we did, some technology we built, to make our online conferences more accessible for Deaf people, and people with visual disabilities–but which also have benefits for all our users.
Conferences in 2019
OK, it’s now 2019 again. What do conferences look like? They probably are
2 or more days, back-to-back
These days are long (often 8 hours or more)
Presentations delivered live from speakers on stage
Now, a small irony here is that the very first presentation at a conference I helped organise was a pre-recorded presentation from Jeffrey Zeldman–I don’t think we’ve had one since.
Anyway.
Then Covid made in-person conferences impossible–for how long, we simply had no idea.
We, and many conference organisers, did the only thing available other than giving up altogether (which many, many did)–we started planning to move our events online.
Just like [whatever] only online
Now, over the last 30 years or so we’ve seen so many things from the physical world move online. And there’s a very common pattern when that happens. The most immediate example is how web design initially, and for quite some years really, largely mimicked print design.
Perhaps a more pertinent example here is how shopping moved online.
Online shopping is of course commonplace now, and might look like ebay, or Shein, or a more traditional shopping site, but the earliest incarnations were literally malls–only online.
Now lost in the mists of time is World Avenue, a huge play by IBM in the mid 1990s to essentially create a digital simulacrum of a mall. They spent more than a hundred million dollars on it. Back when that was real money.
How did this effort go? Well, so monumental a failure was this colossal, hugely hyped effort that a search will yield perhaps a handful of references to World Avenue anywhere online, pretty much all of which are reports from when the service was shut down. There are no screenshots–it’s simply vanished.
This is a commonplace of moving activities from the real world online–we start by recreating as closely as possible the ‘real world’ experience “only online” (something which BTW most ‘metaverse’ implementations do–food for thought).
Online conferences post-Covid
And we saw this with many online conferences post-Covid.
They were
2 or more days back to back
Long (8 hours or more) days
Live presentations from speakers onscreen
What might a conference look like if it were online-first?
We wanted to avoid the World Avenue approach to online events.
We wanted to rethink all of these factors and ask “What might, what could, a conference look like if it were online-first?”
The format
Let’s take a look at something as straightforward as the format.
Long, back-to-back days
Long, back-to-back days are really limiting factors in so many ways–for audiences and organisers.
They limit attendance and so access (with an 8-hour day, you can really only target one time zone effectively–folks elsewhere need to get up early, or stay up late, to attend. And allocating long stretches of time, over multiple days, is hard and exhausting, especially when so much of work and life had moved onscreen).
So what did we do about that?
Replaced by short sessions
We had shorter sessions (originally around 4 hours, though they ended up shorter still), originally across 4 successive Fridays, and later 2 successive Fridays.
We were asking much less in terms of time and attention from our audience, for a similar amount of content.
Pre-recorded, not live presentations
We also, importantly, decided to pre-record all presentations. As I mentioned at the top, to be honest this, of all the decisions we made at the time, felt like the riskiest–would attendees feel ripped off watching presentations that weren’t live?
Interestingly, only a few days ago Apple held their first in-person event in 3 years, announcing new iPhones and so on.
While media and others assembled in the Steve Jobs Theatre, the entire event was pre-recorded! Asking media to fly across the country or the world to attend a pre-recorded presentation, even one announcing new iPhones, is something that would have been unimaginable 3 years ago.
Noted Apple watcher John ‘Daring Fireball’ Gruber observed
These pre-filmed product introductions move faster — the transitions between scenes happen at the speed of energetic cinema…
This allows Apple to cover the same amount of information in less time…
[it] allows Apple to use far more employees to make the presentation.
It’s also a win for would-be presenters who find speaking in front of a live audience too stressful.
More accessible and inclusive
Ultimately this decision to pre-record was the single best decision we made, not least because of the implications for access and inclusion.
For speakers
Let’s start with speakers.
Anxiety
Language
As John Gruber observed, and many of us will know first-hand, folks often find the idea of speaking in front of an audience very anxiety-inducing. Pre-recording may make it more accessible for such folks to speak.
Speakers whose first language isn’t that of an event, even when proficient in that language, may find the idea of speaking publicly in a language other than their first challenging. Again, pre-recording may help address that.
For speakers
Travel & visas
Trust me, conferences have limited travel budgets, so the opportunities for speakers to travel to an event to speak are in short supply, and tend to go to more established speakers. This restricts the opportunity for new and more marginalised voices to be heard.
Speakers from many parts of the world may also be impacted by visa requirements–these may take months to receive, be arbitrarily withheld, and may require being without a passport for an extended period while they are processed. And they are not inexpensive–at each in-person event we pay thousands of dollars for visas for speakers to visit Australia.
Including speakers
All these factors mean that certain voices and points of view are far less likely to be heard–non-English-speaking presenters, people from the global south, people with anxiety disorders, and other neurodistinct people.
When we moved to an online, pre-recorded model, and provided equipment (or otherwise made it easy to record, for example by arranging a videographer), the opportunity for speakers whose first language wasn’t English, and who lived in countries speakers have traditionally been far less likely to come from, increased enormously.
We have had numerous speakers whose first language isn’t English, and from right across the world–Nigeria, Kenya, Brazil, Argentina, Indonesia, Malaysia, the People’s Republic of China, and Palestine among many others we’d never had speakers from previously.
For attendees
Visas
Time
For attendees, issues of access can similarly include cost (of tickets, and of travel), visa challenges, and language barriers. But even when we move online and address some of these, there are subtler challenges like time zones–an 8-hour day will fit neatly into essentially one time zone. Those outside the time zone where the conference is being held might find themselves having to get up very early, or stay up late, to attend–a challenge for folks with families or other carer responsibilities, among others.
But, by having shorter, pre-recorded sessions, we found we could repeat each session multiple times a day–ultimately 3 times across a 24 hour period. This meant more or less wherever someone was in the world, they could attend during their work hours (or of course if they were early risers or late night types they could attend then).
So, by rethinking the basic conference format, there can be tremendous wins in terms of inclusion across numerous factors.
Equity
One last thing on this, before I turn to my main focus for today, is the issue of equity.
WEIRD
Western, educated, industrialized, rich and democratic
Developers in Malaysia earn on average less than 20% of what a developer in a WEIRD (Western, educated, industrialized, rich and democratic) country earns. The same goes for developers in many countries around the world.
We wanted to ensure equitable access to our events, and so did quite a bit of research into relative developer salaries around the world.
We found there were more or less three tiers.
Tier 1
Tier 2: 50-60% of tier 1 income
Tier 3: 20% of tier 1 income
A top tier of markets like North America, Western Europe and Australia
A middle tier of about half that, and then a third tier of about 1/6th of the top tier.
So we priced accordingly–using geolocation to price in local currencies adjusted to the relative developer earning power.
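To make the mechanics concrete, here is a minimal sketch of how geolocation-based, tiered pricing can work. To be clear, the country-to-tier mapping and the numbers below are purely illustrative, not our actual pricing table:

// Illustrative only: tier multipliers and country assignments are hypothetical
const TIER_MULTIPLIERS = { 1: 1.0, 2: 0.55, 3: 0.2 };
const COUNTRY_TIER = { US: 1, AU: 1, DE: 1, BR: 2, PL: 2, MY: 3, NG: 3 };

function ticketPrice(basePrice, countryCode) {
  // Unknown countries default to tier 1
  const tier = COUNTRY_TIER[countryCode] ?? 1;
  return basePrice * TIER_MULTIPLIERS[tier];
}

ticketPrice(300, 'MY'); // 60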
The impact of all this? We certainly ended up with attendees from around the world–still weighted toward attendees from the global north, but that’s likely as much about our ability to market globally as anything. But it was definitely gratifying to see folks from all over the world at these events.
Accessibility
But what I really want to focus on today is accessibility. While we’re seeing a return to in-person events, and I feel that we’ll see fewer and fewer online conferences that look and feel like conferences in the future, the job these conferences have been doing for their audiences will continue.
We’ll continue to see presentations online, as indeed we have for years–often recordings of in-person presentations thrown up on YouTube after a conference ends.
Hopefully some of what we’ve learned and implemented with our online events since 2020 might help these presentations become more accessible than they currently typically are.
So let’s start there, with the accessibility, or otherwise, of online presentations.
Deaf and hard of hearing attendees
Increasingly, at least for deaf and hard of hearing audiences, we see accommodations being made, even for live presentations like today’s.
A great many presentations end up on YouTube, and auto-captioning has been a thing there for some time. It’s quick, and free.
And it is certainly better than nothing. You’re experiencing it right now.
But particularly with specialist topics, and above all for presentations focussed on software development, where code is used and referred to a lot, automatic, or even non-expert human-created, captioning comes up very short. Terms of art and code examples are often bafflingly mistranscribed.
For speakers with even a moderate accent (including possibly folks like Aussies), or people who speak quickly, auto-captioning is often not particularly accurate.
This is where a pre-recorded approach brought additional benefits in terms of accessibility.
Pre-recording gives us time to carefully edit transcripts and captions, to ensure, for example, programming language features, terms of art and less commonly used terms are properly transcribed.
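As a concrete, and hypothetical, example of the kind of fix this makes possible (assuming captions stored in a format like WebVTT, which is an assumption on my part rather than something specific to our tooling): an auto-captioner might render a spoken attribute name as “aria live pull light”, which a careful edit can restore to the actual code before publishing, something like:

WEBVTT

00:12:04.000 --> 00:12:08.000
You can set aria-live="polite" on the container element,

00:12:08.000 --> 00:12:12.500
and screen readers will then announce changes to that region.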
Captions and neurodiversity
While we think of captions as being an accommodation for the Deaf and hard of hearing, folks with ADHD and on the autism spectrum talk about the benefits of captioning for being able to focus (possibly with the sound turned down to minimise distraction).
And we’re not all in quiet calm environments, with noise cancelling headphones.
So captions are an accommodation that benefits not simply Deaf and hard of hearing viewers, or even just people with disabilities.
Live Transcripts
But we went one step further than simply captioning. We also created what we call a live transcript.
Onscreen right now is a screen recording of our live transcript feature–a block of text shown with the presentation video. What the speaker is saying at that time is highlighted, but several lines before and after the current phrase, are also visible.
A caption is what is being said right at that moment, and once displayed it is no longer available. Our transcripts provide context–they allow the viewer to glance back and understand the context of what they are listening to now, with what is being said right now highlighted and easy to return to.
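For the curious, here is a simplified sketch of how a transcript like this can be kept in sync with the video. The element names, attributes and cue format here are illustrative assumptions, not our production code:

<video id="talk" src="presentation.mp4" controls></video>
<ol id="transcript">
  <li data-start="0" data-end="4.5">Today I'll share some of the lessons we've learned…</li>
  <li data-start="4.5" data-end="9.0">Let's start by going back to 2019…</li>
</ol>
<script>
  const video = document.getElementById('talk');
  const lines = document.querySelectorAll('#transcript li');

  video.addEventListener('timeupdate', () => {
    const t = video.currentTime;
    lines.forEach((line) => {
      const isCurrent =
        t >= Number(line.dataset.start) && t < Number(line.dataset.end);
      // Scroll the newly current phrase into view; earlier and later lines
      // stay visible so the viewer can glance back for context.
      if (isCurrent && !line.classList.contains('current')) {
        line.scrollIntoView({ block: 'center', behavior: 'smooth' });
      }
      line.classList.toggle('current', isCurrent);
    });
  });
</script>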
Remember, our sessions last for 3 or more hours, and concentrating consistently for such long periods (we do provide some downtime during the day) is challenging for anyone, and perhaps particularly so for some neurodistinct viewers.
We’ve had viewers praise these transcripts for how they allow them to focus better, and not lose the thread during presentations.
Visuals in presentations are deeply inaccessible
But, as I also alluded to earlier, we rarely talk about the very visual nature of presentations and how inaccessible these can be for folks with visual disability.
How much of the information, context, even the subtlety and humour of a presentation is on a slide? And slides are completely inaccessible–they have no Accessibility Object Model–they are just bits on a screen. Screen readers and other assistive devices can’t read them, let alone describe the meme GIF used, or the graph “that you can see here”.
So we decided to try and do something about that.
Audio Descriptions?
We originally considered audio-descriptions, where a narrator describes what’s on screen, as a solution to this.
AD is commonly used for film and TV, and even live theatre. We spoke with AD experts about the feasibility of using Audio Descriptions for presentations, but the conclusion we came to with them was this would likely be very distracting (in essence, the audio description runs as a second audio stream alongside the main stream) and technically very challenging.
So we approached this in a different way.
Accessible Slides
We take every slide from every presentation (another reason why pre-recording is really valuable–doing this any other way is not just impracticable, it’s impossible).
We mark each slide up semantically in HTML (headings, lists, links and so on), and we provide alternate descriptions for purely visual elements like graphs, images and other visual components.
We timestamp these for when each slide appears in the presentation.
There is an option at our conference site for these to be displayed during the presentation, synchronised to it.
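As an illustration (the exact structure and attribute names are assumptions, not our production markup), the timestamped, semantic slides might look something like this, with each slide timestamped so a little script can move it into view when its moment arrives:

<section id="accessible-slides">
  <div class="slide" data-time="312">
    <h2>Captions and neurodiversity</h2>
    <ul>
      <li>Benefits for Deaf and hard of hearing viewers</li>
      <li>Benefits for focus, noisy environments, and second-language viewers</li>
    </ul>
  </div>
  <div class="slide" data-time="498">
    <h2>This slide intentionally left blank</h2>
    <p>Alternate description: there is no graph; the slide is blank, as a visual joke.</p>
  </div>
</section>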
On the screen right now is an example from a recent conference. On the left is the video of the presentation. To its right are the accessible slides. As a slide appears in the presentation, the HTML version moves into view and is highlighted.
ARIA Live Regions
But how does this help people with visual disability? They are, after all, still text on a screen.
Well, the slides are presented in an ARIA Live region. This means that screen readers which support live regions will recognise and read out slides as they appear on screen.
We talked about not opting for audio descriptions, as we felt they would be distracting.
So what’s the difference here?
With AD, the only control a user has is whether they are on or off. There’s no speed control, no skipping, no choice of voice, or volume.
With our ARIA based approach, the user of a screen reader is in control of all these aspects of how, and whether, slides are read to them, using their assistive technology of choice.
Not only that–all the text of a presentation, including code examples, is now text on screen that can be copied and pasted.
Links can now be activated
Each inaccessible black box slide of a presentation is now essentially a web page.
We often talk about how accessibility accommodations aren’t solely beneficial for people with disability, but this is a very powerful example.
And it doesn’t stop with the live presentations.
When a presentation is available on demand after a conference, the slides now serve as ways into the content–you can read through the slides then jump straight into the starting point of that slide.
And similarly with transcripts.
There’s no need to watch an entire presentation–read through and jump straight to the points of interest.
And the entire text of the presentation is accessible to search engines.
The technical details
So, how’s this done?
For the developers in the audience I’d like to take just a short while to look at how it is done under the hood.
I mentioned live regions. These are an ARIA feature that
“are perceivable regions of a web page that are typically updated as a result of an external event when user focus may be elsewhere”. “Since these asynchronous areas are expected to update outside the user’s area of focus, assistive technologies such as screen readers have either been unaware of their existence or unable to process them for the user.”
So can pop-ups...
Onscreen we have the markup for our accessible slides. There’s a div with two attributes–aria-live="polite" and aria-atomic="true". Inside is a heading of level 3 with the text ‘So can popups’, and then we close the div.
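Rendered as actual markup, that slide looks like this:

<div aria-live="polite" aria-atomic="true">
  <h3>So can popups</h3>
</div>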
That’s it. All you need. We have a little JavaScript that updates the content of this div with the content of the slide each time that slide appears on screen. And because we’ve let assistive devices know this is a live region, screen readers will read out the new text inside the element.
We have this separate element that we update the contents of with the text of the current slide, because live regions are only read out when the content of the region changes.
We add the aria attribute aria-live to make the element a live region, and the value ‘polite’ to tell assistive devices how assertive or otherwise it should be in reading out the content when that changes (there are three levels–off, polite and assertive).
Our second ARIA attribute, aria-atomic, specifies that the whole of the element should be read out when the content changes, not just the changed content–this is because each slide is entirely different.
With live regions, screen readers can read out changes to the region–so in our case, the text of a slide when it is updated. All under the control of the user, who may choose to turn them off, control the speed and voice with which they are read, or skip them–whatever options their technology provides.
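And here is a rough sketch of that “little JavaScript”. The slide data format, element selection and timing logic below are illustrative assumptions rather than our exact production code, but the live region mechanics are the same:

// Timestamped slide content: one entry per slide, times in seconds (illustrative)
const slides = [
  { time: 0, html: '<h3>The technical details</h3>' },
  { time: 42, html: '<h3>So can popups</h3>' },
];

const liveRegion = document.querySelector('[aria-live="polite"]');
const video = document.getElementById('talk');
let currentIndex = -1;

video.addEventListener('timeupdate', () => {
  // Find the most recent slide whose timestamp has passed
  let index = -1;
  for (let i = 0; i < slides.length; i++) {
    if (video.currentTime >= slides[i].time) index = i;
  }
  if (index !== -1 && index !== currentIndex) {
    currentIndex = index;
    // Replacing the content of the polite, atomic live region prompts
    // screen readers to announce the whole new slide, on the user's terms
    liveRegion.innerHTML = slides[index].html;
  }
});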
Here it is in action–with VoiceOver on macOS reading the live region. You’ll see in the utility window the text of the live region, and how, as the slide below updates, VoiceOver starts reading this.
My instinct, not as a user of screen reader technology, is that the value might be greater for on-demand videos, where the user might pause the video, have the slide read out, then resume the video, than for watching simultaneously with the presentation. But importantly, the user is in control of their experience.
Real world use
Btw, this system is one we’ve used extensively for our own events since 2020. It has also been used by OZEWAI, the long-running Australian Web Adaptability Initiative, for their inaugural online conference, and by AITCAP, the Accessible & Inclusive Tourism Conference, for their two online conferences to date.
While the technology is not yet open sourced, it’s our intention to open source it–and should you be interested in using it, please get in touch. We’re very glad to help make that happen!
If you’d like to see the technology in action, please visit our streaming service, Conffab, which features coming up on 1,000 presentations from past conferences of ours and other organisers. A free account will get you access to anything over 2 years old, and many of those presentations (as well as all of our newer ones) feature the technologies I’ve been talking about today.
Bonus thoughts
As we have a little more time, here are some bonus, somewhat random thoughts I’ve come to after attending, hosting, and speaking at conferences for many years now.
Use a script
I’m increasingly of the opinion that scripted presentations, particularly online ones, are much preferable to extemporaneous ones, even for presentations that are highly structured. I’ve edited the transcripts of hundreds of presentations and, to be honest, almost every unscripted presentation is essentially gibberish–full of discursions, repetitions, and half-finished sentences. We tend not to notice these when someone is on stage, but they are far more noticeable in a screen-based presentation. They’re also a barrier to inclusion for folks whose first language isn’t that of the presentation, and potentially for neurodistinct people who might find a discursive presentation that goes off on tangents difficult to keep track of.
I’m not saying that you necessarily read every word, but there’s a reason why very little television, even live television is impromptu.
Don’t live code
The ID24 notes to speakers discouraged live coding. This is something I strongly agree with–not least from the perspective of accessibility.
Again, having edited the transcripts of many live-coded sessions, and sat through many more, they are almost invariably disjointed and halting, with the speaker focussed on the code and whatever inevitably seems to go wrong, and the audience focussed on what might have gone wrong.
Onscreen code examples are typically tiny and unreadable–and yet they also rely on people being able to see what the presenter is coding. Frankly, I would ban live coding (don’t @ me).
Great presentations, I think, are about the what and why–there’s little time in most presentations for the how–leave that for workshops and tutorials.
Be sparing with screen recordings
Once again I’ve broken my own rule here–I used screen recordings to show some of our accessibility features in action.
Adding alternative or descriptive text for images, charts, graphs and the like, while time-consuming, is still beneficial. I’ve found adding alternative text for screen recordings to be very challenging, and speakers rarely describe in detail what they are demonstrating with a screen recording–so at the very least, when using a screen recording, describe what is happening for those who cannot see it.
Get in touch & thanks!
@johnallsopp on twitter and so forth
experience the technology [ conffab.com ]
If you’re keen to talk more, agree or disagree, or might be interested in using the technology we’ve developed (it is built with vanilla JavaScript, CSS and HTML, with no dependencies on frameworks or libraries), please get in touch.
I’m @johnallsopp on twitter and elsewhere, and [email protected] if you’d like to email
Thanks to the ID24 crew for putting together this amazing event–I’ve run many events, in person and online, and I can’t begin to imagine even attempting an event like this, let alone pulling it off year after year!
Enjoy the rest of ID24!