AI experimentation and adoption have been among the hottest business topics of 2023. We’re seeing many of the same themes come up: the dangers of AI hallucinations and data leaks on public models, the ethical implications, and the ways it could improve productivity and creativity in the workplace.
However, something that has received far less attention is the development of AI for accessibility, and its potential to create a work environment that caters to diverse needs.
This is something that Paula Goldman, Salesforce’s chief ethical and humane use officer, is thinking a great deal about.
As the first person to hold this position, she’s had the opportunity to shape the role. In addition to taking part in regulatory and governmental talks around AI, and ensuring an ethical, collaborative approach across departments at Salesforce, Goldman has made accessibility a key consideration.
“One of the things I find surprising and that I’m excited about is how AI might be able to turbocharge accessibility,” Goldman said in an interview with SmartCompany.
“We’re working on a number of generative AI models in-house — one of them is a code generation model. We’ve been working with that team and trying to think about how can we train that model for patterns and anti-patterns of accessibility.”
The idea is to utilise AI to create better products that make jobs easier. But it’s also about AI learning to recognise when a product or system it is being used on could be better optimised for accessibility.
“How do we use AI to empower people to do what they’re good at?” Goldman said.
“I can almost imagine the model giving a nudge to engineers saying ‘Oh I caught something, you might want to look at that.’”
“There’s a huge amount of potential here that I just think is incredibly exciting.”
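To make that kind of “nudge” concrete, a minimal sketch is shown below of an automated check that flags a classic accessibility anti-pattern: images without alt text, which screen readers cannot describe. This is purely illustrative and is not Salesforce’s model; the class and message wording are hypothetical.

```python
# Minimal sketch of an accessibility "nudge": flag <img> tags without alt text,
# a common anti-pattern that leaves screen-reader users without a description.
# Illustrative only; not Salesforce's code-generation model.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img" and "alt" not in dict(attrs):
            line, _ = self.getpos()
            self.warnings.append(
                f"line {line}: <img> has no alt text; you might want to look at that"
            )

checker = MissingAltChecker()
checker.feed('<div><img src="chart.png"><img src="logo.png" alt="Company logo"></div>')
for warning in checker.warnings:
    print(warning)
```

A generative model could surface the same kind of finding as a suggestion while code is being written, rather than as an after-the-fact lint report.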
There are plenty of ways AI can be useful when it comes to workplace accessibility, and a lot of them are quite familiar.
The likes of real-time transcription, screen readers, augmented reality, object recognition and braille conversion have been floated, and in some cases have been available for a while now.
But according to Salesforce, AI can help improve what’s already there.
“When it comes to making sure that AI is accessible, the questions aren’t new — the methodologies sometimes have to be adapted,” Goldman said.
“How can you make sure a screen reader can actually read it for people with low vision — the contrast between the colours of the different elements in images, for example.”
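The contrast check Goldman mentions is one of the more automatable parts of this. A minimal sketch is below, assuming the WCAG 2.1 definitions of relative luminance and contrast ratio (level AA requires at least 4.5:1 for normal-sized text); it is illustrative only and not a Salesforce tool.

```python
# Minimal sketch of a colour-contrast check using the WCAG 2.1 formulas.
# Illustrative only; thresholds: 4.5:1 for normal text under level AA.
def relative_luminance(rgb):
    def linearise(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    lighter = max(relative_luminance(foreground), relative_luminance(background))
    darker = min(relative_luminance(foreground), relative_luminance(background))
    return (lighter + 0.05) / (darker + 0.05)

# Light grey text on white fails AA; dark grey passes.
print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))  # ~2.32, fails 4.5:1
print(round(contrast_ratio((85, 85, 85), (255, 255, 255)), 2))     # ~7.46, passes
```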
Goldman says that multimodal models like this — where an AI can understand or generate more than one type of data — aren’t a big challenge in the grand scheme of things.
She says what is interesting is the way it could be used to influence the rest of the technology being developed.
“Some of the best innovations have come from designing for marginalised populations,” Goldman said.
Salesforce is very bullish on educating its own customers on how to use its tech, and that’s certainly been evident in San Francisco this week. The event schedule is packed with info sessions on implementing the generative AI platform and technology the company has just announced.
And apparently, that will extend to implementing accessible AI in the future.
“It’s in the works. I would say ‘watch this space,’” Goldman said.
“It’s early days and more people should be talking about it. It’s really important but I just think there’s a huge amount of opportunity.”
The author travelled to San Francisco as a guest of Salesforce.