If you had to name a technology that dominated 2023, generative artificial intelligence is it. Since OpenAI unleashed ChatGPT on the world last year, the technology has dominated the headlines and captured the public's attention, demonstrating that there is still some wonder left in the tech sector.
It has also galvanised rivalries. Google was caught on the back foot when ChatGPT launched, its own Bard chatbot still being tweaked and tested in the name of responsible AI. Microsoft threw money at the problem and invested in OpenAI. And Amazon Web Services (AWS) has been chipping away, developing AI-powered tools that should make the transition to the new technology easier for customers.
If there was any doubt that the company is getting serious about artificial intelligence, its annual re:Invent conference should have dispelled it. OpenAI boss Sam Altman was still settling back in as head of that company when AWS was warming up for its gathering in Las Vegas. And this year's topic was, predictably, generative AI and how AWS could bolster confidence in the technology for business users.
The decision to indemnify customers against potential legal action over copyrighted materials was a good start. Stock photography company Getty Images has already taken legal action against Stability AI, alleging it scraped its website for images without permission.
AWS chief executive Adam Selipsky unveiled a host of new features for customers that put generative AI at their heart: the work-focused generative AI assistant Amazon Q, new Trainium chips for training AI models, CodeWhisperer, which suggests and generates code for users, and a new image generator.
The re:Invent conference was about showing off what the company could do while also differentiating itself from its rivals. During his keynote speech, Selipsky made repeated veiled references to rival Microsoft and its OpenAI deal, noting at one point that AWS was not beholden to a single provider, and at another taking aim at the company's much publicised – albeit temporary – decision to restrict employee access to ChatGPT. AWS brought partner after partner on stage to talk about their co-operation, making it clear that it was open to working with anyone who shared its vision of the AI future.
If AWS had one message, it was that generative AI is not going away any time soon. In fact, it is only just getting started. And the company wants to repeat in the generative AI business the growth it has seen with its cloud services.
Amazon Q fills a gap that earlier, more consumer-focused generative AI products have not yet covered. Targeted at businesses, the new assistant is designed to be trained on data specific to each company while also protecting the privacy of that data – a major concern for companies grappling with the technology's rapid advances over the past year.
“What I found really interesting as we talk to organisations is that many of them have the same issue: we don’t have enough developers. If you look at the statistics, often the developer can be spending as little as five hours a week developing because they’re waiting on other people or procurement,” said Phil de Brun, AWS enterprise strategist.
“I think the cloud has changed some of that, but Amazon CodeWhisperer has helped with productivity. Amazon Q is layering on top of that. From a development point of view it is really empowering. It’s enabling developers to make better decisions in real time.”
But amid the excitement, there was an acknowledgment that the new world of generative AI is not an easy one for companies to navigate. While rivals have been adding AI capabilities to products at a rapid rate, AWS has focused on making it easier, and more secure, for companies to integrate the technology into their workflows in a way that suits the individual business.
That was evident in the updates to AWS's Bedrock. The company's tentative dip into AI earlier this year was met with mixed results, but AWS said the service, which makes large language models and customisable foundation models (FMs) pretrained on data such as text and images available through a single interface, now has 10,000 customers and growing. Its strength is in its diversity: rather than locking customers in with one provider, Bedrock offers access to technology from a range of AI companies, including AI21, Anthropic, Cohere, Meta and Stability AI, all through a single route.
At re:Invent, AWS went a step further. Bedrock can now be trained on proprietary data from company databases, and it can execute multistep tasks. In theory, that means chatbots will be more helpful to customers, learning from in-house data that companies may have been reluctant to allow out of their control.
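For readers curious what that "single route" looks like in practice, here is a minimal sketch of calling a Bedrock-hosted model through the AWS SDK for Python (boto3); the region, model ID and prompt are illustrative assumptions rather than details from AWS's announcements.

```python
import json
import boto3

# Bedrock exposes models from several providers behind one runtime API,
# so switching providers is largely a matter of changing the model ID.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

# Request body in the format Anthropic's Claude v2 expects on Bedrock; the prompt is illustrative.
body = json.dumps({
    "prompt": "\n\nHuman: Summarise our returns policy for a customer.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read())["completion"])
```

Swapping in a model from another provider on Bedrock would mean changing the model ID and the request-body format, while the surrounding code stays the same.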
With many companies hesitant to implement generative AI for fear of it going wrong and straying into territory that could damage the business, AWS added Guardrails: safeguards that companies can tailor to their individual responsible AI policies, giving them more control. Companies can create their own lists of content filters and denied topics to keep interactions between users and applications safe, reducing the chance that they end up on the wrong side of the news cycle for an AI blunder.
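As a rough illustration of what those guardrails amount to in code, the sketch below uses boto3's Bedrock control-plane client to define a guardrail with one denied topic and one content filter; the guardrail name, topic definition and filter strengths are assumptions made for the example, not settings AWS has published.

```python
import boto3

# Control-plane client for Bedrock (distinct from the runtime client used for inference).
bedrock = boto3.client("bedrock", region_name="us-east-1")  # region is an assumption

# Create a guardrail that denies a hypothetical topic and filters hateful content.
guardrail = bedrock.create_guardrail(
    name="customer-chatbot-guardrail",  # illustrative name
    description="Keeps the support chatbot on-topic.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",  # hypothetical denied topic
                "definition": "Recommendations about buying or selling financial products.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't help with that topic.",
)
print(guardrail["guardrailId"])
```

The returned guardrail can then be referenced when invoking a model, so off-limits prompts and responses are blocked before they reach the user.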
All this new technology will require training, however. AWS's Maureen Lonergan has spearheaded the company's training and certification division for more than a decade. That includes responsibility for its GetIT programme, aimed at encouraging young people – particularly women – into Stem careers, the growing Cloud Institute that is training millions in cloud technology and, more recently, its commitment to train two million people globally in generative AI over the next two years.
That means spending time with executives talking about training frameworks and how generative AI will change roles in the future.
“[Customers] are also trying to figure out how they’re going to apply the technology, and learning becomes a really key part to that,” she said. “I think that they’re super curious. Obviously, it’s been in the news for months, and right now, they’re just trying to figure out how to apply it and having really good thoughtful conversations about it. Where would they apply the tech? What does that mean for the people in those roles? It will change the roles at some level, they’ll find efficiencies. But with any new technologies, especially this one that’s a little bit more of a disrupter, there’s always more jobs after that, there’s different types of jobs.”
The changes that AI is bringing run deeper than simply moving people whose jobs have been automated into new roles. With concerns about potential bias in AI, a diverse workforce is a key factor in trying to prevent it – and that means not just gender diversity, but also diversity of thought and life experience.
“For me, it’s not a ‘feel good’, it’s a need to have, because if we want the workforce to be diverse, you’ve got to be intentional about going out and finding different areas and democratising education,” said Lonergan. “These companies are building products for their customers that are very diverse.”
But despite the rush of companies into the technology, Selipsky was clear that these are very early days for generative AI – and, by extension, for AWS's involvement. It is, he says, more of a marathon than a sprint. "Everyone is moving fast, experimenting, learning."