Amazon homes in on generative AI at AWS Summit and unveils new AI projects
Amazon is most commonly associated with its e-commerce platform, which has become a giant in the industry thanks to its ability to sell nearly anything you can think of and deliver it to your doorstep within two days with a membership.
However, Amazon also has a strong presence in cloud computing and is about to become more involved with generative AI.
Also: How UPS workers’ big contract win could impact Amazon
On Tuesday, Amazon held its AWS (Amazon Web Services) Summit in New York, an event focused on Amazon’s work in the cloud that features expos, learning sessions, and a keynote address.
This year, Amazon used the platform to unveil several significant generative AI announcements that will optimize the creation of AI platforms for developers and ease AI integration for enterprises.
Building and powering an AI model involves several stages: choosing the chips that will power the model, building and training the model, and finally applying the model in the real world.
Also: Generative AI will soon go mainstream, say 9 out of 10 IT leaders
AWS’s announcements today help optimize every step of the process. Here is a roundup of some of the most noteworthy announcements.
AWS HealthScribe
AWS HealthScribe is a HIPAA-eligible, generative AI-powered service that transcribes conversations between patients and clinicians and creates clinical documents, such as summaries with AI-generated insights.
“With AWS HealthScribe, healthcare software providers can use a single API to automatically create robust transcripts, extract key details (e.g., medical terms and medications), and create summaries from doctor-patient discussions that can then be entered into an electronic health record (EHR) system,” according to the press release.
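For a sense of what that single API looks like in practice, here is a minimal sketch that starts a HealthScribe job with the AWS SDK for Python (boto3). It assumes the start_medical_scribe_job operation on the Amazon Transcribe client; the S3 locations, IAM role, and job name are placeholders.

```python
import boto3

# Minimal sketch: start a HealthScribe job on a recorded patient visit in S3.
# The bucket names, IAM role ARN, and job name below are placeholders.
transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-001",
    Media={"MediaFileUri": "s3://example-clinic-audio/visit-001.wav"},
    OutputBucketName="example-clinic-output",
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ExampleHealthScribeRole",
    Settings={
        # Speaker labels separate the clinician's and patient's turns,
        # which the service draws on when drafting the clinical note.
        "ShowSpeakerLabels": True,
        "MaxSpeakerLabels": 2,
    },
)

# When the job reaches COMPLETED, the transcript and AI-generated summary
# are written to the output bucket, ready to be pushed into an EHR system.
job = transcribe.get_medical_scribe_job(MedicalScribeJobName="visit-001")
print(job["MedicalScribeJob"]["MedicalScribeJobStatus"])
```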
Also: This is how generative AI will change the gig economy for the better
To address privacy concerns, Amazon says the service has data security and privacy built in: it does not retain customer data after processing and encrypts customer data in transit.
This isn’t the first time generative AI has been geared toward the medical field, as seen with Google’s launch of Med-PaLM 2 in April.
Public availability of Amazon EC2 P5
In March, Amazon announced Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, powered by Nvidia H100 Tensor Core GPUs and designed to deliver the compute performance needed to build and train machine learning (ML) models.
These instances can provide up to six times faster training than the previous generation and cut training costs by up to 40%, according to Amazon.
Today, Amazon EC2 P5 instances became generally available.
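For teams that want to try the new instances, provisioning one works like any other EC2 launch. The sketch below uses boto3 to request a p5.48xlarge on demand; the AMI ID and key pair name are placeholders standing in for a real Deep Learning AMI and SSH key.

```python
import boto3

# Minimal sketch: launch one on-demand P5 instance for ML training.
# The AMI ID and key pair name are placeholders; in practice you would use
# a Deep Learning AMI (or your own image) with current NVIDIA drivers.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="p5.48xlarge",       # EC2 P5: 8 NVIDIA H100 Tensor Core GPUs
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",        # placeholder key pair
)

print(response["Instances"][0]["InstanceId"])
```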
Amazon Bedrock updates
To help developers build generative AI applications on top of foundation models, Amazon launched its foundation model service, Amazon Bedrock, back in April.
With Amazon Bedrock, developers can choose which foundation model they want for their specific use case. Until today, the choices included Amazon's Titan, Anthropic's Claude, Stability AI's Stable Diffusion, and AI21 Labs' Jurassic-2.
Also: The best AI art generators
At the AWS Summit, Amazon announced that the choices will now include Claude 2, Anthropic's latest LLM; SDXL 1.0, Stability AI's latest text-to-image model; and foundation models from a new provider, Cohere.
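To illustrate how developers consume these models, the sketch below calls Claude 2 through Bedrock's runtime API with boto3. It assumes the account has been granted access to the anthropic.claude-v2 model in the chosen region; the prompt follows Anthropic's Human/Assistant convention.

```python
import json

import boto3

# Minimal sketch: invoke Claude 2 through Amazon Bedrock's runtime API.
# Assumes the account has been granted access to the model in this region.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize Amazon Bedrock in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```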
Amazon Bedrock is also introducing agents, which allow developers to build AI applications that can complete a broader range of tasks by incorporating proprietary data without manual training.
These new Amazon Bedrock features are available in preview today.
Vector engine for Amazon OpenSearch Serverless
When you input a prompt into a generative AI model, the prompt you enter is conversational, and the output you receive is conversational too, as seen in the rise of massively popular chatbots.
Many of these generative AI applications use vector embeddings, numerical representations of text, image, and video data that capture contextual relationships, to help generate accurate responses.
Also: 6 skills you need to become an AI prompt engineer
AWS’s vector engine for Amazon OpenSearch Serverless makes it easier for customers to search embeddings and incorporate them into LLM applications.
Now available in preview, the new vector engine allows customers to store, search, and retrieve billions of vector embeddings in real time without worrying about the underlying infrastructure, according to the release.
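In practice, the engine is queried like any other OpenSearch k-NN index: each document is stored alongside its embedding, and searches run against the embedding of the user's prompt. The sketch below uses the opensearch-py client and assumes a hypothetical serverless collection endpoint, an index named docs with a knn_vector field called embedding, and a toy four-dimensional query vector standing in for a real embedding, which would typically have hundreds or thousands of dimensions.

```python
import boto3
from opensearchpy import AWSV4SignerAuth, OpenSearch, RequestsHttpConnection

# Minimal sketch: run a k-NN search against a vector index in an
# OpenSearch Serverless collection. The collection endpoint, index name,
# and field name are placeholders.
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1", "aoss")

client = OpenSearch(
    hosts=[{"host": "example-collection.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# The query vector would normally be the embedding of the user's prompt,
# produced by an embedding model; a tiny toy vector is used here.
query = {
    "size": 3,
    "query": {
        "knn": {
            "embedding": {  # hypothetical knn_vector field
                "vector": [0.12, -0.53, 0.08, 0.91],
                "k": 3,
            }
        }
    },
}

results = client.search(index="docs", body=query)
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```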