AWS has announced updates to Knowledge Bases for Amazon Bedrock, a capability introduced at AWS re:Invent 2023 that lets organizations supply information from their own private data sources to improve the relevance of model responses.
According to AWS, there have been significant improvements since the launch, such as the introduction of Amazon Aurora PostgreSQL-Compatible Edition as an additional option for custom vector storage alongside other options like the vector engine for Amazon OpenSearch Serverless, Pinecone, and Redis Enterprise Cloud.
Among the new updates is an expanded choice of embedding models. In addition to Amazon Titan Text Embeddings, users can now select the Cohere Embed English and Cohere Embed Multilingual models, both of which support 1,024 dimensions, to convert data into vector embeddings that capture the semantic or contextual meaning of the text. The update is intended to give users more flexibility and precision in how they manage and use their data within Amazon Bedrock.
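In practice, the embedding model is chosen when the knowledge base is created. The sketch below is a hypothetical illustration of that configuration shape, assuming the `bedrock-agent` `create_knowledge_base` API's ARN-based model selection; the region and model identifier are assumptions, not taken from the article.

```python
# Hypothetical sketch: the embedding model is selected via a foundation-model
# ARN in the knowledgeBaseConfiguration payload. Region and model ID below
# are illustrative assumptions.
REGION = "us-east-1"

def embedding_model_arn(model_id: str, region: str = REGION) -> str:
    """Build a foundation-model ARN for the chosen embedding model."""
    return f"arn:aws:bedrock:{region}::foundation-model/{model_id}"

# Cohere Embed Multilingual (1,024 dimensions); Amazon Titan Text Embeddings
# or Cohere Embed English could be substituted the same way.
kb_configuration = {
    "type": "VECTOR",
    "vectorKnowledgeBaseConfiguration": {
        "embeddingModelArn": embedding_model_arn("cohere.embed-multilingual-v3"),
    },
}
```

This payload would then be passed to `create_knowledge_base` alongside the storage settings for the chosen vector store.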
To offer more flexibility and control, Knowledge Bases supports a selection of custom vector stores. Users can choose from an array of supported options, tailoring the backend to their specific requirements. This customization extends to providing the vector database index name, along with detailed mappings for index fields and metadata fields. Such features ensure that the integration of Knowledge Bases with existing data management systems is seamless and efficient, enhancing the overall utility of the service.
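The index name and field mappings described above might look like the following minimal sketch, assuming the `storageConfiguration` shape used by Knowledge Bases for an OpenSearch Serverless backend; the collection ARN, index name, and field names are all illustrative assumptions supplied by the user, not values from the article.

```python
# Minimal sketch of a custom vector store configuration. The user provides
# the index name plus mappings for the vector, text, and metadata fields
# their index actually uses. All names/ARNs here are assumptions.
storage_configuration = {
    "type": "OPENSEARCH_SERVERLESS",
    "opensearchServerlessConfiguration": {
        "collectionArn": "arn:aws:aoss:us-east-1:111122223333:collection/example",
        "vectorIndexName": "bedrock-kb-index",  # user-provided index name
        "fieldMapping": {
            "vectorField": "embedding",    # column holding the vector embeddings
            "textField": "chunk_text",     # column holding the raw text chunks
            "metadataField": "metadata",   # column holding source metadata
        },
    },
}
```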
In this latest update, Amazon Aurora PostgreSQL-Compatible and Pinecone serverless have been added as additional choices for vector stores.
Many of Amazon Aurora’s database features will also apply to vector embedding workloads, such as elastic scaling of storage, low-latency global reads, and faster throughput compared to open-source PostgreSQL. Pinecone serverless is a new serverless offering of Pinecone, a vector database for building generative AI applications.
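For comparison with the OpenSearch option, the two new backends would plug into the same storage setting. The sketch below is a hedged illustration, assuming the `RDS` and `PINECONE` storage types of Knowledge Bases; every ARN, name, and connection string is an invented placeholder.

```python
# Hedged sketch: Aurora PostgreSQL-Compatible is configured as an "RDS"
# storage type backed by a table (typically using a pgvector column),
# while Pinecone serverless is addressed via a connection string.
# All identifiers below are illustrative assumptions.
aurora_storage = {
    "type": "RDS",
    "rdsConfiguration": {
        "resourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:kb-cluster",
        "credentialsSecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:kb-creds",
        "databaseName": "knowledgebase",
        "tableName": "bedrock_kb",
        "fieldMapping": {
            "primaryKeyField": "id",
            "vectorField": "embedding",   # pgvector column for embeddings
            "textField": "chunk_text",
            "metadataField": "metadata",
        },
    },
}

pinecone_storage = {
    "type": "PINECONE",
    "pineconeConfiguration": {
        "connectionString": "https://example-index.svc.pinecone.io",
        "credentialsSecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:pinecone-key",
        "fieldMapping": {
            "textField": "chunk_text",
            "metadataField": "metadata",
        },
    },
}
```

Only one of these dictionaries would be supplied per knowledge base, depending on which vector store the user selects.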
These new options provide users with greater variety and scalability in their choice of vector storage solutions, allowing for more tailored and effective data management strategies.
And finally, the existing Amazon OpenSearch Serverless integration has been updated to reduce costs for development and testing workloads: redundant replicas are now disabled by default, a change Amazon estimates will cut costs in half.
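For a dev/test collection created directly, disabling redundant replicas might look like the sketch below, assuming the `standbyReplicas` parameter of the OpenSearch Serverless `create_collection` API; the collection name is an invented placeholder.

```python
# Sketch of a dev/test OpenSearch Serverless collection request with
# redundant (standby) replicas disabled -- the behavior Amazon says is
# now the default for Knowledge Bases and halves cost. The parameter
# name is an assumption; the collection name is illustrative.
collection_request = {
    "name": "kb-dev-collection",
    "type": "VECTORSEARCH",
    "standbyReplicas": "DISABLED",  # no redundant replicas for dev/test
}
# Would be sent as:
# boto3.client("opensearchserverless").create_collection(**collection_request)
```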
Together, these updates underscore Amazon Bedrock’s commitment to enhancing user experience and offering versatile, cost-effective solutions for managing vector data within the cloud, according to a blog post by Antje Barth, principal developer advocate at AWS.
The post Amazon adds new embedding model choices to Knowledge Bases for Amazon Bedrock appeared first on SD Times.