Key Points:
- AWS launched Amazon S3 Files, offering high-performance file system access to S3 data.
- This feature aims to simplify data management and AI workloads by eliminating data duplication.
- S3 Files provides low-latency access, integrating S3 with file-based applications.
- The innovation helps developers by reducing storage complexities and costs for AI development.
Amazon Web Services (AWS) has announced Amazon S3 Files, which it calls the “first and only cloud object store that gives you complete, high-performance file system access to your data.”
S3 Files is a new feature that turns S3 into a shared file system, so data no longer needs to be moved or copied between separate storage systems. The aim is to make data storage and AI workloads easier for businesses to manage. A key part of S3 Files is its low-latency, high-performance file access, all without your data ever leaving the AWS environment.
“Built using Amazon EFS, S3 Files gives you the speed and simplicity of a file system along with the huge scale, reliability, and cost-effectiveness of S3,” the company explained in a blog post. Until now, companies stored their data and large data collections in S3, but tools, agents, and applications that work with files couldn’t access that data directly: they needed a separate file system, had to copy the data, or had to build complicated systems to link everything together.
S3 Files now makes this data available through both the file system and S3 APIs. This means “thousands of computing resources can connect to the same… file system at the same time.” Compared with competing systems, AWS now offers developers a much simpler workflow, making both structured and unstructured data easy to access. More broadly, the move shows Amazon simplifying its storage layers to make AI development faster for developers and cheaper for customers. “There are no separate data areas, no complicated syncing, and no compromises,” AWS concluded. S3 Files is now generally available in 34 AWS Regions.
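To illustrate the dual-access model described above, here is a minimal sketch. The mount path and key names are hypothetical, since AWS's announcement does not specify how buckets are mounted; the S3-API side of the round trip is shown only in comments (using boto3's real `put_object` call) so the sketch stays self-contained.

```python
from pathlib import Path

def read_via_file_api(mount_root: str, key: str) -> bytes:
    """Read an object through the file system interface.

    Under the dual-access model, an object in a mounted bucket is
    just an ordinary file under the mount point, so standard file
    I/O works on it. The mount_root argument is hypothetical; the
    actual mount mechanism isn't detailed in AWS's post.
    """
    return (Path(mount_root) / key).read_bytes()

# The same object could have been written through the S3 API, e.g.:
#
#   import boto3
#   boto3.client("s3").put_object(
#       Bucket="my-bucket", Key="data/report.csv", Body=b"...")
#
# and then read back from any of the connected compute hosts via
#   read_via_file_api("/mnt/my-bucket", "data/report.csv")
# with no copy or sync step in between.
```

The point of the sketch is the absence of a transfer step: once an object exists, both the object API and plain file reads see the same bytes.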