Amazon Simple Storage Service (S3)
- S3 is an Object Store
- Secure, durable, and highly-scalable cloud storage
- Optimized for reads and intentionally lightweight
- Accessible from anywhere on the web
- One of the AWS Foundational Services
- General Purpose
- Infrequent Access
- Automatically migrate to the most appropriate Storage class
- Rich set of Access Controls
Replication is Automatic:
- Multiple devices
- Multiple facilities within a region
- Across availability zones
Scalability - Automatically partitions buckets supporting:
- High request rates
- Simultaneous access
- Multiple concurrent users
- Amazon Glacier is optimized for long-term backup and archival.
- 3-5 hour retrieval time
Dual Product Offering
- An S3 Storage Class
- Archival Storage Service
Types of Storage
Block - Storage Device Level
- Organizes data into numbered, fixed-size blocks
- Storage Area Network (SAN)
- Fibre Channel
File - Server and Operating System Level
- Organizes data into named hierarchy of folders and files
- Network Attached Storage (NAS)
- Independent of Servers, Operating Systems
- Accessed over a network
- The native interface for S3 is a REST API.
S3 Object Characteristics
S3 object contains BOTH data and metadata
S3 object is uniquely identified by: <bucket><key>[<versionId>]
- Keys are Unicode strings whose UTF-8 encoding is <= 1024 bytes.
- Size range is 0 bytes up to 5 terabytes
- Operations (GET, PUT) are on whole objects
- S3 treats all objects as an opaque stream of bytes; content is never interpreted.
Metadata:
- A set of name/value pairs
- System metadata with object characteristics
- Optional user metadata
A bucket is a container (web folder) for objects (files) stored in S3.
- Each account may define 100 buckets, by default.
- Buckets are created and stored within a specific AWS region.
- Buckets are the top-level, global namespace in S3.
Bucket names:
- Must be globally unique across all AWS accounts.
- Must be between 3 and 63 characters long.
- Contain only: lower-case letters, numbers, periods, and dashes.
- Additional restrictions apply
- Include your domain name
- Conform to DNS naming conventions
- Object Key and Metadata
- Domain Name System
- DNS Naming Conventions
- Can hold an unlimited number of objects
- A simple flat folder with no hierarchy
Note: For your convenience, the Amazon S3 console and the Prefix and Delimiter feature allow you to navigate within an Amazon S3 bucket as if there were a folder hierarchy.
However, remember that a bucket is a single flat namespace of keys with no structure.
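The bucket-naming rules above can be sketched as a small validator. This is an illustrative approximation of the DNS-style rules, not the complete official rule set:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a candidate S3 bucket name against the rules summarized
    above: 3-63 chars; lowercase letters, digits, periods, dashes;
    must start and end with a letter or digit; no adjacent periods;
    must not be formatted like an IP address."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if ".." in name:
        return False
    # Reject names formatted like an IPv4 address, e.g. 192.168.5.4
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True
```

For example, `is_valid_bucket_name("my-app.example.com")` passes, while `"My-Bucket"` (uppercase) and `"192.168.5.4"` (IP-formatted) fail.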
Accessing S3 Objects
- Intentionally simple
Based on a REST implementation of CRUD operations
- Note: the absence of an Update. Why?
- Representational State Transfer (REST)
Create, Read, Update, Delete (CRUD) operations mapped to HTTP methods
Ref: POST Object
- Create -> HTTP PUT (or POST to accommodate use of HTML forms)
- Read -> HTTP GET
- Update -> HTTP POST (or PUT)
- Delete -> HTTP DELETE
High Level Interfaces
- AWS Software Development Kit (SDK)
- Wrapper Libraries
- AWS Command Line Interface (CLI)
- AWS Management Console
Durability and Availability
- Will my data still be there?
- S3 is 99.999999999% durable
- Can I access my data?
- S3 is 99.99% available
Reduced Redundancy Storage (RRS)
- Reduced Cost Alternative
- RRS is 99.99% durable
Protect against user mistakes
- Cross-Region Replication
- MFA Delete
- S3 is an eventually consistent system
Immediately after an update, a read may return stale data. This is applicable to:
- PUTs to existing Objects
- Object Deletes
- Updates are Atomic - Partial updates cannot occur
- S3 is secure by default. Initially only creator has access.
Coarse-grained access control:
- S3 ACLs
- READ, WRITE, or FULL_CONTROL
- Bucket or Object level
Best Use Cases
- Enabling Bucket Logging
- Hosting a static website
Fine-grained access controls:
S3 Bucket Policies
- Recommended access control mechanism
- Similar to IAM policies
- Access Control over who, from where, and when
- AWS IAM
- Query String Authentication
Static Website Hosting
- Very common use case
- Every S3 Object has a URL
Configure the bucket
- Create a bucket with the same name as the desired website hostname.
- Upload the static files to the bucket.
- Make all the files public (world readable).
- Enable static website hosting for the bucket.
This includes specifying an Index document and an Error document.
- The website will now be available at the S3 website URL: <bucket-name>.s3-website-<region>.amazonaws.com
- Create a friendly DNS name in your own domain for the website using a DNS CNAME, or an Amazon Route 53 alias that resolves to the Amazon S3 website URL.
- The website will now be available at your website domain name.
Prefixes and Delimiters
- While Amazon S3 uses a flat structure in a bucket, it supports the use of prefix and delimiter parameters when listing key names. This emulates a file and folder hierarchy within the flat object key namespace of a bucket.
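The prefix/delimiter listing behavior can be emulated over a plain list of keys. This sketch mirrors the `Contents` and `CommonPrefixes` fields of an S3 ListObjects response; the key names are invented examples:

```python
def list_keys(keys, prefix="", delimiter=""):
    """Emulate S3's prefix/delimiter listing over a flat key namespace.
    Returns (keys, common_prefixes): keys matching the prefix with no
    further delimiter, plus "folder" prefixes rolled up at the first
    delimiter past the prefix."""
    contents, common = [], set()
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Roll everything past the first delimiter up into one "folder"
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common)

keys = ["logs/2025/01/a.log", "logs/2025/02/b.log", "index.html"]
print(list_keys(keys, prefix="logs/", delimiter="/"))
# → ([], ['logs/2025/'])
```

Listing with `prefix="logs/"` and `delimiter="/"` surfaces `logs/2025/` as a single "folder", even though no folder object exists in the bucket.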
- REST API
- Wrapper SDKs
- AWS CLI
- AWS Management Console
- Amazon S3 is not really a file system.
- Amazon S3 offers a range of storage classes suitable for various use cases.
Amazon S3 Standard:
- High durability
- High availability
- Low latency
- High performance object storage
Amazon S3 Standard — Infrequent Access (Standard-IA)
- Different Availability profile from Standard
- Designed for long-lived, less frequently accessed data
- Lower per GB-month storage cost than Standard
Minimums and Costs
- Object size (128KB)
- Duration (30 days)
- Per-GB retrieval costs
- Amazon S3 Reduced Redundancy Storage (RRS) offers slightly lower durability (4 nines) than Standard or Standard-IA at a reduced cost.
Amazon Glacier storage class
- Data that does not require real-time access
- Retrieval time of several (3-5) hours is suitable.
- Note: restore creates a copy in Amazon S3 RRS and original remains in Amazon Glacier
- Retrieval of up to 5% of the data is free each month
Amazon Glacier is also a standalone storage service
- Separate API and some unique characteristics.
Object Lifecycle Management
- Equivalent to automated storage tiering
- Attached to the Bucket
- Contents may be filtered by name prefixes
Reduce storage costs by automatically transitioning data from one storage class to another. For Example:
- Store backup data initially in Amazon S3 Standard.
- After 30 days, transition to Amazon S3 Standard-IA.
- After 90 days, transition to Amazon Glacier.
- After 3 years, delete.
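The example above can be expressed as a lifecycle configuration in the shape accepted by the S3 PutBucketLifecycleConfiguration API. This is a sketch; the `backup/` prefix and rule ID are assumed examples:

```python
import json

# Lifecycle rule: Standard -> Standard-IA at 30 days,
# -> Glacier at 90 days, delete at ~3 years (1095 days).
lifecycle = {
    "Rules": [{
        "ID": "tier-then-expire-backups",
        "Filter": {"Prefix": "backup/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 1095},
    }]
}
print(json.dumps(lifecycle, indent=2))
```

Note that the rule is attached to the bucket and filtered by key prefix, matching the "Attached to the Bucket / filtered by name prefixes" points above.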
- Use the Amazon S3 SSL API endpoints to encrypt data in transit
S3 encrypts data at the object level as it writes and decrypts on read
- S3's SSE uses the 256-bit Advanced Encryption Standard (AES)
- Use Client-Side Encryption before sending it to Amazon S3
SSE-S3 (AWS-Managed Keys)
- AWS handles the key management and key protection
- Every object is encrypted with a unique key.
- The actual object key itself is then further encrypted by a separate master key.
SSE-KMS (AWS KMS Keys)
- AWS KMS handles key protection for Amazon S3
- You manage the master keys in AWS KMS
- There are separate permissions for using the master key
- Auditing is provided by AWS
- Allows you to view any failed attempts to access data
SSE-C (Customer-Provided Keys)
- Maintain your own encryption keys
- AWS will encrypt/decrypt your objects
- You maintain full control of the keys
- Encrypting data before sending it
Versioning
- Protection against accidental or malicious deletion
- Preserve, retrieve, and restore every version of every object
- Restore objects to their original state simply by referencing the version ID
- Turned on at the bucket level.
- Once enabled, versioning cannot be removed from a bucket; it can only be suspended.
Multi-Factor Authentication (MFA) Delete
- On top of bucket versioning.
- Requires additional authentication to permanently delete an object
- Requires an authentication code (a temporary, one-time password) generated by a hardware or virtual Multi-Factor Authentication (MFA) device.
- Note: that MFA Delete can only be enabled by the root account.
Pre-Signed URLs
- Object owner can create a pre-signed URL to share an object
- Valid only for the specified duration
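The idea behind a pre-signed URL can be sketched with a plain HMAC: sign the resource together with an expiry time so the URL self-expires. Real S3 pre-signed URLs use AWS Signature Version 4, which is more involved; the secret and URL shape here are purely illustrative:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-secret"  # stands in for the object owner's signing key

def presign(bucket, key, expires_in, now=None):
    """Produce an expiring signed URL (illustrative, not SigV4)."""
    expires = int(now if now is not None else time.time()) + expires_in
    msg = f"GET\n/{bucket}/{key}\n{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    qs = urlencode({"Expires": expires, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{qs}"

def verify(bucket, key, expires, signature, now=None):
    """Server side: recompute the signature and check the expiry."""
    msg = f"GET\n/{bucket}/{key}\n{expires}".encode()
    good = hmac.compare_digest(
        hmac.new(SECRET, msg, hashlib.sha256).hexdigest(), signature)
    return good and (now if now is not None else time.time()) < expires
```

Anyone holding the URL can fetch the object until the expiry passes, after which verification fails, which is what makes pre-signed URLs useful against casual web scraping.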
Multipart Upload
An API that allows uploading large objects as a set of parts with the ability to:
- upload objects, where the size is initially unknown
- When object > 100 MB, multipart upload is recommended
- When object > 5 GB, multipart upload is required
- When using the low-level APIs, the file to be uploaded must be broken into parts, which are managed by the caller.
- High-level APIs manage the parts automatically
- Lifecycle policy to abort incomplete multipart uploads after a specified number of days
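With the low-level APIs, splitting the object into numbered parts is the caller's job. A minimal sketch of that step (real S3 requires every part except the last to be at least 5 MB; a tiny part size is used here only for illustration):

```python
def split_into_parts(data: bytes, part_size: int):
    """Split an object's bytes into (part_number, part_bytes) tuples.
    Part numbers are 1-indexed, matching the multipart upload API."""
    return [
        (i // part_size + 1, data[i:i + part_size])
        for i in range(0, len(data), part_size)
    ]

parts = split_into_parts(b"0123456789", 4)
# → [(1, b'0123'), (2, b'4567'), (3, b'89')]
```

Each part would then be uploaded independently (and retried on failure) before a final "complete" call stitches them into one object.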
Range GETs
- Download (GET) only a portion of an object
- Use a Range HTTP header to specify a range of bytes of the object
- Useful when you have poor connectivity or download a known subset of a large object
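The Range header and the slice of bytes it selects can be sketched as follows; byte positions are zero-indexed and inclusive, per the HTTP range spec:

```python
def range_header(first_byte: int, last_byte: int) -> dict:
    """Build the HTTP Range header for a partial GET."""
    return {"Range": f"bytes={first_byte}-{last_byte}"}

def apply_range(obj: bytes, first_byte: int, last_byte: int) -> bytes:
    """What the server returns for that header (206 Partial Content):
    the inclusive byte range."""
    return obj[first_byte:last_byte + 1]

print(range_header(0, 1023))               # → {'Range': 'bytes=0-1023'}
print(apply_range(b"hello world", 6, 10))  # → b'world'
```

Resuming an interrupted download is then just a matter of issuing a new GET whose range starts at the first missing byte.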
Cross-Region Replication
- Asynchronously replicate all new objects to a target bucket in another region
- Any metadata and ACLs associated with the object are also part of the replication.
- Any changes trigger a new replication to the destination bucket
- Versioning must be turned on for both source and destination buckets
- Requires an IAM policy to give Amazon S3 permission to replicate
- Used to reduce the latency by placing objects closer to a set of users
- Used to meet locality requirements
- CRR replicates only new objects created after replication is enabled
- Track S3 requests by enabling Amazon S3 server access logs.
- Logging is off by default
- Store access logs in the same or a different bucket
Event Notifications
Trigger notification events based on S3 object actions. Enables:
- Running workflows
- Sending alerts
- Transcoding media files
- Processing data files
- Synchronizing S3 objects with other data stores
- Set up at the bucket level
Publish notifications when:
- New objects are created
- Objects are removed (by a DELETE)
- S3 detects that an RRS object was lost
- Set up event notifications based on Object name prefixes and suffixes
Notifications can be sent through:
- Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS)
- AWS Lambda to invoke AWS Lambda functions
Best Practices, Patterns, and Performance
- Use S3 storage in hybrid IT environments and applications
- For example, backed up over the Internet to S3 or Glacier
- Use S3 as bulk "blob" storage for data, while keeping an index
- S3 will scale automatically to support very high request rates
- For request rates higher than 100 requests per second, ensure some level of random distribution of keys
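The key-randomization advice above is usually implemented by prepending a short hash-derived prefix, so that lexicographically adjacent names (e.g. timestamped logs) spread across S3's internal partitions. A sketch, with an assumed 4-character prefix length:

```python
import hashlib

def randomized_key(key: str, prefix_len: int = 4) -> str:
    """Prepend a hash-derived prefix to spread sequential key names.
    Deterministic, so the stored key can always be recomputed from
    the original name."""
    h = hashlib.md5(key.encode()).hexdigest()[:prefix_len]
    return f"{h}/{key}"
```

For example, `2025-01-01/server1.log` and `2025-01-02/server1.log` hash to unrelated prefixes, so they no longer sort next to each other.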
- In a GET-intensive mode, consider using Amazon CloudFront as a caching layer for S3.
Amazon Glacier
- Extremely low-cost storage service for data archiving and online backup.
- Designed for infrequently accessed data
- Retrieval time is three to five hours
Common use cases:
- Long-term backup, archive, and storage of compliance data
- Usually consists of large TAR (Tape Archive) or ZIP files
- Designed for 99.999999999% durability
- Stores data on multiple devices across multiple facilities in a region.
- Data is stored in archives, which can contain up to 40 TB of data
- Unlimited number of archives
- Each archive is assigned a unique archive ID at the time of creation.
- Automatically encrypted, and immutable
Vaults
- Containers for archives
- Max: 1,000 vaults per account
- Control access using IAM policies or vault access policies
- Specify controls such as Write Once Read Many (WORM) in a vault lock policy
- Once locked, the policy can no longer be changed.
- Retrieve up to 5% of your data for free each month
- Eliminate or minimize fees, by setting a data retrieval policy
Amazon Glacier versus Amazon S3
- Supports 40 TB archives versus 5 TB objects in S3
- Identified by system-generated archive IDs
- Automatically encrypted, encryption is optional in Amazon S3
- Amazon S3 is the core object storage service on AWS, allowing you to store an unlimited amount of data with very high durability.
- Common Amazon S3 use cases include backup and archive, web content, big data analytics, static website hosting, mobile and cloud-native application hosting, and disaster recovery.
- Amazon S3 is integrated with many other AWS cloud services, including AWS IAM, AWS KMS, Amazon EC2, Amazon EBS, Amazon EMR, Amazon DynamoDB, Amazon Redshift, Amazon SQS, AWS Lambda, and Amazon CloudFront.
- Object storage differs from traditional block and file storage. Block storage manages data at a device level as addressable blocks, while file storage manages data at the operating system level as files and folders. Object storage manages data as objects that contain both data and metadata, manipulated by an API.
- Amazon S3 buckets are containers for objects stored in Amazon S3. Bucket names must be globally unique. Each bucket is created in a specific region, and data does not leave the region unless explicitly copied by the user.
- Amazon S3 objects are files stored in buckets. Objects can be up to 5TB and can contain any kind of data. Objects contain both data and metadata and are identified by keys. Each Amazon S3 object can be addressed by a unique URL formed by the web services endpoint, the bucket name, and the object key.
- Amazon S3 has a minimalistic API—create/delete a bucket, read/write/delete objects, list keys in a bucket —and uses a REST interface based on standard HTTP verbs—GET, PUT, POST, and DELETE. You can also use SDK wrapper libraries, the AWS CLI, and the AWS Management Console to work with Amazon S3.
- Amazon S3 is highly durable and highly available, designed for eleven nines (99.999999999%) of durability of objects in a given year and four nines (99.99%) of availability.
- Amazon S3 is eventually consistent, but offers read-after-write consistency for new object PUTs.
- Amazon S3 objects are private by default, accessible only to the owner. Objects can be marked public readable to make them accessible on the web. Controlled access may be provided to others using ACLs and AWS IAM and Amazon S3 bucket policies.
- Static websites can be hosted in an Amazon S3 bucket.
- Prefixes and delimiters may be used in key names to organize and navigate data hierarchically much like a traditional file system.
- Amazon S3 offers several storage classes suited to different use cases: Standard is designed for general-purpose data needing high performance and low latency. Standard-IA is for less frequently accessed data. RRS offers lower redundancy at lower cost for easily reproduced data. Amazon Glacier offers low-cost durable storage for archive and long-term backups that are rarely accessed and can accept a three- to five-hour retrieval time.
- Object lifecycle management policies can be used to automatically move data between storage classes based on time.
- Amazon S3 data can be encrypted using server-side or client-side encryption, and encryption keys can be managed with AWS KMS.
- Versioning and MFA Delete can be used to protect against accidental deletion.
- Cross-region replication can be used to automatically copy new objects from a source bucket in one region to a target bucket in another region.
- Pre-signed URLs grant time-limited permission to download objects and can be used to protect media and other web content from unauthorized "web scraping."
- Multipart upload can be used to upload large objects, and Range GETs can be used to download portions of an Amazon S3 object or Amazon Glacier archive.
- Server access logs can be enabled on a bucket to track requestor, object, action, and response.
- Amazon S3 event notifications can be used to send an Amazon SQS or Amazon SNS message or to trigger an AWS Lambda function when an object is created or deleted.
- Amazon Glacier can be used as a standalone service or as a storage class in Amazon S3.
- Amazon Glacier stores data in archives, which are contained in vaults. You can have up to 1,000 vaults, and each vault can store an unlimited number of archives.
- Amazon Glacier Vaults can be locked for compliance purposes.