Commit

init
bubriks committed Jan 15, 2025
1 parent b751d8d commit 25ee80d
Showing 2 changed files with 2 additions and 0 deletions.
Binary file modified docs/assets/images/guides/fs/storage_connector/s3_creation.png
2 changes: 2 additions & 0 deletions docs/user_guides/fs/storage_connector/creation/s3.md
@@ -17,6 +17,8 @@ When you're finished, you'll be able to read files using Spark through HSFS APIs
Before you begin this guide, you'll need to retrieve the following information from your AWS S3 account and bucket:

- **Bucket:** You will need an S3 bucket that you have access to. The bucket is identified by its name.
- **Path (Optional):** If needed, a path can be defined to ensure that all operations are restricted to a specific location within the bucket.
- **Region (Optional):** Specifying the bucket's S3 region gives Hopsworks complete control over the data when managing a feature group that relies on this storage connector. The region is identified by its code.
- **Authentication Method:** You can authenticate using Access Key/Secret, or use IAM roles. If you want to use an IAM role it either needs to be attached to the entire Hopsworks cluster or Hopsworks needs to be able to assume the role. See [IAM role documentation](../../../../admin/roleChaining.md) for more information.
- **Server-Side Encryption details:** If your bucket has server-side encryption (SSE) enabled, make sure you know which algorithm it is using (AES256 or SSE-KMS). If you are using SSE-KMS, you need the resource ARN of the managed key.
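The prerequisites above can be sketched as a small validation helper that collects the connector settings before creation. The function and field names below are illustrative only and are not part of the Hopsworks API; the hedged rules (one of IAM role or key pair, SSE-KMS requiring a key ARN) mirror the bullet list in this guide:

```python
# Sketch: gather the S3 connector settings described in this guide.
# build_s3_connector_config and its fields are hypothetical names,
# not part of the Hopsworks/HSFS API.

def build_s3_connector_config(bucket, path=None, region=None,
                              access_key=None, secret_key=None,
                              iam_role=None, sse_algorithm=None,
                              sse_kms_key_arn=None):
    """Validate and collect the settings listed in the prerequisites."""
    # Authentication: either an IAM role or an access key/secret pair.
    if not (iam_role or (access_key and secret_key)):
        raise ValueError("Provide either an IAM role or an access key/secret pair")
    # SSE: only AES256 or SSE-KMS are supported; SSE-KMS needs a key ARN.
    if sse_algorithm not in (None, "AES256", "SSE-KMS"):
        raise ValueError("SSE algorithm must be AES256 or SSE-KMS")
    if sse_algorithm == "SSE-KMS" and not sse_kms_key_arn:
        raise ValueError("SSE-KMS requires the resource ARN of the managed key")

    config = {"bucket": bucket}  # the bucket is identified by its name
    for key, value in [("path", path), ("region", region),
                       ("access_key", access_key), ("secret_key", secret_key),
                       ("iam_role", iam_role), ("sse_algorithm", sse_algorithm),
                       ("sse_kms_key_arn", sse_kms_key_arn)]:
        if value is not None:  # optional fields are simply omitted
            config[key] = value
    return config

# Example: an IAM-role connector scoped to a prefix (all values illustrative).
config = build_s3_connector_config(
    bucket="my-bucket",
    path="datasets/raw",
    region="eu-north-1",
    iam_role="arn:aws:iam::123456789012:role/hopsworks-s3",
    sse_algorithm="AES256",
)
```

The helper is only a checklist in code form; the actual connector is created through the Hopsworks UI as shown in this guide.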

