Mount an S3 bucket to Linux


Notes on how to mount an S3 bucket to a Linux host.

davidxjohnson/s3fs-howto


Cheat-sheet on how to mount an S3 bucket to a Linux host (securely)

What’s covered in this how-to:

  • Use of the S3FS FUSE-based file system.
  • Use of the AWS CLI to create an S3 bucket, an access user and permissions.
  • Creating a mount and enabling it on start-up.
  • Syncing local data to the S3 mount (as a backup).

In the process of creating a Plex media center on a Raspberry Pi, I really wanted a way to periodically back up the local media store to the cloud. An S3 bucket seems ideal for a backup because it’s inexpensive, secure, highly available and very durable. Wouldn’t it be great if you could simply mount an S3 bucket to the media server and rsync to it? (Well… maybe.)

  • First, if you don’t already have an AWS account, go ahead and get one (it’s free).
  • Once you have an account, don’t log in with the root account all the time. Be sure to follow the security recommendations as a minimum:
    • set up multi-factor authentication for the root account
    • set up a separate admin account for… well… regular administration
    • create an admin group and attach the AdministratorAccess policy to it, then make the admin user a member of the admin group (a CLI sketch follows this list)
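
The console steps above can also be done from the CLI. Here is a minimal sketch for illustration only; the group and user names are examples, not part of the original notes:

# create an admin group, attach the managed AdministratorAccess policy,
# then create the admin user and add it to the group
aws iam create-group --group-name admins
aws iam attach-group-policy --group-name admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-user --user-name admin
aws iam add-user-to-group --group-name admins --user-name admin
# generate the AccessKeyId/SecretAccessKey pair used in the next step
aws iam create-access-key --user-name admin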

Use the AWS CLI to create your S3 bucket and configure the user and access policy

Let’s store your admin credentials for use in the CLI (use the AccessKeyId/SecretAccessKey pair that you generated previously for the admin user).

developer@vbox: aws configure --profile=admin
AWS Access Key ID [None]: AKIAI7IDCKIAGKSLMCQ
AWS Secret Access Key [None]: r+Mc7ucp9YzqOyoeYxtNKb/8RQqOImBoAggJAABk
Default region name [None]: us-east-1
Default output format [None]:

Create the S3 bucket using the admin account (substitute the S3 bucket name and region for your particular use case):

developer@vbox: aws --profile=admin s3api create-bucket --bucket dxj.media-server --region us-east-1
{
    "Location": "/dxj.media-server"
}

Create a user account that will be used to access the S3 bucket (substitute whatever name you’d rather use):

developer@vbox: aws --profile=admin iam create-user --user-name=media-server-user
{
    "User": {
        "UserName": "media-server-user",
        "Path": "/",
        "CreateDate": "2018-01-13T15:53:12.711Z",
        "UserId": "AIDAIYUJKOUDY6RWCRZWM",
        "Arn": "arn:aws:iam::836808583228:user/media-server-user"
    }
}

Make note of the user Arn and bucket name from the steps above… you’ll need them for the next step. Create a bucket policy to grant the minimum access needed to mount an S3 bucket to your Linux host.

developer@vbox: aws --profile=admin s3api put-bucket-policy --bucket=dxj.media-server --policy \
'{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::836808583228:user/media-server-user" },
            "Action": [ "s3:GetBucketLocation", "s3:ListBucket" ],
            "Resource": "arn:aws:s3:::dxj.media-server",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::836808583228:user/media-server-user" },
            "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject" ],
            "Resource": "arn:aws:s3:::dxj.media-server/*",
            "Condition": {}
        }
    ]
}'

Note: The above policy was generated using the Policy Generator.
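
To double-check what was applied, you can read the policy back; this verification step is my addition, not part of the original notes:

developer@vbox: aws --profile=admin s3api get-bucket-policy --bucket dxj.media-server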

Next, generate an access key for your S3 bucket user:

developer@vbox: aws --profile=admin iam create-access-key --user-name media-server-user
{
    "AccessKey": {
        "UserName": "media-server-user",
        "Status": "Active",
        "CreateDate": "2018-01-13T16:03:09.094Z",
        "SecretAccessKey": "87NNCA2GUoP6HWDZxyPypVHN156HWNCA2+bBi2LH",
        "AccessKeyId": "AKIAJ22SJDBHJ22DBHJQ"
    }
}

Finally, save the above user AccessKeyId/SecretAccessKey pair for use by the AWS CLI and S3FS:

# save credentials for the S3FS file system
# syntax is bucket:access-key-id:secret-access-key
developer@vbox: echo 'dxj.media-server:AKIAJ22SJDBHJ22DBHJQ:87NNCA2GUoP6HWDZxyPypVHN156HWNCA2+bBi2LH' >> ~/.passwd-s3fs
developer@vbox: chmod 600 ~/.passwd-s3fs

# save the credentials for the AWS CLI
developer@vbox: aws configure --profile=media-server-user
AWS Access Key ID [None]: AKIAJ22SJDBHJ22DBHJQ
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]:
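
As a quick sanity check (my addition), confirm that the restricted user can list the bucket using nothing but the permissions granted by the bucket policy:

developer@vbox: aws --profile=media-server-user s3 ls s3://dxj.media-server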

Mount the S3 bucket using the S3FS FUSE-based file system:

For testing purposes, let’s mount the S3 bucket to the current user’s home directory:

# make a directory for the drive mount
developer@vbox: mkdir -p ~/s3-drive/media-center
# allow the mount command to change ownership of files
developer@vbox: sudo sed -i s/\#user_allow_other/user_allow_other/g /etc/fuse.conf
# mount the S3 bucket with default permissions of 750 and owned by 1000:1000
developer@vbox: s3fs -o allow_other,uid=1000,gid=1000,umask=027 dxj.media-server /home/developer/media-server
# is it mounted?
developer@vbox: mount | grep s3fs
s3fs on /home/developer/media-server type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
# copy files to the drive
developer@vbox: cd ~
developer@vbox:~$ rsync -rz media-server s3-drive
developer@vbox:~$ tree s3-drive/
s3-drive/
└── media-server
    ├── music
    │   ├── 1X28UjzT.mp3
    │   ├── _211rqv-.mp3
    │   ├── 3m5BvQ-A.mp3
    │   ├── cx-5Hwz0.mp3
    │   ├── _IiIHYiX.mp3
    │   └── WYhWs25z.mp3
    ├── pictures
    │   ├── hcTYFi_q.jpeg
    │   ├── KJmJzwYt.jpeg
    │   ├── Sz22kOxB.jpeg
    │   └── tL2JW2rc.jpeg
    └── videos
        ├── 8Zjty7f3.mpeg
        ├── AmmkZ9r2.mpeg
        └── tJiMVwOz.mpeg

Check the S3 console to verify that the files arrived in the cloud.

While you’re there, create a new directory using the S3 console, then check the drive mount on your Linux host to see if it shows up there. It should appear instantaneously.

Note: The umask, gid and uid options used with the mount command are a work-around for cases where files and folders are created in the S3 bucket by other tools or the S3 console. Without these options, those files wouldn’t be accessible by the current user.
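
As a concrete illustration of those options: umask=027 masks the group write bit and all "other" bits, so directories come out as 750 (777 & ~027) and files as 640 (666 & ~027), owned by uid/gid 1000. You can confirm this on the mount (an added check; assumes the mount point used above and that uid 1000 is the developer user):

# expect 750 for directories and 640 for files
developer@vbox: stat -c '%a %U:%G %n' /home/developer/media-server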

Configuring the drive to mount on start-up

# move our experiment to a little more permanent place
developer@vbox: cd ~
developer@vbox: sudo umount /home/developer/media-server
developer@vbox: sudo mkdir -p /data/s3drive/media-server-backup
developer@vbox: sudo mv ~/.passwd-s3fs /etc/passwd-s3fs
developer@vbox: sudo chown root:root /etc/passwd-s3fs

# let's mount it and test it
developer@vbox: sudo s3fs -o allow_other,uid=1000,gid=1000,umask=027 dxj.media-server /data/s3drive/media-server-backup
developer@vbox: rsync -rz --delete /home/developer/media-server/ /data/s3drive/media-server-backup/

# mount on start-up
developer@vbox: echo 'dxj.media-server /data/s3drive/media-server-backup fuse.s3fs _netdev,allow_other,uid=1000,gid=1000,umask=027 0 0' | sudo tee --append /etc/fstab

Reboot your Linux host to test that the mount persists.
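
You can also exercise the new fstab entry without a full reboot; this extra check is my addition:

developer@vbox: sudo umount /data/s3drive/media-server-backup
# mount everything listed in /etc/fstab
developer@vbox: sudo mount -a
# the bucket should re-appear
developer@vbox: mount | grep s3fs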

The following commands bypass the FUSE file system and update the S3 bucket directly:

# two-way sync
developer@vbox: aws --profile=media-server-user s3 sync /home/developer/media-server s3://dxj.media-server --delete
developer@vbox: aws --profile=media-server-user s3 sync s3://dxj.media-server /home/developer/media-server --delete

# known bug: the s3 sync command does not remove empty folders
developer@vbox: find /home/developer/media-server -type d -empty | xargs rm -rf
developer@vbox: find /data/s3drive/media-server-backup -type d -empty | xargs rm -rf

Which raises the question: if all you want is a backup of your media drive, why not use the AWS CLI instead of a drive mount? 🙂
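
If you do take the CLI route, a cron entry is enough to make the backup periodic. This is a sketch for illustration; the schedule is an assumption, and the paths are the ones used above:

# crontab -e for the developer user: run the one-way backup nightly at 02:30
# (use the full path to aws if cron's PATH doesn't include it)
30 2 * * * aws --profile=media-server-user s3 sync /home/developer/media-server s3://dxj.media-server --delete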


How To Mount AWS S3 Bucket On Amazon EC2 Linux Using S3FS

In this blog, we will learn how to use S3 as a filesystem on an EC2 Linux machine. Let’s start!

1) Create an EC2 Linux instance (I have used Ubuntu in this demo). Keep everything as default, and add the user data script below in the advanced section of the wizard to install the awscli and s3fs utilities:

sudo apt-get update -y
sudo apt-get install awscli -y
sudo apt-get install s3fs -y

2) Create an IAM user for s3fs.

3) Give the user a unique name and enable programmatic access. Under Set permissions, create a new policy: select S3 as the service and include the access levels s3fs needs (listing the bucket, plus reading, writing and deleting objects). Give the policy a unique name and click Create policy. Once the policy is created, go back to the IAM tab and hit refresh so that the newly created policy is included in the list, filter by the policy name, and tick the checkbox to attach the policy to our IAM user. Hit Create user. Once the user is created, download the credentials; we are going to use them later.

4) Log in to your EC2 instance. Go to your home directory and run the commands below to create a new directory and generate some sample files:

mkdir /home/ubuntu/bucket
cd $HOME/bucket
touch test1.txt test2.txt test3.txt

The next step is to create an S3 bucket.

5) Go to the S3 service and create a new bucket. Give it a unique name and leave the rest of the settings as default; Block public access should be enabled by default. Hit Create bucket.

6) Once the bucket is created, go back to the SSH session and configure the AWS credentials for authentication using the IAM account that we created. Use the command:

aws configure

and provide the credential details that we downloaded before.

7) Now run the command below to sync the local directory with the S3 bucket:

# syntax: aws s3 sync path_on_filesystem s3://bucketname
aws s3 sync /home/ubuntu/bucket s3://test-s3fs-101
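
To preview what sync would transfer before running it for real, the CLI supports a dry-run flag (an addition to the original steps):

aws s3 sync /home/ubuntu/bucket s3://test-s3fs-101 --dryrun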


8) Create the credential file for s3fs. s3fs supports the standard AWS credentials file stored in ${HOME}/.aws/credentials. Alternatively, s3fs supports a custom passwd file. The default location for the s3fs password file is a .passwd-s3fs file in the user’s home directory (i.e. ${HOME}/.passwd-s3fs). The file should have the following content:

$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY

echo "AKIAQSCIQUH6XXYQMGDA:T5qM7rZmSaU3p/Y0xmuZyWv1/KUnT0Oc58sdCJ3t" > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs


9) Now you can run the command to mount the S3 bucket as a filesystem:

# syntax: sudo s3fs bucketname mountpath -o passwd_file=$HOME/.passwd-s3fs,nonempty,rw,allow_other,mp_umask=002,uid=$UID,gid=$UID -o url=http://s3.aws-region.amazonaws.com,endpoint=aws-region,use_path_request_style
sudo s3fs s3fs-test-101 /home/ubuntu/bucket -o passwd_file=$HOME/.passwd-s3fs,nonempty,rw,allow_other,mp_umask=002,uid=1000,gid=1000 -o url=http://s3.ca-central-1.amazonaws.com,endpoint=ca-central-1,use_path_request_style
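
To confirm the mount succeeded before relying on it (an extra check, not in the original post):

mount | grep s3fs
df -h /home/ubuntu/bucket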


11) Add an entry to fstab using the format below so that the mount persists across server reboots:

# syntax: bucketname directoryonfs fuse.s3fs _netdev,allow_other 0 0
s3fs-test-101 /home/ubuntu/bucket fuse.s3fs _netdev,allow_other 0 0

12) Now the moment of truth: go to your S3 bucket and hit refresh. You should see the files that were present on your file system.

13) Let’s now verify that it syncs properly after an object is deleted or added. Go to your S3 bucket and upload a new file, then go to your SSH session and run ls in the same directory. Eureka! The file that you just uploaded to your S3 bucket appears in your filesystem.

The same way, you can test the delete-file operation. And it works both ways, i.e. if you perform any file operation on your filesystem, it will sync to your S3 bucket as well.
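
If a mount ever misbehaves, s3fs can be run in the foreground with verbose logging, which usually surfaces credential, region or permission problems. A general troubleshooting sketch, using the bucket and path from above:

sudo umount /home/ubuntu/bucket
sudo s3fs s3fs-test-101 /home/ubuntu/bucket -o passwd_file=$HOME/.passwd-s3fs -f -o dbglevel=info -o curldbg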


References:
https://github.com/s3fs-fuse/s3fs-fuse
https://aws.amazon.com/
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

Limitations

  • random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
  • metadata operations such as listing directories have poor performance due to network latency
  • non-AWS providers may have eventual consistency, so reads can temporarily yield stale data (AWS has offered read-after-write consistency since December 2020)
  • no atomic renames of files or directories
  • no coordination between multiple clients mounting the same bucket
  • no hard links
  • inotify detects only local modifications, not external ones made by other clients or tools
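
As a concrete example of the missing atomic rename: an mv inside the mount is implemented as a server-side copy followed by a delete, so "renaming" a large file takes time proportional to its size and can fail part-way through. The path below is hypothetical:

# not atomic: s3fs copies the object to the new key, then deletes the original
time mv /mnt/s3/videos/big.mpeg /mnt/s3/videos/big-renamed.mpeg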
