Amazon S3 is the cloud storage service offered by Amazon Web Services (AWS). Amazon S3 exposes a set of web-service interfaces, on top of which many third-party commercial services and client applications have been built. This tutorial describes how to access Amazon S3 cloud storage from the Linux command line.

The best-known Amazon S3 command-line client is s3cmd, written in Python. As a simple AWS S3 command-line tool, s3cmd is designed to be run from scripted cron jobs, for example a daily backup.
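As a minimal sketch of such a cron job, the script below syncs a directory to a date-stamped prefix. The source directory /var/www and the bucket s3://my-backup-bucket are hypothetical placeholders:

```shell
#!/bin/sh
# Nightly backup sketch: /var/www and s3://my-backup-bucket are hypothetical
# placeholders -- substitute your own directory and bucket name.
BACKUP_SRC="/var/www"
BACKUP_DST="s3://my-backup-bucket/www/$(date +%Y-%m-%d)/"

# Mirror the local tree to S3, removing remote files that were deleted locally.
s3cmd sync --delete-removed "$BACKUP_SRC/" "$BACKUP_DST"
```

Installed via `crontab -e`, an entry such as `0 2 * * * /path/to/backup.sh` would run it at 02:00 every day.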

Using s3cmd


# Debian / Ubuntu
sudo apt-get install s3cmd

# RHEL / CentOS / Fedora
yum install s3cmd

# Gentoo
emerge -av s3cmd

# or install a downloaded RPM package
rpm -ivh

# or install from source
git clone
cd s3cmd
python setup.py install


Usage: s3cmd [options] COMMAND [parameters]

S3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing "buckets" and uploading, downloading and removing
"objects" from these buckets.

  -h, --help            show this help message and exit
  --configure           Invoke interactive (re)configuration tool. Optionally
                        use as '--configure s3://some-bucket' to test access
                        to a specific bucket instead of attempting to list
                        them all.
  -c FILE, --config=FILE
                        Config file name. Defaults to /home/mludvig/.s3cfg
  --dump-config         Dump current configuration after parsing config files
                        and command line options and exit.
  --access_key=ACCESS_KEY
                        AWS Access Key
  --secret_key=SECRET_KEY
                        AWS Secret Key
  -n, --dry-run         Only show what should be uploaded or downloaded but
                        don't actually do it. May still perform S3 requests to
                        get bucket listings and other information though (only
                        for file transfer commands)
  -e, --encrypt         Encrypt files before uploading to S3.
  --no-encrypt          Don't encrypt files.
  -f, --force           Force overwrite and other dangerous operations.
  --continue            Continue getting a partially downloaded file (only for
                        [get] command).
  --continue-put        Continue uploading partially uploaded files or
                        multipart upload parts.  Restarts/parts files that
                        don't have matching size and md5.  Skips files/parts
                        that do.  Note: md5sum checks are not always
                        sufficient to check (part) file equality.  Enable this
                        at your own risk.
  --upload-id=UPLOAD_ID
                        UploadId for Multipart Upload, in case you want
                        continue an existing upload (equivalent to --continue-
                        put) and there are multiple partial uploads.  Use
                        s3cmd multipart [URI] to see what UploadIds are
                        associated with the given URI.
  --skip-existing       Skip over files that exist at the destination (only
                        for [get] and [sync] commands).
  -r, --recursive       Recursive upload, download or removal.
  --check-md5           Check MD5 sums when comparing files for [sync].
  --no-check-md5        Do not check MD5 sums when comparing files for [sync].
                        Only size will be compared. May significantly speed up
                        transfer but may also miss some changed files.
  -P, --acl-public      Store objects with ACL allowing read for anyone.
  --acl-private         Store objects with default ACL allowing access for you
                        only.
  --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
                        Grant stated permission to a given amazon user.
                        Permission is one of: read, write, read_acp,
                        write_acp, full_control, all
  --acl-revoke=PERMISSION:USER_CANONICAL_ID
                        Revoke stated permission for a given amazon user.
                        Permission is one of: read, write, read_acp,
                        write_acp, full_control, all
  -D NUM, --restore-days=NUM
                        Number of days to keep restored file available (only
                        for 'restore' command).
  --delete-removed      Delete remote objects with no corresponding local file
  --no-delete-removed   Don't delete remote objects.
  --delete-after        Perform deletes after new uploads [sync]
  --delay-updates       Put all updated files into place at end [sync]
  --max-delete=NUM      Do not delete more than NUM files. [del] and [sync]
  --add-destination=ADDITIONAL_DESTINATIONS
                        Additional destination for parallel uploads, in
                        addition to last arg.  May be repeated.
  --delete-after-fetch  Delete remote objects after fetching to local file
                        (only for [get] and [sync] commands).
  -p, --preserve        Preserve filesystem attributes (mode, ownership,
                        timestamps). Default for [sync] command.
  --no-preserve         Don't store FS attributes
  --exclude=GLOB        Filenames and paths matching GLOB will be excluded
                        from sync
  --exclude-from=FILE   Read --exclude GLOBs from FILE
  --rexclude=REGEXP     Filenames and paths matching REGEXP (regular
                        expression) will be excluded from sync
  --rexclude-from=FILE  Read --rexclude REGEXPs from FILE
  --include=GLOB        Filenames and paths matching GLOB will be included
                        even if previously excluded by one of
                        --(r)exclude(-from) patterns
  --include-from=FILE   Read --include GLOBs from FILE
  --rinclude=REGEXP     Same as --include but uses REGEXP (regular expression)
                        instead of GLOB
  --rinclude-from=FILE  Read --rinclude REGEXPs from FILE
  --ignore-failed-copy  Don't exit unsuccessfully because of missing keys
  --files-from=FILE     Read list of source-file names from FILE. Use - to
                        read from stdin.
  --bucket-location=BUCKET_LOCATION
                        Datacentre to create bucket in. As of now the
                        datacenters are: US (default), EU, ap-northeast-1, ap-
                        southeast-1, sa-east-1, us-west-1 and us-west-2
  --reduced-redundancy, --rr
                        Store object with 'Reduced redundancy'. Lower per-GB
                        price. [put, cp, mv]
  --access-logging-target-prefix=LOG_TARGET_PREFIX
                        Target prefix for access logs (S3 URI) (for [cfmodify]
                        and [accesslog] commands)
  --no-access-logging   Disable access logging (for [cfmodify] and [accesslog]
                        commands)
  --default-mime-type=DEFAULT_MIME_TYPE
                        Default MIME-type for stored objects. Application
                        default is binary/octet-stream.
  -M, --guess-mime-type
                        Guess MIME-type of files by their extension or mime
                        magic. Fall back to default MIME-Type as specified by
                        --default-mime-type option
  --no-guess-mime-type  Don't guess MIME-type and use the default type
  --no-mime-magic       Don't use mime magic when guessing MIME-type.
  -m MIME/TYPE, --mime-type=MIME/TYPE
                        Force MIME-type. Override both --default-mime-type and
                        -M options.
  --add-header=NAME:VALUE
                        Add a given HTTP header to the upload request. Can be
                        used multiple times. For instance set 'Expires' or
                        'Cache-Control' headers (or both) using this option.
  --server-side-encryption
                        Specifies that server-side encryption will be used
                        when putting objects.
  --encoding=ENCODING   Override autodetected terminal and filesystem encoding
                        (character set). Autodetected: UTF-8
  --add-encoding-exts=EXTENSIONs
                        Add encoding to these comma delimited extensions i.e.
                        (css,js,html) when uploading to S3
  --verbatim            Use the S3 name as given on the command line. No pre-
                        processing, encoding, etc. Use with caution!
  --disable-multipart   Disable multipart upload on files bigger than
                        --multipart-chunk-size-mb
  --multipart-chunk-size-mb=SIZE
                        Size of each chunk of a multipart upload. Files bigger
                        than SIZE are automatically uploaded as multithreaded-
                        multipart, smaller files are uploaded using the
                        traditional method. SIZE is in Mega-Bytes, default
                        chunk size is 15MB, minimum allowed chunk size is
                        5MB, maximum is 5GB.
  --list-md5            Include MD5 sums in bucket listings (only for 'ls'
                        command).
  -H, --human-readable-sizes
                        Print sizes in human readable form (eg 1kB instead of
                        1234).
  --ws-index=WEBSITE_INDEX
                        Name of index-document (only for [ws-create] command)
  --ws-error=WEBSITE_ERROR
                        Name of error-document (only for [ws-create] command)
  --progress            Display progress meter (default on TTY).
  --no-progress         Don't display progress meter (default on non-TTY).
  --enable              Enable given CloudFront distribution (only for
                        [cfmodify] command)
  --disable             Disable given CloudFront distribution (only for
                        [cfmodify] command)
  --cf-invalidate       Invalidate the uploaded files in CloudFront. Also see
                        [cfinval] command.
  --cf-invalidate-default-index
                        When using Custom Origin and S3 static website,
                        invalidate the default index file.
  --cf-no-invalidate-default-index-root
                        When using Custom Origin and S3 static website, don't
                        invalidate the path to the default index file.
  --cf-add-cname=CNAME  Add given CNAME to a CloudFront distribution (only for
                        [cfcreate] and [cfmodify] commands)
  --cf-remove-cname=CNAME
                        Remove given CNAME from a CloudFront distribution
                        (only for [cfmodify] command)
  --cf-comment=COMMENT  Set COMMENT for a given CloudFront distribution (only
                        for [cfcreate] and [cfmodify] commands)
  --cf-default-root-object=DEFAULT_ROOT_OBJECT
                        Set the default root object to return when no object
                        is specified in the URL. Use a relative path, i.e.
                        default/index.html instead of /default/index.html or
                        s3://bucket/default/index.html (only for [cfcreate]
                        and [cfmodify] commands)
  -v, --verbose         Enable verbose output.
  -d, --debug           Enable debug output.
  --version             Show s3cmd version (1.5.0-beta1) and exit.
  -F, --follow-symlinks
                        Follow symbolic links as if they are regular files
  --cache-file=FILE     Cache FILE containing local source MD5 values
  -q, --quiet           Silence output on stdout



s3cmd --configure


  1. Your AWS S3 access key and secret key
  2. An encryption password for encrypting data transferred to and from AWS S3
  3. The path to the GPG program used for encryption (e.g. /usr/bin/gpg)
  4. Whether to use the HTTPS protocol
  5. The HTTP proxy host name and port, if you use a proxy

The configuration is saved as plain text in ~/.s3cfg, so restrict access to the file:

chmod 600 ~/.s3cfg
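For reference, the resulting ~/.s3cfg looks roughly like the sketch below. The key names are those written by `s3cmd --configure`; all values shown are placeholders:

```ini
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
gpg_command = /usr/bin/gpg
gpg_passphrase = YOUR_PASSPHRASE
use_https = True
proxy_host =
proxy_port = 0
```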

# list all objects in all buckets you own
s3cmd la

# list your buckets
s3cmd ls

# create a new bucket named icyboy
s3cmd mb s3://icyboy

# upload files to the bucket
s3cmd put 1.png 2.png 3.png s3://icyboy

# upload a file with a public-read ACL
s3cmd put --acl-public 4.png s3://icyboy
# A file uploaded with --acl-public can be opened by anyone in a browser
# via its public S3 URL.

# list the objects in the bucket
s3cmd ls s3://icyboy

# download objects
s3cmd get s3://icyboy/*.png

# delete objects
s3cmd del s3://icyboy/*.png

# show information about the bucket
s3cmd info s3://icyboy

# upload with client-side GPG encryption; get decrypts automatically
s3cmd -e put encrypt.png s3://icyboy
s3cmd get s3://icyboy/encrypt.png

# remove the (empty) bucket
s3cmd rb s3://icyboy

# show disk usage of the bucket
s3cmd du s3://icyboy

# copy or move an object between buckets
s3cmd cp s3://icyboy/1.txt s3://xupeng/1.txt_copy
s3cmd mv s3://icyboy/1.txt s3://xupeng/1.txt_copy

# Without a trailing slash, the directory itself is recreated at the destination:
xupeng@icyboy ~ $ s3cmd put -r dir1 s3://icyboy/some/path/
dir1/file1-1.txt -> s3://icyboy/some/path/dir1/file1-1.txt  [1 of 2]
dir1/file1-2.txt -> s3://icyboy/some/path/dir1/file1-2.txt  [2 of 2]

# With a trailing slash, only the directory's contents are uploaded:
xupeng@icyboy ~ $ s3cmd put -r dir1/ s3://icyboy/some/path/
dir1/file1-1.txt -> s3://icyboy/some/path/file1-1.txt  [1 of 2]
dir1/file1-2.txt -> s3://icyboy/some/path/file1-2.txt  [2 of 2]

# sync uploads only files that are new or have changed:
xupeng@icyboy ~ $ s3cmd sync  ./  s3://icyboy/some/path/
dir2/file2-1.log -> s3://icyboy/some/path/dir2/file2-1.log  [1 of 2]
dir2/file2-2.txt -> s3://icyboy/some/path/dir2/file2-2.txt  [2 of 2]

# --delete-removed also deletes remote objects that no longer exist locally;
# --dry-run only reports what would be done:
xupeng@icyboy ~ $ s3cmd sync --dry-run --delete-removed ~/demo/ s3://icyboy/some/path/
delete: s3://icyboy/some/path/file1-1.txt
delete: s3://icyboy/some/path/file1-2.txt
upload: ~/demo/dir1/file1-2.txt -> s3://icyboy/some/path/dir1/file1-2.txt
WARNING: Exiting now because of --dry-run

# --skip-existing additionally skips files already present at the destination:
xupeng@icyboy ~ $ s3cmd sync --dry-run --skip-existing --delete-removed ~/demo/ s3://icyboy/some/path/
delete: s3://icyboy/some/path/file1-1.txt
delete: s3://icyboy/some/path/file1-2.txt
WARNING: Exiting now because of --dry-run

# --include overrides an earlier --exclude for paths matching its pattern:
xupeng@icyboy ~ $ s3cmd sync --dry-run --exclude '*.txt' --include 'dir2/*' . s3://icyboy/demo/
exclude: dir1/file1-1.txt
exclude: dir1/file1-2.txt
exclude: file0-2.txt
upload: ./dir2/file2-1.log -> s3://icyboy/demo/dir2/file2-1.log
upload: ./dir2/file2-2.txt -> s3://icyboy/demo/dir2/file2-2.txt
upload: ./file0-1.msg -> s3://icyboy/demo/file0-1.msg
upload: ./file0-3.log -> s3://icyboy/demo/file0-3.log
WARNING: Exiting now because of --dry-run