[amazon-s3] Getting Access Denied when calling the PutObject operation with bucket-level permission

I followed the example at http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3 on how to grant a user access to just one bucket.

I then tested the config using the W3 Total Cache WordPress plugin. The test failed.

I also tried reproducing the problem using

aws s3 cp --acl=public-read --cache-control='max-age=604800, public' ./test.txt s3://my-bucket/

and that failed with

upload failed: ./test.txt to s3://my-bucket/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied

Why can't I upload to my bucket?



I was getting the same error message because of a mistake I made: make sure you use a correct S3 URI, such as s3://my-bucket-name/

(assuming my-bucket-name is at the root of your S3, obviously)

I insist on that because when you copy-paste the bucket address from your browser you get something like https://s3.console.aws.amazon.com/s3/buckets/my-bucket-name/?region=my-aws-region&tab=overview

Thus I made the mistake of using s3://buckets/my-bucket-name, which raises:

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
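
For anyone hitting this from Python, here is a minimal boto3 sketch of the same upload. The point is that the bucket name is passed bare, with no buckets/ prefix (the file, bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# The bucket is identified by its plain name; it is never prefixed with
# "buckets/" the way the console URL suggests.
s3.upload_file("./test.txt", "my-bucket-name", "test.txt")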


I was able to solve the issue by granting full S3 access to Lambda: make a new role for Lambda and attach a policy with full S3 access (such as the AWS-managed AmazonS3FullAccess policy) to it.

Hope this will help.
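
If you would rather script that step, here is a rough boto3 sketch; the role name is a placeholder, and a narrower policy than full access is safer in production:

import boto3

iam = boto3.client("iam")

# Attach the AWS-managed full S3 access policy to the Lambda execution role.
# "my-lambda-role" is a placeholder for your role's name.
iam.attach_role_policy(
    RoleName="my-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)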


Similar to another answer here (except I was using admin credentials), I was trying to get S3 uploads to work with a large 50 MB file.

Initially my error was:

An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

I raised the multipart_threshold above 50 MB:

aws configure set default.s3.multipart_threshold 64MB
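
If you are using boto3 instead of the CLI, the equivalent knob is TransferConfig; a minimal sketch with placeholder file and bucket names:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Raise the multipart threshold to 64 MB so a ~50 MB file is sent with a
# single PutObject call instead of CreateMultipartUpload.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)
s3.upload_file("./big-file.bin", "my-bucket", "big-file.bin", Config=config)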

and I got:

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I checked the bucket's public access settings and everything was allowed. It turned out that public access can also be blocked at the account level, for all S3 buckets: S3 can block public ACLs at the account level.
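
You can check that account-level setting from code as well; a small boto3 sketch, assuming a placeholder account ID (note the setting lives in the S3 Control API, not the regular S3 API):

import boto3

# Account-level "Block Public Access" is read through the S3 Control API.
# "123456789012" is a placeholder for your 12-digit account ID.
s3control = boto3.client("s3control")
response = s3control.get_public_access_block(AccountId="123456789012")
print(response["PublicAccessBlockConfiguration"])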


I had a similar issue uploading to an S3 bucket protected with KMS encryption. I have a minimal policy that allows the addition of objects under a specific S3 key prefix.

I needed to add the following KMS permissions to my policy to allow the role to put objects in the bucket. (This might be slightly more than is strictly required.)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:ListKeys",
                "kms:GenerateRandom",
                "kms:ListAliases",
                "s3:PutAccountPublicAccessBlock",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:HeadBucket"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "kms:ImportKeyMaterial",
                "kms:ListKeyPolicies",
                "kms:ListRetirableGrants",
                "kms:GetKeyPolicy",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:ListResourceTags",
                "kms:ReEncryptFrom",
                "kms:ListGrants",
                "kms:GetParametersForImport",
                "kms:TagResource",
                "kms:Encrypt",
                "kms:GetKeyRotationStatus",
                "kms:GenerateDataKey",
                "kms:ReEncryptTo",
                "kms:DescribeKey"
            ],
            "Resource": "arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                <The S3 actions>
            ],
            "Resource": [
                "arn:aws:s3:::<MY-BUCKET-NAME>",
                "arn:aws:s3:::<MY-BUCKET-NAME>/<MY-BUCKET-KEY>/*"
            ]
        }
    ]
}
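
To exercise a policy like that, the upload has to target the allowed key prefix and reference the same CMK. A hedged boto3 sketch, reusing the placeholders from the policy above:

import boto3

s3 = boto3.client("s3")

# Upload under the key prefix the policy allows, asking S3 to encrypt the
# object with the same CMK the policy grants access to.
s3.put_object(
    Bucket="<MY-BUCKET-NAME>",
    Key="<MY-BUCKET-KEY>/example.txt",
    Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>",
)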

I encountered the same issue. My bucket was private and had KMS encryption. I was able to resolve it by putting additional KMS permissions in the role. The following is the bare minimum set of permissions needed.

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Sid": "AllowAttachmentBucketWrite",
        "Effect": "Allow",
        "Action": [
            "s3:PutObject",
            "kms:Decrypt",
            "s3:AbortMultipartUpload",
            "kms:Encrypt",
            "kms:GenerateDataKey"
        ],
        "Resource": [
            "arn:aws:s3:::bucket-name/*",
            "arn:aws:kms:kms-key-arn"
        ]
    }
  ]
}

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-encryption-kms-key/


To answer my own question:

The example policy granted PutObject access, but I also had to grant PutObjectAcl access.

I had to change

"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"

from the example to:

"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"

You also need to make sure your bucket is configured to let clients set a publicly-accessible ACL by unticking these two boxes:

[Screenshot: the two ACL-related checkboxes under the bucket's "Block public access" settings, unticked]
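
For reference, this is the kind of upload that needs s3:PutObjectAcl on top of s3:PutObject; a small boto3 sketch with placeholder names:

import boto3

s3 = boto3.client("s3")

# Setting a canned ACL at upload time is what requires the extra
# s3:PutObjectAcl permission.
s3.upload_file(
    "./test.txt",
    "my-bucket",
    "test.txt",
    ExtraArgs={"ACL": "public-read"},
)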


If you have set public access for the bucket and it is still not working, edit the bucket policy and paste the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketnamehere",
                "arn:aws:s3:::yourbucketnamehere/*"
            ],
            "Effect": "Allow",
            "Principal": "*"
        }
    ]
}
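
If you would rather apply that policy from code than from the console, a boto3 sketch of the same thing (the bucket name is the same placeholder):

import json
import boto3

# The same public read/write policy as above, applied programmatically.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject",
                   "s3:GetObjectAcl", "s3:DeleteObject"],
        "Resource": ["arn:aws:s3:::yourbucketnamehere",
                     "arn:aws:s3:::yourbucketnamehere/*"],
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="yourbucketnamehere", Policy=json.dumps(policy)
)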

I was having a similar problem. I was not using the ACL stuff, so I didn't need s3:PutObjectAcl.

In my case, I was doing (in Serverless Framework YML):

- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName"

Instead of:

- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName/*"

The fix adds /* to the end of the bucket ARN, because s3:PutObject applies to objects inside the bucket rather than to the bucket resource itself.

Hope this helps.


For me, it was expired auth keys. I generated new ones and boom, it worked.


If you have specified your own customer-managed KMS key for S3 encryption, you also need to provide the flag --server-side-encryption aws:kms, for example:

aws s3api put-object --bucket bucket --key objectKey --body /path/to/file --server-side-encryption aws:kms

If you do not add the flag --server-side-encryption aws:kms, the CLI displays an AccessDenied error.
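
To confirm the setting actually took effect, you can inspect the stored object's encryption metadata; a small boto3 sketch reusing the bucket/objectKey placeholders above:

import boto3

s3 = boto3.client("s3")

# head_object reports how the stored object is encrypted, so you can verify
# that aws:kms (and the expected key) was applied.
resp = s3.head_object(Bucket="bucket", Key="objectKey")
print(resp.get("ServerSideEncryption"), resp.get("SSEKMSKeyId"))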


In case this helps out anyone else: in my case, I was using a CMK (it worked fine using the default aws/s3 key).

I had to go into my encryption key definition in IAM and add the programmatic user that boto3 was authenticating as to the list of users that "can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS".
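
If you want to check who can use the key before editing it, you can dump its key policy; a boto3 sketch with a placeholder key ID:

import boto3

kms = boto3.client("kms")

# "default" is the name of the single key policy a KMS key carries.
# "<MY-KEY-GUID>" is a placeholder for the key's ID or ARN.
policy = kms.get_key_policy(KeyId="<MY-KEY-GUID>", PolicyName="default")
print(policy["Policy"])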


I was banging my head against a wall trying to get S3 uploads to work with large files. Initially my error was:

An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

Then I tried copying a smaller file and got:

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I could list objects fine but I couldn't do anything else even though I had s3:* permissions in my Role policy. I ended up reworking the policy to this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "*"
        }
    ]
}

Now I'm able to upload any file. Replace my-bucket with your bucket name. I hope this helps somebody else that's going through this.


Error: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I solved the issue by passing the ExtraArgs parameter, as PutObjectAcl is disabled by company policy:

s3_client.upload_file('./local_file.csv', 'bucket-name', 'path', ExtraArgs={'ServerSideEncryption': 'AES256'})