[amazon-web-services] AWS S3 - How to fix 'The request signature we calculated does not match the signature' error?

I have searched on the web for over two days now, and probably have looked through most of the online documented scenarios and workarounds, but nothing worked for me so far.

I am on AWS SDK for PHP V2.8.7 running on PHP 5.3.

I am trying to connect to my S3 bucket with the following code:

// Create an `Aws` object using a configuration file
$aws = Aws::factory('config.php');

// Get the client from the service locator by namespace
$s3Client = $aws->get('s3');

$bucket = "xxx";
$keyname = "xxx";

try {
    $result = $s3Client->putObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'Body'   => 'Hello World!'
    ));
    $file_error = false;
} catch (Exception $e) {
    $file_error = true;
    echo $e->getMessage();
    die();
}

My config.php file is as follows:

<?php

return array(
    // Bootstrap the configuration file with AWS specific features
    'includes' => array('_aws'),
    'services' => array(
        // All AWS clients extend from 'default_settings'. Here we are
        // overriding 'default_settings' with our default credentials and
        // providing a default region setting.
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'key',
                    'secret' => 'secret'
                )
            )
        )
    )
);

It is producing the following error:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I've already checked my access key and secret at least 20 times, generated new ones, and used different methods to pass in the information (e.g. via a profile and by including the credentials in code), but nothing is working at the moment.

This question is related to: amazon-web-services, amazon-s3, aws-php-sdk

The answers:


I don't know if anyone else came to this issue while trying to test the generated URL in a browser, but if you are using Postman and copy the generated AWS URL from the Raw tab, the escaped backslashes in it will produce the above error.

Use the Pretty tab to copy and paste the URL to see whether it actually works.

I ran into this issue recently, and this solved it. It is for testing purposes, to see if you can actually retrieve the data through the URL.

This answer is a reference for those who try to generate a download or temporary link from AWS, or generally generate a URL from AWS to use.


I had to set

Aws.config.update({
  credentials: Aws::Credentials.new(access_key_id, secret_access_key)
})

beforehand with the Ruby AWS SDK v2 (there is probably something similar to this in the other languages as well)
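For instance, a minimal boto3 (Python) sketch of the same idea, setting explicit credentials on the default session; the two variables are placeholders for your own values:

import boto3

# Placeholders: substitute your actual credentials
access_key_id = 'AKIA...'
secret_access_key = '...'

# Set explicit credentials on the default session (boto3's rough
# equivalent of Aws.config.update in the Ruby SDK)
boto3.setup_default_session(
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
)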


In my case, I was using "S3" (uppercase) as the service name when making requests with Postman's AWS Signature authorization method; the service name must be lowercase ("s3").


This mostly happens when you take a secret key containing a + sign and pass it to the Elastic client.

e.g. Secret Key: ABCW1233+OxMMMMMMM8x

While configuring the client, you should only pass ABCW1233 (the part before the + sign).


In a previous version of the aws-php-sdk, prior to the deprecation of the S3Client::factory() method, you were allowed to place part of the file path, or Key as it is called in the S3Client->putObject() parameters, in the Bucket parameter. I had a file manager in production use, built on the v2 SDK. Since the factory method still worked, I did not revisit this module after updating to ~3.70.0. Today I spent the better part of two hours debugging why I had started receiving this error, and it ended up being due to the parameters I was passing (which used to work):

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures/catsinhats',
    'Key' => 'whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

I had to move the catsinhats portion of my bucket/key path to the Key parameter, like so:

$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
]);
$result = $s3Client->putObject([
    'Bucket' => 'awesomecatpictures',
    'Key' => 'catsinhats/whitecats/white_cat_in_hat1.png',
    'SourceFile' => '/tmp/asdf1234'
]);

What I believe is happening is that the Bucket name is now being URL-encoded. After further inspection of the exact message I was receiving from the SDK, I found this:

Error executing PutObject on https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png

AWS HTTP error: Client error: PUT https://s3.amazonaws.com/awesomecatpictures%2Fcatsinhats/whitecats/white_cat_in_hat1.png resulted in a 403 Forbidden

This shows that the / I provided in my Bucket parameter had been run through urlencode() and become %2F.

The way the signature works is fairly complicated, but the issue boils down to this: the bucket and key are used to generate the encrypted signature. If they do not match exactly on both the calling client and within AWS, the request will be denied with a 403. The error message does actually point out the issue:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

So, my Key was wrong because my Bucket was wrong.


In my case (Python) it failed because I had these two lines of code in the file, inherited from older code; removing them resolved the error:

import http.client

# Forces HTTP/1.0, a leftover from legacy code that breaks request signing
http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'


After debugging and spending a lot of time on this, in my case the issue was with the access_key_id and secret_access_key. Double-check your credentials, or generate new ones if possible, and make sure you are passing them in the params.


I got this error while uploading a document to CloudSearch through the Java SDK. The issue was a special character in the document to be uploaded. The error "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method." is very misleading.


In Java, I was getting the same error. After spending four hours debugging, I found that the problem was in the metadata of the S3 objects: there was a space in the Cache-Control value set on the S3 files. This space was allowed in version 1.6.* of the SDK, but it is disallowed in 1.11.*, and it was causing the signature mismatch error.


Just to add to the many different ways this can show up:

If you are using Safari on iOS and you are connected to the Safari Technology Preview console, you will see the same problem. If you disconnect from the console, the problem goes away.

Of course, it makes troubleshooting other issues difficult, but it is a 100% repro.

I am trying to figure out what I can change in STP to stop it from doing this, but have not found it yet.


Another possible issue might be that the metadata values contain non-US-ASCII characters. For me it helped to URL-encode the values when adding them to the put request:

request.Metadata.Add(AmzMetaPrefix + "artist", HttpUtility.UrlEncode(song.Artist));
request.Metadata.Add(AmzMetaPrefix + "title", HttpUtility.UrlEncode(song.Title));

This error also seems to occur if there is a space before or after your secret key.
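If you load credentials from the environment or a file, a hedged one-line safeguard is to strip stray whitespace (the variable name here is illustrative):

import os

# Strip accidental leading/trailing whitespace around the secret
secret_access_key = os.environ['AWS_SECRET_ACCESS_KEY'].strip()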


I got this when I had quotes around the key in ~/.aws/credentials.

aws_secret_access_key = "KEY"
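For reference, the working form keeps the value unquoted in ~/.aws/credentials (the key below is a placeholder):

[default]
aws_secret_access_key = KEY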


Weirdly, I previously had a different error: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. An answer on Stack Overflow said to add AWS_S3_REGION_NAME = 'eu-west-2' (your region) and AWS_S3_SIGNATURE_VERSION = 's3v4'.

After doing that, the previous error cleared, but I ended up with this signature error again. I searched for answers until I removed AWS_S3_SIGNATURE_VERSION = 's3v4'; then it worked. I am placing this here in case it helps someone. I am using Django, by the way.
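For context, a minimal sketch of the relevant settings, assuming django-storages (the region value is just an example):

# settings.py
AWS_S3_REGION_NAME = 'eu-west-2'     # your bucket's region
# AWS_S3_SIGNATURE_VERSION = 's3v4'  # adding this cleared the AWS4-HMAC-SHA256
                                     # error, but removing it again resolved the
                                     # signature mismatch in this case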


For me, I used axios, and by default it sends the header

content-type: application/x-www-form-urlencoded

so I changed it to send:

content-type: application/octet-stream

and I also had to add this Content-Type to the AWS signature params:

const params = {
    Bucket: bucket,
    Key: key,
    Expires: expires,
    ContentType: 'application/octet-stream'
}

const s3 = new AWS.S3()
s3.getSignedUrl('putObject', params)

I just experienced this uploading an image to S3 using the AWS SDK with React Native. It turned out to be caused by the ContentEncoding parameter.

Removing that parameter "fixed" the issue.


I got this error with the wrong credentials. I think there were invisible characters when I pasted them originally.


I was getting the same error for the following reason:

I had entered the right credentials, but via copy-paste, which can insert junk characters. I entered them manually, ran the code, and now it is working fine.


I had the same error in Node.js, but adding signatureVersion to the S3 constructor helped me:

const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
});

The issue in my case was the API Gateway URL used to configure Amplify, which had an extra slash at the end...

The queried URL looked like https://....amazonaws.com/myapi//myendpoint. I removed the extra slash in the configuration and it worked.

Not the most explicit error message of my life.


I solved this issue by adding apiVersion inside AWS.S3(); then it works perfectly for S3 signed URLs.

Change from

var s3 = new AWS.S3();

to

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

For more detailed examples, refer to this AWS Doc SDK example: https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javascript/example_code/s3/s3_getsignedurl.js


I had a similar error, but for me it seemed to be caused by re-using an IAM user to work with S3 in two different Elastic Beanstalk environments. I treated the symptom by creating an identically permissioned IAM user for each environment and that made the error go away.


In my case I was calling s3request.promise().then() incorrectly, which caused two executions of the request when only one call was made.

What I mean is that I was iterating through 6 objects, but 12 requests were made (you can check by logging to the console or debugging the network tab in the browser).

Since the timestamp of the second, unwanted request did not match the signature of the first one, this produced the issue.


I got this error while trying to copy an object. I fixed it by encoding the copySource. This is actually described in the method documentation:

Params: copySource – The name of the source bucket and key name of the source object, separated by a slash (/). Must be URL-encoded.

CopyObjectRequest objectRequest = CopyObjectRequest.builder()
                .copySource(URLEncoder.encode(bucket + "/" + oldFileKey, "UTF-8"))
                .destinationBucket(bucket)
                .destinationKey(newFileKey)
                .build();

If none of the other mentioned solutions works for you, then try running

aws configure

This command walks you through a set of prompts asking for your keys, region, and output format.

Hope this helps!
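The prompts look like this (the values shown are AWS's documented example placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json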


For Python, set signature_version to s3v4:

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    aws_access_key_id='AKIAIO5FODNN7EXAMPLE',
    aws_secret_access_key='ABCDEF+c2L7yXeGvUyrPgYsDnWRRC1AYEXAMPLE',
    config=Config(signature_version='s3v4')
)

In my case I was using s3.getSignedUrl('getObject') when I needed to be using s3.getSignedUrl('putObject') (because I'm using a PUT to upload my file), which is why the signatures didn't match.
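The same pitfall exists in boto3; here is a hedged Python sketch (bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')
params = {'Bucket': 'my-bucket', 'Key': 'my-key'}

# Signs a GET: an HTTP PUT sent to this URL fails the signature check
get_url = s3.generate_presigned_url('get_object', Params=params, ExpiresIn=3600)

# Signs a PUT: use this one when uploading with an HTTP PUT
put_url = s3.generate_presigned_url('put_object', Params=params, ExpiresIn=3600)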


I had the same problem when I tried to copy an object whose key contained some UTF-8 characters. Below is a JS example:

var s3 = new AWS.S3();

s3.copyObject({
    Bucket: 'somebucket',
    CopySource: 'path/to/Weird_file_name_ðO´pi´u.jpg',
    Key: 'destination/key.jpg',
    ACL: 'authenticated-read'
}, cb);

Solved by encoding the CopySource with encodeURIComponent()


Most of the time this happens because of a wrong secret key (AWS_SECRET_ACCESS_KEY). Please cross-verify your AWS_SECRET_ACCESS_KEY. Hope it will work...


My access key had some special characters in it that were not properly escaped.

I didn't check for special characters when I copy/pasted the keys. It tripped me up for a few minutes.

A simple backslash fixed it. Example (not my real access key obviously):

secretAccessKey: 'Gk/JCK77STMU6VWGrVYa1rmZiq+Mn98OdpJRNV614tM'

becomes

secretAccessKey: 'Gk\/JCK77STMU6VWGrVYa1rmZiq\+Mn98OdpJRNV614tM'


I encountered this in a Docker image, with a non-AWS S3 endpoint, when using the latest awscli version available to Debian stretch, i.e. version 1.11.13.

Upgrading to CLI version 1.16.84 resolved the issue.

To install the latest version of the CLI with a Dockerfile based on a Debian stretch image, instead of:

RUN apt-get update
RUN apt-get install -y awscli
RUN aws --version

Use:

RUN apt-get update
RUN apt-get install -y python-pip
RUN pip install awscli
RUN aws --version

In my case the bucket name was wrong: it included the first part of the key (bucketxxx/keyxxx). There was nothing wrong with the signature itself.


In my case I parsed an S3 url into its components.

For example:

Url:    s3://bucket-name/path/to/file

Was parsed into:

Bucket: bucket-name
Path:   /path/to/file

Having the path part contain a leading '/' caused the request to fail.
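A hedged Python sketch of parsing that avoids the leading slash (urlparse puts the bucket in netloc and leaves the key in path with a leading '/'):

from urllib.parse import urlparse

url = urlparse('s3://bucket-name/path/to/file')
bucket = url.netloc           # 'bucket-name'
key = url.path.lstrip('/')    # 'path/to/file', leading '/' removed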


I had the same issue; the problem was that I imported the wrong environment variable, which meant my secret key for AWS was wrong. Based on all the answers here, I would verify that your access key ID and secret key are right and contain no additional characters.


I had the same issue. I had the default method, PUT, set when defining the pre-signed URL, but I was trying to perform a GET. The error was due to the method mismatch.


Generating a fresh access key worked for me.


I solved this issue by setting environment variables.

export AWS_ACCESS_KEY=
export AWS_SECRET_ACCESS_KEY=

In IntelliJ + py.test, I set environment variables with [Run] > [Edit Configurations] > [Configuration] > [Environment] > [Environment variables]


Like others have said, I had this exact same problem, and it turned out to be related to the password / access secret. I had generated a password for my S3 user that was not valid, and nothing informed me of this. When trying to connect with that user, it gave this error. It doesn't seem to like certain (or all) symbols in passwords (at least for Minio).


In my case I had to wait for a couple of hours between uploading files into the bucket and generating pre-signed URLs for them.