Archiving to S3, how to diagnose failure


#1

Hi

I have configured an application to archive to S3.
Connection/permissions are accepted in the Archive settings.
The application's devices are active and the data is more than 31 days old.

This was set up on Thursday (25th).

However, no data has been archived.

A couple of questions:

  1. How would we diagnose this failure? Is there a log somewhere?
  2. How can we explicitly test the setup? I would imagine using whatever mechanism you use for archiving data older than 30 days.

Thanks

Tim


#2

Hi Tim,

I’m sorry to hear you have been having trouble with archiving your devices’ data. I hope I can help fix the issue. I did check our logs and found an archiving job that has been erroring with an “Access Denied” error. Is this your application ID, 59b88b9aec028e00079ef592? If this is not your application then there are a few other places I can look, but for the rest of this comment I’m assuming it is yours.

I would double-check the permissions you have for the account on AWS. When you save the archive configuration, it checks that the bucket exists and that your user can access it. However, it does not specifically check that the user can write to it. Attached is a screenshot of the AWS permissions; the only one you should need is the Write Objects permission. The screenshot shows a user with full permissions, but that is not necessary for archiving to work.
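In bucket-policy form, a minimal write-only grant might look something like this. The bucket name and user ARN are placeholders, and this is a sketch rather than an exact recommended policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:user/<archive-user>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
```

Note that s3:PutObject acts on objects, so its resource is the bucket ARN with /* appended rather than the bucket ARN itself.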

As far as explicitly testing the setup goes, we will be adding a feature to archiving that lets the user immediately archive all device data currently older than 31 days. This will kick off right away and begin archiving data. We will also be adding a feature to email users if an archive fails to complete. If you have any other suggestions on how to make this feature better, please let us know.

Let me know if this fixes your problem.

Thanks,
Erin


#3

Hi Erin

Sorry, no, that isn’t our application ID. The application in question is 580ea412c87a3f01002e5e78.

The one you listed is one of ours too, but all of its data is over 30 days old, so it is also waiting to archive the old data.
I did have a number of permission issues when I first tried to set it up.

It wouldn’t surprise me if the permissions are wrong; however, I have an explicit bucket policy in place with PutObject permissions. Until I built the policy I couldn’t save the changes in Archive settings.

Some sort of log, or an email as you suggest, would be useful. An explicit test button which creates an empty file and reports a result (or error) would be a good way of testing on the spot, rather than waiting until the archive occurs.

Thanks

Tim


#4

The explicit bucket policy that I have may not be sufficient.

Could you list the specific permissions required for a policy?

Thanks

Tim


#5

Having said that, the current policy is in fact “*”.


#6

Tim,

The application ID you mentioned is also erroring with Access Denied on our side when it tries to write to the bucket. When you say that your policy is “*”, do you mean the policy looks something like this?

{
  "Statement": [
    {
        "Action": "s3:*",
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::your-bucket-name",
            "arn:aws:s3:::your-bucket-name/*"
        ]
    }
  ]
}

Having both of those resource lines is surprisingly important.
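If you want a quick local sanity check before saving a policy, something like this (just a sketch, not part of our product or of any AWS tooling) will confirm that both resource forms are present:

```python
import json

def covers_bucket_and_objects(policy_text, bucket):
    """Return True if the policy's statements grant on both the bucket
    ARN itself and the bucket ARN with "/*" (the objects inside it)."""
    policy = json.loads(policy_text)
    resources = set()
    for statement in policy.get("Statement", []):
        resource = statement.get("Resource", [])
        # "Resource" may be a single string or a list of strings.
        if isinstance(resource, str):
            resource = [resource]
        resources.update(resource)
    bucket_arn = "arn:aws:s3:::" + bucket
    return bucket_arn in resources and bucket_arn + "/*" in resources
```

Running it against a policy whose only resource is the bare bucket ARN returns False, which is exactly the situation that produces Access Denied here.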

And as Erin said, we are definitely going to be putting some work into making this feature easier to use and debug!


#7

Hi

OK, this is the current policy, built using https://awspolicygen.s3.amazonaws.com/policygen.html:

{
    "Version": "2012-10-17",
    "Id": "Policy1516840804715",
    "Statement": [
        {
            "Sid": "Stmt1516840802035",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<SOME ID>:user/<some username>"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<my-bucketname>"
        }
    ]
}

#8

I have just added the additional “arn:aws:s3:::your-bucket-name/*” resource to the policy.
That is probably where I have gone wrong.
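For reference, with the extra resource line added, the policy now reads (same placeholders as before):

```json
{
    "Version": "2012-10-17",
    "Id": "Policy1516840804715",
    "Statement": [
        {
            "Sid": "Stmt1516840802035",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<SOME ID>:user/<some username>"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<my-bucketname>",
                "arn:aws:s3:::<my-bucketname>/*"
            ]
        }
    ]
}
```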

Thanks

T


#9

I just manually ran the daily archive for 580ea412c87a3f01002e5e78, and from our side it looks like it was successful. Are you seeing the archived data on your end?


#10

Yes.

I think my configuration should not have a directory with a leading /.
It’s currently /devices.

But it has created one with a blank name; see the two ‘//’.

e.g.

Amazon S3 > bucket name //devices/2017-12-30T00:00:00.000Z

Will change that at my end and see what happens.

Thanks for the help.

Tim


#11

Great! Let me know if you need me to manually trigger it again. Among the other changes Erin has already talked about above, we are going to add the minimal required AWS policy to the documentation, to help make sure that other people don’t run into the same issues that you have run into.


#12

Hi Michael

If you can trigger it again that would be great; it saves waiting until later.

And yep some example policies would be great in the docs.

Cheers

T


#13

No problem, triggered again, and finished again with no errors.


#14

That worked as expected.

I clearly shouldn’t include a leading ‘/’ :wink:

Thanks again for all your help.

Cheers

T


#15

Excellent! I think we will add a note in the documentation about the effect of leading slashes as well :slight_smile:
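For anyone curious why the leading slash produced the double slash: S3 keys are flat strings, and the console simply renders “/”-separated segments as folders. Roughly (the actual join logic on our side is an assumption here, not our real code):

```python
def archive_key(directory, timestamp):
    # Joining with "/" means a directory that already starts with "/"
    # yields an empty first path segment, which the S3 console renders
    # as a folder with a blank name ("//...").
    return directory + "/" + timestamp

archive_key("/devices", "2017-12-30T00:00:00.000Z")  # '/devices/2017-12-30T00:00:00.000Z'
archive_key("devices", "2017-12-30T00:00:00.000Z")   # 'devices/2017-12-30T00:00:00.000Z'
```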