
Add Google Storage as a storage backend #75

Merged: 7 commits merged into advancedtelematic:master on Jun 4, 2019

Conversation

@doanac (Contributor) commented Apr 4, 2019

This works with a Google Storage project configured with S3
interoperability.

Signed-off-by: Andy Doan andy@foundries.io

@doanac (Contributor, Author) commented Apr 4, 2019

This is related to issue #74. It's very lightly tested, as I'm hoping to get some feedback from people who understand this code base and Scala better. However, I was able to push blobs to Google Storage using the getting-started steps in your README.md.

@xaviergully requested a review from simao on April 4, 2019 17:43
@simao (Contributor) commented Apr 9, 2019

Thanks for your PR. Yes, I think we could merge something like this if we added some more tests.

Can we use this only through config, by specifying a different endpoint URL for GS?

@doanac (Contributor, Author) commented Apr 9, 2019

@simao - it should be possible. I don't actually know Scala, so this was just the most obvious way to construct the S3 client. I'll try to carve out some time this week to make it work via an endpoint config option.
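
Roughly what I have in mind, as an untested sketch; buildS3Client and its parameters are placeholders rather than actual treehub code, and it assumes the AWS Java SDK (com.amazonaws) S3 client builder:

```scala
import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder
import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}

object S3ClientSketch {
  // Placeholder helper: build an S3 client that talks either to AWS or to an
  // S3-compatible service such as Google Storage, depending on config.
  def buildS3Client(accessKey: String, secretKey: String,
                    region: String, endpointUrl: Option[String]): AmazonS3 = {
    val builder = AmazonS3ClientBuilder.standard()
      .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))

    endpointUrl match {
      case Some(url) =>
        // A non-AWS endpoint (e.g. https://storage.googleapis.com) still needs a
        // signing region, so it is passed alongside the custom endpoint URL.
        builder.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region)).build()
      case None =>
        builder.withRegion(region).build()
    }
  }
}
```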

doanac added 2 commits April 25, 2019 08:55
Services like Google Storage expose S3-compatible APIs that can be accessed with a non-AWS URL.

Signed-off-by: Andy Doan <andy@foundries.io>
Signed-off-by: Andy Doan <andy@foundries.io>
@doanac (Contributor, Author) commented Apr 25, 2019

I've done a new force push that makes the configuration of this almost identical to what I've done for the ota-tuf reposerver. I wasn't able to run the unit tests properly and I'm not really sure of the best way to approach testing here. Feel free to just copy my idea and do it differently if you'd like.

@simao (Contributor) commented Apr 25, 2019

@doanac I pushed some commits to make the tests green.

I don't have a Google account to run these tests with valid credentials. Can you run them with valid Google credentials?

You need to set these env variables:

export TREEHUB_AWS_ACCESS_KEY="your access key"
export TREEHUB_AWS_SECRET_KEY="your secret key"
export TREEHUB_AWS_BUCKET_ID="your bucket id"
export TREEHUB_S3_ENDPOINTURL="google endpoint url"

and then run sbt it:test; this will run the integration tests against the Google endpoints.

I ran them with valid AWS credentials and they are green for us.
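
For illustration only, one way a test helper could pick these variables up; StorageTestConfig is not actual treehub code, just a sketch of the shape:

```scala
// Illustrative only: gather the integration-test settings from the environment.
// The key names match the variables listed above.
final case class StorageTestConfig(accessKey: String,
                                   secretKey: String,
                                   bucketId: String,
                                   endpointUrl: Option[String])

object StorageTestConfig {
  def fromEnv(): StorageTestConfig = StorageTestConfig(
    accessKey = sys.env("TREEHUB_AWS_ACCESS_KEY"),   // fails fast if unset
    secretKey = sys.env("TREEHUB_AWS_SECRET_KEY"),
    bucketId  = sys.env("TREEHUB_AWS_BUCKET_ID"),
    // When unset or empty, the client falls back to plain AWS S3.
    endpointUrl = sys.env.get("TREEHUB_S3_ENDPOINTURL").filter(_.nonEmpty)
  )
}
```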

@doanac (Contributor, Author) commented Apr 25, 2019

@simao - I get a couple of failures from AWS assumptions:

[info] ObjectResourceIntegrationSpec:
[info] - POST with with x-ats-accept-redirect and size creates a new blob when uploading application/octet-stream directly *** FAILED ***
[info]   "https://andy-treehub-test.storage.googleapis.com/347b53d4593fb709ecbe04bee879f7376a36a7fd8e756ce6c2187c53dc31c139/a8/51ec4eed4f1987794c19e71add9f49898b29d1c11292a30cd89d64fc823d0b.commit?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20190425T161455Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3599&X-Amz-Credential=GOOGIQOQXOYO4CHMDY4EEW5M%2F20190425%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Signature=5d24855ea2afecda3c7cef81abb6b581868c893c59e984ed68ff5afcc9e4e415" did not include substring "amazonaws.com" (ObjectResourceIntegrationSpec.scala:38)

Is there a more generic string we could search for in this test to make it work for both Google Storage and AWS?

I also see this failure:

[info] S3DeltaStorageIntegrationSpec:
[info] com.advancedtelematic.treehub.delta_store.S3DeltaStorageIntegrationSpec *** ABORTED ***
[info]   com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: FBCC2CCFB2FBAF2A)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1586)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1254)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
[info]   at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
[info]   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
[info]   at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)

I'm wondering whether the best way to deal with that failure would be to add a single "createS3Client" function in com.advancedtelematic.treehub.object_store and then have S3DeltaStorage, S3BlobStore, and S3DeltaStorageIntegrationSpec all use it for creating the s3Client.
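
Roughly the shape I'm picturing, as a sketch only; S3ClientFactory and its signature are made up for illustration, and credentials are left to the SDK's default provider chain here:

```scala
package com.advancedtelematic.treehub.object_store

import com.amazonaws.client.builder.AwsClientBuilder
import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}

// Sketch of a single shared construction point. S3BlobStore, S3DeltaStorage
// and S3DeltaStorageIntegrationSpec would all call this instead of each
// building their own client, so the endpoint handling lives in one place.
object S3ClientFactory {
  def createS3Client(region: String, endpointUrl: Option[String]): AmazonS3 =
    endpointUrl.fold(AmazonS3ClientBuilder.standard().withRegion(region).build()) { url =>
      AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region))
        .build()
    }
}
```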

NOTE: I've also tested this with an updated version of https://github.com/doanac/ota-compose/ and have garage-push operations working.

@simao (Contributor) commented Apr 25, 2019

> Is there a more generic string we could search for in this test to make it work for both Google Storage and AWS?

I think we could just match on the https prefix and the headers.

> I'm wondering whether the best way to deal with that failure would be to add a single "createS3Client" function in com.advancedtelematic.treehub.object_store and then have S3DeltaStorage, S3BlobStore, and S3DeltaStorageIntegrationSpec all use it for creating the s3Client.

Yes, that might be a good idea, but I'm not sure how it would look; I would have to spend some time on it.

@doanac (Contributor, Author) commented Apr 25, 2019

> I think we could just match on the https prefix and the headers.

I bet X-Amz-Signature= is part of the generic S3 API and could be used.
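
For example, something like this in ObjectResourceIntegrationSpec (with ScalaTest matchers in scope); this is a sketch, not the actual spec code, and redirectUri stands in for whatever value the spec currently asserts on:

```scala
// Both AWS and Google's S3-compatible endpoint presign URLs with Signature V4,
// so the X-Amz-Signature query parameter shows up in the redirect either way,
// while the hostname differs between amazonaws.com and storage.googleapis.com.
redirectUri should startWith("https://")
redirectUri should include("X-Amz-Signature=")
```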

@doanac (Contributor, Author) commented Apr 25, 2019

With the latest two commits I was able to run the integration tests against a Google Storage bucket.

@simao (Contributor) commented May 28, 2019

I added a commit and it's all good from our side; we just need DCO sign-offs on some commits.

simao and others added 3 commits May 28, 2019 17:55
Signed-off-by: Simão Mata <simao.mata@here.com>
Signed-off-by: Simão Mata <simao.mata@here.com>
Signed-off-by: Andy Doan <andy@foundries.io>
@simao force-pushed the google-storage branch from 3e6d5fa to 317645a on May 28, 2019 15:56
@simao (Contributor) commented May 28, 2019

Also, I needed to force-push because I forgot to sign off on my previous commits and needed to rebase yours on top of mine. Sorry about that.

doanac and others added 2 commits May 29, 2019 09:22
This gives the test class access to the properly configured s3Client so that the tests run correctly.

Signed-off-by: Andy Doan <andy@foundries.io>
Signed-off-by: Simão Mata <simao.mata@here.com>
@doanac (Contributor, Author) commented May 30, 2019

@simao - I think this is ready now.

@simao (Contributor) commented Jun 4, 2019

@doanac Great. Thanks again for opening a PR.

@simao merged commit 953fd9b into advancedtelematic:master on Jun 4, 2019