
example deploy fails Error: Invalid S3 bucket name (value: data-pipeline-simple-None-deployment-bucket) #49

Open
petervandenabeele opened this issue Mar 14, 2021 · 7 comments


@petervandenabeele
Contributor

The None has a capital letter, which is invalid in an S3 bucket name.

I ran:

export AWS_DEFAULT_ACCOUNT=_____________29
export AWS_PROFILE=my-profile
export AWS_DEFAULT_REGION=your-region # e.g. eu-west-1

<..>/datajob/examples/data_pipeline_simple$ datajob deploy --config datajob_stack.py 
cdk command: cdk deploy --app  "python <..>/datajob/examples/data_pipeline_simple/datajob_stack.py"  -c stage=None
jsii.errors.JavaScriptError: 
  Error: Invalid S3 bucket name (value: data-pipeline-simple-None-deployment-bucket)
  Bucket name must only contain lowercase characters and the symbols, period (.) and dash (-) (offset: 21)
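A minimal sketch of what goes wrong: when no `--stage` is passed, Python's `None` is interpolated into the bucket name as the string "None", whose capital `N` fails the S3 character check at offset 21. The f-string pattern and the helper below are assumptions for illustration, not datajob's actual code.

```python
import re

# Simplified S3 rule from the error message: only lowercase letters,
# digits, periods (.) and dashes (-) are allowed.
VALID_CHAR = re.compile(r"[a-z0-9.\-]")

def first_invalid_offset(name: str):
    """Return the offset of the first invalid character, or None if valid."""
    for i, ch in enumerate(name):
        if not VALID_CHAR.fullmatch(ch):
            return i
    return None

stage = None  # what the CLI ends up with when --stage is omitted
bucket = f"data-pipeline-simple-{stage}-deployment-bucket"
print(bucket)                        # data-pipeline-simple-None-deployment-bucket
print(first_invalid_offset(bucket))  # 21 -> the capital 'N' of "None"
```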
@vincentclaes
Owner

> The None has a capital letter which is invalid.

Thanks Peter. I just noticed today that this crashes with my latest changes. If you explicitly pass the stage argument, it should work:

datajob deploy --config datajob_stack.py --stage dev

I plan to fix this tomorrow evening.
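One possible fix, sketched as a hypothetical helper (the function name and default value are assumptions): fall back to a default stage instead of letting `None` leak into resource names, and lowercase it so it always satisfies S3 naming rules.

```python
def resolve_stage(cli_stage=None, default="dev"):
    """Hypothetical helper: pick a usable stage for resource names.

    Falls back to a default when --stage was not passed, so None never
    becomes the literal string "None" in a bucket name, and lowercases
    the result to stay within S3's allowed characters.
    """
    stage = cli_stage if cli_stage is not None else default
    return stage.lower()

print(resolve_stage())        # dev
print(resolve_stage("Prod"))  # prod
```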

@petervandenabeele
Contributor Author

CloudFormation was successfully deployed. Now trying to run it ...

@vincentclaes
Copy link
Owner

FYI:

datajob execute --state-machine <your state machine name>

to trigger the pipeline should work now.

@petervandenabeele
Contributor Author

As mentioned in the README, I clicked "Start Execution" in the GUI and this has run successfully :-)

In the logs of the 3 AWS Glue (1.0) jobs, I do see the "hello world" message. Nice!

@petervandenabeele
Contributor Author

petervandenabeele commented Mar 14, 2021

The "destroy" (of the CloudFormation stack and IAM Role) also worked successfully, including the destruction of the CustomCDKBucketDeployment....

Only the CloudWatch log groups are still present. That might be OK.

Not sure if a finite retention of 3 months or so would be better?

Remaining log groups (all set to "Never expire"):

/aws/lambda/data-pipeline-simple-dev-CustomCDKBucketDeployment-5N8SZ8C1NN9
/aws/lambda/data-pipeline-simple-dev-deployment-bucketBackend
/aws/lambda/data-pipeline-simple-devBackend
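A finite retention could be set by creating the log groups explicitly inside the stack. A sketch with the AWS CDK (v1) Python API; the construct ID, log-group name, and where this code would live are assumptions, not datajob's actual implementation:

```python
from aws_cdk import core
from aws_cdk import aws_logs as logs

class DataPipelineStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Hypothetical: own the log group in the stack with a finite
        # retention, so `datajob destroy` removes it with the rest.
        logs.LogGroup(
            self,
            "GlueJobLogGroup",
            log_group_name="/aws-glue/jobs/data-pipeline-simple-dev",
            retention=logs.RetentionDays.THREE_MONTHS,
            removal_policy=core.RemovalPolicy.DESTROY,
        )
```

With `removal_policy=DESTROY`, the log group is deleted on stack teardown, which ties into the question below about how long the logs should outlive the stack.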

I will stop typing in this Issue.

I suggest closing this issue once either the README is updated with the --stage dev explanation or some stage (e.g. dev) is set as the default.

@vincentclaes
Owner

Good point, thanks Peter. We should create the CloudWatch log groups as part of the stack.

@petervandenabeele
Contributor Author

Before you do that (create and destroy the CloudWatch logs as part of the CF (CloudFormation) stack): what is the lifetime of this CF stack? Would you want to keep the Glue job logs longer than the lifetime of this CF stack?
