
Finish Migrating To Community CI By August 1st #16637

Open
BenTheElder opened this issue Jun 26, 2024 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

BenTheElder (Member) commented Jun 26, 2024

/kind bug

https://github.com/kubernetes/test-infra/blob/master/docs/job-migration-todo.md

kOps is one of the few projects with many remaining jobs, please migrate by August 1st or risk losing CI coverage.

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 26, 2024
ameukam (Member) commented Jun 26, 2024

cc @hakman @justinsb @rifelpet

rifelpet (Member) commented Jul 1, 2024

The remaining jobs that haven't been migrated can be categorized into:

  • Periodic jobs that publish version markers to a GCS bucket, currently gs://kops-ci. Example usage (1, 2, 3)

    I see a k8s-infra-kops-ci-results bucket was created years ago (infra/gcp: Add bucket for kOps k8s.io#2678), but it is not mentioned in any prow job or in kubetest2-kops. Should we use this bucket, or is there a more appropriate project in which we should create a new bucket? Currently these jobs use a static AWS credential secret and also have write access to the kops-ci GCS bucket. Which prow cluster should we migrate to, and how will we acquire both AWS and GCP credentials in one job? We currently acquire random GCP projects from boskos but don't use boskos with AWS.

  • The presubmit E2E jobs publish artifacts to a k8s-staging-kops GCS bucket, used by the E2E cluster provisioned in the test. Can you confirm gs://k8s-staging-kops is in a community-owned project? Similar to above we need a way to get credentials for its project while also using AWS credentials for cluster provisioning. If it isn't community-owned then we'll need a new bucket.

  • The e2e-kops-aws-upgrade-k12[456] periodic jobs haven't been migrated because their k/k e2e.test binaries use older aws-sdk-go versions that don't support the authentication methods needed to run in the prow EKS cluster. We can either delete these 1.24 - 1.26 jobs or try to migrate them to EKS and skip any failing tests.

  • The Digital Ocean jobs are still waiting on an account being made available by the CNCF.
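For context on the first bullet, a version-marker publish is typically just an object write of a small text file containing the built version. A minimal sketch, assuming the existing k8s-infra-kops-ci-results bucket is adopted; the marker file name, the markers/ prefix, and the example version are all hypothetical:

```shell
# Hypothetical sketch of a periodic job publishing a version marker.
# Bucket choice, marker name, and path layout are assumptions, not the
# actual kOps job configuration.
VERSION="v1.30.0-alpha.1"                  # example version built by the job
BUCKET="gs://k8s-infra-kops-ci-results"    # assumed community-owned bucket
MARKER="latest-ci.txt"                     # hypothetical marker file name

printf '%s' "${VERSION}" > "${MARKER}"
# gsutil cp "${MARKER}" "${BUCKET}/markers/${MARKER}"   # the actual upload step
echo "would publish ${VERSION} to ${BUCKET}/markers/${MARKER}"
```

Downstream consumers would then read the marker object to discover the latest CI version, which is why write access to the bucket is the credential these jobs need on the new cluster.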

cc @upodroid any recommendations you can provide here?

BenTheElder (Member, Author) commented

> I see a k8s-infra-kops-ci-results bucket was created years ago: kubernetes/k8s.io#2678 but is not mentioned in any prow job nor in kubetest2-kops. Should we use this bucket or is there a more appropriate project we should create a new bucket in? Currently these jobs use a static AWS credential secret and also have write access to the kops-ci GCS bucket.

It seems we provisioned it for this purpose, though I don't recall (I might not have been involved then); it sounds OK to me ...

> Which prow cluster should we migrate to and how will we acquire both AWS and GCP credentials in one job? We currently acquire random GCP projects from boskos but don't use boskos with AWS.

That's a new one. I'm not sure; I imagine @upodroid has opinions. I think we have some limited cross-cloud auth, but I haven't interacted with this part yet.

> The presubmit E2E jobs publish artifacts to a k8s-staging-kops GCS bucket, used by the E2E cluster provisioned in the test. Can you confirm gs://k8s-staging-kops is in a community-owned project? Similar to above we need a way to get credentials for its project while also using AWS credentials for cluster provisioning. If it isn't community-owned then we'll need a new bucket.

Yep https://cs.k8s.io/?q=k8s-staging-kops&i=nope&files=&excludeFiles=&repos=

We should already be doing that for log uploads AFAIK?

> The e2e-kops-aws-upgrade-k12[456] periodic jobs haven't been migrated because their k/k e2e.test binaries use older aws-sdk-go versions that don't support the authentication methods needed to run in the prow EKS cluster. We can either delete these 1.24 - 1.26 jobs or try to migrate them to EKS and skip any failing tests.

I feel for this; kind doesn't drop support as fast as Kubernetes, but at least these are out-of-support releases as far as the core project is concerned. I think either of those paths makes sense.
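A quick way to triage jobs like these is to compare each binary's vendored aws-sdk-go version against the first release that supported the newer auth method. A minimal sketch; the 1.23.13 cutoff is an assumption based on the aws-sdk-go changelog (first web-identity/IRSA credential support), and the exact method required by the prow EKS cluster should be verified:

```shell
# Hedged sketch: classify aws-sdk-go versions by whether they are new enough
# for web-identity (IRSA) auth. The cutoff release is an assumption.
min_ok="1.23.13"

supports_irsa() {
  # True if version $1 >= $min_ok, compared with sort -V (version sort).
  [ "$(printf '%s\n' "$min_ok" "$1" | sort -V | head -n1)" = "$min_ok" ]
}

for v in 1.16.26 1.23.13 1.44.122; do
  if supports_irsa "$v"; then
    echo "aws-sdk-go v$v: supports web identity auth"
  else
    echo "aws-sdk-go v$v: too old, job cannot run on the EKS cluster"
  fi
done
```

On a real e2e.test binary the vendored version could be read with `go version -m e2e.test` and fed into a check like this before deciding whether a given release's jobs can migrate or should be deleted.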

> The Digital Ocean jobs are still waiting on having an account available from the CNCF.

I'm still periodically pinging CNCF folks about this one; no progress yet, but they're aware at least. (We need the account, and we also really need the budget to be monitored with the others; both of those are currently pending.)

I'm not 100% sure about the others off the top of my head, but k8s-staging-kops and k8s-infra-kops-ci-results are both naming patterns we did not use with the legacy infra, so most likely yes. You can also take a look in the github.com/kubernetes/k8s.io repo, where we do all the provisioning.
