Support for Deis deployment pipelines #41
From @deis-admin on January 19, 2017 23:32: From @developerinlondon on July 14, 2014 22:48: +1 to that!
From @ntquyen on July 14, 2014 23:42: +1!
From @krancour on July 29, 2014 15:31: +1, but also probably not part of an MVP, IMO.
From @gabrtv on July 29, 2014 19:41: @kamilchm thanks for pointing us to Heroku pipelines! We're going to study it closely and see if we should include it in Deis. As @krancour pointed out, we can add that feature after our stable release. Today I would recommend using multiple applications and promoting between them manually.
From @kamilchm on July 30, 2014 6:26: I really appreciate where you have come with Deis as a PaaS solution. Keep up the good work!
From @carmstrong on July 30, 2014 6:58: @kamilchm We really appreciate the kind words!
From @suquant on January 14, 2015 11:05: +1
From @wotek on February 26, 2015 12:49: +1
From @bjwschaap on May 22, 2015 17:31: +1
From @rvadim on June 9, 2015 3:35: +1
From @scottrobertson on June 15, 2015 19:40: +1
From @jjungnickel on June 16, 2015 8:01: +1
From @JacopoDaeli on January 21, 2016 0:58: +1
From @nicolasbelanger on June 30, 2016 20:56: +1
From @phspagiari on July 4, 2016 18:13: +1
From @intellix on March 17, 2017 15:44: I've started working on a UI for Deis Workflow which has the ability to create pipelines and view them: https://github.com/xcaliber-tech/deis-admin Pipeline view: https://www.dropbox.com/s/dy4p6ehcxfwm236/Screenshot%202017-03-02%2015.25.16.png?dl=0 A few items are still required to finish it, though.
From @onbjerg on April 19, 2017 15:11: That's great! Will it also support auto-deploys from GitHub, like Heroku does? It would be cool if Deis could create a staging or review app for pull requests 🥇
From @intellix on April 19, 2017 16:20: @onbjerg I started working on a webhook server which tells Deis to rebuild and create these feature branches as pull requests come in, with adapters as middleware for the different providers (GitHub and Bitbucket). I've been working around the clock on other projects, though, so I haven't been able to finalise it :( I've just pushed what I had so far. It's not tested at all, as it was still actively being played with, but I'm hoping people will get it working and PR :) https://github.com/xcaliber-tech/deis-webhooks
I haven't used Heroku's pipelines feature, and I don't really want to copy another build pipeline sight unseen. The only things I have to compare it to are Jenkins and Travis CI, and I'm not sure about either of those. So: what config format should we use for the build pipeline feature? Right now the incumbent seems to be the Jenkinsfile, with its Groovy syntax. The jenkins-jobs repo is really something else!
@kingdonb it has nothing to do with Jenkins, CI or anything like that. From the Heroku docs: "Pipelines allow you to define how your code should be promoted from one environment to the next. For example, you can push code to staging, have it built into a slug and later promote the staging slug to production". It's essentially promoting a build from staging to production without rebuilding. Sorry if I am misunderstanding your comment.
No, you're explaining along the line of thought I was hoping to provoke... I really like the idea of adding a formal pipeline concept to Deis, but I don't know how the pipeline config should be instantiated. Follow? I think we probably won't start with a Jenkinsfile or .travis.yml, but I'm sure some kind of config artifact will be needed to describe the pipeline steps and their operation, and I don't know if it should be "hephyrc"...
I'm thinking we will probably end up creating a new config format standard. My expert sense says don't create a new standard if you can avoid it, but the Jenkinsfile is not for us...
We don't have environments now, but that seems like the first formal concept to add to the "deis app" to make it pipeline-capable. The second requirement is a function that must evaluate to "true" in order to promote from one environment to the next. I can imagine a separate function being invoked for each such transition.
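The "function that must evaluate to true" idea could be sketched as a small gate predicate. Everything here is hypothetical: none of these names exist in Workflow, and the checks are illustrative stand-ins for whatever a real gate would consult (CI status, UAT sign-off, etc.):

```shell
# Hypothetical promotion gate: succeeds (exit 0) only when every
# prerequisite for the requested environment transition holds.
can_promote() {
  from="$1"; to="$2"
  case "$from:$to" in
    dev:staging)
      # e.g. require a recorded green CI status for this build
      [ "${CI_STATUS:-}" = "passed" ] ;;
    staging:production)
      # e.g. require an explicit UAT sign-off
      [ "${UAT_APPROVED:-}" = "yes" ] ;;
    *)
      # no other transitions are allowed
      return 1 ;;
  esac
}
```

A promote command would then be allowed only if `can_promote dev staging` (say) exits successfully; the interesting design question is where the gate's inputs come from.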
Someone said it would be cool if Deis supported FaaS natively somehow. Kubeless and Serverless both seem very interesting to me. Brigade seems like the most obvious place to land our CI, since it came from the Deis team and is event-driven, so it's a natural fit for connecting with git hooks or any other pipeline build triggers. @scottrobertson What runtimes would you use in a pipeline? Just wondering.
I mostly run Ruby apps, but also PHP and Python.
I am primarily a Rubyist too! Heroku-style buildpacks seem to handle Ruby pretty well on Deis.
It's been a long time since I used Deis, but it worked very well for Ruby when I did, especially Rails apps.
@kingdonb, I think you might still be misunderstanding @scottrobertson's point. Or I am inferring that because you went on talking about CI. I was around when this feature was first requested. It had actually been discussed internally long before this issue was even opened. It was well understood to be something equivalent to Heroku pipelines, and that feature, as @scottrobertson was trying to explain, does not have anything to do with CI. And although it does have something to do with deployment, it has nothing to do with CD (continuous deployment).

Taking it back to 12factor, recall that a release is defined as build + config. The "pipeline" feature that was requested here was the ability to (manually) promote the build (slug) from one "environment" (e.g. staging) to another (e.g. "production", where the config is different), all without rebuilding.

Contrast that with how you could approximate this with Workflow today, if you were willing to make some small compromises. In both cases, create separate apps for each "environment":

Option 1: Don't use Builder. Have your own Jenkins/Travis/Circle/whatever-based CI process that cranks out Docker images as deployable artifacts, and deploy those images into each app.

Option 2: Use Builder.

The problem with option 1 is that it's sort of "advanced usage." (Or that's what I am told. Personally, this is how I had felt Workflow should always be used.)

The problem with option 2 is that the application is actually rebuilt when promoted. You lose the guarantee that the code running in prod is the same as what you were happy with in staging. (I, for one, could not live with that.)

The feature request had been to combine the best of the two options above: keep the popular Builder-based workflow, but promote builds between environments without rebuilding.

(You are absolutely right that doing this [probably] requires a missing notion of "environment" to be implemented, such that a single application has multiple environments. Don't be surprised by the sheer volume of effort that probably represents. It's a big change to the model.)

My best advice is to avoid further conflating this with CI and CD. Look at CI and CD as higher-level concerns that sit above Workflow.
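The option-1-style manual promotion could be sketched as a short CLI session. This is a hedged sketch, not a definitive recipe: the app names and image tag are made up, and it assumes Workflow's `deis pull` (deploy a prebuilt Docker image) and `deis config:set` commands against an existing cluster.

```shell
# Assumed: CI has already pushed registry.example.com/myapp:abc123,
# and the apps myapp-staging and myapp-production already exist.
IMAGE=registry.example.com/myapp:abc123

# A release is build + config; per-environment config lives on each app.
deis config:set DATABASE_URL=postgres://staging-db/myapp -a myapp-staging
deis pull "$IMAGE" -a myapp-staging

# After validating staging, "promote" by deploying the *same* image,
# so nothing is rebuilt between environments:
deis pull "$IMAGE" -a myapp-production
```

The key property is that the production deploy reuses the exact artifact that was validated in staging; only the config differs.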
Also...
Not sure how these two things are connected. Just as I've cautioned not to let CI/CD cloud the (Heroku-style) pipeline waters, don't let FaaS cloud those waters either. FaaS is compelling, but don't hitch all these different horses together. |
Thank you @krancour. Sorry I could not explain properly; things have been a bit crazy!
@krancour, I agree that pipelines may be valuable in the sense that they allow for promotion of a build without rebuilding.
It's very easy to see how @kingdonb would confuse the two, pipelines and CI/CD. Do we really want Deis Workflow to manage the complexity of pipelines? Sure, Docker images, tagging, and promoting already cover part of this. It's the natural progression. We currently use Deis Workflow in production, and whenever we need to solve complex multi-stage builds of Docker images, we do it with Docker builds. My dislike for pipelines comes from the mere name and complexity, having built similar pipelines in CI/CD to promote artifacts built in .NET using source tags. Docker images with tags already solve that problem really nicely...
The only problem I see that pipelines solve, which Docker tagging and promotion would not, is promoting from one environment to the next without worrying about configuration changes between environments. CI systems like GitLab, Jenkins, and TeamCity were created to solve exactly this problem. Pipelines sound like a rabbit hole in this sense...
That was true in Deis v1.x, but in Deis Workflow 2.x it is not... The foundational runtime for all apps built using Heroku buildpacks is Heroku's "cedar" stack (or, IIRC, we might have used "cedarish", an open-source approximation of the same stack). That stack is pretty heavy. Baking it into your images makes for some pretty hefty images. What Workflow did to be a little more efficient is keep the cedar stack separate from each app's deployable artifact, in a component called the "slug runner." So the idea was that there is one hefty slug-runner image based on cedar, and when an application is fired up, it's just the slug mounted into slug-runner pods. Worth noting: even that approach can be pretty heavyweight compared to how well you can optimize a Docker image for size by writing your own Dockerfile. I do a lot of Go, and using a multi-stage build with a minimal final stage produces tiny images. Anyway...
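The multi-stage Go build mentioned above could look roughly like this. It's a hedged sketch with illustrative names, not anyone's actual Dockerfile: the first stage compiles with the full Go toolchain, and the final stage copies only the static binary, so the shipped image is a few megabytes instead of hundreds.

```dockerfile
# Stage 1: build a fully static binary with the Go toolchain.
FROM golang:alpine AS build
WORKDIR /go/src/example.com/myapp
COPY . .
RUN CGO_ENABLED=0 go build -o /myapp .

# Stage 2: ship only the binary; no OS layers, no toolchain.
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
```

Compare this with a slug mounted into a cedar-based slug runner: the per-app artifact is small either way, but here the runtime layer is gone entirely.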
That's a very good question. I think that in 2015/2016, when the Heroku model of...
I agree that there are definitely more than two features described here (by my first interpretation at least, if not by the original poster's). I also want to be supportive and promote new ideas, but at the same time I think it would be great if we don't add any new features to Deis without specifically going through the design documentation process. That means lots of review and discussion before any code, and it's fair to think that might result in a lot of ideas dying on the vine. Honestly, that's much preferable to the whole project's scope creeping beyond what our team can handle and the project wilting away completely.

I'm just as interested in "how can a classic Deis Workflow integrate with other best-of-breed solutions on k8s," which is why I mentioned Serverless and Kubeless. I wouldn't reinvent those wheels, but I also can't imagine (e.g.) my organization adopting Kubernetes and then not evaluating or using those other solutions. I would anticipate we'd try to build a three- or four-stage pipeline, as we'd want both options to promote builds: manually, and automatically when CI passes.
I have some more thoughts on this, but I'll put a cork in it for now and mull over what I think I'd like this support to look like. Your experience and suggestions are extremely helpful here.
No.2 doesn't make a lot of sense. CI logically occurs and passes prior to deployment to any environment. i.e. you wouldn't even consider deploying to staging (or whatever your "first" environment is) if tests were failing, right? Movement/promotion of a build through different environments therefore occurs much later and wouldn't be gated on CI. It would be gated on something else: user acceptance tests, for example. "I, the product owner, am happy with what IT has put into staging, and I'm signing off on a release of that into production."

Also re: no.2, automating promotions (based on some TBD gates as discussed above) should simply involve the automation system scripting the manual promotion described in no.1. If anything needs to be implemented in Workflow, it's just no.1. Let people use whatever automation suits them to implement the rest, and let that automation simply use the (hypothetical) promotion command.

Remember the Unix philosophy of "do one thing and do it well." Be very careful not to allow notions of "here's what my org probably needs" to fool you into thinking Workflow needs to implement it all. Workflow is but one piece of the puzzle.

Please don't be discouraged by any of this. My intentions here are only to help focus this thread on the idea that was expressed in the OP.
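The "automation just scripts the manual promotion" idea could be sketched like so. Everything here is hypothetical: the gate, the app name, and the `deis` function (a stub standing in for the real Workflow CLI so the sketch is self-contained).

```shell
# Stub standing in for the real Workflow CLI; a CI job would call the
# actual `deis` binary instead.
deis() { echo "deis $*"; }

# Hypothetical gate: e.g. a human-set sign-off flag, or a query to an
# issue tracker in a real setup.
uat_signed_off() {
  [ "${UAT_APPROVED:-no}" = "yes" ]
}

# Automation scripting the manual step: deploy the *same* image to the
# production app, but only when the gate passes.
promote_to_prod() {
  image="$1"
  if uat_signed_off; then
    deis pull "$image" -a myapp-production
  else
    echo "UAT not signed off; refusing to promote" >&2
    return 1
  fi
}
```

The point is that Workflow only needs to expose the manual promotion primitive; the gating and orchestration can live in whatever automation each team already uses.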
I appreciate you're helping me keep this from becoming too complicated, but you're wrong about this one thing: the first environment is where we deploy our "maybe broken" stuff during test development. Our full feature test suite takes over an hour to run (and that's a totally different problem, but I'm not sure it matters; even if it were only 5 or 10 minutes, I would not want to force developers to wait that long to see the behavior of a change when deployment itself can be done in under 30 seconds).

I don't want two environments (staging, prod)... I want four (dev, test, staging, prod). Developers are free to deploy any broken thing they want to dev. Currently we are actually doing our UAT in dev, with days/weeks and rounds of testing and red tape after that, but if we ever implemented CD properly in our organization, I would expect UAT to be done in staging, as a sign-off that results in sending that build to prod. We're not doing continuous deployment anyway; we are developing on a long-lived "dev" feature branch that will not go to prod until it is feature complete. Dev is our prod. (Maybe we really need five environments...) But again, that's just my organization.

Right now Jenkins is testing every PR against the main (dev) branch with the Jenkins Pull Request Builder for Bitbucket plugin. An alternative configuration would be that developers manually promote their builds from dev into test when they are ready to tie up a worker for an hour, so they can see the results and code coverage reports. Deploy hooks in Workflow are already a thing, so maybe promoting a build to test could actually cause the CI job to run against the test deployment, with the test script invoked against it.

If the tests fail (or if CI says we haven't written tests to cover new code branches), then we would like to be sure nothing gets staged for a potential production deployment. But it is the CI result that we want to cause a build to be promoted to staging.
(There, in staging, is where UAT happens. If tests failed, then I'd rather not send anything ahead for UAT. But I don't want to do this manually, or impose on my team that we must all do this manually.)

Anyway, you're not wrong about anything else, but we most certainly need deployments that broke tests to be deployed somewhere, for debugging. Otherwise developers must manage another deployment on their laptops, or somewhere outside the system; we're doing that now, and there are lots of headaches associated with it. The nature of our tests, which run in Selenium and feature-test integrated environments, also requires that a deployed build exist for the tests to execute against. Capybara can manage this well with RAILS_ENV=test, but running tests against an environment setting other than production can be misleading, because a lot of things are designed to behave differently in a test environment... probably much fewer things would differ in staging.

Currently I'm using Jenkins to deploy a test pod for this with the Kubernetes plugin and a pod template, but say ideally there is only Workflow and FaaS, and Jenkins becomes unnecessary. There's no deep integration necessary, though... if a deploy hook calls my CI function-as-a-service and then stuff happens, well, that's already possible today without any modifications. I think you're mostly right that...
I'm not going to argue why I think it's wrong to deploy something before tests have passed, because it's really counter-productive to the larger point I was trying to make, which is that you're making this feature too much about your organization's unique (and possibly, by your own admission, less than ideal) way of doing things. You can't do that with a product (or feature thereof) that you want to be generally useful. You need to cut through all the site-specific idiosyncrasies to get down to the fundamental kernel of value that the feature adds.
I agree with the above, and I'm not sure why we are even arguing at this point. CI/CD and testing in general are always very passionately debated topics (for example, DHH on TDD). Developers are known to have lots of opinions on how testing and promotion should be done and what principles/frameworks should be followed. Organizations and different teams like to adopt very different practices and approaches. There is nothing wrong with that, except my way is always the best... 🤣 It is a constantly evolving discussion. Obviously we would never be able to cater to everyone's specific idiosyncrasies. The real question we need to answer is how valuable this feature is for Workflow. What complexity is it limiting, and what problem is it trying to solve?
Exactly. Is this really the path we want Workflow to evolve toward? Is the Heroku model completely what we want for a PaaS? Does it have...

I personally think there is a lot of value in extending toward serverless with a pub/sub notification/webhook system, which could be used to build plugins into Workflow. Thoughts?
Simple: if we design this, it should be just so, following Unix principles.
This, though, is the most valuable point, I think... it's not something I intuitively agree with, but it's a point I think I could be convinced of by a strong gust of wind. We're doing this because we're doing "not really TDD," where tests are not guaranteed to be written before (or, to be honest, after) a feature is developed, and it's frequently left up to developers' discretion whether their feature branches get merged, regardless of what the coverage report says. Live debugging would probably not be needed as often if we could consistently follow TDD. If your tests are effective and comprehensive, you don't so much need to push the button to see if it works. I've gone far off topic, but this has been interesting. Thank you!
Isn't that also way off topic? Trying to help you guys focus. I feel like this narrowly scoped thread got hijacked and turned into a wish list or a debate over the future of the project. Shouldn't there be a separate thread for the roadmap or something?
I understand staying on topic. I was just making the point that there are a lot more useful features outside the scope of the Heroku 12-factor ideology that we could be designing into Workflow... but at the same time making things extensible so that other systems can integrate, rather than just adding complexity. That was my point. @kingdonb, thanks for referencing this issue in the other FaaS thread.
From @deis-admin on January 19, 2017 23:32
From @kamilchm on July 14, 2014 19:01
Does someone use deis for building deployment pipelines?
Heroku have pipelines in labs https://devcenter.heroku.com/articles/labs-pipelines. How can I implement equivalent of it using deis?
Copied from original issue: deis/deis#1318
Copied from original issue: deis/workflow#711