Thursday, 2017-12-14

*** kumarmn has joined #openstack-tc00:03
*** kumarmn has quit IRC00:28
*** harlowja has quit IRC00:45
*** kumarmn has joined #openstack-tc00:58
*** kumarmn has quit IRC01:01
*** kumarmn has joined #openstack-tc01:01
*** mriedem has quit IRC01:06
*** mriedem has joined #openstack-tc01:10
*** kumarmn has quit IRC01:20
*** kumarmn has joined #openstack-tc01:21
*** kumarmn has quit IRC01:22
*** kumarmn has joined #openstack-tc01:22
*** liujiong has joined #openstack-tc01:27
*** kumarmn has quit IRC01:36
*** kumarmn has joined #openstack-tc01:36
*** kumarmn has quit IRC01:36
*** kumarmn has joined #openstack-tc01:37
*** kumarmn has quit IRC02:07
*** kumarmn has joined #openstack-tc02:15
*** kumarmn has quit IRC02:21
*** kumarmn has joined #openstack-tc02:22
*** kumarmn has quit IRC02:26
*** openstackstatus has quit IRC02:32
*** purplerbot has quit IRC02:32
*** amrith has quit IRC02:32
*** ChanServ has quit IRC02:32
*** ChanServ has joined #openstack-tc02:34
*** barjavel.freenode.net sets mode: +o ChanServ02:34
*** openstackstatus has joined #openstack-tc02:36
*** purplerbot has joined #openstack-tc02:36
*** amrith has joined #openstack-tc02:36
*** barjavel.freenode.net sets mode: +v openstackstatus02:36
*** lbragstad has quit IRC03:44
*** kumarmn has joined #openstack-tc04:08
*** kumarmn has quit IRC04:13
*** rosmaita has quit IRC04:13
*** kumarmn has joined #openstack-tc04:30
*** mriedem has quit IRC04:49
*** kumarmn has quit IRC04:53
*** kumarmn has joined #openstack-tc04:53
*** kumarmn has quit IRC05:21
*** kumarmn has joined #openstack-tc05:21
*** kumarmn has quit IRC05:23
*** kumarmn has joined #openstack-tc05:24
*** kumarmn has quit IRC05:29
*** openstackgerrit has quit IRC06:47
*** liujiong has quit IRC08:31
*** jpich has joined #openstack-tc09:02
*** jpich has quit IRC10:22
*** kumarmn has joined #openstack-tc10:25
*** dtantsur|afk is now known as dtantsur10:25
*** kumarmn has quit IRC10:29
*** openstackgerrit has joined #openstack-tc11:17
openstackgerritDai Dang Van proposed openstack/governance master: Update policy goal for watcher  https://review.openstack.org/52729911:17
*** cdent has joined #openstack-tc12:00
cdentanybody have the cliff-notes version of the "longer development cycles" thread?12:21
*** kumarmn has joined #openstack-tc12:25
*** kumarmn has quit IRC12:30
ttxcliff-notes version?12:35
cdent"cliff-notes" are a study aid in the US, usually for novels that students don't want to read12:38
cdenthttps://www.cliffsnotes.com/literature/s/the-sound-and-the-fury/book-summary12:39
*** rosmaita has joined #openstack-tc12:49
ttxah13:09
ttxI can summarize13:09
ttxcdent: People generally agree with the premise, which is that a lot of developers, especially the part-time ones, are struggling with our rhythm of $stuff13:10
ttxThat said, some people disagree that the proposed solution would actually help in that regard13:10
cdentttx, I was mostly joking; I'm reading everything now13:10
ttxThere are concerns around the message it sends. Also concerns around the missed-train effect, which would get more significant if we make the gap between releases one year13:12
ttxSome concern that it would result in less facetime as people would not want to do midcycles again13:13
ttx(that one is kinda weird, and comes from the same people who complained that the PTG would replace the midcycles)13:13
ttx(I read it as: stop changing the events, as it's always a good reason employers use to reduce their travel strategy)13:15
ttxI also sense party lines between old-timers who are not really happy with it and new-comers/smaller-players who seem generally more in favor13:16
cdentthus far what I've read in the thread I'm not seeing much response from people who are new-timers to old projects13:18
ttxSome concerns around reducing cross-project collaboration, too13:18
ttxcdent: Some concerns around what this would mean for stable branches / how many are supported for how long13:20
ttxanyway, open to suggestions. This one is definitely not consensual13:21
ttxI just thought it better to be discussed openly rather than privately13:21
ttxEven if that's causing me a bit of ad-hominem on Twitter13:22
cdentIt is good that it is happening in the open. I'm still working out my reaction in my head, but initially I'm having one of my standard responses: we should do more analysis before design.13:28
ttxI find the tangent on "if we had more independent components that would not be that much of a problem"13:30
ttxinteresting13:30
cdentI don't think that's a tangent. That's an effort to get to the underlying causes, which is what I mean by analysis before design13:33
cdentThere are, however, likely several underlying causes, not just tight coupling.13:34
cdentIf we're going to consider making such a big change, we might consider ways to make the change an actual change, in the guts, not just in timing.13:35
ttxI consider it a tangent because I don't think we can solve it before we die from contributor attrition, if we don't find a way to make part-time contributors more comfortable13:36
cdentthen in that case, isn't it more important to talk about "contributor attrition" than cycle timing?13:37
ttxwell the proposal talks about it. It's about making sure we can embrace part-time contributors, because we can't rely only on full-time people13:38
cdentYes, I know, but it presents itself as a solution to multiple problems, and as others have mentioned, by being so well formed it feels a bit like a fait accompli (even though it is not).13:38
cdentIt might have been better to decompose and list each of the individual problems13:39
cdentI dunno, there are of course horrible thread wastage risks with that approach too13:39
cdentI'll keep working on my thoughts on the matter and try to come up with a coherent response. It's a bit challenging as I'm seeing all this through a nasty head cold, making it hard to think.13:39
ttxagreed, there was a bit of time pressure as the ideal yearly timing happens to fall soon. I wish I had that idea 6 months ago :)13:39
ttxalthough the way it looks we'll soon have 12 months to wrap our heads around it13:40
cdentmy interpretation of the current outcome of the thread is not as clear cut as yours seems to be13:41
cdentbiab13:41
ttxI mean, to be able to pass it in time for Rocky, it would have required broad consensus, and I don't think we have that... so the next possibility would be to change in 201913:43
ttxwhich gives us time to wrap our heads around it13:43
*** mriedem has joined #openstack-tc13:51
cdentoh, that assumes a year that coincides with the calendar year, which doesn't have to be the case13:51
persiaI suspect that a number of consumers would prefer something that could appear to match a calendar year, for their own internal convenience of thinking.  I may be mistaken.13:57
cdentyou're probably right persia13:58
cdentbut that leads to one of the points I think we should think about: ways of detaching the consumption cycle from the development cycle13:59
cdentthey are already not the same thing13:59
cdentso why not take advantage of it13:59
persiaAh, yes.  And to be clear, I think the consumption cycle is best to be aligned with the calendar year.  I think development cycles are best aligned with events, the scheduling of which have different constraints.14:00
ttx++14:04
*** kumarmn has joined #openstack-tc14:19
ttxcdent: what was your alternate name suggestion for "strategic contributions" ? ISTR you had a good one14:20
cdentI've forgotten (see above about cold), but it should be in the log, I'll look.14:20
cdentI can't find it :(14:24
*** kumarmn has quit IRC14:24
*** kumarmn has joined #openstack-tc14:24
*** kumarmn has quit IRC14:29
ttxyeah, I couldn't either14:31
cdentto make up a new one I think "community oriented contributions" is at least less ambiguous, but unfortunately a bit of a mouthful14:35
persiaThat name sounds dangerous to me: tossing folk upstream without direction often leads to dissatisfaction by the sponsor.  "strategic" at least suggests some value may be gained.14:39
ttx"generally-useful contributions" ?14:41
ttx"Good citizen" ?14:41
flaper87jeez, the release thread exploded before I even had a chance to read the first email14:42
ttxheh14:42
cdentpersia: I don't have a problem with the word strategic itself, rather that without a modifier associated with it, we don't know for whom it is strategic14:42
ttxproject-strategic contributions ?14:43
* flaper87 reads the backlog and searches for a summary14:43
persiacdent: I consider that to be a value: carries implications of being strategic for both the project and the contributor (or contributing organisation)14:43
ttxflaper87: I did try a summary around 13:09 UTC today14:43
ttxalthough it's very partial, more a "learnings" than a "summary"14:44
ttxoh joy 6 more responses14:44
cdentflaper87: ttx's summary was good but I would recommend doing a reading of your own, as ttx's summary inevitably reflects his biases :)14:44
flaper87ttx: cdent ok, thanks14:45
cdentpersia: my concern comes from my initial understanding when ttx first used the term. Out of context, when I heard "strategic contribution" I thought he meant "contributions solely for the benefit of the company"14:45
* flaper87 will never catch up T_T14:45
cdentwhich is the _exact_ opposite of what he meant14:45
cdentand then again when I was writing the recent tc report I forgot again, and almost wrote the wrong thing and had to correct myself14:46
mugsieis the term for that not "tactical contributions"?14:46
persiaAh, if the goal is to exclude that which is in the (selfish) interest of the contributor/contributing org, then yes, "Community oriented contributions" is right.  I hope that isn't being promoted as a good idea.14:46
cdentmugsie: only because you are sitting in the position of the community14:47
mugsietrue14:47
*** lbragstad has joined #openstack-tc14:50
persiaRe-reading the TC report: maybe "long-term contributions" or "project quality contributions"?14:50
persiaMy fear is mostly that if there isn't a selfish value to doing them, they may be difficult to justify for other than emotional reasons.14:50
cdentI think we need to address that gap. The style of development that is OpenStack is based on a fairly emotional justification: by working together we can make something that is better.14:51
cdentThat obliges the participants to be something other than entirely selfish.14:51
cdentAnd also an awareness that the long term gains in doing something not immediately selfish are selfish14:52
persiaThat last point is the one I think most important.14:53
persiaIn general, when soliciting corporate sponsorship of open source activities, I spend a fair amount of time helping folk appreciate how spending time resolving technical debt, improving test frameworks, etc. are of selfish benefit to them, in excess of the investment.14:54
* cdent nods14:54
cdentI often feel we're not speaking about that out loud enough, often enough.14:55
persiaWhen we create a dichotomy between "corporate interest" and "upstream interest", we end up creating a situation where it is implied that corporations are not interested in upstream goals, which I believe to be unhealthy for the long-term.14:55
persiaMany organisations contributing to openstack do so because they have a downstream codebase (maybe a project, maybe a deployment, maybe something else).14:56
persiaIn most cases, I suspect they spend a lot of time downstream trying to catch up to upstream.  Fixing this by making it easier to land stuff is in their selfish interest.  Making it easier to land stuff may require spending several months refactoring code or dealing with edge cases.14:57
cdentI am not at all convinced that a longer cycle will help with landing code14:57
persiaBut if framed in terms of the benefit, this sort of contribution is easier to solicit. On the other hand, everyone has this message: staff instructing management that "I'm working on X, because we need that for Y, which you told me to do this quarter" is a better way to pass the message than "Org B needs to send N people "upstream" because they have J people downstream, and need to provide support to balance the support those folk need."14:58
persiaI thought that was the proposal: that a longer cycle would let people with less time to contribute be able to get their features ready within a cycle, rather than spending so much time rebasing.14:59
cdentI dunno, I sometimes feel like it ought to be as straightforward as: "you have a product based on openstack therefore you are ethically bound to provide bountiful upstream contributors for the sake of long term health"15:00
cdentpersia: I think that's the proposal, I just don't think it will work15:00
cdenttc-members, it is office hour and we've been at it for hours already15:00
smcginnisThe door is always open. ;)15:00
fungioof, you're saying i get to spend office hour catching up on scrollback ;)15:01
persiaI think that straightforward statement is hard to consume in light of fiduciary duty to stockholders by the majority of joint stock organisations.15:01
cdentfungi: indeed15:01
*** marst has joined #openstack-tc15:01
pabelangero/15:01
persiaContrast "If you strategically depend on this open source project for your business, you expose yourself to significant existence risk by not ensuring you have sufficient active upstream contributors to ensure the project continues to meet your future needs."15:02
smcginnisThe other option would be to do more frequent releases and just make them less of a big deal. Kind of what Ben was getting at earlier on in the LTS discussion I think.15:02
smcginnisSo if you miss a release, it's really not a big deal to wait a few months.15:02
smcginnisThe hard part being distros deciding which release to pick up as their official productized version.15:02
cdentpersia: I sometimes feel that fiduciary duty to stockholders is an imaginary shared myth and something I really wish we could stop with15:03
cdentsmcginnis: I think I'm in the more frequent and leave it up to the distros to do what they like camp.15:03
flaper87smcginnis: fwiw, I think I'd be more inclined to even shorter releases and making them less of a big deal. I've been looking at how releases in Kubernetes are done and how features are carried across multiple releases15:03
flaper87cdent: yeah15:04
persiacdent: I'd be delighted to live under different rules.  That said, I often find risk management a well-received argument.15:04
cdentpersia: this is why I'm glad you exist15:04
flaper87I haven't had time to wrap my head around all the discussion but I have been looking closer and closer to how the k8s releases are done15:04
smcginnisThe big concern for me then is if each distro picks a different release to make their "LTS" release.15:05
cdentwhy is that "our" problem?15:06
mugsieI will tell you trying to productise k8s is even more painful than OpenStack right now15:06
smcginnisAnd that resulting in either conscious or unconscious focus from different folks on trying to get certain features into specific releases.15:06
smcginniscdent: Just because of shenanigans like that. ^15:06
cdentin fact why is anything to do with productising "our" problem?15:06
cdentI'm not saying it shouldn't be, but why do we persist with it?15:06
flaper87mugsie: but has that been because of the release cycle?15:07
persiasmcginnis: If different folk push different things into different releases, doesn't that automatically provide some distribution of demand to allow PTLs to only concentrate on a few things each cycle?15:07
mugsiecdent: because right now, with the turn around time with some vendors, a new install may be out of upstream support before it is even installed15:07
mugsieflaper87: yes15:07
smcginniscdent: Fair point. But I think due to $CURRENCY we are kind of forced to have to consider it somewhat.15:07
smcginnispersia: Another fair point.15:07
flaper87mugsie: mmh, mind elaborating?15:07
cdentmugsie: but why is _that_ a problem? Isn't the reason the vendors (of open source stuff) exist so they can provide support?15:08
mugsieyou work on getting $release ready, find all the new issues, fix them, adapt tooling, then release your new version, just as the next version of k8s is released.15:08
mugsiethen you get shouted at by customers looking for $feature, but you are behind15:08
mugsieand the cycle starts again15:09
flaper87mugsie: tbh, I don't think that's a problem the upstream community should try to solve15:09
persiaOn $CURRENCY: while it makes sense for mainline projects (e.g. OpenStack) to make it easy to productise their output, actually constructing a product reduces the opportunity for downstream differentiation and value creation: too much focus in mainline means less reason for folk to invest.15:09
TheJuliaSpending the last little bit reading the scroll back and to persia's point of long lag times for downstream integration, every large operator I've spoken to that has had to do anything custom downstream is >= 1 year behind the current release, and simply cannot keep up with the cadence.15:09
mugsieflaper87: how many users of OpenStack take it directly from source?15:09
persiaTheJulia: 1 year is a dream for some folk I speak with :)15:09
mugsieit hurts *our* users15:10
* persia regularly sees operators upgrading after 2-3 years, including rebasing custom code15:10
smcginnismugsie: I couldn't get any verified CD users when I tried a few months back.15:10
flaper87mugsie: very few or none but again, I think those are problems solvable downstream too.15:10
smcginnismugsie: Just a lot of "we might have users deploying from main".15:10
mugsieespecially when they try to make fixes, and the branch is gone.15:10
TheJuliapersia: And thinking about it, if you can't jump versions, you're doing a new deployment, so you're taking 3+ months for hardware, 3+ months to configure/systems validation/install, and then business acceptance could take six or more months....15:11
ttxohai15:11
fungiat least in the past when i've dealt with this for other projects, the customers demanding newer upstream features in your productized version don't _actually_ want to use you productized version and would ultimately be happier consuming upstream releases directly15:11
persiamugsie: I submit that most of the major operators consume mainline, and most of the smaller deployments consume some product.  From my limited information, I believe it to be related to a balance between the pain of waiting for features/bugfixes vs. the cost of hiring staff to do things directly.15:11
persiaTheJulia: Lots of folk redistribute hardware between deployments, but yeah, it's not pretty.15:12
pabelangerI always thought it would be an interesting project in openstack, to try and CD openstack on the hardware we are in infracloud for example. Having a group of devs / ops working together to try and make it happen, then use the resources in nodepool for openstack-infra.15:12
mugsiepersia: I would suggest that quite a few major operators actually have a hybrid, some custom, and some vendored15:12
pabelangerthen take what we learn and feedback loop into the project15:12
fungior, at least, after they get to try and consume upstream directly for a while they get a better understanding of why you're productizing something which isn't moving as fast or quite as up to date15:12
mugsiepabelanger: that would be great - but a lot of work15:12
persiafungi: Very commonly, although there are still a few organisations with private clouds that have stupid policies that require them to install vendor software, rather than own it themselves.15:12
ttxSo around "more frequent releases"... I like that, and I don't think it's necessarily the opposite to longer development cycles15:12
persiamugsie: Agreed.15:13
pabelangermugsie: Yup, i think you'd need to work with a deployment project in openstack already to make it happen15:13
persiattx: Could you expand on how you would detangle release cycles from development cycles?15:13
ttxThey already are.15:13
mugsiepabelanger: when we operated Designate in a public cloud it was great, we learnt so much, and the version in production was just a week or 2 behind master15:14
flaper87persia: I think they are detangled already15:14
TheJuliapabelanger: I seem to remember that was the original dream when the idea of infracloud was pitched in a conference room during a midcycle long ago.15:14
ttxa development cycle is more like an epic, starts with F2F, has goals15:14
persiaI thought a majority of the projects in the coordinated release had freeze scheduling to align development activities with the release cycle. If that isn't true, then it isn't important.15:15
pabelangermugsie: yah, to some degree, osic-cloud1 that mrhillsman was part of did the same too. I think it helped them greatly.15:15
persiaTo my mind, the other path involves a lot more effort on feature branches, for-next style arrangements, etc.15:15
pabelangerTheJulia: yes! Sadly, we lost people to do the work for it.  But agree it is still an important thing we could do as a project15:15
ttxpersia: only the projects following the one-release-per-cycle model (which arguably includes the largest ones)15:15
fungipabelanger: well, also, we don't have fresh hardware or anywhere offering to host it long-term either15:16
persiaHeh, indeed.  If we can ignore that small minority, then I retract my concern :)15:16
dmsimardpersia, mugsie: where do you draw the line between "mainline" and "vendor" ? For me, installing packages from Ubuntu cloud archive or RDO isn't vendor, but rather packaged source -- if an operator deals with Red Hat, Mirantis, Canonical, etc for an actual supported product offering, that's what I'd call vendor.15:16
ttxsee Nick Barcet email on the thread, 7 min ago15:16
ttxhe is proposing more coordinated intermediary releases15:17
ttx(and longer dev cycles)15:17
pabelangerfungi: agree, times have changed a little since fort collins15:17
dmsimardLike, would infracloud be "vendor" because it uses packages from Ubuntu Cloud Archive despite providing its own installation mechanism ?15:17
persiadmsimard: For me, "mainline" is the master branch on git.openstack.org, "vendor" is some source provided from somewhere else, at some delay.  This includes the stable branches.15:17
smcginnispersia: ++15:17
mugsieyeah, productised is somehting that happens after the tag and tarball are put on *.openstack.org15:18
persiadmsimard: Infracloud is *definitely* "vendor". The vendor is Infra. Infra builds it based on vendors to infra, which may include some mainline.15:18
dmsimardpersia: you really believe that most of the operators are running off of master ? I would be surprised if that's the case15:18
pabelangerSo, one of the comments I heard in passing about a 1-year release cycle was that moving to it would allow openstack to stabilize more for releases. However, I admit I am not sure if that is going to be accurate; I would imagine with a longer development cycle, projects would want to get more features in, not less15:19
TheJuliapersia: would the feature branch idea.... could it be lighter weight on testing, and the proposed merge to say master branch would be the branch with all of the cross project gating/upgrade testing/etc15:19
dmsimardI mean, data from the user survey puts most of the operators a few releases old15:19
persiadmsimard: Most of the large operators with whom I've had the opportunity to discuss internal arrangements frequently pull bits of this and that from master, usually cherry-picking things. The alternative is too long a wait to land that bit of code.15:19
persiaThe better ones then send their rebase results back to the stable teams, but folk often claim they don't have time for that.15:20
persiaThe core of what is being run is likely to be several releases old: very rarely less than a year.15:20
dmsimardFWIW I end up agreeing on your definition of vendor vs mainline15:20
fungiTheJulia: we could probably approximate that model without feature branches simply by running fewer jobs in the check pipeline than gate15:21
dmsimardAlthough I still draw a line where (vendor supported) productization comes in15:21
persiaTheJulia: I'd probably do full testing on each branch, with an always-trusted release branch, but yes, that could be possible: it depends on test resources vs, number of things to test.15:21
TheJuliafungi: That could possibly free resources, keep the queue down, but projects would have to be onboard15:21
fungiTheJulia: or we've discussed adding an intermediary pipeline too which only runs jobs after at least one core +2 review15:21
persiadmsimard: What is the support line?  Does it still apply for the larger customers who are able to instruct vendors to provide non-standard solutions under standard support?15:22
TheJuliaThat is not a bad idea either, the downside that I see is if something is broken elsewhere, then the fix to the fix could end up in a long nit-picking cycle15:22
TheJuliawhich becomes a cultural problem almost15:22
fungiTheJulia: but all of those (feature branches with fewer jobs as well) seem to mostly be about reducing ci load, at the expense of more immediate feedback to developers15:23
TheJuliawell, feedback now is not immediate15:23
fungiagreed15:23
TheJuliaFor some check pipelines it is hours15:23
* TheJulia realizes the horse is dead15:23
mrhillsmanpabelanger: it would be lovely to have cd and work for this was happening during osic15:23
fungisome of those very-long-running jobs may make more sense to punt into periodic as well15:24
mrhillsmanhttps://github.com/osic/qe-jenkins-onmetal https://github.com/osic/qe-jenkins-baremetal and i have been still thinking about this since the dissolve of osic15:24
mrhillsmaneven yesterday after reading all the replies to the release change proposal thread15:26
mrhillsmanas a way to address a subset of concerns15:26
TheJuliafungi: I think at least one grenade/upgrade job would be needed for the check gate, and those are longish to begin with, since they can catch some major issues well in advance, but everyone's mileage is going to vary15:26
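[Editor's note: the check/gate split discussed above (fewer jobs in the check pipeline than in gate, plus an intermediary pipeline that only runs expensive jobs after a core reviewer's +2) could be sketched in Zuul v3 pipeline configuration roughly as below. This is a hypothetical illustration only, not actual openstack-infra configuration; the pipeline name `heavy-check` and the job lists are invented for the sketch.]

```yaml
# Sketch: a lightweight check pipeline triggered on every patchset,
# and an intermediary pipeline that runs the expensive jobs only
# once a core reviewer has voted Code-Review +2.
- pipeline:
    name: check
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1

- pipeline:
    name: heavy-check          # hypothetical name
    manager: independent
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Code-Review: 2   # run only after a core +2
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1
```

Long-running jobs (e.g. grenade/upgrade) would then be attached to `heavy-check` or to a periodic pipeline rather than to `check`, trading immediacy of feedback for CI load, which is the tension fungi and TheJulia note above.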
dmsimardpersia: I can't speak for Ubuntu Cloud Archive packages (or SUSE's) but despite an obvious personal bias, RDO takes a lot of pride in packaging upstream source as-is without custom patches -- for me that's very close to mainline. And yet, it is unsupported in the sense that you can't go and call Red Hat to get their engineers to help you with a problem through a support contract. That's what I mean by15:27
dmsimardvendor support.15:27
mrhillsmanmaybe something we can entertain in the openlab space15:27
pabelangermrhillsman: yah, i think it would be great if we could show that you could CD openstack some how15:27
ttxwow that thread is getting big. those openstack-dev ML stats were a bit down for 2017, figured I should fix that15:27
mrhillsmanwe were chasing master with that effort15:27
mrhillsmanand it as actually making some damn good progress15:28
smcginnisttx: :D15:28
mrhillsmanttx: lol15:28
ttxnext week I'll propose moving to GitHub15:28
TheJuliaI've heard of several teams that have gone off doing CD deployments chasing master with decent success, fwiw15:28
TheJuliattx: oh my15:28
persiadmsimard: Ah.  I see what you mean.  In my experience, this experience differs for certain customers for most of that class of vendor, but to the core, when considering vendor solutions, I think each operator takes a different set of choices about whether things like "support" are meaningful, and so picking any variable to separate them seems awkward.15:29
* dmsimard nods15:29
ttxtc-members: another question I had for you is around SIG governance. Currently since SIGs are not firmly below TC nor UC, if there is some issue in there (say the co-leads don't like each other anymore, or the participants don't like their leads) there is no place of appeals15:30
ttxOne way to fix it is to have the TC and the UC bless the Meta SIG and ask them to police that stuff15:31
mrhillsmanttx: i thought we decided on the meta sig handling that for now?15:31
mrhillsmanah ok, needs that blessing :)15:31
ttxmrhillsman: we did say that it could be a solution, still need that blessing15:31
mrhillsman++15:31
cdentwho is the meta sig at this point?15:31
mrhillsmanlol, me and ttx15:31
ttxso I'm checking if it actually flies or if anyone has a better suggestion15:31
ttxcdent: basically a rep from each body :)15:32
cdentseems a reasonable course of action to me15:32
ttxThe other solution is to call for common meetings the day it happens, but then you enter crappy vote territory15:32
fungior we could just say disagreements aren't allowed ;)15:33
smcginnisI think by having TC and UC representation in the meta-sig, that should be a good place for any issues to escalate.15:33
fungiyeah, wfm15:33
persiaMaybe a combined Meta-SIG review meeting once in a (rare) while, with all of TC and UC present?15:34
* persia is thinking once or twice a year for that15:34
mrhillsman++15:34
mrhillsmani like that idea15:34
cdentmeta-meta-sig should have governance over meta-sig, and have pre-meetings whenever they need to meet15:34
mrhillsmanthat was so redundant15:34
mrhillsmanhehe15:34
ttxOK, let's say the Meta SIG should be co-led by one TC and one UC member, and is tasked with solving issues, and worst case scenario calls for common meetings with TC/UC15:36
ttxlike in case they need more input to make the call15:36
smcginnisttx: Works for me.15:36
mrhillsman+15:36
ttxeach rep reports back to their $C15:37
cdentyup15:37
fungisounds fine15:37
ttxmrhillsman: I'll draft a resolution on our side to capture that15:37
flaper87++15:37
ttxyou should plan to get a similar thing approved on yours15:37
mrhillsmancool15:37
mrhillsmanwill do15:37
ttxmrhillsman: maybe check that the idea works for them too, before I get busy drafting15:38
mrhillsmanwill send email shortly15:38
ttxOn the 1-year thing, where do y'all stand ? 1/good idea let's do it for Rocky, 2/might be a good idea but we need careful consideration and a long discussion, so not rocky, 3/worst idea ever15:41
cdent4/ there are problems to be solved, this doesn't solve them15:42
flaper87cdent: you literally stole my words15:42
TheJuliaI'm a 2 and 415:42
flaper87cdent: 415:42
fungii'm open to the idea of a 1-year cycle, but waiting to see more feedback before i can be sure we're not missing something15:42
flaper87if anything, I'd rather do more frequent releases, as we mentioned earlier in the office hour. But again, #415:42
ttxfor those on 4, any alternate suggestion to fix the problems ?15:43
cdentttx have you seen my message on the thread? I think there's some thinking to do around decoupling development timing from consumption timing15:43
flaper87ttx: not off the top of my head but I'll gladly spend time trying to figure out another way to solve these problems15:43
fungii wouldn't say i'm #2 necessarily, because >24 hours of discussion doesn't necessarily mean "long discussion" (even though i'll grant that thread has exploded quickly)15:43
ttxfungi: 1.5 :)15:44
ttx(I'm 1.5 too)15:44
EmilienM4/15:44
cdentI partially agree with what fungi has said: we haven't actually played this out that much. It's been in awareness for a day. Sometime might push out some brilliance in the next week.15:44
fungiwell, i can't say whether i'm #1 until i see the discussion unfold. i could be #415:44
EmilienMI'm not sure we tackle the problem on the right side (which problem by the way, it seems folks happy about this proposal have different problems to solve)15:45
fungiwe're getting early feedback from our highest-engagement ml participants so far15:45
TheJuliattx: Cycle-bound the large behavior-changing features, rapid-release new features during a cycle, resulting in more frequent releases? I'm not sure CD really, truly solves the operator lag issue beyond confidence in the end-of-cycle release to reduce the actual release overhead for packagers.15:45
flaper87cdent: I'm holding off on posting any opinions on that thread until I've gotten enough time to process everything15:45
ttxok, all that points to a timing too short to actually change anything for Rocky. is that the common view ?15:45
ttx(the release team kinda needs to work on the cycle now :) )15:46
flaper87Rocky is def off, in my opinion.15:46
smcginnisttx: Yeah, I think we are going to have to wait a bit longer on this one.15:46
fungii wouldn't rule out changing for rocky, at least not yet15:46
ttxTheJulia: looks a bit like what NickBarcet suggested in the thread?15:47
smcginnisttx: Let's give it a week maybe and see how the mood shifts.15:47
mrhillsmani like the idea floated by mriedem about 6 months feature, 6 months bugs15:47
dmsimardI feel like even though the intent of the development cycle change is not about LTS, the notion of LTS might come into play, so it might be a good idea to wait and see how that discussion ends15:47
flaper87I just don't think there's any need to rush it, and trying to make this change in Rocky does feel like rushing it15:47
cdentflaper87: so you're going to be the one who provides the brilliance in the next week? :)15:47
fungibut i can't form a solid opinion until more of the community has a chance to provide feedback15:47
TheJuliattx: I've not dug into my email yet today :\15:47
flaper87we're talking about taking more time for releases and preparing things so, let's do the same for planning this change15:47
flaper87cdent: it might all come down to a pic of me surfing15:47
flaper87:D15:47
cdentthat's it!15:47
smcginnisfungi: The only rush is that it would need to be now, or in a year.15:47
ttxflaper87: yeah, the rush was more linked to the release window opening nowish15:47
cdent1 year long cycle, six months of development, six months of surfing, interspersed.15:48
flaper87ttx: understood, makes sense to want to have answers now, regardless of the answer :)15:48
fungismcginnis: sure, but you must at least concede that one day is not enough time to say we have any representative amount of feedback on the proposal ;)15:48
flaper87cdent: see? didn't have to wait to next week15:48
cdentfungi++15:48
cdentwhat we have is feedback from the early actors, which is almost exactly not the group that we're trying to solve for, yeah?15:49
EmilienMthe main thing I've heard about why people are happy about this proposal is "we'll have more time" - but I'm not sure that's accurate if we commit to doing more things in a longer release. In fact, it highlights the problem that projects don't do a good job of planning their roadmaps for cycles15:49
flaper87fungi: smcginnis let's put it this way, I barely caught up with the thread 10mins ago15:49
fungiwe have a lot of feedback, but from a very specific subset of our community who is probably not a representative sample of opinions15:49
mrhillsmani'd have to agree with that partially cdent15:49
smcginnisfungi: Absolutely! I'm just saying that was the only reason why we are even considering any kind of time pressure on this.15:49
flaper87so, yeah, we gotta let it mature for another week. I bet we go over 500 emails15:49
mrhillsmancould definitely use more time for the late responders15:49
cdentEmilienM++ I think we'd just end up trying to pack even more in15:50
dmsimardEmilienM: this is not so much for OpenStack developers than it is about *users* and *operators* IMO.15:50
pabelangerEmilienM: yes, more time for X. But I honestly don't see that, it just means twice the amount of things will be added in 1 year instead of 6 months15:50
mriedemdmsimard: how is this about users?15:50
*** dansmith has joined #openstack-tc15:50
mriedemstability?15:50
mriedemfeatures?15:51
mriedem1 year cycle doesn't magically get you stability15:51
* flaper87 agrees with mriedem15:51
dmsimardmriedem: wrong vocabulary, what I really meant was operators15:51
mrhillsmanoh shit, you have awoken the beast dmsimard15:51
fungialso, i don't quite see the rush. why can't we, a few weeks into rocky, say we've reached consensus to bump out the release date on it by an extra 6 months?15:51
EmilienMmriedem++15:51
pabelangerttx: cdent: right now 4, problems to solve, but unsure currently if this solves them. I'm trying to keep up with all the replies and process them.15:51
flaper87fungi: good point15:51
*** edleafe has joined #openstack-tc15:51
fungii mean, i agree waiting too far into the release cycle would be bad, but...15:51
flaper87however, let's first reach consensus.15:52
mrhillsmanjk mriedem15:52
flaper87also, I think there's a planning problem, fungi15:52
mriedemdmsimard: this doesn't give ops LTS either15:52
mriedemmost ops aren't even 1 year out right?15:52
mriedemthey are waiting until we start to EOL the oldest supported upstream branch15:52
dmsimardmriedem: I realize that it has nothing to do with LTS15:52
TheJuliafungi: contributing businesses might get some heartburn from the community changing after what they perceive to have been committed to.15:52
flaper87the release cycle is planned before the cycle and extending it would translate to doing the planning again15:52
mrhillsmansome folks may take that the wrong way15:52
funginothing gives ops lts other than a group of people stepping up to solve maintaining an lts15:52
dmsimardmriedem: but releases are bound to be supported for longer than the actual 1 year support cycle because otherwise it means as soon as we ship a new release, the previous release goes EOL15:52
pabelangerI still think release early, release often is the way to go, but I'm still trying to wrap my brain around how a 1 year cycle would affect that. Even if we do encourage projects to release more15:53
mriedemdmsimard: supported by whom?15:53
mrhillsmanyeah, i am trying also since yesterday mriedem to figure out how it gets to the root of ops/users concerns15:53
mriedemupstream stable team, or vendors?15:53
dmsimardmriedem: by upstream15:53
ttxI think it's squarely for developers. If it doesn't benefit devs (especially part-time devs that we need to attract more of) then we should not do it.15:53
fungiTheJulia: maybe, but this would be the first time we've changed our minds on a release date in ~7 years of the project existing so i can't imagine they'd get too bent out of shape15:53
mrhillsman^15:53
mriedemdmsimard: the stable team hasn't said they are going to change the stable policy of 1 year15:53
EmilienMTim said it would actually affect the feedback loop from operators15:53
ttxthere are way better answers for ops needs around releases, and that's skip-release upgrades, and LTS branches15:53
EmilienMwe would have to wait longer to know if what we're doing works15:53
mriedemthis in no way helps a part time dev that is pushing a non-priority feature15:54
dmsimardmriedem: releases.openstack.org shows each release currently going EOL a year after its release; if we move to a 1 year development cycle without touching the support cycle, it means we will eventually only have one stable release instead of two15:54
EmilienMlike, if I implement a feature early in the cycle, I would have to wait almost a year, if not a full year, before it's in production15:54
mriedemdmsimard: those are the breaks i guess15:54
mriedemdmsimard: if we extend the stable policy because of this thing, then we're getting into upstream LTS areas15:54
mriedemwhich seems underhanded to me15:55
EmilienMttx: devs don't need more time, they need to learn about doing a schedule15:55
ttxunlikely. LTS means 3-5 years, not 215:55
EmilienMand right now, we all over-commit15:55
EmilienMand some of us are burnt out and think we need more time. No, we need better scheduling15:55
mriedemEmilienM: agree15:55
mriedemthis is why i said in the thread, the team needs to agree to less stuff per cycle15:56
dmsimardmriedem: if we're inflexible on the support cycle and leave it at one year, it means we're expecting operators to upgrade in a matter of weeks after the new release comes out, because their current release goes EOL almost immediately, and I don't think that's realistic15:56
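dmsimard's arithmetic can be made concrete with a tiny sketch (the helper name and the numbers here are illustrative, not anything from the log):

```python
# Hypothetical helper illustrating dmsimard's point: with a stable
# support window of `support_months` and a new release every
# `cycle_months`, this is how long an operator can keep running a
# release after its successor ships, before the old release hits EOL.
def upgrade_window_months(cycle_months, support_months):
    return support_months - cycle_months

# Today: 6-month cycle, roughly 12 months of stable support, so the
# previous release stays supported ~6 months after the new one ships.
print(upgrade_window_months(6, 12))   # -> 6

# A 1-year cycle with an unchanged 12-month support window: the old
# release goes EOL the moment its successor is released.
print(upgrade_window_months(12, 12))  # -> 0
```

That zero-month window is the "upgrade in a matter of weeks" scenario; either the support window stretches (edging toward LTS territory, as mriedem notes below) or operators lose their overlap.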
mrhillsmanis that really because of what employer has you doing or things you take on yourself15:56
mugsiesomething I have seen said is that features take too long to reach large numbers of users, and the bugs they find - would this cause that loop to double (at least)?15:56
EmilienMwe as a community aren't good at saying "no we can't do it"15:56
mriedemand be OK with telling non-priority bps that they didn't make the release15:56
EmilienMbecause we want to be nice and accept everything15:56
EmilienMand look now, we recognize 6 months isn't enough so we want to extend to 1 year15:56
TheJuliaEmilienM: absolutely agree, although not just scheduling, the culture is vital to be on the same page to work together in the same direction.15:56
cdentmrhillsman: that's a good question and I think the answer is "it's complicated"15:56
mrhillsmanEmilienM is that employee driven or community driven?15:56
cdentsome people see voids and try to fill them, they are just drawn that way15:56
mriedemmrhillsman: for me it's both15:56
fungidmsimard: the way i see it, we'd be maintaining one stable branch at a time for purposes of validating master development toward the subsequent release (to confirm we can upgrade, and that we maintain some backward compatibility), but that one-year mark could be where the lts team takes over maintenance15:57
EmilienMmrhillsman: yeah it's both...15:57
mriedemmrhillsman: i don't like feeling like an asshole by telling someone no to their unicorn feature15:57
mrhillsmani think from the community there is a strong desire to slow down feature addition and shore up more technical debt but it is hard from what i hear when your employer is pushing for more15:57
EmilienMmrhillsman: my employer will probably ask us to do twice the work for each release if we take the proposal15:57
cdentmrhillsman: I'm not sure I believe that. I keep hearing from NFV people "why aren't you supporting X yet?"15:57
dmsimardfungi: it's an important discussion to be had IMO, the development cycle and the associated support cycle go hand in hand15:57
mriedemEmilienM: agree - this won't create stability cycles unless each project decides to do that15:57
EmilienMmriedem: we'll have to at some point15:58
cdentEmilienM: upstream, why?15:58
EmilienMcdent: why what? sorry16:00
pabelangerwouldn't the cost of a 1 year stability cycle be higher than 6 months? could projects go that long without landing new features?16:00
cdentEmilienM: "we'll have to [create stability cycles] at some point". To which I'm asking: Why is that something that needs to happen upstream?16:00
cdentOf course I'm not sure I'm understanding what "stability cycles" means.16:01
EmilienMcdent: because upstream is where we develop things?16:01
dmsimardOTOH, 1 year is a long time.. I mean, k8s was a fraction of what it is today even two years ago. I'm concerned about the landscape changing enough that the plan set at the beginning of the development cycle ends up being inaccurate over that period.16:01
ttxdmsimard: agreed -- if we can't find a way to release more often in a longer cycle, that just won't work16:02
EmilienMcdent: not stability cycles, but cycles with less features probably16:02
fungiwell, there _was_ a time when openstack released every 3 months. we do seem to be cooling off in some areas of development, so it seems natural our need to have coordination points becomes less frequent as time marches on16:02
cdentEmilienM: I think I was conflating "stability cycles" with some of the LTS needs, but you mean "cycles to fix stuff and not add stuff". I'm not sure why we need cycles for that, we should figure out ways to do both, better.16:02
mrhillsmana number i would like to see is: of the 20% of contributors who do 80% of the work, how many are employed to work on openstack 80% of their day16:03
ttxnijaba suggested a longer cycle with more coordinated releases in it -- not sure that would relax pressure as much but that's another way to slice it16:03
smcginnisSo if we did a one year cycle with something like quarterly releases, we could "highly recommend" that the last release in the cycle be a stability release.16:03
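smcginnis's idea can be sketched as a schedule (purely illustrative: the start date, 13-week quarters, and labels are assumptions, not anything decided in the log):

```python
from datetime import date, timedelta

# Hypothetical 1-year cycle with quarterly releases, where the
# fourth and final release of the cycle is a stability release.
cycle_start = date(2018, 3, 1)  # arbitrary start date for illustration

releases = []
for quarter in range(1, 5):
    when = cycle_start + timedelta(weeks=13 * quarter)
    kind = "stability release" if quarter == 4 else "feature release"
    releases.append((when.isoformat(), kind))

for when, kind in releases:
    print(when, kind)
```

The open question in the discussion below is who would actually consume the three intermediate feature releases.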
EmilienM"Release early. Release often. And listen to your customers." if we go one year, we'll listen to them too late and get serious problems16:03
ttxEmilienM: ++16:03
EmilienMcdent: no I didn't mean that, sorry if I wrote it16:03
smcginnismrhillsman: That would be interesting to know.16:03
EmilienMcdent: I meant to say, the problem is not in the duration, but in the content16:03
cdentDo we think of our customers as the people who use openstack, or the people who package it, or the people who deploy it? We talk about all of them, but when it gets into the details it's usually just one of them for any given decision. That's complicated.16:04
* TheJulia wonders if openstack were to plan/execute in 3 month increments for a 1 year stable release, with the expectation that new features could be available along the way...16:04
fungii tend to think of all of those as customers16:04
cdentIf our customers are 1 to 2 years behind, already, then they will be even further behind on a 1 year cycle.16:04
EmilienMin that case, "customers" means all of us, devs, ops, users16:04
fungior "users"16:04
mrhillsmani like the model at rackspace, probably at some other companies too, that everyone is a customer16:04
cdentYes, I agree everyone is. The issue isn't that. It's that we don't remember that in the details of decision making.16:05
EmilienMOpenStack Infra is a big customer16:05
mrhillsmanbut you will not satisfy 100% of the customers 100% of the time16:05
mrhillsman:(16:05
ttxTheJulia: that's a bit like what Nick Barcet proposes on that thread16:05
fungiwell, i think it's a cop-out to say that everyone's a customer who should be treated with equal priority16:05
EmilienMimagine we wait one year to get an update on the features provided by public clouds that Infra is using16:05
TheJuliattx: \o/16:05
fungibut yes we still need to keep all of them in mind when making decisions16:05
EmilienMinstead of having small updates every month16:06
mrhillsmanfungi: agreed16:06
ttxTheJulia: longer cycles, more coordinated releases, but only one that gets a stable branch per cycle16:06
ttxbasically just do the same thing we are doing, but only have stable branches / PTL / PTG / Goals once per year16:06
* TheJulia thinks it is just time to take the car to the dealership for repair work and read the mailing list16:06
mriedemask the mechanic what he thinks16:07
dtantsurthe question holds: who is using the intermediary releases?16:07
mriedemdtantsur: no one16:07
cdentmriedem++16:07
TheJuliamriedem: sure! :)16:07
dansmithnor would they in a 1-year cycle, which means they're just as useless as they are now, IMHO16:07
ttxdtantsur: define intermediary releases16:07
smcginnismrhillsman: How do you know it's a he?16:07
smcginnis:P16:07
EmilienMdtantsur: excellent question16:07
dtantsurttx: in a sense of cycle-with-intermediary (do I spell it right?)16:07
TheJuliadtantsur: then the question is, who would use it if it was available across the board16:07
dtantsurlike what ironic is doign nowadays16:07
mrhillsmansmcginnis: touche16:08
ttxdtantsur: depends. Swift ones are used16:08
mriedemthe *only* reason i think nova would do intermediate releases is so we can do a freeze schedule a few times within the year coordinated release16:08
ttxdtantsur: also peripheral projects ones are used, since they have depends only one way16:08
dtantsurTheJulia: well, if they get bug fix backports (at least security) and upgrade support - sure16:08
mriedemso we can impose deadlines16:08
TheJuliawe know people have used intermediary ironic releases, but largely stand-alone users who are installing what is the current latest available "release"16:08
EmilienMlet's be realistic, most products or deployments don't use intermediate releases16:08
ttxdtantsur: most of the other central things like Nova don't do intermediary releases anyway, so yeah they aren't used16:09
dtantsurhow is swift supporting their intermediary releases? do they provide bug fixes?16:09
mriedemswift is also a standalone thing16:09
mriedemso intermediate releases makes more sense there16:09
EmilienMI invited mnaser to join here, he's building one of the biggest OpenStack clouds in Canada. Let's ask him what he uses.16:09
*** mnaser has joined #openstack-tc16:09
mnasero/16:09
mriedemas noted in the thread, you can run mixed versions of the services and it should be fine - we don't CI that way, so it's a risk, but it's just not something that distros package that way either16:09
EmilienMmnaser: welcome here, I have a question for you. How do you deploy OpenStack? From final releases or from intermediate releases?16:09
ttxmnaser welcome16:09
mrhillsmanyeah, mentioned vexxhost earlier16:09
mrhillsmanwondering how he did that Pike upgrade 2 weeks after release :)16:10
ttxmnaser: does vexxhost even use a component that releases intermediary things ?16:10
fungidtantsur: my understanding, from a vmt perspective, is that the only reason swift has stable branches is for critical security fixes. and even then they strive for 100% backward compatibility so they'd prefer users just upgrade to latest16:11
ttxswift maybe?16:11
mnaserSo we upgrade to major releases when they are out to avoid being in a situation where we are behind and need to do a whole bunch of upgrades.16:11
dtantsurfungi: that's a fair answer, but I'm not sure "upgrade to latest" is something we're ready to tell people16:11
dansmithmnaser is also a good example of keeping up with the releases16:11
dansmitha poster child, if you will16:12
mnaserThe only reason we would upgrade to an intermediate release is if it contains a bug fix.16:12
*** mgagne has joined #openstack-tc16:12
dtantsurmnaser: major in a sense of sem-ver? or in a sense of named releases?16:12
pabelangeris there an easy way to see the projects that are using the cycle-with-intermediary tag? I'm using grep right now, but not sure if that is published to the web someplace16:12
EmilienMdansmith: with major releases, not intermediary16:12
fungidtantsur: for some set of "we" anyway... i mean there are already a ton of free software projects which say exactly that. if you report a bug many will close it and ask you to try to reproduce with the latest release16:12
EmilienMmnaser: thanks for your feedback!16:12
mnaserIf we run into a bug, we run stable branch until a release is cut16:12
dtantsuron governance.o.o you can get a list of tags, I think16:12
dansmithEmilienM: right, but my point is, he keeps up with the major releases and thus I expect isn't overly concerned about the yearly releases alleviating pain16:12
dansmithpresumably maybe he likes it less because he gets features less often16:13
mnaserThen we go back to the release which includes the bug fix16:13
ttxsaying nobody ever deploys intermediary releases of their keystone/nova/cinder/neutron deployment sounds a bit disingenuous, since those don't do intermediary releases at all :)16:13
EmilienMdansmith: good point16:13
mrhillsmanpabelanger there is, i remember seeing i think john post a one-liner to email16:13
mrhillsmanhe parsed releases repo i think...16:13
mnaserI hope that clears it out from our side16:14
mnaserI’d be happy to answer any other questions16:14
EmilienMyes it does, thanks16:14
mriedemttx: http://eavesdrop.openstack.org/irclogs/%23openstack-operators/%23openstack-operators.2017-12-13.log.html#t2017-12-13T17:44:2816:14
smcginnismnaser: Thank you, that's useful info.16:14
mriedemttx: i framed the intermediate question to ops yesterday,16:14
mnaserBut I wouldn’t like a 1 year release cycle though :( it would slow things down a lot imho16:14
dmsimardmnaser: you consume packages from RDO right ?16:14
mriedemin the form of - do you actually pick up the stable branch patch releases16:14
mriedemas a test16:14
ttxmnaser: do you use any component that actually does intermediary releases ? Swift maybe ?16:14
fungittx: i recall at one point zigo was packaging the milestone tags for debian/unstable (or maybe those were going into experimental)... not sure if any other distros did the same16:14
mnaserFrom a “can I get this into the next release” perspective .. 1 year is so long16:15
dansmithmnaser: two if you miss the first one16:15
persiafungi: I believe that was experimental16:15
ttxmriedem: oh, so you mean stable point releases, not intermediary releases ?16:15
EmilienMhow could mnaser continue to report so many bugs and so much good feedback if we wait one year to provide him a major release?16:15
mnaserAnd if those intermediary releases become ones with features, it makes it even harder to upgrade, which brings us back to square one16:15
mrhillsman  44 release-model: cycle-trailing16:15
fungipersia: yeah, i think you're right. so that he could get security fixes for our release versions through unstable into testing naturally rather than via tpu16:15
tdasilvapabelanger: just for reference: https://wiki.openstack.org/wiki/Swift/version_map16:15
mrhillsman 147 release-model: cycle-with-intermediary16:15
mrhillsman  37 release-model: cycle-with-milestones16:16
mrhillsman   2 release-model: untagged16:16
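The counts mrhillsman pasted read like the output of a grep/uniq pipeline over the deliverable files in the openstack/releases repo; a minimal, self-contained sketch of that kind of tally (using made-up deliverable snippets rather than the real repo contents) might look like:

```python
from collections import Counter

# Hypothetical stand-ins for deliverable YAML files in the
# openstack/releases repo; only the release-model line matters here.
deliverables = [
    "launchpad: nova\nrelease-model: cycle-with-milestones\n",
    "launchpad: swift\nrelease-model: cycle-with-intermediary\n",
    "launchpad: ironic\nrelease-model: cycle-with-intermediary\n",
    "launchpad: tripleo\nrelease-model: cycle-trailing\n",
]

# Tally the release-model values, mimicking `grep | sort | uniq -c`.
counts = Counter(
    line.split(":", 1)[1].strip()
    for text in deliverables
    for line in text.splitlines()
    if line.startswith("release-model:")
)

for model, n in counts.most_common():
    print(f"{n:4d} release-model: {model}")
```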
dmsimardfungi, ttx: FYI RDO packages trunk continuously -- every commit that lands in master and stable branches are immediately packaged and mirrored for consumption, regardless of tags or milestones16:16
EmilienMmnaser: right and we couldn't guarantee to CI all cycle-with-intermediary together16:16
mrhillsman[openstack-dev] Upstream LTS Releases - John Dickinson on Nov 1016:16
mnaserSorry I’m on mobile I’m a bit slow on answers16:16
fungidmsimard: so presumably rdo would have little use for milestone tags/intermediate releases anyway?16:16
mnaserBut yes we use rdo stable branches and I trust running stable branches 100% because I trust the OpenStack and rdo ci16:17
mriedemttx: yes the closest nova gets to "intermediate releases" is stable point releases16:17
mriedemso i asked if ops even pick those up16:17
mriedemsince they should,16:17
mnaserThe stable point releases are just to have an easy number16:17
mriedembecause severe bug fixes16:17
dmsimardfungi: from our perspective, our automation already ships tagged releases to stable mirrors, whether there's more or less tags doesn't mean much for us16:17
pabelangerEmilienM: I would imagine switching to CD deployments for new feature branch16:17
EmilienMmnaser: if you have time today, you can reply to the public thread about this topic, so we get your feedback recorded on the ML as well. Thanks16:18
EmilienMpabelanger: easy to say...16:18
mnaserI will try to take sometime to do a write up16:18
dmsimardmnaser: thanks!16:18
mnaserAlso honestly OpenStack just can't be deployed with CD imho16:18
mnaserIt’s too complex. Too many moving parts.16:18
pabelangerEmilienM: of course, why I think it would be an important project in openstack to actually manage a cloud used by nodepool and CD openstack16:18
mnaserAnd that weird workaround you did 3 years ago will break your next upgrade in some unexpected way16:19
pabelangerEmilienM: to show it is possible and create a feedback loop16:19
dmsimardIs there even any operator still *really* operating a production environment on a CD basis off of master ? Last I heard (although I don't know for sure) even RAX doesn't do that anymore.16:19
mgagnefrom ops perspective, I more or less don't care about release cadence as long as I have a way to catch up/fast-forward to latest versions. Upgrading has so far taken up to 1 year to complete and we skipped a version, which some projects strongly recommend against.16:19
EmilienMmgagne: what deliverables do you deploy? Major releases?16:20
pabelangerdmsimard: I am not sure, osic-cloud8 is the model I always point to, mrhillsman16:20
mgagneand about CD, to reflect what has been said on the mailinglist already: wet dream. I think I would be the first in line to want to implement CD but I just can't. We don't have the resources.16:20
smcginnisI think we need to drop the idea of CD, but I've stated that before and been yelled at.16:20
EmilienMmgagne: good feedback.16:20
dmsimardpabelanger: osic-cloud as the cloud that we had for nodepool ?16:20
EmilienMsmcginnis++16:20
cdentsmcginnis++16:20
mnaserGiven that, I can imagine the amount of work involved to test FF upgrades, and i don't think anyone is putting in the resources to do it :(16:20
smcginnisIMO, it causes more problems than benefits.16:20
mgagneEmilienM: I deploy latest version of a major release at the time we decided to upgrade and then we are more or less stuck to that version until next upgrade (we cherry-pick bug fixes)16:20
cdentwhatever happened with mordred's thread of packaging the services to pypi?16:21
EmilienMmgagne: thanks for the info16:21
edleafesmcginnis: agree, but CD is like a religion in Nova16:21
mrhillsmanyeah, osic-cloud8 was based on our CD stuff16:21
mriedemdropping the idea of CD support means allowing changes knowing they are backward incompatible and broken with the assumption you will fix them before you release16:21
mriedemand ^ is dangerous16:21
fungicdent: my take was that there was a minimal amount of feedback but no real strong arguments against it16:21
EmilienMso today we know that 2 (big) public clouds are 1) deployming major releases 2) aren't interested by extending a cycle to one year.16:21
mrhillsmanhonestly the CD was helping to increase feedback and work for osic folks16:22
mnaserWe don’t ever run anything other than upstream, we usually make sure things are merged and approved to stable branches16:22
* dansmith yells at smcginnis 16:22
mnaserWhich is nice because it makes sure nova cores approve our bug fixes before we break things.16:22
smcginnis:)16:22
ttxEmilienM: trying to make sure I captured your concern correctly -- you're saying that 6 months is still the sweet spot between what we can deliver with proper bells and whistles (coordinated, stable branch, etc),  and that people do not consume things that don't have those bells and whistles16:22
mriedemholding to a CD standard, regardless of whether or not people are using it, improves stability16:22
mriedembecause you actually make people think about not breaking shit16:22
mrhillsmanwe were doing at least nightly deployments from master and working to do deployments every 6-8 hours16:22
mgagnemriedem: what's CD for you? deploying in a production environment? Or running tempest or whatever against each change?16:22
fungicdent: and ultimately we'd like services released to pypi because it massively simplifies some sorts of testing to not treat them as special16:22
ttxEmilienM: and anything longer would therefore mean longer feedback cycles, which is bad16:22
mnaserI like 6 months because it lets us pace ourselves relatively well.  Upgrading every year leaves such a huge period of time for things to go wrong in transitions16:23
EmilienMttx: you got it16:23
dansmithmriedem: ++16:23
EmilienMttx: and we should rather work together at improving our planning methods16:23
mriedemmgagne: pre-prod syncing changes downstream daily (or more than once per day), CI on that sync, and eventually, when you're ready, publish to prod16:23
mnaserLet’s say nova release wants to roll out cells v2 and wants to pace out the roll out over several releases to make it easier for consumers to upgrade16:23
mnaserWith 1 year we would have a one big cutover where so much can go wrong16:23
smcginnismriedem: Doesn't sound dangerous to me. Sounds like you can actually do things with ability to not have to make a lot of design compromises to fix things because someone somewhere might have picked up the bad code.16:23
pabelangerdmsimard: yes, we had 2 at one point. As I understand it osic-cloud1 was the slow-moving, production cloud, and osic-cloud8 was the faster-moving, master cloud. Find issues in osic-cloud8, then propose fixes to osic-cloud1 when upgrading16:23
mrhillsmanwe were using openstack-ansible to deploy, pulling master branches every X hours, and standing up what is traditionally deployed in production for folks starting off as rax private cloud customers16:23
mnaserOr things can take a really long time to happen16:23
dansmithmnaser: indeed, larger spans mean a much larger speed bump at each release16:24
mrhillsman22 physical nodes basically, ha, etc, the reference architecture publicly available16:24
mnaserExample: the transition to adding the placement service would either have taken 2 years to do smoothly16:24
mriedemsmcginnis: then we shouldn't do microversions either16:24
EmilienMI'm pretty sure y'all read it before, but I can't resist to share it again here : http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s04.html16:24
mnaserOr it would have been one big change that could go very wrong16:24
mgagnemriedem: ok, well I guess I'm not the one that will be performing CD =)16:24
smcginnismriedem: Totally different. But we should just have one microversion bump per release IMO.16:24
mnaserI much rather more smaller steps rather than large bigger ones.16:24
dmsimardmriedem: I think there's a distinction between landing stuff that isn't broken (knowing you'll fix it later) and *actually* continuously updating your production environment with real customers and real SLAs. We see it first hand when RAX live migrates VMs in openstack-infra, some of the nodes become unresponsive, etc.16:24
ttxEmilienM: we have a lot of people skipping releases because they can't upgrade every 6 months though. Like SUSE doesn't even package more than a release per year16:24
ttxI think the good answer there is skip-upgrade16:25
dmsimardmriedem: doing CD would mean updating the production without impacting the customers and if you're able to do that, please tell me how16:25
EmilienMttx: I think the good answer is FFU (Fast Forward Upgrades)16:25
mrhillsmanand we were using rally to benchmark and test the deployment16:25
ttxEmilienM: right, what I meant16:25
persiaskip-upgrade is indeed the answer to vendor distribution solutions that are less frequent than deployment cycles.16:25
mgagnettx: if we can get people onboard the skip-upgrade/fast-forward ugprade, I'm all in.16:25
EmilienMttx: you can't skip anything, but you can go faster16:25
pabelangermriedem: ++, I don't think we can drop CD also16:25
mrhillsmanand working closely on the effort to push skip-level upgrades which i believe is now baked into openstack-ansible16:26
mrhillsmanor ffe, cannot remember16:26
mnaserI don’t think anyone is against FFU. I think the problem is who is willing to dedicate and commit the resources to work on the CI needed to manage it.16:26
mrhillsmanffu16:26
EmilienMFFU16:26
EmilienMand not only OSA16:26
EmilienMbut tripleo, kolla, etc16:26
EmilienMFFU is a real thing, that both devs & ops would like16:26
mnaserAnd it’s very nice but if the people who need it don’t put down the resources to do it, then I can’t say much about it :(16:27
mgagnes/like/need/16:27
ttxEmilienM: I'd like to get to the bottom of why people would not consume releases that don't come with a stable branch though16:27
EmilienMttx: because they aren't tested properly?16:27
mgagneI *can't* afford NOT skipping version.16:27
ttxEmilienM: what makes them less tested ?16:27
dmsimardttx: if there are tags instead of branches, this means there no expectations of backports of any kind ?16:27
fungialso ffu testing upstream i think depends on us to solve some other challenges related to (or perhaps even involving) lts since it's going to be quite hard to test upgrading from a version we can no longer land fixes in16:28
*** kumarmn has joined #openstack-tc16:28
ttxdmsimard: yes16:28
mnaserMaybe this is my business mind talking but at the end of the day we all come from organizations with specific business requirements that we work on OpenStack to deliver16:28
dmsimardttx: then I can see that as a hindrance to that model -- maybe I don't want to upgrade to a new milestone or release but want to benefit from bugfixes to improve the stability of my environment16:29
mnaserI don’t think anyone has ever blocked anything that people haven’t put the effort into doing16:29
ttxdmsimard: ok, just checking that it boils down to that16:29
EmilienMttx: I guess it's more complex to understand and test properly (not only upstream but downstream as well)16:29
dmsimardttx: well, that's my opinion, at least16:29
persiattx: In other contexts, where a project doesn't have stable branches, I have watched folk install mirroring infrastructure and maintain private stable branches: the main argument seems to be that folk don't trust internal developers to maintain long-lived branches, which is important in terms of being able to access the specific code used in production when troubleshooting issues.16:30
mnaserSo if X organization needs Y feature, it can work with the others to put up the resources to work together to get that feature tbh.16:30
mgagnettx: most people don't consume them and it's only when a stable branch is cut that (some) people consume it and report bugs.16:30
mnaserpersia: I think those folks should get a bit more involved and instead of duplicating effort maintaining a stable branch, they can do less work with others who can help them.16:31
mnaserBut that’s not always easy to change.16:31
EmilienMttx: one day I would like my organization to ship at every milestone - but we're not here yet. I guess what I'm saying is our current model is fine imho, except we commit too much work in the cycles.16:31
ttxEmilienM: ok that makes sense. I agree that if we can't somehow produce something consumable in the middle of a year-long cycle, then it's just too long for feedback loop16:32
persiamnaser: I share your opinion: the "other contexts" bit was intentional, and related to the fact that some mainline teams are not always downstream friendly (in one extreme case, when a downstream demoed something using code, mainline was deleted from public mirrors, despite licensing).  Where cooperation can be done (like openstack), it should be done: that doesn't mean folk may not still want to be consuming software labeled "stable".16:32
ttxand it sounds like "consumable" implies stable-branch-like treatment for most of the cycle16:32
mriedemmgagne: if most people don't consume stable branches from upstream, then why do people care that we keep the upstream stable branches around longer?16:33
mgagneso... to be that guy: is my employer too poor to afford OpenStack since it can't 100% pay for the human resources it needs for development, maintenance and upgrade and perform CD? =)16:33
persiamgagne: No: your employer is only too poor to afford openstack if it cannot pay for the human resources it needs for maintenance of its business processes using openstack.16:34
cdentmriedem: I've wondered that.16:34
persiamgagne: Key is to make sure that those allocating resources understand that by assigning resources to work closer to trunk, there is more opportunity for shared benefit, so less overall work may be required for any specific feature.16:35
mnaserI don’t think longer cycles will help those who can’t keep up with upgrades do them any more often16:35
mgagnepersia: welcome to the enterprise world, where money isn't free. Asking for more isn't working here.16:35
dmsimardOpenStack is free </s>16:35
mriedemi assume the answer is so that when you hit a problem downstream on N-2 branch, hopefully someone has already fixed it for you upstream16:35
mriedemmnaser: totally agree16:35
mgagnepersia: sorry, those people don't care about your argument, I tried already.16:35
mriedemmnaser: it's the opposite, IMO, it supports people more that aren't upgrading as often16:35
persiamgagne: My sympathies.16:35
mnaserI think inherently if you look at how much money you’ve saved by getting OpenStack for “free”, the small resources you put into upgrading it and keeping it healthy are pennies compared to what you’re getting.16:36
pabelangermnaser: I tend to agree, I think all it does is push back the issue another 6 months. However, maybe that 6 months is what people need?16:36
mgagnepersia: thanks. now I would like people to understand that I'm not alone in this kind of boat and that I'm doing my best to do what I can with what I'm given.16:36
mrhillsman^ that statement could be made for a lot of things16:36
mrhillsmanmnaser16:36
mnaserI think more time means that the OpenStack install will be even more out of date when upgrade time happens16:37
mnaserAnd then more issues will happen in the upgrade because it will be such a big upgrade16:37
mnaserUnless we slow down the pace of development but that’s a deterrent for OpenStack.16:37
cdentWe've strayed a long long long way from helping part-time contributors. Any ideas on that?16:37
mriedemyes - unless we have self-imposed stability periods within that yearly cycle16:37
mgagnelike I said, I don't care about the release cadence as long as I can skip/FFU because sometimes I will have the "budget" to upgrade. sometimes I won't and will need to play catch up later.16:38
persiamgagne: I don't understand your position.  I have trouble believing it is either "someone else should do all the work and I still get the latest stuff optimised for me" or "don't bother working on openstack: we're looking for something else."16:38
smcginnisMaybe the more frequent release idea would make it easier for part time contributors because there would be less pressure.16:38
mnasercdent: I kinda jumped into the conversation. I guess you’d like to bring the topic at hand.16:38
mgagnepersia: I used to contribute upstream, ask EmilienM. I just don't have the budget anymore.16:38
pabelangercdent: I think the more we talk about it, I agree it seems less to be about part-time contributors16:38
cdentpabelanger: yes, except that was the original justification, so we're on a strange path16:39
EmilienMpersia: mgagne was one of the main contributors to Puppet OpenStack modules16:39
mnaserI do think part time contributors would be disappointed that if they needed something it might be a year till it’s released, which will increase the likeliness of them maintaining their own branch.16:39
cdentmnaser++16:39
persiamgagne: Sorry if I rubbed salt in wounds.16:39
pabelangercdent: yes, I agree. As I listen more to the discussions, it feels like we need to keep reminding ourself of that. not sure if that is good or bad16:40
persiaBut I still believe that organisations that put significant funding into downstream work could likely get more features faster for less investment (this means firing folk) working closer upstream.  Working further downstream keeps more folk employed, but maybe means mainline moves more slowly.16:40
EmilienMmnaser: good feedback16:41
mriedema yearly cycle does not help a part time contributor unless *everyone* slows down development16:41
EmilienMmnaser: iirc, this proposal was made for part time contributors16:41
mriedemand a yearly cycle does not mean your low priority vendor specific complicated feature that no one wants magically gets priority now16:41
EmilienMmriedem: exactly. Planning.16:41
dmsimardpersia: that's assuming there is any downstream work at all16:41
mriedemwe just have more time to not care about it16:41
mgagneyea... but life isn't always giving you that opportunity or surrounding you with like-minded people.16:41
EmilienMmriedem: and we can do planning in 6 months or in 3 months even. Not sure why extending to 1 year is needed16:41
ttxmriedem: it helps them write code within the timeframe of one cycle16:42
EmilienMttx: then split the code in smaller pieces16:42
mriedemright, smaller patches,16:42
mriedemand,16:42
mnasermgagne: im sorry that you're in that position, i understand that there's not much that you can do (and its not what you'd like to do either)16:42
persiadmsimard: If there is no downstream work, doesn't it cease to matter, as the organisation isn't using openstack?  If nothing else, I would expect some minor integration and usage.16:42
dtantsurin my practice, writing code is not an issue. getting it reviewed - is. agreeing on a design - sometimes as well16:42
EmilienMmaybe developers aren't good at designing a feature into small chunks16:42
mriedemttx: it's irrelevant16:42
mriedemttx: you don't get it in the 6 month cycle because no one cares, or you don't get it in the 12 month cycle because no one cares16:42
smcginnisdtantsur: ++16:43
fungimriedem: i think a missing part of the problem statement is that development pace _is_ slowing, and so maybe it makes sense to slow our release cadence as well16:43
cdentfungi: it may be slowing in some places, but it certainly doesn't feel like the pressure is lightening in nova16:43
mriedemif development pace is slowing, then shouldn't upgrades be easier?16:43
mriedembecause less churn?16:43
fungimriedem: maybe it doesn't seem to a lot of us who work 100% (or nearly so) upstream because there are fewer and fewer of us to keep up with the load?16:43
* mwhahaha doesn't believe things are slowing down in all areas16:43
* dims paying attention16:43
mgagnemnaser: thanks. FFU is the only way for me to catch up and maybe free up time so I can contribute. (I still contribute when I find bugs or need a feature) Otherwise I'm just stuck, they won't hire people. That's life.16:43
dmsimardpersia: the fact that an operator integrates openstack with their customer portal or whatever has no value upstream16:44
mriedemin which specific projects does a yearly cycle help them?16:44
mriedemglance?16:44
mgagnemriedem: one of the aspects slowing down my upgrades is *major* changes in architecture like cellsv2 (because we had the *great* idea to use cellsv1)16:45
ttxDrop data: https://etherpad.openstack.org/p/srxw36lNbL16:45
EmilienMmwhahaha: right I want to know which major projects are slowing down16:45
fungiit does seem to me like openstack is continuing to put pressure on itself to deliver the same amounts of feature and bugfix velocity, but with fewer and fewer people to do it16:45
ttxmwhahaha: ^16:45
mriedemmgagne: yes i understand that part of your particular pain16:45
persiadmsimard: No, but it's still "work", in the sense that the organisation that does that (usually) must pay for it to be done.  Where there is integration upstream (as in some of the cooperative goals the public cloud team is trying to accomplish), the amount of work per organisation is reduced.  Org response might be "hire more folk to add more features faster", but is likely "let folk go: this runs smoothly".16:45
dtantsurttx: I'd question this data16:45
mgagneit's a *serious* speed bump in the road for us16:45
mriedemmgagne: which we've talked about a few times :)16:45
ttxEmilienM: around 30% drop over past year16:45
dtantsurttx: was just about to say that ironic is heating up still16:45
mriedemfungi: i agree with that16:46
mriedemhowever,16:46
mgagnemriedem: yes, I appreciate your help with it. getting there, slowly ;)16:46
mriedemi've been tracking nova bp output since newton,16:46
mriedemand it's steadily going down each release with the loss of major key contributors, as one would expet16:46
mwhahahattx: considering we had resources go away (companies bailing), i don't think that means there's less work16:46
mriedem*expect16:46
EmilienMttx: the public thread is huge imho. not sure how we're going to reach consensus here16:46
ttxdtantsur: I can give you detailed numbers for Ironic to back that up:)16:46
mriedemhowever, we're still cranking out like 50 bps per release16:46
dtantsurttx: I trust you have numbers right :) I'm not sure these numbers have practical sense16:47
mwhahahai'd rather we land 3 features every 6 months and get it all tested for upgrades, updates, bugs, etc than push that out to 4 features over 12 months16:47
TheJuliamriedem: the mechanic did not grok the question16:47
dtantsurttx: I don't see pressure reducing, even though we keep the team roughly the same size16:47
dansmithdtantsur: agree.. a commit is not a uniform thing16:47
mriedemTheJulia: damn his hairy hide!16:47
ttxnumber of contributors, patchsets proposed, commits merged16:47
EmilienMmwhahaha: or 6 not upgraded tested16:47
mwhahahai don't see a push to improve the user experience in the existing features so unless that would be included in this extra 6 months to polish stuff, i see it as more half-baked things implemented16:47
mgagneEmilienM: hehe16:47
fungiTheJulia: in fairness, i don't yet entirely grok the question either ;)16:48
mriedemmwhahaha: agree16:48
dansmithnova has had such a hugely long tail of contributors that losing 40% of them doesn't seem significant to me in terms of overall project activity16:48
TheJuliamriedem: I'm not sure that is the look she was going for, honestly16:48
TheJuliafungi: good point16:48
mriedemdansmith: not sure i follow that statement16:49
mriedemi'd say losing alaski, sdague and johnthetubaguy are significant impacts16:49
ttxdansmith: so you'd say you haven't lost much "core" activity ?16:49
ttxhm what mriedem said16:49
dansmithttx: is this number covering cores or number of people with a commit during a cycle?16:49
mriedemi think we are still putting stuff out, at the expense of fewer people shouldering the burden to do it16:49
ttxdansmith: it's covering number of patchsets proposed, but also number of commits merged16:50
dansmithmy point being that if we have 1000 contributors and 500 (or more) of them are one-shot snowflake patches, that losing those contributors does not mean the project is slowing down or dropping in activity16:50
dtantsurttx: the number of changes merged reducing may mean that pressure is increasing, not decreasing16:50
cdentdansmith++16:50
fungii continue to get the impression that the remaining active contributors to a lot of these projects don't perceive a drop in activity because it's meant consistent (or increasing) amounts of work for them as contributors continue to peel away16:50
dansmithttx: right, and that seems like a nonsense metric, so I was arguing about the contributor one16:50
mriedemdansmith: agree with that16:50
dtantsurit may mean that we're not coping with the work. or it may mean that we have less work.16:50
dansmithttx: our changes have been getting much more complex over time16:50
dansmithttx: so commit rate dropping doesn't mean much to me16:51
ttxdansmith: indeed16:51
mriedemfungi: i fully perceive it16:51
mriedema 1 year cycle, however, doesn't fix that imo16:51
ttxI like to think that the 30% drop is more a sign of maturity. Things above that drop are a concern though16:51
dtantsurI like to think that too :)16:52
mriedemthere are always going to be contributors that are simply just more interested or engaged in a project and that fuels their desire to want to work on it16:52
fungimriedem: yeah, i didn't mean to imply that changing the cycle length necessarily solves the contribution drop, more than as development velocity slows it may simply be natural to slow our coordination points as well16:52
mriedemcycle length doesn't change that16:52
pabelangermriedem: yes, I agree with that16:52
mriedemfungi: ok, but dev velocity isn't slowing in nova in dramatic ways16:52
mrhillsmani think the discussion is cyclical re 1 yr cycle when focusing only on dev aspect16:52
mrhillsmanaspects16:53
dtantsurfungi: then we should talk about some 'load' metric, like # of features per contributor16:53
mriedemfungi: which is why i asked which specific projects are getting killed by a 6 month cycle16:53
mriedemthe deployment/packaging projects i can understand16:53
dtantsurthis is something I feel is not decreasing too much16:53
mriedembut which actual service projects?16:53
cdentfungi: you used the dreaded "just" word in your email! For shame.16:53
mrhillsmani think where it does not change/hurt anything, those should not be basis for y/n16:53
*** kumarmn_ has joined #openstack-tc16:53
mrhillsman0 review points :)16:53
ttxmriedem: how much activity is lost to boilerplate activities linked to the cycle ? Would you say that's negligible ?16:53
mriedemyes16:54
mrhillsmannothing changes, just good info, fodder even at times, but i think a pro|con, helps|hurts chart may be good to compose or ask in a survey16:54
mriedemi don't know who in nova is doing boilerplate besides me16:54
mriedemif boilerplate == admin stuff16:54
dansmithmaybe he means things like spec/code deadlines?16:54
mriedemif boilerplate is coming up with talks for summits, then sure16:54
fungicdent: oh, so i did! for shame :/16:54
ttxmriedem: well it's also feature freeze and other deadlines happening twice a year16:54
ttxand PTG prep16:55
mriedemttx: if anything, feature freeze increases activity16:55
dansmithptg prep, lol16:55
dansmithmriedem: ++16:55
mriedemif we don't have a freeze deadline,16:55
mriedemwe'll all be fishing16:55
dansmithone FF per year would mean much less activity than two, IMHO16:55
dtantsurbut also more pressure from people who see their features missing the deadline16:55
dansmithto the point that we'd _have_ to have another FF per cycle in order to stimulate things, I think16:55
mriedemdansmith: yes i totally think nova would have to do that16:56
ttxdansmith: yeah, that was a question I had actually16:56
mriedemi've said that a few times in the thread and in here i think16:56
dansmithmriedem: but it'll be harder, because we won't have an integrated deadline to hold to16:56
dansmithmriedem: which means we'll be pressured to slip it forever16:56
mriedemnova will still likely have a release at least every 6 months16:56
*** kumarmn has quit IRC16:56
mriedemeven if no one consumes it16:56
mriedemi'm not sure how we'd CI that thing wrt upgrades though16:56
mriedemwhat our support statement would be16:57
dansmithbut it'll be a lot of work for just that deadline, because we can't deprecate anything on that intermediate release16:57
dansmithso it'll be a release just to force the activity, but nobody will run it16:57
dansmith_that_ seems like more wasted busywork to me16:57
dtantsurthat's what we do: intermediary releases to keep us in shape (minus upgrades ofc)16:57
mriedemit would also unfreeze for new specs for the 2nd half16:57
ttxTo come back to what EmilienM said earlier about better planning -- what do you need to do that ? More time ? :)16:57
mriedemi don't think we necessarily need more time to plan16:58
ttx(my proposal implies that relaxing the pressure would lead to better org)16:58
mriedemwe need more quality contributors to review and test code16:58
mriedemand operators and users to give feedback <2 years16:58
EmilienMwe already have the PTG16:58
EmilienMwho does (actual) planning at PTG?16:58
ttxEmilienM: and yet as you said we need to get better at planning16:59
EmilienMlike, listing all features, evaluating the work, and tasking16:59
fungiEmilienM: infra did16:59
pabelangeryah, infra has some ptg planing for sure16:59
mriedemwe did some planning in denver,16:59
ttxThe release team does planning at PTG. But then we are small16:59
fungibut then again, infra doesn't produce deliverables involved in the coordinated release16:59
mriedembut it was mostly about priorities16:59
EmilienMgood, so some projects do it16:59
dtantsurEmilienM: kind-of planning, yes16:59
mriedemptg is about discussing designs and issues for nova, and coming up with what we think we can do in the release and what is our priority list16:59
EmilienMdo you feel we plan too much?16:59
dansmithmriedem: I call that planning :)16:59
pabelangerfungi: agree, but with a few people it wasn't too painful I think16:59
EmilienMdo we overcommit *sometimes* ?16:59
dtantsurEmilienM: always17:00
EmilienM(innocent question) :-)17:00
EmilienMright. So this is the problem.17:00
EmilienMextending the cycle to one year won't solve this one17:00
dtantsurwe nearly do it on purpose to be able to be flexible in case something gets delayed17:00
dtantsurfwiw I don't see it as a problem17:00
EmilienMwe'll commit too much for one year and then why not extending to 2 years lol17:00
ttxEmilienM: so we need to get better qualitatively at planning, not necessarily quantitatively ?17:00
EmilienMyou know what, we have a ton of features, let's release in 5 years17:00
fungiEmilienM: i would say from the infra team perspective, release cycle planning is more about "when would be the least disruptive time to the developer community for us to make major changes to the infrastructure (lengthy outages for needed upgrades, et cetera)"17:00
EmilienMttx: exactly17:01
mnaseri'm ready for a flood of answers but -- what's wrong with the current cycle?17:01
EmilienMand train developers to break down features better, so we can land pieces every cycle17:01
EmilienMand not postpone a whole feature to the next cycle17:01
cdentfungi: you're in a unique (in this context) situation where the distance and path between you and your "customer" is close and relatively clear.17:01
cdentthe feedback loops are quite tight, that's not the case otherwise17:01
ttxmnaser: I had lots of reports that our rhythm was too fast for part-time devs to jump on the train17:02
EmilienMmnaser: imho, the only thing wrong right now, is we overcommit every cycle17:02
pabelangerEmilienM: so, with 6months window today, what would you think is needed not to overcommit in rocky? Have PTL impose some limit on BP? Or some sort of cutoff timeline for features to be done then drop into S release17:02
mnaserpart time devs meaning, able to commit little # of hours OR come-and-fix-one-thing-and-go ?17:02
ttxmnaser: the former17:02
mgagnettx: is it because they couldn't complete a feature within the 6 months window?17:02
dansmithttx: I've never understood why the cycle and rhythm means anything for a 20%er17:02
fungicdent: yes, and it also means that we get the luxury of being able to rely on the release schedule to mostly divine when teams will have the most time to adjust to major changes or outages17:02
EmilienMpabelanger: not only PTL, but the whole team buying it17:03
ttxdansmith: for a part-time PTL it does17:03
mriedemhow many projects are in that bucket?17:03
dansmithttx: so we're looking to change the whole thing so that we can facilitate part-time PTLs?17:03
mriedemglance, trove, what else?17:03
EmilienMmriedem said it, it's hard for a PTL to push back on features from other contributors17:03
ttxmriedem: more and more17:03
EmilienMmriedem: if I understood correctly17:03
fungittx: i wonder to what extent part-time contributors are perceiving the release cadence to drive change velocity, and it's actually the latter which makes getting involved hard?17:03
dansmithbecause that seems like exactly what I said on the ML: aiming for part-time participation of everything, which means openstack never moves past where we are today, IMHO17:03
mnasernow question -- are part time contributors involved in huge feature changes that will be the sort of change that spans few months?17:03
EmilienMit's a culture and education thing, we need to push17:03
ttxfungi: yeah, maybe17:04
mgagnebecause if there is overcommitting like EmilienM said, I suspect this could also impact review time and therefore delay merge for those part-time contributors.17:04
mnaseri feel like big changes that take a long time to complete are more taken up by full time contributors17:04
mriedemmnaser: totally17:04
mriedemex: placement and cellsv217:04
EmilienMsometimes I see folks frustrated because their 2000+ LOC patch doesn't land on time for a cycle17:04
EmilienMwell, first maybe split it?17:04
mriedemha yes!17:04
mgagneand like mnaser, need also to see what's the size of the changes contributed by those part-time contributors17:04
dtantsur++++++++ for split it up17:04
pabelangerEmilienM: right, so would it make sense to have teams cut back X in an effort to simulate having more time to do Y, but in 6 month cycle for Rocky as an experiment17:04
EmilienMwe're not robots17:04
mnaseri think its good to look into how big is the size of patches of these part time contributors17:05
mriedemthere have to be countless summit talks on how to actually get code merged in openstack that we can link people to yes?17:05
mnaserif they're all few lines bug fixes, dont think the year makes a difference to them17:05
ttxI mean, I've been a hard defender of 6-month cycles, but I also feel like we are optimizing for a group that is disappearing, and preventing its replacement by another group, so I've been trying to put myself in their shoes and interviewing lots of them lately17:05
mriedemif there is a bug fix that the majority of ops need, then show up and tell us to prioritize it17:05
mnaserbut (im maybe generalizing here), i dont think part time contributors are the ones delivering 2000 LOC changes17:05
dtantsurI've seen quite big things coming from part-timers in ironic, fwiw17:05
dtantsurincluding 2000 LOC, yes17:06
dansmithttx: I think we're a long ways away from the majority of major projects with a part-time ptl no?17:06
mnaserdtantsur: thats awesome to see17:06
dansmithttx: if we hammer that last nail ahead of time, we're sealing our fate it seems to me17:06
persiaOne of the organisations I work with is reluctant to assign part-timers, because engaging someone with openstack means they are individually important, so it becomes hard to rotate the assignment in a team.17:06
ttxdansmith: I interviewed more than just PTLs17:06
mugsieyeah - a lot of our large changes come from part timers - they work on a feature in isolation, and then push it up en masse17:06
mgagnemnaser: hence, are projects overcommitting themselves? do they have the bandwidth to review? 1 year won't fix it imo, changes will just pile up overtime17:06
EmilienMand it's up to projects to say "no, your patch is too big, this isn't how we work"17:06
dtantsurokay, I'm going to look like a bad person, but the most common cause of delays in big contributions from part-timers is the quality of submissions17:07
ttxdansmith: like people trying to get involved in Nova and realizing they will never have a say in that project because to be core you need to spend 120% of your time onit17:07
EmilienMand say "we don't want you to be frustrated so if you want the code to land, break it down so it gets merged quicker"17:07
dansmithttx: okay, but 20% people, even if cores, don't lead the project right?17:07
mnasermgagne: i agree, changes will pile up.  but as a committer, i ask myself if i want to review the code i just submitted17:07
mnaserand if its not, then i split it up17:07
mgagnemnaser: my code is perfect, of course =)17:07
EmilienMdtantsur: can I be the bad person too? you probably didn't do a good job at training newcomers then17:07
EmilienMdtantsur: do you have coding style documented?17:07
pabelangerhonestly, zuul makes it much easier IMO to split patches over multiple commits, but maybe new developers don't really understand the power that zuul provides or our testing?17:07
EmilienMdtantsur: or an onboarding guide that says what we expect from contributors?17:07
mnasermaybe we need to document that .. "hey, changes should be ideally X size"17:07
dtantsurEmilienM: it's not about code style even. it's about code simply working.17:08
ttxdansmith: I'd like to think people could be core reviewers and part-time contributors.17:08
EmilienMdtantsur: i'm pretty sure if I sent a patch to Ironic right now, you'll -2 my sh**t :-)17:08
EmilienMdtantsur: and you would be right to do it17:08
dtantsurEmilienM: you'd be surprised17:08
ttxdansmith: I realize some projects are a long way from that17:08
ttxbut that's another thread17:08
dansmithttx: I think we have part-time contributor cores17:08
dtantsurEmilienM: I trust you to not send patches that are simply not working and cannot work even in theory17:08
mgagneEmilienM: your change will be forcefully abandoned :D /jk17:08
persiaEmilienM: We should consider that when we write code, some is landable, even as a first intro to a project.17:09
EmilienManyway, what I'm saying is we should train and help our contributors, by giving them the tool and access to knowledge to be better contributors17:09
dtantsurEmilienM: to be clear: this is not about nit-picks in docstrings17:09
EmilienMdtantsur: good, trust is critical here17:09
mnaseri agree with EmilienM.  if the problem is 2000 LOC patches, educate people to split their changes into smaller more reasonable ones17:09
ttxdansmith: people who spend less than one day on Nova per week ?17:09
cdentttx, dansmith : if that's another thread it is a very closely related thread17:09
mriedemttx: yes17:09
mriedemttx: john and sean17:09
EmilienMhow many times I've -1 patches without even looking at what it does, just because 2000LOC?17:09
cdentkenichi?17:09
ttxoh, so emeritus core17:09
dansmithttx: maybe not that part-time, but considering the size and complexity of nova, I think proportionally we're covered there17:09
EmilienMI usually jump on IRC and try to engage with the author17:09
mriedemyeah kenichi more and more17:10
EmilienMand be like "hey, thanks for your work, you're awesome. But the patch is too big, can we split it?"17:10
ttxpeople who used to be 100%17:10
mriedemto be core doesn't mean you have to be full time17:10
EmilienMthat's all, it takes 2 min and it avoids frustration17:10
ttxand can continue helping17:10
mgagneEmilienM: although I have been away from puppet for far too long, zuul helped me catch up stuff I missed in my recent changes. so there is that =)17:10
mriedemit means you have to be good at REVIEWS17:10
ttxI'm more trying to attract new people17:10
mriedemhere is an example, i'm goign to pick on mel17:10
dansmithttx: but, if the goal here is to slow nova to the pace at which a 20%er wants, then we're really just making a choice there and not dealing with any sort of overwhelming reality17:10
mriedemhttp://stackalytics.com/report/contribution/nova/9017:10
dtantsurEmilienM: except that some people think that 5x patches will take 5x time to land.. and occasionally they are right17:10
mriedem56 reviews in 90 days17:10
mriedemhowever, when mel does a review, it's effing solid17:11
mriedemthat's why mel is core17:11
persiaI think attracting new people, retaining old people, and encouraging cross-project contributions are the same problem: the barrier to landing a patch when one isn't current about project internal communications is very high.17:11
mriedemyou can be +1ing changes every day of the week - that doesn't get you on the core team17:11
persiaIf we can reduce the level of engagement required to land things, that would help several sorts of part-time contributors.17:11
mnaserin that same subject, nitpicky reviews will kill engagement of part time developers17:12
mnasera full time committer doesnt mind fixing a typo he made in the comment17:12
mnasera part time dev just gets super frustrated with that17:12
EmilienMdtantsur: not sure about that, but I'm maybe biased by tripleo17:12
pabelangermnaser: why not push up a patch to fix the nitpick? if you know a part-time dev17:12
ttxdansmith: no the goal here was to see if reducing the pressure a bit would lead to better results. I can see that you think it would have more drawbacks than benefits17:12
dtantsurEmilienM: well, at least if CI is not stable (speaking of tripleo ;), it may mean 5x rechecks17:13
ttxwhich is good input, and why I started the thread17:13
dtantsurmnaser: ++++ we need a culture of NOT nitpicking people to death17:13
mnaserpabelanger: we're dealing with people.  ownership and pride goes into "hey i just merged my first change in openstack!"17:13
mnaserit's very normal to us.  it's entirely different when others do it for the first time17:13
persiapabelanger: Why should you have to know the person to do that?17:13
pabelangermnaser: agree, there is a line I would say17:13
EmilienMdtantsur: touché17:14
ttxdansmith: so thanks for sharing17:14
dtantsurwe're trying to practice allowing people to follow-up with fixes for their nitpicks, if the patch is close to landing17:14
EmilienMdtantsur: now I go to cry.17:14
* cdent hugs EmilienM 17:14
dtantsurEmilienM: don't :(17:14
mnaserhell, i remember my first change in nova in 2011.  it was "hey, that's awesome." -- and makes you want to come and do better or soo17:14
EmilienMlol17:14
*** jungleboyj has joined #openstack-tc17:14
ttxmriedem: same -- thanks for caring and sharing17:14
EmilienMttx: what are the next steps? do we summarize today's discussion? the thread is already *huge*17:15
smcginnismriedem is a caring and sharing kind of guy.17:15
ttxI have a more complete (and complex) picture of the problem now17:15
* jungleboyj is wishing I hadn't missed this discussion.17:15
pabelangerpersia: yah, I can see people taking offense at updating a patchset. Thinking of my experiences in infra, we all have enough stackalytics points already so often we just opt for another person to work on a patch if needed to move it along17:15
pabelangeror trivial rebase17:16
mriedemmnaser: agree - and it's totally fine to fix a thing yourself and then +2, or do the follow up - people don't realize that because they don't want to step on toes17:16
EmilienMmgagne, mnaser : thanks for joining us and representing ops world today ;-) it really helped.17:16
ttxEmilienM: let's see if the thread calms down. The discussion here was good, avoided a lot of back-and-forth between you, me and mriedem I think17:16
mgagneEmilienM: =)17:16
mnaseraha, for the fun of it, i found the first commit i ever have in nova17:16
mnaserhow did this ever get merged17:16
mnaserhttps://github.com/openstack/nova/commit/9e7c7706a76ad76612ba75314d436a8ba419a3eb17:16
mnaser:P17:16
EmilienM2011!17:17
mriedemmnaser: because it was a TypeError otherwise17:17
* EmilienM switches on something else now, thanks for the nice chat17:17
mriedemhow it got merged without a TEST is another question17:17
mnasermriedem: https://github.com/openstack/nova/commit/eba9e4abd6271e0265899a2d260b54068d78ee5117:17
mnaser:P17:17
*** alex_xu has quit IRC17:17
persiapabelanger: I think worrying about causing offense means that we end up crushing people who don't know how we work (or have time to do it if they do).  Needing to do a complex rebase because of a typo is a frustrating first experience, and the one likely to be received if we only help those we know.17:17
EmilienMmriedem: it wouldn't happen if at that time we had a one-year cycle. Ok I'm out now ;-)17:18
*** alex_xu has joined #openstack-tc17:18
mnaserbut np, thanks for the discussion.  and I agree mriedem, im all for follow up fixes (funny, thats exactly how i got started)17:18
mnasermerged something with no tests, follow up added tests17:18
TheJuliadtantsur: I think part of the conundrum is different perceptions of "ready to land" :\17:18
mnaserwho knows if i would be here if i was told "go write some tests" :D17:18
mriedempersia: yup - need the core team to help tell people it's not cool to -1 for a typo in a big series or complicated change17:18
mriedemit's a culture thing17:18
dtantsurTheJulia: correct17:18
edleafewin 1317:19
edleafedoh!17:19
dtantsuredleafe: no such window17:19
TheJuliaheh17:19
persiamriedem: I would argue we should go further, and push an update to the change with the typo fixed, expecting the original submitter to participate in review.  The key is preserving the Author: header, so credit goes to the right place when it finally lands.17:20
mriedemsure the author shouldn't change17:21
dtroyerttx: I think I'm unclear on the notion of "attracting new people" in a world where the vast majority of us working on OpenStack are corporate devs who have been assigned here.  Who is making that decision to be attracted?  What I've seen from the 3 companies I've been a part of during this time is that it really is not the individual contributor's decision, except when they decide to move on.17:23
ttxdtroyer: I mean getting ops and users more directly involved upstream17:23
dtroyerI have little direct experience with operators here, maybe that is where this happens?17:24
ttxWe have hundreds/thousands of users; if each committed one part-time dev we'd be in a different story17:24
dtroyerstill though, how many are able to make that decision on their own?17:24
ttxand yet we don't, so I tried to follow that lead17:24
ttxand asked them17:24
cdentdtroyer: I think that's the underlying thing here. The contributor map needs a backup.17:24
cdentbut yes, it's not really an individual decision, is it?17:25
ttxdtroyer: I asked them what is preventing them17:25
ttxa surprising number of them replied: the rhythm is too fast for us to jump on that train17:25
dtroyerttx: the potential contributors or the management of their employers?17:25
cdentI think we way underestimate the extent to which openstack is a joint collaborative operation by enterprises17:25
ttxsome replied: my boss won't allow it17:25
ttxbut not that many17:25
dtroyeralso, how much of this is really only about the Big Three (Nova, Cinder, Neutron)?17:26
dtroyeris the cadence too fast for someone who wants to add a process to Heat?17:26
ttxdtroyer: actually most of the people said that the Big Three are so much out there that they don't even think of jumping on that train. Most were talking about getting involved in smaller projects17:27
TheJuliawait, is it code, features, cadence of process, or context that moves too quickly?17:27
ttxa mixture. I'd say it boils down with keeping up with what's happening17:27
ttxwhich arguably won't get much easier with longer cycles17:28
dtroyerTheJulia: /me thinks back to contributing to other projects over the years, it's just keeping up with only 2-8 hours a week to spend.  None of those were my $DAY_JOB17:28
dtroyerI don't think OpenStack  attracts many of those people at all17:29
ttxI also asked some of the PTLs who stepped up recently, and they mentioned that cycles were short and they could not get anything done in one17:29
smcginnisttx: Was Nick Barcet's ML response the one that you referred to a few times earlier?17:29
ttxit was release time already, that kind of thing17:29
ttxwhich made me think 9-month cycles might be a calmer, more relaxed option17:29
ttxbut then 9 months is tricky to organize anything around17:30
cdent(dtroyer good message on the thread, cuts deeply, appropriately)17:30
fungia lot of it is also familiarity with contributing to large free software projects. some of them move quickly, and you need to provide as much context with your contribution as possible, attempt to at least get some minimal familiarity with the community there, and have a lot of patience. i'm reminded of patches i've gotten into gerrit, or python packaging tools, or...17:30
ttxsmcginnis: yes17:31
fungii expect a lot of new contributors who have experience contributing to free software in the past have only experienced it with small, slow-moving projects17:31
ttxanyway i need to go -- thanks for the discussion all, that really helps17:31
fungiand there's only so much we can do to provide that same atmosphere in openstack17:31
dtroyerfungi: we both have those experiences as individual contributors.  What I hear ttx saying is that we really need to focus on individual corporate-sponsored contributors.  Less self-motivation, but still enough to be an issue17:31
ttxdespite what some people imply I'm just looking for solutions and improvements17:31
dtroyerthanks for getting this rolling ttx, we needed a mega-thread before the holidays!17:32
ttxbecause saying we don't have a problem or we can't change anything is not getting us anywhere17:32
dtantsur++ it is very helpful to reflect on something we've been taking as granted17:32
fungiit's all ultimately so we have something to read while vacationing, right?17:32
ttxdtroyer: actually the -dev ML stats showed a drop in 2017, so I'm trying to fix that17:32
fungiheh17:32
mriedemfungi: when i was new to openstack, having sdague, dansmith and cyeoh was key17:33
dtroyermy Twitter stats show the same thing17:33
mriedemto tell me what not to do17:33
* dtroyer goes to fix that too17:33
ttxa couple hundred more replies and we should be good17:33
fungimriedem: yep, i expect we've also lost some of that mentorship drive in more recent years as we all get overwhelmed by the volume of activity17:34
cdentmriedem: if I recall correctly you were able to devote that time to being mentored because you were willing/able to use up "extra" time for it?17:34
mriedemcdent: opposite,17:34
mriedemi was working over time by choice because i loved working on it17:34
cdentyes, that's what I meant17:34
mriedemi went from packaging openstack and internal CI to doing upstream dev17:34
mriedemwith mentoring from sean, dan and chris17:34
mriedembut,17:34
cdentyou _chose_ to work _over_ time17:35
cdentwhich is great, good you're here17:35
mriedempart of my yearly goal from my boss was "work on nova"17:35
cdentbut is that the sort of thing we can/want-to require?17:35
dansmiththere was no upstream university or any of those thousand "how to contribute" resources when mriedem started, I might add17:35
mriedemi don't expect it's a requirement17:35
dansmithnor when I started17:35
mriedemas i said much earlier, the people that become maintainers are doing it because they actually really enjoy it17:36
cdentnor me, but we're all unique snowflakes who are either allowed or able to devote absurd amounts of time17:36
mriedemdespite the pains in the ass17:36
cdentand I think that's fine17:36
fungii'll admit to similar desires. i like free software and already did a lot of free software contribution in my personal time (not my employer's time) and saw taking a job working on openstack as an opportunity to quit my day job and do free software all the time17:36
cdentbut it means that anything "we" want or need or find useful, is not going to be aligned with other folk, is it?17:36
fungii think the same goes no matter who you are17:37
mriedemi don't understand the question17:37
fungii will agree that my driving desires aren't likely aligned with those of a majority of others, but the same could likely be said of just about any single individual17:38
mriedemthere are lots of things i review and spend my day working on which i don't need, nor does my employer care about17:38
cdentmriedem: the experiences of mriedem, dansmith, cdent are not good guides for helping to encourage or enable people, who through no fault of their own, have to be "part time"17:38
mriedembut i consider ^ my duty and responsibility for having the opportunity to be mostly full time upstream17:38
cdentexactly17:38
mriedemcdent: i'm not sure what we're trying to enable or encourage17:39
mriedemwith part time people17:39
mriedemif it's "hey i have 4 hours per week to work on nova, and i want to be core, please invest your time in mentoring me"17:40
mriedemthat's gonna be tough17:40
cdentI'm not entirely sure either, other than the general notion of making it more possible for more of them to contribute17:40
fungiwe all come to this from different perspectives and backgrounds, and what will make us successful leaders is if we have an ability to understand and respect the positions others are coming from17:40
cdentyeah, I don't think it's that17:40
mriedemthe person that did this bp in queens https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/rebuild-keypair-reset.html17:40
mriedemwas totally part time17:40
mriedemand it got done because there was shared understanding and value in seeing it happen17:40
mriedemthe reason https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/scaleio-ephemeral-storage-backend.html is going release to release,17:41
mriedemis because there is not shared understanding or value in seeing it happen17:41
fungii know i've said it before, and i'm not the only one, but we need to do a better job of highlighting stories like those17:41
mriedemboth are part time contributors17:41
cdentmriedem: much like you're not entirely sure what I'm trying to say, I'm not clear on what you're trying to say17:41
fungipeople who want to get involved need examples of what works17:41
mriedemhttps://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html was mgagne who is part time dev, and i helped because i understood the shared need17:42
mriedemcdent: i'm saying i think it is possible for part time people to contribute17:42
dansmithI think what mriedem is saying is that the shared understanding, shared need, and shared feeling of importance is like 90% of what gets something landed17:42
cdentI don't think anyone is saying that it is not possible.17:42
mriedemthe success of their contribution depends on what and how they are going about that contribution17:42
mriedem"cdent: I'm not entirely sure either, other than the general notion of making it more possible for more of them to contribute"17:42
cdent_more_17:43
mriedemok, i don't know how to make it more possible to propose code17:43
mriedemvisibility of that code is another issue,17:43
cdentright, which is why I think it is important to stop being distracted from band-aid ideas like extending the cycle, and focus more on the core issue17:43
mriedemif we did slots then that might fix that issue17:44
cdentmight do17:44
mriedeme.g. slots: priority A -> priority B -> vendor snowflake no one cares about -> priority C, etc etc17:44
mriedemkanban17:44
mriedemit would at least make things more fair17:44
ttxfungi: ++17:45
* cdent nods17:45
dtantsurmriedem: we do something slightly reminiscent of that with our weekly priorities17:45
dtantsurwe have a slot for each in-tree vendor to propose their patch17:46
fungidtantsur: is there some expectation that you'll actually merge the vendor-proposed feature enhancements? or is this mostly for helping prioritize bug fixes for drivers?17:48
dtantsurfungi: only for prioritizing17:48
dtantsurno promises made17:48
fungiseems reasonable enough17:49
mriedemi'll also say, i fully realize that the 'vendor need' in nova in no way compares to cinder, neutron or any other project that supports a bazillion vendor drivers in tree17:49
mriedemmaking it a huge distraction to focus on core functionality17:49
*** dtantsur is now known as dtantsur|afk18:01
mnaserwhy dont the vendor drivers live out of tree and focus on implementing an interface18:08
mnaserkinda like nova's virt drivers (easier said than done obviously)18:08
funginova's probably the most extreme example of why doing that is complicated18:10
fungisome other teams do already either have all drivers except a reference driver out of tree, or some mix of in-tree and out-of-tree18:11
fungii have only the most shallow understanding of nova, but i get the impression that abstracting away the differences between different hypervisors is nontrivial to begin with (and where it can be, libvirt is already filling much of that gap on its own)?18:12
fungiwhen people have tried to maintain their own out-of-tree hypervisor integration for nova, it's rarely managed to keep up with changes to nova's internals18:15
mriedemoh i also wanted to say, i think a bigger help to part time contributors is actually documenting what's going on, be that specs, forum/ptg session recap summaries, and weekly digests of upstream activity, like in keystone and what cdent does for placement in nova18:17
mriedem^ is useful for full time contributors as well18:18
mriedembecause i forget what it was i said about something 3 months ago18:18
cdentmriedem: thankfully there's been a pretty strong acknowledgement of that recently. In many of the discussion of "how to deal with new and part timers", "write shit down" has been a biggie18:18
cdentand you're right that most of the problematic contributors are the ones that will never have read that stuff18:19
cdentthey are _too_ part time18:19
mriedemalso useful for apac full timers18:19
cdentfungi: the nova virt driver interface is officially declared not-stable, isn't it?18:20
cdentout of tree drivers undesirable18:20
cdent(which I think is a shame, but is the current state of affairs)18:20
fungicdent: yep, that's more or less what i was trying to say in answering mnaser18:21
dtroyercdent, mriedem: is the interface to nova-compute just as explicitly not-stable?  The fact that there is a network boundary makes it at least more intentional18:23
mriedemnova-compute has a stable rpc interface18:23
mriedemthere is no stable interface between nova ComputeManager and the virt driver18:24
mriedemwe can add/remove/change virt driver method signatures at will18:24
dtroyerok, that's what I thought.18:24
mriedemit doesn't happen often, but it can happen18:24
mriedemthere are several virt drivers that run successfully out of tree with i think minimal impact18:24
dtroyerhow crazypants would it be to build a nova-compute that only did, say, libvirt and bypass the internal interface?18:24
mriedemefried and jichenjc can keep me honest18:25
mriedemdtroyer: not a priority18:25
dtroyerie, draw the line at the rpc boundary?18:25
mriedemspending core time on something like this is wayyyyyyyyyyy down on the priority list, IMO18:25
* TheJulia opens up the entire thread and begins reading18:25
dtroyerI'm not asking about priority, I am curious about feasibility and gut reactions to the idea17:25
mriedemdtroyer: i'm not sure i understand the idea honestly18:26
dtroyerbecause a hypothetical hypervisor vendor could take on all of that work given a stable-enough interface18:26
fungidtroyer: idea being to treat libvirt as the plugpoint and tell people who want to support their own hypervisors to just go talk to the libvirt community?18:26
dtroyerthe notion of splitting off nova-compute into its own thing has been raised before18:26
dtroyernot libvirt, the nova RPC interface18:26
fungiahh18:26
dtroyerI'm not suggesting that the nova team do this, I'm just trying to sort out where the natural stable interfaces exist18:27
* cdent passes TheJulia some coffee, some reading classes, some light soothing music, a salty snack and a blanket18:27
TheJuliacdent: <318:28
mriedemdtroyer: the rpc interface is to the ComputeManager, not the ComputeDriver18:28
mriedemthe compute manager does the orchestration18:28
mriedembecause "build me an instance" is a shit load of other stuff than just calling driver.spawn(18:29
*** harlowja has joined #openstack-tc18:29
mriedemexcuse my salty snacklike language18:30
dtroyersure.  how much of that happens on the compute node?  it's the interface to the nova-compute process that I am thinking about.  I have not looked at this since before DB access was taken away from it so I'm way out of date here18:30
mriedemdtroyer: 95%18:30
mriedemif we moved the volume and network stuff to conductor18:31
mriedemlike we've talked about for ages18:31
mriedemthat would help18:31
mriedembut the contributors working on moving network thingies to conductor were osic18:31
mriedemRIP18:31
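The split mriedem describes above — a stable RPC contract into the ComputeManager, which orchestrates a pile of work around an unstable in-process driver interface — can be sketched roughly like this. The class and method names here are illustrative only (not nova's real signatures), just the shape of the boundary:

```python
# Rough illustrative sketch of the boundary discussed above -- these are
# NOT nova's real classes or signatures, just the shape of the split.
from abc import ABC, abstractmethod


class VirtDriver(ABC):
    """In-process driver interface: private to the project and free to
    change between releases, which is why out-of-tree drivers struggle
    to keep up with internals."""

    @abstractmethod
    def spawn(self, instance_name: str) -> str:
        ...


class FakeDriver(VirtDriver):
    """Stand-in for a real hypervisor backend."""

    def spawn(self, instance_name: str) -> str:
        return f"spawned {instance_name}"


class ComputeManager:
    """Receives "build me an instance" over the *stable* RPC boundary,
    then orchestrates everything around the final driver.spawn() call."""

    def __init__(self, driver: VirtDriver) -> None:
        self.driver = driver

    def build_instance(self, instance_name: str) -> list[str]:
        # The orchestration steps around spawn are the "95%" that runs
        # on the compute node; volume/network handling is the part
        # discussed above as a candidate to move to conductor.
        steps = [
            "claim resources",
            "attach volumes",
            "plug networks",
        ]
        steps.append(self.driver.spawn(instance_name))
        return steps


if __name__ == "__main__":
    print(ComputeManager(FakeDriver()).build_instance("vm-1"))
```

Drawing the line at the RPC boundary, as dtroyer asks, would mean freezing everything `build_instance` does behind one contract — which is why it is more work than swapping out the driver class alone.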
* dtroyer pauses for a moment18:32
edleafecdent: "reading classes"? What are you implying about TheJulia?18:33
TheJulialol, I grokked it as glasses18:34
cdentand that's how it was meant18:34
cdentas we all know, I type by engrams18:34
cdentand they are broken18:35
cdentoh noes, yet another one of my typos forever engraved into the twitter memory18:37
TheJuliamuahahahaha18:38
fungimriedem: which out-of-tree hypervisor backends are currently successful? are xenserver and zvm out-of-tree?18:44
mriedemxen is in tree18:44
mriedempowervm and zvm and lxd i'd say18:45
mriedempowervm and zvm are working on getting in-tree18:45
mriedemlxd has never broached the subject18:45
fungiahh, i assumed xenserver didn't use the in-tree xen support18:45
fungisince it's a separate (proprietary) thing18:45
mriedemi don't know who actually uses the xenserver driver outside of rax18:45
fungii didn't realize rax used xenserver either, thought they just used xen18:46
mriedemand i assume rax is still on juno18:46
fungivaguely remember BobBall doing third-party ci for citrix xenserver integration18:47
mriedemyeah the citrix team is still maintaining it18:47
fungibut no clue if that's still there18:47
fungiahh18:47
mriedemyup18:47
mriedemalive and kicking18:47
mriedemadding vgpu support in queens18:47
fungibut that's separate from the xen support, right?18:48
mriedemhow they get paid idk18:48
mriedemlibvirt+xen is different yes18:48
fungik18:48
mriedemhell we have libvirt+lxc18:48
mriedemand libvirt+uml18:48
mriedem0 idea if those work anymore18:48
fungiso people doing vanilla xen with nova are generally going through libvirt, but if they want to use xenserver instead the driver for that is in-tree in nova?18:48
mriedemyup18:48
fungicool18:49
* dtroyer remembers uml, retreats back under his rock18:51
fungihey, i ran some very successful production virtualization on uml18:52
fungifor many years18:52
fungiif memory serves, linode based their service on it for a long time as well18:53
fungii mean, the only competition for it in linux back then was chroot ;)18:54
fungibut yes, i can't imagine anyone actually using it with openstack today18:54
fungithe only real competition for uml across the free software spectrum back then was the jails implementation on freebsd, for that matter18:55
fungi(which was also remarkably awesome)18:55
fungii think my initial foray into uml was when i decided we needed to separate our authoritative and recursive dns resolvers, but management wouldn't give me budget for twice as many nameservers (and bind views were nearly impossible to use to accomplish that at the time), so i carved the nameservers up into uml guests for each role18:58
fungiback in the days when cache poisoning attacks were still a relatively new threat18:58
*** openstack has joined #openstack-tc20:32
*** ChanServ sets mode: +o openstack20:32
*** pabelanger has quit IRC20:39
*** pabelanger has joined #openstack-tc20:39
*** SamYaple has quit IRC21:07
*** SamYaple has joined #openstack-tc21:08
*** openstack has joined #openstack-tc21:11
*** ChanServ sets mode: +o openstack21:11
*** SamYaple has quit IRC21:22
*** SamYaple has joined #openstack-tc21:23
*** diablo_rojo has quit IRC22:04
*** diablo_rojo has joined #openstack-tc22:07
* cdent waves goodnight22:12
*** cdent has quit IRC22:12
*** ChanServ has quit IRC22:17
*** kumarmn_ has quit IRC22:20
*** ChanServ has joined #openstack-tc22:24
*** barjavel.freenode.net sets mode: +o ChanServ22:24
*** ChanServ has quit IRC22:28
*** ChanServ has joined #openstack-tc22:31
*** barjavel.freenode.net sets mode: +o ChanServ22:31
*** kumarmn has joined #openstack-tc22:39
*** kumarmn has quit IRC22:41
*** kumarmn has joined #openstack-tc22:41
*** kumarmn has quit IRC23:24
*** kumarmn has joined #openstack-tc23:25
*** kumarmn has quit IRC23:30
*** harlowja has quit IRC23:32
*** harlowja has joined #openstack-tc23:39
*** harlowja has quit IRC23:40
*** harlowja has joined #openstack-tc23:42
harlowjawhoa, too much to read, lol23:46
*** harlowja has quit IRC23:50

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!