Thursday, 2018-01-04

01:18 *** purplerbot has quit IRC
02:13 *** smcginnis has quit IRC
03:48 *** mriedem has quit IRC
04:11 *** lbragstad has quit IRC
08:49 *** dtantsur|afk is now known as dtantsur
08:50 *** cdent has joined #openstack-tc
08:52 *** purplerbot has joined #openstack-tc
09:02 *** jpich has joined #openstack-tc
11:01 *** chandankumar has joined #openstack-tc
11:28 *** openstackstatus has quit IRC
11:28 *** openstack has quit IRC
13:08 *** openstack has joined #openstack-tc
13:08 *** ChanServ sets mode: +o openstack
13:10 *** openstackstatus has joined #openstack-tc
13:10 *** ChanServ sets mode: +v openstackstatus
13:39 *** rosmaita has joined #openstack-tc
13:49 *** dtantsur is now known as dtantsur|brb
14:22 *** dtantsur|brb is now known as dtantsur
14:25 *** dansmith has quit IRC
14:29 *** hongbin_ has joined #openstack-tc
14:29 *** mriedem has joined #openstack-tc
14:29 *** hongbin_ has quit IRC
14:30 *** hongbin has joined #openstack-tc
14:32 *** rosmaita has quit IRC
14:48 -openstackstatus- NOTICE: zuul has been restarted, all queues have been reset. please recheck your patches when appropriate
15:02 <cdent> tc-members: is it time?
15:02 <cdent> it appears to be time
15:02 <dhellmann> o/
15:03  * mugsie lurks
15:03 <cdent> hi dhellmann, did you manage to break during the break?
15:03 <dhellmann> cdent: mostly. I spent some time on a talk I'm giving next month, because my opportunity to practice in front of a crowd is next week
15:04 <EmilienM> hello
15:04 <fungi> looks like time
15:05 <dhellmann> cdent: you?
15:05 <cdent> I did, but it was tempered by having a lingering cold (started mid-December, but stuck around)
15:06 <cmurphy> oh hello
15:06  * johnthetubaguy waves
15:07 <johnthetubaguy> cdent: thanks for that summary blog, it brings a few threads of conversation together nicely
15:07 <cmurphy> ++
15:07 <cdent> is anyone aware of any plan to bring the "release cycle length" thread to a tidy conclusion?
15:08 <cdent> thanks johnthetubaguy, glad it was somewhat useful. When I was taking notes from all the reports, it was hard to choose relevant bits.
15:08 <johnthetubaguy> I mostly spent the holidays looking after little Francis, which has totally turned our lives upside down, in a good way I hope
15:08 <dhellmann> I thought ttx's last email basically said we weren't going to change the dev cycle length?
15:08 <cdent> it seems like that thread raised a great many important concerns, and it would be a shame for them to fall on the floor
15:08 <dhellmann> johnthetubaguy: congrats, again!
15:08 <cdent> dhellmann: yes, but what I just said
15:09 <cdent> johnthetubaguy: getting much sleep?
15:10 <johnthetubaguy> cdent: more than I expected, but a little glazed over
15:10 <dhellmann> cdent: yeah, most of the feedback I was getting internally was basically the same -- these are all things to fix, but this is not the way to fix them
15:11 <johnthetubaguy> a summary list of the issues would be good, which I guess is what cdent was asking one of us to do?
15:11 <cdent> not actively, no, just inquiring what people thought
15:11 <cdent> extracting some kind of list is probably a good idea, but only if we plan to do something about it...
15:11 <dhellmann> I would like to understand why it is so hard to upgrade. That seems like an underlying cause of a lot of other things, so what is causing it?
15:14 <johnthetubaguy> so I am seeing lots of deploys that just about work; reboot the thing and it's totally busted, re-run the config management and no one really knows what will happen, etc.
15:14 <johnthetubaguy> doing an upgrade on top of that is more than scary
15:14 <ttx> hola!
15:14 <cdent> ttx: welcome back
15:14 <dhellmann> jaypipes pointed a few of us to this thread on the kubernetes-dev list with their recent discussion of the same topic: https://groups.google.com/forum/#!topic/kubernetes-dev/nvEMOYKF8Kk
15:14 <johnthetubaguy> and loads of folks adopting are crazy risk-averse, so they just don't fancy it
15:15 <johnthetubaguy> random data points there really, but it's all I have right now
15:15 <persia> Anecdotally, some reasons not to upgrade that I've encountered: "We are satisfied with current performance, functionality, and support contracts", "We don't have staffing for that migration right now", "We need to revalidate all our workflows against a staging environment before we could migrate, and other teams are busy."
15:15 <dhellmann> johnthetubaguy: I wouldn't want that either. What tools were used to deploy those systems? Home-grown? Community-supported? Distros?
15:15 <johnthetubaguy> persia: yeah, busy, happy with what they have, and a bit risk-averse
15:16 <persia> johnthetubaguy: For values of "a bit" that might be replaced with "extremely" by most developers
15:16 <dhellmann> those reasons all sound like they might be given by someone who wants an LTS, but we did hear from people who said upgrading is "hard", and that's what I was interested in exploring
15:16 <johnthetubaguy> dhellmann: not seen much home-grown as such; a combination of the other things, I think
15:17 <dhellmann> because I'm not sure we can overcome "I don't want to" as a reason
15:17 <dhellmann> johnthetubaguy: ok, that's good at least; it means bug fixes can be shared
15:18 <ttx> Still catching up
15:19 <persia> dhellmann: For many monolithic systems, upgrade consists of executing a predefined list of simple steps, often with close support by someone else (even if that "support" is illusory, as when folk claim "supported by upstream"). For virtual infrastructure substrates, one has to take the entire thing down, and may not know precisely how to return it to state (as one may not have a good understanding of what tenants did). A common pattern seems to be "install a new cloud, migrate everyone, decommission the old cloud". Many discussions about "upgrade the substrate" include people talking about hardware budgets.
15:19 <mugsie> I think that our "upgrade" "strategy" is migrate + wipe + new install, which is why we have so many old installs running
15:19 <ttx> I'll try to steer the cycle length thread to a conclusion, likely next week. Agree that it would be great to turn some of the reaction energy that the thread created into solving the underlying issues
15:20 <dhellmann> mugsie: who do you mean by "our"? the community or your employer?
15:20 <mugsie> my employer
15:20 <dhellmann> k
15:20  * mugsie should have clarified that
15:20 <dhellmann> so this is confusing to me, then, because we have had so much interest in fast-forward in-place upgrades recently
15:21 <fungi> i've been in plenty of situations and seen plenty more where organizations simply don't want to upgrade most of their critical software more often than once every few years, so in that sense any fast-moving project is really a poor fit
15:21 <dhellmann> are those approaches being taken by completely distinct groups of people?
15:21 <mugsie> dhellmann: is it from a smaller subset of users?
15:21 <fungi> there's a lot of history in organizations where software versions are tied to hardware deprecation and replacement
15:21 <fungi> so you "upgrade" when you replace your servers
15:21 <johnthetubaguy> fungi: +1
15:22 <persia> dhellmann: From whom comes the interest? I suspect that some of the fast-forward-in-place interest comes from folk who support clouds and can't move fast enough with the migrate / tear-down / build / migrate cycle. This may not represent the opinions of those folk who set strategy for the operators.
15:22 <mugsie> fungi: yup
15:22 <fungi> and you don't want to replace your servers every 6 months ;)
15:22 <dhellmann> mugsie: I don't know. I know Red Hat and SUSE are both interested for their customers.
15:22 <cdent> yeah, and given that, the idea of version-mismatched, asynchronous upgrading seems right
15:22 <cdent> "that" == "hardware kill = upgrade"
15:22 <mugsie> dhellmann: yeah, but is that a subset of their customers?
15:22 <dhellmann> mugsie: I know for Red Hat we do have 2 customer tracks based on the frequency of upgrades. I don't know about SUSE.
15:23 <mugsie> I know that there used to be a huge number of old Helion installs left unupgraded
15:23 <cmurphy> SUSE does not have different tracks; we have a standard in-place upgrade solution, and then we have outlier customers who do their own thing and we try to support
15:24 <persia> Statements from vendors expressing customer opinion should be taken with some uncertainty (including my statements). Often a vendor will want to achieve something they can use at a customer to cause the customer to think things are better, which may not match the thing the customer wanted in the first place (for many reasons, from simple miscommunication due to differing viewpoints to development of customer lock-in strategies for long-term revenue capture)
15:24 <mugsie> persia: ++
15:24 <dhellmann> I don't see that much for us, as a community, to do about folks who are planning the hardware-refresh upgrade path. But I thought we have had a lot of people complaining about in-place upgrades.
15:24 <cdent> I continue to wonder why upgrades shouldn't be considered a downstream value-add. I'm not saying "let's do that" but asking "why isn't it that way"?
15:25 <dhellmann> persia: well, without revealing customer info, I know we do have some largish customers who want to upgrade in place, but not every 6-12 months
15:25 <persia> cdent: Some prominent downstreams use "we slavishly provide upstream" as an excuse to require professional services payments for bugfixing. There are other models, but few downstreams want the risk of actually needing to provide warrants that the software will work in a particular way.
15:25 <cmurphy> cdent: you could make that argument for a lot of the official projects we have, couldn't you?
15:26 <cdent> sure, yeah
15:26 <dhellmann> I'm detecting quite a lot of cynicism in this conversation. Maybe it's just my own sensitivity today. But I'm trying to understand why folks want LTS and better upgrade support, and I'm hearing that some of you don't think they really do.
15:26 <cmurphy> cdent: the open source community answer is: if everyone is facing the same problem, why don't we solve it together?
15:26 <dhellmann> am I misunderstanding?
15:26 <persia> dhellmann: I know of private cloud operators whose infra departments would like to upgrade the cloud more often. At at least one of those, the CTO office would not authorise such a change without validation from every engineering team that uses the infra. It's all about viewpoints.
15:26 <cdent> except that in the case of things like upgrades the solutions are very different
15:27 <cdent> for instance, VIO doesn't really do upgrades; it does a blue/green replace and swap
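
[Editor's note: a minimal sketch in Python of the blue/green pattern cdent mentions; the environment names, health-check URL, and traffic pointer are illustrative assumptions, not VIO's actual mechanism.]

    # A sketch of a blue/green cut-over: stand up the new "green" stack
    # beside the running "blue" one, health-check it, then atomically
    # repoint traffic. The old stack stays intact for instant rollback.
    # All names here are hypothetical.
    import urllib.request

    ACTIVE = {"env": "blue"}  # stand-in for a load-balancer pointer

    def healthy(base_url):
        """True if the candidate environment answers its health check."""
        try:
            with urllib.request.urlopen(base_url + "/healthcheck", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    def cut_over(candidate_url, candidate_name="green"):
        # Swap the pointer only once the new environment checks out.
        if healthy(candidate_url):
            ACTIVE["env"] = candidate_name
            return True
        return False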
15:27 <fungi> i definitely think people want lts... in particular, the segment i described who only upgrade when they get new hardware would love to have some confidence that the version they're going to install and then run for the lifetime of the machine will have bug fixes and security patches
15:27 <dhellmann> persia: ok, well, let's set aside "because the boss said..." reasons, because we have less control over those than over "because it's too hard" or "because the code breaks" reasons
15:28 <dhellmann> I get that there are lots of political issues, too. I just want to focus on things we can actually do something about.
15:28 <persia> dhellmann: I think that's dangerous: if we set aside "because the boss said", we're only communicating with the staff, and encouraging the staff to act counter to the boss, which is not a good position from which to encourage the boss to recommend OpenStack.
15:28 <johnthetubaguy> dhellmann: you mean we call everything version 10 and pretend there are no upgrades any more?
15:29 <mugsie> dhellmann: I don't think so - but there is a feeling (on my side anyway) that people want new features and long-term support, but each time it comes down to providing a good chunk of resources, features seem to win?
15:29 <TheJulia> persia: well said
15:29 <persia> Instead, let's consider sources of organisational risk, and try to mitigate them, which makes it easier for both risk-averse infra teams and their bosses.
15:29 <dhellmann> persia: ok, well, I think you're derailing a useful technical question that can be considered on its own merits.
15:29 <pabelanger> running a little late this morning, catching up on backscroll now
15:30 <dhellmann> mugsie: so is your position that there's no way at all we can improve the current situation?
15:30 <mugsie> there definitely is, but we need to realise the trade-offs
15:30 <dhellmann> johnthetubaguy: no, I don't think continuous deployment is the solution to the problem.
15:30 <fungi> mugsie: and, i suppose, recognize that this is not an easy problem to solve ;)
15:30 <persia> dhellmann: From what I understand from folk who are a bit more mature about their cloud operations, upgrades can mostly be done on a per-service basis, often without tenant visibility. Maybe we need better docs, but I'm not sure we have a technical problem to solve here. On the other hand, sometimes things fall down, which is risky. I think fixing that is a more interesting problem.
15:30 <dhellmann> mugsie: ok. Well, let's play "what if" then. What if we had a solution to the problem, and just needed people to do it. What might the solution look like?
15:30 <mugsie> fungi: not easy at all :(
15:31 <mugsie> a bug-fix / no-feature release where nothing other than LTS CI / fast-forward work was done?
15:31 <dhellmann> persia: yes, I was surprised to hear that lots of people are running mixed versions in production. I liked the suggestion (from mriedem?) to test that upstream.
15:32 <fungi> persia: to what extent do you think the existence of an integrated release plays into perceptions that services all need to be upgraded together?
15:32 <dhellmann> mugsie: what would that work consist of? do we know enough to plan a dev cycle like that?
15:33 <mugsie> I don't think so. I think the planning would have to be done as part of that cycle, as the planning will probably take as long as, if not longer than, the solution
15:33 <persia> fungi: I don't have a good answer. My impression is that public cloud operators don't care much. My impression is that private cloud operators care in inverse proportion to the degree their cloud is actually operated by a vendor. My sample size isn't big enough for me to be confident in those statements.
15:33 <mugsie> again, pulling things from gut, so don't base any policy on me
15:33 <dhellmann> I asked on the thread, but don't think there was really a response: What would our world look like if we used cycle-with-intermediary releases for all services and only ran integration tests with released components?
15:33 <cdent> I like that.
15:33 <mugsie> dhellmann: that would be really interesting
15:34 <TheJulia> dhellmann: In my past, it was always easier to upgrade a single component and be able to have a rollback plan if that one thing failed. That helped our risk-management-concerned folks not worry, and helped us keep our sanity by not having to fix majorly broken things at 4 am.
15:34 <ttx> dhellmann: no more stable branches?
15:34 <mugsie> the first release may be a little hairy, but after that it might make things easier
15:34 <dhellmann> TheJulia: that's a very sensible approach. I wonder what we'd have to do to document that upstream.
15:34 <fungi> stable branches could be replaced by lts-frozen branches, i suppose
15:34 <TheJulia> ttx: I think people would still cut stable branches downstream as a point-in-time reference and work from there
15:34 <dhellmann> ttx: that wasn't part of my assumption; why do you think that would be a fall-out?
15:35 <persia> I have some concern with the idea of a consolidated release of code that has not been tested for interoperability: if there were only staggered intermediate releases, I think integration testing against released code only would be very interesting.
15:35 <fungi> the bigger question for me is not whether we test with released versions of other services but how we test upgrading
15:35 <ttx> dhellmann: would you still have coordinated "releases"?
15:35 <TheJulia> dhellmann: That documentation does feel like a big missing piece, but as also pointed out by persia, it is only half the problem and desire. Of course, this is also a no-one-size-fits-all thing
15:36 <dhellmann> ttx: I don't think they'd look the same, but maybe. Cut a stable branch at feature freeze?
15:36 <dhellmann> people still want stable release support, don't they?
15:36 <ttx> dhellmann: so you would keep cycles
15:36 <fungi> i would readily entertain a solution where there is only lts and no more interim stable branches between lts versions
15:37 <TheJulia> I wouldn't call feature freeze a point of stability; I would think it would actually be the most unstable point.
15:37 <dhellmann> I think the cycles help with community cadence, and with aligning all of our various contributing companies' schedules
15:37 <mugsie> TheJulia: ++
15:37 <dhellmann> TheJulia: so maybe not "at" but "shortly after", as we do now
15:38 <dhellmann> maybe that change wouldn't buy us anything at all, but I think *thinking* about it gives us a different perspective
15:38 <TheJulia> so many facets to this discussion :(
15:38 <dhellmann> and maybe it does give us more explicit support for mixed-version testing
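
[Editor's note: a minimal sketch of what "integration-test against released components" could mean mechanically, assuming every service were published to PyPI -- dhellmann notes further down that only a mix currently is. The service list is illustrative; PyPI's JSON API is used to find the latest release.]

    # Resolve the latest *released* version of each service from PyPI
    # and emit pins for an integration job, instead of testing against
    # branch tips. Service names here are an illustrative subset.
    import json
    import urllib.request

    SERVICES = ["keystone", "glance", "cinder"]

    def latest_release(name):
        url = "https://pypi.org/pypi/%s/json" % name
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["info"]["version"]

    if __name__ == "__main__":
        for service in SERVICES:
            print("%s==%s" % (service, latest_release(service)))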
15:39 <ttx> dhellmann: so I think that would be interesting, but it might turn into a complex nightmare if each component follows widely-different upgrade support schemes or cadences
15:39 <ttx> Like, you would have to learn each component's cadence
15:39 <ttx> One would release every month and support up to n -> n+2
15:39 <TheJulia> ttx: and some components won't truly have a solid cadence if it is not forced, and may be more feature-release focused
15:39 <dhellmann> we need to break out of the cycle of users asking for something and us saying we can't do it for the same reasons we've always given
15:39 <ttx> Another would release every 4 months and only support n -> n+1...
15:39 <TheJulia> dhellmann: ++
15:40 <dhellmann> ttx: sure, we might want to set some standards there, like you have to at least support stable-1 to stable
15:40 <dhellmann> which may mean many intermediate releases
15:41 <ttx> I think the reaction on the thread when we sort of proposed that was that everyone would skip intermediary releases and only use "real" stable
15:41 <fungi> honestly, if we declare lts vs non-lts releases, people will probably mostly just use the lts ones
15:42 <dhellmann> maybe. anyone upgrading on a stable series is already using a mix of upstream versions, though, since we release on those branches at different times
15:42 <dhellmann> fungi: exactly
15:42 <TheJulia> Anyone who is risk-averse will focus on and stick with LTS unless their requirements change and they need to move faster or change a component up.
15:42 <mgagne> re: running mixed releases. In our case, we used to use (more or less) vanilla Ubuntu packages, and you can't run mixed releases because of the common Python dependencies.
15:42 <cdent> I fear that we each make a lot of assertions about what everyone will do based on our own (limited) experiences and biases, and responses in many places indicate that those are incomplete. There is not going to be one universal solution.
15:43 <ttx> dhellmann: I think this could be a gradual change, with the first step being to actually test components against the most recent intermediary releases of other components
15:43 <TheJulia> cdent: not even two are possible
15:43 <dhellmann> cdent: that's true. it's good to have some of those assumptions brought out in the open, though
15:43 <dhellmann> but saying "we can't solve this for everyone" isn't the same as "we can't solve this for anyone"
15:44 <dhellmann> what could we do to make the situation incrementally better for some usefully large subset of users?
15:44 <mgagne> we just recently gained the ability to run mixed releases by using virtualenv. But just so you know, some people might still be running with Ubuntu/RedHat packages and can't run mixed releases even if we were to suggest/recommend that they do so.
15:44 <cdent> dhellmann: yeah, not trying to be stop energy, rather just what you've said
15:44 <dhellmann> cdent: ++
15:45 <ttx> Nobody uses intermediary releases because we don't even test against them, so we don't promote that usage
15:45 <fungi> cdent: it's more witnessed behavior. people mostly deploy from distros (either vendor custom ones or openstack as provided in common linux distros), and the distro package maintainers will generally optimize toward changing up openstack versions only when they make a new release of their distro. as a result, they'll prefer the releases we provide with the longest support durations (so stable releases over intermediates, lts over non-lts)
15:45 <dhellmann> mgagne: I think we can assume that anyone running mixed-version deployments is using some sort of isolation (virtualenv, vm, separate host) between the components
15:45 <ttx> containers!
15:46 <mgagne> dhellmann: yes. but I was responding to "to what extent do you think the existence of an integrated release plays into perceptions that services all need to be upgraded together?"
15:47 <dhellmann> ttx: you joke, but are there any changes we could make to improve support for upgrading container-based deployments?
15:47 <dhellmann> mgagne: ok, good point.
15:47 <mgagne> so even if we advertised that you *CAN* run with mixed releases (to ease the upgrade process), some people just won't be able to do so.
15:47 <ttx> dhellmann: I'm not really joking, just surprised you would not list that in your examples
15:48 <mgagne> and if you suggest containers/virtualenv, maybe they just can't, for lack of resources or for political reasons.
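
[Editor's note: a minimal sketch of the per-service isolation being discussed, using Python's stdlib venv module; the install root, service names, and version pins are hypothetical.]

    # Give each service its own virtualenv so mixed releases don't
    # fight over shared Python dependencies -- the constraint mgagne
    # describes with distro packages. Pins below are made up.
    import subprocess
    import venv
    from pathlib import Path

    PINS = {"keystone": "12.0.0", "glance": "15.0.0"}  # hypothetical

    def install_isolated(root="/opt/openstack"):
        for service, version in PINS.items():
            env = Path(root) / service
            venv.create(env, with_pip=True)  # one env per service
            subprocess.run(
                [str(env / "bin" / "pip"), "install",
                 "%s==%s" % (service, version)],
                check=True,
            )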
15:48 <dhellmann> I know we've had discussions in the past about being able to pick intermediate releases of different components. I don't know how far those went, and lots of our tooling is built around stable branches now.
15:48 <ttx> If we want to be more micro-service-oriented, supporting more decoupling in versioning is necessary
15:49 <dhellmann> mgagne: I agree with the people who say we can't solve this for everyone. I want us to consider whether we can improve it for anyone.
15:49 <ttx> And by supporting I don't mean "it kinda already works"
15:49 <johnthetubaguy> if there was one use case we made better, ideally benefiting the most users, what would it be? the "upgrade less often" use case?
15:49 <ttx> I mean actually testing it
15:49 <dhellmann> johnthetubaguy: that's a very important question. what are we actually trying to fix?
15:49 <mgagne> dhellmann: I agree, looking forward to it =)
15:50 <persia> johnthetubaguy: I think, rather than "upgrade less often", it is "have fewer operations that require lots of simultaneous changes to a deployment"
15:50 <dhellmann> why do people want LTS? because they're not getting it from a vendor and want it from us?
15:50 <persia> At least in some places, multiple simultaneous changes require much higher-level signoff for change control approval.
15:50 <TheJulia> ttx: at that point, it seems like we would need to have an entire constantly-rolling testing system, not huge staged jobs that set up an entire specific scenario
15:50 <dhellmann> why do people complain about upgrades? because of openstack components or third-party things? (I keep hearing that networking upgrades are complicated)
15:51 <ttx> dhellmann: We should survey them.
15:51 <dhellmann> TheJulia, ttx: do the kolla folks not already do that sort of testing?
15:51 <cdent> no reach
15:51 <TheJulia> dhellmann: no idea
15:51 <ttx> dhellmann: I wonder how much of the LTS discussion is people wanting to upgrade less often but fearing they're missing out with releases every 6 months
15:51 <TheJulia> dhellmann: actually, i think they kind of do, lighter-weight staging
15:52 <mgagne> dhellmann: in my case, most issues were not 100% related to openstack, which I explained on the list: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124561.html http://lists.openstack.org/pipermail/openstack-dev/2017-November/124564.html
15:52 <ttx> dhellmann: because, as you say, they should be able to get it from vendors
15:52 <dhellmann> ttx: I'm still trying to understand what sort of fixes people might want merged into an LTS branch
15:52 <persia> ttx: Another possibility is that people want to consume vendor LTS and fear loss of OpenStack support part-way through the period (and, presumably, don't trust vendor promises of extended support, or similar)
15:53 <persia> Err, "OS vendor LTS"
15:53 <mgagne> TL;DR: it requires a lot of planning (especially with external teams/systems), a lot of testing, and there is a lack of resources to prepare/perform the actual upgrades.
15:53 <fungi> persia: i see that as an aspect of "they're not getting it from a vendor and want it from us"
15:54 <ttx> dhellmann: yes, i feel like we still don't have an accurate picture of the "what" of what they want
15:54 <dhellmann> are there people running clouds who are surprised that it's just naturally a complicated thing to do?
15:54 <mugsie> dhellmann: yes
15:54 <dhellmann> or are we ignoring ways to make it simpler?
15:54 <dhellmann> ttx: I agree
15:54 <fungi> persia: it may be that they're actually getting lts-like security fixes backported by their vendor, but that they're not getting the quality level they perceive they would get from upstream-provided fixes
15:55 <persia> fungi: What is the "it" they aren't getting? To my mind, the requestor wants "a solution that lets me have supported infra management software for the next period". Since OpenStack doesn't provide an operating system, I'm not sure how OpenStack could provide that.
15:55 <johnthetubaguy> mgagne: that is a good list of external issues; it sounds very familiar reading through that.
15:55 <ttx> dhellmann: it is probably a combination of differing interests, all federated behind a shiny acronym
15:55 <TheJulia> dhellmann: I think both
15:55 <persia> My impression is that most consumers don't perceive a quality difference between "vendor" and "upstream", and some consumers perceive "vendor" as higher quality than "upstream"
15:55 <dhellmann> fungi: at red hat we would be happy to share the work of backporting security fixes, fwiw, so (without speaking for my company) we support some notion of an LTS, but the details need to be worked out
15:56 <fungi> persia: i.e., a general perception that running a version no longer supported upstream (even if supported by your vendor) is risky
15:56 <ttx> Some want a vendorless LTS, some want less upgrade pressure from their bosses...
15:56 <cmurphy> this discussion is focusing on the cloud operators, but a big voice in the discussion is the vendors themselves; we want upstream LTS so we can share the burden of maintaining bugfixes. Our customers don't need upstream LTS; they get it from us.
15:56 <mugsie> are there many vendors that keep a stable branch around and build new LTS releases from that, or do they add a patch in the packaging step to fix issues? and if it is the latter, would an LTS branch help?
15:56 <dhellmann> cmurphy: that's what I was trying to say, but you said it better
15:57 <TheJulia> ttx: I think those are two separate issues that we need to decouple
15:57 <persia> fungi: For me, the interesting question is why: if a vendor provides a warrant, that should be sufficient. Are vendors providing provisional warrants and requiring upstream LTS to provide a real warrant? If so, we should do LTS (where the vendors collaborate). If not, I wonder why there is the worry.
15:57 <cmurphy> dhellmann: ah, i was composing my thought before i saw yours
15:57 <persia> (the answer might still be for us to do LTS, but we should know the right question first)
15:57 <fungi> mugsie: the impression i get is that maintaining an lts would reduce the chances that they need to make significant adjustments to backported fixes vs maintaining their own divergent lts
15:57 <ttx> TheJulia: yes. It's two separate asks jumping to the same conclusion but actually wanting two different things
15:58 <dhellmann> if, as fungi said earlier, everyone just starts using the LTS release, does that help or hurt with the upgrade problem? what is the relationship between an LTS and upgrades?
15:58 <fungi> persia: yeah, i too find it logically inconsistent, but there are certainly cases where people would like to be closer to upstream in terms of consuming bug fixes and security patches, but don't roll their own and instead rely on a vendor because the effort level is reduced
15:58 <ttx> dhellmann: I think it reduces options
15:58 <dhellmann> if it helps the "boss says" pressure problem by introducing more time between them, but increases the complexity, is that a net benefit? or loss?
15:59 <persia> Regarding conflation: it may also be useful to segment the solution: if a plurality of vendors want LTS, they should be able to collaborate on that without anyone caring what the operators do.
15:59 <dhellmann> persia: that was basically the proposal we discussed in Sydney
15:59 <fungi> and that was also the original premise behind our stable branches, for that matter
15:59 <mugsie> did the discussion dims was leading for LTS go anywhere?
15:59 <persia> fungi: That sounds like an unhealthy vendor relationship, which I think can be fixed, but not necessarily by one thing.
16:00 <dims> mugsie: will restart it next week
16:00 <dhellmann> existing stable branches could just hang around, and teams could form to apply patches, but the project teams wouldn't automatically be signed up to "own" the results in any way
16:00 <persia> dhellmann: Excellent. Apologies for duplication.
16:00 <mugsie> dims: cool - I hadn't been following closely
16:00 <dhellmann> persia: np, I think several people coming to the same conclusion independently is a good signal
16:00 <ttx> mugsie: I think it's safe to say that the mythical group of people interested in LTS has not magically appeared to do the work yet
16:00 <dims> right ttx
16:00  * mugsie is shocked
16:00 <dims> i am still drumming, let's see
16:01 <ttx> Maybe new year's resolution time is the right time to ask again, though
16:01 <dhellmann> we haven't actually changed any policies yet, either
16:01 <dhellmann> so anyone who wants to do it doesn't know what to do
16:01 <persia> ttx: I think dhellmann and cmurphy have already expressed vendor support to demythicise those folk, assuming they have a home to go to.
16:01 <fungi> the counterargument is that the people who want ocata-lts don't exist yet, because most of them are currently running liberty or mitaka
16:01 <dhellmann> yeah, a bunch of red hat folks signed up on an etherpad somewhere as interested in helping to review patches
16:02 <persia> fungi: If LTS is blessed, is there a good reason it should not start with mitaka-LTS?
16:02 <ttx> Yes, but I think we need to realize such a shift will take time; it won't be a quick policy change that will make everything good
16:02 <fungi> i'm probably the wrong person to answer that
16:02 <dhellmann> fungi: if we start with ocata, would that give people more incentive to upgrade?
16:02 <dhellmann> persia: I'm not sure we can test mitaka upstream any more, can we?
16:02 <mugsie> persia: I could see issues getting mitaka CI back running, possibly
16:03 <mugsie> ocata may be a better target?
16:03 <ttx> fungi: that is a good point
16:03 <ttx> Maybe we'd prematurely kill LTS before it can even exist
16:03 <dhellmann> ttx: a very simple change we could make is to do what the docs team did and say we're going to stop deleting things. Just stop closing stable branches and stop turning off tests for patches submitted to them. Those things might still break, but we could ignore them if no one is interested in maintaining them.
16:03 <fungi> dhellmann: it may give them more incentive to upgrade to ocata, but likely will give them proportionally less reason to upgrade after ocata until ocata reaches end of lts
16:03 <persia> It's a lot of work, but if there is a way to exchange money for something blessed as "Mitaka LTS", I presume there is also a way to exchange that money for bodies helping rebuild the CI.
16:03 <fungi> which is basically the reason to have an lts, i suppose, so not particularly surprising
16:03 <dhellmann> fungi: true
16:03 <dhellmann> on both counts
16:04 <ttx> dhellmann: that, and actually testing intermediary releases and giving them more weight
16:04 <dhellmann> ttx: yes, that one would require more thought about how to set up the tests
16:04 <dhellmann> a simple first step would be to publish all server releases to pypi
16:04 <dhellmann> we have a mix right now
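
[Editor's note: a quick way to audit the "mix" dhellmann mentions, assuming the PyPI project names match the service names (they do not always); PyPI's JSON API returns a 404 for names with no published project. The service list is illustrative.]

    # Report which services have any release published on PyPI.
    import urllib.error
    import urllib.request

    def on_pypi(name):
        url = "https://pypi.org/pypi/%s/json" % name
        try:
            with urllib.request.urlopen(url):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # nothing published under this name
                return False
            raise

    for service in ["nova", "keystone", "swift"]:
        print(service, "published" if on_pypi(service) else "missing")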
16:05  * dhellmann notes the time
16:05 <fungi> the previous discussions about lts, or generally extending "support" for stable branches, did in fact involve no longer closing those branches, but included less interest in maintaining the ci and instead gravitated toward disabling testing on them
16:05 <fungi> similar to what cinder's driverfixes branches are now
16:05 <ttx> right. I'm interested in small changes (technical and messaging) we could implement to make progress in the right direction. But we might want more clarity on the landscape first, lest we pick the wrong direction
16:06 <dhellmann> yeah, I think in Sydney we agreed that disabling tests would be an option, but not necessarily happen automatically
16:06  * ttx checks appropriate use of lest in dictionary
16:06 <dhellmann> ttx: sure
16:06 <persia> ttx: You got it right (re lest)
16:06 <dhellmann> ttx: you used it exactly correctly
16:06 <ttx> yay! can cross that one off my 2018 list
16:06 <fungi> it does lead me to question the level of trust consumers will put in those lts-maintained branches if they're no longer tested
16:07 <dhellmann> my impression was that we were talking about adding skips or deleting individual functional/integration tests, not turning jobs off completely
16:07 <dhellmann> like if a test becomes racy because of some underlying change
16:07 <persia> fungi: If we presume LTS to be mostly vendor-led, is it not safe to offload the risk/trust management also to the vendor?
16:07 <fungi> sounded like one suggested option was to switch to a different style of distro-specific testing, but the complexity of rebuilding packages for arbitrary distros from source in the upstream ci meant that they would likely be provided by third-party ci mechanisms
16:08 <dhellmann> and yeah, we did also talk about having some sort of "we tested it downstream" statements
16:08 <mugsie> fungi: if the branches are being maintained by vendors, we talked about the vendors taking the risk, and doing testing
16:08 <TheJulia> persia: that is a good point; the vendor should take that charge, and the ask upstream seems to be more to enable collaboration
16:08 <dhellmann> so it sounds like we need some sort of way to clearly communicate just how much is being tested, and where, in the branches
16:08 <persia> My read of the "ask" is from vendors to "OpenStack", being "please let us collaborate on something called LTS". I may be mistaken, as I wasn't in Sydney.
16:09 <ttx> So I think we'll need to have an "upgrading" room at the PTG to further explore that intersection between stable branching, testing, dev cycling, release management, vendor policies and ops techniques
16:09 <ttx> because there is a lot to brainstorm there
16:09 <mugsie> persia: at its core, yes
16:09 <fungi> ttx: seconded, sounds like a great idea
16:09 <dhellmann> I think the way red hat would consume them is to pull individual patches that we want, and do our own testing. So they become a way to share those patches, rather than a way to determine that the upstream version is well tested.
16:10 <ttx> BUT we need to get a more accurate picture of the landscape before then, or we risk following the wrong direction
16:10 <mugsie> we have had that ask before, but we did not (to my recollection) think about allowing an untested branch before
16:10 <persia> In which case, the need for comprehensive upstream CI is reduced, but the release team still needs a way to say "these old branches aren't tested properly upstream: if you need them, speak with a vendor"
16:10 <dhellmann> perhaps one outcome might be a set of suggestions, with their expected impact, to be used as a way to get feedback from users?
16:10 <ttx> So yes, now is the time to get more clarity on what people want, why they want it, etc.
16:11 <dhellmann> persia: we did say we would not tag releases on the lts branches
16:11 <persia> ttx: We need to combine that with who they are for the data to be useful. Vendor need for LTS is driven by folk who can commit to pay vendors, who may not be the same as the people who directly use or operate OpenStack.
16:11 <dhellmann> in the log message id discussion I found it very useful to have some strawman proposals for people to knock down
16:12 <persia> dhellmann: Is there a good writeup on what was decided? I think you've already had most of the thoughts I would have about this, and I'd like to stop repeating them :)
16:12 <dhellmann> we probably don't want to be calling it "LTS" if we're not going to actually support it upstream.
16:12 <dhellmann> persia: heh, let me see if I can find the etherpad
16:12 <fungi> dhellmann: "long-term stable"
16:13 <ttx> "post-EOL branch"
16:13 <dhellmann> persia: https://etherpad.openstack.org/p/LTS-proposal
16:13 <dhellmann> I think that's the one dims created after the summit
16:13 <persia> ttx: Maybe "post-EOS"? It's still alive.
16:13 <dhellmann> ttx: "not dead yet"?
16:13 <cmurphy> lol
16:13 <ttx> Zombie branch
16:13 <dhellmann> sold
16:13 <TheJulia> +1 for zombie
16:14 <cmurphy> zombie implies it died at some point
16:14 <ttx> Not sure the enterprisy folks will be happy deploying off the Zombie branch
16:14 <dhellmann> cmurphy: vampire?
16:14 <persia> enterprise folk don't have to be exposed to codenames.
16:14 <ttx> cmurphy: Vampire branch?
16:14  * dhellmann is not up on his mythical creatures
16:14 <cmurphy> haha
16:14 <dims> are the zombies fast or slow? :)
16:14 <TheJulia> vampires could work
16:14 <dhellmann> or anyone else's, for that matter
16:14 <cdent> dims++
16:14 <fungi> dims: i think we agreed on slow zombies for this one
16:14 <persia> But generally, use of names of undead creatures isn't a good idea, for cultural-sensitivity reasons
16:14 <ttx> Succubes also don't die transforming, but they are gross
16:15 <ttx> Vampires are classy
16:15  * fungi wonders whether ttx considers max schreck in "nosferatu" as classy
16:16 <ttx> TIL: a male succube is called an incube
16:17  * TheJulia sighs
16:17 <persia> ttx: In English, "incubus/incubi" and "succubus/succubi" are the more common orthographies.
16:18 <fungi> persia: which is actually latin ;)
16:18 <ttx> yes, we got past Latin
16:18 <ttx> Oh well
16:19  * ttx goes back to the email pile
16:20 <cdent> before tc-members disperse, I wanted to remind folk that smcginnis and I are running for the board next week, in large part in response to conversations had here about increasing throughput between the TC and board
16:21 <ttx> and you like meetings
16:21 <fungi> cdent: thanks for the reminder!
16:21 <pabelanger> cdent: ++
16:21 <cmurphy> cdent: yay
16:22  * fungi wishes we could switch the board elections to condorcet/stv/something
16:22 <cdent> ttx: heh, but no.
16:22 <cdent> fungi: yeah
16:23 <fungi> perhaps from within the board is a great place to drive such discussions
16:23 <ttx> fungi: last time I pushed for it I was met with objections citing Delaware corporate code
16:24 <ttx> but yes, probably better to push for it from inside the board than from outside
16:25 <persia> Spurious argument: Delaware allows boards to be managed by appointment, so condorcet-to-inform-chairman and chairman-appoints should work (but get on the board first, then ask counsel)
16:27 <dhellmann> this was a good conversation today, thank you everyone
16:27  * dhellmann has to drop offline
16:27 <fungi> dims: back on the kubernetes-dev thread about release cycle length, lts and upgrading (wow, they really ended up on the same cluster of issues, didn't they?) i see the most recent post mentions that their "patch managers" have to be google employees... do you happen to know why that is?
16:28 <fungi> "Another effect of longer releases is an increase of duty time for patch managers.  This role can only be handled by a Google employee which creates an unfair burden there in terms of engineering time."
16:28 <dims> fungi: or folks can buddy up with a google employee... there are some build infra pieces that are not yet in the open and need google credentials
16:28 <fungi> oh, interesting. so it's infra team issues on that end
16:28 <dims> including pushing to the gcr.io docker repo
16:29 <ttx> yeah, I think it's the point-release end of it
16:29 <ttx> their releases require google employees too
16:29 <fungi> reminds me of early on, when there was an (often ignored) policy which disallowed non-hpcloud employees from logging into the servers run there
16:29 <dims> ttx: at least a few hours' worth of work
16:30 <fungi> makes me glad we're not in their shoes
16:32 <dims> they are making good progress on reducing that piece... so it's not as bad as it used to be a while ago (3-4 releases back)
16:32 <dims> i can hunt for a TODO list if you are all interested
16:41 *** dtantsur is now known as dtantsur|afk
16:49 *** cdent has quit IRC
16:49 *** gcb has quit IRC
16:50 *** ChanServ sets mode: -r
16:52 *** cdent has joined #openstack-tc
16:56 *** jpich has quit IRC
18:00 *** david-lyle has quit IRC
18:01 *** david-lyle has joined #openstack-tc
20:26 *** cdent has quit IRC
21:27 *** smcginnis has joined #openstack-tc
23:14 *** flwang has quit IRC
23:18 *** hongbin has quit IRC
23:24 *** flwang has joined #openstack-tc
23:39 *** mtreinish has quit IRC
23:42 *** mtreinish has joined #openstack-tc

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!