21:00:04 #startmeeting tc-python3
21:00:05 Meeting started Thu Mar 7 21:00:04 2019 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:08 The meeting name has been set to 'tc_python3'
21:00:10 #topic rollcall
21:00:11 o/
21:00:52 people are excited about python3
21:00:53 sweet
21:01:09 i have a feeling the excitement is more about !python2
21:01:26 #topic introduction
21:01:51 can someone give us a brief about what's been happening? we all know python2 is going away, but just how far have we gotten, and what's gotten done?
21:02:09 o/
21:02:30 Someone have the link to dhellmann's tracking page?
21:02:37 o/
21:02:41 (and really, why we've needed to have this meeting too, considering a lot of tc and community members may not have been able to follow along as much with this)
21:02:48 smcginnis : do you mean this? https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
21:02:57 dhellmann: Yep, thanks.
21:03:09 #link https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
21:03:11 so as I understand it, this meeting is about which versions of Python3 to test in Train/Stein
21:03:12 that really only tracks test jobs for any version of python 3
21:03:21 o/
21:03:25 not about which projects have migrated to python 3
21:03:42 i'm just trying to make our meeting notes somewhat consumable by our community
21:03:55 so, as a reminder, we passed a resolution on how we'll make these decisions starting with Train
21:04:02 #link https://governance.openstack.org/tc/resolutions/20181024-python-update-process.html
21:04:33 the answers should fall out of that without us having to apply too many judgement calls
21:04:35 pretty sure we need to test python 2.7.15 and 3.6.something in train at a minimum. depending on what rhel 8 releases with and what opensuse leap have, that may need extending further?
21:04:54 but it does rely on us doing it at the beginning of a cycle and setting up goals and expectations
21:05:05 we were too late to do that for Stein
21:05:25 I think in this meeting we should get to the point where we know what we're doing for Train
21:05:27 yeah, basically whatever minor python versions are the default python and python3 for the three platforms we list there
21:05:35 yea, but we can at least start figuring out from Stein what and how much we can do, like the latest distro thing
21:05:43 and that will help us to provide guidance for what projects should do right now in Stein
21:06:10 makes sense, so zaneb you're suggesting we come up with something for train, but start working on it now
21:06:22 so we don't necessarily ship with that goal, but at least we'll be ready for it by then
21:07:13 that gives us the time to get things in place, so it seems like a solid route forward
21:07:23 retroactively applying it to stein, we need to test 2.7 because that's what ubuntu 18.04 and centos 7 have for default python2, and python 3.6 because that's the default python3 on ubuntu 18.04 (those platforms chosen because they were the latest lts for each distro at the start of the stein cycle)
21:07:25 we need to start preparing the Train goal now. but also we need to know what it's going to say so that we know what makes sense to do in the meantime before this process kicks in
21:07:46 fungi: can we please deal with Train first?
21:07:58 sorry, gmann asked to start with stein
21:08:06 are we changing distros for train?
21:08:07 after i started with train
21:08:34 we still need to figure out stein, otherwise we'll ship something that will be problematic for deployers
21:08:36 but i'm happy to be quiet and let you all decide which you want to talk about first
21:08:56 gmann: is this why you put forward the idea of porting legacy jobs to bionic?
21:08:56 yeah, we have half of the jobs testing bionic and half xenial.
21:09:01 yeah
21:09:09 so there are 3 bullet points in the resolution. let's go through them one at a time for Train
21:09:22 1) The latest released version of Python 3 that is available in any distribution
21:09:32 I submit that this is py37 for Train
21:09:36 any disagreement?
21:09:44 not from me
21:09:56 sounds likely unless centos 8 comes about before then and has python 3.8. doubtful
21:10:11 2) Each Python 3 version that is the default in any of the PTI distros
21:10:32 we don't know what distros will be in train
21:10:41 i don't think we have a centos 8 release date
21:10:56 let's assume centos 7
21:11:08 python3 doesn't exist on centos 7.
21:11:10 mnaser: we pick them in advance, so if centos 8 is not released before the start of train, we stick to 7.5
21:11:37 right, i think the point is we don't know *now* what will be available at the start of the train cycle
21:11:41 I don't think it actually makes any difference, because we've said we support py27 until U, and py36 is the default on both centos8 and ubuntu bionic
21:11:59 that's easy then
21:12:08 unless leap is different?
21:12:13 So I believe this is py27 and py36
21:12:29 good point, we can deduce since we know it won't be 3.5 in centos 8 and 3.8 isn't due to release until october
21:12:31 don't know about leap, but I assume it's py36
21:12:44 if it's 37, well, we already put that on the list from (1)
21:12:49 i think rhel 8 preview had a python version
21:13:02 " In Red Hat Enterprise Linux 8, Python 3.6 is the default."
21:13:04 it is 36 for leap 15
21:13:10 centos 8, being a rebuild, will have 3.6
21:13:19 so 3.6 it is
21:13:21 problem solved then
21:13:35 ok, so the list so far is py27, py36, py37 (incorporating both (1) & (2))
21:13:41 now for the fun one
21:13:46 train=2.7,3.6,3.7
21:13:52 3) Each Python 3 version that was still used in any integration tests at the beginning of the development cycle.
21:13:57 soooo
21:14:10 this is up for interpretation as i mentioned before
21:14:19 the plan is to switch everyone to bionic before Train
21:14:31 in which case it's only py36, and no changes here
21:14:37 i take it to mean any version that can't be updated in integration tests early in the cycle
21:14:51 if we didn't manage to get everyone to switch then we'd have to add py35 to the list
21:15:10 fungi: "Testing for these versions can be dropped once all integration tests have migrated."
21:15:43 just to recap so far, does that mean train will have 2.7, 3.6 and 3.7 (and potentially 3.5 if we don't get rid of it this cycle)?
21:15:44 but at the beginning of a cycle, it's based on the status at the beginning of the cycle
21:15:45 yeah, i'm less and less sure it actually specifies a unique limitation on its own
21:15:55 mnaser: correct
21:16:12 #topic Decide on py3 targets for Train + Discuss recommendations for Stein
21:16:17 (for my sake later, let's keep going)
21:16:23 for unit testing
21:16:34 does this lead us to discussing the potential idea of porting legacy jobs to bionic?
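For reference, the Train unit-test list agreed above (2.7, 3.6, 3.7) would look roughly like the following in a typical project's .zuul.yaml. This is a sketch using the existing openstack-tox-pyNN job names, not the per-series template that gets proposed as an action item later in this meeting:

    # sketch: a project opting into the agreed Train unit-test targets
    - project:
        check:
          jobs:
            - openstack-tox-py27
            - openstack-tox-py36
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py27
            - openstack-tox-py36
            - openstack-tox-py37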
21:16:43 if it can be dropped later in the cycle then it's unclear to me why it matters that we know it was used at the beginning of the cycle
21:16:57 mnaser: yes, this would be a good point to discuss that
21:17:00 fungi: i think it is so we have a clear list
21:17:20 that we then edit when the migration happens
21:17:32 #topic porting legacy jobs to bionic
21:17:38 this seems like a really big leap
21:17:49 how much do we risk breaking by this?
21:18:14 how different is 3.5 from 3.6 ?
21:18:27 so rephrasing for my benefit, as i'm still quite confused, "if we can't get all official projects integration tested with the versions we said we require in #1 and #2, then we require that all projects also remain tested on the earlier version"
21:18:31 mostly it should be internal scripts that are affected
21:18:48 (fwiw, i think consuming less test resources is always nice, rather than us testing a ton of targets)
21:19:03 so there's value in that.
21:19:20 are we more concerned that stuff that's not python would break under bionic?
21:19:38 fungi: yes, as long as there are gates running on py35, another project dropping support for py35 could break that gate
21:19:40 for the start of train, we will unit test with 2.7, 3.6, 3.7 (and 3.5 if the move to bionic does not happen)
21:19:57 mnaser: yes that is going to be the main blocker.
21:20:05 i guess this is to avoid "the trove effect" we saw at the trusty to xenial transition where all other projects had switched to running jobs on xenial and started merging changes which broke trove because it was still testing on trusty (even months after the release)
21:20:24 I think we should clarify that "all projects" statement, though
21:20:36 I wouldn't want cloudkitty to require nova to keep py35 tests, for example
21:21:03 yea. the "integration tests" line makes me think of the integrated gate
21:21:06 in effect, at the trusty to xenial switch we accepted that the trove team's lack of resources to get their jobs updated would not hold up other projects dropping support for trusty
21:21:12 mugsie : yeah
21:21:15 and we do not have cross integration testing across all projects, so i cannot think of more than a 7-8 project combination
21:21:15 yeah. we left it up to the goal champions to define if there would be a hard cutover for laggards
21:21:54 the integrated gate is 1 job, right? that's the whole point?
21:22:01 or at least 1 set of all the same jobs
21:22:17 so those projects would all move at one time
21:22:31 gmann: well - there is more than that if you take jobs outside of the integrated gate
21:22:33 dhellmann: two, tempest-full and grenade, but those are used in a small number of projects
21:22:34 and other projects could lag but we could say that support for the older images would be dropped at some point
21:22:47 the problem is, we could organise an orderly transition in Train with lots of advance publicity of when things will break and plenty of time to fix. but if we do we have to unit test py35 in Train
21:22:49 most of the "non core" projects rely on some of the core services
21:23:10 also i think this is once again conflating platforms and python versions. in the trusty-xenial switch it was still python 2.7 on both sides. what changed was the platform, not the python version we were running, and what broke trove was other projects dropping support for features of the old platform or starting to rely on features of the new platform
21:23:19 alternatively, we can try to ram through the change in Stein at the last minute, in which case we get to drop py35 for Train
21:23:23 zaneb : I don't think we want to interpret that rule as meaning that if we start out with something we're not allowed to drop it.
21:23:37 I think we want to focus on the end state we want for each cycle, not the start state
21:23:37 I'm actually in favour of the latter fwiw
21:24:05 we can drop 3.5 unit testing later in the cycle when it is less likely to break projects
21:24:14 dhellmann: we could drop it once the goal was complete (and we did explicitly say that in the resolution)
21:24:22 right
21:24:37 so if we don't drop 35 during stein, that doesn't mean we can't drop it during train
21:24:47 i think where the start state matters is that we need to be able to give projects fair warning of what we expect them to be running, and so we have to decide at the start of the cycle what the target is and thus can't realistically choose a target which won't be available for use until later in the cycle
21:24:48 we should just need to declare the intent
21:25:03 agreed
21:25:08 fungi : yes, good point. we can't *add* something late, but we can drop something
21:25:15 precisely
21:25:27 if that's not clear in the "rules" we should fix that
21:25:30 fungi: makes a really good point about mixing things
21:25:32 we couldn't drop it *from the beginning* but we could announce that we had a plan to drop it during the cycle
21:25:51 zaneb : sure. we would want to set a reasonable date for it
21:25:56 give me a break on this, but how hard would it be to run a 16.04 image with py36?
21:25:59 so for example, if centos 8 is not available officially at the start of the train cycle, we tell them centos 7 is the target for the train release
21:26:02 M1, M2, whatever
21:26:15 even if we expect centos 8 to be available later in train
21:26:25 fungi: yes, that's how I see it
21:26:38 fungi : I think that's fair. And we can say that projects could optionally add centos 8 jobs but we wouldn't require it.
21:26:56 sure, if people want to be proactive, that's great :)
21:26:57 that way we can split the python version <=> ubuntu platform problem
21:27:08 mnaser: again, what we're testing is not actually "python 3.6 on bionic" it's "the python3 which ships with bionic" so compiling our own or using backports isn't really the same thing
21:27:40 * mnaser wishes we lived in a world where "python3.6" and "python3.6 that ships with bionic" were the same thing
21:28:12 buuut anyways, we've come up with a lot of ideas. does this mean that for stein we'll try to push projects to drop py36, but not aim to drop it in this cycle?
21:28:13 mnaser: or at least, in the past we made the conscious decision that what we test against is the python interpreter provided by each platform we're targeting, not some ideal python x.y interpreter
21:28:32 mnaser: I think you mistyped that :)
21:28:35 mnaser : not drop 3.6, drop 3.5
21:28:40 mnaser: for stein i think we push to drop 3.5
21:28:41 oh yes
21:28:42 sorry
21:28:43 yeah 3.5
21:28:57 so IMHO, no
21:29:10 we could do that, but I'm also content to just say they need to include 3.6
21:29:37 at the start of the cycle, all tests were on Xenial, so I think 3.5 was a reasonable target for Stein
21:29:44 bullet #3 if i'm interpreting it correctly is that if we can get projects testing 3.6 (that is, bionic) in time for the stein release then they don't need to be running 3.5 (xenial) come release time and we don't need to support both for stable branch testing
21:30:04 which gets dicey if we want to continue maintaining stable/stein after xenial reaches eol
21:30:08 if we do manage to move completely to Bionic it would be a close-run thing
21:30:28 bionic ships 3.6, we listed bionic as a platform, does that mean we need to get projects to add py36 as a target in stein?
21:30:28 but if we manage it then we could tell projects to drop py35 on stable/stein
21:30:40 at the start of the cycle, bionic was available (well before the start of the cycle even) so migrating to bionic was a given based on our past transitions
21:30:57 mnaser: we do, and we actually *did* make that a goal for Stein
21:31:06 so, yay for us :)
21:31:11 sweet.
21:31:22 how close did we come to accomplishing it? that's still in progress, right?
21:31:30 we decided to design-by-committee the transition until it's now nearly too late to pull the trigger and be able to drop xenial testing when we drop stable/rocky
21:31:36 dhellmann: aren't you the goal champion? ;)
21:31:40 not for this, no
21:31:48 yeah, not all projects have started the legacy jobs - https://etherpad.openstack.org/p/legacy-job-bionic
21:31:51 oh, well, I guess sort of
21:32:01 I wasn't worried about the OS, just the python version
21:32:16 #link https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs
21:32:36 dhellmann: ahem https://governance.openstack.org/tc/goals/stein/python3-first.html#champions
21:32:42 I like the idea of dealing with the python version unit tests by having a series-specific name template for those
21:32:48 the "python version" is still a red herring here, but i've just about given up convincing anyone of that
21:33:01 zaneb : yeah, like I said, just the python version, not the OS version
21:33:24 fungi : I understand, but I'm trying to make the point that I didn't do any work to deal with updating the OS version
21:33:41 right, that was *not* a goal that we set
21:33:54 although in retrospect we should have
21:34:03 at least from the "what can we reasonably test" end of things, we lose the ability to test what we intended when the platforms which shipped them cease to receive security updates and bug fixes
21:34:45 so the choice we make here has far more bearing on stable branch lifetimes
21:35:02 fungi: you're bringing this up w.r.t. the issue of continuing to unit test py35 (on Xenial) in stable/stein?
21:35:09 so for train, we need to say "3.6 on bionic" right? and 3.7 somewhere?
21:35:20 zaneb: yes, exactly
21:36:08 fungi: I think that's fair, but also if it turns out we're still running integration tests on xenial by the time Stein releases, then the unit tests are the least of our problems
21:36:25 and if we're not, we can drop the py35 unit tests
21:36:34 as of the rocky release there was already a newer lts distro available, so our maintaining use of the old distro in our ci should die with that stable branch, essentially
21:38:18 we have a lot of similar colors, but it looks like someone has written some conclusions
21:38:23 how do we feel about those?
21:38:37 mnaser: that's me
21:38:41 we're running most integration testing on bionic at this stage, right? it's the "legacy" jobs which are still relying on xenial at this point?
21:39:04 L27 on etherpad- we should be conclusive there: either complete the migration or keep testing everything on the old distro
21:39:06 gmann: might be the best person to answer that
21:39:23 fungi: correct. only legacy jobs on xenial
21:39:23 fungi: so I think what you're saying is we should force everyone to migrate from xenial->bionic in stein, even though it's at short notice
21:39:46 where short notice is since the beginning of this cycle, yes
21:39:59 i hate to say it, but i think it might be a very necessary evil, also, i don't think a lot of projects are going to suffer much from it
21:40:10 zaneb: yes. but leaving things tested in a mixed way might cause issues on stein deployments
21:40:13 most apt packages are still there, and i doubt ubuntu 16 => 18 has had a ton of fundamental changes, for example
21:40:24 fungi: not disagreeing with you about the solution, but we didn't give them notice
21:40:26 mnaser: it depends on the projects
21:40:39 before this go-round, the infra team told everyone "okay there's a new lts, everybody get your stuff in order and move" but this time around we (the tc) got to take responsibility for sending that message
21:40:51 e.g. trusty -> xenial had a completely new powerdns which broke us badly
21:41:00 gmann: do we know which projects have not yet migrated?
21:41:04 but we did warn at the start of the cycle
21:41:11 mugsie: yeah i can imagine that's a scenario where it might happen
21:41:11 fungi: right, and we didn't send the message until very recently AFAIK
21:41:20 the thing is, most people will be doing new deploys on 18.04
21:41:36 * mnaser has done plenty of 18.04 rocky deploys and we do it in OSA CI
21:41:37 mnaser: almost all the projects run about 50% of their gate jobs as legacy jobs, which means xenial
21:41:56 those jobs are either owned by projects or in infra
21:42:19 but most projects aren't like designate, for example, where there is a very strong fundamental reliance on a system service
21:42:22 zaneb: depends on which message. some of us did say it, well early and repeatedly over the course of the cycle, to get it done
21:43:05 the tc didn't make an official proclamation from on high to get it done because we couldn't agree on the words we wanted to use to plan similar transitions which won't happen for a couple more years
21:43:22 can we try an experimental job across a few major projects with legacy jobs running?
21:43:29 for example neutron stadium has a lot of testing and seems good there - L64 on https://etherpad.openstack.org/p/legacy-job-bionic
21:43:50 mnaser: you mean legacy job running with xenial or bionic ?
21:43:56 legacy job running bionic
21:44:09 mnaser: the problem there is the "legacy" jobs are basically the one-off project-specific stragglers those various teams haven't gotten around to converting
21:44:21 fungi: we have a mechanism for co-ordinating changes across the whole project and we didn't use it. you can argue that people shouldn't be surprised, but people *will* be surprised
21:44:41 (I think we should do it anyway)
21:45:04 yeah so converting the legacy base jobs to bionic moves most of the projects' gates to bionic
21:45:04 mnaser: so running one or even a few doesn't tell us much, nor does running them on other projects than the ones for which they were specifically written
21:45:05 and the people who helped create some of these one-off jobs are not always around, which makes migrations harder
21:45:30 yes, i would argue that if your team doesn't understand the jobs it's running, those jobs are a liability and better dropped
21:45:47 yeap
21:45:48 at least if no one is willing to figure them out
21:46:02 today i am trying to move the infra owned legacy job to bionic which runs on many projects' gates
21:46:24 fungi: +10.
21:46:26 i'd like to propose and ask if there is anyone opposed to removing xenial this cycle (it's painful, yes, but speak up if you feel it's not the right thing to do, painful or not)
21:47:16 what does "remove xenial" mean exactly?
21:47:27 gmann: when you say "the infra owned legacy job" you specifically mean the legacy devstack-gate job which other projects are inheriting from? that job itself isn't run directly by any project right?
21:47:29 change the legacy jobs to use bionic
21:48:30 fungi: these - http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-jobs.yaml
21:48:30 are the legacy jobs the only ones using xenial still?
21:48:56 gmann: oh, jobs plural. you said "legacy job" so i wasn't sure which one you were referring to
21:49:13 dhellmann: i think converting those legacy jobs will move a huge portion of our jobs
21:49:18 the infra team has also considered those legacy jobs to be basically frozen in time, as they're nigh inscrutable and so not easy to troubleshoot if something goes sideways
21:49:19 in integration testing yes, legacy jobs are the only ones using xenial.
21:49:44 fungi: ohk, sorry for typo .
21:50:18 fungi: yeah I am going to give it a try and see how they behave. most of them are experimental jobs so that should give projects time to fix
21:50:22 mnaser : I think it will. I'm trying to understand whether it's important to do it. If those are all one-off jobs for each project, do they affect our ability to say the projects run on bionic?
21:50:46 dhellmann: well i'm assuming most of those one-off jobs inherit some sort of base from the 'legacy' jobs
21:50:53 which is why for over a year the infra team has urged other teams to stop running legacy jobs in favor of writing newer jobs, because the time would come when they need to make changes (such as, say, running them on a newer platform)
21:51:04 mnaser : ok, I don't know how that works so I don't know if that's a safe assumption
21:51:23 so i'm operating under the assumption that if we change the 'base' job, projects will just start using a different nodeset
21:51:27 correct me if i'm wrong gmann
21:51:54 what happens if we ignore the legacy jobs?
21:51:55 mnaser: dhellmann yes, the majority of them inherit from 'legacy-base' and 'legacy-dsvm-base'
21:52:05 and i will say 70%
21:52:09 or even more
21:52:12 there is a legacy-dsvm-base job they inherit from, yes
21:52:14 would we have any projects with no testing on bionic at all if we did that?
21:52:20 (ignored them)
21:52:44 legacy base is pinned to xenial now: https://opendev.org/openstack-infra/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml#L917-L922
21:52:45 all Heat jobs are legacy afaik
21:52:57 dhellmann: they do test bionic also, via all the new zuulv3 devstack based jobs, e.g. integrated-gate
21:53:16 those 2 statements from zaneb and gmann are contradictory
21:53:25 corvus: thanks! i was still hunting for that line
21:53:29 oh this is a nice url too: http://zuul.opendev.org/t/openstack/job/legacy-base
21:53:32 i am not sure if heat runs integrated-gate or not.
21:53:42 dhellmann: 'that' in your question was ambiguous
21:53:57 zaneb : sorry, I followed up with ()
21:54:00 forgive me
21:54:01 https://review.openstack.org/#/c/573228/
21:54:03 i don't see any legacy jobs here?
21:54:23 unless they've just been renamed
21:54:26 dhellmann: ah, so you did
21:54:46 so if we change the base legacy setting, does that just kick this can further down the road? don't we still have to have teams update those jobs? and don't those jobs also run on stable branches that may not work on bionic?
21:55:10 http://zuul.opendev.org/t/openstack/job/heat-functional-orig-mysql-lbaasv2 -> http://zuul.opendev.org/t/openstack/job/heat-functional-devstack-base -> http://zuul.opendev.org/t/openstack/job/legacy-dsvm-base
21:55:16 mnaser: renamed https://git.openstack.org/cgit/openstack/heat/tree/.zuul.yaml#n2
21:55:17 mnaser: many of the legacy jobs are renamed so it is hard to judge by name
21:55:31 ah tricky, okay, yeah, i just saw the zuul inventory file too
21:55:31 IOW, maybe the best long term thing is to say that those legacy jobs "don't count" towards testing and ignore them, upgrade all of the non-legacy jobs ("modern"?) and move on
21:55:59 so what if we moved legacy jobs to non-voting and set nodeset to bionic
21:56:12 dhellmann: good question. the way i am modifying the legacy base job is 1. they run on bionic from stein onwards and 2. keep running on xenial < stein
21:56:13 we do have the job inheritance tree we could use to root them all out, but i favor letting teams be responsible for identifying what jobs they need to deal with whether or not they've chosen to rename them
21:56:25 mnaser : what impact will changing those jobs have on stable branches?
21:56:34 dhellmann: it won't, according to the way gmann has it set up
21:56:36 fungi : indeed
21:56:43 mnaser: whatever fails, moving it to n-v is a good idea, and the project fixes it, then makes it voting again
21:56:50 ah, sorry, I missed gmann's response there
21:57:00 so if a project _desperately_ wants to run legacy jobs, they have to mark it voting explicitly
21:57:08 dhellmann: yeah, stable testing is on xenial
21:57:20 and if it breaks.. well they have to fix the legacy jobs but at least it won't block their current work
21:57:32 ok, so my long-term question still applies. Should we try to fix this, or make them painful and encourage teams to deal with that?
21:58:17 my opinion is that we should just change the legacy nodeset to bionic and have them all run bionic, if they break, they probably weren't good jobs in the first place
21:58:29 and the teams can just move those jobs to non-voting
21:58:30 dhellmann: it depends. either they migrate that job to zuulv3, which puts it on bionic, or fix it within the legacy definition if the job is critical
21:58:56 I'm inclined to agree with mnaser, but I'm not an expert in this area
21:59:00 another alternative to making them non-voting is we let teams decide whether they pin them to xenial on their own if they want to continue running them, and understand that odds are other projects they're testing against may drop xenial support at some point (like what trove experienced when they clung to trusty for months after other projects switched to xenial)
21:59:44 i'm starting to be more inclined to "you either have to set your jobs to non-voting (why are you running them?) or fix them (might as well convert to new jobs)"
21:59:54 fungi: and also that stable/stein will eventually break when infra stops supporting Xenial, right?
21:59:54 legacy-base sets the default node type, but jobs inheriting from it can still override that
22:00:01 zaneb: correct
22:00:14 that needs to be in the warning
22:00:21 fungi: right, so that will be done via an audit by the project team
22:00:29 ok, what's the timing for making that change, given where we are in the cycle?
22:00:32 btw, xenial doesn't even have packaging for rocky.
22:00:59 i agree with the mnaser idea of making the failed ones n-v and leaving it up to projects
22:01:37 fungi: how i do it is to check the job definition and find any overridden nodeset all the way up to the base job
22:01:37 does anyone disagree on us changing the default nodeset to bionic?
22:01:54 I support changing it
22:01:58 same
22:02:07 +1
22:02:15 * mnaser apologizes to infra in advance and hopes that we can help them out with this
22:02:33 +1
22:02:40 +1
22:02:46 now the second argument: should we transition jobs to non-voting or voting for legacy?
22:03:03 if we move them to non-voting, they won't fail the gates of downstream users, but they risk merging broken code
22:03:12 mnaser: I thought we were saying we are leaving that up to projects
22:03:27 just wanted to get some sort of agreement
22:03:37 gmann: i'm not sure which direction you're talking about. you can check the job browser in zuul to see what nodeset the parent jobs of specific jobs set
22:03:40 we are leaving it to projects = we keep them voting in the base job?
22:04:00 if they fail, then tell projects to make them n-v to unblock their own or other projects' gates, and fix or drop them based on their decision
22:04:07 yeah, warn everyone, change the job, let them fix or set to non-voting
22:04:12 mnaser: yes, and if a project's gate breaks, they can make a job non-voting in their local Zuul config
22:04:25 yeah
22:04:25 non-voting or just remove it
22:04:26 we can do it the same day i proposed changing the base default nodeset as well
22:05:00 fungi: yes. that way, and checking the job definition if changes are required.
22:05:11 ok, so legacy jobs will be moved to the bionic nodeset and remain voting for master
22:05:16 fungi: if we put up a test patch for the nodeset change, can projects test it out using Depends-On? they can, right?
22:05:17 is that accurate?
22:05:35 yeah, the initial deadline i set was a little late (april 1st) but with the new way of making them n-v it can be earlier
22:05:54 zaneb: yes, that is how we have started till now
22:06:05 zaneb: project testing that way - https://etherpad.openstack.org/p/legacy-job-bionic
22:06:07 and by doing this, we don't have py35 in train, right?
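A rough sketch of the mechanism agreed here, assuming standard Zuul job-variant syntax (this is an illustration, not the actual openstack-zuul-jobs patch): the legacy base job's default nodeset moves to bionic, a branch-matched variant keeps older stable branches on xenial, and an affected project can pin or de-vote a job in its own .zuul.yaml:

    # openstack-zuul-jobs (sketch): branch variants of the legacy base job
    - job:
        name: legacy-base
        parent: base
        # default: master and stable/stein onwards run on bionic
        nodeset: ubuntu-bionic

    - job:
        name: legacy-base
        parent: base
        # variant: older stable branches keep the xenial nodeset
        branches: ^stable/(ocata|pike|queens|rocky)$
        nodeset: ubuntu-xenial

    # a project's own .zuul.yaml (sketch): local escape hatches if the
    # bionic move breaks a job; "example-legacy-job" is hypothetical
    - job:
        name: example-legacy-job
        parent: legacy-dsvm-base
        # either pin the job back to xenial...
        nodeset: ubuntu-xenial
        # ...or keep the new nodeset but stop the job blocking the gate
        voting: false

Because Zuul applies the most specific matching variant, the base-job change flips every inheriting legacy job at once, which is why one patch moves "a huge portion of our jobs" as discussed above.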
22:06:41 zaneb: yes, depends-on changes to the openstack-zuul-jobs repository will work
22:07:18 https://etherpad.openstack.org/p/python3-meeting
22:07:20 gmann: awesome. so we can give projects a little time to find out the impact (if any) and report back before we break them
22:07:22 do we agree on what's under 'conclusions'?
22:07:27 (the change i need to make to the project-config non-legacy base job, on the other hand, depends-on doesn't help us)
22:07:29 yeah
22:07:58 mnaser: what's the deadline we should give to projects for the legacy job move?
22:08:46 fungi / clarkb might have useful input on that
22:08:56 i vote for wednesday to coincide with the base job change i've proposed
22:08:58 we don't want to make it too late either
22:09:09 13th.
22:09:48 does that date seem to make sense for most?
22:10:10 we can start sending warnings and give advice on how to test with depends-on
22:10:24 i picked next week because the following week is rc target so would be a bad choice, and after rc is worse because some projects will be branching from rc1
22:10:44 anyone opposed, or are we okay with that?
22:11:02 all seems ok :)
22:11:05 it seems very early given that projects _could_ be testing and fixing using Depends-On before we go ahead and possibly break them
22:11:26 that said, the timing argument from fungi is also compelling
22:11:50 but the testing effort was started on Feb 25th
22:12:04 ok. action items because i'm sure y'all are getting tired
22:12:08 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003129.html
22:12:24 Propose openstack-python3-train-jobs Zuul template
22:12:40 well, true, we could push the legacy-base default nodeset change to later if we wanted to give teams longer to depends-on test against it, but as i said that isn't the case for the project-config change to the non-legacy base job
22:13:00 since depends-on won't work there
22:13:01 is openstack-python3-train-jobs necessary?
22:13:19 fungi: is there an advantage to doing them together?
22:13:23 or can we tweak the job defn' so py35 doesn't run on master anymore?
22:13:36 fungi: 13th is ok as a first deadline. giving projects lots of time ends up with no action from them :)
22:13:38 zaneb: reducing confusion is the main advantage i foresee
22:13:43 mnaser: we said that we would do it in the resolution
22:13:43 mnaser: train not stein
22:14:01 mnaser : I think we want to get projects used to the idea of updating these settings regularly
22:14:06 ok cool
22:14:24 so does anyone wanna pick that up?
22:14:34 I can take that one
22:14:42 #action zaneb Propose openstack-python3-train-jobs Zuul template
22:14:50 Email ML with recommendations for projects in Stein
22:15:09 i don't wanna throw more work at gmann but i feel he'd be best at taking care of this :)
22:15:19 I can also take that one if we are agreed on the stuff in the conclusions
22:15:25 this is for py versions ?
22:15:31 yes, what we have under 'conclusions'
22:16:01 if zaneb wants to get this, then sure
22:16:09 #action zaneb Email ML with recommendations for projects in Stein
22:16:15 I'll take it and let gmann have the next one :)
22:16:27 yeah, that's why i did not opt :)
22:16:39 cool
22:16:45 #action gmann Notify ML with nodeset base change
22:17:04 zaneb: i guess your email will give the context of today's meeting
22:17:10 so we probably don't really need an extra update?
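For context on the first action item: the per-series template would presumably follow the pattern of the existing python3 templates in openstack-zuul-jobs. A minimal sketch, assuming the Train versions agreed above and the existing openstack-tox-pyNN job names (the actual job list is whatever the proposed review settles on):

    # sketch: a series-named template projects can consume for Train
    - project-template:
        name: openstack-python3-train-jobs
        description: Unit tests for the python3 versions agreed for Train.
        check:
          jobs:
            - openstack-tox-py36
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py36
            - openstack-tox-py37

This is the "series-specific name template" dhellmann likes above: each cycle, the required version list changes by editing one template (or repointing projects at the next series' template) instead of editing every repository's job list.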
22:17:12 we can just link to the logs here
22:17:18 i don't think so
22:17:20 yeah, I can do that
22:17:43 i think we probably need to update our docs to reflect the PTI for train
22:17:46 Lance Bragstad proposed openstack/governance master: Elaborate on the business value of Glance https://review.openstack.org/641784
22:18:07 which is creating a page similar to https://governance.openstack.org/tc/reference/runtimes/stein.html
22:18:28 do we want to hold off on that for now?
22:18:33 the contents might even be basically identical
22:19:08 it's missing Leap
22:19:17 yep, we added that
22:19:26 but yeah, I see no reason not to go ahead and copy it now for Train
22:19:43 anyone want to take up that action with the info from the etherpad?
22:19:50 o/
22:19:53 cool
22:19:55 thanks mugsie
22:19:59 oh, right, leap is new for train
22:20:11 #action mugsie Update governance repo for Train PTI
22:20:28 i'm really happy we flushed all this stuff out, took way longer than i thought :)
22:20:31 but i think it's the best outcome
22:20:37 thanks for everyone's patience <3
22:20:45 o/
22:20:49 thanks for putting this together mnaser!
22:20:49 o/
22:21:00 now buy flamesuits when everyone finds out they're switching to bionic
22:21:02 SURPRISE
22:21:07 #endmeeting
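A note on the Depends-On testing referenced in the action items, for anyone following up from the logs: a project can verify the proposed nodeset change in advance by pushing a do-not-merge test change whose commit message footer points at the openstack-zuul-jobs review, so its jobs run as though that change had already merged. A sketch of such a commit message, with a placeholder change URL:

    DNM: test legacy jobs against the proposed bionic nodeset change

    Depends-On: https://review.openstack.org/xxxxxx

As fungi notes above, this works for the openstack-zuul-jobs legacy-base change but not for the project-config base job, which is a trusted config repo where speculative testing via Depends-On does not apply.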