Thursday, 2018-10-18

*** annabelleB has joined #openstack-tc00:35
*** annabelleB has quit IRC00:37
*** diablo_rojo has quit IRC00:39
*** annabelleB has joined #openstack-tc02:22
*** melwitt has joined #openstack-tc02:24
*** annabelleB has quit IRC02:31
*** lbragstad_503 has quit IRC05:27
*** lbragstad_503 has joined #openstack-tc05:27
*** annabelleB has joined #openstack-tc05:48
*** annabelleB has quit IRC06:08
*** openstackgerrit has joined #openstack-tc06:35
<openstackgerrit> Tony Breeds proposed openstack/governance master: T Release Name  https://review.openstack.org/611511  06:35
*** e0ne has joined #openstack-tc06:38
*** ricolin has joined #openstack-tc06:40
*** persia has quit IRC07:19
*** persia has joined #openstack-tc07:21
<evrardjp> o/  07:28
*** tosky has joined #openstack-tc07:31
*** openstackgerrit has quit IRC07:35
*** dtantsur|afk is now known as dtantsur07:56
*** jpich has joined #openstack-tc08:00
*** e0ne has quit IRC08:02
*** e0ne has joined #openstack-tc08:13
*** ricolin has quit IRC10:29
*** ricolin has joined #openstack-tc10:59
*** e0ne has quit IRC11:50
*** dims has quit IRC12:30
*** dims has joined #openstack-tc12:33
*** mriedem has joined #openstack-tc12:38
*** e0ne has joined #openstack-tc13:07
*** cdent has joined #openstack-tc13:08
*** lbragstad_503 is now known as lbragstad13:32
*** openstackgerrit has joined #openstack-tc13:51
<openstackgerrit> Corey Bryant proposed openstack/governance master: Add optional python3.7 unit test enablement to python3-first  https://review.openstack.org/610708  13:51
<mriedem> ttx: do you know if there are still unused forum session slots available?  14:30
<ttx> I was not on the selection committee... I think I heard they allocated all their slots, let me see  14:30
<mugsie> mriedem: all slots were allocated  14:31
<mriedem> ok  14:31
<ttx> we could potentially change things / merge your topic in a session though, I guess  14:31
<mriedem> i know lyarwood had duplicate sessions,  14:32
<mriedem> but yeah  14:32
*** annabelleB has joined #openstack-tc14:34
*** annabelleB has quit IRC14:40
*** cdent has quit IRC14:54
<fungi> openstack community: tc-members are around to answer your questions for the next hour  15:00
<lbragstad> o/  15:00
<gmann> o/  15:00
<gmann> sent email too  15:04
<gmann> btw i wanted to discuss the gate job migration to bionic  15:04
<gmann> frickler and I were discussing the migration of jobs to Bionic in the QA channel today  15:05
*** dklyle has joined #openstack-tc15:06
<dims> o/  15:07
<gmann> As the QA team owns most of the base jobs (in devstack and tempest), I agree (from the QA PTL point of view) that the QA team needs to take ownership of migrating those jobs to the new platform.  15:07
<gmann> we have started making progress on that.  15:07
<gmann> #link https://etherpad.openstack.org/p/devstack-bionic  15:07
<fungi> thanks gmann!  15:08
*** jeremyfreudberg has joined #openstack-tc15:08
<gmann> the plan is: 1. test the project-side and integration jobs on Bionic (^^ etherpad started by frickler), 2. fix any issues, 3. if all goes well, the QA team will migrate the base job setup  15:09
<gmann> and also work with projects on testing their project-specific jobs before we migrate the base job setup  15:09
<gmann> does this plan work fine? any feedback/opinions?  15:10
<gmann> and another question is - is it ok to do this migration in stein?  15:10
<gmann> from the TC perspective  15:10
*** annabelleB has joined #openstack-tc15:10
<zaneb> so I would say I'm +1 personally on doing this in Stein, but I think we should have that recorded somewhere and have the whole TC vote on it (https://review.openstack.org/611080 seems like a good place to me)  15:12
*** e0ne has quit IRC15:12
<mugsie> we do say "latest LTS" on https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions  15:12
<tosky> gmann: and tell users of the devstack-tempest job to keep an eye on their jobs  15:13
<gmann> tosky: yeah, we are going to start an ML thread on that  15:13
<gmann> zaneb: mugsie: yeah it is already there as "latest LTS". do we need anything else to make it more specific and explicit?  15:14
<mugsie> I don't remember anything like that for the xenial migration, it just sort of happened afaik  15:14
<gmann> tosky: we are not going to merge the base job changes until all project-side jobs are tested well.  15:14
<fungi> in the past the tc hasn't really voted on it, we've just been able to rely on the openstack infra team to push the community to upgrade testing. glad to see the qa team taking the lead this time (it certainly seems within their remit, even more than it was for the infra team)  15:14
<gmann> yeah  15:14
<zaneb> mugsie: yeah, that's fair. so I guess there is no obstacle to the QA team going ahead  15:15
<fungi> mugsie: yeah, we've tried a couple of solutions in the past. for precise to trusty the infra team just put on their flame-retardant suits, flipped the switch, let everything break, and let the teams work out the blockers in their particular jobs  15:16
<mugsie> fungi: I remember :)  15:16
<zaneb> that said, the QA team are not the only folks running functional test jobs, and if we want to inspire concerted action from everyone, just having the general principle documented will not do it  15:16
<fungi> for trusty to xenial, because everyone complained about the breakage the previous time, we told teams to migrate at their own pace piecemeal. that also didn't work out well because a number of teams procrastinated  15:16
<fungi> and so we released openstack with integration testing split between two different ubuntu lts versions, and stuff broke that way instead  15:17
<fungi> i should point out that the precise->trusty switch was announced well ahead, experimental jobs were set up and teams were encouraged to work out their issues ahead of the flag day  15:18
*** e0ne has joined #openstack-tc15:18
<fungi> really most of the pain from either method came down to some teams procrastinating and prioritizing other work. so not terribly surprising  15:19
<mugsie> yeah, either way there will be pain on some side  15:19
<gmann> this time (with zuulv3 jobs) the base jobs are the ones defining the ubuntu version (in 90% of the jobs if i remember correctly). so changing it there is going to affect most of the project-side jobs too  15:19
<zaneb> fungi: iirc both of those occurred before we had a project-wide goals process. do you think following that process (with e.g. goal champions &c.) might make it go smoother?  15:20
<gmann> but yes, any failure on the project-specific job side has to be fixed by the project team (the QA and infra teams can always help)  15:20
<fungi> zaneb: i think following the goals process for something like this does at least provide us with a bit better insight into the state of progression for the switch across the projects  15:20
<mugsie> gmann: is there a devstack-tempest-bionic we can tell teams to use to check things, and then we can swap the OS of the root job after $TIME?  15:20
<gmann> mugsie: yes, it is there - https://review.openstack.org/#/c/610977/  15:21
<gmann> and this is testing all the tempest gate jobs - https://review.openstack.org/#/c/611572/  15:21
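A minimal sketch of what such a Bionic pre-testing variant could look like, assuming Zuul v3 syntax, a devstack-tempest parent job, and an openstack-single-node-bionic nodeset; the job and nodeset names here are assumptions for illustration, not the exact content of the linked reviews:

    # Hypothetical Bionic variant of the devstack-tempest job: same job,
    # only the nodeset is overridden to run on Ubuntu Bionic nodes.
    - job:
        name: devstack-tempest-bionic
        parent: devstack-tempest
        description: devstack-tempest, pinned to Ubuntu Bionic for pre-testing.
        nodeset: openstack-single-node-bionic

A project team could then exercise the variant non-voting in its check pipeline while the default nodeset stays on Xenial, for example:

    # Hypothetical project pipeline entry adding the Bionic variant non-voting.
    - project:
        check:
          jobs:
            - devstack-tempest-bionic:
                voting: false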
<fungi> zaneb: which is why i'm in favor of smcginnis_vaca's governance change (in some form) being positioned as a fait accompli cycle goal  15:22
<gmann> and the etherpad tracking the project-specific job testing - https://etherpad.openstack.org/p/devstack-bionic  15:22
<zaneb> fungi: yes, I am also +1 on that but I think it needs to say how to achieve the goal  15:23
*** cdent has joined #openstack-tc15:23
<zaneb> the current patch only records what the latest LTS version was at the start of the cycle  15:24
*** jamesmcarthur has joined #openstack-tc15:24
<fungi> i guess "jdi" isn't enough to say how? ;)  15:24
<clarkb> zaneb: I think start of cycle is the criteria?  15:25
<clarkb> zaneb: bionic released during the rocky cycle, but because we were halfway through rocky it was deferred to stein  15:25
<fungi> we never explicitly said anywhere formal that it was latest lts "as of the start of that cycle" but we've so far interpreted it that way to minimize disruption to the release process  15:25
<clarkb> ah  15:26
<fungi> the pti doesn't get quite that specific  15:26
<cdent> I think we should state it as the lts at the _end_ of the cycle  15:26
<cdent> (expected lts)  15:26
*** annabelleB has quit IRC15:27
<fungi> but we've also got the counterpoint from distro packagers that we should be targeting what the distros plan to release rather than targeting what already exists  15:27
<zaneb> clarkb: my point was it's useful to say "to save you looking it up, the latest Ubuntu LTS is 18.04", but more useful to say "you need to update these specific test jobs before the end of Stein"  15:27
<mugsie> yeah, that is more difficult though  15:27
<fungi> cdent: running jobs on bionic well before bionic existed would have been challenging  15:27
<mugsie> we have no idea what is in RHEL $VERSION+1, or when exactly debian will be released  15:28
<clarkb> fungi: though possible because of the beta releases which pabelanger had images running for  15:28
<fungi> cdent: as i mentioned in the review, the solution to that is to run on the rolling release that ubuntu freezes from, so basically debian/testing or debian/unstable  15:28
<cdent> fungi: I think you're just being pedantic, but what I mean is: the python that is expected to be in the LTS at the end of the release, even if that means running a non-lts  15:28
<clarkb> more difficult with rhel/centos because, well, they just drop  15:28
<cdent> fungi, yes, that  15:28
<fungi> clarkb: we didn't have bionic beta releases at the start of the rocky cycle  15:28
<gmann> for ownership, i will say: each repo defining those jobs has ownership. QA for most of them, as they are providing the base job setup  15:29
<fungi> clarkb: and yeah, we can't really test open-core distros that lack an open release process  15:29
<fungi> er, can't test them in advance of releasing, that is  15:29
<fungi> we could test on fedora rawhide, which is sort of that for rhel i suppose  15:29
<zaneb> cdent: you're mixing up unit and functional tests there. you can't always treat them the same  15:29
<cdent> zaneb: where am I mixing that up?  15:30
*** devananda has joined #openstack-tc15:30
<fungi> cdent: i'm not being pedantic, i'm saying not everything we produce is in python, and not every bug we're going to find will be related to the python interpreter (many if not most will be things like newer qemu/kernel, c libs linked into our deps, system tools we call into from subprocesses, et cetera)  15:31
<zaneb> cdent: it's generally easy to do unit tests with the version of python you expect in the next LTS. It's much harder to do functional tests on a distro that hasn't been released yet. The discussion started with gmann talking about functional testing  15:31
<cdent> zaneb: you asked for a general principle of what I'm going for, which I haven't had a chance to respond to on the review, but the gist is this: upstream should work slightly in the future of distributors, and one way to signal this is by testing with an OS that provides the Python that is the default download from python.org's front page (currently 3.7.0).  15:33
*** annabelleB has joined #openstack-tc15:33
<cdent> I acknowledge that this will break things  15:33
<cdent> breaking things is what tests are for  15:33
<cdent> so that we can fix things  15:33
<clarkb> as far as method goes, yeah, I think devstack/tempest should start by adding the bionic nodesets and then push a self-testing change that bumps the nodeset on the base devstack/tempest jobs to bionic  15:33
<cdent> we've gotten ourselves into a state where we want the tests to always pass and to not have to fix them unless we change our code. we need to use testing (and CI in general) as a forcing function for focussing our work into the future  15:34
<clarkb> that change will be self-testing, and once it passes, set a day to merge it. Chances are corner cases will break in projects consuming that and we'll work through it from there (hence setting a day to merge it)  15:34
<gmann> clarkb: yeah, we started that - https://etherpad.openstack.org/p/devstack-bionic  15:34
<cdent> which is where we (upstream) live  15:34
<clarkb> gmann: great!  15:34
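In Zuul terms, the eventual flip clarkb describes would be a change to the base job's nodeset in the devstack repo; because that change runs the very jobs it modifies, it is self-testing. A hedged sketch, reusing the assumed nodeset names from above (the actual job layout in the devstack repo may differ):

    # Hypothetical edit to the base devstack job definition: bumping the
    # default nodeset moves every job that inherits from it to Bionic at once.
    - job:
        name: devstack
        nodeset: openstack-single-node-bionic  # was: openstack-single-node-xenial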
<zaneb> cdent: there wasn't a platform on which we could have tested that at the beginning of Stein, and not because OpenStack was broken  15:34
<cdent> sure there was: debian had 3.7 available before rocky was released  15:34
<zaneb> cdent: actually, I tell a lie. we could have tested it on Debian unstable  15:34
<zaneb> yep  15:34
<clarkb> cdent: zaneb: for python3.7?  15:34
<clarkb> fedora 28 has it today and we have images for that too aiui  15:35
<clarkb> and now bionic also has packages for it too  15:35
<cdent> anyway: as I said in one of my screeds: I'm not wed to this idea, it may be impractical. But I _am_ wed to the principles it implies: that we should be testing for what is to come, not what has already passed  15:35
<zaneb> clarkb: the F28 version was a beta that crashed similarly to the U18.04 version, at least at the beginning of Stein  15:36
<clarkb> ++ this is one reason I've encouraged dirk and others to get the tumbleweed image up and running  15:36
<clarkb> it is an actual rolling release distro with CI itself (unlike arch)  15:36
<clarkb> where we can in theory test new versions of all the things on a rolling basis  15:36
<clarkb> though it's still on python3.6 for some reason  15:37
<cdent> zaneb: when you said "It's much harder to do functional tests on a distro that hasn't been released yet." did you mean integration, not functional?  15:39
<cdent> (that is, tempest, not in-tree functional tests)  15:39
<zaneb> cdent: I... don't know because everyone uses different terminology  15:39
<fungi> we don't even seem to always agree on what counts as "unit tests"  15:40
<cdent> true enough  15:40
<zaneb> Heat's 'functional' tests are in-tree but tempest-like, so...  15:40
* zaneb shrugs  15:40
<fungi> also, i'm stepping afk for a bit. replacement appliances just arrived  15:40
<mugsie> yeah, we dropped our functional tests and went full tempest  15:40
<cdent> when I say "integration" I mean "live processes started outside the running of the tests"  15:40
<zaneb> cdent: yeah, I meant that  15:41
<cdent> when I say "functional" I mostly mean "yeah, there's a database there, but it is probably in memory"  15:41
<cdent> (which I agree is much too vague)  15:41
* zaneb should just stop saying functional  15:43
<clarkb> I definitely think if there is interest it would be a worthwhile experiment to run a visible job (maybe this means voting, or maybe non-voting in a lot of places?) on top of a "prerelease" type distro. Either add debian unstable as an option or use tumbleweed which is already there (or $other thing, maybe gentoo which is mostly there now too)  15:43
<clarkb> I expect this would be very valuable for nova and neutron in particular because they both interact with the system in interesting ways  15:44
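What clarkb sketches here could be expressed as a clearly experimental, non-voting job on a rolling-release nodeset. A rough sketch, assuming a tempest-full parent job and an opensuse-tumbleweed nodepool label (both names are assumptions about what images are actually available):

    # Hypothetical non-voting job exercising devstack/tempest on a
    # rolling-release distro, so breakage surfaces early without gating anyone.
    - job:
        name: devstack-platform-tumbleweed
        parent: tempest-full
        voting: false
        nodeset:
          nodes:
            - name: controller
              label: opensuse-tumbleweed

Because it is non-voting, interested teams (nova, neutron) could watch it for early signals without it blocking their gate.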
*** jeremyfreudberg has quit IRC15:44
<gmann> ok, i think we are all ok with the Bionic migration plan for stein. and we can work to make "such work/migration" a process for the future (when and who) in this - https://review.openstack.org/#/c/611080/  15:50
<cdent> clarkb, fungi: What are your thoughts (from an infra standpoint) on the need to limit the number of new jobs, or have some kind of "add one, remove one" policy?  15:52
<clarkb> cdent: typically that type of stuff doesn't concern me too much because the vast majority of resource usage is from such a small set of projects that most projects can add and remove jobs as they like without making a major impact  15:53
<cdent> i'm thinking more in terms of removing templated jobs  15:54
<clarkb> if we are actually concerned about resource usage then combatting the incredibly flaky gate problem and the disproportionate use by a small number is the way to do that  15:54
<cdent> ones that many projects use  15:54
<clarkb> cdent: still not a huge concern, tripleo, neutron and nova are most of our resource usage, like 80% ish  15:54
<gmann> i would like to go for the second approach  15:55
<cdent> I definitely agree that we need to work harder with regards to the flakey gate problem  15:55
<clarkb> we are also in a (hopefully) temporary degraded state with resource allocations from ovh as they work through their post-upgrade stuff  15:58
<clarkb> and the packethost/platform9 donation hasn't quite stabilized yet  15:58
<clarkb> I'm sure some of that is related to the software we produce, so finding better ways to communicate that across might help too?  15:59
*** dtantsur is now known as dtantsur|afk16:00
<cdent> I sometimes wish there was a way to break the gate completely for a project once it crosses a threshold of N units of flakiness. Whatever that is. We are inured to flakiness more than is good for us.  16:00
<cdent> Anyway, I gotta go.  16:00
* cdent waves  16:00
*** cdent has quit IRC16:00
*** e0ne has quit IRC16:00
* lbragstad steps away to get a run in over lunch  16:01
*** mriedem is now known as mriedem_lunch16:10
*** annabelleB has quit IRC16:12
*** jamesmcarthur has quit IRC16:15
*** dklyle has quit IRC16:16
*** dklyle has joined #openstack-tc16:16
*** cdent has joined #openstack-tc16:24
*** ricolin has quit IRC16:25
*** jamesmcarthur has joined #openstack-tc16:30
*** jpich has quit IRC16:30
*** dklyle has quit IRC16:37
*** annabelleB has joined #openstack-tc16:39
*** annabelleB has quit IRC16:43
*** annabelleB has joined #openstack-tc16:52
*** dklyle has joined #openstack-tc17:30
*** dklyle has quit IRC17:42
*** diablo_rojo has joined #openstack-tc17:44
*** dklyle has joined #openstack-tc17:50
*** mriedem_lunch is now known as mriedem17:51
*** dklyle has quit IRC18:02
*** annabelleB has quit IRC18:07
*** jamesmcarthur has quit IRC18:09
*** diablo_rojo has quit IRC18:12
*** diablo_rojo has joined #openstack-tc18:13
*** lbragstad has quit IRC18:17
*** dklyle has joined #openstack-tc18:20
*** annabelleB has joined #openstack-tc18:25
*** diablo_rojo has quit IRC18:26
*** scas has quit IRC18:28
*** dklyle has quit IRC18:29
*** annabelleB has quit IRC18:32
*** annabelleB has joined #openstack-tc18:34
*** jamesmcarthur has joined #openstack-tc18:37
*** jamesmcarthur has quit IRC18:43
*** diablo_rojo has joined #openstack-tc18:44
*** lbragstad has joined #openstack-tc18:48
*** dklyle has joined #openstack-tc19:07
*** jamesmcarthur has joined #openstack-tc19:12
*** dklyle has quit IRC19:14
*** jamesmcarthur has quit IRC19:16
*** lbragstad has quit IRC19:21
*** lbragstad has joined #openstack-tc19:24
*** annabelleB has quit IRC19:32
*** diablo_rojo has quit IRC19:35
*** devananda has quit IRC19:47
*** e0ne has joined #openstack-tc19:52
*** e0ne has quit IRC20:05
*** zaneb has quit IRC20:31
*** openstackgerrit has quit IRC20:36
*** annabelleB has joined #openstack-tc20:54
*** zaneb has joined #openstack-tc20:54
*** zaneb has quit IRC21:11
*** diablo_rojo has joined #openstack-tc21:12
<cdent> dims: how 'live' is https://review.openstack.org/#/c/586212/  21:32
*** cdent has quit IRC21:34
*** dklyle has joined #openstack-tc21:36
*** diablo_rojo has quit IRC21:42
*** diablo_rojo has joined #openstack-tc21:44
*** dklyle has quit IRC21:45
*** openstackgerrit has joined #openstack-tc21:50
<openstackgerrit> Merged openstack/governance master: Fix format errors in PTI docs  https://review.openstack.org/611098  21:50
<openstackgerrit> Merged openstack/governance master: Update sphinx extension logging  https://review.openstack.org/611132  21:50
*** jamesmcarthur has joined #openstack-tc21:55
*** jamesmcarthur has quit IRC21:59
*** scas has joined #openstack-tc22:09
*** mriedem has quit IRC22:12
*** diablo_rojo has quit IRC22:52
*** tosky has quit IRC23:02
*** diablo_rojo has joined #openstack-tc23:03
<diablo_rojo> Reminder that we have 3 days left to nominate people for Community Contributor Awards! It only takes a few minutes to nominate someone. https://openstackfoundation.formstack.com/forms/berlin_stein_ccas  23:10
*** zaneb has joined #openstack-tc23:24
*** annabelleB has quit IRC23:31
<mnaser> It would have been nice to take this operating system discussion to the ML.  23:33
*** jamesmcarthur has joined #openstack-tc23:36
*** diablo_rojo has quit IRC23:37
*** jamesmcarthur has quit IRC23:39
*** jamesmcarthur has joined #openstack-tc23:39
*** jamesmcarthur has quit IRC23:55
*** jamesmcarthur has joined #openstack-tc23:58
