Thursday, 2021-01-14

*** tosky has quit IRC00:36
*** jamesmcarthur has joined #openstack-tc01:17
*** cloudnull has quit IRC01:29
*** jamesmcarthur has quit IRC01:42
*** jamesmcarthur has joined #openstack-tc01:42
*** timburke_ has quit IRC01:43
*** jamesmcarthur has quit IRC01:47
*** dklyle has quit IRC02:01
*** david-lyle has joined #openstack-tc02:01
*** david-lyle has quit IRC02:18
*** jamesmcarthur has joined #openstack-tc02:36
*** jamesmcarthur has quit IRC02:37
*** jamesmcarthur has joined #openstack-tc03:44
*** ricolin_ has joined #openstack-tc04:13
*** jamesmcarthur has quit IRC04:15
*** jamesmcarthur has joined #openstack-tc04:17
*** ricolin_ has quit IRC04:24
*** amotoki has quit IRC04:44
*** amotoki has joined #openstack-tc04:44
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-tc05:35
*** jamesmcarthur has quit IRC06:11
*** jamesmcarthur has joined #openstack-tc06:12
*** ricolin has quit IRC06:17
*** jamesmcarthur has quit IRC06:17
*** jamesmcarthur has joined #openstack-tc06:42
*** ricolin has joined #openstack-tc07:39
*** akahat|rover is now known as akahat|lunch07:45
*** rpittau|afk is now known as rpittau07:47
*** belmoreira has joined #openstack-tc07:49
*** ralonsoh has joined #openstack-tc07:51
*** e0ne has joined #openstack-tc07:56
*** ralonsoh_ has joined #openstack-tc07:56
*** ralonsoh has quit IRC07:59
*** slaweq has joined #openstack-tc08:03
*** andrewbonney has joined #openstack-tc08:17
*** ralonsoh has joined #openstack-tc08:28
*** ralonsoh_ has quit IRC08:28
*** jamesmcarthur has quit IRC08:39
*** tosky has joined #openstack-tc08:49
*** slaweq has quit IRC08:55
*** slaweq has joined #openstack-tc09:00
*** e0ne has quit IRC09:24
*** lpetrut has joined #openstack-tc10:33
*** jamesmcarthur has joined #openstack-tc10:36
*** jamesmcarthur has quit IRC10:41
*** e0ne has joined #openstack-tc11:32
*** jamesmcarthur has joined #openstack-tc11:37
*** apevec has joined #openstack-tc11:41
*** jamesmcarthur has quit IRC11:42
apevecmnaser, following up on the Dec 17 TC meeting: RDO (me, amoralej et al) would hold a video AMA on Stream next Thu; I hope the proposed slot after the Thu TC meeting will work11:43
apevecThursday Jan 21 1600 UTC11:44
apevecare there any big known conflicts at that time slot?11:44
*** jamesmcarthur has joined #openstack-tc11:53
*** jamesmcarthur has quit IRC11:59
*** jamesmcarthur has joined #openstack-tc12:09
*** jamesmcarthur has quit IRC12:14
*** akahat|lunch is now known as akahat|rover12:36
*** e0ne has quit IRC13:12
*** pojadhav is now known as pojadhav|afk13:45
*** jamesmcarthur has joined #openstack-tc13:51
*** jamesmcarthur has quit IRC13:55
*** jeremyfreudberg has quit IRC14:46
*** jeremyfreudberg has joined #openstack-tc14:46
mnaseri think that should be ok afaik apevec14:52
knikollao/14:54
*** cloudnull has joined #openstack-tc14:55
mnaser#startmeeting tc15:00
openstackMeeting started Thu Jan 14 15:00:11 2021 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.15:00
mnaser#topic rollcall15:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:00
*** openstack changes topic to " (Meeting topic: tc)"15:00
openstackThe meeting name has been set to 'tc'15:00
*** openstack changes topic to "rollcall (Meeting topic: tc)"15:00
gmanno/15:00
belmoreirao/15:00
ricolino/15:01
jungleboyjo/15:02
mnaserhrm, i guess we can still meet and don't necessarily require a quorum?15:04
jungleboyjOk with me.15:04
fungitechnically it won't count as a "meeting" for purposes of satisfying the bylaws and charter requirements, but as long as it's not every time it's fine15:04
gmann6 are here if knikolla is still here15:04
knikollao/15:05
jungleboyj:-)15:05
mnaserfungi: we'll need 1/4 times a month to hit quorum ;P15:05
mnaseri'll take my chances ha15:05
* knikolla lost track of time15:05
fungiquorum is mostly important if binding decisions are going to be made *in* the meeting, which the tc doesn't do these days thanks to gerrit15:05
mnaser#topic Follow up on past action items15:05
*** openstack changes topic to "Follow up on past action items (Meeting topic: tc)"15:05
mnaserdiablo_rojo complete retirement of karbor15:05
mnaseri think that's mostly been done, we just need https://review.opendev.org/c/openstack/project-config/+/767057 rebased15:06
mnaserand then we can merge the governance patch15:06
mnaseri'll keep it on the list till next week to keep following up with it15:06
mnaser#action diablo_rojo complete retirement of karbor15:06
mnasermnaser submit a patch to officially list no community goals for X cycle15:07
mnaseri submitted a patch and started getting feedback which is good :)15:07
jungleboyj++15:07
jungleboyjThanks for doing that.15:07
mnaserdon't think we need that as an action item to follow up on anymore, this lives in gerrit now to continue15:07
gmannyeah, that works15:07
mnaserdiablo_rojo reach out to SIGs/ML and start auditing states of SIGs15:07
mnaserwe have a topic for this that we can re-add the action item to15:08
mnaserdiablo_rojo update resolution for tc stance on osc -- that was done and looks like we're progressing on reviews, no need for an action item imho, as it's an open review / discussion15:08
mnasergmann continue to audit tags + outreach to community to apply for them <= we can keep that for the discussion later too imho15:09
gmannyeah, close to merging once dansmith's comments on the CI things are resolved15:09
gmannyeah15:09
mnaser#topic Write a proposed goal for X about stabilization/cooldown (mnaser)15:09
*** openstack changes topic to "Write a proposed goal for X about stabilization/cooldown (mnaser) (Meeting topic: tc)"15:09
dansmitho/15:09
mnaser#action mnaser to remove proposed goal topic from agenda15:09
mnaseri think this mostly continues to live inside gerrit imho15:10
mnaser(going through those quickly to have more time for the later topics :])15:10
jungleboyjMakes sense.15:10
mnaser#topic Audit SIG list and chairs (diablo_rojo)15:10
*** openstack changes topic to "Audit SIG list and chairs (diablo_rojo) (Meeting topic: tc)"15:10
mnaserno progress update, we'll keep an action item to keep up with it15:11
mnaser#action diablo_rojo reach out to SIGs/ML and start auditing states of SIGs15:11
jungleboyjKind of seems like we need diablo_rojo for all of this.  :-)15:11
mnaser#topic Annual report suggestions (diablo_rojo)15:11
*** openstack changes topic to "Annual report suggestions (diablo_rojo) (Meeting topic: tc)"15:11
gmannricolin also offered to help with the SIG audit in a previous discussion, not sure if he has any updates to share15:11
ricolindiablo_rojo let me know if you need help on SIGs auditing15:11
mnaser#action mnaser remove annual report suggestions from agenda15:12
mnaser(it's getting drafted, so it's too late to make changes and we can close that out)15:12
ricolingmann, I didn't get anything on that yet15:12
mnaserand ricolin that's awesome, i think it would be good to connect with kendall on that15:12
gmannok15:12
ricolinmnaser, sure, or I can just take that action if diablo_rojo is not available for now15:12
mnaser#topic Add Resolution of TC stance on the OpenStackClient (diablo_rojo)15:12
*** openstack changes topic to "Add Resolution of TC stance on the OpenStackClient (diablo_rojo) (Meeting topic: tc)"15:12
mnaserricolin: i will leave it for you and diablo_rojo to figure out how you want to sort that out together :)15:13
mnaser#link https://review.opendev.org/c/openstack/governance/+/75990415:13
mnaseri know dansmith had some comments on having the CI / docs language in there15:13
mnaserwhich i think make sense to me15:14
dansmithyeah15:14
dansmithI think it just got lost in all the worddansmithing15:14
mnaserdansmith: do you think you could respin that with some of the wording ideas you had? i think kendall would probably appreciate that help15:14
gmannyeah, good to mention that too15:14
mnaser_if_ you can :)15:14
dansmithsure I can15:14
mnaserthat would be great and helpful and then we can help land it15:14
dansmithI know diablo_rojo_phon is like super defensive of turf, so I didn't want to run afoul of any good neighbor rules15:15
mnaserahaha , i don't think so =P15:15
dansmithhehe15:15
mnaser#action dansmith update osc change to include ci/docs commentary15:15
mnaser#topic Audit and clean-up tags (gmann)15:16
*** openstack changes topic to "Audit and clean-up tags (gmann) (Meeting topic: tc)"15:16
gmannbumped the email (with the ptl tag) about the API interoperability tag15:16
gmann#link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019814.html15:16
gmannlet's see how many projects start applying it15:16
gmannnothing else to update on this.15:17
mnaserok, thank you for following up on this15:17
mnaseri'll rename this topic15:17
mnaser#topic infra-core additions (was Farewell Andreas) (diablo_rojo)15:18
*** openstack changes topic to "infra-core additions (was Farewell Andreas) (diablo_rojo) (Meeting topic: tc)"15:18
mnaser#link https://review.opendev.org/q/project:openstack/project-config+is:open15:18
mnaserthe repo seems to be quiet and there isn't really a backlog (yet)15:18
fungii've been trying to pay a little more attention to changes in there15:19
gmannyeah, a few of them for label things can be merged, they just got lost somehow15:19
fungimainly to make sure folks don't wait on simple requests like project creation or job shuffling15:19
mnaserthanks fungi -- i should try to make a bit of an effort, feel free to ping me for reviews anytime15:19
gmannI will also try to review those today15:19
fungigladly, thanks!15:19
fungiwe should also bring up this topic in the opendev infrastructure meeting, maybe get it on the agenda for next tuesday15:20
mnaseri don't know if there is anything actionable yet15:20
mnaserbut maybe as we get more into the year things start picking up again15:21
fungiwe can at least quickly mention that it's worth keeping an eye out for any frequent config reviewers who could help us merge stuff if they had +2 permissions15:21
openstackgerritDan Smith proposed openstack/governance master: Add Resolution of TC stance on the OpenStackClient  https://review.opendev.org/c/openstack/governance/+/75990415:21
fungibut i agree, that's probably not the case just yet15:21
dansmithignore this ^15:21
mnaseragreed15:21
mnaser#action mnaser add openstack/infra-core as discussion topic in opendev meeting15:22
mnaseri say we can drop the topic (for now).. and we can agree to work with opendev if we start to develop some sort of a backlog15:22
fungiget it onto the agenda before monday if possible, because clarkb usually announces the agenda on the ml a day ahead15:23
gmannmake sense15:23
mnaserack, i'll try to get it done today15:23
ricolin+115:23
fungithanks!15:23
mnaseri'll keep it here so i have a chance to update the tc on it next week15:23
mnaser#topic Dropping lower-constraints testing from all projects (gmann)15:23
*** openstack changes topic to "Dropping lower-constraints testing from all projects (gmann) (Meeting topic: tc)"15:23
gmannyeah this is something projects asked the TC for consensus and direction on, for all projects15:24
gmann#link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html15:24
gmannthis is the ML post, with links to the other ML threads too15:24
gmannoslo, ironic, and i think neutron already started dropping the l-c job15:25
gmann#link https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%20status:merged)15:25
slaweqgmann: neutron only for stable branches for now15:25
gmannslaweq: i see, thanks15:25
slaweqwe discussed that in our team meeting and decided to go that way for now15:25
gmannnova also made them n-v for stable branch15:25
mnaserhrm15:27
*** rosmaita has joined #openstack-tc15:27
gmannl-c testing is not part of the PTI i think, but we should provide some direction here for consistent testing and for providing a lower bound to package maintainers15:27
knikollakeystone has dropped l-c for now too.15:27
gmannah, didn't know about keystone, thanks knikolla for the update15:28
gmannI do not have any package maintenance experience so I do not know how helpful the current l-c file is for them15:28
openstackgerritDan Smith proposed openstack/governance master: Add Resolution of TC stance on the OpenStackClient  https://review.opendev.org/c/openstack/governance/+/75990415:29
mnaserhonestly it sounds like our current l-c were not exactly testing in a valid way15:29
gmannyeah15:29
fungii doubt any distro packagers directly rely on the lower-constraints.txt files in projects for anything, but they may indirectly rely on our lower bounds checking to ensure that our projects actually work with the minimum versions of dependencies we list in our requirements.txt files15:30
mnasermaybe if apevec is around or can ping someone from the rdo team to answer ^15:30
fungiso if someone can work out a consistent, deterministic means of testing lower bounds then it might be of some benefit to package maintainers15:30
gmanntrue15:30
knikollaan automated way to keep those up to date too, would be quite helpful.15:31
fungibut i doubt jobs which aren't actually testing our lower bounds (like the l-c jobs before pip's dep solver got smart enough to tell us) were doing anyone any good15:31
mnaserfungi: which further proves that they weren't being consumed by distros15:31
mnaserbecause otherwise they'd all be broken15:31
mnaser(a long time ago)15:31
apevecfungi, we don't rely on lower-constraints15:32
fungihtanks for confirming!15:32
fungier, thanks15:32
mnaserso that rules out rdo, so does debian/ubuntu/opensuse rely on it is the remaining question15:33
dansmithwell, whats-his-face chimed in right?15:33
fungizigo was the only package maintainer i recall speaking up on the discuss ml asking us to keep testing our minimum versions15:33
*** lpetrut has quit IRC15:33
dansmithyar, zigo15:33
gmannyeah15:33
mnaserright, but i don't know if that was "we use it" or "it is something that's nice to have"15:34
fungibut didn't directly acknowledge that the jobs never actually tested what folks thought they were testing to begin with15:34
rosmaitaanother question is whether there would be a problem if we keep the minima in requirements as close to the upper-constraints as possible?15:34
apevecwe track u-c and keep our deps on that upper end, doing periodic syncs e.g. https://review.rdoproject.org/r/#/c/31545/15:34
rosmaitamy understanding of the l-c was that it gave packagers a wider range to choose from to satisfy several projects15:34
mnaseri just pinged jamespage in #openstack-charms to hear if they use it, i suspect they don't15:34
mnaseri think the only reason why debian is concerned is because it ships _as part of debian_ and not some extra repo so that might be useful15:35
mnaseruca/rdo all ship different repos with all their own packaging15:35
gmannthis one from zigo http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019684.html15:35
fungirosmaita: correct, or rather us keeping looser minimums gave them that option15:35
mnaserso they just ship with upper constraints15:35
knikollaalso, the tests in the respective tox.ini files use upper constraints15:36
mnaseryup15:36
gmannknikolla: except the l-c tox env, yes, all the rest use u-c15:36
rosmaitayes, most testing (except for the l-c job) is being done with versions close to the upper-constraints15:36
rosmaitawhat gmann said15:36
knikollabut the l-c job also only tests that the dependencies are installed, and doesn't run the actual tests, right?15:37
rosmaitaruns unit tests, i believe15:37
fungidjango is the perennial example. horizon is only one of many bits of software in the broader ecosystem which relies on django. it's easier on distros if most projects can use a common version of django so that they don't have to carry multiple django releases in the distro. if we aggressively bump our django minimum in horizon, *that's* what makes their job harder (as does not aggressively raising our15:37
fungimax cap on django in horizon)15:37
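(As a hypothetical illustration of fungi's point - these are invented values, not horizon's actual bounds - the pain for distros comes from where these two numbers sit relative to the django version the rest of the distro ships:)
    # horizon-style requirements.txt entry (values illustrative only)
    Django>=2.2,<3.1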
mnasergetting a more detailed confirmation but from #openstack-charms: "<jamespage> mnaser: I don't think so - I believe we still focus on alignment with upper-constraints"15:37
gmannyes, the unit tests do not actually exercise the exact lower bounds anyway15:37
mnaserso that's two packagers that don't care about them15:38
mnaser<coreycb> mnaser: thanks for checking, we try to align as best we can with upper-constraints. I don't think we ever look at lower-constraints.15:38
mnaserso that leaves debian and suse15:38
mnaserfor suse we can discuss in the next point, and debian perhaps we can reach out to zigo and ask if that's actually used in some sort of job?15:39
mnaserand if no one uses it...15:39
gmannyeah, next topic is for suse testing support15:39
fungiwhat was previously happening with the l-c jobs and older pip is that it was installing the exact versions listed in lower-constraints.txt, but not all the transitive dependency set was necessarily listed there, and pip was also potentially installing versions of packages which conflicted with minimum or max cap requirements in other dependencies (so could have been subject to subtle incompatibility15:39
fungibugs not apparent in unit testing, for example)15:39
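(For context: the per-project l-c job typically just ran unit tests in a tox environment that swapped in the lower-constraints file, roughly like the sketch below - exact contents vary by repo - so with older pip nothing actually verified that the pinned lower versions were complete or mutually consistent.)
    [testenv:lower-constraints]
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt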
mnasercould someone reach out to the ML and check if zigo can confirm if its a "nice code quality thing" or "we actually have things that actively use them" ?15:40
mnaserfor suse, we can leave that for the next point15:40
rosmaitafungi: exactly, that is my worry about having too wide a gap between the minima in requirements and upper-constraints15:40
gmannmnaser: sure, i can check that15:41
mnaser#action gmann follow-up with zigo if debian uses l-c15:41
mnaserIMHO, if no one uses it, we can drop it, one less thing for us to worry about that is of no benefit (other than mostly for packagers)15:41
mnaserand if they (the packagers) don't care about it, we should invest our time elsewhere :)15:41
mnaser(and also, adding that it also wasn't reliable)15:42
rosmaitaok, so the advantage of the l-c job is that it (sort of) kept us honest about the minima in our requirements15:42
mnaserrosmaita: correct!15:42
rosmaitabut if we are aggressive about keeping the requirements updated, that won't be a problem15:42
mnaserbut most packagers don't seem to rely on it and ship upper-constraints anyways15:42
rosmaitaso it sounds like updating requirements to pip freeze at milestone-3 is a good idea?15:43
mnaserso they're always shipping the upper-constraints (which makes sense, cause that's also what openstack upstream ci tests with, minimize any chances of incompatibilities)15:43
mnasernot necessarily, because we have no versions for the most part in reqs, and upper-constraints is the upper boundary15:44
rosmaitai mean requirements.txt in each deliverable15:44
mnaserupper-constraints is mostly frozen15:44
mnaserand all packagers rely on those, so there's no point to freeze it twice15:44
mnaserso if glance relies on sqlalchemy, and u-c contains 'sqlalchemy==1.4.0' inside stable/victoria, then rdo will produce python-sqlalchemy-1.4.0 which will be a dependency of the glance package15:45
mnaserthe same way that our CI would test glance with that version of sqlalchemy15:45
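(Illustration of that flow - the version here is the hypothetical one from the example above, not a real pin: the branch's upper-constraints.txt carries the exact version, and both CI and packagers build against it.)
    # stable/victoria upper-constraints.txt (illustrative entry)
    SQLAlchemy===1.4.0
    # what CI effectively does when installing glance:
    pip install -c upper-constraints.txt -r requirements.txt .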
mnaseri think we can move onto the next topic with an action item of reaching out to debian and suse will be discussed next and follow up on the progress there15:47
gmannand (after the Debian check) we can remove it from master as well as from the stable branches15:47
rosmaitaok, so is the consensus that the minimum version of some dependency in requirements.txt for cinder does *not* mean that cinder can actually work with that version?15:47
fungithat makes sense when the package is only a dependency of an openstack project, it becomes harder when it's a common dependency of openstack projects and also a lot of non-openstack projects carried by the distro15:47
rosmaitaso we only need to update minima "on demand", which could force a change to upper-constraints if necessary15:48
fungijust "follow our upper constraints" isn't always an option for large distros, unfortunately15:48
rosmaitaright, so it does sound like some kind of lower bound is useful15:48
gmannyeah, following u-c is much safer and more reliable15:48
mnaserrosmaita: i would argue that cinder shouldn't pin versions, shouldn't worry about minima, and should just rely on upper constraints15:48
mnaserand fungi agreed, but it looks like most popular openstack distros are very much 'fully vendored distros'15:48
rosmaitamnaser: that's what happens in practice, my statement is about what expectations people should have in looking at requirements.txt15:49
fungiyeah, it's ultimately still up to the package maintainers in distros to solve the problem of getting openstack and gnome and kubernetes and anything else you can imagine co-installable15:49
fungimnaser: i don't know what a "fully vendored distro" is, but you can explain it after the meeting15:50
gmannI think rosmaita has a good point: if the lower bound stated in requirements.txt is not reliable, then why have it at all?15:50
mnaserrequirements.txt in most openstack projects are meaningless in my experience15:50
mnaserif you're not adding upper constraints, you're getting a broken install15:50
fungiit's been relied on as a means of triggering pip to upgrade dependencies in an existing environment when updating a package where the lower bound increases15:50
rosmaitawell, we still need to know when to adjust upper constraints15:51
mnaserfungi: oh, i see your point, in case someone doesn't use -U15:51
rosmaitahappens when cinder says foo>1.2 and u-c has foo==1.115:51
fungiwell, even if they do use -U and the upgrade strategy is to only upgrade what's necessary15:51
fungipip has multiple upgrade strategies15:51
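(The two pip upgrade strategies being referred to:)
    pip install -U --upgrade-strategy only-if-needed somepkg   # default: only upgrade deps that no longer satisfy the requirements
    pip install -U --upgrade-strategy eager somepkg            # upgrade every dependency to the newest allowed version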
mnaserright, yes, you'll have to bump u-c first and then 'pin' the newer version afterwards15:51
mnaserit sounds like requirements.txt might have been lower-constraints all along :)15:52
fungiand people sometimes have to mix distro-supplied python modules with some from pypi, so pip not touching the distro-supplied ones is helpful15:52
rosmaitamnaser: i am thinking so too15:52
mnaserbut maybe not a lower-constraints as much as it is .. lower-requirements15:53
mnaser"you need to be newer than this but that's all i'm declaring"15:53
mnaseri'd like us to have time for the next subject15:53
gmannrequirements.txt can match with u-c but not with exact lower bound15:53
mnaserso i think we should keep this as an open topic for next week15:53
*** dklyle has joined #openstack-tc15:53
rosmaitasounds good, i think we are approaching some clarity15:53
gmannyeah, we will get debian check also meanwhile15:53
mnaseryep15:53
mnaserthanks for joining rosmaita and apevec, fungi for feedback15:54
rosmaitanp15:54
mnaser#topic Decide on OpenSUSE in testing runtime (gmann)15:54
*** openstack changes topic to "Decide on OpenSUSE in testing runtime (gmann) (Meeting topic: tc)"15:54
mnasersad to see and say, but i'm for it15:55
gmannthe opensuse distro job has been broken for a month and the devstack team is moving toward removing it #link https://review.opendev.org/c/openstack/devstack/+/76988415:55
mnaserthere is no investment from the company (as we know this) and none from the community from what i see15:55
gmannif we have any maintainer we can add it back anytime15:55
gmannyeah15:55
jungleboyjThey stated that they weren't going to be continuing to support it.15:56
clarkbis it broken on master or just stable?15:56
clarkbI had sort of been tending to it on master and I thought it was fine there (but also haven't had much time for it recently so maybe that changed)15:56
gmannmaster also15:57
mnaserhttps://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-opensuse-1515:57
gmannhttps://zuul.openstack.org/builds?job_name=devstack-platform-opensuse-15+15:57
gmannah you are fast :)15:57
mnaser:P15:57
mnaserhttps://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-opensuse-15&result=SUCCESS15:57
mnaserthe most recent pass was 3rd of december 202015:57
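(The same build history behind those dashboard links can also be queried from Zuul's REST API - endpoint shape assumed from the standard Zuul API:)
    curl 'https://zuul.opendev.org/api/tenant/openstack/builds?job_name=devstack-platform-opensuse-15&result=SUCCESS&limit=5'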
gmannyeah15:58
clarkbah ok so it was working until recently. I wasn't completely crazy :)15:58
mnaserstable seems to be broken with WARNING: this script has not been tested on opensuse-15.215:58
mnasermaster broken with ModuleNotFoundError: No module named 'six'15:59
mnaserbut i think it's just a matter of extra load on the devstack team15:59
*** cloudnull has quit IRC15:59
gmannyeah, and the team has very little bandwidth15:59
mnaserwe will need a change on https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions16:00
gmannalso from the wallaby testing runtime, if we remove the testing now - https://governance.openstack.org/tc/reference/runtimes/wallaby.html16:00
mnaserand maybe for Xena we will drop it from the tested runtimes16:00
mnasershould we drop it from https://governance.openstack.org/tc/reference/runtimes/wallaby.html too?16:00
gmannI think so if we remove the testing now16:00
ricolinI think we should16:01
mnasergmann: how about we make the governance change and if that goes through we merge the devstack one16:01
ricolinas it said `Tested` Runtimes for Wallaby16:01
mnaser(which it will, but just as part of the process, and the suse job is quickly and quietly failing anyways)16:01
gmannsure, from testing runtime and /project-testing-interface.html#linux-distributions both?16:01
mnaseri think so16:01
gmannok will do today16:02
mnaserany closing thoughts? :)16:02
ricolindo we care to have a ML out for this too?16:02
gmannbefore or after moving testing/governance change?16:03
ricolinafter gov IMO16:03
gmann*removing16:03
ricolinfine to do it before/after testing16:03
gmannyeah, i think that will be helpful to notify the wider community16:04
fungias a wider person, i appreciate notification of things ;)16:04
*** cloudnull has joined #openstack-tc16:05
gmannI can put once we finish the changes16:05
mnaserthank you gmann16:05
ricolingmann, thx, short notice will do IMO:)16:05
gmannsure16:05
mnaser#action gmann update supported distros to drop opensuse16:05
mnaseri think that's it?16:05
gmannyeah.16:05
mnaserthank you all :)16:06
mnaser#endmeeting16:06
*** openstack changes topic to "OpenStack Technical Committee office hours: Tuesdays at 09:00 UTC, Wednesdays at 01:00 UTC, and Thursdays at 15:00 UTC | https://governance.openstack.org/tc/ | channel logs http://eavesdrop.openstack.org/irclogs/%23openstack-tc/"16:06
openstackMeeting ended Thu Jan 14 16:06:06 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:06
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.html16:06
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.txt16:06
openstackLog:            http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.log.html16:06
jungleboyjThanks everyone.16:06
gmannthanks everyone16:08
clarkbI know the meeting has ended but my day only just started :)16:09
clarkbit is worth noting that openstack has a significant backlog in zuul right now16:10
clarkblike 24 hours for getting your nova or neutron changes tested16:10
clarkbI looked at it briefly yesterday before calling it a day and noticed that neutron changes in check run something like 36 jobs each. Some of which look like they never pass (something to do with loki and uwsgi)16:11
dansmithI was just asking about this in the nova meeting, yeah16:11
dansmithit's suuper bad16:11
dansmithis that just workload or is something failing?16:11
gmannyeah, even devstack too16:11
clarkbadditionally neutron runs tripleo jobs which seem to run a job that builds all the tripleo images first then the long jobs and that whole process takes like 3-4 hours16:11
clarkbdansmith: its load as far as I can tell16:11
dansmithoof16:11
clarkbnodepool and zuul are going full steam ahead and yall are throwing more at it than it can handle16:12
dansmithbut what about the neutron jobs that never pass? are they n-v?16:12
clarkbdansmith: they are, but they also take an hour and use at least one test node for that time16:12
clarkbreally for throughput here what matters is cpu time consumed and every job adds to that16:12
dansmithack, so.. we need to ask some neutron people to review things?16:12
clarkbyes and tripleo I think16:12
slaweqdansmith: I will take a look at those non-voting jobs16:12
dansmithslaweq: thanks16:13
clarkbneutron also had a gate reset caused by pylint16:13
clarkbyesterday when I looked gate resets didn't seem to be a major problem though16:13
slaweqclarkb: what You mean by "gate reset"? I'm not familiar with that16:14
clarkbslaweq: when a job fails in the gate all of the changes that come after the change that has a failure have their jobs stopped and discarded. Zuul then removes the change that failed from their git history and reparents to the nearest non-failing change. Then starts jobs over again with the new git state16:14
slaweqclarkb: ahh, ok16:15
slaweqthx16:15
clarkbslaweq: this is a common source of zuul backlog problems because the gate gets priority and if you have a deep gate queue that is reset frequently you end up throwing away those priority resources and starting over and over and over16:15
fungithis is a significant part of how we are able to test changes in parallel while still being sure they don't break one another16:15
clarkbright now I think we're hitting that at times, but not over and over and over16:15
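(Background for the reset behaviour clarkb describes: the gate is a Zuul "dependent" pipeline, so each change is tested on top of the changes queued ahead of it. A trimmed-down sketch of such a pipeline definition - not the actual project-config:)
    - pipeline:
        name: gate
        manager: dependent   # changes are tested with the changes queued ahead of them applied
        precedence: high     # which is why resets throw away "priority" node time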
*** rosmaita has left #openstack-tc16:16
clarkbthe bigger issue seems to simply be significant direct demand for resources16:17
clarkbon the tripleo side of things it seems like maybe images are being rebuilt for master, victoria, ussuri, and train on many (all?) changes but then there don't seem to be branch specific jobs that consume them16:18
clarkbalso I think this was a response to docker hub rate limiting. quay.io claims to not rate limit image downloads and maybe that can be used as an alternative to rebuilding all the images all the time16:19
clarkb(and its possible I'm missing something here because those jobs are large and complicated)16:19
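(Roughly what the alternative would look like from a job's point of view - image names here are hypothetical, not what tripleo actually publishes:)
    # docker.io anonymous pulls are rate limited
    podman pull docker.io/tripleomaster/centos-binary-base:current-tripleo
    # quay.io advertises no pull rate limit for public images
    podman pull quay.io/tripleomaster/centos-binary-base:current-tripleo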
*** rlandy has joined #openstack-tc16:24
rlandyclarkb: hi - apevec let us know you have some concerns about the length of the gate and the provider jobs building container images ... we (marios) have a bunch of patches out to address this ... https://review.opendev.org/q/topic:reduce-content-providers16:28
rlandyie: unnecessary branches running16:28
rlandywas related to requirements for upgrades jobs16:29
clarkbrlandy: yes the extra branches were one concern. The other is that maybe we can leverage something like quay.io to avoid rebuilding all the things all the time?16:29
clarkbsupposedly they don't rate limit image downloads16:29
clarkband I think tripleo looked into it at one time though I don't know if it was rejected for a specific reason16:29
rlandywe are using quay.io - but we have also experienced connection drops etc. there16:30
clarkbI see16:30
rlandywe only fall back to docker.io in rare cases16:30
rlandyI know that weshay was looking into various options with quay.io for better results16:30
rlandyhe will be back tomorrow16:31
clarkbrlandy: it seems like the content-provider jobs take significant time though (I mean they pause while the other jobs run but before the other jobs run, the content provider jobs are not fast)16:31
clarkbI wouldn't expect that if you are fetching a cached minimal delta from quay16:31
rlandywe can pick up the discussion on how far he got with that16:31
rlandyclarkb: correct - the content provider jobs do take time - but it serves us better than having multiple jobs fail because we get rate limited16:32
rlandythe theory was that we should only pull containers once per job testing16:33
rlandythat being said, they are not always stable16:33
*** jamesmcarthur has joined #openstack-tc16:33
rlandyand there is work to do there16:33
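(The content-provider pattern relies on Zuul's job pausing: the provider builds or pulls the images once, serves them from a local registry, and pauses until its child jobs finish. A minimal sketch of the pause step, assuming the standard zuul_return module rather than tripleo's actual playbooks:)
    - name: Pause so child jobs can consume the registry this job populated
      zuul_return:
        data:
          zuul:
            pause: true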
clarkbya that's all fine, more that I would expect that to be much faster16:33
clarkbbut I haven't dug through the job logs much yet16:33
rlandybesides getting to a minimal set16:33
rlandyclarkb: we are actively looking at this16:33
rlandyand are open to your/infra ideas16:34
clarkbrlandy: I'll try to take a look at the content-provider logs today and see if anything jumps out to me16:34
rlandyclarkb: sure - again, we can review when weshay returns tomorrow - considering where he left off on the investigation16:35
clarkbsounds good16:35
rlandybut yes, having better quay.io performance would help a lot16:35
rlandythanks for your interest here16:35
diablo_rojo_phonSorry I wasn't around for the meeting today. Was exhausted from all the craziness yesterday (we lost power and internet and cell service)16:36
diablo_rojo_phonmnaser ricolin dansmith jungleboyj ^16:37
dansmithdiablo_rojo_phon: we had a blip during the day, and then last night woke to a screeching alarm as PGE turned off power for a few minutes while doing repairs16:38
*** zbr3 has joined #openstack-tc16:38
dansmithkinda like "enjoy your power today, see you tonight"16:39
*** zbr3 has quit IRC16:39
*** zbr9 has joined #openstack-tc16:40
*** zbr has quit IRC16:40
*** zbr9 is now known as zbr16:40
clarkbdansmith: we lost power briefly at 2:30am and I discovered that one of my UPSs has a bad battery in the process. It was very unhappy and made lots of noise and woke me up. New battery should be here Tuesday so as long as we avoid wind and rain again I'm hoping to avoid this experience16:44
apeveclooks like Powerwall would be good to have!16:44
dansmithheh16:44
clarkbalso my bike route was closed yesterday due to high water. I had to improvise16:45
dansmith*gasp*16:52
diablo_rojo_phonWe had the same problem clarkb. An alarm in the server closet woke us up too.16:58
*** belmoreira has quit IRC17:04
*** diablo_rojo has joined #openstack-tc17:04
openstackgerritGhanshyam proposed openstack/project-team-guide master: Document $series-last tag for Temepst to test EM branches  https://review.opendev.org/c/openstack/project-team-guide/+/76982117:08
*** rpittau is now known as rpittau|afk17:42
dansmithhappy to work near people for whom "the server closet" is a socially acceptable and unsurprising reference17:46
jungleboyjdansmith:  :-)  I have a server room.  Was one of the things that excited me when I found this house.17:57
jungleboyjUnfortunately everything is on shelves right now.  Really need to get a Rack.17:58
dansmithyeah, it was a requirement for the current house17:58
dansmithI'm racked.17:58
* jungleboyj is jealous17:58
fungii divested myself of server racks and pretty much all my large equipment when i moved to the beach17:59
* dansmith points at the door17:59
jungleboyjfungi:  Well, we all have to make sacrifices to be at the beach.17:59
fungiyep, i'm goin' ;)17:59
dansmithjungleboyj: https://imgur.com/a/HqnFy5h18:02
dansmithcould be neater, but...18:02
funginice! my final iteration, which lasted years, was two 7' freestanding racks bolted to a shipping pallet with an array of heavy-duty swivel casters mounted underneath. worked out really well18:04
dansmithhah, yeah I've had that setup before18:04
jungleboyjHoly cow!  That puts my setup to shame.18:05
jungleboyjSomething to shoot for when I grow up.18:06
dansmiththat's only half the networking for the house, I have a closet upstairs with two more 24pt switches, another 24pt injector, connected to the main stuff with 4x aggregated GigE fiber18:07
jungleboyjHow many physically connected devices do you have?18:07
dansmith;pts18:07
dansmith*lots*18:07
jungleboyjI just got Gig fiber this fall so my upgrade was the a 48 port PoE switch.  Really liking Ubiquiti stuff.  Now managing multiple networks from the controller I have here.18:09
dansmithGot 52 macs from switch318:09
dansmithGot 21 macs from switch218:09
dansmithGot 76 macs from switch118:09
dansmithGot 56 macs from switch418:09
dansmithjust a quick count ^18:09
jungleboyjWow.18:09
dansmithmaybe ten of those are wireless devices, and maybe another five are multihomed machines that count twice18:09
dansmithactually, that may double the number, come to think of it18:10
dansmithbut yeah, I'm guessing probably 100 unique macs18:10
jungleboyjMy house is a little less busy.18:11
jungleboyjhttps://usercontent.irccloud-cdn.com/file/L6ULQugX/image.png18:11
dansmithstill respectable18:12
*** apevec_ has joined #openstack-tc18:12
jungleboyjI also have the inverters for my solar Panels in the server room.  It can get a bit toasty in there.18:14
*** apevec has quit IRC18:15
dansmithall dino juice for me, baby18:15
dansmithin portland, solar panels are called "expensive house hats"18:15
clarkbdansmith: pge will sell you electricity that is hand wavily provided by green sources though18:15
dansmithclarkb: uh huh.. I pay them enough :)18:16
dansmithI think I'm in the third rung of "you use too much so you pay more"18:16
dansmithand they tell me what percentage is already green anyway, so I think I'm good18:16
jungleboyj:-)  my electric company is quite progressive.  They were more than happy to pay me for the power I produce.  11 kw on a good day.18:17
*** andrewbonney has quit IRC18:17
dansmithjungleboyj: right, we would too, but only for three months out the year where the sun is visible18:17
jungleboyjAh, good point.  I get good sun at least 8 months of the year.18:18
dansmithgmann: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22tempest.api.image.v2.test_images.MultiStoresImportImagesTest.test_glance_direct_import_image_to_all_stores.*inprogress%5C%2218:21
dansmithoops18:21
dansmithwrong window18:21
*** apevec has joined #openstack-tc18:23
*** apevec_ has quit IRC18:24
*** jamesmcarthur has quit IRC18:26
*** apevec_ has joined #openstack-tc18:27
*** apevec has quit IRC18:29
*** diablo_rojo has quit IRC18:34
fungii desperately need to run twisted pair throughout my house. i bought 330m of good shielded plenum grade stuff, as well as a 2m flexible auger bit and self-lit usb snake scope for piloting holes for conduit inside finished walls, a long spool of fish-tape for pulling wire, et cetera but finding the18:38
*** slaweq has quit IRC18:38
fungitime is the real challenge18:38
jungleboyjfungi: ++  I have the wire and I have a hole to follow to the attic where they ran the solar conduit.  Waiting for my friend who is good at crawling through attics to come help pull the wire.18:39
dansmithfungi: sounds like "priorities" are the problem. No rack? No wire? c'mon :)18:40
*** ralonsoh has quit IRC18:41
fungifair18:41
*** diablo_rojo has joined #openstack-tc18:43
jungleboyjhe he.18:48
*** timburke has joined #openstack-tc18:48
*** apevec_ is now known as apevec18:59
*** timburke_ has joined #openstack-tc19:12
*** timburke has quit IRC19:15
apevecdansmith, nice setup! I wonder about that acoustic isolation on the wall behind rack, does that isolate well?19:22
dansmithapevec: it helps a little, but the thing that helps the most is quiet machines.. the R710, R610 are pretty quiet and those optiplex boxes are basically silent19:23
dansmiththat room is dedicated to that purpose, has a door, and more foam on the back side of the door :)19:23
*** e0ne has joined #openstack-tc19:28
*** e0ne has quit IRC19:39
gouthamrdansmith diablo_rojo: o/ hey, there is one thing bothering me about the "Add Resolution of TC stance on the OpenStackClient" - is there any reason we couldn't make the stance a bit more inclusive? - i don't want to assume the intent to call out Compute, Identity, Image, Object and Block Storage APIs, and someone comes along and asks, hey where's Networking - can we include Networking because it's a main service20:03
gouthamr:D20:03
gouthamri know that osc started out with a small set of projects before the plugin architecture began, and now all projects can just plug in - so why not call that out in the stance?20:04
openstackgerritGhanshyam proposed openstack/governance master: Drop openSUSE from commonly tested distro list  https://review.opendev.org/c/openstack/governance/+/77085520:04
openstackgerritDan Smith proposed openstack/governance master: Add Resolution of TC stance on the OpenStackClient  https://review.opendev.org/c/openstack/governance/+/75990420:05
dansmithgouthamr: how about that? ^20:05
gouthamrdansmith++20:05
gouthamrdansmith: amazing, thank you for listening to the rant - glad you cut me off there :)20:06
dansmithgouthamr: np, sorry for not remembering to fix that whilst revising it20:07
gmann+120:07
gmanngouthamr: as you are here, reminding you for this tag for manila  http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019814.html20:07
gouthamrgmann: o/ oh yessir - i have an unfinished commit from before the holidays; glad you reminded me :)20:08
gmann:)20:08
*** lpetrut has joined #openstack-tc20:10
*** e0ne has joined #openstack-tc20:19
*** slaweq has joined #openstack-tc20:19
openstackgerritGoutham Pacha Ravi proposed openstack/governance master: [manila] add assert:supports-api-interoperability  https://review.opendev.org/c/openstack/governance/+/77085920:23
openstackgerritGhanshyam proposed openstack/governance master: Define Xena release testing runtime  https://review.opendev.org/c/openstack/governance/+/77086020:25
openstackgerritGhanshyam proposed openstack/governance master: Define Xena release testing runtime  https://review.opendev.org/c/openstack/governance/+/77086020:26
*** slaweq has quit IRC20:27
*** e0ne has quit IRC20:37
*** e0ne has joined #openstack-tc20:39
*** e0ne has quit IRC20:39
*** e0ne has joined #openstack-tc20:49
*** e0ne has quit IRC20:49
*** njohnston has quit IRC21:02
*** lpetrut has quit IRC21:07
*** jrosser has quit IRC21:29
*** ildikov has quit IRC21:30
*** jrosser has joined #openstack-tc21:31
*** ildikov has joined #openstack-tc21:32
*** diablo_rojo has quit IRC21:48
*** rlandy is now known as rlandy|bbl23:28

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!