16:00:17 <gmann> #startmeeting tc
16:00:17 <opendevmeet> Meeting started Wed Jan 25 16:00:17 2023 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:17 <opendevmeet> The meeting name has been set to 'tc'
16:00:20 <gmann> #topic Roll call
16:00:26 <gmann> o/
16:00:32 <dansmith> o/
16:00:49 <knikolla[m]> o/
16:01:33 <rosmaita> o/
16:02:00 <JayF> o/
16:03:03 <gmann> let's start
16:03:05 <gmann> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions
16:03:09 <gmann> today agenda ^^
16:03:17 <gmann> #topic Follow up on past action items
16:03:25 <gmann> two action items from last meeting
16:03:36 <gmann> gmann to send email to PTLs on openstack-discuss about checking PyPi maintainers list for their projects
16:03:37 <gmann> done
16:03:53 <noonedeadpunk> o/
16:03:59 <gmann> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html
16:04:08 <gmann> we will talk about it in next topic
16:04:18 <gmann> gmann to reach out to the release team to check about Zaqar gate fix by 25th Jan as the deadline to consider it for this cycle release
16:04:49 <gmann> this is also done, we will talk in detail in the next topics
16:05:00 <gmann> #topic Gate health check
16:05:05 <gmann> any news on gate
16:05:08 <dansmith> unhealthy
16:05:28 <dansmith> the skip of the test that was eating the large images merged, which helped a lot
16:05:38 <dansmith> I've got that fix up for the test and the unskip now
16:05:46 <dansmith> but nova at least is still struggling to merge stuff
16:06:06 <gmann> ok, I will take a look at the fix, thanks
16:06:06 <dansmith> we've got a functional failure that manifests fairly often, which I don't think we've tracked down
16:06:29 <dansmith> and we still have a lot of failures, usually around volume stuff hitting in the tempest jobs, which probably affect other people
16:06:31 <gmann> yeah
16:06:33 <JayF> I don't think any of the gate issues for the last couple of weeks have impacted Ironic; FWIW. We're still in about as good of shape as we've been all year.
16:06:36 <spotz[m]> o/
16:06:39 <rosmaita> i'm seeing problems in ussuri and train grenade and tempest jobs, something about not being able to build bcrypt because Rust isn't available
16:06:42 <dansmith> not sure if those are cinder or nova
16:06:57 <fungi> i did notice cinder/glance/nova are all having a tough time getting the ossa-2023-002 backports landed
16:07:05 <gmann> I saw someone also reported a volume state issue in the qa channel
16:07:06 <rosmaita> fungi: exactamundo
16:07:06 <dansmith> yeah
16:07:35 <fungi> rosmaita: missing rust build dependency is a sign that pyca/cryptography doesn't have a suitable wheel for that platform with the requested version
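A quick way to confirm the point fungi makes above is to ask the PyPI JSON API which files a given pyca/cryptography release actually publishes; if no wheel matches a job node's platform and Python tag, pip falls back to the sdist and needs a Rust toolchain to build it. A minimal sketch, assuming the public PyPI JSON API; the pinned version is an illustrative placeholder, not the actual constraint on those branches:

```python
# Minimal sketch: list the files published for one cryptography release.
# The version is a placeholder, not the real train/ussuri upper constraint.
import json
import urllib.request

PACKAGE, VERSION = "cryptography", "3.4.8"
url = f"https://pypi.org/pypi/{PACKAGE}/{VERSION}/json"
with urllib.request.urlopen(url) as resp:
    files = json.load(resp)["urls"]

wheels = [f["filename"] for f in files if f["packagetype"] == "bdist_wheel"]
print("\n".join(wheels) if wheels else "no wheels published for this release")
# If none of these wheel tags match the node's platform/interpreter, pip
# builds from the sdist, which is where the Rust requirement comes from.
```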
16:07:46 <gmann> rosmaita: train and ussuri are not supposed to run grenade as mandatory since they are in EM state
16:08:15 <rosmaita> gmann: how about tempest?
16:08:23 <gmann> one recent tempest master change broke the stable/wallaby EM testing, which tempest master does not support, so I need to pin an old compatible tempest there
16:08:26 <rosmaita> i guess for EM, only unit and pep8?
16:08:34 <gmann> rosmaita: those should pass.
16:08:40 <gmann> tempest integration tests are required
16:08:58 <gmann> please ping me failure link in qa and I will have a look
16:09:09 <rosmaita> dang
16:09:12 <rosmaita> ok, will do
16:09:38 <gmann> but stable/wallaby is broken as reported by gibi in nova channel and I will push fix today
16:10:09 <gmann> this cycle seems not good for gate health, not just tox4 but many more unstable things
16:10:10 <noonedeadpunk> tbh integrated queue doesn't look optimal for cases with vulnerabilities
16:10:35 <dansmith> gmann: yeah
16:10:44 <noonedeadpunk> As when you need to merge something fast, within this queue patches are waiting for merge and depending on everything else that's going on around
16:11:18 <dansmith> noonedeadpunk: but CVEs can be manually applied as long as the patch is available, which is why the distributors get advance warning, and even people deploying from source can do that
16:11:32 <noonedeadpunk> and in times of intermittent things or just loaded zuul workers it's becoming nightmare to land things
16:11:58 <fungi> yes, this is exactly why our advisories don't say "upgrade to this release" but rather "here's a link to the patches, even if they're not merged yet"
16:12:41 <JayF> I think it's hard to make the argument that it's OK that it can take a while to make our git-shipped version of software secure, even if the patches are available.
16:12:45 <noonedeadpunk> Well, pip can't really pull from gerrit, can it?
16:12:59 <noonedeadpunk> As it needs to fetch refs first that it knows nothing about
16:13:37 <dansmith> to me, it's hard to say "we forced this patch in that didn't pass tests, but we're sure it's better"
16:13:56 <dansmith> especially if that has the potential to 100% break everyone if it's wrong, instead of the partial failures that occur with the general gate instability
16:14:15 <noonedeadpunk> I'm not saying not passing tests, what I'm saying is: in case of some random failure on another project we have to wait for another 5 hours
16:14:45 <noonedeadpunk> and hoping that nothing else will happen on queue
16:14:47 <fungi> yes, but to reiterate (for the i can't recall how many times now) we publish packages to pypi for ease of retrieval by testing systems and distributors. that some deployment projects are relying on packages on pypi for production use is a serious concern
16:14:50 <dansmith> meh, the patch is available
16:15:07 <dansmith> fungi: right
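For anyone deploying from source, the advisory workflow described above depends on being able to apply a fix before it merges; every open review is reachable as a Gerrit ref that git can fetch directly. A minimal sketch, assuming a local clone of the affected project; the repository URL, change number, and patchset are hypothetical placeholders rather than the actual OSSA-2023-002 backports:

```python
# Minimal sketch: fetch an unmerged Gerrit change and cherry-pick it locally.
# Repo, change number, and patchset below are hypothetical placeholders.
import subprocess

REPO = "https://review.opendev.org/openstack/nova"
REF = "refs/changes/56/123456/1"  # refs/changes/<last-2-digits>/<change>/<patchset>

subprocess.run(["git", "fetch", REPO, REF], check=True)           # fetch the review ref
subprocess.run(["git", "cherry-pick", "FETCH_HEAD"], check=True)  # apply it locally
```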
16:16:19 <gmann> ok any other gate failure/concern to discuss?
16:16:46 <noonedeadpunk> well, for me as an operator, doing `apt upgrade` and getting a regression for neutron, which happened like 3 times in the last 4 releases, is quite a serious concern not to use distro packages... anyway
16:18:20 <noonedeadpunk> I think we can move forward:)
16:18:44 <gmann> let's hope we get the gate green there :) that is the ultimate goal
16:18:47 <gmann> moving to next topic
16:19:06 <gmann> #topic Cleanup of PyPI maintainer list for OpenStack Projects
16:20:02 <gmann> as discussed in previous meetings, sent email about audit to openstack-discuss #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html
16:20:05 <gmann> also created a etherpad to track the audit results #link https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup
16:20:21 <gmann> I can see 3-4 projects have done the audit and added the results in the etherpad
16:20:39 <gmann> there is some discussion on ML about PyPi maintainers cleanup, hope you have read that
16:21:12 <gmann> any opinion on that?
16:21:48 <JayF> I'm not surprised we get some pushback on it. I don't think it changes that we're going down the right path.
16:22:03 <noonedeadpunk> Well, I think I can kind of relate to maintainers unwilling to fully give out credentials for projects they've just moved under the openinfra umbrella
16:22:07 <gmann> as we discussed earlier also, communication to the additional maintainers is the key, and conveying the intention behind that
16:22:32 <gmann> noonedeadpunk: yes that is the case.
16:22:51 <noonedeadpunk> also if foundation tries to use our infra as hosting for projects
16:22:51 <gmann> I will not be surprised to see more of that feedback as the audit goes on
16:23:50 <noonedeadpunk> as - "we can provide you ci and infra stack but you will need to give us access and remove yourself" sounds indeed not great and more like a catch
16:24:00 <knikolla[m]> it's also worth considering the fact that PyPI is not the only place we publish things to, and having additional maintainers can put things out of sync.
16:24:01 <gmann> JayF: agree, we stay with the cleanup decision, otherwise cases like xstatic* are more dangerous
16:24:30 <noonedeadpunk> but from a technical perspective I still don't really see why there should be humans involved
16:24:31 <gmann> knikolla[m]: true
16:25:25 <JayF> So I think it's fair to say nobody has a technical concern; but there is a perception concern.
16:25:25 <knikolla[m]> from playing a bit around with it, the API is very limited.
16:25:53 <noonedeadpunk> yeah and how to deal with that feeling of "takeover" I'm not sure to be frank
16:25:58 <gmann> noonedeadpunk: but at the same time, in open source it's governance and the OpenStack umbrella which have accountability, and there is no such thing as "I am the owner of this project/repo"
16:26:21 <JayF> noonedeadpunk: yeah, the problem is the individual hooked up to pypi alongside openstackci feels a false sense of ownership :/
16:26:39 <JayF> noonedeadpunk: understandable, but in terms of governance it's clear that it's owned by the community, and openstackci user is the rep of that
16:26:42 <knikolla[m]> gmann: ++, the way I see it, when you cede your project to OpenStack's governance the owner is effectively OpenStack, not you.
16:26:52 <gmann> true
16:27:02 <noonedeadpunk> gmann: I totally agree with you.
16:27:47 <gmann> I think we are all on the same page here. let's continue the discussion on the ML and add your opinion there. we continue this audit and then the cleanup as planned
16:28:08 <fungi> just to touch on the "foundation" point, the "foundation" isn't mandating this, it's an openstack community decision about official openstack deliverables
16:28:10 <gmann> also reach out to projects you know to plan the audit
16:28:23 <gmann> fungi: yeah
16:28:25 <knikolla[m]> If more scripts are needed, I'm happy to write them. That was fun.
16:28:47 <gmann> knikolla[m]: +1, that project listing is really helpful.
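Since the PyPI API does not expose the maintainers list itself, a helper along the lines knikolla[m] describes can only gather the public release metadata and point at the project pages that still need a manual look. A minimal sketch, assuming the public PyPI JSON API; the deliverable names are illustrative examples, not a complete audit list:

```python
# Minimal sketch: collect public PyPI metadata for a few deliverables and
# print the project URLs where the maintainers list must be checked by hand.
import json
import urllib.error
import urllib.request

DELIVERABLES = ["zaqar", "python-zaqarclient", "mistral"]  # illustrative names

for name in DELIVERABLES:
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            info = json.load(resp)["info"]
    except urllib.error.HTTPError:
        print(f"{name}: not found on PyPI")
        continue
    print(f"{name}: latest {info['version']} -> audit at https://pypi.org/project/{name}/")
```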
16:28:48 <fungi> the foundation isn't telling projects they need to hand over credentials, far from it
16:29:08 <gmann> yeah its openstack community and governance
16:29:36 <fungi> it's not required for hosting a project on opendev either
16:29:40 <noonedeadpunk> ok, then maybe we should also review what we're saying in docs regarding moving existing project under openstack umbrella?
16:29:45 <clarkb> a foundation employee (me) pointed out one of your packages had been hijacked. But I didn't mandate you do anything at all. I did suggest that one path forward would be to officially hand over control of that package to the hijackers and retire it in opendev though
16:30:20 <clarkb> Also note I noticed this because I'm subscribed to openstack project events in github. Something anyone can do. No special access as foundation employee or opendev admin required
16:30:43 <knikolla[m]> ++fungi and clarkb. This is an initiative from TC, not OpenInfra.
16:30:44 <gmann> 'moving existing project under openstack umbrella?' ? which one. i did not get this fully
16:31:17 <gmann> clarkb: yes
16:32:48 <noonedeadpunk> I was thinking about this page https://governance.openstack.org/tc/reference/new-projects-requirements.html or something related about adding new projects (or emerging projects) under the umbrella
16:33:09 <noonedeadpunk> I was talking overall about process and managing expectations of maintainers
16:33:33 <fungi> "Releases of OpenStack deliverables are handled by the OpenStack Release Management team through the openstack/releases repository. Official projects are expected to relinquish direct tagging (and branch creation) rights in their Gerrit ACLs once their release jobs are functional."
16:33:38 <fungi> that could be expanded on, sure
16:33:52 <gmann> noonedeadpunk: sure, there is no harm in mentioning it there to be explicit so that new project creators do know about it
16:34:24 <knikolla[m]> ++
16:35:02 <gmann> noonedeadpunk: do you want to push the doc change?
16:35:12 <noonedeadpunk> to address things like `remove the project creator from their own project just for contributing it to OpenStack`
16:35:18 <noonedeadpunk> gmann: yep, can do this
16:35:25 <knikolla[m]> awesome
16:35:27 <gmann> cool, thanks
16:35:44 <gmann> let's move to next topic
16:35:52 <gmann> #topic Less Active projects status:
16:35:59 <gmann> Zaqar
16:36:20 <gmann> Zaqar (Zaqar deliverable) Gate is green, beta version is released
16:36:30 <gmann> #link https://review.opendev.org/c/openstack/releases/+/871399
16:36:32 <gmann> but
16:36:33 <fungi> awesome, i'll abandon my change. thanks!
16:36:41 <gmann> Zaqar-ui, python-zaqarclient tox4 issue fixes are up but not yet merged
16:36:51 <gmann> #link https://review.opendev.org/q/topic:zaqar-gate-fix
16:37:17 <gmann> these tox4 failure are not just this project but many project -ui/tempest plugins, client repo might not be fixed yet
16:37:50 <gmann> even though only the PTL is active (active on ping) there, I feel we can continue with Zaqar to be released in this cycle
16:37:55 <gmann> and not mark it as inactive
16:38:03 <fungi> yep, i just abandoned
16:38:06 <gmann> but keep monitoring the situation
16:38:08 <fungi> excellent outcome
16:38:44 <gmann> any objection on the plan? ^^
16:40:06 <gmann> no reply seems no objection :)
16:40:14 <gmann> I will convey the same to release team
16:40:21 <gmann> fungi: +1 on abandoning the patch
16:40:30 <gmann> Mistral
16:40:40 <gmann> Gate is green, Beta version is released and all good now
16:40:48 <gmann> #link https://review.opendev.org/c/openstack/releases/+/869470
16:40:58 <gmann> #link https://review.opendev.org/c/openstack/releases/+/869448
16:41:04 <gmann> Governance patch to deprecate Mistral release is abandoned
16:41:12 <gmann> #link https://review.opendev.org/c/openstack/governance/+/866562
16:41:31 <gmann> with that, we are left with no inactive projects.
16:42:08 <noonedeadpunk> \o/
16:42:21 <gmann> but I agree that we stretched the monitoring/marking of less active projects beyond deadlines which is not good for release team planning
16:42:37 <JayF> I wonder if there's something we can do to be proactive for Bobcat
16:42:49 <gmann> that is something we need to improve: 'detect and decide on such projects before m-2'
16:42:54 <JayF> we know these are skeleton-crewed projects (and there are likely others); it might be beneficial for us to reach out to them much earlier
16:43:03 <gmann> yeah, we should do that
16:43:22 <JayF> if I can crack the nut on grading how active a project is, that could help us quantify which projects need help
16:43:31 <gmann> I will add this topic for vPTG discussion
16:43:47 <gmann> JayF: perfect and that can help to proceed/discuss further
16:44:30 <gmann> anything else on this topic?
16:45:08 <gmann> #topic Recurring tasks check
16:45:10 <gmann> Bare 'recheck' state
16:45:16 <gmann> #link https://etherpad.opendev.org/p/recheck-weekly-summary
16:45:32 <gmann> slaweq is not present today but he added a summary in the above etherpad
16:45:38 <gmann> data seems much better than last week
16:46:09 <gmann> #topic Open Reviews
16:46:22 <gmann> #link https://review.opendev.org/q/projects:openstack/governance+is:open
16:46:36 <gmann> three open reviews, two of them are waiting on a project-config change
16:46:52 <gmann> this needs more reviews, please check  #link https://review.opendev.org/c/openstack/governance/+/871302
16:47:07 <gmann> and that is all from today's agenda
16:47:30 <gmann> we will have our next meeting on Feb 1st, which will be a video call on zoom.
16:47:55 <gmann> we have ~13 min left if there is anything else to discuss? otherwise we can close the meeting.
16:48:12 <JayF> I'll note I'm early on working on my item for this cycle
16:48:23 <JayF> I strongly welcome any input here: https://etherpad.opendev.org/p/project-health-check
16:48:45 <JayF> Trying to get ideas on what to measure and why before I start thinking about how and writing it up
16:48:51 <JayF> so if you have opinions please toss them in the etherpad
16:48:57 <gmann> thanks. will check it
16:50:19 <fungi> i'm getting a strong feeling of déjà vu. check the earlier lists of criteria we tried to measure for project health if you haven't already, as well as reasons why we eventually deemed those untenable
16:50:39 <JayF> fungi: ack; will look for them.
16:51:03 <fungi> should be able to dredge them out of the governance git history
16:51:25 <fungi> and look at tc meeting minutes from around those changes landing
16:51:33 <gmann> I will say that was more about checking every project's health and making decisions, and I noticed the TC did that without discussing it with the project teams :)
16:51:43 <spotz[m]> Yeah also maybe the chat logs?
16:51:57 <gmann> but now the situation has changed where we are struggling to even have a single active maintainer
16:52:32 <fungi> also remember that there are 9 tc members and approximately 10x that many project teams
16:52:35 <knikolla[m]> would be interesting to compare our standards for what an inactive projects looks like then vs now.
16:52:41 <gmann> so instead of 'go and check every project' we can go with 'observe a set of things and then bring that project to the monitoring and discussion table'
16:52:41 <dansmith> fungi: yeah...
16:53:01 <knikolla[m]> "project only has 5 cores" vs "oh, we're lucky the only core fixed the gate"
16:53:06 <gmann> yeah
16:53:40 <JayF> Yeah I don't think the idea is to program this into a terminator bot or anything
16:53:54 <JayF> but just give us some kind of reliable way to indicate to people externally that a project needs help
16:54:01 <gmann> yes
16:54:06 <JayF> e.g. maybe OVHCloud steps up for Mistral earlier if we have it in yellow on a dashboard
16:54:22 * dansmith is skeptical it will ever be anything remotely like "reliable"
16:54:22 <gmann> yeah, that is good example
16:54:43 <gmann> anyways let's put the ideas in etherpad
16:54:54 <JayF> Including skepticisms too :D
16:56:04 <gmann> ok, i think that is all for today. thanks everyone for joining
16:56:08 <gmann> #endmeeting