16:00:17 #startmeeting tc
16:00:17 Meeting started Wed Jan 25 16:00:17 2023 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:17 The meeting name has been set to 'tc'
16:00:20 #topic Roll call
16:00:26 o/
16:00:32 o/
16:00:49 o/
16:01:33 o/
16:02:00 o/
16:03:03 let's start
16:03:05 #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions
16:03:09 today's agenda ^^
16:03:17 #topic Follow up on past action items
16:03:25 two action items from last meeting
16:03:36 gmann to send email to PTLs on openstack-discuss about checking the PyPI maintainers list for their projects
16:03:37 done
16:03:53 o/
16:03:59 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html
16:04:08 we will talk about it in the next topic
16:04:18 gmann to reach out to the release team to check about the Zaqar gate fix by 25th Jan as the deadline to consider it for this cycle's release
16:04:49 this is also done, we will talk about it in detail in the next topics
16:05:00 #topic Gate health check
16:05:05 any news on the gate?
16:05:08 unhealthy
16:05:28 the skip of the test that was eating the large images merged, which helped a lot
16:05:38 I've got the fix for the test and the unskip up now
16:05:46 but nova at least is still struggling to merge stuff
16:06:06 ok, I will take a look at the fix, thanks
16:06:06 we've got a functional failure that manifests fairly often, which I don't think we've tracked down
16:06:29 and we still have a lot of failures, usually around volume stuff hitting in the tempest jobs, which probably affect other people
16:06:31 yeah
16:06:33 I don't think any of the gate issues for the last couple of weeks have impacted Ironic, FWIW. We're still in about as good shape as we've been all year.
16:06:36 o/
16:06:39 i'm seeing problems in ussuri and train grenade and tempest jobs, something about not being able to build bcrypt because Rust isn't available
16:06:42 not sure if those are cinder or nova
16:06:57 i did notice cinder/glance/nova are all having a tough time getting the ossa-2023-002 backports landed
16:07:05 I saw someone reported a volume state issue in the qa channel as well
16:07:06 fungi: exactamundo
16:07:06 yeah
16:07:35 rosmaita: a missing rust build dependency is a sign that pyca/cryptography doesn't have a suitable wheel for that platform with the requested version
16:07:46 rosmaita: train and ussuri are not supposed to run grenade as mandatory since they are in EM state
16:08:15 gmann: how about tempest?
16:08:23 one recent tempest master change broke the stable/wallaby EM testing, which tempest master does not support, so I need to pin an old compatible tempest there
16:08:26 i guess for EM, only unit and pep8?
16:08:34 rosmaita: those should pass.
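Whether pyca/cryptography (or bcrypt) needs Rust at install time on those old branches comes down to whether PyPI publishes a pre-built wheel for that platform and interpreter, as noted above. A minimal sketch of checking that via the PyPI JSON API; the package, version, and tag fragment below are illustrative examples only, not values from the meeting:

```python
# Minimal sketch: does this release publish a wheel matching a platform tag?
# Package, version, and tag fragment are illustrative examples only.
import json
import urllib.request

def has_matching_wheel(package, version, tag_fragment):
    """Return True if any published wheel filename contains tag_fragment."""
    url = f"https://pypi.org/pypi/{package}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return any(
        f["packagetype"] == "bdist_wheel" and tag_fragment in f["filename"]
        for f in data["urls"]
    )

# e.g. look for a CPython 3.6 abi3 manylinux wheel (train/ussuri-era interpreter)
print(has_matching_wheel("cryptography", "3.4.8", "cp36-abi3-manylinux"))
```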
16:08:40 tempest integration tests are required
16:08:58 please ping me the failure link in qa and I will have a look
16:09:09 dang
16:09:12 ok, will do
16:09:38 but stable/wallaby is broken as reported by gibi in the nova channel and I will push a fix today
16:10:09 this cycle seems not good for gate health, not just tox4 but many more unstable things
16:10:10 tbh the integrated queue doesn't look optimal for cases with vulnerabilities
16:10:35 gmann: yeah
16:10:44 As when you need to merge something fast, within this queue patches are waiting for merge and depending on everything else that's going on around
16:11:18 noonedeadpunk: but CVEs can be manually applied as long as the patch is available, which is why the distributors get advance warning, and even people deploying from source can do that
16:11:32 and in times of intermittent things or just loaded zuul workers it's becoming a nightmare to land things
16:11:58 yes, this is exactly why our advisories don't say "upgrade to this release" but rather "here's a link to the patches, even if they're not merged yet"
16:12:41 I think it's hard to make the argument that it's OK that it can take a while to make our git-shipped version of software secure, even if the patches are available.
16:12:45 Well, pip can't really pull from gerrit, can it?
16:12:59 As it needs to fetch refs first that it knows nothing about
16:13:37 to me, it's hard to say "we forced this patch in that didn't pass tests, but we're sure it's better"
16:13:56 especially if that has the potential to 100% break everyone if it's wrong, instead of the partial failures that occur with the general gate instability
16:14:15 I'm not saying not passing tests, what I'm saying is - in case of some random failure on another project we have to wait for another 5 hours
16:14:45 and hope that nothing else will happen on the queue
16:14:47 yes, but to reiterate (for I can't recall how many times now) we publish packages to pypi for ease of retrieval by testing systems and distributors. that some deployment projects are relying on packages on pypi for production use is a serious concern
16:14:50 meh, the patch is available
16:15:07 fungi: right
16:16:19 ok, any other gate failure/concern to discuss?
16:16:46 well, for me as an operator, doing `apt upgrade` and getting a regression in neutron, which happened like 3 times in the last 4 releases, is quite a serious concern not to use distro packages... anyway
16:18:20 I think we can move forward :)
16:18:44 let's hope we get the gate green there :) that is the ultimate goal
16:18:47 moving to the next topic
16:19:06 #topic Cleanup of PyPI maintainer list for OpenStack Projects
16:20:02 as discussed in previous meetings, I sent the email about the audit to openstack-discuss #link https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html
16:20:05 also created an etherpad to track the audit results #link https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup
16:20:21 I can see 3-4 projects have done the audit and added the result in the etherpad
16:20:39 there is some discussion on the ML about the PyPI maintainers cleanup, hope you have read that
16:21:12 any opinion on that?
16:21:48 I'm not surprised we get some pushback on it. I don't think it changes that we're going down the right path.
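On the earlier point that advisories link to the patches themselves even before they merge: an unmerged fix can be fetched from Gerrit and applied locally. A minimal sketch of one way to do that, where the change number is a placeholder rather than a real OSSA-2023-002 review:

```python
# Minimal sketch (not an official workflow): fetch the current revision of a
# Gerrit change as a patch so it can be applied locally with `git am`.
# The change number below is a placeholder, not a real review number.
import base64
import urllib.request

def download_patch(change_number, out_file):
    """Save the current revision of a Gerrit change as a git patch file."""
    url = f"https://review.opendev.org/changes/{change_number}/revisions/current/patch"
    with urllib.request.urlopen(url) as resp:
        # Gerrit returns the patch text base64-encoded
        patch = base64.b64decode(resp.read())
    with open(out_file, "wb") as f:
        f.write(patch)

download_patch(999999, "ossa-fix.patch")  # then: git am ossa-fix.patch
```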
16:22:03 Well, I think I can kind of relate to maintainers unwilling to fully give out credentials for projects they've just moved under the openinfra umbrella
16:22:07 as we discussed earlier also, communication to the additional maintainers is the key, and conveying the intention behind that
16:22:32 noonedeadpunk: yes that is the case.
16:22:51 also if the foundation tries to use our infra as hosting for projects
16:22:51 I will not be surprised to see more of that feedback as the audit goes on
16:23:50 as "we can provide you the ci and infra stack but you will need to give us access and remove yourself" sounds indeed not great, and more like a catch
16:24:00 it's also worth considering the fact that PyPI is not the only place we publish things to, and having additional maintainers can put things out of sync.
16:24:01 JayF: agree, we stay with the cleanup decision, otherwise cases like xstatic* are more dangerous
16:24:30 but from a technical perspective I still don't really see why there should be humans involved
16:24:31 knikolla[m]: true
16:25:25 So I think it's fair to say nobody has a technical concern; but there is a perception concern.
16:25:25 from playing around with it a bit, the API is very limited.
16:25:53 yeah, and how to deal with that feeling of "takeover" I'm not sure, to be frank
16:25:58 noonedeadpunk: but at the same time, in open source it's the governance and the OpenStack umbrella which has accountability, and there is no such thing as "I am the owner of this project/repo"
16:26:21 noonedeadpunk: yeah, the problem is the individual hooked up to pypi alongside openstackci feels a false sense of ownership :/
16:26:39 noonedeadpunk: understandable, but in terms of governance it's clear that it's owned by the community, and the openstackci user is the rep of that
16:26:42 gmann: ++, the way I see it, when you cede your project to OpenStack's governance the owner is effectively OpenStack, not you.
16:26:52 true
16:27:02 gmann: I totally agree with you.
16:27:47 I think we are all on the same page here. let's continue the discussion on the ML and add your opinion there. we will continue this audit and then the cleanup as planned
16:28:08 just to touch on the "foundation" point, the "foundation" isn't mandating this, it's an openstack community decision about official openstack deliverables
16:28:10 also reach out to your known projects to plan the audit
16:28:23 fungi: yeah
16:28:25 If more scripts are needed, I'm happy to write them. That was fun.
16:28:47 knikolla[m]: +1, that listing of projects was really helpful.
16:28:48 the foundation isn't telling projects they need to hand over credentials, far from it
16:29:08 yeah, it's the openstack community and governance
16:29:36 it's not required for hosting a project on opendev either
16:29:40 ok, then maybe we should also review what we're saying in the docs regarding moving existing projects under the openstack umbrella?
16:29:45 a foundation employee (me) pointed out one of your packages had been hijacked. But I didn't mandate you do anything at all. I did suggest that one path forward would be to officially hand over control of that package to the hijackers and retire it in opendev though
16:30:20 Also note I noticed this because I'm subscribed to openstack project events in github. Something anyone can do. No special access as foundation employee or opendev admin required
16:30:43 ++ fungi and clarkb. This is an initiative from the TC, not OpenInfra.
16:30:44 'moving existing projects under the openstack umbrella'? which one? I did not get this fully
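For the audit scripts mentioned above, one rough way to list who holds a role on a package besides the shared openstackci account is shown below. This is only a sketch: it assumes PyPI's legacy XML-RPC package_roles method is still usable, and the package name is just an example; the real audit results live in the etherpad linked earlier.

```python
# Rough sketch for the PyPI maintainers audit: list accounts holding a role
# on a package besides the shared openstackci account. Assumes PyPI's legacy
# XML-RPC `package_roles` method is still available (it is rate-limited and
# may eventually be retired); the package name is only an example.
import xmlrpc.client

pypi = xmlrpc.client.ServerProxy("https://pypi.org/pypi")

def extra_maintainers(package):
    """Return (role, user) pairs other than openstackci for a package."""
    return [
        (role, user)
        for role, user in pypi.package_roles(package)
        if user != "openstackci"
    ]

print(extra_maintainers("zaqar"))
```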
16:31:17 clarkb: yes
16:32:48 I was thinking about this page https://governance.openstack.org/tc/reference/new-projects-requirements.html or something related about adding new projects (or emerging projects) under the umbrella
16:33:09 I was talking overall about the process and managing expectations of maintainers
16:33:33 "Releases of OpenStack deliverables are handled by the OpenStack Release Management team through the openstack/releases repository. Official projects are expected to relinquish direct tagging (and branch creation) rights in their Gerrit ACLs once their release jobs are functional."
16:33:38 that could be expanded on, sure
16:33:52 noonedeadpunk: sure, there is no harm in mentioning it there to be explicit, so that new project creators do know about it
16:34:24 ++
16:35:02 noonedeadpunk: do you want to push the doc change?
16:35:12 to address things like `remove the project creator from their own project just for contributing it to OpenStack`
16:35:18 gmann: yep, can do this
16:35:25 awesome
16:35:27 cool, thanks
16:35:44 let's move to the next topic
16:35:52 #topic Less Active projects status
16:35:59 Zaqar
16:36:20 Zaqar (Zaqar deliverable) gate is green, beta version is released
16:36:30 #link https://review.opendev.org/c/openstack/releases/+/871399
16:36:32 but
16:36:33 awesome, i'll abandon my change. thanks!
16:36:41 the zaqar-ui and python-zaqarclient tox4 issue fixes are up but not yet merged
16:36:51 #link https://review.opendev.org/q/topic:zaqar-gate-fix
16:37:17 these tox4 failures are not just in this project; many projects' -ui/tempest plugin/client repos might not be fixed yet
16:37:50 even though only the PTL is active (active on ping) there, I feel we can continue with Zaqar to be released in this cycle
16:37:55 and not mark it as inactive
16:38:03 yep, i just abandoned
16:38:06 but keep monitoring the situation
16:38:08 excellent outcome
16:38:44 any objection to the plan? ^^
16:40:06 no reply, seems no objection :)
16:40:14 I will convey the same to the release team
16:40:21 fungi: +1 on abandoning the patch
16:40:30 Mistral
16:40:40 gate is green, beta version is released and all good now
16:40:48 #link https://review.opendev.org/c/openstack/releases/+/869470
16:40:58 #link https://review.opendev.org/c/openstack/releases/+/869448
16:41:04 the governance patch to deprecate the Mistral release is abandoned
16:41:12 #link https://review.opendev.org/c/openstack/governance/+/866562
16:41:31 with that, we are left with no inactive projects.
16:42:08 \o/
16:42:21 but I agree that we stretched the monitoring/marking of less active projects beyond deadlines, which is not good for release team planning
16:42:37 I wonder if there's something we can do to be proactive for Bobcat
16:42:49 that is something we need to improve: 'detect and decide on such projects before m-2'
16:42:54 we know these are skeleton-crewed projects (and there are likely others); it might be beneficial for us to reach out to them much earlier
16:43:03 yeah, we should do that
16:43:22 if I can crack the nut on grading how active a project is, that could help us quantify which projects need help
16:43:31 I will add this topic for vPTG discussion
16:43:47 JayF: perfect, and that can help to proceed/discuss further
16:44:30 anything else on this topic?
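One rough signal for grading how active a project is, as floated above, is simply how many changes it has merged recently. A minimal sketch against the Gerrit REST API follows; the project name, time window, and result cap are arbitrary, and this is not an agreed TC health metric:

```python
# Rough sketch of one possible activity signal: changes merged in the last
# N days for a project, via Gerrit's REST API. Project, window, and the 500
# result cap are arbitrary; this is not an agreed TC health metric.
import json
import urllib.parse
import urllib.request

def merged_changes(project, days=90):
    """Count changes merged in the given window (capped at 500 results)."""
    query = f"project:{project} status:merged -age:{days}d"
    url = ("https://review.opendev.org/changes/?q="
           + urllib.parse.quote(query) + "&n=500")
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI
    return len(json.loads(body.split("\n", 1)[1]))

print(merged_changes("openstack/zaqar"))
```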
16:45:08 #topic Recurring tasks check
16:45:10 Bare 'recheck' state
16:45:16 #link https://etherpad.opendev.org/p/recheck-weekly-summary
16:45:32 slaweq is not present today but he added the summary in the above etherpad
16:45:38 the data seems much better than last week
16:46:09 #topic Open Reviews
16:46:22 #link https://review.opendev.org/q/projects:openstack/governance+is:open
16:46:36 three open reviews, two of them are waiting on a project-config change
16:46:52 this one needs more reviews, please check #link https://review.opendev.org/c/openstack/governance/+/871302
16:47:07 and that is all from today's agenda
16:47:30 we will have our next meeting on Feb 1st, which will be a video call on zoom.
16:47:55 we have ~13 min left if there is anything else to discuss? otherwise we can close the meeting.
16:48:12 I'll note I'm early on working on my item for this cycle
16:48:23 I strongly welcome any input here: https://etherpad.opendev.org/p/project-health-check
16:48:45 Trying to get ideas on what to measure and why before I start thinking about how and writing it up
16:48:51 so if you have opinions please toss them in the etherpad
16:48:57 thanks. will check it
16:50:19 i'm getting a strong feeling of déjà vu. check the earlier lists of criteria we tried to measure for project health if you haven't already, as well as the reasons why we eventually deemed those untenable
16:50:39 fungi: ack; will look for them.
16:51:03 should be able to dredge them out of the governance git history
16:51:25 and look at tc meeting minutes from around those changes landing
16:51:33 I will say that was more about checking every project's health and making a decision, and I noticed the TC did it without discussing it with the project teams :)
16:51:43 Yeah, also maybe the chat logs?
16:51:57 but now the situation has changed, where we are struggling with even having a single active maintainer
16:52:32 also remember that there are 9 tc members and approximately 10x that many project teams
16:52:35 it would be interesting to compare our standards for what an inactive project looks like then vs now.
16:52:41 so instead of 'go and check every project' we can go with 'observe this set of things and then take that project to the monitoring and discussion table'
16:52:41 fungi: yeah...
16:53:01 "project only has 5 cores" vs "oh, we're lucky the only core fixed the gate"
16:53:06 yeah
16:53:40 Yeah, I don't think the idea is to program this into a terminator bot or anything
16:53:54 but just to give us some kind of reliable way to indicate to people externally that a project needs help
16:54:01 yes
16:54:06 e.g. maybe OVHCloud steps up for Mistral earlier if we have it in yellow on a dashboard
16:54:22 * dansmith is skeptical it will ever be anything remotely like "reliable"
16:54:22 yeah, that is a good example
16:54:43 anyway, let's put the ideas in the etherpad
16:54:54 Including skepticisms too :D
16:56:04 ok, i think that is all for today. thanks everyone for joining
16:56:08 #endmeeting