17:59:31 <knikolla> #startmeeting tc
17:59:31 <opendevmeet> Meeting started Tue Apr 11 17:59:31 2023 UTC and is due to finish in 60 minutes.  The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:31 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:59:31 <opendevmeet> The meeting name has been set to 'tc'
17:59:43 <knikolla> #topic Roll call
17:59:48 <noonedeadpunk> o/
17:59:49 <knikolla> Hi all, welcome to the weekly meeting of the OpenStack Technical Committee
17:59:54 <knikolla> A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct
17:59:55 <knikolla> o/
18:00:08 <jamespage> o/
18:00:12 <dansmith> o/
18:00:18 <gmann> o/
18:00:33 <slaweq> o/
18:00:35 <knikolla> We have JayF noted under absences for today.
18:00:43 <spotz> o/
18:01:42 * slaweq will probably need to leave a bit earlier today
18:01:58 <knikolla> thanks for the heads up!
18:02:56 <knikolla> If I'm not mistaken we're missing rosmaita  from the roll call
18:03:11 <rosmaita> oops
18:03:14 <rosmaita> o?
18:03:20 <rosmaita> i mean o/
18:03:21 <knikolla> \o/
18:03:29 <knikolla> #topic Follow up on past action items
18:03:30 <slaweq> :)
18:03:47 <knikolla> I had an action to create a new TC tracker for 2023.2
18:04:01 <knikolla> As I was on vacation last week, I have not completed that action.
18:05:29 <knikolla> No other action items come to mind except the PTG items. So moving on to the next topic.
18:05:53 <gmann> there is one for JayF #link  https://meetings.opendev.org/meetings/tc/2023/tc.2023-03-22-16.00.html
18:06:01 <gmann> which is completed
18:06:23 <knikolla> Yes, I think we discussed that during the PTG.
18:06:38 <gmann> yeah
18:06:38 <knikolla> If I'm not mistaken.
18:07:03 <knikolla> #topic Gate health check
18:07:13 <knikolla> Any updates on the situation of the gate?
18:07:24 <dansmith> not super healthy
18:07:30 <dansmith> hard to point the finger at one thing
18:07:43 <dansmith> although I will say that there was a recent u-c bump for PasteDeploy
18:07:49 <dansmith> which broke glance's functional tests hard,
18:07:59 <dansmith> and they have been broken for several weeks now, but it was only just discovered
18:08:01 <noonedeadpunk> We've seen post failures lately related to slow swift backends, I assume
18:08:34 <dansmith> I think there's a todo item there to get glance's functional tests into the u-c gate, but I might be wrong
18:08:55 <dansmith> noonedeadpunk: I've seen several such post failures as well
18:09:19 <gmann> yeah there were a few post failures last week
18:09:48 <slaweq> I have seen it once or twice too
18:12:42 <knikolla> Any action items that we want to circle back on during next week's meeting?
18:13:24 <dansmith> nothing specific to address at the moment I think
18:13:30 <dansmith> (which is not a good place to be)
18:13:47 <knikolla> Great to hear!
18:13:57 <dansmith> um...
18:14:22 <knikolla> I misread that, sorry.
18:14:44 <knikolla> Are there any exploratory items we can take on to reach out to the teams?
18:15:12 <dansmith> I've seen a number of guest kernel crashes on volume-based tests lately,
18:15:18 <dansmith> but I dunno what to do about those
18:15:29 <dansmith> they might be qemu things we have less control over
18:15:51 <slaweq> dansmith what guest image are you using?
18:15:56 <clarkb> are they overriding to enable nested virt?
18:15:58 <dansmith> I guess we need to "explore" how to avoid breaking glance functional tests with further u-c bumps :)
18:16:11 <clarkb> someone was looking at nested virt crashes on vexxhost I think
18:16:12 <dansmith> slaweq: just cirros in the usual jobs
18:16:20 <slaweq> I think we have seen many kernel panics with Cirros 0.5.x IIRC
18:16:27 <slaweq> but with 0.6.1 it's better
18:16:28 <dansmith> slaweq: yeah
18:16:32 <gmann> dansmith: adding a job there can help, as the requirements gate is a good place to hold a u-c bump if there are more things to fix first
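(Context for the suggestion above: a cross-check job in the openstack/requirements gate would effectively run glance's functional tests against the proposed constraints before the bump merges. A minimal manual sketch of the same check, assuming glance exposes a functional tox environment and a tox.ini that honors TOX_CONSTRAINTS_FILE, as most OpenStack projects do; the Gerrit ref is a placeholder:)

```bash
# Manually test a proposed upper-constraints bump against glance's
# functional tests before it merges. The change ref is a placeholder.
git clone https://opendev.org/openstack/requirements
git clone https://opendev.org/openstack/glance
cd requirements
git fetch https://review.opendev.org/openstack/requirements refs/changes/NN/NNNNNN/P
git checkout FETCH_HEAD
cd ../glance
# Point tox at the proposed constraints instead of the published ones
export TOX_CONSTRAINTS_FILE="$(pwd)/../requirements/upper-constraints.txt"
tox -e functional  # assumed env name; adjust to glance's tox.ini
```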
18:16:43 <dansmith> apparently 0.6.1 changes a lot about dhcp/network though so we saw worse behavior with 0.6
18:17:04 <slaweq> there's a different dhcp client used, but tempest supports that already
18:17:14 <slaweq> we are using it in the neutron ci job and it's fine for us
18:17:18 <gmann> slaweq: dansmith: I think we did revert 0.6 in devstack, should we bump the version there?
18:17:19 <dansmith> slaweq: yeah, bauzas tried using 0.6 and saw lots (more) failures
18:17:33 <dansmith> slaweq: hmm, okay
18:17:35 <slaweq> ahh, ok
18:18:19 <gmann> ah i remember failures with 0.6, and we reverted to 0.5 in devstack.
18:18:32 <dansmith> yeah
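(For reference, "bumping the version there" would be a one-line change to devstack's default, or a local override like the sketch below; CIRROS_VERSION and CIRROS_ARCH are the devstack stackrc variables as I recall them, so treat this as a hedged example rather than exact syntax:)

```bash
# local.conf, in the [[local|localrc]] section: pin the guest image
# devstack downloads and uploads to glance (variable names assumed
# from devstack's stackrc defaults)
CIRROS_VERSION=0.6.1
CIRROS_ARCH=x86_64
```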
18:18:56 <knikolla> Got it. I'll write this down when sending the weekly summary of the TC, to see if someone else has any ideas or has run into the same things.
18:19:23 <slaweq> maybe frickler can help with cirros issues then
18:19:39 <noonedeadpunk> I think it was when Qemu/KVM < 4.2, and its generic kernel version < 5.4
18:21:25 <dansmith> ...moving on?
18:21:30 <gmann> yeah, we can move on
18:21:39 <slaweq> ++
18:21:51 <knikolla> #topic 2023.2 cycle Leaderless projects
18:22:01 <knikolla> #link https://etherpad.opendev.org/p/2023.2-leaderless
18:22:15 <gmann> we have a few changes from what we discussed in the PTG
18:22:25 <gmann> first sahara:
18:22:36 <gmann> we decided to retire it, but there is a volunteer now to lead this project: Jerry from Inspur.
18:22:40 <gmann> they would like to maintain it
18:22:51 <gmann> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033254.html
18:23:29 <gmann> even last cycle, as tosky mentioned in the PTG, there was a PTL volunteer, but the project was still not maintained
18:23:47 <gmann> we might see the same situation this cycle too, but denying their request does not look good either
18:24:06 <gmann> I think we can try again this cycle and accept their PTL request?
18:24:11 <knikolla> Can we "approve with conditions"?
18:24:23 <noonedeadpunk> I'd give them a chance. I think the gates should be relatively okayish; at least last time I checked/fixed them they were not that bad.
18:24:24 <jamespage> I was about to ask the same
18:24:27 <gmann> conditions on what?
18:24:41 <noonedeadpunk> Release patches are passing at least, but not reviewed
18:24:42 <gmann> we can always retire any project if it goes inactive, right?
18:24:52 <rosmaita> i think we can point out to them that they need to start thinking about PTL earlier this cycle
18:24:56 <slaweq> ++
18:25:04 <gmann> If we do not retire it and accept their PTL, then we can monitor it like any other project
18:25:47 <gmann> rosmaita: yeah, i also mentioned actual project maintenance in the email, and that it is not just about being a PTL
18:25:50 <knikolla> Yeah... it's really hard to define any sort of condition
18:25:51 <rosmaita> what was the "tech preview" status for existing projects?
18:26:13 <gmann> I remember the gate was broken and noonedeadpunk or tosky fixed it to get the release out?
18:26:35 <noonedeadpunk> yup
18:26:51 <noonedeadpunk> It was that https://review.opendev.org/c/openstack/sahara/+/864728
18:26:52 <slaweq> rosmaita it's not "tech preview" but "inactive" project IIRC
18:26:56 <gmann> we can check: if their gate is broken and they are not fixing it, then move it under Inactive projects?
18:27:09 <rosmaita> slaweq: yes, that's what i was thinking of
18:27:28 <noonedeadpunk> well, it's green now
18:27:32 <noonedeadpunk> from what I see
18:27:33 <gmann> but we need to decide on release things by m-1, so let's monitor closely
18:28:28 <knikolla> sure
18:28:32 <gmann> let's not retire it while inspur is showing interest, but if we end up with an unmaintained situation this cycle too, then we can retire it before we ask for a PTL next cycle?
18:29:01 <rosmaita> gmann: can we make it inactive before retiring?
18:29:02 <noonedeadpunk> We should also be explicit in the ML that they must review patches and keep the gate healthy
18:29:18 <gmann> rosmaita: sure, that is a good flow. we can do that
18:29:33 <rosmaita> i finally found the doc about that: https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
18:29:45 <gmann> noonedeadpunk: +1, I can mention it explicitly in email as well as in their PTL appointment patch
18:29:55 <gmann> rosmaita: :)
18:30:11 <noonedeadpunk> as otherwise they run a big risk of Antelope being their last release
18:30:24 <gmann> yeah
18:30:26 <knikolla> so, inactive, and then circling back at M-1 to decide whether to release or retire?
18:30:50 <gmann> you mean marking it inactive now? or only if their gate is broken and there's no maintenance from the PTL/team?
18:31:00 <noonedeadpunk> I think we keep monitoring, and if they don't do a thing, then yes
18:31:17 <gmann> yeah, let's not mark inactive now but we can monitor it closely
18:31:30 <slaweq> IIRC we should monitor it and mark it as inactive before M-2 if it is not in good shape
18:32:03 <gmann> yeah
18:32:16 <rosmaita> how about we let them know they need to find a PTL before election time this cycle, or they go inactive and the clock starts for retirement
18:33:03 <spotz> I like that idea
18:33:04 <gmann> rosmaita: yeah, many projects miss the PTL nomination deadline and show up after that
18:33:23 <gmann> but yes, mentioning all those things to them is a good idea
18:33:34 <spotz> But for an inactive project I think making sure they have someone on time shows at least some forward progression
18:33:36 <knikolla> It's not just about having a PTL, but also about keeping the gate working and releases flowing.
18:34:07 <gmann> yeah, and that is what's expected from someone volunteering for PTL
18:34:14 <knikolla> I feel like marking the project as inactive gives us a way to mark something as "being actively monitored"
18:34:16 <gmann> to make sure the project is maintained
18:34:22 <rosmaita> well, i figure the PTL will handle that stuff ... i think maybe inspur needs a fire lit under their butt to allocate time for the project
18:34:29 <knikolla> since exiting the inactive state requires fulfilling the criteria for being accepted as an official project.
18:34:50 <noonedeadpunk> "keeping the gate working, and releases flowing" + , and patches reviewed
18:35:04 <gmann> knikolla: not as such; inactive is a state where a project has a failing gate, cannot merge patches, and so on. monitoring is different when their gate is up
18:35:44 <knikolla> per the resolution introducing the emerging and inactive states. "inactive state for existing projects that are not so active anymore"
18:35:52 <knikolla> there's nothing there saying that a project must be hard-failing.
18:36:37 <gmann> "For existing projects which became inactive due to lack of maintainers and are not able to do the mandatory activities, such as release, fix testing, review incoming code, etc., "
18:36:38 <knikolla> "are not able to do the mandatory activities, such as release, fix testing, review incoming code, etc., TC can consider marking such projects as inactive"
18:36:51 <knikolla> we have fixed their testing.
18:36:54 <gmann> this is entry criteria #link https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html#entry-criteria
18:37:09 <gmann> yeah it is not broken now. it is green
18:37:44 <gmann> either we could have moved them to inactive before fixing the testing, or we need to wait and see if it happens again with no one there to fix it
18:38:01 <knikolla> i'll drop the subject if i'm the only one thinking we should mark it as inactive now, rather than wait for later.
18:40:22 <knikolla> alright. then we'll keep an eye on Sahara, and if something breaks or the situation changes we'll mark them as inactive during M-1 or M-2.
18:40:22 <noonedeadpunk> I think we should not mark inactive now. But we can always vote :)
18:40:37 <slaweq> noonedeadpunk++
18:42:12 <knikolla> No it's alright, I was the only one pushing for it.
18:42:17 <knikolla> So gmann, can you respond to the PTL volunteer telling them to propose a patch to become Sahara PTL?
18:42:22 * slaweq needs to drop, I will read through log later
18:42:23 <slaweq> o/
18:42:23 <gmann> sure. I will explain all the work needed in the email reply, as well as ask them to propose a governance patch for the PTL assignment
18:42:35 <gmann> knikolla: yeah
18:42:57 <gmann> the next leaderless project, which we decided to retire, is Winstackers
18:43:00 <knikolla> #action gmann respond to Sahara PTL volunteer to propose a patch to governance, and explain the outcome of today's discussion
18:43:08 <gmann> +1
18:43:27 <gmann> there is a good question from tkajinam about Windows support in OpenStack if we retire this project #link https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033273.html
18:43:53 <gmann> retirement means all the dependent functionality is also gone.
18:44:22 <gmann> I am not sure if we need to find an alternate implementation for Windows support or try to find someone to maintain this project?
18:44:33 <knikolla> ugh :/ I'm sure there's plenty of clouds that depend on Windows VMs as a use case.
18:44:35 <dansmith> hmm, I wonder if that's a dep of the hyperv driver in the case of nova?
18:44:42 <gmann> also not sure if any company other than cloudbase was interested in Windows support in openstack
18:44:48 <gmann> dansmith: yes
18:44:52 <dansmith> knikolla: not sure this has any effect (other than optics) on vm support
18:45:16 <rosmaita> all the windows CI was run by cloudbase, is that correct?
18:45:17 <clarkb> gmann: there have definitely been people asking about windows support recently. For example with amd + windows being broken in nova
18:45:19 <dansmith> yeah looks like just the hyperv driver
18:45:32 <gmann> dansmith: need to check the hyperV deps, but not sure if that driver is maintained by anyone other than the cloudbase people
18:45:44 <dansmith> gmann: not sure it's really maintained at all, tbh
18:45:51 <dansmith> but never anyone other than them that I know of
18:45:52 <gmann> yeah.
18:46:20 <noonedeadpunk> Yeah, I don't think it's about guests
18:46:24 <TheJulia> Seems like something each project will need to look at and consider after consulting with their operator base, in the grand scheme of things.
18:46:43 <dansmith> last patch in 2020
18:47:00 <gmann> hmm. also not sure what the state of the hyperV 3rd party CI is
18:47:16 <dansmith> non-existent AFAIK
18:47:39 <dansmith> although maybe it still runs on virt/hyperv, but I haven't noticed it in forever
18:47:48 <gmann> i can see it here, but it's failing: https://review.opendev.org/c/openstack/nova/+/852087
18:47:59 <gmann> Cloudbase Nova Hyper-V CI
18:48:50 <knikolla> If it's been failing for a while, and this doesn't affect guest VMs in any way but only HyperV hypervisors, the situation is a bit different.
18:48:53 <gmann> but yes, it has deps in many projects, and it's a question of what they want to do about Windows support/users
18:49:39 <dansmith> last time I saw their CI run was 23 weeks ago
18:51:14 <knikolla> So the path forward is: 1) ask operators about windows support 2) ask projects to remove os-win dependencies, or make them soft dependencies 3) retire winstackers?
18:51:51 <dansmith> I think they're soft deps already IIRC, at least for nova
18:52:29 <gmann> yeah, the os-win deps need to be removed completely, but a project can find an alternate implementation if needed?
18:52:33 <dansmith> oh I see this: https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031044.html
18:52:49 <spotz> That the one from earlier?
18:52:50 <dansmith> no more ci, no more cloudbase working on openstack it seems
18:52:51 <gmann> dansmith: yeah in nov they announced, I also missed that
18:52:54 <dansmith> yeah me too
18:52:55 <spotz> NM november:)
18:53:07 <jamespage> I can't see a response to that statement from Lucian
18:53:27 <dansmith> so I guess that means nova should be working to remove hyperv from the tree
18:53:39 <noonedeadpunk> Yeah, I do remember it:)
18:53:55 <gmann> jamespage: yeah, no response, and that is why I was thinking no one is interested in Windows support? but at the same time we cannot say the openstack-discuss ML is the perfect place to get all the answers
18:54:06 <gmann> dansmith: I think so
18:54:24 <knikolla> seems like the approach i outlined above still works, but with removing os-win entirely rather than just making it a soft dependency.
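(A rough sketch of how step 2 could start: survey which repos still declare os-win as a dependency. The repo list below is illustrative only, not a statement of which projects actually depend on it:)

```bash
# Find declared os-win dependencies across a few candidate repos
# (list is illustrative, not exhaustive)
for repo in nova cinder os-brick glance-store; do
    git clone --depth 1 "https://opendev.org/openstack/$repo"
    grep -Hn 'os-win' "$repo"/requirements.txt "$repo"/test-requirements.txt 2>/dev/null
done
```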
18:54:51 <gmann> I am more concerned about the 1st one: how do we reach out to operators, and how much time do we need to get those answers from them?
18:54:55 <jamespage> does the last user survey have anything on this subject?
18:55:02 <gmann> the openstack-discuss ML is not the perfect place to reach out to them
18:55:07 <dansmith> cern was using hyperv at one point
18:55:17 <dansmith> it was a long while ago, not sure if they still are or not
18:56:02 <spotz> arne_wiebalck: ^
18:56:12 <jamespage> 2022 - 2% of deploys on HyperV
18:56:55 <dansmith> oof
18:57:29 <gmann> we can try to announce/notify it at the June event and then decide on retirement. do not start the retirement for now?
18:57:59 <knikolla> Agree on not starting retirement now
18:58:06 <gmann> and start the email announcement now, including on the openstack-announce ML
18:58:34 <knikolla> But we need to start introducing the subject through mailing list, as that will affect operators and projects.
18:58:37 <knikolla> gmann++
18:58:40 <gmann> yeah
18:59:05 <gmann> we can try in all those places and make a final attempt at the June event. if nothing comes up, then retirement is the only option
18:59:14 <knikolla> ++
18:59:17 <jamespage> sounds sensible
18:59:19 <knikolla> any volunteers to write the email?
18:59:58 <knikolla> I can do it then.
19:00:04 <knikolla> We're out of time, thanks all.
19:00:07 <jamespage> I'm happy to give that a go
19:00:15 <knikolla> jamespage: awesome, thanks :)
19:00:22 <gmann> oh, we missed the docs thing, which is important to fix
19:00:33 <jamespage> will ask for a pre-send review knikolla
19:00:41 <knikolla> #action jamespage to write email about Winstackers removal
19:00:48 <knikolla> :)
19:01:00 <knikolla> We can continue outside of the meeting gmann
19:01:04 <knikolla> #endmeeting