17:59:31 #startmeeting tc
17:59:31 Meeting started Tue Apr 11 17:59:31 2023 UTC and is due to finish in 60 minutes. The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:59:31 The meeting name has been set to 'tc'
17:59:43 #topic Roll call
17:59:48 o/
17:59:49 Hi all, welcome to the weekly meeting of the OpenStack Technical Committee
17:59:54 A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct
17:59:55 o/
18:00:08 o/
18:00:12 o/
18:00:18 o/
18:00:33 o/
18:00:35 We have JayF noted under absences for today.
18:00:43 o/
18:01:42 * slaweq will probably need to leave a bit earlier today
18:01:58 thanks for the heads up!
18:02:56 If I'm not mistaken we're missing rosmaita from the roll call
18:03:11 oops
18:03:14 o?
18:03:20 i mean o/
18:03:21 \o/
18:03:29 #topic Follow up on past action items
18:03:30 :)
18:03:47 I had an action to create a new TC tracker for 2023.2
18:04:01 As I was on vacation last week, I have not completed that action.
18:05:29 No other action items come to mind except the PTG items. So moving on to the next topic.
18:05:53 there is one for JayF #link https://meetings.opendev.org/meetings/tc/2023/tc.2023-03-22-16.00.html
18:06:01 which is completed
18:06:23 Yes, I think we discussed that during the PTG.
18:06:38 yeah
18:06:38 If I'm not mistaken.
18:07:03 #topic Gate health check
18:07:13 Any updates on the situation of the gate?
18:07:24 not super healthy
18:07:30 hard to point the finger at one thing
18:07:43 although I will say that there was a recent u-c bump for PasteDeploy
18:07:49 which broke glance's functional tests hard,
18:07:59 and they have been broken for several weeks now, but it was only just discovered
18:08:01 We've seen post failures lately, related to slow swift backends I assume
18:08:34 I think there's a todo item there to get glance's functional tests into the u-c gate, but I might be wrong
18:08:55 noonedeadpunk: I've seen several such post failures as well
18:09:19 yeah there were a few post failures last week
18:09:48 I have seen it once or twice too
18:12:42 Any action items that we want to circle back on during next week's meeting?
18:13:24 nothing specific to address at the moment I think
18:13:30 (which is not a good place to be)
18:13:47 Great to hear!
18:13:57 um...
18:14:22 I misread that, sorry.
18:14:44 Are there any exploratory items we can take to reach out to the teams?
18:15:12 I've seen a number of guest kernel crashes on volume-based tests lately,
18:15:18 but I dunno what to do about those
18:15:29 they might be qemu things we have less control over
18:15:51 dansmith: what guest image are you using?
18:15:56 are they overriding to enable nested virt?
18:15:58 I guess we need to "explore" how to avoid breaking glance functional tests with further u-c bumps :)
18:16:11 someone was looking at nested virt crashes on vexxhost I think
18:16:12 slaweq: just cirros in the usual jobs
18:16:20 I think we have seen many kernel panics with Cirros 0.5.x IIRC
18:16:27 but with 0.6.1 it's better
18:16:28 slaweq: yeah
18:16:32 dansmith: adding a job there can help, as the requirements gate is a good place to wait if there are more things to fix before a u-c bump
18:16:43 apparently 0.6.1 changes a lot about dhcp/network though, so we saw worse behavior with 0.6
18:17:04 there's a different dhcp client used, but tempest supports that already
18:17:14 we are using it in the neutron ci job and it's fine for us
18:17:18 slaweq: dansmith: I think we did revert the 0.6 in devstack, should we bump the version there?
18:17:19 slaweq: yeah, bauzas tried using 0.6 and saw lots (more) failures
18:17:33 slaweq: hmm, okay
18:17:35 ahh, ok
18:18:19 ah, i remember failures with 0.6 and reverted to use 0.5 in devstack.
18:18:32 yeah
18:18:56 Got it. I'll write this down when sending the weekly summary of the TC, to see if someone else has any ideas or has run into the same things.
18:19:23 maybe frickler can help with the cirros issues then
18:19:39 I think it was when Qemu/KVM<4.2, and its generic kernel version <5.4
18:21:25 ...moving on?
18:21:30 yeah, we can move on
18:21:39 ++
18:21:51 #topic 2023.2 cycle Leaderless projects
18:22:01 #link https://etherpad.opendev.org/p/2023.2-leaderless
18:22:15 we have a few changes from what we discussed in the PTG
18:22:25 first sahara:
18:22:36 we decided to retire it, but there is a volunteer now to lead this project: Jerry from the Inspur company.
18:22:40 they would like to maintain it
18:22:51 #link https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033254.html
18:23:29 as tosky mentioned in the PTG, there was a PTL volunteer in the last cycle also, but the project was not maintained
18:23:47 we might see this situation this cycle also, but denying their request does not look good either
18:24:06 I think we can try for this cycle also and accept their PTL request ?
18:24:11 Can we "approve with conditions"?
18:24:23 I'd give them a chance. I think the gates should be relatively okayish - at least last time I checked/fixed them they were not that bad.
18:24:24 I was about to ask the same
18:24:27 conditions on what?
18:24:41 Release patches are passing at least, but not reviewed
18:24:42 we can always retire any project if it goes inactive, right ?
18:24:52 i think we can point out to them that they need to start thinking about a PTL earlier this cycle
18:24:56 ++
18:25:04 If we do not retire it and accept their PTL, then we can monitor it like any other project
18:25:47 rosmaita: yeah, i mentioned actual project maintenance in the email as well, it is not just about being a PTL
18:25:50 Yeah... it's really hard to define any sort of condition
18:25:51 what was the "tech preview" status for existing projects?
18:26:13 I remember the gate was broken and noonedeadpunk or tosky fixed it to get it released ?
18:26:35 yup
18:26:51 It was this: https://review.opendev.org/c/openstack/sahara/+/864728
18:26:52 rosmaita: it's not "tech preview" but "inactive" project IIRC
18:26:56 we can check, and if their gate is broken and they are not fixing it, then move it under Inactive projects ?
18:27:09 slaweq: yes, that's what i was thinking of
18:27:28 well, it's green now
18:27:32 from what I see
18:27:33 but we need to decide on release things by m-1, so monitor closely
18:28:28 sure
18:28:32 let's not retire it while inspur is showing interest in it, but if we end up in an unmaintained situation this cycle also, then we can retire it before we ask for a PTL next cycle ?
18:29:01 gmann: can we make it inactive before retiring?
18:29:02 We also should be explicit in the ML that they must review patches and ensure the gate state
18:29:18 rosmaita: sure, that is a good flow. we can do that
18:29:33 i finally found the doc about that: https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
18:29:45 noonedeadpunk: +1, I can mention it explicitly in the email as well as in their PTL appointment patch
18:29:55 rosmaita: :)
18:30:11 as otherwise they have a big risk of Antelope being their last release
18:30:24 yeah
18:30:26 so, inactive and then circling back to decide whether to release or retire at M-1?
18:30:50 you mean marking it inactive now ? or if their gate is broken and there is no maintenance from the PTL/team ?
18:31:00 I think we're monitoring, if they don't do a thing - then yes
18:31:17 yeah, let's not mark it inactive now, but we can monitor it closely
18:31:30 IIRC we should monitor it and mark it as inactive before M-2 if it will not be in good shape
18:32:03 yeah
18:32:16 how about we let them know they need to find a PTL before election time this cycle, or they go to inactive and the clock starts for retirement
18:33:03 I like that idea
18:33:04 rosmaita: yeah, many projects miss the PTL nomination deadline and show up after that
18:33:23 but yes, mentioning all those things to them is a good idea
18:33:34 But for an inactive project I think making sure they have someone on time shows at least some forward progression
18:33:36 It's not only about just having a PTL, but also about keeping the gate working and releases flowing.
18:34:07 yeah, and that is what is expected from someone volunteering for PTL
18:34:14 I feel like marking the project as inactive gives us a way to mark something as "being actively monitored"
18:34:16 to make sure the project is maintained
18:34:22 well, i figure the PTL will handle that stuff ... i think maybe inspur needs a fire lit under their butt to allocate time for the project
18:34:29 since exiting the inactive state requires fulfilling the criteria for being accepted as an official project.
18:34:50 "keeping the gate working, and releases flowing" +, and patches reviewed
18:35:04 knikolla: not as such, inactive is the state where a project has a failed gate, cannot merge patches, or so. monitoring is different when their gate is up
18:35:44 per the resolution introducing the emerging and inactive states: "inactive state for existing projects that are not so active anymore"
18:35:52 there's nothing there saying that a project must be hard-failing.
18:36:37 "For existing projects which became inactive due to lack of maintainers and are not able to do the mandatory activities, such as release, fix testing, review incoming code, etc., "
18:36:38 "are not able to do the mandatory activities, such as release, fix testing, review incoming code, etc., TC can consider marking such projects as inactive"
18:36:51 we have fixed their testing.
18:36:54 this is the entry criteria #link https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html#entry-criteria
18:37:09 yeah, it is not broken now. it is green
18:37:44 either we could have moved them to inactive before fixing the testing, or we need to wait to see if it happens again and no one is there to fix it
18:38:01 i'll drop the subject if i'm the only one thinking we should mark it as inactive now, rather than wait for later.
18:40:22 alright. then we'll keep an eye out for Sahara, and if something breaks or the situation changes we'll mark them as inactive during M-1 or M-2.
18:40:22 I think we should not mark it inactive now. But we can always vote :)
18:40:37 noonedeadpunk++
18:42:12 No it's alright, I was the only one pushing for it.
18:42:17 So gmann, can you respond to the PTL volunteer telling them to propose a patch to become Sahara PTL?
18:42:22 * slaweq needs to drop, I will read through the log later
18:42:23 o/
18:42:23 sure. I will explain all the work needed in the email reply, as well as asking them to propose a governance patch for the PTL assignment
18:42:35 knikolla: yeah
18:42:57 the next leaderless project, which we decided to retire, is Winstackers
18:43:00 #action gmann respond to Sahara PTL volunteer to propose a patch to governance, and explain the outcome of today's discussion
18:43:08 +1
18:43:27 there is a good question from tkajinam about Windows support in OpenStack if we retire this project #link https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033273.html
18:43:53 retirement means all that dependent functionality is also gone.
18:44:22 I am not sure if we need to find any alternative implementation for Windows support or try to find someone to maintain this project ?
18:44:33 ugh :/ I'm sure there are plenty of clouds that depend on Windows VMs as a use case.
18:44:35 hmm, I wonder if that's a dep of the hyperv driver in the case of nova?
18:44:42 also not sure if any company other than cloudbase was interested in Windows support in openstack
18:44:48 dansmith: yes
18:44:52 knikolla: not sure this has any effect (other than optics) on vm support
18:45:16 all the windows CI was run by cloudbase, is that correct?
18:45:17 gmann: there have definitely been people asking about windows support recently. For example with amd + windows being broken in nova
18:45:19 yeah, looks like just the hyperv driver
18:45:32 dansmith: need to check the hyperV deps, but not sure if that driver is maintained by anyone other than the cloudbase people
18:45:44 gmann: not sure it's really maintained at all, tbh
18:45:51 but never anyone other than them that I know of
18:45:52 yeah.
18:46:20 Yeah, I don't think it's about guests
18:46:24 Seems like something each project will need to look at and consider after consulting with their operator base, in the grand scheme of things.
18:46:43 last patch in 2020
18:47:00 hmm. also not sure what the state of the hyperV 3rd party CI is
18:47:16 non-existent AFAIK
18:47:39 although maybe it still runs on virt/hyperv, but I haven't noticed it in forever
18:47:48 i can see it here, but failing https://review.opendev.org/c/openstack/nova/+/852087
18:47:59 Cloudbase Nova Hyper-V CI
18:48:50 If it's been failing for a while, and this doesn't affect guest VMs in any way but only HyperV hypervisors, the situation is a bit different.
18:48:53 but yes, it has deps in many projects and it's a question of what they want to do with this Windows support/users
18:49:39 last time I saw their CI run was 23 weeks ago
18:51:14 So the path forward is: 1) ask operators about windows support 2) ask projects to remove os-win dependencies, or make them soft dependencies 3) retire winstackers?
18:51:51 I think they're soft deps already IIRC, at least for nova
18:52:29 yeah, the os-win deps need to be removed completely, but a project can find an alternative implementation if needed ?
18:52:33 oh I see this: https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031044.html
18:52:49 Is that the one from earlier?
18:52:50 no more ci, no more cloudbase working on openstack it seems
18:52:51 dansmith: yeah, they announced it in november, I also missed that
18:52:54 yeah me too
18:52:55 NM, november :)
18:53:07 I can't see a response to that statement from Lucian
18:53:27 so I guess that means nova should be working to remove hyperv from the tree
18:53:39 Yeah, I do remember it :)
18:53:55 jamespage: yeah, no response, and that is why I was thinking no one is interested in Windows support ? but at the same time we cannot say the openstack-discuss ML is the perfect place to get all the answers
18:54:06 dansmith: I think so
18:54:24 seems like the approach i outlined above still works, but with removing os-win entirely rather than just making it a soft dependency.
18:54:51 I am more concerned about the 1st one: how to reach out to operators and how much time we need to get those answers from them ?
18:54:55 does the last user survey have anything on this subject?
18:55:02 the openstack-discuss ML is not the perfect place to reach out to them
18:55:07 cern was using hyperv at one point
18:55:17 it was a long while ago, not sure if they still are or not
18:56:02 arne_wiebalck: ^
18:56:12 2022 - 2% of deploys on HyperV
18:56:55 oof
18:57:29 we can try to announce/notify it at the June event and then decide on retirement things. do not start retirement for now ?
18:57:59 Agree on not starting retirement now
18:58:06 and start the email announcement now, including on the openstack-announce ML
18:58:34 But we need to start introducing the subject through the mailing list, as that will affect operators and projects.
18:58:37 gmann++
18:58:40 yeah
18:59:05 we can try in all those places and make a final call at the June event. if nothing comes up then retirement is the only option
18:59:14 ++
18:59:17 sounds sensible
18:59:19 any volunteers to write the email?
18:59:58 I can do it then.
19:00:04 We're out of time, thanks all.
19:00:07 I'm happy to give that a go
19:00:15 jamespage: awesome, thanks :)
19:00:22 oh, we missed the doc things, which is important to fix
19:00:33 will ask for a pre-send review knikolla
19:00:41 #action jamespage to write email about Winstackers removal
19:00:48 :)
19:01:00 We can continue outside of the meeting gmann
19:01:04 #endmeeting