18:03:28 #startmeeting tc
18:03:28 Meeting started Tue Feb 27 18:03:28 2024 UTC and is due to finish in 60 minutes. The chair is JayF. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:03:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:03:28 The meeting name has been set to 'tc'
18:03:36 Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:03:36 Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee.
18:03:49 #topic Roll Call
18:03:53 o/
18:03:56 o/
18:04:00 \o
18:04:00 o/
18:04:06 o/
18:04:06 o/
18:04:54 #topic Follow up on tracked action items
18:05:01 I had 3 action items; I have done none of them.
18:05:19 #action JayF reach out to Tony and Elod about follow-up to unmaintained group email
18:05:32 #action JayF to email ML with a summary of gate RAM issues and do a general call for assistance
18:05:41 Actually, I did the third one
18:05:47 > Find something written about who volunteered to keep things in un-maintained status back to Victoria
18:06:03 I was unable to find anything in public ML archives with someone concretely volunteering to take things over back to Victoria.
18:06:11 I did find a thread where that was volunteered *in context of heat*
18:06:23 I'll take care of those two other actions as soon as the meeting is over here, sorry about that
18:06:25 o/
18:06:42 #topic Gate Health Check
18:06:50 How is the gate looking? I'll note for Ironic we've been pretty swell.
18:07:09 lots of rechecks in nova/glance land, AFAICT
18:07:20 what's swell in that context? I only know ocean swell
18:07:42 "pleasant"
18:07:47 not that bad in Neutron AFAIK
18:08:08 thx fungi
18:08:14 "swell" can be an adjective meaning "very good"
18:08:29 pleasant is pretty dead-on for getting the connotations
18:08:56 Going to move on; we addressed the gate pretty heavily last meeting and I still have the follow-up action from that
18:09:05 #topic Implementation of Unmaintained Branch Statuses
18:09:09 does anyone have an update on this? rosmaita?
18:09:15 there were some issues in stable branches regarding upgrades from yoga
18:09:33 i don't have anything new
18:09:42 and people are unclear whether they should be made to work or whether yoga should be treated as EOL in that context
18:09:56 they=grenade jobs
18:10:20 I support the latter option fwiw
18:10:20 For purposes of unmaintained branch policy, I'm pretty sure that's entirely up to the folks doing the work, yeah?
18:10:24 yeah, discussion going on in #link https://review.opendev.org/c/openstack/grenade/+/908826
18:10:41 I also do not think we need to do upgrade testing from unmaintained to any other maintained branches
18:10:45 well, the issue is that the job failures are affecting zed and 2023.1
18:11:11 we made it optional to test for EM, and for unmaintained we can either just remove it or keep it optional only
18:11:35 and then we have to discuss things like https://review.opendev.org/c/openstack/grenade/+/908826
18:11:43 frickler: that job is a yoga to zed upgrade, so my point is we do not need to test upgrades from unmaintained to maintained releases
18:12:00 ++
18:12:02 gmann: ++
18:12:13 And we have a pretty strong workaround if it ever breaks: upgrade to -eom
18:12:18 then upgrade to latest from UM
18:12:32 because until maintenance ended, we proved upgrades worked
18:12:42 and devstack/grenade yoga branches will also now be unmaintained, so unmaintained maintainers can work there if they need to keep it tested?
18:12:55 other than that, I can note that the yoga unmaintained transition was completed and elodilles is now preparing patches to move all other older branches back to victoria
18:13:12 Thank you for the work making this happen
18:13:18 frickler: for devstack/grenade also, or do we need to do something there?
18:13:55 I think there was one bug logged in devstack or tempest for this
18:13:58 gmann: I don't understand the question. the patches will be for all projects, like we had them for yoga
18:14:22 frickler: ok, I did not see unmaintained/yoga for devstack/grenade or its setup in the scripts.
18:14:27 will check later
18:14:41 well, the branch was created
18:15:21 I didn't see much work so far in general to actually make jobs on yoga work
18:16:00 so another question is how much time do we want to give people until we say: "this doesn't work, let's just EOL it"
18:16:01 ++, as long as we cut the branches for devstack/grenade then we are good. I also do not think I myself will spend time on setting up the tempest testing there
18:16:55 on time before EOL, maybe giving/opening it for a cycle will be enough?
18:17:22 it does no harm to keep the broken one for a cycle, and it will give enough time for people to step up
18:17:26 Yeah, I think the normal 6 month period is what to wait for
18:17:36 a cycle is a long time; the usual time to move zed to eom would be one month after the 2024.1 release
18:17:36 and then at the next evaluation, if CI is not passing, it gets EOL'd
18:17:54 which is in roughly two months
18:18:04 frickler: Zed is not SLURP, so it'd get an automatic unmaintained branch for 6 months AIUI
18:18:08 well, this is the first time we're doing this, i think we need to be flexible about the deadlines
18:18:20 a cycle will give enough time for people if they care, that is all I was thinking.
18:18:29 rosmaita++
18:18:30 for example, the normal timeline for stable/yoga -> unmaintained would have been November
18:18:47 it's also literally 2 days before C-3
18:18:56 many people (myself included) are trying to complete things before the end of the cycle
18:19:12 I suspect we'll get much better insight into how concerned people are about the CI for these UM branches once some of the deadline crunch has passed
18:19:58 ok, so let's wait and see
18:20:17 frickler: I'm going to send that gate email in my actions after this meeting; if you have a blurb you want me to add about UM branches, that might be an easy way to raise awareness
18:20:35 take the opportunity to shine light when we can and hope someone is looking :D
18:21:37 JayF: maybe just mention that now is the opportunity for people to show they care about keeping those branches alive
18:21:42 ++
18:21:46 #topic Testing runtime for 2024.2 release
18:21:51 #link https://review.opendev.org/c/openstack/governance/+/908862
18:22:15 gmann added this topic, but I'll note: we really need folks to participate here and vote; I'm not sure we're going to reach community consensus, especially around python versions
18:22:49 the thing I think is missing there,
18:22:59 is adding 24.04 to "best effort", right?
18:23:07 yes, that is one thing
18:23:13 it seemed like that was liked by multiple people for when it comes out
18:23:19 otherwise I think I'm good
18:23:21 I think that'd be an ideal change
18:23:21 my point is we should not add it when it is not yet released
18:23:27 so I was waiting for or expecting that respin
18:23:29 once it is released in April then we can add it
18:23:30 IMO it should not be added there. we didn't add py3.11 while it was non-voting either
18:23:54 gmann: so you want to amend the document once it's released?
18:24:14 Honestly not a bad approach: add 24.04 as "best effort to ensure smooth upgrade" once we're sure we can potentially do it
18:24:34 dansmith: we can do that at that time. but right now it seems not the correct way to add something that's not yet there
18:24:50 well, we should at least say that we anticipate adding it when available
18:24:55 seems like we've had pushback before when we change that late, so it seems most up-front to go ahead and stake our claim on it being a thing
18:24:55 how "smooth" will it be if it is best effort and some projects test it and others maybe not?
18:24:58 otherwise, it looks completely ad-hoc later
18:24:58 we could add 24.04 once devstack jobs are working, iff someone actually implements that in time
18:25:01 minimal surprises and all
18:25:24 dansmith: sure, in that case I will say to add it in the next cycle, 2025.1
18:25:29 rosmaita: In those cases, I do consider the gerrit conversations as some historical backing. We do have public logs of these meetings, too :)
18:25:33 rosmaita: yeah, we could add a new thing, we just already have "best effort", but however we make it clear is fine I guess
18:25:44 because 1. it is not yet released 2. not set up in infra/devstack
18:26:07 I am +1 to the idea that it'd be silly to not test against Ubuntu 24.04 once we have it available. I do not think many people outside of this meeting care about how we document that fact.
18:26:08 making that a target for next cycle seems difficult.
18:26:19 gmann: "best effort" means it could be zero if we don't have time or something major comes up that makes it hard, IMHO
18:26:25 ++
18:26:44 one upside to marking it best effort is it shows the effort would be appreciated
18:26:54 dansmith: sure, but adding something which does not exist yet makes me not comfortable :)
18:26:57 then people know they can push changes for it without being shot down because it doesn't match the doc
18:26:59 clarkb: that's a good perspective I hadn't considered
18:27:08 and I see less pushback on updating the runtime for best effort
18:27:34 in the interests of preserving my own downstream sanity I would like to put effort into making that happen - 24.04 is already consumable via the development release so I can start looking at that sooner rather than later
18:27:39 well, we do have a general statement somewhere that we do try to support the latest Ubuntu LTS release
18:27:39 as you mentioned, best effort is an optional thing, so we might not see pushback on this if we add it later
18:27:42 fwiw I'm actively trying to clear out old distro releases from nodepool to make room for new ones like ubuntu 24.04
18:27:54 so that will kind of kick in automatically once it is released
18:28:09 jamespage+++++ like I said in my gerrit comments; I'd rather you all be doing that work upstream than duplicating it downstream :D. I know many times support is not a choice, we're just choosing the venue.
18:28:28 frickler: I'm just kinda going on past history where people were, um, not pleased to see that changed mid-cycle
18:28:34 completely concur - holding patches downstream is 100% overhead
18:28:37 we've definitely added ubuntu lts versions while they were still rc in the past, so it's not impossible
18:28:56 best effort or adding non-voting python versions are the possible ways to add things in the middle of the cycle too, as advance preparation to make them mandatory in a future cycle
18:29:04 though i think openstack has avoided depending on them, we offered them more as a preview
18:29:20 as I said, do make patches in infra and devstack; once that works, we can add it to the testing runtime, but not earlier IMO
18:29:40 dansmith: but those were for changing our mandatory expectation of voting testing or so
18:29:49 gmann: yeah I know
18:29:53 doing best effort of nv python 3.12 should be ok
18:30:05 I am borderline -1 to saying best effort of python 3.12
18:30:10 frickler: ++
18:30:12 because we already know that is extremely unlikely to be supported
18:30:27 frickler++
18:30:35 see my note on Feb 23 here https://review.opendev.org/c/openstack/governance/+/908862/1/reference/runtimes/2024.2.rst#65
18:30:38 JayF: the problem is that we need to get the failures in front of people soon
18:30:44 yeah, that is why we are not adding it, right. same with 24.04 also, once it is released then we can add it
18:30:51 because things like taskflow are using removed-from-3.12 things currently AFAIK
18:31:02 dansmith: this fails in a way that is unlikely to fail in unit tests, and more likely to fail IRL
18:31:16 you could add a community goal to support py3.12 instead?
18:31:18 dansmith: but as long as we ensure that information is aggregated, I'm OKish with it
18:31:40 frickler: https://review.opendev.org/c/openstack/governance/+/902585 is mostly what this is
18:31:49 frickler: or at least, the largest identified prereq so far
18:32:04 24.04 is coming at the end of April or so, right? so once it is up and we have devstack patches/setup there, then we can add it? that might be early in the cycle, not so late.
18:32:19 dansmith: ^^ does that work for bringing failures early so that projects can fix them?
18:32:26 I honestly don't understand the concern here, so I guess I'll just sit on the side and wait
18:32:30 JayF: that's a subset where we don't know how much we don't know yet?
18:32:50 frickler: we do know that the sslutils module in oslo_service is 100% nonworking in python 3.12
18:32:58 frickler: because the entire ssl.wrap_socket() interface is gone
18:33:37 I did some patches somewhere else to rework code around that interface removal
18:33:40 ok, so if we know that things aren't working with py3.12, how can we even discuss making it required, if only as best-effort?
18:33:57 frickler: that's exactly what I'm trying to say ++
18:34:07 adding jobs that will just fail all the time makes no sense at all
18:34:17 or that will pass and give a false sense of security
18:34:25 yes
18:34:26 which is the more likely outcome given how sslutils is used
18:34:47 So I have a suggestion for getting past the deadlock
18:35:08 I think let's vote on the current change; please do -1 if you think 24.04 not being in best effort is blocking.
18:35:09 taskflow literally can't even be imported in 3.12 AFAIK
18:35:15 vote on gerrit, i mean
18:35:24 We should land the change as-is, then post followups: one by someone who cares about Ubuntu 24.04 best effort support, and one by someone who cares about Python 3.12 best effort support
18:35:33 and we can vote on those ideas separately
18:35:36 and if there are many objections then we can have another version with 24.04 as best effort and vote there
18:35:47 that's the easiest way for us to get to a completed document without mixing the matrix of concerns
18:36:05 that way we can progress on that. as next cycle setup is coming soon, we should be ready with the runtime
18:36:15 that's a little tilted in favor of the people who don't want that, right? :)
18:36:27 and these followups should IMO be backed by working jobs
18:36:30 how about, as rosmaita said, some sort of statement of "we're going to add 24.04 to this list mid-cycle, so just assume it's coming"
18:36:43 dansmith: That's a reasonable point; I'm just thinking from a "how can we get clean up/down votes" perspective
18:37:03 dansmith: well, we don't know when support will actually be implemented
18:37:12 so it may be mid-cycle or much later
18:37:20 I mean "in the cycle"
18:38:04 tbh, I don't know when we're going to start with 3.12 unit/functional jobs that fail obviously for people, if not as soon as we have a distro that can run them
18:38:04 but much later would likely make more sense to defer to 2025.1 then. so no need to announce now, is my reasoning
18:38:05 "24.04 testing as best effort if it is released and testing setup is completed before m-2 of the 2024.2 cycle"
18:38:06 "OpenStack plans to implement devstack support for Ubuntu 24.04 when released during the 2024.2 cycle. When this is complete, we will offer best efforts to support Ubuntu 24.04 with available time."
18:38:10 how about that ^^
18:38:12 dansmith: 24.04 is python 3.12?
18:38:20 dansmith: that's a detail I had missed until this moment
18:38:22 yes
18:38:24 it is
18:38:27 doing these things after m-2 is a little hectic
18:38:29 JayF: that's why I'm so interested in it being on the list, yes :)
18:38:33 not for any other reason really
18:38:33 oof, then yeah, there's no point in doing it for D
18:38:46 because we can't get python 3.12 working in D unless we think we can eventlet-migrate a large amount of things in one cycle
18:38:56 um, what?
18:39:21 Eventlet does not expose the interfaces, in python 3.12, for us to replace the oslo_service.sslutils module
18:39:31 because they have changed upstream
18:39:35 So eventlet will hold up the move unless folks can work on their migration faster
18:39:51 so you are thinking everyone is going to migrate off of eventlet before we can support 3.12? I have some bad news for you in that case...
18:39:55 spotz[m]: that's pretty much what I'm asserting, or at least, we have to partially migrate bits
18:40:35 dansmith: this is why I whipped up so much of a frenzy around starting the migration effort, when we ID'd this back in Nov 2023
18:40:37 Tim Burke proposed openstack/election master: Tim Burke candidacy for Swift PTL (Dalmation) https://review.opendev.org/c/openstack/election/+/910383
18:41:10 We have another important issue still on the agenda
18:41:20 JayF: AFAIK, the issue you're talking about is only for people exposing an eventlet-hosted SSL server socket, which is not everyone
18:41:23 our runtime deadline also says (somewhere in the doc, but I remember we have followed it since the start): define the things to test if they are available before the cycle starts, not after or in the middle once they become available. that is why ubuntu LTS releases are always picked up in the next cycle even when they are released a month after our cycle starts
18:41:41 and I thought the assertion was that eventlet was working to mitigate all these things
18:41:43 dansmith: sslutils is used for more than that, fwiw
18:41:58 for that reason I think adding 24.04 in the 2024.2 cycle is not good; instead consider it in 2025.1.
18:42:01 We made eventlet *install at all* on python 3.12, and fixed things like Queue and Event for python 3.12
18:42:42 I'm timeboxing this topic to :45 so we have time for Murano
18:43:13 again, my reasoning for 24.04 as best effort was to get the manageable 3.12 things to the forefront, like the fact that taskflow won't even import and thus a big chunk of glance won't either
18:43:27 It definitely seems we have eventlet things that need to be accomplished before the move to 3.12, and 24.04 needs to be bumped unless you can run 2 versions of python in it
18:43:50 well, Dan's point about making more people aware of how broken py3.12 is ... it's a good point
18:44:15 that's really my only reason for wanting 24.04 in there at all this close to its release
18:44:22 and really only for unit/functional at this point
18:44:31 anyway, I said I was going to shut up
18:44:56 I'm glad you didn't; that's a good point.
18:45:18 I'm closing this topic for now, moving on. I'm not sure what the specific action is, but I think it's unlikely we'll find general consensus and we may just need to majority-vote.
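(Context for the ssl.wrap_socket() discussion above: Python 3.12 removed the module-level ssl.wrap_socket() helper, deprecated since 3.7, which oslo_service's sslutils wrapped. The sketch below shows the general replacement pattern on the plain-Python side; the function name and certificate paths are illustrative, not the actual oslo_service code, and it does not cover the eventlet monkey-patched case the meeting is concerned with.)

    import socket
    import ssl

    def wrap_server_socket(sock: socket.socket) -> ssl.SSLSocket:
        # Before 3.12 this could be a single call to the module-level helper:
        #   ssl.wrap_socket(sock, certfile=..., keyfile=..., server_side=True)
        # Python 3.12 removed that function; the replacement is an explicit
        # SSLContext configured for the server side.
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        # Certificate paths are illustrative, not real oslo.service options.
        context.load_cert_chain(certfile="server.crt", keyfile="server.key")
        return context.wrap_socket(sock, server_side=True)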
18:45:32 #topic Murano Inactivity, and inclusion in the Caracal release
18:45:42 #link https://review.opendev.org/c/openstack/governance/+/908859/
18:46:13 So right now, we have Murano with strong consensus to mark as inactive
18:46:26 the only concern in the doc change (which I was doing before the murano inactive status) was stopping the release for a project moving to Inactive after m-2
18:46:31 However, myself and some other TC members felt we shouldn't release it, since we know it has a critical unfixed bug and no support
18:46:33 I modified the doc change now with "not to define any release criteria for project going to inactive after m-2 and let release team decide on those" #link https://review.opendev.org/c/openstack/governance/+/908880
18:47:40 true, i do not think we should release, but I agree with the argument about letting the release team handle it like they do/did for any other project with a broken gate/code
18:47:53 I am OK with that doc as changed... but was OK with it unchanged, too. I'm curious what frickler would think about that.
18:48:07 regarding the critical bug, we'll still have three "supported" releases containing it, so I don't think it will make much difference to have another one
18:48:15 I'll note that the mark-murano-inactive change does not appear stacked properly with the modify-timeline patch anymore
18:48:24 I didn't get to read the updates on the above patch yet
18:48:46 frickler: Well, we didn't know those releases were bugged when we pushed them out. I think that shifts our responsibilities here a little bit.
18:48:54 frickler: please check, and if that looks ok, i can rebase the Murano inactive status change
18:49:22 Every time we release an OpenStack project with known issues, or that's barely working, it hurts what "OpenStack" means for everyone :/
18:49:34 As we are not worried about the Murano release, I think we should stack it up on top of the doc change handling the case of marking inactive after m-2
18:49:49 we still don't know how critical that bug actually is or whether it is bogus
18:50:19 fungi: Have you taken the actions we discussed the other day? w/r/t opening that bug?
18:50:38 i've urged the new maintainers to consider switching it to public if they don't have time to fix it
18:50:57 frickler: I will say, it's my understanding it *is* a critical bug, and unlikely to be fixed ever.
18:51:48 so that would make the project dead on the spot?
18:52:07 the specific quote from andy was "With the lack of community participation in Murano, this may not get fixed. I hope to look into the code when I get time, but I'm not sure when that might be."
18:53:13 so is this critical enough that we should consider actually pulling all releases of murano?
18:53:33 I do not have access to the bug and haven't seen it. fungi may be the only person in the meeting who has
18:53:45 but if it's 1) bad enough to be nonpublic and 2) has no reasonable timeline to fix
18:53:51 that makes me feel like pulling releases is the right thing
18:54:02 i can say that people who have access to that bug indicated their production approach would probably be to disable a significant amount of its functionality
18:54:03 er, not releasing more
18:54:10 I don't know about *pulling* them from pypi when they are already out
18:54:33 JayF++
18:55:20 I agree with Jay here, any release of OpenStack that has a major bug in it hurts OpenStack.
We can't help what's already out there except putting a warning in it, or trying to find someone who can fix it and maybe backport it, which I think is currently its own issue
18:55:40 All this being said: we're past the second milestone, and unless I'm willing to do the work to keep it out of the release, I prefer to ultimately leave it up to the releases team, who are doing the work
18:56:16 as a release team member I would prefer that decision not to be delegated to me
18:56:28 if there are tc members who are interested in helping either 1. try to find a fix for murano, or 2. provide authoritative guidance to the murano maintainers on how to proceed with disclosure/release activities related to this, i'm happy to subscribe you
18:56:30 JayF: walked right into that one
18:56:43 frickler: what form should that be in? A separate resolution, if we don't want to change the overall policy?
18:56:54 but with this new information, I would support actually dropping it from the current release as a special case
18:57:11 I still think we should not change the general policy, yes
18:57:12 Do we agree that logistically that takes the form of a resolution?
18:57:22 If so, I'll take an action to write one up.
18:57:23 probably, yes
18:57:37 #action JayF to write resolution removing Murano from Caracal release
18:57:50 We have 3 minutes left; anything else on this?
18:57:56 I do not think we need to write a resolution for each case of such inactive projects. does not our current policy cover it?
18:58:08 i mean release policy/criteria?
18:58:33 not the "remove from release after m-2" part, I guess?
18:58:34 gmann: current policy is "inactive things can't release past m-2"; if the releases team doesn't want to change that policy, so the TC continues to be incentivized to evaluate activity earlier, it makes sense to special-case
18:58:57 I can understand the releases team not wanting to change a policy in a way that could make the TC decide, say just before the release, to pull something
18:59:05 we're 3 days before RC; we're pretty darn close to doing that now :D
18:59:05 that is, my doc change covers the case of going inactive after m-2 and lets the release team decide
18:59:19 > frickler> as a release team member I would prefer that decision not to be delegated to me
18:59:22 or do we want to change that so the TC needs to decide with a resolution or so?
18:59:24 I am trying to respect that request
18:59:30 hence the TC resolution, so we take the decision
18:59:31 I am confused, honestly :)
18:59:53 we do not want the TC to decide on releases of them, as proposed in the doc change, but need a resolution for the same?
19:00:11 We do not want to change the document for the general case. Murano is a special case: it's not just inactive, it's potentially insecure
19:00:27 I just want the "no release" decision to be murano-specific, not a general policy change
19:00:39 ok
19:00:44 And that's time
19:00:48 thanks everyone o/
19:00:49 #endmeeting