18:00:09 <knikolla> #startmeeting tc
18:00:09 <opendevmeet> Meeting started Tue Aug 15 18:00:09 2023 UTC and is due to finish in 60 minutes.  The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:09 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:09 <opendevmeet> The meeting name has been set to 'tc'
18:00:18 <knikolla> #topic Roll Call
18:00:19 <JayF> o/
18:00:22 <knikolla> o/
18:00:23 <knikolla> Hi all, welcome to the weekly meeting of the OpenStack Technical Committee
18:00:24 <spotz[m]> o/
18:00:28 <knikolla> A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct
18:00:28 <dansmith> o/
18:00:31 <knikolla> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:00:35 <gmann> o/
18:00:35 <knikolla> We have one noted absence, slaweq
18:01:11 <noonedeadpunk> o/
18:02:06 <rosmaita> o/
18:02:32 <knikolla> #topic Follow up on past action items
18:02:38 <knikolla> We have one action item from the last meeting
18:02:42 <knikolla> rosmaita to review guidelines patch and poke at automating it
18:03:02 <gmann> I think guidelines change is merged now
18:03:06 <rosmaita> yes
18:03:15 <rosmaita> no action on automating, though
18:03:25 <knikolla> ack, thanks rosmaita
18:03:57 <knikolla> #topic Gate health check
18:04:02 <knikolla> Any updates on the state of the gate?
18:04:34 <fungi> source of the occasional enospc failures was tracked down to missing data disks on the executors (we overlooked giving them dedicated storage when we replaced them in early july)
18:04:49 <fungi> should be fixed as of about 20 minutes ago
18:04:54 <dansmith> steadily getting better but still not "good enough" IMHO. We're merging things, which is good, but we're still hitting plenty of fails
18:05:20 <dansmith> I just got something that is months old to merge after 28 rechecks
18:05:50 <rosmaita> i have occasionally seen jobs pass
18:06:00 <rosmaita> just not all at once
18:06:10 * fungi sees lots of jobs passing, but yes there are also lots of jobs
18:06:37 <gmann> I have seen improvement in the tempest and nova gates at least, but did not check cinder
18:07:07 <gmann> many improvement changes merged in the last month or so are helping for sure, but yes, the gate is not 100% stable
18:07:13 <fungi> there's a nova change in the gate right now running 24 jobs. if those jobs average a 1% failure rate then that's basically a 50% chance that the change can make it through check and gate in one go
18:07:35 <dansmith> fungi: yeah
18:07:58 <dansmith> one I'm watching right now has a legit timeout in tempest and a volume test failure in one run of one patch
18:08:02 <fungi> 2% average failure rate means nothing merges
18:08:31 <dansmith> (i.e. two failing jobs on one patch which doesn't actually change anything)
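[editor's note: the arithmetic behind fungi's estimate can be sketched as below; this is a back-of-the-envelope model that assumes job failures are independent and that a change must pass every job in both the check and gate pipelines in one go.]

```python
# Sketch of the merge-probability math discussed above, assuming
# independent job failures and that a change must pass all jobs in
# both the check and gate pipelines without a recheck.

def merge_probability(num_jobs: int, failure_rate: float, pipelines: int = 2) -> float:
    """Chance a change passes every job in every pipeline in one go."""
    pass_one_pipeline = (1.0 - failure_rate) ** num_jobs
    return pass_one_pipeline ** pipelines

# 24 jobs at a 1% average failure rate: roughly the coin-flip odds
# fungi describes.
print(round(merge_probability(24, 0.01), 2))  # 0.62
# At a 2% failure rate the odds collapse, hence "nothing merges".
print(round(merge_probability(24, 0.02), 2))  # 0.38
```

The 2% case is actually worse than the raw number suggests, since gate resets also restart every other change behind the failure in the queue.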
18:08:47 <gmann> the rebuild-server test for volume-backed servers was also failing many times; it is now refactored, so let's see if that helps or not
18:08:47 <knikolla> ugh :/
18:08:52 <dansmith> so yeah, it's better but we're still headed for trouble come m3 I think
18:09:16 <gmann> yeah
18:09:30 <gmann> it's not going to be smooth for sure, but at least somewhat better
18:09:41 <dansmith> yes, improved for sure, just not enough
18:09:48 <fungi> looks like we've also got a bunch of leaked servers stuck "deleting" in rackspace too, which is a big dent in our available quota. need to find time to ask them to clean those up
18:09:48 <gmann> we have seen times during the last month where hardly anything merged and everything was stuck
18:09:56 <knikolla> understood. let me know if there's something i can do to help. I have some extra free cycles this week and the next.
18:10:04 <dansmith> three weeks ago we were at "no point in trying"
18:10:13 <gmann> yeah
18:10:23 <dansmith> knikolla: I think you were going to look at keystone db queries and caching right?
18:10:39 <fungi> also image uploads started failing for rackspace's iad region at the end of last month, so images there are quite stale which means more time jobs spend updating packages and git repos too
18:10:56 <dansmith> I know neutron is working on that (i.e. slaweq) both of which will help IO performance, which is a huge bottleneck
18:11:13 <dansmith> fungi: ah good to know
18:11:30 <knikolla> dansmith: I hadn't said i would yet, but i can prioritize that now.
18:11:31 <noonedeadpunk> well, talking about keystone, it blocks our upgrade jobs for stable branches, which are all red due to a series of regressions... So performance is IMO not the biggest issue right now... Or well, depending on who you ask obviously
18:11:47 <noonedeadpunk> Though patches are proposed, so this should be solved soonish
18:12:34 <knikolla> noonedeadpunk: roger, i'll review the backports. i think i already reviewed the patch to master, or are there others?
18:13:05 <dansmith> knikolla: okay I thought you did, but yeah, would be helpful
18:13:50 <noonedeadpunk> knikolla: yup, just pushed a fix for another one https://review.opendev.org/c/openstack/keystone/+/891521
18:14:29 <knikolla> noonedeadpunk: awesome, fresh off the press. will review it after the meeting. thanks for proposing it!
18:14:32 <noonedeadpunk> but I don't have a test scenario for that... Or well, I'm not skilled enough to make one up in the given time
18:15:01 <knikolla> I'll see if i can think of something to suggest re: testing
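[editor's note: for context on the keystone caching mentioned above, keystone's cache layer is configured via oslo.cache. A minimal illustrative `keystone.conf` fragment follows; the backend and server values are placeholders, not settings recommended in this discussion.]

```ini
# Illustrative keystone.conf fragment enabling oslo.cache-backed
# caching; backend and server values are placeholders.
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = localhost:11211
```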
18:15:31 <gmann> one thing to mention about python 3.11 version testing.  this is tox job change #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891146/1
18:15:41 <gmann> as this is proposed to run on debian bookworm, it needs fixes in bindep.txt for many/all projects
18:16:29 <gmann> I am adding it as a non-voting job in this cycle so that projects will get time (at least 2-3 months) to fix it before the next cycle, where we can make it mandatory #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891227/2
18:16:56 <gmann> a few projects like cinder and neutron already run py3.11 testing, on ubuntu jammy
18:17:33 <noonedeadpunk> ++ sounds good
18:17:45 <gmann> adding it as non-voting is important; otherwise, because of our general job template model, in the next cycle it can break the gate for projects that have not fixed it.
18:18:21 <knikolla> ++
18:18:21 <fungi> also pep 693 says python 3.12.0 is due out two days before we release bobcat, so expect ubuntu to have packaged it by mid-cycle
18:18:58 <knikolla> do we know if 3.12 brings any possible problematic changes? i haven't looked at the changelog yet
18:19:01 <gmann> cool, we can consider adding that as non-voting once it is available and think of adding it as mandatory in the next cycle after its release
18:19:03 <fungi> if so, non-voting 3.12 jobs might be worth adding at that point
18:19:22 <fungi> 3.12 deprecated some more stuff out of the stdlib
18:19:28 <gmann> 3.12 or 3.11 ?
18:19:31 <fungi> the "dead batteries" pep
18:19:44 <fungi> er, deleted already deprecated stuff i mean
18:19:55 <fungi> but i think most of that is due to happen in 3.13
18:20:25 <knikolla> 3.12, given that we'll have 3.11 testing in place so i'm less concerned about that, and i was curious about 3.12.
18:20:41 <fungi> no, i'm wrong, all the deletions for pep 594 happened in 3.11
18:20:56 <fungi> scratch that, i was right
18:20:57 <gmann> we can go the same way there: add it as non-voting in the next cycle and see how it behaves
18:21:05 <fungi> deprecated in 3.11, deleted in 3.13 mostly
18:21:13 <rosmaita> gmann: dyk how big a bindep change is required for bookworm py 3.11 ?
18:21:26 <fungi> asynchat and asyncore go away in 3.12
18:21:46 <gmann> rosmaita: not big, this is the nova example and I think it's the same for other projects too #link https://review.opendev.org/c/openstack/nova/+/891256
18:21:47 <fungi> #link https://peps.python.org/pep-0594/ Removing dead batteries from the standard library
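[editor's note: a quick way to see fungi's point about which PEP 594 "dead battery" modules are still present on a given interpreter. The module lists below are taken from PEP 594: asynchat, asyncore, and smtpd were removed in 3.12, the rest in 3.13; some (e.g. msilib, nis) are platform-specific and absent on many systems anyway.]

```python
# Probe which PEP 594 "dead battery" modules the running interpreter
# still ships. Per PEP 594, asynchat/asyncore/smtpd were removed in
# 3.12 and the remaining deprecated modules in 3.13.
import importlib.util
import sys

REMOVED_IN_312 = ["asynchat", "asyncore", "smtpd"]
REMOVED_IN_313 = [
    "aifc", "audioop", "cgi", "cgitb", "chunk", "crypt", "imghdr",
    "mailcap", "msilib", "nis", "nntplib", "ossaudiodev", "pipes",
    "sndhdr", "spwd", "sunau", "telnetlib", "uu", "xdrlib",
]

def still_present(modules):
    """Return the subset of modules that can still be located."""
    return [m for m in modules if importlib.util.find_spec(m) is not None]

# Output depends on the interpreter version and platform.
print(sys.version_info[:2], "still has:",
      still_present(REMOVED_IN_312 + REMOVED_IN_313))
```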
18:21:53 <gmann> with that nova job passing
18:22:02 <gmann> #link https://review.opendev.org/c/openstack/nova/+/891228/3
18:22:07 <clarkb> rosmaita: it's usually updating libffi versions and things like that that have hardcoded versions in the package name
18:22:19 <gmann> yeah
18:22:46 <gmann> libmysqlclient-dev is not present in debian bookworm, and there might be a few more
18:22:57 <clarkb> no more python2 on bookworm
18:23:03 <clarkb> is another likely source of trouble
18:23:13 <gmann> anyways that is the plan for py3.11, which needs changes in many/all repos maybe
18:23:16 <gmann> that is all from me on this
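[editor's note: as an illustration of the bindep.txt fixes discussed above, bindep's platform profiles can select different packages per distro. The fragment below is a hypothetical sketch only; the package names and profile usage are assumptions, and the real changes are in the linked nova review.]

```
# Hypothetical bindep.txt fragment: debian bookworm dropped
# libmysqlclient-dev in favour of the MariaDB-backed
# default-libmysqlclient-dev, so select the package per platform.
libmysqlclient-dev [platform:dpkg !platform:debian-bookworm]
default-libmysqlclient-dev [platform:debian-bookworm]
```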
18:24:10 <knikolla> thanks gmann
18:24:32 <noonedeadpunk> there's no py2 on ubuntu jammy either, so not sure this will be an issue
18:24:40 <knikolla> #topic Testing runtime for 2024.1 release
18:24:45 <knikolla> on that topic, since we're already talking about it
18:24:55 <knikolla> #link https://review.opendev.org/c/openstack/governance/+/891225
18:24:59 <gmann> yeah, i described the changes in commit msg #link https://review.opendev.org/c/openstack/governance/+/891225/2//COMMIT_MSG#9
18:25:25 <gmann> main changes are on the debian side, adding debian 12 bookworm but also keeping debian 11 because it was supported in our previous SLURP release
18:26:16 <gmann> with debian 12, py3.11 testing comes in as mandatory, but no removal of any python versions which are supported currently. this is what was changed explicitly in our PTI this cycle
18:26:44 <gmann> so min version of python to test is 3.8
18:27:29 <fungi> so it's mostly a question of how many versions in between those we also explicitly require testing for
18:27:35 <gmann> please review and let me know your feedback. also preparing the job template #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891238/1
18:27:44 <gmann> fungi: yeah that is good question
18:28:04 <fungi> as currently written it says to run 3.8, 3.10 and 3.11, skipping 3.9
18:28:04 <knikolla> it does sound a bit weird to me that we're saying we support 3.9 in the testing runtime, while not testing for it, despite the minor differences. have we done something like that before?
18:28:31 <fungi> the rationale for skipping 3.9 could also be applied to 3.10
18:28:33 <gmann> as proposed in the job template, I think testing py3.8, py3.10, and py3.11 is enough. skipping py3.9 because things working in py3.8 and py3.10 will work for py3.9 also
18:28:42 <clarkb> knikolla: I'm a proponent for testing only the bookends if you aren't testing platform differences (libvirt, etc)
18:29:03 <gmann> knikolla: we do test; many projects explicitly have a job for that, and we can add it as periodic
18:29:13 <clarkb> this has worked really well for zuul and reduces test runtimes, chances for random failures, and resource needs. Zuul hasn't had problems doing that
18:29:26 <gmann> My proposal is to add py3.9 as periodic and not run it every time in the check/gate pipelines
18:29:30 <fungi> yes, the argument is that if what you're testing is "does this work for the given python minor versions?" then odds are anything that passes on both 3.8 and 3.10 will work for 3.9
18:29:37 <gmann> that should be enough to cover the testing of py3.9
18:29:42 <dansmith> gmann: sounds fine to me
18:29:55 <dansmith> gmann: honestly, we could be doing that for 3.8 and 3.9 right now
18:30:10 <fungi> also projects concerned about it can still choose to run 3.9 jobs, the pti just won't require them
18:30:33 <gmann> yes. nova has functional job running on py3.9 too
18:30:37 <knikolla> I don't have a strong opinion, if we can save resources the better. Just wanted to ask :)
18:31:47 <noonedeadpunk> but I think we've added "appreciation" to the PTI to cover more versions than minimally required by the PTI
18:32:06 <gmann> dansmith: I would like to keep py3.8 as a check/gate job as it is the min version, and it's good to keep eyes on whether anyone drops its support.
18:32:13 <gmann> noonedeadpunk: yes
18:32:37 <dansmith> gmann: it's just very unlikely and finding something later in periodic would not be hard to recover from
18:32:50 <knikolla> makes sense
18:33:05 <dansmith> obviously running everything on every patch is *ideal* but just not necessary
18:33:07 <gmann> but it still makes other projects' repos break for that time
18:33:42 <gmann> we can move py3.10 along with py3.9 as periodic? testing min and max, py3.8 and py3.11, on every change
18:33:51 <noonedeadpunk> I can recall a solid reason to drop 3.8 support from Ansible... As they were able to drop some quite old things to speed up execution a lot... But can't really recall details...
18:34:04 <noonedeadpunk> Anyway I think keeping 3.8 is reasonable at min
18:35:05 <fungi> there's been lots. the "faster cpython" effort has been making steady performance improvements
18:35:22 <clarkb> particularly beginning with 3.10
18:35:25 <gmann> also, many projects not seeing a py3.8 job in check/gate will think it's going away or already dropped. I think testing it on every change makes sense
18:36:19 <JayF> gmann++
18:37:09 * noonedeadpunk looking forward to noGIL
18:38:12 <knikolla> anything else on the topic?
18:38:16 <gmann> anyways it's there in the template, please review and add your feedback. that template change needs to wait for this cycle's release, so we have time, but the governance change we can review and merge when it is ready
18:38:25 <gmann> this is template change #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891238/1
18:38:46 <gmann> that is all from me on this
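[editor's note: the periodic-vs-check split gmann describes could look roughly like the Zuul project-template below. This is a hypothetical sketch only; the template name is invented and the real change is in the linked openstack-zuul-jobs review.]

```yaml
# Hypothetical Zuul project-template sketch: bookend python versions
# run on every change, py3.9 relegated to the periodic pipeline.
- project-template:
    name: openstack-python3-jobs-sketch
    check:
      jobs:
        - openstack-tox-py38
        - openstack-tox-py310
        - openstack-tox-py311
    gate:
      jobs:
        - openstack-tox-py38
        - openstack-tox-py310
        - openstack-tox-py311
    periodic:
      jobs:
        - openstack-tox-py39
```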
18:39:27 <knikolla> #topic User survey question updates by Aug 18
18:39:31 <knikolla> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034589.html
18:39:47 <knikolla> A reminder that the deadline for proposing changes to the survey that will be sent out is this Friday.
18:40:20 <knikolla> Are there any questions that we as the TC can pose that you'd like to propose?
18:42:02 <knikolla> I'll take the silence as a no, we can discuss outside the meeting if anything comes to mind.
18:42:21 <spotz[m]> Yeah I can't think of anything TC specific
18:42:46 <fungi> it's not just about proposing tc-specific questions, but also making sure the questions currently in there make sense
18:42:49 <knikolla> #topic Reviews and Open Discussion
18:43:08 <fungi> i went through, for example, and recommended some updates to questions which listed projects but were missing some more recent additions
18:44:42 <gmann> There are a couple of things/updates.
18:44:54 <gmann> elod pinged the TC about reaching out to the murano/solum PTL for stable/rocky EOL. I also reached out to them, and they responded to my changes and to the release changes also.
18:44:58 <gmann> so we are good on these projects and the PTL is responding to things.
18:46:02 <gmann> other is about the election and any project changing their leadership model. tonyb asked about it, and as election nominations are going to open tomorrow, we will not have any change in project leadership models for this cycle. if any application comes, we need to postpone it to the next cycle
18:46:11 <gmann> #link https://governance.openstack.org/election/
18:46:25 <gmann> these are two updates I wanted to share
18:46:44 <knikolla> thanks gmann!
18:49:16 <noonedeadpunk> But does this still apply if there's no PTL for the project, but there are volunteers for distributed leadership?
18:49:46 <knikolla> It'll be up to us to decide, IIRC.
18:49:53 <gmann> noonedeadpunk: that we can handle as leaderless project and then change to DPL model during PTL assignment task
18:49:57 <noonedeadpunk> As I thought it's possible to change the model in such case?
18:50:16 <gmann> yes, but not during the election, which creates confusion
18:50:23 <JayF> basically it's locking them into having a PTL election, not into having a PTL (if nobody runs in the election, then we can change the model if needed)
18:50:27 <noonedeadpunk> ah, ok, gotcha your point now
18:50:31 <gmann> we have a deadline of doing it before election nominations start, or after the election
18:50:43 <noonedeadpunk> ok, yes, makes total sense
18:50:49 <fungi> if projects want to propose to switch to dpl there is a deadline for that. if the tc wants to switch a project to dpl they can do it at any time they want (just ought to avoid disrupting an ongoing election process)
18:51:04 <noonedeadpunk> ++
18:51:10 <gmann> JayF: yeah
18:53:04 <knikolla> alright. if there's nothing else. thanks all!
18:53:07 <knikolla> have a great rest of the week.
18:53:09 <knikolla> #endmeeting