Tuesday, 2023-08-22

opendevreviewMerged openstack/election master: Add Dan Smith candidacy for TC  https://review.opendev.org/c/openstack/election/+/89214500:32
opendevreviewMerged openstack/election master: Jay Faulkner (JayF) TC candidacy for 2024.1  https://review.opendev.org/c/openstack/election/+/89214800:32
opendevreviewMerged openstack/election master: Add Dmitriy Rabotyagov Candidacy for Vitrage  https://review.opendev.org/c/openstack/election/+/89215700:32
opendevreviewMerged openstack/governance master: Add aodh, ceilometer & gnocchi k8s charms  https://review.opendev.org/c/openstack/governance/+/89092201:54
*** d34dh0r5- is now known as d34dh0r5313:09
knikollao/13:31
knikollaI added +W on the Unmaintained resolution. Thanks all for the insightful and productive discussions and feedback around it. I learned a lot in the process of writing and shepherding it through.13:34
opendevreviewMerged openstack/governance master: Unmaintained status replaces Extended Maintenance  https://review.opendev.org/c/openstack/governance/+/88877113:37
* fungi cheers13:39
slaweqknikolla hi, can You check https://53ec660a16b30e470118-779b81139f4f29276caf956abf2a020f.ssl.cf2.rackcdn.com/890939/3/gate/neutron-ovs-grenade-dvr-multinode/f868b9c/controller/logs/grenade.sh_log.txt when You have a few minutes? It seems like it may be related to keystone - is it something You have already seen or is it a new thing?15:34
slaweqthx in advance for help15:34
knikollaNot familiar with it, but I’ll check more closely after the meeting.17:14
knikollatc-members: reminder, meeting in ~38 minutes.17:22
opendevreviewMerged openstack/election master: Add Ghanshyam Mann candidacy for TC  https://review.opendev.org/c/openstack/election/+/89217817:30
slaweqknikolla thx17:59
knikolla#startmeeting tc17:59
opendevmeetMeeting started Tue Aug 22 17:59:52 2023 UTC and is due to finish in 60 minutes.  The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.17:59
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:59
opendevmeetThe meeting name has been set to 'tc'17:59
knikolla#topic Roll Call17:59
noonedeadpunko/17:59
knikollaHi all, welcome to the weekly meeting of the OpenStack Technical Committee18:00
dansmitho/18:00
knikollaA reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct 18:00
jamespageo/18:00
noonedeadpunkyou're early... _again_18:00
knikollaToday's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee18:00
gmanno/18:00
noonedeadpunk:D18:00
knikollaWe have no noted absences.18:00
knikollao/18:00
rosmaitao/18:00
JayFo/18:00
knikollaI try to get the copy pasting done by 2.00pm :) 18:00
slaweqo/18:00
knikolla#topic Follow up on past action items18:01
knikollaNo items noted as to follow up from the previous meeting. 18:01
knikolla#topic Gate health check18:02
knikollaAny updates on the state of the gate?18:02
dansmithdefinitely steady improvement18:02
knikollaI know we haven’t been in a good place the past few weeks. 18:02
gmannit is much better this week18:02
dansmithstill seeing lots of volume-related failures as the largest single area by my estimate18:03
knikollaAwesome! Great to hear of the improvement! 18:03
dansmithI know some work is going on in that realm, which is good to see18:03
knikollaI’ve also started poking at Keystone database metrics and will hopefully have something to report back by next week. 18:03
dansmithah, was just about to ask :)18:03
dansmiththanks18:03
slaweqI'm also looking at neutron db still18:03
dansmith++18:04
gmann+118:04
fungiwe've reclaimed the full capacity of our rackspace quotas, which has helped tremendously18:04
knikollateamwork makes the gate work (i’ll see myself out)18:04
slaweqI spoke with ralonsoh and we identified what is our top 1 query which we are trying to change now18:04
fungii was watching over 600 builds running concurrently earlier today18:04
dansmithjust in time18:04
spotz[m]o/18:05
dansmithI guess I can add one other thing:18:05
fungiprojects with broken job configurations are putting a strain on the collaboratory sysadmins though, we were just discussing in #opendev that we probably need to take a harder line on that18:05
gmannslaweq:  there might be many unnecessary or duplicate network creations happening in tests, feel free to ping me if there are. we just create a network resource for every test's creds 18:05
gmannbut I am sure it can be optimized more18:06
dansmithin recent weeks I talked about improving actual performance to help with gate performance, related to things like db queries in those projects18:06
fungibasically give a cutoff date where if projects haven't deleted branches with broken configuration we'll remove them from the tenant config and stop testing changes for all of their branches18:06
dansmithand I've since put my money where my fingers are and have been squashing a bunch of nova lazy loads that have crept in over the years, which are unnecessary and should be avoided18:06
slaweqgmann so far I was rather looking at it from the neutron server PoV and trying to understand and hopefully optimize how it works18:06
dansmithwhich will reduce database queries, load, and improve responsiveness in general18:06
slaweqbut later I may also try to look at e.g. tempest18:06
knikollacleaning up branches through implementing unmaintained should hopefully help with that as well18:07
gmannslaweq: cool, thanks18:07
gmannwell, unmaintained branches can also have the same issue of job config errors18:07
JayFGenerally it looks like most of the offenders are the oldest branches though18:07
gmannI think that will add more errors when one project deletes a branch and another project is using it? maybe18:07
JayFit should at least take a chunk out and leave meaningful breaks18:08
fungiknikolla: i don't think the unmaintained plan directly solves broken job configs since it means projects will still by default have broken configuration on their abandoned branches for a year potentially before they're culled for lack of interest18:08
knikollayes, but fewer branches would qualify to even be there in the first place18:08
knikollahence fewer would be broken. not saying it would solve it, just help :)18:08
fungialso there are plenty of repos with broken configuration on maintained branches too18:08
gmannfungi: if I understand your proposal correctly: if stable/rocky of nova has any zuul config error and it is not fixed in time, then opendev will remove testing of all nova changes?18:08
gmannor just the stable/rocky one18:09
knikollamaintained branches, those are more concerning and we should take a harder line. especially in this election cycle. 18:09
JayFgmann: yes, all of nova changes18:09
JayFgmann: based on chatter in #opendev18:09
gmannhumm. not sure if that is a good idea18:09
fungipoint being, broken configuration needs to be cleaned up. opendev sysadmins aren't going to bespoke delete random branches, we don't have time for that. openstack can eol and delete branches with broken job configs or the opendev sysadmins can remove those repositories from the zuul config18:09
JayFPretty good stick to get us to prioritize fixing these errors18:09
JayFand they've let us know about them for a while18:09
gmannhaving our unmaintained branches, which we do not maintain ourselves, also in that list18:09
fungior someone can fix the broken configs, but even the fixes for those in many cases sit unreviewed and unmerged18:10
gmannhow about unmaintained branches? are those also covered by this stick?18:10
rosmaitafungi: when you say "broken job configs", what do you mean exactly?18:10
fungirosmaita: i mean anything that shows up in https://zuul.opendev.org/t/openstack/config-errors18:10
rosmaitathanks18:11
fungibasically we'd just remove all those repositories for you18:11
dansmithfungi: but no results in that list for nova?18:11
gmannwe started this a while back but did not finish it https://etherpad.opendev.org/p/zuul-config-error-openstack18:11
dansmithI'm not sure what the nova connection was18:11
fungidansmith: there was no nova connection, this was about gate health more generally18:11
gmannfungi: you are considering unmaintained branches also in that list? for example, if unmaintained/train has some error then you stop testing all nova branches?18:11
dansmithfungi: okay JayF said "all nova changes"18:12
JayFdansmith: was answering gmann's question about a hypothetical18:12
spotz[m]And it was an example is how it read to me18:12
gmannconsidering unmaintained/train is not maintained by upstream team18:12
fungigmann: zuul reads configuration from all branches of every repository18:12
gmanndansmith: I am taking nova as an example to understand the new rule18:12
dansmithokay18:12
fricklerfungi: can we override that to specific branches only?18:12
gmannok so this is an issue then. we cannot commit to fixing the unmaintained branches, right18:12
fungiwe could maybe find a way to tell zuul to no longer read configuration from (and no longer test changes for) "unmaintained/.*" branches if that's what you're asking18:12
clarkbfrickler: I don't think so. It's come up that doing so might be a good zuul feature but no one has implemented it as far as I know18:13
gmannconsidering only supported branches in that stick rule would make sense18:13
clarkbyou can tell zuul to ignore taking action on specific branches but not ignore the config on branches18:13
JayFI don't think fungi is proposing a rule for governance reasons; I think he's reflecting a technical pain that these errors cause18:13
fungigmann: if projects have unmaintained branches with broken zuul configuration in them, they could delete those branches in order that their maintained branches continue to be tested18:14
JayFI don't know exactly what that pain is, but it's reasonable to ask us to fix them18:14
gmannfungi: yeah, that would be great, so that any job issue from there does not make that project a defaulter for not fixing it18:14
JayFI would also ask that zuul avoid breaking config changes in the future, too, because that was painful :)18:14
spotz[m]Unless Zuul can be configured18:14
gmannwell, we want to keep unmaintained branches open and not maintained by ourselves, so not sure how soon those zuul configs can be fixed by an external maintainer18:14
JayFgmann: it's already an implicit requirement in practice that we follow opendev requirements in CI18:15
clarkba simple fix is for openstack to just delete the zuul config in those branches18:15
knikollaat least if it doesn’t get fixed it goes away in the next cycle, rather than linger as a zombie forever. 18:15
clarkbyou don't have to maintain it, you have to make the error go away. It's a slightly different need18:15
JayFgmann: I would suggest that would extend to unmaintained/ and we'd retire a branch if it was a recurrent issue18:15
fungiJayF: the main one which is a problem at the moment was deprecated and announced over a year in advance of the backward-incompatible change to configuration parsing going into effect18:15
gmannJayF: we do not want to control/maintain the unmaintained CI or anything, right?18:15
JayFgmann: I personally don't, no, but the policy we just landed gives the PTL, and by extension the TC, power around delegating that18:16
JayFgmann: this makes it clear that a condition of that delegation is 'keep zuul configured properly'18:16
gmannon one hand we want unmaintained branches to be open with some testing for external maintainers, and at the same time we are putting a hard expectation on their maintenance. almost the same as supported branches 18:17
JayFNobody would prevent those unmaintained branches from merging a `git rm -r zuul.d/`18:17
JayFso it's a self-imposed requirement that can be removed18:17
knikollaIt’s a different level of testing. As JayF mentioned. 18:17
JayFthe only concern we have is that our infrastructure remains happy, and fungi is reflecting we aren't doing a good job of that right now18:17
gmannbut did we mention that expectation in the resolution? i feel it comes across as a bit too hard a stick for them18:18
fungiall those entries you see in the config-errors list that say "extra keys not allowed @ data['gate']['queue']" have been broken that way for years now, and there was a warning that it would break a year before it did18:18
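[editor's note: for readers unfamiliar with that error, here is a minimal sketch of the kind of fix being discussed. It reflects the Zuul change that moved `queue` from the pipeline section to the project level of a repo's .zuul.yaml; the queue name and job name below are illustrative only, not taken from any particular repository.]

```yaml
# Old form, now rejected by Zuul with
# "extra keys not allowed @ data['gate']['queue']":
- project:
    gate:
      queue: integrated          # queue inside a pipeline stanza is no longer valid
      jobs:
        - openstack-tox-py310    # example job name

# Fixed form, with queue declared at the project level:
- project:
    queue: integrated
    gate:
      jobs:
        - openstack-tox-py310
```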
knikolla" At a minimum this needs to contain all integrated jobs, unit tests, pep8,18:18
knikolla  and functional testing.”18:18
gmannI am fine with the expectation to keep them green, but stopping project testing based on unmaintained branches' state does not look good to me18:19
knikolla“The CI for all branches must be in good standing at the time of opt-in.”18:19
knikollaSo keeping Zuul configured is part of the reqs. 18:19
fungiremember, those branches are already not being tested because zuul can't parse their configuration, so you're not going to make that any worse by removing the configuration in them18:19
fungibut also, nothing has merged to those branches, because (again) zuul can't parse their job configuration18:20
dansmiththere really aren't that many projects here that are in this boat,18:20
gmannI get the requirement, my concern is that if unmaintained branches go bad then it adds the risk of stopping testing of supported branches.18:20
dansmithor at least most of them are in a few projects18:20
fricklerthat's because I did a lot of cleanup already18:20
gmannIMO, it should be "stop testing the branch that has the zuul config error"18:20
JayFdansmith: yep, Ironic is a big offender and believe it or not that's after we already resolved literally dozens of them18:20
dansmithso we might be able to get a bunch of these resolved by pointing some people there and seeing if there's any objection18:20
fungithere were tons more, frickler has done an amazing job trying to push to clean them up18:20
fricklerhttps://review.opendev.org/q/topic:zuul-config-errors18:21
gmanninstead of "stop testing projects all brnaches if any branch have error"18:21
dansmithI wasn't trying to minimize anything :)18:21
fricklerand some other topics, I mixed them up a bit18:21
knikollathe status of unmaintained should have zero impact whatsoever on stable branches. 18:21
gmannexactly 18:22
gmannbut with this proposal it seems like it has direct impact.18:22
knikollaWhether the config of those works or not. If it doesn’t, the branch gets deleted rather than removing the project from zuul.18:22
gmannit should be like: if there is a zuul config error in an unmaintained/xyz branch and it is not fixed before the deadline, then stop testing it and proceed to EOL.18:22
knikollaas happy CI is a condition of opt-in or renewal. 18:22
knikollayes, that’s the policy. 18:23
gmannduring opt-in that is ok. how about if a config error starts 1 month after? 18:23
noonedeadpunkI'm still concerned about the process of opt-in/opt-out18:23
noonedeadpunkas well as timelines for that18:23
knikolla@noonedeadpunk: that’s the next item in the agenda. 18:23
noonedeadpunkas this is something that was mentioned in the decision but never explained18:24
gmannhere the status of unmaintained branches at any time directly impacts supported branch testing18:24
noonedeadpunk++18:24
knikolla#topic Documenting implementation processes for Unmaintained18:24
knikollaconsidering we’ve already sort of switched topics. 18:25
gmannfungi: any specific reason deletion cannot be done at the level of the branch having the config error instead of all branches?18:25
knikollaWe merged the policy, the next step is documenting the opt-in, renewal process, and implementing the tooling necessary. 18:25
knikollaSo updating the project-config-guide and defining the timelines as well. 18:26
gmannI still feel taking action at the level of the branch having the error is more appropriate, especially since we have two sets of maintainers now with the unmaintained concept18:26
fungigmann: because the opendev sysadmins don't have time to police branch configs for projects. we have a list of projects included in the tenant and can edit that list fairly easily18:27
fungiit's easy to add and remove a repository from the list18:27
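[editor's note: a rough, illustrative sketch of what such a tenant project list looks like in a Zuul tenant configuration; this is not the actual OpenDev file, and the repository names are examples only.]

```yaml
# Hypothetical excerpt of a Zuul tenant config. Dropping a repository from
# untrusted-projects stops Zuul from loading its job configuration and from
# testing changes on any of its branches.
- tenant:
    name: openstack
    source:
      gerrit:
        config-projects:
          - openstack/project-config
        untrusted-projects:
          - openstack/nova
          - openstack/neutron
          # - openstack/some-repo-with-broken-configs   # removed until fixed
```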
gmannhumm18:27
gmannanyways I still think it is a big risk, and if it happens to any project then it is a panic situation18:29
fungii agree, config errors should be18:29
JayFgmann: I trust fungi and the other opendev sysadmins to make loud noise about this on mailing lists and other places before action is taken, giving the TC time to remove the branch as a last resort18:29
JayFthey already have made loud noises without a stick; now they have to make loud noises with one :)18:30
knikolla++18:30
fungizuul doesn't take backward-incompatible changes to configs lightly (as i said, the queue in pipeline change was announced a year before it was merged)18:30
gmannJayF: even with the risk of stopping testing for projects? that does not sound like a good solution 18:30
knikollaWe’ll nuke the branch before we nuke the project.18:31
fungistopping testing some projects is preferable to stopping testing all projects18:31
JayFI agree; the good solution is that we proactively fix or retire branches with zuul-config-errors :) 18:31
JayFWe can't not maintain something and not be OK with consequences from that lack of maintenance.18:31
JayFWhich is realistically what this conversation is about; these are all branches that haven't had tests running in over a year18:32
dansmithif we don't run tests on a project,18:32
JayFI'm not worried about retiring them; I'm worried about someone running them untested 18:32
dansmiththen nothing can merge (or zuul won't merge anything) right?18:32
dansmithor does it become open season?18:32
JayFexactly18:32
JayFnothing runs, nothing merges18:32
clarkbdansmith: no merging since +2 Verified is a requirement to merge18:32
fungiopenstack has a vested interest in making sure the people running the ci/cd system have the time to do that effectively18:32
gmannyeah that is a good question 18:32
JayFthe branch is effectively dead when it's in a config-errored state18:32
dansmithclarkb: ack18:32
gmannok so stop testing and merging things on master too, even if everything is fine there18:32
dansmithif it defaults to blocked, then that seems like a reasonable "fix your stuff and then you can merge again" incentive18:32
slaweqJayF IIUC it's not the branch but the project that is effectively dead in such a case18:33
gmannon supported branches i am fine with that approach, but the unmaintained branch state impacting all other supported branches as well as master branch development is not good18:33
JayFslaweq: Eh, I don't think that's true in all cases. python-ironicclient has old branches on the list; it's a sign those branches are not cared about and should be retired (the action I'll be taking with my PTL hat on unless someone fixes it)18:33
gmannit has to be independent 18:33
dansmithas long as a PTL can nuke an unmaintained release that isn't fixing its configs to unblock master, which I think we've covered right?18:33
fungimaybe i can restate this better... the opendev sysadmins review changes that add and remove projects from the openstack tenant. the openstack maintainers decide what branches are still open and can fix or remove configuration problems in them. if the maintainers don't do that, the opendev sysadmins can remove those repositories from the tenant in order to keep the configuration clean18:34
knikolla@gmann: if an unmaintained branch having a wrong config causes the maintained branches' CI to break, that is a sign the branch should be retired and we can expedite the process. We have the CI breaking occasionally for all kinds of reasons; it’s annoying but it’s not unfixable if handled actively. 18:34
slaweqJayF ahh, ok, now I understand what You said earlier :) 18:35
slaweqand I agree with it18:35
dansmithfungi: sorry if I missed this but, why does this matter at all? some zuul detail that hurts performance or something if there are any projects with config errors?18:36
gmannyes we can do that, but that is extra monitoring and work to keep eyes on. my understanding is that we check unmaintained branches at opt-in time, and if all is good then say OK and check again in the next cycle18:36
dansmithlike, I would expect zuul to just ignore those branches once they have an error without a lot of additional overhead, but I'm guessing it's not that simple?18:36
gmannif a config error happens in the middle of a cycle and the opendev process takes effect, then we are at risk or doing extra work on unmaintained branch checks18:36
knikollaThat risk exists today with EM too.18:37
gmanndansmith: yeah if that can be done it is great, but agreed, it seems it is not easy18:37
fungidansmith: among other reasons, it makes it hard to identify new errors when nobody is bothering to fix the existing ones18:37
dansmithfungi: I guess I don't know why it matters really, but okay18:38
fungialso it means zuul is indexing configuration on an ever increasing number of broken branches which makes restarts/reconfigs take that much longer18:38
knikollaI propose we continue the conversation with regard to implementing the processes from the policy that we merged. 18:39
dansmithindexing takes longer if there are config errors?18:39
fungifrickler and clarkb may have additional concerns beyond those18:39
clarkbit creates unexpected behaviors in your testing, it makes it harder to debug real problems, and they tend to have knock-on effects where you get errors causing errors causing errors that are harder to untangle over time18:39
fricklerobscuring more important errors is my main concern18:39
fungii suppose the question is why do the opendev sysadmins care that openstack has broken job configs. we might not care quite as much if people didn't come to us asking for help with their job configs18:39
clarkbso you don't care about error A today, and in six months you decide you don't care about error B. Then six months after that you get error C and now you have to fix all three because you care about C, and it's much more difficult18:40
clarkbeasier if you just fixed A and B as they occurred18:40
clarkbon a personal level I'm particularly frustrated with the errors that occur due to renaming projects18:40
fungiwe could entertain not actually caring if openstack's job configuration is broken, and tell the openstack maintainers good luck they're on their own figuring it out18:40
clarkbrenaming projects in gerrit requires a downtime, is unsupported by upstream, and is potentially dangerous. We do it anyway because people like names to align or not conflict etc and then they don't even fix their zuul configs after we (opendev) do this major surgery on the system for them18:40
dansmithI guess I'm just trying to figure out how this is materially different from people not fixing gate stability issues that compound over time, other than that there's a dashboard that lists these errors in a nice list18:41
gmannyeah, gate stability seems more important than syntax errors to me. it directly relates to the quality of OpenStack as software 18:42
fungithat dashboard exists because previously when people wanted to know why their changes weren't being tested one of the sysadmins had to go trolling through service logs18:42
gmanncan we make a policy to kick a project out if they do not fix gate stability?18:42
funginow we still end up looking at that page for people when they don't see changes getting tested18:42
dansmithanyway, I think I'm fine with a project no longer running tests on master if they have a broken branch if that will make everyone feel better and be a stronger sentinel to the owners of broken stuff18:43
fungito be fair, a syntax error creates very stable gating. you can predict with 100% certainty that no changes will be merging on that branch18:43
clarkbya I think the main difference is that we end up in the debug path immediately18:43
clarkband we're already providing you the information needed to get ahead of those problems18:43
JayFdansmith++18:43
knikolladansmith++18:43
dansmithas long as a designated unmaintained owner can't block master indefinitely by not fixing things (i.e. the PTL can just nuke it)18:43
gmannnot sure what the deadline from opendev would be, but it seems it can block master immediately based on the deadline?18:44
knikolla++, I see the situation as not particularly different from EM. With the difference that a project team had to fix the branch in EM, whereas Unmaintained allows nuking the branch rather than fixing it. 18:44
fungiyes, giving the ptl control over whether fixes will be merged or deleting the affected branch is absolutely the right way18:44
gmannso that is also an important thing to note/check. the deadline should give at least 1-2 cycles18:44
dansmithknikolla: or nuking the .zuul in the meantime18:44
knikolla++, that too18:45
clarkbgmann: this came up because we're wanting to switch the openstack zuul tenant to ansible 8 by default and that may create zuul config errors. The reason for this is ansible 6 is no longer supported and zuul added ansible 8 support recently. The timeline is going to be after the release though18:45
clarkbwe're discussing it in our meeting in 15 minutes and there will probably be email about that particular change sometime this week once we sort out some details18:45
gmannyeah, the .zuul config going away until they fix it can be better than deleting the whole branch18:45
fungiand with lots of warning and opportunity for projects to check if it will cause problems for them18:45
gmannit will give time to any external maintainer to come forward and fix18:45
knikollaThis was very helpful in bringing to our attention a concern from the OpenDev team that we hadn’t prioritized before, and we will, moving forward. 18:46
gmannclarkb: ack18:46
noonedeadpunkclarkb: well, that for sure will create some issues with jobs. As that contains the openstack.cloud collection 2.0 already, doesn't it?18:46
clarkbnoonedeadpunk: I have no idea18:46
dansmithyeah just nuking .zuul and then giving someone until the end of the cycle to revert/fix or delete the branch seems fine to me18:46
clarkbbut you can test it today on a per job basis by setting the ansible version on the job18:46
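[editor's note: a minimal sketch of what clarkb is suggesting, assuming Zuul's `ansible-version` job attribute; the job and parent names are made up for illustration.]

```yaml
# Hypothetical job variant pinning the Ansible version the Zuul executor
# uses for this job, to try out Ansible 8 ahead of the tenant-wide switch.
- job:
    name: my-project-functional
    parent: base
    ansible-version: '8'
```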
gmann++, this seems good tradeoff 18:46
slaweq dansmith++18:46
knikolla++dansmith: that seems a good approach and could even be automated. 18:46
noonedeadpunkwhich has quite different inputs/outputs, so anything that uses openstack.cloud>2.0 is barely compatible with content that was written for 1.018:47
dansmithknikolla: I'd prefer we just document it18:47
gmannlet's document this so that PTLs know about this option18:47
gmanndansmith: yeah18:47
dansmithknikolla: like we document that any nova patch that merged without sufficient core review has a fast-revert escape hatch18:47
dansmithjust document it as "18:47
dansmith"this is how to get out of jail"18:47
funginoonedeadpunk: jobs relying on that are probably invoking their own nested ansible on job nodes anyway, we're talking about the ansible version run on the executor18:47
spotz[m]Makes sense18:48
knikollaScripting is more fun than writing, haha. But yes, I’ll make a note to add that to PTL docs. 18:48
dansmithknikolla: well, write a script someone can run to generate the delete commit or something sure18:48
dansmithwe can use it to submit patches against this list right now :)18:48
gmannknikolla: also in the unmaintained branches doc; that can help PTLs know what action to take on unmaintained branches if this happens18:48
fungibtw, the zuul config-errors list is retrievable in json from its rest api if that helps18:49
knikollaOne thing I want to spend at least some time talking about today, is the opt-in process for Unmaintained. 18:49
gmannfungi: do we have a doc link for the opendev policy which we can link in the OpenStack docs to communicate it to PTLs/the community along with the required actions?18:49
noonedeadpunkfungi: well, given that post jobs (like upload logs to swift) and things like that are well-tested in zuul - then it might be fine18:49
gmannfungi: or are you going to draft one?18:49
knikollaWhat would be the right place to implement that in?18:49
fungigmann: there is no policy yet, we just started discussing it in irc a few minutes ago18:50
noonedeadpunkor well, in most cases, as indeed actions against openstack are usually done in nested envs, except maybe pre/post steps18:50
gmannfungi: ok, noted18:50
dansmithknikolla: I assume we delete branches via commits in releases or something right?18:50
dansmithif so, generate the commit and let someone -1 it to volunteer :)18:50
noonedeadpunkalso iirc ansible 8.0 requires python>=3.10 to run18:51
knikollaThat might work: give a few weeks' time for someone to -1 a patch, and if not, delete. 18:51
JayFI suspect if we let the "fix it or your CI is turned off" message hit the list with a date; we'd see the list pare down even further. No need for TC to take direct/scripted action (yet) IMO.18:51
funginoonedeadpunk: it's already running on python 3.11, so that's fine (our executors are all 3.11)18:51
noonedeadpunksweet :)18:51
funginoonedeadpunk: the specific concern with ansible 8 is if there are job playbooks/roles whose syntax isn't valid in newer ansible18:53
knikollaLast 5 minutes18:54
knikolla#topic Reviews and Open Discussion18:54
noonedeadpunknah, I'd say that's the least of my concerns - the code changes we had between core 2.10 and core 2.15 are rather minimal18:54
noonedeadpunkit's mostly the collections that bring pain18:54
JayFI'll note for open discussion; if you're in your 11th month as a TC member and are planning on continuing to serve, please ensure you re-nominate yourself.18:56
JayFIf you aren't planning on continuing to serve, please help recruit :)18:56
knikolla++18:56
gmann++, also encourage other members to run18:56
noonedeadpunkI guess all chairs have submitted. So we at least won't lack one18:56
noonedeadpunkBut helping to recruit won't hurt for sure18:57
gmannyeah, we have 4 seats and 4 candidacies for now18:57
noonedeadpunkas it's always good thing to do18:57
gmannmore and more members running in the election is good for the long term18:57
noonedeadpunk++18:57
knikollaIt would be amazing to have to go back to running elections :)18:57
gmannalso we can encourage PTLs, existing or new ones, to send nominations before the deadline, at least the ones we know18:58
noonedeadpunkwe actually were running a year ago ;)18:58
noonedeadpunkand we have a week to spare18:58
spotz[m]Also it really helps if you post to the ML not just make your commit18:58
gmannsure18:59
knikollaAlright, thanks all! 18:59
knikolla#endmeeting18:59
opendevmeetMeeting ended Tue Aug 22 18:59:16 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:59
opendevmeetMinutes:        https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-22-17.59.html18:59
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-22-17.59.txt18:59
opendevmeetLog:            https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-22-17.59.log.html18:59
spotz[m]Last year we dropped the ball and i was in Ireland at a conference18:59
slaweqthx knikolla 18:59
slaweqo/18:59
spotz[m]Thanks knikolla 19:01
fungiand sorry for derailing the tc meeting with that topic, but you can retaliate by derailing the opendev sysadmins meeting that's going on right now, i guess19:02
knikollafungi: not at all. That’s an issue you’ve mentioned multiple times. Sometimes the amount of noise has to be proportional to the importance of a topic once all other avenues are exhausted :) thanks for your patience.19:17
*** timburke_ is now known as timburke21:28
spotz[m]knikolla: Hey no TC listed on the PTG list22:19
JayFwe want on the list for sure :)22:28
* fungi scalps ptg tickets for the latecomers22:57
knikollaOops, I knew I had forgotten to do something last week. 23:12
knikollaSent an email to ptg at opendev dot org23:12
knikollaI might need to bribe someone with drinks/tea/soda during the next summit23:13
clarkbknikolla: I don't know that that is the correct email address since we don't have opendev.org set up for email like that iirc23:14
clarkbit is ptg at openinfra dot dev23:14
clarkbcc diablo_rojo_phone 23:14
knikollaI meant openinfra23:14
fungi.dev23:21
fungiyeah, should work23:22
knikollaalso pinged the keystone ptl on #openstack-keystone since I don’t see that on the list either.23:22
