13:00:40 #startmeeting kolla
13:00:40 Meeting started Wed Sep 13 13:00:40 2023 UTC and is due to finish in 60 minutes. The chair is mnasiadka. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:40 The meeting name has been set to 'kolla'
13:00:42 #topic rollcall
13:00:43 o/
13:00:46 o/
13:00:47 \o
13:00:52 o/
13:00:56 \o
13:01:22 o/
13:01:41 o/
13:02:15 o/
13:02:21 \o
13:02:39 #topic agenda
13:02:39 * CI status
13:02:39 * Release tasks
13:02:39 * Regular stable releases (first meeting in a month)
13:02:39 * Current cycle planning
13:02:40 * Additional agenda (from whiteboard)
13:02:40 * Open discussion
13:02:42 #topic CI status
13:03:33 Overall it's green I guess
13:03:48 But what I don't like is that the multinode (ceph) jobs are working nicely in master
13:03:55 but they are failing in stable branches
13:04:06 the same way as they were breaking before we reconfigured RMQ to HA
13:04:28 Would we want to enable RMQ HA in tests/templates/globals-defaults for stable branches?
13:04:51 no?
13:04:53 in CI only?
13:05:04 in CI only
13:05:08 mmalchuk: why not?
13:05:16 oh, on CI it's ok
13:05:27 frickler: any thoughts?
13:05:29 seems ok
13:05:46 Seems reasonable to me
13:05:48 my other idea was whether doing single-node rmq only would be better
13:06:08 but likely HA is fine, too, gives it some testing
13:06:12 frickler: we need HA
13:06:28 on CI
13:06:33 let's see if switching those jobs to HA solves the issue of randomness
13:06:48 anybody willing to change it and do a couple of rechecks?
13:07:25 can take a look probably soonish
13:07:42 ok, bbezak wins ;)
13:07:48 #topic Release tasks
13:08:06 I didn't raise a patch for cycle highlights, will do after this meeting
13:08:12 no other release tasks seem to be waiting
13:08:22 #topic Current cycle planning
13:08:51 I don't think anybody spent time reviewing podman patches
13:09:06 and I think kevko did not update the LE Kolla patch according to reviews
13:09:14 so those two things need focus :)
13:09:50 I also proposed setting OVN jobs as voting, because it's getting more and more deployments, so I guess it would make sense to have them voting
13:10:03 #link https://review.opendev.org/c/openstack/kolla-ansible/+/894914
13:10:33 SvenKieske: did you mention that td-agent needs bumping up?
13:10:57 yeah, currently figuring that one out, might not be so trivial
13:11:12 do we switch to OVN from OVS later?
13:11:15 do you have that amount of time, or do you need help?
13:11:34 mmalchuk: that's on my backlog, we should have something at the beginning of next cycle
13:11:46 cool thanks
13:13:24 I should figure it out today or cry for help :D
13:13:29 ok :)
13:13:42 #topic Additional agenda (from whiteboard)
13:13:55 to zun or not to zun (jangutter). TL;DR, zun needs fundamental fixes to work with docker>20, etcd>3.3. Docker is a _host_ level dependency so vendoring options in the container won't work.
13:14:24 Does anyone have contacts with folks on the Zun team?
13:14:44 jangutter: I think it does work, but only for Ubuntu and Rocky 9 (since they still have 20.x available)
13:15:06 jangutter: there is one person I think, and he already responded on the ticket I created in Zun
13:15:06 yep, unless etcd is updated, of course.
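[For reference, a minimal sketch of the CI-only RabbitMQ HA change discussed above (13:04-13:06). It assumes the switch is the existing om_enable_rabbitmq_high_availability variable and that the CI globals live in tests/templates/globals-default.j2; both the path and the variable name should be verified against the stable branch being patched.]

    # tests/templates/globals-default.j2 (sketch, CI only): run the stable-branch
    # multinode ceph jobs with HA RabbitMQ, matching what master already exercises.
    om_enable_rabbitmq_high_availability: "yes"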
13:15:11 mnasiadka: didn't have time
13:15:13 :(
13:15:29 there's only hongbin left in zun afaict, who I think has been around doing kolla patches, too
13:15:31 That's why I proposed we would copy the current etcd as etcd-zun, and upgrade etcd (that is used for everything else)
13:16:03 jangutter: if they don't shape up in C - we deprecate Zun and drop it in D (or earlier if we have to)
13:16:11 ah, right... running on different ports?
13:16:15 mnasiadka: do you have a link to that ticket?
13:16:38 #link https://bugs.launchpad.net/zun/+bug/2007142
13:16:40 maybe deprecate in B still? and undeprecate if things improve?
13:16:48 the goal is to fix it in C
13:16:59 I'm fine with deprecating in B, gives a clear mandate to drop if they don't shape up
13:17:39 i.e. not remove, just don't run CI jobs on it?
13:17:45 jangutter: well, then the role would need to support it, right
13:17:57 deprecate means we're planning to drop, don't use it ;)
13:19:21 ok, I'll see what I can do with etcd-zun, can't promise too many hours.
13:20:59 will report back if it's feasible (migration path might be "fun")
13:22:31 well, seems like too much work
13:23:20 oof, yeah, bootstrap involvement: docker on the host needs access to it.
13:23:55 oh boy
13:24:18 so what's the alternative? do not update etcd, or drop zun?
13:24:26 Anybody here that cares for Zun?
13:24:53 or pin your etcd container to a previous version of kolla?
13:25:07 I'd favor updating etcd. maybe one can pin etcd for zun deployments?
13:25:20 just add a reno for that, right
13:25:28 well, not upgrading is not really an alternative imho
13:25:56 if we could pin for zun, that'd be nice, but I assume it's not that easy? would need a separate deployment somehow?
13:26:02 also zun relies on old docker
13:26:40 well, so maybe still a separate etcd container image for zun, and if you use zun - it's the one you get for deployment?
13:26:55 but then for such people - the upgrade will happen later
13:27:01 and they can't skip versions
13:27:02 so ... drop zun now, restore when it gets fixed?
13:27:04 oh holy crap
13:27:09 that sounds like a plan. etcd and etcd-3.3
13:27:11 yes, that's what I vote for
13:27:16 drop zun
13:27:23 and people really needing zun will need to stay on 2023.1 until then
13:27:23 +1
13:27:53 any workaround is just going to postpone the inevitable, if not make it worse :-(
13:27:55 yeah, drop now, if they backport support for new docker into 2023.1 - we can revert
13:28:22 should we maybe announce that separately via ML?
13:28:43 maybe someone who cares is encouraged to fix zun earlier then ;)
13:28:50 jangutter: can you please send a mail to openstack-discuss that we are going to drop zun support for that reason, and that the plan is to revert the drop once Zun supports the new Docker version - because we can't be running this etcd version forever? :)
13:29:13 let's wait a week to see if any massacre happens and then start dropping
13:29:17 +1
13:29:31 will do, adding that if zun gains support it will be considered for the stable 2023.1 branch?
13:30:22 yes, something like that - we should probably drop it in kolla-ansible in a way that the Ansible inventory groups stay there, so it's easy to revert (without breaking anything)
13:30:41 and if they don't shape up - we drop the remnants in C
13:30:43 :thumbsup:
13:31:34 ok, let's hope so - case solved for now
13:31:53 Tap as a Service installation and configuration (jsuazo)
13:32:01 Did we get to any conclusion last time?
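[A minimal sketch of the "pin etcd for zun deployments" idea floated above (13:24-13:26), assuming the usual per-service image variables in globals.yml; the variable name and the tag value are illustrative and would need checking against the release actually deployed.]

    # globals.yml (sketch): keep the etcd image on the last release that still
    # ships the etcd 3.3 series for Zun, while other containers move forward.
    etcd_tag: "2023.1-ubuntu-jammy"  # illustrative - use the real last-good tag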
:)
13:32:57 I have a clearer picture as to why we are touching the upper constraints, which I addressed in the kolla proposal
13:33:27 Yeah, I see that
13:33:42 Question is if we should be installing it from a tarball, or honor the u-c
13:34:03 TL;DR the pinned version is not working on X and above (tested versions), we tested the latest release, 11.0.0, which seems to work fine
13:34:28 afaik we need to bump the X version in u-c
13:34:53 release 11.0.0 is in master u-c
13:35:03 We managed to add the plugin as a neutron addition from git, the branch of the source could be changed to the correct one, currently we are overriding the source to master on deployments
13:35:09 we are not considering adding that to any of the stable branches (no backports)
13:35:19 (it's a feature)
13:35:26 Merged openstack/kolla-ansible stable/zed: CI: add q35 hardware machine type to tests https://review.opendev.org/c/openstack/kolla-ansible/+/894582
13:35:28 Merged openstack/kolla-ansible stable/2023.1: CI: add q35 hardware machine type to tests https://review.opendev.org/c/openstack/kolla-ansible/+/894581
13:35:33 Merged openstack/kolla-ansible stable/yoga: CI: add q35 hardware machine type to tests https://review.opendev.org/c/openstack/kolla-ansible/+/894583
13:36:11 so honoring u-c should be fine
13:37:26 frickler: any controversial opinions? ;-) (apart from the ones about who needs TaaS or whether that project is alive)
13:37:46 up until now removing packages from u-c was rather an exception, not a way of working
13:38:34 mnasiadka: Should I propose a change to the u-c in X then? (and remove the u-c editing from the proposal)
13:38:45 in Xena?
13:39:00 jsuazo: no, since it is a feature
13:39:04 (changing u-c in stable branches is not something that is going to pass from my perspective)
13:39:08 will not backport to xena
13:39:36 ok, got it, will update the proposal then
13:39:55 you can backport the taas feature in your downstream branches and override the u-c, but I have a feeling taas is there for some reason
13:40:29 unless the requirements team is happy with removing tap-as-a-service from u-c, then we can rework the build to use tarballs
13:40:49 td-agent is meh, the repo installation changed to a good old "curl $internet | bash" -> voila, you got a new repo under /etc/apt/sources.list.d/ :(
13:42:01 We actually had a tarball installation initially, but my team didn't like it that much :(
13:42:23 SvenKieske: oh boy, I'll try to have a look in mythical free time
13:42:25 and everything is now called "fluentd*" instead of "td-agent*", at least that can be mass changed I guess
13:42:40 https://docs.fluentd.org/installation/install-by-deb
13:42:50 https://www.fluentd.org/blog/upgrade-td-agent-v4-to-v5
13:42:59 mhm
13:43:07 let's discuss that next week when I'll know more about it :)
13:43:22 S3 backend support for Glance and Cinder-Backup (jsuazo)
13:43:24 I guess it's doable, will try to extract URLs for gpg keys et al. from that shell script :)
13:43:41 jsuazo: I think I commented on this patch today, do you need any help?
13:44:01 i would appreciate it if you gave the kolla-ansible TaaS proposal a look
13:44:02 me too, as always I add some comments
13:44:24 jsuazo: I'll have a look on the k-a side as well this week
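[A sketch of how the "neutron addition from git" mentioned at 13:35 is usually wired into kolla's image build via its plugin mechanism; the section name, the branch, and the choice of neutron-server as the target image are assumptions that would need confirming against the actual proposal.]

    # kolla-build.conf (sketch): pull tap-as-a-service into the neutron-server image
    [neutron-server-plugin-tap-as-a-service]
    type = git
    location = https://opendev.org/openstack/tap-as-a-service
    reference = master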
13:44:38 have a link? I know I saw it somewhere but it got lost in the torrent of other tabs :D
13:44:42 mnasiadka: No help needed, I'm keeping an eye on the comments and trying to address them as I see them
13:45:15 kolla-ansible: https://review.opendev.org/c/openstack/kolla-ansible/+/885417
13:45:17 SvenKieske: ^^
13:45:22 ok then
13:45:35 #topic Open discussion
13:45:39 anybody anything?
13:45:52 orphans-backports
13:45:57 https://review.opendev.org/q/I3ca4e10508c26b752412789502ceb917ecb4dbeb
13:46:01 https://review.opendev.org/q/I0401bfc03cd31d72c5a2ae0a111889d5c29a8aa2
13:46:05 https://review.opendev.org/q/I169e51c0f5c691518ada1837990b5bdd2a3d1481
13:46:10 https://review.opendev.org/q/Ief8dca4e27197c9576e215cbd960da75f6fdc20c
13:46:16 some merged, some one
13:46:42 and kayobe reviews
13:46:45 https://review.opendev.org/c/openstack/kayobe/+/861397
13:46:49 https://review.opendev.org/c/openstack/kayobe/+/879554
13:46:59 that's all from my side
13:47:42 guys
13:47:42 1. ovn/ovs redeployed ..not working :( ...
13:47:42 2. Can anyone check the podman kolla patch? locally working ..zuul is failing while building SOME images
13:48:01 I should have some more time next week
13:48:08 I can have a look at the podman kolla patch
13:48:34 that would be great, I'm also out of ideas there, and I looked at the logs.. (maybe the wrong way)
13:49:01 mnasiadka: there is no reason for it not to work ..but it's not working in zuul ...locally working like a charm
13:50:27 if all other debugging fails, we can hold a node and check what is going on there
13:50:37 kevko: well, basically if it fails on upgrade jobs, it's checking out previous release code - but what do I know :)
13:51:04 but it is failing on kolla build
13:51:12 so it is not related to the upgrade job
13:51:31 because the upgrade job is just deploy -> build images -> upgrade
13:51:39 and it is failing on build images
13:51:46 everything is built OK ..but not glance, horizon, neutron
13:52:38 I saw pip is failing or something similar
13:52:51 but it's working ok on all other jobs, just not upgrade :)
13:53:34 anyway - I unfortunately need to run - feel free to resolve that issue while I'm gone :)
13:53:38 Thank you all for coming
13:53:40 #endmeet
13:53:43 #endmeeting
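[On the podman build failures discussed at the end (13:47-13:52), a minimal local-reproduction sketch: rebuild only the images the upgrade job reports as failing and compare against the zuul logs. The base distro is illustrative, and any podman-specific option depends on what the patch under review actually adds, so treat this as a starting point only.]

    # rebuild just the images that fail in CI (glance, horizon, neutron)
    kolla-build --base ubuntu glance horizon neutron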