15:00:42 #startmeeting neutron_ci
15:00:42 Meeting started Tue Dec 21 15:00:42 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:42 The meeting name has been set to 'neutron_ci'
15:00:44 hi
15:00:54 hi
15:01:07 this week we should have the meeting on video
15:01:14 but I can't open https://meetpad.opendev.org/neutron-ci-meetings
15:01:17 is it working for You?
15:01:28 no, it isn't
15:01:32 :/
15:01:42 at least it's not something on my side
15:01:45 :)
15:01:51 ok, so lets do the meeting on irc only
15:01:58 and lets do it quick
15:02:07 ok
15:02:15 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:29 o/
15:02:46 I think we can start as lajoskatona is on pto already and I also don't see obondarev online
15:02:52 #topic Actions from previous meetings
15:03:01 slaweq to add some extra logs to the test, related to https://bugs.launchpad.net/neutron/+bug/1954751 to help further debugging
15:03:11 I proposed a fix for that https://review.opendev.org/c/openstack/neutron/+/822051 and it's merged now
15:03:19 it also added some extra logs
15:03:53 bcafarel: I will look at which versions we should backport that change to
15:03:56 :)
15:03:59 thanks :)
15:04:12 next one
15:04:13 mlavalle to check failing neutron-ovn-tempest-ovs-master-fedora job
15:04:28 I did some fact finding
15:04:41 it seems we have a problem with mariadb
15:04:52 let me share with you some data...
15:05:10 https://paste.openstack.org/show/811785/
15:05:30 look at lines 20 to 28
15:06:00 command 'sudo mysqladmin -u root password secretdatabase' is failing
15:06:33 so later, when trying to create a user in the DB, root doesn't have privilege
15:07:00 let me show you the same thing from the last successful run:
15:07:17 https://paste.openstack.org/show/811784/
15:07:45 look at lines 26 to 31
15:07:54 same commands succeed
15:08:09 it's the same Fedora release, 34
15:08:41 problem could be the devstack script, method "configure_database_mysql"
15:08:42 however, if you look at both pastes, in the lines above, it is not installing the same thing
15:08:57 we are checking this
15:08:57 if is_ubuntu && [ "$MYSQL_SERVICE_NAME" == "mariadb" ]; then
15:09:38 in the successful case we install mariadb-3:10.5.12-1.fc34.x86_64
15:10:00 whereas in the fail case we install community-mysql-8.0.27-1.fc34.x86_64
15:10:10 isn't that weird
15:10:12 ?
15:10:16 no
15:10:24 there are different installation methods
15:10:27 depending on the service
15:10:39 hmm
15:10:40 and I think we don't capture the OS correctly in devstack
15:10:46 devstack+fedora seems currently broken
15:10:57 was just mentioning that in -qa, too
15:11:10 frickler: is that in general, not only neutron?
15:11:14 yes
15:11:57 is someone fixing it?
15:12:20 or is there a reported bug?
15:12:21 well currently no. needs some investment from redhat or whoever cares about it
15:12:36 ok, that's us I think
15:12:56 mlavalle: probably we can invest some time in it :)
15:13:02 yeap
15:13:13 I'll create a bug in devstack
15:13:15 but it should be at least easy to reproduce locally if it's such a general problem with devstack on fedora
15:13:22 mlavalle++ thx
15:13:38 slaweq: yeah, I will try to reproduce it locally
15:13:44 thx
15:13:52 and thx for the investigation on that one
15:14:45 ok, I think we can move on for now
15:14:53 yes, thx
15:14:58 #topic Stable branches
15:15:05 bcafarel: any updates/issues?
15:15:17 ussuri is broken
15:15:19 we have https://bugs.launchpad.net/tempest/+bug/1955418 on train (gmann++ on it)
15:15:20 at least grenade
15:15:31 yeah, stable/train is also broken #link http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026405.html
15:15:51 I am on fixes which seem to be working, octavia passed. waiting for the neutron testing patch to pass
15:15:58 thanks for working on it :) and thanks ralonsoh for reporting ussuri, I did not have time yet to check that
15:16:18 is there any bug for ussuri?
15:16:25 https://bugs.launchpad.net/neutron/+bug/1955486
15:16:36 a problem with oslo.utils
15:16:47 and the version used in Ussuri
15:17:25 the problem is I don't know how to cap this or implement a fix for stable tempest releases
15:17:31 because we don't have this
15:18:06 ralonsoh: i see, that is the one I am fixing for stable/train using tempest 28.0.0
15:18:30 right, if it's grenade in ussuri, it first tries to install train
15:18:33 and as stable/ussuri grenade uses the stable/train tempest version it happens there
15:18:36 so the issue can be the same :)
15:18:36 why are we using that version in ussuri?
15:18:44 ok then
15:18:48 so the same fix will work there, I will add a testing patch for stable/ussuri to verify
15:19:01 gmann++ thx a lot
15:19:04 these two fixes #link https://review.opendev.org/c/openstack/devstack/+/822380 #link https://review.opendev.org/c/openstack/tempest/+/822339
15:19:07 I'll mark this bug as a duplicate
15:19:17 sure.
15:19:51 as for good news regarding stable/train, we have merged https://review.opendev.org/c/openstack/requirements/+/821972
15:20:00 yes
15:20:01 thx ralonsoh for fixing that
15:21:18 bcafarel: other than that issue, we should be good for stable branches' ci, right?
15:21:34 indeed
15:21:51 good :)
15:21:52 thx
15:21:58 so lets move on to the next topic
15:22:21 as lajoskatona is away already, lets skip stadium projects today
15:22:32 and go directly to the next topic
15:22:36 #topic Grafana
15:22:42 http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:23:48 TBH the graphs look really ok to me
15:23:59 maybe it's some kind of Christmas gift for us :D
15:24:22 Santa came early to Neutrontown \o/
15:24:37 LOL, yeah!
15:24:38 :)
15:25:13 and also my weekly check of the failed jobs confirmed that
15:25:26 I didn't see many errors not related to the patches on which the jobs were run
15:25:38 #topic Tempest/Scenario
15:25:51 again I saw a couple of timed-out jobs:
15:25:58 https://4b558001c4cbf62a997f-78e633edf2b137bdab04e16fad5df952.ssl.cf1.rackcdn.com/821433/2/check/neutron-tempest-plugin-scenario-linuxbridge/315a4ed/job-output.txt... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/klqERVtiMhWBIRMNARsRvTLN)
15:26:48 the interesting thing is that all of those issues which I see are in the neutron-tempest-plugin-scenario- jobs
15:27:07 which IIRC are moved to use nested virtualization now, right ykarel ?
15:27:14 yes
15:27:31 Ghanshyam proposed openstack/neutron stable/ussuri: DNM: test tempest train-last tag https://review.opendev.org/c/openstack/neutron/+/822504
15:27:45 ralonsoh: slaweq ^^ testing grenade job
15:27:49 can it be somehow related?
15:27:50 thanks
15:28:05 slaweq, out of 4 I see 3 are older than 7 days
15:28:11 and not using nested virt nodes
15:28:48 the 4th one is on a nested virt node, for which you mentioned it didn't finish devstack
15:29:09 this is the one I'm checking
15:29:25 and it seems to be too slow executing any command
15:29:37 ok, so maybe going for nested virt will actually improve it
15:29:51 ralonsoh: yes, that one can just be a slow node
15:29:57 right
15:30:04 but in other cases the slowest were tests actually
15:30:11 and for that nested virt may help
15:30:22 ok, lets keep an eye on it for next week(s) and we will see
15:30:37 yes
15:31:04 next issue
15:31:09 I opened a new bug https://bugs.launchpad.net/neutron/+bug/1955478
15:31:29 I found such an issue only once so far but thought it would be good to have it recorded somewhere :)
15:31:53 as it seems that it may happen more times (maybe I just missed it somewhere else)
15:32:09 so if anyone has some time and wants to investigate that, that would be great
15:33:49 and that's basically all what I had for today
15:33:50 as per logstash it seems there are many such failures
15:34:08 ykarel: can You share a link to the logstash query?
15:34:09 http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22failed%2017%2F20%3A%20up%20%5C%22
15:35:12 failures in different jobs, different nodes, related to metadata failure
15:36:01 but most of them are related to OVS (or even all, I'm not sure about non-neutron jobs)
15:36:51 so it seems that it's a more critical issue than I thought :)
15:37:48 puppet jobs are also OVS ones, so we can say all are openvswitch
15:38:08 ok, so all ovs related
15:38:13 we need to investigate that
15:38:36 I will add it as an action item on me but I don't know if I will be able to do that this week
15:38:58 #action slaweq to check https://bugs.launchpad.net/neutron/+bug/1955478 and broken metadata connectivity issues
15:39:59 ok, that's basically all what I had for today
15:40:14 we were supposed to continue the discussion about improvement ideas today
15:40:49 but as we don't have the video meeting, and some folks are already off, lets skip it and get back to that next year
15:41:03 #topic On Demand
15:41:12 do You have any other topics to discuss today?
15:41:18 no thanks
15:41:27 nothing from me
15:41:38 none from me too
15:41:45 nothing from me either
15:41:52 if not, I just want to say that this is our last meeting this year
15:42:09 so thx all for Your hard work keeping neutron ci green(ish)
15:42:29 have a great holiday season and see You all in 2022 :)
15:42:40 see you next year!
15:42:48 #endmeeting
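
Editor's note (not part of the meeting log): the fedora job failure discussed under the "Actions from previous meetings" topic comes down to which database package devstack installs. The sketch below is a minimal, illustrative bash reconstruction of that suspected failure mode, not devstack's real configure_database_mysql; only the quoted `if is_ubuntu && [ "$MYSQL_SERVICE_NAME" == "mariadb" ]` line comes from the log, and the install_database_mysql helper, the is_ubuntu/is_fedora implementations, and the package names used in the fallback branch are simplified stand-ins.

    #!/usr/bin/env bash
    # Illustrative sketch of the devstack distro check discussed in the meeting.
    # If the Fedora branch is not taken (e.g. the OS is not detected correctly),
    # the job ends up with community-mysql instead of mariadb, and the later
    # 'sudo mysqladmin -u root password ...' call fails as seen in the pastes.

    MYSQL_SERVICE_NAME=${MYSQL_SERVICE_NAME:-mariadb}

    # Simplified stand-ins for devstack's distro helpers.
    is_ubuntu() { grep -qi '^ID=ubuntu' /etc/os-release; }
    is_fedora() { grep -qi '^ID=fedora' /etc/os-release; }

    install_database_mysql() {
        if is_ubuntu && [ "$MYSQL_SERVICE_NAME" == "mariadb" ]; then
            # Ubuntu path: mariadb-server via apt.
            sudo apt-get install -y mariadb-server
        elif is_fedora && [ "$MYSQL_SERVICE_NAME" == "mariadb" ]; then
            # Fedora should land here and get mariadb
            # (mariadb-3:10.5.12-1.fc34 in the successful run).
            sudo dnf install -y mariadb-server
        else
            # Fallback: on Fedora this pulls in the community-mysql packages
            # (community-mysql-8.0.27-1.fc34 in the failing run), where the
            # root password setup step does not work the same way.
            sudo dnf install -y community-mysql-server
        fi
    }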