16:00:37 #startmeeting neutron_ci
16:00:39 hi
16:00:39 Meeting started Tue Oct 29 16:00:37 2019 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:40 hi
16:00:41 o/
16:00:42 The meeting name has been set to 'neutron_ci'
16:01:04 o/
16:01:50 ok, we already have the usual participants so we can start
16:02:05 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:02:10 please open it now
16:04:35 sorry, I had to take care of my daughter
16:04:40 but now I'm back
16:04:50 #topic Actions from previous meetings
16:05:02 first one
16:05:04 ralonsoh to check if there is any possibility to do something like ovsdb-monitor for openflows
16:05:19 one sec
16:05:26 #link https://review.opendev.org/#/c/689150/
16:05:39 this is a patch to implement an OF monitor
16:05:49 then, if approved, we can use it in UT/FT testing
16:06:18 and You want to use it only in the tests?
16:06:31 for now I don't see any other use
16:06:50 we raised this bug just for debugging tests
16:06:58 but of course, it could be used anywhere
16:07:38 yes, but I wonder if we couldn't simply run ovs-ofctl monitor as an external service and log its output to some file
16:07:51 similarly to how dstat now works in CI jobs
16:07:59 wouldn't that be enough?
16:08:37 well you need a bit of logic there to clean the output
16:08:43 but sure, this can be done
16:09:07 the goal of this class is to be used anywhere you want to track the OF changes
16:09:42 ok, this monitor is run "per bridge" so it would probably be better to run it "per test" or "per test class"
16:09:53 yes, only per bridge
16:09:58 cmd = ['ovs-ofctl', 'monitor', bridge_name, 'watch:', '--monitor']
16:10:02 so python is better for that, right :)
16:10:07 I think so
16:10:31 thx ralonsoh, I will add this patch to my review queue
16:10:46 thanks
16:11:49 thx for working on this, that may help us a lot with debugging some failed tests in the future
16:12:07 ok, next action
16:12:11 ralonsoh to investigate failed fullstack tests for dhcp agent rescheduling
16:12:20 as commented in https://bugs.launchpad.net/neutron/+bug/1799555/comments/23
16:12:20 Launchpad bug 1799555 in neutron "Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout" [High,Fix committed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:12:33 more time was spent reviewing logs than implementing the patch
16:12:40 just a timeout issue
16:12:46 #link https://review.opendev.org/#/c/689550/
16:13:00 that's all
16:13:07 yeah, I saw Your comment and the patch, which looks fine to me
16:13:11 cool
16:13:26 njohnston: bcafarel: please also review it if You have some time :)
16:13:56 will do
16:14:02 njohnston: thx
16:14:05 ok, next one
16:14:07 slaweq to investigate failed neutron.tests.fullstack.test_qos.TestDscpMarkingQoSOvs
16:14:10 sure thing, added to the list :)
16:14:23 unfortunately I didn't have time for this
16:14:41 I will try to do it this week (or next), maybe at the airport or on the plane
16:14:46 #action slaweq to investigate failed neutron.tests.fullstack.test_qos.TestDscpMarkingQoSOvs
16:15:06 bcafarel: also thx :)
16:15:09 ok, next one
16:15:11 slaweq to check strange "EVENT OVSNeutronAgentOSKenApp->ofctl_service GetDatapathRequest send_event" log lines in neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_port_creation_with_dscp_marking
16:15:20 for this one I also didn't have time
16:15:26 sorry for that
16:15:30 #action slaweq to check strange "EVENT OVSNeutronAgentOSKenApp->ofctl_service GetDatapathRequest send_event" log lines in neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_port_creation_with_dscp_marking
16:16:35 next one
16:16:37 ralonsoh to try to log flows at the end of failed functional tests
16:16:39 yes
16:16:45 once the previous patch is approved
16:16:55 #link https://review.opendev.org/#/c/689150/10
16:17:04 then we can use it for the tests
16:17:06 one question
16:17:25 this can be used during the test, logging the OF changes
16:17:42 or just when the test fails, we can print the OF changes in the log
16:17:47 thoughts?
16:18:04 I think that logging during the test is better for debugging
16:18:07 perfect
16:18:16 as then You can potentially find some race conditions
16:18:30 easier: just make use of the class and, when needed, log the OF changes
16:18:55 that's all
16:19:04 ok, thx ralonsoh
16:19:12 so You will take care of this, right?
16:19:15 sure
16:19:28 can You maybe open a launchpad bug for it? Just for tracking purposes
16:19:33 yes of course
16:19:36 thx
16:20:14 #action ralonsoh to open LP about adding OF monitor to functional tests
16:20:29 ok, I think we can move on to the next topic then
16:20:31 #topic Stadium projects
16:20:47 regarding the tempest-plugin migration I don't have any updates
16:21:06 do You have anything related to stadium projects?
16:21:22 should we start tracking the py2 support removal and zuul v3 job conversion goals in etherpads (either new or reused)?
16:21:43 Or should we wait until after Shanghai for those to be finalized?
16:22:12 I think we can start doing that, based on what was announced recently by gmann
16:22:37 but even if we want to start it "now", it will effectively be after the PTG :)
16:22:44 true, true :-)
16:23:07 ok, so maybe let's start talking about it at the meeting after the PTG
16:23:07 unless everything is completed by the time we fly back :)
16:23:10 njohnston: will You maybe prepare such an etherpad to track progress?
16:23:38 bcafarel: yeah :)
16:23:46 #action njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:23:52 thx njohnston
16:24:54 ok, let's move on then
16:24:56 #topic Grafana
16:25:16 I have only one thing to say about grafana
16:25:26 infra had an issue with disk space for a few days
16:25:35 so we are missing data from the last few days
16:25:44 it started working yesterday evening
16:26:41 now our numbers there look fine, but in fact we don't have much recent data to check
16:26:43 yeah, tough to make judgements
16:27:06 exactly
16:27:22 I see one thing: we should remove the py27 jobs from it, as we removed them from the queues
16:27:27 I will do it today
16:27:40 #action slaweq to send patch to remove py27 jobs from grafana
16:27:59 +1
16:28:27 anything else You want to add/ask about grafana?
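[Editor's note: a minimal sketch of the "external service" idea slaweq floats under the first action item above -- run `ovs-ofctl monitor <bridge> watch: --monitor` per bridge and log its output to a file, dstat-style. This is not the class proposed in https://review.opendev.org/#/c/689150/; the function name and log path are hypothetical.]

    # Hypothetical helper, not the monitor class from review 689150: spawn
    # "ovs-ofctl monitor <bridge> watch: --monitor" and append every OpenFlow
    # change it reports to a plain text log file.
    import subprocess

    def log_openflow_changes(bridge_name, log_path):
        cmd = ['ovs-ofctl', 'monitor', bridge_name, 'watch:', '--monitor']
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        with open(log_path, 'a') as log_file:
            for line in proc.stdout:
                # a real implementation would probably clean/filter this
                # output, as ralonsoh notes above
                log_file.write('%s %s\n' % (bridge_name, line.rstrip()))
        return proc.wait()

    # e.g. log_openflow_changes('br-int', '/opt/stack/logs/of-monitor-br-int.log')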
16:29:00 nope
16:29:08 no
16:29:17 let's just wait for it to fill again :)
16:29:27 ok, so let's move on then
16:29:33 #topic fullstack/functional
16:29:45 I found a few new issues for You :)
16:30:11 the first one is something which we already saw in the past
16:30:13 problem with the "AttributeError: 'str' object has no attribute 'content_type'" error again, see
16:30:18 https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_12f/690908/1/check/neutron-functional/12f8c37/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_l3_tc_lib.TcLibTestCase.test_get_filter_id_for_ip.txt.gz
16:30:38 this one is very curious
16:30:50 because this is happening in the test case result return
16:31:08 weird
16:31:22 IMO, this is something related to testtools (but I can't confirm)
16:32:02 btw, I don't know if the test case failed or not (this would be a good indicator of what is in the content)
16:32:16 yeah, it is strange
16:32:21 but if You look at http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AttributeError%3A%20'str'%20object%20has%20no%20attribute%20'content_type'%5C%22
16:32:33 it seems that this happens only on neutron functional/fullstack jobs :/
16:32:51 ok, so we have a pattern, pointing to us....
16:33:14 yep
16:33:26 I will open an LP for this one
16:33:32 as it happens from time to time
16:33:40 maybe we will find a fix for it
16:34:02 or some volunteer to work on this then :)
16:34:17 I can take a look
16:34:29 #action slaweq to open LP related to "AttributeError: 'str' object has no attribute 'content_type'" error
16:34:45 ralonsoh: thx, I will ping You with the link to the LP when I open it
16:34:49 thanks
16:34:59 if You have time, it would be great if You could check it
16:35:01 thx a lot
16:36:11 the next failure which I found is
16:36:13 https://0c68218832dc7cac70c7-9752c849fa19cb3b4ae0f2b2e19d3d65.ssl.cf2.rackcdn.com/691710/2/check/neutron-functional/15b9600/testr_results.html.gz
16:36:33 but in this case I don't see anything obvious in the test's log
16:36:35 https://0c68218832dc7cac70c7-9752c849fa19cb3b4ae0f2b2e19d3d65.ssl.cf2.rackcdn.com/691710/2/check/neutron-functional/15b9600/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_good_address_allocation.txt.gz
16:37:15 slaweq, maybe
16:37:21 in assert_good_allocation_for_port
16:37:42 what we need to do is retrieve the interface name and run dhclient in the wait loop
16:37:52 not only check the IP list
16:38:44 but when You run dhclient it will wait for a DHCP reply and retry a couple of times, no?
16:38:55 so it should already be like a "wait loop"
16:39:33 I can take a closer look at those logs tomorrow
16:41:04 thx ralonsoh, but please don't waste too much of Your time on it, I saw it only once in the gate so far
16:41:11 ok
16:41:27 ok, let's move on
16:41:31 fullstack tests now
16:41:46 I found one new bug: https://bugs.launchpad.net/neutron/+bug/1850292
16:41:46 Launchpad bug 1850292 in neutron "Fullstack test can try to use IP address from outside the subnet's allocation pool" [Medium,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:41:55 I already have an almost-working patch for it
16:43:00 but it doesn't seem very urgent to me, as it doesn't hit that often
16:43:25 that's all about functional/fullstack on my side
16:43:34 do You have anything else to add/ask?
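[Editor's note: a rough sketch of ralonsoh's "run dhclient in the wait loop" suggestion for the failed DHCPAgentOVSTestCase.test_good_address_allocation run above -- not the actual test code. Only wait_until_true is assumed to be neutron's existing polling helper; _run_dhclient and _list_ips are hypothetical test helpers.]

    # Idea only: instead of just polling the IP list, re-trigger dhclient on
    # the port's interface inside the wait loop until the expected address
    # shows up (or the timeout hits).
    from neutron.common import utils as common_utils

    def _assert_good_allocation(self, namespace, interface_name, expected_ip):
        def _ip_is_allocated():
            self._run_dhclient(namespace, interface_name)  # hypothetical helper
            return expected_ip in self._list_ips(
                namespace, interface_name)  # hypothetical helper

        common_utils.wait_until_true(_ip_is_allocated, timeout=60, sleep=5)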
16:43:59 no
16:44:13 ok, let's move on then
16:44:19 #topic Tempest/Scenario
16:44:20 not on this topic
16:44:35 njohnston: not on which topic?
16:44:56 slaweq: I was responding to "do You have anything else to add/ask?", just slowly
16:45:02 ahh, ok
16:45:03 :)
16:45:19 ok, going back to scenario tests
16:45:28 I found that the multicast scenario test is failing often
16:45:41 so I reported a bug: https://bugs.launchpad.net/neutron/+bug/1850288
16:45:41 Launchpad bug 1850288 in neutron "scenario test test_multicast_between_vms_on_same_network fails" [Critical,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:45:54 and I proposed to make it unstable for now: https://review.opendev.org/691855
16:46:14 but later I want to investigate why it fails from time to time
16:47:11 I also saw various connectivity issues, like:
16:47:17 https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3ae/682483/1/check/tempest-slow-py3/3ae72c6/testr_results.html.gz
16:47:24 and
16:47:26 https://2a0fd24cd939d3b06641-34d5a3d9e2a9ca67eb62c8365f7602e7.ssl.cf5.rackcdn.com/679813/3/check/tempest-slow-py3/2313102/testr_results.html.gz
16:48:38 in both examples, one VM obtains an IP and the other one doesn't
16:48:59 ralonsoh: in both cases it was IMO the same VM
16:49:07 but it was resized/cold migrated
16:49:13 so initially it got an IP
16:49:15 hmmm correct
16:49:20 and after resize/migration it failed
16:49:32 that's how I see it
16:49:41 but I only checked it briefly for now
16:50:19 but I think there is potentially some problem (a race) there and we should check that
16:50:27 is there maybe any volunteer?
16:51:23 (distracted whistling)
16:51:34 (looking intently at the ceiling)
16:51:53 ok, I will open an LP for that and we will see
16:51:53 (wanders away aimlessly)
16:52:12 LOL
16:52:27 You guys rock in such moments :P
16:52:27 we're quite a bunch, aren't we
16:52:38 hehehehe
16:52:55 njohnston: yeah, indeed :P
16:53:08 that is some team spirit at least :)
16:53:23 #action slaweq to report LP about connectivity issues after resize/migration
16:53:32 bcafarel: yeah
16:54:13 👍
16:54:37 ok, now some good news
16:54:53 mlavalle recently finally found the root cause and fixed the issue with router migrations
16:54:57 the patch is merged: https://review.opendev.org/#/c/691498/
16:55:01 \o/
16:55:13 good one
16:55:15 so I hope we should have better numbers on the DVR multinode scenario job now
16:55:33 ralonsoh: yes, that was an interesting issue
16:55:43 in fact it was a bug in our code, not in the tests
16:56:20 ok, and the last one from me
16:56:29 it's related more to grenade jobs
16:56:47 I saw that some jobs are failing due to an issue in the nova scheduler
16:56:52 so I opened a bug: https://bugs.launchpad.net/nova/+bug/1850291
16:56:52 Launchpad bug 1844929 in OpenStack Compute (nova) "duplicate for #1850291 grenade jobs failing due to "Timed out waiting for response from cell" in scheduler" [High,Confirmed]
16:57:00 * bcafarel waits for the https://review.opendev.org/#/c/691498/ backports to appear
16:57:11 but mriedem just marked it as a duplicate of another one, so it is already a known issue for the nova team
16:57:59 and that's all on my side for today
16:58:07 just in time
16:58:09 I have one thing. Looking at the numbers for the *-uwsgi jobs, they closely mirror their non-uwsgi versions but often are a bit lower. I'd like to propose we make them voting. I'll propose a change and we can conduct the conversation in Gerrit.
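[Editor's note: "make it unstable" above means decorating the test so that a failure becomes a skip pointing at the tracking bug instead of failing the job; tempest ships a decorator for this. A hand-rolled illustration of the idea only -- not the actual change in https://review.opendev.org/691855 and not tempest's implementation:]

    # Illustration: wrap a flaky test method so a failure turns into a skip
    # that references the tracking bug, while a pass still counts as a pass.
    import functools

    def unstable_test(bug):
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                try:
                    return test_method(self, *args, **kwargs)
                except Exception as exc:
                    # self is a TestCase, so skipTest() raises SkipTest
                    self.skipTest('Marked unstable, see bug %s: %s' % (bug, exc))
            return wrapper
        return decorator

    # hypothetical usage:
    # @unstable_test(bug='1850288')
    # def test_multicast_between_vms_on_same_network(self):
    #     ...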
16:58:14 next week I will cancel the CI meeting as we will be at the PTG
16:58:24 njohnston, are you sure?
16:58:41 ok, let's talk in gerrit
16:59:01 njohnston++
16:59:02 ralonsoh: I set up some custom graphite searches and it looked pretty good to me. The tempest uwsgi job has been quite stable since the fix for it went in in mid-August
16:59:15 then perfect for me
16:59:16 nice
16:59:28 njohnston: I remember that there were quite a lot of timeouts on this job some time ago
16:59:35 but I don't know exactly how it is now
16:59:41 let's talk about it on gerrit
16:59:49 ok, thx for attending
16:59:51 it's much better, as you can see in the current Grafana - the uwsgi lines are low and tight
16:59:54 thanks all
16:59:55 and have a great week
17:00:00 safe travels all
17:00:00 o/
17:00:01 bye!
17:00:02 o/
17:00:03 o/
17:00:03 #endmeeting