16:00:30 <slaweq> #startmeeting neutron_ci
16:00:30 <openstack> Meeting started Tue Nov 12 16:00:30 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 <openstack> The meeting name has been set to 'neutron_ci'
16:01:47 <slaweq> ralonsoh: njohnston bcafarel: CI meeting, are You around?
16:01:55 <njohnston> o/
16:01:58 <slaweq> o/
16:02:02 <ralonsoh> hi
16:02:31 <slaweq> welcome back after the ptg :)
16:02:47 <slaweq> I think I finally recovered from jet lag and (very long) trip
16:03:05 <slaweq> let's start the meeting then
16:03:16 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:03:32 <slaweq> please open it now and we can move forward
16:03:34 <slaweq> #topic Actions from previous meetings
16:04:35 <slaweq> slaweq to investigate failed neutron.tests.fullstack.test_qos.TestDscpMarkingQoSOvs
16:05:03 <slaweq> I tried, but there are no logs available anymore, so I thought that if this happens again, I will get back to it.
16:05:18 <slaweq> and I just found a new occurrence of this issue today
16:05:29 <slaweq> https://b56892e10e3e61a452c2-e4b54cf82b19c70bded9dbfe71e9b8f5.ssl.cf2.rackcdn.com/601336/41/check/neutron-fullstack/cfd8a8e/controller/logs/dsvm-fullstack-logs/TestDscpMarkingQoSLinuxbridge.test_dscp_marking_packets.txt.gz so I will investigate it now
16:05:41 <slaweq> #action slaweq to investigate failed neutron.tests.fullstack.test_qos.TestDscpMarkingQoSOvs
16:06:52 <slaweq> next one was
16:06:54 <slaweq> slaweq to check strange "EVENT OVSNeutronAgentOSKenApp->ofctl_service GetDatapathRequest send_event" log lines in neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_port_creation_with_dscp_marking
16:07:00 <slaweq> I checked it a bit but I don't know exactly what happened there. It seems to me that it probably didn't create br-tun properly and was still trying to get the dp id for it. If this happens more often, I will investigate again...
16:08:37 <slaweq> next one
16:08:39 <slaweq> ralonsoh to open LP about adding OF monitor to functional tests
16:09:09 <ralonsoh> the OF monitor review is still pending
16:09:22 <ralonsoh> https://review.opendev.org/#/c/689150/
16:10:22 <slaweq> thx, I will review it this week for sure
16:10:25 <ralonsoh> slaweq, https://bugs.launchpad.net/neutron/+bug/1848500
16:10:25 <openstack> Launchpad bug 1848500 in neutron "Implement an OpenFlow monitor" [Wishlist,In progress] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:10:41 <slaweq> and thx for reporting it on LP too
16:10:59 <slaweq> next one
16:11:01 <slaweq> njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:12:15 <njohnston> I'll be working on that this week
16:12:21 <slaweq> njohnston: ok, thx
16:12:42 <slaweq> #action njohnston prepare etherpad to track stadium progress for zuul v3 job definition and py2 support drop
16:12:51 <slaweq> ok, next one
16:12:53 <slaweq> slaweq to send patch to remove py27 jobs from grafana
16:13:00 <slaweq> Patch https://review.opendev.org/#/c/691973/
16:13:18 <slaweq> it is already merged
16:13:22 <slaweq> next one
16:13:24 <slaweq> slaweq to open LP related to "AttributeError: 'str' object has no attribute 'content_type'" error
16:13:32 <slaweq> Bug https://bugs.launchpad.net/neutron/+bug/1850558
16:13:32 <openstack> Launchpad bug 1850558 in neutron ""AttributeError: 'str' object has no attribute 'content_type' in functional tests" [Medium,Fix released] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:13:56 <slaweq> it is already fixed by ralonsoh
16:14:02 <slaweq> thx ralonsoh for taking care of it
16:14:07 <ralonsoh> np
16:14:16 <slaweq> and the last one:
16:14:18 <slaweq> slaweq to report LP about connectivity issues after resize/migration
16:14:23 <slaweq> Bug reported https://bugs.launchpad.net/neutron/+bug/1850557
16:14:23 <openstack> Launchpad bug 1850557 in neutron "DHCP connectivity after migration/resize not working" [Medium,Confirmed]
16:15:04 <slaweq> this may be a similar issue to the one described in https://bugs.launchpad.net/neutron/+bug/1849479
16:15:04 <openstack> Launchpad bug 1849479 in neutron "neutron l2 to dhcp lost when migrating in stable/stein 14.0.2" [Medium,New] - Assigned to Slawek Kaplonski (slaweq)
16:15:13 <slaweq> but I didn't have any time to work on it
16:15:19 <slaweq> maybe I will have some time next week
16:15:35 <slaweq> but if anyone else wants to take a look, feel free to take it :)
16:17:00 <slaweq> #action slaweq to take a look at connectivity issues after resize/migration
16:17:12 <slaweq> ok, that's all from last meeting
16:17:19 <slaweq> let's move on
16:17:21 <slaweq> #topic Stadium projects
16:17:52 <slaweq> there is no update on moving tempest plugins to neutron-tempest-plugin
16:18:25 <slaweq> we still need to delete old tests from neutron-dynamic-routing repo (step 2) and do both steps for vpnaas
16:19:12 <slaweq> and I think that we can also use this topic to track progress on dropping py2 support and migration to zuulv3
16:19:18 <njohnston> I can do the n-d-r deletion
16:19:20 <slaweq> but this will start next week
16:19:27 <slaweq> njohnston: thx a lot
16:19:50 <njohnston> #action njohnston delete old tests from neutron-dynamic-routing repo
16:22:09 <slaweq> anything else You want to add regarding stadium projects?
16:22:21 <njohnston> nope
16:23:02 <njohnston> we'll see if any of the puppies don't get adopted, that could make stadium maintenance easier
16:23:28 <slaweq> njohnston: yes
16:23:36 <slaweq> I will send emails about them later this week
16:23:56 <slaweq> but maybe we will have to wait until around Ussuri-2 with the deprecation of some of them
16:24:09 <njohnston> probably
16:24:48 <slaweq> ok, let's move on
16:24:50 <slaweq> #topic Grafana
16:26:16 <slaweq> one problem which I see is that neutron-tempest-plugin-dvr-multinode-scenario (non-voting) is failing 100% of the time again
16:28:13 <njohnston> yes, also the periodic jobs for os-ken seem to be having issues (I forget if we talked about this already)
16:28:30 <slaweq> yes, but this is os-ken with neutron-dynamic-routing
16:28:36 <slaweq> and this one has been broken for a few days
16:28:42 <slaweq> let me find LP
16:29:09 <njohnston> not just that, it also looks like every other run of neutron-tempest-with-os-ken-master fails too
16:29:21 <slaweq> https://bugs.launchpad.net/neutron/+bug/1850626
16:29:21 <openstack> Launchpad bug 1850626 in neutron "neutron-dynamic-routing: TypeError: bind() takes 4 positional arguments but 5 were given" [Critical,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:30:16 <njohnston> 4 fails out of the last 7 tries for neutron-tempest-with-os-ken-master
16:31:27 <slaweq> njohnston: yes, once it was the global timeout
16:31:36 <slaweq> once some invalid backup error from cinder
16:32:13 <slaweq> once some timeout when processing neutron request
16:32:21 <slaweq> so there are various random issues there
16:32:33 <slaweq> I think that we should first move this job from legacy to zuulv3
16:32:43 <slaweq> and then maybe run only neutron-related tests on it
16:32:51 <njohnston> makes sense
16:32:52 <slaweq> then it should hopefully be better
16:33:02 <slaweq> I will take care of this
16:33:29 <slaweq> #action slaweq to move neutron-tempest-with-os-ken-master to zuulv3 syntax and switch to run neutron related tests only
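For reference, a rough sketch of what a zuulv3 definition for that job could look like, assuming the usual devstack-tempest parent; the test regex and project list here are only illustrative, not the actual patch:

    - job:
        name: neutron-tempest-with-os-ken-master
        parent: devstack-tempest
        # checked out from source, so the job runs against os-ken master
        required-projects:
          - openstack/neutron
          - openstack/os-ken
          - openstack/tempest
        vars:
          tox_envlist: full
          # limit the run to network-related tests only
          tempest_test_regex: ^tempest\.(api|scenario)\.network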
16:35:26 <slaweq> do You maybe have anything else related to grafana?
16:35:38 <slaweq> if not, I think we can continue with next topics
16:35:56 <njohnston> go ahead :-)
16:36:01 <slaweq> thx :)
16:36:03 <slaweq> #topic fullstack/functional
16:36:24 <slaweq> I found only one new issue in functional tests
16:36:26 <slaweq> neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_good_address_allocation
16:36:30 <slaweq> https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_02c/693323/1/check/neutron-functional-with-uwsgi/02c5b9d/testr_results.html.gz
16:36:37 <slaweq> did You maybe see something like that before?
16:38:37 <njohnston> no, never seen a timeout there
16:39:39 <slaweq> logstash also shows only this one occurrence of such an issue
16:39:58 <slaweq> so I don't think we really need to report it and spend time on it now
16:40:06 <slaweq> let's see if that happens more often
16:40:09 <slaweq> do You agree?
16:40:32 <njohnston> +1
16:41:15 <slaweq> ok, let's move on
16:41:17 <slaweq> #topic Tempest/Scenario
16:41:30 <slaweq> I found few issues from last week
16:41:42 <slaweq> first of all, a funny issue with tempest-slow-py3
16:41:53 <slaweq> it seems that on the compute node it still runs on python 2.7
16:42:08 <slaweq> I found it when I checked the logs: https://ba2b4fc750309c3f0760-e6e77a9441ae9cccde1b8ed58f97fc24.ssl.cf2.rackcdn.com/690098/6/check/tempest-slow-py3/4a5cd35/compute1/logs/screen-q-agt.txt.gz
16:42:23 <slaweq> on the controller node py36 is already used
16:42:31 <slaweq> I will propose patch to fix that this week
16:44:16 <slaweq> #action slaweq to fix python version in tempest-slow-py3 job
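The usual cause of this in multinode devstack-based jobs is that USE_PYTHON3 is only set for the controller node; a sketch of the kind of override that is typically needed, assuming tempest-slow-py3 inherits from the devstack multinode base jobs (where the fix actually lands may differ):

    - job:
        name: tempest-slow-py3
        vars:
          devstack_localrc:
            USE_PYTHON3: true
        group-vars:
          # the subnode group covers the compute node(s), which otherwise
          # keep the default (python 2.7) interpreter
          subnode:
            devstack_localrc:
              USE_PYTHON3: true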
16:44:44 <slaweq> as for other issues, I checked why this dvr multinode job is failing 100% of the time
16:44:58 <slaweq> it seems that it is failing every time on neutron_tempest_plugin.scenario.test_migration.NetworkMigrationFromHA tests
16:45:07 <slaweq> I think this is related (again) to some wrong rootwrap filters:
16:45:09 <slaweq> https://12dcf03ddbc1f4ac6c48-3214406b4544fce2f9d807df6ea4fe3f.ssl.cf2.rackcdn.com/677166/13/check/neutron-tempest-plugin-dvr-multinode-scenario/31d556c/compute1/logs/screen-q-l3.txt.gz
16:45:25 <slaweq> does anyone want to investigate, or should I assign it to myself?
16:45:54 <njohnston> not sure I will have time, so I don't want to commit, but I will try
16:46:04 <slaweq> thx njohnston
16:46:13 <slaweq> I will assign it to You just as a reminder
16:46:18 <njohnston> sounds good
16:46:29 <slaweq> #action njohnston to check failing NetworkMigrationFromHA in multinode dvr job
16:46:46 <slaweq> and that's all I have for today
16:47:22 <slaweq> according to our discussion during the PTG, this week and next I will spend some time on cleaning up some of the jobs in our CI
16:47:44 <slaweq> and on checking multi/single node jobs to see if we can replace some single node jobs with multinode ones
16:47:56 <slaweq> if I have anything like that to review, I will ping You
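As a reference point for that comparison, making a single node job multinode in zuulv3 mostly comes down to giving it a two-node nodeset; a minimal sketch (the job name and parent are assumptions, labels are just examples):

    - job:
        name: neutron-tempest-example-multinode
        parent: tempest-multinode-full-py3
        nodeset:
          nodes:
            - name: controller
              label: ubuntu-bionic
            - name: compute1
              label: ubuntu-bionic
          groups:
            # devstack multinode roles expect the extra node(s) in "subnode"
            - name: subnode
              nodes:
                - compute1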
16:48:02 <njohnston> +1
16:48:39 <slaweq> and if one of You can +W on https://review.opendev.org/#/c/681202/3 that would be great :)
16:49:26 <njohnston> +W
16:49:34 <slaweq> but it's not urgent as I just noticed that the depends-on devstack patch needs some additional work
16:49:39 <slaweq> thx njohnston :)
16:49:53 <slaweq> do You have anything else to talk about today?
16:50:19 <njohnston> nope!  quiet week with the Forum and PTG.
16:50:34 <slaweq> great
16:50:43 <slaweq> so I think we can finish a bit earlier today
16:50:49 <slaweq> thx for attending the meeting
16:50:54 <njohnston> thanks!
16:50:57 <slaweq> and have a great day
16:50:57 <ralonsoh> bye
16:51:00 <slaweq> o/
16:51:03 <slaweq> #endmeeting