16:00:24 <slaweq> #startmeeting neutron_ci
16:00:27 <openstack> Meeting started Tue Mar 19 16:00:24 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 <openstack> The meeting name has been set to 'neutron_ci'
16:00:33 <mlavalle> o/
16:00:34 <slaweq> hello
16:00:51 <bcafarel> o/
16:00:53 <njohnston> o/
16:01:17 <slaweq> ok, lets start
16:01:25 <slaweq> first of all Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:01:27 <slaweq> Please open now :)
16:01:44 <mlavalle> LOL
16:02:25 <slaweq> #topic Actions from previous meetings
16:02:36 <slaweq> first action:
16:02:38 <slaweq> ralonsoh to take a look at fullstack dhcp rescheduling issue https://bugs.launchpad.net/neutron/+bug/1799555
16:02:40 <openstack> Launchpad bug 1799555 in neutron "Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout" [High,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
16:03:01 <slaweq> I know that ralonsoh was looking into this one
16:03:27 <slaweq> but there was no data on why the network wasn't rescheduled to a new dhcp agent after one agent went down
16:03:49 <slaweq> so he sent a DNM patch https://review.openstack.org/#/c/643079/ to get some more logs
16:03:58 * mlavalle has to leave 45 minutes after the hour
16:04:23 <slaweq> so he will probably continue this work as he is assigned to the bug
16:04:34 <slaweq> next one was:
16:04:36 <slaweq> slaweq to talk with tmorin about networking-bagpipe
16:04:53 <mlavalle> you sent an email, didn't you?
16:04:57 <slaweq> I sent an email to Thomas today because I couldn't catch him on IRC
16:05:30 <slaweq> if he doesn't respond, I will probably start this work for the bagpipe project - it shouldn't be a lot of work to do
16:06:00 <slaweq> ok, and the last one from last week was:
16:06:02 <slaweq> ralonsoh to take a look at update_revises unit test failures
16:06:39 <slaweq> IIRC this patch should address the issue: https://review.openstack.org/#/c/642869/
16:06:55 <slaweq> so thx ralonsoh we should be good with this :)
16:07:30 <mlavalle> yeap, it looks like it
16:07:42 <slaweq> and those were all the actions from the previous week
16:07:50 <slaweq> any questions/comments?
16:08:01 <mlavalle> not from me
16:08:20 <slaweq> ok, lets go then to the next topic
16:08:22 <slaweq> #topic Python 3
16:08:27 <slaweq> Stadium projects etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:08:31 <slaweq> njohnston: any updates?
16:08:52 <njohnston> no updates on this for this week.
16:09:13 <slaweq> ok
16:09:15 <njohnston> I hope to spend more time with it later this week
16:09:28 <slaweq> sure, that is not urgent for now
16:09:31 <mlavalle> so I volunteered last time
16:09:41 <slaweq> we have more important things currently IMO :)
16:09:41 <mlavalle> for one of them
16:09:51 <mlavalle> but the etherpad seems to have changed
16:10:05 <njohnston> mlavalle: wasn’t that the tempest plugin work?
16:10:09 <slaweq> mlavalle: I think You volunteered for something else, trust me ;)
16:10:17 <bcafarel> :)
16:10:27 <mlavalle> yeah, you are right
16:10:42 <slaweq> mlavalle: You should tell that to my wife :P
16:10:50 <mlavalle> LOL
16:10:50 <njohnston> LOL
16:11:13 <slaweq> ok, lets move on to the next topic
16:11:15 <slaweq> #topic Ubuntu Bionic in CI jobs
16:11:32 <slaweq> last week, I think, the patch which switched all legacy jobs to run on Bionic was merged
16:12:26 <slaweq> yep, it's here: https://review.openstack.org/#/c/641886/
16:12:39 <slaweq> in Neutron we are good with it
16:13:05 <slaweq> as for stadium projects, we need https://review.openstack.org/#/c/642456/ for the fullstack job in networking-bagpipe
16:13:25 <bcafarel> slaweq: I still see ubuntu-xenial-2-node in .zuul.yaml ?
16:13:29 <slaweq> but that job is non-voting there, and even with this patch it's failing for some other reason
16:13:37 <slaweq> bcafarel: where?
16:14:13 <bcafarel> https://github.com/openstack/neutron/blob/master/.zuul.yaml#L234
16:14:14 <slaweq> ahh, right bcafarel
16:14:32 <slaweq> so we need to switch our grenade jobs to run on bionic too
16:14:58 <slaweq> it's like that because we have a nodeset specified for them in our .zuul.yaml file
16:15:18 <slaweq> any volunteer to switch that to bionic?
16:15:25 <slaweq> if no, I can do that
16:15:38 <bcafarel> since I raised it I can try to fix it :)
16:15:43 <slaweq> thx bcafarel
16:16:05 <slaweq> #action bcafarel to switch neutron-grenade multinode jobs to bionic nodes
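(A minimal sketch of the kind of change captured in the action above, assuming the grenade job definitions in neutron's .zuul.yaml simply pin the ubuntu-xenial-2-node nodeset; the job name, parent and bionic nodeset name below are illustrative assumptions, not copied from the actual file:)

    # sketch only: switch the pinned two-node nodeset from Xenial to Bionic
    - job:
        name: neutron-grenade-multinode        # assumed job name
        parent: legacy-dsvm-base-multinode     # assumed parent job
        nodeset: openstack-two-node-bionic     # previously: ubuntu-xenial-2-node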
16:16:29 <slaweq> among the other stadium projects there is also an issue with networking-midonet
16:17:06 <mlavalle> should we ask yamamoto?
16:17:11 <slaweq> but they are aware of it: https://midonet.atlassian.net/browse/MNA-1344 so I don't think we should bother a lot with that
16:17:29 <mlavalle> cool
16:17:30 <slaweq> mlavalle: yamamoto knows about these issues, I already emailed him some time ago
16:17:45 <njohnston> yes there was an ML thread
16:18:06 <slaweq> and that's basically all about the switch to bionic
16:18:21 <slaweq> questions/comments?
16:18:31 <mlavalle> not from me
16:18:35 <njohnston> nope
16:18:47 <slaweq> ok, next topic then
16:18:49 <slaweq> #topic tempest-plugins migration
16:18:54 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo
16:19:01 <slaweq> mlavalle: here You volunteered :)
16:19:14 <slaweq> any updates about that?
16:19:45 <mlavalle> I intend to work on this towards the end of the week
16:19:48 <njohnston> I pushed a couple of changes for fwaas
16:19:49 <bcafarel> I think njohnston is the furthest ahead there
16:20:19 <njohnston> I need to work on the zuul job definitions
16:20:26 <slaweq> yep, I saw Your "super-WIP" patch today :)
16:20:33 <slaweq> it's pretty red
16:20:48 <njohnston> yeah
16:21:22 <njohnston> I’ll fiddle with it later in the week
16:21:26 <slaweq> but great that You started this work already :)
16:21:31 <slaweq> thx njohnston
16:22:22 <slaweq> ok, so lets move on to the next topic then
16:22:24 <slaweq> #topic Grafana
16:22:39 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:22:44 <slaweq> You have it already :)
16:23:59 <slaweq> the worst thing IMO is the fullstack job in the check queue
16:24:06 <slaweq> which is at quite high failure rates again
16:24:31 <slaweq> but today there is also a spike in the neutron-tempest-dvr-ha-multinode-full job
16:25:14 <mlavalle> yeah, fullstack is where the action / problem is
16:25:25 <slaweq> mlavalle: yes
16:25:51 <slaweq> regarding neutron-tempest-dvr-ha-multinode-full I would say - lets wait and see how it goes
16:27:07 <slaweq> there weren't many jobs run today, so this spike may just be some bad coincidence
16:27:53 <slaweq> on the good side: functional jobs finally look good currently :)
16:28:02 <slaweq> any other comments on that?
16:28:08 <mlavalle> not from me
16:28:12 <slaweq> or can we move on to talk about fullstack?
16:28:43 <slaweq> ok, lets move on then
16:28:44 <mlavalle> let's move on
16:28:45 <slaweq> #topic fullstack/functional
16:28:59 <slaweq> I was looking into some fullstack failures today
16:29:11 <slaweq> and I identified basically 2 new (for me at least) issues
16:29:19 <slaweq> https://bugs.launchpad.net/neutron/+bug/1820865
16:29:20 <openstack> Launchpad bug 1820865 in neutron "Fullstack tests are failing because of "OSError: [Errno 22] failed to open netns"" [Critical,Confirmed]
16:29:29 <slaweq> this one is a problem with opening netns
16:29:47 <slaweq> and second is https://bugs.launchpad.net/neutron/+bug/1820870
16:29:48 <openstack> Launchpad bug 1820870 in neutron "Fullstack tests are failing with error connection to rabbitmq" [High,Confirmed]
16:30:03 <slaweq> that one is related to some issues with connectivity from agents to rabbitmq
16:30:30 <slaweq> both happen quite often now, so I set them to Critical and High priority for now
16:30:54 <slaweq> but I think we need some volunteers for them, as I won't have cycles for both during this week
16:31:01 <mlavalle> do we need manpower for them?
16:31:11 <slaweq> mlavalle: yes, definitely
16:31:17 <mlavalle> assign me one
16:31:37 <mlavalle> the one you think is more important
16:31:50 <mlavalle> please
16:32:01 <slaweq> I marked bug/1820865 as Critical because I think it happens more often
16:32:10 <mlavalle> ok, I take it
16:32:18 <slaweq> great, thx mlavalle
16:32:43 * mlavalle assigned it to himself
16:32:56 <slaweq> if I have some time, I will try to take a look at the second one, but I can't promise that
16:33:13 <slaweq> so I will not assign it to myself yet
16:33:16 <mlavalle> if I have time I will also try to get to the second one
16:33:31 <mlavalle> I'll ping you if I get there
16:33:38 <slaweq> #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:33:39 <openstack> Launchpad bug 1820870 in neutron "Fullstack tests are failing with error connection to rabbitmq" [High,Confirmed]
16:33:58 <slaweq> #undo
16:33:59 <openstack> Removing item from minutes: #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:34:11 <mlavalle> yeap, it's the other one
16:34:18 <slaweq> #action mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820865
16:34:19 <openstack> Launchpad bug 1820865 in neutron "Fullstack tests are failing because of "OSError: [Errno 22] failed to open netns"" [Critical,Confirmed] - Assigned to Miguel Lavalle (minsel)
16:34:30 <slaweq> #action slaweq/mlavalle to check https://bugs.launchpad.net/neutron/+bug/1820870
16:34:33 <slaweq> now it's good :)
16:34:37 <mlavalle> yes
16:34:38 <slaweq> thx mlavalle for help
16:35:15 <slaweq> I hope that when those 2 are fixed, we will be in better shape with fullstack too
16:35:20 <slaweq> any questions/comments?
16:35:37 <mlavalle> nope
16:35:59 <slaweq> ok, lets move to the next topic then
16:36:01 <slaweq> #topic Tempest/Scenario
16:36:20 <slaweq> mlavalle: as https://review.openstack.org/#/c/636710/ is merged, did You send a patch to unmark the tests from https://bugs.launchpad.net/neutron/+bug/1789434 as unstable?
16:36:21 <openstack> Launchpad bug 1789434 in neutron "neutron_tempest_plugin.scenario.test_migration.NetworkMigrationFromHA failing 100% times" [High,In progress] - Assigned to Miguel Lavalle (minsel)
16:36:39 <mlavalle> slaweq: I'll do it today
16:37:08 <slaweq> mlavalle: thx
16:37:44 <slaweq> another thing related to this is that neutron-tempest-plugin-dvr-multinode-scenario is still failing almost always
16:38:12 <slaweq> I was checking a couple of results from such failed jobs today
16:38:29 <slaweq> and there is no single reason. We probably need someone to go through some of the failed tests and report bugs for them.
16:38:36 <slaweq> any volunteers?
16:38:44 <mlavalle> I want to do that
16:38:52 <slaweq> thx mlavalle :)
16:38:56 <mlavalle> as long as I'm allowed to do it slowly
16:39:22 <slaweq> maybe You will be able to identify some groups of failures and report them as bugs so we can track them later
16:39:31 <mlavalle> my goal is a little broader
16:39:41 <mlavalle> I want to make that job stable generally
16:39:48 <slaweq> :)
16:39:59 <slaweq> that would definitely be great
16:40:40 <slaweq> #action mlavalle to debug the reasons for neutron-tempest-plugin-dvr-multinode-scenario failures
16:41:04 <slaweq> ok, and that's all from me regarding scenario jobs
16:41:08 <slaweq> questions/comments?
16:41:56 <bcafarel> none here
16:42:02 <slaweq> ok
16:42:12 <slaweq> so one last thing I wanted to mention is
16:42:32 <slaweq> thanks to njohnston we have fixed our first os-ken bug: https://storyboard.openstack.org/#!/story/2005142 - thx a lot njohnston :)
16:42:41 <mlavalle> ++
16:43:05 <slaweq> and on this optimistic note I think we can finish our meeting a bit early today :)
16:43:15 <mlavalle> Thanks everybody
16:43:15 <slaweq> and let mlavalle go where he needs to go
16:43:17 <njohnston> :)
16:43:21 <mlavalle> o/
16:43:21 <slaweq> thanks for attending
16:43:26 <slaweq> #endmeeting