16:00:24 <mlavalle> #startmeeting neutron_ci
16:00:25 <openstack> Meeting started Tue Jun 19 16:00:24 2018 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:29 <openstack> The meeting name has been set to 'neutron_ci'
16:00:35 <mlavalle> Hi there!
16:02:19 <njohnston> o/
16:02:25 <haleyb> hi
16:02:28 <mlavalle> hey
16:02:42 <mlavalle> so you guys are stuck with me today
16:03:02 <mlavalle> Poland is playing Senegal and slaweq is watching the game
16:03:07 <mlavalle> Go Poland!
16:03:43 <mlavalle> #topic Actions from previous meeting
16:04:09 <mlavalle> slaweq to debug failing test_ha_router_restart_agents_no_packet_lost fullstack test
16:04:41 <mlavalle> slaweq reported a bug: https://bugs.launchpad.net/neutron/+bug/1776459
16:04:42 <openstack> Launchpad bug 1776459 in neutron "TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack fails" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:04:55 <mlavalle> and hasn't been able to get to the root cause
16:06:10 <mlavalle> so the test was marked as unstable for the time being
16:06:36 <mlavalle> and there is a patch to help debugging the problem: https://review.openstack.org/#/c/575710/
16:06:37 <patchbot> patch 575710 - neutron - [Fullstack] Ensure connectivity to ext gw before a...
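A minimal sketch of how a flaky test like this is usually quarantined in the Neutron tree, assuming the unstable_test helper in neutron.tests.base; the decorator reference and the test body below are placeholders for illustration, not the actual patch:

```python
# Illustrative sketch only: quarantining a flaky fullstack test so it stops
# blocking the gate while the root cause is investigated.
# Assumes neutron.tests.base.unstable_test exists as used here; the test body
# is a placeholder, not the real TestHAL3Agent code.
from neutron.tests import base
from neutron.tests.fullstack import base as fullstack_base


class TestHAL3Agent(fullstack_base.BaseFullStackTestCase):

    @base.unstable_test("bug 1776459")
    def test_ha_router_restart_agents_no_packet_lost(self):
        # Restart the L3 agents and assert that connectivity to the
        # external gateway is never lost (details omitted in this sketch).
        pass
```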
16:09:46 <mlavalle> mlavalle was going to talk to the OVO team about the constant time test case
16:09:54 <mlavalle> I did ;-)
16:10:11 <mlavalle> lujinluo agreed to take it over and fix it
16:10:21 <mlavalle> she is out on vacation this week
16:10:32 <mlavalle> but will work on it as soon as she comes back
16:11:31 <mlavalle> #topic Grafana
16:11:48 <mlavalle> Let's look at Grafana: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:15:13 <manjeets> question about the gate and check queues: does the gate queue failure rate apply after the patch is approved?
16:15:27 <mlavalle> One thing that stands out is the 50% failure of unit tests in the gate queue. Is it the OVO test mentioned above^^^^?
16:16:23 <njohnston> I have noticed in the past week a lot of random failures in unit tests in the gate queue; often a recheck would result in a failure in a different set of tests
16:17:00 <njohnston> for example look at how many times https://review.openstack.org/#/c/570244/ had to be rechecked
16:17:01 <patchbot> patch 570244 - neutron - Use OVO in ml2/test_db (MERGED)
16:17:59 <mlavalle> yeah
16:18:17 <mlavalle> slaweq sent me an email with some of his comments for this meeting
16:18:22 <mlavalle> and he mentions the same
16:18:38 <mlavalle> he asks that if we see an issue more than two times
16:18:49 <mlavalle> let's file a bug so we can work on it
16:19:37 <njohnston> sounds good
16:20:41 <mlavalle> I am also noticing that we have a couple of panels with no data in Grafana
16:21:07 <njohnston> that's odd, I have data in all of mine
16:22:03 <mlavalle> ahh great
16:22:10 <mlavalle> so that's my connection
16:22:16 <manjeets> I think I see data in all panels too
16:22:18 <mlavalle> good
16:22:36 <mlavalle> #topic Stable branches
16:23:28 <mlavalle> It seems we have a few issues with the stable branches
16:24:00 <mlavalle> In Queens: https://bugs.launchpad.net/neutron/+bug/1777190
16:24:01 <openstack> Launchpad bug 1777190 in neutron "All neutron-tempest-plugin-api jobs for stable/queens fails" [Critical,Fix committed] - Assigned to Slawek Kaplonski (slaweq)
16:24:30 <mlavalle> I W+'d the fix for this one yesterday
16:25:33 <mlavalle> In Pike and Ocata we have https://bugs.launchpad.net/neutron/+bug/1777506
16:25:34 <openstack> Launchpad bug 1777506 in neutron "neutron-rally job failing for stable/pike and stable/ocata" [Critical,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
16:26:59 <mlavalle> There are a couple of fixes proposed:
16:27:03 <mlavalle> https://review.openstack.org/#/c/576451/
16:27:03 <patchbot> patch 576451 - neutron (stable/pike) - Use rally 0.12.1 release for stable/pike patches
16:27:19 <mlavalle> https://review.openstack.org/576567
16:27:19 <patchbot> patch 576567 - neutron (stable/ocata) - Use rally 0.12.1 release for stable/pike patches
16:27:35 <mlavalle> Let's take a look at them, haleyb if you have some time
16:28:31 <haleyb> ok
16:29:25 <mlavalle> slaweq also indicated that we should probably take a closer look at the stable branch queues to make sure they are in working order
16:30:07 <mlavalle> right now we have haleyb and me as stable reviewers
16:30:18 <mlavalle> I think we need another one
16:30:37 <haleyb> mlavalle: garyk has powers too
16:31:07 <haleyb> but we do need another
16:31:20 <mlavalle> haleyb: I noticed that one patch of yours got a +2 from me several weeks ago
16:31:29 <mlavalle> and it has been sitting there since then
16:31:46 <mlavalle> ok, let's fix that situation
16:31:53 <haleyb> +1
16:32:08 <mlavalle> and let's keep a closer eye on the stable queues
16:32:34 <mlavalle> #topic Open Discussion
16:32:43 <mlavalle> Anything else to discuss today?
16:33:04 <njohnston> would it make sense to set up a Grafana dashboard for the stable jobs?
16:33:37 <njohnston> yet another checklist item for releasing, but it would definitely give more visibility to error trends for stable releases
16:33:45 <mlavalle> njohnston: not a bad idea
16:33:55 <mlavalle> what do you think haleyb?
16:34:36 <manjeets> maybe not, since the patches for stable branches are limited; I suspect it could be a maintenance overhead, but that's just my opinion
16:35:13 <haleyb> yes, that could be a good thing, the data must all be there somewhere in graphite
16:35:40 <njohnston> ok, I'll look at adding that
16:35:47 <mlavalle> njohnston: thanks!
16:36:13 <mlavalle> #action njohnston to look into adding Grafana dashboard for stable branches
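Until such a dashboard exists, one rough way to eyeball stable-branch failure trends is to query Zuul's builds API directly. This is a hedged sketch, not part of the Grafana work agreed above; the endpoint, query parameters, and job name are assumptions about the zuul.openstack.org REST API:

```python
# Hypothetical helper: rough failure rate for one job on one stable branch,
# using Zuul's public builds API. The endpoint and parameters are assumptions
# based on the zuul.openstack.org REST API; the job name is illustrative.
import requests

ZUUL_API = "https://zuul.openstack.org/api/builds"


def failure_rate(job_name, branch, limit=100):
    """Return the failure percentage over the most recent finished builds."""
    params = {
        "project": "openstack/neutron",
        "job_name": job_name,
        "branch": branch,
        "limit": limit,
    }
    builds = requests.get(ZUUL_API, params=params, timeout=30).json()
    finished = [b for b in builds if b.get("result") in ("SUCCESS", "FAILURE")]
    if not finished:
        return 0.0
    failures = sum(1 for b in finished if b["result"] == "FAILURE")
    return 100.0 * failures / len(finished)


if __name__ == "__main__":
    # Job name is a placeholder for whatever rally job runs on stable/pike.
    print(failure_rate("neutron-rally-neutron", "stable/pike"))
```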
16:36:22 <mlavalle> anything else?
16:36:59 <mlavalle> ok, Thanks for attending
16:37:03 <mlavalle> #endmeeting