21:00:34 <armax> #startmeeting networking
21:00:34 <openstack> Meeting started Mon Aug  8 21:00:34 2016 UTC and is due to finish in 60 minutes.  The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:37 <openstack> The meeting name has been set to 'networking'
21:00:44 <trevormc> o/
21:00:54 <HenryG> o/
21:01:39 <armax> hello everyone
21:01:40 <jschwarz> \o/
21:01:42 <njohnston_> o/
21:02:00 <armax> #link https://wiki.openstack.org/wiki/Network/Meetings
21:02:08 <armax> #topic Announcements
21:02:27 <ihrachys> late o/
21:02:42 <armax> as some of you might have noticed from the ML, there’s a patch up for review to make Neutron the default networking for DevStack
21:03:01 <armax> during the review process a few things were spotted
21:03:31 <armax> like OVH, one of the cloud providers OpenStack infra is using, which doesn’t exactly like the current patch
21:03:48 <armax> the dislike manifests itself by breaking one of the tempest tests for DVR
21:05:43 <hichihara> http://lists.openstack.org/pipermail/openstack-dev/2016-August/100888.html ?
21:06:04 <armax> hichihara: yes, links are provided on the meeting wiki page
21:06:28 <hichihara> armax: Ah, I didn't notice.
21:06:31 <armax> the course of action as we stand is to merge the default switch, skip the test, solve the issue and unskip the test
21:06:58 <amuller> do we know why that test fails on only one cloud?
21:07:18 <dasm> amuller: just ovh hardware seems to be affected
21:07:29 <amuller> the root cause might change our strategy
21:07:34 <armax> amuller: it looks like it’s related to how the underlying networking box is configured
21:09:15 <armax> so we’ll have to see
21:09:25 <haleyb> perhaps a quick tempest test that tried to ping 10.x.y.199 would work - the assumption is something in their setup is responding
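A minimal sketch of the probe haleyb suggests, assuming a Linux `ping` binary is available; the helper name is illustrative, not from the meeting:

```python
import subprocess

def address_responds(ip, count=1, timeout=2):
    """Return True if `ip` answers an ICMP echo request.

    If an address that should be unused (e.g. 10.x.y.199) replies,
    something in the provider's underlay is likely answering for it.
    """
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

A tempest test built on this idea would assert that the probe address does NOT respond, and fail (or skip) on clouds where it does.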
21:11:24 <armax> another reminder is the mid-cycle next week
21:12:05 <armax> we currently have these topics to discuss:
21:12:07 <armax> #link https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems
21:12:43 <armax> we’ll provide a status update when we get back
21:13:01 <Sukhdev> armax : Is there any plan for remote setup to participate in mid-cycle?
21:13:21 <armax> Sukhdev: the usual IRC participation
21:13:31 <armax> Sukhdev: so, none in particular
21:13:42 <Sukhdev> armax : can we not set one up?
21:14:00 <Sukhdev> armax : I would love to participate remotely
21:14:04 <njohnston_> +1
21:14:33 <armax> Sukhdev: if someone wants to volunteer, by all means
21:15:13 <Sukhdev> armax: I am not familiar with the process to set it up - but I have participated in a remote mid-cycle for Ironic
21:15:14 <armax> Sukhdev: though experience showed us that things don’t work very smoothly
21:15:22 <Sukhdev> it was very effective
21:15:39 <Sukhdev> OpenStack has phone lines that can be used by using laptops
21:16:13 <HenryG> We'll be grouped up according to the work items, so I suggest finding someone in each group to set something up on their laptop.
21:17:14 <HenryG> Volunteers can put their names on the etherpad?
21:18:56 <armax> HenryG: Sukhdev let’s add some pointers on the etherpad
21:19:30 <Sukhdev> Let me dig up the logistics to setup the conf call facilities offered by OpenStack and then we can go from there
21:19:36 <armax> there’s the timezone to factor in, but if folks are interested we should be able to engage them
21:20:55 <armax> ok let’s move on
21:21:03 <armax> #topic Blueprints
21:21:12 <armax> We’re getting closer to the end of N-3
21:21:36 <armax> some blueprints still have whiteboards that are many months old
21:22:15 <armax> blueprint assignees/approvers have the duty to provide regular updates on the progress of the effort
21:22:34 <armax> for the sake of people who are interested but cannot be involved on the day to day
21:23:11 <armax> I am reaching out to the individual assignees/approvers to assess the latest status of the targeted efforts
21:23:48 <armax> it’ll be a huge favor to me, amongst others, if blueprint whiteboards were kept up to date
21:25:41 <armax> #topic Bugs and gate failures
21:25:58 <armax> electrocucaracha was our deputy for the week
21:26:05 <electrocucaracha> :)
21:26:26 <electrocucaracha> there were a couple of critical ones
21:26:47 <electrocucaracha> 1562878
21:27:02 <armax> bug 1562878
21:27:02 <openstack> bug 1562878 in neutron "L3 HA: Unable to complete operation on subnet" [Critical,Confirmed] https://launchpad.net/bugs/1562878 - Assigned to Ann Taraday (akamyshnikova)
21:27:35 <armax> electrocucaracha: ack
21:27:42 <armax> I am looking at Grafana
21:27:58 <armax> and the unit test pipeline doesn’t look happy
21:28:00 <armax> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate?panelId=3&fullscreen
21:28:02 <electrocucaracha> sorry that one is too old
21:28:07 <armax> anyone knows the root cause?
21:28:12 <amuller> that bug is mixing two issues
21:28:23 <amuller> the part that is specific to HA routers is low or medium priority
21:28:32 <amuller> the gate issue is not specific to HA routers and is higher priority...
21:28:36 <amuller> it should be split to two bugs
21:28:40 <armax> also it looks like grenade is busted too in the gate queue
21:28:53 <armax> but I think that’s for nova-net
21:29:13 <armax> no, actually that’s gate-grenade-dsvm-neutron-multinode
21:29:26 <electrocucaracha> basically, 1610960, 1609693 and 1609540 were from this week
21:29:44 <HenryG> armax: I did not notice the py35 problem till now. I can look into it.
21:29:44 <ihrachys> armax: that one we are about to solve, first patch in gate to merge.
21:29:49 <armax> oh boy
21:29:52 <armax> the gate is a disaster
21:29:58 <ihrachys> armax: resolution https://review.openstack.org/#/c/352454/
21:30:00 <armax> http://grafana.openstack.org/dashboard/db/neutron-failure-rate?panelId=8&fullscreen
21:30:09 <ihrachys> and it's not us, yay.
21:30:11 <ihrachys> :(
21:30:27 <armax> ihrachys: ack
21:30:45 <dasm> armax: bug 1610960 gate-grenade-dsvm-neutron-multinode should be soon fixed
21:30:45 <openstack> bug 1610960 in neutron "Invalid input for external_gateway_info. Reason: '' is not a valid UUID." [Critical,Confirmed] https://launchpad.net/bugs/1610960
21:31:00 <ihrachys> should solve grenade failures. but I see dvr is not happy either.
21:31:11 <armax> ihrachys: ok, so I can look into the unit test failure
21:31:32 <armax> the failure on the lib periodic job is also a known issue
21:31:35 <armax> HenryG: any update?
21:32:16 <HenryG> Known since when?
21:32:23 <armax> since the last time we spoke? :)
21:32:35 <HenryG> Oh, I thought that patch had merged.
21:32:41 <armax> um
21:32:49 <armax> then we’re not quite out of the woods yet
21:33:08 <ihrachys> HenryG: the hacking rules one? nope, it's not
21:33:16 <HenryG> armax: the patch is https://review.openstack.org/350723 -- I'll sync with boden
21:33:36 <armax> HenryG, ihrachys ack
21:34:01 <armax> ok, ihrachys you’re looking into the dvr instability?
21:34:19 <ihrachys> armax: no, why? I had plenty today to keep me busy.
21:34:39 <armax> ihrachys: sorry I thought you said you were
21:34:45 <ihrachys> armax: and I would be glad if someone else takes it, not me tomorrow
21:34:47 <ihrachys> np
21:34:48 <armax> I will be looking at the unit test instability
21:35:05 <armax> anyone else willing to look into what the latest DVR instability is?
21:35:13 <haleyb> armax: i can look, thought we just had the os-client-config issue today
21:35:16 <armax> we’re about to peg at 100%
21:35:30 <ihrachys> haleyb: occ was grenade only (for some reason)
21:35:30 <armax> haleyb: could be the same you reckon?
21:36:09 <armax> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate?panelId=8&fullscreen
21:36:50 <haleyb> armax: i hadn't looked, with the other one being so bad, so i don't know
21:36:54 <armax> but the graphs for the gate queue look funny
21:36:59 <armax> http://grafana.openstack.org/dashboard/db/neutron-failure-rate?panelId=5&fullscreen
21:37:17 <armax> they seem to stop at yesterday
21:37:26 <HenryG> haleyb: I see https://review.openstack.org/352594 just got proposed.
21:39:06 <haleyb> i do see the strange "mountains" with grafana now, but see a linuxbridge spike
21:39:27 <armax> ok, let’s move on for now, but before we do, we have njohnston as deputy for the week
21:39:30 <armax> thanks njohnston
21:39:36 <njohnston> happy to assist
21:39:42 <armax> anyone with spare cycles for next week?
21:40:44 <armax> no-one?
21:41:10 <armax> ok
21:41:22 <armax> #topic Docs
21:42:03 <armax> let me remind you that getting closer to feature freeze means we should also start looking at providing documentation for what has been built in the Newton timeframe
21:42:30 <armax> I would invite folks to look at the list
21:42:31 <armax> #link https://bugs.launchpad.net/neutron/+bugs?field.tag=doc
21:43:24 <armax> and make sure we close as many bugs as we can by writing the appropriate documentation material
21:43:54 <hichihara> It seems we need volunteers for docs
21:44:27 <hichihara> Many bugs without assignee
21:44:50 * asettle pokes in head. You should ask in the docs meeting if anyone would like to volunteer. We have a few networking people interested in working on that stuff.
21:45:48 <ihrachys> I took for from the end since those are mine
21:46:00 <armax> once we go into feature freeze, I’ll be also going over this list and chase down folks
21:46:32 <ihrachys> *four
21:46:50 <armax> ok, let’s move on
21:46:55 <armax> #topic Transition to OSC
21:46:58 <hichihara> And we also have another new tag for api-ref now, https://bugs.launchpad.net/neutron/+bugs?field.tag=api-ref
21:47:08 <armax> hichihara: ack
21:47:23 <armax> amotoki is offline, rtheis perhaps you have anything to share?
21:48:03 <rtheis> armax: I will prepare a transition status update for mid-cycle next week.
21:48:28 <rtheis> progress continues with osc and osc plugin patches
21:48:36 <armax> rtheis: ack
21:48:44 <rtheis> nothing else from me
21:48:47 <njohnston> Is there still a plan for an API reference clean up sprint, or was that folded into the mid-cycle?
21:48:47 <armax> rtheis: thanks
21:49:22 <hichihara> njohnston: I guess that amotoki is preparing
21:49:22 <armax> njohnston: I’d defer the answer to amotoki, but I hope we’ll be pushing at the mid-cycle too
21:49:37 <armax> #topic Keystone v3
21:49:53 <armax> the schema migration patch merged
21:50:25 <armax> dasm, HenryG: what are you baking next for us?
21:50:41 <ihrachys> was anyone from 3rd party complaining? all smooth?
21:50:55 <dasm> armax: couple things need to be cleaned up, like temporary fixes from subprojects and missing deprecation warnings.
21:51:04 <armax> dasm: ok
21:51:26 <dasm> ihrachys: noticed just two problems: nova had mocked calls to neutronclient, and because of this, couldn't follow the changes
21:51:31 <armax> the next two big steps would be to fix the API and sweep the codebase?
21:51:47 <armax> dasm: links?
21:52:10 <dasm> armax: digging
21:52:28 <ihrachys> armax: https://review.openstack.org/#/c/349299/
21:52:37 <ihrachys> https://bugs.launchpad.net/nova/+bug/1608258
21:52:37 <openstack> Launchpad bug 1608258 in OpenStack Compute (nova) "test_neutronv2 unit tests fail with python-neutronclient 5.0.0" [High,Fix released] - Assigned to Matt Riedemann (mriedem)
21:52:47 <dasm> ihrachys: thanks
21:53:11 <dasm> and also this one:  https://bugs.launchpad.net/bugs/1610905
21:53:11 <openstack> Launchpad bug 1610905 in neutron "Updating agent with non admin context returns 500 error." [Medium,Confirmed] - Assigned to Kengo Hobo (hobo-kengo)
21:53:12 <armax> ihrachys: thanks
21:53:51 <dasm> but the second one seems not to be connected to the latest change, according to HenryG
21:53:56 <ihrachys> dasm: how will plugin API work? will we receive both project and tenant IDs and will need to reconcile ourselves? or you will do it in api layer and provide a single value for us?
21:54:44 <dasm> ihrachys: we shouldn't break backward compat, so the idea was to do another extension
21:55:33 <dasm> ihrachys: everything (hopefully) will be done on the api side.
21:55:45 <armax> dasm: my understanding is that we’d be supporting both for sometime
21:56:10 <armax> ihrachys: and that the programmatic API would continue to see/handle tenant_id
21:56:25 <dasm> armax: yes, both will be supported.
21:56:27 <ihrachys_> cool with me, less work on objects side
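A hedged sketch of the reconciliation described in the exchange above: the API layer accepts either key and the plugin layer keeps seeing only `tenant_id`. The helper name is hypothetical; the real logic lives in Neutron's API layer:

```python
def normalize_tenant_field(body):
    """Accept 'tenant_id' and/or 'project_id' in an API request body
    and hand a single 'tenant_id' down to the plugin layer.

    Hypothetical helper illustrating the approach only.
    """
    body = dict(body)  # don't mutate the caller's dict
    project_id = body.pop('project_id', None)
    tenant_id = body.get('tenant_id')
    if project_id is not None:
        if tenant_id is not None and tenant_id != project_id:
            raise ValueError("tenant_id and project_id do not match")
        body['tenant_id'] = project_id
    return body
```

With both fields supported during the transition, existing clients sending `tenant_id` keep working while newer clients can send `project_id`.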
21:56:40 <armax> #topic neutron-lib
21:56:47 <armax> HenryG: anything worth raising for awareness?
21:59:10 <armax> looks like we lost HenryG
21:59:12 <hichihara> 1min
21:59:15 <armax> oh well
21:59:18 <armax> let’s get it back then
21:59:33 <armax> thanks everyone for participating
21:59:37 <armax> #endmeeting