14:00:37 <jlibosva> #startmeeting networking
14:00:48 <openstack> Meeting started Tue Jan 17 14:00:37 2017 UTC and is due to finish in 60 minutes.  The chair is jlibosva. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:51 <openstack> The meeting name has been set to 'networking'
14:00:57 <njohnston> o/
14:00:58 <andreas_s> hi
14:00:58 <ihrachys> o/
14:00:59 <jlibosva> Hello friends!
14:01:04 <korzen> Hi
14:01:05 <amotoki> o/
14:01:06 <bcafarel> howdy
14:01:11 <bzhao> :)
14:01:19 <jlibosva> #topic Announcements
14:01:30 <dasanind> Hi
14:01:40 <hoangcx> hi
14:01:42 <jlibosva> The Project Team Gathering (PTG) is approaching fast. Please read the following email
14:01:45 <john-davidge> o/
14:01:50 <dasm> o/
14:01:55 <jlibosva> #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110040.html
14:02:19 <jlibosva> If you have a topic or idea that you think should be discussed there, feel free to write it down on this etherpad
14:02:21 <jlibosva> #link https://etherpad.openstack.org/p/neutron-ptg-pike
14:02:25 <ajo> hi o/
14:02:34 <ltomasbo> o/
14:02:57 <jlibosva> Note that there is also PTG Travel Support Program that can help with funding, if you are for some reason unable to join the gathering
14:03:00 <ataraday> hi
14:03:13 <jlibosva> Deadline for applications to this program has been extended and ends by the end of the day TODAY
14:03:19 <jlibosva> #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110031.html
14:03:53 * jlibosva slows down a bit with links but more are to come :)
14:04:19 <jlibosva> Yesterday a new neutron-lib 1.1.0 was released. yay
14:04:28 <jlibosva> Congratulations to all who made it happen! Good stuff.
14:04:29 <dasm> \o/
14:04:46 <njohnston> yay!
14:04:50 <john-davidge> woop!
14:04:56 <jlibosva> You can read the enthusiastic announcement and a lot more here
14:05:00 <jlibosva> #link http://lists.openstack.org/pipermail/release-announce/2017-January/000372.html
14:05:00 <annp> Hi
14:05:13 <ihrachys> have we bumped minimal already?
14:05:23 <dasm> ihrachys: i didn't see this yet.
14:06:31 <jlibosva> this is all I wanted to announce
14:06:39 <jlibosva> Does anybody have anything else to announce?
14:06:52 <dasm> yes. friendly reminder: next week is FF
14:07:09 <dasm> so, just one week left to squeeze in all changes
14:07:11 <amotoki> we already have neutron-lib>=1.1.0 now in master
14:07:47 <amotoki> dasm: I think it is better to release neutronclient this week
14:07:47 <dasm> amotoki: hmm... this one shows 1.0.0 :/
14:07:49 <dasm> https://github.com/openstack/neutron/blob/master/requirements.txt#L19
14:08:05 <jlibosva> dasm: maybe it's not synced with global reqs yet?
14:08:08 <dasm> amotoki: ack. we still have one week, but we can work on this
14:08:13 <amotoki> to avoid a situation where our client breaks others
14:08:31 <amotoki> dasm: I will ping you after checking the situation
14:08:40 <dasm> amotoki: ack, thanks
14:09:04 <amotoki> we tend to release our client late in the cycle and have broken something several times.... let's avoid this
14:09:36 <jlibosva> dasm: thanks for FF reminder
14:09:51 <jlibosva> anything else?
14:09:54 <amotoki> dasm: fyi http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n110
14:09:58 <dasm> jlibosva: amotoki: you're both right. global-requirements has already neutron-lib 1.1.0
14:10:08 <dasm> amotoki: thanks, just noticed the same
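The exchange above (does master already require neutron-lib>=1.1.0, or is it still pinned at 1.0.0?) boils down to comparing a requirements line against a version. A minimal sketch of that check; the helper name is hypothetical and this intentionally ignores the full PEP 440 specifier grammar, for which the `packaging` library should be used instead:

```python
import re

def meets_minimum(requirement_line, installed_version):
    """Check a 'name>=X.Y.Z' requirements.txt line against a version.

    Simplified sketch: only handles a plain '>=' minimum; real
    specifiers (PEP 440) are richer than this.
    """
    match = re.search(r">=\s*([\d.]+)", requirement_line)
    if match is None:
        return None  # no minimum pinned at all
    minimum = tuple(int(part) for part in match.group(1).split("."))
    installed = tuple(int(part) for part in installed_version.split("."))
    return installed >= minimum

# e.g. the kind of line discussed above from neutron's requirements.txt
line = "neutron-lib>=1.1.0  # Apache-2.0"
print(meets_minimum(line, "1.1.0"))  # True: the bump has landed
print(meets_minimum(line, "1.0.0"))  # False: still needs a g-r sync
```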
14:11:03 <jlibosva> moving on
14:11:11 <jlibosva> #topic Blueprints
14:11:19 <jlibosva> #link https://launchpad.net/neutron/+milestone/ocata-3
14:11:31 <jlibosva> We're getting to the end of milestone 3 very soon
14:11:39 <hichihara> amotoki dasm: https://review.openstack.org/#/c/419345/
14:12:04 <jlibosva> ah, there it goes :)
14:12:07 <jlibosva> hichihara: thanks
14:12:10 <dasm> hichihara: thanks. now just wait for effect on all gates :D
14:12:33 <jlibosva> and let's pray for no failures ;)
14:12:52 <jlibosva> So back to milestone 3, which per the planned schedule should be Jan 23 - Jan 27
14:13:03 <jlibosva> which is the same week as mentioned FF
14:13:34 <jlibosva> Does anybody want to raise here any bug/patch/blueprint that lacks proper attention and must get to ocata-3?
14:13:44 <ataraday_> Hi!
14:13:50 <reedip_> hi
14:13:57 <ataraday_> I've got 3 patches that are ready and waiting for some reviews: https://review.openstack.org/#/c/419815/ https://review.openstack.org/#/c/415226/ https://review.openstack.org/#/c/404182/
14:14:40 <jlibosva> ataraday_: good, thanks for bringing this up
14:14:49 <korzen> I have one ready for review: https://review.openstack.org/273546
14:15:09 <korzen> It has been working for a long time; the functional tests are now fixed
14:15:33 <ihrachys> would also be good to give this OVO patch some love: https://review.openstack.org/#/c/306685/
14:15:39 <jlibosva> korzen: cool, thanks. I'm sure jschwarz will love it ;)
14:16:02 <ihrachys> and to make review progress on port bindings rework that will be used for multiple port bindings: https://review.openstack.org/#/c/407868/ and https://review.openstack.org/#/c/404293/
14:16:10 <jlibosva> ihrachys: do you want a dedicated topic for that? I saw no patches on wiki
14:16:15 <ihrachys> jlibosva: nah
14:16:28 <ihrachys> I think I mentioned already what's really important
14:16:43 <jlibosva> ihrachys: ok, thanks
14:18:05 <jlibosva> any other patches ready to land that are worth attention?
14:18:52 <ajo> https://review.openstack.org/#/c/396651/
14:19:17 <ajo> I have this one, refactoring the QoS drivers to something more decoupled
14:19:18 <jlibosva> ajo: thanks, this one is huuuge :)
14:19:24 <ajo> I'm sorry, yes ':D
14:19:34 <ajo> and I broke it on last changes, but I should push a new one now :)
14:20:04 <jlibosva> ajo: do you think the related bug is doable in the ocata-3 timeframe?
14:20:17 <ajo> jlibosva seems huge, but it's more moving stuff around, than creating new logic
14:21:06 <ajo> I'm unsure, but it would be beneficial to let driver implementers switch to the new driver model as soon as they can
14:21:13 <ihrachys> ajo: is it ready for another review run?
14:21:36 <ajo> ihrachys it is if you want, I have a -1 on jenkins I'm fixing now, but it must be a small change
14:21:40 <ihrachys> I see qos tests failing
14:21:50 <ajo> yes
14:22:01 <ihrachys> ok, ping me when everything is in shape Jenkins wise
14:22:17 <ajo> apparently passing unit tests locally is not a guarantee, :)
14:22:18 <ajo> ack, it should be good in a couple of hours
14:22:20 <ajo> I'll ping you, thanks ihrachys
14:22:37 <jlibosva> ajo: I asked because the bug is not set for milestone 3 and that could hide it from reviewers that prioritize o3 bugfixes
14:23:21 <ajo> oh, thanks jlibosva, maybe we should set it for milestone-3, or file a separate bug for milestone-3
14:23:32 <dasm> jlibosva: john-davidge reminded me about this handy link to all o-3 related changes
14:23:33 <jlibosva> ajo: yeah, I was also thinking about separate bug
14:23:34 <dasm> #link
14:23:36 <dasm> [long Gerrit dashboard URL omitted -- the pasted link was split across several lines by IRC]
14:23:37 <ajo> to be honest, the whole thing is probably not m-3 doable
14:23:45 <ajo> this refactor: yes
14:23:48 <dasm> :( sorry
14:23:49 <ihrachys> dasm: !!!
14:23:49 <jlibosva> dasm: is it a link or a spam?
14:23:52 <dasm> #link http://status.openstack.org/reviews/
14:24:01 * ihrachys passes a prize to dasm
14:24:02 <john-davidge> dasm: Haha! That's why I didn't try to link you directly to it :P
14:24:11 <dasm> john-davidge: ;)
14:24:13 <jlibosva> lol
14:24:14 <mlavalle> lol
14:24:17 <ajo> ok, I'm adding a separate bug for it, thank jlibosva
14:24:21 <jlibosva> ajo: thanks
14:24:26 <ajo> in fact, I thought I had it hmm
14:24:42 <jlibosva> I also have one o3 patch that lacks eyes and love - https://review.openstack.org/#/c/402174/
14:25:03 <jlibosva> and thanks dasm for the link :)
14:25:17 <ajo> jlibosva oh right, I think that one is good to go probably
14:25:19 <ajo> it's simple
14:25:32 <mlavalle> jlibosva: I'll take a look later today
14:25:33 <ajo> I committed a new patch fixing a tiny typo in comments
14:25:45 <mlavalle> jlibosva: the patchset I meant
14:25:53 <jlibosva> mlavalle: thank you! :)
14:26:42 <jlibosva> so if there are no other patches/bp to highlight we can move on to the next topic
14:27:00 <jlibosva> and the next topic is
14:27:00 <annp> Sorry. I have one https://review.openstack.org/#/c/203509 need more attention.
14:27:01 <jlibosva> #topic Bugs and gate failures
14:27:07 <jlibosva> #undo
14:27:08 <openstack> Removing item from minutes: #topic Bugs and gate failures
14:27:50 <ajo> annp but that looks like a spec, makes sense for pike,
14:28:03 <mlavalle> yeah, that is a spec
14:28:17 <ajo> I thought jlibosva was asking about code patches that need attention due to FF
14:28:18 <amotoki> annp: I think it has gathered enough attention these past weeks; active discussion has happened recently
14:28:45 <jlibosva> annp: thanks for bringing this up
14:29:10 <jlibosva> yeah, even though it already links some patches, it'll likely be discussed further in the next cycle
14:29:57 <jlibosva> anything else?
14:30:09 <jlibosva> #topic Bugs and gate failures
14:30:10 <annp> Ok, I understand please go ahead
14:30:14 <jlibosva> annp: thanks :)
14:30:29 <jlibosva> We started experiencing a lack of memory on gate jobs
14:30:35 <jlibosva> #link https://bugs.launchpad.net/neutron/+bug/1656386
14:30:36 <openstack> Launchpad bug 1656386 in neutron "Memory leaks on Neutron jobs" [Critical,New]
14:30:53 <jlibosva> At first it appeared to be only linuxbridge jobs, but then I saw other multinode jobs fail because of insufficient memory as well
14:31:25 <jlibosva> I wanted to bring this to attention in case there is someone who loves memory leaks and stuff :)
14:31:50 <ajo> aouch
14:31:55 <electrocucaracha> jlibosva: do we have an entry in logstash for that one?
14:32:16 <electrocucaracha> jlibosva: just for monitoring the number of hits
14:32:22 <ajo> jlibosva it would be great to have some sort of memory usage output at the end of test runs
14:32:24 <jlibosva> electrocucaracha: good point, I think we don't have that
14:32:42 <jlibosva> ajo: the oom-killer dumps the processes before picking a victim
14:32:51 <ajo> ah, nice
14:33:01 <jlibosva> ajo: and also I think worlddump collects that as well
14:33:14 <electrocucaracha> jlibosva: ok, I'll doublecheck and maybe add something there
14:33:20 <jlibosva> electrocucaracha: thanks!
14:33:44 <ihrachys> jlibosva: worlddump is called in grenade only
14:33:44 <dasm> ajo: i tried to investigate it a little. it seems like during end of tempest run, swap is going through the roof and oom-killer tries to "solve" this by killing something
14:33:55 <reedip_> jlibosva : have we run a full tempest job on a local (like devstack) node to check?
14:34:05 <jlibosva> ihrachys: oh, I thought it's called on every failure. ok, nevermind, thanks for correcting me
14:34:18 <ajo> oh and we have ps output: http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/ps.txt.gz
14:35:03 <dasm> reedip_: i didn't see any local problems with this issue. probably a good idea would be to try and reproduce on an env similar to the gate (like 8gb ram + 2gb swap)
14:35:38 <amotoki> in neutron-full failures in the bug comment, we got "Out of memory: Kill process 20219 (mysqld) score 34 or sacrifice child".
14:35:44 <reedip_> dasm : Hmm, that can be done, and probably we can use ps --forest to see the process tree in better detail
14:36:01 <jlibosva> reedip_: I looked at project config and all full runs are multinode it seems
14:36:24 <ajo> dasm, jlibosva  on those ps listings I don't see anything neutron outstanding in numbers
14:36:29 <reedip_> jlibosva : oh, then reproducing it as dsvm wouldn't be helpful unless it's also failing
14:36:31 <ajo> I see cinder using a lot of memory though
14:37:19 <ajo> well, where a lot of memory is 0.8GB , not huge
14:37:26 <jlibosva> ajo: IIRC I saw nova-api and mysqld being big. But we can dig into it later to not waste time on a single bug here on a meeting
14:37:28 <ajo> how much memory do test VMs have?
14:37:32 <ajo> ack
14:37:34 <jlibosva> ajo: 8G I think
14:37:36 <ajo> makes sense
14:37:49 <amotoki> yes, 8GB
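Both leads mentioned above (the oom-killer's task dump and the ps.txt gate artifact) reduce to the same question: which processes hold the most resident memory. A minimal sketch of that triage step; the helper name and the sample numbers are illustrative, not taken from the actual gate logs in the bug:

```python
def top_memory_consumers(ps_lines, count=3):
    """Return the largest resident processes from lines of
    'rss_kb command' pairs, as one might extract from a ps dump
    or an oom-killer task list."""
    parsed = []
    for line in ps_lines:
        rss_kb, _, command = line.strip().partition(" ")
        parsed.append((int(rss_kb), command))
    return sorted(parsed, reverse=True)[:count]

# Illustrative figures only -- not real measurements from the gate.
sample = [
    "819200 cinder-volume",
    "512000 mysqld",
    "655360 nova-api",
    "131072 neutron-server",
]
for rss_kb, command in top_memory_consumers(sample):
    print(f"{rss_kb // 1024:5d} MB  {command}")
```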
14:37:55 <jlibosva> bug deputy was boden for last week but I don't see him around
14:38:11 <jlibosva> and we don't have a bug deputy for this week!
14:38:32 <jlibosva> so unless there is some other critical bug that you are aware of, I'd like to find a volunteer :)
14:38:51 <jlibosva> for this week, starting probably yesterday
14:38:51 <janzian> I haven't done it before, but I can give it a shot
14:39:08 <ajo> janzian++
14:39:17 <jlibosva> janzian: you're very welcome to do it :)
14:39:20 <jlibosva> janzian: thank you
14:39:23 <dasm> janzian: thanks
14:39:46 <jlibosva> we should also pick a deputy for the next week
14:39:57 <jlibosva> is there any other hero that will server next week?
14:40:05 <jlibosva> sorry, serve* :)
14:41:07 <jlibosva> it's a very prestigious role
14:41:49 <jlibosva> ok, so I take next week
14:41:59 <haleyb> selling used cars is not for you :)
14:42:17 <ajo> jlibosva let me take it
14:42:18 <ajo> It's been a long time for me
14:42:33 <jlibosva> haleyb: maybe I should wave my hands more :)
14:42:34 <mlavalle> thanks Tocayo!
14:42:43 <ajo> :D
14:42:49 <jlibosva> ajo: alright, sold to ajo :)
14:43:02 <ajo> \m/
14:43:32 <jlibosva> #topic Docs
14:43:41 <jlibosva> john-davidge: hello :)
14:43:47 <john-davidge> jlibosva: Hello :)
14:43:51 <jlibosva> john-davidge: do you want to update?
14:44:20 <john-davidge> One interesting bug to raise #link https://bugs.launchpad.net/openstack-manuals/+bug/1656378
14:44:20 <openstack> Launchpad bug 1656378 in openstack-manuals "Networking Guide uses RFC1918 IPv4 ranges instead of RFC5737" [High,Confirmed]
14:44:45 <john-davidge> There will be an effort across the networking guide to address that, possibly devref too if it's needed
14:45:30 <john-davidge> If anybody is interested in seeking out and destroying instances of non-compliance it would be much appreciated
14:45:36 <haleyb> john-davidge: it already uses 2001:db8 for IPv6 right?
14:45:42 <john-davidge> otherwise our top priority remains the migration to OSC
14:45:46 <john-davidge> haleyb: Yes
14:45:57 <haleyb> cool
14:46:09 <amotoki> RFC5737 defines IP ranges for documentation. It is worth checking.
14:46:29 <john-davidge> haleyb: Obviously the IPv6 team is always on the ball :)
14:46:58 <haleyb> john-davidge: obviously :)
14:47:09 <mlavalle> lol
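For reference, RFC 5737 reserves 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3) for documentation, mirroring the 2001:db8::/32 prefix (RFC 3849) haleyb mentions for IPv6. A small sketch of a compliance check one could run over guide examples; the function name is hypothetical:

```python
import ipaddress

DOCUMENTATION_NETS = [
    ipaddress.ip_network("192.0.2.0/24"),     # RFC 5737 TEST-NET-1
    ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737 TEST-NET-2
    ipaddress.ip_network("203.0.113.0/24"),   # RFC 5737 TEST-NET-3
    ipaddress.ip_network("2001:db8::/32"),    # RFC 3849 (IPv6 docs)
]

def is_documentation_address(text):
    """True if the address falls inside a documentation-only range.

    Mixed-version membership tests simply return False, so IPv4 and
    IPv6 networks can live in one list.
    """
    addr = ipaddress.ip_address(text)
    return any(addr in net for net in DOCUMENTATION_NETS)

print(is_documentation_address("203.0.113.5"))   # True: fine for docs
print(is_documentation_address("192.168.1.10"))  # False: RFC 1918, replace it
print(is_documentation_address("2001:db8::1"))   # True: IPv6 docs prefix
```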
14:48:18 <john-davidge> That's all from me
14:48:26 <jlibosva> john-davidge: cool, thanks for the link :)
14:48:33 <jlibosva> #topic Transition to OSC
14:48:41 <jlibosva> amotoki: do you want to update about OSC?
14:48:50 <amotoki> yeah
14:49:12 <amotoki> A patch in discussion is FIP associate/disassociate https://review.openstack.org/#/c/383025/
14:49:36 <amotoki> It seems we need a discussion with Dean.
14:49:42 <jlibosva> #link https://review.openstack.org/#/c/383025/
14:49:54 <amotoki> If you are interested please share your opinion.
14:50:03 <reedip_> I had an opinion to change the options
14:50:27 <amotoki> I haven't checked the overall status. sorry for being late, but it will be reported at latest this week.
14:50:44 <amotoki> * the end of this week
14:50:53 <jlibosva> amotoki: ok, thank you for update. I hope the discussion will continue on that patch
14:51:34 <amotoki> what I am not sure about is which OSC plugin patches we want merged in the Ocata neutronclient release.
14:51:51 <jlibosva> next topic should be neutron-lib but since I don't see boden here, we can move to on demand agenda as there is a topic there. So unless anybody wants to discuss neutron-lib, I'd pass on that
14:53:03 <dasanind> jli
14:53:21 <jlibosva> amotoki: maybe dasm can help as release liaison?
14:53:50 <dasm> jlibosva: nothing about neutron-lib. but afaik majority of things were merged
14:53:50 <amotoki> jlibosva: yes as we discussed at the beginning
14:54:02 <jlibosva> ok, thanks, moving on
14:54:07 <jlibosva> #topic Disable security group filter refresh on DHCP port changes
14:54:20 <jlibosva> mdorman: do you want the stage? :)
14:54:39 <mdorman> sure.  really i’m just looking for advice on how to go forward with https://review.openstack.org/#/c/416380/
14:55:11 <mdorman> for us, personally, we will probably just turn off DHCP to work around the problem (we don’t really use it anyway), but this seems like a scalability thing that could affect others.
14:56:16 <amotoki> but currently we allow users to change IP addresses of dhcp ports after DHCP ports are created.
14:56:30 <amotoki> it would be nice if we have an alternative.
14:56:33 <mdorman> the idea of that patch was to stop refreshing all security group filters on all ports any time a dhcp port changes.   but turns out that is actually a breaking fix because there are inbound rules on the port specific to the dhcp agents on that network.  so i think the proposal in the comments is to do away with those specific inbound rules and replace them with a blanket rule that would allow all dhcp traffic in.
14:56:37 <jlibosva> seems like there is some kind of discussion going on on that patch
14:56:46 <mdorman> amotoki: correct.  that’s the current issue
14:57:07 <mdorman> yes.  i just wanted to raise the issue and try to get some more eyeballs
14:57:10 <ajo> wouldn't it be reasonable to allow any dhcp in from the specific DHCP servers?
14:57:34 <mdorman> ajo that’s the current behavior i believe.
14:57:37 <ajo> hmm
14:57:50 <ajo> and wouldn't that only be an issue if you move the dhcp server IPs around?
14:57:53 <mdorman> the problem is when a dhcp agent is added/removed/changed, then the rules on all ports in the network have to be updated
14:57:55 <jlibosva> mdorman: yep, more eyes are definitely useful :) thanks for bringing this up
14:57:56 * ajo opens the review
14:58:09 <mdorman> ajo: correct
14:58:17 <amotoki> let's continue the discussion and question on #-neutron or the review!!
14:58:17 <ajo> mdorman aha, makes sense
14:58:27 <ajo> so it becomes a scalability issue in such a case
14:58:36 <ajo> for ovsfw we could use conjunctive rules...
14:58:49 <ajo> I wonder if for iptables we could use a generic chain used from all ports for that
14:58:55 <ajo> well... from all ports on specific networks
14:58:58 <mdorman> ajo: yup, exactly.  we run only provider networks, in some cases with 1000s of ports.  so any time a dhcp agent changes, there is an avalanche of rpcs to neutron-server to refresh all the rules
14:58:58 <jlibosva> amotoki: +1
14:59:01 <ajo> one chain per network or so
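The scaling difference behind ajo's shared-chain suggestion can be made concrete: with per-port inbound rules for each DHCP agent, one agent change forces a refresh on every port of the network, while a single network-wide chain is updated in one place. A back-of-the-envelope sketch, purely illustrative arithmetic and not Neutron code:

```python
def updates_on_agent_change(num_ports, use_shared_chain):
    """How many firewall refreshes one DHCP agent add/remove costs.

    Simplified model of the trade-off discussed above; the function
    and its counting are hypothetical.
    """
    if use_shared_chain:
        # One per-network DHCP chain referenced from every port's
        # rules: the agent change is applied in a single place.
        return 1
    # Per-port inbound rules for each DHCP agent: every port on the
    # network must have its filter set rebuilt.
    return num_ports

# A provider network of the size mdorman describes:
print(updates_on_agent_change(3000, use_shared_chain=False))  # 3000
print(updates_on_agent_change(3000, use_shared_chain=True))   # 1
```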
14:59:04 <amotoki> we are out of time....
14:59:05 <jlibosva> we're running out of time anyway
14:59:11 <ajo> ack
14:59:28 <mdorman> fair enough.  happy to move to neutron channel
14:59:46 <jlibosva> thanks everyone for showing up :) and have a good day
14:59:48 <amotoki> mdorman: thanks for raising it anyway
14:59:54 <jlibosva> #endmeeting