18:00:46 <SumitNaiksatam> #startmeeting networking_policy
18:00:47 <openstack> Meeting started Thu May 25 18:00:46 2017 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:50 <openstack> The meeting name has been set to 'networking_policy'
18:01:07 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#May_25th.2C_18th_2017
18:01:15 <annak> hi!
18:01:26 <SumitNaiksatam> i think we are still dealing with the issues from last week (at least i am)
18:01:37 <SumitNaiksatam> #topic IP v4, v6 dual-stack support
18:01:59 <SumitNaiksatam> #link https://review.openstack.org/#/c/459823/
18:02:08 <tbachman> :|
18:02:28 <SumitNaiksatam> tbachman: :-) thanks for continuing to work on this
18:02:31 <tbachman> SumitNaiksatam: there’s at least one fix to make to that patch
18:02:35 <SumitNaiksatam> tbachman: oh
18:02:36 <tbachman> I need to change the defaults to use SLAAC
18:02:44 <SumitNaiksatam> ah okay
18:02:55 <SumitNaiksatam> tbachman: thats the neutron default?
18:02:57 <tbachman> I can resubmit that, but I was hoping to be further along than I am with functional/integration testing
18:03:00 <tbachman> Alas no
18:03:01 <rkukura> tbachman: Should that be configurable in GBP?
18:03:09 <tbachman> rkukura: good question
18:03:16 <SumitNaiksatam> rkukura: good point
18:03:31 <tbachman> we could add a config option
18:03:40 <tbachman> but I worry about endless config
18:03:46 <rkukura> This seems similar to default_prefix_len, etc.
18:03:51 <SumitNaiksatam> tbachman: only for those who need it
18:03:55 <tbachman> true
18:04:00 <tbachman> it’s easily added
18:04:02 <SumitNaiksatam> tbachman: is it the neutron default?
18:04:12 <tbachman> I don’t believe so
18:04:18 <tbachman> I believe the neutron default is none
18:04:22 * tbachman goes to check
18:06:25 <SumitNaiksatam> rkukura: are you good with the patch otherwise?
18:06:53 <SumitNaiksatam> annak: how about you, in case you got a chance to look at the latest patchset?
18:07:14 <rkukura> yes, although I want to take a quick look at the changes since the last version I reviewed
18:07:31 <SumitNaiksatam> rkukura: sure
18:07:43 <annak> SumitNaiksatam: not the latest, but it looked good. I'll look at latest
18:07:49 <SumitNaiksatam> annak: thanks
18:08:20 <SumitNaiksatam> given that we are heading into the long weekend, i would hope that we can attempt to merge this sooner than later
18:08:31 <SumitNaiksatam> we have to merge the backports as well
18:08:37 <tbachman> SumitNaiksatam: ack
18:08:45 <rkukura> SumitNaiksatam: I have 4 or 5 of those to do to stable/mitaka
18:08:47 <tbachman> I can’t find a reference, but I’m pretty sure the default is None
18:08:50 <SumitNaiksatam> you can expect some maintenance activity and downtime over the weekend
18:09:04 <SumitNaiksatam> rkukura: yes, we will come to that in a min
18:09:06 <tbachman> SumitNaiksatam: I also noticed that igordcard’s QoS patch wasn’t backported. Is that b/c it’s a new feature?
18:09:07 <SumitNaiksatam> tbachman: np
18:09:11 <tbachman> Or should that be backported as well?
18:09:29 <SumitNaiksatam> tbachman: we decided that we will not backport the QoS patch to newton
18:09:32 <tbachman> k
18:09:35 <SumitNaiksatam> it will be available starting Ocata
18:09:41 <tbachman> SumitNaiksatam: got it. thx
18:09:52 <SumitNaiksatam> actually we attempted to backport
18:10:04 <SumitNaiksatam> but ran into some issues, and decided it was best to leave it out
18:10:08 <tbachman> SumitNaiksatam: got it
18:10:24 <SumitNaiksatam> if someone has a real need for this in mitaka, we can definitely reconsider the decision
18:10:24 <tbachman> rkukura: SumitNaiksatam: should I add a config var for IPv6 RA mode, etc. for GBP?
18:11:00 <rkukura> tbachman: both ra_mode and address_mode?
18:11:14 <SumitNaiksatam> songole: hi
18:11:22 <songole> SumitNaiksatam: hello
18:11:24 <tbachman> rkukura: ack
18:11:39 <tbachman> SumitNaiksatam: since songole is here, we could talk about the policy ip_pool
18:11:40 <SumitNaiksatam> tbachman: these will be used as defaults for implicit creation, right?
18:11:43 <tbachman> SumitNaiksatam: ack
18:11:50 <tbachman> my thinking is that our default would be SLAAC
18:11:53 <SumitNaiksatam> tbachman: you mean proxy ip_pool
18:12:00 <tbachman> unless we want our defaults to be the neutron defaults
18:12:05 <tbachman> (with “our” meaning GBP)
18:12:10 <tbachman> SumitNaiksatam: ack on proxy ip_pool
18:12:12 <SumitNaiksatam> tbachman: yeah, i wanted to dig a little more on that
18:12:20 <SumitNaiksatam> tbachman: why SLAAC?
18:12:34 <tbachman> It seems to be a preferred mode, from what I can tell
18:12:39 <tbachman> but it doesn’t have to be
18:12:48 <tbachman> it may make more sense for our default to be the same as neutron's
18:12:53 <rkukura> tbachman: Are the neutron defaults usable for IPv6, or is it always necessary to specify something?
18:13:00 <SumitNaiksatam> tbachman: yeah thats what i wanted to clarify
18:13:13 <tbachman> rkukura: define usable ;)
18:13:35 <tbachman> I think their defaults mean that neutron doesn’t take any responsibility for that behavior
18:13:48 <rkukura> tbachman: You create an IPv6 subnet, attach it to a router, and you get connectivity over IPv6
18:13:51 <tbachman> the documentation actually does a good job of explaining this, if I remember correctly
18:13:57 <tbachman> let me go find that link
18:14:53 <tbachman> This link has a helpful table:
18:15:00 <tbachman> #link https://docs.openstack.org/liberty/networking-guide/adv-config-ipv6.html <== this link has a helpful table
18:15:04 <rkukura> My argument would be that the GBP defaults used in implicit workflows should result in usable IPv6 connectivity when ip_version == 46 or 6
18:15:16 <tbachman> note the first entry in the table
18:15:25 <tbachman> “	Backwards compatibility with pre-Juno IPv6 behavior.”
18:15:49 <tbachman> This is the part titled “ipv6_ra_mode and ipv6_address_mode combinations"
18:15:56 <rkukura> ipv6_ra_mode: N/S | ipv6_address_mode: N/S | radvd A,M,O: Off | External Router A,M,O: Not Defined | Description: Backwards compatibility with pre-Juno IPv6 behavior.
18:16:15 <rkukura> Looks like the defaults are for backwards compatibility rather than usability
18:16:16 <SumitNaiksatam> rkukura: agree, assuming neutron is not usable without that configuration
18:16:42 <tbachman> rkukura: ack on the backwards-compatibility
18:17:23 <tbachman> there’s also SLAAC from an openstack router, and SLAAC from a non-openstack router
18:17:43 <tbachman> but I’ve found that openstack doesn’t let you attach subnets to a router if you’ve elected SLAAC from a non-openstack router
18:17:55 <tbachman> so, I’d say it should be the SLAAC from an openstack router :)
18:18:03 <tbachman> (note: that’s for internal interfaces, and not the gateway)
18:18:16 <rkukura> tbachman: agree
18:18:56 <rkukura> but with pure-SLAAC, VMs don’t get a default gateway, right?
18:19:03 <tbachman> also note that most of these are “not implemented” by the reference implementation
18:19:08 <tbachman> SLAAC is just for address assignment
18:19:16 <tbachman> the router advertisements provide the default GW
18:19:48 <rkukura> So dhcpv6-stateless is what uses SLAAC and has RAs for the GW, right?
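For readers following along, here is a minimal sketch (not from the meeting) of the two Neutron subnet configurations being compared above, expressed as subnet-create request bodies; ipv6_ra_mode and ipv6_address_mode are the standard Neutron attributes, while the network IDs and prefixes are placeholders:

```python
# Illustrative sketch only: the two ipv6_ra_mode/ipv6_address_mode combinations
# discussed above. Network IDs and prefixes are placeholders.
slaac_subnet = {
    "subnet": {
        "network_id": "<network-uuid>",
        "ip_version": 6,
        "cidr": "2001:db8:0:1::/64",
        "ipv6_ra_mode": "slaac",       # RAs from the OpenStack router advertise the default GW
        "ipv6_address_mode": "slaac",  # addresses are self-assigned via SLAAC
    }
}

dhcpv6_stateless_subnet = {
    "subnet": {
        "network_id": "<network-uuid>",
        "ip_version": 6,
        "cidr": "2001:db8:0:2::/64",
        "ipv6_ra_mode": "dhcpv6-stateless",       # SLAAC addressing via RAs ...
        "ipv6_address_mode": "dhcpv6-stateless",  # ... plus DHCPv6 for optional info (DNS, etc.)
    }
}
```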
18:20:10 <SumitNaiksatam> tbachman: rkukura: i still think the defaults should be what neutron does, a specific driver can override the defaults
18:20:39 <rkukura> SumitNaiksatam: I could go along with that, as long as its configurable for the deployment
18:20:53 <tbachman> SumitNaiksatam: I’m fine with that. I’ll amend my patch to include these config vars
18:20:53 <SumitNaiksatam> and by driver, i mean the GBP policy driver
18:21:22 <tbachman> SumitNaiksatam: should these be provided in the resource mapping driver then?
18:21:25 <SumitNaiksatam> okay, we can follow up on this discussion on the gerrit review
18:21:27 <rkukura> Do we need config at the PD level? What do we do for default_prefix_len?
18:21:41 <tbachman> For IPv6, we’ve agreed it’s 64
18:21:50 <SumitNaiksatam> rkukura: i meant the PD can override it
18:22:07 <rkukura> tbachman: I meant for IPv4
18:22:21 <tbachman> ah — that’s part of the L3 config now, right?
18:22:26 <rkukura> I think it’s best if these sorts of config variables all work similarly
18:22:39 <tbachman> hmmm
18:22:52 <tbachman> rkukura: are you referring to implicit workflow (i.e. where L3P is created implicitly)?
18:23:11 <SumitNaiksatam> rkukura: default_prefix_len is part of GBP API
18:23:45 <rkukura> Both implicit L3P creation, and explicit where the value is not passed
18:24:21 <rkukura> I guess the explicit creation uses API default values
18:24:40 <tbachman> rkukura: your point is just that the defaults should all be consistent then, right?
18:24:43 <rkukura> So do these need to be added as L3P API attributes?
18:24:48 <tbachman> (and if so, I should double-check that’s the case)
18:25:09 <SumitNaiksatam> rkukura: i was hoping you would not bring that up! :-(
18:25:11 <tbachman> lol
18:25:51 <rkukura> My point is more about how a deployment sets defaults (via config) and how a user overrides those defaults than about whether the specific values are consistent between Neutron and GBP
18:26:49 <SumitNaiksatam> well without representation in the API there is no way the user can override it
18:27:14 <tbachman> but — we don’t provide all the information about subnets when we create an L3P
18:27:18 <SumitNaiksatam> the only way to override it is to create a neutron object with those overrides and associate it with a GBP object (i am glossing over the details here)
18:27:37 <tbachman> for example, default GW
18:28:09 <tbachman> I guess it’s a question of whether this level of detail is warranted in the API
18:28:13 <tbachman> during L3P creation
18:28:25 <rkukura> right
18:28:28 <SumitNaiksatam> tbachman: true, in such cases, where neutron models that detail, we expect the neutron object to be created directly
18:28:40 <tbachman> and is this something someone would want to change during deployment, w/o having to restart neutron
18:28:43 <SumitNaiksatam> and then be associated with the relevant GBP resource
18:29:03 <tbachman> I would think this would be consistent across a deployment
18:29:12 <tbachman> (meaning the same mode is used throughout)
18:29:27 <SumitNaiksatam> tbachman: good point
18:29:47 <tbachman> SumitNaiksatam: but with L3P’s, we still create the subnets, even if the user just provided a prefix
18:30:00 <tbachman> and we don’t create them during L3P creation
18:30:32 <tbachman> I can see an argument for exposing this
18:30:33 <SumitNaiksatam> tbachman: the resource_mapping driver would allow you to associate subnets with a PTG
18:30:56 <tbachman> b/c it’s kind of a “mode” for deployment
18:31:00 <SumitNaiksatam> and you can create the subnets in Neutron the way you want it
18:31:21 <tbachman> SumitNaiksatam: true
18:31:29 <tbachman> but in our subnetpools model, we have no way of providing this
18:31:34 <tbachman> it’s only specified during subnet creation
18:32:18 <SumitNaiksatam> tbachman: we will perhaps need to revisit that in the resource_mapping context :-)
18:32:22 <tbachman> :)
18:32:37 <tbachman> so, I just add a default in my driver for now, and we revisit later?
18:32:54 <tbachman> I’m just worried about changing a behavior/default at some later point
18:33:20 <tbachman> At the very least, I say we add config vars, and then we can examine later if we want to specify this in the REST
18:33:29 <tbachman> and if we expose it via REST, the defaults should be what’s in the config
18:33:34 <tbachman> (and not defined in the REST itself)
18:33:35 <SumitNaiksatam> tbachman: would it not be a matter of promoting this configuration in the future (as opposed to changing behavior)?
18:33:43 <rkukura> adding config now and API later is fine with me
18:33:54 <SumitNaiksatam> okay good, so we have agreement :-)
18:33:57 <tbachman> :)
18:34:04 <tbachman> cool — I’ll add that to my respin as well
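As an aside, a rough sketch of what such deployment-level config options could look like using oslo.config; the option names and group below are hypothetical (not the ones actually merged), and the defaults are left unset so behaviour falls back to Neutron's, per the agreement above:

```python
from oslo_config import cfg

# Hypothetical sketch of the deployment-level defaults discussed above;
# option names and group are illustrative, not the actual GBP options.
ipv6_opts = [
    cfg.StrOpt('default_ipv6_ra_mode',
               choices=['slaac', 'dhcpv6-stateless', 'dhcpv6-stateful'],
               default=None,  # fall back to Neutron's default behaviour
               help='ipv6_ra_mode applied to implicitly created IPv6 subnets.'),
    cfg.StrOpt('default_ipv6_address_mode',
               choices=['slaac', 'dhcpv6-stateless', 'dhcpv6-stateful'],
               default=None,  # fall back to Neutron's default behaviour
               help='ipv6_address_mode applied to implicitly created IPv6 subnets.'),
]

cfg.CONF.register_opts(ipv6_opts, group='resource_mapping')
```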
18:34:10 <SumitNaiksatam> assuming annak is on board too :-)
18:34:19 <tbachman> and thanks to everyone for reviewing and approving the spec, too!
18:34:44 <SumitNaiksatam> i think rkukura deserves special pat on the back for that!
18:34:47 <annak> tbachman: i need to catch up a bit :) i understood subnets are always created implicitly for all drivers using subnetpools?
18:34:49 <tbachman> SumitNaiksatam: ack
18:35:02 <tbachman> annak: only in the implicit workflows :)
18:35:14 <tbachman> ah
18:35:17 <tbachman> annak: my bad
18:35:31 <tbachman> yes — subnets are created implicitly when needed, and not when the L3P is created
18:35:38 <tbachman> (from the subnetpools)
18:35:55 <tbachman> SumitNaiksatam: now that I’ve taken up 1/2 of our IRC meeting, do we want to cover the proxy ip_pool still?
18:36:20 <SumitNaiksatam> tbachman: yes
18:36:24 <SumitNaiksatam> songole: still there?
18:36:28 <songole> yes
18:36:42 <SumitNaiksatam> tbachman: go ahead, we should try to close this
18:36:56 <tbachman> songole: not sure if you’ve followed the dual-stack development, but we’ve expanded the use of the ip_pool parameter
18:37:09 <tbachman> as you’re well aware, there’s also a proxy ip_pool parameter
18:37:26 <songole> tbachman: I need to catch up on the spec..
18:37:35 <tbachman> our expansion of this parameter means that instead of being just a single subnet prefix, it’s now a string, which we can interpret as a comma-separated list of prefixes
18:37:43 <tbachman> and the prefixes can be a mix of IPv4 and IPv6
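To illustrate the interpretation tbachman describes (a sketch only, not the actual GBP code), a comma-separated ip_pool string with mixed address families can be split per IP version, e.g. with Python's ipaddress module:

```python
import ipaddress

# Sketch only (not the GBP implementation): split a comma-separated ip_pool
# string into IPv4 and IPv6 prefix lists.
def split_ip_pool(ip_pool):
    v4, v6 = [], []
    for prefix in (p.strip() for p in ip_pool.split(',') if p.strip()):
        net = ipaddress.ip_network(prefix)
        (v4 if net.version == 4 else v6).append(str(net))
    return v4, v6

# Example dual-stack pool:
print(split_ip_pool("10.0.0.0/8, 2001:db8:1::/48"))
# -> (['10.0.0.0/8'], ['2001:db8:1::/48'])
```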
18:37:54 <tbachman> right now, we haven’t changed any behavior for proxy ip_pool
18:38:11 <tbachman> but if you want to take advantage of dual-stack, there is a change we should probably make
18:38:23 <SumitNaiksatam> songole: so the summary is, whether we need to do the same thing for proxy_ip_pool, what we did for the ip_pool of the L3P?
18:38:25 <tbachman> right now, there is no ip_version provided with the proxy workflow
18:38:36 <tbachman> the ip_version is implied as IPv4
18:39:02 <tbachman> we should consider adding an ip_version config var, and allow it to use the same values as is used with the L3P
18:39:06 <tbachman> (4, 6, and now 46 to mean “dual-stack”)
18:39:07 <songole> SumitNaiksatam: I would think it is required
18:39:25 <SumitNaiksatam> songole: okay, tbachman’s current spec and implementation does not address this
18:39:43 <SumitNaiksatam> songole: we would need some follow up work to get to that
18:39:46 <tbachman> SumitNaiksatam: it wouldn’t take much, I think
18:40:08 <SumitNaiksatam> songole: until then, when using NFP and service chains, perhaps only ipv4 is going to work
18:40:12 <SumitNaiksatam> tbachman: okay
18:40:19 <tbachman> I think it would just be adding the ip_version config var
18:40:21 <songole> the proposal to make it a string - is it part of the spec?
18:40:30 <SumitNaiksatam> songole: yes
18:40:31 <tbachman> songole: it is — for L3P
18:40:38 <tbachman> but not the proxy ip_pool
18:40:41 <songole> tbachman: ah, got it
18:41:02 * tbachman has a terrible memory — we may have also changed the type for proxy ip_pool to be a string
18:41:19 <SumitNaiksatam> tbachman: could we potentially consider using the same ip_version for both ip_pool and proxy_ip_pool?
18:41:36 <tbachman> SumitNaiksatam: we could
18:41:46 <tbachman> songole: it is a string in the proxy ip_pool as well
18:41:51 <tbachman> (that’s in my patch)
18:42:01 <tbachman> I did that to keep things consistent
18:42:02 <SumitNaiksatam> tbachman: ah good, so that change is already done
18:42:07 <songole> tbachman: ok. cool
18:42:10 <tbachman> SumitNaiksatam: ack. But we still need ip_version
18:42:19 <tbachman> the question is whether we would want a separate one for proxy ip_pool
18:42:29 <tbachman> I can see the argument for a separate config file var
18:42:32 <songole> we could use the same version param with proxy ip_pool as well
18:42:35 <tbachman> since there is a separate ip_pool
18:42:36 <SumitNaiksatam> tbachman: yeah, i meant the ip_version stands for both pools
18:43:18 <tbachman> In any case, this is something we can address in a follow-on patch, if needed
18:43:24 <tbachman> we already have to do one for the DB schema change
18:43:35 <tbachman> (in order to increase the ip_pool string from 64 to 256 characters)
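For context, a rough sketch of what that follow-on schema change might look like as an Alembic migration; the table name, column sizes, and revision IDs below are assumptions for illustration, not the actual GBP patch:

```python
from alembic import op
import sqlalchemy as sa

# Hypothetical sketch of the ip_pool column widening mentioned above; the
# table name and revision IDs are illustrative, not the actual migration.
revision = 'widen_ip_pool'
down_revision = '<previous-revision>'

def upgrade():
    op.alter_column('gp_l3_policies', 'ip_pool',
                    existing_type=sa.String(64),
                    type_=sa.String(256))
```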
18:44:07 <SumitNaiksatam> if we use the same ip_version, then perhaps only additional changes are required in the plumber
18:44:19 <SumitNaiksatam> tbachman: ack
18:44:20 <rkukura> tbachman: So if ip_version enables IPv6 but proxy_ip_pool does not have any IPv6 prefixes, does this disable IPv6 for proxies?
18:44:34 <songole> tbachman: would nfp break with this change unless we handle v6 addressing changes?
18:44:52 <tbachman> rkukura: right now, there is no checking of ip_version against proxy ip_pool values
18:44:57 <SumitNaiksatam> songole: it depends on how its being used
18:45:03 <tbachman> songole: IPv4 works as-is
18:45:10 <tbachman> the goal was to not break existing workflows
18:45:30 <tbachman> so — you don’t get anything new, but nothing should be broken
18:45:44 <SumitNaiksatam> tbachman: nice thanks for confirming that
18:45:44 * tbachman admits he has a limited grasp of the proxy workflows
18:46:01 <SumitNaiksatam> songole: hopefully our gate jobs will catch if anything breaks
18:46:09 <SumitNaiksatam> songole: the NFP gate jobs that is
18:46:15 <songole> SumitNaiksatam: ok
18:46:17 <rkukura> tbachman: right, so it should be possible to have ip_version=46, and proxy_ip_pool to contain v4 and/or v6 prefixes, providing control over which IP versions are available to proxy
18:46:19 * tbachman notes that those passed in his current iteration of his patch
18:46:37 <SumitNaiksatam> tbachman: nice
18:46:44 <tbachman> rkukura: I’m not sure what happens if there is an IPv6 prefix in proxy_ip_pool
18:46:51 <tbachman> we don’t have a test for that
18:47:01 <tbachman> but IPv4 is currently tested, of course
18:47:06 <tbachman> (and still works)
18:47:27 <SumitNaiksatam> okay, perhaps we can switch to the other two topics on the agenda
18:47:35 <rkukura> ok
18:48:02 <SumitNaiksatam> songole: perhaps you can look at the spec: #link https://review.openstack.org/#/c/459823/
18:48:07 * tbachman notes he probably should have an exception to catch that for now
18:48:11 <tbachman> SumitNaiksatam: thanks for linking that!
18:48:15 <SumitNaiksatam> and we can follow up if you have concerns
18:48:30 <SumitNaiksatam> tbachman: thanks again for being on top of this!
18:48:42 <SumitNaiksatam> #topic stable/mitaka timeout issues
18:49:02 <tbachman> SumitNaiksatam: np!
18:49:15 <SumitNaiksatam> the summary here is that the stable/mitaka py27 job times out more often than not
18:49:23 <songole> SumitNaiksatam: will do.
18:49:24 <SumitNaiksatam> and i have been trying to get to the bottom of it
18:49:37 <SumitNaiksatam> so far with limited success
18:50:00 <SumitNaiksatam> i identified a couple of places where we were trying to connect to external servers/processes
18:50:06 <SumitNaiksatam> and waiting for it to timeout
18:50:33 <SumitNaiksatam> on both occasions we could just get rid of the tests since these are legacy
18:50:45 <SumitNaiksatam> unfortunately the problem is still not solved
18:51:10 <SumitNaiksatam> i have this patch: #link https://review.openstack.org/#/c/467391/
18:51:33 <SumitNaiksatam> it also cleans up a lot of the logging
18:52:07 <SumitNaiksatam> so if you see this:
18:52:08 <SumitNaiksatam> http://logs.openstack.org/26/463626/19/check/gate-group-based-policy-python27-ubuntu-trusty/6dc1861/console.html
18:52:17 <SumitNaiksatam> a lot of the redundant noisy logging is gone
18:52:44 <SumitNaiksatam> makes it easier to narrow down the problems
18:53:07 <SumitNaiksatam> i think we should try to merge this patch and couple of others in the queue:
18:53:22 <SumitNaiksatam> #link https://review.openstack.org/#/q/status:open+project:openstack/group-based-policy+branch:stable/mitaka
18:53:30 <SumitNaiksatam> rkukura: your patches can also be posted
18:54:39 <SumitNaiksatam> thats it from me, unless there are questions/comments
18:54:39 <rkukura> SumitNaiksatam: Which patches of mine? I haven’t done any of the backlogged stable/mitaka back-ports yet
18:54:57 <SumitNaiksatam> rkukura: yes, i meant you can post them
18:55:14 <SumitNaiksatam> rkukura: i feel more confident that they will not perpetually time out
18:55:29 <SumitNaiksatam> even while i try to fix the situation further
18:55:33 <rkukura> SumitNaiksatam: I haven’t been waiting for the timeout fix - I’ve been trying to finish the data migration patch for apic_aim
18:55:49 <rkukura> I will review the patches mentioned.
18:55:50 <SumitNaiksatam> rkukura: i know that, but the reality is that we wouldn’t have been able to merge them
18:55:58 <SumitNaiksatam> even if you had posted them
18:56:04 <rkukura> SumitNaiksatam: understood
18:56:15 <rkukura> I guess that is important for manual backports
18:56:35 <SumitNaiksatam> i feel a little more optimistic now (rechecks will probably be needed)
18:56:45 <rkukura> for these timeout work-arounds, I’d rather see fixes merged to master, then newton, then mitaka when possible
18:57:18 <SumitNaiksatam> rkukura: yes, but these might be branch specific
18:57:34 <rkukura> unless the code causing the issue is already gone/changed/fixed on the newer branches
18:57:35 <SumitNaiksatam> the log clean up which i am doing here will definitely be helpful in the other branches as well
18:57:44 <rkukura> right
18:57:49 <SumitNaiksatam> i will post the relevant patches separately
18:58:16 <SumitNaiksatam> some of the patching might be different for those branches
18:58:41 <SumitNaiksatam> okay switching topics
18:58:43 <rkukura> ok
18:58:46 <SumitNaiksatam> #topic Ocata sync
18:58:52 <SumitNaiksatam> annak_: still there?
18:58:58 <annak_> yep
18:59:06 <tbachman> SumitNaiksatam: 1 min
18:59:12 <SumitNaiksatam> sorry got delayed to get to this
18:59:22 <annak_> np
18:59:23 <SumitNaiksatam> so i was hoping to have started merging the patches this week
18:59:40 <SumitNaiksatam> but might take a little longer since there is still some backlog
18:59:52 <SumitNaiksatam> meanwhile annak_ thanks for keeping up the work on this
19:00:01 <SumitNaiksatam> and let us know if you are hitting any issues
19:00:12 <annak_> np, sure
19:00:14 <SumitNaiksatam> we can discuss on GBP IRC channel or over emails
19:00:19 <SumitNaiksatam> annak_: thanks!
19:00:33 <SumitNaiksatam> alrighty, thanks all for joining!
19:00:35 <SumitNaiksatam> bye
19:00:36 <tbachman> SumitNaiksatam: thanks!
19:00:37 <annak_> I had a few qs - I'll ask you offline
19:00:38 <SumitNaiksatam> #endmeeting