14:02:22 <sgordon> #startmeeting telcowg
14:02:23 <openstack> Meeting started Wed Dec  9 14:02:22 2015 UTC and is due to finish in 60 minutes.  The chair is sgordon. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:26 <openstack> The meeting name has been set to 'telcowg'
14:02:32 <sgordon> #link https://etherpad.openstack.org/p/nfv-meeting-agenda
14:02:35 <sgordon> #topic roll call
14:02:36 <sgordon> o/
14:02:44 <cloudon> hi
14:02:51 <sgordon> hiya
14:03:01 <sgordon> i am imagining that this will be pretty quick :)
14:03:07 <cloudon> yup...
14:03:11 <sgordon> #topic Complex Instance Placement Updates
14:03:17 <sgordon> #link https://review.openstack.org/#/c/251442/
14:03:19 <sgordon> i made a thing!
14:03:30 <sgordon> and by that i mean i integrated your feedback including the small nit
14:03:43 <cloudon> saw that, thanks
14:03:58 <sgordon> will see what arkady and others have to say about the updates
14:03:59 <cloudon> looking at it again just now, I wonder about this sentence preceding your changes...
14:04:30 <cloudon> (waits while he figures out how to use gerrit)
14:04:56 <cloudon> There is however no concept of having two separate groups of instances where the instances in the group have one policy towards each other, and a different policy towards all instances in the other group.
14:05:20 <sgordon> right
14:05:22 <sgordon> so effectively
14:05:32 <cloudon> Think you added that - not sure I understand
14:05:32 <sgordon> Instances in group A have affinity for each other
14:05:41 <sgordon> Instances in group B have affinity for each other
14:05:57 <sgordon> Instances in group A have anti-affinity for instances in group B (and vice versa)
14:07:15 <cloudon> OK, gotcha.  So extending current pair-wise affinity both to (a) (anti-)affinity between groups and (b) clustering within a group
14:07:54 <sgordon> right
14:08:13 <sgordon> i can probably make that clearer with an ascii diagram
14:08:25 <sgordon> ignoring that my artistic ability is confusable with that of a rock
14:08:29 <cloudon> ok, makes sense.  hadn't thought of the first - what sort of use case do you have in mind?
14:08:43 <cloudon> Boxes are good.  All you need.
14:09:07 <sgordon> i would have to re-confirm with the relevant NEPs but effectively each group is an "instance" of a complex application, itself made up of several instances
14:09:22 <sgordon> so you want the parts of the complex application close to each other for performance
14:09:30 <sgordon> but you want a spread of them across the infra for resiliency
14:10:14 <cloudon> Ah, right.  So maybe not necessarily separate tiers within an app, rather two instances of the same tier?
14:10:24 <sgordon> yeah
14:10:34 <cloudon> makes sense
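[Editor's note: the gap discussed above can be illustrated with the Nova CLI of the time. This is a hedged sketch; group names and UUIDs are illustrative, not from the meeting.]

```shell
# Nova server groups (circa 2015) apply one policy among the members
# of a single group only:
nova server-group-create app-copy-a affinity
nova server-group-create app-copy-b affinity

# Each instance is scheduled with affinity to its own group's members
# (UUID placeholder is illustrative):
nova boot --flavor m1.small --image cirros \
    --hint group=<app-copy-a-uuid> vm-a1

# What the user story asks for, and what is NOT expressible here, is an
# additional cross-group policy: anti-affinity between app-copy-a and
# app-copy-b as whole groups, so each copy of the complex application
# is packed internally but spread away from the other copy.
```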
14:11:48 <sgordon> #action sgordon to update complex instance placement user story with more detail and diagram for (anti-)affinity between groups (which also have their own policies)
14:11:57 <gjayavelu_> sgordon: IIRC, we need ability to limit the # of instances in a group, and ability to specify a set of host(s) for placement. am i missing any other variable?
14:12:18 <cloudon> why do you need to specify hosts?
14:12:26 <sgordon> i dont believe either of those things are part of this proposal atm
14:12:33 <gjayavelu_> ah ok
14:12:33 <sgordon> we know already that specifying hosts will be rejected
14:12:43 <sgordon> from a project implementation pov
14:13:12 <sgordon> what's the driver for limiting the # of instances in the group (versus just not creating them :))
14:13:21 <sgordon> are you talking about like a group quota?
14:13:22 <gjayavelu_> not the hosts directly, but in the form of AZs
14:13:44 <sgordon> you can actually use AZs alongside groups already today
14:13:51 <sgordon> they are different filters
14:14:09 <gjayavelu_> ok
14:14:23 <sgordon> obviously if you specify the same affinity group and different AZs when booting though you will get a failure to launch on one of them
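[Editor's note: a sketch of combining an availability zone with a server group hint, as described above. Group membership and AZs are enforced by separate scheduler filters (ServerGroupAffinityFilter and AvailabilityZoneFilter), so both constraints can be supplied at boot; names and UUIDs are illustrative.]

```shell
# Boot two members of the same affinity group into the same AZ: fine.
nova boot --flavor m1.small --image cirros \
    --availability-zone az1 \
    --hint group=<affinity-group-uuid> vm-1

# Booting a further member of that affinity group into a *different* AZ
# gives the scheduler conflicting constraints, so the second boot fails
# to find a valid host.
nova boot --flavor m1.small --image cirros \
    --availability-zone az2 \
    --hint group=<affinity-group-uuid> vm-2
```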
14:14:37 <sgordon> :D
14:16:58 <sgordon> gjayavelu_, what about the # of instances question? can you explain the driver for that?
14:18:34 <gjayavelu_> One is DoS and second could be to prevent all VMs of a group failing when a host dies. I'm trying to pull up that spec... I believe there is already an option to do that.
14:18:52 <gjayavelu_> i mean there is an option to limit # of instances
14:19:01 <cloudon> That 2nd part is one of the purposes of this proposal..
14:19:07 <sgordon> yeah there is a quota for that
14:19:21 <sgordon> it is not group specific it is how many instances the tenant can create period
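[Editor's note: the quota referred to above is the tenant-wide instance quota, not anything group-specific. A hedged sketch with the era's CLI; the tenant ID and limit are illustrative.]

```shell
# Cap how many instances a tenant may create in total (all groups combined):
nova quota-update --instances 20 <tenant-id>

# Inspect the current limits for that tenant:
nova quota-show --tenant <tenant-id>
```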
14:22:49 <sgordon> ok
14:22:58 <sgordon> let's keep bouncing this around on the spec review
14:23:05 <sgordon> s/spec/user story/
14:23:19 <sgordon> i will also try and get something up for one of the other ones we are moving across this week
14:23:40 <sgordon> #action sgordon to propose sec segregation or session border control against product wg repo
14:23:48 <sgordon> thanks for your time
14:23:58 <cloudon> cheers
14:27:10 <sgordon> #endmeeting