14:01:23 <sgordon> #startmeeting telcowg
14:01:24 <openstack> Meeting started Wed Dec 17 14:01:23 2014 UTC and is due to finish in 60 minutes.  The chair is sgordon. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:27 <openstack> The meeting name has been set to 'telcowg'
14:01:31 <sgordon> #topic roll call
14:01:38 <amitry> good morning
14:01:48 <smazziotta> Hi
14:01:53 <sgordon> #link https://etherpad.openstack.org/p/nfv-meeting-agenda
14:01:55 <sgordon> hi all
14:02:06 <cloudon> hi
14:02:11 <sgordon> anyone else here for the telco working group?
14:02:14 <sgordon> hi cloudon
14:02:21 <smazziotta> hi
14:02:24 <sgordon> hi amitry, smazziotta
14:02:42 <sgordon> mkoderer, are you around?
14:02:58 <sgordon> #topic Meeting Times for holiday period
14:03:10 <sgordon> just to formalize something i proposed in the email reminder for this meeting
14:03:21 <sgordon> the next meetings would fall on Dec 24 and 31
14:03:38 <sgordon> i propose that we dont meet on those dates, making the next meeting jan 7
14:03:51 <sgordon> due to many people being out on holidays over this period
14:03:54 <sgordon> any objections?
14:04:28 <smazziot> +1
14:04:36 <cloudon> fine by me
14:04:38 <sgordon> #info No meeting December 24th and 31st, next meeting will be January 7th 2015.
14:04:48 <sgordon> #topic Actions from last week
14:04:58 <sgordon> so i will work backwards on these
14:05:19 <sgordon> #info sgordon_ was to attempt to frame an email to OPNFV tech lists highlighting the concern jaypipes raised wrt alignment w/ openstack
14:05:41 <sgordon> #link http://lists.opnfv.org/pipermail/opnfv-tsc/2014-December/000387.html
14:06:14 <sgordon> i reached out to chris wright who is working with chris price on the OPNFV side
14:07:02 <sgordon> i believe the proposal is for jay and russell to present to the OPNFV technical committee on how to ideally interact with the openstack community
14:07:21 <smazziot> yep. this is my understanding as well
14:07:26 <jaypipes> sgordon: we've kicked off an email thread between cdub, chris from ericsson, me and russellb
14:07:29 * beagles wanders in late
14:07:49 <sgordon> #info Russell and Jay have been invited to present to the OPNFV TSC on interacting with the OpenStack community successfully
14:07:52 <jaypipes> sgordon: we'll be attending the first OPNFV TSC session in January to help them figure out the OpenStack community and reduce duplication
14:07:55 <sgordon> jaypipes, thanks for that
14:08:02 <jaypipes> no problem
14:08:18 <ybabenko> hi
14:08:27 <sgordon> i had sent a couple of areas of concern to chris w just to highlight the issue
14:08:40 <jaypipes> cool.
14:08:43 <sgordon> but i think this is the best way forward :)
14:08:54 <jaypipes> agreed. just need to start collab early and often.
14:09:05 <jaypipes> make sure there's no rabbit holes or duplicated work.
14:09:15 <sgordon> yes, not just dev side either
14:09:21 <sgordon> some overlap i am seeing with operators projects
14:09:24 <sgordon> (e.g. logging)
14:09:43 <sgordon> ok
14:09:47 <sgordon> next ai from last week
14:10:06 <sgordon> #info mkoderer and DaSchab were to work on security segregation (dmz/mz concept), VNF instantiation and separation of Infra/App Orchestration use case
14:10:16 <sgordon> mkoderer, DaSchab were you able to make any progress on this?
14:10:22 <sgordon> is there anything we can do to help?
14:10:31 <DaSchab> we started to write it down....
14:10:54 <DaSchab> but it's holiday season    sorry :-(
14:11:04 <sgordon> ack
14:11:25 <sgordon> #info DaSchab notes some progress on writing up these use cases but delayed due to holiday season
14:11:37 <sgordon> DaSchab, any chance that what you have could be put in the wiki as a draft?
14:12:15 <DaSchab> hopefully by the end of this week
14:12:54 <sgordon> #action DaSchab aiming to get initial draft on wiki by end of this week
14:12:58 <sgordon> DaSchab, ok we can revisit
14:13:13 <sgordon> i dont see rprakash here
14:13:17 <sgordon> so i will carry that AI over
14:13:29 <sgordon> #action rprakash adding Mobile Network use case for GTP tunneling and will bring it next week
14:13:43 <sgordon> #topic Use Cases
14:14:07 <sgordon> #link https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases
14:14:26 <sgordon> so we do not have anything new here as far as i can see from last week
14:14:41 <sgordon> aveiga's suggestion previously was that we pick a use case each week to try and examine
14:14:56 <sgordon> with a view to ultimately extracting requirements from it
14:14:58 <DaSchab> +1
14:15:19 <ybabenko> agree
14:15:35 <aveiga> sgordon: want to start going over use cases in January as scheduled topics then?
14:15:36 <sgordon> #info current use cases listed are VPN Instantiation, Session Border Controller, Virtual IMS Core, Access to physical network resources
14:15:41 <sgordon> aveiga, yeah
14:15:43 <ybabenko> seems like VPN as a service can be the first one?
14:15:55 <sgordon> what i would like to gauge is whether there is a feeling for priorities
14:16:04 <sgordon> or rather, enthusiasm for tackling any particular one first
14:16:20 <sgordon> ybabenko, that is really just the order they happen to be listed in
14:16:28 <sgordon> ybabenko, not an indication of priority at this time
14:16:58 <ybabenko> would be helpful to see consensus from the operators' side - where is the biggest pain?
14:17:40 <sgordon> ybabenko, agree
14:18:11 <amitry> do we want biggest pain or quick wins?
14:18:18 <sgordon> good question
14:18:23 <aveiga> this is for analysis
14:18:29 <sgordon> to me "Access to physical network resources" looks like the quickest win
14:18:32 <sgordon> on face value
14:18:33 <aveiga> I don't think we'll know how long it takes until we go over the gaps
14:18:34 <smazziot> I would have preferred a use case more challenging on the data plane
14:18:35 <ybabenko> so from our side we could offer to work on service function chaining out of openstack
14:18:44 <sgordon> smazziot, e.g. vCPE?
14:18:48 <ybabenko> this seems to be a huge open gap at the moment
14:18:55 <smazziot> yep
14:19:28 <sgordon> #info do we want to address biggest pain or quick wins for analysis?
14:19:47 <DaSchab> service chaining is also not described as a use case yet
14:20:02 <sgordon> #info current use cases do not present a challenge on data plane side (perhaps need to solicit vCPE use case?)
14:20:05 <sgordon> DaSchab, right
14:20:09 <ybabenko> DaSchab: we could offer the first draft for SFC
14:20:14 <sgordon> and i think this is one where a use case is really important
14:20:31 <sgordon> because everyone wants to work on service chaining but not everyone has the same idea of how they need it to work
14:20:53 <sgordon> use cases need to drive that
14:20:57 <DaSchab> right
14:21:24 <ybabenko> we can offer to do the first draft for the next meeting
14:21:28 <smazziot> I would also state that the challenge we see is not only on the use case itself but on the underlying technology (SR-IOV vs OVS-DPDK for instance)
14:21:40 <adrian-hoban> Is this the relevant #link https://review.openstack.org/#/c/93524 for SFC?
14:21:42 <sgordon> #action ybabenko to draft a service chaining use case for future discussion
14:22:08 <smazziot> +1 for SFC
14:22:43 <sgordon> #link https://review.openstack.org/#/c/93524
14:23:12 <DaSchab> btw... when will be the next meeting?
14:23:39 <DaSchab> just asking due to xmas season
14:23:47 <sgordon> adrian-hoban, did anyone re-propose that for kilo?
14:23:52 <sgordon> DaSchab, January 7th
14:24:00 <sgordon> we discussed before you came in :)
14:24:08 <DaSchab> sorry!
14:24:13 <sgordon> i figured 24th/31st were probably going to be very low turnout
14:24:15 <sgordon> ;)
14:24:20 <ybabenko> adrian-hoban: thanks for link / not familiar with that
14:25:04 <sgordon> adrian-hoban, just looking at associated patches https://review.openstack.org/#/c/117671/ looks like it moved to Group Based Policy?
14:25:16 <adrian-hoban_> sgordon: Yes, it moved to GBP
14:25:36 <sgordon> #link https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining
14:25:51 <sgordon> ybabenko, that is probably the one you want to look at ^
14:26:01 <sgordon> for the more up to date submissions
14:26:41 <sgordon> ok
14:26:51 <ybabenko> sgordon: gotcha thanks
14:27:12 <sgordon> #action sgordon to send email to kick off discussion about which use case to pick for next meeting
14:27:27 <sgordon> mkoderer, are you around to talk orchestration use cases?
14:28:11 <ybabenko> sgordon: mark is not in the call today as off for the holidays
14:28:22 <sgordon> ok
14:28:35 <sgordon> #topic Design and Implementation
14:28:50 <sgordon> so still tracking some stuff that was originally proposed under the model we were using last cycle
14:29:01 <sgordon> #topic Design and Implementation - VLAN trunking
14:29:07 <sgordon> #link https://review.openstack.org/#/c/136554/
14:29:11 <sgordon> #link https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks
14:29:19 <sgordon> #link http://docs-draft.openstack.org/54/136554/3/check//gate-neutron-specs-docs/f4bea33//doc/build/html/specs/kilo/nfv-vlan-trunks.html
14:29:46 <sgordon> ijw has updated his spec for exposing whether or not VLAN trunking is supported by the mech driver in use
14:29:54 <ijw> he has.
14:29:59 <ijw> Also the one for MTU.
14:30:16 <sgordon> #link https://review.openstack.org/#/c/105989/
14:30:22 <sgordon> for the MTU change proposal ^
14:30:56 <ybabenko> MTU must support jumbo frames
14:30:58 <sgordon> #info need reviews on VLAN trunking spec ( https://review.openstack.org/#/c/136554/ ) and MTU selection and advertisement spec ( https://review.openstack.org/#/c/105989/ )
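(For context on why trunking matters to the VNFs in these use cases: the guest itself tags its traffic, so it needs a single port that carries frames for multiple VLANs. Inside a guest that looks roughly like the following iproute2 sketch - interface name, VLAN ID and address are all illustrative:)

```
# Terminate VLAN 100 on a trunked guest port (run as root inside the VNF).
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.0.2.10/24 dev eth0.100
ip link set eth0.100 up
```

(This only works end to end if the mechanism driver actually passes tagged frames through, which is what the spec proposes to let Neutron advertise.)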
14:31:19 <ybabenko> there are some use-cases like carrier grade ethernet services which need jumbo
14:31:24 <sgordon> ybabenko, i would encourage you to read / review the spec and provide feedback there
14:31:47 <ijw> jaypipes: you should talk to me about opnfv and duplicate efforts, btw
14:31:49 <aveiga> ybabenko: I'll throw my hat in. There are other use cases for jumbo frames as well, like video transport :)
14:32:10 <jaypipes> ijw: absolutely, I'd be happy to. :)
14:32:13 <ybabenko> aveiga: >)
14:32:30 <ijw> -> #openstack-dev, after
14:32:30 <sgordon> if i recall correctly ijw jumbo frames was one of the scenarios you endeavored to include in that spec?
14:32:41 <ybabenko> sgordon: I am happy to do that - so we are speaking about https://review.openstack.org/#/c/105989/ ?
14:32:45 <ijw> Indeed - it's nondiscriminatory as to frame size
14:32:49 <sgordon> ybabenko, yes
14:32:54 <sgordon> ijw, right - exactly as i thought
14:33:11 <sgordon> aveiga, i would encourage you to take a look at that one as well then and confirm whether it meets your needs :)
14:33:45 <aveiga> sgordon: you got it
14:34:00 <ijw> What it doesn't really do is explain how to set jumbo frames up.  A cloud has them or it doesn't and there's not much you can do about that at the API.  What it does is let you tell if your app is going to run, and where the networking is capable of adapting, you can tell the driver what you want.
14:34:01 <sgordon> #topic Design and Implementation - ML2 port security extension
14:34:11 <sgordon> #undo
14:34:12 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x3dc85d0>
14:34:30 <aveiga> ijw: we need a bug filed against current MTU selection in both Nova and Neutron
14:34:44 <aveiga> VIF selection isn't respecting MTU, and sometimes the network side isn't either
14:34:58 <ijw> aveiga: fine with me, reference the spec, which you will obviously +1
14:35:02 <aveiga> I know the option is there, but it's not reliably doing anything
14:35:18 <sgordon> #info MTU selection and advertisement spec covers telling you whether your app is going to run and, where the networking is capable of adapting, letting you tell the driver what you want. It does not explain how to set jumbo frames up - a cloud has them or it doesn't and there is limited ability to do anything about this at the API level.
14:35:18 <ijw> aveiga: there are two config parameters and neither does what you would expect
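(The two parameters ijw mentions are, at a guess, the Juno-era network_device_mtu settings; the dnsmasq option-26 trick was the common workaround of the day. A configuration sketch, all values illustrative:)

```
# neutron.conf - MTU applied to network devices Neutron creates
[DEFAULT]
network_device_mtu = 9000

# dhcp_agent.ini - point dnsmasq at an extra config file...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# ...and advertise the MTU to guests via DHCP option 26,
# by putting this in /etc/neutron/dnsmasq-neutron.conf:
#   dhcp-option-force=26,9000
```

(Whether these settings reach every VIF type and agent reliably is exactly the gap aveiga is describing.)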
14:35:20 <ybabenko> sgordon: sorry for stupid q but where is the blueprint itself for MTU topic / i see only the code
14:35:43 <ijw> ybabenko: read 'the code' - it's a document describing the change
14:35:48 <sgordon> ybabenko, the code in this case is the RST file which is the full specification
14:35:54 <sgordon> https://review.openstack.org/#/c/105989/6/specs/kilo/mtu-selection-and-advertisement.rst
14:36:07 <aveiga> sgordon: for specs, maybe we ought to link the rendered build in this forum?
14:36:22 <ijw> aveiga: problem is you can't comment on the rendered build
14:36:26 <sgordon> http://docs-draft.openstack.org/89/105989/6/check//gate-neutron-specs-docs/9169e72//doc/build/html/specs/kilo/mtu-selection-and-advertisement.html
14:36:27 <aveiga> ah, true
14:36:31 <sgordon> yeah
14:36:54 <sgordon> perhaps an education thing for me though
14:36:56 <ybabenko> ijw: we are doing our best )))))
14:37:05 <ybabenko> sgordon: OK. appreciate
14:37:05 <sgordon> wiki page on reviewing specs for new people if one doesnt already exist
14:37:38 <sgordon> ok, couple of others to cover quickly
14:37:39 <sgordon> #topic Design and Implementation - ML2 port security extension
14:37:47 <sgordon> #link https://review.openstack.org/#/c/99873/
14:37:52 <sgordon> #link https://blueprints.launchpad.net/neutron/+spec/ml2-ovs-portsecurity
14:38:00 <sgordon> #link http://docs-draft.openstack.org/73/99873/14/check/gate-neutron-specs-docs/42051c8/doc/build/html/specs/kilo/ml2-ovs-portsecurity.html
14:38:02 <ijw> Looks good to me, but someone else must approve
14:38:13 <sgordon> so, this has an awful lot of +1s
14:38:19 <sgordon> but needs some core attention
14:38:32 <ijw> More to the point, they've noticed that it's not really OVS-specific, which means that it works with LB implementations too (which is useful if you happen to like VLANs)
14:38:55 <sgordon> lol
14:38:56 <sgordon> yes
14:39:13 <sgordon> #info needs neutron core reviewer input
14:39:27 <ijw> driver, not core reviewer
14:39:39 <ybabenko> so is it mainly L2 port security?
14:39:56 <ijw> L2 and L3 (security group)
14:40:07 <ijw> It just allows it to be wholesale turned off
14:40:10 <aveiga> it's l3 security on an l2 access layer, yes?
14:40:14 <sgordon> ijw, ahh yes
14:40:16 <sgordon> #undo
14:40:17 <openstack> Removing item from minutes: <ircmeeting.items.Info object at 0x3c44fd0>
14:40:22 <sgordon> #info needs neutron driver input
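(Operationally, what the spec enables is toggling the existing port_security_enabled attribute off per port. A CLI sketch assuming the python-neutronclient of the day; security groups have to be detached before port security can be disabled:)

```
# Detach security groups, then disable port security on the port.
neutron port-update $PORT_ID --no-security-groups
neutron port-update $PORT_ID --port-security-enabled=False
```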
14:42:31 <sgordon> ok i am going to skip large pages and cpu pinning and come back if we get time
14:42:48 <sgordon> #topic Design and Implementation - I/O based scheduling
14:42:54 <sgordon> #link https://review.openstack.org/#/q/topic:bp/input-output-based-numa-scheduling,n,z
14:43:10 <sgordon> this is up for code review
14:43:22 <sgordon> adrian-hoban_, i believe this needs to be iterated on based on the current read out?
14:43:56 <sgordon> pczesno, im assuming you are working on this
14:43:57 <sgordon> :)
14:44:39 <pczesno> hi
14:45:19 <pczesno> i'm currently reworking few things, code will be up for review by the end of this week
14:45:46 <sgordon> #info pczesno working on updates, will re-propose by end of week
14:45:49 <sgordon> thanks!
14:46:08 <sgordon> #topic Design and Implementation - Pluggable VIF driver
14:46:32 <sgordon> there has been a lot of interest in the past in having the ability to plug the VIF driver; this was removed (intentionally) in the not too distant past
14:46:45 <sgordon> i wanted to draw attention to some recent discussion in this area though
14:46:47 <sgordon> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/052509.html
14:47:18 <sgordon> #link http://lists.openstack.org/pipermail/openstack-dev/2014-December/052838.html
14:47:47 <irenab> sgordon: this was rejected by nova core team
14:47:53 <sgordon> irenab, yes - we know
14:47:58 <ijw> So the issue seems to have two components, in fact
14:48:00 <sgordon> irenab, im referring to the more recent discussion
14:48:20 <ijw> One is that there's a contingent of people who would just like to tidy it up to remove some of the crap that happens on the Nova side to support plugging
14:48:39 <sgordon> irenab, about potentially enhancing the VIF mechanism such that we don't need completely duplicated VIF plugins for everything that wants to use e.g. vhost-user in a slightly different way
14:48:48 <ijw> The other is that there are people wanting newer, different VIF drivers and the Nova guys don't like to put them in until they're certain they're being tested by something
14:49:29 <irenab> sgordon: yes, the script option
14:50:22 <sgordon> so i believe out of that maxime was going to attempt to repropose something based on dan's idea
14:50:46 <sgordon> not sure how this will end up meshing with proposal deadline etc. for kilo...i haven't seen it yet
14:50:50 <sgordon> but just wanted to highlight
14:51:36 <ijw> I'm not a massive fan of the proposal but it would solve the immediate problems
14:51:51 <ijw> Also, the discussion is always very KVM-centric
14:52:11 <smazziot> ijw: do you have an alternative in mind ?
14:52:43 <sgordon> ijw, indeed, though usually openly on the mailing list - do other hypervisors just not care?
14:53:33 <ijw> sgordon: I think most users use KVM, but in particular I wonder about Ironic
14:53:46 <ybabenko> ijw: +1
14:54:03 <sgordon> #info How does VIF proposal work with regards to Ironic?
14:54:55 <ijw> I put a spec together that was related to Ryota's but somewhat simplified what he'd put together and addressed another point: https://review.openstack.org/#/c/141791/
14:55:09 <ijw> Obviously not going anywhere right now, but it was more a matter of writing it down
14:55:22 <sgordon> RIGHT
14:55:24 <sgordon> -caps
14:55:32 <sgordon> i think that is the longer term solution to a lot of this
14:56:01 <ijw> The strong opinion is that Nova should be ignorant of networking beyond being able to get an attachment it can use, so that expands the plugging interaction into a proper negotiation as opposed to the current situation where neutron has magical knowledge of what works for Nova
14:56:16 <sgordon> #link https://review.openstack.org/#/c/141791/
14:56:40 <sgordon> thanks for that
14:56:47 <sgordon> #topic other business
14:57:01 <sgordon> we have around 4 mins left, does anyone have something else they would like to raise today?
14:57:56 <gcossu> what about qos?
14:58:16 <sgordon> gcossu, indeed - what about qos?
14:58:23 <sgordon> (that is, what do you want to cover?)
14:58:27 <ybabenko> yep what about it!
14:58:30 <ybabenko> we need it!!!
14:58:40 <sgordon> ok
14:58:46 <ybabenko> QoS configuration in vswitch out of openstack ?
14:58:53 <sgordon> any volunteers to nail down use cases more concretely?
14:58:58 <ybabenko> L2 / L3 QoS parameterisation
14:59:03 <aveiga> ybabenko: the question is, what do you need (QoS is a lot of things) and does it go here or elsewhere?
14:59:04 <sgordon> (noting that there are of course some existing proposals in this space)
14:59:09 <ybabenko> DSCP, p-bits, policing. ...
14:59:38 <aveiga> I'll second that then, as we've been carrying internal patches for DSCP for a long time
14:59:48 <aveiga> I hear it's a sore subject though…
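(A point of reference for the DSCP discussion: DSCP occupies the upper six bits of the IP ToS byte, so a marking patch ultimately boils down to the shift below. Minimal Python sketch; the EF value is the standard codepoint, everything else is illustrative:)

```python
def dscp_to_tos(dscp):
    """Map a DSCP codepoint to the IP ToS byte value.

    The ToS byte carries DSCP in its upper six bits; the lower
    two bits are ECN, left as zero here.
    """
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2

# Expedited Forwarding (EF, DSCP 46) -> ToS 184, the value an
# OpenFlow mod_nw_tos action or iptables TOS target would take.
print(dscp_to_tos(46))
```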
15:00:14 <sgordon> the fact that the first hits you get for it usually have quantum in the name should be an indicator
15:00:14 <irenab> QoS is not in neutron priority for Kilo
15:00:25 <sgordon> right
15:00:27 <ybabenko> we work on the topic of QoS and will be interested to work on this topic
15:00:35 <sgordon> i think they key for this group would be defining use cases/needs
15:00:42 <sgordon> so that it can potentially be a priority for L
15:00:48 <ybabenko> maybe we will try to write a few things before the next meeting
15:00:50 <aveiga> sgordon: give me an AI for that
15:01:04 <adrian-hoban_> irenab had proposed a QoS type feature for SR-IOV ports, but with QoS not on the Neutron priorities list, it is not going to land in Kilo
15:01:04 <sgordon> #action aveiga to define use cases/needs for QoS
15:01:15 <ybabenko> aveiga: missing QoS features out of OpenStack
15:01:16 <sgordon> adrian-hoban_, +1
15:01:27 <sgordon> ok i think we're over
15:01:30 <sgordon> (time that is)
15:01:36 <gcossu> yes, regarding the blueprint already present, I would like to contribute. We need use case requirements
15:01:37 <sgordon> happy to keep chatting in #openstack-nfv
15:01:39 <sgordon> thanks all!
15:01:42 <sgordon> #endmeeting