14:02:30 <russellb> #startmeeting nfv
14:02:31 <openstack> Meeting started Wed Jun  4 14:02:30 2014 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:31 <tianst> o/
14:02:31 <lukego> o/
14:02:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:33 <russellb> Hello!
14:02:34 <vladikr> o/
14:02:35 <openstack> The meeting name has been set to 'nfv'
14:02:38 <dandrushko> hi!
14:02:39 <bauzas> o/
14:02:40 <yamahata> hi
14:02:42 <russellb> who has joined us for our first chat?
14:02:44 * sgordon is here
14:02:45 <s3wong> hello
14:02:45 <thinrichs> Hi all
14:02:47 <cgoncalves> hi
14:02:50 <nijaba> o/
14:02:50 <xuhanp> hi
14:02:50 <russellb> #link https://wiki.openstack.org/wiki/Meetings/NFV
14:02:50 <heyongli> hi
14:02:50 <foliveira> hi
14:02:51 <cloudon> Hello
14:02:53 <mestery> o/
14:02:54 <Alon> hi
14:02:57 <pballand> Hi
14:02:59 <smazziotta> hello
14:03:12 <vladikr> Hi
14:03:19 <yjiang51> hi
14:03:26 <imendel> hi
14:03:30 <russellb> Agenda for today is on the wiki page
14:03:37 <danpb> greetings
14:03:45 <russellb> for future weeks, we'll switch to an etherpad to make it easier for everyone to add topics ahead of time that they would like to cover
14:03:51 * nijaba is happy to see a full house
14:03:57 <russellb> yeah, a good crowd for sure
14:04:04 <russellb> so let's jump to the first item ...
14:04:15 <russellb> #topic mission statement
14:04:28 <russellb> the first thing we should discuss is ... what do we want to achieve with this group?
14:04:41 <russellb> someone has drafted a mission statement here
14:04:44 <russellb> #link https://etherpad.openstack.org/p/nvf-subteam-mission-statement
14:04:50 <russellb> excuse the typo in the URL :-)
14:04:55 <nijaba> that was me, with input from three or four others
14:05:00 <russellb> nijaba: great
14:05:12 <russellb> so, everyone take a look at that and see what you think
14:05:20 <sgordon> i think it captures the central parts of what we talked about @ summit
14:05:21 <nijaba> and that must be my typo too :)
14:05:22 <russellb> if we can nail something down, we'll move it to the wiki page to be more static
14:05:26 <ijw> looks good to me
14:05:30 <sgordon> but i know there are some questions about interaction with other projects/subteams
14:05:37 <sgordon> particularly in the servicevm meeting yesterday
14:05:41 <russellb> sgordon: yeah, let's cover that next
14:05:49 <russellb> any concerns with the proposed mission?
14:06:10 <cdub> +1 on mission
14:06:21 <Alon> +1
14:06:21 <s3wong> +1
14:06:25 <FJRamon> +1
14:06:26 <nijaba> +1
14:06:28 <sgordon> +1
14:06:28 <lukego> +1
14:06:31 <cloudon> Any desire to pass feedback upstream to ETSI?
14:06:32 <thinrichs> +1
14:06:33 <cgoncalves> +1
14:06:36 <russellb> #agreed mission statement as drafted on the etherpad looks good
14:06:36 <tianst> +1
14:06:37 <JohnHaller> +1
14:06:40 <bauzas> +1
14:06:42 <russellb> great thanks :)
14:06:43 <smazziotta> +1
14:06:45 <foliveira> +1
14:06:54 <SridarK> +1
14:06:56 <ndipanov> +1
14:06:57 <russellb> cloudon: hm, a good question
14:06:59 <dandrushko> +1
14:07:00 <cdub> cloudon: not a bad idea
14:07:07 <edube> +1
14:07:10 <adrian-hoban> The comment about "without any special hardware or proprietary software" may conflict with the need for 3rd party CI?
14:07:23 <cdub> cloudon: i don't know if we need to formally capture in mission or have it be a side-effect
14:07:31 <nijaba> adrian-hoban: "If special setups are required which cannot be reproduced on the standard OpenStack gate, the use cases proponent will have to provide a 3rd party CI setup, accessible by OpenStack infra, which will be used to validate developments against." should cover that
14:07:40 <russellb> cdub: as a side effect seems reasonable
14:07:45 <sgordon> adrian-hoban, mmm i think the point raised was that everything must be testable on at least 1 open source impl
14:07:47 <russellb> seems like a natural outcome as we go forward
14:07:52 <cloudon> Happy with a side effect
14:07:52 <sgordon> i think we possibly need to be more specific there
14:07:58 <ijw> I think the point of that was that in all cases whatever we're up to should be testable by anyone in its base form of functionality, even if there are vendor specific bonuses.
14:08:02 <cdub> russellb: agreed
14:08:06 <sgordon> ijw, right
14:08:10 <bauzas> adrian-hoban: I don't see a problem with the statement
14:08:14 <adrian-hoban> Ok, +1
14:08:17 <cgoncalves> cloudon: I would advise following the ETSI NFV terminology closely
14:08:20 <mestery> +1 from me too
14:08:27 <russellb> ijw: yeah i think that's key, that's pretty much how all of openstack is developed
14:08:31 <bauzas> adrian-hoban: it leaves possibility to run 3rd-party CIs
14:08:45 <bauzas> adrian-hoban: for specific hardware resources
14:08:51 <imendel> I find the ETSI terminology too complex and less relevant for OpenStack
14:08:55 <adrian-hoban> bauzas: Agreed, it will be needed for some use cases
14:08:57 <russellb> so let's discuss the relationship of this group with other openstack teams
14:09:00 <cdub> cgoncalves: we discussed this at summit, i suggest we translate to openstack language
14:09:12 <sgordon> cdub, right
14:09:21 <russellb> on the ETSI terminology concerns, let's come back to that in a few minutes when we get to use cases
14:09:33 <russellb> so, other groups, in particular, servicevm
14:09:38 <cgoncalves> russellb: ok
14:09:54 <russellb> in general, i think it makes sense that some development efforts will be big enough that some specific coordination is fine
14:10:03 <russellb> and i think this group is a bit more broad focused
14:10:11 <cdub> russellb: what is the question, exactly?
14:10:13 <russellb> across all of openstack, and gathering requirements
14:10:23 <russellb> cdub: there seemed to be some concern with overlap with the servicevm effort
14:10:26 <yamahata> What's the issue with servicevm?
14:10:26 <russellb> sgordon: have some comments on that?
14:10:37 <sgordon> i was just lurking the minutes there
14:10:45 <sgordon> there is some overlap in terms of concerns about ETSI terminology
14:10:47 <sgordon> and use cases
14:10:57 <russellb> OK
14:10:58 <cdub> russellb: ah, ok.  it is likely that servicevm is relevant
14:11:00 <yamahata> servicevm has mostly overlapping requirements; see https://wiki.openstack.org/wiki/ServiceVM/neutron-and-other-project-items
14:11:11 <russellb> so hopefully we can keep the ETSI terminology / use case translation work focused here
14:11:12 <sgordon> although servicevm is a more specific effort in terms of having a specific set of deliverables
14:11:12 <s3wong> russellb: what is the overlap? is there any development charter for the NFV subteam yet? or are we still working on requirements?
14:11:16 <sgordon> now in the form of a service
14:11:21 <ijw> ServiceVM has a large non-overlapping requirement, too, though
14:11:37 <sgordon> whereas i see this effort as more broadly formulating and driving NFV requirements into openstack
14:11:39 <NZHOU> I know there is going to be an open source project named OPN. Will this NFV team get requirements from vendors or operators directly, or just from OPN?
14:11:43 <sgordon> wherever that may be
14:11:52 <ijw> NFV is creating user-run services.  ServiceVM is about providing APIs to those services through Openstack.  And in both cases I don't think we actually care who does the work as long as it gets done
14:11:54 <sgordon> so servicevm, neutron, nova, heat, ipv6 subteam etc
14:11:54 <cgoncalves> note that besides servicevm there is also the advanced services sub-team which is covering some relevant NFV work items
14:12:10 <sgordon> ijw, +1
14:12:21 <ijw> NZHOU: as in any community, I think we get our requirements from the community
14:12:22 <russellb> ok, seems we're generally all with the same understanding about how these groups relate
14:12:35 <russellb> wanted to make sure there wasn't major concern there
14:12:39 <adrian-hoban> NZHOU: Both OPN and from vendors/operators
14:12:47 <sgordon> i dont think there is a major concern
14:13:00 <sgordon> but we should collaborate when it comes to terminology and use cases
14:13:00 <NZHOU> ijw: ok
14:13:03 <sgordon> if it makes sense
14:13:10 <russellb> #topic use cases
14:13:19 <sgordon> rather than independently creating two etsi -> openstack glossaries for example
14:13:22 <russellb> we've done a nice job early on with gathering a big list of blueprints
14:13:26 <ijw> cgoncalves: The advanced services stuff is all about plugging the service VM services - and others - into Openstack.  Again, they'll use the tools we make, but we won't have a complete overlap
14:13:34 <russellb> i think one big work area for us is the use cases and terminology translation for openstack
14:14:10 <ijw> I don't know how many ETSI bods we have, I know adrian-hoban is one...
14:14:12 <sgordon> right, and the question there is do we want to target specific VNFs or more generally translate the ETSI NFV use cases
14:14:16 <russellb> what would we like to accomplish in this area?
14:14:31 <sgordon> in a way it's, how do we define success
14:14:32 <FJRamon> I am not that sure if that mapping is actually required
14:14:46 <nijaba> smazziotta: I think you are in ETSI NFV, right?
14:14:48 <imendel> I thought we want to drive requirements. The ETSI use cases are far too high level
14:14:52 <FJRamon> I guess most ETSI folks are familiar with OpenStack terminology as well
14:14:55 <FJRamon> Yes
14:14:57 <russellb> from an openstack developer perspective, I feel like we need a "tl;dr" of why this stuff is important
14:15:01 <ijw> Personally, I was more interested in making it possible to run VNFs.  I don't think we should be 'creating VNFs' for ourselves (and there perhaps the serviceVM guys really do want to do that)
14:15:04 <imendel> not really
14:15:07 <sgordon> imendel, agree
14:15:12 <GGarcia> I am in ETSI NFV too.
14:15:16 <smazziotta> yes. I need to register enovance :-)
14:15:17 <GGarcia> fjrs, agree
14:15:18 <sgordon> imendel, i am just trying to get a feel for how we get to something more specific
14:15:30 <imendel> i agree, we need something specific
14:15:45 <cdub> russellb: should we aim to have that for next week?
14:15:56 <russellb> it would be nice to have something, even brief, written up that ties blueprints to the "why"
14:15:56 <adrian-hoban> OpenStack fits _mostly_ in what ETSI-NFV describe as a Virtualisation Infrastructure Manager (VIM)
14:15:57 <nijaba> ijw:  would be happy to work with you on writing about this
14:15:58 <russellb> cdub: yeah
14:16:06 <GGarcia> Terminology in openstack is clear for most of ETSI folks
14:16:13 <GGarcia> adrian-hoban, agree
14:16:16 <russellb> maybe a few people can go off and start a first cut at something?
14:16:22 <NZHOU> adrian-hoban:+1
14:16:28 <cdub> russellb: ok, i'll help with that
14:16:33 <nijaba> russellb: volunteering
14:16:44 <adrian-hoban> russellb: Are you looking for a terminology translation or use cases?
14:16:50 <russellb> ok great, who wants to coordinate it?
14:17:01 <russellb> mainly use cases that the blueprints can be tied to, personally
14:17:02 <ijw> Certainly we should draw the mapping diagram, if nothing else; it's always the first thing people want to see
14:17:11 <sgordon> so far ijw, cdub, nijaba
14:17:12 <cdub> ijw: agreed re: building vnfs
14:17:15 <sgordon> i am happy to assist as well
14:17:17 <s3wong> adrian-hoban: I think russellb was talking about why the listed blueprints are relevant to NFV according to use cases
14:17:23 <russellb> cdub: OK, want to coordinate it?
14:17:32 <cdub> russellb: sure
14:17:39 <adrian-hoban> s3wong: Ok, makes sense
14:17:40 <JohnHaller> In addition to VIM, there is some intersection between Heat and possibly new projects like Climate and the Management/Orchestration (MANO)
14:17:40 <imendel> me too
14:17:44 <cdub> russellb: so sgordon, nijaba ...anyone else?
14:17:47 <russellb> #agreed cdub to coordinate a first pass on starting some openstack use case documentation, contact him if you'd like to help
14:17:53 <russellb> #undo
14:17:55 <openstack> Removing item from minutes: <ircmeeting.items.Agreed object at 0x2690110>
14:17:55 <adrian-hoban> There are ~9 key use cases that are being assessed
14:18:00 <russellb> #note cdub to coordinate a first pass on starting some openstack use case documentation, contact him if you'd like to help
14:18:06 <adrian-hoban> I will help too
14:18:11 <Sam-I-Am> i'd also like to help
14:18:20 <russellb> OK, let's come back to this next week and review what you guys come up with :)
14:18:21 <s3wong> cdub: how can we contact you?
14:18:31 <cdub> chrisw@sous-sol.org
14:18:32 <russellb> and we'll have something more specific to poke holes in and discuss
14:18:48 <GGarcia> Use cases in NFV are too detailed and telco-oriented to establish a mapping
14:18:56 <cdub> may be slow until later today to reply
14:19:10 <cdub> GGarcia: we need to dig under
14:19:12 <ijw> telco oriented is rather the point of NFV, no?
14:19:13 <sgordon> GGarcia, we need to find a middle ground to progress
14:19:41 <ijw> Also, kind of the worst cases of what you'd like from network services, so in some senses the best way to drive an architecture.
14:19:59 <FJRamon> So the question I guess is what we do understand as use case
14:20:03 <cgoncalves> perhaps a small contribution to the ETSI-NFV VIM part: https://review.openstack.org/#/c/92477/6/specs/juno/traffic-steering.rst,unified (problem description section)
14:20:31 <cdub> FJRamon: that's a good question, and we started at fairly high level (ETSI driven)
14:20:32 <GGarcia> FJRamon, cdub, sgordon: if we are meaning by use cases the ones here (http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf), it is too complex
14:20:48 <sgordon> GGarcia, we are actually saying that is too high level i believe
14:20:54 <tvvcox> GGarcia> nfv use cases map very well onto HPC use cases.. that'd enrich the target cases for OSP... not to mention that numa, sriov, etc. (generally speaking, high performance VMs) are a need today
14:21:20 <FJRamon> Yes, that is correct
14:21:23 <s3wong> GGarcia: too high level actually
14:21:42 <ijw> We have time.  Let's see what we can do with the NFV docs in a week.  We're not saying other people can't work independently if they have bright ideas and there's a mailing list out there too.
14:22:06 <russellb> sounds good
14:22:08 <sgordon> cgoncalves, suggest adding it to the wiki if it's relevant for tracking
14:22:14 <russellb> we can come back to this at the end in "open discussion" if needed
14:22:17 <russellb> #topic blueprints and tracking
14:22:23 <adrian-hoban> Agreed that they are high level, but we can start with them and decompose into lower level requirements that are being expressed in the current set of blueprints that we have captured.
14:22:26 <russellb> so, we've got a big list of blueprints, a good start :)
14:22:35 <sgordon> adrian-hoban, that's the idea yeah
14:22:37 <russellb> 1) how complete is the list of active work?
14:22:43 <cdub> tvvcox: agreed and it's something i know i've been using to advocate
14:22:45 <sgordon> :)
14:22:50 <russellb> and 2) any thoughts on a better approach to tracking it all than wiki hacking?
14:23:08 <sgordon> so i think it was ijw raised on the list
14:23:21 <sgordon> we need to define what is actually realistic for juno
14:23:21 <ijw> russellb: like I said on the ML, I think what we have there is a lot of work but a lot of it is nice-to-have stuff
14:23:23 <bauzas> russellb: we could make use of a gerrit dashboard
14:23:30 <sgordon> and the low hanging fruit that would get best bang for buck
14:23:34 <ijw> You can do NFV if only you can get traffic from one VM to another, which is currently at issue
14:23:42 <bauzas> russellb: provided all specs are tracked
14:23:44 <russellb> bauzas: cool idea, willing to take a look at that?
14:23:53 <adrian-hoban> Re 1), I think this is just the starting point, but a good one that we can build on
14:23:59 <sgordon> right that sounds like a good idea bauzas
14:24:00 <ijw> Beyond that, we should work out how to rate the rest and focus on what we can agree on and accomplish when we have the ground work
14:24:04 <bauzas> russellb: okay, count me in
14:24:06 <sgordon> im concerned that the wiki will grow stale
14:24:09 <russellb> #action bauzas to take a look at building a custom gerrit dashboard
14:24:16 <sgordon> so ideally that might provide something we can more easily automate
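A gerrit dashboard of the kind bauzas volunteered to build is a small config file consumed by the gerrit-dash-creator tool, which turns it into a bookmarkable URL. A minimal sketch, assuming NFV work is findable via commit-message tags (the project names and queries here are illustrative, not the actual dashboard):

    [dashboard]
    title = NFV Review Inbox
    description = Open NFV-related specs and patches
    foreach = status:open

    [section "Nova specs"]
    query = project:openstack/nova-specs message:"nfv"

    [section "Neutron specs"]
    query = project:openstack/neutron-specs message:"nfv"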
14:24:37 <tvvcox> cdub> there's many offerings from public cloud providers such as Amazon, Google, Verizon where high performance VMs are the differentiator
14:24:40 <russellb> yeah, i was concerned about the work to keep it current, because if not current, it's not that useful
14:24:46 <s3wong> ijw: an issue for getting traffic from one VM to another?
14:25:00 <FJRamon> Yes, at line rate it is
14:25:12 <russellb> ijw: yeah, i think some prioritization is a good idea.
14:25:14 <cloudon> FJRamon: +1
14:25:20 <cdub> tvvcox: exactly
14:25:29 <russellb> both priority, as well as some breakdown of what we think is achievable in each release
14:25:30 <cdub> tvvcox: e.g. look at hpc aws flavors ;)
14:25:34 <russellb> based on who we know is working on stuff
14:25:35 <adrian-hoban> ijw: I think it's more complex than just connectivity, line rate, jitter, latency characteristics all important
14:25:37 <GGarcia> FJRamon, cloudon: +1       40Gbps from one VM to another without losses
14:26:07 <sgordon> i think the dashboard will be helpful
14:26:12 <sgordon> but does that also help with priority?
14:26:16 <russellb> i don't think so ...
14:26:20 <sgordon> the other thing i started doing was tagging bugs
14:26:20 <bauzas> sgordon: nope
14:26:21 <sgordon> with nfv
14:26:24 <ijw> adrian-hoban: fair point, but again, you can test stuff with less than that
14:26:28 <sgordon> unfortunately you cant do that with BPs
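For reference, the nfv bug tag sgordon mentions can be queried per project with a Launchpad URL of this form (nova is just an example project):

    https://bugs.launchpad.net/nova/+bugs?field.tag=nfv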
14:26:29 <russellb> i think we need something else to communicate priority and release breakdown
14:26:38 <bauzas> sgordon: that's only a high-level view of specs and patches
14:26:45 <sgordon> bauzas, right
14:26:55 <russellb> we don't really have anything for that in openstack infrastructure today
14:27:00 <russellb> so we may just be stuck with doing that on the wiki
14:27:02 <bauzas> russellb: +1
14:27:06 <cloudon> GGarcia,FJRamon: though lots of valuable control plane NFV apps don't need anywhere near line rate of course
14:27:14 <sgordon> yes
14:27:15 <ijw> s3wong: passing traffic without having it dropped by the fabric and firewalled by security groups, for starters
14:27:16 <FJRamon> Yes
14:27:26 <sgordon> i went to the storyboard session at summit but it doesnt quite seem ready for this
14:27:29 <FJRamon> But there is not much work needed there
14:27:31 <sgordon> (in particular no dep tracking)
14:27:35 <sgordon> something to keep an eye on
14:27:46 <FJRamon> The thing is that data plane is the elephant in the room
14:27:53 <GGarcia> +1
14:27:57 <FJRamon> Most of the network equipment requires that
14:27:57 <cdub> cloudon: true, i usually describe as spectrum
14:28:01 <bauzas> russellb: if we create different gerrit users for P1, P2, etc. then we got our views :)
14:28:01 <russellb> #agreed build gerrit dashboard for current design/code status, continue to use wiki to document priorities and release targeting for now
14:28:05 <russellb> ^^ that sound good?
14:28:14 <ijw> Well, given we meet for an hour a week, I think if we picked the first five to discuss over the next week we could make that work - more than that and how do we talk about them?
14:28:19 <ramk_> i think we should isolate the use cases between performance and functionality
14:28:20 <sgordon> FJRamon, GGarcia so i think that feeds into the use cases discussion that cdub will follow up with the others
14:28:32 <FJRamon> I agree
14:28:36 <GGarcia> agree
14:28:38 <bauzas> russellb: if we say that this patch is P1, then we add P1 user as reviewer of that patch
14:28:39 <sgordon> e.g. "which blueprints directly relate to improving the control plane performance"
14:28:39 <russellb> ijw: i like that
14:28:46 <adrian-hoban> Yes, we need to make sure we can deploy the data plane at sufficient performance. A number of items listed in the NFV wiki will help
14:28:59 <s3wong> ramk_: absolutely
14:29:07 <sgordon> ramk_, performance, determinism, reliability
14:29:11 <bauzas> sgordon: that's the goal of tagging blueprints :)
14:29:14 <ijw> adrian-hoban: maybe you could point at the relevant stuff in an ML email?
14:29:21 <russellb> #topic open discussion
14:29:30 <adrian-hoban> ijw: Sure thing
14:29:33 <cloudon> ramk_, sgordon: +1
14:29:43 <russellb> it has been kind of open discussion throughout, but that's ok :)
14:30:02 <ijw> Can someone tell me if I got the BPs I wrote right, or if I missed anything?
14:30:05 <cdub> russellb: current ML [NFV] tag is ad-hoc, at some point perhaps we formalize w/ infra request?
14:30:17 <cdub> russellb: and ditto for #openstack-nfv
14:30:20 <sgordon> cdub, probably should do that now
14:30:23 <russellb> cdub: yes, that's easy enough
14:30:25 <adrian-hoban> cdub: +1
14:30:32 <russellb> #action russellb to formally request NFV mailing list tag
14:30:36 <cdub> sgordon: i just figured we should prove we need it (by using it ;)
14:30:45 <russellb> #action russellb to request infra management of #openstack-nfv IRC channel
14:30:51 <lukego> Howdy! I am working on an NFV deployment for Deutsche Telekom this year. We want to use the standard upstream Juno release. We would need to upstream around 100 lines of simple non-disruptive code into OpenStack during this cycle. That's a VIF and ML2 mech driver for the open source Snabb NFV (http://snabb.co/nfv.html). If we get this upstream then it saves us maintaining a fork and it also makes it available to the community.
14:30:54 <cgoncalves> ijw: please refresh the wiki page. just added the traffic steering BP. hope it is relevant to this team to track
14:31:17 <ijw> cgoncalves: certainly wants consideration with the rest
14:31:32 <lukego> Our blueprints, specs, and code were submitted last week and proposed for juno-2. There is a dependency on a new QEMU feature that should land upstream any week now.
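For context, an ML2 mechanism driver of the kind lukego describes is a small class that Neutron's ML2 plugin calls at port-binding time. A minimal sketch against the Juno-era driver API; the class name and the vhost-user VIF type are illustrative, not Snabb's actual code:

    # Illustrative skeleton only; real drivers also implement the port
    # create/update/delete *_precommit/_postcommit hooks as needed.
    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api


    class ExampleMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # One-time setup: read config, reach the dataplane, etc.
            pass

        def bind_port(self, context):
            # Offer a binding for the first segment this dataplane can
            # handle; ML2 tries mechanism drivers in configured order.
            for segment in context.network.network_segments:
                context.set_binding(segment[api.ID],
                                    'vhostuser',  # illustrative VIF type
                                    {portbindings.CAP_PORT_FILTER: False})
                return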
14:31:43 <sgordon> lukego, thanks - that one is already listed in the wiki right?
14:31:46 <lukego> Yep
14:31:50 <sgordon> :)
14:31:52 <GGarcia> :-)
14:32:04 <ijw> I'd be surprised if you needed much help with that, lukego
14:32:23 <cdub> lukego: have you had feedback on qemu dep? (e.g. capability negotiation?)
14:32:28 <nijaba> russellb: how do we work on priorization?  Vote?
14:32:30 * ijw sits next to a Neutron core and can beat him with a stick till he gives you a +2 if you like
14:32:49 <nijaba> ijw: nice to have :)
14:32:55 <lukego> cdub: afaik we are targeted for QEMU 2.1 and should merge real soon now.
14:32:56 <sgordon> lukego, where is the qemu dep tracked
14:32:56 * cdub notes ijw's offer down for later
14:33:04 <yjiang51> ijw: it will be better if two :)
14:33:06 <russellb> i've set up an etherpad to work on the agenda going forward
14:33:07 <sgordon> lukego, link to that as well might be useful
14:33:08 <russellb> #link https://etherpad.openstack.org/p/nfv-meeting-agenda
14:33:15 <ijw> two cores, or two sticks?
14:33:22 <nijaba> both
14:33:30 <russellb> nijaba: fight over it?  i'm not sure yet :)
14:33:31 <adrian-hoban> lukego: We will need to check for overlap with some other proposals in the wiki
14:33:32 <lukego> ijw: The risk I see is core devs being overloaded with too many BPs and these ones falling through the cracks.
14:33:41 <cloudon> What practical help do folk envision this group being able to provide to help e.g. lukego's work reviewed and accepted?  Direct reviewing, peer pressure etc?
14:33:49 <russellb> nijaba: i think we should discuss requirements in detail, a few each week as time permits, and have priority be a part of that discussion
14:33:52 <ijw> lukego: do you have libvirt changes you need, too?
14:33:54 <sgordon> lukego, yes that is where as a group we need to help prioritize
14:34:15 <danpb> ijw: it is submitted to upstream libvirt and pending review
14:34:17 <lukego> I’d like help to have the code reviewed so that it’s good enough to be merged upstream
14:34:17 <nijaba> russellb: sounds good.
14:34:23 <ijw> cloudon: best help is always work, so go review
14:34:34 <lukego> (and I am willing to help review the other stuff we prioritize here)
14:34:39 <nijaba> russellb: priority should also be linked to commitment to do something too
14:34:46 <s3wong> cloudon: all of us can help code-review; and with enough +1s we will eventually get core's attentions
14:34:51 <russellb> nijaba: +1
14:35:01 <russellb> s3wong: have you looked at setting up CI?
14:35:07 <russellb> I think that's requirement if you have a new neutron driver
14:35:28 <russellb> err, i mean lukego
14:35:38 <lukego> Our code is submitted to QEMU, Libvirt, Nova, and Neutron. Linked on the wiki (if you search for “Snabb”). We are blocked on QEMU merging our code right now but once that happens I hope we can merge everything quickly.
14:35:39 <ijw> It is, and mestery's in the corner over there if you need expert advice on anything else Neutron
14:35:58 <mestery> I'm already in contact with lukego on his stuff. :)
14:35:58 * ijw hands you a stick
14:36:06 <lukego> russellb: I will setup 3rd party CI. Have to coordinate that with mestery. I am already developer/maintainer of one mech driver for Neutron so I know the ropes of that part pretty OK
14:36:11 <russellb> less sticks, more carrots
14:36:17 <russellb> lukego: ah cool
14:36:21 <ijw> It'd need to be a really tough carrot.
14:36:29 <mestery> I always use carrots, never sticks. :)
14:36:47 <lukego> Thanks all of the encouraging comments/suggestions (!)
14:37:34 <adrian-hoban> lukego: Can you also have a look at some of the other blueprints related to hugepages & user space vhost?
14:37:47 <lukego> adrian-hoban: I looked briefly but saw no code ?
14:37:58 <adrian-hoban> lukego: On the way...
14:38:09 <lukego> OK. I will follow those BPs.
14:38:12 <russellb> lots of this stuff is just proposed designs so far
14:38:19 <russellb> but it's important to get feedback on the designs
14:38:26 <russellb> because that covers how we expose these things in openstack
14:38:30 <russellb> which is often quite tricky
14:38:45 <russellb> to do it in a way that is cloudy enough
14:39:02 <russellb> so we need to make sure it's acceptably cloudy, but also satisfies what people actually need
14:39:17 <lukego> I have everything implemented to an “alpha” level now. heavy development/testing is happening on code outside the tree during the Juno cycle.
14:39:27 <Imendel> i am afraid of those being too hw specific
14:39:56 <adrian-hoban> russellb: Agreed, we need to give the NFV guys enough control and at the same time keep the rest of the community safe
14:40:10 <GGarcia> Imendel: determinism and high performance require some HW-specific support
14:40:10 <Imendel> as per russellb's remark
14:40:11 <ijw> On BP review, please express horror and disgust on the ones I stuck up in a hurry - they really are blockers for a lot of things and I want to get them cleared out if people are happy with the proposals.
14:40:38 <ijw> (horror and disgust mainly at the formatting, they don't even pass spec tests right now...)
14:41:18 <Imendel> GGarcia, not sure i agree. in the end apps shouldn't care about hw choices
14:41:21 <danpb> ijw: fyi you can just run 'tox'  locally with no args to verify the specs
14:41:33 <ijw> I suspected there'd be a command, cheers danpb
14:41:57 <yamahata> ijw: I gave some comments on the BPs, but they were quick comments; I'll need to look at them more closely later.
14:42:02 <russellb> ijw: yeah, and let me know if you have trouble with it
14:42:11 <adrian-hoban> Imendel: There is a need in NFV to be more specific about certain aspects of HW. Scale out cannot solve all of the perf related items we need to consider
14:42:29 <FJRamon> Agreed
14:42:31 <s3wong> ijw: thanks for filing them. even for serviceVM team some of those items were raised during the J-Summit
14:42:42 <GGarcia> adrian-hoban, FJRamon: agree
14:42:51 <cgoncalves> ijw: which spec are you referring to, sorry?
14:43:02 <ijw> russellb: doubtful - it was 2am and I was more interested in making sure they went up than whether they passed test, I wasn't exactly expecting them to go through first time of trying
14:43:03 <ChristianM_> s3wong: agree
14:43:13 <Imendel> not saying tools shouldn't exist. but being very specific is a slippery road
14:43:13 <ijw> cgoncalves: the three in the first table on the meeting page
14:43:35 <cgoncalves> ijw: ah, thanks.
14:43:43 <smazziotta> on perf and determinism, we can document precise use cases so that we can justify the need for these features in OpenStack
14:43:46 <ijw> s3wong: the problems we both hit are much the same in terms of plumbing
14:44:02 <ChristianM_> Imendel: some NFV use case will require some perf guarantee and HW knowledge could help here
14:44:13 <FJRamon> Yes, that is the point
14:44:25 <GGarcia> agree
14:44:39 <smazziotta> NFV use case like CDN or any data-plane VNF
14:44:40 <ijw> smazziotta: It's not something you can work with without infrastructure help, certainly, be it constraint specifications or monitoring, but I don't have a mental picture of what you need to ask for there, I have to say, that's where adrian-hoban could really help
14:44:51 <Imendel> i am aware of the use cases. i really dont think that nfv is the only one facing perf issues.
14:45:08 <s3wong> ijw: agreed. 97716 was raised during transparent firewall discussion, and 97715 was repeatedly brought up just for having a service VM pool
14:45:14 <Imendel> it doesn't mean that nfv = hw aware.
14:45:25 <adrian-hoban> smazziotta: I think that is part of the action a few of us agreed to work on for review next week
14:45:31 <FJRamon> Yes, that is true
14:45:33 <ChristianM_> imendel: agree
14:45:38 <ijw> ChristianM_: you have to be careful about 'HW knowledge' - there's an abstraction between you and the hardware.  You really need to do things independent of HW at the API level.
14:45:51 <russellb> ijw: yes, and that's where things get tricky
14:45:54 <ijw> (I say this having had 101 discussions about that sort of violation as a quick fix)
14:46:00 <russellb> we need something that lets you express your performance requirements without being hw specific
14:46:03 <FJRamon> Imendel: But it is also true that I/O-intensive work is essential in NFV, so you need to take different things into account
14:46:24 <ijw> russellb: yup, and monitor to make sure you can take corrective actions
14:46:25 <Imendel> russellb, yes
14:46:26 <russellb> if you try to go hw specific in nova at least, you'll get rejected pretty quick
14:46:35 <smazziotta> my point is that not all NFV requires HW awareness. it's only for the data plane. agree?
14:46:47 <ijw> russellb: beyond abstract connections to the outside world, the same is true of Neutron
14:46:53 <FJRamon> I think that one thing is being hw specific and another being concrete on what you request in terms of machine layout
14:47:00 <ChristianM_> ijw: yes but for some IO intensive apps I might want to know where sriov is supported for example, a hw feature. But I agree about the abstractions
14:47:03 <ijw> smazziotta: anything that needs a guarantee of service
14:47:03 <cdub> e.g. "tie this VM to that core" vs. "give numa optimized VM"
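cdub's distinction maps onto flavor extra specs: the user asks for the abstract property and the scheduler/virt driver picks the actual cores. A sketch using key names from the Juno-era NUMA and large-pages blueprint proposals (the flavor name is hypothetical, and the exact keys were still under review at the time):

    nova flavor-key nfv.large set hw:numa_nodes=2
    nova flavor-key nfv.large set hw:mem_page_size=large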
14:47:09 <Imendel> FJRamon, yes, take it into consideration but abstract the underlying hw from the need
14:47:23 <ijw> ChristianM_: the answer to that is more often to constrain your VMs to run where you want them to run.
14:47:36 <cdub> ChristianM_: that could be part of aggregate definition, for example
14:47:42 <FJRamon> Imendel: I think we are saying the same but with different wording
14:47:47 <ijw> Also, Paul Carver had good information on QoS and bandwidth allocation from work at AT&T that I rather liked
14:47:49 <MikeBugenhagen> There are some standard telecom WAN abstractions used between companies that will show up in NFV
14:47:49 <adrian-hoban> Imendel: Certain optimisations are required in order to meet some of the requirements that NFV appliances will have
14:47:50 <JohnHaller> It's not just dataplane, some of the control applications have some pretty high throughput, such as control plane protocol-aware firewalls
14:48:13 <Imendel> FJRamon, hope so... lets work the details in the design
14:48:25 <FJRamon> Imendel: Yes
14:48:27 <ijw> MikeBugenhagen: what specifically were you thinking of?
14:48:39 <MikeBugenhagen> The Metro Ethernet Forum service abstractions are commonly used
14:48:40 <cdub> JohnHaller: hmm, makes the firewall a dp app, no?
14:48:59 <cdub> (where the data in the dp is control plane traffic...)
14:49:21 <FJRamon> The details on this discussion are relevant, and will be clearer once we have the use cases
14:49:34 <ijw> cdub's right - dp is not necessarily customer traffic, it's any traffic that's being passed for the sake of passing traffic, really
14:49:46 <FJRamon> Not really
14:49:47 <JohnHaller> cdub, yes
14:50:06 <FJRamon> Depends on the order of magnitude
14:50:06 <ijw> FJRamon: ?
14:50:41 <FJRamon> Broadly speaking in a carrier network there are two types of pipes: the big ones and the others
14:51:05 <ramk_> regarding NFV use cases such as NFVIaaS, vCDN, reliability etc. we have a concrete proposal as part of the solver-scheduler blueprint. Would be glad to discuss further with whoever is interested
14:51:06 <FJRamon> The big ones are 10 Gbps and above, essentially
14:51:12 <cdub> so perhaps dataplane/controlplane aren't the useful categories
14:51:15 <GGarcia> FJRamon, ijw: for me data plane means tens of Gbps processed by a VM, either a firewall, router or whatever
14:51:37 <alank> would be interested to discuss the solver-scheduler
14:51:55 <s3wong> ramk_: I am interested
14:52:01 <thinrichs> I’m interested in the solver scheduler too
14:52:13 <cloudon> me too
14:52:17 <cdub> ramk_: looks like a good way to handle our scheduling needs
14:52:19 <alank> data plane typically refers to normalised traffic from a given app that traverses one hop to the next......ingress/egress
14:52:22 <bauzas> alank: what do you want to know ?
14:52:22 <smazziotta> ramk_ : interested as well
14:52:23 <russellb> ok, so i think the scheduler is interesting
14:52:27 <ijw> GGarcia: You have to be somewhat realistic about this - you'll get 10s of Gbit through a service, but not necessarily through a single VM, depending on what you're doing.  Also, even if a VM can do that speed it can do lower, for internal firewalling, and it can be vertically split if you really care about how many VMs you use
14:52:32 <alank> Control traffic is for apps mgmt/control
14:52:33 <russellb> but from a practical perspective, i don't see it getting in short term
14:52:38 <bauzas> russellb: +1
14:52:52 <russellb> i think nova's priority right now is getting the current scheduler prepared to be split out into an independent project
14:53:02 <russellb> once that's done, there should be more room to focus on new scheduling approaches
14:53:06 <bauzas> the current focus is to split the scheduler into a separate project
14:53:08 <russellb> so that should be our view for when that can go in
14:53:10 <adrian-hoban> ijw: We will have some options. E.g. SR-IOV progress we want to make in Juno.
14:53:18 <alank> Do we really need to split the scheduler out to handle this?
14:53:19 <ijw> yup
14:53:23 <FJRamon> ijw: The real thing is that there are already VMs doing that
14:53:26 <bauzas> alank: yup
14:53:28 <alank> I am not sure i understand the reasoning
14:53:42 <ijw> alank: I think it's more an issue of how many people want to change it in drastic ways at the same time
14:53:48 <GGarcia> FJRamon: agree
14:53:57 <alank> yes, would agree ijw
14:54:01 <ramk_> we can have a reasonable starting point without splitting the scheduler
14:54:03 <russellb> ijw: yes, exactly
14:54:13 <alank> +1
14:54:15 <russellb> it's partially an issue of priorities and bandwidth within the nova project
14:54:22 <danpb> russellb: once the scheduler is split out, where will filters live - eg if we have a "numa scheduling" filter will that be in the scheduler project or provided by the nova project as an add-on?
14:54:25 <russellb> certainly fine to continue experimenting with it now
14:54:33 <russellb> danpb: tbd :)  bauzas ?
14:54:48 <ndipanov> danpb, I hope that filter will be in before the split
14:54:48 <ijw> We can work on scheduling, but if we do it will have to be out of tree.  I don't think we have better options than that right now.
14:54:57 <bauzas> russellb: danpb: that's still a question unresolved
14:54:57 <ndipanov> so we will have to decide along with other filters
14:54:58 <russellb> yeah, i think new filters can go in now
14:55:01 <danpb> russellb: i mean i would rather expect the latter myself
14:55:19 <smazziotta> a lot of NFV use cases are dependent on the scheduler modifications...
14:55:20 <danpb> otherwise we could end up with tight bi-directional coupling between nova & the scheduler
14:55:27 <ijw> And given the usual resource constraints with openstack dev that usually means people find other things to do in tree, in my experience...
14:55:32 <russellb> i guess i'm saying ... if we can solve stuff with filters/weights ... much better chance of being merged
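A filter of the sort russellb means is a single-predicate class dropped into nova's scheduler. A minimal sketch against the Juno-era filter API; the double-RAM-headroom rule is invented purely for illustration:

    from nova.scheduler import filters


    class ExampleHeadroomFilter(filters.BaseHostFilter):
        """Pass only hosts with twice the requested RAM free."""

        def host_passes(self, host_state, filter_properties):
            # filter_properties is a dict built from the request spec
            # in this era; the flavor arrives as 'instance_type'.
            instance_type = filter_properties.get('instance_type') or {}
            wanted_mb = instance_type.get('memory_mb', 0) * 2
            return host_state.free_ram_mb >= wanted_mb

It would be enabled by appending the class to scheduler_default_filters in nova.conf.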
14:55:34 <bauzas> danpb: probably nova could make use (or not) of an external scheduler depending on the operator choice
14:55:37 <alank> agree, in my mind we should address what info we want to make available, then decide later how best to use that info for a given scheduler etc etc
14:55:50 <alank> imho, filters and weights are insufficent
14:55:56 <russellb> sure, that may be the case
14:56:11 <ijw> alank: I agree.  We can't tell you what to do, we're just offering advice on the current situation.
14:56:16 <russellb> anything that can't be solved with the current approach is just going to be further out
14:56:22 <alank> filters and weights are useful for the static case
14:56:24 <bauzas> alank: that's exactly why we spin-off the scheduler :)
14:56:30 <ijw> And that advice is that regardless of what you want it's not all that likely you'll get this in in the Juno cycle
14:56:36 <ramk_> yup ... scheduler is the foundation for NFV
14:56:50 <alank> agree ijw....that's what i would agree on, and it makes sense so we do something that others can find a use for that "info"
14:56:58 <bauzas> alank: some dynamic approaches require other metrics than the ones provided the usual way
14:57:16 <alank> Hmm......just to clarify, "scheduler is NOT the foundation for NFV"....it's an "element of NFV"
14:57:19 <MikeBugenhagen> ramk_: NFV may also introduce OAM end points that don't exist yet
14:57:33 <ijw> bauzas: now, finding more metrics for scheduling with, you might get further than that
14:57:39 <ijw> further with that, even
14:57:45 <alank> +1
14:57:47 <yjiang51> bauzas: currently the compute node provides an infra for publishing compute node metrics.
14:57:53 <ramk_> agree michael
14:57:59 <adrian-hoban> alank: I also think the filters/weights are insufficient in the medium to long term. We need to decide whether the extensions being proposed now will be sufficient in the Juno time frame.
14:58:20 <s3wong> 2 minutes, guys :-)
14:58:32 <ijw> adrian-hoban: sufficient for what, precisely?  They're almost certainly what you're getting regardless, to be pragmatic about it
14:58:48 <sgordon> yjiang51, it's infrequent though right?
14:58:49 <russellb> yep, if we have proposed requirements that can't be met, let's put them on the agenda to discuss in more detail
14:58:52 <alank> If i may, and i think ijw and adrian-hoban are saying the same thing, we should focus on "what information to gather and expose/make available"
14:58:54 <russellb> maybe we can come up with some alternative approaches
14:58:56 <GGarcia> Proposal of use case from Openstack perspective: VMs with high I/O BW requirements (tens of Gbps)
14:59:00 <sgordon> whereas NFV typically wants more rapid updates to those metrics
14:59:06 <yjiang51> sgordon: depends on the requirement, yes.
14:59:08 <alank> Some of that will be through Nova, some through Neutron, some through other elements
14:59:12 <sgordon> alank, ack
14:59:14 <adrian-hoban> ijw: Sufficient for deploying NFV at scale considering the increase in the complexity that will be needed in scheduling
14:59:23 <ramk_> is there any easy way i can share with the group the NFV scheduler proposal ?
14:59:34 <s3wong> ramk_: ML?
14:59:35 <russellb> i'm not sure what the NFV scheduler proposal is?
14:59:43 <sgordon> did you send it to the list, or was it just in the servicevm meeting that i saw it?
14:59:44 <alank> right sgordon, its more about acting before, not after
14:59:47 <ijw> There's more than one, so that's hardly straightforward.
14:59:47 <russellb> but yes, mailing list threads are good
15:00:10 <ijw> I would say 'more research needed' is likely to get us further on that. Some things we can commit to, some we'd just have to test out
15:00:13 <sgordon> ok i think people are drifting in for the next meeting
15:00:21 <russellb> OK, we're out of time
15:00:24 <russellb> thanks for coming everyone!
15:00:27 <russellb> see you all next week, same time, same place.
15:00:28 <bauzas> thanks
15:00:29 <s3wong> thanks!
15:00:29 <russellb> #endmeeting