14:01:31 <russellb> #startmeeting nfv
14:01:32 <openstack> Meeting started Wed Jul  2 14:01:31 2014 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:34 <russellb> hello, everyone!
14:01:35 <openstack> The meeting name has been set to 'nfv'
14:01:38 <cloudon1> hi
14:01:40 <s3wong> hi
14:01:41 <cgoncalves> hi
14:01:43 <adrian-hoban> Hello
14:01:43 <smazziotta> hi
14:01:44 <ian_ott> hi
14:01:47 <JohnHaller> hi
14:01:47 <diga> Hi
14:01:49 <ulikleber> hi
14:01:49 <bauzas> o/
14:01:51 <diga> Hi
14:01:52 <russellb> #link https://wiki.openstack.org/wiki/Teams/NFV
14:01:57 <russellb> #chair sgordon
14:01:58 <openstack> Current chairs: russellb sgordon
14:02:00 <jchapman> Hey
14:02:08 <russellb> #link https://etherpad.openstack.org/p/nfv-meeting-agenda
14:02:19 <russellb> agenda on the etherpad, feel free to add topics
14:02:24 <russellb> #topic review actions from last week
14:02:37 <russellb> last week bauzas had created an NFV gerrit dashboard generator
14:02:37 <Imendel> hi
14:02:50 <russellb> #link http://bit.ly/1iFdldx
14:03:12 <russellb> i set up a VM for him to use to host the script and automatically update a redirect to a current dashboard
14:03:17 <bauzas> correct, there were 2 actions, one missing from the meeting
14:03:17 <russellb> bauzas: have a chance to look at doing that?
14:03:24 <russellb> bauzas: ah, didn't see the other in the minutes
14:03:31 <radek> Hi there !
14:03:48 <bauzas> right, my action was to implement the possibility to update a short url
14:04:07 <bauzas> I'm just finishing that, I identified tiny.cc for the backend
14:04:13 <bauzas> possibly goo.gl also
14:04:39 <bauzas> so, the action is to finish that one, and then communicate on the short url
14:04:46 <russellb> OK
14:04:58 <russellb> could also implement it with a redirect on that VM i set up for you
14:05:00 <bauzas> the script is almost done, I'm just testing
14:05:22 <russellb> short url service probably easier though if it supports that
14:05:25 <bauzas> well I was planning to host that script on your VM
14:05:28 <russellb> ok
14:05:33 <bauzas> and update the url
14:05:46 <bauzas> so that doesn't require changing bookmarks
14:05:51 <sgordon> right
14:05:54 <ndipanov> o/
14:06:00 <russellb> k, well cool
14:06:02 <russellb> thanks for the work
14:06:06 <russellb> i find the dashboard very helpful!
14:06:14 <bauzas> place me an action btw. ;)
14:06:40 <russellb> #action bauzas to finish up work to automatically keep a short URL up to date for NFV gerrit dashboard
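A minimal sketch of the flow bauzas describes: regenerate the Gerrit dashboard URL, then repoint an existing short link at it so bookmarks keep working. The short-URL call is a placeholder, not a real tiny.cc or goo.gl API; the dashboard URL shape follows the gerrit-dash-creator style but the section names here are illustrative.

```python
# Sketch only: rebuild the NFV dashboard URL, then update a short link.
# update_short_url() is a stub -- the real backend (tiny.cc, goo.gl)
# would be substituted once chosen.
import urllib.parse

GERRIT = "https://review.openstack.org"

def build_dashboard_url(title, sections):
    """Build a Gerrit custom-dashboard URL: title plus (name, query) sections."""
    params = [("title", title)] + list(sections)
    return GERRIT + "/#/dashboard/?" + urllib.parse.urlencode(params)

def update_short_url(api_key, short_id, long_url):
    # Hypothetical: whichever shortener is picked just needs an
    # "edit existing short URL" operation here.
    raise NotImplementedError("backend-specific")

url = build_dashboard_url(
    "NFV",
    [("Needs Review", "status:open topic:bp/virt-driver-cpu-pinning")],
)
print(url)
```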
14:06:43 <russellb> sound good?
14:07:16 <russellb> moving on then
14:07:19 <russellb> #topic blueprints
14:07:28 <russellb> We had 1 NFV related spec approved in the last week
14:07:35 <russellb> #link https://review.openstack.org/100871
14:07:43 <russellb> "I/O (PCIe) Based NUMA Scheduling"
14:07:54 <russellb> so yay for that!  :)
14:07:58 <adrian-hoban> Great news!
14:08:00 <jchapman> Yaay
14:08:00 <sgordon> biggest concern is still the number of outstanding specs on the dash with negative feedback that has not been responded to
14:08:00 <cloudon1> +1
14:08:13 <russellb> right, what sgordon said ... that's the status of most other things
14:08:22 <russellb> there are a lot waiting for the submitter to update/respond to feedback
14:08:26 <sgordon> 17 outstanding nfv-related nova-specs proposals
14:08:31 <sgordon> only 2 dont have negative feedback
14:08:50 <sgordon> in most cases that feedback is > a week old
14:09:05 <russellb> also note that we're only 3 weeks away from the juno-2 milestone
14:09:12 <russellb> and I expect spec approvals to stop at that point
14:09:14 <russellb> definitely for nova
14:09:26 <russellb> so to have any chance of making it into juno, we have to move on these quickly
14:09:34 <russellb> #link https://wiki.openstack.org/wiki/Juno_Release_Schedule
14:09:56 <russellb> don't think I have anything else on blueprints
14:10:02 <russellb> any specific blueprints anyone would like to cover?
14:10:08 <diga> Hi Russell
14:10:54 <russellb> please go chase the submitter of blueprints you're most interested in :)
14:10:54 <russellb> diga: hi!
14:10:54 <sgordon> ijw, are you around?
14:10:55 <diga> can you guys have some implementation plan where I can help you ?
14:10:55 <ijw> Yup
14:11:13 <sgordon> ijw, wondering if you had any updates on two interfaces one cu^Wnet or the vlan trunking spec
14:11:18 <sgordon> ijw, or need any assistance
14:11:47 <ijw> I haven't done them, let me do it today - there's a few edits to make and I've not touched them recently.  Ping me if I'm being slow
14:11:49 <diga> I want to contribute here
14:12:08 <sgordon> ijw, np just making sure you still have scope to look at it :)
14:12:29 <russellb> and ping me when it's updated and i'll review
14:12:33 <ijw> sgordon: I want them done, I've just not had time.  I'll make some today
14:12:35 <sgordon> proactively pinging the other owners of items in the list to see if they need help might make sense for the others as well
14:12:37 <russellb> see if we can push it through
14:12:50 <ijw> Also, I had some thoughts about MTU stuff, too, over and above what's blueprinted
14:13:12 <sgordon> ijw, cool
14:13:24 <ijw> I was looking at this internally, particularly with respect to getting VMs to accept an MTU other than 1500.
14:14:07 <sgordon> ijw, where do you see that being configured from? per host, per network, etc?
14:14:22 <ijw> Bigger than 1500, specifically.  I need to check the v4 spec, but the v6 spec and the Linux kernel are both quite explicit that if you're setting an MTU you can revise it downward with an advertisement but not upward
14:14:51 <ijw> sgordon: It needs to be per network - the whole problem with MTUs is that they really don't work if you have different MTU settings on different hosts on the same network
14:14:57 <sgordon> right
14:15:45 <sgordon> ijw, which of the blueprints is this under?
14:15:54 <ijw> From our perspective that means two things - firstly, that we have to tell hosts what their MTU is.  Even now it's luck more than judgement that the MTU works at all when it's sub-1500 so it would actually improve behaviour with people who aren't interested in MTU at all.
14:16:00 <ijw> It isn't yet
14:16:32 <ijw> The other thing is that network services also need to be told their MTU - the routers specifically should know the MTU and set it on their interfaces
14:16:35 <adrian-hoban> danb: Regarding the "Virt driver pinning guest vCPUs to host pCPUs" blueprint. #link https://blueprints.launchpad.net/nova/+spec/virt-driver-cpu-pinning. Does this have a scheduler split-out dependency?
14:16:52 <russellb> danpb: ^
14:16:55 <russellb> adrian-hoban: no
14:17:01 <russellb> it shouldn't anyway
14:17:06 <russellb> split out won't happen in juno
14:17:10 <sgordon> adrian-hoban, i believe it now relies on the extensible resource tracker
14:17:12 <cgoncalves> ijw: not sure if this is somehow relevant for you but still: https://review.openstack.org/#/c/75281/
14:17:20 <danpb> yeah i don't see any dep there
14:17:23 <sgordon> adrian-hoban, although there is some potential for mitigating that dependency as well
14:17:49 <danpb> the design will say it depends on the extensible resource tracker
14:18:04 <russellb> seems sensible for the spec
14:18:12 <danpb> but if the extensible resource tracker work goes belly-up we'll just change it to not rely on that
14:18:24 <ijw> cgoncalves: That's a part of it certainly (though I don't know how that affects v6 rather than v4)
14:18:44 <adrian-hoban> sgordon: Thanks for pointer. I'll look more closely at it this week.
14:18:53 <danpb> the dep on the extensible resource tracker was only added to keep pedantic reviewers happy, not because it is really needed
14:19:27 <russellb> agree :)
14:20:13 <diga> Heyy russellb: can you give me one task so that I can start my work
14:20:14 <bauzas> well, by looking at the last iterations, this bp sounds in good shape, no?
14:20:15 <adrian-hoban> danpb: russellb: Thanks, I wasn't clear on this point.
14:20:21 <ijw> So anyway - yes, we want to set per-network MTU in the API to some number no higher than the infrastructure can manage - if you can fragment the packets that's good but you still want it programmable (and someone had better check what happens when DF *isn't* set and you send a large packet over GRE or VXLAN)
14:20:27 <russellb> adrian-hoban: np :)
14:20:50 <russellb> adrian-hoban: there are some things that i do think are dependent on it ... generally depends on the complexity of the addition to the scheduler we're talking about
14:20:56 <russellb> most things are small enough we can do with the current scheduler
14:21:06 <russellb> the solver scheduler is an example of something big enough that it will likely have to wait, IMO at least
14:21:09 <danpb> adrian-hoban: basically i consider spec documents to be just a "best guess at what the impl will probably look like"  - not a 100% guarantee that the impl will work that way
14:21:33 <danpb> if things come to light during impl work that need a change in the design, that'll just be done as needed regardless of the spec
14:22:37 <russellb> diga: a good place to start would be to review the blueprints and code linked from the wiki page.  testing out features you're interested in would be helpful too.
14:22:50 <bauzas> fyi, the scheduler team decided to move faster and to spin up Gantt without having 100% feature parity with nova-scheduler
14:22:55 <diga> Thanks
14:23:14 <ijw> Anyway, so first we need an MTU attribute.  Second, drivers need to confirm that they support MTU setting and that the chosen MTU can be supported.  Thirdly, the MTU needs giving to the hosts (config drive or network advertisement) and hosts need to be willing to set, and capable of accepting, the chosen value.  Hosts around today will generally discard a network advertisement; you could make it a requirement that hosts have an MTU of 9216 by default but that's not compatible with current clouds.
14:23:26 <bauzas> so Gantt won't be able at the moment to filter on aggregates and instance groups
14:23:33 <russellb> bauzas: concern just the cost of maintaining two schedulers
14:23:50 <sgordon> ijw, seems like we need an irc->nova-specs gateway :)
14:23:56 <ijw> And finally, the current config-drive setting of interfaces is shite - it writes out /etc/network/interfaces last I checked, rather than an abstract structure of data.
14:24:06 * ijw nominates sgordon
14:24:10 <sgordon> doh!
14:24:19 <bauzas> russellb: indeed, but the problem is about waiting for the changes for aggregates and groups that depend on other BPs
14:24:36 <russellb> ijw: sgordon indeed, that needs to change.. i think there's a spec for that?
14:24:44 <bauzas> like extensible resource tracker
14:24:50 <ijw> The last two points are the most problematic - if you can't tell a host what the MTU is you can't really make much use of it
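The MTU rules ijw lays out reduce to two checks: the value advertised for a network can never exceed what the infrastructure/driver can carry, and a guest may revise its MTU downward on an advertisement but never upward. A minimal sketch, with illustrative names rather than any Neutron API:

```python
# Sketch of ijw's per-network MTU rules. Not Neutron code; the
# function names and defaults are illustrative assumptions.
DEFAULT_MTU = 1500

def effective_network_mtu(requested, driver_max):
    """MTU to advertise for a network: never above what the driver supports."""
    if requested is None:
        return min(DEFAULT_MTU, driver_max)
    return min(requested, driver_max)

def guest_accepts(current_mtu, advertised):
    """A host may lower its MTU on advertisement but must not raise it."""
    return advertised <= current_mtu

# A tunnelled (GRE/VXLAN) network caps a jumbo request:
print(effective_network_mtu(9000, driver_max=1450))
# An upward revision is refused by the guest:
print(guest_accepts(1500, 9000))
```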
14:24:53 <sgordon> russellb, for the config-drive issue?
14:24:56 <russellb> sgordon: yes
14:25:07 <ijw> russellb: do you know if anyone's given any thought to the config-drive for network config?
14:25:25 <danpb> ijw: sounds like the issues around MTU are best outlined in a nova spec
14:25:26 <russellb> yes, that's what i was referring to, i was talking to comstud about that the other day
14:25:29 <russellb> thought he said there was a spec ...
14:25:31 <russellb> danpb: +1
14:25:41 <danpb> its too hard to follow all the details here in IRC
14:25:48 <ijw> Templating doesn't really work, you end up encoding network information in a file that is OS-specific from a cloud that should be OS-agnostic
14:25:54 <sgordon> russellb, this one ? https://blueprints.launchpad.net/nova/+spec/metadata-service-network-info
14:26:04 <sgordon> #link https://blueprints.launchpad.net/nova/+spec/metadata-service-network-info
14:26:14 <ijw> danpb: They want outlining in a spec but a spec necessarily requires a solution as well as a problem statement
14:26:24 <ijw> sgordon: ta
14:26:26 <russellb> maybe?
14:26:27 <sgordon> lol :)
14:26:41 <sgordon> russellb, tentatively maybe
14:26:50 <russellb> sgordon: no, that covers metadata API stuff
14:26:56 <russellb> not the format written to config drive
14:27:06 <sgordon> i will have to keep digging
14:27:13 <ijw> That spec is not publicly readable
14:27:31 <russellb> sgordon: actually maybe it is that, it does mention that this stuff should be in config drive
14:27:36 <russellb> https://review.openstack.org/#/c/85673/9/specs/juno/metadata-service-network-info.rst
14:27:43 <ijw> Ah, it's just the spec link, there's a review link below
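What ijw is asking for from the config drive is abstract, OS-agnostic network data that the guest renders itself, rather than a templated /etc/network/interfaces. A sketch of what such a structure might look like; the schema below is purely illustrative, not the format the metadata-service-network-info spec eventually defines:

```python
# Illustrative only: an OS-agnostic network description a config drive
# could carry instead of a Debian-specific interfaces file. Field names
# are assumptions, not the real metadata format.
import json

network_info = {
    "links": [
        {"id": "eth0", "type": "vif", "mtu": 1450,
         "ethernet_mac_address": "fa:16:3e:00:00:01"},
    ],
    "networks": [
        {"link": "eth0", "type": "ipv4",
         "ip_address": "10.0.0.5", "netmask": "255.255.255.0",
         "routes": [{"network": "0.0.0.0", "netmask": "0.0.0.0",
                     "gateway": "10.0.0.1"}]},
    ],
}

blob = json.dumps(network_info, indent=2)
print(blob)
```

The guest-side agent would parse this and emit whatever its own OS needs, which is exactly the inversion of responsibility being argued for.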
14:28:35 <russellb> #topic open discussion
14:28:38 <ijw> Oh, no, no spec
14:28:44 <russellb> any other topics for today?
14:28:51 <sean-k-mooney> hi can i add https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-usvhost.rst to the agenda?
14:29:02 <ijw> You just did
14:29:10 <russellb> ijw: ha, indeed
14:29:23 <russellb> sean-k-mooney: sure, go ahead.  now is good.
14:29:26 <ijw> So there are two of these, this one and Luke's, both doing the same thing
14:29:40 <ijw> Luke's is rather better, this one needs consistent nova/neutron config
14:30:13 <sean-k-mooney> i believe i have found a way to address the per host vif binding configuration and would like input on the direction
14:30:24 <danpb> communication between nova & neutron is a desired feature
14:31:23 <sean-k-mooney> see final comment in the libvirt-ovs-use-usvhost.rst.
14:32:13 <ijw> Now, that's another spec I'd like to see, now you remind me
14:32:14 <danpb> sean-k-mooney: having to setup a config file listing all host ids seems very non-scalable
14:32:34 <ijw> I would like Nova to tell Neutron which VIF drivers suit, and Neutron to choose the most preferred one that it can suppot
14:32:41 <danpb> as ijw  said, nova needs to tell neutron what VIFs it is able to support
14:32:51 <danpb> and then neutron should reply with which it should actually use
14:33:03 <ijw> See, we're all good - shall I spec that one up?
14:33:19 <ijw> We can do the neutron side first if we report it in a header (which seems appropriate for an HTTP negotiation)
14:33:25 <sean-k-mooney> at present there is only one way to detect if vanilla ovs has userspace vhost support
14:33:29 <ijw> Nova will ignore it till we make it pay attention
14:33:46 <sean-k-mooney> this is to check the command line that the vswitchd is started with
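The detection sean-k-mooney describes amounts to inspecting the arguments ovs-vswitchd was launched with. A sketch of that heuristic; the flag name checked here is an assumption for illustration, not a guaranteed OVS invocation:

```python
# Sketch of the vswitchd command-line check. The "--dpdk" marker is an
# illustrative assumption about how a userspace-vhost-enabled OVS might
# be started; a real check would match the deployment's actual args.
def has_userspace_vhost(vswitchd_cmdline):
    """Heuristic: look for a DPDK/userspace-vhost marker in the args."""
    return "--dpdk" in vswitchd_cmdline.split()

print(has_userspace_vhost("ovs-vswitchd --dpdk -c 0x1 unix:/var/run/db.sock"))
```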
14:33:58 <danpb> ijw: sure
14:34:36 <sean-k-mooney> because of this i was suggesting reusing the work from this blueprint https://blueprints.launchpad.net/neutron/+spec/hostid-vif-override merged in havana
14:34:36 <ijw> sean-k-mooney: I don't think your comment changes anything - it's still one-side config, which is what we're mostly (Przemyslaw excepted) saying we want
14:35:32 <ijw> I'm a bit surprised actually - so Neutron is potentially capable of choosing any VIF type, but in practice, I think you're saying, it will only ever use the same one universally?
14:35:45 <ijw> As in, it's implementation, not interface, that has the issue?
14:36:20 <sean-k-mooney> correct, neutron can choose the vif-type based on a number of factors
14:36:23 <ijw> I would assume that this *doesn't* want to be config - for instance, in Neutron I may want to use sriov sometimes and not others (different plugging type based on circumstance)
14:36:32 <sean-k-mooney> most implementations have the vif-type hardcoded
14:36:41 <ijw> But I think you have a case where the answer is 'installer knows best'
14:36:58 <ijw> Shouldn't this be selected via the specific typedriver?
14:37:44 <ijw> Anyway, seems fine, probably isn't the answer to this specific question
14:38:15 <ijw> Other than I think we all agree that it's Neutron's responsibility to choose the right OVS plug based on circumstance
14:38:49 <sgordon> +1
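The negotiation ijw and danpb converge on is simple to state: Nova advertises the VIF types it can plug, in preference order, and Neutron replies with the most preferred one it supports for that port. A sketch under those assumptions; the names here are illustrative, not the actual port-binding API:

```python
# Sketch of the proposed Nova<->Neutron VIF-type negotiation.
# Not real OpenStack code; function and type names are illustrative.
def choose_vif_type(nova_supported, neutron_supported):
    """Neutron's pick: Nova's most preferred mutually supported VIF type."""
    for vif_type in nova_supported:
        if vif_type in neutron_supported:
            return vif_type
    raise ValueError("no mutually supported VIF type")

# e.g. a host whose Neutron side has no userspace vhost available:
nova_side = ["vhostuser", "ovs", "bridge"]
neutron_side = {"ovs", "bridge"}
print(choose_vif_type(nova_side, neutron_side))
```

This also matches ijw's deployment note: Neutron can be taught to answer first (e.g. in a header), and Nova simply ignores the reply until it learns to pay attention.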
14:38:57 <ijw> russellb: Can you find the network info guys and get them to put a spec up so that we can review it?  Three changes there, no spec
14:39:01 <sean-k-mooney> for sriov maybe, but the north-bound interface to the switch is identical between vhost and userspace vhost for configuration
14:39:53 <ijw> For sriov, the configuration is dramatically different - one way around, you're saying 'I want an SRIOV interface' and the other 'please attach me to a software switch' - no similarities at all, at least for some methods
14:40:22 <ggarcia_> +1
14:40:30 <ijw> SRIOV may involve reconfiguring a hardware switch or the PF but it's absolutely the case that plugging into an OVS is not going to get you what you want
14:41:02 <ijw> OK - anyway, I can spec up the idea I was talking about.
14:41:22 <sean-k-mooney> i am not suggesting using this mechanism for sriov, only to allow you to differentiate whether ovs has userspace vhost enabled or not on a node.
14:41:24 <ijw> russellb: to change the topic: have you considered what we want to land for j2 and j3?
14:41:52 <ijw> sean-k-mooney: yup, and I'm fine with that, just saying that this is a separate area to that specific problem, but one way of helping Neutron make the right choice
14:41:55 <russellb> for j2, the stuff with code already posted is most realistic
14:42:53 * ijw hands russellb the big stick and nominates him to issue beatings until things are in
14:42:58 <russellb> :)
14:43:02 <russellb> but i am only one stick
14:43:07 <russellb> many sticks will do better than one
14:43:08 <ggarcia_> ijw, I agree, and I go further. Besides SRIOV interfaces and software switches interfaces, passthrough of the whole interface might be necessary in some scenarios
14:43:08 <ijw> It has nails.  I sharpened them specially
14:43:13 <russellb> neat!
14:43:16 <russellb> i'll do what i can
14:43:20 <russellb> mostly on the review front
14:43:24 <ijw> cool
14:43:45 <russellb> don't have as much influence on the neutron side
14:43:50 <russellb> but we've got a lot of nova stuff in the pipe i can help push
14:43:56 <ijw> Need to make sure the specs have implementers too.  I shall be using my own stick for that but people seem to stay at a respectful distance nowadays
14:44:01 <russellb> excellent
14:44:08 <russellb> biggest thing this week is to get specs iterated
14:44:13 <russellb> can't review much more there right now
14:44:22 <russellb> but i'm going to put some time into reviewing a bunch of nova code already up
14:44:26 <russellb> danpb has several patches ready
14:44:29 <ijw> You have a stick.  I shall be iterating
14:44:39 * russellb threatens ijw with the stick
14:44:47 * ijw cowers meaningfully
14:44:54 <russellb> plz update the 2 NICS 1 subnet spec
14:44:58 <russellb> or you get hit
14:45:03 <ijw> Yes sir
14:45:05 <russellb> yay
14:45:08 <russellb> my job here is done
14:45:30 <danpb> russellb: biggest blocker to me for the last week is actually our inability to actually land any patches in the gate
14:45:32 <russellb> and let me know when you update in case i don't see it
14:45:36 <russellb> danpb: argh
14:45:48 <russellb> danpb: i haven't been following that very closely in the last week :(
14:45:50 <danpb> i've been waiting 5 days just to try to get some simple method renames landed
14:45:58 <danpb> must have rechecked about 40 times
14:46:01 <russellb> wow
14:46:13 <danpb> so chance of getting any actual features merged is close to zero
14:46:14 <russellb> welp ... guess it's time to help there too
14:46:39 * danpb is utterly demotivated to even bother trying to submit code to nova right now
14:47:06 <ijw> danpb: We can help.  russellb has a stick
14:47:38 <russellb> my stick won't help there
14:47:42 <russellb> only diving in and helping fix it
14:47:53 * ijw is reminded of Trainspotting
14:48:01 <russellb> there are folks that work really hard to keep that stuff going, but it's exhausting
14:48:03 <russellb> and not sexy work
14:48:13 <ijw> Yeah, thankless too
14:48:13 <danpb> yeah, i've been trying to debug some issues for days
14:48:19 <russellb> danpb: thanks a lot for that
14:48:20 <danpb> but they never reproduce locally
14:48:30 <danpb> which makes it pretty much impossible to debug many of them
14:48:33 <ijw> 'works in devstack'
14:48:36 <russellb> yes, that's been my experience
14:48:44 <russellb> ijw: or in this case, doesn't work in devstack (in the gate)
14:49:40 <russellb> alright well i guess that's it for today
14:49:44 <russellb> thanks everyone!
14:49:52 <ijw> tata
14:49:54 <russellb> #endmeeting