22:00:21 <sgordon_> #startmeeting telcowg
22:00:22 <openstack> Meeting started Wed Nov 26 22:00:21 2014 UTC and is due to finish in 60 minutes.  The chair is sgordon_. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:27 <openstack> The meeting name has been set to 'telcowg'
22:00:28 <sgordon_> #chair amitry
22:00:29 <openstack> Current chairs: amitry sgordon_
22:00:32 <sgordon_> #topic roll call
22:00:35 <sgordon_> hi all
22:00:43 <smazziotta> hi
22:00:43 <sgordon_> who is around for the telco working group meeting
22:00:47 <ian_ott> hello...Ian here
22:00:53 <dgollub1> hi
22:00:54 <sgordon_> recognizing that many will already be off for thanksgiving
22:01:15 <amitry> present
22:01:30 <jrakerevelant> hi jannis here
22:01:33 <sgordon_> #topic use cases
22:01:49 <ian_ott> were the logs from last week posted?
22:01:53 <sgordon_> yes
22:02:03 <sgordon_> http://eavesdrop.openstack.org/meetings/telcowg/2014/
22:02:12 <sgordon_> i also sent them to the operators and developers list
22:02:15 <gokrokve> hi
22:02:23 <ian_ott> k maybe the wiki needs an update
22:03:19 <sgordon_> so
22:03:23 <sgordon_> i did some work on the wiki
22:03:39 <sgordon_> and moved use cases off to here: https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases
22:03:57 <sgordon_> the current three are a little too mixed probably
22:04:12 <sgordon_> two VNF use cases and one (vlan trunking) which is effectively a requirement rather than a use case
22:04:24 <sgordon_> though the context the latter came up in was justifying that requirement for neutron
22:04:33 <sgordon_> #link https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases
22:05:24 <sgordon_> there was some discussion about this in #openstack-nfv earlier in the week but i am trying to gauge what level to target the use cases at
22:05:30 <sgordon_> and how to come up with a meaningful template
22:05:55 <sgordon_> at the moment what i have is pretty high level/generic to give some freedom:
22:05:58 <sgordon_> * Title
22:06:01 <ian_ott> i think the wiki previously had the ETSI 8 use cases, the key is to make them relevant to the openstack community
22:06:03 <sgordon_> * Description
22:06:07 <sgordon_> * Characteristics
22:06:11 <sgordon_> * Requirements
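For illustration only, a hypothetical entry following that template might read as follows (the workload and details are invented, not taken from the meeting):

  * Title: Virtual Session Border Controller
  * Description: An operator wants to run a vendor-supplied session border controller as a set of VMs on a shared OpenStack cloud instead of on dedicated appliances.
  * Characteristics: Latency-sensitive media plane, high packet rates on a small number of interfaces, benefits from predictable CPU and NIC locality.
  * Requirements: Ability to request guaranteed CPU/NUMA placement and fast east-west networking, without changes to the tenant-facing API beyond flavors and networks.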
22:06:16 <ijw> Bear in mind use cases don't necessarily drive requirements individually, I think that's a problem we had with the old method - so it's fine to express a use case without trying to solve your openstack problems at the same time
22:06:33 <sgordon_> right
22:06:39 <sgordon_> and in fact arguably preferable
22:06:40 <aveiga> +1
22:07:05 <aveiga> it's better for us to distill requirements at a baseline out of multiple use cases than to try to tailor your case to a specific technical function
22:07:06 <jrakerevelant> a use case might even already be possible to implement with openstack as is, correct?
22:07:09 <sgordon_> part of the problem with the way the ETSI 8 use cases were presented is that each was listed with only a very brief sentence
22:07:18 <sgordon_> it wasn't really made relevant at all
22:07:24 <sgordon_> it was just here are the ETSI use cases
22:07:26 <aveiga> jrakerevelant: absolutely
22:07:30 <ian_ott> sgordon_: agree
22:07:42 <sgordon_> if they can be made relevant, then i think that would be good
22:08:08 <jrakerevelant> we are currently onboarding a vEPC and i will see if there is a use case here
22:08:16 <ijw> Frequently the changes to Openstack make a use case easier, or better (e.g. improving compute performance) but don't actually enable it wholesale, I would say
22:08:33 <sgordon_> right so i think broadly vEPC is a use case not covered in what we have documented today
22:08:39 <ijw> Depends.  Mainly I'm saying don't write a blueprint in the form of a usecase
22:08:40 <aveiga> I'd say for the ETSI cases we don't necessarily need them all in OpenStack, either
22:08:41 <sgordon_> of course not all vEPC are created equal ;)
22:08:44 <sgordon_> but got to start somewhere
22:08:45 <aveiga> some of them belong here and some may not
22:08:56 <jrakerevelant> sgordon_: agreed
22:10:06 <jrakerevelant> i am still trying to figure out how a use case can be general enough, some of the questions we encounter are more like: how can I abstract away from things like DPDK and SRIOV
22:10:07 <sgordon_> ijw, right - which i think is effectively what i did when i dumped your vlan trunking justification in as a use case :/
22:10:20 <ijw> Bad sgordon_
22:10:24 <sgordon_> :)
22:10:28 <sgordon_> easily deleted! :P
22:10:54 <ijw> Probably wise, and I'll see if I can get the relevant team to write up the actual use case that justifies it from our perspective
22:10:59 <adrian-hoban__> Also agree that ETSI-NFV defined use cases are very high level. Where is the link to them now for OpenStack folks if we need to refer back?
22:11:24 <sgordon_> i believe it's in references at the bottom
22:11:41 <sgordon_> #link http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf
22:12:08 <adrian-hoban__> sgordon: Thanks, I missed the link
22:12:42 <sgordon_> jrakerevelant, to your point i think that is why there needs to be a high level description and coverage of characteristics of the workload
22:13:00 <jrakerevelant> is "virtualizing an IMS" really a use case for us though?
22:13:04 <sgordon_> and even requirements at a high level
22:13:11 <sgordon_> well put it this way
22:13:12 <jrakerevelant> its rather an application on top
22:13:17 <sgordon_> im now down to two use cases
22:13:28 <sgordon_> so if you want to cut more then we need people to contribute some ;)
22:13:28 <jrakerevelant> :)
22:13:36 <amitry> we can work on adding some more
22:13:39 <sgordon_> jrakerevelant, define on top?
22:13:52 <amitry> making openstack suitable to run IMS
22:13:57 <jrakerevelant> no i don't want to remove it, i want to know if it is asking the right question
22:13:59 <sgordon_> from the discussion last week facilitating the workloads that run on top of openstack seemed to be the agreed focus
22:14:00 <sgordon_> for phase 1
22:14:20 <aveiga> sgordon_: I think it should be
22:14:23 <sgordon_> perhaps i am mis-paraphrasing though
22:14:28 <jrakerevelant> amitry: right, with clearwater as a reference?
22:14:32 <aveiga> we need a target, rather than just to piecemeal implement technologies
22:14:36 <ijw> Look - when it comes down to it a use case really needs to be described by either (a) a vendor with a widget who for whatever reason has some difficulty running it on stock openstack or (b) a telco who has a widget from somewhere and wants to describe what they would *like* to do with it rather than what's possible today
22:14:44 <sgordon_> ijw, right
22:14:51 <sgordon_> ijw, and i think both the ones we have atm are cases of (a)
22:14:56 <ijw> Yup
22:14:57 <sgordon_> which is perfectly fine
22:15:04 <ijw> aveiga surely must have examples of b
22:15:06 <sgordon_> but would be great to get a mix of (b)
22:15:08 <aveiga> ijw: we can bring that up a level, widget isn't always the only way to describe it :)
22:15:25 <sgordon_> #info "ijw> Look - when it comes down to it a use case really needs to be described by either (a) a vendor with a widget who for whatever reason has some difficulty running it on stock openstack or (b) a telco who has a widget from somewhere and wants to describe what they would *like* to do with it rather than what's possible today"
22:15:30 <ijw> That's fine, I would expect an operator to have a picture in their mind of what they want to do and that's what you need to describe
22:15:46 <aveiga> for instance, I want to provide X type of service with Y technological method
22:16:05 <aveiga> maybe managed VPNs where I pick MPLS
22:16:15 <sgordon_> makes sense as a use case in this context i think
22:16:15 <aveiga> or managed voice where I run a SIP gateway
22:16:25 <ijw> Yup - and use cases are 'I want to provide X type of service' and perhaps a worked example of how you might do it, but if that's the case I think we should expect other people to add to the worked examples or at least quibble with design choices
22:16:35 <aveiga> I agree
22:17:04 <jrakerevelant> my main problem as a telco currently is that vendor X says they need technology Y
22:17:18 <jrakerevelant> and Y isn't easy to implement with vanilla openstack
22:17:22 <sgordon_> i think that is everyone's problem currently :)
22:17:25 <ijw> Are we good with that?  Stick the 'I want' on the wiki and let people argue the implementation at the bottom or at least pass comment in an etherpad?  Or we could even have a repo with this crap in
22:17:25 <aveiga> yup
22:17:31 <jrakerevelant> so you start adding balconies that i dont want
22:17:36 <sgordon_> ijw, right
22:17:44 <aveiga> ijw: I think that's the abstraction we want to start with
22:17:55 <sgordon_> ijw, i did dig into storyboard a bit this week but for simplicity i would say keep it on the wiki *for now*
22:17:58 <ijw> jrakerevelant: A fascinating expression, and in that case your use case would be 'a client says they need Y'
22:18:01 <aveiga> then we can debate the merits of individual implementations at a second level
22:18:23 <ijw> At worst, and don't expect much sympathy if you put that in ;)
22:18:31 <sgordon_> #info Start with "I want" use cases on the wiki and let people argue the implementation at the bottom or at least pass comment in an etherpad
22:18:36 <ijw> Or perhaps the wrong sort of sympathy ;)
22:18:45 <jrakerevelant> ijw: well, but i don't necessarily want to provide Y itself, rather an abstraction layer to Y
22:18:46 <sgordon_> aveiga, agree
22:19:14 <ijw> jrakerevelant: Best expressed as 'I want to do Y' and then we can debate the merits of various solutions up to and including changing Openstack
22:19:19 <sgordon_> right
22:19:46 <sgordon_> openstack design tenets ultimately favor abstraction of implementation anyway where possible...
22:19:51 <ijw> Open source network architecture, this is going to be fun...
22:20:25 <jrakerevelant> ijw: yes probably
22:20:30 <sgordon_> jrakerevelant, you mentioned a vEPC you are looking to set up earlier
22:20:31 <aveiga> the basis of it is that either a) OpenStack can do it now with some tweaks, in which case we document the tweaks, b) OpenStack can be made to do it, in which case we write up a BP or c) OpenStack can't be made to do it because the vendor of Y did it wrong, in which case we can't do much about it other than recommend options?
22:20:42 <sgordon_> jrakerevelant, would you be interested in trying to cover that use case?
22:20:47 <adrian-hoban__> jrakerevelant: If you add why you need to do Y, then I think it will help folks to understand much more quickly the value
22:21:11 <sgordon_> +1
22:21:12 <jrakerevelant> sgordon_: maybe a few weeks in
22:21:32 <ijw> aveiga: Even that's useful, but 'it would be much easier if we changed X' is a fine answer to the problem and one we can target.  The 'openstack can do it with config tweaks' solutions tend to involve dedicating your whole cloud to a single application, which is bloody annoying
22:21:57 <aveiga> ijw: I agree, but it's always those damned snowflakes that hold the purse strings...
22:21:59 <jrakerevelant> sgordon_: we are even struggling to find the most flexible way to integrate physical interfaces like eNode Bs
22:22:12 <jrakerevelant> we know how to do it in 3 different ways
22:22:32 <jrakerevelant> but they all dont feel natural to openstack
22:22:41 <ijw> aveiga: I know, and true enough, but you're trying to help reduce their infrastructure cost, and they will agree with you after sufficient whisky
22:22:56 <ijw> jrakerevelant: Again - 3 solutions and the one you'd like is fine
22:23:04 <sgordon_> ijw, right - i think documenting the config tweaks approach is more about illustrating how bad the current state is
22:23:10 <ijw> Easier to pitch the change if there's background information
22:23:10 <aveiga> ijw: I find your ideas intriguing and would like to subscribe to your newsletter
22:23:21 <ijw> aveiga: it's very expensive but comes with whisky
22:23:42 <aveiga> jrakerevelant: put them up! No reason not to chat about them, because you may find someone knows a way to make them more natural
22:24:01 <aveiga> and if not, we can figure out which way is the most OpenStack-like (agnostic to forced solutions) and implement that
22:24:04 <sgordon_> #action sgordon to add a section to top of https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases with guidance from this discussion on how to frame use cases
22:24:14 <jrakerevelant> aveiga: you mean as a use case?
22:24:22 <aveiga> jrakerevelant: absolutely
22:24:29 <sgordon_> yeah
22:24:33 <aveiga> one use case,and provide the 3 ways you think it can be done
22:24:34 <sgordon_> you have an i want
22:24:39 <sgordon_> :)
22:24:40 <ijw> About time someone started making app design recommendations, in fact.  We kind of short change tenants in favour of cloud operators in the world of Openstack
22:24:40 <aveiga> and we'll debate the 3 options
22:24:48 <adrian-hoban__> When we get to BPs for these things, it will be good to document the alternatives too
22:24:51 <jrakerevelant> sounds good
22:24:55 <aveiga> adrian-hoban__: agreed
22:25:19 <aveiga> I don't even think it's wrong to implement multiple methods if they are not mutually exclusive
22:25:20 <jrakerevelant> put in an action item, i will try to do it by next wednesday
22:25:22 <aveiga> options are good for everyone
22:25:28 <sgordon_> #action jrakerevelant to document desire to integrate physical interfaces 'like eNode Bs' and current approaches as a use case
22:25:44 <sgordon_> right i was going to say before
22:25:51 <sgordon_> some of these things arent mutually exclusive
22:25:51 <ijw> jrakerevelant: as an aside did you see the cloud edge BP?
22:26:03 <ijw> jrakerevelant: if not go look it up, but we should probably take that out of this meeting
22:26:09 <sgordon_> and at a high level may be expressed as i need to move packets at a certain rate or w/e
22:26:16 <sgordon_> but there are different ways to achieve that
22:26:29 <jrakerevelant> ijw: no, please msg the link, or i google :)
22:26:48 <ijw> https://blueprints.launchpad.net/neutron/+spec/cloud-edge-networking
22:26:59 <sgordon_> #link https://blueprints.launchpad.net/neutron/+spec/cloud-edge-networking
22:27:00 <jrakerevelant> yupp found it
22:27:03 <jrakerevelant> thanks
22:27:06 <sgordon_> ok
22:27:21 <ijw> #link https://review.openstack.org/#/c/136555/
22:27:21 <sgordon_> so it's thanksgiving eve and i am not sensing a wealth of volunteers to throw up use cases
22:27:36 <jrakerevelant> i just found out i know you ian
22:27:39 <sgordon_> but i can call out via M/L again
22:27:50 <aveiga> sgordon_: I'll gladly do them, however I am on PTO for the next week...
22:28:00 <ijw> jrakerevelant: Yeah, I worked out who you were a while back (but your name is a little more obvious ;)
22:28:02 <sgordon_> #action sgordon to issue call for use cases via M/L once UseCases page updated
22:28:16 <sgordon_> aveiga, np - it's more i want to make sure we capture a broad spectrum if possible
22:28:24 <aveiga> absolutely
22:28:26 <ijw> OK, there are three BPs worth checking, that one and the revenge of VLANs
22:28:37 <aveiga> and once I'm back, someone gently nudge me with a big stick
22:28:40 <aveiga> ;)
22:28:45 <ijw> We having a section on that?
22:28:47 <jrakerevelant> whats the revenge of the vlans??
22:29:07 <sgordon_> a new hope
22:29:14 <ijw> That wasn't a sequel
22:29:15 <aveiga> BOOOOO
22:29:16 <sgordon_> ijw, the vlans?
22:29:29 <sgordon_> sure why not
22:29:37 <sgordon_> #topic vlan trunking redux
22:29:39 <ijw> https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks
22:29:45 <sgordon_> #link https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks
22:30:06 <ijw> https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
22:30:07 <sgordon_> i briefly caught the discussion you had with amuller about it y'day in the neutron channel
22:30:13 <sgordon_> #link https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
22:30:29 <ijw> Haven't checked the latter but I think Erik was facing Maru's -2 and Maru had somehow managed to miss that we want both to go through
22:30:36 <jrakerevelant> i heard vlans are evil
22:30:38 <jrakerevelant> ;)
22:30:48 <ijw> It's a horror movie sequel
22:30:57 <aveiga> oh man am I in the wrong company
22:31:06 * aveiga likes 802.1q
22:31:21 <ijw> aveiga: ipv6 is the answer, you know.  Now what's the question again?
22:31:29 <aveiga> ijw: it always is
22:31:29 <jrakerevelant> ijw: is it a requirement for legacy applications?
22:31:41 <ijw> jrakerevelant: VLANs?
22:31:45 * sgordon_ struggles to find the link for Erik's spec
22:31:48 <aveiga> jrakerevelant: no, there are some newer things that need it too
22:31:55 <aveiga> i.e. MT-ISIS peers...
22:32:09 <ijw> Partly yes, we expect to face VMs that like them, and partly no, we also and independently of NFV expect there to be VMs with lots and lots of network connections
22:32:31 <sgordon_> #link https://review.openstack.org/#/c/136554/
22:32:31 <ijw> The former is more the VLAN trunk thing, the latter is more the port thing
22:32:42 <sgordon_> urgh that's yours :)
22:32:43 <ijw> But the line is faint and easy to cross, in both directions
22:32:46 <aveiga> ijw: the latter is the one I expect to see as a big deal, but also as an addresses-per-port issue
22:33:01 <ijw> Yup, and Erik's spec addresses the port addressing
22:33:21 <ijw> The former annoys me more, but that's a matter of what I face more often.  We need both, that's all I'll say on the subject
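As background for the "lots and lots of network connections" case ijw mentions: without trunking support, the usual workaround today is simply to attach many Neutron ports to a single instance. A minimal sketch of that workaround, assuming python-novaclient's v2 client (credentials and UUIDs below are placeholders):

from novaclient import client as nova_client

# Placeholder credentials and UUIDs; in practice these come from Keystone/Neutron.
nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                          'http://keystone.example.com:5000/v2.0')
image_id = 'IMAGE-UUID'
flavor_id = 'FLAVOR-UUID'
port_ids = ['PORT-UUID-1', 'PORT-UUID-2', 'PORT-UUID-3']  # pre-created Neutron ports

# One NIC per Neutron port; the trunking/vlan-aware-vms specs aim to let a
# single port carry many tagged VLANs instead of multiplying ports like this.
server = nova.servers.create(name='vnf-router',
                             image=image_id,
                             flavor=flavor_id,
                             nics=[{'port-id': pid} for pid in port_ids])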
22:33:47 <sgordon_> #link https://review.openstack.org/#/c/94612/
22:34:09 <ijw> I suspect when we have them we will want to tweak them, but this time we need to make a concerted effort - all of us - to keep on top of the specs, review and criticise them, and then do the same for the code.
22:34:25 <ijw> I am as guilty as anyone of this, but fast turnarounds, please
22:35:08 <sgordon_> #info specs are up for review and need concerted review bandwidth and turnarounds
22:35:12 <jrakerevelant> i think i am not the right person to judge if "we" need sth like that, so i trust in you guys
22:35:16 <sgordon_> #action sgordon to ask maru about -2 on https://review.openstack.org/#/c/94612/
22:35:16 <ijw> The spec you don't have yet because I got a bit tied up in the details is the one for MTU specification and discovery - sorta kinda NFV related, for some applications (including one I have to deal with) and an annoyance to cloud users in general, too
22:35:33 <aveiga> +1
22:35:34 <adrian-hoban__> Are these all captured on the wiki too?
22:35:35 <ijw> jrakerevelant: read it anyway, the worst that will happen is you don't vote
22:35:56 <aveiga> adrian-hoban__: no, because right now they're not use cases
22:36:00 <aveiga> they're potential solutions
22:36:11 <aveiga> but I intend to add some use cases that may need some of them
22:36:28 <sgordon_> #info ijw working on spec for MTU specification and discovery
22:36:30 <ijw> sgordon_: other than that I think there were about 5 or so libvirt/KVM BPs mentioned in the summit Nova session, and I admit I've not been following them too closely
22:36:41 <sgordon_> yeah they are still progressing
22:36:43 <jrakerevelant> ijw: I'll look into it
22:36:47 <sgordon_> lot of back and forth about the data model
22:36:56 <sgordon_> and still of course the issue of CI which i need to chase down
22:37:16 <sgordon_> there seemed to be some indication we may in fact be able to demonstrate them on HP Cloud infra at least
22:37:22 <ijw> ... with me, I think, but I need to check with the opnfv guys about the hardware to be provided and bugger about with cobbler
22:37:31 <adrian-hoban__> Intel reps are starting to engage with the Infrastructure team now to get our CI in place
22:37:35 <sgordon_> yeah
22:37:39 <aveiga> this is good news
22:37:40 <sgordon_> seems to be lots of hardware
22:37:42 <ijw> sgordon_: I think we want both physical and virtual but virtual would be a real win even if it's not perfect
22:37:48 <sgordon_> trying to nail down a resource to "own" it
22:37:55 <sgordon_> right
22:38:10 <sgordon_> i still think we would need physical to demo the stuff adrian-hoban__'s team are working on with device locality
22:38:13 <sgordon_> i could be wrong though
22:38:35 <aveiga> actually, that's a good point.  We should caveat that use cases asking for X performance rates may not be verifiable on all OpenStack CI systems
22:38:39 <jrakerevelant> ijw: what are the requirements on the virtual setup?
22:38:41 <aveiga> and therefore YMMV
22:38:44 <sgordon_> but being able to get some of the other stuff into the gate rather than third party would be win
22:38:51 <ijw> vhostuser is still an issue, we need a spec in Neutron again to get the Nova guys to agree to use it, and for that we need a controller that causes it to be used (even if it's not gating)
22:38:55 <adrian-hoban__> sgordon: Yep, I think we need physical for a few of the items
22:39:10 <ijw> jrakerevelant: Not the expert, but there's a ML thread with Dan Berrange that describes their thinking
22:39:19 <sgordon_> #info use cases caveats need to be added around ability of OpenStack CI to measure certain performance rate requirements etc.
22:39:42 <jrakerevelant> ijw: could you dig up the link maybe?
22:39:57 <sgordon_> jrakerevelant, basically you can actually expose a numa topology on libvirt/kvm
22:40:04 <adrian-hoban__> ijw: we're working up some plans on the vhost-user. Watch this space
22:40:12 <sgordon_> jrakerevelant, the key is whether we can orchestrate this on one of the clouds infra uses (or ideally both)
22:40:21 <ijw> adrian-hoban__: Get it in, man ;)
22:40:28 <sgordon_> grabbing link
22:40:37 <jrakerevelant> sgordon_: ok
22:40:48 <sgordon_> i actually have someone who can help with that if needed adrian-hoban__
22:41:07 <sgordon_> though i think to ijw's point key is what if anything is done on the neutron side
22:41:11 <sgordon_> to ensure we can test it
22:41:17 <ijw> http://osdir.com/ml/openstack-dev/2014-11/msg00602.html is the thread for testing
22:41:26 <adrian-hoban__> sgordon_: Would welcome that. Thanks
22:41:29 <jrakerevelant> ijw: awesome thanks
22:41:41 <sgordon_> #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/050469.html
22:41:43 <sgordon_> jrakerevelant, ^
22:42:08 <ijw> that too
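To make the earlier point about exposing a guest NUMA topology concrete: with the Juno NUMA placement work this is requested through flavor extra specs, so in principle CI mainly needs a suitably defined flavor plus a hypervisor that honours it. A rough sketch, assuming python-novaclient and the Juno hw:numa_* extra specs (credentials and sizes are placeholders):

from novaclient import client as nova_client

# Placeholder credentials; normally sourced from the environment.
nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://keystone.example.com:5000/v2.0')

# Ask for two guest NUMA nodes, splitting vCPUs and memory explicitly
# between them via the Juno hw:numa_* flavor extra specs.
flavor = nova.flavors.create(name='nfv.numa', ram=4096, vcpus=4, disk=20)
flavor.set_keys({'hw:numa_nodes': '2',
                 'hw:numa_cpus.0': '0,1', 'hw:numa_mem.0': '2048',
                 'hw:numa_cpus.1': '2,3', 'hw:numa_mem.1': '2048'})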
22:42:34 <ijw> OK, so any more specs of interest right now?  I know what I know, other people must also have opinions
22:43:09 <sgordon_> #link https://review.openstack.org/#/c/128825/
22:43:37 <sgordon_> this recently picked up a -2, trying to explain the logic atm
22:43:49 <sgordon_> topic is optimizing virtio-net multiqueue usage
22:43:51 <ijw> (virtio-net multiqueue - enhanced networking speed for supporting VMs)
22:44:46 <ijw> Was that the one where Dan said that there's a limit on the number of queues per host, at the summit?
22:44:53 <ijw> danpb, that would be
22:44:57 <sgordon_> per guest i think
22:44:58 <sgordon_> but yeah
22:45:00 <adrian-hoban__> ijw: Yep I think so
22:45:25 <aveiga> that could get tricky at the scheduling level...
22:45:26 <ijw> I wonder if that also applies to singlequeue, in which case we probably have a repair job to do to the way nova schedules and should raise a bug
22:45:28 <sgordon_> vladik wasn't actually at the summit so dan and i were trying to explain it on his behalf
22:46:19 <ijw> OK, can you get the details from Vladik?  Cos we should definitely check if that has wider implications, for starters
22:47:27 <sgordon_> mmm
22:47:37 <sgordon_> are you saying in terms of it being a finite resource?
22:47:47 <sgordon_> as my understanding is there is a limit of # of queues per guest
22:47:51 <sgordon_> not a per host limit
22:48:20 <adrian-hoban__> queues and MSIx
22:48:38 <sgordon_> yeah
22:48:43 <sgordon_> both guest side though afaik
22:49:20 <sgordon_> while we're at it, and because i want aveiga to go on his PTO mad at me
22:49:27 <sgordon_> PXE boot was raised on the M/L again
22:49:33 <sgordon_> #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/051561.html
22:50:35 <sgordon_> question is whether in a cloud world that makes sense versus a PXE image in glance
22:50:58 <sgordon_> monty also suggested on M/L that ironic already has PXE support (which im sure it does) that is driven by nova
22:50:58 <aveiga> I still don't see a reason to do this
22:51:13 <ijw> I don't have a use case, but we've used an iPXE image to boot a 'diskless' VM and that works just fine, albeit it has one disk.
22:51:14 <sgordon_> it's unclear
22:51:17 <jrakerevelant> i dont see the use case
22:51:21 <aveiga> I mean, if you're going to PXE boot something *not* in OpenStack, ok...
22:51:21 <sgordon_> i'd prefer to see a use case
22:51:32 <jrakerevelant> i mean in a telco specific way
22:51:35 <sgordon_> everyone i have talked to about it has ended up putting the image in glance and being fine with it
22:51:38 <ijw> Ironic is using PXE for entirely different and non-tenant-facing reasons and I don't think that's pertinent
22:51:46 <sgordon_> ijw, yeah that was my thought....
22:52:01 <aveiga> I can see needing to provide PXE services to outside devices
22:52:11 <sgordon_> mmm
22:52:17 <sgordon_> do we necessarily stop that though?
22:52:20 <aveiga> I mean, booting a thin client farm that connects to a "remote desktop" farm?
22:52:26 <clarkb> ijw: I think its pertinent in that ironic could maybe boot your VMs for you (which may be what mordred was getting at)
22:52:27 <ijw> I presume the request is 'I want a machine with 0 disks and for it to load all its state into a RAMdisk from a remote server' and you can take that as meaning that you the tenant will provide the server or you want the system to do it for you, I guess
22:52:29 <aveiga> sgordon_: yes, we block DHCP for good reason
22:52:31 <ijw> I've done it with a tenant server.
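For reference, the "iPXE image in glance" approach described here amounts to uploading a small iPXE boot image and booting instances from it, with the tenant providing the server that the instance chain-loads its real payload from. A rough sketch, assuming the glance v1 Python client (endpoint, token, and file names are placeholders):

from glanceclient import Client as GlanceClient

# Placeholder endpoint and token; normally obtained from Keystone.
glance = GlanceClient('1', 'http://glance.example.com:9292', token='TOKEN')

# Upload a small iPXE boot image; the embedded iPXE script (built separately)
# chain-loads the real payload from a tenant-run HTTP/TFTP server.
with open('ipxe-boot.img', 'rb') as image_data:
    image = glance.images.create(name='ipxe-boot',
                                 disk_format='raw',
                                 container_format='bare',
                                 data=image_data)

# Instances booted from this image then fetch their state over the tenant
# network, e.g. nova.servers.create(name='diskless-vnf', image=image.id, ...)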
22:52:42 <sgordon_> aveiga, you are no fun at all today
22:52:43 <sgordon_> ;p
22:53:06 <aveiga> it's snowing here, so I'm passing along the pain :)
22:53:13 <ijw> aveiga: Oh, god, the firewalling stuff, that's another can of worms we should at least mention so it's saved for next time
22:53:17 <sgordon_> ijw, yeah
22:53:21 <aveiga> ijw: +1
22:53:23 <sgordon_> in the simplest case the tenant imo
22:53:29 <sgordon_> but still
22:53:29 <aveiga> the implicit filtering stuff can be a pain for NFV uses
22:53:32 * sgordon_ shelves that
22:53:35 <aveiga> necessary for shared clouds though
22:53:39 <ijw> sgordon_: let's get whoever it is to clarify their use case
22:53:59 <ijw> aveiga: there are at least two BPs, hence can of worms
22:54:00 <sgordon_> #info PXE, much confusion, need documented use case
22:54:03 <aveiga> ah
22:54:16 <sgordon_> ok so i had three more items
22:54:26 <sgordon_> i locked in the meeting time, obviously we're here so goodo
22:54:26 <ijw> Run, Forrest, run!
22:54:53 <sgordon_> jannis added a glossary
22:54:56 <sgordon_> #link https://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup/Glossary&action=edit&redlink=1
22:55:06 <sgordon_> or at least a placeholder for one
22:55:10 <sgordon_> and finally last weeks hot topic
22:55:13 <ijw> I like that glossary as it is
22:55:14 <sgordon_> #topic orchestration
22:55:18 <sgordon_> mkoderer, still around
22:55:20 <sgordon_> ?
22:55:23 <ijw> I think it sums up the problem nicely
22:55:28 <sgordon_> #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/051473.html
22:55:37 <sgordon_> ijw, it needs one of those mental puzzle images
22:56:00 <jrakerevelant> sgordon_: yeah i didn't know what to put in there yet
22:56:12 <sgordon_> mkoderer took an action to kick off a thread about orchestration and keep working on the etherpad last week
22:56:19 <sgordon_> that happened
22:56:23 <aveiga> I'm actually really curious as to where we think the line should be drawn for NFV orchestration within OpenStack
22:56:31 <sgordon_> from a use cases perspective we determined that wasn't the initial focus
22:56:37 <sgordon_> aveiga, right
22:56:39 <aveiga> I mean, we don't expect OpenStack to orchestrate network gear outside of the stack, right?
22:56:50 <sgordon_> aveiga, well - i dont
22:56:53 <aveiga> I certainly don't want OpenStack (no offense) to manipulate my routers...
22:56:54 <sgordon_> but i am but one man!
22:56:55 <ijw> aveiga: Well, I think there are two questions, really
22:57:12 <sgordon_> and what about tripleo...
22:57:14 * sgordon_ ducks
22:57:22 <ijw> aveiga: clearly, anything that is 'a part of the cloud' and not an individual device needs a cloud API, implying Openstack, to make it do stuff and things
22:57:37 <aveiga> that's the point of heat
22:57:56 <sgordon_> yeah
22:58:07 <ijw> aveiga: that aside, how do you start and restart your VMs, configure them, etc?  That could be Heat or a similar aaS offering - which is not the very minimal core of Openstack, but it is Openstack - or it could be an application.  I suspect both cases are relevant and required
22:58:09 <sgordon_> and i think zane or someone expressed interest from a heat perspective on the thread
22:58:23 <sgordon_> like everyone they need use cases though
22:58:23 <ijw> aveiga: it's not really - Heat tells other services what to do
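As a concrete anchor for "Heat tells other services what to do": the in-cloud half of orchestration is normally expressed as a HOT template that Heat realises by calling the usual Nova/Neutron APIs, rather than by touching anything outside the cloud. A minimal sketch, assuming python-heatclient (endpoint, token, and resource names are illustrative only):

from heatclient.client import Client as HeatClient

# Placeholder endpoint and token; normally obtained from Keystone.
heat = HeatClient('1', 'http://heat.example.com:8004/v1/TENANT-ID', token='TOKEN')

# A minimal HOT template: Heat only drives other OpenStack services (here
# Nova); it does not reach out and configure devices outside the cloud.
template = """
heat_template_version: 2013-05-23
resources:
  vnf_instance:
    type: OS::Nova::Server
    properties:
      image: vnf-image
      flavor: m1.medium
      networks:
        - network: private
"""

heat.stacks.create(stack_name='vnf-demo', template=template)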
22:58:55 <aveiga> ijw: exactly.  What I'm getting at is we should provide an ETSI orchestration tool with the northbound interface of OpenStack
22:59:11 <ijw> aveiga: so if I had some random (potentially virtualisable) network device that made my VNFs 10x sexier, then I would expect there to be some service - perhaps a new one - to orchestrate it, then I would expect Heat to boss it about
22:59:14 <aveiga> I don't think we should be in the business of building the tools to orchestrate everything including the external systems
22:59:43 <ijw> aveiga: +1, I totally agree with that
22:59:46 <aveiga> ijw: that's totally fine, but wouldn't that network device's interface to OpenStack be a third party driver?
22:59:52 <aveiga> we have ways of building those already
22:59:54 <ijw> Depends what it does
23:00:10 <ijw> It may have a totally new and shiny API, in which case it's probably a new endpoint (which is also fine, imo)
23:00:13 <sgordon_> the other project that came up on that thread was murano
23:00:15 <aveiga> yeah, but I have yet to see anything that requires that much more of OpenStack than we already provide
23:00:21 <sgordon_> but of course that is still very inwardly openstack-focussed
23:00:24 <sgordon_> not external systems
23:00:33 <adrian-hoban__> I agree from the perspective of offering the right level of configuration & control capabilities, but not necessarily innate knowledge of the service
23:00:35 <aveiga> with the exception of being able to manipulate neutron for connecting to external networks in a custom fashion
23:00:50 <aveiga> adrian-hoban__: we need at least a little, otherwise you can't properly service chain
23:00:57 <ijw> The 'keep-your-app-running' element of orchestration definitely includes Murano
23:01:11 <sgordon_> #info defining what NFV orchestration really means in an openstack-specific way continues to be a challenge :)
23:01:17 <sgordon_> ijw, +1
23:01:23 <aveiga> yup
23:01:34 <jrakerevelant> agree
23:01:36 <sgordon_> ok we're at time for today
23:02:04 <sgordon_> those of you in the US enjoy your thanksgiving
23:02:04 <ijw> I think I would at least split the definition into 'internal' and 'external' orchestration, for want of a better word.  One provides a cloud API to something that is not itself cloudy (a widget, a server) and one looks after applications
23:02:33 <sgordon_> #info "ijw>  split the definition into 'internal' and 'external' orchestration, for want of a better word.  One provides a cloud API to something that is not itself cloudy (a widget, a server) and one looks after applications"
23:02:42 <ijw> (via published APIs, generally)
23:02:50 <sgordon_> i think that's accurate to the discussion we were having above
23:03:04 <sgordon_> #endmeeting