22:00:21 #startmeeting telcowg
22:00:22 Meeting started Wed Nov 26 22:00:21 2014 UTC and is due to finish in 60 minutes. The chair is sgordon_. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:27 The meeting name has been set to 'telcowg'
22:00:28 #chair amitry
22:00:29 Current chairs: amitry sgordon_
22:00:32 #topic roll call
22:00:35 hi all
22:00:43 hi
22:00:43 who is around for the telco working group meeting
22:00:47 hello...Ian here
22:00:53 hi
22:00:54 recognizing that many will already be off for thanksgiving
22:01:15 present
22:01:30 hi jannis here
22:01:33 #topic use cases
22:01:49 were the logs from last week posted?
22:01:53 yes
22:02:03 http://eavesdrop.openstack.org/meetings/telcowg/2014/
22:02:12 i also sent them to the operators and developers list
22:02:15 hi
22:02:23 k maybe the wiki needs an update
22:03:19 so
22:03:23 i did some work on the wiki
22:03:39 and moved use cases off to here: https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases
22:03:57 the current three are a little too mixed probably
22:04:12 two VNF use cases and one (vlan trunking) which is effectively a requirement rather than a use case
22:04:24 though the context the latter came up in was justifying that requirement for neutron
22:04:33 #link https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases
22:05:24 there was some discussion about this in #openstack-nfv earlier in the week but i am trying to gauge what level to target the use cases at
22:05:30 and how to come up with a meaningful template
22:05:55 at the moment what i have is pretty high level/generic to give some freedom:
22:05:58 * Title
22:06:01 i think the wiki previously had the ETSI 8 use cases, the key is to make them relevant to the openstack community
22:06:03 * Description
22:06:07 * Characteristics
22:06:11 * Requirements
22:06:16 Bear in mind use cases don't necessarily drive requirements individually, I think that's a problem we had with the old method - so it's fine to express a use case without trying to solve your openstack problems at the same time
22:06:33 right
22:06:39 and in fact arguably preferable
22:06:40 +1
22:07:05 it's better for us to distill requirements at a baseline out of multiple use cases than to try to tailor your case to a specific technical function
22:07:06 a use case might even already be possible to implement with openstack as is, correct?
22:07:09 part of the problem with the way the ETSI 8 use cases were presented is that they were listed with a very brief one-sentence description
22:07:18 it wasn't really made relevant at all
22:07:24 it was just: here are the ETSI use cases
22:07:26 jrakerevelant: absolutely
22:07:30 sgordon_: agree
22:07:42 if they can be made relevant, then i think that would be good
22:08:08 we are currently onboarding a vEPC and i will see if there is a use case here
22:08:16 Frequently the changes to Openstack make a use case easier, or better (i.e. improving compute performance) but don't actually enable it wholesale, I would say
22:08:33 right so i think broadly vEPC is a use case not covered in what we have documented today
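
For readers of the log: the template fields sgordon_ reads out above (Title, Description, Characteristics, Requirements) could be skeletoned roughly as below. This is only a hedged illustration; the field names come from the meeting, while the vEPC sample content is hypothetical, not an agreed case.

    # Sketch of the wiki use-case template discussed above. Field names are
    # from the meeting; the sample values are hypothetical.
    use_case = {
        "Title": "Virtualized Evolved Packet Core (vEPC)",
        "Description": (
            "A telco wants to run EPC components as VMs on a shared "
            "OpenStack cloud instead of on dedicated appliances."
        ),
        "Characteristics": [
            "high packet throughput on the data plane",
            "tight latency bounds on the signalling plane",
        ],
        "Requirements": [
            "expressed as 'I want to do Y', not as a ready-made blueprint",
        ],
    }

    # Render as plain text suitable for pasting into the UseCases wiki page.
    for field, value in use_case.items():
        print("== %s ==" % field)
        if isinstance(value, list):
            for item in value:
                print("* %s" % item)
        else:
            print(value)
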
22:08:39 Depends. Mainly I'm saying don't write a blueprint in the form of a use case
22:08:40 I'd say for the ETSI cases we don't necessarily need them all in OpenStack, either
22:08:41 of course not all vEPC are created equal ;)
22:08:44 but got to start somewhere
22:08:45 some of them belong here and some may not
22:08:56 sgordon_: agree
22:09:05 sgordon_: agreed
22:10:06 i am still trying to figure out how a use case can be general enough, some of the things we encounter are more like: how can I abstract away from things like DPDK and SRIOV
22:10:07 ijw, right - which i think is effectively what i did when i dumped your vlan trunking justification in as a use case :/
22:10:20 Bad sgordon_
22:10:24 :)
22:10:28 easily deleted! :P
22:10:54 Probably wise, and I'll see if I can get the relevant team to write up the actual use case that justifies it from our perspective
22:10:59 Also agree that ETSI-NFV defined use cases are very high level. Where is the link to them now for OpenStack folks if we need to refer back
22:11:24 i believe it's in references at the bottom
22:11:41 #link http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf
22:12:08 sgordon: Thanks, I missed the link
22:12:42 jrakerevelant, to your point i think that is why there needs to be a high level description and coverage of characteristics of the workload
22:13:00 is "virtualizing an IMS" really a use case for us though?
22:13:04 and even requirements at a high level
22:13:11 well put it this way
22:13:12 its rather an application on top
22:13:17 im now down to two use cases
22:13:28 so if you want to cut more then we need people to contribute some ;)
22:13:28 :)
22:13:36 we can work on adding some more
22:13:39 jrakerevelant, define on top?
22:13:52 making openstack suitable to run IMS
22:13:57 no i dont want to remove it i want to know if it is asking the right question
22:13:59 from the discussion last week facilitating the workloads that run on top of openstack seemed to be the agreed focus
22:14:00 for phase 1
22:14:20 sgordon_: I think it should be
22:14:23 perhaps i am mis-paraphrasing though
22:14:28 amitry: right, with clearwater as a reference?
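
On jrakerevelant's point about abstracting away from DPDK and SR-IOV: one knob that already existed in Neutron at the time is the binding:vnic_type port attribute, which lets a tenant ask for a 'direct' (SR-IOV) port while leaving the backend choice to the operator. A hedged sketch with python-neutronclient; the credentials and network ID are placeholders.

    # Rough sketch: request a high-performance port without naming the backend.
    # binding:vnic_type lets the tenant ask for 'direct' (SR-IOV VF) vs
    # 'normal' (vswitch); the operator decides the mechanism behind it.
    # All auth values and the network ID below are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username="demo", password="secret",
        tenant_name="demo", auth_url="http://controller:5000/v2.0",
    )

    port = neutron.create_port({
        "port": {
            "network_id": "NET_ID",          # placeholder
            "binding:vnic_type": "direct",   # SR-IOV, if the deployment offers it
            "name": "vnf-dataplane-port",
        }
    })["port"]
    print(port["id"])
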
22:14:32 we need a target, rather than just to piecemeal implement technologies
22:14:36 Look - when it comes down to it a use case really needs to be described by either (a) a vendor with a widget who for whatever reason has some difficulty running it on stock openstack or (b) a telco who has a widget from somewhere and wants to describe what they would *like* to do with it rather than what's possible today
22:14:44 ijw, right
22:14:51 ijw, and i think both of the ones we have atm were cases of (a)
22:14:56 Yup
22:14:57 which is perfectly fine
22:15:04 aveiga surely must have examples of (b)
22:15:06 but would be great to get a mix of (b)
22:15:08 ijw: we can bring that up a level, widget isn't always the only way to describe it :)
22:15:25 #info "ijw> Look - when it comes down to it a use case really needs to be described by either (a) a vendor with a widget who for whatever reason has some difficulty running it on stock openstack or (b) a telco who has a widget from somewhere and wants to describe what they would *like* to do with it rather than what's possible today"
22:15:30 That's fine, I would expect an operator to have a picture in their mind of what they want to do with it and that's what you need to describe
22:15:44 s/with it//
22:15:46 for instance, I want to provide X type of service with X technological method
22:16:05 maybe managed VPNs where I pick MPLS
22:16:15 makes sense as a use case in this context i think
22:16:15 or managed voice where I run a SIP gateway
22:16:25 Yup - and use cases are 'I want to provide X type of service' and perhaps a worked example of how you might do it, but if that's the case I think we should expect other people to add to the worked examples or at least quibble with design choices
22:16:35 I agree
22:17:04 my main problem as a telco currently is that vendor X needs technology Y, they say
22:17:18 and Y isnt easy to implement with vanilla openstack
22:17:22 i think that is everyone's problem currently :)
22:17:25 Are we good with that? Stick the 'I want' on the wiki and let people argue the implementation at the bottom or at least pass comment in an etherpad? Or we could even have a repo with this crap in
22:17:25 yup
22:17:31 so you start adding balconies that i dont want
22:17:36 ijw, right
22:17:44 ijw: I think that's the abstraction we want to start with
22:17:55 ijw, i did dig into storyboard a bit this week but for simplicity i would say keep it on the wiki *for now*
22:17:58 jrakerevelant: A fascinating expression, and in that case your use case would be 'a client says they need Y'
22:18:01 then we can debate the merits of individual implementations at a second level
22:18:23 At worst, and don't expect much sympathy if you put that in ;)
22:18:31 #info Start with "I want" use cases on the wiki and let people argue the implementation at the bottom or at least pass comment in an etherpad
22:18:36 Or perhaps the wrong sort of sympathy ;)
22:18:45 ijw: well, but i dont necessarily want to provide Y but an abstraction layer to Y
22:18:46 aveiga, agree
22:19:14 jrakerevelant: Best expressed as 'I want to do Y' and then we can debate the merits of various solutions up to and including changing Openstack
22:19:19 right
22:19:46 openstack design tenets ultimately favor abstraction of implementation anyway where possible...
22:19:51 Open source network architecture, this is going to be fun...
22:20:25 ijw: yes probably
22:20:30 jrakerevelant, you mentioned a vEPC you are looking to set up earlier
22:20:31 the basis of it is that either a) OpenStack can do it now with some tweaks, in which case we document the tweaks, b) OpenStack can be made to do it, in which case we write up a BP or c) OpenStack can't be made to do it because the vendor of Y did it wrong, in which case we can't do much about it other than recommend options?
22:20:42 jrakerevelant, would you be interested in trying to cover that use case?
22:20:47 jrakerevelant: If you add why you need to do Y, then I think it will help folks to understand much more quickly the value
22:21:11 =1
22:21:12 sgordon_: maybe a few weeks in
22:21:13 +1 even
22:21:32 aveiga: Even that's useful, but 'it would be much easier if we changed X' is a fine answer to the problem and one we can target. The 'openstack can do it with config tweaks' solutions tend to involve dedicating your whole cloud to a single application, which is bloody annoying
22:21:57 ijw: I agree, but it's always those damned snowflakes that hold the purse strings...
22:21:59 sgordon_: we are even struggling to find the most flexible way to integrate physical interfaces like eNodeBs
22:22:12 we know how to do it in 3 different ways
22:22:32 but none of them feel natural to openstack
22:22:41 aveiga: I know, and true enough, but you're trying to help reduce their infrastructure cost, and they will agree with you after sufficient whisky
22:22:56 jrakerevelant: Again - 3 solutions and the one you'd like is fine
22:23:04 ijw, right - i think documenting the config tweaks approach is more about illustrating how bad the current state is
22:23:10 Easier to pitch the change if there's background information
22:23:10 ijw: I find your ideas intriguing and would like to subscribe to your newsletter
22:23:21 aveiga: it's very expensive but comes with whisky
22:23:42 jrakerevelant: put them up! No reason not to chat about them, because you may find someone knows a way to make them more natural
22:24:01 and if not, we can figure out which way is the most OpenStack-like (agnostic to forced solutions) and implement that
22:24:04 #action sgordon to add a section to top of https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases with guidance from this discussion on how to frame use cases
22:24:14 aveiga: you mean as a use case?
22:24:22 jrakerevelant: absolutely
22:24:29 yeah
22:24:33 one use case, and provide the 3 ways you think it can be done
22:24:34 you have an "i want"
22:24:39 :)
22:24:40 About time someone started making app design recommendations, in fact. We kind of short-change tenants in favour of cloud operators in the world of Openstack
22:24:40 and we'll debate the 3 options
22:24:48 When we get to BPs for these things, it will be good to document the alternatives too
22:24:51 sounds good
22:24:55 adrian-hoban__: agreed
22:25:19 I don't even think it's wrong to implement multiple methods if they are not mutually exclusive
22:25:20 put in an action item, i will try to do it by next wednesday
22:25:22 options are good for everyone
22:25:28 #action jrakerevelant to document desire to integrate physical interfaces 'like eNodeBs' and current approaches as a use case
22:25:44 right i was going to say before
22:25:51 some of these things arent mutually exclusive
22:25:51 jrakerevelant: as an aside did you see the cloud edge BP?
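
The transcript does not say which three eNodeB integration approaches jrakerevelant's team tried; purely as an illustration, one common pattern of the era is an admin-created Neutron provider network pinned to the physical segment the eNodeBs sit on, so guests and the external gear share an L2 domain. A sketch, with all values as placeholders.

    # Hypothetical sketch of one way to reach physical gear (e.g. eNodeBs):
    # an admin-created provider network mapped to a known VLAN on a known
    # physical network. Not necessarily one of the three approaches meant in
    # the meeting; all values below are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username="admin", password="secret",
        tenant_name="admin", auth_url="http://controller:5000/v2.0",
    )

    network = neutron.create_network({
        "network": {
            "name": "ran-backhaul",
            "provider:network_type": "vlan",
            "provider:physical_network": "physnet1",  # operator-defined mapping
            "provider:segmentation_id": 100,          # VLAN carrying the eNodeBs
            "shared": False,
        }
    })["network"]
    print(network["id"])
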
22:26:03 jrakerevelant: if not go look it up, but we should probably take that out of this meeting
22:26:09 and at a high level may be expressed as i need to move packets at a certain rate or w/e
22:26:16 but there are different ways to achieve that
22:26:29 ijw: no, please msg the link, or i google :)
22:26:48 https://blueprints.launchpad.net/neutron/+spec/cloud-edge-networking
22:26:59 #link https://blueprints.launchpad.net/neutron/+spec/cloud-edge-networking
22:27:00 yup found it
22:27:03 thanks
22:27:06 ok
22:27:21 #link https://review.openstack.org/#/c/136555/
22:27:21 so it's thanksgiving eve and i am not sensing a wealth of volunteers to throw up use cases
22:27:36 i just found out i know you ian
22:27:39 but i can call out via M/L again
22:27:50 sgordon_: I'll gladly do them, however I am on PTO for the next week...
22:28:00 jrakerevelant: Yeah, I worked out who you were a while back (but your name is a little more obvious ;)
22:28:02 #action sgordon to issue call for use cases via M/L once UseCases page updated
22:28:16 aveiga, np - it's more i want to make sure we capture a broad spectrum if possible
22:28:24 absolutely
22:28:26 OK, there are three BPs worth checking, that one and the revenge of VLANs
22:28:37 and once I'm back, someone gently nudge me with a big stick
22:28:40 ;)
22:28:45 We having a section on that?
22:28:47 whats the revenge of the vlans??
22:29:07 a new hope
22:29:14 That wasn't a sequel
22:29:15 BOOOOO
22:29:16 ijw, the vlans?
22:29:29 sure why not
22:29:37 #topic vlan trunking redux
22:29:39 https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks
22:29:45 #link https://blueprints.launchpad.net/neutron/+spec/nfv-vlan-trunks
22:30:06 https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
22:30:07 i briefly caught the discussion you had with amuller about it y'day in the neutron channel
22:30:13 #link https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
22:30:29 Haven't checked the latter but I think Erik was facing Maru's -2 and Maru had somehow managed to miss that we want both to go through
22:30:36 i heard vlans are evil
22:30:38 ;)
22:30:48 It's a horror movie sequel
22:30:57 oh man am I in the wrong company
22:31:06 * aveiga likes 802.1q
22:31:21 aveiga: ipv6 is the answer, you know. Now what's the question again?
22:31:29 ijw: it always is
22:31:29 ijw: is it a requirement for legacy applications?
22:31:41 jrakerevelant: VLANs?
22:31:45 * sgordon_ struggles to find the spec link for Erik's
22:31:48 jrakerevelant: no, there are some newer things that need it too
22:31:55 i.e. MT-ISIS peers...
22:32:09 Partly yes, we expect to face VMs that like them, and partly no, we also and independently of NFV expect there to be VMs with lots and lots of network connections
22:32:31 #link https://review.openstack.org/#/c/136554/
22:32:31 The former is more the VLAN trunk thing, the latter is more the port thing
22:32:42 urgh that's yours :)
22:32:43 But the line is faint and easy to cross, in both directions
22:32:46 ijw: the latter is the one I expect to see as a big deal, but also as an addresses-per-port issue
22:33:01 Yup, and Erik's spec addresses the port addressing
22:33:21 The former annoys me more, but that's a matter of what I face more often. We need both, that's all I'll say on the subject
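
For context on what 'VLAN trunking' asks of Neutron: inside the guest, a VLAN-aware VM wants to carve 802.1q subinterfaces off a single port, roughly as sketched below (interface names and VLAN IDs are illustrative). The catch, and the reason for the blueprints above, is that stock Neutron of this era would typically drop the tagged traffic on the cloud side.

    # What a 'VLAN-aware VM' wants, seen from inside the guest: 802.1q
    # subinterfaces on one Neutron port, via iproute2. Illustrative only;
    # requires the cloud side to actually pass tagged frames.
    import subprocess

    def add_vlan_subinterface(parent, vlan_id):
        """Create e.g. eth0.100 carrying VLAN 100 on top of eth0."""
        name = "%s.%d" % (parent, vlan_id)
        subprocess.run(
            ["ip", "link", "add", "link", parent, "name", name,
             "type", "vlan", "id", str(vlan_id)],
            check=True,
        )
        subprocess.run(["ip", "link", "set", name, "up"], check=True)
        return name

    for vid in (100, 200, 300):  # one tenant-visible service per VLAN
        add_vlan_subinterface("eth0", vid)
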
22:33:47 #link https://review.openstack.org/#/c/94612/
22:34:09 I suspect when we have them we will want to tweak them, but this time we need to make a concerted effort - all of us - to keep on top of the specs, review and criticise them, and then do the same for the code.
22:34:25 I am as guilty as anyone of this, but fast turnarounds, please
22:35:08 #info specs are up for review and need concerted review bandwidth and turnarounds
22:35:12 i think i am not the right person to judge if "we" need sth like that, so i trust in you guys
22:35:16 #action sgordon to ask maru about -2 on https://review.openstack.org/#/c/94612/
22:35:16 The spec you don't have yet because I got a bit tied up in the details is the one for MTU specification and discovery - sorta kinda NFV related, for some applications (including one I have to deal with) and an annoyance to cloud users in general, too
22:35:33 +1
22:35:34 Are these all captured on the wiki too?
22:35:35 jrakerevelant: read it anyway, the worst that will happen is you don't vote
22:35:56 adrian-hoban__: no, because right now they're not use cases
22:36:00 they're potential solutions
22:36:11 but I intend to add some use cases that may need some of them
22:36:28 #info ijw working on spec for MTU specification and discovery
22:36:30 sgordon_: other than that I think there were about 5 or so libvirt/KVM BPs mentioned in the summit Nova session, and I admit I've not been following them too closely
22:36:41 yeah they are still progressing
22:36:43 ijw: I'll look into it
22:36:47 lot of back and forth about the data model
22:36:56 and still of course the issue of CI which i need to chase down
22:37:16 there seemed to be some indication we may in fact be able to demonstrate them on HP Cloud infra at least
22:37:22 ... with me, I think, but I need to check with the opnfv guys about the hardware to be provided and bugger about with cobbler
22:37:31 Intel reps are starting to engage with the Infrastructure team now to get our CI in place
22:37:35 yeah
22:37:39 this is good news
22:37:40 seems to be lots of hardware
22:37:42 sgordon_: I think we want both physical and virtual but virtual would be a real win even if it's not perfect
22:37:48 trying to nail down a resource to "own" it
22:37:55 right
22:38:10 i still think we would need physical to demo the stuff adrian-hoban__'s team are working on with device locality
22:38:13 i could be wrong though
22:38:35 actually, that's a good point. We should caveat that use cases asking for X performance rates may not be verifiable on all OpenStack CI systems
22:38:39 ijw: what are the requirements on the virtual setup?
22:38:41 and therefore YMMV
22:38:44 but being able to get some of the other stuff into the gate rather than third party would be a win
22:38:51 vhostuser is still an issue, we need a spec in Neutron again to get the Nova guys to agree to use it, and for that we need a controller that causes it to be used (even if it's not gating)
22:38:55 sgordon: Yep, I think we need physical for a few of the items
22:39:10 jrakerevelant: Not the expert, but there's a ML thread with Dan Berrange that describes their thinking
22:39:19 #info use case caveats need to be added around ability of OpenStack CI to measure certain performance rate requirements etc.
22:39:42 ijw: could you dig up the link maybe?
22:39:57 jrakerevelant, basically you can actually expose a numa topology on libvirt/kvm
22:40:04 ijw: we're working up some plans on the vhost-user. Watch this space
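
As background for sgordon_'s NUMA remark: libvirt can present an emulated NUMA topology to a guest via the domain XML, which is what makes a purely virtual CI node plausible for these tests. A hedged, illustrative fragment follows; cell sizes and CPU ranges are arbitrary, and on the Nova side this line of work is driven by flavor extra specs such as hw:numa_nodes.

    # Illustrative libvirt domain-XML fragment giving a guest two emulated
    # NUMA cells, so NUMA-aware scheduling can be exercised on virtual CI.
    # Cell sizes (KiB) and CPU ranges are arbitrary examples.
    NUMA_FRAGMENT = """\
    <cpu>
      <numa>
        <cell id='0' cpus='0-1' memory='2097152'/>
        <cell id='1' cpus='2-3' memory='2097152'/>
      </numa>
    </cpu>
    """
    print(NUMA_FRAGMENT)
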
22:40:12 jrakerevelant, the key is whether we can orchestrate this on one of the clouds infra uses (or ideally both)
22:40:21 adrian-hoban__: Get it in, man ;)
22:40:28 grabbing link
22:40:37 sgordon_: ok
22:40:48 i actually have someone who can help with that if needed adrian-hoban__
22:41:07 though i think to ijw's point the key is what if anything is done on the neutron side
22:41:11 to ensure we can test it
22:41:17 http://osdir.com/ml/openstack-dev/2014-11/msg00602.html is the thread for testing
22:41:26 sgordon_: Would welcome that. Thanks
22:41:29 ijw: awesome thanks
22:41:41 #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/050469.html
22:41:43 jrakerevelant, ^
22:42:08 that too
22:42:34 OK, so any more specs of interest right now? I know what I know, other people must also have opinions
22:43:09 #link https://review.openstack.org/#/c/128825/
22:43:37 this recently picked up a -2, trying to explain the logic atm
22:43:49 topic is optimizing virtio-net multiqueue usage
22:43:51 (virtio-net multiqueue - enhanced networking speed for supporting VMs)
22:44:46 Was that the one where Dan said that there's a limit on the number of queues per host, at the summit?
22:44:53 danpb, that would be
22:44:57 per guest i think
22:44:58 but yeah
22:45:00 ijw: Yep I think so
22:45:25 that could get tricky at the scheduling level...
22:45:26 I wonder if that also applies to singlequeue, in which case we probably have a repair job to do to the way nova schedules and should raise a bug
22:45:28 vladik wasn't actually at the summit so dan and i were trying to explain it on his behalf
22:46:19 OK, can you get the details from Vladik? Cos we should definitely check if that has wider implications, for starters
22:47:27 mmm
22:47:37 are you saying in terms of it being a finite resource?
22:47:47 as my understanding is there is a limit on # of queues per guest
22:47:51 not a per host limit
22:48:20 queues and MSI-X
22:48:38 yeah
22:48:43 both guest side though afaik
22:49:20 while we're at it, and because i want aveiga to go on his PTO mad with me
22:49:27 PXE boot was raised on the M/L again
22:49:33 #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/051561.html
22:50:35 question is whether in a cloud world that makes sense versus a PXE image in glance
22:50:58 monty also suggested on M/L that ironic already has PXE support (which im sure it does) that is driven by nova
22:50:58 I still don't see a reason to do this
22:51:13 I don't have a use case, but we've used an iPXE image to boot a 'diskless' VM and that works just fine, albeit it has one disk.
22:51:14 it's unclear
22:51:17 i dont see the use case
22:51:21 I mean, if you're going to PXE boot something *not* in OpenStack, ok...
22:51:21 i'd prefer to see a use case
22:51:32 i mean in a telco specific way
22:51:35 everyone i have talked to about it has ended up putting the image in glance and being fine with it
22:51:38 Ironic is using PXE for entirely different and non-tenant-facing reasons and I don't think that's pertinent
22:51:46 ijw, yeah that was my thought....
22:52:01 I can see needing to provide PXE services to outside devices
22:52:11 mmm
22:52:17 do we necessarily stop that though?
22:52:20 I mean, booting a thin client farm that connects to a "remote desktop" farm?
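
For context on the multiqueue spec above: even with virtio-net multiqueue configured by the hypervisor, the guest still has to opt in per interface, typically with ethtool. A sketch follows; the device name and queue count are illustrative, and the per-guest queue/MSI-X ceiling discussed above is enforced by the hypervisor, not by this call.

    # Guest-side opt-in for virtio-net multiqueue. The hypervisor must have
    # created the vNIC with N queues; the guest then enables them so traffic
    # spreads across vCPUs. Device name and queue count are illustrative.
    import subprocess

    def enable_multiqueue(dev, queues):
        # Equivalent to: ethtool -L <dev> combined <queues>
        subprocess.run(["ethtool", "-L", dev, "combined", str(queues)],
                       check=True)

    enable_multiqueue("eth0", 4)
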
22:52:26 ijw: I think it's pertinent in that ironic could maybe boot your VMs for you (which may be what mordred was getting at)
22:52:27 I presume the request is 'I want a machine with 0 disks and for it to load all its state into a RAMdisk from a remote server' and you can take that as meaning that you the tenant will provide the server or you want the system to do it for you, I guess
22:52:29 sgordon_: yes, we block DHCP for good reason
22:52:31 I've done it with a tenant server.
22:52:42 aveiga, you are no fun at all today
22:52:43 ;p
22:53:06 it's snowing here, so I'm passing along the pain :)
22:53:13 aveiga: Oh, god, the firewalling stuff, that's another can of worms we should at least mention so it's saved for next time
22:53:17 ijw, yeah
22:53:21 ijw: +1
22:53:23 in the simplest case the tenant imo
22:53:29 but still
22:53:29 the implicit filtering stuff can be a pain for NFV uses
22:53:32 * sgordon_ shelves that
22:53:35 necessary for shared clouds though
22:53:39 sgordon_: let's get whoever it is to clarify their use case
22:53:59 aveiga: there are at least two BPs, hence can of worms
22:54:00 #info PXE, much confusion, need documented use case
22:54:03 ah
22:54:16 ok so i had three more items
22:54:26 i locked in the meeting time, obviously we're here so goodo
22:54:26 Run, Forrest, run!
22:54:53 jannis added a glossary
22:54:56 #link https://wiki.openstack.org/w/index.php?title=TelcoWorkingGroup/Glossary&action=edit&redlink=1
22:55:06 or at least a placeholder for one
22:55:10 and finally last week's hot topic
22:55:13 I like that glossary as it is
22:55:14 #topic orchestration
22:55:18 mkoderer, still around
22:55:20 ?
22:55:23 I think it sums up the problem nicely
22:55:28 #link http://lists.openstack.org/pipermail/openstack-dev/2014-November/051473.html
22:55:37 ijw, it needs one of those mental puzzle images
22:56:00 sgordon_: yeah i didnt know what to put in there yet
22:56:12 mkoderer took an action to kick off a thread about orchestration and keep working on the etherpad last week
22:56:19 that happened
22:56:23 I'm actually really curious as to where we think the line should be drawn for NFV orchestration within OpenStack
22:56:31 from a use cases perspective we determined that wasn't the initial focus
22:56:37 aveiga, right
22:56:39 I mean, we don't expect OpenStack to orchestrate network gear outside of the stack, right?
22:56:50 aveiga, well - i dont
22:56:53 I certainly don't want OpenStack (no offense) to manipulate my routers...
22:56:54 but i am but one man!
22:56:55 aveiga: Well, I think there are two questions, really
22:57:12 and what about tripleo...
22:57:14 * sgordon_ ducks
22:57:22 aveiga: clearly, anything that is 'a part of the cloud' and not an individual device needs a cloud API, implying Openstack, to make it do stuff and things
22:57:37 that's the point of heat
22:57:56 yeah
22:58:07 aveiga: that aside, how do you start and restart your VMs, configure them, etc? That could be Heat or a similar aaS offering - which is not the very minimal core of Openstack, but it is Openstack - or it could be an application. I suspect both cases are relevant and required
22:58:09 and i think zane or someone expressed interest from a heat perspective on the thread
22:58:23 like everyone they need use cases though
22:58:23 aveiga: it's not really - Heat tells other services what to do
22:58:55 ijw: exactly. What I'm getting at is we should provide an ETSI orchestration tool with the northbound interface of OpenStack
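
To make the Heat distinction above concrete: Heat consumes a declarative template and drives the other OpenStack APIs to realize it, which is the 'internal' flavor of orchestration discussed below. A minimal, hypothetical HOT template for a single-VM VNF, built in Python for consistency with the other sketches in this log; the image, flavor, and network names are placeholders.

    # Minimal sketch of Heat's role: a declarative template that Heat turns
    # into Nova/Neutron API calls. OS::Nova::Server is a standard HOT
    # resource type; the property values here are placeholders.
    import yaml  # PyYAML

    hot_template = {
        "heat_template_version": "2014-10-16",
        "description": "Single-VM VNF, illustrative only",
        "resources": {
            "vnf_server": {
                "type": "OS::Nova::Server",
                "properties": {
                    "image": "vnf-image",     # placeholder Glance image
                    "flavor": "m1.large",     # placeholder Nova flavor
                    "networks": [{"network": "mgmt-net"}],  # placeholder
                },
            }
        },
    }

    print(yaml.safe_dump(hot_template, default_flow_style=False))
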
22:59:11 aveiga: so if I had some random (potentially virtualisable) network device that made my VNFs 10x sexier, then I would expect there to be some service - perhaps a new one - to orchestrate it, and then I would expect Heat to boss it about
22:59:14 I don't think we should be in the business of building the tools to orchestrate everything including the external systems
22:59:43 aveiga: +1, I totally agree with that
22:59:46 ijw: that's totally fine, but wouldn't that network device's interface to OpenStack be a third party driver?
22:59:52 we have ways of building those already
22:59:54 Depends what it does
23:00:10 It may have a totally new and shiny API, in which case it's probably a new endpoint (which is also fine, imo)
23:00:13 the other project that came up on that thread was murano
23:00:15 yeah, but I have yet to see anything that requires that much more of OpenStack than we already provide
23:00:21 but of course that is still very inwardly openstack-focussed
23:00:24 not external systems
23:00:33 I agree from the perspective of offering the right level of configuration & control capabilities, but not necessarily innate knowledge of the service
23:00:35 with the exception of being able to manipulate neutron for connecting to external networks in a custom fashion
23:00:50 adrian-hoban__: we need at least a little, otherwise you can't properly service chain
23:00:57 The 'keep-your-app-running' element of orchestration definitely includes Murano
23:01:11 #info defining what NFV orchestration really means in an openstack-specific way continues to be a challenge :)
23:01:17 ijw, +1
23:01:23 yup
23:01:34 agree
23:01:36 ok we're at time for today
23:02:04 those of you in the US enjoy your thanksgiving
23:02:04 I think I would at least split the definition into 'internal' and 'external' orchestration, for want of a better word. One provides a cloud API to something that is not itself cloudy (a widget, a server) and one looks after applications
23:02:33 #info "ijw> split the definition into 'internal' and 'external' orchestration, for want of a better word. One provides a cloud API to something that is not itself cloudy (a widget, a server) and one looks after applications"
23:02:42 (via published APIs, generally)
23:02:50 i think that's accurate to the discussion we were having above
23:03:04 #endmeeting