15:00:37 <serverascode> #startmeeting operators_telco_nfv
15:00:37 <openstack> Meeting started Wed Nov 16 15:00:37 2016 UTC and is due to finish in 60 minutes.  The chair is serverascode. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:42 <openstack> The meeting name has been set to 'operators_telco_nfv'
15:00:59 <serverascode> #topic roll call
15:01:12 <serverascode> Good day to everyone :)
15:01:33 <serverascode> #link https://etherpad.openstack.org/p/ops-telco-nfv-meeting-agenda
15:01:45 <serverascode> ^ agenda, feel free to add/change anything
15:03:15 <talonx> Hi all
15:03:56 <serverascode> hi talonx
15:04:01 <serverascode> so far you and me
15:04:09 <ad_rien_> o/
15:04:23 <serverascode> hi ad_rien_
15:04:28 <ad_rien_> hi
15:04:50 <ad_rien_> should we start ?
15:05:11 <serverascode> sure, I was just waiting to see if anyone else was here
15:05:14 <serverascode> but we can start
15:05:46 <serverascode> have a look at the agenda I linked above and feel free to add anything
15:06:12 <serverascode> #topic Consensus on NFVi mid to long term project
15:06:41 <serverascode> so the first item I had was ensuring we had consensus on the mid to long term project we discussed last week
15:06:52 <serverascode> we had a good meeting but it was a bit rushed at the end
15:07:05 <serverascode> anyone have any thoughts on that?
15:07:16 <serverascode> we don't have a lot of people in the meeting at this time
15:07:37 <serverascode> I started an etherpad about that topic
15:07:46 <serverascode> #link https://etherpad.openstack.org/p/OTN-nfvi-project
15:08:23 <serverascode> so have a look at that, there are a few things we need to figure out
15:09:56 <ad_rien_> I just added information related to G5K (aka Grid'5000)
15:11:16 <serverascode> thanks ad_rien_ I'm looking at the enos repository now :)
15:12:03 <GeraldK> question to me is whether we should spend a lot of effort on defining our own reference architecture, or better re-use some existing architecture maintained by someone else.
15:12:26 <ad_rien_> GeraldK:  such as ?
15:12:50 <jamemcc> Jamemcc here now also - sorry for being late
15:13:11 <serverascode> no worries
15:13:33 <GeraldK> ad_rien_: sure, we have to pick one. just wanted to make the point that IMHO we should not consider defining our own
15:13:52 <ad_rien_> the point is "minimal"?
15:14:01 <ad_rien_> so what is a minimal NFVi?
15:14:58 <GeraldK> good question. minimal to what we want to use it for, i.e. do some initial work on benchmarking.
15:14:58 <serverascode> that is me using that word
15:15:30 <serverascode> my thoughts on minimal: it's the most basic openstack possible, plus something like neutron-sfc
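A minimal sketch of what "basic OpenStack plus something like neutron-sfc" could look like, assuming the networking-sfc plugin (the project usually meant by "neutron-sfc"); the plugin paths below follow the networking-sfc documentation, everything else is illustrative:

    # neutron.conf on the controller (sketch)
    [DEFAULT]
    service_plugins = networking_sfc.services.sfc.plugin.SfcPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin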
15:16:01 <ad_rien_> yes that was roughly the end of our meeting
15:16:19 <ad_rien_> basic openstack: what does it mean? how many locations?
15:16:39 <ad_rien_> which services should be centralized vs distributed?
15:17:05 <serverascode> I would think one location
15:17:06 <ad_rien_> GeraldK: if you have a pointer to some architecture candidates, it would be great to give them a look
15:17:23 <serverascode> yes please if anyone has examples, please let us know
15:17:27 <ad_rien_> is one location an NFV infrastructure?
15:17:41 <serverascode> I believe so
15:17:46 <jamemcc> I like Curtis's start, which is just a few components of OpenStack - seems to me Keystone, Nova and Neutron - and then we need to focus more on what type of underlying hardware, and then on how many clouds, assuming it is multi-cloud
15:18:14 <serverascode> do people here think NFVi has to be multi-cloud?
15:18:23 <ad_rien_> jamemcc:  "assuming it is a multi-cloud"
15:18:26 <ad_rien_> :-)
15:18:41 <ad_rien_> from my understanding NFVi has to be a multi-site cloud
15:19:04 <ad_rien_> that is, several (nano/mini) DCs supervised in a unified manner?
15:19:16 <ad_rien_> but this is my understanding
15:19:22 <GeraldK> ad_rien_: what about the OPNFV scenarios, e.g. the one from the Colorado release: https://wiki.opnfv.org/display/SWREL/Colorado-Compact+Table+of+Scenarios
15:19:31 <jamemcc> Ok, good - yes, my opinion is that's perhaps one step past minimal - but the reference architecture has not nearly as much value if it's not multi-cloud
15:21:03 <serverascode> ok, interesting re multi-cloud
15:21:17 <ad_rien_> GeraldK:  not sure I correctly understand the table, could you explain it in a few lines please ?
15:22:31 <ad_rien_> in my mind the minimalist architecture can be to have controllers in a central DC and then only compute nodes deployed on the different sites (i.e. you have a master site and then several secondary sites. The master hosts keystone, nova controller, … whereas the secondary sites host compute nodes only)
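A rough sketch of such a secondary site in config terms, assuming Newton-era Nova; every hostname and credential here is invented for illustration:

    # nova.conf on a remote compute node (sketch)
    [DEFAULT]
    # the compute node phones home to the central control plane's message bus
    transport_url = rabbit://nova:SECRET@rabbit.central.example:5672/
    [glance]
    # image downloads also come from the master site
    api_servers = http://glance.central.example:9292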
15:23:12 <serverascode> #link https://wiki.opnfv.org/display/SWREL/Colorado-Compact+Table+of+Scenarios
15:24:03 <ad_rien_> We can evaluate scenarios such as bandwidth/latency issues related to the way OpenStack has been designed (we can discover that there are strong assumptions regarding the connectivity between compute nodes and controller nodes, for instance).
15:24:05 <GeraldK> ad_rien_: the second column shows the base scenario, e.g. OpenStack + ONOS. for this scenario several variants exist, e.g. OS + ONOS + SFC. the table then shows which installers support each combination (e.g. all 4 in the example)
15:24:21 <serverascode> ad_rien_ so by multi-cloud do you mean that hypervisors can be in different DCs?
15:24:44 <ad_rien_> this can be one possibility (among others)
15:25:15 <ad_rien_> GeraldK:  ok thanks
15:26:15 <GeraldK> just to confirm, when you talk about NFVi, are you referring to ETSI NFVI + VIM ?
15:27:05 <serverascode> I am yeah, mostly because I'm focussed on openstack, which is, IMHO, NFVi + VIM
15:27:19 <serverascode> I don't see how you could separate them when talking about OpenStack
15:27:56 <ad_rien_> maybe we can clarify that point
15:28:04 <ad_rien_> because OpenStack can be just a VIM actually
15:28:24 <ad_rien_> at least it can be seen as a VIM (from the ETSI definition), can't it?
15:28:26 <jamemcc> As a proposal: multi-cloud means 2 different instances of OpenStack - basically to satisfy the interoperability of 2 NFVis. If you follow this, at minimum 2 different operators could get their VNFs to work together.
15:29:04 <serverascode> ad_rien_ I agree, but what else can openstack control as a vim? vcenter?
15:29:30 <serverascode> jamemcc +1 on multi-cloud = 2 different instances of openstack
15:29:51 <ad_rien_> I just read (once again) the definitions in the document that was linked during the last meeting:
15:30:00 <ad_rien_> 1 NFVI: Totality of all hardware and software components that build up the environment in which VNFs are deployed
15:30:01 <ad_rien_> 2 VIM: Functional block that is responsible for controlling and managing the NFVI compute, storage and network resources, usually within one operator’s Infrastructure Domain (e.g. NFVI-PoP)
15:30:11 <ad_rien_> for me the difference is not clear.
15:31:24 <serverascode> I've always just thought of the VIM as things like the API endpoints and the scheduler, authentication
15:31:48 <serverascode> and NFVi as the parts that provide network, compute, and storage
15:31:52 <GeraldK> NFVI is kind of the DC, which consists of servers, hypervisors, the operating systems, VMs and network resources
15:32:21 <ad_rien_> GeraldK:  and VIM is ?
15:33:13 <GeraldK> the entity that is managing the resources in the NFVI
15:33:58 <GeraldK> but I see that OpenStack may not easily be mapped to just the VIM
15:34:26 <ad_rien_> maybe we should clarify that in a pad (I'm sorry, but for me the distinction between the two concepts is not clear)
15:34:45 <ad_rien_> regarding the remark "is an NFVi a multi-cloud?": I would say it depends on how the different openstack instances collaborate.
15:35:06 <serverascode> I'm trying to capture some of it, we can clean it up later, but it is important for all of us to have similar definitions :) so this is good
15:35:29 <ad_rien_> There are different approaches. Either there is a middleware (a broker/orchestrator) or you can try to make collaborations directly
15:35:51 <ad_rien_> (see the multi-site deployment section in the OpenStack documentation)
15:36:33 <serverascode> can you link me to the doc?
15:37:14 <ad_rien_> #link http://docs.openstack.org/arch-design/multi-site.html
15:37:50 <ad_rien_> So the question is: should we investigate the multi-operator scenario for the basic use-case?
15:38:29 <ad_rien_> (in reference to jamemcc's remark)
15:38:44 <ad_rien_> which means two keystone spaces etc…
15:39:27 <ad_rien_> if you look directly at #link http://docs.openstack.org/arch-design/multi-site-architecture.html
15:39:35 <serverascode> ok, so that doc, to me, is about openstack multi-region
15:39:36 <ad_rien_> you can see another reference architecture.
15:40:17 <ad_rien_> serverascode:  yes but the difference is not so big from my point of view, is it?
15:40:40 <serverascode> multi-region is a totally valid architecture, but it is quite different from a multi-openstack
15:41:06 <serverascode> well, maybe not that different; the major difference is you share a single keystone (perhaps horizon, perhaps glance) database across regions
15:41:17 <ad_rien_> …. hmmm, as I said, it depends on how you set up the collaboration between your different openstack instances
15:41:24 <serverascode> I agree
15:41:43 <serverascode> but to be honest, multi-region is not that common, though it is very cool
15:41:55 <ad_rien_> so first architecture (very basic): one openstack (i.e. horizon, keystone, nova, neutron) and only compute nodes on remote locations
15:42:27 <ad_rien_> second architecture: the one presented in the picture; that is, horizon, keystone and maybe glance are shared, and the other services are deployed on the remote locations
15:42:30 <serverascode> yes, though it is also not common (yet) to put compute nodes in remote locations
15:42:38 <ad_rien_> third ....
15:42:41 <serverascode> typically they are all in the same dc
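For the second architecture, the regions are just distinct endpoint sets registered in the one shared keystone; a sketch using the standard openstack CLI, with invented region names and URLs:

    openstack region create region-one
    openstack region create region-two
    # each region gets its own compute endpoint, but both rows live in the same keystone database
    openstack endpoint create --region region-one compute public http://nova.r1.example:8774/v2.1
    openstack endpoint create --region region-two compute public http://nova.r2.example:8774/v2.1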
15:43:23 <ad_rien_> for a telco (at least for BT), it may make sense to keep the control plane in one location and distribute only the compute that will host the VM at the edge
15:43:48 <serverascode> for sure, you just can't really do that with openstack right now as far as I know
15:43:56 <ad_rien_> (s/edge/on the customer equipment/)
15:43:57 <ad_rien_> why ?
15:44:18 <ad_rien_> What prevents you from doing that?
15:44:28 <ad_rien_> that is an interesting question
15:44:40 <ad_rien_> that can be the minimal architecture?
15:45:08 <ad_rien_> (don't get me wrong, I didn't say we should do that, I'm just trying to put some material into the discussion)
15:45:10 <serverascode> I've heard that typically called "fog" or "edge" computing. there is a Massively Distributed Clouds working group that is working on that use case
15:45:27 <ad_rien_> yes that is indeed one direction we would like to investigate
15:45:30 <serverascode> it's not impossible, just that openstack wasn't quite designed for that
15:45:42 <serverascode> for example nova-compute has to talk to the rabbit cluster
15:46:06 <jamemcc> If we go with the official OpenStack multi-site reference architecture then it really just leaves it up to us to minimize it down to what it takes to support NFV. I kind of like letting others define the basic OpenStack and we just use it.
15:46:06 <serverascode> neutron also becomes a bit of an issue that way too
15:46:48 <ad_rien_> actually we would like to answer the question OpenStack core developers asked the Tricircle team when they tried to join the big tent. According to the OpenStack core folks, it seems that you can do multi-site scenarios with the vanilla code.
15:47:44 <serverascode> #link https://wiki.openstack.org/wiki/Massively_Distributed_Clouds
15:47:47 <ad_rien_> serverascode: there is nothing special about deploying rabbitMQ on top of a WAN; we already tried it at Inria (see #link https://hal.inria.fr/hal-01320235)
15:48:05 <ad_rien_> regarding neutron, it might be an issue, as we have never played with it.
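A sketch of the oslo.messaging settings such a WAN deployment would likely touch; the option names are standard oslo.messaging ones, the values and hostnames are illustrative guesses:

    # nova.conf / neutron.conf on WAN-attached nodes (sketch)
    [DEFAULT]
    # several central rabbit nodes can be listed in one transport_url
    transport_url = rabbit://user:pass@rmq1.central.example:5672,user:pass@rmq2.central.example:5672/
    [oslo_messaging_rabbit]
    # tolerate higher latency before declaring a peer dead
    heartbeat_timeout_threshold = 120
    rabbit_retry_interval = 2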
15:48:25 <ad_rien_> jamemcc:  not sure I'm following you
15:48:36 <ad_rien_> can you clarify/elaborate your point please
15:49:38 <serverascode> I think multi-region is great, but I also think telecoms will tie together multiple separate openstack clouds using a higher level system (eg the MANO tier) or, as you mentioned, something like Tricircle
15:49:39 <jamemcc> In the multi-site-architecture picture the top component is a load balancer - I'm not sure that applies to our reference architecture - I understand it for a multi-site deployment where the 2 sites in essence back each other up - but for our purposes here the 2 sites are not duplicates but exist to serve different VNFs
15:49:42 <ad_rien_> serverascode: but you are right, rabbitMQ is definitely not the right choice for production systems, but we need to come up with strong arguments and performance evaluations to show that to the core devs.
15:50:49 <ad_rien_> serverascode: the main issue with an approach built on top of distinct OpenStacks is that you have to reify all openstack capabilities/features at the higher level.
15:51:31 <ad_rien_> In a way you have to implement OpenStack features on top of OpenStack … which means that you will use only basic features of OpenStack, since all advanced mechanisms will have to be implemented at the higher level.
15:52:01 <ad_rien_> that is one of the reasons Tricircle was not accepted into the big tent this summer
15:52:20 <serverascode> yeah, that makes sense
15:52:37 <serverascode> I guess I was hoping to avoid all this complexity for now, and just do a single openstack site :)
15:52:48 <serverascode> for our initial project anyways :)
15:52:54 <ad_rien_> finally they split the Tricircle code in two parts, and now Tricircle just focuses on providing networking automation across Neutron in multi-region OpenStack cloud deployments.
15:53:10 <serverascode> but, it does sound like some kind of multi-site/region/cloud is important to you and jamemcc
15:53:32 <serverascode> (time check: about 5 min left)
15:53:47 <ad_rien_> yes I fully agree. The question of interoperability between operators is crucial
15:54:11 <ad_rien_> I'm just saying that there are two ways to address these multi-site/multi-cloud/multi-* scenarios
15:54:19 <ad_rien_> I like to make a comparison with TCP/IP
15:54:35 <ad_rien_> every operator uses TCP/IP to exchange packets and then they have peering agreements
15:55:02 <serverascode> jamemcc do you also feel multi-* is important for this project?
15:55:03 <ad_rien_> if we consider that OpenStack is the default VIM/NFVi, then why should we reimplement a stack above it to ensure interoperability?
15:55:13 <jamemcc> As another observation about the large Telco use case of the multi-site-architecture: instead of just 1 centralized Horizon/Keystone/Swift, you may choose to replicate Keystone and the other common configurations. To ad_rien_'s point - basically above OpenStack. Since that's more of a big Telco thing though - seems to me this multi-site with shared Keystone is close enough.
15:55:19 <ad_rien_> why not investigate how OpenStack instances can natively cooperate?
15:56:08 <ad_rien_> jamemcc: I agree (if I'm understanding correctly).
15:56:17 <ad_rien_> I didn't say that there should be one keystone
15:56:28 <serverascode> ok, cool, so I think we have some consensus here regarding multi-* and in fact specifically multi-region
15:56:32 <ad_rien_> you can also envision having a keystone that is fully distributed across different locations
15:56:46 <ad_rien_> using cassandra with replication for instance
15:56:52 <serverascode> one keystone database
15:57:11 <ad_rien_> how can we move forward ?
15:57:13 <serverascode> shared across locations (which is done with MySQL Galera at this time)
15:57:28 <ad_rien_> Should we try to synthesise our ideas on one pad?
15:57:41 <ad_rien_> galera is ok for two or three sites
15:57:48 <ad_rien_> but it does not scale beyond that
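A sketch of that shared keystone database as a Galera cluster, one DB node per site; the wsrep options are standard Galera ones, the hostnames are invented:

    # my.cnf on each site's database node (sketch)
    [mysqld]
    wsrep_provider = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name = keystone_cluster
    # synchronous replication across sites is why this tops out at a few locations
    wsrep_cluster_address = gcomm://db.site1.example,db.site2.example,db.site3.example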
15:57:50 <serverascode> yes for sure, we need to come up with some definitions and agree on them
15:58:42 <serverascode> please add your thoughts to the etherpad and next meeting we will move forward
15:58:48 <serverascode> (1 min left)
15:59:04 <ad_rien_> jamemcc: if you are available we can continue to discuss privately
15:59:11 <ad_rien_> (the same goes for serverascode ;) )
15:59:22 <ad_rien_> Actually I would really appreciate clarifying that
15:59:57 <serverascode> I've got a meeting unfortunately, but please feel free to use the #openstack-nfv channel
16:00:03 <ad_rien_> just to be sure that I do not pollute the discussion
16:00:07 <serverascode> I'm going to have to end the meeting!
16:00:18 <ad_rien_> jamemcc: are you available?
16:00:22 <serverascode> thanks everyone! good discussion :)
16:00:24 <jamemcc> Sure ad_rien_, feel free to message me at jamemcc@gmail.com
16:00:25 <ad_rien_> I will switch to #openstack-nfv?
16:00:41 <jamemcc> ok
16:00:42 <ad_rien_> thanks serverascode
16:00:42 <serverascode> #endmeeting