14:00:06 <mlavalle> #startmeeting neutron_drivers
14:00:07 <openstack> Meeting started Fri Oct 27 14:00:06 2017 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 <openstack> The meeting name has been set to 'neutron_drivers'
14:00:15 <yamamoto> hi
14:00:24 <slaweq_> hello
14:00:25 <mlavalle> hi yamamoto
14:00:38 <mlavalle> hi slaweq_
14:01:51 <mlavalle> amotoki: ping
14:02:03 <amotoki> hi
14:02:08 <mlavalle> I don't see ihrachys or armax around
14:02:16 <mlavalle> let's give them a couple of minutes
14:02:46 <mlavalle> amotoki, yamamoto: you going to Sydney?
14:02:54 <mlavalle> slaweq_: you?
14:02:54 <yamamoto> no
14:02:56 <amotoki> yes, i will be there
14:02:59 <slaweq_> mlavalle: no
14:04:56 <amotoki> it seems most folks focus on PTGs now
14:05:29 <mlavalle> yeah, if you have to choose between the 2 and you are a dev, the PTG makes more sense
14:05:42 <mlavalle> ok, let's get going
14:06:17 <mlavalle> RFEs to review today are: https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
14:06:43 <mlavalle> First one is https://bugs.launchpad.net/neutron/+bug/1690425
14:06:44 <openstack> Launchpad bug 1690425 in neutron "[RFE] neutron cells aware" [Wishlist,Triaged]
14:07:42 <mlavalle> For this one I did a little bit of research in terms of the Tricircle option
14:08:28 <mlavalle> It hasn't been deployed in production yet. But it is based on a non-open-source predecessor, Cascading
14:08:33 <amotoki> me neither, but I can say nova and neutron are the two main projects which heavily use MQ and DB
14:09:16 <mlavalle> which is in production in 5 public clouds, including Huawei's
14:09:41 <amotoki> I think we can talk about tricircle as a separate topic as it provides a different focus. AFAIK, it is an API proxy in front of multiple openstack clouds, right?
14:10:01 <mlavalle> Joe Huang of Huawei is going to get me more data in terms of scale achieved in those deployments
14:11:00 <mlavalle> yes, although there is an effort to align it with Nova Cells. I think one of the links I posted refers to that
14:11:25 <mlavalle> https://docs.openstack.org/tricircle/latest/install/installation-guide.html#work-with-nova-cell-v2-experiment
14:11:32 <mlavalle> it is experimental as well
14:13:34 <amotoki> while it sounds interesting, I wonder what a good start would be.
14:14:10 <mlavalle> amotoki: do you have a suggestion for a good start in mind?
14:14:19 <amotoki> IMHO we can start with cells support in one OpenStack cloud. it means we can start from what nova does.
14:14:45 <mlavalle> yeah, I agree
14:15:13 <amotoki> IIUC, nova faces some difficulties from spreading out its db into multiple parts
14:15:51 <mlavalle> yes, that is the reason I also posted a pointer to a Cells V2 document
14:16:08 <mlavalle> https://docs.openstack.org/nova/pike/user/cellsv2_layout.html
14:16:36 <mlavalle> here they have the concept of superconductor, for the entire deployment, and conductor, for each cell
14:16:52 <mlavalle> and also API DB (the whole deployment) and cells DBs
14:17:26 <mlavalle> There is a nice graphic in that doc that illustrates the entire thing
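The Cells V2 split described above (one deployment-wide API DB plus a separate DB per cell) can be sketched in plain Python. This is purely illustrative of the layout under discussion; all names and connection URLs are hypothetical, not actual nova or neutron code:

```python
# Illustrative sketch of the Cells V2 data layout: deployment-wide
# data lives in a single "API" database, while per-cell data lives
# in a database owned by that cell's conductor. A tiny router picks
# the right connection string. All URLs below are made up.

API_DB_URL = "mysql+pymysql://nova@global-db/nova_api"

CELL_DB_URLS = {
    "cell1": "mysql+pymysql://nova@cell1-db/nova_cell1",
    "cell2": "mysql+pymysql://nova@cell2-db/nova_cell2",
}

def db_url_for(cell_name=None):
    """Return the API DB for deployment-wide data, or a cell's DB."""
    if cell_name is None:
        return API_DB_URL
    return CELL_DB_URLS[cell_name]
```

The same routing idea would apply to per-cell message queues, which is why the split matters if the MQ (rather than the DB) is the bottleneck.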
14:18:29 <amotoki> perhaps we can start to explore cells support by studying how we can support the same model as nova.
14:18:53 <amotoki> for example, how do the parent MQ and DB affect neutron?
14:19:01 <mlavalle> meaning an API DB and cells DB?
14:19:07 <amotoki> exactly
14:19:44 <mlavalle> armax made the point last time we discussed this that the DB is not a problem in our case
14:20:06 <mlavalle> that our bottleneck is the messaging bus
14:20:27 <amotoki> good point
14:20:30 <mlavalle> that is why https://www.youtube.com/watch?v=R0fwHr8XC1I
14:20:38 <mlavalle> is an alternative
14:21:29 <mlavalle> So I guess to move ahead the first step would be to decide whether we agree that in our case the DB is not the problem
14:22:36 <mlavalle> if the answer to that is yes, then the messaging alternative pointed out ^^^ might be a good approach
14:22:42 <amotoki> i think a balance of load pressures from nova and neutron is one of the key points.
14:23:07 <amotoki> is it better to ask operators about their performance experiences?
14:23:43 <mlavalle> yeah. Actually that is a conversation we might want to try to have in Sydney
14:24:22 <amotoki> the number of compute nodes is an important factor
14:25:05 <amotoki> personally i have no good data on this. our cloud with 500 nodes (one region) works well so far.
14:25:34 <mlavalle> there is also a cells V2 session scheduled in Sydney: http://forumtopics.openstack.org/cfp/details/60
14:26:39 <mlavalle> ok, I will update the RFE with two next steps if you all agree: 1) seek feedback from operators
14:27:00 <mlavalle> 2) catch up with the Nova team in Sydney about the status of Cells V2
14:27:11 <amotoki> agree
14:27:13 <mlavalle> makes sense?
14:27:28 <mlavalle> yamamoto: what do you think?
14:27:30 <amotoki> yes
14:27:45 <yamamoto> sounds reasonable
14:27:54 <mlavalle> cool :-)
14:28:42 <mlavalle> Next one is https://bugs.launchpad.net/neutron/+bug/1692490
14:28:43 <openstack> Launchpad bug 1692490 in neutron "[RFE] Ability to migrate a non-Segment subnet to a Segment" [Wishlist,Triaged]
14:29:48 <mlavalle> The way I read this originally made me think it was not possible
14:30:34 <mlavalle> moving from an arbitrary number of segments in a network to associating segments to subnets
14:31:10 <amotoki> i am not sure this is a common case.
14:32:48 <mlavalle> but if all they want to do is to move from one segment / subnet to a situation where the segment is associated with the subnet, it is a pretty easy thing to do, isn't it?
14:33:00 <mlavalle> it is an update to the API
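If this really is just an API update, the request might look like the following hypothetical PUT body (assuming neutron allowed `segment_id` to be changed on an existing subnet, which is what the RFE is asking for; the UUID is a placeholder):

```
PUT /v2.0/subnets/{subnet-id}
{
  "subnet": {
    "segment_id": "<segment-uuid>"
  }
}
```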
14:33:27 <mlavalle> is that your reading yamamoto?
14:33:32 <yamamoto> yes
14:34:09 <amotoki> perhaps we first need to clarify what we assumed when network segment support was implemented.
14:34:55 <mlavalle> you mean clarify in the RFE?
14:35:07 <amotoki> yeah
14:35:20 <mlavalle> yeah, I was going to propose that
14:35:24 <amotoki> i wonder whether we can avoid the situation by the order of operations
14:35:41 <amotoki> or whether it happens easily
14:36:22 <amotoki> i haven't checked the detail of this RFE yet. I need to think more
14:36:47 <mlavalle> we can ask the submitter to clarify exactly what he wants to do and go from there
14:36:50 <mlavalle> makes sense?
14:36:54 <yamamoto> mlavalle: +1
14:36:56 <amotoki> yes
14:37:13 <mlavalle> ok, I will update the RFE with a request for clarification
14:37:22 <amotoki> clarifying the operation scenario will really help us
14:37:23 <mlavalle> feel free to add questions of your own
14:37:46 <mlavalle> amotoki: yeah, go ahead and add a question there if you want
14:37:53 <amotoki> sure
14:38:23 <mlavalle> cool
14:39:45 <mlavalle> Next one is https://bugs.launchpad.net/neutron/+bug/1705084
14:39:47 <openstack> Launchpad bug 1705084 in neutron "[RFE] Allow automatic sub-port configuration on the per-trunk basis in the trunk API" [Wishlist,Triaged]
14:42:59 <amotoki> regarding this, I tend to agree with yamamoto that this is not a job of neutron in general.
14:43:18 <mlavalle> yes I also lean towards that
14:43:43 <amotoki> do we support similar things via the metadata API? anyway this is also what the nova metadata API returns
14:44:51 <mlavalle> any thoughts yamamoto? should we decline this one?
14:45:56 <yamamoto> i think so. it's basically metadata which neutron itself doesn't use.
14:46:10 <mlavalle> amotoki: agree?
14:46:37 <amotoki> i agree to decline it
14:46:43 <mlavalle> me too
14:47:01 <yamamoto> if we need something like this, it should be designed to be more generic.
14:47:33 <amotoki> there are three ways to provide configurations to a guest: config driver, metadata API and information retrieved via API
14:48:03 <amotoki> if our API does not provide enough information, we need to improve it, but I don't think this is the case.
14:48:24 <amotoki> s/driver/drive/
14:49:16 <mlavalle> LOL
14:49:33 <mlavalle> ok, moving on
14:49:45 <mlavalle> Next one is https://bugs.launchpad.net/neutron/+bug/1705467
14:49:46 <openstack> Launchpad bug 1705467 in neutron "[RFE] project ID is not verified when creating neutron resources" [Wishlist,Triaged]
14:51:03 <mlavalle> yamamoto: do you know if that nova spec ever got implemented?
14:51:26 <yamamoto> it has been implemented.
14:51:43 <yamamoto> partially or fully, i don't know
14:52:34 <mlavalle> do you think we should follow suit?
14:53:56 <yamamoto> i'm not sure.  it depends on how often it needs to be executed.  ie. the load increase on keystone is accepted or not
14:54:17 <mlavalle> that's a good point
14:54:30 <amotoki> this is really tricky. I to agree to follow it in general as admin operations would be predictable
14:54:40 <amotoki> I tend to agree*
14:54:47 <yamamoto> in case of nova those operations are not frequently used i guess
14:55:00 <amotoki> yamamoto: yes
14:55:41 <amotoki> on the other hand, there are some cases where we create resources in GET operations. this is a corner case, like SG
14:55:46 <mlavalle> if load on Keystone is our concern, we could make it configurable
14:56:16 <amotoki> if it is limited to global admin, it is not a problem.
14:56:32 <amotoki> if it is allowed to domain admin, it might be a problem
14:56:38 <amotoki> as they are not cloud admin
14:56:54 <yamamoto> we need this only when the resource project-id is not same as the requester's, right?  i wonder how often it's the case.
14:57:07 <mlavalle> right
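yamamoto's point above (only validate when the resource's project-id differs from the requester's) could look like this minimal Python sketch. It is an assumption-laden illustration, not neutron code: `keystone_has_project` is a stand-in for a real keystoneclient lookup, and the project IDs are made up.

```python
# Sketch of conditional project-ID validation: skip the keystone
# round-trip in the common case where the requester is creating a
# resource in its own project, to limit extra load on keystone.

def keystone_has_project(project_id):
    # Stand-in for a real lookup such as keystoneclient's
    # projects.get(project_id); here just a fixed set.
    known = {"p-alpha", "p-beta"}
    return project_id in known

def validate_project_id(requester_project, resource_project):
    """Only hit keystone when the IDs differ (e.g. admin operations)."""
    if resource_project == requester_project:
        return True
    return keystone_has_project(resource_project)
```

Under this design, normal tenant operations add no keystone load at all; only admin-style requests on behalf of another project (or typos) trigger the extra lookup.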
14:57:55 <amotoki> one weird situation is a "typo"
14:58:17 <amotoki> assume a user issues a CLI like 'neutron auto-allocated-topology-show bbbbbbb'
14:58:31 <amotoki> if it causes a problem, we need to avoid it
14:58:52 <amotoki> there is something OSC/CLI can do on the CLI side though
14:59:51 <amotoki> i would like to think more about the cases where real problems happen.
15:00:02 <amotoki> more input and clarification would be helpful
15:00:21 <mlavalle> cool, would you post some comments there?
15:00:30 <mlavalle> and we pick up from here next week
15:00:34 <amotoki> as for the reported case, I can guard this on the OSC side
15:00:46 <mlavalle> please comment that in the RFE
15:00:54 <amotoki> sure
15:01:08 <mlavalle> yamamoto: you agree?
15:01:20 <yamamoto> yes
15:01:30 <mlavalle> ok, thanks for attending
15:01:38 <amotoki> thanks
15:01:42 <mlavalle> will continue next Thursday
15:01:44 <mlavalle> o/
15:01:48 <mlavalle> #endmeeting