09:00:12 <oanson> #startmeeting Dragonflow
09:00:13 <openstack> Meeting started Mon Dec 19 09:00:12 2016 UTC and is due to finish in 60 minutes.  The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:16 <openstack> The meeting name has been set to 'dragonflow'
09:00:19 <oanson> Hello.
09:00:25 <lihi> Hi
09:00:27 <hujie> Hello
09:00:28 <oanson> Welcome to this week's Dragonflow Weekly!
09:00:28 <yuval> Hey
09:00:48 <oanson> We'll wait a second for others to join.
09:00:50 <yuval> ¯\_(ツ)_/¯
09:01:11 <dimak> Good morning
09:01:31 <oanson> yuval, a giant in the playground, no doubt!
09:01:55 <evrardjp> o/
09:01:56 <hujie> play basketball??
09:02:11 <oanson> I thought more along the lines of Order of the Stick, but that works too
09:02:15 <oanson> evrardjp, hi!
09:02:22 <yuli_s> hello
09:02:23 <oanson> All right. Let's start.
09:02:30 <oanson> #topic Ocata Roadmap
09:02:37 <irenab> hi
09:02:49 <oanson> irenab, rajivk, hi.
09:03:08 <rajivk> hi
09:03:13 <oanson> #info lihi hujie yuli_s yuval dimak evrardjp irenab rajivk in meeting
09:03:31 <oanson> IPv6 - lihi uploaded an initial advertiser implementation.
09:03:32 * xiaohhui waves his hand
09:03:46 <oanson> #info xiaohhui also in meeting
09:04:26 <oanson> lihi, is it ready for review? Do you want some comments on it? Or do you want more time to stabilise it?
09:05:58 <lihi> I still need to stabilize the tests. They fail differently in different envs. However, comments would be nice :)
09:06:08 <oanson> Great
09:06:09 <oanson> #link Add IPv6 ND Neighbor Advertiser application https://review.openstack.org/#/c/412208/
09:06:31 <oanson> SFC - dimak, I guess most of your patches will wait for the NB refactor?
09:06:39 <dimak> Yes
09:06:41 <oanson> Anything you'd like to discuss?
09:06:45 <dimak> Don't have much to show for now
09:07:04 <dimak> Mostly NB refactor, but that's for later :)
09:07:10 <oanson> Yes. Next time don't try to rewrite the project in ten patches, and we'll be good :)
09:07:31 <oanson> Chassis health/service health reporting
09:07:56 <oanson> rajivk 's spec is here: https://review.openstack.org/#/c/402395/
09:08:03 <oanson> #link Spec to support service status reporting https://review.openstack.org/#/c/402395/
09:08:28 <oanson> I see it has a few comments, but if I recall correctly it is fairly close to being ready
09:08:28 <rajivk> Sorry, no status update.
09:08:35 <oanson> Anything that has to be discussed?
09:08:39 <rajivk> yes
09:09:10 <rajivk> How should I handle service disable and enable?
09:09:45 <rajivk> Is it OK if I put the service status down?
09:09:59 <rajivk> Or do I need to remove the corresponding app from the pipeline?
09:10:14 <oanson> rajivk, putting service status down is enough
09:10:34 <oanson> This feature is just monitoring and reporting. Acting upon this information can be dealt with later
09:10:42 <rajivk> ok
09:10:58 <rajivk> I will update the patch soon :)
09:11:05 <rajivk> that's all from my side.
09:11:07 <irenab> oanson: question about the last point
09:11:14 <oanson> Since there are some open questions on how to handle actions: Each application to its own, or a centralised way? How to do this in a distributed manner (e.g. taking over control of other nodes), etc.
09:11:19 <oanson> irenab, shoot
09:11:44 <irenab> What is the way to manage services? Is it only the conf file for now?
09:12:16 <irenab> I mean enable/disable
09:12:39 <rajivk> sorry, can you please elaborate more?
09:13:33 <irenab> rajivk: you asked about taking action. My question is how this can be done, is there some API to do it?
09:14:05 <rajivk> I will add a command in df-db
09:14:20 <rajivk> which can enable or disable a service.
09:14:41 <irenab> rajivk: got it, thanks. So it's for the follow-up after monitoring and reporting is done
09:15:14 <rajivk> yes
09:15:35 <rajivk> Later on we can extend it to perform some operations based on the status of the service,
09:15:47 <rajivk> as oanson pointed out earlier.
09:15:50 <oanson> Sounds good to me. I hope that will be made easier with the NB refactor
09:15:57 <irenab> rajivk: thanks for clarification
09:16:15 <oanson> Anything else for chassis/service health?
09:16:26 <rajivk> no
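(For illustration only: a rough sketch of how a df-db command could flip a service's status in the NB database, along the lines rajivk describes above. The nb_api calls and model fields are assumptions, not the actual Dragonflow API.)

    def set_service_enabled(nb_api, chassis, service_name, enabled):
        # Hypothetical lookup/update helpers; names are placeholders.
        service = nb_api.get_service(chassis=chassis, name=service_name)
        service.disabled = not enabled
        # Reporting only: the app stays in the pipeline, as agreed above;
        # acting on the reported status is left for a later phase.
        nb_api.update_service(service)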
09:16:58 <oanson> Great! TAPaaS
09:17:23 <oanson> yuli_s, any updates?
09:17:29 <yuli_s> sure
09:17:52 <yuli_s> tap as a service
09:17:59 <yuli_s> https://review.openstack.org/#/c/256210
09:17:59 <oanson> I see on the patch that the tunnel ID issue requires discussion
09:18:25 <yuli_s> oops,
09:18:26 <yuli_s> this one
09:18:28 <yuli_s> https://review.openstack.org/#/c/396307/
09:18:53 <xiaohhui> Yeah, Neutron ML2 doesn't expose an API to allocate a segment_id, so how could TaaS get a segment_id from Neutron?
09:19:08 <yuli_s> Hong Hui Xiao advised to use one segment_id for all tapped packets
09:19:49 <yuli_s> and I wanted to bring this to discussion
09:19:52 <oanson> If there are two TapService ports on a remote compute node, how will that compute node know which tapped packets go where?
09:20:03 <irenab> yuli_s: Can this be a reserved range of seg-ids for TaaS?
09:20:07 <xiaohhui> No, I suggest using a specific tunnel port whose local_ip is something special for TaaS
09:20:24 <yuli_s> irenab, yes, by marking original packet src
09:20:38 <oanson> xiaohhui, and then recognise it's TAPaaS by tunnel source IP?
09:21:06 <xiaohhui> yes, so that we don't need to depend on neutron to allocate segment_id
09:21:06 <yuli_s> From a security point of view, it is better to have at least one segment for each customer
09:21:42 <xiaohhui> you can have one segment for one customer,
09:21:44 <oanson> yuli_s, true. But that might not be possible
09:21:44 <yuli_s> to minimize the chance of packet interference in case of a bug
09:21:59 <xiaohhui> just the src of the tunnel is special
09:22:03 <yuli_s> yes,
09:22:09 <yuli_s> for me this can be done
09:22:22 <oanson> If we don't have a way to get segment IDs, that won't work. And a bug may cause the same issue with different segment IDs too.
09:22:42 <xiaohhui> I just can't think about an easy way to get segment_id from neutron.
09:22:46 <yuli_s> it complicates the system a bit
09:23:05 <yuli_s> we can assign a segment id for each new tenant
09:23:13 <yuli_s> in a kind of global way
09:23:26 <oanson> xiaohhui, we can try asking them to expose an API for it.
09:23:27 <oanson> Until then, I think segmentation by src IP would work.
09:23:46 <oanson> yuli_s, how would you assign the segment id?
09:24:01 <oanson> It must not collide with anything Neutron will assign
09:24:20 <yuli_s> currently we have no table for tenants
09:24:55 <oanson> yuli_s, I don't see how a table for tenants would help
09:25:18 <xiaohhui> wait, neutron now has api for segments, which might be something we want.
09:25:50 <oanson> xiaohhui, can you post a link?
09:25:55 <yuli_s> we have the allocate_tenant_segment() call
09:26:04 <xiaohhui> one sec
09:26:32 <yuli_s> in neutron/plugins/ml2/drivers/type_tunnel.py
09:26:46 <irenab> do we need neutron to assign seg_id for taas? Maybe taas service plugin/driver can manage it
09:27:59 <yuli_s> yes, that is what was suggested, or one for each tenant
09:28:00 <oanson> irenab, you mean the TaaS northbound?
09:28:10 <irenab> yes
09:28:13 <xiaohhui> https://review.openstack.org/#/c/317358/
09:28:34 <xiaohhui> I added it in newton for routed network
09:28:51 <oanson> xiaohhui, very cool! :)
09:29:29 <oanson> yuli_s, what does the TaaS northbound say about segment IDs?
09:29:32 <xiaohhui> If we are going to use segment_id from neutron, then we can use the segment api
09:29:52 <oanson> I think that would be better than piggy-backing it on the src IP.
09:30:09 <oanson> If we can put it in the TaaS northbound, that would be best.
09:30:13 <oanson> yuli_s, what do you think?
09:30:50 <yuli_s> I need to study this.
09:31:09 <yuli_s> imho this can be done
09:31:17 <oanson> yuli_s, that's great.
09:31:40 <oanson> Please also consult yamamoto about whether it's possible to add it to the TaaS northbound
09:31:57 <yuli_s> ok
09:31:59 <oanson> Otherwise, we will have to do it in our northbound driver
09:32:37 <oanson> Whichever solution you think is best.
09:32:51 <oanson> Please also update the spec
09:33:46 <yuli_s> ok
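(Purely illustrative: one way a TaaS service plugin/driver could manage its own reserved range of segment IDs, per irenab's suggestion, instead of going through Neutron's ML2 allocator. The class name, range boundaries, and persistence are placeholders.)

    class TapSegmentAllocator(object):
        # The reserved range must not collide with anything Neutron's
        # tunnel type drivers can hand out.
        def __init__(self, first=60000, last=65000):
            self._free = set(range(first, last + 1))
            self._by_tap_service = {}

        def allocate(self, tap_service_id):
            segment_id = self._free.pop()
            self._by_tap_service[tap_service_id] = segment_id
            return segment_id

        def release(self, tap_service_id):
            self._free.add(self._by_tap_service.pop(tap_service_id))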
09:33:55 <oanson> Anonymous sNAT
09:34:02 <oanson> ishafran, any updates?
09:34:11 <ishafran> I put latest spec: https://review.openstack.org/#/c/397992/12
09:34:28 <ishafran> I would like to get functional comments/rejects if any
09:34:37 <oanson> I see in that spec that you still have per-tenant hidden networks in br-int
09:34:50 <ishafran> There are 3 possible solutions there, and I am going to stick to the first one
09:34:56 <oanson> I thought this requirement was removed due to the information in the metadata fields (reg6 and metadata)
09:35:30 <ishafran> I need cross-tenant network if double NAT is used
09:35:48 <oanson> My understanding was to do [1] first, and then [3]
09:36:04 <ishafran> I am OK with it
09:36:23 <oanson> Great.
09:36:43 <oanson> Please update the spec and diagram, and I think we can start pushing this along :)
09:36:56 <ishafran> OK
09:37:19 <oanson> Anything open in Distributed sNAT?
09:37:42 <ishafran> I will use a local file to store tenant info, at least in the first phase
09:37:50 <oanson> Sounds good
09:38:01 <ishafran> fine
09:38:14 <oanson> In the second phase (with an external IP per (compute node, tenant) pair) you will need an IP distribution mechanism
09:38:26 <oanson> To distribute one IP pool across multiple compute nodes
09:38:43 <oanson> But as we said, that's the second phase. It can wait
09:38:49 <ishafran> Not sure, but of course a database is better
09:39:11 <oanson> If it's local and constant, config file is a good solution
09:39:23 <oanson> (constant in the lifetime of the controller, that is)
09:39:42 <ishafran> agree
09:39:57 <oanson> Great!
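(A minimal sketch, assuming oslo.config is used as elsewhere in the project: reading a per-node SNAT address from the local config file for the first phase. The option and group names are made up for illustration.)

    from oslo_config import cfg

    df_snat_opts = [
        cfg.StrOpt('external_host_ip',
                   help='External IP this compute node uses for SNAT'),
    ]

    cfg.CONF.register_opts(df_snat_opts, group='df_snat_app')
    # cfg.CONF.df_snat_app.external_host_ip stays constant for the
    # lifetime of the controller, which is why a config file is enough
    # for the first phase.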
09:40:08 <oanson> NorthBound API refactor
09:40:23 <oanson> #link North Bound Code Refactor https://review.openstack.org/#/c/410298/
09:40:54 <oanson> I don't know if you had a chance to go over the spec
09:40:57 <dimak> I read it and looks good, have some comments
09:41:10 <oanson> The version I put up last night (IST) was a bit buggy, and I only updated it a few hours ago
09:41:16 <oanson> dimak, shoot
09:41:57 <dimak> I saw that you defined events per model
09:42:04 <dimak> which I think is a great idea
09:42:14 <dimak> They serve to replace https://github.com/openstack/dragonflow/blob/master/dragonflow/db/api_nb.py#L199 ?
09:42:34 <dimak> With some code in local controller that registers its functions to specific events?
09:42:46 <oanson> Yes
09:42:56 <oanson> There is plan for a southbound refactor as well.
09:43:10 <dimak> Ok
09:43:14 <oanson> It will allow applications to register to these 'dynamic' events
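(A rough sketch of the per-model event registration being discussed; the class and method names are assumptions, not the final design from the spec.)

    class ModelEventMixin(object):
        _callbacks = None

        @classmethod
        def register(cls, event, callback):
            # Each model class keeps its own callback registry.
            cls._callbacks = cls._callbacks or {}
            cls._callbacks.setdefault(event, []).append(callback)

        @classmethod
        def emit(cls, event, instance):
            for callback in (cls._callbacks or {}).get(event, []):
                callback(instance)

    # An application could then subscribe to e.g. a 'created' event on a
    # specific model, instead of the hard-coded notification methods in
    # api_nb.py.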
09:43:39 <dimak> Another thing with crud example
09:43:40 <oanson> Hopefully, we will also have other cool stuff, like an application manager that registers and organises applications
09:44:09 <dimak> We're using **kwargs to construct objects inside the CRUD helper
09:44:15 <dimak> same with update
09:44:31 <dimak> why not have @classmethod from_dict() or something?
09:44:37 <dimak> on the model
09:44:39 <oanson> Yes. Currently we support 'partial' updates, meaning we can pass only the columns we update
09:44:57 <dimak> and update() for instances
09:45:23 <dimak> This reduces the cases for which we need to define a custom CRUD helper
09:45:24 <oanson> dimak, the new structure will allow it, so there's no problem adding it
09:45:48 <dimak> and the crud helper will operate on objects directly rather than dictionaries
09:46:24 <oanson> dimak, I think being able to update a single column is a good thing
09:46:55 <oanson> e.g. the Neutron API lets you update a single column if you provide the ID of the object
09:47:10 <oanson> (I think it's a REST thing, but I don't remember the details enough to commit)
09:47:21 <dimak> Yes, I wasn't talking about partial vs full updates though :)
09:48:00 <oanson> In any case, with a from_dict/to_dict method, we can call: nbapi.get_resource(<resource>).update(**<instance>.to_dict())
09:48:40 <oanson> dimak, if I need to pass an object rather than column=value fields (or in addition), we're going to have strange overloading magic to support both methods
09:48:42 <dimak> CRUD can handle the instance rather than the dict
09:48:46 <oanson> we*
09:49:14 <oanson> but then I need to know if the instance is partial or complete
09:49:24 <dimak> ummm
09:49:33 <oanson> for a syntactic improvement that can be done with '**'
09:49:52 <oanson> Actually, with '**'..to_dict(), but that still is not much
09:49:58 <dimak> We still have to retrieve the full object for an update..
09:50:14 <dimak> Well I'm not sure it can be done in a nice way, I'll think about it
09:50:36 <oanson> The retrieval is done very close to the database layer. The API layer doesn't see or know about it
09:50:46 <dimak> Anyway, I wrote both points as comments, I'll post once I finish going over the spec
09:50:50 <oanson> If some day we will be able to take advantage of that, I think that would be great
09:50:57 <oanson> Sure
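(A minimal sketch contrasting the two update styles discussed above: partial updates passed as keyword arguments versus a full object round-tripped through to_dict()/from_dict(). The model, field, and CRUD helper names are placeholders.)

    class LogicalPort(object):
        fields = ('id', 'name', 'admin_state')

        def __init__(self, **kwargs):
            for field in self.fields:
                setattr(self, field, kwargs.get(field))

        @classmethod
        def from_dict(cls, value):
            return cls(**value)

        def to_dict(self):
            return {field: getattr(self, field) for field in self.fields}

    # Partial update: only the changed column is sent, plus the object ID.
    #     nb_api.get_resource('lport').update(id='port-1', name='renamed')
    # Full-object update: the CRUD helper receives the whole instance.
    #     nb_api.get_resource('lport').update(**instance.to_dict())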
09:51:15 <oanson> Any other comments on the NB refactor spec?
09:51:29 <dimak> Overall, it seems like a huge step in the right direction
09:52:10 <oanson> I hope so :)
09:52:19 <oanson> hujie, would you like to take it?
09:52:38 <oanson> Since you started doing something similar?
09:52:41 <hujie> I'll take part in it :)
09:53:06 <oanson> I broke it down into byte-sized actions, so we can split the work
09:53:20 <hujie> great!
09:53:40 <oanson> Registration was already started and merged by xiaohhui
09:54:04 <oanson> The Base CRUD helper was written by dimak in another patch - dimak, would you mind isolating it and uploading it for review?
09:54:11 <dimak> sure
09:54:31 <oanson> The next two steps are Db Store implementation, and model constructor.
09:54:32 <dimak> I'll revise it according to the ideas in the spec
09:54:43 <oanson> Great! Thanks!
09:55:34 <oanson> Once these two are done, models can be moved 1 by 1 (rather than all of them at once)
09:56:10 <oanson> So hujie, do you want to take the Db Store implementation? Or the model construction?
09:56:44 <hujie> ok, maybe I need time to think about it
09:56:56 <oanson> Sure. Let me know, and I'll take the other one
09:57:09 <hujie> sure
09:57:13 <oanson> Thanks!
09:57:18 <hujie> :)
09:57:21 <oanson> Anything else for Nb API refactor?
09:57:41 <oanson> #topic Open Discussion
09:57:50 <oanson> We have three minutes. Anything anyone would like to raise?
09:58:37 <oanson> All right. Thanks everyone for the great work!
09:58:59 <oanson> Enjoy your dinner/lunch/breakfast/tea
09:59:09 <dimak> Gbye!
09:59:14 <oanson> #endmeeting