09:00:07 #startmeeting dragonflow
09:00:08 Meeting started Mon Jul 18 09:00:07 2016 UTC and is due to finish in 60 minutes. The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:11 The meeting name has been set to 'dragonflow'
09:00:21 Hello everyone and welcome to our week's best time
09:00:34 it's the Dragonflow meeting
09:00:37 Hi
09:00:41 who is here for the meeting
09:00:41 Hi
09:00:46 hi
09:00:47 o/
09:00:56 \o
09:01:04 Just as a side note, WSOP day 6 starts today; the November Nine will be selected tomorrow
09:01:10 \o/
09:01:54 #info DuanKebo, oanson, oshidoshi, Shlomo_N in meeting
09:01:58 The winnings go to the OpenStack foundation?
09:02:13 oanson: nice google skills :)
09:02:22 hujie!!!!
09:02:22 Cheers
09:02:27 we waited just for you :)
09:02:37 #info hujie in the meeting as well, late, but in the meeting
09:02:53 #topic ML2
09:02:56 oh thank you :)
09:03:09 ok, so how was OpenStack Days China? anything to share?
09:03:43 The OpenStack unified API is cool. It also has Python and Ansible frontends.
09:04:07 DuanKebo: any open patches for the ML2 work? I saw your message regarding liuhaxia's patch and removed my -1. The patch is merged, but we also agreed some fixes are needed in future patches
09:04:10 for simplicity
09:04:35 oanson: what do you mean by "unified API"?
09:04:37 Are we planning on adding an ML2 gate job once the feature is stable?
09:04:42 Yes, the patch will be submitted later
09:05:05 DuanKebo: ok thanks
09:05:07 gsagie: That any OpenStack feature (networking, servers, storage) can be managed from a single command line function, with a naming convention
09:05:18 i.e. no confusion between remove and delete, etc.
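[Editor's sketch] The "unified API" point above is about one naming convention shared by all resources. A purely illustrative toy sketch of verb normalization (hypothetical names throughout; this is not shade's or openstackclient's actual code):

```python
# Toy illustration of the naming-convention idea discussed above:
# every resource gets the same canonical verbs, and synonyms such as
# "remove" are folded into a single name. All names here are
# hypothetical; this is NOT the real shade implementation.
CANONICAL = {"remove": "delete", "destroy": "delete", "show": "get"}

def normalize(verb):
    """Map synonym verbs to their canonical form."""
    return CANONICAL.get(verb, verb)

def method_name(resource, verb):
    """Build the uniform client method name, e.g. delete_server."""
    return "%s_%s" % (normalize(verb), resource)
```

With this scheme, `method_name("server", "remove")` and `method_name("server", "delete")` both resolve to the same `delete_server` call, which is the "no confusion between remove and delete" point.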
09:05:58 oanson: understood, sounds interesting
09:06:18 i think we will want to make ML2 the default installation in the gate
09:06:21 The Python API is done with a library called 'shade'
09:06:25 any objections?
09:06:30 +1
09:06:34 agree
09:06:37 gsagie, maybe keep both until we see it is stable
09:06:50 Is there a plan to fade out the core plugin configuration?
09:07:03 oanson: it's possible, i think the plan we agreed on was to slowly remove the core plugin
09:07:13 and stay only with the ML2. DuanKebo, do you also see it that way?
09:07:22 +1, but before that, we need to review and merge the l3 plugin
09:07:31 yes of course
09:07:36 review the l3 service plugin
09:08:03 okie, so any open issues for the ML2?
09:08:14 DuanKebo, has anyone tested this end-to-end?
09:08:21 Heshan will focus on updating this patch.
09:08:43 ok
09:08:48 link: https://review.openstack.org/#/c/316785/
09:08:51 @gal heshan currently
09:09:00 #link https://review.openstack.org/#/c/316785/
09:09:17 #action gsagie,oanson,nick-ma review L3 service plugin patch
09:09:26 Will do
09:09:27 anything open on that?
09:09:38 on ML2
09:09:51 #action gsagie add ML2 testing jobs to gate
09:10:16 yuanwei1: i welcome you with the dragon greeting!
09:10:21 for the plugin, we also submitted
09:10:34 a QoS driver for dragonflow
09:10:43 DuanKebo: QoS + ML2?
09:10:46 can someone help review it
09:10:47 for the ML2
09:11:08 Yes, it's a QoS driver
09:11:24 the default one uses rabbitmq
09:11:35 DuanKebo: we will take a look. is it changing the pipeline, or is it just using OVSDB?
09:11:40 to configure it on the ports
09:12:00 so you changed it to update and add a new table to the DB, right?
09:12:04 it uses ovsdb
09:12:14 need to make sure you add it to the other DBs
09:12:17 but the pipeline is also changed
09:12:25 some have a setup process to create the tables
09:12:45 DuanKebo: any link to the patch review board?
09:13:11 you mean the spec?
https://review.openstack.org/#/c/332662/
09:13:40 code: https://review.openstack.org/#/c/337497/
09:13:48 #link qos spec https://review.openstack.org/#/c/332662/
09:13:58 #link qos code https://review.openstack.org/#/c/337497/
09:14:05 ok, let's all review them and comment on them
09:14:16 DuanKebo: the spec has a -1 from you, btw..
09:14:45 ok, anything else on ML2?
09:14:59 ok, we will update the patch
09:15:12 hujie is in charge of it now
09:15:19 yes
09:15:56 ok, good job hujie, we will take a look this week
09:16:18 maybe also add a document to your patch describing how to enable QoS with Neutron
09:16:32 Or at least a link to the QoS API
09:16:35 so it will be easier to check (as I remember, you need to add some configuration to Neutron for it)
09:17:13 ok
09:17:19 i think we can add a link to the qos api
09:17:20 let's move to the next topic
09:17:41 #topic DB - Controller consistency
09:17:52 I think hujie was also working on this. hujie, any update?
09:18:12 https://review.openstack.org/#/c/336377/
09:18:21 please review this patch :)
09:18:21 #action hujie working on QoS patch
09:18:42 #link https://review.openstack.org/#/c/336377/ DB consistency patch
09:19:13 hujie: we will review. please mind that you have some comments from Li Ma
09:19:29 is there anything you would like to discuss?
09:19:36 or does everything seem closed?
09:20:10 I'll review the comments from Li Ma :)
09:20:17 okie
09:20:46 this patch is the db consistency logic on the local controller side
09:21:17 ok
09:21:26 #topic Bugs
09:21:37 anyone know of any critical open bugs?
09:21:43 is yuli_s here?
09:21:59 our famous bug deputy
09:22:34 Maybe he's out on patrol.
09:22:41 :)
09:22:44 probably
09:22:52 #topic Packaging
09:23:02 oanson: any update here
09:23:18 or is anyone else handling this?
09:23:18 I was skiving in OpenStack Days China :)
09:23:19 So no
09:23:29 okie
09:23:53 #action oanson continue working on packaging possibilities
09:24:13 yuli_s: Any important bugs you would like to share?
09:24:47 sorry, I lost connection just now :)
09:24:55 hujie1: np
09:25:13 okie, guess yuli_s is still not here
09:25:17 sorry
09:25:20 ahh
09:25:34 i had a bug
09:25:44 that I failed to recreate
09:26:00 I have a bug patch too: https://review.openstack.org/#/c/336896/
09:26:07 no flows in table=0 were created in openflow
09:26:21 i had tried to recreate it several times without success
09:26:41 i will dig deeper to find it
09:26:41 hujie1: ok thanks, will review
09:26:50 thx
09:26:51 yuli_s, is this on a local installation?
09:26:56 i am concentrating on the control flow tests
09:27:00 Did you look for exceptions in the logs?
09:27:07 oanson, yes, fresh version of everything
09:27:25 yuli_s: ok, i think what we need right now is to go over the whole bugs list, make sure they are all assigned, and start cleaning old bugs or ask the assignee of each bug what's going on (or move it to someone else, maybe from DuanKebo's team)
09:27:35 i had to restart df-controller
09:27:44 because we have a big list of open bugs and most of them are probably invalid/old
09:27:50 and the problem disappeared
09:28:00 yuli_s, did you upload the logs to the bug report? Usually there are indicative exceptions and errors
09:28:14 This may be vital information.
09:28:23 oanson, i will check it and go over all the bugs and close them
09:28:39 oanson, I will check for exceptions
09:28:52 Thanks.
09:29:01 thanks yuli_s
09:29:24 #topic performance (control + datapath)
09:29:34 yuli_s, Shlomo_N: any update on this?
09:29:42 sure
09:30:07 i created a test with a sliced version of dragonflow in containers
09:30:30 with 50 sliced versions of df-controllers running in containers
09:30:53 and now I am trying to recreate this test with the regular version of dragonflow
09:30:53 yuli_s, is this using https://review.openstack.org/#/c/309948/ (DB time testing patch)?
09:31:03 I have finished the work on the patch; it already generates a report, you can check it at: https://review.openstack.org/#/c/304470/. There is also a readme file: https://review.openstack.org/#/c/304470/16/dragonflow/tests/performance/readme.txt
09:31:16 oanson, I used it
09:31:35 ok thanks, we will check this out
09:31:57 In the new version, I am starting a df-controller and making it write to a separate br-int
09:32:00 Have we approached anyone from the OpenStack performance team regarding this?
09:32:10 br-int2, etc..
09:32:14 No
09:32:18 I am debugging this now
09:32:19 I think we should
09:33:05 yuli_s: ok
09:33:13 I also think we should take it as a separate project from DF
09:33:18 yuli_s: but your OVSDB is still shared
09:33:30 gsagie, yes
09:33:53 ok
09:34:15 I use a socat proxy to make it available from the container
09:35:13 ok thanks yuli_s and Shlomo_N. i think when the control plane testing is close to something we can review, it's probably best to review it together
09:35:20 but let's take this offline
09:35:27 gsagie, sure
09:35:39 DuanKebo: any testing your team is doing for scale/performance at this point?
09:35:45 or still no resources?
09:36:13 we are
09:36:31 planning to do a control plane test
09:36:47 and the data plane test has been done
09:37:41 okie, please update us with the results :)
09:37:43 #topic roadmap
09:37:46 Comparing with the Neutron reference implementation, we have a 100% improvement for DVR+SG
09:38:26 besides the ML2 and the QoS, what other features do we need on the roadmap?
09:38:41 DuanKebo, oanson: managed to discuss this in China by any chance?
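[Editor's sketch] The socat proxy mentioned above could look roughly like this, assuming ovsdb-server listens on the host on its default TCP port 6640 and that HOST_IP is a container-reachable host address (both are assumptions; the actual test setup may differ):

```shell
# Inside the container: accept local OVSDB connections on port 6640
# and forward each one to the shared ovsdb-server on the host.
# HOST_IP is hypothetical; replace it with your host's address.
HOST_IP=192.168.0.1
socat TCP-LISTEN:6640,reuseaddr,fork TCP:${HOST_IP}:6640 &
```

With this in place, each containerized df-controller can keep its usual `tcp:127.0.0.1:6640` OVSDB endpoint while actually talking to the single shared host database.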
09:38:46 i think we need live-migration
09:38:59 ok so we have live-migration
09:39:13 gsagie, yes, but it's long and I would rather go over it off-line first.
09:39:21 ok np
09:39:45 DuanKebo: besides the live-migration, anything else notable?
09:40:06 another interesting feature is communicating between different DVRs
09:40:33 this feature is already supported by AWS
09:40:37 DuanKebo: is distributed SNAT something that is important?
09:40:46 or something you want to try and implement?
09:41:16 It's interesting also
09:41:24 And you have the requirements?
09:41:30 but we lack a satisfying solution
09:41:32 there are several options to solve it
09:42:05 for example doing the SNAT per compute node, doing it per group of gateways, and so on
09:42:08 ok, we can discuss these solutions
09:42:11 depending on the network architecture
09:42:42 Ok, so no special requirement restrictions? for example, that every compute node is connected to the public network?
09:42:50 we also care about VM-grade QoS
09:42:52 what about a solution for QoS on several shared ports, not just for one single port as in the current patch?
09:43:12 But is this something that is currently exposed in the API?
09:43:24 in the Neutron API
09:43:28 probably not
09:43:31 @gal, no APIs currently
09:43:52 the HWS need the feature
09:43:57 we can consider submitting one
09:44:33 DuanKebo: ok, maybe it's best to first try to submit the API part. is there a problem with implementing it?
09:45:04 yes, we are still working on the solution
09:45:13 there are problems.
09:45:42 DuanKebo: ok, feel free to consult with us, maybe schedule a meeting
09:45:45 we need a valid and simple solution for the feature; after we provide the solution, we can consider the APIs
09:45:55 i think with some OVS and tc queue work we can get it to work :)
09:46:13 you mean several ports for one VM, right?
09:46:51 yes Gal
09:46:53 yes, you can work on the api part
09:47:11 ok, i will try to look at it and help you out
09:47:13 if possible
09:47:22 ok
09:47:29 #topic open discussion
09:47:39 Anything else anyone?
09:47:50 Remember, the Dragon always hears you..
09:48:09 I have a request.
09:48:21 oanson: yes..
09:48:21 There are many patches. And many of them aren't short.
09:48:30 We could use any help we can get when it comes to reviews.
09:48:44 It will also speed up the review cycle and process
09:48:46 oanson: The Dragon heard you my son, and you shall be answered
09:49:02 :)
09:49:12 Thank you, oh Dragon Speaker
09:49:14 documentation
09:49:25 Ok, so please everyone make sure to review the patches
09:49:36 the review process is not limited to cores, we wait for your +1
09:50:01 of course :)
09:50:20 and the review process is important, we don't want to add unstable code to the master branch
09:50:30 another part is documentation
09:50:46 as oshidoshi mentioned, our features are increasing and we need to make sure it's easy to configure and use them
09:50:58 so please, let's make sure we add documentation for new features
09:51:25 even links to other Neutron documentation, or instructions on how to enable these features in Dragonflow. it's very important for whoever doesn't work on the actual patch
09:51:32 and wants to test it
09:51:33 or try it
09:51:44 Anything else anyone?
09:51:47 Links to Neutron are especially important.
09:52:26 Ok, let's take the rest of what we discussed offline. thanks all for attending, and Let the Dragon be with you!
09:52:37 thanks
09:52:55 bye!
09:52:58 Thank you.
09:52:58 #endmeeting