09:00:18 #startmeeting Dragonflow
09:00:19 Meeting started Mon Oct 31 09:00:18 2016 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:22 The meeting name has been set to 'dragonflow'
09:00:38 hello
09:00:40 Hi. Welcome. Who is here for the dragonflow meeting?
09:01:00 o/
09:01:56 All right, let's start. The stragglers will join later
09:02:02 #topic Ocata Roadmap
09:02:19 So there was a summit in Barcelona. We are looking at several directions
09:02:32 The Ocata work will be summarised here: https://etherpad.openstack.org/p/dragonflow-ocata
09:02:36 Hey
09:02:42 #link Ocata Roadmap etherpad https://etherpad.openstack.org/p/dragonflow-ocata
09:03:33 The hot topics are deployment, SFC, ipv6, NAT, and LB
09:03:58 Someone contacted me about load balancing. I will talk to him, since he was supposed to upload a spec.
09:04:27 Additionally, from my personal experience, we should supply networking troubleshooting tools
09:04:45 To assist both developers trying to understand why their packets are missing
09:05:02 And users/operators/deployers to understand why their customers' packets are missing
09:05:27 hello
09:05:32 yuli_s, hi
09:05:35 any takers?
09:05:41 hi
09:05:46 nick-ma_, hi
09:05:49 Hi
09:06:10 All right, I guess we'll get back to that
09:06:24 The next hot topic is SFC. dimak will start looking into it
09:06:33 ok
09:06:53 Note that this is a huge undertaking, so this won't happen by next week :)
09:07:05 :)
09:07:14 dimak, I suggest you start by reviewing networking-sfc, the IETF standards draft, and there's a review
09:07:20 (1 sec, I'll find it)
09:07:48 rfc7665?
09:07:53 #link networking-sfc SFC graph API https://review.openstack.org/#/c/388802/
09:07:57 networking-sfc needs load balancing for a kind of port group. I discussed it with the team in Austin.
09:08:08 dimak, yes
09:08:25 All SFC docs are summarised here: https://datatracker.ietf.org/wg/sfc/documents/
09:08:28 #link SFC docs https://datatracker.ietf.org/wg/sfc/documents/
09:08:48 nick-ma_, yes. That is a very important feature for them
09:09:07 dimak, please take that into account in the spec.
09:09:21 Please don't forget the security considerations as well.
09:09:22 i'll go over it
09:09:46 Next up is deployment
09:10:05 The nice people from Strato agreed to write an openstack-puppet module.
09:10:15 great news
09:10:35 for dragonflow?
09:10:48 I started working with the guys in openstack-ansible on a dragonflow module. The progress can be seen here: https://review.openstack.org/#/c/391524/
09:10:55 hujie, yes
09:11:04 good
09:11:37 I want to skip ahead to IPv6, and come back to the other things later
09:11:46 On IPv6, lihi started looking into it
09:12:15 Many of our applications don't support IPv6. I saw in the review that it was suggested to say that Dragonflow doesn't support IPv6 at all.
09:12:29 I would like to avoid that. It will give off the wrong impression.
09:12:49 I would prefer each application which doesn't support IPv6 to shamefully admit it, and have it fixed
09:12:57 ;)
09:13:30 +1
09:13:40 I also think it would be best if lihi marks each such application as she finds them during her tests, and then fixes them later, one by one.
09:13:47 lihi, would that be all right?
09:13:59 Yeah
09:14:06 Great.
09:14:17 I already started to do so
09:14:22 Great.
09:14:25 Tap-as-a-service
09:14:53 this is a cool service ;)
09:14:59 This is a new feature that was displayed in the summit. It doesn't seem complex, but needs a carrier.
09:15:39 I mention it since it came up in the fishbowl. I think it could be a DF application, which shouldn't change the rest of the framework. But I didn't give it a lot of thought.
09:15:48 i can probably take it
09:15:55 is it of high priority?
09:15:57 yuli_s, if you have time
09:16:02 thanks
09:16:08 nick-ma_, I don't think so.
09:16:25 Mostly, I think it is small enough to slip in between the big things, and will be good PR
09:16:39 and would help towards writing troubleshooting tools
09:16:51 that makes sense.
09:17:12 I think the same goes for VLAN aware VMs, but I haven't read the spec on that.
09:17:22 Documentation:
09:17:56 I would like to enforce function documentation on our code. Any function longer than ~3 lines should have a docstring. It can be a short, single line explaining the gist of the function.
09:18:09 \o/
09:18:26 Just so that new contributors and reviewers can understand what a called function does without having to read through tens of lines of code.
09:18:58 +1
09:19:17 I will start enforcing this for patches submitted from 1st Nov. (Tomorrow)
09:19:23 +1
09:19:31 +1 as well :)
09:19:31 along with unit tests.
09:19:39 nick-ma_, yes.
09:19:54 Our unit test framework is advanced enough (thanks to xiaohhui and others) that we can enforce that too
09:20:20 :)
09:20:38 For the docs, it may be useful to write a pep8 hack. For unit tests, there's a coverage library which may help enforce that new methods are tested.
09:20:48 But this requires research, and can be done manually for now
09:22:07 There was also a request for a migration tool. I started working on it with a guy from Orange in the contributors meeting. I will follow up on it. It is both very interesting and very important work.
09:22:40 I hope this work will also pave the way for an external, non-Neutron/OpenStack API which will facilitate testing and external port deployments
09:22:43 migration from ml2 ovs dvr?
09:22:48 nick-ma_, yes
09:23:24 Since everything is in the Neutron database, and all we need is the NB database populated, I think it should be fairly simple.
09:23:58 I would be happy if anyone who takes a Roadmap item will update the etherpad. This way we can keep track of who does what.
09:24:27 I think there is one last roadmap item: Backend drivers.
09:24:38 e.g. supporting P4 and eBPF in addition to OVS/OpenFlow
09:24:58 This sounds very interesting, but I suspect we are short of hands at the moment.
09:25:02 there is also an integration in OVS upstream to support eBPF.
09:25:36 nick-ma_, yes. I think that's one of the reasons eBPF is so interesting.
09:25:39 Do you want to follow the OVS work or do it ourselves, implementing an eBPF control plane?
09:25:59 I was thinking ourselves, so that we won't be tied to OVS
09:26:08 that's cool.
09:26:39 eBPF is in production, but P4, I don't know.
09:27:03 I think P4 only has compilation to OpenFlow, or is implemented in hardware by some SmartNICs.
09:27:23 On one hand it's very flexible. On the other, if no one implements it, it won't help us much :)
09:27:26 yes.
09:27:53 Any other roadmap items you want to discuss?
09:28:05 dragonflow's northbound api interface?
09:28:20 Yes.
09:29:07 The plan is to support our own independent API. This allows Dragonflow to be used without Neutron/OpenStack, such as in kubernetes, as external ports, testing, etc.
09:29:16 yes.
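A rough sketch of the "pep8 hack" idea mentioned above for the docstring rule (any function longer than ~3 lines must carry a docstring). This is illustrative only and not part of Dragonflow; the file name is made up, and a real gate check would be wired in as a flake8/hacking plugin rather than a stand-alone script, but the AST walk would be the same:

    # docstring_check.py -- illustrative stand-alone checker (hypothetical name)
    import ast
    import sys


    def missing_docstrings(source, min_body_lines=3):
        """Yield (lineno, name) for functions longer than min_body_lines
        that have no docstring."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if not isinstance(node, ast.FunctionDef):
                continue
            # Approximate body length: first line of the last statement
            # minus the 'def' line.
            length = node.body[-1].lineno - node.lineno
            if length > min_body_lines and ast.get_docstring(node) is None:
                yield node.lineno, node.name


    if __name__ == '__main__':
        status = 0
        for path in sys.argv[1:]:
            with open(path) as source_file:
                for lineno, name in missing_docstrings(source_file.read()):
                    print('%s:%d: function %r has no docstring'
                          % (path, lineno, name))
                    status = 1
        sys.exit(status)

For the unit-test side, the coverage library can already fail a run when overall coverage drops (coverage report --fail-under=N); enforcing that each new method is tested is the part that still needs research.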
09:29:20 does dragonflow need to do something for dpdk?
09:29:45 i did it before, a simple integration. you can have a try in your hardware environment.
09:30:02 ovs-dpdk
09:30:18 ok I see
09:30:33 re standalone api, I think orange will help as part of the migration tool work.
09:30:58 I'll discuss it with the person I met
09:32:01 #topic Performance
09:32:02 do you wanna land it in the next release? our own api means our own virtual topology definition.
09:32:32 nick-ma_, in theory yes, but I don't think we have the hands for it
09:32:38 ok.
09:33:14 In performance, there were several discussions of data-plane tests, and Rally came up as the de facto standard for control plane testing
09:33:36 yuli_s, could you look at the Rally gate test, and understand why it is unstable?
09:33:49 i will take a look
09:34:27 There are also several options for data-plane testing: Shaker, PerfKit, Browbeat (which is a wrapper). We need to select a direction, and implement a gate test here too. Preferably, cross-node.
09:34:36 yuli_s, could you review the options here too?
09:34:46 yes, sure
09:34:58 Obviously, not everything for next week. But I would like to have the two gate tests up and stable by the end of the cycle
09:35:26 yuli_s, and this should go hand-in-hand with the work you are doing now.
09:36:05 ok
09:36:11 Thank you.
09:36:20 ;)
09:36:21 Anything else for performance?
09:36:46 nope, I came back yesterday from a long vacation, so no new findings till now
09:37:00 #topic Bugs
09:37:24 I see there are a bunch of High bugs, but it looks like they are all in progress.
09:38:03 I would like to stress bug-fixing this cycle, but I am not sure how we'll do it in addition to the new features we want.
09:38:36 We'll review our progress next week and decide. Maybe we will take 2 months for features, and 2 months for bug fixing.
09:38:53 I don't know yet, and would really appreciate suggestions :)
09:38:54 There are also lots of ongoing patches in the gerrit. :-)
09:39:08 Yes. :)
09:39:32 Between the summit, and the holiday that was forced upon us the week before, I didn't do my bit in reviewing.
09:39:37 I will catch up this week.
09:40:10 Anything else for bugs?
09:40:15 that's ok.
09:40:18 Can we assign a bug triager as neutron does?
09:40:28 xiaohhui, sure.
09:40:34 We can switch the role every (two) weeks
09:40:56 xiaohhui, that's a good idea.
09:41:24 There's an issue with the port status notifier driver
09:41:24 I can take the next two weeks, and we'll find a volunteer afterwards.
09:41:51 Is jingting here?
09:41:53 Great, let's see how it is going
09:42:03 The driver is missing, and devstack fails
09:42:24 jingting is coming
09:42:34 wait a min
09:42:41 Maybe you need to update your dragonflow by running "python setup.py install"
09:42:56 I remember I saw a similar problem
09:43:33 I think I ran into this using an etcd/zmq setup, but didn't have time to review it. Maybe it should be disabled unless redis is used?
09:44:13 i didn't see any redis-related code in the port status notifier.
09:44:43 I thought only the redis driver was written, but I may remember wrong.
09:45:45 lihi, if xiaohhui's suggestion doesn't help, please open a critical bug with your local.conf.
09:45:54 yes, it needs redis to work for now, or to be disabled explicitly if not using redis
09:46:10 i came across this as well
09:46:25 I will check and update
09:46:47 lihi, if the issue repeats, we can start by disabling the feature for non-redis configurations.
09:47:17 ok
09:47:33 If I recall correctly, the driver could be modified to be general (and not redis-specific) easily, but since it's critical, I want the fastest, simplest solution first.
09:48:31 yes, i think so.
09:48:40 jingting, in case you missed what we discussed earlier: In some cases devstack fails on the port notification driver. We're not sure if it's for non-redis setups only, or if Dragonflow simply has to be re-installed after a git pull.
09:49:41 jingting, would you mind taking over testing it? If it is indeed a non-redis configuration thing, the feature should be disabled for non-redis configurations.
09:50:22 At least until a generic driver is written (which may be extracted from the current redis driver)
09:50:23 yes, the feature should be disabled in non-redis configurations
09:50:42 jingting, could you please upload a patch to do that?
09:51:31 Thanks.
09:51:37 #topic Open Discussion
09:51:42 The floor is up for the taking.
09:51:57 Yes, I will do it
09:52:50 If there is nothing else...
09:53:09 Thanks everyone for coming.
09:53:15 thanks all.
09:53:20 bye~
09:53:24 Let's hope this cycle will be as successful as the previous one! :)
09:53:26 #endmeeting