08:58:08 <gsagie> #startmeeting dragonflow
08:58:09 <openstack> Meeting started Mon Jan  4 08:58:08 2016 UTC and is due to finish in 60 minutes.  The chair is gsagie. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:58:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:58:12 <openstack> The meeting name has been set to 'dragonflow'
08:58:28 <gsagie> #info gampel, kun_huang, diga, yuli_s, gsagie in meeting
08:58:41 <gsagie> #info Shlomo will not attend this meeting
08:59:15 <gsagie> ok, let's start with the first topic
08:59:20 <gsagie> #topic security groups
08:59:21 <yuli_s> hello !
08:59:27 <gsagie> hi yuli_s!
08:59:44 <gsagie> #info abaron and yuli_s in meeting :)
09:00:03 <gsagie> #link security groups design https://review.openstack.org/#/c/261903/
09:00:16 <gsagie> dingboopt: ping
09:00:21 <dingboopt> pong
09:00:45 <gsagie> dingboopt: would you please share your progress on security groups
09:01:24 <dingboopt> I have reviewed the bp, and written some code for configuring the DF
09:01:32 <gsagie> #link https://review.openstack.org/#/c/262634/
09:01:39 <dingboopt> security group in the DF DB
09:02:02 <gsagie> yeah we will start reviewing this patch
09:02:18 <gsagie> you need to fix some pep8 errors but looks good otherwise
09:02:36 <dingboopt> yes, I will fix it as soon as possible
09:02:39 <gsagie> Does anyone have any comments/ideas about the security group implementation design? Because the next step would be to start implementing it
09:02:52 <gampel> looks good to me
09:02:57 <gsagie> hi raofei!
09:03:14 <raofei> hi gsagie
09:03:40 <gsagie> ok, dingboopt i think the next step will be to read the security groups configuration in the controller and add some logic that translates the configuration into flows
09:04:03 <dingboopt> ok
09:04:04 <gsagie> do you have any questions regarding that step?
09:04:13 <dingboopt> currently no
09:04:24 <gampel> gsagie: are we all set on the second option?
09:04:24 <gsagie> #action dingboopt finish and merge security groups configuration from plugin
09:04:50 <gsagie> #action dingboopt start reading sg configuration from the controller and sending to sg application
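To make that controller step concrete, here is a minimal sketch of what it could look like: reading a security group the plugin wrote to the DF DB and translating its ingress rules into allow flows. The db_api/ofctl helpers, the table numbers, and the 'secgroup' key are illustrative assumptions, not the actual Dragonflow API.

```python
import json

SG_TABLE = 77          # assumed table for security-group filtering
NEXT_TABLE = 78        # assumed next table in the pipeline

def sg_rule_to_match(rule):
    """Build an OVS match string for a single ingress rule dict."""
    parts = [rule.get('protocol') or 'ip']        # 'tcp', 'udp', 'icmp', ...
    if rule.get('remote_ip_prefix'):
        parts.append('nw_src=%s' % rule['remote_ip_prefix'])
    if rule.get('port_range_min'):
        # Exact-port match only; real ranges would need masks or multiple flows.
        parts.append('tp_dst=%s' % rule['port_range_min'])
    return ','.join(parts)

def install_sg_flows(db_api, ofctl, sg_id):
    """Read one security group from the DF DB and install its allow flows."""
    sg = json.loads(db_api.get_key('secgroup', sg_id))   # blob from the plugin
    for rule in sg['rules']:
        if rule['direction'] != 'ingress':
            continue
        ofctl.add_flow(table=SG_TABLE, priority=100,
                       match=sg_rule_to_match(rule),
                       actions='resubmit(,%d)' % NEXT_TABLE)
```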
09:05:06 <gsagie> gampel: i am, would like to hear other opinions
09:05:35 <gsagie> but i think we can adjust the code between the two solutions very quickly anyway
09:06:06 <gsagie> please everyone review the spec and check that everything is ok
09:06:12 <gsagie> anything else on this topic?
09:06:51 <raofei> apart from sg, do we need to start other features concurrently?
09:07:15 <gsagie> raofei: we have some tasks yeah, are you available to take anything right now?
09:07:26 <raofei> OK, review will be done ASAP.
09:07:27 <gsagie> i will just get to it in the next topic (blueprints)
09:07:33 <gampel> distributed DNAT is up for grabs
09:07:45 <gsagie> let's move to that topic
09:07:48 <gsagie> #topic blueprints
09:08:10 <gsagie> raofei: there is a distributed DNAT spec, would you like to start working on that?
09:08:31 <gampel> link: https://github.com/openstack/dragonflow/blob/master/doc/source/specs/distributed_dnat.rst
09:08:32 <gsagie> #link http://docs.openstack.org/developer/dragonflow/specs/distributed_dnat.html
09:08:41 <gsagie> thanks gampel :)
09:08:58 <gsagie> raofei: ?
09:09:24 <gsagie> You can start with the first part: just translating the configuration from the Neutron plugin to the DF DB
09:09:33 <gsagie> this will be a good ramp-up task
09:09:39 <raofei> I'd like to, but i'm not sure whether I can start it right now. I think I can launch this task once the networking-sfc testing is done this week
09:10:17 <gsagie> ok, so let's talk later this week, i will tentatively assign this to you
09:10:37 <raofei> ok. I will try to start it.
09:10:49 <gsagie> #action raofei tentatively start looking at the distributed dnat task
09:10:56 <gsagie> thanks raofei! greatly appreciated
09:11:13 <raofei> my pleasure.
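For context, the "first part" gsagie mentions above could look roughly like this: plugin-side handlers that mirror Neutron floating-IP configuration into the DF DB. The 'floatingip' table name and the db_api methods are assumptions for illustration, not the real driver interface.

```python
import json

def create_floatingip_postcommit(db_api, fip):
    """Persist a Neutron floating IP into the DF DB for the controllers."""
    record = {
        'id': fip['id'],
        'floating_ip_address': fip['floating_ip_address'],
        'fixed_ip_address': fip.get('fixed_ip_address'),
        'port_id': fip.get('port_id'),        # None until associated
        'router_id': fip.get('router_id'),
    }
    db_api.set_key('floatingip', fip['id'], json.dumps(record))

def delete_floatingip_postcommit(db_api, fip_id):
    """Remove the mirrored record when the floating IP is deleted."""
    db_api.delete_key('floatingip', fip_id)
```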
09:11:23 <gsagie> ok, i have written a spec about MAC spoofing protection, it's a complementary feature for security groups
09:11:43 <gsagie> #link https://review.openstack.org/#/c/263019/
09:11:55 <gsagie> please all help to review the spec
09:12:56 <gsagie> i also haven't gotten to it yet, but i am supposed to write a spec about publish-subscribe abstraction, as nick-ma asked for it
09:13:02 <raofei> OK, it's quite similar to security groups.
09:13:03 <gsagie> gampel: maybe you can update us regarding that?
09:13:31 <gampel> yes we are trying to integrate nanomsg as the pub/sub
09:13:37 <gsagie> raofei: it's implicitly implemented
09:14:04 <gampel> it will allow using the DB's pub/sub or the DF implementation
09:14:18 <gsagie> #action gsagie write publish-subscriber abstraction spec
09:14:43 <gampel> this will allow us to add this support to DBs like RAMCloud that do not support pub/sub
09:15:08 <gampel> and will allow us to implement the selective proactive distribution in a very simple way
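A rough sketch of such an abstraction: controllers code against a small pub/sub interface, and configuration selects either a DB's native watch mechanism or a DF-side channel (nanomsg here). The class names, endpoint, and message framing are assumptions; the nanomsg calls follow the python 'nanomsg' binding as I understand it and should be verified against it.

```python
import abc
import threading

class PubSubApi(abc.ABC):
    """What the controller codes against; the backend is configurable."""

    @abc.abstractmethod
    def publish(self, topic, event):
        """Send a DB-change event to every subscribed controller."""

    @abc.abstractmethod
    def subscribe(self, topic, callback):
        """Invoke callback(event) for each event published on topic."""

class NanomsgPubSub(PubSubApi):
    """DF-side pub/sub for DBs (e.g. RAMCloud) with no native support."""

    def __init__(self, endpoint='tcp://127.0.0.1:8866'):   # assumed port
        self._endpoint = endpoint
        self._pub = None

    def publish(self, topic, event):
        from nanomsg import Socket, PUB
        if self._pub is None:
            self._pub = Socket(PUB)
            self._pub.bind(self._endpoint)
        # Topic-prefix framing lets subscribers filter messages.
        self._pub.send(('%s|%s' % (topic, event)).encode())

    def subscribe(self, topic, callback):
        from nanomsg import Socket, SUB, SUB_SUBSCRIBE
        sock = Socket(SUB)
        sock.connect(self._endpoint)
        sock.set_string_option(SUB, SUB_SUBSCRIBE, topic.encode())

        def _loop():
            while True:
                _, event = sock.recv().decode().split('|', 1)
                callback(event)

        threading.Thread(target=_loop, daemon=True).start()
```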
09:15:37 <gsagie> thanks gampel
09:16:19 <gsagie> i am also working on writing two more specs, one is for L2 ARP suppression: currently we add ARP responders only for router ports, and there was a request to also add them for all ports
09:16:25 <gsagie> going to be pretty straightforward
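For readers unfamiliar with ARP responders, here is a sketch of the per-port flow in the classic Neutron l2pop style: the switch rewrites the ARP request into a reply locally instead of flooding it. The table number is an assumption, and the ovs-ofctl invocation is only illustrative; Dragonflow programs flows through its controller, not a shell.

```python
import subprocess

ARP_TABLE = 9        # assumed table number in the DF pipeline

def add_arp_responder(bridge, ip, mac):
    """Answer ARP requests for ip locally instead of flooding them."""
    mac_hex = '0x' + mac.replace(':', '')
    ip_hex = '0x%08x' % int.from_bytes(
        bytes(int(o) for o in ip.split('.')), 'big')
    flow = (
        'table=%d,priority=100,arp,arp_op=1,arp_tpa=%s,'
        'actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'
        'mod_dl_src:%s,load:0x2->NXM_OF_ARP_OP[],'
        'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
        'load:%s->NXM_NX_ARP_SHA[],'
        'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
        'load:%s->NXM_OF_ARP_SPA[],IN_PORT'
        % (ARP_TABLE, ip, mac, mac_hex, ip_hex))
    subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

# Example: add_arp_responder('br-int', '10.0.0.5', 'fa:16:3e:01:02:03')
```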
09:16:25 <raofei> will dhcp spoofing also be handled under the port security feature?
09:17:23 <gsagie> raofei: currently all DHCP traffic for DHCP-enabled networks is directed to the Dragonflow DHCP application, so we believe it's safe for that case
09:17:25 <raofei> will dhcp spoofing be handled under sg, or under ip/mac spoofing protection?
09:18:06 <gsagie> raofei: do you have another use case where there is a problem? the only case i see is networks with DHCP disabled where the user brings their own DHCP VM
09:19:00 <gampel> raofei: the only way DHCP traffic from a VM will go out is on DHCP-disabled networks
09:19:35 <raofei> Yes, i think dhcp spoofing protection is handled indirectly by the DHCP application mechanism.
09:20:01 <gsagie> raofei: yeah, but let me know if you think about a problematic scenario, it should be done as part of the MAC spoofing spec
09:20:37 <gampel> Yes, when DHCP is disabled we allow the user to implement it in a VM, but we can add a configuration option to allow or disallow this
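A minimal illustration of the point gsagie and gampel make above: on DHCP-enabled networks, a single high-priority flow steers all client DHCP requests to the local controller's DHCP application, so a rogue DHCP server's replies never reach other VMs. The table number, metadata-based network match, and ofctl helper (as in the security-group sketch earlier) are assumptions.

```python
DHCP_TABLE = 11      # assumed table number

def add_dhcp_redirect_flow(ofctl, local_network_id):
    """Steer client DHCP requests (UDP 68 -> 67) to the local controller."""
    ofctl.add_flow(table=DHCP_TABLE, priority=100,
                   match='udp,tp_src=68,tp_dst=67,metadata=%d'
                         % local_network_id,
                   actions='CONTROLLER:65535')
```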
09:20:42 <gsagie> there are other blueprints that i am going to add specs for too, will update in the next meeting
09:21:15 <gsagie> working on a mechanism to sync DBs between Neutron and DF
09:21:17 <gsagie> #link https://review.openstack.org/#/c/263035/
09:21:37 <gsagie> that will help solve inconsistent states
09:22:18 <gampel> is this a periodic task?
09:22:54 <gsagie> haven't decided yet, we can have different configurations, let's discuss it over the spec once i write it
09:23:09 <gampel> I think that we should look at the patch from Li MA, as well as the rollback patch
09:23:13 <gsagie> for example, one that actually does a periodic sync, one that only alerts on inconsistencies, and so on..
09:23:18 <gsagie> yeah
09:23:49 <gsagie> #link https://review.openstack.org/#/c/262423/
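One possible shape for the sync mechanism under discussion: walk a table on both sides, compute the delta, then either repair the DF DB or just report, depending on configuration (the two modes gsagie describes). The neutron_api/db_api objects and their methods are illustrative.

```python
import json

def compare_table(neutron_objs, df_ids):
    """Return (missing_in_df, stale_in_df) as id sets."""
    neutron_ids = {o['id'] for o in neutron_objs}
    df_ids = set(df_ids)
    return neutron_ids - df_ids, df_ids - neutron_ids

def sync_table(neutron_api, db_api, table, repair=False):
    objs = {o['id']: o for o in neutron_api.get_objects(table)}
    missing, stale = compare_table(objs.values(), db_api.get_all_keys(table))
    if not repair:
        return missing, stale                      # "alert only" mode
    for obj_id in missing:                         # re-create missing records
        db_api.set_key(table, obj_id, json.dumps(objs[obj_id]))
    for obj_id in stale:                           # drop records Neutron lost
        db_api.delete_key(table, obj_id)
    return missing, stale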
09:24:25 <gsagie> gampel: this can also be used for cases like RAMCloud after a restart..
09:24:41 <gampel> gsagie: good idea
09:25:19 <gsagie> gampel: i also would like to start writing a plan for how to integrate selective proactive in steps, for example a first step could sync everything but only send related information to the applications
09:25:21 <gsagie> and so on
09:25:39 <gsagie> #action gsagie start writing a plan for selective proactive integration
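A sketch of that first integration step: the controller still receives every DB update, but only forwards the ones relevant to tenants/networks with ports on this compute node. All names here are assumptions for illustration.

```python
class SelectiveDispatcher(object):
    """Forwards DB updates to apps only for topics with local ports."""

    def __init__(self, apps):
        self._apps = apps
        self._local_topics = set()   # tenants/networks with ports on this node

    def port_bound(self, topic):
        """Called when a local port makes a topic relevant to this node."""
        self._local_topics.add(topic)

    def on_db_update(self, topic, event):
        # Step 1: still receive everything, but filter before dispatching.
        if topic not in self._local_topics:
            return
        for app in self._apps:
            app.handle_update(event)
```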
09:25:42 <gampel> we should think about it and make sure we have a plan for the DB inconsistency issue
09:26:00 <gsagie> ok
09:26:07 <gsagie> Can i put this on you for now?
09:26:11 <gampel> yes
09:26:15 <gampel> sure
09:26:28 <gsagie> #action gampel have a plan for DB inconsistency problems between DF and Neutron
09:26:37 <gsagie> ok, does anyone have anything else regarding blueprints?
09:27:03 <gampel> I think that we should move all the DB drivers into the new service
09:27:15 <gsagie> gampel: the DB topic is coming up..
09:27:25 <gsagie> #topic testing
09:27:32 <gsagie> yuli_s: ping
09:28:05 <gsagie> yuli_s: would you please share with us your progress regarding fullstack tests?
09:29:35 <yuli_s> Yes
09:29:36 <yuli_s> Sure
09:29:57 <yuli_s> I found a solution for running the tests sequentially
09:30:13 <yuli_s> to fix the race conditions I hit while running the tests
09:30:26 <gsagie> i saw you added port creation/deletion tests
09:30:30 <gsagie> ok, good job
09:30:54 <gsagie> #link https://review.openstack.org/#/c/263042/  yuli_s fullstack patch
09:31:09 <yuli_s> i am going to comment out the new tests for now and restore them later as you suggested privately
09:31:11 <gsagie> for next meeting you're moving to flow tests and update tests, right?
09:31:19 <gsagie> okie
09:31:24 <yuli_s> Yes,
09:31:27 <gsagie> #action yuli_s work on fullstack tests for updates
09:31:40 <gsagie> #action yuli_s work on OVS flows related tests
09:32:02 <gsagie> basically yuli is adding fullstack tests that will check that we configure the correct OVS flows for various scenarios
09:32:51 <gsagie> i would also like to refactor the base test class, so we can have a few files for different tests instead of everything in the same class; the base one will have the clients (neutron, DF DB) and then every test class will inherit from it
09:33:21 <gsagie> #action gsagie refactor fullstack test class
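One way to structure the refactor gsagie describes: a base class owning the shared clients, with per-area test classes in their own files inheriting from it. The auth values and the nb_api attribute are illustrative placeholders.

```python
from neutronclient.v2_0 import client as neutron_client
from oslotest import base

class DFTestBase(base.BaseTestCase):
    """Owns the shared clients; every fullstack test class inherits it."""

    def setUp(self):
        super(DFTestBase, self).setUp()
        self.neutron = neutron_client.Client(
            username='admin', password='password',     # devstack defaults,
            tenant_name='admin',                        # illustrative only
            auth_url='http://127.0.0.1:5000/v2.0')
        self.nb_api = None   # the DF DB client would be created here

class TestPorts(DFTestBase):
    """Would live in its own test_ports.py file."""

    def test_create_port(self):
        net = self.neutron.create_network({'network': {'name': 'test'}})
        self.addCleanup(self.neutron.delete_network, net['network']['id'])
        port = self.neutron.create_port(
            {'port': {'network_id': net['network']['id']}})
        self.addCleanup(self.neutron.delete_port, port['port']['id'])
        self.assertIsNotNone(port['port']['id'])
```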
09:33:52 <gsagie> for the unit tests, we have encountered a small problem: the OVS python binding doesn't yet have python3 support, so our unit tests keep failing in the gate
09:33:58 <gsagie> i have added a patch to work around it in the meantime
09:34:13 <gsagie> #link https://review.openstack.org/#/c/262981/   remove python3 gate tests
09:34:40 <gsagie> i know that russellb is working on making the OVS package work with Python3, so we should have a solution quickly, and i will start adding unit test examples
09:34:51 <gsagie> diga: are you interested in working on that as well?
09:35:45 <gsagie> kun_huang: ping
09:35:50 <gsagie> diga: ping
09:36:15 <gsagie> #action gsagie add unit tests examples once python3 job is fixed
09:36:25 <kun_huang> gsagie: not yet, I'm trying to get the HW supplied ASAP
09:36:49 <kun_huang> btw nick-ma has a fever these days, he is in hospital now...
09:36:58 <gsagie> kun_huang: ok, we are also working here to create performance test environments so we can debug L3 east-west performance and datapath performance
09:37:15 <diga> gsagie: hi
09:37:21 <gsagie> kun_huang: ohh sorry to hear that :( please send him our regards and hope he feels better soon
09:37:23 <diga> yes
09:37:33 <gampel> kun_huang: I am sorry to hear that, yes please send mine as well
09:37:44 <kun_huang> no problem
09:37:55 <gsagie> #info warm regards to nick-ma and hope he feels better soon :)
09:38:31 <gsagie> kun_huang: thanks, Shlomo from our team is going to start working on the setup from our end, i will connect the two of you so you can share information
09:38:53 <gsagie> i have found that Rally is doing some work to support benchmarking, similar to the VMTP/Shaker projects.
09:39:03 <gsagie> #link https://review.openstack.org/#/q/status:open+project:openstack/rally+branch:master+topic:bp/vm-workloads-framework
09:39:13 <gsagie> #link https://review.openstack.org/#/c/254851/6
09:39:19 <gsagie> maybe it will be good to look at them
09:39:24 <gsagie> as well
09:39:30 <gsagie> for automating the tests
09:39:44 <kun_huang> I know that work
09:39:55 <gsagie> cool :)
09:39:59 <kun_huang> anyway, let's build the setup first
09:40:05 <gsagie> #info kun_huang waiting for HW for scale testing
09:40:47 <gsagie> kun_huang: we could also use any help you can provide with performance testing that is not related to scale, like L3 on 1-2 compute nodes. do you have any experience with Shaker/VMTP, or any other automated tests written that you can share?
09:41:16 <kun_huang> I have built VMTP inside FusionNetwork
09:41:34 <kun_huang> as daily test in our CI
09:42:16 <gsagie> kun_huang: cool, that is very useful, i will tell Shlomo to sync with you regarding that, i think we need something like that for Dragonflow as well
09:42:40 <gsagie> #action gsagie, Shlomo sync with kun_huang regarding daily VMTP CI tests for Dragonflow
09:42:52 <gsagie> anything else for testing? gampel?
09:43:23 <gampel> no
09:43:45 <gsagie> ok
09:43:47 <gsagie> #topic DB
09:44:28 <gsagie> We are planning to merge RethinkDB, gampel will work on fixing any final things that need to be addressed
09:44:53 <gsagie> gampel: right?
09:44:57 <gampel> will do yes
09:45:03 <gampel> I think that we should move all the DB drivers into a service configuration
09:45:11 <gsagie> ok, thanks, sorry for the rush but we're running out of time :)
09:45:23 <gampel> like we did for etcd
09:45:42 <gsagie> #action move all DB installations into services, similar to what we did with etcd
09:45:56 <gsagie> #action gampel merge and fix final things with RethinkDB
09:46:13 <gsagie> nick-ma is not here, but he is working on ZooKeeper integration, he will update us next meeting
09:46:24 <gsagie> #info nick-ma update next meeting regarding zookeeper integration
09:46:34 <gsagie> We should review the patch for cluster configuration
09:46:52 <gsagie> #link https://review.openstack.org/#/c/261731/
09:46:55 <gampel> gsagie: Ok will do
09:47:02 <gsagie> gampel: you did :) i have too
09:47:05 <gsagie> just noticed
09:47:23 <gsagie> diga, can i assign a task for you to start looking at unit tests?
09:47:28 <gsagie> you told me you want to take it
09:47:32 <diga_> yes
09:47:34 <gampel> wait, i will rebase it today
09:47:49 <gsagie> #action diga start working on unit tests for the controller
09:48:04 <gsagie> diga_ : i started to create the needed things for it, look at this patch:
09:48:23 <diga_> gsagie: okay
09:48:24 <gsagie> #link https://review.openstack.org/#/c/262796/
09:48:40 <gsagie> still need to wait for the infrastructure to remove the python3 jobs, hopefully today or tomorrow
09:48:43 <gsagie> from the gate
09:48:48 <gsagie> #topic bugs
09:49:10 <gsagie> also, i forgot: i haven't looked yet, but we need to look at the failing tempest tests, maybe i will start converting them to bugs
09:49:21 <gsagie> #action gsagie look at failing tempest tests and convert them to bugs
09:49:53 <gsagie> we should probably allocate more time for talking about bugs next meeting and go over all the open bugs
09:49:58 <gsagie> does anyone have any notable bugs?
09:50:29 <gsagie> kexiaodong: ping
09:51:11 <gsagie> ok
09:51:16 <gsagie> #topic open discussion
09:51:24 <gsagie> Happy new year everyone :)
09:51:31 <gampel> Happy new year
09:51:34 <gsagie> and thanks for attending the meeting and all your help
09:51:43 <yuli_s> Happy new Year
09:51:45 <gsagie> if anyone has anything else please let us know
09:51:52 <yuli_s> hm
09:51:52 <gsagie> gampel: are we going to have the meeting next week?
09:51:54 <diga_> Happy New Year to you all
09:52:09 <kun_huang> gsagie, gampel what's your schedule for coming to China?
09:52:10 <yuli_s> i think we need to think again about running tests sequentially
09:52:25 <gampel> not sure, i think that we should postpone the next meeting
09:52:44 <yuli_s> i just committed a fix for the bug
09:52:44 <gsagie> yuli_s : ok, let's continue to talk about it in #openstack-dragonflow, but i agree it's something we need to consider
09:52:54 <yuli_s> ok, great
09:52:55 <gsagie> kun_huang: we're coming next week
09:53:07 <gsagie> kun_huang: are you located in hangzhou?
09:53:12 <kun_huang> yep
09:53:17 <gampel> kun_huang: see you next week
09:53:20 <gsagie> ok cool, we will meet then face to face :)
09:53:25 <kun_huang> what's your arrival date in hangzhou?
09:53:32 <kun_huang> I could make a meetup for us
09:53:39 <gsagie> #info next week meeting is canceled, we will continue week after
09:53:48 <kun_huang> there are some other openstack guys
09:53:52 <gampel> on the 10th
09:53:53 <gsagie> kun_huang: 12/1
09:54:13 <gampel> 11th sorry
09:54:16 <gsagie> ohh yeah sorry 11th
09:54:39 <kun_huang> arriving on the 11th?
09:54:41 <gsagie> ok everyone thanks for attending the meeting! and see you in two weeks (next week meeting is canceled)
09:54:43 <gsagie> #endmeeting