14:04:11 <apuimedo> #startmeeting kuryr
14:04:12 <openstack> Meeting started Mon Dec  5 14:04:11 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:15 <openstack> The meeting name has been set to 'kuryr'
14:04:50 <apuimedo> Hello and welcome to another kuryr weekly IRC meeting
14:04:55 <apuimedo> who's here today?
14:05:01 <ivc_> o7
14:05:04 <irenab> hi
14:05:06 <lmdaly> o/
14:05:12 <limao> o/
14:05:15 <mattmceuen> o/
14:05:25 <yedongcan> o/
14:05:30 <garyloug> o/
14:05:32 <mchiappero> o/
14:05:47 <janonymous> \o/
14:05:52 <mchiappero> :D
14:06:04 <apuimedo> welcome to all of you :-)
14:06:13 <apuimedo> #topic kuryr-lib
14:06:42 <alraddarla_> o/
14:07:22 <apuimedo> #info last week we released kuryr-lib 0.2.0. This release marks the last of our direct development of new features in the openstack/kuryr repository. It was agreed that from now on new features go into kuryr-libnetwork and kuryr-kubernetes and then they move to openstack/kuryr
14:08:23 <apuimedo> does anybody have anything about kuryr-lib?
14:08:51 <mchiappero> uhm, I don't have anything specific in mind but I guess there might be some changes/refactoring there
14:09:26 <mchiappero> well, one thing (yet to be decided) is whether we want to add a means to prevent the binding drivers from setting the IP addresses
14:09:38 <apuimedo> mchiappero: yes. There is refactoring coming up
14:09:41 <hongbin> o/
14:09:52 <apuimedo> but not before we finish container-in-vm for both kuryr-libnetwork and kuryr-kubernetes
14:09:57 <mchiappero> although I would really like to have some clear, well-defined answers on libnetwork first
14:10:10 <mchiappero> no ok, I see
14:10:25 <apuimedo> mchiappero: since you bring it up
14:10:25 <mchiappero> it seems to work anyway :)
14:10:31 <apuimedo> #topic kuryr-libnetwork
14:11:21 <apuimedo> We merged mchiappero and lmdaly's fix so that now we create and delete the devices on the proper docker REST calls: https://review.openstack.org/#/c/394547/
14:11:37 <mchiappero> thank you all! :)
14:11:51 <lmdaly> +1 :)
14:12:24 <apuimedo> ltomasbo confirmed that with this (as was intended) we'll not have leftover container devices :-)
14:12:49 <apuimedo> so now we have a bit of a funny but good situation
14:13:18 <mchiappero> I have something I would like to mention
14:13:22 <apuimedo> we have https://review.openstack.org/#/c/402462/ and https://review.openstack.org/#/c/400365/
14:13:49 <apuimedo> and they don't overlap on goals, but they may overlap partly on how they go about solving the needs they have
14:14:15 <apuimedo> mchiappero: go ahead
14:15:00 <mchiappero> related to the previous patch, we experienced some issues with an old version of Neutron (6.0.0) that had a bug exposed by that patch
14:15:40 <mchiappero> however I came to know from a colleague of mine that neutronclient is going to be deprecated in favour of openstackclient
14:15:40 <apuimedo> the mac address limitation
14:15:42 <apuimedo> ?
14:16:02 <apuimedo> I thought that deprecation is only for the cli aspect of it
14:16:18 <apuimedo> and that openstackclient will continue consuming neutronclient's code as a library
14:16:21 <mchiappero> oh, ok, I didn't know, I wanted to check with you :)
14:16:52 <apuimedo> irenab: limao: can you confirm?
14:17:00 <mchiappero> so, anyway, we will do more testing, but I guess the advice is to use a fairly recent version of Neutron
14:17:12 <irenab> apuimedo: I am not sure, can check and update
14:17:36 <limao> Sorry, I got disconnected a minute ago
14:17:44 <apuimedo> ok
14:17:47 <limao> can you repeat it one more time?
14:17:50 <apuimedo> limao: sure
14:18:38 <apuimedo> limao: it was about python-neutronclient. mchiappero said that he heard it is deprecated and I said that I thought only the cli part is deprecated and openstackclient will continue consuming it
14:18:42 <apuimedo> (as a library)
14:19:55 <hongbin> i think yes, openstackclient is the replacement of the cli part, the python client part should continue to work
14:20:04 <apuimedo> hongbin: thanks!
14:20:17 <apuimedo> mchiappero: ;-)
14:20:20 <limao> apuimedo: yes, cli part will be openstackclient
14:20:24 <mchiappero> good, makes sense
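The split agreed on above — deprecating a project's standalone CLI while keeping its Python bindings supported as a library — can be sketched generically. This is an illustrative pattern only, not python-neutronclient's actual code; `list_networks` and `main` are hypothetical names:

```python
import warnings


def list_networks():
    """Library API: stays supported; callers import and use it directly."""
    return [{"name": "private"}, {"name": "public"}]


def main():
    """CLI entry point: deprecated in favour of a unified client CLI."""
    warnings.warn(
        "the standalone CLI is deprecated; use the openstack CLI instead",
        DeprecationWarning,
        stacklevel=2,
    )
    for net in list_networks():
        print(net["name"])
```

A consumer like kuryr would only ever call the library side (`list_networks` here), so the CLI deprecation does not affect it.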
14:21:28 <apuimedo> mchiappero: lmdaly: did you review ltomasbo's patch?
14:21:59 <mchiappero> no, I'm sorry, I have little time lately
14:22:07 <lmdaly> no, not the most recent update
14:22:24 <apuimedo> I would appreciate if you could review it
14:22:34 <mchiappero> I will
14:22:36 <ltomasbo> me too :)
14:22:40 <apuimedo> it seems it is ready to merge, and if you could point out incompatibilities
14:22:51 <apuimedo> between your approaches
14:22:52 <lmdaly> yep will do!
14:22:58 <mchiappero> so, would that prevent our patch from merging?
14:22:58 <apuimedo> we can discuss and reach consensus
14:23:05 <apuimedo> mchiappero: may be
14:23:12 <apuimedo> that's why I want your review
14:23:15 <apuimedo> to minimize conflict
14:23:32 <mchiappero> ok, I'll do, I would like to get all the comments collected and the issues sorted ASAP
14:23:38 <apuimedo> #action mchiappero lmdaly ltomasbo to review https://review.openstack.org/#/c/400365/ and https://review.openstack.org/#/c/402462/
14:23:49 <apuimedo> mchiappero: agreed!
14:23:49 <mchiappero> so that both can be merged quickly
14:24:02 <apuimedo> yup. I want to merge them latest on Wednesday
14:24:11 <apuimedo> but I don't always get what I want
14:24:13 <apuimedo> :-)
14:24:18 <apuimedo> Anything else on kuryr-libnetwork?
14:24:29 <mchiappero> cool :) we are a bit behind for the UT but will try to run fast
14:24:51 <apuimedo> mchiappero: remember to add people as reviewers https://review.openstack.org/#/c/406636/
14:24:53 <apuimedo> :-)
14:24:58 <mchiappero> ok :)
14:26:03 <mchiappero> one last thing
14:26:13 <mchiappero> I was actually forgetting
14:26:40 <apuimedo> http://timhandotme.files.wordpress.com/2014/04/steve-jobs-one-last-thing-wwdc-2007.jpg
14:26:42 <apuimedo> xD
14:26:45 <mchiappero> the code for IPVLAN and MACVLAN, unless something changes in the kernel drivers, is the same
14:26:46 <apuimedo> go ahead
14:27:14 <mchiappero> however IPVLAN is not going to work at the moment, so we want to prevent it from loading, right?
14:27:47 <mchiappero> I mean, the binding driver to be used by kuryr-libnetwork
14:28:10 <apuimedo> mchiappero: We can just log an error
14:28:31 <mchiappero> what do you mean exactly?
14:28:58 <mchiappero> at runtime or start-up time or what?
14:29:05 <ltomasbo> why is IPVLAN not going to work? due to the ovs same-mac problem?
14:29:11 <apuimedo> well, if people try to load the ipvlan driver, we just LOG error and raise exception
14:29:17 <mchiappero> ltomasbo: yes
14:29:30 <mchiappero> ltomasbo: well, ovs...? no
14:29:33 <ltomasbo> but a neutron port is created, and it is included in allowed_pairs
14:29:44 <ltomasbo> can we just take the MAC of the newly created subport for the ipvlan?
14:29:50 <mchiappero> apuimedo: ok :)
14:30:14 <ltomasbo> or is there a kernel restriction to create the ipvlan device with a different mac than the main device is attached to?
14:30:28 <apuimedo> mchiappero: and please create a big TODO(mchiappero) that says when we can remove the error
14:30:43 <mchiappero> ltomasbo: the second one :) let's continue on the kuryr channel
14:30:51 <ltomasbo> ok
14:30:54 <apuimedo> ltomasbo: mchiappero said that there is that restriction, but that it could easily be fixed in kernel
14:31:24 <ltomasbo> ok, great to know! I was not aware of that
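The guard agreed on above (log an error and raise when someone tries to load the ipvlan binding driver, with a TODO marking when it can go away) could be sketched like this. The `BindingDriverError` exception and the `connect` signature are hypothetical, chosen for illustration; the real kuryr-lib driver interface may differ:

```python
import logging

LOG = logging.getLogger(__name__)


class BindingDriverError(Exception):
    """Hypothetical exception raised for unusable binding drivers."""


# TODO(mchiappero): remove this error once the kernel allows creating
# ipvlan devices with a MAC different from the parent device's.
def connect(vif, ifname):
    """Refuse to bind with ipvlan until the MAC restriction is lifted."""
    LOG.error("the ipvlan binding driver cannot be used yet: ipvlan "
              "devices inherit the parent device's MAC, which conflicts "
              "with the MAC Neutron assigned to the port")
    raise BindingDriverError("ipvlan binding is not supported yet")
```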
14:31:31 <apuimedo> #topic fuxi
14:31:34 <mchiappero> I would have another topic, but not for this meeting, but I want to mention it before I forget
14:31:41 <mchiappero> :D
14:31:45 <apuimedo> mchiappero: in the open discussion section ;-)
14:31:50 <mchiappero> sure
14:32:29 <apuimedo> #info hongbin got a few patches merged and posted a devstack patch. I think it looks okay. It has a problem in fedora that was pointed out, but it should be fixed soon
14:32:35 <apuimedo> #chair hongbin
14:32:36 <openstack> Current chairs: apuimedo hongbin
14:32:40 <hongbin> hi
14:32:56 <hongbin> last week, i tried to set up some basic fullstack tests
14:32:57 <apuimedo> hongbin: please, update us :-)
14:33:05 <apuimedo> good!
14:33:08 <hongbin> for example, create a volume with fuxi driver
14:33:21 <hongbin> #link https://review.openstack.org/#/c/403931/
14:33:29 <hongbin> #link https://review.openstack.org/#/c/403941/
14:33:43 <hongbin> They are in merge conflict now, i will resolve them
14:33:56 <hongbin> After the fullstack tests are merged, the next step is to set up the CI
14:34:18 <hongbin> for the old reviews, apuimedo gave a good feedback on the devstack plugin patch
14:34:34 <hongbin> i will address his comment soon
14:34:41 <hongbin> then, you will have the devstack plugin to setup fuxi
14:35:15 <hongbin> the last thing is the review queue. Thanks all who performed reviews. We need more help on that :)
14:35:19 <hongbin> #link https://review.openstack.org/#/q/project:openstack/fuxi
14:35:27 <hongbin> apuimedo: that is all from me
14:35:34 <apuimedo> thanks a lot hongbin!
14:35:38 <apuimedo> very nice update!
14:35:43 <apuimedo> #topic kuryr-kubernetes
14:36:31 <ivc_> services/endpoints is WIP, will prolly push something this week
14:36:32 <apuimedo> #info: Last week we made some devstack improvements (still some left to do when restacking)
14:37:12 <apuimedo> #info ivc_ pushed the initial CNI driver support. https://review.openstack.org/404038 . It looks practically ready for merging
14:37:32 <apuimedo> once we have this, we have to enable the gates (I posted a devstack install patch that I have to fix)
14:37:55 <apuimedo> and after those two pieces are there, just as hongbin is doing for fuxi, we need to start with the fullstack tests and put them as gates
14:38:10 <apuimedo> ivc_: you'll push them with polling I suppose
14:38:16 <ivc_> yup
14:38:18 <ivc_> for now
14:39:37 <apuimedo> very well
14:39:58 <apuimedo> #action ivc_ to push reworked versions of the services/endpoints patch
14:40:06 <apuimedo> anything else about kuryr-kubernetes?
14:40:16 <ivc_> apuimedo most likely it will have the same API logic as the current VIFDriver's activate_vif
14:40:30 <irenab> when do we want to push devref?
14:40:46 <ivc_> whenever you think it is ready :P
14:40:56 <ivc_> i dont think we should wait for CNI
14:41:08 <ivc_> so i'd say leave CNI part as-is
14:41:08 <apuimedo> irenab: I propose to push it for the part we have merged
14:41:17 <apuimedo> then we draft the CNI and services part
14:41:22 <irenab> apuimedo: ivc_ agreed
14:41:35 * apuimedo winks at writing "we draft"
14:41:54 <irenab> apuimedo:  I will push a devref during this week
14:42:02 <apuimedo> great irenab!
14:42:02 <ivc_> irenab, you want to keep my diagram for pipeline or will you redraw it with EArchitect?
14:42:26 <apuimedo> Your work with that gdoc is excellent, it helped me get my head in order (I made slides out of it)
14:42:44 <ivc_> aye, awesome work, irenab :)
14:43:01 <apuimedo> ivc_: irenab: If you don't mind, I'll try replacing those diagrams with pngs and have the source in the repo in an open format
14:43:13 <irenab> putting ivc_ ideas on paper (not code) was indeed a challenge
14:43:29 <apuimedo> I was checking https://www.modelio.org/
14:44:06 <irenab> apuimedo: sounds good, will check
14:44:26 <apuimedo> irenab: I don't mind doing it if you prefer to spend the time with the rest of the devref
14:44:40 <apuimedo> anything else, anybody?
14:45:09 <irenab> apuimedo: will be great, thanks
14:46:26 <apuimedo> #topic Open Discussion
14:46:32 <apuimedo> mchiappero: your turn
14:48:20 <mchiappero> I noticed that the way kuryr-libnetwork is written, there is no support for both IPv4 and IPv6 at the same time
14:49:44 <mchiappero> when you receive a request for an address a port is created; I would assume libnetwork performs two such calls, one for a v4 and one for a v6 address, so two separate ports would get created
14:50:11 <mchiappero> I could not test as libnetwork doesn't seem to actually use IPv6 even when asked
14:50:35 <mchiappero> but I guess this might (should) happen at some point
14:50:41 <apuimedo> right
14:50:46 <mchiappero> so
14:51:06 <mchiappero> there aren't many solutions, basically two
14:51:58 <mchiappero> either not reserving at RequestAddress, or rejoining the two ports into a single one as soon as we can correlate the two IP addresses as belonging to the same endpoint (at CreateEndpoint time)
14:52:36 <mchiappero> but it's something more long term, I guess
14:53:49 <apuimedo> I'm more for the latter
14:54:15 <apuimedo> mchiappero: do we have a bug or blueprint for that?
14:54:42 <mchiappero> I don't know, I'm not sure if someone else managed instead to test both IPv4 and IPv6 together
14:55:08 <apuimedo> mchiappero: I don't think so
14:55:24 <mchiappero> yes, the latter is more safe
14:55:28 <apuimedo> I think everybody that reported was either 6 or 4
14:56:15 <mchiappero> but in theory it should take a fraction of a second to move from RequestAddress to CreateEndpoint, so creating those two ports might not be a huge safety advantage
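The second option discussed above — keeping both reservations and correlating them at CreateEndpoint time — could look roughly like this pure-Python sketch. The `merge_endpoint_ports` helper and the dict layout are invented for illustration; in the real flow the v6 address would presumably be folded into the v4 Neutron port's fixed_ips and the redundant port deleted:

```python
def merge_endpoint_ports(v4_port, v6_port):
    """Fold two separately reserved address 'ports' into one record.

    Models the bookkeeping for a dual-stack endpoint: libnetwork makes
    one RequestAddress call per address family, so by CreateEndpoint
    time there are two reservations for a single endpoint.
    """
    merged = dict(v4_port)
    merged["fixed_ips"] = v4_port["fixed_ips"] + v6_port["fixed_ips"]
    return merged


# Two RequestAddress calls for the same endpoint yield two reservations.
v4 = {"id": "port-v4", "fixed_ips": [{"ip_address": "10.0.0.5"}]}
v6 = {"id": "port-v6", "fixed_ips": [{"ip_address": "fd00::5"}]}
endpoint_port = merge_endpoint_ports(v4, v6)
```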
14:56:50 <mchiappero> oh, ok, does anybody know why they prevent the use of both?
14:57:00 <apuimedo> I don't :(
14:57:27 <mchiappero> ok.. that's all from me
14:58:10 <apuimedo> thanks mchiappero!
14:58:15 <apuimedo> please, file a bug/bp!
14:58:33 <apuimedo> So we can look at it and have better discussion
14:58:35 <apuimedo> :-)
14:58:42 <apuimedo> thank you all for joining the meeting!
14:58:42 <mchiappero> I'll do
14:58:46 <apuimedo> #endmeeting