14:04:11 #startmeeting kuryr
14:04:12 Meeting started Mon Dec 5 14:04:11 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:15 The meeting name has been set to 'kuryr'
14:04:50 Hello and welcome to another kuryr weekly IRC meeting
14:04:55 who's here today?
14:05:01 o7
14:05:04 hi
14:05:06 o/
14:05:12 o/
14:05:15 o/
14:05:25 o/
14:05:30 o/
14:05:32 o/
14:05:47 \o/
14:05:52 :D
14:06:04 welcome to all of you :-)
14:06:13 #topic kuryr-lib
14:06:42 o/
14:07:22 #info last week we released kuryr-lib 0.2.0. This release marks the last of our direct development of new features in the openstack/kuryr repository. It was agreed that from now on new features go into kuryr-libnetwork and kuryr-kubernetes and then they move to openstack/kuryr
14:08:23 does anybody have anything about kuryr-lib?
14:08:51 uhm, I don't have anything specific in mind but I guess there might be some changes/refactoring there
14:09:26 well, one thing (yet to be decided) is whether we want to add a means to keep the binding drivers from setting the IP addresses
14:09:38 mchiappero: yes. There is refactoring coming up
14:09:41 o/
14:09:52 but not before we finish container-in-vm for both kuryr-libnetwork and kuryr-kubernetes
14:09:57 although I would really like to have some clear, well-defined replies on libnetwork first
14:10:10 no ok, I see
14:10:25 mchiappero: since you bring it up
14:10:25 it seems to work anyway :)
14:10:31 #topic kuryr-libnetwork
14:11:21 We merged mchiappero's and lmdaly's fix, so now we create and delete the devices on the proper docker REST calls: https://review.openstack.org/#/c/394547/
14:11:37 thank you all!
:)
14:11:51 +1 :)
14:12:24 ltomasbo confirmed that with this (as was intended) we won't have leftover container devices :-)
14:12:49 so now we have a bit of a funny but good situation
14:13:18 I have something I would like to mention
14:13:22 we have https://review.openstack.org/#/c/402462/ and https://review.openstack.org/#/c/400365/
14:13:49 and they don't overlap on goals, but they may partly overlap in how they go about solving their respective needs
14:14:15 mchiappero: go ahead
14:15:00 related to the previous patch, we experienced some issues with an old version of Neutron (6.0.0) that had a bug exposed by that patch
14:15:40 however I learned from a colleague of mine that neutronclient is going to be deprecated in favour of openstackclient
14:15:40 the mac address limitation
14:15:42 ?
14:16:02 I thought the deprecation applies only to the cli aspect of it
14:16:18 and that openstackclient will continue consuming neutronclient's code as a library
14:16:21 oh, ok, I didn't know, I wanted to check with you :)
14:16:52 irenab: limao: can you confirm?
14:17:00 so, anyway, we will do more testing, but I guess the advice is to use a fairly recent version of Neutron
14:17:12 apuimedo: I am not sure, can check and update
14:17:36 Sorry, I got disconnected a minute ago
14:17:44 ok
14:17:47 can you repeat that one more time..
14:17:50 limao: sure
14:18:38 limao: it was about python-neutronclient. mchiappero said that he heard it is deprecated and I said that I thought only the cli part is deprecated and openstackclient will continue consuming it
14:18:42 (as a library)
14:19:55 i think yes, openstackclient is the replacement of the cli part, the python client part should continue to work
14:20:04 hongbin: thanks!
14:20:17 mchiappero: ;-)
14:20:20 apuimedo: yes, the cli part will be openstackclient
14:20:24 good, makes sense
14:21:28 mchiappero: lmdaly: did you review ltomasbo's patch?
14:21:59 no, I'm sorry, I've had little time lately
14:22:07 no, not the most recent update
14:22:24 I would appreciate it if you could review it
14:22:34 I will
14:22:36 me too :)
14:22:40 it seems it is ready to merge, and if you could point out incompatibilities
14:22:51 between your approaches
14:22:52 yep will do!
14:22:58 so, would that prevent our patch from merging?
14:22:58 we can discuss and reach consensus
14:23:05 mchiappero: maybe
14:23:12 that's why I want your review
14:23:15 to minimize conflict
14:23:32 ok, I will, I would like to get all the comments collected and the issues sorted ASAP
14:23:38 #action mchiappero lmdaly ltomasbo to review https://review.openstack.org/#/c/400365/ and https://review.openstack.org/#/c/402462/
14:23:49 mchiappero: agreed!
14:23:49 so that both can be merged quickly
14:24:02 yup. I want to merge them on Wednesday at the latest
14:24:11 but I don't always get what I want
14:24:13 :-)
14:24:18 Anything else on kuryr-libnetwork?
14:24:29 cool :) we are a bit behind on the UT but will try to move fast
14:24:51 mchiappero: remember to add people as reviewers https://review.openstack.org/#/c/406636/
14:24:53 :-)
14:24:58 ok :)
14:26:03 one last thing
14:26:13 I was actually forgetting
14:26:40 http://timhandotme.files.wordpress.com/2014/04/steve-jobs-one-last-thing-wwdc-2007.jpg
14:26:42 xD
14:26:45 the code for IPVLAN and MACVLAN, unless something changes in the kernel drivers, is the same
14:26:46 go ahead
14:27:14 however IPVLAN is not going to work at the moment, so we want to prevent it from loading, right?
14:27:47 I mean, the binding driver to be used by kuryr-libnetwork
14:28:10 mchiappero: We can just log an error
14:28:31 what do you mean exactly?
14:28:58 at runtime or at start-up time or what?
14:29:05 why is IPVLAN not going to work? due to the ovs with same mac problem?
14:29:11 well, if people try to load the ipvlan driver, we just LOG an error and raise an exception
14:29:17 ltomasbo: yes
14:29:30 ltomasbo: well, ovs...?
no
14:29:33 but a neutron port is created, and it is included in allowed_pairs
14:29:44 can we just take the MAC of the newly created subport for the ipvlan?
14:29:50 apuimedo: ok :)
14:30:14 or is there a kernel restriction on creating the ipvlan device with a different mac than that of the main device it is attached to?
14:30:28 mchiappero: and please create a big TODO(mchiappero) that says when we can remove the error
14:30:43 ltomasbo: the second one :) let's continue on the kuryr channel
14:30:51 ok
14:30:54 ltomasbo: mchiappero said that there is that restriction, but that it could easily be fixed in the kernel
14:31:24 ok, great to know! I was not aware of that
14:31:31 #topic fuxi
14:31:34 I have another topic, not for this meeting, but I want to mention it before I forget
14:31:41 :D
14:31:45 mchiappero: in the open discussion section ;-)
14:31:50 sure
14:32:29 #info hongbin got a few patches merged and posted a devstack patch. I think it looks okay. It has a problem on fedora that was pointed out, but it should be fixed soon
14:32:35 #chair hongbin
14:32:36 Current chairs: apuimedo hongbin
14:32:40 hi
14:32:56 last week, i tried to set up some basic fullstack tests
14:32:57 hongbin: please, update us :-)
14:33:05 good!
14:33:08 for example, create a volume with the fuxi driver
14:33:21 #link https://review.openstack.org/#/c/403931/
14:33:29 #link https://review.openstack.org/#/c/403941/
14:33:43 They are in merge conflict now, i will resolve them
14:33:56 After the fullstack tests are merged, the next step is to set up the CI
14:34:18 for the old reviews, apuimedo gave good feedback on the devstack plugin patch
14:34:34 i will address his comments soon
14:34:41 then, you will have the devstack plugin to set up fuxi
14:35:15 the last thing is the review queue. Thanks to all who performed reviews. We need more help on that :)
14:35:19 #link https://review.openstack.org/#/q/project:openstack/fuxi
14:35:27 apuimedo: that is all from me
14:35:34 thanks a lot hongbin!
14:35:38 very nice update!
14:35:43 #topic kuryr-kubernetes
14:36:31 services/endpoints is WIP, will prolly push something this week
14:36:32 #info: Last week we made some devstack improvements (still some left to do when restacking)
14:37:12 #info ivc_ pushed the initial CNI driver support. https://review.openstack.org/404038 . It looks practically ready for merging
14:37:32 once we have this, we have to enable the gates (I posted a devstack install patch that I have to fix)
14:37:55 and after those two pieces are there, just as hongbin is doing for fuxi, we need to start with the fullstack tests and set them up as gates
14:38:10 ivc_: you'll push them with polling I suppose
14:38:16 yup
14:38:18 for now
14:39:37 very well
14:39:58 #action ivc_ to push reworked versions of the services/endpoints patch
14:40:06 anything else about kuryr-kubernetes?
14:40:16 apuimedo most likely it will have the same API logic as the current VIFDriver's activate_vif
14:40:30 when do we want to push the devref?
14:40:46 whenever you think it is ready :P
14:40:56 i don't think we should wait for CNI
14:41:08 so i'd say leave the CNI part as-is
14:41:08 irenab: I propose pushing it for the part we have merged
14:41:17 then we draft the CNI and services part
14:41:22 apuimedo: ivc_ agreed
14:41:35 * apuimedo winks at writing "we draft"
14:41:54 apuimedo: I will push a devref during this week
14:42:02 great irenab!
14:42:02 irenab, do you want to keep my diagram for the pipeline or will you redraw it with EArchitect?
14:42:26 Your work with that gdoc is excellent, it helped me get my head in order (I made slides out of it)
14:42:44 aye, awesome work, irenab :)
14:43:01 ivc_: irenab: If you don't mind, I'll try replacing those diagrams with pngs and have the source in the repo in an open format
14:43:13 putting ivc_'s ideas on paper (not code) was indeed a challenge
14:43:29 I was checking https://www.modelio.org/
14:44:06 apuimedo: sounds good, will check
14:44:26 irenab: I don't mind doing it if you prefer to spend the time on the rest of the devref
14:44:40 anything else, anybody?
14:45:09 apuimedo: will be great, thanks
14:46:26 #topic Open Discussion
14:46:32 mchiappero: your turn
14:48:20 I noticed that the way kuryr-libnetwork is designed, there is no support for both IPv4 and IPv6
14:48:44 I mean at the same time
14:49:44 when you receive a request for an address a port is created; I would assume libnetwork performs two such calls, one for a V4 and one for a V6 address, so two separate ports would get created
14:50:11 I could not test as libnetwork doesn't seem to actually use IPv6 even when asked
14:50:35 but I guess this might (should) happen at some point
14:50:41 right
14:50:46 so
14:51:06 there aren't many solutions, basically two
14:51:58 either not reserving at RequestAddress, or rejoining the two ports into a single one as soon as we can correlate the two IP addresses as belonging to the same endpoint (at CreateEndpoint time)
14:52:36 but it's something more long term, I guess
14:53:49 I'm more for the latter
14:54:15 mchiappero: do we have a bug or blueprint for that?
14:54:42 I don't know, I'm not sure if anyone else has managed to test both IPv4 and IPv6 together
14:55:08 mchiappero: I don't think so
14:55:24 yes, the latter is safer
14:55:28 I think everybody that reported was using either 6 or 4
14:56:15 but in theory it should take a fraction of a second to move from RequestAddress to CreateEndpoint, so creating those two ports might not be a huge safety advantage
14:56:50 oh, ok, does anybody know why they prevent the use of both?
14:57:00 I don't :(
14:57:27 ok.. that's all from me
14:58:10 thanks mchiappero!
14:58:15 please, file a bug/bp!
14:58:33 So we can look at it and have a better discussion
14:58:35 :-)
14:58:42 thank you all for joining the meeting!
14:58:42 I'll do
14:58:46 #endmeeting