14:00:41 #startmeeting kuryr
14:00:42 Meeting started Mon Aug 29 14:00:41 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:45 The meeting name has been set to 'kuryr'
14:00:57 Hello everybody and welcome to yet another Kuryr meeting
14:01:03 Hello!
14:01:03 who's here for the show?
14:01:05 o/
14:01:06 o/
14:01:12 o/
14:01:12 o/
14:01:12 o/
14:01:13 o/
14:01:15 o/
14:02:03 I like this attendance level :-)
14:02:34 #info pablochacin, vikas, devvesa, tonanhngo, limao, banix, garyloug and apuimedo present
14:02:41 Let's get started
14:02:46 #topic kuryr-lib
14:03:05 we're almost done with the split (sorry it took so long)
14:03:30 limao and I sent some patches to remove wrong requirements today
14:03:59 and after that, I think we can make a 1.0.0 release, then make a 2.0.0 when we have the rest/rpc
14:04:26 #link https://review.openstack.org/362070
14:04:37 #link https://review.openstack.org/361972
14:04:46 #link https://review.openstack.org/361937
14:05:24 #info vikasc has patches ongoing for adding the pieces for container-in-vm
14:05:38 #link https://review.openstack.org/361937
14:05:50 #link https://review.openstack.org/362023
14:06:23 are we all in agreement to make a very short lived (a couple of weeks) 1.0.0 release before we add the rest/rpc?
14:06:42 yes
14:06:46 +1
14:06:50 +1
14:07:06 very well
14:07:34 so once the three patches I put above are merged, I'll try a deployment and if it works, I'll send the tag
14:08:11 #info There's agreement on having a very short-lived (two to three weeks) 1.0.0 openstack/kuryr release before we add rpc/rest
14:08:34 #info apuimedo is working on packaging for centos/fedora
14:08:53 the patch will be a submission to openstack-packages/kuryr
14:09:44 #info note that openstack-packages/* do not seem to be built with py3 as of now, so that means we'll need to put in some work before kuryr-kubernetes can use those packages
14:10:02 anything else about openstack/kuryr?
14:12:52 alright, that's a "move along!"
14:12:58 #topic kuryr-libnetwork
14:13:53 #info vikasc has improved the configuration and option listing a lot
14:14:19 #action apuimedo to rebase the keystone v3 support on top of this to unblock kolla
14:14:57 limao_: what's the status of https://review.openstack.org/#/c/355714/
14:14:59 ?
14:15:50 Hi apuimedo, I'm modifying the UT; when I test, I find that gate-install-dsvm-kuryr-libnetwork-nv does not work
14:16:08 limao_: did you identify the issue?
14:16:15 yes, https://review.openstack.org/#/c/361841/
14:16:22 and https://review.openstack.org/#/c/361795/
14:16:43 the second one is for the missing kuryr.log
14:17:01 after the first one is merged, I will rebase the patch
14:17:54 #links https://review.openstack.org/#/c/361841/
14:18:04 #link https://review.openstack.org/#/c/361841/
14:18:12 #link https://review.openstack.org/#/c/361795/
14:18:14 thanks limao_
14:18:36 #action apuimedo to review limao's kuryr-libnetwork fixes
14:18:53 banix: any update or issues about kuryr-libnetwork?
14:19:29 apuimedo, should we keep kuryr_libnetwork/common?
14:19:34 no, the only thing is adding support for keystone v3, which is a lib issue I suppose.
14:20:29 banix: it's a lib issue.
Now that we fixed the config generation, I'll rebase and push
14:20:36 vikasc: I'm in favor of dropping it
14:20:56 limao_,
14:21:04 My understanding is that it would make sense to have a kuryr_libnetwork/common if we had a plugin architecture for kuryr-libnetwork
14:21:04 banix,
14:21:06 let's say
14:21:22 for example kuryr_rd and kuryr_ipam
14:21:33 apuimedo, makes sense
14:21:36 makes sense
14:21:51 there is no further branching
14:22:00 but with how interdependent they are, I think it's fine to put what we have in 'common' in kuryr_libnetwork
14:24:17 #action apuimedo to take the 'common' question to the ML and merge the removal of it by next week if there is not a strong counter-argument
14:24:34 anything else about kuryr-libnetwork?
14:24:45 yes
14:25:39 for getting the trunk port id in the VM, I am planning to add a variable specifying the IP of the trunk port in the config file
14:25:49 and then filter using that IP
14:26:07 the trunk port ID will be used to create the subport
14:26:20 is there any other better way?
14:26:27 vikasc: not for libnetwork
14:26:38 for kuryr you could retrieve it from the scheduler status info
14:26:51 but for libnetwork I think that is it
14:27:11 so for now, let's put the configuration option in openstack/kuryr-libnetwork
14:27:19 I was also thinking something similar.
14:27:43 that's why I raised it in the kuryr-libnetwork discussion
14:27:48 thanks
14:29:47 :-)
14:29:49 thanks vikasc
14:29:56 #topic kuryr-kubernetes
14:30:46 #info devvesa's first upstreaming patch was merged today: https://review.openstack.org/359811
14:30:50 thanks a lot devvesa
14:31:02 thanks to you for the reviews and good feedback!
14:31:10 #info devvesa has sent the aio and asyncio loop patches as well
14:31:18 #link https://review.openstack.org/360376
14:31:33 #link https://review.openstack.org/360629
14:32:04 the game plan is to get these merged, then proceed with the watchers and then the translators, is that right, devvesa?
14:32:14 Yes, and unit tests
14:32:31 I'm still thinking about what the relationship between watchers and translators should be
14:32:48 relationship?
14:32:56 which reminds me that
14:33:00 Should the watcher call the translator?
14:33:06 #action apuimedo to update the k8s devref
14:33:09 Should the translator have the watcher as an attribute?
14:33:21 and we should have the discussion there
14:33:36 Ok. Let's have it :)
14:33:46 but as an intro, my idea is that when you are scheduling the watchers
14:33:57 before scheduling them
14:34:11 the service, on start, registers the filters and translators for each watcher
14:34:20 then they get scheduled
14:34:29 very explicit and tidy IMHO
14:34:45 so you initialize the watcher with the translators
14:34:52 that it should call on each event
14:34:57 right
14:35:02 that was my idea
14:35:08 sounds good
14:35:18 I will skip filters for now, we can introduce them later
14:35:29 sure, it's just a matter of composing methods
14:35:55 Also I think that it's the watcher who has to update the annotations on K8s
14:36:13 I think the ideal would be that watchers, filters and translators can all be simple methods
14:36:29 but if one wants to have a class instance play the part
14:36:35 making it callable should suffice
14:36:49 apuimedo: what do you mean by scheduled?
14:36:57 devvesa: that's fine for me
14:37:12 pablochacin: telling asyncio to start them
14:37:36 ok. Have you considered more than one filter/translator?
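
[Editor's sketch: a minimal illustration of the registration pattern apuimedo describes above: on service start each watcher is initialized with the translators it should call (there can be more than one), and only then is it scheduled on the asyncio loop. The Watcher class, the K8s client's stream() method and the translate_pod coroutine are hypothetical names for illustration, not the actual kuryr-kubernetes code.]

    import asyncio


    async def translate_pod(event):
        # Placeholder translator; the real one would do the Neutron work.
        return {}


    class Watcher(object):
        def __init__(self, k8s_client, translators):
            # Translators are registered up front: annotation key -> coroutine.
            self._k8s = k8s_client
            self._translators = translators

        async def watch(self, path):
            # stream() is a hypothetical async iterator over K8s watch events.
            async for event in self._k8s.stream(path):
                # Each registered translator gets the raw event; what is done
                # with its result is shown in the contract sketch further below.
                for translator in self._translators.values():
                    await translator(event)


    def start(k8s_client):
        # Register the translators first, then tell asyncio to schedule the watcher.
        loop = asyncio.get_event_loop()
        watcher = Watcher(k8s_client, {'kuryr.org/port': translate_pod})
        loop.create_task(watcher.watch('/api/v1/pods?watch=true'))
        loop.run_forever()
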
14:37:39 devvesa: not only fine in fact, I think it's the best way
14:37:41 :P
14:37:49 sure
14:37:55 good
14:38:17 you should be able to have multiple filters and translators for each event type
14:38:23 and resource type
14:38:30 it will simplify the logic a lot
14:38:34 devvesa: the watcher updating the annotation?
14:38:57 pablochacin: yes, or create another entity (which I think is not necessary now)
14:38:58 pablochacin: this way, the neutron translators do not need to know anything about k8s
14:39:06 uhm
14:39:07 yes
14:39:09 so it is much simpler to unit test and reason about them
14:39:16 watchers are on the K8s side, translators on the Neutron side
14:39:23 right
14:39:32 (we can change the names if you feel more comfortable, I am bad at names)
14:39:36 the idea is that you can register the filter/translator under a name
14:39:39 for example
14:39:44 then translators have to communicate back to watchers
14:39:51 'kuryr.org/port'
14:40:01 then, when the watcher calls the translator
14:40:07 whatever dictionary it gets back
14:40:46 it updates the k8s resource with key 'kuryr.org/port' and the value is the serialization of what it gets back from the filter/translator
14:41:17 pablochacin: the watcher does yield from (await) on the filter/translator
14:41:27 and it will get the dict back
14:41:42 I think it shouldn't bring issues
14:41:50 (unmanageable issues)
14:42:12 What I don't like is that the watcher will need the translator instances and the translators will need the watcher instances
14:42:46 kind of weird initializations
14:43:00 devvesa, why will the translator need the watcher?
14:43:02 devvesa: I don't think it needs such a thing
14:43:05 sorry, I could not get why the translators will need the watcher instances
14:43:13 if you take the translator to be a method
14:43:18 for example
14:43:22 there's no instance involved
14:43:35 pablochacin, vikasc: from the translator you will need to eventually tell the watcher to update the annotations
14:43:55 the contract for a translator is simply: receive an event as a parameter, do all my stuff async, and return a dict with the result
14:44:05 devvesa, that would be the same watcher waiting on await
14:44:21 the translator does not know anything about watchers
14:44:25 vikasc: true...
14:44:49 can I say...
14:45:15 vikasc: go ahead
14:45:39 events from k8s result in parallel instances of watchers, but translators don't know or care which watcher has called them
14:45:47 The problem with this is that the watcher becomes some kind of controller of the logic. What if we want to watch the same events for two different logics?
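
[Editor's sketch: continuing the sketch above with the contract devvesa and apuimedo describe here: the translator receives the event, does its Neutron work asynchronously and returns a plain dict, knowing nothing about K8s; the watcher awaits it and writes the serialized result back as an annotation under the registered key, e.g. 'kuryr.org/port'. The annotate() call and the fields in the returned dict are made up for illustration.]

    import json


    async def translate_pod(event):
        # Neutron-side logic only: create or look up the port for this pod and
        # return plain data; nothing here is K8s-specific.
        return {'port_id': 'neutron-port-uuid', 'vif_type': 'ovs'}


    async def handle_event(k8s_client, event, translators):
        # translators, as registered above: {'kuryr.org/port': translate_pod}.
        for key, translator in translators.items():
            result = await translator(event)
            # The watcher owns the K8s side: it serializes whatever dict it got
            # back and stores it as an annotation on the watched resource.
            await k8s_client.annotate(event['object'],
                                      {key: json.dumps(result)})
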
14:46:11 devvesa: each logic should be independent
14:46:31 and therefore the changes they make to annotations as well
14:46:32 my point is that maybe the watcher should not know about translators either
14:46:50 devvesa: that's fine
14:46:51 and have some kind of controller that joins both
14:46:52 let's separate the topics, then
14:47:03 the watcher can call different filters/translators for each event
14:47:17 and for each await that returns data, it puts one annotation
14:47:29 I agree with apuimedo
14:47:34 it is simpler
14:47:45 apuimedo, +1
14:48:28 devvesa: if we see that the watcher logic ends up being simpler with another element taking over the annotation updates, we can do that as well
14:48:33 another way is to use event-like communication
14:48:40 watchers raise update events
14:48:42 it would be
14:48:55 you start a watcher and an updater
14:49:00 translators process them, raise update events (to the annotations)
14:49:20 like pubsub instead of calls
14:49:23 no, no. I am OK with annotation updates on the watcher; I mean that maybe a controller should manage the inputs/outputs of watchers/translators
14:49:26 and synchronize them
14:49:43 I'd rather leave synchronization for later
14:49:44 :P
14:49:50 after the pods
14:49:54 haha. No, synchronize is not the word
14:50:17 If we can put the business logic in a method that does not need I/O, we may simplify the tests a lot
14:50:32 method/class
14:51:10 watcher <-> controller <-> translator
14:51:26 dispatcher?
14:51:38 devvesa: vikasc: pablochacin: I propose we have a separate discussion this week about this
14:51:49 apuimedo: ok
14:51:50 with video and diagrams
14:51:54 ok
14:51:57 cool
14:52:02 awesome
14:52:02 I'll send an invite
14:52:37 #info apuimedo to send an invite to discuss the k8s watcher/translator architecture for Wednesday
14:53:10 #info if anybody wants to join, please ping me
14:53:16 apuimedo, please avoid UTC 1400 to UTC 1500
14:53:22 vikasc: understood
14:53:29 apuimedo: could you please propose two hours for the meeting?
14:53:29 thanks
14:53:46 pablochacin: you mean two different times or a duration of two hours?
14:54:38 two times, and see who can attend; devvesa and I have some meetings
14:55:08 very well
14:55:11 thanks
14:55:12 (16-17 and 18-19, if I recall correctly)
14:55:28 oh, I have to bring up another topic
14:55:34 but we have almost run out of time
14:55:39 bring it fast! :)
14:55:49 thanks all for your ideas and suggestions!
14:55:56 we should have a discussion about the plan for container-in-vm in k8s
14:56:34 icoghla from Intel was proposing a short-term step, based on address pairs, for while trunk ports and subports are not ready
14:56:45 but we need more time to discuss it than remains
14:57:03 apuimedo, any relevant link to this?
14:57:04 #info let's discuss the container-in-vm plan in #openstack-kuryr
14:57:08 vikasc: not yet
14:57:20 #topic general
14:57:29 any other topic before we close the meeting?
14:57:40 Is the alternate-week meeting still on?
14:58:13 it's not happening these days
14:58:32 ok
14:58:38 tonanhngo: we are lacking a volunteer to chair it, and attendance
14:58:44 it was usually just one person
14:58:46 or two
14:58:48 :(
14:59:10 would it make sense to move to this time slot?
14:59:26 tonanhngo: I think it's probably the best outcome
14:59:31 +1
14:59:35 +1
14:59:35 tonanhngo, +1
14:59:39 +1
14:59:42 +1
14:59:42 banix: does that work for you?
14:59:57 yes
15:00:02 alright then
15:00:05 Thanks!
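
[Editor's sketch: for the address-pairs idea icoghla raised near the end of the kuryr-kubernetes topic, a very rough illustration of what such a short-term step could look like: while trunks/subports are not available, the container's address is whitelisted on the VM's Neutron port so port security does not drop its traffic. This is only the editor's reading of the proposal, not a plan agreed in the meeting; the function and variable names are illustrative, and `neutron` is assumed to be a python-neutronclient Client instance.]

    def allow_container_address(neutron, vm_port_id, container_ip, container_mac):
        # Append the container's IP/MAC to the VM port's allowed address pairs
        # so traffic sourced from the container is accepted on that port.
        port = neutron.show_port(vm_port_id)['port']
        pairs = port.get('allowed_address_pairs', [])
        pairs.append({'ip_address': container_ip, 'mac_address': container_mac})
        neutron.update_port(vm_port_id,
                            {'port': {'allowed_address_pairs': pairs}})
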
15:00:05 I'll update it
15:00:17 #info all the meetings will happen at 14utc
15:00:32 #action apuimedo to send the meeting reservation and ics updates
15:00:41 thank you all for joining the meeting!
15:00:46 it was a great one!
15:00:49 #endmeeting