14:00:55 #startmeeting kuryr
14:00:55 Meeting started Mon Sep 5 14:00:55 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:59 The meeting name has been set to 'kuryr'
14:01:13 o/
14:01:22 Hello and welcome to the first September meeting of Kuryr
14:01:28 who's here for the fun?
14:01:42 o/
14:01:45 o/
14:01:45 o/
14:02:49 devvesa: are you gere?
14:02:51 *here
14:03:35 o/
14:03:43 o/
14:04:04 #info ivc_, tonanhngo, limao, vikasc, devvesa, janonymous and apuimedo in attendance
14:04:14 #topic kuryr-lib
14:04:41 Some good work in cleanups for the upcoming release happened in the last week
14:04:58 #info all the necessary cleanups for kuryr-lib got merged
14:05:05 #link https://review.openstack.org/361972
14:05:12 #link https://review.openstack.org/361937
14:05:20 #link https://review.openstack.org/361937
14:05:23 mmm
14:05:28 crap, I repeated one
14:05:54 #link https://review.openstack.org/362070
14:06:12 now, on to what's missing
14:06:16 for the release
14:06:31 #info we are missing the keystonev3 support that I've been working on
14:07:20 I'm merging the keystone and neutron configuration all inside a neutron section
14:07:34 so, like in Nova and Neutron, there'll be just one section
14:07:38 [neutron]
14:07:46 that will contain neutron and neutron auth opts
14:08:20 that's all I have on kuryr-lib
14:08:24 I have a question:
14:08:29 tonanhngo: go ahead
14:09:00 When we configure for the VM case, would this config continue to be stored on the VM?
14:10:11 no, not this one
14:10:41 the only configuration on the VM would be how to reach the kuryr-lib rest server that will then send the calls to neutron
14:10:49 vikasc is working on that
14:11:05 oh, I forgot
14:11:08 apuimedo, rpc_server
14:11:16 vikasc: yes, sorry
14:12:08 more questions? topics? under kuryr-lib
14:12:10 ?
14:12:13 ok, for Magnum integration, one concern is storing things like account name, password in a config file on the VM
14:12:25 just an info
14:12:39 i have updated testing for the rest driver in kuryr-lib https://review.openstack.org/#/c/342624/
14:12:42 tonanhngo: what's the current solution?
14:13:09 there should probably be some auth between the kuryr-libnetwork remote driver in the VMs and the rpc server
14:13:44 either that, or whitelisting origin addresses
14:13:58 Before Kuryr, Magnum does not store any service password on the VM
14:14:19 We leave it to the users to enter these, for security
14:14:31 tonanhngo: how does the kubernetes load balancer plugin that talks to Neutron work then?
14:14:46 It uses the user credential
14:15:26 but now, it seems we need to give the service passwords, like Neutron, rabbit
14:15:48 we do not need rabbit on the VMs
14:16:04 for the openvswitch agent?
14:16:10 that runs on the host
14:16:13 not on the VMs
14:16:25 the only credentials would be something so that only the remote driver talks to the kuryr rpc server
14:16:56 ok good, then is there a security concern in this case?
14:17:08 well, we have to decide how to do it
14:17:22 it's still undefined
14:17:40 first we have to get it running, then we'll restrict access
14:17:56 sounds good, I am just voicing a concern from what we see.
14:18:08 tonanhngo: and you do well to keep it in our minds
14:19:00 tonanhngo: can you remind me how the Kubernetes on Magnum, which runs on a VM, accesses the Neutron API server
14:19:05 which runs on the overlay
14:19:15 ?
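A side note on the single [neutron] section discussed above under the kuryr-lib topic: a minimal, hypothetical sketch of how one config group can carry both the Neutron client options and the Keystone v3 auth options via keystoneauth1's loading helpers. The group name comes from the discussion; the wiring and helper function are illustrative assumptions, not the actual kuryr-lib patch.

```python
# Hypothetical sketch only -- not the kuryr-lib patch under review.
# It shows how keystoneauth1 can register and load both session and
# Keystone v3 auth options under a single [neutron] config group.
from keystoneauth1 import loading as ks_loading
from neutronclient.v2_0 import client as neutron_client
from oslo_config import cfg

CONF = cfg.CONF
NEUTRON_GROUP = 'neutron'  # the single section discussed above

# Register the keystoneauth session and auth options under [neutron], so
# operators set auth_url, username, password, project_name, etc. there.
ks_loading.register_session_conf_options(CONF, NEUTRON_GROUP)
ks_loading.register_auth_conf_options(CONF, NEUTRON_GROUP)


def get_neutron_client():
    """Build a Neutron client from the [neutron] section alone."""
    auth = ks_loading.load_auth_from_conf_options(CONF, NEUTRON_GROUP)
    session = ks_loading.load_session_from_conf_options(
        CONF, NEUTRON_GROUP, auth=auth)
    return neutron_client.Client(session=session)
```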
14:19:37 The Kubernetes plugin uses the user credential to authenticate with Keystone and get a session
14:20:00 then the client just talks to Neutron in that session
14:20:25 It requires the service endpoints, so that's what Magnum configures
14:20:52 so magnum makes keystone and neutron available in the overlay
14:21:12 for the user credential, Magnum does not handle it. We leave it to the user to log into the node and type in the credential
14:21:53 Yes, the user VM can access the services
14:22:29 I haven't checked from the Flannel overlay, but I think the pods can reach the services also, but the pods would not have the credential
14:22:57 ok. I'll try to set up a devstack to check it out
14:23:01 thanks tonanhngo
14:23:22 There is a concern that the pods can reach the config files and get the credential, but we don't deal with that since that's how Kubernetes sets it up
14:23:23 thanks tonanhngo
14:23:29 I guess this authentication code is for reference: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack.go#L219
14:23:31 np
14:23:32 #action apuimedo, vikas to work out the auth remotedriver -> rpc driver
14:23:43 thanks janonymous
14:23:54 #link https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack.go#L219
14:24:01 #topic kuryr-libnetwork
14:24:51 #info the contents of the 'common' package have been moved to the base package as agreed, thanks to vikas' patch https://review.openstack.org/361567
14:25:08 some code cleanups as well
14:25:15 #link https://review.openstack.org/362857
14:26:01 #info the sample config file is now generated by a script that prevents common mistakes when using oslo-config-generator
14:26:06 #link https://review.openstack.org/364240
14:26:30 this will prevent the wrong config being generated when you are working on config patches
14:27:07 #info limao fixed the MTU bug that meant that the MTU was not being retrieved from Neutron
14:27:22 #link https://review.openstack.org/#/c/355714/
14:27:54 #info Docker 1.9 compat patch was merged as well https://review.openstack.org/352768
14:28:18 #info devstack should now be back in a working state
14:28:47 limao: did you try https://review.openstack.org/365336
14:28:57 I wonder if the job now completes successfully
14:29:30 job has not passed yet
14:29:34 I will check them
14:30:00 limao: thanks
14:30:03 i will help too..
14:30:14 #action limao and janonymous to check the rally jobs
14:30:16 :-)
14:30:41 :-)
14:31:47 #action vikasc apuimedo to review https://review.openstack.org/#/c/363414/1
14:32:10 we had originally screwed up the move from 2376 to 23750
14:32:53 anything else about kuryr-libnetwork?
14:33:40 alright, moving on
14:33:48 #topic kuryr-kubernetes
14:34:13 #info last Thursday a kuryr-kubernetes videoconf meeting was held
14:34:29 #info More information about it can be found on the mailing list
14:34:41 Are there recordings of that meeting?
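To make the credential flow tonanhngo describes above concrete: the linked provider code is Go, but a rough Python sketch of the same pattern (user credential -> Keystone v3 session -> Neutron client) could look like the following. The endpoint, user, and project values are placeholders, not anything Magnum actually ships.

```python
# Rough sketch of the flow described above: authenticate with the user's
# own credential against Keystone v3, get a session, and talk to Neutron
# through that session. All values below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client

auth = v3.Password(
    auth_url='https://keystone.example.org:5000/v3',  # endpoint Magnum configures
    username='demo',
    password='typed-in-by-the-user',   # never stored by Magnum
    project_name='demo',
    user_domain_name='Default',
    project_domain_name='Default')

sess = session.Session(auth=auth)      # token is fetched on first use
neutron = neutron_client.Client(session=sess)

# e.g. what a load balancer plugin would do with the session
print(neutron.list_networks())
```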
14:36:34 flaper87: can be provided on demand
14:36:36 :-)
14:36:48 but I made quite extensive meeting points
14:36:55 in the mailing list
14:37:14 apuimedo, agree
14:37:29 #info devvesa is working on the python3 upstreaming
14:37:55 #info ivc_ is working on a py2/py3 eventlet PoC that matches its features
14:38:09 we're reviewing both
14:38:44 #info There is a new spec proposal for the watcher/translator interaction for kuryr-kubernetes
14:38:55 #link https://review.openstack.org/365600
14:39:01 I appreciate reviews
14:39:03 still have to take a deep look, but I like it :)
14:39:12 especially from devvesa and ivc_, since they are implementing
14:39:15 cool
14:40:05 we should also be studying the Magnum deployment for the kubernetes integration
14:40:07 +1 devvesa. tho need to take a deeper look at it
14:40:15 (what I was saying earlier was more for kuryr-libnetwork)
14:40:45 #action apuimedo, vikas to review https://review.openstack.org/363758
14:40:58 sure
14:41:24 btw, this patch introduces kind of a first approach
14:41:27 #action apuimedo, devvesa, ivc to come up with a functional testing approach for the gates
14:41:27 https://review.openstack.org/#/c/363758/
14:41:46 #link https://review.openstack.org/#/c/363758/
14:42:14 thanks limao for getting so fast to testing the libnetwork gates :-)
14:42:34 :-)
14:42:39 alright, does anybody else have stuff for the kubernetes integration?
14:42:41 for functional testing, first we should create a devstack plugin to deploy, right?
14:42:54 devvesa: yes
14:43:02 any volunteer in the room?? :D
14:43:04 probably minikube
14:43:13 :-)
14:43:31 minikube is virtualbox based i'm afraid
14:43:39 ivc_ yes
14:43:44 oh, sorry, ivc_, I meant hyperkube
14:43:46 xD
14:43:48 or more specifically its vagrant
14:44:09 I prefer bare-metal for starters
14:44:18 so hyperkube should serve us
14:44:19 I think this script can run locally
14:44:22 https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh
14:44:30 with nothing else than Go 1.6
14:44:38 But I haven't tried so far.
14:44:49 devvesa: compiling and running it all in the gate may be too resource intensive
14:44:50 i think we can just go with hyperkube
14:44:53 What's true is that functional testing is mandatory
14:44:54 +8gb ram for build, 64bit os
14:45:03 that's far too much
14:45:09 :(
14:45:11 I propose a devstack plugin with hyperkube
14:45:18 +1 on that
14:45:27 and that the kubelet container mounts the CNI driver on a volume
14:45:31 last time I looked at it, I could not find the command that spawns hyperkube anymore
14:45:40 really?
14:45:52 apuimedo: I think it is deprecated
14:45:55 #action apuimedo to check hyperkube for signs of life
14:46:06 pablochacin: that makes me a bit grumpy
14:46:13 #action apuimedo to ask kubernetes-dev on Slack :)
14:46:25 installing Slack again...
14:46:28 ok, ok
14:46:32 I'll get to that too
14:46:36 I can do it, no problem
14:46:53 #action devvesa to ask about hyperkube on the kubernetes-dev slack
14:46:55 i've got a hyperkube-based deployment in my poc. i'll be adding details on that to my research doc
14:46:56 thanks devvesa
14:47:01 minikube maybe but not sure
14:47:35 let's try to come up with the necessary info to make a decision for next Monday ;-)
14:47:48 +1
14:47:57 #info next meeting to discuss and decide the testing approach and tools
14:48:07 anything else?
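On "the kubelet container mounts the CNI driver on a volume" proposed above: with kubelet's CNI network plugin, the driver is just an executable that kubelet invokes with a few environment variables and the network config as JSON on stdin. A bare-bones, hypothetical skeleton of such an executable, following the CNI 0.2.0-era contract, is sketched below; it is not kuryr's actual driver, and the returned address is a made-up placeholder.

```python
#!/usr/bin/env python
# Bare-bones CNI plugin skeleton (CNI spec 0.2.0 era). A real driver
# would allocate a Neutron port via kuryr and wire a veth pair into the
# container's netns; this stub only shows the calling convention.
import json
import os
import sys


def main():
    command = os.environ.get('CNI_COMMAND')        # ADD, DEL or VERSION
    container_id = os.environ.get('CNI_CONTAINERID')
    netns = os.environ.get('CNI_NETNS')            # path to the pod's netns
    ifname = os.environ.get('CNI_IFNAME')          # e.g. eth0

    if command == 'VERSION':
        json.dump({'cniVersion': '0.2.0',
                   'supportedVersions': ['0.1.0', '0.2.0']}, sys.stdout)
        return

    net_conf = json.load(sys.stdin)                # network config from kubelet

    if command == 'ADD':
        # Placeholder result; a real driver reports the address it plugged
        # into the netns for ifname/container_id.
        json.dump({'cniVersion': net_conf.get('cniVersion', '0.2.0'),
                   'ip4': {'ip': '192.0.2.10/24'}}, sys.stdout)
    elif command == 'DEL':
        pass                                       # release the port here


if __name__ == '__main__':
    main()
```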
14:48:12 apuimedo: yes
14:48:28 just a procedural question about docs location
14:48:58 #info Usage docs should go on kuryr-kubernetes / kuryr-libnetwork
14:49:05 should devrefs be hosted on the kuryr-k8s or kuryr repo?
14:49:08 #info specs on openstack/kuryr
14:49:17 apuimedo: thanks
14:49:30 if everybody disagrees we can change it though
14:49:44 I just want to minimize the effort to add info for now
14:49:51 apuimedo: design on k8s/libnetwork
14:50:17 and approving the specs in one place, making it easier to refer to one another, seems more comfy to me
14:50:32 irenab: please, elaborate
14:50:38 apuimedo: specs are not devrefs
14:50:46 devrefs are more design
14:50:54 specs are more requirements
14:51:01 irenab: the distinction gets a bit lost on me, sorry
14:51:11 so probably my last patch is more of a devref
14:51:16 a spec is about what and a devref is about how
14:51:32 at least that's how I get it
14:51:33 irenab: I'd be grateful if you could help me understand it with examples from our current docs
14:51:39 it can be offline
14:51:45 and I'll send an email to the mailing list
14:51:47 we may need to add a new directory for 'why'
14:51:48 so we can all weigh in
14:51:51 and decide
14:51:53 (sorry, bad jokes)
14:52:04 devvesa: :-)
14:52:07 (you know I support you, Irena)
14:52:10 apuimedo: sure, but just for now, are devrefs to be in the specific repos or the general kuryr one?
14:52:27 irenab: until we take it to the ML, yes
14:52:41 the current status is that all devrefs and specs go to openstack/kuryr
14:53:08 but I agree that the implementation-heavy documentation will probably live better in kuryr-libnetwork and kuryr-k8s
14:53:11 as long as there is a guideline, it does not matter too much
14:53:13 ;-)
14:53:32 I guess most of us have the four repos cloned in any case
14:53:37 so we'll find the better fit
14:53:45 but it's not a showstopper, I hope
14:53:54 but let's have it fixed soon
14:54:03 other questions/comments?
14:54:22 apuimedo: question about the newly added sub-project for storage
14:54:29 I'm not getting to the address pairs point in the agenda since icoughlan couldn't attend
14:54:35 irenab: go ahead
14:55:09 #action gsagie to get fuxi people on the weekly meetings so that they also participate :-)
14:55:10 does it have its own core reviewers? How is it going to be managed: blueprints, specs, etc.?
14:55:25 irenab: it does not have its own core reviewers yet
14:55:34 the idea, just as with libnetwork and kubernetes
14:55:41 is that they can get their own cores
14:55:53 but kuryr-core people will keep core over the repo as well
14:56:30 irenab: oh, sorry, correction, the main developer is a core
14:56:36 Zhang Ni
14:56:47 apuimedo: and for specs, devrefs, will it be the same policy as with k8s/libnetwork?
14:56:50 so it would be good if he can join the weekly meetings
14:57:04 irenab: we should try to have the same policy for all
14:57:23 apuimedo: makes sense
14:57:35 It should make us flexible enough
14:57:41 #topic open discussion
14:57:50 any other open discussion item in these last three minutes?
14:58:31 I'd like to remind everybody about the mailing list thread to discuss and propose topics for the summit work sessions
14:58:51 Thanks for the agenda, btw
14:59:08 devvesa: you're most welcome
14:59:19 I think that a reminder and agenda suggestions can draw attention from other projects/people
14:59:24 although I should have published it on Friday at the latest
14:59:31 indeed
14:59:57 #action apuimedo to post the agenda by Thursday and send an ML reminder so people have time to add items
15:00:06 We should kind of improve our visibility. Devstack/functional testing is an example where we need collaborators
15:00:23 However, I don't know how to do it
15:00:27 devvesa: any proposal?
15:00:29 ah, ok
15:00:31 :-)
15:00:37 well, increasing ML communication is the easiest way
15:00:41 also, if you are trying things
15:00:44 yes. and first one
15:00:47 please blog and tweet about it!
15:01:04 that was another idea... but I am afraid I have no time to blog properly
15:01:10 we all love to know what everybody else is doing
15:01:14 devvesa: then tweet ;-)
15:01:16 for example
15:01:21 a blog post is a considerable amount of work
15:01:49 Oh, my twitter account is only about cycling :D I'll have to create another account
15:01:53 gotten #openstack #kuryr #kubernetes python3 upstreaming to work with pod translation
15:01:55 :-)
15:02:06 devvesa: do it then ;-)
15:02:15 or I'll make a kuryr logo on a bicycle
15:02:22 Thank you all for joining!
15:02:25 thanks!
15:02:28 have a nice and productive week
15:02:30 #endmeeting