16:01:56 #startmeeting containers
16:01:57 Meeting started Tue Aug 18 16:01:56 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:00 The meeting name has been set to 'containers'
16:02:05 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-08-18_1600_UTC Our Agenda
16:02:12 #topic Roll Call
16:02:14 o/
16:02:15 o/
16:02:16 Adrian Otto
16:02:16 o/
16:02:22 Ton Ngo
16:02:25 o/
16:02:29 \o_
16:02:30 Ronald Bradford
16:02:41 o/
16:02:56 hello apmelton, daneyon_, mfalatic, Tango, madhuri, tcammann, rbradfor, and eghobo_
16:03:06 o/
16:03:20 o/
16:04:00 hello dane_leblanc_, and hongbin
16:04:03 o/
16:04:12 hi bradjones
16:04:38 o/
16:05:04 o/
16:05:56 hello sdake and diga
16:05:58 let's begin
16:06:03 #topic Announcements
16:06:12 1) adrian_otto will be out on 2015-08-25 due to travel to the OpenStack Silicon Valley event. sdake will chair.
16:06:40 any other announcements from team members?
16:07:16 #topic Container Networking Subteam Update (daneyon_)
16:07:25 Last week's network subteam meeting had a ton of discussion around kuryr. What it is, what it's not, etc.
16:07:37 Unfortunately, we did not have anyone from the kuryr team in attendance. I received confirmation that someone from the kuryr team will join this week's meeting.
16:07:57 I attended the kuryr weekly meeting yesterday.
16:08:14 topics included config mgmt and the details of vif binding/unbinding
16:08:17 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-08-13-18.00.html Previous Meeting
16:08:24 and a WIP kuryr design spec
16:08:40 If you have time, please review the spec
16:08:48 link to the spec?
16:08:49 #link https://review.openstack.org/#/c/213490/
16:08:55 thanks daneyon_
16:09:04 Speaking of specs....
16:09:16 It would be great to wrap up the magnum network spec
16:09:34 Please let me know if you have any questions on my magnum net spec.
16:09:52 I have my magnum dev env up and going
16:10:04 I ran into a couple of bugs related to flannel
16:10:12 the network spec daneyon_
16:10:13 i tracked the bugs and committed fixes
16:10:14 was Tom's issue sorted out
16:10:19 thanks everyone for getting them merged
16:10:30 I am +2 on the spec with whatever changes are needed to get it through the system
16:10:38 when the magnum net spec is merged, i will create bp's for the individual tasks
16:10:57 in the meantime, I'm starting to hack at some of the changes proposed in the spec.
16:11:24 sdake_ yes, tcammann is happy and +2'd the revised spec
16:11:29 adrian_otto one thing that I found helped with rollcall votes
16:11:34 is to set a final deadline for approval
16:11:46 daneyon_ nice, i'll go vote on it again then and re-review
16:11:48 i think adrian_otto would like to do one last review
16:12:08 hopefully he is happy with the spec and it gets merged
16:12:09 my apologies for my delayed action
16:12:23 daneyon_ we are drinking from two firehoses atm
16:12:36 adrian_otto no worries... i know you are busy
16:12:46 daneyon, a family emergency has surfaced for me, and I may need to step away from work to attend to it.
16:12:56 sdake_ i hope you're thirsty... lol
16:13:14 more like hungry
16:13:19 plate overflowing!
16:13:26 adrian_otto unless there are any questions, that's all i have for the network subteam update
16:13:42 ok, let's get through our agenda, and I'll regroup with you on that subject
16:13:44 adrian_otto no worries. family 1st.
16:13:51 adrian_otto i hope all is well
16:15:44 1 sec
16:16:31 #topic Magnum UI Subteam Update (bradjones)
16:16:49 Not a huge amount to update on this week
16:17:04 I have pushed a WIP patch that is the view for BayModel
16:17:26 currently working on getting the API working, then once it is synced up I will be pestering you all for reviews :)
16:17:43 getting reviews from horizon folks without a magnum environment will be tricky
16:18:04 so hoping you guys will be able to help review in some capacity to check the workflow is as expected
16:18:16 bradjones note I don't think we can review the patches
16:18:30 sdake_: why?
16:18:48 adrian_otto I thought we had a separate group that didn't have magnum in it in gerrit
16:18:51 but could be wrong
16:19:06 you can review anything in Gerrit
16:19:14 oh i mean +2 review
16:19:16 sdake_: it doesn't have to be a +2 review, but just a +1 from a few people with magnum envs to test against will be great
16:19:20 yes of course I could review :)
16:19:25 I can work with Thai Tran here and provide him with a magnum environment to review
16:19:38 Tango: That would be great thanks
16:19:39 bradjones: I should be able to throw it in my magnum environment and play around with horizon
16:20:01 might need a little help getting it installed, but I'd be glad to test it
16:20:09 I will post a message in #openstack-containers when it is in a state to have a proper review
16:20:15 and I'm reasonably sure that magnum-core as a group belongs to the magnum-ui-core group
16:20:23 thanks bradjones
16:20:33 #topic Review Action Items
16:20:36 (none)
16:20:43 #topic Blueprint/Bug Review
16:20:53 Essential Blueprint Updates
16:21:24 I am going to skip the first on the agenda b/c we downgraded it
16:21:34 #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:22:22 After the midcycle meetup, we have a lot to do
16:22:59 hopefully only a few more patches are left
16:23:00 I noticed a flurry of activity in the code review queue
16:23:30 madhuri: is there anything with the core feature I can help with?
16:24:03 apmelton: Thanks, can we fix a time to discuss?
16:24:31 thanks for your help apmelton
16:24:54 madhuri: anything you'd like to identify for team discussion today?
16:25:00 adrian_otto: I apologize for the delay, I just shifted to India, so some work
16:25:12 But I am back to work now
16:25:18 welcome back!
16:25:30 congrats on the new job!
16:25:57 Thanks apmelton :)
16:26:18 should I advance to the next work item, madhuri?
16:26:29 Sure
16:26:36 actually, there's something I'd like to bring up here
16:27:05 the effort to pull the lbaas cert manager into Castellan has stalled
16:27:15 are we still planning to use their code?
16:27:23 the lbaas cert manager code
16:27:42 o/
16:27:47 ah, looks like good timing
16:28:06 rm_work: was the one working on the effort
16:28:11 we are using it for now
16:28:49 o/ : "entering late"
16:29:07 so, what I'm wondering is, if both Magnum and Octavia are planning to support that code, should we attempt pulling it into an oslo library?
16:29:17 apmelton: What's your concern?
16:29:51 Yes that can be an improvement.
16:30:25 I guess my concern is maintaining that code in two trees
16:30:28 That's a different task and we can discuss it separately
16:30:34 alright
16:30:43 yes, I would love to see it somewhere common
16:30:55 +1
16:30:57 there is some refactoring that needs to happen anyway, so i would support an effort to get it into a common project
16:31:21 maintaining which code?
16:31:35 sorry irc lagged out
16:31:37 sdake_: the lbaas cert manager code we're planning to use in our cert manager
16:31:44 thanks
16:31:53 LBaaS abandoned its effort to get the code into Castellan, because we realized the CertManager interface wasn't going to work out for us anyway for technical reasons, but the CertGenerator code (which i think you actually care more about?) is still going to be used
16:32:06 it was just an even harder sell for Castellan so we hadn't tried yet
16:32:09 rm_work: yes, sorry, CertGenerator
16:33:03 I guess what we can do is solidify our use case in our tree, then pull out the common pieces once we've got it working
16:33:25 Yeah, I will be happy to help with that when you are working on it
16:33:33 whoever takes that task can drop me a line
16:33:35 ok, so let's do this one step at a time.
16:33:57 let's get the solution merged in Magnum first, and then decide how best to share it among projects
16:34:05 sounds good to me
16:34:09 yep
16:34:15 +1
16:34:29 +2
16:34:43 Update objects from the bay
16:34:46 cool, I'll make a note to put this on our topic list for Tokyo if we don't get to it before then.
16:35:04 vilobhmm: proceed
16:35:10 sure :)
16:35:21 Last week I took ownership of it. We got the design and approach sorted out, thanks to sdake for providing help wherever needed. Looks like there is no Query-by-UUID k8s ReST API https://github.com/kubernetes/kubernetes/issues/4817 in Kubernetes. Whereas in each of our resources (pod/rc/service) we try to instantiate the object using https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/utils.py#L70 but if we p
16:35:46 truncated?
16:36:00 We got the design and approach sorted out, thanks to sdake for providing help wherever needed. Looks like there is no Query-by-UUID k8s ReST API https://github.com/kubernetes/kubernetes/issues/4817 in Kubernetes.
16:36:18 Whereas in each of our resources (pod/rc/service) we try to instantiate the object using https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/utils.py#L70 we use QUERY-by-UUID
16:36:34 but if we plan to create the objects for these resources using the ReST Bay/k8s endpoints we only have the option to get it by name, as only QUERY-by-NAME is supported as of now
16:36:45 So my plan is to #1. get the uuid -> name conversion as part of QUERY-by-UUID, will get this information from the magnum db #2. Once we get the name in #1, do a QUERY-by-NAME to fetch details from the k8s ReST API endpoints. Want to know your thoughts on this.
16:37:06 So until Kubernetes provides a QUERY-by-UUID we will need a way to store the mapping from uuid to name, and at present the best place to keep it IMHO happens to be the table for the respective resource, for example magnum.rc, magnum.service etc...
16:37:16 adrian_otto : I hope it is not truncated now
16:37:44 so just to clarify, when getting a list
16:37:54 the process is: for pod in list_from_k8s:
16:38:14 pod = k8s_query_by_name
16:38:19 ok
16:38:24 pod_uuid = k8s_query_by_uuid?
16:38:25 I think it is better to have an object representation in the magnum db, which will provide the translation/mapping to the rc/service/pod object of k8s
16:39:00 suro-patz I'd rather avoid that, because if someone creates an object with the native client, the mapping won't be present in the database
16:39:28 sdake: point noted
16:39:29 So, we are not able to: magnum pod-show ?
16:39:30 Agree
16:39:36 sdake : but right now there is no way to have a uuid-name mapping
16:39:51 so what precisely is the proposal on uuids
16:39:58 without cut and paste ;)
16:40:01 sdake: but a name is a unique identifier given a bay
16:40:06 I think this should read from kubernetes, not the magnum db
16:40:16 madhuri : +1
16:40:18 +1
16:40:37 suro-patz yes I think we can use the bay uuid to help generate a uuid from the pod id
16:40:44 pod name rather
16:40:46 that's what the blueprint is for IMHO... just that right now only the QUERY-by-NAME k8s ReST api is available
16:40:48 How about store static information in the DB, and read from kubernetes for dynamic state
16:41:19 +1
16:41:24 nothing about pods can be in the database, because it won't be updated when a native client is used to write to k8s
16:41:31 hongbin: +1
16:41:31 that's the direction we aimed at when we discussed this last as a team
16:42:01 essentially the database will be missing information on native client write operations
16:42:06 sdake, things like the name of the pod and its identifier can be in the db
16:42:25 adrian_otto: not if it was created by the native client
16:42:27 +1
16:42:30 kubectl create pod.yaml
16:42:32 adrian_otto : +1 for now, until https://github.com/kubernetes/kubernetes/issues/4817 gets resolved
16:42:39 we can use a synchronization approach to make sure the static state remains mirrored
16:42:53 adrian_otto that will create a pod that kubernetes doesn't know about
16:43:00 apmelton: yes, this is the same as heat convergence
16:43:00 rather, that magnum doesn't know about
16:43:15 yes i thought of this sync approach adrian_otto
16:43:23 it won't know about it initially, but it can learn about it soon after it is created
16:43:26 it seems better imo just to always get from the k8s api
16:43:44 I see
16:43:46 I thought the entire point of using CoE objects directly was to get us out of the sync and lock game
16:43:51 because a sync has to do the same thing in essence
16:43:55 get it from the k8s endpoint
16:44:22 so in that case we don't persist pod information at all. I could live with that.
16:44:25 apmelton no, it is to get us to be able to use native clients; what you mention is a side benefit ;)
16:44:34 or rc/service info
16:44:44 adrian_otto that is what we are after here ;)
16:45:12 Yes, better to get from the k8s endpoint directly
16:45:14 vilobhmm if you have more test patches in your stream, please put them up for review
16:45:15 even if they are wip
16:45:31 i'd like to see the whole stream before merging
16:45:42 magnum api's : just keep the uuid-name mapping in the db, just the mapping.. whereas all the "fresh" data is accessed and updated using the k8s ReST endpoints... for the native client : get directly from the k8s endpoints
16:45:45 so we aren't in a state where some stuff comes from the db and some stuff comes from the k8s endpoint
16:45:48 sdake : sure will do
16:45:54 What's the outlook for kubernetes to provide this support?
16:45:56 if the objective is to make the magnum client behave as per the native client, the magnum api should behave as a pass-through
16:46:01 make sure to title it WIP:
16:46:10 sdake : yup
16:46:24 ok, vilobhmm ready to advance topics now?
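[Editor's note: a minimal sketch of the two-step lookup vilobhmm proposes above, assuming a plain HTTP client and the Kubernetes v1 ReST paths. The function name, endpoint address, and uuid_to_name mapping are illustrative stand-ins, not Magnum's actual code.]

    import requests

    def get_pod_by_uuid(k8s_api, uuid_to_name, pod_uuid, namespace="default"):
        """Fetch live pod state from k8s given only a magnum-side UUID."""
        # Step 1: uuid -> name, from the mapping kept on the magnum side,
        # since k8s has no query-by-uuid (kubernetes issue #4817).
        name = uuid_to_name[pod_uuid]
        # Step 2: QUERY-by-NAME against the bay's k8s endpoint.
        resp = requests.get("%s/api/v1/namespaces/%s/pods/%s"
                            % (k8s_api, namespace, name))
        resp.raise_for_status()
        return resp.json()  # live state; nothing is written back to magnum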
16:46:25 Tango I don't understand the q
16:46:50 sdake: the issue that vilobhmm pointed out
16:47:06 adrian_otto : sure.. just a last question to you and the team: does this seem fair: "just keep the uuid-name mapping in the db, just the mapping.. whereas all the "fresh" data is accessed and updated using the k8s ReST endpoints... for the native client : get directly from the k8s endpoints"
16:47:24 for the magnum api - just keep...
16:47:51 so that i can submit more WIP patches based off this
16:48:45 vilobhmm: let's start with a naive implementation that fetches all pod and service and rc related state directly from k8s rather than persisting duplicates of state data in our own db
16:49:07 adrian_otto : seems fair
16:49:11 we can move ahead
16:49:12 if that proves to be a problem, then we can optimize from there
16:49:15 thanks all!
16:49:22 adrian_otto: +1, no data in magnum db
16:49:23 alrite
16:49:30 cool, thanks!
16:49:33 madhuri : +1 thanks
16:49:39 link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:49:42 #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:50:09 We got Angus Lees (gus) to engage directly, so that's good news
16:50:42 He pointed out 3 new patches that we need. They are not merged yet, so I built a custom Kubernetes version based on 1.0.3
16:50:45 I also saw references to k8s patches
16:51:18 I am testing it now. With gus's help, I am hopeful
16:51:22 we could "bird dog" stalk those patches with our k8s friends
16:52:00 Angus thinks there may be a few more bugs to fix
16:52:03 at a minimum simply express our interest in those by updating them in the k8s project bug tracker
16:52:12 Yep
16:52:57 Just for an update, I generated the 1.0.3 kubernetes client code and it is not working
16:52:59 ok, Tango once we have a working setup, and we know what patches we want to land upstream, please let me know so I can apply what influence we have there so they get the attention they deserve
16:53:05 I am also working on a patch to set the parameters for the heat templates, will upload the initial version shortly, then will likely need feedback from the team
16:53:41 And adding a functional test based on wordpress, although it won't run until we have V1 api support
16:54:16 Tango: I generated v1 client code only
16:54:35 ok, should we pull in team members to assist with that?
16:54:40 madhuri: Can I pick them up?
16:55:14 I am busy with TLS; if anyone wants to take it, that would be good
16:55:27 Yes it would be good to have help moving to the V1 api
16:55:28 Sure Tango
16:55:35 I can help if you want
16:55:45 awesome hongbin!
16:55:47 hongbin: Thanks hongbin, will ping you
16:55:52 k
16:55:54 Anyone please :)
16:56:29 Tango: Please let me know if I can help in any way
16:56:35 #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:56:41 we have code for this now
16:56:52 got a WIP review up, I'm updating it as reviews show up and patches land
16:56:54 Tango: I can help in setting parameters for the heat templates
16:57:02 won't really be finished until the core TLS stuff lands
16:57:15 thanks apmelton
16:57:16 #topic Open Discussion
16:57:37 I am scheduled to appear in a keynote panel at OpenStack Silicon Valley
16:57:39 I have a question: what is the deadline for landing liberty features for Magnum?
16:57:54 there is a remote chance I may have family business that conflicts with it
16:58:03 +1 hongbin
16:58:09 so I may reach out to a few of you to act as a standby
16:58:47 the event is on August 26+27 in Mountain View, CA
17:00:02 hongbin: there is not an official deadline, but we need all new features ASAP
17:00:16 time elapsed
17:00:18 i have to jump to my next meeting
17:00:40 our next meeting is 2015-08-25 at 2200 UTC (sdake chairs)
17:00:43 #endmeeting
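[Editor's note: the "naive pass-through" agreed at 16:48:45, sketched under the same assumptions as the earlier note (plain HTTP client, k8s v1 ReST paths, illustrative names). List operations read everything live from the bay's k8s endpoint and persist no pod/rc/service state in the magnum db, which is why objects created with a native client remain visible without any magnum-side sync.]

    import requests

    def list_pods(k8s_api, namespace="default"):
        """Return (uid, name) for every pod, read live from the bay."""
        resp = requests.get("%s/api/v1/namespaces/%s/pods"
                            % (k8s_api, namespace))
        resp.raise_for_status()
        # Every item carries metadata.uid and metadata.name, so a pod made
        # via "kubectl create" shows up here with no extra bookkeeping.
        return [(p["metadata"]["uid"], p["metadata"]["name"])
                for p in resp.json().get("items", [])]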