15:00:21 #startmeeting kuryr
15:00:22 Meeting started Mon Feb 1 15:00:21 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:25 The meeting name has been set to 'kuryr'
15:00:37 Hello everybody and welcome to yet another kuryr meeting!
15:00:47 Hello!
15:00:48 hi
15:00:50 hi
15:00:56 who's made all the road to the show today?
15:01:03 heh
15:01:30 all gone
15:01:46 irena will be joining later
15:01:57 #info gsagie baohua banix are here
15:02:06 fawadkhaliq: are you not in?
15:02:16 here here
15:02:18 fawad is here i believe
15:02:20 o/ for lurking
15:02:31 :-)
15:02:41 thank you all for joining
15:02:43 hi hongbin, thanks for the review on the nested containers
15:02:52 gsagie: my pleasure
15:02:54 indeed hongbin
15:02:55 fawadkhaliq: may have gone back to sleep in the confusion about the meeting time …
15:02:59 thank you all for the comments
15:03:07 banix: :-)
15:03:20 It's not even proper night there, fawadkhaliq, is it?
15:03:25 Yeah, we need to either fix the wiki page about the IRC meeting times, or make the next meeting the same time
15:03:34 yes
15:03:48 aloha
15:03:53 +1
15:04:06 gsagie: nothing to fix; it is correct
15:04:08 #info fawad, hongbin and salv-orlando are here too
15:04:18 #topic weekly updates
15:04:29 banix: the wiki might be correct, but the ics file you download is not.
15:04:45 Importing it into 2 distinct calendars gave me the same result. This week Tue, next one Mon
15:04:53 we should recheck the ics file and fix it, is it generated automatically?
15:04:57 anyway this can be fixed offline, not a big deal
15:05:02 salv-orlando: i just downloaded it and it looks ok in my calendar… wondering why it may not be correct for others...
15:05:16 #info apuimedo presented kuryr at FOSDEM yesterday. Showed LBaaS, FIPs, VM-Container interaction
15:05:33 banix: what app do you use for calendaring?
15:05:41 banix: idk frankly, I have adjusted the calendar manually now so icbb anymore
15:05:51 any feedback on FOSDEM?
15:06:25 that we had the pip requirements file sorely lacking
15:06:28 apuimedo: just checked with the mac Calendar
15:06:52 since probably we all use devstack on a single machine
15:07:10 so trying to deploy an extra machine only with docker, kuryr and a neutron agent was a bit of a pain
15:07:20 we need to sort out the packaging real soon
15:08:04 #info mestery sent some patches for the infra requirement checking task to vote on kuryr patches
15:08:22 i think devvesa worked on the Ubuntu packaging?
15:08:30 I expect that when that gets merged, all the patch jobs will fail until we properly fix requirements.txt
15:08:49 gsagie: he is close to finishing, I probably need to bump into him
15:09:11 #action apuimedo to follow up on the packaging with devvesa
15:09:47 does anybody have any other event or update from last week that is not directly related to other topics in the agenda?
15:09:54 I think Hui Kang is also close to getting the containerized version pushed to Kolla
15:10:25 banix: cool, maybe he could add some instructions to the Kuryr devref regarding that
15:10:26 :O
15:10:34 that is really awesome!
15:10:58 sure, I don't see him online now but will talk to him
15:11:01 looking forward to trying that
15:11:11 really cool
15:11:11 banix: very nice. +1 to the instructions on devref
15:11:38 #action banix to follow up with Hui Kang on kolla progress and patch to Kuryr devref
15:11:44 anything else?
15:12:08 apuimedo: The first requirements patch merged; once https://review.openstack.org/274446 merges, the proposal bot will take care of Kuryr for requirements updates.
15:12:31 mestery: thanks. I didn't even know the bot had such brains
15:12:41 :)
15:12:53 bot gets smarter :)
15:13:04 there's also been some work from baohua on rally integration this week, hasn't there?
15:13:15 yeah
15:13:28 oh, yes, i started to write the plugin, but not sure if it should be put into the kuryr repo or rally's.
15:13:40 patch here: https://review.openstack.org/274014
15:13:55 apuimedo: this also means that you subscribe to the requirements contract.... mestery surely has the link for it
15:14:08 gsagie: baohua: how does the new plugin fit with gsagie's previous work?
15:14:16 but in a nutshell you no longer control your requirements.txt and use g-r for any update you might need
15:14:23 the plugin framework is finished, and some new rally tasks are added.
15:14:35 salv-orlando: contract... I hope it comes with liquorice candy as compensation
15:14:41 salv-orlando: I'd have to dig it out, but the main thing it prevents is manually updating requirements like I saw this weekend with a bunch of patches by apuimedo :)
15:14:44 apuimedo: it's not a new plugin, what baohua is working on is defining new test scenarios and adding them to run in the gate
15:14:49 ok, I'm fine with it
15:14:54 apuimedo: It comes with liquor candy, is that ok? ;)
15:14:58 oh, IMHO, gal's work is for the gate testing. and we need more scenarios.
15:15:10 to forget bad situations?
15:15:20 baohua: agreed, thanks ;-)
15:15:32 i suggest we add a new plugin for kuryr specially.
15:15:33 baohua: if we keep it in the Kuryr repo, and i think that we should for now, we should probably also add some Rally core reviewers to the reviews
15:15:53 #info baohua sent new rally test scenarios https://review.openstack.org/274014
15:16:03 yes, i want to hear comments from u and other reviewers. gsagie
15:16:07 to make sure we do things right, i wanted to talk with them about adding a "Docker" context driver
15:16:19 sure, thanks!
15:16:27 gsagie: baohua: mind a bit of a debate on tests in kuryr vs plugin in rally?
15:16:39 I think that up until now the consensus was to have it in our repo
15:17:09 yes, we can look for the rally team's comments. maybe stay in kuryr, and move to rally in future.
15:18:11 I think we should focus more on fullstack tests now and continue with Rally after; Rally is mostly better when we want to benchmark things, like running scenarios under some relatively bigger load
15:18:15 well, we'll need to define some sort of point of when to move it to the rally repo
15:18:24 I don't think we have close to enough coverage yet
15:18:38 anyway, let's all keep adding scenarios to the test
15:18:48 Let's see if I have time to add some for the fips and lbs
15:19:10 sure, and there's also a full-stack patch under review :) https://review.openstack.org/265105
15:19:19 we also need some for IPAM
15:19:27 #link https://review.openstack.org/265105
15:19:28 you can put an action on me for that
15:19:30 agreed
15:19:42 sure
15:19:44 #action gsagie to add ipam testing
15:19:53 #topic magnum integration
15:20:29 https://review.openstack.org/#/c/269039/
15:20:33 fawadkhaliq has been answering comments in https://review.openstack.org/#/c/269039/3/doc/source/specs/mitaka/nested_containers.rst
15:20:43 #link https://review.openstack.org/#/c/269039/
15:20:48 good progress there from fawad, i personally like the general direction
15:20:51 thanks everyone for feedback and comments
15:21:02 good discussion as well.
15:21:21 i do think we need to maybe break out some action items to start working on, like adding the Kuryr Heat resource template for Magnum
15:21:48 sorry for being late
15:22:11 gsagie: +1, irenab also suggested similarly that we should consider having specs for the respective components. I like that idea.
15:22:29 I will register specs for those and will need some help there.
15:22:33 gsagie: do we need the Heat resource templates for Kuryr regardless of their use by Magnum?
15:23:03 banix: that is something that sounds useful
15:23:15 banix: probably can be consumed by other orchestrators that sit on top of OpenStack
15:23:20 Magnum currently uses scripts for setup.
I would say a Heat resource is a plus, but not a must
15:23:25 tripleo for example
15:23:26 Cloudify can be one..
15:23:38 yup
15:23:50 so you consider a heat resource to deploy kuryr?
15:24:26 I think it is not an immediate priority, but it is one of the tasks for the Magnum integration
15:24:27 hongbin is here and I would go with his comment. We can start on that in parallel anyway since we do see value in making that happen.
15:24:50 if there is a package / Kolla containerized image, adding this might be very simple. i'm personally not familiar with that, but there is someone from the Magnum team that has a lot of experience with it, don't remember the name
15:24:56 Robert...
15:25:07 hongbin: can you explain more about the magnum usage of scripts vs heat?
15:25:38 apuimedo: Magnum uses scripts as user-data
15:26:34 apuimedo: 1. Use heat resources to provision VMs. 2. Put scripts in each VM to set up the middleware
15:27:30 hongbin: so you are suggesting that adding scripts may be enough, right?
15:27:44 apuimedo: yes.
15:27:44 hongbin: the script information is passed to Heat anyway, right?
15:27:47 that they are placed on the VMs and that they install kuryr and point it to the right auth url
15:28:10 fawadkhaliq: Yes, the scripts are passed in cloud-init, which is a Heat resource
15:28:22 so it seems we can just put kuryr inside the vm image template, and configure it with a script?
15:28:25 any taker for the magnum deployment integration spec?
15:28:45 hongbin: makes sense. so indirectly Heat installs the Kuryr agent through the existing mechanism
15:29:09 fawadkhaliq: Per my understanding, that is correct.
15:29:29 apuimedo: I can kick it off and gsagie showed some interest as well ;-) we could both get it up?
15:29:37 fawadkhaliq: You can definitely add a script to install the agent
15:29:54 yes, np, i can help fawad with this
15:30:08 hongbin: yes, I played a bit with that. Thanks much for the details, brings much more clarity.
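[Editor's note: the cloud-init mechanism hongbin describes above — Heat provisions the VMs, and a per-VM user-data script installs and configures the middleware — can be sketched as below. This is purely an illustration, not anything produced in the meeting; the package name, config path, and service unit are hypothetical placeholders, since the actual Kuryr packaging was still being worked on at this point.]

```python
# Sketch of assembling Magnum-style cloud-init user-data that would install
# a Kuryr agent on a VM and point it at the cloud's auth URL, before the
# string is handed to Heat as user-data. All names (package, config file,
# service unit) are hypothetical.

def build_kuryr_user_data(auth_url, username, password, project):
    """Return a first-boot shell script suitable for a VM's user-data."""
    return "\n".join([
        "#!/bin/sh",
        "# hypothetical install step; real packaging was in progress",
        "pip install kuryr-libnetwork",
        "mkdir -p /etc/kuryr",
        "cat > /etc/kuryr/kuryr.conf <<EOF",
        "[neutron]",
        f"auth_url = {auth_url}",
        f"username = {username}",
        f"password = {password}",
        f"project_name = {project}",
        "EOF",
        "systemctl enable --now kuryr-agent  # hypothetical unit name",
    ])

if __name__ == "__main__":
    print(build_kuryr_user_data("http://controller:5000/v3",
                                "kuryr", "secret", "service"))
```

The point of the sketch is only the shape of the flow: Heat never needs a dedicated Kuryr resource for this path, because the existing cloud-init resource already carries the setup script onto each VM.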
15:30:27 #action fawadkhaliq and gsagie to start the magnum deployment integration
15:30:34 fawadkhaliq: wcl
15:30:41 I count on hongbin for heavy reviews :P
15:30:52 apuimedo: sure
15:31:02 I think we might also have help from kexiadong on that
15:31:02 apuimedo: you'd better buy hongbin KFC after ;-)
15:31:15 haha
15:31:20 well, the summit will be in austin, we can find some
15:31:27 and make a bucket eating competition
15:32:02 Talking about the summit, I would like to propose a Kuryr team social.. let's keep that for the open discussion.
15:32:43 great! and just a reminder, the proposal door is closing in 12h
15:32:53 extended by a day
15:33:00 2nd Feb now
15:33:11 oh, did not notice that
15:33:21 fawadkhaliq: IIUC, from your answers to my comments, the current standing is that we would have all the traffic between containers reaching down to the host hypervisor
15:33:54 apuimedo: that's correct. this is the current expected behavior. Not desirable but I don't see another way :(
15:34:09 fawadkhaliq: I was thinking a bit about it
15:34:28 gotta go guys, sorry, will sync up later!
15:34:34 gsagie: no worries!
15:34:37 thanks gsagie
15:35:15 and I think that we should try to write it in a way so that the current behavior is reaching down, but that leaves the door open for having a vlan per subnet
15:35:35 with this in mind
15:35:39 then
15:36:09 the binding script for a vendor could probably leverage security group information for some sort of sg lite
15:36:31 apuimedo: I am not sure how VLAN per subnet would come into the picture. can you please elaborate?
15:36:38 which is a clear tradeoff of sec/speed
15:36:40 apuimedo: I agree, it may be vendor dependent behavior
15:37:13 fawadkhaliq: well, on the VM side you would just bind all the containers in a subnet to the same vlan
15:37:20 then, on the hypervisor side
15:37:33 you could split them up into subports knowing vlan and mac
15:37:43 for example
15:37:54 I'm sure there are more challenges
15:38:21 but leaving the door semi-open in the mapping sounds like a good idea to me
15:38:34 apuimedo: yeah, you could do that.
15:38:45 with bpf stuff you could probably make a nice sg lite in the guest
15:38:58 apuimedo: you mean kuryr vlan allocation can be done in different ways?
15:38:58 which would be good enough for subnet level stuff
15:39:16 I am okay keeping it open so as not to block any implementations.
15:39:36 irenab: exactly, which is not something that happens at the "worker" host anyway
15:39:51 it's probably something that will happen on the k8s/mesos api host
15:40:04 and which will be passed down to the workers/containerizers
15:40:13 the vlan choice, that is
15:40:30 fawadkhaliq: ;-)
15:40:54 as we discussed over the spec with fawadkhaliq, we need to map the different kuryr components to their role/responsibility/affinity
15:41:02 apuimedo: makes sense. let's keep it flexible.
15:41:09 very well. Anything more to discuss about the magnum integration? I brought up my thing only, and there were a lot of nice comments, so anybody else with something to bring up about the spec?
15:41:15 irenab: exactly, the role of the Kuryr agent will define this area well.
15:41:33 let me add more details there and then we can iterate on it to make sure we don't miss anything.
15:41:48 cool!
15:41:57 thanks for all the comments!
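[Editor's note: the vlan-per-subnet idea apuimedo sketches above — all containers in the same Neutron subnet share one VLAN tag inside the VM, and the hypervisor side demultiplexes the trunk into subports by (vlan, mac) — can be illustrated as below. This is an editor's sketch under those assumptions, not anything agreed in the meeting; all names are hypothetical.]

```python
# Editor's illustration of the vlan-per-subnet mapping discussed above.

class VlanPerSubnetAllocator:
    """Assign one VLAN tag per Neutron subnet inside a VM."""

    def __init__(self, first_vlan=100):
        self._next = first_vlan
        self._by_subnet = {}

    def vlan_for(self, subnet_id):
        # All containers on the same subnet share one tag, which is what
        # lets the hypervisor distinguish subnets without per-container
        # state in the guest.
        if subnet_id not in self._by_subnet:
            self._by_subnet[subnet_id] = self._next
            self._next += 1
        return self._by_subnet[subnet_id]


def demux_frame(subport_table, vlan, mac):
    """Hypervisor side: map an incoming (vlan, mac) pair to a subport."""
    return subport_table.get((vlan, mac))


if __name__ == "__main__":
    alloc = VlanPerSubnetAllocator()
    v = alloc.vlan_for("subnet-a")
    table = {(v, "fa:16:3e:00:00:01"): "subport-1"}
    print(demux_frame(table, v, "fa:16:3e:00:00:01"))
```

As noted in the discussion, the VLAN choice would likely be made on the k8s/mesos API host and passed down to the workers; the allocator here only shows the per-subnet invariant, not where it runs.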
15:42:07 thank you for the patient replies
15:42:12 #topic k8s integration
15:42:41 (I half expect that we will not have enough time to discuss this, so maybe we can make another extraordinary meeting on Wed/Thu)
15:42:55 (about k8s)
15:42:57 apuimedo: +1
15:43:07 alright, let's get started
15:43:18 first, I want to thank you all for the contributions to the etherpad
15:43:25 #link https://etherpad.openstack.org/p/kuryr_k8s
15:43:34 fkautz: ^^
15:44:24 how about we start with the open questions?
15:44:28 I didn't update the use cases section, but I think we may want to align with https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit#
15:44:54 latest proposal discussed by the kube networking team
15:45:02 #link https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit#heading=h.w0wvuhtdmr2l
15:46:16 seems like the discussion is still ongoing, but I guess we may try to map the existing proposal to fit neutron abstractions
15:46:23 I have to say, about that document, irenab, that I really hope they do the rules thing
15:46:33 instead of the "allowfrom"
15:46:40 allow_to :-)
15:46:53 now I got an allergic reaction
15:47:09 or do you mean the naming?
15:47:17 as long as I don't see allow_maybe_to
15:47:19 xD
15:47:27 apuimedo: rofl
15:47:34 yes, agree. rules are much more generic
15:47:54 but still I believe we can try to map policy to SG rules
15:48:06 i think it was mentioned in their last meeting that maybe using rules will be better
15:48:07 i suspect allowfrom was added since it is a simpler concept for app devs
15:48:39 in security, explicit is better than implicit
15:48:39 anyway
15:48:39 open questions it is
15:48:40 most likely there will be two policies: the app requesting access in a spec, second a set of rules allowed by the operator
15:49:05 sounds familiar ;)
15:49:11 banix: I believe so
15:49:12 fkautz: per cluster/namespace/pod?
15:49:19 the operator one
15:49:32 irenab: that depends on the outcome of these meetings, i don't know yet
15:49:33 fkautz: imho the simpler entities do not belong in the talk of what gets to the networking layer
15:49:46 that is just high level ux
15:50:34 working on kubernetes to have libnetwork support vs option T right now and option F long term vs only option F
15:51:14 apuimedo: can you please remind me what option F is?
15:51:17 as a reminder: Option T, a CNI driver that translates to libnetwork. Option F, a native CNI driver that reuses parts of the kuryr codebase and probably has an API watching component
15:51:21 CNI driver?
15:51:27 irenab: I'm just a slow typer
15:51:32 :-)
15:51:37 typist
15:51:48 and apparently typoist too :/
15:52:22 there's 8 minutes to go and a lot to cover
15:52:27 BTW, I am nervous about supposing the k8s accessibility will be enforced by SGs alone. SGs do not stop ARP traffic, and do not scale as well as using isolated ethernets.
15:52:30 somehow I do not see much value for option T
15:52:41 (supposing the ethernets are small)
15:53:02 i don't think k8s will end up supporting libnetwork, they are aligned with separate philosophies and target use cases. i think option F is the best outcome but can see option T helping to bridge quicker
15:53:04 mspreitz: can you explain more about your concern?
15:53:47 is it about the reference implementation isolation of services?
15:53:56 btw, thanks for joining mspreitz
15:54:16 If two tenants put their VMs on the same Neutron network, and try to isolate via SGs, they will still see each other's ARP traffic.
15:54:40 Sorry I am late, had another meeting until just a few mins ago.
15:54:47 no worries ;-)
15:55:03 mspreitz: isn't that an operator deployment prerogative or policy?
15:55:26 if you don't want them to see the arps, you should probably not have them on the same l2 segment
15:55:33 I am making a technology statement. Neutron security groups do not apply to ARP traffic.
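[Editor's note: the "map policy to SG rules" idea discussed above can be sketched as below. The input shape is hypothetical — the k8s network-policy proposal was still in flux at this point — while the output keys mirror Neutron's security-group rule fields. The sketch also carries mspreitz's caveat: SGs only filter IP traffic, so this translation alone does not isolate ARP or other non-IP frames.]

```python
# Editor's sketch: translate simple k8s-style "allow" rules into
# Neutron-style security-group rule dictionaries. Input field names
# (port, from_cidr) are hypothetical; output keys follow Neutron's
# SG-rule schema.

def policy_to_sg_rules(allow_rules):
    """Map allow rules to ingress SG rule dicts.

    NOTE (per the discussion above): SGs do not apply to ARP or other
    non-IP traffic, so full isolation needs separate networks/routers
    on top of whatever this translation produces.
    """
    sg_rules = []
    for rule in allow_rules:
        sg_rules.append({
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": rule.get("protocol", "tcp"),
            "port_range_min": rule["port"],
            "port_range_max": rule["port"],
            "remote_ip_prefix": rule.get("from_cidr", "0.0.0.0/0"),
        })
    return sg_rules


if __name__ == "__main__":
    print(policy_to_sg_rules([{"port": 80, "from_cidr": "10.0.0.0/24"}]))
```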
15:55:56 Thus, to achieve full isolation, we need more than Neutron SGs.
15:56:24 mspreitz: well, that's something to take into account in the translation
15:56:37 probably we will enforce tiers into separate neutron nets
15:56:50 and separated by routers
15:56:50 I think it may end up with some deployment configurable options, such as one big L2 versus a routed network
15:57:23 irenab: well, this is part of what makes me uneasy about the k8s sig document
15:57:26 apuimedo: the question is how to infer it from the templates
15:57:30 No, one big ethernet does not isolate wrt ARP
15:57:32 that they allow several models of isolation
15:57:36 agree with apuimedo irenab that it won't end up being a showstopper, we can definitely talk about a way to tackle it at the L2 level later by doing something in another domain
15:57:41 and I don't see the templates as all that different
15:58:19 fawadkhaliq: are you suggesting we consider the non-isolation wrt ARP a bug in Neutron and fix it?
15:58:26 shall we decide on another k8s discussion day/time?
15:58:30 I feel like the document wants to leave the door too open for implementations to do several models of security
15:58:47 irenab: agreed
15:58:58 I propose wednesday at the 3 utc
15:59:00 sorry
15:59:06 1500utc
15:59:16 mspreitz: perhaps introduce a mode that makes it happen, but it's certainly not a bug right now.
15:59:18 1600?
15:59:31 15:30?
15:59:33 okay for me too banix
15:59:56 either works for me
16:00:03 banix: i will try to make it. I am on PTO this week.
16:00:11 BTW, as you would expect, SGs also fail to isolate wrt non-IP traffic
16:00:23 * Sukhdev time check
16:00:31 fawadkhaliq: sorry to put work on PTO :(
16:00:32 fawadkhaliq: cool
16:00:42 Sukhdev: :-)
16:00:45 i should be able to get to either one
16:00:49 Sukhdev: join us :)
16:01:00 apuimedo: let's end the call pls
16:01:00 banix: is 15:30 good for you?
16:01:04 banix: ha ha - I am here
16:01:13 okay guys, how do we do l2-gateway in k8s?
cc: Sukhdev
16:01:13 apuimedo: ok
16:01:28 #info k8s meeting at 15:30 utc Wednesday February 3rd
16:01:29 fawadkhaliq: time for next meeting
16:01:45 thanks!
16:01:47 thank you all for joining and being so chatty
16:01:51 #endmeeting