15:00:53 #startmeeting kuryr
15:00:54 Meeting started Mon Feb 15 15:00:53 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:57 The meeting name has been set to 'kuryr'
15:01:08 Hello all
15:01:32 Hello and welcome to yet another kuryr weekly meeting
15:01:39 who's here for the show?
15:01:49 Me
15:02:00 o/
15:02:02 But I am here to watch the show!
15:02:16 salvorlando: I have a couple of questions to ask you :P
15:02:28 so don't sit too far back
15:03:00 #info gsagie_ salvorlando fawadkhaliq and apuimedo are here for the meeting
15:03:16 thanks for joining
15:03:25 #topic announcements
15:04:31 After the recent discussion in the neutron community about what should and should not be in the Neutron stadium, we will be submitting the request for Kuryr to be a project in the big tent
15:04:40 proposing Gal Sagie as PTL
15:05:03 excellent!
15:05:03 and we'll request a design room for the Austin summit
15:05:18 :-)
15:05:34 yeah, i think that the last day of the last summit was probably the most effective one, when we managed to all sit together
15:05:40 You will probably have more luck winning the euromillions
15:05:41 so hopefully we will have more time this summit
15:06:07 salvorlando: why? what are the criteria to have a room?
15:06:11 salvorlando: rofl
15:06:19 salvorlando: we have a failsafe plan!
15:06:32 KFC design room
15:06:34 :D
15:06:45 apuimedo: KFC design room ftw!
15:06:47 #link: https://vimeopro.com/midokura/734915331
15:06:59 I made a webinar last week
15:07:25 it shows the communication with lbaas and container <-> VMs
15:07:56 cool
15:08:03 salvorlando: I need more information on surrendering the email domain to the foundation, do you have some contact I could use for that?
15:08:32 apuimedo: very cool, thanks for sharing
15:08:39 apuimedo: which domain? kuryr.org?
15:08:42 :(
15:09:33 yes, salvorlando suggested that I should give control of it to the foundation
15:09:42 I created it to have cool links for the demos
15:09:54 like http://webinar.kuryr.org:8000/
15:09:56 :-)
15:10:16 which has two containers running behind a load balancer
15:10:20 I was joking btw
15:10:35 oh, I thought it was a requirement salvorlando
15:10:40 you totally got me!
15:11:02 ok, just so you all know, if any of you want to use the domain for demos, just shoot me an email
15:11:08 and I'll add the necessary dns entries
15:11:28 #topic deployment
15:12:11 #info apuimedo setn https://review.openstack.org/#/c/279320/
15:12:20 darned typo...
15:12:34 anyway
15:13:22 salvorlando: fawadkhaliq: gsagie_: I'd like to hear more about how you bind
15:13:46 to make sure that I'm not overlooking anything for the container
15:13:58 to make the common base container as complete as possible
15:14:18 I'll probably be submitting an ovs one soon
15:14:56 apuimedo: what do you mean an ovs one? you have one container per backend?
15:15:13 or better, one Dockerfile
15:15:40 there should be kuryr/libnetwork:midonet kuryr/libnetwork:ovs kuryr/libnetwork:ovn
15:16:05 or dragonflow/kuryr:1.0.0
15:16:22 but what I meant is that yes, vendors should have their own Dockerfile
15:16:40 that does a "from kuryr/libnetwork:1.0.0"
15:16:52 and adds a layer with their binding dependencies
15:16:58 apuimedo: for PLUMgrid/iovisor, it will follow a similar mechanism to what you have in there. I will review and comment.
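A minimal sketch of the vendor layering described above. The kuryr/libnetwork:1.0.0 base and the kuryr/libnetwork:ovs tag are taken from the discussion, but neither is a published artifact; the apt base and the openvswitch-switch package are assumptions for illustration only.

    # Hypothetical vendor image layered on the common Kuryr libnetwork base.
    # Base tag, result tag and package choice are illustrative, not official.
    cat > Dockerfile <<'EOF'
    FROM kuryr/libnetwork:1.0.0
    # Add only the vendor-specific binding dependencies on top of the base.
    RUN apt-get update \
        && apt-get install -y openvswitch-switch \
        && rm -rf /var/lib/apt/lists/*
    EOF
    docker build -t kuryr/libnetwork:ovs .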
15:17:10 gsagie_: in your case, df-db would be added
15:17:25 fawadkhaliq: very well
15:17:40 the design suggestion, of course, is that the neutron agent is in a separate container
15:17:47 where possible
15:17:47 apuimedo: sounds good, the only thing i am wondering is how, for example if we take ovn/dragonflow, the code inside the container will perform the binding to OVS
15:17:50 in the compute node
15:18:13 gsagie_: we run the kuryr container in the host networking namespace
15:18:19 so ovs-vsctl works fine
15:18:21 ;-)
15:18:25 this I already tested
15:18:44 okie cool
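Roughly, the host-network-namespace deployment mentioned above could look like the following; the image name, bind mounts and plugin socket path are assumptions, not a documented invocation.

    # Hypothetical sketch: run the Kuryr libnetwork container in the host's
    # network namespace so tools like ovs-vsctl act on the host's OVS.
    docker run -d --name kuryr-libnetwork \
        --net=host --privileged \
        -v /var/run/openvswitch:/var/run/openvswitch \
        -v /run/docker/plugins:/run/docker/plugins \
        kuryr/libnetwork:ovs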
15:18:50 fawadkhaliq: will that split containerization work for you guys?
15:19:39 salvorlando: I'll be checking kolla this coming weekend
15:19:50 apuimedo: I will have to check that. Let me report back.
15:19:53 apuimedo: your approach to containers makes sense
15:19:55 gotta talk with SamYaple
15:20:02 fawadkhaliq: very well
15:20:03 Let me know if I can help with kolla
15:20:29 salvorlando: alright, so for that, since kolla only has ovs, I'll get the ovs version of the container ready
15:20:43 salvorlando: or do you have ovn kolla support?
15:21:38 i think that reaching ovn support from ova should be fairly easy
15:21:50 ovs
15:22:14 gsagie_: well, not so much, you'd need kolla to deploy all the components that ovn uses
15:22:17 probably
15:22:25 the extra agents and stuff
15:22:48 anyway, let's try to get ovs+kolla in the coming two weeks
15:22:51 :-)
15:22:54 okie
15:23:03 #action salvorlando apuimedo to try ovs + kolla
15:23:13 Thanks apuimedo :-)
15:23:14 (+kuryr, ofc)
15:24:08 #topic nested
15:24:16 fawadkhaliq: the floor is yours
15:24:26 apuimedo: thanks
15:24:50 I addressed most of the early comments.
15:24:58 magnum folks have provided some feedback
15:25:14 fawadkhaliq: in the review?
15:25:18 I would like to discuss one point here that taku and I discussed last week as well, it needed a broader audience
15:25:22 apuimedo: correct.
15:25:33 fawadkhaliq: raise it, raise it ;-)
15:26:07 the ask is to support networking via Kuryr when native container tools are used instead of the magnum API. For example Docker, Kube CLI etc.
15:26:36 fawadkhaliq: oh, you mean that you deploy a bay
15:26:40 While it is possible and I have a way to make it work, it would take us back to the path of making the Kuryr agent communicate via the management plane to other endpoints :(
15:26:55 so here's what I am thinking..
15:27:06 and you want to get the containers networked whether the user goes through the magnum api or swarm/k8s, right?
15:27:06 some of us plan to use Neutron and Docker without Magnum
15:27:24 apuimedo: correct
15:27:32 mspreitz: welcome ;-)
15:27:32 fawad: but i thought the plan was to integrate with the COEs themselves and not with Magnum
15:27:51 mspreitz: https://review.openstack.org/#/c/269039/ please review :-)
15:28:14 What information do you need from Magnum?
15:28:29 gsagie_: COE via magnum currently :)
15:28:57 gsagie_: well, my idea was, and fawadkhaliq I'll appreciate it if you tell me if I'm talking nonsense, that we have:
15:29:36 gsagie_: two paths. 1. if we bypass Magnum, then the Kuryr agent communicates with other OpenStack endpoints. 2. If we go via the Magnum API, we may be able to avoid it.
15:29:43 1. Magnum deploys the bay controller nodes with information on where to reach Neutron and that such communication is allowed
15:30:07 so if we are okay with this communication, then we can update and make it happen.
15:30:19 2. We have an integration with bay types that speaks with neutron and does the IPAM and port management
15:30:45 3. the worker nodes just get minimal information, like what vlan to connect to
15:31:00 I am trying to connect containers to Neutron networks without any VMs in sight. Is anybody else here interested in that?
15:31:13 apuimedo: the idea is similar, except that more information is passed via Magnum
15:31:19 mspreitz: this is not the use case we are talking about now
15:31:28 however..
15:31:54 apuimedo: I see your point and this communication seems reasonable to us..
15:31:59 fawadkhaliq, apuimedo: ok that makes sense, but now why would anyone want to run this without Magnum? and if they do, why is this different than a regular "bare metal" integration with Docker/Kubernetes?
15:32:00 given that, let's update
15:32:07 mspreitz: yes, that comes just after this topic ;-)
15:32:16 we went with nested first today
15:32:40 gsagie_: it's not to run it without magnum
15:32:58 it's to let the magnum user consume the k8s/swarm api instead of the magnum one for service creation
15:33:05 apuimedo: +1, Magnum does one step and then the rest can be done via native tools.
15:33:05 did I get it right fawadkhaliq ?
15:33:21 okie got it now
15:33:40 apuimedo: correct. essentially a hybrid workflow still gets the consumer the network.
15:33:54 :-)
15:34:08 so I wanted to make sure you guys are onboard, before I propose the change.
15:34:17 Looks like we are all on the same page with this
15:34:20 so I will go ahead
15:34:25 fawadkhaliq: can you summarize the change in one sentence?
15:34:36 (I'll put it in an info comment)
15:35:05 apuimedo: the goal is to facilitate networking even when native tools are used.
15:35:24 apuimedo: that would require the Kuryr agent to have communication with other OpenStack endpoints.
15:35:41 #info nested goal: to provide networking whether magnum users consume the magnum or bay specific apis
15:35:51 fawadkhaliq: i personally think, without giving it enough thought, that this is the correct path either way
15:35:52 apuimedo: +1
15:36:17 fawadkhaliq: I think the communication can be very minimal and we should keep it like that
15:36:37 for swarm it may be more complicated, but for k8s probably only the controller nodes will need it
15:36:59 apuimedo: gsagie_, I am concerned about the security aspects of this communication. But that's something we can improve and evolve.
15:37:09 yep, the endpoints should only do the binding, similar to our Kubernetes integration plan
15:37:14 fawadkhaliq: exactly
15:37:38 that's all on nested.
15:37:45 very useful discussion :-)
15:37:46 #info: agreement to limit access to the management plane as much as possible
15:38:16 #topic k8s integration
15:38:20 mspreitz: here we go ;-)
15:38:26 thanks fawadkhaliq
15:38:37 apuimedo: welcome
15:39:06 alright then
15:39:15 We do not have Irena here
15:39:21 or banix ?
15:39:46 mspreitz: did you move forward with the plan to make cni call libnetwork?
15:39:59 gsagie_: hi
15:40:00 I have a draft on which I am working
15:40:06 ohh hi :)
15:40:20 Hope to get it running today
15:40:25 mspreitz: :O
15:40:34 have been sick all week. getting back to things....
15:40:55 banix: sorry to hear. hope you feel well.
15:40:57 are you configuring a network in the CNI vendor plugin on each worker machine
15:41:10 banix: happy to hear you're recovering
15:41:17 and then having the cni driver call docker libnetwork for attaching?
15:41:26 banix: take care ;-)
15:41:29 It will be more interesting to me once Kuryr can connect to existing networks instead of making new ones.
15:42:26 My CNI plugin calls `docker network`
15:42:35 mspreitz: yes, that's definitely something we had in mind to do
15:42:49 The `docker network create` needs to be done just once, thanks to Kuryr.
15:42:55 mspreitz: good point
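A rough sketch of the flow described above, assuming the Kuryr libnetwork driver is registered under the name "kuryr" and that the CNI plugin simply shells out to the docker CLI; the network and container names are made up for illustration.

    # One-time setup: let Kuryr back a Docker network with a Neutron network.
    docker network create -d kuryr demo_net
    # Per container: attach to the already-existing Kuryr-backed network
    # instead of creating a new one each time.
    docker network connect demo_net my_container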
15:43:14 apuimedo: gsagie_ fawadkhaliq thanks (and sorry for the interruption)
15:43:19 mspreitz: couldn't you create the networks as part of the deployment?
15:43:31 you add a worker node, you do a docker network create
15:43:44 apuimedo: I am not sure I understand the question.
15:43:50 which networks, which deployment?
15:44:07 Networks are not per worker node
15:44:27 mspreitz: which network topology are you using
15:44:29 ?
15:44:36 network per service?
15:44:51 I am taking baby steps
15:44:55 and what are the reasons you need to connect to pre-existing networks
15:44:58 ?
15:45:03 the one I want to take first is to connect containers to a provider network
15:45:11 But I can't yet, Kuryr won't connect to that.
15:45:13 aha
15:45:15 :-)
15:45:19 thanks mspreitz
15:45:40 My next step would be to have a tenant network per K8s namespace, probably
15:45:54 That is not the way k8s wants to go
15:46:02 it's just a possible next baby step.
15:46:05 mspreitz: is it okay for your draft to hack it?
15:46:14 the first step
15:46:27 what I mean is
15:46:42 do `docker network create myprovidernet`
15:46:47 you'll get a uuid
15:47:06 you delete the network that is created in neutron with that uuid
15:47:16 and update the provider network name to have that uuid
15:47:38 It would be very useful to know if that would work
15:47:44 Oh, rename the pre-existing provider network. Hadn't thought of that
15:47:54 I can give it a try.
15:47:55 mspreitz: I have a hunch that it should work
15:47:58 if it does
15:48:34 probably connecting to existing networks will be a bit easier in "hacky mode"
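Spelling out the rename hack above as a sketch: the network names are made up, and the key assumption (taken from this discussion) is that Kuryr names the Neutron network after the Docker network ID, so future attaches resolve by that name.

    # Hypothetical walk-through of the rename workaround, untested.
    docker network create -d kuryr myprovidernet
    NET_ID=$(docker network inspect -f '{{.Id}}' myprovidernet)
    # Drop the Neutron network Kuryr just created for this Docker network...
    neutron net-delete "$NET_ID"
    # ...and rename the pre-existing provider network to the Docker network ID,
    # so attaches on myprovidernet land on the provider network instead.
    neutron net-update existing-provider-net --name "$NET_ID"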
15:48:41 I suppose you folks can tell me: will future libnetwork calls to Kuryr, to attach containers, find the Neutron network by Neutron network name or by UUID?
15:48:59 apuimedo: maybe you can add an action for me to add this support
15:49:13 mspreitz: yes, you'll pass --neutron-net a450ae64-6464-44ee-ab20-a5e710026c47
15:49:21 i don't think it should be too hard to add, i will write a spec
15:49:31 well, more as a tag
15:49:42 does `--neutron-net` take a name or a UUID?
15:49:50 mspreitz: we can support both
15:49:56 gsagie_: it can be a workaround while we can't put darned tags on resources
15:50:01 mspreitz: uuid
15:50:03 mspreitz: we meant to use the tags in Neutron once this feature is completed
15:50:06 gsagie_: i have the code that does that
15:50:16 there is one issue that we need to deal with
15:50:25 banix: you mean the name change?
15:50:35 apuimedo: we can just do it according to the name right now
15:50:39 i think that's what banix has
15:50:42 i meant using existing networks
15:50:53 use existing network by Neutron name
15:51:08 gsagie_: you should not do it by name if you are then changing the current name :P
15:51:09 the issue is right now we rely on the neutron network name
15:51:18 If work has to be done, I'd rather see banix go for the jugular
15:51:19 exactly
15:51:38 and for an existing neutron network, for the rest of our code to work, we have to set its name to what we want; not desirable but it works
15:51:58 banix: yes, that's what I proposed to mspreitz
15:51:59 banix: or keep the mapping internally
15:52:02 or in a DB
15:52:05 i think once we have the tagging, things will look much cleaner
15:52:17 gsagie_: I'd rather knock my teeth out than add a DB to Kuryr :P
15:52:30 heh ok
15:52:37 Kuryr is parasitic by nature
15:52:44 gsagie_: yeah, have been trying to see how to use the docker kv but it doesn't look promising
15:52:51 it should use the DBs of what it connects to
15:53:09 banix: the inflight restriction is a bit damning there
15:53:18 do you guys agree neutron tagging will solve this problem?
15:53:25 banix: wholeheartedly
15:53:43 What is "the inflight restriction" ?
15:53:55 the blueprint there seems almost approved; has one +2 last i checked
15:54:09 mspreitz: in docker libnetwork you can't access a kv resource in the same operation that is creating it
15:54:09 yes
15:54:27 so, for example, let's say that we are creating a network to connect to a Neutron net
15:54:35 I understand
15:54:37 kuryr receives the call for the network creation
15:54:40 yes, that's a pain
15:54:49 but it can't go to the kv to modify it to add data
15:54:52 mspreitz: indeed
15:55:04 when I feel hacky, I'd add deferred actions xD
15:55:12 that are executed just after we return
15:55:28 that's a real pain for any higher level automation
15:55:31 but then I think, well, tags should be coming soon to neutron
15:55:39 mspreitz: and racy
15:55:58 and you'd lose it if the kuryr daemon restarts
15:56:11 like you said, "hacky"
15:56:13 anyway, mspreitz let us know if the hacky way works
15:56:20 the one of renaming
15:56:24 #topic general
15:56:27 hmm, I thought I was just told it will not
15:56:46 mspreitz: banix said that changing the name in neutron will work
15:56:54 `docker network attach` will cause Kuryr to find a Neutron network by its Neutron UUID, right?
15:57:22 mspreitz: by docker network id
15:57:29 (which maps to the neutron name)
15:57:41 So it will look up the network in Neutron by Neutron network name?
15:57:41 (while we don't have neutron resource tags)
15:58:01 mspreitz: what apuimedo said
15:58:03 mspreitz: docker network attach dockernetname
15:58:08 Then I'll give it a try
15:58:23 dockernetname -> dockernetid -> (to be found on neutron net name)
15:58:31 anybody have anything more for the last two minutes?
15:58:42 looking at the log of this meeting i see Kolla being mentioned; hui kang has worked on this
15:59:07 unfortunately he had to leave for a family emergency last week; i will check with him and see where he is
15:59:36 See what you did. Toni got upset!
16:01:33 any co-chairs to close the call?
16:01:43 anyone else going to end the meeting :-)
16:02:03 #endmeeting