22:00:45 #startmeeting containers
22:00:46 Meeting started Tue Apr 21 22:00:45 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:49 The meeting name has been set to 'containers'
22:00:53 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-04-21_2200_UTC Our Agenda
22:00:58 #topic Roll Call
22:01:01 Adrian Otto
22:01:05 Janek Lehr
22:01:13 Rob Pothier
22:01:16 Ton Ngo
22:01:18 Madhuri Kumari
22:01:23 Da
22:01:28 o/
22:01:30 Dance ghoul
22:01:42 Vahid Hashemian
22:01:42 Jennifer Carlucci
22:01:45 Andrew Melton
22:01:46 suro-patz
22:01:51 Thomas Maddox
22:02:00 wow, we got an army of people present :)
22:02:07 hello jjlehr, rpothier, Tango, madhuri____, fangfenghua, sdake, vahidh, apmelton, suro-patz, and thomasem
22:02:08 Fangfenhua
22:02:12 that's a quorum.
22:02:31 juggler: you had a topic to add?
22:02:48 o/
22:02:59 topic sent
22:03:02 :)
22:03:03 oh, I see that in PRVMSG
22:03:13 you will have first crack in Open Discussion for that
22:03:27 we should have time
22:04:02 ok, advancing topics
22:04:07 #topic Announcements
22:04:20 1) Our IRC Meeting will be skipped on 2015-05-19 because we will be at the Vancouver Design Summit
22:04:30 my apologies to those of you not attending the summit
22:04:58 so be sure to raise your topics in the previous week, or use the ML during that time.
22:05:05 2) I am planning to tag a release of Magnum and python-magnumclient on Saturday 2015-04-25.
22:05:14 is there any reason to change this plan?
22:05:21 magnumclient
22:05:28 tags x.y.z
22:05:33 not 2015.1.0
22:05:38 that's right
22:05:46 it's 0.1.0 now
22:05:51 right
22:05:56 to be compatible with pypi
22:05:57 ok, just informing
22:06:03 wasn't sure if you knew or not
22:06:09 yes, thanks!
22:06:22 In the future I expect we won't need to lockstep release our client
22:06:43 we do have new features landing that have support in both
22:07:02 but if we don't have new client/server paired features then we might only release one or the other, correct.
22:07:13 any other comments on release plans?
22:07:28 3) I am working with the author of https://pypi.python.org/pypi/magnum to see if we can arrange to use that namespace.
22:07:58 otherwise we will use something like https://pypi.python.org/pypi/magnum
22:08:08 sounds good
22:08:19 did you run that by openstack-infra?
22:08:27 the second option
22:08:31 if needed, I will
22:08:37 first things first. :-)
22:08:40 I would like it if you would, please :)
22:08:41 adrian_otto: were those supposed to be two different links?
22:08:44 yes, just thinking ahead
22:08:50 my paste goofed
22:09:02 openstack-magnum would be the alternative fallback approach
22:09:07 gotcha
22:09:25 I don't know if openstack-infra would like that or not, adrian_otto
22:09:31 please confirm - had irc discussion last week on this topic
22:09:37 and brought that up, seemed to be contentious
22:10:02 ok, I want to cross that bridge when we come to it, especially if it will be controversial
22:10:27 cool - anything permanent wrt namespaces is, well, permanent :)
22:10:34 so that's the controversy I think
22:10:37 anyway, we can move on :)
22:10:43 4) PTL Elections closed, I will serve as your PTL for the Liberty release.
22:11:04 +1
22:11:08 congrats!
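A hedged illustration of the "tags x.y.z, not 2015.1.0" point above: python-magnumclient uses pbr, which derives the pypi package version from the most recent git tag, so a SemVer-style tag is what keeps the client installable from pypi. The tag value below is an example only, not a release procedure from the meeting.

    # sketch, assuming pbr-managed versioning for python-magnumclient:
    # pbr reads the latest tag and reports it as the package version
    git tag -s 0.1.0 -m "python-magnumclient 0.1.0"
    python setup.py --version   # pbr reports 0.1.0, derived from the tag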
22:11:11 \o/
22:11:19 congrats
22:11:24 Thanks everyone
22:11:25 +1
22:11:26 +1
22:11:28 wtg adrian
22:11:32 +1
22:11:41 grats bro :)
22:11:44 but I did not win an election, because I was unopposed
22:11:45 congratulations!
22:11:55 irrelevant
22:12:05 ok, I appreciate your congratulations!
22:12:15 agreed, sdake
22:12:21 any other announcements from team members?
22:12:39 adrian_otto: but I used that line on kolla too :)
22:12:59 <3
22:13:14 ok, let's advance to action items
22:13:20 #topic Review Action Items
22:13:29 1) adrian_otto to poll participants for a discussion with k8s devs about external-lb feature
22:13:42 completed. I can pause a moment and see what responses came back
22:14:42 there were no responses. I'll need to follow up with them
22:14:51 It appears that the current code uses V1 of LBaaS
22:15:02 We can ask when they plan to move to V2
22:15:03 right, we had some good news on that BP
22:15:28 which version is in kilo?
22:15:37 #action adrian_otto to regroup with k8s devs to select a time to discuss external-lb
22:15:40 I think V2
22:15:40 v1
22:15:47 uuuh
22:15:51 oh ok
22:16:03 actually that may be moving more quickly than I have been tracking it
22:16:10 so ignore my comment
22:16:13 can we get an action to confirm
22:16:27 who would like to take that one?
22:16:33 I can do that
22:16:39 third party confirmation
22:17:01 #action Tango to confirm what version of LBaaS API is in Kilo, and what versions k8s supports
22:17:01 actually I guess it doesn't matter
22:17:11 that the right action?
22:17:17 wfm
22:17:20 kk
22:17:32 #topic Blueprint/Task Review
22:17:46 so this one is a cluster of a few links
22:17:47 1) Following two patches should be in a Liberty branch, or after a Kilo branch is made.
22:17:53 #link https://review.openstack.org/174209 Update rc support a manifest change
22:18:00 #link https://review.openstack.org/174208 Update service support a manifest change
22:18:14 so the question is where and when to merge those
22:18:29 sdake, you have remarks on this?
22:18:34 yes
22:18:44 branch kilo on 4/25
22:18:51 from master
22:19:06 ping ttx for details on how to do this (I honestly don't know how it's done)
22:19:13 then all new changes go into master
22:19:16 ok, I think I know where those notes are
22:19:18 master becomes new liberty
22:19:33 we backport to kilo
22:19:39 or submit changes to kilo
22:19:41 for the rc series
22:19:57 preferably backport
22:20:10 tag 2014.1.0.rc1
22:20:13 or whatever it's called
22:20:20 and each rc gets a new tag on the branch
22:20:35 how to actually do all the tagging with gerrit - not sure
22:20:38 i can find out if you like
22:20:38 yep, that makes sense
22:20:56 I got the tagging stuff down
22:20:57 but I am super overloaded atm
22:21:02 cool
22:21:13 I think we might be missing the tarball job though
22:21:22 it's definitely in there
22:21:27 whether it works or not is a different story
22:21:30 ok, good
22:21:36 we can spin rcs as needed for the tarball job
22:21:57 ttx can help here, he knows what to do
22:22:01 just have to ask nicely :)
22:22:12 (re tarball job)
22:22:23 ok, got it.
22:22:30 i can't actually commit for him tho
22:22:37 so there ya go :)
22:22:38 is Lan Qi Song present with us today?
22:22:56 was not in the roll call
22:23:13 so sdake, on the subject of the -2
22:23:24 will you be around on 4/25 to lift that?
22:23:28 ack
22:23:36 ping me on irc
22:23:42 ok, tx.
22:23:43 and I'll remove immediately
22:23:46 cool.
22:23:47 or I can remove now and you can apply -2
22:23:55 that's what I was going to suggest.
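A minimal sketch of the branch-and-tag flow sdake describes above, assuming ordinary git plus gerrit push permissions; the exact release mechanics are what ttx would confirm, and the rc tag name is illustrative (per the "2014.1.0.rc1 or whatever it's called" remark - Kilo's integrated releases used 2015.1.0, but magnum may keep x.y.z tags).

    # cut stable/kilo from master on 4/25; master then becomes liberty
    git checkout -b stable/kilo origin/master
    git push gerrit stable/kilo          # needs branch-creation rights in gerrit

    # each release candidate gets a new signed tag on the stable branch,
    # and the tarball job builds from the pushed tag
    git tag -s 2015.1.0.rc1 -m "magnum kilo rc1"
    git push gerrit 2015.1.0.rc1

Fixes destined for the rc then merge to master first and get backported to stable/kilo, as discussed above.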
22:24:02 let's do that now while we are thinking of it
22:24:04 after meeting I'll remove after you add
22:24:09 or we can do now
22:24:14 have links?
22:24:18 will do now
22:24:27 i need to remove my -2 i think
22:24:41 i don't think you can
22:25:03 done
22:25:15 mine are on, so remove yours at your leisure.
22:25:27 next work item discussion
22:25:29 2) Coordinate synchronized documentation change with this patch:
22:25:34 got it
22:25:41 it's done
22:25:49 #link https://review.openstack.org/173763 Update Kubernetes version for supporting v1beta3.
22:26:07 so this sounds like the patch may be overtaken by events
22:26:08 No response from them yet
22:26:10 and may not be needed
22:26:25 but I was reluctant to vote on this without further clarity
22:26:27 events?
22:26:28 Yes, maybe not now
22:26:46 sdake: need may be obviated by a new default
22:27:03 we can sync up after meeting
22:27:08 i'm out of the loop on that discussion
22:27:13 so my question is what harm would come from merging this?
22:27:22 even if the default has changed?
22:27:39 The Kubernetes cluster has v0.11.0
22:27:46 as long as that URI remains working in k8s, then we can use that, right?
22:27:51 version incompatible
22:28:00 but we got rid of kubernetes kubectl calls, right
22:28:01 But after this merge, kubectl will be the 0.15.0 release
22:28:05 via cli?
22:28:15 this is about transitioning from cli to API calls
22:28:20 right
22:28:21 Yes, after we merge our API code
22:28:34 madhuri, can you confirm that with the api code merge, the cli will no longer be used?
22:28:40 In that case we don't need kubectl
22:28:59 We can remove it from the dev guide
22:29:03 madhuri____: in that case I'd like to see a quickstart update in the patch too
22:29:11 Ok sure
22:29:12 can you confirm this will land by 4/25, madhuri?
22:29:12 exactly, so that happens all at once
22:29:42 I will mail the ML about the issues currently with the patch.
22:29:56 ok, cool. Thanks madhuri____
22:29:57 madhuri: perfect
22:30:01 can we get an action
22:30:05 I will be submitting the patch today
22:30:19 just for followup
22:30:54 #madhuri____ to begin a ML thread to explain approach for migration off of kubectl onto the k8s API. See: https://review.openstack.org/173763
22:30:58 #undo
22:30:59 Removing item from minutes:
22:31:13 #link https://review.openstack.org/173763 Update Kubernetes version for supporting v1beta3.
22:31:16 To track issues merging v1beta3 in k8s 0.11.0
22:31:50 #action madhuri____ to begin a ML thread to track issues merging v1beta3 in k8s 0.11.0. See: https://review.openstack.org/173763
22:31:53 This needs to be an action item
22:31:55 does ## name act as #action?
22:31:56 is that action fair?
22:32:35 The review link should change
22:32:55 that's fine. It's only informative.
22:32:57 https://review.openstack.org/#/c/170414/ adrian_otto
22:33:01 This one
22:33:01 does #madhuri set an action?
22:33:13 sdake: no, that was an error
22:33:18 he undid that
22:33:40 we did record the action, so I'm going to advance topics unless there is more to cover on this now.
22:33:42 then redid it, juggler
22:34:08 hmm, I only see one #madhuri*
22:34:20 we got it.
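For context on the kubectl-to-API transition discussed above, a hedged sketch of the difference, shown with curl against a hypothetical bay master at 10.0.0.5 (the real patch routes these calls through magnum's own code, not curl; the v1beta3 API is namespace-scoped):

    # before: shelling out to the kubectl binary, which is version-locked
    # to the bay's k8s release (hence the 0.11.0 vs 0.15.0 friction)
    kubectl --server=http://10.0.0.5:8080 get pods

    # after: calling the k8s REST API directly, no local kubectl needed
    curl http://10.0.0.5:8080/api/v1beta3/namespaces/default/pods

    # pod creation posts the manifest instead of `kubectl create -f pod.json`
    curl -X POST -H "Content-Type: application/json" \
         -d @pod.json \
         http://10.0.0.5:8080/api/v1beta3/namespaces/default/pods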
22:34:26 if people are confused I guess they can read the logs :)
22:34:41 3) Discuss where the fix for this should live:
22:34:43 #link https://bugs.launchpad.net/magnum/+bug/1446372
22:34:43 Launchpad bug 1446372 in Magnum "Bays spawned from devstack dont have external network access" [Undecided,New]
22:34:53 this one was a question raised by apmelton
22:35:05 we decided on irc: dev docs
22:35:05 so sdake had an interesting take on this
22:35:32 docker/docs :)
22:35:48 basically we'll add some documentation that demonstrates the usage of local.sh to run the command
22:36:17 what about making the masquerade a devstack module
22:36:35 so you decide if you want it using a configuration directive in localrc or whatever
22:37:08 localrc can have additional configuration so I don't think that will work
22:37:34 adrian_otto: I think local.sh is going to be fairly straightforward
22:37:36 I suppose magnum users are already following a set of directions
22:37:47 so we could just have this as one of the steps
22:38:03 yup, that is the proposal
22:38:20 I can live with that
22:38:28 step in the quickstart or the contributing page steps?
22:38:33 apmelton: your perspective?
22:38:35 quickstart, correction
22:39:11 ok, any opposing views to consider?
22:39:12 adrian_otto: I think using local.sh is no more complex than what we already instruct people to do with local.conf
22:39:26 ok, I'm convinced
22:39:47 3 +2s, looks like we have a winner :)
22:39:49 juggler: I think it'll be in the quickstart
22:40:03 cool
22:40:39 rather, I think it makes the most sense there
22:40:44 ok, next work item
22:40:50 4) Discuss what to do about replacePod API
22:40:57 #link https://review.openstack.org/175784 Remove duplicate replacePod API
22:41:10 this patch looked to me like it was turning some functions into a docstring
22:41:37 so I was going to leave remarks on that basis, but then I started thinking about the root cause for this
22:41:49 and that needs to be clearly expressed in a bug
22:41:54 I had discussed this with a google guy
22:42:08 we have this:
22:42:12 #link https://bugs.launchpad.net/magnum/+bug/1446529
22:42:12 and he said this is an issue in their swagger-spec
22:42:12 Launchpad bug 1446529 in Magnum "pod-update fail with 404 status for an existing pod" [Undecided,In progress] - Assigned to Madhuri Kumari (madhuri-rai07)
22:42:13 madhuri: which google guy?
22:42:29 Nikhil Jindal
22:42:34 so this code came from swagger?
22:42:36 He is working on the swagger-spec
22:42:58 Yes, the swagger-spec from k8s
22:43:31 so to understand this, k8s is busted, right?
22:43:45 and we want a magnum workaround for that?
22:43:49 Once it gets fixed upstream we can remove this fix
22:44:00 wfm
22:44:00 Yes adrian_otto
22:44:09 ok, but let's not do this as a docstring
22:44:22 Ok, I will remove it then
22:44:22 is there a cleaner way?
22:44:26 this is what stable/kilo exists for
22:44:45 I'm fine just commenting out that section of code if we expect to be putting it back rather soon
22:44:50 To remove the method is a cleaner way
22:44:59 if not, record the removed code in a tech-debt bug ticket
22:45:10 and let's just remove that code for now
22:45:19 we can easily put it back once the upstream fix lands
22:45:22 I will create a bug ticket for it
22:45:34 thanks.
22:45:47 madhuri++
22:45:58 Thanks
22:45:59 so the https://review.openstack.org/175784 patch can be revised to just be a full removal
22:46:16 Yes
22:46:22 and let's update the commit message to explain where we are archiving this as tech debt to repay when upstream is working again.
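Going back to bug 1446372 for a moment: a minimal sketch of the local.sh approach agreed above, assuming the bay's private subnet is 10.0.0.0/24 and the host's public interface is eth0 (both are placeholders for whatever your devstack host actually uses). Devstack runs local.sh at the end of stack.sh, so the quickstart step would have users drop something like this in:

    # local.sh - give nested bay nodes outbound network access by NATing
    # their private subnet out through the devstack host's public NIC
    sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE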
22:46:28 everyone okay with this?
22:46:36 Ok sure
22:46:47 ok here
22:46:51 +1
22:46:54 +1
22:46:58 I am ok with this :)
22:47:04 ok, I am going to open us for Open Discussion, and let juggler have the first go.
22:47:07 #topic Open Discussion
22:47:27 offhand, are ppl here primarily testing magnum on baremetal systems?
22:47:35 i'm wondering if it's suitable to recommend on one of the contributing pages that using a vm (e.g. a virtualbox on Windows, for example) is not recommended
22:47:53 thoughts/input?
22:47:54 for trying out magnum it really should not matter
22:48:24 just wherever you can successfully run devstack, right?
22:48:35 so I guess my question is, has anyone actually successfully run magnum in a vm
22:48:39 disagree with the vm approach - will give people a negative view of performance
22:48:53 adrian_otto: what is your definition of 'successfully run devstack'?
22:49:21 apmelton: to have a working openstack cloud, started by the stack.sh script
22:49:22 yeah, i'm trying to get devstack to run in a vm and still having issues. wondering if it's even possible or has been attempted
22:49:25 dev-quickstart should be runnable in 20 minutes
22:49:30 in whatever environment you choose to run it in
22:49:36 at least with the current devstack+magnum
22:49:40 performance expectations should be shaped with docs
22:49:57 minimum required hardware should be as well.
22:50:00 but devstack exists for a reason, and it's most commonly deployed in VMs for convenience.
22:50:23 (If you're going to try this in Devstack, that is)
22:50:33 I deployed successfully in a cm
22:50:36 vm*
22:50:39 is the fact that devstack takes a long time to run in a VM the source of your concern, sdake?
22:50:39 I learned from a couple of iterations that the VM should have at least 16G mem to host devstack for magnum
22:50:42 in vms it will take hours to deploy iirc
22:50:44 adrian_otto: but how many people doing that are doing anything more than running cirros
22:50:52 adrian_otto: yes
22:51:00 cirros instances*
22:51:36 cirros is *not* one of our os choices
22:52:04 sdake, are you typing with chopsticks?
22:52:13 been up for two days
22:52:16 need to hit the rack
22:52:16 :( aww
22:52:17 we should recommend the ideal hardware match, and disclaim YMMV if you choose an alternate approach that is a lower performance option.
22:52:18 sdake: what I'm asking is, how many people are actually consuming the products of the services in devstack rather than just the services
22:52:40 if people ignore that and deploy on VMs, and don't like the performance, that's their choice, right?
22:52:52 and in the long term, whether we will be supporting folks on VMs is also a potential consideration
22:53:01 or something like that
22:53:05 i think running the magnum quickstart in vms will take hours
22:53:21 on crummy laptops maybe
22:53:24 adrian_otto: I think if we tell people they can run magnum+devstack in vms we're going to constantly be dealing with people who can't get it to work
22:53:39 I resemble that remark adrian_otto! :)
22:53:43 apmelton: +1
22:53:45 adrian_otto: I've tried magnum+devstack on our perf1-16G flavor
22:53:49 even that didn't work
22:54:00 apmelton +1
22:54:02 How much disk/memory/cpu are people using where they actually get it to work in Devstack?
22:54:14 mfalatic: in a vm or baremetal or both?
22:54:24 Hmm, either.
22:54:37 another approach is to have a recommended setup for cloud operators (how to wire this to your cloud)
22:54:46 and a "how to develop on magnum" doc
22:54:47 mfalatic: https://gist.github.com/ramielrowe/8a5392d707c5fe217e49
22:54:52 that's with a 3 node cluster
22:54:57 3 node bay*
22:54:57 we have two documents for this now
22:54:57 mfalatic: I found out that on a VM it needs at least 16G to bring up all the instances - but I ran into the networking issue
22:55:12 oic ok.
22:55:17 wow
22:55:37 Will just upgrade the memory in my MacBoo---- oh.
22:55:40 ??
22:55:53 Just the amount of mem used
22:55:53 wait, devstack is taking 16GB now?
22:56:02 ouch
22:56:22 Is that with a bunch of other peripheral services?
22:56:28 that don't need to be on?
22:56:42 adrian_otto: when I ran devstack on an 8G flavor, it was not able to bring up all three instances together due to a memory issue
22:57:01 three instances of what?
22:57:19 three instances of nested VMs to form the cluster
22:57:26 Bay nodes
22:57:30 I guess
22:57:30 so you are talking about starting the bay
22:57:56 ok, so maybe we need a smaller flavor option for running the k8s cluster?
22:57:56 we may be getting close to moving the discussion, btw
22:58:12 2 minutes left
22:58:27 bug triage needs discussion
22:58:27 correct
22:58:32 ok, let's wrap up here. juggler, please take this to the ML
22:58:50 let's get a well reasoned solution
22:59:12 adrian_otto: we may also want to see how the heat developers handle this
22:59:16 developers (our main constituency atm) are focused around devstack on bare metal
22:59:26 our next meeting is Tuesday 2015-04-28 at 1600 UTC.
22:59:27 I'm sure they have to have functioning instances in their development environments
22:59:45 let's go to #openstack-containers for overflow please
22:59:49 thanks everyone for attending. See you all next week.
22:59:51 o/
22:59:55 #endmeeting
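As a postscript to the memory-footprint discussion, a hedged sketch of the "smaller flavor option" idea floated above. Nothing here was decided in the meeting; the flavor name, sizes, and baymodel argument values are all illustrative, following the general shape of the kilo-era magnum quickstart.

    # a small 1GB/1-vcpu/10GB flavor for bay nodes, so a 3-node bay
    # fits inside a modest devstack VM (sizes are illustrative)
    nova flavor-create m1.magnum-small auto 1024 10 1

    # point a baymodel at the smaller flavor; image/keypair/network names
    # are placeholders for whatever your environment provides
    magnum baymodel-create --name k8s-small \
        --image-id fedora-21-atomic \
        --keypair-id testkey \
        --external-network-id public \
        --flavor-id m1.magnum-small \
        --coe kubernetes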