16:03:03 #startmeeting containers
16:03:04 Meeting started Tue Jan 19 16:03:03 2016 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:07 The meeting name has been set to 'containers'
16:03:11 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-01-19_1600_UTC Our Agenda
16:03:15 #topic Roll Call
16:03:17 Adrian Otto
16:03:19 o/
16:03:24 o/
16:03:25 o/
16:03:26 Ton Ngo
16:03:26 o/
16:03:26 o/
16:03:27 o/
16:03:27 o/
16:03:28 o/
16:03:30 o/
16:03:33 o/
16:03:34 o/
16:03:42 o/
16:03:45 o/
16:03:56 o/
16:04:08 o/
16:04:26 hello madhuri, rods, wanghua, Tango, houming, Kennan_, rpothier, bunting, HimanshuGarg, muralia, hongbin, sew1, wangqun, coreyob, eghobo, and bradjones
16:04:31 o/
16:04:47 hello juggler
16:04:48 o/
16:04:49 hello
16:05:05 hello all
16:05:09 hello thomasem and tcammann. Glad to have you all here. Let's begin.
16:05:15 o/
16:05:16 hihi
16:05:17 #topic Announcements
16:05:24 1) Midcycle
16:05:39 we have selected dates. Feb 18-19, hosted by HP
16:05:46 HPE!
16:05:49 HPE
16:05:57 my apologies for the abbreviation
16:06:08 what's HPE?
16:06:18 is it in Sunnyvale?
16:06:27 :D We renamed to HPE, we split HP
16:06:44 Sunnyvale is the plan, I'm trying to book a room at the moment
16:06:53 tcammann and I will be working on the exact logistics, but please save the date.
16:07:07 I will email a reply to the date coordination thread indicating the selection
16:07:45 we will revisit topic selection soon
16:07:54 if you have a topic, keep it handy
16:08:21 any announcements from team members?
16:08:24 adrian_otto: I had a request to align the release model to cycle-with-intermediary (since you want to do a "mitaka" release afaik)
16:08:29 http://lists.openstack.org/pipermail/openstack-dev/2016-January/083726.html
16:08:43 must be proposed this week at the latest
16:08:57 so I figured I would give you a heads-up
16:08:59 ttx, yes, thanks. I need to get the review in to toggle that over, or if someone has already done it, I'm happy to cast a vote on it.
16:09:09 nobody did it yet
16:09:09 we agreed as a team to proceed.
16:09:24 ok, I'll plan to wrap that up today.
16:09:28 thx!
16:09:31 np!
16:09:45 ok, let's advance topics
16:09:53 #topic Review Action Items
16:10:02 1) #link https://review.openstack.org/#/c/268852/ spec for trust (wanghua)
16:10:17 wanghua: status on this item?
16:10:42 I think we need a discussion about this bp
16:11:10 It is a feature needed by many bps
16:12:08 We need to make a decision
16:12:17 ok, that might require a bit more time than we have today. Our options are to begin by ML, and then follow up by IRC, or schedule a meeting dedicated to this topic. What are your thoughts on this?
16:13:02 we should do a ML discussion. that gives me time to think about it and reply
16:13:27 seems there is already thoughtful discussion on that spec review
16:14:00 wanghua: what question would you like the team to answer?
16:14:10 is this a matter of how to implement?
16:14:18 yes
16:14:25 I don't see a controversy over whether this should be done or not
16:14:33 I do o/
16:14:50 ok, let's focus energy on the review comments, and get our respective POVs there for consideration
16:15:11 and I'm happy to put a follow-up into next week's agenda if we feel we need a discussion as well
16:15:15 wanghua: do you want to replace the access info (for example load balancer) with something else?
16:15:39 I did not check the details; I may need some time to review it
16:16:02 ok, so let's table this for today, and continue in the review comments, and in #openstack-containers
16:16:13 we'll get a good plan together.
16:16:23 ok
16:16:37 we are still in action item review. Next one is:
16:16:39 2) #link https://review.openstack.org/#/c/265057/ spec for volume integration (Kennan)
16:16:49 Kennan_: remarks on this one?
16:16:56 yes adrian_otto
16:17:13 I have replied to many comments, but there is no new comment input, and no review progress yet
16:17:46 I want to collect more input, and make sure we move that forward
16:17:54 adrian_otto :)
16:17:55 I remember seeing a remark on this last night
16:18:13 about not needing a volume on the master node because it's not running a docker daemon
16:18:27 no adrian_otto:
16:18:37 that is not the same review item
16:18:42 it seems to be another item
16:18:53 it seems to be about docker storage configuration
16:18:56 ok, so reviewers, please take a moment to review https://review.openstack.org/#/c/265057/ (myself included) at your earliest convenience
16:19:00 not related to this one
16:19:10 ok, got it Kennan_, thanks.
16:19:19 Thanks adrian_otto
16:19:51 ok, so the action items here are actually complete, and the request for both is for our reviewers to offer additional input on those reviews.
16:19:59 that concludes action item review.
16:20:49 the networking subteam is merging back into the main group again, so I will be dropping the subteam updates from the regular agenda going forward.
16:21:03 #topic Magnum UI Subteam Update (bradjones)
16:21:10 hi all
16:21:28 there is one last important bp which is still yet to merge
16:21:30 https://review.openstack.org/#/c/235620/
16:21:34 hey bradjones
16:21:37 which is bay model create
16:21:57 there is a comment on the current patch which will be addressed shortly between Shu and myself
16:22:08 will be great to get that merged as soon as the new patch is out though
16:22:21 I'm going to spend a bit of time doing some bug management today too
16:22:23 that one ended up being a pretty big patch
16:22:26 Looks great
16:22:38 bradjones: I would like to try that tomorrow if the timing is OK; a good feature I think
16:22:50 as there are a few issues which have had no movement and look like they need fixing
16:22:59 Kennan_: great, will try to get something out before then
16:22:59 thanks bradjones for all that work
16:23:33 adrian_otto: no worries, that's all from me
16:23:41 thanks bradjones
16:23:53 #topic Essential Blueprint Updates
16:24:24 #link https://blueprints.launchpad.net/magnum/mitaka Our Mitaka Blueprints
16:24:58 #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:25:23 dimtruck: how are we doing on this one?
16:25:35 just documentation left
16:25:50 there's one patch that's on hold due to the api tests now taking close to 2 hours
16:25:51 excellent.
16:25:55 so i can't spin up another bay :(
16:26:06 i'll have documentation done this week
16:26:18 eghobo and thomasem helped me out with that already
16:26:27 I did notice that the Magnum API has been really slow recently
16:26:47 yup, that's because bay tests are spinning up a bay...and that takes an extra 30 minutes
16:27:15 we may potentially want to increase the test timeout for these from 2 hours to something longer
16:27:26 like 150 or 180m
16:27:36 the bay create operation is taking much longer than I expect it to.. in the range of 20 seconds for what I expect should be subsecond time
16:27:36 up to the team..
16:28:00 do you mean bay model create?
16:28:24 bay create takes ~18 minutes alone in spinning up swarm nodes and the swarm manager
16:28:29 no, a bay create from an existing baymodel should return a 201 Created right away, but it blocks for almost 20 seconds
16:28:36 ahh
16:28:45 I thought it was my test env, so I chucked it and set up a new one, and the same thing happened again
16:29:23 i'll take a look...i've been concentrating more on getting the bay create time down and haven't looked at the actual api responses :(
16:29:29 maybe someone with performance profiling experience could have a look
16:29:29 or maybe there are some profiling tools for python that I could learn to use
16:29:33 full bay create i meant
16:29:53 thanks dimtruck
16:30:27 I don't think we want to extend the time allowed to run the func tests, but rather look into ways to make them execute more quickly instead
16:30:37 sounds good!
16:30:57 ok, next BP up for check-in is:
16:31:03 #link https://blueprints.launchpad.net/magnum/+spec/magnum-troubleshooting-guide (Tango)
16:31:11 Tango: any update?
16:31:38 We have 2 sections being added. Thanks dimtruck and Tom
16:31:55 2 others in progress
16:32:25 do we have enough velocity on this, or should we plan to pull in more help from other team members?
16:32:46 It would be good to get more help from the team
16:33:09 so if anyone has expertise on debugging, please feel free to jump in
16:33:22 ok, I am going to mark the BP as Slow Progress as a signal that we need more assistance on it.
16:33:40 I set up the TODO list in the BP; please put your name on the section you would like to work on
16:33:57 we also have a related BP:
16:34:02 #link https://blueprints.launchpad.net/magnum/+spec/user-guide (Tango)
16:34:11 that's one of the others in progress you mentioned?
16:34:22 Similarly, 2 are in progress, TLS and networking
16:34:35 ok
16:34:46 Wonder if bradjones can help with Horizon
16:34:52 how should I mark the status of the user-guide BP?
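[Editor's note] On the Python profiling tools mentioned at 16:29:29: a minimal sketch of how one might investigate a slow client-side call with the standard library's cProfile/pstats. This is not from the meeting; `create_bay_request` is a hypothetical stand-in for the slow Magnum API call being discussed.

```python
import cProfile
import io
import pstats
import time

def create_bay_request():
    """Hypothetical stand-in for the slow API call under investigation;
    swap in the real client call when profiling for real."""
    time.sleep(0.05)  # simulate the unexpected latency

# Collect profile data around just the call of interest.
profiler = cProfile.Profile()
profiler.enable()
create_bay_request()
profiler.disable()

# Report the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print(report)
```

The `cumtime` column in the report points at where the ~20 seconds would actually be spent.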
16:35:10 We can say Slow Progress also
16:36:08 ok, you got it
16:37:08 #link https://blueprints.launchpad.net/magnum/+spec/resource-quota (vilobh)
16:37:33 vilobh is not in attendance today
16:38:12 A couple of days ago there was a revision to https://review.openstack.org/#/c/266662/
16:38:45 I'll plan to review that one as well, hopefully today.
16:39:06 next is:
16:39:26 #link https://blueprints.launchpad.net/magnum/+spec/mesos-functional-testing (eliqiao)
16:39:39 eliqiao: remarks on this one?
16:40:00 adrian_otto: I followed up with eliqiao last week
16:40:12 adrian_otto: He mentioned that one is complete
16:40:18 oh, sweet!
16:40:20 I will mark it
16:40:33 oh, it already is. My mistake!
16:40:45 ok, that brings us to open discussion
16:40:50 #topic Open Discussion
16:41:55 maybe we have time to revisit the specs
16:44:14 or, I have a question
16:44:56 I have an opinion I'll be giving in an upcoming talk at the SCALE conference about why you might select one COE over another for a particular workload
16:45:43 if you were to give me your ideas on the top one or two reasons for picking Docker Swarm, Kubernetes, or Apache Mesos as your COE type, what are they?
16:46:26 I'm thinking Swarm for the freedom to really customize the app setup and deployment process, because you can use an imperative style.
16:46:42 Another option is to run Kube and/or Swarm on top of Mesos
16:46:44 or that you are already familiar with the docker tools, and want to continue using those, plus the benefits of OpenStack
16:47:16 Tango: good point. In what circumstances would you prefer to do that?
16:47:51 adrian_otto: maybe we should talk about this topic during the mid-cycle; I cannot type quickly enough ;)
16:47:58 heh
16:47:59 This allows fine-grained resource sharing, so if you have workloads running on both Kube and Swarm, it would be helpful
16:48:57 Tango: do you mean use mesos to schedule between kube and swarm?
16:49:24 Right
16:49:25 Kube is now available as a framework on Mesos, so there is interesting capability for managing the kube cluster
16:50:02 Tango: are you aware of anyone using that in a production capacity?
16:50:10 anyone I can ask about how that's working?
16:50:46 It's too early for production use
16:50:53 ok
16:50:56 It seems interesting. It would be helpful to have a link to a demo of that
16:51:00 The kube framework is only a few months old
16:51:15 yeah, I'd love to learn more about it
16:51:23 but it's an interesting possibility, we are looking into this
16:51:39 might be a good topic for the mid-cycle
16:52:05 adrian_otto:
16:52:10 when you talked about this
16:52:11 "you can use an imperative style"
16:52:12 For Magnum, we could consider deploying Kube and Swarm as frameworks on a Mesos cluster
16:52:13 ok, I'll be sure to get that on the topic list
16:52:16 what does that mean?
16:52:46 Kennan: as opposed to the declarative style used by Kubernetes in the YAML-format kube file
16:53:19 instead of describing the output of the app setup, in an imperative style you would specify the explicit steps the system should follow during deployment.
16:53:35 so if you like declarative and want to use Swarm, you might select a tool like docker-compose
16:54:11 or if you prefer imperative you might just write a script that calls the docker client with a bunch of arguments that you orchestrate around yourself.
16:54:43 OK
16:55:03 I got it. so you mean swarm supports both ways
16:55:09 it is very flexible
16:55:17 Kubernetes gives a nice rich declarative-style experience
16:55:17 right, adrian_otto?
16:55:25 that's one of the sexy parts of it
16:56:09 but the drawback is that it requires a lot of code within the system itself to interpret the model provided by the user, and decide how to accomplish it
16:56:42 ok, if you have more thoughts on this question, find me. I'd love to get your input.
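[Editor's note] The imperative style described above (a script that calls the docker client with explicit, ordered steps you orchestrate yourself) can be sketched as follows. This is a hypothetical illustration, not from the meeting: the app, the image names, and the step list are made up, and the runner is injectable so the plan can be inspected without a real Docker daemon.

```python
import subprocess

def deploy_steps():
    """Ordered docker CLI invocations for a hypothetical two-container app.
    Imperative style: we spell out each step and its sequencing ourselves,
    rather than declaring a desired end state for a tool to reconcile."""
    return [
        ["docker", "network", "create", "app-net"],
        ["docker", "run", "-d", "--name", "db", "--network", "app-net",
         "postgres"],
        ["docker", "run", "-d", "--name", "web", "--network", "app-net",
         "-p", "8080:80", "mywebapp"],
    ]

def deploy(runner=subprocess.check_call):
    """Execute each step in order. The runner is injectable so the plan
    can be dry-run or tested without touching a real Docker daemon."""
    for step in deploy_steps():
        runner(step)

# Dry run: collect the commands instead of executing them.
plan = []
deploy(plan.append)
for cmd in plan:
    print(" ".join(cmd))
```

The declarative counterpart would be a single docker-compose or Kubernetes YAML file describing the desired end state, with the tool deciding the steps.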
16:56:58 we are coming close to the end of our scheduled time for today
16:57:19 hongbin: thanks so much for helping out last meeting as chair. I really appreciate it.
16:57:27 adrian_otto: np
16:58:21 Our next meeting is scheduled for 2016-01-26 at 1600 UTC. I look forward to seeing you all then!
16:58:26 thanks for attending today!
16:58:42 thanks all
16:59:05 #endmeeting