21:00:02 #startmeeting Solum Team Meeting
21:00:02 Meeting started Tue Jun 9 21:00:02 2015 UTC and is due to finish in 60 minutes. The chair is devkulkarni. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:06 The meeting name has been set to 'solum_team_meeting'
21:00:19 #link https://wiki.openstack.org/wiki/Meetings/Solum#Agenda_for_2015-06-09_2100_UTC Agenda for today
21:00:30 #topic Roll Call
21:00:36 Devdatta Kulkarni
21:00:57 hi adrian_otto
21:01:02 I just started the meeting..
21:01:12 devkulkarni: good. I was going to ask you to chair
21:01:18 I am still in transit today.
21:01:42 james li
21:01:45 ok.. we had discussed it last time that you might be out
21:01:56 hope you have safe travel
21:02:01 hi james_li
21:02:09 thanks!
21:02:33 Melissa Kam
21:02:46 hi mkam
21:03:40 Hello, this is Priti Changlani, new to the team.
21:03:46 hi pritic
21:03:54 glad to have you on the team
21:04:28 I am going to continue with the next phase of the meeting.
21:04:43 Hi pritic
21:05:00 If anyone wants to chime in to mark their presence, please feel free to do so anytime during the meeting
21:05:05 #topic Announcements
21:05:08 ed cranford
21:05:19 kebray here.
21:05:23 i promise i'm here
21:05:23 any announcements from anyone?
21:05:29 hi kebray datsun180b
21:06:23 moving on to review action items topic
21:06:29 #topic Review Action Items
21:07:02 devkulkarni: just saw your new spec: https://review.openstack.org/#/c/189929/3
21:07:17 we had two action items for adrian_otto.. but we can carry them over to next week I guess since adrian_otto you are out.. let me know
21:07:37 james_li: yeah.. i just submitted it, it is not completely ready yet.
21:08:03 to jog our memory, the action items were: 1) adrian_otto to spring clean our blueprints 2) adrian_otto to spring clean our bug list
21:08:31 devkulkarni: do we want to send it to the mailing list once you finish writing it?
21:08:45 james_li: we could
21:08:50 ok
21:08:59 I definitely want to get randallburt's opinions on it
21:09:24 I am going to carry forward the two action items mentioned above for next time
21:09:32 #action adrian_otto to spring clean our blueprints
21:09:40 #action adrian_otto to spring clean our bug list
21:09:45 tx!
21:10:02 thanks adrian_otto
21:10:19 #topic BP/Task Review
21:10:43 I can talk about the spec that james_li was referring to above.
21:10:54 hi gpilz
21:11:00 hi
21:11:15 we are in the Task Review topic
21:11:26 ok, about the spec --
21:11:45 it is about how to support app update without changing the app's URL
21:12:03 the basic idea is to use a heat template with load balancer and a server
21:12:16 app's endpoint URL
21:12:25 yes, that is correct james_li
21:12:59 in the spec I have outlined a two-step process that is supposed to achieve the end goal
21:13:19 the main constraint to keep in mind is we may have multiple deployers
21:13:44 and so need to ensure that race conditions don't lead to incorrect system state
21:14:02 (such as, more than two servers being created within the heat stack)
21:14:24 please take a look at the spec whenever you get a chance
21:14:33 so the spec is just focused on a *single* server? will it apply to apps with multiple servers?
21:15:11 james_li: I have not considered apps with multiple servers. we will have to add support for multiple servers from the ground up (API layer, worker, etc.)
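
The spec itself is not quoted in the log, so the following is only a minimal sketch of the two-step, URL-preserving update described above, using python-heatclient. The guard logic, helper names, and the assumption that the two templates differ only in which servers they contain are illustrative, not taken from the spec; in particular, the status check below narrows the multi-deployer race but does not fully eliminate it.

    import time

    from heatclient.client import Client as HeatClient

    def wait_for_status(heat, stack_id, wanted, interval=5):
        # Poll until the stack settles; Heat reports *_FAILED statuses on error.
        while True:
            status = heat.stacks.get(stack_id).stack_status
            if status == wanted:
                return
            if status.endswith('FAILED'):
                raise RuntimeError('stack %s entered %s' % (stack_id, status))
            time.sleep(interval)

    def update_app(heat, stack_id, two_server_tmpl, one_server_tmpl, params):
        # Guard against a concurrent deployer: only proceed when the stack
        # is idle and exactly one server sits behind the load balancer.
        stack = heat.stacks.get(stack_id)
        if stack.stack_status not in ('CREATE_COMPLETE', 'UPDATE_COMPLETE'):
            raise RuntimeError('stack %s is busy; another deployer?' % stack_id)
        servers = [r for r in heat.resources.list(stack_id)
                   if r.resource_type == 'OS::Nova::Server']
        if len(servers) != 1:
            raise RuntimeError('expected 1 server, found %d' % len(servers))

        # Step 1: add a second server running the new app version. The load
        # balancer, and hence the app's endpoint URL, is untouched.
        heat.stacks.update(stack_id, template=two_server_tmpl,
                           parameters=params)
        wait_for_status(heat, stack_id, 'UPDATE_COMPLETE')

        # Step 2: update again with a template that keeps only the new
        # server, removing the old one from the LB pool.
        heat.stacks.update(stack_id, template=one_server_tmpl,
                           parameters=params)
        wait_for_status(heat, stack_id, 'UPDATE_COMPLETE')

    # Client construction is also illustrative:
    # heat = HeatClient('1', HEAT_ENDPOINT, token=AUTH_TOKEN)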
21:15:30 devkulkarni: Magnum has a solution for that
21:15:31 we are not there yet in other areas of the code
21:15:41 oh nice!
21:15:49 so if you deploy into a Magnum pod that might be one less thing to deal with
21:15:54 adrian_otto: mind elaborating on it?
21:15:54 the actual solution is in Heat
21:16:11 it has a new feature that allows for concurrent updates to the same stack
21:16:25 adrian_otto: I see..
21:16:52 is there any spec/docs that you can share with us on this? I would like to take a look
21:16:56 it automatically serializes them so the last one is complete before you get an UPDATE_COMPLETE status back from the heat API.
21:17:08 randallburt has details on this one
21:17:14 I don't know about docs on it
21:17:14 ok cool.
21:17:24 I will follow up with randallburt on this
21:17:36 thanks for the pointer adrian_otto
21:17:40 np
21:18:44 thanks james_li and adrian_otto for the comments
21:18:50 I do remember more about it
21:18:57 if you use a ScalingGroup resource
21:19:17 you can define a webhook for scaling up the count, and back down
21:19:39 you can pass in a desired value of elements to those webhooks
21:19:59 and this webhook will be triggered on Solum app update…?
21:20:08 so if you have two callers asking for the new count to be "3" that's fine
21:20:37 if the goal is to scale it to 0 and then back to a nonzero value
21:20:57 then you scale to 0, wait for the count to reach 0, and then adjust it again to the nonzero value.
21:21:22 heat takes care of serializing the calls then?
21:21:27 yes
21:21:34 nice
21:21:42 I think you can also indicate which node to kill off in the scale down call
21:21:55 so if you wanted you could set the value to n+1
21:22:24 then do an n-1 indicating the uuid of the server resource or container resource you want to eliminate.
21:22:46 having a scaling group of Magnum containers could make that go really fast.
21:22:56 it will soon have support for auto-scaling the bay
21:23:43 cool.. how far along is python-magnumclient? could it be used to do these things from within Solum?
21:24:00 it's stable enough for that in my view
21:24:24 there could be some new API functions coming to support the upcoming Mesos bay type.
21:24:45 ok.. maybe we can add a bug/story to our backlog to investigate its usage and possible integration
21:25:21 sure.. I guess the current bay types should be fine for solum, right?
21:25:33 but the existing API should be enough for what Solum needs. It has support for concurrent multi-version APIs. So you can even get new versions of it, and keep old ones around without the need to tweak things integrated with older API versions.
21:26:14 nice
21:26:19 yes, I think the Swarm Bay type is probably enough for the Solum use case. That would give you more control than the Kubernetes bay would
21:26:42 so you could have Solum control the LB pool membership
21:27:24 we already hit a Docker API on the node we bring up through Heat
21:28:02 instead we just bring up a Magnum (swarm) bay, and bring up containers using the Bay's docker API
21:28:10 super small change to Solum
21:28:43 and would allow us to fall back to using Heat directly (no Magnum) in clouds that have Heat but not Magnum
21:29:07 so you are saying that the LB pool is actually a pool of container instances which can be controlled via the swarm bay api
21:29:31 yes, that would be preferred, right?
21:29:37 but what about situations when the LB pool is actually made up of VM instances
21:29:45 that way we could create and destroy them really fast.
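
A sketch of the drain-and-refill flow adrian_otto outlines above, assuming, as in the discussion, scaling webhooks that accept a desired member count. The URL, the payload shape, and the get_current_count callable are all assumptions for illustration, not a documented Heat interface; only the overall flow (scale to 0, wait, scale back up) comes from the meeting.

    import time

    import requests

    def replace_all_members(scale_webhook_url, new_count, get_current_count,
                            interval=5):
        """Drain the scaling group, then refill it at new_count members.

        Per the discussion, Heat serializes concurrent updates to the same
        stack, so two deployers posting the same desired count converge
        rather than conflict; the second caller simply waits behind the
        first before seeing UPDATE_COMPLETE.
        """
        # Step 1: ask the scaling group for zero members, and wait for the
        # drain to actually finish before doing anything else.
        requests.post(scale_webhook_url, json={'desired_capacity': 0})
        while get_current_count() > 0:
            time.sleep(interval)

        # Step 2: scale back up. The LB keeps its address, so the app's
        # endpoint URL never changes; only the pool membership does.
        requests.post(scale_webhook_url, json={'desired_capacity': new_count})
        while get_current_count() < new_count:
            time.sleep(interval)

The targeted variant adrian_otto mentions (scale to n+1, then to n-1 naming the uuid of the member to remove) would follow the same pattern, with the victim's uuid added to the scale-down payload.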
21:30:08 either way you use Heat.
21:30:25 you just decide whether to use the Heat Docker resource or the Heat Magnum resource.
21:30:25 sure
21:30:41 using an alternate template in each case
21:31:06 ok.. at a high-level I think I get what you are suggesting.. will need to dig a little deeper to understand how it will all fit together in solum
21:31:32 some contributors from Cisco are working on the Heat Magnum resource(s)
21:31:45 let me take an action item to file a bug to investigate solum-magnum integration
21:31:54 so sdake should be able to name them if you want to know more than what you find up for review in tree now.
21:32:21 sure.. I can reach out to sdake to find out the current state of that resource
21:32:52 he's traveling today as well, but I expect him back later this week.
21:33:18 #action devkulkarni to file a bug to investigate solum-magnum integration outlining the various options, relevant documentation, etc.
21:33:23 sure..
21:33:35 you guys hang out on #magnum?
21:33:49 #openstack-containers
21:33:53 ok
21:34:37 thanks adrian_otto for all the pointers on this
21:34:46 my pleasure
21:35:03 are there other tasks/blueprints that we want to discuss today?
21:36:25 ok, I will move on to open discussion then..
21:36:35 #topic Open Discussion
21:37:49 pritic you still here?
21:38:22 I am, hi!
21:38:50 thanks for joining us today. Would you feel comfortable taking a moment to introduce yourself to the rest of our team?
21:40:22 Sure, I am working with the Rackspace Solum QE Team as a summer intern. Originally I am from India, but I have been in the US since last fall for my master's in Computer Science at the University of Florida.
21:40:56 excellent. I'm looking forward to working with you.
21:41:01 It is my first day today and I am really looking forward to a great summer experience.
21:41:17 :D
21:41:30 you're lucky to be on such a great team
21:41:43 it should indeed be a fun and challenging summer for you
21:41:53 +1
21:42:16 That's the plan!
21:42:41 * adrian_otto needs to disembark. Catch you later
21:42:48 thanks adrian_otto
21:43:00 others: anything else for today or should we call it?
21:43:39 nothing from me
21:43:56 ok.. mkam, james_li, gpilz, pritic?
21:44:05 I'm good
21:44:10 yes
21:44:30 I am good. Thanks.
21:44:34 ok then.. ending the meeting
21:44:37 #endmeeting
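
For reference, the "alternate template" idea discussed above might reduce to something like this inside Solum's deployer. This is only a sketch: the template file names, the service-catalog check, and the resource types named in the comments are hypothetical, and the Heat Magnum resource was still under review at the time of the meeting.

    def pick_deploy_template(service_catalog):
        # Prefer a Magnum (swarm bay) backed template when the cloud offers
        # a container service; otherwise fall back to plain Heat driving a
        # Docker API directly, as Solum already does. Both templates would
        # expose the same load balancer, so the app's URL stays stable
        # regardless of which path is taken.
        if 'container' in service_catalog:
            return 'templates/app_on_magnum_bay.yaml'  # hypothetical name
        return 'templates/app_via_docker.yaml'         # hypothetical name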