15:00:00 <bauzas> #startmeeting gantt
15:00:01 <openstack> Meeting started Tue Apr 15 15:00:00 2014 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:05 <openstack> The meeting name has been set to 'gantt'
15:00:15 <bauzas> woah, 15:00:00
15:00:18 <bauzas> hi all
15:00:21 <mspreitz> hi
15:00:54 <bauzas> n0ano is not able to attend this meeting, so he's handing it over to me
15:01:38 <bauzas> let's wait 5 mins to get people in
15:03:46 <bauzas> who's here to talk about Nova scheduler?
15:04:01 <mspreitz> moi
15:04:09 <bauzas> mspreitz: :)
15:04:32 <bauzas> seems like the room will be crowded
15:04:34 <bauzas> :)
15:04:44 <bauzas> ok, starting then
15:05:15 <bauzas> #topic Follow-up on previous actions
15:05:32 <bauzas> #link http://eavesdrop.openstack.org/meetings/gantt/2014/gantt.2014-04-08-15.01.html
15:05:37 <bauzas> there were 3 actions
15:05:50 <bauzas> I had no news from n0ano
15:05:53 * johnthetubaguy is lurking
15:05:56 <bauzas> will chase with him
15:06:10 <bauzas> about creating an etherpad for ATL sessions
15:06:37 <bauzas> #action bauzas to discuss with n0ano to see if etherpad about ATL sessions on Sched is already created
15:06:50 <bauzas> on my side, I had 2 actions
15:07:12 <bauzas> the last one (chairing the meeting) is currently running :)
15:07:22 <bauzas> about creating a top wiki page for Gantt
15:07:36 <bauzas> #link https://wiki.openstack.org/wiki/Gantt
15:07:45 <bauzas> that's a placeholder at the moment
15:07:53 <bauzas> content to come in :)
15:08:12 <bauzas> the etherpad for ATL sessions will be placed there
15:08:24 <mspreitz> some wiki pages are linked as children of others; is this one, in any way, linked under Nova?
15:08:50 <bauzas> well, the bbcode is about subpages of Gantt/
15:09:03 <bauzas> mspreitz: but we can add other references
15:09:23 <mspreitz> Compare with https://wiki.openstack.org/wiki/Heat/AutoScaling
15:09:28 <mspreitz> it points to its parent, Heat
15:09:48 <bauzas> oh, I got your point
15:09:50 <mspreitz> and its parent should point back to that child (among the others)
15:10:00 <bauzas> well, Gantt stands by itself
15:10:13 <bauzas> but we can link it to Nova some way
15:10:17 <mspreitz> We want it to, but I think Nova should point to it
15:10:25 <bauzas> indeed
15:10:32 <bauzas> ok, will see what I can do
15:10:35 <mspreitz> thanks
15:10:45 <mspreitz> this is, in part, a point-in-time statement
15:10:48 <bauzas> #action bauzas to link Gantt wiki namespace with Nova one
15:10:53 <mspreitz> today, Gantt is really a creature of Nova
15:11:02 <bauzas> mspreitz: +1
15:11:02 <mspreitz> with grander ambitions
15:11:16 <bauzas> indeed
15:11:36 <bauzas> well, frankly speaking, there is no Gantt code yet
15:11:55 <bauzas> there is at some point a Gantt repo, containing an old Nova fork
15:12:00 <bauzas> but that's it
15:12:23 <bauzas> the idea of this wiki page is to wrap all content around Gantt, not only on an etherpad :)
15:12:27 <toan_tran|2> bauzas: well, it's not a problem so far if Gantt is totally separated
15:12:50 <bauzas> indeed, that's pure seam
15:13:18 <bauzas> I mean, all docs should be placed there, including etherpads, even if all efforts are currently under Nova hood
15:13:32 <toan_tran|2> bauzas: +1
15:13:38 <mspreitz> bauzas: +1
15:13:45 <bauzas> ok cool
15:14:00 <bauzas> ok, moving to the next topic then
15:14:14 <bauzas> #topic Status on forklift efforts
15:14:26 <bauzas> and again, some links...
15:14:34 <bauzas> #link https://review.openstack.org/#/c/82133/
15:14:42 <bauzas> reviews are welcome here :)
15:15:09 <bauzas> the idea is to define exactly what should be done and how
15:15:31 <bauzas> I'm considering only changes to Nova code outside the Scheduler
15:16:01 <toan_tran|2> right now there are a lot of things overlapping between Nova and Oslo
15:16:06 <bauzas> I also have to open another blueprint for changes inside the Scheduler, but that's another thing
15:16:21 <bauzas> toan_tran|2: which ones ?
15:16:22 <toan_tran|2> should we take this chance to split them out?
15:16:40 <johnthetubaguy> bauzas: are you feeling happy with that now, is there some stuff you are particuarly worried about?
15:16:53 <bauzas> johnthetubaguy: there was a misunderstanding with russellb
15:17:08 <toan_tran|2> I don't remember exactly, but I remember hearing Jay Lau discussing it on the ML
15:17:08 <bauzas> johnthetubaguy: but I hope this is now resolved
15:17:17 <bauzas> toan_tran|2: mmm
15:17:23 <toan_tran|2> like the weights
15:17:29 <bauzas> toan_tran|2: oh ok
15:17:42 <toan_tran|2> something about weight_multiplier
15:17:53 <bauzas> johnthetubaguy: will create second BP for chasing up changes to do on scheduler
15:18:21 <bauzas> toan_tran|2: won't be in the scope of https://review.openstack.org/#/c/82133/
15:18:31 <toan_tran|2> https://review.openstack.org/#/c/66285/
15:18:33 <toan_tran|2> here
15:18:36 <johnthetubaguy> bauzas: that sounds good, stuff for juno-2 I guess
15:19:12 <bauzas> johnthetubaguy: what fits for juno-2 sorry ?
15:19:23 <bauzas> johnthetubaguy: bp/scheduler-lib ?
15:19:38 <johnthetubaguy> I am thinking scheduler-lib for juno-1, follow up stuff for juno-2?
15:19:56 <bauzas> johnthetubaguy: oh ok, huge +2 here
15:19:56 <russellb> o/
15:19:58 <russellb> did you split up the spec into 2 now?  or?
15:20:11 <bauzas> russellb: o/
15:20:18 <russellb> looks like it wasn't split
15:20:33 <bauzas> russellb: sorry, but can't see why it needs to be split
15:20:34 <russellb> i still think this spec is including 2 separate things
15:20:44 <russellb> there are 2 distinct things mentioned in the introduction
15:20:52 <russellb> 1) need to isolate how you talk *to* the scheduler
15:21:00 <russellb> 2) need to isolate what the scheduler talks to (nova db or whatever)
15:21:06 <russellb> those are completely separate concerns in my mind
15:21:16 <bauzas> russellb: well, 2) is not what we're doing
15:21:18 <russellb> and this spec mixes them, and calls it a library
15:21:34 <russellb> 17	In this blueprint, we need to define in a clear library all accesses to the
15:21:36 <russellb> 18	Scheduler code or data (compute_nodes DB table) from other Nova bits (conductor
15:21:36 <bauzas> russellb: we don't want to change what scheduler talks to
15:21:38 <russellb> 19	and ResourceTracker).
15:22:16 <bauzas> ResourceTracker directly writes updates to compute_nodes
15:22:29 <bauzas> russellb: we're just isolating it thru the sched lib
15:22:32 <russellb> and you're defining that as data the scheduler owns?
15:22:35 <russellb> ok
15:22:41 <russellb> this wording is better this time
15:22:41 <mspreitz> from where are these numbers coming?
15:22:45 <russellb> that wasn't clear to me before ...
15:22:50 <bauzas> mspreitz: Gerrit copy/paste
15:22:51 <russellb> mspreitz: ignore them, just copied from gerrit
15:22:58 <mspreitz> got it
15:23:01 <russellb> https://review.openstack.org/#/c/82133/13/specs/juno/scheduler-lib.rst
15:23:07 <bauzas> russellb: well, I'm sorry, I don't know how to be more precise :(
15:23:24 <russellb> bauzas: no it's better now, sorry
15:23:42 <russellb> bauzas: until looking this time i thought you were changing internals of the scheduler, as well
15:23:52 <bauzas> russellb: oh ok
15:24:03 <bauzas> russellb: sorry about that, I was unclear
15:24:14 <bauzas> hence L21
15:24:17 <russellb> and i'm picky :)
15:24:19 <russellb> so sorry ;)
15:24:29 <russellb> but OK, i think we're much closer now, i'll do another pass on this today
15:24:35 <bauzas> russellb: ok cool
15:25:02 <russellb> i'm sure your intent was right/good the whole time, i'm just really picky to make sure we're clearly on the same page about it, so picky about how/what is written
15:25:03 <bauzas> russellb: I was just saying that changes to the scheduler will be defined in another bp we target for juno-2
15:25:08 <russellb> ok
15:25:29 <bauzas> russellb: thanks for jumping in now :)
15:25:43 <bauzas> russellb: I mean, that was appreciated
15:25:57 <bauzas> ok, so, I'm taking an action here
15:26:17 <bauzas> #action bauzas to create another blueprint for changes to scheduler
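[Editor's aside: the scheduler-lib boundary discussed above — conductor and the ResourceTracker going through one library facade instead of touching the compute_nodes table directly — might be sketched roughly as below. `SchedulerClient`, `update_resource_stats`, `select_destinations` and `InMemoryBackend` are invented names for illustration, not actual Nova APIs.]

```python
class SchedulerClient:
    """Hypothetical single entry point for everything Nova says *to*
    the scheduler, per the scheduler-lib spec discussion."""

    def __init__(self, backend):
        # the backend hides whether the data lands in the nova DB or,
        # later, in a split-out Gantt service
        self._backend = backend

    def update_resource_stats(self, host, stats):
        # the ResourceTracker would call this instead of writing to
        # the compute_nodes table directly
        self._backend.save_compute_node(host, stats)

    def select_destinations(self, request_spec):
        # the conductor would call this to place instances
        return self._backend.schedule(request_spec)


class InMemoryBackend:
    """Toy backend standing in for the compute_nodes table."""

    def __init__(self):
        self.nodes = {}

    def save_compute_node(self, host, stats):
        self.nodes[host] = stats

    def schedule(self, request_spec):
        # trivial placement: first host with enough free RAM
        needed = request_spec["memory_mb"]
        for host, stats in self.nodes.items():
            if stats["free_ram_mb"] >= needed:
                return host
        raise ValueError("no valid host found")


client = SchedulerClient(InMemoryBackend())
client.update_resource_stats("node1", {"free_ram_mb": 512})
client.update_resource_stats("node2", {"free_ram_mb": 4096})
print(client.select_destinations({"memory_mb": 2048}))  # only node2 fits
```

The point of the facade is that callers never learn where the data lives, which is what makes the later forklift into a separate service possible.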
15:27:15 <bauzas> can we move forward and go to the next topic ?
15:27:25 <mspreitz> +1
15:27:39 <bauzas> #topic Juno summit design sessions
15:27:57 <bauzas> #link http://summit.openstack.org/
15:28:30 <bauzas> as said previously, we'll chase all proposals in an etherpad
15:29:00 <bauzas> at the moment, no review has yet been done on the proposals
15:29:28 <bauzas> except http://summit.openstack.org/cfp/details/45, which has been refused
15:29:59 <bauzas> it was about discussing how to manage reservations and scheduling in OS
15:30:41 <bauzas> so, I can invite people interested in discussing it in ATL to join
15:30:57 <bauzas> http://summit.openstack.org/cfp/details/142
15:31:11 <mspreitz> oh
15:31:23 <mspreitz> I had not noticed that
15:31:50 <mspreitz> I think we need to continue the API discussion for joint-scheduling of a heterogeneous group of VMs
15:32:13 <mspreitz> The description of 142 does not clearly include that, while 45 sounded inclusive
15:32:39 <bauzas> mspreitz: the #142 session is about how Climate can integrate with OpenStack
15:32:54 <mspreitz> bauzas: exactly.  A more narrow scope than I read for #45
15:33:02 <bauzas> mspreitz: indeed
15:33:09 <mspreitz> I want to talk about something that is in 45 - 142
15:33:32 <mspreitz> I'd like to talk about it today and see it discussed at the summit
15:33:53 <bauzas> mspreitz: deadline for proposals is on April 20th
15:34:11 <mspreitz> OK, sounds like I need to add one, since 45 has been narrowed to 142
15:34:26 <mspreitz> Can we also discuss it a bit today?  When we get to the open part of the agenda?
15:34:46 <bauzas> mspreitz: there is another session which could help you
15:35:07 <bauzas> mspreitz: http://summit.openstack.org/cfp/details/140
15:35:33 <bauzas> mspreitz: please discuss it in the open part
15:35:36 <mspreitz> yes, I saw that one too
15:35:46 <mspreitz> but I thought that was maybe too broad
15:35:51 <bauzas> mspreitz: I'll jump on the next topic and we'll go back to your point after
15:35:56 <mspreitz> ok
15:36:12 <bauzas> mspreitz: well, one is too narrow and the other one is too broad :)
15:36:34 <bauzas> #topic open discussion
15:36:45 <bauzas> now you can talk
15:36:46 <bauzas> :)
15:36:47 <bauzas> :D
15:36:50 <mspreitz> OK...
15:36:59 <mspreitz> I want to discuss the next step for server groups
15:37:21 <mspreitz> That is, how to change the API to enable a joint decision to be made about a heterogeneous group of VMs
15:37:33 <mspreitz> I imagine a three phase API:
15:37:52 <mspreitz> (1) describe the VMs and scheduling constraints, get a joint decision made
15:38:01 <mspreitz> (2) client then makes the calls to create the VMs
15:38:21 <mspreitz> (3) after all the success, failures, wedges, and give-ups, the client calls back to the group to confirm that he is done
15:38:41 <mspreitz> (3) is so that unused allocations can be released
15:38:51 <mspreitz> what do you think?
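[Editor's aside: a minimal sketch of the three-phase flow mspreitz outlines, assuming an invented `GroupScheduler` class — none of these names exist in Nova. Phase (1) makes one joint reservation for the whole group, phase (2) is the client booting VMs against it, phase (3) confirms what was used so the rest can be released.]

```python
import uuid


class GroupScheduler:
    """Toy scheduler making one joint decision for a whole group."""

    def __init__(self, free_ram_per_host):
        self.free = dict(free_ram_per_host)
        self.reservations = {}  # reservation id -> list of (host, ram)

    def reserve_group(self, vm_specs):
        """Phase 1: place every VM of the group at once, or fail wholesale."""
        free = dict(self.free)
        placement = []
        for spec in vm_specs:
            host = next(
                (h for h, r in free.items() if r >= spec["memory_mb"]), None)
            if host is None:
                raise ValueError("group does not fit")  # nothing committed
            free[host] -= spec["memory_mb"]
            placement.append((host, spec["memory_mb"]))
        rid = str(uuid.uuid4())
        self.reservations[rid] = placement
        self.free = free  # commit the joint decision
        return rid, [h for h, _ in placement]

    # Phase 2 happens in the client: boot each VM on its reserved host.

    def confirm(self, rid, used_slots):
        """Phase 3: the client reports which slots it really used; the
        rest of the reserved capacity is given back."""
        for i, (host, ram) in enumerate(self.reservations.pop(rid)):
            if i not in used_slots:
                self.free[host] += ram


sched = GroupScheduler({"hostA": 4096, "hostB": 2048})
rid, hosts = sched.reserve_group(
    [{"memory_mb": 2048}, {"memory_mb": 2048}])
print(hosts)  # both VMs land on hostA, so affinity is achievable
sched.confirm(rid, used_slots={0})  # pretend the second boot failed
print(sched.free)  # the unused 2048 MB goes back to hostA
```

The failure path in phase (1) is the key property: either the whole group fits, or nothing is allocated.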
15:38:57 <bauzas> mspreitz: by server groups, you mean the same logic as with instance groups ?
15:39:07 <bauzas> mspreitz: but dealing with hosts ?
15:39:19 <bauzas> mspreitz: I mean computehosts
15:39:31 <mspreitz> AFAIK, instance-group-api-extension deals with what are now called server groups
15:39:38 <mspreitz> "server" here means VM AKA instance
15:39:48 <bauzas> ok
15:40:08 <mspreitz> I am talking about an evolutionary step forward from the current server group API
15:40:18 <mspreitz> the current API leads to serial decision making
15:40:24 <bauzas> mmm ok
15:41:11 <bauzas> correct me if I'm wrong but there are anti-affinity and affinity policies with instance groups, correct ?
15:41:22 <mspreitz> correct
15:42:13 <mspreitz> But current API allows only serial decision making --- no knowledge about what else will be in the group in the future is available when making one scheduling decision
15:42:31 <bauzas> what do you mean by "in the future" ?
15:42:36 <mspreitz> ah right
15:42:40 <bauzas> could you provide a use-case ?
15:42:57 <mspreitz> by "future" here I don't mean what you would think; I only refer to the span of time that phase (2) takes
15:43:56 <mspreitz> Use case: you want to create a web server and a database server with affinity.  Today you have to create one first, and at that time there is no knowledge in scheduler that the second is really desired right now, and the first should be placed where there is room to put the second nearby
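[Editor's aside: the use case can be made concrete with toy numbers (invented here, not from the discussion). Placed serially, the first VM can land on a host with no room for the second; a joint decision over both VMs avoids this.]

```python
# Two hosts with free RAM in MB; a web VM and a db VM that want affinity.
hosts = {"small": 2048, "big": 4096}
web, db = 1024, 3072

# Serial: place web wherever it fits first (iteration order -> "small"),
# then try to co-locate db on the same host for affinity.
serial_web = next(h for h, free in hosts.items() if free >= web)
affinity_ok_serial = hosts[serial_web] - web >= db  # no room left for db

# Joint: consider both VMs at once, keeping only hosts that fit the pair.
joint = next(h for h, free in hosts.items() if free >= web + db)

print(serial_web, affinity_ok_serial)  # serial choice strands the pair
print(joint)                           # joint choice picks the big host
```

Same hosts, same VMs; only the order of information changes the outcome, which is exactly the serial-vs-simultaneous point.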
15:44:03 <toan_tran|2> mspreitz: If I understand correctly, you mean we need to "memorize" the group configuration
15:44:34 <mspreitz> I mean the group config should be considered all at once in a joint decision making exercise
15:44:35 <toan_tran|2> so that whenever you create a new server, scheduler can traceback in which (anti-)affinity it belongs to
15:45:08 <mspreitz> I mean to change the scheduling protocol, so the scheduler can see the whole group and make a decision about all of it at once
15:45:23 <bauzas> mspreitz: but you can ask nova to boot 2 distinct VMs in the same group, no ?
15:45:48 <mspreitz> bauzas: yes. But today you have to create one after the other
15:46:05 <mspreitz> I want to enable scheduling both in the same scheduler call
15:46:12 <bauzas> mspreitz: but you can ask Nova to boot 2 VMs on a single call
15:46:31 <mspreitz> bauzas: only if they are homogenous
15:46:38 <bauzas> mspreitz: ah right
15:46:41 <toan_tran|2> mspreitz: I think I got your idea, right now we only hint the group and scheduler will place it depending on which filter we use, Anti- or Affinity-
15:47:22 <toan_tran|2> here you'll have the two policies altogether, so scheduler must know which policy to use
15:47:28 <toan_tran|2> is that right?
15:47:29 <mspreitz> toan_tran|2: I am not debating the policy, my point is that it would be better to apply the policy simultaneously and symmetrically
15:47:32 <bauzas> mspreitz: do you have opened a blueprint for discussion here ?
15:48:39 <mspreitz> Gary K, Yathi, Debo and I brought a BP to the last summit.  It should be revised to clearly say what I am saying now.
15:48:46 <mspreitz> The old BP is more ambitious
15:48:47 <bauzas> mspreitz: indeed
15:48:55 <mspreitz> I think I will write this narrower one
15:49:11 <bauzas> mspreitz: I think I got your idea
15:49:11 <toan_tran|2> mspreitz: that would definitely help
15:49:27 <mspreitz> toan_tran|2: I think you are adding the concern about multiple and potentially conflicting policies.  That's an orthogonal issue, I think.
15:49:43 <bauzas> mspreitz: but that requires holding the scheduler decision for the first instance until the 2nd one comes in
15:49:57 <toan_tran|2> mspreitz: that would be another issue that we can concern once the instance group is done
15:49:59 <mspreitz> Right.  That's why I connected to your talk about reservations
15:49:59 <toan_tran|2> :)
15:50:37 <bauzas> mspreitz: well, that's still ambitious :)
15:50:50 <mspreitz> And remember phase (3), so unneeded reservations can be released soon if it is known they will not be used.
15:50:56 <bauzas> mspreitz: I'm just thinking it would require both Heat and Nova interactions at least
15:51:23 <johnthetubaguy> mspreitz: I do wonder if we should have "holistic scheduling" that makes reservations and then later create VMs for those reservations, maybe thats what you are suggesting?
15:51:29 <mspreitz> bauzas: I think true scheduling (place + time), as you have been advocating, is a hard problem in itself...
15:51:44 <mspreitz> the sophisticated placement I have been talking about is also a hard problem itself...
15:51:45 <johnthetubaguy> I mean holistic placement really, of course
15:51:46 <bauzas> johnthetubaguy: +1
15:51:56 <mspreitz> I can not recommend combining the two into an even harder problem
15:52:17 <mspreitz> johnthetubaguy: yes
15:52:24 <bauzas> mspreitz: another option would be to consider live migrating the first instance
15:52:27 <mspreitz> I am trying to take this incrementally
15:52:36 <mspreitz> By "holistic" I also mean "more than Nova"...
15:52:43 <bauzas> I mean
15:52:54 <toan_tran|2> bauzas: that would introduce control + delays
15:52:59 <mspreitz> What I am trying to talk about today is a small step: change server groups from serial to simultaneous decision making
15:53:15 <mspreitz> I think the Nova part does not require any Heat interaction.
15:53:24 * toan_tran|2 mentions the "small step"
15:53:29 <mspreitz> Heat, as a Nova client, will of course have to have a way to use the new API
15:53:30 <bauzas> mspreitz: but that requires serializing the scheduling decision :D
15:54:22 <bauzas> we're running out of time
15:54:25 <johnthetubaguy> mspreitz: can we take an even smaller step, create nova APIs to make reservations, and this can be outside of Nova, and in the scheduler?
15:54:42 <johnthetubaguy> mspreitz: so leaving VM groups just as it is today, for now
15:54:55 <bauzas> johnthetubaguy: hence the discussion about Climate integration then
15:55:02 <johnthetubaguy> yeah
15:55:02 <toan_tran|2> johnthetubaguy: +1
15:55:03 <mspreitz> johnthetubaguy: not sure I follow, can you send an outline on ML?
15:55:29 <johnthetubaguy> mspreitz: I will try do that, bug me if I forget
15:55:38 <mspreitz> thanks.
15:56:02 <bauzas> mspreitz: I think Climate could help, provided we see how it can be integrated within Openstack
15:56:20 <bauzas> but ok, let's discuss it outside this meeting
15:56:33 <bauzas> any other opens to discuss ?
15:56:35 <mspreitz> bauzas: Possibly, if Climate is factored.  As I said earlier, I think we have two hard problems that I do not want to combine
15:56:40 <bauzas> we're having 3 mins left
15:57:22 <bauzas> tic tac
15:57:28 <bauzas> ok, will end the meeting
15:57:31 * mspreitz hears crickets
15:57:40 <bauzas> #endmeeting