15:00:52 #startmeeting gantt
15:00:53 Meeting started Tue Jan 20 15:00:52 2015 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:56 The meeting name has been set to 'gantt'
15:01:02 \o
15:01:04 o/
15:01:05 o/
15:01:08 \o
15:01:11 drift off for 30 seconds and people slap you :-)
15:01:25 sorry for the delay, anyway...
15:01:40 n0ano: we'll let it slide this one time...
15:01:41 eh eh
15:01:46 ;-)
15:01:52 #topic Remove direct nova DB/API access by Scheduler Filters - https://review.openstack.org/138444/
15:02:00 edleafe, but not next time
15:02:27 we talked on email and bauzas tried to explain, are you two still far apart?
15:02:30 ok, so here's the basic issue
15:02:47 there are two ways to get the instance info to the scheduler
15:03:02 edleafe: please explain those 2 things
15:03:13 one is to enhance the current _get_all_host_states() so that each host includes instance info
15:03:28 let's call it option #2
15:03:29 oops
15:03:31 option #1
15:03:35 the other is to have hosts report changes in instance info to the scheduler, which would maintain that in-memory
15:03:52 let's call it option #2
15:03:57 bauzas: ok, these are options 1 & 2
15:04:09 option 2 is more disruptive
15:04:20 would require changing code in more places
15:04:31 ok, about #2, you mean that hosts will report all the instances every 60 secs?
15:04:31 but would be closer to the eventual model that we want
15:04:41 this was jaypipes's concern
15:04:49 and his main reason for favoring the approach
15:04:52 edleafe: option #2, report changes to the scheduler, means writing the instance info to the compute node table?
15:04:53 dammit, jaypipes is not there yet!
15:05:15 bauzas: they would report when a change occurs
15:05:23 create/destroy/resize
15:05:24 seriously, Jay, if you're reading us, please come in, because we already discussed this BP last week without you
15:05:34 edleafe: using an RPC API call?
15:05:44 bauzas: yes
15:06:07 edleafe: isn't what I said last week OK?
15:06:26 in opt #2, the scheduler would get the info when it starts up, and the hosts would update it when they change
15:06:38 option #2 is for having pending instances provided to the scheduler
15:07:11 edleafe: right, but do you understand that there is no persistence in the scheduler now?
15:07:26 edleafe, my only concern with #2 is how to get initial state, does the scheduler have to query every host in the system?
15:07:34 * bauzas would like to be fluent in English...
15:07:49 n0ano would like to be fluent in French :-)
15:07:55 bauzas: well, isn't that what would happen? If a host gets an instance request (pending), it would report it to the scheduler *before* it starts building it
15:08:06 because it sounds like I have some problems explaining why I think you missed something
15:08:10 bauzas: if that fails, then it would report that, too.
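[Editor's note: to make the two options discussed above concrete, here is a minimal, hypothetical Python sketch; it is not the code under review. Option #1 would extend the existing _get_all_host_states() query so each HostState also carries its instances, while option #2 would have compute hosts push create/destroy/resize changes over RPC into a view the scheduler keeps in memory, so filters no longer reach into the DB. All class and method names below (SchedulerInstanceView, sync_from_host, update_instance_info, delete_instance_info) are illustrative assumptions, not Nova's actual API.]

    # Hypothetical sketch only -- not the actual Nova code under review.
    # Option #2: hosts push instance changes to the scheduler over RPC and
    # the scheduler keeps that info in memory, so filters never touch the DB.

    class HostState:
        """Simplified stand-in for the scheduler's per-host view."""
        def __init__(self, host):
            self.host = host
            self.instances = {}          # uuid -> minimal instance info


    class SchedulerInstanceView:
        """Option #2: in-memory instance info, updated by compute-node RPC casts."""

        def __init__(self):
            self._host_states = {}       # host name -> HostState

        def sync_from_host(self, host, instance_list):
            # Called once per host at scheduler start-up (or on resync) to
            # seed the initial state -- the "how do we get initial state?"
            # concern raised in the meeting.
            state = self._host_states.setdefault(host, HostState(host))
            state.instances = {inst['uuid']: inst for inst in instance_list}

        def update_instance_info(self, host, instance):
            # RPC handler: a compute node reports a create/resize *before*
            # it starts building, so the scheduler also sees pending instances.
            state = self._host_states.setdefault(host, HostState(host))
            state.instances[instance['uuid']] = instance

        def delete_instance_info(self, host, instance_uuid):
            # RPC handler: a compute node reports a destroy (or a failed build).
            state = self._host_states.get(host)
            if state:
                state.instances.pop(instance_uuid, None)

[Under this reading, the start-up sync answers n0ano's initial-state question, and the per-change RPC handlers are what makes option #2 "groundwork" for a fully in-memory scheduler.]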
15:08:32 bauzas: ok, please explain
15:08:35 edleafe: I said I was OK with that approach for pending instances
15:09:10 edleafe: but as HostState is not persisted in between each request, it needs to fetch all the instances for each host
15:09:49 edleafe: req1 comes in, calls _get_all_host_states(), instantiates the list of HostState objects
15:10:01 edleafe: req2 comes in, calls _get_all_host_states(), instantiates the list of HostState objects
15:10:08 bauzas, isn't edleafe basically saying we would change HostState to `be' persistent in the scheduler
15:10:19 n0ano: well, that's the end goal
15:10:22 n0ano: it was not explicitly said
15:10:32 we shouldn't really have to make that call every time
15:10:44 n0ano: but I answered on that approach in my email by saying "+1000 but not now"
15:11:02 well, to be precise, I said "+1000 but not *in that spec*"
15:11:03 n0ano: with opt #2, getting there would be half-complete
15:11:20 n0ano: with opt #1, we'd still be where we are now
15:11:33 edleafe: why?
15:11:37 I'm understand why persisted will resolve the race problem. Persisted means we need a lock for the scheduler to read HostState?
15:11:53 s/I'm understand/I'm not understand...
15:12:07 alex_xu: the main problem is that if we persist HostState, we need to carefully think about all the problems around it
15:12:11 bauzas: because with opt #1, we're still calling _get_all_host_states() with each request
15:12:27 alex_xu, shouldn't need a lock, the scheduler owns the persisted HostState, no contention on it
15:12:27 edleafe: right, and what is the scope of this BP?
15:12:55 edleafe: are you planning to solve all scheduler problems in that spec, or are you focusing on not calling the DB in the filters but rather elsewhere?
15:13:14 bauzas: so with opt #2, we still do that, but make a small step toward the scheduler maintaining the info in-memory instead of always calling
15:13:29 edleafe: again, that's not the scope of this spec
15:13:47 bauzas: getting the full host state in memory would be an L project
15:13:57 edleafe: if that spec requires a persistent approach, then let's write a spec for persisting HostState and make your spec depend on that one
15:14:41 bauzas: that would be fine, but I thought we agreed that that would come in L, not Kilo
15:14:52 edleafe: but I'm sorry, opt #2 did not explicitly describe this approach
15:15:03 edleafe: exactly, here is the point
15:15:05 jaypipes thought that this would lay the groundwork
15:15:34 honestly, do we all need to meet if jay is not here again?
15:15:44 IOW, if we know we want to end up in one place, why add code that uses the approach we want to leave?
15:15:55 I don't want to spend too much of my time agreeing on something that jay could -1
15:16:04 bauzas: yeah, jay not being here sucks
15:16:18 I mean, he's the sub-PTL for the scheduler
15:16:26 edleafe: so I ask to postpone that discussion until jay's back
15:16:38 he's not online so we have to try to do what we can
15:16:39 bauzas: ok
15:16:45 because I will have to explain again why I disagree etc.
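[Editor's note: the exchange above hinges on _get_all_host_states() being re-run for every request because HostState is not persisted between requests, and on whether a persisted view would need locking. The sketch below is purely illustrative, with all names invented rather than taken from Nova: it contrasts the per-request rebuild with a scheduler that seeds its state once and then serves requests from memory, which is the "L project" end state being debated.]

    # Hypothetical sketch of the per-request flow being debated -- not Nova code.
    # Today (and under option #1), each scheduling request rebuilds the
    # HostState list from the database; with a persisted in-memory view,
    # that repeated rebuild goes away.

    def get_all_host_states_from_db():
        # Stand-in for today's DB-backed query; assume it is relatively costly.
        return [{'host': 'compute1', 'instances': {}},
                {'host': 'compute2', 'instances': {}}]


    class PerRequestScheduler:
        """req1, req2, ... each trigger the costly query (the behavior
        bauzas describes above)."""

        def select_destination(self, request_spec):
            host_states = get_all_host_states_from_db()
            return self._pick(host_states, request_spec)

        def _pick(self, host_states, request_spec):
            # Placeholder for the real filtering/weighing step.
            return host_states[0]['host'] if host_states else None


    class PersistedStateScheduler(PerRequestScheduler):
        """With a persisted view, the scheduler reads its own in-memory
        state.  A single scheduler process owns that state, so reads need
        no extra locking beyond what the update handlers already serialize
        (alex_xu's question above)."""

        def __init__(self):
            self._host_states = get_all_host_states_from_db()  # seeded once

        def select_destination(self, request_spec):
            return self._pick(self._host_states, request_spec)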
15:17:08 n0ano: so do we need to set up another hangout or IRC discussion?
15:17:09 if we postpone, what are the odds we'll be done by the spec exception deadline?
15:17:34 n0ano: we shouldn't wait until next week, since it'll be the mid-cycle
15:17:35 edleafe, I'd prefer IRC, I have to say the hangout didn't work that well
15:17:40 edleafe: sub-PTL doesn't necessarily mean that a person would be a tech lead
15:17:42 n0ano: agreed
15:17:52 IRC can do the job
15:18:05 bauzas: true, but he should be keeping everything on track
15:18:14 right, hence my ask to postpone
15:18:25 because we need his advice
15:18:31 bauzas: yes
15:18:44 let's keep an eye out for him on IRC and see if we can set up an ad-hoc channel
15:18:53 bauzas, when will you be quitting for the day?
15:19:02 please all keep in mind we're a distributed team with Chinese and EU people
15:19:16 I'm here for around 2 more hours
15:19:26 and then, possibly here
15:19:40 #action Track down jaypipes and set up an IRC discussion with bauzas, edleafe, n0ano, and anyone else interested
15:19:45 provided that doesn't conflict with personal concerns
15:20:00 n0ano: could you take the action?
15:20:10 edleafe, I'm good almost any time
15:20:14 edleafe: you can't add actions, you're not chairing the meeting
15:20:21 maybe set it up for early tomorrow morning in the US?
15:20:30 bauzas: ah
15:20:31 it's hard to catch Jay, so don't worry about me, I can check the log
15:20:36 bauzas, yes, I'll look for jay and try to set something up soon
15:20:53 #action n0ano to set up an ad-hoc IRC discussion when all are available
15:21:12 * n0ano action doesn't work for me either, go figure
15:21:23 time to move on?
15:21:33 anyway, let's postpone this for now, moving on
15:21:44 #topic items for mid-cycle
15:22:07 I made a few notes on the etherpad
15:22:13 have we thought of things we want to raise at the mid-cycle, noting this will be stuff we will be targeting for L?
15:22:28 link for the etherpad?
15:22:39 https://etherpad.openstack.org/p/kilo-nova-midcycle
15:22:40 * edleafe seems to have misplaced it
15:22:50 thx
15:23:45 n0ano: I added an item about discussing the interface questions
15:23:52 we should all think about some specifics for the `next step', that's what I want but it's a little open
15:23:59 edleafe, that's good
15:24:05 edleafe: well, I was retaining the "next step" approach
15:24:12 I think it would be helpful to get a clearer picture of where we want to end up after the split
15:24:22 next step being Lxxx :)
15:24:29 hope it will be Liberty :)
15:25:37 bauzas: next steps are good too, but I was thinking about clarifying where we want to be after the next 10 steps :)
15:25:55 edleafe: I'm fine with that, anyway that's for the midcycle
15:27:05 that's also something we can tackle during the midcycle
15:28:05 although thinking about it, general terms for the schedule are good, we should talk about specifics at the meetup
15:28:57 agreed
15:29:33 #action all to update the mid-cycle meetup etherpad with `general' ideas
15:30:22 sounds good
15:30:25 are we done?
15:30:29 almost
15:30:33 #topic opens
15:30:39 oops :)
15:30:40 anything new from anyone today?
15:31:06 coding, coding, coding, fixing, releasing, coding, coding
15:31:19 reviewing, reviewing, ...
15:31:28 reviewing, reviewing...
15:31:32 :)
15:31:47 OK, winding down, thanks everyone and we will be on IRC again - soon
15:31:53 #endmeeting