15:00:35 #startmeeting scheduler
15:00:36 Meeting started Tue Jun 18 15:00:35 2013 UTC. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:40 The meeting name has been set to 'scheduler'
15:00:51 show of hands, anyone here for the scheduler meeting?
15:00:58 here
15:01:13 hi all
15:03:45 #topic scalability
15:04:17 I started a thread on the dev mailing list and have been getting some replies, have you seen the thread?
15:05:18 not sure. what is the subject?
15:05:28 from my side, sorry, I was not available the last few days...
15:05:35 hi don
15:05:47 Subject: Compute node stats sent to the scheduler
15:06:12 ok, got it
15:06:17 to me the big question is: do we communicate usage data to the scheduler via fan-out messages or through the DB?
15:07:09 I prefer fan-out messages (I hate DBs, it's a personal quirk) but I'm hearing from people who think the DB is the way to go.
15:07:27 I've voiced my concerns in the email thread, now waiting to hear back
15:09:06 we can talk about the issues here or just follow the email thread, which is still active (I started the thread late)
15:09:16 sorry, I need to read the whole thread
15:09:17 I prefer fan-out messages too; as n0ano noted, I am not sure who else is using the db.
15:09:34 +1 fan-out messages
15:09:57 well, the one issue that hasn't been brought up is ceilometer: does it want to get the data from the DB or does it want to query the scheduler?
15:10:39 shanewang, I'm in the process of reviewing the code to see who actually uses the DB data, not done with that yet.
15:12:49 if no one has any strong opinions today I think it makes sense to just see how the email thread works out.
15:13:12 agree
15:13:41 ok
15:13:48 #topic follow-ups on scheduler BPs
15:13:58 anyone have anything to report here?
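The fan-out option discussed in the scalability topic above (compute nodes broadcast their stats, and the scheduler keeps them in memory instead of issuing a DB query per request) could be sketched roughly as follows. All class and method names here are hypothetical, for illustration only; this is not nova's actual code:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class HostStats:
    free_ram_mb: int = 0
    free_disk_gb: int = 0
    running_vms: int = 0

class SchedulerStatsCache:
    """Hypothetical in-memory cache the scheduler would keep,
    updated by fan-out messages, instead of reading the DB on
    every scheduling request."""

    def __init__(self) -> None:
        self._stats: Dict[str, HostStats] = {}

    def on_fanout_update(self, host: str, payload: dict) -> None:
        # Called once per fan-out stats message from a compute node.
        self._stats[host] = HostStats(**payload)

    def get(self, host: str) -> HostStats:
        return self._stats[host]

cache = SchedulerStatsCache()
cache.on_fanout_update("node1", {"free_ram_mb": 2048, "free_disk_gb": 100, "running_vms": 3})
```

The trade-off raised in the thread remains: any other consumer of this data (e.g. ceilometer) would then have to query the scheduler rather than the DB.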
15:14:37 two weeks ago we decided to discuss the blueprint: https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones
15:14:52 are there any opinions now?
15:16:28 I don't see how you can belong to multiple availability zones, can you explain that?
15:17:24 my understanding is that an az is now defined in aggregates
15:18:09 a host can belong to different aggregates that have different azs
15:18:46 that seems - odd - to say the least, is that a feature that is actually being used?
15:20:19 I don't see much sense in it either. But you can have a setup where a host has multiple azs
15:20:36 that's why I raised the question in the BP
15:21:06 but that is what is available now in nova...
15:21:42 about the BP, what do you guys think about it?
15:22:00 I'm not qualified to make a decision, but I'd consider changing that to be a 1-1 map, host to AZ, though that might start a bit of a discussion
15:22:50 n0ano, +1
15:23:07 I agree.
15:24:07 but the BP is to have multiple default azs for an instance
15:24:13 instead of only one.
15:25:11 what's the benefit of setting multiple availability zones? for dividing more zones logically and physically.
15:25:33 belmoreira, in the implementation part, "After find the node that will run the instance set the availability zone of the node to the instance" - does that mean AZs are provided dynamically according to clients?
15:25:46 back to your BP, I'm unclear on the use case: if the AZ is set `after` scheduling then it can't be used for physical separation
15:27:09 jgallard, according to the client, selected using also the az_zone filter.
15:27:52 when an instance is booted and the az filter is enabled, only the nodes of the default az are selected.
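The situation described above - an AZ is just metadata on a host aggregate, so a host in two aggregates can end up in two AZs at once - can be illustrated with a toy data layout. This is a simplified sketch of the shape of the problem, not nova's actual schema:

```python
# Simplified illustration only: AZs defined via aggregate metadata.
# Names ("agg1", "az1", "hostA", ...) are made up for the example.
aggregates = [
    {"name": "agg1", "metadata": {"availability_zone": "az1"}, "hosts": ["hostA", "hostB"]},
    {"name": "agg2", "metadata": {"availability_zone": "az2"}, "hosts": ["hostB"]},
]

def azs_for_host(host):
    """Collect every AZ a host inherits from its aggregates."""
    return {a["metadata"]["availability_zone"]
            for a in aggregates if host in a["hosts"]}

# hostB belongs to two aggregates with different AZ metadata,
# so it ends up in two availability zones simultaneously.
```

This is the configuration the discussion below calls "odd": nothing in the data model forces the host-to-AZ mapping to be 1-1.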
15:28:19 if instead of only one default az we have several
15:28:40 the az filter passes for all the nodes in them
15:28:59 the best node is selected and then the az is set
15:29:13 so you're only addressing the case where the user `doesn't` specify an AZ
15:29:28 exactly
15:29:40 it means that he doesn't care
15:30:04 but we can provide some reliability to the instances
15:30:17 by starting them maybe in different azs
15:30:26 I can accept that and it makes sense, but I'm still having issues with setting the AZ after selecting the node; that means AZs don't apply to physical separation
15:31:16 n0ano, it is physical separation as well...
15:32:00 if you set two default azs it means that you expect the instances to start in one of them
15:32:20 how? if you assign the AZ after selecting the node then any node can be part of any AZ, so there's no way to physically separate two nodes.
15:32:45 really? what if a node can be moved from one AZ to another, or even if a node can belong to multiple AZs at the same time?
15:33:07 jgallard, that's my point
15:33:28 n0ano, yes, and I agree with you
15:33:46 if a node belongs to multiple azs, that is an admin problem
15:34:06 in that case you don't have physical separation
15:34:29 I completely agree with that
15:34:51 but `assigned after node selection` => not under admin control, this is under user control
15:35:49 hang on, I think I see the confusion ...
15:35:50 but the az is considered during scheduling
15:36:21 user control?
15:36:42 a node is assigned to an AZ, the schedule request will select a node (potentially using AZ criteria) and, after the node is selected, the AZ for the `instance` is set
15:36:57 yes
15:37:47 but instead of only one default az, if you have, let's say, two azs
15:38:11 the scheduler uses the two azs for filtering
15:38:18 and picks the node
15:38:31 and the az of the node is set on the instance
15:38:42 in that case I don't see an issue with having multiple default AZs; if the user didn't specify then the user doesn't care which AZ it winds up in.
15:39:06 for my use case it is important to have this
15:39:18 because users usually never define an AZ
15:39:24 shouldn't this use case be handled with cells?
15:39:25 and we need to have multiple
15:39:47 so we end up with the default az very busy
15:40:17 jgallard: we also have cells… but our cells are big
15:40:31 we like to split them into azs
15:40:36 belmoreira, indeed. To me the only issue is which of the default AZs to pick: round robin or random or least used or ...
15:40:43 ok, in your configuration AZs are partitions of cells, right?
15:40:49 belmoreira, ah ok
15:42:08 n0ano: my proposal is to change the availability_zone filter to pass for all azs defined as default.
15:42:27 having the nodes of all those azs
15:42:42 belmoreira, is my understanding correct? --> the idea is to have a kind of "scheduler for AZ"; this scheduler will pick one availability zone among several default ones in the case the user doesn't choose a specific AZ
15:42:49 the scheduler will select the best one considering the other filters
15:42:59 and then let the normal scheduling choose the best node - seems like a reasonable, fairly simple change
15:42:59 so it is not random
15:44:06 the point that I raised in the BP, and we started the discussion with,
15:44:34 is what to do if a host belongs to different aggregates
15:44:45 and has multiple azs
15:45:20 I agree that is bad… but someone can have a setup like this.
15:45:44 seems simple: if the host belongs to at least one of the default aggregates it passes, otherwise it only passes if it's a member of the specified AZ
15:47:13 if it belongs to only one az of the default list it passes… and that az is set if the host is selected
15:47:36 but what if it belongs to more than one az in the default az list?
15:47:43 it passes as well
15:47:53 and what az do we set on the instance?
15:48:05 probably random?
15:48:09 belmoreira, random, the user didn't specify so the user doesn't care
15:48:12 probably :)
15:48:31 ok. good
15:48:45 or maybe the one which is the least loaded?
15:49:07 but for that we need more queries
15:49:15 jgallard, the normal scheduling should have found the least loaded, so I don't think we need to worry about that in the AZ filter
15:49:17 random is to avoid that
15:50:05 but, I mean, if you are on a node with multiple AZs, the admin may want to give priority to the AZ which is least loaded
15:50:12 I'm not sure if I'm clear
15:51:02 jgallard: completely agree… but how to know which is the least loaded?
15:51:19 I don't think the issue is `least loaded AZ` so much as it's `most optimal host`, and the rest of the scheduling determines that
15:51:44 belmoreira, héhé yes, as you said this probably needs more queries
15:53:53 n0ano, in fact, what I want to explain is that, if the user doesn't care about a specific AZ, and a node with multiple AZs is selected, perhaps the admin will want a policy to select a preferred AZ among the ones available on that node
15:54:17 but this is not targeted by this BP
15:55:12 potentially, but remember, `preference` is determined by the weighting functions; filters only do yes/no so, as you say, finding the preferred AZ would be a different BP
15:55:52 ok.
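A minimal sketch of the filter behaviour agreed above, assuming a configurable list of default AZs; the names and config here are illustrative, not the actual nova filter code. A host passes if it is in the requested AZ, or, when no AZ was requested, if it overlaps the default list; the instance's AZ is then the requested one, or a random pick among the host's default AZs:

```python
import random

# Assumed configuration: operator-defined list of default AZs
# (illustrative value, not a real nova config option).
DEFAULT_AZS = {"az1", "az2"}

def host_passes(host_azs, requested_az=None):
    """Yes/no filter decision for one host, given the set of AZs it
    belongs to and an optional user-requested AZ."""
    if requested_az is not None:
        return requested_az in host_azs
    # No AZ requested: pass if the host is in at least one default AZ.
    return bool(host_azs & DEFAULT_AZS)

def az_for_instance(host_azs, requested_az=None):
    """AZ to record on the instance once a host has been selected."""
    if requested_az is not None:
        return requested_az
    # Host may sit in several default AZs: the user didn't specify,
    # so pick one at random (sorted first only for determinism of the
    # candidate list, not of the choice).
    return random.choice(sorted(host_azs & DEFAULT_AZS))
```

Random selection keeps the filter cheap, matching the point above that any least-loaded preference belongs in the weighting functions, not in a yes/no filter.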
I will start to implement this
15:56:19 belmoreira, you might want to update the BP to remove the question and put in the decision
15:56:40 ok
15:56:50 #opens
15:57:00 just a few minutes left, does anyone have any opens for today?
15:57:10 belmoreira, maybe you can ask a question about the fact that, in the current implementation, it's possible to have several AZs on a node?
15:57:17 (on the mailing list)
15:57:52 jgallard, good idea (I like using the mailing lists)
15:58:22 ok
15:58:29 n0ano, same for me :-)
15:58:56 belmoreira, thanks!
15:59:03 hearing silence, I think it's time to wrap up; tnx everyone, good discussion.
15:59:10 thanks to all!
15:59:27 #endmeeting