15:00:19 #startmeeting scheduler
15:00:20 Meeting started Tue Jul 9 15:00:19 2013 UTC. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 The meeting name has been set to 'scheduler'
15:00:36 o/
15:00:49 show of hands, anyone want to talk about the scheduler?
15:00:52 \o
15:01:10 o/
15:02:30 #topic code re-factoring
15:02:36 I
15:03:07 I've seen some emails on this subject but I'm not up on it, does anyone know what the real issue here is?
15:04:31 Maybe there's more than one effort, but I know about https://blueprints.launchpad.net/nova/+spec/query-scheduler
15:04:32 is this related to moving some common parts of the nova and cinder scheduler code into oslo?
15:05:20 Aah, common code between nova & cinder, that makes more sense (I had problems with duplicated code inside the scheduler itself)
15:06:13 given that there is some filtering/weighting going on in Cinder this sounds like a good idea, I guess it's just down to the implementation details
15:07:30 given I don't see any architectural issues here I say we just review the code changes as they come by
15:07:52 from my understanding yes: it's mainly an implementation issue
15:07:57 what exactly is common between the cinder & nova schedulers? the filter/weight framework?
15:08:21 llu-laptop, yes, I believe that's where the overlap occurs
15:08:39 https://blueprints.launchpad.net/oslo/+spec/oslo-scheduler
15:09:14 this is the BP related to what we are discussing
15:10:04 looks like mainly specific classes that have some overlap, not a complete framework issue.
15:11:16 sorry, late for the meeting.
15:11:26 given the specificity (I think that's a word) of the classes I wonder how much overlap exists in the rest of nova/cinder/swift/glance
15:12:03 probably an issue for someone who gets motivated to do a complete review of the system.
15:13:05 anyway, I think this problem is well in hand, let's move on
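For context, the overlap discussed above is the filter/weight machinery that nova and cinder each carry a copy of, and that the oslo-scheduler blueprint would factor out. Below is a rough, illustrative sketch of that pattern; the class and method names are stand-ins, not the actual upstream classes.

```python
# Illustrative sketch only: stand-in names, not the real nova/cinder/oslo classes.

class BaseFilter(object):
    """Decide, per candidate host, whether it can satisfy the request."""

    def _filter_one(self, host, filter_properties):
        # Subclasses override this with the actual acceptance test.
        return True

    def filter_all(self, hosts, filter_properties):
        # Keep only the hosts that pass this filter.
        return [h for h in hosts if self._filter_one(h, filter_properties)]


class BaseWeigher(object):
    """Score the hosts that survived filtering so the best one can be picked."""

    def _weigh_one(self, host, weight_properties):
        # Subclasses override this to return a numeric score for one host.
        return 0.0

    def weigh_all(self, hosts, weight_properties):
        return [self._weigh_one(h, weight_properties) for h in hosts]
```

Both projects drive the same pipeline with classes of this shape: filter the candidate hosts, weigh the survivors, pick the highest-scoring host; that shared structure is what makes the code a candidate for oslo.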
15:13:19 #topic volume affinity filter
15:13:49 jgallard, I think you've been involved in the thread, can you expand on this?
15:14:08 yes of course
15:14:31 https://review.openstack.org/#/c/29343/
15:14:59 seems to me the biggest issue is whether they're talking about a filter or a weigher (I've seen that confusion in other contexts)
15:15:08 the goal of the patch is to allow placing an instance (via a scheduler hint) on the same host as a volume
15:15:16 and yes
15:15:34 as you just said, the main issue is whether it should be a filter and/or a weigh function
15:15:43 currently the patch implements a filter
15:16:14 that seems rather restrictive, what's the view on changing to a weight?
15:16:51 yes the filter is restrictive
15:17:03 but it would have some interesting use cases
15:17:09 regarding the weigher
15:17:18 I didn't look at the code, however
15:17:56 using a weigh function is more complicated, because the scheduler has to work out by itself which volume the instance should be placed near
15:18:05 (no possibility to use a hint)
15:18:37 I don't see the use cases for the filter, that wouldn't be a hint either (it would be a requirement)
15:18:43 but it seems that from a technical point of view the dict containing the host of the volume is not directly accessible from the scheduler
15:19:25 for the filter, maybe you can imagine a case where placement for high performance is mandatory
15:19:39 if you don't want best-effort
15:19:44 why not
15:20:26 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011329.html
15:20:46 for those who are interested in joining the discussion
15:21:35 hmm, a point, I wonder if that means we need both a filter and a weight or would that be overkill
15:22:33 if both of them are provided, I think the administrator will have more flexibility to configure the cloud
15:23:10 the admin can use the filter (very restrictive case) or the weigh function (no restriction, best-effort mode)
15:23:26 yeah but would it be overly complex and provide a framework that isn't actually used in practice
15:23:48 the main issue with the filter is that it "breaks" the cloud philosophy: the placement is not hidden from the user
15:24:26 n0ano, sorry, are you talking about the filter or the weigher?
15:24:44 jgallard, I'm talking about providing both
15:24:55 ah ok
15:25:20 it would be simpler to provide just a weight and, if that satisfies 99% of the actual uses, that would be sufficient
15:25:32 Not sure I agree about breaking cloud philosophy - we already have affinity filters
15:25:50 PhilDay, +1 (you typed faster than me :-)
15:26:16 the issue is that, in cloud philosophy, you ask for a resource and you get it "in all cases"
15:26:33 but with this kind of filter, you may not get any resource at all
15:26:55 The general case though should be to be able to express some form of proximity - there are configurations where the volume is never on the same host, but it could be closer on the network
15:26:58 but again, I think this is more related to the use case the admin wants to support
15:27:17 PhilDay, +1
15:27:30 but this is not possible to achieve with the filter
15:27:37 it's more a job for the weigh function
15:27:48 hence my preference would be to provide a weight
15:28:36 Nothing stopping someone having a purely private filter or weighting function that they configure in - but to be accepted into the core set it should be a bit more general
15:30:18 ok, so maybe we can chime in on the mailing list?
15:30:37 I think a volume weight would be general enough to be part of the core, while the volume filter might be more appropriate as a private function.
15:31:44 n0ano, PhilDay, yes probably
15:32:18 jgallard, responding to the email thread is a good idea, nothing wrong with getting opinions out there
15:32:36 yes of course
15:33:15 OK, moving on
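To make the filter-versus-weigher trade-off above concrete, here is a minimal sketch of both shapes. It is illustrative only: the class names, the same_host_volume_id hint, and the get_volume_host lookup are assumptions rather than the code under review; a real implementation would subclass nova's filter and weigher base classes and obtain the volume's host from cinder.

```python
# Illustrative only: hint name, get_volume_host() and class names are assumptions.

class VolumeAffinityFilter(object):
    """Hard requirement: drop every host that does not hold the volume."""

    def __init__(self, get_volume_host):
        # get_volume_host: callable mapping a volume ID to the host storing it
        # (in practice this means asking cinder, since the scheduler does not
        # have that dict locally, as noted above).
        self._get_volume_host = get_volume_host

    def host_passes(self, host_state, filter_properties):
        # host_state is assumed to expose the candidate host name as .host.
        hints = filter_properties.get('scheduler_hints') or {}
        volume_id = hints.get('same_host_volume_id')  # hypothetical hint name
        if not volume_id:
            return True  # no hint given: do not restrict anything
        # Restrictive: if the volume's host cannot take the instance, the
        # request gets no host at all.
        return host_state.host == self._get_volume_host(volume_id)


class VolumeAffinityWeigher(object):
    """Best effort: prefer the host holding the volume, never reject the others."""

    def __init__(self, get_volume_host):
        self._get_volume_host = get_volume_host

    def weigh(self, host_state, weight_properties):
        # How the volume ID reaches the weigher (a hint, or derived by the
        # scheduler from the requested block devices) is exactly the open
        # question in the thread; the sketch just reads it from the
        # properties it is handed.
        volume_id = weight_properties.get('volume_id')
        if not volume_id:
            return 0.0
        return 1.0 if host_state.host == self._get_volume_host(volume_id) else 0.0
```

With the filter, a request naming a volume on an unusable host fails outright (the "breaks the cloud philosophy" concern raised above); with the weigher, the same request still lands somewhere, just preferably next to its volume.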
15:33:27 #topic follow ups on the scheduler BPs
15:33:43 Anything on the outstanding BPs anyone wants to raise?
15:34:58 Hearing silence
15:35:01 #topic opens
15:35:09 Any opens for today?
15:35:19 Scheduler hints API is looking for reviewers: https://review.openstack.org/#/c/34291/
15:35:41 I started the implementation of multiple default av_zones
15:36:08 belmoreira, cool, any major difficulties yet?
15:36:16 as discussed some weeks ago. Hope to have the first implementation soon.
15:36:27 Hoping to get started on the whole host allocation real soon now ;-)
15:36:41 PhilDay, great!
15:36:56 n0ano: not yet.
15:37:46 OK, sounds like we've got some coding to accomplish - yea!
15:39:04 Tnx everyone, I think we've run down so I'll close the meeting for today
15:39:21 #endmeeting