16:01:13 #startmeeting cinder
16:01:14 Meeting started Wed Feb 5 16:01:13 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:17 The meeting name has been set to 'cinder'
16:01:19 o/
16:01:23 hi
16:01:23 hello
16:01:25 hola everyone.. it's been a while :)
16:01:32 Hola
16:01:36 seems Wed is the best day of the week to fly :)
16:01:43 :)
16:01:58 howdi
16:02:09 hey
16:02:14 dosaboy: and DuncanT both!!!
16:02:17 W00T
16:02:28 * DuncanT gets nervous
16:02:33 Nahh
16:02:34 d-ream team
16:02:47 alright... let's fire this thing up
16:02:48 hi
16:03:00 #topic I3 Status
16:03:14 https://launchpad.net/cinder/+milestone/icehouse-3
16:03:26 A number of folks snuck some things in over the weekend
16:03:39 New driver submissions from prophet-store and others
16:04:09 basically we're at submission freeze, so if your BP isn't in it'll need an exception after this point
16:04:37 o/
16:04:37 The way it's *supposed* to work is that reviewers would come across something that isn't approved and not approve it in review
16:05:24 jgriffith: everything that is approved is marked as such?
16:05:39 Howdy all. Sorry I am late.
16:05:52 avishay: not yet :(
16:06:03 avishay: but I'll have it updated later today
16:06:12 jgriffith: ok cool
16:06:17 avishay: and the big thing is I'd like to NOT see people sneaking in new stuff
16:06:27 if somebody has a bp they need to add, let me know
16:06:39 don't just add it and mark it as targeted
16:06:50 i would never do such a thing
16:06:56 * avishay whistles
16:06:58 of course none of the people that show up to this meeting do that, so no point in me mentioning it here
16:07:06 jgriffith: I've had 1137908 on that list a while and I've decided to break it into 4 patches to make it more review-friendly
16:07:12 jgriffith: should be done by end of week
16:07:26 avishay: :-)
16:07:27 dosaboy: great, you were next on my list of topics :)
16:07:31 unless anyone objects
16:07:34 jgriffith: I would like to propose a new driver bp
16:08:17 reckon it makes sense cause it got kinda bloated
16:08:34 dosaboy: sounds good to me
16:08:41 jgriffith: is the cert script a necessary precondition for +2ing drivers?
16:08:43 jgriffith: https://blueprints.launchpad.net/cinder/+spec/astute-nfs, we are mostly done with this
16:08:51 coolsvap: I think you're going to be late on that
16:09:02 coolsvap: we want things in the queue by end of next week
16:09:19 coolsvap: but if you've got one to propose, get the BP posted this morning
16:09:34 coolsvap: I thought we already talked about this last week and you said the BP was on its way?
16:09:51 avishay: I'd like to make it as such
16:10:06 avishay: I've got new changes in tempest and devstack that should fix things up
16:10:19 avishay: and hemna ran into something weird that he patched yesterday
16:10:30 We need a place to put the files though
16:10:45 That part I don't have...
16:11:10 but maybe I could suggest google-docs, or maybe give a link to my S3 container?
16:11:13 jgriffith: from what i remember there is the IBM NAS driver and 2 HP drivers - so no merge until a successful run?
16:11:19 jgriffith: the blueprint I have already added; the code can be pushed by Friday at the earliest
16:11:26 o/
16:11:30 thingee: yo
16:11:31 coolsvap: then you're fine
16:11:31 jgriffith: if there are still bugs in the cert tests, it is hard to make it a requirement for new drivers
16:11:51 xyang: there aren't bugs in the cert test actually
16:12:13 xyang: the issues were setting protocol and vendor name in tempest
16:12:41 the devstack change: https://review.openstack.org/#/c/68726/
16:12:47 is simply to make life "easier"
16:13:05 xyang: the script should be functional, and if it's not please let me know
16:13:18 jgriffith: can you make some small guide to running it? it seems that running unmerged drivers requires changing the script?
16:13:19 xyang: it's been out there for a number of weeks, I would've hoped people would've tried it
16:13:39 avishay: yes, that's an excellent point
16:13:47 avishay: I'll get something documented and sent out
16:13:54 coolsvap: for this new driver, I would also like to see the cert tests passing ^
16:13:56 jgriffith: OK thanks
16:13:57 avishay: or just update cinder's dev guide
16:14:21 jgriffith: as far as i'm concerned it can be comments at the top of the script :)
16:14:27 jgriffith: sure, I haven't got a chance to try it yet, but will give it a try. I hope we can at least do that after 2/18?
16:14:32 thingee: sure I have that on the checklist
16:14:56 avishay: yeah... I was just thinking "who knows how long it will take till a change lands" :)
16:15:19 jgriffith: will the recent fixes help with the errors Nilesh reported for the IBM NAS driver?
16:15:26 I guess I'm not sure why not just run it
16:15:35 OK
16:15:35 It only takes around 5 minutes
16:15:47 fair nuff
16:15:51 avishay: I'm not sure what errors he reported
16:15:51 :-)
16:16:00 avishay: sorry... that was towards xyang's question :)
16:16:03 jgriffith: He was seeing like 4 failures.
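[Editor's note: the "protocol and vendor name in tempest" issue above refers to tempest comparing the backend's reported vendor and protocol against configured expectations, so a driver under certification typically needs those defaults overridden. A minimal sketch; the section and option names are assumed from the tempest of that era and should be verified against your tempest version:]

```ini
# tempest.conf -- override the defaults the cert test otherwise
# inherits (vendor = "OpenSource", protocol = "iSCSI")
[volume]
vendor_name = MyVendor
storage_protocol = NFS
```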
16:16:22 jungleboyj: I'll look, but I suspect it was the vendor_id and protocol
16:16:31 * jungleboyj still needs to go review the code again.
16:16:33 jungleboyj: but I thought that was *2* errors not 4
16:16:54 so there's an api.volume.test_volumes_types
16:17:03 it sets volume type and extra-specs
16:17:22 timeout in upload_image
16:17:22 the default in the test is vendor = OpenSource and protocol = iSCSI
16:17:39 avishay: hmmmm... might be a bug in his driver :)
16:17:43 avishay: we see upload_image timeout with NetApp drivers too
16:17:52 akerr: NFS version?
16:18:00 Ruh roh.
16:18:04 jgriffith: I think 4?
16:18:06 jgriffith: he claims it works manually and uploaded logs
16:18:11 jgriffith: but maybe
16:18:18 jgriffith: I'll try sooner, but if we run into issues, I don't know how long it will take for it to complete successfully.
16:18:31 xyang: fair enough
16:18:46 avishay: akerr so ping me after the meeting and I can help debug
16:18:48 jgriffith: I ran the script for the vmware driver, and had issues with the 196 sec timeout during upload to image too
16:18:53 jgriffith: sure
16:19:05 jgriffith: I set the timeout higher and it passed
16:19:36 since we don't gate on NFS and it's become evident most people don't run the tests, I wouldn't be surprised if there's a bug in the driver or in the tempest test
16:19:43 sneelakantan: ahhh
16:19:53 sneelakantan: so it's just painfully slooooowwwwww
16:20:21 jgriffith: yep. Takes about 4 mins to upload a 1GB file. May be just my env.
16:20:34 holy cow... that's horrid
16:20:35 not all of us have all-flash storage arrays
16:20:38 lol
16:20:45 haha
16:20:46 bswartz: LOL...
16:20:56 4 mins for 1 GB does seem a little off though
16:20:58 bswartz: I didn't know you used stone tablets and chisels though
16:21:17 Isn't everyone running on little bogged-down VMs?
16:21:21 * jgriffith should've changed the topic
16:21:27 #topic cert test
16:21:44 Rewind about 5 minutes for the beginning of the topic :)
16:22:22 :-)
16:23:11 Ok... guess that's that :)
16:23:22 while we are on the topic of uploads, were there not plans a while back to refactor cinder.image?
16:23:24 #action jgriffith to update docs on how to use cert-test
16:23:35 since we now have a kludge of v1 and v2
16:23:56 dosaboy: more importantly we're still not using v2 as default
16:24:03 agreed
16:24:07 dosaboy: I think there are a few things we need to get fixed here
16:24:27 v2 still isn't in use in many / most places as of the last summit
16:24:37 jgriffith: v2 is routed by default
16:24:49 DuncanT: um i seem to remember hearing at the summit that they are planning to deprecate v1
16:24:58 dosaboy: yes
16:25:25 thingee: I think someone pointed out an issue last week on that
16:25:32 thingee: I'll have a look after the meeting
16:25:49 jgriffith: we found a few problems in the 3PAR drivers by running the cert test; hemna will be posting a patch soon to fix them
16:25:57 hm, news to me =/
16:25:58 kmartin: nice!
16:26:11 thingee: I could be mistaken, I'll go back through my notes
16:26:21 * jgriffith keeps notes
16:26:45 dosaboy: so to your point/question
16:26:56 We talked about that, we should look at it
16:27:05 I'm not sure I understand.
16:27:09 v2 is routed
16:27:16 Honestly I was hoping to get some more improvements in image.py
16:27:19 it's just not warning you about using v1 still
16:27:32 thingee: What do you mean routed?
16:27:42 the v2 controllers are routed
16:28:06 if you run G, you get v2
16:28:15 you just have to make sure the keystone catalog stuff is set up properly
16:28:29 jgriffith: i'm happy to help out with glance-related work if I find the time
16:28:54 thingee: From what I have seen, you can't count on Keystone being set up properly.
16:29:38 dosaboy: thanks
16:29:59 jungleboyj: people can set up anything wrong. how is this different?
16:30:04 thingee: deep breaths, it's ok... lemme see if I can find out what the complaint was
16:30:07 you can put v5 when you meant v1
16:30:10 thingee: perhaps it's nothing
16:30:23 thingee: True enough.
16:30:46 I guess instructing with a deprecation warning that v1 is old, use v2, would help
16:30:57 do we want to do that in I?
16:31:21 thingee: it would probably be good if we did
16:31:26 seems like the next step
16:31:27 ok
16:31:28 It also seems we keep adding to v1, so people don't have the incentive to move forward.
16:31:29 else nobody will move
16:31:37 jgriffith: +2
16:31:44 #action thingee will deprecate v1
16:31:57 jungleboyj: well, we're supposed to be rejecting feature adds to v1 already :)
16:32:00 jungleboyj: +1
16:32:18 yes, I have to ask that everyone be more strict on accepting patches where people add to v1.
16:32:23 jgriffith: That was why I was pushing back against the multi volume change. :-)
16:32:39 jungleboyj: indeed :)
16:32:41 the only exception in my opinion is having cinderclient v1 expose something v1 already has... imo
16:32:53 I have seen those changes come in lately
16:32:54 thingee: agreed
16:33:11 Ok... we've digressed a bit, but at least on something relevant
16:33:22 jgriffith jungleboyj thingee just to confirm, should multi volume create go in v1 or not?
16:33:23 Let's see what bswartz has in store for us this am
16:33:32 #topic multiple pools per backend
16:33:33 hah
16:33:46 well the short answer is -- I tried to make this work and I couldn't
16:34:03 I think winston was right that this requires a DB change or something more invasive than what I had in mind
16:34:24 coolsvap: I would say no based on the discussion we just had.
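[Editor's note: the v1 deprecation warning agreed above ("#action thingee will deprecate v1") would amount to something like the sketch below. This is a hypothetical illustration, not actual Cinder code; the function name and return value are made up, and the real implementation would sit in the API request pipeline and log through oslo logging:]

```python
import warnings


def note_deprecated_api(path):
    """Warn when a request targets the deprecated v1 volume API.

    Hypothetical helper: sketches the deprecation warning discussed
    in the meeting, keyed off the request path prefix. Returns True
    if the request hit the deprecated v1 tree.
    """
    if path.startswith('/v1/'):
        warnings.warn(
            "The Cinder v1 API is deprecated and will be removed; "
            "please switch to v2.",
            DeprecationWarning,
            stacklevel=2)
        return True
    return False
```

In practice this would live in middleware or a decorator on the v1 controllers, so operators see the warning in logs while existing clients keep working.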
16:34:46 this is because of how the scheduler tracks free space -- if you try to have multiple pools inside a backend, but volumes aren't associated with them in the DB, then you can't figure out allocated space
16:35:23 i think multi-create should go into v1 - it works with v1 APIs
16:35:24 so I'm going to look at other approaches to solving this problem, because simply mandating multiple backends (one per pool) doesn't seem like the right approach either
16:35:39 bswartz: interesting
16:35:40 avishay: coolsvap let's hold off on that for now
16:36:04 I may propose a DB change for J to actually allow cinder to track multiple pools
16:36:06 I'd like other ideas though
16:37:07 the goal is to allow 1 array to present multiple pools of free space (each with its own set of capabilities) to the scheduler without forking lots of c-vol processes
16:37:12 bswartz: so going the other way and changing how the scheduler interprets space isn't going to cut it
16:37:16 because it's dumb to have a bunch of processes managing 1 array
16:37:36 bswartz: so something that came up before was an aggregated backend
16:37:50 bswartz: one interface, multiple backends
16:38:04 jgriffith: any docs or BPs on that?
16:38:06 the multiple backends abstracted into a single cinder driver interface
16:38:16 bswartz: no, I just talked about it in Hong Kong
16:38:25 jgriffith: yes that's another reasonable approach to implement what I want
16:38:37 bswartz: processes are cheap. If the config file is sane, who cares if we have 1 or 100?
16:38:39 bswartz: but admittedly everybody was more interested in my 'report stats' being wrong
16:38:45 jgriffith: a pseudo-driver that calls other driver instances?
16:39:03 DuncanT: it's more of an issue if they have to be in sync with each other for some reason
16:39:11 avishay: well... it should be more sophisticated than that
16:39:22 avishay: but at a high level, yes
16:39:32 DuncanT: i know one issue people are seeing is lots of SSH connections to arrays because each driver instance has its own - sometimes you can run out of connections
16:39:37 jgriffith: gotcha
16:40:00 bswartz: avishay: Fair enough. Just wanted to shoot down the straw man and find out what the real problem was
16:40:05 * coolsvap was working on a poc similar to jgriffith's
16:40:07 ssh, xml, smis... three acronyms that should be banned :)
16:40:27 I think we want to keep a single DRIVER per array, because there's some state you wouldn't want to have to keep multiple copies of, but somehow that needs to appear to the scheduler/manager as multiple things
16:40:44 bswartz: coolsvap maybe we should spend some time together and see if we can arrive at a good solution for this?
16:40:46 jgriffith: :)
16:41:02 sort of a "parent-driver" approach
16:41:12 yes, but I don't see it happening in icehouse sadly
16:41:26 bswartz: yeah, time is not our friend
16:41:43 bswartz: but I don't know that any other proposal would help there either
16:41:57 bswartz: unless you have a working proposal that you want to run with
16:42:12 no I think we need to start thinking about what we'd like to see in J
16:42:18 jgriffith: sure
16:42:21 bswartz: fair
16:42:24 icehouse will have to ship without support for this because I don't have any bright ideas
16:42:24 ok
16:42:29 bswartz: i also have some interest in this as i've been thinking about issues leading me toward similar ideas
16:42:29 well thanks for the update
16:42:46 the "NO DB change" edict is going to make things difficult
16:42:59 and I don't want to half-ass this either, I think it's important to get right
16:43:17 bswartz: sounds good
16:44:04 alright, we def have multiple parties interested in this
16:44:36 Let's sync up later this week/early next week and get all of us on the same page
16:44:53 winston was interested too -- I suspect he's not here today
16:44:56 It might be J timeframe but it would help if we went into J with a plan
16:45:01 or at least all on the same page
16:45:23 #topic open-discussion
16:45:30 oh
16:45:31 is the cinder hackathon going ahead?
16:45:33 hackathon
16:45:37 :)
16:45:37 dosaboy: ;)
16:45:38 hah
16:45:40 haha
16:46:04 i'm in, remotely
16:46:09 i was gonna offer python-mock help where possible
16:46:14 I should be in remotely.
16:46:36 so I wanted to try something a bit different. it seems like remote is the way to go with short notice.
16:46:45 I will be too, remotely
16:47:01 but the different thing to try is something like google hangout, team speak or something, so we can communicate and stay a bit more focused
16:47:04 me too remote
16:47:09 do people think that would help?
16:47:12 stupid idea?
16:47:20 thingee: good idea IMO
16:47:25 :)
16:47:28 Certainly worth a try
16:47:31 never tried either of those but worth a try
16:47:42 Some of the teams here use google hangouts for such things
16:47:49 yes we tried hangout at solum earlier
16:47:52 there is a limit on hangout though right?
16:48:03 yes I guess. 10 people
16:48:04 10 afaik
16:48:16 but you can add on with call-ins
16:48:28 dial-ins even
16:48:42 or have two guys sit in front of one laptop :D
16:48:48 that works. we don't really need to see faces :)... I don't think anyone wants to be staring at me all day making weird faces as I do reviews
16:49:02 * jungleboyj laughs
16:49:08 * jgriffith knows you don't want to see his mug all day
16:49:19 jgriffith: just point it at the dog all day
16:49:25 haha
16:49:29 thingee: now there's an idea
16:49:30 You guys would have to deal with me headbanging and dancing in my chair. ;-)
16:50:14 ok, let's try something like that. so plan on having a headset ready
16:50:22 thingee: it seemed you had a basic agenda - do you want to make a more detailed list of things to tackle? how will it run?
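[Editor's note: the "parent-driver" idea from the multiple-pools topic could be sketched roughly as below. This is a hypothetical illustration only -- `PoolChildDriver`, `ParentDriver`, and the stats keys are made up for the sketch, not the Cinder driver API of the time, which reported one flat stats dict per backend:]

```python
class PoolChildDriver:
    """One child per storage pool on the array (hypothetical)."""

    def __init__(self, pool_name, total_gb, free_gb, protocol):
        self.pool_name = pool_name
        self.total_gb = total_gb
        self.free_gb = free_gb
        self.protocol = protocol

    def get_pool_stats(self):
        # Each pool reports its own capacity and capabilities.
        return {
            'pool_name': self.pool_name,
            'total_capacity_gb': self.total_gb,
            'free_capacity_gb': self.free_gb,
            'storage_protocol': self.protocol,
        }


class ParentDriver:
    """One driver (and one management connection) per array,
    presenting multiple pools to the scheduler -- the shape of the
    'one interface, multiple backends' idea, not real Cinder code."""

    def __init__(self, backend_name, children):
        self.backend_name = backend_name
        self.children = children  # shared array state lives here, once

    def get_volume_stats(self):
        # Aggregate per-pool stats under a single backend report.
        return {
            'volume_backend_name': self.backend_name,
            'pools': [c.get_pool_stats() for c in self.children],
        }
```

The scheduler would then weigh each entry in `pools` separately while only one c-vol process and one set of array connections exist; this is roughly the shape of the pool-aware scheduling that later landed in Juno.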
16:50:36 feb 24-26 sound good still?
16:50:57 thingee: So what is the idea? We will talk through reviews quickly and interactively. Talk out the solutions to avoid multiple patches?
16:50:59 ok for me
16:51:16 avishay: Jinx. :-)
16:51:20 is it for reviews or for fixes, or both?
16:51:29 avishay: so jgriffith had the idea of leaving it open. I just want the focus around getting patches through and some of the things I mentioned fixed
16:51:33 stability/bug fixes
16:52:12 so kind of like a 3-day bug squashing event with voice/video
16:52:19 24th - 26th still looks good to me.
16:52:29 it might be that most of the time people are muted, but it's a way to quickly get attention and get people talking out complicated things.
16:52:33 who is sponsoring the beer tent?
16:52:40 works for me
16:52:45 dosaboy: Good question!
16:52:48 * DuncanT wonders if there will be many on in a GMT timezone, or close? Avishay I guess...
16:52:49 thingee: I did?
16:52:57 it's an experiment. no promises it'll be perfect
16:53:10 and yes video if you don't mind people seeing you :)
16:53:34 Can I just get an idea of who is interested.
16:53:39 just a rough count
16:53:45 0/
16:53:48 me
16:53:50 O/
16:53:51 me
16:53:55 +1
16:53:55 me
16:54:06 o/
16:54:25 I think we can count winston as well
16:54:29 great, we might have room for google hangout or something.
16:54:31 yea
16:54:38 jgriffith: +1
16:54:39 and likely harlowja_away
16:54:49 yes winston-d ++
16:54:54 winston mentioned being in.
16:54:54 eharney: not interested?
16:55:04 i jumped in above ^ :)
16:55:12 eharney: DOH!
16:55:13 my bad
16:55:26 I'd like to see all core members participate
16:55:27 I'll start an etherpad and people can list bug links they're interested in working on. you also don't have to work on bugs, you could be finishing a bp or doing a lot of reviews
16:55:30 eharney: we used the wrong notation.
16:55:33 personally
16:55:33 eharney: _1
16:55:39 most importantly, it's three days that we're going to focus
16:55:42 unless they have scheduling conflicts of course
16:55:50 I'm planning to take the time off from work to really focus
16:56:13 nobody is expected to do that of course, but the idea is just dedicated time to really focus on getting things through review and stability
16:56:23 thingee: What hours are we talking?
16:56:30 good question
16:56:31 hopefully the gate will cooperate
16:56:49 thingee: I should block my calendar accordingly.
16:56:52 (i just jinxed it, didn't i)
16:57:08 * jungleboyj looks sternly at avishay
16:57:56 jungleboyj: i blocked my calendar from 8AM to 10PM, because who wants boring calls and meetings anyway
16:58:07 avishay: :-)
16:58:17 so I know avishay is going to be in a different TZ. winston will be in the US at that time
16:58:35 is there anyone else outside the US?
16:58:41 I will have to take off from daytime work
16:59:17 ok, we're running low on time. we can take the TZ talk to #openstack-cinder
16:59:23 I will let hemna know that he has to participate
16:59:32 kmartin: :)
17:00:17 cool, we're out of time
17:00:26 thanks everybody
17:00:31 Wow, that was fast.
17:00:42 most of us are here all day in #openstack-cinder so feel free to ping
17:00:47 #endmeeting