16:00:09 #startmeeting Cinder
16:00:09 Meeting started Wed Aug 17 16:00:09 2016 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 The meeting name has been set to 'cinder'
16:00:16 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell stevemar watanabe.isao,tommylike.hu
16:00:20 hi
16:00:21 hi
16:00:22 hey
16:00:23 Hey everyone
16:00:23 o/
16:00:24 Hi! o/
16:00:24 hello
16:00:28 hola
16:00:28 Hello :)
16:00:33 morning
16:00:35 Greetings
16:00:39 hi
16:00:41 o/
16:00:41 heyo
16:00:45 .o/
16:01:00 #topic Announcements
16:01:13 #link http://releases.openstack.org/newton/schedule.html Newton release schedule
16:01:22 hi
16:01:26 Next week is the non-client library release deadline.
16:01:31 o/
16:01:37 AKA os-brick
16:01:38 Hello
16:01:41 I thought it was friday?
16:01:44 #link https://review.openstack.org/#/q/project:openstack/os-brick+status:open Open os-brick reviews
16:01:53 hi
16:02:00 <_alastor_> o/
16:02:14 hemna: Yeah, looks like we have through next week, so a few more days.
16:02:29 Do we have a list of os-brick priorities?
16:02:41 If anyone has anything critical outstanding there, let me know.
16:02:49 ok phew
16:02:51 Not really.
16:03:04 hemna: Are you trying to get the patch you sent me yesterday in?
16:03:04 I put up the patch to remove the locks in iSCSI and FC
16:03:09 Lol
16:03:10 The c-vol/n-cpu locking issue is kind of a high priority.
16:03:11 seems like most of the CIs are passing
16:03:26 hemna: go figure :)
16:03:29 hemna: I don't think that's actually what we want, but we'll see.
16:03:30 An awful lot of unanswered -1s on brick patches there
16:03:39 that needs to be rally tested
16:03:45 hemna: the only ones I think should be affected are Pure, Dell and 3Par
16:03:53 jgriffith, yah
16:03:57 Maybe others I don't know about but those are the main ones
16:04:02 hemna: This is actually the approach I think they were looking for: https://review.openstack.org/#/c/356532/
16:04:09 * patrickeast wanders in late
16:04:11 shared target devices
16:04:14 But definitely need lots of rally testing.
16:04:21 smcginnis, well we also discussed removing the locks entirely
16:04:23 jgriffith: Yeah, I think so too.
16:04:26 jgriffith: yep
16:04:48 hemna: that approach scares me a bit
16:04:50 hemna: Yeah, at the time we talked about removing the locks and just having retries, but I think the locks are valid for some scenarios.
16:05:04 patrickeast, me too. there are obvious race conditions in the code
16:05:16 jgriffith: I think we have that too
16:05:18 and the locks were in there to help prevent that, but the locks aren't shared.....
16:05:27 I'm still not sure I agree with the statement that we need them... but I don't want to derail the meeting :)
16:05:30 retries....may help, but also cause other problems
16:05:37 Anyway, just be aware that the deadline is coming up quick.
16:05:41 so it's a crap shoot.
I'd like to get all of this rally tested IMHO
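For context on the lock discussion above, here is a minimal sketch of the difference between a per-process lock (which two services such as c-vol and n-cpu do not share) and an external file-based lock, using oslo_concurrency. This is illustrative only and not os-brick's actual code; the class and helper names are made up.

```python
# Illustrative sketch only -- not the real os-brick connector code.
from oslo_concurrency import lockutils


class ISCSIConnectorSketch(object):
    """Hypothetical connector showing per-process vs. external locks."""

    @lockutils.synchronized('connect_volume')
    def connect_volume(self, connection_properties):
        # An in-process lock like this only serializes callers inside a
        # single service (just n-cpu, or just c-vol), which is why two
        # services touching the same shared target can still race.
        return self._do_connect(connection_properties)

    @lockutils.synchronized('connect_volume', external=True,
                            lock_path='/var/lock/os-brick')
    def disconnect_volume(self, connection_properties, device_info):
        # An external (file-based) lock is shared by every process using
        # the same lock_path, so co-located services would serialize
        # against each other -- at the cost of the contention that the
        # retry-based proposals try to avoid.
        self._do_disconnect(connection_properties, device_info)

    def _do_connect(self, connection_properties):
        pass  # placeholder for the real scan/attach logic

    def _do_disconnect(self, connection_properties, device_info):
        pass  # placeholder for the real flush/remove logic
```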
16:05:47 #topic Moving the failing jenkins tests to check-experimental until they pass
16:05:50 jgriffith: yea i don't think we need them for all platforms, situations, deployment configs, etc 100% agree there
16:05:55 jgriffith: but we do for some of em
16:05:56 boris-42: you around ^^ :)
16:05:56 DuncanT: Was this new this week, or left over from last week?
16:06:09 Left over from last week, we ran out of time
16:06:26 Oh, OK. So we still need to discuss it then.
16:06:28 hemna: I'll bet boris-42 would be happy to help with rally tests if you want/need
16:06:34 jgriffith: who said rally?
16:06:37 It's a pretty straight-forward thing, I hope
16:07:01 oh yeah... e0ne is another potential victim... err candidate :)
16:07:04 jgriffith, hemna: please, ping me if you need any kind of help with rally
16:07:06 jgriffith, yah, I kinda want to create a rally gate test for brick as well
16:07:18 We currently approve new jenkins jobs with no evidence of them passing, then they slowly get fixed up
16:07:21 I'm just busy training my replacement this week.
16:07:24 smh
16:07:26 thanks e0ne
16:07:40 It means there's no way of knowing which of the non-voting jobs you should pay any attention to
16:07:46 DuncanT: Won't hiding them in experimental just make it worse (playing devil's advocate)
16:08:08 hemna: OMG, not even going to try and respond
16:08:42 jgriffith, hemna: I'm ready to help with rally job if needed
16:08:49 DuncanT: Still there?
16:08:50 smcginnis: Until they pass at all? No, I don't think so. It isn't like most people work on fixing them, just a small group, who can run "check experimental" easily enough until the job passes at least once
16:09:08 smcginnis: Sorry, injured finger, slow typing
16:09:12 DuncanT: Do you have a list of jobs you'd like to move?
16:09:20 DuncanT: Sorry. :)
16:09:39 smcginnis: I think they're all passing right now, but there were a few a week and a half ago
16:09:47 DuncanT: Swing dancing injury?
16:10:03 e0ne, ok thanks man.
16:10:11 DuncanT: So just looking for general consensus that we should do that if/when we get another batch of failing ones?
16:10:15 gate-tempest-dsvm-neutron-identity-v3-only-full-nv I haven't seen passing, but I haven't checked in carefully
16:10:36 DuncanT: I think that's fair. If it's being worked on, would be good to get working in experimental before polluting our normal results.
16:10:46 smcginnis: I'm asking so that there's a consensus behind me if I go asking on the infra review as to whether the jobs actually work yet
16:10:54 patrickeast: Didn't you have a patch for that identity v3 issue?
16:11:07 If a test is failing for some great length of time, that sounds ok. But merging patches into infra repos can be a lengthy process.
16:11:12 smcginnis: I'd like to be able to say we discussed it before I go off complaining
16:11:12 smcginnis, DuncanT: +1 on moving such jobs to experimental queue
16:11:16 smcginnis: i dont think so
16:11:27 DuncanT: +1 from me as well.
16:11:32 Thanks
16:11:35 patrickeast: OK, thought I saw something.
16:11:53 smcginnis: haha, i'll have to check... i've got a bunch of random crap up for review
16:11:54 DuncanT: I think that makes sense.
16:12:01 I don't promise to catch them all, but I'd like to change the culture of randomly failing tests if I can
16:12:11 Gotta catch em all.
16:12:14 :)
16:12:19 DuncanT, I think there are many of those :(
16:12:20 smcginnis: +2
16:12:21 smcginnis: :)
16:12:47 DuncanT: Yeah, makes sense. You've got backing on it if you do.
16:12:53 I've also been gently poking 3rd party CI folks who've been failing a lot - netapp apparently have a plan in place to improve theirs for example
16:13:07 smcginnis: Thanks. I'm done with this subject then
16:13:14 DuncanT: Thanks!
16:13:17 #topic Cinderclient issues to discuss
16:13:26 scottda: Hey
16:13:32 So, deadline to release is R-5, Aug 29 - Sep 02
16:13:33 DuncanT, we got the wiki names in the drivers now
16:13:46 Woot
16:13:51 #link https://review.openstack.org/#/q/project:openstack/python-cinderclient
16:13:58 DuncanT: netapp has plans to invest a lot in making CI more reliable
16:14:04 hemna: Yup, I've started playing with tooling around that :-)
16:14:29 It'd be especially good to have client CLI support for features that are already in the server...
16:14:33 bswartz: Great to hear. Anything that can be shared in terms of lessons learnt would be good to hear too
16:14:37 so perhaps authors of the server code can help us get the client parts in.
16:14:39 scottda: +1
16:14:53 Related to that...
16:14:55 #link https://review.openstack.org/#/q/project:openstack/python-brick-cinderclient-ext
16:15:14 smcginnis: thanks:)
16:15:20 Huh, they're all merged now though. :)
16:15:20 :)
16:15:21 One thing that would be good for client reviews is some evidence of testing... since tempest doesn't cover any of these new calls
16:15:33 Not much in brick-cinderclient that hasn't merged...
16:15:48 DuncanT: we could require functional tests
16:15:51 I've been holding off on client reviews because I don't currently have time to test them
16:15:53 Yeah, good to test as well as review client
16:16:08 DuncanT: right now we suspect that we're overloading our hardware with too many jobs executing in parallel
16:16:10 e0ne: Not sure if that's practical, but would be nice
16:16:27 bswartz: Servers or arrays?
16:16:40 1 array, 40 jobs
16:16:56 bswartz: Got you. Thanks.
16:17:19 back to cinderclient?
16:17:23 DuncanT: I mean that we already have a solution to get cinderclient patches tested
16:17:31 * bswartz apologies for being offtopic
16:17:37 DuncanT: but functional test coverage for it is really low
16:17:50 bswartz: :)
16:17:51 e0ne: Do we have a good example functional test in there yet?
16:18:17 Kind of related to that, would be good to get input on this: https://review.openstack.org/#/c/356177/
16:18:21 DuncanT: Good question
16:18:21 DuncanT: everything we have is here https://github.com/openstack/python-cinderclient/tree/master/cinderclient/tests/functional
16:18:46 DuncanT: they cover only the CLI now:(
16:18:56 e0ne: I'll take a look. It would be nice to be able to point people at a small number of good examples to emulate
16:19:24 stevemar: Are you around? stevemar is working on OpenStack Client stuff and has volunteered to keep cinder team informed on OSC stuff. And you can bug him as well .....
16:19:27 Oh, here's the endpoint v3 I was thinking of: https://review.openstack.org/#/c/349602/
16:19:32 yo
16:19:35 DuncanT: my team is working on extending functional tests. ping me if you want to implement something faster
16:19:36 stevemar: Thank you for that!
16:19:44 heyo!
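Since the existing functional tests only exercise the CLI, an API-level test is one way to cover the gap being discussed. Below is a minimal sketch of such a test; the credential plumbing via OS_* environment variables is an assumption here, and a real test would reuse whatever base class the cinderclient/tests/functional tree provides.

```python
# Hypothetical API-level functional test sketch for python-cinderclient.
import os
import unittest

from cinderclient import client


class VolumeAPIFunctionalTest(unittest.TestCase):
    """Create and delete a volume through the Python bindings."""

    def setUp(self):
        super(VolumeAPIFunctionalTest, self).setUp()
        # Assumes a devstack-style environment with OS_* variables set.
        self.cinder = client.Client(
            '2',
            os.environ['OS_USERNAME'],
            os.environ['OS_PASSWORD'],
            os.environ['OS_PROJECT_NAME'],
            os.environ['OS_AUTH_URL'])

    def test_create_and_delete_volume(self):
        # Create a 1 GB volume, make sure it comes back from the API,
        # and clean it up afterwards.
        volume = self.cinder.volumes.create(size=1, name='func-test-vol')
        self.addCleanup(self.cinder.volumes.delete, volume)
        fetched = self.cinder.volumes.get(volume.id)
        self.assertEqual('func-test-vol', fetched.name)
```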
16:19:48 e0ne: It would be nice to have some sort of test that matched the client and server bits together
16:19:57 hi
16:20:09 e0ne: Getting one example up ASAP would suit my purposes well
16:20:12 we had sheel rana working on a blueprint to fill the holes that were in OSC
16:20:16 but he's gone AWOL
16:20:25 e0ne: I can review it, I'm not going to be able to code it ATM
16:20:38 luckily, huanxuan has decided to pick up the blueprint! :)
16:20:42 DuncanT: ok. I'll try to get a patch proposed asap
16:21:03 e0ne: Much appreciated
16:21:10 you can see he's been pumping out OSC patches for volume support: https://review.openstack.org/#/q/owner:%22Huanxuan+Ao%22
16:21:36 Our guys that were working on it for a while got pulled off too.
16:21:37 his latest patches could use some eyes from cinder folks: https://review.openstack.org/#/c/356384/1 and https://review.openstack.org/#/c/353931/ since they are not as straightforward
16:22:31 scottda: smcginnis i'll try to bring issues we're having to the meeting here, but otherwise i'm hoping i have time to create a compatibility matrix for cinderclient vs osc
16:22:32 stevemar: Thanks for that. I find OSC is not really on my radar, and that's probably true for most of us..
16:22:44 scottda: it's the bees knees!
16:22:46 stevemar: That would be hugely useful.
16:22:52 smcginnis: right!
16:22:55 stevemar: That would be great!
16:23:12 So we have an idea where we are at and what needs to be done.
16:23:20 i'll mention the osc and the cinder CLI command
16:23:33 we've diverged in some ways, hopefully for the better
16:23:37 smcginnis: Agreed
16:23:43 fwiw, I had to stop using osc from git recently due to failures to configure keystone. :(
16:24:00 hemna: that's cruddy
16:24:10 hemna: file a bug in osc, we can help out
16:24:22 we've heard good feedback from a UX perspective
16:24:27 stevemar, ok will do. it was easy to reproduce. devstack wouldn't come up
16:24:34 can we just implement an OSC plugin inside cinderclient and not re-implement all things in OSC?
16:24:52 this plugin should be mandatory
16:24:52 e0ne: swift asked the same thing
16:25:08 e0ne: I asked that too :-)
16:25:13 e0ne: and i think it's feasible, it's just code after all
16:25:16 Yeah, we discussed at the summit..
16:25:35 just depends on where it lives
16:25:41 stevemar, DuncanT: we discussed it and I was not able to try to implement it :(
16:25:50 One downside is that it risks missing the OSC team's holistic view and standardisation of terminology
16:26:02 stevemar: I like how heatclient implemented the OSC plugin
16:26:04 DuncanT: Probably the biggest risk.
16:26:10 AKA renaming metadata and breaking all the docs
16:26:11 all code is inside the heatclient repo
16:26:12 DuncanT: righto!
16:26:54 e0ne: yes, all our plugins (there are many!) have code in-tree except for the big 6 services (identity, compute, volume, image, object, network) -- we have those in osc
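To make the heatclient-style, in-tree plugin option more concrete, here is a rough sketch of the cliff-based command class such a plugin would carry. The module, class, and command names are hypothetical (not real cinderclient code), and a real plugin would also need openstack.cli.extension entry points in setup.cfg to register with OSC.

```python
# Hypothetical in-tree OSC plugin command for python-cinderclient.
from cliff import lister


class ListVolumeBackendPools(lister.Lister):
    """List Cinder backend pools (illustrative example command)."""

    def take_action(self, parsed_args):
        # client_manager.volume is the cinderclient instance that OSC
        # constructs for volume commands.
        cinder = self.app.client_manager.volume
        pools = cinder.pools.list()
        columns = ('Name',)
        return (columns, ((pool.name,) for pool in pools))
```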
16:26:56 At which point OSC becomes a thin wrapper around a bunch of heavily disparate clients, and therefore entirely pointless
16:27:43 not sure i follow that one
16:28:02 DuncanT: OSC is only for CLI
16:28:11 e0ne: to answer your question -- i think it's possible, but we risk losing consistency
16:28:37 e0ne: but you'd be inheriting a bunch of our stuff, so maybe y'all can just follow the established pattern
16:28:42 e0ne: Yes, but unless terminology within the CLI is standardised, there's no point adding the extra layer of having a 'single' CLI
16:28:54 DuncanT: +1
16:29:19 stevemar: we can try to do it if somebody has time for this effort
16:29:42 stevemar: How are you avoiding such terminology fragmentation now?
16:29:43 stevemar: a plugin will help implement the CLI faster
16:29:54 e0ne: yeah, it would involve some poking around, but it would look a lot like how heat client did it
16:30:13 DuncanT: we publish dev guides in osc docs
16:30:22 stevemar: it's better than having two CLIs, IMO
16:30:41 stevemar: But are the various plugins actually following it?
16:30:49 scottda: Maybe we should move on to the last subtopic?
16:30:52 stevemar: do you have an idea of how much is missing in OSC to cover the functionality already in cinderclient?
16:31:03 I'm ready.....
16:31:05 stevemar: Is there any way to extract the command tree, including all the plugins?
16:31:07 DuncanT: i hope so, i'll admit we can't keep up with all the reviews, but i hope they are copying established patterns
16:31:21 So, patch is up to deprecate support for 'cinder endpoints' #link https://review.openstack.org/#/c/349602/
16:31:26 erlon: i mentioned i'll get that info in a week or so
16:31:52 we can totally jump to another topic, i don't want to hold the bus up
16:32:08 stevemar: mhm, I think we need to wait to see if it's worth working on the plugin then
16:32:08 scottda: So the background on that is it was carried over from nova and it doesn't quite fit. But do we care enough to clean it up now.
16:32:20 scottda: Is that the gist of it?
16:32:35 yes.
16:32:45 #link https://bugs.launchpad.net/cinder/+bug/1614104 Keystone v3 support bug
16:32:45 and further complicated with this bug: https://bugs.launchpad.net/cinder/+bug/1614104
16:32:46 Launchpad bug 1614104 in Cinder "Cinder endpoints throws error in Keystone V3" [Undecided,New] - Assigned to Jay Conroy (jayconroy)
16:33:05 So, we'd want to fix that ^^ if we continue to support cinder endpoint.
16:33:23 we can fix it in any case
16:33:33 scottda: We probably would need to fix it anyway, right. We'd have to deprecate the command so it will need to stick around a little while anyway.
16:33:34 deprecated!=not working
16:33:41 yes, deprecation will take time, so we'll end up fixing that bug.
16:33:45 e0ne: +1 - it's a bug, we should fix it if we can
16:34:05 OK, forget about the bug. Do we want to deprecate that command?
16:34:09 But good point in that we are needing to devote time and resources to something that doesn't really need to be there.
16:34:26 smcginnis: hmm, i thought i fixed that bug... https://bugs.launchpad.net/python-cinderclient/+bug/1608166
16:34:26 Launchpad bug 1608166 in python-cinderclient "cinder list endpoints results in parsing error" [Undecided,Fix released] - Assigned to Steve Martinelli (stevemar)
16:34:26 I'm actually in favor of deprecation/removal.
16:34:36 Has anybody seen a script that actually uses it? I've found one in our internal tools dump I've got, but it was a one-shot thing and can be trivially updated
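For context on the deprecation patch linked above: a minimal sketch of how the command might be marked deprecated, assuming 'cinder endpoints' is implemented as a do_endpoints handler in the versioned shell module (the actual patch may do this differently).

```python
# Hypothetical deprecation shim for the 'cinder endpoints' CLI command.
import warnings


def do_endpoints(cs, args):
    """Discover endpoints registered in the service catalog."""
    warnings.warn("'cinder endpoints' is deprecated and will be removed "
                  "in a future release; use 'openstack catalog list' "
                  "instead.",
                  DeprecationWarning)
    # ... existing implementation unchanged ...
```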
16:34:56 stevemar: Yeah, seemed like we already did something there. Something must have snuck in and broke it again or it's slightly different.
16:35:09 smcginnis: funky
16:35:14 DuncanT: Yeah, should be trivial to change.
16:35:22 Do we need functional tests for OSClient as well?
16:35:26 Just switch everything over to osc, right stevemar? :)
16:35:38 DuncanT: those already exist in osc's tree
16:35:38 smcginnis: Not so fast! :-)
16:35:49 jungleboyj: :)
16:35:49 smcginnis: you wouldn't have to deal with all these bugs :P
16:36:22 nova had the same command and deprecated it as well
16:36:25 So I guess the question on the table is deprecate or keep maintaining the endpoint command.
16:36:28 for the same reasons
16:36:28 stevemar: Do they run against the cinder side changes though?
16:36:32 Probably not a big deal to deprecate. Just wanted to ask for feedback... We said we weren't ready to deprecate the cinderclient. But this is a bit different.
16:36:46 scottda: yerp
16:36:48 smcginnis: fix and deprecate it
16:36:49 smcginnis: I've no objection to deprecating it
16:37:13 Sounds like the way to go ... especially if Nova has done that.
16:37:15 scottda: Yeah, I see that as completely different. I don't want to deprecate the cinder client any time soon, but an individual command that doesn't make sense, I'm all for deprecation.
16:37:27 smcginnis: ++
16:37:32 smcginnis: +1
16:37:49 ok. It was the cinderclient deprecation that caused me to put on the brakes. I'll remove my -1 and others can review, etc.
16:37:51 Great, no major objections. Let's get it marked deprecated.
16:38:04 I'm done. Thanks.
16:38:12 Thanks scottda
16:38:27 And now on to the fun stuff...
16:38:32 #topic Driver deprecation policy
16:38:40 haha
16:38:43 :)
16:38:48 #link http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html ML Thread
16:38:51 * jungleboyj waits for fireworks
16:38:56 smcginnis: I like hemna's proposal
16:39:04 thanks all!
16:39:08 My thoughts if anyone is interested: http://lists.openstack.org/pipermail/openstack-dev/2016-August/101568.html
16:39:46 I do like the idea of marking unsupported.
16:40:01 The question is whether that still breaks policy if we disable those by default.
16:40:11 Since upgrades will break without making config changes.
16:40:18 smcginnis: let's change the policy
16:40:30 e0ne: which one?
16:40:36 Given that they're likely to break anyway, I think explicitly calling them out is good
16:40:53 jgriffith: does it matter?
16:40:57 wait... let's back up a second :)
16:41:11 I still support removal - backporting is easy enough for a savvy deployer, and a non-savvy deployer would be better off migrating before upgrade
16:41:17 So there's a LOT of stuff in that thread and even a LOT of stuff in just Sean's posting
16:41:34 smcginnis: I just used the threat of removal to get exec attention. :-)
16:41:35 I think Sean touched on a version of each proposal that's feasible
16:41:43 in his ML post
16:42:14 After an awful lot of thought and consideration I felt that the third option while not the one with the most teeth was the most reasonable
16:42:29 ie a new tag that marks the status of each driver
16:42:56 I know a number of folks want the power of removal, and that makes sense... I totally get it and it's the largest stick you can have
16:43:17 But it's also a pretty destructive stick the more I think about it...
and not just for the driver being removed
16:43:30 jgriffith: Can we combine that with deprecation?
16:43:35 jgriffith: And I guess to expand on that, tag it and also mark as deprecated so if they don't turn around we actually can remove them.
16:43:40 jungleboyj: jinx
16:43:45 jungleboyj: yes, and smcginnis yes :)
16:43:46 smcginnis: :-)
16:43:58 would've been funny if I answered each of you differently :)
16:44:10 jgriffith: Wouldn't have been the first time. :P
16:44:12 JOKE!
16:44:13 I think it is very important that we don't make it appear that Cinder doesn't follow deprecation policy.
16:44:17 jgriffith: Is leaving a deployer unable to access their volumes because the driver is broken actually better?
16:44:35 DuncanT: you don't know that it's broken though
16:44:40 I also think we need to stick to our guns that unmaintained drivers can't stay in.
16:44:44 DuncanT: and in some cases... yeah, it kinda is
16:44:49 IMO anyway
16:44:57 DuncanT: with the tag they would presumably get a warning that says "this driver is probably broken"
16:45:04 so it would at least give a clue
16:45:09 As a consumer/operator you chose poorly.. there are consequences
16:45:11 So, kind-of like we are doing with XIV/DS8k right now we have said they need to open source and have given them a grace period.
16:45:13 s/would/should/
16:45:15 patrickeast: Not until after they've upgraded, and it's a bit late by then
16:45:27 but at the same time, we as an OpenStack community aren't going to rip the rug out from under your
16:45:28 you
16:45:37 They are 'deprecated' right now. Will be removed in the next release if the problem isn't resolved.
16:45:40 https://review.openstack.org/#/c/355608/
16:45:41 DuncanT: true, maybe we document the list of drivers and their status for each release so it's not a surprise
16:45:45 So mark as unsupported, check and log a huge warning when it's loaded, mark as deprecated, and if they don't start working on things remove it in the next release.
16:46:07 smcginnis: I think that is the best of both worlds.
16:46:11 Easier on deployers who are already in an unfortunate situation that have a cloud running on deadbeat storage.
16:46:12 smcginnis: That sounds reasonable
16:46:19 check out that patch and try it out.
16:46:22 patrickeast: DuncanT the documentation of drivers and status was something we're supposed to be doing anyway :)
16:46:33 jgriffith: heh yea, supposed to
16:46:40 patrickeast: :)
16:46:42 patrickeast: A big stonking list of 'possibly broken, really, upgrading your cinder version might eat all your data' warning in the release notes for sure
16:46:44 hemna: So the only difference from what we discussed before I think is not needing to edit the config to say to run it anyway.
16:47:08 DuncanT: I think that might be a little dramatic in reality
16:47:10 well, I do like forcing admins to do that to re-enable
16:47:19 jgriffith: Good.
16:47:29 For non-compliant drivers, rather than a removal patch, we put up a patch to add the flag to their driver and mark it deprecated.
16:47:40 jgriffith: Actually data-loss is really dramatic, every time
16:47:43 hemna: Yeah, but then we have the "grenade" issues of broken upgrades.
16:47:48 DuncanT: if you as an operator are aware of the tag, and note that it's not in the state it should be at release...
then I would hope you'd do some testing
16:48:02 DuncanT: but we're not going to *eat* volumes off the back end
16:48:10 nom nom
16:48:13 smcginnis, it won't affect the grenade tests unless we mark LVM as unsupported :)
16:48:13 DuncanT: there's a difference between data LOSS and data UNAVAILABLE
16:48:16 jgriffith: Making the tag obvious helps a lot, both the operation and the stick that it represents
16:48:18 well, if they're tasty with a little cheese?
16:48:36 jgriffith: Data unavailable is still quite a big drama
16:48:38 hemna: Well, not the actual grenade tests we run in gate, but it would break the principle it's trying to enforce.
16:49:05 DuncanT: sure, but that quite frankly is between vendor and customer... or perhaps vendor, customer and distro
16:49:06 it's a grey area IMO. we used to completely delete drivers.....
16:49:16 * jgriffith avoids inappropriate joke about the number 3
16:49:23 hemna: Right, it's completely better overall.
16:49:26 jungleboyj: quiet!
16:49:41 I think we'd still get called out on it though, and I'd rather just set this straight and be done with it.
16:49:42 jgriffith: You don't want to be around when number 3 happens. :-)
16:49:52 the TC folks seemed ok with it
16:49:54 fwiw
16:50:08 DuncanT: I'll raise my point again about the fact that distros are requiring a separate qual/test process for every driver on every release anyway
16:50:15 I dunno, if we remove that, then the stick to whack vendors becomes very.....limp.
16:50:35 hemna: stop that
16:50:39 :P
16:50:40 jgriffith: They've got no choice right now, the upstream testing is pretty weak
16:50:44 hemna: nice visual
16:50:58 DuncanT: BS... they have me run the EXACT same upstream tests
16:51:04 hemna: Ok with which?
16:51:05 haha
16:51:06 DuncanT: there's nothing special or different
16:51:09 jgriffith: +1
16:51:14 not a damn thing
16:51:18 jgriffith: ok, the distro tests are pretty weak too then ;-)
16:51:23 it's just a bullshit process that causes me extra work
16:51:28 DuncanT: :)
16:51:42 DuncanT: but that's a ridiculous argument anyway...
16:51:47 OK, disabling by default aside, is everyone in agreement to set a flag in the driver and mark as deprecated?
16:51:49 hemna: You would be surprised the damage you can do with a limp stick. ;-)
16:51:56 Oy
16:52:06 jgriffith: Honestly, I don't have strong answers there, distros have a history of being a pain and requiring work... you can always choose to not be certified
16:52:07 DuncanT: You're saying "remove them cuz they don't run upstream tests" but then saying "upstream tests are weak and aren't good enough anyway"
16:52:35 DuncanT: I don't have nice things to say on that topic so I'm going to do as I was taught as a child and say nothing about it ;)
16:52:43 smcginnis: Yes, I think so.
16:52:47 jgriffith: In house, we found passing upstream tests was not a good indicator that it will actually work in a real system
16:52:58 jungleboyj: Thank you :)
16:53:01 smcginnis: I still prefer removal, and fixing the tag
16:53:03 smcginnis: At least I thought we were.
16:53:15 DuncanT: understood, but not really germane to this discussion I don't think
16:53:15 smcginnis: But I'm not going to fight it
16:53:19 DuncanT: I agree, but are we going to get widespread agreement on that?
16:53:28 smcginnis: i'm on board
16:53:54 DuncanT: After trying to change the stable policy (and the SS that triggered) I'm hesitant to push for an openstack wide tag change.
16:53:56 DuncanT: don't get me wrong... I have no real problem with removal...
but I do think that there are some consequences that we would have that I don't want to deal with
16:54:24 smcginnis: SS can be really healthy though, things need reexamining from time to time
16:54:31 removal is a good stick for vendors, but it's awful for deployments.
16:54:35 DuncanT: like it or not the deprecation policy tag exists, and if we lose that it *could* have consequences for drivers in Cinder.
16:54:46 DuncanT: at least for those that don't belong to a distro
16:55:01 DuncanT: True. But other than a small number of supporters for our policy up to this point, I didn't get the feeling I was getting much buy-in on the idea.
16:55:07 DuncanT: things like "we're the only driver we can guarantee will be there next release"
16:55:20 I think we should be punishing vendors for not supporting their drivers and give deployments a chance to get off that vendor's backend.
16:55:26 hence the proposal I made
16:55:28 jgriffith: Yeah, I guess the best bet is to follow the policy, put flashing lights in the release notes and review the tag definition in slow time
16:55:35 hemna: +1
16:55:53 hemna: I think we should publish their contributions... including reviews, code submissions to core etc and show comparisons :)
16:55:55 in the release notes
16:56:02 jgriffith, :)
16:56:07 jgriffith, +1
16:56:09 DuncanT: +1
16:56:27 jgriffith: Rough. ;-)
16:56:35 OK, so if we have general agreement towards tagging the driver and deprecating it, the next question is the automatic disabling of the driver without an explicit config file setting saying to load it anyway.
16:57:02 re: limp stick
16:57:20 AKA, the limp stick issue. :D
16:57:34 smcginnis: I think that firms up the stick a bit.
16:57:53 I'm glad HR doesn't pay attention to IRC....
16:58:09 scottda, shhh!
16:58:12 scottda: Chicken. ;-)
16:58:14 scottda: no worries, these logs aren't logged or anything :p
16:58:25 patrickeast: Lol
16:59:00 Two min left btw
16:59:06 Make that one
16:59:07 smcginnis: so i'm not sure how much that config option really buys us other than as an annoyance for deployers
16:59:22 smcginnis: at the point they've upgraded and still have a bad driver, they are going to use it anyway
16:59:39 smcginnis: I think we should have it. It will put some pressure on the vendor from customers that will be made more aware of what is going on.
16:59:42 patrickeast, that's kinda the point really
16:59:54 #link https://review.openstack.org/#/c/355608/ Driver tag patch
17:00:04 since we don't remove the drivers, we don't have much of anything to really annoy the vendors to force them to comply
17:00:10 Since we're out of time, please comment in the review. ^^
17:00:12 re: https://goo.gl/kHnWVN
17:00:27 Time's up. Thanks everyone
17:00:32 hemna: Seriously? :)
17:00:36 #endmeeting
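For reference on the flag-plus-config-option mechanism debated in the last topic: a rough sketch of how an "unsupported" driver marker and an override option could fit together. The option and attribute names here are illustrative only; the patch linked above may implement this differently.

```python
# Illustrative sketch of the "unsupported driver" idea, not the actual
# Cinder patch.
from oslo_config import cfg
from oslo_log import log as logging

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('enable_unsupported_driver',
                default=False,
                help='Set to True to allow a driver that has been marked '
                     'unsupported (no CI, unmaintained) to start anyway.'),
])

LOG = logging.getLogger(__name__)


class DeadbeatDriver(object):
    """Hypothetical driver that no longer meets the CI requirements."""

    SUPPORTED = False  # flipped to False by the "unsupported" patch

    def do_setup(self, context):
        if not self.SUPPORTED:
            # Log a loud warning so operators see it on every startup.
            LOG.warning('Driver %s is unsupported and deprecated; it may '
                        'be removed in the next release.',
                        self.__class__.__name__)
            if not CONF.enable_unsupported_driver:
                # Without the explicit override, refuse to start -- this
                # is the "disable by default" behavior under discussion.
                raise RuntimeError(
                    'Refusing to start unsupported driver; set '
                    'enable_unsupported_driver=True to override.')
```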