16:00:06 <smcginnis> #startmeeting Cinder
16:00:07 <openstack> Meeting started Wed Jul 27 16:00:06 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 <openstack> The meeting name has been set to 'cinder'
16:00:12 <dulek> hi!
16:00:14 <smcginnis> Courtesy ping: dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind jbernard _alastor_ bluex vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao
16:00:17 <yuriy_n17> hi
16:00:17 <cFouts> hi
16:00:20 <geguileo> Hi
16:00:21 <Swanson> hello
16:00:22 <diablo_rojo> Hello :)
16:00:25 <mtanino> hello
16:00:26 <adrianofr> Hi
16:00:27 <smcginnis> Agenda: https://wiki.openstack.org/wiki/CinderMeetings#Next_Cinder_Team_meeting
16:00:28 <jseiler> hi
16:00:28 <hemna> mornin
16:00:30 <xyang2> hi
16:00:31 * TheDude abides
16:00:33 <DuncanT> lo
16:00:37 <smcginnis> :)
16:00:58 <smcginnis> #topic Announcements
16:01:14 <smcginnis> No big announcements this week.
16:01:32 <smcginnis> I think we had a good productive midcycle last week. Thanks to all who participated.
16:01:32 <_alastor_> o/
16:01:54 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:02:03 <e0ne> hi
16:02:14 <smcginnis> We've started to make some progress on the priorities we identified at the summit.
16:02:20 <smcginnis> HA in particular.
16:02:21 <dulek> Thanks to hemna for providing us the hangout and recording!
16:02:40 <smcginnis> So what we discussed was getting these in as long as they don't break the non-AA case.
16:02:41 <e0ne> dulek, hemna: +2
16:02:57 <smcginnis> Then once it's in there working through better testing and identifying any issues.
16:03:08 <rajinir> 0/
16:03:09 <smcginnis> So basically just some "beta" support for this release.
16:03:28 <hemna> :)
16:03:31 <smcginnis> Yes, thanks hemna for once again being our awesome AV team! ;)
16:03:48 <dulek> smcginnis: I don't think the stuff we've got in is actually functional from HA perspective. It's more like "prerequisites".
16:04:06 <smcginnis> dulek: OK, "alpha" support then. ;)
16:04:16 <smcginnis> Really just something to be able to start working on better testing.
16:04:31 <smcginnis> Pre-alpha even...
16:04:34 <geguileo> dulek is right, these are prerequisites
16:04:37 <geguileo> lol
16:04:57 * dulek is pointing that out to make sure this won't make it to project update presentation. ;)
16:05:02 <smcginnis> #link http://releases.openstack.org/newton/schedule.html Release schedule
16:05:07 <smcginnis> dulek: Good call. ;)
16:05:09 <e0ne> smcginnis, geguileo: IMO, we have to write some basic test cases for HA to start working on tests
16:05:37 <erlon> hey
16:05:44 <e0ne> smcginnis: any news about os-brick? we've been blocked for 2 months
16:05:50 <smcginnis> So we're a little over a month away from newton-3.
16:05:58 <geguileo> e0ne: As agreed in the midcycle I plan on creating a document with things I've manually tested
16:06:04 <smcginnis> Final library freeze is before that though.
16:06:15 <diablo_rojo> e0ne: I think we are still waiting on privsep stuff
16:06:27 <e0ne> geguileo: please, ping me tomorrow. I'll take a look at our internal test cases for HA
16:06:30 <DuncanT> if this privsep thing isn't sorted soon, we should pull it out and go back to rootwrap
16:06:37 <smcginnis> e0ne: The patch to add hard-coded support of privsep to rootwrap is going through the gate queue as we speak!
16:06:42 <geguileo> e0ne: Thanks!
16:06:47 <e0ne> geguileo: I've missed today's QA meeting :(
16:06:52 <diablo_rojo> smcginnis: Yay!
16:07:01 <hemna> yah we need to make the rootwrap privsep patch a blocker
16:07:05 <hemna> that needs to land
16:07:06 <smcginnis> So once that's through and a new oslo.rootwrap release is out we can finally release os-brick!
16:07:18 <e0ne> smcginnis, diablo_rojo: I +2'ed all patches related to privsep and rootwrap
16:07:29 <e0ne> smcginnis: great!
16:07:34 <TheDude> I thought QA meeting was Thursday
16:07:36 <diablo_rojo> e0ne: Cool :) Hopefully that should get this moving
16:07:38 <smcginnis> I'll watch for that release and get a new os-brick one out once that lands.
16:07:49 <hemna> https://review.openstack.org/#/c/339275/
16:07:51 <hemna> that needs love
16:08:13 <e0ne> can we get https://review.openstack.org/#/c/328297/ merged before new release?
16:08:47 <hemna> I'll look at it
16:08:49 <smcginnis> We do have some time before all the oslo stuff makes it through to be useable for us.
16:08:52 <e0ne> hemna: I'm sorry, I didn't have time to review and test the connectors refactoring patch today
16:09:05 <hemna> we could use a CI for it
16:09:12 <smcginnis> So yes, please take a look at os-brick patches and get some of those through that should get in there.
16:09:22 <hemna> e0ne, I have to rebase that refactor patch again today...another merge conflict.
16:09:32 <hemna> https://review.openstack.org/#/c/307974/
16:09:35 <hemna> I'd like that to land
16:09:43 <smcginnis> We can do another brick release before final library freeze, but the more runtime we have on some of these changes the better.
16:09:43 <hemna> (after I fix the merge conflict again today)
16:10:04 <smcginnis> hemna: That would be good. But also scares me with the size of it. ;)
16:10:05 <e0ne> hemna: +1. I'm going to review and test it asap
16:10:15 <hemna> I'll fix that right now.
16:10:36 <e0ne> smcginnis: TBH, it's not a big patch to review. a lot of LoC are changed, but almost all the code is the same
16:10:37 <smcginnis> Biggest problem is all the failing third party CIs, IMO. Would feel more comfortable with more green there, but I'm not going to hold my breath.
16:10:49 <smcginnis> e0ne: Yeah, it looks a lot worse than it is.
16:10:59 <hemna> yah
16:11:08 <e0ne> smcginnis: the main changes are in connector.py
16:11:14 <hemna> I think the 3PAR brick CI is supposed to be working right now..
16:11:18 * hemna crosses fingers.
16:11:35 <e0ne> :)
16:11:37 <smcginnis> #topic Volume Group Hiding/Protecting
16:11:49 <smcginnis> So apparently no one wants to put their names on things anymore.
16:11:53 <smcginnis> Whose topic is this?
16:12:06 <xyang2> saggi?
16:12:12 <saggi> hi
16:12:21 <xyang2> saggi: your topic is up
16:12:22 <saggi> yes, that's me
16:12:28 <smcginnis> saggi: Take it away. :)
16:12:43 <saggi> So I put a draft for the BP.
16:12:54 <saggi> But I'll give the elevator pitch now
16:13:08 <e0ne> saggi: did you post a spec patch?
16:13:14 <DuncanT> This one makes me nervous with the level of detail that's there. No details about the interaction with the normal cinder API. What happens if the tenant deletes a volume in a hidden CG? Snaps it? Puts it in another CG? etc
16:13:24 <smcginnis> #link https://etherpad.openstack.org/p/hidden-consistency-groups-draft Hidden group spec work in progress
16:14:23 <xyang2> DuncanT: a volume in a CG cannot be deleted individually, but it is possible to remove or add volumes, so it would be good to add that
16:14:53 <smcginnis> So wouldn't that be extremely confusing to an end user if a volume is in a hidden CG and they try to delete it?
16:15:02 <DuncanT> xyang2: So now the tenant suddenly gets volumes that they can't delete, without a good explanation of why. Not good.
16:15:21 <smcginnis> saggi: So how does Smaug handle the case now where a volume is being protected but gets deleted?
16:15:21 <e0ne> DuncanT: +1. it will be very confusing for users
16:15:25 <xyang2> DuncanT: I see what you mean. Almost like all the volumes need to be hidden as well
16:15:41 <saggi> smcginnis: We still don't set up consistency groups.
16:15:41 <smcginnis> Hiding the actual volumes seems even more confusing.
16:15:48 <smcginnis> saggi: I mean without CGs
16:15:55 <saggi> Yes, we detect that
16:16:11 <smcginnis> saggi: So why are CGs different?
16:16:31 <saggi> Because they implement a continuous feature.
16:16:49 <dulek> Locking the volumes instead?
16:17:06 <xyang2> saggi: I believe you have a problem with individual resources as well?
16:17:17 <smcginnis> saggi: Not sure I follow.
16:17:43 <saggi> So if we just do a point-in-time backup, we do a full scan at that point in time and figure out what's available
16:17:44 <xyang2> saggi: so if smaug backs up a volume and that volume can be deleted in cinder, wouldn't that be a problem too
16:18:02 <saggi> No, since it didn't exist at that point in time
16:18:31 <saggi> But if we create a consistency group to facilitate replication and the user modifies it, we think things are being replicated when they aren't.
16:19:08 <saggi> Also, we don't create volumes (unless you are restoring). But if we create a consistency group, the user might wonder where it came from.
16:19:09 <smcginnis> Not sure I like this, but I think it should still be dynamic like that. Maybe scan volumes, add to CG to get consistent snap, then remove after snap is complete?
16:19:14 <xyang2> saggi: so this is only a problem for replication?
16:19:29 <xyang2> saggi: because it is continuous, not point in time?
16:19:34 <saggi> Yes
16:19:58 <saggi> And also that we create an auxiliary entity visible to the user
16:20:07 <saggi> That they might not understand
16:20:56 <smcginnis> I think obvious naming could help. "smaug-cg-xxxx-xxx-xxx-xx-xxx"
16:20:58 <DuncanT> If we make replication groups have a description field, we can put an explanation in there
16:21:11 <saggi> It's similar to why tmp/helper files in desktop applications are hidden. The user might want to modify/delete them, but you don't want them always visible in your $HOME
16:21:24 <cFouts> what about making replication status available via the API?
16:21:26 <saggi> I suggested that as the Alternative
16:21:36 <saggi> Just putting it in the description
16:21:55 <saggi> But if we end up making a lot of CGs for some reason, they might clutter things for the user.
16:21:58 <flip214> still the users need to look that up to understand why the volume can't be deleted
16:23:05 <saggi> A description is a must. The hidden feature is just to prevent confusion and clutter.
16:23:54 <saggi> We don't anticipate making a lot of groups. So clutter might not be an issue.
16:24:33 <DuncanT> I was honestly hoping replication groups will be a different entity to CGs
16:24:38 <saggi> But it does impact the UX if you keep seeing smaug-cg-<SOME_GUID> in your list
16:24:48 <smcginnis> DuncanT: Good point - they are, aren't they?
16:24:50 <xyang2> DuncanT: it is different
16:24:57 <smcginnis> This all depends on xyang2's work.
16:25:13 <smcginnis> So the naming on this isn't accurate calling it consistency groups.
16:25:24 <DuncanT> So they'll only appear on the replication group list, which is exactly where you want them.
16:25:27 <smcginnis> If this is talking about tiramisu replication.
16:25:28 <saggi> DuncanT: That's above my pay grade.
16:25:34 <smcginnis> saggi: ;)
16:25:39 <DuncanT> saggi: :-)
16:25:45 <smcginnis> DuncanT: I think you are right.
16:26:20 <smcginnis> So we're a little ahead of things IMO.
16:26:26 <smcginnis> We don't even have replication groups yet.
16:26:49 <smcginnis> So something to keep in mind, but we're not there until we actually add that feature.
16:27:07 <saggi> For consistency groups used for backup we don't fear tampering, since we fix them up before each backup, but they still might clutter things.
16:27:32 <xyang2> saggi: we don't have CG for backup yet
16:27:36 <saggi> We *can* fix them. We don't use them yet.
16:27:40 <DuncanT> smcginnis: Makes sense. Nice to have a clear explanation of the problem now
16:27:47 <xyang2> saggi: we have CG for snapshot if that is what you are referring to
16:28:04 <saggi> xyang2: We plan to snapshot and copy from it.
16:28:11 <xyang2> saggi: ok
16:29:14 <smcginnis> So... doesn't seem like we are making any progress here.
16:29:29 <smcginnis> saggi: Could you actually submit that as a spec and we can discuss it there?
16:29:58 <saggi> smcginnis: Sure
16:30:33 <smcginnis> saggi: Thanks. I think we all need some time to digest this and figure out what the need is and how this will look to the end user.
16:30:41 <saggi> I'll also write about our use cases for backup and replication so it's clearer.
16:30:42 <smcginnis> We should probably readdress soon.
16:30:49 <smcginnis> saggi: Perfect - thanks!
16:30:58 <saggi> smcginnis: np
16:31:02 <smcginnis> #topic Tooz locks for volume drivers
16:31:09 <smcginnis> Another anonymous topic. :)
16:31:19 <bluex> sorry, this one is mine
16:31:24 <DuncanT> We need a wiki blame command :-)
16:31:28 <smcginnis> bluex: All your then...
16:31:28 <e0ne> :)
16:31:34 <smcginnis> *yours
16:31:47 <bluex> ok then, we have tooz locking in base cinder code
16:31:56 <bluex> now we need it in volume drivers too
16:32:11 <smcginnis> #link http://paste.openstack.org/show/521166/ RemoteFS driver inheritance
16:32:28 <bluex> for now https://bugs.launchpad.net/cinder/+bug/1606698
16:32:28 <openstack> Launchpad bug 1606698 in Cinder "Volume drivers should use distributed lock manager" [Undecided,New]
16:32:50 <bluex> but I'll be splitting it to separate bug reports for each driver
16:33:12 <smcginnis> bluex: Have you grepped for how many drivers are not using the distributed lock decorator?
16:33:31 <dulek> smcginnis: Definitely these ones: http://paste.openstack.org/show/521166/
16:33:32 <smcginnis> I agree it would be good to have bug reports for specific drivers.
16:33:36 <e0ne> bluex: who will be responsible for bug fixes? drivers maintainers or cinder community?
16:33:40 <dulek> smcginnis: But that may not be a complete list.
16:33:42 <hemna> smcginnis, +1
16:33:44 <bluex> yup, all of them are mentioned in bug report
16:34:06 <geguileo> e0ne: It should be driver maintainers
16:34:07 <smcginnis> bluex: Ah, I see.
16:34:08 <dulek> e0ne: I would be more comfortable if driver maintainers would switch that.
16:34:14 <bluex> there are quite a lot of drivers using synchronized(..., external=True) decorators
16:34:19 <smcginnis> geguileo: +1
16:34:35 <xyang2> smcginnis: what is the decision on this now?  Should all drivers be changed to use tooz?
16:34:35 <geguileo> e0ne: Because they are the ones who actually know if locks need to be distributed or if it's enough to be external
16:34:36 <e0ne> agree, just want to be sure that we're on the same page
16:34:45 <smcginnis> At least I would want the driver maintainers to +1 any change made by someone else.
16:35:05 <geguileo> xyang2: Only drivers that require the DLM to work in Active-Active need to move to Tooz
16:35:12 <smcginnis> xyang2: In preparation for HA AA, yeah, I think so.
16:35:14 <e0ne> smcginnis: driver maintainer + CI report should be required
16:35:21 <smcginnis> e0ne: +1
16:35:21 <xyang2> geguileo: smcginnis, ok, thanks
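(For reference: a minimal sketch of the kind of change being discussed here - moving a driver's critical section from an oslo.concurrency external file lock, which only serializes callers on the same host, to a tooz distributed lock shared across cinder-volume hosts. The backend URL, lock name, and function names below are illustrative assumptions, not anything agreed in the meeting.)

    from oslo_concurrency import lockutils
    from tooz import coordination

    # Today: an external (file-based) lock only protects callers on one host.
    @lockutils.synchronized('myvendor-provision', external=True)
    def provision_lun_locked():
        pass  # vendor-specific critical section

    # With a DLM: a tooz lock is shared by every cinder-volume service that
    # points at the same coordination backend (zookeeper, redis, etc.).
    coordinator = coordination.get_coordinator('zookeeper://198.51.100.5:2181',
                                               b'cinder-volume-host-1')
    coordinator.start()

    def provision_lun_dlm():
        with coordinator.get_lock(b'myvendor-provision'):
            pass  # same critical section, now safe under Active-Active

    coordinator.stop()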
16:36:13 <smcginnis> bluex: So you will create specific driver bugs. Anything else on this?
16:36:18 <DuncanT> smcginnis: Who's taking on the job of poking the maintainers who don't respond? There are always a bunch
16:36:36 <DuncanT> smcginnis: I've not got time this time
16:36:36 <smcginnis> DuncanT: Yeah, true.
16:36:47 <bluex> I think that's all, only wanted to raise awareness of this :)
16:37:11 <smcginnis> DuncanT: I think if a bug is filed against their driver and they don't address it, then it sits there and users of that backend know there may be issues doing AA.
16:37:25 <smcginnis> Not the best story, but it is what it is.
16:37:58 <smcginnis> Part of the bigger issue of vendors not paying attention to bugs filed against their drivers.
16:38:11 <smcginnis> bluex: Cool, thanks for bringing this up.
16:38:35 <smcginnis> And on the topic of uninvolved vendors...
16:38:41 <DuncanT> smcginnis: I'm not sure we need a vendor to approve every change... that's one of the reasons I try to -2 changes that make the drivers too difficult to understand
16:38:48 <smcginnis> #topic Non-CI compliant driver removal
16:38:58 <DuncanT> (like being entirely hidden in third party libraries)
16:39:04 <smcginnis> DuncanT: Yeah, we can make a judgement call for sure.
16:39:17 <smcginnis> So for CI - at the midcycle we discussed CI enforcement.
16:39:34 <smcginnis> Trying to come up with a concrete policy about enforcement.
16:39:53 <smcginnis> We decided to start with looking at how many haven't reported in more than 22 weeks.
16:40:02 <smcginnis> Whoa - 2 weeks, not 22. :)
16:40:09 <smcginnis> Although that's pretty bad too.
16:40:25 <smcginnis> #link http://paste.openstack.org/show/542612/ CI's not reporting > 2 weeks
16:40:38 <smcginnis> Not sure if this is 100% accurate.
16:40:45 <smcginnis> But it's a start for looking at it.
16:41:00 <smcginnis> There are a few on here that I know haven't been reporting for some time.
16:41:35 <smcginnis> So I think I will put up some driver removal patches for a few that I know for sure haven't been around.
16:41:39 <DuncanT> Start putting the removal patches up and post the list of them on the mailing list. No point being overly soft about this, and we don't have to rush to merge the patches
16:41:59 <smcginnis> We can wait a week like we discussed and if they are not addressed, then we can +A the patches.
16:42:00 <hemna> ouch
16:42:03 <hemna> that's a lot of CI's
16:42:10 <smcginnis> But it should be a clear warning shot.
16:42:22 <hemna> DuncanT, +1
16:42:25 <hemna> do it
16:42:33 <cFouts> s/warning shot/incentive
16:42:45 <diablo_rojo> DuncanT: +1
16:42:49 <diablo_rojo> smcginnis: +1
16:42:54 <smcginnis> And if we post a patch and post to the ML and get no response from a vendor, that's a clear indication they are not paying attention and not involved in the community as they should be.
16:43:12 <erlon> smcginnis: +1
16:43:15 <diablo_rojo> smcginnis: +1
16:43:15 <hemna> so...
16:43:17 <_alastor_> smcginnis: +1
16:43:24 <DuncanT> We've got time for them to come back into the fold before release if they want to
16:43:25 <hemna> FWIW, the HPE_STORAGE_CI has been reporting
16:43:36 <smcginnis> So a lot of agreement. I want to make sure we think about all sides, so if anyone disagrees it's completely OK to speak up.
16:43:45 <hemna> for example, https://review.openstack.org/#/c/292570/
16:43:49 <diablo_rojo> DuncanT: I think we decided so long as it's before M3?
16:43:50 <smcginnis> hemna: Yeah, I know there's some other ones there. I won't be posting a patch for that.
16:44:04 <hemna> it just makes me wonder how valid that list is then
16:44:07 <hemna> something isn't right
16:44:09 <smcginnis> diablo_rojo: Yep, I believe that's what we agreed on.
16:44:27 <hemna> can we verify the veracity of that tool that generated that list ?
16:44:32 <smcginnis> hemna: It's based on info on the third party CI wiki, so it's only as accurate as that information.
16:44:36 <_alastor_> hemna: It's meant to be a heuristic.  Definitely file bugs if you find them :)
16:44:44 <hemna> as it's not working
16:44:49 <smcginnis> To be clear - I'm only going to post patches for the ones on there that I know for a fact are not reporting.
16:44:57 <smcginnis> I will do a little double checking before I do anything.
16:45:06 <DuncanT> diablo_rojo: Honestly I missed the fine detail, sound quality was sketchy. The sooner we get the removal patches posted the better though
16:45:24 <diablo_rojo> DuncanT: That's what my wonderful notes in the etherpad were for :)
16:45:30 <smcginnis> But things like X-IO, Tintri (ironically), Tegile, etc. have all been missing for a while now.
16:45:36 <hemna> wait, the tool uses the wiki to determine which CI's are reporting ?!
16:45:38 <hemna> I'm confused
16:45:56 <_alastor_> hemna: I'll look into why it's reporting HPE as not reporting
16:46:08 <diablo_rojo> _alastor_ can you explain how it works?
16:46:10 <erlon> hemna: agreed, there's something missing/passing in the list. I've been watching the CIs on this patch and counted a lot more missing than what is in the list: https://review.openstack.org/#/c/336092/
16:46:11 <hemna> we can take this offline
16:46:18 <smcginnis> _alastor_: I'm guessing a name change or something not reflected. s/HP/HPE/
16:46:26 <hemna> it just doesn't look like the data is accurate at all.
16:47:30 <_alastor_> It's using the wiki, so it's only as accurate as the information there.  We don't have any single place that has all the necessary information, so it's a best-guess system
16:47:33 <hemna> the tool isn't working
16:47:38 <hemna> it's completely broken IMHO
16:47:52 <smcginnis> It gives some useful hints.
16:47:59 <hemna> _alastor_, the wiki says HPE is online and reporting
16:48:04 <hemna> again, the tool is broken.
16:48:19 <_alastor_> hemna: agreed, something is up.  I'll take a look and see if I can fix it.
16:48:31 <hemna> ok
16:48:41 <hemna> the wiki doesn't seem like the right place to scrape IMHO
16:48:42 <xyang2> _alastor_: can you post the command so everyone can give a try?
16:48:43 <e0ne> IMO, we should have a policy of removing drivers if their CI has not reported for N weeks
16:48:44 <smcginnis> I will also compare against here http://ci-watch.tintri.com/project?project=cinder&time=7+days
16:48:51 <smcginnis> And manual checking, last comment, etc.
16:49:04 <hemna> the wiki may say the CI is reporting....
16:49:06 <smcginnis> e0ne: That's what we're discussing. :)
16:49:07 <hemna> when it's not.
16:49:30 <hemna> can we rethink this in cinder channel please
16:49:35 <hemna> (the tool)
16:49:42 <smcginnis> The policy agreed on was: after 2 weeks of no reporting, a patch goes up for removal, then 1 week to give them a chance to respond and address it before it goes through.
16:49:43 <_alastor_> hemna: The tool uses Gerrit queries to determine the last time a CI reported on a patch
16:49:44 <e0ne> smcginnis: yes, I've missed a part of this. my point is: we have to implement policies and follow them
16:49:52 <smcginnis> e0ne: Totally.
16:49:59 <hemna> _alastor_, *sigh*.  which is it?  the wiki or gerrit ?
16:50:03 <jay-mehta> yes
16:50:06 <hemna> you are saying 2 different things every time I ask.
16:50:14 <smcginnis> That's the hope that we can set a clear policy that we consistently enforce.
16:50:18 <diablo_rojo> e0ne: It was a discussion we had on day2 I think
16:50:20 <hemna> the policy is great
16:50:25 <erlon> _alastor_: can't the tool use the driver_list tool in Cinder to get the driver list and check if they are reporting?
16:50:27 <hemna> we just need a tool that isn't useless.
16:50:36 <smcginnis> hemna: Who's saying 2 different things?
16:50:37 <_alastor_> hemna: Wiki to get the initial information (like gerrit account info), then gerrit queries using that information
16:50:51 <hemna> smcginnis, first he said the tool uses the wiki, then he just said it uses gerrit.
16:50:53 * hemna is very confused
16:51:04 <hemna> either way, it's broken.
16:51:11 <smcginnis> Let's not get too hung up on the tool at this point. It's sure to have some issues, but it can be improved over time.
16:51:20 <smcginnis> Or replaced by something else if we find a better solution.
16:51:22 <cFouts> smcginnis +1
16:51:42 <smcginnis> So far we've had Tintri's dashboard, patrickeast's dashboard, some last-comment scripts.
16:51:46 <hemna> I just don't want folks getting driver removal patches put up based upon bogus data from a broken tool.
16:51:52 <_alastor_> erlon: What is that?
16:51:55 <smcginnis> No one has been able to come up with one tool that works the best.
16:51:55 <hemna> I'm all for a tool that helps automate it, but it needs to work.
16:52:04 <e0ne> hemna: +1
16:52:05 <_alastor_> hemna: agreed
16:52:08 <smcginnis> So I'm just using this as one input to help identify where to look.
16:52:12 <diablo_rojo> smcginnis: Point being, time to get reverting :)
16:52:17 <hemna> that's why I'm hung up on the tool.
16:52:23 <smcginnis> We are a long way from where we can blindly automate anything based on its results.
16:52:33 <erlon> hemna: I wouldn't say it's useless, it just needs some tuning. It helped with the list. We just need to work on the false negatives
16:52:48 <hemna> cool.
16:52:56 <hemna> lets get it to work, and I'm all for it.
16:52:57 <erlon> _alastor_: there's a script that list all drivers in Cinder
16:53:22 <erlon> _alastor_: tools/generate_driver_list.py
16:53:34 <smcginnis> My ideal would be something that uses that list of drivers, matches it up with specific tests run by CIs, and gets the data on a per-driver basis.
16:53:46 <smcginnis> It's that mapping that's the killer.
16:54:00 <smcginnis> DuncanT: Didn't you have a proposal at one point to keep something in tree?
16:54:05 <hemna> can we add a CI_NAME attribute to each driver ?
16:54:16 <e0ne> hemna: great idea!
16:54:37 <smcginnis> hemna: CI_NAME and CI_TESTNAME. That could work. At least most of the time.
16:54:37 <DuncanT> I did, yes. I originally wanted to add an attribute to the drivers, several people preferred a yaml file
16:54:39 <_alastor_> smcginnis: That would definitely make the job easier
16:54:46 <hemna> hp_3par_fc.HPE3PARFCDriver.CI_NAME="HPE_Storage CI"
16:55:01 <erlon> hemna: I think it would be better to add the driver name to the CI name
16:55:03 <smcginnis> And now that we have the interface compliance checks, we can enforce that all drivers implement something like a get_ci_name() call.
16:55:08 <DuncanT> The one thing that a yaml (or similar) file can add is a list of the files that make up a driver
16:55:13 <hemna> smcginnis, +1
16:55:18 <diablo_rojo> smcginnis: +1
16:55:23 <_alastor_> smcginnis: +1
16:55:33 <hemna> the yaml file can get out of sync with the driver
16:55:39 <DuncanT> smcginnis: You need to be able to instantiate the driver to call that though, unless it's a classmethod
16:55:42 <hemna> the driver should be the source of truth IMHO.
16:55:47 <e0ne> smcginnis: could you send mail to openstack-dev about CI_TESTNAME attribute, please?
16:55:50 <DuncanT> hemna: I can agree with that
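(Rough sketch of the idea being floated here - a driver advertising its own CI identity so a tool could map gerrit CI comments back to drivers without scraping the wiki. The attribute values, the CI_TESTNAME job name, and the classmethod shape are hypothetical; nothing has been agreed yet.)

    # Hypothetical per-driver CI metadata; names and values are illustrative only.
    class HPE3PARFCDriver(object):           # would really inherit the FC base driver
        CI_NAME = 'HPE_Storage CI'           # gerrit account that posts results
        CI_TESTNAME = 'hpe-3par-fc-tempest'  # job name to look for in its comments

        @classmethod
        def get_ci_name(cls):
            # classmethod so a compliance check could read it without
            # instantiating the driver (DuncanT's concern above)
            return cls.CI_NAME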
16:56:04 <smcginnis> e0ne: Let's think about that one a bit first.
16:56:11 <smcginnis> I like the idea, but want to stew on it a bit.
16:56:33 <hemna> want me to throw together a WIP ?
16:56:43 <smcginnis> hemna: Sure, that would be great!
16:56:46 <hemna> ok cool.
16:56:52 <smcginnis> 3 minute warning.
16:56:56 <hemna> I'll just do a few drivers at first and see what it looks like.
16:56:57 <smcginnis> Anything else?
16:57:10 <DuncanT> hemna: If you can get the file list in there somehow, that would be awesome :-)
16:57:26 <_alastor_> If we could get the CI account name and actual CI job name that would be great
16:57:38 <smcginnis> OK, I'll call it. Thanks everyone!
16:57:47 <smcginnis> #endmeeting