16:00:03 <smcginnis> #startmeeting Cinder
16:00:03 <openstack> Meeting started Wed Mar 15 16:00:03 2017 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 <openstack> The meeting name has been set to 'cinder'
16:00:10 <Swanson> hello
16:00:10 <smcginnis> #chair jungleboyj
16:00:11 <openstack> Current chairs: jungleboyj smcginnis
16:00:15 <smcginnis> jungleboyj: Just in case ^^
16:00:22 <jungleboyj> smcginnis:  Thanks.  :-)
16:00:45 <e0ne> hi
16:00:47 <xyang> hi
16:00:48 <_alastor_> o/
16:01:01 <smcginnis> Oh good, was afraid everyone missed the time change.
16:01:02 <bswartz> just in case of what?
16:01:02 <rarora> hi
16:01:08 <tbarron> hi
16:01:11 <geguileo> hi!
16:01:12 <smcginnis> bswartz: In case I get dropped for some reason.
16:01:17 <jessegler> o/
16:01:23 <smcginnis> bswartz: Not in my usual hemisphere.
16:01:24 <geguileo> smcginnis: no ping today?  ;-)
16:01:28 <bswartz> ah
16:01:30 <diablo_rojo> Hello :)
16:01:31 <smcginnis> geguileo: Oh right! :)
16:01:35 <patrickeast> o/
16:01:36 <e0ne> smcginnis: AFAIK, only the US has switched to DST so far
16:01:43 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylikehu mdovgal ildikov
16:01:50 <smcginnis> wxy viks ketonne abishop sivn breitz
16:01:56 <smcginnis> e0ne: Yeah. When does Europe?
16:01:57 <jungleboyj> o/
16:02:13 <hemna> \o
16:02:14 <e0ne> the last weekend of March in Ukraine
16:02:29 <smcginnis> e0ne: Good to be aware of that, thanks!
16:02:40 <mdovgal> Hi
16:02:40 <smcginnis> #topic Announcements
16:02:41 <DuncanT> Same here I believe
16:02:47 <smcginnis> DuncanT!
16:02:52 <wxy> hello
16:02:56 <smcginnis> DuncanT: Nice to see you. :)
16:03:02 <hemna> whoa
16:03:10 <jungleboyj> DuncanT: !!!!!
16:03:12 <DuncanT> Thanks :-)
16:03:22 <DuncanT> Just starting to get back into things
16:03:29 * smcginnis pictures DuncanT walking into the Cheers bar.
16:03:47 <jungleboyj> smcginnis:  ++
16:03:47 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:03:55 <smcginnis> We are four weeks out from milestone 1.
16:04:01 <jungleboyj> DuncanT:  Glad to see it.  There wasn't enough arguing without you.  ;-)
16:04:11 <smcginnis> No major Cinder deadlines for P-1, but still a good checkpoint.
16:04:27 <smcginnis> It would be really nice if all of the new driver submissions have had a once-over by then.
16:04:45 <smcginnis> So we're not pointing out spelling errors and the like right before the actual driver deadline.
16:05:29 <smcginnis> #topic Tracking driver requirements
16:05:36 <smcginnis> eharney: All yours.
16:05:50 <eharney> well, i added driver-requirements.txt
16:06:06 <eharney> the aim is for drivers to add their "optional" dependencies there (optional for Cinder, but needed for the driver to work)
16:06:34 <smcginnis> #link https://review.openstack.org/443761 Driver dependencies patch
16:06:39 <e0ne> eharney: is it something like requirements.txt?
16:06:45 <eharney> this started because of difficulties figuring out whether we packaged all of the right things in RDO etc, so should be useful for any downstreams
16:07:00 <e0ne> eharney: I've checked for ceph - there are no rbd and rados packages on PyPI :(
16:07:00 <eharney> e0ne: it's very much like it, it's just not managed by any tools
16:07:03 <smcginnis> eharney: So no automated enforcement or anything like that now, just a convention for us to capture these hidden dependencies, right?
16:07:04 <eharney> e0ne: correct
16:07:32 <e0ne> eharney: I like the general idea
16:07:32 <hemna> eharney, so these are pypi requirements to use a particular driver ?
16:07:34 <eharney> this may well end up being managed by our requirements tools etc at some point, but i haven't really figured out what that looks like, so it's just useful documentation for now
16:07:42 <eharney> hemna: yes
16:07:44 <hemna> ok
16:08:06 <hemna> does it make any sense to create a const in the driver class that has this info?
16:08:12 <hemna> and then the tools generate this file ?
16:08:14 <smcginnis> eharney: But we can also capture non-pypi in there too?
16:08:25 <hemna> as well as the generate_driver_list.py can generate it too ?
16:08:26 <smcginnis> hemna: Ooh, I kind of like that.
16:08:34 * smcginnis likes self documenting automation
16:08:38 <hemna> kinda like the CI_WIKI_NAME thingy
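To make hemna's suggestion concrete, here is a minimal sketch of a per-driver constant plus a generator, assuming a hypothetical DRIVER_REQUIREMENTS attribute named by analogy with the real CI_WIKI_NAME constant on Cinder driver classes (the fake driver and attribute are illustrative, not current Cinder code):

```python
# Sketch only: DRIVER_REQUIREMENTS is a hypothetical attribute, modeled
# on the existing CI_WIKI_NAME convention for Cinder driver classes.


class FakeVolumeDriver(object):
    CI_WIKI_NAME = 'Example_Vendor_CI'
    # Optional PyPI packages this driver needs at runtime.
    DRIVER_REQUIREMENTS = ['examplestorage-sdk']


def generate_driver_requirements(drivers, path='driver-requirements.txt'):
    """Collect per-driver deps into one file, as a tool like
    generate_driver_list.py could do."""
    packages = sorted({pkg for drv in drivers
                       for pkg in getattr(drv, 'DRIVER_REQUIREMENTS', ())})
    with open(path, 'w') as f:
        f.write('# Generated; optional per-driver dependencies.\n')
        f.writelines(pkg + '\n' for pkg in packages)


if __name__ == '__main__':
    generate_driver_requirements([FakeVolumeDriver])
```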
16:08:38 <eharney> i think we should determine if/how we will integrate with requirements tools before we integrate it too deeply into cinder's code
16:08:48 <smcginnis> eharney: Fair
16:08:48 <e0ne> eharney: +1
16:08:58 <eharney> but certainly sounds like a useful thing to consider
16:09:11 <hemna> well it wouldn't be much different than the hard coded driver-requirements.txt except that that file gets generated
16:09:24 <hemna> $0.02
16:09:47 <eharney> smcginnis: how non-pypi things work is still somewhat of an open question
16:09:51 <hemna> eharney, thanks for starting this
16:10:22 <eharney> there's also another area of non-python things (CLIs), not too sure there yet either, but, this seems like a starting point
16:10:35 <smcginnis> eharney: That may be harder to automate, but I think it might be even more useful since I know when I tried to find all of it, it really wasn't obvious where some of this came from.
16:10:45 <hemna> maybe a driver-bindep.txt
16:10:48 <smcginnis> I think just pypi is a great start.
16:11:28 <smcginnis> eharney: Anything else we should discuss on that now? Or just an awareness thing at this point?
16:11:33 <eharney> nothing else from me
16:11:39 <smcginnis> eharney: Cool, thanks!
16:11:42 <smcginnis> #topic Revisiting adding a Bandit Gate
16:11:52 <smcginnis> jonesn, rarora: Hi
16:11:57 <jonesn> https://wiki.openstack.org/wiki/Security/Projects/Bandit#Bandit_Baseline_Gate
16:12:05 <rarora> hi
16:12:21 <smcginnis> #link  https://wiki.openstack.org/wiki/Security/Projects/Bandit#Bandit_Baseline_Gate Bandit Baseline
16:12:33 <jonesn> Short story: Having a bandit gate that only checks for added issues will be a fairly small change to the tox.ini and zuul configuration to set up the gate itself.
16:12:51 <e0ne> jonesn, rarora: do we have a fresh report for cinder somewhere?
16:12:58 <smcginnis> jonesn: So kind of like what we do with pylint now. We just don't want the number to go up unnoticed.
16:13:22 <jonesn> Exactly like pylint, except we don't have to write the script for it.
16:13:35 <jonesn> e0ne: not off hand
16:14:41 <e0ne> jonesn: I'm ok with this job if there's only a small number of false positives
16:15:00 <rarora> so basically you can just run the bandit-baseline command and it will do a diff... the only issues that will pop up are new ones
16:15:11 <smcginnis> There were a set of patches to disable warnings on false positives. Are those still out there?
16:15:11 <rarora> with medium confidence and severity there should not be many
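For context, the baseline gate described on the linked wiki page amounts to running something like the following (a sketch; the real job would be wired up through tox.ini and Zuul, and the flag choices mirror rarora's medium-confidence/severity point):

```python
"""Wrapper sketch: bandit-baseline diffs the current commit against its
parent, so only newly introduced issues are reported.  -ll limits output
to medium-or-higher severity and -ii to medium-or-higher confidence."""
import subprocess
import sys

sys.exit(subprocess.call(
    ['bandit-baseline', '-r', 'cinder', '-n', '5', '-ll', '-ii']))
```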
16:16:27 <jonesn> e0ne: We might want to look through the list of things bandit checks for and exclude some entirely
16:16:34 <rarora> smcginnis: not sure of which patches you're talking about but we can also set up a config to disable certain bugs altogether and people can always #nosec something and leave a comment if they know it isn't an issue
16:16:34 <xyang> smcginnis: I haven't seen any outstanding patches, but there are probably more to fix
16:16:41 <eharney> experience with the pylint job has shown that this kind of thing is useful, but really requires some particularly interested people to keep an eye on it
16:17:19 <jonesn> eharney: rarora, jessegler and I would be pretty dedicated to checking the failures
16:17:38 <e0ne> eharney: good point
16:17:39 <jonesn> In fact I was going to ask if there was a way to be notified if a particular gate fails.
16:18:15 <smcginnis> jonesn: Not that I know of, but a spot check from time to time should be good at a minimum.
16:18:34 <smcginnis> Our only challenge with the way pylint works is since it's non-voting, it tends to get ignored.
16:18:56 <smcginnis> But I think usually it doesn't go too long before someone (usually eharney) notices and complains. :)
16:19:20 <jonesn> smcginnis: I'd strongly advocate using this as a trial period, with the goal of moving to a voting gate.
16:19:25 <rarora> we were planning on doing non-voting at least for now until we can fine tune the bugs, and it would at least be a step in the right direction
16:19:30 <smcginnis> So I guess I agree with e0ne - as long as there aren't too many false positives, I think it's good.
16:19:49 <smcginnis> jonesn: I'd really want to be more comfortable with it before we make it voting.
16:20:11 <smcginnis> Especially since out of the things flagged that I looked into, not one was an actual security issue.
16:20:31 <smcginnis> But I don't want that one real instance to slip through either.
16:20:32 <eharney> yeah... most of the current things have resulted in adding #nosec from what i've seen
16:20:54 <smcginnis> So I'm good for now as long as it's nv.
16:21:05 <jonesn> Could we get some recommendations on which rules to turn off?
16:21:23 <smcginnis> Not that I know of off hand, but there may be some things.
16:22:21 <jessegler> Having it non-voting should allow us to do some statistics on which rules end up getting #nosec'd a lot and that might inform which to turn off.
16:22:32 <smcginnis> +1
16:22:52 <smcginnis> Let's get that added NV, then we can see where to go from there.
16:22:53 <jonesn> ^I could pull together a list of all the #nosecs that are in the code right now
16:23:04 <jonesn> smcginnis: awesome.
16:23:05 <smcginnis> jonesn: That may be useful.
16:23:15 <smcginnis> OK, anything else on this topic?
16:23:25 <rarora> I think we're set
16:23:26 <eharney> i'm still not sure how many of the current #nosec items are things that should be fixed vs just being disabled for now to have a clean run to start with
16:23:56 <smcginnis> eharney: Would you want to review that first?
16:24:18 <eharney> maybe, and we should probably decide that any time we add one, there's a good comment about why it's safe, or a bug report
16:24:28 <jonesn> +1
16:24:38 <jungleboyj> eharney: +1
16:24:45 <rarora> eharney: with this setup we wouldn't need a clean run since it just checks the delta, so we at least won't have to add any more #nosecs to get a clean run
16:24:59 <eharney> sounds good
16:25:12 <xyang> I did a grep on nosec and only got 9 entries
16:25:31 <smcginnis> May be a good exercise for someone to go through and add comments on those explaining why they are not really issues.
16:25:33 <rarora> also +1 for needing a comment for #nosecs
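A small sketch of the #nosec-plus-comment convention being agreed on here (the function is illustrative, not real Cinder code):

```python
import subprocess


def list_volumes():
    # Fixed argv and shell=False, with no user-controlled input, so
    # Bandit's subprocess warning (B603) is a false positive here.
    return subprocess.check_output(['lvs', '--noheadings'])  # nosec
```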
16:25:52 <smcginnis> But I don't think that would need to be the gate on getting a nv job added that just checks the diff.
16:26:38 <smcginnis> Alright, I'll move on for now. Please post a comment in #openstack-cinder to let folks know if a patch is submitted to add a job.
16:26:49 <jonesn> Will do.
16:27:02 <smcginnis> We can always comment on there if any issues are thought of by then.
16:27:04 <smcginnis> jonesn: Thanks!
16:27:12 <smcginnis> #topic Dynamic Reconfiguration
16:27:19 <diablo_rojo> Hello :)
16:27:20 <smcginnis> diablo_rojo: All yours
16:27:39 <smcginnis> #link https://specs.openstack.org/openstack/cinder-specs/specs/ocata/dynamic_reconfiguration.html Dynamic reconfig spec
16:27:47 <diablo_rojo> So with all the changes due to the A/A stuff I wanted to make sure the spec was still accurate
16:28:14 <diablo_rojo> I know we had noted the approach we decided on might not be pretty now that those things have landed.
16:28:20 <smcginnis> diablo_rojo: There hasn't been a patch to move this to Pike, right?
16:28:34 <hemna> the formatting of that page looks borked for some reason
16:28:46 <diablo_rojo> smcginnis, yeah
16:28:50 <hemna> like the .rst was incorrectly formatted
16:28:58 <smcginnis> hemna: Under the Work Items section?
16:29:06 <hemna> Use cases
16:29:14 <diablo_rojo> hemna, I figured I'd fix that up if we had other changes to make
16:29:23 <diablo_rojo> Wanted to do it all at once.
16:29:25 <eharney> there are a lot of indentation errors after bullet points, which makes it format funny
16:29:27 <smcginnis> Oh, hmm. And Alternatives section too.
16:29:31 <hemna> not sure what happened there.  it didn't recognize the bullet points it looks like
16:29:59 <smcginnis> diablo_rojo: I think if you can fix that up and propose a move to Pike, we can comment on there.
16:30:01 <geguileo> I think bullet points were missing a space after *
16:30:24 <smcginnis> diablo_rojo: It was already accepted previously, but we can have another review to make sure it still matches the current state of things.
16:30:43 <diablo_rojo> So, I talked to geguileo a bit yesterday and there are two approaches we could take right now. One: create a new mechanism and have drivers implement it. Two: modify the SIGHUP handling to stop all child processes and start new ones with the new config.
16:31:06 <diablo_rojo> smcginnis, right, just wanted to make sure the approach was still valid before I did a bunch of work and found out no one liked it anymore :)
16:31:11 <jgriffith> diablo_rojo spec LGTM
16:31:17 <hemna> can we safely stop processes though?  There could be outstanding actions being taken
16:31:23 <jgriffith> diablo_rojo I like the sighug approach
16:31:43 <DuncanT> diablo_rojo: draining has the disadvantage of creating an outage while long-running operations (backup, copy to/from image) finish
16:31:44 <diablo_rojo> hemna, yeah that was something I wondered. There could be ongoing processes that never get finished up?
16:31:53 <hemna> yup
16:32:00 <e0ne> diablo_rojo: sighup sounds good for me
16:32:01 <diablo_rojo> DuncanT, right.
16:32:03 <hemna> since we don't really track transactions/actions being taken
16:32:17 <hemna> copy volume <--> image
16:32:19 <smcginnis> sighug :)
16:32:22 <hemna> backup, etc
16:32:23 <DuncanT> hemna: I think that comes under the generic heading of 'drain'
16:32:24 <geguileo> we are already using sighup to do the reload within Cinder
16:32:33 <Swanson> smcginnis, quiet.
16:32:39 <hemna> DuncanT, we don't know what to drain at this point.
16:32:42 <geguileo> we do the drain using oslo services mechanism
16:33:09 <smcginnis> Swanson: It just sounds so friendly.
16:33:17 <diablo_rojo> smcginnis, hugged to death
16:33:21 <hemna> also, we need to add more information in the deployer impact section
16:33:22 <DuncanT> geguileo: So this feature is already implemented? Last I checked, our sighup handling was dangerously broken
16:33:34 <DuncanT> (It has been a while)
16:33:35 <jgriffith> +1 for *sighug*, we should start a new Big Tent project with that name
16:33:36 <geguileo> DuncanT: Last time I checked it worked
16:33:44 <hemna> it's obvious, but if you change replication settings, you could potentially orphan existing replicated volumes
16:33:47 <diablo_rojo> hemna, When I get the patch up to clean things up you can make comments and I will integrate them. Sound good?
16:33:49 <geguileo> DuncanT: that was for my Barcelona talk
16:34:00 <hemna> diablo_rojo sure
16:34:10 <DuncanT> geguileo: I'll take another look and see what I can break
16:34:14 <geguileo> the problem is that you are without service for as long as the drain takes
16:34:21 <diablo_rojo> hemna, cool I will let you know as soon as I get it up there
16:34:33 <smcginnis> diablo_rojo: +1
16:34:36 <bswartz> bash: kill: SIGHUG: invalid signal specification :-(
16:34:39 <geguileo> and cinder backup and volume can take a long time when we are talking about the data plane
16:34:48 <DuncanT> geguileo: Ah, yes, that was one of my definitions of 'broken', though that wasn't the dangerous one
16:34:53 <hemna> geguileo, that's what concerns me
16:35:14 <diablo_rojo> hemna, agreed.
16:35:18 <geguileo> hemna: DuncanT I believe that's what we are trying to fix now
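For readers following along, the SIGHUP mechanism under discussion looks roughly like this bare-bones sketch, using the plain signal module rather than the actual oslo.service plumbing Cinder relies on (path and function body are illustrative):

```python
import signal
import time

CONF_FILE = '/etc/cinder/cinder.conf'  # illustrative path


def reload_backends(signum=None, frame=None):
    # Real code would drain in-flight operations, re-read
    # enabled_backends from cinder.conf, and restart child processes.
    print('reloading config from %s' % CONF_FILE)


signal.signal(signal.SIGHUP, reload_backends)

if __name__ == '__main__':
    reload_backends()
    while True:
        time.sleep(60)  # `kill -HUP <pid>` triggers a reload
```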
16:36:06 <hemna> diablo_rojo also need to note that changing things like FCZM settings can nuke existing attachments
16:36:19 <DuncanT> geguileo: What you suggest is working is exactly what the spec proposes
16:36:26 <diablo_rojo> hemna, can do.
16:36:50 <geguileo> DuncanT: No, I'm saying that the sighup mechanism is already there, we just need to modify its behavior to whatever we agree on
16:37:12 <DuncanT> geguileo: The spec suggests drain and restart
16:37:29 <geguileo> DuncanT: Mmmmm, then we already have that
16:37:46 <diablo_rojo> Lol
16:37:57 <DuncanT> We're done! Beer time!
16:38:05 <jungleboyj> DuncanT:  Yay!
16:38:34 <diablo_rojo> geguileo accidentally implemented my spec lol
16:38:42 <geguileo> we could at least support adding new backends and removing them through sighup
16:38:44 <smcginnis> :)
16:38:48 <jungleboyj> geguileo:  But does it refresh from cinder.conf?
16:38:55 <geguileo> jungleboyj: yup
16:39:14 * jungleboyj is baffled ...
16:39:15 <geguileo> jungleboyj: at least I think so...  now I'm unsure
16:39:34 <jungleboyj> geguileo:  I am going to have to go try it.
16:40:50 <smcginnis> Well, diablo_rojo and geguileo, maybe you two should talk a bit.
16:40:57 <jungleboyj> :-)
16:41:03 <smcginnis> diablo_rojo: But I think getting it updated is probably worth it.
16:41:09 <diablo_rojo> smcginnis, a bit more than we did yesterday anyway lol
16:41:14 <geguileo> I think the reloading of the config is a minor thing
16:41:14 <smcginnis> :D
16:41:22 <diablo_rojo> smcginnis, yep I put it towards the top of my todo list
16:41:33 <smcginnis> diablo_rojo: Excellent
16:41:39 <geguileo> The big thing is whether we have any ideas about how to prevent the service from being out while draining
16:41:42 <jungleboyj> geguileo:  The one man  Cinder show.
16:41:51 <diablo_rojo> jungleboyj, +1
16:42:37 <smcginnis> diablo_rojo: If you get an update, probably good to add DuncanT and hemna as reviewers to make sure their concerns get addressed.
16:42:37 <geguileo> diablo_rojo: Maybe we could change it to only drain backends whose config has changed or that are being removed entirely
16:43:53 <diablo_rojo> smcginnis, can do
16:43:55 <geguileo> and spin up a new process for each added backend
16:43:56 <smcginnis> geguileo: Nice, would be good if it can be smart about it.
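geguileo's refinement boils down to diffing the old and new per-backend config and only touching what changed; a sketch under those assumptions (illustrative, not the actual cinder.service code):

```python
def plan_reload(old_conf, new_conf):
    """Return which backends to drain, spawn, restart, or leave alone."""
    old, new = set(old_conf), set(new_conf)
    to_stop = old - new                    # removed backends: drain
    to_start = new - old                   # added backends: spawn
    to_restart = {b for b in old & new     # changed config: drain + spawn
                  if old_conf[b] != new_conf[b]}
    untouched = (old & new) - to_restart   # keep serving, no outage
    return to_stop, to_start, to_restart, untouched


old = {'lvm': {'iscsi_helper': 'tgtadm'}, 'ceph': {'rbd_pool': 'volumes'}}
new = {'ceph': {'rbd_pool': 'volumes2'}, 'nfs': {'nas_host': 'filer01'}}
print(plan_reload(old, new))
# ({'lvm'}, {'nfs'}, {'ceph'}, set())
```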
16:44:26 <smcginnis> diablo_rojo: I'm going to move on. I think you at least have next steps.
16:44:40 <diablo_rojo> smcginnis, thanks :)
16:44:43 <smcginnis> #topic Forum Topic Brainstorming
16:44:52 <smcginnis> #link https://etherpad.openstack.org/p/BOS-TC-brainstorming Brainstorming Etherpad
16:45:03 <smcginnis> They are looking for topics for the forum.
16:45:20 <smcginnis> I've also created a Cinder specific one for us:
16:45:26 <smcginnis> #link https://etherpad.openstack.org/p/BOS-Cinder-brainstorming Cinder topic brainstorming.
16:45:35 <smcginnis> All captured here:
16:45:39 <smcginnis> #link https://wiki.openstack.org/wiki/Forum/Boston2017
16:45:47 <xyang> smcginnis: how much time do we have?
16:45:58 <smcginnis> Some good topics from jgriffith. We should add those to the etherpad.
16:46:00 <xyang> smcginnis: how is this different from design summit in the past?
16:46:13 <smcginnis> xyang: Unfortunately I have no idea for any of those.
16:46:22 <smcginnis> diablo_rojo: Any foundation guidance you can provide?
16:46:29 <jgriffith> smcginnis hehe... I meant to add those as topics for todays meeting but :)
16:46:48 <jgriffith> I updated the wiki to reflect that after I noticed I screwed it up
16:46:49 <smcginnis> jgriffith: Probably good enough here. Will do that next
16:46:52 <jgriffith> :)
16:47:01 <smcginnis> Been there, done that.
16:47:38 <xyang> smcginnis: we'll probably get an answer on what to do with translations?
16:47:48 <smcginnis> Running out of time, so I'll move on. But add ideas and just know we may or may not be able to discuss at the forum depending on how timing is.
16:48:00 <smcginnis> xyang: Yes, hoping that's finalized by then.
16:48:03 <hemna> xyang remove them all!
16:48:04 <xyang> smcginnis: I saw that as a forum topic
16:48:06 <hemna> :P
16:48:10 <smcginnis> #topic 3rd party CI
16:48:13 <xyang> hemna: :)
16:48:13 <jungleboyj> xyang:  It is just more like the fishbowl sessions.
16:48:25 <hemna> rm -f _LE, _LW, _LDIE!
16:48:32 <smcginnis> hemna: :)
16:48:41 <smcginnis> jgriffith: All yours now.
16:48:50 <smcginnis> #link https://etherpad.openstack.org/p/cinder-ci-proposals Changing 3rd party CI requirement
16:48:54 * jungleboyj shakes my head at hemna
16:48:56 <jgriffith> smcginnis thanks!
16:49:10 <jgriffith> Ok, so folks that were in ATL are familiar with this
16:49:18 <jgriffith> at least if you stuck around Friday :)
16:49:29 <jgriffith> So I tried to summarize in that etherpad
16:49:47 <jgriffith> basically IMO we've stopped making any real forward/beneficial progress on 3rd party CI
16:49:56 <jgriffith> so... maybe we should try something different
16:50:40 <jgriffith> The TL;DR is to stop *requiring* true continuous integration for now
16:51:07 <jgriffith> instead require that CI's respond to triggers:  "run <driver-ci-name>" and "run all-cis"
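In other words, a third-party CI watching the Gerrit event stream would only fire on an explicit comment; a minimal sketch of that trigger matching (CI name is hypothetical, trigger phrasing taken from the proposal above):

```python
import re

CI_NAME = 'mydriver-ci'  # hypothetical CI name
TRIGGER = re.compile(r'^\s*run (%s|all-cis)\s*$' % re.escape(CI_NAME),
                     re.IGNORECASE | re.MULTILINE)


def should_run(gerrit_comment):
    return bool(TRIGGER.search(gerrit_comment))


assert should_run('run mydriver-ci')
assert should_run('Please retest:\nrun all-cis')
assert not should_run('recheck')
```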
16:51:13 <bswartz> more like an admission that the current "continuous" requirement is not being met by nearly all 3rd party CI
16:51:22 <jgriffith> bswartz correct
16:51:35 <jgriffith> in other words, quit fooling ourselves :)
16:51:38 <DuncanT> jgriffith: Doesn't seem like a terrible idea given where we are and the total lack of improvement in the last 12 months
16:51:46 <jgriffith> and focus on actual tasks to improve
16:52:13 <DuncanT> jgriffith: With the addendum that it's sad we have to do this
16:52:15 <jgriffith> by isolating these to a more periodic and concerted effort we can publish, analyze, and focus on getting things fixed up
16:52:23 <diablo_rojo> jgriffith, +1
16:52:28 <jgriffith> DuncanT yeah, but such is life
16:52:44 <jgriffith> So if folks want, take a look at the etherpad and add comments/suggestions
16:53:10 <diablo_rojo> Not biting off more than we can chew. Start small and work our way up from there to actually make things better.
16:53:37 <jgriffith> I'd propose doing both a dummy patch that just pulls from master and runs everything AND a known "everything should fail this" test
16:53:39 <bswartz> DuncanT: it turns out that maintaining 3rd party CIs is like a full-time job for some people, and I don't think that was ever the intention when we started down this road in Atlanta (the ATL design summit, not the PTG)
16:53:59 <smcginnis> jgriffith: I like the "pipecleaner" everything should fail on this patch
16:54:13 <jgriffith> bswartz DuncanT so the only caveat is that you'll still have to maintain a CI, it just won't have the same load demand or elasticity requirements
16:54:24 <smcginnis> There goes my side business of running CIs for companies. :D
16:54:25 <jgriffith> smcginnis yeah, that's awesome!
16:54:53 <hemna> How many of us look at driver CI for a driver patch?
16:55:02 <jgriffith> and this way, as we go along, if we get better we can do things like up the frequency or revisit true continuous
16:55:06 <hemna> I'm still not going to +A a patch for a driver, unless it passes CI
16:55:25 <jgriffith> hemna read the etherpad, I knew you'd say that :)
16:55:28 <tbarron> that's consistent I think
16:55:33 <bswartz> hemna: yeah that's important
16:55:37 <jgriffith> and I think that's perfectly reasonable/valid
16:55:41 <DuncanT> bswartz: Yeah... I think the fact that it is a full-time job suggests that something, somewhere needs some serious rethinking, but that turns out to be a very big topic and probably outside of the cinder remit
16:55:42 <smcginnis> hemna: I do. And I guess the good thing about this is we can just trigger extra runs on the ones we care about.
16:55:42 <hemna> ok coolio.
16:55:44 <bswartz> vendor CIs should run on vendor driver patches
16:56:03 <jgriffith> even if it means a core has to add the comment "run xyz" that's fine
16:56:08 <tbarron> +1
16:56:10 <hemna> jgriffith,+1
16:56:22 <smcginnis> Side bonus - less scrolling in gerrit. ;)
16:56:24 <jgriffith> if you're an overachiever your CI will already be running it, OR you'll add the comment to the review when you submit it
16:56:29 <jungleboyj> jgriffith:  +1
16:56:39 <jgriffith> smcginnis hehe
16:56:41 <smcginnis> jgriffith: Good point. There's no reason you can't run on all patches if you want to.
16:57:03 <jgriffith> Note, I did point out that this was the *requirement* but that folks can continue testing every patch if they want
16:57:27 <smcginnis> Going to run out of time, so let's let folks read up on the etherpad.
16:57:28 <jgriffith> I should clarify that... if your CI sucks and you're cluttering things with nonsense you will be dealt with harshly :)
16:57:33 <jgriffith> kk
16:57:36 <smcginnis> #topic Filtering and the API
16:57:38 <smcginnis> +1
16:57:41 <jgriffith> F'ing FILTERS
16:57:43 <bswartz> it should also be pointed out that continual failing will be a lot less acceptable if you only have to run once a week
16:57:46 <jgriffith> can we stop the madness!
16:57:53 <smcginnis> bswartz: +1
16:57:53 <DuncanT> jgriffith: Filters are good
16:58:05 <jgriffith> seriously, the filtering every which way to Sunday is ludicrous
16:58:16 <smcginnis> DuncanT: I actually did say "let me put on my Duncan hat" when this came up at the PTG.
16:58:18 <jgriffith> Let's have a generic filtering mechanism and stop
16:58:20 <hemna> OpenStack - we love the madness.
16:58:40 <DuncanT> generic, consistent filtering mechanism +1
16:58:40 <jgriffith> I have a spec and first round patch...
16:58:41 <jungleboyj> hemna:  Nice.  A new mantra
16:58:42 * jgriffith gets links
16:58:49 * bswartz eats popcorn
16:59:00 * jungleboyj throws popcorn
16:59:01 <jgriffith> https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/generalized-filtering-for-cinder-list-resource
16:59:17 <smcginnis> #link https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/generalized-filtering-for-cinder-list-resource Filtering patches
16:59:25 <jgriffith> https://review.openstack.org/#/c/441516/
16:59:41 <jgriffith> So there's a start of things, and the spec
17:00:10 <DuncanT> jgriffith: Filter by tenant (for admins) is missing and important
17:00:14 <jgriffith> My proposal is that we can go ahead and expose everything the DB lets us filter on if we want, by using a --filter arg and a json file to control what the admin wants to allow
17:00:32 <smcginnis> Sorry, out of time.
17:00:32 <jgriffith> DuncanT it's just another filter k/v pair isn't it?
17:00:43 <jgriffith> volume list filter=tenant:id
17:00:44 <jgriffith> ok
17:00:50 <smcginnis> Let's go over to #openstack-cinder.
17:00:52 <jgriffith> til next time dirty rotten filters!
17:00:55 <DuncanT> jgriffith: I'm just trying to figure that out, but yeah, I think so
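A sketch of how that --filter whitelist could hang together, including DuncanT's admin-only tenant filter (file name, keys, and helper are illustrative, not the spec itself):

```python
import json

# e.g. /etc/cinder/resource_filters.json, maintained by the operator
ALLOWED = json.loads('''{
    "volume": ["name", "status", "bootable", "project_id"],
    "snapshot": ["name", "status", "volume_id"]
}''')


def validate_filters(resource, filters, is_admin=False):
    allowed = set(ALLOWED.get(resource, []))
    if not is_admin:
        allowed.discard('project_id')  # filter-by-tenant is admin-only
    invalid = set(filters) - allowed
    if invalid:
        raise ValueError('invalid filters: %s' % sorted(invalid))
    return filters


print(validate_filters('volume', {'status': 'available'}))
```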
17:00:56 <smcginnis> hehe
17:01:00 <smcginnis> Thanks everyone.
17:01:03 <smcginnis> #endmeeting