16:00:36 <smcginnis> #startmeeting Cinder
16:00:37 <openstack> Meeting started Wed May 31 16:00:36 2017 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 <openstack> The meeting name has been set to 'cinder'
16:00:53 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino karlamrhein diablo_rojo jay.xu jgregor lhx_ baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao tommylikehu mdovgal ildikov wxy
16:00:56 <patrickeast> o/
16:00:57 <e0ne> hi
16:00:59 <smcginnis> viks ketonne abishop sivn breitz
16:00:59 <xyang1> Hi
16:01:00 <cFouts> hi
16:01:00 <_alastor_> \o
16:01:00 <diablo_rojo> Hello :)
16:01:03 <scottda> hi
16:01:04 <abishop> o/
16:01:17 <bswartz> .o/
16:01:20 <jungleboyj> o/
16:01:22 <wxy|> o/
16:01:47 <jsuchome> hi
16:01:47 <smcginnis> Hey everyone
16:02:20 <smcginnis> #topic Announcements
16:02:24 <tbarron> hi
16:02:28 <DuncanT> hi
16:02:31 <geguileo> hi!
16:02:34 <jgriffith> hola compadres
16:02:37 <smcginnis> #info Pike-2 is next week
16:03:01 <smcginnis> We have a few driver reviews out there and a few specs that need attention before the cutoff.
16:03:28 <smcginnis> Please try to spend a little time on that if at all possible.
16:03:49 <smcginnis> #topic DocImpact and docs bugs
16:04:04 <smcginnis> Been seeing this a lot so I wanted to bring it up here.
16:04:23 <smcginnis> Patches only need DocImpact tags if they need follow up documentation updates to openstack-manuals.
16:04:43 <smcginnis> And it is up to the patch submitter to do that work or get it to the right person to make the docs change.
16:04:44 <e0ne> smcginnis: from my perspective, I prefer to assign these bugs to patch authors or close them
16:04:47 <mdovgal> hi)
16:05:07 <smcginnis> We have a bunch of open docs bugs that are automatically generated from commits of these DocImpact patches that are just sitting out there.
16:05:17 <smcginnis> e0ne: Yep, that is how it should work
16:05:40 <smcginnis> But for a lot of them, it doesn't appear the patch authors know or care to do anything with them.
16:05:53 <e0ne> :(
16:06:03 <jungleboyj> smcginnis:  So, I assume we still need them for config changes, right?
16:06:08 <smcginnis> Which either means there wasn't really a DocImpact, or they think the magic documentation fairies will just take care of everything.
16:06:08 <apuimedo> hi
16:06:10 <jungleboyj> Even if they come with a release note.
16:06:30 <smcginnis> jungleboyj: If there is an additional change to documentation that is needed.
16:06:31 <jungleboyj> He he.  Magic Documentation Fairies
16:06:38 <jungleboyj> smcginnis:  Ok.
16:07:34 <smcginnis> Most config options automatically get generated in the tables. So unless there is something more to document about them, there's not really a doc impact.
16:07:55 <smcginnis> #link https://bugs.launchpad.net/cinder/+bugs?field.tag=doc
16:08:06 <smcginnis> Would love to see those implemented or closed. ^
16:08:35 <smcginnis> #topic Additional options to NFS backend
16:08:36 <e0ne> smcginnis: in such case, we have to be attentive when reviewing such patches
16:08:47 <jungleboyj> smcginnis:  Ah, thank you.  The info about them being auto-generated is helpful.
16:08:54 <smcginnis> jsuchome: Sorry, one second then it's all yours. :)
16:09:17 * jsuchome can wait :-)
16:09:19 <smcginnis> e0ne: Attentive in what way?
16:09:28 <jgriffith> smcginnis what about the config guide/docs?
16:09:48 <e0ne> smcginnis: to not add docimpact flag if only release notes are needed
16:09:57 <smcginnis> e0ne: +1
16:10:10 <jungleboyj> e0ne:  That was why I was asking.  :-)
16:10:12 <jgriffith> smcginnis for example. https://docs.openstack.org/mitaka/config-reference/block-storage/drivers/solidfire-volume-driver.html
16:10:19 <jungleboyj> I may have been an offender there.
16:10:52 <smcginnis> jgriffith: The config pages that just list a table of the config options will be automatically updated. But if you want to add text describing how it should be used or any notes, then that should have the DocImpact and follow up docs submission.
16:10:56 <smcginnis> :)
16:11:02 <jungleboyj> smcginnis:  I will try to look through the list of docs bugs and see which ones can maybe just be closed or are easy fixes.
16:11:09 <jgriffith> smcginnis excellent, thanks :)
16:11:12 * jungleboyj did say I would be the docs liaison
16:11:24 <smcginnis> jsuchome: OK, sorry about that. The floor is yours.
16:11:29 <jgriffith> I was unaware we got that automated now :)
16:11:38 <jungleboyj> jgriffith:  +1
16:12:06 <jgriffith> although... nope it's not.  But I'll look at that offline
16:12:16 <jsuchome> I basically wanted to ask, what's the correct way of providing extra NFS options to cinder backend - documentation seems to mention 2 ways
16:12:46 <bswartz> did you see the ML reply from eharney?
16:12:47 <jsuchome> also the cinder code is using both (cinder.nfs_mount_options, option in the file cinder.nfs_shares_config)
16:12:51 <eharney> which documentation are you looking at?  i'd like to see if it looks correct
16:12:58 <Swanson> hello
16:13:22 <jsuchome> oh, there was a reply? let me check
16:13:37 <jungleboyj> jsuchome:  cinder.nfs_shares_config is only used if there isn't an IP of a nas server in cinder.conf .
16:13:43 <jsuchome> I've linked the documentation links in my mail
16:14:24 <jsuchome> is used, or is supposed to be used?
16:14:43 <eharney> i'll look at updating the config-reference/block-storage docs, i think they need to be revamped a bit
16:14:47 <jsuchome> this piece of code https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L163 takes whatever is written there and passes it further as the options
16:14:50 <eharney> for the nfs driver
16:15:19 <jungleboyj> eharney:  Agreed.  I don't thinks those were ever updated after I changed the way that nfs_shares_config is or is not used.
16:15:37 <smcginnis> Speaking of NFS, CI jobs have been failing for awhile now.
16:15:57 <jsuchome> eharney: ok, so you're saying putting options into the nfs_shares_config config should be obsolete
16:15:59 <jungleboyj> smcginnis: :-(
16:16:03 <jsuchome> but what about nova?
16:16:32 <eharney> jsuchome: Nova will mount the export with the info that Cinder provides it, i believe
16:16:46 <jsuchome> when volume is being attached, nova knows about the options that were in nfs_shares_config, but not about nfs_mount_options
16:17:08 <jsuchome> exactly, but Cinder does not provide those from nfs_mount_options
16:17:12 <jsuchome> at least not in my tests
16:17:19 <jsuchome> so this might be a bug
16:17:37 <jungleboyj> nfs_mount_options should only be used if nfs_shares_config is not being used.
16:17:53 <jungleboyj> Let me look here.
16:18:06 <jsuchome> well, there's definitely nothing in the code that prevents such usage
16:19:29 <jungleboyj> If I remember correctly, if nas_host and nas_share_path are specified it should not use nfs_shares_config .
16:19:36 <eharney> correct
16:19:38 <jungleboyj> Then the nfs_mount_options will be used.
16:19:47 <eharney> i'm not sure about that
16:20:07 <jungleboyj> eharney:   I am pretty sure I have set that up.
16:20:09 <eharney> there's nas_mount_options and nfs_mount_options -- i think that's true for nas_mount_options, not sure about the other
16:20:16 <jungleboyj> I don't think I used nas_mount_options
16:20:36 <jungleboyj> I would have to go try it again to be sure though.
16:20:37 <eharney> at any rate it looks like we need to re-evaluate the config for this driver and polish it up a bit
16:20:55 <jungleboyj> eharney:  ++ and make sure the documentation matches
16:21:01 <cFouts> eharney +1
16:21:39 <smcginnis> And fix the CI problem. :)
16:22:01 * eharney looks to see what the CI problem is
16:22:02 <smcginnis> jsuchome: Anything else?
16:22:07 <jungleboyj> In fact, I know the nas_host/nas_share_path isn't documented properly as I have gone and looked for that in the documentation and couldn't find it.
16:22:17 <jsuchome> so should I file a bug report that nfs_mount_options are not passed to nova?
16:22:40 <jungleboyj> jsuchome:  Can you make the bug more generic?
16:23:12 <cFouts> I thought nas_mount_options were preferred over using nfs_mount_options
16:23:18 <eharney> smcginnis: oh, the compute service policy is still messed up
16:23:42 <smcginnis> eharney: Oh, right.
16:23:43 <jungleboyj> jsuchome:  NFS config options not documented/working as expected and indicate that the nfs_shares_config isn't used as documented and that it doesn't appear nfs_mount_options are passed along.
16:23:51 <eharney> cFouts: i think so, but we can sort that out in a thorough review in the bug
16:23:53 <jsuchome> I think I haven
16:24:06 <jungleboyj> eharney:  and I can use it to get things cleaned up.
16:24:09 <jsuchome> jungleboyj: ok, that sounds good, I'll do it
16:24:26 <jungleboyj> eharney:  smcginnis  Are you guys ok with that plan?
16:24:35 <smcginnis> jungleboyj: +1 from me.
16:24:43 <jungleboyj> Coolio
16:24:45 <eharney> sure, i'll help sort out what the end goal should be here and help figure out the details
16:25:07 <eharney> we need to start issuing deprecation messages for nfs_shares_config if we aren't already, and aim at getting rid of it, this transition started quite a while ago
16:25:23 <jsuchome> (but yes, nas options seem to have higher priority: https://github.com/openstack/cinder/blob/stable/newton/cinder/volume/drivers/nfs.py#L99 , not that it is relevant to my case)
16:25:41 <mdovgal> jungleboyj, Coolio with his gangsta's paradise? :D
16:26:01 <eharney> the nas_* options have higher priority because the goal was to have the same options for a handful of similar drivers called nas_* rather than having numerous duplicated options
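[Editor's note: a rough sketch of the two cinder.conf styles being contrasted above, using option names from the cinder NFS driver. Showing both styles in one backend stanza is for illustration only; per the discussion, the nas_* options take priority when set, and exact precedence should be verified against the driver code for a given release.]

```ini
[nfs-backend]
volume_driver = cinder.volume.drivers.nfs.NfsDriver

# Older style: exports listed in a separate shares file, one
# "host:/export [options]" entry per line; options embedded there
# are what Nova ends up seeing on attach
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_options = vers=4,minorversion=1

# Newer style: a single export given directly; the shares file is
# meant to be ignored when nas_host/nas_share_path are set
nas_host = nfs.example.com
nas_share_path = /exports/cinder
nas_mount_options = vers=4,minorversion=1
```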
16:26:40 * jungleboyj keeps spending most my life, Livin' in a gangsta's paradise
16:27:30 <jungleboyj> eharney: ++
16:27:49 <smcginnis> Sounds like we have a plan. Good to move on?
16:27:55 <jsuchome> I'm good
16:28:00 <jungleboyj> Move along.  Nothing to see here.
16:28:09 <smcginnis> #action jungleboyj to work on bug report from jsuchome
16:28:14 <smcginnis> #topic Recruiting awesome Block-Heads to help with fuxi-golang
16:28:27 <smcginnis> jgriffith: fuxit
16:28:32 <jgriffith> LOL
16:29:05 <jgriffith> So not sure how many people are already aware... but fuxi is a kuryr project that has provided a means to plug Cinder into a Docker environment
16:29:07 <jungleboyj> smcginnis:  That is fuxed up.
16:29:09 <e0ne> jgriffith: I don't know what fuxi-golang is, but I'm interested in everything related to standalone
16:29:23 <jgriffith> similar to the cinder-docker-volume-plugin stuff
16:29:29 <apuimedo> it's pronounced Foo She
16:29:43 <e0ne> jgriffith: I'll be glad to help you with it
16:29:43 <smcginnis> Not in my head. ;)
16:29:43 <jgriffith> now, I know that there are several of us here that are doing Docker and K8's work on the side
16:29:48 <jgriffith> smcginnis +1 :)
16:29:54 <apuimedo> smcginnis: :D
16:30:09 <jgriffith> With the foundation for stand-alone cinder and some other things going on
16:30:26 <apuimedo> the idea is to consolidate the efforts just like in Neutron we did with Kuryr
16:30:40 <apuimedo> (fuxi also enables manila shares to be accessible as Docker vols)
16:30:46 <jgriffith> I am hoping it might be a good time to take another shot at building a community around some of this work, in particular fooshi (I'll just spell it that way to keep jungleboyj under control)
16:30:56 <jgriffith> children!!
16:31:04 <jungleboyj> jgriffith:  Hey now.  I can be good.
16:31:11 <Swanson> jgriffith, don't ruin our fun.
16:31:12 <jgriffith> jungleboyj no you can't :)
16:31:26 <jungleboyj> jgriffith:  Ok, you are right.
16:31:39 <jgriffith> so there's efforts underway to make cinder and manila usable as standalone components in K8s
16:32:11 <xyang> jgriffith: isn't there already a cinder driver in K8s?
16:32:24 <jgriffith> but there are also some base layers that we kinda need, like a cinder-volume-plugin and most importantly a version of os-brick that we can port to a golang pkg
16:32:27 <apuimedo> xyang: you mean the cloud provider?
16:32:32 <jgriffith> xyang yes as the cloud-provider
16:32:56 <jgriffith> xyang work is also underway to make a standalone cinder and manila option as well
16:33:01 <jgriffith> ie bare-metal with cinder/manila
16:33:16 <xyang> apuimedo, jgriffith: ok, I didn't realize that's different from this effort
16:23:27 <e0ne> jgriffith: I was asked about a golang version of os-brick by the k8s community too
16:33:32 <jgriffith> anyway... what I'm trying to gauge is is there enough interest in Cinder members to help out with this?
16:33:33 <apuimedo> xyang: the idea is to support baremetal as well, yes
16:33:42 <apuimedo> side by side k8s and openstack is a big use case
16:33:52 <xyang> apuimedo: ok, thanks
16:33:56 <apuimedo> you're welcome
16:34:03 <jgriffith> rather than continue to fragment, it would be kinda cool if we all pitched in a bit
16:34:04 <apuimedo> e0ne: interesting
16:34:09 * dims peeks
16:34:09 <apuimedo> hongbin: is starting gos-brick
16:34:12 <jgriffith> it's really not that much work to be honest
16:34:18 <apuimedo> 'g' standing for golang
16:34:33 <apuimedo> gos means dog in my language, so I approve of the name
16:34:42 <jgriffith> but if everybody continues to do their own thing it just means more fragmentation
16:34:49 <smcginnis> apuimedo: Hah!
16:34:54 <e0ne> jgriffith: +1
16:34:56 * jgriffith gives up
16:35:22 <e0ne> jgriffith: It would be great to start this topic in openstack-dev@ too
16:35:30 <apuimedo> so the goal would be to get a strong core team for the fuxi effort from cinder folks
16:35:32 <jgriffith> e0ne can do
16:35:35 <apuimedo> and to build this together
16:35:42 <smcginnis> jgriffith: The repo is started for gos-brick. Once we start getting some code ported over there, I think that's a good first area folks can help contribute to.
16:36:02 <smcginnis> apuimedo: +1
16:36:03 <jgriffith> so I'm curious... anybody here interested (I mean seriously interested, not just "oh that's neat")?
16:36:05 <e0ne> smcginnis, jgriffith: can you share a link, please?
16:36:21 <e0ne> jgriffith: I'm interested in it
16:36:27 <jgriffith> We already have a golang version of Cinder-volumem plugin here:  https://github.com/j-griffith/cinder-docker-driver
16:36:39 <smcginnis> e0ne: This repo was just created: https://github.com/openstack/gos-brick
16:36:40 <jgriffith> just need to update, openstackify it and create a brick pkg
16:36:43 <e0ne> jgriffith: and even can find time for this activity :)
16:36:44 <smcginnis> So no code yet.
16:36:58 <e0ne> smcginnis: thanks
16:37:12 <jgriffith> honestly, shouldn't we have it all under a single repo and just separate pkgs?
16:37:25 <apuimedo> jgriffith: I think you mentioned as well to have the fuxi flavored options
16:37:31 <e0ne> smcginnis: does it mean that the Foundation officially accepts golang?
16:37:40 <jgriffith> e0ne yes
16:37:47 <e0ne> I remember that mail thread about golang and swift
16:37:51 <dims> e0ne : working towards it. yes. (TC not foundation)
16:37:57 <jgriffith> Foundation was never the detracting opinion
16:37:58 <jgriffith> TC
16:38:08 <jgriffith> dims +1
16:38:09 <smcginnis> jgriffith: The advantage I see in having it separate would be if there would end up being other go based projects needing to do something with it.
16:38:10 <jungleboyj> jgriffith:  Yeah, having different repos seems more complicated.
16:38:10 <e0ne> dims, jgriffith: cool, thanks!
16:38:47 <smcginnis> e0ne: https://governance.openstack.org/tc/resolutions/20170329-golang-use-case.html
16:38:56 <jgriffith> smcginnis fair, TBH though go lets you stuff multiple pkgs in a single repo and pull just what you want, but I don't have a strong enough feeling on this to worry about it :)
16:39:24 <smcginnis> jgriffith: True. I guess it's just a difference between go and python we're not really used to.
16:39:36 <jgriffith> smcginnis ok... so forget that part, more important part here is level of interest from folks like _alastor_ patrickeast xyang e0ne geguileo etc
16:39:51 <jgriffith> although xyang might have some political things around that
16:40:09 <xyang> jgriffith: nothing political:)
16:40:11 <smcginnis> :)
16:40:14 <jgriffith> smcginnis I already have your vote :)
16:40:15 <e0ne> :)
16:40:16 <geguileo> jgriffith: I'd probably be interested if I had any time   :-(
16:40:32 <smcginnis> geguileo: You don't! :P
16:40:44 <jgriffith> somebody create another clone of geguileo :)
16:41:06 <e0ne> :)
16:41:10 <geguileo> yes, please!!
16:41:22 <xyang> jgriffith: I'm interested, just can't tell you how much time I can spend on it yet
16:41:29 <jgriffith> So to be clear, we'd need some work on the fuxi parts, but more importantly IMO is starting to make sure we put some effort into the stand-alone cinder model
16:41:41 * jungleboyj creates a work item for geguileo  cloning
16:41:47 <e0ne> geguileo: I'm waiting for your blog post about pros and cons on how to be cloned
16:41:48 <geguileo> rofl
16:41:55 <smcginnis> jgriffith: Good point. There are a few efforts here that all need to come together.
16:42:04 <jgriffith> smcginnis right
16:42:28 <jgriffith> the big question is if there's enough interest from cinder team to get to the end result to begin with
16:42:30 <apuimedo> jgriffith: can you expand on the 'stand-alone cinder model'?
16:42:56 <jgriffith> if there is, I can start working on documenting/outlining the things I think are needed
16:43:05 <e0ne> jgriffith: I hope, the answer from the most of us will be 'yes'
16:43:12 <smcginnis> jgriffith: So I think there's enough interest. For now, let's just try to do periodic updates in the Cinder meeting to keep the awareness up and folks can jump in when/where they can.
16:43:13 <jgriffith> apuimedo are you familiar with stand-alone cinder?
16:43:21 <jgriffith> or is that the question :)
16:43:25 <e0ne> jgriffith: feel free to ask me if any help is needed
16:43:28 <jungleboyj> I know we have interest.
16:43:36 <apuimedo> jgriffith: isn't it just cinder without keystone?
16:43:38 <smcginnis> jgriffith: +1 for documenting a plan.
16:43:40 <jgriffith> smcginnis +1
16:43:50 <jgriffith> apuimedo it's cinder with/without keystone and without nova
16:43:51 <e0ne> apuimedo: w/o keystone and w/o nova too
16:43:57 <jgriffith> that's the bare-metal piece I mentioned
16:44:02 <jgriffith> or triple-o
16:44:11 <jgriffith> or *insert-your-consumer-here*
16:44:16 <apuimedo> jgriffith: ok. Similar to what we do with Kuryr, just that we keep Keystone practically always
16:44:30 <jgriffith> apuimedo yeah, there are good reasons to keep keystone :)
16:44:49 <apuimedo> comfort being a big one
16:45:01 <e0ne> apuimedo: the main point of standalone cinder was to make it work without nova
16:45:22 <apuimedo> e0ne: then it is the same goal fuxi and kuryr have
16:45:32 <apuimedo> (fuxi for cinder and manila, kuryr for neutron)
16:45:33 <e0ne> apuimedo: cool
16:45:43 <apuimedo> we started with baremetal
16:45:49 <dims> ++ apuimedo
16:45:51 <jgriffith> so I think there's a lot of value in this, not only for all of us with storage devices, but also for Cinder as a community and for customers
16:45:53 <apuimedo> now we support also things like Pods inside VMs
16:46:07 <e0ne> jgriffith: +1
16:46:14 <jgriffith> apuimedo +1
16:46:45 <jungleboyj> jgriffith:  +1
16:47:08 <jgriffith> so just to be clear, what this means is that anything that *works* in Cinder, works in container environments
16:47:26 <jgriffith> and it's backed by the community, rather than a single vendor/company
16:47:39 <smcginnis> Big +1
16:47:46 <e0ne> +2:)
16:47:48 <apuimedo> How we do this in kuryr is that we have a repo for the base stuff and then we have repos for integrations. With Golang you'd probably 'vendor' the base component
16:48:16 <jgriffith> apuimedo vendor as in glide or godeps you mean?
16:48:21 * jgriffith is confused
16:48:31 <jgriffith> apuimedo OH!
16:48:37 <jgriffith> No... cinder's different
16:48:43 <apuimedo> godeps
16:48:57 <jgriffith> we don't have external *things* with cinder
16:49:06 <apuimedo> jgriffith: I meant things like gos-brick
16:49:13 <jgriffith> apuimedo ahh!  Yes, ok
16:49:45 <apuimedo> and if we come up with subset functionality that can be used both by docker and by k8s integrations, it's also a candidate
16:50:09 <jgriffith> alright, well at least we seem to have a few people here interested
16:50:18 <apuimedo> (depends on whether people are interested in using the docker volume api for the k8s integration or prefer a more runtime agnostic approach really)
16:50:22 <smcginnis> jgriffith: A good start.
16:50:26 <jgriffith> maybe start some work and communications and get things rolling?
16:50:40 <smcginnis> jgriffith: Sounds like a good plan to me.
16:50:58 <patrickeast> late to the party, but I'm definitely interested... combines all my latest.. uh.. hobbies
16:51:20 <jgriffith> patrickeast phewww... I needed at least you or _alastor_ before feeling warm and fuzzy
16:51:29 <apuimedo> jgriffith: so, if I took my notes right, the action items are to send a message about this to the ML
16:51:30 <smcginnis> :)
16:51:34 <patrickeast> haha
16:51:40 <apuimedo> and then coordinate with zengchen to make a core team
16:51:48 <apuimedo> (and hongbin for the gos-brick)
16:52:31 <jgriffith> apuimedo I suppose that's needed... or just start hacking and see where we go
16:52:38 <jgriffith> earn core
16:52:54 <apuimedo> jgriffith: we need a starting team with good CInder knowledge
16:53:06 <apuimedo> otherwise I'll have to review too many things I'm not good at
16:53:09 <jgriffith> apuimedo yep, we have myself, smcginnis patrickeast e0ne and maybe xyang
16:53:12 <apuimedo> I wish I could be of more help, but my extent of storage knowledge ends at nfs and lvm
16:53:15 <jgriffith> off to a pretty good start :)
16:53:15 <apuimedo> xD
16:53:28 <apuimedo> great!
16:53:33 <jungleboyj> apuimedo:  That is all you need.  ;-)
16:53:34 <jgriffith> need to figure out if manila folks are interested or not
16:53:50 <xyang> jgriffith: you are not afraid that I'll reject things for political reasons?:)
16:53:52 <jgriffith> jungleboyj +1
16:53:58 <jgriffith> xyang never!
16:53:59 <jgriffith> :)
16:54:03 <xyang> :)
16:54:05 <dims> jgriffith : smcginnis : apuimedo : please post the outline to -dev@ this sounds like great start
16:54:08 <apuimedo> jgriffith: indeed. We should reach out as well. I know the huawei guys want manila since they already have it in
16:54:14 <jgriffith> dims will do
16:54:20 <jgriffith> I'll take that action item
16:54:26 <apuimedo> but getting manila cores involved would be the best
16:54:31 <apuimedo> thank jgriffith
16:54:32 <dims> thanks jgriffith
16:54:35 <jgriffith> #action jgriffith outline plan/steps and send to ML
16:54:42 <jgriffith> that trick never works
16:54:47 <apuimedo> :-)
16:54:47 <tbarron> putting this on the manila weekly meeting agenda would make sense
16:54:50 <smcginnis> jgriffith: Hah!
16:55:19 <smcginnis> We have tbarron here - what more do we need? :)
16:55:29 <smcginnis> OK, 5 minutes. Anything else?
16:55:41 <abishop> I've got one - any thoughts on updating cinderclient so it defaults to API v3?
16:55:45 <apuimedo> tbarron: :-)
16:55:56 <smcginnis> #topic Open discussion
16:56:16 <smcginnis> abishop: I thought we should wait at least another cycle.
16:56:56 <smcginnis> Just to make sure v3 has a little more time to actually make it out there.
16:57:00 <abishop> smcginnis, I thought v2 was reported as deprecated
16:57:04 <Swanson> What horrible thing is going on with iscsi?
16:57:19 <smcginnis> abishop: Yes, as of Pike.
16:57:47 <smcginnis> abishop: But the client is used for existing deployments too, so I think we're kind of in a weird interim spot right now.
16:57:57 <abishop> smcginnis, so if v2 is deprecated, not sure why cinderclient shouldn't move on to v3
16:58:24 <smcginnis> abishop: Well, if you want go ahead and propose a change. We can see what other folks have to say about it.
16:58:32 <smcginnis> I'm not really sure what the best answer is.
16:58:45 <abishop> smcginnis, will do and see how it flies
16:58:54 <smcginnis> abishop: Sounds good. :)
16:59:15 <smcginnis> OK, thanks everyone.
16:59:22 <smcginnis> #endmeeting