16:00:20 <thingee> #startmeeting cinder
16:00:21 <openstack> Meeting started Wed Dec  3 16:00:20 2014 UTC and is due to finish in 60 minutes.  The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 <openstack> The meeting name has been set to 'cinder'
16:00:40 <jungleboyj> o?
16:00:43 <scottda> hi
16:00:44 <jungleboyj> o/
16:00:44 <cknight> Hi
16:00:45 <rushil> \o
16:00:46 <lpabon> hi
16:00:47 <thingee> hello all
16:00:50 <kaisers> hi
16:00:59 <thingee> agenda today
16:01:01 <hemna_> mornin
16:01:03 <thingee> #link https://wiki.openstack.org/wiki/CinderMeetings
16:01:05 <avishay> hello
16:01:35 <e0ne> hi
16:01:48 <thingee> I sent an email to new driver maintainers that have a bp for k-1.
16:01:50 <Swanson> hello
16:01:51 <bswartz1> .o/
16:01:57 <smcginnis> o/
16:02:00 <dustins_> \o
16:02:03 <xyang1> hi
16:02:03 <thingee> If I don't see code by the end of this week, I'm afraid we won't have time to review it.
16:02:16 <thingee> so just fyi, I will start to untarget bps
16:02:30 <smcginnis> thingee: I thought the 15th was the deadline previously stated.
16:02:40 <thingee> smcginnis: reread the email
16:02:54 <Swanson> And there it is in my junk folder....
16:03:00 <thingee> the 15th is for november
16:03:08 <flip214> thingee: even if e.g. snapshots are not fully implemented yet? didn't get an email.
16:03:10 <ameade_> o/
16:03:23 <thingee> ok lets start
16:03:35 <thingee> #topic should openstack-cinder room be archived?
16:03:40 <scottda> so, Cinder is not archived here: http://eavesdrop.openstack.org/irclogs/
16:03:41 <thingee> scottda: you're up
16:03:46 <scottda> but it could be.
16:03:52 <thingee> I have no problems with this
16:03:52 <asselin_> o/
16:03:57 <akerr> +1
16:04:01 <flip214> +1
16:04:03 <kaisers> +1
16:04:04 <e0ne> scottda: +1
16:04:08 <scottda> I was told in the Infra channel to get a resolution at this meeting and they'd do it.
16:04:15 <asselin_> +1
16:04:16 <thingee> scottda: looks golden :)
16:04:21 <thingee> anything else?
16:04:25 <avishay> fine by me
16:04:28 <scottda> nope.
16:04:31 <jgriffith> +1
16:04:31 <hemna_> yah it's ok.
16:04:33 <DuncanT> +1
16:04:38 <hemna_> no more pr0n links in the channel guys.
16:04:40 <scottda> Is there some kind of magic you need to do thingee?
16:04:41 <xyang1> fine with me
16:04:46 <thingee> #topic Continue the discussion on over subscription
16:04:49 <e0ne> hemna_ :(
16:04:49 <thingee> xyang1: you're up
16:04:52 <jungleboyj> I have no concerns.  Have wished I had it in the past. +1
16:04:53 <xyang1> ok
16:04:59 <xyang1> so we talked about it last week
16:05:05 <thingee> #link https://review.openstack.org/#/c/129342/
16:05:19 <bswartz1> xyang and I have had a few conversations about this
16:05:20 <xyang1> I discussed with bswartz and sorted out some differences and updated the spec
16:05:49 <bswartz1> I decided to drop most of my objections last week after we reached agreement on the thick/thin provisioning capabilities
16:06:08 <bswartz1> I still think we need to tweak the capabilities and extra specs (I think there should be 2, not 1)
16:06:19 <bswartz1> but on the whole I'm pretty happy with the form xyang's spec takes now
16:06:51 <thingee> bswartz1: what do you mean by 2, not one? is there an explicit limit in the spec?
16:06:54 <bswartz1> should we discuss the capabilities/extra specs here, or in the spec review on gerrit?
16:06:54 <xyang1> so the capability is provisioning_capabilities; right now it can be thin, thick, or both
16:07:10 <bswartz1> the spec suggests 1 capability with 3 values
16:07:16 <bswartz1> I think I'd prefer 2 boolean capabilities
16:07:17 <thingee> xyang1: why is that?
16:07:40 <DuncanT> What does 'both' mean?
16:07:46 <xyang1> thingee: I can go either way. I put it together quickly last night
16:07:46 <bswartz1> something like "supports_thick_volumes" and "supports_thin_volumes"
16:07:50 <jgriffith> DuncanT: some devices let you "pick"
16:07:56 <DuncanT> Ah, got you
16:07:58 <xyang1> both means it supports both thin and thick
16:07:58 <DuncanT> Thanks
16:08:13 <DuncanT> I was thinking extra specs, not capabilities
16:08:15 <DuncanT> My bad
16:08:24 <xyang1> there's extra specs too
16:08:29 <jgriffith> xyang1: bswartz1 how about "supported_provision_types=x,y,z"
16:08:32 <thingee> bswartz1: sounds like that can be accounted for
16:08:33 <thingee> just the spec needs to be updated
16:08:43 <jgriffith> where x=thick, y=thin, z=both
16:08:43 <bswartz1> jgriffith: it's impossible for the scheduler to filter on that
16:08:51 <thingee> jgriffith: sure
16:08:54 <jgriffith> bswartz1: why?
16:08:54 <bswartz1> for filtering to work, it's better to have a list of booleans
16:09:02 <jgriffith> bswartz1: disagree :)
16:09:10 <xyang1> jgriffith: I think that is the current proposal
16:09:27 <jgriffith> xyang1: yes, but discussion started around booleans, names etc
16:09:38 <avishay> the scheduler can filter on lots of things, you just have to set the extra specs correctly
16:09:41 <xyang1> ok
16:09:43 <jgriffith> xyang1: just trying to inject some sanity and keep things readable and understandable
16:09:56 <bswartz1> well..... maybe I'm lacking a clue here
16:10:16 <xyang1> I'm fine with whatever the team has decided on this one
16:10:30 <jgriffith> bswartz1: really by what I just proposed you can treat it as 3 booleans to eval in a single stat
16:10:45 <jgriffith> bswartz1: still does what you're proposing, just less verbose
16:10:50 <thingee> jgriffith: makes sense. I'm not sure what the problem is here.
16:10:51 <avishay> bswartz1: look at the oslo scheduler code, it has lots of keywords to filter by
16:10:55 <jgriffith> bswartz1: and easier to deal with
16:10:56 <avishay> jgriffith: +1
16:10:59 <bswartz1> I just want to make sure that admins are able to specify that some volume_types MUST be thick, while others must be thin, and maybe some don't care
16:11:12 <thingee> bswartz1: sure
16:11:13 <jgriffith> bswartz1: noted :)
16:11:13 <bswartz1> and backends need to be able to advertise what they support
16:11:38 <jgriffith> bswartz1: provisioning_type=thin...
16:11:45 <jgriffith> must have "thin" in the results
16:12:01 <jgriffith> if you don't care, you omit the spec from the type altogether
16:12:17 <bswartz1> okay jgriffith I will try that out
16:12:19 * jgriffith is worried he's missing something
16:12:21 <thingee> bswartz1: are there any other issues besides that? if not, I'm sure xyang1 is going to treat herself to a milkshake with that spec being merged.
16:12:25 <bswartz1> perhaps I underestimated how smart the filtering is in the scheduler
16:12:38 <jgriffith> mmmmmmm.... milkshakes
16:12:41 <xyang1> thingee :)
16:13:22 * jungleboyj wants a milkshake
16:13:23 <hemna_> we specify thin,thick in our volume types already.  not sure how this affects that if at all
16:13:24 <bswartz1> >_< it's lunchtime here and you're talking about delicious food
16:13:52 <thingee> I'm going to take that as a no then. So I will look for a +1 from bswartz1 on that spec.
16:13:57 <thingee> anything else xyang1?
16:14:03 <xyang1> all set
16:14:12 <bswartz1> I'll +1 the spec after I convince myself that the capabilities/extra_specs are good enough
16:14:21 <bswartz1> I don't expect there will be any issues
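For reference, a minimal sketch of the filtering jgriffith and avishay describe, using the match() helper from the scheduler filter module avishay links a little later (cinder/openstack/common/scheduler/filters/extra_specs_ops.py). The capability name and values below are illustrative placeholders, not what the spec finally settled on:

    # Sketch only: shows how one list-valued capability can still be matched
    # per volume type, so "both" does not need two separate booleans.
    from cinder.openstack.common.scheduler.filters import extra_specs_ops

    # what a backend that supports both might report (hypothetical value)
    backend_reports = 'thin thick'

    # what an admin might put in a volume type's extra specs
    must_be_thin = '<in> thin'
    must_be_thick = '<in> thick'

    print(extra_specs_ops.match(backend_reports, must_be_thin))    # True
    print(extra_specs_ops.match(backend_reports, must_be_thick))   # True
    print(extra_specs_ops.match('thick', must_be_thin))            # False
    # a volume type that doesn't care simply omits the extra spec altogether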
16:14:39 <thingee> #topic Discussion on the current status of this spec "Support Modifying Volume Image Metadata"
16:14:49 <thingee> davechen: hi
16:14:52 <davechen> hi
16:14:57 <thingee> #link https://review.openstack.org/#/c/136253/
16:14:58 <davechen> thingee: hi
16:15:00 <jgriffith> we're doing this again :)
16:15:12 <thingee> DuncanT: present?
16:15:14 * jgriffith quickly reads spec
16:15:28 <davechen> yes, I just updated the spec a few mins ago.
16:15:30 <avishay> bswartz1: https://github.com/openstack/cinder/blob/master/cinder/openstack/common/scheduler/filters/extra_specs_ops.py
16:15:33 <DuncanT> Yup
16:16:07 <DuncanT> I think the delete confusion is now sorted?
16:16:11 <bswartz1> avishay: ty
16:16:32 <jgriffith> davechen: can you describe a use case for me real quick?
16:16:41 <davechen> sure
16:17:03 <davechen> This blueprint is actually a partial task of the Graffiti project.
16:17:16 <jgriffith> davechen: and you lost me :)
16:17:32 <davechen> no
16:17:34 <thingee> I'm also not entirely sure I get graffiti myself
16:17:44 <thingee> that's another topic though
16:17:51 <jgriffith> thingee: +1
16:17:59 <hemna_> spray paint, wall, big letters.
16:18:04 <jgriffith> let's focus on a cinder use case as related to this spec
16:18:10 <davechen> the intention of this BP is to support modifying image metadata in cinder
16:18:11 * jungleboyj is resisting the urge to make a Graffiti joke.
16:18:17 <jungleboyj> Thanks hemna_
16:18:22 <hemna_> jungleboyj, :P
16:18:29 <jgriffith> davechen: yes, but the question is "why"
16:18:44 <jgriffith> or at least "my question"
16:18:50 <thingee> jgriffith: mine too
16:19:10 <davechen> there is one question in the BP.
16:19:14 <avishay> davechen: i think the question is, you created a volume from an image, and the volume's image metadata reflects the source.  why would you need to change it?
16:19:31 <jgriffith> davechen: your problem statement describes what you want to do, but I don't quite see the *problem* or the *why*
16:19:49 <thingee> davechen: I've glanced through it myself. can you point me to which line?
16:19:50 <jgriffith> avishay: correct... and more importantly should you be allowed to change it
16:19:54 <davechen> we need change it as the part of task of graffiti.
16:19:57 <DuncanT> avishay: Because a volume is mutable. I install a new kernel, maybe it needs different metadata (e.g. the attach method for BfV)
16:20:21 <hemna_> davechen, what is the volume metadata missing at that point that prevents graffiti from working ?
16:20:29 <jgriffith> DuncanT: so your making the decision to just cut glance out of the picture then?
16:20:36 <jgriffith> DuncanT: not saying that's bad :)
16:20:37 <thingee> davechen: I understand graffiti needs to change it, but *why*.
16:20:40 <davechen> LINE81
16:20:54 <avishay> DuncanT: I would expect the user to do volume->image with the updated metadata
16:21:00 <DuncanT> jgriffith: Once glance has created the bootable volume, it *is* out of the picture for that volume
16:21:00 <thingee> davechen: rest api impact?
16:21:04 <jgriffith> avishay: +1
16:21:08 <thingee> davechen: I'm not sure I'm following
16:21:27 <davechen> yes, i am not sure whether we need a new API to handle image metadata
16:21:37 <davechen> Delete API
16:21:39 <DuncanT> avishay: Upload it to glance and suck it back down just to change one piece of metadata?
16:22:01 <thingee> davechen: I'm not sure if use cases are thought out given how this discussion is going. I'm going to defer this topic until there are stated use cases listed on this spec.
16:22:13 <thingee> and api is not a use case.
16:22:27 <jgriffith> DuncanT: I don't see the necessity
16:22:46 <jgriffith> DuncanT: dont' see why it needs to be in Glance necessarily either
16:20:51 <davechen> the nova scheduler needs the metadata for vm scheduling
16:22:55 <DuncanT> jgriffith: Some of the glance metadata affects how nova sets up the volume attachment
16:22:55 <jgriffith> DuncanT: and the only thing I see as a use case here is Graffiti
16:22:58 <ameade_> why the heck does nova care the volume came from an image? "Cinder volume_image_metadata is used by nova for things like scheduling and for setting device driver options"
16:23:01 <avishay> DuncanT: i assume if you have your images in glance, and you make a change to an image that is noteworthy, you would store it in glance again.  ideally uploading and sucking it back are just CoW operations :)
16:23:12 <jgriffith> DuncanT: sure.... but not that much really
16:23:19 <jgriffith> DuncanT: maybe you have a use case to share?
16:23:28 <ameade_> DuncanT: jgriffith: doesn't that seem like the real issue here?
16:23:52 <jgriffith> ameade_: let's not side track :)
16:23:58 <DuncanT> jgriffith: The only use case I have is a long-running bootable volume I upgrade the kernel on, such that a different attach method is preferred
16:24:12 <DuncanT> jgriffith: It's pretty niche but is a use case
16:24:21 <davechen> imo, the spec is basically ready for approval :)
16:24:25 <jgriffith> DuncanT: meh... I'm not getting it
16:24:44 <jgriffith> To me it's a solution without a real problem
16:24:44 <DuncanT> jgriffith: I'll get the exact key names and such and put a comment on the spec
16:25:06 <avishay> i don't think it's that important, but if it is for some people, i don't think it's that big of a deal to implement and include
16:25:12 <DuncanT> jgriffith: You can do it by uploading to glance, editing the metadata there and re-downloading, but that is a faff
16:25:18 <davechen> I copy the problem desc here.
16:25:20 <jgriffith> DuncanT: and IIRC KVM at least is not very picky about kernel version
16:25:28 <hemna_> sorrison, on the reverse, what is the downside to allowing updating the metadata ?
16:25:30 <davechen> When creating a bootable volume from an image, the image metadata (properties) is copied into a volume property named volume_image_metadata.
16:25:31 <hemna_> gah
16:25:31 <jgriffith> but I'm wasting everybody's time I fear
16:25:31 <hemna_> so
16:25:43 <davechen> Cinder volume_image_metadata is used by nova for things like scheduling and
16:25:51 <davechen> for setting device driver options. This information may need to change after
16:25:55 <jgriffith> hemna_: breaks things like cached copies of image
16:26:09 <davechen> the volume has been created from an image, besides, the additional properties
16:26:23 <DuncanT> jgriffith: Why? I don't see how
16:26:35 <hemna_> jgriffith, but the image is untouched.  I don't see how that breaks cached images ?
16:26:35 <DuncanT> jgriffith: The image in glance is still mutable
16:26:39 <jgriffith> DuncanT: if I have a cached copy of image FOO on my backend device
16:26:45 <jgriffith> DuncanT: you update the metadata
16:26:47 <ameade_> davechen: wouldn't it be better if that information lived in volume metadata?
16:26:54 <ameade_> DuncanT: glance images are immutable
16:26:58 <ameade_> it's a core tenet
16:27:04 <jgriffith> Next person that comes along and wants a volume with FOO image... I say "oh, got it... here ya go"
16:27:19 <DuncanT> jgriffith: The cached copy doesn't get touched at all
16:27:20 <davechen> ameade_: nova just uses image metadata
16:27:30 <hemna_> DuncanT, yah I don't understand that
16:27:36 <avishay> thingee gave up
16:27:54 <DuncanT> jgriffith: Only the db record associated with one specific live, mutable copy of that image gets changed
16:28:14 <jgriffith> DuncanT: so then I really don't get the point :(
16:28:30 <jgriffith> DuncanT: anyway... maybe you and davechen can write up a real problem statement
16:28:35 <jgriffith> and I'll see the light
16:28:36 <jgriffith> :)
16:28:54 <DuncanT> jgriffith: The metadata is associated with what is in the image. Once you change the contents of the volume, that data, for that specific volume, can be out of date
16:28:55 <jgriffith> I'm just not getting it.  As far as downside, maybe there's not one
16:29:07 <DuncanT> I'll work on a concrete example
16:29:09 <jgriffith> DuncanT: yeah, that's my point
16:29:20 <jgriffith> the cached copy never has the kernel update for example
16:29:26 <davechen> i want to know if i can get any review comments and fix them asap before spec freeze?
16:29:27 <jgriffith> but the image still has the same ID
16:29:34 <thingee> DuncanT: seems problematic. volumes are going to change. metadata will always be out of date
16:29:35 <jgriffith> right?
16:29:38 <ameade_> i'll post some thoughts on the spec
16:29:41 <DuncanT> The cached copy isn't having its metadata changed, only the live volume is
16:29:43 <hemna_> davechen, maybe give some concrete examples of what nova needs to change in the metadata that prevents things from working as needed.
16:29:47 <jgriffith> It's anti-cloudy if nothing else
16:30:01 <jgriffith> DuncanT: yes, but you're missing what I'm saying
16:30:03 <avishay> i'm not sure what "cached copy" means
16:30:12 <jgriffith> User loads an image on the vol.... updates the kernel
16:30:16 <jgriffith> updates the metadata
16:30:36 <DuncanT> jgriffith: That metadata will get used next boot from that vol
16:30:43 <jgriffith> Does that update go anywhere else?  Does the metadata change go back up to glance?
16:30:46 <davechen> hemna_: nova needn't make any changes
16:30:57 <avishay> jgriffith: no, just on the volume, not in glance
16:30:58 * hemna_ is confused then.
16:31:00 <jgriffith> DuncanT: so that metadata change is *isolated* to that particular volume?
16:31:06 <jgriffith> DuncanT: ahhh... ok!!!
16:31:08 <DuncanT> jgriffith: It will only go back to glance if the user uploads that volume as a new image
16:31:10 <thingee> I don't get the point of it. if you change the kernel, make a new image, set the metadata...but don't worry about chasing the contents of the volume and keeping the metadata accurate.
16:31:11 <jgriffith> Now we're talking!!
16:31:14 <DuncanT> jgriffith: Yes
16:31:20 <DuncanT> jgriffith: Totally isolated
16:31:31 <jgriffith> DuncanT: in that case I don't care :)
16:31:35 <DuncanT> jgriffith: Sorry, I thought that was clear
16:31:38 <avishay> jgriffith: +1
16:31:41 <jgriffith> DuncanT: thank you very much for explaining the details
16:31:57 <jgriffith> DuncanT: no, based on 5 of us asking about it I don't think it was clear at all
16:32:05 <avishay> thingee: yes, i guess the user needs to keep the volume contents and the metadata in sync for some cases, which seems odd
16:32:16 <DuncanT> jgriffith: NP, glad we could sort out which bits were unclear :-)
16:32:28 <jgriffith> davechen: PLEASE put a note in the spec explicitly pointing out that this ONLY affects the individual volume
16:32:42 <thingee> avishay: It will get outdated. just doesn't seem worth it
16:32:44 <davechen> jgriffith: sure.
16:32:55 <jgriffith> avishay: yeah, I still don't see the point, but now I don't care because it won't break things I don't think
16:33:01 <DuncanT> thingee: The glance metadata stored by cinder gets actively used to make decisions at boot time
16:33:17 <DuncanT> jgriffith: +1 - You shouldn't care for the most part
16:33:17 <avishay> thingee: outdated is OK as long as it doesn't break things, but if nova is relying on it, that's a bit sketchy
16:33:19 <davechen> jgriffith, DuncanT: any other gaps in the spec?
16:33:21 <thingee> DuncanT: and it will do it wrong. humans are going to forget to update metadata.
16:33:32 <thingee> DuncanT: images are more a point in time, whereas volumes are ever changing
16:33:41 <hemna_> davechen, I still think having a use case in the spec is valuable.
16:33:44 <jgriffith> davechen: If you could point out the "decisions" that Duncan is alluding to that would be great
16:33:48 <DuncanT> thingee: This is not for an image though, it is for a volume
16:34:03 <jgriffith> davechen: I'm a fan of requiring a real use case proposed and described in specs
16:34:04 <DuncanT> jgriffith: I'll find an example, no problem
16:34:20 <jgriffith> thanks, sorry to take up so much time on that
16:34:40 <jgriffith> thingee: humans forget.... whaaaat you say?
16:34:42 <jgriffith> :)
16:34:52 <jungleboyj> :-)
16:34:53 <davechen> jgriffith: I defined some use cases in the spec.
16:35:01 <thingee> DuncanT: I think you would have better luck with a new volume from an image if you want to rely on that metadata.
16:35:05 <hemna_> davechen, +1
16:35:31 <hemna_> thingee, this is metadata on a volume, not an image
16:35:43 <DuncanT> thingee: So you have to create an image, edit the metadata in glance then suck the image back down just to get new metadata associated with the volume... that sucks
16:35:45 <thingee> hemna_: create a volume from an image is my point
16:35:55 <thingee> hemna_: that's where you would have more luck with that metadata being accurate
16:36:07 <thingee> the contents of the volume hasn't changed as much, because it's a new volume from an image.
16:36:30 <jgriffith> thingee: I think you're on the same concept I was stuck on
16:36:32 <thingee> DuncanT: better than having false positives with a 6 month old volume saying it has kernel X
16:36:38 <hemna_> yah, this isn't metadata on the image
16:36:43 <jgriffith> thingee: this is for the volume only
16:36:47 <hemna_> this is metadata about the image on a volume.
16:36:49 <DuncanT> jgriffith: +1
16:36:52 <jgriffith> thingee: NO change to image at all
16:37:07 <jgriffith> thingee: just lets you do things like update the boot info for that particular volume that you might modify
16:37:12 <avishay> i see what thingee is saying, and i don't think he's confused :)
16:37:17 <jgriffith> thingee: so you can start with a base and build off of it
16:37:26 <ameade_> so why do we call it volume IMAGE metadata
16:37:28 <jgriffith> avishay: thingee Oh, sorry
16:37:36 <jgriffith> in that case I'll shut my mouth and read again
16:37:36 <ameade_> like i'm trying to say, just shove this info in volume metadata
16:37:42 <hemna_> ameade_, I think it's data about the image where the volume came from?
16:37:43 <ameade_> i posted on the spec
16:37:58 <winston-d> ameade_: you made a good point
16:38:00 <ameade_> hemna_: yeah, which doesnt make sense to change
16:38:09 <DuncanT> ameade_: Historical reasons
16:38:12 <thingee> I'll review the spec again once things are updated, but it sounds like this is error prone. I understand this is metadata stored with the volume. *that's* the problem
16:38:16 <avishay> if you store "golden images" in glance, and you made a significant change in one (by way of a volume), i would expect the user to put it in glance for themselves/others to use
16:38:26 <winston-d> ameade_: to me, the only missing point is the ability for nova to consume volume metadata.
16:38:34 <ameade_> winston-d: +1
16:38:46 <DuncanT> winston-d: nova does consume it
16:38:51 <winston-d> jgriffith: in the case like TRIM support
16:38:55 <thingee> davechen: anything else?
16:39:08 <jgriffith> winston-d: hmmm...
16:39:16 <DuncanT> TRIM is different IMO
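To make DuncanT's use case concrete before the next topic, here is a hedged illustration (not taken from the spec) of the image metadata a bootable volume carries and the per-volume edit a kernel upgrade might call for; the scenario and values are made up, though hw_disk_bus is a real glance/nova property:

    # Hypothetical example: metadata copied onto a volume at create-from-image
    # time.  Only this volume's copy would be edited; the glance image and any
    # cached copies stay untouched, as DuncanT clarifies above.
    volume_image_metadata = {
        'image_id': '<source image uuid>',
        'image_name': 'ubuntu-14.04',
        'min_disk': '10',
        'hw_disk_bus': 'ide',        # old kernel had no virtio driver
    }

    # after upgrading the kernel inside the volume, the tenant would want the
    # next boot-from-volume to use the better attach method:
    volume_image_metadata['hw_disk_bus'] = 'virtio'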
16:39:20 <thingee> #topic Posix backup driver replaces NFS backup driver
16:39:28 <davechen> thingee: yes, i really can't follow your point.
16:39:39 <jgriffith> winston-d: but why would you do that instead of just letting file systems be file systems and do their thing?
16:39:40 <thingee> nfs backup driver:
16:39:43 <thingee> #link https://review.openstack.org/#/c/138234/1
16:39:56 <thingee> posix backup driver:
16:39:58 <thingee> #link https://review.openstack.org/#/c/82996/
16:40:06 <thingee> davechen: likewise
16:40:23 <bswartz1> is tbarron here?
16:40:35 <thingee> kevin fox has proposed that the nfs backup driver be replaced with the posix backup driver
16:40:35 <thingee> #link https://review.openstack.org/#/c/130858/
16:40:42 <winston-d> jgriffith: to let kernel/file system do their thing, you have to present a storage controller that appears to the kernel to support TRIM.
16:41:05 <thingee> is kevin fox here?
16:41:17 <rushil> bswartz: Nope
16:41:22 <bswartz1> I thought tbarron and kevin fox were talking between themselves about these 2 options
16:41:33 <avishay> seems neither are here
16:41:37 <winston-d> jgriffith: and nova relies on image metadata (only) to decide, which is sad for BFV.
16:42:06 <DuncanT> winston-d: Nova uses the cinder copy though, it doesn't pull it from glance for BfV
16:42:10 <thingee> I agree with DuncanT that the posix backup driver should be using some code that's already in swift. it's just duplicating code.
16:42:12 <bswartz1> we believe the NFS backup driver is better, and possibly could be generalized to support other (posix) use cases
16:42:32 <jgriffith> winston-d: answered in #cinder so not to interrupt here :)
16:42:35 <DuncanT> winston-d: It used to, but we fixed it because you can delete an image from glance and that used to break the BfV
16:42:36 <thingee> bswartz1: it's better if you support dedup, right?
16:42:56 <thingee> bswartz1: I think that's what kevin was getting at
16:43:21 <bswartz1> I thought tbarron had worked all that out though
16:43:24 <bswartz1> had*
16:43:38 <bswartz1> since neither are here to argue their case, I suggest we revisit it next week
16:43:53 <thingee> sounds fine to me. I see this pushed to k-2 now anyways
16:43:58 <DuncanT> I'd really like to see the de-dupe done before merging this driver
16:43:59 <ameade_> bswartz1: +1, and the conversation on the review isn't complete
16:44:07 <avishay> is the posix driver supposed to be the choice for all filesystem drivers, like nfs, netapp, gluster, gpfs, etc?
16:44:18 <thingee> avishay: yes
16:44:24 <bswartz1> avishay: that was the theory -- but the implementation has limitations we don't like
16:44:33 <ameade_> not all filesystems are created equal
16:44:37 <ameade_> but lets shelve this
16:44:46 <thingee> bswartz1: I'm interested, because it doesn't make sense to me right now. :)
16:44:47 <akerr> won't you end up with one driver with 1000 options in that case?
16:44:58 <jgriffith> bswartz1: you keep saying "we didn't like" "didn't work for us"
16:45:21 <bswartz1> jgriffith: I'm summarizing the discussion -- the details should be in the spec review
16:45:49 <thingee> bswartz1: I read the spec review, it didn't make sense
16:45:59 <bswartz1> oy
16:46:13 <avishay> akerr: i can't imagine 1000 options for "take this file, put it there"
16:46:33 <bswartz1> one thing in particular that causes problems is that kevin's driver gzips everything
16:46:34 <ameade_> will the posix driver mount an nfs share for me?
16:46:39 <bswartz1> that makes it very hard to do incrementals
16:46:43 <akerr> each filesystem type can have its own optimizations
16:46:52 <avishay> conf.gzip_everything = False
16:46:56 <thingee> Ok, I agree we can defer this, but this isn't very productive to keep discussions elsewhere if we're all going to make a decision on these drivers.
16:46:57 <thingee> avishay: +1
16:47:19 <bswartz1> avishay: those are the kinds of changes we would want to see if kevin's driver is the one we use
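Picking up avishay's half-joking conf.gzip_everything suggestion, a minimal sketch of how such a knob could look in the usual oslo.config style; the option name and helper below are hypothetical, not part of either proposed driver:

    # Sketch only: making compression optional so backup chunks stay
    # byte-stable, which is what incrementals and backend dedup want.
    import zlib

    from oslo.config import cfg

    posix_backup_opts = [
        cfg.BoolOpt('backup_compress',
                    default=True,
                    help='Compress backup chunks before writing them out.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(posix_backup_opts)


    def write_chunk(data, fileobj):
        """Write one backup chunk, optionally zlib-compressed."""
        if CONF.backup_compress:
            data = zlib.compress(data)
        fileobj.write(data)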
16:47:21 <thingee> #topic RemoteFS snapshots
16:47:22 <thingee> kaisers: hello
16:47:30 <kaisers> thingee: thanks
16:47:40 <thingee> #link https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps
16:47:43 <kaisers> This is about refactoring the remotefs driver
16:47:54 <kaisers> has been started some time ago (see blueprint)
16:48:22 <kaisers> and due to our work on the quobyte driver this has become timely now, as we have some code duplication with other remotefs-derived drivers
16:48:24 <avishay> yea i've been asking for this since havana :)
16:48:47 <kaisers> i just wanted to announce that we want to start on this in the next weeks
16:48:54 <kaisers> and see who is interested in this
16:48:55 <avishay> kaisers: quobyte doesn't support snapshots i assume?
16:49:10 <kaisers> trying to bash tempest into accepting the code right now ;-)
16:49:19 <eharney> start on what exactly?
16:49:44 <kaisers> moving code from quobyte.py, glusterfs.py, etc. to remotefs.py
16:49:55 <kaisers> e.g. online snapshots, lock wrappers, etc.
16:50:11 <kaisers> i would start by creating a blueprint / spec on what to change
16:50:19 <avishay> locks are bad
16:50:38 <kaisers> we already have a list of proposed changes from the duplications we saw in our code
16:50:54 <thingee> kaisers: is that list on an etherpad or something?
16:50:59 <kaisers> i just didn't want to start this without having everybody know
16:51:10 <jgriffith> avishay: Now you're talking!!!!!
16:51:14 <eharney> i'll be working on snapshots for the NFS driver (https://review.openstack.org/#/c/133074/) and presumably looking at similar issues... maybe we should come up with a more specific list
16:51:17 <kaisers> currently not, i intended to add this to the spec
16:51:31 <kaisers> eharney: Yep
16:51:49 <thingee> kaisers: a spec, even better :)
16:51:51 <thingee> kaisers: sounds good to me, and thanks for starting this
16:52:00 <kaisers> as soon as i complete the online snapshots with the quobyte driver (one test fails) i wanted to start this
16:52:33 <thingee> kaisers: makes sense. can people just ping you on #openstack-cinder if they want to help out?
16:52:41 <kaisers> absolutely, yes
16:52:45 <thingee> great
16:52:50 <eharney> yes, i'd like to make sure we don't duplicate effort
16:52:51 <thingee> kaisers: anything else?
16:52:54 <eharney> sounds good though
16:52:58 <kaisers> nope, that's it
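A rough sketch of the kind of de-duplication kaisers is proposing, with the shared snapshot plumbing pulled up into remotefs.py and only the backend-specific hook left in each driver; class and method names here are illustrative, not the actual cinder classes:

    # Sketch only: template-method style refactor of duplicated snapshot code.
    import threading


    class RemoteFSSnapDriverBase(object):
        """Common snapshot handling shared by remotefs-derived drivers."""

        def __init__(self):
            self._lock = threading.Lock()

        def create_snapshot(self, snapshot):
            # shared plumbing: serialize, create the qcow2 overlay, update
            # the snapshot info file; the backend-specific part is a hook
            with self._lock:
                self._do_create_snapshot(snapshot)

        def _do_create_snapshot(self, snapshot):
            raise NotImplementedError()


    class QuobyteDriver(RemoteFSSnapDriverBase):
        def _do_create_snapshot(self, snapshot):
            pass  # only the Quobyte-specific step stays in quobyte.py


    class GlusterfsDriver(RemoteFSSnapDriverBase):
        def _do_create_snapshot(self, snapshot):
            pass  # likewise for glusterfs.py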
16:53:18 <thingee> #topic Open discussion
16:53:32 <thingee> 7 mins left
16:54:22 <DuncanT> There's the let-nova-put-volumes-in-error spec that I still don't understand
16:54:36 <thingee> DuncanT: link?
16:54:41 <xyang1> DuncanT: is that ours?
16:54:43 <DuncanT> If anybody can explain why you'd ever want nova to do that, I'd appreciate it
16:54:48 <DuncanT> xyang1: I think so, yeah
16:55:01 <xyang1> currently nova puts the volume status back to available
16:55:17 <xyang1> but the backend already attached the volume
16:55:21 <xyang1> they are out of sync
16:55:25 <xyang1> so that is not right either
16:55:49 <xyang1> if volume is 'error' instead of 'available', it at least tells admin something is wrong
16:55:52 <xyang1> that is the purpose
16:56:04 <avishay> xyang1: doesn't nova call terminate_connection and allow the driver to reset?
16:56:08 <DuncanT> xyang1: error means the tenant can now do nothing with their volume
16:56:16 <xyang1> avishay: not in this case
16:56:28 <DuncanT> xyang1: It should just be terminate_connection
16:56:31 <xyang1> when initialize_connection times out, it didn't
16:56:35 <xyang1> but we will be adding that
16:56:47 <xyang1> even so, it could still fail
16:56:49 <DuncanT> xyang1: If there's a missing call to terminate then fixing that is sensible
16:57:04 <xyang1> if it still fails, error is more consistent with the status in the backend
16:57:21 <DuncanT> But nova shouldn't get to make that decision
16:57:26 <xyang1> DuncanT: yes, we are adding a terminate_connection call
16:57:51 <xyang1> but if that also fails, the status will still be 'available' in cinder while in the backend it is attached
16:58:26 <DuncanT> So when cinder transitions from attaching back to available, it should tell the driver to terminate
16:58:27 <avishay> xyang1: why doesn't the backend clean up on terminate_connection?
16:58:38 <xyang1> DuncanT: currently it will be changed from 'attaching' to 'available'. that is not right either
16:59:03 <jgriffith> avishay: that would seem better... clean up; if you can't, fail and put the volume in error
16:59:04 <xyang1> avishay: if terminate_connection still times out, then we are still out of sync
16:59:09 <DuncanT> xyang1: It should be made available on timeout. If we need to call the driver terminate at that point then we can
16:59:20 <jungleboyj> jgriffith: +1
16:59:29 <jgriffith> xyang1: ah... and round and round it goes
16:59:49 <jgriffith> xyang1: get a faster API :)
16:59:58 <jgriffith> quit timing out on everything
17:00:02 <avishay> you have the cinder DB mirroring your backend's metadata, obviously things will go out of sync
17:00:02 <xyang1> basically if terminate_connection still can't fix it, set it to error
17:00:08 <xyang1> jgriffith: :)
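For what it's worth, a hedged sketch of the rollback ordering the discussion converges on (terminate first, only fall back to 'error' if clean-up also fails); the function and parameter names are placeholders, not the actual nova/cinder code path:

    # Sketch only: rollback after initialize_connection timed out on the
    # nova side.
    def rollback_failed_attach(volume_api, driver, volume, connector):
        try:
            # let the backend undo whatever export it already set up
            driver.terminate_connection(volume, connector)
        except Exception:
            # the backend may still consider the volume attached; reporting
            # 'available' would be a lie, so surface the problem instead
            volume_api.update(volume, {'status': 'error'})
            return
        # clean-up succeeded, so it is safe to hand the volume back
        volume_api.update(volume, {'status': 'available'})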
17:00:11 <thingee> thanks everyone
17:00:14 <thingee> #endmeeting