21:00:08 <ildikov> #startmeeting cinder-nova-api-changes
21:00:11 <openstack> Meeting started Wed Apr 20 21:00:08 2016 UTC and is due to finish in 60 minutes.  The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:15 <openstack> The meeting name has been set to 'cinder_nova_api_changes'
21:00:26 <smcginnis> o/
21:00:48 <ildikov> scottda ildikov DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo
21:00:59 <mriedem> o/
21:01:14 * alaski has to run another meeting, sorry
21:01:20 <scottda> hi
21:01:49 <ildikov> alaski: no worries, it's still just the warm-up for next week
21:02:02 <DuncanT> Hi
21:03:11 <ildikov> hi All
21:03:20 <cFouts> hi
21:04:01 <ildikov> basically the plan for today is to prep for next week regarding scenarios we need to discuss mostly in connection with multiattach
21:04:22 <mriedem> i reviewed the update attachment api spec from scottda last night
21:04:26 <mriedem> haven't checked back on it today,
21:04:40 <mriedem> but i'd like the nova portion of that fleshed out a bit before a new api is added to cinder
21:05:29 <scottda> mriedem: Yes, I'm not ignoring your review...
21:05:35 <hemna_> mriedem, +1
21:05:48 <ildikov> I agree, there's also the patch that jgriffith is working on to have a PoC for alternative #2, so a few bits to put together
21:05:48 <scottda> I'm just trying to gather answers.
21:07:13 <ildikov> mriedem: a quick question as you mentioned Nova
21:07:32 <ildikov> mriedem: is there a chance to have multiattach on the list of high priority items?
21:07:47 <mriedem> oops, accidentally closed the tab
21:08:05 <ildikov> mriedem: good timing, so question again
21:08:07 <ildikov> mriedem: is there a chance to have multiattach on the list of high priority items?
21:08:31 <mriedem> i won't commit to that right now, we discuss nova priorities in the last session on thursday of the design summit
21:09:01 <smcginnis> mriedem: At least that will be after the Nova-Cinder session.
21:09:05 <mriedem> right
21:09:07 <ildikov> I know, but I guess I need to start the campaign way before that session
21:09:30 <mriedem> it's going to depend on how clear the plan is after the nova/cinder session i think
21:09:40 <mriedem> if it's all up in the air still, it's going to be hard to say it's a priority
21:09:53 <mriedem> and things like swap volume with multiattach volumes scare me
21:10:02 <mriedem> all the weird edge cases with multiattach
21:10:13 <mriedem> plus we don't test any of that today
21:10:28 <mriedem> honestly,
21:10:30 <ildikov> you mean swap volume, etc in general?
21:10:45 <mriedem> before multiattach is a priority, i think volume migration needs to be tested in the multinode job in the gate
21:10:53 <mriedem> b/c that would also test swap_volume since it's part of the flow
21:11:10 <mriedem> and then we could see if multiattach blows any of that up
21:11:21 <smcginnis> mriedem: +1
21:11:25 <ildikov> ok, that sounds reasonable, and it would be good to test that anyway
21:11:38 <mriedem> i think i asked this earlier in the week, but does cinder allow migration of multiattach volumes?
21:11:52 <smcginnis> mriedem: I don't believe we block it.
21:12:02 <smcginnis> mriedem: But I also don't believe we test that at the moment.
21:12:10 <mriedem> volume migration? probably not
21:12:22 <mriedem> i think the only thing that uses both nodes in the multinode job is nova's live migration
21:12:23 <smcginnis> hemna_, scottda: Are you aware of any testing added for that?
21:12:31 <hemna_> wait
21:12:34 <scottda> No, I'm not aware of any testing.
21:12:37 <hemna_> do you mean nova live-migration ?
21:12:40 <hemna_> or cinder migrate ?
21:12:43 <mriedem> cinder migrate
21:12:51 <mriedem> which triggers swap volume in nova
21:13:01 <mriedem> neither are tested in the gate
21:13:07 <hemna_> :(
21:13:08 <mriedem> and those + multiattach scare me
21:13:17 <hemna_> well that seems to be a problem in general then
21:13:18 <mriedem> especially with some of the issues i found with swap volume this week
21:13:20 <mriedem> yeah
21:13:31 <ildikov> hemna_: +1
21:13:33 <mriedem> so, there are some testing gaps to get us to multiattach
21:13:46 <smcginnis> We're kind of putting on the roof before the foundation is done, in some respects.
21:13:48 <mriedem> i need to add that to the etherpad
21:13:58 <hemna_> I'm not sure why that would prevent nova from setting multiattach to high priority
21:14:01 <smcginnis> Not to be negative.
21:14:07 <ildikov> let's try to figure out dependencies as well
21:14:55 <ildikov> we might not want to put on the roof first
21:15:13 <scottda> It keeps us dry while we're digging a deeper hole.
21:15:18 <smcginnis> :)
21:15:29 <smcginnis> We certainly know how to dig holes.
21:15:42 <mriedem> hemna_: multiattach is a whole new set of problems if it goes wrong, and i'd like to be tight on testing of existing functionality before introducing new complexity
21:15:58 <ildikov> smcginnis: lol :)
21:16:20 <mriedem> especially given how hard it's been to get a clear picture on the design
21:16:33 <hemna_> I'm not saying it should be forced to land in nova
21:16:45 <hemna_> just saying it should be a candidate for high priority
21:16:47 <mriedem> i'm not knocking the team, or not trying to, it just worries me
21:16:51 <mriedem> it's a candidate yes
21:16:55 <hemna_> ok cool
21:17:05 <hemna_> we've had a lot of people working on it for literally years now
21:17:07 <hemna_> :(
21:17:25 <mriedem> yes, but as far as i recall, the major nova effort from the nova team wasn't until mitaka
21:17:49 <mriedem> anyway, it is what it is
21:17:59 <ildikov> we would need high prio to get review attention, I don't think we will flood Gerrit with too much new code by N-3
21:18:02 <mriedem> i'd like to see it happen, just thinking through gaps we have today before we add this layer in
21:18:08 <hemna_> well I wouldn't agree with that, but whatevs.
21:18:14 <scottda> One thing in the critical path is jgriffith code...
21:18:17 <hemna_> it's always the same thing in Nova.
21:18:20 <hemna_> lots of patches get in
21:18:24 <hemna_> nobody reviews or tests
21:18:28 <hemna_> deadlines come up and go
21:18:32 <hemna_> and oops...you are too late.
21:18:37 <hemna_> thank you next release.
21:18:46 <hemna_> 2+ years later.  here we are.
21:18:57 <hemna_> but at this point, it's probably not a bad thing :)
21:19:14 <ildikov> scottda: yeah, it's on my list to look at from Nova perspective as well
21:19:15 <mriedem> yeah, i'm familiar with that problem
21:19:20 <hemna_> so
21:19:38 <hemna_> it seems a prerequisite is getting some gate tests in the mix
21:19:42 <ildikov> scottda: maybe next Friday we could have a small code sprint at the contributors' meetup as opposed to just talking :)
21:19:47 <scottda> ildikov: We just merged microversion support in the cinderclient, so you (or someone) could start to POC using that...
21:20:05 <hemna_> I thought we did have some nova live migration tests at some point ?
21:20:08 <ildikov> scottda: yeap, exactly
21:20:13 <scottda> ildikov: Sounds good.
21:21:24 <ildikov> scottda: I will look at that patch, I'll have some time on the plane tomorrow to do so
21:22:07 <scottda> Anyone know how difficult it will be to test cinder migration using current infra tools?
21:22:09 <mriedem> hemna_: we do in the multinode job in the gate
21:22:24 <mriedem> hemna_: it's non-voting b/c of some race bugs (we think due to old ass libvirt/qemu in those jobs)
21:22:26 <scottda> Have we thought about it?
21:22:27 <scottda> i.e. using multinode in the gate.
21:22:41 <mriedem> scottda: i'd say look at the live migration tests in tempest
21:22:49 <mriedem> it's not too complicated, there is logic for finding the two computes
21:22:56 <mriedem> spawning the instance on one and then live migrating to the other
21:23:04 <mriedem> i'd expect volume migration to be similar
21:23:06 <scottda> mriedem: OK, Cinder has testing on the agenda for the Summit. We can look at this.
21:23:31 <mriedem> https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_live_migration.py
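A rough sketch of what such a volume-migration scenario could look like, modeled on the live-migration test pattern mriedem describes (boot on one node, migrate, verify). The migrate call and waiter helpers here are assumptions, not existing tempest API:

```python
from tempest.api.compute import base


class VolumeMigrationTest(base.BaseV2ComputeAdminTest):

    def test_migrate_attached_volume(self):
        # Boot an instance in the multinode job and attach a volume to it.
        server = self.create_test_server(wait_until='ACTIVE')
        volume = self.create_volume()
        self.attach_volume(server, volume)

        # Trigger cinder volume migration to the other backend host; under
        # the covers this drives nova's swap_volume flow.
        dest_host = self._find_other_volume_host(volume)        # assumed helper
        self._migrate_volume(volume['id'], dest_host)           # assumed helper

        # Wait for the migration to finish and verify the attachment survived.
        self._wait_for_volume_migration(volume['id'])           # assumed helper
        volume = self.volumes_client.show_volume(volume['id'])['volume']
        self.assertEqual('in-use', volume['status'])
```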
21:23:33 <ildikov> scottda: cool, that sounds good
21:24:31 <scottda> I put it on Cinder agenda for testing session.
21:24:59 <ildikov> scottda: I'll try to join that slot
21:25:08 <mriedem> is jdg's option 2 fleshed out as far as the flow is concerned?
21:25:14 <hemna_> so the scenario in question is cinder migrate
21:25:16 <mriedem> looks like some of the notes in the etherpad are stale
21:26:30 <scottda> mriedem: I don't think things are fleshed out. It would be nice to get a flow diagram for Alt#2 similar to what hemna_ has done for the current flow.
21:26:35 <mriedem> stale as in it seems like we came up with different solutions for some of those items last week
21:26:53 <mriedem> e.g. nova not needing to store the attachment_id in the bdm.connection_info column
21:27:00 <hemna_> anyone have the etherpad url handy ?
21:27:10 <mriedem> https://etherpad.openstack.org/p/cinder-nova-api-changes
21:27:18 <mriedem> i don't remember the details though, we have the meeting logs if we need to go back
21:27:24 <hemna_> thanks
21:28:05 <mriedem> here are the 4/13 meeting logs http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-cp/%23openstack-meeting-cp.2016-04-13.log.html#t2016-04-13T21:00:06
21:28:21 <mriedem> i feel like we came to some aha solution type things at the end which aren't fed back into the etherpad
21:28:46 <ildikov> mriedem: we ended up in a discussion about unversioned things
21:29:13 <mriedem> yeah, like os-initialize_connection passing the attachment_id, and nova passing the connector with the instance uuid
21:29:13 <ildikov> I'm not sure whether it was the connection_info itself or another bunch of data
21:29:20 <smcginnis> Sorry, I have to drop off. Will catch up later.
21:29:24 <mriedem> later
21:29:28 <hemna_> I think we were going to stuff the attachment_id into the connection_info
21:29:36 <hemna_> for nova to save at detach time
21:29:39 <mriedem> *os-initialize_connection putting the attachment_id in the connection_info dict response i mean
21:29:56 <hemna_> and we were going to save the connector at initialize_connection time.
21:29:58 <mriedem> but we were going to microversion that or not? i thought yes
21:30:12 <hemna_> I was unsure about that
21:30:16 <mriedem> me too :)
21:30:32 <mriedem> technically i don't think we *need* to because nova can check if it's there and if not, determine cinder isn't new enough
21:30:38 <hemna_> the other part is the fact that initialize_connection is called at times to simply fetch the connection_info
21:30:41 <hemna_> during live migration time
21:30:48 <scottda> Wouldn't Nova want that microversioned? To get a different payload in the connection_info?
21:30:50 <hemna_> and it can't create a new attachment entry then.
21:30:54 <mriedem> but, that happens down in the compute - if nova is going to fail on that, it'd be nice to fail fast in nova-api based on a version
21:31:13 <mriedem> scottda: generally yes we want api changes versioned
21:31:29 <scottda> We really need a design spec for this....
21:31:32 <hemna_> well
21:31:36 <hemna_> we aren't changing the API here
21:31:39 <mriedem> because nova-api could see it's asked to attach a multiattach volume, check the cinder version to see if it can handle it, and if not fail with a 400 before casting to compute
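A minimal sketch of that fail-fast check, assuming an invented capability helper and a placeholder version number (no microversion had been decided at this point):

```python
import webob.exc

MIN_MULTIATTACH_VERSION = (3, 27)   # placeholder, not a real decision


def _check_multiattach_support(context, volume, volume_api):
    # Reject in nova-api, before casting to the compute, if cinder is
    # too old to handle a multiattach volume.
    if not volume.get('multiattach'):
        return
    max_version = volume_api.get_max_microversion(context)  # assumed helper
    if tuple(int(p) for p in max_version.split('.')) < MIN_MULTIATTACH_VERSION:
        raise webob.exc.HTTPBadRequest(
            explanation='cinder does not support multiattach volumes')
```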
21:31:48 <hemna_> it's adding additional data to a dict in response
21:31:58 <hemna_> there is no new param in the call
21:32:05 <hemna_> or not a new response either.
21:32:05 <mriedem> hemna_: well, that's still changing the api response
21:32:21 <hemna_> so....
21:32:22 <mriedem> i realize we've fudged around this forever in the past
21:32:22 <hemna_> I dunno
21:32:29 <mriedem> and that's how it works with neutron port details too
21:32:33 <hemna_> cinder drivers put 'new' things into connection_info at times
21:32:36 <mriedem> i know
21:32:42 <hemna_> that qualifies as a 'change in response' ?
21:32:43 <mriedem> which is why it's not like new badness
21:32:56 <hemna_> I don't know what the right thing is here.
21:32:58 <mriedem> it's just continued badness
21:33:05 <hemna_> we don't version the connection_info as an object
21:33:10 <mriedem> long-term connection_info should be a versioned object in os-brick, imo
21:33:16 <hemna_> every cinder driver has different things it puts in there.
21:33:17 <hemna_> for later
21:33:19 <mriedem> like we're doing with vifs in os-vif
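For reference, what a versioned connection_info could look like, following the os-vif pattern mriedem mentions (oslo.versionedobjects); the fields are invented for illustration:

```python
from oslo_versionedobjects import base as ovo_base
from oslo_versionedobjects import fields


@ovo_base.VersionedObjectRegistry.register
class ConnectionInfo(ovo_base.VersionedObject):
    # 1.0: initial version
    # 1.1: added attachment_id
    VERSION = '1.1'

    fields = {
        'driver_volume_type': fields.StringField(),
        'data': fields.DictOfStringsField(),
        'attachment_id': fields.UUIDField(nullable=True),
    }
```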
21:33:41 <mriedem> yeah, anyway, this is kind of a minor detail, i think it's a matter of UX on the nova side
21:33:45 <mriedem> we fail in the api or we fail in the compute
21:33:50 <mriedem> failing fast in the api is better
21:34:04 <scottda> So, it might not be necessary, but it might be useful (for fail-fast, at least) to microversion it. It's not a big deal to bump the version.
21:34:11 <mriedem> scottda: right
21:34:35 <mriedem> cinder api could strip the attachment_id key out of the connection_info in the API if it's not at that microversion
21:34:56 <hemna_> the other part of this is that detach needs the attachment_id
21:34:58 <mriedem> well, if nova isn't requesting that version
21:35:08 <hemna_> that could/should be in the same patch, which requires a microversion
21:35:20 <mriedem> agree it should be the same microversion
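A sketch of the stripping idea mriedem raises above; the helper name and version number are placeholders, and the api_version_request attribute just mirrors the usual microversion plumbing:

```python
def _translate_connection_info(req, connection_info):
    if not req.api_version_request.matches('3.27'):  # placeholder version
        # Old client (or no version requested): hide the new key so the
        # response looks exactly like it did pre-microversion.
        connection_info.pop('attachment_id', None)
    return connection_info
```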
21:35:20 <scottda> Look, I hate to bog down in process, and I don't want this to slip, but there are others who aren't present who'll need to know all these design decisions. We're probably better off writing a spec for all of this, I think.
21:35:41 <mriedem> scottda: this might feed into a spec after the summit,
21:35:47 <mriedem> but these are questions that feed into that
21:36:01 <scottda> sure, probably not until after the summit
21:36:06 <hemna_> we don't have much time between now and the summit anyway
21:36:13 <mriedem> right
21:36:19 <hemna_> unless someone magic wands the tests into the gate right away :)
21:36:33 <mriedem> jungleboyj has minions, make them write those tests :)
21:36:56 <mriedem> just open a bug and mark it low-hanging-fruit, some intern will write them :)
21:36:57 <ildikov> hemna_: detach is already using the attachment_id
21:37:34 <hemna_> ildikov, wait, didn't we already look at that last week?
21:37:38 * hemna_ is confused
21:37:41 <mriedem> cinderclient(context).volumes.detach(volume_id, attachment_id)
21:37:52 <mriedem> ildikov: got that into nova in mitaka
21:38:13 <hemna_> https://github.com/openstack/cinder/blob/master/cinder/api/contrib/volume_actions.py#L151
21:38:14 <mriedem> in newton api nova passes the attachment id down to the compute, on a mitaka compute it looks up the attachment id
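A minimal sketch of that lookup, assuming the volume dict carries cinder's attachment records with server_id/attachment_id keys (treat the layout as illustrative):

```python
def _get_attachment_id(volume, instance_uuid):
    # Find the attachment record for this instance among the volume's
    # attachments.
    for attachment in volume.get('attachments') or []:
        if attachment.get('server_id') == instance_uuid:
            return attachment.get('attachment_id')
    return None


def detach(cinder, volume, instance_uuid):
    # newton api passes the id down; a mitaka compute looks it up first.
    attachment_id = _get_attachment_id(volume, instance_uuid)
    cinder.volumes.detach(volume['id'], attachment_id)
```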
21:38:23 <hemna_> maybe I'm thinking terminate_connection
21:38:30 <ildikov> hemna_: we might have mentioned it; the point is that the patches that pass the attachment_id to Cinder at detach time made it into Mitaka
21:38:42 <hemna_> yah I'm thinking terminate
21:38:43 <hemna_> sorry
21:39:01 <mriedem> ok, and that wasn't the thing that was reverted in cinder https://review.openstack.org/#/c/275316/
21:39:03 <jungleboyj> mriedem: Since when have I had Minions?
21:39:09 <mriedem> that was passing host and instance uuid to attach
21:39:19 <mriedem> jungleboyj: your glorious army of minions
21:39:23 <mriedem> will solve all of our problems
21:39:33 <ildikov> mriedem: that was the host_name, which got reverted
21:39:35 <jungleboyj> mriedem: They are my esteemed co-workers.  :-)
21:40:29 <mriedem> hemna_: ildikov: scottda: last week we were talking about a cinder api that needed the attachment_id and i suggested that we could stash that in the connector dict passed to cinder, was that terminate_connection?
21:40:30 <ildikov> jungleboyj: tell them they are cute and we like them :)
21:40:36 <mriedem> it's icky, and unversioned
21:40:43 <hemna_> mriedem, that could have been
21:40:54 <hemna_> it's kinda icky to put that into the connector though.
21:41:03 <jungleboyj> ildikov: You can tell them in Austin.  They are all coming with me.  :-)
21:41:13 <hemna_> the connector is supposed to describe the host initiator information for the entire system.
21:41:37 <mriedem> yeah, but it's the same idea as putting the attachment_id in the connection_info that comes back from init_conn
21:41:37 <hemna_> it seems like a hack to put that in there to me.
21:41:45 <mriedem> it's a hack both ways :)
21:41:48 <ildikov> jungleboyj: ok Gru, will do ;)
21:41:51 <hemna_> yah, it's a total hack :(
21:41:56 <mriedem> honestly, the thing to probably do,
21:42:06 <mriedem> is not put attachment_id in connection_info or instance uuid in connector,
21:42:13 <hemna_> mriedem, +1
21:42:15 <mriedem> but microversion those APIs and add them to the response body
21:42:17 <jungleboyj> ildikov: *Laughing*  Hey now!
21:42:22 <mriedem> and request body
21:42:24 <hemna_> mriedem, +1  I like that way better.
21:42:27 <mriedem> respectively
21:42:28 <hemna_> it's clean.
21:42:39 <mriedem> scottda: catch all that? ^
21:42:45 <ildikov> jungleboyj: lol :)
21:42:52 <scottda> I did, and it sounds good to me.
21:43:08 <mriedem> i do wonder,
21:43:29 <mriedem> if nova isn't requesting that microversion, will cinder still create the attachment entry in init-connection?
21:43:40 <hemna_> mriedem, yes
21:43:49 <ildikov> do we want to store attachment_id in Nova or still not?
21:43:51 <mriedem> will it be able to clean up later?
21:43:54 <hemna_> mriedem, according to jdg's POC patch
21:43:55 <mriedem> in terminate_connection?
21:44:15 <hemna_> terminate_connection shouldn't really touch the attachment entry
21:44:19 <hemna_> only in detach()
21:44:38 <mriedem> ok, and we pass the attachment_id there
21:44:42 <hemna_> yah
21:44:48 <mriedem> which we lookup from the volume id and instance uuid in nova
21:45:02 <mriedem> which makes me wonder why initialize_connection needs to return the attachment id for nova?
21:45:10 <mriedem> i get so confused here
21:45:18 <hemna_> it is confusing
21:45:22 <hemna_> so
21:45:25 <hemna_> the reason is
21:45:28 <mriedem> nova could store it and then it wouldn't need to look it up i guess, but we already have that lookup code since mitaka
21:45:29 <hemna_> for multi-attach
21:45:41 <hemna_> it would get stored in the nova bdm
21:45:49 <hemna_> and nova would use that value directly
21:45:57 <hemna_> instead of looking it up again
21:46:08 <mriedem> still, that's just an optimization right?
21:46:25 <ildikov> I like the Cinder is the ultimate source of truth thing better still
21:46:31 <hemna_> I'm still confused as to how initialize_connection is going to know if it's a call for an new attach, or a fetch for connection_info
21:46:46 <scottda> I need to go to IRC-on-the-smart-phone mode for a few minutes...limited typing ability...
21:46:56 <ildikov> hemna_: I think that's the key question here
21:47:01 <hemna_> re: live migration.
21:47:18 <mriedem> hemna_: which is why you wanted to create the attachment in reserve?
21:47:29 <hemna_> mriedem, yes
21:47:45 <ildikov> hemna_: and it still looks weird to call initialize_connection only for fetching already existing connection info
21:47:45 <mriedem> i guess jgriffith would have to answer that
21:47:57 <hemna_> anyway, I'm not going to argue that anymore
21:48:20 <hemna_> ildikov, it seems weird, but it's completely necessary
21:48:36 <mriedem> well, if nova is calling initialize_connection to get the connection_info, and that creates a new attachment in cinder, but it's actually the same volume/instance, won't that be bad?
21:48:44 <hemna_> because the connection_info for volume X on host A will be different on Host B
21:49:01 <hemna_> mriedem, yes
21:49:11 <ildikov> hemna_: I understand the need for the info, I just wanted to highlight that with this naming structure and behaviour it's confusing
21:49:13 <hemna_> that's the problem I've raised a few times with creating the attachment in initialize_connection
21:49:21 <hemna_> but was told that it's not a problem.
21:49:27 * hemna_ shrugs
21:49:30 <mriedem> and why wasn't it a problem?
21:49:36 <mriedem> is something cleaning that up later?
21:49:38 <hemna_> magic
21:49:41 <mriedem> ref counting?
21:49:45 <hemna_> the POC ignores the problem really.
21:50:06 <hemna_> and ignores that we are trying to do this for multi-attach
21:50:09 <mriedem> if necessary, initialize_connection could take a new parameter, attach:boolean
21:50:24 <mriedem> as part of the microversion that returns the attachment id
21:50:39 <hemna_> ildikov, yah initialize_connection the name was fine prior to live migration :)
21:50:39 <mriedem> so nova passes attach=True on attach, cinder creates new attachment in the db,
21:50:57 <mriedem> if just getting the conn info, nova passes attach=False and cinder doesn't create a new attachment
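A sketch of that attach-flag variant, all illustrative (today's os-initialize_connection takes only the connector):

```python
def initialize_connection(self, context, volume, connector, attach=False):
    conn_info = self.driver.initialize_connection(volume, connector)
    if attach:
        # A real attach: mint the attachment record and hand its id back.
        attachment = self.db.volume_attach(
            context, {'volume_id': volume['id'],
                      'attach_status': 'attaching'})
        conn_info['attachment_id'] = attachment['id']
    # attach=False (live migration / resize / swap refreshing the
    # connection_info): return the details without creating anything.
    return conn_info
```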
21:51:02 <hemna_> so
21:51:05 <ildikov> hemna_: I kinda thought it's "legacy" :)
21:51:24 <mriedem> or is this bleeding into update_attachment territory?
21:51:26 <hemna_> the other way to do this, which requires even more nova changes is that live migration uses multi-attach
21:51:35 <hemna_> for each attachment of the volume
21:51:55 <ildikov> or maybe we reorganise things slightly and have a call that actually just asks for info
21:52:10 <hemna_> that may open up other problems though.
21:52:19 <mriedem> hemna_: not all volume drivers support multiattach right?
21:52:26 <hemna_> mriedem, correct
21:52:42 <ildikov> hemna_: you mean attaching the same volume to the same instance twice?
21:52:44 <hemna_> lots of confusion and grey area here.
21:52:54 <hemna_> ildikov, that too, but just on a different host.
21:53:07 <hemna_> the unique key is really host + instance_uuid + volume_id
21:53:12 <mriedem> that would probably totally bork the constraints checking in nova for the bdm
21:53:15 <ildikov> hemna_: yeap, exactly
21:53:15 <mriedem> yeah
21:53:27 <mriedem> nova is just checking volume id and instance uuid today
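The constraint gap in sketch form; every name here is illustrative:

```python
def blocks_attach_today(bdms, volume_id, instance_uuid):
    # current nova check: one attachment per (volume, instance)
    return any(bdm.volume_id == volume_id and
               bdm.instance_uuid == instance_uuid
               for bdm in bdms)


def blocks_attach_multiattach(attachments, volume_id, instance_uuid, host):
    # what multiattach would need: uniqueness on (host, instance, volume),
    # so the same volume can still attach again elsewhere
    return any(a['volume_id'] == volume_id and
               a['instance_uuid'] == instance_uuid and
               a['host'] == host
               for a in attachments)
```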
21:53:50 <hemna_> and for live migration, it simply calls initialize_connection on the destination host
21:54:05 <hemna_> which forces cinder to export a volume to a new host
21:54:14 <hemna_> and then it removes the source attachment
21:54:28 <hemna_> it kinda backdoors the 'attach' workflow on the destination compute host
21:54:38 <hemna_> by bypassing reserve
21:54:58 <mriedem> b/c it's already in-use
21:55:02 <hemna_> yup
21:55:09 <hemna_> and multi-attach isn't baked in
21:55:10 <hemna_> etc.
21:55:24 <hemna_> w/ multi-attach you can attach in-use volumes
21:55:24 <mriedem> so, can we sort out the ref counting issue with creating a new attachment on each call to initialize_connection before the summit?
21:55:26 <hemna_> and use the whole workflow
21:55:48 <mriedem> if detach cleans those up, maybe that takes care of it, i'm not sure
21:56:13 <mriedem> detach = delete all volume attachments for this volume + instance
21:56:14 <mriedem> idk
21:56:31 <mriedem> detach takes the attachment id though...
21:56:34 <mriedem> so it's going to just delete that one
21:56:57 <ildikov> it's supposed to delete just that one
21:57:14 <ildikov> by ref counting issue you mean the same host issue?
21:57:25 <hemna_> so I was just looking at the POC
21:57:34 <hemna_> and it creates a brand new attachment on every initialize_connection call
21:57:46 <hemna_> we'll have borked stuffs in the volume_attachment table during live-migration time.
21:57:57 <mriedem> it's not just live migration,
21:58:10 <mriedem> there is code in nova compute manager that calls initialize_connection to refresh the connection info, let me find that quick
21:58:24 <hemna_> mriedem, yah I think that's called during live migration
21:59:09 <hemna_> driver_block_device.refresh_conn_infos()  virt/block_device.py
21:59:12 <hemna_> that guy
21:59:19 <hemna_> https://drive.google.com/a/hemna.com/file/d/0B1Kp6K43HLHydU1wWFVIN29tc2s/view
21:59:21 <hemna_> :)
21:59:25 <mriedem> at least live migration and resize
21:59:33 <hemna_> resize too ?
21:59:41 <mriedem> i'm talking about https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L1833
22:00:15 <mriedem> and swap_volume https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L4886
22:00:18 <hemna_> yah that calls refresh_conn_infos if true
22:00:47 <mriedem> and swap volume calls it outright for the new volume
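Roughly the refresh path being discussed, as it hangs together in the compute manager (names follow manager.py and nova/virt/block_device.py; treat the exact wiring as a sketch):

```python
from nova.virt import block_device as driver_block_device


def _refresh_volume_conn_infos(self, context, instance, bdms):
    # Re-calls os-initialize_connection for every attached volume; with a
    # create-attachment-on-every-init_conn scheme, each live migration,
    # resize, or swap would mint extra volume_attachment rows.
    driver_block_device.refresh_conn_infos(
        driver_block_device.convert_volumes(bdms),
        context, instance, self.volume_api, self.driver)
```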
22:01:10 <hemna_> I 'think' that's ok though
22:01:16 <scottda> so I don't see where we need a new update attachment API...
22:01:18 <hemna_> because cinder calls nova right?
22:01:31 <mriedem> hemna_: for swap volume yeah, but you can also do swap volume w/o cinder
22:01:33 <hemna_> and I think swap volume already marks the attachment as detached?
22:01:36 <hemna_> I dunno
22:01:43 <hemna_> I'd have to follow that mess of code as well
22:01:50 <hemna_> mriedem, oh ok
22:01:52 <hemna_> w/o cinder.
22:01:55 <mriedem> this is why i was asking for gate testing on cinder migration :)
22:02:00 <hemna_> damn
22:02:09 <hemna_> ok I am even more scared now.
22:02:17 <mriedem> now you know why i'm scared
22:02:18 <hemna_> this is all feeling more and more like a giant hack
22:02:36 <hemna_> I dunno, I kinda think reserve should return attachment_id
22:02:38 <hemna_> and nova keeps it
22:02:45 <hemna_> and uses it for calls to cinder
22:03:07 <hemna_> does that solve anything
22:03:19 <mriedem> well, it'd be a single ref count at least
22:03:25 <hemna_> if nova calls initialize_connection(....., attachment_id)
22:03:43 <hemna_> Cinder wouldn't create a new attachment if it gets it passed in.
22:03:46 <hemna_> it simply looks it up
22:04:04 <hemna_> and updates the connector if needed and saves it into the attachment table
22:04:19 <mriedem> along those lines,
22:04:24 <mriedem> if attach volume fails in nova, it calls: self.volume_api.unreserve_volume(context, bdm.volume_id)
22:04:32 <mriedem> which could pass the attachment_id to delete it
22:04:48 <hemna_> yah
22:04:51 <mriedem> as would detach
22:05:02 <mriedem> i mean, nova calls detach with the attachment id already so it would delete it then
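hemna_'s proposal in sketch form. None of these signatures exist today; returning the attachment_id from reserve and threading it through the other calls is exactly what is being proposed:

```python
def attach_flow(volume_api, context, volume_id, connector):
    # reserve would return the attachment_id, and nova would keep it
    attachment_id = volume_api.reserve_volume(context, volume_id)
    try:
        # passing the id back means cinder looks the attachment up (and
        # updates the connector if needed) instead of creating a new one
        conn_info = volume_api.initialize_connection(
            context, volume_id, connector, attachment_id=attachment_id)
    except Exception:
        # attach failed: delete exactly that attachment record
        volume_api.unreserve_volume(context, volume_id, attachment_id)
        raise
    return attachment_id, conn_info


def detach_flow(volume_api, context, volume_id, attachment_id):
    # normal detach also deletes exactly that record (nova already
    # passes the attachment_id here)
    volume_api.detach(context, volume_id, attachment_id)
```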
22:05:05 <hemna_> in the case of nova doing swap volume, it should be a new attachment really
22:05:11 <hemna_> because it's a different volume
22:05:13 <mriedem> yes
22:05:36 <hemna_> the old attachment should be marked as detached, and the new volume attachment gets its new attachment table entry and attachment_id.
22:05:46 <hemna_> but, nova doesn't call reserve currently
22:05:50 <hemna_> for swap
22:05:56 <mriedem> ok, so this is making more sense to me than option #2 now, but would need to know why creating a new attachment entry on every call to initialize_connection in option 2 wasn't a problem
22:06:09 <hemna_> it was a problem
22:06:13 <hemna_> it was overlooked IMHO
22:06:19 <hemna_> I tried raising it several times.
22:06:31 <mriedem> hemna_: nova reserves the new volume in the api for swap
22:06:32 <mriedem> self.volume_api.reserve_volume(context, new_volume['id'])
22:06:43 <hemna_> ah ok that's good
22:06:47 <mriedem> and if swap fails
22:06:48 <mriedem> self.volume_api.roll_detaching(context, old_volume['id'])
22:06:48 <mriedem> self.volume_api.unreserve_volume(context, new_volume['id'])
22:06:54 <hemna_> and then it gets into the https://github.com/openstack/nova/blob/4ad414f3b1216393301ef268a64e61ca1a3d5be9/nova/compute/manager.py#L4898
22:06:56 <mriedem> so we should be covered there
22:07:25 <mriedem> yeah, nova-api does the reserve call to cinder, then casts to nova compute to do the actual detach/attach of the old/new volumes
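For context, the shape around the calls mriedem quotes, simplified from nova's compute API swap volume path (the rollback lives in an exception handler):

```python
from oslo_utils import excutils

self.volume_api.begin_detaching(context, old_volume['id'])
self.volume_api.reserve_volume(context, new_volume['id'])
try:
    # cast to the compute to do the actual detach/attach of old/new
    self.compute_rpcapi.swap_volume(
        context, instance=instance,
        old_volume_id=old_volume['id'],
        new_volume_id=new_volume['id'])
except Exception:
    with excutils.save_and_reraise_exception():
        self.volume_api.roll_detaching(context, old_volume['id'])
        self.volume_api.unreserve_volume(context, new_volume['id'])
```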
22:07:56 <hemna_> I see the finally clause there
22:08:03 <hemna_> it nukes the old attachment it looks like
22:08:39 <hemna_> I presume that migrate_volume_completion takes care of the correct state on the volume in that case.
22:08:58 <hemna_> gawd this is confusing
22:10:08 <hemna_> ildikov, I think we burned through our time here?
22:10:11 <mriedem> :) now you want gate tests too?
22:10:17 <hemna_> mriedem, yes
22:10:23 <mriedem> muwahahaha
22:10:25 <scottda> Yeah, I gotta go myself.
22:10:29 <mriedem> ok, me too
22:10:34 <hemna_> that panic feeling sets in....
22:10:44 <scottda> Maybe more talk in IRC, else see you all in Austing
22:10:50 <ildikov> hemna_: I still have more than 5 hours till the taxi picks me up, so it's fine :)
22:10:58 <scottda> damn! I keep adding a 'g' to Austin
22:11:08 <hemna_> ildikov, :)
22:11:22 <scottda> ok, bye everyone
22:11:26 <mriedem> later
22:11:43 <ildikov> C U in Austin
22:11:46 <hemna_> ok I have to run to a meeting...
22:12:07 <ildikov> thanks guys, let's sort these things out next week
22:12:15 <ildikov> safe travels
22:12:21 <ildikov> #endmeeting