17:00:42 #startmeeting cinder-nova-api-changes
17:00:43 Meeting started Mon Dec 19 17:00:42 2016 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:46 The meeting name has been set to 'cinder_nova_api_changes'
17:01:05 scottda DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood
17:01:14 o/
17:01:22 hey
17:01:34 hi :)
17:01:41 tough
17:02:05 o/
17:02:31 let's wait one more minute and then we can start
17:04:05 on the Cinder side we have the dependency of the new API patch merged, so hopefully jgriffith is busy with the API patch atm
17:04:32 johnthetubaguy: did you have a chance to go through the comments on the Nova side patch?
17:04:46 * johnthetubaguy hides in the naughty corner
17:05:16 * ildikov does not know what to say on the year's last meeting to this :)
17:05:27 heh
17:05:53 johnthetubaguy: we discussed it briefly and the main comment from mriedem was to not initiate any volume state changes in Cinder from Nova
17:06:41 johnthetubaguy: so far everyone agreed to remove that direction from your spec
17:07:06 OK, I am not sure I know what that means
17:07:06 johnthetubaguy: but we can discuss it further today if you feel the need
17:07:10 johnthetubaguy: yeah the main issue i had was there are several parts that rely on nova setting a vol attachment resource status to error in cinder when we hit a problem, and i thought nova would just delete the vol attachment
17:07:44 ah, delete vs update into an error state
17:07:44 similar to nova rolling back and calling os-terminate_connection today on a failed attach
17:07:57 yeah i don't like nova controlling resource state in cinder
17:08:05 i don't think anyone does
17:08:26 I think I wanted a "tombstone" for an operator to do checks/cleanup
17:08:28 i think you were doing it in part for evacuate making queries for attachments in error state,
17:08:33 it also gives more items to clean up later which is not that fortunate either
17:08:36 but there are probably other ways to handle that
17:08:37 ah, yeah, evacuate
17:09:03 what was the replacement for evacuate?
17:09:04 * smcginnis strolls in but stays at the back of the room
17:09:05 it's been a couple of weeks since i left comments so they are in there
17:09:29 i had some questions about evacuate related to the error state thing, but don't remember what they were now
17:09:44 but was thinking there were alternatives, or just not needing the error thing
17:09:45 oh, use the migration object instead
17:10:16 not sure what you mean by replacement for evacuate, but yeah we use the migration object to track state for evacuate now
17:10:30 i think i thought about that at one point too, we could store some data in the migration object
17:10:49 like we have old/new instance type in the migration object
17:10:56 if we needed, we could have old/new vol attachment in there too
17:11:46 so I should go through that, and undo the error state stuff
17:12:15 so tomorrow is my last day before next year
17:12:26 I should try to get that done tomorrow
17:12:39 but I've been pushing on neutron v2 refactoring instead
17:13:28 johnthetubaguy: the pretty high level Cinder spec: http://specs.openstack.org/openstack/cinder-specs/specs/ocata/add-new-attach-apis.html
17:13:56 johnthetubaguy: i'd say if you have thoughts, just make a note in the spec review as a reminder for when you get back, but keep trucking on the neutronv2 conductor stuff
17:14:00 johnthetubaguy: just in case you would need to double check what calls we ended up with finally
17:14:02 because that's actually in plan for ocata
17:14:33 mriedem: yeah, I think that's probably the better approach
17:15:32 I can't think of why deleting the attachment wouldn't be enough
17:15:55 I think I was worrying too much about cinder knowing about all the state changes on the Nova side
17:16:11 which we don't want cinder to know about
17:16:34 mriedem: +1
17:16:41 I'm more of a fan of single responsibility as much as we can achieve that here
17:16:43 from what i remember i think the only case where it maybe made some kind of sense was evacuate cleanup, but again, it's been 2 weeks and i think we have alternatives
17:16:51 so what mriedem says :)
17:16:52 I think Cinder should have minimal awareness of what's happening with consumers.
17:18:05 mriedem: having alternatives sounds good, we can go into details if needed at the next meeting in Jan
17:18:45 I think I don't really understand the failure modes around disconnect that well at this point
17:19:01 I guess if it's not detached, we leave it attached, if it is detached, we detach it
17:20:19 anyways, simpler sounds like a better starting point
17:20:51 johnthetubaguy: +1
17:21:24 johnthetubaguy: when we have the Cinder API merged or close to that state we can also do some PoC, like what jgriffith started earlier
17:22:42 the tricky bit, I think, is going to be the smooth transition to the new API
17:23:30 you mean from the point of view of the already created volumes?
17:24:44 johnthetubaguy: Or just all the changes required on the nova side, microversions, etc.?
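[Editor's note: the delete-instead-of-error rollback discussed above can be sketched roughly as below. This is a minimal illustration of the agreed direction, not Nova's actual code; `FakeCinderClient` and its `attachment_create`/`attachment_delete` method names are assumptions standing in for the new Cinder attachment API, and `connect_fn` stands in for the host-side connect that os-brick does in real Nova.]

```python
import uuid


class FakeCinderClient:
    """Illustrative stand-in for the new Cinder attachment API;
    the method names here are assumptions, not the real client."""

    def __init__(self):
        self.attachments = {}

    def attachment_create(self, volume_id, instance_id):
        attachment_id = str(uuid.uuid4())
        self.attachments[attachment_id] = {'volume_id': volume_id,
                                           'instance_id': instance_id}
        return attachment_id

    def attachment_delete(self, attachment_id):
        # The attachment record disappears entirely; Cinder never
        # sees a Nova-driven "error" state.
        del self.attachments[attachment_id]


def attach_volume(cinder, volume_id, instance_id, connect_fn):
    """Create the attachment, try the host-side connect, and on
    failure delete the attachment rather than flipping its status
    to error in Cinder (per the discussion above)."""
    attachment_id = cinder.attachment_create(volume_id, instance_id)
    try:
        connect_fn()
    except Exception:
        cinder.attachment_delete(attachment_id)
        raise
    return attachment_id
```

[The point of the sketch: after a failed attach, no per-attachment "tombstone" is left in Cinder; evacuate-style cleanup would instead rely on Nova-side state such as the migration object.]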
17:25:43 yeah, already created volumes
17:25:57 and when to use the new API part way through operations like live-migration
17:26:45 we have a mix of old and new compute nodes out there, etc
17:27:00 we have patterns to follow, it's just tricky
17:27:18 we can switch to the new API when all the computes are upgraded
17:27:19 johnthetubaguy: we can globally lock that out based on minimum nova-compute service version in the deployment if needed
17:27:24 that should be doable
17:27:37 mriedem: yep, that's what I meant too
17:27:52 we can also check per-compute service versions if we want to be fancy, but that's probably a more complicated dance
17:27:57 mriedem: yeah, we might have to do that per operation though
17:28:15 well, I mean at the start of long running operations, like live-migrate
17:28:20 so we don't swap half way through
17:28:27 old volumes should have attachment_id already, it's a question what other info we would need for live migrate for instance as we will create a new attachment for the new host anyhow
17:28:31 but the BDM pattern I proposed in the spec should do that
17:28:44 so the question here is removing the old one I guess
17:29:15 apparently, not all old volumes have an attachment id already
17:29:31 johnthetubaguy: do you mean attachment id in nova or cinder?
17:29:33 if they were attached before we created attachment ids
17:29:36 in cinder I mean
17:29:45 johnthetubaguy: i think we're talking about different things
17:30:01 i've been told in previous meetings that each volume attached has something in the cinder volume_attachments table already
17:30:16 only past a certain point in time, I believe
17:30:22 I don't remember when that got introduced, but it should have been there for a while now
17:30:23 is havana old enough?
17:30:32 it is for me!
17:30:38 hemna wrote the schema migration so he'd know
17:31:21 hey
17:31:28 sorry guys, been on multiple meetings at once
17:32:32 ok correct.
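[Editor's note: the "globally lock that out based on minimum nova-compute service version" idea above can be sketched as below. The version constant and function name are hypothetical, just to show the gating logic and why it is checked once at the start of a long-running operation so an operation never swaps flows half way through.]

```python
# Hypothetical service version at which every nova-compute understands
# the new Cinder attach flow; the real number would come from Nova's
# service version history, not this sketch.
NEW_ATTACH_FLOW_MIN_VERSION = 17


def use_new_attach_flow(compute_service_versions):
    """Only switch to the new Cinder attachment API once the minimum
    nova-compute service version across the deployment is high enough.

    Checked at the start of operations like live-migrate, so the
    deployment never mixes old-flow and new-flow mid-operation.
    """
    if not compute_service_versions:
        return False
    return min(compute_service_versions) >= NEW_ATTACH_FLOW_MIN_VERSION
```

[A per-compute variant would consult only the source and destination hosts of one operation, the "more complicated dance" mentioned above.]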
17:32:35 according to the user survey, 16% of users in October 2015 were on Havana, 12% were on older releases
17:32:39 johnthetubaguy: so for old volume attachments / BDMs we know we'll need to 'upgrade' those to the new model somehow
17:32:40 each attachment is supposed to have a volume_attachment entry
17:32:47 mriedem: yeah, basically
17:32:59 and all the original cinder volume table information was migrated when the volume_attachment table was created.
17:33:00 mriedem: I think we previously agreed a cinder-manage cmd to fix that up
17:33:00 johnthetubaguy: i left some comments on that in the spec too i believe
17:33:12 johnthetubaguy: i was hoping for something more automatic
17:33:20 i.e. online data migration
17:33:26 mriedem: we can call all that from the online data migrations stuff
17:33:33 because the spec says something about a nova-manage command calling a cinder-manage command, and that seems bad to me
17:33:57 so the trade off was vs adding an API that's just for the upgrade hump
17:34:12 i think we can migrate old attachments on (1) first access after rolling new code or (2) a periodic task in the computes for vols attached to instances on that upgraded compute
17:34:40 then maybe after n-2 or whatever we drop that compat code
17:35:00 yeah, I am thinking the same, except it's nova-manage cmds to match the DB work
17:35:20 if nova-manage has to cast to the compute it's not going to be fun
17:35:26 *rpc call to the compute
17:35:43 when i was reading the spec i thought there was some reason we'd need to call to the compute - to get the volume connector from brick
17:35:47 mriedem: true, it's really about staggering the system load on upgrade
17:35:50 which is needed in the update attachment for existing attachments
17:36:10 we don't have the connector from nova-manage, and you can only get it off the compute
17:36:19 yeah
17:36:20 so you're talking about adding an rpc call from nova-manage to compute i think
17:36:34 which is why i was talking about doing something more automatic
17:36:34 yeah, that's a "bit" evil :(
17:36:38 or background tasks
17:36:39 right
17:36:43 anyways, we can work through that
17:37:05 yeah it's all in the spec :)
17:37:12 :)
17:37:13 my wedding gift to you
17:37:19 mriedem: :)
17:37:34 ha. That's right, congratulations johnthetubaguy
17:38:03 actually just got the usb stick with all the photos on today, so that's some fun for tonight
17:38:29 johnthetubaguy: I guess you're asking me now to cut the meeting short :)
17:39:04 I can't go in the living room right now, apparently, all I can hear is present wrapping noises, and the odd bit of singing
17:39:06 so I am good for now
17:39:14 :)
17:39:31 :)
17:39:35 anyways, we derailed fast there :)
17:39:52 is there anything for the Nova spec we can/should discuss now?
17:40:03 cinder cmd vs cinder API?
17:40:06 *anything else
17:40:13 vs cinder lib
17:40:30 for that transition bit
17:40:58 maybe it's such a mess if there is no API to "upgrade" an attachment, we should just allow us to call update?
17:41:31 (and we ignore the case where there is no attachment_id)
17:42:03 my guess here is that it's either update, or create a new attachment and remove the old
17:42:18 oh... create a new one
17:42:29 I'm slightly unsure at this point what update will do with an old attachment
17:42:58 I think the idea was to just call update
17:43:20 I can clarify this with jgriffith later
17:43:35 but I think the API should be able to cover this
17:44:17 sorry... late :(
17:44:21 I wonder what the load will be during an upgrade and whether that can cause any issues
17:44:25 been trying to deploy OpenStack
17:44:32 I weep for our users
17:44:32 ha
17:44:58 johnthetubaguy update is basically a "finalize" and make the actual connection
17:45:06 jgriffith: now that you mentioned users I might accept this as an excuse :)
17:45:09 create *can* do that at the same time if you like
17:45:18 i have to drop - sev1 in prod, ttyl
17:45:22 but that doesn't work for Nova
17:45:45 mriedem: thanks for joining, ttyl
17:46:12 jgriffith: we are agonizing over an upgrade process
17:46:24 ildikov how come?
17:46:38 jgriffith: in the sense of switching to the new Cinder API and getting old volume attachments updated
17:47:08 should be ok with a manage script
17:47:19 we've minimized the differences between the two
17:47:27 in terms of what's in the DB etc
17:47:33 jgriffith: you mean on the Nova or the Cinder side?
17:47:48 remember a while back we had this whole rework to try and make them compatible
17:48:16 ildikov Cinder side, Nova was a bit uglier (i.e. nova --> cinder OR cinder-1)
17:48:25 there are concerns on Nova having to directly call cinder-manage
17:48:39 johnthetubaguy I wouldn't have Nova do that if it were me
17:48:51 basically because we need to get the connector from the specific nova-compute node
17:49:17 johnthetubaguy what i'm saying is that Nova should just check which version of the API cinder supports and use the latest call
17:49:21 I am ok with nova-manage doing nasty things, but we can't really cheat here
17:49:36 as far as existing attachments and things like the stored attachment id... for that you do a get-attachments and find it
17:49:37 jgriffith: this is about all the existing volumes, and trying to move them so we only use the new API version
17:49:47 then drop the support in Nova for the old API
17:50:02 right, the connector isn't stored today
17:50:02 johnthetubaguy sure
17:50:12 so that needs pushing up
17:50:28 and connection_info I guess
17:50:32 johnthetubaguy all you need to do after an upgrade (like nova-migrate) is call cinder "attachments_get_all_by_volume"
17:50:50 johnthetubaguy that will give you the connector info and everything that you would get if you were using the "new" API calls
17:50:58 but cinder doesn't have the info from os-brick?
17:51:31 johnthetubaguy Oh! Yeah... that case
17:51:37 johnthetubaguy so that's doable
17:51:55 yeah, we just haven't agreed on how yet
17:52:07 johnthetubaguy we use the old call for detach etc, and just make sure we use all new calls for attach creation
17:52:19 johnthetubaguy I had some code that did this... it even worked :p
17:52:36 I think that's what ildikov was suggesting, that seems possible
17:53:09 johnthetubaguy I do think we can make this work, I'll need to get the latest update to Cinder done, and get a running cloud again to move forward
17:53:32 johnthetubaguy: partially as I was thinking about how to update and I think it's not what jgriffith meant
17:54:12 ildikov very hard to say "what I meant". even for me :)
17:54:23 I like the delete/create approach though
17:54:34 means we do it via the API
17:54:40 but it's not APIs just created for upgrade
17:54:57 jgriffith: hmm, good opportunity for me to plant ideas in your head now it seems :)
17:55:02 it's probably a create/delete, but that's a detail for later
17:55:18 johnthetubaguy I think we can solve this with the cinder api calls
17:55:35 I think Cinder API calls as a target is good in general
17:55:35 johnthetubaguy it's annoying for a couple releases to support both but I think it's certainly possible
17:56:07 jgriffith: as we have users still on Havana I wonder how many that "couple" is... :(
17:56:33 jgriffith: it does not make it impossible of course, it was just a note
17:56:42 well, maybe they just don't need the delete, because the attachment doesn't exist
17:57:09 it looked like > 20% of users were on havana or earlier in the December 2015 user survey
17:57:17 so it's very likely they still have volumes attached
17:57:22 and want to upgrade at some point
17:57:36 johnthetubaguy they can't upgrade H-->O anyway
17:57:44 totally
17:57:50 johnthetubaguy I can barely upgrade M-->N
17:58:11 it's just more that folks are using that thing in production
17:58:56 jgriffith: well, I kinda think people should use OpenStack as a service, but I would say that
17:59:08 johnthetubaguy well, I do think it's completely doable, and it will work a hell of a lot better than things like Keystone V3 and Glance and .......
17:59:35 jgriffith: totally agreed, it's totally doable
17:59:39 johnthetubaguy after this lab move and redeploy I would agree with you!
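[Editor's note: the create/delete "upgrade" of a legacy attachment favored above could look roughly like the sketch below: fetch the connector from the owning compute host (the part nova-manage cannot do locally), create a new-style attachment with it, record the new id on the BDM, then delete the old record. Every class, method, and field name here is an illustrative assumption, not the real Nova or Cinder interface.]

```python
class FakeComputeRPC:
    """Illustrative stand-in for an RPC call to the compute host,
    which is the only place the os-brick connector can be built."""

    def get_volume_connector(self, instance_id):
        return {'host': 'compute1', 'initiator': 'iqn.2016-12.fake'}


class FakeCinder:
    """Illustrative stand-in for the new Cinder attachment calls."""

    def __init__(self):
        self.attachments = {}
        self._counter = 0

    def attachment_create(self, volume_id, instance_id, connector=None):
        self._counter += 1
        attachment_id = 'attachment-%d' % self._counter
        self.attachments[attachment_id] = {'volume_id': volume_id,
                                           'instance_id': instance_id,
                                           'connector': connector}
        return attachment_id

    def attachment_delete(self, attachment_id):
        del self.attachments[attachment_id]


def migrate_legacy_attachment(cinder, compute_rpc, bdm):
    """Move one legacy block device mapping to the new attachment
    model using only regular API calls, per the delete/create
    approach: no upgrade-only API, no nova-manage -> cinder-manage."""
    connector = compute_rpc.get_volume_connector(bdm['instance_id'])

    # Create the new-style attachment with the connector up front...
    new_id = cinder.attachment_create(bdm['volume_id'],
                                      bdm['instance_id'],
                                      connector=connector)
    old_id = bdm.get('attachment_id')
    bdm['attachment_id'] = new_id

    # ...then remove the legacy record, if one exists at all
    # (volumes attached before attachment ids existed may have none).
    if old_id is not None:
        cinder.attachment_delete(old_id)
    return bdm
```

[In real Nova this would run from the online data migration machinery, on first access or from a periodic task on the upgraded compute, to stagger the load on upgrade as discussed above.]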
18:00:12 We've built something that requires PS to deploy at this point
18:00:57 well, there are many competing deploy solutions, I guess
18:01:20 doing it yourself is crazy hard
18:02:03 anyways, at some point we should discuss private cloud deploy optimizations, but anyways
18:03:29 johnthetubaguy: sounds a bit like discussing what to do with global warming at this point to me
18:03:32 I guess we are out of time
18:03:43 johnthetubaguy: and that too :)
18:03:52 and lucky me... I get to go to a meeting on Open Source contribution policy :)
18:03:52 ildikov: yeah, that's true
18:04:01 would anyone here like to have the first meeting on the 2nd?
18:04:03 jgriffith: I am sorry
18:04:20 or does the 9th sound just fine?
18:04:22 that's a public holiday for most folks right?
18:04:42 jgriffith: should I create a reason for you to miss that? :)
18:05:01 I think the 9th
18:05:11 Not much will be changed by the 2nd
18:05:15 jgriffith: unless you're the one holding it :)
18:05:54 scottda: a very good point
18:05:56 scottda: yeap, my thought exactly, just wanted to give a chance for objection if there would happen to be any :)
18:06:12 happy holidays, when that happens for folks
18:06:31 Happy Holidays everyone!
18:06:48 Talk to you on the 9th at the latest!
18:07:27 yup, have a good one all
18:07:43 Thanks for all your efforts this year and have some well deserved fun and rest in the upcoming two weeks! :)
18:08:26 #endmeeting