13:00:03 <alex_xu> #startmeeting nova api
13:00:04 <openstack> Meeting started Wed Sep 28 13:00:03 2016 UTC and is due to finish in 60 minutes.  The chair is alex_xu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:08 <openstack> The meeting name has been set to 'nova_api'
13:00:11 <alex_xu> who is here today?
13:00:35 <mriedem> o/
13:01:13 <alex_xu> sdague: johnthetubaguy gmann are you around for api meeting
13:01:21 <johnthetubaguy> o/
13:01:45 <alex_xu> let us wait one min?
13:02:29 <alex_xu> ok, just three of us
13:02:44 <alex_xu> let us start the meeting
13:02:56 <alex_xu> #topic action from previous meeting
13:03:04 <alex_xu> action: alex_xu to work on generic diagnostics spec
13:03:15 <alex_xu> #link https://review.openstack.org/357884
13:03:30 <alex_xu> Sergey is already helping on this spec
13:03:57 * edleafe wanders in late
13:04:06 <mriedem> i haven't seen the latest on that one yet
13:04:11 <alex_xu> there is one hotly debated point in the spec
13:04:37 <alex_xu> which value to use to identify the disk
13:05:11 <mriedem> for libvirt i thought we can't rely on the device name
13:05:13 <alex_xu> there are three options here: bdm_id, device_name, and disk local path
13:05:16 <mriedem> so that's why people wanted tags
13:05:25 <alex_xu> mriedem: yea, +1
13:05:53 <alex_xu> mriedem: the tags are really for normal users: defined by normal users and used by normal users
13:05:54 <mriedem> i don't like the idea of leaking the bdm id
13:05:55 <sdague> o/
13:06:16 <alex_xu> exposing the tags in an API for debug sounds duplicative
13:06:23 <mriedem> bdm id is per cell and if we're talking about the id field in the db table it could collide with bdms in other cells
13:06:59 <alex_xu> so the disk local path is the last remaining choice
13:07:00 <sdague> mriedem: it's not a uuid?
13:07:25 <johnthetubaguy> or we create a uuid for this API
13:07:27 <alex_xu> it is not
13:07:39 <alex_xu> johnthetubaguy: but we don't have an API exposing the bdm either
13:08:04 <mriedem> sdague: yeah we don't have a uuid on the bdm table
13:08:10 <mriedem> we've talked about having one forever
13:08:15 <mriedem> but never had use cases
13:08:26 <mriedem> dansmith has the patches to make that happen though
13:08:47 <mriedem> disk local path is the path on the host?
13:09:11 <alex_xu> mriedem: for a network device, it can be a URI, which is very virt-driver specific
13:09:16 * johnthetubaguy scratches head
13:09:48 <mriedem> hmm, if the only point is having a unique identifier per disk/bdm, then i think i'd just go with a uuid
13:10:01 <mriedem> and we revive dansmith's patches to add a uuid field to the bdm
13:10:13 <sdague> yeh, uuid seems better than a uri
13:10:38 <alex_xu> the point for disk path is that this is a debug API, so it should be ok for a debug user
13:11:25 <mriedem> https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bug/1489581+owner:dms@danplanet.com
13:11:37 <johnthetubaguy> it feels like we should do it properly and add a uuid anyways
13:11:43 <sdague> johnthetubaguy: ++
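
A minimal sketch, using oslo.versionedobjects directly rather than Nova's actual object code, of what adding a uuid field to the BlockDeviceMapping versioned object might look like; this is illustrative, not dansmith's actual patches, and the version number and trimmed field set are assumptions:

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    class BlockDeviceMapping(base.VersionedObject):
        # Hypothetical version bump for the new field.
        VERSION = '1.1'

        fields = {
            # Per-cell DB primary key; can collide across cells,
            # so it is not safe to expose in the API.
            'id': fields.IntegerField(),
            # New: a globally unique identifier, safe to expose.
            'uuid': fields.UUIDField(),
            # Unreliable for at least libvirt, hence the uuid idea.
            'device_name': fields.StringField(nullable=True),
        }
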
13:12:13 <alex_xu> so do we need an API exposing that uuid?
13:13:06 <mriedem> i'd think so
13:13:39 <alex_xu> something like /servers/{uuid}/bdm?
13:14:09 <johnthetubaguy> so we spoke about the VIF APIs, I think, pointing to neutron; it feels like a unified list of disks that has all bdms, not just volumes, might be what we want?
13:14:30 <sdague> johnthetubaguy: yeh, a GET call for that as a sub resource probably makes some sense
13:14:36 <sdague> like alex_xu just said
13:14:51 <johnthetubaguy> yeah, I just wonder if we already have something close...
13:15:29 <johnthetubaguy> oh, we call it os-volume_attachments, that's clearly not a bdm
13:15:34 <alex_xu> we have the vol-attachment API, but it won't include swap and ephemeral disks
13:16:02 <mriedem> nor a bdm uuid since that doesn't exist yet
13:16:09 <mriedem> brb
13:16:19 <sdague> yeh, so /servers/{uuid}/bdm seems like a good approach there
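
For context, a hypothetical sketch of what a GET /servers/{server_uuid}/bdm response could look like once BDMs carry uuids; the endpoint and payload shape are illustrative, not from an agreed spec, though source_type/destination_type mirror the existing BDM model:

    GET /servers/{server_uuid}/bdm

    {
        "block_device_mappings": [
            {
                "uuid": "0ddba11e-9c1d-4bd2-8c3a-2a4f7a0d9f01",
                "source_type": "volume",
                "destination_type": "volume",
                "volume_id": "a1b2c3d4-0000-4000-8000-000000000001",
                "boot_index": 0
            },
            {
                "uuid": "5eed5eed-1111-4111-8111-000000000002",
                "source_type": "blank",
                "destination_type": "local",
                "guest_format": "swap"
            }
        ]
    }
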
13:16:37 <alex_xu> isn't that a little duplicative of the vol-attachment API?
13:16:47 <sdague> I guess, is the disk device info the only holdout here? Could we split that out?
13:17:19 <sdague> so: diagnostics minus disk on the first go, get that in. Sort out consistent bdm exposure in parallel, then diag + bdm as a second pass.
13:17:37 <sdague> just so that the rest of the cleanup work can be sorted
13:18:15 <johnthetubaguy> sdague: that sounds like a good idea
13:18:22 <alex_xu> we have microversion, so that is cool for making progress :)
13:19:03 <alex_xu> another idea: if we don't have a clear use case for /servers/{uuid}/bdm, we can put some bdm attributes with the uuid in the diag API
13:19:49 <sdague> so, honestly, dealing with bdms more consistently, and not just in the attachment api, seems sensible
13:19:52 <johnthetubaguy> honestly, it feels like BDMs need some thought, generally. Where do we want to take disks and flavors?
13:20:14 <johnthetubaguy> it feels a bit wrong we don't expose bdms in a GET api
13:20:27 <sdague> johnthetubaguy: yeh
13:20:47 <sdague> ok, so plan forward. Split diagnostics spec so there is a non disk version
13:20:49 <alex_xu> ah, i see the point
13:20:58 <sdague> get that sorted and landed
13:21:22 <sdague> then there is probably a bdm api spec (which includes uuid exposure) and diag + bdm
13:21:24 <sdague> right?
13:22:07 <johnthetubaguy> +1
13:22:13 <alex_xu> yea
13:23:42 <mriedem> that would be a regression from the existing diagnostics api,
13:23:50 <mriedem> so if you wanted disk stuff you'd have to use v2.1 or something lower
13:24:11 <mriedem> the existing diagnostics api doesn't expose a disk id though, but it does expose some disk stuff
13:24:16 <alex_xu> or just no id, but with disk info
13:24:23 <alex_xu> mriedem: yea
13:24:29 <sdague> ok, well, maybe that then
13:24:53 <sdague> the thing is, I'd hate to have the standardization get held up on a disk id thing that is going to take a while to sort out
13:26:10 <mriedem> if long-term you want both disk path and id in the diag api,
13:26:13 <mriedem> then you could do disk path now
13:26:15 <mriedem> and add id later
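
To illustrate that option, a hypothetical disk entry from the standardized diagnostics API, identified by host-local path now, with a uuid added in a later microversion; all field names here are assumptions, not from the spec:

    {
        "disk_details": [
            {
                "path": "/var/lib/nova/instances/<server uuid>/disk",
                "read_bytes": 262144,
                "write_bytes": 5778432,
                "errors_count": 0
            }
        ]
    }
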
13:26:28 <johnthetubaguy> do we want disk path?
13:26:32 <mriedem> idk
13:26:42 <mriedem> it's really hard to tell what should be in this thing w/o input from someone that uses it
13:27:24 <mriedem> we could query the ops list to see if anyone uses the api and if so, what they think about options we're talking about
13:28:46 <johnthetubaguy> yes, although releasing a smaller, reduced but standardized API feels like an improvement
13:29:02 <johnthetubaguy> they may just tell us to delete it, I guess
13:30:12 <sdague> so the microversion would let us replace this with something standard, even if it's missing info
13:30:58 <mriedem> i've got to get my kid on the bus so back in a while
13:31:31 <alex_xu> so, what should we do?
13:31:33 <sdague> ok, maybe another topic, because we seem to not be agreed here
13:31:45 <johnthetubaguy> yeah, lets come back to this later
13:31:51 <sdague> I don't think we're going to get reasonable feedback off of the ops list for a thing like this
13:32:06 <sdague> and I think standardization is more important than missing info
13:32:27 <johnthetubaguy> +1 for just getting this standardized
13:32:30 <alex_xu> +1
13:32:32 <johnthetubaguy> even with missing info
13:33:47 <alex_xu> so let me give feedback on the spec, then we revisit the id problem later?
13:35:11 <sdague> alex_xu: sounds good
13:35:50 <alex_xu> #action alex_xu to give feedback on the diag API spec about just standardizing the API first
13:36:09 <alex_xu> so let's go to the next one
13:36:16 <alex_xu> action: johnthetubaguy to sketch out what an ideal security group workflow looks like in the nova api now with neutron as the presumed backend
13:36:33 <alex_xu> #link https://etherpad.openstack.org/p/ocata-nova-security-groups
13:36:37 <johnthetubaguy> so that etherpad I had includes some ideas
13:37:09 <johnthetubaguy> nova boot --nic net-id=uuid_net1,security-group=db --nic net-id=uuid_net2,security-group=api --flavor 1 --image test-image test-server
13:37:17 <johnthetubaguy> is where I was thinking we could go
13:37:25 <johnthetubaguy> to add a security group
13:38:00 <johnthetubaguy> then you just need to look at the port in neutron to find out or modify any details around that
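
To make the idea concrete, a hypothetical server-create request body carrying a per-network security group; the per-entry "security_groups" key is the proposed addition and does not exist in the current API:

    POST /servers

    {
        "server": {
            "name": "test-server",
            "imageRef": "<test-image uuid>",
            "flavorRef": "1",
            "networks": [
                {"uuid": "uuid_net1", "security_groups": ["db"]},
                {"uuid": "uuid_net2", "security_groups": ["api"]}
            ]
        }
    }
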
13:38:16 <alex_xu> johnthetubaguy: so we just need to fail when it is used with nova-network?
13:38:17 <sdague> johnthetubaguy: yeh, that makes a lot of sense.
13:38:39 <johnthetubaguy> alex_xu: I am thinking nova-network dies in a few weeks, let's just worry about neutron
13:38:39 <sdague> alex_xu: well nova-net calls after 2.35 are pretty suspect anyway
13:38:55 <johnthetubaguy> right, use the older version of the API if you must use nova-network still
13:38:58 <alex_xu> ok, cool
13:39:14 <sdague> johnthetubaguy: can you turn that into a spec? That seems like it would solve a bunch of things
13:39:27 <alex_xu> I'm wondering whether we should stop things that work for nova-network at some point after 2.35
13:39:53 <johnthetubaguy> yeah, I can make that into a spec
13:40:36 <mriedem> alex_xu: well, deleting nova-network is going to make it stop after 2.35
13:40:39 <mriedem> and before 2.35 for that matter
13:40:57 <johnthetubaguy> right, this becomes simpler once we nuke nova-net
13:41:20 <alex_xu> ah, I see now
13:41:21 <johnthetubaguy> #action johnthetubaguy to create a spec out of ideas in https://etherpad.openstack.org/p/ocata-nova-security-groups
13:41:46 <alex_xu> ok, so let us go to the next one
13:41:49 <johnthetubaguy> the bit I like about --nic is that it already means nothing for nova-net
13:42:04 <alex_xu> johnthetubaguy: yea
13:42:10 <alex_xu> action: mriedem to write up spec for os-virtual-interface deprecation
13:42:11 <mriedem> not really
13:42:21 <mriedem> --nic can pass the network id or fixed ip for nova-net
13:42:26 <mriedem> you just can't pass a port for nova-net
13:42:39 <mriedem> alex_xu: i didn't get that done
13:42:59 <johnthetubaguy> mriedem: oh, I didn't know that was a thing
13:43:13 <alex_xu> mriedem: it's fine, so let us talk about that when it is ready
13:43:40 <alex_xu> so next one
13:43:45 <alex_xu> action: alex_xu to write a spec for deprecating the proxy api to set/delete image metadata
13:43:57 <alex_xu> #link https://review.openstack.org/377528
13:44:06 <alex_xu> mriedem already gave some review; I need to update the spec
13:44:53 <alex_xu> the only highlight in the spec is that there is a quota check in the server create_image API, which I thought we should remove
13:45:35 <mriedem> i think the quota check should be gone, it's actually kind of silly that nova has that at all given glance could blow up even if you pass the nova quota check for image metadata properties
13:45:38 <mriedem> if the glance quota is lower
13:45:58 <alex_xu> mriedem: yea
13:46:09 <mriedem> the thing i'm nervous about is moving the image create before the volume snapshot
13:46:25 <mriedem> not for any known reason right now except dragons
13:46:36 <alex_xu> yea, I'll double-check that, probably test it in my local env, to ensure it is safe
13:46:52 <mriedem> note also,
13:46:59 <mriedem> cinder probably has a quota on volume snapshots
13:47:14 <mriedem> so you are fixing one thing by moving image create before vol snapshot, but could be breaking the quota check on vol snapshot
13:47:18 <mriedem> well, not breaking but you could still fail
13:47:46 <mriedem> fwiw this is probably why nova-api checks port quota with neutron before casting to the compute to create new ports
13:47:49 <mriedem> racy as that is
13:48:30 <johnthetubaguy> mriedem: I have a spec in the works to possibly change that, but yeah
13:48:44 <mriedem> # Number of volume snapshots allowed per project (integer value) #quota_snapshots = 10
13:48:47 <mriedem> ^ from cinder.conf
13:49:04 <mriedem> alex_xu: so i think i'd rather not even move the image create before the vol snapshot
13:49:08 <mriedem> since either could fail a quota check
13:49:08 <johnthetubaguy> honestly, leaving the order the same is probably a good idea, just to keep the API semantics the same
13:49:18 <johnthetubaguy> yeah, because either could fail
13:49:23 <mriedem> yes i don't want to introduce some new weirdness we don't know about if we don't have to
13:49:30 <alex_xu> but if we do the vol snapshot first and then the image create on glance fails on quota, that sounds wasteful. on the glance side, it is just a db call.
13:49:59 <mriedem> well, vol snapshot quota check in cinder should also be a db call,
13:50:09 <mriedem> or do you mean, create image is a db call, then you still have to upload the data to glance
13:50:13 <alex_xu> mriedem: even if we don't move that, it sounds like we should have some rollback code to remove the vol snapshot when the quota check fails on glance?
13:50:15 <mriedem> which you won't have to do if you fail the image create
13:50:57 <johnthetubaguy> they could both fail for other reasons though, feels like we should fix that anyways
13:51:00 <mriedem> we'd probably want that yeah
13:51:07 <mriedem> right, it sounds like it's exposing a bug today
13:51:30 <johnthetubaguy> I mean treat it separately outside this spec, I guess
13:51:31 <mriedem> we could test it by creating a volume-backed instance, set cinder snapshot quota to 0 and try to snapshot
13:51:35 <mriedem> and make sure everything is cleaned up
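
A rough reproduction recipe for that test, assuming a devstack-like environment and the standard nova/cinder/glance CLIs; the ids and names are illustrative:

    # Drop the project's volume snapshot quota to 0.
    cinder quota-update --snapshots 0 $PROJECT_ID

    # Boot a volume-backed server from an image.
    nova boot --flavor 1 \
        --block-device source=image,id=$IMAGE_ID,dest=volume,size=1,bootindex=0 \
        test-server

    # Snapshot it; the volume snapshot should fail the cinder quota check.
    nova image-create test-server quota-test-snap

    # Verify nothing leaked: no orphaned volume snapshots or half-created images.
    cinder snapshot-list
    glance image-list
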
13:51:44 <mriedem> johnthetubaguy: +1 - i think it's just a bug
13:51:51 <johnthetubaguy> yeah
13:52:03 <alex_xu> ok, I can move that to a bug
13:52:29 <johnthetubaguy> well, we have the rest of the spec, and a separate bug, to be clear
13:53:16 <mriedem> yes the rest of the spec is pretty clear
13:53:23 <alex_xu> #action alex_xu to double-check moving the image create, then split the work into a spec and a bug
13:54:01 <alex_xu> ok, so let us go to the next one?
13:54:17 <mriedem> yeah 6 min
13:54:37 <alex_xu> action: mriedem to follow up with gmann about testing swap-volume in tempest
13:54:56 <alex_xu> looks like we can't finish all the items today
13:55:24 <mriedem> gmann is out for a while so i'll handle that tempest test,
13:55:28 <mriedem> it just needs to move from scenario to api
13:55:46 <mriedem> #action mriedem to move swap volume test from scenario to api https://review.openstack.org/#/c/299830/
13:55:54 <alex_xu> mriedem: cool, thanks
13:56:06 <alex_xu> action: sdague to start in on a capabilities spec
13:56:18 <sdague> #link https://review.openstack.org/#/c/377756/
13:56:31 <sdague> it's pretty early, but hopefully a reasonable starting point
13:56:58 <alex_xu> yea, sounds like it is related to the qualitative part of placement
13:57:18 <sdague> the key thing I wanted to capture is that we're going to need a way to query *all* possible capabilities, as well as what's allowed on any particular resource
13:57:31 <sdague> otherwise we'll be in versioning hell
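
A hypothetical shape for that: one endpoint enumerating every capability the deployment could ever report, plus a per-resource endpoint returning the subset that currently applies; endpoint names and payloads are illustrative, not from the spec under review:

    GET /capabilities
    {"capabilities": ["resize", "live-migrate", "shelve", "pause"]}

    GET /servers/{server_uuid}/capabilities
    {"capabilities": ["resize", "pause"]}
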
13:57:57 <alex_xu> +1
13:58:05 <sdague> I'm going to be out until the next meeting, so feedback there probably worth while
13:58:20 <alex_xu> oops, I'm on holiday next week also
13:58:32 <sdague> there are a couple of good feedback points
13:58:46 <alex_xu> do we still want to cancel this meeting next week?
13:59:04 <alex_xu> oops, you mean out before the next meeting
13:59:13 <sdague> one of which is what degree of machine consumption vs. doc consumption we should end up with
13:59:28 <johnthetubaguy> sdague: I like the idea of ensuring we can see the full list of possible capabilities, to help with versioning, as you say
13:59:29 <sdague> the other is about granularity, though I honestly think that mostly comes later
13:59:49 <alex_xu> 1 mins left
13:59:53 <sdague> granularity gets decided independently of the mechanism to expose I think
13:59:59 <johnthetubaguy> I am quite keen on very coarse-grained, so a human could understand things
14:00:14 <alex_xu> so let us back to nova channel
14:00:18 <alex_xu> thanks all
14:00:18 <sdague> sure
14:00:20 <alex_xu> #endmeeting