13:00:29 <gmann> #startmeeting nova api
13:00:29 <openstack> Meeting started Wed Apr  5 13:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:33 <openstack> The meeting name has been set to 'nova_api'
13:00:47 <gmann> Hi, who all is here today?
13:01:05 <johnthetubaguy> o/
13:01:18 <mriedem> o/
13:01:57 <gmann> let's wait a min in case sdague and more people show up
13:02:56 <gmann> let's start
13:02:57 <jichen> o/
13:03:00 <gmann> #topic priorities
13:03:25 <gmann> policy spec #link https://review.openstack.org/433037 https://review.openstack.org/427872
13:03:33 <gmann> johnthetubaguy: your turn
13:03:53 <johnthetubaguy> really, it's just a please review
13:04:00 <johnthetubaguy> not a lot of updates
13:04:23 <johnthetubaguy> there is discussion in keystone about the middleware alternative
13:04:48 <johnthetubaguy> the first one doesn't really conflict with that
13:05:11 <johnthetubaguy> the second one kinda fights it, depending on how you look at it
13:05:13 <gmann> scope check one?
13:05:19 <johnthetubaguy> yeah, scope check one is fine
13:05:29 <johnthetubaguy> the second one is the additional roles
13:06:26 <johnthetubaguy> anyone got questions on those?
13:06:29 <gmann> does keystone have anything up, spec etc, for the middleware alternative ?
13:06:40 <johnthetubaguy> yeah, I can find the link, one sec
13:07:29 <johnthetubaguy> #link https://review.openstack.org/#/c/452198/
13:07:38 <gmann> johnthetubaguy: thanks
13:08:16 <johnthetubaguy> I am probably the most vocal hater of the current middleware plan, mostly because it seems to create a heap of issues for operators that we'd just spend time fixing
13:09:21 <johnthetubaguy> mriedem: do you have the bandwidth to take a look at that policy spec again?
13:09:41 <mriedem> johnthetubaguy: maybe the first one
13:09:51 <johnthetubaguy> yeah, the scope one is the important one
13:10:12 <johnthetubaguy> largely the bit I would like to get folks started on is the improved functional tests
13:11:30 <gmann> yea those will be nice.
13:11:40 <johnthetubaguy> I guess it's time to move on
13:12:03 <gmann> policy docs are in nice shape and making good progress  #link https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/policy-docs
13:12:39 <gmann> i think those are almost done with what we have up right now
13:12:56 <johnthetubaguy> yeah, its looking really close
13:13:29 <johnthetubaguy> #link https://docs.openstack.org/developer/nova/sample_policy.html
13:13:52 <gmann> the security one was a real mess with deprecated APIs, and the other server one with a single policy
13:13:55 <gmann> #link https://review.openstack.org/#/c/452309/
13:13:57 <johnthetubaguy> mriedem: if I create a specless BP to remove the discoverable rules, would you approve that?
13:14:14 <mriedem> context?
13:14:23 <gmann> johnthetubaguy: can you check if my suggestion looks fine, or if there is a better way to describe those
13:14:25 <mriedem> is that just b/c now that extensions are gone,
13:14:28 <mriedem> discoverability doesn't matter?
13:14:32 <johnthetubaguy> mriedem: yeah, sorry, see the sample policy and search for "discoverable"
13:14:38 <johnthetubaguy> right
13:14:53 <johnthetubaguy> it was used to hide which extensions exist, but you can't turn any off any more
13:15:04 <gmann> yea, "discoverable" is just used in extension_info right now
13:15:18 <mriedem> we don't actually check the policy rule for that anywhere?
13:15:18 <johnthetubaguy> basically hard code everything to be visible, and remove the policy
13:15:30 <johnthetubaguy> mriedem: I thought we did
13:16:30 <gmann> mriedem: johnthetubaguy only in extension_info while showing the list of extensions, which isn't of any use i think
13:17:04 * sdague sneaks in late
13:17:27 <johnthetubaguy> mriedem: you might be right, I can't see them being used anywhere any more
13:17:32 <gmann> #link https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extension_info.py#L225
13:17:49 <mriedem> if context.can(action, fatal=False):
13:17:56 <mriedem> yeah same thing gmann linked
13:17:59 <mriedem> that's where it's checked
13:18:12 <johnthetubaguy> yeah, just found that too
13:18:28 <johnthetubaguy> seems fine to just return true there
13:18:36 <gmann> yea
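For context, a minimal sketch of what "just return true there" could look like in extension_info.py. This is hypothetical code, not the actual patch; the helper name and structure are assumptions based on the check gmann and mriedem link above:

    # Sketch only: replaces the per-extension 'discoverable' policy check
    # in nova/api/openstack/compute/extension_info.py with a hard-coded
    # "everything is visible" behaviour.
    def _filter_discoverable(context, extensions):
        visible = []
        for alias, ext in extensions.items():
            # Old behaviour (what the blueprint would remove):
            #   action = 'os_compute_api:%s:discoverable' % alias
            #   if not context.can(action, fatal=False):
            #       continue
            visible.append(ext)  # hard-coded: every extension is discoverable
        return visible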
13:19:22 <mriedem> i guess we deprecated extensions at some point https://developer.openstack.org/api-ref/compute/#extensions-extensions-deprecated
13:19:24 <mriedem> maybe that was liberty?
13:19:35 <mriedem> https://developer.openstack.org/api-guide/compute/extensions.html
13:19:46 <johnthetubaguy> possibly
13:20:01 <sdague> liberty sounds about right
13:20:03 <mriedem> so it looks like the only place discoverable is used is the GET /extensions APIs
13:20:05 <johnthetubaguy> I think the config options got dropped a cycle or two ago
13:20:10 <johnthetubaguy> yeah
13:20:16 <gmann> yea
13:20:19 <johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/remove-discoverable-policy-rules
13:20:33 <mriedem> and since you can't whitelist/blacklist extensions, they should all be discoverable now?
13:20:38 <johnthetubaguy> yeah
13:20:58 <mriedem> ok seems fair
13:21:23 <mriedem> https://github.com/openstack/nova/commit/75280e582e6d607b38bc2af9f51f80fd5135453c
13:21:44 <mriedem> https://github.com/openstack/nova/commit/04f8612aa99e618767a85d2f674ecdfc01955ff7
13:21:46 <mriedem> added the wording
13:21:53 <mriedem> so that was newton
13:22:13 <johnthetubaguy> so I think we keep the API
13:22:22 <johnthetubaguy> it's just killing the policy related to it
13:22:30 <johnthetubaguy> so it's not a distraction from the other policy rules
13:22:31 <mriedem> yeah
13:22:43 <mriedem> sdague: you fine with this?
13:22:47 <sdague> yeh
13:23:00 <mriedem> then let it be so
13:23:02 <johnthetubaguy> it's super interesting, if you go back a few cycles, the policy file was a monster (compute.api.py and two API code bases)
13:23:25 <johnthetubaguy> we have got a long way forward here
13:23:36 <sdague> johnthetubaguy: yeh, it's amazing how progress does happen :)
13:23:55 <mriedem> bp is approved
13:24:02 <johnthetubaguy> mriedem: thanks
13:24:03 <gmann> cool, thanks
13:24:24 * gmann mriedem approved before adding Action item :)
13:24:44 <gmann> anything else on priority items ?
13:25:12 <gmann> let's move then
13:25:14 <gmann> #topic open
13:25:45 <mriedem> i've got something if no one else does
13:25:53 <sdague> mriedem: go for it
13:25:54 <gmann> the api extension removals are in good review progress
13:26:05 <gmann> mriedem: please go ahead
13:26:19 <mriedem> i've got this spec for removing bdm device name from server create and volume attach POST requests https://review.openstack.org/#/c/452546/
13:26:29 <mriedem> ftersin pointed out in the ML and the spec review,
13:26:39 <mriedem> about the image bdm override use case on the server create
13:26:54 <mriedem> which i didn't know was a thing, but apparently ec2 api users care about it b/c people using aws do this a lot
13:26:55 <mriedem> i guess
13:27:03 <mriedem> so,
13:27:15 <mriedem> i've basically given up on trying to solve the server create part of this for pike,
13:27:31 <mriedem> but i'm wondering if there is use in still removing device from the volume attach post request,
13:27:38 <mriedem> since that doesn't have the image bdm override thing
13:27:57 <mriedem> for volume attach, it's just a request parameter field we don't honor, at least not for libvirt
13:28:08 <mriedem> i can't say for sure how the user-supplied device name works for hyperv, xen or vmware
13:28:40 <mriedem> fin
13:28:48 <sdague> mriedem: is there something about the wording of the API docs that might help here?
13:28:48 * johnthetubaguy is pondering it
13:29:04 <mriedem> sdague: in the api docs we say for that field that for libvirt it's not honored
13:29:06 <mriedem> since liberty
13:29:21 <mriedem> but, for example, tempest still passes it in most tests
13:29:40 <mriedem> https://developer.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail#attach-a-volume-to-an-instance
13:29:45 <mriedem> "Name of the device such as, /dev/vdb. Omit or set this parameter to null for auto-assignment, if supported. If you specify this parameter, the device must not exist in the guest operating system. Note that as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name. This is the same behavior as if the device name parameter is not supplied on the request."
13:30:11 <sdague> mriedem: so, when it's used by EC2, what value is passed?
13:30:22 <mriedem> the same thing that the user supplies,
13:30:26 <mriedem> the thing about the image bdm override case,
13:30:48 <mriedem> is that the device_name is used to match the user-supplied bdms in the server create request with any defined in the image metadata
13:31:02 <mriedem> because in our shitty data model, we use instance_uuid and device_name as a sort of unique constraint
13:31:09 <mriedem> for an update_or_create operation in the db api
13:31:41 <mriedem> so as a hack, you can boot a server with an image that has bdm metadata, and override / customize the bfv experience with your own bdms in the request, iff they match the device names in the image meta
13:31:45 <mriedem> which is a completely terrible ux
13:31:52 <mriedem> but it's a thing people apparently do
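Roughly, the override ftersin describes looks like this. It is an illustrative request body only: the image and flavor references are made-up placeholders, and the matching behaviour is the (instance_uuid, device_name) update_or_create mriedem mentions above:

    # Assume the image carries bdm metadata roughly like:
    #   {'device_name': '/dev/vdb', 'source_type': 'blank',
    #    'destination_type': 'volume', 'volume_size': 10}
    # A server create request can customize that entry by supplying a bdm
    # with the *same* device_name, which is what makes device_name hard to
    # drop from the server create API.
    server_create_body = {
        'server': {
            'name': 'bdm-override-demo',
            'imageRef': IMAGE_WITH_BDM_METADATA_UUID,  # placeholder
            'flavorRef': FLAVOR_ID,                    # placeholder
            'block_device_mapping_v2': [{
                'device_name': '/dev/vdb',       # must match the image bdm
                'source_type': 'blank',
                'destination_type': 'volume',
                'volume_size': 50,               # override the image's size
                'delete_on_termination': True,   # override cleanup behaviour
            }],
        },
    }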
13:31:53 <sdague> ah, yeh, this bleeds out of the amazon api because it's xen bleed-through I think
13:32:40 <mriedem> as far as i know, xen might be the only backend that honors the user-supplied device name
13:32:48 <sdague> yeh, that's my expectation
13:32:50 <mriedem> there is that whole is_xen() check in the api code
13:33:05 <sdague> you get a bunch of guest assistance because of xenbus
13:33:24 <johnthetubaguy> yeah, the device name is definitely xen specific there
13:33:27 <mriedem> https://github.com/openstack/nova/blob/master/nova/compute/utils.py#L166
13:33:49 <johnthetubaguy> although I am not sure the ordering is any more certain than with KVM; I could be wrong
13:33:55 <sdague> yeh, I don't know, this seem like one of those hard problems to unwind because of the EC2 case
13:34:18 <mriedem> right, so it seems like solving that ux problem,
13:34:22 <mriedem> or replacing it,
13:34:29 <mriedem> is going to require some new thinking on a new api design for that use case,
13:34:37 <mriedem> and we don't even have a tempest test for this use case today as far as i know,
13:34:38 <sdague> yeh
13:34:46 <mriedem> ftersin said it was all regressed a long time ago and ndipanov fixed it
13:34:56 <mriedem> so i was going to work on a tempest test for this in pike so we don't regress it
13:34:59 <sdague> it would be helpful if the ec2 folks got more engaged early on in this
13:35:20 <mriedem> i was thinking we could maybe long-term use the bdm tag as a correlation field for overrides,
13:35:36 <sdague> yeh, the device tagging makes a bunch more sense to me
13:35:38 <mriedem> but there are things we have to do before that, like actually put the bdm tag in the snapshot meta, and expose bdm tags out of the api
13:35:38 <gmann> yea tempest has just passing in request (POST server ) #link https://github.com/openstack/tempest/blob/d44b295a168f067e7352895f4ce0ad32a3ec672d/tempest/scenario/test_volume_migrate_attached.py#L59
13:36:00 <mriedem> so i'm scrapping the server create idea for pike
13:36:07 <mriedem> was wondering about volume attach,
13:36:15 <sdague> mriedem: honestly, I'm not super clear I really understand the use case from the ec2 side. I feel like I'd need to see pictures to imagine all the things they are doing there
13:36:17 <mriedem> but at this point it's just kind of a distraction and i'm happy to drop it
13:36:30 <mriedem> sdague: ftersin gives an example in the spec review
13:36:47 <mriedem> https://review.openstack.org/#/c/452546/1/specs/pike/approved/remove-device-from-bdm-post-requests.rst@44
13:36:52 <mriedem> it's not a picture, but a scenario
13:37:12 <mriedem> so i'll work on a tempest test for it to see that it works and then we have a baseline in case we make changes later
13:38:52 <sdague> ok
13:39:11 <sdague> while on the topic of BDMs
13:39:22 <sdague> https://review.openstack.org/#/c/408151/ - is the RAID bdm one
13:39:30 <sdague> which I really don't want us to do
13:39:39 <mriedem> i had already -1ed that earlier
13:39:41 <mriedem> for the same thing
13:39:48 <mriedem> raid config is not going to belong in bdms
13:39:54 <mriedem> just b/c they are convenient
13:39:54 <mriedem> imo
13:40:03 <sdague> but, it would be nice to figure out what the actual use case and path forward is for ironic
13:40:08 <sdague> mriedem: you want to -2 it then?
13:40:39 <mriedem> i'd have to go back and read the latest comments first
13:40:50 <mriedem> i don't want to offend anyone here with a -2
13:40:55 <mriedem> since it's roman and ironic people
13:41:45 <mriedem> the easy thing to do is userdata
13:41:54 <mriedem> but you don't get api validation on that
13:42:13 <sdague> you can't do userdata I don't think
13:42:21 <sdague> because it happens too late
13:43:22 <sdague> I think that part of it is also a mismatch between the fact that ironic as a project does let vendor specific stuff directly through to drivers, and we try hard not to do that
13:44:11 <mriedem> well that and baremetal is a weird bolt on to nova
13:44:19 <sdague> well, sort of
13:44:30 <mriedem> i'm referring to resource tracking and scheduling
13:44:33 <mriedem> weirdness
13:44:36 <sdague> sure
13:44:46 <mriedem> our api is more biased to xen than bm
13:44:47 <sdague> any time you get an abstraction, you are going to lose fidelity
13:45:12 <mriedem> i'd have to read this again, but there is also a proposed forum session for raid config
13:45:22 <mriedem> http://forumtopics.openstack.org/cfp/details/16
13:46:15 <sdague> ah, yeh, that's the HPC folks
13:46:26 <sdague> honestly, that seems like the right conversation location
13:46:32 <sdague> I would -2 it until that forum session
13:46:34 <mriedem> fwiw this is pretty much the reason for the mogan project, for a nova derivative whose API is completely biased toward baremetal use cases
13:47:00 <mriedem> but that gets everyone riled up
13:48:12 <mriedem> ok i noted in the spec
13:48:16 <mriedem> that defers this out past pike
13:48:46 <mriedem> so i've got another thing if we're done with raid
13:48:57 <sdague> yeh
13:49:00 <mriedem> ok,
13:49:18 <mriedem> so in newton i was proposing to deprecate os-interface b/c it's a proxy to neutron to list/show ports,
13:49:34 <mriedem> and we want to expose vif tags which are stored in the nova virtual_interfaces table, which was historically nova-net only
13:49:42 <mriedem> but with vif tags we use that table for ports too
13:49:54 <mriedem> so we have this weird case where os-virtual-interfaces is nova-net only, and os-interface is neutron only
13:50:04 <mriedem> i think what we want to do is deprecate os-virtual-interface,
13:50:08 <mriedem> because it's nova-net only
13:50:11 <sdague> yeh
13:50:20 <mriedem> and as johnthetubaguy pointed out in that deprecation spec of mine,
13:50:35 <mriedem> move the GET from os-virtual-interfaces to os-interface
13:50:44 <mriedem> and in os-interface, return vif tags (eventually)
13:51:09 <mriedem> now since we don't have virtual_interfaces in the database for any ports created before newton,
13:51:22 <mriedem> we have two options for listing/showing ports in os-interface,
13:51:32 <mriedem> 1. proxy to neutron if the port isn't in the virtual_interfaces table,
13:51:47 <mriedem> 2. use the info_cache, but that's per-instance
13:51:48 * johnthetubaguy shakes head at the amount of work we created for ourselves in merging half baked and competing APIs
13:51:49 <mriedem> so i'm not sure that works
13:52:05 <mriedem> johnthetubaguy: yeah i feel like most of my blueprints lately are about removing code
13:52:07 <mriedem> :)
13:52:42 <sdague> mriedem: deleting is always good
13:52:42 <mriedem> oh wait,
13:52:48 <mriedem> #2 might not be bad
13:53:11 <mriedem> so os-virtual-interfaces is only a server extension
13:53:13 <mriedem> so we have the server_id
13:53:19 <johnthetubaguy> mriedem: sdague: our love of removing code always seems to remind me of this: https://www.youtube.com/watch?v=F9nGyPz9uT0
13:53:25 <mriedem> so in that case, we can get the instance.info_cache and list/show ports from that i think
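A rough sketch of that idea: build the os-interface listing from the cached network info instead of a neutron round trip. This is hypothetical; the helper is made up, the field names follow nova.network.model, and things like port_state (and the vif tags themselves, which live in the virtual_interfaces table) would still need separate handling:

    def _interfaces_from_cache(instance):
        """List attached ports from instance.info_cache rather than
        proxying the request to neutron."""
        results = []
        for vif in instance.get_network_info():
            results.append({
                'port_id': vif['id'],
                'net_id': vif['network']['id'],
                'mac_addr': vif['address'],
                'fixed_ips': [ip['address'] for ip in vif.fixed_ips()],
                # port_state and vif tags are not in the cache; they would
                # need a neutron call / virtual_interfaces lookup.
            })
        return results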
13:53:49 <johnthetubaguy> (warning contains british humour, yes spelt like that ^)
13:54:07 <johnthetubaguy> for the record we have some tidying up of the info_cache
13:54:15 <johnthetubaguy> some of it syncs from neutron, some of it only we know
13:54:17 <mriedem> this also reminds me,
13:54:26 <gmann> can we return vif tags along with device or server tags as consolidated tags on the server resource and deprecate os-interface ?
13:54:39 <mriedem> GET /servers/server_id/port_id in os-interface doesn't actually use the server,
13:54:43 <mriedem> it just goes directly to neutron for the port
13:54:59 <mriedem> gmann: we can't deprecate os-interface because of the POST and DELETE operations
13:55:04 <mriedem> to add/remove ports to a server
13:55:04 <sdague> mriedem: interesting....
13:55:27 <sdague> does neutron ever know anything about our server ids? or is it kept unaware?
13:55:41 <mriedem> sdague: the port.device_id is the owning server
13:55:45 <mriedem> it's the instance uuid
13:55:49 <gmann> mriedem: ah yea
13:55:59 <mriedem> i should step back,
13:56:07 <mriedem> my long-term goal is to expose bdm tags and vif tags out of the api
13:56:14 <mriedem> the bdm tags one is easy, we have os-volume_attachments
13:56:27 <mriedem> i'm stuck on this vif tag one b/c we have these two competing apis and data models
13:56:51 <mriedem> i *think* the way forward is deprecate os-virtual-interface, and use os-interface, but stop proxying to neutron in os-interface and use the info_cache
13:57:07 <sdague> mriedem: that seems sensible to me
13:57:15 <johnthetubaguy> mriedem: oh yeah, os-volume_attachments only lists volumes right?
13:57:16 <mriedem> at some point we might want an action api somewhere to force a refresh of the info_cache
13:57:22 <mriedem> johnthetubaguy: yeah via bdms
13:57:26 <johnthetubaguy> yeah
13:57:41 <mriedem> i'm actually surprised we don't have a force info cache refresh api
13:57:48 <mriedem> that would seem useful
13:57:50 <mriedem> but i digress
13:57:57 <mriedem> and we have 2 minutes
13:58:01 <gmann> or refresh every time a GET comes in
13:58:11 <gmann> yea 2 min left
13:58:30 <mriedem> gmann: but then we get back into that proxy thing
13:58:41 <mriedem> i'm not sure if that's better than an on-demand forced refresh
13:58:56 <mriedem> we periodically heal the info cache on the computes anyway
13:59:00 <mriedem> but yeah it might be a lossy api
13:59:05 <gmann> yea
13:59:17 <mriedem> the network API operations in the compute should also be refreshing the info cache if they change resources tied to an instance
13:59:25 <mriedem> we've seen bugs where that doesn't happen, like deleting a floating ip
13:59:30 <gmann> i wonder if users will use that, or at least remember to use it before getting tags etc
13:59:40 <mriedem> anyway, i might just brain dump this to the ML first before working on a spec
13:59:44 <mriedem> since i'm a bit spec'ed out
13:59:46 <gmann> anyways 1 min left,
14:00:01 <gmann> yea
14:00:18 <mriedem> we're out of time
14:00:31 <gmann> thanks everyone let's jump to nova channel
14:00:34 <gmann> #endmeeting