13:00:29 #startmeeting nova api
13:00:29 Meeting started Wed Apr 5 13:00:29 2017 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:33 The meeting name has been set to 'nova_api'
13:00:47 Hi, who all is here today?
13:01:05 o/
13:01:18 o/
13:01:57 let's wait for a min to have sdague and more people join, just in case
13:02:56 let's start
13:02:57 o/
13:03:00 #topic priorities
13:03:25 policy spec #link https://review.openstack.org/433037 https://review.openstack.org/427872
13:03:33 johnthetubaguy: your turn
13:03:53 really, it's just a please-review
13:04:00 not a lot of updates
13:04:23 there is discussion in keystone about the middleware alternative
13:04:48 the first one doesn't really conflict with that
13:05:11 the second one kinda fights it, depending on how you look at it
13:05:13 the scope check one?
13:05:19 yeah, the scope check one is fine
13:05:29 the second one is the additional roles
13:06:26 anyone got questions on those?
13:06:29 does keystone have anything up, a spec etc., for the middleware alternative?
13:06:40 yeah, I can find the link, one sec
13:07:29 #link https://review.openstack.org/#/c/452198/
13:07:38 johnthetubaguy: thanks
13:08:16 I am probably the most vocal hater of the current middleware plan, mostly because it seems to create a heap of issues for operators that we just spend time fixing
13:09:21 mriedem: do you have the bandwidth to take a look at that policy spec again?
13:09:41 johnthetubaguy: maybe the first one
13:09:51 yeah, the scope one is the important one
13:10:12 largely, the bit I would like to get folks started on is the improved functional tests
13:11:30 yea those will be nice.
13:11:40 I guess it's time to move on
13:12:03 policy docs are in nice shape and making good progress #link https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/policy-docs
13:12:39 i think those are almost done with what we have up right now
13:12:56 yeah, it's looking really close
13:13:29 #link https://docs.openstack.org/developer/nova/sample_policy.html
13:13:52 the security one was a real mess with deprecated APIs, and the other one is the server one with a single policy
13:13:55 #link https://review.openstack.org/#/c/452309/
13:13:57 mriedem: if I create a specless BP to remove the discoverable rules, would you approve that?
13:14:14 context?
13:14:23 johnthetubaguy: can you check if my suggestion looks fine, or if there is a better way to describe those
13:14:25 is that just b/c now that extensions are gone,
13:14:28 discoverability doesn't matter?
13:14:32 mriedem: yeah, sorry, see the sample policy and search for "discoverable"
13:14:38 right
13:14:53 they used to hide that extensions exist, but you can't turn any off any more
13:15:04 yea, "discoverable" rules are just used in extension_info right now
13:15:18 we don't actually check the policy rule for that anywhere?
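(For context: the "discoverable" rules under discussion gate only the GET /extensions listing. A minimal sketch of the check the discussion tracks down below in extension_info.py; the helper name and loop structure here are illustrative, but the context.can(action, fatal=False) call is the real one quoted from that file:)

    # Sketch of the per-extension "discoverable" policy check.
    def _filter_discoverable(context, aliases):
        discoverable = []
        for alias in aliases:
            action = 'os_compute_api:%s:discoverable' % alias
            # fatal=False means a failed check hides the extension from
            # the listing rather than raising a 403.
            if context.can(action, fatal=False):
                discoverable.append(alias)
        return discoverable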
13:15:18 basically hard code everything to be visible, and remove the policy
13:15:30 mriedem: I thought we did
13:16:30 mriedem: johnthetubaguy: only in extension_info while showing the list of extensions, which is not of any use i think
13:17:04 * sdague sneaks in late
13:17:27 mriedem: you might be right, I can't see them being used anywhere any more
13:17:32 #link https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extension_info.py#L225
13:17:49 if context.can(action, fatal=False):
13:17:56 yeah, same thing gmann linked
13:17:59 that's where it's checked
13:18:12 yeah, just found that too
13:18:28 seems fine to just return true there
13:18:36 yea
13:19:22 i guess we deprecated extensions at some point https://developer.openstack.org/api-ref/compute/#extensions-extensions-deprecated
13:19:24 maybe that was liberty?
13:19:35 https://developer.openstack.org/api-guide/compute/extensions.html
13:19:46 possibly
13:20:01 liberty sounds about right
13:20:03 so it looks like the only place discoverable is used is the GET /extensions APIs
13:20:05 I think the config options got dropped a cycle or two ago
13:20:10 yeah
13:20:16 yea
13:20:19 #link https://blueprints.launchpad.net/nova/+spec/remove-discoverable-policy-rules
13:20:33 and since you can't whitelist/blacklist extensions, they should all be discoverable now?
13:20:38 yeah
13:20:58 ok, seems fair
13:21:23 https://github.com/openstack/nova/commit/75280e582e6d607b38bc2af9f51f80fd5135453c
13:21:44 https://github.com/openstack/nova/commit/04f8612aa99e618767a85d2f674ecdfc01955ff7
13:21:46 added the wording
13:21:53 so that was newton
13:22:13 so I think we keep the API
13:22:22 it's just killing the policy related to it
13:22:30 so it's not a distraction from the other policy rules
13:22:31 yeah
13:22:43 sdague: you fine with this?
13:22:47 yeh
13:23:00 then let it be so
13:23:02 it's super interesting: if you go back a few cycles, the policy file was a monster (compute.api.py and two API code bases)
13:23:25 we have come a long way forward here
13:23:36 johnthetubaguy: yeh, it's amazing how progress does happen :)
13:23:55 bp is approved
13:24:02 mriedem: thanks
13:24:03 cool, thanks
13:24:24 * gmann mriedem approved before adding the Action item :)
13:24:44 anything else on priority items?
13:25:12 let's move on then
13:25:14 #topic open
13:25:45 i've got something if no one else does
13:25:53 mriedem: go for it
13:25:54 api extension removals are making good review progress
13:26:05 mriedem: please go ahead
13:26:19 i've got this spec for removing the bdm device name from server create and volume attach POST requests https://review.openstack.org/#/c/452546/
13:26:29 ftersin pointed out, in the ML and the spec review,
13:26:39 the image bdm override use case on server create
13:26:54 which i didn't know was a thing, but apparently ec2 api users care about it b/c people using aws do this a lot
13:26:55 i guess
13:27:03 so,
13:27:15 i've basically given up on trying to solve the server create part of this for pike,
13:27:31 but i'm wondering if there is use in still removing device from the volume attach post request,
13:27:38 since that doesn't have the image bdm override thing
13:27:57 for volume attach, it's just a request parameter field we don't honor, at least not for libvirt
13:28:08 i can't say for sure how the user-supplied device name works for hyperv, xen or vmware
13:28:40 fin
13:28:48 mriedem: is there something about the wording of the API docs that might help here?
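(For reference, a sketch of the volume attach request body being discussed, with the user-supplied device field mriedem proposes to remove; the volume UUID here is illustrative:)

    # POST /servers/{server_id}/os-volume_attachments
    body = {
        "volumeAttachment": {
            "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
            # Optional user-supplied device name: honored by xen, but
            # ignored by the libvirt driver since Liberty (see the
            # api-ref excerpt quoted below).
            "device": "/dev/vdb",
        }
    }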
13:28:48 * johnthetubaguy is pondering it
13:29:04 sdague: in the api docs we say for that field that it's not honored for libvirt
13:29:06 since liberty
13:29:21 but, for example, tempest still passes it in most tests
13:29:40 https://developer.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail#attach-a-volume-to-an-instance
13:29:45 "Name of the device such as, /dev/vdb. Omit or set this parameter to null for auto-assignment, if supported. If you specify this parameter, the device must not exist in the guest operating system. Note that as of the 12.0.0 Liberty release, the Nova libvirt driver no longer honors a user-supplied device name. This is the same behavior as if the device name parameter is not supplied on the request."
13:30:11 mriedem: so, when it's used by EC2, what value is passed?
13:30:22 the same thing that the user supplies,
13:30:26 the thing about the image bdm override case,
13:30:48 is that the device_name is used to match the user-supplied bdms in the server create request with any defined in the image metadata
13:31:02 because in our shitty data model, we use instance_uuid and device_name as a sort of unique constraint
13:31:09 for an update_or_create operation in the db api
13:31:41 so as a hack, you can boot a server with an image that has bdm metadata, and override / customize the bfv experience with your own bdms in the request, iff they match the device names in the image meta
13:31:45 which is a completely terrible ux
13:31:52 but it's a thing people apparently do
13:31:53 ah, yeh, this bleeds out of the amazon api because it's xen bleed-through I think
13:32:40 as far as i know, xen might be the only backend that honors the user-supplied device name
13:32:48 yeh, that's my expectation
13:32:50 there is that whole is_xen() check in the api code
13:33:05 you get a bunch of guest assistance because of xenbus
13:33:24 yeah, the device name is definitely xen specific there
13:33:27 https://github.com/openstack/nova/blob/master/nova/compute/utils.py#L166
13:33:49 although I am not sure the ordering is any more certain than with KVM, but I could be wrong
13:33:55 yeh, I don't know, this seems like one of those hard problems to unwind because of the EC2 case
13:34:18 right, so it seems like solving that ux problem,
13:34:22 or replacing it,
13:34:29 is going to require some new thinking on a new api design for that use case,
13:34:37 and we don't even have a tempest test for this use case today as far as i know,
13:34:38 yeh
13:34:46 ftersin said it was all regressed a long time ago and ndipanov fixed it
13:34:56 so i was going to work on a tempest test for this in pike so we don't regress it
13:34:59 it would be helpful if the ec2 folks got more engaged early on in this
13:35:20 i was thinking we could maybe long-term use the bdm tag as a correlation field for overrides,
13:35:36 yeh, the device tagging makes a bunch more sense to me
13:35:38 but there are things we have to do before that, like actually put the bdm tag in the snapshot meta, and expose bdm tags out of the api
13:35:38 yea, tempest just passes it in the request (POST server) #link https://github.com/openstack/tempest/blob/d44b295a168f067e7352895f4ce0ad32a3ec672d/tempest/scenario/test_volume_migrate_attached.py#L59
13:36:00 so i'm scrapping the server create idea for pike
13:36:07 was wondering about volume attach,
13:36:15 mriedem: honestly, I'm not super clear I understand the use case really from the ec2 side. I feel like I'd need to see pictures to imagine all the things they are doing there
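(A sketch of the image-bdm override hack described above, with illustrative values: the request BDM only takes effect because its device_name matches the one in the image metadata, since the db api's update_or_create effectively keys on (instance_uuid, device_name):)

    # Image block-device metadata, e.g. from a snapshot:
    image_bdm_meta = [{"device_name": "/dev/vdb",
                       "source_type": "snapshot",
                       "snapshot_id": "<snapshot uuid>",
                       "volume_size": 10}]

    # Server create request overriding that mapping, e.g. with a bigger
    # volume; device_name must match the image meta entry for the
    # override to apply.
    server_create_body = {
        "server": {
            "name": "demo",
            "imageRef": "<image uuid>",
            "flavorRef": "<flavor id>",
            "block_device_mapping_v2": [{
                "device_name": "/dev/vdb",  # matches the image meta
                "source_type": "snapshot",
                "snapshot_id": "<snapshot uuid>",
                "destination_type": "volume",
                "volume_size": 50,
                "boot_index": 0,
            }],
        }
    }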
13:36:17 but at this point it's just kind of a distraction and i'm happy to drop it
13:36:30 sdague: ftersin gives an example in the spec review
13:36:47 https://review.openstack.org/#/c/452546/1/specs/pike/approved/remove-device-from-bdm-post-requests.rst@44
13:36:52 it's not a picture, but a scenario
13:37:12 so i'll work on a tempest test for it to see that it works, and then we have a baseline in case we make changes later
13:38:52 ok
13:39:11 while on the topic of BDMs
13:39:22 https://review.openstack.org/#/c/408151/ - is the RAID bdm one
13:39:30 which I really don't want us to do
13:39:39 i had already -1ed that earlier
13:39:41 for the same thing
13:39:48 raid config is not going to belong in bdms
13:39:54 just b/c they are convenient
13:39:54 imo
13:40:03 but, it would be nice to figure out what the actual use case and path forward is for ironic
13:40:08 mriedem: you want to -2 it then?
13:40:39 i'd have to go back and read the latest comments first
13:40:50 i don't want to offend anyone here with a -2
13:40:55 since it's roman and ironic people
13:41:45 the easy thing to do is userdata
13:41:54 but you don't get api validation on that
13:42:13 you can't do userdata, I don't think
13:42:21 because it happens too late
13:43:22 I think that part of it is also a mismatch between the fact that ironic as a project does let vendor specific stuff directly through to drivers, and we try hard not to do that
13:44:11 well, that and baremetal is a weird bolt-on to nova
13:44:19 well, sort of
13:44:30 i'm referring to resource tracking and scheduling
13:44:33 weirdness
13:44:36 sure
13:44:46 our api is more biased to xen than bm
13:44:47 any time you get an abstraction, you are going to lose fidelity
13:45:12 i'd have to read this again, but there is also a proposed forum session for raid config
13:45:22 http://forumtopics.openstack.org/cfp/details/16
13:46:15 ah, yeh, that's the HPC folks
13:46:26 honestly, that seems like the right place for the conversation
13:46:32 I would -2 it until that forum session
13:46:34 fwiw, this is pretty much the reason for the mogan project: a nova derivative whose API is completely biased toward baremetal use cases
13:47:00 but that gets everyone riled up
13:48:12 ok, i noted in the spec
13:48:16 that defers this out past pike
13:48:46 so i've got another thing if we're done with raid
13:48:57 yeh
13:49:00 ok,
13:49:18 so in newton i was proposing to deprecate os-interface b/c it's a proxy to neutron to list/show ports,
13:49:34 and we want to expose vif tags, which are stored in the nova virtual_interfaces table, which was historically nova-net only
13:49:42 but with vif tags we use that table for ports too
13:49:54 so we have this weird case where os-virtual-interfaces is nova-net only, and os-interface is neutron only
13:50:04 i think what we want to do is deprecate os-virtual-interfaces,
13:50:08 because it's nova-net only
13:50:11 yeh
13:50:20 and as johnthetubaguy pointed out in that deprecation spec of mine,
13:50:35 move the GET from os-virtual-interfaces to os-interface
13:50:44 and in os-interface, return vif tags (eventually)
13:51:09 now, since we don't have virtual_interfaces in the database for any ports created before newton,
13:51:22 we have two options for listing/showing ports in os-interface,
13:51:32 1. proxy to neutron if the port isn't in the virtual_interfaces table,
13:51:47 2. use the info_cache, but that's per-instance
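(A sketch of option 2: building the os-interface listing from the instance's cached network info instead of proxying to neutron; the method and key names here paraphrase nova's network info model:)

    def list_port_interfaces(instance):
        # Parsed instance.info_cache: a NetworkInfo list of VIFs.
        nw_info = instance.get_network_info()
        return [{'port_id': vif['id'],
                 'net_id': vif['network']['id'],
                 'mac_addr': vif['address']}
                for vif in nw_info]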
13:51:48 * johnthetubaguy shakes head at the amount of work we created for ourselves in merging half-baked and competing APIs
13:51:49 so i'm not sure that works
13:52:05 johnthetubaguy: yeah, i feel like most of my blueprints lately are about removing code
13:52:07 :)
13:52:42 mriedem: deleting is always good
13:52:42 oh wait,
13:52:48 #2 might not be bad
13:53:11 so os-virtual-interfaces is only a server extension
13:53:13 so we have the server_id
13:53:19 mriedem: sdague: our love of removing code always seems to remind me of this: https://www.youtube.com/watch?v=F9nGyPz9uT0
13:53:25 so in that case, we can get the instance.info_cache and list/show ports from that, i think
13:53:49 (warning: contains british humour, yes spelt like that ^)
13:54:07 for the record, we have some tidying up of the info_cache to do
13:54:15 some of it syncs from neutron, some of it only we know
13:54:17 this also reminds me,
13:54:26 can we return vif tags, along with device or server tags, as consolidated tags on server resources and deprecate os-interface?
13:54:39 GET /servers/server_id/port_id in os-interface doesn't actually use the server,
13:54:43 it just goes directly to neutron for the port
13:54:59 gmann: we can't deprecate os-interface because of the POST and DELETE operations
13:55:04 to add/remove ports to a server
13:55:04 mriedem: interesting....
13:55:27 does neutron ever know anything about our server ids? or is it kept unawares?
13:55:41 sdague: the port.device_id is the owning server
13:55:45 it's the instance uuid
13:55:49 mriedem: ah, yea
13:55:59 i should step back,
13:56:07 my long-term goal is to expose bdm tags and vif tags out of the api
13:56:14 the bdm tags one is easy, we have os-volume_attachments
13:56:27 i'm stuck on this vif tag one b/c we have these two competing apis and data models
13:56:51 i *think* the way forward is to deprecate os-virtual-interfaces, and use os-interface, but stop proxying to neutron in os-interface and use the info_cache
13:57:07 mriedem: that seems sensible to me
13:57:15 mriedem: oh yeah, os-volume_attachments only lists volumes, right?
13:57:16 at some point we might want an action api somewhere to force a refresh of the info_cache
13:57:22 johnthetubaguy: yeah, via bdms
13:57:26 yeah
13:57:41 i'm actually surprised we don't have a force info cache refresh api
13:57:48 that would seem useful
13:57:50 but i digress
13:57:57 and we have 2 minutes
13:58:01 or refresh every time a GET comes
13:58:11 yea, 2 min left
13:58:30 gmann: but then we get back into that proxy thing
13:58:41 i'm not sure if that's better than an on-demand forced refresh
13:58:56 we periodically heal the info cache on the computes anyway
13:59:00 but yeah, it might be a lossy api
13:59:05 yea
13:59:17 the network API operations in the compute should also be refreshing the info cache if they change resources tied to an instance
13:59:25 we've seen bugs where that doesn't happen, like deleting a floating ip
13:59:30 i wonder if users will use that, or at least remember to use it before getting tags etc
13:59:40 anyway, i might just brain dump this to the ML first before working on a spec
13:59:44 since i'm a bit spec'ed out
13:59:46 anyways, 1 min left,
14:00:01 yea
14:00:18 we're out of time
14:00:31 thanks everyone, let's jump to the nova channel
14:00:34 #endmeeting