14:00:31 #startmeeting cinder
14:00:32 Meeting started Wed May 6 14:00:31 2020 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:35 The meeting name has been set to 'cinder'
14:00:42 #link https://etherpad.openstack.org/p/cinder-ussuri-meetings
14:00:42 #topic roll call
14:00:49 hi
14:00:50 hi
14:00:51 hi
14:00:52 o/
14:01:01 hi
14:01:07 hi
14:01:07 O/
14:01:08 hi
14:01:44 Hi
14:02:06 looks like a good turnout
14:02:14 hi
14:02:30 o/
14:02:38 before we get started, i want to say that I hope everyone and their loved ones are healthy and doing well in these stressful times
14:02:50 and even your non-loved ones, i guess
14:02:56 #topic updates
14:03:09 ok, RC-2 was released on monday
14:03:16 http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014623.html
14:03:36 that should be our final RC, i haven't seen anyone post any release critical bugs
14:03:48 and we are running out of time to fix them anyway
14:04:20 next item: no meeting next week, you can attend the "community meeting" instead
14:04:47 the next cinder meeting on May 20 will be the first meeting of the Victoria cycle
14:05:00 so, as is our tradition, there will be a new agenda etherpad:
14:05:09 #link https://etherpad.opendev.org/p/cinder-victoria-meetings
14:05:18 it's empty now because i just thought of it
14:05:31 but i'll fill it with the boilerplate stuff after today's meeting
14:05:45 oops, i skipped an item
14:05:52 PTG registration is open
14:06:02 #link https://virtualptgjune2020.eventbrite.com/
14:06:20 it's free, but the foundation would like to keep track of who's planning to attend
14:06:35 registering will also get you on the mailing list about communication methods
14:06:47 (which i don't think have been decided yet)
14:07:01 so please register
14:07:27 and on that note, we could use some topics on the Cinder PTG etherpad
14:07:37 #link https://etherpad.openstack.org/p/cinder-victoria-ptg-planning
14:08:11 the times for the cinder meetings are on that etherpad ^^
14:08:25 any questions about the PTG?
14:09:27 ok, there's always open discussion later if you think of something
14:09:33 final announcement
14:09:56 yesterday or this morning, depending on where you are, we merged a change related to eventlet and monkey patching
14:10:05 #link https://review.opendev.org/#/c/724754/
14:10:14 if you want the background:
14:10:22 Thanks for putting the Timezones in there.
14:10:23 #link https://github.com/eventlet/eventlet/issues/592
14:10:46 this is just awareness, in case anything weird starts happening ... because you never know
14:11:02 Especially with eventlet.
14:11:10 "You never know" ^
14:11:11 :)
14:11:21 ok, that's all the announcements
14:11:37 #topic coordination with barbican(-tempest-plugin) people on volume tests
14:12:00 so, we have an interesting situation
14:12:12 our encrypted volume tests need access to barbican
14:12:30 and the easiest way to do that is to have them live in the barbican-tempest-plugin
14:12:47 something about circular dependencies, tosky can explain if you are interested
14:12:58 anyway, i think that's why he put it on the agenda
14:13:02 tosky: you're up
14:13:12 ++
14:13:37 we may benefit from an increased amount of cinder tests that use barbican
14:13:43 i poked around a bit, there is no barbican-tempest-plugin core group
14:13:49 it's just barbican-core
14:14:01 do we want to propose to help maintain that plugin?
14:14:02 now, the barbican tempest client is defined inside barbican-tempest-plugin and they also have a few tests there
14:14:27 some of them mirror the corresponding basic encryption tests from tempest.git, but with a barbican flavor
14:14:31 or could we propose re-architecting it so that we can use the stuff we need in the cinder-tempest-plugin?
14:14:44 (sorry, tosky, i should let you talk)
14:15:15 I would say it may be easier to see if they could agree to having some of us voting on those patches, with the agreement of not approving non-volume tests (or waiting anyway for other approvals)
14:15:22 I took the liberty of raising this point at yesterday's barbican meeting, but as I hadn't discussed it here before, I didn't go into detail
14:15:37 http://eavesdrop.openstack.org/meetings/barbican/2020/barbican.2020-05-05-13.01.log.html
14:16:43 well, looks like they didn't tell you to go boil your head
14:16:57 :-)
14:17:09 i can propose something on the ML like we did for the ceph and nfs devstack plugins
14:17:25 and will stress the gentleperson's agreement not to approve anything non-volume related
14:17:41 yes, that could help, maybe we can define this before the PTG, and then use that time to refine and/or implement it properly
14:17:55 One question: should I move the retype encrypt test to barbican-tempest-plugin then? https://review.opendev.org/#/c/715566/
14:18:30 enriquetaso: good question, i was wondering about that myself
14:18:39 since that retype test doesn't need to interact with barbican directly (and in theory could work with other key managers as well), i think not
14:18:51 it's purely a cinder functionality test
14:19:06 right: if it does not use the barbican client, it can stay there
14:19:16 ++
14:19:18 thanks
14:20:25 I don't have anything else to add about this right now; the direction is "discuss and find an agreement"
14:20:27 IMHO
14:21:14 ok, i'll put up something to the ML to get this moving and we can follow up as necessary
14:21:23 thanks!
14:21:32 #action rosmaita email about cooperation with barbican-tempest-plugin
14:21:38 ok, thanks tosky
14:22:04 #topic security-related issue when an attached volume exposes host information to non-admins
14:22:09 whoami-rajat: that's you
14:22:16 So we discussed a security issue during the cinder mid-cycle PTG; the summary of the issue is that an attached volume could expose the nova host details via the volume show API and attachments API
14:22:46 the general consensus on launchpad was to change the policy of the attachment API to admin only
14:23:07 but this might have an effect on the consumer projects such as nova and glance
14:24:08 that policy being, to only how the host to admins?
14:24:10 only show*
14:24:25 yes
14:24:49 in the volume show API, show host only to admins, and in the attachments API only show connection_info to admins
14:25:35 but in volume-show, don't you get the same stuff as the attachments api as far as connection_info?
14:26:20 rosmaita, i'm not sure, the bug is only reported against the HOST field
14:26:28 i will check that as well
14:26:30 also, we should probably be clear about what "host" we are talking about
14:26:43 this is the attached_host?
14:26:55 IIUC it's the nova host
14:27:05 where the volume is attached
14:27:21 ok
14:28:06 yeah, nova does not like to expose the VM host to users
14:28:09 i just posted this topic here for a general discussion and to gather opinions on the path on which we should move ahead
14:28:47 i think nova also has some policy rules for not exposing the host info to non-admins
14:29:05 i think the issue is that nova does not like this being available as a backdoor for a user to discover the host info
14:29:27 yep
14:29:36 It is a concern about leaking underlying infrastructure information.
14:30:16 Right.
14:30:19 i don't know if this makes sense, but
14:30:33 if we keep the volume-show and attachments api the way they are now
14:30:37 admin or owner
14:30:56 and only hide the attached_host, how does that affect the usefulness of the response
14:30:58 ?
14:31:58 I wouldn't think anyone (person or system) should be relying on that information for anything.
14:32:18 yeah, my intuition is that the person making the connection knows who they are already
14:32:20 that's kind of my gut feeling too
14:32:37 That would be my thought.
14:33:12 so i guess whoami-rajat's next question is, do we need a policy on that single field? or do we just do an internal "is_admin" check and display or not?
14:33:20 or maybe just never display?
14:33:26 Might be worth calling it out on the ML that responses will be changing and why. That way if for some reason someone actually is paying attention to that information in this response, they have a warning not to do so anymore.
14:33:45 smcginnis: ++
14:33:59 I would vote never display. Unless maybe is_admin, since an admin may be able to use that information to troubleshoot or something.
14:34:20 since this is a security issue, i don't think we need to worry about microversioning this change?
14:34:27 also JFYI, from the raw response of the attachment API, non-admins can get all this info https://bugs.launchpad.net/cinder/+bug/1736773/comments/1
14:34:27 Launchpad bug 1736773 in Cinder "attachment-show is including `connection_info` for non-admin callers, it shouldn't" [High,Triaged] - Assigned to John Griffith (john-griffith)
14:34:29 rosmaita: Agree
14:34:48 Agreed.
14:34:49 shouldn't need microversioning since this field can already be blank etc now
14:34:51 whoami-rajat: Hmm, that's a concern too.
14:35:59 it contains auth_password, which i'm not sure how important it is, but a password shouldn't show up in an API response i guess
14:36:46 depends, is that api used to connect to volumes?
14:37:22 i guess this is after it's already attached/connected
14:37:40 hmm, i think nova uses initialize_connection to connect and attachment_update to get the connection_info later
14:37:53 but i'm not sure if they use attachment_get for any purpose
14:38:04 refresh*
14:40:29 ok, looks like we may need some research here
14:42:18 ok, so to summarize: (1) should not populate the host_name in the volume-show response (except to admins); propose to rely on the admin context to decide whether to show or not; since no change to response, no need to microversion this
14:42:43 (2) need to verify that connection_info is not required for non-admin users in the attachment-show response
14:43:25 and depending on the outcome of (2), will either hide the connection_info or just depopulate the attached_host (or whatever it's called in that response)
14:43:47 changing the attachments api to admin only would require nova changes, so i think this needs discussion with the nova team as well
14:43:59 rosmaita, that sounds like a good plan to me
14:44:22 whoami-rajat: do you want to continue to pursue this and we can also discuss (2) at the ptg?
14:44:42 rosmaita, sure
14:44:49 i can send an email out about (1) since i think that won't affect anyone really
14:45:16 i think i can also work on this in parts since only (2) is the concern
14:45:20 and it's ok to discuss on the ML because this bug has been public since at least the cinder midcycle
14:45:22 rosmaita, that would be great
14:45:36 #action rosmaita (see above)
14:45:40 ok, thanks whoami-rajat
14:45:49 thanks everyone
14:45:58 #topic priorities for victoria milestone-1
14:46:30 i lost my tab, but M-1 is like 2 weeks after the ptg
14:46:41 anyway, the cinder team priorities are:
14:46:50 (1) volume local cache, and
14:46:57 (2) nfs encryption
14:47:12 LiangFang is working on (1), enriquetaso is working on (2)
14:47:20 #topic volume local cache
14:47:24 LiangFang: that's you
14:47:29 thanks rosmaita
14:47:40 should I add this topic at the PTG?
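The plan summarized in points (1) and (2) above amounts to scrubbing sensitive fields from an API response based on an is_admin check. A minimal illustrative sketch follows; the function and field handling here are hypothetical (field names taken from the discussion), not Cinder's actual view-builder code:

```python
# Illustrative only: omit sensitive fields from an API response unless
# the caller has an admin context. Field names follow the discussion
# above (attached_host, connection_info); real Cinder code differs.
SENSITIVE_FIELDS = ("attached_host", "connection_info")

def scrub_attachment(attachment, is_admin):
    """Return a copy of the attachment dict, dropping sensitive
    fields for non-admin callers."""
    if is_admin:
        return dict(attachment)
    return {k: v for k, v in attachment.items()
            if k not in SENSITIVE_FIELDS}

attachment = {
    "id": "a1",
    "volume_id": "v1",
    "attached_host": "compute-3",
    "connection_info": {"auth_password": "not-for-users"},
}

print(scrub_attachment(attachment, is_admin=False))
# the non-admin copy keeps only id and volume_id
```

Because the fields are dropped rather than the whole API being restricted, existing non-admin callers keep working (a field that can already be blank needs no microversion bump, per the discussion), while admins still see the full response for troubleshooting.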
14:48:01 one thing in the cinder patch is:
14:48:13 yes, if things aren't moving along well, or if some things come up that require discussion
14:48:33 a volume type extra-spec without the prefix "capabilities:" will be treated as a filter. See: https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/capabilities_filter.py#L62
14:48:44 I just copied that from a patch comment
14:48:55 So the extra-spec "cacheable" will be treated as a filter. As discussed in the spec, some backends like nfs and rbd will not be supported. So I'm trying to enable the supported backends explicitly. Any backend not marked "cacheable=True" will not be supported by default. It is like a whitelist mode. Maybe we can go through the backend drivers and add more drivers as supported later in a following patch.
14:49:22 i need to study up on this again, but i (still) think it's not correct to put this as a flag in the LVM driver
14:49:43 ok
14:49:50 as far as i can tell, it isn't a driver attribute, it's a protocol attribute
14:49:59 your suggestion is currently: enable this flag in every iscsi/fc driver (i think)
14:50:19 we should just have the manager apply it to iscsi/fc etc as appropriate without requiring driver changes, unless we think there are driver-specific reasons that caching won't work
14:51:42 if we add something like: capabilities:cacheable
14:51:58 then it does not act as a filter
14:52:13 but how do we prevent nfs and rbd?
14:52:54 i think the point about "capabilities:" is correct, but i haven't yet figured out why that means we need to do this inside the driver
14:53:40 if we don't add capabilities:
14:54:27 then cacheable will be treated as a filter, which means that if the driver is not marked as "cacheable", it will not be scheduled
14:55:05 in the end, it will fail with "can not find a valid backend"
14:55:15 i'll do some research into how that works, but i think the point remains that "cacheable" isn't actually a property of the driver -- so it should be set somewhere else
14:55:25 i don't know enough of the details to get to the bottom of it right now
14:56:02 eharney: thanks, maybe see: https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/capabilities_filter.py#L62
14:56:49 another thing is about testing
14:57:08 I find I cannot finish the tempest test at this point
14:57:42 because the tempest test requires the patches to be ready first
14:58:03 there are 3 patches: cinder, os-brick, nova
14:58:06 so is the issue that: 'cacheable' is a property of the volume-type, but the way it is expressed now prevents some drivers from being scheduled at all?
14:58:30 rosmaita: yes
14:58:46 sorry, we are almost out of time ... looks like we need to continue to discuss this at the next meeting and the PTG
14:58:57 cacheable in the type will schedule no backends
14:59:05 you can still apply it for all relevant drivers, for them to be scheduled, in the manager rather than in the driver code
14:59:14 rosmaita: Ok, thanks
14:59:42 eharney: maybe I need to study how to add it in the manager
14:59:53 One quick thing about (2): I rebased the patch this week because of merge conflicts, but now it's ready for reviews https://review.opendev.org/#/c/597148/ . Please feel free to add comments
14:59:53 sounds like something to look at
14:59:57 eharney: maybe you are right
15:00:04 enriquetaso: thanks!
15:00:23 ok, thanks everyone, we need to get out of the way for Horizon
15:00:28 #endmeeting
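The scheduling behavior LiangFang described (an unscoped extra spec like "cacheable" acts as a hard requirement, so a backend that does not report the capability is never scheduled and the request fails with "no valid backend") can be modeled roughly as below. This is a hand-written simplification of the CapabilitiesFilter linked in the discussion, using plain string equality instead of cinder's extra_specs_ops operators (e.g. "<is> True"), so the exact semantics of scoped keys may differ from the real filter:

```python
# Simplified model of a scheduler capabilities filter: each relevant
# extra spec must match a capability reported by the backend, or the
# backend is filtered out. Not the real Cinder code
# (cinder/scheduler/filters/capabilities_filter.py).

def backend_passes(backend_caps, extra_specs):
    for key, req in extra_specs.items():
        scope = key.split(":")
        if len(scope) > 1 and scope[0] != "capabilities":
            # vendor/driver-scoped specs are ignored by this filter
            continue
        if scope[0] == "capabilities":
            key = ":".join(scope[1:])
        if str(backend_caps.get(key)) != str(req):
            return False
    return True

lvm_caps = {"cacheable": True}   # backend reports the capability
nfs_caps = {}                    # backend does not report it

print(backend_passes(lvm_caps, {"cacheable": "True"}))  # True
print(backend_passes(nfs_caps, {"cacheable": "True"}))  # False
```

This illustrates both positions in the discussion: reporting "cacheable" per backend works like a whitelist (LiangFang's approach), while eharney's suggestion is to have the volume manager report the capability for all iscsi/fc backends so individual drivers don't each need the flag.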