Wednesday, 2021-08-11

11:32 *** dviroel|out is now known as dviroel
14:01 <rosmaita> #startmeeting cinder
14:01 <opendevmeet> Meeting started Wed Aug 11 14:01:27 2021 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'cinder'
14:01 <rosmaita> #topic roll call
14:01 <simondodsley> hi
14:01 <sfernand> hi
14:01 <walshh_> hi
14:01 <fabiooliveira> hi
14:01 <eharney> hi
14:01 <TusharTgite> hi
14:01 <shoffmann> Hi
14:02 <enriquetaso> hi
14:02 <tosky> hi
14:03 <rosmaita> good turnout
14:03 <rosmaita> #link https://etherpad.opendev.org/p/cinder-xena-meetings
14:03 <rosmaita> ok, let's get started
14:03 <rosmaita> #topic announcements
14:04 <rosmaita> os-brick Xena release next week
14:04 <rosmaita> 2 things happening in os-brick, both around the nvmeof
14:04 <rosmaita> agent
14:04 <rosmaita> #link https://review.opendev.org/c/openstack/os-brick/+/802691
14:04 <rosmaita> nvmeof multiple volume support
14:04 <rosmaita> #link https://review.opendev.org/c/openstack/os-brick/+/800014
14:05 <rosmaita> those are the highest priority reviews right now
14:05 <rosmaita> so, please review accordingly
14:05 <rosmaita> also, would be good if each of the teams working on the two patches mentioned reviewed each other's work
14:06 <rosmaita> and, of course, it would be good if cinder people in general review the two mentioned patches
14:07 <rosmaita> i think that's it for announcements
14:07 <eharney> 800014 shows merge conflict, fyi
14:07 <rosmaita> yeah, i noticed that
14:07 <rosmaita> pro tip for driver developers: merge conflicts are a real turn-off
14:07 <e0ne> :)
14:08 <rosmaita> important to get those resolved quickly or people won't review
14:08 <rosmaita> ok, that's all for announcements, i think
14:09 <rosmaita> #topic How to solve retype error with nas_secure_file_operations
14:09 <rosmaita> shoffmann: that's you
14:09 <shoffmann> Hi, thanks.
14:09 <shoffmann> We saw an issue where retype of available volumes fails when using an NFS share, if the nas_secure options are used.
14:10 <shoffmann> meaning root is not allowed to write to that share.
14:10 <shoffmann> #link https://bugs.launchpad.net/cinder/+bug/1938196
14:10 <shoffmann> My question is, how we can solve this in a nice way.
14:11 <eharney> my -2 was against adding a new config option, so i think that no longer applies
14:11 <shoffmann> use the existing nas_secure_file_operations option but also in volume/manager.py (where it is currently not defined for all backends), or introduce a new option.
14:11 <shoffmann> Maybe there are also other ideas.
14:11 <shoffmann> #link https://review.opendev.org/c/openstack/cinder/+/802882
14:12 <shoffmann> Right now, i check if the option is defined, but maybe this is a bit ugly, because the nas_secure option is related to some backend types only.
14:12 <eharney> i'm not sure that the current patch will work in all scenarios
14:13 <eharney> IIRC root access for _qemu_img_info is needed when volumes are attached, regardless of the nas_secure options
14:13 <eharney> (because they may be owned by the qemu user via nova)
14:14 <shoffmann> In our setup, we configured nova to not change the owner of the file. Not sure if it is possible to enable nas_secure on an NFS share and let nova change the owner.
14:14 <shoffmann> Because at NetApp only one user and one group can be allowed.
14:14 <shoffmann> So we use cinder:nova
14:14 <eharney> this code is in the generic nfs driver
14:15 <eharney> years ago i think we generally felt that the nas_secure options weren't fully implemented correctly, do we know if there are other fixes needed like this to successfully use nas_secure?
14:16 <shoffmann> Until now we haven't seen other issues. But maybe others know more.
14:17 <shoffmann> Is it possible for generic/other nfs drivers to define nas_secure with many allowed users?
14:17 <enriquetaso> I haven't seen any other bug related to nas_secure, but I'll check
14:17 <eharney> enriquetaso: there have been many
14:18 <eharney> yes, i think generic nfs configurations commonly do that
14:18 <eharney> these changes are also risky because there's a matrix of configuration options (two nas_secure* options), the generic nfs driver, netapp nfs drivers, whether you configure root_squash, etc.... it's hard to know that a change like this won't break something (which i think it does)
14:19 <rosmaita> what was the new option (that eric doesn't like)?
14:20 <shoffmann> Something like "secure_file_operation" or so.
14:20 <shoffmann> Adding a NetApp-specific option also sounds bad.
14:20 <rosmaita> what was it going to do that you couldn't do with the current options?
14:20 <eharney> the purpose of the new option overlapped with the existing option
14:21 <rosmaita> and yea, we want to avoid vendor-specific options
14:21 <enriquetaso> On the reported bug side: i think i can start tagging all these bugs with 'Tags: nas_secure' or something like that to have them all together
14:22 <rosmaita> enriquetaso: ++
14:22 <eharney> i wonder if the easier fix for the qemu_img_info case wouldn't be to just attempt running it both as root and non-root, or maybe look at the files being inspected to decide which to do
14:22 <eharney> rather than trying to tie it to config options
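[editor's note: eharney's "look at the files being inspected" idea could be sketched roughly as below. This is a hypothetical helper, not actual cinder code; the name `img_info_needs_root` and the ownership/readability heuristic are assumptions.]

```python
import os
import stat


def img_info_needs_root(path, cinder_uid=None):
    """Hypothetical helper: decide whether 'qemu-img info' needs sudo.

    Instead of consulting the nas_secure_* config options, inspect the
    volume file itself: if it is owned by and readable as the current
    (cinder) user, run unprivileged; otherwise (e.g. a file owned by
    root, or by the qemu user via nova while attached) fall back to
    running as root.
    """
    if cinder_uid is None:
        cinder_uid = os.geteuid()
    st = os.stat(path)
    # Readable by its owner and owned by us -> no sudo required.
    return not (st.st_uid == cinder_uid and st.st_mode & stat.S_IRUSR)
```

[the "try both" variant eharney also mentions would instead catch the permission error from an unprivileged run and retry with run_as_root=True]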
14:23 <shoffmann> Are there use cases using the nas_secure_* options but allowing root to write to the nfs share?
14:23 <shoffmann> eharney I also can implement this
14:23 <shoffmann> same for _connect_device in volume/manager.py?
14:25 <eharney> we may need a list of use cases for all 4 combinations of the existing nas_secure options, i'm not sure if we even know that they are all valid combinations
14:27 <rosmaita> and what's the deal with 'auto'?  seems like we would always be using True for both on fresh installs
14:28 <eharney> whether auto ends up as true or false depends mostly on how the permissions on the share are configured
14:28 <rosmaita> oh, ok
14:28 <eharney> we basically let deployers decide if they want to have NFS volume files owned as root or as a different user
14:29 <rosmaita> so that's what you were saying about inspect the files to decide what to do
14:29 <eharney> which is the decision that the nas_secure_file_permissions option is about
14:29 <eharney> the nas_secure_file_operations option is about whether to run commands as root via sudo or just as the cinder user
14:29 <eharney> yes
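[editor's note: for reference, the two options eharney distinguishes above live in the backend section of cinder.conf; both default to 'auto'. A minimal sketch — the backend section name `nfs-backend` is illustrative only:]

```ini
[nfs-backend]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# Who owns the volume files on the share: root or the cinder user.
# 'auto' decides based on how the share's permissions are already set up.
nas_secure_file_permissions = auto
# Whether file commands run as root via sudo or as the cinder user.
nas_secure_file_operations = auto
```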
14:31 <rosmaita> slightly off topic, we have a devstack-plugin-nfs-tempest-full zuul job ... i wonder how hard it would be to reconfigure it to test this change
14:31 <eharney> we could do that, but we'd also need to examine what coverage we get from tempest tests
14:31 <eharney> should not be hard to do, though
14:32 <rosmaita> good point about what exactly tempest is checking
14:33 <rosmaita> but it looks like we will really need a "live" test of this change to see what happens
14:33 <eharney> yeah, there are a lot of details about file permissions with volumes that are attached, migrating, etc
14:33 <shoffmann> then I will try to implement checking file permissions. If I should add/change tests in tempest, please tell me (e.g. on the change or bug)
14:34 <shoffmann> Or should I wait for what we will do with all the nfs share issues?
14:35 <rosmaita> i think checking the file permissions is a good start
14:35 <rosmaita> but it would be helpful if you think about the nfs share issues in general, especially because you work a lot with nfs
14:36 <rosmaita> anybody want to volunteer to see what tempest and/or cinder-tempest-plugin tests will be relevant for this change?
14:37 <rosmaita> shoffmann: could be helpful if you could list some scenarios where this bug surfaces as a start
14:37 <shoffmann> sure, I can help
14:37 <shoffmann> What is a good place for this? mailing list or next meeting?
14:38 <rosmaita> shoffmann: i think put some scenarios into an etherpad and then send a note to the mailing list
14:38 *** liuyulong_ is now known as liuyulong
14:38 <enriquetaso> i need to get familiar with this, i'd like to group all the related bugs first and list the scenarios we are currently having, but I can help with the tempest coverage as well :)
14:38 <enriquetaso> rosmaita++
14:38 <enriquetaso> sounds better
14:38 <rosmaita> but shoffmann we should continue to follow this at the cinder meetings too
14:39 <rosmaita> (it's not an either/or is what i mean)
14:39 <rosmaita> ok, thanks, let's move on
14:39 <sfernand> enriquetaso yes me too, but I can check the nas_secure option with the netapp nfs driver
14:39 <eharney> fyi all of this interacts w/ cinder-backup too
14:39 <rosmaita> eharney: good point! we don't want to break backups
14:39 <rosmaita> shoffmann: ^^
14:40 <rosmaita> #topic Cinder matrix - Clarification on Snapshot Attachment
14:40 <rosmaita> walshh_: that's you
14:40 <walshh_> Apologies for bringing this one up again.  Differing opinions
14:40 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/800015
14:40 <walshh_> This functionality is exposed via rpcapi (as a cinder backup enhancement) and not the cinder API, so maybe it should not be in the matrix at all.
14:41 <walshh_> Only a couple of drivers can attach snapshots, most cannot.
14:41 <walshh_> Just wondering if removing it might be the best option as it is certainly causing confusion with customers
14:41 <rosmaita> there was some confusion about this on the mailing list, too
14:42 <walshh_> and confusing us also :-)
14:42 <rosmaita> i think you may be right about removing it from the matrix
14:42 <eharney> i think that makes sense, since it isn't actually a feature that is exposed to users
14:42 <simondodsley> If this is only for cinder-backup use then I will remove my +1 and agree that it be removed completely from the matrix
14:42 <walshh_> And customers do not like to see x's against features
14:42 <rosmaita> maybe move it over somewhere in the driver development docs?  so people at least know to think about it
14:42 <walshh_> even if it is not possible to implement in a driver
14:42 <rosmaita> where by 'people' i mean driver developers
14:43 <walshh_> sure I can do that if I am pointed in the correct direction
14:44 <rosmaita> ok, why don't you go ahead and revise your patch to just remove it, and if someone finds a good place to add a statement, they can tell you on the review
14:44 <walshh_> so if there are no other strong opinions for it to remain, I will remove it
14:44 <rosmaita> sounds good ... if anyone has 2nd thoughts, leave comments on the review
14:44 <rosmaita> thanks helen
14:44 <walshh_> thanks all
14:44 <rosmaita> #topic RBD fixes that we really need
14:44 <rosmaita> enriquetaso: you're up
14:45 <enriquetaso> I'd like to point out these two patches here that I think are ready for review. (1) Currently the Ceph backend fails when restoring non-ceph volumes.
14:45 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/750782
14:45 <enriquetaso> and..
14:45 <enriquetaso> (2) When creating an encrypted volume from an encrypted image the volume gets truncated and it's inaccessible after the operation.
14:45 <rosmaita> that is a bad one
14:45 <enriquetaso> #link https://review.opendev.org/c/openstack/cinder/+/801522
14:46 <enriquetaso> yes, i think rbd really needs both asap
14:46 <rosmaita> ok, reminder that RBD is a community driver, so it deserves extra love
14:46 <enriquetaso> :)
14:46 <enriquetaso> thanks!
14:46 <rosmaita> thanks sofia
14:46 <rosmaita> #topic Fix for PowerVC
14:46 <rosmaita> simondodsley: you have the floor
14:46 <simondodsley> Thanks - This is an XS patch that we need to merge so we can create an urgent backport to Wallaby. PowerVC will use Wallaby for its next release. PowerVC uses `provider_id` heavily and so we need it to be correctly associated with all volumes in consistency groups.
14:47 <simondodsley> https://review.opendev.org/c/openstack/cinder/+/803046
14:47 <rosmaita> ok, reminder to all driver maintainers -- it is helpful to review other driver patches
14:48 <simondodsley> I try to :)
14:48 <rosmaita> (hint to anyone asking in #openstack-cinder for reviews)
14:48 <rosmaita> simondodsley: that wasn't aimed at you, i have noticed your name showing up on all sorts of reviews
14:48 <rosmaita> thanks simone
14:49 <rosmaita> sorry about the type
14:49 <rosmaita> typo
14:49 <simondodsley> lol
14:49 <rosmaita> #topic mypy inline comments
14:49 <rosmaita> i just saw these on https://review.opendev.org/c/openstack/cinder/+/802882/7/cinder/volume/manager.py
14:49 <rosmaita> iirc, pylint did this too for a while but we turned it off somehow
14:50 <rosmaita> just want to see what people's opinions are about what we as a team want here
14:50 <rosmaita> inline comments are certainly in your face and motivate fixing things
14:51 <rosmaita> so maybe leave them?
14:51 <rosmaita> ok, not hearing an outburst of opinions here, let's leave them for now
14:52 <eharney> fwiw, the comments on https://review.opendev.org/c/openstack/cinder/+/802882/7/cinder/volume/manager.py look kind of odd
14:52 <rosmaita> although, let's review and merge this patch that fixes the particular problem
14:52 <eharney> are they on the right lines?
14:52 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/784453
14:52 <rosmaita> i don't know, looking now
14:52 <Roamer`> personal opinion from somebody whose colleagues can sometimes get tired of "please also run this check before pushing to Gerrit": I like advice about improvements, and I like to keep my code as clean as possible, but then I suppose that's common among people gathered here :)
14:53 <eharney> the comments also seem to be reported twice
14:54 <rosmaita> eharney: line 980 doesn't seem to make sense, and the duplication is not nice
14:54 <eharney> i guess they appeared twice because there was a recheck and the job ran twice
14:55 <Roamer`> but yeah, in this particular case the line numbers in https://zuul.opendev.org/t/openstack/build/8b0af55849674691ac9b5c032ac2ec81 and the quoted "return True" statements do not seem to correspond to what is shown in Gerrit
14:55 <rosmaita> just as a side note, there's a patch listed in https://tiny.cc/CinderPriorities where mypy would have helped avert a bug
14:55 <rosmaita> so i encourage everyone to continue to review eharney's patches
14:55 <Roamer`> is there any chance mypy was running on the wrong files?!
14:55 <eharney> Roamer`: this could be an artifact of zuul doing a merge before the mypy run, need to check the logs for that
14:56 <Roamer`> eharney, oh, of course
14:56 <Roamer`> eharney, yeah, don't know why I didn't realize that when I fight with Zuul merges in our CI weekly :)
14:56 <Roamer`> hm, so this means that the line numbers will *almost certainly* be wrong
14:57 <rosmaita> that would seem to indicate that inline comments won't be helpful
14:57 <Roamer`> right
14:57 <eharney> i can't remember if zuul does that in the check queue or just the gate queue
14:58 <Roamer`> eharney, I think it has to do it in the check queue, too... I don't see how things would work without that (it is quite often that the Gerrit change is not made against the tip of the branch)
14:58 <Roamer`> (at least it does in our CI)
14:58 <eharney> i think it does: https://zuul.opendev.org/t/openstack/build/8b0af55849674691ac9b5c032ac2ec81/log/job-output.txt#334
14:59 <eharney> because it's running on a merge commit of 802882 and not the commit itself... so yeah i'm not sure how the line number thing would actually work
15:00 <Roamer`> yeah, if the merge is kinda-sorta-trivial, then one could think of writing some tool that parses the Python AST and figures out the line number deltas, but even that could go wrong
15:00 <rosmaita> ok, looks like we should turn off the inline comments
15:00 <rosmaita> i will follow up on that
15:01 <rosmaita> all right, we are out of time
15:01 <rosmaita> everybody: os-brick reviews!!!
15:01 <Roamer`> BTW something that I didn't add to the etherpad (I thought it was too trivial, maybe I should have)..... what are the chances for a StorPool trivial performance improvement - implement revert-to-snapshot natively? https://review.opendev.org/c/openstack/cinder/+/680889
15:01 <Roamer`> (and yes, I realize that I should do more reviews myself, sorry about that)
15:01 <rosmaita> :)
15:02 <rosmaita> seems ok ... file a launchpad BP because we are coming up on M-3 feature freeze soon
15:02 <rosmaita> that will help us prioritize
15:02 <rosmaita> thanks everyone!
15:02 <Roamer`> I did, it's at https://blueprints.launchpad.net/openstack/?searchtext=storpool-revert-to-snapshot
15:02 <Roamer`> but I only did it the day before :)
15:02 <Roamer`> and thanks
15:03 <rosmaita> ok, thanks i will make sure it's in the xena mix
15:03 <rosmaita> #endmeeting
15:03 <opendevmeet> Meeting ended Wed Aug 11 15:03:08 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:03 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.html
15:03 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.txt
15:03 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.log.html
16:19 *** dviroel is now known as dviroel|away
19:09 *** dviroel|away is now known as dviroel
20:35 *** dviroel is now known as dviroel|ruck
21:46 *** dviroel|ruck is now known as dviroel|ruck|out

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!