14:00:30 <abhishekk> #startmeeting glance
14:00:31 <openstack> Meeting started Thu Feb 27 14:00:30 2020 UTC and is due to finish in 60 minutes.  The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:32 <abhishekk> #topic roll call
14:00:35 <openstack> The meeting name has been set to 'glance'
14:00:41 <abhishekk> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:46 <abhishekk> o/
14:01:58 <rosmaita> o/
14:02:07 <nao-shark> o/
14:02:32 <rosmaita> abhishekk: i haven't had time to look at the glanceclient patches this morning, did my comments make sense?
14:02:32 <abhishekk> let's wait 2-3 minutes for jokke_
14:02:52 <abhishekk> rosmaita, Yes, I have added inline answer to those
14:03:02 <rosmaita> ok, will take a look
14:03:14 <abhishekk> If you still insist on removing those checks then I will do it in a separate patch
14:04:26 <abhishekk> let's start
14:04:38 <abhishekk> #topic release/periodic jobs update
14:04:56 <abhishekk> We need to release glanceclient ASAP
14:05:14 <abhishekk> two important patches need to be merged
14:05:16 <jokke_> o/
14:05:22 <abhishekk> #link https://review.opendev.org/699656
14:05:26 <whoami-rajat> Hi
14:05:28 <abhishekk> jokke_, o/
14:05:37 <abhishekk> #link https://review.opendev.org/709086
14:05:41 <abhishekk> whoami-rajat, o/
14:06:02 <abhishekk> Kindly review these patches and get them going
14:06:39 <abhishekk> Periodic jobs had a couple of timeouts due to the parser error
14:06:47 <abhishekk> which is our next topic
14:06:59 <abhishekk> #topic the stestr situation
14:07:04 <abhishekk> rosmaita, floor is yours
14:07:18 <rosmaita> yeah, i was looking into this real quickly
14:07:30 <rosmaita> was wondering if we could force the issue by going back to testr
14:07:38 <rosmaita> but, it looks like that is no longer maintained
14:07:50 <rosmaita> plus, the real problem seems to be in subunit
14:08:03 <jokke_> yeah iirc that's why we moved to stestr
14:08:10 <rosmaita> and it looks like stephinfin has a patch up for that
14:08:15 <abhishekk> rosmaita, https://github.com/testing-cabal/subunit/pull/40 this seems to solve the parser error
14:08:24 <rosmaita> looks like the subunit maintainer is not too keen on that patch
14:08:45 <abhishekk> I have applied that in my local environment, and seems to be working
14:08:53 <rosmaita> so i wonder whether it would help if abhishekk, and efried from nova, added comments to that pull request
14:08:54 <jokke_> is it still the same issue that we're too heavy with logging and overwhelm subunit?
14:09:11 <abhishekk> jokke_, yes
14:09:33 <rosmaita> i don't know
14:09:46 <abhishekk> stephinfin has a reproducer for this, https://review.opendev.org/#/c/700522/1/nova/tests/unit/test_utils.py
14:09:47 <rosmaita> the consensus seemed to be that there's some kind of mishandling of a file object somewhere
14:09:50 * jokke_ can see os_subunit fork coming soon to your neighbourhood
14:10:26 <rosmaita> anyway, i put this on the agenda wondering if there was an alternative testrunner we could use
14:10:36 <abhishekk> I have applied this reproducer in my local environment; without the subunit pull changes it fails with the parser error
14:10:40 <rosmaita> but it looks like not, if we want to keep highly parallel tests running
14:11:05 <rosmaita> yeah, so the situation is that the patch works, but the subunit maintainer thinks it is a hack
14:11:10 <abhishekk> and with his pull changes it is working
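[A minimal sketch of the kind of reproducer being discussed, assuming the failure is triggered by a single test emitting a very large volume of output that then gets encoded into the subunit stream; the file and test names are illustrative, not the actual nova reproducer linked above:]

    # test_subunit_overflow.py (hypothetical; run with `stestr run`)
    import unittest

    class TestHeavyOutput(unittest.TestCase):
        def test_floods_subunit_parser(self):
            # Emit roughly 32 MiB of stdout; captured output this large
            # has been reported to trip the subunit v2 parser.
            for _ in range(512):
                print('x' * 65536)

[Without the subunit fix from pull/40, a run like this is expected to fail with the parser error abhishekk describes; with the fix applied locally it should pass.]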
14:11:57 <jokke_> rosmaita: thanks for looking into this and I truly hope we as a community can come together and fix the issues rather than a) forking yet another project or b) starting to fiddle around and patch yet another test runner. Also like you said the options list is pretty short
14:12:11 <abhishekk> I can comment on that, but will it help?
14:12:39 <rosmaita> yeah, i think maybe abhishekk report on the pull request, and maybe we can circle up with stephen for some strategy
14:12:40 <jokke_> does he have a better solution to offer?
14:12:53 <rosmaita> yeah, fix our tests so that it doesn't happen
14:12:57 <rosmaita> :P
14:13:04 <jokke_> oh lol, so no
14:13:11 <jokke_> yeah let's keep the pressure on
14:13:33 <abhishekk> rosmaita, cyril is going to drop a mail to him
14:13:47 <rosmaita> ok, sounds good ... that's all from me
14:14:06 <jokke_> one can always start flame war on reddit :P
14:14:12 <jokke_> That helps every time
14:14:17 <abhishekk> :D
14:15:04 <abhishekk> we had a lengthy discussion on this today, let's see how it progresses
14:15:05 <abhishekk> moving ahead
14:15:10 <abhishekk> #topic Delete image from single store
14:15:12 <jokke_> ++
14:15:32 <abhishekk> jokke_, do you mind if I take this forward?
14:16:03 <abhishekk> IMO this will be a very helpful feature and I don't want to waste our efforts
14:16:29 <jokke_> I don't care, haven't had time for it and my request for adding hours to a day has not been approved yet :P
14:18:30 <rosmaita> abhishek has been disconnected
14:18:52 <rosmaita> ok, so jokke_ you are ok with abhishekk taking over the patch?
14:18:54 <abhishekk> sorry, I have been disconnected from the network
14:18:54 <abhishekk> did I miss something?
14:18:58 <rosmaita> nope
14:19:07 <jokke_> I don't care, haven't had time for it and my request for adding hours to a day has not been approved yet :P
14:19:12 <jokke_> ^^ that possibly
14:19:12 <abhishekk> cool
14:19:49 <abhishekk> ok, moving ahead
14:20:03 <abhishekk> #topic glance cinder store nfs mount issue
14:20:10 <abhishekk> whoami-rajat, stage is yours
14:20:14 <whoami-rajat> Hi
14:20:37 <whoami-rajat> So I wanted to discuss my patch that mounts the image-volume under a glance-owned directory rather than the cinder one
14:21:09 <whoami-rajat> I've hit errors with this in my environment (Permission Denied) and I think it can cause much bigger issues too
14:21:28 <whoami-rajat> Nova takes the same approach of mounting nfs volumes under its own custom path
14:21:36 <whoami-rajat> and I would like to propose the same for glance
14:22:08 <rosmaita> that seems to make sense
14:22:37 <jokke_> whoami-rajat: I hope my stance did not come across as too harsh. Happy that you're looking into the problems there. This is just really something os_brick should handle rather than each consumer reinventing the wheel on their own, and that's why os_brick was pushed to us to use in that driver
14:23:22 <whoami-rajat> jokke_, not at all harsh :)
14:23:30 <rosmaita> well, i think that glance_store is supposed to be providing the abstraction layer to glance
14:23:32 <eharney> i'd like to understand the objection better... i think there's some misunderstanding about what os-brick does and doesn't do
14:23:45 <rosmaita> so the place to do the os-brick mediation would seem to be in the glance_store driver
14:25:20 <rosmaita> it has suddenly gone quiet in here? or am i disconnected?
14:25:37 <jokke_> nope, just quiet
14:25:39 <abhishekk> nope
14:25:46 <rosmaita> ok, thanks
14:26:04 <rosmaita> eharney: can you explain why nova takes the approach it does with os-brick?
14:26:43 <eharney> os-brick provides an interface to connect to things, but the consumer (glance_store here, or nova) still has to have code that handles block devices (iscsi/fc), nfs volumes, rbd volumes, etc
14:27:15 <jokke_> So iirc the whole idea of cinder+os-brick was that we get block device from cinder and os-brick provides us access to it so that we do not need to take care of the special sauce of different back-ends in glance
14:27:29 <eharney> os-brick will not provide a block device when cinder is serving volumes over nfs
14:27:51 <jokke_> and yes, we already have special sauce for rbd, but let's be honest, they are just special in every way.
14:28:32 <eharney> i assume that the glance_store driver for cinder needs similar work for rbd, but that's a whole different project
14:28:51 <abhishekk> I just have one question: if cinder has multiple nfs backends configured, how will this new config option help?
14:29:26 <eharney> nfs exports are mounted to different directories under $cinder_nfs_mount_point_base
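[For context, cinder and nova name those per-export directories after a hash of the export string; a small sketch, assuming the md5-based scheme their remotefs helpers use:]

    # sketch of the per-export mount dir derivation (assumption: md5 of
    # the export string, as in the cinder/nova remotefs code)
    import hashlib
    import os

    def nfs_mount_dir(mount_point_base, export):
        digest = hashlib.md5(export.encode('utf-8')).hexdigest()
        return os.path.join(mount_point_base, digest)

    # yields something like /var/lib/cinder/mnt/<md5 hexdigest>
    print(nfs_mount_dir('/var/lib/cinder/mnt', '192.168.0.5:/srv/nfs'))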
14:29:50 <whoami-rajat> jokke_, the code we're trying to implement isn't for each and every driver but for the types of driver, i.e. iscsi, fc, nfs, rbd, so we just need code handling these scenarios rather than all the storage drivers
14:31:02 <whoami-rajat> eharney++
14:31:36 <whoami-rajat> the mount point is a directory with the volume id (IIRC) inside the mount_point_base
14:31:52 <eharney> i don't think i answered the why nova does what it does question... some of that predates the existence of cinder
14:32:18 <whoami-rajat> /var/lib/glance/<vol-id>/
14:32:23 <eharney> but the "how" of what it does is that it determines where it wants to mount nfs exports to consume cinder volumes, and chooses a path to mount them to.  this particular item is one of the things missing from glance_store currently
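[A hedged sketch of the proposal as described here: glance_store picks the base directory and passes it to os-brick when building the connector, assuming os-brick's remotefs connector accepts an nfs_mount_point_base keyword the way nova uses it; the root helper and path below are illustrative, not the actual patch under review:]

    # hypothetical glance_store-side sketch, not the patch under review
    from os_brick.initiator import connector

    conn = connector.InitiatorConnectorFactory.factory(
        'nfs',  # protocol taken from cinder's connection_info
        root_helper='sudo glance-rootwrap /etc/glance/rootwrap.conf',
        nfs_mount_point_base='/var/lib/glance/mnt')

    # device_info = conn.connect_volume(connection_info['data'])
    # device_info['path'] would then point inside the glance-owned base
    # directory instead of under cinder's nfs_mount_point_base.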
14:32:47 <jokke_> whoami-rajat: well, that's the thing ... when the cinder driver was introduced the selling point was that we don't need any of that as cinder abstracts the backend from us. Now you're saying that we need special sauce for every type of connectivity as os-brick can't handle that, and I guess the next step is that we need spaghetti for HP, NetApp, DellEMC, Pure Storage etc. cause they will need their own special treatment
14:32:58 <eharney> no, it's not a per-driver issue
14:33:00 <eharney> it's a per-protocol issue
14:33:05 <abhishekk> the current glance_store cinder driver has many loopholes
14:33:22 <eharney> so, iscsi, fc (which are mostly the same in brick), nfs, ceph, and... other rare ones if someone wants to
14:33:49 <jokke_> so can we give os-brick a "base mount path" and it does the right thing there, or do we need to bring a special setting for each protocol?
14:34:23 <eharney> base_mount_path is only relevant for FS protocols (and at this point only NFS is really of interest there, i think)
14:35:06 <jokke_> cause _that_ is the problem I have with this proposal. If os-brick needs a folder it has permission to do its thing, I'm fine with that. If we need to specify that for each and every thing separately, I'm having a problem with it
14:35:57 <abhishekk> +1
14:35:57 <rosmaita> well, it's either fix it inside the glance_store cinder driver, or have a glance_store cinder_nfs, cinder_iscsi etc
14:36:41 <jokke_> eharney: so can we call it "cinder_base_mount_path" and never have this discussion again when there is the next special sauce that needs some node-local FS trickery, be that loop mounts, nfs, iscsi fs with volume images etc.?
14:37:27 <eharney> i guess it can be called that since it's in the glance_store cinder driver, but both nova and cinder have the option named "nfs_mount_point_base"
14:37:34 <jokke_> cause I know the day will come, if we call that nfs-specific, when the next person doing their thing doesn't want to use it as it's nfs
14:38:04 <rosmaita> i think it's better to be consistent with cinder and nova
14:38:13 <rosmaita> since people from those teams may wind up working on it
14:38:33 <eharney> the problem with making it generic is that later when you add support for, i dunno, yzfs, then you need a different option named something else for yzfs mounts
14:38:36 <whoami-rajat> jokke_, i think the glance cinder store has generic code that requires some restructuring for specific cases.
14:38:48 <eharney> so i'm not sure it's a good idea to avoid putting "nfs" in the name
14:39:33 <abhishekk> rosmaita, as per our glance_store convention we define each option as store_prefix_option_name
14:39:35 <whoami-rajat> eharney, nova has different mount_point_base names for different drivers like quobyte
14:39:41 <eharney> whoami-rajat: right
14:39:43 <jokke_> eharney: that's exactly what I want to avoid
14:39:51 <rosmaita> ok, so i guess if we're arguing over the name of the config opt, then the issue is basically settled?
14:40:08 <eharney> i'm not sure it's a good idea to try to avoid that, i think that's a requirement
14:40:19 <eharney> but, what rosmaita said
14:40:38 <jokke_> eharney: so if brick is given a mount path it can then do /put/mount/craps/here/[nfs,yfs,psnpfs] and the driver side should never need to know about it
14:41:00 <eharney> true, we could just make it "cinder_mount_point_base" and all nfs mounts go under $cinder_mount_point_base/nfs/<asdf>
14:41:11 <jokke_> it just gives brick a path where it has permissions to do what it needs to do
14:41:11 <eharney> and yzfs goes under $cinder_mount_point_base/yzfs etc
14:41:13 <eharney> not a bad idea
14:42:22 <rosmaita> whoami-rajat: that sound OK to you?
14:42:29 <jokke_> cause there is no reason the consumer should be touching those or know about them, I'm assuming brick gives the consumer just an fd anyway
14:42:35 <abhishekk> sounds reasonable to me
14:42:54 <whoami-rajat> rosmaita, i'm not sure, there are 2 discussions going on
14:43:09 <abhishekk> :D
14:43:19 <whoami-rajat> rosmaita, if you're talking about making cinder_mount_point_base the generic option with all FS driver files going under it, then i'm ok with it
14:43:21 <jokke_> and that would avoid us having an extra 300 lines of config options and comments for them in our already mile-long config files just because
14:43:35 <rosmaita> whoami-rajat: ok, and what issue is still open?
14:43:41 <abhishekk> +1
14:44:04 <whoami-rajat> rosmaita, i think jokke_ is still suggesting doing it inside os-brick?
14:44:27 <rosmaita> no, i thought it was going to be done in the cinder driver
14:44:58 <whoami-rajat> rosmaita, then i misunderstood the words and i'm clearly ok with everything
14:45:04 <abhishekk> 15 minutes remaining
14:45:30 <jokke_> the best case scenario is that it's in os-brick, we can pass it when we initiate the connector (regardless of whether it's needed or not) and brick does the right thing
14:46:16 <jokke_> it just knows "Hey, I have a path here; if I need one, I have permissions to it"
14:46:45 <jokke_> that would be nice way to consume it :P
14:47:46 <whoami-rajat> can we also have the final action items so i don't miss anything?
14:48:18 <whoami-rajat> i know one for sure: rename the mount base path config option
14:48:43 <abhishekk> we can have this cinder_mount_point_base option in the glance_store cinder driver
14:49:01 <jokke_> whoami-rajat: that's a good start, I think we need to continue this discussion offline. I need to run in a minute
14:49:28 <abhishekk> and all nfs mounts go under cinder_mount_point_base/nfs and others under cinder_mount_point_base/
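[As a rough illustration of that action item, the option could be registered in the glance_store cinder driver along these lines; the name matches the discussion above, but the default and help text are assumptions to be settled in review:]

    # hypothetical sketch for the glance_store cinder driver options
    from oslo_config import cfg

    _MOUNT_OPTS = [
        cfg.StrOpt('cinder_mount_point_base',
                   default='/var/lib/glance/mnt',
                   help='Directory the glance cinder store owns and hands '
                        'to os-brick for filesystem-protocol volumes. NFS '
                        'shares would be mounted under <base>/nfs/<dir> '
                        'and any future protocol under <base>/<protocol>.'),
    ]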
14:49:29 <whoami-rajat> jokke_, ack, thanks for the discussion
14:49:34 <jokke_> can't be late twice a day, and I joined a couple of min late so I need to take it back and leave a couple early :P
14:49:48 <abhishekk> :D
14:49:55 <abhishekk> moving to open discussion
14:50:03 <abhishekk> #topic Open discussion
14:50:13 <rosmaita> i have something
14:50:22 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012883.html
14:50:44 <rosmaita> question about show_multiple_locations deprecation status
14:51:08 <rosmaita> i replied, but the author suggested revising the train release notes to say the workaround is still needed
14:51:25 <rosmaita> abhishekk: that's up to you
14:51:42 <abhishekk> I need to do one stable/train release in the coming week
14:51:50 <jokke_> I haven't read that but I think we need to lift the deprecation and hopes of getting rid of it, due to how popular ceph became and everything heavily relying on getting those location uris
14:52:22 <abhishekk> rosmaita, If you provide me the wording I will add one for sure
14:52:42 <abhishekk> I am pretty bad at writing release notes
14:52:50 <rosmaita> abhishekk: sure, i think we can just copy the one from stein
14:52:58 <rosmaita> i will look and see
14:53:04 <abhishekk> rosmaita, ack,
14:53:06 <jokke_> Ping me if anything is needed. Now I really need to run (today is last day when my passport is valid and I have appointment to order a new one so can't be late from that) :)
14:53:20 <jokke_> will be back online in a while
14:53:26 <abhishekk> jokke_, o/~
14:53:28 <rosmaita> jokke_: ok, don't get deported
14:53:47 <rosmaita> abhishekk: i'll get up a patch for you in a bit
14:53:49 <abhishekk> we have a new member waiting I guess
14:53:56 <abhishekk> rosmaita, thanks
14:54:09 <abhishekk> nao-shark, around?
14:54:26 <nao-shark> Thanks.
14:54:33 <nao-shark> I have two questions.
14:54:48 <nao-shark> https://review.opendev.org/#/c/687390/
14:54:50 <nao-shark> This is my spec for reviving the S3 driver.
14:55:00 <nao-shark> and this is my patch for the S3 driver: https://review.opendev.org/#/c/695844/
14:55:08 <nao-shark> I want to edit glance-api.conf to show how to configure the S3 driver.
14:55:16 <nao-shark> But it looks like it is generated by oslo-config-generator.
14:55:26 <nao-shark> So my first question is: can I make a separate patch for glance-api.conf before my S3 driver patch is merged?
14:55:35 <nao-shark> My concern is that modifications to the S3 driver may directly affect the content of glance-api.conf.
14:56:05 <abhishekk> nao-shark, I will have a look at those and will get back to you
14:56:14 <rosmaita> nao-shark: what you can do is regenerate the config file and submit it as part of your patch
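[For reference, the sample config is normally regenerated from the repo root via the oslo-config-generator tox environment; the env name and generator config path below are what glance is believed to use and should be confirmed against tox.ini:]

    tox -e genconfig
    # or, invoking the generator directly:
    # oslo-config-generator --config-file etc/oslo-config-generator/glance-api.conf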
14:56:32 <rosmaita> i think we still keep a full config sample? or did we stop doing that?
14:56:40 <abhishekk> nao-shark, could you ping me tomorrow morning?
14:56:50 <abhishekk> rosmaita, I guess we have stopped doing that
14:57:00 <rosmaita> oh, ok
14:57:19 <rosmaita> nao-shark: best work it out with abhishekk tomorrow
14:57:28 <nao-shark> abhishekk OK. thanks
14:57:47 <abhishekk> cool
14:57:52 <abhishekk> anything else?
14:58:08 <nao-shark> Maybe the next question should wait until tomorrow
14:58:28 <abhishekk> rosmaita, I have replied to your question on client patch
14:58:42 <abhishekk> nao-shark, yes
14:58:53 <abhishekk> or you can drop me a mail as well
14:59:25 <abhishekk> thank you all
14:59:42 <nao-shark> abhishekk OK, thanks for your support!
14:59:50 <rosmaita> bye
14:59:53 <abhishekk> nao-shark, no worries
14:59:55 <abhishekk> #endmeeting