14:00:19 <abhishekk> #startmeeting glance
14:00:20 <openstack> Meeting started Thu May 13 14:00:19 2021 UTC and is due to finish in 60 minutes.  The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:24 <openstack> The meeting name has been set to 'glance'
14:00:24 <abhishekk> #topic roll call
14:00:28 <jokke> o/
14:00:30 <abhishekk> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:32 <abhishekk> o/
14:00:33 <dansmith> o/
14:00:52 <abhishekk> cool, let's wait a couple of minutes more
14:01:29 <rosmaita> o/
14:01:59 <abhishekk> lets start
14:02:22 <abhishekk> #topic release/periodic jobs update
14:02:28 <abhishekk> M1 is two weeks away
14:02:39 <abhishekk> we have two topics to look after
14:02:48 <abhishekk> 1. Cache API work (jokke)
14:02:52 <jokke> I think I have some API changes to write ;)
14:03:12 <abhishekk> 2. Review and approve quota related specs
14:03:30 <abhishekk> jokke, cool, looking forward to it :P
14:03:51 <dansmith> is there a spec for the cache stuff?
14:04:18 <abhishekk> periodic job is all green except one post failure and one retry
14:04:23 <abhishekk> dansmith, yes there is
14:04:40 <abhishekk> #link https://review.opendev.org/#/c/665258/
14:05:00 <dansmith> ah, was looking for owner:jokke, got it
14:05:10 <abhishekk> ack
14:05:37 * dansmith assumes that needs updating
14:05:47 <abhishekk> moving ahead, we will discuss this in open discussion if you have any comments
14:06:33 <abhishekk> dansmith, right, me or jokke will update it with latest understanding
14:06:38 <abhishekk> moving ahead
14:06:39 <dansmith> wck
14:06:54 <abhishekk> #topic Native Image Encryption (rosmaita)
14:07:06 <abhishekk> this is rosmaita
14:07:23 <rosmaita> yeah, basically, it's what i said on the agenda
14:07:29 <rosmaita> (i'm in a concurrent meeting, sorry)
14:07:53 <rosmaita> it's looking like the barbican Consumer API may not be ready until late xena, or early Y
14:08:05 <abhishekk> ack
14:08:27 <rosmaita> so i proposed that we go ahead and implement the basic stuff anyway, including some CI
14:08:43 <abhishekk> +1 from me
14:08:54 <rosmaita> at first i was thinking it could be an experimental feature
14:09:14 <rosmaita> but i think actually the feature doesn't really need the consumer api
14:09:44 <abhishekk> I propose to have a separate spec/lite-spec mentioning this on top of the current approved one
14:10:17 <rosmaita> that makes sense to me
14:10:36 <rosmaita> ok, i'll bring that back to Luzi and the pop-up team
14:10:39 <abhishekk> #link https://review.opendev.org/#/c/609667/
14:10:49 <abhishekk> cool, thank you for taking care of this
14:11:03 <abhishekk> moving ahead
14:11:17 <abhishekk> #topic Implement glance-unified-quotas
14:11:24 <abhishekk> #link https://review.opendev.org/c/openstack/glance-specs/+/788037
14:11:32 <abhishekk> So we do have a couple of reviews on the spec
14:12:13 <abhishekk> and I am OK with the current proposal, with usage APIs as a follow-up to this effort
14:12:25 <dansmith> I think there's a question about whether or not to split the current count to active count and staged count,
14:12:36 <dansmith> but other than that the comments from rosmaita were mostly clarifications
14:12:46 <dansmith> if people want to see the count split, I can update it with that
14:13:00 <abhishekk> If anyone has any concerns/suggestions, kindly raise them on the spec before the next meeting
14:13:18 <abhishekk> I think it's better to have the split count
14:13:42 <jokke> hmm-m, I had a quick chat about it with abhishekk last week but forgot to write up the review. Will do shortly. I'm quite surprised/worried by how racy the proposal is
14:13:43 <dansmith> okay
14:14:39 <dansmith> jokke: happy to discuss here,
14:14:45 <jokke> just the fact that with soft limits and no enforcement as long as there is any quota left, we basically allow a
14:15:10 <jokke> no-limits situation where someone can just start a bunch of imports and go terabytes over their quota pretty easily
14:15:37 <dansmith> well, that's kinda how soft limits work
14:15:53 <dansmith> they're "stop the bleeding" and not "enforce the last byte of usage"
14:16:25 <dansmith> the way oslo limit works, we will hit keystone each time we check the limit, so if we do that on each chunk read we will generate a ton of traffic back to keystone while doing so
14:17:22 <jokke> I'd suggest we put a stopgap there once one gets close, say 10% of quota left and you get one concurrent operation, or something along those lines
14:17:50 <dansmith> well, rosmaita's suggestion will let you limit to N in-progress imports,
14:18:01 <dansmith> which would let you limit it for imports at least
14:18:09 <jokke> just to make sure that one does not go "Oh I have only a gig of quota left, I better do all these imports and copy operations right now"
14:18:17 <dansmith> for upload we'd have to do something else, like a quota on images in queued state or whatever
14:19:22 <dansmith> as noted, oslo.limit doesn't really provide us a way to determine what our limit is; we tell it what our usage is and what we're trying to consume, and it tells us if we're over the limit or not
14:19:46 <dansmith> so doing a bunch of math (which I think is likely to be confusing to the user if they can't see it) with all the numbers is difficult
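For reference, a minimal sketch of the enforcement model described above, assuming the standard oslo.limit Enforcer interface; the resource name, project id, and stub usage callback are illustrative, not the spec's final choices:

```python
from oslo_limit import limit


def _usage_callback(project_id, resource_names):
    # oslo.limit asks the service for its current usage and fetches the
    # registered limits from keystone itself; it only answers over/under.
    # A real callback would count usage from the glance DB; this stub is
    # purely illustrative.
    return {name: 0 for name in resource_names}


enforcer = limit.Enforcer(_usage_callback)

# Before accepting more data, declare the delta about to be consumed;
# ProjectOverLimit is raised if usage + delta exceeds the keystone limit.
# Checking on every chunk read would mean one keystone round trip per check.
enforcer.enforce('some-project-id', {'image_size_total': 1024})
```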
14:19:55 <abhishekk> Also, once we have the basic mechanism in place we can always enhance it as per our use cases
14:20:06 <jokke> Yeah, that's the other thing I noticed about the proposal. Due to the approach taken, the staging quota is pretty coarse too. It does not take into account, for example, copy jobs
14:20:47 <dansmith> abhishekk: right, and nova used to have a quota system that obsessed over hard limits, and it was ripped out years ago in favor of a counting approach like this which doesn't get out of sync and need maintenance,
14:20:54 <dansmith> so we're pretty in line with that
14:21:09 <abhishekk> ack
14:21:29 <dansmith> I think other projects know the size of large things before/when they enforce quota.. since glance doesn't on upload...
14:22:04 <dansmith> jokke: meaning the staging space used when we're doing a copy later?
14:22:15 <dansmith> jokke: as noted on that code patch, I'm gonna do that, I just haven't had time this week
14:22:22 <dansmith> and web-download as well
14:22:47 <jokke> dansmith: doing staging quota only based on the image state does not account for staging usage during web-download or copy-image
14:22:56 <jokke> ah, kk
14:23:11 <abhishekk> yep, that could be addressed in the implementation, but if you want we can also mention staging use during web-download or copy operations in the spec
14:23:46 <dansmith> yeah, I'm kinda skeptical about how transparent that will be, because the space used in that scenario is more internalized,
14:23:47 <dansmith> and the user doesn't really know or understand that we're doing that in the same way they do when they stage..import,
14:24:15 <dansmith> but I can see the mismatch so I'm fine with doing it that way
14:24:16 <abhishekk> So I suggest adding the concerns on the spec and getting them addressed as early as possible
14:24:30 <abhishekk> ack
14:24:44 <jokke> yeah, I'll get my comments into the spec review
14:24:45 <abhishekk> anything else jokke ?
14:24:49 <abhishekk> cool
14:24:58 <abhishekk> moving ahead
14:25:08 <abhishekk> #topic Bi-weekly Bug discussion
14:25:28 <abhishekk> Cyril is not around due to holiday
14:25:47 <abhishekk> We have 3 bugs to discuss today
14:25:51 <abhishekk> lets start
14:25:59 <abhishekk> #link https://bugs.launchpad.net/glance/+bug/1920936
14:25:59 <openstack> Launchpad bug 1920936 in Glance "Unrelated stack trace when image doesn't exist in store" [Undecided,New]
14:26:34 <abhishekk> This bug was reported last cycle while we were working on glance-cinder multistore support
14:26:40 <jokke> I've been poking the pile of swift bugs this past week, just fyi; the driver is surprisingly broken, to the point that it amazes me it works at all
14:27:07 <abhishekk> :o
14:27:18 <abhishekk> we can discuss it later
14:27:58 <abhishekk> So the above bug is: if the image is not present in any of the configured stores then it raises 503
14:28:34 <abhishekk> Rajat has reported it; since it is difficult to reproduce, we need some tricks to reproduce it
14:28:59 <abhishekk> Either way I guess 503 is not acceptable and we should have a meaningful response
14:29:23 <jokke> 503 is perfectly valid response
14:29:51 <dansmith> well, the stack trace isn't,
14:29:58 <jokke> not necessarily correct for the exact situation, but there is nothing wrong throwing 503 ;)
14:30:02 <abhishekk> sorry 500
14:30:08 <dansmith> and it looks like a bug in trying to call ImageProxy
14:30:50 <dansmith> it's not clear from the bug,
14:30:55 <abhishekk> Yep, I will try to have some steps to reproduce this issue and then will work on fix
14:31:10 <dansmith> but it looks like maybe if you delete the thing on the cinder side that you're stuck and can't even GET or DELETE the image anymore?
14:31:32 <abhishekk> I guess so
14:32:22 * abhishekk will work with rajat on this
14:32:27 <abhishekk> moving ahead
14:32:32 <abhishekk> #link https://bugs.launchpad.net/glance/+bug/1874458
14:32:32 <openstack> Launchpad bug 1874458 in Glance "Glance doesn't take into account swift_store_cacert" [Undecided,New]
14:32:34 <jokke> DELETE _should_ work
14:33:09 <abhishekk> jokke, as you are working with the swift driver could you also note this down?
14:33:16 <jokke> But looks like it might not be Cinder specific. Looks like we don't handle it well if we have locations but none of them has data for the image
14:33:54 <abhishekk> hmm, will try that approach to reproduce it
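A rough sketch of the behaviour being discussed, not glance's actual code path: if every location fails with NotFound, surface a clean 404 instead of letting the exception escape as a 500 with a stack trace. The helper and its signature are assumptions made for illustration:

```python
from glance_store import exceptions as store_exc
import webob.exc


def get_image_data(image, store_api, context=None):
    # Try each registered location in turn; a location whose backing data
    # has been removed (e.g. deleted on the cinder side) raises NotFound.
    for location in image.locations:
        try:
            return store_api.get_from_backend(location['url'], context=context)
        except store_exc.NotFound:
            continue
    # No location could serve the data: report it as a 404, not a 500.
    raise webob.exc.HTTPNotFound(
        explanation="Image data is not available in any configured store")
```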
14:34:14 <jokke> abhishekk: ack, I think it isn't in my current work set
14:34:39 <abhishekk> nope it's not, but it will be great if you add it
14:34:56 <jokke> yeah, I'll have a look
14:35:14 <abhishekk> also the bug doesn't show how to configure it, so there isn't much information here
14:35:30 <abhishekk> cool, kindly update your findings on the bug
14:35:42 <abhishekk> moving ahead
14:35:47 <abhishekk> #link https://bugs.launchpad.net/glance/+bug/1924612
14:35:47 <openstack> Launchpad bug 1924612 in Glance "Can't list "killed" images using the CLI" [Undecided,In progress] - Assigned to Abhishek Kekane (abhishek-kekane)
14:36:39 <abhishekk> So our image-list command has a filter option but it does not list the killed images
14:36:46 <abhishekk> I have a WIP patch
14:37:07 <abhishekk> but working on it showed me that there is another issue as well
14:37:14 <abhishekk> glance image-list --property-filter status=killed --property-filter deleted=1
14:37:39 <abhishekk> if I pass --property-filter deleted=true|false it does not recognize the value
14:38:04 <abhishekk> it only works with integer and not with boolean
14:38:19 <abhishekk> So maybe that might be an issue in our glanceclient
14:39:11 <abhishekk> For the main bug I will write some unit/functional tests so that the patch will be open for review
14:39:20 <jokke> either in the client or the request deserializer
14:39:27 <abhishekk> hmm
14:39:49 <abhishekk> I will report this second bug once I find the root cause of it
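A hypothetical sketch of the kind of normalization that could make boolean filter values work; whether the real fix belongs in python-glanceclient or in the API request deserializer is exactly the open question above, and the helper name is invented for illustration:

```python
_TRUE = {'1', 't', 'true', 'yes', 'on'}
_FALSE = {'0', 'f', 'false', 'no', 'off'}


def normalize_bool_filter(value):
    # Accept common boolean spellings so both deleted=1 and deleted=true work.
    v = str(value).strip().lower()
    if v in _TRUE:
        return True
    if v in _FALSE:
        return False
    raise ValueError("Invalid boolean filter value: %r" % value)


assert normalize_bool_filter('true') is True
assert normalize_bool_filter('0') is False
```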
14:40:35 <jokke> but I'd say that is not a crazy critical bug, as there is exactly one way an image ever gets killed and that is if you use a signed image and the signature verification fails upon create
14:41:01 <jokke> sorry, on upload
14:41:35 <abhishekk> jokke, right, but as we have filter support it should work for all the image states
14:41:39 <jokke> do they show up when you list all, including deleted?
14:41:47 <abhishekk> nope
14:42:00 <jokke> oh, ok ... that's interesting
14:43:06 <abhishekk> Other than this, we have pointed out some bugs which were reported against v1 and the registry, so we are going to mark them invalid/closed
14:43:35 <abhishekk> that's it from me for today
14:43:47 <abhishekk> #topic Open Discussion
14:44:39 <abhishekk> Just for update, I will not be around tomorrow
14:45:00 <jokke> abhishekk: how is your next week looking?
14:45:35 <abhishekk> will be there whole week
14:46:11 <jokke> ok, shall we, assuming you're well by Monday, sync about the caching API stuff and get it moving from the start of the week?
14:46:26 <abhishekk> yep
14:46:42 <abhishekk> finish it within the week as well :D
14:46:49 <jokke> 14:00 utc works for you? or you prefer earlier?
14:47:03 <abhishekk> 1400 works well for me
14:47:28 <jokke> kk, I'll have an invite in your calendar for 14:00 by the time you log on Monday
14:47:38 <abhishekk> great, thank you
14:47:49 <abhishekk> anything else
14:48:27 <jokke> Yeah, so lots of the swift issues are related to keystoneauth having had backwards incompatible breaking changes a couple of years ago
14:48:53 <jokke> and we missed them ... and have been blissfully ignorant about the whole thing
14:49:04 <abhishekk> do you mean while doing multi store changes ?
14:49:13 <jokke> so that needs refactoring to the current API, and then we should have the majority of those issues solved
14:49:31 <jokke> no I mean keystone changed the keystoneauth API
14:49:34 <abhishekk> I guess on Monday we should sync on this as well
14:49:53 <jokke> and we never caught on that
14:50:02 <abhishekk> ack
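For context, a minimal sketch of the session-based interface that current keystoneauth expects, which is the style the swift driver refactor would move toward; the endpoint, credentials, and CA path are placeholders, not real configuration:

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
import swiftclient

# A password auth plugin plus a Session is the current keystoneauth pattern.
auth = v3.Password(
    auth_url='https://keystone.example.org:5000/v3',
    username='glance',
    password='secret',
    project_name='service',
    user_domain_id='default',
    project_domain_id='default',
)

# The Session owns TLS settings, so a CA bundle such as the one behind
# swift_store_cacert would naturally be honoured here via verify=.
sess = session.Session(auth=auth, verify='/etc/glance/swift-store-ca.pem')

# python-swiftclient can consume a keystoneauth session directly.
conn = swiftclient.Connection(session=sess)
```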
14:50:43 <jokke> thats all from me
14:50:54 <abhishekk> ack
14:51:01 <abhishekk> lets wrap up for today
14:51:05 <abhishekk> thank you all
14:51:10 <jokke> thanks all!
14:51:11 <abhishekk> have a great week ahead
14:51:31 <jokke> abhishekk: enjoy your time off, hopefully you're feeling better
14:51:47 <abhishekk> thank you, fingers crossed
14:51:57 <abhishekk> #endmeeting