14:01:06 <abhishekk> #startmeeting glance
14:01:07 <openstack> Meeting started Thu Jun 18 14:01:06 2020 UTC and is due to finish in 60 minutes.  The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:08 <abhishekk> #topic roll call
14:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:11 <openstack> The meeting name has been set to 'glance'
14:01:13 <abhishekk> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:15 <abhishekk> o/
14:01:23 <rosmaita> o/
14:01:24 <jokke_> o/
14:01:30 <abhishekk> short agenda today
14:01:39 <abhishekk> let's start
14:01:54 <abhishekk> #topic release/periodic jobs update
14:02:03 <abhishekk> V1 milestone this week
14:02:25 <abhishekk> We don't have anything merged so should we skip this milestone release?
14:02:37 <rosmaita> +1
14:02:41 <jokke_> ++ no need for it
14:02:42 <abhishekk> or should we release glance, since we have modified the registry code?
14:03:20 <abhishekk> same goes for store and python-glanceclient as well
14:03:30 <jokke_> shouldn't make any difference
14:03:45 <rosmaita> i think sean has a patch up for the glanceclient release
14:04:10 <abhishekk> oh, will have a look and comment on it
14:04:11 <jokke_> milestone releases are not needed anymore and even tags are cheap, I see no reason for doing it just for the sake of it
14:04:26 <abhishekk> +1
14:04:58 <abhishekk> cool, so let's skip this one and focus on milestone 2
14:05:28 <abhishekk> In the next meeting we will finalize M2 priorities
14:06:04 <abhishekk> Regarding periodic jobs, one functional-py36 job is failing
14:06:24 <abhishekk> I spent a little time analyzing the failure
14:06:46 <abhishekk> One functional test for the revert logic is failing due to a race condition
14:07:12 <abhishekk> I will spend some time next week to rectify it
14:07:24 <jokke_> humm
14:07:27 <jokke_> interesting
14:07:28 <abhishekk> As I am the one who wrote that test :D
14:07:34 <rosmaita> :P
14:07:38 <jokke_> :D
14:08:08 <abhishekk> will ping jokke_ if something is needed
14:08:17 * jokke_ ducks
14:08:26 <abhishekk> :P
14:08:31 * rosmaita laughs
14:08:31 <abhishekk> moving ahead
14:08:41 <abhishekk> #topic devstack registry
14:08:54 <abhishekk> registry removal devstack patch merged \o/
14:09:03 <jokke_> \\o \o/ o// o/7
14:09:37 <abhishekk> thanks to jokke_ for the glance patch and dansmith for taking it ahead with the devstack team
14:09:54 <jokke_> I will continue with the cleanup proper
14:10:03 <smcginnis> Yay, finally.
14:10:03 <abhishekk> awesome, thank you
14:10:18 <abhishekk> this means we have one less config file as well :D
14:10:43 <abhishekk> yes, smcginnis thanks for your push as well
14:11:02 <abhishekk> Let's move ahead
14:11:16 <abhishekk> #topic Specs review
14:11:23 <abhishekk> We need to get on top of this
14:11:40 <abhishekk> Because our milestone 2 is dependent on these reviews
14:11:50 <abhishekk> sparse image upload - https://review.opendev.org/733157
14:11:50 <abhishekk> Unified limits - https://review.opendev.org/729187
14:11:51 <abhishekk> Image encryption - https://review.opendev.org/609667
14:11:51 <abhishekk> Cinder store multiple stores support - https://review.opendev.org/695152
14:12:03 <rosmaita> sorry, i started reviewing the sparse file upload and got sidetracked looking at sparse files
14:12:05 <abhishekk> These are the top-priority specs which need reviews
14:12:18 * smcginnis gets some tabs open
14:13:01 <abhishekk> rosmaita, no worries, eyes from you and smcginnis will be an additional benefit for us
14:13:30 <rosmaita> ok, i will wait for the author to revise that spec as you requested
14:13:57 <abhishekk> then there is also a new spec related to duplicate downloads which would be good to get reviews on as well
14:14:08 <jokke_> About that, I had a very fruitful discussion with one of the Ceph devs last week in this timeslot
14:14:21 <rosmaita> re sparse upload: my thought is that the title is misleading, the action as i understand it would take place after the full image has been staged
14:15:00 <abhishekk> that should be sparse image import?
14:15:03 <jokke_> I really feel like I understand the traffic much better now.
14:15:23 <abhishekk> jokke_, what was your discussion about?
14:15:24 <jokke_> Sparse upload is when glance is uploading not when client is
14:16:20 <jokke_> So I think the title is accurate; if it were a glanceclient spec it would point to the step before :P
14:17:18 <abhishekk> you had a discussion with the Ceph devs related to sparse upload?
14:18:13 * abhishekk am I disconnected?
14:18:47 <jokke_> But yeah, so there are a couple of ways we can do the sparse upload and save the bandwidth. If the admin wants to fat provision the image but not send all the zeros over the wire, we can do something like a buffered write again, which sends, say, a 4kB sample over the wire and then just tells Ceph to write that 200k times. Or we can do thin provisioned images by seeking ahead and writing only the data.
14:18:53 <jokke_> abhishekk: nope, can see you
14:18:58 <abhishekk> ack
14:19:49 <jokke_> I think we should look at both, and have the thin provisioning on/off configurable
14:20:20 <abhishekk> sounds good to me
14:20:38 <abhishekk> what about filestore?
14:20:38 <jokke_> So rather than having the config option we talked about at the PTG to turn sparse writes on or off, just switch which way we do the write into Ceph
14:21:18 <abhishekk> his proposal talks about both rbd and filestore sparse upload support
14:21:21 <jokke_> I think the same applies; we can call the config option "thin provisioned" and use the sparse writing there
14:21:54 <abhishekk> ok
14:22:18 <abhishekk> could you please add this suggestion to the spec? we should get it rolling
14:22:22 <jokke_> I think the biggest change is really whether the admin wants to thin provision or not
14:22:25 <jokke_> sure
14:22:27 <jokke_> will do that
14:22:47 <abhishekk> thanks
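A minimal sketch of the thin-provisioned (seek-ahead) write jokke_ describes above, assuming the python rbd bindings; the chunk size, function name, and zero check are illustrative, not actual glance_store code:

```python
# Hypothetical sketch: skip all-zero chunks so the rbd image stays
# thinly provisioned and only real data crosses the wire.
import rbd  # python bindings for Ceph RBD; `image` below is an open rbd.Image

CHUNK = 4 * 1024 * 1024  # illustrative write unit


def write_thin(image, data_iter):
    offset = 0
    for chunk in data_iter:
        if chunk.count(b'\x00') != len(chunk):
            # rbd.Image.write(data, offset): ranges we never write stay
            # unallocated, which is what makes the image thin.
            image.write(chunk, offset)
        offset += len(chunk)
    return offset
```

The fat-provisioned variant discussed above would instead keep one small reusable zero buffer and rewrite it across the zero ranges, so the zeros get allocated on the Ceph side without being streamed from the image source.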
14:23:05 <jokke_> I also got quite a few clarifications, more understanding, and good pointers on other things as well, like how librados handles the I/O
14:23:38 <abhishekk> cool
14:24:05 <jokke_> but we can discuss them separately
14:24:19 <abhishekk> yes
14:24:20 <jokke_> I'll try to get a bit of a refactoring spec together
14:24:34 <abhishekk> that will be great
14:25:33 <abhishekk> I am working on the cinder multiple stores support PoC
14:26:46 <jokke_> Nice
14:28:06 <abhishekk> Ok, please spend some time on spec reviews this week
14:28:14 <abhishekk> Moving on to open discussion now
14:28:14 <jokke_> yup, will do
14:28:26 <abhishekk> #topic Open discussion
14:28:57 <jokke_> Just a couple of quick things around rbd, so we have a note recorded as well.
14:29:04 <abhishekk> I have uploaded our PTG recordings to Google Drive and shared the link in the PTG etherpad
14:29:17 <jokke_> thanks abhishekk!!!
14:30:05 <abhishekk> #link https://etherpad.opendev.org/p/glance-victoria-ptg
14:31:08 <jokke_> So first of all, multithreading per se is not a thing; what people are referring to with that is async writes. Now, my biggest fear of eating all the sockets is happening already: while the rbd client instance is running, it maintains the sockets for all the OSDs it accesses, with a timeout of 900 sec
14:32:10 <abhishekk> oh
14:32:33 <jokke_> For our usage pattern it also does not make sense to start pooling those rbd clients. The main concern there is how long the auth and handshakes take, but as we're not dealing with thousands of small objects, it's actually not relevant for our usage pattern, and we would need to maintain that pool
14:33:37 <jokke_> We could have a pool of async write slots that we scale based on how many concurrent transfers we have ongoing. That would lessen the impact of high-latency links
14:34:46 <abhishekk> that would be a big change imo
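As a rough illustration of the "pool of async write slots" idea (a hypothetical sketch, not Glance code; Glance actually runs under eventlet, and the slot count and names here are made up):

```python
# Hypothetical sketch: cap how many chunk writes are in flight so a
# burst of concurrent transfers cannot eat every socket.
import threading
from concurrent.futures import ThreadPoolExecutor

WRITE_SLOTS = 4  # would be scaled with the number of ongoing transfers


def write_async(image, chunks):
    slots = threading.BoundedSemaphore(WRITE_SLOTS)

    def _write(data, offset):
        try:
            image.write(data, offset)
        finally:
            slots.release()

    with ThreadPoolExecutor(max_workers=WRITE_SLOTS) as pool:
        offset = 0
        for chunk in chunks:
            slots.acquire()  # also keeps us from reading far ahead of the writers
            pool.submit(_write, chunk, offset)
            offset += len(chunk)
    # leaving the `with` block joins the pool, so all writes have landed
```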
14:34:46 <jokke_> but I think the biggest performance improvements we can achieve are by changing the way we write zeros, to either that buffered rewrite or the sparse seek, and by changing how we allocate the size when we don't know the size of the image we're writing
14:35:38 <jokke_> the advice I got was to take the same approach the Ceph client takes in such a situation: just double the size every time a resize is needed and trim at the end
14:35:44 <jokke_> totally safe
14:36:16 <abhishekk> the same as we decided earlier for chunked upload?
14:37:44 <jokke_> abhishekk: so instead of growing the size by 1GB, the advice was to just double the size. So start at, say, 100MB, then 200, 400, 800 ... etc
14:38:06 <abhishekk> ack, got it now
14:38:26 <jokke_> but conceptually the same idea, just don't worry about reserving too much and trim when finished
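A quick sketch of that grow-by-doubling allocation (assuming the python rbd bindings; the 100MB seed is just the example figure from the discussion, not a settled default):

```python
# Hypothetical sketch: when the image size is unknown, double the rbd
# image on demand and trim the over-allocation at the end.
INITIAL_SIZE = 100 * 1024 * 1024


def write_unknown_size(image, chunks):
    size = INITIAL_SIZE
    image.resize(size)
    offset = 0
    for chunk in chunks:
        while offset + len(chunk) > size:
            size *= 2               # double instead of growing by a fixed 1GB
            image.resize(size)
        image.write(chunk, offset)
        offset += len(chunk)
    image.resize(offset)            # trim to the actual data size when finished
    return offset
```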
14:38:42 <abhishekk> so this will be part of rbd refactor
14:38:52 <jokke_> yep
14:39:05 <jokke_> small change, should be super easy to do actually
14:39:21 <jokke_> I might just do it as bugfix before looking into the more risky things
14:40:00 <abhishekk> cool, I think it will be good if it's as easy as it sounds :D
14:40:08 <jokke_> yup
14:40:47 <abhishekk> great, so we need to take the rbd multithreading thing out of our priorities
14:41:10 <jokke_> much more comfortable doing any changes there now that I know how it actually works, and happy to share with anyone interested
14:41:23 <abhishekk> ME
14:41:30 <jokke_> :)
14:41:59 <abhishekk> we will take this up next week :D
14:42:23 <jokke_> that's all from me. Just wanted to share a quick recap so we have it recorded somewhere :D
14:42:39 <abhishekk> thanks
14:42:55 <abhishekk> anything else guys?
14:43:49 <abhishekk> cool, let's wrap up early
14:43:53 <abhishekk> thank you all
14:43:59 <jokke_> Thanks all!
14:44:00 <abhishekk> have a nice weekend
14:44:09 <abhishekk> #endmeeting