14:03:08 <nikhil> #startmeeting glance
14:03:08 <openstack> Meeting started Thu May 26 14:03:08 2016 UTC and is due to finish in 60 minutes.  The chair is nikhil. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:11 <openstack> The meeting name has been set to 'glance'
14:03:21 <mfedosin> o/
14:03:25 <tsymanczyk> \o
14:03:37 <flaper87> o/
14:03:42 <dshakhray> o/
14:03:45 <rosmaita> o/
14:03:46 <nikhil> Courtesy meeting reminder: ativelkov, cpallares, flaper87, flwang1, hemanthm, jokke_, kragniz, lakshmiS, mclaren, mfedosin, nikhil_k, Nikolay_St, Olena, pennerc, rosmaita, sigmavirus24, sabari, TravT, ajayaa, GB21, bpoulos, harshs, abhishek, bunting, dshakhray, wxy, dhellmann, kairat
14:03:53 <hemanthm> o/
14:03:55 <wxy> o/
14:03:56 <sigmavirus24> o/
14:03:56 <jokke_> \o
14:03:57 <nikhil> for some reason my previous message did not go through
14:04:12 * nikhil had to restart irc
14:04:18 <jokke_> ah
14:04:42 <nikhil> #topic agenda
14:04:43 <mfedosin> let's begin then
14:04:46 <nikhil> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:04:49 <kairat> o/
14:04:57 <nikhil> Small agenda today
14:05:05 <nikhil> some people haven't read my mails
14:05:17 <nikhil> Let's get to that later.
14:05:33 <nikhil> #topic Updates
14:05:47 <nikhil> #info Glare updates ( mfedosin , nikhil )
14:05:53 <bunting> o/
14:06:13 <mfedosin> we finished the spec and waiting for reviews :)
14:06:35 <croelandt> o/
14:06:44 <mfedosin> #link https://review.openstack.org/#/c/283136/
14:07:23 <mfedosin> there are a lot of use cases and examples that should help you (in theory) understand how the API works
14:07:48 <mfedosin> so if you have time leave your comments there
14:07:53 <mfedosin> thanks in advance :)
14:08:31 <mfedosin> also we are working on tests for Glare
14:09:17 <mfedosin> I think we will present code for review on 6th of June
14:09:51 <mfedosin> if you have any questions I'm happy to answer
14:10:23 <nikhil> I have a few things to update/add
14:10:36 <mfedosin> nikhil: shoot :)
14:10:37 <nikhil> that I've mentioned to mfedosin
14:10:47 <nikhil> #1. Let's work on only a POC
14:11:41 <nikhil> #2. All things upstream to help engage the community in discussion and code. So, no internal discussions after a POC; use the usual openstack channels-based communication and development.
14:12:25 <nikhil> #3. Dissolving the ownership quotient -- since this is not just a feature but an entire service, we need everyone on the same page and able to contribute. Let's provide a platform for that.
14:12:39 <mfedosin> nikhil: also we decided that API will be marked as 'experimental' initially
14:13:11 <nikhil> #4. Newton priority for Glare should be bare-bones code that will be used by a couple of early adopters who will help us evolve the API. By newton-3 we need to target a stable API.
14:13:56 <nikhil> #5. Stop all discussions on extra features and work on building the community right after a POC is ready for review (something that is not a WIP, set up in small patches that are easier to review and contribute to as applicable)
14:14:29 <nikhil> #6. All bugs and stabilization should be tracked using LP, and Mike was going to help set up an etherpad for that so people can pick up tasks/bugs to contribute.
14:14:53 <nikhil> mfedosin: right, 'experimental' within newton but 'current' by newton-3
14:14:56 <nikhil> mfedosin: anything else?
14:15:14 <mfedosin> #7. Glare should be awesome
14:15:27 <nikhil> ++
14:15:28 <mfedosin> ^ it's the basic requirement :)
14:15:47 <nikhil> It will be -- all awesome people work in Glance.
14:15:59 <nikhil> moving on
14:16:04 <mfedosin> thanks nikhil
14:16:10 <nikhil> #info Nova v1, v2 (mfedosin, sudipto)
14:16:29 <mfedosin> okay, I added unit tests there
14:16:42 <mfedosin> #link https://review.openstack.org/#/c/321551/
14:17:06 <mfedosin> now there are 1750 LOC :)
14:17:47 <mfedosin> today I'll upload code for XEN
14:18:15 <mfedosin> then we have to wait for reviews
14:18:51 <mfedosin> yesterday multinode gates were broken and Jenkins put -1 for that reason
14:19:09 * nikhil can be added to the reviews when they are posted/ready
14:19:47 <mfedosin> nikhil: sure, in a couple of hours
14:20:03 <nikhil> np, that's in general
14:20:53 <nikhil> ok, let's move on then
14:20:56 <nikhil> Thanks mfedosin
14:21:13 <mfedosin> I'll add Matt, Sean and other Nova team members then
14:21:26 <nikhil> well, you'd want to ask them before you add them
14:21:29 <mfedosin> as reviewers
14:21:49 <mfedosin> but I have to mention...
14:22:02 <mfedosin> it's the final time I rewrite this code :)
14:22:03 <flaper87> my changes for devstack are still pending review
14:22:14 <flaper87> I'll probably start pinging ppl aggressively
14:22:14 <tsymanczyk> i think that's reasonable.
14:22:19 <mfedosin> flaper87: give us the link
14:22:41 <jokke_> mfedosin: not blaming you for that
14:22:42 <nikhil> flaper87: ++
14:22:45 <flaper87> #link https://review.openstack.org/#/c/315190/
14:23:04 <flaper87> mfedosin: it's a devstack review, I'll start pinging devstack folks
14:23:07 <flaper87> :D
14:23:11 <flaper87> mfedosin: you're fine
14:23:13 <flaper87> :)
14:23:20 <flaper87> ... for now
14:23:22 <flaper87> :P
14:23:42 <jokke_> flaper87: you give us the link, we keep it at the top of the list with nagging comments :D
14:24:00 <mfedosin> I don't want to rewrite it from scratch again :(
14:24:31 <nikhil> ok, let's move on here
14:24:34 <nikhil> #topic Releases (nikhil)
14:24:47 <nikhil> #info newton-1 next Tuesday May 31st
14:25:22 <nikhil> So, if anyone sees anything outstanding that needs to go in newton-1 let me know _today_
14:25:37 <nikhil> there's no release liaison in newton so, I am handling releases
14:26:05 <nikhil> Also, I think we can take the opportunity to propose a release for the client and store next week around Thursday
14:26:21 <nikhil> that way glance reviewers will have focus on knocking out release specific stuff
14:26:53 <nikhil> the release itself may not go through until later (as and when release team gets time to get those released)
14:27:03 <nikhil> moving on
14:27:09 <nikhil> #topic Announcements (nikhil)
14:27:25 <nikhil> I sent a Best practices for meetings email
14:27:39 <jokke_> nikhil: is Newton-1 supposed to work with the latest release of store?
14:27:47 * jokke_ hasn't checked
14:28:32 <nikhil> jokke_: me neither. but that's a detail. we can time the releases to make stuff work (whether to release store before or after newton-1)
14:28:49 <nikhil> on my note about meetings:
14:28:55 <nikhil> #link http://lists.openstack.org/pipermail/openstack-dev/2016-May/095599.html
14:28:59 <nikhil> please read that carefully
14:29:11 <nikhil> try to be faster in updates (and precise)
14:29:33 <nikhil> any doubts or concerns?
14:29:52 <kairat> Please make sure you add your discussion topic at least 24 hours before the scheduled meeting.
14:30:08 <kairat> Can we make exception for request for reviews?
14:30:14 <nikhil> nope
14:30:30 <flaper87> I'd actually like to avoid using the meeting to request reviews
14:30:37 <flaper87> unless the review requires some actual discussion
14:30:37 <nikhil> ++
14:30:52 <jokke_> ++
14:30:56 <kairat> So do we have an etherpad where we can track that
14:31:02 <kairat> Maybe I missed that
14:31:04 <nikhil> what?
14:31:15 <nikhil> kairat: you are asking for tracking reviews?
14:31:18 <flaper87> my advice would be to reach out to the people you need reviews from if really needed.
14:31:34 <jokke_> or ping out at #openstack-glance
14:31:35 <flaper87> Also, keep in mind that pinging/requesting reviews so frequently is not really nice
14:31:40 <flaper87> but that might be just me
14:31:54 <nikhil> flaper87: yes and no
14:32:10 <nikhil> flaper87: sometimes people want/need reminders if they get busy with internal stuff
14:32:21 <nikhil> so, please check with individuals you are asking for reviews
14:32:29 <flaper87> nikhil: note that I said "so frequently" ;)
14:32:38 <nikhil> flaper87: ok, thanks :-)
14:32:40 <flaper87> I didn't generalize
14:32:42 <kairat> Ok, I was talking about etherpad with priority reviews for that week like for example Heat does
14:32:49 <nikhil> flaper87: perfect
14:33:12 <nikhil> kairat: ok, thanks for that clarification
14:33:23 <rosmaita> kairat: +1 , more work for nikhil but would be really helpful
14:34:00 <nikhil> I used to do that before but many times people didn't pay attention
14:34:11 <jokke_> I'm fine without one extra etherpad I forget to follow ;)
14:34:13 <nikhil> I can look into that process a bit more. It's a bit of maintenance by itself.
14:34:17 <flaper87> Unless people make checking the etherpad a habit, I believe it's not really useful
14:34:35 <nikhil> yeah, people have their own preferences
14:34:37 <flaper87> The dashboard helped a lot during Mitaka (IMHO) and for the cases it didn't, I used to chase people down
14:34:39 <flaper87> :P
14:34:40 <rosmaita> nikhil: or maybe a demo of good gerrit practices -- i have mostly "folk knowledge" of gerrit, i'm sure i don't use it efficiently
14:34:59 <nikhil> rosmaita: noted
14:35:12 <rosmaita> like, i guess i should look at flaper87 's dashboard more often
14:35:20 <hemanthm> can we do something on the dashboard to reflect the importance?
14:35:28 <kairat> ++ to hemanthm
14:35:35 <flaper87> hemanthm: yes and no
14:35:44 <jokke_> rosmaita: add the repos you need as followed under your settings, never go past page 2 unless you're bored and wanna clean up ;)
14:35:54 <flaper87> I mean, it can be done but it's a bit hacked in as there's no support for proper hashtags in our version of gerrit
14:36:10 <flaper87> What I did in mitaka is tag reviews with "Mitaka Priority" or something like that
14:36:20 <flaper87> and have the dashboard run a regex there and group those
14:36:23 <kairat> It looks like this is becoming too long a discussion
14:36:39 <kairat> we can discuss that in e-mail I guess
14:36:42 <nikhil> ok, let's see if we can come up with something offline
14:36:44 <flaper87> topics would work but they are overwritten on every new ps
14:36:50 * flaper87 stf
14:36:53 <flaper87> u
14:36:57 <hemanthm> thanks flaper87
14:37:05 <nikhil> flaper87: mind taking an action item for what all you did in mitaka?
14:37:30 <flaper87> sure, I can write all that down if that helps
14:37:32 <flaper87> :D
14:37:41 <nikhil> thanks, that's what we were looking for
14:37:47 <nikhil> we can share knowledge on this
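(For reference, the dashboard/tagging approach flaper87 describes above can be expressed as a gerrit-dash-creator definition. The sketch below is only an illustration of the idea, not the actual Mitaka dashboard; the "Newton Priority" tag, the project list and the queries are assumptions.)

    [dashboard]
    title = Glance Review Inbox (sketch)
    description = Priority reviews for the cycle
    foreach = (project:openstack/glance OR project:openstack/glance_store OR project:openstack/python-glanceclient) status:open

    [section "Cycle Priorities"]
    # group changes whose commit message carries an agreed plain-text tag,
    # since our version of gerrit has no proper hashtag support
    query = message:"Newton Priority"

    [section "Needs Final +2"]
    query = label:Code-Review>=2 NOT label:Code-Review<=-1 NOT label:Workflow>=1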
14:38:14 <nikhil> on the meetings' etherpad best practices section
14:38:34 <nikhil> look at our agenda pad starting line 21 right now
14:38:43 <nikhil> I can take questions offline
14:39:35 <nikhil> #topic Deleting images that are being used by Nova: do we want the same behaviour for raw and qcow2 images? (croelandt)
14:39:46 <nikhil> croelandt: floor is yours
14:39:47 <croelandt> so :)
14:40:02 <croelandt> At Red Hat, we have a customer that complains about the following issue:
14:40:07 <croelandt> 1) They create an image in glance
14:40:17 <croelandt> 2) they boot a VM in Nova using the image created in 1)
14:40:24 <croelandt> 3) they try to delete the image in glance
14:40:33 <croelandt> Now, depending on the format of the image, it may or may not be deleted
14:40:48 <croelandt> if it is a raw image, it is considered "in use" and cannot be deleted
14:40:54 <croelandt> if it is a qcow image, it can be deleted
14:41:07 <croelandt> We were wondering whether the behaviour should become more "consistent" from a user point of view
14:41:14 <flaper87> So, lemme try to expand on this a bit more, if you don't mind.
14:41:20 <mfedosin> do they delete image with Nova?
14:41:21 <croelandt> Even though we might have to use some tricks to make it happen in glance
14:41:24 <croelandt> flaper87: sure
14:41:33 <croelandt> mfedosin: no, they use "glance image-delete" iirc
14:41:45 <sigmavirus24> == flaper87
14:41:46 <flaper87> The customer is using ceph and therefore, on raw images, ceph is doing a COW and basically marking the image as in-use
14:42:12 * sigmavirus24 was catching up sorry
14:42:14 <flaper87> I think this is a ceph specific case and I'm not sure we should expose it through the API.
14:42:42 <flaper87> I don't think this happens with other stores, at least.
14:42:51 <nikhil> flaper87: correct
14:42:54 <jokke_> flaper87: ++
14:43:01 <flaper87> So, is the question: We should forbid deleting glance images if there's an instance running?
14:43:16 <flaper87> well, I failed to ask that with proper english
14:43:18 <flaper87> but you got it
14:43:22 <mfedosin> but it's a big security issue
14:43:24 <jokke_> ceph puts a lock on the file and prevents the deletion on that level
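(To make the "lock" jokke_ mentions concrete: with glance's rbd store and nova's rbd image backend, a raw image is cloned copy-on-write from the protected snapshot glance creates, so that snapshot cannot be removed while clones exist; a qcow2 image is downloaded and converted instead, so nothing in ceph blocks the delete. The pool names below are the common defaults and are assumptions, not taken from this discussion.)

    # snapshot created by glance's rbd store for the image
    rbd -p images snap ls <image-uuid>
    # clones made from it (e.g. nova disks booted from the raw image)
    rbd children images/<image-uuid>@snap
    # deletion is blocked here: the snapshot cannot be unprotected/removed
    # while children exist, so glance reports the image as in use
    rbd snap unprotect images/<image-uuid>@snap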
14:43:34 <flaper87> I'd say no as I don't think we should have that sort of dependencies
14:43:41 <mfedosin> if an image is public and it cannot be deleted
14:43:49 <mfedosin> if somebody uses it
14:43:55 <flaper87> mfedosin: exactly, plus a bunch of other things
14:44:07 <flaper87> that would imply adding support for "force" deletes and who knows what else
14:44:13 <jokke_> mfedosin: that means that someone has failed on the deployment as glance clearly does not own the image data
14:45:02 <mfedosin> flaper87: frankly speaking we have a bunch of those issues from our customers as well with ceph
14:45:06 <nikhil> I had a chat with nova folks on this ceph case
14:45:17 <flaper87> mfedosin: that's fine but I think it's a ceph specific case
14:45:23 <flaper87> I don't think it's Glance's problem, TBH
14:45:32 <jokke_> nikhil: flaper87 ++
14:45:38 <nikhil> the reason is they want to support this out-of-band, on-line usage of the image data in the ceph pool, as it's a popular way
14:46:09 <hemanthm> Dealing with this on the glance side means glance now has to keep track of image usage, which really shouldn't be a concern for Glance.
14:46:10 <nikhil> and they have some in-tree workarounds to keep consistency with glance constructs
14:46:35 <flaper87> hemanthm: right and we should also consider multi-clouds
14:46:36 <kairat> ++ to hemanthm
14:47:14 <flaper87> I mean, in general, I don't think Glance should get in the business of tracking this
14:47:18 <nikhil> so, like flaper87 and hemanthm are saying it's an edge case and should not be a top level feature in glance.
14:47:33 <rosmaita> +1
14:47:43 <flaper87> Now, we do want to provide a better story for this, though. Is the scrubber the answer for this? Just have delayed deletes enabled and deal with this on the scrubber side ?
14:48:09 <flaper87> jokke_ mentioned the scrubber is buggy but that's something we can work on and fix if it is
14:48:22 <mfedosin> flaper87: in Glare we're going to use shadow copy + delayed delete
14:48:27 <croelandt> We'd have to know exactly *how* it is buggy, first :)
14:48:32 <nikhil> I am not sure atm how I feel about scrubber
14:48:38 <jokke_> croelandt: ++ :)
14:48:44 <hemanthm> it works for the general case from what I have seen
14:48:52 <flaper87> croelandt sure, that's why I'm pinging jokke_. He probably has more info
14:49:16 <hemanthm> however, I don't know how scrubber can ensure there is no image usage anymore
14:49:35 <nikhil> we need to move on
14:49:37 <flaper87> hemanthm: it doesn't. It just tries to delete the data
14:49:45 <nikhil> croelandt: did you get a direction?
14:49:49 <flaper87> hemanthm: it'll keep retrying until it manages to delete it
14:50:00 <flaper87> prolly not the right solution, though
14:50:04 <jokke_> hemanthm: I think the idea behind that was to get the user off the synchronous delete and let the scrubber fail as long as the storage reports back busy
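(For reference, the delayed-delete path being discussed is driven by glance configuration roughly like the sketch below; the option names are as I recall them from glance-api.conf of this era, so treat the defaults and exact behaviour as something to verify rather than as a recommendation.)

    # glance-api.conf (illustrative sketch)
    [DEFAULT]
    # mark images as pending_delete instead of deleting the data inline
    delayed_delete = True
    # how long (seconds) an image stays pending_delete before the
    # scrubber is allowed to remove the data
    scrub_time = 43200

    # the data is then removed out-of-band by the scrubber, run either
    # periodically (e.g. from cron) or in its daemon mode:
    #   glance-scrubber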
14:50:10 <nikhil> flaper87: it feels like a hack
14:50:10 <flaper87> just an idea, that's the best I can come up with right now
14:50:15 <flaper87> nikhil: it is a hack
14:50:17 <flaper87> :D
14:50:25 <nikhil> one can write a crop job
14:50:31 <flaper87> but, the question is: Can we make it not a hack?
14:50:39 <hemanthm> it's a decent workaround
14:50:41 <nikhil> that runs a script in periodic intervals doing "image-delete"
14:50:43 <croelandt> nikhil: well, I guess I could discuss this with jokke_ and flaper87
14:50:48 <flaper87> Can we support this use case through the scrubber ?
14:50:51 <croelandt> and re-submit our findings for another meeting
14:50:53 <nikhil> cron*
14:50:54 <jokke_> please no
14:50:59 * flaper87 stfu and lets the meeting move on
14:50:59 <hemanthm> jokke_: yes, makes sense
14:51:02 <jokke_> for the support
14:51:14 <nikhil> let's discuss offline
14:51:22 <nikhil> #topic open discussion
14:51:22 <jokke_> I really feel like that's a can of worms we really don't want to open
14:51:34 <flaper87> jokke_: I didn't mean support of the dependency use case but the delete retries (I'm done, I swear)
14:51:47 <mfedosin> Alex asks for review https://review.openstack.org/#/c/320912/
14:51:59 <nikhil> so, kairat you posted a late request
14:52:09 <kairat> So oslo.log folks are going to remove verbose option
14:52:17 <kairat> we need to be prepared for that
14:52:21 <kairat> nikhil, yep, I know
14:52:32 <kairat> That's why I asked about clarification
14:52:33 <nikhil> (but I guess we can ignore that for this week as you had questions)
14:52:37 <nikhil> cool
14:53:08 <kairat> https://review.openstack.org/#/c/317579, https://review.openstack.org/#/c/317556 - please review that, oslo.folks asked to remove that a month ago
14:53:32 <kairat> flaper87, jokke_ , hemanthm nikhil rosmaita ^
14:54:01 <nikhil> thanks kairat , good heads up. guess we can try to get that in by n-1
14:54:10 <nikhil> same is the case with rosmaita (late request)
14:54:26 <rosmaita> same request as last meeting
14:54:36 <jokke_> kairat: not against the change itself nor its reasoning ... I'm sure oslo folks will follow standard deprecation so it's not a huge rush ... but what I'd like to have is a reference to that deprecation in the commit messages
14:54:42 <kairat> See "[openstack-dev] [oslo][all] oslo.log `verbose` and $your project" for more information
14:54:48 <rosmaita> would like to get the v1 api-ref in-tree soon, will make it easier to make corrections
14:54:57 <kairat> jokke_, ok, will do
14:55:02 <hemanthm> ++ jokke_
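(Context for the oslo.log change kairat refers to: `verbose` has been deprecated for some time, with True as its default, and oslo.log is removing it entirely; the snippet below illustrates the general pattern operators see and is an assumption, not a summary of the two glance patches linked above.)

    # glance-api.conf (illustrative)
    [DEFAULT]
    # verbose = True   <- deprecated and going away; INFO-level logging
    #                     remains the default, set `debug = True` for DEBUG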
14:55:52 <nikhil> rosmaita: perfect
14:55:53 <jokke_> rosmaita: can we not introduce that as we're deprecating the api
14:55:56 * jokke_ ducks
14:56:02 <nikhil> lol
14:56:27 <rosmaita> jokke_: unfortunately, at the moment, it's "supported"
14:56:31 <nikhil> jokke_: I hope that was not a serious question?
14:56:34 <rosmaita> which includes providing docs
14:56:44 <jokke_> rosmaita: that is quickly fixed
14:56:49 <rosmaita> :)
14:56:52 <jokke_> I think flaper87 has patch waiting
14:57:05 <rosmaita> yes, one way to get people to not use it is to hide the docs!
14:57:05 <nikhil> well, we need to have API ref
14:57:14 <nikhil> doesn't matter current, supported or deprecated
14:57:24 <nikhil> no, we do not want to do that
14:57:28 <nikhil> we want to increase the docs
14:57:29 <flaper87> I actually do
14:57:31 <flaper87> :P
14:57:46 <nikhil> last thing we need is another set of angry customers who do not have a reference
14:57:51 <jokke_> I don't see the point of hand-holding people on how to use something we don't want anyone to use
14:57:51 <nikhil> even for the sake of migration
14:58:01 <jokke_> ;)
14:58:02 <nikhil> so, docs docs docs!
14:58:07 <rosmaita> flaper87: trade you a review of your slide deck for a review of https://review.openstack.org/#/c/312259/ !
14:58:16 <flaper87> rosmaita: deal
14:58:26 <rosmaita> excellent!
14:58:44 <nikhil> looks like those are the things for today?
14:58:49 <nikhil> let's close this early to help set up the virtual sync
14:58:50 <rosmaita> i will revise my timeframe and look at the deck this afternoon
14:59:00 * flaper87 is ready for the virtual meeting
14:59:07 * rosmaita is not
14:59:10 <nikhil> Thanks all.
14:59:15 <nikhil> #endmeeting