14:00:50 #startmeeting Glance
14:00:51 Meeting started Thu Jul 2 14:00:50 2015 UTC and is due to finish in 60 minutes. The chair is nikhil_k1away. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:52 hihi
14:00:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:55 The meeting name has been set to 'glance'
14:00:57 o/
14:01:00 o/
14:01:01 o/
14:01:02 o/
14:01:03 o/
14:01:05 o/
14:01:07 o/
14:01:14 o/
14:01:15 o/
14:01:17 nikhil_k1away IS BACK!
14:01:22 hi
14:01:31 nikhil_k1notaway
14:02:08 o/
14:02:14 Hi all
14:02:31 I was away for the past few days so I don't have many updates
14:02:39 except for one thing
14:02:41 #topic
14:02:45 #topic Updates
14:03:11 #info added entry for Glance mid-cycle to #link https://wiki.openstack.org/wiki/Sprints
14:03:31 #action all: please visit https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup and fill out the necessary surveys
14:04:34 Thanks for bringing that up on the ML, flaper87. I will respond for everyone's awareness; however, updates and meeting minutes on the event would be a preference
14:05:10 nikhil_k: awesome, thanks
14:05:13 I am arranging for Vidyo capability in the conf room, which very likely enables remote participation
14:05:53 Also, I will try to investigate the tool mentioned by Jeremy on the ML if that seems more useful. We had little luck with Mumble last time :/
14:06:17 moving on..
14:06:21 #topic Glance images streaming from Horizon
14:06:49 tsufiev: please go ahead
14:06:57 #link https://review.openstack.org/#/c/166969/
14:07:19 so, the problem is with support for file-like objects during image upload via the Glance v2 API
14:07:44 AFAIK (or more precisely, I was told), there is no such capability in the v2 API
14:07:47 is that true?
14:08:41 tsufiev: could you please be a bit more precise?
14:09:39 I believe tsufiev's referring to streaming whatever is being uploaded to horizon directly to glance
14:09:45 without re-uploading it to glance
14:10:00 jokke_, sure. So, when I'm about to upload an image file into Glance, I can pass some object as the 'data' key value; the most important requirement for it is to have a .read() method implemented - that works for me in the v1 API
14:10:22 I'm speaking about using glanceclient from Horizon API wrappers
14:10:25 I guess I got it wrong...
14:10:35 tsufiev: as far as I know the only problem we have (and which is not solvable) is that the client cannot do a progress bar for stuff like fifos etc., where it cannot check the size
14:11:03 in which case we can just disable the progress bar
14:11:12 but since this is horizon, I doubt they are using the CLI at all
14:11:12 other than that I see no problem passing the data through horizon (obviously you need to implement the bits to make that happen on the horizon end)
14:11:16 flaper87, nope, we're speaking about the same thing, just at a different level of detail
14:11:17 but rather the library itself
14:11:25 tsufiev: a-ha
14:11:35 I thought you could pass f-like objs to glanceclient
14:11:52 or at least it was possible
14:11:53 mmhh
14:12:04 flaper87, well, our glance experts said I couldn't do it for API v2
14:12:05 tsufiev: you can push the image data through a fifo or just stdin on v2 as well, no problem
14:12:14 mfedosin, ^^^
14:12:25 tsufiev: File-like objects should work with the glance v2 API. If they don't work in glanceclient then we have a bug. But the functionality should exist
14:13:02 tsufiev: that's where the difference between your comment and mine is. If you're asking whether glanceclient supports file-like objects, then the answer is (probably) yes
14:13:28 it's there and we fixed the behavior around early juno ... the defaults did not work great on glanceclient back then, but it has been working perfectly fine for about a year now
14:13:32 if the question is whether glanceclient honors the file-like object the right way, then I guess it doesn't
14:13:45 flaper87, okay, but is the .read() method on that object supported in the v2 API?
14:14:14 hm... perhaps I was misinformed...
14:14:31 tsufiev: I see, it's just a data stream
14:14:45 tsufiev: https://github.com/openstack/python-glanceclient/blob/master/glanceclient/common/http.py#L58-L64
14:15:05 you may be referring to copy-from in v1; that's more along the lines of the .read() you are asking for
14:16:27 flaper87, nikhil_k: the above line seems like the thing that should work for me
14:16:27 and that's how I could make more sense of your comment on the review: "Since image_update is run in a separate thread and the browser itself sends the large file via XHR, it is imperative that the user doesn't close the Horizon page/tab before the image transfer is complete - otherwise it will fail."
14:16:35 tsufiev: can you point to the code you're using for v1 right now?
14:16:51 maybe that will help clear up the confusion
14:17:37 https://github.com/openstack/glance/blob/master/glance/common/utils.py#L156
14:17:53 I think CooperativeReader is used in both APIs
14:18:16 wait, I'm confused now. Are we talking about glanceclient or glance
14:18:46 as a Horizon developer I'm only able to use glanceclient directly :)
14:18:53 tsufiev: exactly
14:19:01 that's what I thought as well
14:19:28 and from an API perspective neither of the APIs gets a file ... they get a datastream
14:20:17 jokke_, so there are no serious differences in v1 vs v2 datastream processing?
14:21:07 not that I can think of
14:21:15 tsufiev: is there an issue you're facing?
14:21:25 Error? Failure? Different behaviour?
14:21:35 it may be related to the API but not to the datastream
14:21:36 tsufiev: I didn't say that (meaning I'm not sure), but from an API perspective there is no file for the REST API ... it gets a datastream from the client.
14:22:13 and the differences on the python-glanceclient side were unified about a year ago
14:22:28 flaper87, I shall try for myself with v2 - will ping you back (or file a bug) if something goes wrong
14:22:41 tsufiev: roger, that'd be very useful
14:22:41 sorry for the possibly false alarm
14:22:46 Shouldn't this have been brought up before the meeting?
14:22:48 but as long as we're talking about doing something around the upload process I'm assuming we're talking about the client
14:22:57 In #openstack-glance for example?
14:23:29 sigmavirus24: yeah
14:23:37 Can we move on to the next topic?
14:23:41 sure
14:23:45 sigmavirus24: I see no problem having discussions like this in the meeting or on the mailing list ... that's why we have these
14:24:06 jokke_: erm, I disagree
14:24:14 support -> openstack-glance
14:24:16 Ok, folks. Let's clear the confusion offline.
14:24:19 and you get it 24h/7d
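For reference, a minimal sketch of the pattern settled on above: any object exposing a .read() method can be passed as the data for a v2 upload through python-glanceclient. The endpoint, token, and file name below are placeholders, not values from the meeting.

```python
# Minimal sketch (placeholder endpoint/token/file): streaming a
# file-like object through python-glanceclient's v2 API. Anything
# with .read() works as the data source - an open file, a FIFO, or
# the stream Horizon receives via XHR.
from glanceclient import Client

glance = Client('2', 'http://glance.example.com:9292',
                token='<auth-token>')

# Create the image record, then stream the bits into it.
image = glance.images.create(name='demo', disk_format='qcow2',
                             container_format='bare')
with open('demo.qcow2', 'rb') as image_data:
    glance.images.upload(image.id, image_data)
```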
14:24:41 #topic Reviews/Bugs/Releases
14:24:45 == flaper87
14:24:48 anyway, we're halfway through our meeting, let's move on
14:24:59 #info change default notification exchange name
14:25:08 #link https://review.openstack.org/#/c/194548/
14:25:26 so, this patch is meant to change the default exchange name for glance notifications
14:25:40 just like nova/cinder/etc
14:25:55 from the original 'openstack' to 'glance'
14:26:06 s/original/default/
14:26:27 sigmavirus24: thanks for the clarification
14:26:32 uuu, I just realized llu-laptop replied, I'm sorry about that
14:26:59 so, ceilo may listen by default on glance and that's fine
14:27:04 yeah, need your reviews :)
14:27:13 What's the motivation for changing defaults? (besides others doing it)
14:27:14 Yeah I think I agree with llu-laptop
14:27:29 nikhil_k: I think we were supposed to have changed them when we adopted the notifications work
14:27:42 It's not that nova, et al., are just doing this now
14:27:50 It's that we never did it (if I understand correctly)
14:27:54 I understand it doesn't prohibit admins from setting control_exchange
14:27:55 llu-laptop: sorry, can't check the review right now, but what's the reasoning behind this? I'm really, really reluctant to change our defaults lightly
14:28:22 my concern is that we're not warning them that the default has changed
14:28:25 :)
14:28:35 sigmavirus24: gotcha, so an antique bug :)
14:28:48 which means, if there are other downstream apps listening on the 'openstack' exchange, then we'll break them
14:28:57 nikhil_k: it's an antique free-range artisanal bug
14:28:57 flaper87: ++
14:29:08 heh
14:29:14 aaaaaaaand, this is part of the public API
14:29:15 jokke_: follow the openstack norm, is my first motivation
14:29:21 well, not really public
14:29:24 but under-cloud public
14:29:30 flaper87: I think the real question is "Does anyone really listen to notifications?" =P
14:29:31 llu-laptop: I think your change is fine
14:29:41 sigmavirus24: we'll never know until we break them
14:29:41 flaper87: :) notifications public?
14:29:45 but I'd assume yes
14:29:47 flaper87: my point exactly
14:29:53 so let's change them and find out what happens =P
14:29:59 sigmavirus24: I'll give them your home address
14:30:04 nikhil_k: they are in the under-cloud
14:30:15 flaper87: yeah that's fine
14:30:18 say you have your own billing service, you may want to listen there
14:30:22 I have an extra bed or two I could loan them
14:30:27 sigmavirus24: can you give me your home address ?
14:30:27 second level, right. After some processing. So...
14:30:29 :D
14:30:35 llu-laptop: I don't mind changing it given it makes things more consistent. Just want to ensure that we have it done right.
14:30:37 flaper87: so the thing is that operators may already be changing this to do billing, right
14:30:50 Can we send a message to openstack-operators asking for feedback
14:30:59 Like we can sit here and guess at the impact, or we can ask the people who know
14:31:02 Seriously guys, you're ok breaking our next upgrade just for cosmetics?!?!?
14:31:12 jokke_: I'm trolling
14:31:24 sigmavirus24: I'm all for changing it but I'd like to see if there's a way to avoid the breakage OR at least have a warning
14:31:26 llu-laptop: these flags seem important to get people's attention: DocImpact, UpgradeImpact, and the email shoutout like sigmavirus24 mentioned
14:31:29 and deprecate it
14:31:40 oh and yeah, what nikhil_k just said
14:31:50 nikhil_k: got that, will put those into the commit message
14:31:56 My point is that assumption is the mother of all fuck ups
14:31:58 or most of them
14:32:02 ++
14:32:15 and I don't feel comfortable assuming people have control_exchange=glance already
14:32:31 I said "may" not "probably"
14:32:31 =P
14:32:32 so, my two options are: 1) We change it in a way we don't break it
14:32:42 2) we deprecate it the right way
14:32:44 I'm fine with saying that we're changing this in M
14:32:50 2 means, we change it in M
14:32:50 You have L to set it correctly
14:32:56 right
14:33:09 +1
14:33:18 so L having a deprecation warning of the old option and M changing the default
14:33:27 I'd be fine with that
14:34:04 #action llu-laptop : ensure https://review.openstack.org/#/c/194548 has mention and work done for -- L having deprecation warning of the old option and M changing the default
14:34:08 jokke_: it's not deprecating old options, but changing the default value
14:34:17 oh right
14:34:30 #action llu-laptop : ensure https://review.openstack.org/#/c/194548 has mention and work done for -- L having deprecation warning of the change in default and M changing the default
14:34:39 llu-laptop: sorry for causing trouble over such a small change, but breaking backwards compatibility is something that we try really freaking hard not to do: https://www.youtube.com/watch?v=EYKdmo9Rg2s
14:34:50 llu-laptop: yes, old default value rather than option
14:35:01 flaper87: understandable
14:35:02 llu-laptop: what I thought wasn't what I typed ;)
14:36:24 Let's move on
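For illustration, the warn-in-Liberty, flip-in-M plan agreed above could be expressed with oslo.config roughly as below. This is a sketch, not the patch under review in 194548; control_exchange is really registered by oslo.messaging and is re-registered here only to make the example self-contained.

```python
# Rough sketch of the agreed plan: keep the old 'openstack' default
# in Liberty but warn operators who rely on it, then flip the
# default to 'glance' in M. Not the actual glance code.
import warnings

from oslo_config import cfg

CONF = cfg.CONF
# In real deployments this option comes from oslo.messaging; shown
# here only to illustrate the default that changes.
CONF.register_opt(cfg.StrOpt('control_exchange', default='openstack'))

def warn_if_default_exchange(conf):
    # Liberty behavior: emit a deprecation warning when the operator
    # has not explicitly overridden control_exchange.
    if conf.control_exchange == 'openstack':
        warnings.warn("The default notification exchange will change "
                      "from 'openstack' to 'glance' in the M release; "
                      "set control_exchange explicitly to keep the "
                      "current behavior.", DeprecationWarning)
```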
14:36:29 #info scrub images in parallel
14:36:42 #link https://review.openstack.org/#/c/196240/
14:37:10 guess hemanthm is away
14:37:15 Why are we reducing that from 1000 to 100?
14:37:16 I had a chat with him
14:37:22 +1 to scrub images in parallel
14:37:27 "Oh to not overwhelm the swift cluster"
14:37:32 (as in the commit message)
14:38:26 at Rackspace production scale it can't scrub all the images serially
14:38:34 jcook: right
14:38:37 I think the last time we discussed this it was discarded because of the extra load the parallel scrubbing would cause; so rather do it slower, but ensure that the services are not overloaded
14:38:59 I haven't reviewed the patch but the idea sounds nice
14:39:38 I saw on the order of tens of thousands in the backlog that cleared up rather quickly in parallel, with no noticeable prod issues
14:39:45 Yeah, the alternative is to refactor the scrubber to enable it to do the deletes over the day/week etc
14:39:52 Swift is designed for parallel scaling for speed of downloads and uploads
14:40:00 yeah I have a spec for that
14:40:33 I think hemanthm is working with an intern on task scrubbing and maybe image scrubbing updates
14:40:36 sorry, not able to check the review atm, but can we make that configurable?
14:40:43 I agree that production deployments of swift should be able to handle this
14:40:48 in the short term though, the scrubber does not scale
14:41:01 and swift (in my experience) handles it fine
14:41:20 not necessarily devstack
14:41:24 * jokke_ would like to remind everyone that we have other backends than just swift
14:41:35 yeah
14:41:41 that is true, make it configurable?
14:42:16 parallelizing deletes, that is
14:42:20 I wonder whether 100 threads would be too much for other backends
14:42:40 I would prefer that ... I know that it's again one more thing to take into account when deploying, but it would be a way safer bet than again assuming that, based on experience with swift, the others wouldn't be affected
14:43:11 nikhil_k: shouldn't be a problem for the filesystem backend, the http store doesn't allow deletes, cinder is miserably broken
14:43:13 rbd?
14:43:27 either that or just make the thread pool configurable with a default of 1
14:43:32 flaper87: would that maybe be a problem for rbd (same question to sabari for vmware)
14:44:01 I doubt vmware is using the scrubber
14:44:32 I don't think it would be a problem for RBD itself. What would worry me is:
14:44:50 if you're using rbd for nova, glance and cinder, you may be stressing it more than you want
14:45:05 and vms/devices will be affected
14:45:23 cinder should be less broken if this lands: https://review.openstack.org/#/c/166414/
14:45:25 so, having it configurable is definitely the right thing to do in this regard
14:45:26 #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:45:26 Only the meeting chair may start a vote.
14:45:34 boo
14:45:44 #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:45:44 Only the meeting chair may start a vote.
14:45:53 haha! :P
14:45:56 yay!
14:46:01 #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:46:01 Begin voting on: What to do with scrubber? Valid vote options are approve-it, config-for-parallel, config-for-thread-pool, config-both, other.
14:46:03 Vote using '#vote OPTION'. Only your last vote counts.
14:46:10 #vote approve-it
14:46:17 but I'm ok with config too
14:46:19 #chair nikhil_k
14:46:20 Warning: Nick not in channel: nikhil_k
14:46:22 Current chairs: nikhil_k nikhil_k1away
14:46:22 kragniz: yeah, I'd like it to make progress. Any reviews for it are appreciated
14:46:39 #vote config-both
14:46:46 #vote config-both
14:46:50 Why not increase testing complexity? (He says semi-seriously)
14:46:59 #vote config-both
14:47:01 #vote config-both
14:47:23 #vote config-for-thread-pool
14:47:30 config-for-thread-pool solves both problems
14:47:34 #vote config-both
14:48:11 yeah, config for the pool does solve both
14:48:37 though having an option to not use threads is fine
14:48:43 erm
14:48:46 10 mins left
14:48:50 12 to be precise
14:48:56 I guess we are using them either way
14:49:01 yep
14:49:14 #vote config-for-thread-pool
14:49:19 moving away from eventlet would be our goal when the entire ecosystem does so
14:49:34 if the entire ecosystem does so ;)
14:49:40 * jokke_ is really looking forward to that day
14:49:42 heh
14:49:51 even more than deprecating the v1 api
14:50:00 #vote config-for-thread-pool
14:50:02 I hate eventlet
14:50:11 jokke_: are we ever going to port glance to twisted?
14:50:24 is the vote still open?
14:50:25 sigmavirus24: I seriously would love that option
14:50:29 longest voting ever :P
14:50:30 yes
14:50:37 >_>;
14:50:39 #endflaper87vote
14:50:44 hahahaha
14:50:45 lol
14:50:54 people still voting, what can I do.. :P
14:50:56 #endvote
14:50:56 Voted on "What to do with scrubber?" Results are
14:50:57 config-for-thread-pool (3): sigmavirus24, nikhil_k, jcook
14:50:58 config-both (4): kragniz, mfedosin, jokke_, flaper87
14:51:11 both it is
14:51:30 people are showing bitterness about more options in openstack
14:51:37 hemanthm: ^
14:51:38 configurations to be precise
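For illustration, the winning configurable-thread-pool outcome could look roughly like the sketch below; the option name and helper are hypothetical, not the code in review 196240. Since the glance scrubber runs under eventlet, a GreenPool is used, and a pool size of 1 preserves the current serial behavior.

```python
# Illustrative sketch of the voted outcome (not the actual patch):
# a configurable scrub pool, where a size of 1 keeps today's serial
# behavior. 'scrub_pool_size' and 'delete_image' are placeholders.
import eventlet

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.IntOpt('scrub_pool_size', default=1,
                             help='Number of images scrubbed in '
                                  'parallel; 1 means serial.'))

def scrub_images(image_ids, delete_image):
    # delete_image is a callable that removes one image from the
    # backing store (swift, rbd, filesystem, ...).
    pool = eventlet.GreenPool(size=CONF.scrub_pool_size)
    for image_id in image_ids:
        pool.spawn_n(delete_image, image_id)
    # Block until every queued delete has finished.
    pool.waitall()
```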
14:51:50 #topic Open Discussion
14:51:57 nikhil_k: certain devs are ;)
14:52:06 I'd like to make progress on my proposal to implement upload/download for the cinder glance_store.
14:52:11 https://review.openstack.org/#/c/166414/
14:52:15 AFAIR we wanted to have 5-min artifacts discussions
14:52:16 This fixes up the broken cinder backend.
14:52:28 flaper87: what's up? You seemed antsy to move on?
14:52:30 Any advice on how to move it forward? Would it help if I ask cinder experts to review it?
14:52:32 mfedosin: we have 9 minutes to have a 5 min discussion
14:52:33 what cinder backend? #joke
14:52:38 lol
14:52:44 tsekiyam: yeah, having cinder SMEs look at it would be helpful
14:52:56 mfedosin: yep, let's do that if you
14:53:05 nikhil_k: haha, all good. I was just doing some sigmavirus24trolling
14:53:06 so, just so everyone is aware of it...
14:53:14 you've got time. I forgot to update the agenda and people had added items for today
14:53:21 :)
14:53:25 tsekiyam: I'll review your patch and get back to you
14:53:43 Alex Tivelkov's status: he is fighting with oslo.versionedobjects. He made several commits there, but they were reverted because they broke nova and cinder. Now he's working on fixing that.
14:54:11 okay, I'll ask one of the cinder devs to review it.
14:54:30 mfedosin: :(
14:54:37 mfedosin: but I'm happy he's working on that
14:54:44 that's a great cross-project effort
14:54:45 Darja's and my status: we are working on the glance v3 client
14:54:57 Currently we have 3 commits there:
14:54:59 sigmavirus24: flaper87 do you guys have anything from the drivers' meeting that happened this week? #link http://eavesdrop.openstack.org/meetings/glance_drivers/2015/glance_drivers.2015-06-30-14.02.html
14:55:03 the initial one (https://review.openstack.org/#/c/189860/)
14:55:11 CRUD (https://review.openstack.org/#/c/192261/)
14:55:30 blob operations (https://review.openstack.org/#/c/197970/)
14:55:45 Also we have a commit in the domain model https://review.openstack.org/#/c/197053/ and it wants an approval :)
14:55:51 nikhil_k: I think it's worth mentioning that I managed to totally troll flaper87 at the drivers meeting :P
14:55:51 Future plans are to start working on API stabilization with the API WG and move away from the domain model to something plainer.
14:56:12 jokke_: :) my guess is glance_store?
14:56:17 nikhil_k: yup
14:56:18 nikhil_k: just the email flaper87 sent to the ML
14:56:26 thanks sigmavirus24 !
14:56:36 jokke_: :)
14:56:49 mfedosin: thanks for the update. I guess the action item is
14:57:47 Also there is a good bug https://bugs.launchpad.net/glance/+bug/1469817
14:57:47 Launchpad bug 1469817 in Glance liberty "Glance doesn't handle exceptions from glance_store" [High,Confirmed] - Assigned to Mike Fedosin (mfedosin)
14:57:55 #action review needed for input for glanceclient artifacts support
14:58:23 It was found by Alexei Galkin and now we have 39% test coverage instead of 38 :)
14:58:35 My idea is to inherit all glance exceptions from the similar glance_store exceptions and then catch glance_store exceptions at the upper level
14:58:35 ha
14:58:58 God help us when it reaches 42%
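A minimal sketch of the exception-hierarchy idea mfedosin describes above: if each glance exception inherits from its glance_store counterpart, one handler at the upper level catches both. The class names below are illustrative stand-ins, not the real glance or glance_store classes.

```python
# Sketch of the idea: glance exceptions inherit from the matching
# glance_store exceptions, so a single except clause covers both.
# Names here are hypothetical, not the actual glance(_store) code.

class StoreNotFound(Exception):
    """Stand-in for a glance_store 'not found' exception."""

class GlanceNotFound(StoreNotFound):
    """Glance-level exception inheriting from the store one."""

def get_image_data(image_id):
    raise GlanceNotFound("image %s not found" % image_id)

try:
    get_image_data("deadbeef")
except StoreNotFound as e:
    # One upper-level handler catches glance and glance_store errors.
    print("handled: %s" % e)
```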
14:59:50 mfedosin: they only say "sorry for the inconvenience"
15:00:02 We are out of time.
15:00:03 jokke_: ++ for the troll :D
15:00:05 Thanks all!
15:00:07 sorry I got distracted
15:00:08 #endmeeting