19:01:00 #startmeeting glance
19:01:01 Meeting started Wed Apr 10 19:01:00 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:04 The meeting name has been set to 'glance'
19:01:30 So, I didn't really lay out the agenda in the wiki
19:02:00 but the rough idea from my perspective is: follow up on caching discussion, more summit topic discussion, and more blueprint triage
19:02:13 +1
19:02:19 +1
19:02:21 but anybody else who has important topics feel free to throw them out now, we can work them in
19:02:46 i want more code reviews
19:03:19 ah, I've got a quick note on that
19:03:31 #topic review and bug squashing days
19:03:40 w00000t
19:03:45 last meeting flaper87 suggested we have bug squashing days
19:04:01 then privately made the suggestion that we ought to have review squashing as well
19:04:17 which struck me as even more critical, as it seems
19:04:24 is there a glance openstack room? where we can all discuss and coordinate during these days?
19:04:25 I know I haven't been doing enough review lately :-( sorry folks
19:04:38 I don't believe so, but we could make one
19:04:39 openstack-glance
19:04:52 +1 on both types of squash
19:05:07 any thoughts on when we should have the first one? week after the summit?
19:05:17 i saw that last meeting the concern was that we dont have enough bugs
19:05:24 I agree with the review days for sure
19:05:37 that sounds ok, the week after the summit
19:05:39 so maybe we just have a "squash" day and do both? i dont want reviews to wait til squash days though
19:05:42 bug squash sounds good too, tho that is something that could end up being more async for people
19:06:00 the suggestion was to have them together
19:06:00 ameade: agreed, we shouldn't put them off until squash day
19:06:07 +1 that
19:06:23 yeah, one day for both as needed
19:06:28 starting with the one in worse shape
19:06:54 +1
19:06:59 #agreed have a shared review/bug squashing day during the week after the summit
19:07:00 would it be more effective to have constant reviews if we do the traditional core review days where a core is assigned to a day?
19:07:21 ameade: I think we should have constant reviews regardless of squashing days
19:07:35 ameade: and I hope we aren't at a point where we need review days like nova
19:07:49 markwash: yeah i dont think we are that bad
19:08:12 i agree, lets just try out the squash days and if we need more than that we will re-evaluate
19:08:20 #action markwash schedule a squash day during the week after the summit
19:08:24 I think we should review daily and that review days should be used as a speed up on reviews and "shared reviews" day
19:08:33 ++
19:08:36 +1
19:08:37 agree
19:08:40 +1
19:08:43 +1
19:08:51 approved :p
19:08:52 any other thoughts on reviews?
19:09:22 #topic glance caching
19:09:38 last week we talked about new approaches to managing cache
19:10:21 jbresnah has posted his ideas about future approaches to caching here https://tropicaldevel.wordpress.com/2013/04/09/free-cache/
19:10:39 did we decide to not have a summit session on this?
19:11:12 I'm not sure that we decided, but one was not proposed
19:11:46 I do feel like we could work out a reasonable solution outside of the summit, however
19:12:08 * flaper87 wont be at the summit :(
19:12:20 so, this seems to be something that could be beneficial to discuss before exposing glance
19:12:37 would help on some design decisions
19:12:54 thoughts?
19:13:08 should caching not work almost the same way either glance is exposed or not?
19:13:45 I am not sure what exposed means here exactly...
19:14:01 from the previous meeting logs, looks like locations is something discussed
19:14:13 jbresnah: it refers to being able to use glance directly
19:14:23 as an independent service
19:14:25 my bad, i meant glance as a public deployment
19:14:34 oh right, sorry i forgot that in some places it is not public, thanks.
19:14:55 i think that multiple locations is a big part of the cache
19:15:23 nikhil: so i agree locations is something to discuss
19:15:37 jbresnah: on the similar note
19:15:54 if we decide on multiple locations and related glance cache
19:16:01 one thing I would note, is that with jbresnah's proposal, can we still keep the local cached copies urls hidden from regular users? if not I worry about exposing individual nodes to targeted dos attacks
19:16:35 i think you can
19:16:48 we could have additional way to indicate private urls maybe?
19:16:54 in one sense glance api becomes another user of the replica catalog
19:16:55 I think it is important to be able to scale out cache instances
19:16:55 cool, wanted to make sure that wasn't antithetical
19:17:12 so, if a user goes to download an image from glance, glance looks at the catalog and chooses the best location
19:17:15 then streams it
19:17:26 flaper87: yes, in general you had other ideas for how we might approach caching
19:17:31 if the locations are exposed to an end user, then they can make that decision
19:17:34 flaper87: any code or docs for us to look at?
19:17:42 if not glance can make it internally and then stream it
19:18:31 i agree that horizontal scalability is crucial
19:18:42 i hope i have not spoken against that
19:18:57 markwash: I was writing it and then I thought that it should exist along the lines of what jbresnah proposed. The idea would be to use the multi-location implementation for caches so that cached images would be just a new location for an existing image but under a dedicated API.
19:19:15 jbresnah: guess not, rather proved a thought on how to achieve this
19:19:25 that will allow us to just disable glance-api paths and have instances dedicated just for caches
19:19:26 s/proved/provoked/
19:19:49 the original problem here was to deal with a cache management service
19:19:58 clearing the cache, listing images in it etc
19:20:24 i think that all of the problems there will also be in maintaining the consistency of multiple locations
19:20:44 so we should solve the multiple locations issue and accept the cache issue as a lesser included
19:20:56 i guess that is a challenge by itself
19:21:07 that is interesting, and we do need some motivation for the multiple locations api
19:21:21 that's the reason why i wished to include public glance in this discussion
19:21:48 basically, maintaining consistency would be an issue
19:22:37 I don't know about everybody else, but I'd like a little more time to think on this, and we can certainly discuss it more over libations at the summit (those who can attend)
19:22:44 markwash: we're hoping to have the same uuid for copied/cache image to other location
19:22:47 jbresnah: do you envision cache management being a separate service?
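
(A minimal sketch of the multi-location caching idea discussed above: a cached copy recorded as just another location on the image record, with the "best" replica chosen at download time. The field layout, the "kind"/"public" metadata, and the preference rule are illustrative assumptions for discussion, not the actual Glance schema or API.)

    # Hypothetical image record: the cached copy is simply an extra location
    # next to the canonical store URL. All names here are assumptions.
    image = {
        "id": "a-hypothetical-image-uuid",
        "checksum": "d41d8cd98f00b204e9800998ecf8427e",
        "locations": [
            {"url": "swift://images/a-hypothetical-image-uuid",
             "metadata": {"kind": "canonical", "public": True}},
            {"url": "file:///var/lib/glance/cache/a-hypothetical-image-uuid",
             "metadata": {"kind": "cache", "public": False}},
        ],
    }

    def pick_location(image, prefer=("cache", "canonical")):
        """Choose the 'best' replica; here, simply the first preferred kind found."""
        for kind in prefer:
            for loc in image["locations"]:
                if loc["metadata"].get("kind") == kind:
                    return loc["url"]
        raise LookupError("image has no locations")

    print(pick_location(image))  # -> the local cached copy, if one is registered

A per-location flag like the assumed "public" field above is one way the hidden-URL concern raised at 19:16 could be handled: glance-api would see every replica, while regular users are only ever shown the public ones.
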
19:22:52 +1 libations
19:22:54 markwash: sure
19:22:58 +1 libations
19:23:10 iccha_: perhaps
19:23:13 or a tool
19:23:40 it gets complicated because issues come up like: Does the user who registered the replicated image delegate delete rights to the service
19:23:40 * markwash searches for an action item. . .
19:23:41 etc
19:23:56 hmm, interesting
19:24:04 there are some other interesting bits that come out of it too that we can discuss when appropriate
19:24:27 for example: nova-compute could register a local replica instead of maintaining its own cache
19:24:33 tho, that could be out of scope also
19:24:40 just a use case example really
19:24:50 jbresnah: very interesting
19:24:59 my main point is this:
19:25:06 may be a pluggable, though
19:25:09 multiple-locations makes glance a replica service
19:25:20 a copy on a local disk is just another replica
19:25:34 special case code will cause complications
19:25:46 nod pluggable
19:25:56 very thought provoking
19:26:04 hmm
19:26:10 +1 to an image living anywhere just being another location
19:26:38 do we see this tying into image cloning?
19:26:57 jbresnah: don't wanna go off topic, though one thing that strikes me is local copy is not at a known state and a snapshot on that is a different image (looks like deeper issue, maybe?)
19:26:57 iccha_: I kind of think it like that
19:27:38 nikhil: i may not follow, but if the checksum changes it is no longer a replica
19:27:48 kk
19:27:59 I'm a little worried about scope creep here
19:28:06 maybe i'm on a different thought train :)
19:28:11 again, I think it is important to be able to scale that out and to be able to identify which images are indeed cached images and which are not.
19:28:13 images shouldnt go stale anyways, they dont change?
19:29:05 the issue about datasets changing is exactly why we will need some sort of consistency management with multiple locations
19:29:45 which feeds into my point about that feature subsuming cache management
19:30:11 i think for the purpose of this conversation the replicas are blobs of data, not images
19:30:20 tho i could definitely be missing critical info there
19:30:21 another thing that we might also want to consider is being able to have some auto-caching algorithms
19:31:06 so for next steps here
19:31:08 flaper87: perhaps that is true, but i think that is a future feature wrt the conversation here
19:31:22 flaper87: but a good feature nonetheless
19:31:27 +1
19:31:32 I here a lot of interesting ideas, but maybe we need someone to synthesize these into some smaller concrete proposals for next steps?
19:31:41 *ehar
19:31:43 *hear
19:31:44 there we go
19:32:15 i think the first step is to see the current multiple-locations effort through
19:32:27 +1
19:32:31 and i would suggest that we stall the cache effort until then
19:32:45 +1
19:32:48 see exactly what is needed at the end of that
19:32:55 +1, once that's done we could talk about how to replicate it
19:32:59 sounds good
19:33:14 markwash: besides the proposed topic for glance at the summit, what else would you like to discuss?
19:33:24 *topics
19:33:34 let's leave caching off here, and move on to glance design summit topics
19:33:46 sounds good
19:33:49 sounds good
19:33:58 all I have left is summit topics in general (which is probably a big talk) and then just blueprint triage
19:34:13 https://wiki.openstack.org/wiki/Summit/Havana/Etherpads is the wiki page for etherpads for summit sessions
19:34:17 but I'd be happy to leave off blueprint triage for other topics of interest
19:34:22 #topic glance summit sessions
19:34:42 3 sessions have been selected and scheduled so far
19:34:52 #link http://openstacksummitapril2013.sched.org/overview/type/design+summit/Glance#.UWWzb6tg_58
19:35:08 that leaves 2 sessions left to schedule
19:35:36 and one of the topics we have to hit relates to http://summit.openstack.org/cfp/details/47
19:35:43 i see only 2 sessions unreviewed
19:35:59 but possibly more generally just image upload/download performance
19:36:02 and nova boot performance
19:36:38 improving image transfer performance is a big topic, so I'm wondering 1) should it be split into two? 2) does anyone have any interest in rolling db migrations?
19:37:17 markwash: we definitely are interested in rolling db migrations
19:37:34 +1
19:37:48 markwash: we're also interested in interoperability/inter-hypervisor compatibility as well
19:37:52 lets just unconference everything :)
19:37:53 +1
19:37:59 so splitting the performance topic into two sessions would come at some cost
19:38:52 I have a summit pitch about performance that I'd love to make now
19:39:15 cool
19:39:25 i would like to hear that pitch
19:39:28 we are all ears
19:39:33 go go go
19:39:40 take this as a straw man if you like
19:39:51 but I propose that Glance should not be worried about performance
19:39:54 at least not directly
19:40:18 Glance should concern itself with exposing the information that other performance-oriented clients need in order to work efficiently
19:40:36 +1
19:40:37 that sounds right to me
19:40:40 +1
19:40:45 most of the proposals that I've seen say:
19:40:55 If we make images X, then we can have nova do Y, which is faster
19:41:34 well, never mind that last train of thought. . not sure where it was going to derail
19:41:43 but dont forget about glance being a 1st class API
19:41:55 +1
19:41:59 we still need glance to do transfers at some level right?
19:42:21 and i believe caching is done for performance improvements?
19:42:26 transfers and sync too
19:43:06 This behavior feels kind of like legacy compatibility to me
19:43:09 I would like to couple resource management with transfer performance
19:43:36 when considering going super fast you have to consider resource usage
19:43:41 and how that affects others
19:44:09 also i would like to add that as glance stands right now it cannot make proper decisions about either
19:44:15 because it is only on 1 side of the transfer
19:44:36 jbresnah: but if glance isn't concerning itself with those things, the responsibility for managing resource usage would be elsewhere. . glance isn't any side of the transfer
19:44:58 markwash: oh yeah +1 that
19:45:24 I think glance should be more worried about knowing more from images than knowing more about improving performance
19:45:36 I'd really like to see glance taken out of actual transfers and maybe just negotiate transfers?
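
(A rough sketch of the pitch above, under stated assumptions: glance exposes the information a performance-minded client needs, and the client does the actual transfer itself instead of streaming the bits through the glance API node. The endpoint, the direct_url fallback logic, and the auth handling here are illustrative assumptions, not a prescribed design.)

    # Hypothetical client-side fast path: ask glance for image metadata, then
    # pull the data straight from the backing store when a direct URL is exposed.
    import requests

    GLANCE = "http://glance.example.com:9292"     # assumed endpoint
    HEADERS = {"X-Auth-Token": "..."}             # auth token elided

    def fetch_image(image_id, dest_path):
        meta = requests.get(f"{GLANCE}/v2/images/{image_id}", headers=HEADERS).json()
        source = meta.get("direct_url")           # only present if the deployment exposes it
        if source and source.startswith("http"):
            resp = requests.get(source, stream=True)   # bypass glance for the data itself
        else:
            # fall back to the ordinary download path through glance
            resp = requests.get(f"{GLANCE}/v2/images/{image_id}/file",
                                headers=HEADERS, stream=True)
        resp.raise_for_status()
        with open(dest_path, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
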
19:45:52 it boils down to what is the purpose of glance and what we want it to be once it is public
19:45:56 yeah
19:46:00 iccha_: +1
19:46:14 so glance didn't originally exist alongside nova
19:46:19 looks like everyone wants to decouple the glance and image data management service
19:46:21 I love this conversation so far
19:46:43 it was created because public clouds didn't want people to be able to boot just anything, since all the broken nova boots would cause a support nightmare
19:47:13 it was also created as a central place to manage image data, however, since swift isn't really up to the challenge
19:47:55 I think we should figure out a way to formally adopt the former purpose as something like a mission statement
19:48:09 * markwash is really out on a limb here
19:48:16 I agree
19:48:20 i am with you
19:48:40 in a past life i worked on grid computing.... i will save the details for when beer is nearby...
19:48:47 and help out other projects so that we can move away from the latter purpose
19:48:56 but to me it makes sense to have replica management and data transfer as separate things
19:49:21 tho i do understand the convenience of coupling them as well
19:49:29 jbresnah: I think it makes sense to keep track of different replicas, yes, assuming it is useful to keep multiple replicas of the same thing
19:49:41 which seems likely
19:50:04 wow, I've gotten kind of far from what I meant to do
19:50:07 markwash: replica might be too strong for me to be using, perhaps 'registry' is better
19:50:08 Yeah it's a different thing maintaining info about replicas vs transferring them or making the replicas
19:50:47 lets not get into redefining Glance maybe?
19:50:50 going back to the summit topic, I guess the default option is to make a "booting and snapshotting, download and upload performance" session
19:51:06 and approve the db rolling upgrades session
19:51:06 sounds good
19:51:08 +1
19:51:11 +1
19:51:12 +1
19:51:33 jbresnah: one session is not a lot of time
19:51:59 nod, i will try not to talk too much
19:52:10 jbresnah: lol, not what I was trying to suggest :-)
19:52:12 we can try doing a glance grab a beer/dinner/coffee thing during the summit too
19:52:14 heh
19:52:22 most of my thinking is on my blog (i think)
19:52:30 s/beer/beers/
19:52:33 there are also some folks who will want to talk about volumes as images
19:52:43 ah yeah thats always been there
19:52:45 markwash: can we ignore them?
19:52:48 lol
19:52:54 heh
19:53:01 :P
19:53:06 well, we can, but perhaps at our peril
19:53:15 i was also thinking of proposing an 'unconference' topic on an image transfer service
19:53:21 could get time that way i suppose
19:53:40 yeah unconferences are a good way to do that
19:53:41 zhiyan and IBM have an amazingly fast provisioning system they have built on top of volume-like abstractions, and want to expose that in openstack
19:53:43 this is my first summit tho, so i may not have the culture quite right
19:54:06 yeah i think i see a glance-cinder-driver blueprint too
19:54:56 markwash: can you point me at more details on that?
19:55:06 jbresnah: I will have to find a link
19:55:11 well, having glance not transfer any image data would definitely improve its performance
19:55:15 * nikhil is having a hard time finding an action item for this except for a philosophical discussion with glance folks
19:55:15 deadlock?
19:55:18 rosmaita: :-)
19:55:24 yeah i'm actually pretty excited about boot from volumes and stuff
19:55:43 as far as action items, I'm pretty sure the boot-from-volume stuff needs session time
19:56:19 will all this be recorded, written, G+ ?
19:56:27 so I'll check the other topics to see if it's sufficiently covered
19:56:38 I'd love to participate somehow and catch up with everything that happens at the summit
19:56:41 and if not I'd like to boot the rolling db upgrades to an unconference
19:56:42 flaper87: we will take notes
19:56:52 iccha_: :D thank you so much!
19:57:15 flaper87: they would be live streaming these sessions as well, i hope
19:57:16 flaper87: I know they said they weren't gonna stream sessions but maybe enough people were unhappy about that and they changed their minds
19:57:24 #action markwash finish scheduling the summit sessions asap
19:57:26 flaper87: https://wiki.openstack.org/wiki/Summit/Havana/Etherpads has etherpad links for summit sessions
19:57:42 #topic open discussion
19:57:52 sorry I wandered so far afield today folks
19:58:16 i enjoyed it :-)
19:58:19 markwash: just pretend that's what you meant to do :P
19:58:26 ahhahaa
19:58:30 lol
19:58:30 i'm really hyped for the summit
19:58:34 gonna be a doozy
19:58:51 i am glad we're having glance meetings :) and we all should try hanging out in the openstack-glance channel flaper87 mentioned
19:58:55 flaper87: I'll make sure we keep using the etherpads as much as possible during the summit
19:59:29 +1
19:59:36 markwash: thank you, I'm really sad I wont be there! :(
19:59:55 Notes will be really useful for catching up
20:00:00 and commenting
20:00:09 there was some mention of irc being used during the meetings
20:00:26 not sure how well that would work, but if anyone has any ideas we could give it a try
20:00:36 we could always google hangout
20:00:42 the problem with AV solutions is that the A always sucks
20:01:03 we appear to be out of time...
20:01:06 indeed
20:01:18 #endmeeting