14:01:30 #startmeeting glance
14:01:31 Meeting started Thu Jun 6 14:01:30 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:35 The meeting name has been set to 'glance'
14:01:45 yo
14:01:54 so, time change seems to work okay for east coast
14:02:04 wave
14:02:10 yes :) thanks
14:02:16 jbresnah: hi!
14:02:22 hi jbresnah
14:02:24 it must be really early for PST ppl
14:02:24 jbresnah: glad you could make it!
14:02:30 jbresnah gets the worst time for a meeting award
14:02:36 iccha: not really, worst for hawaii
14:02:37 dedication and love for glance jbresnah
14:03:00 but this way zhiyan1 can be here more easily, (though it is pretty late!)
14:03:12 :)
14:03:16 * jbresnah is taking coffee intravenously
14:03:18 so anybody got anything they want to add to the informal agenda for today?
14:03:35 yes, yes, cinder-glance-store..
14:03:41 zhiyan1: check
14:03:58 #link https://wiki.openstack.org/wiki/Meetings/Glance#Agenda_for_Next_Meeting
14:04:11 has the rough outlines, I'm most interested in ongoing blueprints today
14:04:32 esheffield: and me just have some updates on documentation, not much though
14:04:43 cool
14:04:52 I would like to talk about the multiple locations bp if possible
14:05:15 jbresnah: deal
14:05:29 yes, cool
14:05:40 if there is time i would like to mention import/export/clone
14:05:43 for zhiyan1's glance-cinder-driver work, there is this etherpad
14:05:45 #link https://etherpad.openstack.org/linked-template-image
14:05:45 o/
14:05:56 folks might want to take a look through that before we get to it later on in the meeting
14:05:57 hi flaper87.
14:06:03 zhiyan1: hey :)
14:06:05 thanks mark, yes, that is :)
14:06:10 * nikhil is eavesdropping
14:06:36 #topic New Blueprints (fast-style!)
14:07:08 Is anyone interested in checking the sanity of a few blueprints this week?
14:07:20 o/
14:07:22 specifically to help out marking them as approved
14:07:24 sure, any in particular?
14:07:29 i thought we already thought cross id was sane
14:07:35 jbresnah: dude, you're awake
14:07:37 T_T
14:07:43 go to sleep
14:07:43 first there is
14:07:44 :P
14:07:46 #link https://blueprints.launchpad.net/glance/+spec/ability-to-separate-snapshots-and-images
14:07:49 flaper87: partially
14:08:05 second there is
14:08:08 #link https://blueprints.launchpad.net/glance/+spec/cross-service-request-id
14:08:39 markwash: there's a thread in the mailing list about the cross-service-...
14:08:44 and as a third option there is
14:08:45 #link https://blueprints.launchpad.net/glance/+spec/use-oslo-common-db-code
14:09:01 first one: can this just be done with metadata and the back end sorts things out?
14:09:03 * markwash looks
14:09:05 I'll take the last one
14:09:10 :D
14:09:43 rosmaita: possibly
14:09:55 markwash: I put out comments on the first 2
14:10:08 are we talking about the 1st one yet?
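
As background on the cross-service-request-id blueprint linked above: the usual shape of this kind of change is a small WSGI middleware that accepts a request id from the caller (or mints one) and echoes it back, so the same id can be handed along to the next service in a call chain. The sketch below only illustrates that pattern; the header name, the req-<uuid> format, and the middleware itself are assumptions for illustration, not what the blueprint actually specifies.

    # Illustrative sketch only: propagate a request id across services.
    import uuid

    REQUEST_ID_HEADER = 'X-Openstack-Request-Id'   # assumed header name


    class RequestIdMiddleware(object):
        """Minimal WSGI middleware that reuses or mints a request id."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # Reuse the caller's id if one was sent, otherwise generate one.
            req_id = (environ.get('HTTP_X_OPENSTACK_REQUEST_ID')
                      or 'req-' + str(uuid.uuid4()))
            environ['openstack.request_id'] = req_id

            def replacement_start_response(status, headers, exc_info=None):
                # Echo the id back so callers can pass it to other services.
                headers.append((REQUEST_ID_HEADER, req_id))
                return start_response(status, headers, exc_info)

            return self.app(environ, replacement_start_response)
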
14:10:17 because there are a lot more use cases than just RAID level
14:10:23 or all of them at once :P
14:10:24 i think the second is a solid idea, and seems to be going on in other OS projects
14:10:32 +1 jbresnah
14:10:34 jbresnah: it is
14:10:47 +1 jbresnah
14:10:56 +1 jbresnah
14:11:07 it looks like we have plenty of interest
14:11:18 so i would say we can call the second one approved
14:11:23 hahaha
14:11:25 :D
14:11:29 I think the task here for each one is either to get more feedback, or to just yell at me to approve it
14:11:41 yeah +1 I'm approving the request id one now
14:11:43 +1 on the 2nd, am I crazy markwash or didn't we implement that together a long time ago?
14:11:53 yeah I thought we did :-)
14:13:00 okay looks good for new bps
14:13:05 seems use-oslo-common-db-code is good to approve also, marwash?
14:13:12 sorry, markwash
14:13:16 no worries!
14:13:28 the oslo-common-db-code one is still a little unclear to me
14:13:29 zhiyan1: I'd like to have more info about that one
14:13:37 how he's planning to do it
14:13:40 the impact
14:13:40 I'm hesitant about "common db" code because glance is very coupled to its schema still
14:13:46 yeah what implications it entails
14:13:51 is that code ready in oslo?
14:13:56 and I don't want us to suddenly be very coupled with all the other projects' schema as well
14:13:57 i'm not sure about that
14:13:59 zhiyan1: yup, most of it
14:14:07 markwash: also if we want to look at zero downtime with glance it may complicate it?
14:14:13 ok, just port the code and land it in glance, right? ..
14:14:17 ie with glance first
14:14:19 mclaren: o/ !
14:14:20 TBH, that one sounds more like Ith release to me, but I'll get more feedback
14:14:31 zhiyan1: it just may not be the right idea yet
14:14:34 flaper87: got it
14:15:03 #topic Blueprints in progress
14:15:09 seems we need feedback to make sure it works well, then land it :)
14:15:16 jbresnah, want to talk about multiple locations?
14:15:33 yeah
14:15:35 did we ever decide on bp #1?
14:15:52 ameade, no, I think some folks were going to look into it more closely?
14:15:58 should I have actioned somebody?
14:16:10 markwash: i put a question on it
14:16:10 markwash: me for #3
14:16:33 i can chase it a bit more, but maybe i can talk it out with ameade first
14:16:38 i may be missing the intention
14:16:53 #action flaper87 evaluate use-oslo-common-db-code
14:17:35 #action jbresnah continue evaluating ability-to-separate-snapshots-and-images once there is feedback
14:17:44 seems like we're good
14:17:55 others feel free to participate in those
14:18:16 cool
14:18:40 * markwash yields to jbresnah about multiple locations
14:18:41 jbresnah, could you pls talk about multiple locations? i really need it
14:18:45 :)
14:18:58 i have a patch out there that i added comments to
14:19:06 i have not had a chance yet to see if anyone replied
14:19:24 has anyone had a chance to see it?
14:19:25 https://review.openstack.org/#/c/30517/ ?
14:19:34 no, i abandoned that
14:19:42 yes, which one?
sorry
14:19:43 https://review.openstack.org/#/c/31591/
14:20:00 markwash had thoughts on how to do it with PATCH
14:20:02 jbresnah: ah, good point about the restriction of multiple slashes
14:20:03 which i agree with
14:20:06 cool, i will check it later
14:20:14 just need to finalize the API
14:20:17 i also have:
14:20:24 I originally put in that restriction because I thought json-pointer was a bit much to implement
14:20:27 https://review.openstack.org/#/c/31306/
14:20:40 I think we could remove the restriction now but it might take some more code
14:21:10 markwash: is it openstack policy tho?
14:21:17 http://docs.openstack.org/api/openstack-image-service/2.0/content/restricted-json-pointers.html
14:21:39 jbresnah: no no, I just wrote that as a CYA kind of thing
14:21:43 i could hard code a special case for /locations/ somewhat easily
14:21:48 heh ok
14:21:53 the new format would be backwards compatible
14:22:10 so I think this is all great progress
14:22:22 and I'll keep reviewing!
14:22:37 markwash: so should i loosen the restriction?
14:22:48 jbresnah: yes I think so
14:22:54 in the general case?
14:23:22 jbresnah: that would be okay but we probably have to put in some code restrictions to make sure most properties can only be strings
14:23:42 I don't think we want to support requests that add user properties that are lists or json objects
14:23:44 actually i ran into something with that yesterday
14:23:49 i think there are restrictions there
14:23:57 okay cool
14:24:01 i was trying to implement the last way, where the list is changed all at once
14:24:07 one sample question jbresnah, is this 'remove: [{"remove": "add", "path": "/locations", "value": }]' should be this 'remove: [{"op": "remove", "path": "/locations", "value": }]' ?
14:24:14 and errors come back saying it can only be a string, i didn't get far into it
14:24:25 lets talk about it later today in #openstack-glance
14:24:33 cool
14:24:40 dumb question: do these locations contain credentials? if so do they pick up the metadata_encryption setting?
14:24:49 zhiyan1: yeah that seems right
14:24:56 mclaren: I think they should pick up the encryption setting
14:25:00 thx
14:25:07 mclaren: yeah they should
14:25:11 some of that should have already been done maybe? but I'll check again
14:25:37 so folks should keep reviewing multiple locations code
14:25:38 markwash: could you pls show me the url about metadata_encryption ?
14:25:38 mclaren: they could contain such info, that is up to the admin adding the location
14:26:02 mclaren: we sort of changed the approach and there is a bit of a buyer-beware attitude with this one
14:26:03 gotcha
14:26:03 * flaper87 needs to dig more into multiple-locations
14:26:34 zhiyan1: can you remind me after the meeting? I'll dig around and find it
14:26:36 zhiyan1: at some point i would like to talk to you about your interest in this bp
14:26:42 sure
14:26:58 zhiyan1: cool, i want to make sure i am addressing all needs
14:27:01 anything else on multiple locations? next is glance-cinder-driver
14:27:15 yes, for glance-cinder-store, if you checked https://etherpad.openstack.org/linked-template-image , there are some dependencies for implementation, and all about cinder: attach volume to host / direct volume IO.
14:27:19 markwash: i think that is good for here, just some follow up convos
14:27:55 so, i don't want to be blocked by that, i can dev now... so, i have talked a lot about glance-cinder-store before with mark :) , and it seems there are three choices for me..
14:28:12 A simple approach:
14:28:13 1) upload image to glance non-cinder store
14:28:13 2) tell cinder to create a volume from that image
14:28:13 3) register the image to cinder store, use image id (from step 1) + volume id (from step 2) as the location
14:28:13 (each step is a separate api call)
14:28:21 B. multi-locations approach
14:28:25 C. final solution
14:28:29 after cinder has a pretty method to support attach volume to host / direct volume io (based on 'brick'/cinder-agent, in H3 or I1), i will change the cinder store driver to read/write image data from the remote volume directly rather than with separate handling..
14:29:29 cool
14:29:49 in the multi locations approach
14:29:53 :), so jbresnah: i'm just not sure about #B
14:30:04 the idea is that you create the image with a non-cinder store
14:30:10 then ask cinder to create a volume from it
14:30:17 how to let glance-cinder-store match your plan? or any thoughts/comments?
14:30:26 and then register the cinder volume as a location on the image
14:30:37 markwash: yes, that's the #A plan
14:30:42 the caveat is that the cinder store location would *not* support getting image data directly
14:30:56 so B is an improvement over approach A ? with addition of multiple locations?
14:31:18 zhiyan1: I thought I was describing plan B
14:31:19 iccha: i think, yes
14:31:25 yes, pls
14:31:48 i do not quite have my head around this yet, sorry
14:31:55 mmh, cinder-store changes how glance's stores work, IMHO. I'm not fully against it but we need to figure it out a bit better
14:32:21 who will ask cinder to create the volume from glance? is it glance or the user?
14:32:30 api user
14:32:52 My first comment is that multiple-locations is a must for it to happen - assuming I got the idea right.
14:33:06 does cinder act as a glance client at that point or does it have special privileges?
14:33:08 flaper87: yeah, that's my concern as well. it seems strange to have a location that you can't really "get"
14:33:29 can the cinder location be a property of the image?
14:33:32 flaper87: yes, yes, i'd like to address plan C directly, but i checked with the cinder team at two weekly meetings, not sure there is a clear plan to address what i/cinder-store need
14:33:36 flaper87: but I think it could be fine if we figure out a clean way to do it
14:33:44 instad of a separate store ?
14:33:55 instead*
14:33:57 markwash: why can you not get it?
14:34:00 flaper87: yes, actually it can be a property of the image, in the form of a block device mapping property
14:34:22 jbresnah: because the only way to get it right now is to attach the volume as a device directly to the glance api host
14:34:26 markwash: that makes more sense to me than having a separate store
14:34:31 which may be okay too, but feels super squicky to me
14:34:51 jbresnah: cinder volume as an image, so it's glance's client. but i don't think there are special privileges for cinder..
14:35:22 is the end use case so nova can find a volume to boot via glance?
14:35:33 jbresnah: think so
14:35:39 jbresnah: basically
14:35:49 ok, I got it right then
14:35:52 so someone can 'get' the volume?
14:35:54 would the cinder store know that this image exists in glance's other stores? or does glance become the central point of authority over images/volume images now?
14:35:56 I think there is a constraint that the image should be bootable the "normal" way too
14:35:59 it just depends on where they are and what they can do?
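
For reference, zhiyan1's JSON Patch sample question from the multiple-locations discussion and the "register the volume as a location" step of the plans above both come down to a PATCH request along the following lines. This is a rough sketch rather than a finalized API: the cinder://<volume-id> location URL, the metadata contents, and the endpoint/token values are placeholders for illustration.

    # Illustrative only: add and remove an image location with RFC 6902 style
    # (v2.1) JSON Patch operations. The location URL format and metadata are
    # made-up placeholders, not an agreed-on scheme.
    import json
    import requests

    GLANCE = 'http://glance.example.com:9292'   # assumed endpoint
    IMAGE_ID = 'IMAGE-UUID'                     # placeholder
    HEADERS = {
        'X-Auth-Token': 'TOKEN',                # placeholder
        'Content-Type': 'application/openstack-images-v2.1-json-patch',
    }

    # Append a new location ("-" means "append" in JSON Pointer terms).
    add_location = [{
        'op': 'add',
        'path': '/locations/-',
        'value': {'url': 'cinder://VOLUME-UUID', 'metadata': {}},
    }]
    requests.patch('%s/v2/images/%s' % (GLANCE, IMAGE_ID),
                   headers=HEADERS, data=json.dumps(add_location))

    # Remove the first location; the RFC 6902 form names the operation via
    # "op", as in zhiyan1's corrected example, and "remove" takes no value.
    remove_location = [{'op': 'remove', 'path': '/locations/0'}]
    requests.patch('%s/v2/images/%s' % (GLANCE, IMAGE_ID),
                   headers=HEADERS, data=json.dumps(remove_location))

Under the older v2.0 json-patch media type (which comes up again in the documentation discussion below), the same operations would use the draft-03 style where the operation is the member name, e.g. {"remove": "/locations/0"}.
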
14:36:41 iccha: in proposal B, glance stores the info in the locations table, cinder just treats the volume as usual
14:37:02 jbresnah: well, they can't download it through the http api
14:37:03 markwash: iccha which I think makes more sense
14:37:39 markwash: oh i see, so if it is the only location then it changes glance's assumed functionality
14:37:50 jbresnah: right
14:38:07 so maybe we just make it explicit that stores that can't download can't be the only locations ?
14:38:13 wow I failed at explicit I think
14:38:20 lol
14:38:31 images must have at least one location that can be downloaded from to be active
14:38:43 that is fewer negatives
14:38:48 should we have additional info along with the location to indicate store type, or downloadable-or-not kind of information?
14:38:57 i think this is a fallout from the marriage of registry/replica service and data transfer service
14:39:02 I think this shouldn't be used as a store and that services consuming images should check if the image has a "volume" property
14:39:17 i dont see it as horrible to have a EWECANTSERVETHATDATA
14:39:18 markwash: we need to keep it forward-compatible i think, we need to support download, so just a location is not ok
14:39:35 ... i probably should have used spaces
14:39:40 haha
14:40:11 zhiyan1: it seems so strange though for people to want to download a volume from glance
14:40:12 it might be ok to have image info on which locations you can't download from, as long as there is a way to indicate downloadable locations vs not, volumes etc
14:40:22 because glance is an image registry service basically right
14:40:38 so far i like having it as just another location, and throwing an error if you try to download it with only that location available
14:40:51 but i think i need to understand the nova use case better
14:40:58 +1 jbresnah
14:41:35 iccha: yeah there should be a way to check if something is available for download
14:42:14 how does nova boot from a volume now? a specific flag saying use this volume ID instead of an image ID?
14:42:42 something like that, but I don't recall exactly
14:42:45 I don't think that just throwing an error when the service tries to download the image is good enough
14:42:49 and is the idea to make it always an image ID and pick the volume from that image ID in certain cases?
14:42:50 jbresnah: this change does not cover boot-from-volume, the client needs to give the volume id
14:42:55 the location should contain that info, somehow
14:43:08 have we considered a /volumes resource?
14:43:18 i am confused because a volume boot has such different semantics over an image
14:43:38 so nova couldn't just pick a volume from glance without the user specifically saying to
14:43:58 mclaren: not that I know of
14:44:02 so i think i don't get the use case
14:44:16 should we maybe have a followup discussion?
14:44:20 give folks more time to prepare?
14:44:23 markwash: +1
14:44:27 okay cool
14:44:38 +1
14:44:49 ok, thanks
14:44:50 #action markwash schedule a followup meeting about the cinder store (at a time that is convenient to zhiyan please!)
14:45:05 There were several other ongoing blueprints
14:45:15 thanks guys :)
14:45:28 zhiyan1: thank you
14:45:34 #topic async processing
14:45:45 I added a random assortment of ideas in code https://review.openstack.org/#/c/31874/1
14:45:58 let me know if you want me to add you as a reviewer, it is a draft so it's restricted
14:46:04 markwash: I commented on that draft a few minutes ago
14:46:09 and also it's pretty unclear still how some important parts would work
14:46:24 cool!
14:46:24 markwash: add meh
14:46:45 markwash: I'd like to see that too
14:47:02 markwash, pls add me?
14:47:08 sure
14:47:14 and me and nikhil
14:47:18 +1
14:47:20 nikhil is on it
14:47:34 markwash: me too if you can, thanks
14:47:42 okay I'm going to resubmit it as not a draft :-)
14:47:47 I see that it's giving a read only error in gerrit markwash
14:47:50 and jenkins can just deal
14:47:53 thanks
14:47:57 markwash: or, you could charge $10 for adding folks
14:48:00 haha
14:48:01 :D
14:48:13 or just link us to ur github branch ?
14:48:16 #topic documentation
14:48:18 markwash: ^^
14:48:30 iccha, esheffield updates about docs
14:48:31 ?
14:48:34 https://etherpad.openstack.org/glance_v1_vs_v2
14:48:56 looks like new stuff at the top?
14:49:07 the top part of the etherpad has a list of places where glance / image services documentation resides and how out of date / missing / incorrect the info is
14:49:12 #link https://etherpad.openstack.org/glance_v1_vs_v2
14:49:26 and also what the new documentation may need
14:49:53 looks pretty good!
14:50:20 yes, yes, seems good
14:50:33 the basic skeletal work of v1 vs v2 exists below, would be great to have ppl add in any info missed, or any other quirks or differences they have noticed in v2 vs v1, for ppl who would like to switch to be aware of
14:50:37 iccha: for the v2 update it would be nice to see something about how "Content-Type: application/openstack-images-v2.0-json-patch" and "Content-Type: application/openstack-images-v2.1-json-patch" differ
14:50:44 I'll review that some more and then maybe we can figure out which thing we want to do next
14:51:10 shall we push some encoding hints to the doc?
14:51:37 probably somebody will use a non-ascii string as the properties or some fields :)
14:51:50 yes sure feel free to add anything you think should be there
14:51:59 this is a very rough repository of all our collective knowledge
14:51:59 jbresnah: yeah hmm.. we should really push people in the direction of v2.1 and just ask that they never think about v2.0 if possible
14:52:00 hmm
14:52:11 lol
14:52:15 v2.0 was compatible with draft 03 of the json-patch spec
14:52:24 v2.1 is compatible with the approved rfc version
14:52:28 6902 I think?
14:52:31 jbresnah: that'll be great to add, are u volunteering to add it to the etherpad :p
14:52:38 :-D
14:52:40 iccha: he is
14:52:42 :D
14:52:52 iccha: heh, i would but i asked because i need to learn the differences!
14:52:52 #topic import export clone
14:53:03 rosmaita: a little bit of time. . .
14:53:22 ok, thanks, i think we're mostly agreed that a new resource is ok
14:53:42 so post to /images/actions and get back a location
14:53:48 like /images/actions/UUID
14:53:54 then you poll the UUID
14:53:59 what you get back ...
14:54:24 depends on what was in your request, something like { "import" : "stuff" }
14:54:31 likewise for export, clone
14:54:49 and what you get when you poll the UUID could be different too
14:54:57 that's actually my questions
14:55:10 rosmaita: actually, i really don't get the benefits of import/export over upload/download :) sorry for that, i have checked your wiki, but ..
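
To make rosmaita's proposed flow above concrete, here is roughly what a client-side import would look like: POST an action to /images/actions, get a location back, then poll it. Everything in the sketch is provisional, since the resource was still being designed; the request body, the Location header, and the status values are illustrative assumptions rather than a settled contract.

    # Hypothetical client view of the proposed actions resource.
    import json
    import time

    import requests

    GLANCE = 'http://glance.example.com:9292'     # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN',           # placeholder
               'Content-Type': 'application/json'}

    # Kick off an import; the exact contents of "import" were still open.
    body = {'import': {'uri': 'http://example.com/disk.qcow2'}}
    resp = requests.post('%s/v2/images/actions' % GLANCE,
                         headers=HEADERS, data=json.dumps(body))
    action_url = resp.headers['Location']         # e.g. .../images/actions/<UUID>

    # Poll the returned action UUID until it reports a terminal state.
    while True:
        action = requests.get(action_url, headers=HEADERS).json()
        if action.get('status') in ('success', 'failure'):
            break
        time.sleep(5)
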
14:55:15 +1
14:55:19 rosmaita: +1
14:55:19 in nova actions, there are 9, 7 don't return bodies
14:55:21 rosmaita: I had some thoughts in code about what an action would look like
14:55:29 the 2 that do return different bodies
14:55:36 markwash: cool
14:55:44 should be able to see it here in https://review.openstack.org/#/c/31874/1 now
14:56:41 rosmaita: is the UUID basically an action identifier or the UUID of the image on which the action was taken? I assume the former?
14:56:58 action identifier
14:57:04 markwash: yeah
14:57:06 thanks
14:57:33 zhiyan1: the main benefit is that upload directly sets the data with no modifications
14:57:49 zhiyan1: import and export allow the format to change or for other lengthy processing to take place
14:58:05 zhiyan1: have u seen the mailing list discussion?
14:58:14 yes, but did not catch up on it...
14:58:31 a week ago, right?
14:58:37 (2 minutes left)
14:58:39 right
14:58:51 :(
14:58:58 sorry, rosmaita, seems i need to pick it up :(
14:58:59 zhiyan1: we can talk in openstack-glance after the mtg if you like
14:59:12 flaper87: did I skip you?
14:59:13 goood :) thanks rosmaita.
14:59:27 markwash: yup, registry-driver :D
14:59:32 #topic registry driver
14:59:39 :) 30s
14:59:48 I don't *think* there is anyone after us
14:59:49 very quick, I'd love some feedback about this: http://lists.openstack.org/pipermail/openstack-dev/2013-June/009839.html
14:59:55 and won't turn into a pumpkin for another hour
15:00:29 * ameade secretly just watched said disney movie
15:00:31 the driver is *almost* done, I need to finish some tests that are being blocked by the fact that we don't have a way to deserialize datetimes
15:00:57 russelb seemed to be poo-poo-ing that for some reason
15:01:02 heh ameade
15:01:06 yes, you mean the primitive call right?
15:01:06 so, I was thinking we could do something like nova does (convert strtime into datetime in the db_api function)
15:01:33 in json, it seems like datetimes should be objects that self-identify somehow
15:01:33 markwash: some reason that 1) I still don't get 2) I disagree with what I got
15:02:05 flaper87: nod
15:02:12 I don't think we need to define new models just to serialize / deserialize datetimes (which is what he's suggesting)
15:02:23 the other thing, that implementation won't land 'til H-3
15:02:26 which is bad for us
15:02:54 so, my suggestions are: 1) Implement datetime deserialization 2) do it in the sqlalchemy driver when needed
15:03:06 #2 is the easiest
15:03:11 but not the best, IMHO
15:03:15 that's just a workaround
15:03:42 so do you want feedback on the ML or here or some other way?
15:03:48 i can't figure out why a strtime would be getting to the sqlalchemy layer
15:03:58 but i haven't been paying much attention to this
15:04:10 both work for me, but if we do it on the m-l we better make sure to find a consensus
15:04:31 oh rpc stuff right?
15:04:35 flaper87: then I better just talk to you in #openstack-glance :-)
15:04:40 markwash: +1
15:04:59 ameade: yup rpc stuff
15:05:01 :(
15:05:04 #topic quick open discussion
15:05:06 I mean, :)
15:05:08 :D
15:05:18 can I say, We rock ?
15:05:21 ok, I said it
15:05:24 does this meeting time work out okay?
15:05:34 flaper87: we should just have all the services on one node :P
15:05:37 we had mclaren and zhiyan so that part was great
15:05:38 markwash: +1
15:05:40 this time is great for me
15:05:49 yup i agree good to have everyone included
15:05:55 +1
15:06:00 :)
15:06:06 * flaper87 thinks that jbresnah fell asleep again
15:06:13 jbresnah: can you deal with this once every two weeks?
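
Circling back to the registry-driver datetime issue flaper87 raised above: his option 1 amounts to converting strtime values back into datetime objects on the registry-client side before they reach the sqlalchemy layer. A minimal sketch of that idea follows, under the assumption that datetimes cross the wire as strings in the usual oslo strtime format; the field names and the format string are assumptions, not taken from the actual patch.

    # Illustrative only: turn strtime strings back into datetime objects.
    from datetime import datetime

    STRTIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%f'      # assumed wire format
    DATETIME_FIELDS = ('created_at', 'updated_at', 'deleted_at')


    def deserialize_datetimes(image_dict):
        """Convert string timestamps in an image dict back to datetimes."""
        for field in DATETIME_FIELDS:
            value = image_dict.get(field)
            if isinstance(value, str):
                image_dict[field] = datetime.strptime(value, STRTIME_FORMAT)
        return image_dict
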
15:06:15 works for me, though
15:06:18 i dunno about having everyone here but the time is good
15:06:25 :P
15:06:28 haha
15:06:31 ameade: lol
15:06:34 +1
15:06:47 okay cool, Imma call it
15:06:49 markwash: yeah probably
15:06:58 :-)
15:06:59 peace out
15:07:02 #endmeeting