Wednesday, 2013-01-16

16:01 <jgriffith> Looks like a crowded room
16:02 <jgriffith> #startmeeting cinder
16:02 <openstack> Meeting started Wed Jan 16 16:02:15 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02 *** openstack changes topic to " (Meeting topic: cinder)"
16:02 <openstack> The meeting name has been set to 'cinder'
16:02 <thingee> o/
16:02 <JM1> indeed!
16:02 <eharney> o/
16:02 <avishay> hello all
16:02 <DuncanT> How's the new swimming pool John?
16:02 <xyang_> hi
16:02 <kmartin> hello
16:03 <jgriffith> DuncanT: at least it will be a fancy below-ground pool...
16:03 <jgriffith> DuncanT: if I can find the darn thing
16:03 <bswartz> hi
16:03 <avishay> jgriffith: good luck!
16:03 <jgriffith> Wow... good turnout this week
16:03 <jgriffith> avishay: Yeah, thanks!
16:03 <smulcahy> hi
16:03 <winston-d> hi
16:03 <jgriffith> OK... let's start with the Scality driver
16:03 <jgriffith> JM1 has joined us to fill in some details
16:04 <jdurgin1> hello
16:04 <JM1> hi everyone
16:04 <jgriffith> https://review.openstack.org/#/c/19675/
16:04 <jgriffith> #topic scality driver
16:04 *** openstack changes topic to "scality driver (Meeting topic: cinder)"
16:04 <JM1> I see that DuncanT just added his own comment
16:05 <jgriffith> JM1: Yeah, and after our conversation yesterday my only remaining concern is missing functionality
16:05 <JM1> I didn't expect the lack of snapshots to be such an issue
16:06 <DuncanT> From my POV, I think people expect the things that have always worked in cinder to continue to always work, regardless of backend
16:06 <JM1> hmmm
16:06 <jgriffith> JM1: Keep in mind that the first thing a tester/user will do is try to run through the existing client commands
16:06 <JM1> I saw that the NFS driver doesn't support snapshots either
16:06 <JM1> but I suppose some people use it?
16:06 <jgriffith> JM1: Yeah, and it's been logged as a bug
16:07 <jgriffith> :)
16:07 <JM1> ok
16:07 <JM1> can it actually be implemented with regular NFS?
16:07 <jgriffith> So why don't you share some info on plans
16:07 <jgriffith> You had mentioned that you do have plans for this in the future, correct?
16:07 <winston-d> well, I notice that the zadara driver doesn't support snapshots either.
16:07 <DuncanT> qcow files rather than raw would allow it
16:07 <matelakat> Hi all.
16:07 <JM1> regarding snapshots, we have no formal plans so far
16:07 <jgriffith> oh, I misunderstood
16:07 <JM1> of course we're thinking about implementing it
16:08 <matelakat> We added a XenAPINFS driver, and snapshot support is waiting for review. Although it is a bit generous.
16:08 <JM1> but right now there is no such thing as a release date for this feature
16:08 <matelakat> Making deep copies instead of real snapshots...
16:08 <rushiagr> hi!
16:09 <JM1> matelakat: interesting
16:09 <jgriffith> JM1: Is there any sort of hack that comes to mind to make it at least appear to have this?
16:09 <jgriffith> Similar to what matelakat has done?
16:09 <JM1> jgriffith: well, just as was said, we can do a full copy
16:09 <jgriffith> JM1: That would alleviate my issue
16:10 <DuncanT> ditto
16:10 <thingee> +1
16:10 <jgriffith> For me it's more about continuity
16:10 <JM1> I just thought that this could be more disappointing to users than knowing that they don't have real snapshots
16:10 <avishay> I think we have to keep in mind that snapshot implementations will also need to support future functionality, like restore
16:10 <jgriffith> JM1: So I'd rather document for users what you're doing and why it's not a good idea
16:10 <jgriffith> but give them the opportunity
16:10 <jgriffith> Remember, most users are automating things
16:11 <rushiagr> jgriffith: +1
16:11 <jgriffith> They don't want/need to check "if this backend, then that"
16:11 <bswartz> jgriffith: the ability to do fast/efficient snapshots seems like the kind of thing a driver should be able to advertise in its capabilities
16:11 <jgriffith> etc etc
16:11 <avishay> bswartz: +1
16:11 <matelakat> bswartz: +1
16:11 <winston-d> bswartz: good idea. :)
16:11 <jgriffith> The continuity and adherence to the API is what matters to me, not the implementation
16:11 <jgriffith> bswartz: +1
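
For illustration, the capability advertisement bswartz suggests could surface through a driver's get_volume_stats(); a minimal sketch, assuming a hypothetical flag name and backend (nothing below is an agreed-upon Cinder field):

    # Illustrative only -- the capability key below is hypothetical, not
    # part of the Cinder driver contract discussed in this meeting.
    class ExampleDriver(object):
        def get_volume_stats(self, refresh=False):
            # The filter scheduler consumes this dict; a flag like this
            # one would let it prefer back-ends with real copy-on-write
            # snapshots over full-copy fallbacks.
            return {
                'volume_backend_name': 'example',        # hypothetical name
                'storage_protocol': 'file',
                'total_capacity_gb': 1024,
                'free_capacity_gb': 512,
                'supports_efficient_snapshots': False,   # hypothetical flag
            }
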
16:12 <guitarzan> so every backend has to implement every feature in the API?
16:12 <JM1> jgriffith: well, at the moment they will need to also set up our SOFS before using it in cinder
16:12 <bswartz> but I agree that some sort of dumb/slow snapshot implementation is better than none
16:12 <JM1> and AFAIK, there can be only one cinder driver at a time
16:12 <avishay> Maybe there should be a generic full-copy implementation, and those who have efficient snapshots will advertise the capability?
16:12 <jgriffith> guitarzan: well, I hate to use absolute terms
16:12 <guitarzan> well, there is obviously some line being drawn here and it is unclear what it is
16:12 <DuncanT> JM1: Multiple driver support is a hot new feature
16:13 <JM1> DuncanT: do you mean it's being implemented?
16:13 <jgriffith> JM1: But I consider things like create/snapshot/delete/create-from-snap core
16:13 <jgriffith> guitarzan: ^^ that was meant for you
16:13 <winston-d> JM1: there can be multiple cinder drivers for a cinder cluster.
16:13 <DuncanT> JM1: I believe it works, within the limitations of "works" (requires both drivers to implement the get_stats function)
16:13 <JM1> winston-d: ah, good to know!
16:14 <guitarzan> jgriffith: yeah, I got that :)
16:14 <jgriffith> guitarzan: you object?
16:14 <guitarzan> I just think it's an interesting stance to take
16:14 <guitarzan> people wanting cinder support for a particular backend probably already know about that backend
16:14 <JM1> winston-d: so I suppose there is a system of rules to determine which driver will host which volume?
16:14 <winston-d> JM1: yeah, the scheduler
16:14 <DuncanT> JM1: volume_types
16:14 <jgriffith> guitarzan: that's fair
16:15 <jgriffith> guitarzan: let me rephrase it a bit
16:15 <hemna__> is there a way for the user to select which backend to use? volume types?
16:15 <winston-d> JM1: the scheduler decides which back-end serves the request, based on volume type.
16:15 <jgriffith> The initial review is sure to get a -1 from me, but if the vendor simply can't offer the capability then exceptions can and likely will be made
16:15 <DuncanT> Can we move multi-driver talk to later in the meeting, or this is going to get confusing
16:15 <jgriffith> DuncanT: +1
16:15 <avishay> hemna__: a user shouldn't need to choose a backend, as long as the backend has the capabilities they need
16:16 <winston-d> avishay: +1
16:16 <guitarzan> jgriffith: that sounds reasonable
16:16 <jgriffith> As DuncanT says, let's table the back-end discussion for the moment
16:16 <jgriffith> guitarzan: that's more along the lines of what I had in mind, and is why JM1 is here talking to us this morning :)
16:17 <guitarzan> :)
16:17 <jgriffith> so JM1...
16:17 <DuncanT> I'd also be tempted to start being more harsh on new features being added without a reasonable number of drivers having the functionality added at the same time
16:17 <JM1> so from this I understand that even copying would be an acceptable fallback to support snapshotting
16:17 <JM1> that's something we can do
16:17 <jgriffith> DuncanT: I would agree, but I don't know that we haven't adhered to that already
16:17 <matelakat> JM1: at the manager level?
16:17 <jgriffith> I've avoided putting SolidFire stuff in core for that very reason
16:18 <DuncanT> jgriffith: As the project matures, we can bring in tighter rules
16:18 <JM1> matelakat: manager?
16:18 <matelakat> the one that calls the driver.
16:18 <guitarzan> I would guess he means at the driver level
16:18 <JM1> ah, you mean as a fallback for drivers like ours that don't have the feature?
16:18 <guitarzan> the driver can certainly just do a copy
16:19 <guitarzan> ahh, I misunderstood, if that's the case :)
16:19 <matelakat> yes, so at the driver level you only need a copy
16:19 <JM1> yes, I was thinking inside our driver
16:19 <JM1> inside the driver it's easy to copy files, rename, whatever
16:20 <DuncanT> If it can be made generic enough that NFS can use it too, even better
16:20 <JM1> indeed
16:20 <bswartz> DuncanT: +1
16:20 <JM1> the performance cost is high, so that won't be useful with all workloads
16:21 <JM1> but I gather that for some it will still be better than nothing
16:21 <DuncanT> I think that is true, yes
16:21 <jgriffith> JM1: well, snapshots in particular have been notoriously poor performers in OpenStack
16:22 <matelakat> Maybe we could come up with a name for those snapshots, so it reflects that they are not real snapshots.
16:22 <jgriffith> LVM snaps suck!
16:22 <jgriffith> matelakat: we have clones now
16:22 <bswartz> LVM snaps are better than full copies
16:22 <matelakat> jgriffith: thanks.
16:23 <hemna__> ok, I gotta jam to work...
16:23 <jgriffith> bswartz: I didn't ask what's worse, I just said they suck
16:23 <guitarzan> matelakat: the snapshot vs backup discussion is another one :)
16:23 <JM1> I don't see how LVM snaps can be worse than a full copy of a 1TB volume
16:23 <bswartz> jgriffith: fair enough
16:23 <jgriffith> They're only worse when you try to use them for something
16:23 <jgriffith> and they kill perf on your original LVM volume
16:23 <avishay> it may take longer to make a full copy, but the full copy will perform better
16:23 <jgriffith> I'm not talking about the perf of the action itself, but we're really losing focus here, methinks :)
16:24 <guitarzan> I think we're drifting again...
16:24 <jgriffith> guitarzan: +1
16:24 <jgriffith> OK... back in here
16:24 <jgriffith> JM1: Do you have a strong objection to faking snapshot support via a deep copy?
16:24 <JM1> jgriffith: not at all
16:24 <jgriffith> Or does anybody else on the team have a strong objection?
16:24 <JM1> but I will have to think about details
16:24 <guitarzan> the API doesn't care if a snapshot is a snapshot or a copy
16:25 <jgriffith> guitarzan: correct
16:25 <JM1> and come back to you folks to ask implementation questions
16:25 <jgriffith> So TBH this is exactly what I did in the SF driver anyway
16:25 <guitarzan> so until the "backup" discussion starts again, we shouldn't worry about implementation
16:25 <jgriffith> we didn't have the concept of snapshots, so I just clone
16:25 <JM1> e.g. can we expect the VM to pause I/O during the snapshot?
16:25 <hemna__> that's pretty much what we do in the 3PAR driver as well
16:25 <jgriffith> JM1: typically no, you can't assume that
16:26 <jgriffith> JM1: we don't do anything to enforce that
16:26 <jgriffith> that could be a problem, eh...
16:26 <DuncanT> the --force option allows snapshot of an attached volume, but it is a "here be dragons" option
16:26 <JM1> jgriffith: ok, so for a copy we need a mechanism to force a pause
16:26 <guitarzan> you have to be disconnected to snap unless you --force, right?
16:26 <guitarzan> DuncanT: +1
16:26 <JM1> DuncanT: oh, so that means usually we snapshot only unattached volumes?
16:27 <DuncanT> a normal snapshot without force requires the source volume to be unattached
16:27 <DuncanT> JM1: That is my belief
16:27 <JM1> ok, so cp should work
16:27 <avishay> yes
16:27 <DuncanT> JM1: We don't currently support snapping live volumes, though that will be fixed some time
16:27 <JM1> ah, ok
16:27 <jgriffith> y'all speak for yourselves :)
16:27 <JM1> I thought it already worked like that on more capable drivers
16:28 <guitarzan> to be fair, the API also doesn't prevent you from immediately reattaching :)
16:28 <guitarzan> again, "here be dragons"
16:28 <DuncanT> Yeah, I'm thinking of proposing a state machine with an explicit 'snapshotting' state to cover that, but that is a state machine discussion
16:28 <bswartz> DuncanT: +1
16:28 <jgriffith> DuncanT: +1 for states in Cinder!!
16:29 <jgriffith> Back to JM1: are we good here, or is there more we need to work out?
16:29 <rushiagr> DuncanT: +1
16:29 <guitarzan> so we've given JM1 a lot of work, or stuff to think about
16:29 <JM1> jgriffith: I think we're good
16:29 <jgriffith> Excellent... anybody else?
16:29 <JM1> I will see how to do a simple implementation with simple file copies
16:29 <winston-d> DuncanT: +1 for state machine
16:29 <JM1> and resubmit patches
16:30 <JM1> and thank you all for your input
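
A minimal sketch of the driver-level fallback agreed on above: create_snapshot as a plain deep copy for a file-backed back-end. All paths, dict keys, and class names are illustrative, not the actual Scality driver:

    import os
    import shutil

    class FileBackedDriverSketch(object):
        """Illustrative snapshot-as-copy fallback; hypothetical driver."""

        def __init__(self, mount_point='/mnt/backend/cinder'):  # made-up path
            self.mount_point = mount_point

        def _path(self, name):
            return os.path.join(self.mount_point, name)

        def create_snapshot(self, snapshot):
            # The "snapshot" is a full deep copy of the volume file: slow,
            # but it keeps the core snapshot API working.  Without --force
            # the volume is unattached, so the file is quiescent.
            src = self._path('volume-%s' % snapshot['volume_name'])
            dst = self._path('snapshot-%s' % snapshot['name'])
            shutil.copyfile(src, dst)

        def create_volume_from_snapshot(self, volume, snapshot):
            shutil.copyfile(self._path('snapshot-%s' % snapshot['name']),
                            self._path('volume-%s' % volume['name']))

        def delete_snapshot(self, snapshot):
            os.unlink(self._path('snapshot-%s' % snapshot['name']))
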
16:30 <jgriffith> OK... so what else have we got
16:31 <JM1> being primarily a dev, I'm not as familiar with real use cases as I'd like to be
16:31 <jgriffith> Since we started the multi-backend topic, shall we go there?
16:31 <jgriffith> I'm going to time-limit it though :)
16:31 * jgriffith has learned that topic can take up an entire meeting easily
16:31 <hemna__> what is left to implement to support it?
16:32 <avishay> BRB
16:32 <jgriffith> #topic multi-backend support
16:32 *** openstack changes topic to "multi-backend support (Meeting topic: cinder)"
16:32 <jgriffith> So there are options here
16:33 <jgriffith> hub_cap is possibly looking at picking up the work rnirmal was doing
16:33 <jgriffith> So we get back to the question of leaving it to a filter-scheduler option or moving to a more efficient model :)
16:34 <hemna__> which option has the best chance of making it into G3?
16:34 <jgriffith> hemna__: They've all got potential IMO
16:34 <jgriffith> So let me put it this way
16:34 <jgriffith> The filter scheduler is a feature that's in, done
16:34 <jgriffith> So what we're talking about is an additional option
16:35 <jgriffith> The ability to have multiple back-ends on a single Cinder node
16:35 <jgriffith> There are two ways to go about that right now (IMO)
16:35 <bswartz> jgriffith: do you mean multiple processes on one host? or 1 process?
16:35 <winston-d> bswartz: those are two different approaches
16:35 <jgriffith> 1. The patch that nirmal proposed, which provides some intelligence in picking back-ends
16:35 <jgriffith> 2. Running multiple c-vol services on a single Cinder node
16:36 <bswartz> oh
16:36 <bswartz> (2) seems like it would create some new problems
16:36 <DuncanT> I favour option 2 from a keeping-the-code-simple POV
16:36 <winston-d> DuncanT: +100
16:36 <bswartz> lol
16:37 <winston-d> bswartz: which are?
16:37 <bswartz> well, what would go in the cinder.conf file?
16:37 <bswartz> the options for every backend?
16:37 <bswartz> would different backends get different conf files?
16:37 <guitarzan> I think you'd have the same problem with 1 or n managers
16:37 <jgriffith> bswartz: not sure why that's unique between the options?
16:37 <guitarzan> with n managers you could build a conf for each
16:37 <jgriffith> BTW: https://review.openstack.org/#/c/11192/
16:38 <DuncanT> Different conf files or named sections in a single file... neither is overly complicated
16:38 <bswartz> well, option 1 forces us to solve that problem explicitly
16:38 <jgriffith> For a point of reference on what Option 1 looks like
16:38 <jgriffith> This also illustrates the conf files, not so bad
16:38 <bswartz> ty
16:38 <xyang_> opt 2, multiple c-vol services on one single cinder node, seems good. you can use the filter scheduler to choose the node
16:38 <xyang_> I mean choose the c-vol service
16:38 <hub_cap> hey guys, sorry, I'm in an IRL meeting. just saw my name
16:39 <hub_cap> I'm in favor of y'all telling me what to do :D
16:39 <avishay> back
16:40 <bswartz> okay, I'm in favor of (2) as well
16:40 <bswartz> bring on the extra PIDs
16:40 <JM1> could you attach volumes from 2 different cinder services to the same VM?
16:40 <DuncanT> JM1: Yes
16:40 <guitarzan> sure
16:40 <hub_cap> so multiple c-vol services, would that be like nova-api, spawning multiple listeners in the same pid?
16:40 <winston-d> JM1: of course
16:41 <hub_cap> on different ports
16:41 <DuncanT> No need
16:41 <guitarzan> hub_cap: no, just managers
16:41 <DuncanT> They don't listen on a port, only on rabbit
16:41 <winston-d> hub_cap: that's a big difference; nova-api workers listen on the _SAME_ port
16:42 <hub_cap> winston-d: I'm talking about osapi, metadata and ec2 api in the same launcher
16:42 <avishay> +1 for option #2
16:42 <winston-d> hub_cap: while c-vol services listen on AMQP
16:42 <hub_cap> sure... amqp vs port... not much diff... a unique _thing_
16:42 <hub_cap> but you're right, it was my bad for saying port :D
16:42 <jgriffith> hub_cap: I think you're on the same page
16:42 <hub_cap> I don't think operators will be happy w/ us having 10 pids for 10 backends :)
16:43 <winston-d> hub_cap: ah well, they should have their own pid if my memory serves me right.
16:43 <jgriffith> I think in previous conversations we likened it to swift-in-a-box or something along those lines
16:43 <hub_cap> I have a vm running, let me see
16:43 <winston-d> hub_cap: why not?
16:43 <hub_cap> you're right, winston-d
16:43 <DuncanT> 10 pids for 10 backends is better than one fault taking out all your backends!
16:43 <bswartz> hub_cap: I think 10 would be uncommon -- I see more like 2 or 3 in reality
16:43 <hub_cap> as long as they are started/stopped by a single binscript, like nova does, I'm down for it
16:43 <jgriffith> bswartz: hehe
16:43 <hub_cap> bswartz: ya, I know :D
16:44 <jgriffith> hub_cap: that's what I'm thinking
16:44 <JM1> or maybe 10 instances of 2-3 drivers?
16:44 <jgriffith> We do introduce a point-of-failure issue here, which sucks though
16:44 <JM1> I expect people to do such crazy things for e.g. performance
16:44 <hub_cap> so y'all ok w/ me modeling it like the present nova-api, but w/ the substitution of amqp for ports; they create different pids but use a single binscript to start/stop
16:44 <hub_cap> trying to listen IRL too
16:45 <DuncanT> JM1: 10 instances of 1 driver, on a single node, gives almost no performance improvement
16:45 <JM1> DuncanT: I meant several instances of a driver, but each with different tuning
16:45 <jgriffith> JM1: we're not quite that sophisticated (yet) :)
16:45 <bswartz> 10 instances of 1 driver on a single node also increases your failure domain if that one node dies
16:45 <jgriffith> OK...
16:46 <jgriffith> bswartz: so that's my concern, however that's the price of admission for this whole concept no matter how you implement it
16:46 <jgriffith> If HA is a concern, do an HA cinder install or use multiple back-ends
16:46 <jgriffith> That being said... back to the topic at hand
16:47 <JM1> or use a redundant storage cluster (hint hint)
16:47 <jgriffith> JM1: The problem is if the cinder node goes, you're toast
16:47 <bswartz> yeah, I'm just saying that in practice I don't expect a huge number of PIDs on the same host -- people will spread them out to a reasonable degree
16:47 <jgriffith> There's no way to get to that redundant storage cluster :)
16:47 <JM1> jgriffith: this will only affect creation of new instances, no?
16:48 <JM1> (but I'm getting off topic)
16:48 <bswartz> JM1: creation of new volumes, not instances, but yes
16:48 <jgriffith> JM1: volumes: create, attach, delete, any API call
16:48 <jgriffith> back on track here... Sounds like consensus for the multiple processes on a single Cinder node?
16:48 <DuncanT> You can put multiple instances of the API behind a load balancer and most things work; there are some gaps still
16:48 <hub_cap> ok, so it sounds like we have consensus?
16:49 <winston-d> jgriffith: +1
16:49 <bswartz> the only thing unaffected by cinder going down is your data access
16:49 <DuncanT> jgriffith: +2
16:49 <hub_cap> single config file iirc? right?
16:49 <jgriffith> hub_cap: hehe
16:49 <jgriffith> hub_cap: yep, that's what I'm thinking
16:49 * hub_cap hopes I didn't open a new can of worms
16:49 <hub_cap> cool
16:49 <avishay> jgriffith: +3
16:49 <xyang_> +4
16:49 <hub_cap> w/ specific [GROUP NAMES]
16:49 <jgriffith> I just want to make sure up front... is anybody going to come back and say "why are we doing this?"
16:49 <jgriffith> speak now
16:50 <bswartz> jgriffith: so each process has its own conf file?
16:50 <jgriffith> hub_cap: GROUP NAMES seems like an interesting approach
16:50 <thingee> jgriffith: I think the HA issue is going to come up.
16:50 <hub_cap> it'll be nice to do CONFIG.backend_one.ip
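
For context, the [GROUP NAMES] idea maps onto oslo.config's named option groups, which is roughly what CONFIG.backend_one.ip requires; a sketch with entirely hypothetical group and option names, assuming the import path as packaged around that time:

    from oslo.config import cfg

    CONF = cfg.CONF

    # Hypothetical per-backend options registered under a named group, so
    # one cinder.conf could carry a [backend_one] section per back-end.
    backend_opts = [
        cfg.StrOpt('ip', help='Management IP of this back-end'),
        cfg.StrOpt('volume_driver', help='Driver class for this back-end'),
    ]
    CONF.register_opts(backend_opts, group='backend_one')

    # After CONF(...) parses cinder.conf, access follows the pattern
    # hub_cap describes: CONF.backend_one.ip
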
16:51 <avishay> HA needs to be solved regardless, IMO
16:51 <avishay> Orthogonal issues, no?
16:51 <DuncanT> HA I think is a summit topic, though we might be able to solve some of it beforehand
16:52 <jgriffith> avishay: I would agree, but thingee I think is pointing out something folks will definitely bring up
16:52 <avishay> Agreed
16:52 <jgriffith> But my answer there is then don't do it... use multi-nodes, types and the filter scheduler
16:53 <jgriffith> DuncanT: avishay: and yes, HA is something we really need, but it's a separate issue IMO
16:53 <avishay> types + filter sched will work here too, right?
16:53 <jgriffith> avishay: solves the more general problem and use case, yes
16:53 <jgriffith> avishay: but some don't want an independent cinder node for every back-end in their DC
16:54 <winston-d> jgriffith: well, I thought the only benefit of multiple c-vol on a single node is to save physical machines, since a lot of c-vols are just proxies, very lightweight workloads
16:54 <jgriffith> winston-d: isn't that what I said :)
16:54 <avishay> jgriffith: I agree, but now the scheduler won't choose the backend?
16:54 <winston-d> from the scheduler's point of view, it doesn't even have to know whether those c-vols are on the same physical server or not.
16:54 <jgriffith> winston-d: or are we typing in unison :)
16:55 <jgriffith> winston-d: Ahh... good point
16:55 <winston-d> jgriffith: yup
16:55 <jgriffith> OK... off topic again
16:55 <avishay> never mind my comment
16:55 <jgriffith> So the question is, everybody on board with hub_cap going down this path?
16:56 <DuncanT> Seems good to me
16:56 <avishay> me 2
16:56 <winston-d> avishay: it still does
16:56 <winston-d> me 2
16:56 <hub_cap> s/path/rabbit hole/
16:56 * jgriffith +1 has wanted it since Essex :)
16:56 <thingee> +1
16:56 <jgriffith> hub_cap: synonyms :)
16:56 <jdurgin1> it's fine with me
16:56 <guitarzan> nirmal will be so sad :)
16:56 <winston-d> hub_cap: I'd love to help you test/review the patch
16:57 * guitarzan runs
16:57 <xyang_> sounds good
16:57 <jgriffith> guitarzan: haha!!
16:57 <jgriffith> guitarzan: speak up
16:57 <winston-d> guitarzan: :)
16:57 <guitarzan> no, I think multiple managers is good
16:57 <jgriffith> alright, awesome
16:57 <jgriffith> Let's move forward then
16:57 <jgriffith> hub_cap: You da man
16:57 <hub_cap> winston-d: thank u sir
16:57 <hub_cap> and jgriffith <3
16:57 <jgriffith> hub_cap: ping any of us for help, though, of course
16:57 <hub_cap> roger
16:58 <jgriffith> OK... almost out of time
16:58 <jgriffith> #topic open discussion
16:58 *** openstack changes topic to "open discussion (Meeting topic: cinder)"
16:58 <thingee> I'm back after being sick since last week Wednesday... reviews and whatnot coming... sorry guys
16:59 <jgriffith> thingee: glad you're up and about, hope you're feeling better
16:59 <avishay> submitted the generic copy volume<->image patch
16:59 <rushiagr> we agreed on CONFIG.backend_one.ip type conf files, am I correct?
16:59 <avishay> and DuncanT submitted the backup-to-swift patch, which I hope to review soon
16:59 <kmartin> jgriffith: any updates on get_volume_stats() regarding the drivers providing "None" for values that they can't obtain?
17:00 <DuncanT> Snapshot/volume deletion is the other thing I wanted to bring up
17:00 <jgriffith> kmartin: Oh... I haven't talked to anyone about that yet, I don't think
17:00 <winston-d> kmartin: could you elaborate?
17:00 <DuncanT> Specifically the fact that currently, if you snapshot a volume then delete the volume, the snapshot is unusable if you need the provider loc/auth of the original volume
17:01 <jgriffith> DuncanT: sorry... see, I told you I'd forget :(
17:01 <jgriffith> We're out of time :(
17:01 <jgriffith> BUT
17:01 <jgriffith> EVERYBODY
17:01 <kmartin> winston-d: on 3PAR we are not able to provide a few of the values that are required
17:01 <jgriffith> Please take a look at the backup patch DuncanT submitted
17:01 <DuncanT> :-)
17:02 <jgriffith> and jump over to #openstack-cinder to finish this conversation
17:02 <jgriffith> We need to give the Xen folks the channel now
17:02 <jgriffith> #endmeeting
17:02 *** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"
17:02 <openstack> Meeting ended Wed Jan 16 17:02:16 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:02 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-16-16.02.html
17:02 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-16-16.02.txt
17:02 <openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-16-16.02.log.html
17:02 <winston-d> kmartin: let's continue the discussion in the cinder channel
17:03 <johngarbutt> #startmeeting XenAPI
17:03 <openstack> Meeting started Wed Jan 16 17:03:17 2013 UTC.  The chair is johngarbutt. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03 *** openstack changes topic to " (Meeting topic: XenAPI)"
17:03 <openstack> The meeting name has been set to 'xenapi'
17:03 <johngarbutt> Hi all
17:03 <johngarbutt> #topic actions from last meeting
17:03 <matelakat> hi
17:03 *** openstack changes topic to "actions from last meeting (Meeting topic: XenAPI)"
17:03 <toanster> hello
17:03 <Mr_T> howdy
17:03 <bobba_> Morning!
17:04 *** bobba_ is now known as BobBall
17:04 <johngarbutt> one action was for BobBall to check about any limits on the xenstore stuff
17:04 <johngarbutt> I think it was 2k for any xenstore value?
17:04 <BobBall> That's right - it's a page of 4k but with a bunch of headers, then we use a round value of 2048 bytes
17:05 <johngarbutt> right
17:05 <Mr_T> Thanks again, BobBall.
17:05 <johngarbutt> #info 2k limit on size of a xenstore value
17:05 <BobBall> hth!
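
Given that ceiling, any payload larger than one value has to be split; a generic sketch of chunking data to fit the 2048-byte limit (the write_record callable and key layout are made up for illustration, not the actual plugin API):

    XENSTORE_VALUE_LIMIT = 2048  # 4k page minus headers, rounded down

    def chunk_for_xenstore(payload):
        """Split a string into pieces that each fit one xenstore value."""
        return [payload[i:i + XENSTORE_VALUE_LIMIT]
                for i in range(0, len(payload), XENSTORE_VALUE_LIMIT)]

    def write_chunked(write_record, base_key, payload):
        # write_record is a hypothetical callable(key, value); each chunk
        # goes under an indexed key so the guest can reassemble in order.
        chunks = chunk_for_xenstore(payload)
        for i, chunk in enumerate(chunks):
            write_record('%s/%d' % (base_key, i), chunk)
        write_record('%s/count' % base_key, str(len(chunks)))
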
17:06 <johngarbutt> the other thing was to discuss resetting passwords and looking towards cloud-init rather than a xenapi-specific agent
17:06 <johngarbutt> I think we scheduled that for 17:30
17:06 <johngarbutt> #topic blueprints
17:06 *** openstack changes topic to "blueprints (Meeting topic: XenAPI)"
17:06 <johngarbutt> anyone got any blueprint worries?
17:07 <johngarbutt> mikal: is config drive done now? blueprint says in progress?
17:07 <johngarbutt> There are the same review requests as last week, I think
17:08 <johngarbutt> sounds like nothing more on this one?
17:08 <johngarbutt> #topic docs
17:08 *** openstack changes topic to "docs (Meeting topic: XenAPI)"
17:08 <johngarbutt> I updated the hypervisor wiki page to note that config drive hit trunk
17:09 <johngarbutt> #topic QA
17:09 *** openstack changes topic to "QA (Meeting topic: XenAPI)"
17:09 <johngarbutt> any more updates on gating trunk?
17:10 <johngarbutt> #topic open discussion
17:10 *** openstack changes topic to "open discussion (Meeting topic: XenAPI)"
17:10 <johngarbutt> anything people want to raise?
17:11 <johngarbutt> #info I got the XCP ISO running on VirtualBox, then OpenStack running using devstack; makes for a neat dev environment
17:12 <johngarbutt> is this meeting time still good with people? is it still useful?
17:12 <BobBall> It's been a quiet week, I think
17:12 <johngarbutt> I am keen to keep doing this weekly, so we have a good time to raise things when they come up
17:12 <guitarzan> I can bring up force detach if you like
17:12 <guitarzan> :)
17:13 <zykes-> any news on SAN stuff? :p
17:13 <toanster> yes, it is extremely helpful to me; I am still learning the space though
17:13 <johngarbutt> guitarzan: fire away
17:14 <matelakat> guitarzan: is it related to volumes?
17:14 <guitarzan> it's the same request... we (Rackspace) would really like to be able to do it
17:14 <guitarzan> matelakat: yeah
17:14 <johngarbutt> Oh, I see, today you might wait until a VM reboots before it happens, no way to force it?
17:15 <guitarzan> yes
17:15 <guitarzan> and I'm not even sure the support guys can always get it to happen during a reboot
17:15 <guitarzan> but I'm not entirely clear on that case
17:15 <BobBall> Guests with PV tools installed? Doesn't the detach happen from XS immediately? Or is it OpenStack that's postponing it because it's not sure it can happen?
17:15 <johngarbutt> you get "disk in use" from XenServer
17:15 <guitarzan> BobBall: xen won't detach because it's in use by the PV guests
17:16 <matelakat> I guess it's mounted, right?
17:16 <BobBall> oh - of course.  The detach is negotiated with the guest, yes.
17:16 <guitarzan> mounted, part of a raid, lvm, someone used the wrong side of the pillow
17:16 <BobBall> guitarzan: Just for confirmation, if you run a vbd-unplug with force=true, does that succeed?
17:17 <johngarbutt> it is one of those things that might be kernel-specific I guess, due to pvops code
17:17 <guitarzan> BobBall: someone was supposed to get me your email address and I was going to send you a couple of emails
17:17 <guitarzan> BobBall: I'm not sure... I didn't know about force=true!
17:17 <BobBall> guitarzan: I'm bob.ball@citrix.com - feel free to send me as many emails as you like
17:17 <johngarbutt> do we have the bug for this?
17:18 <guitarzan> BobBall: thanks!
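
The force=true unplug BobBall mentions is also reachable from the XenAPI Python bindings via VBD.unplug_force; a rough sketch, with placeholder URL and credentials (and "here be dragons", as noted in the comments):

    import XenAPI  # XenServer/XCP Python bindings

    def force_unplug_vbd(host_url, username, password, vbd_uuid):
        """Forcibly unplug a VBD, like `xe vbd-unplug ... force=true`."""
        session = XenAPI.Session(host_url)  # e.g. 'http://xenserver-host'
        session.xenapi.login_with_password(username, password)
        try:
            vbd_ref = session.xenapi.VBD.get_by_uuid(vbd_uuid)
            # unplug_force detaches even if the guest considers the disk
            # in use; in-flight writes can be lost.
            session.xenapi.VBD.unplug_force(vbd_ref)
        finally:
            session.xenapi.session.logout()
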
17:18 <guitarzan> one other interesting xen issue is we've seen pbd-unplug not actually remove the iSCSI connection
17:18 <johngarbutt> does nova have a force detach?
17:18 <guitarzan> BobBall: that's the other one I needed to ask you
17:18 <guitarzan> johngarbutt: I'm not sure
17:18 <johngarbutt> I guess the current code goes into a retry loop checking to see if the volume gets detached
17:18 <guitarzan> I think clay added it, but I think that was just the cinder side
17:19 <johngarbutt> I guess the nice thing would be to add a force remove (admin-only operation?)
17:19 <johngarbutt> or is it users that get worried?
17:19 <BobBall> guitarzan: I think that the iSCSI connection is managed by the SR - so since the PBD could in theory be re-plugged as long as the SR is active, that connection is expected to stay?  johngarbutt - is that right?
17:19 <guitarzan> well, they ran an sr-forget as well
17:19 <guitarzan> this has only happened a few times
17:19 <BobBall> oh - drat!
17:20 <johngarbutt> it certainly should get tidied up
17:20 <guitarzan> it's not exactly an unconfirmed bug, but I don't know how to reproduce it
17:20 <johngarbutt> #link https://bugs.launchpad.net/nova/+bug/1030108
17:20 <uvirtbot> Launchpad bug 1030108 in nova "Detaching a volume from a XenAPI instance fails if XenAPI thinks it is in use" [Medium,Confirmed]
17:20 <BobBall> So you've seen some cases where the SR doesn't exist in XS any more, yet the low-level iSCSI connection is still active?
17:20 <johngarbutt> I think that is when I became aware of the issues here
17:21 <guitarzan> BobBall: yes, very infrequently
17:21 <guitarzan> johngarbutt: got it
17:21 <guitarzan> that bug specifies the "remove from queue" half, but I think a force is more wanted
17:22 <johngarbutt> yep, we have a force, but bad things might happen to the data in the volume
17:22 <BobBall> guitarzan: That's not ideal! We'll have to look into possible causes for that
17:22 <zykes-> johngarbutt: san?
17:22 <guitarzan> BobBall: it would be great if we could explain it at least. it's possible that it happened during some manual sr/pbd cleanup after failed resizes and such
17:23 <johngarbutt> zykes-: no real news there, I am afraid
17:23 <BobBall> guitarzan: Can we get a copy of the SMlog from a system that this has occurred on?  There should be some logging of when we try to log out of the iSCSI session
17:24 <zykes-> johngarbutt: doh
17:24 <guitarzan> BobBall: I will look, but it's probably logrotated away by now
17:24 <johngarbutt> #action BobBall and guitarzan to get some logs about occasional iSCSI issue
17:24 <guitarzan> if I see another one, I will definitely grab logs
17:25 <BobBall> Great, thanks.  Ideally grab a bugtool so that other possible logs are also contained (e.g. messages, xensource.log) and the xapi DB to check state etc
17:25 <BobBall> Is this something that you fall over quickly when it happens? (i.e. because you can't reconnect the iSCSI target to another host or similar?)
17:26 <johngarbutt> cool, so that covers most things
17:26 <guitarzan> BobBall: no, we just found this doing some backend volume cleanup/audit
17:26 <johngarbutt> any more for any more? (before we move on to passwords without an agent)
17:27 <BobBall> Hmmm, okay.  Well, let's take this offline - I'll have a bit of a think and a dig to refresh myself on how we handle the iSCSI sessions, and hopefully we'll get an occurrence soon we can debug
17:28 <guitarzan> great, thanks
17:29 <johngarbutt> #topic password reset in a post-xenapi-agent world
17:29 *** openstack changes topic to "password reset in a post xenapi agent world (Meeting topic: XenAPI)"
17:29 <johngarbutt> hi, so there were some extra folks going to join us
17:29 <johngarbutt> who is in the channel and interested?
17:30 <johngarbutt> alexpilotti: how's things?
17:30 <johngarbutt> pvo: how's things?
17:30 <alexpilotti> johngarbutt: hey
17:30 <johngarbutt> #link https://blueprints.launchpad.net/nova/+spec/get-password
17:31 <johngarbutt> so it looks like there is a patch from vish to make the old and new work together
17:31 <johngarbutt> #link https://review.openstack.org/#/c/19746/
17:31 <johngarbutt> basically, if you set a password with the agent, you can get the encrypted password using the new nova call
17:32 <johngarbutt> alexpilotti: what is the plan with Hyper-V and cloud-init?
17:32 <johngarbutt> or rather, cloud-init on Windows
17:32 <alexpilotti> johngarbutt: 1) Implementing the patch that Vish proposed
17:32 <alexpilotti> using the metadata service
17:33 <johngarbutt> post to the metadata service?
17:33 <alexpilotti> yep
17:33 <johngarbutt> #link https://gist.github.com/4008762
17:33 <alexpilotti> 2) doing it via guest/host communication
17:33 <alexpilotti> letting the user choose between 1 and 2
17:33 <johngarbutt> so how do you kick off the getting of the password, is it just on VM create?
17:34 <alexpilotti> that's the safest way
17:34 <alexpilotti> this way the password doesn't travel unencrypted on any channel
17:34 <johngarbutt> I think the xenapi-side requirement is that we can reset the password without a reboot, at the user's request
17:35 <alexpilotti> that's what I want to do as well
17:35 <johngarbutt> I know the user currently specifies the password, but I think we are OK with it being generated
17:35 <johngarbutt> as you say, much safer
17:35 <alexpilotti> here's BTW the API: from nova.api.metadata import password; password.set_password(context, encrypted_pass)
17:35 <alexpilotti> to be used on the host side to set the password
17:35 <johngarbutt> right, makes sense, thanks
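
A sketch of the host-side call alexpilotti quotes; only the import and the set_password() call come from the discussion, and the wrapper function is hypothetical:

    from nova.api.metadata import password

    def store_guest_password(context, encrypted_password):
        # The guest generates and encrypts the password, so the host side
        # only ever handles the encrypted blob; the user later retrieves
        # it with the new get-password call mentioned above.
        password.set_password(context, encrypted_password)
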
johngarbuttso I guess you want the hypervisor system so you don't need the metadata service17:36
alexpilottiI'm thinking on transforming cloud-init from a simple "inizialize and exit" service to a service that listens all the time for requests17:36
alexpilottiwhich is similar to the way in which it works on Azure, for example17:36
johngarbuttinteresting. where would the requests come from in your case?17:36
alexpilottisetting the password is the most obvious scenario17:36
johngarbuttI guess puppet kick might be another, eventually17:37
alexpilottiwith a Nova extension, like the one you are already supporting17:37
johngarbuttI mean, how does cloud-init get communicated to?17:37
johngarbuttdoes it poll metadata service? does config drive get updated?17:38
alexpilottiI'd like to have an sbstract API on top of each hypervisor's communication channel17:38
alexpilottilet's say a simple value get / set semantyc17:38
matelakatSo it's more like an agent :-)17:38
johngarbuttOK, I think that makes sense17:38
johngarbutthave you seen the agent api in xenapi, one sec17:38
alexpilottiwith also some event listening channel17:39
*** vishy_zz is now known as vishy17:39
matelakaty, the whole stuff remonds me to the xenapi agent.17:39
alexpilottijohngarbutt: can you sned me a link?17:39
johngarbutt#link https://github.com/openstack/nova/blob/master/nova/virt/xenapi/agent.py17:39
matelakat*reminds17:39
pvohere now… sorry had something run over.17:39
alexpilottiif we can avoid to reivent the well is even better :-)17:39
alexpilotti* wheel, lol17:39
*** hemnafk is now known as hemna17:40
johngarbuttand on the server side: #link https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/agent17:40
johngarbuttif __name__ == "__main__":    XenAPIPlugin.dispatch(        {"version": version,        "key_init": key_init,        "password": password,        "resetnetwork": resetnetwork,        "inject_file": inject_file,        "agentupdate": agent_update})17:40
johngarbutthmm, that is a bit specific I guess17:40
alexpilottiso agent.py runs on the guest?17:41
johngarbuttyes17:41
johngarbuttno17:41
johngarbuttsorry17:42
johngarbuttit runs on the hypervisor as an API extension17:42
johngarbuttalong with this code #link https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/xenstore.py17:42
johngarbuttit is called from nova-compute17:42
matelakatJohngarbutt: do you have a link to the agent source code?17:42
johngarbuttbasically there is a shared key value store17:42
pvois it still here? https://launchpad.net/openstack-guest-agents17:43
pvoor moved to github?17:43
johngarbutthttp://wiki.openstack.org/GuestAgent17:43
pvohttps://github.com/rackspace/openstack-guest-agents-unix17:43
johngarbuttthats the one, cheers17:43
matelakatalexpilotti: does it help?17:44
johngarbuttbit information overload, lets set back a little17:44
alexpilottilooking!17:45
johngarbuttpassword generate on guest, sent to use encrypted via metadata service: done by nova, cloud-init still needs to support17:45
johngarbuttpassword generate is triggered by VM start17:46
*** jhenner has joined #openstack-meeting17:46
alexpilottigot it17:46
johngarbuttat the same time as key injection17:46
matelakatjohngarbutt: thanks, that helps, I almost lost the context.17:46
johngarbuttwhat we need is:17:46
*** Mandell has joined #openstack-meeting17:46
johngarbutt1) method to kick off password reset17:46
*** vishy is now known as vishy_zz17:46
*** s1rp has joined #openstack-meeting17:46
comstud(yes, github code is current for agent)17:46
johngarbutt2) method to send encrypted generated password when not using metadata service17:47
johngarbuttyou could do 1 by making cloud init poll the metadata service17:47
alexpilotti1) I'd go with an extension like "reset-password" or similar17:47
johngarbuttI think vish already almost added that17:48
alexpilottiand use vishy's get-password to retrieve it17:48
johngarbuttclear-password or something?17:48
comstudi think cloud-init is not a daemon today, correct?17:48
comstudie, just runs once at boot17:48
* comstud is not sure17:48
alexpilotticomstud: correct17:48
johngarbuttright, good point17:48
alexpilotticomstud: and smoser doesn't like the idea of transforming it into an agent17:48
johngarbuttI like the idea of just re-triggering cloud-init in certain cases, and an agent would do that I guess17:49
johngarbuttright, that is worth knowing, thanks17:49
alexpilottimy take is that cloud-init should become a full agent17:49
*** garyk has quit IRC17:49
comstudswitching out the agent for cloud-init affects more than just password reset17:49
comstudit also affects any networking changes that are made17:49
johngarbuttgood point17:49
comstudright now we're able to tell the agent to re-configure networking17:49
alexpilottiI'd also fifferentiate the command plugins17:50
BobBallnice word!17:50
alexpilottilool17:50
alexpilottiforget ny typos :-D17:50
johngarbutt:-)17:50
alexpilottiI'm notoriously a disaster in writing in English quickly :-)17:51
johngarbuttI guess the other point is to get images that can work in more places at once17:51
alexpilottiI should deploy more than 4 fingers probably ;-)17:51
alexpilottiback to the idea17:51
alexpilottia command can have a property17:51
alexpilottithat indicates if it can be executed only at boot17:52
alexpilottiso for example, password reset could be triggered any time17:52
alexpilottinetwork injection only at boot, etc17:52
pvoalexpilotti: this is assuming its a daemon, right?17:52
johngarbutterm, I think we want network reset at other times too, but  I see what you mean17:52
alexpilottithat's an option.17:52
alexpilottipvo ^17:53
*** vishy_zz is now known as vishy17:53
pvoalexpilotti: how would it get triggered at other times?17:53
alexpilottipvo: on WIndows it's a service17:53
pvook17:53
comstudhe's proposing it become a daemon17:53
comstudwith certian commands marked as boot-time-only17:53
pvoa Windows service is a daemon, no?17:53
matelakaty17:53
comstudjohngarbutt: and yeah, network reconfiguration should be allowed at any time, IMO17:53
pvo(admittedly, its been a while for me and windows)17:54
alexpilottiI'd go with password-reset Nova API extension -> hypervisor sends message via comm channel to guest -> cloud-init agent triggers event -> command plugin execution17:54
comstudthis isn't windows 95 where we should have to reboot to get changes.17:54
comstud:)17:54
johngarbuttor windows ME17:54
johngarbuttyew17:54
johngarbuttanyways17:54
comstud;)17:54
alexpilotticomstud: reemind me to put a flag to set the host on fire if somebody tries to deploy Win95 :-D17:54
comstudheheh17:55
BobBallI miss windows 95...17:55
BobBallBut I digress17:55
johngarbuttI think we might need to separate the cloud-init and deamon thingy17:55
comstudI have a windows 3.11 VM.. w/ IE 3.17:55
alexpilottiBobBall: "de gustibus non disputandum est" :-)17:55
johngarbuttcould we say there is an agent that kicks cloud-init, for example17:55
matelakatjohngarbutt: +117:55
pvojohngarbutt: thats kinda what we did on our windows agent.17:55
pvoits two services17:55
alexpilottijohngarbutt: smoser anyway doesnt like the idea17:55
pvoalexpilotti: is that the only reason to kill it?17:56
alexpilottijohngarbutt: on WIndows, I'd prefer to deploy a single WIndows service17:56
comstudI think we do just have an extra agent that can support extra abilities that cloud-init cannot provide17:56
matelakatin XenAPI terms it means agent and configdrive living together?17:56
comstudIt's optional...17:56
alexpilottisince cloudbase-init (aka cloud-init for Windows) is already running as a service17:56
comstudif you don't have it in your VM.. you can't password-reset or reconfig networking without rebooting17:56
comstudetc17:57
alexpilotticomstud: correct17:57
johngarbuttI am thinking, what if we start again, what would we build...17:57
johngarbuttgood point, it is optional17:57
alexpilotticomstud: IMO "reset-password" shoud throw an Exception if the opration fails17:57
comstudright17:57
alexpilottidue to missing agent code17:57
johngarbuttI guess cloud-init takes configdrive or metadata stuff and does things17:57
*** EmilienM has quit IRC17:57
alexpilottijohngarbutt: correct17:57
johngarbuttideally we would like to add another alternative: hypervisor transport17:58
johngarbutt(such as xenstore)17:58
johngarbuttthe other thing is when do start cloud-init17:58
*** pschaef has quit IRC17:58
johngarbuttthat is either at boot, or maybe via an agent17:58
alexpilottiMy 2c are: configdrive/metadata RO, hypervisor transport RW17:58
johngarbuttwhat the agent does today could move to a cloud-init plugin?17:58
johngarbuttalexpilotti: good point17:59
johngarbutt(do ping if there is another meeting now)17:59
johngarbuttdoes that sound crazy or good?17:59
alexpilottijohngarbutt: from a first check, your agent's design looks very similar to cloud-init and cloudbase-init17:59
johngarbuttthe agent could become part of cloud-init, maybe17:59
alexpilottismoser: ping17:59
comstudyeah, that's what's been thrown around before18:00
alexpilottifor sure we can at least write a common code base18:00
comstudthings that are RO and aren't supported by cloud-init directly can be cloud-init plugins18:00
alexpilottiand if the argument for keeping cloud-init and the agent separate wins18:00
comstudRW is separate agent still, comm via XenStore or whatever secure mechanism if you have to talk between hyp and guest18:00
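
As a rough guest-side illustration of that RW channel, the sketch below shells out to the xenstore-read/xenstore-write utilities that ship with the Xen guest tools; the key paths and JSON message format are invented for the example and are not the actual agent protocol:

    import json
    import subprocess

    # Hypothetical XenStore keys for the request/response exchange.
    REQUEST_KEY = "data/guest/request"
    RESPONSE_KEY = "data/guest/response"

    def read_request():
        # Fetch the pending command published by the hypervisor side.
        raw = subprocess.check_output(["xenstore-read", REQUEST_KEY])
        return json.loads(raw.decode("utf-8"))

    def write_response(result):
        # Publish the command result back for the hypervisor to read.
        subprocess.check_call(
            ["xenstore-write", RESPONSE_KEY, json.dumps(result)])
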
alexpilottiat least cloud-init and the agent become wrappers around the common code18:00
johngarbuttwell password stuff now means metadata can accept data18:01
alexpilottijohngarbutt: I hate it18:01
alexpilotti:-)18:01
johngarbuttbut the trigger from a two way channel is the agent18:01
smoseralexpilotti, my ears were ringing.18:01
alexpilottismoser: hi!18:01
alexpilottismoser: we are discussing the old cloud-init vs agent question18:02
smoseri just personally do not see much value in an agent that is tied to a hypervisor/cloud-platform.18:02
johngarbuttwe are over time here, we might want to nip onto #openstack-nova?18:02
alexpilottinp18:02
smosersure18:02
johngarbuttthanks all18:03
johngarbuttsee some of you in the other channel18:03
alexpilottiI suggest -dev, as it's a discussion that might attract more people there18:03
johngarbuttgood point18:03
johngarbutt#endmeeting18:03
*** adjohn has quit IRC18:03
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"18:03
openstackMeeting ended Wed Jan 16 18:03:31 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:03
openstackMinutes:        http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-16-17.03.html18:03
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-16-17.03.txt18:03
openstackLog:            http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-16-17.03.log.html18:03
matelakatsee you18:03
*** matelakat has quit IRC18:03
*** mfoukes has quit IRC18:04
*** kaganos has joined #openstack-meeting18:07
*** jcmartin has joined #openstack-meeting18:07
*** darraghb has quit IRC18:08
*** ccorrigan has joined #openstack-meeting18:13
*** ndipanov has quit IRC18:13
*** cp16net is now known as cp16net|away18:14
*** s1rp has left #openstack-meeting18:15
*** garyk has joined #openstack-meeting18:17
*** derekh has quit IRC18:19
*** sdake has quit IRC18:19
*** dtalton has joined #openstack-meeting18:22
*** jhenner has quit IRC18:26
*** psedlak has quit IRC18:26
*** fnaval has joined #openstack-meeting18:28
*** adjohn has joined #openstack-meeting18:28
*** alexpilotti has quit IRC18:29
*** johngarbutt has left #openstack-meeting18:29
*** sdake has joined #openstack-meeting18:31
*** sdake has quit IRC18:31
*** jog0 has joined #openstack-meeting18:32
*** nati_ueno has joined #openstack-meeting18:36
*** adjohn has quit IRC18:36
*** sdake has joined #openstack-meeting18:37
*** vkmc has quit IRC18:40
*** nati_ueno has quit IRC18:40
*** sdake_z has joined #openstack-meeting18:47
*** nati_ueno has joined #openstack-meeting18:48
*** dosaboy1 has quit IRC18:50
*** dosaboy has joined #openstack-meeting18:50
*** eharney has quit IRC18:52
*** mrunge has quit IRC18:54
*** egallen has left #openstack-meeting18:59
*** kaganos has quit IRC18:59
*** vipul is now known as vipul|away19:02
*** cp16net|away is now known as cp16net19:03
*** jog0 has quit IRC19:03
*** kaganos has joined #openstack-meeting19:08
*** otherwiseguy has joined #openstack-meeting19:09
dosaboyanyone know if the swift meeting is happening?19:10
creihtdosaboy: next week19:11
creihtit is every other week19:11
dosaboyoh right it was last week19:11
dosaboyk thanks19:11
*** martine_ has quit IRC19:14
*** martine_ has joined #openstack-meeting19:15
*** vipul|away is now known as vipul19:15
*** shmcfarl has joined #openstack-meeting19:16
*** shmcfarl has left #openstack-meeting19:17
*** woodspa has joined #openstack-meeting19:18
*** salv-orlando has quit IRC19:19
*** sarob has joined #openstack-meeting19:20
*** sarob has quit IRC19:20
*** Mr_T has left #openstack-meeting19:20
*** sarob has joined #openstack-meeting19:20
*** rafaduran1 has joined #openstack-meeting19:21
*** jcmartin has quit IRC19:21
*** jcmartin has joined #openstack-meeting19:22
*** rafaduran has quit IRC19:22
*** novas0x2a|laptop has joined #openstack-meeting19:34
*** mawagon1 has joined #openstack-meeting19:37
*** dosaboy has quit IRC19:39
*** dosaboy has joined #openstack-meeting19:39
*** stevebaker has quit IRC19:41
*** mawagon1 is now known as olaph19:41
*** cp16net is now known as cp16net|away19:41
*** adjohn has joined #openstack-meeting19:45
*** jcmartin has quit IRC19:45
*** jcmartin has joined #openstack-meeting19:46
*** vbannai has quit IRC19:51
*** afazekas has joined #openstack-meeting19:51
*** shadower has joined #openstack-meeting19:52
*** stevebaker has joined #openstack-meeting19:52
*** cp16net|away is now known as cp16net19:56
*** sarob has quit IRC19:57
*** bencherian has quit IRC19:57
*** shardy has joined #openstack-meeting19:57
*** eharney has joined #openstack-meeting19:58
*** eharney has quit IRC19:58
*** eharney has joined #openstack-meeting19:58
sdake_z#startmeeting heat20:00
openstackMeeting started Wed Jan 16 20:00:06 2013 UTC.  The chair is sdake_z. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: heat)"20:00
openstackThe meeting name has been set to 'heat'20:00
sdake_z#topic rolecall20:00
*** openstack changes topic to "rolecall (Meeting topic: heat)"20:00
sdake_zsdake here20:00
stevebaker\o/20:00
shardyshardy here20:00
jpeelerjpeeler here20:00
asalkeldhere20:00
shadowerhere20:01
sdake_zzane around?20:01
stevebakerhe is not online20:01
sdake_z#topic action review20:01
*** openstack changes topic to "action review (Meeting topic: heat)"20:01
sdake_zya i see, must be on extended pto ;)20:01
sdake_zACTION: git review pluggable-clients (everyone) (stevebaker, 20:23:22)20:02
sdake_zas i recall that patch went in20:02
stevebakerit sure did20:02
sdake_z#info patch merged20:02
sdake_zACTION: Write blueprints for any current and potential future feature development (everyone) (stevebaker, 20:30:15)20:02
sdake_zi noticed a few more blueprints20:03
shardyadded some for updatestack improvements20:03
sdake_zif you have more, please submit and we can take them up in h20:03
sdake_zACTION: Start moving towards the PTL process (sdake) (stevebaker, 20:35:22)20:03
*** jtran has joined #openstack-meeting20:03
sdake_zso started down this path, i'll have more in the meeting20:04
sdake_z#info see meeting topic governance20:04
*** annegentle has quit IRC20:04
sdake_z#topic blueprint review for g320:04
*** openstack changes topic to "blueprint review for g3 (Meeting topic: heat)"20:04
stevebaker#link https://launchpad.net/heat/+milestone/grizzly-320:05
sdake_zlast meeting I believe we agreed to hold off on new blueprint implementations and focus new features on h120:05
sdake_zbut we have some blueprints for g3, would like to review20:05
sdake_zhttps://launchpad.net/heat/+milestone/grizzly-320:06
sdake_zoverview page ^20:06
sdake_zfirst one up is20:06
sdake_zadd a static resource group20:06
*** asalkeld has quit IRC20:06
sdake_zhttps://blueprints.launchpad.net/heat/+spec/static-inst-group20:06
*** asalkeld has joined #openstack-meeting20:06
sdake_zangus, definition is in "new"20:06
asalkeldsorry20:07
sdake_zno problems, should we move to approved?20:07
asalkeldyea we need it20:07
*** maurosr has quit IRC20:07
*** annegentle_itsme has joined #openstack-meeting20:07
asalkeldyip20:07
sdake_z#info add a static instance group defn moved to approved20:07
sdake_zrest of the fields look right20:07
asalkeldMoniker resource could be delayed20:07
sdake_zMoniker Resource20:07
sdake_zhttps://blueprints.launchpad.net/heat/+spec/moniker-resource20:08
asalkeldnot sure on the status of the client20:08
sdake_zshould I bounce out of g3 then?20:08
asalkeldyea20:08
sdake_zwe have 5 more weeks to sort through it20:08
sdake_zok i'll bounce it20:08
sdake_z#action moniker resource delayed because of client issues - targeting to h20:09
sdake_z#undo20:09
openstackRemoving item from minutes: <ircmeeting.items.Action object at 0x1e36850>20:09
sdake_zaction sdake to bounce moniker resource - delayed because of client issues - targeting towards h20:09
sdake_zraw template db20:09
sdake_zhttps://blueprints.launchpad.net/heat/+spec/raw-template-db20:09
sdake_zdefn is pending approval20:10
sdake_zany problems with approving?20:10
asalkeldif someone has time to impl.20:10
stevebakerThat would be good to do within grizzly since it is changing the db structure20:10
sdake_zagree20:10
sdake_zstevebaker you have  time?20:11
stevebakeri think so20:11
sdake_zvpc resources:20:11
sdake_z#link https://blueprints.launchpad.net/heat/+spec/vpc-resources20:11
stevebaker#link https://etherpad.openstack.org/grizzly-heat-quantum20:12
stevebakerI was going to pick this up again in this cycle20:12
sdake_zone thing i dont like about the blueprint is the link is to a changing medium20:12
asalkeldstevebaker, need you for presentation20:12
sdake_zcan we pull the etherpad into the blueprint?20:12
asalkeldnormally you link to a wiki page20:13
stevebakerthe original plan was that others would help with the effort - this would be good since I've made some huge assumptions on the vpc<->quantum mapping that need validation20:13
*** annegentle_itsme has quit IRC20:13
sdake_zdefn is new - should this move to approved?20:13
stevebakeryep20:14
asalkeldseems big20:14
sdake_zdoes seem big - what is current state of the impl?20:14
stevebakermaybe it should be a parent blueprint with another blueprint for each resource20:14
*** vipul is now known as vipul|away20:14
asalkeldgood idea20:14
shardysounds like a good idea20:14
sdake_zyup agreed20:14
stevebakerthere is a start, 2 resources so far?20:14
sdake_zwe can review this again next week after you have split out the blueprints and then approve at that time20:15
sdake_z#action sdake to review in next weekly meeting vpc resources for moving to defn->approved20:15
stevebakerok, and we can just do as many as there is time for in the cycle20:15
sdake_zyup20:15
shardysdake_z: I'd like to add autoscale-update-stack and instance-update-stack to g3, provided I have time to do them20:16
shardythey have been requested by several users20:16
sdake_zjpeeler you're up with complete init functionality20:16
sdake_zok i'll add them to agenda to discuss next20:16
asalkeldchoose requested feature first IMO20:16
sdake_z#link https://blueprints.launchpad.net/heat/+spec/aws-cloudformation-init20:16
jpeeleri started it, but shifted over to working on something else20:16
jpeeler"Heat EBS implementation should support Cinder" which is also targetted for g320:17
sdake_zok20:17
sdake_zwell will this make g3 then?20:17
jpeeleri think so, it's not that much work20:17
sdake_zok moving defn to approved20:18
sdake_z#info init moved to approved20:18
sdake_zshardy can you link each blueprint20:18
sdake_zone at a time ;)20:18
shardy#link https://blueprints.launchpad.net/heat/+spec/autoscale-update-stack20:19
shardy#link https://blueprints.launchpad.net/heat/+spec/instance-update-stack20:19
*** topol has quit IRC20:19
sdake_zupdatestack for autoscaling:20:19
asalkeldfirst one should be easy20:19
sdake_zlooks good, anyone object?20:19
shardyautoscale should be pretty easy20:19
sdake_zif not, i'll target towards our series goal20:19
shardyinstance may be partial, the main feature required is to update metadata for cfn-hup, which again should be pretty easy20:20
*** topol has joined #openstack-meeting20:20
*** annegentle has joined #openstack-meeting20:20
sdake_z#info autoscaling update group accepted as blueprint20:20
asalkeldthat would be a good feature20:20
shardysdake_z: I'll assign to me as I've started looking at both20:20
sdake_zsounds good20:21
*** mawagon1 has joined #openstack-meeting20:21
sdake_z#info approved instance updatestack blueprint20:21
*** shmcfarl_ has joined #openstack-meeting20:21
*** afazekas_ has quit IRC20:21
*** APMelton12 has joined #openstack-meeting20:22
sdake_zok thats the blueprints20:22
sdake_z#topic bug squashing20:22
*** openstack changes topic to "bug squashing (Meeting topic: heat)"20:22
sdake_zwe have 40 bugs atm20:22
sdake_zi propose we each assign ourselves 3 bugs (1 high prio, 2 medium) and work on getting them solved on a weekly basis20:23
*** olaph has quit IRC20:23
sdake_zany objections?20:23
stevebakersounds good20:24
shardy+120:24
stevebakerlooks like some of them should be blueprints20:24
stevebakerhttps://bugs.launchpad.net/heat/+bug/108350120:24
shadowerI don't know how much time I'll be able to put into Heat these days but I'll try to help20:24
uvirtbotLaunchpad bug 1083501 in heat "Heat packaging for ubuntu" [High,Triaged]20:24
sdake_zfor g3, lets keep them as bugs, in future lets keep blueprints in mind20:24
*** shmcfarl_ is now known as shmcfarl20:24
sdake_zok shadower understood20:24
shardystevebaker: actually I have some (which I raised) which should also be blueprints, e.g:20:25
shardyhttps://bugs.launchpad.net/heat/+bug/107295220:25
uvirtbotLaunchpad bug 1072952 in heat "Implement Rollback feature of AWS API" [Medium,Triaged]20:25
sdake_z#topic governance20:25
*** openstack changes topic to "governance (Meeting topic: heat)"20:25
stevebakershardy: yeah, thats a huge one ;)20:26
sdake_zi've done a bit of hunting and found this link: #link http://wiki.openstack.org/Governance/TCElectionsFall201220:26
sdake_zif you read through that, the key points are 2 weeks of election cycle, 1 week for open candidacy and one week for election period20:27
sdake_zalso, there are election officials20:27
*** olaph has joined #openstack-meeting20:27
sdake_zMonty Taylor has agreed to serve as our election official20:27
asalkeldcool20:27
stevebakershall we ask ceilometer what they did? I bet it didn't take that long20:27
sdake_zThe dates are : Jan 14th-25th open candidacy for heat ptl20:27
*** mawagon1 has quit IRC20:27
sdake_zjan 26-feb1 heat ptl election20:27
sdake_zpart of what we will be evaluated on is how well we can learn openstack processes ;)20:28
sdake_zthe voting site is here:20:28
sdake_z#link http://www.cs.cornell.edu/w8/~andru/civs/20:28
stevebakertrue20:28
sdake_zso if we can operate launchpad and the voting site, we are golden ;)20:28
sdake_zany other questions about that?20:29
asalkeldlets do it20:29
sdake_zsounds good20:29
stevebakerye20:29
sdake_z#topic packaging20:29
*** openstack changes topic to "packaging (Meeting topic: heat)"20:29
sdake_zdid you add this stevebaker?20:29
*** vkmc has joined #openstack-meeting20:29
stevebakerSo g-2 packages were blocked on removing the extras dependency20:29
stevebakerthe question is, do we create a grizzly branch and backport that change and build packages from that?20:30
shardystevebaker: are the nightly builds running now?20:30
stevebakeror do we just create packages from master?20:30
sdake_zmaster imo20:30
stevebakeryep http://repos.fedorapeople.org/repos/heat/heat-trunk/20:30
shardystevebaker: cool, will try them out :)20:31
sdake_z#info packages to be created from master20:31
sdake_zspeaking of packaging we have one bug for an ubuntu ppa20:31
sdake_zmind taking that on stevebaker since you're getting involved in ubuntu/deb upstreams?20:31
stevebakerMy heat-rpms is diverging a bit, so I'll be asking for a branch merge when g-2 is ready https://github.com/steveb/heat-rpms20:31
stevebakeryep, I'm getting my head around debian/ubuntu as well20:32
sdake_z#action stevebaker to take on ubuntu PPA of heat20:32
sdake_z#topic cfntools integration20:32
*** openstack changes topic to "cfntools integration (Meeting topic: heat)"20:32
stevebakerThis is also related to packaging20:33
stevebakerhere is what I think we should do20:33
stevebaker1. encourage users to spin their own images which include packaged heat-cfntools, as well as anything else they need20:33
*** vipul|away is now known as vipul20:33
*** turul_ has joined #openstack-meeting20:33
stevebaker2. use cloud-init to check for cfn-*, and install from pypi if not there20:34
*** turul_ is now known as afazekas_20:34
asalkeldI like 220:34
sdake_z2 sounds interesting20:34
shardy+120:34
sdake_zneed to keep 1 i think20:34
sdake_zok lets do that - stevebaker taking that on?20:35
stevebaker3. always install to /usr/bin, and include a script which sets up symlinks to /opt/aws/bin. cloud-init runs this script if necessary <- this is because packages (and pypi) should never install in /opt20:35
sdake_z#info adding cloud-init support for cfntools20:35
*** SpamapS has joined #openstack-meeting20:35
asalkeldstevebaker, whatever - the setup.py can do that20:35
asalkeldinstall to /opt/aws or link to it20:36
stevebakerbut packages can't, not if we want the package to be in official repos20:36
sdake_zasalkeld i think sbaker is talking about option 320:36
stevebakerdebian already said no ;)20:36
sdake_zya packaging in opt is a hot topic ;)20:36
asalkeldwell then the cloud-init script could do that20:36
asalkeld(make the links)20:37
sdake_zya we can upload a handler for that20:37
sdake_zok all set then?20:37
stevebakeryep20:37
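
A sketch of the boot-time logic behind points 2 and 3: check for the cfn-* tools, install heat-cfntools from PyPI only if they are missing, then create the /opt/aws/bin symlinks that the packages themselves are not allowed to ship. The tool list is abbreviated and the paths illustrative:

    import os
    import subprocess

    CFN_TOOLS = ["cfn-init", "cfn-hup", "cfn-signal"]  # abbreviated
    OPT_BIN = "/opt/aws/bin"

    def which(name):
        # Return the full path of an executable found on PATH, or None.
        for d in os.environ.get("PATH", "").split(os.pathsep):
            candidate = os.path.join(d, name)
            if os.access(candidate, os.X_OK):
                return candidate
        return None

    def ensure_cfntools():
        # 2. install from pypi only when the tools are not already there
        if not all(which(t) for t in CFN_TOOLS):
            subprocess.check_call(["pip", "install", "heat-cfntools"])
        # 3. packages install to /usr/bin; the /opt/aws/bin symlinks are
        # made at boot instead, since packages must never touch /opt
        if not os.path.isdir(OPT_BIN):
            os.makedirs(OPT_BIN)
        for tool in CFN_TOOLS:
            target = which(tool)
            link = os.path.join(OPT_BIN, tool)
            if target and not os.path.exists(link):
                os.symlink(target, link)
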
sdake_z#topic open discussion20:38
*** openstack changes topic to "open discussion (Meeting topic: heat)"20:38
sdake_zsending out agenda before meeting - not sure this has been happening20:38
* asalkeld super lazy20:38
sdake_zi'll take that on, but need folks to add their agenda items on Tuesday before 2000 UTC20:38
sdake_zand then I'll mail out notes at conclusion of meeting20:39
sdake_zthats all I have for open discussion anyone else?20:39
SpamapShm20:39
stevebakerany issues running heat on f18 so far?20:39
SpamapSsorry I came late20:39
asalkeldhi SpamapS20:39
asalkeldwe are in open discussion20:39
SpamapSare we drafting/filing blueprints for discussion at the summit now?20:40
sdake_zstill time in the meeting slot20:40
asalkeldyip, you got time20:40
shadowerstevebaker, I had trouble getting devstack to work today, will get it and Heat sorted tomorrow20:40
sdake_zthe basic idea is we will draft now, and discuss at summit20:40
shardySpamapS: we added the updatestack autoscaling and instance metadata blueprints, which we discussed recently, for g-320:40
sdake_zand assign to h1/h2/h320:40
SpamapSshardy: woot20:40
SpamapSI'd also like to file one on performance20:40
sdake_zif there is something heat is broken without, bring it up and we can see about fitting it in20:40
sdake_zbut we have a pretty tight schedule20:41
sdake_zfeb21 is our deadline for our 8 bps and 40 bugs20:41
SpamapSspecifically on measuring it during CI so we can improve it.20:41
SpamapSsdake_z: is there a need for general bug fixing? Like, should I just go pop a few off the stack of medium priority bugs?20:42
*** egallen has joined #openstack-meeting20:42
sdake_zthat would be great!20:42
sdake_zwe have a ton of bugs and devs pretty thin on time20:42
SpamapSmmk I'll look at doing that when next I'm blocked.. which happens a lot ;)20:42
SpamapSthats all I had20:42
sdake_z#info spamaps to pick a few bugs up in spare cycles20:43
sdake_zanything else for open discussion?20:43
sdake_zok well thanks folks20:43
sdake_z#endmeeting20:43
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"20:43
openstackMeeting ended Wed Jan 16 20:43:38 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:43
openstackMinutes:        http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-16-20.00.html20:43
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-16-20.00.txt20:43
openstackLog:            http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-16-20.00.log.html20:43
*** danspraggins has joined #openstack-meeting20:46
*** vipul is now known as vipul|away20:50
*** DanFlorea has joined #openstack-meeting20:50
*** DanFlorea has left #openstack-meeting20:50
*** danflorea has joined #openstack-meeting20:51
*** vipul|away is now known as vipul20:52
*** stevebaker has quit IRC20:52
*** maurosr has joined #openstack-meeting20:52
*** stevebaker has joined #openstack-meeting20:55
*** jcmartin has quit IRC20:57
*** eglynn_ has joined #openstack-meeting20:57
*** dolphm has left #openstack-meeting20:58
nijaba#startmeeting Ceilometer21:00
openstackMeeting started Wed Jan 16 21:00:01 2013 UTC.  The chair is nijaba. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
nijaba#meetingtopic Ceilometer21:00
nijaba#chair nijaba21:00
nijaba#link http://wiki.openstack.org/Meetings/MeteringAgenda21:00
nijabaATTENTION: please keep discussion focused on topic until we reach the open discussion topic21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: Ceilometer)"21:00
openstackThe meeting name has been set to 'ceilometer'21:00
*** openstack changes topic to " (Meeting topic: Ceilometer)"21:00
openstackCurrent chairs: nijaba21:00
nijabaHello everyone! Show of hands, who is around for the ceilometer meeting?21:00
nijabao/21:00
dhellmanno/21:00
n0anoo/21:00
egalleno/21:00
eglynn_o/21:00
*** eglynn_ is now known as eglynn21:00
*** alexpilotti has joined #openstack-meeting21:00
nijaba#topic actions from previous meeting21:00
danspragginso/21:00
*** openstack changes topic to "actions from previous meeting (Meeting topic: Ceilometer)"21:00
danfloreao/21:00
nijaba#topic llu to get in touch with the healthmon team to see what their reaction is to our plan for integration, putting the ml in cc21:01
*** openstack changes topic to "llu to get in touch with the healthmon team to see what their reaction is to our plan for integration, putting the ml in cc (Meeting topic: Ceilometer)"21:01
*** Shengjie_home has joined #openstack-meeting21:01
nijabaThat was done, we saw a few exchanges on the ml.  I don't think llu can join us at this time.21:01
nijaba#topic nijaba to make the api-aggregate-average bp a dep for v2 api21:01
*** openstack changes topic to "nijaba to make the api-aggregate-average bp a dep for v2 api (Meeting topic: Ceilometer)"21:01
jd__o/21:01
*** alexpilotti has quit IRC21:01
fnavalo/21:01
nijabaThat was done as well.21:01
nijaba#topic nijaba to prepare a blogpost announcing the release21:01
*** openstack changes topic to "nijaba to prepare a blogpost announcing the release (Meeting topic: Ceilometer)"21:01
nijabaThat was published on monday21:02
nijaba#link https://launchpad.net/ceilometer/+announcement/1106921:02
eglynnyay!21:02
*** sarob has joined #openstack-meeting21:02
nijabaWe'll be talking a bit about the release shortly21:02
*** cdub has quit IRC21:02
nijaba#topic nijaba to start a thread on ml about unit policy21:02
*** openstack changes topic to "nijaba to start a thread on ml about unit policy (Meeting topic: Ceilometer)"21:02
*** cdub has joined #openstack-meeting21:03
nijabaThread was started yesterday. We'll talk about it a bit more in a minute.21:03
*** sarob has quit IRC21:03
nijabaThat's it for last week action21:03
nijaba#topic Units discussion21:03
*** openstack changes topic to "Units discussion (Meeting topic: Ceilometer)"21:03
*** sarob has joined #openstack-meeting21:03
nijabaso, any progress on this?21:03
*** asalkeld has quit IRC21:04
jd__I think we all more or less agree on the same things21:04
eglynndidn't we agree to go with specific units for counts at last week's meeting?21:04
jd__we did21:04
dhellmannI believe that's right21:04
nijabawe did, but we still had not closed on the discussion21:04
*** cp16net is now known as cp16net|away21:05
nijabaand I think we should write up the policy for future ref21:05
nijabaany volunteer to start a wiki page?21:05
*** xyang_ has quit IRC21:05
nijabaI guess not.  So I'll take the action21:06
jd__oh too many candidates, shall we vote?21:06
jd__:-D21:06
* eglynn confused on the all units should be integers|floats point21:06
Shengjie_homewiki for all the meters' units ?21:06
* dhellmann was distracted by the sound of crickets21:06
jd__dhellmann: lamest excuse *ever*21:07
nijaba#action nijaba to specify draft policy on wiki for units21:07
jd__eglynn: what confuses you?21:07
eglynnsurely units are just labels without an intrinsic numeric value?21:07
jd__I think nijaba meant volume21:07
eglynna-ha, I see21:08
eglynngot it ...21:08
dhellmannso we agree to count things using numbers? :-)21:08
nijabajd__: right21:08
*** cp16net|away is now known as cp16net21:08
nijabayeah!!!21:08
jd__i.e. you can't have 'foobar' 'meters'21:08
dhellmannsure21:08
eglynns/Units should always be [integer|floats]/the values associated with units should always be [integer|floats]/21:08
jd__dhellmann: that's how far we went :)21:08
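
The rule as agreed, in sketch form: the unit is just a label, and the measured volume attached to it must be numeric. The function and names below are illustrative, not the ceilometer API:

    def validate_sample(volume, unit):
        # Volumes must be integers or floats; 'foobar' meters are out.
        if not isinstance(volume, (int, float)):
            raise ValueError("volume must be a number, got %r" % (volume,))
        # Units are plain labels with no intrinsic numeric value.
        if not isinstance(unit, str):
            raise ValueError("unit must be a label string")
        return volume, unit
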
jd__next week debate, should we use base 10 ?21:09
nijabalol21:09
*** nealph has joined #openstack-meeting21:09
nijabaok, let's move on then21:09
danfloreaHi. I'm a new guy here. I can help with docs as I learn, so I'm happy to pitch in. Just don't want to take a lead on things I know nothing about. But I can assist if guided.21:09
nijaba#topic G3 blueprint review21:09
*** openstack changes topic to "G3 blueprint review (Meeting topic: Ceilometer)"21:09
dhellmannjd__: please wait until april 1 to propose the move to base 221:09
dhellmannhi, DanFlorea, welcome!21:10
nijabaso we have 28 unimplemented bp with grizzly as a target21:10
eglynnwelcome+121:10
danfloreathanks!21:10
eglynnslow progress on the synaps blueprints21:10
*** vipul is now known as vipul|away21:10
eglynnI've been dragged off doing some glance work for the last week or so21:10
jd__hi danflorea, welcome21:10
eglynnbut will be back on it from about Friday of this week21:10
nijabawe should start cleaning this up as feb 21 is only a month away21:11
eglynnnijaba: any response to your call for volunteers?21:11
nijabaif you know of stuff you won't be able to complete in the next month, please let me know21:11
nijabaeglynn: none21:11
dhellmannnijaba: is there an easy way to get that list of blueprints out of launchpad?21:11
*** vipul|away is now known as vipul21:11
eglynndanflorea: http://lists.openstack.org/pipermail/openstack-dev/2013-January/004398.html21:11
jd__nijaba: I think https://blueprints.launchpad.net/ceilometer/+spec/provide-meter-units is done now?21:12
nijabadhellmann: #link https://blueprints.launchpad.net/ceilometer/grizzly21:12
eglynndanflorea: something may catch your eye in the the above list of unassigned work21:12
danfloreaok, great21:12
nijabajd__: updated21:13
nijabaanything else that I should update?  any bp that I should push to H?21:14
dhellmannhttps://blueprints.launchpad.net/ceilometer/+spec/config-driven-notification-monitoring can be closed entirely21:14
dhellmannit is obsoleted by the multipublisher work21:14
dhellmannwhich is covered better elsewhere21:14
dhellmannI will close it21:14
dhellmannoh, actually, I can't change its status21:14
nijabadhellmann: done21:15
nealphnijaba: will you be updating that list as people sign up?21:15
nijabanealph: yes21:15
dhellmannnijaba: I suspect we should remove rpc-zeromq as well21:16
dhellmannis anyone actively working on non-libvirt-hw?21:16
Shengjie_homejd__: if I am targeting G3, can I change the milestone for the bp myself21:16
nijabadhellmann: push to H?21:16
*** dprince_ has quit IRC21:16
eglynn+1 on remove zmq21:16
jd__Shengjie_home: not sure, ask for someone to do it21:16
dhellmannnijaba: let's leave it unassigned21:16
dhellmannnijaba: I would rather have someone interested in zeromq step up to help21:16
nijabaShengjie_home: not if you are not a driver I am afraid. ping me, or one of the core devs21:16
*** belliott has left #openstack-meeting21:17
Shengjie_homenijaba: k, i will take it offline21:17
jtrandhellmann:  i might be interested in that one21:17
nijabadhellmann: done21:17
*** afazekas_ has quit IRC21:17
*** afazekas has quit IRC21:17
dhellmannjtran: for grizzly or later?21:17
jtransure, why not :)21:17
jtrangrizzly21:17
nijabajtran: let me know21:18
jtrani'll give you a more concrete answer in a couple days21:18
jtrannijaba: will do!21:18
nijabanp21:18
eglynnnon-libvirt-hw is more do-able now with the inspector abstraction, but it should be split into per-hypervisor BPs I think21:18
nijabaeglynn: it is already21:18
dhellmannnijaba: rpc-qpid is much like rpc-zeromq21:19
*** shadower has quit IRC21:19
nijabaeglynn: I'll let you decide on that one21:19
nijaba(qpid)21:19
eglynnnijaba: I'll see if I can hand it off to another red hatter21:20
*** dolphm has joined #openstack-meeting21:20
nijabaeglynn: I'll keep it as is for now then21:20
eglynnk21:20
nijabaanything else?21:20
nijabaI guess not. let's move on21:21
nijaba#topic Discuss downstream packaging efforts underway, identify potential problematic dependencies21:21
*** openstack changes topic to "Discuss downstream packaging efforts underway, identify potential problematic dependencies (Meeting topic: Ceilometer)"21:21
nijabaeglynn: the floor is yours21:21
eglynnk, so we're cranking up to attack the fedora/epel/rhel aspects of packaging ceilometer21:22
nijaba\o/21:22
eglynnbut I noticed that some prior work had been done on deb/ubuntu packaging21:22
nijabait has21:22
nijabazul and zigo21:22
*** alexpilotti has joined #openstack-meeting21:23
eglynnso I want at least to establish a conduit for sharing experience etc.21:23
dhellmann+121:23
eglynnone big question is whether the deb work pre-dated the flask->pecan switch21:23
nijababoth are pretty active on openstack in general, so they read the ml21:23
jd__eglynn: yes and it has been updated since too21:23
eglynncool21:24
eglynnthe other two things I had in mind21:24
eglynn1. the ceilo client will need to be packaged also21:24
nijabaI believe we have a deb for it21:24
eglynn2. we have some 'bleeding edge' dependencies, e.g. WSME21:24
jd__we have a .deb for python-ceilometerclient too21:25
eglynnby bleeding edge I mean that we rely on some unreleased fixes on trunk21:25
eglynnjd__: great21:25
nijabadhellmann: what's the status on this?21:25
jd__yeah, that's a problem even for Debian, we don't have that yet21:25
dhellmanneglynn: I'm on the hook to talk to Christophe about that already for our internal packaging at DH21:25
eglynndhellmann: so what would be a good cut-off point for that release?21:25
dhellmannI believe I owe him one doc patch, and then he and I need to figure out what to do about the auto-generated docs for the ceilometer api21:25
eglynnI think asalkeld has some recently proposed patches also21:26
dhellmanneglynn: for wsme? I wasn't aware of that21:26
eglynndhellmann: at least I thought that's what he said21:26
* eglynn may have misheard ...21:26
dhellmannyou might be thinking of pecan, which is in a similar situation21:26
* nijaba surprised asalkeld is not around21:26
dhellmannalthough the pecan maintainer is my manager, so I can get a release done in fairly short order21:27
eglynndhellmann: a-ha, yes, you could be right21:27
dhellmannwe work with the current version of pecan, but we wanted a config option to completely turn off content-type guessing based on url extension21:27
eglynnok21:27
dhellmannI have that on my todo list, too, but got a little wrapped up in internal stuff this week21:27
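
For context, the knob being discussed surfaced in pecan as the guess_content_type_from_ext application option; a minimal sketch of turning it off, assuming that option name and with a toy root controller standing in for the real one:

    import pecan
    from pecan import expose

    class RootController(object):
        @expose('json')
        def index(self):
            return {'hello': 'world'}

    # Disable content-type guessing from the URL extension, so that
    # /resource.json is treated as a resource name, not a format hint.
    app = pecan.make_app(RootController(),
                         guess_content_type_from_ext=False)
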
eglynnso apart from pecan and WSME, are there any other lurking banana skins in the current set of dependencies?21:28
eglynn(obviously a bunch of other wrinkles in the synaps stuff, but lets limit to trunk for now)21:28
nijabazul: around? anything worth noting?21:29
jd__eglynn: I don't think so21:29
eglynnwas the webob 1.2.x thing the major transitive dependency version mismatch for example?21:29
eglynnk21:29
jd__eglynn: maybe stevedore if you don't have this as RPM yet21:29
dhellmannyeah, that was the only cross-project dependency with an issue afaik21:29
eglynnjd__: k, I'll check out the stevedore aspect21:29
eglynngreat, sounds like we're in (slightly) better shape than I thought :)21:30
jd__eglynn: but it's a really trivial standard Python package so not a blocker21:30
* eglynn ever the optimist ;)21:30
jd__:-)21:30
dhellmannoh, yeah, stevedore is unlikely to be an rpm yet21:30
dhellmannlet me know if you have any issues packaging that, eglynn21:30
eglynncool, will do21:31
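
For anyone new to it, stevedore is a small library for loading setuptools entry-point plugins; a sketch of the pattern ceilometer relies on it for, with an illustrative namespace and driver name:

    from stevedore import driver

    # Load one plugin class registered under a setuptools entry point.
    mgr = driver.DriverManager(
        namespace='ceilometer.storage',  # illustrative namespace
        name='mongodb',                  # illustrative driver name
        invoke_on_load=False,            # return the class, don't call it
    )
    storage_cls = mgr.driver
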
eglynnI'll reach out to zul & zigo too to pick their brains21:31
*** shmcfarl_ has joined #openstack-meeting21:31
nijabaanything else on packaging?21:32
eglynnMing is just for testing, right?21:32
dhellmanneglynn: yes21:32
eglynni.e. not a runtime dependency in any form?21:32
eglynncool21:32
nijabashall we move on to open discussion?21:33
eglynnyep, thanks all for the info above ...21:33
nijaba#topic Open discussion21:33
*** openstack changes topic to "Open discussion (Meeting topic: Ceilometer)"21:33
nijabatopics, anyone?21:33
nijabahuh, looks like we are going to finish this meeting early21:34
nijabathat's unusual!21:34
jd__but that's how we are21:34
Shengjie_homenijaba: can u send us the link for units once you are done with that wiki21:34
Shengjie_homethanks21:34
eglynnquick, run before anyone thinks of anything ;)21:34
dhellmannoh, here's one: we need folks to look over the v2 flattening changeset21:34
nijabaShengjie_home: sure21:34
dhellmann#link https://review.openstack.org/#/c/19615/21:35
*** shmcfarl has quit IRC21:35
*** shmcfarl_ is now known as shmcfarl21:35
nijabathat's a large one21:35
*** lloydde has quit IRC21:35
dhellmannmost of the changes are updating the tests to reflect the new urls21:35
dhellmannbut we did rearrange and squash a good bit of code in the v2.py file as well21:36
jd__dhellmann: I'll re-look again tomorrow21:36
dhellmannjd__: thanks21:36
dhellmannI hesitate to +2 it myself because I worked with asalkeld on it21:36
eglynnI'll try to fit in a review tmrw too21:36
*** lloydde has joined #openstack-meeting21:37
nijabaanything else?21:38
* dhellmann shakes head21:38
eglynnnowt from me21:38
nijabaok, nice and short!  Thanks everyone21:38
nijaba#endmeeting21:38
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"21:38
openstackMeeting ended Wed Jan 16 21:38:58 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:39
openstackMinutes:        http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-01-16-21.00.html21:39
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-01-16-21.00.txt21:39
jd__thanks!21:39
openstackLog:            http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-01-16-21.00.log.html21:39
*** jtran has left #openstack-meeting21:39
*** bencherian has joined #openstack-meeting21:41
*** egallen has quit IRC21:41
*** gyee has joined #openstack-meeting21:42
*** danspraggins has quit IRC21:45
*** martine_ has quit IRC21:48
*** dwalleck has quit IRC21:49
*** dwalleck has joined #openstack-meeting21:52
*** dwalleck has quit IRC21:52
*** dwalleck has joined #openstack-meeting21:52
*** roaet is now known as roaet-away21:53
*** nati_ueno has left #openstack-meeting21:56
*** sarob has quit IRC21:56
*** markvan has quit IRC22:02
*** stevebaker has quit IRC22:14
*** topol has quit IRC22:15
*** jhopper has joined #openstack-meeting22:19
*** vipul is now known as vipul|away22:20
*** ijw has quit IRC22:20
*** ijw has joined #openstack-meeting22:21
*** koolhead17 has quit IRC22:22
*** maurosr has quit IRC22:24
*** spzala has quit IRC22:25
*** vipul|away is now known as vipul22:25
*** ijw has quit IRC22:26
*** ijw has joined #openstack-meeting22:27
*** maurosr has joined #openstack-meeting22:27
*** otherwiseguy has quit IRC22:28
*** jcmartin has joined #openstack-meeting22:30
*** Shengjie_home has quit IRC22:32
*** eglynn has quit IRC22:32
*** Shengjie_home has joined #openstack-meeting22:33
*** markwash_ has quit IRC22:34
*** maurosr has quit IRC22:34
*** roaet-away is now known as roaet22:39
*** APMelton12 has quit IRC22:39
*** metral has joined #openstack-meeting22:42
*** annegentle has quit IRC22:45
*** henrynash has quit IRC22:46
*** salv-orlando has joined #openstack-meeting22:46
*** otherwiseguy has joined #openstack-meeting22:48
*** stevebaker has joined #openstack-meeting22:49
*** mtreinish has quit IRC22:50
*** jcmartin has quit IRC22:52
*** shardy has quit IRC22:53
*** derekh has joined #openstack-meeting22:54
*** cdub_ has quit IRC22:55
*** derekh has quit IRC22:56
*** danwent has quit IRC23:01
*** john5223 has quit IRC23:01
*** novas0x2a|laptop has quit IRC23:02
*** maurosr has joined #openstack-meeting23:02
*** novas0x2a|laptop has joined #openstack-meeting23:02
*** danwent has joined #openstack-meeting23:02
*** asalkeld has joined #openstack-meeting23:03
*** danwent has quit IRC23:05
*** danwent has joined #openstack-meeting23:05
*** vipul is now known as vipul|away23:07
*** beagles has quit IRC23:08
*** shmcfarl has quit IRC23:12
*** stevebaker has quit IRC23:12
*** vipul|away is now known as vipul23:12
*** stevebaker has joined #openstack-meeting23:14
*** reed has quit IRC23:14
*** vkmc has quit IRC23:15
*** ctracey is now known as ctracey|away23:18
*** fnaval has quit IRC23:21
*** galthaus has quit IRC23:21
*** jcmartin has joined #openstack-meeting23:22
*** otherwiseguy has quit IRC23:24
*** cloudchimp has quit IRC23:28
*** ijw has quit IRC23:30
*** ijw has joined #openstack-meeting23:30
*** otherwiseguy has joined #openstack-meeting23:33
*** adam_g_ is now known as adam_g23:36
*** eglynn has joined #openstack-meeting23:38
*** eharney has quit IRC23:41
*** fnaval has joined #openstack-meeting23:44
*** fnaval has joined #openstack-meeting23:44
*** stevebaker has quit IRC23:45
*** danwent has quit IRC23:47
*** stevebaker has joined #openstack-meeting23:47
*** danwent has joined #openstack-meeting23:49
*** dosaboy has quit IRC23:51
