Wednesday, 2013-01-09

*** bearovercloud has left #openstack-meeting00:02
*** annegentle-itsme has joined #openstack-meeting00:06
*** bencherian has quit IRC00:07
*** annegentle-itsme has quit IRC00:08
*** b3nt_pin_ has joined #openstack-meeting00:11
*** vkmc has quit IRC00:12
*** dolphm has joined #openstack-meeting00:12
*** b3nt_pin has quit IRC00:14
*** b3nt_pin_ has quit IRC00:21
*** dolphm has quit IRC00:21
*** bencherian has joined #openstack-meeting00:22
*** sandywalsh has quit IRC00:22
*** Guest45149 is now known as annegentle00:26
*** annegentle is now known as Guest6805800:27
*** markwash has quit IRC00:29
*** EmilienM__ has left #openstack-meeting00:33
*** henrynash has joined #openstack-meeting00:34
*** jog0__ has joined #openstack-meeting00:35
*** dcramer_ has quit IRC00:36
*** dcramer_ has joined #openstack-meeting00:41
*** sarob_ has joined #openstack-meeting00:42
*** sarob_ has quit IRC00:43
*** sarob_ has joined #openstack-meeting00:43
*** sarob has quit IRC00:45
*** nati_ueno has quit IRC00:46
*** vishy is now known as vishy_zz00:48
*** henrynash has quit IRC00:49
*** MarkAtwood has quit IRC00:51
*** markmcclain has quit IRC00:53
*** ryanpetr_ has joined #openstack-meeting00:56
*** ryanpetrello has quit IRC00:59
*** ayoung has joined #openstack-meeting01:04
*** b3nt_pin has joined #openstack-meeting01:05
*** ryanpetr_ has quit IRC01:06
*** jog0__ has quit IRC01:07
*** kaganos has quit IRC01:08
*** skiarxon has quit IRC01:10
*** mrodden has quit IRC01:10
*** eglynn has quit IRC01:18
*** zg has left #openstack-meeting01:25
*** maurosr has quit IRC01:26
*** ryanpetrello has joined #openstack-meeting01:26
*** Guest68058 is now known as annegentle01:28
*** annegentle is now known as Guest2465101:28
*** vipul is now known as vipul|away01:29
*** gakott has joined #openstack-meeting01:29
*** vipul|away is now known as vipul01:30
*** garyk has quit IRC01:32
*** jog0 has quit IRC01:34
*** MarkAtwood has joined #openstack-meeting01:37
*** jcooley is now known as jcooley|away01:38
*** salv-orlando has quit IRC01:39
*** stevebake has quit IRC01:52
*** vishy_zz is now known as vishy01:58
*** yaguang has joined #openstack-meeting02:01
*** jcooley|away is now known as jcooley02:02
*** alexpilotti has quit IRC02:03
*** bencherian has quit IRC02:05
*** reed has joined #openstack-meeting02:05
*** anniec has quit IRC02:07
*** jaypipes has joined #openstack-meeting02:12
*** vishy is now known as vishy_zz02:13
*** jcooley is now known as jcooley|away02:17
*** jcooley|away is now known as jcooley02:21
*** saurabhs has left #openstack-meeting02:22
*** jgriffith has quit IRC02:25
*** jgriffith has joined #openstack-meeting02:25
*** dolphm has joined #openstack-meeting02:29
*** Guest24651 is now known as annegentle02:29
*** annegentle is now known as Guest6821302:30
*** adjohn has quit IRC02:34
*** yaguang has quit IRC02:42
*** stevebake has joined #openstack-meeting02:44
*** colinmcnamara has joined #openstack-meeting02:46
*** yaguang has joined #openstack-meeting02:48
*** anniec has joined #openstack-meeting02:51
*** anniec_ has joined #openstack-meeting02:54
*** yaguang has quit IRC02:56
*** anniec has quit IRC02:58
*** anniec_ is now known as anniec02:58
*** gyee has quit IRC03:02
*** yaguang has joined #openstack-meeting03:03
*** bencherian has joined #openstack-meeting03:06
*** danwent has quit IRC03:07
*** Mandell has quit IRC03:08
*** ijw has quit IRC03:14
*** anniec has quit IRC03:18
*** colinmcnamara has quit IRC03:24
*** obondarev has quit IRC03:28
*** colinmcnamara has joined #openstack-meeting03:29
*** Guest68213 is now known as annegentle03:31
*** annegentle is now known as Guest2190303:32
*** galthaus has joined #openstack-meeting03:35
*** juice has quit IRC03:37
*** obondarev has joined #openstack-meeting03:40
*** adjohn has joined #openstack-meeting03:44
*** dolphm has quit IRC03:47
*** adjohn has quit IRC03:49
*** colinmcnamara has quit IRC03:51
*** adjohn has joined #openstack-meeting03:58
*** hemna has quit IRC04:12
*** hemna has joined #openstack-meeting04:16
*** fungi has quit IRC04:21
*** fungi has joined #openstack-meeting04:22
*** galthaus has quit IRC04:30
*** Guest21903 is now known as annegentle04:36
*** annegentle is now known as Guest6746404:36
*** anniec has joined #openstack-meeting04:36
*** bencherian has quit IRC04:37
*** anniec_ has joined #openstack-meeting04:37
*** Mandell has joined #openstack-meeting04:38
*** anniec has quit IRC04:41
*** anniec_ is now known as anniec04:41
*** adjohn has quit IRC04:49
*** anniec has quit IRC04:54
*** obondarev has quit IRC04:59
*** obondarev has joined #openstack-meeting05:02
*** bencherian has joined #openstack-meeting05:04
*** envydd has joined #openstack-meeting05:15
*** obondarev has quit IRC05:24
*** obondarev has joined #openstack-meeting05:26
*** stevebake has quit IRC05:30
*** bencherian has quit IRC05:31
*** obondarev has quit IRC05:33
*** envydd has quit IRC05:33
*** Guest67464 is now known as annegentle05:37
*** annegentle is now known as Guest7673205:38
*** ozstacker has quit IRC05:40
*** ozstacker has joined #openstack-meeting05:40
*** obondarev has joined #openstack-meeting05:47
*** adjohn has joined #openstack-meeting05:59
*** obondarev has quit IRC06:03
*** adjohn has quit IRC06:03
*** obondarev has joined #openstack-meeting06:05
*** obondarev has quit IRC06:16
*** obondarev has joined #openstack-meeting06:18
*** ryanpetrello has quit IRC06:19
*** Guest76732 is now known as annegentle06:39
*** annegentle is now known as Guest8313506:39
*** obondarev has quit IRC06:41
*** stevebake has joined #openstack-meeting06:42
*** stevebake is now known as stevebaker06:42
*** stevebaker has quit IRC06:43
*** stevebaker has joined #openstack-meeting06:43
*** stevebaker is now known as stevebake06:43
*** stevebake has quit IRC06:44
*** stevebaker has joined #openstack-meeting06:44
*** almaisan-away is now known as al-maisan07:07
*** al-maisan is now known as almaisan-away07:09
*** mrunge has joined #openstack-meeting07:13
*** luis_fdez has quit IRC07:36
*** Guest83135 is now known as annegentle07:40
*** annegentle is now known as Guest1586707:41
*** dhellmann-afk has quit IRC07:42
*** rafaduran has joined #openstack-meeting07:45
*** MarkAtwood has quit IRC07:46
*** dolphm has joined #openstack-meeting07:49
*** EmilienM__ has joined #openstack-meeting07:51
*** nati_ueno has joined #openstack-meeting07:52
*** MarkAtwood has joined #openstack-meeting07:52
*** dolphm has quit IRC07:54
*** ttrifonov is now known as ttrifonov_zZzz07:54
*** ttrifonov_zZzz is now known as ttrifonov07:55
*** dhellmann has joined #openstack-meeting07:56
*** nati_ueno has quit IRC07:58
*** eglynn has joined #openstack-meeting08:02
*** skiarxon has joined #openstack-meeting08:06
*** EmilienM__ has quit IRC08:07
*** eglynn has quit IRC08:08
*** EmilienM__ has joined #openstack-meeting08:14
*** mrunge has quit IRC08:15
*** MarkAtwood has quit IRC08:19
*** mrunge has joined #openstack-meeting08:19
*** ndipanov has joined #openstack-meeting08:20
*** mnewby has quit IRC08:21
*** almaisan-away is now known as al-maisan08:26
*** adjohn has joined #openstack-meeting08:30
*** martine has quit IRC08:32
*** jhenner has joined #openstack-meeting08:38
*** eglynn has joined #openstack-meeting08:41
*** Guest15867 is now known as annegentle08:42
*** annegentle is now known as Guest6892308:42
*** K has joined #openstack-meeting08:47
*** K is now known as Guest4642808:47
*** EmilienM__ has quit IRC08:48
*** salv-orlando has joined #openstack-meeting08:49
*** EmilienM__ has joined #openstack-meeting08:49
*** jhenner has quit IRC08:53
*** shang_ has quit IRC08:56
*** derekh has joined #openstack-meeting09:01
*** adjohn has quit IRC09:04
*** al-maisan is now known as almaisan-away09:09
*** shang_ has joined #openstack-meeting09:09
*** darraghb has joined #openstack-meeting09:27
*** henrynash has joined #openstack-meeting09:32
*** _ozstacker_ has joined #openstack-meeting09:33
*** ewindisch_ has joined #openstack-meeting09:33
*** jgriffit1 has joined #openstack-meeting09:33
*** shang_ has quit IRC09:42
*** ozstacker has quit IRC09:42
*** jgriffith has quit IRC09:42
*** reed has quit IRC09:42
*** ayoung has quit IRC09:42
*** otherwiseguy has quit IRC09:42
*** ewindisch has quit IRC09:42
*** ewindisch_ is now known as ewindisch09:42
*** Guest68923 is now known as annegentle09:43
*** annegentle is now known as Guest8504609:44
*** otherwiseguy has joined #openstack-meeting09:49
*** gakott has quit IRC09:50
*** shang_ has joined #openstack-meeting09:50
*** ayoung has joined #openstack-meeting09:51
*** rafaduran has quit IRC09:53
*** rafaduran has joined #openstack-meeting09:54
*** rafaduran has left #openstack-meeting09:54
*** garyk has joined #openstack-meeting09:58
*** ijw has joined #openstack-meeting10:00
*** alexpilotti has joined #openstack-meeting10:01
*** eglynn has quit IRC10:02
*** yaguang has quit IRC10:03
*** eglynn has joined #openstack-meeting10:03
*** henrynash has quit IRC10:38
*** garyk has quit IRC10:43
*** Guest85046 is now known as annegentle10:45
*** annegentle is now known as Guest7971210:45
*** vkmc has joined #openstack-meeting10:54
*** garyk has joined #openstack-meeting10:59
*** reed has joined #openstack-meeting11:00
*** maurosr has joined #openstack-meeting11:20
*** shang_ has quit IRC11:30
*** ravikumar_hp has quit IRC11:31
*** dosaboy1 has joined #openstack-meeting11:37
*** dosaboy1 has left #openstack-meeting11:38
*** shang_ has joined #openstack-meeting11:43
*** Guest79712 is now known as annegentle11:46
*** annegentle is now known as Guest3492911:47
*** Barath has joined #openstack-meeting11:55
*** Barath has left #openstack-meeting12:00
*** yamahata_ has quit IRC12:00
*** Barath_ has joined #openstack-meeting12:03
*** b3nt_pin has quit IRC12:13
*** b3nt_pin has joined #openstack-meeting12:15
*** Barath_ has quit IRC12:18
*** ijw has quit IRC12:33
*** ijw has joined #openstack-meeting12:33
*** martine has joined #openstack-meeting12:33
*** aabes has joined #openstack-meeting12:34
*** jhenner has joined #openstack-meeting12:35
*** Guest34929 is now known as annegentle12:48
*** annegentle is now known as Guest6522812:48
*** mrunge has quit IRC12:52
*** vkmc has quit IRC13:00
*** markvoelker has joined #openstack-meeting13:02
*** almaisan-away is now known as al-maisan13:04
*** henrynash has joined #openstack-meeting13:08
*** garyk has quit IRC13:08
*** dcramer_ has quit IRC13:12
*** vkmc has joined #openstack-meeting13:14
*** dolphm has joined #openstack-meeting13:15
*** garyk has joined #openstack-meeting13:18
*** dprince has joined #openstack-meeting13:21
*** henrynash has quit IRC13:27
*** radez_g0n3 is now known as radez13:30
*** b3nt_pin has quit IRC13:31
*** edgarstp has joined #openstack-meeting13:34
*** b3nt_pin has joined #openstack-meeting13:35
*** Guest65228 is now known as annegentle13:49
*** annegentle is now known as Guest2878213:50
*** henrynash has joined #openstack-meeting13:50
*** henrynash_ has joined #openstack-meeting13:55
*** henrynash has quit IRC13:57
*** markvan has joined #openstack-meeting13:58
*** henrynash_ has quit IRC13:59
*** annegentle_itsme has joined #openstack-meeting14:01
*** b3nt_pin has quit IRC14:02
*** eharney has joined #openstack-meeting14:10
*** eharney has quit IRC14:10
*** eharney has joined #openstack-meeting14:10
*** eglynn is now known as hungry-eglynn14:11
*** markvoelker has quit IRC14:13
*** mrodden has joined #openstack-meeting14:22
*** joesavak has joined #openstack-meeting14:26
*** lbragstad has joined #openstack-meeting14:28
*** ctracey has quit IRC14:29
*** frankm has joined #openstack-meeting14:30
*** pschaef has joined #openstack-meeting14:31
*** huats_ is now known as huats14:32
*** ctracey has joined #openstack-meeting14:32
*** hungry-eglynn is now known as eglynn14:37
*** eharney has quit IRC14:42
*** sandywalsh has joined #openstack-meeting14:45
*** ayoung is now known as ayoung-afk14:46
*** markvan has quit IRC14:47
*** mattray has joined #openstack-meeting14:49
*** eharney has joined #openstack-meeting14:49
*** eharney has joined #openstack-meeting14:49
*** woodspa has joined #openstack-meeting14:50
*** Guest28782 is now known as annegentle14:51
*** annegentle is now known as Guest1088414:51
*** mtreinish has joined #openstack-meeting14:56
*** pschaef has quit IRC15:00
*** EmilienM__ has quit IRC15:01
*** Divakar has joined #openstack-meeting15:03
*** roaet is now known as roaet-away15:05
*** Barath_ has joined #openstack-meeting15:06
<Divakar> Hi Barath  15:07
<Barath_> Hi divakar  15:07
*** ryanpetrello has joined #openstack-meeting15:08
*** Divakar has left #openstack-meeting15:08
*** markvan has joined #openstack-meeting15:12
*** lbragstad has quit IRC15:13
*** annegentle_itsme has quit IRC15:14
*** EmilienM__ has joined #openstack-meeting15:16
*** markmcclain has joined #openstack-meeting15:17
*** markvan has quit IRC15:20
*** markvan has joined #openstack-meeting15:21
*** markvan has quit IRC15:26
*** darraghb has quit IRC15:28
*** darraghb has joined #openstack-meeting15:29
*** pschaef has joined #openstack-meeting15:30
*** Barath_ has quit IRC15:30
*** darraghb has quit IRC15:32
*** markvan has joined #openstack-meeting15:34
*** Mandell has quit IRC15:34
*** darraghb has joined #openstack-meeting15:34
*** KurtMartin has joined #openstack-meeting15:34
*** roaet-away is now known as roaet15:37
*** lbragstad has joined #openstack-meeting15:42
*** mnewby has joined #openstack-meeting15:43
*** annegentle_itsme has joined #openstack-meeting15:43
*** jbrogan has joined #openstack-meeting15:48
*** Guest10884 is now known as annegentle15:52
*** annegentle is now known as Guest1670215:53
*** xyang_ has joined #openstack-meeting15:55
*** avishay has joined #openstack-meeting15:58
*** smulcahy has joined #openstack-meeting15:58
*** bswartz has joined #openstack-meeting15:59
<jgriffit1> folks around for Cinder meeting?  16:01
<avishay> yessir  16:01
<KurtMartin> yep  16:01
<smulcahy> yup  16:01
<jgriffit1> bswartz: ?  16:01
<bswartz> hi  16:01
<xyang_> Yes  16:01
<jgriffit1> Looks like aka quaurum to me  16:01
<jgriffit1> #startmeeting cinder  16:02
<openstack> Meeting started Wed Jan  9 16:02:00 2013 UTC.  The chair is jgriffit1. Information about MeetBot at http://wiki.debian.org/MeetBot.  16:02
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  16:02
*** openstack changes topic to " (Meeting topic: cinder)"16:02
<openstack> The meeting name has been set to 'cinder'  16:02
<jgriffit1> morning everyone  16:02
<bswartz> good morning!  16:02
<avishay> good evening :)  16:02
<jgriffit1> avishay: :)  16:02
<xyang_> morning  16:02
<KurtMartin> good morning  16:02
<jgriffit1> first topic....  16:02
<jgriffit1> #topic G2  16:03
*** openstack changes topic to "G2 (Meeting topic: cinder)"16:03
*** rushiagr has joined #openstack-meeting16:03
*** thingee has joined #openstack-meeting16:03
<jgriffit1> First off thanks everyone for all the hard work to get G2 out  16:03
*** winston-d has joined #openstack-meeting16:03
<jgriffit1> All targetted items made it except my metadata patch :(  16:03
<jgriffit1> There's something funky in Nose runner that we can't figure out and rather than continue beating heads against the wall we're just moving forward  16:04
<jgriffit1> congratulations to winston-d !!!  16:04
<jgriffit1> We now have a filter scheduler in Cinder!!!  16:04
<winston-d> :)  16:04
<avishay> woohoo!  16:04
<bswartz> yay  16:04
<xyang_> wonderful  16:04
<avishay> winston-d: congrats - great job!  16:04
<DuncanT> Woo!  16:04
<thingee> winston-d: yes congrats  16:04
<KurtMartin> nice we can use that  16:05
<rushiagr> thats one big achievement! great!  16:05
*** guitarzan has joined #openstack-meeting16:05
<jgriffit1> So just a recap of Grizzly so far, we now have a V2 API (thanks to thingee) and we now have the filter scheduler  16:05
<winston-d> thx guys, couldn't have done that without your support  16:05
<jgriffit1> On top of that we've added I lost track of how many drivers with more to come  16:06
*** hemna is now known as hemna__16:06
<jgriffit1> anyway... Grizzly is going great so far thanks to everyone!  16:06
<jgriffit1> I don't really have anything else to say on G2... now it's on to G3  16:06
<jgriffit1> anybody have anything they want to hit on G2 wrap up ?  16:07
<jgriffit1> Ok  16:08
*** alexpilotti has quit IRC16:08
<jgriffit1> bswartz, wanna share your customer feedback regarding Openstack drivers?  16:08
<bswartz> sure  16:08
<jgriffit1> #topic feedback from bswartz  16:08
*** openstack changes topic to "feedback from bswartz (Meeting topic: cinder)"16:08
<bswartz> so many of you may have noticed that netapp has submitted a bunch more driver changes  16:09
*** sarob_ has quit IRC16:09
*** rushiagr has quit IRC16:09
<bswartz> we started our original driver design in the diablo timeframe, and out vision was a single instance of cinder (actually nova-volume) talking to NetApp management software which managed hundreds of storage controllers  16:10
*** rushiagr has joined #openstack-meeting16:10
<bswartz> since NetApp had already sunk hundreds of man years into management software it seemed dumb not to take advantage of it  16:10
<bswartz> but the feedback we've been getting is that customers don't like middleware and they don't like a single instance of cinder  16:11
<bswartz> this probably won't surprise many of you  16:11
<bswartz> since (nearly?) all of the existing drivers manage a single storage controller per instance of cinder  16:11
<creiht> bswartz: but that's what we did for lunr  16:12
*** adjohn has joined #openstack-meeting16:12
<guitarzan> creiht: beat me to it  16:12
<winston-d> for what reason they dont like single instance of cinder? HA?  16:12
<creiht> single HA instance of cinder talking to our lunr backend  16:12
<creiht> guitarzan: sorry to steal your thunder :)  16:12
<bswartz> HA is one reason  16:12
<bswartz> scalability is another  16:13
<creiht> Those really should be orthogonal  16:13
<bswartz> a single instance of cinder will always have limits  16:13
<jgriffit1> hmm... I've always struggled with this, especially since the cinder node is really just a proxy anyway  16:13
<jgriffit1> and we don't have any HA across nodes *yet* :)  16:13
<jgriffit1> anyway...  16:13
<bswartz> the limits may be high, but it's still desirable to be able to overcome those limits with more hardware  16:14
<guitarzan> there's a big difference between single instance of cinder and cinder running on every storage node  16:14
<creiht> bswartz: you can get ha with cinder by running however many cinder api nodes you want, all talking to your backend  16:14
<bswartz> well, no HA so much as "no single point of failure"  16:14
<winston-d> yeah, agree with john. is there any number to tell for the limits?  16:15
<bswartz> if you have a single cinder instance and it goes up in smoke, then you're dead -- multiple instances addresses that  16:15
<DuncanT> Facts and customer's views are not always related ;-)  16:15
<jgriffit1> bswartz: yep, we neeed a mirrored cinder / db option :)  16:15
<bswartz> DuncanT: agree  16:15
<jgriffit1> DuncanT: amen brother  16:15
<jgriffit1> bswartz: so this is good info though  16:15
*** cp16net is now known as cp16net|away16:16
<bswartz> so anyways, we getting on the bandwagon of one cinder instance per storage controller  16:16
<creiht> bswartz: but that's the point, is if you have a driver then you can run load balanced cinder instances that talk to the same backend  16:16
<creiht> that's what we do with lunr  16:16
<bswartz> and the new scheduler in grizzly will make that option a lot cooler  16:16
<DuncanT> That's also what we do  16:16
<bswartz> creiht: we are also pursuing that approach  16:16
<bswartz> creiht: however the new drivers that talk directly to the hardware is lower hanging fruit  16:17
<DuncanT> I can't find our  16:17
<DuncanT> Sorry, ignore that  16:17
<bswartz> there are other reason customers take issue with out management software -- and we're working on addressing those  16:17
<bswartz> s/out/our/  16:17
<bswartz> anyways, I just wanted to give some background on what's going on with out drivers, and spur discussion  16:18
<jgriffit1> bswartz: so bottom line, most of the changes that are in the queue are to address whcih aspect?  16:18
<bswartz> I didn't understand the comments about lunr  16:18
<creiht> bswartz: so lunr is a storage system we developed at rackspace that has its own api front end  16:18
<bswartz> jgriffit1: the submitted changes add new driver classes that allow cinder to talk directly with our hardware with no middleware installed  16:19
<creiht> cinder sits in front, and our driver passes the calls on to the lunr apis  16:19
<bswartz> jgriffit1: our existing driver require managmenet software to be installed to work at all  16:19
<jgriffit1> bswartz: ahhh... got ya  16:19
*** Gordonz has joined #openstack-meeting16:19
<bswartz> creiht: how do you handle elimination of single points of failure and scaling limitations?  16:20
<creiht> traditional methods  16:20
* creiht looks for the diagram  16:20
*** dolphm has quit IRC16:21
*** Gordonz has quit IRC16:21
*** woodspa_ has joined #openstack-meeting16:21
<DuncanT> We solve SPoF via HA database, HA rabbit and multiple instances of cinder-api & cinder-volume  16:21
<guitarzan> we do the same, except we aren't using rabbit at all  16:21
<DuncanT> (All talking to a backend via apis in a similar manner to lunr)  16:21
<jgriffit1> so the good thing is I don't think bswartz is necessarily disagreeing with creiht or anybody else on how to achieve this  16:21
*** woodspa has quit IRC16:21
*** woodspa_ is now known as woodspa16:22
<bswartz> DuncanT: so multiple drivers [can] talk to the same hardware?  16:22
*** Gordonz has joined #openstack-meeting16:22
<guitarzan> absolutely  16:22
<DuncanT> bswartz: yup  16:22
<creiht> bswartz: http://devops.rackspace.com/cbs-api.html#.UO2ZJeAVUSg  16:22
<creiht> that has a diagram  16:22
* jgriffit1 shuts up now as it seems he may be wrong  16:22
<creiht> where the volume api box is basically several instances of cinder each with the lunr driver that talks to the lunr api  16:22
<bswartz> creiht: thanks  16:23
<bswartz> jgriffit1: no I don't think there is any disagreement, just a lot of different ideas for solving these problems  16:23
<jgriffit1> :)  16:23
<DuncanT> There are a couple of places (snapshots for one) where cinder annoyingly insists on only talking to one specific cinder-volume instance, but they are few and fixable  16:23
<jgriffit1> DuncanT: avishay is working on it :)  16:24
<DuncanT> jgriffit1: Yup  16:24
<creiht> well and our driver also isn't a traditional driver  16:24
<avishay> jgriffit1: I am? :/  16:24
*** dolphm has joined #openstack-meeting16:24
<bswartz> lol  16:24
<jgriffit1> avishay: :)  16:24
<jgriffit1> avishay: I didn't tell you yet?  16:24
<avishay> jgriffit1: ...what am I missing?  16:25
<DuncanT> lol  16:25
<winston-d> lol  16:25
*** colinmcnamara has joined #openstack-meeting16:25
<DuncanT> It'll get done anyway... several people interested  16:25
<jgriffit1> avishay: so your LVM work and the stuff we talked about last night regarding clones etc will be usable for this  16:25
<jgriffit1> anyway..  16:25
<jgriffit1> yeah... sorry to derail  16:25
<avishay> jgriffit1: yes, it's a start, but not tackling the whole issue :)  16:25
*** jgriffit1 has quit IRC16:26
*** jgriffith has joined #openstack-meeting16:26
<avishay> mutiny?  16:26
<jgriffith> haha  16:26
<avishay> jgriffith: sorry, thought somebody offed you ;)  16:26
<jgriffith> Ok... so bswartz basicly your changes are to behave more like the *rest of us* :)  16:26
<jgriffith> bswartz: You have been assymilated :)  16:27
<jgriffith> bswartz: just kidding  16:27
<bswartz> jgriffith: yes, it's been a learning process for us  16:27
<jgriffith> but in a nut shell, these changes cut the middleware out  16:27
*** ayoung-afk is now known as ayoung16:27
<creiht> joined the darkside  16:27
<creiht> :)  16:27
<jgriffith> Ok... awesome  16:27
<creiht> or maybe we are the darkside :)  16:27
<bswartz> we're not giving up on our loftier ideas, but in the mean time we're conforming  16:27
<jgriffith> creiht: hehe  16:27
* jgriffith cries  16:27
<guitarzan> which ideas are the lofty ones? I'm curious what seems more ideal to folks  16:28
<jgriffith> bswartz: make you a deal, pick one or the other :)  16:28
<jgriffith> guitarzan: NFS  16:28
<jgriffith> CIFS to be more specific in Cinder  16:28
<jgriffith> bswartz: can provide more detail  16:28
*** sandywalsh has joined #openstack-meeting16:28
<guitarzan> I mean in regards to this HA cinder to external backend question  16:28
<bswartz> jgriffith: I'm not sure what you're asking  16:29
<bswartz> the NAS extensions are completely separate from this driver discussion  16:29
<jgriffith> I assumed that's what you meant by "loftier" goals  16:29
<jgriffith> so what "lofty" goal are you talking about then?  16:29
<jgriffith> Please share now rather than later with a 5K line patch :)  16:30
<bswartz> no, loftier means that we're leaving the original drivers in there and we have plans to enhance those so customers hate them less  16:30
* jgriffith is now really confused  16:30
<bswartz> so the netapp.py and netapp_nfs.py files are getting large  16:30
*** woodspa has left #openstack-meeting16:31
<bswartz> jgriffith: we talked about reworking the NAS enhancements so that the code would be in cinder, but would run as a separate service  16:31
<bswartz> that rework is being done  16:31
<avishay> jgriffith: the new patch doesn't replace the old drivers that access the middleware, just add the option of direct access. the lofty goal is to improve the middleware so that customers won't hate it. bswartz - right?  16:31
<jgriffith> Ok.. I got it  16:31
<bswartz> avishay: yes  16:31
<jgriffith> sorry  16:31
*** woodspa has joined #openstack-meeting16:31
<jgriffith> Why do you need both?  16:32
<bswartz> addresses 2 different customer requirements  16:32
<jgriffith> really?  16:32
<bswartz> on is for blocks, other is for CIFS/NFS storage  16:32
<avishay> the direct access vs. the middleware access?  16:32
<bswartz> we have lots of different drivers for supporting blocks  16:32
<jgriffith> alright, I'm out  16:32
<jgriffith> avishay: yes :)  16:33
<bswartz> sorry this has gotten confusing and out of hand  16:33
<DuncanT> Yup  16:33
<jgriffith> LOL.. yes, and unfortunately it's likely my fault  16:33
* bswartz remembers not to volunteer to speak at these things  16:33
<avishay> bswartz: the question is why you need two options ( direct access vs. the middleware access) - not related to the NFS driver  16:33
<jgriffith> avishay: thank you!  16:33
<jgriffith> avishay: from now on you just speak for me please  16:33
<avishay> jgriffith: done.  16:34
<jgriffith> :)  16:34
<avishay> ;)  16:34
<xyang_> I agree we should give customer more options, with direct and middleware access  16:34
<bswartz> avishay: regarding our blocks drivers, we're leaving the old ones in, and we're adding the direct drivers  16:34
<jgriffith> I disagree, but that's your business I suppose  16:34
<winston-d> xyang_: do you plan to do similar thing for EMC driver?  16:34
<bswartz> avishay: long term we will deprecate one or the other, depending one which works better in practice  16:34
<jgriffith> cool  16:35
<avishay> OK.  I guess all this doesn't affect the "rest of us" anyway.  16:35
*** vishy_zz is now known as vishy16:35
<bswartz> avishay: the thing that affects the rest of you is the NAS enhancements, and jgriffith made his opinions clear on that topic  16:35
<DuncanT> Other than monster reviews landing  16:36
<jgriffith> DuncanT: +1  16:36
<bswartz> avishay: so our agreement is to resubmit those changes as a separate service inside cinder, to minimize impact on existing code  16:36
*** sandywalsh has quit IRC16:36
*** adjohn has quit IRC16:36
<avishay> bswartz: yes i know, that's fine  16:36
<bswartz> avishay: the changes will be large, but the overlap with existing code will be small  16:36
<bswartz> that is targetted for G3  16:36
<DuncanT> Ah ha, that makes sense  16:37
<bswartz> you've already seens the essence of those changes with our previous submission, the difference for G3 is that we're refactoring it  16:37
<bswartz> seen  16:37
<bswartz> okay I'm done  16:38
<bswartz> sorry to take up half the meeting  16:38
<jgriffith> bswartz: no problem  16:38
<DuncanT> Things are now reasonably clear, thanks for that  16:38
<winston-d> bswartz: thx for sharing.  16:38
<avishay> yup, thank you  16:39
<winston-d> i've decided to add stress test for single cinder volume instance to see what the limit is in our lab. :)  16:39
*** sandywalsh has joined #openstack-meeting16:39
<jgriffith> bswartz: yeah, appreciate the explanation  16:40
<jgriffith> winston-d: cool  16:40
*** vishy is now known as vishy_zz16:41
*** pschaef has quit IRC16:41
*** cp16net|away is now known as cp16net16:41
<jgriffith> alright, anybody else have anything?  16:42
<xyang_> How is FC development?  16:42
<xyang_> Will it be submitted soon  16:42
<KurtMartin> xyang_, plan is to get the nova side changes submitted next week for review  16:43
*** pschaef has joined #openstack-meeting16:43
<DuncanT> Volume backup stuck in corporate legal hell, submission any day now[tm]  16:43
*** jhenner has quit IRC16:43
<KurtMartin> we resolved the one issue we had with detach and have HP's blessing :)  16:44
<winston-d> creiht: where's clayg?  haven't seen him for very long time  16:45
*** maoy has joined #openstack-meeting16:45
<winston-d> clayg also sth on volume backup, if my memory serves me right  16:46
<jgriffith> :)  16:48
<jgriffith> ok, DuncanT beat up lawyers  16:48
<DuncanT> I wonder if 'My PTL made me do it!' will stand up in court?  16:49
<jgriffith> I'm going to try and figure out why tempest is randomly failing to delet volumes in it's testing  16:49
<jgriffith> anybody else looking for something to do today that would be a great thing to work on :)  16:49
<jgriffith> Sure.. why not!  16:49
<creiht> winston-d: he has abondoned us :(  16:49
<creiht> winston-d: he went to work for swiftstack  16:50
<creiht> on swift stuff  16:50
<winston-d> creiht: oh, ouch.  16:50
*** Mr_T has joined #openstack-meeting16:50
<creiht> winston-d: sorry was being a little silly when I said abandoned :)  16:51
*** salv-orlando has quit IRC16:51
<creiht> and I don't have much room to talk, as I'm back working on swift stuff as well  16:51
*** salv-orlando has joined #openstack-meeting16:51
<avishay> Things we're working on for Grizzly is generic iSCSI copy volume<->image (the LVM factoring is part of that), and driver updates of course  16:52
<winston-d> creiht: :) so who's new guy in rackspace for cinder/lunr now?  16:52
<creiht> guitarzan:  16:52
<creiht> winston-d: -^  16:52
*** eharney has quit IRC16:52
<winston-d> k. good to know.  16:52
<guitarzan> winston-d: creiht has also abandoned us  16:53
*** Guest16702 is now known as annegentle16:54
*** bobba has joined #openstack-meeting16:54
<creiht> well, I haven't left the channel yet :)  16:54
<jgriffith> :)  16:54
*** annegentle is now known as Guest1499316:54
<winston-d> i thought block storage is more challenging that obj. :) i still think so.  16:54
<creiht> different challenges  16:55
<jgriffith> Ok folks... kind of an all over meeting today, sorry for that  16:55
<jgriffith> but we're about out of time...  anything pressing?  16:55
<creiht> jgriffith: doh... sorry :(  16:55
<jgriffith> we can all still chat in openstack-cinder :)  16:55
<jgriffith> creiht: No... I feel bad cutting short  16:55
<creiht> I didn't realize I was in the meeting channel :)  16:55
<jgriffith> creiht: I've doing 4 things at once and I have been trying to be polite for john and the xen meeting that follows  16:55
<jgriffith> creiht: HA!!!  16:56
<jgriffith> creiht: Awesome!  16:56
<winston-d> jgriffith: xyang_ DuncanT bswartz please remember to update your driver to provide capabilities/status for scheduler  16:56
*** eharney has joined #openstack-meeting16:56
*** eharney has quit IRC16:56
*** eharney has joined #openstack-meeting16:56
* creiht is in too many channels  16:56
<jgriffith> winston-d: yeppers  16:56
<DuncanT> winston-d: Yup  16:56
<bswartz> winston-d: got it  16:56
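[editor's aside] winston-d's reminder above concerns the new filter scheduler: it can only place volumes on backends whose drivers report their capabilities and status. A rough sketch of what that reporting looks like in a Grizzly-era Cinder driver, via a `get_volume_stats()` method; the `ExampleDriver` class, backend name, and capacity numbers here are hypothetical, not any vendor's real code:

```python
# Hedged sketch of driver capability/status reporting for the Cinder
# filter scheduler. The class and all values are illustrative only.
class ExampleDriver:
    """Hypothetical volume driver that reports stats to the scheduler."""

    def __init__(self):
        self._stats = {}

    def get_volume_stats(self, refresh=False):
        # The scheduler periodically asks for this dict and uses it to
        # filter and weigh backends when placing a new volume.
        if refresh or not self._stats:
            self._stats = {
                'volume_backend_name': 'example_backend',  # hypothetical
                'vendor_name': 'Example',                  # hypothetical
                'driver_version': '1.0',
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 100,
                'free_capacity_gb': 42,
                'reserved_percentage': 0,
                'QoS_support': False,
            }
        return self._stats
```

A driver that never fills in these fields is effectively invisible to capacity-based scheduling, which is the point of winston-d's request.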
<jgriffith> speaking of which... please review my driver patches, and any other patches in the queue  16:56
<jgriffith> catch ya'll later  16:56
<jgriffith> #endmeeting  16:57
<avishay> bye all!  16:57
<xyang_> bye  16:57
<jgriffith> # end meeting  16:57
<jgriffith> grrrr  16:57
<jgriffith> #end meeting  16:57
*** matelakat has joined #openstack-meeting16:58
<jgriffith> hrmm???  16:58
<rushiagr> jgriffith: you started meeting with nick jgriffit1. Is it the reason for this glitch?  16:58
*** jgriffith is now known as jgriffith116:58
<jgriffith1> #endmeeting  16:58
<winston-d> that's quick  16:58
<avishay> without the h  16:58
<rushiagr> nope, it was without a 'h'  16:58
*** jgriffith1 is now known as jgriffit116:58
<jgriffit1> #endmeeting  16:59
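[editor's aside] The string of failed `#endmeeting` attempts above comes down to MeetBot only honoring the command from the meeting's chair, and this meeting was started under the nick `jgriffit1` (no 'h'). A toy model of that check, purely illustrative and not MeetBot's actual code:

```python
# Toy model of MeetBot's chair check (illustrative only): #endmeeting
# is ignored unless the caller's nick matches a recorded chair.
def can_end_meeting(caller, chairs):
    """Return True if `caller` is allowed to end the meeting."""
    return caller in chairs

chairs = {"jgriffit1"}  # the nick that issued #startmeeting

assert not can_end_meeting("jgriffith", chairs)  # the 'h' nick fails
assert can_end_meeting("jgriffit1", chairs)      # renaming back works
```

Once jgriffith renamed back to the chair nick, the very next `#endmeeting` succeeded, as the bot output below shows.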
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"16:59
openstack: Meeting ended Wed Jan  9 16:59:05 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:59
openstack: Minutes:        http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-09-16.02.html16:59
openstack: Minutes (text): http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-09-16.02.txt16:59
jgriffit1: phewww16:59
openstack: Log:            http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-01-09-16.02.log.html16:59
bobba: haha love it16:59
jgriffit1: thanks guys16:59
rushiagr: bazinga!16:59
avishay: rushiagr: good job debugging :)16:59
*** bobba is now known as BobBall16:59
avishay: bye all16:59
jgriffit1: stupid irc nickserv16:59
winston-d: bye16:59
*** avishay has quit IRC16:59
rushiagr: avishay: :)16:59
rushiagr: bye all16:59
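The glitch rushiagr debugged above comes down to MeetBot only honouring #endmeeting from the nick that issued #startmeeting — the meeting had been started as jgriffit1 (no 'h'), so jgriffith's attempts were silently ignored. A toy sketch of that chair rule (a hypothetical simplification, not MeetBot's actual code):

```python
class Meeting:
    """Toy model of MeetBot's chair rule: only the nick that ran
    #startmeeting (the chair) may run #endmeeting."""

    def __init__(self):
        self.chair = None
        self.active = False

    def handle(self, nick, line):
        if line.startswith("#startmeeting") and not self.active:
            self.chair = nick        # the chair is whatever nick starts it
            self.active = True
            return "meeting started"
        if line == "#endmeeting" and self.active:
            if nick != self.chair:   # "jgriffith" != "jgriffit1": ignored
                return None
            self.active = False
            return "meeting ended"
        return None
```

With this model, `handle("jgriffith", "#endmeeting")` returns None after a `handle("jgriffit1", "#startmeeting cinder")`, which is exactly the silence seen in the log.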
*** cp16net is now known as cp16net|away16:59
*** markvan has quit IRC16:59
*** smulcahy has quit IRC17:00
*** colinmcnamara has quit IRC17:00
*** johngarbutt has joined #openstack-meeting17:00
*** vkmc has quit IRC17:00
johngarbutt: #startmeeting XenAPI17:01
openstack: Meeting started Wed Jan  9 17:01:01 2013 UTC.  The chair is johngarbutt. Information about MeetBot at http://wiki.debian.org/MeetBot.17:01
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:01
*** openstack changes topic to " (Meeting topic: XenAPI)"17:01
openstack: The meeting name has been set to 'xenapi'17:01
johngarbutt: #topic Blueprints17:01
*** cp16net|away is now known as cp16net17:01
*** openstack changes topic to "Blueprints (Meeting topic: XenAPI)"17:01
johngarbutt: Hi everyone!17:01
pvo: hello17:01
guitarzan: hello17:01
*** jsavak has joined #openstack-meeting17:01
BobBall: morning!17:02
johngarbutt: cool, so before we start17:02
*** bswartz has left #openstack-meeting17:02
*** rushiagr has left #openstack-meeting17:02
johngarbutt: has anyone got things they would like to cover?17:02
johngarbutt: lets build an agenda17:02
matelakat: Hi17:02
matelakat: Okay, so cinder - xenapinfs - copy from image.17:03
*** gakott has joined #openstack-meeting17:03
matelakat: idea: use the same code as in iscsi.17:03
matelakat: Just attach the volume to the cinder box.17:03
pvo: I had a few questions (not really agenda items) that I wanted to ask at the end17:03
johngarbutt: OK, lets start with that blueprint17:03
johngarbutt: pvo: cool17:03
*** bencherian has joined #openstack-meeting17:04
*** joesavak has quit IRC17:04
westmaas: hello17:04
matelakat: I amended the xenapi-storage-manager-nfs blueprint to include that idea. #link https://blueprints.launchpad.net/cinder/+spec/xenapi-storage-manager-nfs17:05
*** joesavak has joined #openstack-meeting17:05
*** shengjie has joined #openstack-meeting17:05
matelakat: After that, we'll have a really complete volume driver for xenapi.17:05
*** jgriffit1 is now known as jgriffith17:05
johngarbutt: matelakat: that one is marked as finished, might want a new one for the rest17:05
rainya: pvo, thanks for reminder17:05
matelakat: Oh, ok.17:06
*** garyk has quit IRC17:06
BobBall: Do #link's need to be put at the start of the text for the bot to recognise them?17:06
matelakat: will start a new one.17:06
toanster: hello17:06
johngarbutt: matelakat: any major questions?17:06
johngarbutt: pending reviews etc17:06
matelakat: oh, yes.17:06
*** al-maisan is now known as almaisan-away17:06
matelakat: #link snapshot-support-for-xenapinfs https://review.openstack.org/#/c/18780/17:07
*** jsavak has quit IRC17:07
johngarbutt: OK17:07
*** vishy_zz is now known as vishy17:07
johngarbutt: any more blueprint updates or pending reviews or burning questions about that kind of thing?17:08
matelakat: no.17:08
*** colinmcnamara has joined #openstack-meeting17:08
johngarbutt: I know there is Quantum OVS, if anyone feels they could take a look17:08
*** tongli has joined #openstack-meeting17:09
johngarbutt: #link https://review.openstack.org/#/c/15022/17:09
*** sandywalsh has quit IRC17:09
*** thingee has left #openstack-meeting17:09
pvo: johngarbutt: got your note. Will take a peek.17:09
johngarbutt: pvo: thank you !17:10
johngarbutt: any more blueprint things? I saw config drive was coming along17:10
shengjie: hi, I have a quick update on the blueprint hbase-storage-backend17:10
*** ryanpetrello has quit IRC17:11
johngarbutt: shengjie: fire away, have you got a link?17:11
*** ryanpetrello has joined #openstack-meeting17:11
shengjie: https://blueprints.launchpad.net/ceilometer/+spec/hbase-storage-backend17:11
shengjie: we've pretty much finished the 1st phase implementation, will have it committed for review soon.17:12
shengjie: but to have better performance, hbase will need extra 2ndary indices17:12
johngarbutt: hang on, sorry, I am probably missing something17:12
johngarbutt: does that affect the XenAPI support in Ceilometer?17:12
BobBall: The Ceilometer meeting is meant to be at 15:00 UTC17:13
BobBall: on Thursday17:13
dhellmann: the ceilometer xenapi blueprint link is https://blueprints.launchpad.net/ceilometer/+spec/xenapi-support17:13
johngarbutt: my bad, I meant XenAPI related blueprints in previous bit17:13
shengjie: sorry, my bad17:14
johngarbutt: thanks, no progress reported there :-(17:14
johngarbutt: one other change I noticed, text console support for XenAPI from Internap17:14
johngarbutt: #link https://review.openstack.org/#/c/17959/17:14
johngarbutt: might interest people using horizon, it looks bad without this support17:15
*** vkmc has joined #openstack-meeting17:15
*** colinmcnamara has quit IRC17:15
johngarbutt: OK, shall we move to docs?17:15
matelakat: y17:16
johngarbutt: #topic docs17:16
*** openstack changes topic to "docs (Meeting topic: XenAPI)"17:16
johngarbutt: #link http://wiki.openstack.org/HypervisorSupportMatrix17:16
johngarbutt: I have updated this17:16
johngarbutt: any more ideas welcome17:17
johngarbutt: we have quite a lot of pending doc bugs, is there anyone with a bit of time for those?17:17
*** jbrogan has left #openstack-meeting17:17
johngarbutt: #link https://bugs.launchpad.net/openstack-manuals/+bugs?field.searchtext=xenapi17:18
johngarbutt: OK, just wanted to raise that17:18
johngarbutt: #topic qa17:18
*** openstack changes topic to "qa (Meeting topic: XenAPI)"17:19
johngarbutt: we have spoken about getting some tests, above and beyond smokestack, reporting into gerrit17:19
johngarbutt: is that still on anyone's roadmap?17:19
johngarbutt: update from Citrix: internal CI is almost back up, based on DevStack17:20
*** winston-d has quit IRC17:20
pvo: johngarbutt: our QA folks are looking at that now.17:20
pvo: Not sure about their timeline though.17:20
johngarbutt: #link https://github.com/citrix-openstack/qa17:20
pvo: I can point you to the right folks, if you're interested.17:20
johngarbutt: pvo: cool, might be good, just to make sure no one else tries the same thing17:21
johngarbutt: I know internap were interested at one point too17:21
*** b3nt_pin has joined #openstack-meeting17:21
johngarbutt: Citrix have tempest running against XenServer using DevStack using jenkins plus the above scripts, if that helps people17:21
johngarbutt: #topic bugs17:22
*** openstack changes topic to "bugs (Meeting topic: XenAPI)"17:22
*** aabes has quit IRC17:22
johngarbutt: OK any major bugs for people?17:22
*** Slower has joined #openstack-meeting17:22
johngarbutt: #topic Open Discussion17:23
*** openstack changes topic to "Open Discussion (Meeting topic: XenAPI)"17:23
johngarbutt: pvo: fire way17:23
johngarbutt: away^17:23
*** dtynan has joined #openstack-meeting17:24
pvo: it was a quick question really,… I was wondering if anyone using XenServer/XCP uses the metadata service or xenstore?17:24
pvo: to pass data to instances.17:24
johngarbutt: also, any other questions or issues people would like to raise?17:24
*** xyang_ has quit IRC17:24
Mr_T: i've got a question, but i think pvo might be beating me to it17:24
johngarbutt: We have tested the metadata service with nova-network and flatdhcp17:24
johngarbutt: it seemed to be working17:24
johngarbutt: with cloud-init picking up things17:25
pvo: johngarbutt: ya, the dhcp part is always the sticking point for us.17:25
pvo: we don't use DHCP, so there is the chicken-egg thing17:25
johngarbutt: right17:25
*** hemnafk is now known as hemna17:25
johngarbutt: link local address any good?17:25
BobBall: Could you briefly explain the chicken+egg thing for my benefit?17:26
pvo: johngarbutt: ya, that is where our conversations usually drift to. We'd talked some time ago about an ipv6 link local, but got some pushback here for that.17:26
pvo: BobBall: well, assuming we're not using linklocal addresses, we need a valid ip to talk to a metadata service.17:26
johngarbutt: config drive could be helpful alternative, but again, no metadata service at that point17:26
BobBall: ah of course17:26
*** hemna has quit IRC17:26
pvo: so we use xenstore to inject the ips, but if we're already halfway there with some data, we end up putting it all there.17:26
johngarbutt: BobBall: metadata service is on an IP address, so you need the vif up17:26
westmaas: I think the general direction is configdrive + metadata service17:26
pvo: westmaas: ya, i was hoping to just copy someone's config : )17:27
johngarbutt: config drive for ip address?17:27
westmaas: configdrive for boot time data, metadata service for ongoing data that you need access to from the instance17:27
westmaas: johngarbutt: I'm not so clear on that, sorry :)17:27
pvo: johngarbutt: how about root password setting with windows?17:27
johngarbutt: hmm, metadata is fairly one shot at the moment with cloud init17:27
pvo: that always ends up killing the config drive convo17:27
westmaas: and I more meant not xen specifically, but OS wide17:27
*** gyee has joined #openstack-meeting17:28
johngarbutt: agreed17:28
johngarbutt: I think most people think about a reboot to reset the password17:28
johngarbutt: cloud-init could re-read on reboot17:28
westmaas: in v3 of the api you can't set the password17:28
westmaas: I believe.17:28
johngarbutt: extension time17:28
johngarbutt: :-(17:28
pvo: johngarbutt: yea, that is what I proposed back in the Bexar timeframe….17:28
pvo: got some resistance to that idea then17:28
westmaas: or abandon that feature17:28
westmaas: just windows becomes a problem17:29
johngarbutt: vish seemed more keen in that summit XenAPI session on reset on reboot17:29
johngarbutt: we have cloud-init in windows now I think17:29
pvo: I think its reasonable to reboot an instance to reset a root password, but since people are used to not rebooting it may be a harder sell.17:29
johngarbutt: or very close17:29
*** danwent has joined #openstack-meeting17:29
pvo: so weve been trying to figure out a way around it17:29
johngarbutt: we could have xenstore kick cloud-init?17:29
johngarbutt: xen specific but not changing the core functionality17:29
pvo: johngarbutt: ah, hadn't seen cloud-init in windows yet17:29
pvo: that would be helpful.17:30
johngarbutt: hyper-v guys mentioned something about that17:30
johngarbutt: not sure if it is ready for prime time yet though17:30
pvo: is that in openstace github or just on the internets somewhere?17:30
pvo: openstack… that would have been17:30
pvo: ok17:30
pvo: will check that out17:30
johngarbutt: pvo: not sure, would have to google17:30
johngarbutt: I guess their meeting might be a good place, if peter is not around17:31
pvo: johngarbutt: ya, doing that now. will find17:31
johngarbutt: pvo: cheers17:31
westmaas: oh yeah mikal mentioned that to me too17:31
johngarbutt: so xen specific extension to cloud-init, does that sound bad?17:31
johngarbutt: to kick the standard on reboot password system17:32
westmaas: http://www.cloudbase.it/cloud-init-for-windows-instances/17:32
johngarbutt: the key requirement from HP was around ensuring they were never in a position to decrypt the password17:32
pvo: johngarbutt: yea, that part I was trying to figure out too.17:33
johngarbutt: I meant alexp not peter, oops17:33
johngarbutt: #link https://github.com/alexpilotti/cloudbase-init17:33
westmaas: yeah thats why we stopped storing it a while ago, but still there is the time its in transit17:33
pvo: curious to how they're solving it17:33
johngarbutt: pvo: it was ssh keys I think17:33
pvo: on windows?17:33
westmaas: but not for windows :)17:33
westmaas: haha17:33
westmaas: yea17:33
johngarbutt: they injected key used to encrypt a generated password17:33
pvo: if msft would just embrace openssh…17:33
johngarbutt: well, any key will do I guess, just make sure the user is the only one with the private bit17:33
johngarbutt: any symmetric key thingy I guess17:34
pvo: DH works, but you need to bounce the messages back and forth.17:34
pvo: which is what we were looking at the metadata service *could* do.17:34
johngarbutt: that is what you do now right?17:34
pvo: but felt the wrong way.17:34
pvo: ya, thats all.17:34
johngarbutt: right17:34
pvo: the gentleman yields the floor17:34
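The Diffie-Hellman "bounce" pvo mentions works because each side only ever transmits g^x mod p; both ends then derive the same shared secret without it crossing the wire. A self-contained sketch (the tiny prime is for illustration only — real deployments use 2048-bit-plus groups):

```python
import hashlib
import secrets

# Public parameters both sides agree on up front.
p = 4294967291   # largest 32-bit prime; a toy modulus, never use in production
g = 2

a = secrets.randbelow(p - 2) + 1   # guest's private exponent
b = secrets.randbelow(p - 2) + 1   # service's private exponent

A = pow(g, a, p)                   # guest -> service (public)
B = pow(g, b, p)                   # service -> guest (public)

# Each side combines its private exponent with the other's public value;
# (g^b)^a == (g^a)^b mod p, so both arrive at the same secret.
shared_guest = pow(B, a, p)
shared_service = pow(A, b, p)

# Derive a usable key from the shared secret.
key = hashlib.sha256(str(shared_guest).encode()).digest()
```

This is the shape of the exchange, not the agent's actual code; the point pvo makes stands either way — it needs a round trip in each direction, which is awkward over a one-way channel like xenstore writes.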
johngarbutt: can we not use the keypair, like an ssh key somehow17:35
*** bencherian has quit IRC17:35
johngarbutt: user creates key, adds key to instance17:35
pvo: I'm sure we could with something custom in windows17:35
johngarbutt: normally crazy because it is windows17:35
pvo: for linux, its all solved.17:35
johngarbutt: right17:35
johngarbutt: trying to think about kerberos and ssl apis they have already17:36
johngarbutt: .NET includes the tools for this I think17:36
*** annegentle_itsme has quit IRC17:36
pvo: I'm wintarded, so I have no idea.17:36
johngarbutt: pvo: cloud-base may have done this already17:37
johngarbutt: reading the readme for windows cloud init17:37
pvo: looking through it now17:37
johngarbutt: uses ssh key and password17:37
johngarbutt: time to xen extend it maybe17:37
johngarbutt: hmm17:39
johngarbutt: they don't encrypt it yet17:39
johngarbutt: https://github.com/alexpilotti/cloudbase-init/blob/master/cloudbaseinit/plugins/windows/createuser.py#L4717:39
johngarbutt: they optionally generate it though17:39
johngarbutt: sounds like they have installed openssh or something17:39
johngarbutt: #link https://github.com/alexpilotti/cloudbase-init/blob/master/cloudbaseinit/plugins/windows/sshpublickeys.py17:39
johngarbutt: they inject the keys17:39
pvo: heh. that would be a fun fight to have again.17:40
johngarbutt: I see a fun summit session coming up17:40
pvo: we have that talk twice a year : )17:40
*** alexpilotti has joined #openstack-meeting17:40
johngarbutt: ah, hello17:41
johngarbutt: alexpilotti: how does the change password work in cloudbase-init?17:41
alexpilotti: johngarbutt: hi!17:42
alexpilotti: johngarbutt: you mean the admin_pass in the metadata?17:42
johngarbutt: alexpillotti: sorry to drag you into an XenAPI meeting17:42
johngarbutt: yes that is the one17:42
johngarbutt: are there plans for encrypting that password?17:43
*** dwcramer has joined #openstack-meeting17:43
alexpilotti: johngarbutt: there's also a new patch from vishy to push the patch from the guest to the metadata17:43
*** kaganos has joined #openstack-meeting17:43
alexpilotti: johngarbutt: that's the way we want to take17:43
alexpilotti: johngarbutt: to push the password, sorry, lapsus :-)17:43
johngarbutt: OK, push an encrypted one?17:44
alexpilotti: johngarbutt: yes17:44
*** kaganos_ has joined #openstack-meeting17:44
johngarbutt: using the SSH key, or something else?17:44
*** Mandell has joined #openstack-meeting17:44
alexpilotti: johngarbutt: yes17:44
johngarbutt: cool17:44
alexpilotti: johngarbutt: basically you encrypt it on the guest with the public key17:44
pvo: alexpilotti: this is assuming ssh is installed on teh windows machien?17:44
johngarbutt: does it reset on every reboot?17:44
pvo: <sp>17:44
alexpilotti: pvo: no, you just need OpenSSL17:44
pvo: alexpilotti: gotcha17:45
alexpilotti: johngarbutt: no, unless you confige it to do so17:45
alexpilotti: *configure17:45
johngarbutt: OK, so you set the password, then reboot, and it picks it up?17:45
alexpilotti: johngarbutt: even w/o reboot17:45
johngarbutt: or just on new machine create?17:45
alexpilotti: johngarbutt: at the first boot17:45
johngarbutt: so we were wondering about without the need for a reboot, are you polling the metadata service or something?17:46
alexpilotti: johngarbutt: yes17:46
alexpilotti: johngarbutt: we are mainly supporting DriveInit17:46
alexpilotti: johngarbutt: BTW did you guys implement ConfigDrive?17:47
johngarbutt: driveinit? sorry for all these questions!17:47
*** kaganos has quit IRC17:47
alexpilotti: johngarbutt: ConfigDrive, sorry :-)17:47
*** kaganos_ is now known as kaganos17:47
johngarbutt: mike still is taking that:17:47
alexpilotti: johngarbutt: I try to avoid the metadata service as much as possible17:48
johngarbutt: matelakat: you got the link for that?17:48
alexpilotti: johngarbutt: and ConfigDrive is the perfect solution17:48
alexpilotti: johngarbutt: one huge problem with vishy's approach to the password problem17:48
alexpilotti: is that it requires posting to the metadata service17:48
johngarbutt: right17:49
matelakat: #link https://review.openstack.org/#/c/18370/17:49
alexpilotti: matelakat: cool!17:49
johngarbutt: pvo: is that sounding better now?17:49
johngarbutt: I think that is starting to join up17:49
pvo: johngarbutt: much. Thanks.17:49
alexpilotti: which means that the guest needs to have write access to the metadata service *cough*17:49
johngarbutt: hmm, yes...17:50
alexpilotti: with all the security issues that you can imagine17:50
alexpilotti: we found a solution:17:50
johngarbutt: hence push into the guest, I see17:50
johngarbutt: alexpilotti: do tell17:50
alexpilotti: the guest passes the encrypted password to the host on an internal channel17:50
alexpilotti: and the host writes to the metadata17:50
alexpilotti: this is hypervisor specific17:51
johngarbutt: oh right, which is where XenAPI has been using xenstore17:51
johngarbutt: right17:51
alexpilotti: Hyper-V has a technology called KVP exchange for that17:51
alexpilotti: I wantedt to ask what is available on Xen17:51
johngarbutt: I wondered about, user pushes password, nova cli helps encrypt with key17:51
alexpilotti: *wanted17:51
pvo: alexpilotti: this is how we do it now17:51
pvo: a diffie-hellman exchange to pass it back and forth.17:52
*** vishy is now known as vishy_zz17:52
alexpilotti: pvo: cool. Do you have a blueprint or a reference patch?17:52
pvo: alexpilotti: I can find one.17:52
pvo: we've been using it for some time now17:52
alexpilotti: because IMHO having the guest writing directly to the metadata service is a huge risk of DOS and scalability problems17:52
pvo: alexpilotti: if you're running one monolithic, sure.17:53
johngarbutt: #link https://github.com/openstack/nova/blob/master/nova/virt/xenapi/agent.py#L16617:53
pvo: if youre running one per compute on loopback, less so17:53
alexpilotti: while passing the info to the host and having the driver doing it via nova api is IMO way safer17:53
pvo: yea, that link17:53
alexpilotti: johngarbutt pvo: tx, I'm going to take a look at it17:53
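The flow alexpilotti describes — generate a password in the guest, encrypt it with the user's public key, hand the ciphertext to the host over an internal channel (xenstore, Hyper-V KVP), have the host post it to the metadata service — can be sketched with textbook RSA. This is a toy with tiny primes and no padding, purely to show the asymmetry; real code would use proper key sizes and OAEP:

```python
# Toy RSA keypair (classic demo primes; never use anything like this for real).
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, known to the guest
d = pow(e, -1, phi)        # private exponent, held only by the user (Python 3.8+)

def encrypt_on_guest(password):
    # Per-byte textbook RSA: fine for a sketch, insecure in practice.
    return [pow(b, e, n) for b in password]

def decrypt_with_private_key(blobs):
    return bytes(pow(c, d, n) for c in blobs)

ciphertext = encrypt_on_guest(b"s3cret")   # guest side
# ...host copies `ciphertext` out via xenstore/KVP and writes it to the
# metadata service; neither host nor service can decrypt it...
plaintext = decrypt_with_private_key(ciphertext)   # user side, with private key
```

The property that matters for the HP requirement mentioned earlier is visible here: everything the host and metadata service ever see is `ciphertext`, and only the private-key holder can recover the password.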
*** markwash has joined #openstack-meeting17:54
johngarbutt: if we don't catch each other, fancy same time next week to just follow up?17:54
alexpilotti: sure17:54
BobBall: Note that XS does have some KVP equivalent which is needed by SCVMM - although this isn't currently suitable for wider distribution I believe17:54
*** derekh has quit IRC17:54
pvo: sure17:54
*** b3nt_pin has quit IRC17:54
alexpilotti: would you guys like to schedule a meeting next week?17:55
alexpilotti: maybe we should fetch somebody from KVM as well17:55
*** Guest14993 is now known as annegentle17:55
johngarbutt: Mr_T: your question?17:55
Mr_T: oh, thanks - i was curious if anyone happened to know the size limit of files ("personalities") injected to instances via xenstore?17:55
johngarbutt: alexpilotti: sure17:55
*** annegentle is now known as Guest2516617:56
Mr_T: i've heard it's somewhere around 2k, but wasn't able to find a more specific answer17:56
johngarbutt: not sure myself, smaller is better, that I remember17:56
johngarbutt: is that a nova or a XenAPI limit?17:56
pvo: its a limitation in how much data you can put into a xenstore value17:56
pvo: its a xenapi limit17:56
johngarbutt: right17:57
johngarbutt: #link https://github.com/openstack/nova/blob/master/nova/virt/xenapi/agent.py#L21917:57
alexpilotti: k guys, let's sync on -dev after the meeting ends?17:57
johngarbutt: alexpilotti: sure, thank you!17:57
pvo: alexpilotti: I gotta run to my next meeting, but I'll follow up there later.17:57
alexpilotti: tx!17:57
johngarbutt: That is time for us17:57
BobBall: Believe the limit is 4k but I'd have to check up on that17:57
johngarbutt: #action BobBall: check xenstore limit17:58
johngarbutt: that does ring a bell17:58
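Whatever the exact per-value ceiling turns out to be (2k and 4k are both suggested above, and confirming it is BobBall's action item), the usual workaround for a small-value store like xenstore is to base64 the payload and split it across several numbered keys. A sketch — the 4096 limit and the key naming are assumptions for illustration, not nova's actual scheme:

```python
import base64

XENSTORE_VALUE_LIMIT = 4096  # assumed per-value cap, pending the action item

def to_xenstore_records(path, data, limit=XENSTORE_VALUE_LIMIT):
    """Base64 the payload and split it over numbered keys so every
    individual xenstore write stays under the per-value limit."""
    encoded = base64.b64encode(data).decode("ascii")
    chunks = [encoded[i:i + limit] for i in range(0, len(encoded), limit)]
    records = {"%s/nchunks" % path: str(len(chunks))}
    for i, chunk in enumerate(chunks):
        records["%s/%d" % (path, i)] = chunk
    return records

def from_xenstore_records(path, records):
    """Reassemble and decode what to_xenstore_records() produced."""
    n = int(records["%s/nchunks" % path])
    encoded = "".join(records["%s/%d" % (path, i)] for i in range(n))
    return base64.b64decode(encoded)
```

Base64 inflates the data by about a third, so the effective personality-file budget per key is smaller than the raw limit — consistent with the "around 2k" figure Mr_T had heard for usable payload.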
johngarbutt: many thanks all17:58
johngarbutt: same time next week hopefully17:58
Mr_T: thank you17:58
johngarbutt: #endmeeting17:58
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"17:58
openstack: Meeting ended Wed Jan  9 17:58:35 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:58
openstack: Minutes:        http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-09-17.01.html17:58
matelakat: thanks, bye17:58
openstack: Minutes (text): http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-09-17.01.txt17:58
openstack: Log:            http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-01-09-17.01.log.html17:58
*** matelakat has quit IRC17:58
*** MarkAtwood has joined #openstack-meeting17:59
*** jog0 has joined #openstack-meeting17:59
*** vishy_zz is now known as vishy18:03
*** bencherian has joined #openstack-meeting18:06
*** dolphm has quit IRC18:06
*** darraghb has quit IRC18:08
*** hemna has joined #openstack-meeting18:12
*** KurtMartin has quit IRC18:13
*** b3nt_pin has joined #openstack-meeting18:13
*** adjohn has joined #openstack-meeting18:17
*** EmilienM__ has left #openstack-meeting18:20
*** pschaef has quit IRC18:23
*** cp16net is now known as cp16net|away18:25
*** zul has quit IRC18:27
*** BobBall has quit IRC18:29
*** johngarbutt has left #openstack-meeting18:29
*** Mr_T has left #openstack-meeting18:31
*** sarob has joined #openstack-meeting18:37
*** dolphm has joined #openstack-meeting18:40
*** b3nt_pin has quit IRC18:44
*** flaper87_ has joined #openstack-meeting18:49
*** flaper87_ has quit IRC18:50
*** clayg has joined #openstack-meeting18:52
*** juice has joined #openstack-meeting18:53
*** dcramer_ has joined #openstack-meeting18:54
*** pschaef has joined #openstack-meeting18:55
*** prometheanfire has joined #openstack-meeting18:55
*** pschaef has quit IRC18:55
*** stevebaker has quit IRC18:56
*** Guest25166 is now known as annegentle18:57
*** annegentle is now known as Guest6018718:57
*** gholt has joined #openstack-meeting18:58
*** dwcramer has quit IRC18:58
*** pandemicsyn has joined #openstack-meeting18:59
clayg: [18:59]          --> | gholt [~gholt@brim.net] has joined #openstack-meeting19:01
notmyname: #startmeeting swift19:01
openstack: Meeting started Wed Jan  9 19:01:26 2013 UTC.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: swift)"19:01
openstack: The meeting name has been set to 'swift'19:01
chmouel: hey there19:01
notmyname: swift meeting time19:01
notmyname: my goal for these meetings is to have a place for contributors to regularly talk about what's going on in the project19:02
notmyname: today's agenda is: outstanding patches, outstanding bugs, next release, and "other"19:02
*** stevebaker has joined #openstack-meeting19:03
notmyname: #topic outstanding patches19:03
*** openstack changes topic to "outstanding patches (Meeting topic: swift)"19:03
notmyname: https://review.openstack.org/#/q/status:open+swift,n,z19:03
notmyname: the gerrit expiry script was fixed/enabled yesterday19:03
notmyname: so older patches have now fallen off19:03
notmyname: but there are a few I'd like to mention19:04
notmyname: redbo: (or other?) what's the status on the container quotas?19:04
tongli: John, can we go one by one with the order of the list?19:05
clayg: did it fall off?19:05
notmyname: ya, it fell off. and was work in progress, I think19:05
notmyname: tongli: there are a few I want to highlight, but we can bring up others if needed19:05
tongli: ok,19:05
chmouel: i have started working on a swift quota by account middleware19:05
redbo: I just haven't swung back around to it yet.  We still want it.19:06
clayg: tongli: I'd rather do older first, or higher priority, before we can change topics if there's some newer patches that need to be discussed we can bring them up... but since it's the first week if we want to go through *all* of them?19:06
tongli: that is fine. let's go.19:06
notmyname: redbo: ok. chmouel: is yours related to redbo's at all?19:07
chmouel: not at all was going to release first as external middleware and maybe propose it to swift after as this is something enovance may need19:07
notmyname: ah ok19:07
*** dfg has joined #openstack-meeting19:07
notmyname: ah. dfg. just who I was about to talk about ;-)19:08
notmyname: https://review.openstack.org/#/c/17878/19:08
*** K has joined #openstack-meeting19:08
chmouel: will get some code to show probably next week if that goes up in my assigned priorities :)19:08
notmyname: the bulk middleware. it has a -1 that needs addressed, but it's a cool feature that a lot of people I mentioned it to liked19:08
*** K is now known as Guest9128819:09
creiht: notmyname: yeah he's been working on it19:09
dfg: notmyname: i'm about to upload a new patch with chuck's changes19:09
notmyname: great :-)19:09
dfg: just finished up unit tests19:09
notmyname: cool19:09
clayg: dfg: wow xml serialization too!?19:09
*** Guest91288 is now known as kota19:09
*** dcramer_ has quit IRC19:09
dfg: clayg: out going, not incoming.19:09
dfg: which may be lame but will wait for comments19:10
notmyname: as much as I don't like xml, I agree that it should be something we generate19:10
dfg: on review19:10
notmyname: anything else on the bulk middleware?19:10
notmyname: ok19:11
notmyname: https://review.openstack.org/#/c/19296/ and https://review.openstack.org/#/c/19297/ were proposed today to add support for a separate replication network19:11
ogelbukh: yes19:11
ogelbukh: that's our stuff19:11
clayg: whoa, thats new19:11
ogelbukh: one more patch is still in internal review and testing19:11
notmyname: I've been talking to mirantis about it a little over email. I haven't had a chance to look at them in detail, but I have a couple of high level questions19:11
notmyname: ok19:11
creiht: dangit... what's up with tempest? :/19:12
notmyname: creiht: cinder tests borked this morning or something19:12
notmyname: ogelbukh: first, does it need to be 2 dependent patches? does the first patch make sense without the 2nd?19:12
jgriffith: notmyname: not cinder  :)19:12
*** adjohn has quit IRC19:12
creiht: haha19:13
ogelbukh: basically, they apply to different parts19:13
notmyname: jgriffith: oh, sorry. "people" are on top of it ;-)19:13
jgriffith: I just submitted a tempest patch that should fix it19:13
notmyname: ok19:13
ogelbukh: first one to ring-builder and script, second to replicator processes19:13
clayg: jgriffith: hi!19:13
jgriffith: also I think sdague made a modification that should address as well19:13
creiht: jgriffith: isn't this what reviews in gerrit are for? :)19:13
ogelbukh: and the last one to the replication-servers19:14
notmyname: ogelbukh: ok, that was my 2nd question19:14
creiht: erm I mean notmyname -^19:14
notmyname: ogelbukh: I was expecting to see the REPLICATE verb use the 2nd network19:14
ogelbukh: we didn't want to dump too many lines in one patch in the first place19:14
ogelbukh: yes, that's the last one]19:14
notmyname: ok19:14
notmyname: I'm quite interested in the RAX perspective for the 2nd replication network feature19:15
notmyname: it's essential for global clusters, but I think it's also quite beneficial for local clusters too19:16
pandemicsyn: s/essential/nice to have/ ?19:17
notmyname: pandemicsyn: if you have a dedicated dark net WAN connection, it's probably not essential :-)19:17
gholt: I think somehow a separate logical replication network makes actual network bandwidth increase.19:18
gholt: But... I kid, I kid. Separate would be fine. Not sure if we'd use it, but it'd be fine and desirable in many cases.19:18
*** EmilienM__ has joined #openstack-meeting19:18
*** Shengjie_Min has joined #openstack-meeting19:19
notmyname: we'll be looking at the patches at swiftstack, and I'd like some RAX people to look at it too, please.19:19
*** adjohn has joined #openstack-meeting19:19
notmyname: the more eyes the better, so if others can look as well, that'd be swell :-)19:20
notmyname: speaking of global clusters... torgomatic has 2 patches up (in various states of review) https://review.openstack.org/#/c/18562/ and https://review.openstack.org/#/c/18263/19:20
gholt: So, uhm, why is the rep network essential, maybe that'd help us understand the urgency?19:20
gholt: I already +2 one of those. With the dependency I was waiting to review the other.19:21
torgomatic: gholt: in slow-wan-link land, it lets you use QoS to prioritize user traffic over replication traffic19:21
notmyname: because if you have a WAN connection between 2 clusters (in a global cluster scenario), you need to be able to separately control the replication traffic19:21
creiht: same here19:21
torgomatic: that's the use case I know of19:22
notmyname: creiht: gholt: ya, thanks :-)19:22
creiht: just waiting on torgomatic to fix his stuff :)19:22
torgomatic: I'm going to fix up https://review.openstack.org/18263 soonish. I've been pretty much useless this week, though. stupid cold.19:22
*** dcramer_ has joined #openstack-meeting19:22
gholt: Okay, I'm not sure why qos doesn't work with different logical networks, but I don't have to be sure. :)19:22
gholt: <-- not a network guy19:23
chmouel: i think qos just advise the app to go slower it doesn't force it19:23
chmouel: but i am not a network guy either19:23
gholt: Hehe19:24
notmyname: heh19:24
creiht: if it is a wan, how do you have two "different" networks?19:24
notmyname: Those are all the outstanding patches I wanted to specifically bring up. any other patches to talk about before we move on to bugs?19:24
* creiht also isn't a network guy19:24
creiht: :)19:24
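The idea in those two reviews, as described in the discussion, is that each ring device carries a second endpoint so REPLICATE and rsync traffic can be steered (and QoS'd or routed) separately from client traffic. Roughly like this — the field names are guesses from the conversation, not taken from the patches themselves:

```python
# A ring device with an optional dedicated replication endpoint
# (hypothetical field names for this sketch).
device = {
    "id": 0, "zone": 1,
    "ip": "10.0.0.5", "port": 6000,        # client-facing network
    "replication_ip": "192.168.5.5",       # replication-only network
    "replication_port": 6000,
}

def endpoint(dev, replication=False):
    """Pick which (ip, port) to dial: replication traffic uses the
    dedicated network when the device defines one, and falls back to
    the client network for devices that predate the feature."""
    if replication and dev.get("replication_ip"):
        return dev["replication_ip"], dev.get("replication_port", dev["port"])
    return dev["ip"], dev["port"]
```

The fallback answers notmyname's "does the first patch make sense without the 2nd" question in spirit: rings that never set a replication endpoint behave exactly as before.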
tongli: https://review.openstack.org/#/c/15818/19:24
tongli: john, this one, I have a quick question.19:24
notmyname: ok19:25
tongli: when number of segments for a file is greater than CONTAINER_LISTING_LIMIT.19:25
tongli: if the requested range is not within the limit,19:25
tongli: should we ignore the range or return error especially, the request contains ranges that can be met and the ranges not in the limit.19:26
*** adrian_smith has joined #openstack-meeting19:26
notmyname: my first guess would be to return an error. torgomatic or gholt?19:27
tongli: this is about getting the ranges of an object.19:27
*** zul has joined #openstack-meeting19:28
torgomatic: well, returning an error sounds reasonable to me right now, but i haven't dug through rfc2616 to see if that's allowed19:29
notmyname: ya, that ^19:29
clayg: isn't range requests sort of advisory to begin with?  I thought you could ignore and return the whole thing if you wanted...19:29
tongli: ok19:29
clayg: maybe there's a 400 for that...19:29
tongli: 416.19:29
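For the record, RFC 2616 §14.35.1 settles both points raised here: clayg is right that a server MAY ignore a Range header entirely and return 200 with the whole body, but if it honours the header and no byte-range-spec is satisfiable against the entity's length, the response SHOULD be 416 (Requested Range Not Satisfiable); when at least one range is satisfiable, the unsatisfiable ones are simply dropped and the rest are clamped. A decision sketch, simplified to absolute first-byte/last-byte ranges (no suffix ranges):

```python
def plan_ranges(byte_ranges, obj_len):
    """Given parsed (start, end) absolute byte ranges and the object
    length, return (status, ranges): 206 with the satisfiable ranges
    clamped to the object, or 416 when none are satisfiable."""
    satisfiable = [(start, min(end, obj_len - 1))
                   for start, end in byte_ranges
                   if start < obj_len]          # RFC 2616 satisfiability test
    if satisfiable:
        return 206, satisfiable
    return 416, []
```

So for tongli's mixed case — one range inside the object, one beyond it — the in-range part is served with 206 and the other is ignored; 416 only applies when every requested range is beyond the end.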
notmyname: any other patches to discuss?19:30
notmyname: I'm not sure why we need testr instead of nose, but I don't know if mordred is around to answer that19:30
notmyname: perhaps we'll bug him in the review19:30
mordred: well, how much do you want me to ramble...19:30
*** woodspa has quit IRC19:31
*** woodspa has joined #openstack-meeting19:31
*** cp16net|away is now known as cp16net19:31
notmynamemordred: a more detailed commit message would be nice19:31
mordrednotmyname: tl;dr - nose is invasive and the source of various build failures that are crazy19:32
mordrednotmyname: also, testr does parallel testing19:32
*** anniec has joined #openstack-meeting19:32
notmynameya, I just read that after clicking through enough links19:32
mordrednotmyname: also, we're moving everyone else to it :) - but there is no known reason yet to cause you to move to it against your will19:32
*** sdake_z has joined #openstack-meeting19:32
mordrednotmyname: so - more verbose commit message then? :)19:33
notmynamemordred: when have you known swift to do something "because everyone else is doing it"? ;-)19:33
mordrednever19:33
notmynamemordred: ya, please19:33
mordredI only ever expect you guys to do things because they are awesomer19:33
clarkbits also worth pointing out that in the process of converting nova a lot of broken tests were fixed (it is harder to get away with broken tests)19:33
swifterdarrellDo we lose anything by switching?  the code coverage report?19:33
clarkbswifterdarrell: code coverage can still be generated19:34
mordrednope. code coverage stays. you lose the colorized test output is the only thing19:34
creihtyeah I looked at it a bit, and it seems reasonable19:34
mordredso if you like watching colored test scroll by, we don't have a great answer for you19:34
chmouelsounds good but there is some red error when i try your review19:34
swifterdarrellcool; I could care less about colors, but the coverage report is quite useful19:34
creihtmordred: I was waiting for adding the coverage back to take a look at it again (for swift client19:34
mordredbut - testr itself has _way_ better tooling - like 'testr run --failing' which re-runs the tests that failed last time19:34
mordredcreiht: swiftclient doesn't have a .coverage - altohugh I would be happy to add one19:35
mordredchmouel: eek! I'll poke you about that and see if I can track it down19:35
creihtmordred: when I run the old .unittests, I see a coverage report (at least I thought I did)19:36
* mordred is actually very interested to see what happens on an attempt to move swift itself19:36
mordredcreiht: cool. I'll double-check - (I think I might have re-added that since we talked)19:36
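[editor's note: for anyone following along, the nose-to-testr move mordred describes was typically wired up with a `.testr.conf` in the project root. The fragment below is the stock layout OpenStack projects used at the time — an illustrative example, so check the actual review for swift's exact version:]

```ini
[DEFAULT]
# how testr launches test workers; subunit streams results back, which
# is what enables parallel runs and 'testr run --failing' replays
test_command=python -m subunit.run discover ./test $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
```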
<notmyname> anything else on patches, or can we move on to bugs?  19:36
<notmyname> mordred: thanks  19:36
<mordred> notmyname: my pleasure!  19:37
*** krtaylor has quit IRC  19:37
<notmyname> moving on to bugs...  19:37
<notmyname> #topic outstanding bugs  19:37
*** openstack changes topic to "outstanding bugs (Meeting topic: swift)"  19:37
<notmyname> please take a look at https://bugs.launchpad.net/swift/+bugs?field.status=NEW&field.importance=UNDECIDED and move them out of "undecided" as you can  19:37
<notmyname> as appropriate  19:38
<notmyname> I've got 3 bugs I want to bring up  19:38
<notmyname> first, https://bugs.launchpad.net/swift/+bug/1084762  19:38
<uvirtbot> Launchpad bug 1084762 in swift "error when writing to handoffs" [High,Confirmed]  19:38
<notmyname> can anyone look at that or work on it? it seems that any 2 storage nodes down in a cluster could cause 500s to be returned to the client  19:38
<notmyname> I'll be happy to work on it, but I may not be able to for a few weeks  19:39
<notmyname> I suspect RAX has seen it in their logs, but I'd like confirmation on that  19:39
<creiht> notmyname: I think if we had seen that, it would have been fixed already  19:40
<notmyname> ya, I'd think so, but I also know how many log messages you see every second :-)  19:40
<notmyname> (it's somewhere above "a lot")  19:40
<clayg> notmyname: you're killing the nodes while the PUT is happening?  19:41
<notmyname> no  19:41
<notmyname> kill them first, then do the PUT  19:41
<clayg> or just shutting them down before the proxy ever calls get_nodes?  19:41
<notmyname> well, confirmation that it's not a real thing would be good too  19:41
<notmyname> ya  19:41
<notmyname> this is on my lucid SAIO  19:41
*** aabes has joined #openstack-meeting  19:42
<swifterdarrell> notmyname: wish I remembered where I originally saw it, but it wasn't lucid  19:42
<clayg> sounds pretty crazy...  19:42
<clayg> maybe on the SAIO it can't find two handoffs?  19:43
<notmyname> 4 servers, so it should still be able to get a quorum  19:43
*** ttrifonov is now known as ttrifonov_zZzz  19:43
<notmyname> next bug is https://bugs.launchpad.net/swift/+bug/1082835 <-- this needs an answer. is the change ok, or do we need to revert the functionality (manifests where the referenced segments are able to be listed)?  19:44
<uvirtbot> Launchpad bug 1082835 in swift "World readable access to segmented object produces 401, even if _segments is also world readable." [High,Confirmed]  19:44
<clayg> yeah it just looks like some eventlety barf, and RAX prod is not likely to run into "I can't come up with three nodes"  19:44
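[editor's note: the quorum arithmetic behind the "4 servers" comment above — a replicated write succeeds once a majority of the replica count responds, so with 3 replicas two successful backend PUTs (possibly to handoffs) should suffice. A sketch of the counting only, not the proxy's actual code:]

```python
def quorum_size(replicas):
    # majority of the replica set must succeed for the client to get a 2xx
    return replicas // 2 + 1

# with the default 3 replicas, 2 primaries down still leaves a path:
# the proxy keeps trying handoff nodes until quorum_size(3) == 2 accept
print(quorum_size(3))  # 2
print(quorum_size(4))  # 3
```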
*** cp16net is now known as cp16net|away  19:45
*** Guest60187 has quit IRC  19:45
*** cp16net|away is now known as cp16net  19:45
<notmyname> any comments on the .rlistings issue?  19:46
<torgomatic> well, it's certainly counterintuitive  19:46
<notmyname> ya, but isn't the side effect that you can probe an account if you have write access to a manifest object?  19:47
<clayg> is it because the proxy's listing gets rejected as anon?  19:47
<torgomatic> seems to me that if you can read a manifest file and read each individual segment, then you ought to be able to GET the whole object  19:47
<torgomatic> notmyname: yes, there is the downside that if I can write to one of your containers, I can use manifests to produce a listing of another container that I oughtn't be able to  19:48
*** shadower has joined #openstack-meeting  19:48
<notmyname> either the docs need to make this explicit (the current reported behavior) or the behavior needs to be reverted  19:48
<swifterdarrell> we could just update a doc somewhere to point out that the use case of anon access to segmented objects requires .rlistings on the segment container  19:49
<clayg> torgomatic: could you actually get the listing, though, if you don't have read access to those objects behind the listing?  the *proxy* could get the listing, but could the client on the other side of the manifest get it?  19:49
*** vkmc has quit IRC  19:49
<notmyname> I'd have to check again, but does tempauth have listings as a superset of reads? or is it independent?  19:50
<notmyname> IOW, if you can do the listing then you may already have read access  19:51
<notmyname> ok, in the interest of time, I need to move on to the last item  19:51
<notmyname> #topic next release  19:51
*** openstack changes topic to "next release (Meeting topic: swift)"  19:51
<torgomatic> clayg: it's possible; you mutate the X-Object-Manifest header, perform repeated GET requests, and see whether you get back the etag of an empty object or not  19:51
* torgomatic is just a little too slow :)  19:51
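[editor's note: the container ACLs under discussion, as an illustrative tempauth/referrer example. The header name and the `.r:*`/`.rlistings` tokens are real Swift ACL syntax, but whether `.rlistings` really must be set on the segments container for an anonymous manifest GET is exactly the bug 1082835 question:]

```
# on the container holding the manifest object:
X-Container-Read: .r:*
# on the *_segments container -- per the reported behavior, anonymous
# GETs of the manifest 401 unless listings are also opened up:
X-Container-Read: .r:*,.rlistings
```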
<notmyname> gholt: pandemicsyn: creiht: did you start QA yesterday on master?  19:52
<gholt> Sorry, was afk. Uhm, no, been really busy with other stuff. :/  19:52
*** gatuus has joined #openstack-meeting  19:52
<notmyname> s/yesterday/this week/  19:53
<notmyname> ok  19:53
<notmyname> any idea when that will be able to start?  19:53
<gholt> I'm hoping to start the packaging today, but I said that yesterday.  19:53
<notmyname> heh ok :-)  19:53
<gholt> I need to bug our torch too, and he hasn't been onsite.  19:53
<gholt> Anyhow, it's definitely my #2 priority and I'll ping you when it's going for real.  19:54
*** shardy has joined #openstack-meeting  19:54
<notmyname> thanks  19:54
<notmyname> if that does get started this week, I think that means we can do a public release the week of Jan 21  19:54
<gholt> Also, feel free to ping me about it daily if I haven't. :)  19:54
<notmyname> ok :-)  19:55
<notmyname> any objections to a public release (swift 1.7.6) around Thursday, January 24?  19:55
<tongli> any interesting topics for the coming OS summit on Swift?  19:57
<notmyname> our next swift meeting is the 23rd, so we can have a final go/no-go then. If QA testing starts this week, I'll set the tentative release date for the 24th  19:57
<notmyname> tongli: always. but I don't know what they are yet :-)  19:57
<notmyname> #topic other  19:58
*** openstack changes topic to "other (Meeting topic: swift)"  19:58
<notmyname> any other topics to bring up (in our last 2 minutes)?  19:58
*** donaldngo has joined #openstack-meeting  19:58
<clayg> v3 keystone auth?  19:58
*** SpamapS has joined #openstack-meeting  19:59
<notmyname> clayg: seems that the mailing list thread is covering most of it  19:59
<clayg> heh  19:59
<notmyname> I don't think there is anything affecting swift from what I've seen  19:59
*** donaldngo has quit IRC  19:59
<notmyname> to quote redbo: "if their version bump requires swift storage to change, they're doing it wrong" ;-)  19:59
<chmouel> I don't think so either, will give it a try  19:59
*** donaldngo has joined #openstack-meeting  19:59
<tongli> right, all these things can be done in middleware.  19:59
<notmyname> thanks for coming today. next meeting is the 23rd (two weeks from now).  20:00
<notmyname> #endmeeting  20:00
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"  20:00
<openstack> Meeting ended Wed Jan  9 20:00:15 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  20:00
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-01-09-19.01.html  20:00
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-01-09-19.01.txt  20:00
<openstack> Log:            http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-01-09-19.01.log.html  20:00
* prometheanfire might use this for his gentoo meetings  20:00
*** dolphm has quit IRC  20:01
<asalkeld> #startmeeting heat  20:01
<openstack> Meeting started Wed Jan  9 20:01:10 2013 UTC.  The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.  20:01
*** dfg has left #openstack-meeting  20:01
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  20:01
*** openstack changes topic to " (Meeting topic: heat)"  20:01
<openstack> The meeting name has been set to 'heat'  20:01
<SpamapS> o/  20:01
*** dolphm has joined #openstack-meeting  20:01
<asalkeld> #chair asalkeld sdake_z stevebaker shardy  20:01
*** gholt has left #openstack-meeting  20:01
<openstack> Current chairs: asalkeld sdake_z shardy stevebaker  20:01
*** pandemicsyn has left #openstack-meeting  20:01
*** gholt has joined #openstack-meeting  20:01
*** donaldngo has quit IRC  20:02
*** gholt has left #openstack-meeting  20:02
<stevebaker> ok  20:02
*** prometheanfire has left #openstack-meeting  20:02
<asalkeld> o/  20:02
*** donaldngo has joined #openstack-meeting  20:02
<shardy> o/  20:02
<stevebaker> #topic rolecall  20:02
*** openstack changes topic to "rolecall (Meeting topic: heat)"  20:02
<Slower> o/  20:02
<sdake_z> sdake  20:02
<shardy> shardy here  20:02
<jpeeler> jpeeler here  20:02
<Slower> imain  20:02
<shadower> shadower here  20:02
<stevebaker> #topic Review last week's actions  20:03
*** openstack changes topic to "Review last week's actions (Meeting topic: heat)"  20:03
<stevebaker> #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-02-20.00.html  20:03
*** adrian_smith has quit IRC  20:03
<stevebaker> ACTION: asalkeld to setup heat-cfntools git repo (asalkeld, 20:06:02)  20:03
<stevebaker> done  20:03
<stevebaker> ACTION: investigate injecting heat-cfntools install during cloud-init (stevebake, 20:45:59)  20:03
<stevebaker> oh, that's me  20:04
*** maurosr has quit IRC  20:04
<stevebaker> I haven't looked at heat's cloud-init code yet, but this should be possible  20:05
<stevebaker> I still want image building to be so easy that everybody spins their own images  20:05
<SpamapS> or just uses official images 90% of the time  20:06
<sdake_z> both those options make sense  20:06
<stevebaker> yes  20:06
<sdake_z> trick is getting cfn tools into the 90% images ;)  20:06
<shardy> stevebaker: What's difficult about image building atm?  20:06
*** clayg has left #openstack-meeting  20:06
<asalkeld> time  20:06
<SpamapS> I've found it quite easy. Just stick the files in /opt/aws and it works.  20:06
<stevebaker> the time it takes, having bare metal available to do it  20:06
<stevebaker> shardy: ^^  20:06
<SpamapS> It should be a hard requirement that these tools only use python stdlib components, so it remains easy  20:07
<sdake_z> with stevebake's suggestion of injecting cfn into the image launch, there's the additional requirement that a network host be available to serve them up  20:07
<sdake_z> but as an additional option it makes sense  20:07
<stevebaker> SpamapS: it currently depends on python-boto  20:07
<SpamapS> is that new?  20:07
<shardy> stevebaker: That's an oz problem, not a heat problem, though; and if we package cfntools, then we no longer have to care (much) about image building  20:07
<SpamapS> or did I just not use the boto-requiring bits?  20:07
<shadower> sdake, I think you can base64-encode the data into the template  20:07
<sdake_z> 16k limit on user data  20:08
<shadower> the cfn-tools files' contents, I mean  20:08
<shadower> right  20:08
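[editor's note: the 16k figure quoted above is EC2's user-data cap, and base64 inflates a payload by 4/3, which is what makes embedding the cfn-tools in the template tight. A quick illustrative check:]

```python
import base64

USER_DATA_LIMIT = 16 * 1024  # the EC2 user-data cap mentioned above

def fits_in_user_data(payload: bytes) -> bool:
    # base64 encodes 3 input bytes as 4 output bytes, so it is the
    # *encoded* size that has to fit under the limit
    return len(base64.b64encode(payload)) <= USER_DATA_LIMIT

print(fits_in_user_data(b'x' * 12000))  # True  (16000 encoded bytes)
print(fits_in_user_data(b'x' * 13000))  # False (17336 encoded bytes)
```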
<stevebaker> shardy: do you think python-boto can be removed as a dependency?  20:08
<SpamapS> +1 for making them available in the same way any other apt-get/yum installable package is available.  20:08
<asalkeld> agree  20:08
<shardy> stevebaker: yes, but not that easily  20:08
<sdake_z> agree as well  20:08
<asalkeld> (yum/apt-get install)  20:09
<shadower> yup  20:09
<sdake_z> although I don't see a strong incentive to take away the jeos toolset  20:09
*** sarob has quit IRC  20:09
<shadower> weren't people against this the last time? apt/yum  20:09
<stevebaker> I've been working on packaging rpm and deb  20:09
<shardy> stevebaker: basically we'd have to maintain our own internal client library for CFN and CloudWatch  20:09
*** sarob has joined #openstack-meeting  20:09
<asalkeld> shadower, well I'd be happy with pip install too  20:09
<stevebaker> rpm is done http://repos.fedorapeople.org/repos/heat/heat-trunk/  20:09
<stevebaker> a completely untested PPA is here https://launchpad.net/~steve-stevebaker/+archive/heat-cfntools  20:10
<SpamapS> pip is fine too  20:10
<SpamapS> the point is, using regularly available methods means fewer problems with proxies/restrictions/down servers.  20:10
<sdake_z> over the long term, when heat becomes standard practice, having rpm/deb packages makes it easy for the 90% to prebake the rpms, which eliminates the network requirement  20:10
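[editor's note: once the tools are in distro packages, the cloud-init injection discussed above reduces to a stock `#cloud-config` stanza; the package name below is assumed from the repos linked earlier:]

```yaml
#cloud-config
# install the agent at boot instead of baking it into the image;
# this needs network access to a package mirror, as noted above
packages:
  - heat-cfntools
```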
<stevebaker> yep  20:11
<stevebaker> are there any actions from this topic?  20:11
<stevebaker> #topic shadower's configurable cloud backends patches  20:12
*** openstack changes topic to "shadower's configurable cloud backends patches (Meeting topic: heat)"  20:12
<SpamapS> seems like there's a need for heat's userdata generation to know where the tools are per-image.  20:12
<stevebaker> #link https://github.com/tomassedovic/heat/compare/master...pluggable-clients  20:12
<shadower> so  20:12
*** donaldngo has left #openstack-meeting  20:12
<shadower> with this, users can import a third-party lib for any cloud that lib supports  20:12
<shardy> SpamapS: We expect /opt/aws, same as the AWS cfnbootstrap tools  20:13
<SpamapS> Err, I mean where to grab the tools from, not where to put them.  20:13
<shardy> SpamapS: Ok, let's discuss later as this is OT now  20:13
<SpamapS> agreed  20:13
<sdake_z> seems like import would open us up to a potentially larger user base  20:14
<asalkeld> shadower, that looks nice  20:14
<SpamapS> shadower: this might be related to something I was discussing with stevebaker earlier in #heat. I'd like to be able to put my own private heat engine up in a standing cloud... so a backend I'd be interested in is just OpenStack itself, but from a user standpoint rather than a service standpoint.  20:14
<shardy> sdake_z: Do you see this as any problem re incubation?  20:14
<sdake_z> which seems mostly positive to me  20:14
<asalkeld> also for large version differences  20:15
<sdake_z> shardy: hard to predict if this would be a problem with incubation, but I have only heard positive benefits so far  20:15
<shadower> so would you guys be interested in merging this?  20:15
<shadower> I can send a patch tomorrow  20:16
<stevebaker> does this mean all the backends have to fake being openstack clients?  20:16
<shadower> yeah  20:16
<Slower> yes  20:16
<shadower> I wanted to disrupt the Heat codebase as little as possible  20:16
<shadower> minimal code changes  20:16
<asalkeld> looks like a good addition  20:16
<Slower> in practice it doesn't seem to be a big issue  20:16
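[editor's note: "fake being openstack clients" here means a backend only has to expose the same surface as the stock client modules, so the engine can swap one in at import time. A minimal illustrative loader under that assumption — the shim module name is hypothetical; see the linked pluggable-clients branch for the real wiring:]

```python
import importlib

def load_backend(configured, fallback):
    """Import a configured third-party shim module, falling back to the
    stock client if it isn't installed.  Both modules are expected to
    expose the same API surface as the client they replace."""
    try:
        return importlib.import_module(configured)
    except ImportError:
        return importlib.import_module(fallback)

# e.g. a config file might name 'heat_deltacloud.nova_shim' (hypothetical);
# here the fallback wins because no such module exists in this environment:
mod = load_backend('heat_deltacloud.nova_shim', 'json')
print(mod.__name__)  # json
```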
<sdake_z> ya - probably didn't need a meeting to discuss - git review will do the trick ;)  20:17
<shadower> shardy suggested a meeting because of the incubation  20:17
<sdake_z> i see  20:17
<asalkeld> nice to have some integration in horizon  20:17
<sdake_z> armchair qb atm  20:17
<shardy> Well it's not just the patch  20:17
<asalkeld> so we can add, say, rackspace  20:18
<Slower> I could see some political ramifications, I think it's worth discussing in a meeting  20:18
<shardy> it's the wider issue, i.e. when we rip out CloudWatch and move to ceilometer, do we maintain the internal implementation because aeolus needs it?  20:18
<stevebaker> so the backends would live outside the heat source project? I see no problem with that  20:18
<shardy> Same for all the nested stack resources  20:18
<Slower> yeah that is a problem  20:18
<shadower> stevebaker, unless Heat wants to ship them, yes  20:18
<sdake_z> shardy: that is a complicated problem that I expect slower/shadower will come up with an equally good solution for ;)  20:18
<stevebaker> It's not just nova client that needs to be shimmed; also quantum, swift, etc.  20:19
<shadower> yea  20:19
<shardy> sdake_z: OK, but that wider issue is why I suggested to shadower it may be worth discussing before posting  20:19
<Slower> what about user-pluggable nested stack definitions? :)  20:19
<asalkeld> guys, think multi-cloud failover  20:19
<sdake_z> got it shardy  20:20
<stevebaker> So zaneb has started the pluggable resource types; another approach would be sets of AWS resource types with different backends in the implementations  20:20
<asalkeld> rather than different types of backends  20:20
*** zul has quit IRC  20:20
*** zul has joined #openstack-meeting  20:20
<shardy> Slower: Well zaneb made the resources pluggable, so maybe not a huge issue, but it does mean we end up maintaining stuff in tree which is not directly related to openstack (when we move to openstack-native implementations for stuff like loadbalancers etc)  20:21
<Slower> yeah I'd guess that would be up to the people who are wanting different backends  20:21
<shadower> asalkeld, if Heat gets integrated with Deltacloud (which is what Slower and I'd like to do), you could have multi-cloud failover  20:21
<shadower> since deltacloud speaks multiple clouds already  20:21
<stevebaker> this is related to our nested stack solutions vs openstack projects (DBaaS)  20:21
<asalkeld> multi == lots  20:21
<asalkeld> not different  20:21
<stevebaker> shall we bless this solution to hedge our bets? Long term we may do something different  20:22
<asalkeld> I think it's fine atm  20:22
<sdake_z> works for me  20:23
<asalkeld> +1  20:23
<shadower> \o/  20:23
<stevebaker> #action git review pluggable-clients (everyone)  20:23
<Slower> cool :)  20:23
<stevebaker> #topic Plan/priorities and key features for g3 cycle  20:23
*** openstack changes topic to "Plan/priorities and key features for g3 cycle (Meeting topic: heat)"  20:23
<stevebaker> sdake_z: GO!  20:23
<sdake_z> so you may need a monster energy drink to parse: http://wiki.openstack.org/PTLguide  20:23
<sdake_z> but a couple of key points  20:24
<sdake_z> blueprints start 4 weeks before H  20:24
<sdake_z> which on this schedule: http://wiki.openstack.org/GrizzlyReleaseSchedule  20:24
<sdake_z> would be somewhere around the 28th of march  20:24
<stevebaker> #link http://wiki.openstack.org/PTLguide  20:24
<stevebaker> #link http://wiki.openstack.org/GrizzlyReleaseSchedule  20:25
<sdake_z> we should start thinking about blueprints for H around that time, but if you have something now, might as well get it in launchpad  20:25
<sdake_z> the key point, if you see the color-coded schedule, is that we run off the release cliff feb 28  20:25
<asalkeld> we need an H series  20:26
<stevebaker> If there is one thing which we could do better for incubation, I think it is creating more blueprints for feature work  20:26
<asalkeld> to target the BPs at  20:26
<sdake_z> which gives us about 7 weeks to sort out the remaining bugs in g  20:26
<shardy> stevebaker: I agree, we need brainstorming/prioritisation of ideas for new features  20:26
<sdake_z> asalkeld: agree, I'll add an H series after the meeting  20:26
<sdake_z> but the key point is, we need g to work well - we can push new features into h  20:27
<sdake_z> and a bunch of bugs = not working well ;)  20:27
<sdake_z> I believe there are 4 blueprints currently, which we should wrap up  20:27
<stevebaker> shardy: every brain fart should be a blueprint, then we can "prioritise"  20:27
<sdake_z> and really try to focus on hardening the code base  20:27
*** adjohn has quit IRC  20:27
*** vipul is now known as vipul|away  20:28
*** vipul|away is now known as vipul  20:28
<sdake_z> this is different from how we typically operate  20:28
*** danwent has quit IRC  20:28
<stevebaker> sdake_z: in what way?  20:28
<sdake_z> which is to make features along the way - the guide states features should be laid out up front for the 6-month cycle  20:28
<shardy> asalkeld: re ceilometer, how close are we feature-wise to being able to use ceilometer for our metric store?  20:28
<asalkeld> a while off  20:28
<shardy> not the CW API; the metrics/alarms, I was thinking  20:28
<asalkeld> we are moving the rock in there...  20:29
<asalkeld> basically eoghan is merging synaps  20:29
<asalkeld> bit by bit  20:29
*** adjohn has joined #openstack-meeting  20:30
<shardy> asalkeld: OK, so definitely a job for H then  20:30
<stevebaker> #action Write blueprints for any current and potential future feature development (everyone)  20:30
<sdake_z> can target those for h  20:30
<sdake_z> we don't have to do that work now, we can wait until march  20:30
<asalkeld> sdake_z, I see other projects make BPs as they need them  20:31
<stevebaker> sdake_z: any particular development focus for the rest of G?  20:31
<sdake_z> we need to make a good g3 now ;)  20:31
<sdake_z> well, if you're out of things to work on, then making new BPs may make sense  20:31
<sdake_z> we can start this new process during h  20:31
<sdake_z> asalkeld: ya, maybe the PTL guide is wrong ;)  20:31
<sdake_z> i'll ask around  20:31
<asalkeld> I think that is mainly for discussion at summit  20:32
<sdake_z> right  20:32
<asalkeld> could be wrong  20:32
<sdake_z> get the new ideas for the next 3 milestones out  20:32
<sdake_z> so they can have appropriate discussion  20:32
<asalkeld> ya  20:32
<sdake_z> rather than ninja-adding features mid-release ;)  20:32
<stevebaker> that process would apply more to the heterogeneous teams that the other projects have  20:32
<stevebaker> they can only decide what to work on face-to-face at summit  20:33
<stevebaker> doesn't apply so much to us  20:33
<sdake_z> the community is growing, we want to support that model  20:33
<asalkeld> we are all in the same office ;)  20:33
<stevebaker> true  20:33
<stevebaker> The norm in other projects seems to be large numbers of blueprints, many of which don't make their targeted milestones  20:34
*** anniec has quit IRC  20:34
*** anniec_ has joined #openstack-meeting  20:34
<stevebaker> #action Start moving towards the PTL process (sdake)  20:35
<sdake_z> difference between writing a bp and implementing it  20:35
<stevebaker> #topic Open discussion  20:35
*** openstack changes topic to "Open discussion (Meeting topic: heat)"  20:35
<stevebaker> Anything else?  20:36
<asalkeld> not from me  20:36
<asalkeld> SpamapS, ?  20:37
<sdake_z> hmm, we should probably have a ptl election  20:37
<sdake_z> should have had one before the original summit  20:37
<stevebaker> why yes we should  20:37
<sdake_z> need to wait for zane to return to the office  20:37
<stevebaker> should we do it at next week's meeting?  20:37
<stevebaker> give the candidates a chance to campaign ;)  20:37
<asalkeld> there is a process  20:37
<sdake_z> there is a method  20:37
<SpamapS> noting from me at this point :) more next week  20:37
<SpamapS> nothing  20:37
*** dprince has quit IRC  20:38
<asalkeld> ok  20:38
<sdake_z> let's wait until zane returns from pto  20:38
<sdake_z> which is next week  20:38
*** anniec_ has quit IRC  20:38
*** jsavak has joined #openstack-meeting  20:38
<stevebaker> alright, it looks like that is id  20:39
<stevebaker> it  20:40
<asalkeld> yea, back to bed  20:40
<stevebaker> #endmeeting  20:40
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"  20:40
<openstack> Meeting ended Wed Jan  9 20:40:30 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  20:40
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-09-20.01.html  20:40
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-09-20.01.txt  20:40
<openstack> Log:            http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-01-09-20.01.log.html  20:40
<sdake_z> poor angus  20:40
<sdake_z> worst timezone for all this  20:40
<asalkeld> didn't sleep well  20:40
<stevebaker> hot?  20:42

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!