Wednesday, 2013-06-12

*** noslzzp has quit IRC00:00
*** nati_ueno has joined #openstack-meeting00:01
*** lloydde has quit IRC00:03
*** nati_uen_ has quit IRC00:05
*** whenry has joined #openstack-meeting00:06
*** cp16net is now known as cp16net|away00:09
*** danwent has quit IRC00:14
*** cp16net|away is now known as cp16net00:15
*** danwent has joined #openstack-meeting00:16
*** otherwiseguy has joined #openstack-meeting00:18
*** garyTh has quit IRC00:23
*** lauria has quit IRC00:24
*** tanisdl has quit IRC00:26
*** jaybuff1 has quit IRC00:26
*** danwent has quit IRC00:28
*** cp16net is now known as cp16net|away00:29
*** ivasev_ has joined #openstack-meeting00:30
*** mkollaro has joined #openstack-meeting00:31
*** ivasev has quit IRC00:34
*** lastidiot has joined #openstack-meeting00:39
*** gyee has quit IRC00:40
*** timello has quit IRC00:41
*** mkollaro has quit IRC00:42
*** timello has joined #openstack-meeting00:42
*** ayoung has quit IRC00:44
*** abhisri has joined #openstack-meeting00:45
*** zb has joined #openstack-meeting00:47
*** jcru has left #openstack-meeting00:48
*** marun has quit IRC00:49
*** hemna is now known as hemna_00:50
*** zbitter has joined #openstack-meeting00:50
*** zaneb has quit IRC00:50
*** jaybuff has joined #openstack-meeting00:51
*** jaybuff has left #openstack-meeting00:51
*** zb has quit IRC00:53
*** abhisri has quit IRC00:54
*** zbitter has quit IRC00:54
*** ivasev_ has quit IRC00:55
*** timello has quit IRC00:55
*** ivasev has joined #openstack-meeting00:55
*** Mandell has quit IRC00:55
*** timello has joined #openstack-meeting00:56
*** bdpayne has quit IRC00:58
*** hartsocks1 has joined #openstack-meeting00:59
*** hartsocks has quit IRC00:59
*** MarkAtwood has quit IRC01:01
*** noslzzp has joined #openstack-meeting01:01
*** MarkAtwood has joined #openstack-meeting01:02
*** noslzzp_ has joined #openstack-meeting01:07
*** noslzzp has quit IRC01:08
*** noslzzp_ is now known as noslzzp01:08
*** abhisri has joined #openstack-meeting01:08
*** seanrob_ has joined #openstack-meeting01:09
*** whenry has quit IRC01:09
*** zaneb has joined #openstack-meeting01:11
*** reed has quit IRC01:12
*** ivasev_ has joined #openstack-meeting01:12
*** seanrob has quit IRC01:12
*** seanrob_ has quit IRC01:14
*** ryanpetrello has quit IRC01:15
*** ivasev has quit IRC01:15
*** fnaval has joined #openstack-meeting01:21
*** sandywalsh has quit IRC01:21
*** fnaval has quit IRC01:21
*** changbl has joined #openstack-meeting01:21
*** fnaval has joined #openstack-meeting01:22
*** rods has quit IRC01:25
*** stevemar has joined #openstack-meeting01:29
*** anniec has quit IRC01:30
*** ladquin is now known as ladquin_afk01:31
*** sandywalsh has joined #openstack-meeting01:34
*** stevemar has quit IRC01:34
*** stevemar has joined #openstack-meeting01:35
*** otherwiseguy has quit IRC01:35
*** otherwiseguy has joined #openstack-meeting01:37
*** nati_ueno has quit IRC01:39
*** jasondotstar has joined #openstack-meeting01:53
*** stevemar has quit IRC01:54
*** fifieldt has joined #openstack-meeting01:56
*** ivasev_ has quit IRC02:04
*** ivasev has joined #openstack-meeting02:05
*** tzn has quit IRC02:06
*** zaneb has quit IRC02:08
*** stevemar has joined #openstack-meeting02:14
*** otherwiseguy has quit IRC02:17
*** krtaylor has joined #openstack-meeting02:20
*** zaneb has joined #openstack-meeting02:21
*** ryanpetrello has joined #openstack-meeting02:21
*** schwicht has quit IRC02:21
*** ryanpetrello has quit IRC02:22
*** rwsu is now known as rwsu-away02:24
*** zaneb has quit IRC02:25
*** whenry has joined #openstack-meeting02:52
*** stevemar has quit IRC02:59
*** novas0x2a|laptop has quit IRC03:01
*** otherwiseguy has joined #openstack-meeting03:03
*** rongze1 has joined #openstack-meeting03:04
*** rongze has quit IRC03:05
*** abhisri has quit IRC03:06
*** afazekas has joined #openstack-meeting03:07
*** ryanpetrello has joined #openstack-meeting03:07
*** neelashah has joined #openstack-meeting03:10
*** whenry has quit IRC03:12
*** ayoung has joined #openstack-meeting03:17
*** yugsuo has joined #openstack-meeting03:19
*** dcramer_ has quit IRC03:20
*** ayoung has quit IRC03:24
*** jcoufal has joined #openstack-meeting03:27
*** jasondotstar has quit IRC03:28
*** haomaiwang has joined #openstack-meeting03:29
*** cp16net|away is now known as cp16net03:31
*** stevemar has joined #openstack-meeting03:34
*** otherwiseguy has quit IRC03:34
*** SergeyLukjanov has joined #openstack-meeting03:35
*** stevemar has quit IRC03:35
*** haomaiwang has quit IRC03:38
*** haomaiwang has joined #openstack-meeting03:38
*** colinmcnamara has joined #openstack-meeting03:39
*** sukhdev has joined #openstack-meeting03:41
*** MarkAtwood has quit IRC03:42
*** MarkAtwood has joined #openstack-meeting03:43
*** bdpayne has joined #openstack-meeting03:48
*** Mandell has joined #openstack-meeting03:51
*** martine_ has quit IRC03:53
*** mdomsch has joined #openstack-meeting03:57
*** Tross has joined #openstack-meeting03:57
*** colinmcnamara has quit IRC04:03
*** jcoufal has quit IRC04:03
*** reed has joined #openstack-meeting04:03
*** bdpayne has quit IRC04:06
*** jcoufal has joined #openstack-meeting04:11
*** neelashah has quit IRC04:11
*** sushils has quit IRC04:13
*** mdomsch_ has joined #openstack-meeting04:19
*** rostam has quit IRC04:20
*** mdomsch has quit IRC04:20
*** terry7 has quit IRC04:20
*** rostam has joined #openstack-meeting04:23
*** bdpayne has joined #openstack-meeting04:24
*** zaneb has joined #openstack-meeting04:28
*** zb has joined #openstack-meeting04:38
*** zaneb has quit IRC04:41
*** timello_ has joined #openstack-meeting04:45
*** danwent has joined #openstack-meeting04:46
*** timello has quit IRC04:48
*** SergeyLukjanov has quit IRC05:06
*** bdpayne has quit IRC05:07
*** ivasev has quit IRC05:09
*** lastidiot has quit IRC05:10
*** nati_ueno has joined #openstack-meeting05:12
*** nati_ueno has quit IRC05:13
*** whenry has joined #openstack-meeting05:13
*** nati_ueno has joined #openstack-meeting05:13
*** bdpayne has joined #openstack-meeting05:14
*** danwent has quit IRC05:14
*** zbitter has joined #openstack-meeting05:16
*** SergeyLukjanov has joined #openstack-meeting05:16
*** zb has quit IRC05:17
*** zbitter has quit IRC05:21
*** zaneb has joined #openstack-meeting05:22
*** zb has joined #openstack-meeting05:24
*** zbitter has joined #openstack-meeting05:25
*** zaneb has quit IRC05:27
*** zb has quit IRC05:29
*** zbitter has quit IRC05:30
*** cody-somerville has quit IRC05:31
*** garyk has joined #openstack-meeting05:35
*** fsargent has quit IRC05:37
*** edleafe has quit IRC05:37
*** Mandell has quit IRC05:37
*** oubiwann has quit IRC05:37
*** sivel has quit IRC05:38
*** annegentle has quit IRC05:38
*** topol has quit IRC05:38
*** hughsaunders has quit IRC05:38
*** pvo has quit IRC05:39
*** Guest19584 has quit IRC05:39
*** troytoman-away has quit IRC05:39
*** terriyu has quit IRC05:43
*** troytoman-away has joined #openstack-meeting05:43
*** dabo has joined #openstack-meeting05:44
*** SergeyLukjanov has quit IRC05:44
*** annegentle has joined #openstack-meeting05:44
*** apech has joined #openstack-meeting05:44
*** lillie has joined #openstack-meeting05:44
*** hughsaunders has joined #openstack-meeting05:45
*** lillie is now known as Guest7971705:45
*** pvo has joined #openstack-meeting05:45
*** sivel has joined #openstack-meeting05:45
*** ryanpetrello has quit IRC05:47
*** fsargent has joined #openstack-meeting05:47
*** oubiwann has joined #openstack-meeting05:48
*** bdpayne has quit IRC05:49
*** timello_ has quit IRC05:52
*** Mandell has joined #openstack-meeting05:53
*** dguitarbite has joined #openstack-meeting05:55
*** danwent has joined #openstack-meeting05:56
*** timello_ has joined #openstack-meeting05:58
*** koolhead17 has joined #openstack-meeting06:05
*** koolhead17 has joined #openstack-meeting06:05
*** dguitarbite has quit IRC06:08
*** nati_uen_ has joined #openstack-meeting06:08
*** mdomsch_ has quit IRC06:08
*** mdomsch has joined #openstack-meeting06:08
*** MarkAtwood has left #openstack-meeting06:09
*** nati_ueno has quit IRC06:11
*** derekh has joined #openstack-meeting06:12
*** kebray has joined #openstack-meeting06:13
*** timello_ has quit IRC06:13
*** stevebaker_ is now known as stevebaker06:14
*** dguitarbite has joined #openstack-meeting06:14
*** henrynash has joined #openstack-meeting06:16
*** tayyab has joined #openstack-meeting06:16
*** tayyab has joined #openstack-meeting06:17
*** timello_ has joined #openstack-meeting06:17
*** mrunge has joined #openstack-meeting06:19
*** whenry has quit IRC06:20
*** jpeeler has quit IRC06:22
*** danwent has quit IRC06:26
*** blamar has quit IRC06:29
*** kebray has quit IRC06:32
*** timello_ has quit IRC06:34
*** koolhead17 has quit IRC06:36
*** jpeeler has joined #openstack-meeting06:41
*** timello_ has joined #openstack-meeting06:46
*** ttrifonov is now known as ttrifonov_zZzz06:47
*** mdomsch has quit IRC06:48
*** ttrifonov_zZzz is now known as ttrifonov06:48
*** flaper87 has joined #openstack-meeting06:55
*** nati_ueno has joined #openstack-meeting06:56
*** nati_uen_ has quit IRC06:59
*** dguitarbite has quit IRC07:04
*** garyk has quit IRC07:09
*** garyk has joined #openstack-meeting07:09
*** shang has joined #openstack-meeting07:10
*** henrynash has quit IRC07:11
*** timello_ has quit IRC07:13
*** timello_ has joined #openstack-meeting07:14
*** gakott has joined #openstack-meeting07:17
*** boris-42 has joined #openstack-meeting07:17
*** dkehn has joined #openstack-meeting07:19
*** oubiwann has quit IRC07:20
*** garyk has quit IRC07:20
*** oubiwann has joined #openstack-meeting07:22
*** jhenner has joined #openstack-meeting07:24
*** afazekas has quit IRC07:25
*** zaneb has joined #openstack-meeting07:27
*** timello_ has quit IRC07:32
*** pvo has quit IRC07:32
*** jtomasek has joined #openstack-meeting07:33
*** timello_ has joined #openstack-meeting07:33
*** apech has quit IRC07:34
*** pvo has joined #openstack-meeting07:34
*** psedlak has joined #openstack-meeting07:40
*** psedlak has quit IRC07:41
*** saschpe has joined #openstack-meeting07:43
*** mkollaro has joined #openstack-meeting07:53
*** saschpe has quit IRC07:55
*** tayyab_ has joined #openstack-meeting07:59
*** mrunge has quit IRC08:01
*** salv-orlando has joined #openstack-meeting08:01
*** mrunge has joined #openstack-meeting08:02
*** tayyab has quit IRC08:02
*** tayyab_ has quit IRC08:06
*** tayyab has joined #openstack-meeting08:06
*** johnthetubaguy has joined #openstack-meeting08:07
*** johnthetubaguy has quit IRC08:10
*** johnthetubaguy has joined #openstack-meeting08:10
*** psedlak has joined #openstack-meeting08:16
*** henrynash has joined #openstack-meeting08:18
*** saschpe has joined #openstack-meeting08:23
*** salv-orlando has quit IRC08:23
*** egallen has joined #openstack-meeting08:25
*** locke105 has quit IRC08:28
*** henrynash has quit IRC08:36
*** tzn has joined #openstack-meeting08:41
*** mkollaro has quit IRC08:41
*** Mandell has quit IRC08:46
*** dkehn has quit IRC08:47
*** henrynash has joined #openstack-meeting08:47
*** skort has joined #openstack-meeting08:48
*** dkehn has joined #openstack-meeting08:48
*** afazekas has joined #openstack-meeting08:50
*** garyk has joined #openstack-meeting08:50
*** gakott has quit IRC08:51
*** gakott has joined #openstack-meeting08:52
*** skort has quit IRC08:53
*** the_akshat has quit IRC08:53
*** salv-orlando has joined #openstack-meeting08:54
*** akshat_at_cern_ has quit IRC08:54
*** garyk has quit IRC08:56
*** mkollaro has joined #openstack-meeting08:57
*** jesusaurus has quit IRC09:00
*** zaro has quit IRC09:03
*** mkollaro has quit IRC09:04
*** mkollaro1 has joined #openstack-meeting09:04
*** dkehn_ has joined #openstack-meeting09:05
*** dkehn has quit IRC09:05
*** zaro has joined #openstack-meeting09:05
*** afazekas has quit IRC09:07
*** tayyab has quit IRC09:08
*** jesusaurus has joined #openstack-meeting09:09
*** timello_ has quit IRC09:09
*** timello_ has joined #openstack-meeting09:14
*** gakott has quit IRC09:15
*** afazekas has joined #openstack-meeting09:20
*** salv-orlando has quit IRC09:23
*** sushils has joined #openstack-meeting09:25
*** sushils has quit IRC09:29
*** sushils has joined #openstack-meeting09:30
*** rods has joined #openstack-meeting09:40
*** afazekas has quit IRC09:41
*** haomaiwang has quit IRC09:51
*** dkehn_ is now known as dkehn10:03
*** mkollaro1 has quit IRC10:06
*** mkollaro has joined #openstack-meeting10:06
*** afazekas has joined #openstack-meeting10:08
*** jcoufal has quit IRC10:12
*** jcoufal has joined #openstack-meeting10:13
*** tayyab has joined #openstack-meeting10:17
*** gongysh has joined #openstack-meeting10:18
*** tayyab has joined #openstack-meeting10:18
*** angus has quit IRC10:21
*** MIDENN_ has quit IRC10:22
*** KurtMartin has quit IRC10:24
*** pcm__ has joined #openstack-meeting10:27
*** pcm__ has quit IRC10:27
*** pcm__ has joined #openstack-meeting10:27
*** torgomatic has quit IRC10:31
*** oubiwann has quit IRC10:32
*** annegentle has quit IRC10:32
*** fsargent has quit IRC10:32
*** Guest79717 has quit IRC10:32
*** dabo has quit IRC10:32
*** zigo has quit IRC10:32
*** pvo has quit IRC10:32
*** zigo has joined #openstack-meeting10:32
*** fungi has quit IRC10:33
*** annegentle has joined #openstack-meeting10:37
*** pvo has joined #openstack-meeting10:37
*** dabo has joined #openstack-meeting10:37
*** lillie has joined #openstack-meeting10:37
*** lillie is now known as Guest7915910:37
*** torgomatic has joined #openstack-meeting10:38
*** topol has joined #openstack-meeting10:38
*** fungi has joined #openstack-meeting10:38
*** oubiwann has joined #openstack-meeting10:39
*** fsargent has joined #openstack-meeting10:40
*** timello_ has quit IRC10:40
*** nati_ueno has quit IRC10:43
*** asalkeld has joined #openstack-meeting10:44
*** timello_ has joined #openstack-meeting10:45
*** oubiwann has quit IRC10:46
*** oubiwann has joined #openstack-meeting10:48
*** mkollaro has quit IRC10:56
*** dripton has quit IRC11:01
*** dripton has joined #openstack-meeting11:01
*** jbr_ has joined #openstack-meeting11:06
*** johnthetubaguy has quit IRC11:16
*** martine has joined #openstack-meeting11:18
*** HenryG has joined #openstack-meeting11:18
*** johnthetubaguy has joined #openstack-meeting11:18
*** HenryG_ has quit IRC11:18
*** chuckieb has joined #openstack-meeting11:19
*** adalbas has joined #openstack-meeting11:20
*** sushils has quit IRC11:21
*** yugsuo has quit IRC11:23
*** tedross has joined #openstack-meeting11:33
*** henrynash has quit IRC11:36
*** jbr_ has left #openstack-meeting11:40
*** Vivek has quit IRC11:41
*** Vivek has joined #openstack-meeting11:42
*** martine has quit IRC11:43
*** Vivek is now known as Guest4312411:43
*** fsargent has quit IRC11:43
*** johnthetubaguy has quit IRC11:43
*** johnthetubaguy has joined #openstack-meeting11:43
*** fsargent has joined #openstack-meeting11:45
*** zaneb has left #openstack-meeting11:51
*** gakott has joined #openstack-meeting11:54
*** fifieldt has quit IRC11:55
*** topol has quit IRC11:56
*** topol has joined #openstack-meeting11:57
*** sushils has joined #openstack-meeting11:58
*** SergeyLukjanov has joined #openstack-meeting11:59
*** sandywalsh has quit IRC12:18
*** timello_ has quit IRC12:22
*** timello has joined #openstack-meeting12:22
*** gakott has quit IRC12:22
*** gakott has joined #openstack-meeting12:22
*** gakott has quit IRC12:22
*** gakott has joined #openstack-meeting12:23
*** dprince has joined #openstack-meeting12:24
*** johnthetubaguy1 has joined #openstack-meeting12:27
*** johnthetubaguy has quit IRC12:29
*** skort has joined #openstack-meeting12:29
*** ryanpetrello has joined #openstack-meeting12:30
*** sandywalsh has joined #openstack-meeting12:30
*** gakott has quit IRC12:33
*** marun has joined #openstack-meeting12:35
*** haomaiwang has joined #openstack-meeting12:35
*** Guest43124 is now known as Vivek12:37
*** Vivek is now known as Guest9474812:37
*** Guest94748 is now known as Vivek12:37
*** skort has quit IRC12:37
*** Vivek has quit IRC12:37
*** Vivek has joined #openstack-meeting12:37
*** skort has joined #openstack-meeting12:38
*** jasondotstar has joined #openstack-meeting12:42
*** timello has quit IRC12:43
*** gakott has joined #openstack-meeting12:43
*** timello has joined #openstack-meeting12:44
*** garyk has joined #openstack-meeting12:45
*** topol has quit IRC12:46
*** skort has quit IRC12:47
*** gakott has quit IRC12:48
*** kiall has left #openstack-meeting12:49
*** gakott has joined #openstack-meeting12:57
*** gakott has quit IRC12:58
*** gakott has joined #openstack-meeting12:58
*** schwicht has joined #openstack-meeting12:59
*** garyk has quit IRC13:00
*** michchap has quit IRC13:05
*** dolphm has joined #openstack-meeting13:09
*** rnirmal has joined #openstack-meeting13:10
*** changbl has quit IRC13:15
*** dcramer_ has joined #openstack-meeting13:20
*** skort has joined #openstack-meeting13:21
*** dolphm has quit IRC13:21
*** krtaylor has quit IRC13:24
*** gakott has quit IRC13:24
*** jecarey has joined #openstack-meeting13:24
*** mtreinish has joined #openstack-meeting13:28
*** lbragstad has joined #openstack-meeting13:28
*** stevebaker has quit IRC13:28
*** anniec has joined #openstack-meeting13:30
*** mrunge has quit IRC13:32
*** neelashah has joined #openstack-meeting13:33
*** stackKid has joined #openstack-meeting13:33
*** vijendar has joined #openstack-meeting13:34
*** michchap has joined #openstack-meeting13:35
*** spzala has joined #openstack-meeting13:37
*** SergeyLukjanov has quit IRC13:42
*** marcosmamorim has joined #openstack-meeting13:44
*** michchap has quit IRC13:46
*** SergeyLukjanov has joined #openstack-meeting13:49
*** Tross has quit IRC13:49
*** fnaval has quit IRC13:50
*** haomaiwang has quit IRC13:51
*** haomaiwang has joined #openstack-meeting13:52
*** terriyu has joined #openstack-meeting13:52
*** kchenweijie has joined #openstack-meeting13:52
*** kchenweijie has left #openstack-meeting13:52
*** garyTh has joined #openstack-meeting13:52
*** feleouet has joined #openstack-meeting13:53
*** skort has quit IRC13:53
*** skort has joined #openstack-meeting13:53
*** ociuhandu has left #openstack-meeting13:54
*** eharney has joined #openstack-meeting13:55
*** maroh has joined #openstack-meeting13:57
*** anniec_ has joined #openstack-meeting14:00
*** rcurran has joined #openstack-meeting14:01
*** irenab has joined #openstack-meeting14:02
*** anniec has quit IRC14:03
*** anniec_ is now known as anniec14:03
*** mrodden has joined #openstack-meeting14:04
*** zehicle_at_dell has joined #openstack-meeting14:04
*** markmcclain has joined #openstack-meeting14:04
*** otherwiseguy has joined #openstack-meeting14:06
*** tedross has quit IRC14:07
*** lastidiot has joined #openstack-meeting14:09
*** maroh has quit IRC14:09
*** cp16net is now known as cp16net|away14:11
*** matrohon has joined #openstack-meeting14:11
*** noslzzp has quit IRC14:12
*** apech has joined #openstack-meeting14:12
*** michchap has joined #openstack-meeting14:12
*** mdenny has joined #openstack-meeting14:13
*** krtaylor has joined #openstack-meeting14:15
*** dhellmann-away has quit IRC14:15
*** irenab has quit IRC14:17
*** hartsocks1 has quit IRC14:17
*** hartsocks has joined #openstack-meeting14:18
*** jecarey has quit IRC14:19
*** dhellmann has joined #openstack-meeting14:19
*** SergeyLukjanov has quit IRC14:22
*** krtaylor has quit IRC14:22
*** zehicle_at_dell has quit IRC14:23
*** lastidiot has quit IRC14:23
*** topol has joined #openstack-meeting14:23
*** michchap has quit IRC14:24
*** krtaylor has joined #openstack-meeting14:24
*** changbl has joined #openstack-meeting14:25
<mestery> Folks, is anyone in this channel early for the ML2 meeting at 15:00UTC? I messed up, it's actually at 14:00UTC. :(14:25
*** fnaval has joined #openstack-meeting14:25
<mestery> If enough of a quorum is here, we can do the meeting now for the next 30 minutes, or just cancel this week's.14:25
<apech> sure, let's do it14:27
<markmcclain> mestery: nobody has the room at 150014:28
<markmcclain> so we could still do it then unless you have a conflict14:28
<mestery> markmcclain: I thought xen-api did? Let me re-check.14:28
*** acfleury has joined #openstack-meeting14:28
*** adalbas has quit IRC14:28
<markmcclain> ah they do14:28
<mestery> See here: https://wiki.openstack.org/wiki/Meetings#XenAPI_team_meeting XenApi has it 15:00UTC Wednesday14:28
<markmcclain> they put a : in their time which is why searching didn't find it14:29
<mestery> I hit that same problem while searching14:29
<mestery> :D14:29
<mestery> So, we do a quick one now if there are enough folks?14:29
*** blamar has joined #openstack-meeting14:29
*** blamar has quit IRC14:29
*** skort has quit IRC14:29
*** yugsuo has joined #openstack-meeting14:29
<apech> I have some questions I'd love to float by you on the MechanismDriver regardless14:29
*** rkukura has joined #openstack-meeting14:29
*** skort has joined #openstack-meeting14:29
<mestery> apech: Cool, maybe we should just move forward with the meeting14:29
<markmcclain> could also roll over to openstack-meeting-alt14:30
<mestery> I see rkukura has joined too14:30
<apech> Looks like Bob just joined too14:30
<mestery> apech: jinx! :)14:30
<mestery> markmcclain: Good call! Should we just use openstack-meeting-alt in 30 minutes?14:30
<mestery> To let others join? I'll send an email to the list if so.14:30
*** skort has quit IRC14:30
*** garyk has joined #openstack-meeting14:31
<markmcclain> I think that makes sense for today and then next week we can move to 140014:31
<rkukura> I'm here, but am on duty at the OpenStack "ask the experts" booth at Red Hat Summit, so don't be surprised if I am slow to respond.14:31
<mestery> markmcclain: Let's do it! I'll send emails to the list now.14:32
<apech> mestery: sounds great14:32
<mestery> So for folks here, we'll do the ML2 networking meeting on #openstack-meeting-alt in 30 minutes14:32
*** matiu has quit IRC14:32
*** dolphm_ has joined #openstack-meeting14:32
*** dolphm_ has quit IRC14:34
*** jecarey has joined #openstack-meeting14:37
<feleouet> Hmmm, sorry for the ML post, as nobody was speaking, I wasn't looking at this chan any more...14:38
*** afazekas has quit IRC14:38
*** rcurran has quit IRC14:39
*** noslzzp has joined #openstack-meeting14:39
<mestery> feleouet: No worries, we'll use #openstack-meeting-alt in 20 minutes.14:41
*** noslzzp has quit IRC14:42
*** cp16net|away is now known as cp16net14:43
*** johnthetubaguy1 is now known as johnthetubaguy14:45
*** maoy has joined #openstack-meeting14:47
*** maoy has quit IRC14:47
*** med_ has joined #openstack-meeting14:48
*** dolphm has joined #openstack-meeting14:49
*** michchap has joined #openstack-meeting14:50
*** jtomasek has quit IRC14:50
*** stevemar has joined #openstack-meeting14:50
*** noslzzp has joined #openstack-meeting14:51
*** martine has joined #openstack-meeting14:52
*** noslzzp has quit IRC14:52
*** bobba has joined #openstack-meeting14:53
*** kchenweijie1 has joined #openstack-meeting14:56
*** kchenweijie1 has left #openstack-meeting14:57
*** martines__ has quit IRC14:57
*** otherwiseguy has quit IRC14:58
* johnthetubaguy is watching the clock14:59
* bobba is watching johnthetubaguy watching the clock14:59
<johnthetubaguy> lol14:59
*** matel has joined #openstack-meeting14:59
*** yaguang has joined #openstack-meeting14:59
<johnthetubaguy> #startmeeting xenapi14:59
<openstack> Meeting started Wed Jun 12 14:59:56 2013 UTC.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.14:59
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:59
*** openstack changes topic to " (Meeting topic: xenapi)"14:59
<openstack> The meeting name has been set to 'xenapi'15:00
*** Sukhdev has joined #openstack-meeting15:00
<johnthetubaguy> no actions from last time, straight to blueprint15:00
<matel> ok15:00
<johnthetubaguy> #topic Blueprints15:00
*** openstack changes topic to "Blueprints (Meeting topic: xenapi)"15:00
<johnthetubaguy> so, quick show of hands, who is here?15:00
<matel> one15:00
*** medberry has joined #openstack-meeting15:01
*** medberry has joined #openstack-meeting15:01
*** asomya has joined #openstack-meeting15:01
<johnthetubaguy> well, OK15:01
<johnthetubaguy> anything for the agenda?15:01
<johnthetubaguy> my plan was to talk about blueprints, and where we are at15:01
<johnthetubaguy> so I just added a new blueprint, I am kinda working on it15:02
<johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/xenapi-large-ephemeral-disk-support15:02
<matel> which one is that?15:02
<matel> ok15:02
*** ZangMingJie has joined #openstack-meeting15:02
*** lastidiot has joined #openstack-meeting15:02
*** michchap has quit IRC15:02
<johnthetubaguy> basically get a big ephemeral disk, but split the space across lots of VDIs, because VDIs can't be bigger than 2TB15:02
<johnthetubaguy> OK, other ones15:02
<johnthetubaguy> I have patches up for server-log stuff15:03
<johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/xenapi-server-log15:03
<bobba> yeah - they were looking good to me15:03
<bobba> :)15:03
<matel> Back to ephemeral15:03
<johnthetubaguy> but the detailed log management is a TODO, and still trying to get someone other than me to do that :)15:03
<johnthetubaguy> matel: sure15:03
<matel> I have a strange feeling about nova creating partitions.15:04
<matel> inside a guest disk.15:04
<johnthetubaguy> why?15:04
<johnthetubaguy> ephemeral are blank disks, created by nova15:04
<matel> I think it is not nova's job, it should be left to the user, that's all.15:04
*** tayyab has quit IRC15:04
matel"In addition, which such large disks, we should allow admins to configure nova such that a partition is created on the disk"15:05
bobbabrb - 1 sec15:05
johnthetubaguywell, maybe, but the expectation is that they are created now, sadly that ship might have sailed15:05
*** fmanco has joined #openstack-meeting15:05
johnthetubaguymy plan is to start allowing the filesystem create be skipped, mostly because it will take ages15:06
johnthetubaguyanyways, lets keep moving I guess15:06
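
To make the VDI-splitting idea above concrete, a minimal sketch (illustrative only, not the actual nova/xenapi code) of carving a requested ephemeral size into sub-2TB VDIs:

    # Minimal sketch: divide a requested ephemeral size into VDI-sized
    # chunks, since a single VDI cannot exceed 2 TB. Illustrative only.
    VDI_MAX_GB = 2 * 1024  # the 2 TB VDI ceiling, expressed in GB

    def split_ephemeral(total_gb):
        """Return a list of VDI sizes (in GB) summing to total_gb."""
        sizes = []
        remaining = total_gb
        while remaining > 0:
            chunk = min(remaining, VDI_MAX_GB)
            sizes.append(chunk)
            remaining -= chunk
        return sizes

    # split_ephemeral(5000) -> [2048, 2048, 904]
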
<johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/xenapi-guest-agent-cloud-init-interop15:06
*** fmanco has left #openstack-meeting15:06
<johnthetubaguy> this one is taking some time, but I think we are slowly moving forward15:06
<johnthetubaguy> spotted a bug in my patch to skip the agent per image, and uploaded a fix for that today15:07
<bobba> is there a block on it at all?15:07
*** dgollub has joined #openstack-meeting15:07
*** kmartin has joined #openstack-meeting15:07
<johnthetubaguy> block on what?15:07
*** Sukhdev has quit IRC15:07
<bobba> that bp15:08
<johnthetubaguy> there's no blocker on it except people and effort really15:08
*** jdurgin1 has quit IRC15:08
<johnthetubaguy> it's mostly testing still, just scoping out how the agent and cloud-init play together15:09
*** asomya_ has joined #openstack-meeting15:09
<bobba> okay15:09
<bobba> cool15:09
<johnthetubaguy> it's mostly fine so far, just trying to get our test guys engaged15:09
<johnthetubaguy> cool, so that is all the updates from me15:09
<johnthetubaguy> I am feeling OK about getting that stuff in Havana15:10
<johnthetubaguy> H-2 is going to be close, but might still happen, for the key bits anyways15:10
*** noslzzp has joined #openstack-meeting15:10
<johnthetubaguy> So questions for bobba and matel15:10
<bobba> I forget - when is H-2?15:10
<johnthetubaguy> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule15:10
*** asomya has quit IRC15:11
<bobba> ah good15:11
<johnthetubaguy> 18th July is when the branch is cut, so 11th July latest for code up there for any hope of hitting H-2 I would guess15:11
<johnthetubaguy> so we have a few blueprints for H-2 that are not started yet15:11
*** tanisdl has joined #openstack-meeting15:11
*** tanisdl has quit IRC15:11
<johnthetubaguy> are we still happy for H-2?15:11
<johnthetubaguy> https://blueprints.launchpad.net/nova/+spec/xenapi-compute-driver-events15:12
<bobba> the event driver is likely to be H-315:12
<johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/xenapi-move-driver-to-oslo15:12
<johnthetubaguy> OK, should we update that?15:12
<bobba> it might make H-2 but it's not a priority atm15:12
<bobba> moved15:12
<johnthetubaguy> so that's a yes I guess15:12
<johnthetubaguy> cool15:12
<johnthetubaguy> matel: what about the oslo work?15:13
<matel> Not started yet15:13
<johnthetubaguy> is that likely for H2? are you starting it soon?15:13
*** tanisdl has joined #openstack-meeting15:14
<matel> No idea.15:14
*** noslzzp has quit IRC15:14
<johnthetubaguy> OK15:14
<matel> It's not in this sprint.15:14
<johnthetubaguy> do you want that to make H? or not that high priority?15:14
<johnthetubaguy> what is your sprint?15:14
<matel> Let's discuss this next week.15:15
<matel> You can file an action, if you like.15:15
<johnthetubaguy> action for what?15:15
<matel> It is unlikely that I will start it in the next 2 weeks.15:15
<matel> So let's say, it won't make H2.15:15
<johnthetubaguy> OK, which means not H215:15
<johnthetubaguy> cool15:15
<johnthetubaguy> moved it to h315:16
<johnthetubaguy> any other stuff people want to cover about work that is coming up?15:16
<johnthetubaguy> i.e. not bugs15:16
<matel> #link https://review.openstack.org/#/c/31977/15:16
<matel> Aligned to your comments.15:16
<matel> I think it got better.15:16
<johnthetubaguy> OK, cool15:16
<matel> Give it a +1 if you like it.15:16
<johnthetubaguy> I will take a look after the meeting15:17
<matel> Do you have any idea how we could speed up quantum reviews?15:17
<matel> #link https://review.openstack.org/#/c/31077/15:17
<johnthetubaguy> not really I am afraid15:17
<matel> Okay.15:17
*** mrodden has quit IRC15:18
<johnthetubaguy> well, you could get the bug targeted and given a priority15:18
<johnthetubaguy> best bet is to ask people in the quantum IRC channel I guess15:18
<johnthetubaguy> see what they think15:18
<matel> That's a good idea, I will try that.15:18
<johnthetubaguy> have you seen this:15:18
<johnthetubaguy> #link http://status.openstack.org/reviews/15:19
<johnthetubaguy> quite a few people use that, so gives you an idea of how people see the priority of your review15:19
<johnthetubaguy> OK15:20
<johnthetubaguy> so let's move on15:20
<johnthetubaguy> nothing on Docs this week I assume?15:20
<matel> nothing15:20
<johnthetubaguy> will move to Bugs/QA15:20
<johnthetubaguy> #topic Bugs and QA15:20
*** openstack changes topic to "Bugs and QA (Meeting topic: xenapi)"15:20
<bobba> right, I'm afraid - although the docs are quite high on our backlog so they might make it into the next sprint (we're sprinting fortnightly)15:21
<bobba> First news on bugs is that Euan has fixed the /sys/hypervisor/uuid bug!15:21
<johnthetubaguy> so smokestack died again15:21
<johnthetubaguy> cool15:21
<bobba> We worked around it for OS but that has been fixed now :)15:21
<johnthetubaguy> ah, cool15:21
*** colinmcnamara has joined #openstack-meeting15:21
*** jdurgin1 has joined #openstack-meeting15:21
<johnthetubaguy> any news on fixing smokestack, is DanP still really owning that?15:22
<bobba> I just sent a list15:22
<bobba> it's fixed15:22
<bobba> I just sent a mail to the list *15:22
<bobba> the hosts had run out of the 30 day trial license15:22
<bobba> I've applied a free license which means that VMs can now be started15:22
<johnthetubaguy> arse15:23
*** matiu has joined #openstack-meeting15:23
*** matiu has joined #openstack-meeting15:23
<johnthetubaguy> thanks for sorting that15:23
<bobba> I guess previously they were using the RS license15:23
*** dhellmann_ has joined #openstack-meeting15:23
<bobba> which was very naughty15:23
<bobba> well - only slightly naughty.15:23
<johnthetubaguy> well it's in our DC, lol15:24
<johnthetubaguy> but true15:24
<bobba> I'd forgotten that a fresh machine only had a trial license15:24
<johnthetubaguy> no worries15:24
<johnthetubaguy> the agent does ssh_key injection in the current openstack code, so that might help smokestack15:25
<johnthetubaguy> but anyways15:25
<johnthetubaguy> good to see it resolved15:25
<johnthetubaguy> #topic Open Discussion15:25
*** openstack changes topic to "Open Discussion (Meeting topic: xenapi)"15:25
*** mkoderer has joined #openstack-meeting15:25
<bobba> I say resolved, but it's now in Dan's court :)15:25
<johnthetubaguy> any more for any more?15:25
<matel> no15:25
<bobba> yes15:26
<johnthetubaguy> fire away15:26
<bobba> I'll be going to the OpenStack infra bootcamp in New York on the 28th and 29th to talk about gating XenServer15:26
<johnthetubaguy> cool15:26
<bobba> or gating a XenAPI devstack/tempest test15:26
*** dhellmann has quit IRC15:26
*** dhellmann_ is now known as dhellmann15:26
<bobba> might need us to run devstack in dom0 for simplicity15:26
*** kebray has joined #openstack-meeting15:26
<johnthetubaguy> you spoken to Rainya and crew who might go too?15:27
<bobba> I only found out about it last night :D15:27
*** SergeyLukjanov has joined #openstack-meeting15:27
<johnthetubaguy> lol, OK15:27
*** danwent has joined #openstack-meeting15:27
<bobba> I went to the infra IRC meeting and they said "Are you coming?"15:27
<bobba> The budget got approved about 60 minutes ago.15:27
<johnthetubaguy> ah, I spotted an email about it, but nothing more15:27
<johnthetubaguy> cool15:27
<johnthetubaguy> so, run in Dom0 so it's the same as KVM I guess15:28
<bobba> Not sure I've spoken to Rainya before have I?15:28
<bobba> yup - significantly easier to fit in the current CI architecture... doesn't have two IPs that it needs to care about...15:28
<johnthetubaguy> not sure, she is the manager of the deploy team who are looking into this stuff15:28
*** michchap has joined #openstack-meeting15:28
<bobba> of course, we'll keep running in domU for the isolation case - but we can support both15:28
<johnthetubaguy> sure15:29
*** blamar has joined #openstack-meeting15:29
<bobba> okay - would you mind sending an introductory email15:29
<johnthetubaguy> you guys going to look at running in Dom0 then?15:29
<bobba> Hopefully next sprint (but isn't that always the way?)15:29
<bobba> and we'll probably keep the default as domU15:29
<bobba> and there will always be good reasons to run in DomU (separation, security, performance...)15:29
*** dkehn has quit IRC15:30
*** mrodden has joined #openstack-meeting15:30
<johnthetubaguy> so what about running Ubuntu 12.04 + xcp-api?15:30
<bobba> so it might not be a "supported" configuration but just used to test one aspect of XenAPI in the gate15:30
*** vijendar has quit IRC15:30
<bobba> I'm very careful in saying XenAPI because I don't want to restrict the gate to being XenServer15:30
<bobba> we could run XenAPI (possibly ubuntu base, possibly centos or fedora) with nova in dom0 in the CI and XenServer with nova in a domU with cloudcafe :D15:31
<bobba> that'd be my ideal15:31
<bobba> maybe15:31
<bobba> :)15:31
<johnthetubaguy> well nothing is really supported, but things are tested to different degrees15:31
<johnthetubaguy> well I think Ubuntu 12.04 base + xcp-api packages should work15:31
<bobba> yes - but we're also moving towards the brave new world of proper XAPI packages15:32
<johnthetubaguy> sure, but we want to catch OpenStack bugs15:32
<bobba> so if we can I'd like to get them running in the gate rather than the existing "legacy" packages15:32
<bobba> yes - they need to be stable first, of course - but they are well on the way15:33
<johnthetubaguy> this might be useful info: http://www.brw.com.au/p/tech-gadgets/rackspace_to_launch_opencloud_from_wnuHk6S8Ep49Uu3skHQ3HL15:33
*** colinmcnamara has quit IRC15:33
<johnthetubaguy> OK, sounds good15:34
<johnthetubaguy> any more?15:34
*** colinmcnamara has joined #openstack-meeting15:34
<bobba> yay - it's official now :)15:34
<johnthetubaguy> well it's official it's coming soon15:34
<bobba> I did sympathise with Ant that he wasn't forced to fly out there to assist with setting it up15:34
<johnthetubaguy> lol15:35
<johnthetubaguy> #action johnthetubaguy to introduce bobba to the deploy team15:35
<johnthetubaguy> I think that's all now?15:35
<matel> y bye15:35
<matel> \quit15:35
*** matel has quit IRC15:35
<bobba> great - thanks John15:36
<bobba> I'm done too15:36
<johnthetubaguy> same time next week?15:36
<johnthetubaguy> probably quick15:36
<bobba> sure15:37
<johnthetubaguy> thanks all15:37
<johnthetubaguy> #endmeeting15:37
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"15:37
<openstack> Meeting ended Wed Jun 12 15:37:42 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:37
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-06-12-14.59.html15:37
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-06-12-14.59.txt15:37
<openstack> Log:            http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-06-12-14.59.log.html15:37
*** michchap has quit IRC15:39
*** Akendo has joined #openstack-meeting15:41
*** kchenweijie has joined #openstack-meeting15:42
*** kchenweijie has left #openstack-meeting15:42
*** yaguang has quit IRC15:43
*** dkehn has joined #openstack-meeting15:44
*** timello has quit IRC15:45
*** timello has joined #openstack-meeting15:45
*** cody-somerville has joined #openstack-meeting15:47
*** zhiyan has joined #openstack-meeting15:48
<seiflotfy_> Akendo:15:51
<seiflotfy_> ping15:51
<Akendo> seiflotfy_: pong15:51
*** otherwiseguy has joined #openstack-meeting15:51
*** dachary has joined #openstack-meeting15:51
<mkoderer> hi15:51
*** colinmcnamara has quit IRC15:52
<Akendo> Hi15:52
*** bdpayne has joined #openstack-meeting15:52
*** afazekas has joined #openstack-meeting15:52
*** reed has joined #openstack-meeting15:52
*** ajforrest has joined #openstack-meeting15:53
*** apech has quit IRC15:53
*** Navneet has joined #openstack-meeting15:53
*** vijendar has joined #openstack-meeting15:54
*** bpb has joined #openstack-meeting15:55
*** pschaef has joined #openstack-meeting15:55
*** matrohon has quit IRC15:55
*** thingee has joined #openstack-meeting15:56
*** yugsuo has quit IRC15:56
<seiflotfy_> meeting in 3 minutes?15:57
*** mkollaro has joined #openstack-meeting15:58
*** bswartz has joined #openstack-meeting15:59
*** winston-d has joined #openstack-meeting15:59
*** haomaiwang has quit IRC16:00
<jgriffith> #startmeeting cinder16:00
<openstack> Meeting started Wed Jun 12 16:00:15 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: cinder)"16:00
<openstack> The meeting name has been set to 'cinder'16:00
<jgriffith> Hey everyone!16:00
<thingee> o/16:00
<zhiyan> hi16:00
*** xyang_ has joined #openstack-meeting16:00
<mkoderer> hi16:00
<winston-d> hi16:00
<jgriffith> agenda for today: https://wiki.openstack.org/wiki/CinderMeetings16:00
<xyang_> hi16:00
*** haomaiwang has joined #openstack-meeting16:00
<bswartz> hello16:01
<jgriffith> One thing I'd like to ask, when folks add items to the agenda do me a favor and put your name on there :)16:01
*** jsbryant has joined #openstack-meeting16:01
<jgriffith> that way we know whose topic it is :)16:01
<zhiyan> jgriffith: sure16:01
<jgriffith> and on that note...16:01
<jgriffith> #topic Ceph as option for backup16:01
*** openstack changes topic to "Ceph as option for backup (Meeting topic: cinder)"16:01
<seiflotfy_> hi16:02
*** Navneet has quit IRC16:02
<jgriffith> I'm down with that, don't know whose proposal it is, but makes sense to me16:02
<Akendo> Hello16:02
<jgriffith> I'm curious how much effort is involved16:02
<thingee> jgriffith: it was seiflotfy_16:02
*** jsbryant is now known as jungleboyj16:02
<seiflotfy_> jgriffith it's mine16:02
*** singn has joined #openstack-meeting16:02
*** rushiagr has joined #openstack-meeting16:02
<jgriffith> i.e. Ceph/Swift compatibility should be pretty easy I would've thought16:02
<jgriffith> ahh..16:02
<DuncanT> Given ceph can pretend to be swift, I think you get that for free now?16:02
<seiflotfy_> so there are 2 ways to do it and I would like to discuss which one would fit better with upstream16:02
<jgriffith> seiflotfy_: anything specific you want to bring up?16:02
<thingee> seiflotfy_: I don't think anyone is opposed to the idea. Is there anything you need?16:03
<seiflotfy_> 1) we use ceph swift api16:03
<Akendo> Indeed16:03
<Akendo> We just check how to do so16:03
<seiflotfy_> 2) we actually add direct support for it in openstack16:03
<seiflotfy_> (which would require a decent amount of code)16:03
<Akendo> We have to do some tests on it but in theory it should work easily16:03
<thingee> seiflotfy_: really that's your decision. :)16:03
*** asomya_ has quit IRC16:03
<thingee> seiflotfy_: I don't care either way, as long as it works16:04
<jgriffith> thingee: +1 :)16:04
<jgriffith> seiflotfy_: just curious what option #2 buys you over #1?16:04
<DuncanT> I'd certainly be interested in hearing how you get on with trying to implement a backup driver, if you go that route...16:04
<seiflotfy_> thingee: well if we go with 1) then there might not even be any coding, just configuration16:04
<seiflotfy_> it needs to be tested16:04
<thingee> seiflotfy_: yup16:04
<thingee> seiflotfy_: I was under the impression since it's a compatible api, there shouldn't be a problem16:05
<seiflotfy_> anyway I think I will start with 1) then later head to 2)16:05
<seiflotfy_> since it will require some refactoring of the code16:05
<jgriffith> seiflotfy_: sounds like a good idea to me :)16:05
<jdurgin1> seiflotfy_: I've been thinking adding an rbd or rados backup target that can do differential backups would be useful16:05
<thingee> yup sounds good16:05
*** garyk has quit IRC16:05
<seiflotfy_> mkoderer: went through it and it looks like it will require some refactoring to not make swift the only hardcoded option16:05
<jdurgin1> but trying 1) first makes sense to me16:05
<thingee> just flowing through the agenda16:05
<mkoderer> refactoring is needed for option 2)16:05
<mkoderer> 1) should work out of the box16:06
<DuncanT> seiflotfy_: Should be a single config option to change the backup target... ping me if it looks harder16:06
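
For reference, option 1 as discussed would amount to configuration alone: a sketch of a cinder.conf fragment pointing the stock Swift backup driver at a Ceph RADOS Gateway endpoint (the URL is hypothetical, and the option names should be checked against your release):

    # Illustrative cinder.conf fragment for option 1: reuse the Swift
    # backup driver against Ceph's Swift-compatible RADOS Gateway.
    # The endpoint URL is hypothetical; verify option names per release.
    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift
    backup_swift_url = http://rgw.example.com:8080/swift/v1
    backup_swift_auth = per_user
    backup_swift_container = volumebackups
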
*** cody-somerville has quit IRC16:06
<jgriffith> I think both options are good... I have no objections, don't think anybody else would either16:06
*** derekh has quit IRC16:06
*** avishay has joined #openstack-meeting16:06
<jgriffith> So, unless there are any questions?16:06
*** michchap has joined #openstack-meeting16:06
<avishay> Hi all16:06
*** hemnafk is now known as hemna16:07
<jungleboyj> Hey avishay.16:07
<jgriffith> avishay: yo16:07
<hemna> morning16:07
<jgriffith> Ok, next item16:07
<seiflotfy_> ok cool can i take this task then16:07
<seiflotfy_> ?16:07
<zhiyan> hi avishay, hemna16:07
<seiflotfy_> me and mkoderer would do it16:07
<jgriffith> seiflotfy_: it's all yours :)16:07
<avishay> hi16:07
<mkoderer> ;)16:07
<jgriffith> seiflotfy_: You should link up with jdurgin1 when you get around to looking at option 216:08
<winston-d> avishay hi16:08
<zhiyan> hi hemna, could you pls share the progress about the brick implementation?16:08
<jgriffith> #topic brick status update16:08
*** openstack changes topic to "brick status update (Meeting topic: cinder)"16:08
<hemna> heh16:08
<avishay> winston-d: hi16:08
<jgriffith> :)16:08
<hemna> ok, well I have a WIP review up on gerrit16:08
<hemna> I believe I have the iSCSI code working now16:09
<hemna> https://review.openstack.org/#/c/32650/16:09
<hemna> I am just doing some more testing and waiting for my QA guy to give me the thumbs up16:09
<zhiyan> including attach and/or detach code?16:09
*** vishy has quit IRC16:09
* jgriffith wants his own QA person!16:09
<hemna> yes, this is the iSCSI attach/detach code16:09
<hemna> heh16:09
<zhiyan> cool16:10
<avishay> haha16:10
<hemna> I've modified the base ISCSIDriver in cinder to use the new brick code and it works16:10
<xyang_> hemna: works for copy image to volume as well, right?16:10
<hemna> (for me)16:10
<Akendo> I could do some testing and QU and supporting seiflotfy_ and mkoderer16:10
<hemna> xyang_, haven't tried it yet16:10
<jgriffith> hemna: you mean on the attach16:10
<Akendo> QA*16:10
<Akendo> :-)16:10
<jgriffith> hemna: I moved the target stuff a while back :)16:10
<hemna> xyang_, I haven't modified the copy image to volume method yet to use brick...that's why it's a WIP still16:10
<avishay> hemna: there's an issue with nova that disconnecting from an iscsi target disconnects all LUNs...is that a problem here?16:10
<seiflotfy_> +116:11
<hemna> avishay, if that's a bug in the current nova libvirt volume driver, then yes, it's a bug in this code16:11
<seiflotfy_> :D16:11
<xyang_> hemna: ok thanks16:11
<avishay> hemna: no it's not - libvirt keeps track of which VMs are using what, so they disconnect only if nobody is using16:11
<zhiyan> avishay: there is a check in nova libvirt volume detaching code...16:11
<avishay> hemna: do we need similar tracking?16:11
<hemna> avishay, yes, that code is in this brick code as well16:12
<zhiyan> avishay: yes16:12
<hemna> but we aren't attaching to VMs16:12
<avishay> hemna: sweet16:12
<hemna> we are just attaching to the host and using the LUN and then detaching it16:12
<avishay> hemna: i know, but we still may have multiple LUNs, right?16:12
<hemna> yes we'll have multiple LUNs16:12
<xyang_> hemna: since copy image to volume is from cinder, we may still have that problem16:12
<hemna> but we should only be detaching the LUNs we are done with at the time16:13
<xyang_> hemna: cinder doesn't know what luns are attached16:13
<hemna> the way nova looks at the attached LUNs is by inquiring the hypervisor16:13
<xyang_> hemna: there's a log out call at the end if no luns are attached, that is one thing we don't know in cinder16:13
<hemna> we don't have a hypervisor in this case16:14
<avishay> so we probably need to track the connections ourselves16:14
<jgriffith> hemna: xyang_ but can't we add that through initiator queries?16:14
<hemna> well in our case it's always an attach, use, detach for a single LUN16:14
<hemna> we aren't attaching, then going away and then detaching at some later time.16:15
<dosaboy> sorry guys, joining late here, if there is a moment at the end I have a few words on the ceph-backup bp16:15
<xyang_> jgriffith: avishay and I discussed that.  so the driver can find it out but cinder has to make an additional call16:15
<hemna> but if cinder dies in that serial process....16:15
<jgriffith> hemna: states will fix that for us :)16:15
<jgriffith> cinder never dies!!16:15
<avishay> :)16:15
<hemna> so yes, we aren't currently tracking (storing in a DB) which LUNs we have attached16:15
<jgriffith> I hate to go down the path of BDM type stuff in Cinder16:16
<hemna> yah16:16
<hemna> I'd like to keep this simple for the first round16:16
<avishay> what if we get two calls that attach at the same time?16:16
<jgriffith> +116:16
<hemna> it's already better than the code we copy/pasted from nova16:16
<hemna> that's existing in cinder now16:16
<hemna> avishay, lockutils16:16
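
For context on the lockutils answer: a sketch of serializing attach/detach with an interprocess file lock, assuming the oslo-incubator lockutils decorator of the era (the import path and signature may differ by release):

    # Sketch only: serialize attach/detach across processes with an
    # external file lock, in the spirit of the lockutils answer above.
    # Decorator signature assumed from oslo-incubator of this era.
    from cinder.openstack.common import lockutils

    @lockutils.synchronized('connect_volume', 'cinder-', external=True)
    def connect_volume(connection_properties):
        # With the external lock held, two concurrent attach calls
        # cannot race on shared iSCSI session state.
        ...
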
*** winston-d has quit IRC16:16
<avishay> i'm fine with keeping it simple for the first pass, but we should keep these issues in mind16:16
*** winston-d has joined #openstack-meeting16:17
<hemna> yup!16:17
<jgriffith> avishay: I hear ya16:17
<hemna> it's something we can mull over for H316:17
<avishay> hemna: works for me16:17
<DuncanT> avishay: Check out the code and raise a bug if you can see a specific scenario that would break it...16:17
<avishay> DuncanT: yup16:17
<hemna> as it stands today there are issues with the existing copy volume to image code that doesn't work16:17
<hemna> that I discovered in the process16:17
<hemna> like....we never detach a volume.....16:17
<hemna> :(16:17
<hemna> this WIP patch already addresses that issue.16:18
<jgriffith> Ok, the only other thing there (I think) is the LVM driver migration16:18
*** michchap has quit IRC16:18
<hemna> I saw another issue in the existing code that failed to issue iscsiadm logins as well16:18
<avishay> hemna: there was no detach precisely because of the issue i raised16:18
<jgriffith> I am hoping to have that done here shortly16:18
<thingee> jgriffith, hemna: separate commit for the disconnect and backport?16:18
<hemna> avishay, that leads to dangling luns and eventually kernel device exhaustion.  :(16:18
<jgriffith> After that we've got the key components in brick and we've got something consuming all of them16:18
*** SergeyLukjanov has quit IRC16:18
<jgriffith> thingee: hmmm?16:19
*** cp16net is now known as cp16net|away16:19
*** skolathu has joined #openstack-meeting16:19
<thingee> jgriffith: errr copy volume to image code not detaching16:19
<jgriffith> thingee: ahh...16:19
<jgriffith> :)16:19
<hemna> just for Grizzly backport ?16:19
<avishay> hemna: if nova and cinder are running on the same host, cinder might log out of nova luns16:19
*** rostam has quit IRC16:20
<thingee> hemna: yea16:20
<thingee> Oh I guess that was folsom too16:20
<thingee> hmm16:20
<jgriffith> avishay: I'm still unclear on how this got so convoluted16:20
<hemna> can you issue a copy volume to image when a volume is attached to a VM ?16:20
<jgriffith> avishay: We *know* what lun we're using when we attach for clone etc16:20
<xyang_> there could be more than one lun on the same target, if we log out in copy image to volume, other luns can be affected16:20
<jgriffith> xyang_: understood, but since we know the lun why can't we log out "just" that lun16:21
<avishay> the problem is that when you log out, it disconnects ALL luns on the same target16:21
*** rushiagr has quit IRC16:21
<avishay> you can't log out of just one AFAIK16:21
<hemna> well logout is a separate issue from removing the LUN from the kernel16:21
* winston-d checking connectivity16:21
<jgriffith> right, but what I'm saying is I *believe* there's a way to do a logout on JUST the one session/lun16:21
<singn> this is how iscsiadm works when it logs in to a target16:21
<thingee> hemna: only grizzly. folsom just gets security fixes now16:21
<hemna> you can remove a LUN from the kernel by issuing a SCSI subsystem command16:21
<avishay> maybe there is a better way than what nova does16:21
<hemna> w/o doing an iscsi logout16:21
<jgriffith> avishay: that's what I'm wondering16:22
*** rushiagr has joined #openstack-meeting16:22
<jgriffith> avishay: xyang_ regardless... I'd propose we file a bug to track it (thought we already did though)16:22
<hemna> you don't need to do a logout to remove a lun16:22
<jgriffith> and address it after we get hemna 's first version landed16:22
jgriffithand address it after we get hemna 's first version landed16:22
xyang_jgriffith: I think avishay already logged a bug16:22
<hemna> you should only log out from an iscsi session when you are done with the host16:22
avishayjgriffith: there already is a bug16:22
<jgriffith> xyang_: avishay I thought so :)16:22
<hemna> avishay, yah there is a way I believe16:23
<avishay> OK, so there is a bug open, let's fix it in v216:23
<hemna> requires some smart parsing of kernel devices in /dev/disk/by-path and knowing the target iqns, etc16:23
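
A rough sketch of the per-LUN teardown being described here: delete a single kernel SCSI device via sysfs without an iSCSI logout, and log out of the target only once nothing on it remains (illustrative only, not the brick code; helper names are hypothetical):

    import glob
    import subprocess

    # Rough sketch of the per-LUN teardown discussed above; not the
    # actual brick code.
    def remove_lun(device_name):
        """Delete one kernel SCSI device (e.g. 'sdb') via sysfs,
        without touching the iSCSI session."""
        with open('/sys/block/%s/device/delete' % device_name, 'w') as f:
            f.write('1')

    def logout_if_unused(target_iqn, portal):
        """Log out of an iSCSI target only when no /dev/disk/by-path
        entries still reference it."""
        if not glob.glob('/dev/disk/by-path/*%s*' % target_iqn):
            subprocess.check_call(['iscsiadm', '-m', 'node',
                                   '-T', target_iqn, '-p', portal,
                                   '--logout'])
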
<jgriffith> I guess the *right* answer is actually the opposite of what I just said16:23
<jgriffith> in order to do the backport correctly16:23
<jgriffith> fix it in the existing code now and backport, then move forward with the new common code16:23
<hemna> ok so we have like 3 issues here :)16:24
<hemna> 1) the detach in the existing cinder code16:24
*** rostam has joined #openstack-meeting16:24
<hemna> 2) iscsi logout issues that can cause host logouts when LUNs are in use16:24
<hemna> 3) detaches from the kernel16:24
<avishay> FC is so much easier ;)16:24
<hemna> the important one here for now I think is the issue that thingee raised16:25
<hemna> :P16:25
<jgriffith> avishay: ha!  Now that's funny16:25
<winston-d> avishay as long as you have an HBA installed?16:25
<avishay> jgriffith: winston-d  :)16:25
<hemna> I haven't started the FC stuff yet16:25
<jgriffith> winston-d: avishay and you don't care about things like zoning16:25
<avishay> zoning shmoning16:25
<jgriffith> hemna: one thing at a time :)16:25
<hemna> I'll probably do another patch for the FC attach/detach migration into brick16:25
<jgriffith> avishay: :)16:25
<jgriffith> hemna: yes, please do them separately16:25
<hemna> yah that was the plan :)16:26
<seiflotfy_> ah forgot dosaboy is working on the ceph blueprint16:26
<avishay> anyway, looks like a good start - nice work16:26
<seiflotfy_> so i will be trying to assist him then16:26
<hemna> the Brocade guys are supposed to be working on the zone manager BP16:26
<winston-d> sorry guys, my network connectivity is very unstable today.16:26
<jgriffith> Ok... anything else on this topic?  I think hemna has a good idea of the challenges and the point thingee brought up16:26
<jgriffith> #topic QoS and Volume Types16:26
<hemna> what should be the plan for the Grizzly detach issue ?16:27
*** openstack changes topic to "QoS and Volume Types (Meeting topic: cinder)"16:27
<hemna> ok nm, we can hash it out in #openstack-cinder16:27
*** shang has quit IRC16:27
<thingee> hemna: sounds good16:27
<jgriffith> hemna: I would have liked to have seen that addressed already TBH16:27
<hemna> yah, I didn't notice it until I started the brick work :(16:27
<jgriffith> hemna: but yes, we'll talk later between xyang_ hemna and whoever else is interested16:27
<hemna> +116:27
<xyang_> sure16:28
<jgriffith> and avishay16:28
<jgriffith> sorry avishay you can't go home yet ;)16:28
<jgriffith> So... QoS16:28
<avishay> jgriffith: i'm already home :)16:28
<jgriffith> avishay: ;)16:28
<jgriffith> well then we're all set :)16:28
<winston-d> yes, QoS please. :)16:28
<jgriffith> winston-d: where did your patch go?16:29
<jgriffith> ahh fond it16:29
<jgriffith> found16:29
<winston-d> it's here: https://review.openstack.org/#/c/29737/16:29
<jgriffith> https://launchpad.net/cinder/+milestone/havana-216:29
<jgriffith> oops16:29
<jgriffith> sorry16:29
<jgriffith> yeah.. what winston-d said ^^ :)16:29
<jgriffith> I don't know how many of you have looked at this16:29
<jgriffith> but I had some thoughts I wanted to discuss16:30
<jgriffith> I think I commented them pretty well in the review but...16:30
<jgriffith> to summarize, I'm not crazy about introducing unused columns in the DB16:30
<kmartin> I have as well :)16:30
<winston-d> kmartin :)16:30
<jgriffith> and I'm not sure about fighting the battle of trying to cover every possible implementation/verbiage a vendor might use16:31
<jgriffith> I had two possible alternate suggestions:16:31
<jgriffith> 1. Use metadata keys16:31
<jgriffith> This way the vendor can implement whatever they need here16:31
*** yugsuo has joined #openstack-meeting16:31
*** kchenweijie has joined #openstack-meeting16:31
<jgriffith> It's like a "specific" extra-specs entry16:31
*** kchenweijie has left #openstack-meeting16:32
<thingee> jgriffith: +1, non-first-class features should not be introducing changes to the model.16:32
<jgriffith> The other option:16:32
<DuncanT> jgriffith: +1 seems like a sane solution16:32
<jgriffith> 2. Implement QoS - Rate Limiting and QoS - IOPS setting16:33
<winston-d> jgriffith i have concerns about having vendor-specific implementation keys stored in the DB for volume types, that makes volume types not compatible with other back-ends.16:33
<thingee> while I was working on the wsme stuff for the api framework switch, it made me realize how complex the volume object is becoming =/16:33
<thingee> as jgriffith mentioned, we're half of what instances are in nova16:33
<jgriffith> winston-d: actually... my proposal16:33
<jgriffith> winston-d: would make it such that it's still compatible, just ignored16:34
<jgriffith> winston-d: in other words if the keys don't mean anything to the driver it just ignores them16:34
<jgriffith> winston-d: this creates some funky business with the filtering, but I think we can resolve that16:34
<jgriffith> winston-d: just leave filter scheduling as a function of the "type"16:35
<thingee> the only thing drivers should agree on is the capability keys.16:35
<jgriffith> not QoS setting16:35
<jgriffith> thingee: I would agree with that16:35
<thingee> but...16:35
<jgriffith> The problem is I see little chance of us all agreeing on what QoS is and how to specify it16:35
<winston-d> thingee i agree as well, but i think QoS is among capabilities.16:35
<jgriffith> winston-d: you're correct, but I think it's a "True/False"16:36
<avishay> You can't call it QoS - that term is overloaded.  This is rate limiting.16:36
<DuncanT> Though that doesn't mean we shouldn't try to get drivers to agree (i.e. point out inconsistencies at review time), just let the standards be de facto rather than prescribed...16:36
<jgriffith> winston-d: and TBH I'm still borderline on whether I count rate-limiting as QoS :)16:36
* thingee thinks there should be a way to extend capabilities if it's not a first-class feature.16:36
<avishay> I thought this is what we discussed at the summit :)16:36
<hemna> thingee, +116:36
<jgriffith> thingee: _116:36
<jgriffith> ooops16:36
<jgriffith> +116:36
<jgriffith> DuncanT: so the problem is... there's already an issue16:37
<jgriffith> DuncanT: For example, I use "minIOPS, maxIOPS and burstIOPS"16:37
<jgriffith> on a volume per volume basis16:37
<jgriffith> that can be changed on the fly16:37
<jgriffith> Others use "limit max MB/s Read and limit max MB/s Write"16:38
<winston-d> folks, the QoS bp/patch was at first for client rate-limiting (aka, doing rate-limit at Nova Compute).  so we have to deal with back-ends, as well as hypervisors.16:38
<jgriffith> While yet others use "limit IOPs"16:38
*** jcoufal has quit IRC16:38
jgriffithwinston-d: indeed16:38
jgriffithwinston-d: but what I'm saying is maybe that should be "rate-limiting" and not QoS16:38
DuncanTjgriffith: On-the-fly changes don't seem to fit within the framework we've discussed16:38
DuncanTjgriffith: Nor per-volume limits (rather than per-type limits)16:39
jgriffithDuncanT: updates16:39
hemnaso you would change those settings on the fly after the volume is created?16:39
hemnaprobably out of scope for this I would presume16:39
jgriffithhemna: Yes, that's something I need to be able to do16:39
jgriffithwell... it's not something I'm asking winston-d to put in his patch16:40
hemnaah ok16:40
DuncanTI definitely feel that is not within the discussed framework, other than via retyping16:40
jgriffithbut it's something I'm keeping in mind with the design16:40
jgriffithDuncanT: correct16:40
hemnathat smells like v2 to me16:40
winston-dhemna there's no reason why not if back-end/hypervisor supports run-time modification16:40
avishaywinston-d: does libvirt support changing rate limit settings after the volume is attached?16:40
hemnalike a volume type update or something like that16:40
jgriffithDuncanT: well... it's just like "update extra-specs"16:40
jgriffithhemna: I'd like to have it be the same volume-type16:41
jgriffithSo the volume-type just tells what back-end to use16:41
hemnajgriffith, but in this case do you want to update the volume type here, or the specific volume instance's settings16:41
DuncanTjgriffith: I don't think that changes existing volumes?16:41
hemnalike for volume X, update its IOPS settings now.16:41
jgriffithhemna: DuncanT so I don't want to kill the discussion on winston-d 's work here with my little problems :)16:42
winston-davishay last time we checked, it should be able to do so. but I didn't try that out16:42
jgriffithbut...16:42
avishaywinston-d: ok16:42
jgriffithhemna: but yes, that's what I intend to do16:42
hemnathat'd be cool :)16:42
*** ndipanov is now known as ndipanov_gone16:42
*** feleouet has quit IRC16:42
jgriffithDuncanT: to start it most likely would have to be an update to the volume-type16:43
jgriffithso for example:  volume-type: A, with QoS: Z16:43
jgriffithUpdate volume-type: A to have QoS: X16:43
DuncanTjgriffith: That is entirely outside of any scope of QoS discussed so far... and is going to cause major issues in regards to even slightly trying to standardise behaviours between backends16:43
jgriffithDuncanT: why?16:43
jgriffithDuncanT: and BTW I've already submitted a patch for this back in Folsom16:43
hemnawell I think it's a new feature that hasn't been discussed yet, but should be put in a new BP and scheduled.16:44
DuncanTjgriffith: Because the possibility matrix explodes, as far as what backends can do what features16:44
jgriffithDuncanT: that's why I'm saying you don't hard code that shit16:44
jgriffithDuncanT: That's the whole point of using metadata keys16:44
winston-dif we prefer K/V pairs for QoS metadata, maybe we should have a set of fixed keys?16:44
jgriffithwinston-d: can you expand on that?16:44
hemnathat's just the key standardization discussion all over again :)16:45
*** michchap has joined #openstack-meeting16:45
*** dcramer_ has quit IRC16:45
xyang_hemna: 2 sessions at the summit :)16:45
hemna:)16:45
avishayand no conclusions, obviously16:45
jgriffithxyang_: hemna the good thing is it's pared down in terms of scope16:45
hemnatrue16:46
jgriffithavishay: I think we tried to tackle too large of a problem in the summit sessions16:46
jgriffithwinston-d: can you tell me more about what you're thinking with the standard keys?16:46
avishayjgriffith: agreed.  i also think that we failed to agree on simpler use cases than this.16:46
winston-dfor example, KVM/libvirt only accepts total/read/write bytes/iops per sec.16:46
guitarzanwe need to keep track of what we're discussing...16:47
winston-dso for a QoS setting that requires the client to do the enforcement, these keys must be there, at least as 016:47
guitarzanwe're confusing client side, backend, qos, capabilities16:47
jgriffithwinston-d: I get that16:47
jgriffithwinston-d: so that brings me back to thinking that we have two types of performance control16:48
jgriffith1. Hypervisor rate-limiting16:48
jgriffith2. Vendor/Backend implemented16:48
winston-dI think the whole idea behind the bp/patch is we try to find a way to express QoS requirements for volume types in Cinder, which can be consumed either by Nova or cinder back-ends.16:48
rushiagrwinston-d: and that I guess is mixing 1 and 2?16:49
jgriffithwinston-d: indeed, but what I'm proposing is16:49
jgriffithrushiagr: haha.... I think we're thinking the same thing16:49
avishays/QoS/rate limiting/g  might make this issue easier to agree on16:49
jgriffithwinston-d: rushiagr so what if we had set keys for hypervisor limiting16:49
jgriffithand arbitrary K/V's for vendors16:50
jgriffithavishay: now that's more what I'm thinking!!16:50
winston-davishay well, it was simply client-side rate limiting at first. :)16:50
jgriffithavishay: I don't think we should treat rate limiting as QoS16:50
thingee10 min warning16:50
jgriffithDOHHHH16:50
guitarzansurprise16:50
*** kirankv has joined #openstack-meeting16:50
jgriffithI don't think we're going to agree on representation16:51
jgriffithBut I do think we should be able to agree on:16:51
*** rkukura has quit IRC16:51
jgriffith1. Should QoS and Rate Limiting be separate concepts16:51
jgriffith2. Should QoS be abstract K/V pairs16:51
jgriffiththoughts on those two points?16:51
*** diogogmt has joined #openstack-meeting16:52
hemna+1 to both16:52
jgriffithwinston-d: avishay thingee kmartin rushiagr ?16:52
*** nati_ueno has joined #openstack-meeting16:52
DuncanTI'm not convinced QoS and rate limiting are different concepts16:52
rushiagr+1 for 216:52
jgriffithDuncanT: they are16:52
DuncanT+1 on 2. though16:52
xyang_+1 for 216:52
thingeeyes16:52
jgriffithbut I can argue with you over a beer on that one :)16:52
winston-d+1 for 2.16:52
avishayQoS means different things to practically everyone16:52
hemnayup16:53
kmartinI'm ok with #2 but #1 is the same as far as HP is concerned16:53
avishayI can say that Flash vs. HDD is QoS16:53
guitarzando the decisions on 1 & 2 affect the client vs backend question?16:53
rushiagrthe first one needs some more discussion i guess. Need to think more on the idea of separating hypervisor/backend stuff16:53
hemnakmartin, well HP 3PAR that is.16:53
winston-dguitarzan client side usually can only do rate-limiting, AFAIK16:53
bswartzQoS is more about guaranteed minimums than it is about maximums16:53
guitarzanavishay: we call those two different products16:53
DuncanTjgriffith: Certainly it is a non-trivial argument space but ultimately the only sane conclusion is that they are the same class of thing :-)16:53
jgriffithguitarzan: that might help me win my argument :)16:53
hemnabswartz, +116:54
DuncanTjgriffith: I'll buy the first round16:54
jgriffithguitarzan: I could go for that :)16:54
jgriffithDuncanT: :)16:54
jgriffithOk.. one more minute on this16:54
jgriffithI think we all agree on #2 then16:54
jgriffithThe only question is #116:54
bswartz+1 on both16:54
jgriffithI'm willing to compromise here I think16:54
*** cbananth has joined #openstack-meeting16:54
jgriffithYay!!! bswartz16:54
guitarzanI think #1 and the client/backend question are easily bigger than "what keys"16:54
winston-d+2 for 1 if we shift QoS to 'I' release...16:55
jgriffithI think guitarzan makes a good point, what about separating client and backend16:55
guitarzanand here's my last off the wall idea16:55
jgriffithwinston-d: hmmm... that could hurt16:55
guitarzanmaybe the client side stuff should be stuck on an "attachment" instead of the volume itself16:55
jgriffithguitarzan: I actually like that idea16:56
jgriffithguitarzan: I think it's come up before actually16:56
winston-djgriffith never mind. i can do both for 1.16:56
bswartzwinston-d: https://launchpad.net/~openstack/+poll/i-release-naming16:56
jgriffithWhat do others think of separating client- and backend-implemented settings?16:56
winston-djgriffith +116:57
*** tjones has joined #openstack-meeting16:57
avishayfine by me16:57
jgriffithcool!16:57
*** michchap has quit IRC16:57
jgriffithkmartin: you good?16:57
jgriffithhemna: ?16:57
guitarzanI think things may get refined after someone actually implements something16:57
jgriffithbswartz: rushiagr ?16:57
DuncanTjgriffith: But the end result is the same, whether the rate limit is enforced on hypervisor or backend16:57
jgriffithDuncanT: ?16:57
guitarzans/think/hope/16:57
kmartin+116:57
haomaiwang+116:58
bswartzI don't understand what the client side implementation has to do with cinder16:58
hemna+116:58
winston-dguitarzan https://review.openstack.org/#/c/29737/16:58
jgriffithDuncanT: well.. backends become K/V's and the client is "set" semantics that get stored in the DB16:58
guitarzanwinston-d: touche :)16:58
winston-dbswartz just like volume encryption on the client side.16:58
avishayslight difference - doing it in the hypervisor adds rate limiting to the network connection as well16:58
bswartzclients are welcome to limit themselves, but it's not our business16:58
jgriffithbswartz: fair point, but I like the idea of having that setting in Cinder via the attach16:59
bswartzwinston-d: okay well I can see it from that perspective16:59
jgriffithbswartz: and it allows us to keep from double implementing16:59
guitarzanshocker, our one minute is over :)16:59
jgriffithbswartz: in other words set it on the backend and on the hypervisor16:59
jgriffithDarn you time!!!16:59
DuncanTbswartz: Like encryption, cinder is the single place to store this kind of info... and I'd really rather most customers don't see things like rate limiting17:00
thingeeyup, that's time17:00
bswartzokay I take your points17:00
thingeesee ya all in #openstack-cinder17:00
jgriffithOk... suppose that will do it17:00
DuncanTShall we move to the cinder channel? I know dosaboy still has a question...17:00
bswartzcinder does need to understand rate limiting17:00
*** ajforrest has left #openstack-meeting17:00
winston-dit always makes me feel that I'm back to OSD when discussing standardizing things among back-ends.17:00
winston-dwhich is good. :)17:00
jgriffith:)17:01
danwentwho's here for the vmware driver meeting?17:01
jgriffithalright, I need to wrap and go to my next meeting :(17:01
jgriffith#end meeting cinder17:01
*** henrynash has joined #openstack-meeting17:01
danwentjgriffith: ah, sorry, thought you were already done17:01
jgriffith#endmeeting cinder17:01
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"17:01
hartsocksThanks for wrapping up guys.17:01
openstackMeeting ended Wed Jun 12 17:01:26 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-06-12-16.00.html17:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-06-12-16.00.txt17:01
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-06-12-16.00.log.html17:01
avishaybye all!17:01
jgriffithdanwent: no worries... all yours :)17:01
*** avishay has left #openstack-meeting17:01
*** skolathu has quit IRC17:01
*** mdomsch has joined #openstack-meeting17:01
hartsocks#startmeeting VMwareAPI17:01
openstackMeeting started Wed Jun 12 17:01:43 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.17:01
danwentor all hartsocks :)17:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:01
*** openstack changes topic to " (Meeting topic: VMwareAPI)"17:01
openstackThe meeting name has been set to 'vmwareapi'17:01
*** bswartz has left #openstack-meeting17:01
hartsocks#topic salutations17:01
*** openstack changes topic to "salutations (Meeting topic: VMwareAPI)"17:01
*** gyee has joined #openstack-meeting17:02
hartsocksGreetings programmers! Who's up for talking VMwareAPI stuff and nova?17:02
danwenti am :)17:02
hartsocksanyone else around?17:02
hartsocksHP in the house?17:02
hartsocksCanonical?17:02
*** Divakar has joined #openstack-meeting17:02
kirankvHi !17:02
tjonesim here17:03
danwentman, it's like we are trying to get reviews for our nova code :)17:03
DivakarHi17:03
hartsocks*lol*17:03
*** sandywalsh has quit IRC17:03
*** Sabari_ has joined #openstack-meeting17:03
hartsocksivoks, are you around?17:03
danwentlooks like Sabari_ is here now17:03
*** chinmay has joined #openstack-meeting17:03
Sabari_Hi, this is Sabari here.17:04
*** Eustace has joined #openstack-meeting17:04
hartsocksOkay...17:04
*** anniec has quit IRC17:04
hartsocks#link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda17:04
hartsocksHere's our agenda today.17:04
hartsocksKind of thick.17:04
*** thingee has left #openstack-meeting17:05
hartsocksSince we started last week with bugs, I'll open this week with blueprints.17:05
hartsocks#topic blueprint discussions17:05
*** openstack changes topic to "blueprint discussions (Meeting topic: VMwareAPI)"17:05
hartsocksI have a special note from the main nova team...17:05
hartsocksThey would like us to look at:17:05
hartsocks#link https://blueprints.launchpad.net/nova/+spec/live-snapshots17:06
hartsocksThis is a new blueprint to add live-snapshots to nova17:06
med_hartsocks, Canonical is lurking17:06
hartsocks@med_ hey!17:06
* hartsocks gives everyone a moment to scan the blueprint17:06
hartsocksDo we have folks working on or near this blueprint? Can anyone speak to how feasible it is to get this done?17:07
* hartsocks listens to the crickets a moment17:08
hartsocksNo comments on the "live-snapshots" blueprint?17:09
hartsocksnote: I need to talk to the main nova team about this tomorrow and say "yes we can" or "no we can't"17:10
Divakarfrom a technical feasibility standpoint, it is :)17:10
hartsocksWhat about person-power? Do we have someone who can take this on?17:11
*** pschaef has quit IRC17:11
hartsocks#action hartsocks to follow up on "live-snapshots" to find an owner for the vmwareapi implementation17:12
hartsocksOkay, moving on...17:12
danwenthartsocks: would be good to check with our storage pm + eng folks on this.17:12
danwentdo you know alex?  if not, I can introduce you to him.17:13
hartsocks@danwent we should definitely follow up then…17:13
hartsocksNext blueprint:17:13
hartsocks#link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service17:14
hartsocksHow is this coming along?17:14
*** litong has joined #openstack-meeting17:14
kirankvuploaded a new patch set; removed many other minor improvements to keep the patch set small for review17:15
hartsocksLooks like we need to look at your patch-set 7 collectively.17:15
danwentkirankv: great17:15
danwentkirankv: are unit tests added yet?17:15
danwentlast i checked we were in WIP status waiting for those, but that was a while ago17:15
kirankvyes, I will run the coverage report and try adding tests if we have missed any for the newly added code17:16
danwentgreat17:16
hartsocksI am looking to see coverage on any newly added code… in general, if you add a new method I want to see some testing for it.17:17
hartsocksI'll mention here that if you want me to track, follow up, or review your changes … add me as a reviewer: http://i.imgur.com/XLINkt3.png17:17
hartsocksThis is chiefly how I will build weekly status reports.17:17
hartsocksNext up:17:17
hartsocks#link https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images17:18
*** seanrob has joined #openstack-meeting17:18
*** ayoung_ has joined #openstack-meeting17:18
hartsocksThis is the VMware Templates as Glance images blueprint.17:18
*** singn has quit IRC17:18
kirankvI'm waiting for the multi-cluster to go through before submitting this one17:18
hartsocksIt is currently slated for H-217:18
hartsocksHm...17:19
hartsocksYou can submit a patch and say Patch A depends on Patch B17:19
hartsocksThere is a button..17:19
*** sandywalsh has joined #openstack-meeting17:19
hartsocksI'll see about writing a tutorial on that for us.17:19
kirankvI'm only concerned about rebasing two patchsets17:20
hartsocksYou can cherry-pick Patch A for the branch you use for Patch B17:20
hartsocksIt's a bit more than I want to go into in a meeting, but … there's a "cherry pick" button in Gerrit17:20
kirankveven now I haven't rebased the current patchset, and on the openstack-dev mailing list I noticed that the preferred way is to rebase every patchset and submit17:20
hartsocksSure.17:21
hartsocksThese are not mutually exclusive activities.17:21
hartsocksBoth are possible.17:21
hartsocksBoth are preferably done together.17:21
hartsocksOur reviews are taking a long time.17:21
hartsocksLet's try and do reviews regularly.17:21
hartsocksI will start sending emails Monday and Friday to highlight patches that need review.17:21
kirankvok, let me see if i can sublit a patch this week,17:22
Divakarreviews are going thru from a +1 perspective17:22
kirankv*submit17:22
danwenthartsocks: yes, and we also need to get more nova core developers reviewing our patches17:22
Divakarwe need to get the core reviewers' attention17:22
hartsocksLet's make sure that we can say:17:22
hartsocks"If *only* more core developers gave their approval we would be ready"17:23
hartsocksRight now, this is not always the case.17:23
*** vipul is now known as vipul|away17:23
*** michchap has joined #openstack-meeting17:23
danwenthartsocks: agree.  we need to make life as easy as possible for the core reviewers by making sure the obvious comments have already been addressed before they spend cycles.17:23
danwentmed_:  who does Canonical have as a nova core dev?17:24
DivakarAren't all the nova reviews being monitored by nova core reviewers?17:24
hartsocks@Divakar they are but...17:24
hartsocks@Divakar it's like twitter… so much is happening it's easy to lose the thread17:24
danwentDivakar: in theory, there are just a LOT of them, so sometimes areas of the codebase that fewer people are familiar with get less love17:24
*** ivasev has joined #openstack-meeting17:24
danwentalso is dansmith around and listening?17:24
*** psedlak has quit IRC17:25
danwenti think he is a nova core who has attended the vmwareapi discussions before17:25
*** xyang_ has quit IRC17:25
danwenthartsocks: am i thinking of the right person?17:25
hartsocks@russellb are you there?17:25
DivakarWe need to see if we can talk to russelb on how to get core reviewers to look into vmware related bps and bug fixes17:25
hartsocks@danwent I've talked with Russell the most.17:25
danwenthartsocks: yes, but PTLs are very busy :)17:26
danwentso definitely let's encourage him, but we also need to make sure others are paying attention to vmwareapi reviews as well.17:26
hartsocks@danwent yeah, we should probably only bring him in rarely.17:26
hartsocksI think if we have 8 +1 votes and we are waiting for two +2 votes that looks pretty clear.17:27
danwentyes, but there's a reason we have core devs :)17:27
hartsocksI think it will also look like we are a concerted and coordinated effort.17:27
danwentanyway, i think we all agree on the need for more core reviews.. i am continuing to encourage people, and I'd appreciate help from anyone else who can do the same17:27
DivakarI was not asking for russellb's time.. as PTL he can direct his core reviewers' attention to these17:28
hartsocksOkay… let's agree that our followups should be to...17:28
hartsocks1. get more core reviewers17:29
*** rwsu-away is now known as rwsu17:29
hartsocks2. be more vigilant on reviews/feedback cycles ourselves17:29
Divakarsending a mail with the link to the review asking for +2 might be another option when things are not working17:29
hartsocks@Divakar that has not worked favorably for me…17:29
*** egallen has quit IRC17:29
*** ssurana has joined #openstack-meeting17:29
*** jhenner has quit IRC17:30
hartsocksLet's table this topic since we can't do more.17:30
hartsocks#action solicit participation in reviews by core-developers17:30
hartsocks#action get regular +1 reviews to happen more frequently17:31
hartsocksThese are for me.17:31
hartsocksLast blueprint...17:31
hartsocks#link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage17:31
hartsocksHas anyone followed up with the developer working on this?17:31
*** zul has quit IRC17:31
*** diogogmt has quit IRC17:32
*** sirushti has quit IRC17:32
hartsocksAnyone from Canonical follow up with Yaguang Tang?17:32
*** diogogmt has joined #openstack-meeting17:32
*** zul has joined #openstack-meeting17:32
*** sirushti has joined #openstack-meeting17:33
Davieyhartsocks: hey17:33
hartsocks@Daviey hey, how are we doing? Will we meet the H-2 deadline for this?17:33
hartsocksRemember: it can take *weeks* for the review process to work out.17:34
hartsocksThat makes July 18th our H-2 deadline kind of "tight" at that rate of speed.17:34
DavieyUgh17:34
danwentquestion on the blueprint, by "ephemeral disks", does tang mean thin provisioned, or something else?17:35
DavieyI will follow up with him17:35
hartsocksI solicited some help on "ephemeral disks"17:35
hartsocksI have two different understandings...17:35
danwenthartsocks: help a newbie out :)17:35
hartsocks1. it's a disk that "goes away" when the VM is deleted17:35
hartsocks2. It's a "RAM" disk17:36
*** michchap has quit IRC17:36
danwentah, got it17:36
hartsocksSomeone was going to follow up with the BP author on that...17:36
*** andreaf has joined #openstack-meeting17:36
*** winston-d has quit IRC17:36
hartsocksOkay...17:36
hartsocks#topic Bugs!17:36
*** openstack changes topic to "Bugs! (Meeting topic: VMwareAPI)"17:36
hartsocksOr "ants in your pants"17:37
hartsocksTell me, first up....17:37
hartsocksAre there any newly identified blockers we have not previously discussed!17:37
hartsocks?17:37
DavieyThings look good?17:38
danwenthttps://bugs.launchpad.net/nova/+bugs?field.tag=vmware17:38
hartsocksNo *new* news is good news I suppose...17:38
danwenthartsocks: we're still having issues when more than one datacenter exists, right?17:38
hartsocks#link https://bugs.launchpad.net/nova/+bug/118004417:38
*** vipul|away is now known as vipul17:38
uvirtbotLaunchpad bug 1180044 in nova "nova boot fails when vCenter has multiple managed hosts and no clear default host" [High,In progress]17:38
danwentand I haven't seen anyone looking at https://bugs.launchpad.net/nova/+bug/118480717:38
hartsocksSo I'll go...17:38
uvirtbotLaunchpad bug 1184807 in nova "Snapshot failure with VMware driver" [Low,New]17:38
hartsocksThis is my status update on that.17:39
*** noslzzp has joined #openstack-meeting17:39
hartsocksChiefly, the bug root cause is...17:39
hartsocksonce the driver picks a host...17:39
hartsocksit ignores the inventory-tree semantics of vCenter.17:39
*** colinmcnamara has joined #openstack-meeting17:39
hartsocksThis is the root cause for *a lot* of other bugs.17:39
hartsocksFor example, pick HostA but accidentally pick a DataStore on HostB17:40
*** jlucci has joined #openstack-meeting17:40
hartsocksOr … in the case I first observed...17:40
Sabari_Yes, I agree with hartsocks17:40
*** bobba has quit IRC17:40
*** Tross has joined #openstack-meeting17:40
hartsocksPick HostA then you end up picking a Datastore on HostB which is in a totally different datacenter17:40
hartsocksThis also indirectly applies to Clustered hosts not getting used...17:41
hartsocksand is related to "local storage" problems in clusters...17:41
hartsocks(but only because it's the same basic problem of inventory trees being ignored.17:41
hartsocks)17:41
hartsocksI'm currently writing a series of Traversal Specs to solve these kinds of problems.17:41
Sabari_I am working on the bug related to resource pools, and I figured out the root cause: the placement of a VM within a VC is still unclear to the driver17:42
hartsocksI hope to post shortly.17:42
hartsocks@Sabari_ post your link17:42
Sabari_https://bugs.launchpad.net/nova/+bug/110503217:43
uvirtbotLaunchpad bug 1105032 in nova "VMwareAPI driver can only use the 0th resource pool" [Low,Confirmed]17:43
danwentok, let's make sure this gets listed as "critical"17:43
danwentwhichever bug we decide to use to track it.17:43
hartsocks#action list 1105032 as critical ...17:44
hartsocks#action list 1180044 as critical17:44
hartsocksOkay.17:44
hartsocksBTW….17:44
hartsocks#link https://bugs.launchpad.net/nova/+bug/118319217:44
uvirtbotLaunchpad bug 1183192 in nova "VMware VC Driver does not honor hw_vif_model from glance" [Critical,In progress]17:44
kirankv@Sabari: how are we deciding which resource pool to pick?17:44
Sabari_We can obviously allow the VM to be placed in a resource pool specified by the user, but still we need to figure out a way to make a default decision.17:44
Sabari_Currently, we don't. VC places the VM in the root resource pool of the cluster17:45
*** lloydde has joined #openstack-meeting17:46
hartsocksThis is one of those behaviors which might work out fine in production if you just know that this is how it works.17:46
kirankvaren't we moving scheduling logic into the driver by having to make such decisions?17:46
hartsocksOf course, it completely destroys the concept of Resource Pools.17:46
hartsocks@kirankv yes… we have several blueprints in flight right now that are essentially doing that.17:47
Sabari_Yes, it depends on the admin and the way he has configured VC. If one chooses not to use Resource Pools, he stays fine with the existing setup.17:47
kirankvwell the blueprints leave the decision to the scheduler; the driver only makes resource pools available as compute nodes as well17:48
Divakarin a way, managing a resource pool as a compute node resolves this17:48
hartsocksWe have two time-lines to think about.17:48
hartsocks1. near-term fixes17:48
hartsocks2. long-term design17:49
med_danwent, sorry. That would be yaguang as core nova17:49
DivakarI don't think we need to worry about the default resource pool in a cluster17:49
Divakarlet the cluster decide where it wants to place the vm17:50
danwentmed_: ah, thanks, didn't realize he was a core.  great to hear, now we just need more review cycles from him :)17:50
med_:)17:50
*** dgollub has quit IRC17:50
*** terry7 has joined #openstack-meeting17:50
Divakarin case the option of placing it in a resource pool is required, then let's address that by representing the resource pool as a compute node17:50
Sabari_I will take a look at the blueprint and the patch sets17:50
kirankv@Sabari: would like to see your patch set too since it addresses the bug17:51
hartsocksIs this about ResourcePools or ResourcePools in clusters?17:51
*** marun has quit IRC17:52
Sabari_@kirankv Sure, I am working on it.17:52
Divakarif we start putting the scheduler logic inside the driver we will break other logical constructs17:52
Sabari_@hartsocks I was talking about resource pools within the cluster.17:52
hartsocks@Sabari_ then I have to agree with the assessment about not bothering with a fix. However, stand-alone hosts can have resource pools.17:53
hartsocksIs this a valid use case:17:53
hartsocksAn administrator takes a stand-alone host...17:53
hartsocks… creates a Resource Pool "OpenStack"17:54
*** seanrob has quit IRC17:54
hartsocksand configures the Nova driver to only use the "OpenStack" pool17:54
hartsocks?17:54
kirankv@hartsocks: agree that fix is required for stand-alone hosts17:54
hartsocksShould we allow that?17:54
Sabari_Yes, that's valid too, but that cannot be done at this moment17:54
hartsocks@Sabari_ so that's a *slightly* different problem. Is that worth your time?17:55
kirankvbut I'm not sure how ESX is mostly used - 1. stand-alone 2. using vCenter? I'm thinking it's #2, using vCenter17:55
hartsocksI think I wholly agree that we don't need to change the Cluster logic though...17:55
Divakarthe solution could be similar to the regex approach we followed for datastore selection17:55
*** cp16net|away is now known as cp16net17:56
hartsocks@kirankv good point.17:56
hartsocksYou could have an ESXi driver change and a slightly different change in the VCDriver too17:57
hartsocksI'll leave that up for the implementer.17:57
Sabari_we still support ESXDriver so that a nova-compute service can talk to a standalone host, right? In that case, shouldn't we support resource pools?17:57
hartsocks@Sabari_ I think you have a valid point.17:58
hartsocksAnything else on this topic before I post some links needing code reviews (by core devs)17:58
hartsocks?17:58
DivakarIn the cloud semantics do we really want to subdivide a host further into resource pools?  I agree we will need this in a Cluster though17:59
Sabari_I think I need to look at the blueprint and the related patch on how it addresses the issue in clusters. In the meanwhile, I don't have anything more17:59
hartsocks@Divakar I'm allowing for a specific use case where we have an admin "playing" with a small OpenStack install. I think we will see that more and more.18:00
*** johnthetubaguy has quit IRC18:00
hartsocksWe're out of time...18:00
hartsocksI'll post some reviews...18:00
hartsocks#topic in need of reviews18:00
*** openstack changes topic to "in need of reviews (Meeting topic: VMwareAPI)"18:00
hartsocks•https://review.openstack.org/#/c/29396/18:00
hartsocks•https://review.openstack.org/#/c/29552/18:00
hartsocks•https://review.openstack.org/#/c/30036/18:00
hartsocks•https://review.openstack.org/#/c/30822/18:00
Davieyhartsocks: just those 4?18:01
Sabari_Thanks Shawn18:01
hartsocksThese are some patches that looked like they were ready to get some +218:01
DavieyIs there a better way we can track in-flight reviews18:01
Daviey?18:01
hartsocksAlso...18:01
hartsocks#link http://imgur.com/XLINkt318:01
hartsocksIf you add me to your review it will end up in this list.18:01
*** SergeyLukjanov has joined #openstack-meeting18:01
*** michchap has joined #openstack-meeting18:02
hartsocksIf I look (just before the meeting) and see a "bunch" of +1 votes I'll consider it ready to get some "+2" love.18:02
hartsocksTalk to you next week!18:02
hartsocks#endmeeting18:03
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"18:03
openstackMeeting ended Wed Jun 12 18:03:03 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:03
openstackMinutes:        http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-06-12-17.01.html18:03
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-06-12-17.01.txt18:03
openstackLog:            http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-06-12-17.01.log.html18:03
*** tjones has left #openstack-meeting18:03
hartsocksBTW:  head over to #openstack-vmware for continued side discussions!18:03
Davieythanks hartsocks18:03
*** kirankv has left #openstack-meeting18:05
*** cbananth has quit IRC18:06
*** Sabari_ has quit IRC18:06
*** chinmay has quit IRC18:07
*** Divakar has quit IRC18:07
*** garyk has joined #openstack-meeting18:09
*** dcramer_ has joined #openstack-meeting18:12
*** noslzzp has quit IRC18:13
*** michchap has quit IRC18:15
*** zhiyan has quit IRC18:15
*** rushiagr has quit IRC18:16
*** haomaiwang has quit IRC18:17
*** haomaiwang has joined #openstack-meeting18:17
*** noslzzp has joined #openstack-meeting18:18
*** cody-somerville has joined #openstack-meeting18:19
*** markmcclain has quit IRC18:20
*** gareth_kun has joined #openstack-meeting18:24
*** tedross has joined #openstack-meeting18:24
*** marun has joined #openstack-meeting18:24
*** Mandell has joined #openstack-meeting18:25
*** markmcclain has joined #openstack-meeting18:26
*** markwash has joined #openstack-meeting18:27
*** yugsuo has quit IRC18:28
*** dcramer_ has quit IRC18:29
*** mestery has quit IRC18:32
*** tedross has quit IRC18:34
*** dolphm has quit IRC18:35
*** dolphm has joined #openstack-meeting18:37
*** ayoung_ has quit IRC18:40
*** ayoung_ has joined #openstack-meeting18:40
*** michchap has joined #openstack-meeting18:40
*** dcramer_ has joined #openstack-meeting18:43
*** haomaiwang has quit IRC18:43
*** rods has quit IRC18:45
*** rwsu_ has joined #openstack-meeting18:48
*** portante has joined #openstack-meeting18:48
*** Eustace has quit IRC18:50
*** dguitarbite has joined #openstack-meeting18:51
*** novas0x2a|laptop has joined #openstack-meeting18:51
*** rwsu has quit IRC18:51
*** dguitarbite has quit IRC18:52
*** michchap has quit IRC18:52
*** mdomsch has quit IRC18:53
*** jcoufal has joined #openstack-meeting18:54
*** cody-somerville has quit IRC18:57
*** dolphm has quit IRC19:00
*** dolphm has joined #openstack-meeting19:01
notmynameswift meeting time. who's here?19:01
portantepeter19:01
davidhadashi19:01
creihtkinda19:01
gareth_kun+119:01
creiht:)19:01
*** shri has joined #openstack-meeting19:01
notmyname#startmeeting19:01
openstacknotmyname: Error: A meeting name is required, e.g., '#startmeeting Marketing Committee'19:01
torgomatico/19:01
notmyname#startmeeting swift19:01
openstackMeeting started Wed Jun 12 19:01:56 2013 UTC.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: swift)"19:01
openstackThe meeting name has been set to 'swift'19:02
*** clayg has joined #openstack-meeting19:02
notmynamewelcome all. agenda (or topics) for today are on https://wiki.openstack.org/wiki/Meetings/Swift19:02
davidhadasan agenda is a plus for future meetings also :)19:02
notmynamefirst up, reminder about the hong kong summit19:02
notmynamehong kong summit19:02
notmynamenov 5-8: http://www.openstack.org/summit/openstack-summit-hong-kong-2013/19:02
notmynameus visa: http://hongkong.usconsulate.gov/acs_hkvisa.html19:02
notmynameget your passport and visa ready if you haven't yet19:03
gareth_kunwelcome19:03
notmyname#info get travel arrangements ready for the HK summit19:03
creiht"They must have a U.S. passport valid for at least six months"19:03
creihtinteresting19:03
notmynamealso, I believe hotel blocks open up this week (according to an email sent last week), so that stuff is coming up soon too19:04
creihtoh I guess that means it doesn't expire within 6 months19:04
notmynamecreiht: I think that's common to most international travel19:04
davidhadascreiht: standard procedure - same for pepole going to the US19:04
chmouelyep19:04
shriwhen is the call for papers?19:04
creihtnotmyname: yeah my first thought was that it had to have been valid for 6 months19:04
*** hartsocks has left #openstack-meeting19:04
notmynameshri: not sure on the conference side. the summit (tech) side will be much closer to the event19:05
notmynameshri: normally about a month out19:05
* creiht needs to figure out if he would like to visit anywhere else in china19:05
davidhadascreiht: you need to be at least 6 months old also19:05
*** garyk has quit IRC19:05
notmynamecreiht: let me answer that for you: yes19:05
creihtdavidhadas: hehe19:05
notmynameok, next release19:05
notmyname#topic next release19:05
*** openstack changes topic to "next release (Meeting topic: swift)"19:05
notmynamewe had a bunch of good patches merged yesterday19:06
notmynametorgomatic is working on an affinity-on-writes patch right now. that's the last major part of global clusters, and once that lands I'd like to do a release19:06
notmynameit's been a while, and we've had lots of other stuff land too19:06
davidhadasnotmyname: how about trying to get account acls in before the next release?19:06
davidhadasits in the oven...19:07
torgomaticfor certain values of "right now", at least. Actual right now, I'm here in a meeting. ;)19:07
notmynamethat's my next question: what else needs to land? any other outstanding patches that are in gerrit now?19:07
davidhadasall the infrastructure is there now that we merged get_info19:07
creihtnotmyname: should we give some time with an RC before we release so that those region features can be tested fairly well before release?19:07
*** kebray has quit IRC19:07
notmynamecreiht: yes, to some extent. we're testing it as we go with a global cluster we have19:08
creihtnotmyname: I imagine dfg would like to get his slo of slos patch in19:08
creihtbut he has been on vacation19:08
notmynameya19:08
creihthe should be back soonish19:08
notmynameI'd like to have the final patches land next week and then do a release on June 2719:08
notmynamethat's 2 weeks from tomorrow19:08
creihtthat seems reasonable19:09
davidhadasnotmyname: a bit pressed for account acls - I assume it would take 2 weeks to score19:09
davidhadasSo few more days would be great19:09
davidhadasIf we get it great - if not, next time19:09
notmynamedavidhadas: when do you expect to submit it?19:10
creihtdavidhadas: I imagine there will be room for more releases before Havana19:10
notmynameyes, there will be19:10
davidhadasI will try to send a first version before the weekend - working now on tests and need to close one or two items19:10
notmynameok, great19:11
* portante working towards getting a series of patches for DiskFile refactoring into the release following June 27th19:11
davidhadasLet's target adding it in19:11
notmynamethe next release will be 1.919:11
notmynameI don't think 1.8.1 is appropriate, but I'm willing to hear arguments19:12
*** gholt has joined #openstack-meeting19:12
portanteagreed19:12
davidhadassounds sexy to me19:12
notmynamedavidhadas: that's what we're going for ;-)19:12
*** mestery has joined #openstack-meeting19:12
creihtnotmyname: no 2.0? :)19:12
notmynamenot yet :-)19:13
davidhadascreiht: let's do the 2.* in line with an OpenStack release19:13
creihtglobal clusters seems like a big enough feature to bump to 2.019:13
creihtnot sure if there will be a feature more important than that in a while19:13
davidhadasbut I have no idea if after 1.9 it's 2.0 or 1.a or 1.1019:13
creihtunless we are going to do like 1.10 1.11 etc19:13
notmynameI'm fine with 1.10, 1.11, etc19:14
*** kpg2012 has joined #openstack-meeting19:14
torgomaticthat's the nice thing about integers: there are so many of them19:14
creihtnotmyname: what would constitute 2.0?19:14
portanteperhaps save 2.0 bump to API changes?19:14
notmynamecreiht: not sure yet, but I'd tend to agree with portante19:14
creihtmeh19:14
creiht:)19:14
notmynamebreaking changes19:14
portanteyes19:14
creihtwell nothing would break because you would still have backwards compat :)19:14
notmynamewe've got some other stuff planned that may constitute a more major version bump :-)19:15
portantedo tell?19:15
notmynamenot yet :-)19:15
creihtnotmyname: meh19:15
portante;019:15
creiht:)19:15
notmynamelet's move on though, to other topics for today19:15
creihtnotmyname: if it is that super awesome, then it can be 3.0 :)19:15
notmyname#info next release targeted as 1.9 for June 2719:15
davidhadasAm I still connected - last message I see is from 22:1219:16
notmynamecreiht: it's teh awesomz!!11!!!19:16
notmynamedavidhadas: looks like lag19:16
notmyname#topic API specs19:16
*** openstack changes topic to "API specs (Meeting topic: swift)"19:16
notmynamehttps://wiki.openstack.org/wiki/Swift/API19:16
torgomaticdavidhadas: I see your message, though if you're not seeing mine, I don't know how much good this reply will do19:16
notmynameI've added some stuff to the API wiki page19:16
notmynamemostly comments based on when things were implemented19:17
notmynameI'd love more feedback on the wiki page19:18
*** davidhadas_ has joined #openstack-meeting19:18
portantedid we arrive at a decision for how to gauge what is in or out?19:18
davidhadas_I got disconnected19:18
portantefrom the last meeting, I thought notmyname was considering options19:18
*** alexpec has joined #openstack-meeting19:18
davidhadas_What is the current topic?19:18
portanteAPI specs19:19
notmynameportante: ya, but I didn't hear from anyone :-)19:19
notmynameI'm becoming convinced that we really just need a decision to be made19:19
notmynamelike "everything in folsom or essex is v1.0"19:19
*** michchap has joined #openstack-meeting19:19
torgomaticif only we had some kind of leader to make that decision ;)19:19
portante;019:19
notmynamehmm ;-)19:19
creihtthat could make a good decision?19:19
creiht;)19:19
* portante ow!19:19
notmynameI'll put together a more formal proposal around that and send it to the mailing list19:20
notmynamecreiht: ouch!19:20
creihtI kid I kid :)19:20
davidhadas_lets call clay and vote19:20
claygapi v1.0 is so 3 years ago19:20
notmyname#action notmyname to make a formal proposal to the mailing list with decisions made as to what's in and out19:21
notmynameok, next topic19:21
notmyname#topic internal API changes19:21
notmynameHow backwards compatible do we need to be with internal (i.e. non-user-facing) interfaces? This question was inspired by https://review.openstack.org/22820, which changes the format of a data structure in the WSGI environment that other middlewares may use, though none in the Swift codebase do.19:21
*** openstack changes topic to "internal API changes (Meeting topic: swift)"19:21
notmynamewho proposed that item? ^19:21
torgomatico/19:21
gareth_kunthat's my issue19:21
torgomaticI'm just trying to get a sense of what people feel is important to have compatibility on19:22
*** martine has quit IRC19:22
torgomaticobviously, the client API must not change19:22
torgomaticand obviously, if I rename a method in swift.common.utils or something, that's okay19:22
portantebut isn't that talking about on disk data?19:22
davidhadas_torgomatic: its just the same since someone may use it19:23
portanteacls stored in metadata on disk19:23
* clayg snickers at "obviously" when coming from the 204->200 point-the-clients-at-the-spec guy19:23
notmynamewe can't not change anything because somebody somewhere may change it19:23
creihtlol19:23
*** otherwiseguy has quit IRC19:23
davidhadas_we simply can't have such restrictions assuming we want to grow and prosper19:23
gareth_kuntorgomatic: is that a good way to change something and have release notes for the next release?19:23
torgomaticportante: the on-disk ACLs aren't the contentious part there; it was the changes to the WSGI environment19:23
torgomaticso, just curious what people usually care about19:23
torgomaticportante: the on-disk ACLs are backwards-compatible in that change19:24
creihtin the past, we have chosen to keep things for backwards compatibility with a deprecation warning19:24
creihtand then take it out in a future version19:24
clayggah, that review is huge - do I need to understand that issue to under the current topic?19:24
davidhadas_any version should support growing from previous one(s) with whatever is on disk19:24
davidhadas_But I am not sure if one or ones and how many ones19:24
creihtwe've always had upgrade in place as well19:25
creihtif it is a data format issue19:25
torgomaticclayg: the issue is: an earlier version of that patch changed the value of env['keystone.identity'] that the middleware was emitting (leaking?)19:25
notmynameupgrade plans are important (eg ring formats, pickle->json)19:25
* portante wonders if there is a good summary somewhere about that discussion19:25
*** anniec has joined #openstack-meeting19:25
*** timello has quit IRC19:25
claygtorgomatic: yeah but so that's the same thing as changing a method name in utils right?19:26
creihtOne does not simply run an upgrade script on a swift cluster :)19:26
torgomaticclayg: seemed that way to me, but chmouel disagreed19:26
torgomatichence this discussion :)19:26
portanteclayg: not so sure, because a filter can be written and used outside of the main swift code base19:26
claygchmouel: wtf?19:26
*** timello has joined #openstack-meeting19:26
creihtif there are data changes that are cluster-wide, the updates *must* happen in place19:26
notmynameheh19:26
portantethat is why we are working on the DiskFile refactoring19:26
*** dolphm has quit IRC19:27
torgomaticcreiht: definitely19:27
claygportante: yeah middleware that's not in core is tough to keep up with master19:27
creihtI think we should still keep backwards-compatible functions that are deprecated for a release or two so that outside utilities can catch up19:27
claygportante: but I mean this is way not as bad as something like swob, or the fact that our memcache client gripes about a deprecated kwarg "timeout"19:28
davidhadas_We should have defined APIs - e.g. the auth API is a good example which we agree to not change (only extend - unless we do a major release for example 2.0)19:28
chmouelclayg: heu i'm talking about the middleware19:28
notmynameportante: but isn't the difference witht he DiskFile stuff that it is introducing a new API that is relatively stable (ie we agree to to change it). no such agreement is there with env vars19:28
chmouelthe api that middleware is based on19:28
chmoueland get into the chain19:28
chmouelnot swift.common.utils and such19:28
creihtnotmyname: I think that there is still an implicit agreement to try to play well together19:29
notmynameof course19:29
portantenotmyname: yes, but, as an example, gluster today has to track the code changes made to make sure we are not broken by any code change assumptions19:29
davidhadas_But the rest... there is no difference between what's in memcache, what's in the environment, and what's in the runtime as for function names, parameters, etc.19:29
portantethe external-middleware models will have to do the same, and to verify one did not miss anything is a real pain19:29
* portante shouts up from the hole we dug19:30
davidhadas_portante: so the DiskFile should fix that hopefully as we are making it an official API that will not change unless we declare and warn of that change etc.19:30
portanteagreed19:30
*** rnirmal has quit IRC19:30
portanteso we have to be careful to define clearly what external filters can expect for their API19:30
claygheh, well it's not even going to be "official official" - it just reduces the surface area a bit19:31
* portante look, another API definition popping up there too19:31
davidhadas_So as far as I can see there are only two levels - either an official API - in which case we proactively make sure we can change before we change or at least pre-warn, and all the rest where we continue as usual and try to be fair19:31
clayglike if you "reach around" the abstraction you're prolly gunna get bent19:31
*** michchap has quit IRC19:31
davidhadas_but not prevent progress19:31
portanteclayg: true19:31
notmynameso the issue on this one is the keystone.identity env var. seems that the var is already "scoped" by name to be keystone. I can't imagine that eg swiftstack auth should require keystone's data structure to not change19:32
creihtyeah the internal api thing scares me a bit19:32
creihtI don't want to hamper innovation inside of swift19:32
*** noslzzp has quit IRC19:32
claygdavidhadas_: well having a good abstraction that's more or less stable *accelerates* progress, and not just disk file, if we renamed "get_logger" or "readconf" we'd break a bunch of stuff19:32
creihtfor example, what if we decide to rewrite the object server in C to get better io?19:33
creihtthat api abstraction totally goes away19:33
*** dolphm has joined #openstack-meeting19:33
portanteI would have to review the discussions more, but can somebody report on what existing middleware might get burned by the keystone.* wsgi change?19:33
*** dolphm has quit IRC19:33
creihtIt would seem more important to define the REST api between the services first19:33
claygcreiht: yeah but we already have a loose internal api that allows that19:33
davidhadas_clayg: agreed. But whatever we decide to fixate - we need to think about it before we fixate it and then say it out loud19:33
notmynamecreiht: not its "external" one. ya, that19:33
claygheh yeah19:33
creihtlike define what the object server REST api is19:33
torgomaticI dunno, I like the Linux kernel model: syscalls Shall Not Change(tm), and if an internal API gets changed, all callers get updated by the changer19:34
notmynamecreiht: isn't that what portante is doing with LFS? :-)19:34
*** dolphm has joined #openstack-meeting19:34
torgomaticif you're maintaining a driver outside of the kernel, it's up to you to keep up with internal API changes19:34
notmynametorgomatic: but that's only for stuff in the kernel right?19:34
* portante hopes he is doing that19:34
creihtnotmyname: I'm not entirely sure... I was under the impression that it was an internal api from a code perspective19:34
creihtbut then again I'm only going by what I hear you guys talk about19:34
*** vipul is now known as vipul|away19:35
portantecreiht: it will end being external once we get more than one backend implemented that is available to the community19:35
portanteend up19:35
creihtportante: ok, I guess I need to look at it closer then19:35
davidhadas_Let's create a wiki page where we define what we perceive as an external API that extensions can rely on to not change without prewarning19:36
claygI think we're pretty loose on "defined" api's (previous topic was, yeah we should have one for the *public client api*) - i'm not sure what we're going to decide here19:36
portantecreiht: thanks, that would be great19:36
*** chuckieb has quit IRC19:36
notmynameperhaps "internal" vs "external" is confusing? an API is external, but it doesn't mean it's available to an end user19:36
clayg... except maybe - yes continue to try and not break stuff other people are probably using19:36
notmynameit's all perspective19:36
torgomaticbasically, I want to know what sorts of non-client-breaking changes are likely to piss off other Swift devs if they get approved19:36
portantenotmyname: true19:36
creihtportante: of course my list of things to look at is ever increasing :/19:37
davidhadas_torgomatic: so lets document that19:37
claygtorgomatic: easy, the ones that make some other swift dev have to change their code19:37
notmynametorgomatic: so how do we answer that?19:37
torgomaticnotmyname: well, I was hoping to do that with a quick straw poll here, just to get a feel for things19:37
portantetorgomatic: go for it19:38
torgomaticsure19:38
portantetorgomatic: can you frame it succinctly?19:39
portantethe straw poll that is?19:39
torgomaticso, who here gets annoyed when things change in the WSGI environment? how about utility functions? other stuff?19:39
creihtstraw man poll?19:39
claygtorgomatic: but it's impossible: even your lowest bar "and obviously, if I rename a method in swift.common.utils or something, that's okay" is *NOT* ok - you can't change the name of get_logger19:39
creihtlol19:39
claygis there *anything* else you can think of that we could *ALL* agree is totally safe to change?19:40
notmynamedoes all this come down to "don't break things we know people are using and have good release notes"?19:40
chmouelnotmyname: +119:40
creihtI think it is a lot like how we have handled api things19:40
creihtadditive is fine19:40
torgomaticnotmyname: sure, could be19:40
notmynameand it may be that we don't know what people are using until a review19:40
creihtif you are going to radically change something, then we need some extra care19:40
litong@notmyname, a product in my view should only insist on conformance to the external API19:41
litonganything developed based on internal methods is at risk anyway.19:41
notmynamecreiht: agreed, but realizing that we can be a little more free with non-client things19:41
portantebut is middleware based on internal methods?19:41
claygI think chmouel has to define why he thinks the wsgi environ is sacred, or a similar +2 is likely to come up19:41
litongthere is no one there to ensure it will always be there like that.19:41
claygit's not obvious to me why that would be consumed by anyone except the guy setting it19:41
chmouelclayg: I'd say whatever other middleware which is non core would use19:41
notmynameportante: IMO, yes. middleware is an implementation detail not a definition of "optional"19:41
litongwe should not define that many APIs, it will be a big problem down the road for a product.19:41
creihtnotmyname: sure, I just expect a "best effort" at making sure we don't make changes that are likely to break things19:42
notmynameI agree19:42
torgomaticalright19:42
notmynametorgomatic: what else would you like to see?19:42
creihtand that goes for those that review the changes as well19:42
chmouelcreiht: good for me19:42
notmynameok19:42
torgomaticnotmyname: I'm done19:42
portantebut we allow other folks to write middleware and use it with core, right?19:42
* davidhadas_ volunteers to try and create a list19:42
* portante willing to be done19:42
davidhadas_with typos!19:42
notmyname#agreed don't make changes that are likely to break things19:42
notmyname:-)19:42
claygportante: I've done that with many a middleware sure19:42
creihtlol19:43
notmyname#topic DiskFile status19:43
*** openstack changes topic to "DiskFile status (Meeting topic: swift)"19:43
notmynameportante: can you give a 3 minute summary?19:43
creihtI think we have been fairly reasonable so far... there have been a couple of misses, but I think we have learned from them19:43
davidhadas_notmyname: you skipped some items19:43
notmynamesummary?19:43
portanteyes19:43
davidhadas_in the agenda19:43
notmynamedavidhadas_: I'm not going in order19:43
portanteworking through test case failures19:43
claygchmouel: so this was a *real* breaking change between the keystone-client middleware and the keystone-auth middleware in swift?19:44
notmynamedavidhadas_: (I want enough time to discuss yours, so a quick DiskFile overview should work)19:44
portantestumbled on quarantine semantics of when an object gets quarantined19:44
claygchmouel: like ours was getting changed to look for a variable somewhere the upstream middleware wasn't putting it?19:44
portantebased on refactoring of the APIs19:44
portanteworking to propose that topic separately19:44
chmouelclayg: which one are you talking about the review about the ACL ?19:44
portanteworking on refactoring the reader code like the writer19:44
notmynamechmouel: clayg: cna you discuss in #openstack-swift?19:44
notmynameportante: is that your next patch?19:45
gareth_kunclayg: breaking users who use env['keystone.identity'] defined in swift, not keystone headers19:45
portantenext patch will likely be the quarantine refactoring19:45
portantethen reader done like writer19:45
portantethen API definition (this is private, this is public, this is reference implementation kinda stuff)19:46
notmynameok19:46
portantewould love input on the quarantine issue from someone willing to listen to the problem19:46
*** martine has joined #openstack-meeting19:46
portantethat is the status for DiskFile19:46
notmynameportante: ok thanks19:46
notmynamelet's discuss quarantine issues in IRC or in a review19:47
notmynameIRC=#openstack-swift19:47
portantegreat19:47
notmyname#topic path control19:47
*** jcoufal has quit IRC19:47
*** openstack changes topic to "path control (Meeting topic: swift)"19:47
notmynamePath Control: Create an open interface to control the path used by Swift within devices (where within the device a/c/o/tmp/quarantined/async are placed).19:47
notmynameMake sure any code approaching the device goes via a central function (e.g. storage_directory() in utils), allowing monkey patching it or offering other ways to extend it19:47
notmynameWhile we do it, we can extend storage_directory() to also append the basedir and device since this seems to be required by all its users.19:47
notmynameI'm not really sure what this is. davidhadas_?19:47
davidhadasI wanted to hear your views on this one19:48
portantethis feels like it will end up being related to the "backend" in use once we get the DiskFile refactoring, and DatabaseBroker refactoring, in place19:48
notmynamehave a single function in utils that determines where stuff lives on disk? seems reasonable. maybe even something good to have in DiskFile itself19:48
davidhadasfirst is getting all swift code that handles the path to go via a central place - we can use this for ring doubling where we want to control the path19:48
portantenotmyname: agreed19:49
notmynamedavidhadas_: what's your concern?19:49
davidhadassecond, I want the path control function to be made an external API such that installations can decide to change it at will19:49
portanteEssentially, when one looks at the three servers, there is an "understood" way of dealing with things on disk unites DatabaseBroker classes and DiskFile19:49
davidhadasIf this is ok with everyone I have no concerns19:50
creihtdavidhadas: can you share a use case to help us understand it better?19:50
portante*that unites19:50
notmynamedavidhadas: it sounds like something reasonable to be in DiskFile (eg portante doesn't want to name things the same way on disk in gluster)19:50
*** marun has quit IRC19:50
*** rkukura has joined #openstack-meeting19:51
portantenotmyname: or even for XFS today, where we would want to use a different path for tempfiles that take advantage of XFS allocation groups behaviors19:51
davidhadas1. Disk doubling. 2. I can do SAIO with multiple swift regions on one with this 3. We may decide to use a special path for containers and treat it with more cache based on that path19:51
notmynamecreiht: I've got a hypothetical use case of having different storage policies using different top-level directories (objects/ and objects-reduced-redundancy/ and etc)19:52
portantedavidhadas: can't that be done today with device paths and mount_check = False?19:52
davidhadas3 is kind of hardware and system specific - but different people may have their own19:52
portanteThis feels very much tied to backends for DiskFile19:52
davidhadasportante: No - this allows changing the path on the fly as well19:52
portantebut when wearing bear glasses ...19:53
creihtI guess I was more curious why you would change it on the fly?19:53
* portante echoes creiht19:53
davidhadasportante: it is related to backends - I think it is a different issue from DiskFile - but I also think it can be combined into a threesome with DiskFile and DB19:53
*** bgorski has joined #openstack-meeting19:54
davidhadascreiht: it allows for example to create two swift domains in one - as you know we are into domains19:54
davidhadas:)19:54
portantea backend implementation might allow for this, but not sure changing specifically to get flexibility for the current implementation is the right way to go19:54
davidhadasDifferent domains for example may have different QoS19:54
creihtI guess it is difficult for me to visualize this without seeing some code19:55
portantethat most likely can be done with device paths, no?19:55
notmynamedavidhadas: I'd like to see the way things are stored more abstracted. eg what if my storage doesn't use directories? in that way, this proposal seems to be most appropriate under something like DiskFile rather than a sibling to it19:55
torgomaticcreiht: I am in the same boat with you19:55
*** wirehead_ has joined #openstack-meeting19:56
davidhadasnotmyname: I can't see that, but I am ok with having this combined as long as I am not forced to implement a DiskFile simply to control the path19:56
notmynamedavidhadas: a DiskFile that allows you to control the path? ;-)19:56
portantedavidhadas: one should be able to subclass an implementation19:56
*** tspatzier has joined #openstack-meeting19:56
davidhadasportante: sure np19:56
portantegluster is doing that for the current implementation19:57
*** stevebaker has joined #openstack-meeting19:57
portantewe just tweak a few things and reuse the reader19:57
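As an illustration of that subclassing approach, a sketch under loud assumptions: the module path is a guess for this era, and the _datadir_path hook is invented, not the real DiskFile interface:

    import os
    from swift.obj.diskfile import DiskFile  # assumed location; may differ

    class FlatDiskFile(DiskFile):
        # hypothetical subclass: override a single (invented) path hook so
        # objects live flat under the datadir rather than in the
        # partition/suffix/hash layout
        def _datadir_path(self, datadir, partition, name_hash):
            return os.path.join(self.devices, self.device, datadir, name_hash)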
notmynamedavidhadas: does all that sound ok to you? questions answered?19:57
davidhadasyes19:57
*** TravT has joined #openstack-meeting19:57
notmynamegreat19:57
notmynamethanks for bringing it up19:57
*** shardy has joined #openstack-meeting19:58
*** michchap has joined #openstack-meeting19:58
notmynamereminder: next meeting is in 2 weeks, get your passport ready if you are going to HK, release tentatively on June 2719:58
notmynamethanks for your time19:58
notmynamehave a great day19:58
notmyname#endmeeting19:58
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"19:58
openstackMeeting ended Wed Jun 12 19:58:38 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-06-12-19.01.html19:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-06-12-19.01.txt19:58
openstackLog:            http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-06-12-19.01.log.html19:58
*** stevemar has quit IRC19:58
*** alexpec has left #openstack-meeting19:58
*** dkehn has quit IRC19:59
*** dprince has quit IRC20:00
*** zaneb has joined #openstack-meeting20:00
shardy#startmeeting heat20:00
openstackMeeting started Wed Jun 12 20:00:30 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: heat)"20:00
openstackThe meeting name has been set to 'heat'20:00
shardy#topic rollcall20:00
*** openstack changes topic to "rollcall (Meeting topic: heat)"20:00
zanebo/20:00
tspatzierHi all20:00
radixhello20:00
stevebaker\o/20:01
bgorskio/20:01
*** portante has left #openstack-meeting20:01
TravTo/20:01
shardyasalkeld, sdake, jpeeler, therve around?20:01
asalkeldhi20:01
*** andrew_plunk has joined #openstack-meeting20:02
jpeelerhey20:02
*** Banix has joined #openstack-meeting20:02
*** Banix has left #openstack-meeting20:02
*** jasond has joined #openstack-meeting20:02
andrew_plunkhello20:02
shardyOk, hi all, let's get started20:02
*** kebray has joined #openstack-meeting20:02
shardy#topic Review last week's actions20:02
*** openstack changes topic to "Review last week's actions (Meeting topic: heat)"20:02
kebrayI'm here.20:02
sdakeo/20:03
shardy#link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-05-20.00.html20:03
shardyOnly one action20:03
SpamapSo/20:03
shardy#info asalkeld/zaneb to start ML discussion re stack metadata20:03
*** Banix has joined #openstack-meeting20:03
zanebhey, that actually happened :)20:03
*** timductive has joined #openstack-meeting20:03
sdakegrats zaneb ;)20:03
zanebthanks asalkeld20:03
shardyCool, thanks guys20:03
asalkeldyip, that was good20:03
*** gareth_kun has left #openstack-meeting20:04
*** randallburt has joined #openstack-meeting20:04
shardy#topic h2 blueprint status20:04
*** openstack changes topic to "h2 blueprint status (Meeting topic: heat)"20:04
*** tkammer has joined #openstack-meeting20:04
shardySo just wanted to make sure everyone is happy with what they're doing for h2, what they have assigned etc20:05
*** clayg has left #openstack-meeting20:05
sdakeshardy I was thinking of making a new blueprint and tackling it in h2 as well - injection of data so we can use gold images20:05
randallburtno complaints except the tempest gating is blocked on a nova bug :(20:05
sdakei'll write a blueprint when i get done with my 3rd fulltime job at rht :)20:05
shardy#link https://launchpad.net/heat/+milestone/havana-220:06
SpamapSsdake: huh?20:06
SpamapSsdake: we can use gold images now20:06
shardyrandallburt: Yeah the gate failures are a bit frustrating20:06
sdakeSpamapS i'll msg you when I write blueprint it will make more sense20:06
SpamapSsdake: mmk20:06
sdakeproblem we found with RHEL images that can't be easily resolved20:07
shardyOk, cool, well if anyone else has anything they expect to do for h2, please make sure it's captured in the plan by raising or targeting the bp/bug20:07
sdakeSUSE may have same problem20:07
shardysdake: what problem is that?20:07
sdakewait for blueprint i'll explain there20:07
shardyOk, cool20:07
shardy#action sdake to raise BP re gold images20:08
shardyrandallburt are you and andrew_plunk happy with the providers tasks you have allocated?20:08
randallburtshardy:  yes20:09
shardyok, cool20:09
randallburtI'm about to start on the next bits20:09
shardyanyone have anything else related to h2 BP's or bugs they want to raise?20:09
randallburtthat being said, was leaving the json params for now since there was some controversy, but it's not a blocker imo20:09
randallburtI feel confident we'll get something workable by h220:10
shardyrandallburt: OK, as long as we have clear direction and progress on the main bits then all good :)20:10
*** michchap has quit IRC20:10
shardylooks like it could be a short meeting today, anyone have any other topics before open discussion?20:11
*** m4dcoder has joined #openstack-meeting20:11
sdakeheat-templates likely needs a launchpad home20:11
shardysdake: it already does20:11
*** marun has joined #openstack-meeting20:11
sdakecool then nm :)20:11
shardyhttps://launchpad.net/heat-templates20:11
sdakehttps://bugs.launchpad.net/heat/+bug/1186791 is actually a heat-templates bug20:11
shardy;)20:11
uvirtbotLaunchpad bug 1186791 in heat "NoKey template uses -gold images which no longer exist" [Medium,Confirmed]20:11
sdakeand fixed IIRC20:12
stevebakernifty20:12
*** seanrob has joined #openstack-meeting20:12
shardy#topic Open Discussion20:12
*** openstack changes topic to "Open Discussion (Meeting topic: heat)"20:12
*** shri has left #openstack-meeting20:12
*** ayoung_ is now known as ayoung20:12
wirehead_So, there's http://developer.rackspace.com/blog/rackspace-autoscale-is-now-open-source.html20:12
shardy#link http://developer.rackspace.com/blog/rackspace-autoscale-is-now-open-source.html20:13
asalkeldI have a small one20:13
asalkeldhacking rules20:13
shardywirehead_: do you see this as contributing to or competing with the Heat AS efforts?20:13
asalkeldholding off until wirehead_ done ...20:14
wirehead_shardy: contribute.  Obviously, you can't just copy-paste our code into yours, given that it's built around Twisted and Cassandra.20:14
wirehead_where I recognize that 'our' and 'yours' are really 'our' and 'ours'20:15
*** cp16net is now known as cp16net|away20:15
wirehead_But I would view it as a failure if we're maintaining long-term Otter and Heat AS.20:15
*** cp16net|away is now known as cp16net20:15
shardywirehead_: Ok, I've not looked into the details but wanted to clarify if you're still onboard with the plan of incrementally adding features/api etc to heat, with a view to separating the AS functionality potentially later20:15
shardyrather than proposing a new project from the outset20:16
zanebshardy: as I understand it, this was a response to our request to open the source :)20:16
shardythe former approach is what we discussed at summit, but things have been very quiet since ;)20:16
*** tkammer has quit IRC20:16
SpamapSSounds to me like the team had a deadline, hit it, and then open sourced what they produced.20:16
sdakewirehead_ a good first go would be to incorporate the api you have developed into heat proper20:17
zanebshardy: so that we can see the direction things are heading before the code arrives20:17
shardyzaneb: aha, I misinterpreted it as more of a project unveiling, my bad :)20:17
SpamapSBut now the plan is to contribute what you've learned to Heat AS ?20:17
wirehead_The plan is that Heat will take up our AS API and learnings.  And overall, Heat will be able to scale to Rackspace-hosted scale.20:17
SpamapSAs in, "we tried X, it does not work. Do Y instead."20:17
*** marun has quit IRC20:18
zanebshardy: yah, I think Adrian said that accidentally creating that impression was one of the reasons they were reluctant to release the code in the first place20:18
SpamapSwirehead_: ah that too. One API to rule them all. :)20:18
wirehead_I'm a little slow with the typing, yes20:18
wirehead_And if the Heat API changes, I'd like to see the Otter API change at the same time.20:18
wirehead_But that's something that we've had chats with radix and therve about20:19
wirehead_(they are both otherwise occupied)20:19
sdakeotter graphic shell game: Heat cloudwatch -> Ceilometer, Otter AS -> Heat ;)20:19
shardywirehead_: Ok, cool, well let's all take a look and we can continue discussions, but if you have specific stuff you want to happen for havana, e.g. API, it would be good to start getting some details defined so we know what we're aiming for20:19
*** marun has joined #openstack-meeting20:19
wirehead_But, as you presume, RAX wrote that to meet a short-term important deadline.20:19
*** vishy has joined #openstack-meeting20:20
shardyOk, well thanks for the heads-up, let's all take a look and continue discussion on the ML over the next few days20:21
*** acfleury has quit IRC20:21
wirehead_Sure thing.20:21
shardyasalkeld: you had something to raise?20:21
asalkeldre: hacking rules20:21
asalkeldare we required to implement all of them?20:22
asalkeldas an openstack project20:22
shardyI thought most projects skipped some, but I'm not sure20:22
zanebasalkeld: I think every project has its own list of ones they ignore, don't they?20:22
asalkeldzaneb, but is that cos they just haven't got there?20:23
asalkeldor they never want to go there?20:23
zanebI don't know20:23
SpamapSIt is worth implementing them all eventually.20:23
stevebakersome projects object to some rules20:23
lifelessasalkeld: you're not.20:23
SpamapSBut not with any kind of priority.20:23
asalkeldmaybe a ml discussion20:23
lifelessmost of them are sane. Some are batshit.20:23
sdakenova implements all hacking rules20:23
lifeless[I disagree with Spamaps on 'all eventually']20:23
asalkeldno it doesn't20:24
shardyIMO features are more important than cosmetic refactoring atm20:24
asalkeldI have added the comments in tox.ini20:24
SpamapSshardy: the more features that are completed with the rules already in place, the less refactoring churn there is later.20:24
asalkeldmaybe we need (we do not intend doing X because)20:24
SpamapSso perhaps I can revise all to "all that you don't think are batshit"20:24
wirehead_I'd suggest "challenge all that you do think are batshit" but I'm a realist and know what roads that leads down. :P20:25
*** gholt has left #openstack-meeting20:25
*** rainya has joined #openstack-meeting20:25
shardySpamapS: that is true, but we already have a huge pile of stuff to do, and most of it's going pretty slowly20:25
*** otherwiseguy has joined #openstack-meeting20:26
asalkeldso I simply want to: 1 determine that we can long term ignore some rules, 2 if so make it clear in tox.ini20:26
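Something like the following tox.ini fragment would do it - illustrative only; the H-codes and the reasons given are placeholders, not an agreed list:

    [flake8]
    # Hacking rules we do not intend to adopt, and why (placeholders):
    #   H302  import only modules       -- too noisy for our codebase
    #   H404  multi-line docstring rule -- we prefer the current style
    ignore = H302,H404
    show-source = True
    exclude = .venv,.tox,dist,doc,*egg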
SpamapSshardy: agreed. Its not a priority, but if it makes the code more readable, it is worth doing _eventually_. And it costs more the longer you wait.20:26
shardyso if there's quick cosmetic stuff, fine, but IMHO not worth spending days/weeks on20:26
stevebakershardy: +120:26
SpamapSI doubt any of the rules would take days.20:26
zaneb+1 I support anything that makes the code better20:27
asalkeldso I felt a bit off color and did some (not much brain power required)20:27
zanebnot the batshit stuff :)20:27
stevebakerso I've started looking at native replacements for heat-cfntools, specifically to meet tripleo's current needs (which afaict is a replacement for cfn-hup and cfn-signal)20:28
SpamapSstevebaker: right.20:28
sdake#link https://wiki.openstack.org/wiki/PTLguide#Answers20:29
sdakethat link suggests we should send people to ask.openstack.org for q&a support20:29
sdakeI presume to provide a record20:29
*** tedross has joined #openstack-meeting20:29
shardystevebaker: can we plug in a backend (instead of boto) which talks to the ReST API, and figure out the waitcondition API interface?20:30
stevebakeraws waitconditions URLs are actually S3 buckets, but I'm assuming that installing swift in a tripleo undercloud is a non-starter20:30
asalkeldobject store (the simple one)20:31
asalkeld?20:31
zanebasalkeld: I'm guessing the fancy pre-signed expiring URLs we want are only in Swift20:32
stevebakershardy: so evolve heat-cfntools to optionally use native APIs? I had assumed that was a non-starter if we were switching to the aws tools port20:32
SpamapSstevebaker: swift would be ok. I don't see why that is necessary when we have a REST API though.20:32
lifelessstevebaker: we can do swift in the undercloud, but the very start of the undercloud is a single machine.20:32
shardystevebaker: heat-cfntools could be ported to the native API20:32
lifelessstevebaker: so it would be a little odd :)20:33
zanebSpamapS: because we have to effectively reimplement it (incl security stuff) if we want to use our own ReST API20:33
shardystevebaker: then some potential future aws tools port could talk to the cfn api I guess20:33
lifelesszaneb: we can't factor it out into oslo?20:33
zaneblifeless: swift? ;)20:34
shardyIMO there'd be more justification for maintaining heat-cfntools long term if it talks to the native API20:34
stevebakerSpamapS: it comes down to how to do auth. A signed url would require no keystone auth. I'm actually OK with doing keystone calls from instances20:34
SpamapSWhy can't we give the tools a trust and a URL to use that trust in?20:35
asalkeldbut is everyone else20:35
shardystevebaker: I hesitate to say this, but could we just insert the ec2token paste filter in the chain for the native API?20:35
lifelesszaneb: from swift into oslo20:35
shardythen we have presigned URLs etc which will work exactly the same as they do now20:35
shardyeven if it's an interim solution20:35
*** nachi__ has joined #openstack-meeting20:35
zanebcan I just point out that the wait conditions don't use cfn-tools, only curl?20:35
shardyzaneb: the problem is they depend on the CFN api, and ec2token authentication20:36
stevebakershardy: how about a dedicated pipeline just for native waitconditions20:36
*** michchap has joined #openstack-meeting20:36
SpamapSzaneb: and we're also talking about metadata access.20:37
shardystevebaker: wfm, no point in reinventing this if we can reuse the existing mechanism, but there may be resistance due to the awsishness20:37
*** vipul|away is now known as vipul20:37
shardyI'm assuming keystone doesn't provide any native signing functionality?20:37
shardysigning/verification20:37
SpamapStrusts?20:38
shardySpamapS: AFAICT trusts don't solve this problem yet, as you can't limit the scope to a specific endpoint, or action20:38
stevebakeranother option is to replicate a swift temp url in our API, then swift would be optional20:38
randallburtdo I have to have the ec2 extensions on for the existing ec2token stuff to work or is that simply a heat internal thing?20:38
shardythey can be set to expire tho, so they may solve part of the problem20:38
SpamapSshardy: so trust is just about full identity?20:39
shardySpamapS: more work to do basically20:39
SpamapSshardy: not limited policy?20:39
zanebmy preference would be 1) use swift, 2) copy swift like stevebaker just said20:39
shardySpamapS: you can drop roles, but you can't specify action/endpoint level granularity (yet)20:39
SpamapSmmk20:40
asalkeldand SpamapS are you ok with only readable metadata?20:40
SpamapSwell lets not design it now, but suffice to say, the boto/keystone/heat relationship is something I'd like to see go away sooner rather than later.20:40
radixboo. sorry I missed the conversation about otter earlier, I had another meeting20:40
SpamapSasalkeld: I require only readable metadata :)20:40
asalkeldcool20:41
SpamapSasalkeld: writable would make things tricky20:41
zanebI don't think writable metadata is safe in general20:41
shardySpamapS: I need to add keystoneclient support before I can fully investigate it, but for our purposes, it's only halfway to where we need it so far20:41
shardytrusts that is20:41
*** adrian_otto has joined #openstack-meeting20:41
*** seanrob has quit IRC20:42
stevebakershardy: maybe just focus on our other trusts use cases20:42
*** seanrob has joined #openstack-meeting20:42
shardystevebaker: that's my plan, and create a wishlist of remaining functionality when I more fully understand what's there now20:42
shardyplanning to get into it as soon as I get the suspend-resume patches merged20:43
* SpamapS has to run20:43
stevebakerSo in summary, reading metadata and signalling wait conditions can be done with urls that are passed to the instance, and those urls may be swift containers, or something we write which replicates that?20:43
shardystevebaker: yes20:44
zanebstevebaker: +120:44
asalkeld+120:44
stevebakerok, sounds like the replacement for heat-cfntools is curl :)20:44
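The instance side really can be that small. A sketch, assuming a CloudFormation-style signal body and a pre-signed URL handed to the instance (Python 2 stdlib, matching the era); a real pre-signed PUT may also need the Content-Type header to match what was signed:

    import json
    import urllib2

    def signal_wait_condition(presigned_url, unique_id, status='SUCCESS',
                              reason='Configuration Complete', data=''):
        # CloudFormation-style wait condition signal body
        body = json.dumps({'Status': status, 'Reason': reason,
                           'UniqueId': unique_id, 'Data': data})
        req = urllib2.Request(presigned_url, data=body)
        req.get_method = lambda: 'PUT'  # urllib2 lacks a native PUT verb
        return urllib2.urlopen(req)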
adrian_ottowill the eventual consistency of a swift container be an issue?20:44
*** rkukura has quit IRC20:45
zanebadrian_otto: eventually20:45
adrian_ottoI imagine that if you used one of those for a wait condition backing store that you might end up in a race20:45
asalkeldyou might have to retry20:46
asalkeldbut doubt race20:46
zanebwait conditions only append data, but I don't know how that works in swift/s320:46
adrian_ottomaybe a poor word choice - I mean that different clients may not see a consistent view of the state of that condition.20:46
shardyyeah, one writer, one reader, so only problem would be some additional latency?20:46
stevebakerall you can do with swift temp urls is GET or PUT20:47
*** boris-42 has quit IRC20:47
shardyadrian_otto: that depends on the sharding strategy we end up with for multiple engines20:47
*** seanrob has quit IRC20:47
zanebstevebaker: maybe we do need our own version of that then20:47
stevebakerat least we have an api to copy20:48
stevebakereven if it is sha120:48
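For reference, the API being copied is tiny. A sketch of tempurl-style signing as documented for Swift - HMAC-SHA1 over method, expiry, and path joined by newlines; the host and key here are whatever the deployment provides:

    import hmac
    from hashlib import sha1
    from time import time

    def make_temp_url(host, path, key, method='PUT', ttl=3600):
        # path is e.g. '/v1/AUTH_acct/container/object'
        expires = int(time() + ttl)
        sig = hmac.new(key, '%s\n%d\n%s' % (method, expires, path),
                       sha1).hexdigest()
        return '%s%s?temp_url_sig=%s&temp_url_expires=%d' % (
            host, path, sig, expires)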
shardyYeah, looking at that seems like a good start20:48
*** michchap has quit IRC20:49
shardy10mins left, anything else?20:49
sdake_ask.openstack.org20:49
*** tedross has quit IRC20:49
sdake_whoever wrote that ptl guide thinks projects need to guide people there20:49
*** seanrob has joined #openstack-meeting20:49
sdake_i guess to create a record20:49
sdake_i notice i end up answering the same question several times a week20:50
sdake_but if we use that we have to maintain it - eg answer questions there20:50
sdake_point launchpad qs at ask.openstack.org etc20:50
shardysdake_: good point, but most of the time people drop into IRC looking for answers with a partial question, so we have to start the discussion to fully define the problem20:50
shardyah, so instead of LP Q's, got it20:51
sdake_can do that in ask.openstack.org as well20:51
sdake_like a bug report20:51
*** seanrob has quit IRC20:51
zanebit's always the same problem, they won't connect their computers to the damn internet20:51
* sdake_ giggles at zaneb20:51
*** sushils has quit IRC20:51
*** seanrob has joined #openstack-meeting20:51
sdake_we have the same troubleshooting tips20:51
sdake_but keep repeating them20:52
sdake_sucks up time that could be spent coding20:52
shardyMaybe we need a "fix your nova networking" bot ;)20:52
sdake_ask.openstack.org is async not sync20:52
sdake_irc support a bit distracting when in middle of software dev20:53
shardysdake_: OK, good suggestion, lets try directing people there and see how it goes20:53
sdake_need to also answer questions too :)20:53
shardymost people seem to want an immediate reaction, but definitely worth trying20:53
sdake_there are a few heat qs on there without answers now20:53
shardysdake_: can you setup alerts, like for LP Q's?20:54
shardythe LP email is normally what prompts me to look at those20:54
sdake_i dont think there is a way but someone in infra may know20:54
sdake_good RFE for mordred :)20:54
zanebsdake_: maybe stick a notice in the topic on #heat too20:55
sdake_zaneb i can do that20:55
radixI think I'll post a thing to openstack-dev about autoscaling/otter20:55
shardyradix: sounds good, please do20:55
*** seanrob has quit IRC20:56
shardyanything else, or shall we finish?20:56
shardyOk, thanks all20:56
shardy#endmeeting20:56
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"20:56
openstackMeeting ended Wed Jun 12 20:56:42 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:56
openstackMinutes:        http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-12-20.00.html20:56
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-12-20.00.txt20:56
openstackLog:            http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-12-20.00.log.html20:56
*** wirehead_ has left #openstack-meeting20:57
*** randallburt has left #openstack-meeting20:57
*** tspatzier has left #openstack-meeting20:57
*** timductive has left #openstack-meeting20:57
*** Banix_ has joined #openstack-meeting20:58
*** Banix has quit IRC20:58
*** Banix_ is now known as Banix20:59
*** m4dcoder has quit IRC20:59
*** TravT has quit IRC21:03
*** noslzzp has joined #openstack-meeting21:03
*** seanrob has joined #openstack-meeting21:05
*** martine has quit IRC21:06
*** jasond has left #openstack-meeting21:06
*** topol has quit IRC21:12
*** spzala has quit IRC21:14
*** michchap has joined #openstack-meeting21:15
*** shardy is now known as shardy_afk21:15
*** litong has quit IRC21:16
*** colinmcnamara has quit IRC21:22
*** bgorski has quit IRC21:23
*** colinmcnamara has joined #openstack-meeting21:24
*** stevebaker has left #openstack-meeting21:25
*** vkmc has joined #openstack-meeting21:27
*** vkmc has joined #openstack-meeting21:27
*** andrew_plunk has left #openstack-meeting21:28
*** michchap has quit IRC21:28
*** lbragstad has quit IRC21:30
*** Banix has left #openstack-meeting21:30
*** jasondotstar has quit IRC21:31
*** jhenner has joined #openstack-meeting21:31
*** hughsaunders has quit IRC21:32
*** Swami has joined #openstack-meeting21:33
*** hughsaunders has joined #openstack-meeting21:33
*** dolphm has quit IRC21:37
*** noslzzp has quit IRC21:38
*** sushils has joined #openstack-meeting21:40
*** danwent_ has joined #openstack-meeting21:40
*** danwent has quit IRC21:42
*** danwent_ is now known as danwent21:42
*** dcramer_ has quit IRC21:44
*** ivasev has quit IRC21:45
*** Kharec has quit IRC21:52
*** Kharec has joined #openstack-meeting21:53
*** michchap has joined #openstack-meeting21:54
*** marun has quit IRC21:56
*** colinmcnamara has quit IRC21:56
*** danwent has quit IRC21:59
*** kpg2012 has quit IRC22:03
*** colinmcnamara has joined #openstack-meeting22:05
*** mtreinish has quit IRC22:05
*** mkollaro has quit IRC22:05
*** neelashah has quit IRC22:06
*** michchap has quit IRC22:06
*** colinmcnamara has quit IRC22:07
*** egallen has joined #openstack-meeting22:10
*** danwent has joined #openstack-meeting22:11
*** afazekas has quit IRC22:12
*** marcosmamorim has quit IRC22:14
*** ladquin_afk has quit IRC22:14
*** asalkeld has quit IRC22:14
*** asalkeld has joined #openstack-meeting22:14
*** SergeyLukjanov has quit IRC22:14
*** ladquin_afk has joined #openstack-meeting22:16
*** henrynash has quit IRC22:17
*** mrodden has quit IRC22:19
*** marcosmamorim has joined #openstack-meeting22:20
*** jdurgin has quit IRC22:24
*** flaper87 has quit IRC22:25
*** vijendar has quit IRC22:25
*** vijendar has joined #openstack-meeting22:25
*** krtaylor has quit IRC22:25
*** jdurgin has joined #openstack-meeting22:26
*** dolphm has joined #openstack-meeting22:26
*** changbl has quit IRC22:27
*** jhenner has quit IRC22:30
*** vijendar has quit IRC22:30
*** spzala has joined #openstack-meeting22:30
*** danwent has quit IRC22:30
*** maurosr has quit IRC22:31
*** maurosr- has joined #openstack-meeting22:31
*** ladquin_afk is now known as ladquin22:31
*** danwent has joined #openstack-meeting22:32
*** otherwiseguy has quit IRC22:33
*** michchap has joined #openstack-meeting22:33
mordredsdake_: what did I do?22:34
sdake_mordred did nothing :)20:35
sdake_we had a question if it was possible to get notified when someone asks a question about heat on ask.openstack.org via the irc bot22:35
sdake_i believe shardy asked22:36
*** lastidiot has quit IRC22:37
*** nachi__ has quit IRC22:40
*** jecarey has quit IRC22:40
*** cody-somerville has joined #openstack-meeting22:42
*** sushils has quit IRC22:44
*** michchap has quit IRC22:45
*** kpg2012 has joined #openstack-meeting22:47
*** sandywalsh has quit IRC22:47
*** danwent has quit IRC22:48
*** sushils has joined #openstack-meeting22:50
*** diogogmt has quit IRC22:50
*** egallen has quit IRC22:54
*** egallen has joined #openstack-meeting22:55
*** ryanpetrello has quit IRC22:56
*** egallen has quit IRC22:56
*** lbragstad has joined #openstack-meeting22:58
*** sileht has quit IRC22:59
*** armax has joined #openstack-meeting23:00
*** armax has left #openstack-meeting23:00
*** kpg2012 has quit IRC23:00
*** RajeshMohan has quit IRC23:00
*** eharney has quit IRC23:01
*** RajeshMohan has joined #openstack-meeting23:01
*** kebray has quit IRC23:02
*** sandywalsh has joined #openstack-meeting23:02
*** lloydde has quit IRC23:04
*** fnaval has quit IRC23:04
*** egallen has joined #openstack-meeting23:05
*** dolphm has quit IRC23:05
*** vipul is now known as vipul|away23:08
*** vipul|away is now known as vipul23:09
*** michchap has joined #openstack-meeting23:12
*** timello has quit IRC23:15
*** krtaylor has joined #openstack-meeting23:16
*** timello has joined #openstack-meeting23:16
*** michchap_ has joined #openstack-meeting23:24
*** michchap has quit IRC23:25
*** michchap has joined #openstack-meeting23:25
*** tanisdl has quit IRC23:26
*** gyee has quit IRC23:27
*** markwash has quit IRC23:29
*** sushils has quit IRC23:31
*** seanrob_ has joined #openstack-meeting23:32
*** seanrob has quit IRC23:34
*** ayoung has quit IRC23:35
*** fifieldt has joined #openstack-meeting23:36
*** seanrob_ has quit IRC23:37
*** lloydde has joined #openstack-meeting23:39
*** mdenny has quit IRC23:39
*** rwsu_ has quit IRC23:42
*** cp16net is now known as cp16net|away23:49
*** mestery has quit IRC23:50
*** rushiagr has joined #openstack-meeting23:56
