Tuesday, 2013-05-07

rerngvithello15:00
n0ano#startmeeting scheduler15:00
openstackMeeting started Tue May  7 15:00:17 2013 UTC.  The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.15:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:00
openstackThe meeting name has been set to 'scheduler'15:00
garykhi15:00
rerngvithi15:00
n0anoShow of hands, who's here for the scheduler meeting?15:00
n0anoo/15:00
garykack15:00
jgallard\o15:00
rerngvit+115:00
PhilDayPhil Day15:00
* glikson here15:01
alaskio/15:01
toansterhere15:01
PhilDayI'm at a conference so may have to bail out early15:01
senhuangHello!15:01
rerngvithi all :)15:01
n0anoPhilDay, NP, I have a dentist appointment right after myself15:02
PhilDayOuch15:02
n0ano:-(15:02
n0anoOK, let's begin15:02
n0anoAs I said in the email agenda, I want to go through all the items at least once before we circle back for more detail on the earlier ones15:03
n0ano#topic whole host allocation capability15:03
n0anoI don't claim to understand this one, is there anyone here to drive this?15:03
PhilDayI wrote up and filed the BP on that yesterday (based on the feedback from the summit)15:03
PhilDayhttps://blueprints.launchpad.net/nova/+spec/whole-host-allocation15:03
n0anoPhilDay, cool, is the BP sufficient or are there issues you want to bring up now15:04
PhilDayThis isn't really a scheduler change in the same sense that the others are - it's more akin to exposing host aggregates to users15:04
alaskiI'm also interested in this one, though I don't have a lot to add to the bp right now15:04
n0anowe can always study the BP and come back later.15:04
PhilDayThat would be fine - I'm happy to field questions once you've had a chance to read it - probably best done on the mailing list15:05
PhilDayI've asked russell to target it for H315:06
johnthetubaguy+115:06
n0anonot hearing much discussion so let's study the BP and respond on the mailing list or another IRC meeting15:06
PhilDayworks for me15:06
johnthetubaguy+115:06
alaskisounds good.  The only thing that comes to mind right now is: could we also achieve it with specialized flavors?15:06
alaskibut offline is fine15:06
PhilDayI think it's orthogonal to flavors15:06
n0ano#topic coexistence of different schedulers15:07
n0anoso, curious about this one, how is this different from the plugin filters/weighers that we have now?15:07
PhilDayI think the idea is that you could have, for example, different stacks of filters for different users15:08
rerngvitor for different flavors15:09
PhilDaySo if you build this on top of the whole host allocation (i.e. have a filter config specific to an aggregate) then you get another step towards private clouds within a cloud15:09
n0anoimplication being that schedulers are a runtime selection rather than a startup configuration issue15:09
rerngvitI'm posting the ether pad here for reference (https://etherpad.openstack.org/HavanaMultipleSchedulers)15:09
ogelbukhmultiple schedulers is a confusing name for this feature, in my opinion15:10
gliksonyep, the idea is to have multiple configurations of FilterScheduler, or even different drivers15:10
senhuangn0ano: you are absolutely right15:10
gliksonconfigured for different host aggregates15:10
senhuangglikson: maybe we should call it dynamically loaded scheduler/filtering15:10
n0anoI'm concerned that this would be a `major` code change, has anyone looked at what it would take to implement this?15:11
ogelbukhis it possible to give admin user ability to specify filters list as a scheduler hint?15:11
PhilDaySo do you see this as multiple schedulers running on different queues (i.e. a fairly static set) or something much more dynamic?15:11
n0anoalso, is there really a call for this feature or are we doing something just because we `can` do something15:11
gliksonwell, there are multiple options to implement it..15:11
gliksonI guess I can defer it to next week -- will be more ready to elaborate on options and dilemmas15:12
gliksonn0ano: sure, there are several use-cases involving pools with different hardware and/or workloads15:13
PhilDayI can see a number of use cases - would like to think about it being perhaps an aggregate specific scheduler - since that seems like a good abstraction15:13
gliksonPhil: yep15:13
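
To make the per-aggregate idea concrete, here is a minimal standalone sketch of what "different filter stacks for different aggregates" could look like: a dispatcher picks the filter list from the aggregate a request targets, falling back to a default stack. The config shape, aggregate names, and GpuFilter/TenantFilter are hypothetical; nothing like this exists in nova at this point, which is exactly what the proposal is about.

```python
# Default FilterScheduler stack, used when no aggregate-specific
# configuration applies.
DEFAULT_FILTERS = ['RetryFilter', 'RamFilter', 'ComputeFilter']

# Hypothetical per-aggregate overrides of the filter stack.
AGGREGATE_FILTERS = {
    'gpu-pool': ['RetryFilter', 'ComputeFilter', 'GpuFilter'],
    'acme-private': ['RetryFilter', 'RamFilter', 'TenantFilter'],
}

def filters_for_request(aggregate_name=None):
    """Pick the filter stack for a request targeting a given aggregate."""
    return AGGREGATE_FILTERS.get(aggregate_name, DEFAULT_FILTERS)

print(filters_for_request('gpu-pool'))   # aggregate-specific stack
print(filters_for_request())             # default stack
```
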
PhilDayShould we also consider this as allowing combined bare metal & Hypervisor systems - or is bare metal moving out of Nova now?15:14
n0anoPhilDay, I would think that bare metal would be orthogonal to this, not sure there's an impact15:15
rerngvitPhilDay: could you elaborate a bit more what you mean by "combined bare metal & Hypervisor"?15:15
gliksonn0ano: I suggest to postpone the discussion to next week -- hope to have next level of design details by then15:15
PhilDay+115:15
n0anoOK, I think we've touched a nerve and everyone's interested, let's postpone discussion till we've all had a chance to study the etherpad15:16
n0anoclearly a good area to explore, let's just do it later15:16
n0ano#topic rack aware scheduling15:16
garyki am not sure who proposed this. in quantum we would like to propose a network proximity api15:17
n0anothis is another one of those - can this be handled by the current flavor / extra_specs mechanism15:17
PhilDayThat was one of mine as well - I haven't done any more on this as I wasn't sure how it fitted in with some of the bigger schemes for defining sets of instances for scheduling15:17
senhuangPhilDay: it could be one use case of group-scheduling15:18
PhilDayNot sure you can do it by flavours - you need to add information to each host about its physical / network locality15:18
PhilDayRight - group-scheduling would cover it.15:18
senhuangPhilDay: so is it more about how to get the "rack" information from hosts?15:19
n0anoPhilDay, note topic 1 (extending data in host state), add the locality to that and then use current filtering techniques on that new data15:19
garyksenhuang: PhilDay: does a host have metadata? if so the rack "id" could be stored and used...15:19
PhilDayWhat I was proposing going into the summit was something pretty simple - add a "rack" property to each host, and then write a filter to exploit that15:19
senhuanggaryk: okay. then we can add a policy for group-api that says "same-rack"15:20
jgallardis it possible to use availabilty zone for that? one availability zone = 1 rack (1 datacenter = multiple availability zones)15:20
PhilDayhosts don't really have meta today - they have capabilities but that's more of a binary (like do I have a GPU)15:20
garyksenhuang: yes, sounds good15:20
PhilDayI wouldn't want to overlay AZ15:20
PhilDaythat has a very specific meaning15:20
garykPhilDay: ok, understtod15:21
jgallardPhilDay, ok, thanks15:21
PhilDayhttps://etherpad.openstack.org/HavanaNovaRackAwareScheduling15:21
n0anostill sounds like something that can be handled by some of the proposals to make the scheduler more extensible15:22
PhilDaySo we could still do something very simple that just covers my basic use case, but if group-scheduling is going to land in H then that would be a superset of my idea15:22
gliksonwe are working on another proposal to extend the notion of zones to also cover things like racks15:22
garykPhilDay: glikson i think that the group scheduluing can cover this.15:23
jgallardglikson, do you have some links on that work?15:23
PhilDayn0ano:  agreed - I suggest we shelve for now and see how those other ideas pan out.15:23
senhuanggaryk: PhilDay: +115:23
gliksoni.e., allow hierarchy of zones, for availability or other purposes, and surface it to the user15:23
gliksonhttps://etherpad.openstack.org/HavanaTopologyAwarePlacement15:23
n0anoPhilDay, then we'll let you monitor the other proposals and raise a flag if they don't provide the support you're looking for15:23
jgallardglikson, thanks!15:23
PhilDayWe can always revisit later if some of the bigger ideas don't come through.  I could do with having basic rack-aware scheduling by the end of H15:24
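
For reference, a minimal standalone sketch of the simple approach described above: each host carries a "rack" property, and a filter keeps only hosts sharing a rack with the instance named in a hint. HostState here is a stand-in and the hint plumbing is simplified; this is not nova's actual filter API.

```python
class HostState(object):
    """Stand-in for a host record; "rack" is the proposed new property."""
    def __init__(self, name, rack, instances):
        self.name = name
        self.rack = rack            # e.g. set by the operator per host
        self.instances = instances  # ids of instances running on this host

class SameRackFilter(object):
    """Keep only hosts in the same rack as the hinted instance."""
    def host_passes(self, host, hinted_instance, all_hosts):
        target_racks = set(h.rack for h in all_hosts
                           if hinted_instance in h.instances)
        return host.rack in target_racks

hosts = [HostState('host1', 'rack-a', ['vm-1']),
         HostState('host2', 'rack-a', []),
         HostState('host3', 'rack-b', [])]
f = SameRackFilter()
print([h.name for h in hosts if f.host_passes(h, 'vm-1', hosts)])
# -> ['host1', 'host2']
```
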
n0anoOK, moving on15:24
n0ano#topic list scheduler hints via API15:24
n0anoanyone care to expand?15:25
PhilDayWrote up the BP for this yesterday as well: https://blueprints.launchpad.net/nova/+spec/scheduler-hints-api15:25
n0anoPhilDay, busy bee yesterday :-)15:25
PhilDayBasically it's about exposing the scheduler config to users so that they can tell which hints will be supported and which ignored (based on the config).  At the moment it's just a black box15:26
PhilDayand hints for filters not configured would be silently ignored.15:26
n0anowhat exactly do you mean by `hint`15:26
rerngvitis this one somewhat overlap with the topology placement glikson just mention? or kind of a superset of?15:27
alaskiI'm in favor of it.  Slightly related to this I just thought it may be a good idea to be able to set policies on those hints15:27
PhilDaythe scheduler_hints options passed into server create15:27
PhilDayPolicies would be good - but that's anew topci I think15:27
alaskiagreed15:27
PhilDaya new topic ;-(15:27
PhilDay(My typing is crap today)15:28
n0anoif that's being passed into the scheduler by the user's create call wouldn't the user know this info already?15:28
rnirmal+1 for exposing supported hints.. n0ano it's not available right now15:28
PhilDayThe user can't tell if the system they are talking to has that15:28
PhilDayfilter configured or not.15:28
n0anoaah, the user asks for it but the system may not provide it, in that case I'm in favor15:29
PhilDayExactly15:29
rerngvitlike you request for a vegetarian food in a party15:29
n0anoalthough it is a hint and there's never a guarantee that a hint will be honored15:29
rerngvitorganizer may or may not provide it15:30
n0anorerngvit, but, again, you don't know that it's not available until you ask for it and don't get it.15:30
rerngvityep, there also should be a special response for this.15:30
n0anoby providing this API are we implicitly guaranteeing that `these hints will be honored'?15:31
senhuangplus: you don't know whether you get it or not15:31
PhilDayIf it's an affinity hint it can be hard to tell if it's being ignored or not - you might still get what you ask for but by chance15:31
glikson+1 to support API to query scheduler hints15:31
PhilDayMost of the 'hints' are implemented as filters today, so you either get them or not15:31
alaskin0ano: I don't think so.  But that raises the question of whether there should be another concept to encapsulate "fail if not honored"15:31
gliksonPhilDay: have you thought how to implement it?15:32
n0anoalaski, indeed15:32
PhilDayif you want it to really be a hint then it needs to be in the weighting function15:32
n0anoPhilDay, good point, I see rampant confusion between what a filter is as opposed to a weight15:33
senhuangPhilDay: then you need to expose what attributes are available for weighing15:33
PhilDayI was thinking of having the API rpc to the scheduler, and for each filter to have a method that returns details of its supported hints - so a simple iteration through the configured filters and weighers15:33
gliksonPhilDay: e.g., would a driver just declare which hints it supports?15:33
PhilDaySince filters and weighers are subclasses it should be fairly easy to add it15:33
rerngvitagree. Seems a possible implementation to me.15:34
gliksonyep, in FilterScheduler you could potentially delegate it to individual filters/weights..15:35
PhilDayJust becomes an API extension then15:35
PhilDayI think only the filter scheduler supports hints - I hadn't really thought about any of the others15:35
gliksonPhilDay: would it "live" under "servers", as an action?15:36
n0anohaven't looked at the issue but sounds like hints need to be extended to the weights also15:36
PhilDayYep - I think both filters and weights would need to support the "describe_hints()" method, and the scheduler could iterate over both lists15:36
rerngvitI think so. It should be in both filters and weights.15:37
senhuang+115:37
n0anoassuming this is some sort of new, inherited class that should be fairly simple15:37
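
A minimal sketch of that inheritance-based idea: filters and weighers each grow a describe_hints() classmethod, and the API reply is just the concatenation over whatever is configured. All class and method names here are illustrative, not nova's real interfaces.

```python
class BaseHostFilter(object):
    @classmethod
    def describe_hints(cls):
        # filters that consume no hints advertise nothing
        return []

class SameHostFilter(BaseHostFilter):
    @classmethod
    def describe_hints(cls):
        return [{'name': 'same_host',
                 'type': 'list of instance uuids',
                 'description': 'co-locate with these instances'}]

class RamWeigher(object):
    @classmethod
    def describe_hints(cls):
        return []   # weighers can advertise hints the same way

def list_supported_hints(filter_classes, weigher_classes):
    """What the API extension would return: hints this deployment honours."""
    hints = []
    for cls in list(filter_classes) + list(weigher_classes):
        hints.extend(cls.describe_hints())
    return hints

print(list_supported_hints([SameHostFilter], [RamWeigher]))
```
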
n0anoOK, my take is everyone seems to agree this is a good idea, mainly it's implementation details15:38
PhilDayglikson:  not sure about a server action - isn't that for actions on a specific instance?   This is more of a system capability thing - a bit like listing supported flavours15:38
PhilDayI asked Russell to mark the BP for H215:38
rerngvit#agree15:38
gliksonI wonder whether this has anything to do with "help" command in CLI.. essentially this is related to invocation syntax?15:39
PhilDayShould be exposed by the CLI for sure15:39
PhilDaymoving on ?15:40
n0anoPhilDay, took the words out of my keyboard :-)15:40
n0ano#topic host directory service15:40
n0anoI have no clue what this one is15:41
russellbPhilDay: i'll look over these blueprints and get them updated today15:41
garyki am sorry but i need to leave. hopefully the groups/ensembles will be discussed. senhuang and glikson have the details. thanks15:42
n0anogaryk, tnx, unlikely we'll get to those today, should cover them next week tho15:42
n0anono one want to talk about host directory service?15:43
n0anoOK, moving on15:43
n0ano#topic future of the scheduler15:44
alaskiThis is mine15:44
n0anoalaski, I think this is your issue, yes?15:44
alaskiThe start of it is https://blueprints.launchpad.net/nova/+spec/query-scheduler15:44
alaskiEssentially modifying the scheduler to return hosts that can be scheduled to, rather than pass requests on itself15:44
senhuangalaski: i like this idea15:45
PhilDayIs this related to the work that Josh from Y! was talking about - sort of orchestration within conductor?15:45
senhuangit clarifies the workflow and separates the complexity of workflow management from resource placement15:45
alaskiPhilDay: it is a part of that15:45
n0anogiven that I see the scheduler's job as finding appropriate hosts I don't care who actually sends the requests to those hosts, I have no problems with this idea15:46
gliksonalaski: so, essentially you will have only "select_hosts" in scheduler rpc API, and all the schedule_* will go to conductor, or something like that?15:46
alaskiThe reason I called this "future of scheduler" is that long term I would like to discuss how we can handle cross service scheduling, but I think there's plenty of work before that15:47
alaskiglikson: yes, most likely conductor15:47
senhuangalaski: we have a proposal covered part of that15:47
n0anoglikson, hopefully, whoever `called` the scheduler is the one who `does` the scheduling15:47
senhuangalaski: part of the cross service scheduling15:47
johnthetubaguyit will help the orchestration work15:47
johnthetubaguytoo15:47
glikson+115:47
PhilDaySo the scheduler basically becomes an information service on suitable hosts,15:48
alaskiPhilDay: that's how I see it, and I think that eventually leads into cross service stuff15:49
senhuangsome orchestrator/work flow manager asks/calls the scheduler for the suitable hosts for creation/resizing15:49
senhuangalaski: +1. this should be helpful for the unified resource placement blueprint15:49
alaskiCool.  It should really simplify retries, orchestration, and unify some code paths I think15:50
rerngvitI have one question though. If the actual scheduling moves somewhere other than the scheduler, how can the scheduler return an answer?15:50
PhilDayI don't have a problem with the concept - I guess it needs that new central state management / orchestration bit to land first or at the same time15:50
rerngvitit would require the scheduler to query other services for state, is this correct? or did I misunderstand something?15:51
alaskiPhilDay: the initial idea is to put this into conductor, and start building that up for state management15:51
johnthetubaguywe already have a similar code path to this in the live-migrate retry logic inside the scheduler, I was thinking of moving to conductor, its kinda similar15:51
johnthetubaguyalaski: +115:51
alaskirerngvit: it doesn't really change how the scheduler works now, except that instead of then making the rpc call to the compute it returns that to conductor to make the call15:52
n0anorerngvit, not sure why the scheduler would have to query something, already the hosts send info to the scheduler, that wouldn't change.15:52
PhilDayIs there a general BP to cover creating the workflow piece in conductor (or heat or wherever it's going)15:53
senhuangyep. it is something that queries the scheduler for the selected host.15:53
rerngvitok15:53
senhuangPhilDay: Joshua from Y! has a blueprint on structured state management15:53
gliksonPhilDay: yes, Josh posted on the mailing list15:53
johnthetubaguyhe restarted the orchestration meeting15:53
PhilDayseems like this needs to be part of that work then - I can't see it working on its own15:54
alaskiPhilDay: I'm working with Josh a bit, he'll definitely be a part of it15:54
johnthetubaguyits related, for sure, I think the pull to conductor can be separate15:54
gliksonI think the two are complementary really.. you can either move things to conductor as is and then reorganize, or the other way around..15:54
johnthetubaguyits the api -> conductor bit, then call out to scheduler and compute as required15:55
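
A standalone sketch of the proposed split, assuming conductor owns the workflow: the scheduler only answers select_hosts(), while casting to compute and retrying on failure live in conductor. The names, the RPC stand-in, and the retry policy are illustrative, not nova's actual code.

```python
class Scheduler(object):
    """Pure placement: answers 'which hosts fit', sends nothing itself."""
    def __init__(self, hosts):
        self.hosts = hosts

    def select_hosts(self, request_spec, count=1):
        fits = [h for h in self.hosts
                if h['free_ram_mb'] >= request_spec['ram_mb']]
        fits.sort(key=lambda h: h['free_ram_mb'], reverse=True)
        return [h['name'] for h in fits[:count]]

class Conductor(object):
    """Owns the workflow: casts to compute and handles retries."""
    def __init__(self, scheduler, compute_cast):
        self.scheduler = scheduler
        self.compute_cast = compute_cast   # stand-in for an rpc cast

    def build_instance(self, request_spec, retries=3):
        for host in self.scheduler.select_hosts(request_spec, count=retries):
            if self.compute_cast(host, request_spec):
                return host
            # retry logic lives here now, not inside the scheduler
        raise RuntimeError('no valid host found')

sched = Scheduler([{'name': 'host1', 'free_ram_mb': 2048},
                   {'name': 'host2', 'free_ram_mb': 8192}])
conductor = Conductor(sched, lambda host, spec: True)
print(conductor.build_instance({'ram_mb': 1024}))   # -> 'host2'
```
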
PhilDayI have to dash - but on #9 I wrote that BP up as well https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement15:55
gliksonwhat might be a bit more tricky (for both) is the option when there is no real nova-conductor service..15:55
n0anowell, I have to dash also, let's close here and pick up next week (moving to a new house, hopefully I'll have internet access then)15:56
PhilDayWe have some code for this, but it needs a bit of work to re-base it etc.   It's pretty much the peer of the cpu_entitlement BP15:56
PhilDayOk, Bye15:56
senhuangnext time, i suggest we pick up from #915:56
johnthetubaguyglikson: indeed, it means the api gets a bit heavy from the local calls, but maybe that's acceptable15:56
PhilDaygood meeting guys15:56
rerngvitok see you then.15:56
n0anotnx everyone15:56
n0ano#endmeeting15:56
openstackMeeting ended Tue May  7 15:56:39 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:56
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-05-07-15.00.html15:56
jgallardthanks15:56
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-05-07-15.00.txt15:56
openstackLog:            http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-05-07-15.00.log.html15:56
gliksonI wonder whether there will be concurrency issues with long-running operations (like live migrations)15:57
alaskisorry, ducked out real quick.15:59
johnthetubaguyit's kinda finished I guess, we can always chat on openstack-nova15:59
alaskiglikson: I'm looking at running the localmode conductor in a greenthread from the api side when the service isn't available.15:59
alaskijohnthetubaguy: fair point16:00
primeministerpociuhandu: hi tavi16:01
primeministerppnavarro: hi pedro16:01
ociuhanduprimeministerp: hello16:01
primeministerpi know alex is on his way to the office16:01
ociuhanduhi all16:01
pnavarrohi all !16:01
primeministerpwe'll start in a couple minutes16:01
primeministerpluis_fdez: hi luis16:01
primeministerpand poof16:02
alexpilottihey there16:02
primeministerpalexpilotti: perfect timing16:02
primeministerp#startmeeting hyper-v16:02
openstackMeeting started Tue May  7 16:02:31 2013 UTC.  The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.16:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:02
openstackThe meeting name has been set to 'hyper_v'16:02
primeministerpso let's begin16:02
primeministerpI guess a good place to start is next steps post summit16:03
primeministerpwhere we currently are in that process16:03
primeministerpwhat is priority after the design session16:03
primeministerpalexpilotti: would you like to start?16:04
alexpilottisure16:04
primeministerpkpavel_: hi there16:04
primeministerpalexpilotti: or is there something else we should address prior?16:05
alexpilottiquite a lot coming for Havana16:05
primeministerplet's start at the top priority16:05
primeministerpwmiv216:05
alexpilottiwe got a lot of requests about how to handle clustering to begin with16:05
alexpilottimost Windows workloads are based on the assumption that the HA is handled by the OS16:06
primeministerpfrom a "hyper-v cluster" as a compute node perspective16:06
alexpilottiwe need to support a Hyper-V windows cluster as a compute node16:06
alexpilottiwe can use an approach similar to what vsphere took16:07
primeministerpbut use the hyper-v/cluster api's as the access point16:07
primeministerpto that functionality16:07
alexpilottiyep16:07
primeministerpjust to confirm16:08
alexpilottithe idea is that the cluster is seen as a single compute node16:08
alexpilottibut Nova will be able to get data about each individual host16:08
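
A rough sketch of that shape, loosely modeled on nova's existing ability for one compute service to report several nodes via get_available_nodes()/get_available_resource(); the cluster data source and class names below are illustrative stand-ins, not the real Hyper-V driver.

```python
class HyperVClusterDriver(object):
    """One nova-compute fronting a whole cluster, one 'node' per member."""
    def __init__(self, cluster_nodes):
        # in practice this data would come from the cluster WMI/COM API
        self.cluster_nodes = cluster_nodes

    def get_available_nodes(self):
        return [n['name'] for n in self.cluster_nodes]

    def get_available_resource(self, nodename):
        node = next(n for n in self.cluster_nodes if n['name'] == nodename)
        return {'memory_mb': node['memory_mb'], 'vcpus': node['vcpus']}

drv = HyperVClusterDriver([
    {'name': 'hv-node-1', 'memory_mb': 65536, 'vcpus': 16},
    {'name': 'hv-node-2', 'memory_mb': 65536, 'vcpus': 16},
])
print(drv.get_available_nodes())   # scheduler still sees per-host capacity
```
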
primeministerpand the best part is hyper-v clustering features ship w/ the free hyper-v server16:08
alexpilottiyep16:08
alexpilottithe only thing is that we'll need a DC16:09
primeministerpwe already do need one for live migration16:09
alexpilottiin order to have a 100% free solution16:09
primeministerpthen samba416:09
alexpilottiwe can try to see how Samba4 works16:09
alexpilottiyep :-)16:09
alexpilottiI tried w a beta a few months ago16:10
primeministerpI have some people to reach out to if we have issues16:10
alexpilottiand I had issues with Kerberos delegation16:10
primeministerpfrom the samba side of the things16:10
alexpilottithat'd be great16:10
alexpilottion the API perspective16:10
primeministerpwell on the samba perspective16:10
primeministerpand on the api perspective16:11
alexpilottiWindows has some higher level Powershell API16:11
primeministerpand i can dig internally to see16:11
alexpilottisorry, I was starting a new sentence :-)16:11
primeministerpwhat is available for the cluster api interfaces16:11
alexpilottiso we have Powershell and COM low level interfaces16:11
primeministerpand documentation16:11
alexpilottiMS is suggesting Powershell, but it's a useless overhead for us16:11
alexpilottiso I'd go with COM16:12
alexpilottidepending on the level of PINTA, to say so :-)16:12
* primeministerp silently agrees16:12
primeministerpwe don't need additional overhead16:12
alexpilottiWe still have to target this one, but Havana-2 is a good target IMO16:12
alexpilottiBTW Havana-1 is around the corner (ca 1 month)16:13
primeministerpwtf16:13
alexpilottiwell, next month is June :-)16:13
primeministerpyes it is16:13
alexpilottilet me see if I find the deadline scheduling16:13
alexpilottiso I expect Havana-2 in July16:14
alexpilottiand Havana-3 in August16:14
alexpilottilast year we started working end of July / beginning of August16:14
alexpilottiin time for Folsom-316:14
alexpilottiso it makes sense (unfortunately)! :-)16:14
primeministerpI'm not sure but i thought I heard something about releases shifting dates slightly16:15
primeministerpbut I could be wrong16:15
primeministerpanyway16:15
primeministerpso16:15
alexpilotti#link https://wiki.openstack.org/wiki/Havana_Release_Schedule16:15
alexpilottihere it is16:15
primeministerpthx16:16
alexpilottieven worse, May 30th16:16
primeministerpheh16:16
alexpilottibut the good news is that the final one is after the summer ;-)16:16
alexpilottiwhich means that the Gods decided not to **** up our summer this time :-D16:16
primeministerpalexpilotti: and we're not trying to get code resurrected ;)16:17
primeministerpso moving on16:17
alexpilottilol ;-)16:17
alexpilottiok16:17
primeministerppast cluster16:17
alexpilottiV216:17
alexpilottiWMI16:17
primeministerpyes16:17
primeministerpthat's the big one16:17
alexpilottithe Grizzly refactoring gives us a great start16:18
alexpilottiall the code is well layered16:18
alexpilottimeaning that we have to add the V2 "utils" classes16:18
alexpilottiand a factory that will allocate the right one based on the OS16:19
primeministerpalexpilotti: to deal w/ backward compatibality16:19
alexpilottiV1 on 2008R2 (legacy, soon unsupported)16:19
alexpilottiand V2 for 2012 and above16:19
alexpilottithe testing will be way easier, as all the rest of the code will be untouched16:20
primeministerp*nod*16:20
alexpilottianyway it'll still be a long work16:20
alexpilottithat blocks the next step (VHDX support)16:20
primeministerpalexpilotti: we need to followup on win-next16:20
alexpilotticool16:20
primeministerpalexpilotti: we'll i16:20
alexpilottiI'd schedule WMIV2 for Havana-116:21
alexpilottiin order to unlock the rest16:21
primeministerpalexpilotti: sounds good to me16:21
alexpilottiQuestions or should we go on?16:21
primeministerpsounds good16:21
primeministerpmove on16:21
pnavarro+116:22
primeministerpalexpilotti: quantum16:22
pnavarroNVGRE ! NVGRE !16:22
alexpilottiThere's something more than NVGRE coming ;-)16:23
alexpilottibut we have to wait a few days to get a final confirmation16:23
primeministerplet's just say we're finalizing details16:23
alexpilottiNVGRE is still in the list, but priority would be shifted quite a bit down to say so16:23
primeministerpyes16:24
alexpilottion QUantum we need anyway the Agent support in our plugin16:24
primeministerpso let's hold till next week on that discussion so we can confirm/deny16:24
alexpilottiI mean, the new Agent API. I was trying to sneak them in for G316:24
alexpilottibut Quantum was frozen already16:25
primeministerpalexpilotti: H316:25
alexpilottiG316:25
primeministerpo16:25
alexpilottiin march16:25
primeministerpgotcha16:25
alexpilottiso the work is mostly done16:25
alexpilottiI just need to retarget the patchset16:25
alexpilottithere are a gazillion BP on Quantum16:25
alexpilottione of them is related to merging all the agents into one16:25
primeministerpi would imagine16:26
alexpilotti*the plugins, sorry16:26
alexpilottimost plugins are very similar: OVS, LinuxBridge, Hyper-V, etc16:26
alexpilottiso a refactoring makes definitely sense16:26
primeministerpand from release perspective?16:27
primeministerpon refactor work16:27
alexpilottiI suggest to postpone the Quantum discussion to the next meeting, hopefully we'll have more details to disclose16:28
alexpilottiwhat do you think primeministerp?16:28
primeministerpalexpilotti: yes16:28
primeministerpalexpilotti: quantum/networking discussion until we get appropriate confirmation16:29
primeministerper postpone16:29
primeministerpare you including the VHDX work in the WMIV2 migration?16:30
alexpilottisure16:30
primeministerpok16:30
alexpilottiops, no :-)16:30
primeministerpI didn't think it was16:30
primeministerpwmi v2 first16:30
alexpilottiWMIV2 "unlocks" the VHDX badge :-)16:31
primeministerpvhdx after that is complete16:31
primeministerpyes16:31
primeministerp+200 mana16:31
primeministerpfor hyper-v16:31
primeministerp;)16:31
alexpilottiso right after we are done with V2, VHDX support is fairly fast to achieve16:31
alexpilottiso H2 is definitely a good target16:31
primeministerpgood16:32
primeministerpfibre channel support?16:32
primeministerpnext?16:32
alexpilottisure16:32
primeministerpif i can have san and hba's16:32
alexpilottiFC depends largely on HP's lab stuff16:32
alexpilotticool16:32
primeministerpso I have both in my lab already16:32
alexpilottias soon as we have a lab set up we can start developing16:32
primeministerpI just need to connect16:32
primeministerpit together16:33
primeministerpI can see if i can cobble it so we have something16:33
primeministerpto start16:33
primeministerpif interested16:33
alexpilottiit's gonna be a bigger PINTA compared with iSCSI16:33
primeministerpi would imagine16:33
alexpilottipnavarro: anything to add on FC?16:33
primeministerpwell16:34
primeministerppriority over that work would possibly be AD/Keystone work16:34
primeministerp?16:35
primeministerpsorry that was a question16:35
primeministerpalexpilotti: we'll continue then16:37
primeministerpalexpilotti: think people are nodding off16:37
primeministerp;)16:37
alexpilottioki, sorry about that16:38
alexpilotti:-)16:38
primeministerpnp16:38
primeministerpso take aways16:38
alexpilottiAD / keystone sure16:38
primeministerplet's focus on the H116:38
primeministerpmilestones16:38
alexpilottiI'd like to hear Cern opinion16:38
primeministerpluis_fdez: ping16:38
primeministerpit's late over there16:39
primeministerplet's finish up, then16:39
primeministerpalexpilotti: ^16:39
alexpilottioki, let's move on16:39
primeministerpalexpilotti: so that's a good start16:40
alexpilottiWe're going to set up the BPs16:40
primeministerpyes16:40
alexpilottithere's quite a lot of work as usual16:40
primeministerpas usual16:40
alexpilottiah, there's that annoying bug on cinder-volume to fix16:40
primeministerppnavarro: ^16:40
alexpilottithe eventlet stuff16:40
alexpilottiI got a guy writing me about that16:41
primeministerpcan you cc me on the thread16:41
primeministerpif possible16:41
alexpilottipnavarro: would you like to sync about it and also talk about the Havana stuff that you'd like to work on?16:41
alexpilottiok, no ping from pnavarro :-)16:44
pnavarrosorry al16:44
pnavarroalex16:44
pnavarroI was on phone16:44
alexpilottinp :-)16:44
alexpilottipnavarro: ^^16:44
pnavarroso, about cinder, yes, there are some fixes to do16:45
pnavarroand the tests should be refactored to follow the other projects16:46
alexpilottipnavarro: +116:46
alexpilottiis there any specific area you'd like to work on for Havana?16:46
pnavarroI'd like to add the missing features copy image to volume, volume to image16:47
alexpilotticool16:47
alexpilottiociuhandu told me that image to volume works already16:47
alexpilottias far as I remember16:47
pnavarrommm16:47
alexpilottibut I have to recheck this16:48
pnavarroin the driver there are NotImplementedError stubs16:48
pnavarroI can help in nova gaps, for example, ephemeral16:48
alexpilotticool, that's great16:48
ociuhandualexpilotti: have to check that, was a bit ago and have to go through the things to refresh the short memory buffer16:49
alexpilottithe list of new features / BPs we started here is still incomplete16:49
alexpilottiociuhandu: could be that it was simply related to the Linux driver on devstack?16:49
pnavarroFibre channel would be cool too16:49
alexpilottiociuhandu: and not to the Windows one and my braincells just mixed them up?16:50
alexpilottipnavarro: cool!16:50
alexpilottisince we are running late my 2c are on continuing the list of BPs next week16:51
alexpilottias we still have quite a lot of stuff in the list from the design summit16:51
alexpilottiand from what we discussed today there's definitely enough to work for a while :-)16:52
primeministerpsounds good16:52
alexpilottibut if you guys prefer we can go on now16:52
primeministerpalexpilotti: we have a meeting in a few mintues16:52
primeministerper minutes16:52
primeministerpthat we should just get over16:52
alexpilottiyep16:52
primeministerpperfect16:52
primeministerpthen let's continue this discussion next week16:52
alexpilotticool16:52
primeministerpok16:53
primeministerpclosing meeting16:53
primeministerpthanks everyone16:54
primeministerp#endmeeting16:54
openstackMeeting ended Tue May  7 16:54:04 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:54
openstackMinutes:        http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-05-07-16.02.html16:54
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-05-07-16.02.txt16:54
openstackLog:            http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-05-07-16.02.log.html16:54
ociuhandualexpilotti: i tested it on linux, afaik16:54
alexpilottiociuhandu: ok, makes sense16:54
stevemarahoy18:00
ayoungKEYSTONERS!18:00
spzalaHello!18:00
stevemarthats us!18:00
gyee\o18:00
dolphm#startmeeting keystone18:00
openstackMeeting started Tue May  7 18:00:52 2013 UTC.  The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.18:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.18:00
openstackThe meeting name has been set to 'keystone'18:00
topolHello18:01
dolphm#topic High priority bugs or immediate issues?18:01
dolphm(other than stable/grizzly stuff, which is on the agenda for last)18:01
ayoungdolphm, Well, the gate is now fixed18:02
dolphmayoung: bknudson: thanks! ^18:02
ayoungIPv6 and the memcached fixes are in18:02
dolphmayoung: what was the memcache fix?18:02
ayoungdolphm, dolphm blacked out one version of it18:02
dolphmoh the 1.51 thing18:02
ayoungdolphm, the fix was done by putting a later version onto the mirrors18:02
bknudsonhi18:02
ayoungdolphm, the only high-priority bug I was aware of was the LDAP domain issue18:04
fabioAnvil4Me~!18:04
*** dwaite has joined #openstack-meeting18:04
simofabio: nice pw :)18:04
gyeeha18:04
fabiosorry wrong window :-)18:05
dolphmayoung: cool18:05
ayoungdolphm, this one is the one we are going with, right? https://review.openstack.org/#/c/28197/18:05
dolphmi'd like the ldappers to decide that18:05
dolphmbut that's on the agenda in a minute18:05
ayoungcool18:05
dolphmor we can just skip to it..18:05
dolphm#topic stable/grizzly 2013.1.1 release and the default domain in LDAP18:06
dolphm#link http://lists.openstack.org/pipermail/openstack-stable-maint/2013-May/000611.html18:06
dolphmstable/grizzly is happening this week18:06
ayoung#link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting#Agenda_for_next_meeting18:06
dolphmi asked that keystone 2013.1.1 be held until we get a default domain fix together for ldap18:06
dolphm#link https://bugs.launchpad.net/keystone/+bug/116872618:06
uvirtbotLaunchpad bug 1168726 in keystone/grizzly "default_domain_id breaks the ability to map keystone  to ldap" [Critical,In progress]18:07
ayoungdolphm, I actually mentioned that to apevec as well18:07
dolphmso, *a* fix needs to merge ASAP if we're going to do a backport of anything at all18:07
topoldolphm, whats the review number18:07
ayoungtopol, gyee please look at https://review.openstack.org/#/c/28197/18:07
bknudsonI'll review it this afternoon.18:07
dolphmand to re-iterate, my proposed solution is backporting a feature-cut (multi-domain support for ldap) https://review.openstack.org/#/c/28197/18:07
dolphm#link https://review.openstack.org/#/c/28197/18:08
ayoungdolphm, I endorce this approach18:08
ayoungendorse18:08
topoldolphm, I endorse as well18:08
dolphmtopol: sooner the better18:08
bknudsonI was able to work around it earlier, just added a domain entry18:08
topolI will review right after this meeting18:08
ayoungbknudson, yes, but it breaks other things that way18:08
dolphmbknudson: yeah, jcannava confirmed you could work around it post-config as well18:08
ayoungbknudson, if LDAP is read only, there is no field available for domain18:09
bknudsonI thought it was businessCategory18:09
gyee:)18:09
topolyes, but breaks our read only18:09
topolbknudson, mixing two different things18:09
topolit's not the businessCategory mapping; it's the need for a default domain or even Horizon pukes18:10
ayoungdolphm, yeah, I don't think your fix is going to be sufficient18:10
ayoungwe need to remove businessCategory and the domain field or we still have a broken LDAP18:10
ayoungwas it spzala 's fix that did that?18:10
dolphmayoung: explain?18:11
dolphmayoung: from config?18:11
ayoungdolphm, no, the user and projects objects18:11
topolCan't you put the domain mapping to businessCategory on the ignore list?18:11
dolphmuser_domain_id_attribute?18:11
topolfor groups, etc18:12
ayoungdolphm, right18:12
bknudsonso every user in LDAP should be in the default domain18:12
ayoungI thought I saw a patch that removed it18:12
ayoungbknudson, for now, yes18:12
topolayoung, why can't that just be fixed via config?18:12
bknudsonlooks like dolphm's patch does that.18:12
bknudsone.g., get_user() calls self._set_default_domain() on it.18:13
bknudsonso if you were using businessCategory for your domain id, it'd get wiped out.18:14
ayoungbknudson, yes18:15
dolphmwiped out or ignored?18:15
bknudson_set_default_domain is going to overwrite whatever domain_id is in the ref and replace it with the default domain_id.18:15
topolbknudson, but what happens with user_domain_id_attribute?18:16
bknudsonI don't see user_domain_id being wiped out18:17
dolphmbknudson: that's how the function behaves, but i don't think it will be overriding anything in the real world?18:17
topolexactly18:17
ayoungtopol, OK,  I think you are right.  We cannot safely remove it. We need to set attribute_ignore on it18:17
topolayoung. yes, exactly18:17
topoland that can be done via config18:17
bknudsonif domain_id attribute is ignored, does Keystone set user_domain_id to the default domain_id?18:18
ayoungtopol, so dolphm's patch should probably check for that value instead of blanket removing the domain.18:18
topolin ldap it won't set it at all18:18
* ayoung not happy with domain_attribute, but oh well18:19
bknudsonhow is a client going to know what the default domain id is?18:19
dolphmhmm, i need to update docstrings18:19
dolphmerm, so create_user, for example, should be validating that the default domain was specified, and then remove the domain_id attribute before handing off to ldap18:20
dolphmi'm not doing that second part atm18:20
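A sketch of the two-step create path dolphm outlines: validate that only the default domain was requested, then strip domain_id before the ref reaches LDAP. The function shape and the ldap_create callable are illustrative assumptions, not keystone's API:

    def create_user(user_ref, default_domain_id, ldap_create):
        # Step 1: reject anything that isn't the default domain.
        domain_id = user_ref.get('domain_id', default_domain_id)
        if domain_id != default_domain_id:
            raise ValueError('LDAP identity backend only supports '
                             'the default domain')
        # Step 2: drop domain_id so it is never written to LDAP
        # (the part dolphm says his patch doesn't do yet).
        clean_ref = dict((k, v) for k, v in user_ref.items()
                         if k != 'domain_id')
        return ldap_create(clean_ref)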
ayoungdolphm, it can be done another way18:20
ayoungthere is a field called attribute_ignore.18:20
ayoungI'll link18:20
dolphmayoung: why not fix it now?18:20
topolthat's in the config18:21
bknudsonI'd think you want to send the domain_id, and let ldap ignore it or not.18:21
topoland that's how we handle attributes that may or may not map to ldap depending on the schema18:21
ayounghttps://github.com/openstack/keystone/blob/master/keystone/common/ldap/core.py#L25118:21
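The attribute_ignore mechanism ayoung links to boils down to filtering named attributes out of a ref before it is written to LDAP; in keystone the list comes from config options such as user_attribute_ignore. A self-contained approximation:

    def filter_ignored_attributes(ref, attribute_ignore):
        """Drop attributes listed in attribute_ignore before an LDAP
        write; the list is passed in directly here instead of being
        read from keystone.conf, to keep the sketch standalone."""
        return dict((k, v) for k, v in ref.items()
                    if k not in attribute_ignore)

    # e.g. with user_attribute_ignore = domain_id in the [ldap] section:
    # filter_ignored_attributes({'name': 'bob', 'domain_id': 'x'},
    #                           ['domain_id']) == {'name': 'bob'}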
dolphmif you've already deployed against ldap/ad with a real domain object, you shouldn't have to change config to use 2013.1.118:21
topolbknudson, AGREED!18:21
dolphmif that's the case, then this is unbackportable18:21
ayoungdolphm, what if some grizzly early adopter started using LDAP with domains18:22
topoldolphm, two cases: one, they let the domain map to businessCategory or their equivalent, or they use ignore18:22
dolphmayoung: that's the case i'm referring to which i'd like to handle gracefully18:22
topolif you keep the ability to map to businessCategory or equivalent you shouldn't break anyone18:23
*** adjohn has quit IRC18:23
topoland then others can use the attribute ignore if they have readonly and it doesnt map18:23
bknudsonand don't wipe out the domain_id if LDAP returns it.18:23
dolphmtopol: two users: one has already deployed keystone 2013.1 against ldap/ad and created a domain object, and the other is on folsom, both are migrating to 2013.1.118:23
dolphmand read-only users are currently stuck on folsom18:24
*** PeTe____ has quit IRC18:24
ayoungdolphm, so if default_domain is set, and 'user_attribute_ignore' is set...18:24
dolphmayoung: default_domain what?18:24
topolso you create a default domain, correct?18:24
bknudsonit's not created, it's virtual18:24
topolthat has an id attribute, correct?18:25
bknudsonthe virtual domain has its id set to the default_domain_id18:25
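A sketch of the "virtual" default domain bknudson describes: nothing is looked up in or written to LDAP, the ref is simply fabricated with its id set to the configured default. The name and enabled fields are assumptions:

    def get_default_domain(default_domain_id):
        # No LDAP lookup: the default domain exists only in code.
        return {
            'id': default_domain_id,
            'name': 'Default',
            'enabled': True,
        }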
topolK, so worst case that value gets shoved into businessCategory or equivalent (if they mapped it)18:26
topolor they can choose to add to the ignore list for a read only ldap with no domain equivalent18:26
bknudsonI think that's what you'd want. If they ignored it then it doesn't go anywhere.18:26
topolmake sense?18:26
bknudsonso this patch seems to be doing "too much"18:26
topoland we don't end up taking away the businessCategory mapping if someone is already using it18:27
*** dwaite has quit IRC18:27
*** PeTe__ has joined #openstack-meeting18:27
ayoungtopol, it needs to be added to the ignore list for all objects18:28
ayounguser, group, project at least...18:28
bknudsonayoung: you mean by default?18:29
*** epende has quit IRC18:29
dolphmi'm still lost on why we shouldn't just be stripping the domain_id attribute from all refs on create()18:29
topoldolphm, you were scared someone was using it. you will break them18:29
ayoungevery single one of the assignments, too18:30
*** dwaite has joined #openstack-meeting18:30
topolor you can take it away and see if anyone screams...18:30
*** llu_linux has quit IRC18:30
ayoungbknudson, yes, by default18:30
ayoungbknudson, one of the problems is that we don't have an upgrade script for LDAP, so we have no way of knowing.18:31
*** llu_linux has joined #openstack-meeting18:31
bknudsonayoung: that sounds like a better solution to me.18:31
ayoungSo it needs to be well documented18:31
ayoungbknudson, we need a read only default domain as well, I think18:31
ayoungand I don't know what will happen if we just attribute ignore it18:32
bknudsonI like the read-only default domain.18:32
ayoungcan the domain field be blank?18:32
ayoungI suspect not18:32
dolphmayoung: where?18:32
topolshould we just pull it now and hope no one became dependent on it?18:32
ayoungtopol, have you tested this?18:32
*** anniec has joined #openstack-meeting18:32
*** zul has quit IRC18:32
topolI have not tested dolphm's patch and not tested making it ignore18:32
ayoungtopol, if you are going to suggest that we "just" do the attribute ignore, you need to, at a minimum, confirm it works.  I don't think it will18:33
dolphmtopol: not if we can't handle it gracefully, but i don't understand why we would be breaking them by ignoring it (unless they've already deployed multi-domain, you mean?)18:33
topoldolphm, that was the fear. I'm not sure it's realistic18:33
ayoungdolphm, of the two cases, I think breaking LDAP for 99% of the users is a worse sin than breaking the early adopter that is using LDAP domains18:34
dolphmwe need to ping the operators list on this18:34
ayoungbut we are damned either way18:34
topolI'd rather do death or glory: pull it completely out and feel comfortable that we know the code is solid.18:34
topolayoung, I agree18:34
dolphmayoung: unfortunately i agree18:34
dolphmayoung: that's a good way to word it, i might steal that18:35
topolbecause frankly it's the direction we are heading (one domain per ldap), so we break them now or break them later when they really are dependent on it18:35
bknudsonThe bug was reported by spzala... but I don't think that's because a customer complained.18:35
spzalano18:35
topolwe found it doing a poc18:35
spzalai found it via helping internal team18:35
ayoungactually, I also reported it, as I found it broken when doing a basic LDAP install for the FreeIPA presentation18:36
bknudsonok, just making sure that we had a good reason to backport to grizzly.18:36
ayoungIf we leave the domain attribute there, it needs to be completely ignorable18:36
*** shang has quit IRC18:36
topolayoung, what's scary is the combinations we will have to debug if we leave it as an ignorable option or not...18:37
*** shang has joined #openstack-meeting18:37
dolphmayoung: can you be more specific? "there" and "ignorable"18:37
ayoungtopol, agreed.  I am in favor of yanking it whole hog18:37
bknudsonisn't it already ignorable?18:37
ayoungbknudson, I doubt it18:37
topolwe end up having to support two yucky options for a long time18:37
ayoungbknudson, ignored just means not persisted to the DB18:37
topoli.e. not put into ldap18:37
ayoungbut if the code depends on that value, we have to do something nasty like populate it from config if the LDAP server is not providing it.  Horrible option18:38
*** MarkAtwood has quit IRC18:38
ayoungit needs to go18:38
topolif we get shot either way I'd rather be shot by the .00001% that already depend on multiple domains for an ldap18:38
bknudsonwe already do that for domain_id in identity v2.18:38
ayoungI don't think we have a choice18:38
ayoungbknudson, will that work for V3?18:38
bknudsonThis seems pretty similar18:38
* ayoung doubts it, but would like someone to test18:39
topolI agree with ayoung.  If we pull it now we have maybe one person yell at us18:39
ayoungJoe Savak18:39
topolIf we wait a year we could really get stuck18:39
simoyou should not hardcode LDAP queries, and just have defaults that can be changed and replacement variables18:40
simothen a good default configuration that defines the stuff the way you want it by default18:40
simolook at postfix support for LDAP if you need inspiration18:40
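A rough illustration of simo's suggestion: keep LDAP filters as deployer-overridable templates with replacement variables (the way postfix's query_filter works), rather than hardcoding them. Template names and the overrides mapping are made up for illustration:

    # Defaults a deployer can override; %(...)s are replacement variables.
    DEFAULT_FILTERS = {
        'user_by_name': '(&(objectClass=%(object_class)s)'
                        '(%(id_attr)s=%(value)s))',
    }

    def build_filter(name, overrides, **values):
        template = overrides.get(name, DEFAULT_FILTERS[name])
        # Real code must LDAP-escape the substituted values first.
        return template % values

    # build_filter('user_by_name', {}, object_class='inetOrgPerson',
    #              id_attr='cn', value='bob')
    # -> '(&(objectClass=inetOrgPerson)(cn=bob))'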
topolayoung, problem with Joe is: either he hates us a little now, or he hates us a ton later (when fully dependent on it)18:41
ayoungsimo, agreed, but there are limits to what we can do right now18:41
simoayoung: right now you need sane, if simpler, code18:41
ayoungthe domain support was not well thought out in the context of LDAP, and needs to be rolled back.18:42
topolbig picture is we don't want to support multiple domains in a single ldap.   We either break the bad news to Joe now or later. But either way he will be upset18:42
simotopol: I think you should not make that decision, but for the short term it certainly is better to have fewer options than broken ones18:42
bknudsonI'm sure Joe isn't impressed by how multiple domains is implemented today.18:43
*** ladquin has joined #openstack-meeting18:43
ayoungtopol, eventually, we can do it via multiple subtrees, just not today18:43
*** novas0x2a|laptop has joined #openstack-meeting18:43
ayoungOK, dolphm what do we need to do to make this happen, besides rewrite the patch?18:43
topolsimo, problem is right now it really is broken. Need some time to do it right18:43
topolayoung, agreed18:43
ayoungtopol, are you going to complete the rewrite?18:44
dolphmayoung: i'm not sure i understand the changes i'd need to make, so i'd prefer someone else to volunteer to pick it up?18:44
ayoungand by you, I mean spzala, of course18:44
topolayoung, rewrite of dolphs patch?18:44
dolphmor i need a mean code review to get me going18:44
dwaitemy opinion - it would be better to have the default LDAP integration be as simple and maintainable as possible, even if that means additional functionality plugged in via contributed code18:44
ayoungtopol, either that or continue on with spzala's, but removing the domains attribute as well as making sure the domain API is not broken for LDAP18:45
topolsahdev will help dolphm in any way he wants18:45
dolphmi'm not sure why jsavak would care ahead of havana18:45
spzalatopol: yes18:45
ayoungwant to #action that?18:46
topolsahdev likes dolphs approach18:46
dolphmthen there's no #action18:46
spzalaI like dolph's approach. As far as I know, domain support for ldap wasn't complete... so, for example, when you created a user with a domain_id attribute, no domain was created and the user was just added to that domain.  So how would it break anything?18:46
ayoungNo18:47
ayoungthat is not sufficient18:47
ayoungnecessary yes18:47
gyeelike that, if it never works, then it's not a bug :)18:47
ayoungbut we need to make sure that if the domain attribute remains, it can be ignored18:47
dolphmayoung: be careful about backportability18:47
dolphmi'd like to just ignore it 100%18:47
bknudsonwho decides backportability?18:48
ayoungdolphm, someone needs to confirm it works.18:48
dolphmstable maintenance team18:48
topoldolphm, was your code on a path to ignore it?18:48
ayoungI doubt that is the case18:48
bknudsonI've seen reviews in stable where they just say the change is too big.18:48
dolphmbknudson: +118:48
*** sandywalsh has quit IRC18:49
*** cp16net|away is now known as cp16net18:49
bknudsonI think they would reject dolphm's patch, but I'm just guessing.18:49
ayoung I think we have a broken impl, even with dolph's patch18:49
ayoungsuspect18:49
dolphmbknudson: i wouldn't be surprised, but stakeholders need to beg either way18:49
bknudsonprobably broken even without dolphm's patch, too.18:50
ayoungI'll confirm18:50
dolphmayoung: can you propose a follow up fix?18:50
topolsahdev's original patch was much smaller18:50
bknudsonwhat was the original patch?18:50
topolwasn't pretty, but small and got the job done18:50
bknudsonis this the one to add domain with keystone-manage?18:50
spzalatopol: my patch was to return a default domain (virtual one) if one is not created18:50
ayoung#link https://review.openstack.org/#/c/27364/18:51
*** ryanpetr_ has joined #openstack-meeting18:51
spzalaayoung: thanks18:51
bknudsonit failed jenkins so no one looked at it.18:51
ayoungspzala, what does that do if there is no attribute available to map to the domain id?18:51
*** MarkAtwood has joined #openstack-meeting18:52
topolbknudson, that was a jenkins bug I believe18:52
spzalaayoung: the patch was on providing our conceptual domain18:52
spzalabknudson: yes, that was due to 'token' bug18:53
spzalathat failed jenkins for many patches18:53
topolwhat if sahdev ran his patch and did a full regression with domain_id_attribute being ignored?18:53
dolphmso, 8 minutes left... we need a strong consensus to backport anything here... if we don't have a consensus, i'd recommend that we leave stable/grizzly broken and think about how to move stable/folsom deployments to havana, or even publish a "fixed" driver outside of the openstack release cycle that those users can pick up18:54
*** koolhead17 has quit IRC18:54
*** ryanpetrello has quit IRC18:54
ayoungdolphm, I say to let spzala take it and run with it18:54
ayoungand I will be involved on the approach18:55
spzalaayoung: go with my patch? with ignore attribute?18:55
ayoungspzala, yes, plus you need to set the attribute to something that doesn't exist.  businessCategory might not even be there on the back end18:55
topolayoung, doesn't exist is guaranteed to break. I don't understand18:56
ayoungdolphm, we'll let you know shortly if this is going to work18:56
ayoungtopol, then we need to go further... let's just agree to work on it for now.18:56
spzalaayoung: thanks. hmmm... I didn't get it either18:56
bknudsonI think we're at least headed to not backporting this for the stable/grizzly 2013.1.118:56
dolphmi'd also suggest considering splitting the ldap driver into drivers that serve more specific use cases, so serving one specific use case is much less likely to break others moving forward18:57
topolayoung, I just dont understand what you mean18:57
simodolphm: as part of https://wiki.openstack.org/wiki/MessageSecurity I need a key server and I am starting with building one in keystone, I have a couple of q.s about that, you think there is time ?18:57
ayoungtopol, that is fine, I can explain later...we have 3 minutes left in the meeting18:57
bknudsona read-only LDAP driver would be nice.18:57
morganfainbergdolphm: pluggable LDAP drivers?18:57
dolphmsimo: yes, i have an opinion... contribute to https://github.com/cloudkeep/barbican18:58
bknudsonmorganfainberg: the LDAP driver is a plugin.18:58
simodolphm: unrelated18:58
morganfainbergbknudson: oh right. ugh.  i came into this a bit late, brain is not 100% firing yet, sorry.18:58
dolphmmorganfainberg: multiple ldap drivers18:58
morganfainbergdolphm: right, makes sense.18:58
dolphmlike ldap.ReadOnlyDriver, etc18:59
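A sketch of what an ldap.ReadOnlyDriver along the lines dolphm suggests might look like: delegate reads to an existing driver and refuse writes outright, so a read-only corporate directory can never be modified by mistake. Class and method names are hypothetical:

    class ReadOnlyIdentityDriver(object):
        def __init__(self, driver):
            self._driver = driver  # the real LDAP-backed driver

        # Reads pass straight through.
        def get_user(self, user_id):
            return self._driver.get_user(user_id)

        def list_users(self):
            return self._driver.list_users()

        # Writes fail fast instead of surprising the deployer.
        def create_user(self, user_ref):
            raise NotImplementedError('LDAP backend is read-only')

        def update_user(self, user_id, user_ref):
            raise NotImplementedError('LDAP backend is read-only')

        def delete_user(self, user_id):
            raise NotImplementedError('LDAP backend is read-only')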
ayoungsimo, dolphm lets move this discussion to #openstack-dev.  We are not going to finish it here and now18:59
topolsahdev will investigate and will try full regression with domain_id_attribute ignored and will report back. Patch will be small, and then other folks can call whether it meets backporting guidelines18:59
simodolphm: but if you feel strongly I can see18:59
*** PeTe__ has quit IRC18:59
dolphm#endmeeting18:59
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"18:59
openstackMeeting ended Tue May  7 18:59:13 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/keystone/2013/keystone.2013-05-07-18.00.html18:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/keystone/2013/keystone.2013-05-07-18.00.txt18:59
openstackLog:            http://eavesdrop.openstack.org/meetings/keystone/2013/keystone.2013-05-07-18.00.log.html18:59
dolphm#success18:59
clarkbo/18:59
* fungi takes a bow19:00
*** zaro has joined #openstack-meeting19:00
*** PeTeT has joined #openstack-meeting19:00
jeblairany other infra people?19:00
*** bknudson has left #openstack-meeting19:00
*** fabio has quit IRC19:00
jeblairmordred: there's a topic on the agenda that is dear to you...19:00
clarkbI really need to learn to read the agenda first...19:00
jeblairanyone should feel free to write it too.  :)19:01
zaroyo!19:01
*** jlk has joined #openstack-meeting19:01
*** sandywalsh has joined #openstack-meeting19:01
jlko/19:01
*** olaph has joined #openstack-meeting19:01
*** ijw has quit IRC19:01
jeblair#startmeeting infra19:02
openstackMeeting started Tue May  7 19:02:09 2013 UTC.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.19:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:02
*** openstack changes topic to " (Meeting topic: infra)"19:02
olapho/19:02
openstackThe meeting name has been set to 'infra'19:02
jeblair#topic bugs19:02
*** openstack changes topic to "bugs (Meeting topic: infra)"19:02
jeblair(i'm going to abuse my position as chair to insert a topic not currently on the agenda)19:03
fungiabuse away19:03
jeblairthis is a PSA to remind people to use launchpad bugs for infra tasks19:03
jeblairi think we were a bit lax about that last cycle (all of us, me too)19:03
clarkbwe were. ++ to pleia for forcing us to do bug days19:04
jeblairespecially as we're trying to make it easier for others to get involved, i think keeping up with bug status is important19:04
jeblairyes, much thanks to pleia for that; we'd be in a much worse position otherwise19:04
jlk+1 to that19:04
jlkus new folks have to know where to find work that needs to be done19:04
fungii couldn't agree more. definitely going to strive to improve on that as my new cycle resolution19:04
*** atiwari has quit IRC19:05
jeblairso anyway, please take a minute and make sure that things you're working on have bugs assigned to you, and things you aren't working on don't.  :)19:05
jeblairbtw, i think we have started doing a better job with low-hanging-fruit tags19:05
anteayao/19:06
jeblairso hopefully that will be an effective way for new people to pick up fairly independent tasks19:06
*** danwent has quit IRC19:06
jeblairany other thoughts on that?19:06
clarkbI think we should try and make the bugday thing frequent and scheduled in advance19:06
*** AlanClark has joined #openstack-meeting19:06
jlkoh something that is also missing19:06
fungiseconded19:06
jlka document to outline proper bug workflow19:07
jlkmaybe that exists somewhere?19:07
jeblairclarkb: +1;  how often?  line up with milestones?19:07
jlkI just took a guess at what to do19:07
*** sandywalsh has quit IRC19:07
clarkbjeblair: I was thinking once a month. lining up with milestones might be hard as we end up being very busy around milestone time it seems like19:07
jeblairjlk: no, but i think i need to write a 'how to contribute to openstack-infra' doc19:07
jeblairjlk: i should assign a bug to myself for that.  :)19:08
fungilining up between milestones ;)19:08
clarkbbut any schedule that is consistent and doesn't allow us to put it off would be good19:08
clarkband maybe we cycle responsibility for driving it so that pleia doesn't have to do it each time19:08
spzalabknudson: yes, if it exists then use it... if not, then use virtual default domain19:08
spzalasorry, wrong chat box19:09
jeblairclarkb: want to mock up a calendar?19:09
clarkbjeblair: sure. I will submit a bug for it too :P19:09
clarkb#action clarkb to mock up infra bugday calendar19:09
mordredo/19:10
jeblairjlk: basically, feel free to assign a bug to yourself when you decide to start working on something19:10
jeblair#topic actions from last meeting19:10
*** openstack changes topic to "actions from last meeting (Meeting topic: infra)"19:10
jeblair#link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-04-30-19.03.html19:10
jlkjeblair: that's what I assumed.19:11
jeblairmordred: mordred set up per-provider apt mirrors (incl cloud archive) and magic puppet config to use them ?19:11
jeblairmordred: maybe you should just open a bug for that and let us know when there's something to start testing19:11
mordredjeblair: yes. I will do this19:12
jeblairclarkb: clarkb to ping markmc and sdague about move to testr19:12
clarkbI have not done that yet19:12
jeblair#action clarkb to ping markmc and sdague about move to testr19:12
jeblairi assume that's still a good idea.  :)19:12
clarkbit is19:12
clarkband it should be a higher priority of mine to get things in before milestone 1 if possible19:13
clarkbmordred did bring it up in the project meeting iirc19:13
mordredyeah. people are receptive to it19:13
*** ryanpetr_ has quit IRC19:13
mordredI think on my tdl is "open a bug about migrating everything and set up the per-project bug tasks"19:13
jeblair#topic oneiric server migrations19:14
*** openstack changes topic to "oneiric server migrations (Meeting topic: infra)"19:14
jeblairso we moved lists and eavesdrop19:14
*** vipul is now known as vipul|away19:14
*** vipul|away is now known as vipul19:14
jeblairthe continued avalanche of emails to os-dev seems to indicate that went okay19:14
jeblairand meetbot is answering hails so19:14
jeblairi guess that's that?19:15
clarkbwe need to shutdown/delete the old servers at some point. Once we have done that the task is complete19:15
clarkbjeblair: not quite19:15
fungia resounding success19:15
clarkbwe need to delete the old servers (unless you already did that) and mirror26 needs to be swapped out for a centos slave19:15
*** ryanpetrello has joined #openstack-meeting19:15
jeblairreed: have you logged into the new lists.o.o?19:15
fungii can take mirror26 as an action item19:16
jeblairreed: if not, let us know when you do so and if you have any objections to deleting the old server.19:16
fungiunless you wanted it, clarkb19:16
clarkbfungi: go for it19:16
reedjeblair, yes19:16
fungi#action fungi open a bug about replacing mirror26 and assign it to himself19:16
reedsystem restart required19:16
jeblairreed: ?19:16
reedjust logged in, *** System restart required ***19:17
jeblairoh.  the issue.  :)19:17
jeblairi believe it actually was recently rebooted.19:17
clarkbit was rebooted on saturday before we updated DNS19:17
clarkbI guess that means that more updates have come in since then19:17
jeblairreed: anything missing from the move?  or can we delete the old server?19:17
* fungi is pretty sure our devstack slaves are the only servers we have which don't say "restart required" every time you log in19:18
*** stevemar has quit IRC19:18
reedjeblair, how should I know if anything is missing? did anybody complain?19:18
jeblairreed: no, i think we have the archives and the lists seem to be working, so i don't see a reason19:19
*** markmcclain has joined #openstack-meeting19:19
jeblairreed: but we didn't sync homedirs (i don't think)19:19
reedalright then, I don't think I have anything in the old server there anyway19:19
jeblairreed: so if your bitcoin wallet is on that server you should copy it.  :)19:19
clarkbjeblair: I did not sync homedirs19:19
reedoh, my wallet!19:19
mordredjeblair already stole all of my bitcoins19:19
jeblair#action jeblair delete old lists and eavesdrop19:20
reedone of the cloud expense management systems allows you to request bitcoins for payments19:20
*** sandywalsh has joined #openstack-meeting19:20
jeblairwe should charge bitcoins for rechecks19:20
clarkbha19:20
fungiwe should charge bugfixes for rechecks19:20
jeblair#topic jenkins slave operating systems19:20
*** openstack changes topic to "jenkins slave operating systems (Meeting topic: infra)"19:21
jeblairmy notes in the wiki say:      current idea: test master and stable branches on latest lts+cloud archive at time of initial development19:21
jeblairand:  open question: what to do with havana (currently testing on quantal -- "I" release would test on precise?)19:21
mordredthere's an idea about having the ci system generate a bitcoin for each build, and then embed build id information into the bitcoin...19:21
mordredoh good. this topic again. my favorite :)19:21
*** garyk has quit IRC19:22
clarkbjeblair: I have thought about it a bit over the last week and I think that testing havana on quantal then "I" on precise is silly19:22
jeblairclarkb: yes, that sounds silly to me too.19:22
*** mkollaro has quit IRC19:22
clarkbit opens us to potential problems when we open I for dev19:22
clarkband we may as well sink the cost now before quantal and precise have time to diverge19:23
jeblairso if we're going to stick with the plan of lts+cloud archive, then i think we should roll back our slaves to precise asap.19:23
*** Nachi has quit IRC19:23
fungiand the thought is that we'll be able to test the "j" release on the next lts?19:23
clarkbfungi: yes19:24
mordredlts+cloud archive ++19:24
mordredat least until it causes some unforeseen problem19:24
fungimakes sense. i can spin up a new farm of precise slaves then. most of the old ones were rackspace legacy and needed rebuilding anyway19:24
mordredI believe zul and Daviey indicated they didn't think tracking depends in that order would be a problem19:25
*** dwaite has quit IRC19:25
clarkbjeblair: I assume we want to run it by the TC first?19:25
clarkbbut I agree that sooner is better than later19:25
fungithe tc agenda is probably long since closed for today's meeting. do we need to see about getting something in for next week with them?19:25
mordredhonestly, I don't think the TC will want to be bothered with it (gut feeling, based on previous times I've asked things)19:26
jeblairyes, why don't we do it, and just let them know19:26
mordredit doesn't change much in terms of developer experience, since we're still hacking on pypi19:26
jlkdon't make it a question19:26
fungifair enough19:26
jlkmake it a "hey we're doing this thing, thought you'd like to know"19:26
jeblairif they feel strongly about it, we can certainly open the discussion (and i would _love_ new ideas about how to solve the problem.  :)19:26
jeblairmordred: you want to be the messenger?19:27
mordredjeblair: sure19:27
mordredI believe we'll be talking soon19:27
jeblair#action mordred inform TC of current testing plans19:27
jeblair#agreed drop quantal slaves in favor of precise+cloud archive19:28
fungi#action fungi open bug about spinning up new precise slaves, then do it19:28
jeblairany baremetal updates this week?19:28
mordrednot to my knowledge19:28
jeblair#topic open discussion19:29
*** openstack changes topic to "open discussion (Meeting topic: infra)"19:29
fungioh, while we're talking about slave servers, rackspace says the packet loss on mirror27 is due to another customer on that compute node19:29
jeblairfungi: !19:29
mordredfwiw, I'm almost done screwing with hacking to add support for per-project local checks19:29
*** vipul is now known as vipul|away19:29
mordredas a result, I'd like to say "pep8 is a terrible code base"19:29
clarkbfungi: we should DoS them in return :P19:29
fungithey offered to migrate us to another compute node, but it will involve downtime. should i just build another instead?19:29
jeblairfungi: want to spin up a replacement mirror27?  istr that we have had long-running problems with that one?19:29
anteayaalias opix="open and fix" #usage I'll opix a bug for that19:29
fungiheh19:30
*** mrodden has joined #openstack-meeting19:30
*** Haneef has quit IRC19:30
jlkthat's the cloud way right? problem server? spin up a new one!19:30
jlk(or 10)19:30
jeblairmordred: i agree.  :)19:30
fungiyeah, i'll just do replacements for both mirrors in that case19:30
jeblairfungi: +119:30
clarkbdo we need to spend more time troubleshooting static.o.o problems?19:30
fungioh?19:31
clarkbsounds like we were happy calling it a network issue19:31
fungioh, the ipv6 ssh thing?19:31
clarkbare we still happy with that as the most recent pypi.o.o failure?19:31
fungiahh, that, yes19:31
clarkbfungi: no pip couldn't fetch 5 packages from static.o.o the other day19:31
fungiright19:31
jeblairclarkb: i just re-ran the logstash query with no additional hits19:31
fungithat's what prompted me to open the ticket. i strongly suspect it was the packet loss getting worse than usual19:32
clarkbfungi: I see19:32
fungii'd seen it off and on in the past, but never to the point of impacting tests (afaik)19:32
mordredso - possibly a can of worms - but based off of "jlk | that's the cloud way right? problem server? spin up a new one!"19:32
*** jcoufal has quit IRC19:32
jeblairfungi: though i believe the mirror packet loss is mirror27 <-> static, whereas the test timeouts were slave <-> static...19:33
mordredshould we spend _any_ time thinking about ways we can make some of our longer-lived services more cloud-y?19:33
mordredfor easier "oh, just add another mirror to the pool and kill the ugly one" like our slaves are19:33
fungimmm, right. i keep forgetting static is what actually serves the mirrors19:33
clarkbmordred: are you going to make heat work for us?19:34
fungiso then no, that was not necessarily related19:34
clarkbbecause I would be onboard with that :)19:34
jlkmordred: fyi we're struggling with that internally too, w/ our openstack control stuff in cloud, treating them more "cloudy" whatever that means.19:34
mordredjlk: yeah, I mean - it's easier for services that are actually intended for it - like our slave pool19:35
mordredotoh - jenkins, you know?19:35
jlkyup19:35
jeblairmordred: as they present problems, sure, but not necessarily go fixing things that aren't broke.19:35
jlkyeah, these are harder questions19:35
mordredjeblair: good point19:35
jlkjeblair: +119:35
jeblairmordred: we are making jenkins more cloudy -- zuul/gearman...19:35
jlkdoes gearman have an easy way to promote to master?19:35
* mordred used floating ip's on hp cloud the other day to support creating/deleting the same thing over and over again while testing - but having the dns point to the floating ip19:36
jeblairjlk: no, gearman and zuul will be (co-located) SPOFs19:36
mordredjlk: gearman doesn't have a master/slave concept19:36
clarkbmordred: yeah I intend on trying floating ips at some point19:36
mordredclarkb: it worked VERY well and made me happy19:36
* ttx waves19:36
mordredjeblair: doesn't gearman have support for multi-master operation-ish something?19:36
*** redthrux has joined #openstack-meeting19:36
*** jcru has joined #openstack-meeting19:36
jlkgearman job server(s)19:36
clarkbmordred: I think it does, but if zuul is already a spof...19:37
mordredttx: I am tasked with communicating a change in our jenkins slave strategy to the TC - do I need an agenda item?19:37
mordredclarkb: good point19:37
jeblairmordred, jlk: yeah, actually you can just use multiple gearman masters19:37
jeblairmordred, jlk: and have all the clients and workers talk to all of the masters19:37
jlkso yes, you can have multiple in an active/active mode19:37
mordredso once gearman is in, then our only spofs will be gerrit/zuul19:37
jlkbut as stated, doesn't solve zuul19:37
jeblairmordred, jlk: however, we'll probably just run one on the zuul server.  because zuul spof.19:37
mordredyeah19:37
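On the multi-master point: with the python-gearman client library (its use here is an assumption of this sketch), every client and worker is simply constructed with the full list of job servers, which gives active/active operation with no promotion step. Hostnames are made up:

    import gearman  # python-gearman

    # Per jeblair: run several gearman job servers and point every
    # client and worker at all of them.
    SERVERS = ['gearman1.example.org:4730', 'gearman2.example.org:4730']

    client = gearman.GearmanClient(SERVERS)   # submits jobs via any server
    worker = gearman.GearmanWorker(SERVERS)   # registers with every server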
ttxmordred: you can probably use the open discussion area at the end. If it's more significant should be posted to -dev and linked to -tc to get a proper topic on the agenda19:37
ttx(of next week)19:38
*** danwent has joined #openstack-meeting19:38
mordredttx: it's not. I don't think anyone will actually care or have an opinion - but information is good19:38
*** grapex has joined #openstack-meeting19:38
ttxmordred: will try to give you one minute at the end -- busy agenda19:38
anteayajeblair: can I take a turn?19:39
*** matiu has joined #openstack-meeting19:39
*** matiu has joined #openstack-meeting19:39
jeblairanteaya: the floor is yours19:39
anteayathanks19:39
anteayasorry I haven't been around much lately, figuring out the new job and all19:40
*** markmcclain has left #openstack-meeting19:40
*** markmcclain has joined #openstack-meeting19:40
anteayahoping to get back to the things I was working on like the openstackwatch url patch19:40
anteayabut if something I said I would do is important, pluck it from my hands and carry on19:40
anteayaand thanks19:40
jeblairanteaya: thank you, and i hope the new job is going well.19:41
anteaya:D thanks jeblair, it is19:41
anteayalike most new jobs I have to get in there and do stuff for a while to figure out what I should be doing19:41
anteayagetting there though19:42
jeblairanteaya: we do need to sync up with you about donating devstack nodes.  should i email someone?19:42
anteayahmmm, I was hoping I would have them by now19:43
anteayawhen I was in Montreal last week I met with all the people I thought I needed to meet with19:43
anteayaand was under the impression there were no impediments19:43
anteayathought I would have the account by now19:43
anteayayou are welcome to email the thread I started to inquire19:43
anteayathough it will probably be me that replies19:44
jeblairanteaya: ok, will do.  and if you need us to sign up with a jenkins@openstack.org email address or something, we can do that.19:44
anteayalet's do that, let's use the official channels and see what happens19:44
anteayaI don't think so, I got the -infra core emails from mordred last week19:44
anteayaso I don't think I need more emails19:44
*** markmc has joined #openstack-meeting19:45
anteayaemail the thread, I'll forward it around, maybe that will help things19:45
anteayaand thanks19:45
jeblairthank you19:45
jeblairi'm continuing to hack away at zuul+gearman19:45
anteayafun times19:46
jeblairright before this meeting, i had 5 of its functional tests working19:46
fungioh, on the centos py26 unit test front, dprince indicated yesterday that he thought finalizing the remaining nova stable backports by thursday was doable (when oneiric's support expires)19:46
fungidprince, still the case?19:46
jeblairi'm hoping i can have a patchset that passes tests soon.19:46
clarkbjeblair: nice19:46
anteayayay for passing tests19:46
clarkbI have a series of changes up that makes the jenkins log pusher stuff for logstash more properly daemon like19:47
zaroi'm figuring out how to integrate WIP with gerrit 2.6 configs.19:47
*** gyee has quit IRC19:47
clarkbI think that what I currently have is enough to start transitioning back to importing more logs and working to normalize the log formats. But I will probably push that down the stack while I sort out testr19:48
dprincefungi: for grizzly we need the one branch in and we are set.19:48
fungidprince: any hope for folsom?19:48
dprincefungi: for folsom I think centos6 may be a lost cause.19:48
fungiugh19:48
jeblairclarkb: what are you doing with testr?19:48
fungi'twould be sad if we could test stable/folsom for everything except nova on centos19:49
jeblairdprince, fungi: hrm, that means we have no supported python2.6 test for folsom nova19:49
dprincefungi: it looks like it could be several things (more than 2 or 3) that would need to get backported to fix all that stuff.19:49
clarkbjeblair: motivating people like sdague and markmc to push everyone else along :)19:49
fungileaves us maintaining special nova-folsom test slaves running some other os as of yet undetermined19:49
clarkbjeblair: I don't intend on doing much implementation myself this time around19:49
jeblairclarkb: +119:50
*** beraldo has joined #openstack-meeting19:50
sdagueoh no, what did I do wrong? :)19:50
clarkbsdague: nothing :)19:50
dprincejeblair: the centos work can be done. but I'm not convinced it is worth the effort.19:50
markmcjeblair, I'm fine with dropping 2.6 testing on stable/folsom - there should be pretty much nothing happening there now19:50
fungii'll start to look into debian slaves for nova/folsom unit tests i guess?19:51
jeblairoptions for testing nova on python2.6 on folsom: a) backport fixes and test on centos; b) drop tests; c) use debian19:51
markmc(b) or we'll get (a) done somehow IMHO19:51
mordredI say b. folsom came out before we made the current distro policy19:52
fungioh, i prefer markmc's suggestion in that case. less work for me ;)19:52
clarkboh so I don't forget.19:52
jeblairokay.  (b) is a one line change to zuul's config19:52
clarkb#action clarkb to get hpcloud az3 sorted out19:52
*** adjohn_ has quit IRC19:52
* jlk has to drop off19:53
jeblair#agreed drop python2.6 testing for nova on folsom19:53
jeblairjlk: thanks!19:53
* mordred shoots folsom/python2.6 in the facehole19:53
fungi#action fungi add change to disable nova py26 tests for folsom19:53
fungii'll drop that on top of my oneiric->centos change and we can merge them together19:53
*** shardy has joined #openstack-meeting19:54
jeblairfungi: cool.  oh, sorry, i think it's 2 lines.19:54
fungijeblair: i'll find the extra electrons somewhere19:54
jeblairanything else?19:54
*** datsun180b has joined #openstack-meeting19:55
*** SlickNik has joined #openstack-meeting19:55
olaphhubcap19:56
fungia merry tuesday to all! (excepting those for whom it may already be wednesday)19:56
jeblairthanks everyone!19:56
jeblair#endmeeting19:56
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"19:56
openstackMeeting ended Tue May  7 19:56:42 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:56
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-05-07-19.02.html19:56
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-05-07-19.02.txt19:56
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-05-07-19.02.log.html19:56
*** robertmyers has joined #openstack-meeting19:57
*** beraldo has left #openstack-meeting19:59
vishyhi!20:00
mikalHello20:00
ttxWho is around for the TC meeting ?20:00
devanandao/20:00
shardyo/20:00
markwasho/20:00
*** pcm__ has quit IRC20:00
russellbo/20:00
*** gabrielhurley has joined #openstack-meeting20:00
cp16neto/20:00
dolphmo/20:00
datsun180bo/20:00
mordredo/20:00
SlickNiko/20:00
*** john5223 has left #openstack-meeting20:00
ttxmarkmc, annegentle, notmyname, jgriffith, gabrielhurley, markmcclain: around ?20:01
gabrielhurley\o20:01
markmcclaino/20:01
markmcyep20:01
ttxwe are on20:01
ttxI'll proxy jd__, per http://lists.openstack.org/pipermail/openstack-tc/2013-May/000240.html and https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee#Proxying20:01
markmccripes that could almost be a full house20:01
russellb11, 12 including proxy vote20:01
grapexo/20:01
ttx#startmeeting tc20:01
openstackMeeting started Tue May  7 20:01:39 2013 UTC.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.20:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:01
*** openstack changes topic to " (Meeting topic: tc)"20:01
openstackThe meeting name has been set to 'tc'20:01
ttxAgenda is pretty busy, we'll see how far we manage to go today20:01
ttx#link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee20:01
hub_capwe can keep reddwarf short ;)20:02
ttx#topic RedDwarf Application for Incubation20:02
*** openstack changes topic to "RedDwarf Application for Incubation (Meeting topic: tc)"20:02
ttxspeaking of which20:02
ttx#link https://wiki.openstack.org/wiki/ReddwarfAppliesForIncubation20:02
hub_cap#link https://wiki.openstack.org/wiki/ReddwarfUsingHeat20:02
hub_cap#link https://review.openstack.org/#/c/28328/20:02
ttxThis is a continuation from last week discussion.20:02
hub_capthose were my homework items20:02
*** armax has left #openstack-meeting20:02
ttxhub_cap posted the Heat integration plan we requested, see above20:02
hub_capthe latter is a review showing the effort to get postgres into reddwarf20:02
ttxI haven't seen clearly new questions being raised on the public discussion20:03
hub_capits not a full impl, but it _works_ ie i can create users/dbs20:03
hub_capnope ttx and i updated w/ shardy comments about the refactor20:03
ttxhub_cap: I had one question. In this doc you mention that Heat integration would be optional as a first step...20:03
hub_capsure ttx, let me splain20:03
ttxI think the benefit of using Heat is to avoid duplicating orchestration functionality, so if the code is still around in RedDwarf it's not so great20:03
hub_capsure i agree20:03
ttxI'd definitely like to see the orchestration code dropped in RedDwarf before it graduates from incubation.20:03
hub_capgrad, yes20:03
ttxSo if for any reason (n+1 deprecation ?) it's not doable in a few months, maybe it's a bit early to file for incubation ?20:04
ttxI see two options: (1) have it optional in a few months, file for incubation *then*, get visibility at summit, remove legacy during the I cycle and graduate to integrated for the J cycle20:04
hub_capoh its doable20:04
*** jbartels_ has quit IRC20:04
ttx(2) have it optional real quick, remove legacy during the H cycle, and graduate to integrated for the I cycle20:04
*** mdenny has quit IRC20:04
*** lglenden has joined #openstack-meeting20:04
hub_capthe reason i mentioned optional was20:04
hub_capbut someone could stand up reddwarf right now, and point it to say, hp cloud, and not need to self install heat, or have hp cloud install heat20:05
*** mdenny has joined #openstack-meeting20:05
hub_capie, it works against a cloud _right now_20:05
* mordred likes the idea but doesn't necessarily think we need to expect them to have the work done before incubation - I think having a scope and a road map is quite fair and in keeping with past projects20:05
ttxmordred: I'd just consider it a condition for graduation, personally20:05
mordredhub_cap: and I am a big fan of things not requiring all of the infra20:05
mordredttx: ++20:05
markmchub_cap, is that a use case for the project, though?20:05
dolphmscope = DBaaS or simply RDBaaS?20:05
*** colinmcnamara has joined #openstack-meeting20:05
markmchub_cap, would you not expect reddwarf and the cloud to be deployed together always?20:06
hub_capthx markmc for elaborating20:06
*** ayurchen has joined #openstack-meeting20:06
markmchub_cap, for clouds that don't have heat, heat would just be an impl detail of them adding the reddwarf service?20:06
russellband the cloud could have heat running internally and not necessary exposed to customers20:06
hub_capyes markmc20:06
russellbyes, that.20:06
markmccool20:06
mordredmarkmc: ++20:06
mordredheat should hopefully soon be able to also run outside of a cloud20:07
hub_capeither way, they could have heat or not, and still get this puppy fired up20:07
ttxhub_cap: see dolph's question20:07
*** adjohn has joined #openstack-meeting20:07
ttx<dolphm> scope = DBaaS or simply RDBaaS?20:07
vishymarkmc, hub_cap: it sounds like you could run reddwarf locally and have it use a public cloud?20:07
vishyis that correct?20:07
hub_capsure, i'm not sure we've fully answered that, but last meeting i thought we decided RDBaaS was fine for _now_20:07
*** torandu_ has quit IRC20:07
hub_capvishy: correct, u don't have to own the cloud20:07
hub_capso to speak20:07
russellbusing heat doesn't change that though20:07
russellbin theory.20:07
* mordred would like for our accepted scope to be RDBaaS and for increase in that to require further vote20:08
vishyrussellb: it would mean you would have to run heat locally as well20:08
russellbyes20:08
dolphmhub_cap: that was my impression from last meeting, i just wanted to double check today20:08
hub_capim fine w/ RDBaaS for now20:08
markwashre scope, it makes sense to me to treat it as "DB Provisioning aaS"20:08
* markmc is fine with RDBaaS scope too20:08
markwashrather than focusing on relational vs non20:08
hub_capi _do_ know we are going to be doing a cluster api20:08
hub_capand that facilitates things like non RDB's20:08
hub_capso it might fall in line quite well20:08
vishyseems like keeping the option to run without heat might be valuable until heat is ubiquitous in public clouds20:08
markwashvishy: +120:08
mordredvishy: if heat can also run locally easily too?20:09
hub_capand if someone wants to make a NRDB, they should consider reddwarf as a viable option before going from scratch20:09
shardyvishy: unless it leads to lots of duplication and maintenance overhead..20:09
ttxvishy: i'm a bit concerned with code duplication20:09
russellbsame here20:09
* mordred is pushing hard for non-colocated heat so that openstack-infra can start using it20:09
russellband ttx had a good point earlier around when heat becomes "the way it works", and how that affects the incubation timeline20:09
shardymordred: I think (with some auth tweaks) heat could run locally too relatively easily20:09
russellbi'd like to explore that a bit more20:09
mordredshardy: I believe cody-somerville is working on it20:09
ttxvishy: someone would run Reddwarf+Heat outside of the cloud20:09
vishymordred: I like the idea, I'm just thinking in terms of user adoption. It is nice if i could try it out without having to start up heat.20:10
shardymordred: IIRC there's a patch up adding much of what we need right now20:10
*** dprince has quit IRC20:10
hub_capttx that's def possible20:10
shardymordred: yup, that's what I was referring to20:10
mordredvishy: ++20:10
hub_capthey can run reddwarf w/o a cloud now20:10
hub_capso, heat as an incubation graduation req?20:10
russellband if so, what does the timeline look like for that?20:11
ttxI'd rather avoid us having to deprecate a feature, I've lived through enough project splits20:11
markwashcan we really set rules for graduation requirements at this point? that would be putting constraints on future TC decisions that I don't think we have the power to make20:11
russellbpossible in H timeframe?20:11
hub_caprussellb: def20:11
dolphmmarkwash: i think that would be for the future tc to overrule?20:11
mordredI think markwash makes a good point - I think requirement is a bit strong20:11
hub_capttx: sure, one Q, is heat a required openstack service at this point? id hate to say u _have_ to have heat but heat is optional20:11
mikalmarkwash: I agree. We should note it and let the future TC decide.20:11
markmcmarkwash, it's totally reasonable to say "here's the things we think you'll need to graduate"20:12
ttxmarkwash: not requirement. Direction.20:12
russellbmarkwash: i think it's fair to set the roadmap for what we expect during incubation, even if that could change20:12
markwashmarkmc: +1, direction, not req20:12
mordred++20:12
dolphmmarkwash: +120:12
dolphmmarkmc: *20:12
russellbbut honestly, if we expect to keep the old way around, this whole pushing to use heat thing seems like a waste20:12
russellbwhat's the point if that's not going to be *the* way to do it20:12
ttxhub_cap: no service is required. But for example, I don't think Nova should have its own image service, when we have Glance20:13
hub_capso it's just a matter of heat/optional, and I'd say we make heat default, and those who have already deployed w/o heat can use the legacy code20:13
hub_capttx: sure, but do u think that no user should use nova if they have heat?20:13
*** colinmcnamara has quit IRC20:13
russellbbut the legacy code goes away when?20:13
markmcagree with russellb on two impls being pointless20:13
ttxhub_cap: err... not sure I understand that question20:13
markmclong term20:13
ttxheat uses nova20:14
russellbeither the legacy code is on its way out asap, or the heat idea is scrapped20:14
russellbimo20:14
hub_capttx: heh what i mean is that reddwarf is a user of nova20:14
hub_caprussellb: why is that? heat is the long term vision20:14
shardyhub_cap: it's not just nova though, you're orchestrating clusters of instances, post-install customization, managing dependencies, potentially supporting scale-out etc, etc20:14
hub_capi sure as hell don't want to write clustering code :)20:14
*** lloydde has quit IRC20:14
shardyall of which we already do20:14
hub_capfor things like just what shardy said :D20:14
russellbhub_cap: ok, so you see the legacy code being removed then.20:14
hub_capdef russellb20:14
hub_capit won't be around forever, heck no :)20:14
russellbon what timeline?20:14
russellbguess, not commitment20:15
ttxhub_cap: hmmm... I see what you mean. i guess it's valid to directly address nova for a very basic scenario20:15
*** lloydde has joined #openstack-meeting20:15
hub_caprussellb: i was hoping to have yall help me w/ that20:15
hub_capim not sure how deprecating features, so to speak, works20:15
russellbok20:15
hub_capn+1? or _for_ graduation?20:15
*** AlanClark has quit IRC20:15
hub_capi think those are the valid answers but i don't know what's best overall20:15
russellbwell ideally at this point in the process we wouldn't have to worry so much about the cost of deprecation .... :(20:15
ttxhub_cap: my understanding is that you're covering more than just a basic scenario, and duplicating a lot of functionality from Heat20:16
hub_capttx: as of now its only the basic20:16
hub_capsingle instance20:16
markmcs/deprecating features/deprecating legacy implementation/ :)20:16
*** AlanClark has joined #openstack-meeting20:16
russellbmarkmc: mhm20:16
mordredmarkmc: ++20:16
russellbas a project in incubation, i honestly don't think you should have to worry about deprecation cycles20:17
hub_capah ic20:17
hub_capthat makes sense20:17
russellb... ideally, anyway.20:17
gabrielhurleyEven with proper deprecation I don't see it as a huge problem to mark as deprecated for the H release, potentially graduate to Integrated in I and actually remove the code during that cycle...20:17
hub_capim fine w/ that gabrielhurley that was my hope20:17
hub_capwe can make heat default for all installs20:17
gabrielhurleyanyone who's new to RD in the H release should know better than to start using something that's already deprecated ;-)20:17
ttxrussellb: +120:17
shardyhub_cap: if you currently only support single instance, I'd be interested to see a comparison of functionality wrt our RDS nested stack resource20:17
ttxyes, mark deprecated in H is fine20:18
shardymay be a good starting point for your integration w/heat20:18
hub_capshardy: does the nested stack do backups/restores/etc?20:18
*** colinmcnamara has joined #openstack-meeting20:18
hub_capwe are working on replication now too, so this is really the ideal time to start integrating heat20:18
ttxMore questions before we vote ?20:18
hub_capcuz we will need master/slave VERY soon :)20:18
shardyhub_cap: Not currently, no, that kind of matrix is what I'm interested in discovering20:18
* mordred moves that we vote20:18
hub_capshardy: we should def chat then after20:18
shardyhub_cap: may be stuff we need to add to support your use-case etc20:19
* hub_cap moves out of the way20:19
ttxraise your hand if you still have questions20:19
mordredwait - we don't have to make motions to vote here... so nice and civilized...20:19
hub_capshardy: def. I'd love to work together on it20:19
jgriffithmordred: I move we vote20:19
mordredjgriffith: I second!20:19
shardyhub_cap: sounds good :)20:19
*** lloydde has quit IRC20:19
ttxNo more questions, setting up vote20:20
ttx#startvote Approve RedDwarf for Incubation? yes, no, abstain20:20
openstackBegin voting on: Approve RedDwarf for Incubation? Valid vote options are yes, no, abstain.20:20
openstackVote using '#vote OPTION'. Only your last vote counts.20:20
hub_capsweet! didn't know that existed20:20
dolphm#vote yes20:20
shardy#vote yes20:20
mordredhub_cap: we're fancy around here20:20
mordred#vote yes20:20
jgriffith#vote yes20:20
mikal#vote yes20:20
ttx#vote yes20:20
markmcclain#vote yes20:20
vishy#vote yes20:20
annegentle#vote yes20:20
markwashI #vote yes even without the requirements of deprecating non heat approaches20:20
ttxmarkwash: that won't count.20:21
mordred:)20:21
gabrielhurley#vote yes20:21
mordredok. we're not that fancy20:21
markwashjust wanted to let people know20:21
markwash#vote yes20:21
jgriffithhaha20:21
jeblairbut we accept patches20:21
markmc#vote yes20:21
dolphmmordred: i'll file a bug20:21
*** jrodom has joined #openstack-meeting20:21
SlickNikheh20:21
ttx30 more seconds20:21
*** mkollaro has joined #openstack-meeting20:21
russellb#vote yes20:21
jgriffithmarkwash: care to vote officially?20:21
vishyhe did20:22
dolphmjgriffith: he did20:22
jgriffithdoh20:22
jgriffithnm20:22
ttx#endvote20:22
jgriffithsorry.. just saw it20:22
openstackVoted on "Approve RedDwarf for Incubation?" Results are20:22
openstackyes (13): markmc, ttx, vishy, shardy, annegentle, russellb, jgriffith, mikal, mordred, gabrielhurley, dolphm, markwash, markmcclain20:22
hub_capwow20:22
hub_capthx so much guys20:22
* russellb considered abstain because of the movement in the heat area ... but taking leap of faith that it'll work out.20:22
ttxAwesome, congrats guys20:22
imsplitbit:-)20:22
hub_caprussellb: i dont blame ya20:22
ttxrussellb: we can vote them out of the island if they misbehave20:22
hub_capit's on the top of my list of todos20:22
gabrielhurleyI also considered abstaining on questions of scope of openstack, but I want to use red dwarf myself, so....20:22
SlickNikthanks for the faith, russellb.20:22
russellbttx: ok, cool :)20:22
mordredttx: wait - there's an island?20:22
*** senhuang has joined #openstack-meeting20:22
hub_caphah ttx, who has the idol?20:22
* ttx votes mordred for today20:22
* markmc leaping of faith too :)20:22
ttx#topic Ironic / Baremetal split - Nova scope evolution20:22
*** openstack changes topic to "Ironic / Baremetal split - Nova scope evolution (Meeting topic: tc)"20:22
ttx#link https://wiki.openstack.org/wiki/BaremetalSplitRationale20:23
ttxThis is the first part of the discussion about splitting the baremetal-specific code from Nova into its own project20:23
markwashwe should steal the name Ironic for the I release20:23
ttxWe must first decide that this code doesn't belong in Nova anymore20:23
russellb+120:23
ttxWhich, I think, is a no-brainer since we didn't really decide to have it in in the first place, and the Nova crew seems to agree to remove it20:23
markwash+10 fwiw20:23
ttxQuestions ?20:23
markmcdefinitely think this has a lot of potential for use outside of nova20:23
dolphmIronic is an awesomely relevant project name #notaquestion20:24
gabrielhurleyMy biggest question is "how much code will be duplicated?" (I get that this makes the remaining code simpler, but still worry about another copy-and-paste of Nova's source)20:24
ttx(second part of the discussion will be about the incubation of the separate project)20:24
mikalmarkmc: I wanted "incarceration" for that release20:24
ttxgabrielhurley: maybe that belongs to the second part ?20:24
gabrielhurley::shrug::20:24
russellbhoping we'll have the nova code removed asap20:25
*** topol has quit IRC20:25
russellbso that there's no duplication20:25
mikalbaremetal is very different from other virt drivers20:25
mikalOwn DB etc20:25
mikalI think it belongs elsewhere20:25
devanandagabrielhurley: i've been digging into that over the weekend. short answer is: a lot, unless ironic starts fresh w.r.t. api, service, etc.20:25
gabrielhurleyrussellb: there must be some... it wouldn't be *in* nova if it didn't rely on *any* nova code currently20:25
mordred++20:25
*** murkk has quit IRC20:25
gabrielhurleydevananda: that's more what I expected to hear ;-)20:25
markmcrussellb, think gabrielhurley means the service infrastructure and such20:25
mordredthe virt driver itself that will talk to ironic will still be in the nova tree though, right?20:25
markmcrussellb, as opposed to the legacy virt driver20:25
russellbah yes, like cinder...20:25
gabrielhurleyyeah20:25
markmcyes, like cinder :)20:25
devanandabesides api and service, it relies on nova/virt/disk for file injection, which i want to abandon anyway20:26
markwashgabrielhurley: is this "bad" duplication or just "use cases for oslo" duplication?20:26
gabrielhurleythe copy-paste snowballing of problems/flaws/bugs makes me a sad panda.20:26
markmcthink it's going to be much more different from nova than cinder was20:26
mordredI thnk there's going to be some of both20:26
gabrielhurleymarkmc: can you elaborate?20:26
markmcgabrielhurley, no scheduler e.g.20:26
devanandait's _very_ different code from nova.20:26
*** colinmcnamara has quit IRC20:26
*** SergeyLukjanov has quit IRC20:26
ttxReady to vote on the Nova scope reduction ?20:27
gabrielhurleysure20:27
markmcquick q20:27
markmcwill the legacy virt driver be feature frozen20:27
markmcduring H?20:27
ttxmarkmc: I suppose20:27
devanandamarkmc: fwiw, I would like it to be, except for bug fixes20:28
devanandathere are several open BPs20:28
markmcdevananda, cool20:28
ttxok, ready to vote on the first part ?20:28
devanandaone in particular will have a big impact in terms of simplifying deployment of the baremetal driver in nova20:28
*** SlickNik has left #openstack-meeting20:28
mikalThere are a few security caveats too20:28
devanandamikal: i don't think those are any different in vs. out of nova?20:28
*** eglynn has joined #openstack-meeting20:29
jgriffithso stupid point of clarification, that means we're voting to skip incubation correct?20:29
ttxjgriffith: no20:29
mikaldevananda: sure, but I don't want a nova freeze stopping us from fixing / documenting them20:29
russellbno20:29
jgriffithttx: then how can we say "no features on existing code for"20:29
ttxjgriffith: just voting on removing baremetal code from Nova's scope for the moment. More at next topic20:29
jgriffithK20:29
ttxwe don't say that, YET20:29
jgriffithk... I'll be patient20:29
ttx#startvote Agree on long-term removal of baremetal code from Nova's scope? yes, no, abstain20:30
openstackBegin voting on: Agree on long-term removal of baremetal code from Nova's scope? Valid vote options are yes, no, abstain.20:30
openstackVote using '#vote OPTION'. Only your last vote counts.20:30
markwash#vote yes20:30
russellb#vote yes20:30
mordred#vote yes20:30
mikal#vote yes20:30
jgriffith#vote yes20:30
gabrielhurley#vote yes20:30
ttx#vote yes20:30
shardy#vote yes20:30
markmcclain#vote yes20:30
ttx30 more seconds20:30
markmc#vote yes20:30
*** vipul|away is now known as vipul20:30
annegentle#vote yes20:30
markmc(yes when ironic is ready I guess)20:30
markmcwe're not reducing the scope really until the legacy driver is removed20:30
* markmc shrugs20:30
ttx#endvote20:30
openstackVoted on "Agree on long-term removal of baremetal code from Nova's scope?" Results are20:30
openstackyes (11): markmc, ttx, shardy, annegentle, russellb, jgriffith, mikal, mordred, gabrielhurley, markwash, markmcclain20:30
ttx#topic Ironic / Baremetal split - Incubation request20:31
*** openstack changes topic to "Ironic / Baremetal split - Incubation request (Meeting topic: tc)"20:31
*** zul has joined #openstack-meeting20:31
russellbmarkmc: agreeing on direction to reduce scope, i guess20:31
ttxThis is the second part of the project split decision... create a project from Nova baremetal code and accept that new "Ironic" project into Incubation20:31
ttxThe idea being that Ironic could make a first drop by Havana release and we'd mark baremetal code deprecated in Nova in Havana...20:31
markmcrussellb, yeah - we could change our minds if Ironic fails is my point, I think20:31
markmcrussellb, (unlikely, but ...)20:31
ttxThen if everything goes well have the code removed and Ironic integrated during the I cycle20:31
russellbfair enough20:31
ttxFire questions20:31
mordredttx: makes sense to me20:31
ttxMy main question would be... is this code OpenStack-specific ? Should it become an OpenStack integrated project rather than, say, a generic Python library ?20:31
markmcis glance OpenStack specific? keystone?20:32
devanandattx: swift is an openstack project, but aiui can be deployed separately. how is this different?20:32
markmcIMHO this is as OpenStack specific as anything else20:32
mordredI believe the reason I've been arguing that it's openstack and not just generic is that I think there are potentially several services that might want to integrate with its apis20:32
russellbi look at it like as something in the openstack brand, that may or may not be used in combination with openstack services20:32
jgriffithmordred: +120:32
ttxNo, I mean... if this is generally useful to more than just openstack...20:32
markwashplus it adds IMHO a key piece to OpenStack20:32
devanandaso far we def want interaction between ironic and nova, cinder, and quantum20:32
mordredfor instance - a pan-project scheduler might want to talk to the baremetal service for information about rack locality20:33
*** mattray has quit IRC20:33
jgriffithttx: I think your point is fine as well20:33
markwashdevananda: not glance )-;20:33
jgriffithttx: in other words it can be useful stand-alone20:33
jgriffithnothing wrong with that20:33
*** boris-42 has quit IRC20:33
devanandamarkwash: actually, yes, glance too!20:33
ttxmordred: ok, makes sense20:33
mordredglance is generally useful outside of openstack :) canonical run a public one for people to get their images from20:33
*** senhuang has quit IRC20:33
mordredoh. I read markwash's comment wrong :)20:34
* mordred shuts up20:34
vishymordred: it includes a rest api as well20:34
markmcvishy, Ironic will have a REST API20:34
vishyright which makes it more of a project than a library imo20:34
markmcah, ok20:34
ttxvishy: agreed.20:34
markmcyes20:34
ttxOther questions ?20:35
*** zul has quit IRC20:35
devanandaso i have a question for folks -- in splitting ironic, should i aim to preserve as much code from nova as possible, or start fresh so the result is less bloated? and does that affect incubation in any way?20:35
markwashrussellb: +1 to OS brand #notaquestion20:35
mordredI believe you should do things cleanly if it's possible and doesn't kill you20:36
ttxdevananda: since we do a deprecation cycle, you have some room for cleanup20:36
mordredbut I do not believe that's in scope for us here really20:36
gabrielhurleymordred++20:36
markwashI agree with mordred, but that would really be your call20:36
devanandaack. good to know that doesn't affect incubation20:36
markmcyou can ask us for opinions as the 18 wise people of openstack20:37
russellbjust be aware of the time impact20:37
gabrielhurleythe less snowballing the better. this is a chance for cleaning house. but as everyone said, not a requirement.20:37
*** PeTeT has quit IRC20:37
markmcbut you probably know best :)20:37
russellblike, look at how long it has taken to get quantum up to where we can make it the default, vs cinder20:37
ttxdevananda: if you want to hit the incubation targets to get integrated in I you'll have to produce working code very fast... so the "doesn't kill you" remark from mordred applies20:37
devanandaack20:38
russellbyes, that :)20:38
ttxworst case scenario, you do one more cycle as incubated, not so much of a big deal20:38
devanandaright.20:38
vishythe cinder approach is definitely faster20:38
markmcttx, well, it would be another cycle of the baremetal driver being feature frozen20:38
vishybut it delays adding new features for a long time20:38
devanandacinder approach = ?20:39
vishynova -> cinder transition was pretty painless (much less painful than nova -> quantum)20:39
russellbreusing code as much as possible, as opposed to starting over20:39
markwashbut quantum has lots of context that could influence that20:39
vishyrussellb: yes, also replicating the api directly20:39
jgriffithdevananda: I'm happy to share my thoughts offline if you're interested20:40
ttxdevananda: so you can refactor a bit, but would be better to reuse as much as you can so that you iterate faster20:40
vishyand just adding a python-*client wrapper to talk to the same apis exposed via rest20:40
vishywith no change at all to the backend20:40
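To make the "cinder approach" vishy describes concrete: the backend stays untouched and a thin python-*client simply mirrors the REST API the service already exposes. A minimal sketch, assuming a hypothetical BaremetalClient class and a /nodes resource (neither is the real Ironic client API):

    import requests


    class BaremetalClient(object):
        """Thin wrapper over the same REST API the in-tree service
        already exposes, with no change at all to the backend."""

        def __init__(self, endpoint, token):
            self.endpoint = endpoint.rstrip('/')
            self.headers = {'X-Auth-Token': token}

        def list_nodes(self):
            # Mirror the existing resource layout over plain HTTP.
            resp = requests.get(self.endpoint + '/nodes',
                                headers=self.headers)
            resp.raise_for_status()
            return resp.json()['nodes']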
jgriffithbut yes, copy out of nova and modify was a life saver for me20:40
devanandajgriffith: thanks, will def take you up on that after this meeting20:40
markmcthe Nova API probably wouldn't make much sense as a starting point for Ironic?20:40
mordreddevananda and I worked on a hybrid split - which involved git filter-branch on the nova tree to pull out the existing baremetal code while leaving the other bits behind20:40
vishybut that meant 6 months of no changes to the first six months of cinder essentially20:40
ttxMore questions ?20:40
markmcclainyeah.. we made a bunch of changes which is why we moved at a different pace20:40
devanandattx: no more q from me20:41
*** nati_ueno has joined #openstack-meeting20:41
russellbexcuse me, not quantum, the project formerly known as quantum20:41
ttxEveryone ready to vote ?20:41
markmcrussellb, still known as quantum20:41
mikalI am20:41
* jgriffith moves we vote20:41
markmcrussellb, soon to be formerly known as quantum :)20:41
gabrielhurleyOpenStack Networking20:41
markmcgabrielhurley, nope20:42
mordredmutnauq!20:42
markmcquebec!20:42
gabrielhurleyI still vote we rename it "quality"20:42
markmcthat works20:42
markwashmarkmc: lol!20:42
gabrielhurleystarts with Q, same number of letters... Quality!20:42
ttx#startvote Approve Ironic for Incubation? yes, no, abstain20:42
openstackBegin voting on: Approve Ironic for Incubation? Valid vote options are yes, no, abstain.20:42
openstackVote using '#vote OPTION'. Only your last vote counts.20:42
markmc#vote yes20:42
russellb#vote yes20:42
mordred#vote yes20:42
* markmcclain is still accepting name nominations 20:42
mikal#vote yes20:42
dolphm#vote yes20:42
markmcclain#vote yes20:42
gabrielhurley#vote yes20:42
jgriffith#vote yes20:42
shardy#vote yest20:42
openstackshardy: yest is not a valid option. Valid options are yes, no, abstain.20:42
markwash#vote yes20:42
ttx#vote yes20:42
gabrielhurleylol20:42
shardy#vote yes20:42
mordredhaahahaha20:42
shardyoops20:42
vishy#vote yes20:42
ttx30 more seconds20:42
ttx#endvote20:43
*** cp16net is now known as cp16net|away20:43
openstackVoted on "Approve Ironic for Incubation?" Results are20:43
openstackyes (12): markmc, ttx, vishy, shardy, russellb, jgriffith, mikal, mordred, gabrielhurley, dolphm, markwash, markmcclain20:43
ttxdevananda: congrats!20:43
gabrielhurleywe're a very agreeable bunch today20:43
ttxyay process!20:43
devanandathanks! :)20:43
ttxgabrielhurley: that's because we are missing the devil's advocate member20:43
gabrielhurleylol20:43
jgriffithhaha20:43
russellbdevananda: make it happen! go go go!20:43
markmczero no votes or abstains so far?20:43
*** olaph has left #openstack-meeting20:44
* markmc is sure devananda feels suitably empowered now :)20:44
ttxmarkmc: that's what I call managed lazy consensus20:44
ttx#topic Discussion: API version discovery20:44
*** openstack changes topic to "Discussion: API version discovery (Meeting topic: tc)"20:44
gabrielhurleyyay!20:44
ttx#link http://lists.openstack.org/pipermail/openstack-tc/2013-May/000223.html20:44
ttxThis is preliminary discussion on API version discovery20:44
markmc#vote yes20:44
ttxPersonally I'm not sure this needs formal TC blessing, unless things get ugly at individual project-level20:44
russellblulz.20:44
gabrielhurleyI cleaned things up into a nice reST document for y'all20:44
gabrielhurleyhttps://gist.github.com/gabrielhurley/549943420:44
markmcoh, no vote yet?20:44
jgriffithhaha20:44
annegentleI was agreeable too but missed the vote :)20:44
ttxBut I guess we can still discuss it :)20:44
ttxgabrielhurley: care to introduce the topic ?20:44
annegentlesowwy20:44
gabrielhurleyyep yep20:44
gabrielhurleyso20:44
*** nati_ueno has quit IRC20:44
gabrielhurleyshort version20:45
ttxWe were less agreeable last week, poor jgriffith20:45
gabrielhurleyWe now have Keystone v2 and v3 APIs, Glance v1 and v2, and Nova v2 with v3 soon to come20:45
gabrielhurleypeople want to use these20:45
*** lloydde has joined #openstack-meeting20:45
gabrielhurleypeople want to use these across various clouds20:45
* jgriffith 's head still hurts20:45
gabrielhurleyand use multiple versions within the same cloud20:45
*** zul has joined #openstack-meeting20:45
*** PeTeT has joined #openstack-meeting20:45
gabrielhurleythat means we need to start dealing with the issues of how to let consumers of these APIs (clients, Horizon, etc.) understand what versions are available, what capabilities (extensions, etc.) are deployed for each version, and more20:46
gabrielhurleyshort version of the proposed fix (see gist, ML thread, and etherpad for long version)20:46
gabrielhurley:20:46
mordredgabrielhurley: I am in favor of things that sensibly let me consume multiple clouds20:47
annegentleis an extension always a capability?20:47
ttxgabrielhurley: Any reason to believe there would be resistance to this ?20:47
gabrielhurleyMove the Keystone service catalog towards solely providing root service endpoints and let the clients/consumers do the work of interpreting a (standardized) "discovery" response from GET /20:47
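For illustration, the standardized discovery document the proposal points at might look like the following; the field names are assumptions modeled on the Keystone-style version listing, not the ratified format from the gist:

    import json

    # Hypothetical response to an unauthenticated GET on a service root.
    discovery = {
        "versions": [
            {"id": "v2.0", "status": "SUPPORTED",
             "links": [{"rel": "self",
                        "href": "http://cloud.example.com:5000/v2.0/"}]},
            {"id": "v3.0", "status": "CURRENT",
             "links": [{"rel": "self",
                        "href": "http://cloud.example.com:5000/v3.0/"}]},
        ]
    }
    print(json.dumps(discovery, indent=2))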
gabrielhurleyttx: nope, everyone's been very positive so far20:47
mordredgabrielhurley: as long as it doesn't mean a) tons of branching logic in my consumer code because b) we're tending towards Least Common Denominator in some way20:48
dolphmannegentle: extensions can provide multiple capabilities, i believe20:48
gabrielhurleyand consensus has formed around most of the ideas in the latest revision of the proposal20:48
mordredbut I don't think that's what you're proposing20:48
*** gyee has joined #openstack-meeting20:48
gabrielhurleybut it is a huge cross-project effort to implement, hence TC involvement20:48
mordred++20:48
annegentlegabrielhurley: it seems like a huge doc effort as well?20:48
gabrielhurleyannegentle: see https://gist.github.com/gabrielhurley/5499434#extensions-vs-capabilities for "extension" vs. "capability"20:48
gabrielhurleyannegentle: when I said "cross-project" I meant it ;-)20:49
ttxgabrielhurley: we can bless it, but I'm not sure we can mandate it20:49
*** jcoufal has joined #openstack-meeting20:49
gabrielhurleyttx: it's a fine line. if one project opts out the whole thing breaks20:49
vishygabrielhurley: so the idea is that we continue to provide /extensions for the existing apis and add /capabilities for new apis?20:49
annegentlegabrielhurley: so most projects keep /extensions but add /capabilities? (I did read the doc and still had Qs)20:49
dolphmttx: can we mandate that projects return a proper 300 response with an expected format?20:49
gabrielhurleyvishy: correct20:49
vishygabrielhurley: it seems like we need a standard format for the capabilities resource as well20:49
* annegentle thinks like a vish20:49
ttxgabrielhurley: would be good to engage with all PTLs and check they are all OK with that20:49
gabrielhurleyvishy: correct, we do. I recommend versioning that response as well, in case we need to tweak it over time.20:50
markwashI'm a little "meh" about capabilities being described exclusively as endpoint-level details20:50
gabrielhurleyttx: I have gotten feedback from more than half of them20:50
gabrielhurleybut I can try and pin down the rest20:50
ttxgabrielhurley: but yes, we can weigh in and say it's a very good idea20:50
*** saurabhs has quit IRC20:50
markwashseems like capabilities could be finer grained20:50
annegentleif we don't version extensions now, how do we version capabilities?20:50
gabrielhurleymarkwash: care to elaborate?20:50
gabrielhurleyannegentle: simply saying to version the response format20:50
markwashgabrielhurley: I'll probably just muddy the waters20:50
markwashgabrielhurley: and the granularity probably isn't a TC level issue20:51
ttxgabrielhurley: basically, if one project doesn't like it, I'm not sure we have a lot of ways to enforce it, apart from threatening to remove them from the integrated release.20:51
gabrielhurleyinterpretation of that data is a larger problem20:51
gabrielhurleyttx: hopefully it won't come to that20:51
ttxgabrielhurley: so consensus would be a much better way to get to it20:51
gabrielhurleyand I don't think it will20:51
dolphmannegentle: capabilities are versioned along with the API version, i think? GET /<version>/capabilities20:51
*** saurabhs has joined #openstack-meeting20:51
gabrielhurleydolphm: most likely yes20:51
ttxgabrielhurley: and I don't want our "blessing" to look like a mandate and cause some allergic reaction20:51
ttxwhere there shouldn't be any20:52
notmynamegabrielhurley: dolphm: I'd like something other than that (since that breaks existing swift clusters)20:52
mordredttx: ++20:52
vishygabrielhurley: it seems like we need multiple capabilities20:52
vishythe global one saying which endpoints are hittable20:52
vishythen some way of exposing schema for the endpoints20:52
vishyfor example if i have an extension that adds a parameter to a response20:53
vishysticking it in the global capabilities list seems odd20:53
gabrielhurleynotmyname: I switched it to /<version>/capabilities at your suggestion since you were already using "extensions" in a valid way... or did I misunderstand your comment?20:53
notmynamevishy: sounds like the rfc2616 OPTIONS verb ;-)20:53
vishynotmyname: yeah something like that20:53
notmynamegabrielhurley: not important for the tc meeting. we can discuss later, if you want20:54
gabrielhurleyvishy: I'm not convinced that a /capabilities is actually useful... it'd have to describe all the capabilities for all the versions20:54
gabrielhurleyI was proposing GET / gets you endpoint discovery for supported versions20:54
vishysorry i didn't mean /capabilities20:54
gabrielhurleyand /<version>/capabilities describes what's possible for that version20:54
vishyi mean that /version/capabilities could respond with all of the endpoints for that version20:54
annegentledolphm: gabrielhurley: but an extension's definition can change release to release (underlying release, not API release)? Is that why we'd use capabilities?20:54
vishybut sticking extra params there seems a bit messy20:54
dolphmgabrielhurley: does /capabilities need to be in scope here?20:54
ttx5 minutes left, and there are two more things I wanted to raise -- can we move this discussion to the ML and follow the result at the next meeting ?20:55
gabrielhurleyvishy: oh, I see, you're talking specifically about multi-endpoint20:55
*** lloydde has quit IRC20:55
ttxI think Gabriel needs to track down the remaining PTLs20:55
gabrielhurleyvishy: I don't think that's a good thing to try and solve now since we can't agree on that in Keystone anyway20:55
vishygabrielhurley: no sorry endpoint is a bad word. i mean all of the paths that are reachable20:55
gabrielhurleydolphm: only because the standardization is helpful and related20:55
dolphmgabrielhurley: agree, but it seems like a second step20:55
gabrielhurleyvishy: gotcha. we can discuss more later20:55
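Putting the two halves of the proposal together, a consumer might do something like this sketch; the URL shapes follow the gist (GET / for discovery, GET /<version>/capabilities per version), but the response fields and helper name are assumptions, not a settled schema:

    import requests


    def select_version(root_url, wanted="v3.0"):
        # Step 1: root discovery lists the versions the deployment serves.
        versions = requests.get(root_url).json()["versions"]
        match = next(v for v in versions if v["id"] == wanted)
        base = match["links"][0]["href"].rstrip("/")
        # Step 2: ask that specific version what it can actually do.
        capabilities = requests.get(base + "/capabilities").json()
        return base, capabilities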
gabrielhurleyttx: will do20:55
ttxI think you got the ball rolling here20:56
gabrielhurleyyep20:56
ttxwe'll definitely track this in future meetings20:56
ttx#topic Discussion: I naming candidates20:56
*** openstack changes topic to "Discussion: I naming candidates (Meeting topic: tc)"20:56
ttx#link https://wiki.openstack.org/wiki/ReleaseNaming#.22I.22_release_cycle_naming20:56
ttxThe only suggestion which strictly fits in the current guidelines is "Ili".20:56
annegentleI like Ili20:56
ttxSo I propose that we slightly extend the rules to accept street names in Hong-Kong, which should add a few options20:56
gabrielhurleyshort and to the point20:56
ttxor we can just accept Ili.20:56
gabrielhurleywhat are the other options?20:57
dolphmwhat about Ichang violates guidelines?20:57
hub_capi was hoping for innermongolia :/20:57
vishyno one suggested Imperial :(20:57
gabrielhurleyhub_cap: lol20:57
dolphm( ili is my first choice, after icehouse ;)20:57
ttxThe rules are rather strict, and Ichang is a bit borderline20:57
russellbInfluenza?20:57
gabrielhurley-120:57
russellbsorry.20:57
annegentleoh like Grizzly followed the rules20:57
mikalrussellb: !20:57
ttxIs that OK for everyone ? (extending to street names to have 3-4 candidates total)20:57
vishyis it ili or illi ?20:57
gabrielhurleyBear Flag Revolt!20:57
jgriffithannegentle: haha20:57
dolphm#vote yes20:57
ttxI'll take that as a YES20:58
gabrielhurleylol20:58
mikalYeah, works for me20:58
ttx#topic Open discussion20:58
*** openstack changes topic to "Open discussion (Meeting topic: tc)"20:58
ttxmordred: you had a communication to make ?20:58
mordredttx: yup. thanks20:58
mordredfwiw... openstack-infra is going to change how we're running tests on stuff20:58
ttxmordred: told ya I'd save one minute for you20:58
mordredwe believe we're still in compliance with the python support decision20:58
mordredbut based on canonical dropping support for non-LTS releases to 9 months, we're now planning on running 2.7 tests on LTS+cloud archive20:59
mordredand not attempting to run test slaves on latest ubuntu20:59
mordredbasically, nobody should notice20:59
mordredbut we thought we'd mention20:59
*** Vek has joined #openstack-meeting20:59
*** jlk has left #openstack-meeting20:59
mordredalso, for background, we have actually NEVER moved to the latest ubuntu as soon as it comes out in the CI system21:00
russellbseems reasonable.21:00
reedSooner rather than later we should start talking about the Design Summit in Hong Kong: we need to make sure we have a successful design summit there, which means making sure all the relevant people are able to travel there and that the ones who can't can still join the conversations21:00
mordredreed: +100021:00
mordredalso - I names21:00
mordred:)21:00
russellbi'm very concerned about the number of people that won't be able to make it from the US (or elsewhere) because of budget21:01
ttxannndd21:01
ttxwe are out of time21:01
ttx#endmeeting21:01
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"21:01
openstackMeeting ended Tue May  7 21:01:18 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-05-07-20.01.html21:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-05-07-20.01.txt21:01
openstackLog:            http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-05-07-20.01.log.html21:01
russellbnext!21:01
reedyeah!21:01
ttxmarkmc, dolphm, notmyname, markwash, jgriffith, russellb, shardy, gabrielhurley, markmcclain: still around ?21:01
russellbyes.21:01
ttxAnyone from Ceilometer to replace jd__ ?21:01
markmcyes21:01
markmcclaino/21:01
shardyyup21:01
markwashyes21:01
dolphmo/21:01
notmynameo/21:01
jgriffitho/21:02
gabrielhurley\o21:02
ttx#startmeeting project21:02
openstackMeeting started Tue May  7 21:02:16 2013 UTC.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.21:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:02
*** openstack changes topic to " (Meeting topic: project)"21:02
*** jbresnah has joined #openstack-meeting21:02
openstackThe meeting name has been set to 'project'21:02
ttx#link http://wiki.openstack.org/Meetings/ProjectMeeting21:02
*** grapex has left #openstack-meeting21:02
ttxThree weeks away from havana-1, let's see what those Havana roadmaps look like, and look into more details in havana-1 plans21:02
ttx#topic General stuff21:02
eglynnttx: I'm here for jd__21:02
*** openstack changes topic to "General stuff (Meeting topic: project)"21:02
ttxeglynn: awesome21:02
ttxGrizzly doc was released last week21:02
ttxapevec: around ?21:02
*** cp16net|away is now known as cp16net21:03
ttxmarkmc: no apevec, any idea how 2013.1.1 is looking so far ?21:03
*** robertmyers has left #openstack-meeting21:04
markmcttx, not since http://lists.openstack.org/pipermail/openstack-dev/2013-April/007273.html21:04
markmcttx, but apevec is running the show :)21:04
markmc#link 2012.2.4 call for testing: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007273.html21:04
ttxso there are stable/grizzly branches frozen and candidate tarballs out... ready for publication on Thursday21:04
markmcdoh21:05
* markmc totally mixed up21:05
markmcwasn't there a call for testing last week?21:05
*** jbartels_ has joined #openstack-meeting21:05
russellball my grizzly patches getting -2'd now!  </321:05
ttxThere /might/ be a security issue to include in it, need to talk to apevec (but you heard nothing)21:05
ttxannegentle, jeblair/mordred, sdague/davidkranz: News from Docs/Infra/QA teams ?21:05
markmc#link 2013.1.1 call for testing - http://lists.openstack.org/pipermail/openstack-dev/2013-May/008585.html21:05
* annegentle thinks21:05
mordredttx: I just told the TC - but we'll be using LTS+Cloud Archive for testing things instead of "latest ubuntu"21:06
annegentleDoc team meeting next Tuesday (morning for me, who knows for you)21:06
mordredwe do not believe it will make a noticeable difference, since most of our depends come from PyPI21:06
annegentleWe have a stable/grizzly branch now for openstack-manuals21:06
ttxAnything else to mention before we go per-project ?21:06
sdaguettx: nothing major here21:07
ttx#topic Oslo status21:07
*** openstack changes topic to "Oslo status (Meeting topic: project)"21:07
ttxmarkmc: hi again21:07
*** mattray has joined #openstack-meeting21:07
ttx#link https://blueprints.launchpad.net/oslo/havana21:07
markmchowdy stranger21:07
ttxGeneral Havana plan looks good.21:07
markmcI tidied up more bps today21:07
ttxI don't see any new library publication in that plan, is that intentional ?21:08
markmcgood point21:08
markmcnothing concrete at the moment21:08
ttxNOTHING escapes me21:08
markmcthe messaging work is probably not going to be ready for release in this cycle21:08
markmcI'd love to do e.g. oslo.rootwrap in this release21:08
markmcbut you're hesitant21:08
ttxindeed, want to decide on the python snippet exec stuff first21:09
markmcalso pbr and hacking may fall under oslo during this release21:09
markmcif monty and I figure that out21:09
markmcor not21:09
ttxbut converging quantum-rootwrap takes all my rootwrap time21:09
ttxLooking into havana-1 now:21:09
ttx#link https://launchpad.net/oslo/+milestone/havana-121:09
ttxGood progress, nothing to add21:09
markmcok21:09
markmcthe delayed translation stuff needs looking at21:09
markmcthe current patch is stalled21:09
* markmc hoping to dig tomorrow21:09
markmcthe message security stuff is optimistic for h-121:10
markmcbut simo is making great progress, has reviews up21:10
ttxmarkmc: OK, you can mark blocked stuff blocked if you want to raise alarms21:10
markmcwe might get the oslo stuff in by then21:10
markmccommon client library I haven't looked at yet21:10
ttxmarkmc: Anything more to add ?21:10
markmcnope21:10
ttxQuestions about Oslo ?21:10
ttx#topic Keystone status21:11
*** openstack changes topic to "Keystone status (Meeting topic: project)"21:11
dolphmo/21:11
ttxdolphm: o/21:11
ttx#link https://blueprints.launchpad.net/keystone/havana21:11
simomarkmc: I have new patches to push btw, split into one more generic util and 2 rpc-specific ones21:11
ttxdolphm: Looks good too.21:11
markmcsimo, great21:11
simoand working on a Key server which hopefully dolphm will adopt in keystone :)21:11
ttxdolphm: You mentioned x509 external auth at the summit... is that for havana ?21:11
dolphmttx: yes, that's scoped in pluggable-remote-user21:12
ttxah, ok21:12
dolphmwhich i currently have targeted to havana-m3, but it could move up, and it shouldn't be too much work21:12
*** mattray has quit IRC21:12
ttxHavana-1 plan is at https://launchpad.net/keystone/+milestone/havana-121:12
ttxLooks good to me21:13
*** litong has quit IRC21:13
ttxdolphm: if your roadmap is mostly set, you could fire a "Havana plans" email to the list, with the big themes. I know people appreciated those when we did them in Grizzly21:13
dolphmttx: absolutely21:13
ttxAnything more about Keystone ?21:13
dolphmregarding bug 1168726 having a backport to 2013.1.1 ...21:14
uvirtbotLaunchpad bug 1168726 in keystone/grizzly "default_domain_id breaks the ability to map keystone  to ldap" [Critical,In progress] https://launchpad.net/bugs/116872621:14
dolphmwe still haven't come to a consensus around a solution that is realistically backportable, so while the conversation continues, we've pinged the openstack user's list for feedback on our 2 proposed solutions21:14
dolphmas-is, stable/grizzly is broken for ldap/AD deployments that are not allowed to modify/add to their schema21:15
ttxdolphm: you'll have to see with apevec if the fix can make it to that snapshot. Otherwise it will be in the next one, not a big deal21:15
dolphmttx: thanks, that's it21:15
ttxdolphm: he /might/ be willing to delay one day or two if that's what it takes...21:16
ttx#topic Ceilometer status21:16
*** openstack changes topic to "Ceilometer status (Meeting topic: project)"21:16
ttxeglynn: o/21:16
ttx#link https://blueprints.launchpad.net/ceilometer/havana21:16
eglynnhavana BPs are drafted and almost all assigned: https://blueprints.launchpad.net/ceilometer/havana21:16
eglynnhighest priority BP not yet taken is hbase-related https://blueprints.launchpad.net/ceilometer/+spec/hbase-metadata-query21:16
ttx42 blueprints, impressive21:16
*** Mandell has quit IRC21:16
eglynn(not a tier-1 DB for us yet)21:17
eglynnbut I expect shengjie min will take it21:17
ttxeglynn: Hmm... "Essential" means that we can't release Havana without that feature being in, which leads to me being a pain tracking progress for that specific blueprint21:17
eglynn(I'll speak to him about tmrw, he's in my TZ)21:17
*** morganfainberg has left #openstack-meeting21:17
*** morganfainberg has joined #openstack-meeting21:17
eglynnyes, it seems over-prioritized to me also21:17
ttxeglynn: Could you explain why sqlalchemy-metadata-query and hbase-metadata-query are "Essential" to the success of the Havana release ?21:17
eglynnI would argue that rating maybe reflects the view of individual contributors20:17
ttxeglynn: ok, so maybe downgrade to High21:17
eglynn(with an interest in those features)21:18
eglynnttx: yep, agreed, will do21:18
eglynnotherwise we're in good shape for h121:18
ttxMy next remark is that it would be great to have a few more milestone targets set so that we know when features are expected to hit21:18
ttxIn particular alarm-api sounds pretty advanced at this point21:18
eglynnttx: yes, we'll look at assigning to individual milestones this week21:18
ttxOtherwise looks good... Looking into havana-1 plan now21:19
ttx#link https://launchpad.net/ceilometer/+milestone/havana-121:19
ttxLooks fine and in good progress to me21:19
ttxeglynn: anything you wanted to mention ?21:19
eglynnyep, warm fuzzies on all the h1 work ... nothing further from me21:19
ttxQuestions on Ceilometer ?21:19
ttx#topic Swift status21:20
*** openstack changes topic to "Swift status (Meeting topic: project)"21:20
notmynamehi21:20
ttxnotmyname: o/21:20
ttx#link https://launchpad.net/swift/+milestone/1.8.121:20
ttxIs the status on this page accurate ?21:20
ttxmulti-region and proxy-affinity-writes look started to me :)21:20
notmynamehmm.. conf.d configs have been merged, but that isn't shown there21:20
*** anteaya has quit IRC21:20
notmynameya, multi-region was mostly done in the grizzly release21:21
notmynameit's a meta-task for the few backend ones21:21
ttxhmm, got disconnected, reasking21:21
notmynamemarked as "good progress"21:21
ttxStill no ETA for 1.8.1 release ?21:21
notmynameno, I don't have a date for 1.8.1 at this time21:21
ttxOK, anything you wanted to raise ?21:22
notmyname2 things21:22
notmyname1) reviews are getting longish again21:22
notmynamewe need people to review (including myself /confession)21:23
notmyname2) https://wiki.openstack.org/wiki/Swift/API is WIP for swift api definitions21:23
*** anteaya has joined #openstack-meeting21:23
*** anteaya has quit IRC21:23
ttxnotmyname: how has the team meeting been going so far ? usually a good thing to start building team spirit and reviewers motivation21:23
notmynameteam meetings are fine. we probably had our best one ever this week21:24
ttxok... Questions on Swift ?21:24
*** anteaya has joined #openstack-meeting21:25
ttx#help Swift reviewers to review more !21:25
ttx#topic Glance status21:25
*** openstack changes topic to "Glance status (Meeting topic: project)"21:25
ttxmarkwash: o/21:25
ttx#link https://blueprints.launchpad.net/glance/havana21:25
markwashahoyhoy21:25
* ttx cleans up superseded bp21:26
markwashah, thanks!21:26
ttxA bit more milestone targeting cannot hurt, otherwise looks good21:26
ttxDuring the summit you explored rolling DB migrations... I don't see that in any blueprint yet ?21:26
* markwash struggles21:26
markwashthe rolling db migrations task really needs an example migration21:27
markwashbut none of our immediate plans are calling for migrations21:27
markwashfortunately, I've been in communication with mike perez about making some progress towards this goal in cinder as well during havana21:27
ttxmarkwash: fair enough :)21:27
ttxLooking into havana-1 plans @ https://launchpad.net/glance/+milestone/havana-121:27
ttxLGTM21:28
markwashsince we have only 3 weeks left, some of that will probably slide21:28
markwashthere has been some design discussion investment over the past weeks21:28
markwashI expect it will pay off in the later milestones21:28
ttxmarkwash: you can also push a "havana plans" email to the -dev list21:28
ttxAnything more on Glance ?21:28
markwashyes, should be in a position to do that, but maybe not until next week21:29
ttxno hurry, so far only Swift did one21:29
markwashnot frome me21:29
*** rnirmal has quit IRC21:29
ttx#topic Quantum status21:29
*** openstack changes topic to "Quantum status (Meeting topic: project)"21:29
ttxmarkmcclain: hi!21:29
markmcclainhi21:29
ttx#link https://blueprints.launchpad.net/quantum/havana21:29
ttxwow, lots of recent activity there21:30
markmcclainyep :)21:30
*** martine_ has quit IRC21:30
ttxResult looks good21:30
ttxmarkmcclain: seeing the end of the tunnel ?21:30
markmcclainfor naming? yes.. will submit a list of names for vetting21:31
ttxoh no, for the havana roadmapping21:31
hub_capheh naming is harder than roadmapping21:32
markmcclainfor roadmap yes.. this is the bulk of what the team has agreed on21:32
ttxLooking at the havana-1 plan @ https://launchpad.net/quantum/+milestone/havana-121:32
*** dtroyer has left #openstack-meeting21:32
markmcclainI'm a little concerned about the amount of proposed work and the time remaining… told the team yesterday that we should retarget items to later milestones21:33
ttxlooks like good progress overall, ambitious goals (24 blueprints)21:33
ttxyes, we can look into that in the next meetings21:34
ttxAnything else on Quantum ?21:34
markmcclainnot from me21:34
*** dtroyer has joined #openstack-meeting21:34
ttxmarkmcclain: feel free to send that "roadmap" email to -dev too :)21:34
ttx#topic Cinder status21:34
*** openstack changes topic to "Cinder status (Meeting topic: project)"21:34
ttxjgriffith: o/21:34
ttx#link https://blueprints.launchpad.net/cinder/havana21:34
markmcclainttx: will do21:35
jgriffitho/21:35
ttxjgriffith: Could you set priority on those 3 Undefined blueprints ?21:35
ttxAlso there are still 4 proposed @ https://blueprints.launchpad.net/cinder/havana/+setgoals21:35
ttxOtherwise the list is in pretty good shape!21:35
jgriffithK... I'll take care of those items21:36
ttxLooking into https://launchpad.net/cinder/+milestone/havana-1 now21:36
ttxLooks good to me, although a bit late, if that status is accurate21:36
jgriffithYeah, serious time crunch coming up21:36
jgriffithbut I think we're going to make it21:36
jgriffithwill adjust next week if things don't look better21:36
ttxsure21:37
ttxAnything more in Cinder ?21:37
jgriffithnope21:37
ttx#topic Nova status21:37
*** openstack changes topic to "Nova status (Meeting topic: project)"21:37
ttxjgriffith: thx!21:37
ttxrussellb: o/21:37
ttx#link https://blueprints.launchpad.net/nova/havana21:37
russellbhi21:37
ttxStill looking good :)21:37
russellbthis list is as complete as it can be at this point21:37
russellbi'm sure we'll have more trickle in over time, as usual ...21:38
russellbhavana-1 is overly ambitious, but we have a lot of optimistic devs :)21:38
ttxyes, there are always changes... the trick is to overcome the pile-up of new blueprints21:38
ttxAt the summit there were talks about addressing feature gaps in Cells... couldn't find a blueprint about that21:38
russellbyeah, it's not there21:39
russellbi didn't put anything on here that doesn't have someone committed to doing it21:39
russellbthere are probably 100 other open blueprints not on this list21:39
ttxsounds good to me21:39
ttxjust making sure it wasn't overlooked21:39
ttx#link https://launchpad.net/nova/+milestone/havana-121:39
russellbwell, not overlooked, just ... nobody stepping up21:39
ttx35 blueprints is certainly ambitious :)21:40
russellbi still owe a roadmap post/email21:40
russellbyeah, 35 is half of the list21:40
ttxdb-enforce-unique-keys (High) depends on db-api-tests (Low) -- should I bump db-api-tests to High ?21:40
russellbyes21:40
ttx(was wondering if that one was not actually already completed)21:40
ttxbumping21:41
russellbit's probably getting close, it's been worked on in a bunch of small steps21:41
ttx"baby steps", would markmc say21:41
russellbyup21:41
*** radez is now known as radez_g0n321:41
ttxLooks like we'll have some deferring work to do once those optimistic devs meet reality :)21:42
ttxOtherwise looks good to me21:42
ttxAny question on Nova ?21:42
russellbthanks!21:42
*** lbragstad has quit IRC21:42
ttx#topic Heat status21:42
russellbit would have been worse21:42
*** openstack changes topic to "Heat status (Meeting topic: project)"21:42
shardyo/21:42
russellbi forced a lot of deferrals already21:42
* russellb shuts up21:42
ttxshardy: hi21:42
ttxrussellb: when will they learn?21:42
ttx#link https://blueprints.launchpad.net/heat/havana21:43
ttxshardy: Looks good, would be nice to generally have more milestone targets set21:43
ttxThat's indicative and can be changed in the future... really helps to see if you're overcommitting on any given milestone21:43
*** ashwini has quit IRC21:43
shardyttx: sure, still figuring out who's committed to doing what, quite a few new contributors arriving or promised21:43
ttxYou also have one blueprint proposed @ https://blueprints.launchpad.net/heat/havana/+setgoals21:44
* shardy looks21:44
*** rerngvit has joined #openstack-meeting21:44
shardyaccepted21:44
ttxWas also looking for the autoscaling API stuff... is there a blueprint covering it ?21:44
shardyttx: not atm, I'm waiting to see if those requesting the features turn up with some dev resources21:45
ttxthat's wise21:45
ttxvolume-snapshots looks implemented to me ? https://review.openstack.org/#/c/28054/21:45
*** rerngvit has left #openstack-meeting21:45
shardyyep, I think the patch landed yesterday, or maybe today21:45
ttxwill update21:45
ttxYour havana-1 plan @ https://launchpad.net/heat/+milestone/havana-1 looks good to me21:45
*** jbartels_ has quit IRC21:46
*** lloydde has joined #openstack-meeting21:46
ttx(added volume-snapshots to h1 and marked it implemented)21:46
shardythanks21:46
ttxQuestions about Heat ?21:46
ttxshardy: can you look into those milestone targets before next week ? Will probably result in a few additions to the havana-1 plan21:47
shardyttx: ok will do21:47
ttx#topic Horizon status21:47
*** openstack changes topic to "Horizon status (Meeting topic: project)"21:47
gabrielhurleyyo21:47
ttxshardy: thx!21:47
ttxgabrielhurley: hey21:48
ttx#link https://blueprints.launchpad.net/horizon/havana21:48
gabrielhurleythe plan hasn't changed since last week21:48
*** rerngvit_ has joined #openstack-meeting21:48
gabrielhurleybut progress has been made21:48
gabrielhurleyI think everything in H1 will land21:48
*** rerngvit_ has quit IRC21:48
ttxThis is how I like it21:48
gabrielhurleyme too21:48
ttxLooks good, as does your havana-1 plan @ https://launchpad.net/horizon/+milestone/havana-121:48
ttxSo it looks like we'll have around 300 blueprints targeted for Havana cycle21:49
gabrielhurleyI think you mean 3021:49
gabrielhurleyoh21:49
ttx(total)21:49
gabrielhurley300 across everyone21:49
*** bradjones|away is now known as bradjones21:49
ttxup from 233 completed in grizzly21:49
*** dwcramer has quit IRC21:49
gabrielhurleythat's a heck of a lot of stuff21:49
ttxhttp://status.openstack.org/release21:50
ttxgabrielhurley: anything you wanted to mention ?21:50
gabrielhurleynope, stuff to discuss with the Horizon team specifically, but no concerns for the community at large. Good progress.21:50
*** lloydde has quit IRC21:50
ttxRocking and rolling21:50
gabrielhurleyalways21:50
ttx#topic Open discussion21:50
*** openstack changes topic to "Open discussion (Meeting topic: project)"21:50
gabrielhurleyttx: shouldn't you do an "incubated projects" section again?21:51
ttxWe'll probably open a topic for incubated projects here, next week21:51
*** sacharya has quit IRC21:51
ttx10 projects was waayyy too easy21:51
gabrielhurleyJust wanted to make sure we didn't short-change RedDwarf and Ironic ;-)21:51
ttxnah, I need to give them extra notice about it21:51
ttxthey are busy drinking right now21:51
ttxanything else, anyone ?21:52
ttxhub_cap, devananda: around ?21:52
gabrielhurleyis that gonna be the next hit slang? "man, I drank so much last night... I was Incubated!"21:52
devanandattx: here21:53
Vekhaha21:53
hub_caphai21:53
ttxdevananda: just wanted to mention, next week we'll add a topic to the meeting to cover incubation progress21:53
ttxhub_cap: ^21:53
hub_capgreat! id love some guidance21:53
ttxThat's generally where we try to mentor you through things21:53
hub_capcan i have a big brother/ big sister?21:53
hub_cap:D21:53
devanandattx: awesome! yes, will be much appreciated :)21:53
ttxhub_cap, devananda: https://wiki.openstack.org/wiki/PTLguide is a good start21:54
*** saurabhs has quit IRC21:54
devanandai've been reading that already21:54
hub_capme221:54
ttxas is https://wiki.openstack.org/wiki/Release_Cycle21:54
*** SlickNik has joined #openstack-meeting21:55
hub_capok thanks ttx will look @ that too21:55
hub_capi think ive got things ironed out on LP but i prolly missed a checkbox here or there21:55
ttxideally we switch to external release management (i.e. me handling your release) at the second or third milestone21:55
*** esp1 has joined #openstack-meeting21:56
*** saurabhs has joined #openstack-meeting21:56
hub_capttx: awesome more work for u!21:56
ttxtime for the CI stuff to be aligned21:56
ttxThat's about it for now :)21:56
hub_capill add that to the list of things i want to chat u up about, and i think that id like to add devananda to that chat21:56
ttxAll I can think of at this late hour21:56
ttxMaybe time for a quick question21:57
hub_capsure, ive got ~6 items to ask u about in a few days21:57
ttxif you have any21:57
hub_capnaw dont want to occupy more time, im good for the next few days, business as usual21:57
reedis everybody going to ask.openstack.org to see if they can show off their knowledge answering questions?21:57
hub_capthe questions i have are more open ended discussion type Qs21:57
ttxhub_cap: if you already have those questions, maybe email is the most efficient way to get to me (if you're in PST)21:57
ttxOK then... let's close this21:58
hub_capttx: ok should i email the entire tc list?21:58
ttxhub_cap: no, ask me21:58
hub_capkk21:58
*** eglynn has quit IRC21:58
hub_capthx again all21:58
ttxwill redirect if needed21:58
hub_caproger21:58
ttx#endmeeting21:58
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"21:58
openstackMeeting ended Tue May  7 21:58:41 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/project/2013/project.2013-05-07-21.02.html21:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/project/2013/project.2013-05-07-21.02.txt21:58
openstackLog:            http://eavesdrop.openstack.org/meetings/project/2013/project.2013-05-07-21.02.log.html21:58
*** Vek has left #openstack-meeting21:58
russellbbam, within an hour!21:58
russellbnice work ttx21:58
reedyeah21:59
devanandahub_cap: ++ to sharing a list of questions. i'll prolly think of some overnight21:59
*** markmcclain has quit IRC21:59
hub_capdevananda: ill cc u most def21:59
devanandaright now i'm going to focus on actually getting a separate code base with things in it that work :)22:00
hub_cap:P22:00
ttxhub_cap, devananda: we can do it as an open etherpad22:01
gabrielhurleyooookay, Horizon meeting time22:01
ttxhub_cap, devananda: might seed a wiki page about incubation Q&A22:01
devananda++22:01
hub_capttx: already thought of doing that22:01
gabrielhurleyttx, hub_cap, devananda: take it in another channel ;-)22:01
hub_capsry gabrielhurley22:01
hub_cap:)22:01
gabrielhurleyno worries22:01
ttxI'm HACKING YOUR CONFERENCE22:02
gabrielhurleyhahahaha22:02
gabrielhurleywell played22:02
gabrielhurley#startmeeting horizon22:02
openstackMeeting started Tue May  7 22:02:27 2013 UTC.  The chair is gabrielhurley. Information about MeetBot at http://wiki.debian.org/MeetBot.22:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.22:02
*** openstack changes topic to " (Meeting topic: horizon)"22:02
openstackThe meeting name has been set to 'horizon'22:02
gabrielhurley#topic overview22:02
*** openstack changes topic to "overview (Meeting topic: horizon)"22:02
gabrielhurleyHello all22:02
gabrielhurleythings are moving right along22:02
lchenghello22:02
david-lyleHello22:02
bradjoneshey22:02
jpichhey22:03
*** cdub_ has joined #openstack-meeting22:03
gabrielhurleyBasically, the overview looks like this: the Havana plan looks solid and stable, we're halfway through H1 and look to be on target, and that makes me happy as long as we keep it up.22:03
gabrielhurleyeverything else I've got is about blueprints22:04
gabrielhurley#topic blueprits and bugs22:04
*** openstack changes topic to "blueprits and bugs (Meeting topic: horizon)"22:04
gabrielhurleyI'm gonna ignore the typo in that topic... ::sigh::22:04
*** hub_cap has left #openstack-meeting22:04
gabrielhurleyThere are reviews up for both of the "essential" blueprints:22:04
gabrielhurleyD3: https://review.openstack.org/#/c/28362/22:05
gabrielhurleyand enabling keystone v3: https://review.openstack.org/#/c/27989/22:05
gabrielhurleyI intend to review the D3 one thoroughly in the next day or two22:05
gabrielhurleythe Keystone one is from me and has some +1's but it needs core reviewer eyes on it asap22:06
gabrielhurleyit's blocking work on several other blueprints22:06
gabrielhurleyso it's the highest priority22:06
gabrielhurleyas far as the actual blueprint for it (https://blueprints.launchpad.net/horizon/+spec/api-capability-detection) I'm going to split it into two pieces22:06
gabrielhurleyone will be "enabling API version switching" which is what that review actually does22:07
gabrielhurleyand the other will be for the larger question of version/capability detection via the APIs22:07
gabrielhurleyI'm gonna do that because the latter has turned out to be a long-term process for the whole OpenStack community22:07
gabrielhurleyyou can see the current state of the proposal here: https://gist.github.com/gabrielhurley/549943422:08
gabrielhurleyI'm gonna continue to carry that through the whole H cycle22:08
gabrielhurleySo, to come back around, let's get reviews done on those two since they're the "essential" blueprints for the entire Havana cycle22:08
gabrielhurleythat'll open up the ability to progress on https://blueprints.launchpad.net/horizon/+spec/admin-domain-crud https://blueprints.launchpad.net/horizon/+spec/admin-role-crud https://blueprints.launchpad.net/horizon/+spec/login-domain-support and https://blueprints.launchpad.net/horizon/+spec/admin-group-crud22:09
gabrielhurley(basically all the Keystone v3 stuff for H1)22:09
david-lylethe other blocker for those is https://review.openstack.org/#/c/21942/22:09
david-lylepython-keystoneclient support22:10
gabrielhurleygood to know22:10
gabrielhurleyI'll keep an eye on that one too22:10
*** jcoufal has quit IRC22:10
*** dolphm has quit IRC22:10
david-lylelin's been making good progress using that patch22:10
gabrielhurleyso I see :-)22:10
gabrielhurleylet's run through the other H1 blueprints real quick22:10
*** cdub has quit IRC22:11
lchengFound some bugs on it already, added some comments  on the review.22:11
gabrielhurleyworking down the list, I still need to follow up on the Heat UI22:11
gabrielhurleyso that's on me22:11
gabrielhurleycody-somerville: you reassigned https://blueprints.launchpad.net/horizon/+spec/dry-templates to Tatiana Mazur... happen to have an IRC handle there?22:12
gabrielhurleyif not I can email to make sure of what's happening there22:12
*** glikson has quit IRC22:12
*** vkmc has joined #openstack-meeting22:13
*** esp1 has left #openstack-meeting22:13
gabrielhurleyNot a big deal. I'll follow up there too.22:13
gabrielhurleydavid-lyle: You've got https://blueprints.launchpad.net/horizon/+spec/centralized-color-palette assigned to you. I assume you've been focused more on other areas like Keystone that're more pressing. Any particular update to share?22:14
*** esker has joined #openstack-meeting22:14
david-lyleI started pulling out the pieces, but haven't gotten back to it yet.  Shouldn't take too long to wrap up22:14
*** saurabhs has left #openstack-meeting22:14
gabrielhurleythat's about what I figured. thanks.22:15
gabrielhurleyIt's also one that can slip if needed, but it's early in the cycle to think about that.22:15
*** markmc has quit IRC22:15
lchenghttps://blueprints.launchpad.net/horizon/+spec/login-domain-support - will be probably be ready for review in a day or two.22:15
gabrielhurleyawesome22:15
*** zul has quit IRC22:16
gabrielhurleyit doesn't look like amotoki is around, so I won't linger on https://blueprints.launchpad.net/horizon/+spec/quantum-security-group22:16
gabrielhurleyif anyone wants to say something about Quantum Security Groups feel free though22:16
gabrielhurleyPer-project flavors... https://blueprints.launchpad.net/horizon/+spec/define-flavor-for-project22:17
gabrielhurleythe review expired. I'll contact the author and see about getting it updated. If not we should re-assign it and wrap it up.22:17
*** cdub has joined #openstack-meeting22:18
gabrielhurleylastly, password change (https://blueprints.launchpad.net/horizon/+spec/change-user-passwords) has a review which was recently updated: https://review.openstack.org/#/c/23901/22:18
gabrielhurleythat needs review too22:18
*** kebray has quit IRC22:19
gabrielhurleybradjones: you asked for https://blueprints.launchpad.net/horizon/+spec/network-quotas to be assigned to you22:19
gabrielhurleyI went ahead and did that... thoughts on what milestone it should be in? (currently it's H3)22:19
gabrielhurleyalso, you'll obviously want to work closely with the Quantum team on that22:19
bradjonesyeah I can begin working on it straight away so i'll take a look at how the API is looking and get back to you22:20
gabrielhurleyawesome. just let me know.22:20
bradjoneswill do22:20
gabrielhurleyquick notes on bugs:22:21
david-lyleI've also got the preliminary step of https://blueprints.launchpad.net/horizon/+spec/multiple-service-endpoints about ready for review, it's just a selector to pick the region the services are being managed for.  Any feedback in the blueprint on the picker placement would be great.  http://imgur.com/gIh8MFh22:21
gabrielhurleysome interesting bugs crept into the keystone API recently which got reported 4 or 5 times in differing forms22:21
gabrielhurleydavid-lyle: perhaps that'd be a good thing to reach out to the new OpenStack UX group for...22:22
david-lylegabrielhurley: was unaware of the group, I will do that.  Thanks22:22
gabrielhurleylemme find a link22:22
gabrielhurleyit's surprisingly hard to google for22:23
gabrielhurleyhttps://plus.google.com/u/0/communities/10095451239346324812222:23
gabrielhurley#action get that group more visibility/discoverability22:24
gabrielhurleyit's just getting started22:24
gabrielhurleyso I'm interested to test the waters on having them weigh in on real UX questions22:24
gabrielhurleyanyhow, interesting keystone bugs... mostly not our fault... will get resolutions in the future22:24
gabrielhurley#topic open discussion22:24
*** openstack changes topic to "open discussion (Meeting topic: horizon)"22:24
gabrielhurleyeveryone's being mighty quiet today...22:25
gabrielhurleyhere's your chance!22:25
jpichQuestion about backporting translations22:25
jpichI've hit this strange error in the unit tests after backporting -> http://paste.openstack.org/show/36915/ It looks like a mix of locales is being used in the error message. Curious to hear if anyone is familiar with this?22:26
*** zul has joined #openstack-meeting22:26
gabrielhurleyI can help a  bit22:27
gabrielhurleythat's what happens when there's a unicode character in an exception string that is naively printed by a tool like (in this case) nose22:27
gabrielhurleyit can't print the real exception because it tries to convert the exception to an ascii string (stupid nose) and fails22:27
gabrielhurleyfixing that is a lot harder because you have to figure out what the failure is22:28
jpichWouldn't the tests still be run in English though?22:28
cody-somervillegabrielhurley: Hey. Sent you e-mail about that.22:28
cody-somervillegabrielhurley: She e-mailed me and asked to be assigned that bp.22:28
gabrielhurleythe tests have unicode characters in the test data22:28
gabrielhurleyI suggest tracing up the stack to somewhere in code under your control and wrapping that in a new try/except block and printing the error yourself so you can see what's happening22:29
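A minimal sketch of that debugging tactic, assuming the failure is a non-ASCII message inside an exception (the function name is made up; the point is the repr() in the except block, which stays safe where the implicit ascii str() conversion a runner like nose attempts would blow up under Python 2):

    def code_under_test():
        # Stand-in for the backported code that raises with a
        # translated, non-ASCII message.
        raise ValueError(u"quota d\xe9pass\xe9")

    try:
        code_under_test()
    except Exception as exc:
        # repr() never needs to encode the unicode payload, so it can be
        # printed even when str(exc) would raise UnicodeEncodeError.
        print(repr(exc))
        raise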
*** fnaval has quit IRC22:29
jpichI pasted the error below the test output in that paste22:29
gabrielhurleycody-somerville: yeah, I saw the email, just wanted to follow up to on timeline/expectations22:30
gabrielhurleyjpich: oh... hmmmm22:30
gabrielhurleythat is odd22:30
gabrielhurleyokay, I'm not sure offhand22:30
jpichThe only change is the po/mo files22:30
jpichFair enough! I'll dig further22:30
gabrielhurleyyeah, now I'm curious22:31
jpichNot looking good for getting it into the next stable release this way though22:31
jpichCheers22:31
gabrielhurleyyeah, they're pushing to get that out ASAP22:31
gabrielhurleyif it goes into the next one so be it22:32
gabrielhurleybetter to figure out what's wrong22:32
jpichYep22:32
*** shang has quit IRC22:34
gabrielhurleyanybody else?22:35
vkmcIs there something new regarding Keystone's trust API integration with Horizon?22:35
*** maoy has quit IRC22:35
gabrielhurleydefine "something new"22:35
gabrielhurleylike, management of trusts?22:35
gabrielhurleyor something else?22:36
lchengTrust api was pulled out of keystone before the Grizzly release.  Not sure what the current state is.22:36
vkmcLike if there is planned blueprint for it22:36
gabrielhurleyyeah, that API is in flux currently, so we hadn't targeted anything for it22:36
gabrielhurleyactually I don't think there's even an untargeted BP22:37
gabrielhurleyhopefully that'll become clearer in H22:37
gabrielhurleyI think having an open BP to track it would be good22:37
gabrielhurleywe might want to land something later on (H3?)22:37
gabrielhurleyit all depends on Keystone22:37
*** jcru is now known as jcru|away22:37
vkmcI see... I lost track of it and wasn't sure what the current state was22:38
vkmcI'll keep an eye on Keystone then and see how it is managed22:39
gabrielhurleyyeah. it's been messy22:39
gabrielhurleysounds good22:39
gabrielhurleygood question though22:39
jpichIs there something else in Keystone v3 that could help with implementing tenant deletion differently?22:39
gabrielhurleynot presently22:39
gabrielhurleythere's informal talk around improving cross-project event-driven behavior though22:40
gabrielhurleyit's actually being driven by changes in Nova22:40
vkmcThanks jpich, I didn't consider other choices heh22:40
*** flaper87 has quit IRC22:40
gabrielhurleywe'll see where it goes over the next milestone or two22:40
jpichInteresting, ok22:41
*** dolphm has joined #openstack-meeting22:41
jpichvkmc: Always be in flux ;)22:41
vkmcGreat :)22:41
gabrielhurleyhehe22:41
vkmcjpich, Oh I try hehe22:42
*** AlanClark has quit IRC22:42
*** jcru|away is now known as jcru22:43
gabrielhurleyokay22:43
gabrielhurleyI'm gonna call it here22:43
gabrielhurleyhave a great week folks!22:43
gabrielhurleyreview, review, review!22:43
gabrielhurleyand thank you all as usual.22:43
gabrielhurley#endmeeting22:43
*** openstack changes topic to "OpenStack meetings || Development in #openstack-dev || Help in #openstack"22:43
openstackMeeting ended Tue May  7 22:43:55 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:43
openstackMinutes:        http://eavesdrop.openstack.org/meetings/horizon/2013/horizon.2013-05-07-22.02.html22:43
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/horizon/2013/horizon.2013-05-07-22.02.txt22:43
openstackLog:            http://eavesdrop.openstack.org/meetings/horizon/2013/horizon.2013-05-07-22.02.log.html22:44
jpichThanks22:44
bradjonesthanks22:44
*** jcru has quit IRC22:44
vkmcThanks!22:44
lchengthanks22:44
*** bradjones is now known as bradjones|away22:45
*** dolphm has quit IRC22:46
*** lloydde has joined #openstack-meeting22:46
*** jpich has quit IRC22:47
*** gabrielhurley has quit IRC22:48
*** markwash has quit IRC22:48
*** lglenden has quit IRC22:48
*** vipul is now known as vipul|away22:49
*** lloydde has quit IRC22:51
*** datsun180b has quit IRC22:52
*** mikal has quit IRC22:56
*** spzala has quit IRC22:57
*** shardy has left #openstack-meeting22:57
*** mikal has joined #openstack-meeting22:59
*** zul has quit IRC22:59
*** johnpur has quit IRC23:01
*** vipul|away is now known as vipul23:01
*** vipul is now known as vipul|away23:03
*** ladquin is now known as ladquin_brb23:03
*** lloydde has joined #openstack-meeting23:07
*** vipul|away is now known as vipul23:07
*** mkollaro has quit IRC23:07
*** sacharya has joined #openstack-meeting23:13
*** mdenny has quit IRC23:14
*** hemna is now known as hemnafk23:14
*** mdenny has joined #openstack-meeting23:17
*** mdenny has quit IRC23:17
*** mdenny has joined #openstack-meeting23:18
*** beyounn has quit IRC23:19
*** beyounn has joined #openstack-meeting23:19
beyounnbt23:20
*** gongysh has joined #openstack-meeting23:22
*** dwcramer has joined #openstack-meeting23:25
*** markpeek has quit IRC23:31
*** amyt has quit IRC23:33
*** jrodom has quit IRC23:35
*** topol has joined #openstack-meeting23:43
*** jamespage_ has joined #openstack-meeting23:57
