Wednesday, 2013-10-30

*** comay has quit IRC00:02
*** michchap has joined #openstack-meeting00:05
*** anniec has quit IRC00:06
*** jesusaurus has joined #openstack-meeting00:06
*** hemna has quit IRC00:07
*** markpeek has quit IRC00:09
*** anniec has joined #openstack-meeting00:10
*** rossella_s has quit IRC00:10
*** SergeyLukjanov has joined #openstack-meeting00:12
*** yogeshmehra has quit IRC00:14
*** rossella_s has joined #openstack-meeting00:14
*** markwash has quit IRC00:14
*** vuil has quit IRC00:16
*** IlyaE has quit IRC00:18
*** rakhmerov has quit IRC00:20
*** fbo is now known as fbo_away00:23
*** rossella_s has quit IRC00:26
*** troytoman-away is now known as troytoman00:27
*** jasondotstar is now known as jasondotstar|afk00:29
*** michchap_ has joined #openstack-meeting00:30
*** matsuhashi has joined #openstack-meeting00:30
*** colinmcnamara has quit IRC00:31
*** colinmcnamara has joined #openstack-meeting00:32
*** reed has quit IRC00:33
*** michchap has quit IRC00:33
*** oubiwann has quit IRC00:36
*** colinmcnamara has quit IRC00:36
*** pauli has quit IRC00:38
*** radsy has quit IRC00:38
*** ayoung has quit IRC00:40
*** troytoman is now known as troytoman-away00:41
*** yaguang has joined #openstack-meeting00:43
*** michchap_ has quit IRC00:44
*** michchap has joined #openstack-meeting00:47
*** jsergent has quit IRC00:54
*** SergeyLukjanov is now known as _SergeyLukjanov00:55
*** _SergeyLukjanov has quit IRC00:55
*** julim has quit IRC00:57
*** akuznetsov has quit IRC01:01
*** sarob has quit IRC01:01
*** sarob has joined #openstack-meeting01:02
*** nosnos has joined #openstack-meeting01:03
*** SumitNaiksatam has quit IRC01:04
*** colinmcnamara has joined #openstack-meeting01:04
*** dprince has quit IRC01:04
*** stevemar has joined #openstack-meeting01:05
*** jrodom has joined #openstack-meeting01:05
*** sarob has quit IRC01:06
*** rockyg has joined #openstack-meeting01:07
*** oubiwann has joined #openstack-meeting01:07
*** jasondotstar|afk is now known as jasondotstar01:07
*** SergeyLukjanov has joined #openstack-meeting01:08
rockyg<ttx> you around?01:09
*** jrodom has quit IRC01:12
*** suo has joined #openstack-meeting01:14
*** yamahata has joined #openstack-meeting01:15
*** rakhmerov has joined #openstack-meeting01:17
*** ayoung has joined #openstack-meeting01:17
*** sdake has joined #openstack-meeting01:20
*** vkmc has quit IRC01:20
*** rakhmerov has quit IRC01:22
*** colinmcnamara has quit IRC01:23
*** colinmcnamara has joined #openstack-meeting01:23
*** colinmcnamara has quit IRC01:24
*** colinmcnamara has joined #openstack-meeting01:24
*** colinmcnamara has quit IRC01:28
*** rockyg has quit IRC01:29
*** colinmcnamara has joined #openstack-meeting01:29
*** vipul is now known as vipul-away01:31
*** colinmcnamara has quit IRC01:32
*** achampion has joined #openstack-meeting01:32
*** colinmcnamara has joined #openstack-meeting01:32
*** markpeek has joined #openstack-meeting01:33
*** vipul-away is now known as vipul01:36
*** SumitNaiksatam has joined #openstack-meeting01:37
*** sjing has joined #openstack-meeting01:40
*** MarkAtwood has joined #openstack-meeting01:42
*** SergeyLukjanov has quit IRC01:42
*** markwash has joined #openstack-meeting01:44
*** achampion is now known as alunch01:46
*** nati_ueno has joined #openstack-meeting01:46
*** alunch is now known as slic01:46
*** slic is now known as achampion01:47
*** esker has joined #openstack-meeting01:50
*** MarkAtwood has quit IRC01:53
*** rongze has joined #openstack-meeting01:53
*** adalbas has quit IRC01:54
*** nati_ueno has quit IRC01:55
*** rongze has quit IRC01:55
*** danwent has quit IRC01:56
*** rongze has joined #openstack-meeting01:56
*** nati_ueno has joined #openstack-meeting01:56
*** markpeek1 has joined #openstack-meeting01:59
*** markpeek has quit IRC02:00
*** rongze has quit IRC02:00
*** dims has quit IRC02:01
*** dcramer_ has joined #openstack-meeting02:02
*** rongze has joined #openstack-meeting02:03
*** achampion has quit IRC02:04
*** dianefleming has quit IRC02:06
*** dianefleming has joined #openstack-meeting02:07
*** sjing has quit IRC02:07
*** sjing has joined #openstack-meeting02:09
*** fnaval_ has joined #openstack-meeting02:11
*** sarob has joined #openstack-meeting02:13
*** sarob has quit IRC02:13
*** sarob_ has joined #openstack-meeting02:13
*** sarob has joined #openstack-meeting02:14
*** rakhmerov has joined #openstack-meeting02:18
*** lpabon has quit IRC02:18
*** sarob_ has quit IRC02:18
*** nati_ueno has quit IRC02:23
*** nati_ueno has joined #openstack-meeting02:24
*** zul has quit IRC02:24
*** anniec has quit IRC02:26
*** nati_ueno has quit IRC02:28
*** ivasev has joined #openstack-meeting02:32
*** jamiec has joined #openstack-meeting02:33
*** zhikunliu has joined #openstack-meeting02:38
*** spzala has quit IRC02:44
*** yamahata has quit IRC02:46
*** michchap has quit IRC02:48
*** esheffield_ is now known as esheffield02:48
*** rakhmerov has quit IRC02:48
*** rakhmerov has joined #openstack-meeting02:49
*** gyee has quit IRC02:49
*** rakhmerov has quit IRC02:53
*** dcramer_ has quit IRC02:57
*** vuil has joined #openstack-meeting02:57
*** yamahata has joined #openstack-meeting02:59
*** anteaya has quit IRC03:00
*** vuil has quit IRC03:02
*** ErikB has quit IRC03:03
*** twoputt has quit IRC03:05
*** bdpayne has quit IRC03:05
*** sarob has quit IRC03:05
*** twoputt_ has quit IRC03:05
*** markpeek1 has quit IRC03:06
*** dcramer_ has joined #openstack-meeting03:10
*** coolsvap has joined #openstack-meeting03:13
*** galstrom_zzz is now known as galstrom03:15
*** markpeek has joined #openstack-meeting03:16
*** tedross has joined #openstack-meeting03:16
*** galstrom is now known as galstrom_zzz03:17
*** nikhil__ is now known as nikhil|afk03:17
*** vipul has quit IRC03:20
*** vipul has joined #openstack-meeting03:21
*** slagle_ has joined #openstack-meeting03:24
*** slagle_ has quit IRC03:24
*** slagle_ has joined #openstack-meeting03:25
*** herndon has quit IRC03:25
*** rakhmerov has joined #openstack-meeting03:47
*** jlucci has joined #openstack-meeting03:49
*** beagles has quit IRC03:56
*** danwent has joined #openstack-meeting03:59
*** schwicht has quit IRC04:02
*** stevemar has quit IRC04:03
*** vipul is now known as vipul-away04:05
*** toshi-hayashi has quit IRC04:06
*** jecarey has joined #openstack-meeting04:07
*** harlowja has quit IRC04:07
*** vipul-away is now known as vipul04:10
*** same5336 has quit IRC04:11
*** tedross has quit IRC04:15
*** rakhmerov has quit IRC04:18
*** schwicht has joined #openstack-meeting04:20
*** yaguang has quit IRC04:24
*** schwicht has quit IRC04:25
*** matiu_ has joined #openstack-meeting04:26
*** matiu_ has quit IRC04:26
*** matiu_ has joined #openstack-meeting04:26
*** nati_ueno has joined #openstack-meeting04:29
*** matiu has quit IRC04:30
*** vuil has joined #openstack-meeting04:31
*** schwicht has joined #openstack-meeting04:31
*** Adri2000 has quit IRC04:32
*** oubiwann has quit IRC04:32
*** Adri2000 has joined #openstack-meeting04:33
*** terriyu has quit IRC04:39
*** ivasev has quit IRC04:40
*** rongze has quit IRC04:42
*** rongze has joined #openstack-meeting04:43
*** yaguang has joined #openstack-meeting04:44
*** DinaBelova has joined #openstack-meeting04:45
*** colinmcnamara has quit IRC04:46
*** colinmcnamara has joined #openstack-meeting04:47
*** rongze has quit IRC04:47
*** sjing has quit IRC04:47
*** sjing has joined #openstack-meeting04:48
*** changbl has joined #openstack-meeting04:50
*** colinmcnamara has quit IRC04:51
*** colinmcnamara has joined #openstack-meeting04:52
*** vipul is now known as vipul-away05:02
*** jecarey has quit IRC05:02
*** sacharya has quit IRC05:04
*** neelashah has joined #openstack-meeting05:04
*** SergeyLukjanov has joined #openstack-meeting05:05
*** oubiwann has joined #openstack-meeting05:09
*** matiu_ has quit IRC05:11
*** nermina has joined #openstack-meeting05:15
*** jpeeler has quit IRC05:16
*** jpeeler has joined #openstack-meeting05:19
*** vipul-away is now known as vipul05:19
*** Toshi has joined #openstack-meeting05:23
*** Toshi has quit IRC05:25
*** matiu_ has joined #openstack-meeting05:25
*** matiu_ has quit IRC05:25
*** matiu_ has joined #openstack-meeting05:25
*** Toshi has joined #openstack-meeting05:26
*** Toshi has quit IRC05:27
*** zul has joined #openstack-meeting05:27
*** Toshi has joined #openstack-meeting05:29
*** Toshi has quit IRC05:32
*** zul has quit IRC05:32
*** oubiwann has quit IRC05:36
*** akuznetsov has joined #openstack-meeting05:38
*** garyk has quit IRC05:40
*** jlucci has quit IRC05:41
*** akuznetsov has quit IRC05:41
*** markpeek has quit IRC05:44
*** novas0x2a|laptop has quit IRC05:45
*** sdake has quit IRC05:46
*** svapneel has joined #openstack-meeting05:49
*** coolsvap has quit IRC05:50
*** svapneel has quit IRC05:50
*** coolsvap has joined #openstack-meeting05:50
*** colinmcnamara has quit IRC05:55
*** colinmcnamara has joined #openstack-meeting05:55
*** akuznetsov has joined #openstack-meeting05:56
*** danwent has quit IRC05:59
*** colinmcnamara has quit IRC06:00
*** zul has joined #openstack-meeting06:01
*** akuznetsov has quit IRC06:05
*** denis_makogon has joined #openstack-meeting06:06
*** blamar has quit IRC06:14
*** rongze has joined #openstack-meeting06:28
*** yogeshmehra has joined #openstack-meeting06:28
*** topol has joined #openstack-meeting06:30
*** jamiec_ has joined #openstack-meeting06:39
*** jamiec has quit IRC06:40
*** topol has quit IRC06:41
*** sjing has quit IRC06:42
*** sjing has joined #openstack-meeting06:44
*** matiu_ has quit IRC06:50
*** garyk has joined #openstack-meeting06:55
*** rdopieralski has joined #openstack-meeting06:58
*** rdopieralski has left #openstack-meeting06:59
*** jamiec_ has quit IRC07:03
*** yaguang has quit IRC07:05
*** yaguang has joined #openstack-meeting07:05
*** sushils has quit IRC07:05
*** vipul is now known as vipul-away07:06
*** SergeyLukjanov is now known as _SergeyLukjanov07:12
*** _SergeyLukjanov has quit IRC07:13
*** IlyaE has joined #openstack-meeting07:13
*** IlyaE has quit IRC07:15
*** doron_afk has joined #openstack-meeting07:18
*** doron_afk is now known as doron07:20
*** vuil has quit IRC07:25
*** doron is now known as doron_afk07:33
*** doron_afk is now known as doron07:34
*** jcoufal has joined #openstack-meeting07:38
*** doron is now known as doron_afk07:40
*** flaper87|afk is now known as flaper8707:44
*** haomaiwang has quit IRC07:47
*** haomaiwang has joined #openstack-meeting07:48
*** doron has joined #openstack-meeting07:51
*** belmoreira has joined #openstack-meeting07:52
*** doron_afk has quit IRC07:52
*** lsmola_ has joined #openstack-meeting08:00
*** jtomasek has joined #openstack-meeting08:06
*** Mandell has quit IRC08:09
*** Mandell has joined #openstack-meeting08:10
*** doron is now known as doron_afk08:10
*** sushils has joined #openstack-meeting08:12
*** matiu has joined #openstack-meeting08:12
*** Mandell has quit IRC08:14
*** nermina has quit IRC08:19
*** boden has joined #openstack-meeting08:19
*** flaper87 is now known as flaper87|afk08:22
*** denis_makogon has quit IRC08:26
*** tkammer has joined #openstack-meeting08:29
*** flaper87|afk is now known as flaper8708:32
*** yogeshmehra has quit IRC08:33
*** yogeshmehra has joined #openstack-meeting08:33
*** jlibosva has joined #openstack-meeting08:35
*** yogeshmehra has quit IRC08:37
*** dcramer_ has quit IRC08:39
*** ygbo has joined #openstack-meeting08:40
*** sjing has quit IRC08:49
*** fbo_away is now known as fbo08:51
*** dcramer_ has joined #openstack-meeting08:51
*** matsuhashi has quit IRC09:03
*** matsuhashi has joined #openstack-meeting09:03
*** f_rossigneux has joined #openstack-meeting09:04
*** rossella_s has joined #openstack-meeting09:04
*** derekh has joined #openstack-meeting09:05
*** matsuhas_ has joined #openstack-meeting09:08
*** matsuhashi has quit IRC09:08
*** doron_afk is now known as doron09:09
*** yassine has joined #openstack-meeting09:13
*** doron is now known as doron_afk09:15
*** plomakin has quit IRC09:23
*** plomakin has joined #openstack-meeting09:23
*** belmoreira1 has joined #openstack-meeting09:23
*** rakhmerov has joined #openstack-meeting09:25
*** belmoreira has quit IRC09:26
*** jlibosva has quit IRC09:29
*** jlibosva has joined #openstack-meeting09:29
*** tkammer has quit IRC09:36
*** matiu has quit IRC09:39
*** johnthetubaguy has joined #openstack-meeting09:40
*** johnthetubaguy1 has joined #openstack-meeting09:41
*** johnthetubaguy has quit IRC09:42
*** bgorski has joined #openstack-meeting09:44
*** bgorski has quit IRC09:46
*** bgorski has joined #openstack-meeting09:47
*** bgorski has quit IRC09:48
*** bgorski has joined #openstack-meeting09:49
*** matiu has joined #openstack-meeting09:50
*** matiu has quit IRC09:50
*** matiu has joined #openstack-meeting09:50
*** bgorski has quit IRC09:50
*** gongysh has joined #openstack-meeting09:51
*** bgorski has joined #openstack-meeting09:51
*** bgorski has joined #openstack-meeting09:52
*** bgorski has quit IRC09:54
*** bgorski has joined #openstack-meeting09:55
*** bgorski has quit IRC09:59
*** boris-42_ is now known as boris-4210:00
*** tkammer has joined #openstack-meeting10:00
*** bgorski has joined #openstack-meeting10:01
*** bgorski has quit IRC10:05
*** alexpilotti has quit IRC10:05
*** bgorski has joined #openstack-meeting10:05
*** bgorski has quit IRC10:08
*** zhikunliu has quit IRC10:08
*** bgorski has joined #openstack-meeting10:09
*** bgorski has quit IRC10:10
*** fifieldt has quit IRC10:10
*** bgorski has joined #openstack-meeting10:11
*** egallen has joined #openstack-meeting10:11
*** pcm_ has joined #openstack-meeting10:12
*** pcm_ has quit IRC10:13
*** pcm_ has joined #openstack-meeting10:13
*** tkammer has quit IRC10:15
*** bgorski has quit IRC10:22
*** bgorski has joined #openstack-meeting10:23
*** bgorski has quit IRC10:26
*** jtomasek has quit IRC10:30
*** doron_afk has quit IRC10:31
*** afazekas has joined #openstack-meeting10:33
*** plomakin has quit IRC10:35
*** plomakin has joined #openstack-meeting10:35
*** jtomasek has joined #openstack-meeting10:37
*** jtomasek has quit IRC10:37
*** jtomasek has joined #openstack-meeting10:38
*** topol has joined #openstack-meeting10:40
*** rongze has quit IRC10:42
*** matiu has quit IRC10:43
*** MadOwl has joined #openstack-meeting10:43
*** jtomasek has quit IRC10:45
*** bgorski has joined #openstack-meeting10:47
*** bgorski has quit IRC10:52
*** jtomasek has joined #openstack-meeting10:54
*** bgorski has joined #openstack-meeting10:54
*** bgorski has quit IRC10:56
*** bgorski has joined #openstack-meeting10:57
*** matsuhas_ has quit IRC11:01
*** matsuhashi has joined #openstack-meeting11:02
*** dims has joined #openstack-meeting11:03
*** slagle_ has quit IRC11:08
*** DinaBelova has quit IRC11:09
*** doron_afk has joined #openstack-meeting11:10
*** doron_afk is now known as doron11:11
*** matsuhashi has quit IRC11:11
*** bgorski has joined #openstack-meeting11:13
*** DinaBelova has joined #openstack-meeting11:14
*** bgorski has quit IRC11:15
*** bgorski has joined #openstack-meeting11:15
*** bgorski has quit IRC11:19
*** yamahata has quit IRC11:20
*** egallen has quit IRC11:20
*** flaper87 is now known as flaper87|afk11:24
*** yaguang has quit IRC11:25
*** doron is now known as doron_afk11:28
*** dosaboy_ is now known as dosaboy11:28
*** egallen has joined #openstack-meeting11:28
*** bgorski has joined #openstack-meeting11:35
*** akuznetsov has joined #openstack-meeting11:39
*** bgorski has quit IRC11:41
*** akuznetsov has quit IRC11:43
*** alexpilotti has joined #openstack-meeting11:44
*** alexpilotti has quit IRC11:44
*** egallen has quit IRC11:45
*** akuznetsov has joined #openstack-meeting11:47
*** akuznetsov has quit IRC11:49
*** matiu has joined #openstack-meeting11:55
*** bgorski has joined #openstack-meeting11:56
*** akuznetsov has joined #openstack-meeting11:57
*** dcramer_ has quit IRC11:58
*** akuznetsov has quit IRC12:00
*** matsuhashi has joined #openstack-meeting12:02
*** afazekas has quit IRC12:03
*** akuznetsov has joined #openstack-meeting12:05
*** pdmars has joined #openstack-meeting12:09
*** tedross has joined #openstack-meeting12:12
*** akuznetsov has quit IRC12:12
*** doron_afk is now known as doron12:12
*** adalbas has joined #openstack-meeting12:14
*** dprince has joined #openstack-meeting12:14
*** dhellmann is now known as dhellmann-afk12:16
*** schwicht has quit IRC12:16
*** akuznetsov has joined #openstack-meeting12:17
*** schwicht has joined #openstack-meeting12:18
*** slagle has quit IRC12:19
*** jpeeler has quit IRC12:19
*** jpeeler has joined #openstack-meeting12:19
*** doron is now known as doron_afk12:20
*** pdmars has quit IRC12:21
*** esker has quit IRC12:23
*** rongze has joined #openstack-meeting12:23
*** radez_g0n3 is now known as radez12:24
*** dims has quit IRC12:25
*** akuznetsov has quit IRC12:25
*** dims has joined #openstack-meeting12:25
*** bpokorny has joined #openstack-meeting12:25
*** tkammer has joined #openstack-meeting12:26
*** rook has quit IRC12:26
*** nosnos has quit IRC12:27
*** pdmars has joined #openstack-meeting12:27
*** akuznetsov has joined #openstack-meeting12:29
*** venyamin has joined #openstack-meeting12:30
*** venyamin has quit IRC12:31
*** yamahata has joined #openstack-meeting12:33
*** flaper87|afk is now known as flaper8712:35
*** akuznetsov has quit IRC12:36
*** yamahata has quit IRC12:39
*** afazekas has joined #openstack-meeting12:40
*** matiu has quit IRC12:41
*** akuznetsov has joined #openstack-meeting12:41
*** egallen has joined #openstack-meeting12:42
*** topol has quit IRC12:44
*** gongysh has quit IRC12:46
*** markvan has joined #openstack-meeting12:46
*** yamahata has joined #openstack-meeting12:46
*** thomasem has joined #openstack-meeting12:46
*** beagles has joined #openstack-meeting12:46
*** julim has joined #openstack-meeting12:46
*** akuznetsov has quit IRC12:48
*** radez is now known as radez_g0n312:49
*** tedross has quit IRC12:50
*** lblanchard has joined #openstack-meeting12:50
*** akuznetsov has joined #openstack-meeting12:52
*** lbragstad has joined #openstack-meeting12:53
*** jecarey has joined #openstack-meeting12:54
*** whenry has quit IRC12:54
*** eharney has joined #openstack-meeting12:55
*** slagle has joined #openstack-meeting12:56
*** akuznetsov has quit IRC12:58
*** proazi has joined #openstack-meeting13:00
*** anteaya has joined #openstack-meeting13:00
*** bgorski has quit IRC13:00
*** akuznetsov has joined #openstack-meeting13:01
*** oubiwann has joined #openstack-meeting13:02
*** akuznetsov has quit IRC13:02
*** proazi has quit IRC13:03
*** SergeyLukjanov has joined #openstack-meeting13:04
*** DrBacchus has joined #openstack-meeting13:07
*** matsuhashi has quit IRC13:07
*** MadOwl has quit IRC13:08
*** matsuhashi has joined #openstack-meeting13:08
*** matsuhas_ has joined #openstack-meeting13:09
*** matsuhashi has quit IRC13:10
*** neelashah has quit IRC13:10
*** neelashah has joined #openstack-meeting13:10
*** weshay has joined #openstack-meeting13:12
*** herndon has joined #openstack-meeting13:13
*** suo has quit IRC13:13
*** bgorski has joined #openstack-meeting13:14
*** tkammer_ has joined #openstack-meeting13:15
*** tkammer_ has quit IRC13:15
*** tkammer has quit IRC13:18
*** otherwiseguy has joined #openstack-meeting13:19
*** egallen has quit IRC13:23
*** tkammer has joined #openstack-meeting13:23
*** joesavak has joined #openstack-meeting13:24
*** egallen has joined #openstack-meeting13:25
*** rongze has quit IRC13:27
*** nermina has joined #openstack-meeting13:32
*** stevemar has joined #openstack-meeting13:33
*** stevemar has quit IRC13:35
*** ryanpetrello has joined #openstack-meeting13:36
*** stevemar has joined #openstack-meeting13:36
*** egallen has quit IRC13:36
*** stevemar has quit IRC13:36
*** jdurgin1 has quit IRC13:36
*** fnaval_ has quit IRC13:37
*** esker has joined #openstack-meeting13:38
*** esker has quit IRC13:38
*** fnaval_ has joined #openstack-meeting13:38
*** egallen has joined #openstack-meeting13:38
*** slagle has quit IRC13:38
*** esker has joined #openstack-meeting13:39
*** sandywalsh_ has joined #openstack-meeting13:39
*** DrBacchus has quit IRC13:39
*** dianefleming has quit IRC13:40
*** rongze has joined #openstack-meeting13:41
*** sandywalsh has quit IRC13:42
*** fnaval_ has quit IRC13:42
*** q3k has quit IRC13:43
*** q3k has joined #openstack-meeting13:44
*** markpeek has joined #openstack-meeting13:48
*** jdurgin1 has joined #openstack-meeting13:49
*** herndon has quit IRC13:49
*** slagle has joined #openstack-meeting13:50
*** haomaiwa_ has joined #openstack-meeting13:51
*** tkammer has quit IRC13:52
*** haomaiwang has quit IRC13:54
*** DrBacchus has joined #openstack-meeting13:56
*** jlucci has joined #openstack-meeting13:57
*** radez_g0n3 is now known as radez13:57
*** rcurran has joined #openstack-meeting13:58
*** rkukura has joined #openstack-meeting13:58
mesteryHi folks13:59
mestery#startmeeting networking_ml214:00
openstackMeeting started Wed Oct 30 14:00:10 2013 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: networking_ml2)"14:00
openstackThe meeting name has been set to 'networking_ml2'14:00
*** topol has joined #openstack-meeting14:00
rkukurahi everyone14:00
mesteryWho's here for the last pre-Icehouse Summit ML2 meeting?14:00
mesteryhi rkukura!14:00
mestery#link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda14:00
mesteryShort meeting today focused on the ML2 Icehouse Design Summit Sessions14:01
mesterySo, just a note: If you had an ML2 session accepted, please create an etherpad this week for the summit next week.14:01
*** mrodden has joined #openstack-meeting14:02
mesteryI've created some for "ML2 Extensibility" and "Deprecated Plugin Migration" already.14:02
*** vijendar has joined #openstack-meeting14:02
mesterySean Collins created one for "ML2 QoS" as well.14:02
*** dims has quit IRC14:02
mesteryI'd appreciate the ML2 team adding notes to the "Deprecated Plugin Migration" etherpad.14:03
*** mrodden has quit IRC14:03
mesteryThat one will require community input to be successful I think.14:03
mesterySo .... that's all I had for this meeting.14:03
mesteryAny comments?14:03
*** dims has joined #openstack-meeting14:03
rkukurahaven't had a chance to go through the schedule yet and see how the ml2 sessions got paired up, but seems there are 4 or 5 sessions14:05
mesteryrkukura: I think so, yes.14:05
*** fnaval_ has joined #openstack-meeting14:05
*** bogdando has quit IRC14:06
*** SergeyLukjanov has quit IRC14:06
rkukuraother than me and mestery, do we know who else from these meetings will be in HK?14:06
*** Sukhdev has joined #openstack-meeting14:07
mesteryrkukura: I think we're alone today. :)14:07
*** tkammer has joined #openstack-meeting14:07
rkukuradid Sukhdev just join?14:07
Sukhdevgood morning14:07
Sukhdevyes14:07
mesteryMorning Sukhdev!14:07
mesteryWill we see you in Hong Kong next week?14:07
Sukhdevyes - will be there14:08
dkehnI will be there14:08
Sukhdevcoming a day late :-)14:08
mesteryGreat! Looking forward to meeting folks in person next week!14:08
*** DrBacchus has quit IRC14:09
rkukuralikewise14:09
mesteryOK, anyone have anything else?14:09
mesteryIf not, please setup an etherpad for your session this week so it's ready to go for the Summit next week!14:09
Sukhdevwanted to ask a favor -14:09
*** DrBacchus has joined #openstack-meeting14:09
mesteryAnd feel free to add notes to the existing etherpads as well.14:09
dkehnyes I'm trying to get the config setup for using ml2 in baremetal and have a few questions14:09
mesterySukhdev: Sure.14:09
*** bogdando has joined #openstack-meeting14:09
mesterydkehn: Cool! How can we help?14:09
SukhdevI resubmitted my patch for havana/stable -14:09
Sukhdevcan you help get this merged?14:10
mesterySukhdev: Yes, will review this today.14:10
Sukhdevhave some DB migration issue, will take care of this morning14:10
*** dianefleming has joined #openstack-meeting14:10
Sukhdevyes - it will be ready today14:10
mesterySukhdev: great14:10
rkukuraSukhdev: Saw that and was thinking the stable maintainers usually do the back-ports, but I could be mistaken14:10
Sukhdevcool thanks14:10
Sukhdevdo I have to follow some different process for that?14:11
mesteryrkukura: I think getting some +1s from us helps their job.14:11
dkehnother than core_plugin in neutron.conf, are there other params I need to address14:11
*** devanand1 is now known as devananda14:11
*** whenry has joined #openstack-meeting14:11
SukhdevOK cool14:11
Sukhdevthanks14:11
mesterydkehn: Does bare metal use OVS with agents? If so, that config should remain the same from your OVS configuration.14:11
dkehnyes, or at least I would like to keep using the ovs14:12
*** ivasev has joined #openstack-meeting14:12
mesteryOK, then you should be good to go, as ML2 works with the OVS agent.14:12
dkehnthen where do we direct the ml2 usage14:12
*** coolsvap has quit IRC14:12
dkehnif the core_plugin is to remain the same?14:12
mesterydkehn: In neutron.conf by setting the core plugin14:12
rkukuradkehn: Do you mean install on bare metal, or doing a bare metal cloud?14:12
mesteryNo, core_plugin becomes ML214:13
mesteryBut the agent config remains the same14:13
dkehnok, that's what I was assuming, just wanted to verify14:13
rkukuraDon't forget to add L3 to the service plugin list.14:13
*** otherwiseguy has quit IRC14:13
mesterydkehn: Cool! Let us know how it goes when you give it a try.14:14
dkehnok, thx14:14
mesteryrkukura: Good point, I forgot about that.14:14
dkehnsee you all in HK14:14
mesterydkehn: Same here!14:14
mesteryOK, if there is nothing else, lets call this meeting early.14:14
rkukuraI think thats it!14:14
mesteryOK, thanks everyone!14:15
mesteryLooking forward to some productive ML2 discussions in Hong Kong next week!14:15
mesterySee you all there!14:15
mestery#endmeeting14:15
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"14:15
openstackMeeting ended Wed Oct 30 14:15:25 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:15
dkehnis the service plugin in neutron?14:15
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-10-30-14.00.html14:15
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-10-30-14.00.txt14:15
openstackLog:            http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-10-30-14.00.log.html14:15
*** rcurran has quit IRC14:15
*** DrBacchus has quit IRC14:16
rkukuradkehn: Yes14:16
dkehnrkukura: can you provide the actual tag, thx14:16
mesterydkehn: service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin14:18
*** ryanpetrello has quit IRC14:18
dkehnthx14:18
rkukuralooks right to mew14:18
rkukuras/mew/me/14:18
*** jmontemayor has joined #openstack-meeting14:19
dkehnmestery: assuming that goes into the default section?14:19
mesterydkehn: Yes, correct.14:20
*** mrodden has joined #openstack-meeting14:21
*** sacharya has joined #openstack-meeting14:21
*** DrBacchus has joined #openstack-meeting14:21
dkehnmestery: what is the default, just curious14:21
mesterydkehn: I don't think there is a default, but ML2 requires this as we decoupled the L3 mixin from ML2 in Havana.14:22
dkehnmestery: good to know14:22
rkukuramestery, dkehn: There is a puppet module for ml2 in review. Haven't had a chance to review it myself yet, but wouldn't hurt for any of us to take a look14:22
dkehnmestery: so basically the following are the only settings: core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin14:23
dkehnservice_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin14:23
dkehnmestery: to bring the ml2 into play14:23
mesterydkehn: Yes, that should be what's required.14:24
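For reference, a minimal neutron.conf sketch of the two settings agreed above (values taken verbatim from this discussion; the existing OVS agent configuration is assumed to stay untouched):

    [DEFAULT]
    # Switch the core plugin to ML2; the OVS agent config is reused as-is.
    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
    # L3 was decoupled from the plugin in Havana, so it must be loaded as a service plugin.
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin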
mesteryrkukura: Can you point me at the Puppet module review? I'll signup and have a look.14:24
dkehnmestery: great, now for the rebuild14:24
mesterydkehn: Let us know how it goes. :)14:24
dkehnmestery: will do, thx again14:25
*** matsuhas_ has quit IRC14:25
*** matsuhashi has joined #openstack-meeting14:25
dkehnmestery: one more so in the nova.conf I've set the linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver, I'm assuming this should stay14:26
dkehnmestery: this is the Driver used to create ethernet devices14:27
*** markwash has quit IRC14:27
mesterydkehn: That should work, though I had thought in Icehouse that one was going away in favor of the generic driver.14:29
mesteryIs this Havana or Icehouse?14:29
*** matsuhashi has quit IRC14:30
dkehnshould be pulling from trunk each time14:30
dkehnso latest14:30
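And the corresponding nova.conf line dkehn describes (Havana-era option name; as mestery notes above, this driver option was expected to give way to a generic driver in Icehouse):

    [DEFAULT]
    # Driver nova uses to create ethernet devices on the host; kept as-is when
    # moving the Neutron side from the OVS plugin to ML2.
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver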
rkukuramestery: See https://review.openstack.org/#/c/48289/14:31
mesterykukura: Thanks14:32
mesteryrkukura: Thanks14:32
*** lpabon has joined #openstack-meeting14:36
*** sacharya has quit IRC14:38
*** matiu has joined #openstack-meeting14:39
*** ryanpetrello has joined #openstack-meeting14:41
*** Mandell has joined #openstack-meeting14:41
*** stevemar has joined #openstack-meeting14:41
*** sileht has quit IRC14:43
*** sileht has joined #openstack-meeting14:43
*** haomaiwang has joined #openstack-meeting14:45
*** rgerganov has joined #openstack-meeting14:45
*** johnthetubaguy1 is now known as johnthetubaguy14:45
*** jayahn has joined #openstack-meeting14:45
*** haomaiwa_ has quit IRC14:47
*** otherwiseguy has joined #openstack-meeting14:51
*** sarob has joined #openstack-meeting14:52
*** garyk has quit IRC14:53
*** pcm_ has quit IRC14:53
*** pcm_ has joined #openstack-meeting14:54
*** vincent_hou has joined #openstack-meeting14:55
*** spzala has joined #openstack-meeting14:57
*** pcm_ has quit IRC14:58
*** pcm_ has joined #openstack-meeting14:59
*** pcm_ has quit IRC15:01
*** sarob has quit IRC15:01
johnthetubaguy#startmeeting XenAPI15:01
openstackMeeting started Wed Oct 30 15:01:40 2013 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
*** openstack changes topic to " (Meeting topic: XenAPI)"15:01
openstackThe meeting name has been set to 'xenapi'15:01
johnthetubaguyI think BobBall is on holiday, is anyone here today?15:01
*** sarob has joined #openstack-meeting15:02
pvoo/15:02
pvobut probably not as useful as bob15:02
johnthetubaguynice to see you though15:02
*** pcm_ has joined #openstack-meeting15:02
johnthetubaguyanyone got anything to raise15:02
*** lblanchard has quit IRC15:02
johnthetubaguywe just changed the clocks in the UK, so that may cause some confusion15:02
pvoheh. that always happens.15:02
*** pcm_ has quit IRC15:03
*** tkammer has quit IRC15:03
johnthetubaguy#help please review the summit session plan https://etherpad.openstack.org/p/IcehouseXenAPIRoadmap15:03
*** burt has joined #openstack-meeting15:03
*** IlyaE has joined #openstack-meeting15:03
johnthetubaguyI haven't really got anything else to chat about, but will hang on to see if mate or anyone else joins and has something to say15:04
pvosounds good. Thanks john.15:04
johnthetubaguyno problem15:05
pvoConsider breaking 1:1 hypervisor / nova-compute mapping15:05
pvoI'm very interested in this.15:05
pvowe would reduce our footprint by a pretty large factor15:06
*** sarob has quit IRC15:06
johnthetubaguyyes, it would really help15:06
*** danwent has joined #openstack-meeting15:06
johnthetubaguymy thinking is several nova-computes on a single VM, a separate process and config file for each nova-compute15:06
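A rough sketch of that layout, assuming one config file and one distinct service identity per managed pool (the file names and the host / xenapi_connection_* option names are illustrative of the XenAPI driver of that era, not a settled design):

    # /etc/nova/nova-compute-pool1.conf -- one such file per XenServer pool
    [DEFAULT]
    host = compute-pool1            # unique name so each process registers separately
    compute_driver = xenapi.XenAPIDriver
    xenapi_connection_url = https://xenserver-pool1/
    xenapi_connection_username = root
    xenapi_connection_password = <secret>

    # each nova-compute then runs as its own process on the shared VM, e.g.
    #   nova-compute --config-file /etc/nova/nova-compute-pool1.conf
    #   nova-compute --config-file /etc/nova/nova-compute-pool2.conf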
pvojohnthetubaguy: not sure if you  know, but is Citrix moving away from vhd?15:06
johnthetubaguythen we only have a few things15:07
pvoor did I misread that somewhere?15:07
pvoor was it msft moving away from it?15:07
johnthetubaguyI know Citrix want LVHD support, but I don't see the project dropping VHD; it might add qcow2 rather than VHDX, but it's not clear yet15:07
johnthetubaguyah, that might be VHDX, it removes the 2TB limit in VHD15:08
pvoah, thats it.15:08
pvogoogling turned it up.15:08
pvoIs that something we would want to add on a wishlist?15:08
johnthetubaguythe only thing keeping the 1:1 mapping is stuff like adding the disk partition15:08
johnthetubaguyVHDX?15:08
pvomaybe. I'm doing more research before I open my mouth. : )15:09
johnthetubaguyyeah, I think qcow2 is probably better bet15:09
pvoI was going to advocate qcow2 but I need to look at all the specs for vhdx15:10
pvosince I just vaguely heard of it.15:10
pvomaybe its the cat's meow.15:10
johnthetubaguyright, VHDX got some optimizations that Citrix VHD already added, but yeah15:10
johnthetubaguythe way Xen is going qcow2 might be more likely15:10
*** pcm_ has joined #openstack-meeting15:10
pvoit would make more sense here for the interop.15:11
*** caitlin56 has joined #openstack-meeting15:11
*** herndon has joined #openstack-meeting15:16
*** tkammer has joined #openstack-meeting15:17
*** Mandell has quit IRC15:18
*** Mandell has joined #openstack-meeting15:18
*** pschaef has joined #openstack-meeting15:19
*** lpabon_ has joined #openstack-meeting15:19
*** jayahn has quit IRC15:19
*** lpabon has quit IRC15:19
johnthetubaguyOK, so we should wrap up15:21
*** radez is now known as radez_g0n315:21
johnthetubaguysee some of you in HK15:21
johnthetubaguy#endmeeting15:21
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"15:21
openstackMeeting ended Wed Oct 30 15:21:23 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:21
openstackMinutes:        http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-10-30-15.01.html15:21
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-10-30-15.01.txt15:21
openstackLog:            http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-10-30-15.01.log.html15:21
*** lsmola_ has quit IRC15:22
*** colinmcnamara has joined #openstack-meeting15:22
*** caitlin56 has quit IRC15:22
*** caitlin56 has joined #openstack-meeting15:23
*** Mandell has quit IRC15:23
*** sriram_c has quit IRC15:24
*** jcoufal is now known as jcoufal|afk15:25
*** sriram_c has joined #openstack-meeting15:25
*** ayoung has quit IRC15:26
*** jmontemayor has quit IRC15:26
*** lpabon_ has quit IRC15:27
*** whenry has quit IRC15:27
*** MarkAtwood has joined #openstack-meeting15:29
*** dscannell has joined #openstack-meeting15:29
*** bpokorny has quit IRC15:29
*** rkukura has left #openstack-meeting15:31
*** dcramer_ has joined #openstack-meeting15:31
*** galstrom_zzz is now known as galstrom15:32
*** glenng has joined #openstack-meeting15:33
*** sarob has joined #openstack-meeting15:34
*** jmontemayor has joined #openstack-meeting15:37
*** jsergent has joined #openstack-meeting15:42
*** doude has joined #openstack-meeting15:46
*** ericyoung has joined #openstack-meeting15:46
*** sneelakantan has joined #openstack-meeting15:47
*** blamar has joined #openstack-meeting15:48
*** caitlin56 has quit IRC15:51
*** bpb has joined #openstack-meeting15:51
*** caitlin56 has joined #openstack-meeting15:51
*** nermina has quit IRC15:52
*** doude has left #openstack-meeting15:54
*** vkmc has joined #openstack-meeting15:54
*** vkmc has quit IRC15:54
*** vkmc has joined #openstack-meeting15:54
*** nati_ueno has quit IRC15:57
*** bswartz has joined #openstack-meeting15:58
*** winston-1 has joined #openstack-meeting15:59
jgriffithfolks here for cinder meeting?16:00
KurtMartinyep16:00
caitlin56yes16:00
*** winston-1 is now known as winston-d16:00
*** tkammer has quit IRC16:00
*** jjacob512 has joined #openstack-meeting16:00
bswartzhi16:00
*** zhiyan has joined #openstack-meeting16:00
*** garyk has joined #openstack-meeting16:00
eharneyhi16:00
jgriffithbswartz: eharney duncanT KurtMartin winston-d avishay16:00
jgriffithok16:00
winston-dhi, o/16:00
*** rushiagr has joined #openstack-meeting16:00
jgriffithlet's go for it16:00
zhiyanhi16:00
jgriffith#startmeeting cinder16:00
openstackMeeting started Wed Oct 30 16:00:42 2013 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: cinder)"16:00
rushiagryeah!!16:00
openstackThe meeting name has been set to 'cinder'16:00
*** KurtMartin is now known as kmartin16:00
rushiagro/16:00
jgriffithdosaboy:16:00
duncanTHey16:00
*** avishay has joined #openstack-meeting16:00
rushiagrjust on time :)16:01
jgriffithdosaboy: with us?16:01
jgriffithrushiagr: :)16:01
dosaboyoh hey yes16:01
*** jungleboyj has joined #openstack-meeting16:01
duncanTGlad you shouted, totally forgot the clocks change altered the meeting time16:01
avishayhi16:01
dosaboygoddam DST ;)16:01
*** joel-coffman has joined #openstack-meeting16:01
jgriffith#topic backup support for metadata16:01
*** openstack changes topic to "backup support for metadata (Meeting topic: cinder)"16:01
jgriffithdosaboy: LOL16:01
jungleboyjHowdy all!16:01
jgriffithjungleboyj: yo16:01
dosaboyok so16:01
*** any2names has joined #openstack-meeting16:01
avishayhttps://wiki.openstack.org/wiki/CinderMeetings16:01
guitarzanduncanT: it's ok, the rest of us will show up at the wrong time next week16:02
*** xyang__ has joined #openstack-meeting16:02
guitarzanhmm, or the week after I suppose16:02
jungleboyjguitarzan: +216:02
dosaboyso i had a few discussions now about backup metadata16:02
dosaboya few opinions flying around16:02
dosaboybut16:02
dosaboyi think the best way forward is as follows16:02
*** ajauch has joined #openstack-meeting16:03
dosaboyeach backup driver will backup a set of volume metadata16:03
*** matel has joined #openstack-meeting16:03
dosaboythat 'set' of metadata will come from a common api16:03
dosaboypresented to all drivers16:03
dosaboywhich will be versioned16:03
dosaboythis will allow for the volume to be recreated from scratch16:03
dosaboyshould the db/cinder cluster get lost16:04
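A minimal Python sketch of the versioned, driver-facing API dosaboy outlines (class and key names are hypothetical; the fields shown, such as the bootable flag and glance image metadata, are simply the ones raised later in this discussion):

    class BackupMetadataAPI(object):
        # Common, versioned view of a volume's metadata handed to every backup driver.
        VERSION = '1.0'

        def get(self, volume):
            # 'volume' is a dict-like record from the cinder DB; keys are illustrative.
            return {
                'version': self.VERSION,
                'volume-base-metadata': {
                    'size': volume['size'],
                    'bootable': volume['bootable'],
                    'display_name': volume.get('display_name'),
                },
                'volume-metadata': volume.get('metadata', {}),
                'volume-glance-metadata': volume.get('glance_metadata', {}),
            }

        def put(self, volume_id, saved):
            # Dispatch on the stored version so older backups remain restorable.
            if saved.get('version') != self.VERSION:
                raise ValueError('unsupported backup metadata version: %s'
                                 % saved.get('version'))
            # ...write each section back into the cinder DB for volume_id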
jgriffithdosaboy: why back up the metadata at all?16:04
dosaboy(some caveats tbd)16:04
jgriffithdosaboy: ahh...  db recovery16:04
avishaydosaboy: i noted on the BP that you need an import_backup too16:04
dosaboywell (queue DuncanT)16:04
dosaboyavishay16:04
dosaboyyes16:04
dosaboyI have created a separate BP for that16:04
dosaboybasically all this back stuff is mushrooming a bit ;)16:04
avishayOh OK, cool - please link the BPs16:04
dosaboyback/backup16:04
dosaboyyeah sorry i am all over the shop this week16:05
duncanTThe vision I've always tried to keep for backup is that it *is* for disaster recovery16:05
dosaboytrying to keep up16:05
jgriffithduncanT: DR of what though?16:05
*** kartikaditya has joined #openstack-meeting16:05
caitlin56dosaboy: how would this work if we allowed the Volume Drivers to do backups?16:05
jgriffithduncanT: it's not an HA implementation of Cinder16:05
jgriffithduncanT: at least I don't think it should be16:05
duncanTjgriffith: Cidner volumes. Even if your cinder cluster caught frie orr got stolen, you can still get you volume(s) back16:05
duncanTGah, typing fail16:06
caitlin56dosaboy: having the Volume Driver do the backup would be more efficient when the drives are external, but we need a definitive format.16:06
jgriffithduncanT: if it's cinder volumes I ask again, why even back up metadata16:06
avishayjgriffith: if you backup to a remote site and you lose your entire cinder, your backups should remain usable16:06
dosaboycaitlin56: not sure what you mean16:06
jgriffithcaitlin56: besides the point right now16:06
jungleboyjduncanT: So, the data about what volumes there were?16:06
jgriffithavishay: that's Cinder DR not volume backup16:06
duncanTjgriffith: because certain volumes are useless without at least some metadata (e.g. the bootable flags and glance metadata for licensing)16:06
jgriffithwhat I'm saying is that they're two very different things16:06
avishayjgriffith: everything's connected :)16:06
jgriffithduncanT: ok.. I'm going to try this one more time16:06
jgriffithavishay: duncanT16:07
jgriffithfirst:16:07
jungleboyjavishay: Oh, ok, backup of the volumes is separate and then this backs up the data for accessing them.  Right?16:07
jgriffithMy thought regarding the purpose of volume backup service is to backup volumes16:07
jgriffithwhat you're proposing now is bleeding over into the db contents16:07
jgriffithhowever...16:07
jgriffithif you're going to do that, then I would argue that you have to go all the way16:07
dosaboyjgriffith: since we need to backup the metadata, we could just shove it in the backend and effectively get DR for free16:08
jgriffithin other words just backing up the meta is only part of the story16:08
dosaboyit is not much effort to get that done16:08
*** thingee has joined #openstack-meeting16:08
*** sdake has joined #openstack-meeting16:08
jgriffithdosaboy: ummm... I don't think it's that simple honestly16:08
thingeeo/16:08
jgriffithdosaboy: quotas, limits etc etc16:08
dosaboywell,16:08
jgriffithall of those things exist in the DB16:08
avishaydosaboy: it's DR with high RPO and RTO16:08
jgriffithsnapshots16:08
duncanTquotas, limits etc I don't think are part of the volume16:08
dosaboythat is where my versioned api comes in16:08
dosaboyso16:08
jgriffithduncanT: neither is metadata16:09
dosaboythe idea is that we define a sufficient set of metadata16:09
bswartzif you want to recover from your whole cinder going up in smoke you need to mirror the whole cinder DB16:09
duncanTBut the stuff needed to use the volume *is* part of the volume16:09
caitlin56Backing up the metadata with the data is relatively easy, it's standardizing it and being compatible with existing backups that takes work.16:09
jgriffithI honestly think this makes things WAY more complicated than we should16:09
dosaboycaitlin56: hence the versioning16:09
avishaycaitlin56: yes16:09
*** EhudTrainin has joined #openstack-meeting16:09
jgriffithif you want cinder DR then implement an HA cinder setup16:09
duncanTbswartz: The backup API allows you to choose (and pay for in certain cases) a safe, cold storage copy of your volume16:09
jgriffithif you want to back up databases, back them up using a backup service16:09
*** bill_az has joined #openstack-meeting16:09
duncanTI *don't* want to back up the database16:10
winston-dcaitlin56: but can we 'backup' the metadata in db?16:10
caitlin56Backing up the CinderDB means that you would be restoring *all* volumes.16:10
jgriffithduncanT: but that's your argument here16:10
jgriffiththe only reason to backup metadata is if the db is lost16:10
duncanTjgriffith: No it isn't16:10
duncanT(my arguement)16:10
jgriffithok... then why backup the metadata at all?16:10
caitlin56If you want to restore selective volumes then you need selective metadata (or no metadata, which is what jgriffith is arguing).16:10
*** IlyaE has quit IRC16:10
duncanTRight, I want a backup to be a disaster resistant copy of a volume.16:11
jgriffithduncanT: what's the logic to backing up the metadata?16:11
bswartzif you're not worried about the database going away, then there's no point to making more copies of the metadata16:11
duncanTIncluding everything you need to use *that volume*16:11
duncanTNot *all volumes*16:11
duncanTNot *cinder config*16:11
duncanTJust the volume I've said is important16:11
duncanTOtherwise use a snapshot16:11
jgriffithYou're still not really answering my question16:12
winston-dbswartz: yes, there is. we 'snapshot' the metadata, in DB16:12
caitlin56duncanT: can you cite some specific metadata fields that you would not know how to set when restoring just the volume payload?16:12
jgriffithbswartz: +116:12
jgriffithbswartz: which is my whole point16:12
jgriffithbswartz: and what I'm trying to get duncanT to explain16:12
duncanTcaitlin56: The bootable flag. The licensing info held in the glance metadata16:12
jgriffithintroducing things like "disaster resistant" isn't very helpful to me :)16:13
avishaywinston-d: needs to be consistent16:13
bswartzwinston-d: you're imagining that the metadata might change and you want to restore it from an old copy?16:13
jgriffiththat's a bit subjective16:13
duncanTI'd like to be able to import a backup into a clean cinder install16:13
winston-dbswartz: correct16:13
jgriffithduncanT: ahh... that's VERY different!16:13
jgriffithduncanT: that's volume import or migration16:13
jgriffiththat's NOT volume backup16:13
bswartzwinston-d: seems reasonable16:13
*** RajeshMohan has quit IRC16:13
duncanTjgriffith: No, it's backup and restore16:13
*** jcoufal|afk is now known as jcoufal16:13
avishayno, it's not migration16:13
duncanTjgriffith: Even if cinder dies, catches fire, etc16:13
* dosaboy is sitting on the fence whistling16:14
caitlin56Does anyone have examples of metadata that SHOULD NOT be restored when you migrate a volume from one site to another?16:14
duncanTjgriffith: The backup should be enough to get my working volume back16:14
avishaymigration is within one openstack install16:14
*** RajeshMohan has joined #openstack-meeting16:14
duncanTI put that on my first ever backup slide, and I propose to keep it there16:14
jgriffithavishay: yeah...16:14
jgriffithavishay: duncanT alright we're obviously not going to agree here16:14
* jungleboyj is enjoying the show.16:15
jgriffithavishay: well maybe you and I will16:15
avishayhaha16:15
winston-davishay: backup is within one cinder install as well, no?16:15
jgriffithduncanT: fine, so you want a "cinder-volume service" backup16:15
avishayi'm with duncanT on this one16:15
avishaywinston-d: i can backup to wherever i want - think geo-distributed swift even16:15
jgriffithhaha16:15
dosaboyavishay: +116:15
jgriffithWTF?16:15
duncanTjgriffith: I just want the volume I backup to come back, even if cinder caught fire in the mean time16:16
*** DrBacchus has quit IRC16:16
jgriffithright16:16
* bswartz is nervou about conflating backup/restore use cases with DR use cases16:16
bswartznervous*16:16
jgriffiththe key is "cinder caught fire"16:16
avishayif i backup to geo-distributed swift, and a meteor hits my datacenter, i can rebuild and point my new metadata to existing swift objects16:16
caitlin56What is a backup? If it is not enough to restore a volume to a new context then why not just replicate snapshots?16:16
jgriffithsorry.. but you're saying "backup as a service" in openstack as a whole IMO16:16
duncanTbackup has been DR since day one on the design spec16:16
jgriffithYou're not talking about restoring a volume anymore16:16
winston-davishay, duncanT don't forget to backup volume types, and qos along with volume and metadata16:17
guitarzanwhere is that leap coming from?16:17
jgriffithyou're talking about all the little nooks and crannies that your specific install/implementation may require16:17
bswartzduncanT: I don't agree that taking backups is the best way to implement DR -- it's *a* way, but a relatively poor one16:17
avishaybackup and replication are on the same scale, with different RPO/RTO and fail-over methods16:17
jgriffithand what's worse is you're saying "I only care about metadata"16:17
winston-davishay: duncanT 'cos those two are considered to be 'metadata' to the volume as well16:17
duncanTThe problem is, when I first wrote backup, we didn't have bootable flags, or volume encryption... you got the bits back onto an iscsi target and you were back in business16:17
jgriffithbut somebody else says "but I care about quotas"16:17
guitarzanwinston-d has an interesting point16:17
jgriffithand somebody else cares about something else16:17
duncanTGlance metadata was the first bug16:17
bswartzthe real value of backups is when you don't have a disaster, but you've corrupted your data somehow16:17
jgriffithIt doesn't end until you backup all of cinder and the db16:17
avishaywinston-d: but i can put the data into a volume with different qos and it still works16:17
duncanTTypes or rate limits don't stop me using a volume16:18
guitarzanbswartz: isn't that a disaster? :)16:18
jgriffithduncanT: they don't stop YOU16:18
jgriffiththat's the key16:18
jgriffiththey may stop others though depending on IMPL16:18
bswartzguitarzan: no -- that's a snafu16:18
duncanTjgriffith: They don't stop a customer...16:18
jgriffithduncanT: they don't stop your customers16:18
jgriffithduncanT: they stop mine16:18
jgriffithduncanT: I have specific heat jobs that require volumes of certain types16:19
bswartzthere's a difference between users screwing up their own data, and a service provider having an outage16:19
duncanTjgriffith: The point is, right now, even if you've got your backup of a bootable volume, it is useless if cinder loses stuff out of the DB16:19
winston-davishay but that's not the original volume (not data) any more.16:19
jgriffithduncanT: perhaps16:19
winston-davishay: i mean data is, volume is not16:19
*** Sukhdev has quit IRC16:19
caitlin56Would it always make sense when restoring a single volume to a new datacenter to preserve the prior QoS/quotas/etc.?16:19
avishaywinston-d: that's philosophical :P16:19
duncanTjgriffith: You can't restore it in such a way that you can boot from it. At all.16:19
*** belmoreira1 has quit IRC16:19
dosaboyall this could be accounted for by simply defining a required metadata set16:19
jgriffithduncanT: but the purpose of the backup IMO is if your backend takes a dump a user can get his data back, or as bswartz pointed out a user does "rm -rf /*"16:19
dosaboyi don't see why that would be so complex16:20
winston-ddosaboy: +116:20
avishayit's all of these use cases16:20
jgriffithdosaboy: I don't disagree with that16:20
dosaboydeliberating whether this or that metadata is required is not really for this conversation16:20
duncanTjgriffith: Right now, I CAN'T GET MY BOOTABLE VOLUME TO BOOT16:20
avishayit's rm -rf, it's a fire, it's a meteor16:20
duncanTIt just can't be done16:20
caitlin56dosaboy: agreed, if we are going to backup metadata, we need to define filters on the metadata so only things that should be kept are.16:20
jgriffithduncanT: keep yelling16:20
jgriffithduncanT: I'll keep ignoring :)16:21
guitarzanthat's the bug, you can't boot a restored backup16:21
avishayjgriffith: accidental caps lock ;)16:21
duncanTjgriffith: Sorry, I was out of line a touch there16:21
jgriffithavishay: haha... I don't think that's the case16:21
avishayjgriffith: be optimistic :)16:21
jgriffithdosaboy: so like I said in IRC the other day....16:21
*** neelashah has quit IRC16:21
dosaboyyarp16:22
jgriffithdosaboy: I'm fine with it being implemented, I could care less16:22
winston-di'd like to consider the volume as a virtual hard drive.16:22
duncanTjgriffith: But a snapshot covers the rm -rf case16:22
jgriffithdosaboy: I have the info in my DB so I really don't care16:22
jgriffithdosaboy: If you're an idiot and you don't back up your db then hey.. this at least will help you16:22
jgriffithdosaboy: but something else is going to bite you in the ass later16:22
winston-dwhatever it takes to backup a virtual hard drive, that's what we should do in cinder backup.16:22
caitlin56Allowing snapshot replication would deal with disaster recovery issues, but not with porting a volume to a new vendor.16:22
dosaboydamn it, i'm back on the fence again16:23
*** schwicht has quit IRC16:23
dosaboyi kinda think the only way to resolve this is to have a vote16:23
jgriffithdosaboy: I also think that things like metadata would be good in an "export/import" api16:23
*** markwash has joined #openstack-meeting16:23
avishaydosaboy: democracy doesn't work in these situations i think :)16:23
thingeeI missed the beginning of this convo. Why are people opposed to it restoring metadata?16:23
jgriffithdosaboy: duncanT but like I said, it doesn't *hurt* me if you put metadata there16:23
dosaboylol16:23
duncanTI totally agree that they are part of export too16:23
duncanTAnd transfer for that matter16:24
dosaboyok so, i have implemented a chunk of this,16:24
avishaywhat is "volume export"?16:24
duncanTCertain volumes are literally useless if you lose certain bits of their metadata16:24
dosaboywhy don't i see if i can knock up the rest16:24
dosaboyand then if you like...16:24
*** ericyoung has quit IRC16:24
jgriffithdosaboy: I do think someone should better define the purpose of cinder-backup though16:24
jgriffithdosaboy: that's fine by me16:24
winston-dthingee: i think we are discussing where to save the copy of metadata for a volume backup, in the DB or in Swift/Ceph/sth else16:24
dosaboyjgriffith: totally agree16:24
zhiyanwinston-d: if we think of the volume as a virtual hard drive, can we export it as a package, like ovf? it contains metadata16:24
jgriffithdosaboy: like I said, I won't object to backing it up at all16:25
jungleboyjjgriffith: +116:25
jgriffithdosaboy: but I don't want to have misleading expectations16:25
dosaboyi would have asked for a session if i had not confirmed HK so late16:25
jgriffithdosaboy: this is nowhere near a Cinder DR recovery backup16:25
jgriffithand I don't want to make it one16:25
jgriffitherrr...16:25
jgriffiths/recover//16:25
dosaboythere are many a strong opinion on this one ;)16:25
thingeewinston-d: I think we've discussed  before to leave it to the object store.16:25
caitlin56jgriffith: we're debating what a backup is good for.16:25
thingeeas object store metadata16:25
avishayjgriffith: what is "volume export"?16:26
jgriffithavishay: non-existent :)16:26
avishayjgriffith: it looks like i'm leading the "volume import" session, so thought I should know :)16:26
jgriffithavishay: the idea/proposal was to be able to kick out volumes from Cinder without deleting them off the backend16:26
*** slagle has quit IRC16:26
dosaboyput it this way, as long as the necessary metadata is backed up (either way)16:27
avishayjgriffith: ah OK16:27
jgriffithand then obviously an import to pull in existing volumes16:27
dosaboynoone gets hurt16:27
*** slagle has joined #openstack-meeting16:27
winston-dthingee: yeah, but as i said to dosaboy the other day, i missed that discussion16:27
jgriffithdosaboy: agreed16:27
*** DrBacchus has joined #openstack-meeting16:27
dosaboyso i'll keep going on the track i'm on16:27
dosaboyand we can take a look at what i get done16:27
avishayjgriffith: quotas are tricky on export16:27
dosaboyif we don't like it then fine16:27
thingeedosaboy: are you still storing it in object store metadata?16:28
dosaboythingee: yes16:28
jgriffithavishay: indeed16:28
dosaboybut each driver will have its own way16:28
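As one example of a driver-specific way to do this, a Swift-backed driver might attach the serialized record to the backup object as object metadata; a hedged sketch using python-swiftclient (the container and header names are made up, and real Swift metadata size limits might push a driver toward a sidecar object instead):

    import json


    def store_backup_metadata(conn, container, backup_object, metadata):
        # conn: an authenticated swiftclient.client.Connection.
        # The X-Object-Meta-* header travels with the backup object, so the
        # record survives even if the cinder DB is lost.
        conn.post_object(container, backup_object,
                         headers={'X-Object-Meta-Cinder-Backup-Metadata':
                                  json.dumps(metadata)})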
duncanTavishay: I'd suggest that once something is kicked out of cinder, then it can't take up a cinder quota?16:28
winston-davishay: and types, extra specs, qos?16:28
caitlin56dosaboy: be sure to escape the metadata then.16:28
jgriffithduncanT: +116:28
avishayduncanT: but it still takes up space on disk16:28
*** jbernard has joined #openstack-meeting16:28
jgriffithavishay: yeah so that's the counter, however you kicked it out16:28
duncanTavishay: Not cinder's problem once you've explicitly decided it isn't cinder's problem16:28
jgriffithavishay: that tenant can't 'access' via cinder anymore16:29
jgriffithavishay: it's troubling....16:29
avishayjgriffith: yes.  is export needed?16:29
jgriffithavishay: I'm having the same dilemma with adding a purge call to the API16:29
caitlin56Quotas are tricky, because they are implicitly part of the cinder context where the backup was made.16:29
jgriffithavishay: well that's another good question :)16:29
dosaboyooh i had not thought of that16:29
dosaboyso how do backups count towards quota atm?16:30
avishaywait...i'm sorry i forked the conversation16:30
avishaypeople are getting confused16:30
duncanTdosaboy: Currently they don't16:30
* rushiagr definitely is16:30
avishaywe moved to talk about "volume export" without finishing backup metadata16:30
dosaboyand i presume that is bad?16:30
* jgriffith went down the fork nobody else is on16:30
avishayhaha16:30
caitlin56dosaboy: backups should count in your *object* storage quota, not cinder.16:30
avishayi think we can hash out export at the session.  i think we should also find time to talk about backup metadata.16:31
dosaboycaitlin56: but they don't necessarily go to an object store16:31
dosaboylike if you offload to TSM16:31
caitlin56dosaboy: when the backup is to an object store, we should let that object store track/report the data consumption.16:31
*** sneelakantan has left #openstack-meeting16:32
thingeecaitlin56: +116:32
jungleboyjcaitlin56: That makes sense.16:32
*** egallen has quit IRC16:32
jungleboyjCinder can't run the whole world.16:32
thingeeI don't want this problem where multiple projects are tracking the same resource quota again.16:32
jungleboyj...yet.16:32
*** markmcclain has quit IRC16:32
*** salv-orlando has joined #openstack-meeting16:32
dosaboyanyway i don't wanna hijack this meeting anymore16:32
dosaboyi think quotas in backup are discussion to be had though16:33
jgriffithdosaboy: too bad :)16:33
avishaydosaboy: duncanT: can you come up with a set of metadata to back up and we'll discuss in person next week?  we can do an ad-hoc session?  jgriffith sound good?16:33
dosaboysure16:33
duncanTSure16:33
* jungleboyj is sorry he is going to miss that discussion. The IRC version was fun!16:34
jgriffithavishay: sure... but like I said, if dosaboy just wants to implement backup of metadata with a volume I have no objection16:34
dosaboywe can all sit around a nice campfire16:34
jgriffithso it would be a short conversation on my part :)16:34
avishayok :)16:34
dosaboyDuncanT can make the cocoa16:34
jgriffithno way!!!!16:34
jungleboyjdosaboy: +116:34
jgriffithI know duncanT would push me into the fire16:34
dosaboyhaha16:34
avishaythat way we know if duncanT is yelling or just hit the caps lock ;)16:34
avishayhahah16:34
*** SergeyLukjanov has joined #openstack-meeting16:34
jungleboyj:-)16:34
*** egallen has joined #openstack-meeting16:34
dosaboydon't worry, i'll bring the whiskey16:35
jungleboyjdosaboy: +216:35
jgriffithalright...16:35
jgriffithwell that was fun16:35
jgriffithIs Ehud around?16:35
*** masayukig has joined #openstack-meeting16:35
*** ajauch has quit IRC16:35
EhudTraininyes16:35
jgriffithEhudTrainin: welcome16:35
EhudTraininhi16:35
jgriffith#topic fencing16:35
*** openstack changes topic to "fencing (Meeting topic: cinder)"16:35
jgriffithhttps://blueprints.launchpad.net/cinder/+spec/fencing-and-unfencing16:36
jgriffithfor those that haven't seen it ^^16:36
jgriffithEhudTrainin: I'll let you kick it off16:36
EhudTraininIt is about adding fencing functionality to Cinder16:37
EhudTraininin order to support HA for instances16:37
avishayblueprint explains it pretty well16:38
EhudTraininby ensuring the instances on a failed host would not try to access the storage after they are rebuilt on another host16:38
jungleboyjavishay: +116:38
duncanTMy concerns here are the failure cases... partitioned storage etc...16:38
winston-davishay: +1, nice write up in the bp. haven't seen any one like that for quite a while.16:38
rushiagr+1 for beautifully explaining in the BP16:38
jgriffithyeah, the bp is well written (nice job on that by the way)16:38
*** sandywalsh_ has quit IRC16:39
avishayduncanT: please elaborate16:39
duncanTIt's easy to say it's hard for nova so cinder should do it, but exactly the same problems exist in cinder failure cases... like cinder loses communication with a storage backend16:39
*** hemnafk is now known as hemna16:40
zhiyanEhudTrainin: i'm thinking about how cinder would identify those attachment sessions for a volume. seems we need to prevent such race condition issues.. IMO16:40
avishayit's not hard for nova - it's impossible for nova.  the server is in a bad state and we can't trust it to do the right thing.16:40
*** dansmith is now known as damnsmith16:40
jgriffithI guess my only real question was:16:40
hemnafencing ?16:40
jgriffith1. why would the compute host try and access the storage?16:40
jgriffith2. why do we necessarily care?16:40
jgriffithI should clarify before somebody flips out...16:41
EhudTraininI think that if the storage does not respond then the fence will fail, but it is less likely that both the host and the storage fail at the same time16:41
*** neelashah has joined #openstack-meeting16:41
jgriffithIf a compute host *fails* and instances are migrated16:41
*** damnsmith is now known as dansmith16:41
jgriffithit should IMO be up to nova to disable the attachments on the *failed* compute host16:41
avishayjgriffith: it might not fail completely - it might just lose connectivity or go into some other bad state16:41
jgriffithavishay: sure16:41
*** bogdando has quit IRC16:42
jgriffithavishay: but it migrated instances right?16:42
hemnashouldn't nova deal with detaching and reattaching elsewhere ?16:42
jgriffithhemna: that's kinda what I was saying16:42
*** bogdando has joined #openstack-meeting16:42
thingeeI agree with jgriffith. it should be up to whatever is doing the migration.16:42
avishaywait16:42
hemnasince Nova should know that it had to migrate the instance to another host, it has the knowledge of the state16:42
jgriffithwe could really screw some things up if we make incorrect decisions16:42
duncanTBut currently there is no way of telling cinder 'make sure this is teally detached', I don't think16:42
avishayEhudTrainin: if nova brings up the instance on another VM, does it have the same instance ID?16:42
duncanT*totally detached16:43
EhudTraininThis is not exactly migration, but a rebuild, since the host from Nova's point of view has failed16:43
caitlin56jgriffith: I agree, nova knows the client state accurately. It should deal with the results of that changing.16:43
winston-djgriffith: migrating the instance may or may not stop the old one from connecting to cinder16:43
hemnaavishay, I'd assume since it's a rebuild, it would be a new instance id16:43
jgriffithwinston-d: yeah... but I'm saying it *should*16:43
dosaboysounds like we expect the same piece of code that may fail to migrate an instance to be sane enough to ensure a fence16:43
hemnacould be a poor assumption though16:43
jgriffithdosaboy: that could be double un-good16:44
dosaboyyeah, i may be missing how this would be done though16:44
avishaythe use case is that the nova server is not responsive but the VM continues to run and access the storage16:44
EhudTraininIt may rebuild with the same IP and attach it to the same volume16:44
jgriffithavishay: ahhh16:44
caitlin56dosaboy: could you propose something where we are protecting the volume rather than doing nova's work for it?16:44
duncanTSo if a compute node goes wonky, when is it safe to reattach a volume that was attached to that host to another instance?16:45
*** jungleboyj has quit IRC16:45
jgriffithavishay: so rogue vm's that nova can't get to anymore16:45
avishayjgriffith: yes16:45
*** SumitNaiksatam has quit IRC16:45
jgriffithduncanT: never probably16:45
jgriffithavishay: so who does the fencing?16:45
avishaybut now i'm thinking that maybe this is only a problem if we have multi-attach?16:45
duncanTjgriffith: Indeed. I think the idea of fence is 'make it safe to do that'16:45
avishayjgriffith: i assume the admin16:45
jgriffithavishay: i.e. who makes the call16:45
*** rongze has quit IRC16:45
jgriffithavishay: and why not just send a detach/disconnect16:45
caitlin56rogue VMs aren't something we should solve - at most we should protect the volume from rogue VMs.16:45
dosaboycaitlin56: not sure what you mean there16:46
hemnajgriffith, +116:46
duncanTcaitlin56: The idea here *is* to protect the volume for a rogue VM16:46
dosaboycaitlin56: how does cinder know what a rogue vm is though?16:46
winston-djgriffith: detach failed16:46
duncanTI guess it is a disconnect on steroids16:46
winston-djgriffith: 'cos nova compute is not reachable16:46
caitlin56dosaboy: exactly, we don't want cinder falsely deciding a VM is rogue.16:46
hemnadosaboy, cinder doesn't know.  only nova does16:47
jgriffithduncanT: yeah, I'm assuming that's what the implementation would basically be here16:47
dosaboycatlin56: ah gotcha16:47
hemnaI think nova should drive this and I'm not sure what cinder needs to do during the fencing process.  nova should detach from the rogue vm16:47
jgriffithOk... so interesting scenarios16:47
*** jcoufal has quit IRC16:47
avishayEhudTrainin: want to comment?16:47
jgriffithhere's my take...16:47
thingeehemna: +1 nova should be driving this16:47
thingeecinder does not have enough information16:48
duncanThemna: If the compute node stops talking, nova can't do the detach from the VM16:48
jgriffithif you want to implement a service/admin API call to force disconnect from a node and ban a node I *think* that's ok16:48
winston-dhemna: what if nova fails to detach the volume? the only hope is to beg cinder to help16:48
EhudTraininAn instance may be rogue when there is no connection to the nova-compute of its host, but in future further indications may be used to decide a host has failed16:48
jgriffithquite honestly I'm worried about the bugs that will be logged due to operator/admin error though16:48
winston-dhemna: cinder is on the end of the connection16:48
hemnawinston-d, cinder failed in that case as well no ?16:48
jgriffithwinston-d: EhudTrainin hemna avishay duncanT caitlin56 thoughts on my comment ^^16:48
avishayjgriffith: +116:49
hemnajgriffith, +116:49
*** caitlin56 has quit IRC16:49
duncanTjgriffith: I totally agree this is basically a force call16:49
thingeeI think we all understand the use case now. nova node is not responsive, rogue vms. Still we keep coming back to cinder not having enough information. I really think, though, that nova should be driving the handling of this situation.16:49
*** caitlin56 has joined #openstack-meeting16:49
winston-djgriffith: +116:49
avishaythingee: what's missing?16:49
dosaboy+116:49
hemnaif nova can't talk to the n-cpu process on the host, nova can't really detach the volume.16:49
jgriffiththingee: yeah, but if we step back....16:49
duncanTjgriffith: My single concern is how to signal to the caller that the call failed16:49
jgriffiththingee: allowing an admin to force disconnect and ban a node from connecting I'm ok16:50
EhudTraininThe problem is Nova can't take care of this, since a failure indication does not ensure that the instance is not talking to the storage or won't do it after some time16:50
*** nikhil|afk is now known as nikhil16:50
jgriffithduncanT: can you explain?16:50
jgriffithsorry16:50
winston-dduncanT: signal for what? force detach failed?16:50
duncanTjgriffith: If cinder can't talk to the storage backend, it can't force the detach...16:50
thingeeEhudTrainin: I apologize, but this is the first time I'm hearing about the bp. I'll check it out to understand more, but as jgriffith mentioned I fear the bugs in automating this.16:50
thingeeI like the idea of admin call to force detach though16:51
duncanTwinston-d: Yes, forced detach failed16:51
duncanTwinston-d: It is far from a show-stopper, just want to ensure it is thought of16:51
hemnathingee, but if n-cpu isn't reachable on the host....nova can't detach the volume.16:51
avishayduncanT: if nobody can talk to the VM and nobody can talk to the storage, i guess you need to pull the plug :)16:51
winston-dduncanT: a-synch call, no call back. please come back query the result.16:51
*** beyounn has quit IRC16:51
jgriffithok16:51
jgriffithso EhudTrainin I think the take-away (and I can add this to bp if you like)16:52
jgriffithis:16:52
duncanTwinston-d: That's fine, yup, just need to remember to add the queryable status :-)16:52
jgriffith1. Add an admin API call to attempt force disconnect of an attachment/s to a specified node/IP16:52
hemnathe best you can do in that case is ask cinder to disconnect from the backend, that'll eventually leave a broken LUN on the host, which will give i/o errors for the host and the vm.16:52
thingeehemna: cinder still doesn't have the information needed though to act. Maybe this bp explains that..I haven't read it yet.16:52
jgriffithUmmm... hmm16:52
*** beyounn has joined #openstack-meeting16:53
xyang__avishay: if this is done by cinder, what happens to the entries in the nova db16:53
jgriffithso then what :)16:53
hemnathingee, cinder has the volume and attachment info in its db. it can call the right backend to disconnect16:53
avishayxyang__: good question - EhudTrainin ?16:53
thingeethis is a nova node HA problem. I'm not sure why we're trying to solve it with cinder.16:53
winston-dxyang__: nova is the caller, it should know what to do about the block-device-mapping16:53
hemnathis is just icky16:53
thingeehemna: that's not the problem16:54
xyang__winston-d: nova is not working here, right16:54
thingeehemna: the problem is cinder doesn't know to act16:54
xyang__winston: this will be a cinder operation completely16:54
jgriffithxyang__: a particular compute node is trashed16:54
*** twoputt has joined #openstack-meeting16:54
*** yamahata has quit IRC16:54
avishayso this should start at n-api and then to cinder?16:54
hemnathingee, well not yet.  :)  we were talking about forcing a disconnect from cinder.16:54
duncanTthingee: Nova knows it has lost track of a vm and can make the call, yes?16:54
*** sandywalsh_ has joined #openstack-meeting16:54
hemnaduncanT, yah16:54
*** coolsvap has joined #openstack-meeting16:54
EhudTraininwhen the instance is rebuilt, the volume is detached and then attached. the rebuild would be done only after fencing to avoid possible conflict16:54
thingeeduncanT, hemna: as jgriffith mentioned, I think making a call to force detach is good. But nova should make that call16:55
hemnathingee, +116:55
thingeeor an admin16:55
winston-dxyang__: nova compute is not working, not the entire nova, e.g. nova-api is still working16:55
jgriffithEhudTrainin: ok, now you kinda lost me16:55
duncanTthingee +116:55
xyang__winston-d: ok.16:55
hemnawinston-d, but the host needs n-cpu to be working in order to detach the volume from the VM and the host16:55
jgriffithhemna: no16:55
jgriffithhemna: it just needs n-api16:56
jgriffithhemna: n-api can call cinder16:56
avishayEhudTrainin: i think the question is - why can't this be implemented in nova, where nova-api calls detach/terminate_connection for all volumes attached to the host?16:56
thingeejgriffith: but what tells n-api?16:56
hemnan-cpu does the work to detach the volume from the hypervisor and the host kernel16:56
jgriffiththingee: LOL16:56
jgriffiththingee: excellent question :)16:56
jgriffiththingee: and now we're back to admin, in which case who cares if it's direct to cinder api from admin etc16:56
*** neelashah has quit IRC16:56
winston-dhemna: in the case when n-cpu is on fire, n-api has to call for help from cinder16:57
thingeeagain, I really think this is a nova node HA case. I don't see anything right now that cinder can know to act on.16:57
hemnawinston-d, yah I think that's the only option.16:57
jgriffiththingee: I agree16:57
EhudTraininThe detach command does not ensure that the instance on the bad host would not try to access the storage16:57
jgriffithso you all keep saying things like "nova node on fire" "nova node is unreachable" etc etc16:57
jgriffithif the nova node is so hosed it's probably not going to be making iscsi connections anyway16:58
hemnaEhudTrainin, correct, but if the cinder backend driver disconnects from the storage, the host will get i/o errors when the vm/host tries to access the volume.16:58
avishaywhat about file system mounts, where terminate_connection doesn't do anything?16:58
jgriffithI have an easy solution...16:58
thingeeso we all agree...force detach exposed. Leave it to the people handling the instances. If a nova node catches fire, there better be another nova node available to catch rogue vms and communicate with cinder16:58
jgriffithssh root@nova-node; shutdown -h now16:58
hemnaEhudTrainin, effectively detaching the volume....but with a dangling LUN16:58
winston-dthingee: cinder doesn't and doesn't have to know. cinder just provides help, in the case when n-cpu is broken and no one can reach n-cpu.16:58
jgriffithif that doesn't work login to pdu and shut off power16:58
thingeewinston-d: so when does cinder do the detach to help?16:58
hemnathingee, +116:58
avishayjgriffith: what if the server's management network is down, but it's still accessing a storage network?16:59
duncanTjgriffith: It says in the blueprint you don't always have a PDU16:59
hemnaavishay, re: Fibre Channel ? :P16:59
jgriffithavishay: that's where the unplug came from LOL16:59
avishayhemna: or a separate ethernet network16:59
jgriffithduncanT: sighh16:59
caitlin56hemna: if nova is hosed then it shouldn't be surprising that getting everything working again will not be trivial.16:59
jgriffithcall the DC monkey16:59
*** ArxCruz_ has joined #openstack-meeting16:59
avishayhah16:59
jgriffithalright, we're spiraling16:59
thingeetimes up.16:59
winston-dthingee: n-api finds out n-cpu is on fire, and it'd like to re-create the vm on another n-cpu. but if n-api fails to disconnect the volume, it has to call for cinder's help17:00
hemnathrow a grenade and run.  next!17:00
jgriffithEhudTrainin: it's an interesting idea but there are some very valid concerns here IMO17:00
avishayhow about this case: we have an NFS mount on the host.  disconnect today does nothing.  how do we stop the VM from accessing it?17:00
duncanTThe way I'm reading the blueprint here, all it is asking for is a force_disconnect_and_rogue_reconnections17:00
duncanTavishay: Kill the export?17:00
*** sandywalsh_ has quit IRC17:00
*** jlibosva has quit IRC17:00
hemnaavishay, heh, that's why jgriffith and I complained about the NFS unmount code :)17:00
jgriffithI think we're all fine with an admin extension to force disconnect17:00
jgriffithlet's start with that and go from there17:00
jgriffitheverybody ok with that?17:00
hemna+117:01
jgriffithof course with NFS you're just screwed17:01
*** tjones1 has joined #openstack-meeting17:01
winston-d+117:01
jgriffithOk... we can theorize more in #openstack-cinder if you like17:01
thingee+117:01
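What the group converges on here is an admin-only "force disconnect" extension. As a rough illustration only (the URL layout follows Cinder's volume-action pattern, but the action name and empty body should not be read as the agreed contract), such a call might look like:

    import json
    import requests

    def force_detach(cinder_endpoint, admin_token, volume_id):
        """Ask Cinder to tear down the attachment even if nova-compute is gone."""
        url = "%s/volumes/%s/action" % (cinder_endpoint, volume_id)
        headers = {"X-Auth-Token": admin_token,
                   "Content-Type": "application/json"}
        body = {"os-force_detach": {}}          # assumed admin-extension action name
        resp = requests.post(url, data=json.dumps(body), headers=headers)
        resp.raise_for_status()                 # an async 202 is the expected reply
        return resp

As duncanT points out above, the caller would still need a way to learn that the forced detach itself failed, e.g. by polling the volume status afterwards.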
* hartsocks waves17:01
EhudTraininI think beyond force disconnect we would also want to prevent the nova-compute on that host from creating new connections17:01
jgriffiththanks everybody17:01
hemnayah good luck deploying a cloud w/ NFS :P17:01
jgriffith#endmeeting17:01
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"17:01
openstackMeeting ended Wed Oct 30 17:01:41 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-10-30-16.00.html17:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-10-30-16.00.txt17:01
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-10-30-16.00.log.html17:01
winston-djgriffith: i think bswartz has some magic to disconnect one NFS session on the fly17:01
*** thingee has left #openstack-meeting17:01
jgriffithhemna: lots of people do it17:01
hemnagives me shivers17:01
hartsocks#startmeeting vmwareapi17:02
openstackMeeting started Wed Oct 30 17:02:04 2013 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.17:02
avishayi think we can start with this, and EhudTrainin can expand on the BP to see if other options work and why not17:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:02
*** openstack changes topic to " (Meeting topic: vmwareapi)"17:02
jgriffithwinston-d: that would be cool, especially since we don't even disconnect in the current code17:02
openstackThe meeting name has been set to 'vmwareapi'17:02
twoputthi17:02
*** avishay has left #openstack-meeting17:02
*** comay has joined #openstack-meeting17:02
* hartsocks waves17:02
*** reed has joined #openstack-meeting17:02
*** joel-coffman has quit IRC17:02
hartsocksWho's about.17:02
*** winston-d has left #openstack-meeting17:02
rgerganovhi17:02
*** glenng has left #openstack-meeting17:02
garykhi17:02
tjones1hi17:02
*** ArxCruz has quit IRC17:03
hartsocksI know HK is looming. So, my plan was to cover blueprints, then bugs, then yield for any HK discussion.17:03
hartsocks#topic blueprints17:03
*** openstack changes topic to "blueprints (Meeting topic: vmwareapi)"17:03
hartsocksI've been looking through this:17:04
hartsockshttps://blueprints.launchpad.net/nova?searchtext=vmware17:04
*** sriram_c has left #openstack-meeting17:04
*** bswartz has left #openstack-meeting17:04
hartsocksSeeing what folks have already proposed. There have been some minor changes in the blueprint process…17:05
hartsocksfor something to get higher than "low" priority you are supposed to get 2 core developers on board for reviews.17:05
*** twoputt_ has joined #openstack-meeting17:05
hartsocksI think the theory there is to get better review cycles.17:06
*** smurugesan has joined #openstack-meeting17:06
*** neelashah has joined #openstack-meeting17:06
*** lblanchard has joined #openstack-meeting17:06
smurugesanHi, this is Sabari here.17:06
hartsockshey.17:07
*** nermina has joined #openstack-meeting17:07
garykhopefully after hk we will have a list of bp's17:07
*** twoputt has quit IRC17:07
hartsockswell,17:08
hartsocksyes.17:08
garykideally the vmware session will give us an opportunity to share with the community the changes we'd like to make and get their input17:08
garyki would hope that this will enable the core reviewers to get an idea of what each one is about17:08
*** SumitNaiksatam has joined #openstack-meeting17:08
hartsocksWould you like to discuss anything in this forum or are you saving it all for HK?17:08
*** ayoung_ has joined #openstack-meeting17:09
*** jjacob512 has quit IRC17:09
hartsocksI wanted to bring up a few BPs that are not nova/vmware specific that I think we should be aware of.17:09
hartsocks#link https://blueprints.launchpad.net/oslo/+spec/pw-keyrings17:09
garykthere is the etherpad for the session - https://etherpad.openstack.org/p/T4tQMQf5uS17:10
hartsocks… I think this blueprint will solve one of our stated goals of getting plain text passwords out of our configurations.17:10
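The general idea behind the pw-keyrings blueprint, sketched with the stock Python keyring library (how oslo actually wires this up is not specified here, and the service/user names below are placeholders):

    import keyring

    # One-time setup, e.g. from an operator-run helper, instead of writing the
    # password into a .conf file:
    keyring.set_password("nova-vmwareapi", "vcenter-admin", "s3cret")

    # At runtime the driver looks the credential up from the system keyring:
    password = keyring.get_password("nova-vmwareapi", "vcenter-admin")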
*** yassine has quit IRC17:10
dimso/17:10
garykcool17:10
*** zhiyan has left #openstack-meeting17:10
rgerganovI guess our SSO blueprint is also related to this goal17:11
hartsocksyep.17:11
hartsocks#link https://blueprints.launchpad.net/nova/+spec/vmware-sso-support17:11
hartsocksMy thought there was to use our SSO in vCenter with nova somehow.17:11
hartsocksMy current thinking is this might be best broken into two halves...17:12
*** vuil has joined #openstack-meeting17:12
garykcan you please elaborate on the sso?17:12
hartsocks… a Keystone plugin and a nova plugin.17:12
hartsocksvCenter has a built in SSO facility.17:12
hartsocksIt implements various security token schemes.17:12
*** radez_g0n3 is now known as radez17:13
*** pauli1 has joined #openstack-meeting17:13
hartsocksIn particular Holder of Key (HoK) tokens.17:13
garykok, sounds interesting17:13
*** EhudTrainin has quit IRC17:13
hartsocksThe idea behind a full featured SSO is that you can do "identity proxy" actions...17:13
hartsocksthat is a person could log in on the CLI or from Horizon and if SSO works properly...17:13
*** sandywalsh_ has joined #openstack-meeting17:13
hartsocksthat identity could be broadcast all the way through to Nova driver communications.17:14
twoputt_hartsocks: let's say we have the vmware sso implemented, do you think this is something that can go easily to Oslo?17:14
hartsocksI'm trying to get time with the VMware security group to make sure we can do that.17:14
twoputt_from my perspective the keyring approach is something that all the projects will get for free17:14
hartsockstwoputt_ not sure how to do it yet.17:14
rgerganovI have some hands-on experience with our sso because of my other project at vmw17:14
hartsockstwoputt_: but yes, that oslo BP I linked to is a better, more general solution.17:15
twoputt_yes I think so too17:15
*** jbernard has left #openstack-meeting17:15
hartsockstwoputt_ but I'll be running down a few more details to see if there isn't something better to use. The Oslo crypto solution uses a symmetric cipher embedded in the conf.17:16
*** artom2 has joined #openstack-meeting17:16
twoputt_ok17:16
*** artom2 has left #openstack-meeting17:16
hartsocksSo… there's an added benefit if there can be a transient key security context used… but we can only do that if there's an advanced way to integrate Keystone and vCenter SSO… not sure if it can happen yet.17:17
*** yogeshmehra has joined #openstack-meeting17:17
hartsocksBut… the Oslo thing. If some of you have a cycle & interest, that's something to watch.17:17
hartsocks#link https://blueprints.launchpad.net/nova/+spec/vmware-datastore-selection-by-scheduler-filter17:17
*** jmontemayor has quit IRC17:17
*** caitlin56 has quit IRC17:17
*** caitlin56 has joined #openstack-meeting17:18
hartsocksI posted this a while ago… not sure if it fits with Gary's current direction. I would like to kill it if it doesn't.17:18
*** danflorea has joined #openstack-meeting17:18
*** davidhadas has joined #openstack-meeting17:18
garykthis is one of the topics we'll discuss at the session17:19
smurugesanIn my opinion, I think this is better handled at the driver level.17:19
smurugesanbut it will be interesting to see the other approach as well17:19
hartsocksI would actually like to see more distributed logic in OpenStack… but the current design … style? … ethic? … is to locate *everything* in the scheduler.17:20
hartsocksCentralized knowledge in the scheduler requires one big thing to know *everything* and decide everything. That can't possibly scale well.17:20
hartsockswell, Gary, let me know if I can help or if I need to clarify something for your session.17:21
smurugesanI proposed a change to decouple datastore retrieval and selection in the driver. https://review.openstack.org/#/c/52557/117:21
smurugesanthe selection can be expanded to add more filters.17:21
smurugesanprobably we can do it once and better. Need everyone's thoughts17:22
smurugesanThat is one of the ways to collect aggregate stats.17:22
*** jmontemayor has joined #openstack-meeting17:23
garykhelp with the scheduling will be great. feel free to jump into the scheduling meetings on tuesdays. there are lots of ideas for improving things17:23
*** egallen has quit IRC17:23
*** pschaef has quit IRC17:24
hartsocksHmm… I think something similar to how the Scheduler does filters (these are rule systems, similar to a Prolog program) is probably a better way to specify some of the policy we have lying around in methods like datastore_ref_and_name …17:24
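For context, the scheduler-side "rules" hartsocks mentions are ordinary Nova host filters. A sketch of what a datastore-aware filter could look like follows; the BaseHostFilter/host_passes hook is the standard filter pattern, but the 'datastore_regex' hint and 'datastores' stat key are assumptions made for illustration.

    import re

    from nova.scheduler import filters

    class DatastoreCapabilityFilter(filters.BaseHostFilter):
        """Pass only hosts that report a datastore matching the requested regex."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            wanted = hints.get('datastore_regex')        # assumed hint name
            if not wanted:
                return True                              # nothing requested
            stats = getattr(host_state, 'stats', None) or {}
            reported = stats.get('datastores', [])       # assumed stat key
            return any(re.search(wanted, ds) for ds in reported)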
garyksmurugesan: it would be nice if that change also has the regex per clister17:24
smurugesanI will incorporate the same as a dependent patch.17:24
*** nati_ueno has joined #openstack-meeting17:24
garykcluster. not clister17:24
hartsocksnice.17:25
hartsocksI like small patches.17:25
hartsocks:-)17:25
*** jungleboyj has joined #openstack-meeting17:25
hartsocksMy concern is the config for our driver will get too big, too complex, and too brittle...17:25
*** notmyname has quit IRC17:25
hartsocksWhich brings me to this BP...17:25
hartsocks#link https://blueprints.launchpad.net/nova/+spec/vmware-driver-configuration-validator-module17:25
hartsocksDan W. had asked for something like this… which alleviates some problems from a big-heavy config.17:26
*** notmyname has joined #openstack-meeting17:26
hartsocksBut I think this treats a symptom of a bigger problem.17:26
hartsocks#link https://blueprints.launchpad.net/nova/+spec/vmware-auto-inventory17:26
*** vipul-away is now known as vipul17:27
hartsocksWhich was to find some way to "tag" and read vCenter inventory and report it up to the scheduler better.17:27
*** jecarey has quit IRC17:27
tjones1that would be very cool17:27
hartsocksyeah… but *how* to do it is what I'm stuck on.17:27
*** egallen has joined #openstack-meeting17:28
*** jungleboyj has left #openstack-meeting17:28
hartsocksI think I've heard other people with the same/similar ideas so… it's something to talk about.17:28
garykthats just an implementation detail :)17:28
hartsocks*lol*17:28
hartsocksIn theory, theory and practice are the same… in practice...17:28
hartsocks:-)17:28
*** ygbo has quit IRC17:29
hartsocksSo the decision is how to create the inventory filter to do the job. Ideally you would implement this in such a way you could easily change your mind about that implementation detail later.17:29
garykif it compiles on paper then it will work. or in our case, if it interprets on paper17:29
*** harlowja has joined #openstack-meeting17:29
hartsocks;-)17:29
hartsocksAll my best work is imaginary.17:30
hartsocksI really wish I was going to Hong Kong … *sigh*17:31
hartsocksAnyway.17:31
hartsocksJust wanted it on y'alls list if it made sense.17:31
hartsocksThere's one more BP I was browsing and found ...17:31
hartsocks#link https://blueprints.launchpad.net/nova/+spec/auto-vm-discovery-on-hypervisor17:31
hartsocksThis isn't VMware API specific… but it runs counter to some of the assumptions we made in our driver construction.17:32
*** tjones1 is now known as tjones17:32
hartsocksIt's a VM discovery BP.17:32
*** egallen has quit IRC17:32
*** glenng has joined #openstack-meeting17:33
hartsocksSo that when you turn on OpenStack … VMs currently on the … say vCenter … show up in your nova list.17:33
*** glenng has left #openstack-meeting17:33
*** twoputt_ has quit IRC17:33
hartsocksJust something to watch for.17:33
*** macjack has joined #openstack-meeting17:33
dansmithhartsocks: I don't think that will ever fly, fwiw17:34
*** twoputt has joined #openstack-meeting17:34
dansmithhartsocks: so if you're concerned about it, you're "doing it right" IMHO :)17:34
hartsocksI'm actually relieved to hear you say that.17:34
*** DrBacchus has quit IRC17:35
hartsocksgroovy :-)17:35
*** ayoung_ has quit IRC17:35
hartsocksI'll try to do my best "ghost in the machine" impression for you during the conference if anyone needs it.17:35
hartsocksAny other BP people want to chat about here before HK?17:36
*** egallen has joined #openstack-meeting17:36
*** danwent has quit IRC17:36
garykat the moment i have two in review17:36
garyk1. start of the implementation for the image caching17:36
garyk2. vm diagnostics (parity with other drivers)17:37
*** masayukig has quit IRC17:37
hartsocksnice. I've seen a lot of email flying around on image caching… is there anything in BP, etherpads, or public ML?17:37
garyki hope that by the end of the week there will be a first draft of the actual cache aging (i am debugging and writing unit tests)17:37
hartsocks*very* nice.17:38
*** masayukig has joined #openstack-meeting17:38
garyki'll hopefully get a wiki out soon17:38
*** IlyaE has joined #openstack-meeting17:38
garykit will just fit into the generic driver model17:38
hartsocksokay, it would be good to have public discussion of these things.17:38
*** Mandell has joined #openstack-meeting17:39
hartsocksThe vm diagnostics work is in a BP? or is it a bug?17:39
garykvm diagnostics is a bp. it was missing functionality in the driver17:40
hartsockslink?17:40
*** elo has quit IRC17:40
rgerganovhttps://blueprints.launchpad.net/nova/+spec/vmware-vm-diagnostics17:40
hartsocksokay, I've seen that.17:41
*** derekh has quit IRC17:41
hartsocksAny bugs we need to talk about?17:41
hartsocks#topic bugs17:41
*** openstack changes topic to "bugs (Meeting topic: vmwareapi)"17:41
*** jsergent has quit IRC17:41
hartsocksBTW: I didn't cut off anyone who had a BP that they wanted to bring up did I?17:42
*** masayukig has quit IRC17:42
*** lsmola_ has joined #openstack-meeting17:42
hartsocksAny bugs burning a hole in someone's cloud?17:42
garykfrom the tags there are 46 vmware bugs.17:43
garykit would be nice if we could make a dent in that and help bring down the amount of nova bugs17:43
garyksome are in progress https://bugs.launchpad.net/nova/+bugs?field.tag=vmware (which we need to get reviews)17:44
garyki do not know why the query shows committed ones too ;)17:44
rgerganovIf I want to take a bug I should be looking for status CONFIRMED which is not assigned, right?17:45
*** whenry has joined #openstack-meeting17:45
hartsocksI've been holding off on the weekly review email because I figure everyone is probably focused on the summit. I'll start that up again the week of November 11th17:45
garykpart of our triage responsibilities are to confirm the bugs17:45
rgerganovok, I see17:45
hartsocksyep. So I have a query...17:45
hartsockshttps://bugs.launchpad.net/nova/+bugs?field.tag=vmware+&field.status%3Alist=NEW17:46
vuilfeel free to talk to the person assigned to the bug if you are interested in taking over too.17:46
hartsocksTo find new bugs that haven't been confirmed.17:46
*** rongze has joined #openstack-meeting17:46
*** DrBacchus has joined #openstack-meeting17:46
hartsocksSometimes confirming a bug is a lot of work.17:46
tjonesi've not been able to repro https://bugs.launchpad.net/nova/+bug/124035517:47
tjones(which is why it is not confirmed)17:47
hartsocksYeah. I have a feeling that this break happens in special conditions.17:47
*** vincent_hou has quit IRC17:48
hartsocksI can try and look at it, but so far only Gary's said he's seen it before.17:48
*** caitlin56 has left #openstack-meeting17:48
*** cody-somerville has quit IRC17:49
hartsocks#action shawn follow up on https://bugs.launchpad.net/nova/+bug/124035517:49
*** rushiagr has left #openstack-meeting17:49
hartsocksAny other bugs or is it time for open discussion?17:50
hartsocks#topic open discussion17:51
*** openstack changes topic to "open discussion (Meeting topic: vmwareapi)"17:51
hartsocksI have a proposal for Hong Kong...17:51
hartsocksOpenSnack17:51
hartsocksYou would have these informal snack times …17:51
hartsocksthat's all I got.17:51
hartsocksSound good?17:51
tjonesha ha17:52
*** jsavak has joined #openstack-meeting17:52
hartsocksWell, with that… we'll plan on resuming on November 13th.17:53
hartsocksUnless someone really wants to hold the meeting during the summit.17:53
tjoneswhat time is it in HK now?17:54
hartsocksAlso, a reminder that there's this pesky daylight-saving-time change in parts of the world…17:54
hartsocksThis slot is UTC, which has no DST.17:54
hartsocksum..17:54
*** twoputt has quit IRC17:54
hartsocks2am17:54
*** twoputt has joined #openstack-meeting17:55
hartsocksYou don't want to meet at 2am?17:55
rgerganov:)17:55
hartsocksWhat?!?17:55
tjonesnot even a little17:55
tjones:-D17:55
*** joesavak has quit IRC17:55
*** joesavak has joined #openstack-meeting17:55
hartsocksokay folks, have fun at your summit. Us internet ghosts will scare up our own fun while you're out.17:56
vuilI know you will be up at 2am, shawn. Probably not by choice17:56
*** vkmc has quit IRC17:56
hartsocks*lol*17:56
hartsocksyep.17:56
*** ivar-lazzaro_ has joined #openstack-meeting17:56
*** jsavak has quit IRC17:56
hartsocks#endmeeting17:56
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"17:56
openstackMeeting ended Wed Oct 30 17:56:53 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:56
openstackMinutes:        http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-10-30-17.02.html17:56
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-10-30-17.02.txt17:56
openstackLog:            http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-10-30-17.02.log.html17:57
*** jsergent has joined #openstack-meeting17:57
*** ArxCruz_ has quit IRC17:57
*** ivar-lazzaro_ has quit IRC17:57
*** ArxCruz has joined #openstack-meeting17:58
*** SridarK has joined #openstack-meeting17:58
SumitNaiksatamNeutron FWaaS folks here?17:59
SridarKSumitNaiksatam: Hi18:00
SumitNaiksatamhi SridarK18:00
SumitNaiksatamlets give a couple of mins, and then we can get started18:00
*** garyk has left #openstack-meeting18:01
*** Kaiwei has joined #openstack-meeting18:02
SumitNaiksatamok lets get started18:03
SumitNaiksatam#startmeeting Networking FWaaS18:03
openstackMeeting started Wed Oct 30 18:03:32 2013 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.18:03
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.18:03
*** openstack changes topic to " (Meeting topic: Networking FWaaS)"18:03
openstackThe meeting name has been set to 'networking_fwaas'18:03
SumitNaiksatamtoday's meeting is also in preparation for the Icehouse summit discussions/sessions18:03
SumitNaiksatam#info etherpad: https://etherpad.openstack.org/p/icehouse-neutron-fwaas18:04
SumitNaiksatam#topic Icehouse Summit session18:04
*** openstack changes topic to "Icehouse Summit session (Meeting topic: Networking FWaaS)"18:04
SumitNaiksatam#info link: http://icehousedesignsummit.sched.org/event/c8bf224a93d4e689ee91ffd958ae56a818:04
SumitNaiksatam#info date/time: Tuesday, Nov 5th, 2:50 PM18:04
*** cody-somerville has joined #openstack-meeting18:04
SumitNaiksatambtw, no FWaaS meeting in the next week or the week after (since we won't be having Neutron team meetings either), but we will start again in the following week18:05
*** tjones has left #openstack-meeting18:05
SumitNaiksatamcurrently we are the first services' session in the Neutron agenda18:05
SumitNaiksatamhope we can set a good stage ;-)18:05
SumitNaiksatamany issues for anyone with the session time?18:06
SridarKexcept for jet lag should be good :-)18:06
SumitNaiksatamSridarK: :-)18:06
SumitNaiksatamSridarK: get a pillow :-P18:07
SridarK:-)18:07
SumitNaiksatamok so on to the next topic18:07
SumitNaiksatam#topic zones18:07
*** openstack changes topic to "zones (Meeting topic: Networking FWaaS)"18:07
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/fwaas-zones-api18:07
SumitNaiksatamwe had more discussions on this and more input from folks18:07
*** jlucci has quit IRC18:07
SumitNaiksatamzones will be initially used in the source and/or destination arguments for a rule (this will18:08
SumitNaiksatambe later extended to firewall-rule-sets when they are defined)18:08
SumitNaiksatammost people seem to be in agreement with the current proposal18:08
SumitNaiksatamnote that we do not aim to perform validation on the composition of ports in the zone (if18:08
SumitNaiksatamthere are suggestions, please let us know) since different vendors seem to want to interpret differently18:08
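Since the zones API is still only a blueprint at this point, the resource shape below is a guess based on the discussion: a zone is a named collection of ports (whose composition is not validated), and a rule can then name zones as source/destination. These payloads are hypothetical, for illustration only.

    zone = {
        "zone": {
            "name": "web-tier",
            "ports": ["PORT_UUID_1", "PORT_UUID_2"],   # composition not validated
        }
    }

    firewall_rule = {
        "firewall_rule": {
            "name": "allow-http-to-web-tier",
            "protocol": "tcp",
            "destination_port": "80",
            "destination_zone_id": "ZONE_UUID",        # zone used in the destination
            "action": "allow",
        }
    }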
*** masayukig has joined #openstack-meeting18:08
SridarKI think we are getting to some convergence may not be perfect for all but is a good start18:09
SridarKand folks seem to be in general agreement which is good18:09
SumitNaiksatamSridarK: right, i was just going to say you are looking at this feature closely, right?18:09
SumitNaiksatamin terms of driving it18:09
SridarKyes will do Sumit18:09
SumitNaiksatamthere is a pending item here to converge on the trunk port to constituent sub-interface ports mapping18:10
SumitNaiksatamthe discussion beyounn started18:10
*** twoputt has quit IRC18:10
SumitNaiksatami don't believe we are set on this yet, right?18:10
*** spzala has quit IRC18:11
SridarKyes i think we can discuss more in HK with some of the proposals on trunk ports18:11
SridarKwith all folks being there18:11
SumitNaiksatam#action SridarK beyounn SumitNaiksatam to followup with existing blueprint owners and if required, file a new blueprint, if new blueprint, bring it up for discussion during FWaaS slot in the summit18:11
*** twoputt_ has joined #openstack-meeting18:11
*** twoputt has joined #openstack-meeting18:11
SumitNaiksatammore thoughts on zones?18:11
SumitNaiksatamis Kaiwei here?18:11
*** masayuki_ has joined #openstack-meeting18:12
KaiweiSumitNaiksatam: I'm here18:12
SumitNaiksatamhi Kaiwei18:12
*** masayukig has quit IRC18:12
*** rossella_s has quit IRC18:12
SumitNaiksatamKaiwei: did the email exchange on the mailing list regarding zones satisfy your concerns?18:13
Kaiweiyes, it does18:13
SumitNaiksatamKaiwei: ok good18:13
KaiweiI'm fine using port, though I prefer network/subnet uuid :)18:13
SumitNaiksatamKaiwei: i see your point18:13
*** denis_makogon has joined #openstack-meeting18:13
SumitNaiksatamKaiwei: we can brainstorm on this a little more until the summit18:14
SumitNaiksatami don't think we plan any implementation on it at least until then18:14
Kaiweisure18:14
*** garyk has joined #openstack-meeting18:14
SumitNaiksatamSridarK RajeshMohan beyounn: anything else on zones?18:14
SridarKI am fine18:15
beyounnsorry, I was late18:15
beyounncatching up now18:15
SumitNaiksatamseems like others are in silent agreement :-)18:15
SridarKwe can refine with more discussion at HK18:15
SumitNaiksatambeyounn: np18:15
SumitNaiksatamok next topic then18:16
SumitNaiksatam#topic Service Insertion for Firewall18:17
*** openstack changes topic to "Service Insertion for Firewall (Meeting topic: Networking FWaaS)"18:17
beyounnok, so far so good18:17
beyounn:-)18:17
*** hartsocks has left #openstack-meeting18:17
SumitNaiksatambeyounn: ok18:17
*** matel has quit IRC18:17
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/fwaas-service-insertion18:17
SumitNaiksatami just created this bp18:17
SumitNaiksatampretty much depends on the other services' insertion and chaining discussion18:18
SumitNaiksatamwith this bp we can do router level insertion18:18
*** SergeyLukjanov is now known as _SergeyLukjanov18:18
*** garyduan has joined #openstack-meeting18:19
*** _SergeyLukjanov has quit IRC18:19
SumitNaiksatamgaryduan: hi, we are discussing service insertion for firewall18:19
*** nati_ueno has quit IRC18:19
garyduanSorry, I am late18:20
*** Mandell has quit IRC18:20
*** nati_ueno has joined #openstack-meeting18:20
SumitNaiksatamgaryduan: np18:20
SridarKSumitNaiksatam: so the insertion will be at the level of a router and the zone members will be on top of that still as part of the rule18:20
SumitNaiksatamSridarK: yes, thats the current thinking18:20
SridarKok18:20
SumitNaiksatamSridarK: insertion should be able to setup the datapath correctly18:20
*** danflorea has left #openstack-meeting18:21
SumitNaiksatamSridarK: zones will be used to interpret the rules18:21
*** Kaiwei has quit IRC18:21
SridarKyes makes sense18:21
beyounnSumit, how about the BITW case?18:21
*** fungi has quit IRC18:21
beyounnFor that we don't need router right?18:21
SumitNaiksatambeyounn: you mean insertion for BITW?18:22
beyounnyes18:22
SumitNaiksatambeyounn: no router for that18:22
beyounnok18:22
SumitNaiksatambeyounn: the insertion context will be different for that (it will be a subnet)18:22
*** enikanorov has joined #openstack-meeting18:22
garyduanIf a router is specified, the fwaas driver should understand the insertion context and only load the specified routers18:22
beyounnok18:22
SumitNaiksatambeyounn: i was talking about the reference implementation18:22
SumitNaiksatamgaryduan: that is correct18:22
garyduanok. thinking of service framework integration18:23
*** SergeyLukjanov has joined #openstack-meeting18:23
SumitNaiksatamgaryduan: we have different agenda item for service type integration18:23
SumitNaiksatamgaryduan: unless you have something to discuss in the insertion context18:24
*** nati_ueno has quit IRC18:24
SumitNaiksatamgaryduan: per the latest proposal we don't modify the existing service type framework at all18:24
*** SergeyLukjanov has quit IRC18:25
garyduanSumitNaiksatam: "don't modify", can you elaborate?18:25
SumitNaiksatamgaryduan: we will use the existing service type/provider framework mostly as is18:25
garyduanyes.18:26
SumitNaiksatamgaryduan: the proposed insertion context will be a separate entity18:26
garyduanSumitNaiksatam: Yes. I was just thinking the behavior of fwaas driver.18:26
garyduanSumitNaiksatam: shouldn't have any impact on insertion18:26
SumitNaiksatamgaryduan: the plugin will have to interpret the insertion context, and convey the router id to the agent/driver18:27
garyduanSumitNaiksatam: yes.18:27
SumitNaiksatamgaryduan: so we will have to make changes to the agent and driver as well (reference implementation) to account for this18:27
SumitNaiksatamgaryduan: yes, driver will change18:28
garyduanSumitNaiksatam: agent need some change18:28
SumitNaiksatamgaryduan: yes18:28
SumitNaiksatamgaryduan: hopefully changes will not be too much18:28
SumitNaiksatamgaryduan: perhaps changes are more for interpreting zone18:28
SumitNaiksatamwe will see18:28
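A sketch of the insertion-context idea being discussed: attach the firewall to an explicit router (or a subnet in the bump-in-the-wire case) so the plugin only pushes rules to that router's agent/driver. The field names are assumptions, not the blueprint's final API.

    firewall = {
        "firewall": {
            "name": "edge-fw",
            "firewall_policy_id": "POLICY_UUID",
            "insertion_context": {          # hypothetical attribute
                "type": "router",           # or "subnet" for BITW insertion
                "ids": ["ROUTER_UUID"],
            },
        }
    }
    # The plugin would interpret insertion_context and convey only ROUTER_UUID to
    # the agent/driver, instead of configuring the firewall on every router.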
*** nati_ueno has joined #openstack-meeting18:28
garyduanSumitNaiksatam: I will think about it18:28
SumitNaiksatamanything more on insertion?18:28
SumitNaiksatamok moving on18:29
SumitNaiksatam#topic Address Objects18:29
*** openstack changes topic to "Address Objects (Meeting topic: Networking FWaaS)"18:29
pcm_SumitNaiksatam: Should I be able to access the spec for BP link for FWaaS service insertion? (I can't)18:29
* pcm_ VPNaaS team member lurking18:29
SumitNaiksatam#undo18:29
openstackRemoving item from minutes: <ircmeeting.items.Topic object at 0x39cca10>18:29
SumitNaiksatampcm_: hi, of course everyone is welcome :-)18:30
SumitNaiksatampcm_: no formal spec document yet18:30
*** SergeyLukjanov has joined #openstack-meeting18:30
SumitNaiksatampcm_: however the dependent blueprint captures a lot of details18:30
SumitNaiksatampcm_: the one regarding insertion and chaining18:30
pcm_SumitNaiksatam: Ah. Accounts for access denied :)18:30
*** Mandell has joined #openstack-meeting18:30
SumitNaiksatampcm_: yes, because there is no spec :-)18:31
pcm_SumitNaiksatam: np. Will look at the BP.18:31
*** Kaiwei has joined #openstack-meeting18:31
SumitNaiksatampcm_: ok, we can discuss offline if you have followup questions18:31
SumitNaiksatam#topic Address Objects18:31
*** openstack changes topic to "Address Objects (Meeting topic: Networking FWaaS)"18:31
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/fwaas-address-objects18:31
SumitNaiksatamwe will first target static IP objects18:32
*** ryanpetrello has quit IRC18:32
SumitNaiksatamBrian does not seem to be here18:32
*** dcramer_ has quit IRC18:32
*** masayuki_ has quit IRC18:32
SumitNaiksatam#topic Service Objects18:33
*** openstack changes topic to "Service Objects (Meeting topic: Networking FWaaS)"18:33
*** masayukig has joined #openstack-meeting18:33
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/fwaas-customized-service18:33
SumitNaiksatamthe proposal is to capture protocol, port, icmp type and code, and timeout18:33
SumitNaiksatambeyounn has the blueprint18:33
beyounnSumit, should I mark the approver to you?18:33
SumitNaiksatambeyounn yes18:33
SumitNaiksatambeyounn: anything more to discuss beyond what we have already?18:34
*** jlucci has joined #openstack-meeting18:34
*** sushils has quit IRC18:34
beyounnIf no one has questions, then let's move on18:34
KaiweiI think in last week's meeting we were only talking about what is required18:34
Kaiweiand what is optional18:34
SumitNaiksatamKaiwei: ok18:34
SumitNaiksatambeyounn: can you elaborate?18:34
beyounnyes,18:34
beyounnI think we have agreed that timeout is optional18:35
beyounnI also think protocol/port should be a must-have field18:35
beyounnbut provide the notion of "ANY"18:35
SumitNaiksatambeyounn: ok18:35
beyounnI think this can cover most of cases18:35
KaiweiI think protocol/port should be optional, and we should deprecate the "protocol" attribute in the current rule model18:35
*** DinaBelova has quit IRC18:36
SumitNaiksatamKaiwei: that could be one approach18:36
Kaiweiit doesn't make sense to have protocol in the service object and at the rule level....18:36
beyounnKaiwei,18:36
beyounnwe should not force people to only go one way18:36
SumitNaiksatambeyounn: thats right, that was the thinking18:37
beyounnUser should be able to decide whether to use service object18:37
beyounnor not18:37
SumitNaiksatamKaiwei: we want to let the users be able to define iptables type of rules18:37
RajeshMohanSumit, just joined. Sorry for joining late.18:37
SumitNaiksatamKaiwei: that will be the basic rule (sort of core definition)18:37
SumitNaiksatamRajeshMohan: hi, np18:37
*** masayukig has quit IRC18:37
SumitNaiksatamKaiwei: the service object will be an extension to the basic rule definition18:38
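To make the proposal concrete, here is a hypothetical service object alongside a rule that references it: protocol/port required (with an explicit "any"), ICMP type/code and timeout optional. The field names are illustrative, not settled API.

    service_object = {
        "service_object": {
            "name": "custom-dns",
            "protocol": "udp",       # required; "any" allowed
            "port": "53",            # required; "any" allowed
            "icmp_type": None,       # only meaningful when protocol == "icmp"
            "icmp_code": None,
            "timeout": 30,           # optional session timeout, in seconds
        }
    }

    # A basic rule can still carry protocol/port inline; referencing a service
    # object is the extended form, and the two should not be set at the same time:
    firewall_rule = {
        "firewall_rule": {
            "name": "allow-dns",
            "service_object_id": "SERVICE_OBJECT_UUID",
            "action": "allow",
        }
    }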
*** smurugesan has left #openstack-meeting18:38
beyounnSumit:+118:38
SumitNaiksatambeyounn: sorry did not mean to cut you18:38
beyounnno, not at all18:38
SumitNaiksatamKaiwei: thoughts?18:38
SumitNaiksatamor for that matter, others as well?18:39
SumitNaiksatamwe don't have to decide it here18:39
beyounnKaiwei,if you want you can post the idea to BP directly18:40
SumitNaiksatambeyounn: thats a good suggestion18:40
RajeshMohanIf we have contradicitng values in service object and rule, then we have to report error?18:40
SumitNaiksatamRajeshMohan: good point18:40
KaiweiI believe we can provide all the flexibility to users if we want, but having two ways (or multiple ways) of configuring one field doesn't seem to be a good approach18:40
SumitNaiksatamperhaps that is Kaiwei's concern as well18:40
RajeshMohanyes - I was trying to understand where Kaiwei is coming from as well18:41
Kaiweiyeah, especially they can conflict with each other18:41
beyounnMy thoughts is, you can not have both service or rule level proto/port defined at the same time18:41
beyounnI only thought that if you define a service, you need to have proto/port defined18:41
SumitNaiksatamKaiwei RajeshMohan we will need to set some priority order18:41
beyounnotherwise, the service object is wrong18:42
RajeshMohanok18:42
SumitNaiksatamalso keep in mind that the service definition we are talking about is a service "object"18:42
beyounnI will try to squeeze in the group concept18:42
SumitNaiksatamas a user it seems to be a little cumbersome to me to have to create an object if i just want to create a simple rule18:42
beyounnbut not sure if I will have time18:43
SumitNaiksatamok discussion on to the blueprint18:43
garyduantypically service definition should contain protocol, tcp:80 defines HTTP, (not considering AppID for now)18:43
beyounnI will update the BP18:43
*** galstrom is now known as galstrom_zzz18:44
*** jhenner has quit IRC18:44
Kaiweione way to simplify this is to provide "inline" service-object that can be specified in the rule directly, without explicitly creating a service-object....18:44
SumitNaiksatamKaiwei: agreed18:44
SumitNaiksatamKaiwei: cli, for example, can hide the object creation18:44
garyduanKaiwei: I think it's the current way18:44
SumitNaiksatamKaiwei: however from an api perspective we are still creating a new object18:44
beyounnbut I also hope we can reuse service object18:45
SumitNaiksatambeyounn: good point18:45
SumitNaiksatambeyounn: yes18:45
SumitNaiksatambeyounn: that would be the goal18:45
*** fungi has joined #openstack-meeting18:45
SumitNaiksatamlet's first tackle what goes into the service (mandatory and optional) and then think about whether we want to change the existing rule18:45
SumitNaiksatamrule definition18:45
beyounnok18:45
SumitNaiksatamdiscussion on to the blueprint/mail list18:45
Kaiweiok18:46
SumitNaiksatam#topic service_type framework18:46
*** openstack changes topic to "service_type framework (Meeting topic: Networking FWaaS)"18:46
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/fwaas-service-types-integration18:46
SumitNaiksatami believe garyduan is on this18:46
SumitNaiksatamgaryduan: anything to discuss?18:46
RajeshMohanSumitNaiksatam: Will this allow us to write a plugin-driver?18:47
garyduanNot much, we talked about it in insertion context18:47
*** markwash has quit IRC18:47
garyduanRajeshMohan: plugin-driver is what I am thinking18:47
SumitNaiksatamRajeshMohan: you can write that today as well, but this will allow you to create firewalls using different drivers18:47
SumitNaiksatamRajeshMohan: as of today, you can configure exactly one driver and all firewalls are created using that18:48
garyduanprovider is set that firewall level18:48
garyduan'at' firewall level18:48
RajeshMohanThanks Sumit, Gary.18:49
SumitNaiksatamRajeshMohan: with this change, you can decide via the API as to which "provider" you want for a particular firewall18:49
SumitNaiksatamprovider maps to a driver18:49
RajeshMohanGood. Thanks.18:49
*** lblanchard has quit IRC18:49
SumitNaiksatamgaryduan: okay, i guess nothing new to discuss, we are going to do something similar to LBaaS and VPNaaS18:49
SumitNaiksatam#topic revisit firewall to firewall_policy association18:49
*** openstack changes topic to "revisit firewall to firewall_policy association (Meeting topic: Networking FWaaS)"18:49
*** markwash has joined #openstack-meeting18:50
SumitNaiksatam#link https://blueprints.launchpad.net/neutron/+spec/neutron-fwaas-explicit-commit18:50
SumitNaiksatammost vendors seem to think that the current implementation in the patch is good and works for them18:50
RajeshMohanSumitNaiksatam: +118:50
SumitNaiksatamby works i mean, they like the semantics18:50
*** garyk has quit IRC18:50
SumitNaiksatamRajeshMohan: okay18:50
SridarKSumitNaiksatam: same here +118:50
SumitNaiksatamSridarK: okay18:51
SumitNaiksatamhowever some members in the neutron core team have concerns18:51
SumitNaiksatamso i guess we will bring this up in the summit session18:51
garyduanone question18:51
SumitNaiksatamgaryduan: please18:51
*** notreallymyname has joined #openstack-meeting18:51
*** jcoufal has joined #openstack-meeting18:51
garyduanif admin creates a firewall first18:51
garyduanthen create rules and policy18:52
*** flaper87 is now known as flaper87|afk18:52
garyduanat this moment, policy is not associated with firewall yet18:52
garyduannow, to apply the policy to firewall18:52
SumitNaiksatamyeah18:52
SumitNaiksatamtwo step18:53
garyduan1. associate the firewall with the policy, 2. commit18:53
SumitNaiksatamgaryduan: yeah18:53
*** DrBacchus has quit IRC18:53
SumitNaiksatamshould it be different?18:53
garyduanwhat I want to check is: say the firewall was created with another policy,18:53
garyduanand to associate it with the new policy18:53
garyduanadmin has to, 1. update firewall, 2. commit18:54
*** venkatesh has quit IRC18:54
*** esheffield has quit IRC18:54
*** nikhil has quit IRC18:54
SumitNaiksatamgaryduan: when you create firewall without policy, there is default policy to deny all traffic18:54
SumitNaiksatamgaryduan: are you suggesting it should be one step?18:55
garyduanSumitNaiksatam: just trying to confirm: always two steps18:55
SumitNaiksatamgaryduan: yeah, thats the current proposal18:55
garyduanSumitNaiksatam: create policy, create firewall with policy, commit18:55
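The two-step flow garyduan is confirming, written out as REST calls. The firewall update is the existing FWaaS API; the commit action is only a blueprint at this point, so its path and body are assumptions.

    import json
    import requests

    NEUTRON = "http://neutron.example.com:9696/v2.0"
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    # Step 1: associate (or re-associate) the firewall with the policy.
    requests.put("%s/fw/firewalls/%s" % (NEUTRON, "FW_UUID"),
                 data=json.dumps(
                     {"firewall": {"firewall_policy_id": "POLICY_UUID"}}),
                 headers=HEADERS)

    # Step 2: explicitly commit, so the backend is reconfigured exactly once.
    requests.put("%s/fw/firewalls/%s/commit" % (NEUTRON, "FW_UUID"),  # hypothetical
                 data=json.dumps({}), headers=HEADERS)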
SumitNaiksatamokay, i want to have time for open discussion for anything that we may have missed, and that we should bring up at the summit18:56
SumitNaiksatamgaryduan: lets take offline18:56
*** acoles has joined #openstack-meeting18:56
garyduanSumitNaiksatam: ok18:56
SumitNaiksatam#topic Open Discussion18:56
*** openstack changes topic to "Open Discussion (Meeting topic: Networking FWaaS)"18:56
SumitNaiksatamso have we missed anything in the past three weeks of discussion that is critical to someone?18:57
SumitNaiksatamat this point we will mostly bring up the items discussed today at the summit18:57
SumitNaiksatamsince they have owners18:57
RajeshMohanZones and Commit are critical to us. Is Zones assigned to someone?18:58
RajeshMohanI would think it will have more than one BP18:58
SumitNaiksatamRajeshMohan: yes18:58
*** peluse has joined #openstack-meeting18:58
RajeshMohanSumitNaiksatam: Thanks18:58
SumitNaiksatamwell by owners i meant, at least some initial interest18:59
SridarKRajeshMohan: i will pick up work on zones18:59
SumitNaiksatamwe can decide how we can split work based on the interest and needs18:59
*** hemanth_ has joined #openstack-meeting18:59
*** johnthetubaguy has quit IRC18:59
SumitNaiksatamplease note that, as a team we will also need to do tempest work18:59
RajeshMohanSridarK: Thanks Sridar.18:59
SridarKof course we all work together19:00
*** markwash has quit IRC19:00
SumitNaiksatamSridarK: yeah, like last time19:00
SridarKyes :-)19:00
SumitNaiksatamI think we have more hands now19:00
*** portante has joined #openstack-meeting19:00
SumitNaiksatamso hopefully it will be better19:00
SridarK+119:00
SumitNaiksatamalright folks, thanks much for attending this19:00
SumitNaiksatamfollow up discussions over emails19:00
*** lincolnt has joined #openstack-meeting19:01
SridarKthanks and see u all at HK19:01
RajeshMohanSumitNaiksatam: Service insertion is obviously important but I did not bring it up as I assumed that is a different discussion19:01
SumitNaiksatamwe can huddle together after the summit session19:01
*** clayg has joined #openstack-meeting19:01
*** Kaiwei has quit IRC19:01
SumitNaiksatamRajeshMohan: thanks, yes absolutely19:01
SumitNaiksatami will let you know date/time about huddle in HK19:01
SumitNaiksatamHuddle in HK, i like that :-P19:01
RajeshMohanSumitNaiksatam: Thanks. See you all in HK.19:01
SumitNaiksatamalright, thanks all19:01
SumitNaiksatambfn!19:02
*** keving_ has joined #openstack-meeting19:02
SridarKbye19:02
beyounnBye19:02
*** zaitcev has joined #openstack-meeting19:02
SumitNaiksatam#endmeeting19:02
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"19:02
openstackMeeting ended Wed Oct 30 19:02:25 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-30-18.03.html19:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-30-18.03.txt19:02
openstackLog:            http://eavesdrop.openstack.org/meetings/networking_fwaas/2013/networking_fwaas.2013-10-30-18.03.log.html19:02
notreallymyname#startmeeting swift19:02
openstackMeeting started Wed Oct 30 19:02:39 2013 UTC and is due to finish in 60 minutes.  The chair is notreallymyname. Information about MeetBot at http://wiki.debian.org/MeetBot.19:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:02
*** openstack changes topic to " (Meeting topic: swift)"19:02
openstackThe meeting name has been set to 'swift'19:02
notreallymynamewelcome to the swift team meeting today19:03
notreallymynamewho's here?19:03
torgomaticyo19:03
pelusehowdy19:03
portanteo/19:03
zaitcevo/19:03
* clayg isn't really here19:03
* notreallymyname isn't either19:03
notreallymynameagenda items https://wiki.openstack.org/wiki/Meetings/Swift19:03
notreallymynamenot much for me to bring up this week19:03
acoleshere19:03
notreallymynamenext week is the HK summit19:04
notreallymynameshould be fun19:04
lincolntHi, Lincoln Thomas, HP, 1st time here, leading design session on Metadata Search in HK19:04
notreallymynamelincolnt: great. looking forward to hearing from you next week19:04
lincolntthx19:04
notreallymynameas for day-to-day stuff, I wanted to bring up the swift-bench separation19:04
portantenotreallymyname is usually known as notmyname, which is not his name19:04
swifterdarrello/19:05
notreallymynameah yes19:05
*** keving1 has joined #openstack-meeting19:05
*** keving_ has quit IRC19:05
notreallymynameI'm notmyname, but my bouncer is not working (or the guest wifi I'm on...)19:05
keving1here19:05
notreallymyname#topic swift-bench separation19:05
*** openstack changes topic to "swift-bench separation (Meeting topic: swift)"19:05
notreallymynameit's pretty much ready to go19:05
notreallymynamethe goals here are (among other things) to remore the dependency of swift on python-swiftclient19:05
claygremore is a thing19:06
notreallymynameso we need to make sure the rest of swift-bench is good to go and then extract it from swift itself19:06
*** streetcat has joined #openstack-meeting19:06
portanteso the copy has been done, no other changes made to the swift-bench code on the swift repo19:06
portanteso the next thing to do is git rm the tree and change the requirements.txt file/19:07
portante?19:07
swifterdarrellnotreallymyname: last I checked (@ hackathon), the swift-bench repo needed a bit of work: setup.py, (pbr ANYONE?!), Changelog, etc, etc19:07
notreallymynameright (sorry, had someone ask a question)19:07
notreallymynameya, swift-bench needs basic stuff like the testrunner, a readme, etc19:07
notreallymynamesetup.py19:07
notreallymynamehopefully not pbr ;-)19:07
notreallymynameso we need someone to do it19:08
* zaitcev hides19:08
notreallymynameshould take a day or two, mostly boilerplate stuff19:08
pelusesounds so exciting19:08
*** enikanorov has quit IRC19:08
notreallymynameI won't be able to do it until after hong kong at the earliest19:08
notreallymynameI guess we don't need a volunteer right now, but it's a thing that needs doing19:09
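As a rough idea of the boilerplate being described for a standalone swift-bench repo, a minimal setuptools-based setup.py might look like the sketch below. The version, packaging layout and console-script module path here are assumptions for illustration, not the file that eventually landed.

    # Hypothetical minimal setup.py for a split-out swift-bench repository.
    # Metadata, version and the console-script module path are assumptions.
    from setuptools import setup, find_packages

    setup(
        name='swift-bench',
        version='0.0.1',                    # placeholder version
        description='Benchmarking tool for OpenStack Swift',
        packages=find_packages(exclude=['tests', 'tests.*']),
        install_requires=[
            'python-swiftclient',           # the client dependency moves here, out of swift
        ],
        entry_points={
            'console_scripts': [
                'swift-bench = swiftbench.cli:main',   # hypothetical module path
            ],
        },
    )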
notreallymynamemoving on19:09
notreallymyname#topic EC/policy status19:09
*** openstack changes topic to "EC/policy status (Meeting topic: swift)"19:09
notreallymynametorgomatic: peluse: keving1: any updates?19:09
peluseso I've got one slide at the "Intel and OpenStack" session in HK covering EC - just FYI for everyone19:09
notreallymynamecool19:10
peluse..and on the code front just waiting on you torgomatic to review a few things :)19:10
torgomaticsorry, I've been busy :|19:10
pelusenotmyname - do we want to do an impromptu policies demo of any kind?19:10
torgomatictoo many things going on at once, and I don't multitask particularly well19:10
notreallymynameI've got a tech session on it, and I'd like to do a high-level demo (ie two policies in one cluster, http only, no replication etc)19:10
notreallymynamepeluse: yes19:10
pelusenice cross typing19:10
keving1@torgomatic nothing major…  pyeclib is up on pypi19:10
peluseshall we jsut coordinate there?19:10
notreallymynamepeluse: or online before then :-)19:10
peluseOK, lets sync up today or tomorrow then on the demo19:11
notreallymynamepeluse: tomorrow for me. I'm at a workshop all day19:11
pelusetorgomatic - back to your comment.  no huge hurry but I am holding off on any more work expanding on the policies until you take a quick look at the two patches I have up there19:11
notreallymynameok19:11
pelusepolicies that is19:11
*** ndipanov is now known as ndipanov_gone19:11
notreallymyname#topic 410 response19:12
*** openstack changes topic to "410 response (Meeting topic: swift)"19:12
notreallymynametorgomatic: tag19:12
torgomaticso the question here is regarding a patch that would make some account GET/HEAD requests respond w/41019:12
torgomaticright now, there's a bug where a deleted-but-not-reclaimed account yields 204 responses on GET/HEAD requests19:12
torgomaticbut you can't PUT/POST to it19:13
notreallymynametorgomatic: can you see what the old behavior was and leave that as a comment in gerrit?19:13
portantemeaning, the account is marked as being created19:13
portantebut that created account can't be used19:13
torgomaticnotreallymyname: yes, I can do a little code archaeology19:13
portantedoes it get reclaimed too?19:13
notreallymynameportante: 204 -- no content19:13
notreallymynametorgomatic: thanks19:13
torgomaticoh, also this only happens when account autocreate is on19:13
peluseis this a reported bug?19:14
portantebut does it change the account so that it is actually created again?19:14
torgomaticportante: I don't think so19:14
portanteit seems that autocreate on should create the account and not return a 41019:14
portanteseems counter to the notion of auto-create, no?19:14
notreallymynamewhy did we add the autocreate magic responses? ;-)19:14
portanteor am I missing something else about the nature of the API?19:15
torgomaticthere's an interval after an account is deleted in which it cannot be recreated19:15
*** vkmc has joined #openstack-meeting19:15
torgomaticthat's to let the reaper do its job19:15
portanteand you are saying that is true today before this change19:15
torgomaticaccounts are weird in that you can delete them when nonempty19:15
portanteoh19:15
torgomaticportante: that's always been true AFAIK19:15
notreallymyname"you" being a cluster operator (ie reseller admin)19:15
*** tsg- has joined #openstack-meeting19:16
torgomaticanyway, I can investigate and see what the behavior was before the fake-account-listing stuff happened19:16
*** novas0x2a|laptop has joined #openstack-meeting19:16
portanteso by adding the 410 response behavior, we are acknowledging what is actually happening on the backend19:16
notreallymynameportante: ya19:17
torgomaticI just want to give people a heads-up that this might be happening, so if you have any insight or opinions, go add yourself as a watcher on that review19:17
*** danwent has joined #openstack-meeting19:17
torgomaticbecause we don't currently emit 410 responses, but we might be about to start19:17
*** jsavak has joined #openstack-meeting19:17
notreallymynamebut clients will properly respond to classes of responses like they should, right? ;-)19:17
*** jlucci1 has joined #openstack-meeting19:17
torgomaticand that's worth bringing up at a meeting so interested parties can go look, and uninterested parties can ignore it :)19:18
notreallymynameindeed. thanks for doing so19:18
portantecan you post the review id here?19:18
notreallymynameanything else on that for today (ie until we know the history)19:18
torgomaticnotreallymyname: well, existing behavior is 204 on HEAD plus 401 (403?) on PUT/POST, so that's clearly wrong and has to be fixed19:18
portantehttps://review.openstack.org/#/c/54449/19:18
* torgomatic has nothing else19:18
notreallymynameyes19:18
portantedone19:18
notreallymynameportante: and the author is otherjon in #openstack-swift19:19
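To summarize the behavior change in code form, the sketch below models the window torgomatic describes: an account that is deleted but not yet reclaimed currently answers GET/HEAD with 204, and the proposal is to answer 410 Gone until the reclaim age passes. This is a standalone, hypothetical illustration; the dictionary field names and reclaim window are assumptions, and it is not the code under review at the link above.

    # Standalone sketch of the proposed 410 behavior for deleted-but-not-yet-
    # reclaimed accounts; field names and the reclaim window are assumptions.
    import time

    RECLAIM_AGE = 7 * 24 * 60 * 60          # illustrative one-week reclaim window

    def account_get_status(account, now=None, reclaim_age=RECLAIM_AGE):
        """Return the HTTP status a GET/HEAD on this account should yield."""
        now = now or time.time()
        if not account.get('deleted'):
            return 204                      # normal empty-listing response
        if now - account['delete_timestamp'] < reclaim_age:
            # Deleted but not yet reclaimed: today this returns 204 even
            # though PUT/POST fail; the proposal is to return 410 Gone.
            return 410
        return 404                          # fully reclaimed: not found

    # An account deleted an hour ago would answer 410 instead of 204.
    acct = {'deleted': True, 'delete_timestamp': time.time() - 3600}
    print(account_get_status(acct))         # -> 410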
notreallymyname#topic open discussion19:19
*** openstack changes topic to "open discussion (Meeting topic: swift)"19:19
* notreallymyname wants to get to lunch19:19
notreallymynamewhat else is going on?19:19
*** nati_ueno has quit IRC19:19
notreallymynameanything needing discussion or about HK next week?19:19
*** joesavak has quit IRC19:20
portantewhen is the next target for a swift release?19:20
portantedec or jan?19:20
notreallymynameFWIW, there were about 60 new swift "clusters" deployed this morning at the workshop19:20
portantenice19:20
portanteand what are we targeting for that release?19:20
notreallymynameportante: we don't do time-based releases. it would make sense to me to release after swift-bench is separated and to check with the other features under dev19:21
*** lpabon has joined #openstack-meeting19:21
notreallymynameportante: since 1.10 I haven't had a chance to look, but I'd like to see the diskfile stuff cleaned up before a release (in an ideal world)19:21
portanteredhat would like to see the account and container server API refactoring land so that we can complete the gluster layering on supported APIs19:21
notreallymynameportante: that would be grounds for a release IMO19:21
*** jlucci has quit IRC19:21
notreallymynamekk19:22
*** ryanpetrello has joined #openstack-meeting19:22
portantezaitcev is working through those changes, so we'd like to get eyes on them19:22
portantenotreallymyname: okay, thanks19:22
notreallymynameanything else?19:22
pelusenot from my end19:22
portantenothing here19:22
torgomaticnope19:23
notreallymynameok, have a good rest of the day. thanks for coming19:23
notreallymyname#endmeeting19:23
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"19:23
openstackMeeting ended Wed Oct 30 19:23:22 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:23
openstackMinutes:        http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-10-30-19.02.html19:23
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-10-30-19.02.txt19:23
openstackLog:            http://eavesdrop.openstack.org/meetings/swift/2013/swift.2013-10-30-19.02.log.html19:23
*** peluse has left #openstack-meeting19:23
notreallymynameand don't read the news if you want to stay happy about the state of the internet19:23
*** portante has left #openstack-meeting19:23
*** clayg has left #openstack-meeting19:23
*** zaitcev has left #openstack-meeting19:24
*** vipul is now known as vipul-away19:25
*** vipul-away is now known as vipul19:25
*** keving1 has left #openstack-meeting19:25
*** lincolnt has left #openstack-meeting19:25
*** galstrom_zzz is now known as galstrom19:26
*** akuznetsov has joined #openstack-meeting19:27
*** DrBacchus has joined #openstack-meeting19:27
*** jaypipes has quit IRC19:28
*** notreallymyname has quit IRC19:28
*** tsg- has quit IRC19:29
*** markmcclain has joined #openstack-meeting19:29
*** akuznetsov has quit IRC19:35
*** DrBacchus has quit IRC19:35
*** troytoman-away is now known as troytoman19:35
*** neelashah has quit IRC19:36
*** DrBacchus has joined #openstack-meeting19:39
*** ayoung_ has joined #openstack-meeting19:40
*** nati_ueno has joined #openstack-meeting19:40
*** elo2 has quit IRC19:42
*** joesavak has joined #openstack-meeting19:42
*** masayukig has joined #openstack-meeting19:43
*** DrBacchus has quit IRC19:44
*** jsavak has quit IRC19:44
*** DrBacchus has joined #openstack-meeting19:45
*** masayukig has quit IRC19:47
*** spzala has joined #openstack-meeting19:47
*** ErikB has joined #openstack-meeting19:48
*** otherwiseguy has quit IRC19:49
*** ryanpetrello_ has joined #openstack-meeting19:50
*** ryanpetrello has quit IRC19:51
*** ryanpetrello_ is now known as ryanpetrello19:51
*** ivar-lazzaro has quit IRC19:53
*** jasond has joined #openstack-meeting19:54
*** markwash has joined #openstack-meeting19:56
*** timductive has joined #openstack-meeting19:57
*** MikeSpreitzer has joined #openstack-meeting19:57
*** spenceratx has joined #openstack-meeting19:58
*** randallburt has joined #openstack-meeting19:58
*** lrengan has joined #openstack-meeting19:59
*** ldemina has joined #openstack-meeting19:59
stevebaker#startmeeting heat19:59
openstackMeeting started Wed Oct 30 19:59:39 2013 UTC and is due to finish in 60 minutes.  The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.19:59
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:59
*** openstack changes topic to " (Meeting topic: heat)"19:59
openstackThe meeting name has been set to 'heat'19:59
stevebaker#topic rollcall20:00
*** openstack changes topic to "rollcall (Meeting topic: heat)"20:00
asalkeldo/20:00
*** dmitryme has joined #openstack-meeting20:00
shardyo/20:00
spzalaHi20:00
jpeelerhi20:00
jasondo/20:00
lifeless:q20:00
MikeSpreitzerlurking briefly20:00
*** tspatzier has joined #openstack-meeting20:00
tspatzierhi all20:00
vijendarhi20:00
*** spencera_ has joined #openstack-meeting20:00
*** zaneb has joined #openstack-meeting20:01
stevebakerno actions from last week, only one agenda item, this might be a short one20:01
stevebakerhttps://wiki.openstack.org/wiki/Meetings/HeatAgenda20:01
zanebo/20:01
*** spenceratx has left #openstack-meeting20:01
*** m4dcoder has joined #openstack-meeting20:01
kebrayhello20:01
stevebaker#topic Summit preperation20:01
*** openstack changes topic to "Summit preperation (Meeting topic: heat)"20:01
randallburto/20:01
stevebakerspeling?20:01
*** troytoman is now known as troytoman-away20:01
topolo/20:02
zanebpreparation20:02
m4dcodero/20:02
*** lakshminarayanan has joined #openstack-meeting20:02
stevebakerso this IBM preso which was on 5:20 thurs is now 3:10 friday http://openstacksummitnovember2013.sched.org/event/d021c726f6fbe4d1fc7ade0a72a6ae2a#.UnFlvnUW3qU20:02
lakshminarayananyes20:02
*** nati_ueno has quit IRC20:02
stevebakerso now we can all go and heckle ;)20:02
*** twoputt has quit IRC20:02
tspatzierlooking forward to see you all there :-)20:03
*** twoputt_ has quit IRC20:03
lakshminarayananWe would love to have you all and your questions20:03
*** vkmc has quit IRC20:03
stevebakerdesign schedule has been published http://icehousedesignsummit.sched.org/overview/type/heat20:03
sdake_o/20:03
*** sushils has joined #openstack-meeting20:03
zanebugh, now it's up against the OpenShift presentation20:03
timductiveo/20:04
stevebakerFrom now on we can be considering what to talk about in our placeholder session http://icehousedesignsummit.sched.org/event/4ef1f2f4238851d490f0e14c58423189#.UnFmGXUW3qU20:04
topolcan I be the guy in the back of the room saying just finalize this stuff so we can go grow the heat/HOT ecosystem? :-)20:04
zanebthat's unfortunate because stuff we learn at the IBM one is going to be directly useful for the OpenShift one20:04
randallburtwalkie talkies?20:04
*** neelashah has joined #openstack-meeting20:05
stevebakerzaneb: that is unfortunate, but not as bad as a clashing design session20:05
zanebtrue20:05
*** dprince has quit IRC20:06
*** tanisdl has joined #openstack-meeting20:06
*** lrengan has quit IRC20:06
stevebakeranything else summit related that anybody wants to raise?20:06
asalkeldif we have time user-logging20:06
zanebbut there's like two talks about Heat in the whole (non-Design) Summit... and they're on at the same time20:06
spzalasince heat design sessions are not before Thursday, what's the best way to find the heat team? twitter?20:07
*** MarkAtwood has quit IRC20:07
zanebspzala: yeah, I was going to bring that up20:07
zanebhow about we all meet on Tuesday at some point?20:07
spzalazaneb: cool20:07
lakshminarayanan+1 to meet on tues20:08
zaneba lot of the important discussions happen on the sidelines20:08
zanebno point waiting until thursday20:08
kebrayanyone up for a sideline conversation with Solum folks?  I may be able to set that up.20:08
tspatzierand at the parties in the evening ;-)20:08
tspatzier+1 kebray20:08
*** lpabon_ has joined #openstack-meeting20:08
stevebakerlakshminarayanan, zaneb, if you want to negotiate another schedule change then you can email pete@FNTECH.COM, cc lauren@openstack.org20:08
zanebkebray: +120:08
*** lpabon_ has quit IRC20:09
spzalakebray: yes, I am up20:09
*** lpabon has quit IRC20:09
lakshminarayananstevebaker: thanks for suggestion. I think we are open for another change.20:09
kebrayk. I'll try to set something up.20:09
lakshminarayananzaneb: do you have another time slot in mind?20:09
stevebakerlakshminarayanan: OK, I'll leave it to you20:09
asalkeldyeah, kebray I am interested too20:09
stevebakerlakshminarayanan: it looks like there are other slots on friday with only 3 presentations20:10
lakshminarayanankebray: I am also interested in meeting with Solum folks.20:10
lakshminarayanankebray +120:10
zaneblakshminarayanan: no particular one, so long as it doesn't clash with the design summit sessions20:10
stevebakerlakshminarayanan: 11:50, 17:0020:10
shardyDoh, I've just noticed Healing and Convergence clashes with a Tempest session prompted by a patch I submitted and was planning to attend :(20:10
*** nati_ueno has joined #openstack-meeting20:11
tspatzierso if possible, I would like the earlier slot and not let it slip towards the end of the summit20:11
sdakeneed better scheduling system imo :)20:11
asalkeldneed time machine20:11
lakshminarayananstevebaker tspatzier: I agree 11:50 would be better20:11
stevebakershardy: I could swap convergence with something else20:12
asalkeldtspatzier, friday == everyone sleeping20:12
zaneblakshminarayanan: I'll leave it up to you, but 11:50 sounds good to me20:12
stevebakershardy: maybe swap convergence with abandon/adopt at 2:4020:12
*** ldemina has quit IRC20:12
*** MikeSpreitzer has left #openstack-meeting20:13
shardystevebaker: Ideally I'd rather not miss any Heat sessions, I'll speak to the Tempest PTL and see if there's any chance of them swapping their sessions around20:13
tspatzierasalkeld, then we have to be as entertaining as possible to keep people awake. Maybe we threaten to ask the audience questions.20:13
*** streetcat has quit IRC20:13
*** troytoman-away is now known as troytoman20:13
* radix is here finally20:14
lakshminarayananzaneb: I will check with others and will email20:14
zaneblakshminarayanan: thanks, much appreciated :)20:14
stevebakershardy: tempest has lots of slots on wed and fri, chances are they can swap20:14
*** ErikB has quit IRC20:15
shardystevebaker: yeah was thinking it may be possible as there's plenty of slots which don't overlap with Heat20:15
stevebakerwhen should we meet on tuesday? lunch?20:15
tspatzierlakshminarayanan, 11:50 would be ok with me. So will you email pete and lauren?20:15
zanebstevebaker: either at lunch or right after the IBM keynote imo20:16
lakshminarayanantspatzier: yes I can email pete and lauren.20:16
*** weshay has quit IRC20:16
randallburtlunch sounds good20:16
zanebstevebaker: but a working lunch sounds20:16
zanebgood20:16
randallburtagreed20:17
kebrayshould I invite the solum folks to lunch too, or should that be a separate meet-and-greet?20:17
stevebakerok, lets announce on the ML and #heat on the day20:17
randallburtkebray:  separate, IMO20:17
stevebaker#topic open discussion20:18
*** openstack changes topic to "open discussion (Meeting topic: heat)"20:18
shardySo the 63 character thing..20:18
zanebmaybe we can come up with a plan at lunch on Tuesday for more sideline meetings throughout the week20:18
stevebakerI don't have anything that can't continue on the ML or IRL20:18
shardySeveral folks have complained about the hard-limit, do folks have any strong opinions about me proposing an alternative, one of:20:18
sdakewhile you're at it beat zookeeper into submission for me pls :)20:18
asalkeldzaneb, very little on tuesday20:19
*** joesavak has quit IRC20:19
shardysquash the names into something semi-readable, or just name all the instances heat-<resource short random id>20:19
stevebakershardy: we just need to do the rest of the fix, which shortens any arbitrary physical name to be under the limit of that resource type20:19
asalkeldso +1 to, just make tuesday an unoffical heat day20:19
shardystevebaker: Yeah, thats what I was planning, but an even easier approach is just to use the existing resource unique/random ID and a fixed prefix20:20
*** thedodd has quit IRC20:20
zanebshardy: how about if we just shorten the names of nested stacks20:21
*** andrew_plunk has joined #openstack-meeting20:21
zanebshardy: by removing the parent stack name, for example20:21
shardyzaneb: humm, yup that's not a bad idea20:21
stevebakershardy: fixed prefix loses a lot of useful context, we could do better than that - it would still be easy-ish20:21
zanebit's multiple levels of nested stack that really drive things out of control20:22
shardyI'll raise a bug and take a look tomorrow, unless anyone beats me to it overnight ;)20:22
stevebakerzaneb: I was thinking of a shortening algorithm which does something like20:23
stevebakermystack-theresource-anestedstack-fooserver-kjho978as20:23
stevebakershortens to:20:23
stevebakermy-th-an-fo-kjho978as20:23
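Written out as a small Python helper, the shortening idea stevebaker spells out above might look like the sketch below: keep the trailing random id intact and truncate every earlier dash-separated component. The function name, the two-character truncation width and the demo limit are assumptions for illustration, not Heat's actual implementation.

    # Illustrative sketch of the physical-name shortening idea above; the
    # helper name and truncation width are assumptions, not Heat code.
    def shorten_physical_name(name, limit=63, keep=2):
        """Truncate each '-'-separated component except the trailing random
        id so the name fits within `limit` characters (best effort)."""
        if len(name) <= limit:
            return name
        *prefix, random_id = name.split('-')
        return '-'.join([part[:keep] for part in prefix] + [random_id])

    name = 'mystack-theresource-anestedstack-fooserver-kjho978as'
    print(shorten_physical_name(name, limit=30))   # -> my-th-an-fo-kjho978as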
stevebakershardy: I'm fairly certain a bug already exists20:23
zanebsounds like a recipe for accidental profanity :D20:23
shardyJust dropping everything except the immediately owning stack will be much more readable20:24
shardystevebaker: didn't we close that as part of the Havana release?20:24
stevebakerzaneb: challenge accepted!20:24
stevebakershardy: I think another was raised for this issue20:24
shardystevebaker: Ah, k, must've missed that20:24
*** thedodd has joined #openstack-meeting20:24
shardybeen distracted beating keystone stuff into shape20:25
kebrayI have one topic for open discussion:  template catalog functionality, https://blueprints.launchpad.net/heat/+spec/heat-template-repo ...20:25
kebrayI'd like to offer to staff resources to implement this for Icehouse.. if, the community will have it in the Heat source tree.20:26
stevebakershardy: even with dropping parent stacks, it is not hard to exceed 63 with stackname + resourcename + random. A shortening strategy could work with any physical name20:26
asalkeldkebray, hasn't marono claimed that now;)20:26
*** egallen has quit IRC20:27
kebrayasalkeld:  They claimed the world, IMO, in that statement.  PaaS and SaaS and market place.20:27
asalkeldmurano20:27
asalkeldsomeone is getting excited20:27
stevebakerkebray: lets talk about this in person. It might best be done with a horizon UI over solum20:27
asalkeldkebray, +120:27
asalkeldkebray, I know Melbourne University wants that too20:28
randallburtlol, glad solum is the new dumping ground20:28
kebraystevebaker happy to talk in person, or by whatever means.  I've thought a lot about this, and have reasons I believe Horizon should consume the catalog service via an API instead of implementing it.20:28
stevebakertakes the heat off us ;)20:28
kebrayI want to reuse that service across non Horizon UIs.20:28
*** ilyashakhat_ has joined #openstack-meeting20:28
*** ilyashakhat has quit IRC20:29
kebray... would certainly build it so Horizon can consume it though :-)20:29
randallburtstevebaker:  indeed20:29
tspatzierkebray, I think that would also be a good place for provider templates to live20:29
zanebkebray: why not just bung them in swift? what else is there for the service to do?20:29
asalkeldI think we need an independent catalog/sharing system20:29
stevebakerkebray: the thing is, a template shouldn't go into a catalog unless it is first managed with full revision control (git) and validated that it actually works (jenkins)20:29
shardykebray: can you summarise, what will this template store actually give folks, which they can't already get with git (plus some UI integration on top)20:29
*** neelashah has quit IRC20:29
zanebshardy: ++20:30
kebrayzaneb:  the service provides a service provider sanitized list of deployable template options.20:30
randallburtshardy:  a thin layer over that strategy that is consistent for everyone20:30
*** asalkeld has left #openstack-meeting20:30
randallburtwell, thin-ish ;)20:30
zanebkebray: wait, the *service provider* supplies the templates?20:30
stevebakerkebray: not to mention that we would often want custom golden images associated with a template in the catalog20:30
*** asalkeld has joined #openstack-meeting20:31
kebrayyeah, pretty thin... the backends could be pluggable (swift, git, whatever).  But, it's a consistent way for folks to get a list of stored templates, and then tell Heat to go deploy this one from the list.20:31
kebraystevebaker that's another use case I hadn't thought of, but yes, something like golden image associations could be implemented in a catalog.20:31
asalkeldit would be good to have a global instance too20:31
asalkeld(a replacement to heat-templates)20:32
kebrayI view the catalog as sort of a "Glance for HOT" as randallburt described it.20:32
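To make the "thin, pluggable catalog" idea a little more concrete, the sketch below shows one possible shape for it: a two-method catalog contract with interchangeable backends, where a local directory stands in for swift or git. Every class and method name here is hypothetical; this is only a sketch of the idea being debated, not a proposed OpenStack API.

    # Hypothetical sketch of a thin, pluggable template catalog as discussed
    # above; all names are illustrative, not a real or proposed API.
    import abc
    import os

    class TemplateCatalog(abc.ABC):
        """Minimal contract: list template names, fetch one by name."""

        @abc.abstractmethod
        def list_templates(self):
            ...

        @abc.abstractmethod
        def get_template(self, name):
            ...

    class DirectoryCatalog(TemplateCatalog):
        """Local-directory backend; swift or git backends would implement
        the same two methods."""

        def __init__(self, path):
            self.path = path

        def list_templates(self):
            return sorted(f for f in os.listdir(self.path)
                          if f.endswith(('.yaml', '.template')))

        def get_template(self, name):
            with open(os.path.join(self.path, name)) as f:
                return f.read()

    # A UI (Horizon or otherwise) would list the sanitized templates and hand
    # the chosen one to Heat for stack creation, e.g.:
    # catalog = DirectoryCatalog('/etc/heat/catalog')
    # print(catalog.list_templates())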
*** pablosan has joined #openstack-meeting20:32
shardywell AWS just stick them on a web-site, I don't see why static service provider sanitised examples can't be provided that way, then users get to manipulate stuff through git20:32
randallburtasalkeld:  +120:32
shardycan't we just enable python-heatclient to list stuff from the heat-templates repo?20:32
tspatzierI like the idea. And if this is a service that comes with openstack, it avoids having to set up a repo on your own. Just using a git on the internet can be a problem behind firewalls ...20:32
shardyheat template-examples-list20:32
asalkeldshardy, it lets people share20:33
randallburtshardy:  assuming we do the work to make sure those templates are up-to-date and working20:33
asalkeldso it can grow in a community20:33
kebrayshardy, we could go that route, and build our "website catalog service" independent of Heat.   But, I think a standard way for Horizon, private OpenStack cloud distributions (with custom UIs), and public cloud UIs to consume catalog would be nice.20:33
asalkeldand all those warm fuzzy terms20:33
stevebakerunless the catalog is integrated with CI, in many cases what is being shared will be of low quality20:34
zanebasalkeld: what you're talking about is not an OpenStack service though, it's a website20:34
shardyrandallburt: well fragmenting it so every service provider maintains their own template catalog is not going to help20:34
tspatziershardy, I know of many private cloud deployments where there is no internet access, so heat-templates would not work20:34
asalkeldzaneb, horizon?20:34
shardytspatzier: having a tarball release, or local mirror/clone is trivial20:34
tspatziershardy, yes, that would be an option20:34
randallburtshardy:  I don't disagree, but having the service available as both a public repo as well as something anyone else can set up and manage their own repo aren't mutually exclusive20:35
*** dcramer_ has joined #openstack-meeting20:35
shardyrandallburt: git+github already enable all of that20:35
zanebasalkeld: the number of people you can share with in Horizon is limited, at best20:35
shardythe only thing missing is the ui integration20:35
kebrayI'm not so worried about the list of sanitized templates at the moment as I am the interface (service) to store and list them.. Each service provider or OpenStack operator may want to create their own sanitized list.  Not everyone who uses Heat in an OpenStack installation should have to set up a git repo and point their catalog at that.20:35
zanebkebray: so this is about having a list to pick from in Horizon?20:36
asalkeldzaneb, ok well I think kebray is after something different20:36
shardykebray: so everyone just has to point at some other service instead?20:37
kebrayzaneb any UI, not just Horizon.20:37
stevebakershall we continue this discussion in 5 days?20:38
kebrayIf Heat, as a service, provided a way for the Heat administrator to configure where (the backend) that sanitized templates come from, Horizon (or any other non Horizon UI they are using on top of OpenStack) could just consume the standard service API call to get the sanitized template list.20:38
kebraystevebaker sure.20:38
*** boden has quit IRC20:38
kebrayJust wanted to get people thinking more about it :-)20:38
randallburtpreferably over drinks20:38
zanebkebray: so, I would be very -2 on having this in the Heat *project*. But I'm open-minded about having it in the Orchestration *program*, though I currently remain unconvinced20:38
*** SvenDowideit has quit IRC20:38
*** SvenDowideit has joined #openstack-meeting20:38
kebrayzaneb agreed it should not be a required operational API of Heat... but, I do think it belongs within the Heat source tree, kind of like AutoScale, as an optional API endpoint that can be enabled.20:39
stevebakeranything else before the endmeeting?20:39
shardykebray: When we meet, I'd very much like to hear specific on what this gives us that having e.g a configurable git repo won't20:39
shardyspecifics20:39
kebrayshardy sure.20:39
zanebkebray: I would vote for separate repo, it's a completely different service with no shared code20:39
randallburtzaneb: def agree there.20:39
shardyzaneb: +120:40
kebrayzaneb k, your vote is recorded.20:40
zaneblol :)20:40
*** andrew_plunk has left #openstack-meeting20:40
asalkeldso we all meeting up on tuesday?20:40
zanebasalkeld: yes, at lunch20:40
stevebakerprovisionally at lunch20:40
asalkeldo20:40
kebrayTuesday sounds great.20:40
stevebakerI'll order some bouncers to keep a big table clear20:41
asalkeldthere is nothing much in the morning20:41
stevebakerending meeting in 3...20:41
zanebasalkeld: keynotes, TOSCA session20:41
stevebaker2..20:41
asalkeldok, was only looking at dev sessions20:42
stevebaker1.20:42
stevebaker#endmeeting20:42
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"20:42
openstackMeeting ended Wed Oct 30 20:42:15 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:42
openstackMinutes:        http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-10-30-19.59.html20:42
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-10-30-19.59.txt20:42
openstackLog:            http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-10-30-19.59.log.html20:42
stevebakerthanks all, see  you soon!20:42
*** tspatzier has left #openstack-meeting20:42
*** randallburt has left #openstack-meeting20:42
*** otherwiseguy has joined #openstack-meeting20:42
*** jdurgin has quit IRC20:43
*** lakshminarayanan has left #openstack-meeting20:43
*** spencera_ has left #openstack-meeting20:45
*** esheffield has joined #openstack-meeting20:46
*** topol has quit IRC20:46
*** bgorski has quit IRC20:48
*** jlucci1 has quit IRC20:50
*** markvan has quit IRC20:50
*** galstrom is now known as galstrom_zzz20:55
*** afazekas has quit IRC20:56
*** jdurgin has joined #openstack-meeting20:56
*** marun has joined #openstack-meeting20:58
*** bgorski has joined #openstack-meeting21:01
*** m4dcoder has quit IRC21:01
*** dkranz has quit IRC21:02
*** bgorski_ has joined #openstack-meeting21:04
*** IlyaE has quit IRC21:04
*** lsmola_ has quit IRC21:04
*** spzala has quit IRC21:05
*** bgorski has quit IRC21:05
*** anniec has joined #openstack-meeting21:05
*** DrBacchus has quit IRC21:07
*** nati_ueno has quit IRC21:07
*** nati_ueno has joined #openstack-meeting21:08
*** nati_ueno has quit IRC21:08
*** nati_ueno has joined #openstack-meeting21:09
*** dmitryme has left #openstack-meeting21:09
*** dcramer_ has quit IRC21:12
*** pdmars has quit IRC21:12
*** elo has joined #openstack-meeting21:14
*** nati_ueno has quit IRC21:18
*** MadOwl has joined #openstack-meeting21:18
*** nati_ueno has joined #openstack-meeting21:18
*** radez is now known as radez_g0n321:19
*** sacharya has joined #openstack-meeting21:19
*** nati_ueno has quit IRC21:22
*** rakhmerov has quit IRC21:24
*** galstrom_zzz is now known as galstrom21:24
*** IlyaE has joined #openstack-meeting21:24
*** dims has quit IRC21:25
*** egallen has joined #openstack-meeting21:26
*** elo has quit IRC21:27
*** rongze has quit IRC21:29
*** pcm_ has quit IRC21:29
*** sushils has quit IRC21:35
*** sushils has joined #openstack-meeting21:35
*** jasond has left #openstack-meeting21:36
*** IlyaE has quit IRC21:38
*** lbragstad has quit IRC21:40
*** dims has joined #openstack-meeting21:40
*** marun has quit IRC21:43
*** kartikaditya has quit IRC21:45
*** thomasem has quit IRC21:47
*** jcoufal has quit IRC21:49
*** mrodden has quit IRC21:51
*** stevemar has quit IRC21:52
*** stevemar has joined #openstack-meeting21:54
*** neelashah has joined #openstack-meeting21:54
*** markmcclain has quit IRC21:54
*** vijendar has quit IRC21:54
*** rakhmerov has joined #openstack-meeting21:55
*** vijendar has joined #openstack-meeting21:55
*** adalbas has quit IRC21:56
*** ivasev has quit IRC21:58
*** SergeyLukjanov has quit IRC21:59
*** vijendar has quit IRC22:00
*** burt has quit IRC22:00
*** stevemar has quit IRC22:01
*** MarkAtwood has joined #openstack-meeting22:02
*** fifieldt_ has joined #openstack-meeting22:04
*** yogeshmehra has quit IRC22:05
*** thedodd has quit IRC22:05
*** anniec has quit IRC22:05
*** galstrom is now known as galstrom_zzz22:06
*** anniec has joined #openstack-meeting22:07
*** IlyaE has joined #openstack-meeting22:08
*** anniec has quit IRC22:12
*** xyang__ has quit IRC22:13
*** neelashah has quit IRC22:15
*** egallen has quit IRC22:16
*** troytoman is now known as troytoman-away22:16
*** eharney has quit IRC22:17
*** egallen has joined #openstack-meeting22:17
*** twoputt has joined #openstack-meeting22:17
*** twoputt_ has joined #openstack-meeting22:18
*** gyee has joined #openstack-meeting22:22
*** sandywalsh_ has quit IRC22:25
*** jtomasek has quit IRC22:26
*** rongze has joined #openstack-meeting22:26
*** rakhmerov has quit IRC22:26
*** rakhmerov1 has joined #openstack-meeting22:26
*** dscannell has quit IRC22:26
*** dims has quit IRC22:26
*** danwent has quit IRC22:27
*** dims has joined #openstack-meeting22:27
*** sacharya has quit IRC22:28
*** rongze has quit IRC22:30
*** rakhmerov1 has quit IRC22:30
*** timductive has quit IRC22:31
*** danwent has joined #openstack-meeting22:35
*** ryanpetrello_ has joined #openstack-meeting22:39
*** colinmcnamara has quit IRC22:40
*** jsergent has quit IRC22:40
*** ryanpetrello has quit IRC22:41
*** ryanpetrello_ is now known as ryanpetrello22:41
*** colinmcnamara has joined #openstack-meeting22:41
*** jlucci has joined #openstack-meeting22:42
*** sandywalsh_ has joined #openstack-meeting22:42
*** sdake has quit IRC22:49
*** ryanpetrello has quit IRC22:49
*** herndon has quit IRC22:50
*** esker has quit IRC22:52
*** shardy is now known as shardy_afk22:52
*** mdenny has quit IRC23:00
*** danwent has quit IRC23:00
*** jlucci has quit IRC23:02
*** IlyaE has quit IRC23:03
*** MadOwl has quit IRC23:04
*** ashwini_ has joined #openstack-meeting23:04
*** bpb has quit IRC23:05
*** gyee has quit IRC23:05
*** gyee has joined #openstack-meeting23:09
*** dianefleming has quit IRC23:11
*** fifieldt_ has quit IRC23:12
*** fifieldt has joined #openstack-meeting23:13
*** markpeek has quit IRC23:16
*** oubiwann has quit IRC23:17
*** fbo has quit IRC23:19
*** MadOwl has joined #openstack-meeting23:20
*** danwent has joined #openstack-meeting23:21
*** dcramer_ has joined #openstack-meeting23:22
*** denis_makogon has quit IRC23:24
*** IlyaE has joined #openstack-meeting23:25
*** rongze has joined #openstack-meeting23:26
*** rakhmerov has joined #openstack-meeting23:26
*** jmontemayor has quit IRC23:27
*** fnaval_ has quit IRC23:29
*** rongze has quit IRC23:31
*** rwsu has quit IRC23:38
*** julim has quit IRC23:39
*** hemanth has joined #openstack-meeting23:41
*** hemanth has quit IRC23:42
*** pablosan has quit IRC23:43
*** gyee has quit IRC23:44
*** hemna is now known as hemnafk23:44
*** rwsu has joined #openstack-meeting23:44
*** julim has joined #openstack-meeting23:45
*** elo has joined #openstack-meeting23:56
*** markwash has quit IRC23:57
*** SridarK has quit IRC23:58
*** rakhmerov has quit IRC23:59
