Wednesday, 2013-07-03

mesteryHi! Everyone here for the ML2 meeting?13:59
apechlet's do it13:59
mesteryapech: Love your enthusiasm. :)14:00
mestery#startmeeting networking_ml214:00
openstackMeeting started Wed Jul  3 14:00:16 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: networking_ml2)"14:00
openstackThe meeting name has been set to 'networking_ml2'14:00
*** vijendar has joined #openstack-meeting14:00
rkukuragood morning!14:00
mestery#link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda14:00
*** rcurran has joined #openstack-meeting14:00
rkukuraor afternoon!14:00
mesteryI thought we could run through action items from last week quick.14:00
mesteryThere was some confusion on the "Instance ID" from Nova. I opened a blueprint, but will close it per comments from rkukura. https://blueprints.launchpad.net/nova/+spec/vm-instance-id-neutron14:01
mestery#topic Action Items From Last Week14:01
*** openstack changes topic to "Action Items From Last Week (Meeting topic: networking_ml2)"14:01
mesteryI also opened a bug for proper OVS agent tunnel programming: https://bugs.launchpad.net/neutron/+bug/119696314:01
*** psedlak_ has joined #openstack-meeting14:01
rkukurathe device_id attribute already contains the instance ID14:01
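
(For illustration: the instance UUID is already recoverable from the port itself, which is why no extra Nova-side field is needed. A minimal sketch using the Neutron client of the time; credentials and the port ID are placeholders.)

    # Minimal sketch: read the instance UUID from the port's device_id.
    # Credentials and the port ID below are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0/')
    port = neutron.show_port('PORT_UUID')['port']
    print(port['device_id'])     # the Nova instance UUID
    print(port['device_owner'])  # e.g. 'compute:None'
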
*** mrodden has joined #openstack-meeting14:02
mesteryrkukura: Thanks for correcting my understanding there. :)14:02
*** feleouet has joined #openstack-meeting14:02
mestery#link https://wiki.openstack.org/wiki/Neutron/ML2 ML2 Wiki Page14:02
apechyes, sorry for the confusion in proposing this :) nice to see this is already there14:02
mesteryI added an ML2 Wiki page, so we can now put things like devstack info there.14:02
rkukuraI'll take an action to add description text, links to slides to wiki page14:03
mesteryrkukura: Thanks!14:03
mestery#action rkukura to update ML2 wiki with text and slides14:03
mesteryrkukura: Thanks for updating this review with ML2 comments! https://review.openstack.org/#/c/33736/14:03
rkukuraIs arosen here?14:04
mesteryrkukura: May be too early for him. :)14:04
*** dachary is now known as antoviaque14:04
*** antoviaque is now known as dachary14:04
rkukuraI spoke with him about this, and am starting to think his approach makes sense for our BP14:04
mesteryThat's great news actually! I'll review it closer as well, though I saw your comments there.14:05
rkukuraBasically, expose our segment_list as a single attribute14:05
*** shardy is now known as shardy_afk14:05
*** psedlak has quit IRC14:05
*** doude has joined #openstack-meeting14:05
mesteryThat sounds like it will work.14:05
rkukuraI don't like the term "transport_zones" for segment_list though14:05
*** BStokes has quit IRC14:05
*** Guest47356 has quit IRC14:05
*** mrodden1 has joined #openstack-meeting14:06
mesteryOK, one more action item was for rcurran to send some notes on common code for MechanismDrivers.14:06
mesteryrcurran: How goes that?14:06
rcurrani had actually already started that email thread before last week's IRC14:06
*** mrodden has quit IRC14:06
rkukuraIf anyone feels full-fledged REST resources are needed for segments, please get involved in Aaron's review14:06
*** jhenner has quit IRC14:06
*** zehicle_at_dell has joined #openstack-meeting14:07
*** dachary is now known as antoviaque114:07
*** mkollaro has quit IRC14:07
mestery#action ML2 team to review https://review.openstack.org/#/c/33736/ in the context of multi-segment ML214:07
apechrkukura: you mean the ability to read/write the segment list directly from standardized Neutron APIs?14:07
*** antoviaque1 is now known as dachary14:07
*** stevemar has joined #openstack-meeting14:07
rkukuraapech: Yes, although this approach does allow updating the list.14:08
*** ujuc has joined #openstack-meeting14:08
rkukuraarosen pointed out that only admins would ever see this segment resource14:08
rkukuraI also like the extensibility of the list-of-dicts vs. fixed fields, but am a bit concerned about losing queryability14:09
apechSeems like these are details that are easily hidden from the user, so admin-only access may be okay14:10
*** hughsaunders has left #openstack-meeting14:10
mesteryrkukura: If we think this will cover ML2 multi-segment, do we look at having arosen move his work to be more generic to cover that BP?14:10
rkukuramestery: The idea would be for his patch to [re]define the extension API, and our BP would cover implementing it for ml214:11
*** Sukhdev has joined #openstack-meeting14:11
mesteryrkukura: Got it.14:11
rkukuraI want to formalize how this co-exists with the current provider extension14:11
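
(For context, the approach under discussion is, roughly, to expose the ML2 segments as a single admin-only, list-of-dicts attribute on the network instead of fixed provider fields or a separate REST resource. A sketch of what such a network might look like; the attribute name and keys are illustrative, not the final extension.)

    # Illustrative only: a network carrying its segments as one extensible
    # list-of-dicts attribute rather than fixed provider:* fields.
    network = {
        'id': 'NET_UUID',
        'name': 'multi-segment-net',
        'segments': [
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 101},
            {'provider:network_type': 'gre',
             'provider:physical_network': None,
             'provider:segmentation_id': 5001},
        ],
    }
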
*** egallen has quit IRC14:11
*** cliu_ has joined #openstack-meeting14:11
mesteryOK, lets move on to the next agenda item now.14:12
mesteryWanted to point out rcurran's email from a week ago; folks writing MechanismDrivers should have a look at that and respond as necessary.14:12
*** afazekas_ has joined #openstack-meeting14:12
mesteryLets move on to blueprint updates now.14:12
mestery#topic Blueprint Updates14:12
*** openstack changes topic to "Blueprint Updates (Meeting topic: networking_ml2)"14:12
apechmestery: I thought the conclusion from last week was that rcurran was going to send out the common code he had in mind14:12
apechthat'd help. I'll certainly reread and respond too14:13
*** Sukhdev_ has joined #openstack-meeting14:13
mesteryapech: I think it was email and/or notes. :)14:13
rcurranyes, but i had already sent out that email on 21-Jun14:13
apechah okay. sorry, missed that. will look14:13
mesteryapech: No worries.14:13
mesteryOK, lets start with apech's MechanismDriver BP14:13
mestery#link https://review.openstack.org/33201 Review for MechanismDriver BP14:14
garykmarkmcclain: ping14:14
mesterygaryk: Hi14:14
mesteryapech: Any updates for everyone?14:14
*** egallen has joined #openstack-meeting14:14
*** _TheDodd_ has joined #openstack-meeting14:14
markmcclaino/14:14
apechI sent out an update last night, which then promptly failed pep8 for a last minute change. About to re-update14:14
apechI think it's getting close - appreciate the comments14:15
mesterygaryk markmcclain: FYI, we're in the middle of the ML2 meeting on this channel. :)14:15
apechrkukura - think you'll have time to take a deeper look soon?14:15
*** uvirtbot has joined #openstack-meeting14:15
rkukurayes14:15
garykmestery: oops. sorry. wrong channel :)14:15
*** uvirtbot has quit IRC14:15
markmcclainmestery: sorry.. I thought there was something you all wanted me to look at14:16
*** tayyab_ has joined #openstack-meeting14:16
mesteryapech: I need to look at the latest version of your patch as well based on our discussion on gerrit on a prior version.14:16
mesteryAny other questions for apech on MechanismDriver BP?14:17
apechmestery: great, thanks14:17
*** lastidiot has joined #openstack-meeting14:17
mesteryGiven it's likely a long holiday weekend here in the US this week, should we try to shoot for early next week getting this BP merged?14:17
*** haomaiwa_ has joined #openstack-meeting14:17
rkukuraapech: It seems its getting very close, and just minor details should need changing14:17
*** uvirtbot has joined #openstack-meeting14:18
apechmestery: works for me. Hopefully others can just pull in changes to unblock their own development of ml2 mechanism drivers14:18
mestery#action ML2 subteam to review MechanismDriver blueprint with the goal of having it merge by early next week.14:19
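
(For readers following along: a mechanism driver in this design is, roughly, a class the ML2 plugin calls both inside the database transaction and after it commits for each resource operation. The sketch below is a simplified illustration of that shape, not the interface from the patch under review.)

    # Simplified sketch of the mechanism driver idea: precommit hooks run
    # inside the DB transaction (validate/extend), postcommit hooks run
    # after it commits (push to the backend).  Names are illustrative.
    import abc

    class MechanismDriver(object):
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def initialize(self):
            """One-time driver setup, called when ML2 loads the driver."""

        def create_network_precommit(self, context):
            """Validate or extend the network inside the DB transaction."""

        def create_network_postcommit(self, context):
            """Apply the new network to the backend; raising fails the op."""

    class NoopMechanismDriver(MechanismDriver):
        """Does nothing; a starting point for a real backend driver."""
        def initialize(self):
            pass
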
*** tayyab has quit IRC14:19
mesteryOK, lets move on.14:19
*** blamar has quit IRC14:19
mestery#link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 PortBinding14:19
mesteryrkukura: Any updates?14:19
rkukurano progress yet, but will start this weekend (when other work will hopefully slow down)14:20
*** jhenner has joined #openstack-meeting14:20
apechrkukura: is your goal still to try to do this h2? not sure how long you think this will take14:20
*** mkollaro has joined #openstack-meeting14:20
*** blamar has joined #openstack-meeting14:20
rkukuraapech: I'd like to get it in for H2; shouldn't be too much code given the MechanismDriver work already in review14:21
apechrkukura: great, thanks!14:21
mesteryThanks for the updates rkukura!14:21
mesteryOK, any questions for PortBinding?14:21
Sukhdev_rkukura: any eta?14:21
rkukuracode in review by next week's meeting at latest14:22
rkukurawhich is the H2 freeze, I think14:22
Sukhdev_rkukura: thanks14:22
mesteryOK, lets move to the next agenda item.14:22
mestery#link https://review.openstack.org/33297 ML2 GRE Code Review14:22
mesterymatrohon: Here?14:22
matrohonmestery: yes14:22
matrohonhi14:23
mesteryhi matrohon!14:23
mesteryHow goes the bp/ml2-gre work?14:23
matrohonit should be ok for a merge as soon as I take the review comments into account14:23
matrohonthere are only nits14:24
mesterymatrohon: Great! And apologies for my git review mishap which resulted in me rebasing your commit. :)14:24
matrohonmestery: it' ok :)14:24
mesteryThe instructions on the wiki for dependent commits were not quite right it turns out. :)14:24
*** obondarev has quit IRC14:24
matrohonbut i'd like rkukura to validate the architecture14:24
matrohonwith tunnel_type.py14:25
matrohonand abstract method to handle endpoint management14:25
rkukuraOK14:25
mesterymatrohon: I agree, as the bp/ml2-vxlan is dependent on that as well.14:25
mestery#link https://review.openstack.org/#/c/35384/2 ML2 VXLAN Code Review14:25
*** johnthetubaguy has quit IRC14:25
mesteryThis was pushed out yesterday, and is dependent on matrohon's GRE work.14:25
*** rwsu-away is now known as rwsu14:26
matrohonrkukura: you were talking about thinking of a better way to handle generic RPC calls14:26
*** michchap has quit IRC14:26
*** jhenner has quit IRC14:26
matrohonand to dispatch it in driver, no?14:26
rkukuraLooks like your TunnelTypeDriver may be more-or-less what I was thinking14:27
matrohonrkukura: ok great!14:27
*** obondarev has joined #openstack-meeting14:27
mesteryOK, looks like both GRE and VXLAN BPs are moving along nicely then.14:27
matrohonmestery: I reviewed your code about vxlan14:27
rkukuraWe may also want more general ability for drivers to mix-in RPC handlers14:27
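
(The TunnelTypeDriver idea mentioned here is, roughly, a common base for the GRE and VXLAN type drivers that leaves endpoint persistence abstract and mixes an RPC handler into the plugin. An illustrative sketch, not the code under review.)

    # Illustrative sketch: shared tunnel type driver base with abstract
    # endpoint management, plus an RPC handler drivers could mix in.
    import abc

    class TunnelTypeDriver(object):
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def add_endpoint(self, ip):
            """Persist a tunnel endpoint (an agent's tunnel IP)."""

        @abc.abstractmethod
        def get_endpoints(self):
            """Return all known tunnel endpoints."""

        def tunnel_sync(self, rpc_context, **kwargs):
            # RPC handler: record the calling agent's tunnel IP and hand
            # back the full endpoint list so it can build its tunnels.
            tunnel_ip = kwargs.get('tunnel_ip')
            if tunnel_ip:
                self.add_endpoint(tunnel_ip)
            return {'tunnels': self.get_endpoints()}
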
mesterymatrohon: I saw that, thanks! I will plan to address comments today, appreciate it!14:28
*** cliu_ has quit IRC14:28
mesterymatrohon: I think your direction on the multicast group is a good one, and I'll address that today.14:28
matrohonrkukura: ok, do you want us to think about that before ml2-gre and vxlan get merged14:29
*** shengjiemin has joined #openstack-meeting14:29
rkukurais this the issue of storing multicast groups with the endpoints, vs configuring single group?14:29
rkukuraI could be way off base on that14:30
matrohonrkukura : I proposed to sort multicast group in VXLan allocation table14:30
*** spzala has joined #openstack-meeting14:30
*** lloydde has joined #openstack-meeting14:31
*** haomaiwa_ has quit IRC14:31
mesteryLets continue the VXLAN multicast discussion in the review and on the mailing list.14:32
rkukuraWhat is the disposition of "I even wonder if it's really usefull, in this first implementation, to store multicast group in db if it has to be the same for every VNI."?14:32
*** haomaiwang has joined #openstack-meeting14:33
rkukuraI was interpreting this as suggestion to not store groups in DB14:33
rkukuraOK with moving to email/gerrit14:33
mesteryrkukura: Sorry, please continue.14:33
matrohonrkukura : yes, since bp vxlan-linuxbridge uses a single multicast group for every VNI14:34
mesterymatrohon rkukura: The crux of the issue is do we want to support more than one multicast group or not?14:34
mesteryFor the first cut of the code, I planned to support a single one for simplicity.14:34
mesteryThoughts?14:34
*** marun has quit IRC14:35
*** shardy_afk is now known as shardy14:35
matrohonmestery: exactly, it's not necessary for the first iteration, but you should have this feature in the future14:35
rkukuraI'm for keeping it simple until we are sure complexity is needed14:35
*** lastidiot has quit IRC14:35
mesterymatrohon: OK, I can file a blueprint to track this.14:35
mestery#action BP ml2/vxlan to support a single multicast group in first iteration14:35
matrohonmestery : ok, great14:35
mestery#action mestery to file BP to add support for multiple multicast addresses to ML2 VXLAN code14:36
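
(Concretely, the first-iteration plan above reduces to one configurable multicast group shared by all VNIs, rather than a per-VNI group stored in the allocation table. A hedged sketch of the option definitions using oslo.config; the option and section names are assumptions, not the merged code.)

    # Sketch of the single-group approach: one multicast group shared by
    # every VNI.  Option names, section name and defaults are assumptions.
    from oslo.config import cfg

    vxlan_opts = [
        cfg.ListOpt('vni_ranges', default=[],
                    help='Comma-separated list of <vni_min>:<vni_max> '
                         'tuples enumerating usable VXLAN VNI ranges'),
        cfg.StrOpt('vxlan_group', default=None,
                   help='Multicast group used for all VNIs; if unset, '
                        'broadcast emulation falls back to unicast'),
    ]
    cfg.CONF.register_opts(vxlan_opts, 'ml2_type_vxlan')
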
mesteryOK, any more GRE or VXLAN discussion before we move on?14:36
mesteryThe next item was ml2-multi-segment-api, but I believe we previously discussed this already.14:37
rkukuraright14:37
mesteryDo we need to discuss anything else on this now?14:37
rkukuraJust that we might track it for H314:37
mesteryrkukura: Good point. We should target that BP at H3 then, right?14:37
rkukurawas low priority, maybe change to medium if agreed on simple approach14:38
*** tayyab_ has quit IRC14:38
Sukhdev_I wanted to ask a question - I filed the BP for Arista driver, how come I do not see it in the Havana list?14:38
*** marun has joined #openstack-meeting14:38
mesterySukhdev_: Did you target it for H2/H3?14:38
Sukhdev_Do I have to take any additional step to include it in havana?14:39
*** topol has joined #openstack-meeting14:39
Sukhdev_I did not specify - I thought the approver does that14:39
rkukuraI'll target it14:39
mesteryrkukura: Thanks!14:39
Sukhdev_rkukura: thanks14:39
mesteryOK, moving on to the next agenda item.14:39
mestery#topic Bugs14:39
*** openstack changes topic to "Bugs (Meeting topic: networking_ml2)"14:39
rkukuradoes this replace the original hardwaredriver BP?14:40
mestery#link https://review.openstack.org/#/c/33107/ OVS agent tunnel_types bug14:40
Sukhdev_rkukura: yes14:40
apechrkukura: yes, original hardwaredriver BP can go away14:40
rkukuraH2 or H3?14:40
Sukhdev_H314:41
mesteryrkukura: Yong gave me a -2, and I thought this was so close.14:41
mestery#link https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3ysWgSyHa4V0pYELAiePD1s/edit#heading=h.4grgudkj8ei3 ML2 OVS Agent Changes Design14:41
mesteryI added a spec on what the OVS Agent will look like after the changes are done.14:41
mesteryrkukura: Your review would be appreciated!14:42
rkukuraI think he just wanted to understand the plan, and the writeup should help14:42
*** marun has quit IRC14:42
mesteryYes, agreed. I am now thinking of going all the way and implementing everything in the document.14:42
mesterye.g. deprecate enable_tunneling in the server, add tunnel_types into the 'ovs' section, etc.14:42
matrohonmestery : makes sense14:43
rkukuramestery: Two comments on that: 1) VLANs can already co-exist with flat, local, and gre networks. 2) should emphasize openvswitch agent supporting multiple tunnel types concurrently (with ml2) is goal14:43
*** markwash has joined #openstack-meeting14:43
*** fnaval has joined #openstack-meeting14:44
mesteryrkukura: Thank you, will update with those comments.14:44
mesteryI'll plan a new version of the tunnel_types patch with the changes from the document for early next week at the latest.14:44
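
(The agent-side change sketched in that document boils down to replacing the boolean enable_tunneling with a list of tunnel types the agent reports, while still honouring the old flag during deprecation. A rough sketch; the option and section names mirror the discussion, not the merged patch.)

    # Rough sketch of the tunnel_types change being discussed, with a
    # fallback for the deprecated enable_tunneling flag.
    from oslo.config import cfg

    agent_opts = [
        cfg.BoolOpt('enable_tunneling', default=False,
                    help='(Deprecated) enable GRE tunneling'),
        cfg.ListOpt('tunnel_types', default=[],
                    help="Tunnel types supported by the agent, "
                         "e.g. 'gre,vxlan'"),
    ]
    cfg.CONF.register_opts(agent_opts, 'OVS')

    def get_tunnel_types(conf):
        if conf.OVS.tunnel_types:
            return conf.OVS.tunnel_types
        # Honour the deprecated flag until it is removed.
        return ['gre'] if conf.OVS.enable_tunneling else []
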
rkukuraOK - lets solicit feedback on the writeup on openstack-dev14:44
*** marun has joined #openstack-meeting14:45
rkukurawe kind of have our own sandbox with ml2, but when we change agents or legacy plugins, people pay more attention14:45
mesteryrkukura: I sent email to that effect, I believe.14:45
matrohonmestery : I assigned this bug to myself : https://bugs.launchpad.net/neutron/+bug/119696314:45
uvirtbotLaunchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Medium,New]14:45
*** fnaval has quit IRC14:45
matrohonmestery : it's ok for you?14:45
*** mdomsch has joined #openstack-meeting14:45
mesterymatrohon: I was going to discuss that one next. :)14:45
mesterymatrohon: And yes, thank you for taking that one up!14:46
matrohonmestery : ok sorry :)14:46
mestery#action mestery to send email to openstack-dev for the OVS Agent Writeup14:46
*** johnthetubaguy has joined #openstack-meeting14:46
mesterymatrohon: Do you think the bug you mentioned will be merged by H2?14:46
mesteryFor ML2, it will be important I think.14:47
matrohonmestery : i will try to work on it asap14:47
*** litong has joined #openstack-meeting14:47
mesterymatrohon: Great, thank you!14:48
matrohonis there a feature freeze for H2?14:48
mesterymatrohon: Do you mean after H2?14:49
matrohonI mean, is there a date that I have to respect, to leave time for review?14:50
rkukuraH2 is 7/18, but I think a freeze on 7/10 was mentioned in the meeting14:50
matrohonrkukura : ok14:50
*** lloydde has quit IRC14:50
rkukuraMaybe not 7/10: "<markmcclain> Also it is now July, which means were are 10 days away from H2 feature freeze."14:51
mesteryAlso, keep in mind gerrit and CI are down this weekend for a day.14:51
rkukuracould be business days14:51
mesteryAnd with the name change, that may cause some shifting and churn.14:51
markmcclainfreeze is 7.1014:51
mesterymarkmcclain: thanks!14:51
markmcclainbranch will be cut July 16th14:51
rkukuramarkmcclain: End of day 7/10?14:52
*** fnaval has joined #openstack-meeting14:52
markmcclainyes14:53
mesteryOK, we're running low on time, any other bugs people want to discuss now related to ML2?14:53
apechmestery: I'm all good14:54
mestery#topic Questions/Comments?14:54
rkukuraI'm good14:54
*** openstack changes topic to "Questions/Comments? (Meeting topic: networking_ml2)"14:54
mesteryOK, thanks for everyone's great work on all the ML2 items!14:54
mesteryFor those in the US, have a great holiday this week!14:54
apechthanks! Happy 4th14:54
mestery#endmeeting14:54
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"14:55
openstackMeeting ended Wed Jul  3 14:54:58 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:55
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-07-03-14.00.html14:55
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-07-03-14.00.txt14:55
rkukuraWe're going to need to get at least one other core involved in getting merges in early next week14:55
*** apech has quit IRC14:55
openstackLog:            http://eavesdrop.openstack.org/meetings/networking_ml2/2013/networking_ml2.2013-07-03-14.00.log.html14:55
mesteryrkukura: Yes, agree.14:55
johnthetubaguy1#startmeeting XenAPI15:02
openstackMeeting started Wed Jul  3 15:02:38 2013 UTC.  The chair is johnthetubaguy1. Information about MeetBot at http://wiki.debian.org/MeetBot.15:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:02
*** IlyaE has joined #openstack-meeting15:02
*** openstack changes topic to " (Meeting topic: XenAPI)"15:02
openstackThe meeting name has been set to 'xenapi'15:02
johnthetubaguy1Hi all15:02
johnthetubaguy1who is here for the meeting today?15:02
*** gongysh has joined #openstack-meeting15:02
*** mdomsch has joined #openstack-meeting15:03
*** terriyu has quit IRC15:03
euanhEuan here15:03
*** johnthetubaguy has quit IRC15:03
BobBallBob here15:04
*** ItSANgo has joined #openstack-meeting15:04
BobBallMate coming15:04
johnthetubaguy1So I was wanting to check progress towards H-215:04
*** matel has joined #openstack-meeting15:04
johnthetubaguy1so lets go to the blueprints15:04
johnthetubaguy1#topic blueprints15:05
*** openstack changes topic to "blueprints (Meeting topic: XenAPI)"15:05
johnthetubaguy1so has anyone got anything to report15:05
*** cliu_ has joined #openstack-meeting15:05
johnthetubaguy1H2 and H315:05
*** Sukhdev has quit IRC15:05
BobBallnope - didn't think we had blueprints in H215:05
*** dontalton has joined #openstack-meeting15:05
*** matiu has joined #openstack-meeting15:05
*** matiu has joined #openstack-meeting15:05
BobBallin terms of H3 we're not convinced the event reporting is high enough priority atm15:06
johnthetubaguy1hmm, I do I guess15:06
matelWhat was the etherpad address used during the summit?15:06
johnthetubaguy1can't remember right now15:06
matel#link https://etherpad.openstack.org/HavanaXenAPIRoadmap15:06
johnthetubaguy1lol, just found that too15:06
matelSo I will look at this #link https://blueprints.launchpad.net/nova/+spec/xenapi-volume-drivers15:07
*** michchap has quit IRC15:07
matelI need to look at what the cinder guys are doing around brick.15:07
johnthetubaguy1ah, yes, good point15:07
johnthetubaguy1that is not targeted for H right now15:07
BobBallI was tracking the pci pass through blueprint - it's just landing ATM which is great, and it's 95% hypervisor agnostic with only a few changes in the driver needed15:07
johnthetubaguy1that sounds cool15:08
*** lastidiot has joined #openstack-meeting15:08
johnthetubaguy1anything people are actively working on for H2?15:08
*** pcm_ has quit IRC15:08
johnthetubaguy1I guess the answer was no15:08
BobBallNot in terms of the published blueprints no15:08
*** egallen has quit IRC15:09
BobBallWhen we get off the Blueprints topic I'm sure we can say what we have been doing :D15:09
*** feleouet has quit IRC15:09
mateljohn, do you have any links for the brick work?15:10
johnthetubaguy1afraid not, worth looking at the cinder meeting minutes15:10
johnthetubaguy1so I have some blueprints15:10
johnthetubaguy1https://blueprints.launchpad.net/nova/+spec/xenapi-large-ephemeral-disk-support15:10
johnthetubaguy1I have that pending review, I removed the config15:10
johnthetubaguy1There was also this one:15:10
johnthetubaguy1https://blueprints.launchpad.net/nova/+spec/xenapi-guest-agent-cloud-init-interop15:10
johnthetubaguy1but I pushed that out to H3, it took a little while15:11
johnthetubaguy1so reviews welcome on the first one15:11
BobBallWell I can have a look - I think that we like the 2TB disk one15:12
BobBallyeah15:12
matelI have some -1s in my bag.15:12
johnthetubaguy1so, any more for blueprints?15:12
johnthetubaguy1matel: hehe15:12
BobBallit's hurting the nova review stats though!15:12
BobBall10 days!15:12
johnthetubaguy1its really getting slow, the queue is huge15:12
*** tanisdl has joined #openstack-meeting15:13
matelDoes the size of the ques have something to do with the number of reviewers?15:13
matels/ques/queues/g15:13
johnthetubaguy1a little bit15:13
johnthetubaguy1but mostly just the number of patches added15:13
johnthetubaguy1its the time many people start pushing their code15:14
matelOkay, we are diverging.15:14
johnthetubaguy1lots of v3 API stuff too15:14
johnthetubaguy1indeed15:14
johnthetubaguy1has anyone else got anything?15:14
johnthetubaguy1jump to open discussion?15:14
matelI updated the Quantum install wiki.15:14
mateltrying to keep it up to date.15:14
matelWe are looking at full tempest runs15:15
BobBallEuan has fixed a bug.15:15
johnthetubaguy1#topic Open Discussion15:15
*** openstack changes topic to "Open Discussion (Meeting topic: XenAPI)"15:15
BobBallBFV tests are passing too Mate - don't forget that! good change right there15:15
BobBalllast gating test that wasn't apssing15:15
BobBallpassing*15:15
johnthetubaguy1awesome15:15
johnthetubaguy1some good work on tempest it seems15:16
johnthetubaguy1any news on gating work from NYC?15:16
BobBallWe found + fixed a stability problem with smokestack and XenServer15:16
matelWe are not touching tempest, we are just looking at what the failures are.15:16
*** eharney has joined #openstack-meeting15:16
*** eharney has quit IRC15:16
*** eharney has joined #openstack-meeting15:16
johnthetubaguy1sure, just wondering what the planned path is / timeline is15:16
BobBallyeah - so the current plan is that someone (possibly Jim) will implement dependencies in zuul15:16
BobBallso you can have a depends-on patch bringing in another patch for testing and merging15:17
BobBallthat's a pre-requisite for any packaging really as if a nova change needs a packaging change they need to be synchronised15:17
BobBallThe packaging isn't something we are going to gate on - but if we can get the dependency management in then we can look at gating on smokestack test failures that are unrelated to packaging15:18
johnthetubaguy1erm, I was thinking more XenAPI related15:18
*** lloydde has joined #openstack-meeting15:18
matelAh, I have a question.15:18
BobBallit comes round to XenAPI with smokestack being more resilient because it currently breaks when packaging changes are needed / merged15:18
BobBalland giving us the option of only posting -ve reviews when we know it's a test failure and not a packaging issue15:19
BobBallwhich is a big part of the issue with getting smokestack gating15:19
BobBallyes Mate15:19
matelCould you guys look at it? #link https://bugs.launchpad.net/nova/+bug/119657015:19
uvirtbotLaunchpad bug 1196570 in nova "xenapi: pygrub running in domU" [Undecided,New]15:19
*** koolhead17 has joined #openstack-meeting15:19
johnthetubaguy1so, I thought we were looking at getting something other than smokestack gating?15:20
matelSo it's about having a disk image, and we would like to ask pygrub to decide if it is a PV or HVM guest15:20
*** kebray has joined #openstack-meeting15:20
matelsorry guys, just finish off the discussion around testing, I did not want to be rude.15:20
johnthetubaguy1matel: there is a bit of code that uses pygrub, thinking about it15:20
BobBallWe're also looking at the option of having a Xenserver-core VM with nova in dom0 running the tempest tests - but if the dependency thing is implemented, smokestack gating is an easy step forward15:21
mateljohnthetubaguy1: this is the code; my issue is that this code assumes you have pygrub in domU15:21
matelThat means that you might end up with different pygrub versions in dom0 and domU - dodgy15:21
BobBallonly due to the rootwrap?  Why does it use pygrub in domU rather than dom0?15:22
johnthetubaguy1matel: https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L192115:22
matelI would really want to run pygrub in dom0, or have a xapi extension....15:22
johnthetubaguy1matel: its not code most people use, if they have good glance metadata, but yes, I get your point15:22
mateljohnthetubaguy1: yes, I am referring to that code.15:22
BobBallThat bit will be trivial to move to dom0 if we want to - it just attaches the VDI to the domU only to run pygrub, so we can easily do it in dom015:23
johnthetubaguy1Personally, we should just default to HVM, and stop worrying about trying to detect it15:23
johnthetubaguy1but I guess we can see what other people think15:23
matelSo in order to get some outcome from this discussion, who prefers which option? A) remove it B) delegate to dom015:23
BobBallHVM isn't supported for most guests :)15:24
BobBallonly windows is supported in HVM15:24
BobBallB - delegate to dom0 or C - leave it if we have to...15:24
BobBallto fix the bug, definitely delegate15:24
johnthetubaguy1yeah, that works for me15:24
johnthetubaguy1just wondering if we really need it15:25
mateltbh, I like the remove.15:25
matelalthough I originally did not think about it.15:25
johnthetubaguy1its worth a mail to the list, see what people think15:25
BobBallyes we must boot linux guests as PV if we want them to be supportable15:25
matelThat's my favourite code modification - delete.15:25
johnthetubaguy1we are trying to get rid of auto detect15:25
matelThe best thing that could happen to code - get removed15:25
BobBalltherefore we must keep it or let something else specify it15:25
johnthetubaguy1yeah, you can specify os type in glance, so its not like you can't choose15:25
BobBallok - if you can specify in glance then that's OK15:26
matelI met with this code while i was booting from volume15:26
*** mdomsch_ has joined #openstack-meeting15:26
BobBallSo if an image in glance is PV then it'll boot PV I'm happy15:26
matelThe issue is that if you are booting from volume, the metadata might not be there.15:26
johnthetubaguy1yeah, that's the fun one, but you can launch an image that specifies the block device mapping and the correct os type15:26
BobBallwe just can't boot _all_ guests as HVM and trust they will negotiate up (which is what I thought you were suggesting)15:26
mateljohnthetubaguy1: have you ever tried to do that?15:28
*** marun has quit IRC15:28
*** BStokes has joined #openstack-meeting15:28
johnthetubaguy1matel: no, actually, its quite a new feature15:28
matelOkay, so Bob suggests to delegate this to dom015:28
*** yaguang has joined #openstack-meeting15:29
johnthetubaguy1BobBall: I wasn't thinking they would negotiate up, I was more thinking we need a better solution, guessing seems bad15:29
johnthetubaguy1Yeah, we could try for that15:29
*** marun has joined #openstack-meeting15:29
matelSimplest change is removal.15:29
*** bdpayne has joined #openstack-meeting15:29
BobBallhang on15:29
BobBallwait wait wait15:29
BobBallif we can typically rely on the metadata to determine if it should be PV or HVM then I'm happy with deleting the autodetect code15:30
johnthetubaguy1yeah, just default to HVM for giggles15:30
BobBallI know BFV currently doesn't have that metadata - but if that's a bug then we can still rely on metadata etc15:30
johnthetubaguy1well, you can do BFV from a glance image15:30
johnthetubaguy1then you get metadata15:30
johnthetubaguy1same thing you need for external ramdisk and kernels15:31
matelyes, so basically these are separate issues.15:31
BobBallif we _can't_ reliably rely on the metadata then we need autodetect15:31
*** vijendar has quit IRC15:31
BobBall(in dom0)15:31
johnthetubaguy1hmm, well maybe15:31
matelQuestion: do we want to autodetect if a given block device contains hvm vs pv stuff?15:31
johnthetubaguy1maybe15:32
BobBallI say no - if we need to detect the mode then we should have some form of metadata associated with the block device that says what it contains15:32
*** ujuc has quit IRC15:32
matelOkay, so Bob votes for removal.15:32
BobBallif the metadata route is typically the canonical source of information then that's what we should always use15:33
johnthetubaguy1we have that in glance, to some extent15:33
BobBallsorry for changing my vote.15:33
BobBallbut now I understand the problem better - and that we will still boot Linux guests as PV then I'm happy15:33
matelI vote for removal, because I want to make the code happier.15:33
johnthetubaguy1+115:33
matelLet's do it this way: I will submit a patch, and you can vote on the change.15:34
johnthetubaguy1and bring it back if we need to, in a better way15:34
matelin dom015:34
johnthetubaguy1yeah, the remove is a simple patch15:34
*** michchap has joined #openstack-meeting15:34
matelyes, let's YAGNI15:34
BobBall+15:34
BobBall715:34
BobBall+7 even15:34
matelseven?15:34
BobBallI don't think +1 is enough15:34
matelOkay, expect a patch soon.15:35
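
(The direction agreed here, dropping the in-domU pygrub autodetect and trusting image metadata, would reduce the PV/HVM decision to something like the helper below. The property names follow the usual glance conventions, but the helper itself is an illustrative assumption, not the actual patch.)

    # Illustrative sketch of the post-removal behaviour: decide PV vs HVM
    # from glance image metadata instead of attaching the VDI to domU and
    # running pygrub on it.  Helper name and defaults are assumptions.
    def determine_vm_mode(image_meta):
        props = image_meta.get('properties', {})
        vm_mode = props.get('vm_mode')
        if vm_mode in ('xen', 'pv'):
            return 'PV'
        if vm_mode == 'hvm':
            return 'HVM'
        # No explicit mode: fall back to os_type.  Windows must be HVM;
        # Linux guests are expected to be PV to remain supportable.
        return 'HVM' if props.get('os_type') == 'windows' else 'PV'
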
*** lloydde has quit IRC15:35
matelI am adding new items to the sprint backlog - my boss will love it.15:36
johnthetubaguy1:)15:36
johnthetubaguy1so we got anything else?15:36
matelI fixed some minor bugs last week, but nothing worth mentioning.15:37
BobBallI submitted a fix for snapshot reordering15:37
matelBad.15:37
BobBallcoalescing even15:37
matelAh.15:37
matelOkay, missunderstood.15:37
matelCould you link the change sir?15:37
BobBallbut it doesn't have a test yet and I think people want a test but I've been super busy and not able to code :/15:37
BobBall#link https://review.openstack.org/#/c/34528/ <-- Lonely changeset seeking review.15:38
matelUntested code is broken by design.15:38
BobBallit's a trivial change15:38
BobBallyeah yeah :)15:38
BobBallI'm happy to try and add a test (although I'm not quite sure how to test this one!)15:38
*** eharney_ has joined #openstack-meeting15:38
BobBallit's all about the order in which things get called so I'll have to think about it15:39
matelthat's a huge function, good luck.15:39
BobBalland my head's been elsewhere15:39
BobBallindeed15:39
johnthetubaguy1no +2 without a test :)15:39
BobBallperhaps I should delegate the writing of the test...15:39
johnthetubaguy1unless its "already covered"15:40
matel1000 story points15:40
BobBallyou're a mean man!15:40
BobBallyes15:40
BobBallit's "already covered"...15:40
matelI bet the reviewers are running coverage.15:40
BobBalldefinitely15:40
johnthetubaguy1… yeah...15:40
BobBallthe code is being exercised so coverage wouldn't find it15:40
matelOkay, let's stop it.15:40
BobBallthe problem is that both sets of code are fully exercised but in a different order :D15:41
matelThe problem is that the code is not really structured well15:41
BobBallcan I apply for a "too difficult to test" exception?15:41
matelSo it's not Bob's fault.15:41
BobBallyeah!15:41
matelI can take the job of testing it.15:41
*** jdurgin1 has joined #openstack-meeting15:42
BobBallI was kidding mate - I'm not the type to make others test my work15:42
matelreverse tdd15:42
*** eharney has quit IRC15:42
johnthetubaguy1if the ordering is a problem, lets keep it right15:42
BobBallI may ask for your advice though15:42
matelwe might want to extract something sensible.15:42
*** jbrogan has joined #openstack-meeting15:42
matelwe'll see.15:42
mateltake it offline15:42
BobBallI have an idea on how to test it15:42
BobBalljust no time this last week15:43
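
(One generic way to test "the same calls, but in a different order" without restructuring the function is to attach the collaborators to a single parent mock and assert on its combined call list. A rough sketch; the function and call names below are placeholders, not the actual vm_utils code.)

    # Generic call-ordering test sketch: child mocks attached to one parent
    # record into a single ordered mock_calls list.
    import unittest

    import mock

    def do_work(snapshot, wait_for_coalesce):
        snapshot('vdi-1')
        wait_for_coalesce('vdi-1')   # must happen after the snapshot

    class CallOrderTestCase(unittest.TestCase):
        def test_coalesce_waits_after_snapshot(self):
            parent = mock.Mock()
            do_work(parent.snapshot, parent.wait_for_coalesce)
            parent.assert_has_calls([
                mock.call.snapshot('vdi-1'),
                mock.call.wait_for_coalesce('vdi-1'),
            ])
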
matelOkay, anything else?15:43
*** eharney_ is now known as eharney15:43
BobBallnot from me15:43
*** eharney has quit IRC15:43
*** eharney has joined #openstack-meeting15:43
matelI'm done as well.15:43
johnthetubaguy1nothing from me15:43
matelKeen to get back to my terminal.15:43
johnthetubaguy1we all good?15:43
BobBallgo for it Mate15:43
matelsure15:43
BobBallthere's a test waiting for you to write.15:44
matelyes15:44
matelAnd I can remove some lines in exchange15:44
BobBallGood plan.15:44
johnthetubaguy1we all done?15:44
matel["sure"] * 100015:45
*** michchap has quit IRC15:45
johnthetubaguy1#endmeeting15:46
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"15:46
openstackMeeting ended Wed Jul  3 15:46:06 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:46
openstackMinutes:        http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-07-03-15.02.html15:46
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-07-03-15.02.txt15:46
openstackLog:            http://eavesdrop.openstack.org/meetings/xenapi/2013/xenapi.2013-07-03-15.02.log.html15:46
jdurgin1hi16:01
mkodererHi16:02
*** SergeyLukjanov has quit IRC16:02
dosaboyhowdi16:02
*** tanisdl_ has joined #openstack-meeting16:02
winston-dhi16:02
jgriffith#starmeeting cinder16:02
bswartzhello16:02
jgriffith#start meeting cinder16:02
jgriffithsighhh....16:02
*** tanisdl has quit IRC16:02
*** tanisdl_ is now known as tanisdl16:02
*** johnthetubaguy1 has quit IRC16:02
zhiyanhi16:02
matelhi16:02
winston-duvirtbot: hey, time to work16:03
*** Mandell has joined #openstack-meeting16:03
uvirtbotwinston-d: Error: "hey," is not a valid command.16:03
*** kmartin has joined #openstack-meeting16:03
bswartz^ spelling problems16:03
uvirtbotbswartz: Error: "spelling" is not a valid command.16:03
* guitarzan chuckles16:03
jgriffith#startmeeting cinder16:03
openstackMeeting started Wed Jul  3 16:03:40 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.16:03
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:03
*** openstack changes topic to " (Meeting topic: cinder)"16:03
openstackThe meeting name has been set to 'cinder'16:03
thingeeo/16:03
jgriffithhaha... starmeeting... startmeeting16:03
jgriffithfigure it out virtbot!16:03
winston-d\o16:03
eharneyhi16:03
zhiyanhi again16:04
matelhi16:04
jgriffithhehe16:04
med_hlo16:04
jgriffithmed_: yo!16:04
jgriffithkk...16:04
jgriffithlet's roll16:04
jgriffith#topic update on ceph backup16:04
*** openstack changes topic to "update on ceph backup (Meeting topic: cinder)"16:04
jgriffithdosaboy: ??16:04
*** dontalton has quit IRC16:04
dosaboyo/16:04
dosaboyok so16:04
dosaboyfirst two parts of bp merged16:05
dosaboythanks all for really good feedback and reviews16:05
dosaboyfinal part i.e. diff backups is WIP and almost ready16:05
med_https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph16:05
dosaboyI need to think of a decent way to do incremental backups with what we have16:05
dosaboyi might create a separate task for incremental backup since current task is for differential backups16:06
*** dkliban has quit IRC16:06
dosaboyalso needs more testing16:06
dosaboyand happy to see mkoderer stress testing :)16:06
dosaboyso keen to hear more about that16:06
mkodereryesss ;)16:06
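
(For anyone unfamiliar with the mechanism behind the differential backups: RBD can ship only the blocks changed since a named snapshot via export-diff/import-diff. A heavily simplified illustration of that flow; pool and image names are placeholders, and this is not the driver code.)

    # Heavily simplified differential RBD backup: snapshot the source, then
    # stream only the delta since the previous backup snapshot into the
    # backup image.  Assumes the destination already has from_snap.
    import subprocess

    def differential_backup(src, dest, from_snap, new_snap):
        # e.g. src='volumes/volume-1234', dest='backups/volume-1234.backup'
        subprocess.check_call(['rbd', 'snap', 'create',
                               '%s@%s' % (src, new_snap)])
        export = subprocess.Popen(
            ['rbd', 'export-diff', '--from-snap', from_snap,
             '%s@%s' % (src, new_snap), '-'],
            stdout=subprocess.PIPE)
        subprocess.check_call(['rbd', 'import-diff', '-', dest],
                              stdin=export.stdout)
        export.stdout.close()
        if export.wait() != 0:
            raise RuntimeError('rbd export-diff failed')
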
dosaboyquestion for y'all16:06
dosaboydo we have a place for functional tests?16:06
*** tanisdl has quit IRC16:06
*** psedlak_ has quit IRC16:06
jgriffithdosaboy: tempest...16:07
dosaboyok16:07
jgriffithdosaboy: but that's not appropriate here16:07
jgriffithdosaboy: that's what's used for gate tests, so we would need to look at that16:07
dosaboyright16:07
jgriffithdosaboy: ie having tests in tempest that aren't gated16:07
matelI guess the goal is 100% unit test coverage :-)16:07
jgriffithmatel: :)16:08
mkoderermatel: that works already ;)16:08
dosaboythere are a lot of corner cases with this code16:08
dosaboyand not all reachable with unit tests16:08
dosaboyso I think accompanying func tests would be good16:08
dosaboyI have been doing some16:08
dosaboybut nothing commited16:08
dosaboyfunctional that is16:08
*** lloydde has joined #openstack-meeting16:08
jgriffithdosaboy: ideally... you could deploy devstack16:08
dosaboyso yeah thats kind of it16:08
jgriffithdosaboy: configure with your gear and run tempest16:08
*** jgallard has joined #openstack-meeting16:09
dosaboyjgriffith: that is what I am using16:09
dosaboyI am testing with devstack and two ceph clusters16:09
jgriffithright... but what I'm saying is16:09
dosaboyoh yeah16:09
jgriffiththe whole point of the API's is to abstract what those back-ends are16:09
jgriffithso you should be able to write generic functional tests to work across the board16:09
dosaboyyeah so I have tested compatibility with other drivers and backends16:09
jgriffiththe gates and official stack testing will continue to use swift16:09
dosaboye.g. rbd driver with swift backup16:10
dosaboylvm driver with ceph backups etc16:10
jgriffithdosaboy: right, so that matrix is a bit tricky to support in the OpenStack CI obviously :)16:10
guitarzanis the question just where do we check in functional tests?16:10
dosaboyI *hope* I have covered most variants16:10
jgriffithguitarzan: yes16:10
mkodererdosaboy: I am quite sure we will test all Ceph cases here16:10
mkodererbut no lvm16:10
dosaboymkoderer: excellent!16:10
jgriffithguitarzan: but there's also a question IMO about how device specific those functional tests are16:11
dosaboywell from the api perspective they are device agnostic16:11
*** bpb_ has joined #openstack-meeting16:11
jgriffithFor specific devices (other than ref impl) I'd just suggest a github repo of your own that people can use/contribute to16:11
dosaboyand tests would have to be repeated for each device type16:11
jgriffithnon-openstack official16:11
guitarzanis there a problem with including device specific test scenarios?16:11
dosaboyjgriffith: sounds like a plan16:12
jgriffithguitarzan: there is if you gate on them16:12
dosaboyok well that's it from me16:12
winston-djgriffith: i guess device specific functional tests should be done by device vendors since they have the *device*16:12
*** sleepsonthefloor has joined #openstack-meeting16:12
med_as no one may have the devices16:12
guitarzanare the gate tests in the cinder tree?16:12
jgriffithguitarzan: no16:12
jgriffithguitarzan: they're in tempest16:12
*** michchap has joined #openstack-meeting16:12
jdurgin1glance has functional tests for backends that are disabled when the backend isn't set up16:13
jgriffithguitarzan: are you proposing that we open up the cinder tree for device specific functional tests?16:13
jgriffithguitarzan: I'd rather not do that unless everybody else disagrees16:13
guitarzanjgriffith: I'm tossing that option out there, yes16:13
* guitarzan shrugs16:13
*** bpb has quit IRC16:13
jgriffithI can't see people managing/maintaining and reviewing code for devices they know nothing about and can't run16:13
guitarzanthey already do that for driver code16:14
jgriffithguitarzan: Ok16:14
jgriffithguitarzan: go for it then16:14
jgriffithfine by me16:14
guitarzanjdurgin1: those live in the glance tree?16:14
jdurgin1yes16:15
jgriffithso everybody agree with guitarzan that folks should just put their vendor specific functional tests in the cinder trunk and we should review/maintain said code?16:15
* dosaboy looks at glance tree16:15
* guitarzan chuckles16:15
jgriffithguitarzan: why chuckle?16:15
eharneycan it be separated from the always-run unit tests?  different folder or something?16:15
guitarzaneharney: I think that's the idea16:16
jgriffitheharney: yes16:16
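The glance pattern jdurgin1 and eharney are describing roughly amounts to keeping backend-specific functional tests in their own module and having them skip themselves unless the backend is actually configured. A minimal sketch of that idea follows; the CEPH_CONF environment variable, the class name, and the placeholder assertion are illustrative only, not code from glance or cinder.

    import os
    import unittest

    # Unset environment variable => the real backend is not available here.
    CEPH_CONF = os.environ.get('CEPH_CONF')


    @unittest.skipUnless(CEPH_CONF, 'Ceph backend is not configured')
    class CephBackupFunctionalTest(unittest.TestCase):
        """Would live under a separate functional-test folder so the
        always-run unit test job never picks it up by default."""

        def test_backup_restore_roundtrip(self):
            # Talking to the real backend configured by CEPH_CONF would
            # happen here; omitted because it is deployment specific.
            self.assertTrue(os.path.exists(CEPH_CONF))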
zhiyanjdurgin1: glance functional tests had some refactoring a week ago (last week?)16:16
bswartzwhy would we want functional tests in the cinder tree? aren't unit tests enough?16:16
DuncanTUnit tests are never enough16:16
guitarzannot usually16:16
dosaboybswartz: there are always going to be cases that a unit test cannot cover16:16
eharneyi'm not sure i see the value of tests that most people will never run though16:16
med_bswartz, well, as noted, functional tests test differently.16:16
jgriffitheharney: +100016:17
guitarzaneharney: that's a good point16:17
jdurgin1it makes sense for open source backends...16:17
bswartzyeah but the purpose of the unit tests isn't to prove the driver is correct, they are to catch idiotic mistakes that break drivers by accident16:17
med_jgriffith, are you concerned about bitrot cruft, or just the initial review time/mindshare it takes to get in?16:17
bswartzfunctional tests are valuable, but I don't see why they need to be in the cinder tree16:17
jgriffithmed_: both16:17
winston-di think where test code sits doesn't matter; whether tests are run regularly is more important.16:17
jgriffithmed_: just don't see much value16:18
jgriffithmed_: ROI is not very good16:18
dosaboyjdurgin1: do you know when and how the glance func tests are run?16:18
dosaboylike what do they run against16:18
dosaboyare they automated16:18
winston-dvendors should have QA team publish test reports/file bugs against their driver regularly.16:18
*** mkollaro has quit IRC16:18
jdurgin1dosaboy: they're not automated afaik16:18
thingeemed_: that and maintaining. driver maintenance has been a problem too16:19
bswartzwinston-d: +116:19
jgriffithwinston-d: I still say that's simple, just run tempest with your dev configured at a minimum16:19
guitarzanglance has them as an option in run_tests16:19
winston-djgriffith: but i don't have a solidfire or 3par16:19
jgriffithwinston-d: and there's my point :)16:19
kmartinIf only the vendor can run them, why keep them in cinder? Let the vendor keep them somewhere else. It will also increase review times16:20
jgriffithwinston-d: so why would we put solidfire or 3par specific functional tests in the cinder tree16:20
jgriffithwinston-d: why not just let folks use tempest on their own and configure what they want to test16:20
*** cody-somerville has quit IRC16:20
thingeei think we should be worried about adding yet more code that no one will really know anything about16:20
eharneyi have to agree w/ kmartin there.  The maintenance burden for Cinder doesn't seem to have much payoff16:20
jgriffithwinston-d: or... provide test libs on github if we so choose16:20
dosaboywhat is wrong with having functional tests part of cinder that can be cherry-picked?16:20
thingeei worry about maintenance. who owns that?16:20
jgriffithkmartin: eharney that's what I was saying but guitarzan *chuckled* at me :)16:21
jgriffiththingee: agreed16:21
dosaboyso e.g. I decide to implement some solidfire feature, then I have some test I can run16:21
eharneythingee: right.  putting them in-tree makes people who aren't really that interested responsible..16:21
jdurgin1I don't see much maintenance burden from self-contained test cases16:21
jgriffithso as I started the conversation, I don't want to be responsible...16:21
jgriffithI don't know how the different back-ends are supposed to react/behave16:22
DuncanTjgriffith: One advantage of public test cases is they act as additional documentation. That isn't an arguement for having them in the main tree though16:22
thingeejdurgin1: every time we disable a driver test and we can't get hold of the maintainer has sucked16:22
jgriffithI do know how drivers are *expected* to behave however16:22
jgriffithDuncanT: again... umm... isn't that what the API is supposed to do?16:22
jgriffithDuncanT: and what tempest would flush out?16:23
jgriffithDuncanT: ensures that you adhere to the API correctly16:23
winston-djgriffith: but sometimes there are configuration tricks and stuff16:23
jgriffithwinston-d: ok16:23
DuncanTjgriffith: Maybe just enhancing tempest would be enough16:23
jgriffithDuncanT: enhancing in what way?16:23
winston-d+1 for tempest16:23
*** michchap_ has joined #openstack-meeting16:23
dosaboyI think I agree that tempest would be sufficient for the moment16:23
DuncanTjgriffith: There are lots of bits of code it doesn't cover16:23
winston-dadd functional tests to tempest?16:23
jgriffithyou know anybody can contribute to tempest right?16:24
DuncanTjgriffith: But I guess just fixing that is good enough16:24
matelMaybe it's just me, but what sort of functional tests are we talking about exactly? Tests that are using OS API?16:24
DuncanTjgriffith: Yeah, sorry, I was thinking out loud16:24
*** BobBall has left #openstack-meeting16:24
jgriffithDuncanT: no problem16:24
*** michcha__ has joined #openstack-meeting16:24
dosaboymatel: e.g. I want to test cinder backup api with Swift backend and then with Ceph backend. I need a functional test to do that.16:25
jgriffithmatel: I think some folks are interested in things like performance testing and more edge-case/vendor specific tests16:25
*** michchap has quit IRC16:25
jgriffithmatel: as well as the general API testing that tempest should cover16:25
jgriffithIf folks want to submit their functional tests to Cinder and there are people that want to review/maintain fine by me16:26
*** michchap has joined #openstack-meeting16:26
jgriffithbut personally I really don't have time for that16:26
jgriffithbut other people can certainly review/approve such patches16:26
thingeei see the value in functional tests. i just don't think we really care if a vendor specific solution works.16:26
jgriffiththingee: +116:27
*** michch___ has joined #openstack-meeting16:27
mateljgriffith: I implemented a cinder driver a long time ago, for XenAPINFS. So I defined a sort of library that would be used by cinder, and I had functional tests for that library, meaning I had a running storage backend expected for those tests. Is this the kind of tests that we are talking about?16:27
*** michchap_ has quit IRC16:28
jgriffithmatel: I think that's part of it yes... there are multiple levels of detail here though16:28
dosaboymatel: yes, that is one use16:28
matelOkay, because those tests do not really fit tempest I guess.16:28
matelAt least I put them to my own repo, copied the cinder stuff there, and ran the tests.16:29
jgriffithOk, that's a 1/2 hour on this topic... perhaps we should move on?16:29
matelsure16:29
thingeeyes16:29
jgriffithFeel free to submit whatever you want16:29
matel:-)16:29
winston-dyeah, we have more to argue...16:29
*** michcha__ has quit IRC16:29
dosaboyagreed16:29
thingeelets talk more about in the cinder room.16:29
jgriffithI suppose that sort of covered the "stress test for cinder backup" ?16:29
mkodererok next topic?16:29
thingeeon a train atm on my phone :(16:29
*** michchap_ has joined #openstack-meeting16:30
*** Tross has quit IRC16:30
dosaboygo for it16:30
mkodererno not really16:30
*** spzala has quit IRC16:30
jgriffith#topic stress test for backup16:30
*** openstack changes topic to "stress test for backup (Meeting topic: cinder)"16:30
mkodererjust wanted to tell you that we are going to have a stress test in the next sprint (3 weeks)16:30
jgriffithmkoderer: have at it16:30
dosaboysounds like functional test to me ;)16:30
jgriffithdosaboy: +116:30
mkodererso we reserved around 100 TB and a lot of machines16:30
*** michchap has quit IRC16:30
mkodererI already had a look at openstack-stress16:31
*** cody-somerville has joined #openstack-meeting16:31
*** michchap has joined #openstack-meeting16:31
mkodererbut then I found out that we already have such a tool inhouse16:31
*** tjones has joined #openstack-meeting16:31
mordredoh good. in house tools. /me grumbles16:31
mkodererso we will decide what we use or use both16:31
*** michch___ has quit IRC16:31
jgriffithmordred: :)16:31
*** anniec has joined #openstack-meeting16:31
mordreddo you guys know about the stuff jog0 has done with FakeVirt in nova?16:31
mkoderermordred: we will put it on github ofc16:31
*** tjones has left #openstack-meeting16:32
* mordred continues to grumble but will move on16:32
jgriffithmordred: yes, a bit (fakevirt)16:32
mkodererbut we will focus only on ceph16:32
mkodererso no testing of lvm or something else16:32
jgriffithmordred: the debate is some folks would like to check in functional tests specific to their device16:32
mkodererI think we will publish the results on our blog or somewhere16:32
*** spzala has joined #openstack-meeting16:32
mordredah16:32
mkodererok so that's all ;)16:33
thingeebbl16:33
jgriffithmordred: my stance is it should be tempest tests they can configure their backend and run16:33
mordredI would agree16:33
jgriffithmordred: or additional *special* tests on public github16:33
jgriffithmordred: but shouldn't go in the Cinder tree16:33
dosaboymkoderer: do note that ceph backup is not expected to be performant until diff/incr backups are in place16:33
*** michcha__ has joined #openstack-meeting16:33
mordredis there an example of what type of 'special' test someone wants?16:33
mkodererdosaboy: yes we know it16:34
*** michchap_ has quit IRC16:34
jgriffithmordred: I've also mentioned in the past IIRC there's some things we can do if folks want to give you access to their lab/gear to do periodic tests?16:34
winston-dmordred: so you are telling us that you have lots of h/w ready for stress tests and will publish your in-house tool as well. that's it?16:34
mkodererbut we already need to prepare the environment16:34
winston-dsorry, it should go to mkoderer16:34
mordredjgriffith: yes. well, sort of - it's not that they give us access even16:34
jgriffithwinston-d: no, I think we're saying that vendor specific qual and testing should be up to the vendor, not OpenStack per se16:34
mordredjgriffith: there is a mechanism whereby anyone can subscribe to the stream of events out of gerrit16:34
mordredjgriffith: and can run tests as they see fit16:35
*** dkliban has joined #openstack-meeting16:35
mkodererwinston-d: yes that's it16:35
jgriffithmordred: ahh... even better16:35
mordredjgriffith: and report the results back to the code review16:35
*** mkollaro has joined #openstack-meeting16:35
mordredjgriffith: like how smokestack works16:35
*** tjones has joined #openstack-meeting16:35
mordredit's a piece of cake to do16:35
jgriffithmordred: cool... smokestack was what I had in mind16:35
zhiyanmkoderer: have you written the perf test case details down somewhere?16:35
*** spzala has quit IRC16:35
*** michchap has quit IRC16:35
mordredthere's even a jenkins plugin a person can use that will do all of the gerrit communication for them16:35
jgriffithSo my point in all of this is that if they don't feel that tempest is sufficient they should submit enhancements to tempest16:35
mkodererYes sure16:36
jgriffithand use the method you described for their specific devices16:36
mordredyup!16:36
*** Tross has joined #openstack-meeting16:36
*** fmanco has joined #openstack-meeting16:36
mordredhttp://ci.openstack.org/third_party.html16:36
*** spzala has joined #openstack-meeting16:36
*** lpabon has joined #openstack-meeting16:36
*** fmanco has left #openstack-meeting16:36
jgriffithmordred: cool!16:36
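For reference, the third-party testing mechanism mordred describes boils down to listening to Gerrit's event stream over SSH and reacting to new patch sets; the gerrit-trigger Jenkins plugin he mentions does the same thing without custom code. A rough sketch, assuming an SSH account on review.openstack.org (the username and the run_backend_tests hook are placeholders):

    import json
    import subprocess

    def run_backend_tests(event):
        # Placeholder: kick off the vendor lab's test job here and later
        # report the result back to the code review.
        print('would test change %s' % event['change']['number'])

    cmd = ['ssh', '-p', '29418', 'myuser@review.openstack.org',
           'gerrit', 'stream-events']
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

    for line in iter(proc.stdout.readline, b''):
        event = json.loads(line.decode('utf-8'))
        # Only react to new patch sets against cinder.
        if (event.get('type') == 'patchset-created'
                and event['change']['project'] == 'openstack/cinder'):
            run_backend_tests(event)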
jgriffithOk... so that was pretty much the same topic after all :)16:37
mkodererjgriffith: yes seems so ;)16:37
jgriffithso we have 38 minutes on functional tests :)16:37
med_heh. but now that's two in 37 mins so our rate is increasing16:37
*** michcha__ has quit IRC16:38
jgriffithmed_: haha!16:38
guitarzanhahaha16:38
jgriffithmed_: always has a glass 1/n full16:38
dosaboykeep the flame going!16:38
*** hartsocks1 has quit IRC16:38
*** hemnafk is now known as hemna16:38
*** Mandell has quit IRC16:38
haomaiwangNo more topic?16:39
mkoderernext topic?16:39
*** jecarey_ has quit IRC16:39
jgriffith#topic flatten volumes16:39
*** openstack changes topic to "flatten volumes (Meeting topic: cinder)"16:39
jgriffithwinston-d: ^^16:39
*** boris-42 has quit IRC16:39
winston-dyup16:39
*** sushils has quit IRC16:40
winston-dthere is a bp intended to implement a public api extension to remove dependencies of a volume/snapshot16:40
mateldo we have a link?16:41
winston-dhere's the bp: https://blueprints.launchpad.net/cinder/+spec/add-flat-volume-api16:41
zhiyanwinston-d: this seems need backend/storage support16:41
guitarzancertainly does16:41
winston-dand avishay and I have different opinions about whether this kind of functionality should be a public api or done internally16:41
matelI think in the XenServer world, we call it coalescing16:41
mkodererjgriffith: we skip topic refactor-backup-service? ;)16:42
hemnahrmm16:42
*** dkliban has quit IRC16:42
jgriffithmkoderer: I'll come back around to it16:42
hemnaour backend won't support this16:42
mkoderernp16:42
jgriffithhemna: most won't16:42
*** dkliban has joined #openstack-meeting16:42
hemnaunless we clone the volume16:42
DuncanTAny backend can do it with a full copy...16:42
hemnashould we be making public APIs for things that most backends won't support ?16:42
winston-dso i would like to hear some feedback like do you guys want to see this implement as public api or not?16:42
jgriffithDuncanT: and any volume can continue to function as they do today by the same principal16:43
haomaiwanghemna +116:43
zhiyanbut IMO, probably a lot of backends don't support it natively16:43
bswartzwinston-d: netapp can support this16:43
zhiyangpfs support it16:43
jgriffithwinston-d: I've already spoken to this multiple times and my vote is -116:43
jdurgin1hemna: imo that's why this makes sense as an extension rather than a core api16:43
winston-djgriffith: noted16:43
jgriffithwinston-d: :)16:43
winston-djdurgin1: any comment? i think this bp is from ceph use cases16:43
bswartzI am in favor of it, but I'd prefer something that was less painful for those who can't support it16:43
*** IlyaE has quit IRC16:43
winston-dbswartz: which is?16:44
bswartzI'm not sure16:44
*** zehicle_at_dell has quit IRC16:44
bswartzif this is the only way to support breaking the dependency between snapshots and volumes then I want it16:44
jdurgin1so this is meant to address an issue for some backends like ceph where a clone is dependent on the snapshot it came from16:44
*** zehicle_at_dell has joined #openstack-meeting16:45
jdurgin1the dependency can be broken by effectively doing a full copy, but obviously having copy-on-write has its advantages as well16:45
guitarzanjdurgin1: wasn't that case already fixed with the rbd config option?16:45
hemnafor us a clone is a complete copy and has no dependencies and in fact is the way to avoid dependencies from snapshotting volumes and creating volumes from snapshots.16:45
guitarzanjdurgin1: ahh, nevermind, handling both is harder :)16:46
jdurgin1guitarzan: exactly16:46
winston-djdurgin1: can you remove dependency internally, transparent to end-user?16:46
jgriffithhemna: +1 as is true for SF and LVM16:46
zhiyanwe can implement some code to allow a particular backend to support separation, but it's not easy if they don't support it natively.16:46
jdurgin1winston-d: no, that's the issue really16:47
winston-djdurgin1: why not?16:47
guitarzanwinston-d: can't handle both cow and non-cow cases16:47
hemnaso is the implementation of the ceph driver's clone creating this dependency because of the ceph apis and can that just be avoided by doing something else?   Instead of backdooring this with creating a whole new public API that no one supports?16:47
winston-dguitarzan: then why a public api can help?16:47
jdurgin1hemna: no, this isn't the clone functionality16:48
hemnathe BP says cloning16:48
jdurgin1hemna: it's ceph's copy-on-write functionality when creating a volume from a snapshot16:48
jdurgin1hemna: which is called cloning16:48
hemnagah16:48
hemnaoverloading of terms here then16:49
jgriffithto be clear, the dependency isn't really in Cinder then, it's in Ceph16:49
jgriffithis that accurate?16:49
haomaiwangyes16:49
winston-dso i still don't get why ceph can't do things like remove the dependency automatically when needed16:49
jdurgin1yes, and other drivers like nexenta have this issue too16:49
guitarzanwinston-d: I think the question is now to tell when it's needed or not16:49
guitarzans/now/how/16:50
jdurgin1winston-d: so like I mentioned on the review, a snapshot may have many volumes created from it16:50
*** Mandell has joined #openstack-meeting16:50
winston-djdurgin1: ok, and?16:50
jgriffithwinston-d: jdurgin1 and to be clear, creating a volume from snapshot in the other drivers does not result in a dependent volume16:50
jdurgin1winston-d: and we probably don't want to use up a lot more space and i/o doing full copies for all of them at once when someone wants to delete the snapshot16:50
jdurgin1jgriffith: it does for nexenta16:50
jgriffithjdurgin1: sorry... Ceph and Nexenta16:51
jdurgin1jgriffith: and maybe others, who knows16:51
*** leizhang has quit IRC16:51
jgriffithjdurgin1: my point as I've said before is that I think there could/should be a distinction between clone and snapshot16:51
*** hartsocks has joined #openstack-meeting16:51
jgriffiththey're NOT the same thing16:51
*** tedross has quit IRC16:51
jdurgin1jgriffith: no, they're not16:51
jgriffithand that's exactly why we have this issue16:51
winston-djdurgin1: and when some user invokes such a call from the public api, aren't you still in the same trouble?16:51
hemnajgriffith, +116:52
haomaiwangHmm, I think the strive should be given to ceph and nexenta.16:52
winston-djdurgin1: i mean a user said 'flatten this snapshot'16:52
jgriffithjdurgin1: but for Nexenta and Ceph you're saying that clones are implemented with internal snapshots16:52
jgriffithhaomaiwang: what do you mean by "the strive"?16:52
jdurgin1winston-d: no, the api would be 'flatten this volume'16:52
*** elo has quit IRC16:53
hemnaugh16:53
jdurgin1winston-d: so that this particular volume does not depend on its parent snapshot16:53
hemnajdurgin, that's a clone :)16:53
winston-djdurgin1: cool. why would a user want to do that to a volume?16:53
*** leizhang has joined #openstack-meeting16:54
winston-djdurgin1: he can remove his created-from-snapshot volume anytime, right?16:54
jdurgin1winston-d: yes, but he cannot remove the snapshot it was created from16:54
jdurgin1winston-d: and flattening can also improve performance for some workloads16:55
winston-djdurgin1: right, so he should invoke the api saying 'flatten this snapshot' instead of 'flatten one of the clone'16:55
*** Mandell has quit IRC16:55
*** pcm_ has joined #openstack-meeting16:55
*** kirankv has joined #openstack-meeting16:55
*** terry7 has joined #openstack-meeting16:55
*** tayyab has joined #openstack-meeting16:55
guitarzanwinston-d: no, the snapshot is already flat16:55
guitarzan:)16:55
winston-dwell, only 5 min left.16:55
winston-dwe can continue this discussion in cinder channel.16:56
guitarzanwe might need to stop digging into details during meetings16:56
hemnaand we're back to what backends actually support flattening a snapshot ?16:56
hemna1 ?16:56
*** Mandell has joined #openstack-meeting16:57
jdurgin1let's continue in the cinder channel like winston suggested16:57
winston-dfull copy?16:57
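For context, the "flatten this volume" operation being debated corresponds to a single call in the Ceph python bindings: copy all data up from the parent snapshot so the clone no longer depends on it. A minimal sketch, with pool name, volume name, and config path as placeholders and error handling omitted:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    image = rbd.Image(ioctx, 'volume-1234')
    try:
        # After flatten() the parent snapshot can be deleted without
        # affecting this volume, which is exactly the dependency the
        # proposed API extension wants to break.
        image.flatten()
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()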
winston-dmkoderer: you are up for refactoring...16:57
mkodererok16:57
jgriffith#topic backup service refactor16:57
*** openstack changes topic to "backup service refactor (Meeting topic: cinder)"16:57
winston-d3min left, sorry16:57
mkodererso just quickly16:57
mkodererabout https://blueprints.launchpad.net/cinder/+spec/refactor-backup-service16:58
mkodererthanks for the reviews16:58
*** nati_ueno has joined #openstack-meeting16:58
mkodererso we agreed to rename backup/service to backup/driver16:58
winston-d+1 for that16:58
mkodererI will work on it in the next few days16:58
DuncanTmkoderer: Only if you do the renaming magic so old config files still work16:58
mkodererbut we will keep the database field "backup.service"?16:59
jgriffithDuncanT: +1, should be trivial at this point16:59
mkodereryes I know16:59
*** jlucci has joined #openstack-meeting16:59
mkodererthis is what I am working on ;)16:59
DuncanTmkoderer: Database migrations are a pain in the arse on live systems, so if renaming the field is only to look pretty, don't bother16:59
jgriffithmkoderer: we shouldn't have anything driver/backend specific in the DB anyway17:00
mkodererDuncanT: yes thats the point17:00
med_timezup17:00
*** haomaiwa_ has joined #openstack-meeting17:00
*** leizhang has quit IRC17:00
jgriffithmed_: indeed17:00
mkodererjgriffith: the service is stored there..17:01
jgriffithmkoderer: don't change the db table :)17:01
jgriffithmkoderer: huh?17:01
*** cbananth has joined #openstack-meeting17:01
mkodereryes I won't ;)17:01
jgriffithmkoderer: my point is the driver info isn't, the service is17:01
*** haomaiwang has quit IRC17:01
*** eglynn has quit IRC17:01
jgriffiththe service doesn't change17:01
jgriffithjust the driver17:01
mkodererok17:01
jgriffiththe whole point of abstraction is just that17:01
jgriffithmkoderer: no?17:01
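DuncanT's request that old config files keep working is the sort of thing oslo.config's deprecated option names handle directly. A sketch of how the rename could be registered; the option name, default, and help text are illustrative rather than the exact cinder change, and the import path shown is the 2013-era oslo.config one:

    from oslo.config import cfg

    backup_opts = [
        cfg.StrOpt('backup_driver',
                   default='cinder.backup.drivers.swift',
                   # Old configs that still set 'backup_service' keep working.
                   deprecated_name='backup_service',
                   help='Driver to use for volume backups'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(backup_opts)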
*** mrodden1 is now known as mrodden17:01
*** bswartz has left #openstack-meeting17:02
jgriffithK... guess everybody is leaving17:02
*** Eustace has joined #openstack-meeting17:02
jgriffith#endmeeting cinder17:02
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"17:02
openstackMeeting ended Wed Jul  3 17:02:19 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-07-03-16.03.html17:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-07-03-16.03.txt17:02
openstackLog:            http://eavesdrop.openstack.org/meetings/cinder/2013/cinder.2013-07-03-16.03.log.html17:02
*** winston-d has left #openstack-meeting17:02
*** elo has joined #openstack-meeting17:02
*** ajforrest has left #openstack-meeting17:02
hartsocks#startmeeting VMwareAPI17:02
openstackMeeting started Wed Jul  3 17:02:59 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.17:03
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:03
*** openstack changes topic to " (Meeting topic: VMwareAPI)"17:03
openstackThe meeting name has been set to 'vmwareapi'17:03
hartsocksWho's around for VMwareAPI subteam meeting time!17:03
hartsocksName, company this time.17:03
hartsocksJust because...17:03
hartsocks:-)17:03
danwenthello!  Dan Wendlandt, vmware17:03
kirankvHi! Kiran, HP17:03
EustaceHi Eustace, HP17:03
danwentthough technically people should feel free to not indicate a company, if they prefer not to.  In openstack, people are also free to contribute as individuals17:04
yaguangHi all17:04
*** derekh has quit IRC17:04
*** michchap has joined #openstack-meeting17:04
yaguangyaguang, Canonical17:04
*** mkollaro has quit IRC17:04
hartsocks@danwent thank you. yes.17:04
hartsocksIf you don't want to name a company, you don't have to.17:04
*** sabari_ has joined #openstack-meeting17:04
*** joel-coffman has quit IRC17:04
*** gary_th has joined #openstack-meeting17:05
hartsocksI'm Shawn Hartsock from VMware tho' and this is the part of the meeting where we talk bugs...17:05
hartsocks#topic bugs17:05
*** openstack changes topic to "bugs (Meeting topic: VMwareAPI)"17:05
hartsocksAnyone have a pet bug that needs attention?17:05
*** jgallard has quit IRC17:05
*** garyTh has quit IRC17:05
hartsocksThe silence is deafening.17:06
*** tjones has quit IRC17:06
yaguangI have one that solves an incompatibility issue with PostgreSQL17:06
*** cody-somerville has quit IRC17:06
hartsocksHrm. Well, I meant bugs that are related to VMware's API's and drivers specifically. :-)17:07
*** jbjohnso has quit IRC17:07
yaguangoh, sorry17:07
kirankvits related to vmware :)17:07
hartsocksIs it?17:07
yaguanghttps://bugs.launchpad.net/nova/+bug/119513917:07
uvirtbotLaunchpad bug 1195139 in nova "vmware Hyper  doesn't report hypervisor version correctly to database" [Undecided,In progress]17:07
kirankvthe version issue17:07
hartsocksMy apologies.17:08
kirankvno worries17:09
hartsocksHmm… I will look more closely at this one later… but IIRC you can have versions like 5.0.0u117:09
hartsocksNot sure how that would work.17:09
*** Fdot has joined #openstack-meeting17:09
sabari_yes, i would suggest moving to a String/Text field type in the database17:09
*** tjones has joined #openstack-meeting17:09
kirankvwell but the field that is being retrieved gives the numerals only and never the update versions u1, u2... not sure if that has changed now17:10
yaguangnova libvirt driver uses an integer version to do a version compare17:10
*** Tross has quit IRC17:11
sabari_would that affect VMware drivers ?17:11
*** Fdot has quit IRC17:11
yaguangI mean the column is set to integer for that use case17:11
*** Tross has joined #openstack-meeting17:12
sabari_I haven't yet seen code in the VMware driver that has such a use case; maybe moving to String wouldn't harm17:12
*** _ozstacker_ has quit IRC17:12
hartsocksInteresting… version numbers is one of those things that most systems treat as strings so I'm surprised this is an issue.17:12
sabari_hmmm17:12
*** eglynn has joined #openstack-meeting17:13
*** leizhang has joined #openstack-meeting17:13
kirankvwell if i have both libvirt and vmware then changing it to string would break libvirt, so id prefer not doing a db change17:13
yaguangagree with kirankv17:13
sabari_oh yeah, I almost forgot that point17:13
*** ozstacker has joined #openstack-meeting17:14
hartsocksokay.17:14
hartsocksI see your point.17:14
sabari_understood17:14
hartsocks#action hartsocks to follow up on https://bugs.launchpad.net/nova/+bug/119513917:14
uvirtbotLaunchpad bug 1195139 in nova "vmware Hyper  doesn't report hypervisor version correctly to database" [Undecided,In progress]17:14
hartsocksI'll figure out what the right triage actions are after the meeting.17:14
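One way to address the bug above without changing the column type is to pack the dotted VMware version into an integer, matching the major*10^6 + minor*10^3 + patch convention used for libvirt-style integer comparisons. The helper below is only a sketch of that packing scheme, not nova's actual code:

    def version_str_to_int(version):
        # Pack '5.0.0' into 5000000; non-digit characters in each
        # component are dropped first, so strings like '5.0.0u1' still parse.
        parts = (version.split('.') + ['0', '0'])[:3]
        nums = [int(''.join(ch for ch in p if ch.isdigit()) or '0')
                for p in parts]
        return nums[0] * 1000000 + nums[1] * 1000 + nums[2]

    assert version_str_to_int('5.0.0') == 5000000
    assert version_str_to_int('5.1') == 5001000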
hartsocksAny other bugs to bring up?17:15
*** matel has quit IRC17:15
sabari_https://bugs.launchpad.net/nova/+bug/119051517:15
uvirtbotLaunchpad bug 1190515 in nova "disconnected ESXi Hosts cause VMWare driver failure" [High,In progress]17:15
sabari_There were couple of bugs related to the fix I am working on this issue.17:15
*** Tross has quit IRC17:16
sabari_It would be better to raise the priority of this bug17:16
*** Tross has joined #openstack-meeting17:16
hartsocksIt's already rated as "high" ...17:16
hartsocksyou think this is critical?17:16
*** michchap has quit IRC17:16
kirankvQuestion: does a patchset for higher priority bug get reviewed faster?17:17
hartsocksno. not really.17:17
sabari_Sorry, I thought I saw a different priority on the bug.17:17
kirankvoh!17:17
hartsocksIt's just a priority helper for us to decide.17:17
kirankvok17:18
hartsocksI'm on the bug triage team though and this might help explain the priorities...17:18
hartsocks#link https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.2917:18
hartsocksCritical means prevents a key feature from working properly17:18
hartsocksIf there's a work-around then it can't be "Critical"17:19
hartsocksJust FYI.17:19
*** psedlak_ has joined #openstack-meeting17:19
*** leizhang has quit IRC17:19
*** psedlak_ has quit IRC17:19
*** mkollaro has joined #openstack-meeting17:19
*** dcramer_ has quit IRC17:19
hartsocksAny other bugs we need to discuss?17:20
*** zhiyan has left #openstack-meeting17:20
hartsocksOkay moving on to blueprints in ...17:20
hartsocks3...17:21
hartsocks2...17:21
hartsocks#topic blueprints17:21
*** openstack changes topic to "blueprints (Meeting topic: VMwareAPI)"17:21
hartsocks#link https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver17:21
hartsocksThis is the FC support blueprint.17:21
*** eglynn has quit IRC17:22
hartsocksI think this is not even set for Havana right now.17:22
hartsocks@kirankv I think this is one of yours17:22
kirankvyes,17:22
kirankvworking on this, refactoring the iSCSI code so that it can be used for FC as well17:23
*** cp16net is now known as cp16net|away17:23
kirankvthis week a WIP patch should get posted17:23
hartsocksare you trying for Havana-3 for this?17:23
*** Tross1 has joined #openstack-meeting17:23
*** markwash has joined #openstack-meeting17:23
hartsocks(I don't see a series goal)17:23
kirankvwill initially post it as for Havana217:23
kirankvwill set it when I post the patch17:24
*** yuanz has quit IRC17:24
*** tedross has joined #openstack-meeting17:24
hartsocksokay, you can try…17:24
kirankvok17:24
*** yuanz has joined #openstack-meeting17:24
*** thingee has left #openstack-meeting17:24
hartsocks(lot of reviews for the core to get through so H2 will be hard)17:25
hartsocksLet's see what else (before I get the big ones)17:25
*** Tross has quit IRC17:25
hartsocks#link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy17:25
*** gyee has quit IRC17:25
hartsocksMy blueprint turned out to be pretty simple.17:25
hartsocksI've posted code but it is "work in progress"17:26
*** gyee has joined #openstack-meeting17:26
hartsocksI've got a bug in my gerrit account… my "work in progress" won't show up17:26
hartsocksSo just FYI17:26
kirankvisn't the clone strategy something to be decided at instance creation time rather than by the image itself?17:26
hartsocksThat's why there's a patch this early.17:27
*** Tross has joined #openstack-meeting17:27
*** Tross1 has quit IRC17:27
*** dhellmann is now known as dhellmann-away17:27
hartsocksThis is one strategy that was easy. Decide that this "type" of machine performs best as a linked-clone or as a full-clone.17:27
hartsocksConsidering that you don't turn a web-server image into a database server image using "nova boot" this seems reasonable to me.17:28
kirankvok, let me see if there are options that can be specified for nova boot17:28
hartsocksThis feeds @yaguang 's work...17:29
*** radez is now known as radez_g0n317:29
yaguangthere is metadata that can be used to describe the instance17:29
yaguangand we  can  get it from db17:29
*** markwash has quit IRC17:30
*** Tross1 has joined #openstack-meeting17:30
hartsocksand how do we put it into the db?17:30
sabari_hmm, the only other option would be scheduler hints17:30
yaguangyou just need to specify it as nova boot options17:30
kirankv#link http://docs.openstack.org/cli/quick-start/content/nova-cli-reference.html17:31
*** virajh has joined #openstack-meeting17:31
kirankvsection on Insert metadata during launch17:31
kirankvgives the details17:31
*** leizhang has joined #openstack-meeting17:31
hartsocksokay. just like the glance CLI17:32
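As a concrete illustration of the "insert metadata during launch" section kirankv points to, done through the python client rather than the CLI; the credentials, IDs, and the metadata key below are placeholders, and the v1_1 client module is the era-appropriate one:

    from novaclient.v1_1 import client

    nova = client.Client('demo', 'secret', 'demo-tenant',
                         'http://keystone.example.com:5000/v2.0')
    server = nova.servers.create(
        name='web-1',
        image='IMAGE_UUID',
        flavor='1',
        # Per-instance metadata, equivalent to `nova boot --meta key=value`.
        meta={'linked_clone': 'false'})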
*** jlucci has quit IRC17:32
*** Tross has quit IRC17:32
kirankvyes17:32
*** jlucci has joined #openstack-meeting17:32
hartsocksSo the real questions are:17:32
hartsocks1. what should the default be "full" or "linked"?17:32
hartsocks2. where should you be able to override the default?17:32
*** bgorski has quit IRC17:33
yaguangbut I think we  can  set  it as  a  image  property  full clone or linked17:33
yaguangs/a/an/17:33
hartsocksI like that. (that's in the patch up right now)17:33
hartsocksI *also* like putting it at nova boot17:33
hartsocksin the instance.17:33
hartsocksI think I can do both. Letting nova boot's meta data override what is in the image.17:34
hartsocksIs that too much freedom?17:34
hartsocksIs that confusing?17:34
*** haomaiwa_ has quit IRC17:34
kirankvwe might have to see how it is being done and used in kvm today17:35
tjonesbut the customer would have to load 2 images into glance when they can just load 1 and then tell the instance how to use it17:35
kirankvthat might give us insights on how admins use it17:35
tjonesthat seems simpler from a user point of view to me17:35
yaguangin  kvm17:35
hartsocks@tjones that's why I like the idea of letting the image control a "default" but letting nova boot control an override.17:35
yaguangwe can configure nova.conf to set what kind of image type to use17:35
kirankvkvm = libvirt17:35
yaguangqcow2 or  raw  ,17:35
sabari_For a while I was looking at ways to specify options at boot time. At least, I thought that the metadata service can only be used after the instance is created. I am not sure if that information will be passed along to the driver. Just curious, if someone knows if that can be done.17:36
yaguanga raw image is just like a full clone17:36
kirankvok17:36
tjones@hartsocks - i agree with that and i don't think it's too much freedom17:36
*** zaro has quit IRC17:36
*** dcramer_ has joined #openstack-meeting17:36
hartsocksI'll post a patch in time for next Wednesday that shows both image and "nova boot" meta data switches. Then solicit your feedback again. I don't think we should worry if this is different from what KVM does.17:37
hartsocks(at least on this one small point)17:37
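The precedence hartsocks describes (image property supplies the default, nova boot metadata may override it) can be stated as a small helper; the key name 'linked_clone' is a placeholder, not a settled property name:

    def pick_clone_type(image_props, boot_meta, default_linked=True):
        # Boot metadata wins over the image property, which wins over
        # the driver default.
        raw = boot_meta.get('linked_clone',
                            image_props.get('linked_clone', default_linked))
        return 'linked' if str(raw).lower() in ('1', 'true', 'linked') else 'full'

    # Image says linked, but the boot request forces a full clone:
    assert pick_clone_type({'linked_clone': 'true'},
                           {'linked_clone': 'false'}) == 'full'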
hartsocksBut I will look at raw vs qcow2 to help me figure out how best to write this.17:38
hartsocksNext topic?17:38
hartsocks#link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage17:38
hartsocksThis is yaguang's BP17:38
yaguangI am working on it17:38
hartsocksI noticed a setting "use_linked_clone" that defaults to true.17:39
hartsocksThis is already in the driver code on master.17:39
yaguangyes  , I also see it17:39
hartsocksSo editing localrc for your devstack should let you work.17:39
hartsocksHow will you deal with "linked-clone" disks? I noticed the instance has a link back to the image that made it.17:40
hartsocksThat's why I was working to put "linked clone" in the image.17:40
yaguangplease  take a look at  this  https://etherpad.openstack.org/vmware-disk-usage-improvement17:41
*** dcramer_ has quit IRC17:41
hartsocksokay… that's interesting...17:42
*** zaro has joined #openstack-meeting17:42
hartsocksyou're still planning on using linked clones, just off of new resized copies?17:42
kirankvyaguang: does this mean that when we use different flavors the image is copied over again just to be resized?17:42
*** michchap has joined #openstack-meeting17:43
hartsocks@yaguang hello?17:44
yaguang@kirankv, yes if we use linked clone ,17:44
*** leizhang has quit IRC17:44
hartsocks@yaguang could you resize a full-clone VMDK in place?17:45
yaguang@hartsocks, yes, I want to get confirmation from you guys17:45
kirankvok since its a local copy it would be much faster than transferring from glance17:45
yaguangif this is ok17:45
*** pschaef has quit IRC17:46
yaguangthis is also what nova with kvm does to cache images and speed up instance build time17:46
*** leizhang has joined #openstack-meeting17:47
hartsocksThe only one that confuses me is the "linked clone" … the first part steps 1 to 417:47
yaguang@hartsocks, a full clone vmdk is first copied to the instance dir, and then we do a resize17:47
hartsocks@yaguang That's not the bit that confuses me.17:48
hartsocks@yaguang the "full clone" steps 1 to 3 make sense and that's what I thought the blueprint would do...17:49
yaguanglet me explain, the idea is when using linked clone,  we  cache a  base image first,17:49
*** bpb_ has quit IRC17:49
yaguangbecause there may be different  flavors of instance created in the same  VMware Hypervisor17:50
tjonescan't the next copy from glance be skipped as the original image is still there?  Just copy and resize it?17:50
yaguangand they have different root disk sizes17:50
*** sushils has joined #openstack-meeting17:51
yaguangdownload from glance is just  once17:51
tjonesyes - great17:51
hartsocksOkay I get it…17:51
tjonesdownload once and copy/resize after that17:51
yaguangwhen  a flavor of instance is to be created, we first  check  in local cache dir17:51
hartsocksimage-a gets image-a-small image-a-large in the image cache17:52
hartsocks?17:52
yaguangif the  resized  image disk is there17:52
*** radix has left #openstack-meeting17:52
yaguangif it isn't  we will do a  copy and resize17:52
tjonesso the steps should be - 1 - check if the image is in local cache and if not download17:52
yaguangyes17:53
tjonesok to make the edit in etherpad to make sure it's clear and we are all on the same page?17:53
hartsocksso if a request for image-abcdef-small is there we use it and do a linked clone from that point.17:53
*** dcramer_ has joined #openstack-meeting17:53
yaguangno17:53
hartsocksShould we put some notes on the Blueprint to explain this?17:54
tjonesthat's what i was getting at :-D17:54
yaguangwe do a full copy17:54
yaguangand resize  it to  image--abdfsad_1017:54
yaguangthis new image disk is used as the linked clone base for the instance17:55
*** zehicle_at_dell has quit IRC17:55
*** michchap has quit IRC17:55
hartsocksLet's be sure to write this down in the blueprint.17:56
kirankveach flavor will have its own base image, but the base image is not copied over from glance every time; instead the local cached image is copied and resized17:56
yaguangso different flavors of instances on the same server have different linked clone base images17:56
*** radix has joined #openstack-meeting17:56
yaguang@kirankv, exactly17:56
hartsocksJust to be clear, each flavor + image … since you can have different images with different flavors… right?17:57
kirankvyaguang: with custom flavours having the same base size and different ephemeral sizes, do they use the same procedure?17:57
yaguangyes17:57
hartsocksFor example… image Debian and image Win32 each with flavors small, medium, large means 2 x 3 images.17:57
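Restating the caching scheme from the etherpad as code may make the flow clearer: glance is hit at most once per image, and each new root-disk size triggers a local copy plus resize of that cached base. Everything here is a sketch; the file naming and the two callables are placeholders for whatever the driver would actually use:

    import os

    def resized_base_path(cache_dir, image_id, root_gb):
        # One pre-resized linked-clone base per (image, root size) pair,
        # e.g. <image_id>_10.vmdk for a 10 GB root disk.
        return os.path.join(cache_dir, '%s_%d.vmdk' % (image_id, root_gb))

    def ensure_resized_base(cache_dir, image_id, root_gb,
                            download_from_glance, copy_and_resize):
        base = os.path.join(cache_dir, '%s.vmdk' % image_id)
        if not os.path.exists(base):
            download_from_glance(image_id, base)      # happens only once
        resized = resized_base_path(cache_dir, image_id, root_gb)
        if not os.path.exists(resized):
            copy_and_resize(base, resized, root_gb)   # local copy + resize
        return resized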
*** afazekas_ has quit IRC17:58
*** leizhang has quit IRC17:58
yaguangI think the ephemeral disk handling is independent from the root disk17:58
kirankvare ephemeral disks linked or not?17:58
yaguangdoes it make sense to use linked for ephemeral disks?17:59
hartsocks#action yaguang to document blueprint https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage based on https://etherpad.openstack.org/vmware-disk-usage-improvement17:59
hartsocksWe're out of official meeting time.18:00
*** mtreinish has quit IRC18:00
hartsocksThe room at #openstack-vmware is open for discussion.18:00
kirankvyaguang: i will have to check more on ephermal disk18:00
*** thunquest has joined #openstack-meeting18:01
hartsocksokay. See you all next week.18:01
hartsocksI'll open next week on this topic18:02
hartsocks.18:02
hartsocks#endmeeting18:02
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"18:02
openstackMeeting ended Wed Jul  3 18:02:09 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)18:02
openstackMinutes:        http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-07-03-17.02.html18:02
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-07-03-17.02.txt18:02
openstackLog:            http://eavesdrop.openstack.org/meetings/vmwareapi/2013/vmwareapi.2013-07-03-17.02.log.html18:02
kirankvthis meeting went through fast .....18:03
hartsocksyeah.18:03
*** gary_th has quit IRC18:04
hartsocksLots to discuss.18:04
*** sandywalsh has quit IRC18:05
*** radez_g0n3 is now known as radez18:06
*** cbananth has quit IRC18:06
*** kirankv has quit IRC18:07
*** marun has quit IRC18:07
*** tjones has left #openstack-meeting18:08
*** marun has joined #openstack-meeting18:08
*** shengjiemin has quit IRC18:09
*** zehicle_at_dell has joined #openstack-meeting18:09
*** cp16net|away is now known as cp16net18:09
*** boris-42 has joined #openstack-meeting18:10
*** IlyaE has joined #openstack-meeting18:14
*** henrynash has quit IRC18:14
*** leizhang has joined #openstack-meeting18:16
*** marun has quit IRC18:16
*** marun has joined #openstack-meeting18:17
*** sandywalsh has joined #openstack-meeting18:18
*** henrynash has joined #openstack-meeting18:19
*** leizhang has quit IRC18:21
*** michchap has joined #openstack-meeting18:22
*** marun has quit IRC18:23
*** marun has joined #openstack-meeting18:25
*** marun has quit IRC18:28
*** yaguang has quit IRC18:29
*** salv-orlando has quit IRC18:29
*** marun has joined #openstack-meeting18:30
*** dripton_ is now known as dripton18:30
*** IlyaE has quit IRC18:34
*** spzala has quit IRC18:34
*** michchap has quit IRC18:34
*** hartsocks has left #openstack-meeting18:35
*** sandywalsh has quit IRC18:35
*** kebray_ has joined #openstack-meeting18:35
*** kebray has quit IRC18:36
*** kebray_ is now known as kebray18:36
*** Mandell has quit IRC18:37
*** henrynash has quit IRC18:37
*** danwent has quit IRC18:41
*** woodspa has joined #openstack-meeting18:42
*** Abdul has joined #openstack-meeting18:42
*** tanisdl has joined #openstack-meeting18:44
*** woodspa has quit IRC18:47
*** sandywalsh has joined #openstack-meeting18:48
*** tayyab has quit IRC18:49
*** tedross has quit IRC18:50
*** leizhang has joined #openstack-meeting18:50
*** jcoufal has quit IRC18:50
*** Mandell has joined #openstack-meeting18:50
*** radez is now known as radez_g0n318:51
*** boris-42 has quit IRC18:54
*** sdake has joined #openstack-meeting18:55
*** sdake has quit IRC18:55
*** sdake has joined #openstack-meeting18:55
*** russellb is now known as rustlebee18:56
*** IlyaE has joined #openstack-meeting18:59
*** sneezezhang has joined #openstack-meeting18:59
*** leizhang has quit IRC19:01
*** michchap has joined #openstack-meeting19:01
*** marun has quit IRC19:02
*** Eustace has quit IRC19:03
*** mkollaro has quit IRC19:05
*** sneezezhang has quit IRC19:06
*** leizhang has joined #openstack-meeting19:07
*** marun has joined #openstack-meeting19:10
*** sifusam has quit IRC19:11
*** marun has quit IRC19:11
*** pcm_ has quit IRC19:12
*** marun has joined #openstack-meeting19:13
*** michchap has quit IRC19:14
*** pcm_ has joined #openstack-meeting19:15
*** leizhang has quit IRC19:16
*** sifusam has joined #openstack-meeting19:18
*** danwent has joined #openstack-meeting19:18
*** leizhang has joined #openstack-meeting19:18
*** ssurana has joined #openstack-meeting19:18
*** garyk has quit IRC19:20
*** mdomsch has quit IRC19:21
*** marun has quit IRC19:24
*** ssurana has left #openstack-meeting19:25
*** marun has joined #openstack-meeting19:25
*** ssurana has joined #openstack-meeting19:26
*** vipul is now known as vipul-away19:28
*** IlyaE has quit IRC19:30
*** marun has quit IRC19:31
*** marun has joined #openstack-meeting19:33
*** leizhang has quit IRC19:35
*** haomaiwang has joined #openstack-meeting19:35
*** jasond has joined #openstack-meeting19:37
*** hemna has quit IRC19:39
*** henrynash has joined #openstack-meeting19:39
*** Mandell has quit IRC19:40
*** Mandell has joined #openstack-meeting19:40
*** haomaiwang has quit IRC19:40
*** michchap has joined #openstack-meeting19:40
*** novas0x2a|laptop has joined #openstack-meeting19:42
*** dkliban is now known as dkliban_afk19:43
*** andrew_plunk has joined #openstack-meeting19:45
*** barefoot has joined #openstack-meeting19:46
*** adrian_otto has joined #openstack-meeting19:51
*** therve has joined #openstack-meeting19:51
*** rs-ryan has joined #openstack-meeting19:52
*** michchap has quit IRC19:53
*** Mandell has quit IRC19:53
*** marun has quit IRC19:54
*** marun has joined #openstack-meeting19:55
*** salv-orlando has joined #openstack-meeting19:57
*** adrian_otto1 has joined #openstack-meeting19:57
*** adrian_otto has quit IRC19:57
*** rs-ryan is now known as RightScale-RyanO19:57
*** nati_ueno has quit IRC19:59
*** m4dcoder has joined #openstack-meeting19:59
*** vipul-away is now known as vipul19:59
shardy#startmeeting heat20:00
openstackMeeting started Wed Jul  3 20:00:17 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: heat)"20:00
openstackThe meeting name has been set to 'heat'20:00
shardy#topic rollcall20:00
*** openstack changes topic to "rollcall (Meeting topic: heat)"20:00
sdakeo/20:00
*** tspatzier has joined #openstack-meeting20:00
m4dcodero/20:00
therveHi!20:00
andrew_plunkhello20:00
tspatzierhi20:01
stevebakerhi20:01
jasondhi20:01
jpeeler!20:01
shardyasalkeld, SpamapS?20:02
asalkeldo/20:02
kebrayo/20:02
*** rpothier has joined #openstack-meeting20:02
sdakeis zane on pto - haven't seen him around20:02
shardyok, hi all, lets get started20:02
therveClint mentioned he'd be late20:02
shardysdake: yeah he's on pto IIRC20:03
shardytherve: ok, cool, thanks20:03
shardy#topic review last weeks actions20:03
*** openstack changes topic to "review last weeks actions (Meeting topic: heat)"20:03
shardyActually I don't think there were any20:03
*** dhellmann-away is now known as dhellmann20:03
shardyanyone have anything to raise from last week's meeting?20:03
shardy#link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-26-20.01.html20:03
adrian_otto1hi20:04
shardy#topic h2 bug/blueprint status20:04
*** openstack changes topic to "h2 bug/blueprint status (Meeting topic: heat)"20:04
*** adrian_otto1 is now known as adrian_otto20:04
* radix is belatedly here20:05
shardySo only 2 weeks to go, and probably less than 1 week to go allowing for some testing and branch to milestone-proposed20:05
shardy#link https://launchpad.net/heat/+milestone/havana-220:05
shardySo please if you have BPs or bugs outstanding, make sure the status is updated, and bump them if they won't land in time20:06
sdakewife has planned a last minute vacation - so may not make https://blueprints.launchpad.net/heat/+spec/native-nova-instance since I may be out of town ;)20:06
shardystevebaker will handle the h2 release as I'm on PTO, reminder that he'll be the one chasing from next week ;)20:06
shardysdake: Ok, cool, if you can bump to h3 if it definitely won't make it that would be good20:07
shardyh3 starting to look busy ;)20:07
sdakestill seeing if we can go, I'll let you know20:07
shardyanyone have anything to raise re h2?20:07
shardybug queue has been looking better, so the relaxation on 2*+2 seems to have helped20:08
*** topol has quit IRC20:08
shardys/bug/review20:08
therveYep looking great even!20:08
asalkeldyea, that's been good20:08
sdakewhich relaxation?20:08
therveOnce you'll figure out what to do with the stable branch that'd be even better20:08
*** bgorski has joined #openstack-meeting20:09
shardysdake: last week we agreed that core reviewers can use their discretion and approve, e.g if there's been a trivial revision after a load of rework, ie rebase or whatever20:09
bgorskio/ sorry i'm late20:09
shardy#topic stable branch process/release20:10
*** openstack changes topic to "stable branch process/release (Meeting topic: heat)"20:10
shardySo this is a request/reminder re the stable/grizzly branch20:10
shardyI've been trying to get on top of which bugs need to be considered for backport, which is something we can all do when fixing stuff in master20:11
*** rustlebee is now known as russellb20:11
shardyyou tag the bug as grizzly-backport-potential, and target to series grizzly (which then gives you also affects grizzly)20:11
shardythen you can test a backport and propose via this process:20:12
shardyhttps://wiki.openstack.org/wiki/StableBranch#Proposing_Fixes20:12
*** Mandell has joined #openstack-meeting20:12
shardyThere will be another stable/grizzly release around the same time as h2 AIUI so something to consider over the next couple of weeks20:12
asalkeldok20:12
*** SergeyLukjanov has joined #openstack-meeting20:13
shardycool20:13
shardythat's all I have20:13
shardy#topic Open discussion20:13
*** openstack changes topic to "Open discussion (Meeting topic: heat)"20:13
stevebakersweet20:13
shardyanyone have anything, or shall we have a super-short meeting? :)20:13
kebrayIn case you haven't seen it, there's an awesome topology animation/visualization for stack creation that animates the provisioning of resources and information about them.20:13
kebrayhttps://wiki.openstack.org/w/images/e/ee/Topology_screenshot.jpg20:14
kebrayThis will be merged proposed to Horizon by Tim Schnell.20:14
therveAwesome20:14
adrian_ottoI saw the demo, it was pretty awesome.20:14
stevebakernice, looking forward to seeing it20:14
radixhuh20:15
radixthat is cool20:15
stevebakeris Tim here?20:15
sdakecool - anyone have a screencast of the demo?20:15
adrian_ottoIt's bouncy just like Curvature20:15
sdakemy jpg doesn't seem to be animating20:15
kebrayI've asked Tim to create a screen cast… or, I'll create one if I find time.20:15
adrian_ottokebray: we should definitely do that20:15
asalkeldnice20:15
radixadrian_otto: I can't wait to have all of the Curvature-like stuff implemented with Heat20:15
kebraycan do. will do.20:15
sdakeI would really like to use it  for an upcoming talk so if we could sync on getting the code in an external repo I'd appreciate that :)20:15
kebrayalready external.. Tim isn't around… but, can get you the link.20:16
sdakethanks kebray20:16
radixshardy: should I explicitly target https://blueprints.launchpad.net/heat/+spec/instance-group-nested-stack for h3?20:16
stevebakerif you think it is feasible20:17
asalkeldanyone know where user related docs go? (thinking of environment/provider stuff)20:17
radixwell, the bug it's associated with is20:17
sdakeasalkeld join openstack-docs mailing list and ask there?20:18
asalkeldyeah20:18
stevebakerasalkeld: we need a template writers guide, when that exists it should go there20:18
stevebakerthe guide should live in our tree20:18
sdakeprobbably wouldn't hurt everyone to join openstack-docs so we can sort out how to get docs into our program20:18
shardyradix: done, and you want to take it?20:18
radixyeppers20:18
radixI assigned it tomyself20:19
radixthanks20:19
stevebakeri have a vague intention to focus on docs later in H20:19
shardyradix: I assigned the BP to you too20:19
*** michchap has joined #openstack-meeting20:19
radixok cool :)20:20
sdakeone option is to take a couple weeks out of dev and focus on docs as a team - so we can learn from one another20:20
sdakejust a thought :)20:20
*** hemna has joined #openstack-meeting20:20
stevebakeryeah, we could give the doc sprint another crack, but with a bit more structure this time20:20
sdakethis is something shardy could facilitate with anne20:20
sdakestevebaker agree with more structure20:20
shardyYeah, sounds like a good plan20:20
shardy#action shardy to organize docs sprint during h320:21
asalkeldI really don't mind doing docs, it's just the cruft we have to install20:21
*** hemna has quit IRC20:21
shardyasalkeld: Yeah, last time I gave up after installing a bazillion packages20:21
stevebakerthey won't be authored in xml20:21
sdakestevebaker I think we learned from the last doc sprint that it was a) too short b) not well organized20:21
*** iccha has quit IRC20:21
SpamapSo/20:22
*** timductive has joined #openstack-meeting20:22
*** iccha has joined #openstack-meeting20:22
shardyProbably need to generate a list of all the docs we need and assign them to people20:22
shardymaybe we raise bugs for the missing stuff?20:22
sdakeshardy involve anne in the process if possible ;)20:22
shardysdake: Yeah I will speak to her20:22
*** iccha has quit IRC20:22
therveSomewhat related to doc, https://wiki.openstack.org/wiki/Heat/AutoScaling got some great content from asalkeld and radix, feedback is welcome20:23
kebrayAnne Gentle sits two rows over from me.. just fyi.20:23
radixyeah, that :)20:23
radixit'd be good to see people reviewing that page20:23
*** hemna has joined #openstack-meeting20:24
kebraywhen she's in the office that is.20:24
*** iccha has joined #openstack-meeting20:24
asalkeldsay is the rackspace db instance api like the trove api?20:24
* SpamapS wishes it were20:25
stevebakeror the cfn dbinstance?20:25
asalkeldjust wondering if we can crank out a trove resource20:25
asalkeld(with little effort)20:25
kebrayRackspace Cloud Databases is Trove, which was renamed from Reddwarf.20:26
shardyasalkeld: sounds like something which would be good to look into20:26
radixthere was a bit of conversation about trove + heat integration on the ML20:26
shardyif it's not going to take loads of effort20:26
SpamapSkebray: then why are we calling it a rackspace resource, and not OS::Trove::xx ?20:26
asalkeldso kebray we could problaby reuse a lot of code there/20:26
radixthough I guess that's a different side of what you're talking about20:26
stevebakerwriting a trove resource is different from using heat for trove's orchestration. either or both could be done20:26
SpamapSyeah don't conflate those two things20:26
radixmy bad :)20:27
SpamapSTrove may use Heat has nothing to do with Trove resources. :)20:27
adrian_ottothere are two perspectives of integration: 1) Heat uses Trove, and 2) Trove uses Heat20:27
adrian_otto#1 is going to be pretty easy20:27
SpamapSWhat about a Heat using Trove then using Heat to drive Trove to Heat...20:27
radixYES20:27
adrian_otto#2 may be more involved20:27
asalkeldI know, I just want to do the "right thing from our end"20:27
sdakecircles20:27
kebraySpamapS:  Great question.. because, so far we developed it against the Rackspace public service.  The Trove and RS DB Service run the same code, but we didn't run tests against stock trove and ensure compatibility.. but, your request is reasonable, and one that I think is worth having my team investigate.20:28
SpamapSkebray: "stock trove" != "public rackspace trove" in what ways?20:28
asalkeldthere is auth differences?20:28
adrian_ottoboth are key based, so not materially different20:29
asalkeldso you could register 2 resource types for one plugin20:29
asalkeldanyways, if it is easy, it's worth doing20:29
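For a sense of what a native Trove resource could look like, a very rough sketch of a Heat resource plugin follows; the property names, the trove() client accessor, and the troveclient call are placeholders rather than a real API, and the schema uses the Havana-era dict style:

    from heat.engine import resource


    class TroveInstance(resource.Resource):
        properties_schema = {
            'name': {'Type': 'String', 'Required': True},
            'flavor': {'Type': 'String', 'Required': True},
            'size': {'Type': 'Number', 'Default': 1},
        }

        def handle_create(self):
            # self.trove() would be a client accessor added alongside the
            # existing helpers such as self.nova() -- hypothetical here,
            # as is the exact client call.
            db = self.trove().instances.create(
                self.properties['name'],
                self.properties['flavor'],
                volume={'size': self.properties['size']})
            self.resource_id_set(db.id)


    def resource_mapping():
        # One plugin module can register several type names (for example
        # an OS:: name plus a provider-specific alias), as suggested above.
        return {'OS::Trove::Instance': TroveInstance}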
kebraySpamapS:  slight feature enabled differences…  and we run a different in-guest agent (C++, optimized to be happy on a 512mb instance).  HP runs the Python agent, but ya'll don't offer 512mb instances, so you have more memory to burn on an agent.20:30
*** Drone[01] has joined #openstack-meeting20:30
*** Drone[01] has quit IRC20:31
sdakec++ and optimized oxymoron20:31
kebrayRS DB uses OpenVZ as the container technology.  I think Trove uses KVM out of the box.20:31
* sdake ducks20:31
*** michchap has quit IRC20:32
hub_capheyo20:32
kebrayYeah, and as asalkeld said, auth differences.20:32
hub_capsomeone talkin Trove up in heah'20:32
hub_capfeel free to direct Q's to me20:32
shardyhey hub_cap, yeah we're talking about potentially implementing a trove Heat resource type20:32
hub_capim talkin heat in #openstack-meeting-alt fwiw ;)20:32
hub_capim talkin implementing clusters in heat20:32
kebrayhub_cap:  question came up on why the Heat Resource implementation we did was specific for RS DB instead of generic just for Trove.20:33
shardyand also mentioning that you guys may want to use heat for orchestration20:33
*** adalbas has quit IRC20:33
*** amytron has joined #openstack-meeting20:34
*** eharney has quit IRC20:34
hub_cap+1 to generic for trove kebray20:34
hub_capand yes ill be working on heat (again, got sidetracked) in about a wk20:34
asalkeldI wonder what trove and lbaas use as agents20:35
asalkeld(any commonality there)20:36
sdakeat some point we can inject them so it won't matter ;-)20:36
shardytrove has its own agent IIRC?20:36
SpamapSsounds to me like the auth differences are the only one Heat would really care about.20:36
hub_capasalkeld: we have a python agent, and weve been wondering how we can easily pull it out and get it installed by heat20:36
hub_capshardy: correct20:36
*** SlickNik has joined #openstack-meeting20:37
SpamapSAnyway, that can be done as a refactor when somebody steps up to add native Trove support.20:37
asalkeldyeah, well you can install from pypi/package20:37
hub_capya thats probably what we are going to do asalkeld20:37
sdakeinstalling from pypi is not suitable - pypi can be down or not routing for some reason20:37
*** IlyaE has joined #openstack-meeting20:37
hub_capwe will do what the clients do20:37
sdakeor too slow20:37
*** zehicle_at_dell has quit IRC20:37
*** thunquest has quit IRC20:37
*** gyee has quit IRC20:37
*** lloydde has quit IRC20:37
*** BStokes has quit IRC20:37
*** cliu_ has quit IRC20:37
*** blamar has quit IRC20:37
*** burt has quit IRC20:37
*** martines has quit IRC20:37
hub_capprobably tarballs.openstack20:37
radixI've actually had a ton of 50* errors from pypi in the last week :P20:37
sdakebest bet is to inject or prebuild20:38
*** obondarev has quit IRC20:38
hub_capeffectively we are going to think of it like we think of clients, a dependency we need in another project, we believe20:38
hub_capwe have done almost no talking on it tho fwiw20:38
stevebakersdake: prebuilding with diskimage-builder will at some point be trivially easy20:38
hub_capbut ill take what yall are saying back to the team20:38
sdakestevebaker agree20:38
sdakeavoid pypi downloading - we are moving away from that for reliability reasons20:39
hub_cap#agreed20:39
SpamapSsdake: prebuild is my preference. But yeah, with custom images being hard on certain clouds <cough>myemployer</cough> ... inject probably needs to be an option.20:39
*** dguitarbite has quit IRC20:39
sdakeyup, that is why i am working on inject blueprint next ;)20:39
asalkeldwe need pypi cache as-a-service :/20:39
hub_capid also like to talk to yall eventually about the dependency between us, yall creating a trove resource, while we are creating a trove template as well to use to install.... but thats for another day.20:40
*** dguitarbite has joined #openstack-meeting20:40
hub_capmaybe we can consolidate work so we can use the same template or something20:40
sdakeyum and deb repos already need to be cached - groan20:40
shardyasalkeld: Isn't that just called a mirror or a squid proxy?20:40
*** iccha has quit IRC20:40
sdakesquid proxy no good, need a full on mirror20:40
stevebakerSpamapS: got a feeling for how big an os-* inject payload would be?20:40
*** obondarev has joined #openstack-meeting20:40
SpamapShub_cap: I don't think the two are actually related.. but it would be good to keep communication flowing in both directions.20:40
radixyeah... this seems like a pretty general / far-reaching problem20:40
kebray#action kebray to put on RS backlog testing of RS Database Resource against generic Trove, and consider rename/refactoring of provider to use Trove for naming.20:41
hub_capSpamapS: kk20:41
sdakekebray I think only the chair can do an action20:41
kebrayhehe.. ok.20:41
hub_capid like to see the difference between them too. im always in #heat so lets chat about it, just ping me when u need me cuz i dont actively monitor it20:41
SpamapSstevebaker: hmm.. looking now20:41
*** iccha has joined #openstack-meeting20:41
kebraydepends on how the meetbot is configured I guess.20:41
sdakeshardy I think kebray wants an action :)20:41
shardysdake: IMO yum/deb repo caching or mirroring is not a heat problem to solve, it's a deployment choice/detail20:41
sdakeshardy agree20:42
*** zehicle_at_dell has joined #openstack-meeting20:42
*** thunquest has joined #openstack-meeting20:42
*** gyee has joined #openstack-meeting20:42
*** lloydde has joined #openstack-meeting20:42
*** BStokes has joined #openstack-meeting20:42
*** cliu_ has joined #openstack-meeting20:42
*** blamar has joined #openstack-meeting20:42
*** burt has joined #openstack-meeting20:42
*** martines has joined #openstack-meeting20:42
*** henrynash has quit IRC20:42
SpamapSstevebaker: 50K zipped20:42
sdakeshardy but really you do want a mirror, otherwise if your mirrors are inaccessible bad things happen20:42
SpamapSstevebaker: thats .zip20:42
shardykebray: what was your action again, sorry, missed the scrollback20:42
SpamapSstevebaker: and with everything a 'setup.py sdist' gets you20:42
sdakedoes anyone know for sure if openstack really has a 16k limit on the metadata?20:42
sdakerussellb ^20:42
SpamapSstevebaker: but os-* have not been avoiding dependencies20:43
asalkeldsdake, I am not convinced about that inserting20:43
russellbsdake: don't know off of the top of my head20:43
asalkeldwe used to do that20:43
kebrayshardy to add to my backlog testing RS Database Provider against Trove, and then consider refactoring that provider to be for Trove and not specific to RS Cloud.20:43
shardy #action kebray to put on RS backlog testing of RS Database Resource against generic Trove, and consider rename/refactoring of provider to use Trove for naming.20:43
asalkeldand moved away from it20:43
sdakeasalkeld we never did insert of the full scripts20:43
SpamapSstevebaker: so, for instance, os-collect-config deps on keystoneclient .. requests.. lxml.. eventlet (due to oslo stuff)20:44
sdakewe always used prebuilt jeos20:44
russellbtype is mediumtext in the db ... may be a limit in the code though20:44
stevebakerI think the 16k limit is arbitrary, but however big it is, we'll find a way to hit the limit20:44
SpamapSsdake: what about injection via object storage?20:44
SpamapSI think that is a pretty valid alternative to userdata20:44
asalkeld+120:44
sdakeSpamapS that may work, although it would require some serious hacking to loguserdata.py20:44
SpamapSyeah I'm not saying it will be easy. :)20:45
sdakeI'll add a blueprint after meeting20:45
SpamapSbut it may be more straight forward and more feasible long term to maintain.20:45
SpamapSuserdata is a column in a table20:45
SpamapSso it _should_ be limited.20:45
sdakeit's a valid idea - I think both solutions should be available as an operator's choice20:46
*** thunquest has quit IRC20:46
*** gyee has quit IRC20:46
SpamapSsdake: if by both you mean "custom image" or "inject to object storage" .. I agree.. but I think you want inject via userdata too. ;)20:46
sdakeall 3 then :)20:46
sdakeif its optional, there is no harm20:47
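(A hedged sketch of the "inject via object storage" alternative SpamapS raises above: if the bootstrap payload fits under the userdata cap it goes inline, otherwise it is uploaded to object storage and only a fetch URL is passed. The 16K figure was not confirmed in the meeting, so it is just a parameter here; container naming and auth handling are hand-waved.)

    from swiftclient import client as swift_client


    def place_payload(payload, auth_url, user, key,
                      container='heat-bootstrap', name='payload.zip',
                      limit=16 * 1024):
        """Return server kwargs: inline userdata, or a URL to fetch instead."""
        if len(payload) <= limit:
            return {'user_data': payload}

        # Too big for userdata: stash the payload in object storage and hand
        # the instance a URL to fetch instead.
        conn = swift_client.Connection(authurl=auth_url, user=user, key=key)
        conn.put_container(container)
        conn.put_object(container, name, contents=payload)
        storage_url, _token = conn.get_auth()
        return {'user_data': '%s/%s/%s' % (storage_url, container, name)}

(The instance-side half, downloading and unpacking the object, is the "serious hacking to loguserdata.py" sdake mentions above.)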
kebrayJust turning my attention back to the conversation, but what about SSH/SFTP to inject/bootstrap/provision server post boot.  Will that solve the use case?20:47
*** henrynash has joined #openstack-meeting20:47
sdakekebray we went down the ssh bootstrapping path; for a lot of reasons it won't work with how heat operates20:47
sdakemainly that we inject only one ssh key20:47
kebrayHow many do you need?20:47
sdakethe one we inject is the user key20:48
kebrayk.20:48
stevebakerone for the user, one for heat?20:48
andrew_plunksdake: after the original bootstrap process you can ssh another key over there20:48
sdakewe would need to inject another for heat20:48
kebrayI see.. you need ongoing capabilities.. not just one-time setup.20:48
sdakewe could inject it easily enough with cloudinit20:48
sdakebut key management is a pita20:48
sdakeand guests need to have ssh enabled by default - some people make images that don't do that for whatever reason20:49
SpamapS"some people make images" -- so they can put in-instance tools in those images.20:49
sdakewe tried ssh injection - it wasn't optimal at the time - prebuild was20:49
sdakeIIRC asalkeld had some serious heartburn over ssh binary injection20:50
*** RightScale-RyanO has quit IRC20:50
asalkeldcan't remember why20:50
sdakeme either20:50
andrew_plunkcan you not use ssh to install cloud-init after some initial bootstrapping?20:50
asalkeldit's been a while20:50
sdakeya that was 18 months ago20:50
shardyparamiko was unreliable for the integration test SSH iirc20:51
SpamapSandrew_plunk: isn't that what the rackspace stuff does already?20:51
kebrayandrew_plunk isn't that exactly what our RS Server Resource does?20:51
andrew_plunkcorrect SpamapS:20:51
stevebakertempest uses paramiko now20:51
andrew_plunk& kebray:20:51
shardystevebaker: maybe that's why it breaks all the time ;)20:51
stevebakerlol20:51
sdakewell lets give object download a go and go from there20:51
asalkeldwell options are good20:52
asalkeldbut too many20:52
sdakemore options = more questions in #heat :)20:52
asalkeldexactly20:52
kebraysdake: was asalkeld's heartburn over how the RS Server Resource is bootstrapping with SSH and installing cloud-init?   we wanted to avoid having to pop pre-configured special images that already had cloud-init pre-installed.20:52
asalkeldno kebray it was in a different life time20:53
kebrayAs in, we don't have default images with cloud-init in the RS Cloud.20:53
asalkeld(or project)20:53
kebrayoh, ok.20:53
sdakekebray it was very early stage of heat20:53
sdakeright when we got started20:53
sdakenow we have something working, we can make some incremental improvements to improve the situation :)20:54
sdakeback then we had nothing working20:54
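(For reference, the SSH-bootstrap approach kebray and andrew_plunk describe boils down to something like the following paramiko sketch: connect with the initially injected user key and install cloud-init so later boots can consume userdata/metadata. Host, user and package command are assumptions about the image, not the provider's actual code.)

    import paramiko


    def bootstrap_cloud_init(host, key_file, user='root'):
        """SSH in with the injected key and install cloud-init."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_file)
        try:
            # Assumes a Debian/Ubuntu image; a real provider would pick the
            # right package manager, then hand off to normal heat metadata.
            _stdin, stdout, _stderr = client.exec_command(
                'apt-get -y install cloud-init')
            if stdout.channel.recv_exit_status() != 0:
                raise RuntimeError('cloud-init install failed on %s' % host)
        finally:
            client.close()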
asalkeldso much for the short meeting20:54
m4dcoderSpamapS: regarding rolling update and as-update-policy, can you answer my question at http://lists.openstack.org/pipermail/openstack-dev/2013-June/010593.html?20:55
*** pcm_ has quit IRC20:56
shardy#action SpamapS to reply to m4dcoder's ML question20:57
*** eglynn has joined #openstack-meeting20:57
shardyanything else before we finish?20:57
shardyOk, well thanks all :)20:58
SpamapSm4dcoder: will answer, sorry for the delay!20:58
*** tspatzier has left #openstack-meeting20:58
shardy#endmeeting20:58
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"20:58
therveThanks!20:58
openstackMeeting ended Wed Jul  3 20:58:26 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-07-03-20.00.html20:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-07-03-20.00.txt20:58
openstackLog:            http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-07-03-20.00.log.html20:58
*** jasond has left #openstack-meeting20:58
m4dcoderSpamapS: thanks!20:58
*** gordc has joined #openstack-meeting20:58
*** therve has left #openstack-meeting20:58
*** michchap has joined #openstack-meeting20:59
*** andrew_plunk has left #openstack-meeting21:01
*** marun has quit IRC21:01
*** nealph has joined #openstack-meeting21:01
jd__#startmeeting ceilometer21:01
openstackMeeting started Wed Jul  3 21:01:26 2013 UTC.  The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.21:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:01
*** openstack changes topic to " (Meeting topic: ceilometer)"21:01
openstackThe meeting name has been set to 'ceilometer'21:01
sandywalsho/21:01
nealpho/21:01
eglynno/21:01
litongo/21:01
*** m4dcoder has quit IRC21:01
dhellmanno/21:01
gordco/21:01
jd__hi everybody21:01
*** timductive has left #openstack-meeting21:02
jd__#topic Review Havana-2 milestone21:03
*** openstack changes topic to "Review Havana-2 milestone (Meeting topic: ceilometer)"21:03
jd__#link https://launchpad.net/ceilometer/+milestone/havana-221:03
*** rpothier has quit IRC21:03
jd__I'm afraid we are falling behind21:03
jd__there's a lot of things started and needing reviews21:04
* dhellmann submitted a bunch of changes for review for one-meter-per-plugin21:04
eglynnare we looking at Friday July 12th as the effective cutoff dat for merges?21:04
jd__dhellmann: yeah I already +2'ed most/all of them21:04
*** adrian_otto has quit IRC21:04
* dhellmann has a few more to go21:04
gordcwill take a look at patches.21:04
*** dkehn has quit IRC21:04
sandywalshI should have my event collector stuff up shortly21:04
dhellmannand I will spend some time reviewing this week, too21:04
*** adrian_otto has joined #openstack-meeting21:04
nealphI've signed up for several...sandywalsh: especially interested in those.21:04
jd__eglynn: in theory I guess, though that's not fixed AFAIK21:04
eglynnjd__: k21:05
*** sdake_ has joined #openstack-meeting21:05
sandywalshI'll throw another review day in the mix this week21:05
*** sdake_ has quit IRC21:05
*** sdake_ has joined #openstack-meeting21:05
*** adrian_otto has quit IRC21:05
jd__thanks sandywalsh21:05
*** adrian_otto has joined #openstack-meeting21:05
gordcstill working on auditing events. i think we're going to post on mailing list to get feedback on that item.21:05
*** adrian_otto has left #openstack-meeting21:05
dhellmannjd__: pipeline-configuration-cleanup is small, but since it relies on this other bigger one it should probably get bumped to h321:06
jd__so in the end I don't have more to say than review! review! review! code! code! review! :)21:06
jd__dhellmann: ack, though it's low so it's not my main source of anxiety21:06
eglynnyep, noses to the grindstone on both reviews and completing BPs21:06
dhellmannjd__: ok21:06
nealphtomorrow is a US holiday...that doesn't help. :-)21:07
* jd__ . O O (damn, every day discussing with eglynn is a way to learn more words and expressions!)21:07
jd__ah indeed :)21:07
eglynnLOL21:07
jd__#topic Release python-ceilometerclient?21:07
*** openstack changes topic to "Release python-ceilometerclient? (Meeting topic: ceilometer)"21:07
jd__I think we don't need that right?21:07
eglynn1.0.1 went out last week21:08
eglynnso no, not needed21:08
jd__ok that should be good enough for now :)21:08
eglynnyep21:08
jd__#topic Multi dispatcher enablement blueprint21:08
*** openstack changes topic to "Multi dispatcher enablement blueprint (Meeting topic: ceilometer)"21:08
jd__#link https://blueprints.launchpad.net/ceilometer/+spec/multi-dispatcher-enablement21:08
jd__I think that's for litong21:08
litong@jd__, yes,21:08
litongas I indicated in the agenda,21:08
litongthe point of the blueprint is to allow one to configure multiple outlets for further data processing.21:09
*** litong has quit IRC21:10
ekarlsowhat's an outlet?21:10
*** litong has joined #openstack-meeting21:10
jd__a database21:10
eglynnwhoops we lost litong there I think ...21:10
jd__litong: I didn't read the whole code but I think I prefer https://review.openstack.org/#/c/34301/21:10
eglynnk, back again21:10
*** IlyaE has quit IRC21:11
dhellmannthe plugins I remember seeing either logged or wrote to our database using the storage driver21:11
jd__litong: though it might require a mechanism close to the one we have in the pipeline to be implemented correctly21:11
litonglost connection.21:11
*** joesavak has quit IRC21:11
litongthe second pipeline implementation has a problem since the data has to be converted into a Counter.21:11
litongwhich actually loses some information.21:12
litongit is also limiting.21:12
jd__you need to build a mix of both21:12
litongjd__, I prefer the first patch as well. but Doug wanted to use pipeline.21:12
*** Yada has quit IRC21:12
jd__a pipeline but not publishing, a pipeline calling record_metering_data21:12
*** michchap has quit IRC21:12
dhellmannwell, if that's what we're going to do, we should just go with the first patch21:12
dhellmannwe don't *need* a pipeline, I just thought it would be better than having a second thing that was so similar21:13
jd__dhellmann: we need one if we want to dispatch counter recording to different "outlets"21:13
jd__with the same collector21:13
dhellmannthese aren't counters though, right?21:13
jd__these are counters21:13
sandywalshwe're running into a similar problem with event data ... it's going to have to go in the pipeline at some point. The payload of the pipeline should be more generic21:13
litongright, not counters, almost like raw data.21:13
jd__sandywalsh: agreed21:13
dhellmannjd__: litong said that converting the data to counters lost information21:14
jd__well the collector records counters, nothing else21:14
litongyes, such as the message id. it may be generated along the way, but it may be important for some.21:14
dhellmannit records event data that includes the fields of a counter, plus some, doesn't it?21:14
dhellmannmessage id, signature, etc.21:14
dhellmannthose are not counter fields21:14
jd__litong: are you recording counters or events?21:14
jd__or notifications21:15
dhellmannevents21:15
dhellmannnot notifications, the rpc events we send as output from the pipeline now21:15
litongjd__, anything that is passed into record_metering_data, without any loss.21:15
* jd__ thinks the major problem in this project is terminology :)21:15
dhellmannheh21:15
jd__litong: ok so that's counters/meters21:15
eglynnamen to that! ;)21:15
litong@jd__, totally agree. the point is that anything that goes to the db should have a chance to go to another outlet. (wish I had a better term than outlet)21:16
dhellmannjd__: this is a new layer between the pipeline RPC output and the storage driver21:16
jd__so I don't quite understand why you'd lose information at first, but anyhow the pipeline could/should be data unaware21:16
dhellmannthe RPC payload includes values that are not in the counter: message signature is the most important21:16
jd__dhellmann: ah ok I get it!21:16
dhellmannbecause a Counter object, which is taken as input to the pipeline, is not the same thing as a message21:16
dhellmannwhich is the output of the pipeline21:17
dhellmannok21:17
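(The distinction dhellmann is drawing, in rough code: the pipeline's input is a Counter, but what the collector receives is a message dict carrying extra envelope fields such as a message id and an HMAC signature. Field names and the signing scheme below are simplified illustrations, not the exact ceilometer code.)

    import hashlib
    import hmac
    import uuid


    def counter_to_message(counter, secret):
        """Wrap a Counter in the signed envelope the collector receives."""
        payload = {
            'counter_name': counter.name,
            'counter_volume': counter.volume,
            'resource_id': counter.resource_id,
            'timestamp': counter.timestamp,
            # ...the remaining Counter fields would be copied here too...
            'message_id': str(uuid.uuid4()),
        }
        payload['message_signature'] = hmac.new(
            secret.encode('utf-8'),
            repr(sorted(payload.items())).encode('utf-8'),
            hashlib.sha256).hexdigest()
        return payload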
litongjd__, but the pipeline relies on data being a Counter since it does the filgering based on the configuration.21:17
jd__indeed, so the pipeline should be fixed to be data agnostic, that's for sure21:17
jd__so we can use it in a way it could be used as a dispatcher for record_metering_data21:17
litongI mean filtering21:17
*** eharney has joined #openstack-meeting21:17
jd__how does that sound? do I miss something? :)21:17
*** eharney has quit IRC21:17
*** eharney has joined #openstack-meeting21:17
dhellmannwe could make a generic pipeline, but that will make it possible for users to configure it with transformers that are "wrong" for the type of data being processed21:17
jd__dhellmann: the way we build pipeline allows us to specify different namespaces for everything21:18
dhellmannok21:18
jd__so we should have a different transformer namespace for different purposes21:18
dhellmanndo we need transformations in this dispatcher?21:18
jd__or not, but we can have it and not use it21:18
dhellmannjd__: good point21:18
litongI would think not.21:18
jd__does that answer your questions litong?21:19
dhellmannso it could be as simple as a stevedore.NamedExtensionManager().map() call?21:19
jd__dhellmann: no because we may want to route 'meters' based on meter name like we do in pipeline?21:19
litongsince the point for this blueprint is to have the data handed to the plugins and it is up to the plugins to do whatever they want.21:19
jd__I mean, like we do in 'publishing pipeline'21:19
*** sarob has joined #openstack-meeting21:19
*** vipul is now known as vipul-away21:19
jd__but we can do blind routing with NamedExtensionManager.map() if that is enough for litong21:20
jd__that's a first step21:20
*** mdomsch has joined #openstack-meeting21:20
dhellmannjd__: eventually, maybe. I think making the pipeline more flexible is more work than we have time to do between now and the deadline, so I'm looking for a solution litong can have now21:20
dhellmannright21:20
dhellmann:-)21:20
jd__agreed21:20
*** sandywalsh has quit IRC21:20
litongso making changes based on the first patch?21:20
jd__litong: yes21:20
* dhellmann is getting frustrated with colloquy and needs a new irc client21:21
litongit already uses NamedExtensionManager.21:21
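(A minimal sketch of the NamedExtensionManager fan-out being discussed: load the configured dispatcher plugins once, then map() each incoming metering message across all of them. The namespace and entry-point names are assumptions for illustration, not the patch under review.)

    from stevedore import named


    def load_dispatchers(enabled_names):
        # Plugins are registered as entry points; only the names listed in
        # configuration are loaded and instantiated.
        return named.NamedExtensionManager(
            namespace='ceilometer.dispatcher',   # assumed entry-point namespace
            names=enabled_names,                 # e.g. ['database', 'file']
            invoke_on_load=True,
        )


    def record_metering_data(manager, context, data):
        # map() calls the function once per loaded extension; each plugin
        # decides what to do with the data (store it, log it, forward it).
        manager.map(lambda ext, ctxt, d: ext.obj.record_metering_data(ctxt, d),
                    context, data)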
dhellmannlitong: yes, I think so21:21
jd__dhellmann: erc :-)21:21
litongall right. thanks guys.21:21
litongI feel a lot better now.21:21
dhellmannlitong: sorry to put you through that!21:21
jd__litong: your first patch might be a really good start indeed, I just didn't review it :) but I will21:21
litong@dhellmann, doug, np.21:21
*** jlucci has quit IRC21:22
litongthat is ok jd__.21:22
litongwhenever you have time.21:22
litongit is still there.21:22
jd__bah it always ends well anyway ;-)21:22
dhellmannright, I'll go back and re-review it in more detail21:22
litongthanks so much.21:22
dhellmannI think we'll want to change some of the names. "dispatcher" is a little generic21:22
dhellmannwe lost sandywalsh, and he had the same question about using the pipeline21:23
litongok, what name would you guys like to use?21:23
jd__we'll tell him to go through the log, I imagine that'll solve his question?21:23
* dhellmann nods21:23
jd__#topic Open discussion21:24
*** openstack changes topic to "Open discussion (Meeting topic: ceilometer)"21:24
*** sarob has quit IRC21:24
jd__#action jd__ Write a terminology page in the documentation21:24
jd__I think everybody will thank me if I do it before the next summit21:24
eglynn+1, that'll really help with on-boarding21:24
litongtotally.21:24
*** vipul-away is now known as vipul21:25
*** sandywalsh has joined #openstack-meeting21:25
sandywalshbloody vpn sucks :)21:25
jd__i'll send a patch, you'll review, and we'll have a good chat about this once and for all :)21:25
jd__re sandywalsh21:25
jd__you missed all the fun, you may want to go through the log21:25
sandywalshwill do ... got dropped after21:25
sandywalsh<jd__> so I don't quite understand why you'd lose information at first, but anyhow the pipeline could/should be data unaware21:25
sandywalsh<sandywalsh> jd__, +1 ... at the very least, some sort of mimetype check to ensure a fit between pipeline blocks21:25
sandywalsh<sandywalsh> (vs depending on python object types, since they'll be leaving the system at some point)21:25
jd__ack :)21:26
gordcgot a question: are ceilometer.tests.publisher.test_rpc_publisher.py tests supposed to run?21:26
jd__we did talk about the pipeline and dhellmann was saying you could be interested21:26
jd__gordc: I don't like this question21:26
gordc:)21:26
sandywalshjd__, yup, definitely21:26
jd__gordc: did we break it?21:26
gordcjd__, there's quite a few tests that are sitting around but never run.21:27
jd__gordc: why so? :(21:27
*** stevemar has quit IRC21:27
dhellmannare they skipping explicitly, or is the test discovery failing to find them?21:27
gordcjd__, could be when we shifted from nose to testr? but they never run locally for me21:28
*** neelashah has quit IRC21:28
gordcdhellmann, the latter21:28
dhellmanngordc: open a bug ticket?21:28
*** sushils has quit IRC21:28
gordcwill do. just wanted to check if they were supposed to run... they're named like valid tests but they're not set up to run.21:29
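(One way to check the symptom gordc describes — test modules that look valid but never run — is to ask the stdlib loader what it actually discovers; a module missing from the result is being skipped, often because of a missing __init__.py in its directory. Paths below are illustrative.)

    import unittest


    def discovered_modules(start_dir='ceilometer/tests', top_level_dir='.'):
        """Return the set of module names discovery would actually run."""
        suite = unittest.TestLoader().discover(
            start_dir, pattern='test_*.py', top_level_dir=top_level_dir)
        modules = set()
        stack = [suite]
        while stack:
            item = stack.pop()
            if isinstance(item, unittest.TestSuite):
                stack.extend(item)
            else:
                modules.add(type(item).__module__)
        return modules

(If a module such as the rpc publisher tests is absent from the returned set, discovery is skipping it rather than running it.)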
jd__ok21:30
jd__anything else guys before I wrap up?21:30
gordcnothing from me.21:30
nealphI put a question out on the dev mail...21:30
nealphdipping toes into config options for glance.21:30
nealphany response appreciated. :)21:31
*** markmcclain has quit IRC21:31
eglynnnealph: I'll have a look ...21:31
nealphpollster config stuff....eglynn: thanks!21:31
*** sushils has joined #openstack-meeting21:32
jd__ack21:32
jd__see you on -metering then, and use that last 28 minutes to review some code :p21:32
jd__#endmeeting21:32
*** openstack changes topic to "OpenStack meetings || https://wiki.openstack.org/wiki/Meetings"21:32
dhellmann:-)21:32
openstackMeeting ended Wed Jul  3 21:32:37 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:32
openstackMinutes:        http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-07-03-21.01.html21:32
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-07-03-21.01.txt21:32
openstackLog:            http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-07-03-21.01.log.html21:32
*** gordc has left #openstack-meeting21:32
*** dprince has quit IRC21:34
*** nealph has quit IRC21:34
*** lloydde has quit IRC21:35
*** IlyaE has joined #openstack-meeting21:35
*** cliu_ has quit IRC21:35
*** otherwiseguy has joined #openstack-meeting21:36
*** michchap has joined #openstack-meeting21:38
*** vijendar has quit IRC21:41
*** henrynash has quit IRC21:42
*** henrynash has joined #openstack-meeting21:50
*** sarob has joined #openstack-meeting21:50
*** zehicle_at_dell has quit IRC21:51
*** markmcclain has joined #openstack-meeting21:51
*** vipul is now known as vipul-away21:52
*** michchap has quit IRC21:52
*** henrynash has quit IRC21:53
*** lastidiot has quit IRC21:53
*** evgeny has joined #openstack-meeting21:54
*** afazekas has quit IRC21:54
*** sarob has quit IRC21:54
*** leizhang has joined #openstack-meeting21:54
*** henrynash has joined #openstack-meeting21:56
*** vipul-away is now known as vipul21:57
*** eglynn has quit IRC21:59
*** bgorski has quit IRC22:00
*** michchap has joined #openstack-meeting22:01
*** michchap has joined #openstack-meeting22:02
*** eglynn has joined #openstack-meeting22:09
*** leizhang has quit IRC22:11
*** dcramer_ has quit IRC22:13
*** mdomsch has quit IRC22:14
*** leizhang has joined #openstack-meeting22:17
*** markpeek has quit IRC22:20
*** sarob has joined #openstack-meeting22:21
*** eglynn has quit IRC22:21
*** IlyaE has quit IRC22:21
*** jlucci has joined #openstack-meeting22:24
*** tedross has joined #openstack-meeting22:25
*** sarob has quit IRC22:25
*** dcramer_ has joined #openstack-meeting22:25
*** dkliban_afk has quit IRC22:26
*** lloydde has joined #openstack-meeting22:29
*** SergeyLukjanov has quit IRC22:32
*** fnaval has quit IRC22:35
*** jasondotstar has joined #openstack-meeting22:36
*** sdake has quit IRC22:40
*** barefoot has left #openstack-meeting22:41
*** leizhang has quit IRC22:42
*** jpeeler has quit IRC22:48
*** lloydde has quit IRC22:48
*** gongysh has quit IRC22:51
*** beagles has quit IRC22:54
*** jlucci has quit IRC22:56
*** beagles has joined #openstack-meeting22:59
*** tedross has quit IRC23:01
*** brucer has joined #openstack-meeting23:03
*** gongysh has joined #openstack-meeting23:03
*** evgeny has quit IRC23:07
*** IlyaE has joined #openstack-meeting23:08
*** fnaval has joined #openstack-meeting23:10
*** _TheDodd_ has quit IRC23:14
*** lastidiot has joined #openstack-meeting23:16
*** amytron has quit IRC23:21
*** sarob has joined #openstack-meeting23:21
*** markpeek has joined #openstack-meeting23:25
*** sarob has quit IRC23:26
*** fifieldt has joined #openstack-meeting23:27
*** rwsu has quit IRC23:28
*** kebray has quit IRC23:28
*** dprince has joined #openstack-meeting23:33
*** blamar has quit IRC23:34
*** hemna is now known as hemnafk23:35
*** branen has joined #openstack-meeting23:37
*** otherwiseguy has quit IRC23:39
*** kmartin is now known as kmartin_zz23:42
*** kmartin_zz has quit IRC23:42
*** gongysh has quit IRC23:43
*** gongysh has joined #openstack-meeting23:44
*** fifieldt has quit IRC23:46
*** fifieldt has joined #openstack-meeting23:47
*** rwsu has joined #openstack-meeting23:48
*** dprince has quit IRC23:49
*** kmartin has joined #openstack-meeting23:50
*** kmartin is now known as kmartin_zz23:51
*** lloydde has joined #openstack-meeting23:55
*** rwsu is now known as rwsu-awa23:56
*** rwsu-awa is now known as rwsu-away23:56
*** salv-orlando has quit IRC23:59
