Thursday, 2013-12-12

venkatesh_?13:56
venkatesh_sorry, pressed some wrong keys.13:57
bswartz#startmeeting manila15:01
openstackMeeting started Thu Dec 12 15:01:08 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
*** openstack changes topic to " (Meeting topic: manila)"15:01
openstackThe meeting name has been set to 'manila'15:01
*** csaba|afk is now known as csaba15:01
bswartzhey, is anyone here today?15:01
*** gregsfortytwo has joined #openstack-meeting-alt15:01
xyang1hi15:01
vponomaryovhi15:01
bill_azHi15:01
zhaoqin__hello15:01
bswartzah, so I'm not alone15:01
csabahi15:01
rraja_hi15:01
gregsfortytwohi15:02
bswartzI don't have a specific  agenda for this week -- unfortunately I have another meeting right before this one that tends to leave me with no time to prepare :-(15:02
bswartzI'll work on solving that issue though15:02
*** amytron has joined #openstack-meeting-alt15:02
bswartzI think I want to cover the same issues we did last week because I believe we've made some progress on all of them though15:03
aostapenkohi15:03
bswartz#topic gateway-mediated multitenancy15:03
*** openstack changes topic to "gateway-mediated multitenancy (Meeting topic: manila)"15:03
bswartzso first of all, the wiki document is out of date now, due to the many new ideas that have come up in the last 6 weeks or so15:04
*** s3wong has joined #openstack-meeting-alt15:04
bswartzI plan to update the document but first I'll offer a preview and see if anyone thinks this is crazy15:04
*** yportnova_ has joined #openstack-meeting-alt15:04
*** shamail has joined #openstack-meeting-alt15:05
bswartzMy thinking is that the manila-share service itself will only understand 2 types of attach calls:15:05
*** jcooley_ has joined #openstack-meeting-alt15:05
*** jtomasek has quit IRC15:05
*** hagarth has joined #openstack-meeting-alt15:05
bswartz1) Attach directly to tenant network, including support for VLANs, full network connectivity, with a virtualized server, etc15:05
*** bvandehey has joined #openstack-meeting-alt15:06
bswartzand 2) Attach to flat network, just like the existing drivers, where any multitenancy support will be handled externally, either in nova or some kind of manila agent15:06
bswartzAll of the gateway-mediated multitenancy support could be built on top of (2) I believe15:07
*** Dinny has joined #openstack-meeting-alt15:07
bswartzand all of the VLAN-based multitenancy could be built using (1), which is pretty close to being ready15:07
*** NehaV has quit IRC15:07
*** lsmola has quit IRC15:08
bswartzI need to draw a picture of how this will work and go through all of the use cases and demonstrate how each will be handled15:08
*** NehaV has joined #openstack-meeting-alt15:08
bswartzI think that this should make backend design for things like ceph/gluster/gpfs relatively easy, and the hard work will be done outside the manila-share service15:08
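A minimal Python sketch of the two attach styles bswartz describes above; the class and method names are hypothetical illustrations, not the actual Manila driver interface:
```python
# Illustrative only -- names and signatures are invented for this sketch,
# not taken from the Manila code base.

class ShareDriverSketch(object):
    def attach_to_tenant_network(self, share, network_info):
        """Model (1): the backend creates/virtualizes a share server directly
        on the tenant's network (VLAN etc.) and exports the share there."""
        raise NotImplementedError

    def allow_access_flat(self, share, client_ip):
        """Model (2): flat network -- the backend only exports a directory to
        a given client IP; any multitenancy is layered on top externally
        (e.g. by a gateway or agent on the hypervisor)."""
        raise NotImplementedError
```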
*** Barker has joined #openstack-meeting-alt15:09
bswartzDoes anyone think I'm crazy?15:09
bswartzoh caitlin56 isn't here, she would mention something I'm sure15:09
*** lsmola has joined #openstack-meeting-alt15:09
shamailGateways etc fall in 1 as well?15:10
bswartzno in (1) there is no gateway -- the backend is responsible for virtualizing the server and connecting directly to a tenant network15:10
*** s3wong has quit IRC15:10
*** shusya has joined #openstack-meeting-alt15:10
bswartzthat method provides more functionality, and is preferred for those backends that can support it15:11
hagarthbswartz: any thoughts on how to handle multi-tenancy support externally for (2) ?15:11
shamailThanks15:11
bswartzhagarth: absolutely15:11
*** anands has joined #openstack-meeting-alt15:11
bswartzThe approach will be more or less the same as the current wiki, but the new thing I'm proposing is that the manila backend doesn't really need to be aware of most of it15:12
*** vbellur has joined #openstack-meeting-alt15:12
bswartzThe main thing I realized is that whether the model is "flat"/single tenant, or multitenant with various forms of gateways, the interaction with the actual storage server is pretty much the same15:13
bswartzin the (2) case, when the attach call comes in, the backend just has to share a directory with a client IP, that's it15:13
bswartzimplementing only that will allow us to build everything else in a generic and reusable way, I think15:14
bswartzthen for multitenant situations, there needs to be code on the hypervisor (either manila agent or nova extensions) which mounts the share and re-exports it into the tenant using one of many approaches15:15
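As a rough illustration of the re-export step just described, a hypothetical hypervisor-side agent might do something like the following; the paths, export options, and use of kernel NFS (exportfs) are assumptions made for the example, not part of the proposal:
```python
# Hypothetical "manila agent" step on the hypervisor: mount the share the
# backend exported on the flat network, then re-export it to the tenant.
import subprocess

def reexport_share(backend_export, tenant_client_ip, mountpoint="/mnt/manila/share1"):
    # 1) Mount the backend's flat-network NFS export locally.
    subprocess.check_call(["mkdir", "-p", mountpoint])
    subprocess.check_call(["mount", "-t", "nfs", backend_export, mountpoint])
    # 2) Re-export it toward the tenant (kernel NFS shown here; an
    #    NFS-Ganesha gateway would use its own export configuration).
    subprocess.check_call(
        ["exportfs", "-o", "rw,no_root_squash",
         "{0}:{1}".format(tenant_client_ip, mountpoint)])

# Example (hypothetical addresses):
# reexport_share("192.168.10.5:/exports/share1", "10.20.30.40")
```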
bill_azbswartz:  for 2), I would say "attach to network" - the driver may choose to do different network plumbing  depending on req'ts15:15
bswartzIn particular I'm looking for the gpfs and gluster people to tell me why this is crazy15:15
bswartzbill_az: based on our meeting this week, I understood that gpfs operates in a flat network15:16
bill_azyep - but we may want to use vlan connections from tenant guests to specific cluster node15:17
bswartzbill_az: I realize that gpfs has many different networking options, but semantically I think they all support the concept of "grant access to /a/b/c to host 10.20.30.40"15:17
*** caitlin56 has joined #openstack-meeting-alt15:18
bill_azyes15:18
vbellurbill_az: maybe appropriate drivers can override the attach_share action?15:19
bswartzbill_az: okay I was unaware that gpfs could join a vlan directly -- maybe there's an opportunity for GPFS to implement a VLAN-mediated style of driver aka (1)15:19
*** jtomasek has joined #openstack-meeting-alt15:19
bswartzor maybe the thing that joins the VLAN is just a proxy/gateway itself15:20
bill_azI think there may end up being a hybrid of 1/215:20
bswartzthat blurs the lines a bit >_<15:20
bswartzokay I'm glad this came up though -- I'll incorporate it into my doc update15:21
bswartzbill_az: one question for you15:21
caitlin56Doesn't GPFS do pNFS-like direct transfers? That makes a straight server-side proxy tricky, unless you can limit the proxy role to metadata.15:21
*** alazarev has joined #openstack-meeting-alt15:21
bill_azbswartz:  we are still discussing design internally - I just want to point out 2) as you described it might not be exactly where we end up15:21
bswartzis the part of the system you would use to export a GPFS filesystem directly into another vlan part of GPFS itself, or some addon that you guys maintain?15:22
*** jasonb365 has joined #openstack-meeting-alt15:22
bill_azinitial driver is nfs (ganesha or could be kernel nfs) on top of gpfs15:22
bswartzcaitlin56: I think at one layer you're right, but you can always implement a second proxy layer on top of that15:22
zhaoqin__bill_az: I see your code is sharing gpfs via nfs. do you plan to let VMs mount the shares via the gpfs protocol?15:23
bswartzbill_az: is there any reason that the nfs-ganesha layer couldn't sit on top of some other filesystem like cephfs, glusterfs, or another NFS?15:23
bill_azzhaoqin__:  not initially - that would be in a future driver15:24
anandsbswartz: no, I don't think there is...that aligns with the proposal last week15:24
zhaoqin__bill_az: ok15:24
bill_azbswartz:  that's what ganesha brings15:24
bill_azthere are FSALs for various filesystems15:25
bswartzso to answer hagarth's earlier question, nfs-ganesha is one way we can bridge arbitrary backend filesystems into a tenant network15:25
bswartzit could be the preferred method even, if it works well15:26
*** bvandehey has quit IRC15:26
shamailIs Manila-agent just in architecture/design phase or has anyone started working on it already?15:26
bill_azbtw - ganesha v2 was released this week15:26
*** sacharya has joined #openstack-meeting-alt15:26
caitlin56Can NFS-ganesha use NFSv4 or NFSv4.1 tricks?15:26
anandscaitlin56: yes it supports v4, v4.115:26
bswartzshamail: it doesn't exist -- it's just something we're thinking about15:26
vbellurcaitlin56: are you looking at something specific in v4/v4.1?15:27
bswartzcaitlin56: I'm pretty sure that ganesha-nfs will sit right in the middle of the data path though -- all traffic will flow through it when it's being used as a gateway15:27
zhaoqin__bill_az: great, I need to give it a try15:27
caitlin56Our servers support v4, so nfs-ganesha could be a backup method of adding vservers. We will probably use OpenSolaris zones, however. But that has not passed QA yet.15:28
bill_azzhaoqin__:  you can ping me if you have trouble building / getting started15:28
bswartzso in that scenario I'm not sure what "tricks" it could take advantage of15:28
*** IlyaE has joined #openstack-meeting-alt15:28
zhaoqin__bill_az: thank you15:29
*** jjmb has joined #openstack-meeting-alt15:29
vbellurcaitlin56: would that mean you would require ganesha to run on OpenSolaris?15:29
anandsbswartz: speaking of all traffic being routed through ganesha, do you see it as a bottleneck?15:30
bswartzanands: definitely not15:30
caitlin56vbellur, not necessarily, we can run Linux inside an OpenSolaris zone.15:30
bswartzanands: ganesha can run on the hypervisor nodes and scale along with them15:30
vbellurcaitlin56: ok15:30
anandsbswartz: precisely, yes, it's what we suggested last week as part of the proposal15:31
jvltcbswartz, yes, ganesha can scale well; 2.0 is just out, so there could be some issues with it, but architecture-wise it does15:31
jvltcin fact we experimented with 1.5 here at IBM and it scaled very well15:31
bswartzthe important thing is that if we can locate the ganesha gateways on the same physical hardware as the guest vms that are using them, there will be no fan-in from a network perspective15:32
bswartzthe scaling should be as good as cinder15:33
caitlin56bswartz: with the caveat that distributed proxies imply distributed security. Some customers will want the vserver option.15:33
bswartzcaitlin56: the tenants wouldn't know how the cloud was built internally15:33
gregsfortytwoI'm having some trouble visualizing how Ganesha would interact with (2) (or maybe just with (2) itself); can somebody spell that out a little more?15:34
bswartzall of this should be invisible to a tenant15:34
caitlin56bswartz: if the backend servers are NFSv4 then it should actually be better than cinder. Not as good as object storage, but quite good.15:34
anandswhat about the availability story wrt ganesha? Or is the plan to discuss that separately?15:34
bswartzgregsfortytwo: don't worry you're not alone -- I intend to capture the new design in an updated wiki15:34
vbelluranands: I think we need to have another discussion around HA for NFS-Ganesha.15:35
bswartzanands: again if ganesha runs on the same physical machine where the guest lives, then hardware failures are not a problem because any hardware failure that affects ganesha will affect the guest too15:35
bswartzand we all know that software failures are not a problem because we never write bugs into our software, right?15:36
anandsbswartz: if it's a ganesha crash?15:36
anandsVijay: sure15:36
bswartzhaha15:36
caitlin56bswartz: yes, there are a number of scenarios where the fact that the NFS proxy and its clients die at the same time allows you some freedom regarding NFS session rules.15:36
vbellurall the code I write is completely free of bugs :)15:36
jvltcanands, bswartz just said no software bugs. :)15:36
jvltcanands, if ganesha is on the hypervisor node and it crashes, all it needs is a restart, right?15:37
bswartzokay so we need to move to the next topic15:37
jvltcIn this architecture I am guessing ganesha runs as stand alone15:37
caitlin56anands: NFSv3 or NFSv4? Any caching done under NFSv3 is risky (but common). NFSv4 has explicit rules.15:37
bswartzI'm sure we'll keep spending time on multitenancy in the coming weeks15:37
bswartz#topic dev status15:38
*** openstack changes topic to "dev status (Meeting topic: manila)"15:38
bswartzokay can we have an update on the new changes for the last week?15:38
bswartzhttps://review.openstack.org/#/q/manila+status:open,n,z15:38
hagarthbswartz: rraja_ and csaba are adding unit tests for the flat network glusterfs driver15:39
bswartzvponomaryov? yportnova?15:39
vponomaryovWe are working on three things:15:39
*** vbellur has left #openstack-meeting-alt15:39
vponomaryov1) Migrating Manila to Alembic is in progress: https://review.openstack.org/#/c/60788/15:39
*** hagarth has left #openstack-meeting-alt15:39
vponomaryov2) NetApp driver (cmode) is in progress: https://review.openstack.org/#/c/59100/15:40
vponomaryov3) Implementation of BP https://blueprints.launchpad.net/manila/+spec/join-tenant-network is still in progress.15:40
vponomaryov3.1) https://review.openstack.org/#/c/60241/15:40
vponomaryov3.2) https://review.openstack.org/#/c/59466/15:40
*** vbellur has joined #openstack-meeting-alt15:40
vponomaryovAnd have one open item15:40
bswartzvponomaryov: will any of these be ready for merge in the next few days? I've reviewed some but not all of them15:41
vponomaryovalexpec had asked about not clear situation with driver interfaces in manila's chat15:41
caitlin56yponomaryov: When will you be confident that your interface with Neutron is stable?15:41
*** rnirmal has joined #openstack-meeting-alt15:41
vponomaryovwe believe we can get working ones next week15:42
*** sacharya has quit IRC15:42
caitlin56vponomaryov: that fits our schedule well, we'll probably start coding early next month on the nexenta driver.15:42
bswartzyeah I need to answer alexpec15:42
vponomaryovSo we think that the driver interfaces should be refactored15:43
bswartzlast 2 days I've been stuck in long meetings so sorry for those of you waiting for responses from me by email15:43
*** devkulkarni has joined #openstack-meeting-alt15:44
vponomaryovso, this refactor should be done asap15:44
vponomaryovbefore hard work begins on the different drivers15:44
caitlin56vponomaryov: let us know when you'd be starting any refactoring. We'll let you know when we're about ready to start coding. Don't want to start coding 1 week before you change everything.15:45
vponomaryovit means that the lvm driver is the only acceptable one for now15:45
vponomaryovand it should be refactored15:46
vponomaryoveven for singletenancy15:46
bill_azvponomaryov:  is there a blueprint / design for the refactoring?  what are you thinking of changing?15:46
vponomaryovthere is no BP for now15:47
vponomaryovwe are thinking of changing 3 existing methods to one15:47
vponomaryovbecause different backends will use different methods15:47
caitlin56Basically, having 3 entry points only makes sense for certain backends? Therefore just go with one and let each backend map to its implementation?15:48
bill_azvponamaryov:  that seems like a good idea15:49
vponomaryovcaitlin56: yes15:50
vponomaryovits own clear implementation15:50
caitlin56+1 then, and the best time to refactor is before we have 4 backends.15:50
bswartzI agree, but the best way to get the design right is to have real use cases implemented15:51
bill_azvponomaryov:  one question I brought up last week - I don't see multiple backend support fully implemented - is there work planned to finish that?15:51
bswartzwithout multiple functioning drivers it will be hard to validate that our design is flexible enough15:51
vponomaryovyes, the idea popped up precisely while trying to implement it15:52
bswartzso it's a bit of a chicken-and-egg problem15:52
bswartzwhoever implements first will probably have some pain working through the refactorings as the design settles15:52
bswartzokay thanks vponomaryov15:53
caitlin56bswartz: we obviously cannot be 100% confident until after multiple backends have been implemented. Still fixing things that already look likely to need fixing makes sense.15:53
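Purely for illustration, one possible shape of the consolidation vponomaryov describes (collapsing several specialized driver entry points into a single generic one that each backend maps to its own implementation); the method names here are invented, since the blueprint did not yet exist at the time of the meeting:
```python
# Hypothetical sketch of a single generic driver entry point replacing
# several specialized ones; names are invented, not real Manila methods.

class GenericAccessDriverSketch(object):
    def ensure_access(self, share, access_rules):
        """Single entry point: the driver receives the full desired access
        state and decides how to realize it on its particular backend."""
        for rule in access_rules:
            self._apply_rule(share, rule)

    def _apply_rule(self, share, rule):
        # Each backend (LVM, NetApp, GlusterFS, GPFS, ...) provides its own
        # clear implementation of this step.
        raise NotImplementedError
```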
bswartz#topic open discussion15:53
*** openstack changes topic to "open discussion (Meeting topic: manila)"15:53
bswartzokay I'll open up the floor to other topics15:53
vponomaryovbill_az: we propose to refactor, and it will affect all of you15:53
*** dttocs has joined #openstack-meeting-alt15:53
vponomaryovwho had begun drivers15:53
bswartzbtw those of you who have contributed drivers already: thank you!15:54
*** katyafervent has joined #openstack-meeting-alt15:54
caitlin56vponomaryov: the nexenta resources are booked for at least 3 weeks before we start coding.15:54
*** zhiyan has quit IRC15:54
vbellurbswartz: we plan to go ahead and prototype the ganesha mediated model. I assume that would be fine given the direction we are heading?15:54
bswartzvbellur: yeah I'm looking forward to seeing a prototype15:55
vponomaryovcaitlin56: there is not a lot of work; the question is about severity15:55
bswartzvbellur: does your team have familiarity with ganesha already?15:55
vbellurbswartz: cool. Yeah, we have a good degree of familiarity with ganesha.15:55
bill_azvponomaryov:  ok on multi-backend.  we have an initial gpfs driver w/ knfs working - starting on the ganesha flavor now15:56
bill_azbut no problem if driver interface changes some15:56
vponomaryovso, does everyone agree that we create an appropriate BP and do the refactor?15:57
*** demorris_ has joined #openstack-meeting-alt15:57
*** thinrichs has joined #openstack-meeting-alt15:57
bswartzvponomaryov: +115:57
shamailvponomaryov: +115:58
caitlin56+115:58
vbellurvponomaryov: +115:58
bswartzanything else?15:58
bswartzwe're near the end of our hour15:58
*** s3wong has joined #openstack-meeting-alt15:58
bswartzI plan to hold this meeting as usual next week15:58
*** michsmit has joined #openstack-meeting-alt15:59
bswartzthe following week is a holiday week here15:59
*** SushilKM has joined #openstack-meeting-alt15:59
*** alagalah has joined #openstack-meeting-alt15:59
bswartzwe can discuss next week, but probably I'll cancel the 26 Dec meeting15:59
vbellurbswartz: makes sense15:59
*** demorris has quit IRC15:59
*** demorris_ is now known as demorris15:59
alagalahGood morning15:59
bswartzthanks everyone15:59
aostapenkothanks, bye16:00
bswartz#endmeeting16:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"16:00
openstackMeeting ended Thu Dec 12 16:00:11 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/manila/2013/manila.2013-12-12-15.01.html16:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/manila/2013/manila.2013-12-12-15.01.txt16:00
openstackLog:            http://eavesdrop.openstack.org/meetings/manila/2013/manila.2013-12-12-15.01.log.html16:00
*** caitlin56 has quit IRC16:00
*** anands has left #openstack-meeting-alt16:00
*** alazarev has quit IRC16:00
*** achirko has left #openstack-meeting-alt16:00
mesteryhi16:00
*** gregsfortytwo has left #openstack-meeting-alt16:00
alagalahHi16:00
s3wonghello16:00
mesterybanix michsmit: there?16:01
banixHi16:01
michsmithi16:01
mestery#startmeeting networking_policy16:01
openstackMeeting started Thu Dec 12 16:01:19 2013 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.16:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:01
*** openstack changes topic to " (Meeting topic: networking_policy)"16:01
openstackThe meeting name has been set to 'networking_policy'16:01
*** ashaikh has joined #openstack-meeting-alt16:01
mestery#link https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy Agenda16:01
*** shamail has quit IRC16:01
sc68calmorning16:01
mestery#topic Action Items16:01
*** openstack changes topic to "Action Items (Meeting topic: networking_policy)"16:01
SumitNaiksatamhi16:01
mesterybanix and alagalah: You guys have the first action items. Any updates?16:01
mesteryThese were from last week.16:02
*** japplewhite has left #openstack-meeting-alt16:02
*** allyn has joined #openstack-meeting-alt16:02
alagalahmestery:  banix yes, I put together a strawman taxonomy and banix has fleshed it out, it's available for comment16:02
banixalagalah has prepared a resource diagram16:02
alagalah#link https://docs.google.com/drawings/d/1HYGUSnxcx_8wkCAwE4Wtv3a30JstOBPyuknf7UnJMp0/edit16:02
mesteryalagalah: Thanks for sharing that.16:02
mesteryHave people reviewed this yet?16:02
*** xyang1 has quit IRC16:03
*** bswartz has left #openstack-meeting-alt16:03
s3wongmestery: no, just aware of it now16:03
thinrichsMe neither--looking now.16:03
* mestery gives everyone a minute, and then we can discuss it.16:03
alagalahIt was written last night, only shared this morning, so you are still getting "first look" thinrichs s3wong16:04
s3wongalagalah: :-)16:04
mesteryThanks for writing this up alagalah.16:04
*** rraja_ has quit IRC16:04
thinrichsNot so familiar with these diagrams--why are Security, QoS, Redirect not connected to anything?16:04
banixthinrichs: These are not Neutron objects16:05
ashaikhmy first question is whether we also need a relation (mapping) from group to network -- i.e., are "endpoints" here just things that can be put in groups?  (nested groups may need this though)16:05
alagalahthinrichs:  Good observation... yes as to banix's point, the idea is to show linkages to existing neutron objects and net new objects16:05
*** dttocs has quit IRC16:05
s3wongwhat is the intended meaning of the Policy -> Group arrow?16:05
banixso we were debating whether to put them in the figure or not16:05
thinrichsAnd did we settle on only having Networks and ports as endpoints?  I thought banix said in the google doc we had other objects.16:06
alagalahashaikh:  Great question, and if that is indeed necessary, we have a problem... imho references to existing neutron objects should really be at the edges (ie pure child nodes) of the taxonomy16:06
*** yidclare has joined #openstack-meeting-alt16:06
*** aostapenko has left #openstack-meeting-alt16:06
*** ywu has joined #openstack-meeting-alt16:07
ashaikhalagalah: it may be ok as is, but it would seem a natural mapping of group to network (as an option)16:07
alagalahthinrichs:  I made a comment wrt "networks" and "endpoints" last night...16:07
*** yamahata_ has quit IRC16:07
michsmitI assume that a given endpoint belongs to only a single network, correct?16:07
mesterymichsmit: I would agree with that, and thinrichs had commented on that in the google doc as well.16:07
banixmichsmit: you mean a group cannot be made of more than one network? or i missed your point?16:08
*** mcohen2 has joined #openstack-meeting-alt16:08
*** dttocs has joined #openstack-meeting-alt16:08
*** zhiyan has joined #openstack-meeting-alt16:09
*** gkleiman has joined #openstack-meeting-alt16:09
*** gkleiman_ has joined #openstack-meeting-alt16:09
michsmitin the diagram, a group references 1 or more EPs which reference 1+ networks16:09
*** gkleiman_ has quit IRC16:09
alagalahashaikh:  My only concern with that is that my understanding (as naive as it is) is that a group should be a collection of endpoints, and endpoints at this stage seem to make sense as ports or networks for ease of integration, but I think it's short-sighted to limit it to that, since the intent is to provide an application-centric API16:09
banixashaikh: we don't have nested groups as is; we need to add if we need it.16:09
alagalahashaikh:  Hence I think it makes sense to not have a network/port reference in group16:10
*** dhellmann is now known as dhellmann_16:10
*** hemanthravi has joined #openstack-meeting-alt16:10
ashaikhalagalah: i think that is simpler, but we could also think of groups as only a collection of endpoints (i.e., ports), with neutron networks being a way to represent groups initially, with the option for other mappings16:11
mesteryOK, so should we incorporate alagalah's diagram into the document?16:11
s3wongalagalah: we do need some primitives as endpoints assigned to groups, if not network/port, what can these be?16:11
*** colinmcnamara has joined #openstack-meeting-alt16:12
*** colinmcn_ has joined #openstack-meeting-alt16:12
*** Barker has quit IRC16:13
alagalahashaikh:  Yes, that makes sense to me, and hence the diagram reflects that imho. s3wong: I think we need to eventually be able to identify an application by LXC / some container identified by UUID, but that's a bigger topic, I'm just trying to build that idea into this16:13
ashaikhalagalah: in short, what i think may be missing in the diag is a way to directly map a group to neutron network (i.e., does this say you have to first put it in an endpoint object)16:13
banixashaikh: Wouldn't a group with one endpoint (which is a network) give you the same?16:13
alagalahashaikh:  yes that is exactly what I'm trying to show here... and to be fair, I'm reflecting the tables but yes it makes sense16:13
*** shusya has quit IRC16:14
thinrichss3wong: I have the same question.  If Neutron doesn't know what an 'app' is, it won't be able to enforce any policy about it.  I don't see how we can ask Neutron to enforce policy about objects it doesn't know about.16:14
ashaikhbanix: yes, it could, just that having to put a network in an endpoint seems a little superfluous (but i'm ok if it simplifies the implementation)16:14
*** sacharya has joined #openstack-meeting-alt16:14
*** Barker has joined #openstack-meeting-alt16:15
mesterythinrichs: I get that point. But won't it focus on the objects it knows about?16:15
s3wongthinrichs: yes, exactly16:15
*** markmcclain has joined #openstack-meeting-alt16:16
*** yportnova_ has quit IRC16:16
michsmitOverall, I like the diagram.  My 2 comments :  I don't like the pink line there, I think it should be removed. 2nd comment:  The reference for network and port should not have +16:16
*** zhaoqin__ has quit IRC16:16
banixmichsmit: instead of +, 1?16:16
michsmitbanix: yes16:16
thinrichsmestery: Suppose the entire policy just says UUID1 can't send traffic over port 80 to UUID2.  But Neutron doesn't know what UUID1 or UUID2 are.  It can't enforce the policy.  What's the point of writing that policy?16:17
s3wongmichsmit: agree there, that pink line is strange; and an endpoint is one object, a group is a collection of endpoints16:17
ashaikhmichsmit: IMO the pink line explains how a groups and policies are expressed, so is useful16:17
banixmichsmit, s3wong: pink is a placeholder16:17
banixthe pink was black at first :)16:17
banixIn the doc, we specify the policy between a source group and a destination group16:18
banixSo the pink line is to represent that relationship16:18
thinrichsDo we ever foresee a policy that spans more than 2 groups?  Maybe a policy that talks about the need to waypoint communication between a source and a target e.g. through a proxy?16:18
thinrichsIf so, maybe we just want a + line from policy to group.16:19
ashaikhbanix: another way would be to have groups hang off of policy with a "2" annotation16:19
alagalahthinrichs:  Ideally that is EXACTLY a valid policy and how it should be expressed... I think we should confuse how we identify endpoints, with the objects that the Action leverages16:19
banixashaikh: yes16:19
alagalahthinrichs:  sigh, sorry that wasn't right16:19
s3wongashaikh: a policy is provided by a group A, another group who wants to talk to group A consumes the policy; I don't know if the pink line represents this well16:19
thinrichsI guess I imagined that there were 3 groups in that policy: source, destination, and a collection of proxies.16:19
alagalahs3wong:  Agreed, hence why it's pink and see the reference in the key16:19
ashaikhthinrichs:  that list of waypoints could be expressed in the redirect action in that case16:20
ashaikhs3wong: this is more a question then of having a policy attached to a group -- i find the produce/consume relation harder to understand16:20
thinrichsashaikh: but that was just an example off the top of my head.  Pick something that requires 3 groups but which doesn't have a pre-defined action built for handling the case.16:20
banixthinrichs: the 3rd collection will be part of the action description16:20
michsmitbanix: agreed, we need a relationship of some sort there (pink line).  I would think the arrow comes from the group and we can leave out the src/dst group16:20
*** alazarev has joined #openstack-meeting-alt16:20
banixSo here is the pink question:16:21
*** NikitaKonovalov has quit IRC16:21
*** SergeyLukjanov has quit IRC16:21
ashaikhthinrichs: in that case, i would express pairwise to be clear which communication the policy is governing16:22
banixwhether 1) we express the policy through producer/consumer relationship from groups point of view or 2) define the policy as governing traffic between two groups16:22
*** dttocs has quit IRC16:22
s3wongashaikh: that's fair, yet the arrow direction + src/dst reference makes it a bit unclear16:22
thinrichsAnother example with 3 groups: Suppose we want to prohibit traffic from src to dst from traveling through a specific group; we don't care where the traffic goes as long as it does NOT go through that group.16:22
alagalahbanix exactly... I made a comment in the BP last night along those lines16:22
banixI think these are the two models we have been discussing for sometime16:22
alagalahYes, and making it pink was my hamfisted way of highlighting that the tables etc are kind of dependent on the policy model16:23
michsmitbanix:  I think those 2 models sum it up correctly16:23
s3wongbanix: good two models summary16:23
alagalahmichsmit:  I was under the impression we had to pick one16:23
ashaikhthinrichs: you would just create a "security" policy that forbids that communication16:23
michsmitWe may be able to express both by showing that there can be more than 1 src group and more than 1 dest group16:23
*** jtomasek has quit IRC16:24
michsmitand allow the groups to refer to the policy as a src (provider) or dest (consumer)16:24
*** prasadv has joined #openstack-meeting-alt16:24
ashaikhmichsmit: if we can give flexibility to use either approach, it would be great16:24
thinrichsashaikh: would that policy be written in our policy language or in another?  I thought the point was to put such policies within our language.16:24
*** SushilKM has quit IRC16:25
alagalahmichsmit:  I had that discussion with banix too, which means a table modification but makes sense16:25
alagalahI personally like the consumes/produces model16:25
banixmichsmit: so if we allow possibly multiple source and multiple destination groups, the two models become equivalent without needing "allow the groups to refer to the policy as a src (provider) or dest (consumer)". No?16:25
ashaikhthinrichs: yes, our security policy, i.e., the one in the diag, which i assume has a deny type rule16:25
michsmitbanix: i think so16:25
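A small, purely illustrative Python sketch of the object model converging here: groups of endpoints, and a policy that can reference multiple source and destination groups so that both readings just discussed can be expressed. All class and field names are hypothetical, not an agreed Neutron API:
```python
# Illustrative data-model sketch only; names do not correspond to any
# agreed Neutron group-policy API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Endpoint:
    uuid: str                        # e.g. a Neutron port (or network) id

@dataclass
class EndpointGroup:
    name: str
    endpoints: List[Endpoint] = field(default_factory=list)

@dataclass
class PolicyRule:
    classifier: Dict[str, str]       # e.g. {"protocol": "tcp", "port": "80"}
    actions: List[str] = field(default_factory=list)   # e.g. ["allow", "qos:gold"]

@dataclass
class Policy:
    name: str
    # Multiple groups on each side lets the same model express both the
    # "src/dst" reading and the "producer/consumer" reading discussed above.
    src_groups: List[EndpointGroup] = field(default_factory=list)
    dst_groups: List[EndpointGroup] = field(default_factory=list)
    rules: List[PolicyRule] = field(default_factory=list)
```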
*** SushilKM has joined #openstack-meeting-alt16:26
mesteryOK, so lots of good discussion here around this diagram it appears.16:26
mesteryBut it's almost halfway through the meeting now.16:26
mesteryAnd there are other items yet to cover.16:26
mesteryAny concrete action items we want out of this particular discussion?16:26
thinrichsashaikh: but I'm not saying src can't talk to that specific group (let's call it G); it's just that we don't want to route traffic between src and dst through G.16:26
mesteryI think we should migrate this taxonomy doc into the main google doc. Thoughts alagalah?16:26
s3wongmestery: I guess we should incorporate the diagram into the document, and we can then comment on that diagram directly in the document16:27
alagalahThat would make sense... before we do, see that edit I made?16:27
banixmestery: yes for moving to the main doc. Let's see if we can reach an agreement in the next couple of minutes :)16:27
mesterys3wong: Agreed.16:27
mesteryalagalah : Yes16:27
ashaikhthinrichs:  then you're back to the waypoint example, and we could handle with the redirect/classifer policy16:27
*** brents has joined #openstack-meeting-alt16:27
mestery#action alagalah to migrate taxonomy diagram into the main document16:27
ashaikhi agree about putting this diag in the main doc with the changes suggested by banix and michsmit to accommodate both approaches16:28
banixSo are we going to allow both models by allowing multiple source and destination groups?16:28
s3wongbanix: we should16:28
banixdo we all agree that is the way forward?16:28
michsmitbanix: I think so16:29
s3wongbanix: +116:29
prasadvbanix: +116:29
thinrichsbanix: sure16:29
*** alazarev has quit IRC16:29
banixGreat16:29
ekarlsowhat's network policy, btw? (for after the meeting, if anyone cares)16:29
mesteryOK, thanks banix.16:29
*** colinmcn_ is now known as colinmcnamara_16:29
prasadvsorry, joined a little late16:29
mesterySo, lets move on to the next topic.16:30
prasadvwhere is the taxonomy in the document?16:30
mesteryprasadv: See the link from earlier in the meeting (https://docs.google.com/drawings/d/1HYGUSnxcx_8wkCAwE4Wtv3a30JstOBPyuknf7UnJMp0/edit?usp=sharing)16:30
s3wongprasadv: https://docs.google.com/drawings/d/1HYGUSnxcx_8wkCAwE4Wtv3a30JstOBPyuknf7UnJMp0/edit16:30
alagalahprasadv:  It's not in yet, I didn't have edit access to the BP16:30
mestery#topic Discussion Items16:30
*** openstack changes topic to "Discussion Items (Meeting topic: networking_policy)"16:30
mesteryalagalah: I just added you with edit rights so you're good.16:30
*** Dimit has joined #openstack-meeting-alt16:30
mesteryNext item: Endpoints/groups16:30
mesteryhttps://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy (For the item on the agenda for those who don't have it open)16:31
mesteryThe question in the agenda is: Endpoints belonging to multiple groups.16:31
mesteryI think thinrichs pointed this out in the gdoc as well.16:31
mesteryOR rather, asked about it.16:31
banixmestery: yes.16:31
banixTried to put the questions on the google doc on the list of things to discuss16:32
thinrichsSomebody else brought it up.  But the question is whether we want a classifier to have a broader range of test attributes than what was in the doc.16:32
Dimitits powerfull, but can create confusion16:32
thinrichsAnd I guess whether a classifier needs to test all of the possible attributes listed in the doc.16:32
ashaikhdon't we need to allow this to have different policies apply to the same endpoint?16:33
prasadvi brought this up. I think it is needed16:33
banixSo the question is if we allow an endpoint being in multiple groups? Should we not allow it to start?16:33
mesterySo, if we allow this, then ordering of policies becomes an issue to solve.16:33
michsmitwe likely want to start off with very simple assignment.16:33
mesterybanix: Good point. First whether we allow it or not.16:33
*** markmcclain has quit IRC16:33
s3wongMy take is the policy is applied from group to group, rather than pointing to an endpoint, therefore subjected to different policy even with a common endpoint in different groups should be fine, right?16:33
banixmestery: yes, and possibly having conflicting policies.16:34
thinrichsmestery: but I think the difficulty in implementation will affect whether or not we allow it.16:34
ashaikhmestery: wasn't there a suggestion to have a priority with policy rules to address that16:34
mesterythinrichs: Agreed, and as michsmit said, we may want to start with the simplest case, likely not allowing this.16:34
prasadvs3wong: how does one resolve conflicts as pointed in the document16:34
mesteryashaikh: That is one way to solve it, yes.16:34
Dimitpriority alone doesn't solve it. needs to be global16:35
ashaikhthinrichs: agree, but the impl could check and disallow if it can't support ?16:35
banixashaikh: that would apply in a policy among policy rules which may not be necessary after all.16:35
alagalahprasadv:  yes, the priority thing is more complex than it appears on the surface... think ye old Cisco ACLs16:35
s3wongashaikh: priority is for having more than one policy rule classifier to match a traffic flow16:35
ashaikhs3wong: yes, a simpler cas then16:35
banixs3wong: still problem may show up if we allow one endpoint in multiple groups; I commented on the doc.16:36
thinrichsIt's pretty common to have a conflict resolution scheme for policy languages, e.g. AD uses hierarchies, firewalls use (implicit) priorities.16:36
thinrichsI don't think the need for conflict resolution is a show-stopper.16:36
alagalahthinrichs:  Agreed16:37
prasadvthinrichs: Yes I think so too16:37
banixthinrichs: Question is if we need to deal with it now?16:37
Dimithierarchies driven by users assigning the policy is an option16:37
thinrichsI think it's more a question of how hard is it to write the policy you want and get what you expect.16:37
alagalahthinrichs:  But needs to be explicitly called out, rather than implied16:37
michsmitinitially, an EP in a single group will be easiest and then we could introduce attribute-based assignment of EP to group16:37
alagalahthinrichs:  bingo16:37
banixmichsmit: makes sense16:38
thinrichsalagalah: definitely needs to be part of the language spec if conflicts are possible.  And I agree that the order that rules were added via API calls is a confusing way to resolve conflicts.  So if we go with priorities, they ought to be set explicitly.16:38
mesterythinrichs: Agree with you on that point.16:38
banixThere are two different questions here:16:38
alagalahthinrichs:  Yes, and to my point in the BP about whether this is a promise theory based implementation or...16:39
*** dark_knight_ita has joined #openstack-meeting-alt16:39
banix1) establishing order among policy rules in a policy, 2) order among policies16:39
*** Izik_Penso has joined #openstack-meeting-alt16:40
banixLet me correct the last statement16:40
Dimitbanix: agreed16:40
michsmitideally if policy rules can be expressed without ordering, things will be easier to manage16:40
alagalahOne way would be that if there is conflict to push it back to the app to work out... rather than partial implementation.. ie ask for something else16:40
alagalahLike the 413 return code16:40
thinrichsbanix: agreed those are different.16:40
ashaikhmichsmit: the explicit priority handles that case i think, right?  i'm less sure about the right way to handle globally16:41
prasadvalagalah: we should do that anyway after the priorities right?16:41
banix1) establishing order among actions in a policy rule, 2) order among policy rules, 3) order among policies16:41
thinrichsbanix: 1 is interesting b/c there may be multiple actions that we can apply simultaneously.16:41
banix1) we know the answer to (an ordered list of actions); 2) may not be a real issue if we do not allow overlapping classifiers; 3) we can leave for later by not allowing an EP to be in multiple groups16:42
thinrichsIt depends on the actions we have of course.  But right we need to resolve conflicts (1) within a policy, (2) across policies.16:42
*** jtomasek has joined #openstack-meeting-alt16:42
michsmit1 can often be expressed in an order independent manner16:42
alagalahprasadv:  Well, it may be a way of ensuring that ordering is irrelevant, i.e. a more "functional programming" style approach: policy is implemented the same regardless of ordering16:42
Dimitbanix: overlapping classifiers are needed for several expressions16:42
thinrichsbanix: I fear that disallowing overlapping classifiers won't be practical.16:42
*** flaper87 is now known as flaper87|afk16:42
s3wongbanix: (2) is somewhat difficult to enforce - it implies we have to run through all classifiers and reject overlaps at the API level16:43
banixTo narrow down the problem we are discussing, is there agreement that EP belongs to one group for now?16:44
s3wongbanix: also (1): the action list may not be ordered, as different action_types can be executed simultaneously16:44
thinrichsbanix: if each EP belongs to a single group, can't we still have multiple policies applied to that group and hence have to deal with conflicts?16:45
thinrichsIf we're dealing with conflicts anyway, we might as well allow an EP to belong to multiple groups and apply the same conflict resolution to it.16:45
banixs3wong: then, there won't be a need to establish order16:45
alagalahthinrichs:  #agreed16:46
s3wongthinrichs: so in essence you are suggesting that we establish orders across policies?16:46
Dimitthinrichs: different implementations will end up resolving conflicts in a different manner. users will be confused16:47
thinrichss3wong: I'm not suggesting a conflict resolution strategy yet (though order is typical).  I'm just trying to understand if there's anyway to avoid conflicts first.16:47
mesterys3wong thinrichs: Orders == priorities16:47
*** venkatesh_ has quit IRC16:47
*** venkatesh has quit IRC16:47
*** jtomasek has quit IRC16:47
*** colinmcnamara has quit IRC16:47
banixthinrichs: I see your point16:47
thinrichsDimit: The conflict resolution I'm suggesting will be part of the language spec.  So all plugins implement it the same.16:47
*** vbellur has left #openstack-meeting-alt16:47
*** boris-42 has quit IRC16:48
*** colinmcnamara_ has quit IRC16:48
*** colinmcnamara has joined #openstack-meeting-alt16:48
banixmestery: you think we can use a cross-policy priority to solve the problem?16:48
*** colinmcnamara has quit IRC16:48
prasadvthinrichs: I agree it should be part of the spec.16:48
ashaikhmestery: so you mean that we don't want explicit priorities, rather governed by ordering ?16:48
*** SumitNaiksatam has quit IRC16:48
*** 77CAAS3HQ has joined #openstack-meeting-alt16:48
*** 65MAAD7S9 has joined #openstack-meeting-alt16:48
Dimitthinrichs: i can't see a unique answer no matter what; users will not know what happened16:48
michsmitbetween a given pair of groups, do we expect more than 1 policy to be applied ?16:48
mesteryI think thinrichs had suggested using priorities for policies, which may solve the problem ashaikh, right?16:49
alagalahashaikh:  I think his point was that whether you choose ordering or priority, you still end up at the same place16:49
*** 65MAAD7S9 is now known as colinmcnamara16:49
mesteryalagalah: ^^^^ That too :)16:49
thinrichsDimit: suppose we require every policy to be assigned a unique number (via the API call).  The language spec says that the policy with the highest priority is the one that applies.  Then the user understands what is happening.16:49
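To make the cross-policy resolution scheme thinrichs sketches above concrete, here is a tiny illustrative example of "highest explicit priority wins" among matching policies; the dict structure is hypothetical and this is only one of the possible variations mentioned in the discussion:
```python
# Tiny illustration of "highest explicit priority wins" across policies.
# Policies are plain dicts here; this mirrors only one possible scheme.

def resolve(policies, src_group, dst_group):
    """Return the single policy governing traffic from src_group to dst_group."""
    matching = [p for p in policies
                if src_group in p["src_groups"] and dst_group in p["dst_groups"]]
    if not matching:
        return None
    # Every policy carries an explicit, unique priority assigned via the API;
    # the highest one wins, so every plugin resolves conflicts identically.
    return max(matching, key=lambda p: p["priority"])

policies = [
    {"name": "web-default",  "priority": 10,
     "src_groups": {"web"}, "dst_groups": {"app"}, "rules": ["allow tcp/80"]},
    {"name": "web-lockdown", "priority": 50,
     "src_groups": {"web"}, "dst_groups": {"app"}, "rules": ["deny all"]},
]
print(resolve(policies, "web", "app")["name"])   # -> web-lockdown
```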
ashaikhmichsmit: couldn't you have a redirect + QoS policy, for example, between groups?16:49
*** ekarlso has quit IRC16:50
ashaikhalagalah: yes, same place, except not implicit in the ordering16:50
banixashaikh: but those will be policy rules in a single policy16:50
*** jvltc has left #openstack-meeting-alt16:50
*** aveiga has joined #openstack-meeting-alt16:50
michsmitashaikh:  yes, but couldn't they be combined into a single policy16:50
michsmitbanix:  your fingers are faster than mine :-)16:51
ashaikhmichsmit:  yes, as mult policy_rules16:51
s3wongmichsmit: more than one policy: sure, a policy for the app tier, and another policy for the Sharepoint application specifically16:51
thinrichsmestery: I think priorities are one solution, though an absolute priority like I mentioned in the example above is not the only way.  We could have a partial order of priorities and a mechanism for combining policies of the same priority into one.16:51
*** hajay___ has joined #openstack-meeting-alt16:51
hajay___openstack-meeting-alt16:51
*** hajay___ has quit IRC16:51
*** tsufiev has quit IRC16:51
prasadvs3wong: can't you combine that into policy rules if the groups are the same16:51
Dimitthinrichs: so two priorities: policy and classifier. I can still see conflict. For EP1, policy A preferred when talking to EP2 and B preferred when talking to EP316:51
*** ekarlso has joined #openstack-meeting-alt16:51
mesteryJust a note folks: We have 9 minutes left, there is another meeting in this channel immediately following this one.16:52
banixthen in a single policy, the order can be established more easily.16:52
*** hajay__ has joined #openstack-meeting-alt16:52
thinrichsDimit: we need 2 levels of conflict resolution: within the policy and across policies.16:52
thinrichsDimit: I was giving the cross-policy resolution scheme.16:52
s3wongprasadv: I picture that we want policies to be reused, so using it like two separate ones seems to make sense16:52
michsmits3wong: if we limit an EP to a single group, the policy could be combined as well in the case of app tier/Sharepoint16:52
michsmits3wong: at least initially16:53
thinrichsDimit: The resolution scheme within a policy should ensure that we can write a policy that describes both QoS and security.  So a strict ordering there isn't so good either.16:53
Dimitthinrichs: cross-policy resolution is different for different end points. It's possible16:53
banixmichsmit: I think that is the way to go16:53
s3wongmichsmit: then policy combination becomes an item we have to support in the framework16:53
thinrichsDimit: sure--I could see priorities on groups as well.  There are a bunch of variations here.  The important thing is that the language chooses one.16:54
banixshould we continue this discussion on mailing list?16:54
thinrichsI would think we start by figuring out the conflict resolution scheme for an individual policy and then worry about cross-policy conflict resolution.16:54
s3wongthen going back to thinrichs' point, at time of combining policies, we would need to resolve conflicts16:54
alagalahthinrichs:  yes16:54
mesterybanix: +1 to the mailing list discussion16:54
s3wongbanix: +116:55
alagalahbanix: +116:55
prasadv+116:55
mesteryWe're almost out of time here.16:55
alagalahWhat mailing list :)16:55
Dimitthinrichs: exactly.  It can get arbitrarily complex. That's why I suggest a single group per end point and simple resolution16:55
mestery#topic Open Discussion and Next Steps16:55
*** openstack changes topic to "Open Discussion and Next Steps (Meeting topic: networking_policy)"16:55
mesteryFor the last 5 minutes, let's see what we'd like to accomplish for next week.16:55
mesteryAnd then do open discussion for the last few minutes.16:55
banixopenstack-dev mark with [Neutron][Policy]16:55
*** luQAs has joined #openstack-meeting-alt16:55
alagalahbanix:  ty16:55
s3wongI made some updates to the action_type=='qos' just before the meeting; please take a look16:56
mestery#info Emails for Neutron policy should go to openstack-dev marked as "[neutron] [policy]"16:56
mesterys3wong: Thank you for that.16:56
s3wongAlso, I would like the community to converge on the default set of actions we would force all plugins to support16:56
mesterys3wong: Can you send an email with that to the list?16:56
*** AlanClark has joined #openstack-meeting-alt16:56
s3wongmestery: certainly16:56
prasadvs3wong:are you going to put more clarity on redirect policy16:56
banixLet us finalize the object model (with support for combining the two models)16:57
Izik_Pensofff16:57
s3wongprasadv: yes, will also send out to ML to enlist community suggestions/opinions16:57
banixs3wong: yes, we need that16:57
michsmitbanix: +1 I think there is more thinking we need to do on combining the models16:57
alagalahs3wong:  I like it but do we want to mix classification with queueing in the same action ?16:57
s3wongprasadv: also, you mentioned you want some redirect dst list mgmt, we should discuss that on ML as well16:58
prasadvs3wong: yes we do need to do that16:58
*** irenab has joined #openstack-meeting-alt16:58
mesteryOK, so lets wrap things up here.16:58
mesteryWe took a few action items which will appear in the meeting minutes.16:58
mesteryI'll add them for next week to followup on.16:58
*** demorris has quit IRC16:58
banixThanks.16:58
mesteryLets continue discussions on the ML for this week for any items which were not discussed here.16:59
mesteryAnd thanks for attending everyone!16:59
mestery#endmeeting16:59
s3wongThanks!16:59
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"16:59
openstackMeeting ended Thu Dec 12 16:59:10 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:59
prasadvthanks16:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking_policy/2013/networking_policy.2013-12-12-16.01.html16:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking_policy/2013/networking_policy.2013-12-12-16.01.txt16:59
openstackLog:            http://eavesdrop.openstack.org/meetings/networking_policy/2013/networking_policy.2013-12-12-16.01.log.html16:59
*** prasadv has quit IRC16:59
*** Dimit has quit IRC16:59
*** thinrichs has quit IRC16:59
*** jtomasek has joined #openstack-meeting-alt17:00
mesteryOK, who's here for the Neutron Third-Party Testing IRC meeting?17:00
*** Dane_ has joined #openstack-meeting-alt17:00
aveigao/17:00
*** alagalah has left #openstack-meeting-alt17:00
*** michsmit has left #openstack-meeting-alt17:00
rossella_sme!17:00
*** woodster has left #openstack-meeting-alt17:00
hajay__me too!17:00
irenabme too17:00
mesteryGreat, looks like we have a solid turnout!17:00
luQAsme too17:00
dkehnhi17:00
mestery#startmeeting networking_third_party_testing17:00
openstackMeeting started Thu Dec 12 17:00:56 2013 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.17:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:00
*** openstack changes topic to " (Meeting topic: networking_third_party_testing)"17:00
openstackThe meeting name has been set to 'networking_third_party_testing'17:01
*** pcm_ has joined #openstack-meeting-alt17:01
mestery#link https://etherpad.openstack.org/p/multi-node-neutron-tempest Etherpad masquerading as an agenda17:01
mesterySo, I do not have an official meeting setup for this; depending on what we accomplish and how often we have this, I can set that up next.17:01
*** BrianB_ has joined #openstack-meeting-alt17:01
*** rongze_ has quit IRC17:01
mesteryFor now, have a look at the agenda on the etherpad link at the top.17:01
mesteryPlease add things to the agenda, as well as the general etherpad.17:01
*** Leo_ has joined #openstack-meeting-alt17:02
mesteryOK, so I think we're all here because we are individually looking at how to handle the new third-party testing requirement for Neutron plugins and drivers.17:02
*** dukhlov_ has quit IRC17:02
mesteryI'm hoping we can use this meeting to facilitate sharing information, hurdles, and workarounds in getting to that goal.17:03
mesterySo, a first question: How are people coming along? Does anyone have this setup and working yet?17:03
rossella_swe are just starting to think about it17:03
aveigamestery: we just finished getting our Jenkins setup17:04
*** ashaikh has quit IRC17:04
hajay__we have a somewhat functional setup where we have our n/w controller running on a separate system and all openstack/devstack-driven projects on a different system17:04
Izik_PensoNo, we just started working on it17:04
mesteryOK, so it looks like people are mostly just starting now.17:04
aveigahowever, I was actually interested in using this to help test others' plugins in a different environment, since I doubt many people have a dual-stacked l2-provider setup17:04
irenabwe are finalizing the requirements17:04
gduanWe are planning17:04
*** emagana has joined #openstack-meeting-alt17:04
mesteryaveiga: Interesting, and valid point.17:04
emaganamestery: Hi!17:05
mesteryemagana: Howdy :)17:05
mesteryAre people using their own Jenkins instances for this with the plugin to read the upstream gerrit stream?17:05
mesteryThat's what we're going to do on our end.17:05
*** SumitNaiksatam has joined #openstack-meeting-alt17:05
*** tsufiev has joined #openstack-meeting-alt17:06
*** clayb has joined #openstack-meeting-alt17:06
anteayao/17:06
mesteryI think that approach at least allows for an easier integration with the upstream gerrit for reading and posting +1/-1 back from what I understand.17:06
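For reference, a minimal sketch of reading the upstream Gerrit event stream directly, which is roughly what the Jenkins Gerrit Trigger plugin (or Zuul) does under the hood; the host, port, and project filter are assumptions for illustration:

    import json
    import subprocess

    # Assumes an SSH account on the upstream Gerrit with stream-events permission.
    cmd = ["ssh", "-p", "29418", "review.openstack.org", "gerrit", "stream-events"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    for line in iter(proc.stdout.readline, b""):
        event = json.loads(line)
        if event.get("type") != "patchset-created":
            continue
        change = event["change"]
        if change.get("project") == "openstack/neutron":
            # A real setup would enqueue a test run and later post +1/-1 back.
            print("would trigger a test run for", change.get("url"))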
mesteryanteaya: Hi!17:06
*** ivar-lazzaro has joined #openstack-meeting-alt17:06
mesteryOK, so how about this: Everyone can carve out a section on the etherpad (maybe at the bottom) to list what they are doing for testing.17:07
*** Sukhdev has joined #openstack-meeting-alt17:07
SukhdevHi17:07
mesteryWith the idea that we can share that info and people can come up with a common model for this, as much as possible17:07
mesterySukhdev: Hi.17:07
mesteryFor those who joined late: https://etherpad.openstack.org/p/multi-node-neutron-tempest <--- Etherpad with agenda and information.17:07
emaganamestery: Good Idea!17:07
anteayaSukhdev: did you get that issue sorted out with you testing nova patches?17:08
mesteryAlso, the etherpad has a section for multi-node testing, because for the most part, I expect everyone here will be doing multi-node testing, so it made sense to combine things a bit.17:08
SumitNaiksatamhi17:08
hajay__regarding tempest runs, are there plans to run selective tests against a plugin? e.g. test-extensions today expects all plugins to support all extensions17:08
Sukhdev<anteaya>: yes, thanks17:08
anteayaalso regarding etherpad use, please click the top right hand coloured button and input your name beside your colour17:09
anteayathanks17:09
anteayaSukhdev: thank you17:09
mesteryhajay__: Not that I am aware of. Can you add that under the issues section I just added to the etherpad?17:09
anteayaSukhdev: if you have any suggestions on how others might avoid the same issue in future, that would be nice to share17:09
mesteryanteaya: Thank you for reminding folks to identify themselves on the etherpad.17:09
anteayasure17:09
hajay__mestery: sure. thanks. out of curiosity do we have any plugin that passes all tempest tests?17:09
rossella_smestery: will multi-node be required or optional?17:09
SukhdevI have some specific questions - should I ask here or put it on ehterpad?17:10
mesteryrossella_s: Multi-node is up to the vendor I think.17:10
*** jcoufal_ has quit IRC17:10
mesterySince we're talking networking, I just assume multi-node is more interesting. :)17:10
mesterySukhdev: Ask away, and we can add issues or info into the pad.17:10
rossella_smestery: I agree but it's harder :)17:10
mesteryrossella_s: Agree :)17:10
hajay__mestery: multi node == multiple network controllers right?17:11
SukhdevI have basic setup working with Jenkins and Gerrit trigger - have specific questions - here we go:17:11
emaganarossella_s: harder even using devstack?17:11
mesteryrossella_s: Although, for the most part, I am thinking we can just spin up a multi-node devstack and run Tempest against that, Is that what you were thinking?17:11
mesteryhajay__: Multi-node is multiple compute instances.17:11
Sukhdev1) What kind of traffic are we expecting - this will determine how many VMs I need to allocate17:11
anteayarossella_s: also be aware that as of yet -infra has no structure for multi-node testing, though we are aware there is a need17:11
mesterySukhdev: I think the Tempest tests just use pings for verification that things have spun up correctly.17:12
aveigamestery: are these tests required to run under devstack?17:12
SumitNaiksatamSukhdev: in general, isn't this what each vendor will determine?17:12
mesteryanteaya rossella_s: Yes, thus my combining the two here.17:12
anteayaright17:12
mesteryaveiga: Not required, no, but likely easier for most.17:12
emaganaanteaya: so, you mean that in current tempest tests, is all in one node?17:12
Sukhdev2) I was trying to use the devstack script used for the present tempest gate. I cannot find it - does anyone have any clue? http://ci.openstack.org/devstack-gate.html17:12
aveigaI actually have a few nits to pick with devstack, since it's not a friendly player with v617:12
mesteryemagana: Yes, there is no multi-node gate testing at this point.17:12
mesteryaveiga: I think we should file bugs and fix those issues in devstack :)17:13
anteayaemagana: yes, one node currently for tempest test with -infra check and gate17:13
*** amotoki has joined #openstack-meeting-alt17:13
rossella_semagana: anyway I think even with devstack, multi node it tough17:13
emaganamestery: No wonder we have so many bugs! Not complaining, just raising a good point to improve!17:13
*** rudrarugge has joined #openstack-meeting-alt17:13
mesterySukhdev: devstack itself comes from its own git repo, right?17:13
aveigamestery: agreed, but stretched too thin at the moment17:13
dkehnSukhdev: https://github.com/openstack-infra/devstack-gate17:13
mesteryemagana: Agreed, it's on the list of things to address soon.17:13
mesteryemagana: But multiple nodes are slightly complicated in the gate due to things like IP address needs, and the functionality of the underlying public cloud these things run on, etc.17:14
emaganarossella_s: agree, just tried with a single node and failed most of the tests :-(17:14
*** SergeyLukjanov has joined #openstack-meeting-alt17:14
rossella_semagana: I sympathize17:14
rossella_swe should get there anyway17:14
mesteryIMHO, and marun and I chatted about this, but running everything on a single node means that node is incredibly CPU starved at different points.17:14
*** jtomasek has quit IRC17:15
mesteryAnyways, that's a different point I think, though tangentially related to this discussion of multi-node testing.17:15
*** NehaV has quit IRC17:15
*** demorris has joined #openstack-meeting-alt17:16
emaganamestery: so, recommendation is to use a baremetal node instead of VM?17:16
mesteryemagana: That is up to the plugin maintainers who are doing the third-party testing :)17:16
Sukhdev3) I saw an email from salvatore regarding the patches that we (vendors) are supposed to test - has anybody been able to set up Jenkins to pick patches that impact e.g. neutron.db, neutron.api, etc - how do you create such a filter?17:16
mesteryFor example, we will not use bare metal, we will use VMs for the Cisco plugin testing.17:16
mesteryemagana: But our plan is to spin up a multi-node devstack environment and run Tempest against that.17:17
aveigahonestly, I think multi-node should be required.  If you can't pass packets between instances, then what's the point?17:17
emaganamestery: got it!17:17
mesteryaveiga: Agreed, and it's easier for the third-party stuff for sure, as it's under control of vendors.17:17
emaganaaveiga: +117:17
anteayaSukhdev: if your next issue is filtering, make a note in the etherpad and I can try to follow up to get you some answers17:17
mesterySukhdev: I have not. Can you add that under the issues section?17:17
mesteryanteaya: Awesome, thanks for the help there!17:17
anteayasure17:18
*** NikitaKonovalov has joined #openstack-meeting-alt17:18
*** lsmola has quit IRC17:18
*** NehaV has joined #openstack-meeting-alt17:18
rudraruggeWe are starting with a single node to get this going at Contrail17:18
SukhdevI will - thanks17:18
SukhdevAt Arista, we are also starting with a single node first17:18
rudraruggeWe have the same question regarding filtering of tests17:19
*** dark_knight_ita is now known as marcol17:19
emaganarudrarugge: I almost ask the same question.17:19
emaganadoes anyone know how to filter tests?17:20
clarkbinfra uses Zuul to communicate between gerrit and jenkins. Jenkins is capable of filtering based on file and branch and so on. Not sure about the gerrit jenkins plugin17:20
clarkb*Zuul is capable of filtering based on file and branch and so on17:20
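A rough illustration of the file-based filtering Sukhdev is asking about, in the spirit of Zuul's file matchers; the regexes are examples only, and the Gerrit query flags used here are an assumption about what the upstream server permits:

    import json
    import re
    import subprocess

    WATCHED = [re.compile(r"^neutron/db/"), re.compile(r"^neutron/api/")]

    def touched_files(change_number):
        # Ask Gerrit which files the current patch set touches.
        out = subprocess.check_output(
            ["ssh", "-p", "29418", "review.openstack.org", "gerrit", "query",
             "--format=JSON", "--files", "--current-patch-set", str(change_number)])
        record = json.loads(out.splitlines()[0])
        files = record.get("currentPatchSet", {}).get("files", [])
        return [f["file"] for f in files]

    def should_test(change_number):
        # Trigger only when the patch touches one of the watched areas.
        return any(p.match(f) for f in touched_files(change_number) for p in WATCHED)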
anteayaalso be sure to look at smokestack code: https://github.com/dprince/smokestack17:20
rudraruggeWe ran 1100 tempest tests and passed 800. 300 failed but are unrelated to our plugin. We wanted to be able to not run these tests17:20
anteayathat will be my first stop trying to address the filtering question17:20
mesterygood data points here rudrarugge!17:21
mesteryI just updated the etherpad to add a section around what people's setups look like.17:21
rudraruggeThanks anteaya17:21
*** gkleiman has quit IRC17:21
mesteryPlease add your plugin/vendor link there, and we can flesh that out as well.17:21
clarkbrudrarugge: oh filtering in that manner. tempest tests are tagged with attributes, I am sure the qa team can give you answers on filtering those17:21
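For illustration, the attribute tagging clarkb refers to looks roughly like this; tempest ships its own decorator and selection flags, so treat this as a sketch of the idea rather than tempest's actual API:

    # A simple decorator that records attributes on a test method; a runner or
    # wrapper script can then skip tests whose attributes don't apply to the
    # plugin under test.
    def attr(**kwargs):
        def decorator(test):
            for key, value in kwargs.items():
                setattr(test, key, value)
            return test
        return decorator

    class NetworkBasicOps(object):  # stand-in for a tempest test class
        @attr(type='smoke', service='network')
        def test_cross_tenant_connectivity(self):
            pass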
mesteryThe idea would be to share this info with others again as datapoints.17:21
SukhdevCan we all agree to share our knowledge on the filtering issue on the etherpad, please?17:21
anteayazuul repo: http://git.openstack.org/cgit/openstack-infra/zuul/tree/17:21
mestery+1 to that Sukhdev.17:21
rossella_s+117:21
emagana+117:22
rudrarugge+117:22
mesterySukhdev: Can you drive the filtering discussion on the mailing list? Start a thread for that please, or continue the existing one if it's there.17:22
*** itzikb has joined #openstack-meeting-alt17:22
Sukhdev<mestery>: I will do that17:23
*** karthik_ has joined #openstack-meeting-alt17:23
mesteryOK, what to discuss next?17:23
SukhdevNext issue -17:23
emaganaSukhdev: And please, any data points on the filtering, copy them to the etherpad17:23
Sukhdevbaremetal vs VM17:23
mesteryemagana: +1 to that too!17:23
mesterySukhdev: I think that is a question left up to the implementor, right?17:23
mesterySukhdev: Are you looking for recommendations?17:24
Sukhdevif we are supposed to do devstack for each patch, baremetal will not scale - do you guys agree?17:24
mesteryYes17:24
rudraruggeYes17:24
irenabyes17:24
mesteryVMs can spin up on demand and scale, which is why we're going that route.17:24
mesteryNow, if your plugin depends on some special HW, you are likely left with bare metal as your only option.17:25
mesteryBut for the most part, I would expect everyone to be able to run in virtual environments.17:25
mesteryBut again, it's really plugin dependent.17:25
SumitNaiksatamfolks, i think we should first focus on getting an end to end workflow sorted out17:25
amotokihi! i think baremetal vs vm is not directly related to testing. it is up to your plugin.17:25
SumitNaiksatamstarting from getting the triggers17:25
mesteryamotoki: Agree.17:25
SukhdevThis is what I am thinking about doing - create a VM from the master branch and every time there is a patch, clone the VM, apply the patch, run tests and kill the VM - what do you guys think?17:25
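Sketched out, the per-patch loop Sukhdev proposes might look like the following; every helper here (clone_vm, run_in_vm, destroy_vm, report_vote) is a hypothetical placeholder for whatever hypervisor, SSH, and Gerrit tooling a vendor actually uses:

    def handle_patchset(event, helpers):
        # 'helpers' is assumed to expose clone_vm/run_in_vm/destroy_vm/report_vote;
        # none of these are real APIs, they just name the steps in the workflow.
        vm = helpers.clone_vm("devstack-master-snapshot")  # pre-built from master
        try:
            helpers.run_in_vm(vm, "apply the patch set from the Gerrit event")
            helpers.run_in_vm(vm, "restack devstack with the vendor plugin enabled")
            result = helpers.run_in_vm(vm, "run the agreed tempest subset")
            helpers.report_vote(event, +1 if result.ok else -1)
        finally:
            helpers.destroy_vm(vm)  # kill the clone regardless of outcome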
jbrendelSumit: Agree17:26
SumitNaiksatamto actually voting back17:26
mesterySumitNaiksatam: Also agree.17:26
*** zhiyan has quit IRC17:26
SumitNaiksatamwe can then think of how we can scale better, etc.17:26
ivar-lazzaroSumitNaiksatam: +117:26
SumitNaiksatamlets first flesh the complete workflow on the etherpad17:26
mesterySumitNaiksatam: I'm hoping we can document that on the etherpad.17:26
SumitNaiksatami see that some folks are further along than others17:26
mesterySumitNaiksatam: I have a rough example of that already on the pad.17:26
SumitNaiksatamfor those who are, can you please document up to the point that you have reached?17:26
mesterySumitNaiksatam: I have added a section at the bottom for vendors/open source plugins to document that there.17:27
SumitNaiksatammestery: it might be beneficial to get the exact steps17:27
SumitNaiksatammestery: great17:27
mesterySumitNaiksatam: Agreed.17:27
SumitNaiksatamif we can collectively get to the same point, it will help to make much better progress17:27
SumitNaiksatamand ask better questions17:28
SumitNaiksatamand we will all meet the I2 deadline :-)17:28
SukhdevI have been playing around with this in a VM - I have it almost working except for the devstack and filtering issues - I will try to post whatever I can17:28
luQAs SumitNaiksatam: you mean which tempest test pass and/or fail?17:28
*** amytron has quit IRC17:28
SumitNaiksatamSukhdev: awesome!!17:28
mesteryYes, lets all document on the etherpad to share information.17:28
emaganaSukhdev: It will be very useful!17:28
*** jaypipes has quit IRC17:28
*** amytron has joined #openstack-meeting-alt17:29
mesteryThere are two things here: 1) The setup to get the environment up and running. 2) Ensuring you can get a clean Tempest run with your plugin.17:29
rudraruggewe will also update the document17:29
mesteryBoth are important.17:29
SumitNaiksatamrudrarugge: great, thanks17:29
itzikbI have a basic question: Should every patch be verified17:29
mestery#2 can be worked on in parallel and in fact may expose bugs in your plugin if you're not already running Tempest tests against it.17:29
rudraruggemestery: agreed on  the 2 things17:29
SumitNaiksatammestery: the latter part is in parallel17:29
emaganaitzikb: I think just your plugin patches!17:29
SumitNaiksatammestery: you said it17:29
mesteryitzikb: Not every patch, there is an email from salv-orlando with what he recommended to filter against, see earlier discussion in the meeting logs for this meeting.17:30
mesteryOK, so lets focus on documenting where everyone is at on the etherpad.17:30
Sukhdev<mestery> regarding your point 2, tempest tests are failing in stable/havana, but they pass on the master branch -17:30
SumitNaiksatamemagana: i don't think it will be just your plugin patches17:30
*** amcrn has joined #openstack-meeting-alt17:30
SumitNaiksatamemagana: although you may choose to do that17:30
mesteryAnd sending emails to openstack dev tagged as [neutron] [third-party-testing] when appropriate.17:30
SumitNaiksatammestery: thats a good suggestion on the "filter" :-)17:31
mesteryemagana: You need to run against other patches as well, anything from Jenkins for example.17:31
itzikb@mestery: Just Neutron ?17:31
emaganaSumitNaiksatam:  I do agree that testing every patch will be very good for the benefit of the third party plugin but we will end up re-creating the whole Infra set-up which I don't think will be easy!17:32
Sukhdevplease read an email from Salvatore regarding what patches should we be testing - it is very clear17:32
SumitNaiksatamemagana: there is middle ground between testing every patch and only your plugin17:32
SumitNaiksatamemagana: but again i think that is the vendor's choice17:32
amotokiwe need to run tests for patches which change at least your plugin and the COMMON tests.17:32
*** doug_shelley66 has joined #openstack-meeting-alt17:32
mesteryamotoki: Agreed.17:32
*** ashaikh has joined #openstack-meeting-alt17:32
emaganaSumitNaiksatam: Understood!17:32
mesteryOK, so is there anything else to discuss today?17:33
anteayaemagana: actually it is very easy to recreate the infra set-up locally17:33
anteayaif you choose to do that it would save you a lot of time17:33
mesteryOr should we focus on filling out the etherpad before meeting next week?17:33
*** dougshelley66 has quit IRC17:33
anteayaclarkb: ^17:33
*** aignatov has quit IRC17:33
emaganaanteaya: looking forward to getting there! Not sure how many VMs we will need17:33
anteayayou can run zuul jenkins and nodepool locally17:33
mesteryanteaya: I think that's a good idea actually.17:33
anteayayou can set as many vms as you want17:33
clarkbanteaya: emagana: right there are other folks doing it and it is getting easier all the time. I would actually suggest using the infra toolchain to solve many of these problems. zuul, devstack-gate, and nodepool in particular17:34
Sukhdev<anteaya>: can you post this on etherpad, please?17:34
*** nati_ueno has quit IRC17:34
rossella_santeaya: is there a page that explains how to do that?17:34
emaganaclarkb: Yes, please, If you have already a cookbook, we will just follow it!17:34
clarkbrossella_s: there is! http://ci.openstack.org/running-your-own.html17:34
*** hajay__ has left #openstack-meeting-alt17:35
rossella_sclarkb: thanks!17:35
*** beyounn has joined #openstack-meeting-alt17:35
mesteryclarkb: Thanks for that!17:35
*** brents has quit IRC17:35
clarkbthat document covers A-Z and you may not need all of the pieces17:35
clarkbdevstack-gate, zuul, and nodepool are individually documented if you want to use parts piecemeal17:36
mesteryThanks for that link clarkb!17:36
mesteryI think that will help get people going quickly.17:36
marcolclarkb: thanks for the link17:37
anteayaSukhdev: posted17:38
anteayarossella_s: yes, linked to instructions in the etherpad17:38
rossella_santeaya: thanks17:38
anteayawhich is clarkb's link17:38
mesteryOK, should we each circle back now, spend some time digesting this, ask questions on-list and in-channel before next week's meeting then?17:38
anteaya:D17:38
rossella_syees :)17:38
mesteryOK, lets do that.17:39
emaganathe magic of a link! all is easier!17:39
mesteryI'll setup another IRC meeting for next week during an Asian friendly time spot so amotoki and gongysh can join. :)17:39
Sukhdev<anteaya>: thanks17:39
mesteryThanks for joining us this week everyone!17:39
anteayamestery: great idea17:39
anteayaSukhdev: :D17:39
hemanthravithanks17:39
amotokimestery: really appreciated.17:39
mesteryLet's see what progress we can make individually and as a team on this to make the testing coverage better for all of Neutron and its plugins!17:40
mesteryamotoki: No problem, and thanks for joining us this week!17:40
mestery#endmeeting17:40
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"17:40
openstackMeeting ended Thu Dec 12 17:40:16 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:40
openstackMinutes:        http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/networking_third_party_testing.2013-12-12-17.00.html17:40
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/networking_third_party_testing.2013-12-12-17.00.txt17:40
SukhdevThanks a bunch17:40
openstackLog:            http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/networking_third_party_testing.2013-12-12-17.00.log.html17:40
*** hemanthravi has quit IRC17:40
*** Izik_Penso has left #openstack-meeting-alt17:41
*** Dane_ has quit IRC17:41
*** NikitaKonovalov has quit IRC17:41
*** marcol has left #openstack-meeting-alt17:41
*** hajay has joined #openstack-meeting-alt17:42
*** beyounn has quit IRC17:42
*** amotoki has quit IRC17:42
*** aveiga has left #openstack-meeting-alt17:42
*** NikitaKonovalov has joined #openstack-meeting-alt17:43
*** brents has joined #openstack-meeting-alt17:44
*** marcol has joined #openstack-meeting-alt17:44
*** shivh has joined #openstack-meeting-alt17:45
*** emagana has quit IRC17:46
*** hajay has quit IRC17:47
*** irenab has quit IRC17:48
*** pcm_ has left #openstack-meeting-alt17:49
*** alazarev has joined #openstack-meeting-alt17:50
*** allyn has quit IRC17:50
*** IlyaE has quit IRC17:51
*** rossella_s has quit IRC17:52
*** rsblendido has quit IRC17:52
*** NehaV has quit IRC17:52
*** NehaV1 has joined #openstack-meeting-alt17:52
*** s3wong has quit IRC17:53
*** bob_nettleton has joined #openstack-meeting-alt17:55
*** Sukhdev has quit IRC17:55
*** aignatov has joined #openstack-meeting-alt17:55
*** aignatov has quit IRC17:55
*** aignatov has joined #openstack-meeting-alt17:57
*** jbrendel has quit IRC17:58
*** crobertsrh has joined #openstack-meeting-alt17:58
*** mattf has joined #openstack-meeting-alt17:58
*** bdpayne has joined #openstack-meeting-alt17:58
*** derekh has quit IRC17:58
*** marcol has quit IRC17:59
SergeyLukjanovsavanna team meeting will be here in 5 mins17:59
*** 77CAAS3HQ has quit IRC17:59
*** colinmcnamara has quit IRC17:59
*** SumitNaiksatam has left #openstack-meeting-alt17:59
*** nati_ueno has joined #openstack-meeting-alt18:00
*** jmaron has joined #openstack-meeting-alt18:01
*** aignatov has quit IRC18:01
*** aignatov has joined #openstack-meeting-alt18:01
*** Dinny has quit IRC18:01
*** aignatov has quit IRC18:02
*** jmaron has quit IRC18:02
*** tmckay has joined #openstack-meeting-alt18:02
*** aignatov has joined #openstack-meeting-alt18:02
*** NikitaKonovalov has quit IRC18:03
*** jmaron has joined #openstack-meeting-alt18:03
*** hajay has joined #openstack-meeting-alt18:05
*** NikitaKonovalov has joined #openstack-meeting-alt18:05
*** karthik_ has quit IRC18:05
*** dmitryme has joined #openstack-meeting-alt18:06
*** ErikB has joined #openstack-meeting-alt18:07
SergeyLukjanovsavanna folks, are you around?18:08
aignatovo/18:09
tmckayI'm here18:09
SergeyLukjanovHWX folks?18:09
akuznetsovHi18:09
ErikBHi18:10
bob_nettletonHi18:10
SergeyLukjanov#startmeeting savanna18:10
openstackMeeting started Thu Dec 12 18:10:14 2013 UTC and is due to finish in 60 minutes.  The chair is SergeyLukjanov. Information about MeetBot at http://wiki.debian.org/MeetBot.18:10
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.18:10
*** openstack changes topic to " (Meeting topic: savanna)"18:10
openstackThe meeting name has been set to 'savanna'18:10
SergeyLukjanov#topic Agenda18:10
*** ylobankov has joined #openstack-meeting-alt18:10
*** openstack changes topic to "Agenda (Meeting topic: savanna)"18:10
SergeyLukjanov#link https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Next_meetings18:10
SergeyLukjanov#topic Action items from the last meeting18:10
*** openstack changes topic to "Action items from the last meeting (Meeting topic: savanna)"18:10
SergeyLukjanovboth action items from the last meeting are on me and not yet fully completed18:10
*** mozawa has joined #openstack-meeting-alt18:11
SergeyLukjanov#action SergeyLukjanov to check that all blueprints created and ping guys to make them if not18:11
SergeyLukjanov#action SergeyLukjanov add links to the blueprints to roadmap18:11
SergeyLukjanov#topic News / updates18:11
*** openstack changes topic to "News / updates (Meeting topic: savanna)"18:11
SergeyLukjanovfolks, please18:11
*** hajay has left #openstack-meeting-alt18:11
SergeyLukjanovwe've completely moved our unit tests to be executed by testr18:12
SergeyLukjanovand working on integration tests18:12
crobertsrhI'm currently working on job relaunch from the UI.  I'm getting close to having something working.18:12
mattfi filed a cr for the foundation of the cli, i'd like lots of feedback before i add more to it - https://review.openstack.org/#/c/61565/18:12
tmckayI am working on https://blueprints.launchpad.net/savanna/+spec/edp-oozie-java-action to add general jar jobs from oozie (as opposed to mapreduce jobs)18:12
SergeyLukjanovin addition I'm working on tempest tests18:12
aignatovI'm continuing to work on the Heat integration patch, it is open for review.18:12
aignatovSo you are welcome18:12
aignatovbasic functionality is implemented18:12
SergeyLukjanovmattf, adding an item about cli before the open discussion18:13
aignatov#link https://review.openstack.org/#/c/55978/18:13
SergeyLukjanovdmitryme is working on the unified agents proposal18:13
NikitaKonovalovintegration tests appear to depend on nose and are failing now18:13
aignatovdmitryme is here :)18:14
aignatovI saw him joined recently18:14
SergeyLukjanovNikitaKonovalov, that's because we're using nose.attr in tests, it should be replaced with testr analogue18:14
NikitaKonovalovso we need to make some changes to get rid of nose in the test code18:14
SergeyLukjanovNikitaKonovalov, I'll take a look on it after the meeting18:14
NikitaKonovalovok18:14
SergeyLukjanovany other updates?18:15
jmaronstill working on rack awareness for hdp, but now detoured into making EDP work in neutron over private networks18:15
ErikBWe have put together a script to install Savanna and are wondering if it makes sense to contribute?18:15
ErikB(Savanna UI and API)18:16
mattfErikB, is it anything like the puppet module that is under review?18:16
jmaronhaven't seen much work/progress on puppet module...18:16
SergeyLukjanovjmaron, great, but I have no ideas for now for the correct approach to solve your problem18:16
ErikBI haven't looked at the puppet piece, so not sure.18:16
aignatovErikB, how is installation implemented?18:16
ErikBbash script18:16
mattfoh, i've one of those too18:17
jmarontrying to work around the need to build a fully functional remote interface during periodic tasks18:17
mattfi'm hoping to ditch it for the puppet modules18:17
aignatovok, I think we may add something to the savanna-extra repo18:17
SergeyLukjanovErikB, the muppet manifests are under review atm18:17
ErikBOK, I will check it out.18:17
SergeyLukjanovErikB, and I think it'll be merged soon18:17
jmaronmuppet?  I like it :)18:17
SergeyLukjanovErikB, stackforge/puppet-savanna18:17
mattfErikB, https://review.openstack.org/#/c/61156/18:17
SergeyLukjanovjmaron, yup :)18:17
mattfSergeyLukjanov, probably today18:18
SergeyLukjanovmattf, my +2 in on it18:18
mattfit's on my queue18:18
jmaron(now we definitely need to create a puppet derivative called muppet)18:18
SergeyLukjanovjmaron, puppet fork called muppet ;)18:18
mattfjmaron, +118:19
SergeyLukjanovbtw Jenkins was in a list of J release names ;)18:19
aignatovnames for openstack?18:19
dmitrymeaignatov: names for the release18:20
SergeyLukjanovyep18:20
dmitrymelike Grizzly was for G18:20
dmitrymeor Havana for H18:20
mattfhttps://wiki.openstack.org/wiki/Release_Naming18:20
SergeyLukjanovI'll propose Savanna for S release18:20
*** mozawa has quit IRC18:20
*** jjmb has quit IRC18:20
dmitryme:-)18:20
mattfSergeyLukjanov, i've people stumbling over savanna v gnu savannah already!18:21
SergeyLukjanovmattf, yep, I know18:21
*** itzikb has quit IRC18:21
dmitrymethey actually have a very strict naming convention - use names of places near the Summit location18:21
SergeyLukjanovdmitryme, maybe the S summit will be somewhere in savanna18:21
ErikB:-)18:22
aignatovbut from Russian point of view it looks very good because Savanna is Саванна :)18:22
alazarevor in Savannah :)18:22
*** mcohen2 has quit IRC18:22
SergeyLukjanovfunny report - http://stackalytics.com/report/users/slukjanov18:23
SergeyLukjanovok, looks like there are no more updates18:23
SergeyLukjanovlet's move on18:23
*** mcohen2 has joined #openstack-meeting-alt18:23
SergeyLukjanov#topic Roadmap update / cleanup18:23
*** openstack changes topic to "Roadmap update / cleanup (Meeting topic: savanna)"18:23
SergeyLukjanovno updates from the last meeting18:23
SergeyLukjanovaction items are still on me18:24
SergeyLukjanovI hope that I do it this week18:24
mattfSergeyLukjanov, i think that graphic suggests you need a vacation18:24
SergeyLukjanovmattf, planning 2w vacation from end of Dec18:24
*** crobertsrh has quit IRC18:25
aignatovon the next week?18:25
aignatovfrom18:25
alazarevSergeyLukjanov: this is not vacation, this is just russian holidays, aren't they?18:25
SergeyLukjanovalazarev, yup, but I'm extending it to be full 2w18:26
SergeyLukjanov#topic savanna client cli18:26
*** openstack changes topic to "savanna client cli (Meeting topic: savanna)"18:26
SergeyLukjanovmattf, please, could you please amke a small intro18:26
SergeyLukjanovmake*18:26
mattfsure, there's nearly a savanna cli18:27
mattfhttps://blueprints.launchpad.net/python-savannaclient/+spec/python-savannaclient-cli18:27
mattfinitial commit is https://review.openstack.org/#/c/61565/18:27
mattfthe foundation is based on novaclient, with minimal changes18:27
mattfi'd appreciate feedback both on the blueprint (incomplete) and especially on the initial commit.18:27
mattfi don't want to go too far into the cli impl if we aren't agreed on the basics18:28
mattf.18:28
SergeyLukjanovI don't really like it looks like in nova client... have you tried to diff into the some other clients?18:28
aignatovmattf, my first comment - please remove all unused commented code ;)18:28
mattfwill you quantify "don't really like it looks like"?18:29
SergeyLukjanovto clarify - mattf, I really appreciate your work on it18:29
mattfaignatov, so i'm purposely keeping that code so we have an idea of how far we've drifted from nova18:29
SergeyLukjanovmattf, I've looked into the neutron client, it looks more object-oriented18:29
SergeyLukjanovmattf and testable/supportable18:29
SergeyLukjanovmattf, but of course, a lot of lines of code18:30
jmaronthe nova comparison is with regard to command options, code structure, or both?18:30
jmaroncommand option syntax18:30
mattfSergeyLukjanov, i didn't dig into the neutron code. the keystone folks wanted me to use theirs, but it was only a partial implementation.18:30
SergeyLukjanovand there is a common client in oslo, I don't actually know which projects are using it18:30
mattfSergeyLukjanov, seemed like none18:31
SergeyLukjanovSergeyLukjanov, yep18:31
SergeyLukjanovoh18:31
SergeyLukjanovSergeyLukjanov, how are you?18:31
SergeyLukjanovSergeyLukjanov, fine, thx18:31
tmckay:)18:31
mattfjmaron, code structure. many OS CLIs are based on novaclient. so when they have to migrate to something shared, we'll be close to whatever migration path is created.18:31
jmaronmattf, thx18:31
mattfSergeyLukjanov, vacation, definitely vacation18:32
SergeyLukjanovmattf, :)18:32
SergeyLukjanovmattf, in two words, I'd really like to have a CLI in our client18:32
mattfSergeyLukjanov, does neutron already have a test harness for their cli?18:32
SergeyLukjanovmattf, I'm not sire18:32
SergeyLukjanovsure*18:32
mattfthat'd be a nice reason to jump to another foundation18:33
*** crobertsrh has joined #openstack-meeting-alt18:33
SergeyLukjanovthere tons of tests here18:33
SergeyLukjanovhttps://github.com/openstack/python-neutronclient/tree/master/neutronclient/tests/unit18:33
mattfbasically, i'm very happy to steal from other clients for the framing. i'm really just interested in adding savanna specific walls and furniture18:33
aignatovmattf, did you look at heat client, it looks nice and simple18:34
aignatovand contains shell tests :)18:34
aignatovhttps://github.com/openstack/python-heatclient18:34
SergeyLukjanovmattf, it'll be great if you take a look at the 'new' clients18:34
mattfaignatov, i did, the main reason to go w/ nova is migration path18:34
mattfthe keystone folks are trying to create a standard client, with security done right18:35
mattfbut it's not done18:35
SergeyLukjanovmattf, :(18:35
mattfi imagine that and others will eventually migrate to something in oslo, but it doesn't exist right now18:35
mattfand my aim is to make a savanna cli, not a framework for creating clis18:36
*** sarob has joined #openstack-meeting-alt18:36
aignatovmaybe we should create new initiative in OpenStack, CLI as a service ;)18:36
mattfsounds like folks would like me to take a second look at heat and neutron?18:36
mattfaignatov, indeed, you can be ptl18:36
mattfwhen you have something that works, i'll migrate to it18:36
* mattf grins18:36
SergeyLukjanovaignatov, CLIaaS18:37
aignatovmattf, lol18:37
*** NehaV has joined #openstack-meeting-alt18:37
jmaronwithout the 'I' would be classier...18:37
SergeyLukjanovjmaron, nice)18:37
*** clayb has left #openstack-meeting-alt18:37
mattfSergeyLukjanov, oh wait18:37
SergeyLukjanovmattf, I'm here, don't worry :)18:38
mattfi remember neutronclient now. instead of having a module system they just do it all in a single shell.py18:38
mattf(forgive me, i reviewed most of the clients back around summit time)18:38
mattfi wasn't too interested in that structure, especially since we'll have to have multiple api versions18:39
mattfwe arguably already have 1.0 and 1.1, soon also 2.018:39
mattfthough we've stuck them together in a single "api" module18:39
SergeyLukjanovmattf, it looks like neutronclient has a file/class per operation18:40
*** NehaV1 has quit IRC18:40
mattfSergeyLukjanov, so that and the fact no one else i could see was using neutron is why i backed away from it18:40
SergeyLukjanovhttps://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/network.py#L11418:40
SergeyLukjanovmattf :)18:40
mattfSergeyLukjanov, the way they pull it together isn't as flexible as the api version approach18:40
SergeyLukjanovmattf, btw I'm just trying to cover all approaches18:41
SergeyLukjanovmattf, nova client isn't a bad approach by default18:41
SergeyLukjanovso if we do something not too bad, it'll be enough to understand what is bad and migrate earlier18:42
mattfaignatov, heatclient is definitely simpler than novaclient. it actually duplicates some of the functionality we provide in our Client() too18:42
*** yogesh has joined #openstack-meeting-alt18:42
mattfSergeyLukjanov, i'm happy for the line of questioning, keeps decisions open and well reasoned18:42
mattf(or at least partially reasoned)18:43
mattfin the end, the impl is pretty simple. there's a shell.py in savannaclient that loads other shell.py from variable api version modules. the api version shell.py implements do_blah() methods that are the cli verbs18:43
mattfso we have savannaclient/shell.py and savannaclient/api/shell.py18:44
mattfthe outer one also parses the auth information and creates the Client() for use w/i the do_ methods18:44
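A minimal sketch of the layout mattf is describing, with illustrative names only (the actual code is in the review linked above):

    # savannaclient/api/shell.py -- version-specific verbs live here.
    def do_cluster_list(client, args):
        """savanna cluster-list: print clusters visible to the current tenant."""
        for cluster in client.clusters.list():
            print(cluster.id, cluster.name)

    # savannaclient/shell.py -- the outer shell parses the auth options, builds
    # the Client(), picks the api-version module, and dispatches to its do_ verbs.
    def dispatch(verb, client, args, version_module):
        action = getattr(version_module, "do_" + verb.replace("-", "_"))
        return action(client, args)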
aignatovmattf, I'd like this approach18:45
mattfaignatov, how strongly do you feel about heat, because we can debate it some more after the meeting18:45
aignatovmattf, i've just had a quick look during this meeting18:45
*** shivh has quit IRC18:45
SergeyLukjanovmattf, I'll agree with something that'll work ok :)18:45
jmaroncurious:  do we see a need for savanna-manage client, crossing tenant boundaries for management tasks across cluster instances?18:46
*** luQAs has quit IRC18:46
SergeyLukjanovjmaron, I think it'll be eventually done by using admin role18:46
*** nati_uen_ has joined #openstack-meeting-alt18:46
aignatovand I've compared heatclient and novaclient code in your patch and formed an opinion :)18:46
mattfjmaron, the client allows you to change the tenant you're working against via env or arg18:47
*** eankutse has quit IRC18:47
mattfaignatov, you'll render an opinion into the review today?18:47
jmaronunderstood.  Just thought an admin may want to get a full picture rather than iterate across tenants18:47
mattfjmaron, ahhh, that's an interesting idea18:48
mattfi don't think the current api allows that very easily18:48
jmarontrue18:48
aignatovmattf, I'll try18:48
dmitrymeAs for the UI, there is an Admin tab here which allows browsing resources regardless of tenant18:49
dmitrymeit might be worth checking how it works18:49
*** nati_ueno has quit IRC18:49
mattfok, take the rest of this to the review / #savanna ?18:49
SergeyLukjanovjmaron, mattf, current API doesn't allow to do it, but I think that we should add it to v218:49
jmaron+118:49
SergeyLukjanovlooks like a time for open discussion18:49
SergeyLukjanov#topic Open Discussion18:49
*** openstack changes topic to "Open Discussion (Meeting topic: savanna)"18:49
tmckayedp question, we can follow up on #savanna.  With the addition of java actions to oozie support, I think we need a name change for job types.  Currently we have Hive, Pig, and JAR.18:50
tmckayI think we need Hive, Pig, Mapreduce, and ??18:50
tmckayBecause current JAR really means "mapreduce"18:51
*** _ozstacker_ has joined #openstack-meeting-alt18:51
*** ozstacker has quit IRC18:51
tmckayit will be a different workflow generator, arguments allowed, etc18:51
akuznetsovtmckay JavaAction?18:51
dmitrymetmckay: and what does added java action do?18:51
akuznetsovpossible we should add a oozie workflow18:52
tmckayJava actions allow Oozie to run a main(), instead of building a mapreduce job out of the specified mapper and reducer classes18:52
dmitrymeaha, I see18:52
tmckayso, the hadoop example pi estimator is a java action18:52
tmckayIt can't be run as mapreduce directly without a rewrite18:52
dmitrymeindeed it is hard to set different names18:53
tmckayjava action launches a single mapper, which runs main(), which typically launches other mappers and reducers18:53
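For context, a rough sketch of what an EDP workflow generator for the java action might emit; the element names follow the Oozie workflow schema as generally documented, but the real Savanna generator may structure this differently:

    def java_action_xml(main_class, args,
                        job_tracker="${jobTracker}", name_node="${nameNode}"):
        # Oozie's java action runs main_class's main() in a single launcher task,
        # instead of wiring up explicit mapper/reducer classes as mapreduce does.
        arg_elems = "".join("<arg>%s</arg>" % a for a in args)
        return ("<action name='job'><java>"
                "<job-tracker>%s</job-tracker>"
                "<name-node>%s</name-node>"
                "<main-class>%s</main-class>%s"
                "</java><ok to='end'/><error to='fail'/></action>"
                % (job_tracker, name_node, main_class, arg_elems))

    # e.g. the pi estimator tmckay mentions (class name assumed for illustration):
    # java_action_xml("org.apache.hadoop.examples.QuasiMonteCarlo", ["10", "100"])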
jmaronI see a HelloWorld example coming down the pipe….18:53
aignatovtmckay, just Java, but JAR should be renamed to MapReduce defenitely18:53
tmckayYes, that was my thought.... Hive, Pig, MapReduce, and Java18:54
dmitrymeaignatov: makes sense18:54
SergeyLukjanovare there any job types in Oozie?18:54
tmckaythat will cause changes in the UI, docs, etc...18:54
SergeyLukjanovwe can use them if yes18:54
tmckayI think just the action names18:54
*** nati_uen_ has quit IRC18:54
SergeyLukjanovyep, but looks like we really need it18:54
tmckayoh, agreed, just making a note :)18:54
SergeyLukjanovHive, Pig, MapReduce, and Java works for me18:55
SergeyLukjanovtmckay :)18:55
*** nati_ueno has joined #openstack-meeting-alt18:55
*** eankutse has joined #openstack-meeting-alt18:55
tmckaycrobertsrh, this means you too ^^ :)18:55
aignatovagreed with akuznetsov, Oozie workflow should be supported as well :) but didn't know how it works :) :)18:55
tmckayakuznetsov, yes, we should add oozie workflows from the user too18:56
tmckayfirst, java.  Then, oozie.18:56
tmckayChristmas present for me, heh18:56
SergeyLukjanovand it'll be great to be able to configure oozie to take jobs from swift18:56
aignatovBtw, MapReduce action has sub actions streaming and pipe18:56
tmckayyes, also on the roadmap I think18:56
akuznetsovyes, oozie is most complicated because it should involve all types of jobs: pig, hive, jar, etc...18:57
aignatovalso, tmckay, you know that right now EDP uses oozie-workflow schema 0.2, but we have 4.0.0 Oozie in the images, which allows running schema 0.4 and greater18:57
akuznetsovstreaming and pipe will require some image preparation if, for example, pipes use some specific python18:58
aignatovwhich Oozie is used in HWX plugin?18:58
tmckayaignatov, didn't know that.  Should we switch the schema to 0.4?18:58
jmaronaignatov, not sure about version number.  whatever is distributed with HDP 1.3.218:59
SergeyLukjanovwe're out of time guys, 1 min left18:59
tmckayany need to support both, with a config?18:59
aignatovtmckay: If it has new features allowing you to create workflows in a simpler manner, why not? :)18:59
jmaronquick note:  I've run into an issue with periodic tasks requiring access to a full context (specifically, the service catalog etc).  Trying to work around it, but it makes me think there may be a need for defining "privileged" periodic tasks or allowing such access to existing periodic tasks?  Will ruminate some more, may send email out to list at some point….18:59
mattfjmaron, yikes19:00
SergeyLukjanovjmaron, it's needed to query neutron, yes/19:00
aignatovI only know that the 0.3 or 0.4 Oozie versions contain a <global> tag which allows applying job tracker and name node definitions in a single place19:00
SergeyLukjanov?19:00
SergeyLukjanovthank you all!19:00
SergeyLukjanov#endmeeting19:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"19:00
openstackMeeting ended Thu Dec 12 19:00:18 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)19:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-12-12-18.10.html19:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-12-12-18.10.txt19:00
openstackLog:            http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-12-12-18.10.log.html19:00
SergeyLukjanovlet's move to the #savanna19:00
jmaronSergeyLukjanov, yes19:00
*** bob_nettleton has left #openstack-meeting-alt19:01
*** jbrendel has joined #openstack-meeting-alt19:01
*** mattf has left #openstack-meeting-alt19:01
*** NehaV1 has joined #openstack-meeting-alt19:03
*** tmckay has quit IRC19:04
*** NehaV has quit IRC19:06
*** lblanchard has quit IRC19:07
*** k4n0 has joined #openstack-meeting-alt19:08
*** jbrendel has quit IRC19:11
*** NikitaKonovalov has quit IRC19:12
*** jbrendel has joined #openstack-meeting-alt19:13
*** NikitaKonovalov has joined #openstack-meeting-alt19:14
*** jbrendel has quit IRC19:15
*** NikitaKonovalov has quit IRC19:15
*** crobertsrh has quit IRC19:29
*** vipul is now known as vipul-away19:30
*** NehaV1 has quit IRC19:31
*** NehaV has joined #openstack-meeting-alt19:31
*** jmaron has quit IRC19:33
*** brents has quit IRC19:35
*** electrichead has joined #openstack-meeting-alt19:35
*** gokrokve has joined #openstack-meeting-alt19:40
*** SergeyLukjanov_ has joined #openstack-meeting-alt19:40
*** reaperhulk has joined #openstack-meeting-alt19:42
*** SushilKM__ has joined #openstack-meeting-alt19:43
*** lblanchard has joined #openstack-meeting-alt19:44
*** hemanthravi has joined #openstack-meeting-alt19:44
*** vipul-away is now known as vipul19:44
*** brents has joined #openstack-meeting-alt19:45
*** SushilKM has quit IRC19:46
*** SergeyLukjanov has quit IRC19:46
*** SergeyLukjanov_ is now known as SergeyLukjanov19:46
*** SergeyLukjanov_ has joined #openstack-meeting-alt19:47
*** SergeyLukjanov has quit IRC19:47
*** SergeyLukjanov_ has quit IRC19:48
*** SergeyLukjanov has joined #openstack-meeting-alt19:48
*** akuznetsov has quit IRC19:49
*** zhiyan has joined #openstack-meeting-alt19:50
*** hockeynut has joined #openstack-meeting-alt19:55
*** ativelkov has joined #openstack-meeting-alt19:56
*** igormarnat has joined #openstack-meeting-alt19:56
*** Barker has quit IRC19:57
jraim#startmeeting barbican19:58
openstackMeeting started Thu Dec 12 19:58:36 2013 UTC and is due to finish in 60 minutes.  The chair is jraim. Information about MeetBot at http://wiki.debian.org/MeetBot.19:58
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:58
*** openstack changes topic to " (Meeting topic: barbican)"19:58
openstackThe meeting name has been set to 'barbican'19:58
jraim#topic incubation tasks19:58
*** openstack changes topic to "incubation tasks (Meeting topic: barbican)"19:58
*** jprovazn has quit IRC19:58
*** woodster has joined #openstack-meeting-alt19:58
jraimalright, who is here for the barbican meeting. Raise the hands19:59
jraimo/19:59
electricheado/19:59
reaperhulk\o/19:59
woodster\o/19:59
jraim#agreed reaperhulk is excited to be here19:59
*** Weihan has joined #openstack-meeting-alt19:59
reaperhulkWhen am I not19:59
SheenaG\o/19:59
hockeynutpresent and accounted for19:59
jraimanyone else?20:00
jraimalright, it'll be a short one then20:00
*** Barker has joined #openstack-meeting-alt20:00
jraimlet's run down the list on incubation tasks20:00
jraimso python-barbicanclient is in stackforge now, correct? Any more to do on that one?20:01
markwashseems like we maybe have a meeting timeslot conflict?20:01
*** arnaud has joined #openstack-meeting-alt20:01
jraimmarkwash do we?20:01
reaperhulkuhoh :o20:01
jvrbanaco/20:01
markwashI could be mistaken, I'm very distracted20:01
jraimI've got us here: https://wiki.openstack.org/wiki/Meetings#Barbican_Meeting20:01
*** spredzy has joined #openstack-meeting-alt20:01
markwashwell, its' a wiki, so20:01
electrichead#info yes, python-barbicanclient is in StackForge and also in Launchpad.  All development will be done there moving forward20:01
jraimwhich meeting were you looking for?20:01
*** spredzy has quit IRC20:02
markwashhttps://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting20:02
markwash1400 and 2000 UTC thursdays alternating20:02
markwashtoday's a 2000 UTC day20:02
electricheadlooks like glance has the spot every other week :-\20:02
*** stanlagun has joined #openstack-meeting-alt20:02
*** at872kd has joined #openstack-meeting-alt20:03
jraimweird. it wasn't on the calendar thing I downloaded20:03
jraimno problem20:03
jraimI'll close this and we'll reschedule ours20:03
jraim#info Jarret sucks at meetings, story at 1120:03
markwashsorry for the confusion!20:03
jraim#endmeeting20:03
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"20:03
openstackMeeting ended Thu Dec 12 20:03:28 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:03
openstackMinutes:        http://eavesdrop.openstack.org/meetings/barbican/2013/barbican.2013-12-12-19.58.html20:03
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/barbican/2013/barbican.2013-12-12-19.58.txt20:03
zhiyanthanks jraim20:03
jraimno problem, my fault20:03
openstackLog:            http://eavesdrop.openstack.org/meetings/barbican/2013/barbican.2013-12-12-19.58.log.html20:03
markwashjraim: thanks for being so accommodating!20:03
markwash#startmeeting glance20:03
openstackMeeting started Thu Dec 12 20:03:52 2013 UTC and is due to finish in 60 minutes.  The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.20:03
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:03
*** openstack changes topic to " (Meeting topic: glance)"20:03
openstackThe meeting name has been set to 'glance'20:03
markwashhi glance folks20:03
nikhil__o/20:04
arnaudHi markwash20:04
*** hockeynut has left #openstack-meeting-alt20:04
*** at872kd has quit IRC20:04
ameadehola20:04
*** electrichead has left #openstack-meeting-alt20:04
hemanth_o/20:04
zhiyanhi20:04
stanlagunhi20:04
*** reaperhulk has left #openstack-meeting-alt20:04
markwashso folks I've been a bit absent this week, stuff at work and kitty health issues20:04
gokrokveHi20:04
*** jasonb365 has quit IRC20:04
ativelkovhi20:04
*** ashwini has joined #openstack-meeting-alt20:04
markwashso i really appreciate whoever added stuff to the agenda20:04
*** woodster has left #openstack-meeting-alt20:05
markwash#link https://etherpad.openstack.org/p/glance-team-meeting-agenda20:05
*** SheenaG has left #openstack-meeting-alt20:05
markwashlooks like we have some new folks20:05
igormarnat Hi guys! (looking around) is this a glance meeting?20:05
arnaudyes this is20:05
markwashI suspect that is because of the exciting ML threads about glance and scope expansion20:05
* markwash looks for links20:05
*** spredzy has joined #openstack-meeting-alt20:05
markwash#link http://lists.openstack.org/pipermail/openstack-dev/2013-December/021233.html20:06
markwashI propose we start off with some discussion of this20:06
markwash#topic glance and heatr20:06
*** openstack changes topic to "glance and heatr (Meeting topic: glance)"20:06
*** spredzy has quit IRC20:07
markwashfor those that aren't familiar, i suggest that you review the email link I posted20:07
*** SlickNik has left #openstack-meeting-alt20:07
markwashbut the basic idea is to expand the idea of glance to store things like heat templates, IIUC20:07
*** IlyaE has joined #openstack-meeting-alt20:07
rosmaitai am not opposed to expanding what glance contains, subject to keeping its basic philosophy intact (eg., immutable objects)20:07
*** markmcclain has joined #openstack-meeting-alt20:07
*** tsufiev_ has joined #openstack-meeting-alt20:07
markwashI guess there are some heatr folks here, care to introduce yourselves?20:08
*** denis_makogon_ has joined #openstack-meeting-alt20:08
ameadei support a more general catalog service20:08
gokrokvemarkwash: Hi, I am Georgy Okrokvertskhov from Mirantis20:08
gokrokveWe are not from Heater team at all :-)20:08
zhiyanyes, it's catalog generalization to me20:08
zhiyanhello gokrokve20:09
igormarnatHey guys, I'm Igor Marnat from Murano team (Mirantis)20:09
gokrokveHi zhiyan20:09
ativelkovHi, I am Alexander Tivelkov, Murano team20:09
markwashoh I see20:09
stanlagunI'm Stan Lagun - Murano & Mistral20:09
markwashsorry, I was confused, Hi Murano folks20:09
nikhil__hey folks20:09
*** akuznetsov has joined #openstack-meeting-alt20:09
markwashso, can you bring us up to speed on how Murano might fit into this integration?20:09
ativelkovMurano is not HeatR, but we are really interested in general-purpose metadata repository20:10
gokrokveMurano needs catalog for the same purpose, to store different objects20:10
stanlagunMurano is among those projects that have templates that can be stored in Glance20:10
*** denis_makogon has quit IRC20:10
*** denis_makogon_ is now known as denis_makogon20:10
gokrokvenot only templates but scripts, UI definitions and workflows20:11
ativelkovWe manipulate complex metadata packages, and we need to store them per tenant, index them, add tags, versions, authorship etc - in an immutable form, of course20:11
*** dmakogon_ has joined #openstack-meeting-alt20:11
gokrokveWe have an implementation of such catalog\repository in Murano20:11
markwashokay, there might be a little impedance mismatch but I think it doesn't sound like there would be any big problems20:11
*** k4n0 has left #openstack-meeting-alt20:11
gokrokveBut Heater revealed that other projects also need repository20:11
markwashthe one thing that's a little bit different in glance is that some of our metadata is mutable20:12
igormarnatAnd let's not miss the fact that Solum would also benefit from having the general metadata repo20:12
markwashand some of our metadata is semantically significant to Glance20:12
markwashso, not true metadata really20:12
markwashe.g. you can download images through glance20:12
stanlagunand Mistral too20:13
markwashand it will perform checksum verifications against the checksum attribute of the image20:13
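The checksum point above is the kind of server-side semantics that makes some image metadata more than passive data: Glance records an md5 checksum when the image bits are uploaded, and a consumer can verify a download against it. A minimal sketch of that verification, assuming an iterable of downloaded data chunks and the image record's checksum attribute (the helper name is made up):

    import hashlib

    def verify_image_checksum(data_chunks, expected_checksum):
        # Recompute md5 over the downloaded image data and compare it with the
        # checksum attribute stored on the image record.
        md5 = hashlib.md5()
        for chunk in data_chunks:
            md5.update(chunk)
        if md5.hexdigest() != expected_checksum:
            raise ValueError("image data does not match the recorded checksum")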
*** vipul is now known as vipul-away20:13
zhiyanmarkwash: i think what gokrokve is thinking is saving those files as our image entries20:13
gokrokvemarkwash: We need this too20:13
*** vipul-away is now known as vipul20:13
markwashgokrokve: ah okay20:13
gokrokvezhiyan: Kind of. At least in the first implementation.20:13
ativelkovA Glance user is not able to modify an Image without changing its id, right?20:14
zhiyanmarkwash: and also need a container... iirc, service metadata20:14
markwashativelkov: it cannot modify the image data, it can modify metadata20:14
markwash*some metadata20:14
ativelkovit's perfectly fine for us20:14
ameademarkwash: i think if we want to pick this as a solid direction for glance it would make sense; we could start talking about a v3... of course there are more immediate needs we could jam into v220:14
*** jcoufal has joined #openstack-meeting-alt20:14
ativelkovLike, add tags, change description etc - right?20:15
markwashso what's our plan of attack? is there any sort of proposal already on the table?20:15
gokrokvemarkwash: We want to start submitting BPs for the new features required for a generic catalog20:15
markwashativelkov: yeah stuff like that20:15
stanlagunI believe what is missing is rich, queryable metadata for all glance objects that would allow implementing catalogization - arranging objects into hierarchies, groups, etc. based on different criteria20:15
gokrokvemarkwash: We want to make sure that this is not a surprise for you :-)20:15
markwashheh sensible20:15
markwashameade: you bring up a good point20:16
gokrokvemarkwash: Do you see a possibility to start this work in Icehouse?20:16
markwashwould it make sense to do this stuff somewhat separately as part of v2.x?20:16
markwashand try to munge the traditional view of images and these other aspects together in a v3 after Icehouse?20:16
*** jmaron has joined #openstack-meeting-alt20:17
nikhil__markwash: +1 on that20:17
ativelkovthis sounds reasonable20:17
gokrokvemarkwash: We need this catalog to develop Murano, and Heat probably will be interested as well as they need this for HOT Software components20:17
nikhil__I kinda understand the complexity of adding stuff in the domain layer, so would recommend post-Icehouse20:17
markwashgokrokve: okay cool... so we should just be on the lookout for some blueprints soon?20:18
icchathe domain model should be a separate convo we should definitely discuss :)20:19
markwashgokrokve: to answer your question about Icehouse, it's a bit scary, but I think if we understand the bps well enough we might have some hope20:19
*** wyllys_ has joined #openstack-meeting-alt20:19
ashwinimarkwash: i kind of sense an agenda forming here for glance mini summit :)20:19
zhiyangokrokve: is there a clear api definition for the metadata repo now?20:19
stanlagunAre HeatEr guys ok with this? So that there will not be 10 different private repository implementations by Icehouse release20:19
ativelkovI'll have a wiki page with an overall description of what we propose ready by about tomorrow20:19
markwashgokrokve: I think it may be possible to consider some slightly drastic options to make sure we can make progress20:19
gokrokvezhiyan: There is no final API defined yet.20:19
markwashgokrokve: for example, we could start out in a separate project under the same Glance program20:19
ativelkovThen we will make more detailed blueprints on a per-feature basis20:20
icchamarkwash: do you mean /templates like /images?20:20
gokrokvemarkwash: We can do this as a separate project. You are right. But are you ok with handling two projects?20:20
esheffieldI'm a bit concerned about adding specific code for the different objects that might be stored20:21
markwashiccha: that's one option, but just now I was saying something more like github.com/openstack/glance-template-api.git20:21
nikhil__markwash: iccha ameade also, more modular sqlalchemy impl before adding this :)20:21
zhiyangokrokve: from your old wiki, i can see a very rough draft definition for the poc, so i think once there is a clear definition it will be good to see the differences between a common metadata entry and an image20:21
esheffieldcould we think along the lines of a general metadata service with specific schemas defining the object types?20:21
markwashgokrokve: handling two projects is a bit of a burden but it might still be an okay option20:21
icchai see nikhil__ 's concern about strengthening what we currently have20:21
gokrokvezhiyan: Sure. We need Heater guys to contribute too.20:21
ameadeesheffield: that makes a lot of sense20:21
*** eankutse has quit IRC20:21
ameademarkwash: i'm not opposed to this evolving into maybe a glance replacement project either20:22
nikhil__esheffield: ameade +1, with the option of drawing the line on customizations upfront20:22
rosmaitaesheffield: +120:22
iccha+120:22
markwashameade: +120:22
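To make esheffield's schema-driven suggestion concrete: the sketch below imagines one generic catalog whose entry types (image, heat template, Murano package) are described by per-type schemas rather than by type-specific code. Every name and field here is hypothetical - this is not an existing Glance or Murano API, just an illustration of the idea.

    # Hypothetical per-type schemas for a generic metadata/artifact catalog.
    SCHEMAS = {
        "image": {
            "required": ["name", "disk_format", "container_format"],
            "mutable": ["name", "tags", "description"],   # metadata you may edit
            "immutable": ["checksum", "size"],             # fixed once uploaded
        },
        "heat_template": {
            "required": ["name", "version", "template_data"],
            "mutable": ["name", "tags", "description"],
            "immutable": ["template_data", "version"],
        },
        "murano_package": {
            "required": ["name", "version", "package_data", "ui_definition"],
            "mutable": ["tags", "description"],
            "immutable": ["package_data", "version"],
        },
    }

    def validate(entry_type, entry):
        # Reject an entry that is missing fields its schema declares required.
        schema = SCHEMAS[entry_type]
        missing = [f for f in schema["required"] if f not in entry]
        if missing:
            raise ValueError("missing required fields: %s" % ", ".join(missing))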
gokrokvemarkwash: Can we discuss a separate project in ML? We will need input from TC on that too.20:22
markwashgokrokve: absolutely20:22
nikhil__too much customization will slow this down20:22
markwashgokrokve: I think we need a lot of clarity on the final api featureset though to talk about this intelligently20:23
markwashgokrokve: right now I don't know anywhere near enough to know which option would be best20:23
gokrokvemarkwash: Sure. I know that Randall is working on Heater BPs for Glance too. So we can sync up with him, to speed up the API discussion20:24
ameadethere is much discussion to be had, especially about specific use cases...i think additional meetings are in order...or during the glance meetup20:24
nikhil__+120:24
gokrokveameade: I would rather use a separate meeting.20:24
nikhil__was this just related to Murano or was there anything about Mistral too?20:25
markwashameade: +1 definitely need additional discussion. gokrokve not sure if any interested folks from your side could manage a physical meetup? or if we should just schedule some separate IRC time20:25
gokrokveameade: Otherwise we will consume whole glance meeting time.20:25
*** Weihan has left #openstack-meeting-alt20:25
markwashgokrokve: yeah I think we'll wind this topic down for today here in a moment, just want to figure out the major next steps20:25
gokrokveFace 2 face will be very effective + some guys remote via hangout20:25
stanlagunMistral would also probably need some repository in the future. Nothing specific for now20:25
nikhil__gotcha20:26
gokrokveWe did this in Solum and it was efficient.20:26
markwashgokrokve: so there is a glance meetup in the works for late January near Washington DC... not sure if that could possibly work?20:26
ameademarkwash, gokrokve: i'm definitely interested in staying heavily in the loop on this20:26
arnaudsame here20:26
*** tsufiev_ has quit IRC20:26
*** eankutse has joined #openstack-meeting-alt20:26
gokrokvemarkwash: Should work fine. We can have the first meetings in IRC/hangout and then a f2f meeting for final decisions.20:27
markwashashwini: yeah we might want more than 2 days if this does become part of the summit20:27
markwashgokrokve: okay, when should we meet again?20:27
markwashon irc20:27
*** wyllys_ has left #openstack-meeting-alt20:27
gokrokvemarkwash: Let's meet next week, say Tuesday20:28
ameade+120:28
arnaud+120:28
*** vipul is now known as vipul-away20:28
gokrokvemarkwash: We will submit some BPs and create etherpads with drafts20:28
ashwinimarkwash: yes we should have a more confirmed agenda for the mini summit to see how much time can be allocated to this discussion, or we should consider 3-day options20:28
markwashgokrokve: okay sounds good20:29
gokrokvemarkwash: Also Heater team will have time to prepare20:29
markwashgokrokve: so we should look for an invite on the openstack-dev ML?20:29
*** colinmcnamara has joined #openstack-meeting-alt20:29
*** colinmcn_ has joined #openstack-meeting-alt20:29
gokrokvemarkwash: During the next meeting we can also discuss how to do development: separate project vs. branch20:29
gokrokvemarkwash: Sure. I will organize that.20:30
*** yogesh has quit IRC20:30
markwashgreat, thanks!20:30
markwashany other quick thoughts from folks? or should we move on?20:30
gokrokveNothing from our side :-)20:31
markwashthanks for showing up20:31
markwashthis is pretty exciting20:31
markwash#topic common version-agnostic api in glanceclient20:31
*** openstack changes topic to "common version-agnostic api in glanceclient (Meeting topic: glance)"20:31
esheffieldthat was mine20:32
*** krtaylor has quit IRC20:32
markwashesheffield: go for it20:32
markwash#link https://etherpad.openstack.org/p/glance-client-common-api20:32
esheffieldbasically wanted to get some more discussion on the idea of something like the image service layer being added to glanceclient20:32
esheffieldrecall that Ghe had a patch doing some initial work in that direction a while back20:33
markwashis anybody against the idea, assuming we can work out the details of what the api should look like?20:33
esheffieldthis is all tied to supporting V2 in Nova as well, along with the requested api autodiscovery20:33
*** SushilKM__ has quit IRC20:33
esheffieldI confess that *I'm* not 100% in support of it, but mostly because I was having trouble envisioning a good clean API20:34
markwashyeah, its a little rough20:34
esheffieldand what would happen in the case of mismatches (e.g. trying to use tags but only having a V1 backend)20:34
markwashat this point, it might just need to be "whatevers in both v1 and v2"20:35
markwashso probably tags wouldn't be part of v0.0,20:35
markwashimage sharing would be really reduced or absent20:35
markwashetc20:36
esheffieldI know when we talked about it before it was mentioned that multiple locations was going to be required in Nova soon20:36
esheffieldso while I was hoping this would help with Nova -> Glance V2, that would be a blocker to this approach right away I think20:36
*** igormarnat has quit IRC20:37
markwashesheffield: I think multiple locations might work20:37
esheffieldthat would be great then20:37
markwashwe just have to treat v1 as having only 1 location20:37
markwashthere are some ways we can put in some info about whether or not adding locations is supported as well20:38
markwashif that is necessary20:38
*** jasonb365 has joined #openstack-meeting-alt20:38
markwashso I think what we need is someone who has time to commit to proposing an api20:38
arnaudhow big is the work on this?20:38
* markwash is not sure20:39
icchaand we would have to maintain it for every feature we add as well20:40
markwashiccha: exactly20:40
esheffieldwell, I did some work on the images layer in Nova, which is kind of what this would be, and refactoring that code into something similar to this took a couple of weeks20:41
markwashI think if we do it well, it will reduce the support burden, because we can direct people to a lib api that we actually designed with the intention of supporting cross-version use20:41
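A rough illustration of the version-agnostic layer being discussed: a thin facade that only exposes operations whose semantics exist in both the v1 and v2 image APIs, and that treats v1 as having a single location. The class and method names are invented for illustration; only images.get() and images.list() are assumed from python-glanceclient, and a real implementation would also normalize the differing attribute shapes.

    class VersionAgnosticImages(object):
        # Hypothetical facade over an already-constructed glanceclient v1 or
        # v2 client; not an existing glanceclient API.

        def __init__(self, versioned_client, version):
            self._client = versioned_client
            self._version = version

        def get(self, image_id):
            # images.get() exists in both API versions, but the returned
            # attributes differ slightly, so normalization would happen here.
            return self._client.images.get(image_id)

        def list(self):
            return self._client.images.list()

        def locations(self, image):
            # v2 can expose several locations; treat v1 as having at most one,
            # per the discussion above.
            if self._version >= 2:
                return list(getattr(image, 'locations', []) or [])
            loc = getattr(image, 'location', None)
            return [loc] if loc else []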
arnaudesheffield: ok20:42
ameadewe still have 2 other topics for this meeting btw :)20:42
markwashesheffield: looking at your questions in the etherpad20:42
markwashhmm, well let's try to respond in that etherpad to carry the discussion forward20:43
esheffieldI did reuse a lot of what was in Nova already, so it was a bit crufty - I'd want to take more time and do it cleanly with a well designed api this time so probably a bit more time20:43
markwashperhaps we can get to the other topics still, that way20:43
esheffieldyes, please add thoughts and comments there!20:43
markwashokay, anyone not ready to move on for now?20:43
markwash#topic glance versioning consistency20:44
*** openstack changes topic to "glance versioning consistency (Meeting topic: glance)"20:44
markwashesheffield: is this also your topic?20:44
esheffieldheh, yes20:44
esheffieldwe can probably hold off on the broader topic there for a bit, but in working on the bug linked there some concerns came up over the versioning20:45
esheffieldand backward compatibility20:45
esheffieldthe more immediate concern is if fixing that bug causes backward compat problems20:45
markwashah20:45
esheffieldflwang was concerned esp.20:46
markwashwell, we got out a little ahead of json patch20:46
rosmaitain draft 4, "add" on an existing member is an error20:46
markwashwe were implementing support when it was still in draft form, not sure if that is the culprit here20:46
rosmaitain current standard, it acts like a replace20:46
markwashI think that sounds like it is not a problem for backwards compat20:47
esheffieldyes, when we went to api v2.2 we said we support draft 10, but in draft 10 (and current) it's as rosmaita says20:47
*** igormarnat_ has joined #openstack-meeting-alt20:47
markwashwell, at first consideration20:47
esheffieldbut we still raise an error20:47
rosmaitawe have already deprecated openstack-images-v2.0-json-patch20:47
rosmaita(or at least i have in the API docs)20:47
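For readers not following the JSON Patch detail: the bug is about what an "add" operation should do when the target member already exists. A toy illustration of the two behaviors being compared (hand-rolled, top-level paths only; not Glance's actual patch code):

    image = {"name": "cirros", "visibility": "private"}
    patch_op = {"op": "add", "path": "/visibility", "value": "public"}

    def apply_add(doc, op, draft04_strict=False):
        key = op["path"].lstrip("/")      # top-level members only, for brevity
        if draft04_strict and key in doc:
            # the draft-4 reading: "add" on an existing member is an error
            raise ValueError("member already exists")
        doc[key] = op["value"]            # draft 10 / RFC 6902: acts as a replace
        return doc

    # apply_add(dict(image), patch_op)                       -> visibility becomes "public"
    # apply_add(dict(image), patch_op, draft04_strict=True)  -> raises, the behavior Glance still has today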
*** doug_shelley66 has quit IRC20:48
markwashit sounds like this is just a bugfix20:48
*** dougshelley66 has joined #openstack-meeting-alt20:48
markwashI don't think depending on that erroring out is a behavior the client relies upon20:48
*** vipul-away is now known as vipul20:49
rosmaitaflwang was worried about API users expecting that behavior, though20:49
esheffieldthat was what I thought as well - if anything people would be having to work around it and worst case the workarounds wouldn't be needed now20:49
*** jcooley_ has quit IRC20:49
*** yogesh has joined #openstack-meeting-alt20:49
markwashhmm, flwang is not here to defend himself20:50
markwashso let's all attack!20:50
markwashj/k20:50
arnaud:)20:50
esheffieldthe glanceclient is unaffected too - it always fetches the image and generates a proper 'replace' or 'add' as needed20:50
markwashso, the only difference is that now if you use add and it already exists, you get an error?20:50
rosmaitayep20:51
markwashis there any chance we could implement so if you use the old content type it does the old behavior, and if you use the normal content type you get the bugfix?20:51
markwashprobably a bit hacky, but20:51
esheffieldthat kind of circles back to the bigger topic of version consistency20:52
rosmaitawell, the code right now rewrites the 2.0 request to be like a 2.1 request20:52
esheffieldif you get a list of versions you get several, but they're all actually the same thing20:52
markwashesheffield: yeah that always seemed a bit weird to me20:52
markwashbased on what we're doing, it would be easier to just say "we use semantic versioning" somewhere in the docs20:53
*** aveiga has joined #openstack-meeting-alt20:53
* markwash is starting to worry about time for the next topic20:54
rosmaitanext topic does not need much time20:54
nikhil__if there is a min time for open discussion I've a quick question20:54
esheffieldsorry, didn't mean to dominate things today! :-(20:54
esheffieldtake this to the ML perhaps?20:54
markwashesheffield: I  responded to the bug discussion20:55
markwashesheffield: I think we need a concrete proposal for what changes we want20:55
markwashnot just "why is this weird" :-)20:55
markwash"because OpenStack"20:55
icchalol20:55
esheffield:-)20:55
ameadegotta love the honesty20:55
markwashthere's probably lower, weirder fruit20:56
markwash:-)20:56
markwashokay20:56
rosmaitai agree we need a definite proposal for what we want to do before going to the ML20:56
ameadedoes it bother anyone else that OpenStack is camel case?20:56
markwash#topic image sharing20:56
*** openstack changes topic to "image sharing (Meeting topic: glance)"20:56
markwash(sorry to steamroll)20:56
rosmaitaso i promised to put together something to get the discussion going20:56
markwashlooks like you got a #linkt here20:56
arnaudameade: +1 :)20:56
markwashwell20:56
markwashs/linkt/link/20:56
rosmaita#link https://etherpad.openstack.org/p/glance-image-sharing-discussion20:57
rosmaitaanyway, we can discuss at next meeting20:57
rosmaitaafter everyone who's interested has looked over the etherpad20:58
*** ijw has joined #openstack-meeting-alt20:58
markwash#topic open discussion20:58
*** openstack changes topic to "open discussion (Meeting topic: glance)"20:58
nikhil__can I ask?20:58
ameadeIt seems a number of people did indeed follow my email about abandoned patches20:58
ameadei'm going to gather more stats on bugs and things that have been in progress for ages20:59
ameadethen send another reminder email and later I will laser triage patches and bugs that nobody updates20:59
markwashnikhil__: did you have a note?20:59
nikhil__currently tasks response is of the form {<task_attrs>} vs. openstack common usage {'task': <task_attrs>} ?20:59
markwashlet's be glance-consistent21:00
ameade+121:00
*** igormarnat_ has quit IRC21:00
*** baoli has joined #openstack-meeting-alt21:00
nikhil__same as images then21:00
nikhil__cool, thanks21:00
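For reference, the two response shapes nikhil__ is contrasting, with illustrative attribute values; the "glance-consistent" choice mirrors how v2 images are returned with their attributes at the top level rather than wrapped in an envelope:

    # current tasks form: attributes at the top level
    unwrapped = {"id": "a1b2", "type": "import", "status": "pending"}
    # openstack-common envelope style that was asked about
    wrapped = {"task": {"id": "a1b2", "type": "import", "status": "pending"}}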
*** eankutse has quit IRC21:00
markwashokay, thanks everybody!21:00
markwashwish me luck21:00
*** eankutse has joined #openstack-meeting-alt21:00
markwash(no reason)21:00
*** BrianB__ has joined #openstack-meeting-alt21:00
rosmaitagood luck21:00
icchagood luck markwash ! (not sure for what though)21:00
markwash#endmeeting21:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"21:00
openstackMeeting ended Thu Dec 12 21:00:58 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-12-12-20.03.html21:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-12-12-20.03.txt21:01
openstackLog:            http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-12-12-20.03.log.html21:01
zhiyangood luck21:01
*** drake_ has joined #openstack-meeting-alt21:01
sc68calAlright - straight into the next meeting21:01
sc68calijw: hello there21:01
*** ashwini has left #openstack-meeting-alt21:01
ijwyo21:01
sc68calaveiga: howdy21:01
aveigao/21:01
*** BrianB__ has quit IRC21:01
sc68cal#startmeeting neutron_ipv621:02
openstackMeeting started Thu Dec 12 21:02:07 2013 UTC and is due to finish in 60 minutes.  The chair is sc68cal. Information about MeetBot at http://wiki.debian.org/MeetBot.21:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:02
*** openstack changes topic to " (Meeting topic: neutron_ipv6)"21:02
openstackThe meeting name has been set to 'neutron_ipv6'21:02
sc68calAgenda for today is a bit light - so I'd like to get through it quick and then do a big open discussion21:02
sc68cal#topic recap previous meeting21:02
*** openstack changes topic to "recap previous meeting (Meeting topic: neutron_ipv6)"21:02
sc68calSo we're still working through what we want to do with the subnet_mode review21:03
*** boris-42 has joined #openstack-meeting-alt21:03
sc68calaveiga: IJW made a good point during a convo about some of the provider networking w/ ipv6 we want to do21:04
*** Dane has joined #openstack-meeting-alt21:04
sc68calwe should be setting enable_dhcp to false when we create a v6 subnet that is announced upstream21:04
aveigais this wrt disabling instead of setting SLAAC mode? I agree21:04
*** Barker has quit IRC21:04
ijwYeah, there's two things.  Firstly, I don't really mind if we want slaac and dhcpv6 both working, that's fine, but I don't think we should tie ourselves forever to dnsmasq21:04
ijwSecond, I wanted to understand why people like slaac (other than on provider networks where it's an external router's problem, which totally makes sense)21:05
*** zhiyan has quit IRC21:05
*** rwsu has quit IRC21:05
*** brents has quit IRC21:05
aveigaijw: I think there's issues on both modes.  SLAAC has RA problems with a lot of distributions (see RHEL bugs on RA priorities)21:06
ijwYup21:06
aveigaDHCP has problems with client implementations21:06
aveigait also has RA issues, too21:06
*** mcohen2 has quit IRC21:06
sc68calaveiga: but that's if there are multiple RA packets flying around, right?21:06
ijwI'm thinking mainly about what comes up on non-provider (i.e. totally internal) networks21:06
aveigabasically if you do DHCPv6, you either have to make the DHCP server a router, or have the RA come from an l3 gateway separately21:07
aveigaand coordinate21:07
ijwYup, I don't think co-ordination is a problem particularly but we have to deal with networks with no routers21:07
aveigaright, tenant private networks are irksome when it comes to this21:07
ijwI also have no problem with being in the router namespace, but there's nothing in Neutron that forces you to have a single Neutron router either21:07
*** ichihara has joined #openstack-meeting-alt21:07
ijwIt's just a nightmare, that's the problem21:08
aveigaand that's the real kicker21:08
aveigaif there are multiple routers, where does the RA come from?21:08
sc68calis Randy Tuttle on?21:08
sc68calor Shixiong Shang21:08
ijwSo, first things first.  In ipv4 you get to choose your network's subnet and your port's address.  Do we still want to do either of those things?21:08
sc68calI think their design addressed some of this in depth21:08
aveigaijw: are you referring to neutron network creation on the subnet side?21:09
aveigaor do you mean selecting potential networks at instance creation?21:09
*** Barker has joined #openstack-meeting-alt21:09
ijwI mean that in v4 you get (as tenant) to choose your subnet and you get to choose your address if you create a port manually (rather than letting the system do it on nova boot) if you keep it in that subnet21:09
aveigaI'm not fully familiar, but doesn't choosing your subnet mean the same thing as selecting a neutron network?21:10
ijwNo, subnet is the address range rather than the network (which is l2)21:11
aveigaoh, I think I know what you mean21:11
ijwReason I raise it is that I can imagine you'd want to stick with routeable addresses, or at least have routeable addresses chosen for your network automatically, in v621:11
*** arnaud has quit IRC21:11
aveigayes21:11
ijwIn v4, the world is your oyster, cos v4 is evil and must die21:11
aveigathe problem is going to be having dhcpv6 and slaac in the same l221:12
aveigathey would be separate subnets21:12
aveigabut same l2 domain21:12
aveigaso the RAs would be wonky21:12
ijwWe get to choose, we don't have to allow just everything21:12
aveigawe could make them mutually exclusive21:12
*** dmitryme has quit IRC21:12
sc68calshould we make an action item to review how neutron behaves when you have a network with multiple subnets?21:12
sc68calon v4 side, and compare what would happen on v6 side with multiple subnets?21:13
aveigaor force DHCPv6 allocation to not use EUI-64 collision space21:13
ijwMy choice there would be that we allow precisely one v6 subnet per network (the v4 multiple subnet thing is a solution to address sparsity which isn't a problem we have) and use DHCPv6 so that we assign specific addrs to machines21:13
ijwsc68cal: I think how it works is you get an address from one of the subnets, at least in v4.  Good chance no-one has tried it in v6, though certainly you can set it up, I tried21:14
sc68calijw: So you'd force DHCPv6 as the only option on v6 side?21:14
sc68calmeaning no slaac21:14
ijwFor internal networks, and if we could get away with it, that's my preference.  Tenant networks: DHCPv6 or off21:14
ijwTo be fair, even in v4 you can turn off DHCP if you like.21:14
aveigaI actually don't see why non-routing tenant networks need addressing21:15
*** brents has joined #openstack-meeting-alt21:15
aveigayou get LLA for free21:15
ijwyup21:15
ijwTenant networks don't have to be non-routing, mind21:15
aveigayeah, but then I don't think I'd want to force DHCP on them either21:15
ijw(Sorry, I  have an evil mind and at the moment it sees problems and not answers)21:15
sc68calijw: do you want to write up a blueprint for what you're thinking?21:16
aveigaijw: someone has to play devil's advocate; nodding heads don't make good solutions21:16
aveigasc68cal: ijw: blueprint sounds like a good idea21:16
ijwYeah, let me do that.  Problem is that (like a lot of other blueprints in this space) it excludes options whatever we implement, but I can do that21:16
aveigaat least we can all discuss it with a model in front of us21:16
sc68cal#action ijw write up blueprint discussing his preferred model for tenant networking21:17
aveigaI'm just finding it difficult to understand why we'd exclude slaac or static?21:17
*** ativelkov has left #openstack-meeting-alt21:17
aveigabecause keep in mind, some folks want config-drive or metadata static injection as well21:17
aveigaand I can think of good use cases for it21:17
ijwWhat do you mean by static?  Remember DHCP here is 1 MAC, 1 IP21:17
sc68calOK - let's go ahead and move on from this - keep to the agenda21:17
aveigaI mean that the IP is injected into the guest config21:18
sc68callet's hold this until open discussion21:18
ijwyup21:18
aveigaok21:18
sc68cal#topic Nova IPv6 hairpin bug21:18
*** openstack changes topic to "Nova IPv6 hairpin bug (Meeting topic: neutron_ipv6)"21:18
*** drake_ has quit IRC21:18
sc68calPromise if we get through these quick we can let you guys go at it21:18
*** drake__ has joined #openstack-meeting-alt21:18
sc68calSo - we're getting some pushback on the hairpin issue21:19
ijwI like the proposed solution for hairpinning and I don't think any Neutron module needs it but should we have a quick code review?21:19
ijw(the solution being the one where VIF plugging requests it if required)(21:19
sc68cal#link https://review.openstack.org/#/c/56381/ nova ipv6 hairpin bug review21:19
sc68calijw: agreed21:19
aveigawas there an issue with that from anyone else?21:19
sc68calaveiga: one of the core devs from nova wants it to be a per VIF attribute21:20
sc68calI think we agree - provided the default is off21:20
*** stanlagun has left #openstack-meeting-alt21:20
sc68caland if someone wants it, just pass in an attribute to turn it on, from Neutron21:20
ijwDaniel is being cautious, though I think a bit too cautious.  Still.21:20
aveigaI don't fully grasp the consequences here21:20
aveigawhat is lost by defaulting to off?21:21
baoliQuestion, why hairpin has to be turned off with neutron?21:21
aveigabaoli: DAD failure21:21
ijwnova-network's usually on because the floating IP rewrite rules live between the switch and the port in nova-network21:21
ijwNeutron doesn't put them there.21:21
aveigayou see your own Neighbor Solicit, and immediately detect a duplicate address21:21
sc68calThe problem is nova is adding patches to their firewall drivers to block traffic from coming back and breaking ipv621:21
baoliwhere does neutron put it?21:21
aveigaah, so unless it hairpinned NAT wouldn't apply?21:22
ijwIn the router21:22
ijwYup, you can't ping your own public address iirc21:22
aveigaright21:22
sc68calThere's a big chunk of code that Nachi has been working on, to have Neutron pass in VIF attributes for firewalling21:22
aveigaso we should push this back on nova-network to have them set the hairpin option21:22
ijwWe also need to check all the other drivers - libvirt is not the world.21:23
sc68calSo it wouldn't be too hard to add another attribute to the VIF stuff to make it turn on hairpinning21:23
aveigaunless you think it's better to make it a negative case? Assume it's on, request it to be disabled per VIF?21:23
sc68calaveiga: I'd assume the inverse - always off, unless requested21:23
*** ErikB has quit IRC21:23
ijwThere's an argument for either but it's so slight let's just pick one.21:23
aveigaok, but expect an argument from the embedded parties there21:24
*** IlyaE has quit IRC21:24
sc68calI'm picking off by default21:24
sc68caland vif attribute turns it on when requested21:24
aveigaok, propose it on the ML then21:24
sc68calaveiga: I did - I asked if anyone needed it21:24
sc68calno responses21:24
ijwOK, so we have to check all the virt drivers for this and we also need to check the plugins21:25
ijwNeutron plugins21:25
*** krtaylor has joined #openstack-meeting-alt21:25
aveigais that an AI for another code review?21:25
*** IlyaE has joined #openstack-meeting-alt21:25
ijwThe virt drivers need changes, the neutron plugins only review21:25
*** ErikB has joined #openstack-meeting-alt21:25
sc68calI hope that it's only libvirt that enables hairpinning21:26
aveigahow should we approach that? Setup the new VIF definition, then file bugs against the virt drivers?21:26
ijwguess so21:26
sc68calaveiga: I think so, if we start sending the attribute, drivers should honor it and enable hairpin21:26
ijwMore to the point, have to honour its absence and not enable hairpin.21:27
sc68calijw: agreed - which the review currently makes libvirt do21:27
aveigawhich is where the default on or off argument comes in21:27
sc68calOK - if nobody else objects I'll file a blueprint in nova to make hairpin a configurable setting21:28
aveiga+121:28
sc68calthen people can make sub blueprints to link to specific drivers21:28
ijwyup21:28
sc68cal#action sc68cal create blueprint in nova for hairpinning via VIF attributes21:28
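Background for the action item: hairpin mode is a per-port Linux bridge setting that reflects a VM's own traffic back down its VIF, which is why IPv6 duplicate address detection sees its own Neighbor Solicitation and fails. A sketch of the knob involved is below; nova's libvirt driver flips this sysfs file (via rootwrap), and under the proposal the per-VIF attribute would decide the enabled flag rather than hardcoding it on. The helper is illustrative, not the driver's actual code.

    import os

    def set_hairpin(vif_dev, enabled):
        # /sys/class/net/<dev>/brport/hairpin_mode exists when the device is
        # attached to a Linux bridge; "1" reflects traffic back out the port.
        knob = "/sys/class/net/%s/brport/hairpin_mode" % vif_dev
        if not os.path.exists(knob):
            return  # not a bridge port, nothing to toggle
        with open(knob, "w") as f:
            f.write("1" if enabled else "0")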
*** colinmcn_ has quit IRC21:28
*** colinmcnamara has quit IRC21:28
sc68calOK we're through21:29
sc68cal#topic open discussion21:29
*** openstack changes topic to "open discussion (Meeting topic: neutron_ipv6)"21:29
ijwaveiga: the point I was making earlier was that 'static' is not exclusive with the others21:29
ijwYou get that regardless of if DHCP is on or off21:29
aveigasorta21:30
ijwso really we're only talking about flags that change the network address handing-out service21:30
aveigaI can think of a scenario where having DHCP on would be detrimental to a static config21:30
*** sarob_ has joined #openstack-meeting-alt21:30
ijwYep, that's fine, so you turn it off21:30
aveigaah, that's where I agree with you21:30
ijwon/off is already available21:30
*** banix has quit IRC21:31
aveigaso are we leaving it to the operator then?21:31
ijwDHCP on/off should remain an option21:31
ijw*but*21:31
aveigathey have to know enough to disable DHCP21:31
*** SergeyLukjanov is now known as _SergeyLukjanov21:31
ijwYeah, but I'm assuming if you know you need static config you can work that one out21:31
aveigaok21:31
aveigaso in the case where we have SLAAC and DHCP21:32
ijwDHCP / SLAAC is more of an issue.  SLAAC will come up with an address which isn't the one you have put on the port, for instance21:32
aveigaI'm worried about collisions21:32
aveigaso this is why I think the port needs to take a hint from the network21:32
ijwWhat happens if we just don't let SLAAC addressed traffic through the port firewall?21:33
aveigayou potentially break distributions21:33
ijwObviously a SLAAC chosen address wouldn't work, but we can require that VMs using v6 take the address they've been given and don't just make it up21:33
*** sarob has quit IRC21:33
*** alazarev has quit IRC21:34
aveigaif it's a mixed mode network and you block the SLAAC addr at the port, then some distros that get the SLAAC addr up first will be forever dead in the water21:34
aveigaunless you manually alter routing tables21:34
*** sarob_ has quit IRC21:34
aveigaI think we need to make DHCP and SLAAC mutually exclusive21:35
*** mcohen2 has joined #openstack-meeting-alt21:35
ijwAgain, can we exclude mixed mode?21:35
ijwI'm out of my depth here, so I'm honestly asking21:35
aveigayes21:35
aveigayou can safely make the assumption that EUI-64 SLAAC is the least common denominator21:36
*** nati_ueno has quit IRC21:36
*** IlyaE has quit IRC21:36
aveigasince it existed for years before anything else21:36
*** demorris has quit IRC21:36
ijwYeah, but the issue is that it's damned nice to be able to force an address on a VM rather than have one given21:36
aveigaand assume that an operator choosing DHCP should know that their guest images must be able to use it21:36
aveigathen you use DHCP21:36
aveigayou can theoretically force an address21:37
aveigayou have control of the MAC21:37
ijwWell, you can predict it21:37
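The predictability ijw mentions: with SLAAC and EUI-64, the address a guest will form is a pure function of the /64 prefix and the MAC that Neutron already chose for the port. A small sketch of that derivation (assumes a plain, uncompressed /64 prefix string):

    def slaac_address(prefix64, mac):
        # EUI-64: flip the universal/local bit of the first MAC octet and
        # insert ff:fe between the OUI and the NIC-specific half.
        octets = [int(x, 16) for x in mac.split(":")]
        octets[0] ^= 0x02
        eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
        groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
        return "%s:%s" % (prefix64, ":".join(groups))

    # slaac_address("2001:db8:1:2", "fa:16:3e:11:22:33")
    #   -> "2001:db8:1:2:f816:3eff:fe11:2233"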
aveigamy turn for devil's advocate21:37
aveigawhat if you want to put multiple addresses on the same interface?21:37
aveigai.e. bind apache to one and ssh to the other?21:38
aveigaDHCP won't do that21:38
aveigaSLAAC wont'21:38
aveigabut if you filter the port on non-assigned addresses21:38
aveigathat breaks21:38
ijwstatic config, probably, at that point21:38
ijwAnd there's an extension for that that arosen did, so we're not screwed21:38
aveigadoes it support injecting rotues?21:38
aveigaroutes*21:38
ijwDon't think so, but you can see that coming, I agree21:39
aveigabecause (due to the RA bug in some distros) one of the main points for static is addressing without RA so that you can do HSRP in v621:39
ijwWell, I'm still going to write a BP recommending against SLAAC (also, SLAAC only works on a /64 aiui)21:40
aveigathat way RA juggling isn't necessary for multi-homed switches21:40
aveigathat's correct21:40
ijwand /64 is a big tenant net21:40
ijwBut anyway, initially let's assume we're going to want to make that extension either core or more widely implemented.  I'll go and find the one in question.21:41
aveigaso what do you intend to do with clients that don't have a working DHCPv6 stack?21:41
aveigafore static injection?21:41
ijwThere's also magic to turn off antispoof entirely for a whole net.  It was much more important in nova-net than it is in Neutron (where you can have total control of all your VIFs on the network)21:41
ijwconfigdrive?21:42
ijwcloud-init happily supports config drives and they work without a functional network to get a network up.21:43
aveigaI'm wary of saying that we support IPv6 if we don't support SLAAC21:43
aveigasince SLAAC is part of the IPv6 RFC21:43
sc68cal+121:43
ijwBetter say that it's not core, I would say21:43
*** sarob has joined #openstack-meeting-alt21:43
sc68caland we have a BP registered for how we deploy openstack at Comcast21:43
sc68calwhich involves SLAAC21:43
ijwIt's not that you can't do it, but I think we should aim for v4 parity in this, which is basically that you get to choose an address from the subnet and have it assigned via a handful of recognised methods21:44
ijwSLAAC on internal networks?21:44
aveigaboth21:44
ijwOK, point me at that offline21:44
*** drake__ has quit IRC21:44
ijwWe're going around on this, so let's cover something else while we have the time21:44
sc68cal#link https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac Provider networking with slaac21:44
ijw.. that's provider nets, not internal, though21:44
*** rnirmal has quit IRC21:45
aveigawhat did you have in mind?21:45
*** jecarey has quit IRC21:45
ijwfloating IPs21:45
aveigaah, the killer21:45
*** SergeyLukjanov has joined #openstack-meeting-alt21:45
ijwAbsolutely21:45
ijwNow, no-one is allowed to say the N word without putting a quid in the swear jar21:45
aveigaI have a better proposal21:46
aveigachange the DHCP reservation21:46
aveigaand keep the valid lifetime low21:46
ijwRenumber the machine?21:46
aveigayup21:46
ijwHm21:46
ijwHow about change the router firewall21:46
ijwSo where before you would n.. things, you let incoming connections in21:46
ijwdifferent change, but same place21:47
aveigaactually21:47
aveigaDHCPv6 supports having multiple addresses21:47
aveigawe could inject both in the renew21:47
aveigawell21:47
ijwThat's a rather nice idea21:47
aveigatell it rebind fail21:47
aveigathen start over21:47
sc68calkind of like it21:48
*** sarob has quit IRC21:48
aveigaeffectively use DHCP to inject it without renumbering21:48
ijwHowever, the thing here is that even with private (fixed) v4 addresses I have external access21:48
aveigano loss of existing bindings/TCP sessions21:48
aveigaare we doing private v6? I propose we ignore ULA21:48
*** sarob has joined #openstack-meeting-alt21:48
ijwSo the question is whether this is about having two addresses for one machine or about the kind of inbound access that machine has, philosophically21:49
aveigaactually, that was selfish21:49
aveigawe should support it21:49
aveigaeh, it's reachability on an address that isn't the machine's primary access21:49
sc68calI think from the outside it's just about being able to move an IP across boxes21:49
aveigabecause you can ssh to the fixed address before a float is assigned21:49
sc68caldynamically without mucking around inside the machines21:49
aveigaand you can reserve floats21:49
ijwYou see, that's two things - reachability and primary access - why can't it be primary?21:49
ijwsc68cal may have it21:50
aveigabecause you'd rebind daemons or kill tcp sessions21:50
ijwThough it's a very slow way of moving the address, you know21:50
sc68calI think primary access just leaked in because of the design of nova-network21:50
sc68calto the floating ip concept21:50
aveigawe shouldn't break existing access while adding a float21:50
*** jdob has quit IRC21:50
sc68calbut elastic IPs in amazon was reachability21:50
aveigayeah, but I can see that as being useful21:50
ijwOnly if you change the address, not if you use the same one and it's routeable (which it would want to be if it were to have the equivalent of a fixed address, which if it's on a router has external dialout access)21:50
*** sarob has quit IRC21:51
aveigaI think the real issue is WHICH address21:51
ijwSo the minimal solution is there's no second address; by default it can dial out, and with special permissions it can dial in. But that removes transferability21:51
aveigaone reason for reserving floats and putting them on a new VM is for upstream security outside of OpenStack21:52
aveigayeah, transferability is a key concept of floats21:52
aveigathey float between VMs21:52
aveigaI don't think you can support floating IPs and ignore that feature21:52
aveigawhether we like it or not, people are using it in that manner already21:53
ijwOK, so it's a secondary address.  What pool are we drawing from?21:53
aveigathis sounds like a bigger topic though21:53
aveigaand we're close to time21:53
ijwYeah - clearly there's no simple answer here21:53
aveigawant to put this on the agenda for next week or take it to the ML (or the neutron channel)?21:53
ijwBoth, I think21:53
aveigamaybe some folks in the neutron channel have opinions here?21:53
aveigaok, I think we're done here21:54
aveigasc68cal: care to do the honors?21:54
ijwTo my mind it's one of the biggest stumbling blocks, possibly second only to ensuring all your addresses are routeable and non-overlapping21:54
sc68calsure - if nobody has anything else we can end early21:55
aveigayeah, I think making the provisioning modes mutually exclusive helps there, but I agree21:55
aveigaand on that, I need to head out to get dinner anyway21:55
ijwWell, I think we have more than enough work for a week.21:55
*** demorris has joined #openstack-meeting-alt21:55
aveiga+121:55
sc68calAlright then - see everyone next week21:55
ijwo/21:55
sc68cal#endmeeting21:55
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"21:55
openstackMeeting ended Thu Dec 12 21:55:49 2013 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:55
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ipv6/2013/neutron_ipv6.2013-12-12-21.02.html21:55
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ipv6/2013/neutron_ipv6.2013-12-12-21.02.txt21:55
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ipv6/2013/neutron_ipv6.2013-12-12-21.02.log.html21:55
ijwDinner.  There's a fine plan.21:56
*** BrianB_ has quit IRC22:02
*** aignatov has quit IRC22:03
*** jasonb365_ has joined #openstack-meeting-alt22:04
*** ashaikh_ has joined #openstack-meeting-alt22:04
*** dougshelley66 has quit IRC22:04
*** julim_ has joined #openstack-meeting-alt22:05
*** jasonb365 has quit IRC22:05
*** jasonb365_ is now known as jasonb36522:05
*** tomoko__ has joined #openstack-meeting-alt22:05
*** aveiga has left #openstack-meeting-alt22:06
*** ashaikh has quit IRC22:07
*** ashaikh_ is now known as ashaikh22:07
*** julim has quit IRC22:07
*** markmcclain has quit IRC22:07
*** AlanClark has quit IRC22:09
*** baoli has quit IRC22:09
*** abramley has quit IRC22:10
*** pdmars has quit IRC22:11
*** boris-42 has quit IRC22:11
*** banix has joined #openstack-meeting-alt22:13
*** Dane has quit IRC22:15
*** lblanchard has quit IRC22:16
*** brents has quit IRC22:18
*** ichihara has left #openstack-meeting-alt22:18
*** dprince has quit IRC22:22
*** brents has joined #openstack-meeting-alt22:31
*** denis_makogon has quit IRC22:33
*** noslzzp has quit IRC22:33
*** krtaylor has quit IRC22:38
*** brents has quit IRC22:39
*** brents has joined #openstack-meeting-alt22:41
*** banix has quit IRC22:42
*** brents has quit IRC22:43
*** abramley has joined #openstack-meeting-alt22:43
*** Barker has quit IRC22:47
*** kevinconway has quit IRC22:48
*** eankutse1 has joined #openstack-meeting-alt22:49
*** eankutse1 has quit IRC22:49
*** banix has joined #openstack-meeting-alt22:49
*** brents has joined #openstack-meeting-alt22:50
*** tomoko__ has quit IRC22:51
*** eankutse has quit IRC22:52
*** rudrarug_ has joined #openstack-meeting-alt23:00
*** rudrarug_ has quit IRC23:01
*** rudrarugge has quit IRC23:02
*** jergerber has quit IRC23:02
*** NehaV has quit IRC23:04
*** yogesh has quit IRC23:06
*** jcoufal has quit IRC23:12
*** vkmc has quit IRC23:19
*** sacharya has quit IRC23:22
*** ijw has left #openstack-meeting-alt23:25
*** dougshelley66 has joined #openstack-meeting-alt23:26
*** jasonb365 has quit IRC23:26
*** mcohen2 has quit IRC23:30
*** amytron has quit IRC23:30
*** alazarev has joined #openstack-meeting-alt23:35
*** IlyaE has joined #openstack-meeting-alt23:44
*** sarob has joined #openstack-meeting-alt23:44
*** bdpayne has quit IRC23:48
*** ErikB has quit IRC23:51
*** SergeyLukjanov has quit IRC23:57
*** yamahata_ has joined #openstack-meeting-alt23:58
