21:01:03 #startmeeting
21:01:04 Meeting started Mon Jul 16 21:01:03 2012 UTC. The chair is danwent. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:07 haha. Is that a reward?
21:01:23 #link Agenda: http://wiki.openstack.org/Network/Meetings
21:01:46 today we have a couple design/coordination issues to discuss
21:02:01 Hi
21:02:09 hoping to nudge along a few discussions to make sure they don't become blockers
21:02:23 evening all
21:02:27 garyk: hey
21:02:33 danwent: hi
21:02:37 first up, here's the current status of folsom-3: https://launchpad.net/quantum/+milestone/folsom-3
21:02:52 we are 3 weeks out from when all features should be pushed for review
21:03:07 a month out from the final release date.
21:03:27 A couple of key reviews to draw attention to (unfortunately, both are not in quantum)
21:03:35 first, garyk's devstack + dhcp patch: https://review.openstack.org/#/c/9362/
21:03:47 (arosen's v2 devstack patch got merged earlier today)
21:04:08 garyk: just need a rebase, i assume, now that arosen's patch is in?
21:04:20 danwent: correct
21:04:35 great. once you do that, i'll bug dean to get the second devstack core review
21:04:55 ok
21:04:58 second is salv-orlando's patch to let you specify the --nic option again with v2 quantum: https://review.openstack.org/#/c/9295/
21:05:06 this is pretty important to testing more interesting topologies
21:05:10 salv-orlando: any update on this?
21:05:29 reviewers' comments
21:05:38 have been addressed - waiting for another round of review
21:05:39 looks like we already have the attention of a few nova core devs, which is great.
21:06:12 ok, any other key reviews people want to call out? (these are mainly reviews that may be holding up additional work)
21:06:30 danwent, salv-orlando: is this related to the vnic or the nic of the agent?
21:06:52 this is to be able to specify the set of networks that a vm is connected to when it boots
21:07:04 danwent: thanks
21:07:15 otherwise nova will just create an interface on every network the tenant has access to
21:07:28 Ok, so I have a few items I'd like to discuss
21:07:30 :)
21:07:33 hello, https://review.openstack.org/#/c/9506/
21:07:48 how are these two different?
21:08:06 gongys: the difference is whether we allow an IP address to be specified.
21:08:23 this is a discussion we were having with tr3buchet (as I mention in my review comment)
21:08:56 the concern is that we don't want every new thing that can be set on a quantum port (IP, security group, etc.) to be proxied through nova when a VM is created.
21:09:31 thus, the idea was that if someone wanted to customize their network connectivity, they would first create a port with exactly that connectivity via quantum, and then just pass in the port-id when specifying the vNIC.
21:09:49 My god, these two changes were submitted at almost the same time.
21:10:08 They are trying to deal with the same problem.
21:10:35 We should at least make sure that they do not conflict with each other
21:11:08 then we can stack the larger patch set (9506 I believe) on top of the other one
21:11:53 But these two are fixing the same bug.
21:11:56 I think 9506 needs more discussion, about whether we want to support the fixed_ip attribute, or require that they pass in a port. I could see making an exception for fixed_ip, but I am very nervous about having all quantum port attributes sent in via nova.
21:12:35 we should loop tr3buchet into this discussion, as when I previously spoke with him, I think he was in favor of not supporting the fixed_ip field being passed in.
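To make the two approaches being debated concrete, here is a sketch of the request bodies involved. This is illustrative only: the IDs and addresses are hypothetical placeholders, and the field names follow the general shape of the Nova server-create and Quantum v2 port-create APIs of this era, not either patch verbatim.

```python
# Option debated in 9506: proxy Quantum port attributes (here a fixed
# IP) through the Nova boot call itself.
boot_with_fixed_ip = {
    "server": {
        "name": "vm1",
        "imageRef": "IMAGE_ID",       # hypothetical placeholder
        "flavorRef": "1",
        "networks": [
            {"uuid": "NETWORK_ID", "fixed_ip": "10.0.0.5"},
        ],
    }
}

# Preferred model from the discussion: first create the port with the
# desired attributes directly via Quantum...
port_create = {
    "port": {
        "network_id": "NETWORK_ID",
        "fixed_ips": [{"ip_address": "10.0.0.5"}],
    }
}

# ...then hand only the resulting port id to Nova, so Nova never has
# to proxy every new Quantum port attribute.
boot_with_port = {
    "server": {
        "name": "vm1",
        "imageRef": "IMAGE_ID",
        "flavorRef": "1",
        "networks": [
            {"port": "PORT_ID"},      # hypothetical placeholder
        ],
    }
}
```

The point of the second shape is that the Nova-facing interface stays fixed (just a port id) no matter how many attributes Quantum ports grow.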
21:12:47 gongys or salv-orlando: do you want to start a ML discussion on this?
21:13:17 I just want to avoid these duplicated efforts.
21:13:45 danwent: I agree with your concerns, and I think the model you propose around pre-creating the Quantum port seems reasonable.
21:13:55 yeah I am fine with bringing the discussion to the ML
21:13:55 Why do we have these two changes fixing the same bug?
21:13:59 are they both linked to the same bug? or did we have duplicate bugs that led to duplicate fixes?
21:14:41 in lp terms, not the same bug
21:14:57 Sorry, I made a mistake.
21:15:14 we probably want to discuss on the ML whether they are actually the same bug or not, semantically
21:15:32 I think not, but we probably want to move on as there's plenty to discuss today
21:15:37 ah… ok. well, at least there was some overlap there. part of the confusion was probably about whether the bug should be filed in quantum or nova
21:15:42 haha, they are not the same bug number.
21:15:49 one is a nova bug and the other is a quantum bug. They are not linked.
21:16:04 i agree. let's move on. remember to always check for duplicates when you file a bug.
21:16:08 Anyone interested in extensions adding attributes to core resources, please comment on https://review.openstack.org/#/c/9849/ soon so we can decide whether to go forward with that approach and update the provider network patch set to use it.
21:16:28 rkukura: thanks, I will look at the code later on today
21:16:53 Lots of patch sets are touching the RESOURCE_ATTRIBUTE_LIST, including moving it out of router.py, so we'll need to coordinate.
21:16:58 salv-orlando: can you kick off a quick discussion with tr3buchet and gongys at least to resolve the differences between the two patches?
21:17:02 salv-orlando: thanks
21:17:49 Ok, the next topic I wanted to bring up was about adding names to ports and subnets, and their uniqueness
21:18:18 https://review.openstack.org/#/c/9836/
21:18:29 danwent: i think that if they are not unique then there is no point to the name - for example, if one is to do a gui representation of a network
21:19:01 danwent: it only leads to confusion
21:19:02 garyk: I think the "standard" approach is that the user can decide if they want to keep them unique or not
21:19:07 danwent, gongys, tr3buchet - I am happy to discuss it at any time on irc, ML, or w/e
21:19:24 that is what nova does for servers, and what we currently do for networks.
21:19:56 +1 to letting the user decide name uniqueness
21:19:57 it is what we do; i do not think that it is very usable.
21:20:00 if a user decides to have two things named the same thing, they likely need to click through to the details page to see a UUID and disambiguate in a gui, which I agree is cumbersome
21:20:10 I am with danwent on this. If we enforce name uniqueness then let's just get rid of the uuid
21:20:10 how does the user decide? who can enforce this?
21:20:53 i agree, one or the other. why both? it just seems another column in the table that is a burden
21:21:20 on that note, I would not even make the name attribute mandatory (and would make it non-mandatory for networks as well)
21:21:33 What's the name used for?
21:21:52 I have the same question
21:22:03 it's really just an easier-to-remember/type display handle
21:22:05 Name is more user-friendly
21:22:14 I guess for having something human-readable on the client side
21:22:24 Ok, so it makes sense to make it optional IMHO
21:22:32 +1 to PotHix
21:23:05 in my experience, systems like this tend to have optional display names that are non-unique.
21:23:13 +1 for optional
21:23:29 this seems to be what nova does for servers; I'm not sure about other services within openstack.
21:23:31 +1 for optional. it is useful.
21:23:36 If it is only for human readability, then uniqueness is not a must.
21:23:46 I'll add my +1 - adding that we should make the name optional for networks too
21:24:03 agree
21:24:04 salv-orlando: I agree with you
21:24:09 garyk: I think it's up to you to try and convince folks otherwise. Want to send an email to the list about it?
21:24:29 i can live with optional. +1
21:24:39 Hi, if we make the network name optional, I have to run 'quantum network-create' instead of 'quantum network-create myname'
21:24:57 which means we don't need any arguments to create a network.
21:24:58 yep, create without any parameters will give you a network
21:25:08 gongys: but I assume there would be a --name=foo
21:25:16 option?
21:25:38 Yes. the name is going to become an optional argument.
21:25:56 So we need nothing to create a network.
21:26:31 Looks good to me :)
21:26:35 is there any problem here?
21:26:37 ok. please coordinate with the devstack changes when pushing this into the client. We'll also need to change the resource-map to make the network name optional (maybe do this in the same patch as adding names for port/subnet?)
21:26:39 +1 for optional network name.
21:26:42 without a name?
21:27:10 I will update my change.
21:27:11 zhuadl: can you clarify what you're asking?
21:27:49 It is tough to associate a uuid of a network in CLIs and UIs
21:27:50 whoops, realized I skipped an item on the agenda. want to go back to talk about openstack-common updates.
21:27:51 I mean, is there any problem without a name when creating a network?
21:27:55 -1 for optional
21:28:24 we need networks with names (sorry to be the bad guy).
21:28:37 shivh: is specifying --name=foo much more difficult?
21:28:39 uuids are difficult to handle
21:28:57 the assumption is not agreeable.
21:29:12 when you create a network you may want to have a human-readable name
21:29:25 like (vlan3-based network)
21:29:36 shivh: yes, that would still be possible.
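What an optional name means at the API level can be sketched as follows: the client builds the v2 network-create body with or without a "name" key. This is an illustrative sketch only (the helper function is hypothetical, not part of the actual client); the body shape follows the Quantum v2 API convention of wrapping attributes under a "network" key.

```python
def make_network_body(name=None):
    """Build a body for POST /v2.0/networks; the name is optional.

    With no name, the request still creates a network ('quantum
    network-create' with no arguments); with a name, it behaves like
    'quantum network-create myname' or --name=foo.
    """
    network = {"admin_state_up": True}
    if name is not None:
        network["name"] = name
    return {"network": network}

# No arguments at all still yields a valid create request:
minimal = make_network_body()
# Name supplied explicitly:
named = make_network_body("foo")
```

Uniqueness is then entirely the user's choice: nothing in the request schema stops two networks from carrying the same name, which is why the UUID remains the real identifier.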
21:29:41 Ok, in the interest of time, shivh, please send your concerns to the ML
21:29:50 we have a lot to cover still
21:29:51 will do.
21:29:53 thx
21:30:09 gongys: i'm trying to figure out the issue with the two patches to update openstack-common
21:30:24 we need to get this change in soon, as garyk pointed out that other things depend on it
21:30:33 dan, obviously not needing any information to create a network will lead to creating networks unintentionally.
21:30:48 I reckon gongys has a patch that is a superset of the other - but fixes some pep8 errors
21:31:11 I need the notifier part too.
21:31:36 gongys: ok, so is the concern just that the other patch was proposed first?
21:31:41 but does a subset?
21:32:12 I disabled the pep8 check on the openstack common code.
21:32:24 gongys: yes, i think that's the right thing to do
21:32:55 I dealt with the context problem in order to integrate the common code into our system.
21:32:57 especially given that the openstack-common folks do not seem to want to use the latest pep8 version
21:33:16 danwent, gongys: sounds reasonable. which patch will be used then? there are 2 conflicting patches
21:33:35 But I am sorry to learn there is another change trying to import the code.
21:34:12 At that time I was just grabbing the BP.
21:34:16 gongys: the scalable agents patch got a -1 because it did not use the versioned RPC code (this also needs to be updated)
21:34:24 gongys: the other one was proposed first, right? and is by someone new to the community? are we able to merge that patch, and put yours on top?
21:34:56 or is it more complicated than that?
21:35:04 I agree, but that change is claiming to fix a bug which needs the notifier part imported.
21:35:50 gongys: ok, could we merge it as a partial fix for that bug, then have you do the second half of the bug? Otherwise, let's just merge yours in. I just want to get this roadblock cleared.
21:36:14 Ok.
21:36:19 gongys: can you please make sure that you have the latest and greatest openstack code then
21:36:31 of course.
21:36:39 tx
21:36:41 I will redo the update.sh.
21:36:56 ok, so we're going to approve Yaguang's patch, then apply gongys' patch on top. everyone in agreement?
21:37:09 +1
21:37:11 +1
21:37:13 +1
21:37:15 gongys: looks like you'll be the second +2 on the patch.
21:37:22 ok, great.
21:37:29 +2
21:37:45 ok, last (and probably trickiest) of the discussion points.
21:37:50 -2 (sorry, I just couldn't resist the temptation to troll :))
21:37:56 arghhh!
21:37:58 :P
21:38:06 plz go ahead
21:38:13 salv-orlando: Party pooper!
21:38:15 +1
21:38:18 non-polling work for plugin agents, and dhcp
21:38:31 iff names are key :)
21:38:47 garyk has work related to making plugins more scalable
21:39:10 using rpc calls, though we need to add more logic to trigger off of API changes to notify agents.
21:39:12 danwent: not sure if anyone saw - i wrote a mail to the list with some issues about the updates
21:39:28 garyk: ok… actually, perhaps for something this complex, the ML is a better place to resolve it anyway
21:39:37 * salv-orlando saw the mail, starred it and will eventually read it
21:39:52 given how long the meeting is running, how about we table that and just encourage folks to respond on the ML.
21:40:08 that would be great!
21:40:15 i think this is a key item to get consensus on though, as it will be used by the plugins, dhcp-agent, and l3-agent
21:40:35 #help please check out the ML thread from garyk about scalable agents and respond with thoughts
21:40:42 ok, moving on :)
21:40:53 will do
21:40:54 :)
21:40:55 i'll be happy to clarify if anyone needs further explanations.
https://review.openstack.org/#/c/9591/ gives the basic idea
21:41:10 #info we're looking for people to contribute to the Quantum integration into horizon
21:41:34 I'm very worried that this isn't going to really land for Folsom, and it would be a shame if people didn't have a good graphical way to use Quantum
21:42:07 arvind has an initial patchset (https://github.com/asomya/horizon/tree/quantum-v2), but the GUI widgets he's created need to be hooked up to the Quantum client library.
21:42:32 there's also a lot more work that we wanted to do in F-3, but first we need to get this base functionality in
21:42:35 Is there any problem with hooking it up to the quantum client?
21:42:58 gongys: i think it's pretty simple, he's just not familiar with any of the v2 stuff (and he's on vacation for two weeks now)
21:43:06 so no progress is being made.
21:43:21 I will have a look at what is going on.
21:43:23 Is there any BP created for this work?
21:43:31 gongys: that would be awesome.
21:43:41 i will check the horizon code.
21:43:49 the bp is at https://blueprints.launchpad.net/quantum/+spec/quantum-horizon
21:43:55 zhuadl: Do you want to take it?
21:44:01 I can also help to look at this.
21:44:24 I just started to look at the current asomya repository.
21:44:28 great… we could really use a few people familiar enough with horizon that we can expose new quantum functionality there once it's available in the API
21:44:52 (in F-3 we'll also need to work on floating IPs and other l3 constructs)
21:45:16 #todo (again) dan: create additional F-3 bps/bugs for horizon work
21:45:50 please use the BP whiteboard to coordinate the work you're doing. wouldn't want multiple people to do the same thing (resources are too precious for that)
21:46:08 we can split up the blueprint into additional chunks if people think that is useful
21:46:20 ok, anything else for F-3?
21:46:39 how is the floating-ip feature going?
21:46:40 if you have a BP with priority High or above and the status is unclear, I've probably already pinged you, or will be pinging you soon.
21:46:51 gongys: started on the spec over the weekend
21:47:06 if I don't have something by thursday, i'll give it up to you, deal? :P
21:47:32 N O
21:47:35 i'm hoping to break it out into three major chunks, so multiple people can work on it
21:47:44 :)
21:47:48 :)
21:48:00 #topic community topics
21:48:20 Ok, i had a TODO that we should come up with a template for Quantum specs
21:48:28 i have an issue with devstack and linux bridge and dhcp
21:48:34 I haven't had time yet, so I'm curious if anyone wants to volunteer for that.
21:49:16 otherwise, i'm creating a spec template as I do the L3 stuff, so we can go with that as well.
21:49:26 ok, garyk go ahead
21:49:40 danwent: when i run devstack with v2 i have issues.
21:49:52 I can do the spec template - and post it on the wiki.
21:50:07 salv-orlando: ok, that would be great.
21:50:14 1. when using dhcp and linux bridge - the dhcp requests do not arrive at the dnsmasq device. could be iptables.
21:50:26 2. occasionally nova launches vms with vnics
21:50:34 yeah, gongys and I have chatted about that.
21:50:36 anyone else experienced this?
21:50:40 i also have the same issue.
21:50:52 the same issue #1.
21:51:18 you need to set the firewall driver to the NullFirewallDriver until we fix the bug we filed.
21:51:21 one sec
21:51:45 danwent: i'll try
21:52:01 We need to use the NullFirewallDriver for now since we have not yet exposed the right dhcp server ip.
21:52:03 garyk: ok, i forget the exact syntax, but it was on that wiki page arosen sent out
21:52:24 I remember we have a BP for it.
21:52:32 https://blueprints.launchpad.net/quantum/+spec/expose-dhcp-server-ip
21:52:34 Yes, I saw it.
21:52:37 danwent: cool, i'll take a look tomorrow.
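The workaround discussed above amounts to a one-line nova.conf change. Note the hedging in the meeting itself ("i forget the exact syntax"): the class name below is an assumption based on nova's standard no-op firewall driver, and may differ from the "NullFirewallDriver" name quoted in the channel.

```ini
# nova.conf — workaround until the dhcp server ip is exposed:
# disable nova's iptables-based filtering so DHCP requests can
# reach dnsmasq. Exact class name may vary by release; nova's
# no-op driver is NoopFirewallDriver (assumption, see above).
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```

This only papers over the symptom; the real fix is the expose-dhcp-server-ip blueprint linked in the discussion, after which the correct iptables rules can be generated.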
silly me, should have written to the list instead of grinding water with it
21:52:53 then we'll need the quantum/nova integration code to pass that as part of the network object in nova, I think.
21:53:11 ok, other community topics
21:53:23 About notification events.
21:53:57 gongys: configuration files
21:54:07 gongys: i see notifications as being a key part of the non-polling stuff, but that's part of what we need to discuss on the ML
21:54:28 (as notifications would need to be more fine-grained)
21:54:43 gongys: sorry, go ahead...
21:55:07 ok, time is up, I prefer to do it on the ML.
21:55:09 btw, 5 minutes to the end of the meeting
21:55:14 ok, great
21:55:23 please sign up for the next set of review days if you're a core dev (or even if you're not)
21:55:29 http://wiki.openstack.org/Quantum/ReviewDays
21:55:43 the review queue is starting to grow again, let's make sure we all pitch in to keep it manageable
21:55:58 I'm reviewing a bit each day
21:56:02 and just another reminder to use the openstack-dev (not openstack) list for design discussions
21:56:11 #topic open discussion
21:56:23 any final questions (4 mins left)?
21:56:52 Do we have anyone from quantum here at oscon?
21:57:13 ncode: perhaps some of the dreamhost guys?
21:57:15 for public networks, please follow up the discussion on the ML (I already owe garyk a reply)
21:57:25 Any chance of an official quantum-2012.1.1 essex stable tarball soon?
21:57:27 unfortunately, i suspect the people at oscon may be the same people missing the meeting :)
21:57:48 rkukura: garyk picked up the latest today. after testing, we can definitely do a "release"
21:57:57 great
21:58:12 ncode: we (Dreamhost) are, but I'm not sure who
21:58:24 danwent: hahahaha, I will look for them
21:58:24 ok, thanks folks. see you all next week!
21:58:28 #endmeeting