21:01:46 <mestery> #startmeeting networking
21:01:47 <openstack> Meeting started Mon Jun  9 21:01:46 2014 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:50 <openstack> The meeting name has been set to 'networking'
21:01:54 <shivharis> hi
21:02:02 <mestery> #link https://wiki.openstack.org/wiki/Network/Meetings Agenda
21:02:04 <Sukhdev> Hello
21:02:15 <mestery> #topic Announcements
21:02:25 <mestery> Juno-1 is this week on Thursday.
21:02:27 <mestery> #link https://launchpad.net/neutron/+milestone/juno-1
21:02:46 <mestery> We have merged 6 BPs and tons of bug fixes. There are a few things left which we can merge if the code reviews are good.
21:02:54 <rkukura> hi
21:02:56 <mestery> See the LP page for BPs to prioritize for review this week.
21:03:10 <mestery> Any questions on Juno-1?
21:03:57 <emagana> DVR is the most critical, right?
21:03:57 <kevinbenton> is there an explanation of the significance of these particular tags?
21:04:10 <mestery> emagana: Yes, we merged 2 DVR patches, my guess is the rest land in Juno-2.
21:04:21 <kevinbenton> like what’s the difference between something landing in Juno-1 vs 2,3, etc
21:04:40 <mestery> kevinbenton: Just trying to prioritize work, etc.
21:04:47 <mestery> kevinbenton: Juno-1 items all have BPs which landed for example :)
21:04:56 <markmcclain> kevinbenton: the milestones are really for planning and synchronization
21:05:07 <mestery> Next week I will start going through Juno-2 items and ensure they have BPs filed and tracked in neutron-specs.
21:05:13 <mestery> markmcclain: +1
21:05:19 <mestery> For the Juno project plan for Neutron, see here: https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
21:05:20 <mestery> #link https://wiki.openstack.org/wiki/NeutronJunoProjectPlan
21:05:30 <mestery> At a high level, that's the community items we're tracking.
21:05:47 <mestery> I don't want to derail the meeting on that now, though. So questions on that can be handled in Open Discussion :)
21:05:52 <mestery> OK, 2 more items for announcements:
21:05:58 <kevinbenton> thanks
21:06:05 <mestery> #link https://etherpad.openstack.org/p/neutron-juno-mid-cycle-meeting LBaaS Sprint
21:06:13 <mestery> the LBaaS sprint is next week.
21:06:20 <mestery> markmcclain and I will be attending from a core perspective.
21:06:31 <mestery> #link https://etherpad.openstack.org/p/neutron-juno-lbaas-mid-cycle Parity Sprint
21:06:35 <mestery> And the parity sprint is in July.
21:06:38 <mestery> FYI.
21:06:44 <mestery> OK, moving on to bugs.
21:06:49 <mestery> #topic Bugs
21:06:58 <mestery> Our top bug this week: https://bugs.launchpad.net/neutron/+bug/1325737
21:06:59 <uvirtbot> Launchpad bug 1325737 in neutron "Public network connectivity check failed after VM reboot" [High,Confirmed]
21:07:11 <mestery> I have triaged this down to likely being a nova issue.
21:07:16 <mestery> salv-orlando: We discussed in channel earlier today.
21:07:54 <salv-orlando> mestery: weren’t we tracking it with another bug number?
21:07:54 <mestery> When we hit this issue, the guest appears to be hung, as dumping the console returns no data.
21:08:08 <mestery> salv-orlando: Yes
21:08:09 <mestery> one second
21:08:17 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1323658
21:08:20 <uvirtbot> Launchpad bug 1323658 in nova "SSH EOFError - Public network connectivity check failed" [Undecided,New]
21:08:26 <mestery> salv-orlando: Too many bugs :)
21:08:50 <salv-orlando> mestery: I don’t want to appear pedantic but if we have several bugs open for the same issue people will comment on either of the two and we might lose info
21:08:58 <mestery> salv-orlando: Agreed.
21:09:00 <salv-orlando> I promise to stop my pedantry here for today
21:09:10 <mestery> #action mestery to consolidate "ssh timeout" bugs post-meeting.
21:09:16 <nati_ueno> salv-orlando: That is a huge loss for the community
21:09:20 <mestery> salv-orlando: You agree with my assessment of the bug though?
21:10:09 <salv-orlando> mestery: I agree, but still can’t reproduce locally. It seems the instance takes the ip (as shown in syslog)
21:10:26 <salv-orlando> but then for some reason either it hangs or the ssh server does not start
21:10:30 <mestery> salv-orlando: OK, I'll ping the nova folks to have a look at my analysis as well.
21:10:31 <armax> mestery, salv-orlando: I pushed this one https://review.openstack.org/#/c/98483/
21:10:40 <armax> but the exact failure mode is yet to be reproduced
21:10:48 <mestery> armax: That change actually causes the failure to manifest slightly differently for me :(
21:10:49 <armax> this bug is a coward :)
21:10:53 <mestery> armax: :)
21:11:16 <mestery> armax: With that change, infra is trying to save a VM when it fails.
21:11:26 <mestery> armax: I looked at two over the weekend, but couldn't find much useful info :(
21:11:26 <salv-orlando> and obviously then there is the other thing about why adding the wait_for_server_active in setup triggers this all over the place
21:11:56 <mestery> If we hit this again with that patch armax, infra will load your and my ssh key so we can get access. I'll have them ping either of us in #openstack-neutron, ok?
21:12:16 <armax> mestery: ok
21:12:21 <mestery> OK, there is one other bug enikanorov wanted me to point out: https://bugs.launchpad.net/neutron/+bug/1328162
21:12:22 <uvirtbot> Launchpad bug 1328162 in neutron "Tempest fails to delete firewall in 300 seconds" [High,Confirmed]
21:12:40 <mestery> This one is being worked by enikanorov (who couldn't be here), he just wanted it pointed out here as a high priority one.
21:13:06 <mestery> Any other bugs the team should be aware of?
21:13:09 <SridarK> mestery: I am also taking a look at the logs - nothing very conclusive thus far
21:13:21 <mestery> SridarK: Thanks! Please sync with enikanorov as well.
21:13:30 <SridarK> mestery: will keep in loop
21:13:37 <salv-orlando> mestery: you put the SELECT FOR UPDATE issue in the agenda
21:13:54 <salv-orlando> however if we start talking about that, it will probably mean the end of the meeting
21:13:57 <mestery> salv-orlando: I think it's a leftover, though I still need to work with jaypipes to document that.
21:14:00 <mestery> salv-orlando: Correct. :)
21:14:03 <mestery> salv-orlando: So, let's move on. :)
21:14:11 <salv-orlando> can we agree to put an action for me to open a mailing list thread.
21:14:17 <mestery> salv-orlando: Absolutely!
21:14:27 <mestery> #action salv-orlando to start mailing list thread for "SELECT FOR UPDATE" issue.
21:14:30 <mestery> salv-orlando: ^^^ :)
21:14:31 <salv-orlando> then rossella_s and the other folks interested in this will chime in
21:14:44 <mestery> Thanks salv-orlando.
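The "SELECT FOR UPDATE" issue referenced above comes up because `SELECT ... FOR UPDATE` does not give the expected row locks on Galera multi-writer MySQL clusters. One commonly discussed alternative is an optimistic, compare-and-swap style update; the sketch below illustrates the idea using sqlite3 and an invented table, not Neutron's actual schema.

```python
import sqlite3

# Illustrative schema only: a row guarded by a revision counter.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ipallocation (id INTEGER PRIMARY KEY, addr TEXT, revision INTEGER)"
)
conn.execute("INSERT INTO ipallocation VALUES (1, '10.0.0.3', 0)")


def update_with_retry(conn, row_id, new_addr, max_retries=3):
    """Compare-and-swap: make the UPDATE conditional on the revision we
    read, and retry if a concurrent writer bumped it first."""
    for _ in range(max_retries):
        (rev,) = conn.execute(
            "SELECT revision FROM ipallocation WHERE id = ?", (row_id,)
        ).fetchone()
        cur = conn.execute(
            "UPDATE ipallocation SET addr = ?, revision = ? "
            "WHERE id = ? AND revision = ?",
            (new_addr, rev + 1, row_id, rev),
        )
        if cur.rowcount == 1:  # our revision still matched; write took effect
            return True
    return False


assert update_with_retry(conn, 1, "10.0.0.4")
```

Under contention the conditional UPDATE simply matches zero rows and the loop retries, instead of blocking on a lock that a multi-writer cluster cannot honor across nodes.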
21:14:48 <mestery> #topic Team Discussion Topics
21:14:50 <salv-orlando> I don’t know if HenryG wants also to take ownership of coordinating this
21:14:54 <salv-orlando> but that’s for another day
21:15:07 <mestery> blogan: This is your section. :)
21:15:08 <mestery> #link http://lists.openstack.org/pipermail/openstack-dev/2014-June/036629.html
21:15:16 <mestery> The LBaaS team had a question for the broader Neutron team.
21:15:22 <blogan> mestery: thanks
21:15:24 <mestery> It was a ML thread, but they wanted visibility here to discuss.
21:15:40 <mestery> blogan: Can you phrase the question here?
21:16:05 <blogan> basically it's: what is the most acceptable strategy for backwards compatibility between the old version of the lbaas api and the new?
21:16:42 <marun> is the existing lbaas api any different from the rest of the api?
21:16:53 <blogan> one option is to keep one API but any requests that go in the old format get translated to the new API's object model
21:17:15 <blogan> marun: what do you mean by rest of the API? neutron API?
21:17:28 <blogan> marun: or the new API?
21:17:32 <blogan> new lbaas API
21:17:34 <salv-orlando> blogan: is this API still part of the neutron endpoint or is it running in its own endpoint?
21:17:46 <blogan> salv-orlando: it's still part of neutron api
21:17:47 <salv-orlando> blogan: obviously I mean the new lbaas API
21:17:52 <markmcclain> for now we'd need to keep same endpoint
21:17:56 <marun> I mean, aren't we mandated to maintain support for an api for at least one cycle following deprecation?
21:18:18 <mestery> marun: Yes, we will support the old one per the deprecation guidelines.
21:18:35 <salv-orlando> blogan: can you guarantee there is always a mapping from the old model to the new model?
21:18:40 <blogan> yes but the question is, since the old API and new API are different, should there be a new load balancing v2 extension and plugin or just keep the old ones that translate the old APIs to the new object model?
21:18:41 <marun> So do we really have a choice?
21:19:01 <marun> The starting point is defining the new extension api
21:19:10 <marun> And then seeing if mapping the old to the new is possible.
21:19:12 <markmcclain> I think that translating makes the most sense
21:19:22 <regXboi> markmcclain: +1
21:19:26 <mestery> Translating means we keep the existing infra as well.
21:19:27 <blogan> salv-orlando: no I cannot guarantee that since the old API is 1:1 relationships and the new one has M:N and M:1 relationships
21:19:30 <markmcclain> otherwise deployers will have confusion
21:19:53 <mestery> OK, so now that we've raised the issue, can we have interested parties reply to blogan's email?
21:20:02 <mestery> I'm afraid this could chew up the rest of the meeting time if we let it here. :)
21:20:06 * markmcclain adds to do to reply
21:20:10 <blogan> mestery: it definitely could
21:20:14 <mestery> thanks markmcclain.
21:20:17 <mestery> blogan: :)
21:20:40 <salv-orlando> blogan: however, this means that the existing object model is a “subset” of the new one - and therefore everything you were able to create in the old object model could be created as well in the new one?
21:20:44 <mestery> blogan: Thanks for joining us, and we'll close on this hopefully this week on the ML.
21:21:05 <salv-orlando> mestery, blogan: ok, let’s move this to the mailing list.
21:21:13 <mestery> salv-orlando: thanks :)
21:21:19 <blogan> salv-orlando: yes ML, or we can talk after if you want
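The "translate old requests onto the new object model" option blogan describes can be sketched roughly as below. Class and field names are invented for illustration, not the actual LBaaS v2 resources. The point is that the v1 model's 1:1 vip-to-pool pairing maps onto a degenerate case of the new shape (one loadbalancer with a single listener), which is why translation works in that direction even though the reverse mapping is not guaranteed.

```python
from dataclasses import dataclass, field


# Hypothetical stand-ins for the old (v1) and new (v2-style) resources.
@dataclass
class OldVip:           # v1: a vip is tied 1:1 to a pool
    address: str
    protocol_port: int
    pool_id: str


@dataclass
class Listener:         # v2-style: a loadbalancer has many listeners
    protocol_port: int
    default_pool_id: str


@dataclass
class LoadBalancer:
    vip_address: str
    listeners: list = field(default_factory=list)


def translate_v1_vip(vip: OldVip) -> LoadBalancer:
    """Map a v1 vip onto the new shape: one loadbalancer carrying a
    single listener that points at the vip's pool."""
    lb = LoadBalancer(vip_address=vip.address)
    lb.listeners.append(Listener(vip.protocol_port, vip.pool_id))
    return lb


lb = translate_v1_vip(OldVip("10.0.0.5", 80, "pool-1"))
assert lb.listeners[0].default_pool_id == "pool-1"
```

With a layer like this, requests arriving in the old format can keep working against the new object model for the deprecation period, while new clients talk to the v2 resources directly.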
21:21:27 <mestery> OK, one other item to discuss here, per the agenda: VLAN trunk proposals.
21:21:39 <mestery> See the agenda, there are 3 proposals related to this, some in spec form.
21:21:44 <mestery> I think we need to converge these.
21:21:56 <mestery> And I think we want this feature in Juno, it helps the NFV use case for example.
21:22:02 <mestery> Comments from other cores?
21:22:12 <salv-orlando> mestery: I gave my opinion on one of these specs. I need to check what the proposer thought about it...
21:22:20 <mestery> salv-orlando: Thank you!
21:22:48 <salv-orlando> I understand the use case, but I want this to be achieved without adding concepts such as sub-ports
21:22:51 <mestery> So, I would propose other cores please review these and let's see if we can collapse them and approve a single spec for Juno yet.
21:23:01 <rkukura> mestery: I’ll review these specs
21:23:02 <mestery> salv-orlando: I agree with that comment 100%.
21:23:05 <mestery> rkukura: Thanks!
21:23:06 <markmcclain> salv-orlando: +1
21:23:15 <marun> did we get feedback from geoff arnold as to the survey of changes (from summit session) required for nfv?
21:23:28 <mestery> marun: No, but I can reach out to him.
21:23:45 <mestery> marun: There is a weekly NFV meeting now, they are tracking "trunk ports" as a requirement from neutron though.
21:23:49 <marun> mestery: I think it's represented in raw form in the etherpad but hopefully he has more details.
21:23:57 <mestery> marun: OK, thanks.
21:24:04 <marun> mestery: https://etherpad.openstack.org/p/servicevm
21:24:09 <marun> (at the end)
21:24:21 <mestery> OK, let's move on now, I know HenryG had something he wanted to comment on for the parity section of the agenda, and he had to leave after 30 minutes :)
21:24:25 <mestery> #topic Nova Parity
21:24:29 <mestery> markmcclain HenryG: Hi!
21:24:36 <markmcclain> hi
21:24:46 <HenryG> Hi
21:25:02 <markmcclain> so there's a spec available for database migration work
21:25:17 <HenryG> Yes, we have a more-or-less final design proposal which supports a sort of downgrade.
21:25:24 <mestery> Link by chance?
21:25:36 <HenryG> #link https://review.openstack.org/95738
21:25:52 <mestery> nati_ueno: I think you signed up to review this one, right?
21:25:57 <mestery> nati_ueno: From a core perspective?
21:25:59 <nati_ueno> mestery: sure
21:26:03 <mestery> nati_ueno: Awesome!
21:26:14 <mestery> markmcclain salv-orlando: I assume you both will review this as well?
21:26:19 <markmcclain> yes
21:26:30 <regXboi> mestery: I put reading it on my list as well, I've got some scars from this in my past
21:26:39 <mestery> markmcclain regXboi: Thanks!
21:26:48 <mestery> We have good core coverage on this one then, which is great.
21:26:56 <salv-orlando> mestery: sure
21:27:00 <mestery> Let's see if we can converge on the spec and merge it so work can move forward!
21:27:07 <mestery> HenryG: Thanks for leading the DB Migration efforts!
21:27:15 <salv-orlando> I’m even happy to do some code there - it’s not rocket science but there’s a lot of stuff to cover
21:27:48 <mestery> salv-orlando: Great! I'll let you work with HenryG jlibosva and others to divvy up the work.
21:27:54 <mestery> Anything else on parity markmcclain?
21:27:55 <markmcclain> other parity item is we're starting to work on code for altering device mgmt from nova to neutron… contact me offline for folks interested in participating
21:28:08 <mestery> markmcclain: thanks!
21:28:54 <mestery> #topic Docs
21:29:00 <mestery> emagana: Hi! Any updates for us this week?
21:29:13 <emagana> hi there! Good news and bad news!
21:29:37 <emagana> Good news: we have very few open bugs in neutron-docs
21:29:46 <emagana> We have been closing a few of them lately
21:29:57 <mestery> That is good news emagana!
21:30:02 <emagana> Bad news: we have very few open bugs in neutron-docs
21:30:15 <regXboi> heh
21:30:27 * mestery notes the irony.
21:30:27 <emagana> I have seen almost zero new bugs open for docs related to Neutron
21:30:40 <mestery> emagana: Does a bug get opened automatically when DocImpact is added?
21:30:46 <emagana> I want to encourage developers to add Doc Impact on your changes
21:30:49 <mestery> emagana: I've seen reviews with that tag, have they not opened bugs?
21:31:08 <emagana> mestery: That is correct!
21:31:09 <banix> mestery: yes, the bugs get opened automatically
21:31:24 <mestery> emagana: As an example, did this change generate a Doc bug: https://review.openstack.org/#/c/95060/
21:31:37 <emagana> So, reviewers, make sure that DocImpact is added
21:31:51 <mestery> Good advice emagana.
21:32:27 <mestery> emagana: Thanks for the update! Anything else?
21:32:29 <emagana> I don't want to take more time.. so, I am done unless somebody wants to add something else
21:32:35 <mestery> emagana: Thanks!
21:32:40 <markmcclain> seems that if the spec has doc changes in it we should ensure DocImpact flag is in commit msg
21:32:48 <mestery> markmcclain: +1
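For reference, the DocImpact mechanism works by scanning merged commit messages: when the tag appears in the message body, a docs bug is opened automatically against the docs project. A commit message carrying the flag looks roughly like this (the summary line and body text here are invented examples):

```
ML2: add configuration option for extension drivers

Introduces the new option and documents its default value in the
sample configuration file.

DocImpact

Change-Id: I...
```

Reviewers can then check for the tag the same way they check for a bug or blueprint reference before approving a change with user-visible behavior.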
21:33:03 <mestery> #topic Tempest
21:33:06 <mestery> mlavalle: Hi!
21:33:09 <mlavalle> Hi
21:33:21 <mlavalle> we have a total of 26 api tests merged
21:33:25 <mlavalle> working on the last 3
21:33:36 <mestery> mlavalle: Great!
21:33:51 <mlavalle> also keeping an eye on the LBaaS work in case we need to adjust the tests for the new API
21:34:04 <mestery> mlavalle: Great, thanks!
21:34:06 <mlavalle> last week merged the spec for the work to be done on scenarios
21:34:11 <mlavalle> https://review.openstack.org/#/c/95600/
21:34:20 <mestery> #link https://review.openstack.org/#/c/95600/
21:34:36 <mlavalle> right now I am writing the tutorial on scenario tests that will become part of the Tempest docs
21:34:38 <nati_ueno> nice
21:35:02 <Sukhdev> mlavalle: that is great
21:35:06 <mlavalle> and also writing specs for the scenarios themselves
21:35:15 <mestery> mlavalle: This is great work!
21:35:27 <mlavalle> once I have that, I will send message to ML to invite developers to write scenarios for us
21:35:45 <mestery> mlavalle: Sounds like a plan.
21:36:01 <mlavalle> that's all I have and I'll see you next week in San Antonio
21:36:04 <marun> tangent: how do we ensure evolution of specs?
21:36:19 <marun> I'll bring up on mailing list if there isn't an easy answer.
21:36:30 <mestery> marun: Patches to approved specs.
21:36:33 <mlavalle> marun: no easy answer I think
21:36:51 <marun> ok, will bring up on ml
21:36:54 <mestery> thanks marun.
21:37:01 <mestery> #topic L3
21:37:02 <marun> if we use bugs/bp to evolve code, we'll need something to evolve specs too
21:37:06 <carl_baldwin> mestery: hi
21:37:08 <mestery> carl_baldwin: Hi!
21:37:17 <carl_baldwin> We’re working hard on DVR.
21:37:21 <mestery> Yay!
21:37:28 <carl_baldwin> The extension patch is shaping up.
21:37:38 <mestery> carl_baldwin: We merged our first 2 DVR patches I believe, right?
21:37:43 <carl_baldwin> I’m reaching out to owners of the L3 / L2 patches and new patches should be coming soon.
21:37:57 <carl_baldwin> mestery: Yes, we’ve merged a few, paving the way.
21:38:57 <carl_baldwin> I’m still working on a document that should be useful for testing.
21:39:21 <carl_baldwin> Need updates to the L3 and L2 patches before I can do much work on that.
21:39:56 <carl_baldwin> All of the other L3 topics are covered on the team page.
21:40:07 <mestery> Thanks carl_baldwin!
21:40:21 <mestery> #topic IPv6
21:40:24 <mestery> sc68cal: Hi there!
21:40:28 <carl_baldwin> I’ll continue to work directly with the patch owners.
21:40:29 <sc68cal> Hello!
21:40:34 <mestery> carl_baldwin: Thanks!
21:41:01 <sc68cal> So we have a bug report for the floating IP v4/v6 problem baoli found - it is listed on the agenda
21:41:29 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1323766
21:41:30 <uvirtbot> Launchpad bug 1323766 in neutron "Incorrect Floating IP behavior  in dual stack or ipv6 only network" [Undecided,New]
21:41:53 <markmcclain> it's a fun one for the few folks running dual stacks
21:42:11 <sc68cal> We will most likely dig into it in the coming weeks - but it does raise questions about implicit v4-isms in the Networking API
21:43:04 <sc68cal> The only other thing I'd like to bring up is the patch to add the subnet attributes to neutronclient
21:43:19 <mestery> #link https://review.openstack.org/#/c/75871/
21:43:27 <sc68cal> Beating me to it :)
21:43:34 <mestery> Looks like markmcclain has a -2 on that one at the moment.
21:43:41 <markmcclain> right was waiting on final attribute decision in server
21:43:54 * markmcclain needs to catch up on spec update
21:43:55 <mestery> markmcclain: OK, got it.
21:44:18 <mestery> sc68cal: Per our discussion, you still want to proceed with 2 attributes, right?
21:44:39 <sc68cal> Correct - the two attribute spec is the result of months of subteam meetings
21:44:53 <sc68cal> we originally started with a single attribute, but then that BP was superseded by the two attribute spec
21:44:55 <mestery> sc68cal: OK, that makes sense.
21:45:20 <sc68cal> I also realized that I got a DevStack patch in that assumes the attributes are usable in the client
21:45:34 <mestery> sc68cal: The 2 attribute approach?
21:45:39 <sc68cal> mestery: yes
21:45:56 <sc68cal> I had patches to devstack for our lab environment that I pushed to upstream
21:46:09 <sc68cal> that matches very closely to how we deploy our clusters
21:46:19 <mestery> So, is anyone against the two attribute approach? Seems as if the IPv6 subteam has converged on this approach.
21:46:31 * markmcclain is lone holdout
21:46:40 <mestery> markmcclain: OK, lets syncup offline on this one then. :)
21:46:46 <regXboi> link to thread?
21:46:48 <markmcclain> will do
21:47:07 * mestery waits to see if sc68cal digs the link out ...
21:47:17 <sc68cal> I don't think we've had a mailing list discussion about it
21:47:37 <mestery> sc68cal: OK, no worries.
21:47:39 <sc68cal> The only link could be the single attribute spec - let me get that
21:47:54 <sc68cal> #link https://review.openstack.org/#/c/87987/
21:47:57 <mestery> sc68cal: thanks!
21:47:59 <sc68cal> shoot
21:48:02 <sc68cal> that's the wrong one
21:48:02 <mestery> sc68cal: Anything else on IPv6 this week?
21:48:13 <sc68cal> #link https://review.openstack.org/#/c/92164/
21:48:17 <sc68cal> No, that's it
21:48:29 <mestery> sc68cal: Thanks!
21:48:50 <mestery> SumitNaiksatam is out this week, so we'll skip the sub-teams he's leading for today.
21:49:00 <mestery> #topic Open Discussion
21:49:08 <mestery> anteaya: You had a note around DriverLog in here.
21:49:31 <mestery> Actually, I'll cover this too.
21:49:47 <mestery> anteaya and I noticed DriverLog is reporting data which isn't actually right in some cases.
21:49:53 <mestery> We're going to try and clean this up in the coming week.
21:50:07 <mestery> Also, there are inconsistencies between that and what anteaya is tracking in infra.
21:50:13 <Sukhdev> mestery: I noticed that too
21:50:21 <mestery> Sukhdev: Yes.
21:50:37 <mestery> So, anteaya and/or I may be reaching out to plugin/driver owners in the coming week.
21:50:56 <anteaya> sorry, got distracted
21:51:00 <mestery> And I'd like to thank anteaya for leading this effort around 3rd party CI. It's a huge win for the community and a lot of work!
21:51:05 <mestery> anteaya: ^^^ :)
21:51:06 <anteaya> thanks
21:51:17 <mestery> anteaya: Did I miss anything?
21:51:20 <nati_ueno> ++
21:51:22 <anteaya> if you aren't sure about a system ask me and we can review it together
21:51:29 <anteaya> nope, I'm good thanks
21:51:31 <markmcclain> mestery: should be noted that brocade isn't eval'ing their own patches
21:51:36 <mestery> #info Any questions on 3rd party CI, ask anteaya or mestery.
21:51:44 <mestery> markmcclain: That's one of the things anteaya and I noticed.
21:51:51 <mestery> markmcclain: Also, Tail-F hasn't posted a review since April.
21:51:57 <markmcclain> yep
21:52:00 <mestery> We may look to remove some things from tree once we sort this out.
21:52:04 <mestery> But we will give notice.
21:52:12 <anteaya> I'm less concerned about fixing driverlog and more concerned that ci systems are working
21:52:12 <mestery> I started a thread on Tail-f earlier today on the ML.
21:52:17 <mestery> anteaya: +1
21:52:22 <armax> markmcclain, mestery I have been noticing this hence my -2’s
21:52:40 <mestery> My goal is to clean this up (with anteaya's help) by the end of Juno-2.
21:52:41 <salv-orlando> if you’ve noticed, vmware is down too since friday. It was sending -1 to all patches. I’m fixing that - please do not be frustrated!
21:52:43 <mestery> So, stay with us.
21:52:51 <mestery> salv-orlando: Thanks for the update!
21:53:01 <markmcclain> armax: yes thanks for matching review to CI
21:53:29 <mestery> Anything else to bringup this week from anyone?
21:54:08 <mestery> If not, keep up the reviews and thanks for everyone's efforts in Juno so far!
21:54:24 <sweston_> mestery: yes, I've been working on ci for brocade.  we will be eval'ing our own patches soon
21:54:38 <mestery> sweston_: Great! I will be reaching out to you this week yet.
21:54:51 <sweston_> mestery: yay!!
21:55:01 <mestery> sweston_: :P
21:55:05 <regXboi> mestery: I will be traveling next week
21:55:12 * regXboi driving back from Memphis
21:55:21 * mestery makes a mental note to send lots of email to regXboi next week. :)
21:55:31 <mestery> regXboi: Thanks for the heads up. :)
21:55:41 <mestery> OK, thanks everyone! We'll see you all next week!
21:55:48 <nati_ueno> see ya!
21:55:48 * regXboi wanders back to ODL-land
21:55:50 <banix> bye
21:55:53 <mestery> And also, in #openstack-neutron, the ML, reviews, and possibly in my dreams. :)
21:55:55 <mestery> #endmeeting