14:00:12 <melwitt> #startmeeting nova
14:00:13 <openstack> Meeting started Thu Aug 23 14:00:12 2018 UTC and is due to finish in 60 minutes.  The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:16 <openstack> The meeting name has been set to 'nova'
14:00:19 <takashin> o/
14:00:23 <melwitt> greetings everyone
14:00:28 <stephenfin> o/
14:00:48 <edleafe> \o
14:00:48 <efried> ō/
14:00:51 * dansmith gurgles
14:00:51 <johnthetubaguy> o/
14:00:52 <kosamara> hi
14:01:02 <gibi> o/
14:01:04 <dansmith> whoa johnthetubaguy sighting!
14:01:04 <mriedem> o/
14:01:05 <melwitt> let's make a start
14:01:13 <melwitt> wow, hi johnthetubaguy o/
14:01:17 <csatari> ő/
14:01:21 * johnthetubaguy nods in shame
14:01:27 <melwitt> #topic Release News
14:01:36 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:01:43 <melwitt> today is the deadline for RCs
14:01:49 <melwitt> and we're having a RC3 today
14:01:55 <melwitt> #link https://etherpad.openstack.org/p/nova-rocky-release-candidate-todo
14:02:08 <melwitt> tracking RC3 changes in there ^
14:02:34 <melwitt> one of them I think we have to punt on, because the correct fix is not yet identified. are we doing a release note or something for that one mriedem?
14:03:02 <mriedem> we should,
14:03:03 <mriedem> but,
14:03:11 <mriedem> i can't get a straight answer out of anyone that knows anything about the problem
14:03:18 <melwitt> ah, okay
14:03:20 <mriedem> sahid is our best bet probably but i'm not sure if he's gone now
14:03:24 <mriedem> so you guys can chase him downstream
14:03:32 <dansmith> why is revert not an option?
14:03:40 <mriedem> the rx/tx queue config stuff?
14:03:49 <dansmith> yeah
14:03:57 <mriedem> i mean, it's always an option
14:04:09 <dansmith> it's not overly hard here yeah?
14:04:11 <mriedem> presumably it works for at least one type of sriov port,
14:04:21 <dansmith> just saying, if there's no obvious workaround and we don't know what to do to fix it..
14:04:22 <mriedem> but i don't know if anyone claims to have tested it
14:04:23 <melwitt> okay, I just pinged sahid in the nova channel
14:04:32 <melwitt> about release note advice. not sure if he's around though
14:04:49 <mriedem> moshe said vnic type direct doesn't work,
14:04:58 <mriedem> which leads me to believe sahid tested with macvtap
14:05:02 <mriedem> but again, idk
14:05:10 <stephenfin> mriedem, melwitt, dansmith: Don't know how much I can do in a few hours, but I can take a look
14:05:15 <stephenfin> If sahid isn't about
14:05:26 <dansmith> mriedem: moshe's email makes it sound like turning this on in tripleo was the trigger?
14:05:27 <mriedem> stephenfin: that would be great - it's also in the ML
14:05:31 <stephenfin> ack
14:05:35 <mriedem> dansmith: depends on the vnic type
14:05:36 <dansmith> so does just leaving it disabled (despite their default) work okay?
14:05:40 <melwitt> yep thanks stephenfin
14:06:01 <mriedem> if vnic_type == 'direct' we don't set some stuff in the domain xml and it explodes
14:06:06 <mriedem> or we set the wrong thing
14:06:14 <mriedem> b/c of some other TODOs related to that rx/tx queue code
14:06:16 <dansmith> regardless of the config?
14:06:20 <mriedem> there are TODOs on top of TODOs in there
14:06:28 <mriedem> tripleo defaults to 512 in the rx queue
14:06:37 <mriedem> so the workaround in tripleo is, don't default
14:06:44 <dansmith> right, that's what I'm asking..
14:06:45 <stephenfin> I understood it as we _do_ set it when we shouldn't
14:06:54 <dansmith> if there is a "don't configure that" workaround, then revert is too nuclear
14:06:54 <stephenfin> set the XML attribute, that is
14:07:00 <dansmith> if there's not a workaround, then we should probably revert
14:07:10 <melwitt> sahid just said in the nova channel, he'll reply with all the info he has
14:07:10 <mriedem> stephenfin: there is some code which assumes a default vhost interface driver,
14:07:17 <mriedem> which doesn't work for rx queue with direct vnics, apparently
14:07:19 <mriedem> per moshe's paste
14:07:39 <efried> I'll propose a fast-fail patch whenever vnic_type=='direct'
14:07:46 <mriedem> ha
14:07:47 <efried> (too soon?)
14:07:51 <mriedem> right on time
14:08:19 <mriedem> so,
14:08:34 <mriedem> if we aren't sure by eod, we can at least doc a known issue reno with the bug
14:08:40 <mriedem> saying "some types of vnics don't work with rx queues"
14:08:46 <mriedem> "test it first, obviously, dummy"
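(Aside: a minimal sketch of the fast-fail idea efried floats above. The names are hypothetical and this is not the actual nova fix; it simply refuses an rx queue size for vNIC types that have no virtio/vhost backend to apply it to.)

    # Illustrative sketch only, not nova code: fail fast instead of emitting
    # domain XML that libvirt will reject later.
    VNIC_TYPE_DIRECT = 'direct'  # SR-IOV passthrough, the failing case above

    def check_rx_queue_size(vnic_type, rx_queue_size):
        """Reject rx queue sizing for vNIC types that cannot honour it."""
        if rx_queue_size and vnic_type == VNIC_TYPE_DIRECT:
            raise ValueError(
                'rx_queue_size is not supported for vnic_type=%s' % vnic_type)

    if __name__ == '__main__':
        check_rx_queue_size('normal', 512)      # virtio/vhost interface: fine
        try:
            check_rx_queue_size('direct', 512)  # the reported failure mode
        except ValueError as exc:
            print(exc)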
14:09:05 <melwitt> ok. who will propose the reno?
14:09:08 <mriedem> i can crank out that docs change as a placeholder for now
14:09:10 <mriedem> i can
14:09:19 <melwitt> okay cool, thanks
14:09:29 * mriedem mriedem to write a docs/reno patch for https://review.openstack.org/#/c/595592/
14:09:34 <mriedem> oops
14:09:41 <mriedem> #action mriedem to write a docs/reno patch for https://review.openstack.org/#/c/595592/
14:09:49 <mriedem> i guess you have to as chair
14:09:59 <melwitt> yeah, I wasn't sure how that worked
14:10:01 <melwitt> #action mriedem to write a docs/reno patch for https://review.openstack.org/#/c/595592/
14:10:06 <melwitt> #action mriedem to write a docs/reno patch for https://review.openstack.org/#/c/595592/
14:10:17 <melwitt> ok, anything else for release news before moving on?
14:10:18 <efried> is there somewhere that those show up before the end of the meeting?
14:10:27 <efried> the bot doesn't ack them
14:10:43 <melwitt> I dunno, I've wondered that too
14:10:48 <mriedem> they do
14:10:51 <dansmith> in the meeting summary
14:10:56 <mriedem> right
14:10:58 <melwitt> before the meeting is ended?
14:10:59 <mriedem> html
14:11:02 <mriedem> no
14:11:13 <melwitt> yeah that's what efried was asking
14:11:14 <efried> we'll see if the minutes include one, two, or three instances of that message :)
14:11:14 <mriedem> http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-08-16-21.00.html
14:11:52 <melwitt> yeah, we'll see what happens after the meeting
14:11:59 <melwitt> #topic Bugs (stuck/critical)
14:12:09 <melwitt> no critical bugs in the link
14:12:14 <melwitt> #link 45 new untriaged bugs (up 5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:12:20 <melwitt> #link 7 untagged untriaged bugs (up 5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:12:41 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:12:45 <melwitt> #help need help with bug triage
14:12:51 <melwitt> Gate status
14:12:56 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:13:22 <melwitt> anecdotally, I've been seeing a lot of gate timeouts and seemingly random failures
14:13:31 <melwitt> 3rd party CI
14:13:36 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:13:52 <melwitt> anyone have anything else for bugs or gate status or third party CI?
14:14:07 <melwitt> #topic Reminders
14:14:12 <melwitt> #link Stein Subteam Patches n Bugs: https://etherpad.openstack.org/p/stein-nova-subteam-tracking
14:14:20 <melwitt> #link Stein PTG planning: https://etherpad.openstack.org/p/nova-ptg-stein
14:14:25 <melwitt> #link Rocky retrospective for the PTG: https://etherpad.openstack.org/p/nova-rocky-retrospective
14:14:42 <mriedem> re gate status https://bugs.launchpad.net/nova/+bug/1788403 has been gumming up the works
14:14:42 <openstack> Launchpad bug 1788403 in OpenStack Compute (nova) "test_server_connectivity_cold_migration_revert randomly fails ssh check" [Medium,Confirmed]
14:14:46 <mriedem> i'll e-r that after the meeting
14:14:59 <mriedem> 72 hits in 7 days
14:15:24 <melwitt> oh yeah, I've seen that one
14:15:46 <melwitt> thanks
14:15:48 <melwitt> #topic Stable branch status
14:15:55 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:16:00 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:16:04 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:16:17 <mriedem> if we're good with https://review.openstack.org/#/c/594178/ for rc3,
14:16:20 <mriedem> we need stable cores to +2
14:16:26 <mriedem> which would be dansmith and someone else
14:16:30 <mriedem> johnthetubaguy: ?
14:16:37 <johnthetubaguy> yeah, just looking at it now
14:17:42 <melwitt> I've been saying I'm going to propose stable branch releases for queens/pike/ocata but we were waiting on a specific series of backports to merge. I need to go through and see if there's anything else to hold for
14:17:59 <melwitt> anything else for stable branch status?
14:18:03 <mriedem> i forgot about https://review.openstack.org/#/c/590801/
14:18:11 <mriedem> and can't remember if we were doing that for an rc?
14:18:36 <mriedem> "LGTM. This was a regression in Rocky which we backported to queens and  pike so we're going to get it in either way, plus this is extremely low  risk."
14:18:41 <melwitt> I don't think we identified it as such but it seems appropriate. it's a regression
14:19:05 <melwitt> oh nope, it's tagged
14:19:22 <mriedem> oh i bet i know why it didn't show up in lp
14:19:27 <mriedem> because it's marked as fixed on master
14:19:28 <melwitt> didn't show up in the link because it's no longer open?
14:19:30 <melwitt> yeah
14:19:43 <mriedem> approved
14:20:03 <melwitt> thanks for catching that
14:20:26 <melwitt> okay, moving on
14:20:33 <melwitt> #topic Subteam Highlights
14:20:46 <melwitt> cells v2, we had a meeting. dansmith want to summarize?
14:21:02 <dansmith> we talked through some of the big changes we have on the plate
14:21:12 <dansmith> current stuff like batching and handling down cells,
14:21:20 <dansmith> as well as upcoming fun stuff like cross-cell migration
14:21:39 <dansmith> and we highlighted a few of the patch sets that need more review and have been getting neglected
14:21:46 <dansmith> and finally, we established that I'm a terrible person
14:21:48 <dansmith> I think that's it
14:21:56 <melwitt> great
14:22:04 <melwitt> scheduler, efried?
14:22:07 <efried> Okay, here we go
14:22:07 <efried> /me glares squinty-eyed at Sigyn
14:22:07 <efried> tumbleweed rolls across dusty street
14:22:07 <efried> fingers twitch above holsters
14:22:14 <efried> #link nova-scheduler meeting minutes http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-08-20-14.00.html
14:22:15 <mriedem> ha
14:22:18 <mriedem> "great"
14:22:27 <efried> Still no recent pupdates. cdent says he'll resume these around PTG time. Yay!
14:22:45 <efried> Discussed status of:
14:22:45 <efried> #link reshaper series: https://review.openstack.org/#/q/topic:bp/reshape-provider-tree+status:open
14:22:45 <efried> which has since seen some additional reviews/revisions.
14:22:58 <efried> Bottom has procedural hold, but two +2s.
14:23:13 <efried> Remainder of series has reviews, needs more, should be landable very soon.
14:23:27 <efried> #link Gigantor SQL split and debug logging: https://review.openstack.org/#/c/590041/
14:23:27 <efried> Has one +2, gibi is on the hook to +A
14:23:42 <efried> #link consumer generation handling (gibi): https://review.openstack.org/#/q/topic:consumer_gen+(status:open+OR+status:merged)
14:23:42 <efried> #link ML thread on consumer gen conflict handling: http://lists.openstack.org/pipermail/openstack-dev/2018-August/133373.html
14:23:52 <efried> #link nested and shared providers for initial & migration (and other?) allocations: https://review.openstack.org/#/q/topic:use-nested-allocation-candidates+(status:open+OR+status:merged)
14:23:58 <mriedem> note on the reshaper series...
14:24:15 <mriedem> libvirt and xenapi drivers, the two that need reshaper, won't have any code started until next week at the earliest
14:24:51 <efried> I feel pretty strongly that we should not hold the series for those.
14:25:38 <mriedem> we don't have to debate it here, i was just noting it for others not in the emails
14:25:49 <mriedem> i'm on the not block forever side of the fence too
14:25:54 <efried> Even if it is imperfect - and that's a big "if" - we have a tendency to find and fix bugs as and when they are discovered by consuming code/operators/whatever.
14:25:56 <mriedem> i just need to go through the nova client side changes
14:25:58 <melwitt> yeah, I think it's fine to move forward with the series, fwiw. we can fix stuff after
14:26:14 <efried> cool beans
14:26:47 <melwitt> okay, next is notifications, gibi?
14:26:49 <efried> Since we're talking about reshaper
14:26:53 <efried> oh, I wasn't done :)
14:26:55 <melwitt> oops
14:26:56 <melwitt> sorry
14:27:00 * gibi holds
14:27:31 <efried> Matt's review of the API patch revealed a hole in policy handling, which led cdent to come up with this: https://review.openstack.org/#/c/595559/
14:27:31 <efried> ...and I was wondering if that would be appropriate to port to nova too.
14:27:51 * stephenfin worries about efried consuming operators
14:27:59 <mriedem> that would be harder in nova,
14:28:07 <efried> okay. just a thought.
14:28:09 <mriedem> placement defaults to admin-only for all routes except /
14:28:11 <cdent> yeah I left a comment to that effect
14:28:15 <efried> ight
14:28:15 <mriedem> nova is all over the board
14:28:27 <efried> moving on with sched subteam report...
14:28:30 <efried> #link Spec: Placement modeling of PCI devices ("generic device management") https://review.openstack.org/#/c/591037/
14:28:30 <cdent> harder because of finding urls, harder because of finding policies
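(Aside: a hedged oslo.policy sketch of what "admin-only for all routes except /" amounts to; placement's real policy code differs in detail, and the rule name here is made up.)

    from oslo_config import cfg
    from oslo_policy import policy

    # One default rule covering most placement routes; only the version
    # document at / is open to everyone.
    rules = [
        policy.RuleDefault('placement', 'role:admin',
                           description='Default rule for placement APIs.'),
    ]

    enforcer = policy.Enforcer(cfg.ConfigOpts())
    enforcer.register_defaults(rules)

    # creds would normally come from the request context
    print(enforcer.enforce('placement', {}, {'roles': ['admin']}))   # True
    print(enforcer.enforce('placement', {}, {'roles': ['member']}))  # False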
14:28:44 <efried> And now for the big one
14:28:49 <efried> Placement extraction
14:28:49 <efried> #link epic dev ML thread http://lists.openstack.org/pipermail/openstack-dev/2018-August/133445.html
14:28:49 <efried> #link epic discussion in #openstack-tc http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T15:27:57
14:28:59 <efried> TL;DR: 1) Consensus that we should put placement in its own repo - more on that in a minute.
14:29:00 <efried> 2) Consensus that initial placement-core team should be a superset of nova-core
14:29:00 <efried> 3) Still no consensus on whether placement should be under compute or independent governance
14:29:19 <efried> So back to 1), cdent and edleafe have started work on this in a
14:29:19 <efried> #link temporary placement github repo https://github.com/EdLeafe/placement
14:29:19 <efried> which is being experimented on and respun as needed to get it presentable before seeding the official openstack repo.
14:29:26 <efried> cdent, edleafe: comments?
14:29:40 <cdent> it "works" on https://github.com/EdLeafe/placement/pull/2
14:29:49 <efried> \o/
14:29:50 <cdent> but is messy, but cleaning it up should be fine
14:29:52 <edleafe> I will be re-extracting with the items cdent identified later today
14:30:32 <efried> Okay, that's it for sched subteam. Questions, comments, concerns, heckles?
14:30:49 <mriedem> at this point,
14:30:57 <mriedem> with what 2 weeks to ptg?
14:31:05 <mriedem> i assume we wait for decisions on #3?
14:31:25 <mriedem> i mean i know where i am, and it's in the ML
14:31:34 <mriedem> and gibi too
14:31:57 <efried> What do you mean, wait for decisions? You mean try to make that decision _at_ the PTG?
14:32:15 <mriedem> that's what i'm asking
14:32:22 <mriedem> i don't know if the ML thread is dying out or what
14:32:28 <mriedem> i haven't read the latest
14:33:22 <efried> I actually haven't caught up since last night, so my statement about #3 may have been premature.
14:33:33 <mriedem> to summarize my position on #3, i mostly don't care, but think the extraction is going to be a pain in the ass and would rather not deal with the governance question at the same time
14:33:46 <mriedem> once we've extracted,
14:33:48 <mriedem> setup a new core team,
14:33:51 <mriedem> and run that for 6 months,
14:33:59 <mriedem> then flipping the governance in time for T PTL elections is easy peasy
14:34:50 <mriedem> if people are vehemently opposed to ^ as a compromise, they should speak up
14:34:57 <melwitt> yeah, my position is that I would prefer we start with things extracted to a new repo, with a new placement-core team, under compute
14:34:59 <mriedem> dansmith: jaypipes: melwitt: edleafe: cdent ^
14:35:54 <jaypipes> mriedem: I am muting myself.
14:35:54 <cdent> I'm not opposed to that, but as I understand it, some of the conversations haven't fully resolved for people not currently here.
14:36:25 <mriedem> that's why i asked about the ptg
14:36:31 <mriedem> where people that should care will be in person
14:36:34 <mriedem> and it can be hashed out
14:36:35 <efried> My concern - that separation of governance gets put off indefinitely - is assuaged by reading that as "we intend and plan to split governance in T unless something serious happens to make us think that's the really wrong thing to do".
14:36:45 <edleafe> I'm not opposed, even if it seems unnecessary
14:37:27 <mriedem> i have always assumed long-term separate governance personally
14:37:56 <mriedem> if it's T or later, as i said,
14:38:00 <melwitt> fwiw, I don't want to put it off indefinitely. I want to start with things under compute, make some progress on the major tightly coupled work we need (vGPUs, NUMA, affinity, shared storage) and when that tails off, flip the governance
14:38:08 <mriedem> depends on getting through the extraction and new core team first for a cycle at least
14:38:42 <mriedem> i also don't want to hold hostages
14:38:47 <cdent> I don't think we need to rush on making a decision. There are people involved in the discussion who don't process email as fast as most of the people talking right now. From what various people have told me in the past week, they want to think about things for a few more days.
14:38:54 <mriedem> if nova isn't going to make progress client-side on ^ then we can't hold forever
14:38:55 <mriedem> IMO
14:39:22 <efried> Yes, I have a problem with the premises of "tightly coupled" and "tails off", which are recipes for "hostage" and putting off indefinitely.
14:39:23 <cdent> So I'd prefer to only proceed on the extraction and make, as yet, no assertions on the governance side of things
14:39:40 <mriedem> cdent: i think we're saying the same thing
14:39:43 <mriedem> so agree from me
14:39:46 <melwitt> I agree, if things are dragging and the client-side isn't working to make progress, then that is another issue and we won't hold forever over that
14:41:46 <cdent> I have similar concerns to efried about the perception of coupling and tailing, but I don't think we need to address that right here right now
14:41:46 <melwitt> but we are not there yet, people have been working on the client-side. but we need to prioritize that higher
14:41:46 <efried> So I'm fine with setting an actual plan to run Stein under compute and separate governance in Train. Accepting that plans can change, but that's the way we execute if things go fairly close to expected.
14:41:46 <efried> rather than explicitly leaving it open-ended.
14:42:22 <cdent> efried: yes, that's a good idea (I think), but I think we should wait before declaring that. Both ttx and dhellmann seem to have more they want to do
14:42:36 <efried> ack
14:42:52 * efried feels progress was made toward a consensus \o/
14:42:57 <cdent> yes
14:44:15 <melwitt> okay, anything else for scheduler subteam before we move on to notifications?
14:44:28 <efried> Not from me.  Thanks.
14:44:45 <melwitt> okay, gibi?
14:44:50 <gibi> I was on PTO so no meeting and no status mail. I will resume those next week
14:45:07 <gibi> nothing serious is ongoing in notification side at the moment
14:46:04 <gibi> that is all
14:46:12 <melwitt> cool, thanks
14:46:17 <melwitt> gmann, API subteam?
14:46:36 <mriedem> he's on PTO
14:46:50 <mriedem> working through the server view builder extension merge series
14:47:01 <melwitt> ah, okay. thanks
14:47:18 <melwitt> anything else for subteams before we move on?
14:47:28 <melwitt> #topic Stuck Reviews
14:47:42 <melwitt> no items in the agenda. anyone in the room have anything for stuck reviews?
14:47:47 <kosamara> yes
14:48:01 <kosamara> Bug https://review.openstack.org/#/c/579897
14:48:10 <melwitt> oh sorry, I missed it in the agenda
14:48:10 <kosamara> Hide hypervisor id on windows guests
14:48:15 <kosamara> :)
14:48:19 <melwitt> #link Bug: Hide hypervisor id on windows guests https://review.openstack.org/#/c/579897
14:49:00 <melwitt> there's a review from jaypipes there. is this stuck or are you just waiting for a response from him?
14:49:05 <kosamara> The bug is that Windows guests with PCI passthrough of Nvidia GPUs can't use the GPU due to a restriction of the nvidia driver
14:49:18 <melwitt> oh, I see now
14:49:19 <kosamara> I guess both.
14:49:20 <mriedem> we already have a similar thing elsewhere
14:49:29 <mriedem> and i thought there was another related wishlist type bug for this
14:49:45 <mriedem> from kosamara also,
14:49:46 <mriedem> but i can't find it
14:49:49 <kosamara> Which we patched some months ago
14:49:56 <mriedem> i might be thinking of the bp in rocky
14:50:05 <mriedem> https://blueprints.launchpad.net/nova/+spec/hide-hypervisor-id-flavor-extra-spec
14:50:07 <mriedem> yes ^
14:50:09 <kosamara> that was for all guests, yes that
14:50:27 <mriedem> so this is a bug in the same vein
14:50:30 <dansmith> does any other hypervisor allow for this?
14:50:39 <kosamara> this is in particular for windows guests. The problem is the extra HyperV tags, which reveal that there is a hypervisor
14:50:44 <dansmith> since this is about hiding hyper-v's signature in kvm, I would expect that they don't
14:51:26 <kosamara> dansmith: We have a flag to hide KVM's signature for the same reason
14:51:42 <dansmith> yeah I know, and I understand the use case
14:53:18 <kosamara> jaypipes: I want to add that this bug is triggered by a need of CERN users
14:53:41 <mriedem> pulling the CERN card eh
14:53:47 <dansmith> well, in that case!
14:54:58 <mriedem> so given we have prior art here,
14:54:59 <jaypipes> kosamara: I've said my piece on the review in question. I won't hold it up. I'm just not going to +2 it.
14:55:01 <dansmith> I dunno, I think I'm against this
14:55:17 <mriedem> why would the previous hide be ok but this isn't?
14:55:19 <dansmith> this is a libvirt-specific hack to disable flags to sidestep licensing at the expense of performance
14:55:25 <kosamara> It tries to solve a real end user problem, namely running engineering and rendering software, and it doesn't come from a vendor.
14:56:45 <kosamara> As mriedem says, the use case is essentially the same as the previous hiding.
14:57:04 <jaypipes> kosamara: like I said, I feel for you, and I don't hold this against you personally (or CERN). I just don't support hacks like this to get around what is essentially a vendor-specific licensing dilemma for NVIDIA users.
14:57:56 <melwitt> is there any correct way to fix this elsewhere?
14:58:12 <dansmith> this patch for sure is way too targeted
14:58:15 <dansmith> it's just hacked into place
14:58:27 <melwitt> gah, we only have a few minutes left here
14:58:42 <kosamara> melwitt: I don't see any other way
14:58:52 <dansmith> I'm not sure what knob we have for disabling the base flags right now, but I'm not sure why we need another one for windows specifically
14:59:18 <melwitt> okay, we'll have to move to the nova channel, have to wrap up here
14:59:19 <dansmith> I have another meeting
14:59:29 <kosamara> I'll continue in nova
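(Aside: for context on "the flag to hide KVM's signature" mentioned above, this is roughly what it means at the libvirt domain XML level. <kvm><hidden state='on'/></kvm> is standard libvirt; how nova ties it to the hide_hypervisor_id flavor extra spec is simplified away here, and the proposed patch concerns the additional Hyper-V enlightenments nova adds for Windows guests.)

    import xml.etree.ElementTree as ET

    def build_features(hide_hypervisor_id=False):
        # Sketch of the <features> element of a libvirt domain definition.
        features = ET.Element('features')
        ET.SubElement(features, 'acpi')
        if hide_hypervisor_id:
            # Hides the KVM signature from the guest; this is what lets the
            # NVIDIA driver work on a passed-through GPU.
            kvm = ET.SubElement(features, 'kvm')
            ET.SubElement(kvm, 'hidden', {'state': 'on'})
        return ET.tostring(features, encoding='unicode')

    if __name__ == '__main__':
        print(build_features(hide_hypervisor_id=True))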
14:59:32 <melwitt> #topic Open discussion
14:59:39 <melwitt> someone left a note
14:59:44 <melwitt> #link Edge Computing Group PTG schedule with a Nova session planned: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4
15:00:00 <melwitt> edge computing PTG etherpad for those interested ^
15:00:06 <mriedem> jaypipes said he was on the edge
15:00:13 <melwitt> heh
15:00:14 <mriedem> so he can be our rep
15:00:19 <melwitt> okay, we have to wrap, thanks everyone
15:00:21 <melwitt> #endmeeting