14:00:00 <gibi> #startmeeting nova
14:00:01 <openstack> Meeting started Thu Oct 18 14:00:00 2018 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 <openstack> The meeting name has been set to 'nova'
14:00:09 <mriedem> o/
14:00:13 <Luzi> o/
14:00:15 <edleafe> \o
14:00:15 <gmann> o/
14:00:18 <stephenfin> o/
14:00:22 <tssurya> o/
14:00:34 <gibi> hello everyone I will be your host today
14:00:37 <takashin> o/
14:00:47 * bauzas waves
14:00:49 <cdent> o/
14:01:04 <gibi> Let's get started
14:01:08 <gibi> #topic Release News
14:01:16 <mdbooth> o/
14:01:16 <gibi> #link Stein release schedule: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
14:01:27 <gibi> #link Stein runway etherpad: https://etherpad.openstack.org/p/nova-runways-stein
14:01:32 <gibi> #link runway #1: https://blueprints.launchpad.net/nova/+spec/use-nested-allocation-candidates (gibi) [END: 2018-10-23] next patch at https://review.openstack.org/#/c/605785
14:01:37 <gibi> #link runway #2: https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein (gmann) [END: 2018-10-25] https://review.openstack.org/#/q/topic:bp/api-extensions-merge-stein+status:open
14:01:43 <gibi> #link runway #3: <empty>
14:01:50 <gibi> and also the runway queue is empty
14:02:25 <gibi> anything to discuss about release or runways?
14:02:30 <mriedem> just a reminder,
14:02:34 <mriedem> specs don't go into the runways queue,
14:02:41 <mriedem> i had to remove another 2 specs that people posted in there yesterday
14:02:49 <mriedem> i thought mel was going to send a reminder email to the ML but i don't see it
14:03:08 <mriedem> the end
14:03:12 <gibi> mriedem: good point
14:03:35 <gibi> #topic Bugs (stuck/critical)
14:03:43 <gibi> no critical bugs
14:03:49 <gibi> #link 50 new untriaged bugs (down 11 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:03:54 <gibi> #link 8 untagged untriaged bugs (up 2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:04:00 <gibi> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:04:07 <gibi> #help need help with bug triage
14:04:23 <gibi> anything about bugs to discuss?
14:04:45 <gibi> Gate status
14:04:46 <gibi> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:04:58 <gibi> 3rd party CI
14:04:59 <gibi> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:05:16 <gibi> I haven't pushed a patch recently so I don't have first-hand experience with the gate
14:05:34 <gibi> based on the elastic-recheck page the gate looks OK
14:06:03 <gibi> anything about the gate to discuss?
14:06:32 <gibi> #topic Reminders
14:06:37 <gibi> #link Stein Subteam Patches n Bugs: https://etherpad.openstack.org/p/stein-nova-subteam-tracking
14:06:40 * efried_pto waves late
14:06:41 <cdent> If you have opinions about which versions of python the gate should be testing, there are a few reviews in governance arguing about it
14:07:08 <gibi> cdent: thanks
14:07:14 <cdent> #link https://review.openstack.org/#/c/610708/
14:07:19 <cdent> #link https://review.openstack.org/#/c/611080/
14:07:22 <cdent> #link https://review.openstack.org/#/c/611010/
14:07:52 <mriedem> would be nice if someone summarized that thread in the ML
14:08:21 <mriedem> where could i find a tc member that likes to write...
14:08:22 <sean-k-mooney> cdent: my 2 cents: if we drop py 3.5 testing then that can't be our minimum supported version, but i don't want to get into that thread here
14:08:35 <cdent> mriedem: I gave up
14:08:47 <mriedem> make the rookies do it
14:09:00 <mriedem> moving on...
14:09:16 <gibi> so one more reminder
14:09:17 <gibi> #link spec review day Tuesday Oct 23: http://lists.openstack.org/pipermail/openstack-dev/2018-October/135795.html
14:09:37 * gibi started the review day today as he will be off on the 23rd
14:09:52 <gibi> any other reminders?
14:09:57 <mriedem> yes,
14:10:00 <mriedem> if you have forum sessions,
14:10:05 <mriedem> get your etherpad started and linked to https://wiki.openstack.org/wiki/Forum/Berlin2018
14:10:14 <mriedem> i'm looking at stephenfin and bauzas
14:10:24 <stephenfin> ack
14:10:25 <bauzas> mriedem: yup, on my plate
14:10:29 <mriedem> and lee but he's out
14:11:06 <gibi> mriedem: thanks for the reminder
14:11:16 <gibi> #topic Stable branch status
14:11:22 <gibi> #link stable/rocky: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/rocky,n,z
14:11:27 <gibi> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:11:32 <gibi> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:11:39 <gibi> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:11:58 <gibi> anything to discuss about stable?
14:12:02 <mriedem> we'll need to do a rocky release soon
14:12:05 <mriedem> b/c we have 2 upgrade impacting issues
14:12:19 <mriedem> https://review.openstack.org/#/c/611315/ and https://review.openstack.org/#/c/611337/
14:12:27 <mriedem> dansmith: can you +W the latter ^ ?
14:12:35 <dansmith> probably
14:12:59 <mriedem> oh 3 https://review.openstack.org/#/c/610673/
14:13:05 <mriedem> we like to break upgrades round these parts
14:13:18 <dansmith> queued
14:14:15 <gibi> #topic Subteam Highlights
14:14:19 <gibi> Cells v2 (dansmith)
14:14:35 <dansmith> gibi: we don't have a meeting anymore, so I think we're not doing this bit now, but,
14:14:35 <mriedem> we should probably remove that section, the meeting is cancelled indefinitely
14:14:44 <gibi> dansmith: ahh good point
14:14:59 <dansmith> I was just saying in channel that the down cell stuff hit a test snag recently which I've now identified,
14:15:02 <gibi> #action gibi update the agenda to permanently remove the cellv2 and the notification section
14:15:03 <dansmith> so we can move forward with that soon
14:15:14 <dansmith> I think that's the major bit of cellsy news of late
14:15:19 <mriedem> gibi: already done
14:15:21 <dansmith> I was going to report it was stalled for that reason,
14:15:28 <dansmith> but the status changed in the last few minutes :)
14:15:38 <gibi> cool, thanks
14:15:45 <gibi> Scheduler (efried)
14:15:46 <efried> #link Minutes from this week's n-sch meeting http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-15-14.00.html
14:15:47 <efried> It was a short one. The only thing of note was a reminder to folks to review placement extract grenade patches, which spider out from
14:15:47 <efried> #link grenade stuff https://review.openstack.org/#/c/604454/
14:15:47 <efried> END
14:16:10 <mriedem> https://review.openstack.org/#/q/topic:cd/placement-solo+(status:open)
14:16:35 <gibi> thanks
14:16:37 <gibi> API (gmann)
14:16:48 <gibi> gmann posted the status to the ML
14:17:19 <gibi> #link http://lists.openstack.org/pipermail/openstack-dev/2018-October/135827.html
14:17:33 <gibi> #topic Stuck Reviews
14:17:41 <gibi> there is one item on the agenda
14:17:43 <gibi> Fail to live migration if instance has a NUMA topology: https://review.openstack.org/#/c/611088/
14:18:04 <gibi> stephenfin: do you want to open it up?
14:18:24 <gibi> or artom
14:18:31 <artom> sup?
14:18:35 <stephenfin> gibi: Yeah, not much to say about it, really
14:18:46 <artom> Oh, that, yeah
14:18:52 <artom> Can of worms: opened :D
14:19:04 <stephenfin> There are review comments there. cfriesen and artom have looked at it, but I imagine it's going to be contentious
14:19:26 <sean-k-mooney> to address that bug you need artom's
14:19:34 <sean-k-mooney> numa aware live migration spec
14:19:38 <sean-k-mooney> to be implemented
14:19:55 <dansmith> but this is about failing it, not fixing it right?
14:19:58 <stephenfin> As noted, I'd like to start enforcing this. It's a change in behavior but we have a policy of introducing changes in behavior in e.g. the API if there's a clear bug there
14:20:02 <mriedem> the only way anything works is if you accidentally land the server on a dest host that is identical to the source and the available resources are there on the dest, right?
14:20:03 <stephenfin> dansmith: correct
14:20:09 <gibi> spec #link https://review.openstack.org/#/c/599587/
14:20:09 <stephenfin> mriedem: also correct
14:20:19 <dansmith> I think refusing to intentionally break an instance makes sense
14:20:27 <dansmith> but I imagine we have to allow an override
14:20:28 <sean-k-mooney> mriedem: yes
14:20:37 <stephenfin> sean-k-mooney: I want to make this start failing until artom's spec lands
14:20:39 <dansmith> can we refuse unless force is used and a host is given?
14:20:56 <mriedem> also, can't this be done at a higher level than the libvirt driver?
14:21:03 <artom> dansmith, yeah, that's sort of where we've been heading - Chris made a good point about evacuating a server to an identical replacement, so the operator would know it's OK
14:21:07 <dansmith> right where we have visibility to the request
14:21:08 <mriedem> you know if the instance has pci_requests/numa topology from the db
14:21:16 <sean-k-mooney> dansmith: i mean if they are using force then all bets are off from the scheduler's point of view, so sure
14:21:18 <dansmith> artom: aye
14:21:23 <stephenfin> mriedem: Yeah, this is a first pass. We can definitely do it higher/earlier
14:21:46 <dansmith> stephenfin: so this isn't actually stuck you just wanted discussion?
14:21:58 <stephenfin> It was stuck when I added it
14:22:13 <mriedem> it's not really stuck
14:22:19 <dansmith> that's not what this section is for, yeah
14:22:22 <stephenfin> But it seems I may have talked artom around and I forgot to remove it. We can move on
14:22:27 <gibi> OK
14:22:27 <dansmith> stuck means cores can't agree
14:22:35 <dansmith> okay
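For context, a minimal sketch of the kind of pre-check being discussed, assuming a hypothetical helper name and override flag; the real change is at https://review.openstack.org/#/c/611088/ and may differ:

```python
# Illustrative sketch only: fail live migration up front when the
# instance has a NUMA topology, unless the operator explicitly
# overrides. Names here are hypothetical, not from the actual patch.

class MigrationPreCheckError(Exception):
    """Raised when a live migration fails its pre-checks."""


def check_can_live_migrate(instance_numa_topology, allow_override=False):
    """Reject live migration of instances with a NUMA topology.

    Until NUMA-aware live migration lands, the claimed CPU pinning and
    NUMA placement are silently lost on the destination host, so
    failing fast is safer than quietly corrupting the guest topology.
    """
    if instance_numa_topology is not None and not allow_override:
        raise MigrationPreCheckError(
            "Live migration of an instance with a NUMA topology is not "
            "supported; pass the override to proceed anyway.")
```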
14:22:43 <gibi> anything else that is stuck?
14:23:03 <gibi> #topic Open discussion
14:23:08 <gibi> Image Encryption Proposal (mhen, Luzi)
14:23:08 <gibi> Spec: https://review.openstack.org/#/c/608696/
14:23:09 <gibi> Library discussion: https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption
14:23:18 <Luzi> hi
14:23:20 <Luzi> as you might have noticed we would like to propose Image Encryption for OpenStack.
14:23:33 <Luzi> we have written specs for Nova, Cinder and Glance so far
14:23:46 <Luzi> the short version: image encryption would affect Nova in two ways:
14:24:04 <Luzi> 1. Nova needs to be able to decrypt an encrypted image, when creating a server from it
14:24:24 <Luzi> 2. Nova should be able to create an encrypted image from a server, with user given encryption key id and other metadata
14:25:08 <gibi> jaypipes left some comments already I also read the spec. I think it is a good start but needs some details around the API impact
14:25:10 <Luzi> we tried our best to answer the questions on the original spec and would appreciate further feedback on this to improve the proposal – we are currently looking into further specifying the API impact as requested
14:25:39 <Luzi> yes, that we are investigating right now, thank you for your comments btw :)
14:25:56 <Luzi> the other thing is:
14:26:03 <Luzi> as already mentioned on the ML, we created an etherpad to discuss the location for the proposed image encryption code:
14:26:18 <mriedem> is there a forum session for this?
14:26:22 <Luzi> except for exceptional cases.
14:26:23 <mriedem> it's more than just nova yes?
14:26:23 * mdbooth should review that. It looks a bit tied to the libvirt driver, and possibly shouldn't be.
14:26:28 <Luzi> no sadly not
14:26:36 <Luzi> #link image encryption library discussion https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption
14:26:42 <mriedem> should see if there are still forum session slots available
14:26:48 <sean-k-mooney> this sounds like something like os-brick
14:27:07 <sean-k-mooney> it's not volume based but perhaps there is some overlap
14:27:15 <Luzi> we would like to receive input and comments from all involved projects, so please participate if possible :)
14:27:46 <mriedem> so the image, backup and shelve offload APIs would all have to take encryption ID and metadata parameters for this?
14:27:56 <mriedem> *create image
14:28:04 <mriedem> those are the 3 APIs off the top of my head that create snapshots
14:28:13 <mriedem> and cross-cell resize is going to use shelve
14:28:14 <mriedem> ...
14:28:38 <mriedem> unless the server is created with that information to use later for snapshots
14:28:56 <Luzi> mriedem, that is what we are still investigating
14:28:58 <mriedem> or are we just talking about a proxy?
14:29:13 <mriedem> i.e. can the snapshot image be encrypted in glance after it's created from nova?
14:29:30 <mriedem> anyway, don't need to get into those details here
14:29:34 <mriedem> a forum session would have been nice,
14:29:37 <mriedem> and might still be an option
14:29:42 <mriedem> i doubt they filled all the available slots
14:29:44 <Luzi> not from our perspective
14:29:51 <sean-k-mooney> Luzi: in your proposal are the images encrypted or decrypted on the host while the instance is running
14:30:34 <mriedem> Luzi: if you or your team aren't going to be at the summit in berlin then yeah ignore me about a forum session
14:30:49 <mdbooth> Remember that LVM ephemeral disk encryption is useless from a security pov, btw
14:30:58 <Luzi> we will be in berlin, but we thought the forums were all full?
14:31:00 <mdbooth> The data is always available unencrypted on the host
14:31:22 <mriedem> i guess the forum is full, so nvm
14:31:54 <mriedem> so let's move this to the spec
14:32:03 <Luzi> sean-k-mooney, the decryption happens before the vm is started and the encryption should happen in the case:
14:32:13 <gibi> OK, lets continue discussing this in the spec
14:32:20 <Luzi> ok, thank you :)
14:32:26 <gibi> Luzi: thank you for the spec
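For reference, one plausible shape for the shared encrypt/decrypt helpers under discussion in the etherpad; the function names and the AES-GCM choice are assumptions for illustration, not anything settled in the spec:

```python
# Hypothetical sketch of a shared image-encryption helper; the key
# would come from the key manager (e.g. Barbican) via the key ID the
# user supplies. AES-GCM is just one plausible format choice.

import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_image(plaintext, key):
    """Encrypt an image payload; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for GCM
    return nonce, AESGCM(key).encrypt(nonce, plaintext, None)


def decrypt_image(nonce, ciphertext, key):
    """Decrypt an image payload produced by encrypt_image."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```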
14:32:32 <gibi> there is one more item on the agenda
14:32:33 <gibi> #link Per-instance sysinfo serial for libvirt guests (mriedem/Kevin_Zheng) https://blueprints.launchpad.net/nova/+spec/per-instance-libvirt-sysinfo-serial
14:32:57 <mriedem> yar
14:33:05 <gibi> I've read the bp and I'm OK to use the instance.uuid as machine serial
14:33:20 <mriedem> ok so to summarize, the serial number that gets injected into libvirt guests comes from the host
14:33:30 <mriedem> so all guests on a libvirt host have the same serial number in their BIOS
14:33:32 <mdbooth> That sounds like a really good idea, but I think we're going to have to add something to the data model to indicate whether the instance was created with this or not
14:33:58 <mriedem> if the guest is running licensed software that relies on the serial, and you migrate the guest, the serial changes and you're hit again for the license
14:34:14 <mriedem> so the idea is to just make the serial unique to the guest, which would be the uuid
14:34:42 <mriedem> mdbooth: because of migrations and host config?
14:34:43 <mdbooth> +1, but for the same reason you don't want all instances to have their machine id changed when the host is upgraded
14:35:06 <mdbooth> mriedem: Just thinking upgrades, tbh.
14:35:39 <mriedem> the simple implementation is just that the libvirt.sysinfo_serial option gains a new choice, which is to use the instance uuid
14:35:45 <mriedem> then the host upgrade shouldn't matter...
14:36:18 <mriedem> now if the guest really needs this to not change, and they are migrated between hosts with different config, the behavior could change
14:36:27 <mdbooth> mriedem: Problem with host-wide config is that you can't set it without affecting everything on that host.
14:36:43 <mriedem> in that case, we'd need something on the flavor/image
14:37:05 <mriedem> then it's per-instance and can travel properly via the scheduler
14:37:17 <mdbooth> mriedem: I didn't have a specific suggestion (just read it 2 mins ago), but 'somewhere'
14:37:40 <mdbooth> Anyway, it does sound like something we want
14:37:52 <mriedem> i'm fine with it being a thing on the flavor/image, that allows it to be more dynamic
14:38:02 <gibi> mriedem: if it is flavor/image then that can override the host config and we can keep the old behavior as well
14:38:11 <mriedem> gibi: true,
14:38:19 <mriedem> so is this still specless then or should i draft a small spec?
14:38:24 <mriedem> seems i should
14:38:34 <gibi> just to agree on the name of the extra_spec ;)
14:38:43 <mriedem> os_foo_bars
14:38:45 <mriedem> done
14:38:59 <mriedem> ok i'll crank out a spec shortly
14:39:03 <mriedem> thanks
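To make the proposal concrete, a rough sketch of the serial selection logic being discussed, with a per-image/flavor override winning over host config; the option values and property names are placeholders, not the final design:

```python
# Illustrative sketch: today the libvirt sysinfo serial comes from a
# host-level option, so every guest on a host shares it; the proposal
# adds a choice that uses the instance UUID instead.

def choose_sysinfo_serial(host_serial, instance_uuid,
                          host_config="os", image_prop=None):
    """Pick the serial number injected into the guest's SMBIOS.

    A per-instance serial (the instance UUID) survives migration
    between hosts, so guest software licensed against the serial is
    not invalidated when the instance moves.
    """
    # A per-image (or flavor) setting, if present, overrides the
    # host-level config so the behavior travels with the instance.
    policy = image_prop if image_prop is not None else host_config
    if policy == "unique":
        return instance_uuid
    return host_serial  # legacy host-wide behavior
```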
14:39:14 <gibi> anything else to discuss?
14:39:49 <mdbooth> gibi: You mean on that topic, or at all?
14:40:08 <gibi> anything for open discussion
14:40:10 <gibi> ?
14:40:28 * mdbooth wonders if we're in a position to push https://review.openstack.org/#/c/578846/ along
14:40:49 <mdbooth> It's a bug which munched a customer's data a while back.
14:41:06 <mdbooth> Testing it is somewhat complex, but I think we might be there now.
14:42:05 <gibi> mdbooth: seems like you need some core review on that patch
14:42:36 <mriedem> you can bug me for it, i just have had too many plates spinning to remember this
14:42:43 <mdbooth> Yep. mriedem was keen to add an evacuate test to the gate, which was a fun exercise :)
14:43:04 <mriedem> yeah that's all done and super hot https://review.openstack.org/#/c/602174/
14:43:06 <mdbooth> Related: mriedem do you want to push that test to the gate?
14:43:26 <mriedem> idk what that means
14:43:48 <mdbooth> mriedem: Invoke whatever incantations are required to actually start running it.
14:44:03 <mriedem> it's in the nova-live-migration job
14:44:06 <mriedem> once merged, it's there
14:44:08 <mdbooth> Merge those patches, I guess.
14:44:19 <mriedem> i would like all cores to merge all of my patches yes
14:44:24 <mriedem> b/c they are gr8
14:44:35 <gibi> I've left those two patches open in my browser for tomorrow, if that helps :)
14:44:51 <gibi> any other topic for today?
14:45:10 <mdbooth> gibi: The functional test prior to my actual fix is the fun bit, btw ;)
14:45:25 <gibi> mdbooth: ack
14:45:45 <mdbooth> The fix is pretty simple.
14:46:05 <mriedem> end it
14:46:07 <mriedem> please god
14:46:20 <gibi> thank you all
14:46:27 <gibi> #endmeeting