21:00:26 #startmeeting scientific-sig
21:00:27 Meeting started Tue Sep 18 21:00:26 2018 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:27 g'day everyone
21:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:30 The meeting name has been set to 'scientific_sig'
21:00:37 greetings janders_ and all
21:00:50 what's new?
21:01:20 #link agenda for today is https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_September_18th_2018
21:02:08 Tomorrow is upgrade day over here... but this specific time it's Pike->Queens
21:03:17 We've been doing the drill on the staging environment but there's nothing quite like the real thing ...
21:03:34 oneswig: what are the main challenges?
21:04:20 In this case, not too many. One concern is correctly managing resource classes in Ironic
21:04:42 right! are you doing BIOS/firmware upgrades as well?
21:05:07 oh no. That's not in the plan (should it be, I wonder?)
21:05:28 o/
21:05:44 G'day b1air, which airport are you in today? :-)
21:05:44 Do all the changes all at once!!
21:05:56 if you were to, would you use something like the lifecycle manager, or would you temporarily boot ironic nodes into a "service image" with all the tools?
21:05:58 Very near AKL as it happens
21:06:00 Fighting talk from a safe distance, that
21:06:07 ;-)
21:06:34 janders_: last time we did this, it was the latter - a heat stack for all compute instances with a service image in it.
21:07:16 right! in a KVM-centric world, it's easy - just incorporate all the BIOS/FW management tools in the image. Ironic changes this paradigm, so I was wondering how you go about it. Might be an interesting forum topic.
21:07:26 (difficulty joining on the phone)
21:07:30 Have you seen an Ansible playbook for doing firmware upgrades via the Dell iDRAC?
21:07:33 Hello martial_
21:07:40 #chair b1air martial_
21:07:41 Current chairs: b1air martial_ oneswig
21:07:45 (remiss of me)
21:07:47 do you pxeboot the service image via ironic or outside ironic?
21:08:18 In that case we booted it like a standard compute instance, via Ironic
21:08:24 KVM world easy? Pull the other one @janders_ ! :-)
21:08:54 no.. I looked at the playbooks managing the settings but not the BIOS/FW versions. If it works (and I'm not worried about the playbooks, I'm worried about the Dell hardware side :) it'd be gold
21:09:14 oneswig: does this mean you had to delete all the ironic instances first?
21:09:31 b1air: KVM world is easy in this one sense :)
21:09:42 In that case, yes - I guess the lifecycle manager could have avoided that, do you think?
21:10:01 oneswig: yes - it will do all of this in the pre-boot environment (if it works..)
21:10:41 when I say "if it works" - on our few hundred nodes of HPC it definitely works for 70-95% of nodes. Success rates vary. The ones that failed usually just need more attempts.. (thanks, Dell)
21:11:10 Power drain?
21:11:30 however I am unsure if Mellanox firmware can be done via the Lifecycle Controller (we usually do this part from the compute OS)
21:11:33 janders_: are these the playbooks at https://github.com/dell/Dell-EMC-Ansible-Modules-for-iDRAC ?
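For reference, a minimal sketch of the resource-class concern above, assuming openstacksdk and a clouds.yaml entry named "mycloud"; the class name is illustrative. From Queens, Nova schedules bare metal via a custom resource class per node, so nodes left without one are worth catching before the upgrade:

    # Audit Ironic nodes for a missing resource_class ahead of Pike->Queens.
    # Assumes openstacksdk; "mycloud" and "baremetal-gold" are illustrative.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    for node in conn.baremetal.nodes(details=True):
        if not node.resource_class:
            print(f"{node.name or node.id}: no resource_class set")
            # Maps to placement class CUSTOM_BAREMETAL_GOLD; match it in the
            # flavor extra spec resources:CUSTOM_BAREMETAL_GOLD=1.
            conn.baremetal.update_node(node, resource_class="baremetal-gold")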
21:12:10 janders_: only if it is a Dell OEM Mellanox part - that's the value add
21:13:28 b1air: most of our HCAs are indeed OEM - I need to revisit this (I guess the guys have always done this with mft & flint, cause it works 99/100) - in the ironic world doing everything from LC could simplify things
21:14:27 closer to the main topics - from your experience, how big do the forum sessions typically get?
21:14:49 janders_: there has also been talk previously of performing these actions as a manual cleaning step - less obtrusive but without out-of-band dependencies on idrac
21:15:00 At Monash we found the LCs to be ok reliability-wise from 13G
21:15:27 janders_: perhaps we should, indeed, look at the agenda..
21:15:37 #topic Forum sessions
21:16:18 Forum sessions I've been in have ranged in size from ~8 people to ~50 (but about 12 holding court)
21:16:29 oneswig: this is a neat way to do it in a rolling fashion - however the drawback is having a mix of versions for quite a while as users delete/reprovision the nodes. I'm trying to come up with an option of doing it all in a defined downtime window, without affecting existing ironic instances.
21:16:36 b1air: that is great to hear! :)
21:16:55 oneswig: that is good - it shouldn't be impossible to get some bandwidth in these sessions! :)
21:17:32 I get the feeling one on Ironic and BIOS firmware management could be interesting!
21:17:46 Facilitating it but also, conversely, preventing it
21:19:30 janders_: I think at CERN they have a way of letting the instance owner select their downtime period
21:19:52 I am trying to find where I saw it described
21:20:14 Good evening priteau!
21:20:34 wow - very cool idea.. I wonder if it's leveraging AZs (which might have different downtime windows) or something else
21:20:35 Hi everyone by the way :-)
21:20:54 janders_: it may even be per-host
21:21:26 Sounds a bit like AWS' reboot/downtime scheduling API
21:22:42 thinking about it - if it's just the instance that's supposed to be up and it has no volumes etc attached it can be quite fine grained
21:23:13 however if the instance is leveraging any services coming off the control plane, it might be tricky to go below AZ-level downtime
21:23:28 or at least that's my quick high level thought without looking into details
21:23:51 very interesting topic though! :)
21:24:39 question of procedure - do we add a proposal like this to the Ironic forum etherpad, or mint our own SIG etherpad and add it to the list?
21:25:42 I found http://openstack-in-production.blogspot.com/2018/01/keep-calm-and-reboot-patching-recent.html, but it's not how I remember it
21:26:54 Another area I am interested in pursuing is support for the recent features introduced to Ironic for alternative boot methods (boot from volume, boot to ramdisk) - is there scope for getting these working with multi-tenant networking?
21:26:55 Maybe there is another procedure for the less critical upgrades
21:30:19 oneswig: alternative boot methods would definitely be of interest. Looking at the PTG notes there are some good ideas so it looks like the next step would be to find out if/when these ideas can be implemented
21:31:01 something from my side (across all the storage-related components) would be BeeGFS support/integration in OpenStack
21:31:30 Ooh, interesting.
21:31:35 would you guys be interested in this, too?
21:31:37 Like, in Manila?
21:31:48 yes, that's the most powerful scenario
21:32:05 Absolutely! We've got playbooks for it, but nothing "integrated"
21:32:12 (but does it need to be?)
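Following up the manual-cleaning idea above, a rough sketch of kicking off a manual clean on one node via openstacksdk. The node name, step name and arguments are placeholders, not real driver steps - the available steps and their args are driver-specific (idrac, ilo, ...), so check what the driver actually advertises:

    # Run a manual cleaning step on an Ironic node (e.g. out-of-band firmware work).
    # "compute-042" and the clean step below are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    node = conn.baremetal.find_node("compute-042")

    # Manual cleaning requires the node to be in the "manageable" state first.
    conn.baremetal.set_node_provision_state(node, "manage")
    # ...wait for the node to reach "manageable", then:
    conn.baremetal.set_node_provision_state(
        node,
        "clean",
        clean_steps=[{
            "interface": "management",   # placeholder
            "step": "update_firmware",   # placeholder
            "args": {},                  # placeholder
        }],
    )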
21:32:16 but running VM instances (for those who still need VMs) and cinder volumes off BeeGFS would be of value as well
21:32:56 That follows quite closely what IBM was up to with SpectrumScale
21:33:00 given no kerberos support in BeeGFS for the time being I think it would be very useful to have some smarts there
21:33:22 OK, let's get these down...
21:33:29 haha! you found the logic behind my thinking
21:33:38 #link SIG brainstorming ideas https://etherpad.openstack.org/p/BER-stein-forum-scientific-sig
21:33:58 I liked what IBM have done with GPFS/Spectrum, however I find deploying and maintaining this solution more and more painful as time goes on
21:34:13 I see the same sentiment on the storage side
21:34:22 "it's good, but..."
21:34:41 I'll add some points to the etherpad now
21:35:11 ok, you already have - thank you! :)
21:36:10 another storage related idea
21:36:27 would you find it useful to be able to separate storage backends for instance boot drives and ephemeral drives?
21:36:43 I like the raw performance of node-local SSD/NVMe
21:37:10 however having something more resilient (and possibly shared) for the boot drive is good, too
21:37:34 I would happily see support for splitting the two up (I do not think this is possible today, please correct me if I am wrong)
21:37:41 I was just thinking about that today, so I 2nd that
21:38:17 in this case, we could even wipe ephemeral on live migration (this would have to be configurable) so only the boot drive needs to persist
21:38:53 It seems like a good idea to me, certainly worth suggesting
21:38:58 ok!
21:38:59 hello goldenfri!
21:39:17 o/
21:40:07 janders_: if the ephemeral storage is mounted while live migrating, wouldn't the guest OS complain if data gets wiped out?
21:42:02 good point, there would have to be some smarts around it. I don't have this fully thought through yet, but I think the capability would be useful. Perhaps cloud-* services could help facilitate this?
21:42:10 OK we are linked up to https://wiki.openstack.org/wiki/Forum/Berlin2018#Etherpads_from_Teams_and_Working_Groups
21:42:31 but obviously if there's heavy IO hitting ephemeral, some service trying to umount /dev/sdb won't have a lot of luck..
21:42:56 +1 to janders_ ephemeral separation feature request
21:43:28 janders_: VM-aware live migration?
21:43:31 I see it as more likely to be used with cold migration
21:44:07 Where you have a fleet of long-lived instances that you want to move around due to underlying maintenance etc
21:46:16 another thing I'm looking at is using trim/discard-like features for node cleaning - however bits of this might already be implemented, looking at ironic and the pxe_idrac/pxe_ilo bits
21:46:23 have any of you used this with success?
21:46:52 (I might have asked this question here already, not sure)
21:47:22 Yes I recall discussing this before, but don't think anything came of it yet
21:47:24 Did we cover this last week? I think there's an Ironic config parameter for key rotation
21:47:57 With hardware encrypted storage?
21:48:08 We use it, and when I checked up I believe it was as simple as that - with the caveat that some of the drives needed a firmware update (of course!)
21:48:29 janders_: you asked last week ;-) http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-09-12-11.00.log.html#l-139
21:48:40 b1air: hardware encryption as I understand it but with an empty secret.
21:49:01 So not really encryption...
21:49:55 Cunning - the baddies will never suspect an empty password!
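On the cold-migration point above, a sketch of draining long-lived instances off one hypervisor ahead of maintenance, assuming openstacksdk and admin credentials; the host name is illustrative:

    # Cold-migrate every instance off a host before maintenance.
    # Assumes admin credentials; "compute-07" is illustrative.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    for server in conn.compute.servers(all_projects=True, host="compute-07"):
        print(f"cold-migrating {server.name} ({server.id})")
        conn.compute.migrate_server(server)
        # Each instance ends up in VERIFY_RESIZE and needs confirming afterwards:
        # conn.compute.confirm_server_resize(server)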
21:50:04 oneswig: :) I've discussed this with too many parties and lost track (scientific-sig, RHAT, Dell, ...)
21:50:39 janders_: your comrades here are the source of truth, you can't trust those other guys :-)
21:51:11 that's right :) can't trust those sales organisations
21:51:31 There was one other matter to cover today, before I forget
21:51:38 keycloak
21:51:43 #topic SIG event space at Berlin
21:52:09 priteau: I think we have that on the agenda for next week
21:52:18 Oh, I looked at the wrong week :-)
21:52:47 I know - it's a handy aide-memoire for me, probably confusing for anyone else!
21:53:13 Anyway - we have the option of 1 working group session + 1 BoF session (ie, what we've had at previous summits).
21:53:43 I think this works well enough, unless anyone prefers to shorten it?
21:53:59 b1air? martial_? Thoughts on that?
21:56:56 I have a couple more forum ideas - given we're running low on time I will fire these away now
21:57:17 Please do.
21:57:23 1) being able to schedule a bare-metal instance to a specific piece of hardware (I don't think this is supported today) - would this be useful to you?
21:57:43 think --availability-zone host:x.y.z equivalent for Ironic
21:57:44 On the SIG events - looks like Wednesday morning is clear for the AI-HPC-GPU track
21:58:22 janders_: I believe that exists, in the form of a three-tuple delimited by colons
21:58:34 2) I don't think "nova rebuild" works with baremetal instances - I think it would be something useful
21:58:43 The form might be nova::
21:59:24 On 2, are you sure? I think I've rebuilt Ironic instances before
21:59:43 Let's follow up on that...
21:59:48 in this case, I will retest both and update the etherpad as required
21:59:58 good plan, let us know!
22:00:07 OK, we are out of time
22:00:14 Thanks everyone
22:00:32 keep adding to that etherpad if you get more ideas we should advocate
22:00:56 https://etherpad.openstack.org/p/BER-stein-forum-scientific-sig
22:00:59 thanks guys!
22:01:02 #endmeeting
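One small helper for the retest above: when checking node targeting or rebuilds, it is handy to confirm which Ironic node a given instance actually landed on. A sketch assuming openstacksdk; the server name is illustrative:

    # Map a Nova instance back to the Ironic node it occupies.
    # Assumes openstacksdk; "bm-test" is an illustrative server name.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    server = conn.compute.find_server("bm-test")

    for node in conn.baremetal.nodes(details=True):
        if node.instance_id == server.id:
            print(f"{server.name} is on Ironic node {node.name or node.id}")
            break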