10:59:22 <oneswig> #startmeeting scientific-sig
10:59:23 <openstack> Meeting started Wed Oct 10 10:59:22 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
10:59:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
10:59:27 <openstack> The meeting name has been set to 'scientific_sig'
10:59:32 <verdurin> Afternoon.
10:59:46 <oneswig> just about!
10:59:53 <oneswig> #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_October_10th_2018
11:00:10 <oneswig> How are things?
11:00:19 <martial_> Hi Stig and all
11:00:31 <janders> we just entered daylight savings here, b1airo: is it the same on your side of the Tasman?
11:00:51 <priteau> Hi everyone
11:01:01 <oneswig> I think b1airo's off sick today, he mentioned earlier he'd caught a bug from the kids
11:01:04 <oneswig> hi priteau
11:01:08 <oneswig> hi martial_
11:01:12 <oneswig> #chair martial_
11:01:13 <openstack> Current chairs: martial_ oneswig
11:01:17 <janders> that's one exciting agenda! :)
11:01:35 <oneswig> It's somewhat bring-and-share
11:01:35 <janders> re b1airo: :(
11:01:45 <verdurin> Hello all
11:02:27 <oneswig> At this end, we finally got our BeeGFS role blogged
11:02:34 <oneswig> #link BeeGFS Ansible role https://www.stackhpc.com/ansible-role-beegfs.html
11:03:03 <oneswig> The interest in parallel filesystems last week in the US time zone meeting largely prompted that.
11:03:14 <janders> oneswig: supercool!
11:03:26 <oneswig> janders: share and enjoy :-)
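[A minimal sketch of consuming a BeeGFS Ansible role like the one blogged above; the Galaxy role name, inventory file and playbook name below are illustrative assumptions, not taken from the post.]

    # Pull a BeeGFS role from Ansible Galaxy (role name assumed for illustration)
    ansible-galaxy install stackhpc.beegfs

    # Apply it to an inventory whose groups decide which hosts become
    # BeeGFS servers and which become clients (file names hypothetical)
    ansible-playbook -i inventory.ini beegfs.yml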
11:03:44 <oneswig> Anyone from Sanger here today?
11:04:01 <janders> I have to admit I haven't written a line of code for the last three weeks due to a ton of planning activities; however, I was just trying to impose a software-defined high-performance storage agenda item on our storage team :)
11:04:16 <janders> I think I will send them the link and try again :)
11:04:28 <oneswig> happy to help (hopefully)
11:04:51 <janders> for sure
11:05:01 <janders> one can deploy BeeGFS via Bright Cluster Manager...
11:05:24 <janders> ...or with ansible like you guys (and all the cool kids) do!
11:05:27 <oneswig> Bright's pretty good at packaging stuff like that
11:05:27 <janders> :)
11:05:37 <martial_> Thanks for the writeup Stig
11:05:41 <oneswig> (but we do indeed like to do Ansible)
11:06:45 <janders> speaking of Bright - would integrating Bright and bare-metal OpenStack be of any interest to you guys?
11:06:59 <oneswig> I had been hoping we could hear from the Sanger team for an update on their Lustre work https://docs.google.com/presentation/d/1kGRzcdVQX95abei1bDVoRzxyC02i89_m5_sOfp8Aq6o/edit#slide=id.p3
11:07:33 <oneswig> I know some people at Bright but don't have direct use of the product.
11:07:35 <janders> I've been asked this question repeatedly over the last few weeks, thinking of reaching out to Bright to have a chat. If it's not only CSIRO asking, it might be easier to get some traction..
11:07:44 <oneswig> I'm sure others do though?
11:08:20 <janders> we're a big Bright shop on the pure-HPC side (and some of our collaborating organisations are, too)
11:08:56 <verdurin> While I know of some commercial Bright users, I can't think of many academic sites that use it in the UK
11:10:08 <oneswig> My understanding of the Bright model is that they work with Bright driving OpenStack.  Are you thinking of a reversed situation in which OpenStack drives Bright (like an Ironic driver, for example)?
11:10:23 <janders> oneswig: spot on
11:10:31 <janders> "converting systems into applications"
11:10:45 <janders> Bright does HPC all right, but I don't see it doing other things that well
11:11:05 <janders> In a bare-metal cloud reality, it seems to be better suited to be a cloud app than a cloud engine
11:11:39 <oneswig> I've previously come across something along similar lines with an Ironic xCAT driver.  I don't think it ever got the momentum behind it, though, to keep it going
11:11:48 <janders> however if it could talk to Nova/Ironic instead of trying to talk to IPMI interfaces on the blades, this could be very different
11:12:05 <janders> I don't think additional Ironic drivers are sustainable
11:12:21 <janders> I think things like xcat or Bright should have a mode where they hit OpenStack APIs for resources
11:12:27 <janders> and give up on bare-metal management entirely
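[The inverted model janders describes - a cluster manager consuming bare metal through the OpenStack APIs rather than driving IPMI itself - might look something like this from the cluster manager's side. A hedged sketch only; the flavor, image, network and key names are invented for illustration.]

    # Ask Nova/Ironic for a bare-metal node instead of PXE-booting it directly
    openstack server create \
        --flavor bare-metal \
        --image centos7-hpc \
        --network cluster-net \
        --key-name cluster-mgr \
        node-001

    # Recover the address so the cluster manager can take over configuration
    openstack server show node-001 -f value -c addresses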
11:13:00 <janders> but - this might be a good topic for a chat in the SIG in Berlin - I don't want to hijack too much of this meeting for this as there are more interesting things on the agenda :)
11:13:44 <verdurin> janders: interesting. Would indeed be good to discuss in Berlin.
11:13:45 <oneswig> Let's do that.  I hope some of the Bright team will come along and you can tell them direct!
11:13:59 <janders> I will try to give them notice
11:14:11 <janders> and might take this opportunity to invite them along
11:14:25 <janders> Amsterdam - Berlin should be a pretty quick bullet train ride I imagine
11:14:40 <oneswig> +1 to that, be great to have them engaged
11:14:44 <janders> (the OpenStack - Bright guys are in AMS for sure)
11:14:51 <janders> okay, adding this to my TODO
11:15:23 <oneswig> I've not been driving the meetbot
11:15:33 <oneswig> #topic parallel filesystems in OpenStack
11:15:42 <oneswig> back on track now :-)
11:15:49 <janders> sorry.. :)
11:16:15 <oneswig> So we've been looking at BeeGFS recently at this end in a couple of projects
11:16:32 <janders> this is super interesting
11:16:46 <oneswig> In the back of my mind, I'd like to gather some updates for the SIG book (https://www.openstack.org/assets/science/CrossroadofCloudandHPC.pdf)
11:17:23 <martial_> that is a good idea indeed
11:17:30 <oneswig> and to do a more decent job of that, good coverage of what's used out there would be great.
11:17:43 <janders> is there "native" OpenStack-BeeGFS integration, or would you mount BeeGFS where OpenStack is typically looking for storage (/var/lib/{glance,cinder,nova}/images etc.)
11:17:44 <oneswig> martial_: what does DMC do for filesystem access?
11:18:57 <oneswig> janders: I'd advise against the latter, it'll be a huge surprise if more than one controller mounts the same share and assumes it is local and has exclusive access...
11:19:15 <janders> oops
11:19:33 <janders> good point
11:19:44 <oneswig> To date everything we've done is around provisioning filesystems for instances not the infrastructure.
11:20:00 <oneswig> I know GPFS can do the latter and there was abandoned work by DDN on doing the same for Lustre
11:20:06 <janders> are you using RDMA as a transport?
11:20:12 <oneswig> oh yes
11:20:12 <janders> (for BeeGFS in instances)
11:20:22 <janders> I thought so :)
11:20:22 <martial_> Ceph so far, with NFS for host access, which is why I was so interested in your post about BeeGFS to evaluate
11:20:42 <oneswig> BeeGFS is refreshingly straightforward at getting RDMA working
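[For reference, RDMA in the BeeGFS client is a single setting (connUseRDMA) in beegfs-client.conf; the sed invocation below is a sketch assuming the stock packaged config layout.]

    # Enable the native InfiniBand/RDMA transport in the BeeGFS client
    sed -i 's/^connUseRDMA.*/connUseRDMA = true/' /etc/beegfs/beegfs-client.conf
    systemctl restart beegfs-client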
11:20:53 <janders> do you run it through manila, or do you just present the neutron network that BeeGFS resides in to clients?
11:21:00 <janders> oneswig: +1
11:21:30 <verdurin> oneswig: BeeGFS just providing ephemeral storage in this case?
11:21:53 <oneswig> janders: OpenStack's pretty much out of the equation.  Usually we create instances on a network, and then use Ansible to make some into servers and others into clients (or all of them into both)
11:22:23 <oneswig> verdurin: It's not long-lived storage, in the case we were most recently looking at.
11:22:25 <janders> Pretty much KISS principle. Very good approach.
11:22:38 <oneswig> I mean there are many other cases where what's needed is attaching to an existing production filesystem
11:22:59 <janders> how do you handle root accounts in the cloud context?
11:23:21 <janders> take root off users? use it for clients managed by your team?
11:23:51 <oneswig> In the most recent work, the instances are managed (ie users don't get root).  Elsewhere, I've seen a model of filesystem-per-project to ensure users can only destroy their own work
11:24:41 <janders> In the latter case, is it BeeGFS-OND?
11:24:42 <oneswig> I think BeeGFS is somewhat behind the authentication options of its rivals (but I may be corrected on that)
11:25:26 <oneswig> janders: not in that case.  The filesystem is a "production filesystem" created and managed by site admins.  I mean it could be BeeOND but I don't think it is
11:25:45 <janders> oneswig: unfortunately I have to agree. Vendors offering us BeeGFS kit all claimed Kerberos (or any similar functionality) is out of the picture at this point in time
11:26:20 <janders> which sucks big time
11:26:34 <janders> either Kerberised BeeGFS (with no OpenStack integration)
11:26:40 <janders> or manila-driven BeeGFS-OND
11:26:44 <janders> would be nirvana! :)
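[For anyone unfamiliar with BeeGFS-OND (BeeOND): it builds a temporary BeeGFS filesystem across a set of nodes for the lifetime of a job. A minimal sketch; the nodefile location and backing path are assumed.]

    # nodefile lists one hostname per line; use node-local SSD as backing store
    beeond start -n /tmp/nodefile -d /local/ssd/beeond -c /mnt/beeond

    # ...run the job against /mnt/beeond, then tear it down
    # (-L and -d also remove the logs and data)
    beeond stop -n /tmp/nodefile -L -d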
11:26:45 <oneswig> Are there other filesystem solutions in use out there?
11:27:13 <janders> I actually used GPFS. Not really in anger but in POCs. No issues, "just worked".
11:27:21 <janders> however...
11:27:24 <janders> this was Mitaka
11:27:26 <verdurin> Owing to our increasing restricted-data work, we are looking more at Lustre, having historically used GPFS.
11:27:47 <janders> I'm looking at it again now for Queens and see the vendors being slack in keeping up with the OpenStack release cycle
11:27:52 <oneswig> verdurin: will you be retracing the Sanger work on that?
11:28:21 <janders> I think Mitaka plugins would work on Queens, but it's a bit of a risk to do it this way
11:28:23 <verdurin> oneswig: similar, yes. Most of that is upstream now, so it's just a configuration choice.
11:28:33 <martial_> (going to be AFK for the next few minutes -- school bus)
11:28:58 <janders> is long-range storage of any interest to you guys?
11:29:10 <verdurin> Hoping to discuss it in more detail with DDN on Monday.
11:29:13 <oneswig> janders: long range?
11:29:17 <janders> (as in - mounting a filesystem tens of milliseconds, perhaps a hundred milliseconds, away)
11:29:40 <oneswig> ah.  Why yes, possibly.
11:29:47 <oneswig> In a different theme.
11:30:31 <oneswig> But I'd like to evaluate Ceph for that.  I'm interested in the idea that a CephFS filesystem can have different directories served from different crush rulesets.
11:30:47 <janders> OK. We've got an interesting PoC going. Can't talk details yet, but hopefully it will be finished by Berlin so we can have a chat if you're interested. (it's not Ceph)
11:30:57 <oneswig> So you could have /canberra and /bristol - one's local to me, but I can see contents (slowly) of the other
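[A sketch of the per-directory placement oneswig describes, using standard Ceph commands; the rule, root, pool and mount-point names are illustrative. Each CephFS data pool can sit behind its own CRUSH rule, and a directory layout pins file data to a pool.]

    # A CRUSH rule that keeps replicas under a site-local root (names invented)
    ceph osd crush rule create-replicated canberra-rule canberra-root host

    # A data pool bound to that rule, added to the filesystem
    ceph osd pool create cephfs_canberra 64 64 replicated canberra-rule
    ceph fs add_data_pool cephfs cephfs_canberra

    # Pin the /canberra directory's file data to the new pool
    setfattr -n ceph.dir.layout.pool -v cephfs_canberra /mnt/cephfs/canberra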
11:31:11 <janders> Are you guys doing any large-ish scale data transfer with Pawsey Supercomputing Centre, Western Australia?
11:31:36 <janders> (the fact that both you and them work on SKA made me wonder)
11:32:00 <oneswig> janders: not that I'm aware of.  The SKA's the obvious connection there.  I am sure people do do that kind of stuff with Pawsey but it's a massive project and that's not in our wheelhouse
11:32:34 <janders> The /canberra and /bristol approach is pretty close to what we're testing
11:32:43 <oneswig> janders: Your new project - will that be shareable when it's ready to be revealed?
11:32:55 <janders> yes, I believe so
11:33:06 <martial_> janders: and presented as a lighting talk?
11:33:12 <janders> it's all NDA for now but I would imagine the vendor would be very keen to have more prospective customers :)
11:33:33 <oneswig> I mean - is it proprietary?
11:33:55 <janders> with lightning talk - possibly - just unsure if it would be Berlin or Denver timeframe :)
11:34:09 <janders> unfortunately yes, it is (but it is so cool I can tolerate it :)
11:34:42 <oneswig> Sounds very cool.
11:34:58 <janders> however - if there are still lightning talk slots available in Berlin I would be very keen to talk about ephemeral hypervisors running on Ironic
11:35:34 <martial_> Stig: are we doing the usual lighting talk session ?
11:35:35 <janders> I've got that working so no dependency on the vendor finishing work
11:35:53 <oneswig> I think martial_ was thinking of the SIG lightning talks.  These are typically settled on the day (or perhaps the week before)
11:36:07 <martial_> Same here
11:36:24 <janders> I was thinking a SIG lightning talk, too
11:36:38 <oneswig> martial_: We've asked for the same format, and IIRC Wednesday morning.  I haven't seen our SIG sessions in the schedule yet.
11:36:58 <janders> ah, there's SIG-SIG-lightningtalk and Berlin-SIG-lightningtalk
11:37:13 <janders> :)
11:37:33 <oneswig> eh?
11:38:02 <janders> sometimes people do presentations in the #openstack-meeting IRC
11:38:21 <janders> plus - in the SIG meetup at Summits there's a lightning talk part
11:38:35 <oneswig> Ah, got it.  This would be in person, in Berlin - the latter
11:38:35 <janders> (and on top of all that there's the "official" Summit lightning talks)
11:38:46 <martial_> Janders: you can have both of course
11:39:15 <janders> I'd be happy to do both and I think it would actually make sense due to SC and OS Berlin clash
11:39:36 <oneswig> martial_: you're heading to SC, correct?
11:39:47 <martial_> Blair and I are
11:39:50 <oneswig> #topic SIG meetups
11:40:00 <oneswig> sorry just trying to keep the meetbot on track again...
11:40:19 <oneswig> Ah yes, thought so.  Really unfortunate timing.
11:41:31 <martial_> #link https://sc18.supercomputing.org/presentation/?id=bof113&sess=sess374
11:41:45 <martial_> So we have a session at SC18
11:42:10 <oneswig> If you get the chance while you're there, can you find out if anything's afoot on Kubernetes and MPI (as discussed last week).  I think it's a favourite topic for Christian Kniep.  I'd love to hear if there was a working, scalable implementation
11:42:25 <oneswig> (and one that wasn't warty as...)
11:42:32 <martial_> Moving this one a little beyond OpenStack alone
11:42:48 <martial_> But with our usual crowd
11:43:23 <martial_> I am sure Christian will be happy to discuss things indeed
11:43:35 <oneswig> Not come across Bob or Jay before - what's their connection?
11:44:07 <martial_> Bob Killen is a CNCF ambassador for K8s
11:44:37 <janders> this might seem unrelated, but it really isn't:
11:44:52 <janders> how long do you guys think the IB vs high-speed Ethernet division will last?
11:44:58 <martial_> Jay has a similar role at Suse
11:45:21 <oneswig> martial_: get that topic on the agenda then, we need to know!
11:45:50 <oneswig> janders: good question.  For as long as IB makes boat loads of cash for Mellanox I guess?
11:45:58 <janders> do you think we might end up with "EB" at around 400Gbit/s mark?
11:46:00 <oneswig> (deferring the question)
11:46:04 <martial_> I am starting that agenda soon so "yes"
11:46:33 <janders> the goss is that while IB is definitely profitable, 100GE & co are making the most $$$
11:47:08 <janders> I'm mentioning this because it might do interesting things to MPI support on technologies such as k8s
11:47:53 <oneswig> janders: The tricky piece is that IB is a network in which addressing is managed using interval routes, whereas Ethernet's plug-and-play requires MAC lookups, which costs time and resources.  Inherently, it has a latency disadvantage, but it is so much easier to use.
11:48:47 <oneswig> janders: it's probably easier to do MPI over IB with Kubernetes, because K8S will know nothing about it and not try to get in the way
11:49:24 <oneswig> The major issues are on the placement of nodes and creation of the communicator topology (from my limited understanding)
11:49:48 <martial_> yes I was going to wonder about affinity
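[One concrete version of "K8S will know nothing about it": with the IB device visible inside the pods (SR-IOV or host device), the MPI launcher selects the verbs transport itself and the orchestrator stays out of the data path. A sketch using an Open MPI invocation of that era; the hostfile and binary name are assumed, and newer Open MPI releases would go via UCX instead of the openib BTL.]

    # Open MPI picks the InfiniBand BTL directly; Kubernetes is not involved
    mpirun -np 64 --hostfile hosts --mca btl openib,self,vader ./mpi_app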
11:50:16 <oneswig> martial_: you're up against this panel https://sc18.supercomputing.org/presentation/?id=pan109&sess=sess305 - some speaker firepower there :)
11:51:30 <martial_> they are a panel, we are a BoF, but yes it is a busy program (again)
11:51:55 <martial_> we have a session on cloud federation also ... with other similar ones happening there too
11:52:07 <martial_> at least they did not ask us to merge this year
11:52:13 <oneswig> martial_: got a link?
11:52:14 <oneswig> For those that are heading to Berlin, I was wondering if an evening SIG beer might be good?
11:52:23 <martial_> and I want to keep it "short" presentation wise
11:52:41 <oneswig> Does anyone know what nights are busy at the Berlin summit so we can fit it in?
11:52:58 <martial_> #link https://sc18.supercomputing.org/presentation/?id=pan106&sess=sess294
11:53:15 <martial_> Stig: that is that (for now) for SC18 :)
11:53:47 <oneswig> Thanks martial_ - good luck with setting the questions - make 'em sweat :-)
11:53:59 <oneswig> on Berlin: I haven't checked when the big vendor events are.
11:54:13 <janders> quickly coming back to the filesystems thread
11:54:18 <oneswig> (I'll come back next time with a suggestion)
11:54:47 <janders> have you used parallel filesystems running in instances much?
11:55:09 <oneswig> In bare metal, yes.
11:55:14 <oneswig> Otherwise no.
11:55:21 <janders> I used to run "hyperconverged" like this, with a dual-core BeeGFS VM and 14 core compute VM on each compute node
11:55:33 <janders> with SRIOV/IB it was a killer back in the day
11:55:54 <oneswig> janders: interesting.  How was the VM overhead for the MDS and OSS?
11:56:09 <janders> no one complained
11:56:19 <janders> however that was a very HPC heavy system
11:56:36 <janders> with CPU-passthru, no overcommit, node-local SSD/NVMe and fair bit of RAM
11:57:11 <janders> we were maxing out SSDs on the BeeGFSes and maxing out FDR on the consumers
11:57:17 <oneswig> Sounds like a machine that would do it justice
11:57:53 <oneswig> How were you automating it?
11:57:58 <janders> it was a cool little tool (and if I were doing it today I would use your ansible role instead of dodgy shell scripts, but this project was around 2015)
11:58:05 <janders> bash :(
11:58:12 <janders> quick and dirty, worked well though
11:58:25 <janders> I think we baked-in BeeGFS rpms in the images to make it "quicker"
11:58:25 <oneswig> Sometimes, quick and dirty is all that's required :-)
11:58:39 <oneswig> We are nearly out of time.
11:58:42 <oneswig> #topic AOB
11:58:46 <oneswig> AOB?
11:59:15 <oneswig> Plenty covered in the course of discussion I guess
11:59:21 <martial_> not much here, thanks for a little deeper dive into BeeGFS
11:59:26 <martial_> janders:
11:59:27 <janders> would there be value in me talking about nova-compute on Ironic here?
11:59:32 <janders> If so, what would be a good date?
11:59:34 <martial_> look forward to your update
11:59:48 <martial_> please do
11:59:54 <oneswig> janders: you pick, don't think we have anything booked in currently
12:00:02 <janders> 24 Oct?
12:00:08 <janders> that will give me some time to prepare
12:00:11 <oneswig> LGTM
12:00:17 <oneswig> OK we are at the hour, better close the meeting
12:00:21 <oneswig> Thanks all
12:00:24 <oneswig> #endmeeting