21:00:28 <oneswig> #startmeeting scientific-sig
21:00:28 <martial> sure
21:00:28 <openstack> Meeting started Tue May 29 21:00:28 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:32 <openstack> The meeting name has been set to 'scientific_sig'
21:00:38 <trandles> hi all
21:00:38 <oneswig> #chair martial
21:00:39 <openstack> Current chairs: martial oneswig
21:00:46 <oneswig> Greetings!
21:00:46 <jmlowe> Hello
21:00:57 <martial> hello everybody
21:01:02 <oneswig> #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_29th_2018
21:01:12 <oneswig> Hi martial, are you travelling again this week?
21:01:24 <martial> nah it is next week
21:01:40 <oneswig> ah OK, enjoy the break :-)
21:02:01 <oneswig> b1airo: you there b1airo?
21:02:07 <b1airo> Howdy
21:02:22 <oneswig> hey b1airo
21:02:25 <oneswig> #chair b1airo
21:02:26 <openstack> Current chairs: b1airo martial oneswig
21:02:38 <oneswig> ok, let's get this show on the road
21:02:47 <oneswig> #topic Vancouver roundup
21:02:54 <b1airo> Yep, just making the porridge... :-)
21:03:17 <oneswig> Firstly, thanks to everyone who joined in a good SIG session, particularly the people who spoke in the BoF
21:03:34 <martial> and your John who won :)
21:03:45 <martial> (you know "your John")
21:04:15 <oneswig> Yes indeed - bravo John Garbutt, excellently spoken
21:04:27 <oneswig> #link session etherpad https://etherpad.openstack.org/p/scientific-sig-vancouver2018-meeting
21:04:32 <b1airo> +1
21:04:35 <oneswig> Lots of things covered.
21:05:53 <oneswig> A good deal of interest in new activity areas - controlled data management, more federation, security, object storage enhancements, more performance
21:05:57 <martial> although your shed is going to be the talk of the summit :)
21:06:19 <oneswig> martial: if I could tell you it was my shed, I would proudly do so...
21:06:46 <b1airo> Maybe if we're defining a Scientific OpenStack Constellation it could have a derelict shed as its mascot!
21:07:40 <b1airo> Seems pretty quiet around here this morning
21:08:03 <oneswig> b1airo: seconded (although with minor reservations...)
21:08:17 <b1airo> Heh
21:08:23 <b1airo> Any lurkers?
21:08:53 <StefanPaetowJisc> *tumbleweed*
21:09:23 <oneswig> We should try to gather some groups around specific efforts, if we can.
21:09:43 <b1airo> oneswig: I watched a Ceph HCI NFV session in transit and heard you asking a question afterwards, so maybe you'll know the answer to my question...
21:10:11 <oneswig> I asked a question?  Was this the Intel one?
21:10:15 <b1airo> They mentioned putting CPU quota on the OSD daemons, any idea how they did it?
21:10:35 <oneswig> Hi StefanPaetowJisc - saw you lurking :-)
21:10:51 <StefanPaetowJisc> Howdy
21:10:55 <b1airo> o/ StefanPaetowJisc
21:11:31 <jmlowe> only mechanism that comes to mind is cgroups
21:11:35 <oneswig> Oh that talk - I think it must have been cgroups, would that make sense?
21:11:36 <b1airo> I assume cgroups, but not clear what type of cgroup control
21:11:44 <oneswig> snap
21:11:57 <StefanPaetowJisc> Apologies... I've been otherwise engaged.
21:12:12 <b1airo> Yeah so I assume when they say "quota" it would have to be CPU share?
21:12:32 <oneswig> StefanPaetowJisc: I have you in my sights for an update on moonshot one week - if there's news to report?
21:12:53 <b1airo> But if that's what they used then I think their experimental setup was flawed as they didn't generate load to force the cpu share into effect
21:13:02 <StefanPaetowJisc> No news at the moment... Unless having a new logo counts. *rolls eyes*
21:13:38 <oneswig> b1airo: that was my understanding.  Going from memory they found a sweet spot around 30-50% of a cpu...
21:13:43 <oneswig> StefanPaetowJisc: This is IRC, man...
21:13:58 <StefanPaetowJisc> The macOS client drags on. And on. And on...
21:14:27 * StefanPaetowJisc repeatedly bashes his head on a desk
* martial pats StefanPaetowJisc on the back ... there there, it gets better
21:15:14 <b1airo> Yeah I saw their numbers, but the fact that 50% made almost no difference makes me think cgroups weren't doing anything
21:15:50 <oneswig> b1airo: I don't recall what they were doing for load... CPU share only applies when a resource is under contention? That would make some sense
21:16:35 <b1airo> That's right, so how did they ensure contention to make their percentage numbers meaningful
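The talk didn't say which mechanism was used, but cgroup v1 exposes two distinct CPU controls that map directly onto the share-vs-quota question above: cpu.cfs_quota_us is a hard cap that is enforced even on an idle box, while cpu.shares is a relative weight that only bites under contention. A minimal Python sketch, assuming cgroup v1 mounted at /sys/fs/cgroup and a hypothetical cgroup name and OSD pid:

    import os

    CG = "/sys/fs/cgroup/cpu/ceph-osd-0"   # hypothetical cgroup name
    os.makedirs(CG, exist_ok=True)

    def write(name, value):
        # cgroup v1 controls are plain files under the cgroup directory
        with open(os.path.join(CG, name), "w") as fh:
            fh.write(str(value))

    # Hard cap: at most 50% of one CPU, enforced even without contention
    write("cpu.cfs_period_us", 100000)   # 100 ms accounting period
    write("cpu.cfs_quota_us", 50000)     # 50 ms of CPU time per period

    # Relative weight: only takes effect when CPUs are contended
    write("cpu.shares", 512)             # half of the default 1024

    write("cgroup.procs", 12345)         # hypothetical ceph-osd pid

On a systemd host the same hard cap can be set without touching cgroupfs by hand, e.g. systemctl set-property ceph-osd@0.service CPUQuota=50%. If the presenters used the quota (hard cap) rather than shares, their percentage numbers would be meaningful even without generated load; if they used shares, b1airo's objection stands.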
21:17:24 <oneswig> Wasn't I sitting next to you in that talk?
21:17:47 <martial> stepping out for a few minutes
21:18:14 <b1airo> Haha, nah that was their related one
21:18:32 <b1airo> Seems they had 2 talks and a lightning talk on basically the same stuff
21:20:00 <oneswig> Ah ok.
21:20:44 <b1airo> I'll email them
21:21:11 <oneswig> Anyway, I'd like to bring together any fellow travellers on controlled datasets and federation issues - these are pertinent things we are facing in the day job.  I'm hoping to gather some notes on the controlled data story over the coming weeks to find common areas and best practice.
21:21:55 <b1airo> Sounds great
21:22:03 <jmlowe> I have the feeling I'm going to be in that boat sometime in the next year
21:22:13 <trandles> controlled data is my life
21:22:22 <b1airo> Any thoughts on how to break the areas down?
21:22:27 <trandles> well, not literally
21:22:34 <b1airo> o/ trandles
21:23:36 <trandles> hello b1airo
21:23:46 <trandles> I hear rumors involving you
21:23:59 <oneswig> b1airo: was it this talk? https://www.openstack.org/videos/vancouver-2018/implications-of-ceph-client-performance-for-hyper-converged-telco-nfv
21:25:00 <b1airo> Oh, the gossip mill is grinding huh?
21:25:49 <oneswig> trandles: did you see that Julia had put up a POC implementation of boot-to-ramdisk? https://review.openstack.org/#/c/568940/
21:26:09 <trandles> I didn't see that, thanks for the link...I'll look at it now
21:26:27 <trandles> b1airo: nasty gossip about Hobbiton or something
21:26:37 <b1airo> This talk oneswig : https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21452/high-performance-ceph-for-hyper-converged-telco-nfv-infrastructure
21:26:59 <b1airo> Lol
21:27:27 <b1airo> It's true trandles, I'm moving back to defend the one ring
21:29:10 <StefanPaetowJisc> Back to Aotearoa?
21:29:26 <trandles> b1airo: congrats
21:29:47 <oneswig> There's only one cloud there, and it's a long white one...
21:30:14 <b1airo> Thanks!
21:30:59 <b1airo> Well there are at least three OpenStack clouds there, but I fear NeSI's is probably on Kilo
21:34:20 <StefanPaetowJisc> Congrats!
21:35:24 <oneswig> b1airo: you're the #1 guy in the Southern Hemisphere for extremely problematic OpenStack upgrades... :-)
21:35:47 <b1airo> What a mantle!
21:37:24 <b1airo> oneswig: I think we should start an etherpad to try and figure out the different angles for sensitive data handling
21:37:37 <oneswig> On the subject of summit videos, can anyone recommend particular talks? Would be good to gather some links
21:37:56 <trandles> +1 - especially given all the different options for data storage
21:38:17 <b1airo> There's a lot of ways to come at it; some of it is OpenStack related, but much of it isn't, or would only be peripherally related if auth integration was wanted/needed
21:38:44 <oneswig> b1airo: makes sense to me.
21:38:55 <oneswig> I'll make one now
21:39:30 <jmlowe> barbican dev alee was especially interested in real world performance measurements
21:40:41 <oneswig> #link Controlled data research https://etherpad.openstack.org/p/Scientific-SIG-Controlled-Data-Research
21:41:21 <b1airo> Thanks oneswig, I'll look once I'm on the bus
21:45:42 <oneswig> I had a question about DK Panda's talk, which mentioned SR-IOV and live migration. Were there any signs of that technology emerging?
21:46:17 <oneswig> #link SR-IOV and live migration https://www.openstack.org/videos/vancouver-2018/building-efficient-hpc-clouds-with-mvapich2-and-openstack-over-sr-iov-enabled-heterogeneous-clusters
21:48:22 <jmlowe> He also mentioned it at the HPC advisory council last year
21:48:43 <b1airo> I think it's been discussed upstream in qemu before
21:48:45 <oneswig> jmlowe: I recall, but this time it seemed to be working...?
21:48:53 <jmlowe> lol
21:49:19 <b1airo> Seems like a bit of a pipe dream
21:49:54 <trandles> that would be a big breakthrough if true
21:49:59 <b1airo> Does the solution he was talking about include replacing the VF with an emulated device during the migration?
21:51:00 <jmlowe> I can't imagine how that would work. There is some way to do it if you avoid layer 2 and use ethernet, if I remember correctly, but doing that with IB seems impossible
21:51:16 <oneswig> b1airo: it's been a long time coming but seemed more real this time.  I don't think the solution details were covered, might need to watch it again.
21:51:43 <b1airo> Yeah it would never be transparent, jmlowe, the app layer would have to handle broken QPs etc
21:52:09 <oneswig> Another talk that requires second viewing could be Sylvain's presentation on virtual GPUs: https://www.openstack.org/videos/vancouver-2018/call-it-real-virtual-gpus-in-nova
21:52:41 <b1airo> Much like RoCE/IB bonding failover from the app perspective I imagine, except twice in quick succession
21:54:22 <oneswig> b1airo: you're thinking MVAPICH MPI plays a part?  Perhaps it would need to.
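For Ethernet VFs there is a known (if clunky) recipe along the lines b1airo describes: bond the VF with an emulated virtio NIC inside the guest, hot-unplug the VF so traffic fails over to the emulated slave, migrate, then plug a VF back in on the destination. A hedged sketch with the libvirt Python bindings, using a hypothetical domain name, VF PCI address, and destination host; this is not necessarily what the talk implemented, and as discussed above it does nothing for IB QPs, which the application or MPI layer would still have to recover:

    import libvirt

    VF_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
    </hostdev>
    """  # hypothetical VF PCI address

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest1")          # hypothetical domain name

    # 1. Unplug the VF; the in-guest bond fails over to the virtio slave
    dom.detachDeviceFlags(VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # 2. Migrate with only migratable (emulated) devices attached
    dom.migrateToURI("qemu+ssh://dest-host/system",
                     libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
                     None, 0)

    # 3. Attach a VF local to the destination; the bond fails back
    dest = libvirt.open("qemu+ssh://dest-host/system")
    dest.lookupByName("guest1").attachDeviceFlags(
        VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)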
21:55:26 <jmlowe> you would have to do something like take the MAC address with you, then notify the fabric that the topology has changed
21:56:35 <oneswig> I think this talk from Jacob and the Mellanox team stood out for tech content: https://www.openstack.org/videos/vancouver-2018/ironing-the-clouds-a-truly-performant-bare-metal-openstack-1
21:57:33 <oneswig> jmlowe: topology change in ethernet = gratuitous arp from the new position?
21:58:27 <oneswig> We are close to time... any final picks?
21:59:54 <b1airo> Thanks for reminding me about the Mellanox one, I hadn't realised that was with Jacob. Will watch it now :-)
21:59:57 <jmlowe> something like that; part of the low latency of IB comes from not having to discover where to send a packet after it's been sent
22:00:03 <jmlowe> afaik
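On the Ethernet side, QEMU itself emits self-announce packets (RARP or gratuitous ARP, depending on version) from the destination after a live migration so that switches relearn the MAC's new port. A minimal scapy sketch of the same idea, with hypothetical address and interface values; as jmlowe notes, IB forwarding works differently and has no equivalent trick:

    from scapy.all import ARP, Ether, sendp

    def announce(ip, mac, iface):
        # Gratuitous ARP: "who-has ip" with psrc == pdst, broadcast from
        # the new location so switches on the path relearn the MAC
        pkt = Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
            op=1, hwsrc=mac, psrc=ip,
            hwdst="00:00:00:00:00:00", pdst=ip)
        sendp(pkt, iface=iface, count=3)

    announce("10.0.0.42", "52:54:00:12:34:56", "eth0")  # hypothetical values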
22:00:52 <oneswig> jmlowe: Ah, IB, so true.
22:00:58 <oneswig> Time's up, thanks all
22:01:03 <oneswig> #endmeeting