21:00:28 #startmeeting scientific-sig
21:00:28 sure
21:00:28 Meeting started Tue May 29 21:00:28 2018 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:32 The meeting name has been set to 'scientific_sig'
21:00:38 hi all
21:00:38 #chair martial
21:00:39 Current chairs: martial oneswig
21:00:46 Greetings!
21:00:46 Hello
21:00:57 hello everybody
21:01:02 #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_29th_2018
21:01:12 Hi martial, are you travelling again this week?
21:01:24 nah it is next week
21:01:40 ah OK, enjoy the break :-)
21:02:01 b1airo: you there b1airo?
21:02:07 Howdy
21:02:22 hey b1airo
21:02:25 #chair b1airo
21:02:26 Current chairs: b1airo martial oneswig
21:02:38 ok let's get this show on the road
21:02:47 #topic Vancouver roundup
21:02:54 Yep, just making the porridge... :-)
21:03:17 Firstly, thanks to everyone who joined in a good SIG session, particularly the people who spoke in the BoF
21:03:34 and your John who won :)
21:03:45 (you know "your John")
21:04:15 Yes indeed - bravo John Garbutt, excellently spoken
21:04:27 #link session etherpad https://etherpad.openstack.org/p/scientific-sig-vancouver2018-meeting
21:04:32 +1
21:04:35 Lots of things covered.
21:05:53 A good deal of interest in new activity areas - controlled data management, more federation, security, object storage enhancements, more performance
21:05:57 although your shed is going to be the talk of the summit :)
21:06:19 martial: if I could tell you it was my shed, I would proudly do so...
21:06:46 Maybe if we're defining a Scientific OpenStack Constellation it could have a derelict shed as its mascot!
21:07:40 Seems pretty quiet around here this morning
21:08:03 b1airo: seconded (although with minor reservations...)
21:08:17 Heh
21:08:23 Any lurkers?
21:08:53 *tumbleweed*
21:09:23 We should try to gather some groups around specific efforts, if we can.
21:09:43 oneswig: I watched a Ceph HCI NFV session in transit and heard you asking a question afterwards, so maybe you'll know the answer to my question...
21:10:11 I asked a question? Was this the Intel one?
21:10:15 They mentioned putting CPU quota on the OSD daemons, any idea how they did it?
21:10:35 Hi StefanPaetowJisc - saw you lurking :-)
21:10:51 Howdy
21:10:55 o/ StefanPaetowJisc
21:11:31 only mechanism that comes to mind is cgroups
21:11:35 Oh that talk - I think it must have been cgroups, would that make sense?
21:11:36 I assume cgroups, but not clear what type of cgroup control
21:11:44 snap
21:11:57 Apologies... I've been otherwise engaged.
21:12:12 Yeah so I assume when they say "quota" it would have to be CPU share?
21:12:32 StefanPaetowJisc: I have you in my sights for an update on Moonshot one week - if there's news to report?
21:12:53 But if that's what they used then I think their experimental setup was flawed as they didn't generate load to force the CPU share into effect
21:13:02 No news at the moment... Unless having a new logo counts. *rolls eyes*
21:13:38 b1airo: that was my understanding. Going from memory they found a sweet spot around 30-50% of a CPU...
21:13:43 StefanPaetowJisc: This is IRC, man...
21:13:58 The macOS client drags on. And on. And on...
21:14:27 * StefanPaetowJisc repeatedly bashes his head on a desk
21:15:04 * martial pats StefanPaetowJisc on the back ... there there, it gets better
21:15:14 Yeah I saw their numbers, but the fact that 50% made almost no difference makes me think cgroups weren't doing anything
21:15:50 b1airo: I don't recall what they were doing for load... CPU share only applies when a resource is under contention? That would make some sense
21:16:35 That's right, so how did they ensure contention to make their percentage numbers meaningful
21:17:24 Wasn't I sitting next to you in that talk?
21:17:47 stepping out for a few minutes
21:18:14 Haha, nah that was their related one
21:18:32 Seems they had 2 talks and a lightning talk on basically the same stuff
21:20:00 Ah ok.
21:20:44 I'll email them
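The presenters didn't say how the OSD CPU quota was applied. A minimal sketch of one plausible mechanism, driving the cgroup v2 cpu controller directly; the cgroup path, PID, and values below are illustrative assumptions, not details from the talk:

```python
# Sketch: cap a ceph-osd process at 50% of one CPU with cgroup v2.
# All paths, PIDs, and values here are illustrative assumptions.
import os

CGROUP = "/sys/fs/cgroup/ceph-osd-quota"  # hypothetical cgroup
OSD_PID = 12345                           # PID of the ceph-osd daemon

os.makedirs(CGROUP, exist_ok=True)

# cpu.max is "<quota> <period>" in microseconds: 50000/100000 is a
# hard cap at 50% of one core, enforced even when the host is idle.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("50000 100000")

# cpu.weight, by contrast, is a relative share that only bites when
# CPUs are contended -- which is why a benchmark without background
# load would show no difference, as noted above.
with open(os.path.join(CGROUP, "cpu.weight"), "w") as f:
    f.write("50")

# Move the OSD daemon into the new cgroup.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(OSD_PID))
```

With systemd-managed OSDs the same hard cap can be set without touching sysfs, e.g. `systemctl set-property ceph-osd@0.service CPUQuota=50%`.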
21:21:11 Anyway, I'd like to bring together any fellow travellers on controlled datasets and federation issues - these are pertinent things we are facing in the day job. I'm hoping to gather some notes on the controlled data story over the coming weeks to find common areas and best practice.
21:21:55 Sounds great
21:22:03 I have the feeling I'm going to be in that boat sometime in the next year
21:22:13 controlled data is my life
21:22:22 Any thoughts on how to break the areas down?
21:22:27 well, not literally
21:22:34 o/ trandles
21:23:36 hello b1airo
21:23:46 I hear rumors involving you
21:23:59 b1airo: was it this talk? https://www.openstack.org/videos/vancouver-2018/implications-of-ceph-client-performance-for-hyper-converged-telco-nfv
21:25:00 Oh, the gossip mill is grinding huh?
21:25:49 trandles: did you see that Julia had put up a POC implementation of boot-to-ramdisk? https://review.openstack.org/#/c/568940/
21:26:09 I didn't see that, thanks for the link...I'll look at it now
21:26:27 b1airo: nasty gossip about Hobbiton or something
21:26:37 This talk oneswig: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21452/high-performance-ceph-for-hyper-converged-telco-nfv-infrastructure
21:26:59 Lol
21:27:27 It's true trandles, I'm moving back to defend the one ring
21:29:10 Back to Aotearoa?
21:29:26 b1airo: congrats
21:29:47 There's only one cloud there, and it's a long white one...
21:30:14 Thanks!
21:30:59 Well there are at least three OpenStack clouds there, but I fear NeSI's is probably on Kilo
21:34:20 Congrats!
21:35:24 b1airo: you're the #1 guy in the Southern Hemisphere for extremely problematic OpenStack upgrades... :-)
21:35:47 What a mantle!
21:37:24 oneswig: I think we should start an etherpad to try and figure out the different angles for sensitive data handling
21:37:37 On the subject of summit videos, can anyone recommend particular talks? Would be good to gather some links
21:37:56 +1 - especially given all the different options for data storage
21:38:17 There are a lot of ways to come at it; some are OpenStack related, but much of it isn't, or would only be peripherally related if auth integration was wanted/needed
21:38:44 b1airo: makes sense to me.
21:38:55 I'll make one now
21:39:30 barbican dev alee was especially interested in real world performance measurements
21:40:41 #link Controlled data research https://etherpad.openstack.org/p/Scientific-SIG-Controlled-Data-Research
21:41:21 Thanks oneswig, I'll look once I'm on the bus
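On the real-world Barbican performance measurements alee asked about, a rough client-side latency probe might look like the sketch below, using python-barbicanclient. The auth URL and credentials are placeholders, and this times whole REST round-trips from the client rather than isolating the backend:

```python
# Rough client-side latency probe for Barbican secret store/retrieve.
# Endpoint and credentials are placeholders, not a real deployment.
import time
from keystoneauth1 import identity, session
from barbicanclient import client

auth = identity.Password(
    auth_url="https://keystone.example.org:5000/v3",  # placeholder
    username="demo", password="secret", project_name="demo",
    user_domain_name="Default", project_domain_name="Default")
barbican = client.Client(session=session.Session(auth=auth))

store_times, fetch_times = [], []
for i in range(100):
    t0 = time.perf_counter()
    ref = barbican.secrets.create(
        name=f"perf-{i}", payload="0123456789abcdef").store()
    store_times.append(time.perf_counter() - t0)

    t0 = time.perf_counter()
    _ = barbican.secrets.get(ref).payload  # payload fetch is lazy
    fetch_times.append(time.perf_counter() - t0)

# Rough medians over 100 samples.
print(f"store p50 {sorted(store_times)[50]:.3f}s, "
      f"fetch p50 {sorted(fetch_times)[50]:.3f}s")
```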
21:45:42 I had a question about DK Panda's talk, including mention of SR-IOV and live migration. Were there any signs of that technology emerging?
21:46:17 #link SR-IOV and live migration https://www.openstack.org/videos/vancouver-2018/building-efficient-hpc-clouds-with-mvapich2-and-openstack-over-sr-iov-enabled-heterogeneous-clusters
21:48:22 He also mentioned it at the HPC Advisory Council last year
21:48:43 I think it's been discussed upstream in qemu before
21:48:45 jmlowe: I recall, but this time it seemed to be working...?
21:48:53 lol
21:49:19 Seems like a bit of a pipe dream
21:49:54 that would be a big breakthrough if true
21:49:59 Does the solution he was talking about include replacing the VF with an emulated device during the migration?
21:51:00 I can't imagine how that would work; there is some way to do that if you avoid layer 2 and use Ethernet, if I remember correctly, but doing that with IB seems impossible
21:51:16 b1airo: it's been a long time coming but seemed more real this time. I don't think the solution details were covered, might need to watch it again.
21:51:43 Yeah it would never be transparent jmlowe, the app layer would have to handle broken QPs etc
21:52:09 Another talk that requires second viewing could be Sylvain's presentation on virtual GPUs: https://www.openstack.org/videos/vancouver-2018/call-it-real-virtual-gpus-in-nova
21:52:41 Much like RoCE/IB bonding failover from the app perspective I imagine, except twice in quick succession
21:54:22 b1airo: you're thinking MVAPICH MPI plays a part? Perhaps it would need to.
21:55:26 you would have to do something like take the MAC address with you then notify the fabric that the topology has changed
21:56:35 I think this talk from Jacob and the Mellanox team stood out for tech content: https://www.openstack.org/videos/vancouver-2018/ironing-the-clouds-a-truly-performant-bare-metal-openstack-1
21:57:33 jmlowe: topology change in ethernet = gratuitous arp from the new position?
21:58:27 We are close to time... any final picks?
21:59:54 Thanks for reminding me about the Mellanox one, I hadn't realised that was with Jacob. Will watch it now :-)
21:59:57 something like that, part of the low latency of IB comes from not having to discover where to send a packet after it's been sent
22:00:03 afaik
22:00:52 jmlowe: Ah, IB, so true.
22:00:58 Time's up, thanks all
22:01:03 #endmeeting
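To make the gratuitous-ARP point from the 21:55-22:00 exchange concrete: after a MAC address moves, an unsolicited ARP reply broadcast from the new location lets switches and peers relearn it. A minimal sketch with scapy; the interface and addresses are made-up placeholders:

```python
# Sketch: announce that an IP/MAC pair has moved by broadcasting a
# gratuitous ARP from its new location. Addresses are placeholders.
from scapy.all import ARP, Ether, sendp

IFACE = "eth0"             # placeholder interface
IP = "10.0.0.5"            # the address that "moved"
MAC = "52:54:00:12:34:56"  # its MAC, carried along

# A gratuitous ARP is an unsolicited reply (op=2, "is-at") in which
# the sender's IP appears as both source and target. Broadcasting it
# makes switches relearn which port the MAC sits behind and lets
# neighbours refresh their ARP caches without a discovery round-trip.
pkt = Ether(src=MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2, hwsrc=MAC, psrc=IP, hwdst="ff:ff:ff:ff:ff:ff", pdst=IP)
sendp(pkt, iface=IFACE, verbose=False)
```

QEMU's self-announcement after live migration serves roughly the same purpose for a VM's MAC; as jmlowe notes, IB has no direct analogue, since forwarding state is programmed by the subnet manager rather than learned from traffic.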