11:00:25 #startmeeting scientific-sig
11:00:25 Meeting started Wed Jun 5 11:00:25 2019 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:28 The meeting name has been set to 'scientific_sig'
11:00:34 yarr \o/
11:00:34 hi All
11:00:49 Good morning/day/evening :)
11:00:51 oneswig: would you mind if we start with the SDN discussion? I need to pop out for half an hour soon
11:00:51 hi janders, g'day
11:00:59 Afternoon.
11:01:01 certainly, let's do that.
11:01:03 hi verdurin
11:01:11 #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_June_5th_2019
11:01:25 OK, let's start by ignoring the running order...
11:01:27 hi all
11:01:31 o/
11:01:36 #topic Coherent SDN fabrics
11:01:44 'ello
11:01:45 janders: you raised this at the PTG
11:01:47 only a brief update from my side: I've had a brief chat with the Mellanox guys. Overall they don't see an issue with this, but I'd like to get more detailed feedback off them
11:01:59 #chair b1airo martial_
11:02:00 Current chairs: b1airo martial_ oneswig
11:02:07 last week's neutron-driver meeting got cancelled so haven't spoken to the Neutron guys in detail either
11:02:26 indeed - this is follow up to the SDN consistency issues that we raised in the SIG
11:02:31 janders: ah that's too bad on the cancelled meeting
11:02:50 hi mgoddard thanks for coming
11:02:57 did you see what this is about?
11:03:18 hi oneswig yes I did, and thought I'd pop along
11:03:28 hey Mark thanks for coming
11:03:30 #link Spec in review https://review.opendev.org/#/c/565463/
11:03:33 I haven't got as far through the neutron spec as I'd like
11:04:30 Me neither. Still a little concerned it's tied to one implementation
11:05:18 We could come back to this in 15 minutes to allow some reading time?
11:05:40 o/
11:05:41 my understanding is they're trying to model on ODL but are trying to make the consistency check mechanism generic
11:06:20 what we can also do is - I will give you guys some time to go through this more in detail - and if you could make comments in the bug, that would be great
11:06:21 Hi hberaud, welcome
11:06:36 doesn't have to be during the meeting
11:06:43 oneswig: thx
11:06:46 janders: +1 makes sense to get the feedback out there
11:07:00 one key difference between mlnx and OVS is the async update
11:07:03 but I think if we could provide some feedback to the Neutron guys before Friday's driver meeting, that would be great
11:07:12 the mellanox driver updates NEO in an async thread
11:07:57 so not only have we committed the update to the neutron DB, we've actually finished processing the original update request by the time NEO is updated
11:08:48 right! that matches my experience/observations
11:09:35 mgoddard: in extreme cases could this cause a provisioned instance to be transiently connected to a different tenant network? Is there any means of blocking on completion of an operation?
11:10:08 o_0 !
11:10:16 oneswig: potentially
11:10:31 oneswig: neutron does have a blocking mechanism, but I don't think mlnx uses it
11:10:38 which part of this would need to go into neutron-core, and which part would be mlnx-specific changes?
11:11:09 there is a thing called provisioning blocks which can be used for this
11:11:41 agents are inherently async, so it's nothing new. You just have to know how to use it
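[For reference, a rough sketch of how an ML2 mechanism driver might use Neutron's provisioning_blocks helper to hold a port DOWN until the SDN controller acknowledges an update. The entity name and the controller-acknowledgement hook below are illustrative assumptions, not the actual Mellanox driver code.]

    # Sketch only: defer the port's ACTIVE status until the controller confirms.
    from neutron.db import provisioning_blocks
    from neutron_lib.callbacks import resources
    from neutron_lib.plugins.ml2 import api

    SDN_ENTITY = 'SDN_CONTROLLER'  # hypothetical entity name for this driver

    class ExampleSDNMechanismDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def update_port_precommit(self, context):
            # Register a provisioning block so Neutron keeps the port DOWN
            # until this driver reports completion.
            provisioning_blocks.add_provisioning_component(
                context._plugin_context, context.current['id'],
                resources.PORT, SDN_ENTITY)

        def update_port_postcommit(self, context):
            # Push the change to the controller; _on_controller_ack is a
            # hypothetical hook invoked once the backend reports success.
            self._push_to_controller(context.current, self._on_controller_ack)

        def _on_controller_ack(self, plugin_context, port_id):
            # Clearing the block lets Neutron transition the port to ACTIVE.
            provisioning_blocks.provisioning_complete(
                plugin_context, port_id, resources.PORT, SDN_ENTITY)

        def _push_to_controller(self, port, callback):
            # Placeholder for the driver's controller client (e.g. the NEO REST API).
            raise NotImplementedError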
11:11:55 convincing mlnx to use that definitely sounds like a good idea
11:11:56 Ah OK. Feature not bug
11:12:36 but how about detecting state divergence and addressing it? what we're talking about doesn't address that right?
11:12:40 or would it?
11:12:51 Probably bug not feature :)
11:13:32 state divergence is always a potential problem if the controller can be changed/updated independently of Neutron
11:13:41 janders: yes this doesn't relate to the missing port updates
11:14:04 my idea behind the bug was implementing a mechanism that brings SDN-based approaches on par with ovs - so that state divergence can be detected/addressed
11:14:26 https://review.opendev.org/#/c/565463/12/specs/stein/sbi-database-consistency.rst@332 looks like it might be able to address that
11:14:53 however it would be great to have your feedback
11:15:23 in general I think it would be worthwhile to split the discussion into two parts - the generic one (which the neutron team need to address) and the specific ones (which I'm happy to bring up with Mellanox)
11:16:00 I totally see value of improvements around provisioning blocks as suggested by mgoddard
11:17:01 Makes sense. First part may be gated, for example, on the difficulty of associating a revision number on applied config in arbitrary SDN implementations. That's reviewing the spec I guess
11:17:02 I think provisioning blocks could be an easy win
11:17:23 then the spec might be able to prevent subsequent drift
11:17:43 janders: think you can entice your friendly mlnx engineer here for part 2?
11:17:52 I have to disappear for about half an hour now - can I please ask you to make comments on what you like/dislike/any suggestions around the generic part in the bug report? It would be great to see some activity there, it would help get attention from the devs
11:18:03 oneswig: yes, I'm happy to! :)
11:18:08 +1
11:18:19 excellent - thanks guys
11:18:32 hopefully I'll be back before the end of the meeting
11:18:35 #action janders Involve Mellanox tech team on the issue
11:18:49 #action oneswig mgoddard add comments to the bug report and/or spec
11:19:08 OK, thanks janders and all, next
11:19:18 #topic CERN OpenStack roundup
11:19:36 that was fun, the CERN team laid on a great day
11:19:41 dh3: great talk
11:19:48 you are too kind *blush*
11:19:55 How's the eagle recovering from being swabbed? :-)
11:20:17 I hear there are some interesting stories from field expeditions ("Real Science")
11:20:58 I once had a support call that ended "can we get this fixed because there's a plane flying over Antarctica right now that needs the service back online..."
11:21:15 In a previous job
11:21:18 looked like a good time on Slack and Twitter
11:21:58 I enjoyed it. In the afternoon the talk on OpenStack control plane tracing was especially interesting.
11:22:25 Yes I think osprofiler could be educational as well as useful
11:22:44 #link Ilya's presentation https://indico.cern.ch/event/776411/contributions/3402240/attachments/1851629/3039923/08_-_Distributed_Tracing_in_OpenStack.pptx.pdf
11:23:20 I think we'll be doing plenty of that in due course
11:24:02 nice, thanks
11:25:24 Erwan Gallen's talk on face recognition was good too - nice to cover more on the algorithms.
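[On the osprofiler / control-plane tracing discussion above: a minimal setup sketch. The shared key and the Redis trace store are placeholders, and each service to be traced needs the same [profiler] section; the exact CLI invocation may vary by release.]

    # /etc/nova/nova.conf (and any other service to be traced) - sketch only;
    # SECRET_KEY and the Redis endpoint are placeholders
    [profiler]
    enabled = True
    trace_sqlalchemy = True
    hmac_keys = SECRET_KEY
    connection_string = redis://127.0.0.1:6379

    # Trigger a traced request (prints a trace ID), then render it:
    $ openstack --os-profile SECRET_KEY server list
    $ osprofiler trace show --html <trace-id> --connection-string redis://127.0.0.1:6379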
11:25:58 OK we should move on I think
11:26:12 #topic Secure Computing Environments
11:26:44 One question from the audience at CERN last week related to how to find best practice for controlled-access data processing
11:27:23 This is something we've attempted to grapple with previously but not got very far towards common ground.
11:27:31 Once more unto the breach...
11:28:03 Jani from Basel University and I presented on a project underway there
11:28:23 #link BioMedIT project in Basel https://indico.cern.ch/event/776411/contributions/3345191/attachments/1851634/3039929/05_-_Secure_IT_framework_for_Personalised_Health_data.pdf
11:28:45 This was a lightning talk though so no time to cover anything at all in depth
11:28:54 #link https://indico.cern.ch/event/776411/contributions/
11:29:04 this had all the talks
11:29:25 cool, thanks martial_
11:30:11 We don't have a best practice as such - we are still offering all the various compute/storage facilities to data owners/principal investigators and trying to help them choose the right one for their data access and analysis workload
11:30:29 buffet table vs a la carte I guess
11:31:07 On our side, the first meeting to discuss a paper about our work with restricted data is coming up. That will be science as well as infrastructure.
11:32:16 I saw a useful paper recently on classification of security levels across various domains, but I'm struggling to find it alas
11:32:38 interesting set of slides
11:34:10 looks like a decent architecture, though the thing often missing is mapping of various controls to any particular framework of reference
11:34:40 Something I'd like to see is authoritative research of the form "to achieve this level of security / standard, you must implement this..."
11:35:24 oneswig: do you have something like ISO 27001/2 in mind?
11:36:12 Yes, that kind of thing, but also there are different classification systems in different sectors - government, health data, etc.
11:38:02 yes agreed oneswig... starting with a basic multi-tenant cloud implementation, what bits of OpenStack or other standard tooling can you sprinkle in to achieve sufficient confidence in the architecture and visibility of drift or abnormalities? You then need to layer in incident response requirements, and so on and so forth
11:39:42 b1airo: infrastructure configuration is certainly not the only requirement, true.
11:40:15 Yes, in my experience, lots of policies, and possibly even the setting up of new administrative bodies.
11:40:15 I'm drawing a blank on this paper, may follow up to the Slack workspace if I find it.
11:40:51 to achieve compliance with any standard there's necessarily a layer of abstraction down to specific controls, procedures (and sometimes the organisational units & heads that are responsible for them)
11:41:42 I've never had to work with that sort of published standard but experience says that it will reach from infra/admins through application devs too
11:42:30 in OpenStack today I don't think we even have a reference list of controls, let alone reference architectures
11:42:31 dh3: do you need specific standards to host e.g. UK Biobank data? (Or was that just animals and plants?)
11:43:19 oneswig: it's actually more complicated than that, there are many varieties of UKB data, with their own special requirements.
11:43:22 oneswig: it's not really my area but I know that for some data (human identifiable) we have had to implement separate networks, virt platforms, storage....
11:44:50 dh3: does Sanger have policy guidelines for this that are public?
11:44:54 our UKB vanguard data is in our production OpenStack environment, but in a separate host aggregate (for perf isolation but does bring security) and separate storage/networking (via provider network)
11:45:06 oneswig: not that I have seen! but I can ask
11:45:16 dh3 Jani from Basel here, I agree with requirement for separate networks, etc.
11:45:26 Might be really useful!
11:45:34 hi heikkine, thanks for coming
11:45:51 I may drop off any moment but trying to follow what I can
11:46:27 dh3: do you go as far as separate physical storage?
11:47:15 oneswig: yes in some cases, "identifiable human data" is the usual trigger but not always
11:47:40 dh3: what is in your case separate physical storage? Separate from OpenStack?
11:48:31 heikkine: taking UKB as an example, we have it on a separate Lustre system, it's only accessible from that tenant's provider network.
11:49:02 so separate from OpenStack and separate from other projects too
11:49:03 would software-defined-storage help in this sense?
11:49:26 I see. So you do tenant isolation that way. We are starting with NFS but I have done tests with BeeGFS
11:49:33 as in - having a pool of JBOFs - and then spinning up a "private" storage cluster of a certain scale on demand
11:49:38 We are looking at Lustre to cover this kind of case in the future, given it's more amenable than GPFS.
11:49:59 A substantial discussion in its own right
11:50:38 there are some US standards that actually specify isolated physical storage
11:50:39 janders: it depends - again not entirely my area but I understand that some data access agreements are restrictive on where data can be kept (physically and geographically) - and SDS makes it harder to show an auditor "it's here"
11:51:08 Over the summer it may be good to gather some of these pieces together into a joint study. Does anyone object to being involved?
11:51:09 e.g. think ITAR data cannot be mixed with non-ITAR data on same physical storage
11:51:34 I think this is correct, Blair
11:51:36 lol, love the phrasing of that question oneswig !
11:51:37 right! whether it works is one thing, whether it ticks compliance box might be another thing. Very useful feedback, thank you.
11:51:37 janders: yes, to echo dh3's comments, I'm dealing with a project at the moment in which the form would not cope well with such a setup.
11:52:04 SDS also makes proving secure deletion more difficult (can't even identify which HDDs to take out and shred...)
11:52:22 b1airo: opt-out from being volunteered :-)
11:52:24 (I suppose node cleaning doesn't cut it :)
11:52:42 node cleaning + DBAN maybe :)
11:53:01 janders: actually if you are using self-encrypting disks then it might...
11:53:04 Don't deprive the workshop people of their longer-for chances to deploy the angle grinder
11:53:10 longed-for
11:53:28 verdurin: --yes-i-really-mean-it ...
11:53:34 :D
11:53:40 because we know hardware level security is so good... *cough*
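[On the node-cleaning point above: a minimal ironic.conf sketch for automated cleaning with full-disk erase between tenants. The priority value is illustrative; ironic-python-agent attempts ATA secure erase where the drive supports it and otherwise falls back to overwriting.]

    # /etc/ironic/ironic.conf - sketch only; priority is illustrative
    [conductor]
    # Run cleaning automatically when a node is released by a tenant.
    automated_clean = True

    [deploy]
    # Enable the full-disk erase clean step (ATA secure erase where the
    # drive supports it, otherwise overwrite). A priority of 0 disables it.
    erase_devices_priority = 10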
11:54:15 One more topic to cover today, can we follow up on this perhaps via the slack channel?
11:54:26 sounds good
11:54:29 dh3: are you on there?
11:54:47 I can send an invite - unless you're too secure for Slack of course :-)
11:55:11 #topic Justification whitepaper for on premise cloud for research and scientific computing
11:55:16 oneswig: no (not yet - do invite me please)
11:55:22 dh3: will do
11:55:58 just a quick mention, some of the US folks are very interested in pooling together on a paper that lays out the private cloud case
11:56:17 This was raised at the SIG session in the Denver PTG
11:56:18 there's a great NASA tech paper I came across today that looks at this from the HPC perspective: https://www.nas.nasa.gov/assets/pdf/papers/NAS_Technical_Report_NAS-2018-01.pdf
11:56:32 Anyone in this session recently made this case themselves?
11:56:41 thanks b1airo
11:57:17 During the session somebody mentioned the talk by Andrew Jones of NAG at the UKRI Cloud Workshop in January
11:57:25 #link Andrew Jones presentation https://www.linkedin.com/pulse/cloud-hpc-dose-reality-andrew-jones/
11:58:30 The hope is that this material might make a useful resource to draw upon when writing procurement proposals.
11:58:42 lots of follow up reading after today's meeting - thanks for some great links guys
11:59:02 a key thing to establish early on here would be the kinds of applications that are being supported on the private cloud
11:59:09 hadn't seen that, thanks (but we are "strategically" going cloud-wards anyway)
11:59:17 b1airo: very true
11:59:20 and a valid answer might be "all of them"
11:59:36 OK nearly out of time - final comments on this?
11:59:55 raise it again for next week's timezone I guess
12:00:07 b1airo: +1, will do.
12:00:30 OK all, time to stop - thanks everyone
12:00:31 thanks guys! again - any comments in the SDN bug will be very welcome by the Neutron team
12:00:37 Thanks ;)
12:00:39 janders: will do
12:00:46 #endmeeting