09:00:30 <oneswig> #startmeeting scientific_wg
09:00:31 <openstack> Meeting started Wed Jun  7 09:00:30 2017 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:34 <openstack> The meeting name has been set to 'scientific_wg'
09:00:46 <oneswig> Greetings
09:01:03 <arnewiebalck> o/
09:01:03 <oneswig> #link agenda for today https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_7th_2017
09:01:13 <oneswig> arnewiebalck: hi!  Thanks for coming
09:01:18 <priteau> o/
09:01:26 <oneswig> Hi priteau
09:01:30 <arnewiebalck> oneswig: np
09:01:33 <priteau> Hello
09:02:14 <mgoddard_> o/
09:02:31 <noggin143> o/
09:03:22 <oneswig> So today we'll cover the CephFS user experience first, then the archive on Zenodo, then the other business
09:03:45 <oneswig> #topic CephFS for user home dirs
09:03:50 <verdurin> Morning
09:04:07 <oneswig> Hi verdurin mgoddard_ noggin143, welcome all
09:05:08 <oneswig> We've been working on a cluster-as-a-service deployment, aiming to be more than superficially useful, and we wanted to graft in a good choice for a home directory.
09:05:25 <oneswig> I saw Arne's video from Boston which covers a very similar subject
09:05:53 <oneswig> #link CERN's CephFS story https://www.openstack.org/videos/boston-2017/manila-on-cephfs-at-cern-the-short-way-to-production
09:06:14 <oneswig> arnewiebalck: you've been using it since last summer, right?
09:06:31 <arnewiebalck> Yes.
09:06:45 <oneswig> Why did you choose CephFS originally over (say) nfs
09:06:46 <arnewiebalck> Not for global home dirs, though.
09:07:09 <arnewiebalck> CephFS was chosen to fill a gap we had for the HPC use case.
09:07:33 <arnewiebalck> We did not have a parallel FS for HPC apps.
09:08:03 <oneswig> Did you look at other options there? Lustre, Glusterfs, etc.?
09:08:03 <arnewiebalck> It is now also used for the home dirs, but only for the HPC users within the cluster.
09:08:18 <arnewiebalck> We looked at Lustre some years ago.
09:08:30 <verdurin> arnewiebalck: this is instead of CVMFS?
09:08:34 <arnewiebalck> At the time, to consolidate our storage solutions.
09:09:02 <arnewiebalck> verdurin: No, cvmfs fulfills yet another use case.
09:09:29 <arnewiebalck> We had Ceph and needed a parallel FS, so CephFS was the natural choice to try out.
09:10:06 <arnewiebalck> We were not convinced by Lustre from an operational point of view (at the time).
09:10:26 <arnewiebalck> So, CephFS has been in prod for HPC since last summer.
09:10:27 <oneswig> Makes sense. Are you using CentOS Ceph Jewel or some other Ceph packages?
09:10:56 <arnewiebalck> We’re using upstream Jewel.
09:11:10 <zioproto> hello all
09:11:19 <oneswig> Hi zioproto - just starting on Cephfs
09:11:24 <zioproto> thanks
09:11:41 <arnewiebalck> Once we established CephFS with HPC, some other use cases came up, such as K8s.
09:11:58 <sw3__> hi everyone..
09:12:06 <oneswig> arnewiebalck: Did you choose to deploy from upstream because of stability or something else?
09:12:14 <arnewiebalck> And now we’re trying to replace our current NFS use cases with our CephFS deployment.
09:12:48 <oneswig> hi sw3__ welcome
09:13:03 <sw3__> thx
09:13:23 <arnewiebalck> oneswig: Our Ceph team has a pretty direct link to the upstream Ceph developers.
09:13:54 <zioproto> arnewiebalck: are you using Jewel ?
09:14:00 <arnewiebalck> zioproto: yes
09:14:09 <arnewiebalck> zioproto: testing Luminous as well
09:14:15 <arnewiebalck> or pre-Luminous
09:14:39 <arnewiebalck> from our experience it works pretty well so far
09:14:49 <verdurin> arnewiebalck: pre-Luminous with BlueStore?
09:14:52 <oneswig> arnewiebalck: Can you describe the storage pool configuration ?
09:14:52 <arnewiebalck> the main issue, as Tim mentioned yesterday, is quotas
09:15:59 <arnewiebalck> oneswig: I’m somewhat out of my depth here … anything specific you’re interested in?
09:16:20 <oneswig> Specifically, things like bluestore, or journaled vs SSD pool for metadata
09:16:50 <noggin143> Some more details on the CERN Ceph install from Dan at https://indico.cern.ch/event/542464/contributions/2202295/attachments/1289543/1921810/cephday-dan.pdf
09:17:08 <arnewiebalck> we don’t have SSD only pools
09:17:09 <oneswig> We started with a journaled OSD for our metadata pool.  Our user applications immediately found problems with that.
09:17:26 <oneswig> noggin143: thanks, will study!
09:17:56 <arnewiebalck> oneswig: the usage from the HPC use is pretty moderate so far
09:18:32 <arnewiebalck> and no BlueStore on the production ones
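For context on the pool layout being discussed: a CephFS filesystem is built from a separate metadata pool and data pool, which is what makes it possible to put metadata on faster media than the bulk data. A minimal sketch, with illustrative pool names and PG counts; the device-class CRUSH rule shown is Luminous-style syntax, and on Jewel the equivalent placement needs a hand-edited CRUSH map and crush_ruleset instead:

    # create the data and metadata pools (names and PG counts are illustrative)
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 64

    # build the filesystem: metadata pool first, then data pool
    ceph fs new cephfs cephfs_metadata cephfs_data

    # optionally steer the metadata pool onto SSD OSDs (Luminous device classes)
    ceph osd crush rule create-replicated ssd_rule default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd_rule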
09:18:38 <oneswig> arnewiebalck: We see issues with file access patterns involving fsync - up to 5s system call latency - is this something you've found?  Apparently it is resolved in Luminous but I haven't checked
09:19:32 <arnewiebalck> yes, there were issues with specific use cases as well
09:19:57 <oneswig> arnewiebalck: can you talk about any specific lessons learned?
09:21:29 <arnewiebalck> not really: most of the issues we found were resolved quickly
09:21:37 <arnewiebalck> there is a list on my slide deck
09:21:59 <arnewiebalck> but I can check with the Ceph team if I missed something
09:22:15 <oneswig> Do you know when the quota support goes into the kernel client?
09:22:35 <arnewiebalck> no
09:22:41 <arnewiebalck> but this mostly affects K8s
09:22:53 <arnewiebalck> we use ceph-fuse everywhere else
09:23:09 <arnewiebalck> and recommend this as the preferred way to access CephFS
09:23:23 <arnewiebalck> also due to the way features are added
09:23:45 <arnewiebalck> but also on fuse quota is only advisory
09:24:07 <oneswig> arnewiebalck: have you measured the overhead of fuse for your hpc use cases?
09:24:20 <arnewiebalck> but at least it prevents users from accidentally filling the cluster
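As a reference point for the quota discussion: CephFS quotas are set as extended attributes on directories and, on Jewel, are enforced only by ceph-fuse on the client side (hence "advisory"); the kernel client gained quota support later. A sketch with illustrative paths and limits:

    # cap a user directory at roughly 100 GB and one million files
    setfattr -n ceph.quota.max_bytes -v 100000000000 /cephfs/users/alice
    setfattr -n ceph.quota.max_files -v 1000000 /cephfs/users/alice

    # read the current limit back
    getfattr -n ceph.quota.max_bytes /cephfs/users/alice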
09:24:39 <arnewiebalck> oneswig: compared to the kernel client?
09:24:44 <arnewiebalck> no
09:24:45 <oneswig> I'm in the process of trying to compare the two (and NFS)
09:25:23 <oneswig> arnewiebalck: something that intrigued me about the video of your talk - somebody asked a question about RDMA - I hadn't realised that project was active, do you know more about it?
09:26:52 <arnewiebalck> I briefly talked to Dan about it after the summit.
09:27:10 <oneswig> Dan van der Ster?
09:27:17 <arnewiebalck> If we set up hyperconverged servers, he’d like to try it out.
09:27:26 <arnewiebalck> oneswig: yes, he’s our Ceph expert
09:28:33 <oneswig> arnewiebalck: we'd like to try it too! :-)
09:28:39 <arnewiebalck> for RDMA, that’s basically all I know :)
09:29:11 <arnewiebalck> we’ll get some new clusters later this year, this may be an opportunity for testing
09:29:26 <oneswig> arnewiebalck: do you manage access to the CephFS using Manila?
09:29:47 <arnewiebalck> we didn’t in the beginning, but now we do, yes
09:30:05 <arnewiebalck> and we moved the pre-Manila users to Manila
09:31:02 <oneswig> Does that mean you have multiple CephFS filesystems active?  I believe that's considered experimental (but safe on latest Jewel)?
09:31:07 <verdurin> arnewiebalck: encouraging to see some real use of Manila
09:31:25 <arnewiebalck> oneswig: we have multiple clusters
09:31:38 <arnewiebalck> verdurin: it works pretty well
09:31:55 <arnewiebalck> verdurin: apart from minor issues ;)
09:32:32 <arnewiebalck> oneswig: multiple share types with separate clusters works for us
09:32:33 <oneswig> arnewiebalck: It's certainly interesting to hear that - but our use case is bare metal so no Manila for us (AFAIK)
09:32:58 <arnewiebalck> oneswig: why not?
09:33:02 <noggin143> You could still use Manila to handle the creation etc.
09:33:32 <arnewiebalck> oneswig: Manila is basically only used for creation, as a self-service portal, and for accounting
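For readers unfamiliar with the CephFS native driver, the self-service flow described here would look roughly like this from the Manila CLI; the share type, share name and cephx identity below are illustrative:

    # create a CephFS share against a pre-defined CephFS share type
    manila create CephFS 128 --name hpc_scratch --share-type cephfstype

    # grant a cephx identity access; Manila creates the identity and its key
    manila access-allow hpc_scratch cephx alice

    # the generated access key appears in the access list once it is active
    manila access-list hpc_scratch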
09:33:34 <oneswig> Ah, so it is, but what of the access to the storage network?
09:33:39 <noggin143> oneswig: Extract the secret cephx and then define a mountable filesystem on the bare metal
09:34:10 <arnewiebalck> well, the Manila server must be able to talk to the CephFS cluster, of course
09:34:40 <oneswig> noggin143: That's essentially what we do manually, providing per-project cephx keys for Ceph access
09:34:59 <oneswig> I had thought there was plumbing Manila adds in the hypervisor space too?
09:35:08 <arnewiebalck> oneswig: no
09:35:41 <oneswig> arnewiebalck: OK, sounds promising to me - I'll check it out.  Thanks!
09:36:11 <arnewiebalck> oneswig: unless you want NFS re-export
09:36:11 <noggin143> oneswig: what's cute with Manila is that the secrets are only visible to the project members so no need to mail around secrets
09:37:17 <oneswig> noggin143: sounds good.  We've been using Barbican for that.  It ties in well with our deploy scripts.  It's exciting to hear that there could be a smoother way through though
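Tying the last few points together, a bare-metal client can take the export path and access key straight from Manila and mount the share directly; a sketch assuming a ceph-fuse client, with hypothetical share, identity and path names:

    # look up where the share lives and the cephx key Manila generated
    manila share-export-location-list hpc_scratch
    manila access-list hpc_scratch

    # put the key into a keyring on the bare-metal node, then mount with
    # ceph-fuse, restricting the client to the share's subtree
    ceph-fuse /mnt/hpc_scratch \
        --id alice \
        --keyring /etc/ceph/ceph.client.alice.keyring \
        -r /volumes/_nogroup/<share-path-from-export-location>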
09:37:31 <oneswig> Any more on CephFS for today?
09:37:47 <oneswig> I recommend Arne's video BTW, very useful
09:38:36 <oneswig> OK - thanks Arne!  Lets move on
09:38:46 <arnewiebalck> oneswig: :)
09:38:47 <oneswig> #topic OpenStack research paper archive
09:39:01 <oneswig> Thanks for the blog post yesterday Tim
09:39:34 <oneswig> #link OpenStack in production blog post http://openstack-in-production.blogspot.co.uk/2017/06/openstack-papers-community-on-zenodo.html
09:40:11 <noggin143> oneswig: I tested the curate functionality with some folk from the EU Indigo Datacloud project and it works well
09:40:39 <oneswig> So for example, if I wanted to submit this paper on virtualised GPU direct: http://grids.ucs.indiana.edu/ptliupages/publications/15-md-gpudirect%20(3).pdf - what would I need to do?
09:41:08 <zioproto> oneswig: I guess here ? https://zenodo.org/login/?next=%2Fdeposit
09:41:12 <noggin143> oneswig: if you sign up to Zenodo (and there are various login options), you'll see a submit button
09:41:37 <zioproto> there is that big 'upload' button on top
09:41:54 <noggin143> zioproto: that's right, it's upload, not submit
09:42:17 <oneswig> Is it fair to assume any paper available online can be uploaded here?
09:42:19 <noggin143> oneswig: when you upload, you can put in details of authors, conference and DOI if you have it
09:42:51 <noggin143> oneswig: Chatting with the library folk here, the feeling is that if the paper is available on the internet freely, it could be uploaded to the repository.
09:43:16 <oneswig> noggin143: that's a huge barrier to entry removed
09:44:07 <noggin143> oneswig: it's part of the open data movement to make publicly funded research available to the general public.
09:44:53 <noggin143> oneswig: and also to preserve results. Often we find pointers to talks which are then 404s since the project funding has completed
09:45:12 <oneswig> noggin143: you're looking at linking with planet.openstack.org - any other thoughts on ways of keeping the archive prominent and maintained?
09:45:44 <noggin143> oneswig: I'm having a chat with the Zenodo folk to see if there is a possibility of RSS. However, there is a lot on their plate at the moment.
09:46:15 <oneswig> noggin143: I am sure that would help, and not just for this use case
09:46:15 <noggin143> oneswig: I was thinking about adding a blog with papers uploaded in the past month so that planet openstack would get the feed
09:46:46 <noggin143> oneswig: but people would not be overwhelmed by lots of entries
09:47:35 <priteau> oneswig: I am not sure we can upload random PDFs found on the web unless they are granting rights to redistribute
09:47:57 <oneswig> Is there a distinction made between research on infrastructure itself vs research performed using the infrastructure?
09:48:56 <noggin143> priteau: it depends on the journal for scientific papers. Creative Commons etc. is OK. I will review the documents when they are uploaded as part of the curation role.
09:49:04 <oneswig> priteau: that was my concern - my example grants permission provided it's not for profit.
09:49:48 <oneswig> noggin143: so zenodo takes some responsibility for the rights of the papers uploaded?
09:50:22 <noggin143> oneswig: I will clarify to be sure and update the blog accordingly.
09:50:52 <priteau> oneswig: the notice says "for personal or classroom use", I don't think uploading on zenodo qualifies. Indeed it even says later: "to post on servers or to redistribute to lists, requires prior specific permission and/or a fee."
09:50:55 <noggin143> oneswig: one of the fields on the submission is the license
09:51:43 <oneswig> priteau: so it does, well spotted. So this is tricky
09:52:36 <noggin143> oneswig: I propose to check this as part of the curation process when content is submitted.
09:52:42 <priteau> The authors who uploaded this paper to their own server may even be breaching the licence
09:53:42 <oneswig> Thanks noggin143 priteau - some clarification I think will help hugely.
09:54:05 <oneswig> Any more to add on the research paper archive?
09:54:30 <oneswig> #topic AOB
09:54:43 <oneswig> Anyone going to ISC in Frankfurt in a couple of weeks?
09:55:46 <oneswig> There's a side-workshop - Exacomm - being run by DK Panda.  We are starting on the process of evaluating their RDMA-enabled Spark
09:57:01 <oneswig> One immediately thinks of the maintenance of a special version, but perversely the OSU HiBD Spark is too new for Sahara support... seems like it's Sahara that's in need of updates!
09:57:51 <oneswig> #link In case you're at ISC http://nowlab.cse.ohio-state.edu/exacomm/
09:58:27 <oneswig> I had one further event - OpenStack Days UK in London, 26th September
09:58:56 <oneswig> #link OpenStack Days UK CFP https://www.papercall.io/openstackdaysuk
09:59:08 <oneswig> Any other events coming up?
09:59:18 <zioproto> the operators meetup in Mexico
09:59:25 <zioproto> but I did not follow closely
09:59:29 <zioproto> I dont know the details
09:59:34 <oneswig> Ah, true, you going?
09:59:37 <zioproto> but I think dates and location are set
09:59:44 <noggin143> zioproto: it's at the start of August
10:00:00 <zioproto> still have to decide if I can go
10:00:09 <noggin143> oneswig: I'll be on a beach then :-)
10:00:26 <oneswig> Again, a great choice of location for hungry operators - I thought Milan was good, but Mexico...
10:00:42 <zioproto> :)
10:00:46 <oneswig> Sadly we are out of time
10:00:56 <zioproto> have a good day ! :) ciao !
10:00:58 <oneswig> Thanks again noggin143 arnewiebalck & co
10:01:04 <oneswig> Until next time
10:01:07 <verdurin> Thanks, and bye.
10:01:08 <oneswig> #endmeeting