09:00:32 <oneswig> #startmeeting scientific-wg
09:00:33 <openstack> Meeting started Wed May 24 09:00:32 2017 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:37 <openstack> The meeting name has been set to 'scientific_wg'
09:00:49 <oneswig> #link Agenda is at https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_24th_2017
09:00:59 <oneswig> Good morning etc.
09:01:56 <oneswig> verdurin sends apologies - conflict with STFC UK cloud working group
09:03:26 <priteau> Good morning oneswig
09:03:37 <oneswig> Hi priteau good morning
09:03:44 <priteau> It seems very quiet today
09:03:56 <priteau> Is everyone out enjoying the sunshine?
09:03:58 <oneswig> It is indeed.
09:04:28 <oneswig> Very nice here.  I met with some of your colleagues from Oxford yesterday.
09:05:01 <oneswig> Square Kilometre Array workshop
09:05:32 <priteau> I haven't seen much OpenStack activity in Oxford yet. Cambridge seems way ahead.
09:05:50 <oneswig> I was interested if Chameleon keeps track of the scientific papers generated using the system?
09:06:08 <oneswig> It would probably be quite a sizeable collection if so!
09:07:02 <oneswig> These are people from Oxford using an OpenStack system at Cambridge, that is true...
09:07:03 <priteau> Yes, we ask our users to communicate their publications when they request a renewal of their Chameleon project. We also search on Google Scholar, for those users who forget to do it ;-)
09:07:29 <oneswig> Did you see the user-committee thread from Tim Bell?
09:07:36 <priteau> Yes, I read the thread.
09:07:44 <oneswig> #link for the record http://lists.openstack.org/pipermail/user-committee/2017-May/002051.html
09:07:46 <priteau> I will check our publication list
09:08:42 <priteau> I expect that many papers will just be using bare-metal nodes to run something other than OpenStack; those shouldn't really be in this archive
09:08:44 <oneswig> Thanks priteau, that's a great way to seed the collection.  I wonder how we could keep it current and maintained
09:08:57 <priteau> But I will look out for any OpenStack-specific ones
09:09:37 <oneswig> We've been mapping CephFS into our bare metal nodes as a home filesystem.  Does anything like that happen on Chameleon?
09:09:55 <oneswig> It's got some interesting attributes...
09:10:20 <priteau> We've set up Ceph as a backend for Swift so we encourage users to use the object store
09:10:35 <priteau> This sounds very interesting. Do you have more details about how you set it up?
09:10:38 <oneswig> From within the bare metal instances?
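(For context: "Ceph as a backend for Swift" normally means the Ceph RADOS Gateway exposing its Swift-compatible API rather than running Swift proper; the log doesn't say which Chameleon uses. A minimal sketch of enabling Swift-style access on RGW, with illustrative user names:)

    # create an RGW user, then a Swift subuser for Swift-API access
    radosgw-admin user create --uid=demo --display-name="Demo user"
    radosgw-admin subuser create --uid=demo --subuser=demo:swift --access=full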
09:11:15 <oneswig> priteau: I ought to write it up.  But for now, it's fairly straightforward.  We made storage pools for the project and generated a key that's shared between users of that project.
09:11:48 <oneswig> The key is provided to user instances via Barbican - but actually it's just some Ansible that grafts it into place and does the fs mount
09:12:24 <oneswig> So we could have used ansible vault (but I prefer this solution because the ansible goes up onto github)
09:13:52 <oneswig> The other tweak I've been deploying on these nodes is a PAM module that enables ssh connections to be authenticated with Keystone
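(The PAM module isn't named in the log; as a purely hypothetical sketch, wiring any such module into sshd looks like the following, with pam_keystone.so standing in for whatever module is actually used:)

    # /etc/ssh/sshd_config - hand interactive auth to PAM
    UsePAM yes
    ChallengeResponseAuthentication yes

    # /etc/pam.d/sshd (fragment) - module name is hypothetical
    auth  sufficient  pam_keystone.so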
09:13:54 <priteau> I am not familiar with the authn/authz in CephFS. If users can get root, is it a security risk?
09:14:21 <oneswig> to their own project, quite likely.
09:14:48 <oneswig> We do pools and keys per project to contain the risk
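(On recent Ceph releases, this kind of per-project containment can be expressed server-side; a sketch with illustrative names, as the exact commands aren't stated in the log:)

    # dedicated data pool for the project, attached to the filesystem
    ceph osd pool create myproject_data 64
    ceph fs add_data_pool cephfs myproject_data
    # client key whose capabilities are confined to the project's directory
    ceph fs authorize cephfs client.myproject /myproject rw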
09:15:10 <priteau> And each bare-metal node is set up to talk to CephFS on behalf of only one project at a time?
09:15:16 <oneswig> On this system, the project admins deploy the 'appliance' and users log in as users
09:15:34 <oneswig> priteau: correct - the instances only serve one project
09:15:39 <priteau> I see
09:16:51 <priteau> I would like to investigate something like that over the summer. It would be a good workaround for the lack of Cinder volumes for Ironic
09:17:08 <oneswig> We are seeing OK performance for large writes, but scattered I/O and metadata performance are currently pretty poor.  There's a study to be done on how and why
09:17:43 <oneswig> priteau: true - and using this key users can access ceph rbd direct too.  Not that anyone does at present
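(Assuming the project key also carries caps on an RBD pool, direct block access from an instance would look roughly like this; pool and image names are illustrative:)

    # create, map, and use a block device with the project's client id
    rbd create --size 10240 myproject_pool/scratch --id myproject
    rbd map myproject_pool/scratch --id myproject
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/scratch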
09:18:27 <oneswig> It works for an appliance model, where automated middleware deployment can include fiddly site config
09:18:57 <oneswig> What's the news on the next phase of Chameleon?
09:21:49 <oneswig> priteau: Cinder volume support for Ironic is still undergoing active development from what I've seen.  Don't give up on that just yet...
09:23:41 <priteau> oneswig: nothing new regarding the next phase at this point
09:24:08 <priteau> In the meantime we're starting work on upgrading our venerable Liberty installation to Ocata!
09:24:28 <oneswig> Ah interesting.  How is it deployed?
09:24:43 <priteau> RDO + OpenStack Puppet
09:25:58 <oneswig> Our current project has now filled the system.  It needs a mechanism for resource reservation...
09:26:12 <oneswig> Ever heard of Blazar in Kolla?
09:26:15 <priteau> We'll probably stick to this approach at first to stay in known territory, but I have heard about this thing called Kayobe…
09:26:28 <oneswig> ha!  Step into my office :-)
09:27:07 <oneswig> It's still going well for us.  Quite happy with it.  Adding Barbican to our deployment for example just worked
09:27:37 <priteau> I don't know of anyone using Blazar in Kolla, but if we get to investigate Kolla, I will let you know
09:28:08 <priteau> Anyway, Blazar is following the same patterns as other OpenStack projects, so it shouldn't be complex to build images for it
09:28:47 <oneswig> If the resource contention on our system gets more acute, we could make a case for integrating Blazar.  Are you still carrying local patches for it, or is everything upstream?
09:28:58 <priteau> #link https://blueprints.launchpad.net/kolla/+spec/blazar-images
09:29:48 <oneswig> only last month!  Good to see some activity here.  Will definitely follow up on that.
09:31:03 <priteau> Chameleon still has many patches locally (because we're still running Liberty), but upstream is fixing lots of issues. We're aiming for a solid release for Pike.
09:31:54 <oneswig> I'd be interested to help with any investigation of kayobe.  Let us know if you get the chance.
09:32:06 <priteau> Will do
09:33:07 <oneswig> OK, I don't think I had anything else to raise.  Any other news from you?
09:35:35 <priteau> We had good discussions at the summit about preemptible instances. Because Blazar already needs to terminate instances on nodes at the end of a reservation, it might take responsibility for being a "reaper" service in general.
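(For context, a Blazar reservation is a lease with a hard end date, after which instances on the reserved nodes are terminated - hence the "reaper" idea. A Chameleon-style example, with dates and names illustrative; the CLI was previously called climate:)

    # reserve exactly one physical node for 24 hours
    blazar lease-create --physical-reservation min=1,max=1 \
        --start-date "2017-06-01 09:00" --end-date "2017-06-02 09:00" \
        my-lease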
09:36:37 <oneswig> That sounds good.  Does it overlap with work already done for OPIE in that respect?
09:38:22 <priteau> I haven't looked at OPIE yet, but I assume it would overlap. But Nova core devs want the reaper activity done outside of nova-{conductor,scheduler}
09:39:04 <priteau> I don't think we'll have time to actively work on this in the Pike cycle though
09:41:56 <oneswig> sorry - phone call
09:42:22 <priteau> no problem
09:42:30 <oneswig> In the Pike cycle I'm very interested in the scheduler properties and how they apply to Ironic
09:43:12 <oneswig> Must have good potential for what you can do on Chameleon as well - it's a natural place for us to attach BIOS/RAID changes - the kind of "deep reconfigurability" Chameleon talks about?
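(The Pike-era pattern is to schedule Ironic nodes by custom resource class: tag the node, then create a flavor that claims exactly one unit of that class and zeroes out the standard resources; names below are illustrative:)

    openstack baremetal node set $NODE_UUID --resource-class baremetal.gold
    openstack flavor set my-bm-flavor \
        --property resources:CUSTOM_BAREMETAL_GOLD=1 \
        --property resources:VCPU=0 \
        --property resources:MEMORY_MB=0 \
        --property resources:DISK_GB=0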
09:47:26 <priteau> We've actually had a few requests for changing BIOS settings
09:47:26 <priteau> Mostly making sure SR-IOV is enabled
09:47:26 <b1airo> belated greets
09:47:26 <priteau> Hi b1airo
09:47:45 <oneswig> Hi b1airo how's things
09:48:06 <b1airo> good good - school open night scavenger hunt done
09:48:48 <oneswig> excellent.  It's just me and priteau so far and we were catching up on our bare metal projects
09:48:57 <b1airo> how do folks feel about pushing this slot out one hour later?
09:49:04 <b1airo> ah ok, i thought it seemed quiet
09:49:13 <oneswig> Works for me.
09:49:36 <oneswig> Adam's out at a cloud workshop and sent his apologies.
09:49:36 <b1airo> i'm quite keen to get ironic going at monash sometime soon
09:50:05 <b1airo> wondering if anyone has experience using it in a v1 Cell ...?
09:50:41 <oneswig> b1airo: not that I'm aware of, although doesn't CERN have a cell of Ironic and weren't they running cells v1?
09:51:08 <priteau> I am fine with changing the time slot
09:52:25 <b1airo> yeah i am sure CERN has at least played with Ironic but not sure if they have done it in their prod cloud
09:53:23 <oneswig> b1airo: can you put a mail out to os-ops on the time change?  It's worth getting a few more opinions
09:54:10 <b1airo> yeah certainly will
09:54:16 <oneswig> b1airo: for a presentation I was looking for an image of M3 or Monarch infrastructure - there's nothing I could find, how come?
09:54:33 <b1airo> other option would be to pull it back a couple of hours, but i haven't looked at how that works TZ-wise
09:55:02 <b1airo> oh.. like a picture of physical kit?
09:55:30 <b1airo> love a bit of rack pr0n
09:55:34 <oneswig> Yes - usual kind of stuff, for a slide on "prior art"
09:56:00 <oneswig> Is that kind of filth banned in Australia or something??
09:56:07 <b1airo> you're presenting something soon?
09:56:15 <oneswig> yesterday :-)
09:56:21 <b1airo> haha, yeah we are very conservative here don't you know
09:56:46 <b1airo> i will have to dig a bit, i don't actually have any such imagery for M3 on hand
09:57:12 <oneswig> no problem.  It was much easier to find an image of Chameleon...
09:57:54 <b1airo> 4 (now 5) heterogeneously populated racks are a bit hard to make look nice for those purposes i guess
09:58:17 <oneswig> OK we are nearly out of time - anything else to cover?
09:58:49 <priteau> Wasn't there something about the IRC channel?
09:59:19 <oneswig> Ah - true.  We have scientific-wg up and running and I believe the infra is eavesdropping on it and taking logs too.
09:59:31 <b1airo> ah great
09:59:43 <oneswig> When the time change goes through, we could also do a channel change to meet on #scientific-wg
10:00:03 <oneswig> seems like a good idea to me.
10:00:21 <priteau> oneswig: I remember seeing discussion about doing meetings in non #openstack-meeting-* channels with arguments against it
10:00:40 <oneswig> priteau: got link to it?
10:00:48 <priteau> I am looking for it
10:01:00 <oneswig> ah - time's up, we should clear off.  Follow up afterwards on it?
10:01:10 <oneswig> Might be worth just having the discussion on the list
10:01:20 <b1airo> yep, i'll email...
10:01:27 <oneswig> #endmeeting