09:01:53 <aspiers> #startmeeting ha
09:01:54 <openstack> Meeting started Mon Feb 15 09:01:53 2016 UTC and is due to finish in 60 minutes.  The chair is aspiers. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:01:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:01:58 <openstack> The meeting name has been set to 'ha'
09:02:05 <aspiers> let's start anyway :)
09:02:47 <aspiers> #info we have apologies from NTT guys for not being here because they are at the European Ops meetup in Manchester
09:02:59 <aspiers> I wish I could have gone to that :-(
09:03:08 <aspiers> I think they might be moderating a session on compute node HA there
09:03:36 <aspiers> #topic Current status (progress, issues, roadblocks, further plans)
09:03:50 <aspiers> I'll go first since I will be quick :)
09:04:00 <aspiers> I got back from holiday last Wed
09:04:06 <aspiers> so I'm still catching up on stuff
09:04:26 <aspiers> but SUSE has made a lot of progress recently on compute node HA - we now have the whole setup totally automated via Crowbar/Chef
09:04:42 <aspiers> so it's a few simple mouse clicks to set up the whole thing, which is pretty cool
09:04:53 <aspiers> also on Friday I met 4 NTT guys including masahito
09:05:05 <aspiers> they came to London before the Ops meetup in Manchester
09:05:34 <aspiers> there were 3 developers and one operator, and we talked for a few hours about compute node HA, masakari etc.
09:05:39 <aspiers> it was a very useful meeting
09:05:58 <_gryf> aspiers, any output in textual form out of this?
09:06:09 <aspiers> _gryf: not yet but I will try to write something up
09:06:14 <_gryf> aspiers, cool
09:06:29 <aspiers> #info aspiers met 4 NTT guys working on compute node HA in London
09:06:47 <aspiers> #info (they work in Japan, but we met in London!)
09:06:59 <aspiers> while we were talking, I had an idea
09:07:14 <aspiers> to draw some kind of strategic map for community compute node HA
09:07:30 <aspiers> which outlines the various failure modes and challenges on one diagram
09:07:49 <aspiers> and shows how the current approaches cover them and where there are gaps
09:07:56 <bogdando> aspiers, and perhaps update the spec as well
09:08:03 <aspiers> bogdando: yes, great idea
09:08:29 <aspiers> currently SUSE is finalising our release 6 of SUSE OpenStack Cloud
09:08:47 <aspiers> it should be coming pretty soon, and will have full support for compute node HA
09:08:52 <aspiers> so that's it from my side
09:09:07 <aspiers> maybe bogdando next?
09:09:10 <bogdando> I updated the spec https://review.openstack.org/#/c/257809/ and reflected the state of things, as I see it. Also, we have to bring more attention to this in the cross-project meetings
09:09:34 <bogdando> my last attempt to do so in the open discussion section seemed futile
09:09:38 <aspiers> great idea
09:09:44 <aspiers> keep trying ;)
09:10:41 <aspiers> #info bogdando is working on the automatic evacuation spec
09:10:59 <bogdando> and please make reviews, some things may not be correct
09:11:31 <_gryf> actually, it was Timofey who brought this topic up - I had a little conversation with him during the nova midcycle.
09:11:47 <_gryf> bogdando, are you guys working together on this topic?
09:12:23 * _gryf reviewed the bp last week :)
09:12:31 <aspiers> so we now have a user story and a spec - that's cool
09:12:55 <bogdando> _gryf, I'm only trying to help
09:13:02 <bogdando> no code writing, yet
09:13:02 <_gryf> bogdando, that's ok :)
09:13:25 <aspiers> I'll try to contribute to these too
09:13:46 <aspiers> _gryf: anything new from your side?
09:14:04 <_gryf> aspiers, besides the reviews, nope.
09:14:10 <aspiers> ok
09:14:20 <aspiers> ddeja: anything you want to report?
09:14:29 <ddeja> aspiers: yes
09:14:59 <ddeja> I was working on preparing a fence agent for running evacuation
09:15:27 <ddeja> I would have finished it last week, but a new topic in Mistral occurred
09:15:45 <ddeja> #link https://review.openstack.org/#/c/279018/4
09:16:16 <ddeja> _gryf and I had a mail conversation with the Mistral PTL
09:16:25 <ddeja> and this change is an output
09:16:38 <aspiers> cool
09:16:57 <ddeja> they basically decided not to wait for the oslo team
09:17:40 <ddeja> to bring the ACK-after-message-is-processed feature, and to implement it themselves
09:17:52 <ddeja> which is cool for us, since this may fix the big Mistral HA problem
09:18:05 <ddeja> and Mistral HA was a big concern at the last meeting :)
09:18:16 <aspiers> nice!
09:18:30 <_gryf> the other thing to mention is that the idea behind the self-implemented ack system in Mistral
09:18:40 <ddeja> nevertheless, I'm trying to test this change to see if it helps, and give feedback to the Mistral team
09:18:41 <_gryf> is to keep oslo.messaging in place
09:19:14 <ddeja> and I'll continue work on the fence agent
09:19:17 <_gryf> so that if the oslo team accepts the patch for oslo itself, only a small change would be needed to use it within oslo
09:19:18 <aspiers> #info ddeja working on fence agent for running evacuation
09:19:20 <ddeja> that's all from my side
09:19:46 <_gryf> and it will not block Mistral from moving forward in the meantime
09:19:54 <aspiers> #info some progress on mistral HA
09:21:14 <aspiers> is there any doc yet on the mistral compute node HA architecture?
09:21:31 <ddeja> It's under construction
09:21:33 <aspiers> ok
09:21:41 <aspiers> I'm wondering what the fence agent does exactly
09:22:03 <aspiers> is it run by Pacemaker?
09:22:12 <ddeja> for the PoC, it will only send an HTTP call to Mistral so that the evacuation process is started
09:22:16 <ddeja> aspiers: yes
09:22:20 <beekhof> doh, meeting
09:22:31 <_gryf> o, hi beekhof
09:22:36 <aspiers> hey beekhof
09:22:45 <aspiers> ddeja: so that is what triggers the mistral workflow?
09:22:49 * beekhof has a keyword alert on "pacemaker" :)
09:23:00 * aspiers makes a mental note of that
09:23:05 <ddeja> aspiers: yes, it's the most simple scenario
09:23:37 <aspiers> ddeja: and that fence agent gets run after the normal fencing of the compute node?
09:23:51 <ddeja> aspiers: yup
09:23:58 <aspiers> ddeja: i.e. 2 fencing devices via fencing_topology?
09:24:09 <ddeja> exactly
09:24:15 <aspiers> cool, that makes sense
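For context, a minimal sketch of what such a second-level fence agent could look like in Python, assuming the fencing_topology setup described above (real power fencing first, then this agent) and the standard Mistral v2 executions API. The workflow name "evacuate_host", the endpoint URL and the argument handling are illustrative placeholders, not the actual PoC code:

    #!/usr/bin/env python
    # Sketch of a fence agent that does not power anything off itself; it only
    # asks Mistral to start an evacuation workflow for the already-fenced host.
    import json
    import sys

    import requests

    MISTRAL_EXECUTIONS = "http://mistral.example.com:8989/v2/executions"  # placeholder endpoint
    WORKFLOW = "evacuate_host"  # hypothetical workflow name


    def trigger_evacuation(host, token):
        """POST to the Mistral v2 executions API to kick off evacuation of 'host'."""
        body = {
            "workflow_name": WORKFLOW,
            # Mistral accepts the workflow input as a JSON-encoded string
            "input": json.dumps({"host": host}),
        }
        resp = requests.post(
            MISTRAL_EXECUTIONS,
            headers={"X-Auth-Token": token, "Content-Type": "application/json"},
            data=json.dumps(body),
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["id"]


    if __name__ == "__main__":
        # Pacemaker fence agents normally read the target from stdin options;
        # for brevity this sketch takes the host and a keystone token as arguments.
        print("started Mistral execution %s" % trigger_evacuation(sys.argv[1], sys.argv[2]))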
09:24:27 <aspiers> beekhof: you wanna report any status?
09:24:50 <beekhof> nothing related to here, with the possible exception of getting sbd working on remote nodes
09:25:20 <beekhof> which might be interesting later on
09:28:54 <aspiers> pacemaker pacemaker pacemaker pacemaker
09:28:54 <aspiers> :)
09:28:54 <aspiers> I guess the kids get the highest priority interrupt in the evening :) ok, let's talk about Austin quickly
09:28:54 <aspiers> #topic Austin summit
09:28:54 <aspiers> so we have some HA talks submitted
09:28:54 <aspiers> #link ddeja and aspiers submitted https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7327
09:28:54 <aspiers> please vote for it!
09:28:54 <aspiers> I also submitted https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7329
09:28:54 <aspiers> which is about automated deployment of HA in general
09:28:54 <aspiers> the intention is to share our experiences of automation with other vendors
09:29:11 <aspiers> if that's interesting then please vote for that too
09:29:18 <aspiers> anyone else submit any talks?
09:29:28 <beekhof> nope :)
09:29:39 <aspiers> IIRC masahito may have submitted one for masakari but I'm not 100% sure
09:30:14 <aspiers> anything else to mention about Austin?
09:31:32 <aspiers> if not, let's move on to AOB
09:31:39 <aspiers> #topic AOB (Any Other Business)
09:31:54 <aspiers> so one interesting implementation detail which arose during our discussions on Friday ...
09:32:04 <aspiers> I was asking NTT why they didn't use Pacemaker for process monitoring
09:32:14 <aspiers> currently they have their own mechanism for monitoring libvirtd etc.
09:32:49 <aspiers> one answer was that if libvirtd fails, it makes sense to do a nova service-disable on that node
09:33:08 <aspiers> to stop the scheduler from starting any new instances on it
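As a rough illustration (not the NTT tooling itself), the service-disable step could be done via python-novaclient; the credentials, endpoint and host name below are placeholders:

    # Sketch: mark nova-compute on the failed host as disabled so the scheduler
    # stops placing new instances there; already-running VMs are unaffected.
    from novaclient import client

    # placeholder credentials / keystone endpoint
    nova = client.Client("2", "admin", "password", "admin",
                         "http://keystone.example.com:5000/v2.0")

    nova.services.disable("compute-1.example.com", "nova-compute")
    # or, recording a reason for operators:
    # nova.services.disable_log_reason("compute-1.example.com", "nova-compute",
    #                                  "libvirtd failure detected")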
09:33:28 <beekhof> extra policies basically
09:33:33 <aspiers> but at the same time, in some clouds the policy would dictate a preferred action of keeping the VMs running
09:34:01 <aspiers> beekhof: yeah
09:34:07 <aspiers> and that gave rise to a follow-on discussion about policies and action matrices etc.
09:34:15 <aspiers> another example is if the admin network fails
09:34:24 <aspiers> but everything else still works
09:34:38 <aspiers> also in that case you'd probably want to keep the VMs running but remove the node from nova-scheduler
09:34:50 <aspiers> it depends on the nature of the VMs on that node of course
09:34:55 <aspiers> and on how many
09:35:12 <aspiers> but a typical policy might be to avoid the disruption of instantly fencing that node
09:35:21 <aspiers> and instead notify the cloud ops and wait for manual evacuation
09:35:58 <aspiers> #info aspiers and NTT discussed failure modes where the preferred course of action might be notifying cloud ops to do *manual* evacuation
09:36:10 <aspiers> beekhof: does Pacemaker support that kind of thing?
09:36:23 <aspiers> it seems a shame not to have Pacemaker doing process monitoring
09:37:04 <aspiers> it's a strange case, because you'd still want Pacemaker to ensure that libvirtd is started before nova-compute
09:37:24 <aspiers> implying a standard order constraint "libvirtd before nova-compute"
09:37:53 <aspiers> if libvirtd dies, it might be OK to shut down nova-compute but not kill the VMs
09:38:15 <aspiers> anyway we don't have to discuss that now, but I just wanted to raise it
09:38:20 <aspiers> we can cover it in the user stories and specs
09:38:28 <ddeja> one question
09:38:40 <ddeja> if libvirt is dead, how are we supposed to kill VMs?
09:38:41 <beekhof> I'm not bothered, it's very domain-specific
09:38:47 <aspiers> ddeja: fencing
09:39:02 <ddeja> fencing the whole node?
09:39:08 <aspiers> yes
09:39:13 <ddeja> oh, ok
09:39:15 <beekhof> we can have a manual fencing target... but then you're blocking everything else until a human shows up
09:39:30 <aspiers> beekhof: that might be desirable in some cases
09:39:30 <ddeja> not cool
09:39:30 <_gryf> right
09:39:47 <beekhof> i guess you could write a new agent that turned off libvirt but didn't power off the node
09:39:57 <beekhof> aspiers: almost never
09:40:14 <aspiers> beekhof: well, the situation I'm talking about doesn't involve fencing
09:40:19 <beekhof> that turns the human into a single point of failure
09:40:33 <aspiers> beekhof: it's actually a degradation of service not an outage
09:40:47 <beekhof> well you can write an RA that does anything
09:40:51 <aspiers> I mean, you can no longer control VMs on that compute node, but they are still running OK
09:40:53 <beekhof> and you have on-fail=block
09:41:08 <beekhof> so unrelated resource trees can still be recovered
09:41:20 <aspiers> so the control plane has an outage, but the user workload doesn't
09:41:48 <aspiers> in that case live migration of VMs is more appropriate
09:42:08 <aspiers> but if there's no shared storage then that might cause brief workload outage
09:42:17 <aspiers> so again it comes down to policy and the operator's decision
09:42:20 <ddeja> if libvirt is dead, you can't live-migrate instances, AFAICT
09:42:28 <beekhof> well, it might, but you'd never know
09:42:31 <aspiers> ddeja: oh yeah, good point
09:42:39 <_gryf> aspiers, are you talking about live migration using virsh, or the nova facility?
09:43:24 <aspiers> _gryf: that's kind of the point. it's hard for a computer to choose the right one
09:43:27 <_gryf> right, exactly what ddeja said
09:43:27 <beekhof> ok, i gotta go again
09:43:34 <aspiers> bye
09:43:37 <_gryf> beekhof, :)
09:44:08 <aspiers> well, let's include a user story where libvirtd dies, and we can figure out a good spec for how to handle it
09:44:15 <aspiers> like I said, no need to solve it in this meeting ;-)
09:44:20 <ddeja> ok
09:44:28 <aspiers> one other important piece of info
09:44:36 <aspiers> we found some serious stability bugs in Pacemaker's remote code
09:44:49 <aspiers> they are now fixed upstream
09:44:53 <aspiers> #info SUSE found some serious stability bugs in Pacemaker's remote code
09:45:06 <aspiers> #link https://github.com/ClusterLabs/pacemaker/pull/908
09:45:11 <aspiers> #link https://github.com/ClusterLabs/pacemaker/pull/909
09:45:15 * beekhof will look
09:45:20 <aspiers> beekhof: they're merged
09:45:34 <aspiers> beekhof: you probably already saw 10 days ago
09:45:40 <beekhof> versioning will change a bit for remote nodes too
09:45:53 <aspiers> beekhof: oh, what do you mean by versioning?
09:48:15 <aspiers> ok never mind :) anything else from anyone?
09:50:34 <aspiers> I guess not, so let's close for today
09:50:40 <aspiers> thanks all, and bye for now!
09:50:48 <aspiers> see you on #openstack-ha!
09:51:14 <_gryf> cu!
09:51:33 <ddeja> hi
09:51:42 <aspiers> #endmeeting