20:00:14 <shardy> #startmeeting heat
20:00:15 <openstack> Meeting started Wed Jun 19 20:00:14 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:19 <openstack> The meeting name has been set to 'heat'
20:00:23 <shardy> #topic rollcall
20:00:31 <shardy> hey, who's around?
20:00:35 <adrian_otto> hi
20:00:36 <therve> Hey there
20:00:37 <stevebaker> me
20:00:39 <zaneb> o/
20:00:53 <jpeeler> hi
20:00:59 <asalkeld> o/
20:01:14 <SpamapS> o/
20:01:25 <andrew_plunk> hello
20:01:38 <radix> hello
20:01:40 <tspatzier> Hi
20:01:50 <maksimov_> o/
20:02:11 <shardy> ok, hi all, lets get started!
20:02:18 <shardy> #topic Review last week's action
20:02:32 <shardy> So there was only one, and not sure if sdake is around?
20:02:50 <shardy> #info sdake to raise BP re gold images
20:03:29 <m4dcoder> o/
20:03:32 <shardy> I'm not sure if that happened, will check later
20:03:54 <shardy> #topic h2 blueprint status
20:04:20 <shardy> #link https://launchpad.net/heat/+milestone/havana-2
20:04:37 <shardy> So ttx observed that we appear to be behind atm
20:04:41 <kebray> o/
20:04:50 <shardy> are there BPs which are marked not started, which are actually in progress?
20:05:01 <randallburt> hey, sorry I'm late
20:05:13 <shardy> hi randallburt, no worries
20:05:23 <shardy> just discussing the h2 BP status
20:05:33 <stevebaker> SpamapS: should I assign this to you? https://blueprints.launchpad.net/heat/+spec/native-in-instance-tools
20:05:49 <SpamapS> stevebaker: yeah its what I'm working on anyway :)
20:05:56 <shardy> if people are working on stuff and the BP is still "Not Started", please change it or it's liable to get bumped to h3
20:06:04 <randallburt> wrt my stuff, I'm making review comment changes and waiting on acceptance before submitting another change
20:06:09 <shardy> likewise, if you know it's
20:06:19 <shardy> not going to land, please bump to h3
20:06:38 <shardy> anyone got anything to add, status or anything?
20:07:20 <shardy> SpamapS: is that likely to land for h2?
20:07:26 <randallburt> need to have attribute schema from me and provider resource from angus before moving forward
20:07:30 <bgorski_> o/
20:07:34 <SpamapS> shardy: yes
20:07:38 <adrian_otto> randallburt: does that mean that you could make faster progress if something changed with respect to acceptance?
20:07:40 <shardy> SpamapS: cool
20:08:10 <shardy> randallburt: Yeah, I reviewed the attribute schema today and mostly looks good, so lets get that in soon
20:08:13 <randallburt> currently blocked until those are accepted
20:08:14 <SpamapS> shardy: It will likely only serve tripleo's purposes, but the framework should allow everyone to migrate eventually. :)
20:08:18 <randallburt> shardy:  will do
20:08:40 <shardy> anyone got any other concerns or comments on h2?
20:08:52 <shardy> I know the review cycle has been somewhat long recently
20:09:24 <shardy> so apologies from me, and please everyone (including non-heat-core) do lots of reviews ;)
20:09:25 * SpamapS will be diving into the review queue quite a bit today
20:09:27 <zaneb> good news, we have an agenda item for that :)
20:09:36 <randallburt> :D
20:09:47 <shardy> zaneb: yup, we're about to get to that :D
20:10:14 <shardy> #topic Gerrit approval policy
20:10:22 <shardy> all yours zaneb ;)
20:10:32 <zaneb> so, it's not clear to me that we are getting a lot of value from requiring 2 +2's before approving a patch
20:10:41 <zaneb> as I understand it other OpenStack projects require 2 core developers with different affiliations to +2 before approving
20:10:46 <zaneb> so we're not really following that rule anyway
20:10:51 <shardy> does anyone know if it's mandatory now we're integrated?
20:11:13 <zaneb> I don't think that would be even feasible at this stage of the project
20:11:18 <radix> what about just having one +2 from the opposite affiliation of the author?
20:11:27 <randallburt> poor therve ;)
20:11:33 <radix> hehe
20:11:38 <zaneb> radix: everyone is still blocked on therve and SpamapS
20:11:56 <radix> hehe, yeah
20:12:03 <therve> Yeah we can't really afford it right now
20:12:03 <zaneb> so, my point here is *not* to make the process *more* difficult!
20:12:04 <asalkeld> I honestly don't think the affilation is the issue
20:12:16 <stevebaker> me neither
20:12:16 <therve> I'm +1 on keeping things as they are
20:12:18 <zaneb> asalkeld: totally agree
20:12:24 <zaneb> here is my suggestion:
20:12:30 <zaneb> currently the +1 is effectively meaningless
20:12:30 <asalkeld> it's just that we might want to be a bit more flexible
20:12:37 <zaneb> I think we should treat +1 as if it means what it says: "looks good to me, but someone else must approve"
20:12:43 <zaneb> core reviewers decide whether to use +1 or +2, and when doing +2 approve at the same time
20:12:53 <zaneb> so e.g. if the code looks good but you're not an expert in that area, +1
20:13:00 <therve> So effectively 1 core + 1 other (maybe core)
20:13:02 <zaneb> if there were a bunch of comments on the previous patch and they've all been addressed and look good, maybe just +2 & approve
20:13:09 <zaneb> use appropriate judgement according to the situation
20:13:35 <zaneb> I'm not really concerned about the review time per se, but the number of rebases required for each patch is seriously slowing things down
20:13:38 <asalkeld> sounds good to me
20:13:40 <zaneb> so I think it is probably having a non-linear effect
20:13:41 <therve> It sounds reasonable
20:14:00 <SpamapS> zaneb: +1 is quite meaningful for us tracking non-core reviewers to make sure they understand what we expect from reviewers.
20:14:22 <shardy> SpamapS: +1!
20:14:32 <zaneb> SpamapS: agree that +1 is meaningful from non-core reviewers
20:14:38 * radix hopes so
20:14:39 <zaneb> was that your point?
20:14:46 <SpamapS> And just leaving +2 with no approval means I don't have to think "is Angus core?"
20:14:52 <stevebaker> possibly other projects have many more reviews in-flight, but maybe the rebase effort is less because the code bases are more mature
20:14:53 <shardy> Definitely, if non-core reviewers can give good feedback consistently it's hugely valuable, and makes life much easier
20:15:30 <therve> I think the current situation is more that the features are a bit tricky to grasp and nobody commits to approve
20:15:56 <asalkeld> yea, approve fear
20:15:57 <zaneb> therve: there is a bit of that
20:16:10 <shardy> therve: also different people spot different things
20:16:23 <shardy> I don't think we should routinely approve patches with just one reviewer
20:16:28 <shardy> unless really trivial
20:16:33 <zaneb> tbh I think if it were quicker to get a fix in when you spot something after the fact, people might not be so slow to approve
20:16:35 <therve> One thing is that if someone did -1 a branch, they should look back again
20:16:41 <therve> It doesn't seem to be always the case
20:16:42 <shardy> but if you have several +1's and several revisions, maybe it makes sense
20:17:18 <therve> shardy, Well approve means reviewing, right?
20:17:23 <zaneb> shardy: but if it has been through 15 patchsets and everyone is agreed
20:17:27 <zaneb> just merge it
20:17:30 <SpamapS> So it sounds to me like we're just bottlenecked on reviewers... have we polled the Heat dev community to see if there are those with active aspirations that we may not have mentored appropriately?
20:17:36 <zaneb> don't wait around for a second core reviewer
20:17:51 <shardy> therve: Yeah, I'm saying we shouldn't routinely ack via one reviewer doing review/approve without other eyes on the patch
20:17:54 <randallburt> shardy:  I think that and what zaneb said earlier has a lot of goodness. if there are lots of +1's and some comments that are addressed in the latest of several patches, seems ok to approve with fewer constraints
20:18:00 <radix> SpamapS: I plan on reviewing more, I haven't done many in the past couple days but it's something I fully intend to do
20:18:15 <radix> I'm trying to get a better understanding for the codebase (and some recent design discussions have helped a lot)
20:18:18 <randallburt> zaneb:  +111 because I can't type and read
20:18:22 <zaneb> SpamapS: I don't think we're lacking reviewers. we're lacking reviewers with deep knowledge of the code base
20:18:51 <shardy> radix: reviews are also a great way to learn the code :)
20:18:56 <radix> yep, definitely
20:19:19 <SpamapS> zaneb: true, and I agree, a big part of that is that the code base is young.
20:19:31 <zaneb> yeah, that problem should fix itself over time
20:19:40 <SpamapS> No reason to panic, just means we have to try harder on reviews. :)
20:20:08 <asalkeld> sounds like we all agree?
20:20:10 <therve> Yeah it's worth trying to smooth things out a bit
20:20:20 <therve> We can reevaluate if things go south anyway
20:20:33 * randallburt starts planning patch bomb
20:20:35 <shardy> do we need to have a vote, or are we all agreed anyway?
20:20:37 <zaneb> does anyone from core disagree?
20:20:58 <stevebaker> So in summary, core reviewers can use their discretion on whether to approve?
20:21:11 <zaneb> yes
20:21:12 <jpeeler> right, sounds good
20:21:28 <shardy> stevebaker: yes
20:21:36 <SpamapS> shardy: perhaps a motion on the mailing list
20:21:50 <SpamapS> tho
20:21:56 <stevebaker> and if the change breaks anything, they can be soundly beaten
20:21:58 <SpamapS> I think we have a super majority already
20:22:02 <zaneb> SpamapS: good idea
20:22:11 <shardy> SpamapS: Yep, I'll send one tomorrow - I'm still not clear if there's openstack-wide policy we should observe or not
20:22:19 <asalkeld> SpamapS, that's more likely to just generate noise
20:22:21 <zaneb> other projects might have some input for us also
20:22:21 <shardy> stevebaker: lol
20:22:30 <SpamapS> Yeah I'm reversing myself.
20:22:38 <SpamapS> ML is not necessary, I really think we can all just agree now.
20:22:47 <zaneb> stevebaker: if the change breaks anything it can be reverted just as quickly :)
20:22:54 <shardy> obviously, let's all be sensible, but I for one am pretty tired of rebasing huge patch queues for weeks on end
20:23:02 <shardy> hopefully this may help a bit :)
20:23:44 <shardy> SpamapS: Ok, I may just find out re approval policy off list then
20:24:14 <SpamapS> shardy: yeah I'm sure we are granted at least some autonomy there, but I eagerly await your confirmation of that. :)
20:24:28 * SpamapS must step afk for a few, brb
20:24:54 <shardy> Ok, anything else on this, or shall we move on to open discussion?
20:24:56 <zaneb> shardy: I don't see how they can stop us ;)
20:25:05 <zaneb> move on
20:25:14 <shardy> zaneb: I just don't want to get told off ;)
20:25:22 <shardy> #topic open discussion
20:25:24 <zaneb> pfft
20:25:32 <shardy> anyone have anything?
20:25:39 <radix> thanks for all the responses to that thread :)
20:25:47 <zaneb> just fyi all, I'll be away next week and the week after
20:26:12 <therve> stevebaker, Did you start something for fixing autoscale instance list?
20:26:52 <stevebaker> therve: not yet. I was going to write an autoscaling tempest test first
20:27:07 <stevebaker> therve: did you want to have a go?
20:27:08 <asalkeld> therve, I did a while back, but can't find the patch:(
20:27:13 <therve> stevebaker, OK cool
20:27:23 <therve> I was mostly wondering if you had a path
20:27:39 <therve> You mentioned metadata, but zaneb didn't seem to like it
20:28:10 <asalkeld> don't worry therve I'll push approve really fast;)
20:28:12 <stevebaker> this raises a good question, *where* can we store heat specific state for running resources?
20:28:23 <andrew_plunk> therve: are you guys talking about storing the lists of instances within a scalinggroup?
20:28:24 <therve> Ah :)
20:28:31 <zaneb> stevebaker: I think we need a new catch-all database column ;)
20:28:40 <zaneb> metadata is being seriously abused atm
20:28:43 <therve> andrew_plunk, Yeah, bug 1189278
20:28:44 <uvirtbot> Launchpad bug 1189278 in heat "Autoscaling max limit is capped by length of resource.nova_instance column" [High,Confirmed] https://launchpad.net/bugs/1189278
20:28:44 <shardy> andrew_plunk: yes, or more precisely how to stop doing it ;)
20:28:52 <asalkeld> runtime_data
20:29:10 <shardy> Yeah, we should stop abusing the metadata column really
20:29:20 <radix> could we create tables as needed for resources?
20:29:30 <stevebaker> maybe a key/value tags table
20:29:49 <asalkeld> radix, isn't that going to get messy
20:30:00 <shardy> for autoscaling groups, can't we just store the instance count, since the prefix is known and consistent?
20:30:09 * asalkeld scared of schema madness
20:30:28 <asalkeld> shardy, there might be holes?
20:30:33 <shardy> oh no, we've got the random suffix now
20:30:37 <radix> asalkeld: well... I donno. It depends how many resources we have
20:30:47 <shardy> asalkeld: would there?
20:30:56 <zaneb> shardy: the random suffix is irrelevant
20:30:59 <radix> asalkeld: I've worked with systems like that, it's not so bad if you have good processes around schema management
20:31:00 <asalkeld> instance failed?
20:31:04 <zaneb> currently we don't support holes
20:31:10 <zaneb> but we'll need to be able to eventually
20:31:16 <radix> the question is, do you need more structure than k/v, or is it okay by itself?
20:31:35 <stevebaker> the value can be json
20:31:36 <shardy> zaneb: Ok, may as well improve the status-quo then
20:31:48 <sdake> o/
20:31:51 <asalkeld> I am in favour of a stack-like class that can have proper resource entries
20:31:52 <zaneb> stevebaker: +1
20:32:13 <andrew_plunk> asalkeld: +1
20:32:14 <zaneb> asalkeld: that also sounds like it would work
20:32:20 <jasond> anything to avoid db migration script version hell would be good
20:32:22 <randallburt> asalkeld: +1
20:32:35 <therve> stevebaker, It may be problematic if we want to have concurrency, no?
20:32:47 <therve> (Not that it's possible currently)
20:33:34 <therve> asalkeld, So I guess it links to the other subject we could talk: autoscale API
20:33:36 <stevebaker> therve: that would probably depend on how it is used
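To make the key/value idea above concrete, here is a minimal sketch, assuming a SQLAlchemy model alongside Heat's existing ones, of a per-resource data table whose values are JSON-encoded; the table, class and column names (ResourceData, resource_data, the 'resource.id' foreign key) are illustrative only, not an agreed design.

    # Hedged sketch: arbitrary per-resource key/value state, values stored as JSON.
    import json

    import sqlalchemy
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()


    class ResourceData(BASE):
        """Key/value state attached to a resource.

        The value column holds JSON, so e.g. a scaling group could record its
        member instances here instead of abusing the metadata column.
        """
        __tablename__ = 'resource_data'

        id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
        # Assumed to reference Heat's existing resource table.
        resource_id = sqlalchemy.Column(sqlalchemy.String(36),
                                        sqlalchemy.ForeignKey('resource.id'),
                                        nullable=False)
        key = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)
        value = sqlalchemy.Column(sqlalchemy.Text)  # JSON-encoded

        def decoded_value(self):
            return json.loads(self.value) if self.value is not None else None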
20:33:47 * SpamapS returns
20:34:26 <therve> Should we spend (much) time improving the current resources, or should we design something more robust?
20:35:01 <stevebaker> therve: could you be more specific?
20:35:01 <asalkeld> that is the same thing isn't it?
20:35:13 <zaneb> lol
20:35:14 <andrew_plunk> I think if we have an interface for each resource "type" it would be fine
20:35:16 <shardy> therve: we need a native scaling group resource anyway, but it would be good if both AWS-compatible and native resources inherited from the same non-broken base class
20:36:09 <therve> I think some people mentioned that resources are a tight fit for autoscale
20:36:29 <shardy> therve: define tight fit?
20:36:46 <therve> shardy, Limit how clean we could make it?
20:37:08 <radix> hmm
20:37:14 <radix> therve: I think everyone agrees that there should be resources
20:37:17 <shardy> therve: well heat is stack and resource orientated
20:37:43 <radix> but whether the autoscaling logic is actually inside the resource class (living in heat) or if the resource delegates to an autoscaling service separate from heat
20:37:48 <radix> in both cases, there are resources
20:37:50 <therve> Okay
20:37:54 <asalkeld> so we do want autoscale to be an actual rest api
20:37:55 <andrew_plunk> shardy: but because of the way we rely on the properties schema, people implementing multiple resources that have the same function are out of luck in many cases
20:38:00 <shardy> so if people want a non-resource orientated AS solution, I guess it doesn't belong in Heat, but I still don't really understand what the problem is with resources
20:38:15 <therve> Well I guess what I'd like to understand is what would the responsibilities of the AS service
20:38:48 <asalkeld> shardy, I think it's the interactions
20:39:13 <asalkeld> so we need a rest api to cause the scale up/down actions
20:39:18 <radix> asadoughi: yeah, agreed
20:39:19 <radix> er
20:39:36 <radix> asalkeld: yeah, agreed. we need "scale-up" and "scale-down" operations no matter what the end solution is
20:39:39 <shardy> One thing I mentioned a few times is that when we rip out all the alarms/monitoring stuff in favour of ceilometer, then all you have in heat for AS is orchestration of actions triggered by ceilometer
20:39:56 <shardy> so after that happens, I'm not really sure where the separate autoscale service fits
20:40:18 <asalkeld> mabe I can write something up
20:40:28 <radix> shardy: yeah, agreed that monitoring should be external
20:40:31 <asalkeld> and see if it makes sense to otheres
20:40:33 <therve> asalkeld, That'd be wonderful :)
20:40:39 <radix> shardy: and I'm also not super sure of why autoscale needs to be a separate service in that case
20:40:44 <shardy> you could have some other orchestration service triggered by ceilometer, or have heat triggered by some other alarm source, but I'm not sure what use-case people see for the standalone AS service
20:40:52 <asalkeld> shardy, I guess action for me
20:41:13 <shardy> #action asalkeld to write up AS/Ceilometer wiki
20:41:37 <adrian_otto> I have a question about the Ceilometer suggestion before we switch topics
20:41:40 <shardy> asalkeld: if you can clarify the vision for the Heat+CM story that would be great
20:41:56 <asalkeld> shoot adrian_otto
20:42:08 <adrian_otto> for autoscale, measurement of KPI's are needed in order to trigger a scale[up|down] event...
20:42:08 <asalkeld> (as in go ahead)
20:42:11 <asalkeld> :)
20:42:17 <adrian_otto> some of those will be visible to the hypervisor (host)
20:42:29 <adrian_otto> others are only visible from within the guest
20:42:45 <asalkeld> both are possible
20:42:49 <SpamapS> All handled by ceilometer's in-instance tools IIRC
20:42:50 <adrian_otto> is there a strategy to be able to get at both categories of KPI's?
20:43:01 <asalkeld> yes
20:43:04 <adrian_otto> ok, so ceilometer will have some form of an agent?
20:43:15 <adrian_otto> and that agent will be extensible?
20:43:18 <asalkeld> yip, not yet - but in the works
20:43:38 <SpamapS> s/agent/guest api/ .. tools should be really lightweight.
20:43:40 <adrian_otto> will that be based on the open source Virgo agent on github?
20:43:48 <asalkeld> adrian_otto, best if you can use curl
20:43:55 <asalkeld> i.e. super easy
20:44:17 <asalkeld> adrian_otto, I haven't seen that
20:44:29 <adrian_otto> so an in-guest client would talk to an external API running somewhere (a ceilometer service?)
20:44:44 <asalkeld> yes, rest api
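As a rough illustration of the "super easy, curl-able" in-guest push being described: a guest-side snippet POSTing a metric sample to a collector's REST endpoint. The URL, payload fields and token handling below are assumptions for the sketch, not a confirmed Ceilometer API.

    # Hedged sketch of an in-guest metric push; endpoint and fields are assumed.
    import json
    import urllib2

    SAMPLE = [{
        'counter_name': 'app.requests_per_sec',   # hypothetical guest-side KPI
        'counter_type': 'gauge',
        'counter_unit': 'req/s',
        'counter_volume': 42.0,
        'resource_id': 'my-instance-uuid',         # placeholder
    }]

    req = urllib2.Request(
        'http://collector.example.com:8777/v2/meters/app.requests_per_sec',
        data=json.dumps(SAMPLE),
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': 'guest-scoped-token'})
    urllib2.urlopen(req)  # sends a POST, since data is supplied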
20:44:52 <adrian_otto> #link https://github.com/racker/virgo
20:45:07 <kgriffs> +1 for using virgo (I recommended this a couple summits back)
20:45:34 <adrian_otto> it's a pretty compact C based agent, extensible, signing features.
20:45:44 * stevebaker has to go
20:45:51 <asalkeld> adrian_otto, I'll have a look
20:46:09 <adrian_otto> that assumes a pull model rather than a push model as you described.
20:46:20 <adrian_otto> I think it's easier to scale a pull model in large deployments
20:46:23 <SpamapS> Agents that measure things on servers are a dime a dozen. Give me a way to push that information somewhere and I'll be happy. :)
20:46:28 <adrian_otto> think what happens when you have a million guests
20:46:58 <asalkeld> I don't like that it's C
20:46:59 <adrian_otto> you have a collector for the data regardless
20:47:26 <adrian_otto> a python agent would be considerably less memory efficient
20:47:43 <adrian_otto> kgriffs can verify that claim
20:47:43 <SpamapS> adrian_otto: push/pull is the common scaling pattern that goes back to large scale manufacturing automation.. where individuals push to collector caches which are pulled from by centralized systems as resources permit.
20:47:49 <radix> I can too, unfortunately
20:47:55 <therve> radix, Heh heh
20:48:10 <radix> :)
20:48:35 <zb> I'm awake
20:49:02 <radix> one way to partially mitigate memory-hungry agents is by not running them persistently, but instead as a cron job, depending on the type of collection you're doing
20:49:04 <SpamapS> But yes, there are a bazillion ways to collect data about servers. Pull can easily be implemented by having one puller instance which then pushes everything into ceilometer.
20:49:06 <radix> but this is getting pretty off-topic
20:49:11 <sdake> morning zb ;)
20:49:24 * zb mutters at flaky internet
20:49:27 <adrian_otto> SpamapS: yep, I don't really care which way
20:49:30 <shardy> radix: that's exactly what we do atm, basic but works ;)
20:50:00 <adrian_otto> as long as there is an agent that can be easily and efficiently extended to allow the custom KPI's to surface to inform auto-scaling decisions
20:50:14 <SpamapS> Point is, ceilometer has an API for defining what acceptable operating parameters are, and for calling out to a hook URL when those parameters are exceeded.
20:50:22 <shardy> shall we follow up on this discussion on the ML after asalkeld has documented the plan for Heat/Ceilometer integration?
20:50:36 <radix> +1, can't wait for asalkeld's post :)
20:50:37 <SpamapS> which sounds suspiciously like what autoscaling does. :)
20:50:42 <asalkeld> shardy, It was more about autoscaling
20:50:57 <radix> SpamapS: I think the only piece in the middle is defining smarter stuff about what to do when scaling
20:50:59 <asalkeld> ceilo work is in progress
20:51:04 <radix> SpamapS: which isn't totally trivial
20:51:06 <kgriffs> virgo is nice because it's already there, supports signing, and is extremely light weight. All your logic is written in Lua, and it's very efficient at posting metrics since it uses luvit.
20:51:13 <shardy> asalkeld: I think the two are closely related, so it would be great to explain the CM/Heat thing as well as the AS ideas
20:51:20 <asalkeld> ok
20:51:31 <asalkeld> np
20:51:37 <shardy> I think that autoscaling == alarms+orchestration
20:51:43 <shardy> essentially
20:51:50 <asalkeld> sure
20:51:51 <SpamapS> radix: yes, _that_ would be what I call orchestration. :)
20:52:00 <radix> shardy: "orchestration" is like a gas :)
20:52:45 <shardy> asalkeld: anyway, you're best placed to join the dots for us all on the CM stuff, so hopefully that will help the AS discussions too :)
20:52:54 <asalkeld> yip
20:52:58 <SpamapS> I do think there's a need for an API for those alarm hooks to hit, and that API is "the scaling API"
20:53:04 <shardy> 8 mins, anything else from anyone?
20:53:07 <m4dcoder> if alarms+orchestration, then instead of a service that focuses on AS, why not a service that has broad scope on automated actions and remediations?
20:53:32 <m4dcoder> any input to http://lists.openstack.org/pipermail/openstack-dev/2013-June/010593.html?
20:53:33 <adrian_otto> sensor/effector
20:53:43 <shardy> m4dcoder: You mean something which can orchestrate stuff, like heat?
20:53:44 <m4dcoder> i'm still trying to gather requirements for bp as-update-policy
20:53:53 <asalkeld> m4dcoder, if we just had a work flow thingy
20:53:55 <shardy> triggered by an alarm source, like ceilometer?
20:54:02 <radix> SpamapS: it needs to know about groups, and policies like "scale up 10%" vs "scale up +1" and stuff like that
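A hedged sketch of what such a scaling API request might carry, to make the groups/policies distinction concrete. The field names are illustrative only; the adjustment types mirror the AWS-style ones Heat's existing template resources already expose (ChangeInCapacity, PercentChangeInCapacity, ExactCapacity).

    # Illustrative payloads a webhook-style scaling API might accept.
    SCALE_UP_BY_ONE = {
        'group_id': 'web-servers',              # which scaling group to act on
        'adjustment_type': 'ChangeInCapacity',
        'adjustment': 1,                        # "scale up +1"
    }

    SCALE_UP_TEN_PERCENT = {
        'group_id': 'web-servers',
        'adjustment_type': 'PercentChangeInCapacity',
        'adjustment': 10,                       # "scale up 10%"
        'cooldown': 300,                        # seconds before the policy may fire again
    }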
20:54:11 <randallburt> queue kebray
20:54:11 <SpamapS> m4dcoder: automated actions and remediations fall into orchestration as a general topic, and the provider stuff being added to HOT will likely make that a user definable thing.
20:54:42 <kebray> workflow thingies are good.
20:54:54 <SpamapS> m4dcoder: I'd love to work with you on as-update-policy .. it falls right into my rolling updates work.
20:54:57 <kebray> But, TaskFlow projects are even better :-)
20:54:58 <randallburt> loves all the hopes and dreams being piled on to "hot provider stuff"
20:55:00 <asalkeld> SpamapS, well we just need a resource to create workflows
20:55:20 <asalkeld> and we can action those from alarm/scaling
20:55:23 <m4dcoder> SpamapS: awesome.  need some guidance.
20:55:25 <SpamapS> randallburt: its the duke nukem of orchestration
20:55:47 * asalkeld thinks it's not much to do with format
20:55:51 <randallburt> at first I was all like "HA" and then I was all like 'awww.'
20:56:21 <randallburt> just call me the John Carmack of Openstack
20:56:21 <asalkeld> kebray, how is your proj. doing?
20:56:38 <asalkeld> does it do something yet?
20:56:50 <kebray> It's great.. can't really call it my project, but TaskFlow is making excellent progress.  Cinder took a dependency on the Task library for H release.
20:56:55 <asalkeld> can I create a job to run a command
20:56:55 * stevebaker is back
20:57:03 <kebray> create_volume is almost fully working in cinder using the new library.
20:57:24 <asalkeld> I meant aas
20:57:26 <SpamapS> nice
20:57:29 <SpamapS> 2 min
20:57:48 <asalkeld> (the service, not the lib)
20:58:06 <randallburt> there is no aas until the lib is done/vetted, right?
20:58:07 <zaneb> asalkeld: I totally misread that previous line
20:58:34 <asalkeld> yea zaneb needs some sleep
20:58:35 <kebray> create_volume stuff using TaskFlow:   https://review.openstack.org/#/c/29862/
20:59:07 <shardy> kebray: interesting
20:59:15 <shardy> time's up - thanks all!
20:59:22 <shardy> #endmeeting