20:00:16 <shardy> #startmeeting heat
20:00:18 <openstack> Meeting started Wed Jun  5 20:00:16 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:21 <openstack> The meeting name has been set to 'heat'
20:00:27 <shardy> #topic rollcall
20:00:32 <wirehead_> o/
20:00:36 <randallburt> hello all
20:00:38 <jpeeler> hi
20:00:39 <andrew_plunk> hello
20:00:40 <zaneb> heya
20:00:41 <radix> hello
20:00:43 <bgorski> hi all
20:01:09 <alexheneveld> howdy
20:01:19 <asalkeld> hi
20:01:29 <stevebaker> hi
20:01:31 <tspatzier> hi
20:01:35 <timductive> hi
20:02:13 <shardy> hi all, let's get started :)
20:02:21 <shardy> #topic Review last week's actions
20:02:31 <shardy> I actually don't think there are any:
20:02:44 <shardy> #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-05-29-20.00.html
20:03:04 <shardy> Did anyone have anything from last meeting they wanted to mention?
20:03:11 <asalkeld> nope
20:03:27 <shardy> ok
20:03:33 <shardy> #topic h2 blueprint status
20:03:50 <shardy> So thanks to zaneb for attending the release/status meeting yesterday
20:03:55 <zaneb> np
20:04:03 <shardy> there were a couple of queries re h2 bps
20:04:16 <zaneb> luckily it was the one just after h1, so not much to report :)
20:04:39 <shardy> need an assignee for stack-metadata. randallburt, or anyone, interested in picking that up for h2?
20:04:53 <shardy> #link https://blueprints.launchpad.net/heat/+spec/stack-metadata
20:04:57 <randallburt> I started some work on https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L574 will probably have many questions for irc tomorrow ;) but I can switch gears with little issue
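[Editor's note: randallburt is referring to the `FnGetAtt` area of `resource.py` and an "attribute schema" for resources. The sketch below is purely illustrative of that idea — the class layout, method names, and values here are assumptions for this log, not Heat's actual implementation.]

```python
# Rough sketch (NOT actual Heat code) of a declarative attribute schema
# on a resource, so FnGetAtt can validate requested attribute names.

class Resource(object):
    # map attribute name -> human-readable description
    attributes_schema = {}

    def FnGetAtt(self, key):
        # reject attributes the resource never declared
        if key not in self.attributes_schema:
            raise KeyError('%s: unknown attribute "%s"'
                           % (type(self).__name__, key))
        return self._resolve_attribute(key)

    def _resolve_attribute(self, key):
        raise NotImplementedError


class Instance(Resource):
    attributes_schema = {
        'PublicIp': 'The public IP address of the instance',
        'AvailabilityZone': 'The AZ the instance was launched in',
    }

    def _resolve_attribute(self, key):
        # stand-in values; a real resource would query nova here
        return {'PublicIp': '10.0.0.5', 'AvailabilityZone': 'nova'}[key]
```

The benefit of a schema like this is that unknown attribute references can be caught at validation time instead of failing deep inside a stack create.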
20:05:00 <asalkeld> I don't like the idea of it
20:05:22 <adrian_otto> asalkeld: ?
20:05:24 <asalkeld> see comment
20:05:26 <shardy> asalkeld: I saw your whiteboard comment, can you expand?
20:05:45 <asalkeld> well rather make a static resource
20:05:48 <asalkeld> and link to it
20:05:54 <kebray> late, but present.
20:05:57 <asalkeld> via a reference
20:06:05 <zaneb> asalkeld: rather than use a pseudo-parameter you mean?
20:06:09 <asalkeld> yea
20:06:11 <shardy> asalkeld: so a Metadata-only resource?
20:06:16 <asalkeld> sounds like a hack
20:06:23 <randallburt> asalkeld: are we asking the template author to wire these things up explicitly?
20:06:35 <asalkeld> not sure
20:06:41 <asalkeld> probably
20:06:49 <zaneb> asalkeld: so, the idea is that you have your "provider" template for, say, an Instance
20:06:56 <randallburt> if a resource needs/uses metadata, then the user passes that in via a parameter today, yes?
20:06:58 <zaneb> and the Instance has metadata
20:07:16 <zaneb> and you have to pass that through somehow to the _actual_ Instance inside the provider template
20:07:20 <asalkeld> so use a load config
20:07:58 <zaneb> asalkeld: but not all provider templates will be providers for Instances
20:08:01 <asalkeld> that way it is _more_ composable
20:08:11 <therve> You mean launch config?
20:08:15 <asalkeld> ya
20:08:27 <asalkeld> or an openstack equivalent
20:08:29 <randallburt> I thought the idea was to make the template look and feel just like any other resource; sounds like we need a little more discussion here before proceeding? maybe on the ML?
20:08:43 <shardy> but this is about Metadata for e.g nested stacks, which need not necessarily map to an instance config?
20:08:49 <zaneb> asalkeld: a launch config _outside_ of the provider template?
20:08:58 <asalkeld> ya
20:09:10 <asalkeld> could be a separate file
20:09:33 <asalkeld> so almost no need for the nested stack
20:09:37 <zaneb> asalkeld: I think that's a good way for users to implement it once we actually have that resource
20:09:44 <adrian_otto> If I am understanding the blueprint properly, I think it's asking the rhetorical question: if resources can have attributes (referred to here as metadata) then so should stacks so stacks can masquerade as resources, as needed. Do I understand it correctly?
20:10:08 <zaneb> asalkeld: but it doesn't solve the problem that resources have metadata and providers, without this feature, effectively can't get at it
20:10:30 <zaneb> adrian_otto: correct
20:10:50 <adrian_otto> ok, then I support the addition of the feature.
20:10:52 <asalkeld> why can't nested stacks just have metadata?
20:11:02 <asalkeld> (not as a parameter)
20:11:15 <zaneb> asalkeld: they can, the point is giving people a way to access it from inside the provider template
20:11:41 <asalkeld> can't you?
20:12:02 <asalkeld> ok, well I think it needs some more thought
20:12:11 <asalkeld> doesn't have to be here
20:12:11 <zaneb> asalkeld: in code, yes, but not using the template syntax
20:12:19 <zaneb> yeah, let's punt this to the mailing list
20:12:21 <asalkeld> (just seems ugly)
20:12:26 <randallburt> zaneb:  agreed
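[Editor's note: to make the disagreement above concrete, here is a sketch of what the stack-metadata blueprint is asking for. Nothing in this fragment exists yet — the `AWS::StackMetadata` pseudo-parameter name and the wiring are assumptions about the proposal, shown only to illustrate how a provider template might access the metadata of the resource it is standing in for.]

```yaml
# Hypothetical provider template backing an AWS::EC2::Instance.
# The pseudo-parameter name below is an assumption, not an existing feature.
HeatTemplateFormatVersion: '2012-12-12'
Resources:
  TheRealInstance:
    Type: AWS::EC2::Instance
    Metadata:
      # pass the outer (facade) resource's metadata through to the
      # actual instance inside the provider template
      Ref: AWS::StackMetadata
    Properties:
      ImageId: {Ref: ImageId}
      InstanceType: {Ref: InstanceType}
```

asalkeld's counter-proposal is roughly: instead of a pseudo-parameter, put the metadata in a separate static resource (e.g. a launch config) outside the provider template and reference it, which he argues composes better.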
20:12:27 <tspatzier> can you give an example for metadata, I mean in contrast to properties?
20:12:44 <tspatzier> I meant parameters ...
20:12:59 <randallburt> shardy:  should I switch focus to this issue/bp then or keep going with attribute schema for now?
20:13:06 <alexheneveld> doesn't AWS support this?  so seems we should too…  http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-metadata.html
20:13:08 <zaneb> tspatzier: look at how SpamapS is using it
20:13:30 <fsargent> lo
20:13:40 <shardy> randallburt: no, stick with what you're working on, we just need someone to take ownership of this sometime soon as it's a high-priority BP targeted at h2
20:13:53 <randallburt> k
20:13:55 <tspatzier> zaneb: you mean in the openstack-ops repo?
20:14:00 <zaneb> alexheneveld: no, we're talking about accessing it within provider templates, which aws doesn't have
20:14:01 <shardy> Anyone going to take an action to start the ML discussion?
20:14:09 <zaneb> tspatzier: yes
20:14:27 <alexheneveld> zaneb: thx
20:14:35 <randallburt> chirp chirp
20:14:35 <zaneb> *cough*asalkeld*cough*
20:14:46 <asalkeld> you ok zaneb
20:14:48 <asalkeld> ?
20:15:01 <asalkeld> sure I can email
20:15:01 <shardy> #action asalkeld/zaneb to start ML discussion re stack metadata ;)
20:15:21 <shardy> next item is:
20:15:26 <shardy> #link https://blueprints.launchpad.net/heat/+spec/discover-catalog-resources
20:15:41 <shardy> SpamapS: is this going to happen for h2, or should we bump it to h3?
20:16:18 <zaneb> is SpamapS here? he hasn't said anything
20:16:19 <stevebaker> horizon are working on similar stuff here, we should see what they are doing
20:16:35 <shardy> Ok, we'll leave that and discuss it another time
20:16:42 <shardy> #topic blueprint/bug process reminder
20:17:09 <stevebaker> well, they already hide features if an endpoint isn't available. they're now working on introspecting api features for more fine-grained ui changes
20:17:26 <shardy> This is just a reminder: can everyone please make sure, if they're working on features, that they have an associated blueprint, and that it is targeted to whichever milestone the feature is likely to land in
20:17:39 <shardy> and likewise with bugs
20:18:27 <shardy> I know some people don't like the process, but if we all just follow the process and use launchpad etc it makes my life much, much easier :)
20:19:14 <shardy> also pls make sure commit messages and topic branches map to the BP/bug etc
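[Editor's note: for readers unfamiliar with the process shardy is referring to, the sketch below shows one common OpenStack convention for tying a commit and topic branch to a blueprint or bug. The branch name, blueprint name, and bug number are illustrative placeholders, not real identifiers from this meeting.]

```text
# topic branch named after the blueprint, so gerrit groups the reviews:
#   git checkout -b bp/stack-metadata
#
# commit message with a footer linking back to launchpad:
#
#   Add stack metadata pseudo-parameter
#
#   (longer description of the change here)
#
#   blueprint stack-metadata
#   Fixes: bug #123456
```

The gerrit/launchpad integration scans these footer lines and updates the blueprint or bug status automatically, which is what makes shardy's life "much, much easier".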
20:19:52 <shardy> onwards..
20:19:58 <shardy> #topic h1 milestone release/status of Tempest integration
20:20:31 <shardy> So.  We released the h1 milestone with a critical regression, which was unfortunate
20:20:33 <SpamapS> sorry I was pulled off onto other stuff
20:20:33 * stevebaker summons mordred
20:20:51 <shardy> anyone (stevebaker?) got an update on Tempest integration?
20:21:25 <stevebaker> so heat will be switched on for devstack tempest soon, and we can gate on that
20:21:43 <SpamapS> shardy: bump discover-catalog-resources to h3. I need to focus on rolling updates.
20:21:43 <stevebaker> meaningful tests will require launching images though...
20:21:46 <shardy> also, everyone, please, please, please actually run heat and test stuff before posting big changes, particularly before we have the Tempest stuff sorted
20:22:32 <stevebaker> first barrier to that is getting this review through, https://review.openstack.org/#/c/28651/ ; sdague had some comments
20:23:16 <shardy> stevebaker: Ok, so it's all in-progress then, think we'll have reasonable Tempest coverage by h2?
20:23:17 <zaneb> some completely wrong comments :/
20:23:37 <radix> really good to hear about Tempest integration
20:23:43 <stevebaker> second task is having an image to test against. mordred mentioned that they intend to get dib building within openstack ci, putting test images on static.openstack.org
20:24:09 <stevebaker> however I wonder if we could just put some known good images on static.openstack.org in the meantime
20:24:38 <shardy> stevebaker: +1, just actually launching a stack would be a great start
20:24:39 <stevebaker> tl;dr, we'll have something by h2 ;)
20:24:49 <shardy> stevebaker: Ok great
20:24:51 <mordred> stevebaker: what?
20:24:52 <SpamapS> a stock Ubuntu 12.04 image works fine with heat. You just don't get heat-cfntools.
20:25:05 <SpamapS> F17 probably works fine too
20:25:16 <stevebaker> mordred: ^^ see second task
20:25:16 <sdague> stevebaker: just move it to thirdparty directory
20:25:23 <mordred> yeah - we actually have pleia2 doing work on testing tripleo right now
20:25:28 <zaneb> do we have an up-to-date guide on how to install devstack on a VM, with Heat, with images and get it all running first time?
20:25:33 <stevebaker> sdague: our point is that it is not a third party test
20:25:36 <mordred> part of that workflow chain will probably involve figuring out image publication
20:25:59 <radix> zaneb: I've been trying just that recently, it's... not easy
20:26:01 <sdague> stevebaker: it's using non openstack native datastructures, that puts it in thirdparty in tempest
20:26:11 <shardy> zaneb: I don't think so, we need to update all our getting started docs
20:26:20 <zaneb> sdague: it's using the only datastructures we support
20:26:24 <SpamapS> sdague: seeing as cloudformation does not have support for yaml.. I beg to differ.
20:26:35 <SpamapS> HeatTemplateFormatVersion: '2012-12-12'
20:27:07 <stevebaker> mordred: should we wait for the image building chain to be in place, or could we start with some manually built and uploaded images?
20:27:20 <sdague> zaneb: there are heat blueprints to implement native resource types, once those are in, that can go in api
20:27:23 <mordred> stevebaker: in tempest?
20:27:41 <mordred> stevebaker: I mean, to use inside of tempest/devstack testing?
20:27:43 <stevebaker> mordred: yes, for heat to run image-launching tests in tempest
20:27:46 <sdague> thirdparty all runs on all the runs right now anyway
20:27:55 <SpamapS> sdague: yaml formatted templates are native to heat and heat alone. The API is a native Heat API.
20:27:56 <sdague> so I'm not sure why you'd be opposed to putting it there
20:27:58 <zaneb> sdague: heat was accepted as a part of OpenStack without those
20:28:16 <mordred> stevebaker: well - right now reddwarf is building images as part of their devstack sequence
20:28:23 <mordred> stevebaker: that's the _easiest_ thing to do
20:28:48 <stevebaker> mordred: ooo, I'll take a look
20:28:53 <SpamapS> sdague: just because it works like cfn doesn't make it thirdparty.
20:28:54 <mordred> stevebaker: but - if you do have an unchanging base image or two, I imagine it wouldn't be too hard to get onto static.o.o for the short term either
20:29:21 <stevebaker> mordred: ok, lets figure it out later
20:29:24 <mordred> stevebaker: cool
20:29:55 <shardy> Ok, well we can continue this discussion over the coming weeks, but in the meantime, we share a collective responsibility to actually test stuff manually (not just unit tests) before posting
20:30:34 <shardy> #topic Open discussion
20:30:43 <stevebaker> sdague: the criteria for being in thirdparty or not seems vague, you're implying we would come out of it as soon as neither the strings "aws" nor "ec2" appear in our templates, which seems somewhat arbitrary
20:31:13 <shardy> anyone have anything else they want to mention?
20:31:22 <SpamapS> right, what I'd rather see is that thirdparty is used for *pure* cloudformation template testing.
20:31:32 <timductive> is anyone else currently working on heat UI/interested in working on heat UI?
20:31:35 <radix> shardy: I'm trying as hard as I can to become a heat developer ;-P just struggling with getting a *real* working test environment
20:31:48 <zaneb> SpamapS: +1. Or for testing the heat-cfn-api
20:32:08 <stevebaker> sdague: to me, thirdparty is to validate compatibility with non-native APIs
20:32:12 <sdague> stevebaker: well, that's my -1. just went through a lot of work restructuring tempest to get all aws references out of api
20:32:26 <shardy> radix: well you just need a working grizzly/havana openstack plus heat
20:32:31 <wirehead_> We had some nice chats with radix and therve while they were in SF about autoscaling, shardy.  :)
20:32:31 <jrcookli> +1 for working on heat UI
20:32:37 <shardy> you don't *have* to use devstack
20:32:43 <zaneb> sdague: there's no AWS references in the _API_
20:33:00 <radix> shardy: yeah... I don't think I have access to any instance of that kind of environment, at the moment
20:33:06 <stevebaker> zaneb: he means the api package in tempest
20:33:12 <shardy> wirehead_: so actually, do you plan contributions in the AS area, e.g the discussed AS API?
20:33:17 <zaneb> stevebaker: ah, ok
20:33:30 <radix> shardy: yeah, that's my job :-)
20:33:31 <shardy> I've left it off the havana plan for now, as everything went kind of quiet after summit...
20:33:52 <alexheneveld> radix: this -- http://www.cloudsoftcorp.com/blog/getting-started-with-heat-devstack-vagrant/ -- may be helpful. networking can be somewhat fiddly, i've found.
20:33:58 <wirehead_> We needed to assemble for you a dark army of doom.
20:34:08 <radix> alexheneveld: yes, networking is the entirety of the cause of my headaches
20:34:35 <radix> alexheneveld: I guess I can try it this way
20:35:05 <shardy> radix: ok, well let us know in #heat if you need help and we can try to get you started
20:35:11 <radix> thanks a lot :)
20:35:18 <zaneb> shardy: I need help
20:35:22 <shardy> sounds like all our getting started docs need a refresh
20:35:22 <randallburt> lol
20:35:25 <radix> wait, I need help first!
20:35:33 <shardy> zaneb: lol ;)
20:35:35 <radix> hehe
20:35:38 * SpamapS gets pulled away again
20:36:03 <wirehead_> I have the urge to fix up the docs.  Just haven't been able to find the focused time to really get it done. :/
20:36:37 <asalkeld> any news on a rackspace server resource landing?
20:36:43 <kebray> Yep
20:36:50 <kebray> jason, andrew, and vijendar are making really good progress on resources for rackspace cloud servers, loadbalancers, and databases.  Additionally, we are planning a Horizon blueprint to graphically show progress on deploying a stack that displays resource state.
20:37:16 <shardy> kebray: is there somewhere we can see the code?
20:37:17 <stevebaker> kebray: I've been thinking about that too
20:37:17 * asalkeld keen on a server resource (alpha?)
20:37:45 <randallburt> jasond: ?
20:37:46 <sdague> stevebaker: so when heat takes in ec2 resources, what kinds of API calls is it making to nova?
20:37:47 <kebray> jasond, you have the code in public yet?
20:38:00 <jasond> it's very early, but https://github.com/jasondunsmore/heat/branches
20:38:15 <shardy> sdague: it makes openstack native (not ec2) calls to the nova API
20:38:16 <stevebaker> sdague: all calls from heat to other openstack APIs are native openstack
20:38:17 <jasond> here are my TODOs http://dunsmor.com/heat/cloud-servers-provider.html
20:38:36 <kebray> we're still playing with cfntools and cloud-init.. learning process. but good progress is being made.
20:38:56 <stevebaker> sdague: only the resource definition in the template mentions aws, its native openstack end-to-end
20:39:01 <kebray> did we ever land on whether these are going in-tree?
20:39:40 <wirehead_> The last time we discussed this, everybody seemed to be gung-ho about in-tree, but not what it meant to be in-tree.
20:39:46 <sdague> stevebaker: and in h2 we'll get os native resources?
20:39:56 <asalkeld> jasond / kebray try posting reviews and marking them as "work in progress"
20:40:11 <shardy> kebray: I think we decided yes at last week's meeting, but nobody is that keen to drive reorganizing the tree ;)
20:40:14 <asalkeld> (there is a special button, "work in progress")
20:40:31 <shardy> +1 on posting WIP/draft reviews
20:40:33 <stevebaker> sdague: yes, native resource writing is ongoing, nova::server is being worked on right now
20:40:36 <jasond> i put the cloud servers resource provider under heat/engine/resources/rackspace_cloud/
20:40:37 <zaneb> kebray: there was widespread agreement on in-tree, and carping about /contrib. Mostly from me ;)
20:40:41 <kebray> asalkeld:  ok.. thx.  Yeah, we need to get this code in the right place.
20:41:06 <asalkeld> zaneb, why contib?
20:41:22 <sdague> stevebaker: so I'll +2 on the condition someone files a bug that we can track to get that cut over after heat supports it
20:41:27 <asalkeld> whats wrong with engine/resources/rackspace/
20:41:30 <zaneb> asalkeld: it doesn't seem like something to be installed by default to me
20:41:45 <asalkeld> to me it does
20:41:58 <stevebaker> sdague: ok, much appreciated. I think we already have an umbrella bp for that
20:41:59 <sdague> I'll stick that in the review comments, we can finish the conversation there
20:42:16 <zaneb> asalkeld: having a different resource type for every cloud provider is... you know... a bug, not a feature, long term
20:42:34 <kebray> zaneb: agreed :-)
20:42:36 <wirehead_> Yeah.  The whole thing where Heat becomes really and truly awesome is the ability to run your own Heat instance that talks to other people's clouds.
20:42:38 <randallburt> but a real need in the short term
20:42:42 <stevebaker> sdague: https://blueprints.launchpad.net/heat/+spec/abstract-aws
20:42:43 <asalkeld> really
20:42:58 <zaneb> randallburt: exactly
20:43:19 <asalkeld> wirehead_, +1
20:43:23 <wirehead_> I think we ought to proceed with the understanding that there will be a re-org.
20:43:24 <asalkeld> zaneb, -1
20:43:27 <asalkeld> :)
20:43:33 <sdague> stevebaker: can you file a tempest bug that links to that, just so as we approach h2 we can make sure it gets handled?
20:43:35 <zaneb> lol
20:43:55 <stevebaker> sdague: ok, will do
20:44:00 <randallburt> so, from our perspective, we're not fussed about where to put it. we'll start submitting review patches for our resources and can go from there about where folks want to put stuff
20:44:05 <wirehead_> I also suspect that once we've written 2-3 server providers, the pattern for how to not have to write 20 providers will be clearer.
20:44:32 <sdake> o/ - sorry i'm alte
20:44:44 <randallburt> well don't be alte again! ;)
20:45:06 <kebray> I can see where cloud service provider may want to optimize resource implementations for their cloud.. and, I could see contributing those back to the community.  So, in that sense, maybe they are features and not bugs… I can see both sides.  but, I digress.  It's just terminology at this point.  The solution is the same.
20:45:15 <shardy> hi sdake
20:45:49 <sdake> service desk ship me my new laptop so I can stop using the delete key pls :)
20:46:27 <shardy> kebray: yeah I think we're agreed on a resource sub-directory per provider, and hopefully avoiding undue duplication via the review process
20:46:55 <shardy> anyone got anything else they want to discuss?
20:46:57 <sdake> aws reorg happening in h2?
20:47:00 <zaneb> kebray: I'm trying to make a case for /contrib (badly, it appears), but I certainly wouldn't -1 a review over it
20:47:03 <sdake> (just reading scrollback)
20:47:38 <kebray> excellent.  we'll work to get our code in the right place for review asap.. help on getting the rackspace resources written from others would be great.. but, I know they aren't committed for the H milestones.
20:47:42 <shardy> sdake: we basically need native resources for everything, then the YAML templates become abstracted from awsisms
20:48:00 <sdake> shardy yup I got that - I am working on nova server atm
20:48:16 * sdake loves rebasing
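[Editor's note: the "aws reorg" thread below hinges on the difference between the AWS-compatible resource types Heat shipped with and the native types being written. The fragment here sketches that difference only — at the time of this meeting the native server resource had not landed, so the `OS::Nova::Server` type name and its property names are assumptions about the in-progress work, not a documented interface.]

```yaml
# Sketch: the same logical server expressed two ways.
Resources:
  # AWS-compatible type, what templates use today
  MyServerAws:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: F17-x86_64-cfntools
      InstanceType: m1.small

  # Hypothetical native type (name and properties assumed)
  MyServerNative:
    Type: OS::Nova::Server
    Properties:
      image: F17-x86_64-cfntools
      flavor: m1.small
```

Once a native type exists for every AWS-compatible one, templates can be written with no "awsisms" at all, which is what shardy describes below and what the abstract-aws blueprint tracks.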
20:48:18 <shardy> sdake: so what was your query re h2, can you clarify pls?
20:48:25 <therve> shardy, Do we have blueprints for all AWS resources to native?
20:48:47 <zaneb> therve: iirc there's one mega-blueprint
20:48:53 <sdake> If we are reorganizing the tree, my work will have to be rebased, which is fine, but I would prefer to know if we plan to reorg the resources dir in h2
20:48:54 <alexheneveld> kebray: zaneb: providers for other clouds feel like they should be plugins long term, maintained by third parties not by openstack - esp if heat goes core.  i've been in the adapters-keep-up game before and it's for mugs.
20:48:59 <shardy> therve: there is an abstract-aws BP, but we may not have related BPs to create native versions of every resource yet
20:49:00 <sdake> previously we weren't going to do that
20:49:10 <tspatzier> sdake: what's the state of the nova server resource? Any chance to try it out. Might want to reference it from the first HOT samples I am trying to get running.
20:49:12 <shardy> there are certainly BPs for some native resources tho
20:49:25 <stevebaker> shardy: replacing cfntools is part of aws removal too
20:49:37 <sdake> tspatzier not finished yet and have other tasks to attend to atm, but will get back to it in the next week
20:49:52 <asalkeld> shardy, for some reason bps are getting defined for smaller and smaller bits of work
20:50:01 <shardy> stevebaker: yeah, I was thinking about that recently, we need to port cfntools to understand how to talk to the ReST API
20:50:15 <shardy> and/or the CFN API configurably
20:50:15 <tspatzier> sdake: sounds good. I also have some prep work for HOT to be done first
20:50:21 <shardy> which also removes the recurring boto pain
20:50:31 <sdake> asalkeld that is for the rest of the downstream openstack community (like marketing) to know what changed in heat
20:50:39 <kebray> alexheneveld: I don't disagree, but it seems like enough people want us to include these at the moment.  I don't have much of an opinion.. just need shardy to tell us where to put our code, and we'll push the gerrit review.
20:50:45 <sdake> they can't reasonably be expected to look at a git changelog can they? :)
20:50:58 <zaneb> alexheneveld: it works for the kernel
20:51:09 <shardy> asalkeld: Is that a problem? A BP needs to be achievable within a relatively short time for one person
20:51:09 <asalkeld> yea
20:51:15 <wirehead_> boto botulism.
20:51:26 <randallburt> botoxic
20:51:32 * adrian_otto gags
20:51:43 <alexheneveld> kebray: +1 for now -- just wouldn't want to see it collapse under its own weight
20:51:46 <stevebaker> on that note I'm outa here
20:52:21 <shardy> Ok, is there anything else, or shall we finish early?
20:52:32 <sdake> shardy so the aws reorg is landing in h2?
20:52:37 <zaneb> +1 for finish early :
20:52:38 <sdake> still unclear on that
20:52:38 <alexheneveld> zaneb: good point :)
20:52:39 <zaneb> :)
20:52:49 <shardy> sdake: I still don't really know what you're asking?
20:53:01 <shardy> what will land, all the native resources, or...
20:53:05 <sdake> the moving all the resources around for environments + everything else
20:53:10 <sdake> the directory structure
20:53:34 <zaneb> alexheneveld: see last week's meeting log for more discussion
20:53:48 <shardy> sdake: I think that's not really a BP, just the first person who wants to can move stuff and deal with the resulting breakage
20:54:17 <shardy> sdake: Is it important to you that it happens for h2?
20:54:27 <sdake> I don't care I just want  to know if h2 or h3 :)
20:54:43 <zaneb> I think it's important to him that it doesn't so he doesn't have to rebase on it ;)
20:54:46 <shardy> sdake: honestly I don't know - is anyone planning it imminently?
20:54:50 * shardy is not
20:54:51 <sdake> I would prefer it doesn't happen in h2
20:54:54 <sdake> zaneb wins :)
20:55:16 <zaneb> sdake: I think you and SpamapS were the only ones talking about doing it
20:55:28 <sdake> sounds good then i'll sync with SpamapS about that
20:55:47 <shardy> Ok, sounds good, I don't really see it as all that high-priority tbh
20:56:12 <shardy> Ok, if nothing else, we can wrap up..
20:56:45 <shardy> #endmeeting