17:59:38 #startmeeting trove
17:59:39 Meeting started Wed Mar 12 17:59:38 2014 UTC and is due to finish in 60 minutes. The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:59:42 The meeting name has been set to 'trove'
18:00:09 o/
18:00:12 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
18:00:13 o/
18:00:15 o/
18:00:16 o/
18:00:20 o/
18:00:22 o/
18:00:23 \o
18:00:29 o/
18:00:33 o/
18:00:38 o\
18:00:46 o/
18:00:57 Last meeting's logs: http://eavesdrop.openstack.org/meetings/trove/2014/trove.2014-03-05-18.02.html
18:00:59 #link http://eavesdrop.openstack.org/meetings/trove/2014/trove.2014-03-05-18.02.html
18:01:04 o_
18:01:12 o|
18:01:27 #topic Action Items
18:01:44 Just one from last meeting, and it's complete.
18:01:54 1. File a BP for adding datastore type/version to backup model
18:02:00 SlickNik, done
18:02:12 Yup, thanks denis_makogon
18:02:24 #topic Trove Guest Agent Upgrades bp follow up
18:02:30 #link https://blueprints.launchpad.net/trove/+spec/validation-on-restore-by-the-backup-strategy
18:03:00 esp, around to take it away?
18:03:16 #link https://blueprints.launchpad.net/trove/+spec/upgrade-guestagent
18:03:51 SlickNik, i guess it's approved
18:04:26 o/
18:04:27 SlickNik, i'd like to raise one question in terms of this topic
18:04:30 o/
18:04:34 a wild esp appeared!
18:04:38 * amcrn throws a pokeball
18:04:40 esp, nice to see you
18:04:50 he's working on it
18:04:52 o/
18:04:53 denis_makogon: it's still in discussion.
18:04:55 amcrn: sorry, asleep at the keyboard again
18:05:04 hello denis_makogon :)
18:05:18 So at the meetup we had a discussion on how to handle rpc calls if the guest agent was too old; I'd like to submit we move that to a separate blueprint to follow this one
18:05:44 It would let this one focus on making a good reference implementation of the update call
18:05:59 grapex: sounds good to me
18:06:00 fair enough, sounds reasonable
18:06:10 agreed
18:06:17 Sounds good to me.
18:06:21 One thing I'd like to bring up is there was talk in this blueprint of an audit trail in the database
18:06:45 shoot
18:06:53 My concern is I think there are multiple routines in Trove which could benefit from this, and it might be wise to make that into its own blueprint / feature
18:07:06 grapex, agreed
18:07:12 so it could be re-used elsewhere. For example this could help with resize operations
18:07:19 +1 for audit tables
18:07:27 grapex: yep I hear ya
18:07:35 there are some sqlalchemy samples that add it to your models
18:07:48 and who knows how useful it might be with replication in the future
18:08:02 kevinconway: cool, I'll ping ya on this
18:08:16 grapex: are we convinced that we can land a generic audit trail solution in a reasonable timeframe? as long as it's prioritized and doesn't land months and months after guest-upgrades does, i'm with you.
18:08:56 amcrn: I think we could. If we made it too simple I'm not sure if there'd be major backwards compatibility concerns if we wanted to change it later
18:08:58 grapex: could we come back and implement the audit trail after the first cut of the upgrades? or is that gonna be too messy?
18:09:01 is it possible to do these tasks in parallel?
18:09:11 grapex: alright, sounds good
18:09:22 Maybe "audit" log is too much; to me this is just a way in the database to give a better idea of what status some action went into, or what happened to a long running job
18:10:12 well, there's technically nothing stopping a deployer from adding triggers to existing tables that record state on INSERT/UPDATE
18:10:13 And not to get too far down this rabbit hole, but this sort of ties into an ancient problem with Trove's API in that sometimes an API call which is async might fail, but there aren't good methods for showing this failure
18:10:47 one thing I might throw into the conversation is that backups kind of already keep a record of backup history.
18:10:49 kevinconway: We could, but I think the fact so much work was done for an audit trail on this upgrade feature shows a hunger exists for a more general solution
18:10:58 esp: I agree
18:11:05 We spoke of something like this when we talked about "action-events" in the past.
18:11:17 #link http://docs.sqlalchemy.org/en/latest/orm/examples.html#versioning-objects
18:11:20 which I'm not saying is right or wrong but it lives there now :)
18:11:27 SlickNik: Exactly, it's unfortunate that was never finished
18:12:06 grapex: this sounds strikingly similar to the tasks that hub_cap did
18:12:08 Part of my own view on this is I'm not sure why an upgrade routine should have this feature more than several other existing routines in Trove. It adds a lot of complexity to add it for just this
18:12:19 cp16net: Yeah, SlickNik was referencing hub_cap's work
18:12:36 so the summary is to amend the blueprint to have a single row vs. a row per update, and to in parallel start fleshing out the details of action-events/history/audit
18:12:49 yep, would be nice to talk to hub_cap, maybe he ran into some issues with this
18:12:59 Maybe we should all agree to accepting a simple v1 implementation. If we don't change the API there won't be too much danger with it.
18:13:07 I'm all for building something for auditing that's extensible, and backward compatible. Just need to be careful to not try and make it encompass _everything_. I fear down that path lies timeline and feature bloat.
18:13:12 Then agent upgrades could simply use it
18:13:22 SlickNik: Agreed
18:13:46 SlickNik: auditing shouldn't extend much past the models
18:13:59 So I think as a first pass let's just try naming the "audit trail's" table something else.
18:14:22 grapex: yeah I'm good with that
18:14:22 One other note on schema: IIRC we're adding a single new table called "upgrades"
18:14:34 and try to make it generic
18:14:46 sounds good
18:15:07 one question
18:15:20 how do we plan to extract the agent code from trove?
18:15:23 Maybe we could just put the version of the guest agent into an existing table, such as heartbeats?
18:15:40 kevinconway: As an example (perhaps not a very good one, though), something mentioned earlier was resize. We don't have any of that historical data in the models, so it would have to be a trail on the action (API call).
18:15:41 denis_makogon: I think hub_cap wants us to wait for Juno to do that
18:15:57 SlickNik: good example
18:16:19 grapex: yep, I think that would work ok. I was looking at that last night
18:16:25 I see it as a good idea, I don't think we need to hash out the design during this meeting though :)
18:16:34 SlickNik: yeah, i understand the concern. Those are simply things we have to create models for.
Create a ResizeAction record and keep a table that tracks the state of that record over time
18:16:39 SlickNik, agreed
18:16:45 SlickNik: Ok, as long as those concerns are noted I don't think we should hash out everything either.
18:17:00 the "versioning" of the record should happen behind the scenes
18:17:06 kevinconway: I guess that's the issue though: having an explosion of models.
18:17:30 how do we plan to version the guest API?
18:17:32 esp: You good with updating/modifying the bp with these concerns? Can you action it, please?
18:17:34 grapex: I have to look at agent_heartbeats a little more, but I was hoping not to lose the state of a particular upgrade
18:17:34 kevinconway: I think I see what you're saying...
18:17:52 kevinconway esp SlickNik: Let's start a thread about an "audit" trail or whatever on the mailing list
18:18:00 as an analyst i should be able to see the state of a record at any given point in time
18:18:03 grapex: ok
18:18:07 And we *will* actually try to talk about it this time. :)
18:18:14 grapex and amcrn also suggested moving guest_agent_version to the instance table
18:18:27 esp: I think I like that idea better.
18:18:29 amcrn: ^
18:18:30 SlickNik: yep, give me a sec
18:18:48 esp: np, thanks!
18:19:06 #action esp to update the trove-guest-upgrades wiki with schema changes
18:19:24 grapex: i think it makes sense to have it in the instance table; using the heartbeat table as the source of truth for a guestagent version seems overreaching
18:19:40 but i haven't spent many mental cycles putting it through the paces
18:19:59 grapex: you meant that only status goes in agent_heart_beats, right?
18:20:10 version can live in the instance table
18:20:16 amcrn, agreed, the instance table looks like the most appropriate place
18:20:45 and 'upgrades' --> 'events' or 'history' or 'audit trail'
18:21:10 esp: Yeah I think version should live in the instance table, I like that more
18:21:29 grapex: cool
18:21:37 esp grapex amcrn: +1 to "version" in the instance table
18:21:48 Okay, so we've got some clear next actions here.
18:22:02 so going into the guestagent packaging, do we have a gut feel on what the default packaging strategy should be?
18:22:08 One last question, then I'll shut up, I swear
18:22:25 The MGMT API shows very specific info on updating the guest agent, such as where the swift file is
18:22:38 amcrn, looks like Swift could be the best option for any cloud
18:22:44 I think long term it would be good to simplify this somehow, so that Trove would simply know where to look for the upgrade
18:23:03 denis_makogon: wasn't referring to the endpoint, was referring to the packaging itself; i.e. deb vs. tar vs. ...
18:23:12 So Trove could, for instance, query Swift's metadata and see what version it said it had, see if that was more recent, and if so make the RPC call to update the agent
18:23:21 demorris: yep, swift seems like it would fit for a lot of folks
18:23:32 amcrn, ah, i see, then DEB looks like the production variant
18:23:33 That way operators wouldn't have to figure it out on their own when they made the mgmt call; they'd just say "update it!"
18:23:55 grapex: I like this idea but was hoping to get to it in a follow-up blueprint
18:24:02 esp: Ok, so long as it's a follow-up
18:24:07 I'm good. Thanks esp!
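[Editor's note: a minimal sketch of the Swift-metadata version check grapex floats above, for reference only. The container/object names, the metadata key, and the rpc_api.upgrade call are all illustrative assumptions, not part of the blueprint.]

    # Hypothetical sketch: decide whether to upgrade a guest agent by
    # comparing the version stored on the instance record against a
    # version recorded as Swift object metadata. swift_conn is expected
    # to be a python-swiftclient Connection.
    from distutils.version import LooseVersion


    def maybe_upgrade_agent(swift_conn, instance, rpc_api,
                            container='trove-agents',
                            obj='guestagent.tar.gz'):
        # head_object returns the object's headers, including any custom
        # 'x-object-meta-*' metadata set when the package was uploaded.
        headers = swift_conn.head_object(container, obj)
        available = headers.get('x-object-meta-agent-version')
        if available is None:
            return False  # no version metadata, nothing to compare against

        # Assumes the agent version lives on the instance record, per the
        # "version in the instance table" agreement earlier in the meeting.
        if LooseVersion(available) > LooseVersion(instance.guest_agent_version):
            rpc_api.upgrade(instance.id, container, obj)  # hypothetical RPC
            return True
        return False

With something like this, the operator-facing mgmt call could shrink to "update it!", since the location and version comparison are handled server-side.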
18:24:12 * notmyname is available for questions on this topic
18:24:19 https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ may be interesting to you too
18:24:27 grapex: auto updates would simplify things a lot
18:24:40 Okay, let's move on to the next topic in the interest of time.
18:24:47 thx notmyname!
18:25:12 #topic [openstack-dev] [Trove] MySQL 5.6 disk-image-builder element
18:25:36 #link http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19036.html
18:26:07 now that multiple datastores and versions are supported, we're coming to a point at which we need to think about a holistic strategy
18:26:24 on how to handle variances in versions; whether it be handled in the diskimage, the guestagent, or a combination thereof
18:26:37 i guess there's one question related to the agent: do we plan to refactor the mysql manager and make it able to skip installing packages?
18:26:45 some in-flight examples include MongoDB 2.4.9 (https://review.openstack.org/#/c/77461/)
18:26:58 and MySQL 5.6 (https://review.openstack.org/#/c/79413/)
18:27:06 denis_makogon: is your question part of the current topic or the previous one?
18:27:13 current
18:27:16 amcrn: haven't we already solved this in the agent with the packages portion of a datastore?
18:27:31 kevinconway: no, because the later versions of datastores don't have prebuilt packages
18:27:34 as said on the ML, there's no official 5.6 mysql package
18:27:52 Well, could we generalize how the guest agent pulls down the things it installs?
18:28:07 Or have it call something else that already exists and is general enough to do that?
18:28:25 well, to quickly summarize: if you take a look at that MySQL 5.6 review, it becomes clear that the installation for bleeding edge versions is fairly involved
18:28:33 Right now we're having the guest agent call a package system like apt-get. If we let it call some other smart thing as a strategy it could get us there.
18:28:37 is there something wrong with rolling a package for custom code and using that?
18:28:38 so, if one provider wants it on cloud-init, but another wants it in the diskimage-builder
18:28:45 you've effectively duplicated code in two different places
18:29:03 I think we are gonna use pip
18:29:20 I was looking at pip wheels but not sure it's mature
18:29:29 amcrn: I think there's a divide here though between operators who will want to customize Trove to run images that they've totally perfected. Since we allow datastores to be associated with images we already have that ability.
18:29:42 grapex: correct
18:29:53 what's unclear right now is how we're going to accept patches
18:29:57 However Trove should also be able to build a single image that's vanilla and spin up a variety of stuff, so it's easier to dev on and for the less initiated to play with
18:30:09 So I wonder if maybe MySQL 5.6 is just too bleeding edge
18:30:15 amcrn: does every datastore need to go in kickstart?
18:30:20 kevinconway: no
18:30:42 grapex: i'm not sure the gating factor of whether we should support a version should be predicated on whether the install is complicated
18:31:00 Maybe it's ok to say that if people want to deploy a datastore version like 5.6 they can, but they need to build their own images or packages rather than have it happen in trove-integration.
18:31:09 grapex: I'm wondering the same thing.
18:31:22 Do we only accept patches to the elements once datastores have proper package versions in appropriate upstream repos?
18:31:34 that's the underlying question here
18:31:38 is the question about patches to trove or redstack:kick-start?
18:31:41 trove-integration provides the framework/ability to automatically test things in an easy way.
18:31:43 amcrn: I think we should support what we can
18:31:45 grapex, it's totally ok, so i would suggest looking at heat-jeos
18:31:57 but in terms of plugging into trove-integration CI, there should be a limit on how complex it is
18:31:57 let's forget redstack kickstart
18:32:13 amcrn: so what's the question then?
18:32:27 so I have a question
18:32:31 i'm confused about the problem
18:32:33 kevinconway: patches to trove image elements, I think - which currently live in trove-integration
18:32:52 is it too difficult to build a mysql .deb package, or is it that we don't want to build one since oracle isn't making one for us?
18:33:09 it's certainly not bleeding edge
18:33:27 I just want to be sure I understand why we're not including that
18:33:53 so, let me re-explain the problem statement: we have 3 different ways to support a datastore install; the question is, what is the accepted set of guidelines to determine whether that gets merged publicly?
18:34:04 is a diskimage-only install patch-set acceptable?
18:34:14 if yes, how are we going to handle versioning?
18:34:18 amcrn: Good summary
18:34:48 will we have dozens and dozens of /elements/ubuntu-mysql-X.X/ folders as time continues?
18:34:50 amcrn: so is the question what gets merged into Trove or the dev environment setup in redstack?
18:35:21 kevinconway: nothing to do with redstack, it's about what can get merged in here: https://github.com/openstack/trove-integration/tree/master/scripts/files/elements
18:35:32 kevinconway: And another question is should what goes into Trove differ from what can work in redstack? Because if it won't work in the latter we won't be able to test it, at which point who knows if it works
18:35:35 So I think we should accept upstream only datastores that we can test upstream.
18:36:03 So in CI, how many images will we need to make for Trove?
18:36:15 grapex, a lot
18:36:18 so are we saying that unless it's redstack kick-startable, it shouldn't be merged into https://github.com/openstack/trove-integration/tree/master/scripts/files/elements ?
18:36:32 grapex, one per datastore and N for each version
18:36:41 Heh
18:36:50 is there a facility in redstack for pointing at an image and using it instead of kick-starting one?
18:37:16 so those with needs beyond the normal kick-start can prepare an image any way and attach it to the datastore
18:37:24 kevinconway: there is nothing truly special about the image
18:37:35 amcrn: I think it's stronger than that. Unless it's redstack-startable and has int-tests actually running, testing the image / datastore.
18:37:45 it's just that there are assumptions made by trove about what is included in the image
18:37:56 SlickNik: i'm fine with that. so now what's the criteria for when a manager should be forked vs. shared?
18:38:35 ex: you could modify the mongodb manager to deal with setParameter, because that's in 2.4.9 but not in 2.0.X, and still share that manager across both versions
18:38:35 amcrn: are we talking about images or managers?
18:38:41 it's all interconnected
18:39:05 amcrn, i guess if the datastore has a public package that can be pulled by apt-get/yum, the manager could be shared
18:39:12 amcrn, if not - forked
18:39:14 amcrn: true, but as you pointed out a manager could handle a wide range of images (point releases of the underlying datastore)
18:39:41 juice: right, but if you're clever enough you could create a god mysql manager class that handles 5.1, 5.5, and 5.6 if you were insane
18:39:56 so for example: should the manager always be force-forked on a major version? etc.
18:40:15 amcrn: I am :) I think having the manager restrict the version is a reasonable approach
18:40:32 amcrn: That's an interesting point. I'd anticipate you'd have logic in your manager to deal with nuances like those. And when it starts to get unwieldy, you'd probably have to fork.
18:40:55 amcrn: Haven't thought this through fully, so all seat of the pants here.
18:40:58 What's "force forked"?
18:41:09 amcrn: if a manager was proven to work with a given point release, it would be added to the acceptable range
18:41:13 git fork --force?
18:41:24 grapex: turn of phrase to mean force them to create a new manager
18:41:53 kevinconway: Knowing git, that's probably already a thing that exists :P
18:41:56 amcrn: In general I think different major versions will necessitate new managers, but let's not make that a rule.
18:42:03 amcrn, at least each new manager for a new version could inherit from a base manager
18:42:15 So here's a question: much of this image builder stuff just seems to be bash scripts that run to initialize an image
18:42:19 example: mysql-5.5 and 5.6
18:42:21 denis_makogon: i don't like that idea
18:42:32 object oriented programming means we need to program with more objects
18:42:42 So we have two routes: the bash scripts initialize the image while it's being built, so their results are baked in, or the guest agent installs a package later
18:42:55 Okay, 2 more minutes on this, and we need to move on in the interest of time :)
18:42:59 what if we enabled the guest agent to run these scripts to help with dev efforts?
18:43:13 Then it becomes a choice you make later, whether to bake the image for a specific type or not
18:43:14 But there's clearly some more thinking that needs to be done around this.
18:43:40 grapex: i'd need to see more specifics on how you'd accomplish that, but that sounds promising
18:44:08 amcrn: It seems like we've made bash our deployment technology of choice. :p
18:44:17 anyway, please review the existing patch-sets plus the mailing list thread, because i can easily see, without governance, multiple versions getting tacked onto a single manager; then that becomes unwieldy, making it brittle to change, etc.
18:44:18 Which is ok for some use cases I guess
18:44:36 amcrn: Ok. I'll try to review this soon
18:44:49 it's a bit difficult to convey the concerns via this medium
18:45:24 amcrn: Agreed. Let's discuss this outside this meeting, perhaps offline.
18:45:45 so, moving on
18:45:46 ?
18:45:48 SlickNik: as in we all disconnect and type into empty terminals?
18:46:16 kevinconway: That sounds like a metaphor for some kind of philosophical journey into the self.
18:46:42 #action SlickNik set up something to continue the "supported datastore versions" discussion.
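[Editor's note: a rough sketch of the shared-manager-with-version-gates shape discussed above, using amcrn's MongoDB setParameter example. Class and method names are hypothetical; nothing here reflects a decision made in the meeting.]

    # Hypothetical sketch: one manager shared across point releases,
    # gating version-specific features instead of forking the class.
    from distutils.version import LooseVersion


    class MongoDbManager(object):
        # Point releases this manager has been validated against, per
        # amcrn's "acceptable range" suggestion.
        SUPPORTED_VERSIONS = ('2.0.4', '2.4.9')

        def __init__(self, datastore_version):
            if datastore_version not in self.SUPPORTED_VERSIONS:
                raise ValueError('Unsupported MongoDB version: %s'
                                 % datastore_version)
            self.version = LooseVersion(datastore_version)

        def set_parameter(self, name, value):
            # setParameter exists in 2.4.x but not in 2.0.x, so the shared
            # manager gates the call rather than forking the whole class.
            if self.version < LooseVersion('2.4'):
                raise NotImplementedError(
                    'setParameter requires MongoDB >= 2.4')
            self._run_admin_command({'setParameter': 1, name: value})

        def _run_admin_command(self, command):
            raise NotImplementedError  # driver call elided in this sketch

The trade-off raised in the meeting is visible here: a few such gates are manageable, but once many versions accumulate in one manager the class becomes the "god manager" amcrn warns about, which argues for forking at some threshold (e.g. major versions).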
18:47:20 #topic Open Discussion
18:47:24 wow
18:47:28 i have a topic
18:47:29 another topic
18:47:39 SlickNik, Point in time recovery [denis_makogon]
18:47:42 if you were part of the key signing party then you need to actually go sign all the keys
18:47:49 kevinconway: LOL!
18:47:54 i'm looking at you EVERYONE
18:48:02 I was just there to commit identity theft.
18:48:04 kevinconway: i only looked at your id to steal your identity
18:48:16 amcrn: Lol! Beat you to the joke. :)
18:48:24 guys, two topics were skipped
18:48:25 * amcrn shakes his fist @ grapex
18:48:28 ;)
18:49:00 denis_makogon: Sorry, I didn't refresh the page since last night.
18:49:14 SlickNik, ok
18:49:22 #topic Point in time recovery
18:49:29 it's mine
18:49:32 Trove is able to perform instance restoration (a whole new instance, from scratch) from a previously stored backup in remote storage (OpenStack Swift, Amazon AWS S3, etc.). From an administration/regular user perspective, Trove should be able to perform point-in-time recovery. Basically it's almost the same as restoring a new instance, but the difference between restore (in terms of Trove) and recovery is huge.
18:49:32 Restore gives the ability to spin up a new instance from a backup (as mentioned earlier), but recovery gives the ability to restore an already running instance from a backup. For the beginning, Trove would be able to recover/restore a running instance from a full backup.
18:49:50 i've sent mail to the ML about that
18:49:55 #link https://wiki.openstack.org/wiki/Trove/PointInTimeRecovery
18:50:15 so i was confused. you seem to be combining two ideas, denis_makogon
18:50:29 so the main idea is to be able to restore your instance from a given backup at any time
18:50:30 one is point-in-time recovery and the other is recovering into a live instance
18:51:12 kevinconway: +1
18:51:26 googling said that restoring an instance from a backup (that was taken at some point in time) at any time is called point-in-time recovery
18:51:47 yes, but google runs a cloud service. they are competitors!
18:51:53 you cannot trust them
18:51:53 denis_makogon: I don't think people have had time to review this since it was just added this morning.
18:52:12 kevinconway: what what what
18:52:21 +1 SlickNik
18:53:07 I just want to re-iterate that we should add items to discuss at the Wednesday meeting on or before the Monday of that week.
18:53:28 SlickNik: +1
18:53:32 SlickNik: +1
18:53:34 ok
18:53:41 That way folks have some time to read the related bps.
18:54:11 but the ML post with the link to the BP and wiki page was sent like 2 weeks or less ago
18:54:12 denis_makogon: i support the idea of restoring the same running instance; restoring to new instances is what we have right now
18:54:32 denis_makogon: I think the next item was added a bit late, too. Let's get to it next meeting, as we won't be able to give it the full discussion time otherwise.
18:54:40 i see
18:54:54 then let's skip them all
18:55:19 denis_makogon: right, but you updated the content significantly since the monday discussion, and then added it to the meeting on tuesday night. we'll make sure to give it a look-see this week hopefully, appreciate the updates.
18:55:21 can we jump to the open discussion?
18:55:37 #topic Open Discussion
18:55:40 https://bugs.launchpad.net/trove/+bug/1291516
18:55:43 #link https://bugs.launchpad.net/trove/+bug/1291516
18:55:43 so yeah, key signing party
18:55:48 sign my keys
18:55:50 what will be allowed to go into icehouse now? just bugs?
18:56:02 kevinconway: This is about that federal investigation, isn't it?
18:56:03 i think broken tests are a more significant problem
18:56:10 i like pi.
18:56:13 kevinconway: do you see my signature on your key
18:56:15 kevinconway: lol
18:56:18 kevinconway: I've been signing on the key slacking :) (or was it the other way around?)
18:56:25 kevinconway: you may have to refresh your key from the keyserver
18:57:02 cp16net: Only bugs, unless you ask for a Feature Freeze exception for your bp.
18:57:02 juice: i see yours, but there were quite a few of us there
18:57:08 hub_cap hasn't even signed yet
18:57:25 denis_makogon: that bug is a dupe
18:57:32 SlickNik: ok
18:57:32 SlickNik, it's new
18:57:34 Pretty awesome their sandbox can detect when Python code is trying to execute commands
18:57:35 denis_makogon: I recently submitted a fix for the issue.
18:57:38 SlickNik, the gate is failing again
18:57:56 kevinconway: I am getting the feeling that most folks are unclear about the process
18:57:59 i think hub_cap told me i might be able to get the config params in the db with a ffe
18:58:08 cp16net: ffe?
18:58:10 SlickNik, take a look at the date and submission date
18:58:14 juice: i'll send out a bash script
18:58:16 feature freeze exception?
18:58:19 but he gave me a deadline of tuesday and i missed it by a day
18:58:20 :-/
18:58:21 Ah
18:58:22 you just enter your password and it will sign my key for you
18:58:24 yes
18:58:29 denis_makogon: Where are you seeing it fail?
18:58:34 don't read the source though
18:58:36 kevinconway: Ok. It's "1234567890"
18:58:44 SlickNik, i added links in the bug description
18:58:45 i'll ping him and see what the dealio is
18:58:48 denis_makogon: You might have to rebase your patch to make sure the fix is in.
18:59:08 SlickNik, == Agenda for Mar. 12 ==
18:59:08 * Trove Guest Agent Upgrades bp follow up
18:59:08 ** https://blueprints.launchpad.net/trove/+spec/upgrade-guestagent
18:59:08 * "[openstack-dev] [Trove] MySQL 5.6 disk-image-builder element" [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19036.html]
18:59:08 ** Discuss how to strategically handle multiple versions of a datastore in terms of diskimage-builder elements and/or guestagent managers.
18:59:11 ** https://review.openstack.org/#/c/77461/ (MongoDB 2.4.9)
18:59:13 ** https://review.openstack.org/#/c/79413/ (MySQL 5.6)
18:59:15 Thanks SlickNik!
18:59:15 ** https://review.openstack.org/#/c/72804/ (trove-integration changes to allow building different versions of a datastore by setting ENV vars appropriately before kick-starting)
18:59:18 * Point in time recovery [denis_makogon]
18:59:22 ** https://wiki.openstack.org/wiki/Trove/PointInTimeRecovery
18:59:24 * Data volume snapshot [denis_makogon]
18:59:26 ** https://wiki.openstack.org/wiki/Trove/volume-data-snapshot-design
18:59:28 oh, sorry
18:59:29 denis_makogon: Let's take this offline.
18:59:30 the patch is already up-to-date
18:59:31 wow
18:59:34 ok
18:59:38 in #openstack-trove
18:59:53 done
18:59:55 Thanks, that's all folks!
18:59:59 #endmeeting