19:02:30 #startmeeting infra
19:02:31 Meeting started Tue Mar 8 19:02:30 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:34 The meeting name has been set to 'infra'
19:02:37 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:43 #topic Announcements
19:02:54 #info It's my extreme pleasure to welcom Paul Belanger (pabelanger) to the infra
19:02:59 -core team and as an infra-root sysadmin.
19:03:03 #undo
19:03:03 Removing item from minutes:
19:03:07 darned stray newlines
19:03:07 welcome, pabelanger !
19:03:09 congrats pabelanger!
19:03:09 and typos
19:03:10 congratulations
19:03:10 o/
19:03:18 much deserved, pabelanger :)
19:03:20 :D
19:03:21 pabelanger: YAY!
19:03:23 #info It's my extreme pleasure to welcome Paul Belanger (pabelanger) to the infra-core team and as an infra-root sysadmin.
19:03:28 that's what i wanted
19:03:38 Thanks! I'm honored to join the team :)
19:03:39 congrats!
19:03:51 yay, congrats pabelanger :-)
19:03:53 pabelanger has been with us for years, pioneering some of our first efforts at puppet testing and style normalization
19:04:05 \o_
19:04:05 pabelanger, congrats!
19:04:07 helped design and maintain our pbx.openstack.org asterisk service
19:04:20 has pitched in all over the place really
19:04:49 so just wanted to say thanks! and hopefully the additional responsibility doesn't beat you down too quickly ;)
19:05:04 Happy to help where I can
19:05:09 and thanks again
19:05:12 congrats
19:05:30 pabelanger: when you get a moment, submit a change to system-config to realize your account on all servers, and we'll try to get a majority of the current infra-root admins to +2 it
19:05:48 fungi: ack
19:06:09 speaking of pabelanger, one of the two infra talks submitted to the summit was accepted: https://www.openstack.org/summit/austin-2016/summit-schedule/events/7337
19:06:19 "OpenStack Infrastructure for Beginners"
19:06:25 congratulations
19:06:32 congrats!
19:06:38 yes, we'll need to make sure we crack open our design summit session brainstorming soon too, probably next week
19:06:40 so thanks to him for leading the way on the talk submissions
19:07:01 ++
19:07:16 on that note i've confirmed with ttx and thingee that we'll stick with requesting what we had in tokyo, as far as room allocations
19:07:22 okay, on with the show
19:07:25 #topic Actions from last meeting
19:07:33 #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-01-19.15.html
19:07:38 "1. (none)"
19:07:50 we really don't seem to use #action much any more, likely my fault
19:07:56 #topic Specs approval
19:08:02 #link https://review.openstack.org/287577 Create elections.openstack.org
19:08:13 tonyb: jhesketh: you wanted to talk about this a little?
19:08:32 is this hoping to get set up in time for elections in a week-ish, or is this for newton?
19:08:50 it'd be great to have it for elections this week
19:09:04 it looks like it has a couple positive roll-call votes already
19:09:05 The original idea was to use it for this election cycle but I understand if we just trial it this time
19:09:24 and use it as the primary source of information in the ocata election
19:09:32 nominations open friday, right?
19:09:45 anteaya: right
19:09:57 what timeline do you want to see for the work on this spec?
19:10:07 I think we agree by friday would be tough to do
19:10:09 o/
19:10:42 anteaya: sure. It'd be great to have it done for, say, the opening of the TC nominations
19:10:43 to begin with it's just publishing the output of the candidate list, so it's useful but I don't think (and tonyb will correct me if I'm wrong) it'll matter if it appears halfway through the election too
19:10:48 which is about 2 weeks from now
19:10:56 i'm still leaning more toward having this appear somewhere on governance.o.o, but i think that's an implementation detail. it's not hard to graft in a separate publication tree at a specific url in the vhost config (and saves us adding another subdomain in dns, another ssl cert, et cetera), plus seems more like the logical place for it
19:10:57 but yes, it'd be best to have it before opening
19:11:25 I'm all for doing great things for elections including communication
19:11:29 fungi: I'm cool with that but thought it'd be harder than the proposal, happy to be wrong
19:11:43 elections are a big part of our governance
19:11:45 I'm hesitant to do something quick or fast as the amount of confusion around elections is high
19:11:57 do speak up if you disagree with me
19:12:15 anteaya: would this help with any of the confusion?
19:12:17 and the governance site is not called technical-committee.openstack.org but i guess ttx and the tc may object if we put non-tc-controlled content there
19:12:37 jhesketh: I'm concerned it would create some rather than alleviating it
19:12:50 anyway, feels like i'm bikeshedding now so i'll stop
19:12:52 anteaya: how so?
19:12:53 I'd be for implementing this for the fall elections
19:13:09 last minute stuff of any kind really throws people
19:13:18 I'm fine if I'm in the minority
19:13:23 anteaya: I hear your point.
19:13:25 Would it count as last minute if it were announced today?
19:13:31 (but not necessarily live today)
19:13:34 bkero: it would for me
19:13:47 and announcing something that isn't up would create confusion
19:13:48 maybe we should invite the tc to bikeshed on the proposal and hosting location?
19:14:05 anteaya: it may be I'm underestimating the level of confusion it'd cause
19:14:18 can't hurt to bring it up during open discussion in the meeting an hour from now, if there's time
19:14:18 tonyb: I may be overestimating
19:14:30 just to let them know the spec has been proposed
19:14:32 but my position would be this would be great to have in place for next fall
19:15:44 anteaya: If it's more appropriate I can talk to you about this after the meeting
19:15:53 if you like
19:16:01 but you asked for the infra opinion
19:16:12 and as a member of infra, I'm offering my opinion
19:16:16 anteaya: not trying to shut anything down etc, just don't want to overstay my welcome
19:16:19 so anyway, tonyb: jhesketh: timeline aside, you feel like this is ready to go up for final council vote with a spec approval in a couple days? and see if a tc member wants to -1 it calling for further discussion/deferral?
19:16:50 or give it another week to collect some more opinions/input?
19:17:00 fungi: the scope of the spec is small enough (imo) that it could be voted on.. resolving the location and the timing is possibly something that can be done in the spec
19:17:51 sounds like it might be best to get consensus on that before it's approved
19:17:56 tonyb: what do you think?
19:18:20 I'm fine with a week from now.
19:18:36 my understanding is this should have minimal impact on the elections process, so i also don't mind if it shows up now, partway through, or after this cycle
19:18:52 jeblair: correct.
19:19:03 but i take anteaya's point, and would suggest that perhaps a soft-launch might be the thing if we do decide to launch it earlier...
19:19:32 o/~.
19:20:07 zaro: that's like in and out of the meeting in one line
19:20:30 okay, so let's push to next week, and give the tc members a heads up
19:20:30 oops. had to leave for just a min.
19:21:06 i'm happy to volunteer to bring it to the tc's attention
19:21:16 thanks jeblair!
19:21:30 useful to have a tc member participate in our meetings ;)
19:21:36 fungi, time for #action ?
19:21:41 oh, sure
19:21:59 o/ good morning
19:22:05 #action jeblair Give the TC a heads up on "Create elections.openstack.org" spec
19:22:06 o/
19:22:21 #topic Priority Efforts: Store Build Logs in Swift
19:22:33 #link https://review.openstack.org/#/c/269928/1/specs/logs-in-afs.rst AFS strategy
19:23:06 so probably not much to discuss during the meeting... just want to get some feedback on comments I put in that review
19:23:27 oh, i should respond to that!
19:23:48 (there is also https://review.openstack.org/#/c/254718/ which updates the swift strategy in case we wanted to return to that)
19:24:21 #link https://review.openstack.org/254718 swift strategy update
19:25:08 and then at some point we're going to have to make a decision. :)
19:25:37 yes, that would be helpful
19:25:41 so it'd be good for folks to read up on both of those and leave questions/thoughts/etc
19:26:14 definitely. anything else on this?
19:26:24 not from me, thanks :-)
19:26:33 i think they're both valid options, so it's a tricky call.
19:26:50 #topic Gerrit tuning (zaro)
19:27:05 ++ to more info on how sharding would give us resiliency and more disk space
19:27:20 there are a few things here, but probably start with the most urgent. the gerrit jvm gc issue?
19:27:55 yes, the gerrit jvm gc issue persists. I think the change to update the timeouts has helped (we no longer fall over consistently through the week, only once the GC starts to get bad)
19:28:01 i haven't found a solution to this besides trying to throw more memory at Gerrit. i think that's what we discussed last time as well.
19:28:28 plan there is to have a plan for a gerrit server rebuild onto a larger flavor, with preemptive announcement about the new ip address for companies who have blocked egress for their employees, yeah?
19:28:28 I think we were at the point of trying to understand how much memory
19:28:32 I think what we need to do is determine how much space gerrit needs, boot a new VM of that size, tell people what the new IP will be, then switch at some point after people have a chance to update firewalls
19:28:39 so we could create a new vm and announce the ips
19:28:46 anteaya: yup, we need to know how big the new VM needs to be
19:28:51 right
19:29:01 so how big does the new vm need to be?
19:29:03 last time we did 1 month advance notice, right?
19:29:33 asselin: was it one month?
19:29:33 anteaya: I don't think anyone knows
19:29:33 right
19:29:33 any guesses?
19:30:02 12 parsecs?
19:30:11 currently there's 12 gigs allocated. so i think what i last suggested was upping it to 20G+?
19:30:16 currently we are in a 30GB VM with 12GB allocated to the jvm
19:30:29 Any thoughts on overallocating for future-proofing?
19:30:30 if we go to a 60GB VM we can probably bump the JVM up to at least 30GB
19:30:52 that sounds good.
19:31:16 and we actually use 20G of ram
19:31:27 including apache, etc...
19:32:05 I'm guessing it's not possible to resize the vm or reattach the existing ip to a new one?
19:32:05 the other 10gb is mostly for cache/buffers so we don't have terrible i/o performance
19:32:13 jhesketh: correct
19:32:24 jhesketh: resize is not allowed and rax doesn't do floating IPs
19:32:39 clarkb: or at least not on the flavor we're currently using
19:32:44 fungi: it was one month advance notice: http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
19:32:50 right, some flavors are resizeable
19:32:51 (maybe due to pvhvm, as jeblair theorized)
19:33:00 when does xenial come out?
19:33:05 april
19:33:06 something
19:33:14 Usually near the end
19:33:15 right around summit time
19:33:22 I think thursday before summit
19:33:23 week before summit
19:33:25 april 21 looks like?
19:33:26 clarkb: yep
19:33:37 fungi: yeah
19:33:40 maybe we can apt-get dist-upgrade to xenial before we do the switch...
19:33:53 #link https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule Ubuntu Xenial Release Schedule
19:33:54 since we'll need to launch the machine soon to reserve the ip
19:34:14 jeblair: do you know if anybody has tried our puppet manifests on Xenial yet?
19:34:27 it would certainly be nice to not have two ip moves for the ram increase and the distro upgrade
19:34:32 pabelanger: I doubt it, and they likely don't work around systemd
19:34:55 looks like xenial is still on puppet 3.7.2
19:34:55 if we decided to do that, we'd have 6-8 weeks to fix it...
19:34:56 but maybe they do, not sure how well the sysv init compat layer works for compatibility with upstart's sysv compat layer
19:35:03 sorry 3.8.4
19:35:29 pabelanger: well, we use puppetlabs' packages anyway
19:36:09 fungi: right, was curious if they made the jump to puppet 4 like fedora did. Either way, fun times ahead with systemd
19:36:10 what will be the policy for upgrading to xenial, in terms of our components, and of nodes for jobs?
19:36:28 why are we talking about a distro upgrade? that seems like more work.
19:36:41 i don't know that we're going to have a policy. we'll be lucky to have more than a loose plan
19:36:43 +1 on upgrades being more work
19:36:51 zaro: because the next lts happens in about a month too
19:36:58 zaro: and we will want to upgrade to xenial at some point
19:37:01 that topic is what brought me here. Might be off topic, but have you considered migrating to Debian Jessie? (just yes/no, no need to fork the discussion)
19:37:09 zaro: if we combine them we can avoid two ip changes in a short span of time
19:37:11 to avoid double migration i guess
19:37:16 zaro: but you are right, it makes things more complicated
19:37:19 we had the same situation in gozer on a gerrit upgrade
19:37:27 to upgrade OS + gerrit or just gerrit
19:37:37 hashar: that's a definite rathole discussion. we can bring it up during open discussion if we have one this time
19:37:45 imho, i think an os upgrade is a bit tight in terms of schedule
19:37:47 it's 2x the work if we do them together. it's 4x the work if we do them separately.
19:38:00 well, i remember it took a lot of testing going from precise to trusty
19:38:09 fungi: yeah, I guess it is better for out-of-meeting talks. Thanks!
19:39:13 so i suppose it's important to decide if we're going to want to urgently start upgrading systems to xenial (noting that trusty's going to be supported for a long time still and we're even now struggling to finish migrating off precise for many of our servers)
19:39:41 yes, if we want it to just live on trusty for a few more years, that's fine
19:39:41 didn't pleia2 recommend waiting for a bit?
19:39:47 she did
19:39:57 i'm personally fine if rebuilding review.openstack.org on xenial is a 2017 timeline
19:40:07 fungi: I support that
19:40:09 +1
19:40:15 or 2018, more likely
19:40:16 and maybe start with precise to xenial jump
19:40:20 ?
19:40:24 it's mostly about when we want to tackle the systemd changes
19:40:26 clarkb: ++
19:40:28 well, review.o.o is on trusty now
19:40:32 but sure
19:40:34 which is the main thing I'm concerned about WRT our puppet configs
19:40:37 for other systems
19:40:40 so did we like the idea of a 60GB vm?
19:40:46 fungi: that's how i took that suggestion
19:40:55 that was the last/biggest suggestion I was tracking
19:41:07 yeah, 60gb seemed to have some consensus
19:41:10 anteaya: it sounded like zaro felt that a 30GB jvm would be sufficient?
19:41:21 zaro: ^ if so then yes, I think a 60GB VM will work and we should launch that size
19:41:24 60gb instance i mean (for 30gb jvm)
19:41:30 clarkb: and a 60GB vm would give us 30GB for the jvm?
19:41:36 is this something that we should consider baremetal for?
19:41:45 anteaya: based on rough napkin math, yes, at least 30GB for the jvm
19:41:46 i think that should work.
19:41:56 +1 60GB vm
19:42:15 and/or given the pain of having to do it more than once, just going for the biggest machine possible?
19:42:18 i'm concerned about additional complexity with baremetal conflating this upgrade, for similar reasons that i'm concerned about upgrading to xenial as part of the rebuild
19:42:24 wikimedia Gerrit is on baremetal with 24GB used out of 32GB
19:42:28 fungi: yes, the images are different for example
19:42:36 and we can't easily provide our own if necessary, etc
19:42:37 hashar: gerrit 2.11?
19:42:38 fungi: yeah, I was being more cautious about infra-cloud
19:42:43 fungi: okay that makes sense
19:42:54 14GB being buffers/cache. Wikimedia probably has similar traffic to yours
19:42:59 anteaya: still 2.8 :(
19:43:01 and maybe we should look into what is required for resizing
19:43:02 * zaro likes jhesketh's idea, go big!
19:43:07 so that if we do need to resize it is an option
19:43:11 a 60g vm would probably give us room for 40g or more for java
19:43:14 hashar: yeah, 2.8 has less of a gc issue than 2.11
19:43:48 we started using a lot more ram in january
19:43:51 okay, so sounds like we're agreed on the instance size, and nobody's heavily advocating for a non-vm or for lumping in a distro upgrade
19:43:54 anteaya: I guess that is why WMF waits for you to upgrade Gerrit before we even consider doing it ;-}
19:44:08 hashar: I don't blame you
19:44:45 do we have an infra-root volunteer to spin up the replacement instance? and a volunteer (can be non-root, or the same person) to write up the announcement about the ip address change once that's done?
19:44:56 do we know why memory use increased in january?
19:45:02 jeblair: we upgraded
19:45:06 jeblair: it actually started in december
19:45:14 so if we announce this week, that puts us into the second or third week of april for the migration
19:45:16 fungi i can do it
19:45:19 http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=27&rra_id=4&view_type=&graph_start=1424412872&graph_end=1457466056
19:45:24 shall we select a date?
19:45:32 jeblair: yes, we bumped the jvm from 8GB to 12GB
19:45:34 in december the system sat mostly idle while everyone was sipping eggnog
19:45:39 I'm away the 1st week of april, here for the 2nd and 3rd weeks
19:45:40 clarkb: ah ok, thanks.
19:45:45 but the leaking and problems started in december on 8GB
19:46:08 just weren't painful enough for us to bump the jvm up until people came back to work from holidays
19:46:20 yolanda: thanks!
19:46:26 are folks around the 2nd and 3rd week of april?
19:46:38 fungi: re baremetal, I agree we shouldn't baremetal lightly
19:46:39 I will be around pre-summit
19:46:40 or are folks taking time off before the summit?
19:46:44 clarkb: thanks
19:46:45 i'm around except for the week before the summit and during
19:46:46 i should be
19:46:52 zaro: great
19:46:57 #agreed Gerrit for review.openstack.org will be moved to a new 60GB virtual machine instance
19:46:57 though in my use of onMetal from rax it's been straightforward and very much nova-like
19:47:03 so second week of april so zaro is around
19:47:27 I'm going to offer Thursday April 14th as a date?
19:47:41 #action yolanda Boot a replacement review.openstack.org and communicate the new IP address and maintenance window in an announcement E-mail
19:47:43 that's not great for me
19:47:44 there is definitely a point of diminishing returns in terms of jvm memory size. it can take some doing to find the sweet spot
19:47:53 nibalizer: what is better?
19:48:00 nibalizer: are you around that week?
19:48:01 i will be on vacation the week after though.
19:48:02 the 7th?
19:48:06 I'm away
19:48:17 we don't all need to be around if there are enough volunteers
19:48:21 ya
19:48:22 my only travel plans are for the summit
19:48:23 plus that's just inside of 4 weeks
19:48:30 okay if folks want the 7th
19:48:33 fungi: same
19:48:33 April 7th
19:48:36 ++
19:48:39 so i can be around whenever up until the saturday before the summit
19:48:39 let's do the 14th
19:48:41 WFM
19:48:46 either date
19:48:50 it gives us an extra week, people will be around, etc
19:48:59 the reason for the lead time is corp firewalls
19:49:11 7th is better i think
19:49:16 14th is good for me. 20:00 utc? earlier? later?
19:49:26 I am inclined to give them an extra week if we can, otherwise they will complain, and the whole idea is to avoid their complaining
19:49:29 zaro: why is that?
19:49:50 i will be on vacation the week after the 14th (week before summit)
19:49:53 should we just move the ip to a load balancer host?
19:50:15 nibalizer: I don't think so
19:50:17 that way we can do gerrit moves and upgrades with less disruption
19:50:17 nibalizer: to avoid complaints?
19:50:26 it's our anti-complainment field
19:50:27 because you still have to do this dance with the load balancer
19:50:41 right, but haproxy doesn't have memory leaks
19:50:49 i have no problem with the 14th, just know that i won't be around the week after.
19:50:51 or bugs of any kind!
19:50:53 nibalizer: no, but haproxy has config updates and os upgrades like everything else
19:50:56 fungi: exactly
19:51:00 clarkb: Only once: next time will be easier
19:51:01 nibalizer: I think the right way is a floating IP
19:51:12 and maybe if we're switching instances we can use that?
19:51:12 persia: not once, we try to keep things relatively up to date
19:51:14 bkero: but we don't have that on rackspace
19:51:20 bkero: floating IPs don't exist
19:51:28 Ah, I thought that was just for the instance type that it was on
19:51:52 i can see potentially adding a floating ip and naming it something like ssh.review.openstack.org or something, but only having it for the use of people stuck behind firewalls who want us not to change addresses
19:51:55 * persia has too much lag and goes back to lurking. The "only once" was about load balancer IP dancing.
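For reference, the napkin math behind the sizing agreed above works out roughly as follows, using only the figures quoted in the meeting (a 30GB flavor with a 12GB JVM heap and ~20GB actually in use today, versus the proposed 60GB flavor with a 30GB heap). The assumption that everything not handed to the JVM stays available to Apache and the kernel page cache is illustrative, not a measured breakdown.

```python
# Rough napkin math for the review.openstack.org memory budget, based on the
# figures quoted in the meeting. The split between Apache and the kernel page
# cache is an illustrative assumption, not a measured value.

current_flavor_gb = 30   # existing instance size
current_heap_gb = 12     # JVM heap currently allocated to Gerrit
current_used_gb = 20     # approximate RAM actually in use, including Apache

proposed_flavor_gb = 60  # flavor agreed on in the meeting
proposed_heap_gb = 30    # minimum JVM heap target (jeblair suggested 40+ would fit)

# Whatever is not given to the JVM stays available to Apache and the page
# cache, which is what keeps Gerrit's git I/O from getting terrible.
current_cache_gb = current_flavor_gb - current_used_gb        # ~10 GB today
current_non_heap_gb = current_used_gb - current_heap_gb       # ~8 GB (Apache, etc.)
proposed_headroom_gb = proposed_flavor_gb - proposed_heap_gb - current_non_heap_gb

print(f"current cache/buffers: ~{current_cache_gb} GB")
print(f"current non-heap usage: ~{current_non_heap_gb} GB")
print(f"headroom left on 60 GB with a 30 GB heap: ~{proposed_headroom_gb} GB")
```

By that arithmetic, even raising the heap toward the 40GB jeblair mentioned would still leave more than the ~10GB of cache/buffers the current server relies on for reasonable I/O.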
19:52:03 nibalizer: is any date the second week of april better for you?
19:52:12 nibalizer: or is that whole week bad for you?
19:52:15 and even then, if we need to move it to a different region, or a different provider, or rackspace just needs to renumber their networks, we still have this issue
19:52:25 anteaya: i don't know what my plans are yet
19:52:31 best not to count on me for anything
19:52:34 okay
19:52:48 zaro: what about earlier in the week of the 14th?
19:52:52 zaro: say the 12th?
19:52:56 i see making things hard on companies who have draconian egress filtering policies as a feature, personally
19:52:56 or the 11th
19:53:04 we could do the 11th
19:53:11 fungi++
19:53:12 fungi, what we did for the gozer migration was just update dns entries. We had some automated scripts to move the data, then announce the outage, update the database, and update the dns entry to point to the new ip
19:53:12 yes, those all work. just know that i'll be out the week after.
19:53:13 then we have all of that week if we have issues
19:53:21 zaro: yup
19:53:24 the internet does not guarantee ip addresses will stay the same forever. that's why we have dns
19:53:43 is April 11th bad for anyone
19:53:47 who knows their schedule
19:54:00 having that in an automated way meant a really small outage
19:54:05 i can also do monday april 11
19:54:08 fungi: dns might catch on some day.
19:54:10 fungi: awesome
19:54:30 yeah, we automated and iterated on the migration scripts dozens of times really
19:54:31 what time of day do folks like?
19:54:49 I like when we start about 1600 or 1700 and have a 4 hour window
19:54:50 myself
19:54:50 yolanda: the outage for this is really pretty brief anyway. the impact is companies who need to petition their firewall admins through a bunch of red tape to change where they allow ssh traffic to go
19:55:08 fungi: how long of an outage do we expect?
19:55:33 so we can have an ip reserved and announce that ip publicly with enough time
19:55:36 we can look at how long it was for the trusty upgrade cut-over. my memory says we were down for just a few minutes
19:55:40 yolanda: yes
19:55:41 (it's worth noting that users can use https to push changes, but third-party ci systems will still want to use ssh stream events)
19:55:45 fungi: awesome
19:55:58 and indeed, we should note that in the ip announcement :)
19:55:59 so a 30 minute window?
19:56:11 yolanda: ^
19:56:13 really, our data is on a cinder volume and in a trove instance. the nova virtual machine instance for review.o.o is itself pretty stateless
19:56:40 30 minute - 1 hour range, to give more room?
19:56:46 it's cloud native enterprise java
19:57:00 fungi had suggested 20:00 utc earlier
19:57:09 i don't mind announcing 1 hour. java is enterprise scale so our estimates should be too ;)
19:57:13 April 11th, 20:00 to 21:00 utc?
19:57:18 wfm
19:57:22 jeblair, i have the numbers from the gozer data migration, not from infra. Do you have timings from previous moves?
19:57:38 anteaya: 20:00 is when our usual activity peak starts to wane, which is the only reason we tend to gravitate toward then or later
19:57:44 yolanda: rsync makes it fast
19:57:47 always plan a schedule window larger than anticipated
19:57:53 hashar: ++
19:57:55 fungi: yup, makes sense if we expect a short outage
19:58:00 hashar is wise
19:58:10 worst thing that can happen: maintenance is completed early and people praise how good you are at doing maintenance
19:58:19 hashar: hehe
19:58:21 keep the delta small ahead of time, then rsync copies the difference with services off
19:58:24 then update dns
19:58:31 so if you plan 30 minutes, make it 2 hours. Then if you screw something up and take 1 hour to recover, you are still done 30 minutes before the deadline
19:58:32 fungi, two more mins
19:58:39 pabelanger: AJaeger: well, we got to one topic we had held over from last week, but not to either of yours (ubuntu-trusty and constraints job migrations). want to keep them on the agenda for next week?
19:58:47 fungi: did you want to compose an agreed?
19:58:48 then the boss / end users / managers / product folks etc. are all happy because you completed it soooo fast
19:59:03 fungi: next week is fine for me
19:59:07 fungi, leave it on - let's see how we move forward with constraints...
19:59:31 agreed Gerrit RAM upgrade maintenance scheduled for Monday, April 11, 200:00-21:00 UTC
19:59:38 look good?
19:59:42 200:00
19:59:43 fine
19:59:45 +1
19:59:52 #agreed Gerrit RAM upgrade maintenance scheduled for Monday, April 11, 20:00-21:00 UTC
19:59:59 thanks
20:00:01 i picked a day with fewer hours
20:00:03 tick-tock
20:00:04 :)
20:00:05 ;-à
20:00:07 those 200-hour days really get to me
20:00:12 thanks everyone!
20:00:15 #endmeeting
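The cutover plan sketched near the end (pre-seed the copy so the delta stays small, stop the service, rsync the difference, then update DNS) could be automated along the lines yolanda described for the gozer migration. The following is a minimal illustrative sketch only: the old hostname, Gerrit site path, and init commands are assumptions rather than the actual system-config values, and the real move would rely on the project's own launch and puppet tooling, with the review data itself living in trove and on a cinder volume as fungi noted.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the rsync-then-DNS cutover discussed above.

Intended to run on the replacement review.openstack.org instance. The old
hostname, site path, and service commands below are assumptions for
illustration, not the actual system-config values.
"""
import subprocess

OLD_HOST = "review-old.openstack.org"      # hypothetical name for the current server
GERRIT_SITE = "/home/gerrit2/review_site"  # assumed Gerrit site path


def pull_site(delete=False):
    """rsync the Gerrit site from the old server onto this one."""
    cmd = ["rsync", "-a"]
    if delete:
        cmd.append("--delete")
    cmd += [f"{OLD_HOST}:{GERRIT_SITE}/", f"{GERRIT_SITE}/"]
    subprocess.run(cmd, check=True)


# 1. Pre-seed while Gerrit is still running on the old server, so the final
#    copy during the announced window only has to move a small delta.
pull_site()

# 2. At the start of the announced window: stop Gerrit on the old server and
#    copy the remaining difference with services off.
subprocess.run(["ssh", OLD_HOST, "sudo", "service", "gerrit", "stop"], check=True)
pull_site(delete=True)

# 3. Start Gerrit on this (new) instance.
subprocess.run(["sudo", "service", "gerrit", "start"], check=True)

# 4. Update the review.openstack.org DNS record to point at the new IP
#    (done through the DNS provider, not shown here).
```

Since only steps 2 through 4 need the announced outage, the agreed 20:00-21:00 UTC window leaves plenty of slack, in line with hashar's advice to plan a larger window than you expect to need.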