22:02:57 <jeblair> #startmeeting zuul
22:02:57 <openstack> Meeting started Mon Nov 28 22:02:57 2016 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:02:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:03:00 <openstack> The meeting name has been set to 'zuul'
22:03:01 <nibalizer> o/
22:03:06 <jasondotstar> o/
22:03:12 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/zuul/2016/zuul.2016-11-21-22.01.html
22:03:30 <jeblair> #link only slightly inaccurate agenda https://wiki.openstack.org/wiki/Meetings/Zuul
22:03:40 <morgan_> o/
22:03:44 * morgan_ lurks harder
22:03:54 <jeblair> #topic Actions from last meeting
22:04:02 <jeblair> #action jeblair work with Shuo_ to document roadmap location / process
22:04:10 <jeblair> ETHANKSGIVING
22:04:18 <olaph> exactly
22:04:30 <auggy> o/
22:04:34 <jeblair> #topic Status updates (Nodepool Zookeeper work)
22:04:51 <jeblair> we didn't *quite* get the new builder into production
22:05:22 <jeblair> with the unofficial day-before-thanksgiving holiday, we really only had 2 days last week
22:05:33 <jeblair> but we still made a lot of progress regardless
22:05:39 <jeblair> nb01.openstack.org does exist now
22:05:44 <SpamapS> I heard some disturbing news btw
22:05:47 <SpamapS> that only one ZK was running
22:06:08 <SpamapS> I want to point out that this will present significant operational challenges.
22:06:11 <pabelanger> yes, that is on nodepool.o.o today
22:06:22 <jeblair> yes, we had that conversation in this meeting last week: http://eavesdrop.openstack.org/meetings/zuul/2016/zuul.2016-11-21-22.01.log.html
22:06:25 <SpamapS> ZK is not really good at recovering with only one node.
22:06:47 * fungi wonders what applications are really good at recovering from the loss of a spof
22:07:02 <SpamapS> no no no.. it's worse than everything else I've dealt with that has on disk state.
22:07:03 <clarkb> again aiui it's the same situation as today with gearman...
22:07:10 <clarkb> no just igbire recovery
22:07:13 <clarkb> and move on
22:07:29 <SpamapS> Unless you're running it in a ramdisk that you clear every time the process starts, it's going to be a _beast_.
22:07:30 <fungi> i would like to know what igbire was a typo for
22:07:38 <fungi> because it's an awesome typo
22:07:44 <clarkb> *ignore
22:07:47 <mordred> o/
22:07:54 <pabelanger> I'd also be concerned if we couldn't get ZK working with a single node too, since all of our testing now is single ZK
22:07:55 <morgan_> fungi: lol i was wondering the same thing
22:08:00 <fungi> clarkb: okay, sense made. thanks!
22:08:14 <clarkb> basically it's not a regression to "fallback" on that behavior
22:08:23 <SpamapS> if zookeeper unexpectedly dies for any reason, you'll be left replaying transactions from the last time it successfully gracefully stopped/started.
22:08:31 <clarkb> and you can have more resiliency if you choose to run more
22:08:39 <fungi> SpamapS: so basically avoid "dirty start" scenarios and make sure if state is lost then it's really completely lost at start?
22:09:06 <jeblair> SpamapS: it has no checkpoint function?
22:09:14 <SpamapS> fungi: correct. If the process is killed in any violent way (VM sudden death, segfault, SIGKILL, etc.), you need to clear the on-disk store entirely, or be prepared to wait.
22:09:26 <mordred> wow. that's awesome
22:09:28 <SpamapS> jeblair: It did not 4 years ago.
22:09:33 <SpamapS> It may have grown one. I don't know.
22:09:40 <SpamapS> The authors explicitly said "Oh, don't do that."
22:09:44 <SpamapS> Run 3.
22:09:53 <clarkb> I guess the difference is we dont also store the info in mysql anymore
22:09:53 <SpamapS> Or restart a lot.
22:09:55 * fungi wonders if harlowja has more recent experiences with such scenarios
22:10:00 <harlowja> who what
22:10:08 <jeblair> well, if it's not possible to run with one, then we probably need to drop zk and use something else
22:10:19 <fungi> recovering modern versions of zk from a dirty shutdown
22:10:21 <jeblair> because all-in-one is an explicit design goal
22:10:47 <mordred> yah. I thought the risk of "one" was just "if you crash, the system won't be up because you crashed" - which is fine for one node
22:11:05 <SpamapS> Ah they added snapCount
22:11:09 <mordred> but if the failure case is "after all crashes in single node you can expect to wait for a complete transaction log replay" - that is not fine for one node
22:11:11 <SpamapS> ok, so set snapCount low for single-server
22:11:18 <harlowja> fungi no such experience from me :-P
22:11:29 <fungi> harlowja: darn. thanks for jumping in anyway!
22:11:32 <SpamapS> (apologies, my information is from 2012.
22:11:32 <harlowja> np
22:11:33 <harlowja> ha
22:11:33 <SpamapS> )
22:11:35 <mordred> SpamapS: woot!
22:11:42 <mordred> SpamapS: I'm _very_ glad your info is out of date
22:11:46 <SpamapS> me too
22:11:54 <SpamapS> because that was a long 9 hours to recover the juju database for UDS Copenhagen.
22:11:55 <mordred> SpamapS: is snapCount in the zookeeper config?
22:11:55 <pabelanger> Yay for no rewrite
22:11:57 <jeblair> yay we don't have to start over (yet) :)
22:12:03 <pabelanger> jeblair: ++
22:12:03 <SpamapS> mordred: it is
22:12:10 <mordred> SpamapS: cool. also - yay 9 hours
22:12:15 <mordred> SpamapS: can I assume you were  ... not happy ? :)
22:12:18 <harlowja> perhaps u guys want to email the zookeeper ML
22:12:23 <SpamapS> #link https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_configuration
22:12:25 <harlowja> 2012 was a while ago :-P
22:12:29 <fungi> i also wonder if we'll be stashing nearly the amount of raw state or churn into nodepool zk as the uds juju db had
22:12:46 <jeblair> fungi: not at first, but possibly later on
22:12:49 <harlowja> http://zookeeper.apache.org/lists.html :)
22:12:58 <SpamapS> mordred: I was meh, but elmo was very.. very sad.
22:13:17 <jeblair> fungi: once we put nodes into it, and later, zuul builds
22:13:19 <SpamapS> anyway, n/m ignore me
22:13:28 <SpamapS> single server should be fine with lowish snapcount
22:13:40 <pabelanger> 100,000 appears to be the default
22:13:48 <SpamapS> This says 10,000
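[editor's note: pulled together, a single-node zoo.cfg along the lines discussed might look like the fragment below; the values and paths are illustrative assumptions, not the production config]

```ini
# illustrative single-node ZooKeeper config -- values are assumptions
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# the default is 100,000 transactions between snapshots; a much lower
# value bounds how much transaction log a dirty restart has to replay
snapCount=1000
```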
22:13:53 <fungi> jeblair: okay, i still have no basis for comparison to know if those are in a similar order of magnitude to whatever uds was doing unfortunately
22:13:55 <jeblair> well, we learned something we should pay attention to when we build all-in-one deployment tooling
22:14:04 <SpamapS> But I'd say let's play with it a bit
22:14:19 <phschwartz> +
22:14:22 <phschwartz> ++
22:14:29 <jeblair> fungi: er, yeah, let's assume i revise my statement to somehow drop the comparison part and just express relative growth of our use of zk.  :)
22:14:36 <fungi> "dirty shutdown" will certainly be a fun scenario to test
22:14:48 <clarkb> set it to 1k and we'd still only snapshot once an hour on average with test instances in zk
22:14:51 <pabelanger> my local testing is all-in-one right now, I can try setting snapcount and killing things
22:15:35 <fungi> i think we need to set up some sacrificial servers running it and then take a hatchet to their innermost circuits
22:15:48 <fungi> just to be really, really sure
22:15:54 <Shrews> fungi: i suggested that last week  :)
22:16:04 <fungi> clearly i'm channeling you
22:16:15 * fungi has a side job channeling the living
22:16:25 <Shrews> fungi: i am mostly dead
22:16:29 <SpamapS> pretty easy to automate. kill -9 is about as dirty as you can get without offending somebody. ;)
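[editor's note: an untested pseudo-shell sketch of the automated dirty-restart testing being proposed; the service path, port, and health check are assumptions, not commands anyone in the meeting ran]

```shell
# pseudo-shell sketch, not run anywhere -- paths and commands are assumptions
for i in 1 2 3 4 5; do
    pkill -9 -f zookeeper                      # the dirty shutdown
    start=$(date +%s)
    /usr/share/zookeeper/bin/zkServer.sh start
    # wait for the four-letter-word health check to come back
    until [ "$(echo ruok | nc localhost 2181)" = "imok" ]; do sleep 1; done
    echo "restart $i recovered in $(( $(date +%s) - start ))s"
done
```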
22:16:57 <SpamapS> clarkb: that's probably fine. the number of transactions potentially being replayed is the real problem, not the frequency of snap
22:17:00 <mordred> SpamapS: explain "without offending somebody" ... I've never accomplished that in real life
22:17:24 <SpamapS> mordred: I'm offended by that.
22:17:33 <fungi> having a transaction-based checkpoint option rather than time-based might be nice
22:17:46 <fungi> but we can always calibrate
22:17:51 <clarkb> SpamapS: I am sure there is some trade off to be made depending on performance requirements but I just don't think we are in such a situation
22:17:57 <harlowja> mordred someday u will
22:18:01 <pabelanger> re: nb01.o.o, it would be great to land https://review.openstack.org/#/c/403869/ today, then we should be ready to run nodepool-builder on the server.  I've added the cinder volume already
22:18:03 <clarkb> worst case you start without data, and repopulate from cloud api
22:18:10 <jeblair> so nb01.o.o exists but isn't quite running yet -- pabelanger has kindly agreed to take over driving that so i can make sure i'm available to review zuul patches
22:18:32 <fungi> clarkb: no, _worst_ case you start without data and let it clean up all the leaked alien nodes/images
22:19:00 <SpamapS> fungi: no, it _is_ transaction based. So setting it 10x lower is the right solution.
22:19:14 <fungi> SpamapS: oh! i misread. so yes, it is what i was hoping for
22:19:27 <SpamapS> clarkb: agreed. If we get to high-perf it might also make more sense to have 3 since downtime will likely be costing us more too.
22:19:50 <SpamapS> and the clients are really good at detecting and failing over.
22:19:58 <jeblair> SpamapS: yeah, i think we do want to move to 3 eventually, but we want to dog-food one while we still can (and we don't care about the spof issue)
22:20:23 <jeblair> by the time nodepool itself is no longer a spof, even i will want to run 3 :)
22:20:25 <fungi> yes, having a resilient cluster for large/high-volume deployments sounds fine
22:20:50 <pabelanger> shouldn't be much to stand up the other 2 servers too, the puppet-zookeeper module looks to support it
22:21:14 <fungi> but being unable to effectively set up an all-in-one deployment for "small" or test sites is also something we want to be possible
22:21:33 <fungi> s/unable/able/
22:21:34 <jeblair> fungi: i think i agree with what you were trying to say there :)
22:21:44 * fungi spliced sentences in his head again
22:22:38 <pabelanger> ++
22:22:56 <jeblair> so if folks can heed pabelanger's request to quickly review deployment-blocking changes, we should be able to start running this soon and get actual experience with it
22:23:09 <jeblair> Shrews, pabelanger: anything else about nodepool-zk?
22:23:42 <pabelanger> jeblair: it would be good to finish our pause build / upload logic this week
22:23:47 <pabelanger> if possible
22:23:54 <SpamapS> is there a topic to focus on?
22:24:00 <Shrews> pabelanger found a json exception failure that disturbs me greatly. i have no explanation for it as it should not be possible
22:24:05 <SpamapS> (a gerrit topic I mean)
22:24:26 <fungi> should be the one indicated in the spec. checking
22:24:43 <jeblair> fungi: well, we switched to just using feature/zuulv3 branch, specwise
22:24:54 <jeblair> so we can set a topic for deployment things if we want
22:24:58 <fungi> oh, right-o
22:25:14 <fungi> and http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-zookeeper-workers.html doesn't actually have the part from the template where a topic is documented
22:25:17 <jeblair> but right now, it's just one change i think
22:25:39 <jeblair> fungi: was replaced with http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-zookeeper-workers.html#gerrit-branch
22:25:53 <pabelanger> Ya, just 403869 right now
22:25:53 <fungi> yep, thanks
22:26:01 <clarkb> (which already has one +2 so its close :) )
22:26:24 <fungi> branch:feature/zuulv3
22:26:39 <jeblair> yeah, i mostly wanted to make sure people were aware that pabelanger may come with further requests like that :)
22:26:44 <fungi> ...is what we have in our priority efforts query
22:26:49 <Shrews> 400970 should also land before a production run
22:27:30 <jeblair> Shrews: probably a good idea, yeah :)
22:27:38 <fungi> #link https://review.openstack.org/403869
22:27:38 * clarkb adds that to the list
22:27:40 <pabelanger> Shrews: ack, will look
22:27:43 <fungi> #link https://review.openstack.org/400970
22:28:07 <clarkb> I like the shade error on the integration test for that
22:28:17 <clarkb> wee floating IPs
22:28:21 <mordred> yah
22:28:30 <mordred> clarkb: that's happening more frequently now
22:28:41 <fungi> #link https://review.openstack.org/#/q/status:open+AND+branch:feature/zuulv3
22:28:41 <mordred> clarkb: like, frequently enough that we may need to investigate it for real
22:28:59 <clarkb> mordred: awesome
22:29:15 <mordred> clarkb: yah. that's one word for it
22:29:43 <jeblair> mordred: like, a problem crept into nova/neutron?
22:30:18 <clarkb> oh and now its apparently in merge conflict
22:30:20 <clarkb> Shrews: ^
22:30:20 <fungi> s/crept/stumbled drunkenly while carrying a battleaxe/
22:30:30 <Shrews> clarkb: fixing
22:30:50 <jeblair> well, lets move on...
22:30:51 <jeblair> #topic Status updates (Zuul test enablement)
22:31:27 <jeblair> there are many patches!  i *think* i'm caught up on reviews for these now
22:31:31 <mordred> yay!
22:31:58 <jeblair> if i missed something, or anyone needs me to pitch in on something, please let me know
22:33:10 <Shrews> jeblair: https://review.openstack.org/400836
22:33:49 <jeblair> Shrews: yeah, i'm almost, but not quite, caught up on nodepool patches
22:33:52 <jamielennox> jeblair: an opinion on https://review.openstack.org/#/c/400003/ - but it's not urgent
22:33:56 <Shrews> just needs a +2. we can figure out the positive alien test case later
22:34:42 <pabelanger> yay for patches merging
22:35:07 <pabelanger> I still have a few in merge conflict, I'll try and clean them up tonight / tomorrow
22:36:16 <jeblair> jamielennox: yeah, i can do that -- that's also similar to another thing that came up recently -- i think it was the path to clouds.yaml so that the cli commands could work correctly...
22:36:31 <jeblair> jamielennox: is there a reason you added that on the master branch though, instead of zuulv3?
22:36:42 <clarkb> also pabelanger has comments on it
22:36:44 <jeblair> (that is the reason i did not see the change)
22:37:28 <jamielennox> jeblair: not specifically, it applies to both and figured it would get merged in but i probably should have done it on v3
22:37:40 <pabelanger> Ya, could have used that patch recently :) have diskimage-builder in a different venv, but ended up writing a wrapper script to properly source things
22:38:09 <pabelanger> but, like the idea of defining the location of disk-image-create
22:38:11 * clarkb uses symlinks to solve this problem fwiw
22:38:15 <jamielennox> ours is similar but we're running nodepool from systemd via the ../venv/bin/ path and so it has no PATH to dib
22:38:17 <clarkb> works great for virtualenv and git-review
22:38:30 <clarkb> jamielennox: yup thats exactly the solution ^
22:38:40 <pabelanger> jamielennox: I do the same, we should compare things :)
22:38:42 <fungi> i do exactly the same for _everything i pip install
22:39:05 <jamielennox> yea, can always symlink it into /bin or currently we're modifying the PATH in the unit, but this just seemed easier
22:39:19 <fungi> heck, i have ~/bin/pip as a symlink to ~/pyenvs/pip/bin/pip where the latest version of pip is installed
22:39:23 <jlk> We could also expose things using update-alternatives
22:39:41 <jamielennox> there's a bunch of ways :) i figured i'd float this and see what people thought
22:39:51 <jlk> (which puts things in the path)
22:40:09 * mordred likes the jamielennox patch - but that's probably clear because of the +2
22:40:18 <fungi> (though the example makes more sense with ~/bin/virtualenv symlinked to ~/pyenvs/virtualenv/bin/virtualenv which i use to create all the other virtualenvs)
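[editor's note: a self-contained sketch of the symlink trick clarkb and fungi describe, putting a symlink to a venv's console script somewhere already on PATH so the venv never needs activating; the directories and the stub script below are demo assumptions, not the real deployment layout]

```shell
# self-contained demo -- all paths are scratch assumptions
demo=$(mktemp -d)
mkdir -p "$demo/venv/bin" "$demo/bin"

# stand-in for a console script installed into a venv (e.g. disk-image-create)
printf '#!/bin/sh\necho "ran from venv"\n' > "$demo/venv/bin/disk-image-create"
chmod +x "$demo/venv/bin/disk-image-create"

# the symlink that makes it reachable without activating the venv
ln -s "$demo/venv/bin/disk-image-create" "$demo/bin/disk-image-create"

PATH="$demo/bin:$PATH" disk-image-create   # prints: ran from venv
```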
22:40:20 <greghaynes> jamielennox: huh, dib should be in the venv, I must be missing something...
22:40:22 <jeblair> i definitely think we should be able to configure things like this.  i think the ongoing tension is whether it should be in nodepool.yaml or a different file.
22:40:30 <greghaynes> but I can check that out later
22:40:38 <mordred> jeblair: ++
22:41:04 <jamielennox> greghaynes: the venv isn't activated we're just running the python out of the venv directly and dib is being invoked as an application not a python module
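[editor's note: a hypothetical systemd unit fragment showing the PATH-in-the-unit workaround jamielennox mentions; install paths are made up for illustration]

```ini
# hypothetical unit fragment -- install paths are assumptions
[Service]
Environment=PATH=/opt/nodepool/venv/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/opt/nodepool/venv/bin/nodepool-builder
```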
22:41:32 <mordred> yah. that would do it for sure
22:41:37 <greghaynes> ah. Theres a thought that in the (very near) future dib will have a python api
22:41:44 <greghaynes> its part of v2
22:41:48 <pabelanger> jlk: ya, that would be good too. I should try that in my local env
22:41:50 <clarkb> I guess my only concern is that we don't bake in a bunch of functionality that already exists in the OS (basically avoid redundant tooling)
22:41:51 <jeblair> it's worth noting that in openstack's case, we have a configuration/content separation by way of the system-config and project-config repos.  project-config repo reviewers review 'content' like what things are installed in what diskimages, and what clouds are in use.
22:42:03 <clarkb> so yes I agree yuo should be able to configure this, and you can via $PATH
22:42:10 <SpamapS> clarkb: I have to agree with you there. Setting PATH is a pretty standard thing.
22:42:29 <ianw> yeah, with dib v2 you could conceivably "import diskimage_builder" and run the main() from python
22:42:35 <greghaynes> yep
22:42:42 <SpamapS> That we have PATH insanity because of virtualenvs is a relatively new idea.
22:42:57 <pabelanger> clarkb: Agree, if people are opposed adding it to nodepool.yaml, symlinks or PATH is a great option too
22:43:20 <clarkb> but I don't feel strongly enough to prevent anyone from adding that to nodepool
22:43:43 <jamielennox> yep, there's a bunch of deploy specific ways to solve this - i don't mind what we do, just thought i'd propose it
22:43:44 <SpamapS> Same
22:43:51 <jeblair> nodepool has so little configuration that isn't content that nearly everything is in nodepool.yaml.  i'm okay with adding non-secret configuration to nodepool.yaml.  but likely the more of it that is more "system" focused rather than "project" focused may push me toward moving that to its own file.
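[editor's note: for concreteness, the sort of thing the 400003 patch proposes might read like the fragment below in nodepool.yaml; the key name and placement are assumptions here, not taken from the patch]

```yaml
# hypothetical nodepool.yaml fragment -- option name is an assumption
diskimages:
  - name: ubuntu-xenial
    # explicit path to the dib entry point, for deployments where the
    # builder's venv does not have disk-image-create on PATH
    dib-cmd: /opt/dib-venv/bin/disk-image-create
```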
22:44:48 <ianw> jamielennox: there is sort of the question of why dib can't be installed in the same virtualenv as your nodepool-builder ... it's kind of odd to have them split?
22:44:49 <jeblair> but even today, we have the zmq and zk servers in there, so it's already a mix of the two.
22:45:04 <clarkb> jeblair: ya
22:45:18 <fungi> could just have nodepool take a list of conffiles and merge the yaml-parsed dict (what to do with duplicate keys is the main concern there)
22:45:33 <SpamapS> seems like the patch deserves discussion in the review
22:45:40 <jeblair> ianw: (indeed it should be already -- it's a dependency)
22:45:45 <fungi> that would allow anyone to split up their configuration along whatever lines make sense
22:46:11 <SpamapS> (not that I'm not enjoying this discussion.. but this does feel like an IRC review of the patch. :)
22:46:13 <fungi> though this is all straying pretty far from the topic of reenabling zuul tests
22:46:36 <pabelanger> ianw: I have tested nodepool and diskimage-builder in the same venv, the issue arises if you don't source the venv first and just call ./venv/bin/nodepool-builder, disk-image-create not in path
22:47:18 <jeblair> any other zuul test enablement status updates?
22:47:52 <jeblair> #topic Progress summary
22:47:58 <ianw> pabelanger: ok ... let's #zuul this
22:48:18 <jeblair> SpamapS: what did you have in mind for this part of the agenda?
22:48:33 <jeblair> i don't think we've actually exercised this since our agenda-brainstorm
22:49:01 <SpamapS> jeblair: A quick rundown of the board and a chance for people to review it and speak up if they want to move things around.
22:49:06 <SpamapS> https://storyboard.openstack.org/#!/board/41
22:49:22 <SpamapS> jeblair: yeah I have been dealing with meatspace things. ;)
22:49:27 <jeblair> #link https://storyboard.openstack.org/#!/board/41
22:49:48 <Shrews> SpamapS: my thing in progress is actually done
22:49:56 <SpamapS> So, if I can ask everyone to just take a look at that board, and consider whether anything needs to be added, removed, or moved.
22:49:59 <SpamapS> Shrews: woot
22:50:10 <SpamapS> Shrews: moved
22:50:17 <rcarrillocruz> i'll move the devstack-gate roles refactoring to in-progress
22:50:24 <rcarrillocruz> i have a long-list of dependent changes now
22:50:33 <rcarrillocruz> and pabelanger also did some stuff on that iirc
22:51:19 <SpamapS> rcarrillocruz: I just added you as a user of the board, so you should be able to move things now.
22:51:34 <rcarrillocruz> cool, thx
22:51:59 <pabelanger> rcarrillocruz: Yes, I've seen your patches. Want to do some reviews on that, maybe work with clarkb to see how we can run them today
22:52:03 <jeblair> Shrews: i think phschwartz is 'in-progress' on 2000770
22:52:24 <clarkb> rcarrillocruz: pabelanger random scan of that shows they fail a lot
22:52:24 <SpamapS> feels like the general story of "nodepool changes" needs to be fleshed out and maybe moved to in progress?
22:52:37 <phschwartz> jeblair: I am. I have implemented the base of a DAG locally and will be pushing a WIP up soon.
22:52:49 <clarkb> I guess that further up the stack
22:52:58 <fungi> reviewing the state of the "Zuulv3 Operational" board seems like an excellent way to so the progress summary portion of the agenda. great idea
22:52:59 <Shrews> jeblair: that's for SpamapS, i guess
22:53:07 <fungi> s/so/do/
22:53:10 <rcarrillocruz> yeah, working on them, i'll ping you later on what is good to review for now
22:53:13 <SpamapS> jeblair: which one is 2000770 .. it's hard to find a number on that board. ;)
22:53:21 <jeblair> Shrews: yep, I got S'd
22:53:31 <jeblair> SpamapS: i think phschwartz is 'in-progress' on 2000770
22:53:37 <clarkb> rcarrillocruz: one quick comment, these changes don't actually seem to use the new playbooks. Can you organize it so that every change is self-testing? I don't want to review and merge a bunch of dead code
22:53:42 <jhesketh> SpamapS: could you add me to the board as well please so I can track the branch merging progress
22:54:11 <clarkb> rcarrillocruz: or am I missing something important?
22:55:07 <jeblair> SpamapS: well, story 768 is referring to the next phase of zuul-nodepool work which we are not yet ready to start
22:55:07 <phschwartz> SpamapS: it is the dependency graph work.
22:55:33 <SpamapS> jeblair: OH.. so the stuff going on now isn't that? Ok, I'll move it back to backlog.
22:56:04 <SpamapS> jhesketh: added
22:56:11 <jhesketh> thanks :-)
22:56:19 <rcarrillocruz> clarkb: i started doing roles in independent changes, then created the 'ansibly' changes, that actually depend on those role changes and replace code from d-g bash
22:56:25 <SpamapS> phschwartz: I need a title
22:56:33 <clarkb> rcarrillocruz: I'd prefer we don't do it that way, its too hard to review
22:56:34 <rcarrillocruz> but i can do everything self-testing by merging them
22:56:34 <SpamapS> or was it not even in the board yet?
22:56:48 <clarkb> rcarrillocruz: I would make each thing its own change that adds the playbook and uses it
22:56:56 <jeblair> SpamapS: phschwartz dag work is titled "Forward port..."
22:57:01 <jeblair> SpamapS: should probably be retitled :)
22:57:06 <clarkb> d-g is self testing so you should be able to see upfront what does and doesn't work
22:58:04 <SpamapS> Ah ok
22:58:25 <jeblair> SpamapS: and yeah, the stuff now is nodepool-builder.  the next thing is nodepool-launcher along with updated zuul-nodepool protocol.  next step in that is to refresh/approve this spec: https://review.openstack.org/305506  but we want to really run nodepool-builder first so we have a chance to make any changes based on real-world use of zookeeper
22:58:26 <SpamapS> phschwartz: I assigned the task to you and marked it in progress. It would help if you can reference the story: and task: in commit messages. :)
22:58:44 <phschwartz> SpamapS: will do.
22:58:48 <clarkb> rcarrillocruz: ok I see how this works, I think it would be easier to grok if we made each thing enable + new playbook
22:59:12 <SpamapS> jeblair: ok I'll try and update that story a bit to explain what it is.
23:00:14 <jeblair> SpamapS: i think 767 is the story for current nodepool work
23:00:18 <SpamapS> jeblair: I added "Make job trees into graphs" to 'todo'.
23:00:24 <SpamapS> jeblair: k I'll add that too
23:00:26 <SpamapS> we're running out of time
23:00:30 <SpamapS> anything else urgent?
23:00:35 <SpamapS> I want to let peple go
23:00:36 <SpamapS> people
23:00:43 <jeblair> thanks everyone!
23:00:47 <jeblair> #endmeeting