16:00:31 <rhallisey> #startmeeting kolla
16:00:32 <openstack> Meeting started Wed Aug  5 16:00:31 2015 UTC and is due to finish in 60 minutes.  The chair is rhallisey. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:36 <openstack> The meeting name has been set to 'kolla'
16:00:40 <rhallisey> #topic rollcall
16:00:45 <SamYaple> o/
16:00:46 <rhallisey> hello
16:00:51 <jasonsb> hi
16:00:56 <akwasnie> hi
16:00:57 <sdake> o/
16:01:03 <pbourke> hi
16:01:08 <echoingumesh> hi
16:01:17 <jpeeler> hey
16:01:56 <rhallisey> looks like that's everyone, but I'll give it another moment
16:02:36 <rhallisey> #topic Announcements
16:02:46 <rhallisey> k go ahead sdake
16:02:48 <sdake> so if you dont mind i'll handle this section rhallisey
16:03:41 <sdake> #1: kolla has been accepted by TC vote into the big tent https://review.openstack.org/#/c/206789/
16:03:53 <sdake> this is all because of our fantastic community
16:04:10 <pbourke> woohoo
16:04:15 <rhallisey> woo grats everyone!
16:04:18 <inc0> Long live kolla:) as TTX said
16:04:20 <sdake> i've built 3 rockstar teams in openstack - first heat then magnum then kolla - really exciting for me
16:04:25 <pbourke> and fantastic PTL *ahem* :p
16:04:38 <sdake> ptl is just a facilitator :)
16:04:48 <sdake> although I do a bunch of technical work as well :)
16:04:56 <sdake> so give me credit for that, at least 12% credit ;)
16:05:03 <SamYaple> 11%
16:05:06 <SamYaple> best i can do
16:05:13 <sdake> joke from the avengers movie
16:05:28 <SamYaple> joke from pawn stars
16:05:32 <rhallisey> haven't seen it :)
16:05:51 <sdake> #2 - we are in charge of our own schedule and releases for the moment but we will still follow the upstream schedule
16:06:08 <sdake> what this means is our deadline is end of July (iirc) for liberty-3
16:06:27 <sdake> then there is a 3-4 week FFE period (feature freeze exception)
16:06:36 <sdake> where the absolutely critical things that aren't finished can be done
16:06:49 <sdake> but ideally we should be feature frozen whenever the upstream schedule is
16:06:53 <inc0> end of July is in -6 days;)
16:06:58 <rhallisey> August
16:07:12 <sdake> #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
16:07:17 <rhallisey> -6 days lol
16:07:26 <sdake> ya aug 4
16:08:01 <sdake> we have until september 4th (another month) to close out the release and fix any problems with it
16:08:20 <sdake> i'd expect we will lock down somewhere in the middle of the rc period except for critical bug fixes
16:08:50 <sdake> #3 - samyaple changed his affiliation in stackalytics so our diversity is more accurate as a result - and looks even more fantastic
16:09:02 <sdake> he changed it to what it really was ;)
16:09:06 <SamYaple> updated*
16:09:12 <sdake> so its not like he is gaming the system or something
16:09:20 <sdake> ya updated
16:09:24 <SamYaple> stackalytics just never tracked it correctly
16:09:25 <sdake> ok thanks thats it go ahead rhallisey
16:09:36 <rhallisey> cool
16:09:58 <rhallisey> well nice work everyone, lots of exciting news after being accepted into the big tent
16:10:07 <rhallisey> and nice work on the l2 release
16:10:21 <rhallisey> onward!
16:10:22 <sdake> oh ya that should have been in announcements ;)
16:10:29 <rhallisey> #topic Liberty-3 planning
16:10:29 <sdake> grats on l2 release folks!
16:10:42 <rhallisey> #link https://launchpad.net/kolla/+milestone/liberty-3
16:11:09 <rhallisey> so we have a lot of BPs here
16:11:18 <rhallisey> 9 are still open
16:11:25 <inc0> I can take on logs stuff
16:11:29 <rhallisey> and 2 open that are essential
16:11:33 <inc0> if noone objects
16:11:37 <rhallisey> inc0, ok assign yourself
16:11:45 <rhallisey> https://blueprints.launchpad.net/kolla/+spec/ansible-swift
16:11:48 <rhallisey> #link https://blueprints.launchpad.net/kolla/+spec/ansible-swift
16:11:50 <sdake> we have more that need to be filed as well I think
16:11:55 <rhallisey> anyone wanna take on swift?
16:12:02 <rhallisey> sdake, I think so
16:12:07 <sdake> I'd like a core reviewer to take on each ansible blueprint as well
16:12:12 <rhallisey> I want to at least get coverage for critical and high
16:12:30 <sdake> so every core reviewer understands how the ansible code works
16:12:36 <rhallisey> any core that doesn't have an ansible blueprint wanna take on swift?
16:12:39 <sdake> doing one (takes 4-8 hours) will teach you the whole system
16:12:55 * sdake points at pbourke
16:13:09 <SamYaple> pbourke knows the ansible code
16:13:13 <pbourke> I'll take it, but
16:13:33 <pbourke> just be aware I think there's a little more work in Swift than some of the others
16:13:41 <sdake> well its just that you sort of did swift containers already
16:13:56 <pbourke> but yeah agree it makes sense
16:13:59 <sdake> maybe you can get someone to help?
16:14:03 <rhallisey> I can ask jmccarthy
16:14:09 <rhallisey> if you don't want to
16:14:16 <pbourke> well I sit beside him ;)
16:14:20 <sdake> ya maybe you guys can team up on it
16:14:30 <rhallisey> nice
16:15:01 <sdake> that leaves gnocchi and zaqar
16:15:05 <rhallisey> another that is high
16:15:07 <rhallisey> #link https://blueprints.launchpad.net/kolla/+spec/ansible-ceilometer
16:15:08 <sdake> which are sort of ancillary services
16:15:25 <rhallisey> does ceilometer officially work? I haven't tried it
16:15:34 <rhallisey> sdake, + ceilo
16:15:37 <sdake> i think its coming along but haven't tried it either
16:15:42 <sdake> ya agree rhallisey
16:16:13 <sdake> maybe we can get jpeeler or mandre to take that one
16:16:20 <sdake> although i think jpeeler is doing ironic
16:16:21 <sdake> and we need ansible for ironic
16:16:29 <rhallisey> ya wfm
16:16:42 <sdake> jpeeler can you tackle ironic ansible codebase?
16:16:52 <sdake> if so, i'll send an email to mandre about ceilometer
16:17:05 <jpeeler> i guess that fits since i was already working on it
16:17:08 <SamYaple> any chance anyone wants to do Trove?
16:17:18 <SamYaple> i really want it to land but so far no work has begun
16:17:26 <rhallisey> SamYaple, I'm going to get ceph going then I'll try for trove
16:17:33 <SamYaple> rhallisey: awesome
16:17:48 <SamYaple> we can do external ceph right now with kolla and ansible, would be nice to have containers
16:17:50 <sdake> rhallisey samyaple has a ref implementation of ceph in his yodu repo
16:17:50 <sdake> check that out
16:17:52 <rhallisey> the current cinder should work with a ceph cluster
16:18:07 <rhallisey> just needs some additional config
16:18:11 <sdake> yes I'd like to go completely to ceph
16:18:26 <sdake> rhallisey coolsvap did cinder container and i think it does ceph support with ansible
16:18:43 <sdake> not cinder container but ansible support for cinder container
16:18:52 <SamYaple> yea its just config things
16:19:18 <rhallisey> using tgt is more setup work than actually using external ceph
16:19:24 <rhallisey> so I expect it to work
16:19:48 <SamYaple> way more work
16:19:52 <rhallisey> ya
16:20:00 <rhallisey> but it's good to have internal storage around
16:20:05 <rhallisey> but it should be lvm
16:20:10 <sdake> its configurable iirc
16:20:14 <SamYaple> rhallisey: i can work with you on ceph containers. the procedure for the initial cluster is hardest
16:20:14 <rhallisey> ya it is
16:20:20 <sdake> samyaple has actually done a thorough review
16:20:21 <SamYaple> if you do the containers i can do the ansible stuff quickly
16:20:28 <rhallisey> kk
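(A note on the "external ceph is just config" point above: wiring cinder to an existing ceph cluster is mostly a matter of pointing the RBD volume driver at the cluster, roughly as sketched below. The backend name, pool, and user are illustrative placeholders, not values shipped by kolla.)

    # cinder.conf sketch for an external ceph backend -- illustrative values only
    [DEFAULT]
    enabled_backends = rbd-1

    [rbd-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_pool = volumes
    rbd_user = cinder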
16:20:44 <rhallisey> I just wanted to draw attention to another BP real quick
16:20:51 <rhallisey> #link https://blueprints.launchpad.net/kolla/+spec/containerize-dependencies
16:20:51 <sdake> so sounds like we need a blueprint for ceph then
16:20:56 <sdake> can someone file that?
16:20:59 <SamYaple> we have one sdake
16:20:59 <rhallisey> sdake, I think we do
16:21:06 <SamYaple> rhallisey: is assigned
16:21:16 <pbourke> was the templating discussed at all?
16:21:22 <SamYaple> wait
16:21:27 <SamYaple> can we jump back to logging for a second
16:21:43 <SamYaple> can we clarify what we are planning with that.
16:21:46 <rhallisey> SamYaple, ya I just wanted to make sure your review gets attention
16:21:55 <rhallisey> #link https://review.openstack.org/#/c/208451/
16:21:59 <SamYaple> we _need_ logging to files, but are we requiring central logging?
16:22:03 <rhallisey> SamYaple, ok go ahead
16:22:28 <SamYaple> are we calling ELKStack or central logging essential for l3
16:22:32 <rhallisey> #topic Logging
16:22:35 <sdake> what I'd like to see is per-node data container logging to files, with a logstash thing that forwards to a central logging service
16:22:54 <sdake> i realize this is imperfect
16:22:54 <SamYaple> logstash stashes
16:23:01 <sdake> because some services cant store to files
16:23:07 <sdake> i thought logstash forwarded
16:23:08 <SamYaple> what is this central logging you are referring to
16:23:11 <sdake> or was part of that
16:23:14 <SamYaple> something we implement?
16:23:22 <SamYaple> or just "optional"
16:23:29 <sdake> well we are not going to implement syslog or anything like that
16:23:31 <SamYaple> like we dont have a container, but you can build your own thing
16:23:38 <sdake> but we should try to have an integrated solution for the problem
16:23:41 <jasonsb> i thought it was optional
16:23:48 <SamYaple> it has to be optional
16:23:50 <sdake> ya optional is fine
16:24:01 <sdake> but we should provide some mechanism of logging that is coherent
16:24:07 <SamYaple> ok so we just need logging to be rounded up on each node and optionally be forwarded somewhere
16:24:13 <sdake> because what we have now is a charlie foxtrot ;)
16:24:29 <jasonsb> maybe this will be motivation for oslo.log to work correctly
16:24:32 <sdake> i think that makes sense
16:24:36 <SamYaple> ok thats fine with me
16:24:55 <SamYaple> so the central logging blueprint can stay in discussion
16:25:03 <dims> jasonsb: please log a bug against oslo.log? :)
16:25:11 <sdake> oslo.log is fine
16:25:14 <SamYaple> and the logging blueprint will be unifying the logs and then optionally allowing them to be forwarded
16:25:39 <rhallisey> ok cool
16:25:50 <sdake> the problem is some services outside openstack don't even log to files
16:25:52 <jasonsb> sdake: maybe there was just a miscommunication between oslo and nova a while back
16:26:13 <SamYaple> sdake: yea thats fine, rsyslog still has to exist in the logstash stuff
16:26:26 <sdake> jasonsb well file a bug - wasn't aware there was a conflict between nova and oslo.log ;)
16:26:37 <SamYaple> sdake: log everything to rsyslog and spit it to the disk, then optionally forward it
16:26:50 <jasonsb> +
16:26:50 <SamYaple> thats what the midcycle agreed to and that covers everyone
16:26:51 <jasonsb> +1
16:26:57 <sdake> ok wfm
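(A minimal sketch of the logging approach agreed above: rsyslog on each node rounds everything up into local files, with forwarding to a central collector left optional. The file path and collector host below are assumptions for illustration only.)

    # /etc/rsyslog.d/kolla.conf -- illustrative sketch
    # round up everything on the node into a local file
    *.*  /var/log/kolla/all.log
    # optional: forward over TCP to a central collector (hypothetical host)
    #*.* @@logs.example.com:514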
16:27:13 <rhallisey> cool sounds good
16:27:23 <rhallisey> ok we'll move on
16:27:32 <rhallisey> #topic gating
16:27:38 <SamYaple> woo
16:27:38 <sdake> actually rhallisey
16:27:40 <sdake> can you #undo that
16:27:44 <sdake> and go to templating first
16:27:49 <rhallisey> #undo
16:27:50 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x951eb90>
16:27:52 <sdake> pbourke had a topic on it he wanted to discuss
16:27:57 <rhallisey> #topic templating
16:28:02 <rhallisey> pbourke, go ahead
16:28:30 <pbourke> just wanted to see what the plans were for it, as I kind of need it yesterday :p
16:28:52 <sdake> there is an etherpad with an example prototype implementation of keystone available
16:28:55 <sdake> sec let me find it
16:28:59 <pbourke> ah that's right I saw that
16:29:26 <sdake> https://blueprints.launchpad.net/kolla/+spec/dockerfile-template
16:29:30 <SamYaple> i believe coolsvap|away was leading that
16:29:36 <SamYaple> whats the progress?
16:29:50 <sdake> i think we need to come to a conclusion on whether that example is acceptable to everyone
16:30:01 <inc0> akwasnie, did you start with coolsvap|away ?
16:30:03 <pbourke> bp is assigned to akwasnie atm, akwasnie are you taking over?
16:30:23 <sdake> #link https://etherpad.openstack.org/p/kolla-dockerfile-template
16:30:39 <inc0> pbourke, they'll cooperate
16:30:42 <sdake> I don't recall coolsvap|away taking ownership on that
16:30:53 <akwasnie> inc0: yes
16:30:54 <SamYaple> sdake: i discussed that with him and akwasnie
16:31:00 <sdake> oh roger
16:31:01 <SamYaple> they said they would take lead
16:31:04 <pbourke> inc0, akwasnie: sounds good guys
16:31:06 <sdake> well you are on a different tz than me :)
16:31:10 <SamYaple> :)
16:31:20 <sdake> so folks can we get agreement on the template format
16:31:21 <sdake> I'm good with it
16:31:28 <sdake> i think its a little overcomplicated but should be fine
16:31:34 <SamYaple> we should get the base template out for that ASAP akwasnie
16:31:42 <SamYaple> that way if anyone disagrees they can say so
16:31:45 <pbourke> it would be nice to have a little more discussion later
16:31:48 <pbourke> on the ins and outs
16:31:51 <akwasnie> agree
16:32:04 <SamYaple> yea i think the base example should allow for that conversation pbourke
16:32:06 <pbourke> a WIP, even better
16:32:11 <pbourke> cool
16:32:31 <sdake> also i'd like the filesystem layout documented in the etherpad
16:32:38 <akwasnie> i will send base example in 2 days max, i think
16:32:40 * rhallisey still reading
16:32:48 <SamYaple> akwasnie: ok
16:32:55 <sdake> akwasnie if you can add the filesystem layout to the etherpad that would rock ;-)
16:33:27 <rhallisey> I'm ok with adding this as a discussion for our next meeting
16:33:31 <sdake> akwasnie also are we putting these in a new directory?
16:33:53 <akwasnie> where would you like me to place those templates in kolla dir tree?
16:33:54 <rhallisey> this does change the template a bit, but I think we need it
16:33:55 <sdake> oh i see right there "docker_templates"
16:34:02 <sdake> its right in the etherpad
16:34:03 <sdake> my bad
16:34:03 <SamYaple> yes sdake, thats been confirmed and everyone is on the same page for that
16:34:06 <inc0> rhallisey, next meeting might be hard as its after midnight for us
16:34:15 <SamYaple> akwasnie: yes right beside the docker folder
16:34:17 <SamYaple> docker_templates
16:34:22 <akwasnie> ok
16:34:39 <sdake> i'd like to see the child directories as well
16:34:43 <rhallisey> inc0, ok let's do what we can now
16:34:44 <sdake> in the etherpad
16:35:22 <SamYaple> how about i submit a base structure patch with the skel structure sdake
16:35:31 <sdake> wfm
16:35:46 <rhallisey> I think we can have more discussion on this soon
16:35:48 <pbourke> the main thing the existing example is missing is some mechanism for custom snippets
16:36:01 <SamYaple> pbourke: yea easy to add. will do
16:36:05 <pbourke> awesome
16:36:06 <SamYaple> we have a linked blueprint for that
16:36:14 <sdake> we need to be able to handle rhel
16:36:17 <SamYaple> i think we all understand the concerns at this point
16:36:27 <sdake> and that has some special logic to register the os
16:36:35 <SamYaple> we can discuss more after some code lands
16:36:39 <rhallisey> since inc0 and akwasnie won't be around next Wed we can push it back possibly or have the discussion in #kolla
16:36:41 <SamYaple> everyone ok with that?
16:36:42 <rhallisey> SamYaple, agreed
16:36:45 <pbourke> +1
16:36:51 <sdake> wfm
16:36:59 <sdake> getting ready to lose network - lets move on :)
16:37:05 <rhallisey> ya I think as we get something to see we can chat about it in #kolla
16:37:09 <akwasnie> +1
16:36:15 <inc0> lets get a prototype and discuss it early next week
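(For context, a rough sketch of the kind of Jinja2-based Dockerfile templating being discussed, rendered from Python the way a build tool such as build.py might do it; the variable names and template layout are assumptions, not the format from the etherpad.)

    # illustrative sketch only -- not the agreed template format
    from jinja2 import Template

    DOCKERFILE = """\
    FROM {{ namespace }}/{{ base_distro }}-{{ install_type }}-base
    {% if base_distro in ['centos', 'fedora', 'rhel'] %}
    RUN yum -y install openstack-keystone && yum clean all
    {% else %}
    RUN apt-get install -y keystone && apt-get clean
    {% endif %}
    """

    # render one concrete Dockerfile from the template
    print(Template(DOCKERFILE).render(namespace='kollaglue',
                                      base_distro='centos',
                                      install_type='binary'))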
16:37:17 <rhallisey> #topic gating
16:37:21 <SamYaple> woo
16:37:34 <rhallisey> jpeeler has been away getting married, congrats!
16:37:45 <SamYaple> woo
16:37:49 <rhallisey> jpeeler, I was wondering if you were still looking at this?
16:37:54 <inc0> we've lost a good colleague
16:38:00 <akwasnie> +1
16:38:03 <inc0> may God have mercy on his soul
16:38:07 <rhallisey> lol
16:38:10 <sdake> lol inc0
16:38:14 <jpeeler> heh thanks
16:38:28 <jpeeler> which part are we talking about exactly?
16:38:39 <sdake> well we need to improve the gate
16:38:48 <rhallisey> jpeeler, just a status
16:38:50 <jpeeler> before i left, i was working to get the new builder in the gate
16:38:50 <sdake> one thing is build.py gating
16:38:53 <rhallisey> or any opinion
16:39:26 <jpeeler> but "the gate" ncludes smoke testing, tempest, and more
16:39:35 <jpeeler> i'm assigned the blueprint to get tempest going too
16:39:38 <jpeeler> lots to do
16:39:46 <rhallisey> always is
16:39:50 <sdake> someone suggested rally for testing
16:40:01 <sdake> i kind of dismissed it but  then remembered I dont really know what rally is
16:40:08 <jpeeler> i was just going to ask that
16:40:20 <jpeeler> sdake: you mean rally instead of tempest?
16:40:25 <rhallisey> don't either
16:40:25 <sdake> so maybe that is something we should investigate
16:40:30 <sdake> I dont know, I kind of dismissed it
16:40:37 <sdake> but some people at the midcycle seemed pretty keen on it
16:40:41 <SamYaple> rally for testing would be nice but i dont think it matters atm
16:40:44 <sdake> and I realized I made a mistake by dismissing without eval
16:40:53 <SamYaple> i always thought it was good for larger scale
16:40:54 <pbourke> right now anything that gates on images failing to build and start up would save major time
16:41:05 <sdake> agree we need an ansible gate
16:41:09 <SamYaple> for a single node in the gate it doesn't seem like rally would be the right choice
16:41:11 <pbourke> sometimes I feel like im QA
16:41:12 <sdake> so first step to that is build.py gating
16:41:17 <pbourke> with building and testing patches
16:41:41 <sdake> next step is aio ansible deploy or docker in docker deploy
16:41:43 <jpeeler> right, build.py in the gate is first. and there are necessary logging changes before that can occur
16:41:50 <sdake> i think sam was going to work on docker in docker deploy
16:42:00 <SamYaple> sdake: got a partial script
16:42:07 <SamYaple> should be drop in
16:42:09 <jpeeler> docker in docker? interesting
16:42:15 <SamYaple> jpeeler: ill tell you about it!
16:42:20 <sdake> cool so if sam finishes up the partial script i'll handle getting the gate rolling on that
16:42:20 <rhallisey> DinD
16:42:36 <jpeeler> that rocks
16:43:08 <rhallisey> cool thanks jpeeler for the status
16:43:14 <sdake> how does docker in docker work in a 3 minute breakdown samyaple
16:43:21 <sdake> just so folks understand what we are doing
16:43:37 <rhallisey> imagine putting a little box into a bigger box
16:43:45 <SamYaple> whats in the box....
16:43:51 <sdake> a dead cat
16:43:52 <SamYaple> so quick run down
16:44:00 <pbourke> an inception dvd
16:44:02 <rhallisey> dead cat
16:44:04 <rhallisey> lol
16:44:09 <SamYaple> we have a single gate, we need multinode testing
16:44:24 <SamYaple> normally that would mean vms, that is not going to work in the gate
16:44:34 <SamYaple> so we can do docker in docker, which is just what it sounds like
16:44:41 <SamYaple> have a super docker container running on the host
16:44:53 <SamYaple> and docker running in that container, running the kolla services
16:45:07 <SamYaple> that allows us to have several super docker containers simulating nodes
16:45:09 <sdake> the docker running in the container running the kolla services gets its own ip address?
16:45:16 <SamYaple> yes
16:45:21 <sdake> cool
16:45:26 <inc0> SamYaple, I'm wondering if we hit problems with host-network
16:45:32 <pbourke> are we realy the first project to need multi node testing?
16:45:32 <SamYaple> inc0: thats the best part, no
16:45:39 <pbourke> s/testing/gate
16:45:49 <SamYaple> the only way out of the docker container will be an l2 veth pair
16:45:51 <sdake> pbourke no, tripleo does it but they have their own gating system with hardware they provide to the foundation
16:45:55 <inc0> we'll use docker proxy
16:45:59 <SamYaple> with that in mind we can properly test multinode on a single node
16:46:00 <jpeeler> i'd be really surprised if the gate didn't support multiple VMs somehow, but we don't need VMs with docker in docker
16:46:12 <SamYaple> jpeeler: it does but is a pita
16:46:24 <SamYaple> and it makes gating take forever to schedule
16:46:34 <jpeeler> i mean, yeah not saying we should do that anyway
16:46:52 <SamYaple> anyway DinD means we can do multinode on a single node reliably
16:47:03 <jpeeler> docker in docker has the added benefit of easy dev env
16:47:14 <SamYaple> it also has the benefit of testing deploying with a private registry on "fresh" hosts that haven't had the containers built on them
16:47:25 <SamYaple> we build on the host and push into the registry then pull into the containers
16:47:30 <SamYaple> very real world like
16:47:52 <SamYaple> we can also test destructively in this method which is awesome
16:48:02 <SamYaple> the time limit on the gate will determine what tests we can run
16:48:03 <rhallisey> nice
16:48:14 <SamYaple> but i think we can run a lot of tests
16:48:29 <SamYaple> thats about it from me
16:48:37 <jpeeler> one question?
16:48:38 <rhallisey> thanks SamYaple :)
16:48:41 <sdake> cool can't wait to see that running ;)
16:48:50 <jpeeler> how many interfaces does it require?
16:48:56 <SamYaple> jpeeler: just one :D
16:49:06 <jpeeler> yay!
16:49:08 <SamYaple> i use a veth pair for the second interface in the container
16:49:14 <SamYaple> on the host i just setup a bridge
16:49:23 <SamYaple> simulated l2 network
16:49:47 <SamYaple> the container has two interfaces, the host needs only one
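(A very rough sketch of the "super container" side of the docker-in-docker idea, using the docker-py client mentioned later in the meeting; the image name is hypothetical and the veth/bridge wiring is only described in comments, so this is not the actual gate script.)

    # illustrative sketch only -- assumes a hypothetical privileged 'dind' image
    import docker

    client = docker.Client(base_url='unix://var/run/docker.sock')

    # each privileged "super" container simulates one node and runs its own
    # docker daemon inside, which hosts that node's kolla service containers
    node = client.create_container(
        image='kolla/dind-node',   # hypothetical image name
        name='kolla-node-1',
        host_config=client.create_host_config(privileged=True))
    client.start(node)

    # on the host, a bridge plus one veth pair per node is then created, with
    # one end of each pair moved into the node container's network namespace,
    # giving the simulated nodes a shared l2 network while the host itself
    # only needs a single interface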
16:49:57 <rhallisey> nice excited to see it working
16:50:14 <rhallisey> last order of business..
16:50:16 <rhallisey> #topic Open Discussion
16:50:37 <rhallisey> anyone got any cool stuff to talk about?
16:50:53 <rhallisey> like DinDinD
16:50:57 <sdake> sounds like we have a mountain of work to do
16:51:03 <rhallisey> for sure
16:51:18 <rhallisey> ok well looks good everyone
16:51:21 <SamYaple> but we had that for L2 as well
16:51:22 <sdake> we have approximately 4-8 weeks :)
16:51:24 <SamYaple> and we handled it
16:51:32 <rhallisey> got a month until l3
16:51:37 <SamYaple> before L2 we had no config-external or ansible
16:51:38 <inc0> DinDinD sounds like an alarm which should turn on every time you add an unnecessary abstraction layer
16:51:43 <rhallisey> excited for another great release :)
16:51:44 <sdake> ya lets try to get the major changes out of the way for l3 plz :)
16:51:55 <rhallisey> inc0, infinite D
16:52:09 <SamYaple> rhallisey: pointed to it before, but i have a review up for containerized dependencies for ansible
16:52:17 <SamYaple> so the host only requires docker and docker-py
16:52:21 <SamYaple> no other deps are needed
16:52:31 <rhallisey> ya thanks for bringing that back up
16:52:35 <SamYaple> it is verified working, but the current patchset is broken
16:52:38 <sdake> ya that should be rockin
16:52:46 <rhallisey> check out sams patch it will make things a lot easier to setup on your host
16:52:49 <SamYaple> please all review it because this is a big thing and what we are doing is a bit weird
16:52:49 <sdake> let me know when it works samyaple and i'll give it a spin
16:53:00 <jpeeler> same
16:53:07 <SamYaple> it should be good tomorrow
16:53:21 <sdake> ok they are kicking me off the internets
16:53:22 <SamYaple> had it working earlier but applied some pbourke changes and it broke
16:53:24 <sdake> I gotta jet
16:53:27 <SamYaple> need to clean up some syntax
16:53:29 <rhallisey> cool. Nice job everyone good meeting!
16:53:32 <SamYaple> bye sdake
16:53:43 <pbourke> good luck
16:53:47 <rhallisey> #endmeeting kolla