20:01:42 #startmeeting tc
20:01:43 Meeting started Tue Aug 27 20:01:42 2013 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:43 them too :)
20:01:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:46 The meeting name has been set to 'tc'
20:01:55 Our short agenda:
20:02:01 #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
20:02:08 #topic Marconi incubation request: initial discussion
20:02:18 #link http://lists.openstack.org/pipermail/openstack-dev/2013-August/014076.html
20:02:22 #link https://wiki.openstack.org/wiki/Marconi/Incubation
20:02:29 We generally do incubation requests over two meetings
20:02:40 so that we can ask questions and get precise answers before we actually vote
20:02:55 Marconi folks: care to quickly introduce yourselves and the project ?
20:03:03 o/
20:03:05 then we'll fire all kinds of questions at you
20:03:08 sure thing
20:03:16 I'm Kurt Griffiths
20:03:39 I've been nominated to be the PTL for Marconi if it is accepted
20:03:49 I'll let everyone else introduce themselves...
20:03:57 * flaper87 is Flavio Percoco
20:04:14 Alejandro Cabrera here, enjoying working on the marconi project for some time now.
20:04:20 * ametts is Allan Metts, Director of Engineering at Rackspace
20:04:25 I've been helping kgriffs lead the project since its very beginning
20:04:37 Amit Gandhi - Dev Manager at Rackspace
20:04:42 Megan Wohlford, I am a Product Manager at Rackspace, working on Marconi
20:04:44 I am Malini Kamalambal, developer in test
20:04:53 * flaper87 is a Software Engineer at Red Hat
20:05:04 and, I should mention I am an architect at Rackspace
20:05:31 kgriffs: You mention 1.0 in your incubation wiki page. How far are you from this featureset ?
20:05:32 Oz Akan, Engineering Manager at Rackspace, I have been working on the deployment and performance side of Marconi
20:05:55 lots of managers :)
20:06:09 heh, Oz is very hands on. :D
20:06:18 inappropriate
20:06:55 kgriffs: is it more the current state, or the expected Icehouse state ?
20:07:24 (or the expected J state)
20:07:28 ttx: we are close. We have some performance work to do, we need to improve logging, and we have to finish up the marconi proxy
20:07:52 I would like to have a 1.0 release ready for Icehouse
20:07:58 kgriffs: You compared with Amazon SQS in one of the follow-up emails, mentioning what you did and what they did not...
20:08:03 kgriffs: how was the 1.0 featureset defined? to fulfill a specific use case or to achieve parity with an alternative solution?
20:08:05 Is there anything they support that you don't ? Large messages, delay queues, long retention time, long polling...
20:08:11 v1.0 API, specifically
20:08:34 #link https://wiki.openstack.org/wiki/Marconi/specs/api/v1
20:08:42 ttx: yes, they support delayed messages
20:08:58 we have it on our roadmap but are waiting to see what the demand is
20:09:04 kgriffs: +1
20:09:21 who is they?
20:09:27 russellb: SQS
20:09:31 ok.
20:09:39 russellb: was in response to my question
20:09:42 we have also discussed doing long polling and even push-based transports (websockets, zmq) but I feel like I want to get a solid baseline release out first.
20:09:47 ah k
20:10:12 kgriffs: right. Was just checking those were not somehow off-limits.
20:10:18 nope
20:10:24 (not off limits)
20:10:40 kgriffs: dolphm had a question above
20:10:51 dolphm: the project's requirements have evolved over time
20:11:00 let me elaborate.
:D
20:11:21 so, the initial set came out of a brainstorming unconference session at the Grizzly summit
20:12:17 we have refined them based on community discussion, mostly during our team meetings in #openstack-meeting-alt and in #openstack-marconi
20:12:42 also, a little on the mailing list, but we seemed to have better luck engaging folks via IRC
20:13:04 so, early on, Iron.IO was involved in giving feedback, also flaper87 (Flavio) from Red Hat
20:13:49 and at my own company, Rackspace, we have been talking with lots of "internal" customers and have also surveyed regular customers to find out what they want
20:14:26 obviously, we ended up with a ton of ideas that we won't implement for 1.0 but we have plenty to keep us busy for quite a while.
20:14:46 other questions ?
20:14:55 so, overall, I've tried to be very open and public about design/requirements
20:15:01 or comments/opinions ?
20:15:05 kgriffs, there was talk of a zmq backend - how does/will that work?
20:15:19 FWIW, some reqs. also came from discussions at the last summit in Portland
20:15:32 e.g. are zmq endpoints exposed to users, or is it purely an implementation detail?
20:15:37 markmc: I'll let flaper87 take that q.
20:15:42 * notmyname thinks a queue service is a terribly useful feature that is missing from openstack
20:15:50 fwiw we have use cases waiting for something like this already in Heat
20:16:03 hm does that rely on oslo?
20:16:03 notmyname: agreed from a high level, at least. trying to work through the approach though ...
20:16:05 notmyname: +1... AWS has had SQS since 2004, two years before they introduced S3 and EC2
20:16:22 markmc: re zmq. Current proposal is to implement a transport plugin for zmq, which means being able to talk to marconi using zmq just as we do using HTTP
20:16:27 jd__, just for oslo utilities AFAIK
20:16:45 doesn't exist yet though, right?
20:16:48 (the zmq thing)
20:16:51 flaper87, ok, so it's an alternate frontend
20:16:51 so the important point here is that marconi isn't provisioning new queues for people, but actually implementing a queue service
20:16:52 there are 2 layers in Marconi, one for the transports and one for the storage. They both are pluggable
20:16:53 right?
20:16:54 i see an empty __init__.py file, heh
20:16:56 markmc: correct
20:16:58 russellb: correct
20:16:59 flaper87, backend remains the same?
20:17:04 markmc: yup
20:17:04 flaper87: storage == the messaging backend?
20:17:09 russellb: correct
20:17:14 and right now you have mongodb?
20:17:19 flaper87: "pluggable" is a horrible word. too many assumptions about the definition
20:17:25 (and sqlite, but presumably for testing)
20:17:27 russellb: as for now we have sqlite and mongodb
20:17:39 so does mongodb have magic messaging semantics or what?
20:17:43 notmyname: sorry, bad wording from my side. We allow third-party modules through stevedore
20:17:58 like .... why is this not an API on top of rabbit or some existing message broker?
20:18:07 we have discussed supporting AMQP and/or SQLAlchemy as well, but we want to keep the set of "official" drivers small.
20:18:07 russellb: yep, sqlite is a testing storage backend.
20:18:12 russellb: no, mongodb works as a database (as mysql / psql would do).
20:18:26 russellb: an API on top of existing message brokers is on our roadmap
20:18:38 right, so what makes that the first choice for a messaging service backend?
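
[Aside: to make the "two pluggable layers, third-party modules through stevedore" description above more concrete, here is a minimal sketch of how a storage driver could be loaded via stevedore. It is not Marconi's actual code; the entry-point namespace and driver name are assumptions made purely for illustration.]

    from stevedore import driver

    # Illustrative only: 'marconi.storage' and 'mongodb' are placeholder
    # entry-point names, not necessarily the ones Marconi registers.
    mgr = driver.DriverManager(
        namespace='marconi.storage',   # hypothetical entry-point namespace
        name='mongodb',                # backend selected via configuration
        invoke_on_load=True,           # instantiate the driver on load
    )
    storage = mgr.driver
    # The transport layer (HTTP, zmq, ...) would then call storage methods
    # without knowing which concrete backend is behind them.
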
20:18:42 so far we have plans for 3 more storage backends
20:18:52 1) rel dbs, 2) rabbit 3) proton
20:19:06 russellb: we wanted to include a mongo driver since it can be easier to achieve HA than with, say, RabbitMQ
20:19:21 how do you implement "List messages" with rabbit as a storage backend?
20:19:23 it also gives us some extra features
20:19:23 russellb: the main reason is that we wanted to focus on the API without struggling with AMQP issues (and message broker issues) in the first place
20:19:46 markmc: heh, true.
20:19:46 markmc: i was just wondering what the use case was for supporting "list messages"
20:19:59 also, i'm very interested in digging into your views on what backends go in the project vs out
20:20:12 because based on some list posts, your position doesn't seem to align that well with other projects
20:20:27 kgriffs: so the important point here is that marconi isn't provisioning new queues for people, but actually implementing a queue service, right? <-- would like to see the answer to that question as well
20:20:42 dolphm: so, we are supporting listing messages when you want observers that don't claim messages (pub-sub).
20:21:09 ttx: correct, marconi is a queuing service itself, not a provisioning service
20:21:36 that's a little bit frightening ;)
20:21:41 Redis anyone?
20:21:51 * jd__ likes to pop tech' randomly
20:21:54 +1 jd__
20:21:59 jd__: cppcabrera played with that but the implementation isn't complete yet
20:22:38 again, regarding listing messages, it allows interesting hybrid semantics
20:22:39 flaper87, kgriffs, how do you implement "List messages" with rabbit as a storage backend?
20:23:00 they don't?
20:23:13 heh
20:23:17 markmc: that's a great question.
20:23:17 (trying to get my head around how a storage backend based on an existing broker would work)
20:23:20 markmc: sorry, I missed that question
20:23:22 that came up last week
20:23:42 (and if it makes sense to do that, why would it not be the default)
20:23:55 just out of curiosity, why would someone want to use a rabbit backend for marconi?
20:24:09 markmc: kgriffs: based on the previous answer, wouldn't that mean that marconi isn't using something like rabbit et al for a storage backend
20:24:29 * ttx would generally prefer fewer backends and a more consistent experience
20:24:35 markwash: some people have expressed interest in doing it in order to bridge to a different (may or may not be legacy) system
20:24:40 markmc: I think queuing as a service has different requirements than a standalone queuing application like rabbit or redis
20:24:45 ttx: indeed
20:24:48 ttx: yeah, but i also like not implementing our own, where it makes sense
20:24:57 notmyname, double negative there, so not sure I understand - but my question is "how would a rabbitmq storage backend work?"
20:25:00 kgriffs: I feel like some sort of adapter/shoveling mechanism would be better for that
20:25:37 oz_akan_, so a rabbitmq storage backend doesn't make sense?
20:25:49 that would be my preference as well, esp. considering that parts of the API would not be implementable on rabbit
20:25:55 saying that we're building a queue service, but also saying that using existing message queueing systems is *not* appropriate seems odd at first take
20:25:55 markmc: from my limited knowledge, it doesn't sound like that would make sense (using a queue as storage for a queue)
20:25:59 flaper87: thoughts?
20:26:11 from my limited experience with AMQP, messages do not have to be ACK'd (and thus will not be removed)...
20:26:20 but we're building an API for a queue, not a queue itself ... i hope?
20:26:33 the emphasis is on the API, yes
20:26:38 kgriffs: agreed
20:26:41 russellb: that's my understanding
20:26:53 russellb: "ttx: correct, marconi is a queuing service itself, not a provisioning service"
20:27:06 I think not all features around the API will be supported by all backends
20:27:14 markmc: I can't say it doesn't make sense, but it is probably a response to a different need than using mongodb as a backend
20:27:16 storage backends*
20:27:19 i mean, it isn't provisioning AMQP queues or something
20:27:20 zaneb: i get that, but you can still have an API on top of an internal message fabric that you didn't build yourself, right?
20:27:35 it is a service itself in that it has its own API that is multi-tenant
20:27:36 I don't think that's the point
20:27:36 #info marconi is a queuing service itself, not a provisioning service
20:27:36 russellb: not today
20:27:54 dolphm: yeah, i know ...
20:28:02 speaking hypothetically, i suppose.
20:28:22 to be clear, it is a data API, not a management API (although a management API will come along at some point)
20:28:31 seems to me that if you have a set of API methods that are supported, and using storage backend X precludes implementation of at least one of those API methods, then using storage backend X is precluded
20:28:43 (in response to "why not rabbitmq")
20:28:52 and marconi storage backends *implement* a queue, correct?
20:28:53 kgriffs: OK, so the user doesn't get direct access to the backend, but neither is marconi touching every message itself?
20:29:13 what do you mean by "touching"?
20:29:27 queuing might be a more accurate term
20:29:33 zaneb: I think all messages are submitted and retrieved through the marconi api #amiwrong?
20:29:36 markwash: IIUC Marconi implements a queue, and uses sqlite/MongoDB/* to store data
20:29:43 markwash: correct
20:30:35 It can use Rabbit to store data too, but that's a bit of a corner case
20:30:42 kgriffs: what kinds of usage monitoring/notifications do you support at present?
20:30:48 So yes, it's a bit scary
20:30:58 ttx: s/scary/awesome/
20:30:58 scarier than, say, Trove is.
20:31:20 notmyname: +1 :)
20:31:23 notmyname: oh right. s/scary/exciting/ :)
20:31:26 markwash: for the end user, for ops, or both?
20:31:33 kgriffs: ops/billing
20:31:37 ttx: scary because of vagaries in backends and the API matching up?
20:31:45 kgriffs: I guess I probably mean "ceilometer"
20:31:47 actually
20:32:05 annegentle: scary because writing message queue software or a database can be hard
20:32:21 IIRC SQS charged some small amount per api request
20:32:25 ttx: okie
20:32:34 see also: quotas/limits
20:32:41 * gabrielhurley can't type today
20:32:47 markwash: so, let me answer that
20:32:51 annegentle: though I'm also concerned about supporting too many backends and frontends making it a deployer dream and a user nightmare
20:33:04 gabrielhurley: we have some rate limits but no quotas yet
20:33:10 ttx: yeah I was more scared of backend frontend and docs
20:33:12 gabrielhurley: that's on our roadmap, though
20:33:14 markwash: isn't that "simply" marconi being good about reporting what's going on? is ceilometer support required out of the gate?
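[Aside: the exchange above establishes that Marconi exposes a multi-tenant data API (post, list, claim messages) rather than provisioning queues. For orientation, here is a rough sketch of what driving the v1 HTTP API linked earlier might look like from a client. The endpoint URL, headers, and message fields follow the draft v1 spec as linked on the wiki and are best treated as illustrative; details may differ.]

    import json
    import uuid
    import requests

    # Assumptions for this example: a Marconi endpoint at this address and
    # a queue named 'fizbit'; both are placeholders.
    base = 'http://localhost:8888/v1/queues/fizbit'
    headers = {
        'Client-ID': str(uuid.uuid4()),      # per-client identifier
        'X-Auth-Token': 'KEYSTONE_TOKEN',    # when auth middleware is enabled
        'Content-Type': 'application/json',
    }

    # Create the queue, post a message, then list messages without claiming
    # them (the observer / pub-sub style access discussed above).
    requests.put(base, headers=headers)
    requests.post(base + '/messages', headers=headers,
                  data=json.dumps([{'ttl': 300,
                                    'body': {'event': 'backup.done'}}]))
    listing = requests.get(base + '/messages', headers=headers)
    print(listing.status_code, listing.text)
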
20:33:14 depending on what you are metering, you may just use web server logs
20:33:36 ttx: as long as the same API methods are supported regardless of backend, I don't see why that would be scary for users at all
20:33:39 otherwise, we have a bp or two to work on collecting operational stats
20:33:53 torgomatic: I'd agree with that.
20:33:58 we have been discussing whether/how to use ceilometer for that
20:34:00 notmyname: oh, I'm just being a bit curious
20:34:04 or statsd or whatever
20:34:19 if there's a WSGI API with middleware, it's pretty easy to account for requests using Ceilometer
20:34:31 torgomatic: as long as we don't start having extensions for those who deploy with RabbitMQ as backend, we should be good.
20:34:32 * torgomatic is a big statsd fan, fwiw
20:34:42 yep, the HTTP transport is WSGI
20:34:51 marconi doesn't try to provide a web server.
20:34:53 we want to integrate w/ ceilometer at some point
20:35:02 if there's a need to account for things like number of messages stored, I guess we should talk about it
20:35:10 Marconi provides a wsgi app that can be used w/ any container
20:35:26 * markwash envies
20:35:33 it does provide a base server using wsgiref, which is supposed to be used for testing
20:35:44 fwiw, I've had pretty awesome experiences so far wrapping marconi in middleware. It plays nice with keystone-auth-middleware. :)
20:35:47 flaper87: right, forgot to mention that.
20:35:50 :D
20:36:27 kgriffs: what do you mean by (paraphrasing) "it's an http server" but "it's not a web server"?
20:36:31 so, a developer can pip install marconi today and be up and running in a couple minutes
20:37:10 I mean, it doesn't speak/parse HTTP - it doesn't self-host unless you count wsgiref for development/testing
20:37:18 it's a wsgi application, not a wsgi server
20:37:32 OK, is there a specific area we'd like to see clarified before next week ?
20:37:45 #info marconi is a wsgi application, not a wsgi server
20:37:54 So far I've spotted a few worries and a few surprises, but nothing obscure ?
20:37:56 perhaps clarification on what parts of the API would be required for storage backends
20:38:14 i.e. that a storage backend wouldn't be accepted unless it could implement "list messages"
20:38:23 markmc: personally I'd prefer it if the API was the same for all backends
20:38:25 or that "list messages" is an optional API
20:38:34 it sounds like the whole rabbit backend thing should be deemphasized... it was somewhat confusing for us I guess...
20:38:34 markmc: +1
20:38:35 otherwise I'd question the usefulness of the extra backend
20:38:50 ttx: +1 in fact I'd almost think that should be required
20:39:08 I'd prefer the API be standard for all backends as well
20:39:17 but that's just my "if I were you" opinion, not any kind of tc-vote-related issue
20:39:17 ttx: I wouldn't question it. You can have Marconi deployed in different regions and running on top of different backends
20:39:18 makes sense
20:39:36 * markmc isn't against discoverable extensions, but "list messages" seems like a fundamental part of the API
20:39:39 personally, I'm not a big fan of having partial API support depending on backends. If we think something like that is needed, it would be better to split marconi into two projects
20:39:45 if it's not a fundamental part of the API, that should be called out
20:39:49 flaper87: sure, as long as they have the same API
20:39:51 * SpamapS would like to raise a hand for "extra backends"
20:40:00 MongoDB.. AGPL.. it's a problem.
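
[Aside: the log above notes that Marconi is a WSGI application (not a server) and that it "plays nice with keystone-auth-middleware". As a rough illustration of what that wrapping could look like in a deployment, here is a sketch using the keystoneclient auth_token middleware of that era. The Marconi import path is a placeholder assumption, and a real deployment would more likely wire this up through a paste pipeline or the WSGI container's configuration.]

    # A sketch, not Marconi's actual deployment code.
    from keystoneclient.middleware import auth_token

    # Hypothetical import path for Marconi's WSGI callable.
    from marconi.transport.wsgi import app as marconi_app

    keystone_conf = {
        'auth_host': 'keystone.example.com',
        'auth_port': 35357,
        'auth_protocol': 'https',
        'admin_user': 'marconi',
        'admin_password': 'secret',
        'admin_tenant_name': 'service',
    }

    # Any WSGI container (uwsgi, gunicorn, mod_wsgi, ...) can then serve
    # `application`; wsgiref is only for development/testing, as noted above.
    application = auth_token.AuthProtocol(marconi_app, keystone_conf)
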
20:40:18 * gabrielhurley is not a huge fan of mongo at scale, either...
20:40:26 isn't the AGPL only an issue if you're making custom changes to the app?
20:40:26 is the mongodb client lib agpl?
20:40:27 I understand that RabbitMQ is sufficiently different from a data store that it confuses the situation..
20:40:33 russellb: nope
20:40:34 sure, other backends.. but not a broker backend
20:40:41 but, we went for listing messages since if nothing else it lets you audit work queues, which was always a pain with SQS
20:40:52 from an openstack perspective, it's the client lib license we're mostly concerned with
20:40:54 kgriffs: is "list messages" the only api feature specifically intended to support pub-sub?
20:41:00 but surely deployers care about mongo itself, too
20:41:07 yes
20:41:09 deployers and packagers / vendors
20:41:12 russellb: All due respect, but that makes no sense at all.
20:41:16 dhellmann: it's not that it's an *issue*, so much as that you have to convince the whole world that it's not an issue
20:41:24 russellb: what good is a client library without the database?
20:41:29 pymongo is under the Apache license https://github.com/mongodb/mongo-python-driver/blob/master/LICENSE
20:41:32 I am not opposed to having alternate backends, but it may be a matter of other SQL/NoSQL rather than adding in AMQP
20:41:33 SpamapS: read the rest of what i said there bud before you freak out
20:41:38 kgriffs: (was that a yes to my question or to russellb?)
20:41:54 I'd like to revisit one aspect of the wsgi topic… namely that marconi uses another wsgi framework not found in any other integrated projects
20:41:54 SpamapS, an AGPL client lib means (or could mean, depending on your take on these things) that Marconi wouldn't be distributable under the Apache License
20:41:56 russellb: Sorry, the rest came in while I was mounting my high horse.
20:42:03 SpamapS: i noticed
20:42:05 SpamapS, which is a requirement of all our projects
20:42:13 SpamapS, that's the valid distinction russellb is making
20:42:37 markmc: Yes, and the AGPL license for MongoDB means you can't deploy without accepting the terms of the AGPL on at least MongoDB.
20:42:53 kgriffs: I like that. backend == some way to durably store stuff. not the thing that does the queue work
20:42:57 markmc: how did we solve that in ceilometer-land ?
20:42:58 SpamapS, doesn't affect the license that Marconi is distributed under
20:43:03 or did we not ?
20:43:04 My point isn't "burn MongoDB", it is "Allow deployers to choose alternatives, please."
20:43:10 ttx, it's not an AGPL client lib, it's not an issue
20:43:18 markmcclain: +1
20:43:26 SpamapS: +1
20:43:39 SpamapS: fair enough.
20:43:54 ttx: we didn't?
20:44:04 :)
20:44:27 * ttx slaps jd__
20:44:45 markmcclain raised a good point on API btw
20:44:47 jd__: well, we built the sqlalchemy driver, too, didn't we? I don't remember the timing of that, but I think it was at least in progress.
20:44:50 markmcclain: we had a plan to test Pecan and try to replace Falcon; however, the last benchmark (kgriffs has more details about this than me) resulted in falcon being faster, and we needed that
20:45:00 dhellmann: I think so, indeed
20:45:11 that doesn't mean we won't "try" again, we just didn't stress that point so we could focus on the design
20:45:12 ceilometer has the appropriate abstraction to allow deployers to write a new backend and choose an alternative.
20:45:15 I must have missed something, if you can't use the service without an AGPL service, isn't that still an issue?
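
[Aside: markmcclain's point above is that Marconi uses a WSGI framework (Falcon) not found in other integrated projects, and was kept over Pecan mainly for speed. For context, a minimal Falcon resource looks roughly like this; the route and resource are invented for the example and are not taken from Marconi's code.]

    import falcon

    class HealthResource(object):
        # Falcon maps HTTP verbs to on_<verb> responder methods.
        def on_get(self, req, resp):
            resp.status = falcon.HTTP_204   # healthy, no body

    # falcon.API() returns a plain WSGI callable, which matches the
    # "wsgi application, not a wsgi server" model discussed earlier.
    app = falcon.API()
    app.add_route('/v1/health', HealthResource())
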
20:45:17 hope that makes sense
20:45:18 flaper87: hm, because other projects don't need speed? :]
20:45:22 * flaper87 is trying to find that blueprint
20:45:32 jd__: not saying so :)
20:45:40 SpamapS: doesn't marconi have that?
20:45:47 flaper87: i'd be curious to see the benchmarks that supported that decision
20:45:52 I did find the prereq "install mongodb" a bit surprising myself.
20:46:00 You can legally distribute binaries of it, sure, but that doesn't get you the ability to /use/ it.
20:46:07 dhellmann: re Pecan, it isn't off the table, it just was de-prioritized since getting a solid baseline release has been taking all our time
20:46:17 jd__: indeed, but there was talk of that being confusing because RabbitMQ was mentioned in the same breath as sqlite/mongodb. I want to make sure the abstraction isn't thrown out with the confusion.
20:46:27 annegentle: we can remove that from the wiki / readme
20:46:40 flaper87: so you don't need mongodb ?
20:46:44 flaper87: ok.. would be interested to see the benchmarks and code to see how they are constructed
20:46:45 mongodb is what we suggest for production right now because there's no other option
20:46:48 right now
20:46:50 jd__: re speed, not all projects have APIs that are directly in the line of fire
20:46:56 flaper87: nah, if it's the "easiest" way forward for people then speak the truth :)
20:46:59 flaper87: so you do need it: it should stay on the wiki.
20:46:59 I wouldn't suggest using sqlite in production
20:47:15 kgriffs: everything's relative? ;)
20:47:26 jd__: yep.
20:47:27 kgriffs: but I get what you mean though
20:47:27 lifeless: they are entering incubation, not being integrated yet
20:47:28 we can remove it as soon as we add more backends, I guess
20:47:43 flaper87: right
20:47:53 SpamapS: ack, abstraction's good :)
20:47:54 ttx: sure, still a bit nervous-making
20:47:57 I like having sqlite because, say, I am implementing a Rust library
20:48:00 so we bless the general architecture and the promise of it, more than the details
20:48:03 maybe I want to run on my local box
20:48:15 with sqlite I don't have to install anything, just pip install and run marconi-server
20:48:26 anyway, that's a discussion for another time I suppose
20:48:48 we need to move to open discussion in a bit. This will be continued next week. Last-minute questions ?
20:49:00 Can we make a list of points we should prepare for next week?
20:49:03 ttx: right, this is about incubation now. Ok I'm good.
20:49:03 talking about architectures, there's a marconi-gc and marconi-server that need to run and that's it?
20:49:04 markmc: mentioned something
20:49:11 but I might have missed other points
20:49:12 flaper87: +1
20:49:20 annegentle: and definitely too late to be integrated in Icehouse anyway.
20:49:58 ttx: so, seems like there would be plenty of time to iron out any implementation concerns
20:50:05 btw, when are the next elections again?
20:50:11 kgriffs: indeed
20:50:14 (if targeting the J release)
20:50:15 and is there any reason to wait for voting on things incubating next cycle?
20:50:17 russellb: end of Sept
20:50:28 russellb: we haven't in the past
20:50:42 k, i'm fine with it, just wanted to make sure ...
20:51:03 that brings us to...
20:51:08 drumroll
20:51:08 #topic Open discussion
20:51:16 Next week we'll start the end-of-cycle graduation review for Trove and Ironic
20:51:22 To decide if they should be integrated for the Icehouse release
20:51:24 ohh ohh ttx pick me pick me, i have an issue
20:51:31 hah
20:51:32 hub_cap: you wanted to raise an issue with your work on Trove Heat integration ?
20:51:38 yes, so it's going quite well
20:51:50 with one small issue, which SpamapS and sdake and i talked about yesterday
20:52:06 the create stack call requires a user's password currently
20:52:19 russellb: so a project starting incubation now/next week will definitely be too young to graduate next week/the week after
20:52:19 and trove never takes a user's password, only a token
20:52:27 ttx: sure
20:52:41 ttx: i was basically considering projects applying now as incubating during the icehouse cycle
20:52:55 so until trusts is baked in to heat, we don't have a way to create stacks on behalf of a user
20:52:56 hub_cap: and this will be resolved when heat-trusts gets merged, right?
20:53:06 russellb: that's what we are discussing for Marconi.
20:53:13 right.
20:53:15 shardy: i believe so, from what SpamapS/sdake mentioned
20:53:17 shardy: yes
20:53:25 oh wait, it was zaneb
20:53:28 which, hopefully, should happen now for Havana if all goes to plan
20:53:36 russellb: i actually encourage projects to file now, so that they can get more space at design summit
20:53:36 yes, so that's what i'm afraid of
20:53:40 as the keystoneclient stuff we were gated on has been released
20:53:42 shardy: although, it sounds like we have to change the middleware to allow it too
20:53:43 "hopefully" :)
20:53:44 ttx: makes sense
20:54:09 even if it is merged, it's such a small window to get it working properly
20:54:12 hub_cap: yeah, the whole thing has taken way longer than I ever expected
20:54:37 hehe shardy /me understands... heat integration sure did take a while too for me ;)
20:54:41 hub_cap: so the back end of trove is switchable between heat/non-heat, right?
20:54:44 So bottom line is, Trove should be able to use Heat as soon as Heat can use trusts.
20:54:47 * markwash disappears for 25 minutes
20:54:50 you weren't planning to cut directly over
20:54:55 zaneb: yes, default is non heat
20:55:20 correct, as discussed by the TC: in Icehouse it would be optional, and then in release+1 it would be the default
20:55:31 The eventual plan is for extra capabilities in trove, like complex replication scenarios and HA etc. etc., to make use of Heat, but for now.. Trove works fine w/o Heat.
20:55:35 it's basically deprecating the old way of provisioning, so it follows the same thing as, say, removing nova volume
20:55:40 * SpamapS hopes he got that right.
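
[Aside: the blocker hub_cap describes above is that Trove only holds the user's token, while Heat's stack create currently expects password credentials; trusts-backed, token-only middleware in Heat is what unblocks it. As a rough sketch under those assumptions, the call Trove wants to make with python-heatclient might look like the following. The endpoint, token, template, and parameters are placeholders, not Trove's actual values.]

    from heatclient.client import Client

    # Placeholders: in Trove these would come from the request context and
    # the Keystone service catalog, and from Trove's template generation.
    heat_endpoint = 'http://heat.example.com:8004/v1/TENANT_ID'
    user_token = 'USER_KEYSTONE_TOKEN'
    instance_template = open('trove-instance.yaml').read()

    # Token-only authentication: no user password is ever handled by Trove.
    heat = Client('1', endpoint=heat_endpoint, token=user_token)

    heat.stacks.create(
        stack_name='trove-instance-42',
        template=instance_template,
        parameters={'flavor': 'm1.small'},
    )
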
20:55:51 correct SpamapS
20:55:52 so, what I suggested to hub_cap is to continue the integration work and try to get things to the point where the lack of token-only middleware in Heat is the only thing preventing support being turned on
20:56:10 zaneb: +1
20:56:20 and that's what i'm working on now zaneb, create is almost finished
20:56:21 and if heat-trusts makes it into Havana, flip the switch
20:56:25 and i've got ~1 wk to finish it
20:56:39 well yes, assuming 1) trusts works flawlessly, and 2) the switch is really small
20:56:56 hub_cap: well, you're not actually making use of the trusts part
20:57:03 correct zaneb
20:57:06 so it doesn't really have to work ;)
20:57:13 please find the patch creating the governance repo here: https://review.openstack.org/#/c/43002/ awaiting your reviews
20:57:28 haha zaneb nice
20:57:39 i just need token-based stack creation :)
20:57:44 #info patch creating the governance repo here: https://review.openstack.org/#/c/43002/ awaiting your reviews
20:57:47 indeed
20:58:03 thanks ttx, forgot that
20:58:22 Well, the trusts functionality will work with both token and user/pass auth, so you can test it by flipping a config file option in heat
20:58:25 so i am slightly concerned that heat integration will creep into Icehouse if the pieces don't land perfectly
20:58:39 cool shardy, that makes it easy
20:58:50 hub_cap: you could grant yourself a feature freeze exception over that, if you need a few more days
20:59:10 yes, i would need that, as well as having heat land trusts
20:59:10 we'll all blame shardy for that
20:59:20 :)
20:59:28 ttx: haha ;)
20:59:36 so if we are 100% certain trusts will land then i'll grant a FF exception
20:59:39 for myself lol
20:59:42 shardy will blame keystone, no doubt ;)
20:59:42 hub_cap: we can continue the discussion in the incubated project section of the next meeting
20:59:44 * shardy looks for someone else to blame..
20:59:50 yes ttx
20:59:56 let's close this one
20:59:57 i'm good w/ that
20:59:59 final words ?
21:00:06 hugs
21:00:07 always hugs
21:00:12 * dolphm accepts the blame
21:00:18 #endmeeting