14:00:51 <shamail> #startmeeting interop_challenge
14:00:51 <openstack> Meeting started Wed Aug 10 14:00:51 2016 UTC and is due to finish in 60 minutes.  The chair is shamail. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:55 <openstack> The meeting name has been set to 'interop_challenge'
14:01:01 <gema> o/
14:01:03 <catherineD> o/
14:01:04 <shamail> Hi everyone, who is here for the interop challenge meeting today?
14:01:04 <leong> o/
14:01:05 <kei> o/
14:01:11 <woodburn> o/
14:01:15 <rohit404> o/
14:01:35 <skazi> o/
14:01:44 <hjanssen-hpe> o/
14:01:57 <shamail> Thanks gema, catherineD, leong, kei, woodburn, rohit404, skazi and hjanssen-hpe
14:01:58 <lizdurst> o/
14:02:03 <tongli> o/
14:02:07 <shamail> hey lizdurst and tongli
14:02:10 <markvoelker> o/
14:02:15 <eeiden> o/
14:02:18 <shamail> The agenda for today can be found at:
14:02:21 <shamail> #link https://wiki.openstack.org/wiki/Interop_Challenge#Meeting_Information
14:02:31 <shamail> hi markvoelker and eeiden
14:02:47 <shamail> #topic review action items from previous meeting
14:02:52 <jkomg> morning
14:02:55 <shamail> #link http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-03-14.00.html
14:03:09 <shamail> The bottom of the linked log has action items that were taken in the last meeting
14:03:27 <shamail> Document how to submit requests for new tools (markvoelker, shamail)
14:03:35 <tongli> please add your name to the attendance list at the bottom. thanks.
14:03:48 <gema> that document is automatically generated
14:03:55 <gema> that list gets created at the end of the meeting for today
14:04:15 <shamail> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2016-08-10
14:04:29 <shamail> in case we need to take notes
14:04:38 <gema> +1
14:04:42 <shamail> So on the first topic on documenting how to submit…
14:04:51 <shamail> #link https://wiki.openstack.org/wiki/Interop_Challenge#How_to_Propose.2FSubmit_new_tools.2Fworkloads
14:05:02 <shamail> Added this section to the wiki as a starting point
14:05:11 <shamail> markvoelker: Do you have anything to add on this action item?
14:05:44 <markvoelker> shamail: we've been kicking around ideas on how to submit results
14:06:01 <shamail> Ah, yes.  I would like to hold off on that discussion since it's an agenda item for today
14:06:12 <markvoelker> 'k
14:06:24 <shamail> Thanks
14:06:36 <shamail> Determine where to share results (markvoelker, topol, tongli)
14:06:53 <shamail> As mentioned, I want to pass on this action item review for now since we will discuss in depth today
14:06:57 <shamail> as part of the current agenda
14:07:06 <shamail> AI: Please add which OpenStack versions you plan to run interop challenge tests against (everyone)
14:07:30 <gema> #action Please add which OpenStack versions you plan to run interop
14:07:41 <gema> #action Please add which OpenStack versions you plan to run interop challenge tests against (everyone)
14:07:53 <shamail> This was an action from our previous meeting but I don’t think we discussed where to put this information
14:07:53 <gema> (sorry, copy & paste)
14:08:22 <shamail> This might be another one that intersects with the “sharing results” conversation or I have some thoughts I can share in the open discussion section
14:08:25 <shamail> np gema, thanks.
14:08:43 <shamail> Onwards!
14:08:43 <leong> +1 shamail.. i think the OpenStack version can be part of the "results"
14:08:50 <shamail> leong: I agree
14:08:55 <shamail> #topic Test candidates review
14:09:04 <tongli> @leong, +1
14:09:05 <shamail> The test candidates can be found in two sources:
14:09:14 <shamail> https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process
14:09:31 <shamail> Or line 28 in the etherpad for today
14:09:38 <shamail> tongli: can you lead this topic?
14:09:51 <tongli> @shamail, sure.
14:10:15 <tongli> I think we are open on what tools to use, either terraform, ansible, or heat, your choice.
14:10:43 <tongli> probably we need to figure out the content of the app. lampstack has been one, dockerswarm is another.
14:10:57 <gema> tongli: do you mean that the same workload deployed with different tools in different clouds means the workload is interoperable?
14:11:04 <tongli> we currently already have dockerswarm in terraform.
14:11:34 <tongli> @gema, I think we should talk about that. not exactly sure if different tools for the same workload will accomplish something more than just one tool.
14:11:50 <gema> tongli: I think different deploying tools will use different apis
14:11:55 <jkomg> gema: the same workload deployed with the same tools in multiple clouds. Just multiple tools, I'd think.
14:12:06 <gema> what we should do is have different deploying tools as different use cases?
14:12:09 <tongli> for example, if we have lampstack in terraform and ansible, will that be better vs lampstack in terraform only?
14:12:10 <gema> maybe same workload?
14:12:14 <gema> or multiple ones
14:12:26 <shamail> jkomg: +1
14:12:42 <gema> tongli: it will be different
14:13:02 <jkomg> I don't see the effectiveness of deploying lampstack with terraform and ansible; you're deploying the same thing with different tools. We're not testing the tools.
14:13:24 <tongli> komg, +1.
14:13:25 <gema> jkomg: if the tools are using the openstack apis they are part of getting the workload running
14:13:54 <gema> jkomg: agree with you we are not trying to test the tools
14:13:56 <jkomg> True, but I don't think anyone thinks there's a difference between lamp via ansible and lamp via heat, or terraform.
14:14:05 <gema> jkomg: I think it is different
14:14:12 <shamail> The question I see is that we need to agree whether we are building a catalog of workloads to test or a catalog of tools… If workloads, then once a workload (e.g. LAMP stack) is available using any tool, we should switch gears to the next workload.   The tool leveraged is open, but I am curious about the value of multiple tools for the same workload.
14:14:13 <tongli> ok, so for one workload, we just have one script (either in terraform, ansible, or heat). we will just run that against multiple clouds.
14:14:19 <tongli> can we all agree on that?
14:14:35 <jkomg> shamail: +1
14:14:37 <jkomg> tongli: +1
14:14:47 <gema> shamail: it depends on the api coverage we are after demonstrating
14:14:47 <leong> shamail +1
14:14:58 <gema> shamail: we could do each workload with a different tool
14:15:00 <gema> that'd work
14:15:08 <shamail> gema: I get your point too.
14:15:10 <skazi> shamail: +1
14:15:16 <leong> +1 gema
14:15:22 <shamail> Essentially with multiple tools doing the same workload, we might cover a broader API set.
14:15:28 <gema> yep
14:16:10 <tongli> so it looks like we want to use multiple tools for the same workload? I am ok with that as well.
14:16:38 <shamail> I think based on that maybe we shoot for one tool per workload as a starting point, and we can revisit running the same workload with additional tools if A) we have time remaining in the challenge and B) we have multiple tools in the repo for a workload
14:16:38 <hjanssen-hpe> hjanssen-hpe: +1
14:16:49 <hjanssen-hpe> I mean
14:16:54 <gema> shamail: +1
14:17:01 <hjanssen-hpe> shamail: +1
14:17:07 <catherineD> shamail: +1 one tool per workload
14:17:11 <shamail> tongli: I am proposing we start with 1 per workload and come back to running the workload with additional tools after we have run at least each workload once
14:17:12 <markvoelker> +1.  We have a limited amount of time before Barcelona, so I think for this first go-around we just want to use what we can get going quickly.
14:17:37 <skazi> shamail: +1
14:17:39 <jkomg> +1
14:17:40 <leong> let's work on what we have today and we can refine it later
14:17:45 <tongli> @shamail, that is fine. we already have lampstack in heat, terraform and ansible (I am working on it now)
14:17:49 <gema> shamail: there is no need to run all the tools per workload, if we use them independently of what we are deploying, it almost doesn't matter
14:17:51 <tongli> that will work.
14:17:55 <gema> it demonstrates interop
14:18:16 <shamail> #agree The team will start with one tool per workload for tests; we will be open to running the same workload with additional tools, but only after the first go-around has been completed for each workload
14:18:16 <tongli> so at the top level directory, we will have ansible, terraform and heat.
14:18:26 <shamail> gema: +1
14:18:30 <tongli> workload goes into one of the directory.
14:18:35 <leong> we can just let the user decide which tool they want to use for the workload
14:18:53 <tongli> leong, +1
14:19:09 <tongli> I think that is settled.
14:19:13 <gema> if you want to be able to compare results
14:19:20 <gema> we should all use the same tool for the same workload
14:19:23 <gema> else they are not comparable
14:19:31 <jkomg> exactly
14:19:46 <gema> so we agree which tool goes with which workload and do it all that way?
14:19:55 <hjanssen-hpe> Interoperability means doing the same thing everywhere and you are interoperable if the results are the same
14:20:06 <gema> hjanssen-hpe: yep
14:20:14 <shamail> gema and hjanssen-hpe: +1
14:20:14 <tongli> @gema, that is the thing, people may not agree on which tool to use?
14:20:31 <gema> tongli: if we have all the tools represented, one per workload, they'll agree
14:20:36 <catherineD> so we have one workload (LAMP stack) with 3 tools (Heat, terraform and ansible); should we choose one combination of workload + tool?
14:20:38 <gema> everybody gets a bit of benefit from this exercise
14:20:41 <tongli> I would rather have the options for developers. and we can define the content of the app.
14:20:49 <shamail> If the results are not based on the same test framework (which includes tool + workload), then it is hard to see them as the same
14:21:01 <gema> shamail: +1
14:21:14 <hjanssen-hpe> shamail: =1
14:21:38 <hjanssen-hpe> shamail: +1   (Sorry, I hate my new keyboard)
14:21:42 <tongli> we currently already see terraform, heat being used.
14:21:46 <shamail> all good hjanssen-hpe
14:21:56 <tongli> seems to me ansible is a way better tool for this.
14:22:52 <gema> as long as the tool works for one of us it should work for all
14:22:58 <tongli> can we just start with these three and see how people use them?
14:23:02 <gema> tool+workload combination, I mean
14:23:12 <shamail> I think we should have a common starting point (pick one: heat, terraform, or ansible) and then revisit later if there is time remaining
14:23:26 <luzC> shamail +1
14:23:41 <nikhil> hogepodge: gentle reminder, I have you scheduled for a Q&A with the glance team at tomorrow's mtg https://etherpad.openstack.org/p/glance-team-meeting-agenda . Please feel free to update/remove depending on your availability.
14:23:46 <tongli> @shamail, we already have terraform and heat in there.
14:23:56 <shamail> #startvote Which tool should we run initially for LAMPStack tests? terraform heat ansible
14:23:57 <openstack> Begin voting on: Which tool should we run initially for LAMPStack tests? Valid vote options are terraform, heat, ansible.
14:23:57 <catherine_d|1> shamail: pick one workload+tool to start with
14:23:58 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
14:24:27 <gema> #vote ansible
14:24:31 <shamail> Voting is open, please select your preferred tool and we can see the results in a second
14:24:36 <hjanssen-hpe> #vote ansible
14:24:43 <tongli> #vote ansible
14:24:45 * dhellmann arrives late
14:24:48 * nikhil apologizes for interrupting the meeting. the channel topic confusingly indicated no meeting :/
14:24:49 <shamail> hi dhellmann
14:24:52 <shamail> np nikhil
14:24:56 <dhellmann> #vote ansible
14:24:58 <jkomg> #vote ansible
14:25:00 <luzC> #vote heat
14:25:02 <leong> #vote heat
14:25:12 <MartinST_> #vote ansible
14:25:29 * markvoelker is fairly agnostic and thinks we should use whatever tools we actually have in the repo today
14:25:38 <shamail> Thanks markvoelker
14:25:49 <shamail> that would be heat or terraform
14:26:08 <gema> then heat got more votes :D
14:26:08 <shamail> lizdurst, catherine_d|1: closing out the voting…
14:26:12 <tongli> @shamail, right, doing this, head hurts.
14:26:28 <shamail> #endvote
14:26:29 <openstack> Voted on "Which tool should we run initially for LAMPStack tests?" Results are
14:26:30 <openstack> heat (2): leong, luzC
14:26:30 <catherine_d|1> of the two workloads submitted: heat+LAMP stack and terraform+LAMP stack
14:26:31 <openstack> ansible (6): MartinST_, dhellmann, hjanssen-hpe, tongli, gema, jkomg
14:26:59 <catherine_d|1> I would like to see whether the workloads are similar ...
14:27:20 <shamail> markvoelker: I agree with your position but it seems that ansible won by a lot (even though people are aware that it isn’t in the repo yet)
14:27:20 <jkomg> which we can do after we have our initial data sets if there's time
14:27:31 <tongli> I think that as long as we define the content of the workload, tools do not matter that much.
14:27:58 <tongli> same workload should test same things such as install, function calls.
14:28:12 <markvoelker> shamail: Ansible's fine as long as someone has a playbook to contribute that we can all run. =)
14:28:18 <shamail> markvoelker: +1
14:28:20 <jkomg> +1
14:28:21 <rohit404> IMO, the three tools are not directly comparable so not sure what criteria I need to use to pick one of the tools...i'm actually ok with what we have in the repo
14:28:36 <shamail> Does anyone have an ansible playbook for LAMPstack in the works or available already?
14:28:39 <luzC> markvoelker: +1
14:28:50 <catherine_d|1> markvoelker: +1
14:28:51 <jkomg> I think tongli said he's working on ansible
14:29:01 <shamail> ah, okay.. thanks jkomg
14:29:03 <tongli> @shamail, I am working on it now. should put up a patch later today or tomorrow.
14:29:12 <jkomg> word
14:29:34 <shamail> Okay to summarize this topic as “team will use ansible and LAMPstack as the first tool+workload combination”?
14:29:50 <gema> +1
14:30:02 <hjanssen-hpe> +1
14:30:08 <shamail> #agree Team will use ansible and LAMPstack as the first tool+workload combination to generate results.
14:30:25 <shamail> #action tongli will post ansible playbook for LAMPstack
14:30:34 <catherine_d|1> shamail: LAMP stack to me is the middleware ... do we have an application on top of the LAMP stack submitted?
14:30:58 <tongli> right, let's talk about the content of the app on top of lamp stack.
14:31:06 <shamail> catherine_d|1: We don’t yet…
14:31:56 <tongli> wordpress has been mentioned a few times.
14:32:30 <markvoelker> IMHO I'm not much concerned with the actual app, but rather what it needs from OpenStack.
14:33:14 <gema> and wordpress wouldn't need much
14:33:14 <markvoelker> E.g. it's going to want an instance for a web server that has external connectivity, a separate network that's not external w/another instance + volume for a database, that sort of thing
14:33:18 <hjanssen-hpe> The app should use Openstack features/facilities and not just test a VM
14:33:20 <catherine_d|1> Is Wordpress included in the current 2 LAMP stack submissions?
14:33:25 <tongli> @markvoelker, I thought that the workload is to test that a deployed app runs correctly on openstack.
14:33:49 <catherine_d|1> markvoelker: the app is to demonstrate that the LAMP stack works
14:33:53 <hjanssen-hpe> markvoelker: +1
14:34:01 <shamail> markvoelker: +1
14:34:06 <leong> the app on the top doesn't really matter unless that involves testing the OpenStack API
14:34:12 <markvoelker> tongli: Sure, but if all OpenStack is doing is spinning up a single instance running a generic x86 linux os, we haven't really proved much in terms of interoperability
14:34:23 <hjanssen-hpe> Deploying an app without using Openstack only shows that the hypervisor works
14:34:32 <leong> catherine_d|1 the wordpress is included in the current heat LAMP
14:34:34 <gema> markvoelker: should be able to run on an AArch64 linux too
14:34:37 <markvoelker> Modern apps need more than that from the IaaS layer, so we want to exercise more OpenStack capabilities
14:34:44 <catherine_d|1> leong: thx
14:35:03 <gema> as in, the image shouldn't matter
14:35:17 <tongli> so the terraform lampstack I put up there does the following:
14:35:30 <tongli> 1. provision 3 nodes (node count can be made configurable)
14:35:50 <tongli> 2. install mysql
14:36:01 <tongli> 3. install lamp components
14:36:04 <hogepodge> o/
14:36:12 <tongli> 4. add a one page web app
14:36:16 <shamail> hi hogepodge
14:36:40 <tongli> 5. 3 nodes working together serve the lamp stack app.
14:36:51 <tongli> just my first shot at it.
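A minimal Ansible sketch of the kind of provisioning step tongli lists above (step 1) might look like the play below. This is illustrative only, not the patch under review: it assumes Ansible 2.x with the os_server OpenStack module and a configured clouds.yaml entry, and the cloud, image, flavor, and keypair names are hypothetical.

    # Illustrative sketch only -- not the actual lampstack patch.
    # Assumes Ansible 2.x (os_server module) and a clouds.yaml entry
    # named "mycloud"; image/flavor/key names are hypothetical.
    - name: Provision LAMP stack nodes
      hosts: localhost
      connection: local
      gather_facts: false
      vars:
        node_count: 3                    # step 1: three nodes, count configurable
      tasks:
        - name: Boot the instances
          os_server:
            cloud: mycloud
            name: "lamp-node-{{ item }}"
            image: ubuntu-16.04          # hypothetical image name
            flavor: m1.small             # hypothetical flavor name
            key_name: interop-key        # hypothetical keypair
            auto_ip: yes
            wait: yes
          with_sequence: count={{ node_count }}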
14:36:58 <markvoelker> tongli: sounds totally reasonable.  Bonus points if we could get it to exercise a few more capabilities (like if the mysql box had a persistent cinder volume attached or was on an isolated network).
14:37:05 <dhellmann> tongli : seems like a good demo
14:37:13 <dhellmann> markvoelker : ++
14:37:29 <gema> tongli: +1
14:37:30 <leong> markvoelker.. i think the heat template covered that
14:37:45 <hjanssen-hpe> tongli:  An excellent start!
14:37:50 <shamail> nice leong
14:37:57 <catherine_d|1> tongli: so the verification that the deployed LAMP stack works is for a user to hit the webpage?
14:38:00 <tongli> ok, I will add the cinder volume for database.
14:38:10 <leong> in the heat template, the db layer using cinder volume as the persistent storage
14:38:22 <dhellmann> tongli : or trove? :-)
14:38:48 <tongli> are we all ok that the actual app can be just something simple to prove that the database was connected and everything else is working?
14:38:50 <catherine_d|1> leong: could you please describe the workload that you submitted (Heat+LAMPstack)
14:39:01 <shamail> tongli: +1
14:39:01 <tongli> do not need to be a very complex app?
14:39:14 <leong> it is a 3 tier lamp stack as well, with wordpress installed
14:39:27 <dhellmann> tongli : +1
14:39:30 <leong> it provisions 3 networks, one for each tier
14:39:33 <tongli> @dhellmann, hmmm, we are doing lamp, trove , interesting idea though.
14:39:50 <hjanssen-hpe> tongli: +1
14:39:51 <leong> the db tier also utilises a Cinder volume as a persistent store
14:40:11 <skazi> tongli: +1, imo the networking itself is already showing some openstack features
14:40:21 <leong> there is also a separate heat template if you want to test AutoScaling (but this is dependent on ceilometer)
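To make the pattern leong describes concrete, a trimmed Heat (HOT) fragment for the db tier might look like the sketch below. This is not the submitted template: only the instance, Cinder volume, and attachment resources are shown, and the image, flavor, and network names are hypothetical.

    # Illustrative sketch only -- not the submitted heat template.
    # Shows a db-tier instance with a persistent Cinder volume on an
    # isolated network; parameter defaults are hypothetical.
    heat_template_version: 2015-04-30

    parameters:
      image:
        type: string
        default: ubuntu-16.04            # hypothetical image name
      flavor:
        type: string
        default: m1.small                # hypothetical flavor name
      db_network:
        type: string
        default: lamp-db-net             # hypothetical isolated db-tier network

    resources:
      db_volume:
        type: OS::Cinder::Volume
        properties:
          size: 10                       # GB of persistent storage for the database

      db_server:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          networks:
            - network: { get_param: db_network }

      db_volume_attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          volume_id: { get_resource: db_volume }
          instance_uuid: { get_resource: db_server }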
14:41:06 <leong> question: are we testing defcore-specific apis in this interop challenge? or beyond?
14:41:08 <catherine_d|1> leong: how does a user verify that this workload is functional?
14:41:26 <jkomg> initial testing should stick to defcore standards; we don't want to be too in the weeds
14:41:26 <tongli> ok, I think we are all saying similar things. the application should be simple.
14:41:27 <gema> leong: beyond
14:41:32 <jkomg> really? alright
14:41:58 <gema> we could step it up next cycle, when networking is fully in the program
14:42:13 <shamail> jkomg: +1
14:42:21 <hogepodge> leong: catherine_d|1: I'll reiterate that one of the initial tasks should be to run the tests that are part of DefCore, that would cover 100% of the DefCore required capabilities
14:42:28 <skazi> I think it's better to have more apps than just one with many features
14:42:30 <leong> catherine_d|1 the Heat engine will output the results if the deployment is successful... and the user can validate by viewing the wordpress site
14:42:36 <skazi> at least from the presentation pov
14:42:41 <tongli> the test should check that all the setup and installation steps are successful (or fail), then access the application (via restful api maybe)
14:42:51 <shamail> I think initially the thought was to use capabilities covered by the OpenStack-Powered program… We could definitely expand beyond that but that might be after the Barcelona summit.
14:43:09 <jkomg> shamail: +1
14:43:20 <dhellmann> shamail : that makes sense; start with what we're already testing
14:43:30 <leong> hogepodge.. i understand that.. that's why the heat template that we submitted has two different files.. one only tests defcore apis, the other tests beyond
14:43:35 <shamail> We wanted to use OpenStack-Powered Clouds/Services to showcase API interoperability as a foundation and then show workloads being deployed as the next layer
14:44:09 <shamail> hogepodge: +1
14:44:53 <tongli> the patch I put up also creates security groups, rules, keypairs etc.
14:44:56 <shamail> hogepodge: the process outlined in the etherpad/wiki (draft) today starts with the first task being to run RefStack against the cloud
14:44:59 <shamail> https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process
14:45:02 <tongli> so that running it will be easy.
14:45:34 <tongli> @shamail, we are running out of time.
14:45:45 <tongli> there are so many other things we need to address from the agenda.
14:45:48 <rohit404> so, ansible + LAMP + wordpress ?
14:45:53 <jkomg> +1
14:45:54 <shamail> thanks tongli
14:46:28 <shamail> Okay, so please review the testing candidate workloads/tools and we can discuss them again next week.  I am going to change topics now.
14:46:31 <hogepodge> shamail: sweet thanks
14:46:47 <leong> +1 shamail
14:47:05 <catherine_d|1> hogepodge: RefStack is one of the test categories for testing ...
14:47:15 <shamail> On this topic, I’d like to state that we selected a starting point, but please continue to add tools/workloads because I am sure once we start testing people might be able to get multiple test runs in after becoming comfortable with the submission process.  I think having multiple tools/workloads in the repo is beneficial and I hope we get to run more than just one tool per workload (but we have to prioritize)
14:47:27 <shamail> #link https://review.openstack.org/#/q/status:open+project:openstack/osops-tools-contrib
14:47:32 <shamail> open reviews for scripts as well.
14:47:37 <shamail> please go review later :)
14:47:43 <shamail> #topic Discuss how/where to share results (results repository)
14:47:44 <leong> agree shamail! :)
14:47:58 <shamail> markvoelker, tongli: can you lead this topic since you two have been thinking about this?
14:48:09 <markvoelker> sure
14:48:12 <shamail> thanks
14:48:22 <markvoelker> So basically once we pick workloads and people run them, we need a way to collect results
14:48:31 <markvoelker> And what's probably most valuable isn't binary "pass/fail"
14:48:40 <markvoelker> What's probably valuable is two things:
14:49:05 <markvoelker> 1.) Some light analysis of failures (e.g. "it didn't work because it assumes floating IP's will be used; we use provider networks instead")
14:50:08 <leong> looks like we need to define a 'result template' such as pass/fail, if fail, what fail
14:50:12 <markvoelker> 2.)  For things that did actually run for everyone, we can do some analysis later to glean best practices (E.g. "this checked for available image API's rather than assuming Glance v2 was available")
14:50:31 <markvoelker> So to that end we started a basic template for reporting results
14:50:46 <markvoelker> This isn't final, but gives you sort of a feel for what we'd like to collect
14:50:54 <tongli> @markvoelker, right, define a template (hint, yaml), then at the end of the run, replace the variables in the template.
14:50:58 <markvoelker> #link http://paste.openstack.org/show/553587/ Skeleton of results template
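The linked skeleton is not reproduced here, but an illustrative YAML rendering of the kind of record discussed (pass/fail per step, a light analysis of any failure, and the OpenStack version folded in as agreed earlier) might look like the sketch below; all field names are hypothetical, not the actual template.

    # Illustrative sketch only -- not the skeleton at the paste link above.
    # Field names are hypothetical; the failure analysis mirrors the
    # floating-IP vs provider-network example raised in the meeting.
    workload: lampstack
    tool: ansible
    cloud:
      openstack_version: mitaka          # version reported as part of the results
    result: fail
    steps:
      - name: provision nodes
        status: pass
      - name: install mysql
        status: fail
        analysis: >
          playbook assumed floating IPs for external connectivity;
          this cloud uses routable provider networks instead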
14:51:10 <shamail> markvoelker: Can you please post it in the etherpad as well?
14:51:17 <markvoelker> shamail: sure
14:51:23 <tongli> @markvoelker, yaml ,please.
14:51:24 <shamail> We can add it to the wiki later in the week
14:51:26 <shamail> thanks
14:51:40 <markvoelker> As far as the means of collecting that info, there's a couple of methods we could use
14:51:47 <tongli> eventually we feed these results to some charting tools to plot nice charts.
14:52:01 <markvoelker> E.g. for the initial run for Barcelona, we could just set up a SurveyMonkey (or similar tool) for folks to report to
14:52:18 <leong> tongli +1 yaml+++1
14:52:21 <markvoelker> Or we could just email.  Or...etc etc etc
14:52:36 <tongli> @markvoelker, you are not suggesting manually doing this, right?
14:52:54 <tongli> I would rather produce a yaml file then http post to somewhere.
14:52:57 <markvoelker> tongli: I am.  Because I'm not sure we've got time to build a client wrapper and server before Barcelona. =)
14:53:06 <gema> yeah, manually testing this is not going to scale
14:53:13 <markvoelker> Longer term I'm all in favor of automation
14:53:16 <shamail> tongli: some of the things markvoelker highlighted (e.g. light analysis of failures) will have to be manual
14:53:19 <jkomg> It'd have to be at least partially manual
14:53:23 <jkomg> right
14:53:23 <gema> plus some of those questions are open to interpretation
14:53:25 <shamail> jkomg: +1
14:53:26 <markvoelker> But if we want to have something to show before Barcelona, I think we need to move fast.
14:53:46 <leong> the result template can be in yaml, how to submit can be manual at this stage
14:54:05 <shamail> I agree that a simple pass/fail for each task doesn’t give us the data to learn from
14:54:09 <shamail> leong: +1
14:54:10 <leong> the template that markvoelker shown can be defined in yaml
14:54:13 <tongli> for the location of the results, I would just suggest we have something like swift or an http-post-capable site.
14:54:29 <shamail> I think we need to separate out this conversation between: what are we collecting, what format are we collecting it in
14:54:41 <leong> then people can either http post or email the yaml result back..
14:54:56 <gema> or git commit it
14:54:57 <shamail> Does the proposal for what type of information we should capture after tests look good as proposed by markvoelker?
14:55:02 <markvoelker> shamail: +1.  I'm far less concerned about the method of collection than figuring out what useful data we can collect.
14:55:04 <leong> and of course an http post or something can be built to validate the 'result format'
14:55:12 <shamail> markvoelker: ditto :)
14:55:12 <gema> shamail: +1
14:55:17 <jkomg> shamail: does to me, +1 on markvoelker
14:55:29 <skazi> markvoelker: +1
14:55:41 <tongli> another github project to contain the results?
14:55:55 <jkomg> also +1 that the data is more important than how we collect it
14:56:00 <shamail> It seems that we all like this notion of capturing results and including a light analysis/brief description of where a workload failed
14:56:05 <markvoelker> So, for today: is there any important data that we should be collecting that isn't in http://paste.openstack.org/show/553587/ already?
14:56:31 <shamail> markvoelker: The summary sounded good to me, I will review the questions later in the day again
14:56:45 <gema> markvoelker: I would add the bug number, if any, that you raised from this testing
14:56:54 <gema> that makes all the errors we uncover traceable
14:57:03 <shamail> #action please look at the results template at http://paste.openstack.org/show/553587/ and share thoughts on ML on whether it captures everything we’d want (everyone)
14:57:14 <markvoelker> gema: bug against what?  OpenStack?
14:57:21 <gema> markvoelker: against any project
14:57:25 <tongli> @gema, are you saying we create bugs against the cloud ?
14:57:29 <gema> openstack, linux, lamp
14:57:36 <markvoelker> gema: I'm thinking a lot of failures we're going to see won't be the result of OpenStack bugs
14:57:38 <shamail> #agree We will discuss format/process for submitting results next week
14:57:49 <shamail> We are almost out of time and I think that will be a good conversation as well
14:57:53 <gema> markvoelker: ok
14:58:00 <shamail> #topic Open Discussion
14:58:04 <markvoelker> E.g. it'll be things like "this Ansible play assumed I need floating IP's for external connectivity, but the cloud I'm testing uses provider networks that are routable"
14:58:10 <shamail> We have two minutes remaining
14:58:42 <shamail> markvoelker: +1
14:59:08 <tongli> I would think that the test will run automatically on a daily basis.
14:59:26 <tongli> which will produce a lot of results, then we can chart over time
14:59:38 <shamail> Do we expect change over time?
14:59:44 <shamail> besides capacity issues
14:59:44 <jkomg> hopefully not :P
15:00:08 <tongli> hmmm, after errors get fixed, you would like to run again, right?
15:00:11 <luzC> what would be the timeframe? once we have the playbooks, we are expected to test the cloud for how many days?
15:00:16 <shamail> Alright, thanks everyone for making this a great meeting!  See you next week.
15:00:18 <shamail> #endmeeting