Wednesday, 2016-08-10

01:36 *** kei_ has joined #openstack-defcore
01:37 <openstackgerrit> Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json  https://review.openstack.org/351339
01:38 *** kei has quit IRC
01:40 *** kei_ has quit IRC
01:47 <openstackgerrit> Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json  https://review.openstack.org/351339
03:09 *** woodster_ has quit IRC
04:34 *** rarcea has quit IRC
04:47 *** rarcea has joined #openstack-defcore
05:01 *** pcaruana has quit IRC
05:24 -openstackstatus- NOTICE: zuul is being restarted to reload configuration. Jobs should be re-enqueued but if you're missing anything (and it's not on http://status.openstack.org/zuul/) please issue a recheck in 30min.
05:55 <openstackgerrit> Catherine Diep proposed openstack/defcore: Remove tests that require second set of credentials from next.  https://review.openstack.org/338609
07:24 *** pcaruana has joined #openstack-defcore
07:33 <openstackgerrit> Catherine Diep proposed openstack/defcore: Flag advisory tests in 2016.01 due to requirement of admin credential.  https://review.openstack.org/353287
08:18 *** openstackgerrit has quit IRC
08:19 *** openstackgerrit has joined #openstack-defcore
09:40 *** xiangfeiz has joined #openstack-defcore
10:21 *** xiangfeiz has quit IRC
12:36 *** woodster_ has joined #openstack-defcore
13:00 *** edmondsw has joined #openstack-defcore
13:05 *** skazi has joined #openstack-defcore
13:24 <openstackgerrit> Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json  https://review.openstack.org/351339
13:36 *** ametts has joined #openstack-defcore
13:46 *** Tetsuo has joined #openstack-defcore
13:47 *** woodburn has joined #openstack-defcore
13:51 *** hjanssen-hpe has joined #openstack-defcore
13:51 *** hj-hpe has joined #openstack-defcore
13:56 *** tkfjt has joined #openstack-defcore
13:58 *** lizdurst has joined #openstack-defcore
13:59 *** tkfjt_ has joined #openstack-defcore
13:59 *** tongli has joined #openstack-defcore
13:59 *** tkfjt_ has left #openstack-defcore
13:59 *** leong has joined #openstack-defcore
13:59 *** kei_ has joined #openstack-defcore
14:00 *** kei_ is now known as kei
14:00 *** shamail has joined #openstack-defcore
14:00 <shamail> #startmeeting interop_challenge
14:00 <openstack> Meeting started Wed Aug 10 14:00:51 2016 UTC and is due to finish in 60 minutes.  The chair is shamail. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 <openstack> The meeting name has been set to 'interop_challenge'
14:01 <gema> o/
14:01 <catherineD> o/
14:01 <shamail> Hi everyone, who is here for the interop challenge meeting today?
14:01 <leong> o/
14:01 *** rohit404 has joined #openstack-defcore
14:01 <kei> o/
14:01 <woodburn> o/
14:01 <rohit404> 0/
14:01 *** tkfjt__ has joined #openstack-defcore
14:01 <skazi> o/
14:01 <hjanssen-hpe> o/
14:01 <shamail> Thanks gema, catherineD, leong, kei, woodburn, rohit404, skazi and hjanssen-hpe
14:01 <lizdurst> o/
14:02 <tongli> o/
14:02 <shamail> hey lizdurst and tongli
14:02 <markvoelker> o/
14:02 <eeiden> o/
14:02 <shamail> The agenda for today can be found at:
14:02 <shamail> #link https://wiki.openstack.org/wiki/Interop_Challenge#Meeting_Information
14:02 *** tkfjt has quit IRC
14:02 *** jkomg has joined #openstack-defcore
14:02 <shamail> hi markvoelker and eeiden
14:02 <shamail> #topic review action items from previous meeting
14:02 <jkomg> morning
14:02 <shamail> #link http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-03-14.00.html
14:02 *** Jokoester has joined #openstack-defcore
14:03 <shamail> The bottom of the linked log has the action items that were taken in the last meeting
14:03 <shamail> Document how to submit requests for new tools (markvoelker, shamail)
14:03 <tongli> please add your name to the attendance list at the bottom. thanks.
14:03 <gema> that document is automatically generated
14:03 <gema> that list gets created at the end of the meeting for today
14:04 *** MartinST_ has joined #openstack-defcore
14:04 <shamail> #link https://etherpad.openstack.org/p/interop-challenge-meeting-2016-08-10
14:04 <shamail> in case we need to take notes
14:04 <gema> +1
14:04 <shamail> So on the first topic on documenting how to submit…
14:04 <shamail> #link https://wiki.openstack.org/wiki/Interop_Challenge#How_to_Propose.2FSubmit_new_tools.2Fworkloads
14:05 <shamail> Added this section to the wiki as a starting point
14:05 <shamail> markvoelker: Do you have anything to add on this action item?
14:05 <markvoelker> shamail: we've been kicking around ideas on how to submit results
14:06 <shamail> Ah, yes.  I would like to hold off on that discussion since it's an agenda item for today
14:06 <markvoelker> 'k
14:06 <shamail> Thanks
14:06 <shamail> Determine where to share results (markvoelker, topol, tongli)
14:06 <shamail> As mentioned, I want to pass on this action item review for now since we will discuss it in depth today
14:06 <shamail> as part of the current agenda
14:07 <shamail> AI: Please add which OpenStack versions you plan to run interop challenge tests against (everyone)
14:07 <gema> #action Please add which OpenStack versions you plan to run interop
14:07 <gema> #action Please add which OpenStack versions you plan to run interop challenge tests against (everyone)
14:07 <shamail> This was an action from our previous meeting but I don’t think we discussed where to put this information
14:07 <gema> (sorry, copy & paste)
14:08 <shamail> This might be another one that intersects with the “sharing results” conversation, or I have some thoughts I can share in the open discussion section
14:08 <shamail> np gema, thanks.
14:08 <shamail> Onwards!
14:08 <leong> +1 shamail.. i think the OpenStack version can be part of the "results"
14:08 *** xiangfei1 has joined #openstack-defcore
14:08 <shamail> leong: I agree
14:08 <shamail> #topic Test candidates review
14:09 <tongli> @leong, +1
14:09 <shamail> The test candidates can be found in two sources:
14:09 <shamail> https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process
14:09 <shamail> Or line 28 in the etherpad for today
14:09 <shamail> tongli: can you lead this topic?
14:09 <tongli> @shamail, sure.
14:10 <tongli> I think we are open on what tools to use: terraform, ansible, heat, your choice.
14:10 <tongli> probably we need to figure out the content of the app. lampstack has been one, dockerswarm is another.
14:10 <gema> tongli: do you mean that the same workload deployed with different tools in different clouds means the workload is interoperable?
14:11 <tongli> we currently already have dockerswarm in terraform.
14:11 <tongli> @gema, I think we should talk about that. not exactly sure if different tools for the same workload will accomplish something more than just one tool.
14:11 <gema> tongli: I think different deploying tools will use different apis
14:11 <jkomg> gema: the same workload deployed with the same tools in multiple clouds. Just multiple tools, I'd think.
14:12 <gema> what we should do is have different deploying tools as different use cases?
14:12 <tongli> for example, if we have lampstack in terraform and ansible, will that be better vs lampstack in terraform alone?
14:12 <gema> maybe same workload?
14:12 <gema> or multiple ones
14:12 <shamail> jkomg: +1
14:12 <gema> tongli: it will be different
14:13 <jkomg> I don't see the effectiveness of employing lampstack with terraform and ansible; you're deploying the same thing with different tools. We're not testing the tools.
14:13 <jkomg> s/employing/deploying/g
14:13 <tongli> jkomg, +1.
14:13 <gema> jkomg: if the tools are using the openstack apis they are part of getting the workload running
14:13 <gema> jkomg: agree with you we are not trying to test the tools
14:13 <jkomg> True, but I don't think anyone thinks there's a difference between lamp via ansible and lamp via heat, or terraform.
14:14 <gema> jkomg: I think it is different
14:14 <shamail> The question I see is that we need to agree whether we are building a catalog of workloads to test or tools… If workloads, then once a workload (e.g. LAMPstack) is available using any tool we should switch gears to the next workload.  The tool leveraged is open, but I am curious about the value of multiple tools for the same workloads.
14:14 <tongli> ok, so for one workload, we just have one script (either in terraform, ansible, or heat). we will just run that against multiple clouds.
14:14 <tongli> can we all agree on that?
14:14 <jkomg> shamail: +1
14:14 <jkomg> tongli: +1
14:14 <gema> shamail: it depends on the api coverage we are after demonstrating
14:14 <leong> shamail +1
14:14 <gema> shamail: we could do each workload with a different tool
14:15 <gema> that'd work
14:15 <shamail> gema: I get your point too.
14:15 <skazi> shamail: +1
14:15 <leong> +1 gema
14:15 <shamail> Essentially with multiple tools doing the same workload, we might cover a broader API set.
14:15 <gema> yep
14:16 <tongli> so it looks like we want to use multiple tools for the same workload? I am ok with that as well.
14:16 <shamail> I think based on that maybe we shoot for one tool per workload as a starting point, and we can revisit running the same workload with additional tools if A) we have time remaining in the challenge and B) we have multiple tools in the repo for a workload
14:16 <hjanssen-hpe> hjanssen-hpe: +1
14:16 <hjanssen-hpe> I mean
14:16 <gema> shamail: +1
14:17 <hjanssen-hpe> shamail: +1
14:17 <catherineD> shamail: +1 one tool per workload
14:17 <shamail> tongli: I am proposing we start with 1 per workload and come back to running the workload with additional tools after we have run each workload at least once
14:17 <markvoelker> +1.  We have a limited amount of time before Barcelona, so I think for this first go-around we just want to use what we can get going quickly.
14:17 <skazi> shamail: +1
14:17 <jkomg> +1
14:17 <leong> let's work on what we have today and we can refine it later
14:17 <tongli> @shamail, that is fine. we already have lampstack in heat, terraform and ansible (I am working on it now)
14:17 <gema> shamail: there is no need to run all the tools per workload; if we use them independently of what we are deploying, it almost doesn't matter
14:17 <tongli> that will work.
14:17 <gema> it demonstrates interop
14:17 *** DaisukeB has joined #openstack-defcore
14:18 <shamail> #agree The team will start with one tool per workload for tests, we will be open to running the same workload with additional tools but only after the first go-around has been completed for each workload
14:18 <tongli> so at the top level directory, we will have ansible, terraform and heat.
14:18 <shamail> gema: +1
14:18 <tongli> each workload goes into one of those directories.
14:18 <leong> we can just let the user decide which tool they want to use for a workload
14:18 <tongli> leong, +1
14:19 <tongli> I think that is settled.
14:19 <gema> if you want to be able to compare results
14:19 <gema> we should all use the same tool for the same workload
14:19 <gema> else they are not comparable
14:19 <jkomg> exactly
14:19 <gema> so we agree which tool goes with which workload and do it all that way?
14:19 <hjanssen-hpe> Interoperability means doing the same thing everywhere, and you are interoperable if the results are the same
14:20 <gema> hjanssen-hpe: yep
14:20 <shamail> gema and hjanssen-hpe: +1
14:20 <tongli> @gema, that is the thing, people may not agree on which tool to use?
14:20 <gema> tongli: if we have all the tools represented, one per workload, they'll agree
14:20 <catherineD> so we have one workload (LAMP stack) with 3 tools (Heat, terraform and ansible); should we choose one combination of workload + tool?
14:20 <gema> everybody gets a bit of benefit from this exercise
14:20 <tongli> I would rather have the options for developers. and we can define the content of the app.
14:20 <shamail> If the results are not based on the same test framework (which includes tool + workload) then it makes them hard to be seen as the same
14:21 <gema> shamail: +1
14:21 *** catherine_d|1 has joined #openstack-defcore
14:21 <hjanssen-hpe> shamail: =1
14:21 <hjanssen-hpe> shamail: +1   (Sorry, I hate my new keyboard)
14:21 <tongli> we currently already see terraform and heat being used.
14:21 <shamail> all good hjanssen-hpe
14:21 <tongli> seems to me ansible is a way better tool for this.
14:22 <gema> as long as the tool works for one of us it should work for all
14:22 <tongli> can we just start with these three and see how people use them?
14:23 <gema> tool+workload combination, I mean
14:23 <shamail> I think we should have a common starting point (pick one: heat, terraform, or ansible) and then revisit later if there is time remaining
14:23 <luzC> shamail +1
14:23 <nikhil> hogepodge: gentle reminder, I have you scheduled for a Q&A with the glance team at tomorrow's mtg https://etherpad.openstack.org/p/glance-team-meeting-agenda . Please feel free to update/remove depending on your availability.
14:23 <tongli> @shamail, we already have terraform and heat in there.
14:23 <shamail> #startvote Which tool should we run initially for LAMPStack tests? terraform heat ansible
14:23 <openstack> Begin voting on: Which tool should we run initially for LAMPStack tests? Valid vote options are terraform, heat, ansible.
14:23 <catherine_d|1> shamail: pick one workload+tool to start with
14:23 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
14:24 <gema> #vote ansible
14:24 <shamail> Voting is open, please select your preferred tool and we can see the results in a second
14:24 <hjanssen-hpe> #vote ansible
14:24 <tongli> #vote ansible
14:24 * dhellmann arrives late
14:24 * nikhil apologizes for interrupting the meeting. the channel topic confusingly indicated no meeting :/
14:24 <shamail> hi dhellmann
14:24 <shamail> np nikhil
14:24 <dhellmann> #vote ansible
14:24 <jkomg> #vote ansible
14:25 <luzC> #vote heat
14:25 <leong> #vote heat
14:25 <MartinST_> #vote ansible
14:25 * markvoelker is fairly agnostic and thinks we should use whatever tools we actually have in the repo today
14:25 <shamail> Thanks markvoelker
14:25 <shamail> that would be heat or terraform
14:26 <gema> then heat got more votes :D
14:26 <shamail> lizdurst, catherine_d|1: closing out the voting…
14:26 <tongli> @shamail, right, doing this, heat hurts.
14:26 <tongli> head hurts
14:26 <shamail> #endvote
14:26 <openstack> Voted on "Which tool should we run initially for LAMPStack tests?" Results are
14:26 <openstack> heat (2): leong, luzC
14:26 <catherine_d|1> of the two workloads submitted, heat+LAMP and terraform+LAMP stack,
14:26 <openstack> ansible (6): MartinST_, dhellmann, hjanssen-hpe, tongli, gema, jkomg
14:26 <catherine_d|1> I would like to see whether the workloads are similar ...
14:27 <shamail> markvoelker: I agree with your position but it seems that ansible won by a lot (even when people are aware that it isn't in the repo yet)
14:27 <jkomg> which we can do after we have our initial data sets if there's time
14:27 <tongli> I think that as long as we define the content of the workload, tools do not matter that much.
14:27 <tongli> same workload should test the same things, such as install, function calls.
14:28 <markvoelker> shamail: Ansible's fine as long as someone has a playbook to contribute that we can all run. =)
14:28 <shamail> markvoelker: +1
14:28 <jkomg> +1
14:28 <rohit404> IMO, the three tools are not directly comparable so not sure what criteria I need to use to pick one of the tools...i'm actually ok with what we have in the repo
14:28 <shamail> Does anyone have an ansible playbook for LAMPstack in the works or available already?
14:28 <luzC> markvoelker: +1
14:28 <catherine_d|1> markvoelker: +1
14:28 <jkomg> I think tongli said he's working on ansible
14:29 <shamail> ah, okay.. thanks jkomg
14:29 <tongli> @shamail, I am working on it now. should put up a patch later today or tomorrow.
14:29 <jkomg> word
14:29 <shamail> Okay to summarize this topic as “team will use ansible and LAMPstack as the first tool+workload combination”?
14:29 <gema> +1
14:30 <hjanssen-hpe> +1
14:30 <shamail> #agree Team will use ansible and LAMPstack as the first tool+workload combination to generate results.
14:30 <shamail> #action tongli will post ansible playbook for LAMPstack
14:30 <catherine_d|1> shamail: LAMP stack to me is the middleware ... do we have an application on top of the LAMP stack submitted?
14:30 <tongli> right, let's talk about the content of the app on top of the lamp stack.
14:31 <shamail> catherine_d|1: We don’t yet…
14:31 <tongli> wordpress has been mentioned a few times.
14:32 <markvoelker> IMHO I'm not much concerned with the actual app, but rather what it needs from OpenStack.
14:33 <gema> and wordpress wouldn't need much
14:33 <markvoelker> E.g. it's going to want an instance for a web server that has external connectivity, a separate network that's not external w/another instance + volume for a database, that sort of thing
14:33 <hjanssen-hpe> The app should use Openstack features/facilities and not just test a VM
14:33 <catherine_d|1> Is Wordpress included in the current 2 LAMP stack submissions?
14:33 <tongli> @markvoelker, I thought that the workload is to test that a deployed app runs correctly on openstack.
14:33 <catherine_d|1> markvoelker: the app is to demonstrate that the LAMP stack works
14:33 <hjanssen-hpe> markvoelker: +1
14:34 <shamail> markvoelker: +1
14:34 <leong> the app on top doesn't really matter unless it involves testing the OpenStack API
14:34 <markvoelker> tongli: Sure, but if all OpenStack is doing is spinning up a single instance running a generic x86 linux os, we haven't really proved much in terms of interoperability
14:34 <hjanssen-hpe> Deploying an app without using Openstack only shows that the hypervisor works
14:34 <leong> catherine_d|1: wordpress is included in the current heat LAMP
14:34 <gema> markvoelker: it should be able to run on an AArch64 linux too
14:34 <markvoelker> Modern apps need more than that from the IaaS layer, so we want to exercise more OpenStack capabilities
14:34 <catherine_d|1> leong: thx
14:35 <gema> as in, the image shouldn't matter
14:35 <tongli> so the terraform lampstack I put up there does the following:
14:35 <tongli> 1. provision 3 nodes (can be made more configurable)
14:35 <tongli> 2. install mysql
14:36 <tongli> 3. install lamp components
14:36 <hogepodge> o/
14:36 <tongli> 4. add a one-page web app
14:36 <shamail> hi hogepodge
14:36 <tongli> 5. 3 nodes working together serve the lamp stack app.
14:36 <tongli> just my first shot at it.
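[Editor's note: tongli's five steps could be sketched as a minimal Ansible play. This is a hypothetical illustration only, not tongli's actual patch: the cloud name, image, flavor, and key are invented, and it assumes the `os_server` module from the Ansible 2.x OpenStack module set.]

```yaml
# Hypothetical sketch of step 1 (provision 3 nodes); not the actual patch.
# "mycloud" refers to an assumed entry in clouds.yaml; image, flavor, and
# key names are invented.
- name: Provision the lampstack nodes
  hosts: localhost
  connection: local
  tasks:
    - name: Boot one instance per role
      os_server:
        cloud: mycloud
        name: "lamp-{{ item }}"
        image: ubuntu-16.04
        flavor: m1.small
        key_name: lamp-key
      with_items:
        - web
        - db
        - app
```

Steps 2-5 (installing MySQL, the remaining LAMP components, and the one-page app) would follow as plays run against the booted hosts.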
14:36 <markvoelker> tongli: sounds totally reasonable.  Bonus points if we could get it to exercise a few more capabilities (like if the mysql box had a persistent cinder volume attached or was on an isolated network).
14:37 <dhellmann> tongli: seems like a good demo
14:37 <dhellmann> markvoelker: ++
14:37 <gema> tongli: +1
14:37 <leong> markvoelker.. i think the heat template covered that
14:37 <hjanssen-hpe> tongli: An excellent start!
14:37 <shamail> nice leong
14:37 <catherine_d|1> tongli: so the way to verify that the deployed LAMP stack works is for a user to hit the webpage?
14:38 <tongli> ok, I will add the cinder volume for the database.
14:38 <leong> in the heat template, the db layer uses a cinder volume as the persistent storage
14:38 <dhellmann> tongli: or trove? :-)
14:38 *** tkfjt__ has quit IRC
14:38 <tongli> are we all ok that the actual app can be just something simple to prove that the database was connected and everything else is working?
14:38 <catherine_d|1> leong: could you please describe the workload that you submitted (Heat+LAMPstack)?
14:39 <shamail> tongli: +1
14:39 <tongli> it does not need to be a very complex app?
14:39 <leong> it's a 3-tier lamp stack as well, with wordpress installed
14:39 <dhellmann> tongli: +1
14:39 <leong> it provisions 3 networks, one for each tier
14:39 <tongli> @dhellmann, hmmm, we are doing lamp, trove, interesting idea though.
14:39 <hjanssen-hpe> tongli: +1
14:39 <leong> the db tier also utilises a Cinder volume as the persistent store
14:40 <skazi> tongli: +1, imo the networking itself is already showing some openstack features
14:40 <leong> there is also a separate heat template if you want to test AutoScaling (but this one is ceilometer dependent)
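[Editor's note: leong's description of the db tier (its own network plus a persistent Cinder volume) maps onto Heat resources roughly as follows. This is a minimal hypothetical sketch, not the submitted template; all names, CIDRs, and sizes are invented.]

```yaml
# Hypothetical sketch of the db tier: an instance on its own network with
# a persistent Cinder volume attached. Not the actual submission.
heat_template_version: 2015-04-30
resources:
  db_net:
    type: OS::Neutron::Net
  db_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: db_net }
      cidr: 10.0.2.0/24
  db_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10            # GB of persistent storage for the MySQL data
  db_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04
      flavor: m1.small
      networks:
        - network: { get_resource: db_net }
  db_volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: db_server }
      volume_id: { get_resource: db_volume }
```

Heat resolves the dependency graph itself: the volume attachment waits for both the server and the volume to exist.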
14:41 <leong> question: are we testing defcore-specific apis in this interop challenge? or beyond?
14:41 <catherine_d|1> leong: how does a user verify that this workload is functional?
14:41 <jkomg> initial testing should stick to defcore standards; we don't want to be too in the weeds
14:41 <tongli> ok, I think we are all saying a similar thing. the application should be simple.
14:41 <gema> leong: beyond
14:41 <jkomg> really? alright
14:41 <gema> we could step it up next cycle, when networking is in full in the program
14:42 <shamail> jkomg: +1
14:42 <hogepodge> leong: catherine_d|1: I'll reiterate that one of the initial tasks should be to run the tests that are part of DefCore; that would cover 100% of the DefCore required capabilities
14:42 <skazi> I think it's better to have more apps than just one with many features
14:42 <leong> catherine_d|1: the Heat engine will output the results if the deployment is successful... and the user can validate by viewing the wordpress
14:42 <skazi> at least from the presentation pov
14:42 <tongli> the test should include that all the setup/installation steps are successful (or fail), then access the application (via restful api maybe)
14:42 <shamail> I think initially the thought was to use capabilities covered by the OpenStack-Powered program… We could definitely expand beyond that, but that might be after the Barcelona summit.
14:43 <jkomg> shamail: +1
14:43 <dhellmann> shamail: that makes sense; start with what we're already testing
14:43 <leong> hogepodge.. i understand that.. that's why the heat template that we submitted has two different files.. one only tests defcore apis, another tests beyond
14:43 <shamail> We wanted to use OpenStack-Powered Clouds/Services to showcase API interoperability as a foundation and then show workloads being deployed as the next layer
14:44 <shamail> hogepodge: +1
14:44 <tongli> the patch I put up also creates security groups, rules, keypairs etc.
14:44 <shamail> hogepodge: the process outlined in the etherpad/wiki (draft) today starts with the first task being to run RefStack against the cloud
14:44 <shamail> https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process
14:45 <tongli> so that running it will be easy.
14:45 <tongli> @shamail, we are running out of time.
14:45 <tongli> there are so many other things we need to address from the agenda.
14:45 <rohit404> so, ansible + LAMP + wordpress ?
14:45 <jkomg> +1
14:45 <shamail> thanks tongli
14:46 <shamail> Okay, so please review the testing candidate workloads/tools and we can discuss them again next week.  I am going to change topics now.
14:46 <hogepodge> shamail: sweet thanks
14:46 <leong> +1 shamail
14:47 <catherine_d|1> hogepodge: RefStack is one of the test categories for testing ...
14:47 <shamail> On this topic, I’d like to state that we selected a starting point but please continue to add tools/workloads, because I am sure once we start testing people might be able to get multiple test runs in after becoming comfortable with the submission process.  I think having multiple tools/workloads in the repo is beneficial and I hope we get to run more than just one tool per workload (but we have to prioritize)
14:47 <shamail> #link https://review.openstack.org/#/q/status:open+project:openstack/osops-tools-contrib
14:47 <shamail> there are open reviews for scripts as well.
14:47 <shamail> please go review later :)
14:47 <shamail> #topic Discuss how/where to share results (results repository)
14:47 <leong> agree shamail! :)
14:47 <shamail> markvoelker, tongli: can you lead this topic since you two have been thinking about this?
14:48 <markvoelker> sure
14:48 <shamail> thanks
14:48 <markvoelker> So basically once we pick workloads and people run them, we need a way to collect results
14:48 <markvoelker> And what's probably most valuable isn't a binary "pass/fail"
14:48 <markvoelker> What's probably valuable is two things:
14:49 <markvoelker> 1.) Some light analysis of failures (e.g. "it didn't work because it assumes floating IP's will be used; we use provider networks instead")
14:50 <leong> looks like we need to define a 'result template': pass/fail, and if fail, what failed
14:50 <markvoelker> 2.) For things that did actually run for everyone, we can do some analysis later to glean best practices (E.g. "this checked for available image API's rather than assuming Glance v2 was available")
14:50 <markvoelker> So to that end we started a basic template for reporting results
14:50 <markvoelker> This isn't final, but gives you sort of a feel for what we'd like to collect
14:50 <tongli> @markvoelker, right, define a template (hint, yaml), then at the end of the run, replace the variables in the template.
14:50 <markvoelker> #link http://paste.openstack.org/show/553587/ Skeleton of results template
14:51 <shamail> markvoelker: Can you please post it in the etherpad as well?
14:51 <markvoelker> shamail: sure
14:51 <tongli> @markvoelker, yaml, please.
14:51 <shamail> We can add it to the wiki later in the week
14:51 <shamail> thanks
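[Editor's note: the skeleton at the paste link above has not been preserved. Purely as a hypothetical reconstruction from the fields discussed in this meeting (cloud, OpenStack version, tool, workload, pass/fail, and light failure analysis), a YAML result might look like this; every field name and value below is invented.]

```yaml
# Hypothetical result record; not the actual skeleton from
# paste.openstack.org/show/553587.
cloud: examplecloud          # product/distribution under test
openstack_version: mitaka
tool: ansible
workload: lampstack
outcome: fail                # pass | fail
failure_analysis: >
  Play assumed floating IPs for external connectivity; this cloud
  uses directly routable provider networks instead.
related_bugs: []             # links to any bugs filed, if applicable
```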
14:51 <markvoelker> As far as the means of collecting that info, there are a couple of methods we could use
14:51 <tongli> eventually we feed these results to some charting tool to plot nice charts.
14:52 <markvoelker> E.g. for the initial run for Barcelona, we could just set up a SurveyMonkey (or similar tool) for folks to report to
14:52 <leong> tongli +1 yaml+++1
14:52 <markvoelker> Or we could just email.  Or...etc etc etc
14:52 <tongli> @markvoelker, you are not suggesting manually doing this, right?
14:52 <tongli> I would rather produce a yaml file then http post it to somewhere.
14:52 <markvoelker> tongli: I am.  Because I'm not sure we've got time to build a client wrapper and server before Barcelona. =)
14:53 <gema> yeah, manually testing this is not going to scale
14:53 <markvoelker> Longer term I'm all in favor of automation
14:53 <shamail> tongli: some of the things markvoelker highlighted (e.g. light analysis of failures) will have to be manual
14:53 <jkomg> It'd have to be at least partially manual
14:53 <jkomg> right
14:53 <gema> plus some of those questions are open to interpretation
14:53 <shamail> jkomg: +1
14:53 <markvoelker> But if we want to have something to show before Barcelona, I think we need to move fast.
14:53 <leong> the result template can be in yaml; how to submit can be manual at this stage
14:54 <shamail> I agree that a simple pass/fail for each task doesn’t give us the data to learn from
14:54 <shamail> leong: +1
14:54 <leong> the template that markvoelker showed can be defined in yaml
14:54 <tongli> for the location of the results, I would just suggest we have something like swift or an http-post-capable site.
14:54 <shamail> I think we need to separate out this conversation between: what are we collecting, and what format are we collecting it in
14:54 <leong> then people can either http post or email the yaml result back..
14:54 <gema> or git commit it
14:54 <shamail> Does the proposal for what type of information we should capture after tests look good as proposed by markvoelker?
14:55 <markvoelker> shamail: +1.  I'm far less concerned about the method of collection than figuring out what useful data we can collect.
14:55 <leong> and of course an http post endpoint or something can be built to validate the 'result format'
14:55 <shamail> markvoelker: ditto :)
14:55 <gema> shamail: +1
14:55 <jkomg> shamail: does to me, +1 on markvoelker
14:55 <skazi> markvoelker: +1
14:55 <tongli> another github project to contain the results?
14:55 <jkomg> also +1 that the data is more important than how we collect it
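[Editor's note: tongli's "produce a yaml file then http post it" idea could be sketched like this in Python. A hypothetical illustration only: the field names and the collection URL are invented, and the YAML is emitted by hand to stay dependency-free.]

```python
import urllib.request

# Fields assumed from the meeting discussion; the real template lived at
# paste.openstack.org/show/553587 and is not preserved.
REQUIRED = ("cloud", "openstack_version", "tool", "workload", "outcome")

def render_result_yaml(result):
    """Serialize a flat result dict as simple one-key-per-line YAML."""
    missing = [k for k in REQUIRED if k not in result]
    if missing:
        raise ValueError("missing fields: " + ", ".join(missing))
    return "".join("%s: %s\n" % (k, result[k]) for k in sorted(result))

def post_result(url, result):
    """HTTP POST the rendered YAML; returns the response status code."""
    req = urllib.request.Request(
        url,
        data=render_result_yaml(result).encode("utf-8"),
        headers={"Content-Type": "application/x-yaml"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.getcode()
```

A real collector would also validate the 'result format' server-side, as leong suggests above.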
14:56 <shamail> It seems that we all like this notion of capturing results and including a light analysis/brief description of where a workload failed
14:56 <markvoelker> So, for today: is there any important data that we should be collecting that isn't in http://paste.openstack.org/show/553587/ already?
14:56 <shamail> markvoelker: The summary sounded good to me, I will review the questions later in the day again
14:56 <gema> markvoelker: I would add the bug number, if any, that you raised from this testing
14:56 <gema> that makes all the errors we uncover traceable
14:57 <shamail> #action please look at the results template at http://paste.openstack.org/show/553587/ and share thoughts on the ML on whether it captures everything we’d want (everyone)
14:57 <markvoelker> gema: bug against what?  OpenStack?
14:57 <gema> markvoelker: against any project
14:57 <tongli> @gema, are you saying we create bugs against the cloud?
14:57 <gema> openstack, linux, lamp
14:57 <markvoelker> gema: I'm thinking a lot of failures we're going to see won't be the result of OpenStack bugs
14:57 <shamail> #agree We will discuss format/process for submitting results next week
14:57 <shamail> We are almost out of time and I think that will be a good conversation as well
14:57 <gema> markvoelker: ok
14:58 <shamail> #topic Open Discussion
14:58 <markvoelker> E.g. it'll be things like "this Ansible play assumed I need floating IP's for external connectivity, but the cloud I'm testing uses provider networks that are routable"
14:58 <shamail> We have two minutes remaining
14:58 *** lizdurst has quit IRC
14:58 <shamail> markvoelker: +1
14:59 <tongli> I would think that the test will run automatically on a daily basis.
14:59 <tongli> which will produce a lot of results, then we can chart them over time
14:59 <shamail> Do we expect change over time?
14:59 <shamail> besides capacity issues
14:59 <jkomg> hopefully not :P
15:00 <tongli> hmmm, after errors get fixed, you would like to run again, right?
15:00 <luzC> what would be the timeframe? once we have the playbooks, we are expected to test the cloud for how many days?
15:00 <shamail> Alright, thanks everyone for making this a great meeting!  See you next week.
15:00 <shamail> #endmeeting
15:00 <openstack> Meeting ended Wed Aug 10 15:00:18 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.html
15:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.txt
15:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.log.html
15:00 <shamail> I have to run but this is the DefCore channel so the conversation can continue :)
15:00 <markvoelker> tongli: well, honestly a big part of what I'd hope to get out of this is a set of best practices.  I'm not sure daily runs really help with that.
15:01 * shamail sneaks out
15:01 <markvoelker> E.g. I'm more interested in figuring out that if I want to deploy a 3-tier web app with Ansible, when I go write my plays I should do things like:
15:01 <tongli> @markvoelker, so you expect just one set of results from each cloud?
15:01 <gema> tongli: yeah, me too
15:01 <jkomg> one set from each version
15:02 <gema> not sure what we would gain by automating this
15:02 <jkomg> the expectation is that x test will run the same on x cloud at x version, no matter the # of iterations, right?
15:02 <gema> the value of this testing is how wide the run is
15:02 <gema> not how many of them we do
15:02 *** DaisukeB has quit IRC
15:02 <gema> jkomg: right
15:02 <markvoelker> 1.) Check for supported image API's rather than assuming Glance v1 is available
15:02 *** shamail has quit IRC
15:03 <markvoelker> 2.) Don't assume floating IP's are necessary for external connectivity
15:03 <markvoelker> 3.) etc etc etc
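[Editor's note: markvoelker's first best practice can be made concrete. Glance serves a version document at its unversioned endpoint root, so a workload wrapper can pick a mutually supported major version instead of hard-coding one. A hypothetical sketch; the function name and the client support list are invented.]

```python
def pick_image_api(versions_doc, client_supports=("v2", "v1")):
    """Choose the newest image API major version both sides support.

    versions_doc is the JSON body a Glance endpoint returns at its
    unversioned root, e.g.:
      {"versions": [{"id": "v2.3", "status": "CURRENT"}, ...]}
    """
    # Collect the major versions the cloud advertises as usable.
    available = {
        v["id"].split(".")[0]
        for v in versions_doc.get("versions", [])
        if v.get("status") in ("CURRENT", "SUPPORTED")
    }
    # Prefer the client's newest supported version that the cloud offers.
    for candidate in client_supports:
        if candidate in available:
            return candidate
    raise RuntimeError("no mutually supported image API version")
```

The same "discover, don't assume" pattern applies to the floating-IP point: probe whether the instance already has a routable address before allocating one.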
15:03 <markvoelker> tongli: Pretty much.  Get results from a bunch of clouds, see what we can learn from the failures, then repeat at some later point with the workloads adjusted based on what we learned the last time.
15:03 <markvoelker> tongli: at the end of the day, if we can provide app devs with some best practices to ensure their workloads run across more clouds, that's a win.
15:03 <tongli> mark, if we define the content of each workload very well, then these things will be tested each run.
15:04 <jkomg> I think if we see different results with the same configuration sets on the same clouds on the same version, interop is the least of our worries, right? :P
15:04 <gema> haha
15:04 <gema> jkomg: for our own peace of mind we should run them on each upgrade
15:05 <gema> even if not shared, then for ourselves
15:05 <tongli> if the results will just be posted a couple of times, then manual is fine.
15:05 <tongli> I thought that we would run this on a daily basis.
15:05 * jkomg is just steering away from having to hang this framework on jenkins or something for a daily run
15:05 <gema> tongli: manual has the problem that it is open to interpretation
15:05 <jkomg> gema: oh for sure, we want version testing
15:05 <jkomg> gema: I think if we're careful we can minimize that, like the framework markvoelker set up
15:05 <markvoelker> tongli: Sure, my point was just that running a daily test doesn't necessarily do much for us.  E.g. VIO version 2.5 is a shipped distribution...it's not changing daily.  The Heat template or Ansible playbook we run might be, but it's not.
15:06 <jkomg> did it work? no? what was the error? did the error help? or similar
15:06 <jkomg> markvoelker: +1, that's what I'm saying too
15:06 <tongli> we did not get to talk about the clouds we will run against.
15:07 <tongli> I really want to know what you guys think.
15:07 <jkomg> We should be able to get some of this done via the list, right?
15:07 <markvoelker> jkomg: yep, I think so
15:07 <tongli> the clouds we will run against are kind of a big deal since the results eventually will mean something.
15:08 <markvoelker> tongli: Well, speaking for myself: I'd like to see the workloads run against as many products as we can that are actually available to consumers right now.
15:08 <jkomg> markvoelker: +1
15:08 <jkomg> that's why we have cast a wide net for involvement
15:08 <markvoelker> E.g. public clouds (RAX, OVH, etc), distributions (RHOSP, VIO, MOS, etc), etc
15:08 <tongli> so you mean public openstack clouds,
15:08 *** kbaikov has quit IRC
15:09 <jkomg> all the clouds, as many as we can get
15:09 <jkomg> we want this not only to succeed, but to showcase that yep, throw x workload on any stack, it'll run like you expect
15:09 <jkomg> with expected results
15:09 <markvoelker> At the end of the day if we come up with best practices for creating interoperable workloads, then as an app dev I know that if I follow that guidance I can choose from a whole wealth of products.  Which is way cool.
15:09 <jkomg> right
15:11 <jkomg> and best practices aside, we're proving without a doubt that refstack testing works to show interop, with functional testing of as many of those tests as possible.
15:11 <jkomg> ala here's my refstack results, and here's a number of functioning workload tests proving said results.
15:11 <jkomg> interop? check.
15:12 <jkomg> anything else is gravy
15:13 <luzC> I have to run, talk to you soon :)
15:13 <jkomg> peace out
15:13 *** galstrom_zzz is now known as galstrom
15:14 *** leong has quit IRC
15:20 <hogepodge> Meeting agenda for today, a bit later than normal. Please add topics as you see fit.
15:20 <hogepodge> https://etherpad.openstack.org/p/DefCoreLunar.13
15:28 <eglute> thanks hogepodge!
15:50 *** MartinST_ has quit IRC
16:29 *** pcaruana has quit IRC
16:38 *** ametts has quit IRC
16:43 <openstackgerrit> Chris Hoge proposed openstack/defcore: Update schema to 1.6 to disallow additional properties  https://review.openstack.org/351363
16:51 *** rohit404 has quit IRC
16:55 *** catherine_d|1 has quit IRC
16:58 *** galstrom is now known as galstrom_zzz
17:19 *** tongli has quit IRC
17:21 *** Jokoester has quit IRC
17:28 *** ametts has joined #openstack-defcore
17:39 *** woodster_ has quit IRC
18:35 *** galstrom_zzz is now known as galstrom
18:42 *** ametts has quit IRC
18:43 *** ametts has joined #openstack-defcore
19:13 *** woodster_ has joined #openstack-defcore
19:52 <openstackgerrit> Merged openstack/defcore: Flag advisory tests in 2016.01 due to requirement of admin credential.  https://review.openstack.org/353287
20:34 *** ametts has quit IRC
21:30 *** jkomg has quit IRC
21:35 *** jkomg has joined #openstack-defcore
21:40 *** jkomg has quit IRC
22:15 *** galstrom is now known as galstrom_zzz
22:36 *** edmondsw has quit IRC
23:49 *** woodster_ has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!