
Added API for ElasticSearch #446

Closed
wants to merge 1 commit

Conversation

ayersek64

No description provided.

@ksheedlo (Contributor) commented Oct 2, 2015

Hi @ayersek64 ! Thanks for the PR.

I see you're a Racker. Can you tell us a little bit about what project you're working on and why you're interested in using Mimic to mock Elasticsearch?

@ayersek64 (Author) replied

I'm working on Abacus, a metric reporting tool for several Rackspace products like Cloud Servers, Cloud Load Balancers, and RackConnect.
Here is a link to the production page:
https://abacus.rax.io/

I'm currently working on changes to the Product Report Card in the Production menu. This view gets some of its data from an ElasticSearch instance running in production, but that instance is very slow for development use, and it's difficult to control its response data to exercise different app behaviors. I would like a localized alternative, and Mimic fits that need.


@ayersek64 (Author) commented

Can someone describe what I need to do to get the Travis CI build to pass?

I see from the details that the builds are failing during unit tests, when the ElasticSearch unit test executes 'import mock'.
I have updated requirements.txt to include 'mock 1.3.0', so I'm not sure why the CI environment does not have mock available for the unit test.

Is the requirements.txt file not used to set up the CI build environment? If not, what else do I need to update to ensure mock is included?

Thanks.

@glyph (Contributor) commented Nov 11, 2015

@ayersek64 - No, requirements.txt is not used; Mimic is installed from setup.py. Since Mimic is an application designed to be installed from PyPI, you always have to add dependencies to setup.py, since that is the primary source of truth.
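
For reference, install-time dependencies live in setup.py's install_requires; a minimal sketch of the shape (the package names here are illustrative, not Mimic's actual dependency list):

    from setuptools import setup, find_packages

    setup(
        name="mimic",
        packages=find_packages(),
        # pip reads install_requires when installing from PyPI;
        # requirements.txt is not consulted.
        install_requires=[
            "twisted>=15.0",
            "six",
        ],
    )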

However, more importantly, we don't use the mock module in our tests on purpose. mock leads to poor-quality tests that stub out important pieces of the implementation and don't provide a high degree of assurance that things aren't broken. In fact, Mimic's tag line has been, at times, 'testing without mocking' :).

@glyph (Contributor) commented Nov 11, 2015

@ayersek64 - The best way to eliminate usage of mock in these tests would be to get rid of the file_based_responses module and replace it with responses generated from a set of model objects representing Elasticsearch indexes. If you land the PR in its current state, where the ES responses come pre-initialized with data specific to Abacus, then any tests of yours that depend on this data will eventually break: someone else who wants to use Mimic to test their Elasticsearch cluster will either change the data to be representative of their own application or, more likely to get through code review, change it so that Mimic starts with an empty Elasticsearch cluster that must be populated.
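
A minimal sketch of that direction (class and method names here are hypothetical, not part of this PR): model objects hold the index state, and responses are rendered from them on demand rather than read from a canned JSON file.

    class ESIndex(object):
        """In-memory model of one Elasticsearch index."""

        def __init__(self, name):
            self.name = name
            self.documents = {}  # doc_id -> source dict

        def index_document(self, doc_id, source):
            self.documents[doc_id] = source

        def search_response(self):
            """Render a body shaped like ES's search API output."""
            hits = [{"_index": self.name, "_id": doc_id, "_source": src}
                    for doc_id, src in self.documents.items()]
            return {"hits": {"total": len(hits), "hits": hits}}

Because the model starts empty, each test populates exactly the data it needs, and no one's fixtures can break anyone else's.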

@glyph (Contributor) commented Nov 11, 2015

@ayersek64 - to make it easier to figure out whether the tests are going to pass on Travis CI, you can run tox (or detox) locally; it installs Mimic the same way and should give you the same test results.

A contributor left an inline review comment on this diff context:

    ('response for index/type not available in'
     ' ElasticSearch data file [elasticsearch.json]')}

    log.msg("\n{'ES status': 'ok', 'indexes': " + ",".join(indexes) +

Rather than doing string manipulation to produce a log message, make use of Twisted's structured logging support. This would be from twisted.logger import Logger rather than from twisted.python import log; more details here.
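
A sketch of the suggested change (the function and field names are illustrative):

    from twisted.logger import Logger

    log = Logger()

    def report_status(indexes):
        # Structured logging: `indexes` travels with the event as data
        # instead of being concatenated into a preformatted string.
        log.info("ES status ok; indexes: {indexes}", indexes=indexes)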

@ayersek64 (Author) replied

Wow. Interesting tag line.

I've been using mock since I started at Rackspace 4 months ago on the Abacus project. It has been extremely helpful for isolating method/function execution in unit tests, and I've discovered and resolved numerous bugs in the code with it. The mocking I did for Mimic actually helped me find numerous problems with my code as well. However, my experiences don't matter and the combined expertise of the project team does, so I will work on the modifications you recommended.

On another note, the JSON-file-based response was intended to provide dynamic control of the return data without having to restart the Mimic engine each time new data is needed. I do see your point about another team committing a modified JSON file that would break my use of Mimic. Originally, I wanted some way to specify the file to use when launching Mimic, but I didn't see a clean way to do this.

My time for Mimic work is limited at this point, so updates will be slow in coming, but I will do my best to get things working the way the Mimic team prefers.


@glyph (Contributor) commented Nov 13, 2015

> Wow. Interesting tag line.

Well, the current tag line also says that Mimic is "an API-compatible mock", so we have a fraught relationship with the word "mock" ;-).

> I've been using mock since I started at Rackspace 4 months ago on the Abacus project. It has been extremely helpful for isolating method/function execution in unit tests, and I've discovered and resolved numerous bugs in the code with it. The mocking I did for Mimic actually helped me find numerous problems with my code as well.

mock can be a great introduction to the art of test doubles, and if you're not in the habit of requiring 100% test coverage and deterministic execution of every code path, it can really elevate your testing. However, every test has multiple purposes: mock is great for making tests more unit-test-ish (i.e. more isolated from irrelevant concerns), but it's horrible for making tests more regression-test-ish (i.e. useful for detecting future breakages). There are a lot of techniques which can keep tests just as isolated while remaining useful for testing the code as it's actually used.
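
For a concrete (and entirely hypothetical) contrast: a hand-written fake that implements the real interface keeps a test just as isolated, but still exercises real call signatures, whereas a Mock will happily record a call to a method that doesn't even exist:

    import unittest
    from unittest import mock  # or `import mock` on Python 2

    class FakeIndex(object):
        """A fake implementing the same interface as the real index."""
        def __init__(self):
            self.docs = {}

        def index(self, doc_id, source):
            self.docs[doc_id] = source

    class TestIndexing(unittest.TestCase):
        def test_with_fake(self):
            idx = FakeIndex()
            idx.index("1", {"product": "abacus"})
            self.assertEqual(idx.docs["1"]["product"], "abacus")

        def test_with_mock(self):
            idx = mock.Mock()
            # Passes even though the real method is `index`, not
            # `indexx`; the Mock records any call without complaint.
            idx.indexx("1", {"product": "abacus"})
            idx.indexx.assert_called_once_with("1", {"product": "abacus"})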

You might find this video interesting: https://www.youtube.com/watch?v=Xu5EhKVZdV8

> However, my experiences don't matter and the combined expertise of the project team does, so I will work on the modifications you recommended.

I wouldn't say that your experiences don't matter at all. You are after all a member of the project team as much as anybody :).

> On another note, the JSON-file-based response was intended to provide dynamic control of the return data without having to restart the Mimic engine each time new data is needed. I do see your point about another team committing a modified JSON file that would break my use of Mimic. Originally, I wanted some way to specify the file to use when launching Mimic, but I didn't see a clean way to do this.

This is a discussion that periodically comes up, and we may need to find a way to address it. Generally, Mimic's preference is to populate data by using the same "real" public APIs that you'd use to populate data in a real service, to make sure as little code as possible is specific to Mimic. For example, when testing Nova, you "boot servers" in Mimic just as you would against a real cloud, rather than specifying on the command line which servers should exist.
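
For Elasticsearch, the analogous flow might look like this sketch (the base URL and document are illustrative; a real test would take the endpoint from Mimic's service catalog):

    import requests

    # Hypothetical endpoint for the mimicked Elasticsearch service.
    ES = "http://localhost:8900/elasticsearch"

    # Populate a fixture through the same public API a real client
    # would use: PUT /{index}/{type}/{id} indexes a document.
    requests.put(ES + "/reports/report/1",
                 json={"product": "cloud_servers", "score": 98})

    # The test then reads it back through the ordinary search API.
    hits = requests.get(ES + "/reports/_search").json()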

However, many folks know what configuration they want to test against, and it's not entirely clear when or how to issue those API requests to populate the data they want. So Mimic is deficient in that it doesn't provide a clear path for adding test fixtures when you want to start it up with preconfigured data.

> My time for Mimic work is limited at this point, so updates will be slow in coming, but I will do my best to get things working the way the Mimic team prefers.

This is totally understandable; Mimic is nobody's full-time job, so we all get this to a greater or lesser degree.

While I think we have a clear idea of an ideal state (don't include any test data; populate it with real Elasticsearch methods), I don't think we should block you from integrating this until it's totally perfect. But this comment is getting pretty long already, so I'll elaborate more later :)

@ayersek64 (Author) replied

Thank you for the reply and the video link.

I'm always looking to improve my testing skills and strategies, and the video 'puts another tool in my toolbox'.

Thanks again.


@glyph (Contributor) commented Jan 26, 2016

hey @ayersek64 - I realize that you don't have a lot of time to spend on maintaining Mimic, but if you're using it for testing, I'm concerned that we may end up with divergent versions of Mimic deployed publicly. Do you have a few cycles to maybe clean this up into something minimal that could be landed - perhaps stacking on top of the in-progress top-level mock plugin support in #500 - and then stuffing your test data into it via a control-plane API?

@ayersek64 (Author) replied

Unfortunately, I'm finding it very difficult to find cycles outside my required workload.

I'm thinking I need to cancel the PR and try again in the future.

I'm not using Mimic for testing anymore. We scored QE resources, and they stood up instances of ElasticSearch populated with deterministic data, alleviating the need to use Mimic.


@glyph (Contributor) commented Jan 27, 2016

OK. If you're definitely not going to work on it, let's close this PR.
