Added API for ElasticSearch #446
Conversation
Hi @ayersek64 ! Thanks for the PR. I see you're a Racker. Can you tell us a little bit about what project you're working on and why you're interested in using Mimic to mock Elasticsearch?
I'm working on Abacus, a metric reporting tool for several Rackspace products like Cloud Servers, Cloud Load Balancers, and RackConnect. I'm currently working on changes to the Product Report Card in the Production menu. This view gets some of its data from an ElasticSearch instance running in production, but it is very slow for development, and it is difficult to control the response data to see different app behaviors. I would like to have a localized alternative, and Mimic fits that need. -Kevin Ayers
Force-pushed from 5191165 to 33be545
Can someone describe what I need to do to get the Travis CI build to pass? I see from the details that the builds are failing during unit tests when executing the ElasticSearch unit test on 'import mock'. Is the requirements.txt file not used to set up the CI build environment? If not, what else do I need to update to ensure mock is included? Thanks.
@ayersek64 - No, requirements.txt is not used; it is installed from setup.py. Since mimic is an application designed to be installed from PyPI, you always have to add dependencies to setup.py since that is the primary source of truth. However, more importantly, we don't use the mock module in our tests on purpose. mock leads to poor-quality tests that stub out important pieces of the implementation and don't provide a high degree of assurance that things aren't broken. In fact Mimic's tag line has been, at times, 'testing without mocking' :).
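To illustrate the point about setup.py being the source of truth, here is a minimal sketch of how a dependency would be declared there rather than in requirements.txt. The package names and structure below are illustrative assumptions, not Mimic's actual setup.py:

```python
# Illustrative sketch only: declaring dependencies in setup.py, the source
# of truth for an application installed from PyPI. requirements.txt is not
# consulted at install time. Package names here are examples.
SETUP_KWARGS = {
    "name": "mimic",
    "install_requires": [
        "twisted",  # example runtime dependency
        "treq",     # example runtime dependency
    ],
}

# setup.py itself would end with:
#     from setuptools import setup
#     setup(**SETUP_KWARGS)
```

Anything a test imports at runtime has to appear in `install_requires` (or an equivalent extras list) for CI, which installs the package from setup.py, to see it.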
@ayersek64 - The best way to eliminate usage of mock
@ayersek64 - To make it easier to figure out whether the tests are going to pass in Travis CI, you can run
    ('response for index/type not available in'
     ' ElasticSearch data file [elasticsearch.json]')}

    log.msg("\n{'ES status': 'ok', 'indexes': " + ",".join(indexes) +
Rather than doing string manipulation to produce a log message, make use of Twisted's structured logging support. This would be from twisted.logger import Logger rather than from twisted.python import log; more details here.
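The structured-logging suggestion above can be sketched without Twisted installed. Twisted's twisted.logger.Logger takes a format string plus keyword fields (e.g. log.info("ES status: {status}", status="ok", indexes=indexes)), so fields travel as data instead of being pre-concatenated into the message. The stdlib analogue below shows the same idea; the JsonFormatter class and the "fields" key are illustrative choices, not part of any library:

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record plus its structured fields as one JSON line."""
    def format(self, record):
        event = {"message": record.getMessage()}
        # 'fields' is attached via logging's `extra` mechanism below.
        event.update(getattr(record, "fields", {}))
        return json.dumps(event, sort_keys=True)

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("es.mock")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

indexes = ["products", "reports"]
# Structured call: status and indexes stay machine-readable instead of
# being baked into a string like log.msg("\n{'ES status': 'ok', ...") does.
logger.info("ES status check",
            extra={"fields": {"status": "ok", "indexes": indexes}})

print(buf.getvalue().strip())
```

With twisted.logger the equivalent call would carry the same fields on the emitted event, so observers can filter or serialize them without re-parsing message strings.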
Wow. Interesting tag line. I've been using mock since I started at Rackspace 4 months ago on the Abacus project, and it has been extremely helpful in isolating method/function execution in unit tests; I've actually discovered and resolved numerous bugs in the code. The mocking I did for Mimic actually helped me find numerous problems with my code as well. However, my experiences don't matter and the combined expertise of the project team does, so I will work on the modifications you recommended. On another note, the JSON-file-based response was intended to provide dynamic control of return data without having to restart the Mimic engine each time new data is needed. I do see your point about another team committing a modified JSON file that would break my use of Mimic. Originally, I wanted some way to specify the file to use when launching Mimic, but didn't see a clean way to do this. My time for Mimic work is limited at this point, so updates will be slow in coming, but I will do my best to get things working the way the Mimic team prefers. -Kevin Ayers
Well, it's also the current tag line that Mimic is "an API-compatible mock", so we have a fraught relationship with the word "mock" ;-).
mock can be a great introduction to the art of test doubles, and if you're not in the habit of requiring 100% test coverage and deterministic execution of every code path, then it can really elevate your testing. However, every test has multiple purposes; mock is great for making tests more unit-test-ish (i.e. more isolated from irrelevant concerns), but it's horrible for making tests more regression-test-ish (i.e. useful for detecting future breakages). There are a lot of techniques which can give you similar results: tests just as isolated, but more useful for testing the code as it's actually used.
You might find this video interesting: https://www.youtube.com/watch?v=Xu5EhKVZdV8
I wouldn't say that your experiences don't matter at all. You are, after all, a member of the project team as much as anybody :).
This is a discussion that periodically comes up, and we may need to find a way to address it. Generally, Mimic's preference is to populate data by using the same "real" public APIs that you'd use to populate data in a real service, to make sure as little code as possible is specific to Mimic. For example, when testing Nova, you "boot servers" in Mimic just as you would against a real cloud to create new servers, rather than specifying servers that should exist on the command line. However, many folks know what configuration they want to test against, and it's not entirely clear when or how to issue those API requests to populate the data that they want. So Mimic is deficient in that it doesn't provide a clear path to add test fixtures when you want to start it up with preconfigured data.
This is totally understandable; Mimic is nobody's full-time job, so we all get this to a greater or lesser degree. While I think we have a clear idea of an ideal state (don't include any test data; populate it with real ElasticSearch methods), I don't think we should block you from integrating this until it's totally perfect. But this comment is getting pretty long already, so I'll elaborate more later :)
Thank you for the reply and the video link. I'm always looking to improve my testing skills and strategies, and the video 'puts another tool in my toolbox'. Thanks again. -Kevin Ayers
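The "populate through the real API" preference discussed above can be sketched as follows. Instead of a checked-in elasticsearch.json fixture, test setup would issue the same indexing requests a real Elasticsearch client sends (PUT /{index}/{type}/{id} in the ES versions of that era), so no data is baked into the mock itself. Everything below, the in-memory store and the helper names, is a hypothetical stand-in for the mock's internal state, not Mimic's actual code:

```python
# Hypothetical sketch: a mock that starts empty and is seeded through the
# same operations a real Elasticsearch client would perform. The dict-based
# store stands in for the mock service's per-session state.

def index_document(store, index, doc_type, doc_id, doc):
    """Record a document the way a PUT /{index}/{type}/{id} request would."""
    store.setdefault(index, {}).setdefault(doc_type, {})[doc_id] = doc
    return {"_index": index, "_type": doc_type, "_id": doc_id, "created": True}

def get_document(store, index, doc_type, doc_id):
    """Look up a document the way GET /{index}/{type}/{id} would."""
    try:
        return {"found": True, "_source": store[index][doc_type][doc_id]}
    except KeyError:
        return {"found": False}

# Test setup seeds the mock via the "real" API instead of a fixture file,
# so each test controls exactly the data it depends on.
store = {}
index_document(store, "reports", "product", "1", {"name": "Cloud Servers"})
```

Because seeding happens at test-setup time through the public surface, another team editing a shared JSON file can no longer silently change what a test sees.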
hey @ayersek64 - I realize that you don't have a lot of time to spend on maintaining Mimic, but if you're using it for testing, I'm concerned that we may end up with divergent versions of Mimic deployed publicly. Do you have a few cycles to maybe clean this up into something minimal that could be landed - perhaps stacking on top of the in-progress top-level mock plugin support in #500 - and then stuffing your test data into it via a control-plane API?
Unfortunately, I'm finding it very difficult to find cycles outside my required workload. I'm thinking I need to cancel the PR and try again in the future. I'm not using Mimic for testing anymore; we scored QE resources, and they stood up instances of ElasticSearch and populated them with deterministic data, alleviating the need to use Mimic. -Kevin Ayers
OK. If you're definitely not going to work on it, let's close this PR.