
Feature request: Command "send" hook #28

Open

jugglinmike opened this issue Jan 2, 2015 · 6 comments

jugglinmike commented Jan 2, 2015

My test runner fails when asynchronous tests run for longer than some given timeout. The bottleneck for each test is the communication with the browser instance through Leadfoot. Currently, to avoid timeout failures for long tests, I have two options:

  1. Set the timeout for all tests to the time required to run the longest test
  2. Set the timeout for each test according to its needs

While option 1 is succinct, it makes recognizing timeout failures in fast tests take longer than necessary. Option 2 is more efficient, but it requires explicitly managing the timeout value for each test.

If the Leadfoot Command object offered a "send" hook as a signal each time it issued a command to the WebDriver server, then I could hook into this and re-set the timeout whenever any test attempted to communicate with the server. This could be implemented as a generic event, but for this use case, a single callback provided to the Command constructor would also work.
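To make the idea concrete, here is a rough sketch of the kind of hook I mean. None of these names exist in leadfoot; `withSendHook` and `onSend` are hypothetical, and a plain object stands in for the real Command:

```javascript
// Hypothetical sketch, not leadfoot API: wrap an object's methods so an
// onSend callback fires just before each command is dispatched.
function withSendHook(target, onSend) {
  var wrapped = {};
  Object.keys(target).forEach(function (name) {
    if (typeof target[name] !== 'function') {
      return; // copy-through of non-method properties omitted for brevity
    }
    wrapped[name] = function () {
      onSend(name); // signal: a command is about to be issued
      return target[name].apply(target, arguments);
    };
  });
  return wrapped;
}
```

A test could then pass something like `function () { this.timeout(1000); }.bind(this)` as `onSend` to reset the runner's timeout whenever any command is sent.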

Would the maintainers consider a feature like this in scope? If so, I'd be happy to submit a patch!

csnover (Member) commented Jan 2, 2015

Which timeout value are you referring to specifically? What are the limitations of the setTimeout function as it exists today that make it unsuitable for this task? A concrete example of the problem would also help, since otherwise nobody else can consider whether there is a better solution than the one you are proposing. Thanks!

jugglinmike (Author):

No problem. The "timeout" I'm referring to is a feature of the test runner
itself (Mocha in my case). It can be set and re-set
during test execution to dynamically modify the amount of time the runner will
wait before considering the test timed out.

Here's an example test:

var Command = require('leadfoot/Command');

describe('index page', function() {
  it('renders the application title', function() {
    var driver = new Command(session);

    this.timeout(1000);

    return driver.get('localhost:4000')
      .then(function() {
        this.timeout(1000);

        return driver.findAllByCssSelector('h1');
      }.bind(this)).then(function(header) {
        this.timeout(1000);

        return header.getVisibleText();
      }.bind(this)).then(function(text) {
        assert.equal(text, 'Black Phoenix');
      });
  });
});

All the asynchronicity of my UI tests comes from the interaction with the
browser. Each command sent adds to the test execution time, but this is
completely expected. I'm proposing an extension to the API that lets me express
this succinctly. For example:

  var Command = require('leadfoot/Command');

  describe('index page', function() {
    it('renders the application title', function() {
-     var driver = new Command(session);
+     var driver = new Command(session, {
+       onSend: function() {
+         this.timeout(1000);
+       }.bind(this)
+     });

-     this.timeout(1000);

      return driver.get('localhost:4000')
        .then(function() {
-         this.timeout(1000);
-
          return driver.findAllByCssSelector('h1');
        }.bind(this)).then(function(header) {
-         this.timeout(1000);
-
          return header.getVisibleText()
        }.bind(this)).then(function(text) {
          assert.equal(text, 'Black Phoenix');
        });
    });
  });

csnover (Member) commented Jan 2, 2015

Mocha allows you to set suite timeouts that apply to tests within that suite, so why doesn’t just calling this.timeout(1000) in the describe function work for you?

jugglinmike (Author):

Each test may execute a different number of commands, so the expected timeout can vary between tests in a suite. An alternative is to set the suite's timeout to a ceiling value, but that value has to be explicitly managed, and it also makes recognizing timeouts for "fast" tests take longer than is necessary.

To be clear, this feature is meant to promote maintainability and minimize time-to-failure in test code, not fix a problem in the library.

csnover (Member) commented Jan 3, 2015

Being able to reset the test timeout after each command execution is an interesting idea to reduce false test failures caused by delays in the communication channel. I’m still feeling uncertain about how useful a blanket reset like the one you’re showing in the example would be. What might be more useful would be to extend the APIs to allow users to specify a timeout for each command (I was thinking about doing this already), in which case we could expose that information so the underlying test system can optionally inherit the timeout for the currently executing command.

In the meantime, if you want to test or implement your idea right away, you don’t need any new API. You can just aspect over the _get, _post, and _delete functions of your session object to respond to any server request.
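That aspecting could look roughly like the sketch below. The session object in the test is a stand-in for illustration; only the _get/_post/_delete names come from the thread, and `aspectServerRequests` is a made-up helper name:

```javascript
// Sketch of the aspect approach: replace the session's private _get,
// _post, and _delete methods with wrappers that invoke a callback
// before delegating to the original implementation.
function aspectServerRequests(session, callback) {
  ['_get', '_post', '_delete'].forEach(function (name) {
    var original = session[name];
    if (typeof original !== 'function') {
      return; // skip methods the session does not define
    }
    session[name] = function () {
      callback(name); // e.g. reset the test runner's timeout here
      return original.apply(this, arguments);
    };
  });
}
```

Because these are private, undocumented methods, this is a stopgap rather than a supported extension point.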

jugglinmike (Author):

Thanks for the thoughtful response! I'm reluctant to overwrite undocumented methods, so for the time being, I'll just focus on the get method.

I've been experimenting with this pattern for Mocha, and I should say that it may not be so easy for that particular testing framework. I'd like to be able to configure the Command instance in the setup/beforeEach functions, but the context there (and thus the target of this.timeout) is not the current test. this.currentTest seems like the API intended to support this kind of behavior, but it's not available there. I still need to dig deeper to figure out why...

As you point out, though, there is utility for this feature beyond integration with that testing library, so those details shouldn't matter too much.
