
Advantage to using describe/it over feature/scenario in specs? (besides syntactic sugar)


Ruby 1.9.3, Rails 3.1.10, RSpec 2.13.0, Capybara 2.2.1

I am writing tests for a Rails 3 app -- a GUI for customers (and admins) to configure various phone settings. I have written six or so spec files, with plenty of others written before them that I used as templates. The following is a snapshot of what the spec files look like.

# spec/features/admin/administrators_spec.rb
require 'spec_helper'
include AdministratorHelper
include Helpers
feature "Exercise Administrators page"
  include_context "shared admin context"
  background do
    visit administrators_path
  end
  scenario "show index page" do
    title.should == "Administrators"
  end
  # ... other happy path tests
  # SAD PATH TESTS #
  scenario "validation: delete no administrators", js:true do
    click_button "Delete"
    page.driver.accept_js_confirms!
    error_message("Error: You did not select any administrators for deletion.")
  end
end

To my understanding, feature/scenario are specific to Capybara and to acceptance testing. Other collaborators have said that our "acceptance tests" test everything -- whether the database saved entries, whether the view rendered correctly, and so on. Each spec file corresponds to a page in the GUI rather than to a model or controller.

One of them had me take a course on edX (CS169.1x), where testing was taught differently -- a separate spec file per model and per controller, written in the describe/context/it style.
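
For reference, that style looks roughly like this (a sketch only -- the Administrator model and its name validation here are hypothetical):

# spec/models/administrator_spec.rb (hypothetical example)
require 'spec_helper'

describe Administrator do
  context "with a blank name" do
    it "is invalid" do
      admin = Administrator.new(name: "")
      admin.should_not be_valid
    end
  end
end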

  1. Is there any advantage to writing tests with describe/it over feature/scenario? (Besides syntactic sugar)
  2. By using Capybara's feature/scenario, does it slow down the test suite? (Compared to using RSpec's keywords)
  3. What exactly are the tests I am writing (as explained in the code block)? Acceptance, unit, a combination?
  4. Would writing tests like the above alone achieve higher coverage? (Our next goal is >80%)

Thank you for all the help and clarification.


Solution

  • The question is a bit broad, but I can answer with some advice and opinions based on my own experience.

    1. Is there any advantage to writing tests with describe/it over feature/scenario? (Besides syntactic sugar)

    Not as far as I know. However, you may find some convenient test framework features are easier to implement in one scheme than another.
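
    As far as I know, Capybara's feature DSL is a thin layer over RSpec's own keywords: feature is describe with type: :feature metadata, scenario is it, background is before, and given is let. A sketch of the same spec written both ways:

    # Capybara's feature DSL...
    feature "Administrators page" do
      background { visit administrators_path }
      scenario "shows the index" do
        page.should have_title "Administrators"
      end
    end

    # ...and the equivalent using RSpec's own keywords:
    describe "Administrators page", type: :feature do
      before { visit administrators_path }
      it "shows the index" do
        page.should have_title "Administrators"
      end
    end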

    2. By using Capybara's feature/scenario, does it slow down the test suite? (Compared to using RSpec's keywords)

    The keywords themselves will not be a significant factor in processing speed. The web driver you use and how the application is hosted during the tests will have a much larger impact.
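
    For example, Capybara lets you keep the fast in-process Rack::Test driver as the default and reserve a real browser for scenarios tagged js: true (a typical setup; whether :selenium is available depends on your Gemfile):

    # spec/spec_helper.rb
    Capybara.default_driver    = :rack_test  # fast, in-process, no JavaScript
    Capybara.javascript_driver = :selenium   # real browser, used only for js: true scenarios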

    3. What exactly are the tests I am writing (as explained in the code block)? Acceptance, unit, a combination?

    I would call them acceptance tests. However, there is not always a clear dividing line, and you need to look at how the tests will be run, and how they will be used in your development process.

    A mature development pipeline may have two or three separate test suites used for different purposes, probably implemented with different test frameworks. For instance, you might want a set of very fast tests (usually unit tests) that runs as a quick automated check on every new commit.
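
    With RSpec this can be as simple as excluding the slow, browser-driven scenarios from the default run (a sketch; the FULL_SUITE environment variable is just an invented convention):

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Skip js: true scenarios unless the full suite is requested,
      # e.g. FULL_SUITE=1 rspec
      config.filter_run_excluding js: true unless ENV['FULL_SUITE']
    end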

    4. Would writing tests like the above alone achieve higher coverage? (Our next goal is >80%)

    The tests can exercise any user-accessible feature of the application, and any of your own code they exercise can be considered covered. It is likely you can get above 80% C0 (line) coverage -- Ruby coverage tools don't usually report deeper metrics such as C1 branch coverage -- provided you do not have a lot of utility scripts or other code that is unreachable through the UI.
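
    If you are not measuring coverage yet, SimpleCov is the usual C0 line-coverage tool on Ruby 1.9+; a minimal setup that also enforces your 80% goal might look like this (a sketch -- it must load before any application code):

    # Very top of spec/spec_helper.rb, before the app is loaded
    require 'simplecov'
    SimpleCov.start 'rails' do
      minimum_coverage 80  # fail the run if C0 line coverage drops below 80%
    end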


    To expand on question 2: I suspect the specific test framework's keywords will have minimal impact. However, using Capybara to acceptance-test the application through the web interface is going to be much slower than running lower-level unit tests of individual modules.

    Test speeds can vary by orders of magnitude. For tight unit tests around a fast module, I might expect to run 100 examples per second. On a web development project, I typically see 10-20 examples per second for unit tests, but perhaps 1 example per second for acceptance tests (roughly the ballpark you are in here). When running Capybara through a browser driver against a hosted copy of a site, a single example can take 10 seconds, so a suite of over 100 such tests tends to be reserved for critical-path checks, such as runs against release candidates.